Re: [Xen-devel] 'Sloppier' PCI access checks in main tree
Here is a snapshot of dmesg from a VM:

    Uniform Multi-Platform E-IDE driver Revision: 7.00beta4-2.4
    ide: Assuming 50MHz system bus speed for PIO modes; override with idebus=xx
    hda: C/H/S=0/0/0 from BIOS ignored
    hdb: C/H/S=0/0/0 from BIOS ignored

And the output of the working system (dmesg from dom0 is very similar and works normally):

    Uniform Multi-Platform E-IDE driver Revision: 7.00beta4-2.4
    ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
    ICH5: IDE controller at PCI slot 00:1f.1
    PCI: Enabling device 00:1f.1 (0005 -> 0007)
    PCI: Found IRQ 10 for device 00:1f.1
    PCI: Sharing IRQ 10 with 00:1d.2
    PCI: Sharing IRQ 10 with 00:1f.2
    ICH5: chipset revision 2
    ICH5: not 100% native mode: will probe irqs later
        ide0: BM-DMA at 0xffa0-0xffa7, BIOS settings: hda:DMA, hdb:pio
        ide1: BM-DMA at 0xffa8-0xffaf, BIOS settings: hdc:DMA, hdd:pio
    hda: Maxtor 6Y080L0, ATA DISK drive
    blk: queue c043ea80, I/O limit 4095Mb (mask 0xffffffff)
    hdc: SONY CD-RW CRX320E, ATAPI CD/DVD-ROM drive
    ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
    ide1 at 0x170-0x177,0x376 on irq 15
    hda: attached ide-disk driver.
    hda: host protected area => 1
    hda: 160086528 sectors (81964 MB) w/2048KiB Cache, CHS=158816/16/63, UDMA(100)
    ide-floppy driver 0.99.newide
    Partition check:
     /dev/ide/host0/bus0/target0/lun0: p1 p2 p3 p4 < p5 p6 p7 p8 >

So I think the correct controller is found in dom0, but in a VM (using the same xen0 kernel) performance is bad. (As for the network, transfer speeds are high, but an ssh login takes about 40 seconds.)

stijn

On Wed, 12 Jan 2005 08:39:21 +0000, Ian Pratt <Ian.Pratt@xxxxxxxxxxxx> wrote:
> > sorry, missed this email.
> > i'm using 2.4.28. i'll try today, since i have to recompile my kernel
> > anyway.
>
> OK, that explains a lot. We might back-port the vm86 and full
> /dev/mem support to 2.4, but it's not high on the todo
> list. Perhaps a volunteer from the audience will step up at this
> point?
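For comparing the two systems, the override that the dmesg line itself suggests can be set at boot, and disk DMA status and throughput checked from userspace. A diagnostic sketch only: the device name /dev/hda and the idebus=33 value are assumptions taken from the dom0 output above, hdparm must be installed, and idebus= only matters on the kernel that actually drives the hardware (dom0 here, since the VM's dmesg shows no IDE controller being found).

```shell
# Kernel command-line override hinted at by the dmesg output
# ("override with idebus=xx") -- append to the boot parameters:
#   idebus=33

# Check whether DMA is actually enabled on the disk
# (1 = on, 0 = off; falling back to PIO would explain poor I/O):
hdparm -d /dev/hda

# Measure cached and raw read throughput, to compare dom0 vs. the VM:
hdparm -tT /dev/hda
```

Running the same hdparm -tT in dom0 and in the guest would show whether the slowdown is in the block path at all, or elsewhere (e.g. the IRQ issues Ian mentions below).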
:-)

> > reaction times on my system are anything but good. what
> > performance loss should i expect if i use a file-backed vbd (and
> > which fs is best for those)? also, the network seems pretty slow.
>
> The performance loss using file-backed vbds is typically pretty
> small, but I do generally use LVM volumes. Our systems manage to
> saturate gigabit Ethernet links no problem, so I'm surprised
> you're seeing network slowdowns. There have been reports about
> IRQ problems causing slowdowns on some systems, and we're
> investigating.
>
> Ian

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel
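Ian's preference for LVM volumes over file-backed vbds would look roughly like this. A configuration sketch only, not taken from the thread: the volume group vg0, volume name domU1, the size, and the guest device name are made-up placeholders, and the disk line assumes the phy: syntax used in Xen 2.x domain config files.

```shell
# Create a logical volume to back the guest's disk
# (vg0 / domU1 / 8G are placeholder names and sizes):
lvcreate -L 8G -n domU1 vg0

# Then, in the domain's config file, point the vbd at the raw LV
# instead of a loopback-mounted file:
#   disk = [ 'phy:vg0/domU1,hda1,w' ]
```

The phy: backend avoids the extra filesystem layer a file-backed vbd goes through, which is why the overhead Ian describes as "typically pretty small" shrinks further with LVM.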