RE: [Xen-users] Release 0.9.7 of GPL PV Drivers for Windows
Hi James;

Here are some preliminary test results. I want to spend some more time
testing in more depth later this weekend; however, from my initial tests
everything is working really well. When I get some time later I will test
hotplugging / unplugging in various scenarios, the tap:qcow issue, and
some performance stats versus the original ones I ran. I just wanted to
give you some early feedback that the format bug is resolved and hot
removing works! Great work James - this is fantastic.

Install Drivers

Fresh Windows Server 2003 R2 SP2 with latest updates

  disk = [ 'file:/xenimages/xanc/xanc.img,hda,w' ]

* Booted and installed the 0.9.7 drivers
* Rebooted without the /gplpv switch, OK
* Rebooted with the /gplpv switch, OK (boot.ini sketch below)
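For anyone reproducing the /gplpv test: the switch is just an extra
option on the Windows boot entry in boot.ini. A minimal sketch, assuming
a default single-partition install - the ARC paths, entry names and
other switches are illustrative and will differ between systems:

  [boot loader]
  timeout=10
  default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS

  [operating systems]
  multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect
  multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003 (GPL PV)" /fastdetect /gplpv

Keeping an unmodified entry alongside the /gplpv one means the DomU can
still be booted on the emulated devices if a driver upgrade goes wrong.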
Add Second Disk (Cold)

* Shut down the DomU
* Create a fresh, non-sparse IMG file:
  dd if=/dev/zero of=xanc-1.img bs=1024k count=2048
* Update the disk line:
  disk = [ 'file:/xenimages/xanc/xanc.img,hda,w',
           'file:/xenimages/xanc/xanc-1.img,hdb,w' ]
* Boot the DomU
* Device Manager shows two "disks" and two "controllers"
* It takes a bit of time for Windows to find the hardware and prompt to
  install the device. If you wait for it and then use Disk Management,
  the new disk is available. If you don't wait (as I didn't the first
  time) it fails, and you have to go into Disk Management again after
  the driver is loaded.
* Disk Management
  * Initialize Disk, Disk 1, OK (Basic)
  * Right-click Disk 1, unallocated, New Partition, Next
  * Welcome to the New Partition Wizard, Next
  * Primary Partition, Next
  * Partition size in MB: 2039
  * Assign the following drive letter: D
  * Format this partition with the following settings: <defaults>,
    Next, Finish
  * The format rapidly moves to 100% and completes successfully!
    (Thank you!)
  * I was able to read and write to the new NTFS volume
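For the hotplug testing planned above, the same disk should be
attachable without shutting the DomU down. A sketch of the equivalent
xm commands - the domain name xanc is an assumption, use whatever
'xm list' reports:

  # attach the image as hdb, writable, while the DomU is running
  xm block-attach xanc file:/xenimages/xanc/xanc-1.img hdb w

  # confirm the new vbd shows up in the backend
  xm block-list xanc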
Hot Remove the second disk & hot add attempt... (more later)

* xm block-detach 60 hdb
* Windows happily reported that it was safe to remove the hardware and
  then kindly ejected the disk for me. I was able to watch the whole
  thing happen with Disk Management open.

This is where I got confused about what action I need to take as an
operator / best practice. I tried both ejecting the block device and
not ejecting it. If I didn't eject it, I couldn't add the device back
again; but if I did eject it, I couldn't remove a device. I know I am
not explaining this clearly, because it is not yet clear to me exactly
what steps I took in exactly what order. This needs more attention.
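One possible sequence for pinning down the detach / re-attach behaviour,
sketched as a starting point rather than a confirmed best practice
(domain ID 60 and the image path as above):

  # see what is currently attached
  xm block-list 60

  # detach hdb; Windows should eject the disk on its own
  xm block-detach 60 hdb

  # verify the vbd is actually gone before trying to re-add it
  xm block-list 60

  # re-attach the same image as hdb
  xm block-attach 60 file:/xenimages/xanc/xanc-1.img hdb w

If the vbd still appears in block-list after the detach, the backend may
not have released the device yet, which could explain the failed re-add.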
Observations

* In Device Manager, under the disk device properties, it is now
  possible to select the "Optimize for quick removal" or "Optimize for
  performance" policy. Is there an "Enable advanced performance" option
  available?
* In Disk Management it seems to take longer to connect to the Logical
  Disk Manager service - this is not a complaint, just an observation.

Best Regards

Geoff

-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
[mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of James Harper
Sent: 07 June 2008 1:24 PM
To: xen-users@xxxxxxxxxxxxxxxxxxx; xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] Release 0.9.7 of GPL PV Drivers for Windows

I've just uploaded 0.9.7 to http://www.meadowcourt.org/downloads

As a reminder, the wiki page is
http://wiki.xensource.com/xenwiki/XenWindowsGplPv

This release fixes a bug which would cause a BSoD if you tried using
'xm block-detach', and also a bug which would cause formatting of a
xenvbd device to fail. The latter bug could possibly have caused
corruption as well, although that is unlikely, so an upgrade is
recommended.

Also, using a physical CDROM should now work, but I haven't been able
to do much testing with it as I've been away from my test machine.
(Anyone know if Linux can emulate a block device with a sector size
other than 512 bytes?)

Hopefully an upgrade will be as simple as 'download and run', but
during testing I broke my test machine so I haven't properly tested an
upgrade from 0.9.[56] to 0.9.7.

James

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users