RE: [Xen-users] SMP support in Windows domU
> -----Original Message-----
> From: Jordi Segues [mailto:jordisd.mailing@xxxxxxxxx]
> Sent: 15 May 2007 10:38
> To: Petersson, Mats
> Cc: bdubas@xxxxxxxxxxx; xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-users] SMP support in Windows domU
>
> Hello again,
>
> I spoke too fast, sorry :)
> After some tests, I finally have a Windows XP HVM guest
> running with 2 vcpus.
>
> I set it in the config file, and then installed Windows with
> that config file.
> After the final reboot, Windows says it has upgraded some features
> and needs a reboot to apply them.
> After the reboot, it detects new "hardware", with the little icon
> at the right with the "acpi" label.
> Then, pressing CTRL-ALT-DEL, I see 2 CPUs in Task
> Manager/Performance.
> That's all :)
>
> I've tested with a 1-vcpu install too, upgrading it to 2 vcpus
> afterwards in the config file, and that seems not to work, for
> the moment.

This should work, but you may have to upgrade the HAL configuration in
Device Manager, as the single-processor HAL doesn't support even
detecting the second processor.

--
Mats

> Sorry again.
>
> Cordially,
>
> Jordi.S
>
> On 5/15/07, Jordi Segues <jordisd.mailing@xxxxxxxxx> wrote:
> > Hello,
> >
> > I've followed your conversation...
> > I'm trying to run a Windows HVM guest with several vcpus too.
> >
> > I have Xen 3.0.4 installed and recompiled on a Debian Etch, and
> > everything works great, except for SMP support in HVM Windows
> > guests.
> >
> > In my config file I have:
> > ....
> > vcpus=2
> > acpi=1
> > apic=1
> > ...
> >
> > But my HVM guest still doesn't detect 2 vcpus.
> > I tried it after the Windows install, and then I reinstalled
> > Windows with this config file, but nothing changed (though Windows
> > detects it at install time).
> >
> > I've read a post on this mailing list (I don't have the link) where
> > someone says that the hvmloader firmware must be recompiled
> > specifically.
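To summarize the working setup described above, here is a minimal sketch of the SMP-relevant lines of a Xen 3.0.x HVM guest config (guest configs are Python syntax); the guest name and volume path are illustrative, not Jordi's actual values:

```python
# Sketch of an SMP-capable HVM guest config for Xen 3.0.x.
# 'winxp-smp' and the /dev/vg0/winxp volume are made-up examples.
kernel = '/usr/lib/xen/boot/hvmloader'       # HVM firmware loader
builder = 'hvm'
name = 'winxp-smp'
memory = 512
vcpus = 2      # number of virtual CPUs presented to the guest
acpi = 1       # needed so Windows can use a multiprocessor ACPI HAL
apic = 1       # needed for SMP interrupt delivery
disk = [ 'phy:/dev/vg0/winxp,ioemu:hda,w' ]  # illustrative volume
vif = [ 'type=ioemu,bridge=xenbr0' ]
boot = 'c'
```

Installing Windows with vcpus, acpi and apic already set, as Jordi did, lets the installer pick the multiprocessor HAL from the start; raising the vcpu count after a uniprocessor install is what triggers the HAL-upgrade step Mats describes.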
> > And I've understood too that the recompilation is
> > static, so the vcpu count of all HVM guests would have to be
> > identical.
> >
> > I haven't tried to do this, but I thought that Xen 3.0.4
> > offered this feature.
> > So like Bart, I would like to know if anyone has done it! :)
> > Does anyone have an HVM Windows guest running with 2 vcpus?
> > And if yes, how?
> >
> > Well, maybe the 3.1 release (3.0.5) will fix this, and maybe not,
> > because everyone says "it should", but I can't find anyone really
> > doing it. :)
> >
> > Thank you very much!
> > And I would like to thank Mats too, who's always here and answers
> > a lot of questions very clearly.
> >
> > Cordially,
> >
> > Jordi.S
> >
> > On 5/14/07, Petersson, Mats <Mats.Petersson@xxxxxxx> wrote:
> > >
> > > > -----Original Message-----
> > > > From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> > > > [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of
> > > > bdubas@xxxxxxxxxxx
> > > > Sent: 14 May 2007 10:48
> > > > To: xen-users@xxxxxxxxxxxxxxxxxxx
> > > > Subject: RE: [Xen-users] SMP support in Windows domU
> > > >
> > > > I want to install drivers for Windows as you describe...
> > > >
> > > > > The right solution is to install a para-virtual driver, but
> > > > > for Windows you can't just build one yourself (unless you are
> > > > > experienced in Windows driver design), so you need one that
> > > > > matches your Xen version (the easiest way to achieve that is
> > > > > to install a package with Xen and the PV driver, such as
> > > > > XenExpress). The para-virtual driver will install as a device
> > > > > driver in Windows, but instead of talking to a virtualized
> > > > > "true" network device, it just packs up the data to go on the
> > > > > network and sends it directly to the Dom0 network backend
> > > > > driver.
> > >
> > > I haven't done this myself. I think you need to remove the
> > > "ioemu" from the network device.
> > > And you may have to have two disks (one to boot from that uses
> > > the "ioemu" model, and one that you use for applications/data
> > > that uses the PV driver).
> > >
> > > --
> > > Mats
> > >
> > > > --
> > > > ASEC S.A.
> > > > Bartłomiej Dubas
> > > >
> > > > -----Original Message-----
> > > > From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> > > > [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of
> > > > Petersson, Mats
> > > > Sent: Monday, May 14, 2007 11:36 AM
> > > > To: Bartlomiej Dubas; xen-users@xxxxxxxxxxxxxxxxxxx
> > > > Subject: RE: [Xen-users] SMP support in Windows domU
> > > >
> > > > > -----Original Message-----
> > > > > From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> > > > > [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of
> > > > > bdubas@xxxxxxxxxxx
> > > > > Sent: 14 May 2007 10:34
> > > > > To: xen-users@xxxxxxxxxxxxxxxxxxx
> > > > > Subject: RE: [Xen-users] SMP support in Windows domU
> > > > >
> > > > > Thanks Mats,
> > > > >
> > > > > What kind of entry should I have in the guest config? Should
> > > > > I leave it the same as it is, or pass them without the qemu
> > > > > option?
> > > >
> > > > What is it you want to achieve? Your config looks pretty much
> > > > like the one I'm using.
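A sketch of what that suggestion (a second, PV-capable disk plus a vif without the ioemu type) might look like in the guest config; the device names and volume paths here are assumptions, and PV driver packaging varies by vendor:

```python
# Hypothetical fragment: the boot disk stays on the emulated (ioemu)
# IDE model so Windows can still boot, while a second disk for
# applications/data is exposed without ioemu so the para-virtual
# block driver inside the guest can attach to it instead.
disk = [ 'phy:/dev/vg0/win-boot,ioemu:hda,w',   # emulated boot disk
         'phy:/dev/vg0/win-data,hdb,w' ]        # PV-capable data disk

# Dropping 'type=ioemu' from the vif lets the PV network driver bind
# in place of the emulated NIC:
vif = [ 'bridge=xenbr0' ]
```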
> > > > --
> > > > Mats
> > > >
> > > > > My current config looks like this:
> > > > > disk = [ 'phy:/dev/raid-0/serpiko,ioemu:hda,w' ]
> > > > > memory = 512
> > > > > vcpus = 2
> > > > > cpus = "4-5"
> > > > > builder = 'hvm'
> > > > > device_model = '/usr/lib/xen/bin/qemu-dm'
> > > > > kernel = '/usr/lib/xen/boot/hvmloader'
> > > > > name = 'win2003'
> > > > > vif = [ 'type=ioemu,bridge=xenbr0' ]
> > > > > stdvga = 0
> > > > > sdl = 0
> > > > > vnc = 1
> > > > > vncviewer = 0
> > > > > ne2000 = 0
> > > > > localtime = 0
> > > > > on_poweroff = 'destroy'
> > > > > on_reboot = 'restart'
> > > > > on_crash = 'restart'
> > > > > boot = 'c'
> > > > > acpi = 1
> > > > > apic = 1
> > > > >
> > > > > --
> > > > > ASEC S.A.
> > > > > Bartłomiej Dubas

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users