
Re: [Xen-devel] [BUG 1747] Guest couldn't find bootable device with memory more than 3600M



On Thu, 13 Jun 2013, George Dunlap wrote:
> On 13/06/13 15:50, Stefano Stabellini wrote:
> > Keep in mind that if we start the pci hole at 0xe0000000, the number of
> > cases for which any workarounds are needed is going to be dramatically
> > decreased to the point that I don't think we need a workaround anymore.
> 
> You don't think anyone is going to want to pass through a card with 1GiB+ of
> RAM?

Yes, but as Paolo pointed out, those devices are going to be 64-bit
capable so they'll relocate above 4G just fine.
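
For reference, 64-bit capability is advertised in a memory BAR's type
bits in config space. A minimal check (following the PCI spec's BAR
encoding; the macro names are made up for illustration, this is not
hvmloader's actual code) would look like:

    #include <stdint.h>

    /* Standard PCI BAR encoding: bit 0 selects I/O vs memory, and for
     * memory BARs bits 2:1 give the type (00b = 32-bit, 10b = 64-bit). */
    #define PCI_BAR_IO           0x1
    #define PCI_BAR_MEM_TYPE     0x6
    #define PCI_BAR_MEM_TYPE_64  0x4

    /* Returns 1 if this memory BAR may be relocated above 4G. */
    static int bar_is_64bit(uint32_t bar)
    {
        return !(bar & PCI_BAR_IO) &&
               (bar & PCI_BAR_MEM_TYPE) == PCI_BAR_MEM_TYPE_64;
    }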


> > The algorithm is going to work like this, in detail:
> > 
> > - the pci hole size is set to 0xfc000000-0xe0000000 = 448MB
> > - we calculate the total MMIO size; if it's bigger than the pci hole,
> > we raise a 64-bit relocation flag
> > - if 64-bit relocation is enabled, we relocate above 4G the first
> > device that is 64-bit capable and has an MMIO size greater than or
> > equal to 512MB
> > - if the pci hole is now big enough for the remaining devices we stop
> > the above-4G relocation, otherwise we keep relocating devices that
> > are 64-bit capable and have an MMIO size greater than or equal to
> > 512MB
> > - if one or more devices still don't fit, we print an error and
> > continue (it's not a critical failure: the device just won't be used)
> > 
> > We could have a xenstore flag somewhere that enables the old
> > behaviour, so that people can revert to qemu-xen-traditional and make
> > the pci hole below 4G even bigger than 448MB, but I think that
> > keeping the old behaviour around is going to make the code more
> > difficult to maintain.
> 
> We'll only need to do that for one release, until we have a chance to fix it
> properly.

There is nothing more lasting than a "temporary" workaround :-)
Also it's not very clear what the proper solution would look like in
this case.
However keeping the old behaviour is certainly possible. It would just
be a bit harder to also keep the old (smaller) default pci hole around.
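
To make the algorithm quoted above concrete, here is a rough sketch in
C (the data structure, names, and logging are made up for illustration;
this is not the actual hvmloader code):

    #include <stdint.h>
    #include <stdio.h>

    #define PCI_HOLE_START   0xe0000000UL
    #define PCI_HOLE_END     0xfc000000UL
    #define PCI_HOLE_SIZE    (PCI_HOLE_END - PCI_HOLE_START) /* 448MB */
    #define RELOC_THRESHOLD  (512UL << 20)                   /* 512MB */

    struct dev {
        uint64_t mmio_size;
        int is_64bit;   /* device has 64-bit capable BARs */
        int above_4g;   /* set when relocated above 4G */
    };

    static void place_devices(struct dev *devs, unsigned int n)
    {
        uint64_t total = 0, below = 0;
        unsigned int i;

        for (i = 0; i < n; i++)
            total += devs[i].mmio_size;

        /* 64-bit relocation kicks in only if the total MMIO size
         * doesn't fit in the pci hole. Relocate 64-bit capable
         * devices with MMIO >= 512MB until the remainder fits. */
        if (total > PCI_HOLE_SIZE) {
            for (i = 0; i < n && total > PCI_HOLE_SIZE; i++) {
                if (devs[i].is_64bit &&
                    devs[i].mmio_size >= RELOC_THRESHOLD) {
                    devs[i].above_4g = 1;
                    total -= devs[i].mmio_size;
                }
            }
        }

        /* Allocate the remaining devices in the hole below 4G. */
        for (i = 0; i < n; i++) {
            if (devs[i].above_4g)
                continue;
            if (below + devs[i].mmio_size > PCI_HOLE_SIZE) {
                /* Not critical: the device is simply left unusable. */
                printf("error: device %u does not fit in the pci hole\n",
                       i);
                continue;
            }
            below += devs[i].mmio_size;
        }
    }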



> > Also it's difficult for people to realize that they need the workaround
> > because hvmloader logs aren't enabled by default and only go to the Xen
> > serial console.
> 
> Well if key people know about it (Pasi, David Techer, &c), and we put it on
> the wikis related to VGA pass-through, I think information will get around.

It's not that I don't value documentation, but given that the average
user won't see any logs and the error is completely non-informative,
many people are going to be sent on a wild goose chase on Google.


> > The value of this workaround is pretty low in my view.
> > Finally it's worth noting that Windows XP is going EOL in less than
> > a year.
> 
> That's one year during which a configuration with a currently-supported OS
> that worked for Xen 4.2 won't work for 4.3.  Apart from that, one of the
> reasons for doing virtualization in the first place is to be able to run
> older, unsupported OSes on current hardware; so "XP isn't important"
> doesn't really cut it for me. :-)

Fair enough.


> > > I thought that what we had proposed was to have an option in
> > > xenstore, that libxl would set, which would instruct hvmloader
> > > whether to expand the MMIO hole and whether to relocate devices
> > > above 4G?
> > I think it's right to have this discussion in public on the mailing
> > list, rather than behind closed doors.
> > Also I don't agree on the need for a workaround, as explained above.
> 
> I see -- you thought it was a bad idea and so were letting someone else bring
> it up -- or maybe hoping no one would remember to bring it up. :-)
 
Nothing that Machiavellian: I didn't consider all the implications at
the time, and I thought I had managed to come up with a better plan.


> (Obviously the decision needs to be made in public, but sometimes having
> technical solutions hashed out in a face-to-face meeting is more efficient.)

But it's also easier to overlook something that way; at least it is
for me.
