
Re: [Xen-devel] [PATCH v4 31/31] libxl: allow the creation of HVM domains without a device model.



> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxx [mailto:xen-devel-
> bounces@xxxxxxxxxxxxx] On Behalf Of Andrew Cooper
> Sent: 07 August 2015 19:42
> To: Wei Liu; Roger Pau Monne
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Ian Jackson; Ian Campbell; Stefano
> Stabellini
> Subject: Re: [Xen-devel] [PATCH v4 31/31] libxl: allow the creation of HVM
> domains without a device model.
> 
> On 07/08/15 17:24, Wei Liu wrote:
> > On Fri, Aug 07, 2015 at 05:51:02PM +0200, Roger Pau Monné wrote:
> > [...]
> >>>>  It is recommended to accept the default value for new guests.  If
> >>>> diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
> >>>> index 1599de4..d67feb0 100644
> >>>> --- a/tools/libxc/xc_dom_x86.c
> >>>> +++ b/tools/libxc/xc_dom_x86.c
> >>>> @@ -1269,6 +1269,13 @@ static int meminit_hvm(struct xc_dom_image *dom)
> >>>>      if ( nr_pages > target_pages )
> >>>>          memflags |= XENMEMF_populate_on_demand;
> >>>>
> >>>> +    /* Make sure there's an MMIO hole for the special pages. */
> >>>> +    if ( dom->mmio_size == 0 )
> >>>> +    {
> >>>> +        dom->mmio_size = NR_SPECIAL_PAGES << PAGE_SHIFT;
> >>>> +        dom->mmio_start = special_pfn(0);
> >>>> +    }
> >>>> +
> >>> Better to just assert(dom->mmio_size != 0);
> >>>
> >>> It's really libxl's responsibility to generate memory layout for guest.
> >>> Libxc doesn't have all information to make the decision.
> >> As said in a previous email, libxl doesn't know the size or position of
> >> the special pages created by libxc code, so right now it's impossible
> >> for libxl to create a correct MMIO hole for an HVMlite guest.
> >>
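[For context: the special-page region Roger refers to sits just below the
4GiB boundary. A minimal sketch of that layout, and of the default hole the
patch installs -- the 0xff000 boundary constant is assumed from the libxc HVM
builder (xc_hvm_build_x86.c) of that era, not quoted from this thread:]

  /* Sketch only: the 0xff000 constant is an assumption based on the
   * libxc HVM builder of the time; NR_SPECIAL_PAGES and special_pfn()
   * are the names used in the quoted patch. */
  #include <stdio.h>

  #define PAGE_SHIFT       12
  #define NR_SPECIAL_PAGES 8
  /* Special pages occupy the frames just below pfn 0xff000. */
  #define special_pfn(x)   (0xff000u - NR_SPECIAL_PAGES + (x))

  int main(void)
  {
      /* The patch's fallback: a hole exactly covering the special pages. */
      unsigned long mmio_start = special_pfn(0);                   /* pfn */
      unsigned long mmio_size  = (unsigned long)NR_SPECIAL_PAGES << PAGE_SHIFT;

      printf("default hole: starts at pfn %#lx, %lu bytes (%d pages)\n",
             mmio_start, mmio_size, NR_SPECIAL_PAGES);
      return 0;
  }

With these values the fallback hole is 8 pages (32KiB) ending at pfn 0xff000,
which is exactly the region libxl cannot currently see.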
> > Then your change here doesn't solve the real problem. You can't
> > guarantee that, when dom->mmio_size != 0: 1) the hole is large enough
> > to accommodate all the special pages, and 2) the special pages don't
> > clash with real MMIO pages.
> >
> > I still think there should be only one entity that controls what guest
> > memory layout looks like. And that entity should be the one which has
> > all the information available. In this case, libxl should be the one who
> > decides.
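
[An illustration of the two conditions Wei lists -- a minimal sketch assuming
byte-granularity start/size fields as in struct xc_dom_image; no such check
exists in the patch itself:]

  /* Sketch of the checks Wei says cannot be guaranteed; illustrative
   * only.  Field names follow struct xc_dom_image from the patch. */
  #include <stdint.h>

  static int mmio_hole_covers_special_pages(uint64_t mmio_start,
                                            uint64_t mmio_size,
                                            uint64_t special_start,
                                            uint64_t special_size)
  {
      /* 1) The hole must be large enough to contain all special pages. */
      if ( special_start < mmio_start ||
           special_start + special_size > mmio_start + mmio_size )
          return 0;

      /* 2) Whether the hole clashes with *real* MMIO regions cannot be
       *    checked here: libxc does not know the guest's PCI/RMRR
       *    layout -- which is exactly Wei's argument for making the
       *    layout decision in libxl. */
      return 1;
  }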

Good luck convincing QEMU of that! Whatever is decided for domains without a 
QEMU still needs to be applicable when QEMU is there.

  Paul

> 
> Layout and runtime management of guests have been in a very poor
> state since forever.
> 
> This results from a combination of things not having been written down
> to start with, new features bolted on the side, and bits moving around.
> Even at the London Hackathon in 2013, a group of us couldn't work
> out whether it was possible for a guest with certain combinations of
> features to perform correct calculations to avoid exhausting its PoD
> pool and suffering a domain_crash().
> 
> This seems like a good opportunity to take a step back and reconsider
> things from scratch with the benefit of hindsight, in the hopes of
> finding a way forward which gets us into a better position.
> 
> Funnily enough, there happens to be a large collection of people
> gathering very shortly in Seattle, and a rumour of some whiteboards.
> 
> We should consider:
> 
> * What there is (potentially) in a guest's physical address space
> ** MMIO holes (including high), VGA hole, RMRR holes, magic emulator
> pages, magic Xen pages, ACPI reported regions, etc.
> ** Ancillary bits such as the PoD pool, Shadow pool, etc.
> * What are the architectural and ABI restrictions which exist
> * What limits exist, which are static, which are dynamic
> * What needs to be known by each entity in the system
> ** including what shouldn't be known by certain entities.
> 
> This will hopefully present a (more) clear picture of which entity
> should be making things like layout decisions, and what extra
> information they need to know.
> 
> It will also hopefully show how to go about fixing the existing runtime
> memory management issues.
> 
> ~Andrew
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

