
Re: [Xen-devel] [PATCH v2 3/3] tools/libxc: use superpages during restore of HVM guest



On Tue, Aug 22, 2017 at 05:53:25PM +0200, Olaf Hering wrote:
> On Tue, Aug 22, Wei Liu wrote:
> 
> > On Thu, Aug 17, 2017 at 07:01:33PM +0200, Olaf Hering wrote:
> > > +    /* No superpage in 1st 2MB due to VGA hole */
> > > +    xc_sr_set_bit(0, &ctx->x86_hvm.restore.attempted_1g);
> > > +    xc_sr_set_bit(0, &ctx->x86_hvm.restore.attempted_2m);
> > I don't quite get this. What about other holes such as MMIO?
> 
> This just copies what meminit_hvm does. Is there a way to know where the
> MMIO hole is? Maybe I just missed the MMIO part. In the worst case I
> think a super page is allocated, which is later split into single pages.

That's a bit different from migration. IIRC hvmloader is responsible for
shuffling pages around. I don't think that is applicable to migration.
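
(For reference, a rough sketch of the bitmap-guarded fallback being
discussed, not the patch code itself: xc_sr_test_and_set_bit() and
populate_range() are assumed helper names, the attempted_1g/attempted_2m
bitmaps are from the quoted hunk, and the SUPERPAGE_* constants are the
ones in libxc's xc_private.h.  The real series also tracks which pfns
are already populated.)

    /* Sketch: try a 1G allocation first, then 2M, then a single 4k page.
     * Each candidate superpage is attempted at most once; bit 0 of both
     * bitmaps is pre-set so the VGA hole is never covered by a superpage. */
    static int populate_one_pfn(struct xc_sr_context *ctx, xen_pfn_t pfn)
    {
        unsigned long idx_1g = pfn >> SUPERPAGE_1GB_SHIFT;
        unsigned long idx_2m = pfn >> SUPERPAGE_2MB_SHIFT;

        if ( !xc_sr_test_and_set_bit(idx_1g, &ctx->x86_hvm.restore.attempted_1g) &&
             !populate_range(ctx, idx_1g << SUPERPAGE_1GB_SHIFT,
                             SUPERPAGE_1GB_NR_PFNS, SUPERPAGE_1GB_SHIFT) )
            return 0;

        if ( !xc_sr_test_and_set_bit(idx_2m, &ctx->x86_hvm.restore.attempted_2m) &&
             !populate_range(ctx, idx_2m << SUPERPAGE_2MB_SHIFT,
                             SUPERPAGE_2MB_NR_PFNS, SUPERPAGE_2MB_SHIFT) )
            return 0;

        /* Fall back to an ordinary 4k allocation. */
        return populate_range(ctx, pfn, 1, 0);
    }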
> 
> > One potential issue I can see with your algorithm is, if the stream of
> > page info contains pages from different super pages, the risk of going
> > over memory limit is high (hence failing the migration).
> > 
> > Is my concern unfounded?
> 
> In my testing I have seen the case of over-allocation. That's why I
> implemented the freeing of unpopulated parts. It would be nice to know
> how many pages are actually coming; I think this info is not available.
> 

Not sure I follow. What do you mean by "how many pages are actually
coming"?

> On the other hand, the first iteration sends the pfns linearly. This is
> when the allocation actually happens. So the over-allocation will only
> trigger near the end, if a 1G range is allocated but only a few pages
> are stored into this range.

This could be making too many assumptions about the data stream.
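
(For what it's worth, the "freeing of unpopulated parts" mentioned above
could look roughly like the sketch below.  Only
xc_domain_decrease_reservation_exact() is an existing libxc call;
xc_sr_test_bit() and the populated_pfns bitmap are assumed names, and
this is not the code from the series.)

    /* Sketch: after the stream ends, hand back the 4k pages inside an
     * allocated superpage range that were never written by a PAGE_DATA
     * record, to stay within the domain's memory limit. */
    static int free_unpopulated_range(struct xc_sr_context *ctx,
                                      xen_pfn_t base, unsigned long count)
    {
        xc_interface *xch = ctx->xch;
        xen_pfn_t pfn;
        int rc;

        for ( pfn = base; pfn < base + count; pfn++ )
        {
            if ( xc_sr_test_bit(pfn, &ctx->x86_hvm.restore.populated_pfns) )
                continue;

            /* Release one unpopulated 4k page back to Xen.  Batching
             * adjacent pfns would be more efficient; kept simple here. */
            rc = xc_domain_decrease_reservation_exact(xch, ctx->domid, 1, 0, &pfn);
            if ( rc )
                return rc;
        }

        return 0;
    }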


 

