Re: [Xen-devel] [PATCH v9 3/3] tools/libxc: use superpages during restore of HVM guest
On Fri, Sep 01, 2017 at 06:08:43PM +0200, Olaf Hering wrote:
[...]
> diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
> index 734320947a..93141a6e25 100644
> --- a/tools/libxc/xc_sr_common.h
> +++ b/tools/libxc/xc_sr_common.h
> @@ -139,6 +139,16 @@ struct xc_sr_restore_ops
>       */
>      int (*setup)(struct xc_sr_context *ctx);
> 
> +    /**
> +     * Populate PFNs
> +     *
> +     * Given a set of pfns, obtain memory from Xen to fill the physmap for the
> +     * unpopulated subset.
> +     */
> +    int (*populate_pfns)(struct xc_sr_context *ctx, unsigned count,
> +                         const xen_pfn_t *original_pfns, const uint32_t *types);
> +

One blank line is good enough.

> +
>      /**
>       * Process an individual record from the stream. The caller shall take
>       * care of processing common records (e.g. END, PAGE_DATA).
> @@ -224,6 +234,8 @@ struct xc_sr_context
> 
>      int send_back_fd;
>      unsigned long p2m_size;
> +    unsigned long max_pages;
> +    unsigned long tot_pages;
>      xc_hypercall_buffer_t dirty_bitmap_hbuf;
> 
>      /* From Image Header. */
> @@ -336,6 +348,12 @@ struct xc_sr_context
>              /* HVM context blob. */
>              void *context;
>              size_t contextsz;
> +
> +            /* Bitmap of currently allocated PFNs during restore. */
> +            struct xc_sr_bitmap attempted_1g;
> +            struct xc_sr_bitmap attempted_2m;
> +            struct xc_sr_bitmap allocated_pfns;
> +            xen_pfn_t idx1G_prev, idx2M_prev;
>          } restore;
>      };
>  } x86_hvm;
> @@ -459,14 +477,6 @@ static inline int write_record(struct xc_sr_context *ctx,
>   */
>  int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec);
> 
> -/*
> - * This would ideally be private in restore.c, but is needed by
> - * x86_pv_localise_page() if we receive pagetables frames ahead of the
> - * contents of the frames they point at.
> - */
> -int populate_pfns(struct xc_sr_context *ctx, unsigned count,
> -                  const xen_pfn_t *original_pfns, const uint32_t *types);
> -
>  #endif
>  /*
>   * Local variables:
[...]
> 
> +struct x86_hvm_sp {

Forgot to ask: what does sp stand for?
> +static bool x86_hvm_punch_hole(struct xc_sr_context *ctx, xen_pfn_t max_pfn)
> +{
> +    xc_interface *xch = ctx->xch;
> +    struct xc_sr_bitmap *bm;
> +    xen_pfn_t _pfn, pfn, min_pfn;
> +    uint32_t domid, freed = 0, order;

unsigned int / long for freed and order.

> +    int rc = -1;
> +
> +    /*
> +     * Scan the entire superpage because several batches will fit into
> +     * a superpage, and it is unknown which pfn triggered the allocation.
> +     */
> +    order = SUPERPAGE_1GB_SHIFT;
> +    pfn = min_pfn = (max_pfn >> order) << order;
> +

min_pfn -> start_pfn?

> +    while ( pfn <= max_pfn )
> +    {

bm can be defined here.

> +        bm = &ctx->x86_hvm.restore.allocated_pfns;
> +        if ( !xc_sr_bitmap_resize(bm, pfn) )
> +        {
> +            PERROR("Failed to realloc allocated_pfns %" PRI_xen_pfn, pfn);
> +            return false;
> +        }
> +        if ( !pfn_is_populated(ctx, pfn) &&
> +            xc_sr_test_and_clear_bit(pfn, bm) ) {

domid and _pfn can be defined here.

> +            domid = ctx->domid;
> +            _pfn = pfn;
> +            rc = xc_domain_decrease_reservation_exact(xch, domid, 1, 0, &_pfn);

Please batch the requests, otherwise this is going to be very slow. It
should be feasible to construct an array of pfns here and issue a single
decrease_reservation outside of this loop.

> +            if ( rc )
> +            {
> +                PERROR("Failed to release pfn %" PRI_xen_pfn, pfn);
> +                return false;
> +            }
> +            ctx->restore.tot_pages--;
> +            freed++;
> +        }
> +        pfn++;
> +    }
> +    if ( freed )
> +        DPRINTF("freed %u between %" PRI_xen_pfn " %" PRI_xen_pfn "\n",
> +                freed, min_pfn, max_pfn);
> +    return true;
> +}
> +
> +/*
> + * Try to allocate superpages.
> + * This works without memory map only if the pfns arrive in incremental order.
> + */

I have said several times, one way or another, that I don't want to make
assumptions about the stream of pfns. So I'm afraid I can't ack a patch
like this. If Ian or Andrew thinks this is OK, I won't stand in the way.

> +static int x86_hvm_populate_pfns(struct xc_sr_context *ctx, unsigned count,
> +                                 const xen_pfn_t *original_pfns,

original_pfns -> pfns?
The list is not copied or altered in any way, AFAICT.

(I skipped the rest.)