Re: [PATCH v3 4/8] x86/mem-sharing: copy GADDR based shared guest areas
On Wed, Sep 27, 2023 at 02:06:47PM +0200, Jan Beulich wrote:
> On 27.09.2023 13:08, Roger Pau Monné wrote:
> > On Wed, May 03, 2023 at 05:56:46PM +0200, Jan Beulich wrote:
> >> In preparation of the introduction of new vCPU operations allowing to
> >> register the respective areas (one of the two is x86-specific) by
> >> guest-physical address, add the necessary fork handling (with the
> >> backing function yet to be filled in).
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >
> > Given the very limited and specific usage of the current Xen forking
> > code, do we really need to bother about copying such areas?
> >
> > IOW: I doubt that not updating the runstate/time areas will make any
> > difference to the forking code ATM.
>
> I expect Tamas's reply has sufficiently addressed this question.

Seeing the TODO in copy_vcpu_settings() makes me wonder how well we copy
information for PV interfaces for forks. Timers are not migrated, and
neither are runstate areas, for example. Which is all fine, but I expect
the VMs this is currently tested against don't use (a lot of) PV
interfaces, or else I don't know how they could survive.

> >> --- a/xen/arch/x86/mm/mem_sharing.c
> >> +++ b/xen/arch/x86/mm/mem_sharing.c
> >> @@ -1641,6 +1641,68 @@ static void copy_vcpu_nonreg_state(struc
> >>      hvm_set_nonreg_state(cd_vcpu, &nrs);
> >>  }
> >>
> >> +static int copy_guest_area(struct guest_area *cd_area,
> >> +                           const struct guest_area *d_area,
> >> +                           struct vcpu *cd_vcpu,
> >> +                           const struct domain *d)
> >> +{
> >> +    mfn_t d_mfn, cd_mfn;
> >> +
> >> +    if ( !d_area->pg )
> >> +        return 0;
> >> +
> >> +    d_mfn = page_to_mfn(d_area->pg);
> >> +
> >> +    /* Allocate & map a page for the area if it hasn't been already. */
> >> +    if ( !cd_area->pg )
> >> +    {
> >> +        gfn_t gfn = mfn_to_gfn(d, d_mfn);
> >> +        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
> >> +        p2m_type_t p2mt;
> >> +        p2m_access_t p2ma;
> >> +        unsigned int offset;
> >> +        int ret;
> >> +
> >> +        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
> >> +        if ( mfn_eq(cd_mfn, INVALID_MFN) )
> >> +        {
> >> +            struct page_info *pg = alloc_domheap_page(cd_vcpu->domain, 0);
> >> +
> >> +            if ( !pg )
> >> +                return -ENOMEM;
> >> +
> >> +            cd_mfn = page_to_mfn(pg);
> >> +            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
> >> +
> >> +            ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K, p2m_ram_rw,
> >> +                                 p2m->default_access, -1);
> >> +            if ( ret )
> >> +                return ret;
> >> +        }
> >> +        else if ( p2mt != p2m_ram_rw )
> >> +            return -EBUSY;
> >
> > Shouldn't the populating of the underlying gfn in the fork case
> > be done by map_guest_area() itself?
>
> I've used the existing logic for the info area to base my code on. As
> noted in the patch switching the info area logic to use the generalized
> infrastructure, I've taken the liberty to address an issue in the
> original logic. But it was never a goal to re-write things from scratch.
> If, as Tamas appears to agree, there is a better way of layering things
> here, then please as a follow-on patch.

Hm, I'm unsure the code that allocates the page and adds it to the p2m
for the vcpu_info area is required. map_vcpu_info() should already be
able to handle this case and fork the page from the previous VM.

> > What if a forked guest attempts to register a new runstate/time info
> > against an address not yet populated?
>
> What if the same happened for the info area? Again, I'm not trying to
> invent anything new here. But hopefully Tamas's reply has helped here
> as well.

Yes, I don't think we should need to allocate and map a page for the
vcpu_info area before calling map_vcpu_info().
> >> --- a/xen/common/domain.c
> >> +++ b/xen/common/domain.c
> >> @@ -1572,6 +1572,13 @@ void unmap_vcpu_info(struct vcpu *v)
> >>          put_page_and_type(mfn_to_page(mfn));
> >>  }
> >>
> >> +int map_guest_area(struct vcpu *v, paddr_t gaddr, unsigned int size,
> >> +                   struct guest_area *area,
> >> +                   void (*populate)(void *dst, struct vcpu *v))
> >
> > Oh, the prototype for this is added in patch 1; I almost missed it.
> >
> > Why not also add this dummy implementation in patch 1 then?
>
> The prototype isn't dead code, but the function would be until it gains
> users here. If anything, I'd move the prototype introduction here; it
> merely seemed desirable to me to touch xen/include/xen/domain.h no
> more frequently than necessary.
>
> Also, to be quite frank, I find this kind of nitpicking odd after the
> series has been pending for almost a year.

TBH, when I saw this I was about to comment that the patch won't build
because the prototype was missing, but then I remembered about patch 1.
I don't think it's obvious, but anyway.

Thanks, Roger.