
Re: [Xen-devel] [PATCH v12 1/3] xen/mem_sharing: VM forking



On Thu, Mar 26, 2020 at 3:10 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
>
> On Thu, Mar 26, 2020 at 08:07:09AM +0100, Jan Beulich wrote:
> > On 25.03.2020 16:47, Roger Pau Monné wrote:
> > > On Mon, Mar 23, 2020 at 10:04:35AM -0700, Tamas K Lengyel wrote:
> > >> +static int copy_vcpu_settings(struct domain *cd, struct domain *d)
> > >> +{
> > >> +    unsigned int i;
> > >> +    struct p2m_domain *p2m = p2m_get_hostp2m(cd);
> > >> +    int ret = -EINVAL;
> > >> +
> > >> +    for ( i = 0; i < cd->max_vcpus; i++ )
> > >> +    {
> > >> +        const struct vcpu *d_vcpu = d->vcpu[i];
> > >> +        struct vcpu *cd_vcpu = cd->vcpu[i];
> > >> +        struct vcpu_runstate_info runstate;
> > >> +        mfn_t vcpu_info_mfn;
> > >> +
> > >> +        if ( !d_vcpu || !cd_vcpu )
> > >> +            continue;
> > >> +
> > >> +        /*
> > >> +         * Copy & map in the vcpu_info page if the guest uses one
> > >> +         */
> > >> +        vcpu_info_mfn = d_vcpu->vcpu_info_mfn;
> > >> +        if ( !mfn_eq(vcpu_info_mfn, INVALID_MFN) )
> > >> +        {
> > >> +            mfn_t new_vcpu_info_mfn = cd_vcpu->vcpu_info_mfn;
> > >> +
> > >> +            /*
> > >> +             * Allocate & map the page for it if it hasn't been already
> > >> +             */
> > >> +            if ( mfn_eq(new_vcpu_info_mfn, INVALID_MFN) )
> > >> +            {
> > >> +                gfn_t gfn = mfn_to_gfn(d, vcpu_info_mfn);
> > >> +                unsigned long gfn_l = gfn_x(gfn);
> > >> +                struct page_info *page;
> > >> +
> > >> +                if ( !(page = alloc_domheap_page(cd, 0)) )
> > >> +                    return -ENOMEM;
> > >> +
> > >> +                new_vcpu_info_mfn = page_to_mfn(page);
> > >> +                set_gpfn_from_mfn(mfn_x(new_vcpu_info_mfn), gfn_l);
> > >> +
> > >> +                ret = p2m->set_entry(p2m, gfn, new_vcpu_info_mfn,
> > >> +                                     PAGE_ORDER_4K, p2m_ram_rw,
> > >> +                                     p2m->default_access, -1);
> > >> +                if ( ret )
> > >> +                    return ret;
> > >> +
> > >> +                ret = map_vcpu_info(cd_vcpu, gfn_l,
> > >> +                                    d_vcpu->vcpu_info_offset);
> > >> +                if ( ret )
> > >> +                    return ret;
> > >> +            }
> > >> +
> > >> +            copy_domain_page(new_vcpu_info_mfn, vcpu_info_mfn);
> > >> +        }
> > >> +
> > >> +        /*
> > >> +         * Setup the vCPU runstate area
> > >> +         */
> > >> +        if ( guest_handle_is_null(runstate_guest(cd_vcpu)) )
> > >
> > > Maybe I'm confused, but isn't this check the other way around? You
> > > need to check whether the parent runstate is not null, and if so
> > > copy it to the fork, ie:
> > >
> > > if ( !guest_handle_is_null(runstate_guest(d_vcpu)) )
> > > {
> > >     ...
> > >
> > >> +        {
> > >> +            runstate_guest(cd_vcpu) = runstate_guest(d_vcpu);
> > >> +            vcpu_runstate_get(cd_vcpu, &runstate);
> > >> +            __copy_to_guest(runstate_guest(cd_vcpu), &runstate, 1);
> > >
> > > You should check the return code I think.
> >
> > I don't think so - this is a best effort operation just like e.g.
> > in the handling of VCPUOP_register_runstate_memory_area.
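For reference, the handling Jan mentions follows roughly this pattern (a
condensed sketch of the VCPUOP_register_runstate_memory_area case in
xen/common/domain.c; the exact code may differ):

    case VCPUOP_register_runstate_memory_area:
    {
        struct vcpu_register_runstate_memory_area area;
        struct vcpu_runstate_info runstate;

        rc = -EFAULT;
        if ( copy_from_guest(&area, arg, 1) )
            break;

        if ( !guest_handle_okay(area.addr.h, 1) )
            break;

        rc = 0;
        runstate_guest(v) = area.addr.h;

        if ( v == current )
            __copy_to_guest(runstate_guest(v), &v->runstate, 1);
        else
        {
            vcpu_runstate_get(v, &runstate);
            /* Best effort: the __copy_to_guest() result is ignored. */
            __copy_to_guest(runstate_guest(v), &runstate, 1);
        }

        break;
    }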
>
> I think printing a debug message might be helpful, not so much because
> failing to copy the runstate area matters in itself, but because it
> could signal that something went wrong. Anyway, I don't have a strong
> opinion on it.
>
> Just to confirm: __copy_to_guest will cause the forked domain's memory
> to be populated and the whole page to be copied over, right? (It will
> also cause the page tables to be added to the fork physmap in write
> mode, to set the accessed/dirty bits.)

I checked this and it ends up calling hvm_translate_get_page, which
already issues its call to get_page_from_gfn with P2M_UNSHARE. The
problem is that at that point we are still in the middle of forking the
VM, so mem_sharing_is_fork is not yet true. So what I'll do is set the
parent pointer early, which will allow memory to be populated from the
parent during the copy, roughly as sketched below. If there is an error
during the fork operation the fork domain gets destroyed by the
toolstack anyway, so we don't have to worry about unwinding a half-way
completed fork.
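
A simplified sketch of the reordering I have in mind (based on the
helpers in this series; the exact set of copy routines and the error
handling are elided):

    static int fork(struct domain *cd, struct domain *d)
    {
        int rc;

        domain_pause(d);

        /*
         * Set the parent pointer before any state is copied, so that
         * mem_sharing_is_fork(cd) is already true while the fork is
         * being constructed: __copy_to_guest() on the fork can then
         * populate missing pages from the parent on demand.
         */
        cd->parent = d;

        /* Preemptible, so do it first. */
        if ( (rc = fork_hap_allocation(cd, d)) )
            return rc;

        if ( (rc = bring_up_vcpus(cd, d)) )
            return rc;

        if ( (rc = copy_vcpu_settings(cd, d)) )
            return rc;

        /* No unwinding on failure: the toolstack destroys the fork. */
        return copy_settings(cd, d);
    }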

Tamas