Re: [Xen-devel] Question about xenpage_list
On 28/08/2019 18:07, Tamas K Lengyel wrote:
> On Wed, Aug 28, 2019 at 10:55 AM Andrew Cooper
> <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 28/08/2019 17:25, Tamas K Lengyel wrote:
>>> On Wed, Aug 28, 2019 at 9:54 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>>> On 28.08.2019 17:51, Tamas K Lengyel wrote:
>>>>> On Wed, Aug 28, 2019 at 9:35 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>>>>> On 28.08.2019 17:28, Tamas K Lengyel wrote:
>>>>>>> Hi all,
>>>>>>>
>>>>>>> I'm trying to track down how a call in common/grant_table.c to
>>>>>>> share_xen_page_with_guest() will actually populate that page into the
>>>>>>> guest's physmap.
>>
>> share_xen_page_with_guest() is perhaps poorly named.  It makes the page
>> eligible to be inserted into the guest's p2m.
>>
>> It is internal accounting, so that the permission checks in a subsequent
>> add_to_physmap() call will pass.
>>
>> Perhaps it should be named "allow_guest_access_to_frame()" or similar.
>>
>>>>>>> Immediately after the call the page doesn't seem to
>>>>>>> be present in the physmap, as share_xen_page_with_guest() will just
>>>>>>> add the page to the domain's xenpage_list linked list:
>>>>>>>
>>>>>>>     unsigned long mfn;
>>>>>>>     unsigned long gfn;
>>>>>>>
>>>>>>>     share_xen_page_with_guest(virt_to_page(gt->shared_raw[i]), d,
>>>>>>>                               SHARE_rw);
>>>>>>>
>>>>>>>     mfn = virt_to_mfn(gt->shared_raw[i]);
>>>>>>>     gfn = mfn_to_gmfn(d, mfn);
>>>>>>>
>>>>>>>     gdprintk(XENLOG_INFO, "Sharing %lx -> %lx with domain %u\n",
>>>>>>>              gfn, mfn, d->domain_id);
>>>>>>>
>>>>>>> This results in the following:
>>>>>>>
>>>>>>> (XEN) grant_table.c:1820:d0v0 Sharing ffffffffffffffff -> 42c71e with
>>>>>>> domain 1
>>>>>>>
>>>>>>> AFAICT the page only gets populated into the physmap once the domain
>>>>>>> gets unpaused.
>>>>>>> If I let the domain run and then pause it I can see
>>>>>>> that the page is in the guest's physmap (by looping through all the
>>>>>>> entries in its EPT):
>>>>>>>
>>>>>>> (XEN) mem_sharing.c:1636:d0v0 0xf2000 -> 0x42c71e is a grant mapping
>>>>>>> shared with the guest
>>>>>>
>>>>>> This should be the result of the domain having made a respective
>>>>>> XENMAPSPACE_grant_table request, shouldn't it?
>>>>>>
>>>>> Do you mean the guest itself or the toolstack?
>>>>
>>>> The guest itself - how would the toolstack know where to put the
>>>> frame(s)?
>>>
>>> I don't think that makes sense. How would a guest itself know that it
>>> needs to map that frame? When you restore the VM from a savefile, it
>>> is already running and no firmware is going to run in it to initialize
>>> such GFNs.
>>>
>>> As for the toolstack, I see calls to xc_dom_gnttab_seed from the
>>> toolstack during domain creation (be it a new domain or one being
>>> restored from a save file), which does issue a XENMEM_add_to_physmap
>>> with the space specified as XENMAPSPACE_grant_table. It looks like it
>>> gathers the GFN via xc_core_arch_get_scratch_gpfn, so that's how it's
>>> done.
>>
>> On domain creation, the toolstack needs to write the store/console grant
>> entries.
>>
>> If XENMEM_acquire_resource is available and usable (needs newish Xen and
>> dom0 kernel), then that method is preferred.  This lets the toolstack
>> map the grant table frame directly, without inserting it into the
>> guest's p2m first.
>>
>> The fallback path is to pick a free pfn, insert it into the guest
>> physmap, foreign map it, write the entries, unmap, and remove it from
>> the guest physmap.  This has various poor properties, like shattering
>> superpages for the guest, and a general inability to function correctly
>> once the guest has started executing and has a balloon driver in place.
>>
>> At a later point, once the guest starts executing, a grant-table-aware
>> part of the kernel ought to map the grant table at the kernel's
>> preferred location and keep it there permanently.
>>
> OK, makes sense, but when the guest is being restored from a savefile,
> how does it know that it needs to do that mapping again? That frame is
> being re-created during restoration, so when the guest starts to
> execute again it would just have a hole where that page used to be.

This is where we get to the problems of Xen's "migration" not being
transparent.

Currently it is the responsibility of the guest kernel to remap the
grant table on resume.

This is a reasonable requirement for PV guests: because PV guest kernels
maintain their own P2M, it is impossible to migrate them transparently.
This should never have made it into the HVM ABI, but it did, and we're a
decade too late, and only just starting to pick up the pieces.

I presume you're doing some paging work here, and are logically
restoring a guest without its knowledge?

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel