[RFC PATCH] xen: privcmd: fix ioeventfd/ioreq crashing PV domain
Starting a virtio backend in a PV domain would panic the kernel in
alloc_ioreq(), which tried to dereference vma->vm_private_data as a
pages array when it actually still held the PRIV_VMA_LOCKED sentinel.

Fix this by allocating a pages array in mmap_resource in the PV case,
filling it with page info converted from the pfn array. This allows
ioreq to work with a backend provided by a PV dom0.

Signed-off-by: Val Packett <val@xxxxxxxxxxxxxxxxxxxxxx>
---
I've been porting the xen-vhost-frontend[1] to Qubes OS, which runs on
amd64 and (still) uses PV for dom0. The x86 part didn't give me much
trouble, but the first thing I found was this crash caused by hosting
the backend in a PV domain: alloc_ioreq() was dereferencing the '1'
constant (PRIV_VMA_LOCKED) and panicking the dom0 kernel.

It turned out that in the PV case I can build a pages array in the
expected format from the pfn array, which is where the actual memory
mapping happens. With this fix the ioreq part works: the vhost
frontend replies to the probing sequence and the guest recognizes
which virtio device is being provided. (A simplified sketch of the
failing dereference follows the patch below.)

I still have another thing to debug: MMIO accesses from the inner
driver (e.g. virtio_rng) don't get through to the vhost provider (the
ioeventfd is never notified), and manually kicking the eventfd from
the frontend seems to crash... Xen itself?? There is no Linux panic on
the console, just a freeze and a quick reboot - I'll try to set up a
serial console next. But I figured I'd post this as an RFC already,
since that other bug may be unrelated and the ioreq area itself does
work now. I'd like to hear some feedback on this from people who
actually know Xen :)

[1]: https://github.com/vireshk/xen-vhost-frontend

Thanks,
~val
---
 drivers/xen/privcmd.c | 34 ++++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index f52a457b302d..c9b4dae7e520 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -834,8 +834,23 @@ static long privcmd_ioctl_mmap_resource(struct file *file,
 			if (rc < 0)
 				break;
 		}
-	} else
+	} else {
+		unsigned int i;
+		unsigned int numpgs = kdata.num / XEN_PFN_PER_PAGE;
+		struct page **pages;
 		rc = 0;
+
+		pages = kvcalloc(numpgs, sizeof(pages[0]), GFP_KERNEL);
+		if (pages == NULL) {
+			rc = -ENOMEM;
+			goto out;
+		}
+
+		for (i = 0; i < numpgs; i++) {
+			pages[i] = xen_pfn_to_page(pfns[i * XEN_PFN_PER_PAGE]);
+		}
+		vma->vm_private_data = pages;
+	}
 	}
 
 out:
@@ -1589,15 +1604,18 @@ static void privcmd_close(struct vm_area_struct *vma)
 	int numgfns = (vma->vm_end - vma->vm_start) >> XEN_PAGE_SHIFT;
 	int rc;
 
-	if (xen_pv_domain() || !numpgs || !pages)
+	if (!numpgs || !pages)
 		return;
 
-	rc = xen_unmap_domain_gfn_range(vma, numgfns, pages);
-	if (rc == 0)
-		xen_free_unpopulated_pages(numpgs, pages);
-	else
-		pr_crit("unable to unmap MFN range: leaking %d pages. rc=%d\n",
-			numpgs, rc);
+	if (!xen_pv_domain()) {
+		rc = xen_unmap_domain_gfn_range(vma, numgfns, pages);
+		if (rc == 0)
+			xen_free_unpopulated_pages(numpgs, pages);
+		else
+			pr_crit("unable to unmap MFN range: leaking %d pages. rc=%d\n",
+				numpgs, rc);
+	}
+	kvfree(pages);
 }
--
2.51.0
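
P.S. For anyone not familiar with this code path: below is a
simplified sketch of the dereference that was crashing, assuming the
upstream ioeventfd pattern of reading the pages array straight out of
the VMA. The function name and exact shape here are illustrative, not
the verbatim kernel code:

#include <linux/mm.h>	/* struct vm_area_struct, page_to_virt() */

/* Sketch: how the ioreq setup code consumes vm_private_data. */
static void *ioreq_base_sketch(struct vm_area_struct *vma)
{
	/*
	 * On HVM/PVH, mmap_resource stores a real pages array here.
	 * On a PV dom0 it used to store the PRIV_VMA_LOCKED sentinel
	 * ((void *)1), so pages[0] below dereferenced address 1 and
	 * faulted.
	 */
	struct page **pages = vma->vm_private_data;

	return page_to_virt(pages[0]);
}

With the patch, the PV path stores a real array too (built with
xen_pfn_to_page() from the pfns that were actually mapped), so the
lookup above finds a valid struct page.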