Re: question about virtio-vsock on xen
On 23.02.24 23:42, Stefano Stabellini wrote:
> Hi Peng,

Hello Peng, Stefano

>
> We haven't tried to set up virtio-vsock yet.
>
> In general, I am very supportive of using QEMU for virtio backends. We
> use QEMU to provide virtio-net, virtio-block, virtio-console and more.
>
> However, typically virtio-vsock comes into play for VM-to-VM
> communication, which is different. Going via QEMU in Dom0 just to have
> one VM communicate with another VM is not an ideal design: it adds
> latency and uses resources in Dom0 when we could actually do without it.
>
> A better model for VM-to-VM communication would be to have the VMs talk
> to each other directly via the grant table or pre-shared memory (see
> the static shared memory feature) or via Xen hypercalls (see Argo).
>
> For a good Xen design, I think the virtio-vsock backend would need to
> be in Xen itself (the hypervisor).
>
> Of course that is more work and it doesn't help you with the specific
> question you had below :-)
>
> For that, I don't have a pointer to help you but maybe others in CC
> have.

Yes, I will try to provide some info ...

> Cheers,
>
> Stefano
>
>
> On Fri, 23 Feb 2024, Peng Fan wrote:
>> Hi All,
>>
>> Has anyone made virtio-vsock on xen work? My dm args are as below:
>>
>> virtio = [
>> 'backend=0,type=virtio,device,transport=pci,bdf=05:00.0,backend_type=qemu,grant_usage=true'
>> ]
>> device_model_args = [
>> '-D', '/home/root/qemu_log.txt',
>> '-d',
>> 'trace:*vsock*,trace:*vhost*,trace:*virtio*,trace:*pci_update*,trace:*pci_route*,trace:*handle_ioreq*,trace:*xen*',
>> '-device',
>> 'vhost-vsock-pci,iommu_platform=false,id=vhost-vsock-pci0,bus=pcie.0,addr=5.0,guest-cid=3']
>>
>> During my test, it always returns a failure in the dom0 kernel in the
>> code below:
>>
>> vhost_transport_do_send_pkt {
>> ...
>>         nbytes = copy_to_iter(hdr, sizeof(*hdr), &iov_iter);
>>         if (nbytes != sizeof(*hdr)) {
>>                 vq_err(vq, "Faulted on copying pkt hdr %x %x %x %px\n",
>>                        nbytes, sizeof(*hdr),
>>                        __builtin_object_size(hdr, 0), &iov_iter);
>>                 kfree_skb(skb);
>>                 break;
>>         }
>> }
>>
>> I checked copy_to_iter: it copies data to a __user address, but it
>> never succeeds; the copy to the __user address always returns 0 bytes
>> copied.
>>
>> The asm instruction "sttr x7, [x6]" triggers a data abort and the
>> kernel runs into do_page_fault, but lock_mm_and_find_vma reports
>> VM_FAULT_BADMAP, which means the __user address is not mapped: no vma
>> covers this address.
>>
>> I am not sure what may cause this. I would appreciate any comments.

...

Peng, we have a vhost-vsock (and vhost-net) Xen PoC. Although it is
non-upstreamable in its current shape (it is based on an old Linux
version, requires some rework and proper integration, and most likely
requires involving Qemu and protocol changes to pass additional info to
vhost), it works with Linux v5.10 + a patched Qemu v7.0, so you can
refer to the Yocto meta layer which contains the kernel patches for the
details [1].

In a nutshell, before accessing the guest data the host module needs to
map the descriptors in the virtio rings, which contain either guest
grant-based DMA addresses (by using Xen grant mappings) or guest
pseudo-physical addresses (by using Xen foreign mappings). After
accessing the guest data the host module needs to unmap them. That is
likely what you are hitting: vhost dereferences those descriptor
addresses as __user pointers, but they are guest addresses that were
never mapped into dom0, hence the VM_FAULT_BADMAP.

Also note that in the PoC the target mapping scheme is controlled via a
module param and the guest domain id is retrieved from the device-model
specific part of Xenstore (so Qemu and the protocol are unmodified).
But you might want to look at [2] as an example of the vhost-user
protocol changes needed to pass that additional info.

Hope that helps.
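To make the map/unmap step concrete, here is a minimal, hypothetical
sketch of the grant-mapping path in dom0. This is NOT the PoC code from
[1]: the helper names are mine, error handling is reduced to the
minimum, and the address decoding mirrors the guest-side encoding in
drivers/xen/grant-dma-ops.c, assuming the guest uses grant_usage=true
and the guest domid is already known:

#include <linux/mm.h>
#include <xen/grant_table.h>
#include <xen/page.h>

/* Same encoding as the guest side in drivers/xen/grant-dma-ops.c. */
#define XEN_GRANT_DMA_ADDR_OFF	(1ULL << 63)

static void *map_guest_desc(u64 desc_addr, domid_t guest_domid,
			    struct page **page,
			    struct gnttab_map_grant_ref *map_op)
{
	/* The grant reference is encoded in the DMA address. */
	grant_ref_t ref = (desc_addr & ~XEN_GRANT_DMA_ADDR_OFF) >>
			  XEN_PAGE_SHIFT;
	unsigned long vaddr;

	/* Ballooned-out page that will back the grant mapping. */
	if (gnttab_alloc_pages(1, page))
		return NULL;

	vaddr = (unsigned long)pfn_to_kaddr(page_to_pfn(*page));
	gnttab_set_map_op(map_op, vaddr, GNTMAP_host_map, ref, guest_domid);

	if (gnttab_map_refs(map_op, NULL, page, 1) ||
	    map_op->status != GNTST_okay) {
		gnttab_free_pages(1, page);
		return NULL;
	}

	/* dom0-visible address of the guest data (page + offset). */
	return (void *)(vaddr + offset_in_page(desc_addr));
}

static void unmap_guest_desc(struct page **page,
			     struct gnttab_map_grant_ref *map_op)
{
	struct gnttab_unmap_grant_ref unmap_op;

	gnttab_set_unmap_op(&unmap_op, map_op->host_addr, GNTMAP_host_map,
			    map_op->handle);
	gnttab_unmap_refs(&unmap_op, NULL, page, 1);
	gnttab_free_pages(1, page);
}

The foreign-mapping scheme follows the same map-before-access /
unmap-after pattern, just translating guest pseudo-physical addresses
instead of grant references.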
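For the other PoC detail, here is a similar sketch of reading the guest
domid from Xenstore in dom0. The node name and path are purely an
assumption for illustration (the PoC may use a different device-model
path), only xenbus_read() itself is the real kernel API:

#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <xen/xenbus.h>

/*
 * Hypothetical helper: read the guest domid that the toolstack
 * recorded for the vhost backend under an illustrative node.
 */
static domid_t vsock_guest_domid(unsigned int backend_domid)
{
	domid_t domid = 0;
	char path[64];
	char *val;

	snprintf(path, sizeof(path), "/local/domain/%u/device-model",
		 backend_domid);

	val = xenbus_read(XBT_NIL, path, "vsock-guest-domid", NULL);
	if (IS_ERR(val))
		return 0;	/* treat "not present" as invalid */

	if (kstrtou16(val, 10, &domid))
		domid = 0;

	kfree(val);
	return domid;
}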
[1] https://github.com/xen-troops/meta-xt-vhost/commits/main/
[2] https://www.mail-archive.com/qemu-devel@xxxxxxxxxx/msg948327.html

P.S. I may answer with a delay.

>>
>> BTW: I tested blk pci and it works, so virtio pci should work on my
>> setup.
>>
>> Thanks,
>> Peng.
>>