Re: Enabling hypervisor agnosticism for VirtIO backends
On Thu, Sep 2, 2021 at 12:19 AM AKASHI Takahiro <takahiro.akashi@xxxxxxxxxx> wrote:
> Hi Christopher,

(for ref, this diagram is from this document:
https://openxt.atlassian.net/wiki/spaces/DC/pages/1348763698 )

Takahiro, thanks for reading the Virtio-Argo materials.

Some relevant context before answering your questions below: the Argo request interface from the hypervisor to a guest, which is currently exposed only via a dedicated hypercall op, has been discussed within the Xen community and is open to being changed in order to better enable support for guest VM access to Argo functions in a hypervisor-agnostic way. The proposal is to allow hypervisors the option to implement and expose any of multiple access mechanisms for Argo, and then enable a guest device driver to probe the hypervisor for the methods that it is aware of and able to use. The hypercall op is likely to be retained (in some form) and complemented, at least on x86, with another interface via MSRs presented to the guests.

> 1) How the configuration is managed?

Just to be clear about my understanding: your question, in the context of a Linux kernel virtio device driver implementation, is about how a virtio-argo transport driver would implement the get_features function of the virtio_config_ops, as a parallel to the work that vp_get_features does for virtio-pci and vm_get_features does for virtio-mmio.

The design is still open on this and options have been discussed, including:

* an extension to Argo to allow the system toolstack (which is responsible for managing guest VMs and enabling connections from frontends to backends) to manage a table of "implicit destinations", so a guest can transmit Argo messages to eg. a "my storage service" port and the hypervisor will deliver them based on a destination table pre-programmed by the toolstack for that VM. [1]
  Within that feature negotiation function, communication with the backend would then occur over that Argo channel.
  ref: Notes from the December 2019 Xen F2F meeting in Cambridge, UK:
  [1] https://lists.archive.carbon60.com/xen/devel/577800#577800

* IOREQ
  The Xen IOREQ implementation is not currently appropriate for virtio-argo, since it requires the use of foreign memory mappings of frontend memory in the backend guest. However, a new HMX interface from the hypervisor could support a new DMA Device Model Op to allow the backend to request that the hypervisor retrieve specified bytes from the frontend guest, which would enable plumbing for device configuration between an IOREQ server (device model backend implementation) and the guest driver. [2]
  Feature negotiation in the front end in this case would look very similar to the virtio-mmio implementation.
  ref: Argo HMX Transport for VirtIO meeting minutes, from January 2021:
  [2] https://lists.xenproject.org/archives/html/xen-devel/2021-02/msg01422.html

* guest ACPI tables that surface the address of a remote Argo endpoint on behalf of the toolstack, with Argo communication then used to negotiate features

* emulation of a basic PCI device by the hypervisor (though the details of this have not been determined)

> 2) Do there physically exist virtio's available/used vrings as well as
> descriptors, or are they emulated over Argo rings?

In short: the latter.

In the analysis that I did when looking at this, my observation was that each side (front and backend) should be able to accurately maintain their own local copy of the available/used vrings as well as the descriptors, and both can be kept synchronized by ensuring that updates are transmitted to the other side when they are written to. As part of this, for example, in the Linux front end implementation the virtqueue_notify function uses a function pointer in the virtqueue that is populated by the transport driver (ie. the virtio-argo driver in this case), which can implement the necessary logic to coordinate with the backend.
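To make that concrete, here is a minimal illustrative sketch (not code from the Virtio-Argo project) of what such a transport notify hook could look like. The per-virtqueue info structure, the sync message layout and the virtio_argo_sendv() wrapper below are hypothetical stand-ins for whatever the driver would actually define; only struct virtqueue, struct vring_desc and the notify callback shape used by vp_notify()/vm_notify() are existing kernel interfaces.

#include <linux/types.h>
#include <linux/string.h>
#include <linux/virtio.h>
#include <linux/virtio_ring.h>

/* Hypothetical: descriptors and available-index written since the last
 * notify, packaged for transmission to the backend over Argo. */
struct virtio_argo_sync_msg {
        u16 vq_index;
        u16 avail_idx;
        u16 num_descs;
        struct vring_desc descs[16];
};

/* Hypothetical per-virtqueue bookkeeping kept by the transport driver. */
struct virtio_argo_vq_info {
        struct virtio_device *vdev;
        u16 shadow_avail_idx;
        u16 num_pending;
        struct vring_desc pending_descs[16];
};

/* Hypothetical wrapper around the Argo sendv operation. */
int virtio_argo_sendv(struct virtio_device *vdev, const void *buf, size_t len);

/* Notify callback with the same shape as vp_notify()/vm_notify(),
 * installed by the transport driver when it creates the virtqueue.
 * Instead of writing to a doorbell register, it transmits the pending
 * ring updates so the backend can refresh its local copy of the vring. */
static bool virtio_argo_notify(struct virtqueue *vq)
{
        struct virtio_argo_vq_info *info = vq->priv;
        struct virtio_argo_sync_msg msg = {
                .vq_index  = vq->index,
                .avail_idx = info->shadow_avail_idx,
                .num_descs = info->num_pending,
        };

        memcpy(msg.descs, info->pending_descs,
               info->num_pending * sizeof(msg.descs[0]));
        info->num_pending = 0;

        return virtio_argo_sendv(info->vdev, &msg, sizeof(msg)) == 0;
}

A corresponding receive path would be needed in the other direction, to apply the backend's used-ring updates to the frontend's local copy.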
> 3) The payload in a request will be copied into the receiver's Argo ring. [...]

Effectively yes. I would treat it as a handle that is used to identify and retrieve data from messages exchanged between the frontend transport driver and the backend via the Argo rings established for moving data for the data path. In the diagram, those are the "Argo ring for reads" and the "Argo ring for writes".

> 4) Estimate of performance or latency?

Different access methods to Argo (ie. related to my answer to your question 1 above) will have different performance characteristics.

Data copying will necessarily be involved for any Hypervisor-Mediated data eXchange (HMX) mechanism [1], such as Argo, where there is no shared memory between guest VMs, but the performance profile on modern CPUs with sizable caches has been demonstrated to be acceptable for the guest virtual device driver use case in the HP/Bromium vSentry uXen product. The VirtIO structure is somewhat different, though. Further performance profiling and measurement will be valuable for tuning the implementation and for developing additional interfaces (eg. an asynchronous send primitive); some of this has been discussed and described on the VirtIO-Argo-Development-Phase-1 wiki page [2].

[1] https://wiki.xenproject.org/wiki/Argo:_Hypervisor-Mediated_Exchange_(HMX)_for_Xen
[2] https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development%3A+Phase+1

> It appears that, on FE side, at least three hypervisor calls (and data
> copying) are needed for every request.

For a write, counting FE sendv ops:
1: the write data payload is sent via the "Argo ring for writes"
2: the descriptor is sent via a sync of the available/descriptor ring
-- is there a third one that I am missing?

(An illustrative sketch of those two operations is appended below.)

Christopher
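For illustration, here is a minimal sketch of those two frontend operations for a single write, matching the numbering above. The argo_dest addressing structure, the sync-update record and the argo_sendv() wrapper are hypothetical placeholders rather than an existing Argo or kernel API.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical Argo destination (domain id + port) and sendv wrapper. */
struct argo_dest {
        uint32_t domain_id;
        uint32_t port;
};
int argo_sendv(const struct argo_dest *dst, const void *buf, size_t len);

/* Hypothetical record describing the descriptor/available-ring update. */
struct vring_sync_update {
        uint16_t avail_idx;
        uint16_t desc_idx;
        uint32_t desc_len;
};

/* Frontend side of a single write request: two sendv operations. */
static int virtio_argo_fe_write(const struct argo_dest *write_ring,
                                const struct argo_dest *sync_ring,
                                const void *payload, size_t payload_len,
                                const struct vring_sync_update *upd)
{
        int ret;

        /* 1: the write data payload is sent via the "Argo ring for writes". */
        ret = argo_sendv(write_ring, payload, payload_len);
        if (ret)
                return ret;

        /* 2: the descriptor is sent via a sync of the available/descriptor
         * ring, so the backend's local vring copy picks up the request. */
        return argo_sendv(sync_ring, upd, sizeof(*upd));
}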