Re: [Stratos-dev] Enabling hypervisor agnosticism for VirtIO backends
On Tue, 14 Sep 2021, Trilok Soni wrote:
> On 9/13/2021 4:51 PM, Stefano Stabellini via Stratos-dev wrote:
> > On Mon, 6 Sep 2021, AKASHI Takahiro wrote:
> > > > the second is how many context switches are involved in a transaction.
> > > > Of course with all things there is a trade off. Things involving the
> > > > very tightest latency would probably opt for a bare metal backend which
> > > > I think would imply hypervisor knowledge in the backend binary.
> > >
> > > In the configuration phase of a virtio device, latency won't be a big
> > > matter. In device operations (i.e. read/write to block devices), if we
> > > can resolve the 'mmap' issue, as Oleksandr is proposing right now, the
> > > only issue is how efficiently we can deliver notifications to the
> > > opposite side. Right? And this is a very common problem whatever
> > > approach we take.
> > >
> > > Anyhow, if we do care about latency in my approach, most of the
> > > virtio-proxy-related code can be re-implemented just as a stub (or
> > > shim?) library, since the protocols are defined as RPCs. In this case,
> > > however, we would lose the benefit of providing a "single binary" BE.
> > > (I know this is an arguable requirement, though.)
> >
> > In my experience, latency, performance, and security are far more
> > important than providing a single binary.
> >
> > In my opinion, we should optimize for the best performance and security,
> > then be practical on the topic of hypervisor agnosticism. For instance,
> > a shared source with a small hypervisor-specific component, with one
> > implementation of the small component for each hypervisor, would provide
> > a good enough hypervisor abstraction. It is good to be hypervisor
> > agnostic, but I wouldn't go to extra lengths to have a single binary. I
> > cannot picture a case where a BE binary needs to be moved between
> > different hypervisors and a recompilation is impossible (BE, not FE).
> > Instead, I can definitely imagine detailed requirements on IRQ latency
> > having to be lower than 10us or bandwidth higher than 500 MB/sec.
> >
> > Instead of virtio-proxy, my suggestion is to work together on a common
> > project and common source with others interested in the same problem.
> >
> > I would pick something like kvmtool as a basis. It doesn't have to be
> > kvmtool, and kvmtool specifically is GPL-licensed, which is unfortunate
> > because it would help if the license was BSD-style for ease of
> > integration with Zephyr and other RTOSes.
> >
> > As long as the project is open to working together on multiple
> > hypervisors and deployment models then it is fine. For instance, the
> > shared source could be based on OpenAMP kvmtool [1] (the original
> > kvmtool likely prefers to stay small and narrowly focused on KVM).
> > OpenAMP kvmtool was created to add support for hypervisor-less virtio,
> > but they are very open to hypervisors too. It could be a good place to
> > add a Xen implementation, a KVM fatqueue implementation, a Jailhouse
> > implementation, etc. -- work together toward the common goal of a
> > single BE source (not binary) supporting multiple different deployment
> > models.
>
> I have my reservations on using "kvmtool" to do any development here.
> "kvmtool" can't be used in products; it is just a tool for developers.
>
> The benefit of solving the problem w/ rust-vmm is that some of the
> crates from this project can be utilized for a real product.
> Alex has mentioned that "rust-vmm" today has some KVM-specific bits, but
> the rust-vmm community is already discussing removing or re-organizing
> them in such a way that other hypervisors can fit in.
>
> Microsoft has a Hyper-V implementation w/ cloud-hypervisor which uses
> some of the rust-vmm components as well, and they have shown interest in
> adding Hyper-V support to the "rust-vmm" project too. I don't know the
> current progress, but they have proven it in the "cloud-hypervisor"
> project.
>
> The "rust-vmm" project's license will work as well for most of the
> project developments, and I see that "CrosVM" is shipping in products
> too.

Most things in open source start as a developer's tool before they become
part of a product :)

I am concerned about how "embeddable" rust-vmm is going to be. Do you
think it would be possible to run it against an RTOS together with other
apps written in C?

Let me make a realistic example. You can imagine a Zephyr instance with
simple toolstack functionalities written in C (starting/stopping VMs).
One might want to add a virtio backend to it. I am not familiar enough
with Rust and rust-vmm to know whether it would be feasible and "easy" to
run a rust-vmm backend as a Zephyr app.

A C project the size of kvmtool, but BSD-licensed, could run on Zephyr
with only a little porting effort using the POSIX compatibility layer. I
think that would be ideal. Is anybody aware of a project fulfilling these
requirements?

If we have to give up the ability to integrate with an RTOS, then I think
QEMU could be the leading choice, because it is still the main reference
implementation for virtio.
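
To make the "shared source with a small hypervisor-specific component"
idea quoted above a bit more concrete, here is a minimal C sketch of what
such a component's interface could look like. All names in it (hyp_ops,
the xen_* stubs, backend_run) are hypothetical and invented purely for
illustration; they are not taken from kvmtool, rust-vmm, or any existing
codebase. The generic BE code would be written against a small table of
function pointers, and each hypervisor (Xen, KVM, Jailhouse, ...) would
provide one implementation of that table, selected at build time.

/*
 * Hypothetical sketch only: the "small hypervisor-specific component"
 * of a shared-source virtio backend. Names are made up for illustration.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Everything the generic BE code needs from the hypervisor. */
struct hyp_ops {
    /* Map a frontend/guest memory region into the backend. */
    void *(*map_guest)(uint64_t guest_addr, size_t len);
    void  (*unmap_guest)(void *addr, size_t len);
    /* Deliver a virtqueue notification (kick/interrupt) to the frontend. */
    int   (*notify)(unsigned int queue_idx);
    /* Block until the frontend kicks the given queue. */
    int   (*wait_notify)(unsigned int queue_idx);
};

/* One such table per hypervisor; a Xen-flavoured stub as an example.
 * A real version would use foreign/grant mappings and event channels. */
static void *xen_map_guest(uint64_t guest_addr, size_t len)
{
    (void)guest_addr; (void)len;
    return NULL;   /* stub: mapping not implemented here */
}
static void xen_unmap_guest(void *addr, size_t len) { (void)addr; (void)len; }
static int  xen_notify(unsigned int q)      { (void)q; return 0; }
static int  xen_wait_notify(unsigned int q) { (void)q; return 0; }

static const struct hyp_ops xen_ops = {
    .map_guest   = xen_map_guest,
    .unmap_guest = xen_unmap_guest,
    .notify      = xen_notify,
    .wait_notify = xen_wait_notify,
};

/* Generic backend code only ever sees 'struct hyp_ops'. */
static int backend_run(const struct hyp_ops *ops)
{
    void *ring = ops->map_guest(0x40000000ULL, 4096);
    if (!ring)
        printf("mapping not implemented in this stub\n");
    return ops->notify(0);
}

int main(void)
{
    return backend_run(&xen_ops);
}

A component of roughly this size is also what would need to be written or
ported if the backend were built as a Zephyr app on top of the POSIX
compatibility layer, alongside the C toolstack code mentioned above.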