Re: Question to mem-path support at QEMU for Xen
On Thu, 28 Jul 2022, Igor Mammedov wrote:
> On Thu, 28 Jul 2022 15:17:49 +0800
> Huang Rui <ray.huang@xxxxxxx> wrote:
>
> > Hi Igor,
> >
> > Appreciate you for the reply!
> >
> > On Wed, Jul 27, 2022 at 04:19:30PM +0800, Igor Mammedov wrote:
> > > On Tue, 26 Jul 2022 15:27:07 +0800
> > > Huang Rui <ray.huang@xxxxxxx> wrote:
> > >
> > > > Hi Anthony and other QEMU/Xen guys,
> > > >
> > > > We are trying to enable Venus on the Xen virtualization platform, and
> > > > we would like to use a memory backend with the
> > > > memory-backend-memfd,id=mem1,size=4G options on QEMU. However, QEMU
> > > > tells us that "-mem-path" is not supported with Xen. I verified that
> > > > the same function works on KVM.
> > > >
> > > > qemu-system-i386: -mem-path not supported with Xen
> > > >
> > > > So may I know whether Xen has a limitation that prevents supporting
> > > > memory-backend-memfd in QEMU, or is the implementation just not done
> > > > yet?
> > >
> > > Currently Xen doesn't use guest RAM allocation the way the rest of the
> > > accelerators do. (It has hacks in the memory_region/ramblock API that
> > > divert RAM allocation calls to a Xen-specific API.)
> >
> > I am new to Xen and QEMU; we are working on GPUs. We would like to have
> > a piece of backend memory, like video memory, for VirtIO GPU to support
> > guest VM Mesa Vulkan (Venus). Do you mean we can use the
> > memory_region/ramblock APIs to work around it?
> >
> > > The sane way would be to extend Xen to accept the RAM regions (whether
> > > they are RAM- or fd-based) that QEMU allocates, instead of going its
> > > own way. This way it could reuse all the memory backends that QEMU
> > > provides for the rest of the non-Xen world. (Not to mention that we
> > > could drop non-trivial Xen hacks, so that guest RAM handling would be
> > > consistent with other accelerators.)
> >
> > May I know what you mean by "going its own way"? This sounds good;
> > could you please elaborate on how we can implement this? We would like
> > to give it a try to address the problem on Xen. Would you mind pointing
> > me somewhere I can learn about and understand RAM regions? Very happy
> > to see your suggestions!
>
> See for example ram_block_add(): if Xen could be persuaded to use memory
> allocated by the '!xen_enabled()' branch, then it's likely that
> file-based backends would also become usable.
>
> Whether it is possible for Xen or not I don't know; I guess the CCed Xen
> folks will suggest something useful.

Hi Igor, Huang,

I think there is room for improvement in the way Xen support is implemented
in QEMU, especially because it predates support for the other accelerators
in QEMU. I am happy to help come up with a good idea or two on how to do it
better. At the same time, it is not trivial.

The way Xen works with QEMU is that Xen is the one allocating memory for
the VM. Keep in mind that in a Xen setup, the VM where QEMU is running is
just one of the many VMs in the system (even if it is dom0) and doesn't
have ownership over the entire memory. It is also possible to run QEMU in a
regular guest for security benefits.

Given that, Xen is typically the one allocating memory for the VM, and by
the time QEMU is started the main memory is already allocated. That is the
reason why ram_block_add() is normally implemented in a special way for
Xen, using xen_ram_alloc. In most cases, there is no need to allocate any
memory at all.

However, it is also possible for QEMU to allocate memory for the VM. It is
done by QEMU issuing a hypercall to Xen. See for instance the call to
xc_domain_populate_physmap_exact.
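To make this concrete, here is a minimal sketch, modeled on QEMU's
xen_ram_alloc, of how a device model can populate guest physical memory
through that hypercall. The populate_guest_ram helper name and the page
size constant are mine, for illustration only; they are not from the
thread:

#include <stdint.h>
#include <stdlib.h>
#include <xenctrl.h>      /* xc_interface, xc_domain_populate_physmap_exact */

#define PAGE_SHIFT 12     /* illustrative: 4K pages */

/* Populate 'size' bytes of guest RAM starting at guest physical
 * address 'gpa' for domain 'domid'. Returns 0 on success. */
static int populate_guest_ram(xc_interface *xch, uint32_t domid,
                              uint64_t gpa, uint64_t size)
{
    unsigned long nr_pfn = size >> PAGE_SHIFT;
    xen_pfn_t *pfn_list = calloc(nr_pfn, sizeof(*pfn_list));
    unsigned long i;
    int rc;

    if (!pfn_list)
        return -1;

    /* Ask Xen to back each guest frame number with a page of RAM. */
    for (i = 0; i < nr_pfn; i++)
        pfn_list[i] = (gpa >> PAGE_SHIFT) + i;

    /* extent_order 0 = single 4K pages, mem_flags 0 = defaults. */
    rc = xc_domain_populate_physmap_exact(xch, domid, nr_pfn, 0, 0,
                                          pfn_list);
    free(pfn_list);
    return rc;
}

This is essentially what QEMU's xen_ram_alloc does when it does allocate:
for the main RAM it returns early, because the toolstack has already
populated that memory before QEMU starts.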
As you can see from the implementation of xen_ram_alloc, it is already used
for things that are not normal memory (see the check mr == &ram_memory). So
if you want to allocate memory for the VM, you could use
xc_domain_populate_physmap_exact.

Another thing to note is that Xen can connect to multiple emulators at the
same time. Each emulator (each instance of QEMU, or another emulator) uses
a hypercall to connect to Xen and register a PCI device or memory range
that it is emulating. Each emulator is called an "IOREQ server" in Xen
terminology, because it handles "IO emulation REQuests" from Xen. The IOREQ
interface is very simple: each IOREQ is defined by an ioreq_t struct, see:

  xen/include/public/hvm/ioreq.h:ioreq_t

So if you are looking to connect a secondary emulator to QEMU, a different,
more efficient way to solve the problem on Xen would be to connect the
secondary emulator directly to Xen. Instead of:

  Xen -> QEMU -> secondary emulator

You would do:

  Xen ---> QEMU
    |
    +-> secondary emulator

The second approach is much faster in terms of latency because you skip a
step in the notification chain. See how QEMU calls xen_register_ioreq to
register itself with Xen.
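For illustration, here is a rough sketch of how a secondary emulator could
register itself directly with Xen as an IOREQ server, using the stable
libxendevicemodel API (tools/libs/devicemodel). The register_emulator
helper and the MMIO range are made up for the example, and the ioreq ring
mapping and event-channel dispatch loop that a real emulator needs are
omitted:

#include <stdint.h>
#include <xendevicemodel.h>   /* stable device-model API */

/* Register as an IOREQ server for domain 'domid' and claim one MMIO
 * range; Xen will then forward guest accesses in that range to us as
 * ioreq_t requests (xen/include/public/hvm/ioreq.h). */
static int register_emulator(domid_t domid)
{
    xendevicemodel_handle *dmod = xendevicemodel_open(NULL, 0);
    ioservid_t id;

    if (!dmod)
        return -1;

    /* Create the IOREQ server; the second argument selects buffered
     * ioreq handling (0 = off, for simplicity here). */
    if (xendevicemodel_create_ioreq_server(dmod, domid, 0, &id))
        goto err;

    /* Claim a (hypothetical) MMIO window for our emulated device. */
    if (xendevicemodel_map_io_range_to_ioreq_server(dmod, domid, id,
                                                    1 /* MMIO */,
                                                    0xf0000000, 0xf0000fff))
        goto err;

    /* Start receiving requests. A real emulator would now map the
     * shared ioreq pages, bind the per-vCPU event channels, and enter
     * its dispatch loop, much like QEMU does via xen_register_ioreq.
     * The handle stays open for the lifetime of the server. */
    return xendevicemodel_set_ioreq_server_state(dmod, domid, id, 1);

err:
    xendevicemodel_close(dmod);
    return -1;
}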