Re: [Xen-devel] Proposal to allow setting up shared memory areas between VMs from xl config file
On Fri, 19 May 2017, Jarvis Roach wrote:
> > While waiting for Jarvis and Edgar to provide more befitting
> > information, I'll try to fill in myself. There is a demand to run
> > "bare-metal" applications on embedded boards (such as the new Xilinx
> > Zynq MPSoC board). People in those communities actually call them
> > "bare-metal". They are written in C and run directly in kernel mode.
> > There is no operating system or unikernel. They can run bare-metal
> > or inside a Xen VM. Usually they drive one or more devices (or
> > FPGAs) with extremely low latency and overhead. An example of such
> > an application is tbm (which is the one I used to do irq latency
> > measurements):
> >
> > https://github.com/edgarigl/tbm/app/xen/guest_irq_latency/apu.c
> >
> > Let's suppose you have one bare-metal app to drive a set of FPGA
> > space and another one to drive another set of FPGA space (or two
> > different devices); you might need the two apps to exchange
> > information.
> >
> > The owner knows exactly how many apps she is going to have on the
> > board and who needs to communicate with whom from the start. It is a
> > fully static configuration.
> >
> > In this scenario, she is going to write in the VM config files of
> > the two apps that one page will be shared between the two, so that
> > they can send each other messages. She will hard-code the address of
> > the shared page in her "bare-metal" app.
> >
> > There is no frontend and backend (in the usual Xen meaning). In some
> > configurations one app might be more critical than the other, but in
> > other scenarios they might have the same criticality.
> >
> > If, as Jan pointed out, we need to call out explicitly which is the
> > frontend and which is the backend for page ownership reasons, then I
> > suggested we expose that configuration to the user, so that she can
> > choose.
>
> Thank you Stefano, that is a good representation of the need.
> I am not familiar with what all page ownership entails; if someone
> knows of a good link, please send it to me privately so I can educate
> myself. :)
>
> Because of my ignorance the following may fly in the face of some
> design paradigm I'm not aware of, but I would expect to see these
> shared memory regions treated as a system resource, analogous to
> physical I/O peripherals: defined so that Xen and Dom0 don't try to
> use them, and allocated to guests as part of a static configuration
> under the control of a system integrator. Something would be needed
> in the system device tree to carve these regions out of the rest of
> memory, and then something in the guest's .cfg that gives the guest
> access to the location. We can already do that, albeit with some
> limitations, using the "iomem" attribute in the .cfg and "uio" nodes
> in the system device tree. I mentioned this elsewhere, but if we just
> had greater control over the memory attributes via the "iomem"
> attribute in the .cfg, I think we'd have most of the functionality
> we'd need.

As memory is allocated on demand, there is no need to carve the regions
out of the device tree. With dom0_mem (which is required on ARM), there
is always unused memory on the system that can be used for this purpose
(unless you specifically want to share some special memory regions,
rather than RAM).

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
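For reference, the iomem workaround mentioned above amounts to a one-line entry in the guest's xl config file; the frame number below is purely illustrative, not a real platform address:

```
# Grant this guest access to one page of machine memory starting at
# machine frame 0x80010 (i.e. physical address 0x80010000 with 4K
# pages). The syntax is "MFN_START,NUM_PAGES".
iomem = [ "0x80010,1" ]
```

The limitation being discussed is that pages mapped this way get device-style memory attributes, whereas a shared-RAM region between two guests would normally want a cacheable mapping.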