
Re: [Xen-users] Grant iomem access and map IRQs to a domU guest in Xen for ARM targets


  • To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
  • From: "Kapania, Ashish" <akapania@xxxxxx>
  • Date: Tue, 10 Jun 2014 01:37:39 +0000
  • Accept-language: en-US
  • Cc: "xen-users@xxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxx>
  • Delivery-date: Tue, 10 Jun 2014 01:38:56 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>
  • Thread-index: AQHPg76wsxw0nDEnwUuzVsYyox1tiZtpP0Zw
  • Thread-topic: [Xen-users] Grant iomem access and map IRQs to a domU guest in Xen for ARM targets

> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@xxxxxxxxxx]
> Sent: Monday, June 09, 2014 1:42 AM
> To: Kapania, Ashish
> Cc: xen-users@xxxxxxxxxxxxx
> Subject: Re: [Xen-users] Grant iomem access and map IRQs to a domU
> guest in Xen for ARM targets
> 
> On Fri, 2014-06-06 at 20:25 +0000, Kapania, Ashish wrote:
> > Hi All,
> >
> > I am working on running an RTOS as a domU guest on an OMAP5 EVM
> > (Cortex-A15).
> > As part of this effort, I need to create drivers for certain
> > peripherals that are owned by the domU guest (basically make the domU
> > a driver domain for certain peripherals). In order to achieve this I
> > need to be able to grant domU access to certain memory mapped
> > registers and map a SPI IRQ generated by the hw peripheral to it.
> >
> > I went through the xl.cfg documentation and my understanding is that
> > it supports 2 options called "iomem" and "irqs" that I can use to
> > achieve the aforementioned purpose when creating a guest. However, it
> > looks like these are not implemented in Xen-ARM. My question is:
> > what is the recommended way of achieving this in Xen for ARM
> > targets?
> 
> Support for iomem= is being worked on right now. See patches from
> Arianna Avanzini on xen-devel over the last few months. Julien Grall is
> also working on irq passthrough which I think will integrate irqs=
> support but also a "higher level" ability to passthrough a device based
> on e.g. device tree nodes without having to worry about the specific
> resources. I'm not sure when this will be ready though.
> 
> In the meantime (and with the Xen 4.4 release) people who want this
> functionality have just been hacking the hypervisor to add the specific
> mappings which they need to the domU domains. The key functions in the
> hypervisor are map_mmio_regions and route_irq_to_guest (see
> xen/arch/arm/platforms/*.c for examples of using these to route devices
> which are not described in devicetree to dom0, which is a bit similar
> to what you want)
> 
> I'm not sure how people have been triggering these extra mappings. I
> suppose you could either do it statically for each domid==N e.g. in
> arch_domain_create (but then you can't easily restart the guests) or
> you could add a custom domctl and add a call in the toolstack, or a new
> DOMCRF_ flag (which gets passed to arch_domain_create).
> 

Thanks Ian, this helps. I think I am going to go ahead with your
suggestion to create a custom domctl op for now and add a call to it in
the Xen toolstack. This should allow me to specify the desired mappings
in the domU.cfg file.
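The end goal would be a config fragment along these lines (syntax as documented in xl.cfg(5), where iomem takes hex page-frame numbers; the address and IRQ number are illustrative only, and of course none of this is wired up on ARM yet):

```
# Hypothetical domU.cfg fragment
iomem = [ "48020,1" ]   # 1 page of MMIO at machine address 0x48020000
irqs  = [ 106 ]         # SPI 74 -> GIC interrupt ID 106 (SPIs start at 32)
```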

> Are the devices you wish to pass through DMA capable? Unless they are
> behind an SMMU that will make things more complex.
> 

At the moment the peripherals I need to pass through are not DMA capable.
In the future, however, I may have to pass through a DMA-capable device.
It will all depend on the use cases we need to support on our RTOS.

I believe the TI chip I am working on (OMAP5) does not have an SMMU.
Given the lack of an SMMU, if I do need to pass through a DMA-capable
device, how do you recommend I go about it? Do I need to implement
something similar to Linux's swiotlb-xen driver in our RTOS?

Thanks,
Ashish

> Hope that helps,
> 
> Ian.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

