
Re: [Xen-devel] Xen/arm: Virtual ITS command queue handling



On Thu, 2015-05-21 at 05:37 -0700, Manish Jaggi wrote:
> 
> On Tuesday 19 May 2015 07:18 AM, Ian Campbell wrote:
> > On Tue, 2015-05-19 at 19:34 +0530, Vijay Kilari wrote:
> >> On Tue, May 19, 2015 at 7:24 PM, Ian Campbell <ian.campbell@xxxxxxxxxx> 
> >> wrote:
> >>> On Tue, 2015-05-19 at 14:36 +0100, Ian Campbell wrote:
> >>>> On Tue, 2015-05-19 at 14:27 +0100, Julien Grall wrote:
> >>>>> With multiple vITS we would have to retrieve the number of vITS.
> >>>>> Maybe by extending the xen_arch_domainconfig?
> >>>> I'm sure we can find a way.
> >>>>
> >>>> The important question is whether we want to go for a N:N vits:pits
> >>>> mapping or 1:N.
> >>>>
> >>>> So far I think we are leaning (slightly?) towards the 1:N model, if we
> >>>> can come up with a satisfactory answer for what to do with global
> >>>> commands.
> >>> Actually, Julien just mentioned NUMA which I think is a strong argument
> >>> for the N:N model.
> >>>
> >>> We need to make a choice here one way or another, since it has knock-on
> >>> effects on other parts, e.g. the handling of SYNC and INVALL etc.
> >>>
> >>> Given that N:N seems likely to be simpler from the Xen side and in any
> >>> case doesn't preclude us moving to a 1:N model (or even a 2:N model etc)
> >>> in the future how about we start with that?
> >>>
> >>> If there is agreement in taking this direction then I will adjust the
> >>> relevant sections of the document to reflect this.
> >> Yes, this makes the Xen side simple. The most important points to discuss are
> >>
> >> 1) How does Xen map vITS to pITS? its0 -> vits0?
> > The choices are basically either Xen chooses and the tools get told (or
> > "Just Know" the result), or the tools choose and set up the mapping in
> > Xen via hypercalls.
> >
> This could be one possible flow:
> -1- Xen code parses the PCI node and creates a pci_hostbridge structure
> which stores the device tree pointer.
> (Using this pointer the msi-parent, i.e. the respective ITS, can be retrieved.)
> -2- dom0 invokes a hypercall to register the pci_hostbridge (seg_no:cfg_addr).
> -3- Xen now knows which ITS a given device id (seg:bus:dev.fn) belongs to.
> A helper function can retrieve the ITS node for a given seg_no.
> -4- When a device is assigned to a domU, we introduce a new hypercall,
> map_guest_bdf, which lets Xen know how a guest's virtual sbdf maps to
> the physical sbdf.

This is an extension to XEN_DOMCTL_assign_device, I think. An extension
because that hypercall currently only receives the physical SBDF.

I wonder how x86 knows the virtual SBDF. Perhaps it has no need to know
it for some reason.

Anyway, the general shape of this plan seems plausible enough.
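
To make what I have in mind concrete, the domctl extension could look
roughly like this. This is only a sketch; the field names (machine_sbdf,
guest_sbdf) are made up for illustration and do not correspond to the
current structure layout:

/*
 * Hypothetical extension of XEN_DOMCTL_assign_device: besides the
 * physical (machine) SBDF the toolstack also passes the SBDF the guest
 * will see, so Xen can record the virtual->physical mapping at
 * assignment time.
 */
struct xen_domctl_assign_device {
    uint32_t machine_sbdf;  /* IN: physical segment:bus:dev.fn */
    uint32_t guest_sbdf;    /* IN: virtual SBDF exposed to the guest
                             * (the new field, ARM/vITS only for now) */
};

The toolstack would fill in guest_sbdf from whatever virtual topology it
decides to expose, and Xen would record the pair per-domain for later
lookup.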

> -5- domU is booted with a single virtual ITS node in its device tree. The
> frontend driver attaches this ITS as the msi-parent.
> -6- When the domU's ITS accesses are trapped in Xen, the physical ITS can
> be retrieved using a helper function, say
> get_phys_its_for_guest(guest_id, guest_sbdf, /*[out]*/its_ptr *its)
> 
> AFAIK this is NUMA safe.
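
Right, and if the per-device lookup lands on the ITS which actually
serves that device then it should indeed follow the NUMA topology.

Just to check we mean the same thing: I would imagine
get_phys_its_for_guest() being little more than a walk of a per-domain
table built up at assignment time by the map_guest_bdf (or extended
assign_device) call above. A rough sketch, with all names invented for
illustration rather than taken from existing code:

/* Per-domain record of an assigned device: which physical ITS serves
 * the SBDF the guest sees.  Purely illustrative. */
struct vsbdf_map {
    struct list_head entry;
    uint32_t guest_sbdf;
    uint32_t machine_sbdf;
    struct host_its *its;          /* pITS serving machine_sbdf */
};

static int get_phys_its_for_guest(struct domain *d, uint32_t guest_sbdf,
                                  struct host_its **its)
{
    struct vsbdf_map *m;

    list_for_each_entry ( m, &d->arch.vsbdf_maps, entry )
    {
        if ( m->guest_sbdf == guest_sbdf )
        {
            *its = m->its;
            return 0;
        }
    }

    return -ENOENT;
}

(Locking etc. omitted, and in practice we would probably key the fast
path off which vITS doorbell was written rather than doing this walk on
every command.)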
> >> 2) When a PCI device is assigned to a DomU, how does the DomU choose
> >>      which vITS to send commands to?  AFAIK, the BDF of the assigned
> >>      device is different from the actual BDF seen in the DomU.
> > AIUI this is described in the firmware tables.
> >
> > e.g. in DT via the msi-parent phandle on the PCI root complex or
> > individual device.
> >
> > Is there an assumption here that a single PCI root bridge is associated
> > with a single ITS block? Or can different devices on a PCI bus use
> > different ITS blocks?
> >
> > Ian.
> >
> >


