 
	
Re: [Xen-devel] [PATCH v3 4/4] iommu / pci: re-implement XEN_DOMCTL_get_device_group...
 > -----Original Message-----
> From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
> Sent: 16 July 2019 12:28
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Jan Beulich <jbeulich@xxxxxxxx>
> Subject: Re: [Xen-devel] [PATCH v3 4/4] iommu / pci: re-implement XEN_DOMCTL_get_device_group...
> 
> On Tue, Jul 16, 2019 at 11:16:57AM +0100, Paul Durrant wrote:
> > ... using the new iommu_group infrastructure.
> >
> > Because 'sibling' devices are now members of the same iommu_group,
> > implement the domctl by looking up the iommu_group of the pdev with the
> > matching SBDF and then finding all the assigned pdevs that are in the
> > group.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> > ---
> > Cc: Jan Beulich <jbeulich@xxxxxxxx>
> >
> > v3:
> >  - Make 'max_sdevs' parameter in iommu_get_device_group() unsigned.
> >  - Add missing check of max_sdevs to avoid buffer overflow.
> >
> > v2:
> >  - Re-implement in the absence of a per-group devs list.
> >  - Make use of pci_sbdf_t.
> > ---
> >  xen/drivers/passthrough/groups.c | 46 ++++++++++++++++++++++++++++++++++++
> >  xen/drivers/passthrough/pci.c    | 51 ++--------------------------------------
> >  xen/include/xen/iommu.h          |  3 +++
> >  3 files changed, 51 insertions(+), 49 deletions(-)
> >
> > diff --git a/xen/drivers/passthrough/groups.c b/xen/drivers/passthrough/groups.c
> > index c6d00980b6..4e6e8022c1 100644
> > --- a/xen/drivers/passthrough/groups.c
> > +++ b/xen/drivers/passthrough/groups.c
> > @@ -12,8 +12,12 @@
> >   * GNU General Public License for more details.
> >   */
> >
> > +#include <xen/guest_access.h>
> >  #include <xen/iommu.h>
> > +#include <xen/pci.h>
> >  #include <xen/radix-tree.h>
> > +#include <xen/sched.h>
> > +#include <xsm/xsm.h>
> >
> >  struct iommu_group {
> >      unsigned int id;
> > @@ -81,6 +85,48 @@ int iommu_group_assign(struct pci_dev *pdev, void *arg)
> >      return 0;
> >  }
> >
> > +int iommu_get_device_group(struct domain *d, pci_sbdf_t sbdf,
> > +                           XEN_GUEST_HANDLE_64(uint32) buf,
> > +                           unsigned int max_sdevs)
> > +{
> > +    struct iommu_group *grp = NULL;
> > +    struct pci_dev *pdev;
> > +    unsigned int i = 0;
> > +
> > +    pcidevs_lock();
> > +
> > +    for_each_pdev ( d, pdev )
> > +    {
> > +        if ( pdev->sbdf.sbdf == sbdf.sbdf )
> > +        {
> > +            grp = pdev->grp;
> > +            break;
> > +        }
> > +    }
> > +
> > +    if ( !grp )
> > +        goto out;
> > +
> > +    for_each_pdev ( d, pdev )
> > +    {
> > +        if ( xsm_get_device_group(XSM_HOOK, pdev->sbdf.sbdf) ||
> > +             pdev->grp != grp )
> > +            continue;
> > +
> > +        if ( i < max_sdevs &&
> 
> AFAICT you are adding the check here in order to keep current
> behaviour?
Yes.
> But isn't it wrong to not report to the caller that the buffer was
> smaller than required, and that the returned result is partial?
Given that there is zero documentation, I think your guess is as good as mine as 
to what the implementer's intention was.
> 
> I don't see any way a caller can differentiate between a result that
> uses the full buffer and one that's actually partial due to smaller
> than required buffer provided. I think this function should return
> -ENOSPC for such case.
I'd prefer to stick to the principle of no change in behaviour. TBH I have not 
found any caller of xc_get_device_group() apart from a Python binding, and who 
knows what piece of antiquated code might sit on the other side of that. FWIW, 
that code sets max_sdevs to 1024, so it's unlikely to run out of space and an 
-ENOSPC might be OK. Still, I'd like to hear the maintainers' opinions on this.
  Paul
> 
> Thanks, Roger.
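
[Editor's note: below is a minimal sketch of the -ENOSPC variant discussed above, for illustration only. Since only the start of the copy loop is quoted in this thread, the copy_to_guest_offset() step, the separate rc variable and the return convention are assumptions rather than the code actually submitted in the patch.]

    int iommu_get_device_group(struct domain *d, pci_sbdf_t sbdf,
                               XEN_GUEST_HANDLE_64(uint32) buf,
                               unsigned int max_sdevs)
    {
        struct iommu_group *grp = NULL;
        struct pci_dev *pdev;
        unsigned int i = 0;
        int rc = 0;

        pcidevs_lock();

        /* Find the group of the pdev with the matching SBDF, as in the patch. */
        for_each_pdev ( d, pdev )
        {
            if ( pdev->sbdf.sbdf == sbdf.sbdf )
            {
                grp = pdev->grp;
                break;
            }
        }

        if ( !grp )
            goto out;

        for_each_pdev ( d, pdev )
        {
            uint32_t id = pdev->sbdf.sbdf;

            if ( xsm_get_device_group(XSM_HOOK, id) || pdev->grp != grp )
                continue;

            if ( i >= max_sdevs )
            {
                /* Buffer too small: report a partial result to the caller. */
                rc = -ENOSPC;
                break;
            }

            if ( copy_to_guest_offset(buf, i, &id, 1) )
            {
                rc = -EFAULT;
                break;
            }

            i++;
        }

     out:
        pcidevs_unlock();

        /* Number of group members written on success, negative error otherwise. */
        return rc ?: (int)i;
    }

With this convention, a caller such as xc_get_device_group() could distinguish a full buffer from a truncated result by checking for a negative return value instead of relying on the count alone.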