Re: [Xen-devel] [PATCH v5 5/8] sysctl: Add sysctl interface for querying PCI topology
>>> On 19.03.15 at 22:54, <boris.ostrovsky@xxxxxxxxxx> wrote:
> --- a/xen/common/sysctl.c
> +++ b/xen/common/sysctl.c
> @@ -399,6 +399,67 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
> break;
> #endif
>
> +#ifdef HAS_PCI
> + case XEN_SYSCTL_pcitopoinfo:
> + {
> + xen_sysctl_pcitopoinfo_t *ti = &op->u.pcitopoinfo;
> +
> + if ( guest_handle_is_null(ti->devs) ||
> + guest_handle_is_null(ti->nodes) ||
> + (ti->first_dev > ti->num_devs) )
> + {
> + ret = -EINVAL;
> + break;
> + }
> +
> + while ( ti->first_dev < ti->num_devs )
> + {
> + physdev_pci_device_t dev;
> + uint32_t node;
> + struct pci_dev *pdev;
> +
> + if ( copy_from_guest_offset(&dev, ti->devs, ti->first_dev, 1) )
> + {
> + ret = -EFAULT;
> + break;
> + }
> +
> + spin_lock(&pcidevs_lock);
> + pdev = pci_get_pdev(dev.seg, dev.bus, dev.devfn);
> + if ( !pdev || (pdev->node == NUMA_NO_NODE) )
> + node = XEN_INVALID_NODE_ID;
I really think the two cases folded here should be distinguishable
by the caller.
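Something along these lines, say, where `XEN_INVALID_DEV` is a purely hypothetical second sentinel (it does not exist in the public headers; the name and value are only for illustration), modeled here outside the hypervisor so the two outcomes can be compared:

```c
#include <stdint.h>
#include <stddef.h>

/* Values as in the current tree; XEN_INVALID_DEV is hypothetical. */
#define NUMA_NO_NODE        0xFFu
#define XEN_INVALID_NODE_ID 0xFFu
#define XEN_INVALID_DEV     0xFEu  /* hypothetical "no such device" */

struct pci_dev { uint8_t node; };

/*
 * Model of the per-device lookup: give the caller a distinct answer
 * for "device not found" vs. "device found but not on any node",
 * instead of folding both into XEN_INVALID_NODE_ID.
 */
static uint32_t lookup_node(const struct pci_dev *pdev)
{
    if ( !pdev )
        return XEN_INVALID_DEV;        /* no such SBDF */
    if ( pdev->node == NUMA_NO_NODE )
        return XEN_INVALID_NODE_ID;    /* device has no node */
    return pdev->node;
}
```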
> + else
> + node = pdev->node;
> + spin_unlock(&pcidevs_lock);
> +
> + if ( copy_to_guest_offset(ti->nodes, ti->first_dev, &node, 1) )
> + {
> + ret = -EFAULT;
> + break;
> + }
> +
> + ti->first_dev++;
> +
> + if ( hypercall_preempt_check() )
> + break;
> + }
> +
> + if ( !ret )
> + {
> + if ( __copy_field_to_guest(u_sysctl, op, u.pcitopoinfo.first_dev) )
> + {
> + ret = -EFAULT;
> + break;
> + }
> +
> + if ( ti->first_dev < ti->num_devs )
> + ret = hypercall_create_continuation(__HYPERVISOR_sysctl, "h", u_sysctl);
Considering this is a tools-only interface, enforcing a reasonably low
limit on num_devs would seem better than this not really clean
continuation mechanism. The (tool stack) caller(s) can be made to
iterate.
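Roughly like this on the caller side (a sketch only: `PCITOPO_MAX_DEVS` and `query_chunk()` are made-up names standing in for whatever cap the public header would define and for the libxc wrapper issuing the sysctl):

```c
#include <stdint.h>

/* Hypothetical per-call cap on num_devs, enforced by the hypervisor. */
#define PCITOPO_MAX_DEVS 64u

/*
 * Stub standing in for the libxc wrapper around XEN_SYSCTL_pcitopoinfo:
 * fills nodes[0..num-1] for devices [first, first+num) and returns 0.
 * Here it just records first+i so the chunking can be checked.
 */
static int query_chunk(unsigned int first, unsigned int num,
                       uint32_t *nodes)
{
    for ( unsigned int i = 0; i < num; i++ )
        nodes[i] = first + i;
    return 0;
}

/*
 * Tool-stack side iteration: split the full device list into bounded
 * chunks and issue one sysctl per chunk, instead of relying on a
 * hypercall continuation inside the hypervisor.
 */
static int query_all(unsigned int total, uint32_t *nodes)
{
    for ( unsigned int first = 0; first < total; )
    {
        unsigned int num = total - first;
        int rc;

        if ( num > PCITOPO_MAX_DEVS )
            num = PCITOPO_MAX_DEVS;
        rc = query_chunk(first, num, &nodes[first]);
        if ( rc )
            return rc;
        first += num;
    }
    return 0;
}
```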
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel