Re: [Xen-devel] [PATCH 2/2] xen/privcmd: add PRIVCMD_MMAPBATCH_V2 ioctl
On Fri, Aug 31, 2012 at 09:13:44AM -0400, Andres Lagar-Cavilla wrote:
>
> On Aug 31, 2012, at 9:08 AM, David Vrabel wrote:
>
> > On 30/08/12 19:32, Andres Lagar-Cavilla wrote:
> >> Second repost of my version, heavily based on David's.
> >
> > Doing another allocation that may be larger than a page (and its
> > associated additional error paths) seems to me to be a hammer to crack
> > the (admittedly a bit wonky) casting nut.
> >
> > That said, it's up to Konrad which version he prefers.
>
> Yeah, absolutely.
This one (with the comments) looks nicer.
>
> >
> > There are also some minor improvements you could make if you respin this
> > patch.
> >
> >> Complementary to this patch, on the xen tree I intend to add
> >> PRIVCMD_MMAPBATCH_*_ERROR to the libxc header files, and remove
> >> XEN_DOMCTL_PFINFO_PAGEDTAB from domctl.h.
> >
> > Yes, a good idea. There's no correspondence between the ioctl's error
> > reporting values and the DOMCTL_PFINFO flags.
> >
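(For context on the error-reporting values mentioned above: these are the
top-nibble encodings that V1 writes back into the mfn array, which the
follow-up libxc header change is meant to expose. Sketched from memory rather
than from this posting, so treat the exact names and values as an assumption:

    /* Candidate privcmd header additions (assumed, not quoted from the patch). */
    #define PRIVCMD_MMAPBATCH_MFN_ERROR     0xf0000000U  /* whole top nibble set: mapping failed */
    #define PRIVCMD_MMAPBATCH_PAGED_ERROR   0x80000000U  /* high bit only: frame is paged out    */

V2 avoids this in-band encoding entirely by returning a plain per-frame error
code in the separate err array.)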
> >> commit 3f40e8d79b7e032527ee207a97499ddbc81ca12b
> >> Author: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
> >> Date: Thu Aug 30 12:23:33 2012 -0400
> >>
> >> xen/privcmd: add PRIVCMD_MMAPBATCH_V2 ioctl
> >>
> >> PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
> >> field for reporting the error code for every frame that could not be
> >> mapped. libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
> >>
> >> Also expand PRIVCMD_MMAPBATCH to return the appropriate error-encoding
> >> top nibble in the mfn array.
> >>
> >> Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
> >> Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
> > [...]
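(As a reader aid for the hunk below: the two ioctl argument structures differ
only by the trailing err pointer. Sketched roughly as they appear in the
privcmd UAPI header; the exact field types here are an assumption, not quoted
from the patch:

    struct privcmd_mmapbatch {              /* V1 */
            int num;                        /* number of pages to populate */
            domid_t dom;                    /* target domain */
            __u64 addr;                     /* virtual address to map at */
            xen_pfn_t __user *arr;          /* array of mfns; on failure an
                                               entry's top nibble is rewritten
                                               with an error encoding */
    };

    struct privcmd_mmapbatch_v2 {           /* V2 */
            unsigned int num;               /* number of pages to populate */
            domid_t dom;                    /* target domain */
            __u64 addr;                     /* virtual address to map at */
            const xen_pfn_t __user *arr;    /* array of mfns, left untouched */
            int __user *err;                /* array of per-frame error codes */
    };

This is why the handler below can read a V1 request into a struct
privcmd_mmapbatch_v2 and simply set m.err = NULL: the V1 layout is a prefix of
the V2 one.)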
> >> -static long privcmd_ioctl_mmap_batch(void __user *udata)
> >> +static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
> >>  {
> >>          int ret;
> >> -        struct privcmd_mmapbatch m;
> >> +        struct privcmd_mmapbatch_v2 m;
> >>          struct mm_struct *mm = current->mm;
> >>          struct vm_area_struct *vma;
> >>          unsigned long nr_pages;
> >>          LIST_HEAD(pagelist);
> >> +        int *err_array;
> >
> > int *err_array = NULL;
> >
> > and you could avoid the additional jump label as kfree(NULL) is safe.
>
> Didn't know, great.
>
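For reference: kfree() is a no-op on a NULL pointer, so initializing err_array
to NULL lets every exit path share a single cleanup without a second label. A
minimal sketch of the pattern, using hypothetical helper names rather than
code from this patch:

    /* Needs <linux/slab.h> for kcalloc()/kfree(). */
    static long example_op(unsigned int num)
    {
            int *err_array = NULL;  /* kfree(NULL) is safe, so no extra label */
            long ret;

            ret = example_setup(num);               /* hypothetical helper */
            if (ret)
                    goto out;                       /* err_array is still NULL here */

            err_array = kcalloc(num, sizeof(int), GFP_KERNEL);
            if (err_array == NULL) {
                    ret = -ENOMEM;
                    goto out;
            }

            ret = example_work(err_array, num);     /* hypothetical helper */
    out:
            kfree(err_array);       /* no-op when the allocation never happened */
            return ret;
    }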
> >
> >>          struct mmap_batch_state state;
> >>
> >>          if (!xen_initial_domain())
> >>                  return -EPERM;
> >>
> >> -        if (copy_from_user(&m, udata, sizeof(m)))
> >> -                return -EFAULT;
> >> +        switch (version) {
> >> +        case 1:
> >> +                if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
> >> +                        return -EFAULT;
> >> +                /* Returns per-frame error in m.arr. */
> >> +                m.err = NULL;
> >> +                if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
> >> +                        return -EFAULT;
> >> +                break;
> >> +        case 2:
> >> +                if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
> >> +                        return -EFAULT;
> >> +                /* Returns per-frame error code in m.err. */
> >> +                if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
> >> +                        return -EFAULT;
> >> +                break;
> >> +        default:
> >> +                return -EINVAL;
> >> +        }
> >>
> >>          nr_pages = m.num;
> >>          if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
> >>                  return -EINVAL;
> >>
> >> -        ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
> >> -                           m.arr);
> >> +        ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t), m.arr);
> >>
> >>          if (ret || list_empty(&pagelist))
> >> -                goto out;
> >> +                goto out_no_err_array;
> >> +
> >> +        err_array = kmalloc_array(m.num, sizeof(int), GFP_KERNEL);
> >
> > kcalloc() (see below).
> >
> >> +        if (err_array == NULL)
> >> +        {
> >
> > Style: if (err_array == NULL) {
> >
> >> +        if (state.global_error) {
> >> +                int efault;
> >> +
> >> +                if (state.global_error == -ENOENT)
> >> +                        ret = -ENOENT;
> >> +
> >> +                /* Write back errors in second pass. */
> >> +                state.user_mfn = (xen_pfn_t *)m.arr;
> >> +                state.user_err = m.err;
> >> +                state.err = err_array;
> >> +                efault = traverse_pages(m.num, sizeof(xen_pfn_t),
> >> +                                        &pagelist, mmap_return_errors, &state);
> >> +                if (efault)
> >> +                        ret = efault;
> >> +        } else if (m.err)
> >> +                ret = __clear_user(m.err, m.num * sizeof(*m.err));
> >
> > Since you already have an array of errors, there's no need to iterate
> > through all the MFNs again for V2. A simple copy_to_user() is
> > sufficient, provided err_array was zeroed with kcalloc().
> I can kcalloc for extra paranoia, but the code will set every error slot and
> is guaranteed not to skip any of them. I +1 the shortcut you outline below.
> Re-spin coming.
> Andres
> >
> >         if (m.err)
> >                 ret = __copy_to_user(m.err, err_array, m.num * sizeof(*m.err));
> >         else {
> >                 /* Write back errors in second pass. */
> >                 state.user_mfn = (xen_pfn_t *)m.arr;
> >                 state.user_err = m.err;
> >                 state.err = err_array;
> >                 ret = traverse_pages(m.num, sizeof(xen_pfn_t),
> >                                      &pagelist, mmap_return_errors, &state);
> >         }
> >
> >         if (ret == 0 && state.global_error == -ENOENT)
> >                 ret = -ENOENT;
> >
> > David
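On the commit message's note that libxc prefers PRIVCMD_MMAPBATCH_V2 over
PRIVCMD_MMAPBATCH: the expected userspace pattern is to try the V2 ioctl and
fall back to V1 on kernels that predate it. A rough sketch of that probing
logic follows; the header path, the EINVAL-based fallback test and the helper
name are assumptions for illustration, not lifted from libxc:

    #include <errno.h>
    #include <sys/ioctl.h>
    #include <xen/sys/privcmd.h>    /* assumed header for the ioctl numbers */

    /* Map 'num' foreign frames of domain 'dom' at 'addr', collecting
     * per-frame errors in err[] when the kernel supports V2. */
    static int map_batch(int privcmd_fd, domid_t dom, unsigned long addr,
                         xen_pfn_t *arr, int *err, unsigned int num)
    {
            privcmd_mmapbatch_v2_t m2 = {
                    .num = num, .dom = dom, .addr = addr, .arr = arr, .err = err,
            };
            privcmd_mmapbatch_t m1 = {
                    .num = num, .dom = dom, .addr = addr, .arr = arr,
            };

            if (ioctl(privcmd_fd, IOCTL_PRIVCMD_MMAPBATCH_V2, &m2) == 0)
                    return 0;
            if (errno != EINVAL)    /* kernel knows V2; the call itself failed */
                    return -errno;

            /* Old kernel: V1 reports failures by rewriting the top nibble of
             * entries in arr[] instead of filling err[]. */
            return ioctl(privcmd_fd, IOCTL_PRIVCMD_MMAPBATCH, &m1) ? -errno : 0;
    }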
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel