
Re: [Xen-devel] [PATCH v18 01/10] x86: add generic resource (e.g. MSR) access hypercall



On Wed, Oct 01, 2014 at 11:33:41AM +0100, Jan Beulich wrote:
> >>> On 01.10.14 at 12:04, <chao.p.peng@xxxxxxxxxxxxxxx> wrote:
> > On Tue, Sep 30, 2014 at 01:12:42PM +0100, Jan Beulich wrote:
> >> >>> On 30.09.14 at 12:49, <chao.p.peng@xxxxxxxxxxxxxxx> wrote:
> >> > +    case XENPF_resource_op:
> >> > +    {
> >> > +        struct xen_resource_access ra;
> >> > +        uint32_t cpu;
> >> > +        XEN_GUEST_HANDLE(xenpf_resource_entry_t) guest_entries;
> >> > +
> >> > +        ra.nr_entries = op->u.resource_op.nr_entries;
> >> > +        if ( ra.nr_entries == 0 || ra.nr_entries > RESOURCE_ACCESS_MAX_ENTRIES )
> >> > +        {
> >> > +            ret = -EINVAL;
> >> 
> >> I don't think ra.nr_entries == 0 is a reason to fail the hypercall.
> > Do you mean 'ret = 0' ?
> 
> Sure.
> 
> >> > +            break;
> >> > +        }
> >> > +
> >> > +        ra.entries = xmalloc_array(xenpf_resource_entry_t, ra.nr_entries);
> >> > +        if ( !ra.entries )
> >> > +        {
> >> > +            ret = -ENOMEM;
> >> > +            break;
> >> > +        }
> >> > +
> >> > +        guest_from_compat_handle(guest_entries, op->u.resource_op.entries);
> >> > +
> >> > +        if ( copy_from_guest(ra.entries, guest_entries, ra.nr_entries) )
> >> > +        {
> >> > +            xfree(ra.entries);
> >> > +            ret = -EFAULT;
> >> > +            break;
> >> > +        }
> >> > +
> >> > +        /* Do sanity check earlier to omit the potential IPI overhead. */
> >> > +        if ( check_resource_access(&ra) < ra.nr_entries )
> >> > +        {
> >> > +            /* Copy the return value for failed entry. */
> >> > +            if ( __copy_to_guest_offset(guest_entries, ret,
> >> > +                                        ra.entries + ret, 1) )
> >> > +                ret = -EFAULT;
> >> > +            else
> >> > +                ret = 0;
> >> 
> >> This should be the index of the failed entry. I guess it would be
> >> easier and more consistent if check_resource_access() too used
> >> ra.ret for passing back the failed index (which btw should be
> >> renamed to e.g. "done" - "ret" is no longer a suitable name).
> > 
> > I agree to use ra.ret for check_resource_access(). But I insist that returning 0
> > here is more reasonable, as we use a positive return value to indicate the
> > number of successful operations, and here we have only passed the sanity
> > check and have not yet performed the access. Returning the index of the
> > failed entry would lead the caller to think the data for the earlier
> > entries is valid.
> 
> No - really correct behavior would be to carry out the operations
> that passed the validation checks rather than exiting early here.

That makes sense. Thank you for the clarification.
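
For the record, a rough sketch of what that reworked flow might look like
(just an illustration, not the posted patch; it assumes the field is renamed
to ra.done and holds the index of the first entry that fails validation, or
ra.nr_entries when all pass, that resource_access() limits itself to that
validated prefix and updates ra.done if an access itself fails, and that cpu
and guest_entries are set up earlier in the hypercall body as in the patch):

    /* Hypothetical flow, for discussion only. */
    check_resource_access(&ra);     /* sets ra.done as described above */

    if ( ra.done )                  /* at least one entry validated OK */
        on_selected_cpus(cpumask_of(cpu), resource_access, &ra, 1);

    /* Copy back what was processed, plus the entry that failed (if any)
     * so the caller can see its error code. */
    if ( __copy_to_guest_offset(guest_entries, 0, ra.entries,
                                min(ra.nr_entries, ra.done + 1)) )
        ret = -EFAULT;
    else
        ret = ra.done;

    xfree(ra.entries);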

> 
> >> > +            on_selected_cpus(cpumask_of(cpu), resource_access, &ra, 1);
> >> > +        else
> >> > +        {
> >> > +            xfree(ra.entries);
> >> > +            ret = -ENODEV;
> >> > +            break;
> >> > +        }
> >> > +
> >> > +        /* Copy all if succeeded or up to the failed entry. */
> >> > +        if ( __copy_to_guest_offset(guest_entries, 0, ra.entries,
> >> > +                                    min(ra.nr_entries, ra.ret + 1)) )
> >> 
> >> I don't see a need for min() here - ra.ret mustn't be out of range.
> >> If you're concerned, add an ASSERT().
> > 
> > For the fully successful case, ra.ret will be ra.nr_entries, so ra.ret + 1
> > is out of range.
> 
> I don't think I understand what you're trying to tell me.

OK, as you said, ra.ret is the index of the failed entry.
Here we want to copy back to the guest the failed entry together with the
entries before it, so 'ra.ret + 1' entries should be copied.

But what if all the entries succeed? In that case ra.ret will
actually be ra.nr_entries, and the number we should copy is ra.nr_entries,
not 'ra.ret + 1'.

The min() is there for exactly this purpose.
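
To spell out the two cases (only an illustration of the arithmetic, using the
names from the patch; nr_to_copy is a hypothetical local for clarity):

    /*
     * Partial failure:  ra.ret < ra.nr_entries
     *     copy ra.ret + 1 entries (the successful ones plus the failed one).
     * Full success:     ra.ret == ra.nr_entries
     *     ra.ret + 1 would be one past the end, so copy ra.nr_entries.
     */
    nr_to_copy = min(ra.nr_entries, ra.ret + 1);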

Chao

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel