Re: [Xen-devel] [PATCH v2 3/4] xen/arm: introduce GNTTABOP_cache_flush
On Fri, 3 Oct 2014, Andrew Cooper wrote:
> On 03/10/14 15:50, Stefano Stabellini wrote:
> > Introduce a new hypercall to perform cache maintenance operations on
> > behalf of the guest. The arguments are a machine address and a size. The
> > implementation checks that the memory range is owned by the guest or
> > that the guest has been granted access to it by another domain.
> >
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> >
> > ---
> >
> > Changes in v2:
> > - do not check for mfn_to_page errors;
> > - take a reference to the page;
> > - replace printk with gdprintk;
> > - split long line;
> > - remove out label;
> > - move rcu_lock_current_domain down before the loop.
> > - move the hypercall to GNTTABOP;
> > - take a spin_lock before calling grant_map_exists.
> > ---
> >  xen/common/grant_table.c         | 73 ++++++++++++++++++++++++++++++++++++++
> >  xen/include/public/grant_table.h | 19 ++++++++++
> >  2 files changed, 92 insertions(+)
> >
> > diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> > index 7a6399b..d5bb4f7 100644
> > --- a/xen/common/grant_table.c
> > +++ b/xen/common/grant_table.c
> > @@ -2641,6 +2641,79 @@ do_grant_table_op(
> >          }
> >          break;
> >      }
> > +    case GNTTABOP_cache_flush:
> > +    {
> > +        struct gnttab_cache_flush cflush;
> > +        struct domain *d, *owner;
> > +        struct page_info *page;
> > +        uint64_t mfn;
> > +        void *v;
> > +
> > +        /* currently unimplemented */
> > +        if ( count != 1 )
> > +            return -ENOSYS;
>
> ENOTSUPP (and elsewhere).
ENOTSUPP doesn't seem to be defined in Xen. Should I take the
opportunity to introduce it as 252?
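(That would amount to a one-line addition in whichever errno header we agree
on; just as a sketch, with the value and placement still open:

    /* hypothetical addition, not in the tree today */
    #define ENOTSUPP 252 /* Operation is not supported */
)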
> > +
> > +        if ( copy_from_guest(&cflush, uop, 1) )
> > +            return -EFAULT;
> > +
> > +        if ( cflush.offset + cflush.size > PAGE_SIZE )
> > +            return -EINVAL;
>
> This logic can still be fooled by cleverly crafted overflows, resulting
> in garbage being used later, which I am guessing will be a bad thing.
>
> You need to bound check them individually before bounds-checking the
> addition of the two.
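One way to close that hole is to bound each field before combining them;
something along these lines (only a sketch):

    if ( cflush.offset >= PAGE_SIZE ||
         cflush.size > PAGE_SIZE - cflush.offset )
        return -EINVAL;

With offset bounded first, the subtraction cannot underflow and no addition
is needed, so there is nothing left to overflow.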
>
> > +
> > +        if ( cflush.size == 0 || cflush.op == 0 )
> > +            return 0;
> > +
> > +        if ( cflush.op & ~(GNTTAB_CACHE_INVAL|GNTTAB_CACHE_CLEAN) )
> > +            return -EINVAL;
> > +
> > +        /* currently unimplemented */
> > +        if ( cflush.a.ref != 0 )
> > +            return -ENOSYS;
> > +
> > +        d = rcu_lock_current_domain();
> > +        if ( d == NULL )
> > +            return -ESRCH;
>
> rcu_lock_current_domain() cannot possibly fail.
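In that case the NULL check can simply go and the call reduces to:

    d = rcu_lock_current_domain();    /* never NULL for the calling domain */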
>
> > +
> > +        mfn = cflush.a.dev_bus_addr >> PAGE_SHIFT;
> > +
> > +        if ( !mfn_valid(mfn) )
> > +        {
> > +            gdprintk(XENLOG_G_ERR, "mfn=%llx is not valid\n", mfn);
>
> PRIx64, as mfn is uint64_t (and elsewhere).
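i.e. roughly:

    gdprintk(XENLOG_G_ERR, "mfn=%"PRIx64" is not valid\n", mfn);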
>
> > +            rcu_unlock_domain(d);
> > +            return -EINVAL;
> > +        }
> > +
> > +        page = mfn_to_page(mfn);
> > +        owner = page_get_owner_and_reference(page);
> > +        if ( !owner )
> > +        {
> > +            rcu_unlock_domain(d);
> > +            return -EFAULT;
>
> EPERM seems better, as it is an ownership check on an existing mfn.
>
> > +        }
> > +
> > +        spin_lock(&owner->grant_table->lock);
>
> Is this safe without taking the paging lock and doing a dying check on
> 'owner'?
In gnttab_release_mappings we do take the grant_table lock before
releasing any entries, so it seems to me that it should be sufficient.
> > +
> > +        if ( !grant_map_exists(d, owner->grant_table, mfn) )
> > +        {
> > +            spin_unlock(&owner->grant_table->lock);
> > +            put_page(page);
> > +            rcu_unlock_domain(d);
>
> These two can be reordered to reduce the length of time the rcu lock is
> held.
>
> > +            gdprintk(XENLOG_G_ERR, "mfn %llx hasn't been granted by %d to %d\n",
> > +                     mfn, owner->domain_id, d->domain_id);
>
> "hasn't been granted by d%d to d%d", to indicate that the numbers are
> domids.
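Taking this together with the reordering point and the PRIx64 comment above,
the error path could end up looking something like this (just a sketch):

    spin_unlock(&owner->grant_table->lock);
    gdprintk(XENLOG_G_ERR, "mfn %"PRIx64" hasn't been granted by d%d to d%d\n",
             mfn, owner->domain_id, d->domain_id);
    rcu_unlock_domain(d);
    put_page(page);
    return -EINVAL;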
>
> > +            return -EINVAL;
> > +        }
> > +
> > +        v = map_domain_page(mfn);
> > +        v += cflush.offset;
> > +
> > +        if ( cflush.op & GNTTAB_CACHE_INVAL )
> > +            invalidate_xen_dcache_va_range(v, cflush.size);
> > +        if ( cflush.op & GNTTAB_CACHE_CLEAN )
> > +            clean_xen_dcache_va_range(v, cflush.size);
> > +
> > +        unmap_domain_page(v);
> > +        spin_unlock(&owner->grant_table->lock);
>
> These two can be reordered to reduce the length of time the grant lock
> is held (especially as it is a remote domain's grant lock).
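i.e. the tail of the success path would become roughly (sketch only; the
flush itself stays under the lock, only the unmap moves out):

    spin_unlock(&owner->grant_table->lock);
    unmap_domain_page(v);
    put_page(page);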
>
> ~Andrew
>
> > +        put_page(page);
> > +    }
> >      default:
> >          rc = -ENOSYS;
> >          break;
> > diff --git a/xen/include/public/grant_table.h b/xen/include/public/grant_table.h
> > index b8a3d6c..1833bba 100644
> > --- a/xen/include/public/grant_table.h
> > +++ b/xen/include/public/grant_table.h
> > @@ -309,6 +309,7 @@ typedef uint16_t grant_status_t;
> > #define GNTTABOP_get_status_frames 9
> > #define GNTTABOP_get_version 10
> > #define GNTTABOP_swap_grant_ref 11
> > +#define GNTTABOP_cache_flush 12
> > #endif /* __XEN_INTERFACE_VERSION__ */
> > /* ` } */
> >
> > @@ -574,6 +575,24 @@ struct gnttab_swap_grant_ref {
> > typedef struct gnttab_swap_grant_ref gnttab_swap_grant_ref_t;
> > DEFINE_XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t);
> >
> > +/*
> > + * Issue one or more cache maintenance operations on a portion of a
> > + * page granted to the calling domain by a foreign domain.
> > + */
> > +struct gnttab_cache_flush {
> > +    union {
> > +        uint64_t dev_bus_addr;
> > +        grant_ref_t ref;
> > +    } a;
> > +    uint32_t offset; /* offset from start of grant */
> > +    uint32_t size; /* size within the grant */
> > +#define GNTTAB_CACHE_CLEAN (1<<0)
> > +#define GNTTAB_CACHE_INVAL (1<<1)
> > +    uint32_t op;
> > +};
> > +typedef struct gnttab_cache_flush gnttab_cache_flush_t;
> > +DEFINE_XEN_GUEST_HANDLE(gnttab_cache_flush_t);
> > +
> > #endif /* __XEN_INTERFACE_VERSION__ */
> >
> > /*
>
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel