
Re: [Xen-devel] [V3 PATCH 7/9] x86/hvm: pkeys, add pkeys support for guest_walk_tables



On Wed, 2015-12-16 at 01:32 -0700, Jan Beulich wrote:
> > > > On 16.12.15 at 09:16, <huaitong.han@xxxxxxxxx> wrote:
> > On Tue, 2015-12-15 at 02:02 -0700, Jan Beulich wrote:
> > > Well, I wouldn't want you to introduce a brand new function, but
> > > instead just factor out the necessary piece from xsave() (making
> > > the new one take a struct xsave_struct * instead of a struct vcpu
> > > *,
> > > and calling it from what is now xsave()).
> > So the function looks like this:
> > unsigned int get_xsave_pkru(struct vcpu *v)
> > {
> >     void *offset;
> >     struct xsave_struct *xsave_area;
> >     uint64_t mask = XSTATE_PKRU;
> >     unsigned int index = fls64(mask) - 1;
> >     unsigned int pkru = 0;
> > 
> >     if ( !cpu_has_xsave )
> >         return 0;
> > 
> >     BUG_ON(xsave_cntxt_size < XSTATE_AREA_MIN_SIZE);
> >     xsave_area = _xzalloc(xsave_cntxt_size, 64);
> >     if ( xsave_area == NULL )
> >         return 0;
> > 
> >     /* Save only the PKRU component, then locate it in the area. */
> >     xsave(xsave_area, mask);
> >     /*
> >      * With only PKRU in the mask, the compacted format places it
> >      * right after the legacy area and header (XSTATE_AREA_MIN_SIZE).
> >      */
> >     offset = (void *)xsave_area +
> >              (xsave_area_compressed(xsave_area) ?
> >               XSTATE_AREA_MIN_SIZE : xstate_offsets[index]);
> >     memcpy(&pkru, offset, sizeof(pkru));
> > 
> >     xfree(xsave_area);
> > 
> >     return pkru;
> > }
> 
> Depending on how frequently this might get called, the allocation
> overhead may not be tolerable. I.e. you may want to set up e.g.
> a per-CPU buffer up front. Or you check whether using RDPKRU
> (with temporarily setting CR4.PKE) is cheaper than what you
> do right now.
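As a rough illustration of the per-CPU buffer idea, a scratch xsave area
could be allocated once per CPU at bring-up and reused; the variable name
and init hook in this sketch are hypothetical, not code from this series:

/* Hypothetical per-CPU scratch area for the xsave-based PKRU read. */
static DEFINE_PER_CPU(struct xsave_struct *, xsave_scratch);

static int cpu_init_xsave_scratch(unsigned int cpu)
{
    struct xsave_struct *area = _xzalloc(xsave_cntxt_size, 64);

    if ( area == NULL )
        return -ENOMEM;
    per_cpu(xsave_scratch, cpu) = area;

    return 0;
}

get_xsave_pkru() would then read from this_cpu(xsave_scratch) instead of
allocating and freeing an area on every call.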
RDPKRU does cost less than the function, so if temporarily setting
CR4.PKE is acceptable, I will use RDPKRU instead of the function.
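
A minimal sketch of that RDPKRU path, assuming CR4.PKE is normally kept
clear in the hypervisor and that the series defines X86_CR4_PKE (the
helper name read_pkru() is illustrative):

/*
 * Read PKRU by temporarily enabling CR4.PKE around RDPKRU, then
 * restoring the previous CR4 value.
 */
static inline unsigned int read_pkru(void)
{
    unsigned int pkru;
    unsigned long cr4 = read_cr4();

    write_cr4(cr4 | X86_CR4_PKE);
    /* RDPKRU (opcode 0f 01 ee): ECX must be 0; result in EAX, EDX cleared. */
    asm volatile ( ".byte 0x0f,0x01,0xee"
                   : "=a" (pkru) : "c" (0) : "dx" );
    write_cr4(cr4);

    return pkru;
}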

Andrew, what is your opinion?


 

