Re: [Xen-devel] [V3 PATCH 7/9] x86/hvm: pkeys, add pkeys support for guest_walk_tables
>>> On 16.12.15 at 09:16, <huaitong.han@xxxxxxxxx> wrote:
> On Tue, 2015-12-15 at 02:02 -0700, Jan Beulich wrote:
>> Well, I wouldn't want you to introduce a brand new function, but
>> instead just factor out the necessary piece from xsave() (making
>> the new one take a struct xsave_struct * instead of a struct vcpu *,
>> and calling it from what is now xsave()).
> So the function looks like this:
> unsigned int get_xsave_pkru(struct vcpu *v)
> {
>     void *offset;
>     struct xsave_struct *xsave_area;
>     uint64_t mask = XSTATE_PKRU;
>     unsigned int index = fls64(mask) - 1;
>     unsigned int pkru = 0;
>
>     if ( !cpu_has_xsave )
>         return 0;
>
>     BUG_ON(xsave_cntxt_size < XSTATE_AREA_MIN_SIZE);
>     xsave_area = _xzalloc(xsave_cntxt_size, 64);
>     if ( xsave_area == NULL )
>         return 0;
>
>     xsave(xsave_area, mask);
>     offset = (void *)xsave_area + (xsave_area_compressed(xsave_area) ?
>                                    XSTATE_AREA_MIN_SIZE : xstate_offsets[index]);
>     memcpy(&pkru, offset, sizeof(pkru));
>
>     xfree(xsave_area);
>
>     return pkru;
> }
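As a rough illustration of the factoring Jan suggested earlier in the thread (a raw save step that takes a struct xsave_struct * rather than a struct vcpu *, called from what is now xsave()), the helper might look something like the sketch below. The name asm_xsave() and the details are assumptions for illustration only, not code from the series; the struct xsave_struct type comes from Xen's asm/xstate.h.

/* Illustrative only: a low-level save helper factored out of xsave(),
 * operating on a caller-supplied save area instead of a struct vcpu.
 * The helper name is hypothetical. */
static void asm_xsave(struct xsave_struct *ptr, uint64_t mask)
{
    uint32_t lmask = mask;          /* low 32 bits of the component mask  */
    uint32_t hmask = mask >> 32;    /* high 32 bits of the component mask */

    /* xsave64 (%rdi); the .byte encoding avoids needing assembler support. */
    asm volatile ( ".byte 0x48,0x0f,0xae,0x27"
                   : "=m" (*ptr)
                   : "a" (lmask), "d" (hmask), "D" (ptr)
                   : "memory" );
}

The quoted get_xsave_pkru() would then invoke this helper on its temporary area, while the existing xsave(struct vcpu *, uint64_t) would presumably become a thin wrapper passing v->arch.xsave_area.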
Depending on how frequently this might get called, the allocation
overhead may not be tolerable, i.e. you may want to set up e.g.
a per-CPU buffer up front. Or you could check whether using RDPKRU
(with CR4.PKE temporarily set) is cheaper than what you do right
now.
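A sketch of the RDPKRU alternative, assuming the usual Xen read_cr4()/write_cr4() helpers and a CR4.PKE bit definition (bit 22 per the SDM); the function names here are made up for illustration and may not match what the series ends up using:

#define X86_CR4_PKE (1UL << 22)     /* assumed constant; bit 22 per the SDM */

/* RDPKRU: opcode 0f 01 ee, ECX must be 0, result in EAX, EDX is zeroed. */
static inline uint32_t rdpkru(void)
{
    uint32_t pkru;

    asm volatile ( ".byte 0x0f,0x01,0xee"
                   : "=a" (pkru) : "c" (0) : "dx" );
    return pkru;
}

/* Hypothetical replacement for get_xsave_pkru(): enable CR4.PKE around
 * the read if it is not already set, then restore the previous value. */
static uint32_t read_pkru(void)
{
    unsigned long cr4 = read_cr4();
    uint32_t pkru;

    if ( !(cr4 & X86_CR4_PKE) )
        write_cr4(cr4 | X86_CR4_PKE);

    pkru = rdpkru();

    if ( !(cr4 & X86_CR4_PKE) )
        write_cr4(cr4);

    return pkru;
}

This avoids both the allocation and the full XSAVE of the temporary area, at the cost of two CR4 writes when PKE is not already enabled.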
Also I don't think the buffer needs to be xsave_cntxt_size in size;
xstate_offsets[index] + sizeof(pkru) (and its equivalent in the
compressed case) should suffice.
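In other words, the temporary area only needs to extend far enough to cover the PKRU component, along the lines of the sketch below. The using_compacted_format flag is a placeholder: whether the area will be written in compacted form is assumed to be known up front (e.g. from an XSAVES/XSAVEC capability check); the other names follow the quoted code.

/* Illustrative sizing only. */
unsigned int size = using_compacted_format
                    /* compacted: with only XSTATE_PKRU requested, PKRU lands
                     * right after the legacy area and header */
                    ? XSTATE_AREA_MIN_SIZE + sizeof(uint32_t)
                    /* standard: PKRU sits at its architectural offset */
                    : xstate_offsets[index] + sizeof(uint32_t);

xsave_area = _xzalloc(size, 64);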
And finally I don't think returning 0 in the allocation failure case
would be valid, as that - iiuc - means no restrictions at all, and
hence would hamper security inside the guest. But that's of
course moot if the allocation gets moved out of here.
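For reference, the reason 0 is the "no restrictions" value: PKRU packs two disable bits per protection key, so an all-zero register disables nothing, and pkey-protected guest pages would be treated as freely accessible during the walk. The macro names below are only for illustration:

/* PKRU layout: two bits per protection key (16 keys).
 *   bit 2*key     = AD (access disable)
 *   bit 2*key + 1 = WD (write disable)
 * A value of 0 therefore restricts nothing at all. */
#define PKRU_AD(pkru, key) (((pkru) >> ((key) * 2)) & 1)
#define PKRU_WD(pkru, key) (((pkru) >> ((key) * 2 + 1)) & 1)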
Jan