
Re: [Xen-devel] Bisected Xen-unstable: "Segment register inaccessible for d1v0" when starting HVM guest on intel




> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Wednesday, July 02, 2014 5:29 PM
> To: Wu, Feng
> Cc: Andrew Cooper; Sander Eikelenboom; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xen-devel] Bisected Xen-unstable: "Segment register inaccessible
> for d1v0" when starting HVM guest on intel
> 
> >>> On 02.07.14 at 11:14, <feng.wu@xxxxxxxxx> wrote:
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> No, you're again looking at the segment register load side, which
> >> isn't what this started with, and which we should put aside. The
> >> implicit supervisor mode accesses we need to deal with here are the
> >> ones _not_ resulting from emulation of anything: the update of the
> >> runstate area (which is what Sander stumbled across) and, similarly,
> >> the update of time data, i.e. update_secondary_system_time(). Now
> >> that I think about it, the two are actually different: the latter is
> >> specifically intended to update possibly user-mode-visible data, so
> >> we need to first determine whether it is correct to apply the SMAP
> >> check here (I think it is, since the virtual address given to the
> >> kernel shouldn't be the one exposed to user mode - at least on Linux
> >> - so the question is whether we can assume that other OSes making
> >> use of this PV extension also use distinct virtual addresses here).
> >
> > If I understand it correctly, the two examples you mention here
> > involve memory shared between Xen and guests. I have some questions
> > about this:
> > 1. What is the relationship between these operations and implicit
> > supervisor mode accesses? This does not seem to match what the spec
> > defines as implicit supervisor-mode accesses.
> > 2. For the first case you mentioned above, (v)->runstate_guest is a
> > guest pointer which is set by the VCPUOP_register_runstate_memory_area
> > operation, but I only see this pointer being set for domain 0 - how
> > is it set for HVM guests? In Sander's case, this pointer appears to
> > be set for an HVM guest (d1v0).
> 
> I have no idea where you found this to be set for Dom0 only.
> VCPUOP_register_runstate_memory_area is available to all guests.
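
(For reference, a guest registers the area roughly as below - a sketch
modeled on Linux's xen_setup_runstate_info(), using the public
interface from xen/include/public/vcpu.h; guest-side names are
illustrative. Xen then writes to this guest virtual address of its own
accord on context switch, which is exactly the implicit access under
discussion.)

#include <linux/percpu.h>
#include <xen/interface/vcpu.h>

/* One runstate area per vCPU, in guest kernel address space. */
static DEFINE_PER_CPU(struct vcpu_runstate_info, runstate);

void setup_runstate_info(unsigned int cpu)
{
    struct vcpu_register_runstate_memory_area area;

    area.addr.v = &per_cpu(runstate, cpu);

    /* Available to all guest types, not just PV Dom0. */
    if (HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area,
                           cpu, &area))
        BUG();
}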
> 
> > Here is a quote from the Intel SDM:
> > "If CR4.SMAP=1, supervisor-mode data accesses are not allowed to
> > linear addresses that are accessible in user mode."
> > So for the second case you listed above, if Xen and user space use
> > different virtual addresses, and the address Xen uses is
> > supervisor-only, then no SMAP check is needed. However, if they use
> > the same virtual address, a SMAP check may be needed, since that
> > virtual address is user-accessible.
> 
> This being a PV extension to the base architecture, the hardware
> specification is meaningless. What we need to do here is _extend_ what
> the hardware has specified for those extra accesses. We have three
> options basically:
> 1) never do any checking on such accesses
> 2) honor CPL and EFLAGS.AC
> 3) always do the checking
> The first one is obviously bad from a security POV. Since the third
> one is stricter than the second, and since I assume adding some
> override would be a simpler change than altering the point in time at
> which the VMCS gets loaded during context switch (a suggestion no one
> has commented on so far), I'd prefer that one, but wouldn't mind
> option 2 being implemented instead.
> 

So here is my understanding of this:
For option 2, we don't need to change the code in guest_walk_tables();
what we should do is adjust the point at which the VMCS gets loaded, so
that we can safely read guest SS for the two cases you mentioned earlier
in this thread (as sketched below).
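
Here is a minimal sketch of that dependency as I understand it.
hvm_get_segment_register() is the real hook; the helper and the field
layout are illustrative only and vary across versions:

/* Why option 2 cares about when the VMCS is loaded: to tell
 * supervisor from user accesses the walker needs the CPL, which comes
 * from SS.DPL, and on VMX SS is fetched from the VMCS via VMREAD, so
 * v's VMCS must be current on this pCPU for the read to succeed. */
static unsigned int guest_cpl(struct vcpu *v)
{
    struct segment_register ss;

    /*
     * This is the path that warns "Segment register inaccessible for
     * d1v0" and hands back zeroes when v's VMCS cannot be entered,
     * e.g. in the middle of a context switch.
     */
    hvm_get_segment_register(v, x86_seg_ss, &ss);

    return ss.attr.fields.dpl;
}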

For option 3, we need to pass some override for these cases so that SMAP
is checked unconditionally; there is then no need to read guest SS, and
hence this issue will not exist (a sketch of such an override follows).
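
Something like the following is the sort of override I have in mind for
option 3. This is purely illustrative - the flag name and plumbing are
made up here, not an actual patch:

/* Option 3 sketch: tag Xen-initiated accesses to guest memory with a
 * software-only PFEC bit and have the SMAP test honour it
 * unconditionally, so neither SS/CPL nor EFLAGS.AC is consulted and
 * no VMCS access is needed at all. */
#define PFEC_xen_implicit (1u << 31)  /* software-defined, not architectural */

static bool smap_violation(struct vcpu *v, uint32_t pfec,
                           bool page_user_accessible)
{
    if ( !hvm_smap_enabled(v) || !page_user_accessible )
        return false;

    if ( pfec & PFEC_xen_implicit )
        return true;                  /* always check these accesses */

    /*
     * Ordinary accesses keep the architectural semantics (option 2):
     * fault on supervisor-mode accesses with EFLAGS.AC clear - and
     * computing the CPL is exactly what needs the VMCS loaded.
     */
    return guest_cpl(v) < 3 &&
           !(guest_cpu_user_regs()->eflags & X86_EFLAGS_AC);
}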

Is this correct? Thanks a lot!

Thanks,
Feng

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

