
Re: [Xen-devel] [V10 PATCH 12/23] PVH xen: Support privileged op emulation for PVH

On Thu, Aug 8, 2013 at 2:59 AM, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
> On Wed, 7 Aug 2013 14:49:50 +0100
> George Dunlap <dunlapg@xxxxxxxxx> wrote:
>> On Wed, Jul 24, 2013 at 2:59 AM, Mukesh Rathor
>> <mukesh.rathor@xxxxxxxxxx> wrote:
> ....
>> >
>> >  const struct hvm_function_table * __init start_vmx(void)
>> > diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
>> > index a3ca70b..fe8b94c 100644
>> > --- a/xen/arch/x86/traps.c
>> > +++ b/xen/arch/x86/traps.c
>> > @@ -480,6 +480,10 @@ static unsigned int check_guest_io_breakpoint(struct vcpu *v,
>> >      unsigned int width, i, match = 0;
>> >      unsigned long start;
>> >
>> > +    /* PVH fixme: support io breakpoint. */
>> > +    if ( is_pvh_vcpu(v) )
>> > +        return 0;
>> Does this one, and the check to IO below, have anything to do with
>> privileged op emulation?
> Yes, it's called from emulate_privileged_op().
> ...
>> > +static int read_descriptor_sel(unsigned int sel,
>> > +                               enum x86_segment which_sel,
>> > +                               struct vcpu *v,
>> > +                               const struct cpu_user_regs *regs,
>> > +                               unsigned long *base,
>> > +                               unsigned long *limit,
>> > +                               unsigned int *ar,
>> > +                               unsigned int vm86attr)
>> > +{
>> > +    struct segment_register seg;
>> > +    bool_t long_mode;
>> > +
>> > +    if ( !is_pvh_vcpu(v) )
>> >         return read_descriptor(sel, v, regs, base, limit, ar, vm86attr);
>> Again, wouldn't it be better to rename read_descriptor() to
>> pv_read_descriptor(), name this one pvh_read_descriptor(), and give
>> them a similar function signature (e.g., have both take a which_sel
>> and look up the selector themselves), rather than have this
>> one-function-calls-another-function thing?
> If you go back to where we discussed this in previous reviews, it
> is being done this way because of other callers of read_descriptor
> that don't need to be changed to pass enum x86_segment.

OK, first, like I said, I'm sorry I didn't have a chance to look at
this before, and in general it's totally fair for you to say "we
talked about this already".   But in this particular case, I have to
complain.  I just spent 45 minutes going back and finding where it was
discussed in previous reviews, and there turns out to have been NO
DISCUSSION.  You just said, "I did it for this reason" (which is the
same as what you said above), and Jan said, "OK".  That was a complete
waste of my time.  In the future, only send me back to look at
previous discussions if 1) there's actually something there that's
worth reading, and 2) you can't summarize it here.

OK, so read_descriptor() has other callers.  I still think, though,
that if you're doing a wrapper you should do it properly.

Before this patch, callers of read_descriptor look up the selector
themselves (normally by directly reading regs->$SEGMENT_REGISTER).
You can't do this for PVH, because you need to have the VMX code read
the segment register to find which descriptor you want to read.  So you
have a "wrapper" function, read_descriptor_sel, which takes the
segment register, rather than the contents of the segment register.
All well and good so far.

The problem I have is that you still pass in *both* the value of
regs->$SEGMENT_REGISTER *and* an enum naming a segment register, and
use one in one case and the other in another.  That's just a really
ugly interface.

What I'd like to see is for read_descriptor_sel() to *just* take
which_sel (perhaps renamed sreg or something, since it's referring to
a segment register), and in the PV case, read the appropriate segment
register and then call read_descriptor().  Then you don't have this
crazy thing where you set two variables (sel and which_sel) all over
the place.


Xen-devel mailing list