
Re: [Xen-devel] [PATCH 1/2] pvh: clearly specify used parameters in vcpu_guest_context



On 19/11/13 16:34, Jan Beulich wrote:
>>>> On 19.11.13 at 16:04, Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
>> On 19/11/13 14:32, Jan Beulich wrote:
>>>>>> On 19.11.13 at 13:34, Roger Pau Monne <roger.pau@xxxxxxxxxx> wrote:
>>>> @@ -704,9 +705,13 @@ int arch_set_info_guest(
>>>>          /* PVH 32bitfixme */
>>>>          ASSERT(!compat);
>>>>  
>>>> -        if ( c(ctrlreg[1]) || c(ldt_base) || c(ldt_ents) ||
>>>> +        if ( (c(ctrlreg[0]) & HVM_CR0_GUEST_RESERVED_BITS) ||
>>>> +             (c(ctrlreg[4]) & HVM_CR4_GUEST_RESERVED_BITS(v)) ||
>>>> +             c(ctrlreg[1]) || c(ctrlreg[2]) || c(ctrlreg[5]) ||
>>>> +             c(ctrlreg[6]) || c(ctrlreg[7]) || c(ldt_base) || c(ldt_ents) ||
>>>>               c(user_regs.cs) || c(user_regs.ss) || c(user_regs.es) ||
>>>>               c(user_regs.ds) || c(user_regs.fs) || c(user_regs.gs) ||
>>>> +             c(kernel_ss) || c(kernel_sp) || c.nat->gs_base_kernel ||
>>>>               c.nat->gdt_ents || c.nat->fs_base || c.nat->gs_base_user )
>>>>              return -EINVAL;
>>>>      }
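
(For readers without the source at hand: c() in the hunk above is the
compat-aware field accessor defined at the top of arch_set_info_guest,
roughly

    #define c(fld) (compat ? (c.cmp->fld) : (c.nat->fld))

so each of the checks covers both the native and the compat layout of
vcpu_guest_context.)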
>>>
>>> Still no checking of debugreg[]?
>>
>> Why do we need to check debugreg[]? This code is executed on PVH (and PV
>> and HVM), and I guess it does the right thing:
>>
>>     for ( i = 0; i < ARRAY_SIZE(v->arch.debugreg); ++i )
>>         v->arch.debugreg[i] = c(debugreg[i]);
> 
> If it does - fine; I didn't verify whether that would actually result in
> the debug registers getting properly loaded with the requested
> values.

I've done a quick test by setting debugreg[0] from vcpu_guest_context
and then reading dr0 from inside the guest AP and it seems to work fine.
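
For the record, the guest side of that test was just something along
these lines (read_dr0() is a hypothetical helper, plain GCC inline
asm):

    /* Read %dr0 inside the guest AP to verify the value requested
     * via vcpu_guest_context.debugreg[0]. */
    static inline unsigned long read_dr0(void)
    {
        unsigned long val;
        asm volatile ( "mov %%dr0, %0" : "=r" (val) );
        return val;
    }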

>>>> +            v->arch.hvm_vcpu.guest_cr[0] |= c.nat->ctrlreg[0];
>>>
>>> Did you really mean |= here? I would expect to simply fail a
>>> request when certain required bits aren't set.
>>
>> Yes, I wanted to do |= because as described on the public header, flags
>> specified by the user are appended to PVH mandatory flags. This is kind
>> of awkward, so I wouldn't mind making cr0/cr4 mandatory for PVH AP
>> bringup, so we would have to check that:
>>
>> (cr0 & (X86_CR0_PE | X86_CR0_ET | X86_CR0_PG)) ==
>> (X86_CR0_PE | X86_CR0_ET | X86_CR0_PG)
> 
> Except that I'm not sure the ET check is really needed.
> 
>> And:
>>
>> (cr4 & X86_CR4_PAE) == X86_CR4_PAE
>>
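
Put together, the check could look something like this (just a
sketch; PVH_CR0_MANDATORY is a made-up name, and whether ET belongs
in it is exactly the open question above):

    #define PVH_CR0_MANDATORY (X86_CR0_PE | X86_CR0_ET | X86_CR0_PG)

    if ( (c.nat->ctrlreg[0] & PVH_CR0_MANDATORY) != PVH_CR0_MANDATORY ||
         !(c.nat->ctrlreg[4] & X86_CR4_PAE) )
        return -EINVAL;
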
>>> Also, by now honoring CR0 and CR4 settings, you again move
>>> towards the hybrid model (some fields honored, some refused)
>>> that was (I think by you) previously described as unacceptable.
>>
>> From a strict POV we should just set cr3, flags and user_regs, but then
>> Tim mentioned also honouring cr0/cr4,
> 
> I understood his response to mean all fields, or none of them.

Trying to make all those fields functional on PVH (or HVM) seems
quite useless IMHO: it would add more code that I doubt anyone is
going to use, when the bare metal mechanisms can be used to set all
those things instead (and from an OS point of view that is also more
comfortable, because less Xen-specific code is needed).

When you refer to not using any fields, does that mean enabling the
local APIC for PVH and using the bare metal CPU bringup method
(sketched below)?
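
By the bare metal CPU bringup method I mean the usual INIT-SIPI-SIPI
sequence through the local APIC ICR; in xAPIC mode it is roughly the
following (apic_write() here stands in for a 32-bit MMIO write into
the APIC page):

    #define APIC_ICR_LO 0x300  /* ICR low dword, offset into APIC page */
    #define APIC_ICR_HI 0x310  /* ICR high dword (destination field) */

    /* apic_write(): hypothetical 32-bit MMIO write to the APIC page. */
    static void wake_ap(uint32_t apic_id, uint8_t start_vector)
    {
        apic_write(APIC_ICR_HI, apic_id << 24);
        apic_write(APIC_ICR_LO, 0x00004500);  /* INIT, assert */
        apic_write(APIC_ICR_HI, apic_id << 24);
        apic_write(APIC_ICR_LO, 0x00004600 | start_vector);  /* SIPI */
        /* A second SIPI is normally sent after a short delay. */
    }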

And I guess introducing a new hypercall (one that also takes a
different vcpu_guest_context struct) to bring up vcpus inside HVM
domains is completely out of the picture?

> 
>> and I don't really have a strong
>> opinion against that. What I think was definitely wrong was only using
>> gs_base_kernel and not the other gs_* or fs_* fields.
>>
>> Since we need cr3, using only those and not the other cr* fields seems
>> strange.
> 
> Hmm, I think it's going to remain looking strange as long as some
> of the fields are used and some are not (of course ignoring those
> that are PV specific).
> 
> Jan
> 

