
Re: [Xen-devel] [PATCH v3 07/32] xen/x86: fix arch_set_info_guest for HVM guests



On 23/07/15 17:19, Jan Beulich wrote:
>>>> On 23.07.15 at 18:15, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 23/07/15 17:00, Ian Campbell wrote:
>>> On Thu, 2015-07-23 at 17:48 +0200, Roger Pau Monné wrote:
>>>> On 23/07/15 at 17:32, Jan Beulich wrote:
>>>>>>>> On 23.07.15 at 17:10, <roger.pau@xxxxxxxxxx> wrote:
>>>>>> IMHO introducing a new structure that gets rid of all the PV-only
>>>>>> fields seems like a good option:
>>>>>>
>>>>>> struct vcpu_hvm_context {
>>>>>> #define _VGCF_online                   5
>>>>>> #define VGCF_online                    (1<<_VGCF_online)
>>>>>>     uint32_t flags;                         /* VGCF_* flags                */
>>>>>>     struct cpu_hvm_user_regs user_regs;     /* User-level CPU registers    */
>>>>>>     /* NB. User pagetable on x86/64 is placed in ctrlreg[1]. */
>>>>>>     uint32_t ctrlreg[8];                    /* CR0-CR7 (control registers) */
>>>>>>     uint32_t debugreg[8];                   /* DB0-DB7 (debug registers)   */
>>>>>> };
>>>>>>
>>>>>> I'm also seriously considering getting rid of ctrlreg and debugreg.
>>>>>> Since HVM VCPUs will always be started in 32bit flat mode, it doesn't
>>>>>> make sense IMHO to have both the 32-bit and the 64-bit versions of the
>>>>>> registers, so cpu_hvm_user_regs is always going to be:
>>>>>>
>>>>>> struct cpu_hvm_user_regs {
>>>>>>     uint32_t ebx;
>>>>>>     uint32_t ecx;
>>>>>>     uint32_t edx;
>>>>>>     uint32_t esi;
>>>>>>     uint32_t edi;
>>>>>>     uint32_t ebp;
>>>>>>     uint32_t eax;
>>>>>>     uint32_t eip;
>>>>>>     uint32_t esp;
>>>>>>     uint32_t eflags;
>>>>>>     uint16_t cs;
>>>>>>     uint16_t ss;
>>>>>>     uint16_t es;
>>>>>>     uint16_t ds;
>>>>>>     uint16_t fs;
>>>>>>     uint16_t gs;
>>>>>> };
>>>>>>
>>>>>> We could however do something similar to what's done in ARM and have
>>>>>> a union of both the 32-bit and 64-bit registers, in case we want to
>>>>>> start the vCPU in 64bit mode sometime in the future.
>>>>> What you gave above is suitable only for VCPUOP_initialise afaict.
>>>>> Did you intend this to be the case?
>>>> Certainly not, this should also be used by XEN_DOMCTL_setvcpucontext.
>>>> I'm afraid I'm missing something obvious, but I don't see why this
>>>> couldn't be used by XEN_DOMCTL_setvcpucontext TBH; can you please
>>>> clarify?
>>> set/getvcpucontext need to pickle all state for save/restore/migration,
>>> not just the start of day state.
>> HVM migration doesn't use set/getvcpucontext.
>>
>> I would expect an HVMLite-based-PVH to follow suit and just use
>> set/gethvmcontext.
> Oh, right, but that doesn't mean the respective domctl-s should
> produce unconsumable data

Indeed.

> , or have to guess how to interpret
> what gets passed in. I.e. at the very minimum the domctl-s then
> should be disallowed for PVH if they can't be made to work right.

They should be able to work with any vcpu.

In particular, it would be nice to start hvmloader with a hypercall to
set eip properly, rather than mapping gfn 0 and writing a jump
instruction into memory.
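
For illustration, a rough sketch of what that could look like from the
domain builder's side, assuming the vcpu_hvm_context / cpu_hvm_user_regs
layout proposed above is in scope; hvm_set_vcpu_context() is a made-up
placeholder for whichever hypercall or domctl ends up carrying the new
context, not an existing interface:

#include <stdint.h>
#include <string.h>

/* Hypothetical wrapper around the (yet to be defined) set-context
 * operation; the struct definitions quoted earlier are assumed to be
 * visible here. */
int hvm_set_vcpu_context(uint16_t domid, unsigned int vcpu,
                         const struct vcpu_hvm_context *ctx);

static int start_hvmloader(uint16_t domid, uint32_t entry, uint32_t stack)
{
    struct vcpu_hvm_context ctx;

    memset(&ctx, 0, sizeof(ctx));
    ctx.flags = VGCF_online;           /* mark the vcpu runnable           */
    ctx.user_regs.eip = entry;         /* hvmloader entry point            */
    ctx.user_regs.esp = stack;         /* scratch stack, if one is needed  */
    ctx.user_regs.eflags = 0x2;        /* bit 1 of EFLAGS is always set    */
    /* Everything else is left at the 32bit flat mode defaults the
     * hypervisor would impose for this start mode. */

    return hvm_set_vcpu_context(domid, 0 /* BSP */, &ctx);
}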

Having said that, in a virtual environment it would be nice if vcpu 1
could be started by vcpu 0 straight in long mode at a given rip, rsp and cr3.
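
Purely as a sketch of that idea, combined with the ARM-style union Roger
mentioned earlier: the context could carry a mode selector plus a
per-mode register view, so an AP could be handed a rip/rsp/cr3 and come
up directly in long mode.  All names below are invented for the example:

#include <stdint.h>

struct vcpu_hvm_x86_32 {
    uint32_t eax, ebx, ecx, edx, esi, edi, ebp;
    uint32_t eip, esp, eflags;
};

struct vcpu_hvm_x86_64 {
    uint64_t rax, rbx, rcx, rdx, rsi, rdi, rbp;
    uint64_t rip, rsp, rflags;
    uint64_t cr3;                      /* pagetables for a long mode start */
};

struct vcpu_hvm_context_v2 {
#define VCPU_HVM_MODE_32B  0           /* 32bit flat mode (the default)    */
#define VCPU_HVM_MODE_64B  1           /* long mode; cr3 must be valid     */
    uint32_t mode;
    union {
        struct vcpu_hvm_x86_32 x86_32;
        struct vcpu_hvm_x86_64 x86_64;
    } cpu_regs;
};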

The reason for starting vcpu 0 in 32bit flat mode is to move mode
knowledge out of the domain builder and into the domain, but once the
guest-specific loader is running, there is nothing to say it shouldn't
be able to skip 32bit AP trampolines entirely.

Perhaps what we want is a new VCPU_initialise op, rather than attempting
to retrofit an HVM view of the world onto a PV-specific hypercall.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

