
Re: [Xen-devel] [PATCH 3/8] x86/vm-event/monitor: relocate code-motion more appropriately



>>> On 04.07.16 at 18:00, <czuzu@xxxxxxxxxxxxxxx> wrote:
> On 7/4/2016 5:58 PM, Jan Beulich wrote:
>>>>> On 04.07.16 at 13:02, <czuzu@xxxxxxxxxxxxxxx> wrote:
>>> On 7/4/2016 1:22 PM, Jan Beulich wrote:
>>>>>>> On 30.06.16 at 20:43, <czuzu@xxxxxxxxxxxxxxx> wrote:
>>>>> @@ -119,6 +156,55 @@ bool_t monitored_msr(const struct domain *d, u32 msr)
>>>>>        return test_bit(msr, bitmap);
>>>>>    }
>>>>>    
>>>>> +static void write_ctrlreg_adjust_traps(struct domain *d)
>>>>> +{
>>>>> +    struct vcpu *v;
>>>>> +    struct arch_vmx_struct *avmx;
>>>>> +    unsigned int cr3_bitmask;
>>>>> +    bool_t cr3_vmevent, cr3_ldexit;
>>>>> +
>>>>> +    /* Adjust CR3 load-exiting. */
>>>>> +
>>>>> +    /* vmx only */
>>>>> +    ASSERT(cpu_has_vmx);
>>>>> +
>>>>> +    /* non-hap domains trap CR3 writes unconditionally */
>>>>> +    if ( !paging_mode_hap(d) )
>>>>> +    {
>>>>> +        for_each_vcpu ( d, v )
>>>>> +            ASSERT(v->arch.hvm_vmx.exec_control & CPU_BASED_CR3_LOAD_EXITING);
>>>>> +        return;
>>>>> +    }
>>>>> +
>>>>> +    cr3_bitmask = monitor_ctrlreg_bitmask(VM_EVENT_X86_CR3);
>>>>> +    cr3_vmevent = !!(d->arch.monitor.write_ctrlreg_enabled & cr3_bitmask);
>>>>> +
>>>>> +    for_each_vcpu ( d, v )
>>>>> +    {
>>>>> +        avmx = &v->arch.hvm_vmx;
>>>>> +        cr3_ldexit = !!(avmx->exec_control & CPU_BASED_CR3_LOAD_EXITING);
>>>>> +
>>>>> +        if ( cr3_vmevent == cr3_ldexit )
>>>>> +            continue;
>>>>> +
>>>>> +        /*
>>>>> +         * If CR0.PE=0, CR3 load exiting must remain enabled.
>>>>> +         * See vmx_update_guest_cr code motion for cr = 0.
>>>>> +         */
>>>>> +        if ( cr3_ldexit && !hvm_paging_enabled(v) && !vmx_unrestricted_guest(v) )
>>>>> +            continue;
>>>>> +
>>>>> +        if ( cr3_vmevent )
>>>>> +            avmx->exec_control |= CPU_BASED_CR3_LOAD_EXITING;
>>>>> +        else
>>>>> +            avmx->exec_control &= ~CPU_BASED_CR3_LOAD_EXITING;
>>>>> +
>>>>> +        vmx_vmcs_enter(v);
>>>>> +        vmx_update_cpu_exec_control(v);
>>>>> +        vmx_vmcs_exit(v);
>>>>> +    }
>>>>> +}
>>>> While Razvan gave his ack already, I wonder whether it's really a
>>>> good idea to put deeply VMX-specific code outside of a VMX-specific
>>>> file.
>>> Well, a summary of what this function does would sound like: "adjusts
>>> CR3 load-exiting for cr-write monitor vm-events". IMHO that's (monitor)
>>> vm-event specific enough to be placed within the vm-event subsystem.
>>> Could you suggest concretely what this separation would look like?
>>> (Where should this function, or which parts of it, be put, and what
>>> name should it have once moved?)
>> I won't go into that level of detail. Fact is that VMX-specific code
>> should be kept out of here. Whether you move the entire function
>> behind a hvm_funcs hook or just part of it is of no interest to me.
>> In no case should this function, if and when SVM eventually gets
>> supported for vm-event/monitor too, end up doing both VMX- and
>> SVM-specific things.
> 
> Why move it behind a hvm_funcs hook if it's only valid for VMX? SVM 
> support is not currently implemented, hence the ASSERT(cpu_has_vmx) at 
> the beginning of the function.
> And of course, if at some point SVM support is implemented, then the 
> right thing to do is what you say, i.e. make this function part of 
> hvm_function_table, but until then I don't see why we should do that.
> Note that arch_monitor_get_capabilities also returns no capability at 
> the moment if !cpu_has_vmx.
> 
> What if I move the VMX-specific parts to vmx.c in a function called 
> something like vmx_vm_event_update_cr3_traps() and call it from 
> write_ctrlreg_adjust_traps() instead?

That would be fine with me.

Jan
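
For reference, below is a minimal sketch of the split proposed above: the
VMX-specific body moves into xen/arch/x86/hvm/vmx/vmx.c as
vmx_vm_event_update_cr3_traps(), and write_ctrlreg_adjust_traps() on the
monitor side reduces to a thin caller. Only the two function names come from
the thread; the exact signature, file placement, and how much logic stays on
the generic side are assumptions, not the committed implementation.

/*
 * Sketch: xen/arch/x86/hvm/vmx/vmx.c (placement/signature are assumptions).
 * All code touching struct arch_vmx_struct and VMX exec controls lives here.
 */
void vmx_vm_event_update_cr3_traps(struct domain *d)
{
    struct vcpu *v;
    struct arch_vmx_struct *avmx;
    unsigned int cr3_bitmask;
    bool_t cr3_vmevent, cr3_ldexit;

    /* non-hap domains trap CR3 writes unconditionally */
    if ( !paging_mode_hap(d) )
    {
        for_each_vcpu ( d, v )
            ASSERT(v->arch.hvm_vmx.exec_control & CPU_BASED_CR3_LOAD_EXITING);
        return;
    }

    cr3_bitmask = monitor_ctrlreg_bitmask(VM_EVENT_X86_CR3);
    cr3_vmevent = !!(d->arch.monitor.write_ctrlreg_enabled & cr3_bitmask);

    for_each_vcpu ( d, v )
    {
        avmx = &v->arch.hvm_vmx;
        cr3_ldexit = !!(avmx->exec_control & CPU_BASED_CR3_LOAD_EXITING);

        /* Nothing to do if the desired and current states already match. */
        if ( cr3_vmevent == cr3_ldexit )
            continue;

        /*
         * If CR0.PE=0, CR3 load exiting must remain enabled.
         * See vmx_update_guest_cr code motion for cr = 0.
         */
        if ( cr3_ldexit && !hvm_paging_enabled(v) &&
             !vmx_unrestricted_guest(v) )
            continue;

        if ( cr3_vmevent )
            avmx->exec_control |= CPU_BASED_CR3_LOAD_EXITING;
        else
            avmx->exec_control &= ~CPU_BASED_CR3_LOAD_EXITING;

        vmx_vmcs_enter(v);
        vmx_update_cpu_exec_control(v);
        vmx_vmcs_exit(v);
    }
}

/* Sketch: the generic monitor side then becomes a thin wrapper. */
static void write_ctrlreg_adjust_traps(struct domain *d)
{
    /* Adjust CR3 load-exiting; vmx only until SVM support is implemented. */
    ASSERT(cpu_has_vmx);
    vmx_vm_event_update_cr3_traps(d);
}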



 

