Re: [Xen-devel] [PATCH v6 17/19] x86/VPMU: NMI-based VPMU support
>>> On 13.05.14 at 17:53, <boris.ostrovsky@xxxxxxxxxx> wrote:
> With send_guest_vcpu_virq() and hvm_get_segment_register() for PV(H) and
> vlapic accesses for HVM moved to softirq, the only routines/macros that
> vpmu_do_interrupt() calls in NMI mode are:
> * memcpy()
> * querying domain type (is_XX_domain())
> * guest_cpu_user_regs()
> * XLAT_cpu_user_regs()
> * raise_softirq()
> * vcpu_vpmu()
> * vpmu_ops->arch_vpmu_save()
With this ...
> --- a/xen/arch/x86/hvm/svm/vpmu.c
> +++ b/xen/arch/x86/hvm/svm/vpmu.c
> @@ -185,6 +185,7 @@ static inline void context_load(struct vcpu *v)
>      }
>  }
>
> +/* Must be NMI-safe */
> static void amd_vpmu_load(struct vcpu *v)
... is the comment perhaps misplaced?
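For reference, the deferral scheme the opening list describes comes down to
roughly the following (a sketch only - the body is my reconstruction from the
hunks below, just the names are taken from the patch):

    static int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
    {
        /* NMI-safe work only: remember which vcpu got sampled ... */
        per_cpu(sampled_vcpu, cpu) = current;
        /* ... and defer everything else to softirq context. */
        raise_softirq(PMU_SOFTIRQ);
        return 1;
    }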
> +static void vpmu_send_nmi(struct vcpu *v)
> +{
> +    struct vlapic *vlapic;
> +    u32 vlapic_lvtpc;
> +    unsigned char int_vec;
> +
> +    ASSERT( is_hvm_vcpu(v) );
> +
> +    vlapic = vcpu_vlapic(v);
> +    if ( !is_vlapic_lvtpc_enabled(vlapic) )
> +        return;
> +
> +    vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
> +    int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
> +
> +    if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
> +        vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
> +    else
> +        v->nmi_pending = 1;
I realize this is only the result of code movement, but what is this
if/else pair doing?
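My reading of it, for what it's worth (sketch, comments mine):

    /* Honour the guest's LVTPC programming: fixed delivery mode means
     * injecting the programmed vector through the vlapic; any other
     * mode is treated as NMI delivery and becomes a virtual NMI. */
    if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
        vlapic_set_irq(vlapic, int_vec, 0);
    else
        v->nmi_pending = 1;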
> +static void pmu_softnmi(void)
> +{
> +    struct cpu_user_regs *regs;
> +    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
this_cpu().
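I.e. (suggested form):

    struct vcpu *v, *sampled = this_cpu(sampled_vcpu);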
> +
> +    if ( sampled == NULL )
> +        return;
> +    per_cpu(sampled_vcpu, smp_processor_id()) = NULL;
Again.
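I.e.:

    this_cpu(sampled_vcpu) = NULL;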
> +
> +    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
> +         (sampled->domain->domain_id >= DOMID_FIRST_RESERVED) )
> +        v = hardware_domain->vcpu[smp_processor_id() %
> +                                  hardware_domain->max_vcpus];
> +    else
> +    {
> +        if ( is_hvm_domain(sampled->domain) )
> +        {
> +            vpmu_send_nmi(sampled);
> +            return;
> +        }
> +        v = sampled;
> +    }
> +
> +    regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
> +    if ( !is_pv_domain(sampled->domain) )
Even if it means the same, please use has_hvm_container_vcpu(sampled):
you're guarding an operation here that requires an HVM container.
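I.e.:

    if ( has_hvm_container_vcpu(sampled) )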
> +    {
> +        struct segment_register cs;
> +
> +        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
I hope you understand that what you read here is only implicitly (due
to the softirq getting serviced before the guest gets re-entered) the
current value. Or wait - is it at all? What if a scheduler softirq first
causes the guest to be de-scheduled and another CPU manages to
pick it up before you get here?
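If the state can indeed go stale, something along these lines might be
needed (a sketch only; likely too heavy-handed for a hot path, and it would
need an exception for sampled == current):

    vcpu_pause(sampled);                /* keep it from running elsewhere */
    hvm_get_segment_register(sampled, x86_seg_cs, &cs);
    vcpu_unpause(sampled);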
> @@ -472,6 +574,21 @@ static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
>          return -EINVAL;
>      }
>
> +    if ( !pvpmu_initted )
> +    {
> +        if (reserve_lapic_nmi() == 0)
Coding style.
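I.e.:

    if ( reserve_lapic_nmi() == 0 )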
> +            set_nmi_callback(pmu_nmi_interrupt);
> +        else
> +        {
> +            printk("Failed to reserve PMU NMI\n");
> +            put_page(page);
> +            return -EBUSY;
> +        }
> +        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
> +
> +        pvpmu_initted = 1;
Is it excluded that you get two racing pvpmu_init() calls (i.e. are
these exclusively coming from e.g. a domctl)? If not, better
serialization would be needed here.
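If they can race, something like this would do (sketch; lock name made up):

    static DEFINE_SPINLOCK(pvpmu_init_lock);

    spin_lock(&pvpmu_init_lock);
    if ( !pvpmu_initted )
    {
        if ( reserve_lapic_nmi() == 0 )
            set_nmi_callback(pmu_nmi_interrupt);
        else
        {
            spin_unlock(&pvpmu_init_lock);
            printk("Failed to reserve PMU NMI\n");
            put_page(page);
            return -EBUSY;
        }
        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
        pvpmu_initted = 1;
    }
    spin_unlock(&pvpmu_init_lock);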
Jan