Re: [Xen-devel] [PATCH v7 12/19] x86/VPMU: Initialize PMU for PV guests
>>> On 11.06.14 at 14:49, <boris.ostrovsky@xxxxxxxxxx> wrote:
> On 06/11/2014 06:20 AM, Jan Beulich wrote:
>>>>> On 06.06.14 at 19:40, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>> Code for initializing/tearing down PMU for PV guests
>>> ...
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -4031,7 +4031,8 @@ static hvm_hypercall_t *const pvh_hypercall64_table[NR_hypercalls] = {
>>> [ __HYPERVISOR_physdev_op ] = (hvm_hypercall_t *)hvm_physdev_op,
>>> HYPERCALL(hvm_op),
>>> HYPERCALL(sysctl),
>>> - HYPERCALL(domctl)
>>> + HYPERCALL(domctl),
>>> + HYPERCALL(xenpmu_op)
>> How is this change related to the subject and description?
>
> Yes, this is the result of my eliminating the standalone PVH patch and
> spreading its content across the other patches. I'll update the commit text.
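>
> (For reference, the HYPERCALL() macro in hvm.c -- quoting its shape
> from memory here, so treat this as a sketch rather than the exact
> tree -- expands the entry into an initializer along the lines of
>
>     #define HYPERCALL(x) \
>         [ __HYPERVISOR_ ## x ] = (hvm_hypercall_t *)do_ ## x
>
> so the new line simply routes __HYPERVISOR_xenpmu_op to do_xenpmu_op
> for PVH guests.)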
>
>>
>>> +static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
>>> +{
>>> + struct vcpu *v;
>>> + struct page_info *page;
>>> + uint64_t gfn = params->d.val;
>>> +
>>> + if ( !params || params->vcpu >= d->max_vcpus )
>>> + return -EINVAL;
>>> +
>>> + page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
>>> + if ( !page )
>>> + return -EINVAL;
>>> +
>>> + if ( !get_page_type(page, PGT_writable_page) )
>>> + {
>>> + put_page(page);
>>> + return -EINVAL;
>>> + }
>>> +
>>> + v = d->vcpu[params->vcpu];
>> I'm sure I asked this before - what guarantees (now as well as
>> going forward) that you get non-NULL here? Various existing
>> code paths properly verify both d->vcpu and d->vcpu[...] not
>> being NULL, so you should do this consistently too.
>
> Ah, I thought you were referring to the params check last time. I'll
> add the vcpu checks as well.
And drop the check of params against NULL - that check makes no sense,
not least because params is already dereferenced (to read the GFN)
before the check is even reached.
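
I.e. roughly the following (a sketch of the intended checks only, not
the final code):

    /* Bounds check only; the !params check is dropped, as the pointer
     * is already dereferenced when the GFN is read. */
    if ( params->vcpu >= d->max_vcpus )
        return -EINVAL;

    /* Guard against not-yet-allocated / not-yet-initialized vCPUs. */
    if ( (d->vcpu == NULL) || (d->vcpu[params->vcpu] == NULL) )
        return -EINVAL;

with v = d->vcpu[params->vcpu] then known to be non-NULL.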
Jan