Re: [Xen-devel] [v2 06/11] vmx: add help functions to support PML
>>> On 17.04.15 at 05:10, <kai.huang@xxxxxxxxxxxxxxx> wrote:
> On 04/16/2015 11:42 PM, Jan Beulich wrote:
>>>>> On 15.04.15 at 09:03, <kai.huang@xxxxxxxxxxxxxxx> wrote:
>>> +void vmx_vcpu_flush_pml_buffer(struct vcpu *v)
>>> +{
>>> +    uint64_t *pml_buf;
>>> +    unsigned long pml_idx;
>>> +
>>> +    ASSERT(vmx_vcpu_pml_enabled(v));
>>> +
>>> +    vmx_vmcs_enter(v);
>>> +
>>> +    __vmread(GUEST_PML_INDEX, &pml_idx);
>> Don't you require the vCPU to be non-running or current when you
>> get here? If so, perhaps add a respective ASSERT()?
> Yes an ASSERT would be better.
>
> v->pause_count will be incremented if the vcpu is kicked out by an explicit
> domain_pause, but it looks like the same thing won't be done if the vcpu is
> kicked out by a PML-buffer-full VMEXIT. So should the ASSERT be done like
> below?
>
> ASSERT(atomic_read(&v->pause_count) || (v == current));
For one I'd reverse the two parts. And then I think pause count
being non-zero is not a sufficient condition - if a non-synchronous
pause was issued against the vCPU it may still be running. I'd
suggest !vcpu_runnable(v) && !v->is_running, possibly with the
pause count check instead of the runnable one if the only
permitted case where v != current requires the vCPU to be
paused.
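
A minimal sketch of how the check Jan suggests might sit at the top of
vmx_vcpu_flush_pml_buffer(), assuming the only permitted v != current case is
a vCPU the caller has already taken off the scheduler; the exact condition is
for the patch author to pick:

void vmx_vcpu_flush_pml_buffer(struct vcpu *v)
{
    uint64_t *pml_buf;
    unsigned long pml_idx;

    /*
     * GUEST_PML_INDEX is only stable if this vCPU is not executing:
     * either we are running on it ourselves, or it is neither runnable
     * nor currently running (e.g. after a synchronous pause).
     */
    ASSERT((v == current) || (!vcpu_runnable(v) && !v->is_running));

    ASSERT(vmx_vcpu_pml_enabled(v));

    vmx_vmcs_enter(v);

    __vmread(GUEST_PML_INDEX, &pml_idx);
    /* ... walk and flush the PML buffer ... */
}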
>>> +    /*
>>> +     * Need to change type from log-dirty to normal memory for logged GFN.
>>> +     * hap_track_dirty_vram depends on it to work. And we really only need
>>> +     * to mark GFNs which have been successfully changed from log-dirty to
>>> +     * normal memory to be dirty.
>>> +     */
>>> +    if ( !p2m_change_type_one(v->domain, gfn, p2m_ram_logdirty,
>>> +                        p2m_ram_rw) )
>> Indentation.
> To be aligned where exactly? Sorry, I couldn't find an example to refer to
> for such a case.
p2m_ram_rw should align with the v in v->domain.
Jan
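
For reference, with p2m_ram_rw lined up under the v in v->domain as suggested,
the call would read roughly:

    if ( !p2m_change_type_one(v->domain, gfn, p2m_ram_logdirty,
                              p2m_ram_rw) )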
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel