Re: [Xen-devel] [PATCH 1/2] x86/monitor: Introduce a boolean to suppress nested monitoring events
On 10/23/18 6:08 PM, Andrew Cooper wrote:
> On 23/10/18 15:54, Razvan Cojocaru wrote:
>> On 10/23/18 5:35 PM, Andrew Cooper wrote:
>>> diff --git a/xen/arch/x86/hvm/monitor.c b/xen/arch/x86/hvm/monitor.c
>>> index 2a41ccc..f1a196f 100644
>>> --- a/xen/arch/x86/hvm/monitor.c
>>> +++ b/xen/arch/x86/hvm/monitor.c
>>> @@ -36,6 +36,9 @@ bool hvm_monitor_cr(unsigned int index, unsigned long value, unsigned long old)
>>>      struct arch_domain *ad = &curr->domain->arch;
>>>      unsigned int ctrlreg_bitmask = monitor_ctrlreg_bitmask(index);
>>>  
>>> +    if ( curr->monitor.suppress )
>>> +        return 0;
>>> +
>>>      if ( index == VM_EVENT_X86_CR3 && hvm_pcid_enabled(curr) )
>>>          value &= ~X86_CR3_NOFLUSH; /* Clear the noflush bit. */
>>>  
>>> @@ -73,6 +76,9 @@ bool hvm_monitor_emul_unimplemented(void)
>>>          .vcpu_id = curr->vcpu_id,
>>>      };
>>>  
>>> +    if ( curr->monitor.suppress )
>>> +        return false;
>> Rather than doing this for each event, I think we may be able to do it
>> only in monitor_traps(). Am I missing something?
>
> I guess that depends on how expensive it is to collect together the
> other data being fed into the monitor ring. I suppose it is only the
> hvm_do_resume() path which will suffer, and only on a reply from
> introspection, which isn't exactly a fastpath.

monitor_traps() calls vm_event_fill_regs(req) at the very end, which you
can short-circuit by returning sooner. The rest of the information, I
believe, has already been collected by the point where you test
v->monitor.suppress:
int monitor_traps(struct vcpu *v, bool sync, vm_event_request_t *req)
{
    int rc;
    struct domain *d = v->domain;

    rc = vm_event_claim_slot(d, d->vm_event_monitor);
    switch ( rc )
    {
    case 0:
        break;
    case -ENOSYS:
        /*
         * If there was no ring to handle the event, then
         * simply continue executing normally.
         */
        return 0;
    default:
        return rc;
    };

    req->vcpu_id = v->vcpu_id;

    if ( sync )
    {
        req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
        vm_event_vcpu_pause(v);
        rc = 1;
    }

    if ( altp2m_active(d) )
    {
        req->flags |= VM_EVENT_FLAG_ALTERNATE_P2M;
        req->altp2m_idx = altp2m_vcpu_idx(v);
    }

    vm_event_fill_regs(req);
    vm_event_put_request(d, d->vm_event_monitor, req);

    return rc;
}
Thanks,
Razvan