Re: [Xen-devel] [PATCH v9 15/17] vmx: VT-d posted-interrupt core logic handling
On Wed, 2015-11-11 at 13:29 +0000, Wu, Feng wrote:
> > -----Original Message-----
> > From: Wu, Feng
> >
> > Yes, that is the case: after arch_vcpu_block() is called, we don't
> > cancel it even if there are events to be delivered, so we need to
> > check whether the NV is the right value. If it isn't, that means
> > that after arch_vcpu_block() the local_events_need_delivery() check
> > returned true, so we need to adjust the PI state before VM entry.
> > As Jan mentioned before, we will check the value before actually
> > updating it atomically, so I think it is okay from a performance
> > point of view.
>
> Besides the case you mentioned above, it also covers another case:
> now we remove the arch hook in vcpu_wake(), so when the vCPU is woken
> up, the 'NV' field is still 'wakeup_vector', hence we need to restore
> it before VM entry.
>
Well, sure. That, however, could have been handled by having this code
called from vmx_do_resume(), I think.

In fact, waking up certainly calls for the descriptor to be updated,
but it also involves scheduling (well, at least starting to execute
after a wakeup does). So schedule_tail(), and hence vmx_do_resume(),
would at some point be involved.

My question above was about confirming that the (only) reason why you
need that code in vmx_enter_helper() rather than in vmx_do_resume() was
the 'block cancellation issue', not about why you need to change the PI
control block at all. :-)

Regards,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
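For context, the kind of check being discussed (look at the descriptor's
notification vector and only perform the atomic update when it does not
already hold the normal posted-interrupt vector) can be sketched in plain
C roughly as below. The structure layout, field names and vector numbers
here are simplified stand-ins for illustration, not Xen's actual
definitions.

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for a posted-interrupt descriptor; the real
 * layout in Xen differs. */
struct pi_desc {
    uint8_t nv;                         /* notification vector */
};

/* Hypothetical vector numbers, for illustration only. */
#define POSTED_INTR_VECTOR  0xf2        /* normal notification vector */
#define PI_WAKEUP_VECTOR    0xf1        /* vector used while blocked */

/*
 * Before VM entry: if the vCPU was woken up (or blocking was effectively
 * cancelled because local events became deliverable), 'nv' may still
 * hold the wakeup vector, so restore the normal notification vector.
 * The value is checked first, so the atomic update is skipped in the
 * common case where nothing changed.
 */
static void pi_restore_nv_before_vmentry(struct pi_desc *pi)
{
    uint8_t old = pi->nv;

    if (old == POSTED_INTR_VECTOR)
        return;                         /* already correct: nothing to do */

    __atomic_compare_exchange_n(&pi->nv, &old, POSTED_INTR_VECTOR,
                                0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}

int main(void)
{
    struct pi_desc pi = { .nv = PI_WAKEUP_VECTOR };

    pi_restore_nv_before_vmentry(&pi);
    printf("nv = %#x\n", pi.nv);        /* now the normal vector */
    return 0;
}

Whether this runs from vmx_do_resume() or from the helper on the
VM-entry path is exactly the placement question raised above; the
check-before-update shape is the same either way.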