
Re: [Xen-devel] [PATCH v8 15/17] vmx: VT-d posted-interrupt core logic handling



>>> On 28.10.15 at 03:58, <feng.wu@xxxxxxxxx> wrote:

> 
>> -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: Tuesday, October 27, 2015 5:52 PM
>> To: Wu, Feng <feng.wu@xxxxxxxxx>
>> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; Dario Faggioli
>> <dario.faggioli@xxxxxxxxxx>; GeorgeDunlap <george.dunlap@xxxxxxxxxxxxx>;
>> Tian, Kevin <kevin.tian@xxxxxxxxx>; xen-devel@xxxxxxxxxxxxx; Keir Fraser
>> <keir@xxxxxxx>
>> Subject: RE: [Xen-devel] [PATCH v8 15/17] vmx: VT-d posted-interrupt core
>> logic handling
>> 
>> >>> On 27.10.15 at 06:19, <feng.wu@xxxxxxxxx> wrote:
>> >> From: Dario Faggioli [mailto:dario.faggioli@xxxxxxxxxx]
>> >> Sent: Monday, October 26, 2015 10:40 PM
>> >> In any case, it would have been nice, given the situation, if you'd have
>> >> put a few words about, e.g., which solution you ended up implementing
>> >> and why, either in the cover or here (e.g., in the '---' area).
>> >
>> > Thanks for the suggestion. As I mentioned before, updating the PI
>> > descriptor needs to be atomic, which I expect to be somewhat expensive.
>> > So doing it on every VM-exit does not seem like a good idea to me.
>> 
>> You could check whether any update is needed before doing the
>> (atomic) cmpxchg16b.
> 
> I am not sure why you mentioned the (atomic) cmpxchg16b here; to atomically
> update the PI descriptor, we don't need to use cmpxchg16b, right?

To be honest I'm not sure which entity was the one needing
cmpxchg16b. Just take that comment in a more general form:
if there is an update to be done (by whatever means) that is
expensive, try avoiding it by first checking whether an update
is actually needed - just as we do, e.g., for a couple of MSR
writes, where we keep the last written value in a per-CPU
variable.
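[Editor's note: a minimal sketch of the check-before-write pattern Jan alludes to. The names (wrmsr_cached, do_wrmsr) and the counter are illustrative stand-ins, not Xen's actual MSR code; the real code keeps the cache in a per-CPU variable.]

```c
#include <stdint.h>
#include <stdbool.h>

/* Stand-in cache of the last value written; in Xen this would be per-CPU. */
static uint64_t last_msr_val;
static bool last_msr_valid;
static unsigned int wrmsr_count;   /* counts "expensive" writes, for illustration */

static void do_wrmsr(uint64_t val)
{
    wrmsr_count++;                 /* stand-in for the actual (expensive) WRMSR */
    last_msr_val = val;
    last_msr_valid = true;
}

/* Only perform the expensive write when the value actually changes. */
void wrmsr_cached(uint64_t val)
{
    if ( last_msr_valid && last_msr_val == val )
        return;                    /* nothing to do - skip the expensive op */
    do_wrmsr(val);
}
```

The same idea applies to the PI descriptor: read the current contents first, and only fall through to the atomic update when a field would actually change.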

> In my understanding, it seems there have been two suggestions about this:
> 
> #1: Set 'SN' on each VM-Exit and clear 'SN' on each VM-Entry, so we don't
> need to care about the status of 'SN' in root mode. This is what I meant by
> "not a good idea", since it adds an atomic operation on every
> VM-Exit/VM-Entry.
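[Editor's note: a sketch of the per-transition cost suggestion #1 implies. The bit position and names are illustrative, not the real VT-d PI descriptor layout; the point is that each toggle is a locked read-modify-write.]

```c
#include <stdatomic.h>
#include <stdint.h>

#define POSTED_INTR_SN  1u          /* hypothetical suppress-notification bit */

static _Atomic uint32_t pi_control; /* stand-in for the PI descriptor control word */

/* Suggestion #1: a locked RMW on every transition in each direction. */
void vmexit_suppress(void)
{
    atomic_fetch_or(&pi_control, POSTED_INTR_SN);    /* set SN on VM-exit */
}

void vmentry_unsuppress(void)
{
    atomic_fetch_and(&pi_control, ~POSTED_INTR_SN);  /* clear SN on VM-entry */
}
```

Since VM-exits can be very frequent, paying for two atomic operations per round trip is the cost Feng objects to.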
> 
> #2: For the blocking cancel case, let's take the following code for example:
> 
> void vcpu_block(void)
> {
>     struct vcpu *v = current;
> 
>     set_bit(_VPF_blocked, &v->pause_flags);
> 
>     arch_vcpu_block(v);
> 
>     /* Check for events /after/ blocking: avoids wakeup waiting race. */
>     if ( local_events_need_delivery() )
>     {
>         clear_bit(_VPF_blocked, &v->pause_flags);
>         /* arch_vcpu_block_cancel(v); */ 
>     }
>     else
>     {
>         TRACE_2D(TRC_SCHED_BLOCK, v->domain->domain_id, v->vcpu_id);
>         raise_softirq(SCHEDULE_SOFTIRQ);
>     }
> }
> 
> We still call arch_vcpu_block() in vcpu_block(), but we don't need to
> call arch_vcpu_block_cancel(v) when local_events_need_delivery()
> returns true. Then, before VM-Entry, we can check whether the 'NV' field
> is 'pi_wakeup_vector'; if it is, we change the PI status and remove the
> vCPU from the blocking list.
> 
> Jan, if #2 is what you mean, I think it is worth a try. If I haven't
> understood your comments correctly, could you please elaborate? Thanks
> a lot!

Ideally we'd avoid both arch_vcpu_*() calls by doing what is needed
in arch code (e.g. the VM entry path). If only avoiding the cancel
hook is reasonably possible this way, then so be it - still better to
have just one hook here than having two.
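[Editor's note: a minimal sketch of the VM-entry-path fixup from suggestion #2. The vector values, field names, and list handling here are illustrative stand-ins, not the real Xen/VT-d definitions.]

```c
#include <stdint.h>

#define POSTED_INTR_VECTOR  0xf2   /* illustrative values, not the real ones */
#define PI_WAKEUP_VECTOR    0xf1

struct pi_desc {
    uint8_t nv;              /* notification vector */
    int on_blocked_list;     /* 1 while queued on the per-CPU blocking list */
};

/* On the VM-entry path: if the vCPU still carries the wakeup vector
 * (i.e. the block was cancelled without running the cancel hook),
 * restore the normal notification vector and dequeue it. */
void pi_vmentry_fixup(struct pi_desc *pi)
{
    if ( pi->nv != PI_WAKEUP_VECTOR )
        return;                      /* nothing to undo */
    pi->nv = POSTED_INTR_VECTOR;     /* back to the normal NV */
    pi->on_blocked_list = 0;         /* list removal/locking elided */
}
```

This keeps the common VM-entry path cheap (a single compare), while the fixup work is only done in the rare cancelled-block case.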

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

