Re: [Xen-devel] [RFC v2 13/15] Update Posted-Interrupts Descriptor during vCPU scheduling
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Wednesday, June 17, 2015 4:59 PM
> To: Wu, Feng
> Cc: andrew.cooper3@xxxxxxxxxx; george.dunlap@xxxxxxxxxxxxx; Tian, Kevin;
> Zhang, Yang Z; xen-devel@xxxxxxxxxxxxx; keir@xxxxxxx
> Subject: RE: [RFC v2 13/15] Update Posted-Interrupts Descriptor during vCPU
> scheduling
>
> >>> On 17.06.15 at 10:46, <feng.wu@xxxxxxxxx> wrote:
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> Sent: Wednesday, June 17, 2015 3:56 PM
> >> >>> On 17.06.15 at 08:54, <feng.wu@xxxxxxxxx> wrote:
> >> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> >> Sent: Wednesday, June 10, 2015 2:44 PM
> >> >> >>> On 08.05.15 at 11:07, <feng.wu@xxxxxxxxx> wrote:
> >> >> > +    switch ( v->runstate.state )
> >> >> > +    {
> >> >> > +    case RUNSTATE_runnable:
> >> >> > +    case RUNSTATE_offline:
> >> >> > +        /*
> >> >> > +         * We don't need to send a notification event to a non-running
> >> >> > +         * vcpu; the interrupt information will be delivered to it before
> >> >> > +         * VM-ENTRY when the vcpu is scheduled to run next time.
> >> >> > +         */
> >> >> > +        pi_desc->sn = 1;
> >> >> > +
> >> >> > +        /*
> >> >> > +         * If the state is transferred from RUNSTATE_blocked,
> >> >> > +         * we should set the 'NV' field back to posted_intr_vector,
> >> >> > +         * so the Posted-Interrupts can be delivered to the vCPU
> >> >> > +         * by VT-d HW after it is scheduled to run.
> >> >> > +         */
> >> >> > +        if ( old_state == RUNSTATE_blocked )
> >> >> > +        {
> >> >> > +            do
> >> >> > +            {
> >> >> > +                old.control = new.control = pi_desc->control;
> >> >> > +                new.nv = posted_intr_vector;
> >> >> > +            }
> >> >> > +            while ( cmpxchg(&pi_desc->control, old.control, new.control)
> >> >> > +                    != old.control );
> >> >>
> >> >> So why is it okay to do the SN update non-atomically when the
> >> >> vector update is done atomically?
> >> >>
> >> >> Also the curly braces do _not_ go on separate lines for do/while
> >> >> constructs.
> >> >>
> >> >> And then do you really need to atomically update the whole 64 bits
> >> >> here, rather than just the nv field? By not making it a bit field
> >> >> you could perhaps just write_atomic() it?
> >> >
> >> > Thinking more about this, maybe write_atomic() is not good for this; we
> >> > need to use the LOCK prefix when updating the posted-interrupt descriptor.
> >>
> >> The LOCK prefix can't be applied to simple loads and stores; they're
> >> implicitly atomic.
> >
> > Yes, I know the LOCK prefix cannot be used together with 'mov', but
> > my concern is: without LOCK, does it work well when this
> > write_atomic() operation and hardware setting 'ON' occur at the
> > same time?
> >
> > Hardware updates the 'ON' field like this (as listed in the spec):
> >
> > "Hardware performs a coherent atomic read-modify-write
> > operation of the posted-interrupt descriptor as follows:
> > - Read contents of the Posted Interrupt Descriptor, claiming
> >   exclusive ownership of its hosting cache-line."
>
> The "cache line" part here is the really important aspect. The CPU
> will do the same for LOCKed transactions as well as simple stores.
> But of course - I repeat this just to avoid any misunderstanding -
> this works only if the store covers _only_ nv (recall that I brought
> this up in the context of trying to avoid using bitfields where possible).

Your clarification makes things clear. Thank you very much!
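If I understand the suggestion correctly, it amounts to something like the
sketch below -- an illustration only, not the actual patch: the layout
paraphrases the VT-d spec, the helper name is made up, and a GCC builtin
stands in for Xen's write_atomic(). Since 'nv' occupies a whole byte of its
own, a single naturally-aligned store updates it atomically without touching
'on' or 'sn', and no LOCK prefix is needed:

#include <stdint.h>

/* Sketch of a posted-interrupt descriptor; field offsets per the VT-d spec. */
struct pi_desc {
    uint32_t pir[8];               /* posted-interrupt requests, 1 bit/vector */
    union {
        struct {
            uint16_t on : 1,       /* outstanding notification (HW sets this) */
                     sn : 1,       /* suppress notification */
                     rsvd_1 : 14;
            uint8_t  nv;           /* notification vector: a byte of its own */
            uint8_t  rsvd_2;
            uint32_t ndst;         /* notification destination */
        };
        uint64_t control;
    };
    uint32_t rsvd[6];
} __attribute__((aligned(64)));

/* Store only the nv byte; a stand-in for write_atomic(&pi_desc->nv, vector). */
static inline void pi_set_nv(struct pi_desc *pi_desc, uint8_t vector)
{
    __atomic_store_n(&pi_desc->nv, vector, __ATOMIC_RELEASE);
}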
Thanks,
Feng

> > Sorry for the long description above; since the hardware guys
> > advise me to use the LOCK prefix when updating the posted-interrupt
> > descriptor, I just don't want to make any mistake here.
>
> And that advice is correct if you fiddle with the descriptor in
> quantities covering more than the precise byte(s) you intend
> to modify.
>
> Jan
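The flip side of Jan's point can be sketched the same way (bit position and
helper names assumed; the GCC builtins stand in for Xen's set_bit()/clear_bit()
and compile to LOCK-prefixed instructions on x86). SN shares its control word
with ON, which the IOMMU updates with its own atomic read-modify-write, so
changing SN does require a LOCKed RMW rather than a plain store:

#include <stdint.h>

#define POSTED_INTR_SN_BIT 1   /* SN's bit position in the control word */

/*
 * A plain load/modify/store of the control word could lose a concurrent
 * hardware update to ON, so flip SN with a LOCKed read-modify-write.
 * These builtins compile to LOCK OR / LOCK AND on x86.
 */
static inline void pi_set_sn(uint64_t *control)
{
    __atomic_fetch_or(control, 1ULL << POSTED_INTR_SN_BIT, __ATOMIC_SEQ_CST);
}

static inline void pi_clear_sn(uint64_t *control)
{
    __atomic_fetch_and(control, ~(1ULL << POSTED_INTR_SN_BIT), __ATOMIC_SEQ_CST);
}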