
Re: [Xen-devel] [PATCH v3 07/23] xsplice: Implement support for applying/reverting/replacing patches. (v5)



> > > +static void reschedule_fn(void *unused)
> > > +{
> > > +    smp_mb(); /* Synchronize with setting do_work */
> > > +    raise_softirq(SCHEDULE_SOFTIRQ);
> > 
> > As you have to IPI each processor to raise a schedule softirq, you can
> > set a per-cpu "xsplice enter rendezvous" variable.  This avoids the need
> > for the return-to-guest path to poll a single shared byte.
> 
> .. Not sure I follow. The IPI we send to the other CPUs is vector 0xfb,
> which makes smp_call_function_interrupt() run, which in turn calls this
> function: reschedule_fn(). raise_softirq() then sets the SCHEDULE_SOFTIRQ
> bit in softirq_pending.
> 
> Great. Since we caused an IPI, the target CPU takes a VMEXIT and
> eventually ends up calling process_pending_softirqs(), which calls
> schedule(). And after that it calls check_for_xsplice_work().
> 
> Are you suggesting to add a new softirq that would call
> check_for_xsplice_work()?
> 
> Or are you suggesting to skip the softirq_pending check and all the
> code around that and instead have each VMEXIT code path check this
> per-cpu "xsplice enter" variable? If so, why not use the existing
> softirq infrastructure? 
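
(For reference, the softirq-driven flow I described above is roughly this -
a sketch of my understanding only, where xsplice_schedule_work() is a name
I made up for whatever sets do_work and sends the IPIs:)

#include <xen/smp.h>
#include <xen/softirq.h>

/* Trimmed to the one field that matters for this discussion. */
static struct {
    bool_t do_work;
} xsplice_work;

/* Runs on every other CPU out of the 0xfb (smp_call_function) IPI. */
static void reschedule_fn(void *unused)
{
    smp_mb(); /* Synchronize with the write of do_work below. */
    raise_softirq(SCHEDULE_SOFTIRQ);
}

/* Made-up name: whatever kicks off the patching rendezvous. */
static void xsplice_schedule_work(void)
{
    xsplice_work.do_work = 1;
    smp_mb(); /* Publish do_work before IPIing everybody. */
    smp_call_function(reschedule_fn, NULL, 0);
}

/* Each CPU then takes the IPI (a VMEXIT if it was in guest context),
 * runs process_pending_softirqs() -> schedule() on the way out, and the
 * exit path calls check_for_xsplice_work(), which polls do_work. */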

N/m.

You were referring to the:

> > > +void do_xsplice(void)
..
> > > +    /* Fast path: no work to do. */
> > > +    if ( likely(!xsplice_work.do_work) )
> > > +        return;

which every CPU is going to do when it calls idle_loop(), svm_do_resume(),
and vmx_do_resume().

Let me add that in!
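
Something along these lines, perhaps - a rough sketch only, with the
per-cpu variable name made up by me, and check_for_xsplice_work() standing
in for the existing slow path:

#include <xen/percpu.h>
#include <xen/softirq.h>

void check_for_xsplice_work(void); /* the existing slow path */

/* Made-up per-cpu flag: set for each CPU that must enter the rendezvous. */
static DEFINE_PER_CPU(bool_t, xsplice_enter);

static void reschedule_fn(void *unused)
{
    this_cpu(xsplice_enter) = 1;
    raise_softirq(SCHEDULE_SOFTIRQ);
}

void do_xsplice(void)
{
    /* Fast path: nobody asked this CPU to enter the rendezvous, and the
     * flag lives in a cache line local to this CPU. */
    if ( likely(!this_cpu(xsplice_enter)) )
        return;

    this_cpu(xsplice_enter) = 0;
    check_for_xsplice_work();
}

That way idle_loop(), svm_do_resume() and vmx_do_resume() never contend
on the one shared do_work byte.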

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel