Re: [Xen-devel] [PATCH 2 of 3] RFC: mem_event: use wait queue when ring is full
>>> On 22.11.11 at 22:13, Olaf Hering <olaf@xxxxxxxxx> wrote:
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -14,6 +14,7 @@
>  #include <xen/nodemask.h>
>  #include <xen/radix-tree.h>
>  #include <xen/multicall.h>
> +#include <xen/wait.h>
>  #include <public/xen.h>
>  #include <public/domctl.h>
>  #include <public/sysctl.h>
> @@ -192,6 +193,10 @@ struct mem_event_domain
>      mem_event_front_ring_t front_ring;
>      /* event channel port (vcpu0 only) */
>      int xen_port;
> +    /* mem_event bit for vcpu->pause_flags */
> +    int mem_event_bit;

Perhaps pause_bit would be a better name here? Or at least, as for the
first patch, the mem_ prefix should go away (or really the mem_event_
one, but that would just leave "bit", which is how I got to the above
proposal).

> +    /* list of vcpus waiting for room in the ring */
> +    struct waitqueue_head wq;
>  };
> 
>  struct mem_event_per_domain
> @@ -615,9 +620,12 @@ static inline struct domain *next_domain
>  /* VCPU affinity has changed: migrating to a new CPU. */
>  #define _VPF_migrating       3
>  #define VPF_migrating        (1UL<<_VPF_migrating)
> -    /* VCPU is blocked on memory-event ring. */
> -#define _VPF_mem_event       4
> -#define VPF_mem_event        (1UL<<_VPF_mem_event)
> +    /* VCPU is blocked on mem_paging ring. */
> +#define _VPF_me_mem_paging   4
> +#define VPF_me_mem_paging    (1UL<<_VPF_me_mem_paging)
> +    /* VCPU is blocked on mem_access ring. */
> +#define _VPF_me_mem_access   5
> +#define VPF_me_mem_access    (1UL<<_VPF_me_mem_access)

Same here - the mem_ seems superfluous.

Jan

> 
>  static inline int vcpu_runnable(struct vcpu *v)
>  {
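
For context, the behaviour the patch title describes ("use wait queue when ring
is full") pairs a blocking producer with a waker on the consumer-notification
path, using the new per-ring med->wq field. A minimal sketch of that pairing
follows; the helper names and the use of RING_FREE_REQUESTS() and wake_up_all()
here are illustrative assumptions, not code from the patch, and the interaction
with the new pause_flags bits is not shown.

#include <xen/sched.h>
#include <xen/wait.h>
#include <public/io/ring.h>

/* Illustrative sketch only -- not taken from the posted patch. */

/* Assumed helper: does the front ring still have room for another request? */
static bool_t mem_event_ring_has_room(struct mem_event_domain *med)
{
    return RING_FREE_REQUESTS(&med->front_ring) != 0;
}

/* Producer side: when the ring is full, sleep on the per-ring wait queue
 * (the med->wq field added by this patch) until space appears again.
 * wait_event() re-evaluates the condition after every wakeup. */
static void mem_event_wait_for_slot(struct mem_event_domain *med)
{
    wait_event(med->wq, mem_event_ring_has_room(med));
}

/* Consumer-notification side: after responses have been consumed and
 * request slots freed, wake the vcpus queued on med->wq.  The exact
 * waker spelling (wake_up_all() here) is assumed from xen/wait.h. */
static void mem_event_wake_waiters(struct mem_event_domain *med)
{
    wake_up_all(&med->wq);
}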