Re: [Xen-devel] [PATCH 15/15] libxl: New event generation API
Stefano Stabellini writes ("Re: [Xen-devel] [PATCH 15/15] libxl: New event generation API"):
> On Mon, 12 Dec 2011, Ian Campbell wrote:
> > On Mon, 2011-12-12 at 12:30 +0000, Ian Jackson wrote:
> > > In general this means that functions which lock the ctx must not
> > > allocate an inner gc.
>
> this is quite an important limitation and change in behaviour!
Luckily I have only introduced the ctx lock in this series so there is
no existing code which violates this rule.
> > It's probably not so important for memory allocation cleanup but it
> > would be a semantically useful change for the callbacks if we could
> > arrange for them to happen only on final exit of the library -- exactly
> > because of the difficulty of reasoning about things otherwise. Alas I
> > can't think of a good way to do this since the ctx can be shared.
> > Thread local storage would be one option, or libxl__gc_owner could
> > return a special "nested ctx" instead of the real outer ctx, but, ick.
>
> I think that using TLS to increase/decrease a nesting counter would be
> OK in this case. We could have a libxl__enter and a libxl__exit function
> that take care of this.
>
> Modifying libxl__gc_owner could lead to errors because it doesn't handle
> the case in which an externally visible function calls another
> externally visible function: libxl__gc_owner wouldn't even be called in
> this case.
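(As an illustration only, here is a minimal sketch of that nesting counter
idea.  The libxl__enter/libxl__exit names are the ones suggested above; the
thread-local variable and everything else are just my assumptions, not
anything that exists in libxl today.)

  #include <assert.h>
  #include <libxl.h>    /* for libxl_ctx */

  /* Sketch: a per-thread nesting depth, bumped on entry to every
   * externally visible libxl function and decremented on exit.
   * Deferred work would only run when the depth falls back to zero,
   * i.e. on final exit from the library. */
  static __thread int libxl__nesting_depth;

  static void libxl__enter(libxl_ctx *ctx)
  {
      (void)ctx;    /* ctx unused in this sketch */
      libxl__nesting_depth++;
  }

  static void libxl__exit(libxl_ctx *ctx)
  {
      (void)ctx;
      assert(libxl__nesting_depth > 0);
      if (--libxl__nesting_depth == 0) {
          /* outermost exit: the only point where deferred callbacks
           * or cleanup would be allowed to run */
      }
  }
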
I suggested to Ian the moral equivalent of:

  #define CTX_LOCK \
      <previous implementation> \
      ctx->recursive_lock_counter++

  #define GC_INIT \
      CTX_LOCK; \
      assert(ctx->recursive_lock_counter==1); \
      CTX_UNLOCK
but I don't think this is really very nice.
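(Spelled out a little further, and again purely as a sketch on my part: this
assumes the ctx lock is a recursive pthread mutex in a ctx->lock field and
that recursive_lock_counter is a field we would add to the ctx; neither the
field names nor the exact macros are real libxl code.)

  #include <assert.h>
  #include <pthread.h>

  /* Sketch only.  CTX_LOCK takes the (recursive) ctx mutex and bumps a
   * nesting counter; GC_INIT asserts that it is the outermost holder,
   * i.e. that nobody allocates a new gc while already holding the ctx
   * lock. */
  #define CTX_LOCK do {                                   \
          int rc_ = pthread_mutex_lock(&ctx->lock);       \
          assert(!rc_);                                   \
          ctx->recursive_lock_counter++;                  \
      } while (0)

  #define CTX_UNLOCK do {                                 \
          ctx->recursive_lock_counter--;                  \
          int rc_ = pthread_mutex_unlock(&ctx->lock);     \
          assert(!rc_);                                   \
      } while (0)

  #define GC_INIT do {                                    \
          CTX_LOCK;                                       \
          assert(ctx->recursive_lock_counter == 1);       \
          CTX_UNLOCK;                                     \
          /* ...followed by the existing gc setup... */   \
      } while (0)
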
I think the answer is to write a comment saying that it is forbidden
to allocate a new gc with the ctx locked.
Ian.