Re: [Xen-devel] [PATCH v4 for 4.5 1/8] libxl: rework domain userdata file lock
On Tue, 2014-09-16 at 11:01 +0100, Wei Liu wrote:
> The lock introduced in d2cd9d4f ("libxl: functions to lock / unlock
> libxl userdata store") has a bug that can leak the lock file when domain
> destruction races with other functions that try to get hold of the lock.
>
> There are several issues:
> 1. The lock is released too early when libxl__userdata_destroyall
>    deletes everything in the userdata store, including the lock file.
> 2. The check of domain existence is only done at the beginning of the
>    lock function; by the time the lock is acquired, the domain might
>    already be gone.
>
> The effect of these two issues is that we can run into the following
> situation:
>
>     Process 1                          Process 2 (domain destruction)
>
>  # LOCK FUNCTION                       # LOCK FUNCTION
>  check domain existence                check domain existence
>                                        acquire lock (file created)
>                                        # LOCK FUNCTION
>                                        destroy all files (lock file deleted,
>                                          lock released)
>  acquire lock (file created)
>  # LOCK FUNCTION                       destroy domain
>  # UNLOCK (close fd only)
>  [ lock file leaked ]
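For concreteness, a minimal sketch of the locking pattern the diagram
assumes (illustrative names, and flock(2) standing in for the locking
primitive; not the actual libxl code):

    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    /* Sketch only: the lock is an exclusive lock on a per-domain file. */
    static int lock_sketch(const char *path)
    {
        int fd = open(path, O_RDWR | O_CREAT, 0666);   /* "file created" */
        if (fd < 0) return -1;
        if (flock(fd, LOCK_EX) < 0) { close(fd); return -1; }
        return fd;
    }

    static void unlock_sketch(int fd)
    {
        /* Closing the fd drops the lock but leaves the file on disk.  If
         * the domain has meanwhile been destroyed, destroyall will never
         * run again, so nothing ever unlinks the file: this is the
         * "[ lock file leaked ]" step above. */
        close(fd);
    }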
>
> Fix this problem by deploying the following changes:
>
> 1. Unlink the lock file in the unlock function.
> 2. Modify libxl__userdata_destroyall to not delete domain-userdata-lock,
>    so that the lock remains held until the unlock function is called.
> 3. Check that the domain still exists once the lock is acquired, and
>    unlock if the domain is already gone.
>
> Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
Iff you end up resending the series (I'm hoping not) it might be nice to
s/lock_carefd/carefd/ in the struct (since the usages are lock->thing, so
the lock prefix is redundant). Also, in the unlock, perhaps add a comment
that the unlock must be done before the close, since that is quite
important.
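To spell out that ordering point (assuming "the unlock" here means the
unlink of the lock file, with the close being what actually drops the
lock), the reverse order reopens the race:

    /* Anti-example, for the comment's benefit: close-before-unlink. */
    static void broken_unlock(const char *path, int fd)
    {
        close(fd);      /* lock dropped here: another process can now
                         * open and lock the same path successfully */
        unlink(path);   /* ...which may then delete the lock file that
                         * other process has just locked, putting us back
                         * in the leaked-file / broken-exclusion situation */
    }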
Ian