Re: [Xen-devel] [PATCH] xsplice: Don't perform multiple operations on same payload once work is scheduled.
On Fri, Apr 29, 2016 at 01:35:08AM -0600, Jan Beulich wrote:
> >>> On 29.04.16 at 03:52, <konrad@xxxxxxxxxx> wrote:
> > --- a/xen/common/xsplice.c
> > +++ b/xen/common/xsplice.c
> > @@ -1363,6 +1363,9 @@ static int xsplice_action(xen_sysctl_xsplice_action_t
> > *action)
> > return PTR_ERR(data);
> > }
> >
> > + if ( data->rc == -EAGAIN ) /* schedule_work has been called. */
> > + goto out;
>
> And nothing else can set data->rc to this value, now or in the
Correct.
> future? Because if that's possible, you'd deny any further actions
> on that payload.
Right. If the code does change and some of the underlying code sets it
to -EAGAIN, we are in trouble.
Or rather, we could do something different (like the earlier patch that Andrew
reviewed).
>
> > @@ -1423,6 +1426,7 @@ static int xsplice_action(xen_sysctl_xsplice_action_t
> > *action)
> > break;
> > }
> >
> > + out:
> > spin_unlock(&payload_lock);
> >
> > return rc;
>
> Taking both together, and looking at patch 2 of the original series, I'm
> getting the impression you return success in that case rather than e.g.
> -EBUSY or indeed -EAGAIN.
Correct.
That way xen-xsplice will keep coming back and retrying its operation
instead of exiting.
But the more I think about it, the more that sounds silly. If we have
a situation where the user requests a different state and we haven't finished
with the one in progress - we shouldn't loop around. We should error out
and tell the admin (the same way we error out if there is a timeout on the
patching or we can't get hold of the lock).
So based on your input I think (in the context of this patch) it ought to be:

    if ( data->rc == -EAGAIN )
    {
        rc = -EBUSY;
        goto out;
    }
>
> Jan
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel