Re: [PATCH for-4.17?] x86/paging: return -EINVAL for paging domctls for dying domains
On Wed, Nov 09, 2022 at 08:48:46AM +0100, Jan Beulich wrote:
> On 08.11.2022 18:15, Roger Pau Monné wrote:
> > On Tue, Nov 08, 2022 at 06:03:54PM +0100, Jan Beulich wrote:
> >> On 08.11.2022 17:43, Roger Pau Monné wrote:
> >>> On Tue, Nov 08, 2022 at 05:14:40PM +0100, Jan Beulich wrote:
> >>>> On 08.11.2022 12:38, Roger Pau Monne wrote:
> >>>>> Like on the Arm side, return -EINVAL when attempting to do a p2m
> >>>>> operation on dying domains.
> >>>>>
> >>>>> The current logic returns 0 and leaves the domctl parameter
> >>>>> uninitialized for any parameter fetching operations (like the
> >>>>> GET_ALLOCATION operation), which is not helpful from a toolstack
> >>>>> point of view, because there's no indication that the data hasn't
> >>>>> been fetched.
> >>>>
> >>>> While I can see how the present behavior is problematic when it comes
> >>>> to consuming supposedly returned data, ...
> >>>>
> >>>>> --- a/xen/arch/x86/mm/paging.c
> >>>>> +++ b/xen/arch/x86/mm/paging.c
> >>>>> @@ -694,9 +694,10 @@ int paging_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
> >>>>>
> >>>>>      if ( unlikely(d->is_dying) )
> >>>>>      {
> >>>>> -        gdprintk(XENLOG_INFO, "Ignoring paging op on dying domain %u\n",
> >>>>> +        gdprintk(XENLOG_INFO,
> >>>>> +                 "Tried to do a paging domctl op on dying domain %u\n",
> >>>>>                   d->domain_id);
> >>>>> -        return 0;
> >>>>> +        return -EINVAL;
> >>>>>      }
> >>>>
> >>>> ... going from "success" to "failure" here has a meaningful risk of
> >>>> regressing callers. It is my understanding that it was deliberate to
> >>>> mimic success in this case (without meaning to assign "good" or "bad"
> >>>> to that decision).
> >>>
> >>> I would assume that was the original intention, yes, albeit the commit
> >>> message doesn't go into details about why mimicking success is
> >>> required; it's very well possible the code relying on this was xend.
> >>
> >> Quite possible, but you never know who else has cloned code from there.
> >>
> >>>> Can you instead fill the data to be returned in
> >>>> some simple enough way? I assume a mere memset() isn't going to be
> >>>> good enough, though (albeit public/domctl.h doesn't explicitly name
> >>>> any input-only fields, so it may not be necessary to preserve
> >>>> anything). Maybe zeroing ->mb and ->stats would do?
> >>>
> >>> Hm, it still feels kind of wrong. We do return errors elsewhere for
> >>> operations attempted against dying domains, and that seems all fine;
> >>> not sure why paging operations need to be different in this regard.
> >>> Arm does also return -EINVAL in that case.
> >>>
> >>> So what about postponing this change to 4.18 in order to avoid
> >>> surprises, but then taking it in its current form at the start of the
> >>> development window, so as to have time to detect any issues?
> >>
> >> Maybe, but to be honest I'm not convinced. Arm can't really be taken
> >> for comparison, since the op is pretty new there iirc.
> >
> > Indeed, but the tools code paths are likely shared between x86 and
> > Arm, as the hypercalls are the same.
>
> On x86 we have both xc_shadow_control() and (functional)
> xc_logdirty_control(); on Arm only the former is used, while the latter
> would also be impacted by your change. Plus you're not accounting for
> external tool stacks (like xend would be if anyone had cared to forward
> port it, when - as you said earlier - the suspicion is that the original
> change was made to "please" xend).

AFAICT XEN_DOMCTL_SHADOW_OP_{CLEAN,PEEK} are equally broken if no error
is returned when the domain is dying. A caller might think the bitmap
has been fetched correctly when that's not the case, as the bitmap
buffer would be left untouched.

> > This is a domctl interface, so we are fine to do such changes.
>
> We're fine to make changes to domctl which are either binary compatible
> with earlier versions or which are associated with a bump of the
> interface version.
> The latter wouldn't help in this case, while the
> former is simply not true here. For Andrew's proposed new paging pool
> interface the behavior suggested here would of course be fully
> appropriate, demanding that tool stacks either don't issue such requests
> against dying domains or that they be prepared to get back errors.

I still think we need to fix this bug in Xen, and then fix any
toolstacks that rely on such bogus behavior. Propagating such a broken
interface is just going to cause more issues down the road.

> Thinking about it again I'm also not convinced EINVAL is an appropriate
> error code to use here. The operation isn't necessarily invalid; we
> only prefer to not carry out any such anymore. EOPNOTSUPP, EPERM, or
> EACCES would all seem more appropriate. Or, for ease of recognition, a
> rarely used one, e.g. ENODATA, EILSEQ, or EROFS.

I was about to use ESRCH, as that's also used by track_dirty_vram() and
seems sensible.

> Finally I'm not convinced of the usefulness of this dying check in the
> first place: is_dying may become set immediately after the check was
> done.

While strictly true, this code is executed with the domain lock held, so
while is_dying might change, domain_kill() won't make progress because
of the barrier on the domain lock just after setting is_dying.

Thanks, Roger.