
Re: [PATCH v3 2/6] x86/HVM: split restore state checking from state loading



On Mon, Dec 11, 2023 at 12:31:11PM +0100, Jan Beulich wrote:
> On 11.12.2023 11:46, Roger Pau Monné wrote:
> > On Wed, Dec 06, 2023 at 08:27:59AM +0100, Jan Beulich wrote:
> >> On 05.12.2023 16:55, Roger Pau Monné wrote:
> >>> On Tue, Dec 05, 2023 at 03:59:13PM +0100, Jan Beulich wrote:
> >>>> On 05.12.2023 15:29, Roger Pau Monné wrote:
> >>>>> On Tue, Dec 05, 2023 at 09:52:31AM +0100, Jan Beulich wrote:
> >>>>>> On 04.12.2023 18:27, Roger Pau Monné wrote:
> >>>>>>> On Tue, Nov 28, 2023 at 11:34:04AM +0100, Jan Beulich wrote:
> >>>>>>>> ..., at least as far as is reasonably feasible without making a
> >>>>>>>> check hook mandatory (in particular, strict vs relaxed/zero-extend
> >>>>>>>> length checking can't be done early this way).
> >>>>>>>>
> >>>>>>>> Note that only one of the two uses of hvm_load() is accompanied by
> >>>>>>>> hvm_check(). The other directly consumes hvm_save() output, which
> >>>>>>>> ought to be well-formed. This means that while input-data-related
> >>>>>>>> checks don't need repeating in the "load" function when already done
> >>>>>>>> by the "check" one (albeit assertions to this effect may be
> >>>>>>>> desirable), domain-state-related checks (e.g. has_xyz(d)) will be
> >>>>>>>> required in both places.
> >>>>>>>>
> >>>>>>>> Suggested-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> >>>>>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >>>>>>>> ---
> >>>>>>>> Do we really need all the copying involved in the use of
> >>>>>>>> _hvm_read_entry() (backing hvm_load_entry())? Zero-extending loads
> >>>>>>>> are likely easier to handle that way, but for strict loads all we
> >>>>>>>> gain is a reduced risk of unaligned accesses (compared to simply
> >>>>>>>> pointing into h->data[]).
> >>>>>>>
> >>>>>>> See below, but I wonder whether the checks could be performed as part
> >>>>>>> of hvm_load() without having to introduce a separate handler and loop
> >>>>>>> over the context entries.
> >>>>>>
> >>>>>> Specifically not. State loading (in the longer run) had better not
> >>>>>> fail once started. (Imo it should have been this way from the
> >>>>>> beginning.) Only then will the vCPU still be in a predictable state
> >>>>>> even after a possible error.
> >>>>>
> >>>>> Looking at the callers, does such predictable state after failure
> >>>>> matter?
> >>>>>
> >>>>> One caller is a hypercall used by the toolstack at domain creation,
> >>>>> where failure just leads to the domain being destroyed.  The other
> >>>>> caller is VM forking, which will also lead to the fork being destroyed
> >>>>> if context loading fails.
> >>>>>
> >>>>> Maybe I'm overlooking something.
> >>>>
> >>>> You don't (I think), but existing callers necessarily have to behave the
> >>>> way you describe. From an abstract perspective, though, failed state
> >>>> loading had better allow a retry. And really I thought that when you
> >>>> suggested splitting checking from loading, you had exactly that in mind.
> >>>
> >>> Not really TBH, because I didn't think that much about a possible
> >>> implementation when proposing it.
> >>
> >> But what else did you think of then in terms of separating checking from
> >> loading?
> > 
> > Just calling the check and load functions inside the same loop was
> > my initial thought.
> 
> Okay, I was meanwhile also guessing that this might have been what you
> thought of. I can go that route, but I wouldn't want to make it "and" but
> rather "or", depending on a new boolean parameter to be passed to
> hvm_load(). IOW I'd still like to do all checking before all loading (in
> the longer run, that is, after individual handlers have been adapted).
> Would that be okay with you?

Yes, that would be fine.  I assume you will introduce a 'dry run'
parameter then?
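
Something along the lines of the rough sketch below is what I'm imagining
for that: a single loop over the records which dispatches to either a check
or a load hook depending on the new parameter.  Note this is only a sketch;
the handler table and descriptor handling are approximations of the existing
save/restore infrastructure, and the "real" parameter name plus the check
hook member are just placeholders, not what the eventual patch would need to
look like:

int hvm_load(struct domain *d, bool real, hvm_domain_context_t *h)
{
    const struct hvm_save_descriptor *desc;
    int rc = 0;

    while ( rc == 0 )
    {
        if ( h->size - h->cur < sizeof(*desc) )
            return -ENODATA;

        desc = (const void *)&h->data[h->cur];
        if ( desc->typecode == HVM_SAVE_CODE(END) )
            break;

        if ( desc->typecode >= ARRAY_SIZE(hvm_sr_handlers) ||
             !hvm_sr_handlers[desc->typecode].load )
            return -EINVAL;

        if ( real )
            /* 2nd pass: input already vetted, so this ought not to fail. */
            rc = hvm_sr_handlers[desc->typecode].load(d, h);
        else if ( hvm_sr_handlers[desc->typecode].check )
            /*
             * 1st pass: validate the record without touching vCPU state.
             * The (hypothetical) check hook is assumed to advance h->cur
             * just like the load hook does.
             */
            rc = hvm_sr_handlers[desc->typecode].check(d, h);
        else
        {
            /* No check hook: simply skip over the record on the 1st pass. */
            if ( h->size - h->cur < sizeof(*desc) + desc->length )
                return -ENODATA;
            h->cur += sizeof(*desc) + desc->length;
        }
    }

    return rc;
}

Callers wanting the "all checking before all loading" behaviour could then
invoke this twice, first with the dry-run value and then with the real one,
once all handlers have gained check hooks.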

Thanks, Roger.