
Re: [PATCH] livepatch: do not use .livepatch.funcs section to store internal state



On Wed, Nov 22, 2023 at 04:31:07PM +0000, Ross Lagerwall wrote:
> On Thu, Nov 16, 2023 at 1:54 PM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> >
> > On Thu, Nov 16, 2023 at 01:39:27PM +0100, Jan Beulich wrote:
> > > On 16.11.2023 12:58, Roger Pau Monne wrote:
> > > > --- a/xen/include/public/sysctl.h
> > > > +++ b/xen/include/public/sysctl.h
> > > > @@ -991,10 +991,7 @@ struct livepatch_func {
> > > >      uint32_t new_size;
> > > >      uint32_t old_size;
> > > >      uint8_t version;        /* MUST be LIVEPATCH_PAYLOAD_VERSION. */
> > > > -    uint8_t opaque[LIVEPATCH_OPAQUE_SIZE];
> > > > -    uint8_t applied;
> > > > -    uint8_t patch_offset;
> > > > -    uint8_t _pad[6];
> > > > +    uint8_t _pad[39];
> > >
> > > Should this be LIVEPATCH_OPAQUE_SIZE+8 and ...
> >
> > Hm, I'm not sure that's any clearer.  In fact I think
> > LIVEPATCH_OPAQUE_SIZE shouldn't have leaked into sysctl.h in the first
> > place.
> >
> > If we later want to add more fields and bump the version, isn't it
> > easier to have the padding size clearly specified as a number?
> 
> I prefer using the number, as it makes clear that this padding is no
> longer related to the size of the instruction buffer in livepatch_fstate.
> 
> Do you think it would be better to call livepatch_fstate.opaque
> something like livepatch_fstate.insn_buffer instead (and rename

That would be fine.

> the constant accordingly) since it is internal to Xen and is not
> hiding something from tools building live patches?

The constant would be needed anyway due to its usage in the livepatch
tests; see the layout of livepatch_expectation_t, which is public and
part of livepatch_func.

I don't mind changing fstate->opaque to insn_buffer, but I would avoid
touching anything in livepatch_expectation_t as part of this patch.

Thanks, Roger.
