
Re: [Xen-devel] [PATCH v5 23/28] xsplice: Stacking build-id dependency checking.



>>> On 04.04.16 at 22:01, <konrad.wilk@xxxxxxxxxx> wrote:
> On Mon, Apr 04, 2016 at 09:00:00AM -0600, Jan Beulich wrote:
>> >>> On 24.03.16 at 21:00, <konrad.wilk@xxxxxxxxxx> wrote:
>> > @@ -929,6 +932,33 @@ being loaded and requires an hypervisor build-id to 
>> > match against.
>> >  The old code allows much more flexibility and an additional guard,
>> >  but is more complex to implement.
>> >  
>> > +The second option, which requires a build-id of the hypervisor,
>> > +is implemented in the Xen Project hypervisor.
>> > +
>> > +Specifically, each payload has two build-id ELF notes:
>> > + * The build-id of the payload itself (generated via --build-id).
>> > + * The build-id of the payload it depends on (extracted from the
>> > +   previous payload or hypervisor during build time).
>> > +
>> > +This means that the very first payload depends on the hypervisor
>> > +build-id.
>> 
>> So this is meant to be a singly linked chain, not something with
>> branches and the like, allowing independent patches to be applied
>> solely based on the base build ID? Is such a restriction not going
> 
> Correct.
>> to get in the way rather sooner than later?
> 
> Here is what the design doc says:
> 
> "                                                         
> ### xSplice interdependencies                                                 
>   
>                                                                               
>   
> xSplice patches interdependencies are tricky.                                 
>   
>                                                                               
>   
> There are the ways this can be addressed:                                     
>   
>  * A single large patch that subsumes and replaces all previous ones.         
>   
>    Over the life-time of patching the hypervisor this large patch             
>   
>    grows to accumulate all the code changes.                                  
>   
>  * Hotpatch stack - where an mechanism exists that loads the hotpatches       
>   
>    in the same order they were built in. We would need an build-id            
>   
>    of the hypevisor to make sure the hot-patches are build against the        
>   
>    correct build.                                                             
>   
>  * Payload containing the old code to check against that. That allows         
>   
>    the hotpatches to be loaded indepedently (if they don't overlap) - or      
>   
>    if the old code also containst previously patched code - even if they      
>   
>    overlap.                                                                   
>                                                                               
>  
>    
> The disadvantage of the first large patch is that it can grow over            
>   
> time and not provide an bisection mechanism to identify faulty patches.       
>   
>                                                                               
>   
> The hot-patch stack puts stricts requirements on the order of the patches     
>       
> being loaded and requires an hypervisor build-id to match against.            
>   
>                                                                               
>   
> The old code allows much more flexibility and an additional guard,            
>   
> but is more complex to implement.                                             
>   
>                                                                               
>   
> The second option which requires an build-id of the hypervisor                
>   
> is implemented in the Xen Project hypervisor.                                 
>   
>                                                                               
>   
> "
> 
> I was all for "old_code to check against", but that would incur quite a lot
> of implementation work. The 'stacking' (suggested by Martin) is much easier
> to implement. I am hoping that in the next major milestone the 'old code'
> checking can be implemented, so that the admin has the choice to use both.

In which case could both the design doc and the commit message
be made to say that this is intended to be a temporary thing, to be
replaced by the more flexible model?

>> > +# Not Yet Done
>> > +
>> > +This is for further development of xSplice.
>> > +
>> > +## Goals
>> > +
>> > +The implementation must also have a mechanism to:
>> > +
>> > + * Look up in the Xen hypervisor the symbol names of functions
>> > +   from the ELF payload.
>> > + * Patch .rodata, .bss, and .data sections.
>> > + * Add further safety checks (a blacklist of which functions cannot be
>> > +   patched, check the stack, make sure the payload is built with the
>> > +   same compiler as the hypervisor, and exclude the NMI/MCE handlers
>> > +   and do_nmi for right now - until a safe solution is found).
>> > + * NOP out the code sequence if `new_size` is zero.
>> > + * Deal with other relocation types: R_X86_64_[8,16,32,32S],
>> > +   R_X86_64_PC[8,16,64] in the payload file.
>> 
>> Does this belong here? Doesn't this duplicate something I saw earlier?
> 
> ? This is just shrinking the "TODO Goals" section and removing:
> "- * A dependency mechanism for the payloads. To use that information to
> load:
> -    - The appropriate payload. To verify that the payload is built
> -      against the hypervisor. This can be done via the `build-id`
> -      or via providing a copy of the old code - so that the hypervisor can
> -      verify it against the code in memory.
> -    - To construct an appropriate order of payloads to load in case they
> -      depend on each other.
> "
> 
> The other TODOs are not added.

Oh, I didn't realize that the block gets moved. That's rather
unfortunate for review (the base document structure would better
remain the same).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
