
Re: [Xen-devel] [PATCH v2 01/13] xsplice: Design document (v5).



>>> On 05.02.16 at 22:47, <konrad.wilk@xxxxxxxxxx> wrote:
>> Also is using "long" here really a good idea? Shouldn't we rather use
>> fixed width or ELF types?
> 
> We can. It would look like this:
> 
> struct xsplice_patch_func {
>     const unsigned char *name;
>     Elf64_Xword new_addr;
>     Elf64_Xword old_addr;
>     Elf64_Word new_size;
>     Elf64_Word old_size;
>     uint8_t pad[32];
> };
> 
> Much nicer.

Leaving only the question of whether we should then have two (for
now) variants - one for ELF32 (which at least ARM32 will want) and
another for ELF64.

>> > +* `old_size` and `new_size` contain the sizes of the respective functions
>> > +  in bytes.
>> > +   The values **MUST** not be zero.
>> 
>> For old_size I can see this, but can't new_size being zero "NOP out
>> the entire code sequence"?
> 
> The patchset does not (yet) support that. Nor the short branch instructions.
> I am trying to keep the amount of 'features' to the minimum so that reviews
> can be easier.

Understood. However, the emphasized "**MUST**" pretty much
excludes the possibility for the future: One thing is the specification
that you present here, another the implementation, which may of
course initially lack certain functionality.

>> > +### XEN_SYSCTL_XSPLICE_LIST (2)
>> > +
>> > +Retrieve an array of abbreviated statuses and names of the payloads
>> > +loaded in the hypervisor.
>> > +
>> > +The caller provides:
>> > +
>> > + * `version`. Initially (on the first hypercall) *MUST* be zero.
>> > + * `idx` index iterator. On the first call *MUST* be zero; on subsequent
>> > +   calls it varies.
>> > + * `nr` the maximum number of entries to populate.
>> > + * `pad` - *MUST* be zero.
>> > + * `status` virtual address of where to write `struct xen_xsplice_status`
>> > +   structures. Caller *MUST* allocate up to `nr` of them.
>> > + * `id` - virtual address of where to write the unique id of the payload.
>> > +   Caller *MUST* allocate up to `nr` of them. Each *MUST* be of
>> > +   **XEN_XSPLICE_NAME_SIZE** size.
>> > + * `len` - virtual address of where to write the length of each unique id
>> > +   of the payload. Caller *MUST* allocate up to `nr` of them. Each *MUST* 
>> > be
>> > +   of sizeof(uint32_t) (4 bytes).
>> > +
>> > +If the hypercall returns a positive number, that is the number (up to `nr`)
>> > +of payloads returned; `nr` is updated with the number of remaining
>> > +payloads, and `version` is updated (it may be the same across hypercalls;
>> > +if it varies, the data is stale and further calls could fail). The `status`,
>> > +`id`, and `len` arrays are updated at their designated index value (`idx`)
>> > +with the returned data.
>> > +
>> > +If the hypercall returns E2BIG the `count` is too big and should be
>> > +lowered.
>> 
>> s/count/nr/ ?
>> 
>> > +This operation can be preempted by the hypercall returning XEN_EAGAIN.
>> > +Retry.
>> 
>> Why is this necessary when preemption via the 'nr' field is already
>> possible?
> 
> I should explain that XEN_EAGAIN is the mechanism by which the hypervisor
> signals that it could fulfill only up to its 'nr' value.

But such a model seems to contradict "If the hypercall returns a
positive number, ..." above: either you expect a positive number
to be returned in this case, or -XEN_EAGAIN.

>> > +The v2 design must also have a mechanism for:
>> > +
>> > + * A dependency mechanism for the payloads. To use that information to
>> > +   load:
>> > +    - The appropriate payload. To verify that the payload is built against
>> > +      the hypervisor. This can be done via the `build-id`
>> > +      or via providing a copy of the old code - so that the hypervisor
>> > +      can verify it against the code in memory.
>> 
>> I was missing this above - do you really intend to do patching without
>> at least one of those two safety measures?
> 
> Ross wrote the patches and I will make them part of the patch series. But
> the problem is that there will now be over 30 patches - so to make it
> easier to review I was thinking of rolling them out in 'waves'. I can most
> certainly include it in the next posting.

Trying to break up large series is much appreciated, but I think this
shouldn't lead to stuff going in being overly fragile.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

