
Re: [PATCH RFC 10/10] common: convert vCPU info area registration



Hi Jan,

On 19/12/2022 15:01, Jan Beulich wrote:
On 17.12.2022 16:53, Julien Grall wrote:
On 19/10/2022 08:45, Jan Beulich wrote:
Switch to using map_guest_area(). Noteworthy differences from
map_vcpu_info():
- remote vCPU-s are paused rather than checked for being down (which in
    principle can change right after the check),
- the domain lock is taken for a much smaller region,
- areas could now be registered more than once, if we want this,
- as a corner case registering the area at GFN 0 offset 0 is no longer
    possible (this is considered an invalid de-registration request).

Note that this eliminates a bug in copy_vcpu_settings(): The function
did allocate a new page regardless of the GFN already having a mapping,
thus in particular breaking the case of two vCPU-s having their info
areas on the same page.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
RFC: While the "no re-registration" check is retained, it is now racy.
       If we really think it needs retaining _and_ properly enforcing,
       then locking will be needed, but I don't think we can hold the
       domain lock around a call to map_guest_area(), first and foremost
       because of its use of vcpu_pause().

A function like evtchn_2l_unmask() may access a vcpu_info that is not the
current one.

So I believe the check needs to be retained and properly enforced.
Otherwise, the old structure may still be used for a short time even if
it has been freed.

Good point, I'll try to come up with something suitable.
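
One (entirely untested) possibility, just to put something concrete on the
table, would be to claim the slot atomically before any of the mapping work,
so the domain lock wouldn't need to be held across map_guest_area(). The
"registering" flag below is made up:

    /* Untested sketch; "registering" is a hypothetical per-area flag. */
    if ( test_and_set_bool(v->vcpu_info_area.registering) )
        return -EBUSY;                          /* already (being) registered */

    rc = map_guest_area(/* ... as in the patch ... */);
    if ( rc )
        v->vcpu_info_area.registering = false;  /* permit a retry on failure */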

RFC: Is the GFN 0 offset 0 behavioral change deemed acceptable?

I would say no (see the previous discussion for more details).

Of course - with 0 going to be replaced in map_guest_area(), the check
there needs to be adjusted accordingly. And ~0 was already invalid to
pass in here, due to the alignment check.
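
Purely to illustrate what that adjustment might look like (sentinel value and
names guessed, not taken from the patch):

    /* Illustration only: if the "no area" sentinel moves from 0 to ~0,
     * GFN 0 offset 0 becomes an ordinary registration again, while ~0
     * costs nothing since the alignment check rejects it anyway. */
    if ( gaddr == ~(paddr_t)0 )                       /* new sentinel */
        return -EINVAL;
    if ( gaddr & (__alignof__(vcpu_info_t) - 1) )     /* rejects ~0 too */
        return -EINVAL;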

RFC: To have just a single instance of casts to vcpu_info_t *,
       introducing a macro (or inline function if header dependencies
       permit) might be nice. However, the only sensible identifier for it
       would imo be vcpu_info(). Yet we already have a macro of that name.
       With some trickery it might be possible to overload the macro
       (making the "field" argument optional), but this may not be
       desirable for other reasons (it could e.g. be deemed confusing).
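
FWIW, the overloading itself could be done with the usual argument-counting
trick. Rough, untested sketch, with a made-up vcpu_info_ptr() helper and the
native/compat handling (which the real macro would still need) left out:

    #define vcpu_info_ptr(v)       ((vcpu_info_t *)(v)->vcpu_info_area.map)
    #define vcpu_info_1(v)         vcpu_info_ptr(v)
    #define vcpu_info_2(v, field)  (vcpu_info_ptr(v)->field)
    #define vcpu_info_(a, b, name, ...) name
    /* Picks vcpu_info_2 when a field is given, vcpu_info_1 otherwise. */
    #define vcpu_info(...) \
        vcpu_info_(__VA_ARGS__, vcpu_info_2, vcpu_info_1,)(__VA_ARGS__)

Whether that's any less confusing than the open-coded casts is of course the
real question.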

You might be able to reduce the number of casts by using local variables.

But it is not entirely clear why we can't use the existing vcpu_info()
in many of the cases. For instance...



--- a/xen/arch/x86/include/asm/shared.h
+++ b/xen/arch/x86/include/asm/shared.h
@@ -27,16 +27,16 @@ static inline void arch_set_##field(stru
   static inline type arch_get_##field(const struct vcpu *v)       \
   {                                                               \
       return !has_32bit_shinfo(v->domain) ?                       \
-           v->vcpu_info->native.arch.field :                    \
-           v->vcpu_info->compat.arch.field;                     \
+           ((const vcpu_info_t *)v->vcpu_info_area.map)->native.arch.field : \
+           ((const vcpu_info_t *)v->vcpu_info_area.map)->compat.arch.field;  \

... here.

vcpu_info() has a property which gets in the way of using the macro
here. Since __vcpu_info() wants to return something which can also
be used as lvalue, the 2nd and 3rd operands of the conditional
operator need to resolve to the same pointer type. Hence the smaller
(compat) type is the only one which is safe to use. See the comment
next to __shared_info()'s definition (which is the one __vcpu_info()'s
refers to).
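
I.e. roughly (generic illustration, not the actual macro):

    /* Both ?: arms must resolve to the same pointer type for the result to
     * stay usable as an lvalue, hence the cast on the "native" arm - and
     * the smaller (compat) field type is the only one that is safe to
     * access the shared storage through. */
    struct nat_s { unsigned long field; };
    struct cmp_s { unsigned int  field; };

    #define FIELD(use_native, np, cp)                        \
        (*((use_native) ? (typeof(&(cp)->field))&(np)->field \
                        : &(cp)->field))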

Thanks for the pointer! I hadn't noticed the subtlety. Then the open-coded casts (even though I dislike them) are probably the way to go for now.

As I mentioned above, it would be nice if they could be reduced by using local variables (this would also reduce the length of the lines).
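
Something along these lines, perhaps (sketch only, re-using the names from the
hunk quoted above, and assuming .map is plain void * so the assignment absorbs
the cast):

    static inline type arch_get_##field(const struct vcpu *v)       \
    {                                                                \
        const vcpu_info_t *vi = v->vcpu_info_area.map;               \
                                                                     \
        return !has_32bit_shinfo(v->domain) ? vi->native.arch.field  \
                                            : vi->compat.arch.field; \
    }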

Cheers,

--
Julien Grall


