Re: [Xen-devel] [PATCH 9/9] x86/shim: pass through vcpu runstate to L0 Xen
On Fri, Jan 19, 2018 at 11:18:38AM +0000, Roger Pau Monné wrote:
> On Thu, Jan 18, 2018 at 06:16:52PM +0000, Wei Liu wrote:
> > diff --git a/xen/common/domain.c b/xen/common/domain.c
> > index 558318e852..189ffac9b1 100644
> > --- a/xen/common/domain.c
> > +++ b/xen/common/domain.c
> > @@ -45,6 +45,7 @@
> >
> > #ifdef CONFIG_X86
> > #include <asm/guest.h>
> > +#include <asm/guest/hypercall.h>
> > #endif
> >
> > /* Linux config option: propageted to domain0 */
> > @@ -1425,6 +1426,15 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
> >          if ( !guest_handle_okay(area.addr.h, 1) )
> >              break;
> >
> > +#if CONFIG_X86
> > +        if ( pv_shim )
> > +        {
> > +            rc = xen_hypercall_vcpu_op(VCPUOP_register_runstate_memory_area,
> > +                                       vcpuid, &area);
> > +            break;
> > +        }
> > +#endif
>
> This only fixes VCPUOP_register_runstate_memory_area, but
> VCPUOP_get_runstate_info will still report wrong information. I've
Yes, it appears that we should pass that call through as well.
> also wondered whether simply returning L0 information is correct. To
> get the exact information the shim should actually return the L0
> information plus whatever time it steals from the guest.
>
I think that should be mostly correct. The error margin in the shim
should be very small.
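
For VCPUOP_get_runstate_info I was thinking of something along these
lines (untested sketch, just mirroring the register_runstate_memory_area
hunk above):

    case VCPUOP_get_runstate_info:
    {
        struct vcpu_runstate_info runstate;

#ifdef CONFIG_X86
        if ( pv_shim )
        {
            /* L0 does the real scheduling, so ask it for the runstate. */
            rc = xen_hypercall_vcpu_op(VCPUOP_get_runstate_info, vcpuid,
                                       &runstate);
            if ( !rc && copy_to_guest(arg, &runstate, 1) )
                rc = -EFAULT;
            break;
        }
#endif

        vcpu_runstate_get(v, &runstate);

        if ( copy_to_guest(arg, &runstate, 1) )
            rc = -EFAULT;

        break;
    }
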
> Also, before adding more hooks to do_vcpu_op I would attempt to add a
> pv_shim_do_vcpu_op helper and patch the hypercall table, in order to
> avoid modifying more common code. AFAICT this doesn't require adding
> much compat code to the shim implementation.
>
Good idea.
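
Something like the below perhaps (rough sketch only; the helper names are
made up, the runstate forwarding would be as in the snippets earlier in
the thread, and I haven't thought about the compat side yet):

    /* xen/arch/x86/pv/shim.c */
    long pv_shim_vcpu_op(int cmd, unsigned int vcpuid,
                         XEN_GUEST_HANDLE_PARAM(void) arg)
    {
        switch ( cmd )
        {
        case VCPUOP_register_runstate_memory_area:
        case VCPUOP_get_runstate_info:
            /* Forward the runstate sub-ops to L0. */
            return pv_shim_runstate_op(cmd, vcpuid, arg);

        default:
            /* Everything else keeps using the common implementation. */
            return do_vcpu_op(cmd, vcpuid, arg);
        }
    }

and then swap the __HYPERVISOR_vcpu_op entry in the PV hypercall table
for pv_shim_vcpu_op when pv_shim is set, so common code stays untouched.
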
I'm going to revisit this idea next week when I have more time. In the
meantime I will repost the fix patches I have.
Wei.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel