RE: [PATCH for-4.14] x86/rtc: provide mediated access to RTC for PVH dom0
> -----Original Message-----
> From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
> Sent: 05 June 2020 08:50
> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: paul@xxxxxxx; Roger Pau Monne <roger.pau@xxxxxxxxxx>;
> Jan Beulich <jbeulich@xxxxxxxx>; Andrew Cooper <andrew.cooper3@xxxxxxxxxx>;
> Wei Liu <wl@xxxxxxx>
> Subject: [PATCH for-4.14] x86/rtc: provide mediated access to RTC for PVH dom0
>
> At some point (maybe PVHv1?) mediated access to the RTC was provided
> for PVH dom0 using the PV code paths (guest_io_{write/read}). That
> code has since been made PV-specific and unhooked from the current
> PVH IO path. This patch provides such mediated access to the RTC for
> PVH dom0, just like it's provided for a classic PV dom0.
>
> Instead of re-using the PV paths implement such handler together with
> the vRTC code for HVM, so that calling rtc_init will setup the
> appropriate handlers for all HVM based guests.
>
> Without this a Linux PVH dom0 will read garbage when trying to access
> the RTC, and one vCPU will be constantly looping in
> rtc_timer_do_work.
>
> Note that such an issue doesn't happen on domUs because the ACPI
> NO_CMOS_RTC flag is set in the FADT, which prevents the OS from
> accessing the RTC.
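As an aside on that flag: an OS is expected to consult the IA-PC boot
architecture flags in the FADT before touching the CMOS RTC ports at all.
A minimal sketch of such a check, assuming ACPICA-style definitions (the
struct below is illustrative and elides every other FADT field; "fadt"
would come from the platform's parsed ACPI tables):

    /* Sketch only: honour the NO_CMOS_RTC boot flag before any RTC access. */
    #include <stdbool.h>
    #include <stdint.h>

    #define ACPI_FADT_NO_CMOS_RTC (1 << 5) /* IAPC_BOOT_ARCH bit 5 */

    struct fadt_subset {
        /* ... other FADT fields elided ... */
        uint16_t boot_flags; /* IA-PC boot architecture flags */
    };

    static bool cmos_rtc_present(const struct fadt_subset *fadt)
    {
        return !(fadt->boot_flags & ACPI_FADT_NO_CMOS_RTC);
    }
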
>
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> ---
> for-4.14 reasoning: the fix is completely isolated to PVH dom0, and
> as such the risk of causing issues to other guest types is very low,
> but without this fix one vCPU of a Linux dom0 will constantly loop
> over rtc_timer_do_work with 100% CPU usage, at least when using
> Linux 4.19 or newer.
> ---
I can't say I'm a big fan of the code duplication with emul-priv-op.c, but it's
not much code and it does keep the patch small. If the maintainers are happy
to ack it, then consider this...
Release-acked-by: Paul Durrant <paul@xxxxxxx>
> xen/arch/x86/hvm/rtc.c | 69 ++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 69 insertions(+)
>
> diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
> index 5bbbdc0e0f..5d637cf018 100644
> --- a/xen/arch/x86/hvm/rtc.c
> +++ b/xen/arch/x86/hvm/rtc.c
> @@ -808,10 +808,79 @@ void rtc_reset(struct domain *d)
> s->pt.source = PTSRC_isa;
> }
>
> +/* RTC mediator for HVM hardware domain. */
> +static unsigned int hw_read(unsigned int port)
> +{
> + const struct domain *currd = current->domain;
> + unsigned long flags;
> + unsigned int data = 0;
> +
> + switch ( port )
> + {
> + case RTC_PORT(0):
> + data = currd->arch.cmos_idx;
> + break;
> +
> + case RTC_PORT(1):
> + spin_lock_irqsave(&rtc_lock, flags);
> + outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
> + data = inb(RTC_PORT(1));
> + spin_unlock_irqrestore(&rtc_lock, flags);
> + break;
> + }
> +
> + return data;
> +}
> +
> +static void hw_write(unsigned int port, unsigned int data)
> +{
> + struct domain *currd = current->domain;
> + unsigned long flags;
> +
> + switch ( port )
> + {
> + case RTC_PORT(0):
> + currd->arch.cmos_idx = data;
> + break;
> +
> + case RTC_PORT(1):
> + spin_lock_irqsave(&rtc_lock, flags);
> + outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
> + outb(data, RTC_PORT(1));
> + spin_unlock_irqrestore(&rtc_lock, flags);
> + break;
> + }
> +}
> +
> +static int hw_rtc_io(int dir, unsigned int port, unsigned int size,
> + uint32_t *val)
> +{
> + if ( size != 1 )
> + {
> + gdprintk(XENLOG_WARNING, "bad RTC access size (%u)\n", size);
> + *val = ~0;
> + return X86EMUL_OKAY;
> + }
> +
> + if ( dir == IOREQ_WRITE )
> + hw_write(port, *val);
> + else
> + *val = hw_read(port);
> +
> + return X86EMUL_OKAY;
> +}
> +
> void rtc_init(struct domain *d)
> {
> RTCState *s = domain_vrtc(d);
>
> + if ( is_hardware_domain(d) )
> + {
> + /* Hardware domain gets mediated access to the physical RTC. */
> + register_portio_handler(d, RTC_PORT(0), 2, hw_rtc_io);
> + return;
> + }
> +
> if ( !has_vrtc(d) )
> return;
>
> --
> 2.26.2
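For reference, the hw_read/hw_write pair above implements the standard
CMOS index/data protocol: a write to RTC_PORT(0) (0x70) selects a
register, with bit 7 doubling as the NMI-disable bit on PC-compatible
chipsets (hence the "& 0x7f", which keeps NMI enabled), and RTC_PORT(1)
(0x71) then transfers the data byte. The same access pattern can be
demonstrated stand-alone; a sketch for Linux user space on x86 (assumes
glibc's <sys/io.h> and root privileges; reading register 0, the RTC
seconds, is just an example):

    /* Sketch only: read one CMOS/RTC register via the index/data ports. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/io.h>

    #define RTC_PORT(x) (0x70 + (x)) /* same layout as Xen's mc146818rtc.h */

    int main(void)
    {
        unsigned int reg = 0; /* register 0: RTC seconds (example) */

        if ( ioperm(RTC_PORT(0), 2, 1) ) /* gain access to ports 0x70-0x71 */
        {
            perror("ioperm");
            return EXIT_FAILURE;
        }

        outb(reg & 0x7f, RTC_PORT(0)); /* select register; keep NMI enabled */
        printf("CMOS reg %#x = %#x\n", reg, inb(RTC_PORT(1)));
        return EXIT_SUCCESS;
    }

Note the sketch takes no equivalent of rtc_lock; in the hypervisor the
lock is needed because dom0's mediated accesses and Xen's own CMOS time
reads can otherwise race on the shared index port.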