
Re: [Xen-devel] [RFC PATCH v2 10/26] ARM: GICv3: forward pending LPIs to guests



On Thu, 12 Jan 2017, Andre Przywara wrote:
> On 05/01/17 22:10, Stefano Stabellini wrote:
> > On Thu, 22 Dec 2016, Andre Przywara wrote:
> >> Upon receiving an LPI, we need to find the right VCPU and virtual IRQ
> >> number to get this IRQ injected.
> >> Iterate our two-level LPI table to find this information quickly when
> >> the host takes an LPI. Call the existing injection function to let the
> >> GIC emulation deal with this interrupt.
> >>
> >> Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
> >> ---
> >>  xen/arch/arm/gic-its.c    | 35 +++++++++++++++++++++++++++++++++++
> >>  xen/arch/arm/gic.c        |  6 ++++--
> >>  xen/include/asm-arm/irq.h |  8 ++++++++
> >>  3 files changed, 47 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/xen/arch/arm/gic-its.c b/xen/arch/arm/gic-its.c
> >> index e7ddd90..0d4ca1b 100644
> >> --- a/xen/arch/arm/gic-its.c
> >> +++ b/xen/arch/arm/gic-its.c
> >> @@ -72,6 +72,41 @@ static union host_lpi *gic_get_host_lpi(uint32_t plpi)
> >>      return &lpi_data.host_lpis[plpi / HOST_LPIS_PER_PAGE][plpi % HOST_LPIS_PER_PAGE];
> >>  }
> >>  
> >> +/* Handle incoming LPIs, which are a bit special, because they are potentially
> >> + * numerous and also only get injected into guests. Treat them specially here,
> >> + * by just looking up their target vCPU and virtual LPI number and hand it
> >> + * over to the injection function.
> >> + */
> >> +void do_LPI(unsigned int lpi)
> >> +{
> >> +    struct domain *d;
> >> +    union host_lpi *hlpip, hlpi;
> >> +    struct vcpu *vcpu;
> >> +
> >> +    WRITE_SYSREG32(lpi, ICC_EOIR1_EL1);
> >> +
> >> +    hlpip = gic_get_host_lpi(lpi);
> >> +    if ( !hlpip )
> >> +        return;
> >> +
> >> +    hlpi.data = hlpip->data;
> > 
> > Why can't we just reference hlpip directly throughout this function? Is
> > it for atomicity reasons?
> 
> Yes. We have to make sure that the LPI entry is consistent, but we don't
> want to (and probably can't) take a lock here and everywhere else where
> we touch it. We are fine with reading an "outdated" entry (which is just
> about to change), this is a benign race which can happen on real
> hardware as well.

In that case, it's best to use atomic functions.
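
For illustration, a minimal sketch of such a read (not part of the patch), assuming Xen's generic read_atomic()/write_atomic() helpers can handle a 64-bit access here and that union host_lpi exposes its raw value through the .data member the patch already uses:

    /* Hypothetical sketch: snapshot the whole host_lpi entry with one
     * atomic 64-bit read so we never observe a half-updated entry, then
     * only work on the local copy. */
    union host_lpi hlpi;

    hlpi.data = read_atomic(&hlpip->data);

The writer side would then update the entry with write_atomic() (or an equivalent single 64-bit store), so readers always see either the old or the new value, never a mix.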


> >> +    /* We may have mapped more host LPIs than the guest actually asked for. */
> >> +    if ( !hlpi.virt_lpi )
> >> +        return;
> >> +
> >> +    d = get_domain_by_id(hlpi.dom_id);
> >> +    if ( !d )
> >> +        return;
> >> +
> >> +    if ( hlpi.vcpu_id >= d->max_vcpus )
> 
> I am just seeing that I miss a put_domain(d) here and ....
> 
> >> +        return;
> >> +
> >> +    vcpu = d->vcpu[hlpi.vcpu_id];
> 
> ... here.

Oops, you are right.


> Which makes me wonder if it is legal to use a VCPU reference even though
> I "put back" the domain pointer?

I don't think so.


> Is there a get_vcpu()/put_vcpu() equivalent? Or is this supposed to
> covered by the domain pointer as well?
> 
> Or shall I use get_domain() the moment I enter the domain ID into the
> host LPI array and only "put" it when an entry gets changed or the LPI
> gets somehow else invalid (VCPU destroyed, domain destroyed)?

The pattern is to get_domain at the beginning of the implementation of
the function in the hypervisor and put_domain at the end of it. In this
case, I think you should replace the returns in this function with goto
out, and have an out label with put_domain.
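
For illustration, a sketch of how the tail of do_LPI() could look with that structure (field names are taken from the patch; I am assuming the injection call is the existing vgic_vcpu_inject_irq()):

    d = get_domain_by_id(hlpi.dom_id);
    if ( !d )
        return;

    if ( hlpi.vcpu_id >= d->max_vcpus )
        goto out;

    vcpu = d->vcpu[hlpi.vcpu_id];

    /* The domain reference is still held here, so using the vCPU
     * pointer is covered by it. */
    vgic_vcpu_inject_irq(vcpu, hlpi.virt_lpi);

out:
    put_domain(d);

That way there is exactly one place that drops the reference taken by get_domain_by_id(), and the vCPU is only dereferenced while the domain reference is still held.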

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
