
Re: [Xen-devel] [PATCH v5.99.1 RFC 1/4] xen/arm: Duplicate gic-v2.c file to support hip04 platform version



On Thu, 2015-02-26 at 13:54 +0000, Julien Grall wrote:
> > NB: I'm only considering host level stuff here. Our virtualised hardware
> > as exposed to the guest is well defined right now and any conversation
> > about deviating from the set of hardware (e.g. providing a guest view to
> > a non-GIC compliant virtual interrupt controller) would be part of a
> > separate larger conversation about "hvm" style guests (and I'd, as you
> > might imagine, be very reluctant to add code to Xen itself to support
> > non-standard vGICs in particular).
> 
> That would mean that on platforms such as Hisilicon's, guests (including
> DOM0) won't be able to use more than 8 CPUs. But I guess this is a fair
> trade for having a GIC which differs from the spec.

Correct.

> 
> > From a "what does 'standards compliant' mean" PoV we have:
> > 
> > CPUs:
> > 
> >         Specified in the ARM ARM (v7=ARM DDI 0406, v8=ARM DDI 0487).
> >         
> >         Uncontroversial, I hope!
> >         
> > Host interrupt controllers:
> > 
> >         Defined in "ARM Generic Interrupt Controller Architecture
> >         Specification" (v2=ARM IHI 0048B, v3=???).
> 
> AFAICT, for GICv3 there is a hardware spec (though not publicly
> available) but no developer spec.


The "Architecture Specification" is the one we want; I don't know if
that is what you meant by "hardware spec". I have a copy, although as
you say I don't think it is public yet.

> 
> >         Referenced from ARMv8 ARM (but not required AFAICT) but
> >         nonetheless this is what we consider when we talk about the
> >         "standard interrupt controller".
> >         
> > Host timers:
> > 
> >         Defined in the ARMv8 ARM "Generic Timers" chapter.
> >         
> >         Defined as an extension to ARMv7 (don't have doc ref handy). For
> >         our purposes such extensions are considered standardized[*].
> 
> It's worth mentioning that we don't support the generic memory-mapped
> timer for now. I don't know if we aim to support it.

I don't know either, yet. For now we don't, that's correct.

> > UARTS:
> > 
> >         SBSA defines some (pl011 only?), but there are lots, including
> >         8250-alike ones (ns16550 etc), which are a well established
> >         standard (from x86).
> >         
> >         Given that UART drivers are generally small and pretty trivial I
> >         think we can tolerate "non-standard" (i.e. non-SBSA, non-8250)
> >         ones, so long as they are able to support our vuart interface.
> >         
> >         I think the non-{pl011,8250} ones should be subject to non-core
> >         (i.e. community) maintenance as I mentioned previously, i.e.
> >         should be listed in MAINTAINERS other than under the core ARM
> >         entry. If we decide to go ahead with this approach I'll ask the
> >         appropriate people to update MAINTAINERS.
> 
> At the moment we have 3 "non-compliant" UARTs in Xen: exynos4210, scif
> and omap.
> 
> Having a maintainer per non-compliant UART would make some generic
> changes more complicated to upstream.

In reality by a negligible amount, I expect.

>  Indeed, it would require an ack from each of them.

I don't think that's true; an update to core which requires updates to
all drivers shouldn't be blocked by a non-responsive maintainer. If they
don't respond then their driver might break.

This all works fine for much larger projects. Take Linux, for example:
you don't see them getting stalled on core infrastructure updates
because the author of some niche serial driver isn't responding to
their mail. They do the sensible thing and get on with it.
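As a purely illustrative sketch (the contact name and address are made
up, and the file path assumes the driver lives under xen/drivers/char/),
a non-core UART entry in MAINTAINERS might look something like:

        EXYNOS4210 UART DRIVER
        M:      A. N. Other <a.n.other@xxxxxxxxxxx>
        S:      Supported
        F:      xen/drivers/char/exynos4210-uart.c

i.e. a separate entry with its own M: line, rather than listing the
file under the core ARM entry, so patches to that file get CCed to its
own maintainer.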

> [..]
> 
> > I think the above is a workable definition of what it is reasonable to
> > expect the core Xen/ARM maintainers to look after (with that particular
> > hat on) vs. what it should be expected for interested members of the
> > community to step up and maintain (where the people who happen to be
> > core Xen/ARM maintainers may occasionally chose to have such a hat too.)
> 
> I have a few questions about it:
>       - What happens if the maintainer of a specific driver (UART/IRQ
> controller) doesn't answer?

Then their driver might break or bitrot, and eventually be removed.

>       - How do we handle a possible security issue related to a specific
> driver? Is it even considered a security issue?

In the same way we do today with any security issue, which is to say the
security team will deal with it, bringing in people as they feel
appropriate (and as the discoverer agrees). This is no different to a
bug in any other bit of Xen whose maintainer is not on the security
team.

>       - As a new driver would be tied to a new set of maintainers, how do
> we decide that a new driver is accepted into Xen?

In the normal way.

> Given the governance spec [1], we may decide to reject a maintainer for
> some reason. Does that mean the driver is rejected too?

If someone writes a driver for a h/w component and wants to be the
maintainer then there is no normal reason to reject them, IMHO.

To put it another way: if we don't want to accept them as maintainer of
the driver which they have written, then why would we want to accept the
driver itself?

> Overall, I think we should clearly define the conditions for
> acceptance/maintenance of a specific driver.

This will follow the normal development process and patch acceptance
criteria; I don't think we need to make any of this more complicated
than that.

TBH, I think you are worrying about the process stuff unnecessarily;
this will all just work like it already does.

> 
> [..]
> 
> > [**] The LPAE extensions include/are mixed with the hyp mode page table
> > format, so we pretty certainly need them.
> 
> Right, the ARM spec requires the LPAE extensions when virtualization is
> supported.
> 
> Regards,
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

