Re: [Xen-devel] [PATCH v5.99.1 RFC 1/4] xen/arm: Duplicate gic-v2.c file to support hip04 platform version
On Wed, 2015-02-25 at 16:59 +0000, Ian Campbell wrote:
> > I think we should disable the build of all drivers in Xen by default,
> > except for the ARM standard compliant ones (for aarch64 the SBSA is a
> > nice summary of what is considered compliant), to keep the size of the
> > binary small.
>
> I don't think the SBSA is necessarily the best reference here, since it
> only defines what is standardised within the scope of "server
> systems" (whatever you take that to mean) and there are things which do
> not fall under that umbrella.
>
> That said I'm not sure what better reference there is.
>
> Maybe even on non-server systems the set of hardware which Xen has to
> drive itself is limited to things which are covered by the SBSA in
> practice, by the nature of the fact that the majority of the wacky stuff
> is driven from hardware domains.
>
> So maybe SBSA is OK then...

Having thought about this overnight I'm now not so sure of this again; I
don't think the SBSA is the definition we want. Let's take a step back...

The set of hardware which Xen needs to be able to drive (rather than
hand over to a hardware domain) is:

 * CPUs
 * Host interrupt controllers
 * Host timers
 * A UART
 * Second-stage MMUs
 * Second-stage SMMUs/IOMMUs (first-stage are used by domains)
 * PCI host bridges (in the near future)

(did I forget any?)

NB: I'm only considering host level stuff here. Our virtualised hardware
as exposed to the guest is well defined right now, and any conversation
about deviating from that set of hardware (e.g. providing a guest view
of a non-GIC-compliant virtual interrupt controller) would be part of a
separate, larger conversation about "hvm" style guests (and I'd, as you
might imagine, be very reluctant to add code to Xen itself to support
non-standard vGICs in particular).

From a "what does 'standards compliant' mean" PoV we have:

CPUs: Specified in the ARM ARM (v7=ARM DDI 0406, v8=ARM DDI 0487).
Uncontroversial, I hope!
Host interrupt controllers: Defined in the "ARM Generic Interrupt
Controller Architecture Specification" (v2=ARM IHI 0048B, v3=???).
Referenced from the ARMv8 ARM (but not required, AFAICT), but
nonetheless this is what we mean when we talk about the "standard
interrupt controller".

Host timers: Defined in the ARMv8 ARM "Generic Timers" chapter, and as
an extension to ARMv7 (I don't have the doc ref handy). For our purposes
such extensions are considered standardised[*].

UARTs: The SBSA defines some (pl011 only?), but there are lots of
others, including 8250-alike ones (ns16550 etc.), which is a well
established standard (from x86). Given that UART drivers are generally
small and pretty trivial, I think we can tolerate "non-standard" (i.e.
non-SBSA, non-8250) ones, so long as they are able to support our vuart
interface. I think the non-{pl011,8250} ones should be subject to
non-core (i.e. community) maintenance, as I mentioned previously, i.e.
they should be listed in MAINTAINERS other than under the core ARM
entry. If we decide to go ahead with this approach I'll ask the
appropriate people to update MAINTAINERS.

Second-stage MMU: Defined in the ARMv8 ARM, or the ARMv7 LPAE
extensions[*][**].

Second-stage SMMU/IOMMU: Defined in the "ARM System Memory Management
Unit Architecture Specification" (ARM IHI 0062D?).

PCI host bridges: We don't actually (properly) support PCI yet, but see
my mails in the "Enhance platform support for PCI" thread this morning
for the general shape we think it might take. The meaning of the PCI CFG
(CAM, ECAM etc.), MMIO and IO (MMIO mapped) regions are, I think, all
pretty well defined by the (various?) PCI spec(s). What's not quite so
clear cut is the discovery of where the controllers actually live (based
on the fact that Documentation/devicetree/bindings/pci/ has more than
one entry in it...).
The SBSA doesn't really cover this either; it says "The base address of
each ECAM region within the system memory map is IMPLEMENTATION DEFINED
and is expected to be discoverable from system firmware data."

However, I think it will be the case that most "PCI host bridge drivers"
for Xen will end up being fairly trivial affairs, i.e. parse the DT,
perhaps do a little bit of basic setup, register the resulting regions
with the PCI core and perhaps provide some simple accessors.

So, I think we can say that PCI controllers which export a set of PCI
standard CFG, MMIO and IO space regions (all as MMIO mapped regions),
and which just require a specific driver for discovery of the host
bridge and a small amount of start of day setup, should be considered
"standard" for our purposes.

So, based on the above, I think we overall have a pretty good idea what
"standardised" means for the hardware components which we care about.

I think the above is a workable definition of what it is reasonable to
expect the core Xen/ARM maintainers to look after (with that particular
hat on) vs. what interested members of the community should be expected
to step up and maintain (where the people who happen to be core Xen/ARM
maintainers may occasionally choose to wear such a hat too).

I think it also provides a workable definition of the line for Stefano's
HAS_NON_STANDARD_DRIVERS idea, if we want to go down that path.

Ian.

[*] For ARMv7 I think we can consider things like the Generic Timers and
LPAE extension specs to be morally equivalent to the corresponding bits
of the ARMv8 ARM, i.e. if it was merged into the ARMv8 ARM then we treat
it as part of the ARMv7 ARM from an "is it standardised" PoV.

[**] The LPAE extensions include/are mixed with the hyp mode page table
format, so we pretty certainly need them.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel