
Re: [Xen-devel] [RFC PATCH] xen, apic: Setup our own APIC driver and validator for APIC IDs.



On Wed, Feb 11, 2015 at 09:53:26AM +0000, David Vrabel wrote:
> On 10/02/15 20:33, Konrad Rzeszutek Wilk wrote:
> > On Thu, Jan 22, 2015 at 10:00:55AM +0000, David Vrabel wrote:
> >> On 21/01/15 21:56, Konrad Rzeszutek Wilk wrote:
> >>> +static struct apic xen_apic = {
> >>> + .name = "Xen",
> >>> + .probe = probe_xen,
> >>> + /* The rest is copied from the default. */
> >>
> >> Explicitly initialize all required members here.  memcpy'ing from the
> >> default makes it far too unclear which ops this apic driver actually
> >> provides.
> > 
> > RFC (boots under PV, PVHVM, PV dom0):

And it boots on the 288 CPU machine (the original problem),
.. though it exposes two other issues:


(XEN) SMP: Allowing 288 CPUs (0 hotplug CPUs)
(XEN) Brought up 288 CPUs
..
(XEN) Dom0 has maximum 255 VCPUs
(XEN) xentrace: p157 mfn 225524 offset 35896
(XEN) xentrace: p255 mfn 21fe3e offset 58142
[    0.000000] smpboot: Allowing 288 CPUs, 0 hotplug CPUs
[    0.000000] xen_filter_cpu_maps: CPU255 is not up!
..
[    0.000000] xen_filter_cpu_maps: CPU287 is not up!
[    0.000000] xen_filter_cpu_maps: nr_cpu_ids: 288, subtract: 33
[    0.000000]  RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=255.

... with the result that we can't bring CPUs 256-287 up.
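For reference, xen_filter_cpu_maps does roughly the following (paraphrasing
from memory, so not the exact code): it asks the hypervisor which VCPUs are
actually up and drops the rest from the possible/present masks, which is
where the "subtract: 33" above comes from:

static void __init xen_filter_cpu_maps(void)
{
        int i, rc;
        unsigned int subtract = 0;

        if (!xen_initial_domain())
                return;

        for (i = 0; i < nr_cpu_ids; i++) {
                /* Ask Xen whether this VCPU exists (is up) for us. */
                rc = HYPERVISOR_vcpu_op(VCPUOP_is_up, i, NULL);
                if (rc >= 0) {
                        set_cpu_possible(i, true);
                } else {
                        set_cpu_possible(i, false);
                        set_cpu_present(i, false);
                        subtract++;
                }
        }
        if (subtract)
                nr_cpu_ids = nr_cpu_ids - subtract;
}

Since the hypervisor only gives Dom0 255 VCPUs, VCPUOP_is_up fails for
255..287 and nr_cpu_ids ends up clamped to 255.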

It looks as if we are limiting Dom0 to 255. That seems to be due to:

> > 
> > From 27702ef618af068736d13aeadcbcacd2a6780e82 Mon Sep 17 00:00:00 2001
> > From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> > Date: Fri, 9 Jan 2015 17:55:52 -0500
> > Subject: [PATCH] xen,apic: Setup our own APIC driver and validator for APIC
> >  IDs.
> > 
> > Via CPUID masking and the various apic-> overrides we
> > effectively neuter the APIC for PV guests, but still leave the
> > default APIC driver in place. That is OK, as a PV guest should
> > never access any APIC registers. However, the APIC driver is
> > also used to limit the number of CPUs when APIC IDs look
> > invalid - and since we mask x2APIC out of the CPUID, any APIC
> > ID above 0xFF is deemed invalid by the default APIC routines.
> > 
> > As such, add a new routine to validate APIC IDs, which will
> > only be used if the native CPUID tells us the system is
> > using x2APIC.
> > 
> > This allows us to boot with more than 255 CPUs when running
> > as the initial domain.
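For the record, the gist of the new validator is something like the below
(a rough sketch, not the exact hunks from the patch; the xen_x2apic_host()
helper name is made up here):

static bool __init xen_x2apic_host(void)
{
        unsigned int eax = 1, ebx, ecx, edx;

        /* Raw CPUID, so the Xen masking does not hide the x2APIC bit. */
        native_cpuid(&eax, &ebx, &ecx, &edx);
        return !!(ecx & (1U << 21));    /* CPUID.1:ECX bit 21 = x2APIC */
}

static int xen_id_always_valid(int apicid)
{
        /* Xen tells us which CPUs exist via hypercalls, so any ID is OK. */
        return 1;
}

If xen_x2apic_host() says yes, we wire up the permissive validator instead
of default_apic_id_valid(), which does not accept the larger x2APIC IDs.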
> 
> This looks quite reasonable to me.  What order are apic drivers tried in?

No particular order. Or rather, the order is whatever order the
compiler/linker happens to lay the drivers out in.
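To be more exact (paraphrasing from memory, so the details may be a bit
off): each driver registers itself with apic_driver(), which drops a
pointer into the .apicdrivers section, and the generic probe code walks
that section in link order and takes the first ->probe() that returns
non-zero:

/* Registration - puts &xen_apic into the .apicdrivers section. */
apic_driver(xen_apic);

/* Roughly what the generic probe loop does: */
struct apic **drv;

for (drv = __apicdrivers; drv < __apicdrivers_end; drv++) {
        if ((*drv)->probe && (*drv)->probe()) {
                apic = *drv;
                break;
        }
}

So whether xen_apic wins over the bare-metal drivers depends purely on
where the linker put it.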

>  Do we need a mechanism to ensure that this one is tried before any for
> real hardware?

There are two probe mechanisms - the .probe hook, and then later on
x86_platform.apic_post_init, which we can also utilize to make sure
the APIC is set to the Xen one.

Let me add that in.
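Something along these lines (just a sketch, not tested; xen_apic_check is
simply what I would call the hook):

static void __init xen_apic_check(void)
{
        if (apic == &xen_apic)
                return;

        pr_info("Switching APIC routing from %s to %s.\n",
                apic->name, xen_apic.name);
        apic = &xen_apic;
}

void __init xen_init_apic(void)
{
        ...
        /* Runs after the generic probe has picked a driver. */
        x86_platform.apic_post_init = xen_apic_check;
}

That way, even if one of the bare-metal drivers' ->probe() matched first,
we get the final say in apic_post_init.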

> 
> > +static struct apic xen_apic = {
> > +   .name                           = "Xen PV",
> > +   .probe                          = probe_xen,
> > +   .acpi_madt_oem_check            = xen_madt_oem_check,
> > +   .apic_id_valid                  = xen_id_always_valid,
> > +   .apic_id_registered             = xen_id_always_registered,
> > +
> > +   .irq_delivery_mode              = 0xbeef, /* used in native_compose_msi_msg only */
> > +   .irq_dest_mode                  = 0xbeef, /* used in native_compose_msi_msg only */
> 
> Omit members that are unused, leaving them as 0 or NULL.
> 
> > +   .target_cpus                    = default_target_cpus,
> > +   .disable_esr                    = 0,
> > +   .dest_logical                   = 0, /* default_send_IPI_ use it but we use our own. */
> > +   .check_apicid_used              = default_check_apicid_used, /* Used on 32-bit */
> > +
> > +   .vector_allocation_domain       = flat_vector_allocation_domain,
> > +   .init_apic_ldr                  = xen_noop, /* setup_local_APIC calls it */
> > +
> > +   .ioapic_phys_id_map             = default_ioapic_phys_id_map, /* Used on 32-bit */
> > +   .setup_apic_routing             = NULL,
> > +   .cpu_present_to_apicid          = default_cpu_present_to_apicid,
> > +   .apicid_to_cpu_present          = physid_set_mask_of_physid, /* Used on 32-bit */
> > +   .check_phys_apicid_present      = default_check_phys_apicid_present, /* smp_sanity_check needs it */
> > +   .phys_pkg_id                    = xen_phys_pkg_id, /* detect_ht */
> > +
> > +   .get_apic_id                    = xen_get_apic_id,
> > +   .set_apic_id                    = xen_set_apic_id, /* Can be NULL on 32-bit. */
> > +   .apic_id_mask                   = 0xFF << 24, /* Used by verify_local_APIC. Match with what xen_get_apic_id does. */
> > +
> > +   .cpu_mask_to_apicid_and         = flat_cpu_mask_to_apicid_and,
> > +
> > +   .send_IPI_mask                  = xen_send_IPI_mask,
> > +   .send_IPI_mask_allbutself       = xen_send_IPI_mask_allbutself,
> > +   .send_IPI_allbutself            = xen_send_IPI_allbutself,
> > +   .send_IPI_all                   = xen_send_IPI_all,
> > +   .send_IPI_self                  = xen_send_IPI_self,
> > +
> > +   .wait_for_init_deassert         = false, /* Used by AP bootup - smp_callin which we don't use */
> 
> Omit.
> 
> David
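Will do - I'll trim the initializer to just the members we actually need.
With designated initializers anything we leave out is zero-initialized
anyway, so it would end up looking something like this (just to illustrate,
not the full list of members we do need):

static struct apic xen_apic = {
        .name                   = "Xen PV",
        .probe                  = probe_xen,
        .apic_id_valid          = xen_id_always_valid,
        .apic_id_registered     = xen_id_always_registered,
        .get_apic_id            = xen_get_apic_id,
        .send_IPI_mask          = xen_send_IPI_mask,
        /* ... everything omitted stays 0/NULL ... */
};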



 

