
Re: [Xen-devel] [RFC PATCH] xen, apic: Setup our own APIC driver and validator for APIC IDs.



On 10/02/15 20:33, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 22, 2015 at 10:00:55AM +0000, David Vrabel wrote:
>> On 21/01/15 21:56, Konrad Rzeszutek Wilk wrote:
>>> +static struct apic xen_apic = {
>>> +   .name = "Xen",
>>> +   .probe = probe_xen,
>>> +   /* The rest is copied from the default. */
>>
>> Explicitly initialize all required members here.  memcpy'ing from the
>> default makes it far too unclear which ops this apic driver actually
>> provides.
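
To illustrate the pattern being asked for (a sketch only; "apic_default"
as the copy source is an assumption, not taken from the patch):

        /* Opaque: which ops does this driver actually provide? */
        memcpy(&xen_apic, &apic_default, sizeof(xen_apic));
        xen_apic.name  = "Xen";
        xen_apic.probe = probe_xen;

        /* Explicit: every op is visible at the definition itself. */
        static struct apic xen_apic = {
                .name  = "Xen",
                .probe = probe_xen,
                /* ... each remaining required op named here ... */
        };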
> 
> RFC (boots under PV, PVHVM, PV dom0):
> 
> From 27702ef618af068736d13aeadcbcacd2a6780e82 Mon Sep 17 00:00:00 2001
> From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> Date: Fri, 9 Jan 2015 17:55:52 -0500
> Subject: [PATCH] xen,apic: Setup our own APIC driver and validator for APIC
>  IDs.
> 
> Via CPUID masking and the various apic-> overrides, PV guests
> effectively run with the default APIC driver. That is OK, as a
> PV guest should never access any APIC registers. However, the
> APIC IDs are also used to limit the number of usable CPUs, and
> since we mask x2APIC from the CPUID, any APIC ID above 0xFF is
> deemed invalid by the default APIC routines.
> 
> As such, add a new routine to validate APIC IDs; it is only
> used if the native CPUID tells us the system is using x2APIC.
> 
> This allows us to boot with more than 255 CPUs when running
> as the initial domain.
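
For reference, the default validator of this era is roughly "return
apicid < 255;" once x2APIC is masked, hence the need for a Xen-specific
hook.  A minimal sketch of such a validator, assuming the hypervisor
has already vetted the IDs it exposes (the patch's exact body may
differ):

        static int xen_id_always_valid(int apicid)
        {
                /* Under Xen the hypervisor owns the real (x2)APIC, so
                 * any ID it reports, including those above 0xFF, can
                 * be trusted. */
                return 1;
        }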

This looks quite reasonable to me.  In what order are apic drivers
tried?  Do we need a mechanism to ensure that this one is tried before
any driver for real hardware?
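
For reference: each x86 apic driver registers itself with the
apic_driver() macro, which places a pointer into the .apicdrivers
section, and the probe loop walks that section in link order, taking
the first driver whose ->probe() returns true.  Abridged from
arch/x86/kernel/apic/probe_64.c of this era:

        struct apic **drv;

        for (drv = __apicdrivers; drv < __apicdrivers_end; drv++) {
                if ((*drv)->probe && (*drv)->probe()) {
                        apic = *drv;    /* first match wins */
                        break;
                }
        }

So the order is the drivers' link order; guaranteeing that the Xen
driver is tried first would mean either placing it appropriately or
setting the apic pointer directly during early Xen init.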

> +static struct apic xen_apic = {
> +     .name                           = "Xen PV",
> +     .probe                          = probe_xen,
> +     .acpi_madt_oem_check            = xen_madt_oem_check,
> +     .apic_id_valid                  = xen_id_always_valid,
> +     .apic_id_registered             = xen_id_always_registered,
> +
> +     .irq_delivery_mode              = 0xbeef, /* used in native_compose_msi_msg only */
> +     .irq_dest_mode                  = 0xbeef, /* used in native_compose_msi_msg only */

Omit members that are unused, leaving them as 0 or NULL.
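
With a designated initializer, any member that is not named is
implicitly zero-initialized, so dropping the unused hooks loses
nothing:

        static struct apic xen_apic = {
                .name  = "Xen PV",
                .probe = probe_xen,
                /* .irq_delivery_mode, .irq_dest_mode, etc. omitted:
                 * C guarantees unnamed members are 0/NULL here. */
        };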

> +     .target_cpus                    = default_target_cpus,
> +     .disable_esr                    = 0,
> +     .dest_logical                   = 0, /* default_send_IPI_* uses it, but we use our own. */
> +     .check_apicid_used              = default_check_apicid_used, /* Used on 32-bit */
> +
> +     .vector_allocation_domain       = flat_vector_allocation_domain,
> +     .init_apic_ldr                  = xen_noop, /* setup_local_APIC calls it */
> +
> +     .ioapic_phys_id_map             = default_ioapic_phys_id_map, /* Used on 32-bit */
> +     .setup_apic_routing             = NULL,
> +     .cpu_present_to_apicid          = default_cpu_present_to_apicid,
> +     .apicid_to_cpu_present          = physid_set_mask_of_physid, /* Used on 32-bit */
> +     .check_phys_apicid_present      = default_check_phys_apicid_present, /* smp_sanity_check needs it */
> +     .phys_pkg_id                    = xen_phys_pkg_id, /* detect_ht */
> +
> +     .get_apic_id                    = xen_get_apic_id,
> +     .set_apic_id                    = xen_set_apic_id, /* Can be NULL on 32-bit. */
> +     .apic_id_mask                   = 0xFF << 24, /* Used by verify_local_APIC. Match with what xen_get_apic_id does. */
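
The mask implies the getter pulls the ID out of bits 31:24; a sketch
consistent with it, assuming the default xAPIC register layout:

        static unsigned int xen_get_apic_id(unsigned long x)
        {
                /* The xAPIC ID lives in bits 31:24, matching
                 * .apic_id_mask = 0xFF << 24 above. */
                return (x >> 24) & 0xFF;
        }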
> +
> +     .cpu_mask_to_apicid_and         = flat_cpu_mask_to_apicid_and,
> +
> +     .send_IPI_mask                  = xen_send_IPI_mask,
> +     .send_IPI_mask_allbutself       = xen_send_IPI_mask_allbutself,
> +     .send_IPI_allbutself            = xen_send_IPI_allbutself,
> +     .send_IPI_all                   = xen_send_IPI_all,
> +     .send_IPI_self                  = xen_send_IPI_self,
> +
> +     .wait_for_init_deassert         = false, /* Used by AP bootup - smp_callin, which we don't use */

Omit.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

