[PATCH v3 2/2] x86/IRQ: allocate guest array of max size only for shareable IRQs
... and increase default "irq-max-guests" to 32.
There is no need for an array larger than one element for non-shareable
IRQs, and allocating one might hurt scalability when high "irq-max-guests"
values are used - every IRQ in the system, including MSIs, would be
supplied with an array of that size.
Since higher "irq-max-guests" values are now less impactful, bump the
default to 32. That should give more headroom for future systems.
Signed-off-by: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
---
New in v2.
Based on Jan's suggestion.
Changes in v3:
- almost no changes, as I prefer the clarity of the code as is
---
docs/misc/xen-command-line.pandoc | 2 +-
xen/arch/x86/irq.c | 7 ++++---
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 53e676b..f7db2b6 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1644,7 +1644,7 @@ This option is ignored in **pv-shim** mode.
### irq-max-guests (x86)
> `= <integer>`
-> Default: `16`
+> Default: `32`
Maximum number of guests any individual IRQ could be shared between,
i.e. a limit on the number of guests it is possible to start each having
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 4fd0578..a088818 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -440,7 +440,7 @@ int __init init_irq_data(void)
irq_to_desc(irq)->irq = irq;
if ( !irq_max_guests )
- irq_max_guests = 16;
+ irq_max_guests = 32;
#ifdef CONFIG_PV
/* Never allocate the hypercall vector or Linux/BSD fast-trap vector. */
@@ -1540,6 +1540,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
unsigned int irq;
struct irq_desc *desc;
irq_guest_action_t *action, *newaction = NULL;
+ unsigned char max_nr_guests = will_share ? irq_max_guests : 1;
int rc = 0;
WARN_ON(!spin_is_locked(&v->domain->event_lock));
@@ -1571,7 +1572,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
{
spin_unlock_irq(&desc->lock);
if ( (newaction = xmalloc_flex_struct(irq_guest_action_t, guest,
- irq_max_guests)) != NULL &&
+ max_nr_guests)) != NULL &&
zalloc_cpumask_var(&newaction->cpu_eoi_map) )
goto retry;
xfree(newaction);
@@ -1640,7 +1641,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
goto retry;
}
- if ( action->nr_guests == irq_max_guests )
+ if ( action->nr_guests >= max_nr_guests )
{
printk(XENLOG_G_INFO
"Cannot bind IRQ%d to dom%pd: already at max share %u ",
--
2.7.4