[PATCH v5 2/7] x86/msi: Extend per-domain/device warning mechanism
The arch_msix struct had a single "warned" field holding the domid for
which a warning was already issued. An upcoming patch will need a similar
mechanism for a few more warnings, so change the field to record a bit
field of issued warnings alongside the domid.
Signed-off-by: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
---
Should I also add a helper for the boilerplate this handling now requires?
I tried a macro that also sets the bit field, but I couldn't get it to
work. I guess I could make it work with a bitmask in a single
uint8_t - would that be preferred?
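For illustration only, here is a rough sketch of the kind of helper the
boilerplate could be folded into (untested and not part of this patch; the
MSIX_WARN_ONCE name and the argument style are just placeholders):

/*
 * Hypothetical helper, not part of this patch: warn at most once per
 * domain and per warning kind, resetting the recorded warnings whenever
 * the device changes hands.  "kind" names one of the bit-field members
 * of warned_kind.u.
 */
#define MSIX_WARN_ONCE(msix, domid, kind, fmt, args...)    \
    do {                                                   \
        if ( (msix)->warned_domid != (domid) )             \
        {                                                  \
            (msix)->warned_domid = (domid);                \
            (msix)->warned_kind.all = 0;                   \
        }                                                  \
        if ( !(msix)->warned_kind.u.kind )                 \
        {                                                  \
            (msix)->warned_kind.u.kind = true;             \
            printk(XENLOG_G_WARNING fmt, ## args);         \
        }                                                  \
    } while ( 0 )

The call site in msi_set_mask_bit() would then collapse to something like
MSIX_WARN_ONCE(pdev->msix, domid, maskall, "cannot mask IRQ %d: masking
MSI-X on Dom%d's %pp\n", desc->irq, domid, &pdev->sbdf);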
New in v5
---
xen/arch/x86/include/asm/msi.h | 8 +++++++-
xen/arch/x86/msi.c | 9 +++++++--
2 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/xen/arch/x86/include/asm/msi.h b/xen/arch/x86/include/asm/msi.h
index 997ccb87be0c..19b1e6631fdf 100644
--- a/xen/arch/x86/include/asm/msi.h
+++ b/xen/arch/x86/include/asm/msi.h
@@ -217,7 +217,13 @@ struct arch_msix {
     int table_idx[MAX_MSIX_TABLE_PAGES];
     spinlock_t table_lock;
     bool host_maskall, guest_maskall;
-    domid_t warned;
+    domid_t warned_domid;
+    union {
+        uint8_t all;
+        struct {
+            bool maskall : 1;
+        } u;
+    } warned_kind;
 };

 void early_msi_init(void);
diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index e721aaf5c001..6433df30bd60 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -364,9 +364,14 @@ static bool msi_set_mask_bit(struct irq_desc *desc, bool host, bool guest)
             domid_t domid = pdev->domain->domain_id;

             maskall = true;
-            if ( pdev->msix->warned != domid )
+            if ( pdev->msix->warned_domid != domid )
             {
-                pdev->msix->warned = domid;
+                pdev->msix->warned_domid = domid;
+                pdev->msix->warned_kind.all = 0;
+            }
+            if ( !pdev->msix->warned_kind.u.maskall )
+            {
+                pdev->msix->warned_kind.u.maskall = true;
                 printk(XENLOG_G_WARNING
                        "cannot mask IRQ %d: masking MSI-X on Dom%d's %pp\n",
                        desc->irq, domid, &pdev->sbdf);
--
git-series 0.9.1