Re: [XEN][PATCH v7 14/19] common/device_tree: Add rwlock for dt_host
Hi,

On 02/06/2023 01:48, Vikram Garhwal wrote:
> Dynamic programming ops will modify the dt_host and there might be other
> functions which are browsing the dt_host at the same time. To avoid race
> conditions, add a rwlock for browsing the dt_host during runtime.

Please explain that the writer will be added in a follow-up patch.

> Reason behind adding rwlock instead of spinlock:
>     For now, dynamic programming is the sole modifier of dt_host in Xen
>     during run time. All other access functions like
>     iommu_release_dt_device() are just reading the dt_host during run-time.
>     So, there is a need to protect others from browsing the dt_host while
>     dynamic programming is modifying it. rwlock is better suited for this
>     task, as a spinlock won't be able to differentiate between read and
>     write access.
>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@xxxxxxx>
>
> ---
> Changes from v6:
>     Remove redundant "read_unlock(&dt_host->lock);" in the following case:
>         XEN_DOMCTL_deassign_device
> ---
>  xen/common/device_tree.c              |  4 ++++
>  xen/drivers/passthrough/device_tree.c | 15 +++++++++++++++
>  xen/include/xen/device_tree.h         |  6 ++++++
>  3 files changed, 25 insertions(+)
>
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index c5250a1644..c8fcdf8fa1 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -2146,7 +2146,11 @@ int unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes)
>  
>      dt_dprintk(" <- unflatten_device_tree()\n");
>  
> +    /* Init r/w lock for host device tree. */
> +    rwlock_init(&dt_host->lock);
> +
>      return 0;
> +
>  }
>  
>  static void dt_alias_add(struct dt_alias_prop *ap,
> diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
> index 301a5bcd97..f4d9deb624 100644
> --- a/xen/drivers/passthrough/device_tree.c
> +++ b/xen/drivers/passthrough/device_tree.c
> @@ -112,6 +112,8 @@ int iommu_release_dt_devices(struct domain *d)
>      if ( !is_iommu_enabled(d) )
>          return 0;
>  
> +    read_lock(&dt_host->lock);
> +
>      list_for_each_entry_safe(dev, _dev, &hd->dt_devices, domain_list)
>      {
>          rc = iommu_deassign_dt_device(d, dev);
> @@ -119,10 +121,14 @@ int iommu_release_dt_devices(struct domain *d)
>          {
>              dprintk(XENLOG_ERR, "Failed to deassign %s in domain %u\n",
>                      dt_node_full_name(dev), d->domain_id);
> +
> +            read_unlock(&dt_host->lock);
>              return rc;
>          }
>      }
>  
> +    read_unlock(&dt_host->lock);
> +
>      return 0;
>  }
> @@ -246,6 +252,8 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
>      int ret;
>      struct dt_device_node *dev;
>  
> +    read_lock(&dt_host->lock);
> +
>      switch ( domctl->cmd )
>      {
>      case XEN_DOMCTL_assign_device:
> @@ -295,7 +303,10 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
>          spin_unlock(&dtdevs_lock);
>  
>          if ( d == dom_io )
> +        {
> +            read_unlock(&dt_host->lock);
>              return -EINVAL;
> +        }
>  
>          ret = iommu_add_dt_device(dev);
>          if ( ret < 0 )
> @@ -333,7 +344,10 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
>              break;
>  
>          if ( d == dom_io )
> +        {
> +            read_unlock(&dt_host->lock);
>              return -EINVAL;
> +        }
>  
>          ret = iommu_deassign_dt_device(d, dev);
> @@ -348,5 +362,6 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
>          break;
>      }
>  
> +    read_unlock(&dt_host->lock);
>      return ret;
>  }
> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index e239f7de26..dee40d2ea3 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -18,6 +18,7 @@
>  #include <xen/string.h>
>  #include <xen/types.h>
>  #include <xen/list.h>
> +#include <xen/rwlock.h>
>  
>  #define DEVICE_TREE_MAX_DEPTH 16
>  
> @@ -106,6 +107,11 @@ struct dt_device_node {
>      struct list_head domain_list;
>  
>      struct device dev;
> +
> +    /*
> +     * Lock that protects r/w updates to unflattened device tree i.e. dt_host.
> +     */

From the description, it sounds like the rwlock will only be used to protect
the entire device-tree rather than a single node. So it doesn't seem to be
sensible to increase each node structure (there are a lot) by 12 bytes. Can
you outline your plan?

> +    rwlock_t lock;
>  };
>  
>  #define dt_to_dev(dt_node) (&(dt_node)->dev)

Cheers,

-- 
Julien Grall
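For context, the write side that the commit message alludes to is not part of
this patch. Below is a minimal sketch of how such a writer could pair with the
read_lock() calls introduced above, assuming a hypothetical dt_overlay_attach()
helper and the dt_host->lock field added here; it is illustrative only, not
code from the series:

    /*
     * Hypothetical writer: modify the unflattened tree under the write
     * lock so that concurrent readers such as iommu_release_dt_devices()
     * or iommu_do_dt_domctl() cannot observe a half-updated dt_host.
     */
    static int dt_overlay_attach(struct dt_device_node *new_node)
    {
        write_lock(&dt_host->lock);

        /* Link the new node at the head of dt_host's child list. */
        new_node->parent = dt_host;
        new_node->sibling = dt_host->child;
        dt_host->child = new_node;

        write_unlock(&dt_host->lock);

        return 0;
    }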
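On Julien's sizing concern: one way to keep struct dt_device_node unchanged is
a single tree-wide lock that lives outside the node structure. A sketch of that
alternative follows, with dt_host_lock as an illustrative name rather than
anything taken from this patch:

    /* xen/common/device_tree.c: one lock for the whole unflattened tree. */
    DEFINE_RWLOCK(dt_host_lock);

    /* xen/include/xen/device_tree.h: declaration visible to the readers. */
    extern rwlock_t dt_host_lock;

Readers would then call read_lock(&dt_host_lock) instead of
read_lock(&dt_host->lock), so no per-node field is needed.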