
[Xen-devel] [PATCH 3/3] xen: Document XEN_SYSCTL_CPUPOOL_OP_RMCPU anomalous EBUSY result



This is my attempt at understanding the situation, from reading
descriptions provided on list in the context of toolstack patches
which were attempting to work around the anomaly.

The multiple `xxx' entries reflect (1) my incomplete understanding and
(2) API defects which I think I have identified.
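For reference, here is a rough sketch (not part of the patch) of the
kind of workaround such a toolstack might use, following the recovery
recipe documented below.  It assumes the libxc wrappers
xc_cpupool_removecpu()/xc_cpupool_addcpu() return 0 on success and -1
with errno set on failure; the retry count and back-off are invented
for illustration, and whether the add-back is harmless in the
transient-EBUSY case is exactly one of the open questions marked `xxx'.

#include <errno.h>
#include <unistd.h>
#include <xenctrl.h>

/* Try to remove `cpu' from `poolid', coping with the ambiguous EBUSY. */
static int remove_cpu_carefully(xc_interface *xch, uint32_t poolid, int cpu)
{
    int tries;

    for (tries = 0; tries < 5; tries++) {
        if (xc_cpupool_removecpu(xch, poolid, cpu) == 0)
            return 0;                 /* removed cleanly */
        if (errno != EBUSY)
            return -1;                /* unrelated failure */

        /*
         * EBUSY is ambiguous: either another cpupool operation was in
         * flight and RMCPU had no effect, or a temporarily pinned vcpu
         * has left the pcpu in the anomalous half-removed state.
         * Follow the recipe below and put the pcpu back into the pool
         * it came from; if the EBUSY was of the harmless kind this is
         * presumably a no-op or a failure we can ignore.
         */
        (void)xc_cpupool_addcpu(xch, poolid, cpu);

        sleep(1);                     /* arbitrary back-off before retrying */
    }

    errno = EBUSY;
    return -1;
}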

Signed-off-by: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
CC: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
CC: Juergen Gross <jgross@xxxxxxxx>
CC: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
CC: Jan Beulich <jbeulich@xxxxxxxx>
CC: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
---
 xen/include/public/sysctl.h |   28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 0849908..cfccf38 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -560,6 +560,34 @@ struct xen_sysctl_cpupool_op {
 typedef struct xen_sysctl_cpupool_op xen_sysctl_cpupool_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpupool_op_t);
 
+/*
+ * cpupool operations may return EBUSY if the operation cannot be
+ * executed right now because of another cpupool operation which is
+ * still in progress.  In this case, EBUSY means that the failed
+ * operation had no effect.
+ *
+ * Some operations including at least RMCPU (xxx which others?) may
+ * also return EBUSY because a guest has temporarily pinned one of its
+ * vcpus to the pcpu in question.  It is the pious hope (xxx) of the
+ * author of this comment that this can only occur for domains which
+ * have been granted some kind of hardware privilege (eg passthrough).
+ *
+ * In this case the operation may have been partially carried out and
+ * the pcpu is left in an anomalous state.  In this state the pcpu may
+ * be used by some not readily predictable subset of the vcpus of the
+ * domains in the old cpupool.  (xxx is this true?)
+ *
+ * This can be detected by seeing whether the pcpu can be added to a
+ * different cpupool.  (xxx this is a silly interface; the situation
+ * should be reported by a different errno value, at least.)  If the
+ * pcpu can't be added to a different cpupool for this reason,
+ * attempts to do so will return (xxx what errno value?).
+ *
+ * The anomalous situation can be recovered from by adding the pcpu
+ * back to the cpupool it came from (xxx this permits a buggy or
+ * malicious guest to prevent the cpu ever being removed from its cpupool).
+ */ 
+
 #define ARINC653_MAX_DOMAINS_PER_SCHEDULE   64
 /*
  * This structure is used to pass a new ARINC653 schedule from a
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
