Re: Xenstore quota and driver domains
On 04.06.20 10:07, Paul Durrant wrote:
> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx> On Behalf Of Jürgen Groß
> Sent: 04 June 2020 06:03
> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Xenstore quota and driver domains
>
> A recent report on xen-users surfaced a problem we have with driver
> domains in medium sized or large configurations: the driver domain can
> easily hit the default Xenstore quota (in the report it was a driver
> domain for disks which hit the quota when 15 domUs were active at the
> same time).
>
> Which quota is hit? Node or watch?

Node.

> Setting the quota to a higher limit will work, of course, but this
> enables "normal" domUs to use this quota, too, thus hogging lots of
> Xenstore resources.
>
> I believe the most sensible way to solve that would be to have a way
> to set per-domain quota in Xenstore. As the original reporter already
> raised concerns regarding rebooting the server to apply new global
> quota, I believe new quota settings should be possible at runtime.
>
> The question is what this interface should look like. Right now I can
> think of two variants:
>
> 1. A new XS_SET_QUOTA and XS_GET_QUOTA wire command pair which can set
>    and get the quota (both default values for new domains and values
>    for any existing domain).
>
> 2. A new XS_CONTROL command for setting/getting quota (same scope
>    as 1.).
>
> Both variants have advantages and disadvantages:
>
> 1. Pros:
>    - clear interface, easily usable
>    Cons:
>    - requires more code churn in the tools (libxenstore, xl, libxl)
>      for support in the domain config
>    - any extension will again require similar churn
>
> 2. Pros:
>    - easily extensible domain config possible (via a single item,
>      e.g. "xenstore-quota='watches=10 entries=2000'")
>    Cons:
>    - text parsing in Xenstore instead of xl/libxl
>    - XS_CONTROL sub-options for quota will be ABI
>
> Any thoughts?
>
> Even though 1 requires more code churn, I still think it is the
> better way to go.

Noted.


Juergen