
Re: [Xen-devel] Runtime adjustment of hypervisor parameters

On Fri, Aug 4, 2017 at 3:57 PM, Juergen Gross <jgross@xxxxxxxx> wrote:
> On 04/08/17 15:57, Andrew Cooper wrote:
>> On 04/08/17 14:36, Juergen Gross wrote:
>>> On 04/08/17 15:23, Andrew Cooper wrote:
>>>> On 04/08/17 14:20, Juergen Gross wrote:
>>>>> Last year Jan posted a patch series to change hypervisor log level
>>>>> thresholds via an xl command [1]. This approach was later modified
>>>>> by Wei, resulting in patch series [2].
>>>>> I'd like to follow up with another approach that can do the same,
>>>>> but is much more flexible:
>>>>> Instead of controlling only loglvl, I suggest adding an xl command
>>>>> xl xen-param <parameters>
>>>>> which takes a <parameters> string that the hypervisor parses the
>>>>> same way it parses boot parameters. Allowed parameters are
>>>>> specified in the hypervisor the same way as boot parameters, but
>>>>> with another set of macros (e.g. custom_runtime_param(), ...).
>>>>> Often enough (e.g. in the loglvl case) the definitions could be
>>>>> just the same, while in other cases they might differ a little
>>>>> (example: conring_size would require different handling than at
>>>>> boot time because of race conditions).
>>>>> Parsing functions could be reused in most cases; they'd just need
>>>>> to lose the __init modifier.
>>>>> What do you think: is this approach sensible, or can I just put it into
>>>>> /dev/null instead of starting with the patches?
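
For anyone trying to picture the mechanics: as I read the proposal, the
runtime-changeable parameters would live in their own table (which is
what makes it a whitelist), and the hypercall handler would walk it
with the same kind of name=value loop the boot-time cmdline parser
uses.  A rough standalone sketch in C -- all names here are
hypothetical, this is not actual Xen code:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical descriptor for a runtime-changeable parameter.  The
     * handler is the same parsing function used at boot, minus __init. */
    struct runtime_param {
        const char *name;
        int (*parse)(const char *val);
    };

    static int parse_loglvl(const char *val)
    {
        printf("loglvl set to '%s'\n", val);  /* stand-in for real work */
        return 0;
    }

    /* The whitelist: only parameters registered here can be changed at
     * runtime; this is the table custom_runtime_param() would populate. */
    static const struct runtime_param runtime_params[] = {
        { "loglvl", parse_loglvl },
    };

    /* Sketch of the hypercall side: split the string into name=value
     * pairs and dispatch each one to its whitelisted handler. */
    static int runtime_parse(char *args)
    {
        char *opt;
        int rc = 0;

        for ( opt = strtok(args, " "); opt; opt = strtok(NULL, " ") )
        {
            char *val = strchr(opt, '=');
            size_t i, len = val ? (size_t)(val - opt) : strlen(opt);
            int found = 0;

            if ( val )
                val++;

            for ( i = 0; i < sizeof(runtime_params) /
                             sizeof(runtime_params[0]); i++ )
            {
                if ( strlen(runtime_params[i].name) != len ||
                     strncmp(opt, runtime_params[i].name, len) )
                    continue;
                rc |= runtime_params[i].parse(val ? val : "");
                found = 1;
            }

            if ( !found )
                rc = -1;   /* not whitelisted: reject, don't fall back */
        }

        return rc;
    }

    int main(void)
    {
        char args[] = "loglvl=all";
        return runtime_parse(args);
    }
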
>>>> What sort of parameters were you thinking of tweaking?  (Without any
>>>> evidence) I'm going to go out on a limb and say that most of the
>>>> hypervisor command line parameters are not safe to play with after boot.
>>> The following would be a nice start for discussion:
>>> async-show-all, console_timestamps, conswitch,
>> conswitch can already be altered using `xl debug-keys`
>>> guest_loglvl, loglvl, hvm_debug,
>> I'm getting slowly more annoyed with the scattergun approach of
>> hvm_debug and the trace logic, given that each covers only a subset
>> of the functionality.
>> I've a cunning plan which I was going to propose once Paul's general
>> mapping patches are a little better formed, whereby we can control
>> action logging on a per-domain or per-vcpu basis, rather than getting a
>> full-system torrent or nothing.
>>> hvm_fep,
>> This is a parameter for reasons of security (i.e. if you didn't choose
>> it at boot, it's definitely not usable in the system).  I plan to make it
>> also need to be opted-in to at the toolstack level at domain build time.
>> It isn't currently safe to try and flip this option behind the back of a
>> domain, as the alteration only happens when constructing the vcpu.
>>> hvm_port80,
>> I have to admit that I question the utility of this at all.  I was
>> considering killing it completely, not least because it makes
>> nested-virt IO bitmap handling far harder.
>>> iommu_dev_iotlb_timeout, irq_ratelimit,
>>> nmi, noreboot, reboot, sync_console, vpmu,
>> My CPUID series will hopefully sort vpmu out properly: like hvm_fep,
>> it will need opting in to on the Xen command line (due to its
>> security status), and then opting in to at the toolstack level.
>>>  watchdog_timeout
> Which parameters should be changeable is subject to discussion. I just
> wanted to show there are more than one or two possible candidates.
>> I think there is merit in having a whitelist of parameters we think are
>> safe to be altered at runtime, and a dom0 mechanism of tweaking them.
> That's what I'm suggesting.
> The whitelist is a natural result of the need to specify each
> runtime-changeable parameter via another macro, e.g.:
>  custom_param("console_timestamps", parse_console_timestamps);
> +custom_runtime_param("console_timestamps", parse_console_timestamps);

FWIW, something like this seems like a reasonable option to me.
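
To make the dual-registration idea concrete: per parameter, the extra
registration line in the diff above would be the only addition, and the
shared parsing function just loses its __init annotation so it survives
past boot.  Sketching it out (custom_runtime_param() being the proposed
macro, which doesn't exist yet):

    -static void __init parse_console_timestamps(const char *s)
    +static void parse_console_timestamps(const char *s)
     {
         ...
     }
     custom_param("console_timestamps", parse_console_timestamps);
    +custom_runtime_param("console_timestamps", parse_console_timestamps);

An admin would then presumably change it with something like

    # xl xen-param console_timestamps=boot

(command name and syntax as proposed above, subject to bikeshedding).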

