Re: [Xen-devel] [PATCH 4 of 4 V5] tools/xl: Remus - Network buffering cmdline switch
Shriram Rajagopalan writes ("[PATCH 4 of 4 V5] tools/xl: Remus - Network buffering cmdline switch"):
> tools/xl: Remus - Network buffering cmdline switch
>
> Command line switch to 'xl remus' command, to enable network buffering.
> Pass on this flag to libxl so that it can act accordingly.
> Also update man pages to reflect the addition of a new option to
> 'xl remus' command.
Shouldn't enabling network buffering be the default?
Is it really useful to have a command-line option to change the script?
> diff -r 94eea030e009 -r 3c11efb5e8fe docs/man/xl.pod.1
> --- a/docs/man/xl.pod.1 Mon Nov 18 11:10:02 2013 -0800
> +++ b/docs/man/xl.pod.1 Mon Nov 18 11:46:55 2013 -0800
> @@ -398,8 +398,7 @@ Print huge (!) amount of debug during th
> Enable Remus HA for domain. By default B<xl> relies on ssh as a transport
> mechanism between the two hosts.
>
> -N.B: Remus support in xl is still in experimental (proof-of-concept) phase.
> - There is no support for network or disk buffering at the moment.
> +N.B: There is no support for disk buffering at the moment.
I think you need to keep the "experimental (proof-of-concept)" note.
Without disk buffering, surely any VM which uses remus might corrupt
its disk?
> +=item B<-n>
> +
> +Enable network output buffering. The default script used to configure
> +network buffering is /etc/xen/scripts/remus-netbuf-setup. If you wish to
> +use a custom script, use the I<-N> option or set the global variable
> +I<remus.default.netbufscript> in /etc/xen/xl.conf to point to your script.
There is no need to mention the default script again in this
paragraph.
I may have missed something, but should it be possible to specify
the script individually per domain?
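For reference, a sketch of how the proposed switches would be used, based only on the patch description quoted above; the -n and -N options and the remus.default.netbufscript key are introduced by this series and may change before it is applied:

```shell
# Sketch based on the patch description in this thread; -n, -N and
# remus.default.netbufscript are proposed in this series, not
# guaranteed to exist in any released Xen version.

# Enable network buffering with the default setup script:
xl remus -n mydomain backup-host

# Point at a custom setup script with -N (path is illustrative):
xl remus -n -N /etc/xen/scripts/my-netbuf-setup mydomain backup-host

# Alternatively, set the default script globally in /etc/xen/xl.conf:
#   remus.default.netbufscript="/etc/xen/scripts/my-netbuf-setup"
```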
Ian.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel