Re: [Xen-tools] [PATCH 1/2] Interface cleanup and renaming
Now that most of our commands are in the form:

    xm <subsystem>-<command>

would it make sense to make it more hierarchical, such as:

    xm <subsystem> <command>

One benefit would be that instead of just having short and long help, we
could have subsystem help.  For instance:

    xm help block

would give you help only for the block commands.  This is akin to how some
other tools behave (what comes to mind first is Samba's net command).

Thoughts?

Regards,

Anthony Liguori

Dan Smith wrote:

  The attached patch renames several xm commands per the recent discussion
  on xen-tools.  It makes nothing worse, and does fix another IndexException
  in block-create (now block-attach).  The network-attach and network-detach
  stubs are also included, but not yet listed in the help output.

  Signed-off-by: Dan Smith <danms@xxxxxxxxxx>

------------------------------------------------------------------------

diff -r 936be0ae823f docs/misc/sedf_scheduler_mini-HOWTO.txt
--- a/docs/misc/sedf_scheduler_mini-HOWTO.txt   Mon Aug 29 14:53:38 2005
+++ b/docs/misc/sedf_scheduler_mini-HOWTO.txt   Mon Aug 29 10:23:18 2005
@@ -10,7 +10,7 @@
 Usage:
  -add "sched=sedf" on Xen's boot command-line
  -create domains as usual
- -use "xm sedf <dom-id> <period> <slice> <latency-hint> <extra> <weight>"
+ -use "xm sched-sedf <dom-id> <period> <slice> <latency-hint> <extra> <weight>"
  Where:
   -period/slice are the normal EDF scheduling parameters in nanosecs
   -latency-hint is the scaled period in case the domain is doing heavy I/O
@@ -22,23 +22,23 @@
 Examples:
  normal EDF (20ms/5ms):
-  xm sedf <dom-id> 20000000 5000000 0 0 0
+  xm sched-sedf <dom-id> 20000000 5000000 0 0 0
 
  best-effort domains (i.e. non-realtime):
-  xm sedf <dom-id> 20000000 0 0 1 0
+  xm sched-sedf <dom-id> 20000000 0 0 1 0
 
  normal EDF (20ms/5ms) + share of extra-time:
-  xm sedf <dom-id> 20000000 5000000 0 1 0
+  xm sched-sedf <dom-id> 20000000 5000000 0 1 0
 
  4 domains with weights 2:3:4:2
-  xm sedf <d1> 0 0 0 0 2
-  xm sedf <d2> 0 0 0 0 3
-  xm sedf <d3> 0 0 0 0 4
-  xm sedf <d4> 0 0 0 0 2
+  xm sched-sedf <d1> 0 0 0 0 2
+  xm sched-sedf <d2> 0 0 0 0 3
+  xm sched-sedf <d3> 0 0 0 0 4
+  xm sched-sedf <d4> 0 0 0 0 2
 
  1 fully-specified (10ms/3ms) domain, 3 other domains share
  available rest in 2:7:3 ratio:
-  xm sedf <d1> 10000000 3000000 0 0 0
-  xm sedf <d2> 0 0 0 0 2
-  xm sedf <d3> 0 0 0 0 7
-  xm sedf <d4> 0 0 0 0 3
\ No newline at end of file
+  xm sched-sedf <d1> 10000000 3000000 0 0 0
+  xm sched-sedf <d2> 0 0 0 0 2
+  xm sched-sedf <d3> 0 0 0 0 7
+  xm sched-sedf <d4> 0 0 0 0 3
diff -r 936be0ae823f tools/python/xen/xm/main.py
--- a/tools/python/xen/xm/main.py       Mon Aug 29 14:53:38 2005
+++ b/tools/python/xen/xm/main.py       Mon Aug 29 10:23:18 2005
@@ -64,7 +64,6 @@
 Domain Commands:
     console <DomId>                  attach to console of DomId
     cpus-list <DomId> <VCpu>         get the list of cpus for a VCPU
-    cpus-set <DomId> <VCpu> <CPUS>   set which cpus a VCPU can use.
     create <ConfigFile>              create a domain
     destroy <DomId>                  terminate a domain immediately
     domid <DomName>                  convert a domain name to a domain id
@@ -83,6 +82,7 @@
     vcpu-enable <DomId> <VCPU>       disable VCPU in a domain
     vcpu-disable <DomId> <VCPU>      enable VCPU in a domain
     vcpu-list <DomId>                get the list of VCPUs for a domain
+    vcpu-pin <DomId> <VCpu> <CPUS>   set which cpus a VCPU can use.
 
 Xen Host Commands:
     dmesg [--clear]                  read or clear Xen's message buffer
@@ -91,14 +91,15 @@
     top                              monitor system and domains in real-time
 
 Scheduler Commands:
-    bvt <options>                    set BVT scheduler parameters
-    bvt_ctxallow <Allow>             set the BVT scheduler context switch allowance
-    sedf <options>                   set simple EDF parameters
+    sched-bvt <options>              set BVT scheduler parameters
+    sched-bvt-ctxallow <Allow>
+                                     Set the BVT scheduler context switch allowance
+    sched-sedf <options>             set simple EDF parameters
 
 Virtual Device Commands:
-    block-create <DomId> <BackDev> <FrontDev> <Mode> [BackDomId]
+    block-attach <DomId> <BackDev> <FrontDev> <Mode> [BackDomId]
                                      Create a new virtual block device
-    block-destroy <DomId> <DevId>    Destroy a domain's virtual block device
+    block-detach <DomId> <DevId>     Destroy a domain's virtual block device
     block-list <DomId>               List virtual block devices for a domain
     block-refresh <DomId> <DevId>    Refresh a virtual block device for a domain
     network-limit <DomId> <Vif> <Credit> <Period>
@@ -358,8 +359,8 @@
     return cpumap
 
-def xm_cpus_set(args):
-    arg_check(args, 3, "cpus-set")
+def xm_vcpu_pin(args):
+    arg_check(args, 3, "vcpu-pin")
     dom = args[0]
     vcpu = int(args[1])
@@ -423,22 +424,22 @@
     dom = server.xend_domain(name)
     print sxp.child_value(dom, 'name')
 
-def xm_bvt(args):
-    arg_check(args, 6, "bvt")
+def xm_sched_bvt(args):
+    arg_check(args, 6, "sched-bvt")
     dom = args[0]
     v = map(long, args[1:6])
     from xen.xend.XendClient import server
     server.xend_domain_cpu_bvt_set(dom, *v)
 
-def xm_bvt_ctxallow(args):
-    arg_check(args, 1, "bvt_ctxallow")
+def xm_sched_bvt_ctxallow(args):
+    arg_check(args, 1, "sched-bvt-ctxallow")
     slice = int(args[0])
     from xen.xend.XendClient import server
     server.xend_node_cpu_bvt_slice_set(slice)
 
-def xm_sedf(args):
-    arg_check(args, 6, "sedf")
+def xm_sched_sedf(args):
+    arg_check(args, 6, "sched-sedf")
     dom = args[0]
     v = map(int, args[1:6])
@@ -509,6 +510,14 @@
         sxp.show(x)
         print
 
+def xm_network_attach(args):
+
+    print "Not implemented"
+
+def xm_network_detach(args):
+
+    print "Not implemented"
+
 def xm_block_list(args):
     arg_check(args,1,"block-list")
     dom = args[0]
@@ -517,11 +526,14 @@
         sxp.show(x)
         print
 
-def xm_block_create(args):
+def xm_block_attach(args):
     n = len(args)
+    if n == 0:
+        usage("block-attach")
+
     if n < 4 or n > 5:
         err("%s: Invalid argument(s)" % args[0])
-        usage("block-create")
+        usage("block-attach")
     dom = args[0]
     vbd = ['vbd',
@@ -543,8 +555,8 @@
     from xen.xend.XendClient import server
     server.xend_domain_device_refresh(dom, 'vbd', dev)
 
-def xm_block_destroy(args):
-    arg_check(args,2,"block-destroy")
+def xm_block_detach(args):
+    arg_check(args,2,"block-detach")
     dom = args[0]
     dev = args[1]
@@ -612,7 +624,7 @@
     "mem-max": xm_mem_max,
     "mem-set": xm_mem_set,
     # cpu commands
-    "cpus-set": xm_cpus_set,
+    "vcpu-pin": xm_vcpu_pin,
     # "cpus-list": xm_cpus_list,
     "vcpu-enable": xm_vcpu_enable,
     "vcpu-disable": xm_vcpu_disable,
@@ -628,17 +640,19 @@
     "info": xm_info,
     "log": xm_log,
     # scheduler
-    "bvt": xm_bvt,
-    "bvt_ctxallow": xm_bvt_ctxallow,
-    "sedf": xm_sedf,
+    "sched-bvt": xm_sched_bvt,
+    "sched-bvt-ctxallow": xm_sched_bvt_ctxallow,
+    "sched-sedf": xm_sched_sedf,
     # block
-    "block-create": xm_block_create,
-    "block-destroy": xm_block_destroy,
+    "block-attach": xm_block_attach,
+    "block-detach": xm_block_detach,
     "block-list": xm_block_list,
     "block-refresh": xm_block_refresh,
     # network
     "network-limit": xm_network_limit,
     "network-list": xm_network_list,
+    "network-attach": xm_network_attach,
+    "network-detach": xm_network_detach,
     # vnet
     "vnet-list": xm_vnet_list,
     "vnet-create": xm_vnet_create,