
[Xen-cim] xm test summary


  • To: <xen-cim@xxxxxxxxxxxxxxxxxxx>
  • From: "Szymanski, Lukasz K" <Lukasz.Szymanski@xxxxxxxxxx>
  • Date: Wed, 25 Apr 2007 17:16:57 -0400
  • Delivery-date: Wed, 25 Apr 2007 14:15:29 -0700
  • List-id: xen-cim mailing list <xen-cim.lists.xensource.com>
  • Thread-index: AceHfw5Q5xOQRlJcSOuAvzMma4KZqg==
  • Thread-topic: xm test summary

This is a brief outline of the major test categories covered by xm-test. Let's talk tomorrow (I may be a few minutes late) about which of these apply to the providers. I took the top few lines of the first available test in each directory (category) to give a basic idea of what each category does. Where a test had comments I included them; where it didn't, I took what I could. Most directories contain more than one test, but this should suffice for the initial discussion.

Luke

--------- ./help/01_help_basic_pos.py
status, output = traceCommand("xm help")

--------- ./info/01_info_basic_pos.py
status, output = traceCommand("xm info")

--------- ./list/01_list_basic_pos.py
status, output = traceCommand("xm list")

--------- ./save/01_save_basic_pos.py
SKIP("Save currently not supported for HVM domains")

--------- ./sedf/01_sedf_period_slice_pos.py
status, output = traceCommand("xm sched-sedf %s" %(domain.getName()))

--------- ./vtpm/01_vtpm-list_pos.py
# Positive Test: create domain with virtual TPM attached at build time, verify list

--------- ./block-create/01_block_attach_device_pos.py
SKIP("Block-attach not supported for HVM domains")

--------- ./console/01_console_badopt_neg.py
# Test console command with a non-existent option on the command line. Verify fail.

--------- ./destroy/01_destroy_basic_pos.py
# destroy domain - positive test

--------- ./restore/01_restore_basic_pos.py
# Save a domain and attempt to restore it
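
In xm terms that is just (my sketch, not the test body; the save file path is only for illustration, and "domain" stands for a started XmTestDomain):

status, output = traceCommand("xm save %s /tmp/savefile" % domain.getName())
if status != 0:
    FAIL("xm save returned %i" % status)
status, output = traceCommand("xm restore /tmp/savefile")
if status != 0:
    FAIL("xm restore returned %i" % status)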

--------- ./dmesg/01_dmesg_basic_pos.py
status, output = traceCommand("xm dmesg")

--------- ./domid/01_domid_basic_pos.py
status, output = traceCommand("xm domid Domain-0")

--------- ./pause/01_pause_basic_pos.py
# Tests for xm pause
# 1) Create domain, verify it's up with console
# 2) pause the domain
# 3) verify it's paused by failure to connect console
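
Steps 1-2 look roughly like this (my sketch, not the test body; XmTestDomain, traceCommand and FAIL come from XmTestLib, and I'm assuming domain.start() hands back a console whose runCmd() returns a dict with a "return" key, as the harness tests usually do):

from XmTestLib import *

domain = XmTestDomain()
console = domain.start()                      # step 1: boot, grab the console
run = console.runCmd("ls")                    # prove the guest answers
if run["return"] != 0:
    FAIL("console not responding before pause")
status, output = traceCommand("xm pause %s" % domain.getName())   # step 2
if status != 0:
    FAIL("xm pause returned %i" % status)

Step 3 then expects a fresh console connection to fail.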

--------- ./sysrq/01_sysrq_basic_neg.py
# Check to make sure an invalid sysrq is handled appropriately
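
Something like (sketch; "%" just stands in for a key xm should reject, and "domain" is a running XmTestDomain):

status, output = traceCommand("xm sysrq %s %%" % domain.getName())
if status == 0:
    FAIL("xm sysrq accepted an invalid key")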

--------- ./domname/01_domname_basic_pos.py
status, output = traceCommand("xm domname 0")

--------- ./vcpu-pin/01_vcpu-pin_basic_pos.py
# 1) Make sure we have a multi cpu system
# 2) Create a test domain and pin its VCPU0 to CPU 0 and then 1
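
Step 2 boils down to (sketch, assuming a started XmTestDomain on a multi-cpu box):

for cpu in ["0", "1"]:
    status, output = traceCommand("xm vcpu-pin %s 0 %s" % (domain.getName(), cpu))
    if status != 0:
        FAIL("xm vcpu-pin 0 -> cpu %s returned %i" % (cpu, status))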

--------- ./shutdown/01_shutdown_basic_pos.py
# Test for xm shutdown
# 1) Create domain, verify it's up with console
# 2) shut down the domain, verify it's down
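
Roughly (sketch; the fixed sleep is a stand-in for proper polling):

import time

status, output = traceCommand("xm shutdown %s" % domain.getName())
if status != 0:
    FAIL("xm shutdown returned %i" % status)
time.sleep(30)                                # let the guest go down
status, output = traceCommand("xm list")
if domain.getName() in output:
    FAIL("domain still listed after shutdown")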

--------- ./block-integrity/01_block_device_read_verify.py
# This test initialises a ram disk in dom0 with data from /dev/urandom and
# then imports the ram disk device as a physical device into a domU. The md5
# checksum of the data in the ramdisk is calculated in dom0 and also
# calculated by the domU reading the data through the blk frontend and
# backend drivers.  The test succeeds if the checksums match indicating that
# the domU successfully read all the correct data from the device.
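
The dom0 half of that is roughly (sketch; the device names are illustrative, and I'm assuming the console's runCmd() returns a dict with an "output" key):

traceCommand("dd if=/dev/urandom of=/dev/ram1 bs=1M count=1")
status, dom0_sum = traceCommand("md5sum /dev/ram1")
run = console.runCmd("md5sum /dev/xvda1")     # same device as seen from the domU
if dom0_sum.split()[0] != run["output"].split()[0]:
    FAIL("dom0 and domU md5 checksums differ")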

--------- ./block-destroy/01_block-destroy_btblock_pos.py
SKIP("Block-detach not supported for HVM domains")

--------- ./enforce_dom0_cpus/01_enforce_dom0_cpus_basic_pos.py
# 1) Make sure we have a multi cpu system and dom0 has at
#    least 2 vcpus online.
# 2) clone standard config (/etc/xen/xend-config.sxp)
# 3) modify clone with enforce_dom0_cpus=X
# 4) restart xend with modified config
# 5) check /proc/cpuinfo for cpu count
# 6) check xm info 'VCPUs' field to see that only 'enforce_dom0_cpus'
#    number of cpus are online in dom0
# 7) Restore initial dom0 vcpu state
# 8) Restart xend with default config
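
Step 5 is just a one-liner:

cpus = len([l for l in open("/proc/cpuinfo") if l.startswith("processor")])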

--------- ./migrate/01_migrate_localhost_pos.py
# Tests for xm migrate
# 1) Create domain, verify it's up with console
# 2) live migrate the domain to localhost
# 3) verify it's migrated, see that it has a new domain ID
# 4) verify it's still working properly by running a command on it
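
Steps 2-3 in xm terms (sketch; I'm assuming the domain object exposes getId()):

old_domid = domain.getId()
status, output = traceCommand("xm migrate -l %s localhost" % domain.getName())
if status != 0:
    FAIL("xm migrate returned %i" % status)
status, new_domid = traceCommand("xm domid %s" % domain.getName())
if new_domid.strip() == str(old_domid):
    FAIL("domain ID unchanged after localhost migration")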

--------- ./vcpu-disable/01_vcpu-disable_basic_pos.py
# 1) Make sure we have a multi cpu system
# 2) Create a test domain with 2 VCPUs
# 3) Verify that both VCPUs are alive
# 4) Disable DOM VCPU1 by setting the VCPU count to 1
# 5) Assert that the VCPU has been disabled
# 6) Enable DOM VCPU1 (restore VCPU count to 2)
# 7) Assert that the VCPUs are both alive again
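
Steps 4-7 hinge on xm vcpu-set (sketch):

for count in ["1", "2"]:                      # disable VCPU1, then bring it back
    status, output = traceCommand("xm vcpu-set %s %s" % (domain.getName(), count))
    if status != 0:
        FAIL("xm vcpu-set %s returned %i" % (count, status))
    status, output = traceCommand("xm vcpu-list %s" % domain.getName())   # check the states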

--------- ./create/01_create_basic_pos.py
# Create a domain (default XmTestDomain, with our ramdisk), Start it
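
Which is the canonical opening most of these tests share (sketch; DomainError is the harness exception class):

from XmTestLib import *

domain = XmTestDomain()
try:
    console = domain.start()
except DomainError, e:
    FAIL(str(e))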

--------- ./_sanity/01_domu_proc.py
# Test that the library and ramdisk are working to the point
# that we can start a DomU and read /proc

--------- ./memmax/01_memmax_badparm_neg.py
status, output = traceCommand("xm mem-max")

--------- ./memset/01_memset_basic_pos.py
## 1) Test for xm mem-set
##      create domain,
##      verify domain and ls output,
##      mem-set in dom0,
##      verify with xm list memory change external,
##      verify with xm list memory change internal,
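
The mem-set step itself (sketch; 64 MB is an arbitrary target):

status, output = traceCommand("xm mem-set %s 64" % domain.getName())
if status != 0:
    FAIL("xm mem-set returned %i" % status)
status, output = traceCommand("xm list")      # external check: Mem column should now read 64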

--------- ./block-list/01_block-list_pos.py
# Positive Test: create domain with block attached at build time, verify list

--------- ./reboot/01_reboot_basic_pos.py
status, output = traceCommand("xm reboot %s" % domain.getName())

--------- ./unpause/01_unpause_basic_pos.py
# Tests for xm unpause
# 1) Create domain, verify it's up with console
# 2) randomly pause and unpause the domain
# 3) unpause it one last time
# 4) verify it's still alive with console
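
Step 2 could be as simple as (sketch):

import random

for i in range(10):
    op = random.choice(["pause", "unpause"])
    traceCommand("xm %s %s" % (op, domain.getName()))
status, output = traceCommand("xm unpause %s" % domain.getName())   # step 3
if status != 0:
    FAIL("final xm unpause returned %i" % status)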

--------- ./security-acm/01_security-acm_basic.py
# A couple of simple tests that test ACM security extensions
# for the xm tool. The following xm subcommands are tested:
#
# - makepolicy
# - labels
# - rmlabel
# - addlabel
# - getlabel
# - resources
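
The read-only subcommands at least can be smoke-tested directly (sketch):

for sub in ["labels", "resources"]:
    status, output = traceCommand("xm %s" % sub)
    if status != 0:
        FAIL("xm %s returned %i" % (sub, status))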

--------- ./network/02_network_local_ping_pos.py
# Ping tests on local interfaces.
#  - creates a single guest domain
#  - sets up a single NIC
#  - conducts ping tests to the local loopback and IP address.
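
The loopback half of the ping test, roughly (sketch; again assuming runCmd() returns a dict with a "return" key):

run = console.runCmd("ping -c 3 127.0.0.1")
if run["return"] != 0:
    FAIL("loopback ping failed inside the domU")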

--------- ./network-attach/01_network_attach_pos.py
SKIP("Network-attach not supported for HVM domains")

--------- ./sched-credit/01_sched_credit_weight_cap_pos.py
# Sched-credit tests modified from SEDF tests
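
Same pattern as the sedf line above, just with weight/cap parameters (sketch; 256 and 100 are arbitrary values):

status, output = traceCommand("xm sched-credit -d %s -w 256 -c 100"
                              % domain.getName())
if status != 0:
    FAIL("xm sched-credit returned %i" % status)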

