
Re: [Xen-API] XCP 1.6 BETA problem with pool-dump-database


  • To: xen-api@xxxxxxxxxxxxx
  • From: Black Bird <blackbird1758@xxxxxxxxx>
  • Date: Tue, 6 Nov 2012 11:29:03 +1100
  • Delivery-date: Tue, 06 Nov 2012 00:30:35 +0000
  • List-id: User and development list for XCP and XAPI <xen-api.lists.xen.org>

(discreet bump)

I'd be grateful if anyone could point me in the right direction.

On Wed, Oct 31, 2012 at 12:56 PM, Black Bird <blackbird1758@xxxxxxxxx> wrote:
> Hi all
>
> I have a pool of 3 hosts which I have upgraded from XCP 1.1 to XCP
> 1.6 Beta 2.  I attempted to run a database backup using 'xe
> pool-dump-database file-name=xxxx' on the pool master.  The command
> runs and finishes without returning an error, and it generates a
> file of non-zero length.  However, when I open the file, I don't see
> any of the metadata I would expect (host UUIDs, VIFs, VM UUIDs,
> etc.), only some generic definitions.
>
> I do have a previous pool database dump, taken on the master under
> the old XCP 1.1, and that file clearly contains all the metadata and
> is 617 KB in size.
>
> The following is the new file generated on the master with XCP 1.6
> Beta 2, 864 bytes long.
>
>
> <database>
>   <manifest>
>     <pair key="schema_major_vsn" value="5"/>
>     <pair key="schema_minor_vsn" value="35"/>
>     <pair key="generation_count" value="103"/>
>   </manifest>
>   <table name="SR"/>
>   <table name="pool"/>
>   <table name="VBD_metrics"/>
>   <table name="console"/>
>   <table name="host"/>
>   <table name="VIF_metrics"/>
>   <table name="user"/>
>   <table name="PBD"/>
>   <table name="pool_patch"/>
>   <table name="host_metrics"/>
>   <table name="VLAN"/>
>   <table name="Bond"/>
>   <table name="VTPM"/>
>   <table name="event"/>
>   <table name="VBD"/>
>   <table name="VM_guest_metrics"/>
>   <table name="VDI"/>
>   <table name="VM_metrics"/>
>   <table name="task"/>
>   <table name="VM"/>
>   <table name="crashdump"/>
>   <table name="network"/>
>   <table name="PIF"/>
>   <table name="host_patch"/>
>   <table name="host_crashdump"/>
>   <table name="SM"/>
>   <table name="host_cpu"/>
>   <table name="VIF"/>
>   <table name="session"/>
>   <table name="PIF_metrics"/>
> </database>
>
> A dump on the other hosts produces exactly the same file, as
> expected, since the database is replicated from the master to the
> slaves as a matter of course.
>
> The following is the portion of xensource.log covering the execution
> of the pool database dump:
>
>
> Oct 31 12:37:45 xen1 xapi: [ info|xen1|24112 UNIX /var/xapi/xapi||cli]
> xe pool-dump-database password=null file-name=pool-dump-database
> username=root
> Oct 31 12:37:45 xen1 xapi: [ info|xen1|24112 UNIX
> /var/xapi/xapi|session.login_with_password D:6a0b6a3b5be0|xapi]
> Session.create trackid=949de24d97b7ebe26d4b555dd6509e0b pool=false
> uname=root is_local_superuser=true auth_user_sid=
> parent=trackid=9834f5af41c964e225f24279aefe4e49
> Oct 31 12:37:45 xen1 xapi: [debug|xen1|24112 UNIX
> /var/xapi/xapi|session.login_with_password D:6a0b6a3b5be0|mscgen]
> xapi=>xapi 
> [label="<methodCall><methodName>session.get_uuid</methodName><params><param><value>OpaqueRef:69e50c5a-725e-a27d-b80e-cda7ce7d6a46</value></param><param><value>OpaqueRef:69e50c5a-725e-a27d-b80e-cda7ce7d6a46</value></param></params></methodCall>"];
> Oct 31 12:37:45 xen1 xapi: [debug|xen1|24113 UNIX
> /var/xapi/xapi||dummytaskhelper] task dispatch:session.get_uuid
> D:d53ec83a4add created by task D:6a0b6a3b5be0
> Oct 31 12:37:45 xen1 xapi: [debug|xen1|24113 UNIX
> /var/xapi/xapi|dispatch:session.get_uuid D:d53ec83a4add|api_readonly]
> session.get_uuid
> Oct 31 12:37:45 xen1 xapi: [ info|xen1|24112 UNIX
> /var/xapi/xapi|task.create D:ed79e10f9da2|taskhelper] task dump
> database R:8a999bbaa423 (uuid:635e0674-1f79-5c22-b0ab-243ade6f6fca)
> created (trackid=949de24d97b7ebe26d4b555dd6509e0b) by task
> D:ed79e10f9da2
> Oct 31 12:37:45 xen1 xapi: [debug|xen1|24112 UNIX /var/xapi/xapi||cli]
> /pool/xmldbdump?session_id=OpaqueRef:69e50c5a-725e-a27d-b80e-cda7ce7d6a46&task_id=OpaqueRef:8a999bba-a423-c4a7-6173-6bade5afca8a
> Oct 31 12:37:45 xen1 xapi: [debug|xen1|24114 INET
> 0.0.0.0:80||pool_db_sync] received request to write out db as xml
> Oct 31 12:37:45 xen1 xapi: [ info|xen1|24114 INET
> 0.0.0.0:80||taskhelper] task dump database R:8a999bbaa423 forwarded
> (trackid=949de24d97b7ebe26d4b555dd6509e0b)
> Oct 31 12:37:45 xen1 xapi: [debug|xen1|24114 INET 0.0.0.0:80|dump
> database R:8a999bbaa423|pool_db_sync] sending headers
> Oct 31 12:37:45 xen1 xapi: [debug|xen1|24114 INET 0.0.0.0:80|dump
> database R:8a999bbaa423|pool_db_sync] writing database xml
> Oct 31 12:37:45 xen1 xapi: [debug|xen1|24114 INET 0.0.0.0:80|dump
> database R:8a999bbaa423|pool_db_sync] finished writing database xml
> Oct 31 12:37:45 xen1 xapi: [debug|xen1|24114 INET 0.0.0.0:80|dump
> database R:8a999bbaa423|taskhelper] forwarded task destroyed
> Oct 31 12:37:45 xen1 xapi: [debug|xen1|24112 UNIX /var/xapi/xapi||cli]
> Waiting for the task to be completed
> Oct 31 12:37:45 xen1 xapi: [debug|xen1|24112 UNIX
> /var/xapi/xapi|dispatch:task.get_status D:9c36942a54e7|api_readonly]
> task.get_status
> Oct 31 12:37:45 xen1 xapi: [debug|xen1|24112 UNIX
> /var/xapi/xapi|dispatch:task.get_status D:c88da02d77eb|api_readonly]
> task.get_status
> Oct 31 12:37:45 xen1 xapi: [debug|xen1|24112 UNIX
> /var/xapi/xapi|dispatch:task.get_result D:4b68c1120f7c|api_readonly]
> task.get_result
> Oct 31 12:37:45 xen1 xapi: [debug|xen1|24112 UNIX /var/xapi/xapi||cli]
> result was []
> Oct 31 12:37:45 xen1 xapi: [ info|xen1|24112 UNIX
> /var/xapi/xapi|session.logout D:0a377b2fbab2|xapi] Session.destroy
> trackid=949de24d97b7ebe26d4b555dd6509e0b
>
> Can anyone give me an idea of what may be happening?
>
> I do hope this is a configuration issue specific to my setup; if it
> were a bug, it would probably be considered a showstopper for
> production use.
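For anyone triaging this, a quick way to confirm that a dump really contains no records is to count row elements. The sketch below assumes (the thread itself does not confirm this) that a populated xapi dump stores each record as a <row .../> child of its <table> element, so an "empty" dump like the 864-byte one quoted above contains none. The sample file is a hypothetical stand-in for a real dump taken with 'xe pool-dump-database'.

```shell
# Sketch: sanity-check a pool database dump for row data.
# Assumption (not confirmed in this thread): a populated xapi dump
# stores each record as a <row .../> child of its <table> element.

# Stand-in for a real dump, which would be taken with:
#   xe pool-dump-database file-name=/tmp/pool.db
cat > /tmp/pool.db <<'EOF'
<database><manifest><pair key="schema_major_vsn" value="5"/></manifest><table name="SR" /><table name="pool" /></database>
EOF

# grep -o emits one line per occurrence; wc -l counts them, yielding
# 0 when the dump holds schema definitions but no records.
rows=$(grep -o '<row ' /tmp/pool.db | wc -l)
echo "row elements found: $rows"
if [ "$rows" -eq 0 ]; then
    echo "dump contains schema only, no records"
fi
```

Running the same grep against a known-good 1.1-era dump should show a non-zero count if the assumption about the row format holds, which would confirm the new dump is genuinely empty rather than merely formatted differently.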

_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api


 

