
Re: [Xen-API] XCP 1.6 BETA problem with pool-dump-database



Sounds like a different problem.  I shut down and restart the pool
regularly and the slave(s) reconnect normally.  Sometimes a slave fails
to see the master due to a network glitch, and the only workaround
is to reboot the slave, after which the pool is reconstituted.

You didn't mention the XCP version.  My pool is at 1.6Beta2.
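
For the pool-dump-database failure itself, one stopgap might be to direct
the xe CLI at the master rather than letting a slave handle the request.
This is only a sketch; the master address and password below are
placeholders for your own values:

    # run from any host; -s/-u/-pw make the CLI talk to the pool master
    xe -s <master-address> -u root -pw <password> \
       pool-dump-database file-name=pool-dump-database.$(date +%Y%m%d%H%M%S)

or simply run the same pool-dump-database command in a shell on the
master, which is the case that succeeds in your logs.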

On Thu, Nov 22, 2012 at 1:53 PM, Philip Chan <pip.chan@xxxxxxxxx> wrote:
> I also find that if I restart the pool (that is, I shut down the master and
> all slaves, then reboot them all), the slaves fail to rejoin the pool. It
> reports that the slaves have been deleted from the master, and I have found
> no way to recover from that. The only resolution I have found is to
> reinstall XCP from a fresh, clean install (without creating a local file
> repo during setup) and then rejoin the pool from the master. The workaround
> I have now is to keep the master running, otherwise it would be a disaster.
>
>
> On Fri, Nov 16, 2012 at 3:00 PM, Black Bird <blackbird1758@xxxxxxxxx> wrote:
>>
>> I'm still struggling with this issue, but have observed something
>> interesting.  The command executes correctly on the pool master
>> (xen3v3), and fails on a slave (xen2) with the 'database not in
>> memory' error.
>>
>> Following is the xensource.log file on the pool master in both cases.
>>
>> ----------------------------------------------
>> successful attempt from master.  Log on master
>> ----------------------------------------------
>> Nov 16 17:36:02 xen3v3 xapi: [ info|xen3v3|10236 UNIX
>> /var/xapi/xapi||cli] xe pool-dump-database password=null
>> file-name=pool-dump-database.20121116173600 username=root
>> Nov 16 17:36:02 xen3v3 xapi: [ info|xen3v3|10236 UNIX
>> /var/xapi/xapi|session.login_with_password D:2d839a80985d|xapi]
>> Session.create trackid=8e34ba23e24c9883b5c413ddfcb356bc pool=false
>> uname=root is_local_superuser=true auth_user_sid=
>> parent=trackid=9834f5af41c964e225f24279aefe4e49
>> Nov 16 17:36:02 xen3v3 xapi: [ info|xen3v3|10236 UNIX
>> /var/xapi/xapi|task.create D:ebb96e910762|taskhelper] task dump
>> database R:be1da62787ce (uuid:35f0640e-c59a-5f2b-f5e9-a6e869f28888)
>> created (trackid=8e34ba23e24c9883b5c413ddfcb356bc) by task
>> D:ebb96e910762
>> Nov 16 17:36:02 xen3v3 xapi: [ info|xen3v3|10238 INET
>> 0.0.0.0:80||taskhelper] task dump database R:be1da62787ce forwarded
>> (trackid=8e34ba23e24c9883b5c413ddfcb356bc)
>> Nov 16 17:36:03 xen3v3 xapi: [ info|xen3v3|10236 UNIX
>> /var/xapi/xapi|session.logout D:7b78ff9cc719|xapi] Session.destroy
>> trackid=8e34ba23e24c9883b5c413ddfcb356bc
>>
>>
>>
>> ----------------------------------------
>> failed attempt from slave.  Log on master
>> ----------------------------------------
>> Nov 16 17:41:08 xen3v3 xapi: [debug|xen3v3|11148 UNIX
>> /var/xapi/xapi||dummytaskhelper] task dispatch:event.from
>> D:f31d73627b1e created by task D:1f9945cc5399
>> Nov 16 17:41:10 xen3v3 xapi: [debug|xen3v3|11149 INET
>> 0.0.0.0:80||dummytaskhelper] task dispatch:session.get_uuid
>> D:c0b914428578 created by task D:ee2a926fc137
>> Nov 16 17:41:10 xen3v3 xapi: [debug|xen3v3|11149 INET
>> 0.0.0.0:80|dispatch:session.get_uuid D:c0b914428578|api_readonly]
>> session.get_uuid
>> Nov 16 17:41:10 xen3v3 xapi: [ info|xen3v3|11150 INET 0.0.0.0:80||cli]
>> xe pool-dump-database password=null
>> file-name=pool-dump-database.20121116174110 username=root
>> Nov 16 17:41:11 xen3v3 xapi: [ info|xen3v3|11150 INET
>> 0.0.0.0:80|task.create D:a7fff784ed18|taskhelper] task dump database
>> R:b2185135d308 (uuid:6d6d8e2c-df90-8cb3-1603-698c87ec4a6c) created
>> (trackid=9c40424ccae8e6ab343a904ecd1a7f4c) by task D:a7fff784ed18
>> Nov 16 17:41:11 xen3v3 xapi: [debug|xen3v3|11150 INET 0.0.0.0:80||cli]
>>
>> /pool/xmldbdump?session_id=OpaqueRef:72f48933-ffd3-f6de-bc27-6d71f38e9777&task_id=OpaqueRef:b2185135-d308-c183-1b21-5eb561db5511
>> Nov 16 17:41:11 xen3v3 xapi: [debug|xen3v3|11151 INET
>> 0.0.0.0:80||dummytaskhelper] task dispatch:session.slave_login
>> D:b2b15de6d447 created by task D:631b2c68a809
>> Nov 16 17:41:11 xen3v3 xapi: [ info|xen3v3|11151 INET
>> 0.0.0.0:80|session.slave_login D:c38111cf7aa1|xapi] Session.create
>> trackid=398737a9788c502c00dda1747cffb792 pool=true uname=
>> is_local_superuser=true auth_user_sid=
>> parent=trackid=9834f5af41c964e225f24279aefe4e49
>> Nov 16 17:41:11 xen3v3 xapi: [debug|xen3v3|11151 INET
>> 0.0.0.0:80|session.slave_login D:c38111cf7aa1|mscgen] xapi=>xapi
>>
>> [label="<methodCall><methodName>session.get_uuid</methodName><params><param><value>OpaqueRef:d021831a-5298-5975-9960-0c10862a11ad</value></param><param><value>OpaqueRef:d021831a-5298-5975-9960-0c10862a11ad</value></param></params></methodCall>"];
>> Nov 16 17:41:11 xen3v3 xapi: [debug|xen3v3|4 UNIX
>> /var/xapi/xapi||xapi] Set threads number soft limit to 67.351919 with
>> 0.334173 confidence and 0.984833 tolerance, the next tweak will be
>> 60.000000 seconds away at the earliest.
>> Nov 16 17:41:11 xen3v3 xapi: [debug|xen3v3|11152 UNIX
>> /var/xapi/xapi||dummytaskhelper] task dispatch:session.get_uuid
>> D:c4cd084c3e8c created by task D:c38111cf7aa1
>> Nov 16 17:41:11 xen3v3 xapi: [debug|xen3v3|11152 UNIX
>> /var/xapi/xapi|dispatch:session.get_uuid D:c4cd084c3e8c|api_readonly]
>> session.get_uuid
>> Nov 16 17:41:11 xen3v3 xapi: [debug|xen3v3|11153 INET
>> 0.0.0.0:80||dummytaskhelper] task dispatch:pool.audit_log_append
>> D:2f3681049c6c created by task D:631b2c68a809
>> Nov 16 17:41:11 xen3v3 xapi: [ info|xen3v3|11153 INET
>> 0.0.0.0:80|dispatch:pool.audit_log_append D:2f3681049c6c|taskhelper]
>> task pool.audit_log_append R:36a9cfe6d24e
>> (uuid:ba7dfdb3-64dd-0d7b-6100-2f13c51cf949) created
>> (trackid=398737a9788c502c00dda1747cffb792) by task D:631b2c68a809
>> Nov 16 17:41:11 xen3v3 xapi: [debug|xen3v3|11154 INET
>> 0.0.0.0:80||dummytaskhelper] task dispatch:session.logout
>> D:c7077053a3ff created by task D:631b2c68a809
>> Nov 16 17:41:11 xen3v3 xapi: [ info|xen3v3|11154 INET
>> 0.0.0.0:80|session.logout D:4d6256e6431a|xapi] Session.destroy
>> trackid=398737a9788c502c00dda1747cffb792
>> Nov 16 17:41:11 xen3v3 xapi: [debug|xen3v3|11150 INET 0.0.0.0:80||cli]
>> Waiting for the task to be completed
>> Nov 16 17:41:11 xen3v3 xapi: [debug|xen3v3|11150 INET
>> 0.0.0.0:80|dispatch:task.get_status D:ab929847708b|api_readonly]
>> task.get_status
>> Nov 16 17:41:11 xen3v3 xapi: [debug|xen3v3|11150 INET
>> 0.0.0.0:80|dispatch:task.get_status D:b42846f0f346|api_readonly]
>> task.get_status
>> Nov 16 17:41:11 xen3v3 xapi: [debug|xen3v3|11150 INET
>> 0.0.0.0:80|dispatch:task.get_error_info D:76d361eb5c76|api_readonly]
>> task.get_error_info
>> Nov 16 17:41:11 xen3v3 xapi: [debug|xen3v3|11150 INET
>> 0.0.0.0:80||xapi] Raised at cli_util.ml:106.10-67 ->
>> pervasiveext.ml:22.2-9
>> Nov 16 17:41:11 xen3v3 xapi: [debug|xen3v3|11150 INET
>> 0.0.0.0:80||xapi] Raised at pervasiveext.ml:26.22-25 ->
>> cli_operations.ml:3757.7-76 -> xapi_cli.ml:119.18-58 ->
>> pervasiveext.ml:22.2-9
>> Nov 16 17:41:11 xen3v3 xapi: [debug|xen3v3|11150 INET
>> 0.0.0.0:80||backtrace] Raised at pervasiveext.ml:26.22-25 ->
>> xapi_cli.ml:118.2-139 -> xapi_cli.ml:205.7-44 -> xapi_cli.ml:257.4-23
>> Nov 16 17:41:11 xen3v3 xapi: [debug|xen3v3|11150 INET 0.0.0.0:80||cli]
>> Xapi_cli.exception_handler: Got exception INTERNAL_ERROR: [
>> Db_ref.Database_not_in_memory ]
>> Nov 16 17:41:11 xen3v3 xapi: [debug|xen3v3|11150 INET 0.0.0.0:80||cli]
>> Raised at string.ml:150.25-34 -> stringext.ml:108.13-29
>> Nov 16 17:41:11 xen3v3 xapi: [ info|xen3v3|11155 INET
>> 0.0.0.0:80|session.logout D:97f7efba1254|xapi] Session.destroy
>> trackid=9c40424ccae8e6ab343a904ecd1a7f4c
>>
>
>
>
>
> --
> Best Regards,
>
>  Philip
>  <http://www.flexsystem.com/>
> Email: pip.chan@xxxxxxxxx
>
>

_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api


 

