
Re: [Xen-API] XCP 1.6 VM.get_all_records API Call hanging



Dave,

On 3 Jan 2013, at 12:05, Dave Scott wrote:

> Hi,
> 
> That's very strange. Could it be that the "VM.get_all_records" response is 
> too big to be sent properly? Do other calls that generate less traffic 
> (e.g. "Pool.get_all_records") always work, or do they sometimes fail too?

I have only traced it to VM.get_all_records… so far. Also, this is a vanilla 
install of 1.6, so only the standard list of templates is installed.
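
For reference, a minimal sketch of how each get_all_records call could be 
timed in isolation, assuming the XenAPI.py bindings that ship with XCP (the 
host name and credentials below are placeholders):

#!/usr/bin/env python
# Minimal sketch: time a few read-only get_all_records calls to see which
# ones stall. Assumes the XenAPI.py bindings shipped with XCP; the host
# name and credentials are placeholders, not taken from this setup.
import time
import XenAPI

session = XenAPI.Session("http://xcp-test2")          # pool master (placeholder)
session.xenapi.login_with_password("root", "secret")  # placeholder credentials
try:
    for cls in ("Pool", "host", "SR", "VM"):
        start = time.time()
        records = getattr(session.xenapi, cls).get_all_records()
        print "%s.get_all_records: %d records in %.2fs" % \
            (cls, len(records), time.time() - start)
finally:
    session.xenapi.session.logout()

If Pool and host come back quickly while VM stalls, that would point at the 
size of the VM response (or something specific to the VM class) rather than 
the transport itself.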

> 
> Perhaps you've got a thread or fd leak? If you run "top" and look at the 
> virtual size of the "xapi" process, is it very large?

Virtual: 256M, RSS: 28M

> Maybe there are too many threads and it's blocking? If it looks very large 
> (more than 1 or 2 GB) then I'd be interested to see how many file descriptors 
> are open (look in /proc/<pid>/fd -- but make sure you select the child pid, 
> since the parent is a much smaller watchdog process). It might be that some 
> client(s) are keeping connections open for longer than expected, and xapi 
> currently uses a thread-per-connection model.

# ls -l /proc/<pid>/fd | wc -l
28
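
In case it is useful, a minimal sketch that counts the open file descriptors 
of every running xapi process, so the worker child is covered as well as the 
watchdog parent (it assumes only a standard Linux /proc layout and nothing 
xapi-specific):

#!/usr/bin/env python
# Minimal sketch: count open file descriptors for each running xapi process,
# so the worker child is checked rather than only the smaller watchdog parent.
# Assumes a standard Linux /proc layout; nothing xapi-specific is used.
import os

for pid in os.listdir("/proc"):
    if not pid.isdigit():
        continue
    try:
        cmdline = open("/proc/%s/cmdline" % pid).read().split("\0")[0]
        if os.path.basename(cmdline) == "xapi":
            fds = len(os.listdir("/proc/%s/fd" % pid))
            print "pid %s (%s): %d open fds" % (pid, cmdline, fds)
    except (IOError, OSError):
        pass  # the process exited between listdir() and open()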

> 
> Does restarting the xapi process cause the problem to go away (even if 
> temporarily)?

This had no effect at all. However, restarting xapi-templatescan did return 
this:

updateXapiTemplates.py:MUTTER:reading DB from /opt/xensource/guest-templates/dbcache
updateXapiTemplates.py:MUTTER:DB does not exist at /opt/xensource/guest-templates/dbcache
updateXapiTemplates.py:MUTTER:Rebuilding database..
Traceback (most recent call last):
  File "/opt/xensource/bin/updateXapiTemplates.py", line 357, in ?
    commitDatabase()
  File "/opt/xensource/bin/updateXapiTemplates.py", line 331, in commitDatabase
    foo=open(filename,'w')
IOError: [Errno 2] No such file or directory: '/opt/xensource/guest-templates/dbcache.new'


At the point of the problem I have no guest templates (just the pre-installed 
ones), so this may be expected, but I thought it was worth mentioning.
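
For what it is worth, Errno 2 from open(..., 'w') normally means a missing 
parent directory rather than a missing file, and the traceback suggests that 
commitDatabase writes dbcache.new and then presumably renames it over 
dbcache. The sketch below is hypothetical (the commit_dbcache helper is made 
up, not taken from updateXapiTemplates.py) and only illustrates that pattern 
with the directory created first:

#!/usr/bin/env python
# Hypothetical sketch of the write-new-then-rename pattern the traceback
# implies. commit_dbcache() is NOT taken from updateXapiTemplates.py; it only
# illustrates why open(..., 'w') raises Errno 2 when the parent directory is
# absent, and how creating it first avoids that.
import os

DBCACHE = "/opt/xensource/guest-templates/dbcache"

def commit_dbcache(contents):
    parent = os.path.dirname(DBCACHE)
    if not os.path.isdir(parent):
        os.makedirs(parent)          # without this, open() fails with Errno 2
    tmp = DBCACHE + ".new"
    f = open(tmp, "w")               # the call that fails in the traceback
    try:
        f.write(contents)
    finally:
        f.close()
    os.rename(tmp, DBCACHE)          # atomic replace on the same filesystem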

Dave

> 
> Cheers,
> Dave
> 
>> -----Original Message-----
>> From: xen-api-bounces@xxxxxxxxxxxxx [mailto:xen-api-
>> bounces@xxxxxxxxxxxxx] On Behalf Of Dave Avent
>> Sent: 03 January 2013 11:14 AM
>> To: xen-api@xxxxxxxxxxxxx
>> Subject: [Xen-API] XCP 1.6 VM.get_all_records API Call hanging
>> 
>> All,
>> 
>> I have been playing with Xen Cloud Platform 1.6 in the lab and have noticed
>> some strange behaviour when driving it through the API. XenCenter worked
>> some of the time, but then it appeared to freeze, stopped refreshing, and
>> on re-connecting got stuck on "Synchronising". I put this down to the client
>> software and switched instead to writing my own client interface using Ruby
>> and the FOG libraries. This worked well, but I started seeing the same
>> lock-ups. After digging about I realised that it always locks up at the same
>> point, namely the VM.get_all_records call.
>> 
>> The call is received, as this xensource.log entry shows:
>> 
>> xapi: [debug|xcp-test2|105923 INET 0.0.0.0:80|dispatch:VM.get_all_records D:7f14fc2bea45|api_readonly] VM.get_all_records
>> 
>> But no response ever comes back, and the client's TCP connection times out.
>> 
>> This happens for well over 50% of the calls made to the API, so I am
>> interested to know whether anyone else is seeing the same problem.
>> 
>> Regards,
>> 
>> Dave

_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api


 

