
Re: [Xen-API] XCP 1.0 and openvswitch problem high memory


  • To: xen-api@xxxxxxxxxxxxx
  • From: George Shuklin <george.shuklin@xxxxxxxxx>
  • Date: Tue, 08 May 2012 00:57:15 +0400
  • Delivery-date: Mon, 07 May 2012 20:57:30 +0000
  • List-id: Development list for XCP and Xen API <xen-api.lists.xen.org>

I've run into that problem periodically (we are still using XCP 0.5 for an older pool in production) when dom0 is short on resources. The solution: give dom0 more CPUs and raise its memory.

After relieving the resource pressure I have not hit that condition again for a prolonged period (>4 months) on a fairly heavily loaded pool.
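For anyone hitting the same thing, here is a minimal sketch of how dom0 resources are usually raised on an XCP/XenServer host. It assumes the stock extlinux bootloader config at /boot/extlinux.conf; the exact append line and suitable values depend on your host, so the 2048M / 4 vCPU figures below are only examples, not a recommendation:

  # Back up the boot config first
  cp /boot/extlinux.conf /boot/extlinux.conf.bak
  # On the xen.gz line, raise dom0_mem and allow more dom0 vCPUs, e.g.:
  #   append /boot/xen.gz dom0_mem=2048M dom0_max_vcpus=4 ... (keep the rest of the line unchanged)
  vi /boot/extlinux.conf
  # The new dom0 settings take effect on the next boot
  reboot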

On 07.05.2012 16:35, Nycko wrote:
Hello, I have a problem with XCP 1.0. In recent weeks I have seen
openvswitch consuming a lot of resources, CPU and, above all, memory. At
some point (after a while, less than a week) it reaches the dom0 memory
limit (512 MB) and starts swapping, to the point of leaving the physical
host, and hence its virtual machines, inoperative. Everything returns to
normal when I restart the daemon (/etc/init.d/openvswitch), but I cannot
keep doing that all the time. Can someone give me some tips on where to
keep looking?
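Until the root cause is found, one possible stopgap (a sketch only, assuming the init script path mentioned above and an arbitrary ~200 MB threshold) is to restart the daemon from cron only when its resident memory has grown too large:

  #!/bin/sh
  # Hypothetical watchdog: restart openvswitch when ovs-vswitchd RSS exceeds a limit.
  # Example cron entry: */10 * * * * /root/ovs-mem-watch.sh
  LIMIT_KB=204800   # ~200 MB, arbitrary example threshold
  RSS_KB=$(ps -C ovs-vswitchd -o rss= | awk '{s+=$1} END {print s+0}')
  if [ "$RSS_KB" -gt "$LIMIT_KB" ]; then
      logger "ovs-vswitchd RSS ${RSS_KB} kB over limit, restarting openvswitch"
      /etc/init.d/openvswitch restart
  fi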

Here are some logs that may be useful:
#tail -f /var/log/openvswitch/ovs-vswitchd.log
May 07 09:28:36|47029|timeval|WARN|6 ms poll interval (0 ms user, 0 ms system) is over 9 times the weighted mean interval 1 ms (3462238 samples)
May 07 09:28:36|47030|timeval|WARN|context switches: 0 voluntary, 3 involuntary
May 07 09:28:36|47031|coverage|INFO|Skipping details of duplicate event coverage for hash=ba09b798 in epoch 3462238
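Those timeval warnings mainly show the poll loop being starved; to see what is actually growing, it may help to log the daemon's memory next to the datapath state over time. A rough sketch with standard tools (whether every command is available depends on the Open vSwitch version shipped with XCP 1.0):

  # Append a sample every few minutes, e.g. from cron
  date >> /root/ovs-growth.log
  ps -C ovs-vswitchd -o pid,rss,vsz,%cpu >> /root/ovs-growth.log
  ovs-dpctl show >> /root/ovs-growth.log      # datapaths and their ports
  ovs-vsctl list-br >> /root/ovs-growth.log   # bridges known to ovsdb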

If you need any more information, do not hesitate to ask; I cannot find
my way to a solution and I have been at this for several weeks.

PS: my English sucks

Regards


_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api


 

