[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-API] [Squeezd] Disabling some balancing features


  • To: Mike McClurg <mike.mcclurg@xxxxxxxxxx>
  • From: Dave Scott <Dave.Scott@xxxxxxxxxxxxx>
  • Date: Tue, 23 Oct 2012 15:47:37 +0100
  • Accept-language: en-US
  • Cc: "xen-api@xxxxxxxxxxxxx" <xen-api@xxxxxxxxxxxxx>
  • Delivery-date: Tue, 23 Oct 2012 14:47:48 +0000
  • List-id: User and development list for XCP and XAPI <xen-api.lists.xen.org>
  • Thread-index: Ac2xI7abeJQe4NueTySJRWb1ZjQEGQACKvcw
  • Thread-topic: [Xen-API] [Squeezd] Disabling some balancing features


> On 23/10/12 14:15, Dave Scott wrote:
> > In case it's useful: the most recent versions of xapi (found in
> XenServer 6.1 and should be in XCP 1.6) can run without squeezed. So
> you can
> >
> > service squeezed stop
> >
> > and then when you try to start a VM, there won't be any squeezing at
> all. Your new daemon could do whatever it likes to manage the VM
> balloon targets independently of xapi.
> >
> > Does that help at all?
> >

Mike wrote:
> Hi Dave,
> 
> I just tried this on XCP 1.6. I stopped squeezed, and then restarted
> xenopsd and xapi (for luck), and then tried a localhost migrate. I got
> the error:
> 
> The server failed to handle your request, due to an internal error. The
> given message may give details useful for debugging the problem.
> message: Xenops_interface.Internal_error("Unix.Unix_error(63, \"connect\", \"\")")
> 
> xensource.log (see below) seems to show xenopsd trying to rebalance
> memory, even though there is plenty of memory free. Do you know what's
> going on here?

Oh dear -- that's really supposed to work.

Looking at the code in master:

https://github.com/xen-org/xen-api/blob/master/ocaml/xenops/xenops_server_xen.ml

Line 467:

(** After an event which frees memory (eg a domain destruction), perform a
    one-off memory rebalance *)
let balance_memory dbg =
        debug "rebalance_memory";
        Client.balance_memory dbg

It looks like we forgot to use the function "wrap", defined at line 347:

let wrap f =
        try Some (f ())
        with
        (* ... *)
        | Unix.Unix_error(Unix.ECONNREFUSED, "connect", _) ->
                info "ECONNREFUSED talking to squeezed: assuming it has been switched off";
                None

This is probably worth fixing; I'll make a pull request a bit later (unless you
beat me to it :-)
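In case it helps review, here's a rough, self-contained sketch of what the fix might look like. The idea is just to route the squeezed RPC through "wrap" so a stopped squeezed means "no squeezing" instead of an Internal_error. Note that [debug], [info] and [squeezed_balance_memory] below are stand-ins I've made up for the real xenopsd logger and Client.balance_memory, purely for illustration:

```ocaml
(* Stand-ins for the xenopsd log functions, for illustration only *)
let debug fmt = Printf.ksprintf (fun s -> print_endline ("debug: " ^ s)) fmt
let info fmt = Printf.ksprintf (fun s -> print_endline ("info: " ^ s)) fmt

(* Same shape as the [wrap] quoted above: absorb ECONNREFUSED from a
   stopped squeezed and report the call as skipped via [None] *)
let wrap f =
  try Some (f ())
  with Unix.Unix_error (Unix.ECONNREFUSED, "connect", _) ->
    info "ECONNREFUSED talking to squeezed: assuming it has been switched off";
    None

(* Stand-in for Client.balance_memory that fails exactly as it does
   when squeezed has been stopped *)
let squeezed_balance_memory _dbg =
  raise (Unix.Unix_error (Unix.ECONNREFUSED, "connect", ""))

(* balance_memory with the fix applied: the exception no longer escapes *)
let balance_memory dbg =
  debug "rebalance_memory";
  ignore (wrap (fun () -> squeezed_balance_memory dbg))
```

With this in place, calling balance_memory while squeezed is down just logs the "assuming it has been switched off" line and returns unit, rather than propagating Unix_error(63, "connect", "") up through the migrate path as in the log below.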

Thanks,
Dave

> 
> Mike
> 
> 
> [xensource.log]
> Oct 23 14:26:30 xcp-boston-53341-1 /opt/xensource/libexec/xenopsd: [debug|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|xenops] VM = dd2b8958-15aa-1988-c7fa-e7b8c3a28eb2; domid = 3; set_memory_dynamic_range min = 262144; max = 262144
> Oct 23 14:26:30 xcp-boston-53341-1 /opt/xensource/libexec/xenopsd: [debug|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|xenops] rebalance_memory
> Oct 23 14:26:30 xcp-boston-53341-1 /opt/xensource/libexec/xenopsd: [debug|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|mscgen] xenops=>squeezed [label="balance_memory"];
> Oct 23 14:26:30 xcp-boston-53341-1 xenopsd: [info|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|xenops] Caught Unix.Unix_error(63, "connect", "") executing ["VM_migrate", ["dd2b8958-15aa-1988-c7fa-e7b8c3a28eb2", {}, {}, "http:\/\/10.80.238.191\/services\/xenops?session_id=OpaqueRef:7fc22b4d-f70b-25dc-dca3-b834b7eb5e5d"]]: triggering cleanup actions
> Oct 23 14:26:30 xcp-boston-53341-1 xenopsd: [debug|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|xenops] Task 11 reference VM.pool_migrate R:944931f7933c: ["VM_check_state", "dd2b8958-15aa-1988-c7fa-e7b8c3a28eb2"]
> Oct 23 14:26:30 xcp-boston-53341-1 xenopsd: [debug|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|xenops] VM dd2b8958-15aa-1988-c7fa-e7b8c3a28eb2 is not requesting any attention
> Oct 23 14:26:30 xcp-boston-53341-1 xenopsd: [debug|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|xenops] VM_DB.signal dd2b8958-15aa-1988-c7fa-e7b8c3a28eb2
> Oct 23 14:26:30 xcp-boston-53341-1 xenopsd: [debug|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|xenops] Task 11 completed; duration = 0
> Oct 23 14:26:30 xcp-boston-53341-1 xenopsd: [debug|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|xenops] Task 10 failed; exception = ["Internal_error", "Unix.Unix_error(63, \"connect\", \"\")"]
> Oct 23 14:26:30 xcp-boston-53341-1 xenopsd: [debug|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|xenops]
> Oct 23 14:26:30 xcp-boston-53341-1 xenopsd: [debug|xcp-boston-53341-1|7||xenops] TASK.signal 10 = ["Failed", ["Internal_error", "Unix.Unix_error(63, \"connect\", \"\")"]]
>


_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api


 

