
Re: [Xen-API] Inconsistent Pool / NIC Requirements in XCP 1.6


  • To: Graham Frank <jakiao@xxxxxxxxx>
  • From: Rob Hoes <Rob.Hoes@xxxxxxxxxx>
  • Date: Fri, 9 Nov 2012 19:32:54 +0000
  • Accept-language: en-US
  • Cc: "xen-api@xxxxxxxxxxxxx" <xen-api@xxxxxxxxxxxxx>
  • Delivery-date: Fri, 09 Nov 2012 19:33:14 +0000
  • List-id: User and development list for XCP and XAPI <xen-api.lists.xen.org>
  • Thread-index: Ac2+sQSN/C/5Gga7TEKHAMjSQIWnvw==
  • Thread-topic: [Xen-API] Inconsistent Pool / NIC Requirements in XCP 1.6

(Forgot to add the list in my reply - it is back now)

If the joining host already has bonding set up, it may conflict with the pool's 
configuration. To keep things simple, this is not allowed.

When adding a host to a pool, it is best to add one that has no configuration 
yet and was just installed; it will automatically inherit the pool's 
configuration. The restriction is there to ensure that the network config is 
consistent across the pool, so that a slave can always talk to the master, and 
VMs on the same network can talk to each other.
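For reference, this is roughly what clearing a conflicting bond on the joining host looks like with the xe CLI (a sketch only; the angle-bracket values are placeholders you would fill in for your own pool):

```shell
# On the joining host: check whether the management interface is a physical PIF
xe pif-list management=true params=uuid,device,physical

# If management runs over a bond, remove the bond first; management
# reverts to one of the underlying physical NICs
xe bond-list
xe bond-destroy uuid=<bond-uuid>

# Then join the pool; the pool's bond configuration is recreated
# automatically on the new slave during the join
xe pool-join master-address=<master-address> master-username=root master-password=<password>
```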

Cheers,
Rob

On 9 Nov 2012, at 18:19, "Graham Frank" <jakiao@xxxxxxxxx> wrote:

> I will give it a try, just need to configure IPMI.  Is there a reason
> why it supports bonding after joining a pool but not before?
> Thanks!
> --Graham
> 
> On Fri, Nov 9, 2012 at 12:14 PM, Rob Hoes <Rob.Hoes@xxxxxxxxxx> wrote:
>> Hi Graham,
>> 
>> Make sure that the management interface on the host that you are adding to 
>> the pool is not yet bonded. It will be automatically bonded while it joins 
>> the pool to match the configuration of the pool.
>> 
>> Cheers,
>> Rob
>> 
>>> -----Original Message-----
>>> From: xen-api-bounces@xxxxxxxxxxxxx [mailto:xen-api-
>>> bounces@xxxxxxxxxxxxx] On Behalf Of Graham Frank
>>> Sent: 09 November 2012 5:47 PM
>>> To: xen-api@xxxxxxxxxxxxx
>>> Subject: [Xen-API] Inconsistent Pool / NIC Requirements in XCP 1.6
>>> 
>>> Hi there,
>>> 
>>> I have been working on a small resource pool for our company, and I just got
>>> our xcp3 node reinstalled and ready to join our XCP 1.6 beta 2 pool.
>>> 
>>> The resource pool presently consists of xcp1 and xcp2.  Both have two
>>> physical network interfaces configured in bonded mode (Bond 0+1), which
>>> the management IP address runs over.  When I tried adding xcp3, I get an
>>> error informing me that the management IP must be on a dedicated
>>> interface.  If that's the case, then why is it allowing it on xcp1 and xcp2?
>>> 
>>> Here are the relevant bits from the xensource.log file --
>>> 
>>> Nov  9 09:20:04 xcp3 xapi: [error|xcp3.cloud.lax1|502|Async.pool.join R:0b45df90b376|xapi] The current host has a management interface which is not physical: cannot join a new pool
>>> Nov  9 09:20:04 xcp3 xapi: [debug|xcp3.cloud.lax1|502|Async.pool.join R:0b45df90b376|xapi] Raised at xapi_pool.ml:184.9-101 -> xapi_pool.ml:358.1-43 -> xapi_pool.ml:676.2-52 -> pervasiveext.ml:22.2-9
>>> Nov  9 09:20:04 xcp3 xapi: [debug|xcp3.cloud.lax1|502|Async.pool.join R:0b45df90b376|mscgen] xapi=>remote_xapi [label="<methodCall><methodName>session.logout</methodName><params><param><value>OpaqueRef:ea2aaaa8-e69f-2676-ddd8-eaf9c1b475fa</value></param></params></methodCall>"];
>>> Nov  9 09:20:04 xcp3 xapi: [debug|xcp3.cloud.lax1|502|Async.pool.join R:0b45df90b376|stunnel] Using commandline: /usr/sbin/stunnel -fd b82229e4-5d4a-0605-6422-7e2a56af39b8
>>> Nov  9 09:20:04 xcp3 xapi: [debug|xcp3.cloud.lax1|502|Async.pool.join R:0b45df90b376|stunnel] stunnel has pidty: (FEFork (30,10483))
>>> Nov  9 09:20:04 xcp3 xapi: [debug|xcp3.cloud.lax1|502|Async.pool.join R:0b45df90b376|stunnel] stunnel start
>>> Nov  9 09:20:04 xcp3 xapi: [debug|xcp3.cloud.lax1|502|Async.pool.join R:0b45df90b376|xmlrpc_client] stunnel pid: 10483 (cached = false) connected to xcp1.cloud.lax1.mydomain.com:443
>>> Nov  9 09:20:04 xcp3 xapi: [debug|xcp3.cloud.lax1|502|Async.pool.join R:0b45df90b376|xmlrpc_client] with_recorded_stunnelpid task_opt=None s_pid=10483
>>> Nov  9 09:20:04 xcp3 xapi: [debug|xcp3.cloud.lax1|376 INET 0.0.0.0:80|dispatch:task.get_allowed_operations D:d8d3dec4836f|api_readonly] task.get_allowed_operations
>>> Nov  9 09:20:04 xcp3 xapi: [debug|xcp3.cloud.lax1|502|Async.pool.join R:0b45df90b376|backtrace] Raised at pervasiveext.ml:26.22-25 -> xapi_pool.ml:675.1-1023 -> rbac.ml:229.16-23
>>> Nov  9 09:20:04 xcp3 xapi: [debug|xcp3.cloud.lax1|502|Async.pool.join R:0b45df90b376|backtrace] Raised at rbac.ml:238.10-15 -> server_helpers.ml:79.11-41
>>> Nov  9 09:20:04 xcp3 xapi: [debug|xcp3.cloud.lax1|502|Async.pool.join R:0b45df90b376|dispatcher] Server_helpers.exec exception_handler: Got exception POOL_JOINING_HOST_MUST_HAVE_PHYSICAL_MANAGEMENT_NIC: [  ]
>>> 
>>> I think the reason this issue happened with xcp3 and not the other two is
>>> that I created the pool before bonding their NICs.  No error was raised
>>> when I converted them to bonding.
>>> 
>>> Is this a bug or user error? Thank you.
>>> 
>>> -Graham
>>> 
>>> _______________________________________________
>>> Xen-api mailing list
>>> Xen-api@xxxxxxxxxxxxx
>>> http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api
