
Re: [Xen-API] Xen Management API draft



On Fri, Jun 23, 2006 at 05:48:08PM -0600, Jim Fehlig wrote:
> Ewan Mellor wrote:
> >In particular, we need to get this API into a state so that it is the right
> >API for Xen-CIM.  We need to think whether libvirt is ready to rely upon 
> >this
> >API too.  That would mean moving the hypercalls and Xenstore reads and 
> >writes
> >out of libvirt, and pushing it up the stack alongside the other client-side
> >bindings.
> >  
> 
> Is it realistic to use XML-RPC for tasks such as resource monitoring?  
> Seems like there should be an optional, lower-latency, lighter-weight 
> mechanism to retrieve things such as host_cpu.utilisation, 
> VBD.IO_bandwidth/incoming_kbs, etc.  (Makes me think of the Uncertainty 
> Principle or Observer Effect, i.e. we crank up cpu utilization by asking 
> for cpu utilization :-)).  I guess the hypercalls and Xenstore 
> reads/writes mentioned above would still exist but not be part of the 
> "guarantees" for Xen.

Even if there were an optional high-performance channel for these stats,
it would still be desirable to adjust the APIs to make it possible to
retrieve all the statistics associated with a guest (CPU, network,
disk utilization, etc.) in a single XML-RPC call - with the current API
you'd have to make three separate calls, one per guest device you want
stats from, which compounds the performance hit.
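To make the round-trip cost concrete, here is a minimal sketch comparing the
marshalled payloads. The method names (VM.get_all_stats and the per-device
getters) are assumptions for illustration, not names from the draft:

```python
# Sketch: one batched stats request vs. three per-device requests.
# 'VM.get_all_stats' and the per-device method names are assumed,
# not taken from the draft API.
import xmlrpc.client

vm_uuid = "6e9b2f2a-0000-0000-0000-000000000001"

# One HTTP POST fetching CPU, network and disk stats together:
batched = xmlrpc.client.dumps((vm_uuid,), methodname="VM.get_all_stats")

# Versus three round trips with the API as it stands today:
separate = [
    xmlrpc.client.dumps((vm_uuid,), methodname=m)
    for m in ("host_cpu.get_utilisation",
              "VIF.get_IO_bandwidth",
              "VBD.get_IO_bandwidth")
]

# Each entry in 'separate' pays its own HTTP + XML parse overhead;
# 'batched' pays it once.
params, method = xmlrpc.client.loads(batched)
print(method, params)
```

Polling once a second, the difference between one POST and three per guest
adds up quickly on a host with many VMs.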

> Many of the RPCs return void, e.g. VM.start(), VM.pause(), etc.  How are 
> failures indicated?  Via the Status and ErrorDescription elements of 
> return value struct?

I would like to see an explicit set of error/exception conditions documented
against each API. When writing apps against the API it is important to know
what failure scenarios one has to be aware of. I realize some of the failure
scenarios won't become clear until the server-side implementation is actually
written, but it ought to be possible to identify an initial set of errors
for each API, and add others later as they become known. If the errors can
be enumerated then it ought to be possible to associate an explicit "exception"
name or number against each, to allow them to be handled without resorting 
to string comparisons on the description field.
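As a sketch of what client code could look like with enumerated errors - the
error names and the shape of the Status/ErrorDescription struct here are my
assumptions, not taken from the draft:

```python
# Sketch: dispatch on an enumerated error name rather than matching on
# the human-readable description.  Error names and the return-struct
# layout are assumptions for illustration.
class XenApiError(Exception): pass
class VmBadPowerState(XenApiError): pass
class SessionInvalid(XenApiError): pass

ERRORS = {
    "VM_BAD_POWER_STATE": VmBadPowerState,
    "SESSION_INVALID": SessionInvalid,
}

def check(result):
    """Raise a typed exception from a {'Status', 'ErrorDescription'} struct."""
    if result.get("Status") == "Failure":
        name, *detail = result["ErrorDescription"]
        raise ERRORS.get(name, XenApiError)(detail)
    return result.get("Value")

try:
    check({"Status": "Failure",
           "ErrorDescription": ["VM_BAD_POWER_STATE", "expected Halted"]})
except VmBadPowerState as e:
    print("typed handling, no string matching on the description:", e)
```

With a scheme like this, adding new error conditions later doesn't break
existing clients - unknown names simply fall through to the base exception.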

Moving on to a different topic, I've a couple of questions around the topic
of authentication:

 * Section 1.4.2 states that all clients must login & initiate a session
   before making any of the other XML-RPC calls. Is there any provision
   made for anonymous access? For an application which merely wants to read
   information about guest VM state, it would be preferable to have access
   without needing to prompt the user for a login/password.

   Could the requirement to always login be relaxed, with each API instead
   annotated to indicate whether anonymous (read-only) access is allowed?

 * What is the motivation for implementing an explicit login_with_password
   method rather than utilizing the existing HTTP authentication protocols ?
   The proposed login API utilizing a simple username/password pair is quite
   limiting, preventing the use of any of the more advanced authentication
   protocols such as challenge/response, public/private key, or Kerberos
   ticket passing.

   The latter would be particularly important if the apps using this API want
   to integrate with any kind of single sign-on system. Perhaps it would be
   possible to define a more advanced login process which could be backed by
   something like SASL:

     http://www.ietf.org/rfc/rfc2222.txt
     http://asg.web.cmu.edu/sasl/
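
   As a rough illustration of the first alternative - reusing HTTP's own
   authentication rather than a login_with_password method (the credentials
   here are placeholders):

```python
# Sketch: HTTP Basic authentication carried in the transport layer
# instead of a login_with_password XML-RPC method.  Credentials are
# placeholders.
import base64

user, password = "admin", "secret"
token = base64.b64encode(f"{user}:{password}".encode()).decode()
headers = {"Authorization": f"Basic {token}"}
# A transport-level scheme lets the server offer Digest,
# Kerberos/Negotiate or a SASL-backed mechanism later, with no
# change to the XML-RPC method set itself.
print(headers["Authorization"])
```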

 * What is the lifetime of sessions ? I assume they persist across distinct
   HTTP connections for the case where the client isn't using keep-alive.
   Will there, however, be any form of timeout, after which the client is
   implicitly logged out?
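
   If sessions can expire, every client will presumably need a re-login
   wrapper along these lines - the error condition and login call here are
   assumptions about how the draft might behave, simulated with a fake
   server-side function:

```python
# Sketch: client-side wrapper that re-establishes a session when the
# server reports it has expired.  The expiry error and login behaviour
# are assumptions, exercised against a fake server function.
class SessionExpired(Exception): pass

class Client:
    def __init__(self):
        self.session = None
        self.logins = 0

    def login(self):
        self.logins += 1
        self.session = f"session-{self.logins}"

    def call(self, fn, *args):
        if self.session is None:
            self.login()
        try:
            return fn(self.session, *args)
        except SessionExpired:
            self.login()          # one retry after the implicit logout
            return fn(self.session, *args)

# Fake server method that rejects the first (now stale) session id:
def get_power_state(session, vm):
    if session == "session-1":
        raise SessionExpired()
    return "Running"

c = Client()
result = c.call(get_power_state, "vm-uuid")
print(result)   # re-logs-in transparently, then succeeds
```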


Minor question on the 'bios_boot_option' enumeration - rather than hardcoding
a list of floppy/hd/cdrom, could the bootable device field instead simply be
a UUID referring to one of the VBD devices defined for the domain?

Finally, a higher level question - the API is still basically a unidirectional,
poll-based model. By this I mean, if I want to detect changes in a guest VM's
lifecycle state (e.g. from running -> paused), then I have no choice but to
poll the 'power_state' field on every VM I'm interested in. If the app using
the API is a GUI app, then this polling is going to have to be done pretty
frequently (once per second or more) to provide an acceptable refresh in the
UI. I'm concerned that polling so frequently over an HTTP / XML-RPC protocol
is going to impose a very significant performance hit on the host machine.
It would be very desirable if there were a formal API for getting asynchronous
callbacks notifying the client of changes in a VM's lifecycle state. I know
this is not really possible with XML-RPC, but I wanted to bring it up as a
use-case nonetheless in case there are alternate ways to provide such a
capability.
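For reference, this is the busy-work every client ends up writing today -
snapshotting power_state for all VMs and diffing to synthesize "events"
(states and UUIDs below are illustrative):

```python
# Sketch: the polling a client is currently forced into - snapshot
# every VM's power_state and diff against the previous snapshot to
# synthesize lifecycle "events".  States and UUIDs are illustrative.
def diff_states(previous, current):
    """Yield (vm, old_state, new_state) for each VM whose state changed."""
    for vm, state in current.items():
        old = previous.get(vm)
        if old is not None and old != state:
            yield vm, old, state

before = {"vm-1": "Running", "vm-2": "Running"}
after  = {"vm-1": "Running", "vm-2": "Paused"}

events = list(diff_states(before, after))
print(events)   # one synthetic event: vm-2 went Running -> Paused
```

A server-side event channel would replace all of this, and the O(number of
VMs) polling traffic with it, with a single blocking call per client.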

The same issue is true of the proposed mechanism for asynchronous invocation
of APIs - it also requires the client to make requests polling for completion
of the API, which doesn't really buy any performance benefit. Unless the
client actually wanted the 'estimated time of completion' data, they might as
well just send a regular synchronous request and use the select() or poll()
system calls on the HTTP socket to do a non-blocking wait for completion
client side.
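To show what I mean by the select() approach, here is a small sketch using a
socketpair to stand in for the HTTP connection carrying the synchronous
response (the response body is a placeholder):

```python
# Sketch: non-blocking wait on a synchronous call with select(), using
# a socketpair in place of the HTTP connection.  The response body is
# a placeholder.
import select, socket

client_end, server_end = socket.socketpair()

# Nothing sent yet: a zero-timeout select() reports not-ready, so the
# client can go do other work instead of blocking on recv().
ready, _, _ = select.select([client_end], [], [], 0)
assert ready == []

# The "server" completes the long-running operation and responds:
server_end.sendall(b"<methodResponse>...</methodResponse>")

# Now the socket polls as readable and the client reaps the result.
ready, _, _ = select.select([client_end], [], [], 1.0)
response = client_end.recv(4096) if ready else None
print(response)
client_end.close(); server_end.close()
```

The client gets the same non-blocking behaviour as the proposed async
mechanism, without a second round of polling RPCs against the server.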

Regards,
Dan.
-- 
|=- Red Hat, Engineering, Emerging Technologies, Boston.  +1 978 392 2496 -=|
|=-           Perl modules: http://search.cpan.org/~danberr/              -=|
|=-               Projects: http://freshmeat.net/~danielpb/               -=|
|=-  GnuPG: 7D3B9505   F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505  -=| 

_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-api


 

