[Xen-API] Xen Management API draft
Attached is a draft Xen Management API. This document presents our ideas in terms of a data model, with an implied binding to XML-RPC calls. There are a few examples in there too -- hopefully it's clear how the mapping between the document and the wire protocol works.

Hopefully, we can standardise the data model and wire protocol (one implying the other), and then the Xen project would guarantee that that wire protocol will be supported for the long term. Behind that interface, we would then be free to improve Xend, giving a solid foundation for Xen-CIM, third-party GUIs, tools, and so on. The API effectively becomes part of the "guarantees" for Xen.

I expect that we will be asked for client-side bindings for a number of languages. It is my intention that a binding to any particular language would be a very thin translation from the host language's values and types onto the fixed XML-RPC (see the sketch below), so we shouldn't have too much trouble maintaining bindings in Python or Perl or C++ or whatever people want.

This document is totally open to discussion and modification, so please, let's get started! In particular, we need to get this API into a state where it is the right API for Xen-CIM. We also need to think about whether libvirt is ready to rely upon this API; that would mean moving the hypercalls and Xenstore reads and writes out of libvirt, and pushing them up the stack alongside the other client-side bindings.

Like the OVA spec, this document is for narrow circulation; once we are happy with it, we'll circulate it more widely. I'd like to be able to circulate this before OLS, as that will be a good chance to discuss it with people face-to-face once they've had a chance to read it.

Cheers,

Ewan.
Attachment: xenapi.pdf
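To make the "thin binding" idea concrete, here is a minimal sketch of what a Python client sitting directly on the XML-RPC wire protocol might look like. The endpoint URL, the method names (session.login_with_password, VM.get_all, VM.get_name_label), and the {'Status', 'Value'} result envelope are placeholders for this sketch; the actual names in the draft may differ.

    # A minimal sketch of a thin Python binding over the proposed
    # XML-RPC wire protocol. The endpoint URL, the method names, and
    # the {'Status', 'Value'} result envelope are placeholder
    # assumptions, not details fixed by the draft.
    import xmlrpc.client

    def unwrap(result):
        # Unpack an assumed {'Status': ..., 'Value': ...} result struct.
        if result.get("Status") != "Success":
            raise RuntimeError("API call failed: %r" % (result,))
        return result["Value"]

    server = xmlrpc.client.ServerProxy("http://localhost:9363/")

    # Log in to obtain a session reference, then list VMs by name.
    session = unwrap(server.session.login_with_password("user", "password"))
    for vm in unwrap(server.VM.get_all(session)):
        print(unwrap(server.VM.get_name_label(session, vm)))

A real binding would wrap this in native classes and error types, but the point is that the translation layer need not be any thicker than this.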