
Re: [Xen-API] Xen-API: XMLRPC Documentation



On Fri, Aug 27, 2010 at 3:39 PM, George Shuklin
<george.shuklin@xxxxxxxxx> wrote:
> Thank you for reply.
>
> On Fri, 27/08/2010 at 13:50 +0100, Dave Scott wrote:
>
>> > 3) The internal architecture of xapi is not very elegant. The database
>> > is kept in a plaintext format (just OCaml serialisation), there is
>> > intensive cross-syncing (lots of empty pool updates via the XenAPI
>> > notification system), and the master/slave design is awkward (all
>> > slaves keep a copy of the database, but only the master will reply to
>> > requests).
>>
>> The database is serialized in XML format but the primary copy is kept in 
>> memory on the pool master. The copies sent to the slaves are just backups. 
>> The master is authoritative for the pool metadata.
>
> ... And this causes problems for any third-party application: it has to
> make XenAPI calls to the pool master every time it needs any data,
> however unimportant. We want to show a customer a list of 10 VMs? We
> need to make _TEN_ XenAPI calls just to get the data to display (the
> customer presses 'F5' and we send all the requests again). As an
> alternative we can fetch the status of all VMs at once, but that is
> unacceptable with thousands of customers.
>

I think you should use a different strategy for this. In xenwebmanager I
only make one call to fetch all the data (when the user connects), then I
receive updates from xapi by calling event.next, save them into memory,
and use only that in-memory copy to display data. A sketch of the pattern
follows.
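
Here is a minimal sketch of that pattern in Python using the XenAPI
bindings (the host and credentials are placeholders):

import XenAPI

session = XenAPI.Session("https://pool-master.example")
session.xenapi.login_with_password("root", "password")

# One bulk call when the user connects: fetch every VM record at once.
vm_cache = session.xenapi.VM.get_all_records()

# Subscribe to all event classes, then keep the cache fresh with
# event.next instead of re-polling the master.
session.xenapi.event.register(["*"])
while True:
    for event in session.xenapi.event.next():
        if event["class"] != "vm":
            continue
        if event["operation"] == "del":
            vm_cache.pop(event["ref"], None)
        elif "snapshot" in event:
            # add/mod events carry the full new record
            vm_cache[event["ref"]] = event["snapshot"]
    # The UI reads vm_cache from memory; no extra round-trips to xapi.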

> I think that using some kind of database (SQL is not a good fit; maybe
> something like mongo?) with direct access to it (and a nice, powerful
> query language) would help to obtain the required data without
> overloading the master.
>
> (see notes about our project below).
>

It could be a good idea as a third-party application.

>> I think that the best choice of architecture depends entirely on the 
>> proposed users and usage scenarios. What do you think? What sort of 
>> scenarios and architectures are you thinking of?
>
>
> Right now we are working on a cloud for customers, with resources on
> demand and payment based on real usage (not prepaid resources).
>
> And right now we simply need to replicate all the data in XCP to our
> mongo database, so that the web interface and service routines can
> obtain the required data without a flood of requests... (The simplest
> trouble: we need to keep information about destroyed VMs.)
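
Something like this (pymongo; the collection and field names are made up)
could feed such a mongo mirror from the same event loop sketched above:

import pymongo

db = pymongo.Connection("localhost")["xcp_mirror"]

def mirror_vm(ref, record):
    """Upsert one VM record, keyed by its opaque ref."""
    record["_id"] = ref
    db.vms.save(record)

def mark_destroyed(ref):
    # Keep destroyed VMs queryable instead of losing them with the pool.
    db.vms.update({"_id": ref}, {"$set": {"destroyed": True}})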
>
>
> I believe that an external controller for the cloud, with a database
> and simple agents on the hosts, would be a better solution. But XCP is
> very interesting because of its implemented concept of pools, shared
> resources, etc.
>
> Such a controller (as the primary source of information for the cloud)
> would solve one more real problem: controlling several different pools
> (e.g. with different host configurations) with consistent UUIDs (we
> look at a VM/VDI UUID and can say which pool contains it).
>
>> > 4) Lack of full access to domain internals (the XS and XC
>> > interfaces). Domains have pool-wide options and 'local' options (from
>> > the xc point of view). We have no way to set XS values so that they
>> > are kept across migrations.
>>
>> This is interesting -- could you explain more what you are trying to do? For 
>> stuff like calling libxc functions, I recommend adding flags to the 
>> VM.platform/ map, which gets written to xenstore, and then picking them up 
>> in the 'xenguest' binary (mostly written in C)
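
For reference, setting such a flag from Python looks roughly like this
("my-flag" is a made-up key; host and credentials are placeholders):

import XenAPI

session = XenAPI.Session("https://pool-master.example")
session.xenapi.login_with_password("root", "password")

vm = session.xenapi.VM.get_by_uuid("...")  # the target VM's uuid
# Map fields have add_to_* setters; per Dave's description the platform
# map is written into the domain's xenstore tree when the VM starts.
session.xenapi.VM.add_to_platform(vm, "my-flag", "1")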
>
> What we sorely lack in XCP is resource accounting:
>
> 1. CPU usage. xc.domain_getinfo() reports it in seconds. That is nice,
> but the value is lost on reboot and migration (so we had to create a
> subsystem to collect the data).
>
> 2. Memory usage. The XCP data are unreliable. If the xen balloon driver
> fails to complete a request, the memory-dynamic/target values can
> differ from the real allocation. xc.domain_getinfo() shows us the real
> usage in kB. If we change memory on demand, we have to account for it
> in some synthetic unit (kB*hr).
>
> 3. Disk I/O accounting. I found counters of I/O operations for each VBD
> in the /sys filesystem, and our agent collects those data. And, yet
> again, the information is lost on every domain reboot/migration.
>
> 4. Same for network (vif).
>
> If those data were stored in the metrics classes, it would really
> help... But again, what is required is a common database with direct
> query access.
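
For point 3, a sketch of an agent reading the blkback counters (the
exact sysfs path varies with the dom0 kernel, so treat it as an
assumption); the agent must accumulate deltas itself, because the raw
counters vanish on reboot or migration:

import os

def read_vbd_stats(domid, devid):
    """Return the raw blkback I/O counters for one VBD as a dict."""
    base = "/sys/bus/xen-backend/devices/vbd-%d-%d/statistics" % (domid, devid)
    stats = {}
    for name in ("rd_req", "wr_req", "rd_sect", "wr_sect", "oo_req"):
        with open(os.path.join(base, name)) as f:
            stats[name] = int(f.read())
    return stats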
>
>> > 5) The other-config field is a scrapyard (it contains values that
>> > are critical for a VM to run, such as install-repository, but they
>> > are all lumped together in a "dig it out yourself" blob).
>> The other-config (string -> string) map is intended for "extra" data which 
>> isn't part of the full API. It's good for making things work quickly and 
>> experimenting... the idea is that, once we are happy with an approach, we 
>> can make "first-class" API fields for the data. The danger is of adding 
>> things to the API too quickly.. because then you are left with a 
>> backwards-compat legacy problem IYSWIM.
> I am not talking about adding those values to the MAIN XenAPI spec, but
> about simple structured access: vm.other_config.install-repository
> rather than vm.other_config["install_repository"]. OK, we accept that
> the 'other_config' hierarchy can change between versions. But at least
> the values would not be a pack of strings (some of them are numbers,
> some can be none/NULL, etc.).
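
Until then, a thin client-side wrapper can fake that structured access
(illustrative only; the key and the coercion rules are guesses):

class OtherConfig(object):
    """Attribute-style, loosely typed view over the (string -> string) map."""
    def __init__(self, raw):
        self._raw = raw
    def __getattr__(self, name):
        # other-config keys use dashes; Python attribute names cannot.
        value = self._raw.get(name.replace("_", "-"), self._raw.get(name))
        if value is None or value == "":
            return None
        try:
            return int(value)  # some values are really numbers
        except ValueError:
            return value

oc = OtherConfig({"install-repository": "http://mirror/centos/5.5"})
print oc.install_repository   # -> http://mirror/centos/5.5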
>
> One other problem is last_boot_record. Its content is raw OCaml
> serialisation with pieces of XML embedded in it.
>
>
>> > 7) No way to change the startup config while the VM is running (I
>> > mean values that only have meaning when a VM starts: VCPUs-at-startup,
>> > memory-static-*, etc. You can change them only while the VM is
>> > offline; a more reasonable implementation would allow changing them
>> > while the VM is online and apply them at the next
>> > reboot/shutdown/start).
>>
>> This was actually changed relatively recently for two reasons (which you may 
>> disagree with!):
>> 1. it simplifies the presentation in the user interface
>> 2. it makes it easier to tightly control the amount of memory in use by a 
>> domain, which is essential for controlling ballooning on the host.
>
> No, we simply need to split the 'domain running configuration' from the
> 'domain startup configuration'. The running configuration would have
> many read-only fields; the startup configuration would allow changing
> them. On every domain destruction (reboot/restart), except migration,
> the startup values would be copied over to the running ones. (And the
> need for VCPUs-at-startup would disappear: there would simply be
> vm.running.vcpus-number and vm.startup.vcpus-number.)
>
>>
>> > 8) Heavy limitation on pool size (I have not tested it myself, but I
>> > hear the limit is 16 hosts per pool; that is not good for a normal
>> > cloud with many hosts).
>> There is no pool size limit. However we focus most of our testing on 16 host 
>> pools. Interestingly, I'm currently working on a bug which manifests on 
>> pools with 30+ hosts.
>> Although, it depends on what you mean by "many". The xapi pool is really the 
>> "unit of shared storage"; if you have a much larger setup then I'd recommend 
>> making some kind of higher-level interface.
> Thank you for the information. I'll test pools of up to 50 hosts next
> month to see what happens.
>
>
>> > 9) No way to use external kernels for PV guests.
>> Could you explain this one more?
>
> Well... It's simple. We have many customers (not yet, but we hope to
> have them after we finish :), and every customer has a few VMs. Now
> suppose we decide to change the kernel for the VMs (for example,
> because of a pool upgrade). ... What are we supposed to do? Write to
> every customer saying 'please update the kernel on every VM'? I think
> that using external kernel image files (like xend did with its VM
> config files under /etc/xen/ and the "kernel=" parameter) would be the
> best solution.
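
For context, a xend-style domain config of the kind referred to above
looks like this (paths and values are illustrative); dom0 could point
every guest at a new kernel by editing one line:

# /etc/xen/customer-vm -- xend domain config, evaluated as Python
name    = "customer-vm"
kernel  = "/boot/guest/vmlinuz-2.6.32-xen"    # external PV kernel in dom0
ramdisk = "/boot/guest/initrd-2.6.32-xen.img"
memory  = 512
vcpus   = 1
disk    = ["phy:/dev/vg0/customer-vm,xvda,w"]
vif     = [""]
root    = "/dev/xvda1 ro"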
>
>
>> > 10) Heavy dependence on guest tools.
>> I think this could probably be changed fairly easily.
> Yes, this is not a strategic problem for XCP; I just got carried away
> and wrote down all the problems at once, sorry.
>
>
>> > One more problem is XenAPI itself: one connection per operation is
>> > TOO heavy to serve as a fast basis for future development. (I think
>> > it needs some kind of interaction that does not drop the connection
>> > after each command.)
>>
>> I don't understand... xapi does support HTTP/1.1 with persistent 
>> connections-- do you mean something different?
> OK, maybe I am wrong, but how can I make several POST requests to
> XenAPI over a single HTTP connection? Is it really possible? If yes,
> that will close this nasty question.

If you sniff XenCenter's connections (or openxenmanager/xenwebmanager,
which use xmlrpclib on Python) you will see only one connection.

Here is some information:
http://en.wikipedia.org/wiki/HTTP_persistent_connection
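
To see it at the HTTP level, something like this (Python 2 httplib; host
and credentials are placeholders) sends several XML-RPC POSTs down one
HTTP/1.1 connection, as long as xapi keeps the socket open:

import httplib, xmlrpclib

conn = httplib.HTTPConnection("pool-master.example")  # HTTP/1.1 keep-alive

def call(method, *args):
    conn.request("POST", "/", xmlrpclib.dumps(args, method),
                 {"Content-Type": "text/xml"})
    return xmlrpclib.loads(conn.getresponse().read())[0][0]

session = call("session.login_with_password", "root", "password")["Value"]
vms = call("VM.get_all", session)["Value"]  # reuses the same TCP socket
call("session.logout", session)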

>
>
> Thank you again.
>
> PS We have a presentation of our goals, but it is in Russian and the
> project is still under development, so I plan a translation into
> English and a wider presentation of the concept later...
>
>
> ---
> wBR, George Shuklin
> system administrator
> Selectel.ru
>
>
>

_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api


 

