[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

RE: [Xen-API] XCP: dom0 scalability


  • To: "xen-api@xxxxxxxxxxxxxxxxxxx" <xen-api@xxxxxxxxxxxxxxxxxxx>
  • From: George Shuklin <george.shuklin@xxxxxxxxx>
  • Date: Mon, 11 Oct 2010 17:36:54 +0400
  • Delivery-date: Mon, 11 Oct 2010 06:37:10 -0700
  • Domainkey-signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=subject:from:to:in-reply-to:references:content-type:date:message-id :mime-version:x-mailer:content-transfer-encoding; b=BUrk1C6yiLRz/vWjIAyLKZuoK2iz/2JGUidGE91TxPqW939H8gr8xziM+oLM9J83O+ 1GXsgtdJqRzJtR+jYjcstP/dTkKKzXnUF1gwOJli4+zBErSL4r/twqYO1TJrSr2jIedE xoBRwyyM88pVUOelJMneD+h3zptim7XSv+bAM=
  • List-id: Discussion of API issues surrounding Xen <xen-api.lists.xensource.com>

I think memcpy has limited usefulness for high-load data exchange. For
low-load exchange we already have XenStore.

I think a new <s>p2p</s> D2D communication protocol should give more
speed than IP-based communication via netchannel, by removing the
overhead of IP addressing, TTL, checksums, etc.
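To make the overhead argument concrete, here is a small sketch (numbers are illustrative, not measured; the 8-byte D2D header is a hypothetical framing, not an existing protocol) comparing per-message header cost of an IP-based transport against a minimal domain-to-domain framing:

```python
# Illustrative only: rough per-message header overhead of an IP-based
# transport (Ethernet + IPv4 + TCP) vs a hypothetical minimal D2D
# framing where endpoints are already identified by event channels
# and grant references, so almost no addressing is needed.

ETH_HDR = 14   # Ethernet header, bytes
IP_HDR = 20    # IPv4 header without options
TCP_HDR = 20   # TCP header without options
D2D_HDR = 8    # hypothetical: 4-byte length + 4-byte message type

def overhead(payload, hdr):
    """Fraction of each message spent on headers."""
    return hdr / (hdr + payload)

ip_total = ETH_HDR + IP_HDR + TCP_HDR
for payload in (64, 512, 4096):
    print(payload,
          round(overhead(payload, ip_total), 3),
          round(overhead(payload, D2D_HDR), 3))
```

The gap is largest for small messages, which is exactly the control-traffic case where dom0 load hurts most.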

You point out the problem of the dying domain. I think we have to split
it into cases. As I understand it, the main problem is that the 'other
(bad) domain' does not return pages to a normal domain that wants to
shut down, so the good domain cannot die while the bad domain keeps
its pages.

I think this has a simple solution: the good domain simply transfers
its 'snatched' pages (or pointers to them?) to some service in dom0,
which keeps them until the bad domain finally gives them up.

The same approach is used by init in Linux for zombie processes. Of
course this solution will not fix the buggy behaviour of the bad
domain, but it lets the good domain rest in peace without side effects.

And if dom0 finds that a bad domain has snatched too many pages from
dying domains, it may: first, request the pages back from the bad
domain (if the domain returns them, fine; maybe the 'good' domain was
wrong). Second, if the bad domain refuses to give up the zombie pages,
then after a few (thousands of them) use the most effective way to cure
the patient: domain_destroy() it as a buggy domain (and maybe restart
it later).
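The policy above could look roughly like this (a minimal sketch, not Xen code; the service name, methods, and the page limit are all hypothetical, and `domain_destroy` here is just a returned string, not the real hypervisor call):

```python
# Sketch of the proposed dom0 "zombie page" service: it adopts pages
# that a dying (good) domain cannot reclaim, so the good domain can
# shut down cleanly -- like init reaping zombie processes -- and it
# destroys a holder that accumulates too many unreturned pages.
# All names here are hypothetical illustrations.

ZOMBIE_PAGE_LIMIT = 1000  # tolerate up to this many before acting

class ZombiePageService:
    def __init__(self):
        self.held = {}  # bad_domid -> count of adopted zombie pages

    def adopt_pages(self, bad_domid, npages):
        # The dying domain transfers its snatched pages here and
        # is then free to finish shutting down.
        self.held[bad_domid] = self.held.get(bad_domid, 0) + npages

    def page_returned(self, bad_domid, npages):
        # Step 1: the bad domain gave some pages back voluntarily.
        self.held[bad_domid] = max(0, self.held.get(bad_domid, 0) - npages)

    def enforce(self, bad_domid):
        # Step 2: too many unreturned pages -> treat the holder as
        # buggy and destroy it (it could be restarted later).
        if self.held.get(bad_domid, 0) > ZOMBIE_PAGE_LIMIT:
            self.held.pop(bad_domid)
            return "domain_destroy"
        return "ok"
```

The point of the sketch is only the split of responsibility: the good domain's shutdown never blocks, and the punishment decision is centralised in dom0.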
 

On Mon, 11/10/2010 at 14:08 +0100, Dave Scott wrote:
> Hi George,
> 
> > And one more...
> > 
> > I think this is the point where we should think again about XenSockets.
> > They will be perfect for IDC.
> 
> I definitely agree that we need to create a nice IDC mechanism :)
> 
> I don't really know what the best mechanism would be... I've heard people 
> talk about a few different options.
> 
> One option is to create a protocol similar to blkback/blkfront and 
> netback/netfront i.e. using shared memory pages and event channels. This 
> ought to be the highest bandwidth approach. One potential downside is that a 
> software bug in one endpoint can prevent the other domain being cleaned up 
> properly -- the domain will remain in the 'D'ying state until all memory is 
> returned to it.
> 
> Another option is to create a protocol where instead of sharing memory pages, 
> memcpy() is used to move the data between domains. I think the XCI people 
> might be using a variant of this already.
> 
> There may be other options... what do you think?
> 
> Cheers,
> Dave
> 
> 
> > 
> > On Fri, 08/10/2010 at 20:54 +0100, Dave Scott wrote:
> > > Hi,
> > >
> > > I've added a draft of a doc called "the dom0 capacity crunch" to the
> > wiki:
> > >
> > >
> > http://wiki.xensource.com/xenwiki/XCP_Overview?action=AttachFile&do=get
> > &target=xcp_dom0_capacity_crunch.pdf
> > >
> > > The doc contains a proposal for dealing with ever-increasing dom0
> > load, primarily by moving the load out of dom0 (stubdoms, helper
> > domains etc) and, where still necessary, tweaking the number of dom0
> > vcpus. I think this is becoming pretty important and we'll need to work
> > on this asap.
> > >
> > > Comments are welcome.
> > >
> > > Cheers,
> > > Dave
> > >
> > > _______________________________________________
> > > xen-api mailing list
> > > xen-api@xxxxxxxxxxxxxxxxxxx
> > > http://lists.xensource.com/mailman/listinfo/xen-api
> > 
> > 
> > 





 

