
Re: [Xen-ia64-devel] problems with smp


  • To: "Alex Williamson" <alex.williamson@xxxxxx>
  • From: "David Brown" <dmlb2000@xxxxxxxxx>
  • Date: Thu, 8 Feb 2007 16:28:18 -0800
  • Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 08 Feb 2007 16:27:32 -0800
  • List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>

> You can tune the Xen credit scheduler to effectively do what you're
> asking (I think).  AFAIK, dom0 already has some scheduling priority.
> See here for details on tweaking the credit scheduler:
>
> http://wiki.xensource.com/xenwiki/CreditScheduler

Thanks, I'll take a look at this, for sure...
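From a quick read of that page it looks like it's all driven through
xm sched-credit, so something like this is probably what I'll try
first (the weight and cap numbers here are just a guess on my part):

    # give dom0 four times the default weight of 256
    xm sched-credit -d Domain-0 -w 1024

    # optionally cap a noisy domU at half of one physical CPU
    xm sched-credit -d mydomU -c 50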

> You can't really "share" PCI devices.  At the PCI function level, one
> and only one domain can own a function.  This sometimes means you can
> split a dual-port card between two domains if the device exposes the
> two ports as two separate functions.  So you could give each domain
> its own NIC and SCSI device, if you have enough empty PCI slots.  I'm
> not sure that's going to help your situation though.  There's probably
> some tuning you can do on the vbd side too, like using LVM devices or
> raw block devices instead of disk images, if you're not already.
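Good to know about the function-level split. If I ever pick up a
dual-port card, I gather each domU config would just claim one
function, something like this (the bus addresses here are made up):

    # /etc/xen/domU1.cfg -- gets port 0
    pci = [ '0000:01:00.0' ]

    # /etc/xen/domU2.cfg -- gets port 1
    pci = [ '0000:01:00.1' ]

with dom0 booted so pciback hides those functions from it, e.g.
pciback.hide=(0000:01:00.0)(0000:01:00.1) on the dom0 kernel command
line, if I'm reading the docs right.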

I'm definitely using raw block devices for each domU and passing them
straight through to the distributed filesystem. I really only have one
NIC to use, so dom0 runs a bridge that handles all the network traffic,
which I guess means more work for dom0...
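For the archives, the relevant lines of my domU configs look roughly
like this (device names and paths simplified for the example):

    # raw block device passed straight through, no file-backed image
    disk = [ 'phy:/dev/sdb1,xvda,w' ]

    # the single physical NIC, shared by all domUs via the dom0 bridge
    vif = [ 'bridge=xenbr0' ]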

If things go well there might be a press release out of this, and I'll
most certainly post a link to the ML about what I'm doing. I have to
run it by management first, but when the time comes I'll probably be
able to share most of what I'm doing as well (the actual code).

I really appreciate the help I've been getting from the ML, thanks all of you.

- David Brown

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
