
Re: [Xen-devel] 4.2 TODO update

  • To: xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: "Andres Lagar-Cavilla" <andres@xxxxxxxxxxxxxxxx>
  • Date: Tue, 14 Feb 2012 06:58:50 -0800
  • Cc: olaf@xxxxxxxxx, tim@xxxxxxx, ian.campbell@xxxxxxxxxx
  • Delivery-date: Tue, 14 Feb 2012 14:59:08 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

> Date: Mon, 13 Feb 2012 10:24:29 +0000
> From: Tim Deegan <tim@xxxxxxx>
> To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
> Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
> Subject: Re: [Xen-devel] 4.2 TODO update
> Message-ID: <20120213102429.GA88765@xxxxxxxxxxxxxxxxxxxxx>
> Content-Type: text/plain; charset=iso-8859-1
> At 10:17 +0000 on 13 Feb (1329128265), Ian Campbell wrote:
>> This weeks update. I was away last Monday so this covers two weeks worth
>> of updates. I have dropped things which were marked as DONE in the last
>> update and added a bunch more DONE (so the list is shrinking!). Please
>> send me corrections (especially "done").
>> hypervisor, blockers:
>>       * round-up of the closing of the security hole in MSI-X
>>         passthrough (uniformly - i.e. even for Dom0 - disallowing write
>>         access to MSI-X table pages). (Jan Beulich -- more fixes
>>         required than first thought, patches posted)
>>       * domctls / sysctls set up to modify scheduler parameters, like
>>         the credit1 timeslice and schedule rate. (George Dunlap)
>>       * get the interface changes for sharing/paging/mem-events done and
>>         dusted so that 4.2 is a stable API that we hold to. (Tim Deegan,
>>         Andres Lagar-Cavilla et al)
>>               * sharing patches posted
> The hypervisor interface changes for sharing and events have gone in;
> there's a libxc-level change (to do with mlock()ing buffers) that ought
> to go in before 4.2, if at all.

Ian, it's probably OK to mark the paging/sharing as DONE.

There is the outstanding issue of properly mapping the event rings, which
ties into the mlock()ing buffers bit Tim mentions. I think we all want it
Done Right for 4.2. And that will likely not be the patch I last posted.
Perhaps best to rename the bullet point?

In the "nice to have" category, "properly synchronized p2m lookups" have
also gone into the tree.

> There's a paging+waitqueues patch that's been posted but needs a bit more
> work still; I'll try and look at it on Thursday.

This one is thorny. Right now we seem to have fairly stable paging. (Dare
I say bug-free? I dare not, lest I be smitten.) I surmise other consumers
of the interface would agree (Olaf? Huawei?).

Wait queues have significant potential for damage unless we use them in
very specific contexts. I worry we'd drop from an A- to a C in terms of
stability. Right now I have not crashed a host with paging in a long
while, whereas wait queues will definitely reintroduce crashes.

Why? Because it's really, really hard to guarantee we won't go to sleep
while in an atomic context. The main use for wait queues (imho) is in
hvm_copy, and there are a zillion paths into hvm_copy
(copy_from/to_user!), all with their own ways of bumping the preemption
count.

I'm dealing with this at the moment in a different context: running out of
memory to unshare pages when writing to them in the hypervisor. And we
simply cannot throw a wait queue sleep in there carelessly.

I'm not saying this is impossible, I'm saying there's a long path ahead,
and since it's all *internal* to the hypervisor, my vote is to be very
conservative for the 4.2 window.


> Tim.
