
Re: [Xen-devel] [PATCH 0/2] enable event channel wake-up for mem_event interfaces


  • To: Adin Scannell <adin@xxxxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir.xen@xxxxxxxxx>
  • Date: Fri, 30 Sep 2011 13:23:26 -0700
  • Cc: Tim Deegan <tim@xxxxxxx>
  • Delivery-date: Fri, 30 Sep 2011 13:24:31 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Acx+swtBFoXOYffjI0md8opwEL2aZwA9avWmAAGF9xo=
  • Thread-topic: [Xen-devel] [PATCH 0/2] enable event channel wake-up for mem_event interfaces

On 30/09/2011 12:39, "Keir Fraser" <keir.xen@xxxxxxxxx> wrote:

> On 29/09/2011 07:21, "Keir Fraser" <keir.xen@xxxxxxxxx> wrote:
> 
>> On 28/09/2011 14:22, "Adin Scannell" <adin@xxxxxxxxxxxxxxx> wrote:
>> 
>>> Currently the mem_event code requires a domctl to kick the hypervisor
>>> and unpause vcpus.  An event channel is used to notify dom0 of
>>> requests placed in the ring, and it can similarly be used to notify
>>> Xen, so as not to overuse domctls when running many domains with
>>> mem_event interfaces (domctls are not a great interface for this sort
>>> of thing, because they will all be serialized).
>>> 
>>> This patch set enables the use of the event channel to signal when a
>>> response is placed in a mem_event ring.
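For context, a minimal dom0-side sketch of the mechanism described above: the consumer places a response in the shared ring and then kicks Xen through the bound event channel rather than issuing the resume domctl. This assumes an xenpaging-style consumer where 'xce' comes from xc_evtchn_open() and 'port' from xc_evtchn_bind_interdomain() against the port allocated when the ring was enabled; the helper name is illustrative, not taken from the patches.

    #include <string.h>
    #include <xenctrl.h>          /* xc_evtchn_*() */
    #include <xen/io/ring.h>      /* RING_* macros */
    #include <xen/mem_event.h>    /* mem_event ring types */

    static int put_response_and_kick(xc_evtchn *xce, evtchn_port_t port,
                                     mem_event_back_ring_t *ring,
                                     mem_event_response_t *rsp)
    {
        RING_IDX rsp_prod = ring->rsp_prod_pvt;

        /* Copy the response into the shared ring and publish it. */
        memcpy(RING_GET_RESPONSE(ring, rsp_prod), rsp, sizeof(*rsp));
        ring->rsp_prod_pvt = rsp_prod + 1;
        RING_PUSH_RESPONSES(ring);

        /* Kick Xen via the event channel instead of a serialized domctl. */
        return xc_evtchn_notify(xce, port);
    }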
>> 
>> I don't have an opinion on patch 1/2. I'll leave it to someone else, like
>> Tim, to comment.
>> 
>> On patch 2/2, I don't mind the principle, but the implementation is not
>> very scalable. I will post a rewritten version to the list; it might be
>> early next week before I do so.
> 
> I've attached it. Let me know how it works for you.

By the way, my patch doesn't hook up event notification for the d->mem_share
structure. It doesn't look like d->mem_share.xen_port ever gets set up, and
your patches didn't appear to fix that either.
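For what it's worth, the missing hookup would presumably look something like the sketch below on the Xen side. alloc_unbound_xen_event_channel() is assumed here because it is how Xen-consumed ports are allocated elsewhere; the helper name and the exact call site are guesses, not verified against the tree.

    /* Hypothetical setup for the sharing ring's Xen-bound port, mirroring
     * what enabling the other mem_event rings is assumed to do.  'consumer'
     * is the domid of the dom0 tool that will bind the port. */
    static int mem_share_port_setup(struct domain *d, domid_t consumer)
    {
        int rc;

        /* Allocate an unbound event channel consumed by Xen itself. */
        rc = alloc_unbound_xen_event_channel(d->vcpu[0], consumer);
        if ( rc < 0 )
            return rc;

        d->mem_share.xen_port = rc;
        return 0;
    }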

>  -- Keir
> 
>> 
>>  -- Keir
>> 
>>> The two patches are as follows:
>>> - The first patch tweaks the memevent code to ensure that no events
>>> are lost.  Instead of calling get_response once per kick, we may have
>>> to pull out multiple events at a time.  This doesn't affect normal
>>> operation with the domctls.
>>> This patch also ensures that each vCPU can generate a request in each
>>> mem_event ring (i.e. put_request will always work), by appropriately
>>> pausing vCPUs after requests are placed (sketched after this list).
>>> - The second patch breaks the Xen-side event channel handling into a
>>> new arch-specific file "events.c", and adds cases for the different
>>> Xen-handled event channels.  This formalizes the tiny exception that
>>> was in place for just qemu in event_channel.c (a possible dispatcher
>>> shape is also sketched below).
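A rough sketch of the first patch's drain-and-wake behaviour might look like the following, with mem_event_get_response() assumed to return whether a response was actually pulled from the ring; the handler itself is illustrative rather than code from the patch.

    /* Hypothetical Xen-side handler for a kick on a mem_event ring's port.
     * Event channel notifications can coalesce, so one kick may cover
     * several responses; drain the ring rather than taking just one. */
    static void mem_event_kick(struct domain *d, struct mem_event_domain *med)
    {
        mem_event_response_t rsp;

        while ( mem_event_get_response(med, &rsp) )
        {
            /* Wake the vCPU that was paused when its request was queued. */
            if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
                vcpu_unpause(d->vcpu[rsp.vcpu_id]);
        }
    }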
>>> 
>>> All the domctls are still in place and everything should be backwards
>>> compatible.
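For the second patch, the arch-specific dispatch could plausibly take a shape like the following; every name below is a guess at the patch's structure, not code quoted from it.

    /* Hypothetical dispatcher in the new arch-specific events.c: route a
     * notification on a Xen-consumed port to the subsystem that owns it,
     * generalising the qemu-only special case from event_channel.c. */
    static void xen_port_notification(struct domain *d, evtchn_port_t port)
    {
        if ( port == d->mem_event.xen_port )
            mem_event_kick(d, &d->mem_event);   /* paging/access ring */
        else if ( port == d->mem_share.xen_port )
            mem_event_kick(d, &d->mem_share);   /* sharing ring */
        /* qemu's ioreq ports keep their existing handling. */
    }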
>>> 
>>> Cheers,
>>> -Adin
>>> 
>> 
>> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

