[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH 1 of 8] x86/mm: Fix paging_load


  • To: "Olaf Hering" <olaf@xxxxxxxxx>
  • From: "Andres Lagar-Cavilla" <andres@xxxxxxxxxxxxxxxx>
  • Date: Thu, 26 Jan 2012 04:23:10 -0800
  • Cc: andres@xxxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, tim@xxxxxxx, adin@xxxxxxxxxxxxxx
  • Delivery-date: Thu, 26 Jan 2012 12:23:45 +0000
  • Domainkey-signature: a=rsa-sha1; c=nofws; d=lagarcavilla.org; h=message-id :in-reply-to:references:date:subject:from:to:cc:reply-to :mime-version:content-type:content-transfer-encoding; q=dns; s= lagarcavilla.org; b=MMxgciC8oRqNfrIhm3G9Mk5mELUqaZLejpNswX3fLZtG kBARjSjQtT53oaWZnqpIm7idgPi166DrQAAZah6tuei7YpprEyyBeulskUj2dn9x f9FJRo2mu6cmYbhnMR+4xTfl0DrPnazFEKZrNfyMDncuLuFDMzp8BCX0r+CvWF8=
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

> On Thu, Jan 26, Andres Lagar-Cavilla wrote:
>
>> Now, afaict, the p2m_ram_paging_in state is not needed anymore. Can you
>> provide feedback as to whether
>> 1. remove p2m_ram_paging_in
>> 2. rename p2m_ram_paging_in_start to p2m_ram_paging_in
>>
>> sounds like a good plan?
>
> In my opinion the common case is that evicted pages get populated, an
> request is sent. Later an response is expected to make room in the ring.
>
> If p2m_mem_paging_populate allocates a page for the guest, it can let
> the pager know that it did so (or failed to allocate one).
> If there is a page already, the pager can copy the gfn content into a
> buffer, put a pointer to it in the response and let
> p2m_mem_paging_resume() handle both the ring accounting (as it does now)
> and also the copy_from_user.

So, this would bounce the page contents twice for the case when the page
hasn't yet been evicted?

> If page allocation failed, the pager has to allocate one via
> p2m_mem_paging_prep() as it is done now, as an intermediate step.

The issue of failed allocations is more pervasive. It also affects mem
sharing. And even PoD. What I'm trying to say is that even though your
solution seems to work (as long as the pager does dom0 ballooning to free
up some memory in between populate and prep!), we need a more generic
mechanism. Something along the lines of an ENOMEM mem_event ring, and a
matching dom0 daemon.

>
> The buffer page handling in the pager is probably simple, it needs to
> maintain RING_SIZE() buffers. There cant be more than that in flight
> because thats the limit of requests as well. In other words, the pager
> does not need to wait for p2m_mem_paging_resume() to run and pull the
> buffer content.
>
>
> If the "populate - allocate - put_request - get_request - fill_buffer -
> put_response - resume  get_response - copy_from_buffer - resume_vcpu"
> cycle works, it would reduce the overall amount of work to be done
> during paging, even if the hypercalls itself are not the bottleneck.
> It all depends on the possibility to allocate a page in the various
> contexts where p2m_mem_paging_populate is called.

The gist here is that the paging_load call would be removed?

I like the general direction, but one excellent property of paging_resume
is that it doesn't fail. This is particularly important since we already
do resumes via ring responses and event channel kicks (see below). So, if
resume needs to propagate failures back to the pager, things get icky.

(paging_load is restartable, see other email)

>
> The resume part could be done via eventchannel and
> XEN_DOMCTL_MEM_EVENT_OP_PAGING_RESUME could be removed.

This is already the case. I'm not eager to remove the domctl, but resuming
via event channel kick only is in place.

>
> Also the question is if freeing one p2mt is more important than reducing
> the number if hypercalls to execute at runtime.

Agreed. However, eliminating code complexity is also useful, and these two
ram_paging_in states cause everyone headaches.

Thanks
Andres
>
> Olaf
>



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 


Rackspace

Lists.xenproject.org is hosted with RackSpace, monitoring our
servers 24x7x365 and backed by RackSpace's Fanatical Support®.