
Re: [Xen-devel] [PATCH] tools/libxc: send page-in requests in batches in linux_privcmd_map_foreign_bulk

  • To: "Olaf Hering" <olaf@xxxxxxxxx>
  • From: "Andres Lagar-Cavilla" <andres@xxxxxxxxxxxxxxxx>
  • Date: Thu, 26 Jan 2012 08:58:02 -0800
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 26 Jan 2012 16:58:20 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

> On Thu, Jan 26, Andres Lagar-Cavilla wrote:
>> Olaf,
>> if I get a pattern of errors in the err array of the form:
>> this patch does not optimize anything.
> It will poke all gfns instead of a single one before going to sleep.

All gfns that are contiguous in the arr array and have all failed with
ENOENT.

>> Why not alloc a separate arr2 array, move the gfn's that failed with
>> ENOENT there, and retry the whole new arr2 block? You only need to
>> allocate arr2 once, to size num. That will batch always, everything.
> The errors for gfns are most likely fragmented. Merging the gfns means
> they will get a different addr. So the overall layout of arr and err
> needs to remain the same.

Ummh yeah, addr is the problem. But as you say, the errors for the gfns
are fragmented, so that seriously limits your batching abilities.

You could pass the whole arr sub-segment encompassing the first and last
gfn that failed with ENOENT. Successful maps within that array will be
re-done by the hypervisor, at no correctness cost. I would imagine that
the extra work is offset by the gains, but that remains to be seen.
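To make the sub-segment idea concrete, here is a small sketch (not the
actual libxc code; the helper name and its use are my own illustration):
given the err array filled in by the mmapbatch ioctl, find the span from
the first to the last gfn that failed with ENOENT. Retrying that whole
span in one batch re-maps the already-successful entries too, which, as
noted above, the hypervisor tolerates at no correctness cost.

```c
#include <errno.h>
#include <stddef.h>

/*
 * Hypothetical helper: locate the smallest sub-segment [*first, *last]
 * of err[0..num-1] that encompasses every entry which failed with
 * ENOENT.  Returns 1 if any such entry exists, 0 otherwise.  The caller
 * would then re-issue one batched map request for that span, leaving
 * the layout of arr and err (and thus addr) unchanged.
 */
static int find_enoent_span(const int *err, size_t num,
                            size_t *first, size_t *last)
{
    size_t i;
    int found = 0;

    for (i = 0; i < num; i++) {
        if (err[i] == -ENOENT) {
            if (!found)
                *first = i;
            *last = i;
            found = 1;
        }
    }
    return found;
}
```

The retried span may contain entries that succeeded the first time; the
trade-off is redundant re-mapping work in exchange for a single large
request instead of many fragmented ones.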

> Olaf
