
Re: [Xen-devel] [PATCH] Xen backend support for paged out grant targets V4.



On Sep 21, 2012, at 2:52 PM, Konrad Rzeszutek Wilk wrote:

> On Mon, Sep 17, 2012 at 05:29:24AM -0400, Andres Lagar-Cavilla wrote:
>> On Sep 17, 2012, at 4:17 AM, Ian Campbell wrote:
>> 
>>> (I think I forgot to hit send on this on Friday, sorry. Also
>>> s/xen.lists.org/lists.xen.org in the CC line…)
>> I'm on a roll here…
>> 
>>> 
>>> On Fri, 2012-09-14 at 15:26 +0100, Andres Lagar-Cavilla wrote:
>>>> Since Xen 4.2, HVM domains may have portions of their memory paged out.
>>>> When a foreign domain (such as dom0) attempts to map these frames, the map
>>>> will initially fail. The hypervisor returns a suitable errno and kicks off
>>>> an asynchronous page-in operation carried out by a helper. The foreign
>>>> domain is expected to retry the mapping operation until it eventually
>>>> succeeds. The foreign domain is not put to sleep, because it could itself
>>>> be the one running the pager assist (the typical scenario for dom0).
>>>> 
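To make the protocol above concrete, here is a minimal user-space sketch of
the mapping side. The helper name is made up, and the assumption that a
paged-out frame reports -ENOENT in its err slot (rather than some other
errno) is mine, not something this patch defines:

    #include <errno.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <xenctrl.h>

    /* Map one foreign frame, retrying while the target is paged out.
     * Assumes *err == -ENOENT marks a frame undergoing page-in. */
    static void *map_foreign_frame_retry(xc_interface *xch, uint32_t dom,
                                         xen_pfn_t pfn, int *err)
    {
        void *addr;

        for (;;) {
            addr = xc_map_foreign_bulk(xch, dom, PROT_READ | PROT_WRITE,
                                       &pfn, err, 1);
            if (addr == NULL || *err != -ENOENT)
                return addr;            /* success, or a non-paging failure */
            munmap(addr, XC_PAGE_SIZE); /* frame paged out: unmap and retry */
            usleep(1000);               /* yield while the pager works */
        }
    }

Note that the caller keeps retrying in its own context instead of blocking in
the hypervisor, which is exactly why dom0 can still run the pager assist while
it waits.
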
>>>> This patch adds support for this mechanism for backend drivers using grant
>>>> mapping and copying operations. Specifically, this covers the blkback and
>>>> gntdev drivers (which map foregin grants), and the netback driver (which
>>> 
>>>                            foreign
>>> 
>>>> copies foreign grants).
>>>> 
>>>> * Add a retry method for grants that fail with GNTST_eagain (i.e. because
>>>> the target foregin frame is paged out).
>>> 
>>>             foreign
>>> 
>>>> * Insert hooks with appropriate wrappers in the aforementioned drivers.
>>>> 
>>>> The retry loop is only invoked if the grant operation status is
>>>> GNTST_eagain. It is guaranteed to finish with a status code different from
>>>> GNTST_eagain. Any other status code results in code execution identical to
>>>> before.
>>>> 
>>>> The retry loop performs 256 attempts with increasing time intervals spread
>>>> over a 32-second period. It uses msleep to yield while waiting for the
>>>> next retry.
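For reference, a sketch of the shape such a loop can take on the kernel side
(the helper name, the exact backoff bound, and the final fallback status are
illustrative, not a quote of the patch). Sleeping 1, 2, ..., 255 ms between
attempts adds up to roughly 32 seconds:

    #include <linux/bug.h>
    #include <linux/delay.h>
    #include <linux/printk.h>
    #include <xen/interface/grant_table.h>
    #include <asm/xen/hypercall.h>

    /* Re-issue a single grant operation while it returns GNTST_eagain.
     * On final failure the status is forced away from GNTST_eagain, so
     * callers never observe it, matching the description above. */
    static void retry_eagain_grant_op(unsigned int cmd, void *gop,
                                      int16_t *status, const char *func)
    {
        unsigned int delay = 1;

        do {
            BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
            if (*status == GNTST_eagain)
                msleep(delay++);    /* back off a little longer each time */
        } while (*status == GNTST_eagain && delay < 256);

        if (*status == GNTST_eagain) {
            pr_err("%s: grant op still eagain after ~32s, giving up\n", func);
            *status = GNTST_bad_page;   /* any status but GNTST_eagain */
        }
    }

Any status other than GNTST_eagain on the first attempt falls straight out of
the loop, preserving the old behaviour.
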
>>> [...]
>>>> Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
>>> 
>>> Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
>>> 
>>> Since this is more about grant tables than netback, this should probably
>>> go via Konrad rather than Dave. Is that OK with you, Dave?
>> 
>> If that is the case, hopefully Konrad can deal with the two typos?
>> Otherwise, I'm happy to re-spin the patch.
> 
> So with this patch, when I launch a PVHVM guest on Xen 4.1, I get this in
> the initial domain and the guest crashes:
> 
> [  261.927218] privcmd_fault: vma=ffff88002a31dce8 7f4edc095000-7f4edc195000, pgoff=c8, uv=00007f4edc15d000

With this patch, or with the mmapbatch v2 patch? This is a page fault in a
foreign-mapped VMA, which is not touched by the grant backend patch we are
discussing here.

Does the hypervisor dump anything to its console?

At which point during xc_hvm_build do you see this? (Or elsewhere in the
toolstack?)

Thanks
Andres

> 
> guest config:
>> more /mnt/lab/latest/hvm.xm 
> kernel = "/usr/lib/xen/boot/hvmloader"
> builder='hvm'
> memory=1024
> #maxmem=1024
> maxvcpus = 2
> serial='pty'
> vcpus = 2
> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
> boot="dn"
> #vif = [ 'type=ioemu,model=e1000,mac=00:0F:4B:00:00:71, bridge=switch' ]
> vif = [ 'type=netfront, bridge=switch' ]
> #vfb = [ 'vnc=1, vnclisten=0.0.0.0 ,vncunused=1']
> vnc=1
> vnclisten="0.0.0.0"
> usb=1
> xen_platform_pci=1
> 
> 

