
Re: [Xen-devel] Immediate kernel panic using gntdev device




The issue seems to be that my version of Xen (XenClient XT) must not support balloon drivers. Any call to the memory_op hypercall to change the reservation terminates my guest with extreme prejudice.

I'll take that one up with Citrix.  However, can someone explain why mapping a grant needs to manipulate the balloon reservation?  

Specifically, in the 3.7-rc7 Linux kernel tree, in drivers/xen/balloon.c:

At line 512 it tries to retrieve a page from the balloon; this returns NULL (no page).
The "if (page..." test at line 513 therefore evaluates to false,
so at line 518 the else branch calls decrease_reservation().


Thanks
David
  


On Tue, Nov 13, 2012 at 10:24 AM, Daniel De Graaf <dgdegra@xxxxxxxxxxxxx> wrote:
On 11/12/2012 08:15 AM, D Sundstrom wrote:
> Thank you Pablo.
>
> It makes no difference if I run both the src-add and map from the same
> domain or from different DomU domains.
> Whichever DomU I run the map function in crashes immediately.

Mapping your own grants (which is what the test run you showed did) might
cause problems - although it's a bug that needs to be fixed, if so. You
may want to try using the vchan-node2 tool (tools/libvchan) for testing
and as an example user.

> You mention Dom0.  I just want to be clear that I'd like to share
> between two DomU domains.  Have you gotten this to work?

That was the goal of gntalloc/libvchan - it should work (and has for me).

> I also tried the userspace APIs provided by Xen such as
> xc_gnttab_map_grant_ref() and these also crash.  Of course, these use
> the same driver IOCTLs, so this isn't a surprise.
>
> I'll need to see if I can get some debug info from the DomU kernel to
> make progress.

You might want to try booting your domU with console=hvc0 and look at
xl console - that will usually give you useful backtraces. Without that,
it's rather difficult to tell what the problem is.

> If I can get this to work, are there any restrictions on sharing large
> amounts of memory?  Say 160Mb?  Or are grant tables intended for a
> small number of pages?

There are restrictions within the modules (default is 1024 4K pages), and
in Xen itself for the number of grant table and maptrack pages - but I
think those can be adjusted via a boot parameter. The grant tables aren't
currently intended to share large amounts of memory, so you may run into
some inefficiencies when doing the map/unmap. If you're using an IOMMU for
one of the domUs, this may end up being especially costly.

> Thanks,
> David
>
>
>
> On Mon, Nov 12, 2012 at 5:36 AM, Pablo Llopis <pllopis@xxxxxxxxxxxxxxxxx> wrote:
>>
>> It is not clear from your output from which domain you are running
>> each command. It looks like you are trying to issue a grant and map it
>> from within the same domain. That's probably the reason it crashes.
>> You are supposed to run this tool from both domains, running the calls
>> which interface with gntalloc from one domain, and the calls which
>> interface with gntdev from the other domain.
>> In any case, the domid you have to specify in the map must be the
>> domid of the domain which issued the grant. In other words, when
>> creating a grant, the domid which is granted access is specified. When
>> mapping a grant, the domid which issued the grant is specified. (i.e.
>> if you did "src-add 8" from dom0, you should run "map 0 1372" from
>> domU 8.)
>>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel
>


--
Daniel De Graaf
National Security Agency

