
Re: [Xen-devel] [V3 PATCH 9/9] pvh dom0: add opt_dom0pvh to setup.c



On Mon, 2 Dec 2013 12:38:12 -0800
Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:

> On Mon, 2 Dec 2013 20:38:26 +0100
> Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> 
> > On 02/12/13 20:30, Mukesh Rathor wrote:
> > > On Mon, 2 Dec 2013 16:09:50 +0100
> > > Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> > > 
> ......
> > Thanks for the input, I haven't been able to do much debugging, but
> > AFAICT the problem comes from alloc_vcpu_guest_context returning
> > NULL, because alloc_domheap_page(NULL, 0) inside of that function
> > also returned NULL (not able to allocate a page).
> > 
> > I'm currently using your tree for both Linux (branch tmp2) and Xen
> > (branch dom0pvh-v3), no added or removed patches (only the line
> > mentioned above in order to boot Dom0 in PVH mode).
> > 
> > Can you confirm you are able to boot a PVH Dom0 from your Xen and
> > Linux trees?
> 
> yes, I'd not be submitting the patches otherwise ;)..
> 
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0  1200     3     r-----    6250.6

FWIW, here's my command line:

        kernel /xen.hybrid.kdb console=com1,vga com1=57600,8n1 dom0_mem=1200M maxcpus=4 dom0_max_vcpus=3 noreboot sync_console dom0pvh
        module /bzImage.hyb ro root=/dev/sda10 console=tty console=hvc0,57600n8
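
For anyone reproducing this on a GRUB 2 box, a hypothetical grub.cfg entry
equivalent to the above might look like this (options copied verbatim from the
lines above; the menu entry name is made up):

    menuentry 'Xen PVH dom0 test' {
        multiboot /xen.hybrid.kdb console=com1,vga com1=57600,8n1 dom0_mem=1200M maxcpus=4 dom0_max_vcpus=3 noreboot sync_console dom0pvh
        module /bzImage.hyb ro root=/dev/sda10 console=tty console=hvc0,57600n8
    }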


Looks like something is going wrong with memory on your system; the line

    Released 18446744073708661104 pages of unused memory

is worrisome. Maybe start by specifying dom0_mem and see if that makes a difference?
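
FWIW, that number is just shy of 2^64, which is what you get when an unsigned
page count goes slightly negative and wraps around. A quick standalone check
(plain C, nothing Xen-specific; the only input is the value from the log line
above):

    /* Illustration only: show how far below 2^64 the reported count is,
     * assuming it is a wrapped (underflowed) unsigned page count. */
    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        uint64_t reported = 18446744073708661104ULL;  /* value from the log */
        uint64_t wrapped  = UINT64_C(0) - reported;   /* distance below 2^64 */

        printf("wrapped by %" PRIu64 " pages (~%" PRIu64 " MiB at 4 KiB/page)\n",
               wrapped, wrapped * 4096 / (1024 * 1024));
        return 0;
    }

That prints a wrap of roughly 890k pages (~3.4 GiB), so it looks like the
released-pages accounting went negative rather than Dom0 really giving back
that much memory.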

OTOH, after I am done cranking out another dom0 version with Jan's
comments, I'll try it out on my system without dom0_mem specified. That
may take a day or two.

thanks,
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

