Re: [Xen-devel] [PATCH 17/17] PVH xen: PVH dom0 creation....
On Fri, 10 May 2013 08:14:55 +0100 "Jan Beulich" <JBeulich@xxxxxxxx> wrote:

> >>> On 10.05.13 at 03:53, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
> > On Fri, 26 Apr 2013 08:22:08 +0100
> >> >> > +    /* If the e820 ended under 4GB, we must map the remaining space upto 4GB */
> >> >> > +    if ( end < GB(4) )
> >> >> > +    {
> >> >> > +        start_pfn = PFN_UP(end);
> >> >> > +        end_pfn = (GB(4)) >> PAGE_SHIFT;
> >> >> > +        nump = end_pfn - start_pfn;
> >> >> > +        rc = domctl_memory_mapping(d, start_pfn, start_pfn, nump, 1);
> >> >> > +        BUG_ON(rc);
> >> >> > +    }
> >> >>
> >> >> That's necessary, but not sufficient. Or did I overlook MMIO
> >> >> ranges getting added somewhere else for Dom0, when they sit
> >> >> above the highest E820-covered address?
> >> >
> >> > construct_dom0() adds the entire range:
> >> >
> >> >     /* DOM0 is permitted full I/O capabilities. */
> >> >     rc |= ioports_permit_access(dom0, 0, 0xFFFF);
> >> >     rc |= iomem_permit_access(dom0, 0UL, ~0UL);
> >>
> >> Which does not create any mappings at all - these are just
> >> permissions being granted.
> >
> > Right. I'm not sure where it's happening for dom0.
>
> So if you don't know where you do this, I have to guess you don't
> do this at all. But you obviously need to. Your main problem is that
> you likely don't want to waste memory on page tables to cover the
> whole (up to 52-bit wide) address space, so I assume you will need
> to add these tables on demand. Yet then again iirc IOMMU faults

Hmm... well, I originally had the tables updated "on demand", initiated
by the guest, but the suggestion then was to make that transparent to
the guest. I don't really know what the best solution is; let me
investigate and think about it some more.

thanks,
Mukesh
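
For readers without the rest of the patch in front of them, here is a
rough sketch of how the quoted tail could sit inside a loop that
1:1-maps every non-RAM gap in the host E820 for a PVH dom0. This is
not the code under review: the pvh_map_all_iomem() name and the
iteration over Xen's global e820 map are assumptions, while
domctl_memory_mapping(), PFN_UP(), GB() and BUG_ON() are taken from the
quoted fragment. Note that even this version leaves MMIO above the
highest E820-covered address (e.g. relocated 64-bit BARs) unmapped,
which is exactly the gap Jan points out.

    /*
     * Sketch only: assumes the domctl_memory_mapping() helper used in
     * the quoted patch and Xen's global "e820" map (e820.nr_map /
     * e820.map[]); the function name is hypothetical.
     */
    static void pvh_map_all_iomem(struct domain *d)
    {
        unsigned long start_pfn, end_pfn, nump;
        uint64_t end = 0;
        unsigned int i;
        int rc;

        for ( i = 0; i < e820.nr_map; i++ )
        {
            uint64_t map_end = e820.map[i].addr + e820.map[i].size;

            /* Map the hole between the previous entry and this one. */
            if ( e820.map[i].addr > end )
            {
                start_pfn = PFN_UP(end);
                end_pfn = PFN_DOWN(e820.map[i].addr);
                if ( start_pfn < end_pfn )
                {
                    nump = end_pfn - start_pfn;
                    rc = domctl_memory_mapping(d, start_pfn, start_pfn,
                                               nump, 1);
                    BUG_ON(rc);
                }
            }

            /* Non-RAM entries (MMIO, ACPI, ...) also get a 1:1 mapping. */
            if ( e820.map[i].type != E820_RAM )
            {
                start_pfn = PFN_DOWN(e820.map[i].addr);
                end_pfn = PFN_UP(map_end);
                nump = end_pfn - start_pfn;
                rc = domctl_memory_mapping(d, start_pfn, start_pfn, nump, 1);
                BUG_ON(rc);
            }

            if ( map_end > end )
                end = map_end;
        }

        /*
         * If the e820 ended under 4GB, map the remaining space up to 4GB
         * (this is the fragment quoted above).  MMIO sitting above the
         * highest E820-covered address is still not handled here.
         */
        if ( end < GB(4) )
        {
            start_pfn = PFN_UP(end);
            end_pfn = GB(4) >> PAGE_SHIFT;
            nump = end_pfn - start_pfn;
            rc = domctl_memory_mapping(d, start_pfn, start_pfn, nump, 1);
            BUG_ON(rc);
        }
    }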