Re: [Xen-devel] [PATCH] x86/pv: Fix construction of 32bit dom0's
On 07/02/2019 14:29, Andrew Cooper wrote:
> On 07/02/2019 12:58, Jan Beulich wrote:
>>>>> On 06.02.19 at 21:41, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> Slightly RFC:
>>>
>>> 1) I've not worked out exactly what the
>>>
>>> v->vcpu_info = (void *)&d->shared_info->compat.vcpu_info[0];
>>>
>>> line is supposed to be doing and whether it is needed, but it doesn't
>>> appear to matter. It is perhaps another redundant opencoding.
>> Afaict this is just to be independent of the fact that the vcpu_info
>> array is first in struct shared_info. I'd be fine with it getting replaced
>> by a respective BUILD_BUG_ON(), but I'd like to ask that it not be
>> dropped without replacement.
>
> There is some ancillary logic with vcpu_info_mfn (which looks not to be
> initialised), which is why I suspect this is some more incorrect
> opencoding to begin with.
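
A minimal sketch of the BUILD_BUG_ON() replacement suggested above, as it might sit in the dom0 construction path; the struct and field names are assumed from Xen's public and compat headers rather than taken from any actual patch:

/* Assumed sketch: assert that vcpu_info[] is the first member of both the
 * native and the compat shared_info layout, so that the explicit
 * &d->shared_info->compat.vcpu_info[0] expression is no longer needed. */
BUILD_BUG_ON(offsetof(struct shared_info, vcpu_info) != 0);
BUILD_BUG_ON(offsetof(struct compat_shared_info, vcpu_info) != 0);
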
>
>>> 2) The reported
>>>
>>> Dom0 alloc.: 000000003e800000->000000003ec00000 (240470 pages to be allocated)
>>>
>>> line changes by 1 page because of the alloc_domheap_page() moving ahead
>>> of
>>> the printk(), but I'm fairly sure this is benign. There is a matching
>>> reduction in the length of the constructed m2p which is perhaps less
>>> benign.
>> Well, the M2P of course has to be correctly sized. An off-by-one would
>> likely result in hard to repro bug reports.
>
> The delta in output (with some of my own debugging) is:
>
> @@ -22,13 +22,13 @@
> (XEN) p2m_base = 0xffffffffffffffff
> (XEN) Xen kernel: 64-bit, lsb, compat32
> (XEN) Dom0 kernel: 32-bit, PAE, lsb, paddr 0x100000 -> 0x112000
> -(XEN) ** nr_pages 241494
> +(XEN) ** nr_pages 241493
> (XEN) PHYSICAL MEMORY ARRANGEMENT:
> -(XEN) Dom0 alloc.: 000000003e800000->000000003ec00000 (240470 pages to be allocated) (tot 1024, nr 241494)
> +(XEN) Dom0 alloc.: 000000003e800000->000000003ec00000 (240469 pages to be allocated) (tot 1024, nr 241493)
> (XEN) VIRTUAL MEMORY ARRANGEMENT:
> (XEN) Loaded kernel: 0000000000100000->0000000000112000
> (XEN) Init. ramdisk: 0000000000112000->0000000000112000
> -(XEN) Phys-Mach map: 0000000000112000->00000000001fdd58
> +(XEN) Phys-Mach map: 0000000000112000->00000000001fdd54
> (XEN) Start info: 00000000001fe000->00000000001fe4b4
> (XEN) Xenstore ring: 0000000000000000->0000000000000000
> (XEN) Console ring: 0000000000000000->0000000000000000
>
> I meant the P2M rather than M2P, and it is different by 1 entry, which is
> expected, given the change by 1 page. I've positively identified the
> 1-page change to be the alloc_domheap_page() for the monitor table moving.
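
For reference, those numbers are consistent with 4-byte compat P2M entries: 0x1fdd58 - 0x112000 = 0xebd58 bytes, i.e. exactly 241494 entries, and the new end address of 0x1fdd54 is one 4-byte entry shorter, matching nr_pages dropping from 241494 to 241493.
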
>
>>> @@ -606,23 +598,14 @@ int __init dom0_construct_pv(struct domain *d,
>>> {
>>> maddr_to_page(mpt_alloc)->u.inuse.type_info = PGT_l4_page_table;
>>> l4start = l4tab = __va(mpt_alloc); mpt_alloc += PAGE_SIZE;
>>> + clear_page(l4tab);
>>> + init_xen_l4_slots(l4tab, _mfn(virt_to_mfn(l4start)),
>>> + d, INVALID_MFN, true);
>>> + v->arch.guest_table = pagetable_from_paddr(__pa(l4start));
>>> }
>>> else
>>> - {
>>> - page = alloc_domheap_page(d, MEMF_no_owner | MEMF_no_scrub);
>>> - if ( !page )
>>> - panic("Not enough RAM for domain 0 PML4\n");
>>> - page->u.inuse.type_info = PGT_l4_page_table|PGT_validated|1;
>>> - l4start = l4tab = page_to_virt(page);
>>> - maddr_to_page(mpt_alloc)->u.inuse.type_info = PGT_l3_page_table;
>>> - l3start = __va(mpt_alloc); mpt_alloc += PAGE_SIZE;
>> This one is lost without replacement, but is needed. Commit
>> 7a9d764630 ("x86/32-on-64: adjust Dom0 initial page table layout")
>> specifically introduced it to make sure the guest-perceived top level
>> page table is allocated first (and hence marks the beginning of the
>> boot page tables, so Dom0 can later put all of them into general use).
>
> I did call this out specifically in the commit message. I had no idea
> about that commit when editing the code, but I still don't understand
> why it is important that the guest's top level needs to be first.
In the Linux kernel I have the following comment for 32-bit PV guests:
/*
 * For 32 bit domains xen_start_info->pt_base is the pgd address which might be
* not the first page table in the page table pool.
* Iterate through the initial page tables to find the real page table base.
*/
Does this help somehow?
Juergen
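
Purely to illustrate what that comment describes, below is a rough, self-contained sketch. It is not the actual Linux implementation: every name in it is made up, it uses a fake in-memory page pool instead of real page tables, and it ignores the machine-to-pseudo-physical translation a real PV guest has to perform when reading its page-table entries. It simply walks the initial page tables downwards from the pgd and reports the lowest page-table frame seen as the real base of the boot page-table pool.

/*
 * Illustrative sketch only, not the actual Linux or Xen code.  It models the
 * idea in the comment above: the pgd handed over in pt_base need not be the
 * first page of the boot page-table pool, so walk the tables it references
 * and remember the lowest page-frame number seen; that page is the real base.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SHIFT    12
#define PTRS_PER_PGD  4U                /* 32-bit PAE top level (PDPT)  */
#define PTRS_PER_PMD  512U              /* entries per directory page   */
#define ENTRY_PRESENT 0x1ULL
#define ENTRY_PSE     0x80ULL           /* large page, no PT underneath */
#define ENTRY_PFN(e)  (((e) & 0x000ffffffffff000ULL) >> PAGE_SHIFT)

typedef uint64_t pt_entry_t;            /* raw PAE entry, simplified    */

/* Demo backing store standing in for real memory: one 4KiB page per PFN. */
static pt_entry_t *page_pool;

static pt_entry_t *map_table(uint64_t pfn)
{
    return &page_pool[pfn * PTRS_PER_PMD];
}

static uint64_t find_pt_base(const pt_entry_t *pgd, uint64_t pgd_pfn)
{
    uint64_t base = pgd_pfn;            /* lowest page-table PFN so far */

    for (unsigned int g = 0; g < PTRS_PER_PGD; g++) {
        if (!(pgd[g] & ENTRY_PRESENT))
            continue;

        uint64_t pmd_pfn = ENTRY_PFN(pgd[g]);
        if (pmd_pfn < base)
            base = pmd_pfn;

        const pt_entry_t *pmd = map_table(pmd_pfn);
        for (unsigned int m = 0; m < PTRS_PER_PMD; m++) {
            if (!(pmd[m] & ENTRY_PRESENT) || (pmd[m] & ENTRY_PSE))
                continue;

            uint64_t pte_pfn = ENTRY_PFN(pmd[m]);
            if (pte_pfn < base)
                base = pte_pfn;
        }
    }

    return base << PAGE_SHIFT;          /* address of the real pool base */
}

int main(void)
{
    /* Fake layout: the pgd lives at PFN 3 but references a directory at
     * PFN 2 and a page table at PFN 1, so the real pool base is PFN 1. */
    page_pool = calloc(4 * PTRS_PER_PMD, sizeof(pt_entry_t));
    if (!page_pool)
        return 1;

    pt_entry_t *pgd = map_table(3);
    pgd[0]          = (2ULL << PAGE_SHIFT) | ENTRY_PRESENT;
    map_table(2)[0] = (1ULL << PAGE_SHIFT) | ENTRY_PRESENT;

    printf("pool base: %#llx\n",        /* prints 0x1000, i.e. PFN 1 */
           (unsigned long long)find_pt_base(pgd, 3));

    free(page_pool);
    return 0;
}

Compiled standalone, the demo prints "pool base: 0x1000", i.e. it picks the lowest referenced page-table frame (PFN 1) rather than the pgd's own frame (PFN 3).
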
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel