Re: [PATCH v4 8/8] x86/mm: adjust loop in arch_init_memory() to iterate over the PDX space
On Tue, Aug 05, 2025 at 02:38:38PM +0200, Jan Beulich wrote:
> On 05.08.2025 11:52, Roger Pau Monne wrote:
> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -275,7 +275,7 @@ static void __init assign_io_page(struct page_info *page)
> >  
> >  void __init arch_init_memory(void)
> >  {
> > -    unsigned long i, pfn, rstart_pfn, rend_pfn, iostart_pfn, ioend_pfn;
> > +    unsigned long i, pfn, rstart_pfn, rend_pfn, iostart_pfn, ioend_pfn, pdx;
> >
> >      /*
> >       * Basic guest-accessible flags:
> > @@ -328,9 +328,20 @@ void __init arch_init_memory(void)
> >          destroy_xen_mappings((unsigned long)mfn_to_virt(iostart_pfn),
> >                               (unsigned long)mfn_to_virt(ioend_pfn));
> >  
> > -        /* Mark as I/O up to next RAM region. */
> > -        for ( ; pfn < rstart_pfn; pfn++ )
> > +        /*
> > +         * Mark as I/O up to next RAM region. Iterate over the PDX space to
> > +         * skip holes which would always fail the mfn_valid() check.
> > +         *
> > +         * pfn_to_pdx() requires a valid (iow: RAM) PFN to convert to PDX,
> > +         * hence provide pfn - 1, which is the tailing PFN from the last RAM
> > +         * range, or pdx 0 if the input pfn is 0.
> > +         */
> > +        for ( pdx = pfn ? pfn_to_pdx(pfn - 1) + 1 : 0;
> > +              pdx < pfn_to_pdx(rstart_pfn);
> > +              pdx++ )
> >          {
> > +            pfn = pdx_to_pfn(pdx);
> > +
> >              if ( !mfn_valid(_mfn(pfn)) )
> >                  continue;
> >  
>
> As much as I would have liked to ack this, I fear there's another caveat here:
> At the top of the loop we check not only for RAM, but also for UNUSABLE. The
> latter, like RAM, shouldn't be marked I/O, but we also can't use PFN <-> PDX
> transformations on any such page.
Right you are.  I'm however not sure why we do this - wouldn't we want
the mappings of UNUSABLE regions to also be removed from the Xen
page-tables (but not marked as IO)?
I could do something like:
    /* Mark as I/O up to next RAM or UNUSABLE region. */
    if ( (!pfn || pdx_is_region_compressible(pfn_to_paddr(pfn - 1), 1)) &&
         pdx_is_region_compressible(pfn_to_paddr(rstart_pfn), 1) )
    {
        /*
         * Iterate over the PDX space to skip holes which would always fail
         * the mfn_valid() check.
         *
         * pfn_to_pdx() requires a valid (iow: RAM) PFN to convert to PDX,
         * hence provide pfn - 1, which is the tailing PFN from the last
         * RAM range, or pdx 0 if the input pfn is 0.
         */
        for ( pdx = pfn ? pfn_to_pdx(pfn - 1) + 1 : 0;
              pdx < pfn_to_pdx(rstart_pfn);
              pdx++ )
        {
            pfn = pdx_to_pfn(pdx);
            if ( !mfn_valid(_mfn(pfn)) )
                continue;
            assign_io_page(mfn_to_page(_mfn(pfn)));
        }
    }
    else
    {
        /* Slow path, iterate over the PFN space. */
        for ( ; pfn < rstart_pfn; pfn++ )
        {
            if ( !mfn_valid(_mfn(pfn)) )
                continue;
            assign_io_page(mfn_to_page(_mfn(pfn)));
        }
    }
But I find it a bit ugly - I might send v5 without this final patch
while I see if I can find a better alternative.
Thanks, Roger.