
Re: [PATCH v3 4/9] xen/riscv: setup fixmap mapping



On Mon, 2024-08-05 at 17:45 +0200, Jan Beulich wrote:
> On 05.08.2024 17:13, oleksii.kurochko@xxxxxxxxx wrote:
> > On Mon, 2024-07-29 at 15:35 +0200, Jan Beulich wrote:
> > > > +    }
> > > > +
> > > > +    BUG_ON(pte_is_valid(*pte));
> > > > +
> > > > +    tmp = paddr_to_pte(LINK_TO_LOAD((unsigned long)&xen_fixmap), PTE_TABLE);
> > > 
> > > I'm a little puzzled by the use of LINK_TO_LOAD() (and LOAD_TO_LINK()
> > > a little further up) here. Don't you have functioning __pa() and
> > > __va()?
> > Can __pa() and __va() be used in this case?
> > 
> > According to comments for other architectures, these macros are used
> > for converting between Xen heap virtual addresses (VA) and machine
> > addresses (MA). I may have misunderstood what is meant by the Xen heap
> > in this context, but I'm not sure if xen_fixmap[] and page tables are
> > considered part of the Xen heap.
> 
> I didn't check Arm, but on x86 virt_to_maddr() (underlying __pa()) has
> special case code to also allow addresses within the Xen image (area).

Yes, that is true for __virt_to_maddr():
   static inline unsigned long __virt_to_maddr(unsigned long va)
   {
       ASSERT(va < DIRECTMAP_VIRT_END);
       if ( va >= DIRECTMAP_VIRT_START )
           va -= DIRECTMAP_VIRT_START;
       else
       {
           BUILD_BUG_ON(XEN_VIRT_END - XEN_VIRT_START != GB(1));
           /* Signed, so ((long)XEN_VIRT_START >> 30) fits in an imm32. */
           ASSERT(((long)va >> (PAGE_ORDER_1G + PAGE_SHIFT)) ==
                  ((long)XEN_VIRT_START >> (PAGE_ORDER_1G + PAGE_SHIFT)));
   
           va += xen_phys_start - XEN_VIRT_START;
       }
       return (va & ma_va_bottom_mask) |
              ((va << pfn_pdx_hole_shift) & ma_top_mask);
   }
   
But in the case of __maddr_to_virt() ( __va() ), only the directmap region is handled:
   static inline void *__maddr_to_virt(unsigned long ma)
   {
        ASSERT(pfn_to_pdx(ma >> PAGE_SHIFT) < (DIRECTMAP_SIZE >> PAGE_SHIFT));
       return (void *)(DIRECTMAP_VIRT_START +
                       ((ma & ma_va_bottom_mask) |
                        ((ma & ma_top_mask) >> pfn_pdx_hole_shift)));
   }

But I need both directions here, __va() and __pa().
__va() inside the loop to find the L1 page table:

    for ( i = HYP_PT_ROOT_LEVEL; i-- > 1; )
    {
        BUG_ON(!pte_is_valid(*pte));

        pte = (pte_t *)LOAD_TO_LINK(pte_to_paddr(*pte));
        pte = &pte[pt_index(i, FIXMAP_ADDR(0))];
    }
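
If RISC-V's __va() (maddr_to_virt()) had a special case for physical
addresses inside the Xen image, similar in spirit to the x86
__virt_to_maddr() quoted above, the walk could use it directly. This is
just a sketch of the idea, not code from the patch:

    for ( i = HYP_PT_ROOT_LEVEL; i-- > 1; )
    {
        BUG_ON(!pte_is_valid(*pte));

        /* Only valid if __va() can translate Xen-image addresses. */
        pte = (pte_t *)__va(pte_to_paddr(*pte));
        pte = &pte[pt_index(i, FIXMAP_ADDR(0))];
    }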

__pa() to set the physical address of the L0 page table:
    tmp = paddr_to_pte(LINK_TO_LOAD((unsigned long)&xen_fixmap), PTE_TABLE);
    write_pte(pte, tmp);
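
And just for comparison, not existing RISC-V code: an x86-style
__virt_to_maddr() with a special case for the Xen image would let __pa()
replace LINK_TO_LOAD() in the line above. XEN_VIRT_SIZE and the
zero-based directmap below are assumptions, not taken from the patch:

    static inline unsigned long __virt_to_maddr(unsigned long va)
    {
        /* VAs inside the Xen image: same adjustment LINK_TO_LOAD() does. */
        if ( (va >= XEN_VIRT_START) && (va - XEN_VIRT_START < XEN_VIRT_SIZE) )
            return LINK_TO_LOAD(va);

        /* Otherwise expect a directmap VA (assumed zero-based here). */
        ASSERT(va >= DIRECTMAP_VIRT_START);
        return va - DIRECTMAP_VIRT_START;
    }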


~ Oleksii



 

