Re: [Xen-devel] [PATCH v02 1/7] arm: introduce remoteprocessor iommu module
Hi Julien,
Thanks a lot for the detailed review, and sorry for the late reply. I was in
a big rush these last days.
On Sun, Jun 29, 2014 at 9:00 PM, Julien Grall <julien.grall@xxxxxxxxxx> wrote:
> Hi Andrii,
>
>
> On 26/06/14 12:07, Andrii Tseglytskyi wrote:
>>
>> This is a fisrst patch from patch series which was
>
>
> s/fisrst/first/
>
> Although I don't think you have to say that this is the first patch :). This
> is not useful for the commit message.
>
>
>> developed to handle the memory management units
>> of remote (external) processors. Remote processors are
>> typically used for graphics rendering (GPUs) and
>> high-quality video decoding (IPUs). They are typically
>> found on multimedia SoCs such as OMAP4 / OMAP5.
>>
>> Since a remoteprocessor MMU typically works with
>> pagetables filled with physical addresses, which are
>> allocated by the domU kernel, it is almost impossible
>> to use them under Xen as-is: the intermediate physical
>> addresses allocated by the kernel need to be translated
>> to machine addresses.
>>
>> This patch introduces a simple framework to perform
>> pfn -> mfn translation for external MMUs.
>> It introduces basic data structures and algorithms
>> needed for translation.
>>
>> Typically, when the MMU is configured, some of its
>> registers are updated with new values. The introduced
>> framework uses traps on these registers as the starting
>> point of remoteproc MMU pagetable translation.
>>
>> Change-Id: Ia4d311a40289df46a003f5ae8706c150bee1885d
>> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@xxxxxxxxxxxxxxx>
>
>
> [..]
>
>
>> +static struct mmu_info *mmu_list[] = {
>> +};
>> +
>> +#define mmu_for_each(pfunc, data)                       \
>> +({                                                      \
>> +    u32 __i;                                            \
>> +    int __res = 0;                                      \
>> +                                                        \
>> +    for ( __i = 0; __i < ARRAY_SIZE(mmu_list); __i++ )  \
>> +    {                                                   \
>> +        __res |= pfunc(mmu_list[__i], data);            \
>
>
> Using __res |= will result in a wrong errno at the end.
> See the usage in mmu_init_all.
>
> I would use
>
> __res = pfunc(...)
> if ( !__res )
> break;
>
> And therefore modify mmu_check to return 1 if failing, 0 otherwise.
>
> This will also avoid continuing to browse all the MMUs for nothing.
>
OK.
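So the iterator would become something like this (a sketch, assuming all
callbacks are reworked to return 0 to continue and a non-zero value to stop
the walk):

#define mmu_for_each(pfunc, data)                       \
({                                                      \
    u32 __i;                                            \
    int __res = 0;                                      \
                                                        \
    for ( __i = 0; __i < ARRAY_SIZE(mmu_list); __i++ )  \
    {                                                   \
        /* stop at the first non-zero result */         \
        __res = pfunc(mmu_list[__i], data);             \
        if ( __res )                                    \
            break;                                      \
    }                                                   \
    __res;                                              \
})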
>
>> +    }                                                   \
>> +    __res;                                              \
>> +})
>> +
>> +static int mmu_check_mem_range(struct mmu_info *mmu, paddr_t addr)
>> +{
>> +    if ( (addr >= mmu->mem_start) && (addr < (mmu->mem_start + mmu->mem_size)) )
>> +        return 1;
>> +
>> +    return 0;
>> +}
>> +
>> +static inline struct mmu_info *mmu_lookup(u32 addr)
>> +{
>> +    u32 i;
>> +
>> +    /* enumerate all registered MMUs and check if the address is in range */
>> +    for ( i = 0; i < ARRAY_SIZE(mmu_list); i++ )
>> +    {
>> +        if ( mmu_check_mem_range(mmu_list[i], addr) )
>> +            return mmu_list[i];
>> +    }
>> +
>> +    return NULL;
>> +}
>> +
>> +static int mmu_mmio_check(struct vcpu *v, paddr_t addr)
>> +{
>> +    return mmu_for_each(mmu_check_mem_range, addr);
>> +}
>
>
> This solution allows any guest to access the MMU and therefore program it.
> If you plan to use it for passthrough, you have to find a way to say that a
> specific domain is able to use the MMU, maybe via a hypercall. Otherwise
> this is a security issue.
>
> IIRC I have already raised this concern on V1 :). It would be nice if you
> resolved it ASAP, because I suspect it will need some rework in the way you
> handle the MMU.
>
Agreed. I need to think about how to solve this. I don't think a hypercall
is what is needed here.
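Maybe a per-domain flag set by the toolstack when the remoteproc is assigned
to the domain would be enough. A rough sketch (note that
d->arch.remoteproc_enabled is a hypothetical field, it doesn't exist yet):

static int mmu_mmio_check(struct vcpu *v, paddr_t addr)
{
    /* only domains that own a remoteproc may program its MMU */
    if ( !v->domain->arch.remoteproc_enabled )
        return 0;

    return mmu_for_each(mmu_check_mem_range, addr);
}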
>
>> +static int mmu_copy_pagetable(struct mmu_info *mmu, struct mmu_pagetable *pgt)
>> +{
>> +    void __iomem *pagetable = NULL;
>> +    u32 maddr, i;
>> +
>> +    ASSERT(mmu);
>> +    ASSERT(pgt);
>> +
>> +    if ( !pgt->paddr )
>> +        return -EINVAL;
>> +
>> +    /* pagetable size can be more than one page */
>> +    for ( i = 0; i < MMU_PGD_TABLE_SIZE(mmu) / PAGE_SIZE; i++ )
>> +    {
>> +        /* lookup address where remoteproc pagetable is stored by kernel */
>> +        maddr = p2m_lookup(current->domain, pgt->paddr + i * PAGE_SIZE, NULL);
>> +        if ( !maddr )
>> +        {
>> +            pr_mmu("failed to translate 0x%08lx to maddr", pgt->paddr + i * PAGE_SIZE);
>> +            return -EINVAL;
>> +        }
>> +
>> +        pagetable = ioremap_nocache(maddr, MMU_PGD_TABLE_SIZE(mmu));
>
>
> ioremap_* should only be used to map device memory. For guest memory you
> have to use the copy_from_guest helper.
>
OK. Just a small question - if I use copy_from_guest(), can I copy from a
physical pointer? Here I have an address which points to exact physical
memory.
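Or would it be acceptable to map the guest page directly instead? Something
like this (a sketch, assuming maddr is page-aligned):

    /* map one page of the guest's pagetable and copy it out */
    void *va = map_domain_page(maddr >> PAGE_SHIFT);

    memcpy((u8 *)pgt->kern_pagetable + i * PAGE_SIZE, va, PAGE_SIZE);
    unmap_domain_page(va);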
>
>> +struct mmu_pagetable *mmu_pagetable_lookup(struct mmu_info *mmu, u32 addr, bool is_maddr)
>
>
> you have to use paddr_t for addr. The physical address can be wider than
> 32 bits (up to 40 bits with LPAE).
>
OK
> The remark is the same everywhere within this file.
>
> [..]
>
>
>> +static u32 mmu_translate_pagetable(struct mmu_info *mmu, u32 paddr)
>> +{
>> +    struct mmu_pagetable *pgt;
>> +    int res;
>> +
>> +    /* lookup using machine address first */
>> +    pgt = mmu_pagetable_lookup(mmu, paddr, true);
>> +    if ( !pgt )
>> +    {
>> +        /* lookup using kernel physical address */
>> +        pgt = mmu_pagetable_lookup(mmu, paddr, false);
>> +        if ( !pgt )
>> +        {
>> +            /* if the pagetable doesn't exist in the lookup list - allocate it */
>> +            pgt = mmu_alloc_pagetable(mmu, paddr);
>
>
> The function mmu_alloc_pagetable can return NULL. But you never check the
> return value and dereference it just below.
>
OK. Thank you for catching this.
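Will add a check along these lines (sketch; returning MMU_INVALID_ADDRESS on
the error path is my assumption):

            pgt = mmu_alloc_pagetable(mmu, paddr);
            if ( !pgt )
            {
                /* allocation failed - report an invalid address */
                return MMU_INVALID_ADDRESS;
            }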
>
>> +        }
>> +    }
>> +
>> +    pgt->maddr = MMU_INVALID_ADDRESS;
>
>
> [..]
>
>
>> +static int mmu_mmio_read(struct vcpu *v, mmio_info_t *info)
>> +{
>> +    struct mmu_info *mmu = NULL;
>> +    unsigned long flags;
>> +    register_t *r;
>> +
>> +    r = select_user_reg(guest_cpu_user_regs(), info->dabt.reg);
>> +
>> +    ASSERT(r);
>
>
> The ASSERT is pointless, select_user_reg will never return NULL.
OK
>
> [..]
>
>
>> +static int mmu_mmio_write(struct vcpu *v, mmio_info_t *info)
>> +{
>> +    struct mmu_info *mmu = NULL;
>> +    unsigned long flags;
>> +    register_t *r;
>> +    u32 new_addr, val;
>> +
>> +    r = select_user_reg(guest_cpu_user_regs(), info->dabt.reg);
>> +
>> +    ASSERT(r);
>
>
> Same here.
>
OK
>
>> +    /* dom0 should not access remoteproc MMU */
>> +    if ( 0 == current->domain->domain_id )
>> +        return 1;
>
>
> Why this restriction?
>
We agreed that the remoteproc will be handled by domU, which doesn't have a
1:1 memory mapping. If the remoteproc belongs to dom0, translation is not
needed - its MMU will be configured with machine pointers.
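The check itself can also be made more explicit (a sketch, assuming
is_hardware_domain() is available in this tree):

    /* dom0 maps the remoteproc MMU 1:1, so no translation is needed */
    if ( is_hardware_domain(current->domain) )
        return 1;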
>
>> +    /* find corresponding MMU */
>> +    mmu = mmu_lookup(info->gpa);
>> +    if ( !mmu )
>> +    {
>> +        pr_mmu("can't get mmu for addr 0x%08x", (u32)info->gpa);
>> +        return -EINVAL;
>> +    }
>> +
>> +    ASSERT(v->domain == current->domain);
>
>
> I guess this restriction is because you are using current in
> mmu_trap_translate_pagetable?
Right. I just tried to make sure that p2m_lookup is called with the proper
domain pointer.
>
> So, the iohandler is usually called on the current VCPU; there is no need
> to worry about it. Furthermore, I would pass the vcpu/domain as an argument
> to the next function.
>
Could be. In the first revision of these patches the domain was passed as a
parameter, but that led to one more parameter in all the functions below
this call. I found that I can reduce the argument list if I use the
current->domain pointer.
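If passing it explicitly is still preferred, the handler would simply
forward the vcpu's domain (sketch; the translation function would grow a
struct domain * argument):

    /* thread the domain through instead of relying on 'current' */
    new_addr = mmu_trap_translate_pagetable(v->domain, mmu, info);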
>
>> +static int mmu_init(struct mmu_info *mmu, u32 data)
>> +{
>> +    ASSERT(mmu);
>> +    ASSERT(!mmu->mem_map);
>> +
>> +    INIT_LIST_HEAD(&mmu->pagetables_list);
>> +
>> +    /* map MMU memory */
>> +    mmu->mem_map = ioremap_nocache(mmu->mem_start, mmu->mem_size);
>> +    if ( !mmu->mem_map )
>
>
> It looks like there is a hard tab here.
>
OK. Thank you for catching this.
> [..]
>
>
>> +static int mmu_init_all(void)
>> +{
>> +    int res;
>> +
>> +    res = mmu_for_each(mmu_init, 0);
>> +    if ( res )
>> +    {
>> +        printk("%s error during init %d\n", __func__, res);
>> +        return res;
>> +    }
>
>
> Hmmm... do_initcalls doesn't check the return value. How does your code
> behave when one of the MMUs has not been initialized?
>
> I think do_initcalls & co should check the return value, but as it's common
> code I don't know whether x86 respects the convention of returning 0 on
> success. Ian, Stefano, any thoughts?
>
I would like to keep this specific to ARM if possible.
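If checking the return value in common code is not an option, one way to
keep it ARM-only is to just panic at boot when an MMU fails to initialize
(sketch):

static int __init mmu_init_all(void)
{
    int res;

    res = mmu_for_each(mmu_init, 0);
    if ( res )
        panic("%s: error during init %d\n", __func__, res);

    return 0;
}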
> [..]
>
>> diff --git a/xen/include/xen/remoteproc_iommu.h b/xen/include/xen/remoteproc_iommu.h
>
>
> I think this file as
>
>
>> new file mode 100644
>> index 0000000..22e2951
>> --- /dev/null
>> +++ b/xen/include/xen/remoteproc_iommu.h
>
>
> The remoteproc thing is ARM-specific, right? If so, this header should be
> moved to include/asm-arm.
>
OK
>> +
>> +#define MMU_INVALID_ADDRESS ((u32)(-1))
>
>
> You should not assume that the MMU address is always 32 bits. Please use
> paddr_t here.
>
Sure. Thank you. The next series will use paddr_t everywhere that u32 is
currently used for address definitions.
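For example, the lookup helper would become (sketch):

struct mmu_pagetable *mmu_pagetable_lookup(struct mmu_info *mmu,
                                           paddr_t addr, bool is_maddr);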
>
>> +#define pr_mmu(fmt, ...) \
>> +    printk("%s: %s: "fmt"\n", __func__, ((mmu) ? (mmu)->name : ""), ##__VA_ARGS__)
>
>
> Hmmm, you are assuming that mmu exists within the function. You should
> pass the mmu as a parameter of this macro.
>
> Also, most of the usages are for an error (except the one in
> mmu_page_alloc). I would prefix it with XENLOG_ERR.
>
OK
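Will rework it along these lines (sketch):

#define pr_mmu(mmu, fmt, ...)                               \
    printk(XENLOG_ERR "%s: %s: "fmt"\n", __func__,          \
           ((mmu) ? (mmu)->name : ""), ##__VA_ARGS__)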
>
>> +struct pagetable_data {
>> +    /* 1st level translation */
>> +    u32 pgd_shift;
>> +    u32 pte_shift;
>> +    u32 super_shift;
>> +    u32 section_shift;
>> +    /* 2nd level translation */
>> +    u32 pte_large_shift;
>> +};
>
>
> No hard tabs in Xen. Please remove them.
>
OK
>
>> +
>> +struct mmu_pagetable {
>> +    u32 *hyp_pagetable;
>> +    u32 *kern_pagetable;
>> +    u32 paddr;
>> +    u32 maddr;
>> +    struct list_head link_node;
>> +    u32 page_counter;
>> +};
>
>
> Same here.
>
OK
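For v3 this would become (sketch, with paddr_t for the addresses and spaces
instead of hard tabs):

struct mmu_pagetable {
    u32              *hyp_pagetable;
    u32              *kern_pagetable;
    paddr_t          paddr;
    paddr_t          maddr;
    struct list_head link_node;
    u32              page_counter;
};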
>
>> +
>> +struct mmu_info {
>> +    const char *name;
>> +    const struct pagetable_data *pg_data;
>> +    /* register where phys pointer to pagetable is stored */
>> +    u32 *trap_offsets;
>> +    paddr_t mem_start;
>> +    u32 mem_size;
>> +    spinlock_t lock;
>> +    struct list_head pagetables_list;
>> +    u32 num_traps;
>> +    void __iomem *mem_map;
>> +    u32 (*translate_pfunc)(struct mmu_info *, struct mmu_pagetable *);
>> +    void (*print_pagetable_pfunc)(struct mmu_info *);
>> +};
>
>
> Same here.
>
> Regards,
>
> --
> Julien Grall
--
Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel