RE: [PATCH v1 06/13] xen/arm: assign shared memory to owner when host address not provided
Hi Julien
Happy new year~~~~
> -----Original Message-----
> From: Julien Grall <julien@xxxxxxx>
> Sent: Sunday, January 8, 2023 8:53 PM
> To: Penny Zheng <Penny.Zheng@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Wei Chen <Wei.Chen@xxxxxxx>; Stefano Stabellini
> <sstabellini@xxxxxxxxxx>; Bertrand Marquis <Bertrand.Marquis@xxxxxxx>;
> Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
> Subject: Re: [PATCH v1 06/13] xen/arm: assign shared memory to owner
> when host address not provided
>
> Hi,
>
> On 15/11/2022 02:52, Penny Zheng wrote:
> > @@ -922,33 +927,82 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d,
> > d->max_pages += nr_pfns;
> >
> > smfn = maddr_to_mfn(pbase);
> > - res = acquire_domstatic_pages(d, smfn, nr_pfns, 0);
> > - if ( res )
> > + page = mfn_to_page(smfn);
> > + /*
> > + * If page is allocated from heap as static shared memory, then we just
> > + * assign it to the owner domain
> > + */
> > + if ( page->count_info == (PGC_state_inuse | PGC_static) )
> I am a bit confused how this can help differentiating, because
> PGC_state_inuse is 0. So effectively, you are checking that count_info
> is equal to PGC_static.
>
When the host address is provided, the host address range defined in
"xen,static-mem" is stored as a "struct membank" with type
MEMBANK_STATIC_DOMAIN in "bootinfo.reserved_mem", and is later
initialised as static memory through init_staticmem_pages(). So for
such a page, page->count_info is (PGC_state_free | PGC_static).
For pages allocated from the heap, the page state is different:
PGC_state_inuse. So we are really checking the page state to tell the
difference.
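To illustrate, a minimal sketch (not the patch itself; it only assumes
the PGC_* flags and the struct page_info field discussed above):

    /*
     * Sketch only: distinguishing the two provenances by page state.
     * Pages initialised via init_staticmem_pages() are free static
     * pages, while pages allocated from the heap are in use.
     */
    static bool __init is_heap_allocated_shm_page(const struct page_info *page)
    {
        /*
         * PGC_state_inuse is 0, so this comparison effectively reduces
         * to count_info == PGC_static, as you point out above.
         */
        return page->count_info == (PGC_state_inuse | PGC_static);
    }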
> But as I wrote in a previous patch, I don't think you should convert
> {xen,dom}heap pages to static pages.
>
I agree that taking references would also prevent giving these pages
back to the heap. But may I ask what your concern is with converting
{xen,dom}heap pages to static pages?
> [...]
>
> > +static int __init assign_shared_memory(struct domain *d,
> > + struct shm_membank *shm_membank,
> > + paddr_t gbase)
> > +{
> > + int ret = 0;
> > + unsigned long nr_pages, nr_borrowers;
> > + struct page_info *page;
> > + unsigned int i;
> > + struct meminfo *meminfo;
> > +
> > + /* Host address is not provided in "xen,shared-mem" */
> > + if ( shm_membank->mem.banks.meminfo )
> > + {
> > + meminfo = shm_membank->mem.banks.meminfo;
> > + for ( i = 0; i < meminfo->nr_banks; i++ )
> > + {
> > + ret = acquire_shared_memory(d,
> > + meminfo->bank[i].start,
> > + meminfo->bank[i].size,
> > + gbase);
> > + if ( ret )
> > + return ret;
> > +
> > + gbase += meminfo->bank[i].size;
> > + }
> > + }
> > + else
> > + {
> > + ret = acquire_shared_memory(d,
> > + shm_membank->mem.bank->start,
> > + shm_membank->mem.bank->size, gbase);
> > + if ( ret )
> > + return ret;
> > + }
>
> Looking at this change and...
>
> > +
> > /*
> > * Get the right amount of references per page, which is the number of
> > * borrower domains.
> > @@ -984,23 +1076,37 @@ static int __init assign_shared_memory(struct domain *d,
> > * So if the borrower is created first, it will cause adding pages
> > * in the P2M without reference.
> > */
> > - page = mfn_to_page(smfn);
> > - for ( i = 0; i < nr_pages; i++ )
> > + if ( shm_membank->mem.banks.meminfo )
> > {
> > - if ( !get_page_nr(page + i, d, nr_borrowers) )
> > + meminfo = shm_membank->mem.banks.meminfo;
> > + for ( i = 0; i < meminfo->nr_banks; i++ )
> > {
> > - printk(XENLOG_ERR
> > - "Failed to add %lu references to page %"PRI_mfn".\n",
> > - nr_borrowers, mfn_x(smfn) + i);
> > - goto fail;
> > + page = mfn_to_page(maddr_to_mfn(meminfo->bank[i].start));
> > + nr_pages = PFN_DOWN(meminfo->bank[i].size);
> > + ret = add_shared_memory_ref(d, page, nr_pages, nr_borrowers);
> > + if ( ret )
> > + goto fail;
> > }
> > }
> > + else
> > + {
> > + page = mfn_to_page(
> > + maddr_to_mfn(shm_membank->mem.bank->start));
> > + nr_pages = shm_membank->mem.bank->size >> PAGE_SHIFT;
> > + ret = add_shared_memory_ref(d, page, nr_pages, nr_borrowers);
> > + if ( ret )
> > + return ret;
> > + }
>
> ... this one. The code to deal with a bank is exactly the same. But you need
> the duplication because you special case "one bank".
>
> As I wrote in a previous patch, I don't think we should special case it.
> If the concern is memory usage, then we should look at reworking meminfo
> instead (or using a different structure).
>
A few concerns explain why I didn't choose "struct meminfo" over the
two pointers "struct membank*" and "struct meminfo*":
1) Memory usage is the main reason.
If we used "struct meminfo" instead of the current "struct membank*"
and "struct meminfo*", "struct shm_meminfo" would become an array of
256 "struct shm_membank", with "struct shm_membank" itself containing
a 256-entry array, that is, 256 * 256 entries in total. That is too big
for one structure, and if I remember correctly it leads to a "more than
PAGE_SIZE" compile error.
FWIW, either reworking meminfo or using a different structure leads to
sizing down the array, and I am not sure which size is suitable. That's
why I prefer pointers and dynamic allocation.
2) If we used "struct meminfo*" alone instead of the current pair
"struct membank*" and "struct meminfo*", we would need a new flag to
differentiate the two scenarios (host address provided or not), whereas
the special-case "struct membank*" already lets us differentiate them.
Also, when the host address is provided, the related "struct membank"
already needs to be kept in "bootinfo.reserved_mem"; that's why I use
the pointer "struct membank*", to avoid memory waste. See the sketch
below.
For the duplication, I could extract the common code to mitigate the
impact.
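Here is a rough sketch of the layout I have in mind (field names follow
the patch hunks above; take it as illustrative rather than verbatim):

    struct shm_membank {
        struct {
            /* Host address provided: points into bootinfo.reserved_mem */
            const struct membank *bank;
            struct {
                /*
                 * Host address not provided: banks allocated
                 * dynamically. A non-NULL meminfo doubles as the flag
                 * telling the two scenarios apart, as in
                 * assign_shared_memory() above.
                 */
                struct meminfo *meminfo;
            } banks;
        } mem;
    };

Only one of the two pointers is set for a given bank, so the structure
stays small while still covering both cases.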
> >
> > return 0;
>
>
> > fail:
> > while ( --i >= 0 )
> > - put_page_nr(page + i, nr_borrowers);
> > + {
> > + page = mfn_to_page(maddr_to_mfn(meminfo->bank[i].start));
> > + nr_pages = PFN_DOWN(meminfo->bank[i].size);
> > + remove_shared_memory_ref(page, nr_pages, nr_borrowers);
> > + }
> > return ret;
> > }
> >
>
> Cheers,
>
> --
> Julien Grall