
Re: [Xen-devel] [V10 PATCH 0/4] pvh dom0 patches...



>>> On 07.05.14 at 11:48, <roger.pau@xxxxxxxxxx> wrote:
> I've expanded my patch that added the memory removed from the holes to
> the end of the memory map so that it also creates an e820 map for Dom0
> that reflects the reality of the underlying p2m. This way PVH guests
> (either DomU or Dom0) only need to use XENMEM_memory_map to get the
> correct e820 map (and there is no need for clamping in the Dom0 case).

Looks generally good (except that it doesn't seem to deal with the
DomU case yet), but there are a couple of possible issues:

> --- a/xen/arch/x86/domain_build.c
> +++ b/xen/arch/x86/domain_build.c
> @@ -327,11 +327,13 @@ static __init void pvh_add_mem_mapping(struct domain *d, unsigned long gfn,
>   * pvh fixme: The following doesn't map MMIO ranges when they sit above the
>   *            highest E820 covered address.
>   */
> -static __init void pvh_map_all_iomem(struct domain *d)
> +static __init void pvh_map_all_iomem(struct domain *d, unsigned long nr_pages)
>  {
>      unsigned long start_pfn, end_pfn, end = 0, start = 0;
>      const struct e820entry *entry;
> -    unsigned int i, nump;
> +    unsigned int i, j, nump, navail, nmap, nr_holes = 0;

With nr_pages above being unsigned long, I think some of these
variables should be too.
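
Just to illustrate what I mean (which ones exactly you convert is of
course your call):

    unsigned int i;                       /* E820 map index */
    /* Counts/offsets derived from nr_pages should match its width. */
    unsigned long j, nump, navail, nmap, nr_holes = 0;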

> @@ -369,6 +374,89 @@ static __init void pvh_map_all_iomem(struct domain *d)
>          nump = end_pfn - start_pfn;
>          pvh_add_mem_mapping(d, start_pfn, start_pfn, nump);
>      }
> +
> +    /*
> +     * Add the memory removed by the holes at the end of the
> +     * memory map.
> +     */
> +    for ( i = 0, entry = e820.map; i < e820.nr_map; i++, entry++ )
> +    {
> +        if ( entry->type != E820_RAM )
> +            continue;
> +
> +        end_pfn = PFN_UP(entry->addr + entry->size);
> +        if ( end_pfn <= nr_pages )
> +            continue;
> +
> +        navail = end_pfn - nr_pages;
> +        nmap = navail > nr_holes ? nr_holes : navail;
> +        start_pfn = PFN_DOWN(entry->addr) < nr_pages ?
> +                        nr_pages : PFN_DOWN(entry->addr);
> +        page = alloc_domheap_pages(d, get_order_from_pages(nmap), 0);
> +        if ( !page )
> +            panic("Not enough RAM for domain 0");
> +        for ( j = 0; j < nmap; j++ )
> +        {
> +            rc = guest_physmap_add_page(d, start_pfn + j, page_to_mfn(page), 0);

I realize that this interface isn't the most flexible one in terms of what
page orders it allows to be passed in, but could you at least use order
9 here when the allocation order above is 9 or higher?
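
Untested and purely illustrative (reusing the identifiers from your
hunk), but I'm thinking of something along these lines:

    unsigned int order = 0;

    for ( j = 0; j < nmap; j += 1UL << order )
    {
        /*
         * Use a 2M mapping when enough pages remain and both GFN and
         * MFN are 2M-aligned; fall back to a 4k page otherwise.
         */
        order = ( nmap - j >= (1UL << PAGE_ORDER_2M) &&
                  !((start_pfn + j) & ((1UL << PAGE_ORDER_2M) - 1)) &&
                  !((page_to_mfn(page) + j) & ((1UL << PAGE_ORDER_2M) - 1)) )
                ? PAGE_ORDER_2M : 0;
        rc = guest_physmap_add_page(d, start_pfn + j,
                                    page_to_mfn(page) + j, order);
        if ( rc )
            panic("Unable to add gpfn %#lx to Dom0 physmap", start_pfn + j);
    }

(This assumes the MFN is meant to advance together with the GFN, which
a single contiguous allocation allows for.)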

> +static __init void pvh_setup_e820(struct domain *d, unsigned long nr_pages)
> +{
> +    struct e820entry *entry;
> +    unsigned int i;
> +    unsigned long pages, cur_pages = 0;
> +
> +    /*
> +     * Craft the e820 memory map for Dom0 based on the hardware e820 map.
> +     */
> +    d->arch.e820 = xzalloc_array(struct e820entry, e820.nr_map);
> +    if ( !d->arch.e820 )
> +        panic("Unable to allocate memory for Dom0 e820 map");
> +
> +    memcpy(d->arch.e820, e820.map, sizeof(struct e820entry) * e820.nr_map);
> +    d->arch.nr_e820 = e820.nr_map;
> +
> +    /* Clamp e820 memory map to match the memory assigned to Dom0 */
> +    for ( i = 0, entry = d->arch.e820; i < d->arch.nr_e820; i++, entry++ )
> +    {
> +        if ( entry->type != E820_RAM )
> +            continue;
> +
> +        if ( nr_pages == cur_pages )
> +        {
> +            /*
> +             * We already have all the assigned memory,
> +             * mark this region as reserved.
> +             */
> +            entry->type = E820_RESERVED;

That seems undesirable, as it will prevent Linux from using that space
for MMIO purposes. Plus keeping the region here is inconsistent with
simply shrinking the range below (which leaves the remainder uncovered
by any E820 entry). I think you ought to zap the entry here instead.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
