
Re: [Xen-devel] [v10][PATCH 06/16] hvmloader/pci: Try to avoid placing BARs in RMRRs



>>> On 20.07.15 at 08:16, <tiejun.chen@xxxxxxxxx> wrote:
> --- a/tools/firmware/hvmloader/pci.c
> +++ b/tools/firmware/hvmloader/pci.c
> @@ -38,6 +38,43 @@ uint64_t pci_hi_mem_start = 0, pci_hi_mem_end = 0;
>  enum virtual_vga virtual_vga = VGA_none;
>  unsigned long igd_opregion_pgbase = 0;
>  
> +/* Check if any conflicts with all reserved device memory. */

/* Check if the specified range conflicts with any reserved device memory. */

(and the "any" could perhaps be left out)

> +static bool check_overlap_all(uint64_t start, uint64_t size)
> +{
> +    unsigned int i;
> +
> +    for ( i = 0; i < memory_map.nr_map; i++ )
> +    {
> +        if ( memory_map.map[i].type == E820_RESERVED &&
> +             check_overlap(start, size,
> +                           memory_map.map[i].addr,
> +                           memory_map.map[i].size) )
> +            return true;
> +    }
> +
> +    return false;
> +}
> +
> +/* Find the lowest RMRR higher than base. */
> +static int find_next_rmrr(uint32_t base)
> +{
> +    unsigned int i;
> +    int next_rmrr = -1;
> +    uint64_t min_base = (1ull << 32);
> +
> +    for ( i = 0; i < memory_map.nr_map ; i++ )
> +    {
> +        if ( memory_map.map[i].type == E820_RESERVED &&
> +             memory_map.map[i].addr > base &&
> +             memory_map.map[i].addr < min_base )
> +        {
> +            next_rmrr = i;
> +            min_base = memory_map.map[i].addr;
> +        }
> +    }
> +    return next_rmrr;
> +}

Considering _both_ callers, I think the function should actually return
the lowest RMRR higher than or equal to base. Or wait - we actually
need to find the lowest RMRR the _end_ of which is higher than base.

Also - blank line before final return statement please.

> @@ -299,6 +337,15 @@ void pci_setup(void)
>                      || (((pci_mem_start << 1) >> PAGE_SHIFT)
>                          >= hvm_info->low_mem_pgend)) )
>              pci_mem_start <<= 1;
> +
> +        /*
> +         * Try to accomodate RMRRs in our MMIO region on a best-effort basis.
> +         * If we have RMRRs in the range, then make pci_mem_start just after
> +         * hvm_info->low_mem_pgend.
> +         */
> +        if ( pci_mem_start > (hvm_info->low_mem_pgend << PAGE_SHIFT) &&
> +             check_overlap_all(pci_mem_start, pci_mem_end-pci_mem_start) )
> +            pci_mem_start = ((hvm_info->low_mem_pgend + 1) << PAGE_SHIFT);

Since it looks like low_mem_pgend is exclusive, is the "+ 1" here
really needed (other than to put an arbitrary one page gap
between RAM and MMIO)?

> @@ -407,6 +456,19 @@ void pci_setup(void)
>          }
>  
>          base = (resource->base  + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
> +
> +        /* If we're using mem_resource, check for RMRR conflicts. */
> +        while ( resource == &mem_resource &&
> +                next_rmrr > 0 &&

>= I think.

> +                check_overlap(base, bar_sz,
> +                              memory_map.map[next_rmrr].addr,
> +                              memory_map.map[next_rmrr].size) )
> +        {
> +            base = memory_map.map[next_rmrr].addr +
> +                   memory_map.map[next_rmrr].size;
> +            base = (base + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);

Pointless cast.

And as George said elsewhere - please make sure the whole series
applies cleanly to staging; generally committers don't do fixups while
applying.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

