
Re: [Xen-devel] [V10 PATCH 0/4] pvh dom0 patches...



On 08/05/14 12:44, Jan Beulich wrote:
>>>> On 08.05.14 at 12:27, <roger.pau@xxxxxxxxxx> wrote:
>> On 07/05/14 13:34, Jan Beulich wrote:
>>>>>> On 07.05.14 at 11:48, <roger.pau@xxxxxxxxxx> wrote:
>>>> @@ -369,6 +374,89 @@ static __init void pvh_map_all_iomem(struct domain *d)
>>>>          nump = end_pfn - start_pfn;
>>>>          pvh_add_mem_mapping(d, start_pfn, start_pfn, nump);
>>>>      }
>>>> +
>>>> +    /*
>>>> +     * Add the memory removed by the holes at the end of the
>>>> +     * memory map.
>>>> +     */
>>>> +    for ( i = 0, entry = e820.map; i < e820.nr_map; i++, entry++ )
>>>> +    {
>>>> +        if ( entry->type != E820_RAM )
>>>> +            continue;
>>>> +
>>>> +        end_pfn = PFN_UP(entry->addr + entry->size);
>>>> +        if ( end_pfn <= nr_pages )
>>>> +            continue;
>>>> +
>>>> +        navail = end_pfn - nr_pages;
>>>> +        nmap = navail > nr_holes ? nr_holes : navail;
>>>> +        start_pfn = PFN_DOWN(entry->addr) < nr_pages ?
>>>> +                        nr_pages : PFN_DOWN(entry->addr);
>>>> +        page = alloc_domheap_pages(d, get_order_from_pages(nmap), 0);
>>>> +        if ( !page )
>>>> +            panic("Not enough RAM for domain 0");
>>>> +        for ( j = 0; j < nmap; j++ )
>>>> +        {
>>>> +            rc = guest_physmap_add_page(d, start_pfn + j, page_to_mfn(page), 0);
>>>
>>> I realize that this interface isn't the most flexible one in terms of what
>>> page orders it allows to be passed in, but could you at least use order
>>> 9 here when the allocation order above is 9 or higher?
>>
>> I've changed it to be:
>>
>> guest_physmap_add_page(d, start_pfn, page_to_mfn(page), order);
>>
>> and removed the loop.
> 
> Honestly I'd be very surprised if this worked - the P2M backend
> functions generally accept only orders matching up with what the
> hardware supports. I.e. I specifically didn't suggest removing the
> loop altogether.

Sure, I've just checked and EPT doesn't support orders > 9
(EPT_TABLE_ORDER). It was working in my case only because the order of
the allocated pages was 7.
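
So I'll keep the loop and just let each call use the 2M order when
possible, something along these lines (only a rough sketch on top of the
hunk above, assuming a plain order-0 fallback for the unaligned head and
tail is good enough):

    for ( j = 0; j < nmap; )
    {
        unsigned int order = EPT_TABLE_ORDER;

        /*
         * Use a 2M mapping when the remaining chunk is big enough and
         * both the gpfn and the mfn are suitably aligned, otherwise
         * fall back to single page mappings.
         */
        if ( (1UL << order) > nmap - j ||
             ((start_pfn + j) & ((1UL << order) - 1)) ||
             (page_to_mfn(page + j) & ((1UL << order) - 1)) )
            order = 0;

        rc = guest_physmap_add_page(d, start_pfn + j,
                                    page_to_mfn(page + j), order);
        if ( rc != 0 )
            panic("Unable to add gpfn %#lx mfn %#lx order %u to p2m",
                  start_pfn + j, page_to_mfn(page + j), order);

        j += 1UL << order;
    }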

> 
>>>> +static __init void pvh_setup_e820(struct domain *d, unsigned long nr_pages)
>>>> +{
>>>> +    struct e820entry *entry;
>>>> +    unsigned int i;
>>>> +    unsigned long pages, cur_pages = 0;
>>>> +
>>>> +    /*
>>>> +     * Craft the e820 memory map for Dom0 based on the hardware e820 map.
>>>> +     */
>>>> +    d->arch.e820 = xzalloc_array(struct e820entry, e820.nr_map);
>>>> +    if ( !d->arch.e820 )
>>>> +        panic("Unable to allocate memory for Dom0 e820 map");
>>>> +
>>>> +    memcpy(d->arch.e820, e820.map, sizeof(struct e820entry) * e820.nr_map);
>>>> +    d->arch.nr_e820 = e820.nr_map;
>>>> +
>>>> +    /* Clamp e820 memory map to match the memory assigned to Dom0 */
>>>> +    for ( i = 0, entry = d->arch.e820; i < d->arch.nr_e820; i++, entry++ )
>>>> +    {
>>>> +        if ( entry->type != E820_RAM )
>>>> +            continue;
>>>> +
>>>> +        if ( nr_pages == cur_pages )
>>>> +        {
>>>> +            /*
>>>> +             * We already have all the assigned memory,
>>>> +             * mark this region as reserved.
>>>> +             */
>>>> +            entry->type = E820_RESERVED;
>>>
>>> That seems undesirable, as it will prevent Linux from using that space
>>> for MMIO purposes. Plus keeping the region here is inconsistent with
>>> simply shrinking the range below (yielding the remainder uncovered
>>> by any E820 entry). I think you ought to zap the entry here.
>>
>> I don't think these regions can be used for MMIO; the gpfns in the guest
>> p2m are not set to p2m_mmio_direct.
> 
> Not at the point you build the domain, but such a use can't and
> shouldn't be precluded once the domain is up.

OK, setting entry->size = 0 should work.
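
I.e. turning the branch quoted above into something like this (just a
sketch, keeping the rest of the clamping loop as posted):

    if ( nr_pages == cur_pages )
    {
        /*
         * We already have all the assigned memory: zap the entry
         * instead of marking it reserved, so the region stays
         * uncovered by the e820 and remains usable for other
         * purposes (e.g. MMIO) later on.
         */
        entry->size = 0;
        continue;
    }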

Roger.
