
Re: [Xen-devel] [PATCH v5 09/16] x86/hvm: limit reps to avoid the need to handle retry



On 02/07/15 18:14, Paul Durrant wrote:
>> -----Original Message-----
>> From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
>> Sent: 02 July 2015 18:11
>> To: Paul Durrant; xen-devel@xxxxxxxxxxxxxxxxxxxx
>> Cc: Keir (Xen.org); Jan Beulich
>> Subject: Re: [PATCH v5 09/16] x86/hvm: limit reps to avoid the need to
>> handle retry
>>
>> On 30/06/15 14:05, Paul Durrant wrote:
>>> @@ -235,7 +219,7 @@ static int hvmemul_do_io_buffer(
>>>
>>>      BUG_ON(buffer == NULL);
>>>
>>> -    rc = hvmemul_do_io(is_mmio, addr, reps, size, dir, df, 0,
>>> +    rc = hvmemul_do_io(is_mmio, addr, *reps, size, dir, df, 0,
>>>                         (uintptr_t)buffer);
>>>      if ( rc == X86EMUL_UNHANDLEABLE && dir == IOREQ_READ )
>>>          memset(buffer, 0xff, size);
>>> @@ -287,17 +271,53 @@ static int hvmemul_do_io_addr(
>>>      bool_t is_mmio, paddr_t addr, unsigned long *reps,
>>>      unsigned int size, uint8_t dir, bool_t df, paddr_t ram_gpa)
>>>  {
>>> -    struct page_info *ram_page;
>>> +    struct vcpu *v = current;
>> curr.
>>
>>> +    unsigned long ram_gmfn = paddr_to_pfn(ram_gpa);
>> ram_gfn.
>>
>>> +    unsigned int page_off = ram_gpa & (PAGE_SIZE - 1);
>> offset and ~PAGE_MASK.
>>
>>> +    struct page_info *ram_page[2];
>>> +    int nr_pages = 0;
>> unsigned int.
>>
>>> +    unsigned long count;
>>>      int rc;
>>>
>>> -    rc = hvmemul_acquire_page(paddr_to_pfn(ram_gpa), &ram_page);
>>> +    rc = hvmemul_acquire_page(ram_gmfn, &ram_page[nr_pages]);
>>>      if ( rc != X86EMUL_OKAY )
>>> -        return rc;
>>> +        goto out;
>>>
>>> -    rc = hvmemul_do_io(is_mmio, addr, reps, size, dir, df, 1,
>>> +    nr_pages++;
>>> +
>>> +    /* Determine how many reps will fit within this page */
>>> +    count = min_t(unsigned long,
>>> +                  *reps,
>>> +                  df ?
>>> +                  (page_off + size - 1) / size :
>>> +                  (PAGE_SIZE - page_off) / size);
>>> +
>>> +    if ( count == 0 )
>>> +    {
>>> +        /*
>>> +         * This access must span two pages, so grab a reference to
>>> +         * the next page and do a single rep.
>>> +         */
>>> +        rc = hvmemul_acquire_page(df ? ram_gmfn - 1 : ram_gmfn + 1,
>>> +                                  &ram_page[nr_pages]);
>> All guest-based ways to trigger an IO spanning a page boundary will be
>> based on linear address.  If a guest has paging enabled, this movement
>> to an adjacent physical is not valid.  A new pagetable walk will be
>> required to determine the correct second page.
> I don't think that is true. hvmemul_linear_to_phys() will break at 
> non-contiguous boundaries.

Hmm - it looks like it will bail with X86EMUL_UNHANDLEABLE on a
straddled access across a non-contiguous boundary.  In which case a
comment confirming the safety of the +/- 1 would be useful to the next
person who follows the same track of logic as I did.
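
Something along the following lines might do, placed just above the
second hvmemul_acquire_page() call (wording purely illustrative):

    /*
     * hvmemul_linear_to_phys() fails any straddled access whose two
     * halves are not physically contiguous, so if we reach this
     * point the adjacent frame (ram_gmfn +/- 1, depending on df) is
     * known to hold the remainder of the access.
     */
    rc = hvmemul_acquire_page(df ? ram_gmfn - 1 : ram_gmfn + 1,
                              &ram_page[nr_pages]);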

~Andrew
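
P.S. The rep-limiting arithmetic in the hunk above is easy to
sanity-check in isolation.  A minimal userspace sketch (the
reps_in_page() helper is hypothetical and only mirrors the quoted
min_t() expression; constants assume x86's 4k pages):

    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    /*
     * How many reps of 'size' bytes fit without crossing a page
     * boundary, starting at byte 'page_off' within the page and
     * moving forwards (df == 0) or backwards (df == 1)?
     */
    static unsigned long reps_in_page(unsigned long reps,
                                      unsigned long page_off,
                                      unsigned int size, int df)
    {
        unsigned long fit = df ? (page_off + size - 1) / size
                               : (PAGE_SIZE - page_off) / size;
        return reps < fit ? reps : fit;
    }

    int main(void)
    {
        /* 4-byte reps at offset 4092, forwards: exactly one fits. */
        printf("%lu\n", reps_in_page(8, 4092, 4, 0));   /* 1 */
        /* At offset 4094 the first rep already straddles the page:
         * count == 0, the case that acquires the second page. */
        printf("%lu\n", reps_in_page(8, 4094, 4, 0));   /* 0 */
        return 0;
    }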
