
Re: [Xen-devel] [PATCH v9 04/14] arch/arm: unmap partially-mapped I/O-memory regions



On 07/03/2014 03:40 PM, Ian Campbell wrote:
> On Wed, 2014-07-02 at 20:42 +0200, Arianna Avanzini wrote:
>> This commit changes the function apply_p2m_changes() to unwind changes
>> performed while mapping an I/O-memory region, if errors are seen during
>> the operation. This is useful to avoid leaving I/O-memory areas
>> partially accessible to guests.
> 
> I think it won't quite unwind, since it will actually destroy whatever
> was there before we tried to put something new.
> 
> Is there anything here which is specific to I/O memory areas? It seems
> to affect all mapping types.

I think it's fine to unmap for all mapping types.

>> Signed-off-by: Arianna Avanzini <avanzini.arianna@xxxxxxxxx>
>> Cc: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
>> Cc: Paolo Valente <paolo.valente@xxxxxxxxxx>
>> Cc: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
>> Cc: Julien Grall <julien.grall@xxxxxxxxxx>
>> Cc: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
>> Cc: Jan Beulich <jbeulich@xxxxxxxx>
>> Cc: Keir Fraser <keir@xxxxxxx>
>> Cc: Tim Deegan <tim@xxxxxxx>
>> Cc: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
>> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>> Cc: Eric Trudeau <etrudeau@xxxxxxxxxxxx>
>> Cc: Viktor Kleinik <viktor.kleinik@xxxxxxxxxxxxxxx>
>>
>> ---
>>
>>     v9:
>>         - Let apply_p2m_changes() unwind its own progress instead of relying on
>>           the caller to unmap partially-mapped I/O-memory regions.
>>
>> ---
>>  xen/arch/arm/p2m.c | 16 ++++++++++++----
>>  1 file changed, 12 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 22646e9..92fc4ec 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -314,7 +314,7 @@ static int apply_p2m_changes(struct domain *d,
>>      unsigned long cur_first_page = ~0,
>>                    cur_first_offset = ~0,
>>                    cur_second_offset = ~0;
>> -    unsigned long count = 0;
>> +    unsigned long count = 0, inserted = 0;
>>      unsigned int flush = 0;
>>      bool_t populate = (op == INSERT || op == ALLOCATE);
>>      lpae_t pte;
>> @@ -328,6 +328,7 @@ static int apply_p2m_changes(struct domain *d,
>>  
>>      spin_lock(&p2m->lock);
>>  
>> +unwind:
> 
> I think I'd prefer a recursive call to the use of goto.
> 
> In particular the use of goto will result in the overall function
> appearing to succeed, won't it? Because the REMOVE will have succeeded.
> 
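To illustrate the recursive alternative, here is a minimal, self-contained
toy model (not the real apply_p2m_changes() code; apply_range(), failing_addr
and the constants are made up for illustration). The point is that the
cleanup pass reuses the same walker with REMOVE, while the original error
code is still what the caller gets back:

/* Toy model of "recurse to unwind": apply_range() stands in for the real
 * p2m walker; INSERT/REMOVE and the page-by-page loop are illustrative. */
#include <assert.h>
#include <errno.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

enum op { INSERT, REMOVE };

/* Pretend that inserting this particular page fails. */
static const unsigned long failing_addr = 3 * PAGE_SIZE;

static int apply_range(enum op op, unsigned long start, unsigned long end)
{
    unsigned long addr;

    for ( addr = start; addr < end; addr += PAGE_SIZE )
    {
        if ( op == INSERT && addr == failing_addr )
        {
            int rc = -EINVAL;

            /* Unwind by recursing with REMOVE over what was mapped so far,
             * then return the *original* error: the caller must not mistake
             * the successful REMOVE for overall success. */
            apply_range(REMOVE, start, addr);
            return rc;
        }
        /* Map (or unmap) one page here in the real code. */
    }

    return 0;
}

int main(void)
{
    int rc = apply_range(INSERT, 0, 8 * PAGE_SIZE);

    assert(rc == -EINVAL);   /* the failure is still reported */
    printf("apply_range returned %d\n", rc);
    return 0;
}

With the goto-based version, as Ian notes above, the second (REMOVE) pass
ends up on the normal success path, so the original error is lost unless it
is saved and restored explicitly.
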
>>      addr = start_gpaddr;
>>      while ( addr < end_gpaddr )
>>      {
>> @@ -338,7 +339,9 @@ static int apply_p2m_changes(struct domain *d,
>>              if ( !first )
>>              {
>>                  rc = -EINVAL;
>> -                goto out;
>> +                end_gpaddr = start_gpaddr + inserted * PAGE_SIZE;
> 
> Isn't start_gpaddr + inserted * PAGE_SIZE == addr at any given moment?
> 
> Or you could just recurse on the entire region, which is much simpler I
> think, especially when you consider that you are clearing the region
> rather than unwinding it.
> 
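Spelling out the arithmetic behind that question as a toy check (illustrative
names only, not the real loop): if each completed INSERT iteration maps one
page and bumps inserted, the current address is always
start_gpaddr + inserted * PAGE_SIZE, so the rolled-back end is just addr.

/* Toy check of the invariant: after `inserted` successful page mappings
 * starting at start_gpaddr, the walker sits at
 * start_gpaddr + inserted * PAGE_SIZE. Names are illustrative only. */
#include <assert.h>

#define PAGE_SIZE 4096UL

int main(void)
{
    unsigned long start_gpaddr = 0x40000000UL;
    unsigned long end_gpaddr   = start_gpaddr + 16 * PAGE_SIZE;
    unsigned long addr, inserted = 0;

    for ( addr = start_gpaddr; addr < end_gpaddr; addr += PAGE_SIZE )
    {
        assert(start_gpaddr + inserted * PAGE_SIZE == addr);
        inserted++;   /* one page "mapped" per iteration on the INSERT path */
    }

    return 0;
}
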
>> @@ -384,7 +389,9 @@ static int apply_p2m_changes(struct domain *d,
>>                                    flush_pt);
>>              if ( rc < 0 ) {
>>                  printk("p2m_populate_ram: L2 failed\n");
>> -                goto out;
>> +                end_gpaddr = start_gpaddr + inserted * PAGE_SIZE;
>> +                op = REMOVE;
>> +                goto unwind;
>>              }
> 
> What about the L3 ALLOCATE case? Or is your intention only to handle
> INSERT (but you partially handle ALLOCATE)?
> 
>>          }
>>  
>> @@ -441,6 +448,7 @@ static int apply_p2m_changes(struct domain *d,
>>                      pte = mfn_to_p2m_entry(maddr >> PAGE_SHIFT, mattr, t);
>>                      p2m_write_pte(&third[third_table_offset(addr)],
>>                                    pte, flush_pt);
>> +                    inserted++;
>>                  }
>>                  break;
>>              case REMOVE:
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel