
Re: [Xen-devel] [PATCH RFC 1/2] gnttab: add IOMMU entries for grant mappings on auto-translated guests



On 07/04/14 14:04, Jan Beulich wrote:
>>>> On 07.04.14 at 13:51, <roger.pau@xxxxxxxxxx> wrote:
>> On 04/04/14 17:35, Jan Beulich wrote:
>>>>>> On 04.04.14 at 16:37, <roger.pau@xxxxxxxxxx> wrote:
>>>> --- a/xen/common/grant_table.c
>>>> +++ b/xen/common/grant_table.c
>>>> @@ -749,6 +749,22 @@ __gnttab_map_grant_ref(
>>>>              goto undo_out;
>>>>          }
>>>>      }
>>>> +    else if ( need_iommu(ld) )
>>>> +    {
>>>> +        int err;
>>>> +
>>>> +        BUG_ON(!paging_mode_translate(ld));
>>>> +        err = iommu_map_page(ld, op->host_addr >> PAGE_SHIFT, frame,
>>>> +                             op->flags & GNTMAP_readonly ?
>>>> +                                IOMMUF_readable :
>>>> +                                IOMMUF_readable|IOMMUF_writable);
>>>> +        if ( err )
>>>> +        {
>>>> +            double_gt_unlock(lgt, rgt);
>>>> +            rc = GNTST_general_error;
>>>> +            goto undo_out;
>>>> +        }
>>>> +    }
>>>
>>> As much of this as possible should be folded with the if() branch.
>>> And looking at the PV code, I think it makes no sense for the
>>> conditions whether to map the page r/o or r/w to be different
>>> between PV and non-PV.
>>>
>>> Plus - wouldn't it be better to have this taken care of via
>>> create_grant_host_mapping(), by not blindly calling
>>> iommu_unmap_page() on anything other than p2m_ram_rw in
>>> ept_set_entry() and p2m_set_entry(), the more that this should
>>> already be taken care of in the iommu_hap_pt_share case.
>>
>> Thanks for the comment; it indeed makes much more sense to fix this in
>> ept_set_entry/p2m_set_entry. The following patch adds IOMMU support for
>> all page types except p2m_mmio_direct (when the HAP memory map is not
>> shared with the IOMMU):
> 
> This seems to be going a little too far - I'm not sure we want to include
> all types here: p2m_mmio_dm, p2m_ram_paging_*, p2m_ram_paged,
> and p2m_ram_broken all might require different treatment. Please do
> this via some sort of switch() setting the permissions instead, calling
> iommu_unmap_page() when the permissions remain zero.

Right, I was looking at the wrong p2m.h header (the ARM one), which has 
a more limited set of p2m types.

I'm not sure whether the type p2m_ram_shared should be given an IOMMU 
read-only entry; that's what I've done in the patch below.

Also, there's a comment at the top of p2m.h that worries me:

/*
 * AMD IOMMU: When we share p2m table with iommu, bit 52 -bit 58 in pte 
 * cannot be non-zero, otherwise, hardware generates io page faults when 
 * device access those pages. Therefore, p2m_ram_rw has to be defined as 0.
 */

Does this mean that on AMD, if the page tables are shared between HAP and 
the IOMMU, the only page type accessible from the IOMMU would be 
p2m_ram_rw? If so, that should also be fixed (probably by setting 
iommu_hap_pt_share = 0 on AMD hardware).
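
Something along these lines is what I have in mind (only a rough sketch; 
the exact hook point in the AMD IOMMU setup path is hypothetical, and 
iommu_hap_pt_share is the existing global):

    /*
     * Rough sketch: keep the IOMMU page tables separate from the p2m on
     * AMD hardware, so that p2m types other than p2m_ram_rw (which set
     * bits 52-58 in the pte) remain accessible from devices.
     */
    if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
        iommu_hap_pt_share = 0;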

---
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index beb7288..993d11a 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -475,18 +475,37 @@ out:
             iommu_pte_flush(d, gfn, (u64*)ept_entry, order, vtd_pte_present);
         else
         {
-            if ( p2mt == p2m_ram_rw )
+            unsigned int flags;
+
+            switch ( p2mt )
+            {
+            case p2m_ram_rw:
+            case p2m_grant_map_rw:
+            case p2m_map_foreign:
+                flags = IOMMUF_readable | IOMMUF_writable;
+                break;
+            case p2m_ram_ro:
+            case p2m_grant_map_ro:
+            case p2m_ram_shared:
+                flags = IOMMUF_readable;
+                break;
+            default:
+                flags = 0;
+                break;
+            }
+
+            if ( flags != 0 )
             {
                 if ( order > 0 )
                 {
                     for ( i = 0; i < (1 << order); i++ )
                         iommu_map_page(
                             p2m->domain, gfn - offset + i, mfn_x(mfn) - offset + i,
-                            IOMMUF_readable | IOMMUF_writable);
+                            flags);
                 }
                 else if ( !order )
                     iommu_map_page(
-                        p2m->domain, gfn, mfn_x(mfn), IOMMUF_readable | IOMMUF_writable);
+                        p2m->domain, gfn, mfn_x(mfn), flags);
             }
             else
             {
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 766fd67..33addf9 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -450,10 +450,28 @@ p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
         }
         else
         {
-            if ( p2mt == p2m_ram_rw )
+            unsigned int flags;
+
+            switch ( p2mt )
+            {
+            case p2m_ram_rw:
+            case p2m_grant_map_rw:
+            case p2m_map_foreign:
+                flags = IOMMUF_readable | IOMMUF_writable;
+                break;
+            case p2m_ram_ro:
+            case p2m_grant_map_ro:
+            case p2m_ram_shared:
+                flags = IOMMUF_readable;
+                break;
+            default:
+                flags = 0;
+                break;
+            }
+
+            if ( flags != 0 )
                 for ( i = 0; i < (1UL << page_order); i++ )
-                    iommu_map_page(p2m->domain, gfn+i, mfn_x(mfn)+i,
-                                   IOMMUF_readable|IOMMUF_writable);
+                    iommu_map_page(p2m->domain, gfn+i, mfn_x(mfn)+i, flags);
             else
                 for ( int i = 0; i < (1UL << page_order); i++ )
                     iommu_unmap_page(p2m->domain, gfn+i);


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

