Re: [Xen-devel] [PATCH 2/4] iommu: generalize iommu_inclusive_mapping
On Tue, Jul 31, 2018 at 08:39:22AM -0600, Jan Beulich wrote:
> >>> On 27.07.18 at 17:31, <roger.pau@xxxxxxxxxx> wrote:
> > Introduce a new iommu=inclusive generic option that supersedes
> > iommu_inclusive_mapping. This should be a non-functional change on
> > Intel hardware, while AMD hardware will gain the same functionality of
> > mapping almost everything below the 4GB boundary.
>
> So first of all - what's the motivation behind this change? So far we
> had no need for hacks like the VT-d side one on AMD. I don't think
> this should be widened without there being indication of a problem
> with non-niche AMD systems.
OK, I can leave the default on for Intel and off for everything else,
but I will introduce the generic dom0-iommu= option anyway.
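
For illustration only, here is a minimal standalone sketch of how a
combined dom0-iommu= option string could be split into per-feature
booleans; the sub-option names used below (map-inclusive, strict) and
the parser itself are placeholders, not the actual Xen command line
handling:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical flags the sub-options would control. */
static bool dom0_iommu_inclusive = true;   /* assumed Intel-style default */
static bool dom0_iommu_strict;

/* Parse a comma-separated value such as "map-inclusive,no-strict". */
static int parse_dom0_iommu(const char *s)
{
    int rc = 0;

    while ( *s )
    {
        const char *e = strchr(s, ',');
        size_t len = e ? (size_t)(e - s) : strlen(s);
        bool val = true;

        /* A leading "no-" negates the sub-option. */
        if ( len > 3 && !strncmp(s, "no-", 3) )
        {
            val = false;
            s += 3;
            len -= 3;
        }

        if ( len == strlen("map-inclusive") && !strncmp(s, "map-inclusive", len) )
            dom0_iommu_inclusive = val;
        else if ( len == strlen("strict") && !strncmp(s, "strict", len) )
            dom0_iommu_strict = val;
        else
            rc = -1;                        /* unknown sub-option */

        if ( !e )
            break;
        s = e + 1;
    }

    return rc;
}

int main(void)
{
    parse_dom0_iommu("map-inclusive,no-strict");
    printf("inclusive=%d strict=%d\n", dom0_iommu_inclusive, dom0_iommu_strict);
    return 0;
}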
> > --- a/xen/drivers/passthrough/x86/iommu.c
> > +++ b/xen/drivers/passthrough/x86/iommu.c
> > @@ -20,6 +20,8 @@
> > #include <xen/softirq.h>
> > #include <xsm/xsm.h>
> >
> > +#include <asm/setup.h>
>
> Why? And if it's needed here now, can it be dropped from the VT-d
> file where you removed the code?
setup.h is needed for xen_in_range. And yes, I can drop it from the
VT-d code then.
> > @@ -132,6 +134,74 @@ void arch_iommu_domain_destroy(struct domain *d)
> > {
> > }
> >
> > +void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
> > +{
> > +    unsigned long i, j, tmp, top, max_pfn;
> > +
> > +    if ( iommu_passthrough || !is_pv_domain(d) )
> > +        return;
> > +
> > +    BUG_ON(!is_hardware_domain(d));
> > +
> > +    max_pfn = (GB(4) >> PAGE_SHIFT) - 1;
> > +    top = max(max_pdx, pfn_to_pdx(max_pfn) + 1);
> > +
> > +    for ( i = 0; i < top; i++ )
> > +    {
> > +        unsigned long pfn = pdx_to_pfn(i);
> > +        bool map;
> > +        int rc = 0;
> > +
> > +        /*
> > +         * Set up 1:1 mapping for dom0. Default to include only
> > +         * conventional RAM areas and let RMRRs include needed reserved
> > +         * regions. When set, the inclusive mapping additionally maps in
> > +         * every pfn up to 4GB except those that fall in unusable ranges.
> > +         */
> > +        if ( pfn > max_pfn && !mfn_valid(_mfn(pfn)) )
> > +            continue;
> > +
> > +        if ( iommu_inclusive && pfn <= max_pfn )
> > +            map = !page_is_ram_type(pfn, RAM_TYPE_UNUSABLE);
> > +        else
> > +            map = page_is_ram_type(pfn, RAM_TYPE_CONVENTIONAL);
> > +
> > +        if ( !map )
> > +            continue;
> > +
> > +        /* Exclude Xen bits */
> > +        if ( xen_in_range(pfn) )
> > +            continue;
> > +
> > +        /*
> > +         * If dom0-strict mode is enabled then exclude conventional RAM
> > +         * and let the common code map dom0's pages.
> > +         */
> > +        if ( iommu_dom0_strict &&
> > +             page_is_ram_type(pfn, RAM_TYPE_CONVENTIONAL) )
> > +            continue;
> > +
> > +        tmp = 1 << (PAGE_SHIFT - PAGE_SHIFT_4K);
> > +        for ( j = 0; j < tmp; j++ )
> > +        {
> > +            int ret = iommu_map_page(d, pfn * tmp + j, pfn * tmp + j,
> > +                                     IOMMUF_readable|IOMMUF_writable);
> > +
> > +            if ( !rc )
> > +                rc = ret;
> > +        }
>
> The VT-d specific code was this way to also cope with ia64. I don't
> see the need for this to be a loop when the code is now x86-
> specific.
Oh, I wondered about this and TBH I couldn't figure out why it was done
this way; now I understand.
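
For illustration, a minimal sketch of what the inner loop could collapse
to on x86 (where PAGE_SHIFT == PAGE_SHIFT_4K, so tmp is always 1); this
only restates the review point and is not the final patch:

        /*
         * Sketch only: on x86 PAGE_SHIFT == PAGE_SHIFT_4K, so tmp is
         * always 1 and the inner loop degenerates to a single identity
         * mapping per pfn, using iommu_map_page() as in the quoted hunk.
         */
        rc = iommu_map_page(d, pfn, pfn, IOMMUF_readable | IOMMUF_writable);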
Thanks, Roger.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel