
Re: [Xen-devel] [RFC] Dom0 PV IOMMU control design (draft A)



>>> On 14.04.14 at 18:55, <malcolm.crossley@xxxxxxxxxx> wrote:
> On 14/04/14 17:00, Jan Beulich wrote:
>>>>> On 14.04.14 at 17:48, <malcolm.crossley@xxxxxxxxxx> wrote:
>>> The current idea for maximum performance is to create a 1:1 mapping for
>>> dom0 GPFN's and an offset (offset by max dom0 GPFN) 1:1 mapping for the
>>> whole of the MFN address space which will be used for grant mapped
>>> pages. This should result in IOMMU updates only for ballooned pages.
>> 
>> That's not fully scalable: You can't cope with systems making use of
>> the full physical address space (e.g. by sparsely allocating where
>> each node's memory goes). HP has been having such systems for a
>> couple of years.
>> 
> Good point, I was not aware that there were systems which use the top of
> the physical address space.
> 
> I propose that Domain 0 can detect the sparse holes via the E820 address
> map and implement a simple form of address space compression. As long as
> the amount of physical memory in the system is not more than half of the
> physical memory address space then both a GPFN and "grant map 1:MFN" BFN
> mapping should be able to fit.

In the hopes that e.g. no system populates more than half of its
physical address space - not a very nice assumption. Plus when
determining holes here, I'm afraid the E820 map isn't sufficient: You'd
also need to consider MMIO regions, and other special firmware
(e.g. ACPI) and hardware (e.g. HyperTransport) ranges.
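
[Editorial sketch] The compression check being discussed can be sketched as below. The function and struct names are illustrative, not from any proposed patch, and the range list is deliberately simplified: as Jan points out, a real implementation would have to fold in MMIO, ACPI and HyperTransport ranges alongside the E820 RAM entries.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative "populated range" entry. A real implementation would
 * merge E820 RAM entries with MMIO, ACPI and HyperTransport ranges. */
struct addr_range {
    uint64_t start;
    uint64_t len;
};

/* Return the base address for the "grant map 1:MFN" BFN alias region,
 * or 0 if the populated ranges occupy more than half of the physical
 * address space, in which case the simple dual-mapping scheme (GPFN
 * 1:1 plus an offset MFN 1:1 alias) cannot fit. */
static uint64_t pick_bfn_offset(const struct addr_range *r, int n,
                                uint64_t addr_space_size)
{
    uint64_t populated = 0, top = 0;
    int i;

    for (i = 0; i < n; i++) {
        populated += r[i].len;
        if (r[i].start + r[i].len > top)
            top = r[i].start + r[i].len;
    }

    /* Both the GPFN 1:1 region and the MFN alias must fit. */
    if (populated > addr_space_size / 2)
        return 0;

    /* Place the MFN alias above the highest populated address; with
     * address space compression it could instead reuse the holes. */
    return top;
}
```

With, say, 1GiB of RAM at 0 and 1GiB at 3GiB in a 16GiB address space, this places the alias region at 4GiB; a single 10GiB range in the same space fails the half-the-address-space test.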

>>>>> IOMMUOP_unmap_page
>>>>> ------------------
>>>>> First argument, pointer to array of `struct iommu_map_op`
>>>>> Second argument, integer count of `struct iommu_map_op` elements in array
>>>>>
>>>>> This subop will attempt to unmap each element in the `struct 
>>>>> iommu_map_op` array
>>>>> and record the mapping status back into the array itself. If an 
>>>> unmapping?
>>>
>>> Agreed
>>>>
>>>>> unmapping fault
>>>> mapping?
>>>
>>> This is an unmap hypercall so I believe it would be an unmapping fault
>>> reported.
>> 
>> No - an individual unmap operation failing should result in the
>> status to be set to -EFAULT (or whatever). The only time you'd
>> fail the entire hypercall with -EFAULT would be when you can't
>> read an interface structure array element. Hence if anything this
>> would be a mapping fault (albeit I didn't like the term in the
>> IOMMU_map_page description either).
>> 
> 
> OK, I agree. Would it be a cleaner hypercall interface if the hypercall
> returned 0 if all ops succeeded, or a positive count of the number of
> failed ops? The caller would then need to scan the status entries of the
> array to find the failed ops. Failed ops should hopefully be rare, so
> the overhead of scanning the array should be acceptable.

Yes, that's reasonable (as would be the alternative used by some
memory ops, returning the number of succeeded slots).
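
[Editorial sketch] The agreed convention can be mocked up as follows. The struct layout and field names are hypothetical placeholders for the eventual ABI; the point is the error-handling shape: each element gets a per-op status recorded in place, the call returns the count of failures (0 on full success), and the hypercall itself would only return -EFAULT when an array element cannot be read at all.

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Hypothetical layout; field names are illustrative, not a final ABI. */
struct iommu_map_op {
    uint64_t bfn;
    uint64_t mfn;
    uint32_t flags;
    int32_t  status;   /* per-element result, e.g. -EFAULT */
};

/* Mock of the agreed convention: attempt each op, record its status in
 * the array element itself, and return the number of failed ops. */
static int iommuop_unmap(struct iommu_map_op *ops, unsigned int count,
                         int (*do_unmap)(struct iommu_map_op *))
{
    unsigned int i, failed = 0;

    for (i = 0; i < count; i++) {
        ops[i].status = do_unmap(&ops[i]);
        if (ops[i].status)
            failed++;
    }
    return failed;
}

/* Stand-in unmap backend for the sketch: (arbitrarily) fail for BFN 0. */
static int unmap_fails_on_zero(struct iommu_map_op *op)
{
    return op->bfn == 0 ? -EFAULT : 0;
}
```

On a positive return, the caller scans `ops[i].status` for the rare failed entries; on 0 it can skip the scan entirely.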

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
