
Re: [Xen-devel] [PATCH] Don't track all memory when enabling log dirty to track vram



On Mon, May 19, 2014 at 8:48 AM, Zhang, Yang Z <yang.z.zhang@xxxxxxxxx> wrote:
> Tim Deegan wrote on 2014-02-14:
>> At 15:55 +0000 on 13 Feb (1392303343), Jan Beulich wrote:
>>>>>> On 13.02.14 at 16:46, George Dunlap <george.dunlap@xxxxxxxxxxxxx> wrote:
>>>> On 02/12/2014 12:53 AM, Zhang, Yang Z wrote:
>>>>> George Dunlap wrote on 2014-02-11:
>>>>>> I think I got a bit distracted with the "A isn't really so bad" thing.
>>>>>> Actually, if the overhead of not sharing tables isn't very high,
>>>>>> then B isn't such a bad option.  In fact, B is what I expected
>>>>>> Yang to submit when he originally described the problem.
>>>>> Actually, the first solution that came to my mind was B. Then I
>>>>> realized that even if we chose B, we still could not track memory
>>>>> updates from DMA (even with the A/D bits it is still a problem).
>>>>> Also, considering the current use of log dirty in Xen (only VRAM
>>>>> tracking has the problem), I thought A was better: the hypervisor
>>>>> only needs to track changes to the VRAM range, and if a malicious
>>>>> guest tries to DMA into that range, it only crashes itself (which
>>>>> should be reasonable).
>>>>>
>>>>>> I was going to say, from a release perspective, B is probably
>>>>>> the safest option for now.  But on the other hand, if we've been
>>>>>> testing sharing all this time, maybe switching back over to
>>>>>> non-sharing whole-hog has the higher risk?
>>>>> Another problem with B is that the current VT-d large page support
>>>>> relies on sharing the EPT and VT-d page tables. This means that if
>>>>> we choose B, VT-d large page support would have to be re-implemented
>>>>> for the separate tables; otherwise it would be a huge performance
>>>>> impact for Xen 4.4 when using VT-d.
>>>>
>>>> OK -- if that's the case, then it definitely tips the balance back
>>>> to A.  Unless Tim or Jan disagrees, can one of you two check it in?
>>>>
>>>> Don't rush your judgement; but it would be nice to have this in
>>>> before RC4, which would mean checking it in today preferably, or
>>>> early tomorrow at the latest.
>>>
>>> That would be Tim then, as he would have to approve of it anyway.
>>
>> Done.
>>
>>> I should also say that while I certainly understand the
>>> argumentation above, I would still want to go this route only with
>>> the promise that B is going to be worked on reasonably soon after
>>> the release, ideally with the goal of backporting the changes for 4.4.1.
>>
>> Agreed.
>>
>> Tim.
>
> Hi all
>
> Sorry to dig up this old thread, but I just noticed that someone is asking
> when Intel will implement the separate VT-d page table. Actually, I was
> totally unaware of it. The original issue this patch tries to fix is VRAM
> tracking, which uses the global log-dirty mode, and I thought the best
> place to fix it was on the VRAM side, not the VT-d side, because even with
> a separate VT-d page table we still cannot track memory updates from DMA.
> Even worse, I think two page tables introduce redundant code and
> maintenance effort. So I wonder: is it really necessary to implement
> separate VT-d page tables?

Yes, it does introduce redundant code.  But unfortunately, IOMMU
faults at the moment have to be considered rather risky; having one
happen risks (in order of decreasing probability / increasing
damage):
* Device stops working for that VM until an FLR (losing a lot of its state)
* The VM has to be killed
* The device stops working until a host reboot
* The host crashes

Avoiding these by "hoping" that the guest OS doesn't DMA into a video
buffer isn't really robust enough.  I think that was Tim and Jan's
primary reason for wanting the ability to have separate tables for HAP
and IOMMU.

Is that about right, Jan / Tim?
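
To make the trade-off concrete for anyone joining the thread late, here is
a toy model of why marking only the VRAM range matters while the EPT and
IOMMU tables are shared.  This is deliberately not Xen code -- the names
below (enable_vram_tracking(), dma_write_allowed(), the gfn constants) are
invented purely for illustration:

/* Toy model only: each guest frame number (gfn) has a p2m type, and, as
 * with shared EPT/IOMMU tables, a log-dirty (read-only) type also blocks
 * DMA writes to that frame. */
#include <stdbool.h>
#include <stdio.h>

enum p2m_type { RAM_RW, RAM_LOGDIRTY };

#define NR_GFNS        1024u
#define VRAM_START_GFN  768u   /* illustrative VRAM placement */
#define VRAM_END_GFN    832u

static enum p2m_type p2m[NR_GFNS];

/* Option A (what the patch does): only the VRAM range is switched to
 * log-dirty, so DMA to ordinary RAM keeps working; DMA into the VRAM
 * range faults and only harms the guest that issued it. */
static void enable_vram_tracking(void)
{
    for (unsigned int gfn = VRAM_START_GFN; gfn < VRAM_END_GFN; gfn++)
        p2m[gfn] = RAM_LOGDIRTY;
}

/* What the patch avoids: flipping every frame to log-dirty while the
 * IOMMU shares the tables, which would make *all* DMA writes fault. */
static void enable_global_tracking(void)
{
    for (unsigned int gfn = 0; gfn < NR_GFNS; gfn++)
        p2m[gfn] = RAM_LOGDIRTY;
}

static bool dma_write_allowed(unsigned int gfn)
{
    return p2m[gfn] == RAM_RW;   /* shared tables: log-dirty blocks DMA */
}

int main(void)
{
    enable_vram_tracking();
    printf("DMA to gfn 10 allowed?  %d\n", dma_write_allowed(10));   /* 1 */
    printf("DMA to gfn 800 allowed? %d\n", dma_write_allowed(800));  /* 0 */

    (void)enable_global_tracking;   /* shown only for contrast */
    return 0;
}

Option B amounts to dropping the "shared tables" assumption inside
dma_write_allowed(), at the cost of a second set of tables to build and
keep in sync (and, today, losing VT-d superpages).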

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

