
Re: [Xen-devel] [PATCH v2] x86/hvmloader: round up memory BAR size to 4K



On 20.01.2020 18:18, Roger Pau Monné wrote:
> On Mon, Jan 20, 2020 at 05:10:33PM +0100, Jan Beulich wrote:
>> On 17.01.2020 12:08, Roger Pau Monne wrote:
>>> When placing memory BARs with sizes smaller than 4K, multiple
>>> memory BARs can end up mapped to the same guest physical address,
>>> and thus won't work correctly.
>>
>> Thinking about it again, aren't you fixing one possible case by
>> breaking the opposite one? What you fix is the case where two
>> distinct BARs (of the same or different devices) map to distinct
>> MFNs, which previously would have required a single GFN to map to
>> both of those MFNs. But don't you, at the same time, break the case
>> of two BARs (perhaps, but not necessarily, of the same physical
>> device) both mapping to the same MFN, i.e. requiring two distinct
>> GFNs to map to one MFN? (At least for the moment I can't see a way
>> for hvmloader to distinguish the two cases.)
> 
> IMO we should force all BARs to be page-isolated by dom0 (since Xen
> doesn't have the knowledge of doing so), but I don't see the issue
> with having different gfns pointing to the same mfn. Is that a
> limitation of paging?

It's a limitation of the (global) M2P table.

> I think you can map a grant multiple times into
> different gfns, which achieves the same AFAICT.

One might think this would be possible, but afaict
guest_physmap_add_{page,entry}() will destroy the prior association
when/before inserting the new one. I.e. if any operation is
subsequently used which needs to consult the M2P, only the most
recently recorded GFN would be returned and hence operated on.
Whether that's a problem in practice (i.e. whether any such M2P
lookup might sensibly happen) is pretty hard to tell without going
through a lot of code, I guess.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

