Re: [Xen-devel] [PATCH v2] x86/hvmloader: round up memory BAR size to 4K
On Mon, Jan 20, 2020 at 12:18 PM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
>
> On Mon, Jan 20, 2020 at 05:10:33PM +0100, Jan Beulich wrote:
> > On 17.01.2020 12:08, Roger Pau Monne wrote:
> > > When placing memory BARs with sizes smaller than 4K multiple memory
> > > BARs can end up mapped to the same guest physical address, and thus
> > > won't work correctly.
> >
> > Thinking about it again, aren't you fixing one possible case by
> > breaking the opposite one: What you fix is when the two distinct
> > BARs (of the same or different devices) map to distinct MFNs
> > (which would have required a single GFN to map to both of these
> > MFNs). But don't you, at the same time, break the case of two
> > BARs (perhaps, but not necessarily, of the same physical device)
> > mapping both to the same MFN, i.e. requiring to have two distinct
> > GFNs map to one MFN? (At least for the moment I can't see a way
> > for hvmloader to distinguish the two cases.)
>
> IMO we should force all BARs to be page-isolated by dom0 (since Xen
> doesn't have the knowledge of doing so), but I don't see the issue
> with having different gfns pointing to the same mfn. Is that a
> limitation of paging? I think you can map a grant multiple times into
> different gfns, which achieves the same AFAICT.

BARs on a shared MFN would be a problem, since the second BAR would be at
an offset into the page, while the guest's view of that BAR would be at
offset 0 of its page.

But I agree with Roger that we basically need page alignment for all
pass-through memory BARs.

With my limited test hardware, all the PCI memory BARs are page aligned
in dom0, so it was only the guest addresses that needed alignment.

Regards,
Jason
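For illustration, a minimal sketch of the rounding being discussed, assuming a hypothetical round_up_to_page() helper and a simple bump allocator with made-up addresses, not hvmloader's actual BAR placement code:

/*
 * Illustrative sketch only (not hvmloader code): round each memory
 * BAR's size up to 4K before placing it, so no two BARs can end up
 * sharing a single guest physical page.  The helper name, the bump
 * allocator and the example addresses are all hypothetical.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096ULL

/* Round a BAR size up to the next multiple of PAGE_SIZE. */
static uint64_t round_up_to_page(uint64_t bar_sz)
{
    return (bar_sz + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
}

int main(void)
{
    /* Two hypothetical memory BARs smaller than a page. */
    uint64_t bar_sz[] = { 0x100, 0x80 };
    uint64_t base = 0xf0000000ULL;      /* hypothetical MMIO hole start */
    unsigned int i;

    for ( i = 0; i < 2; i++ )
    {
        uint64_t sz = round_up_to_page(bar_sz[i]);

        /*
         * Without the rounding both BARs would be placed inside the
         * same 4K page; with it each BAR starts on its own page.
         */
        printf("BAR%u: size %#llx placed at %#llx\n",
               i, (unsigned long long)bar_sz[i],
               (unsigned long long)base);
        base += sz;
    }

    return 0;
}

With the example sizes above, the two BARs land at 0xf0000000 and 0xf0001000 rather than both falling inside the first page.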