Re: [Xen-devel] granting access to MSI-X table and pending bit array
>>> On 07.07.10 at 16:14, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:
> On Wed, Jul 07, 2010 at 11:14:04AM +0100, Jan Beulich wrote:
>> The original implementation (c/s 17536) disallowed access to these
>> after granting access to all BAR specified resources (i.e. this was
>> almost correct, except for a small time window during which the
>> memory was accessible to the guest and except for hiding the
>> pending bit array from the guest), but this got reverted with c/s
>> Afaics this is a security problem, as CPU accesses to the granted
>> memory don't go through any IOMMU and hence there's no place
>> these could be filtered out even in a supposedly secure environment
>> (not that I think device accesses would be filtered at present, but
>> for those this would at least be possible), and such accesses could
>> inadvertently or maliciously unmask masked vectors or modify the
>> message address/data fields.
>> Imo the guest must be granted read-only access to the pending bit
>> array (instead of either full access or no access at all), with the
>> potential side effect of also making the table read-only. And I
>> would even think that this shouldn't be done in the
>> tools, but rather in Xen itself (since it knows of all the PCI devices
>> and their respective eventual MSI-X address ranges), thus at once
>> eliminating any timing windows.
> That sounds sensible. You got a patch ready?
Would be nice, but it's not that simple, as currently there's no way
to distinguish r/w MMIO from r/o MMIO.
Additionally we may need to override the guest's page table
attributes, as generally at least Linux maps I/O memory writable
no matter whether any writes will actually happen - which to me
seems an ugly hack (but I can't think of a better alternative).
Furthermore I'm not really clear on how an HVM guest's writes to
the MSI-X table get processed - clearly they can't be let through
to the actual MMIO region. I'd guess they somehow get redirected
to qemu (since pciback doesn't seem to handle this), but I don't
know this process very well...