
Re: [Xen-devel] [RFC][PATCH 4/5] tools:firmware:hvmloader: reserve RMRR mappings in e820

On 2014/8/15 7:11, Tian, Kevin wrote:
From: Chen, Tiejun
Sent: Wednesday, August 13, 2014 8:03 PM

On 2014/8/14 3:10, Tian, Kevin wrote:
From: Chen, Tiejun
Sent: Tuesday, August 12, 2014 5:57 PM

On 2014/8/12 20:25, Jan Beulich wrote:
On 12.08.14 at 12:59, <tiejun.chen@xxxxxxxxx> wrote:
On 2014/8/12 0:00, Tian, Kevin wrote:
From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
Sent: Sunday, August 10, 2014 11:53 PM
On 08.08.14 at 23:47, <kevin.tian@xxxxxxxxx> wrote:
strictly speaking, besides reserving in the e820 you should also steer later
MMIO BAR allocations to avoid conflicts too. Currently allocation is relative
to low_mem_pgend, which is likely to differ from the host layout,
so it's still possible to see a virtual MMIO BAR base conflicting with the
RMRR ranges, which are supposed to be sparse.

Correct. And what's worse: Possible collisions between RMRRs and
the BIOS we place into the VM need to be taken care of, which may
turn out rather tricky.

right, that becomes tricky. We can provide another hypercall to allow a
VM to tell Xen which RMRRs can't be reserved due to a conflict with the guest
BIOS or other hvmloader allocations (if the conflict can't be resolved).

If Xen detects that a device owning the RMRR is already assigned to the VM,
then fail the hypercall and have hvmloader just panic with information
indicating the conflict.

Otherwise Xen records the information, and future dynamic device
assignment (e.g. hotplug) will fail if the associated RMRR is in
the conflict list.

    From my point of view it's becoming overcomplicated.

In the HVM case, theoretically any device involving a RMRR may be assigned to
any given VM. So it may not be necessary to introduce such a complex
mechanism. Therefore, I think we can simply reserve all RMRR maps in the
e820, and check whether MMIO overlaps a RMRR for every VM. That should
be acceptable.
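The overlap check being discussed reduces to an interval test between a candidate MMIO region and each RMRR entry. A minimal sketch follows; the rmrr_range struct and function names are assumptions for illustration, not actual Xen/hvmloader definitions:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative only: a reserved memory region reported by the IOMMU,
 * as a half-open interval [base, end). */
struct rmrr_range {
    uint64_t base;  /* inclusive start */
    uint64_t end;   /* exclusive end */
};

/* Two half-open intervals overlap iff each starts before the other ends. */
static bool ranges_overlap(uint64_t a_base, uint64_t a_end,
                           uint64_t b_base, uint64_t b_end)
{
    return a_base < b_end && b_base < a_end;
}

/* Check a candidate MMIO region [mmio_base, mmio_base + mmio_size)
 * against every RMRR entry. */
static bool mmio_conflicts_rmrr(uint64_t mmio_base, uint64_t mmio_size,
                                const struct rmrr_range *rmrr,
                                unsigned int nr)
{
    for (unsigned int i = 0; i < nr; i++)
        if (ranges_overlap(mmio_base, mmio_base + mmio_size,
                           rmrr[i].base, rmrr[i].end))
            return true;
    return false;
}
```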

Then you didn't understand what Kevin and I said above. Just

I have to admit I'm poor in this coverage.

keep in mind that the RMRRs can conflict not just with MMIO
ranges inside the guest, but also RAM ranges (which include, as
mentioned above, the range where the BIOS for the guest gets placed).

So just to clarify, as a summary there are four ranges we should be concerned with:

#1 MMIO in guest

In my patch [RFC][v2][PATCH 5/6] tools:libxc: check if mmio BAR is out
of RMRR mappings,

I will check whether these overlap.

hvmloader controls actual mmio BAR allocation, so it's important to have

I guess you're saying pci_setup().

After setup_guest(), in pci_setup() we will reallocate MMIO and RAM if
necessary and possible. Then all the final info is reflected into the guest e820.

check there. And your patch treats the whole MMIO as one big region
when checking overlap with the RMRRs, which is too coarse-grained. Better to

But it's easy and feasible.

check overlap every time an allocation, either of memory ranges or
MMIO ranges, actually happens.

What is your policy to handle a conflict?

I mean those RMRR mapping entries are undetermined, and often they are not
contiguous. For example, IGD needs two entries on my current BDW:

#1 ab805000 ~ ab819000
#2 ad000000 ~ af800000

So if just one of them conflicts with something, how do we handle such a case?
Push the MMIO out of the RMRR? Or allow multiple MMIO holes? As you know, IGD
can't work as long as either of the two entries overlaps.

So I think it may not be necessary to handle this with such a complicated mechanism.

  From my point of view it's enough to double check the RMRRs in the guest
e820, since we just do a check rather than check-and-fix. If any overlap
occurs we will post a WARNING/ERROR to notify the user, then let the user
decide what to do next. If they know they don't need any PCI passthrough,
it's fine. And in particular, RMRRs should actually be rare.
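The check-only approach described here could be sketched as a pass over the guest e820 that only warns on conflicts and never tries to relocate anything. The types and names below (e820entry, E820_RAM, check_rmrr_against_e820) mirror common e820 conventions but are assumptions for this example, not the exact hvmloader definitions:

```c
#include <stdio.h>
#include <stdint.h>

#define E820_RAM 1

/* Simplified e820 entry for illustration. */
struct e820entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
};

/* Walk the guest e820 and warn whenever a RAM entry overlaps a RMRR,
 * given as [base, end) pairs.  Returns the number of conflicts found;
 * nothing is fixed up, matching the check-only policy. */
static unsigned int check_rmrr_against_e820(
    const struct e820entry *e820, unsigned int nr_e820,
    const uint64_t (*rmrr)[2], unsigned int nr_rmrr)
{
    unsigned int conflicts = 0;

    for (unsigned int i = 0; i < nr_e820; i++) {
        if (e820[i].type != E820_RAM)
            continue;
        uint64_t s = e820[i].addr, e = s + e820[i].size;
        for (unsigned int j = 0; j < nr_rmrr; j++) {
            if (s < rmrr[j][1] && rmrr[j][0] < e) {
                fprintf(stderr,
                        "WARNING: RMRR [%#llx, %#llx) overlaps guest RAM "
                        "[%#llx, %#llx); passthrough of the owning device "
                        "will not work\n",
                        (unsigned long long)rmrr[j][0],
                        (unsigned long long)rmrr[j][1],
                        (unsigned long long)s, (unsigned long long)e);
                conflicts++;
            }
        }
    }
    return conflicts;
}
```

A caller would run this once after the final guest memory layout is known, and simply surface the count to the user.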

I'm OK to do check-only w/o check-and-fix; at least it's a step forward
toward being fail-safe.

Thanks a lot.

BTW, it looks like Xen has many known problem areas, even bugs, that we need to clean up or improve, right? So why don't we have an explicit plan to push this step by step? I mean, at least we can document these somewhere, like this:

#1 Notice some known problems
#2 Any useful discussion
#3 Workaround if possible
#4 Next step or plan

It's convenient to track them. Once someone meets this again, they can find enough information to know how to deal with it.



Xen-devel mailing list


