
Re: [Xen-devel] [v7][RFC][PATCH 06/13] hvmloader/ram: check if guest memory is out of reserved device memory maps



On 2014/11/12 17:02, Jan Beulich wrote:
> On 12.11.14 at 09:45, <tiejun.chen@xxxxxxxxx> wrote:
>> #2: a flags field for each device in the new domctl would control
>> whether that device needs to check/reserve its own RMRR range. But it's
>> not dependent on the current device assignment domctl, so the user can
>> use it separately to control which devices need to work as hotplug later.

> And this could be left as a second step, in order for what needs to
> be done now not to get more complicated than necessary.
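
Just to make #2 concrete as that second step, I have something along
these lines in mind (the structure and flag names below are illustrative
only, not from the actual patches):

    /* Hypothetical domctl payload: one entry per device whose RMRR
     * handling the toolstack wants to control individually. */
    struct xen_domctl_rdm_device {
        uint32_t seg;       /* PCI segment */
        uint8_t  bus;       /* PCI bus */
        uint8_t  devfn;     /* PCI device/function */
        uint8_t  pad;
        uint32_t flags;     /* per-device policy bits */
    /* Check/reserve this device's RMRR range when populating memory. */
    #define XEN_DOMCTL_RDM_RESERVE (1u << 0)
    };

Such an entry would be independent of the existing assign-device domctl,
so a device could be flagged for hole punching before it is hotplugged.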


>> Do you mean we currently still rely on the device assignment domctl to
>> provide the SBDF? Then it looks like nothing should change in our policy.

> I can't connect your question to what I said. What I tried to tell you

Then I'm misunderstanding something here.

> was that I don't currently see a need to make this overly complicated:
> Having the option to punch holes for all devices and (by default)
> dealing with just the devices assigned at boot may be sufficient as a
> first step. Yet (repeating just to avoid any misunderstanding) that
> makes things easier only if we decide to require device assignment to
> happen before memory gets populated (since in that case there's

What do you mean here by 'if we decide to require device assignment to happen before memory gets populated'?

Because, to quote myself:

"At present, device assignment always happens after memory population. And
as I also mentioned previously, I double-checked this sequence with printk."

Or do you already plan, or have you decided, to change this sequence?

Thanks
Tiejun

> no need for a new domctl to communicate SBDFs, as devices needing
> holes will be known to the hypervisor already).

> Jan
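
(For context, the check this patch adds in hvmloader is of the following
shape; the names here are illustrative, not the actual code, and the
fixed-width types stand in for hvmloader's own:

    /* Does a RAM range overlap any reserved device memory entry? */
    struct rdm_entry {
        uint64_t start_pfn;
        uint64_t nr_pages;
    };

    static int ram_overlaps_rdm(uint64_t ram_start, uint64_t ram_pages,
                                const struct rdm_entry *map,
                                unsigned int nr_entries)
    {
        unsigned int i;

        for ( i = 0; i < nr_entries; i++ )
        {
            uint64_t rdm_end = map[i].start_pfn + map[i].nr_pages;
            uint64_t ram_end = ram_start + ram_pages;

            /* Ranges overlap unless one ends before the other starts. */
            if ( ram_start < rdm_end && map[i].start_pfn < ram_end )
                return 1;
        }
        return 0;
    }

Any RAM page for which this returns 1 needs a hole punched, or the
population relocated, before the device owning that RMRR is assigned.)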




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

