Re: [Xen-devel] [PATCH v4 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server.
On 6/21/2016 4:22 PM, Jan Beulich wrote:
>>>> On 21.06.16 at 09:45, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>> On 6/20/2016 9:38 PM, Jan Beulich wrote:
>>>>>> On 20.06.16 at 14:06, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>>>> However, if live migration is started (all pte entries invalidated
>>>> again), resolve_misconfig() would change both gfn A's and gfn B's
>>>> p2m type back to p2m_ram_rw, which means the emulation of gfn B
>>>> would fail.
>>> Why would it? Changes to p2m_ram_logdirty won't alter
>>> p2m_ioreq_server entries, and hence changes from it back to
>>> p2m_ram_rw won't either.
>> Oh, the above example is based on the assumption that
>> resolve_misconfig() is extended to handle the p2m_ioreq_server case
>> (see my "Suppose resolve_misconfig() is modified..."). The code
>> change could be something like below:
>>
>> @@ -542,10 +542,14 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
>> -            if ( e.recalc && p2m_is_changeable(e.sa_p2mt) )
>> +            if ( e.recalc )
>>              {
>> -                e.sa_p2mt = p2m_is_logdirty_range(p2m, gfn + i, gfn + i)
>> -                            ? p2m_ram_logdirty : p2m_ram_rw;
>> +                if ( e.sa_p2mt == p2m_ioreq_server )
>> +                    e.sa_p2mt = p2m_ram_rw;
>> +                else if ( p2m_is_changeable(e.sa_p2mt) )
>> +                    e.sa_p2mt = p2m_is_logdirty_range(p2m, gfn + i, gfn + i)
>> +                                ? p2m_ram_logdirty : p2m_ram_rw;
>> +                ept_p2m_type_to_flags(p2m, &e, e.sa_p2mt, e.access);
>>              }
>>              e.recalc = 0;
>>
>> With changes like this, both p2m types of gfn A and gfn B from the
>> above example would be set to p2m_ram_rw if log-dirty is enabled.
> The above modification would convert _all_ p2m_ioreq_server entries
> into p2m_ram_rw, irrespective of log-dirty mode being active. Which I
> don't think is what you want.

Well, this is another situation I found very interesting: without
log-dirty, this approach actually works. :) And the reasons are:
- resolve_misconfig() will only recalculate entries which have the
  e.recalc flag set;
- for the outdated p2m_ioreq_server entries, this routine will reset
  them back to p2m_ram_rw;
- for the new p2m_ioreq_server entries, their e.recalc flag will first
  be cleared to 0 by ept_set_entry() (resolve_misconfig() is used to
  set the p2m type when hvmop_set_mem_type is invoked);
- yet live migration will turn on the recalc flag for all entries
  again...

You can see this in steps 3> to 6> in my previous example. :) A toy
model of this recalc life cycle is sketched below.
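To make the failure mode concrete, here is a toy, self-contained C
model of the flag life cycle described above. The struct and helpers
are deliberately simplified stand-ins for Xen's EPT code (ept_entry_t,
ept_set_entry(), resolve_misconfig()), not the real API; it only shows
how an entry set to p2m_ioreq_server silently becomes p2m_ram_rw once
every entry is marked for recalculation.

/* Toy model of the e.recalc life cycle - simplified stand-ins for
 * Xen's EPT code, not the real ept_entry_t/API. */
#include <stdio.h>

typedef enum { p2m_ram_rw, p2m_ram_logdirty, p2m_ioreq_server } p2m_type_t;

struct toy_entry {
    p2m_type_t sa_p2mt;
    int recalc;                  /* entry needs type re-calculation */
};

/* hvmop_set_mem_type() analogue: the new type is written and the
 * entry ends up with a clear recalc flag (via ept_set_entry()). */
static void set_mem_type(struct toy_entry *e, p2m_type_t t)
{
    e->sa_p2mt = t;
    e->recalc = 0;
}

/* Enabling log-dirty (e.g. "xl save") marks every entry stale. */
static void enable_log_dirty(struct toy_entry *e, unsigned int n)
{
    for ( unsigned int i = 0; i < n; i++ )
        e[i].recalc = 1;
}

/* The modified resolve_misconfig() quoted above, reduced to one entry. */
static void resolve(struct toy_entry *e, int log_dirty)
{
    if ( !e->recalc )
        return;
    if ( e->sa_p2mt == p2m_ioreq_server )
        e->sa_p2mt = p2m_ram_rw;             /* write emulation is lost */
    else
        e->sa_p2mt = log_dirty ? p2m_ram_logdirty : p2m_ram_rw;
    e->recalc = 0;
}

int main(void)
{
    struct toy_entry gfn[2] = { { p2m_ram_rw, 0 }, { p2m_ram_rw, 0 } };

    set_mem_type(&gfn[1], p2m_ioreq_server); /* gfn B tracked by ioreq server */
    enable_log_dirty(gfn, 2);                /* user types "xl save" */
    resolve(&gfn[1], 1);                     /* next EPT misconfig on gfn B */

    /* gfn B is now p2m_ram_rw again, so its emulation would fail. */
    printf("gfn B: %s\n", gfn[1].sa_p2mt == p2m_ram_rw ? "p2m_ram_rw" : "other");
    return 0;
}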
>> So that's what I am worrying about - if a user unintentionally
>> typed "xl save" during the emulation process, the emulation would
>> fail. We can let enable_logdirty() return false if XenGT is
>> detected, but we still wish to keep the log-dirty feature.
> Well, enabling log-dirty mode would succeed as soon as all
> p2m_ioreq_server pages got converted back to normal ones (by the
> device model). So an unintentional "xl save" would simply fail. Is
> there any problem with that?

Well, I agree. Since this patchset is only about the ioreq server
change, my plan is to keep the log-dirty logic as it is for now; in
the future, if we are going to support vGPU migration, we can consider
letting "xl save" simply fail if the device model side is not ready
(i.e. has not finished its memory type cleaning tasks - what that
cleaning could look like is sketched below).
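For what it's worth, the device-model-side cleaning task could be as
simple as the sketch below: before migration, flip every gfn it had
mapped as p2m_ioreq_server back to ordinary RAM via
HVMOP_set_mem_type. It assumes the libxc wrapper xc_hvm_set_mem_type()
as found in trees of this era; the tracked_gfns/nr_tracked bookkeeping
is a hypothetical stand-in for whatever list the device model keeps.

/* Sketch only: flips previously tracked pages back to plain RAM.
 * tracked_gfns/nr_tracked are hypothetical device-model bookkeeping. */
#include <xenctrl.h>

static int cleanup_ioreq_server_pages(domid_t domid,
                                      const uint64_t *tracked_gfns,
                                      unsigned int nr_tracked)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    int rc = 0;

    if ( !xch )
        return -1;

    for ( unsigned int i = 0; i < nr_tracked && !rc; i++ )
        /* One page at a time; contiguous runs could be batched via 'nr'. */
        rc = xc_hvm_set_mem_type(xch, domid, HVMMEM_ram_rw,
                                 tracked_gfns[i], 1);

    xc_interface_close(xch);
    return rc;
}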
>>> And then - didn't we mean to disable that part of XenGT during
>>> migration, i.e. temporarily accept the higher performance overhead
>>> without the p2m_ioreq_server entries? In which case flipping
>>> everything back to p2m_ram_rw after (completed or canceled)
>>> migration would be exactly what we want. The (new or previous)
>>> ioreq server should attach only afterwards, and can then freely
>>> re-establish any p2m_ioreq_server entries it deems necessary.
>> Well, I agree this part of XenGT should be disabled during
>> migration. But in such a case I think it is the device model's job
>> to trigger the p2m type flipping (i.e. by calling
>> HVMOP_set_mem_type).
> I agree - this would seem to be the simpler model here, even though
> (as George validly says) the more consistent model would be for the
> hypervisor to do the cleanup. Such cleanup would imo be reasonable
> only if there was an easy way for the hypervisor to enumerate all
> p2m_ioreq_server pages.

Well, for me, the "easy way" means we should avoid traversing the
whole EPT paging structure all at once, right? I have not figured out
any clean solution on the hypervisor side; that is one reason I would
like to leave this job to the device model side (another reason is
that I do think the device model should take this responsibility). A
sketch of the brute-force alternative follows.
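For contrast, here is roughly what hypervisor-side cleanup would
degenerate to without a cheap way to enumerate p2m_ioreq_server pages.
get_p2m_type()/set_p2m_type() are hypothetical stand-ins for Xen's p2m
accessors; the point is only the cost - one lookup per gfn across the
entire physmap, even if only a handful of entries are affected.

#include <stdint.h>

typedef enum { p2m_ram_rw, p2m_ram_logdirty, p2m_ioreq_server } p2m_type_t;

/* Hypothetical accessors standing in for Xen's p2m get/set hooks. */
extern p2m_type_t get_p2m_type(uint64_t gfn);
extern void set_p2m_type(uint64_t gfn, p2m_type_t t);

/* Walk the whole physmap - for a guest with gigabytes of RAM this
 * touches millions of entries, i.e. the "traversing the whole EPT
 * paging structure all at once" cost referred to above. */
void sweep_ioreq_server_entries(uint64_t max_gfn)
{
    for ( uint64_t gfn = 0; gfn <= max_gfn; gfn++ )
        if ( get_p2m_type(gfn) == p2m_ioreq_server )
            set_p2m_type(gfn, p2m_ram_rw);
}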
Thanks
Yu

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel