Re: [Xen-devel] [PATCH RFC] x86/altp2m: fix display frozen when switching to a new view early
>>> On 04.10.18 at 17:34, <george.dunlap@xxxxxxxxxx> wrote:
> On 10/04/2018 04:20 PM, Jan Beulich wrote:
>>>>> On 04.10.18 at 16:56, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>> The biggest problem here is p2m->logdirty_ranges. This patch will
>>> (justly) not work, because struct rangeset is only forward-declared in
>>> xen/rangeset.h, so an incomplete type here:
>>>
>>> -void p2m_init_altp2m_ept(struct domain *d, unsigned int i)
>>> +int p2m_init_altp2m_ept(struct domain *d, unsigned int i)
>>>  {
>>>      struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
>>>      struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
>>>      struct ept_data *ept;
>>>
>>> +    if ( !p2m->logdirty_ranges )
>>> +        p2m->logdirty_ranges = rangeset_new(d, "log-dirty",
>>> +                                            RANGESETF_prettyprint_hex);
>>> +    if ( !p2m->logdirty_ranges )
>>> +        return -ENOMEM;
>>> +
>>> +    *p2m->logdirty_ranges = *hostp2m->logdirty_ranges;
>>> +
>>>      p2m->ept.ad = hostp2m->ept.ad;
>>> +    p2m->max_mapped_pfn = hostp2m->max_mapped_pfn;
>>> +    p2m->default_access = hostp2m->default_access;
>>> +    p2m->domain = hostp2m->domain;
>>> +
>>> +    p2m->global_logdirty = hostp2m->global_logdirty;
>>>      p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
>>>      p2m->max_remapped_gfn = 0;
>>>      ept = &p2m->ept;
>>>      ept->mfn = pagetable_get_pfn(p2m_get_pagetable(p2m));
>>>      d->arch.altp2m_eptp[i] = ept->eptp;
>>> +
>>> +    return 0;
>>> +}
>>>
>>> But that's not even the biggest problem: even if that compiled, it
>>> would still be wrong, because logdirty_ranges has pointers of its own,
>>> which means that two bitwise-copied distinct rangesets can still point
>>> to the same data and thus be vulnerable to race conditions, requiring
>>> synchronization.
>
> Yes, so "deep copy" means that if a structure has pointers, you copy the
> structures pointed to; and if those structures have pointers, you copy
> those too, all the way down. After a deep copy, any operations on the
> structure should be operating on completely separate bits of memory and
> pointers.
>
>>> Furthermore there's no rangeset_copy() function in sight in rangeset.h
>>> (though there is a rangeset_swap()).
>>>
>>> Would you like me to add a rangeset_copy() function (presumably as
>>> another intermediate patch) and proceed in that manner?
>>
>> Roger has recently posted a patch adding rangeset_merge(), which I think
>> is more general than your rangeset_copy(). That said, I'm in no way
>> convinced copying (and then keeping in sync) the range sets across the
>> altp2m-s is the best approach. It may well be that the optimal solution
>> is somewhere in the middle between sharing everything and copying
>> everything.
>
> Er, you mean maybe we should share logdirty ranges and copy other
> things? Or do you actually mean somehow to share bits of the logdirty
> range structure?

The former, of course. I'm sorry for the ambiguity.
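For reference, a rangeset_copy() built purely on the existing rangeset
interface might look roughly like the sketch below. This is only an
illustration, assuming the rangeset_report_ranges() and
rangeset_add_range() helpers from xen/rangeset.h and a destination
rangeset that starts out empty; it is not an actual Xen function, and
whether copying (rather than sharing) is the right approach is exactly
the open question here.

    #include <xen/rangeset.h>

    /* Callback for rangeset_report_ranges(): add one [s, e] range to
     * the destination rangeset passed via ctxt. */
    static int copy_one_range(unsigned long s, unsigned long e, void *ctxt)
    {
        struct rangeset *dst = ctxt;

        return rangeset_add_range(dst, s, e);
    }

    /* Hypothetical deep copy: 'dst' ends up with its own private copy
     * of every range in 'src', so later updates to one rangeset cannot
     * be observed through the other (unlike the bitwise struct copy
     * in the patch above). */
    static int rangeset_copy(struct rangeset *dst, struct rangeset *src)
    {
        return rangeset_report_ranges(src, 0, ~0UL, copy_one_range, dst);
    }

Each call above goes through the rangesets' own internal locking, so the
copy itself is safe; keeping the two rangesets consistent afterwards is
the part that still needs a scheme like the ones discussed below.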
> I think we can reasonably start with a simple-and-correct approach, and
> try to come up with an optimization later if we decide it's necessary.
> The two basic simple-but-correct approaches would be:
>
> 1. Share logdirty_ranges. This would mean not duplicating the structure
> and keeping it in sync; but it would mean grabbing the main p2m lock on
> every resolve_misconfig().
>
> 2. Duplicate the structure and keep it in sync. This means not grabbing
> the main p2m lock on every resolve_misconfig(); but it does mean
> duplicating memory, as well as grabbing the lock of every altp2m
> structure every time logdirty_ranges changes.
>
> As I've said before, I think #2 is the most promising, since
> resolve_misconfig() will (potentially) be called for each entry in the
> p2m table, but logdirty_ranges only changes once or twice during the
> entire lifetime of the guest.

So perhaps some r/w lock wants to be introduced?

Jan
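To make the r/w lock idea concrete, a rough sketch of what approach #2
could look like follows. It assumes a hypothetical logdirty_lock rwlock
added to struct p2m_domain (no such field exists today) and uses the
existing rangeset_contains_singleton() / rangeset_add_range() helpers;
none of this is existing Xen code. The hot EPT-misconfig path only ever
takes the lock for reading, while the rare log-dirty range updates take
each altp2m's lock for writing.

    #include <xen/rwlock.h>
    #include <xen/rangeset.h>
    #include <asm/p2m.h>

    /* Hot path, e.g. from resolve_misconfig(): many concurrent readers,
     * no need to take the main (host) p2m lock. */
    static bool gfn_is_in_logdirty_range(struct p2m_domain *p2m,
                                         unsigned long gfn)
    {
        bool ret;

        read_lock(&p2m->logdirty_lock);          /* hypothetical field */
        ret = rangeset_contains_singleton(p2m->logdirty_ranges, gfn);
        read_unlock(&p2m->logdirty_lock);

        return ret;
    }

    /* Rare path: a log-dirty range changes.  Update the host p2m's
     * rangeset and then every altp2m's private copy, taking each write
     * lock in turn - this is the "keep it in sync" cost of approach #2. */
    static int logdirty_add_range(struct domain *d, unsigned long start,
                                  unsigned long end)
    {
        struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
        unsigned int i;
        int rc;

        write_lock(&hostp2m->logdirty_lock);
        rc = rangeset_add_range(hostp2m->logdirty_ranges, start, end);
        write_unlock(&hostp2m->logdirty_lock);

        for ( i = 0; !rc && i < MAX_ALTP2M; i++ )
        {
            struct p2m_domain *ap2m = d->arch.altp2m_p2m[i];

            if ( !ap2m || !ap2m->logdirty_ranges )
                continue;

            write_lock(&ap2m->logdirty_lock);
            rc = rangeset_add_range(ap2m->logdirty_ranges, start, end);
            write_unlock(&ap2m->logdirty_lock);
        }

        return rc;
    }

Whether such a lock lives in each struct p2m_domain or is a single
per-domain lock is an open design choice; the point is only that readers
on the fault path never contend with one another.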