Re: [PATCH V3 21/23] xen/arm: Add mapcache invalidation handling
Hi Stefano,

On 11/12/2020 01:28, Stefano Stabellini wrote:
>> Hmmm... I didn't realize that you were going to call
>> domain_has_ioreq_server() here. Per your comment, this can only be
>> called when p2m->domain == current. One way would be to switch the
>> two checks. However, I am not entirely sure this is necessary. I see
>> no issue with always setting mapcache_invalidate even if there are no
>> IOREQ servers available.
>>
>>>> +         (p2m->domain == current->domain) && p2m_is_ram(entry.p2m.type) )
>>>> +        p2m->domain->mapcache_invalidate = true;
>>>
>>> Why the (p2m->domain == current->domain) check? Shouldn't we set
>>> mapcache_invalidate to true anyway? What happens if p2m->domain !=
>>> current->domain? We wouldn't want the domain to lose the
>>> mapcache_invalidate notification.
>>
>> This is also discussed in [1]. :)
>>
>> The main question is why a toolstack/device model would modify the
>> guest memory after boot. If we assume it does, then the device model
>> would need to pause the domain before modifying the RAM. We would
>> also need to make sure that all the IOREQ servers have invalidated
>> the mapcache before the domain runs again. This would require quite a
>> bit of work, and I am not sure the effort is worth it if there are no
>> active users today.
>
> OK, that explains why we think p2m->domain == current->domain, but why
> do we need to have a check for it right here? In other words, we don't
> think it is realistic to get here with p2m->domain != current->domain,
> but let's say that we do somehow.

I am guessing that by "here" you mean the situation where a RAM entry
would be removed. Is that correct? If so, then yes, I don't believe this
should happen today (even at domain creation/destruction).

> What's the best course of action?

The best course of action would be to forward the invalidation to *all*
the IOREQ servers and wait for it to complete before the domain can run
again. So if the toolstack (or an IOREQ server) ends up using it, then
we need to make sure all the IOREQ servers have invalidated the mapcache
before the domain can run again.

> Probably, set mapcache_invalidate to true and possibly print a warning?
> Leaving mapcache_invalidate to false doesn't seem to be what we want
> to do?

Setting it to true/false is not going to be very helpful, because the
guest may never issue a hypercall. Without more work, the guest may get
corrupted. So I would suggest either preventing the P2M from being
modified after the domain has been created and before it is destroyed
(more of a stopgap) or fixing it properly.
I thought a bit more about it and I am actually leaning towards keeping
the first check.

The common implementation of the hypercall path is mostly (if not
entirely) accessing per-vCPU variables, so the hypercalls can mostly
work independently (at least in the common part). If we dropped the
first check, we would be writing to a per-domain variable on every
hypercall. You would probably notice the performance impact if you
benchmarked concurrent no-op hypercalls, because the cache line would
bounce between CPUs (because of the write).

Arguably, this may become noise if you execute a full hypercall. But I
would still like to treat the common hypercall path like the entry/exit
path. IOW, I would like to be careful about what we add there. The main
reason is that hypercalls may be used quite a lot by PV backends or
device emulators (if we think about Virtio). If we decided to move the
change to the entry/exit path, then it would definitely be an issue for
the reasons I explained above. So I would also like to avoid the write
to a shared variable if we can.

FAOD, I am not saying this optimization will save the world :). I am
sure there will be more (in particular in the vGIC part) in order to get
Virtio performance on par with PV backends on Xen.

This discussion would also be moot if ...

> On a related topic, Jan pointed out that the invalidation would not
> work properly if you have multiple vCPUs modifying the P2M at the same
> time.

... we had a per-vCPU flag instead.

Cheers,

--
Julien Grall