
Re: [Xen-devel] [PATCH for-4.12] x86: Improve the efficiency of domain_relinquish_resources()



Hi Andrew,

On 2/21/19 12:22 PM, Andrew Cooper wrote:
> pci_release_devices() takes the global PCI lock.  Once pci_release_devices()
> has completed, it will be called redundantly each time paging_teardown() and
> vcpu_destroy_pagetables() continue.
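
For context, a rough sketch of the shape of the pre-patch code that causes
this (simplified from memory; field and state names follow the x86 code but
details may differ from the actual source):

    int domain_relinquish_resources(struct domain *d)
    {
        struct vcpu *v;
        int ret;

        switch ( d->arch.relmem )
        {
        case RELMEM_not_started:
            /* Takes the global PCI lock. */
            ret = pci_release_devices(d);
            if ( ret )
                return ret;

            /*
             * Both of the following can return -ERESTART.  d->arch.relmem is
             * still RELMEM_not_started at that point, so every continuation
             * re-executes pci_release_devices() first.
             */
            ret = paging_teardown(d);
            if ( ret )
                return ret;

            for_each_vcpu ( d, v )
            {
                ret = vcpu_destroy_pagetables(v);
                if ( ret )
                    return ret;
            }

            d->arch.relmem = RELMEM_shared;
            /* fallthrough */

        case RELMEM_shared:
            /* ... remaining RELMEM_* states ... */
            break;
        }

        return 0;
    }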

> This is liable to be millions of times for a reasonably sized guest, and is a
> serialising bottleneck now that domain_kill() can be run concurrently on
> different domains.

> Instead of propagating the opencoding of the relinquish state machine, take
> the opportunity to clean it up.
> 
> Leave a proper set of comments explaining that domain_relinquish_resources()
> implements a co-routine.  Introduce a documented PROGRESS() macro to avoid
> latent bugs such as the RELMEM_xen case, and make the new PROG_* states
> private to domain_relinquish_resources().
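
To make the co-routine structure concrete, here is a minimal sketch of what
such a PROGRESS() macro and the private PROG_* states can look like.  This is
illustrative only: the progress field is shown as d->arch.rel_priv and the
PROG_* names are examples; the exact names and layout in the patch may differ.

    /*
     * Private to domain_relinquish_resources(); records how far we got.
     * (Field name is illustrative.)
     */
    enum {
        PROG_paging = 1,
        PROG_vcpu_pagetables,
        /* ... one state per preemptible step ... */
        PROG_done,
    };

    /*
     * Store the progress marker, then fall through into the matching case
     * label.  A continuation after -ERESTART switch()es straight back to the
     * step that was interrupted, skipping everything already completed.
     */
    #define PROGRESS(x)                             \
            d->arch.rel_priv = PROG_ ## x;          \
            /* Fallthrough */                       \
        case PROG_ ## x

    switch ( d->arch.rel_priv )
    {
    case 0:
        /* Runs exactly once, so the global PCI lock is only taken once. */
        ret = pci_release_devices(d);
        if ( ret )
            return ret;

    PROGRESS(paging):
        ret = paging_teardown(d);
        if ( ret )
            return ret;

    PROGRESS(vcpu_pagetables):
        /* ... each further step sits behind its own PROGRESS() label ... */
        break;
    }

    #undef PROGRESS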

> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> ---
> CC: Jan Beulich <JBeulich@xxxxxxxx>
> CC: Wei Liu <wei.liu2@xxxxxxxxxx>
> CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> CC: Juergen Gross <jgross@xxxxxxxx>
> CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> CC: Julien Grall <julien.grall@xxxxxxx>

> So I know Xen 4.12 isn't going to crash and burn without this change, but I
> also can't un-see the unnecessary global PCI lock contention.  In terms of
> risk, this is extremely low - this function has complete coverage in testing,
> and its behaviour isn't changing dramatically.
> 
> ARM: There are no problems, latent or otherwise, with your version of
> domain_relinquish_resources(), but I'd recommend the same cleanup in due
> course.

I will add this cleanup to my todo list!

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

