Re: [Xen-devel] [PATCH] common/vm_event: Prevent guest locking with large max_vcpus
On 02/08/2017 07:20 PM, Andrew Cooper wrote:
> On 08/02/17 17:08, Razvan Cojocaru wrote:
>> On 02/08/2017 06:25 PM, Tamas K Lengyel wrote:
>>> On Wed, Feb 8, 2017 at 2:00 AM, Razvan Cojocaru
>>> <rcojocaru@xxxxxxxxxxxxxxx <mailto:rcojocaru@xxxxxxxxxxxxxxx>> wrote:
>>>
>>>     It is currently possible for the guest to lock when subscribing
>>>     to synchronous vm_events if max_vcpus is larger than the
>>>     number of available ring buffer slots. This patch no longer
>>>     blocks already paused VCPUs, fixing the issue for this use
>>>     case.
>>>
>>>     Signed-off-by: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx
>>>     <mailto:rcojocaru@xxxxxxxxxxxxxxx>>
>>>     ---
>>>      xen/common/vm_event.c | 3 ++-
>>>      1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>>     diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
>>>     index 82ce8f1..2005a64 100644
>>>     --- a/xen/common/vm_event.c
>>>     +++ b/xen/common/vm_event.c
>>>     @@ -316,7 +316,8 @@ void vm_event_put_request(struct domain *d,
>>>           * See the comments above wake_blocked() for more information
>>>           * on how this mechanism works to avoid waiting. */
>>>          avail_req = vm_event_ring_available(ved);
>>>     -    if( current->domain == d && avail_req < d->max_vcpus )
>>>     +    if( current->domain == d && avail_req < d->max_vcpus &&
>>>     +        !atomic_read( &current->vm_event_pause_count ) )
>>>              vm_event_mark_and_pause(current, ved);
>>>
>>> Hi Razvan,
>>> I would also like to have the change made in this patch that unblocks
>>> the vCPUs as soon as a spot opens up on the ring. Doing just what this
>>> patch has will not solve the problem if there are asynchronous events
>>> used.
>> Fair enough, I thought that might need more discussion and thus put into
>> a subsequent patch, but I'll modify that as well, give it a spin and
>> submit V2.
>
> If you are spinning a v2, please pull current out into a variable at the
> start (more efficient), and atomic_read() doesn't need spaces inside.

Will do.


Thanks,
Razvan
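
[Editor's note: below is a minimal sketch of how the modified hunk might
look once Andrew's style comments are folded in (current pulled into a
local variable and the extra spaces inside atomic_read() dropped). It is
only an illustration of the review feedback, not the actual v2 patch.]

    /* Hypothetical v2 sketch -- not the submitted patch. */
    struct vcpu *curr = current;    /* fetch current once, per review */

    ...

    avail_req = vm_event_ring_available(ved);

    /* Do not mark-and-pause a vCPU that is already paused. */
    if ( curr->domain == d && avail_req < d->max_vcpus &&
         !atomic_read(&curr->vm_event_pause_count) )
        vm_event_mark_and_pause(curr, ved);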