
Re: [Xen-devel] [Notes for xen summit 2018 design session ] Xenwatch Multithreading



> -----Original Message-----
> From: Dongli Zhang <dongli.zhang@xxxxxxxxxx>
> Sent: Monday, July 2, 2018 4:19 AM
> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>; jgross@xxxxxxxx; Paul
> Durrant <Paul.Durrant@xxxxxxxxxx>; Srinivas REDDY Eeda
> <srinivas.eeda@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>; Joao Marcal
> Lemos Martins <joao.m.martins@xxxxxxxxxx>
> Subject: [Notes for xen summit 2018 design session ] Xenwatch
> Multithreading
> 
> Hi all,
> 
> This is the summary of the xen summit 2018 design session for Xenwatch
> Multithreading. Would you please share your input on it?
> 
> Below are slides/patch for the talk in the morning:
> 
> http://www.donglizhang.org/xenwatch_multithreading.pdf
> http://donglizhang.org/xenwatch-multithreading.patch
> 
> The design session was primarily led by Dongli Zhang, Paul Durrant and
> Wei Liu.
> 
> 
> Problem to solve
> ================
> 
> The xenwatch kernel thread is blocked in 'D' state if any xs_watch_event
> callback function stalls. As a result, dom0 is not able to create/destroy
> guest VMs or hotplug devices.
> 
> The xen summit 2018 talk presents the idea derived from the RFC discussion
> at:
> 
> https://lists.xenproject.org/archives/html/xen-devel/2018-04/msg00465.html
> 
> The core idea in the talk is to create a per-domid xenwatch thread for
> every domid on dom0 to run the xs_watch_event callback functions. The
> following requirements should be fulfilled, per the discussion with Paul
> and Wei:
> 
> 1. In addition to the default xenwatch thread, more threads are used to
> process xenwatch event callback functions.
> 
> 2. The modification should be limited to the Linux kernel (dom0), excluding
> the Xen hypervisor/toolstack.
> 
> 
> Design Session
> ==============
> 
> Setting aside the engineering work, I realized there are primarily two
> differences between the old idea (talk) and the new idea (design session):
> (1) how to create/destroy the per-domid xenwatch thread; (2) how to
> distribute xenwatch events to the per-domid xenwatch threads.
> 
> The old idea (in the talk):
> ---------------------------
> 
> * per-domid xenwatch thread creation/destruction:
> 
> The dom0 kernel watches @introduceDomain and @releaseDomain. Once the
> corresponding xenwatch event arrives, the dom0 kernel lists '/local/domain'
> in xenstore via 'XS_DIRECTORY' to identify which domid was most recently
> created or removed.
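> 
> For illustration, here is a minimal sketch of such a watch, assuming a
> hypothetical handler name and omitting the diffing logic;
> xenbus_directory() is the existing in-kernel wrapper around XS_DIRECTORY:
> 
> #include <linux/err.h>
> #include <linux/slab.h>
> #include <xen/xenbus.h>
> 
> /* Hypothetical handler: fires on @introduceDomain/@releaseDomain. */
> static void domain_list_changed(struct xenbus_watch *watch,
>                                 const char *path, const char *token)
> {
>         char **doms;
>         unsigned int num, i;
> 
>         /* XS_DIRECTORY: enumerate the currently existing domids. */
>         doms = xenbus_directory(XBT_NIL, "/local/domain", "", &num);
>         if (IS_ERR(doms))
>                 return;
> 
>         for (i = 0; i < num; i++) {
>                 /* Compare doms[i] against the known set to find the
>                  * most recently created or removed domid (omitted). */
>         }
>         kfree(doms);
> }
> 
> static struct xenbus_watch introduce_watch = {
>         .node = "@introduceDomain",
>         .callback = domain_list_changed,
> };
> 
> /* register_xenbus_watch(&introduce_watch) at init time; a second
>  * watch on "@releaseDomain" is registered the same way. */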
> 
> * xenwatch event distribution:
> 
> xenwatch_thread() is modified so that an xs_watch_event (taken from the
> default event list) is distributed to a per-domid event list if the event
> is to be processed by a per-domid xenwatch thread.
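> 
> A sketch of that dispatch step inside xenwatch_thread(), where
> mtwatch_lookup(), event_domid() and the struct mtwatch_domain fields are
> hypothetical names for the per-domid state:
> 
> /* Called with the event already removed from the default list. */
> static void dispatch_event(struct xs_watch_event *event)
> {
>         struct mtwatch_domain *d = mtwatch_lookup(event_domid(event));
> 
>         if (d) {
>                 /* Hand off to that domid's private list; its thread
>                  * runs the callback and frees the event. */
>                 list_add_tail(&event->list, &d->events);
>                 wake_up(&d->events_wq);
>         } else {
>                 /* Not per-domid: run on the default xenwatch thread. */
>                 event->handle->callback(event->handle, event->path,
>                                         event->token);
>                 kfree(event);
>         }
> }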
> 
> 
> The new idea (in the design session):
> -------------------------------------
> 
> * per-domid xenwatch thread creation/destruction:
> 
> The new idea is to create the per-domid thread *on demand*.
> 
> The per-domid thread is created when the first pv backend device (for this
> domid) is created, and the thread is destroyed when the last pv backend
> device (for this domid) is removed.
> 
> Taking blkback and netback as examples, the be_watch callback path
> backend_changed()-->xenbus_probe_node() is modified to create the
> per-domid xenwatch thread. A counter is introduced to maintain the status
> of each per-domid xenwatch thread, with get() and put() methods to
> increment and decrement the counter, respectively. The xenwatch thread is
> destroyed once the counter drops to zero. Therefore, although
> backend_changed() may be triggered several times depending on the number
> of pv devices, there is only a single xenwatch thread on dom0 for each
> domid.
> 
> Any component that wants to use xenwatch multithreading would call an
> interface (e.g., create_xenwatch_thread()) to create the per-domid
> xenwatch thread. A duplicate call only increments the counter if the
> thread for that domid already exists; see the sketch below.
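> 
> A sketch of the refcounting described above, with all names hypothetical
> and chosen only to mirror the description:
> 
> #include <linux/err.h>
> #include <linux/kref.h>
> #include <linux/kthread.h>
> #include <linux/list.h>
> #include <linux/slab.h>
> #include <linux/wait.h>
> #include <xen/interface/xen.h>  /* domid_t */
> 
> /* Hypothetical per-domid state; one instance per active domid. */
> struct mtwatch_domain {
>         domid_t domid;
>         struct task_struct *task;   /* per-domid xenwatch thread */
>         struct kref kref;           /* counts pv backend devices */
>         struct list_head events;    /* per-domid xs_watch_event list */
>         wait_queue_head_t events_wq;
> };
> 
> static int mtwatch_thread_fn(void *data); /* loops on d->events */
> static struct mtwatch_domain *mtwatch_find(domid_t domid);
> 
> /* Called from backend_changed()-->xenbus_probe_node() when a pv
>  * backend device for @domid appears: create the thread on first use,
>  * otherwise only bump the counter. */
> int create_xenwatch_thread(domid_t domid)
> {
>         struct mtwatch_domain *d = mtwatch_find(domid);
> 
>         if (d) {
>                 kref_get(&d->kref); /* duplicate call: counter only */
>                 return 0;
>         }
> 
>         d = kzalloc(sizeof(*d), GFP_KERNEL);
>         if (!d)
>                 return -ENOMEM;
>         d->domid = domid;
>         kref_init(&d->kref);
>         INIT_LIST_HEAD(&d->events);
>         init_waitqueue_head(&d->events_wq);
>         d->task = kthread_run(mtwatch_thread_fn, d, "xenwatch-%d", domid);
>         if (IS_ERR(d->task)) {
>                 int err = PTR_ERR(d->task);
> 
>                 kfree(d);
>                 return err;
>         }
>         return 0;
> }
> 
> static void mtwatch_release(struct kref *kref)
> {
>         struct mtwatch_domain *d =
>                 container_of(kref, struct mtwatch_domain, kref);
> 
>         kthread_stop(d->task); /* counter hit zero: destroy the thread */
>         kfree(d);
> }
> 
> /* Called when a pv backend device for this domid is removed. */
> void put_xenwatch_thread(struct mtwatch_domain *d)
> {
>         kref_put(&d->kref, mtwatch_release);
> }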
> 
> * xenwatch event distribution:
> 
> Instead of the default xenwatch_thread(), we would modify
> xenbus_thread()-->xs_watch_msg() to append an xs_watch_event to the
> relevant per-domid event list directly, at a very early stage, if the
> event is to be processed by a per-domid xenwatch thread. A per-domid
> xs_watch_event is never put on the default event list; it is put on the
> per-domid event list directly.
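> 
> A sketch of that early distribution, replacing the queuing step at the
> tail of xs_watch_msg() in xenbus_xs.c (mtwatch_domain_for() and the
> per-domid fields are hypothetical; find_watch(), watch_events and the
> lock are the existing statics in that file):
> 
> static void queue_watch_event(struct xs_watch_event *event)
> {
>         struct mtwatch_domain *d;
> 
>         spin_lock(&watch_events_lock);
>         event->handle = find_watch(event->token);
>         if (!event->handle) {
>                 kfree(event);
>                 goto out;
>         }
> 
>         d = mtwatch_domain_for(event->handle, event->path);
>         if (d) {
>                 /* Per-domid event: append directly, never to the
>                  * default event list. */
>                 list_add_tail(&event->list, &d->events);
>                 wake_up(&d->events_wq);
>         } else {
>                 list_add_tail(&event->list, &watch_events);
>                 wake_up(&watch_events_waitq);
>         }
> out:
>         spin_unlock(&watch_events_lock);
> }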
> 
> * In addition, the signature of register_xenbus_watch() is modified so
> that the subscriber (e.g., a specific pv driver) can specify whether the
> watch is processed by the default or a per-domid xenwatch thread.
> 
> Please correct me if anything is wrong or differs from what we discussed
> during the design session, and please let me know what you think.
> 

That's all as I remember it, thanks :-)

One note though... I hacked up a simpler per-pv-backend watch thread patch
for test purposes (which clearly won't scale out, so it isn't suitable for
upstream) and found that I could actually avoid changing the prototype of
register_xenbus_watch() by setting the watch handler in a new field of the
watch structure prior to making the call. If the handler field is left as
NULL then, internally, register_xenbus_watch() can set it to the default
handler.
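
For concreteness, a rough sketch of what I mean, with the new field and
the handler type names hypothetical; the prototype of
register_xenbus_watch() stays as it is today:

struct xenbus_watch {
        struct list_head list;
        const char *node;
        void (*callback)(struct xenbus_watch *watch,
                         const char *path, const char *token);
        /* New field: which thread handles events for this watch.
         * NULL means "use the default xenwatch thread". */
        struct watch_handler *handler;
};

int register_xenbus_watch(struct xenbus_watch *watch)
{
        if (!watch->handler)
                watch->handler = &default_watch_handler; /* hypothetical */
        /* ... existing registration logic unchanged ... */
        return 0;
}

/* A subscriber wanting per-domid handling sets the field first: */
static struct xenbus_watch be_watch = {
        .node = "backend",
        .callback = backend_changed,
        .handler = &per_domid_watch_handler, /* hypothetical */
};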

Cheers,

  Paul

> Thank you very much!
> 
> Dongli Zhang