
Re: xenstore - Suggestion of batching watch events



On Tue, Jun 24, 2025 at 4:51 PM Christian Lindig
<christian.lindig@xxxxxxxxx> wrote:
>
>
>
> > On 24 Jun 2025, at 16:43, Andriy Sultanov <sultanovandriy@xxxxxxxxx> wrote:
> >
> > But I think this only covers part of the problem. Looking at the concrete 
> > case of
> > xenopsd and Windows VMs, xenopsd cares about most of the nodes it receives
> > watch events from, the issue is that it doesn't know when these are grouped
> > together in one way or another.
>
> Is the common case that a client wants to observe the changes rather than 
> just knowing that a change occurred? If so, it seems transmitting changes as 
> part of a watch event (rather than forcing the client to traverse the tree) 
> would be a better protocol. That would suggest to introduce a new kind of 
> watch that reports changes at the end of a transaction. The size of the event 
> would depend on the size of the change.

Most of the code in xapi_xenops.ml that deals with watch events wants
to compare the new value with the old one, to decide whether it needs
to send an update to XAPI.
So yes, for xenopsd's use-case, including as much information as we
can in the watch event would avoid round-trips (and might also avoid
some of the O(N^2) issues, because xenopsd would know exactly which
key changed to which value).
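
To make that concrete, here is a rough sketch of how xenopsd could
consume such an event. The `change`/`rich_event` types are purely
hypothetical (nothing like them exists in the protocol today); this
only illustrates the new-vs-old comparison happening without any
extra xenstore read round-trips:

  (* Hypothetical watch event carrying the key/value pairs committed
     by a transaction; None means the key was deleted. *)
  type change = { path : string; value : string option }
  type rich_event = { token : string; changes : change list }

  (* Last-seen values, so we only push an update to XAPI when a value
     really changed. *)
  let cache : (string, string option) Hashtbl.t = Hashtbl.create 64

  let process_event (ev : rich_event) =
    List.iter
      (fun { path; value } ->
        if Hashtbl.find_opt cache path <> Some value then begin
          Hashtbl.replace cache path value;
          (* push the delta to XAPI here; no extra read needed *)
          Printf.printf "%s changed\n" path
        end)
      ev.changes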

The situation may be different for PV device backends/frontends.
What might be useful there are wildcard watches, e.g.
`/path/to/vif/*/state`, which would also trigger when new devices
appear or old devices disappear.
The tree depth limit already proposed in xenstore.txt could also help
with this (so you can watch for a new device to appear without
constantly watching all of its keys), and the driver can then set up
watches just on `state` (and whatever other fields it wants to
watch). A small illustration of the wildcard matching follows.
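
Only a sketch of how such a pattern could be matched against fired
paths; the protocol has no wildcard watches today, this is just to
show the shape of the idea:

  (* Match a fired path against a pattern like "/path/to/vif/*/state". *)
  let matches ~pattern path =
    let ps = String.split_on_char '/' pattern
    and xs = String.split_on_char '/' path in
    List.length ps = List.length xs
    && List.for_all2 (fun p x -> p = "*" || p = x) ps xs

  let () =
    assert (matches ~pattern:"/path/to/vif/*/state" "/path/to/vif/3/state");
    assert (not (matches ~pattern:"/path/to/vif/*/state" "/path/to/vif/3/mac"))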

Best regards,
--Edwin

>
> — C
>
>



 

