
Re: [PULL 22/27] hw/xen: Add emulated implementation of XenStore operations



On Tue, 7 Mar 2023 at 18:27, David Woodhouse <dwmw2@xxxxxxxxxxxxx> wrote:
>
> From: David Woodhouse <dwmw@xxxxxxxxxxxx>
>
> Now that we have an internal implementation of XenStore, we can populate
> the xenstore_backend_ops to allow PV backends to talk to it.
>
> Watches can't be processed with immediate callbacks because that would
> call back into XenBus code recursively. Defer them to a QEMUBH to be run
> as appropriate from the main loop. We use a QEMUBH per XS handle, and it
> walks all the watches (there shouldn't be many per handle) to fire any
> which have pending events. We *could* have done it differently but this
> allows us to use the same struct watch_event as we have for the guest
> side, and keeps things relatively simple.
>
> Signed-off-by: David Woodhouse <dwmw@xxxxxxxxxxxx>
> Reviewed-by: Paul Durrant <paul@xxxxxxx>
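
The per-handle bottom-half approach described above is easy to picture
with a small standalone model: one BH callback owned by the handle walks
every watch on that handle and drains whatever events have queued up
since it was last scheduled. This is only an illustrative sketch -- the
struct and function names here are mine, not QEMU's, and in the real code
the callback would be created with qemu_bh_new() and kicked with
qemu_bh_schedule() at the point where an event is queued, rather than
called directly as it is here.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative names only; not the QEMU structures. */
struct watch_event {
    char *path;
    struct watch_event *next;
};

struct xs_watch {
    const char *path;                        /* path the watch was set on */
    void (*cb)(const char *path, void *opaque);
    void *opaque;
    struct watch_event *events;              /* queued, not yet delivered */
    struct xs_watch *next;
};

struct xs_handle {
    struct xs_watch *watches;                /* all watches on this handle */
};

/*
 * In QEMU this would be the BH callback registered with qemu_bh_new()
 * and scheduled whenever an event is queued; here it is just called
 * directly so the model compiles on its own.
 */
static void fire_watches_bh(void *opaque)
{
    struct xs_handle *h = opaque;

    for (struct xs_watch *w = h->watches; w; w = w->next) {
        while (w->events) {
            struct watch_event *ev = w->events;

            w->events = ev->next;
            w->cb(ev->path, w->opaque);
            free(ev->path);
            free(ev);
        }
    }
}

static void print_cb(const char *path, void *opaque)
{
    (void)opaque;
    printf("watch fired for %s\n", path);
}

int main(void)
{
    struct watch_event *ev = calloc(1, sizeof(*ev));
    struct xs_watch w = { .path = "backend", .cb = print_cb };
    struct xs_handle h = { .watches = &w };

    ev->path = strdup("backend/qdisk/1");
    w.events = ev;

    fire_watches_bh(&h);    /* prints "watch fired for backend/qdisk/1" */
    return 0;
}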



> +static void xs_be_unwatch(struct qemu_xs_handle *h, struct qemu_xs_watch *w)
> +{
> +    xs_impl_unwatch(h->impl, DOMID_QEMU, w->path, NULL, xs_be_watch_cb, w);

Coverity points out that this is the only call to xs_impl_unwatch()
where we don't check the return value. Is there some useful way
we can report the error (see the sketch after the quoted code below),
or is it a "we're closing everything down anyway, no way to report
anything" situation? (This particular Coverity heuristic is quite
prone to false positives, so if that's the way it is I'll just mark
it as a false positive in the Coverity UI.)


> +    h->watches = g_list_remove(h->watches, w);
> +    g_list_free_full(w->events, (GDestroyNotify)free_watch_event);
> +    g_free(w->path);
> +    g_free(w);
> +}
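
If there is anything worth saying at that point, one option would be
something like the sketch below. This is a sketch only: it assumes
xs_impl_unwatch() returns 0 on success and an errno-style code on
failure, and that error_report() is an acceptable way to complain on a
teardown path.

static void xs_be_unwatch(struct qemu_xs_handle *h, struct qemu_xs_watch *w)
{
    int err = xs_impl_unwatch(h->impl, DOMID_QEMU, w->path, NULL,
                              xs_be_watch_cb, w);

    if (err) {
        /* We're tearing the watch down regardless; nothing to propagate,
         * so just make the failure visible. */
        error_report("failed to remove xenstore watch on %s: %d",
                     w->path, err);
    }

    h->watches = g_list_remove(h->watches, w);
    g_list_free_full(w->events, (GDestroyNotify)free_watch_event);
    g_free(w->path);
    g_free(w);
}

Whether that is useful, or just noise on a path where nothing can act
on the failure, is exactly the question above.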

thanks
-- PMM