Re: [Xen-devel] [PATCH v3] displif: add ABI for para-virtual display
On 02/15/2017 11:33 PM, Konrad Rzeszutek Wilk wrote:
.snip..
>> kind of
> Now I get it.
sorry, I was probably not clear

> In that case why not just prefix it with 'in' and 'out'? Such as:
> 'out-ring-ref' and 'out-event-channel' and 'in-ring-ref' along with
> 'in-event-channel'.
hmmm, it may confuse, because you must know "out" from which point of view,
e.g. the frontend's or the backend's. What is more, these "out-" and "in-"
are... nameless. Can we still have something like "ctrl-"/"cmd-"/"req-" for
the req/resp path and probably "evt-" for events from back to front?

> Or perhaps better - borrow the same idea that Stefano came up with for
> 9pfs and PV calls - where his ring does both. Then you just need
> 'ring-ref', 'event-channel', 'max-page-ring-order' (which must be 1 or
> larger). And you split the ring-ref in two - one part for 'in' events and
> the other part for 'out' events?
yes, I saw the current implementations (kbdif, fbif) and what Stefano did,
but I would rather stick to what is currently defined (I believe it is
optimal as is) and hope that maybe someone will put new functionality into
ring.h to serve async events one day :)
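For illustration only - with the "ctrl-"/"evt-" naming kept as is, a single
connector's XenStore nodes could look like the following (the paths, indices
and values here are made up for the example and are not taken from the
patch):

/local/domain/1/device/vdispl/0/0/ctrl-ring-ref = "2832"
/local/domain/1/device/vdispl/0/0/ctrl-event-channel = "15"
/local/domain/1/device/vdispl/0/0/evt-ring-ref = "2833"
/local/domain/1/device/vdispl/0/0/evt-event-channel = "16"

This way the request/response path and the back-to-front event path each get
their own ring and event channel, and the prefix says what the ring is for
rather than which direction it flows.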
..giant snip..

>>>>>>>> Thus, I was thinking of the XenbusStateReconfiguring state as
>>>>>>>> appropriate in this case
>>>>>>> Right, but somebody has to move to this state. Who would do it?
>>>>>> when the backend dies its state changes to "closed". At this moment
>>>>>> the front tries to remove the virtualized device and, if that is
>>>>>> possible/done, goes into the "initialized" state. If not -
>>>>>> "reconfiguring". So, you would ask, how does the front go from
>>>>>> "reconfiguring" into the "initialized" state? This is OS/front
>>>>>> specific, but:
>>>>>> 1. the underlying framework, e.g. DRM/KMS or ALSA, provides
>>>>>> callback(s) to signal that the last client of the virtualized device
>>>>>> has gone and the driver can be removed (equivalent to a module's
>>>>>> usage counter reaching 0)
>>>>>> 2. one can schedule a delayed work (timer/tasklet/workqueue) to
>>>>>> periodically check if this is the right time to re-try the removal
>>>>>> In both cases, after the virtualized device has been removed, we move
>>>>>> into the "initialized" state again and are ready for new connections
>>>>>> with the backend (if it arose from the dead :)
>>>>> Would the frontend have some form of timer to make sure that the
>>>>> backend is still alive? And if it had died then move to Reconfiguring?
>>>> There are at least 2 ways to understand if the back is dead:
>>>> 1. XenBus state change (back is closed)
>>> .. If the backend does a nice shutdown..
>> hm, on Linux I can kill -9 the backend and the XenBus driver seems to be
>> able to turn the back's state into "closed" - isn't this the expected
>> behavior?
> That is the expected behavior. I was thinking more of the backend being a
> guest - and the guest completely going away and nobody clearing its
> XenStore keys. In which case your second option of doing a timeout will
> work. But you may need a 'PING' type request to figure this out?
no ping, just usual calls, one of which will fail

ok, so this is what I have for the recovery flow now:

 *------------------------------- Recovery flow -------------------------------
 *
 * In case of frontend unrecoverable errors backend handles that as
 * if frontend goes into the XenbusStateClosed state.
 *
 * In case of backend unrecoverable errors frontend tries removing
 * the virtualized device. If this is possible at the moment of error,
 * then frontend goes into the XenbusStateInitialising state and is ready for
 * new connection with backend. If the virtualized device is still in use and
 * cannot be removed, then frontend goes into the XenbusStateReconfiguring
 * state until either the virtualized device is removed or backend initiates
 * a new connection. On the virtualized device removal frontend goes into the
 * XenbusStateInitialising state.
 *
 * Note on XenbusStateReconfiguring state of the frontend: if backend has
 * unrecoverable errors then frontend cannot send requests to the backend
 * and thus cannot provide functionality of the virtualized device anymore.
 * After backend is back to normal the virtualized device may still hold some
 * state: configuration in use, allocated buffers, client application state
 * etc. So, in most cases, this will require frontend to implement complex
 * recovery reconnect logic. Instead, by going into the
 * XenbusStateReconfiguring state, frontend will make sure no new clients of
 * the virtualized device are accepted, allow existing client(s) to exit
 * gracefully by signaling error state etc.
 * Once all the clients are gone frontend can reinitialize the virtualized
 * device and get into the XenbusStateInitialising state again, signaling the
 * backend that a new connection can be made.
 *
 * There are multiple conditions possible under which frontend will go from
 * XenbusStateReconfiguring into XenbusStateInitialising, some of them are OS
 * specific. For example:
 * 1. The underlying OS framework may provide callbacks to signal that the
 *    last client of the virtualized device has gone and the device can be
 *    removed.
 * 2. Frontend can schedule a deferred work (timer/tasklet/workqueue)
 *    to periodically check if this is the right time to re-try removal of
 *    the virtualized device.
 * 3. By any other means.
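To make the state machine concrete, below is a minimal Linux-flavoured sketch
of what a frontend could do - this is my illustration, not code from the
patch. try_remove_vdevice() and schedule_removal_retry() are hypothetical
placeholders for the OS-specific parts (conditions 1 and 2 in the list
above); xenbus_switch_state(), struct xenbus_device and the XenbusState
values are the existing Linux/Xen interfaces. The first function would be
wired up to react to the backend's XenBus state changes.

#include <xen/xenbus.h>

/* Hypothetical OS-specific helpers, see conditions 1 and 2 above. */
static int try_remove_vdevice(struct xenbus_device *dev); /* 0 on success */
static void schedule_removal_retry(struct xenbus_device *dev);

/* Called when the backend's XenBus state changes. */
static void displif_backend_changed(struct xenbus_device *dev,
                                    enum xenbus_state backend_state)
{
    switch (backend_state) {
    case XenbusStateClosed:
        /* Backend is gone: try to remove the virtualized device now. */
        if (try_remove_vdevice(dev) == 0) {
            /* Removed: signal we are ready for a new connection. */
            xenbus_switch_state(dev, XenbusStateInitialising);
        } else {
            /*
             * Still in use: accept no new clients, let the existing
             * ones exit gracefully and re-try the removal later.
             */
            xenbus_switch_state(dev, XenbusStateReconfiguring);
            schedule_removal_retry(dev);
        }
        break;
    default:
        break;
    }
}

/* Deferred work (timer/tasklet/workqueue) body: re-try the removal. */
static void removal_retry(struct xenbus_device *dev)
{
    if (try_remove_vdevice(dev) == 0)
        xenbus_switch_state(dev, XenbusStateInitialising);
    else
        schedule_removal_retry(dev); /* not yet: check again later */
}

Note that in this flow only the frontend ever moves itself out of
XenbusStateReconfiguring; the backend never drives that transition.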
I would also like to re-use this recovery flow text as is for sndif; for
that reason I do not use "virtualized display" here, but keep it nameless,
e.g. "virtualized device".

I am attaching the diff between v3 and v4 for your convenience.

Thank you,
Oleksandr

Attachment: v4.diff