Re: [Xen-devel] [PATCH RFC 1/4] xen: evtchn: make evtchn_reset() ready for soft reset
On Fri, 2015-06-05 at 09:52 +0100, Jan Beulich wrote:
> >>> On 04.06.15 at 17:47, <tim@xxxxxxx> wrote:
> > At 16:19 +0100 on 04 Jun (1433434780), David Vrabel wrote:
> >> On 04/06/15 15:05, Tim Deegan wrote:
> >> > At 15:35 +0200 on 03 Jun (1433345719), Vitaly Kuznetsov wrote:
> >> >> We need to close all event channels so the domain performing soft reset
> >> >> will be able to open them back. Interdomain channels are, however,
> >> >> special. We need to keep track of who opened them, as in (the most
> >> >> common) case they were opened by the control domain and we won't be
> >> >> able (or allowed) to re-establish them.
> >> >
> >> > I'm not sure I understand -- can you give an example of what this is
> >> > avoiding? I would have thought that the kexec'ing VM needs to tear
> >> > down _all_ its connections and then restart the ones it wants in the
> >> > new OS.
> >>
> >> There are some that are in state ECS_UNBOUND (console, xenstore, I
> >> think) at start of day that are allocated on behalf of the domain by the
> >> toolstack. The kexec'd image will expect them in the same state.
> >
> > I see. But does that not come under "tear down all its connections
> > and then restart the ones it wants"? I.e. should the guest not reset
> > all its channels and then allocate fresh console & xenstore ones?
>
> But just like during boot the guest doesn't (and can't) create these,
> it also can't do so after kexec. The remote side needs to be set up
> for it, so it can simply bind to them. I don't think this can work
> without tool stack involvement.

Which IIRC is pretty much the logic which led to the more toolstack-centric
approach in the other series. Are you suggesting something in the middle?
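For readers unfamiliar with the mechanism being debated, the standalone C sketch below models the idea (it is not Vitaly's actual patch, and every structure and field name in it is invented for illustration): on soft reset, close the domain's event channels except the interdomain/unbound ones that the toolstack or control domain allocated on the guest's behalf (console, xenstore), since the guest cannot re-create those itself and can only bind to them again.

```c
/*
 * Illustrative sketch only -- NOT the Xen patch under review.
 * Models the soft-reset policy discussed above: close everything the
 * guest can reopen itself, keep the channels the remote (control)
 * domain set up for it, e.g. console and xenstore.
 */
#include <stdbool.h>
#include <stdio.h>

enum ecs_state { ECS_FREE, ECS_UNBOUND, ECS_INTERDOMAIN };

struct evtchn {
    enum ecs_state state;
    bool opened_by_remote;   /* allocated for us by the toolstack/control domain */
    const char *what;        /* demo output only */
};

/* Close a channel: in this model, just mark it free. */
static void evtchn_close(struct evtchn *chn)
{
    chn->state = ECS_FREE;
    chn->opened_by_remote = false;
}

/*
 * Soft-reset pass over a domain's event channels: tear everything down
 * except the channels the guest could not reopen on its own.
 */
static void evtchn_soft_reset(struct evtchn *chns, unsigned int nr)
{
    for (unsigned int i = 0; i < nr; i++) {
        if (chns[i].state == ECS_FREE)
            continue;
        if (chns[i].opened_by_remote) {
            printf("keeping port %u (%s)\n", i, chns[i].what);
            continue;
        }
        printf("closing port %u (%s)\n", i, chns[i].what);
        evtchn_close(&chns[i]);
    }
}

int main(void)
{
    struct evtchn chns[] = {
        { ECS_INTERDOMAIN, true,  "xenstore" },
        { ECS_UNBOUND,     true,  "console"  },
        { ECS_INTERDOMAIN, false, "blkfront" },
        { ECS_INTERDOMAIN, false, "netfront" },
    };

    evtchn_soft_reset(chns, sizeof(chns) / sizeof(chns[0]));
    return 0;
}
```

Whether this "remember who opened it" bookkeeping belongs in the hypervisor at all, or whether the toolstack should simply re-seed the console/xenstore channels after the reset, is exactly the question raised in the reply above.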