Re: [Xen-devel] [PATCH 16/27] tools/libxl: Infrastructure for reading a libxl migration v2 stream
On Tue, 2015-06-16 at 16:01 +0100, Andrew Cooper wrote:
> On 16/06/15 15:31, Ian Campbell wrote:
> > On Mon, 2015-06-15 at 14:44 +0100, Andrew Cooper wrote:
> >> From: Ross Lagerwall <ross.lagerwall@xxxxxxxxxx>
> >>
> >> Signed-off-by: Ross Lagerwall <ross.lagerwall@xxxxxxxxxx>
> >> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> >> CC: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
> >> CC: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
> >> CC: Wei Liu <wei.liu2@xxxxxxxxxx>
> > Overall looks good, I've got some comments below and I think it almost
> > certainly wants eyes from Ian who knows more about the dc infra etc.
> >
> >> +void libxl__stream_read_start(libxl__egc *egc,
> >> +                              libxl__stream_read_state *stream)
> >> +{
> >> +    libxl__datacopier_state *dc = &stream->dc;
> >> +    int ret = 0;
> >> +
> >> +    /* State initialisation. */
> >> +    assert(!stream->running);
> >> +
> >> +    memset(dc, 0, sizeof(*dc));
> > libxl__datacopier_init, please
>
> That call is made by libxl__datacopier_start() each and every time, and
> unlike here, is matched with an equivalent _kill() call.

Hrm, I think I'd best defer to Ian J on what the right way to deal with a
dc being set up and reused in this way is.

> >> +    /* Start reading the stream header. */
> >> +    dc->readwhat = "stream header";
> >> +    dc->readbuf = &stream->hdr;
> >> +    stream->expected_len = dc->bytes_to_read = sizeof(stream->hdr);
> >> +    dc->used = 0;
> >> +    dc->callback = stream_header_done;
> > This pattern of resetting and reinitialising the dc occurs in multiple
> > places, I think a helper would be in order, some sort of
> > stream_next_record_init or something perhaps?
>
> The only feasible helper would have to take everything as parameters;
> there is insufficient similarity between all users.
>
> I dunno whether that would be harder to read...

I was more concerned about ensuring everyone does everything (especially
if a new field gets added); having a function with parameters would cause
a compile failure when the addition was (hopefully) propagated to that
function and added as a parameter.

Let's see what Ian thinks.

> >
> >> +void libxl__stream_read_abort(libxl__egc *egc,
> >> +                              libxl__stream_read_state *stream, int rc)
> >> +{
> >> +    stream_failed(egc, stream, rc);
> >> +}
> >> +
> >> +static void stream_success(libxl__egc *egc, libxl__stream_read_state *stream)
> >> +{
> >> +    stream->rc = 0;
> >> +    stream->running = false;
> >> +
> >> +    stream_done(egc, stream);
> > Push the running = false into stream_done and flip the assert there?
> > Logically the stream is still running until it is done, so having done
> > assert it isn't running seems counter-intuitive.
>
> This is more for peace of mind. stream_done() may strictly only ever be
> called once, hence its assert.

assert(stream->running); stream->running = false in stream_done() gives
the same peace of mind, doesn't it?

> >> +    if (onwrite || dc->used != stream->expected_len) {
> >> +        ret = ERROR_FAIL;
> >> +        LOG(ERROR, "write %d, err %d, expected %zu, got %zu",
> >> +            onwrite, errnoval, stream->expected_len, dc->used);
> >> +        goto err;
> >> +    }
> > I think you need to check errnoval == 0 in the !onwrite case, otherwise
> > you may miss a read error?
>
> "dc->used != stream->expected_len" covers all possible read errors, in
> the "something went wrong" kind of way.

With the current implementation, perhaps, but the doc doesn't seem to say
you can rely on it (an appropriate reaction might be to change the doc
rather than the code, I don't mind).
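For concreteness, the helper being discussed further up
(stream_next_record_init) might look roughly like the sketch below. This is
only an illustration, not code from the patch: the field names come from the
quoted hunks, while the callback pointer type and the helper itself are
assumptions.

    /* Hypothetical helper: every dc field the stream code touches is a
     * parameter, so adding a new field later forces a compile failure at
     * each call site instead of a silently missed assignment.  The
     * callback pointer type is assumed from the quoted callback hunk. */
    static void stream_next_record_init(libxl__stream_read_state *stream,
                                        const char *what, void *buf,
                                        size_t len,
                                        void (*cb)(libxl__egc *,
                                                   libxl__datacopier_state *,
                                                   int /* onwrite */,
                                                   int /* errnoval */))
    {
        libxl__datacopier_state *dc = &stream->dc;

        dc->readwhat = what;
        dc->readbuf = buf;
        stream->expected_len = dc->bytes_to_read = len;
        dc->used = 0;
        dc->callback = cb;
    }

    /* The open-coded stream header setup above would then become: */
    stream_next_record_init(stream, "stream header", &stream->hdr,
                            sizeof(stream->hdr), stream_header_done);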
> > Sharing the dc between all these differing usages is starting to rankle
> > a little, but I think it is necessary because it may have queued data
> > from a previous read which was larger than the current record, correct?
> >
> > Hrm, isn't setting dc->used = 0 on each reset potentially throwing some
> > stuff away?
>
> We should never be in a case where we are setting up a new read/write
> from the dc with any previous IO pending.

Essentially because the dc uses its idea of bytes remaining to do the
read, the scenario I was imagining only comes about if the dc is reading
large blocks and segmenting them later, which isn't how it is used here.

> >> +    if (dc->writefd == -1) {
> >> +        ret = ERROR_FAIL;
> >> +        LOGE(ERROR, "Unable to open '%s'", path);
> >> +        goto err;
> >> +    }
> >> +    dc->maxsz = dc->bytes_to_read = rec_hdr->length - sizeof(*emu_hdr);
> >> +    stream->expected_len = dc->used = 0;
> > expecting 0? This differs from the pattern common everywhere else and
> > I'm not sure why.
>
> The datacopier has been overloaded so many times, it is messy to use.
>
> In this case, we are splicing from a read fd to a write fd, rather than
> to a local buffer.
>
> Therefore, when the IO is complete, we expect 0 bytes in the local
> buffer, as all data should end up in the fd.

I think using 2 or more data copiers to cover the different configurations
might help? You can still reuse one for the normal record processing, but
a separate dedicated one for writing the emu to a file might iron out a
wrinkle.

> >
> >> +    memset(dc, 0, sizeof(*dc));
> >> +    dc->ao = stream->ao;
> >> +    dc->readfd = stream->fd;
> >> +    dc->writefd = -1;
> >> +
> >> +    /* Do we need to eat some padding out of the stream? */
> > Why only now and not for e.g. the xenstore stuff (which doesn't appear
> > to be explicitly padded)?
>
> Any record which is read into a local buffer has the local buffer
> aligned up, and the padding read onto the end.

OK.

> > And given that, why not handle this in some central place rather than
> > in the emulator-only place?
>
> Experimentally, some versions of Qemu barf if they have trailing zeros
> in the save file. I think they expect to find eof() on a qemu record
> boundary.

What I was suggesting was to do the padding in the core, where it would
often be a zero-length nop, but would save mistakes (or duplication) if
some other record also needs such handling in the future.

Ian.
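For concreteness, the "padding in the core" idea might look roughly like the
sketch below. Again this is only an illustration: the 8-octet record
alignment is taken from the migration v2 stream format, while pad_scratch,
padding_done and the helper itself are hypothetical names rather than code
from the patch.

    /* Hypothetical central helper: set up a dc read that eats the
     * alignment padding after a record body.  Records are padded out to
     * a multiple of 8 octets, so for already-aligned records this is a
     * zero-length nop and the caller proceeds straight to the next
     * record. */
    static bool setup_padding_read(libxl__stream_read_state *stream,
                                   uint32_t body_length)
    {
        libxl__datacopier_state *dc = &stream->dc;
        size_t padsz = (8 - (body_length % 8)) % 8;

        if (padsz == 0)
            return false;                /* No padding to eat; carry on. */

        dc->readwhat = "record padding";
        dc->readbuf = stream->pad_scratch;   /* hypothetical scratch buffer */
        stream->expected_len = dc->bytes_to_read = padsz;
        dc->used = 0;
        dc->callback = padding_done;         /* hypothetical callback */
        return true;                     /* Padding read has been set up. */
    }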