
Re: [Xen-devel] [PATCH v11 20/27] Support colo mode for qemu disk



On Tue, Mar 08, 2016 at 09:09:20PM -0500, Konrad Rzeszutek Wilk wrote:
> > Furthermore, I think coming up with a story for PV-aware guests (PVHVM,
> > PV and PVH) is also non-trivial. For one the disk replication logic is
> 
> Pls keep in mind that PV guests can use QEMU qdisk if they wish.
> 

Oh right, I somehow talked myself into thinking of blkback only.
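
For reference, pointing a PV guest at qdisk instead of blkback is just
a matter of the backendtype key in the xl disk specification. A minimal
sketch (vdev and paths are made up):

  # 'phy' via blkback -- replication would have to live in the kernel
  disk = [ 'format=raw,vdev=xvda,access=rw,backendtype=phy,target=/dev/vg/guest' ]

  # qdisk -- the disk is served by QEMU, where the replication logic lives
  disk = [ 'format=qcow2,vdev=xvda,access=rw,backendtype=qdisk,target=/images/guest.qcow2' ]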

> This is more of a 'phy' vs 'file' question.
> > not implemented in the PV block backend, and we're not sure how
> > feasible it is to replicate the QEMU logic in the kernel, but we're
> > quite sure it is not going to be trivial, technically and
> > politically. The uncertainty is too big to come up with a clear
> > idea of what it would look like.
> 
> .. and I think doing it in the kernel is not a good idea. With PVH
> coming as the initial domain - we will now have solved the @$@#@@
> syscall bounce to hypervisor slow path - which means that any libaio
> operation QEMU does on a file - which ends up going to the kernel
> with a syscall - will happen without incurring a huge context switch!
> 
> In other words the speed difference between qdisk and xen-blkback
> will now be just the syscall + aio code + fs overhead.
> 
> With proper batching this can be made faster*.
> 
> *: I am ignoring the other issues such as grant unmap in PVH guests
> and qdisk being quite spartan in features compared to xen-blkback.
> 
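
To make the batching point concrete: a single io_submit() can carry an
arbitrary number of requests, so the per-request syscall cost gets
amortised. A minimal sketch, not QEMU code -- the file name and request
geometry are made up (build with: gcc batch.c -laio):

/* Submit NR_REQS reads with ONE io_submit() syscall and reap them
 * with ONE io_getevents() syscall.  In practice O_DIRECT is what
 * makes the submission truly asynchronous; omitted here for brevity. */
#include <libaio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NR_REQS 8
#define REQ_SZ  4096

int main(void)
{
    io_context_t ctx;
    struct iocb iocbs[NR_REQS], *iocbps[NR_REQS];
    struct io_event events[NR_REQS];
    int fd, i, ret;

    fd = open("disk.img", O_RDONLY);            /* made-up file name */
    if (fd < 0) { perror("open"); return 1; }

    memset(&ctx, 0, sizeof(ctx));
    ret = io_setup(NR_REQS, &ctx);              /* create the AIO context */
    if (ret < 0) { fprintf(stderr, "io_setup: %d\n", ret); return 1; }

    for (i = 0; i < NR_REQS; i++) {
        void *buf;
        if (posix_memalign(&buf, 512, REQ_SZ)) return 1;
        io_prep_pread(&iocbs[i], fd, buf, REQ_SZ, (long long)i * REQ_SZ);
        iocbps[i] = &iocbs[i];
    }

    /* One syscall submits all NR_REQS requests -- the batching win. */
    ret = io_submit(ctx, NR_REQS, iocbps);
    if (ret != NR_REQS) { fprintf(stderr, "io_submit: %d\n", ret); return 1; }

    /* And one syscall reaps all the completions. */
    ret = io_getevents(ctx, NR_REQS, NR_REQS, events, NULL);
    fprintf(stderr, "completed %d reads\n", ret);

    io_destroy(ctx);
    close(fd);
    return 0;
}

IIRC QEMU's linux-aio backend already batches along these lines behind
its plug/unplug hooks, so qdisk should benefit without extra work.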

So now we do have a tangible story for PV-aware guests for disk
replication -- use the qdisk backend. (Note that other hindrances,
such as checkpointing the kernel at a fast enough pace, still need to
be solved.) That means the current interfaces should be sufficient.
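
For completeness, the setup then stays entirely at the qdisk level: on
the primary it is just extra keys in the xl disk specification. A
sketch only, using the parameter names proposed in this series (they
may still change; host name, port and paths are made up, and the line
is wrapped here for readability):

  disk = [ 'format=raw,vdev=hda,access=rw,backendtype=qdisk,
            colo,colo-host=secondary,colo-port=9000,colo-export=hda,
            active-disk=/mnt/ramfs/active.qcow2,
            hidden-disk=/mnt/ramfs/hidden.qcow2,
            target=/images/hvm.img' ]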

Wei.
