
Re: [Xen-devel] [PATCH RFC 1/3] xen_disk: handle disk files on ramfs/tmpfs

On Fri, 4 Jan 2013, Roger Pau Monne wrote:
> On 04/01/13 15:54, Stefano Stabellini wrote:
> > On Thu, 3 Jan 2013, Ian Campbell wrote:
> >> On Mon, 2012-12-31 at 12:16 +0000, Roger Pau Monne wrote:
> >>> Files that reside on ramfs or tmpfs cannot be opened with O_DIRECT;
> >>> if the first call to bdrv_open fails with errno = EINVAL, try a
> >>> second call without BDRV_O_NOCACHE.
> >>
> >> Doesn't that risk spuriously turning off NOCACHE on other sorts of
> >> devices as well, which (potentially) opens up a data loss issue?
> > 
> > I agree, we shouldn't make this kind of critical configuration change
> > behind the user's back.
> > 
> > I would rather let the user set the cache attributes. QEMU already has
> > a command line option for it, but we can't use it directly because
> > xen_disk gets its configuration solely from xenstore at the moment.
> > 
> > I guess we could add a cache=foobar key=value pair to the xl disk
> > configuration spec, which gets translated somehow to a key on xenstore.
> > Xen_disk would read the key and set qflags accordingly.
> > We could use the same cache parameters supported by QEMU; see
> > bdrv_parse_cache_flags.
> > 
> > As an alternative, we could reuse the already defined "access" key, like
> > this:
> > 
> > access=rw|nocache
> > 
> > or
> > 
> > access=rw|unsafe
> I needed this patch to be able to perform the benchmarks for the
> persistent grants implementation, but I realize this is not the best
> way to solve the problem.
> It might be worth thinking of a good way to pass more information to
> the qdisk backend (not limited to whether O_DIRECT should be used),
> so that in the future we can take advantage of all the file backends
> that QEMU supports, like GlusterFS or SheepDog.

Yes, you are right.
However, in the Xen world QEMU is never invoked directly, always via
libxl, so it is natural that whatever we want to expose to the user has
to go through libxl. I suppose GlusterFS and SheepDog are no exception:
they would just be another key=value pair, or another value, in the xl
disk config line.
At this point it doesn't matter much how we pass these parameters from
libxl to QEMU: at the moment everything is done via xenstore, and we
might as well keep doing it that way.
