
Re: [Xen-devel] Failed to read /local/domain/0/backend/qdisk/26/51713/feature-flush-cache.



On Thu, 2013-10-31 at 19:13 +0100, Samuel Thibault wrote:
> Ian Campbell, on Thu 31 Oct 2013 15:14:59 +0000, wrote:
> > extras/mini-os/blkfront.c should deal with this. However it
> > unconditionally uses BLKIF_OP_FLUSH_DISKCACHE in blkfront_sync,
> 
> ? It checks for dev->info.flush being 1.

Oh, I incorrectly read that as the request being to flush...

>   AFAIK the printed warning
> does not have consequences by itself.
> 
> > I'm not sure what the fallback (if any) is supposed to be. I doubt it
> > can be safely ignored, but that might be the only/best option?
> 
> Well, I don't know the precise semantics of missing barrier and cache
> features.  Normally it means (respectively) that there is strict ordering
> of operations, and no cache, but I doubt all implementations do that.  I'm
> indeed afraid there is no other option than what is already implemented.

OK, so I also incorrectly read init_blkfront() and took the
        /* path is sized for the longest node name it will hold, but this
         * particular read is of the "mode" node, not feature-flush-cache */
        char path[strlen(dev->backend) + strlen("/feature-flush-cache") + 1];
        snprintf(path, sizeof(path), "%s/mode", dev->backend);
        msg = xenbus_read(XBT_NIL, path, &c);
        if (msg) {
            printk("Error %s when reading the mode\n", msg);
            goto error;
        }
to be erroring on reading feature-flush-cache, which it isn't...

So far I'm 0/2, sorry.

So, the upshot is that this is just a warning message and nothing to
worry about.
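
For reference, the message in $subject comes from xenbus_read_integer();
roughly the following, paraphrasing from memory rather than quoting the
tree, so details may be off:

    int xenbus_read_integer(const char *path)
    {
        char *res, *buf;
        int t;

        res = xenbus_read(XBT_NIL, path, &buf);
        if (res) {
            /* Any failure, including a simply-absent node, warns here. */
            printk("Failed to read %s.\n", path);
            free(res);
            return -1;
        }
        sscanf(buf, "%d", &t);
        free(buf);
        return t;
    }

So when a backend (like qdisk here) never writes feature-flush-cache, the
read fails, the warning is printed, and the caller is left with a value
that is not 1, meaning the flush path is simply never taken.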

fanh2010, if you are really bothered by this warning, perhaps a patch
adding a variant of xenbus_read_integer which silently accepts ENOENT
would be reasonable; something along the lines of the sketch below.
(Wait for Samuel to ack/nack the concept before putting in the effort,
though.)
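
Untested sketch, meant to sit next to the existing helper in xenbus.c;
detecting ENOENT by matching the error string is an assumption about how
the xenstore error comes back, and the name is just a placeholder:

    static int xenbus_read_integer_default(const char *path, int def)
    {
        char *res, *buf;
        int t;

        res = xenbus_read(XBT_NIL, path, &buf);
        if (res) {
            /* A missing node is expected for optional features: fall back
             * to the caller's default without the usual warning.  Other
             * errors still get reported. */
            if (strstr(res, "ENOENT") == NULL)
                printk("Failed to read %s.\n", path);
            free(res);
            return def;
        }
        if (sscanf(buf, "%d", &t) != 1)
            t = def;
        free(buf);
        return t;
    }

init_blkfront() and friends could then read the optional feature-* nodes
through this with a default of 0, and the warning would go away for
backends that simply do not advertise them.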

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

