Re: [PATCH v2] xen-blkfront: allow discard-* nodes to be optional
This time the patch applied cleanly. The trim command seems to work as well,
meaning there were no error messages and a certain amount of blocks (5.7 GiB)
was trimmed. The trimming took some time (10-20 seconds), which suggests it is
actually discarding the blocks at the host.

First run:

[arthur@test-arch ~]$ sudo fstrim -v /
/: 5.7 GiB (6074368000 bytes) trimmed

Second run:

[arthur@test-arch ~]$ sudo fstrim -v /
/: 0 B (0 bytes) trimmed

No errors were reported in the dmesg of the VM, no errors in Dom0, and no
errors in the dmesg of Xen (xl dmesg). Based on this single test, it seems to
work. You can add me as Tested-by.

On Wed, 20 Jan 2021 at 15:35, Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
>
> On Wed, Jan 20, 2021 at 03:23:30PM +0100, Arthur Borsboom wrote:
> > Hi Roger,
> >
> > I have set up a test environment based on Linux 5.11.0-rc4.
> > The patch did not apply cleanly, so I copied/pasted the patch manually.
> >
> > Without the patch the call trace (as reported) is visible in dmesg.
> > With the patch the call trace in dmesg is gone, but ... (there is always
> > a but) ...
> >
> > Now the discard action returns the following.
> >
> > [arthur@test-arch ~]$ sudo fstrim -v /
> > fstrim: /: the discard operation is not supported
> >
> > It might be correct, but of course I was hoping the Xen VM guest would
> > pass on the discard request to the block device in the Xen VM host,
> > which is a disk partition.
> > Any suggestions?
>
> Hm, that's not what I saw in my testing: the operation worked OK, and
> that's what I would expect to happen in your case as well, since I know
> the xenstore keys.
>
> I think it's possible your email client has mangled the patch. I'm
> attaching the same patch to this email; could you try to apply it again
> and report back? (This time it should apply cleanly.)
>
> Thanks, Roger.

--
Arthur Borsboom
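For anyone reproducing this test, a quick way to check whether discard is
wired up end to end, in addition to running fstrim. This is only a sketch:
the device names, backend domain ID, and virtual device number below are
assumptions and will differ per setup.

# In the guest: a non-zero DISC-MAX / discard_max_bytes means the
# frontend picked up the discard feature from the backend.
lsblk --discard /dev/xvda
cat /sys/block/xvda/queue/discard_max_bytes

# In Dom0: list the blkback xenstore nodes for the guest disk and look
# for the discard-* entries the patch makes optional.
# (Backend domain 0, guest domain ID 1 and vdev 51712, i.e. xvda, are
# placeholders.)
xenstore-ls /local/domain/0/backend/vbd/1/51712 | grep -i discard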