
Re: [Xen-devel] [PATCHv5 12/14] xen-blkback: safely unmap grants in case they are still in use



Hello,

On 12/03/15 at 9:08, Bob Liu wrote:
> 
> On 03/09/2015 07:02 PM, David Vrabel wrote:
>> On 09/03/15 10:51, Bob Liu wrote:
>>>
>>> On 03/09/2015 05:30 PM, David Vrabel wrote:
>>>> On 09/03/15 09:09, Bob Liu wrote:
>>>>> Hi David,
>>>>>
>>>>> Recently I met an issue which is likely related to this patch. It
>>>>> happened when running a block benchmark on domU; the backend was an iSCSI
>>>>> disk connected to dom0. I got the panic below at put_page_testzero() on
>>>>> dom0: at that time the ixgbe network card was freeing skb pages in
>>>>> __skb_frag_unref(), but the page->_count was already 0.
>>>>> Do you think it is possible that the page was already freed by blkback?
>>>>
>>>> It's possible, but in this case I think the blkback device must have
>>>> been destroyed for this to have happened, because blkback doesn't free
>>>> the pages until it is destroyed.
>>>>
>>>
>>> Sorry, I didn't get the point here; doesn't bio_complete free pages?
>>> E.g.:
>>> __end_block_io_op() > xen_blkbk_unmap_and_respond() > put_free_pages()
>>> Then shrink_free_pagepool() finally frees the page.
>>
>> Ug. That's all the persistent grant stuff I'm not very familiar with.
>> Perhaps Roger can comment?
>>
> 
> Well, I think I may have figured out this issue, but I haven't confirmed it yet.
> 
> In purge_persistent_gnt() > unmap_purged_grants(), we should also change
> to use gnttab_unmap_refs_async().

Good catch! Yes, xen_blkbk_unmap_purged_grants is not using the new
unmap interface. I have a patch that should fix this, but right now I'm
not even able to compile-test it. If you want to give it a spin:
 
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 2a04d34..cd1517f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -327,9 +327,19 @@ void xen_blkbk_unmap_purged_grants(struct work_struct *work)
        struct gnttab_unmap_grant_ref unmap[BLKIF_MAX_SEGMENTS_PER_REQUEST];
        struct page *pages[BLKIF_MAX_SEGMENTS_PER_REQUEST];
        struct persistent_gnt *persistent_gnt;
-       int ret, segs_to_unmap = 0;
+       int segs_to_unmap = 0;
+       struct gntab_unmap_queue_data unmap_data;
+       struct completion unmap_completion;
        struct xen_blkif *blkif = container_of(work, typeof(*blkif), persistent_purge_work);
 
+       init_completion(&unmap_completion);
+
+       unmap_data.data = &unmap_completion;
+       unmap_data.done = &free_persistent_gnts_unmap_callback;
+       unmap_data.pages = pages;
+       unmap_data.unmap_ops = unmap;
+       unmap_data.kunmap_ops = NULL;
+
        while(!list_empty(&blkif->persistent_purge_list)) {
                persistent_gnt = list_first_entry(&blkif->persistent_purge_list,
                                                  struct persistent_gnt,
@@ -343,20 +353,19 @@ void xen_blkbk_unmap_purged_grants(struct work_struct *work)
 
                pages[segs_to_unmap] = persistent_gnt->page;
 
-               if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-                       ret = gnttab_unmap_refs(unmap, NULL, pages,
-                               segs_to_unmap);
-                       BUG_ON(ret);
+               if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
+                   list_empty(&blkif->persistent_purge_list)) {
+
+                       unmap_data.count = segs_to_unmap;
+                       gnttab_unmap_refs_async(&unmap_data);
+                       wait_for_completion(&unmap_completion);
+
                        put_free_pages(blkif, pages, segs_to_unmap);
                        segs_to_unmap = 0;
                }
                kfree(persistent_gnt);
        }
-       if (segs_to_unmap > 0) {
-               ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
-               BUG_ON(ret);
-               put_free_pages(blkif, pages, segs_to_unmap);
-       }
+       BUG_ON(segs_to_unmap != 0);
 }
 
 static void purge_persistent_gnt(struct xen_blkif *blkif)


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

