
Re: [PATCH v2] gnttab: defer allocation of status frame tracking array

Hi Jan,

On 04/01/2021 13:37, Jan Beulich wrote:
On 24.12.2020 10:57, Julien Grall wrote:
Hi Jan,

On 23/12/2020 15:13, Jan Beulich wrote:
This array can be large when many grant frames are permitted; avoid
allocating it when it's not going to be used anyway, by doing this only
in gnttab_populate_status_frames().

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
v2: Defer allocation to when a domain actually switches to the v2 grant
table interface.

--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1725,6 +1728,17 @@ gnttab_populate_status_frames(struct dom
       /* Make sure, prior version checks are architectural visible */
+ if ( gt->status == ZERO_BLOCK_PTR )
+    {
+        gt->status = xzalloc_array(grant_status_t *,
+                                   grant_to_status_frames(gt->max_grant_frames));
+        if ( !gt->status )
+        {
+            gt->status = ZERO_BLOCK_PTR;
+            return -ENOMEM;
+        }
+    }
       for ( i = nr_status_frames(gt); i < req_status_frames; i++ )
           if ( (gt->status[i] = alloc_xenheap_page()) == NULL )
@@ -1745,18 +1759,23 @@ status_alloc_failed:
           gt->status[i] = NULL;
+    if ( !nr_status_frames(gt) )
+    {
+        xfree(gt->status);
+        gt->status = ZERO_BLOCK_PTR;
+    }
       return -ENOMEM;
static int
   gnttab_unpopulate_status_frames(struct domain *d, struct grant_table *gt)
-    unsigned int i;
+    unsigned int i, n = nr_status_frames(gt);
/* Make sure, prior version checks are architectural visible */
- for ( i = 0; i < nr_status_frames(gt); i++ )
+    for ( i = 0; i < n; i++ )
           struct page_info *pg = virt_to_page(gt->status[i]);
           gfn_t gfn = gnttab_get_frame_gfn(gt, true, i);
@@ -1811,12 +1830,12 @@ gnttab_unpopulate_status_frames(struct d
           page_set_owner(pg, NULL);
- for ( i = 0; i < nr_status_frames(gt); i++ )
-    {
-        free_xenheap_page(gt->status[i]);
-        gt->status[i] = NULL;
-    }
       gt->nr_status_frames = 0;
+    smp_wmb(); /* Just in case - all accesses should be under lock. */

I think gt->status cannot safely be accessed locklessly. If an entity
reads gt->nr_status_frames != 0, there is no promise that the array will
still be accessible by the time it dereferences gt->status, as the array
may have been freed in the meantime.

Yet the common principle of (pointer,count) pairs to describe arrays
to be updated / accessed in sequences guaranteeing a non-zero count
implies a valid pointer could as well be considered to apply here.
I.e. when freeing, at least in principle clearing count first would
be a sensible thing to do, wouldn't it?
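In code, the teardown ordering described here might look like the
following (a minimal hypothetical sketch, not the actual grant-table
code; smp_wmb() is modelled with a C11 release fence):

```c
/*
 * Hypothetical sketch of the teardown ordering described above
 * (clear the count before invalidating the pointer); this is NOT
 * the actual grant-table code. smp_wmb() is modelled here with a
 * C11 release fence.
 */
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

#define smp_wmb() atomic_thread_fence(memory_order_release)

struct arr {
    void **ptr;
    unsigned int count;
};

static void arr_teardown(struct arr *a)
{
    void **p = a->ptr;

    a->count = 0;   /* readers checking the count now bail out ... */
    smp_wmb();      /* ... and must see the zero count before ...  */
    a->ptr = NULL;  /* ... the pointer is invalidated.             */

    /*
     * Freeing here is still not safe against truly lockless readers:
     * one may have passed the count check just before it was cleared.
     */
    free(p);
}
```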

I am not arguing about whether this is a sensible thing to do, but about how someone else could actually make use of it. The common lockless pattern for accessing such an array would be to check the count and, if it is non-zero, access the array. Imagine the following:

CPU0 (free the array)       | CPU1 (access the array)
                            | if ( !gt->nr_status_frames )
gt->nr_status_frames = 0;   |   return;
smp_wmb();                  |
gt->status = NULL;          |
                            | smp_rmb();
                            | access gt->status[X];

Without any lock (or refcounting), I can't see how the example above would be safe.
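For comparison, the straightforward safe version keeps both the check
and the access on the reader side, and the free on the writer side,
under the same lock (hypothetical simplified code using a pthread
mutex, not the grant-table implementation):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

struct arr {
    pthread_mutex_t lock;
    void **ptr;
    unsigned int count;
};

/*
 * Reader: the count check and the element access must sit in the
 * same critical section, or the array can be freed in between.
 */
static void *arr_get(struct arr *a, unsigned int i)
{
    void *v = NULL;

    pthread_mutex_lock(&a->lock);
    if ( i < a->count )
        v = a->ptr[i];
    pthread_mutex_unlock(&a->lock);

    return v;
}

/* Writer: freeing under the same lock closes the window entirely. */
static void arr_free(struct arr *a)
{
    pthread_mutex_lock(&a->lock);
    free(a->ptr);
    a->ptr = NULL;
    a->count = 0;
    pthread_mutex_unlock(&a->lock);
}
```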

Subsequently allocation and
consumers could be updated to follow suit ...

Allocation and freeing cannot happen locklessly (at least in the current state). For the consumers, see above.

So I would drop the barrier here.

I certainly can (unless double checking would tell me otherwise),
but I'm not convinced this is a good idea. I'd rather have a barrier
too many in the code than one too little, even if the "too little"
may only be the result of a future change.

Future-proof code is always good to have. However, we should avoid adding barriers that don't seem to be usable either today or in the future.

Your in-code comment suggests the barrier would help when the array is accessed without any lock. However, I can't see how this hypothetical code would work. Can you provide an example of how you would deal with memory freed behind your back?
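For the record, the usual answer to "memory freed behind your back" is
to pin the array with a reference count before dereferencing it, along
these lines (hypothetical C11 sketch, not an existing Xen interface):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

/*
 * Hypothetical sketch of the refcounting alternative mentioned
 * above: a lockless reader pins the array before touching it, so
 * the memory cannot be freed behind its back. Not Xen code.
 */
struct arr {
    atomic_uint refs;   /* 0 => tearing down; owner holds one ref */
    void **ptr;
    unsigned int count;
};

/* Try to pin the array; fails once teardown has begun. */
static int arr_get_ref(struct arr *a)
{
    unsigned int r = atomic_load(&a->refs);

    while ( r )
        if ( atomic_compare_exchange_weak(&a->refs, &r, r + 1) )
            return 1;   /* pinned: a->ptr stays valid until put */

    return 0;           /* refs hit zero: array is going away */
}

/* Drop a reference; the last user frees the array. */
static void arr_put_ref(struct arr *a)
{
    if ( atomic_fetch_sub(&a->refs, 1) == 1 )
        free(a->ptr);
}
```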

And this isn't a path
where performance considerations would suggest avoiding barriers
when they're not strictly needed.

I am not concerned about the performance here.


Julien Grall


