
Re: [PATCH 2/2] xen/mm: limit non-scrubbed allocations to a specific order


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 15 Jan 2026 11:48:47 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 15 Jan 2026 10:49:13 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, Jan 14, 2026 at 09:48:59AM +0100, Jan Beulich wrote:
> On 13.01.2026 15:01, Roger Pau Monné wrote:
> > On Fri, Jan 09, 2026 at 12:19:26PM +0100, Jan Beulich wrote:
> >> On 08.01.2026 18:55, Roger Pau Monne wrote:
> >>> --- a/xen/common/memory.c
> >>> +++ b/xen/common/memory.c
> >>> @@ -279,6 +279,18 @@ static void populate_physmap(struct memop_args *a)
> >>>  
> >>>                  if ( unlikely(!page) )
> >>>                  {
> >>> +                    nodeid_t node = MEMF_get_node(a->memflags);
> >>> +
> >>> +                    if ( memory_scrub_pending(node) ||
> >>> +                         (node != NUMA_NO_NODE &&
> >>> +                          !(a->memflags & MEMF_exact_node) &&
> >>> +                          memory_scrub_pending(node = NUMA_NO_NODE)) )
> >>> +                    {
> >>> +                        scrub_free_pages(node);
> >>> +                        a->preempted = 1;
> >>> +                        goto out;
> >>> +                    }
> >>
> >> At least for order 0 requests there's no point in trying this. With the
> >> current logic, actually for orders up to MAX_DIRTY_ORDER.
> > 
> > Yes, otherwise we might force the CPU to do some scrubbing work when
> > it won't satisfy its allocation request anyway.
> > 
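Something along these lines is what I'd do for v2 (untested sketch on
top of the hunk quoted above; MAX_DIRTY_ORDER being the limit you refer
to):

    if ( unlikely(!page) )
    {
        nodeid_t node = MEMF_get_node(a->memflags);

        /*
         * Scrubbing can only help requests of more than
         * MAX_DIRTY_ORDER pages: smaller ones can be satisfied
         * from dirty memory directly, so their failure is not a
         * consequence of pending scrub work.
         */
        if ( a->extent_order > MAX_DIRTY_ORDER &&
             (memory_scrub_pending(node) ||
              (node != NUMA_NO_NODE &&
               !(a->memflags & MEMF_exact_node) &&
               memory_scrub_pending(node = NUMA_NO_NODE))) )
        {
            scrub_free_pages(node);
            a->preempted = 1;
            goto out;
        }
    }
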
> >> Further, from a general interface perspective, wouldn't we need to do the
> >> same for at least XENMEM_increase_reservation?
> > 
> > Possibly yes.  TBH I would also be fine with strictly limiting
> > XENMEM_increase_reservation to 2M order extents, even for the control
> > domain.  The physmap population is the only operation that actually
> > requires bigger extents.
> 
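FWIW, if we went that route the clamp could sit right next to the
existing max_order() check in do_memory_op() (hypothetical one-liner;
PAGE_ORDER_2M used as the 2M cut-off for illustration):

    /* Hypothetical: cap increase-reservation extents at 2M. */
    if ( op == XENMEM_increase_reservation &&
         reservation.extent_order > PAGE_ORDER_2M )
        return start_extent;
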
> Hmm, that's an option, yes, but an ABI-changing one.

I don't think it changes the ABI: Xen has always reserved the right to
block high order allocations.  See for example how max_order() applies
different limits depending on the domain's permissions; I would not
consider those limits part of the ABI, as they can be changed from the
command line.
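
For reference, max_order() is currently shaped roughly like this
(paraphrased from xen/common/memory.c, not verbatim; the *_max_order
variables are what the memop-max-order command line option sets):

    static unsigned int max_order(const struct domain *d)
    {
        unsigned int order = domu_max_order;

    #ifdef CONFIG_HAS_PASSTHROUGH
        /* Domains with device pass-through may get a higher limit. */
        if ( cache_flush_permitted(d) && order < ptdom_max_order )
            order = ptdom_max_order;
    #endif

        /* The control and hardware domains get their own limits. */
        if ( is_control_domain(d) && order < ctldom_max_order )
            order = ctldom_max_order;

        if ( is_hardware_domain(d) && order < hwdom_max_order )
            order = hwdom_max_order;

        return min(order, MAX_ORDER + 0U);
    }

None of these bounds are part of the interface proper; they merely cap
what a given caller may request at runtime.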

Thanks, Roger.