Re: [RFC PATCH v2 1/2] xen/memory : Add a stats_table resource type


  • To: Matias Ezequiel Vara Larsen <matiasevara@xxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 21 Feb 2023 09:48:10 +0100
  • Cc: Matias Ezequiel Vara Larsen <matias.vara@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 21 Feb 2023 08:48:52 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 20.02.2023 17:51, Matias Ezequiel Vara Larsen wrote:
> On Thu, Feb 16, 2023 at 04:15:29PM +0100, Jan Beulich wrote:
>> On 16.02.2023 16:07, Matias Ezequiel Vara Larsen wrote:
>>> On Wed, Dec 14, 2022 at 08:29:53AM +0100, Jan Beulich wrote:
>>>> On 07.10.2022 14:39, Matias Ezequiel Vara Larsen wrote:
>>>>> @@ -287,6 +289,20 @@ static inline void vcpu_runstate_change(
>>>>>      }
>>>>>  
>>>>>      v->runstate.state = new_state;
>>>>> +
>>>>> +    vcpustats_va = (shared_vcpustatspage_t*)d->vcpustats_page.va;
>>>>> +    if ( vcpustats_va )
>>>>> +    {
>>>>> +        vcpustats_va->vcpu_info[v->vcpu_id].version =
>>>>> +            version_update_begin(vcpustats_va->vcpu_info[v->vcpu_id].version);
>>>>> +        smp_wmb();
>>>>> +        memcpy(&vcpustats_va->vcpu_info[v->vcpu_id].runstate_running_time,
>>>>> +               &v->runstate.time[RUNSTATE_running],
>>>>> +               sizeof(v->runstate.time[RUNSTATE_running]));
>>>>> +        smp_wmb();
>>>>> +        vcpustats_va->vcpu_info[v->vcpu_id].version =
>>>>> +            version_update_end(vcpustats_va->vcpu_info[v->vcpu_id].version);
>>>>> +    }
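(For reference, a consumer of this interface would pair the two version
bumps above with a seqlock-style retry loop. The following is only a
sketch using the field names from the hunk; the loop, the vcpu_id
variable, and the barrier names on the guest side are assumptions, not
part of the patch:)

    /* Reader sketch: version is odd while the writer is mid-update
     * (version_update_begin() increments it once, version_update_end()
     * a second time), so retry until the same even value is observed
     * before and after reading the payload. */
    uint32_t version;
    uint64_t running_time;

    do {
        version = vcpustats_va->vcpu_info[vcpu_id].version;
        smp_rmb();
        running_time =
            vcpustats_va->vcpu_info[vcpu_id].runstate_running_time;
        smp_rmb();
    } while ( (version & 1) ||
              version != vcpustats_va->vcpu_info[vcpu_id].version );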
>>>>
>>>> A further aspect to consider here is cache line ping-pong. I think the
>>>> per-vCPU elements of the array want to be big enough not to share a
>>>> cache line. The interface being generic, this presents some challenge
>>>> in determining what the size is supposed to be. However, taking into
>>>> account the extensibility question, maybe the route to take is to
>>>> simply settle on a power-of-2 value somewhere between x86's and Arm's
>>>> cache line sizes and the pretty common page size of 4k, e.g. 512 bytes
>>>> or 1k?
>>>>
>>>
>>> I do not know how to address this. I was thinking of aligning each
>>> vcpu_stats instance to a multiple of the cache line size, picking the
>>> first multiple that is larger than the size of the vcpu_stats structure.
>>> For example, the structure is currently 16 bytes, so I would align each
>>> instance in a frame to 64 bytes. Would that make sense?
>>
>> Well, 64 may be an option, but I gave higher numbers for a reason. One thing
>> I don't know is what common cache line sizes are on Arm or e.g. RISC-V.
> 
> Thanks. I found that structures requiring cache alignment are defined with
> "__cacheline_aligned", which uses L1_CACHE_BYTES. On x86, for example, this
> aligns to 128 bytes. What is the reason to use a higher value like 512 bytes
> or 1k?
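(The __cacheline_aligned route described above would look roughly like the
sketch below; the field names are from the patch, and the arch-dependent
padding this produces is exactly the objection raised next:)

    /* Sketch only: aligning the element type pads sizeof() up to the
     * alignment, so array elements land on separate cache lines -- but
     * the resulting stride differs per architecture (e.g. 128 bytes on
     * an x86 build where the cache line constant is 128), making the
     * layout unsuitable for an arch-independent public interface. */
    struct vcpu_stats {
        uint32_t version;
        uint32_t pad0;
        uint64_t runstate_running_time;
    } __cacheline_aligned;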

You cannot bake an x86 property (which may even change: at some point we may
choose to drop the 128-byte special case for the very few CPUs that actually
use it, given that the majority uses 64-byte cache lines) into the public
interface.
You also don't want to make an aspect of the public interface arch-dependent
when not really needed. My suggestion for a higher value was in the hope that
we may never see a port to an architecture with cache lines wider than, say,
512 bytes. What exactly the value should be is of course up for discussion,
but I think it wants to include some slack on top of what we currently
support (arch-wise).
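
(To make the suggestion concrete: a fixed power-of-2 slot per vCPU, sized
independently of any one architecture, could look like the sketch below.
The 512-byte figure is the placeholder value from the discussion above,
not a settled ABI, and the union name is hypothetical:)

    /* Sketch: reserve a fixed, arch-independent slot per vCPU so that
     * no two elements can share a cache line on any supported
     * architecture, with slack left for extending the stats later. */
    #define VCPU_STATS_SLOT_SIZE 512   /* placeholder, per discussion */

    typedef union {
        struct {
            uint32_t version;
            uint32_t pad0;
            uint64_t runstate_running_time;
        } s;
        uint8_t pad[VCPU_STATS_SLOT_SIZE];  /* fixes the element stride */
    } vcpu_stats_slot_t;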

Jan