
Re: [Xen-devel] [PATCH v8 03/24] x86: refactor psr: implement main data structures.



On 17-02-28 11:58:59, Roger Pau Monné wrote:
> On Wed, Feb 15, 2017 at 04:49:18PM +0800, Yi Sun wrote:
> > To construct an extendible framework, we need to analyze the PSR features,
> > abstract the common and the feature-specific parts, and then encapsulate
> > them into different data structures.
> > 
> > By analyzing the PSR features, we get the map below.
> >                 +------+------+------+
> >       --------->| Dom0 | Dom1 | ...  |
> >       |         +------+------+------+
> >       |            |
> >       |Dom ID      | cos_id of domain
> >       |            V
> >       |        +-----------------------------------------------------------------------------+
> > User --------->| PSR                                                                         |
> >      Socket ID |  +--------------+---------------+---------------+                           |
> >                |  | Socket0 Info | Socket 1 Info |    ...        |                           |
> >                |  +--------------+---------------+---------------+                           |
> >                |    |                   cos_id=0               cos_id=1          ...         |
> >                |    |          +-----------------------+-----------------------+-----------+ |
> >                |    |->Ref   : |         ref 0         |         ref 1         | ...       | |
> >                |    |          +-----------------------+-----------------------+-----------+ |
> >                |    |          +-----------------------+-----------------------+-----------+ |
> >                |    |->L3 CAT: |         cos 0         |         cos 1         | ...       | |
> >                |    |          +-----------------------+-----------------------+-----------+ |
> >                |    |          +-----------------------+-----------------------+-----------+ |
> >                |    |->L2 CAT: |         cos 0         |         cos 1         | ...       | |
> >                |    |          +-----------------------+-----------------------+-----------+ |
> >                |    |          +-----------+-----------+-----------+-----------+-----------+ |
> >                |    |->CDP   : | cos0 code | cos0 data | cos1 code | cos1 data | ...       | |
> >                |               +-----------+-----------+-----------+-----------+-----------+ |
> >                +-----------------------------------------------------------------------------+
> > 
> > So, we need to define a socket info data structure, 'struct
> > psr_socket_info', to manage per-socket information. It contains a
> > reference count array indexed by COS ID and a feature list to
> > manage all enabled features. Every entry of the reference count
> > array records how many domains are using the COS registers with
> > that COS ID. For example, when L3 CAT and L2 CAT are enabled and
> > Dom1 uses the COS_ID=1 registers of both features to save CBM values,
> > we get the table below.
> >         +-------+-------+-------+-----+
> >         | COS 0 | COS 1 | COS 2 | ... |
> >         +-------+-------+-------+-----+
> > L3 CAT  | 0x7ff | 0x1ff | ...   | ... |
> >         +-------+-------+-------+-----+
> > L2 CAT  | 0xff  | 0xff  | ...   | ... |
> >         +-------+-------+-------+-----+
> > 
> > If Dom2 has the same CBM values, it can reuse the registers with COS_ID=1.
> > That means both Dom1 and Dom2 use the same COS registers (ID=1) to save the
> > same L3/L2 values. So, ref[1] is 2, which means 2 domains are using
> > COS_ID 1.
> > 
> > To manage a feature, we need to define a feature node data structure,
> > 'struct feat_node', to manage the feature's specific HW info, its callback
> > functions (all of the feature's specific behaviors are encapsulated into
> > these callback functions), and an array of all of this feature's COS
> > register values.
> > 
> > CDP is a special feature which uses two entries of the array for one
> > COS ID, so the number of CDP COS registers is half that of L3 CAT.
> > E.g. if L3 CAT has 16 COS registers, then CDP has 8 COS registers when
> > it is enabled. CDP uses the COS registers array as below.
> > 
> >                          +-----------+-----------+-----------+-----------+-----------+
> > CDP cos_reg_val[] index: |     0     |     1     |     2     |     3     |    ...    |
> >                          +-----------+-----------+-----------+-----------+-----------+
> >                   value: | cos0 code | cos0 data | cos1 code | cos1 data |    ...    |
> >                          +-----------+-----------+-----------+-----------+-----------+
> > 
> > For more details, please refer to the SDM and to the patches that implement
> > 'get value' and 'set value'.
> 
> I would recommend that you merge this with a patch that actually makes use of
> these structures, or else it's hard to review their usage IMHO.
> 
Sorry for this. To keep each patch small and simple, I split this patch out to
only show the data structures. I think I can merge it with the next patch:
[PATCH v8 04/24] x86: refactor psr: implement CPU init and free flow.

What do you think?

> > +struct psr_socket_info {
> > +    /*
> > +     * It maps to values defined in 'enum psr_feat_type' below. Value in
> > +     * 'enum psr_feat_type' means the bit position.
> > +     * bit 0:   L3 CAT
> > +     * bit 1:   L3 CDP
> > +     * bit 2:   L2 CAT
> > +     */
> > +    unsigned int feat_mask;
> > +    unsigned int nr_feat;
> > +    struct list_head feat_list;
> 
> Isn't it a little bit overkill to use a list when there can only be a maximum
> of 3 features? (and the feat_mask is currently 32bits, so I guess you don't
> really foresee having more than 32 features).
> 
> I would suggest using:
> 
>      struct feat_node *features[PSR_SOCKET_MAX_FEAT];
> 
> (PSR_SOCKET_MAX_FEAT comes from the expansion of the enum below). Then you can
> simply use the enum value of each feature as the position of its corresponding
> feat_node in the array.
> 
I did consider this before. But different features may be enabled on different
sockets. For example, socket 0 enables L3 CAT and L2 CAT but socket 1 only
supports L3 CAT. So, the feature array may differ between sockets. I think
handling everything with an array is more complex than with a list.

> > +    unsigned int cos_ref[MAX_COS_REG_CNT];
> > +    spinlock_t ref_lock;
> > +};
> > +
> > +enum psr_feat_type {
> > +    PSR_SOCKET_L3_CAT = 0,
> > +    PSR_SOCKET_L3_CDP,
> > +    PSR_SOCKET_L2_CAT,
>     PSR_SOCKET_MAX_FEAT,
> > +};
> > +
> > +/* CAT/CDP HW info data structure. */
> > +struct psr_cat_hw_info {
> > +    unsigned int cbm_len;
> > +    unsigned int cos_max;
> > +};
> > +
> > +/* Encapsulate feature specific HW info here. */
> > +struct feat_hw_info {
> > +    union {
> > +        struct psr_cat_hw_info l3_cat_info;
> > +    };
> > +};
> 
> Why don't you use an union here directly, instead of encapsulating an union
> inside of a struct?
> 
> union feat_hw_info {
>     struct psr_cat_hw_info l3_cat_info;
> };
> 
> > +
> > +struct feat_node;
> > +
> > +/*
> > + * This structure defines feature operation callback functions. Every feature
> > + * enabled MUST implement such callback functions and register them to ops.
> > + *
> > + * Feature specific behaviors will be encapsulated into these callback
> > + * functions. Then, the main flows will not be changed when introducing a new
> > + * feature.
> > + */
> > +struct feat_ops {
> > +    /* get_cos_max is used to get feature's cos_max. */
> > +    unsigned int (*get_cos_max)(const struct feat_node *feat);
> > +};
> > +
> > +/*
> > + * This structure represents one feature.
> > + * feature     - Which feature it is.
> > + * feat_ops    - Feature operation callback functions.
> > + * info        - Feature HW info.
> > + * cos_reg_val - Array to store the values of COS registers. One entry stores
> > + *               the value of one COS register.
> > + *               For L3 CAT and L2 CAT, one entry corresponds to one COS_ID.
> > + *               For CDP, two entries correspond to one COS_ID. E.g.
> > + *               COS_ID=0 corresponds to cos_reg_val[0] (Data) and
> > + *               cos_reg_val[1] (Code).
> > + * list        - Feature list.
> > + */
> > +struct feat_node {
> > +    enum psr_feat_type feature;
> 
> If you index them in an array with the key being psr_feat_type I don't think
> you need that field.
> 
I need this to check whether the input type matches this feature; please refer
to the 'get value' or 'set value' flow. Thanks!

> > +    struct feat_ops ops;
> 
> I would place the function hooks in this struct directly, instead of nesting
> them inside of another struct. The hooks AFAICT are shared between all the
> different PSR features.
> 
Jan suggested this before, in the v4 patch. We discussed it and Jan accepted
the current implementation. The reason is below:

"To facilitate the assignment of callback functions for a feature, I defined
feature-specific callback function ops like below.

struct feat_ops l3_cat_ops = {
    .init_feature = l3_cat_init_feature,
    .get_max_cos_max = l3_cat_get_max_cos_max,
    ......
};

And directly assign it to global feature node in initialization process like
below.

static void cpu_init_work(void)
{
......
            feat_tmp = feat_l3_cat;
            feat_l3_cat = NULL;
            feat_tmp->ops = l3_cat_ops;
......
}

I think this makes the code clearer."

> > +    struct feat_hw_info info;
> 
> Same with this, you can place the actual union for storage here directly,
> instead of nesting it inside of feat_hw_info, so:
> 
>     union {
>         struct psr_cat_hw_info l3_cat_info;
>     } hw_info;
> 
Thanks for the suggestion! Will do this.

> Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

