
Re: [Xen-devel] [PATCH v16 06/10] x86: collect global CMT information



> > > +        case XEN_SYSCTL_PSR_CMT_get_l3_cache_size:
> > > +        {
> > > +            struct cpuid4_info info;
> > > +
> > > +            ret = cpuid4_cache_lookup(3, &info);
> > 
> > Couldn't you use 'struct cpuinfo_x86' and extend it if you need to?
> I can, indeed. Field 'x86_cache_size' is actually the L3 cache size
> when one is available. I would still need to add a new field to
> indicate that it is L3 in order to use it this way.

That should make it easier I would think? As you would not actually do
the cpuid call anymore and just pull the data from a 'new' field?

You would naturally have to make sure it also reports a sensible
value on AMD CPUs. If it is too difficult to do that in the
AMD code that stuffs 'struct cpuinfo_x86', then let's ignore this
whole suggestion and just keep your patch as-is with regard to the
'cpuid' call.

> > 
> > 
> > > +            if ( ret < 0 )
> > > +                break;
> > > +
> > > +            sysctl->u.psr_cmt_op.data = info.size / 1024; /* in KB unit */
> > 
> > With the Haswell EP they have this weird setup where there
> > are 8 cores on one side and 10 cores on another. Also the cache size is
> > different (20MB LLC and 25MB LLC). With that wouldn't you want to enumerate
> > exactly _which_ CPU cache you want instead of the one you running at?
> > 
> > Or is my reading of the diagrams wrong and OS never sees the split and
> > gets 45MB?
> Not sure, as I don't have such a machine. If that is the case, it
> would be better to use a per-socket value here.

<nods> In which case my comment above is irrelevant.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

