
Re: [Xen-users] Transcendent Memory ("tmem") -capable kernel now publicly released



Hi Phillip, 

I remember we exchanged e-mails about tmem the other day, and I have since investigated it a little. First of all, the current Linux tree has tmem + cleancache upstream, but it does not yet contain frontswap. Basically, the filesystem (ext3, ext4) decides what stays in its page cache, and cleancache puts pages into tmem only when the kernel wants to evict them from memory. This does not change the amount of memory used in the system. Beyond that, tmem supports several other backends for managing these pages, such as zcache (single kernel) and RAMster (multiple kernels). 
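Since you asked whether pages are copied or moved: as far as I can tell they are copied in on put, and for a private ephemeral pool the get is exclusive (tmem drops its copy once it hands the page back), so the data does not stay duplicated. A toy Python model of that flow, just to illustrate the semantics as I understand them (a sketch only, not kernel code):

    import random

    class EphemeralPool(object):
        """Toy stand-in for a private ephemeral tmem pool."""

        def __init__(self):
            self.pages = {}             # (inode, page index) -> page data

        def put(self, key, page):
            # Called when the kernel evicts a clean page from its page
            # cache: the page is copied into tmem, not moved.
            self.pages[key] = bytes(page)

        def get(self, key):
            # Called on a page-cache miss.  For a private ephemeral pool
            # the get is exclusive (as far as I can tell): tmem drops its
            # copy once it hands the page back, so nothing stays duplicated.
            return self.pages.pop(key, None)

        def hypervisor_evict(self):
            # The hypervisor may reclaim ephemeral pages whenever it needs
            # memory; a later get then simply misses and the guest rereads
            # the page from disk.
            if self.pages:
                del self.pages[random.choice(list(self.pages))]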

For the log, refer to the field-by-field parsing I did below. It might help!

-----------------------------------------------------------------------------------
 
G=Tt:204155,Te:100904,Cf:0,Af:0,Pf:0,Ta:6557,Lm:0,Et:0,Ea:1,Rt:0,Ra:0,Rx:0,Fp:0,Ec:1240,Em:7597,Oc:179,Om:571,Nc:103,Nm:448,Pc:1240,Pm:7597,Fc:1240,Fm:7597,Sc:0,Sm:0,Ep:17542,Gd:0,Zt:0,Gz:0
C=CI:11,ww:0,ca:0,co:0,fr:0,Tc:9362200,Ge:445,Pp:0,Gp:0,Ec:1240,Em:1685,cp:0,cb:0,cn:0,cm:0
 
G= (global)
Tt:204155, (total_tmem_ops)
Te:100904, (errored_tmem_ops)
Cf:0, (failed_copies)
Af:0, (alloc_failed)
Pf:0, (alloc_page_failed)
Ta:6557, (tmh_avail_pages())
Lm:0, (low_on_memory)
Et:0, (evicted_pgs)
Ea:1, (evict_attempts)
Rt:0, (relinq_pgs)
Ra:0, (relinq_attempts)
Rx:0, (max_evicts_per_relinq)
Fp:0, (total_flush_pool)
Ec:1240, (global_eph_count)
Em:7597, (global_eph_count_max)
Oc:179, (global_obj_count)
Om:571, (global_obj_count_max)
Nc:103, (global_rtree_node_count)
Nm:448, (global_rtree_node_count_max)
Pc:1240, (global_pgp_count)
Pm:7597, (global_pgp_count_max)
Fc:1240, (global_page_count)
Fm:7597, (global_page_count_max)
Sc:0, (global_pcd_count)
Sm:0, (global_pcd_count_max)
Ep:17542, (tot_good_eph_puts)
Gd:0, (deduped_puts)
Zt:0, (pcd_tot_tze_size)
Gz:0 (pcd_tot_csize)
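If you would rather script this than decode it by hand, here is a rough Python sketch (my own hypothetical helper, not part of the xm tools) that splits one of these lines into a dict using the names above:

    def parse_tmem_line(line):
        # Split "X=k:v,k:v,..." into (section, {field: value}).  Numeric
        # values become ints; things like PT:ES and the UUID halves stay
        # strings.  Caveat: repeated keys (e.g. the two SC fields on the
        # shared-pool line) overwrite each other here.
        section, _, body = line.partition('=')
        fields = {}
        for token in body.split(','):
            key, _, value = token.partition(':')
            fields[key] = int(value) if value.isdigit() else value
        return section, fields

    section, g = parse_tmem_line('G=Tt:204155,Te:100904,Cf:0,Af:0,Ta:6557')
    print(section, g['Tt'], g['Te'])    # -> G 204155 100904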
 
-----------------------------------------------------------------------------------
 
S=SI:0,PT:ES,U0:f34b3f0627e0f696,U1:ff55f9cce0264db8,SC:9,SC:10,Pc:0,Pm:0,Oc:0,Om:0,Nc:0,Nm:0,ps:0,pt:0,pd:0,pr:0,px:0,gs:0,gt:0,fs:0,ft:32,os:0,ot:96
 
S= (shared)
SI:0, (pool_id)
PT:ES, (Ephemeral Shared)
U0:f34b3f0627e0f696,U1:ff55f9cce0264db8, (UUID)
SC:9, (cli_id)
SC:10, (cli_id)
Pc:0, (pgp_count)
Pm:0, (pgp_count_max)
Oc:0, (obj_count)
Om:0, (obj_count_max)
Nc:0, (objnode_count)
Nm:0, (objnode_count_max)
ps:0, (good_puts)
pt:0, (puts)
pd:0, (dup_puts_flushed)
pr:0, (dup_puts_replaced)
px:0, (no_mem_puts)
gs:0, (found_gets)
gt:0, (gets)
fs:0, (flushs_found)
ft:32, (flushs)
os:0, (flush_objs_found)
ot:96 (flush_objs)
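One thing to watch: the S= line carries SC twice (both client ids attached to the shared pool), so the simple dict parser above would keep only the last one. A variant that collects repeated keys into lists:

    from collections import defaultdict

    def parse_tmem_line_multi(line):
        # Same as above, but repeated keys (e.g. SC) accumulate into
        # lists instead of overwriting each other.
        section, _, body = line.partition('=')
        fields = defaultdict(list)
        for token in body.split(','):
            key, _, value = token.partition(':')
            fields[key].append(int(value) if value.isdigit() else value)
        return section, dict(fields)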
 
-----------------------------------------------------------------------------------

C=CI:11,ww:0,ca:0,co:0,fr:0,Tc:9362200,Ge:445,Pp:0,Gp:0,Ec:1240,Em:1685,cp:0,cb:0,cn:0,cm:0
 
C= (client)
CI:11, (cli_id)
ww:0, (weight)
ca:0, (cap)
co:0, (compress)
fr:0, (frozen)
Tc:9362200, (total_cycles)
Ge:445, (succ_eph_gets)                                                                                                                                                                                             
Pp:0, (succ_pers_puts)
Gp:0, (succ_pers_gets)
Ec:1240, (eph_count)
Em:1685, (eph_count_max)
cp:0, (compressed_pages)
cb:0, (compressed_sum_size)
cn:0, (compress_poor)
cm:0 (compress_nomem)
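If compression were enabled (co:1), cp and cb would tell you how well it is doing; they are 0 here since co:0. Assuming 4 KiB pages, a sketch:

    PAGE_SIZE = 4096    # assuming 4 KiB pages

    def compression_ratio(fields):
        # cb = compressed_sum_size in bytes, cp = compressed_pages
        cp, cb = fields['cp'], fields['cb']
        if cp == 0:
            return None                     # compression never used
        return cb / float(cp * PAGE_SIZE)   # < 1.0 means space is saved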
 
-----------------------------------------------------------------------------------
 
P=CI:11,PI:0,PT:EP,U0:0,U1:0,Pc:1240,Pm:1685,Oc:179,Om:194,Nc:103,Nm:128,ps:1685,pt:1685,pd:0,pr:0,px:0,gs:445,gt:2817,fs:0,ft:10,os:0,ot:33
 
P= (pools)
CI:11, (cli_id)
PI:0, (pool_id)
PT:EP, (pool type: first char E (ephemeral) or P (persistent); second char P (private) or S (shared))
U0:0, (UUID)
U1:0, (UUID)
Pc:1240, (pgp_count)
Pm:1685, (pgp_count_max)
Oc:179, (obj_count)
Om:194, (obj_count_max)
Nc:103, (objnode_count)
Nm:128, (objnode_count_max)
ps:1685, (good_puts)
pt:1685, (puts)
pd:0, (dup_puts_flushed)
pr:0, (dup_puts_replaced)
px:0, (no_mem_puts)
gs:445, (found_gets)
gt:2817, (gets)
fs:0, (flushs_found)
ft:10, (flushs)
os:0, (flush_objs_found)
ot:33 (flush_objs)
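These pool counters are probably the most direct answer to the effectiveness question: gs/gt is the fraction of gets that tmem actually satisfied. For this pool:

    gs, gt = 445, 2817              # found_gets, gets from the P= line
    print('%.1f%% of gets hit in tmem' % (100.0 * gs / gt))
    # -> 15.8% of gets hit in tmem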
 
-----------------------------------------------------------------------------------
 
T=Gn:5443,Gt:21414124,Gx:55912,Gm:1980,Pn:17542,Pt:72415084,Px:238988,Pm:2532,gn:112911,gt:29206664,gx:13368,gm:120,pn:0,pt:0,px:0,pm:2147483647,Fn:43718,Ft:10429096,Fx:5120,Fm:180,On:12262,Ot:12024416,Ox:950220,Om:180,Cn:22985,Ct:43492480,Cx:129064,Cm:964,cn:0,ct:0,cx:0,cm:2147483647,dn:0,dt:0,dx:0,dm:2147483647
 
T= (global_perf)
Gn:5443, (succ_get_count)
Gt:21414124, (succ_get_sum_cycles)
Gx:55912, (succ_get_max_cycles)
Gm:1980, (succ_get_min_cycles)
Pn:17542, (succ_put_count)
Pt:72415084, (succ_put_sum_cycles)
Px:238988, (succ_put_max_cycles)
Pm:2532, (succ_put_min_cycles)
gn:112911, (non_succ_get_count)
gt:29206664, (non_succ_get_sum_cycles)
gx:13368, (non_succ_get_max_cycles)
gm:120, (non_succ_get_min_cycles)
pn:0, (non_succ_put_count)
pt:0, (non_succ_put_sum_cycles)
px:0, (non_succ_put_max_cycles)
pm:2147483647, (non_succ_put_min_cycles)
Fn:43718, (flush_count)
Ft:10429096, (flush_sum_cycles)
Fx:5120, (flush_max_cycles)
Fm:180, (flush_min_cycles)
On:12262, (flush_obj_count)
Ot:12024416, (flush_obj_sum_cycles)
Ox:950220, (flush_obj_max_cycles)
Om:180, (flush_obj_min_cycles)
Cn:22985, (pg_copy_count)
Ct:43492480, (pg_copy_sum_cycles)
Cx:129064, (pg_copy_max_cycles)
Cm:964, (pg_copy_min_cycles)
cn:0, (compress_count)
ct:0, (compress_sum_cycles)
cx:0, (compress_max_cycles)
cm:2147483647, (compress_min_cycles)
dn:0, (decompress_count)
dt:0, (decompress_sum_cycles)
dx:0, (decompress_max_cycles)
dm:2147483647 (decompress_min_cycles)
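One gotcha: a min value of 2147483647 is just INT_MAX, i.e. that minimum was never updated because the operation never ran (pn:0, cn:0, dn:0 here). The sum/count pairs give the average cost per operation:

    def avg_cycles(count, total_cycles):
        # Average cycles per operation, guarding against zero counts.
        return total_cycles / float(count) if count else None

    print(avg_cycles(5443, 21414124))       # successful gets: ~3934 cycles
    print(avg_cycles(17542, 72415084))      # successful puts: ~4128 cycles
    print(avg_cycles(112911, 29206664))     # failed gets:     ~259 cycles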
 
-----------------------------------------------------------------------------------

Good luck!

Jinho

On Mon, Apr 16, 2012 at 2:41 PM, Phillip Susi <psusi@xxxxxxxxxx> wrote:
On 4/16/2012 12:40 PM, Dan Magenheimer wrote:
Hi Psusi (Phillip?) - Just saw your tmem post... sorry I don't keep
up with xen-users. If you are looking at implementing Xen tmem
support on/for a future Ubuntu release, let me know if I can help and
see:
http://oss.oracle.com/git/?p=linux-2.6-unbreakable.git;a=summary

The last time I played with it I think I managed to get it working after adding the "tmem" command line arguments to Xen and the guest kernel. What I have not been able to do is figure out how to monitor its usage/effectiveness. The output of xm tmem was completely indecipherable. Could you shed some light on that?

Also, I was wondering how pages move back and forth between cleancache and the pagecache. Are they copied or moved? In other words, is the data placed in cleancache only once the local pagecache discards it, or as soon as it is read from disk? And when it is later requested, is it copied back from cleancache to the local pagecache, resulting in the data being duplicated in memory, once for the guest and once in tmem?






--
Jinho Hwang
PhD Student
Department of Computer Science
The George Washington University
Washington, DC 20052
hwang.jinho@xxxxxxxxx (email)
276.336.0971 (Cell)
202.994.4875 (fax)
070.8285.6546 (myLg070)
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

 

