
Re: [Xen-devel] windows tmem



> 
> Unfortunately it gets worse... I'm testing on Windows 2003 at the moment,
> and it seems to always write out data in 64k chunks, aligned to a 4k
> boundary. Then it reads back one or more of those pages, and may later
> reuse the same part of the swapfile for something else. All reads appear
> to be 4k in size, but there may be some grouping of those requests at a
> lower layer.
> 
> So I would end up caching up to 16x the actual data, with no way of
> knowing which of those 16 pages are actually being paged out and which
> are just being written to disk optimistically without actually being
> paged out.
> 
> I'll do a bit of analysis of the MDL being written, as that may give me
> some more information, but it's not looking as good as I'd hoped.
> 
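
As a rough illustration of how that write-out pattern maps onto tmem, here is a 
minimal sketch (not the actual driver code) of splitting an aligned 64k pagefile 
write into per-page ephemeral puts, keyed by the 4k page index within the 
swapfile. tmem_put_page() is a hypothetical wrapper around the put operation; 
pool creation, object IDs and MDL mapping are all omitted.

    #include <stddef.h>
    #include <stdint.h>

    #define PAGE_SIZE_4K 4096u

    /* Hypothetical wrapper around an ephemeral-pool put; returns 0 on success. */
    int tmem_put_page(uint64_t page_index, const void *page_data);

    /*
     * Cache one aligned pagefile write.  'buffer' is assumed to already be a
     * mapped system-space view of the MDL; 'file_offset' and 'length' are
     * assumed to be multiples of 4k (as observed on Windows 2003).
     */
    static void cache_pagefile_write(uint64_t file_offset,
                                     const uint8_t *buffer,
                                     size_t length)
    {
        size_t done;

        for (done = 0; done < length; done += PAGE_SIZE_4K) {
            uint64_t page_index = (file_offset + done) / PAGE_SIZE_4K;

            /* Best effort: if a put fails, a later get simply misses. */
            (void)tmem_put_page(page_index, buffer + done);
        }
    }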

I now have a working implementation that does write-through caching of pagefile 
writes to ephemeral tmem. It keeps counters on get and put operations, and on a 
Windows 2003 server with 256 MB of memory assigned, after running for a while 
and flipping between applications, I get:

put_success_count = 96565
put_fail_count    = 0
get_success_count = 34514
get_fail_count    = 5369

which works out to roughly an 86% hit rate (34514 hits vs 5369 misses). That 
seems pretty good, except that Windows is quite aggressive about paging out, so 
a lot of the writes are never read back (and the tmem space they occupy is 
wasted), and I'm not sure whether it's a net win.
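
The read side that produces those get counters is, in spirit, something like the 
following sketch (again illustrative rather than the driver itself, with 
hypothetical tmem_get_page() and disk_read_page() helpers): each 4k pagefile 
read tries an ephemeral get first and falls back to the real disk read on a 
miss.

    #include <stdint.h>

    int tmem_get_page(uint64_t page_index, void *page_data);   /* hypothetical */
    int disk_read_page(uint64_t page_index, void *page_data);  /* hypothetical */

    static uint64_t get_success_count;
    static uint64_t get_fail_count;

    static int read_pagefile_page(uint64_t page_index, void *page_data)
    {
        if (tmem_get_page(page_index, page_data) == 0) {
            get_success_count++;      /* served from ephemeral tmem */
            return 0;
        }

        get_fail_count++;             /* evicted or never cached: go to disk */
        return disk_read_page(page_index, page_data);
    }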

Subjectively, Windows does seem faster with my driver active, but I'm using it 
over a crappy ADSL connection, so it's hard to measure with any precision.
 
I'm trying to see whether it's possible to use tmem as a second-level cache for 
the page cache, which would be more useful, but I'm not yet sure the required 
hooks exist in filesystem minifilters.
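
As a rough illustration of the kind of hook in question: a minifilter pre-read 
callback could, in principle, complete a page-sized, page-aligned paging read 
from tmem before it reaches the disk, as in the sketch below. tmem_get_page() is 
again a hypothetical wrapper, and real code would also have to handle 
MDL-described buffers rather than only a plain ReadBuffer. Whether such a 
callback actually sees page cache traffic at a useful point is exactly the open 
question.

    #include <fltKernel.h>

    /* Hypothetical wrapper around an ephemeral-pool get; returns 0 on a hit. */
    int tmem_get_page(ULONGLONG page_index, void *page_data);

    FLT_PREOP_CALLBACK_STATUS
    PreReadCallback(PFLT_CALLBACK_DATA Data,
                    PCFLT_RELATED_OBJECTS FltObjects,
                    PVOID *CompletionContext)
    {
        ULONG length = Data->Iopb->Parameters.Read.Length;
        LARGE_INTEGER offset = Data->Iopb->Parameters.Read.ByteOffset;
        PVOID buffer = Data->Iopb->Parameters.Read.ReadBuffer;

        UNREFERENCED_PARAMETER(FltObjects);
        *CompletionContext = NULL;

        /* Only consider 4k, page-aligned paging I/O with a plain buffer. */
        if (!(Data->Iopb->IrpFlags & IRP_PAGING_IO) ||
            length != 4096 || (offset.QuadPart & 0xFFF) != 0 || buffer == NULL)
            return FLT_PREOP_SUCCESS_NO_CALLBACK;

        if (tmem_get_page((ULONGLONG)offset.QuadPart >> 12, buffer) == 0) {
            /* Hit: complete the read here instead of letting it hit the disk. */
            Data->IoStatus.Status = STATUS_SUCCESS;
            Data->IoStatus.Information = length;
            return FLT_PREOP_COMPLETE;
        }

        return FLT_PREOP_SUCCESS_NO_CALLBACK;   /* miss: let the read proceed */
    }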

James

