RE: Distro kernel and 'virtualization server' vs. 'server that sometimes runs virtual instances' rant (was: Re: [Xen-devel] Re: [GIT PULL] Xen APIC hooks (with io_apic_ops))
Not to beat this to death, but one more comment:

> wait what? the difference is if you aren't using the CPU, I can take
> it away, and then give it back to you when you want it almost
> immediately, with a small cost (of flushing the cpu cache, but that is
> fast enough that while it's a big deal for scientific type
> applications, it doesn't really make the perceived responsiveness of
> the box worse, unless you do it a bunch of times in a small period of
> time.)
>
> Ram is different. If I take away your pagecache, either I save it to
> disk (slow) and restore it (slow) when I return it, or I take it from
> you without saving to disk, and return clean pages when you want it
> back, meaning if you want that data you've got to re-read from disk.
> (slow)

You are technically correct, but I'm not talking about taking away ALL of the pagecache. Pagecache is a guess as to which pages might be used in the future. A large percentage of those guesses are wrong: the page will never be used again and will eventually be evicted. This is what I call "idle memory", but I love the way Tim Post put it: "Linux is like pac man gobbling up blocks for cache."

The right long-term answer is for Linux, and OSes in general, to get smarter about giving up memory that they know is not going to be used again, but even if they get smarter, they will never be omniscient. So self-ballooning creates pressure on the page cache, making the OS evict pages that it's not so sure about. Then tmem acts as a backup for those pages; if the OS was wrong and the page is needed again (soon), it can get it right back without a disk read.

Clearly this won't help users who leave their VM idle for three months and then expect an instantaneous response, but that's what I meant when I said your memory partitioning helps only a few users. Does that make sense? Is it at least a step in the right direction?
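For the curious, the interaction described above can be sketched in a few lines of Python. This is an illustrative model only, not the real Xen/Linux tmem API: the class names (`TmemPool`, `PageCache`) and the capacity numbers are made up. The point it shows is that a page evicted under ballooning pressure lands in an ephemeral hypervisor-side pool, so a later access can often be satisfied from that pool instead of re-reading from disk:

```python
# Hypothetical sketch of self-ballooning + tmem: clean pagecache pages
# evicted under memory pressure are offered to a "transcendent memory"
# pool held by the hypervisor; a later miss checks that pool before
# falling back to a (slow) disk read.  All names here are illustrative.

class TmemPool:
    """Ephemeral second-chance store; the hypervisor may drop entries at will."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}

    def put(self, key, data):
        # Best-effort: silently refuse when the pool is full.
        if len(self.store) < self.capacity:
            self.store[key] = data

    def get(self, key):
        # Entries are ephemeral: a successful get also removes the entry.
        return self.store.pop(key, None)


class PageCache:
    """Toy guest pagecache with LRU eviction into the tmem pool."""
    def __init__(self, capacity, tmem, disk):
        self.capacity = capacity
        self.tmem = tmem
        self.disk = disk        # backing store: key -> data
        self.cache = {}         # insertion order doubles as LRU order
        self.disk_reads = 0

    def read(self, key):
        if key in self.cache:                  # pagecache hit
            self.cache[key] = self.cache.pop(key)   # refresh LRU position
            return self.cache[key]
        data = self.tmem.get(key)              # tmem hit: no disk I/O
        if data is None:
            data = self.disk[key]              # miss everywhere: slow path
            self.disk_reads += 1
        self._insert(key, data)
        return data

    def _insert(self, key, data):
        while len(self.cache) >= self.capacity:
            old_key = next(iter(self.cache))   # least recently used
            old_data = self.cache.pop(old_key)
            self.tmem.put(old_key, old_data)   # evict into tmem, not to nowhere
        self.cache[key] = data


disk = {n: f"block-{n}" for n in range(8)}
pc = PageCache(capacity=2, tmem=TmemPool(capacity=4), disk=disk)
for n in (0, 1, 2, 0):       # page 0 is evicted into tmem, then read back
    pc.read(n)
print(pc.disk_reads)         # prints 3: re-reading page 0 hit tmem, not disk
```

Reading page 0 the second time costs no disk I/O even though the tiny pagecache had already evicted it, which is exactly the "if the OS was wrong, it can get it right back" case; and since the pool is ephemeral, the hypervisor stays free to reclaim that memory whenever it needs it.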
Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel