I’ve been noticing that it takes longer and longer to get a meaningful Bonnie run. To make sure you’ve busted the filesystem cache and are actually doing I/O, you need a test file two or three times the size of system memory, which can easily get into a couple of hundred gigs on a serious server these days. And while I/O has been getting faster, it still takes a while to process that much data; and Bonnie does it five times. So the ratio that governs Bonnie testing time is something like memory size over I/O performance, and since runs keep getting longer, memory size must have been growing faster than I/O speed. Thus, memcached and friends.
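To see why the runs drag on, here’s a back-of-envelope sketch. The 64 GB of memory and 100 MB/s of sustained I/O are illustrative assumptions, not measurements from the post:

```python
# Back-of-envelope Bonnie run time. Assumed numbers:
mem_gb = 64                 # system memory (illustrative)
file_gb = 3 * mem_gb        # test file: 3x memory, to defeat the cache
io_mb_per_s = 100           # sustained sequential I/O (illustrative)
passes = 5                  # Bonnie processes the file five times

seconds = file_gb * 1024 / io_mb_per_s * passes
print(f"{seconds / 3600:.1f} hours")   # roughly 2.7 hours
```

Double the memory without doubling the I/O rate and the run time doubles too, which is the ratio the paragraph above is pointing at.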



Contributions


From: Joe (Dec 14 2007, at 10:33)

It's a lot easier to run benchmarks like these on recent versions of Linux (2.6.16+) that allow you to drop the page cache...
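Joe is referring to the kernel’s `drop_caches` control, which landed in Linux 2.6.16. A minimal sketch of how it’s typically used before a benchmark run (writing to it requires root; the guard here is just so the snippet degrades gracefully):

```shell
# Flush dirty pages to disk first, so dropping clean caches is meaningful.
sync

# /proc/sys/vm/drop_caches (Linux 2.6.16+):
#   1 = free the page cache, 2 = free dentries and inodes, 3 = both.
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches
else
    echo "not root (or not Linux): caches left alone" >&2
fi
```

With the page cache emptied, even a test file smaller than memory starts the run with real I/O, which is the time-saver Joe is describing.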


December 13, 2007
