In case you hadn’t noticed, yesterday the much-announced ZFS finally shipped. There’s the now-typical flurry of blogging; the best place to start is with Bryan Cantrill’s round-up. I haven’t had time to break out Bonnie and ZFS myself, but I do have some raw data to report, from Dana Myers, who did some Bonnie runs on a great big honkin’ Dell [Surely you jest. -Ed.] server. The data is pretty interesting. [Update: Another run, with compression.] [And another, with bigger data. Very interesting.]

Dana reports:

Dell PowerEdge 6850, 2x Xeon 64 3.16GHz, 16GB RAM, and a collection of U320 73GB 15K RPM drives.

The current configuration is now 3x 73GB drives in a single conventional ZFS pool (no mirroring or RAIDZ).

I compiled Bonnie with -xarch=amd64 and ran it with a 32GB test file on both a UFS fs and the ZFS fs. In both cases, atime is on.

Now, 32G is a little on the small side for this benchmark on a 16G server, but still, here’s what we see (you may have to widen your window a little bit to fit all the Bonnie data in):
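For anyone who wants to reproduce this, here’s a sketch of the setup; the pool name, device names, and mount point are my assumptions, not from Dana’s report (only the `-xarch=amd64` flag and the 32GB file size are):

```shell
# Build Bonnie as a 64-bit binary with Sun Studio cc, per the report
cc -xarch=amd64 -O -o Bonnie Bonnie.c

# Three drives in a plain pool, no mirroring or RAID-Z
# (device names are hypothetical)
zpool create tank c1t1d0 c1t2d0 c1t3d0

# Bonnie's -s takes megabytes, so 32768 = a 32GB test file
./Bonnie -d /tank -s 32768 -m ZFS
```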


              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    GB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
UFS        32  37.4 80.0  37.6 42.9  18.8 31.1  46.0 77.9  55.2 32.2  40.2  4.0
ZFS        32  70.3 94.3  96.7 65.7  50.4 49.4  63.1 99.0  20.7 94.9 694.4  9.4
ZFS+c      32  81.7 99.9 124.6 62.9 105.3 64.3  72.6 99.1 393.5 97.1 831.6  9.6
ZFS        64  69.5 93.8  98.8 65.8  53.1 53.7  63.1 98.7 193.9 83.8 385.0  7.3
ZFS+c      64  80.4 99.9 130.4 62.6 101.1 64.8  73.0 99.1 390.5 96.7 532.6  6.9

Well, well, well. Those seeks/sec numbers are pretty interesting; in fact, I’m not sure I believe them, so I’m going to have to try it on one of my own V20z’s, which only have 2G of RAM apiece. This is worth checking, because I keep hearing anecdotal evidence that Bonnie’s seeks-per-second number correlates strongly with observed MySQL performance.

The two lines marked ZFS+c are with compression turned on. Those numbers are remarkable, but possibly misleading. Here’s the issue, I think: the data that Bonnie writes is very regular and probably subject to extreme compression. If the compression is squeezing the data down to near 25% of the original, the data-set fits in RAM and there’s no real I/O going on at all. That can’t be the whole story, because zpool iostat reports quite a bit of actual disk traffic. But clearly the compression is muddying the waters.
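To get a feel for how far regular data can compress, here’s a quick Python sketch; the repeating pattern is my stand-in for Bonnie’s write buffer, not its actual contents:

```python
import zlib

# Stand-in for Bonnie's test data: a small repeating pattern
# (assumption -- Bonnie's real buffer is at least this regular).
block = bytes(range(256)) * 4096  # 1 MiB of highly regular bytes

ratio = len(zlib.compress(block)) / len(block)
print(f"regular data compresses to {ratio:.2%} of original size")
```

Deflate isn’t ZFS’s default compressor, but the point stands: repetitive data squeezes way below the 25% threshold where the whole working set would fit in 16G of RAM.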

Still, no matter how you cut it, 500+ seeks/second in a 64G file is pretty extreme performance; even 385 (sans compression) is excellent. On the other hand, the sequential runs do look kinda like CPU-limited I/O.

This brings up an interesting issue: should I modify Bonnie so that the data it writes is less compressible? The answer’s not obvious, since real application data will vary wildly in compressibility. Hmmm.
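One obvious route would be filling the buffer from a random source, which no general-purpose compressor can squeeze. A Python sketch of the contrast (an illustration, not a proposed Bonnie patch):

```python
import os
import zlib

regular = b"0123456789" * 100_000   # Bonnie-style regular bytes, ~1 MB
random_ = os.urandom(len(regular))  # essentially incompressible

for name, data in (("regular", regular), ("random", random_)):
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name}: compresses to {ratio:.1%}")
```

The regular run shrinks to a sliver; the random run actually grows slightly from deflate overhead. Real application data lands all over the space in between, which is why the answer isn’t obvious.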

The other thing that these (preliminary, unverified, OK?) results suggest is that yes, there is a price for all that ZFS magic: it takes more out of your CPU. Now remember that those %CPU numbers are percent of one CPU.

There are a lot of apps where investing some of your CPU cycles in making I/O go faster is a good trade-off. In fact, that would include a high proportion of enterprise data-center apps.

I’ll get some more numbers.


November 17, 2005