[This is part of the Wide Finder 2 series.] I’ve just updated the Wide Finder wiki with a straw-man benchmark (see “Strawman”), with sample Ruby code and output. Argh... it takes Ruby over 23 seconds to process just 100K lines. I hear talk of a Solaris/SPARC optimized version, gotta track that down. Comments please on the benchmark, then we can all start bashing our heads against the big file.
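For readers who haven't opened the wiki: the straw-man task is roughly "tally article fetches in an access log, print the top 10". This is a sketch of that shape, not the actual wiki code; the regex, the sample log lines, and the `top_hits` helper are all illustrative assumptions.

```ruby
# Sketch of a Wide Finder-style tally: count hits per article path in an
# Apache-style access log, then print the top 10 by descending count.
# Note that the ordering of tied counts is left unspecified here, just
# as it is in the straw-man.

def top_hits(lines, n = 10)
  counts = Hash.new(0)  # default count of 0 for unseen keys
  pattern = %r{GET /ongoing/When/\d\d\dx/(\d\d\d\d/\d\d/\d\d/[^ .]+) }
  lines.each do |line|
    counts[$1] += 1 if line =~ pattern
  end
  counts.sort_by { |key, count| -count }.first(n)
end

# Made-up sample lines, just to show the shape of the input.
sample = [
  'x - - [22/May/2008] "GET /ongoing/When/200x/2008/05/22/Benchmark HTTP/1.1" 200',
  'x - - [22/May/2008] "GET /ongoing/When/200x/2008/05/22/Benchmark HTTP/1.1" 200',
  'x - - [22/May/2008] "GET /ongoing/When/200x/2008/05/21/Other HTTP/1.1" 200',
  'x - - [22/May/2008] "GET /favicon.ico HTTP/1.1" 200',
]

top_hits(sample).each { |key, count| puts "#{count}: #{key}" }
```

The real benchmark reads the log from a file rather than an in-memory array, of course; the interesting question is how slowly a loop like this runs over 100K lines.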



Contributions

From: Haik (May 22 2008, at 22:47)

Check out

http://cooltools.sunsource.net/coolstack/

for the SPARC optimized version of Ruby.

From: Erik Engbrecht (May 23 2008, at 11:09)

I'd like to comment on the security of Internet Exploder and Ruby...so I click on your link to the Ruby code, and what does IE do? You would think it would give me the code in an editor or whine about not knowing what to do with a *.rb file...but nooo... It ran the script.

From: Marius (May 23 2008, at 16:09)

IE doesn't run scripts by default, so you must have done something to the browser that makes it run scripts without asking. That's like blaming Linux if I choose to be logged in as root every time...

From: Preston L. Bannister (May 23 2008, at 20:14)

Tim,

You might want to reconsider the part on the wiki about "it should remain substantially but not exclusively I/O dominated". The core problem is how to distribute processing over lots of cores, so you want lots of processing in need of spreading around.

What you want is a problem that requires substantially more processing than can be done on a single core. If a many-cored Niagara chip can chomp through the benchmark faster than a faster-per-core (but with fewer cores) x86 chip, that makes for a pretty good example.

The example may be doing a lot of I/O, but the bottleneck has to be CPU (big time), to offer the sort of demonstration for which I think you are aiming.

From: Erik Engbrecht (May 23 2008, at 20:45)

Well, I didn't do anything to my knowledge. I installed Ruby maybe a year ago.

From: Erik Engbrecht (May 24 2008, at 05:22)

I think the IO part is good because a lot of very common real-world tasks are IO intensive.

From: anon (May 24 2008, at 05:54)

Preston:

I don't want to speak for anybody, but I think the main point of this is to show the benefits of a language in a multi-core environment for the practical 'everyuser'.

The benefits for massively computation-heavy science apps are already well known, but if you're implying that there's no real benefit for practical everyday use by normal people... well, then that leads to another set of questions...

From: anon (May 24 2008, at 06:24)

Just to press my point a bit further: if the most mundane and practical examples are all IO bound, then why aren't vendors pushing mirrored HDs instead of multi-cores?

It seems to me that a typical end user would benefit much more from the former. And that's not even counting the very large backup problems that end users have, which mirroring would also help with.

From: Erik Engbrecht (May 24 2008, at 10:32)

@anon - I think that in the data center various RAID techniques have been pretty much standard practice for a long, long time. Also, the performance benefits of things like mirroring aren't always clear cut. You may theoretically be able to read twice as fast, but you write more slowly.

On the consumer end most units sold today are laptops, not desktops. I don't want a second HD in my laptop.

From: anon (May 24 2008, at 12:31)

Erik:

Evidently those solutions aren't good enough :P

Look, all I'm trying to say is that it's disingenuous to swap real-world problems for benchmarks that show off some aspect of your favorite cool little language. The main question to ask when defining a benchmark is how representative it is of most common tasks.

If, at the end of all this, we find out that yes, most tasks are IO bound and it doesn't really matter which language you use, or how many cores you have... then at least we know where the real problems for real people are and what kinds of solutions we should really be working towards.

From: breath (May 24 2008, at 16:31)

I'm seeing a hint of circularity here -- you're only considering IO-bound benchmarks, so the eventual result is that you'll conclude "most normal tasks are IO-bound".

I think if you want to figure out what a truly representative task is, you have to do it some other way. I personally haven't written any disk-IO-bound software in years. Network-IO-bound, memory-bandwidth-bound, that's a different matter. :-)

From: Ray Waldin (May 24 2008, at 23:46)

I don't know if this is going to be a problem against the final dataset, but right now the sort order for items that have the same count is unspecified. Even worse, since we're only showing top 10, if the 10th ties with later items, there's no way to validate the counts at all. Can we add a secondary ascending sort on the key when the counts match? Thanks!
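Ray's proposed tie-break is a one-line change to the comparator. A sketch, with made-up counts for illustration: sort descending by count, and for equal counts, ascending by key, so the top-10 listing is deterministic and verifiable.

```ruby
# Descending by count; ties broken by ascending key.
counts = { "b" => 3, "a" => 3, "c" => 5 }
ordered = counts.sort_by { |key, count| [-count, key] }
ordered.each { |key, count| puts "#{count}: #{key}" }
# prints "5: c", then "3: a", then "3: b"
```

With only a descending-count sort, "a" and "b" could come out in either order, which is exactly the validation problem Ray describes.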

From: Preston L. Bannister (May 25 2008, at 16:36)

My take (on Tim's show) is that we are looking for a task with something like the following characteristics:

1) Massive I/O (as many mundane real-world tasks do large I/O), but not so much that I/O is the bottleneck (at least on a common single/dual CPU box).

2) Easy parallelism - as in log file entries that do not need to be processed in order. Anything harder is going to limit the number of participants (and take the solution too far away from "general purpose").

3) Enough processing so that the CPU is the bottleneck on common single and dual CPU systems (also a not-uncommon case).

The failing of the first "wide-finder" example was with (1). The problem could become I/O bound on a single CPU box - which was not the point of the exercise.

I do not think Tim is looking to prove one fashionable programming language is better than another. Good examples might illuminate a currently-cool language's strength or weakness in this aspect (as was the case in the prior benchmark).

But ... I am not Tim, and this is his show. :)

From: Alex Morega (May 28 2008, at 11:42)

"Argh... it takes Ruby over 23 seconds to process just 100K lines" - it took under 3 seconds for the test ruby script to run on my macbook, are you sure it took 23 seconds for you?

From: Tim (May 28 2008, at 17:26)

Alex: Yes, I re-checked. On my old-ish MacBook it's in the 3-4 second range (starting the *second* time you run it, so the filesystem cache obviously helps). Hmmm

May 22, 2008
· Technology (90 fragments)
· · Concurrency (75 more)