[Shootout-list] New Benchmarks

Brent Fulgham bfulg@pacbell.net
Tue, 15 Mar 2005 11:59:32 -0800 (PST)


We have a slate of new benchmarks being added (such as FASTA,
pidigits, etc.) that will eventually replace some of the
original benchmarks.  We are doing this because we found that
many of the original benchmarks did not actually test what they
were meant to test (e.g., a dictionary-type test really measured
the cost of string formatting rather than the cost of lookups).

Consequently, we are always on the lookout for new and exciting
benchmarks to include (and possibly to displace 'old'
benchmarks).

I recently noticed a discussion of benchmarks related to
computational chemistry
(http://www.cfs.dl.ac.uk/benchmarks/compchem.html).  It seems
there are at least a few benchmarks there that might be of
interest (and see http://www.cse.clrc.ac.uk/disco/hw-perf.shtml
for more useful information):

1.  Matrix multiplication.  We have an existing test, but we
should also test sparse multiplication, as well as the cost of
matrix transformation (as might be found in 3D games or
computational chemistry); a rough sketch of the dense kernel
follows this list.  [Ref: MATRIX-97]

2.  'Computational Chemistry Kernels'.  This group is probably
more complex than we would typically use, but it has a few
interesting things, such as a Monte Carlo solution technique and
a Jacobi iterative linear equation solver (the Jacobi step is
sketched after this list).

Unfortunately, I could not locate sources for any of these
tests.

3.  The 'Stream' benchmark.  This tests four simple building
blocks of vector operations (copy, sum, scale, triad), and is
more in keeping with our smaller benchmarks.  The test is
designed to measure the memory bandwidth limitations of a
system; the triad kernel is sketched after this list.  (See
http://home.austin.rr.com/mccalpin/papers/bandwidth/node2.html#SECTION00020000000000000000).
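To make item 1 concrete, here is roughly the kind of kernel I
have in mind, written in OCaml purely for the sake of example
(the dimensions, names, and lack of any blocking are
placeholders of mine, not a proposed spec):

(* Naive dense matrix multiply, c = a x b, over float array arrays.
   Dimensions are taken from the arguments; no blocking or other tricks. *)
let mat_mul a b =
  let n = Array.length a in
  let k = Array.length b in
  let m = Array.length b.(0) in
  let c = Array.make_matrix n m 0.0 in
  for i = 0 to n - 1 do
    for j = 0 to m - 1 do
      let s = ref 0.0 in
      for p = 0 to k - 1 do
        s := !s +. a.(i).(p) *. b.(p).(j)
      done;
      c.(i).(j) <- !s
    done
  done;
  c

let () =
  let a = Array.make_matrix 2 3 1.0
  and b = Array.make_matrix 3 2 2.0 in
  let c = mat_mul a b in
  Printf.printf "c.(0).(0) = %g\n" c.(0).(0)

The sparse and 3D-transform variants would have the same shape,
just a different storage layout and loop structure.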
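For item 2, the Jacobi solver at least is easy enough to
describe even without the original sources.  A rough sketch of
the iteration (the convergence test, the example system, and all
of the names below are mine, not taken from the CCP kernels):

(* Jacobi iteration for A x = b:
     x'(i) = (b(i) - sum over j <> i of A(i,j) * x(j)) / A(i,i)
   repeated until successive iterates agree to within tol. *)
let jacobi a b x0 tol max_iter =
  let n = Array.length b in
  let x = Array.copy x0 in
  let x' = Array.make n 0.0 in
  let rec loop iter =
    for i = 0 to n - 1 do
      let s = ref 0.0 in
      for j = 0 to n - 1 do
        if j <> i then s := !s +. a.(i).(j) *. x.(j)
      done;
      x'.(i) <- (b.(i) -. !s) /. a.(i).(i)
    done;
    let err = ref 0.0 in
    for i = 0 to n - 1 do
      err := max !err (abs_float (x'.(i) -. x.(i)));
      x.(i) <- x'.(i)
    done;
    if !err > tol && iter + 1 < max_iter then loop (iter + 1)
  in
  loop 0;
  x

let () =
  (* A small diagonally dominant system, for which Jacobi converges. *)
  let a = [| [| 4.0; 1.0 |]; [| 2.0; 5.0 |] |]
  and b = [| 9.0; 13.0 |] in
  let x = jacobi a b [| 0.0; 0.0 |] 1e-9 1000 in
  Printf.printf "x = (%g, %g)\n" x.(0) x.(1)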
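And for item 3, the Stream kernels are tiny; the 'triad', for
instance, is just a fused scale-and-add over large arrays.  The
array size below is arbitrary, picked only to be larger than any
cache:

(* STREAM 'triad' kernel: a(i) = b(i) + q * c(i).  The arrays are made
   much larger than the caches so the loop is limited by memory bandwidth. *)
let triad a b c q =
  let n = Array.length a in
  for i = 0 to n - 1 do
    a.(i) <- b.(i) +. q *. c.(i)
  done

let () =
  let n = 2_000_000 in                  (* illustrative size, not a spec *)
  let a = Array.make n 0.0
  and b = Array.make n 1.0
  and c = Array.make n 2.0 in
  triad a b c 3.0;
  Printf.printf "a.(0) = %g\n" a.(0)    (* use the result so it cannot be elided *)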

=======================================

Jon Harrop, our resident Objective Caml/numerical
methods expert, has proposed several benchmarks based 
on his new book 'OCaml for Scientists' 
(http://www.ffconsultancy.com/products/ocaml_for_scientists/complete/).

I think several of these (perhaps all of them) would
be useful in the context of the shootout.

I would like to see more "real world" tests of this kind, since
they are (a) interesting, and (b) easier to specify in such a
way that we don't have to go to ridiculous extremes to prevent
Haskell (for example) from optimizing away operations that it
(rightly) identifies as worthless!  :-)
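For what it's worth, simply requiring the program to print a
value derived from the whole result (a checksum, say) usually
takes care of that.  A trivial illustration, with details of my
own choosing:

(* Requiring output that depends on the whole result keeps a compiler
   from discarding the computation as dead code. *)
let () =
  let n = 1_000_000 in
  let v = Array.init n (fun i -> sin (float_of_int i)) in
  Printf.printf "checksum = %.9f\n" (Array.fold_left (+.) 0.0 v)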

-Brent