[Shootout-list] Rule 30

John Skaller skaller@users.sourceforge.net
Sat, 21 May 2005 19:43:44 +1000


On Sat, 2005-05-21 at 10:18 +0100, Jon Harrop wrote:
> On Saturday 21 May 2005 05:09, John Skaller wrote:

> > I have to think, though, that entirely avoiding it is self-defeating:
> > most optimisation is 'redundancy reduction' and if we eliminate it
> > from the tests, then there is no space for optimisers to compete.
> 
> No, it is not self-defeating. We can design benchmarks where no part can be 
> completely optimised away. That is quite different from saying that no 
> optimisations can be done. 

Yes, of course -- I did say 'entirely'. What I meant was that
it still isn't "hard science": there's still scope for debate
about whether a given benchmark has enough -- or too few --
optimisation opportunities to be both interesting and fair.
The dividing line between 'interesting' and 'fair' isn't as
sharp as one might like, and never will be; we can only do
better.

> Most optimisations will be specialisations or 
> rearrangements, which is exactly what we should be testing as these are the 
> optimisations which speed up real programs.

Yes, but by that argument the test where Haskell is smart
enough to eliminate all but one of the loops, and thereby
screams in ahead of everyone else, is actually a perfectly
valid test, and Haskell *deserves* one test that lets it
do that. One could argue that -- I'm not actually making
that argument, but if someone did I couldn't simply dismiss it.
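To make the kind of elimination I mean concrete, here's a minimal sketch (my own illustration, not the actual shootout benchmark): a program that nominally runs the same pure loop body many times. Because the repeated expression is loop-invariant and pure, GHC's full-laziness pass can hoist it out and do the work once, collapsing the repetitions -- exactly the sort of "redundancy reduction" under discussion. The names `loopBody` and the iteration counts are hypothetical.

```haskell
import Data.List (foldl')

-- A pure "unit of work": strict left-fold summation of 1..n.
loopBody :: Int -> Int
loopBody n = foldl' (+) 0 [1 .. n]

main :: IO ()
main = do
  -- Nominally 100 iterations of the same work. The expression
  -- `loopBody 1000000` doesn't depend on the loop variable, so the
  -- compiler may hoist it and evaluate it only once, then reuse
  -- the shared result for every list element.
  let results = [loopBody 1000000 | _ <- [1 .. 100 :: Int]]
  print (sum results)   -- prints 50000050000000
```

A language without referential transparency can't make that transformation as freely, which is why a benchmark built this way rewards Haskell in particular.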

So choice of 'reasonably fair and interesting' tests will always
be a matter of opinion, and hopefully consensus.

-- 
John Skaller, skaller at users.sf.net
PO Box 401 Glebe, NSW 2037, Australia Ph:61-2-96600850 
Download Felix here: http://felix.sf.net