[Shootout-list] Ackermann

John Skaller skaller@users.sourceforge.net
Mon, 20 Jun 2005 07:43:34 +1000


On Sat, 2005-06-18 at 11:58 +0300, Einar Karttunen wrote:
> John Skaller <skaller@users.sourceforge.net> writes:
> > We need |S| to be larger than an array which can
> > practically fit into memory. We know in advance
> > that we can easily fit a 65K * 65K array into
> > a large machine, that is, it isn't that hard
> > to have enough virtual memory to support 32 bits
> > of indexing.
> 
> Is there any compiler in the shootout which does this? 

g++ does something similar with templates :)
GHC could do it. MetaOCaml could do it. And ALL
compilers optimise things; that is what a compiler
does for a living.
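
To be concrete about what g++ can do with templates,
here is a rough sketch of mine (standard trickery,
nothing exotic): it makes the compiler evaluate
Ackermann entirely at compile time, so the binary
just prints a constant. For bigger arguments you may
need to raise g++'s template instantiation depth
limit (-ftemplate-depth).

#include <cstdio>

// Primary template: ack(m,n) = ack(m-1, ack(m, n-1))
template<int M, int N> struct Ack {
    enum { value = Ack<M-1, Ack<M, N-1>::value>::value };
};
// ack(0,n) = n+1
template<int N> struct Ack<0, N> {
    enum { value = N + 1 };
};
// ack(m,0) = ack(m-1, 1)
template<int M> struct Ack<M, 0> {
    enum { value = Ack<M-1, 1>::value };
};
// explicit (0,0) case to avoid ambiguous specialisation
template<> struct Ack<0, 0> {
    enum { value = 1 };
};

int main() {
    // all the recursion happens inside the compiler
    std::printf("ack(3,4) = %d\n", int(Ack<3, 4>::value));
    return 0;
}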

You cannot easily specify what reductions the compiler
will do.

I will be examining the Felix compiler to see how far
it is possible to reduce ack(3,n) by unfolding:
the compiler will *automatically*
reduce it to four distinct functions:

	ack0(n)
	ack1(n)
	ack2(n)
	ack3(n)
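
Roughly, the specialisation I would expect, sketched
by hand here -- this is not actual Felix output:

#include <cstdio>

int ack0(int n) { return n + 1; }          // ack(0,n)

int ack1(int n) {                          // ack(1,n)
    return n == 0 ? ack0(1) : ack0(ack1(n - 1));
}

int ack2(int n) {                          // ack(2,n)
    return n == 0 ? ack1(1) : ack1(ack2(n - 1));
}

int ack3(int n) {                          // ack(3,n)
    return n == 0 ? ack2(1) : ack2(ack3(n - 1));
}

int main() {
    std::printf("%d\n", ack3(4));          // prints 125
    return 0;
}

A reducer that keeps going collapses these further:
ack1(n) is just n+2, ack2(n) is 2n+3, and ack3(n)
is 2^(n+3) - 3.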

I'm sure GHC can do this already. The point is that
you cannot possibly label this as cheating -- and it
makes little sense to label a C++ emulation of this
using templates as cheating either.

The point about pre-computing ALL values of ack(m,n)
for m<=3, n<=10 is that partial evaluation is perfectly
acceptable, because partial evaluation is what ALL
compilers do .. you can't put limits on how good the
optimisers are, when optimiser quality is precisely
what the Shootout is trying to measure.

As far as I'm concerned any compiler that *failed*
to partially evaluate by eliminating m is a pretty
brain-dead compiler .. and a compiler that didn't
pre-compute ALL the values when told n<=10 would
never rate 'smart' or 'advanced' as a description
(it would just be 'ordinary' :)

The problem is when programmers 'help' the compiler
do these optimisations -- e.g. by just returning a value
from a pre-computed array .. or using templates ..
or just specifying -funroll-loops on the command
line .. all of this is cheating.
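
To be clear about what I mean by the pre-computed
array cheat, here is a sketch (mine, not anybody's
actual entry):

#include <cstdio>

static int table[4][11];            // all of ack(m,n), m<=3, n<=10

static int ack(int m, int n) {      // ordinary textbook definition
    if (m == 0) return n + 1;
    if (n == 0) return ack(m - 1, 1);
    return ack(m - 1, ack(m, n - 1));
}

int main() {
    for (int m = 0; m <= 3; ++m)    // fill the 44 entries once
        for (int n = 0; n <= 10; ++n)
            table[m][n] = ack(m, n);

    // from here on every "call" is a single indexed load
    std::printf("ack(3,8) = %d\n", table[3][8]);
    return 0;
}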

In fact a brain-dead interpretation of the code --
say Python, without tail call optimisation -- is
probably the only setting in which no partial
evaluation happens at all: if you allow *compilers*
into this game, you simply cannot use 'do it the
same way' specifications, or people will just
laugh at the Shootout methodology.

So you tell me if THIS program is valid:

int ack(m:[0:3], n:[0:10]) { ... }

Yes, I added some notation to show the possible
range of arguments. And the smart compiler will
note that the arguments can easily fit into
one machine word .. which is a massive speed
improvement over using two. Hehe .. and if
you use an array for the arguments instead
of the machine stack, you could fit it into
a single byte .. and a compiler doing that
would just cream all the others .. doing
this is actually under consideration
for Felix (although not for performance reasons).
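
For illustration only -- this is not how Felix does
it or will do it -- with the constraints above the
two external arguments pack into one byte, m in the
high nibble and n in the low one. (The *internal* n
values grow way past 10 of course, so they stay as
plain ints.)

#include <cstdio>

static int ack(int m, int n) {           // the usual recursion inside
    if (m == 0) return n + 1;
    if (n == 0) return ack(m - 1, 1);
    return ack(m - 1, ack(m, n - 1));
}

// one-byte entry point: m in 0..3 (2 bits), n in 0..10 (4 bits)
static int ack_packed(unsigned char key) {
    return ack(key >> 4, key & 0x0F);
}

int main() {
    unsigned char key = (unsigned char)((3 << 4) | 4);  // (m,n) = (3,4)
    std::printf("ack(3,4) = %d\n", ack_packed(key));    // prints 125
    return 0;
}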

But you CANNOT ban this example source on account
of the constraints, because C is doing exactly the
same thing by using the type 'int' instead of, for
example, the type 'long'. It just happens that C is
pretty brain-dead when it comes to specifying
constraints. Pascal and Ada are better at this
particular example. But ALL systems place some kind
of constraint on the input; even ones using GNU
Multiple Precision (GMP) integers have some limits.