[Shootout-list] fun vs. serious

Bengt Kleberg bengt.kleberg@ericsson.com
Wed, 22 Sep 2004 10:37:16 +0200


Brandon J. Van Every wrote:
...deleted
> No, actually.  To the extent I'm 'doing' anything here, I'm not here to
> have fun.  I'm here to evaluate and promote better languages at the
> expense of inferior ones.  Why?  Because I'd like mainstream industry to
> use less 'crap' on a daily basis.  I'd like to get paid lots of money to
> use languages that aren't 'crap'.  If I thought the Shootout was only a
> hobbyist funzie toy, I'd ignore it.  I see it as more about the validity
> of open source business models, commercial vs. open source compiler
> comparos, etc.
> 
> To persuade PHBs, I think it's best to stick to a consistent message.
> Like performance.  I'm doubtful that any website could encompass all
> aspects of language advocacy, and still be taken seriously by business
> types.

i am paid to use a language that isn't 'crap'. not lots, but the same as 
before, when i had to write c and java (i had a better-paid offer, but 
that was c++). this language was not chosen (neither by me, nor by 
management) because of performance. (well, good-enough performance was 
certainly a criterion, but it was a checkbox like ''support 
available'', or something similar).
instead the deciding factors were development time, maintenance costs 
and ''in service performance'' (down time).


...deleted

> Ok, look.  Are we even doing a good job at the very basics of the
> Shootout at this point?  I feel like I'm hearing a lot of ideas for new
> tests / new features, when the old stuff isn't even in particularly good
> shape yet.  How's that C# benchmark doing lately?

redesigning the existing tests so they better measure the performance 
they are supposed to measure is a very good thing (tm).
having said that, i still think it would be rather boring to have to 
persuade the committers that the hash test has to be removed before 
being allowed to design a new test.

and i would not like to hold off on new tests until somebody manages to 
make c# pass certain tests (btw: which tests?).


>  I'd like to see the
> essentials done well, before worrying about R&D issues like how you
> define a "safety benchmark."  The only "safety benchmark" I can think
> of, would be to provide thousands of unsafe tasks and measure how many
> times something fails, and whether the failures are reported.

a good idea. it would have to be ''the same kind of unsafe task''. and 
since we want performance, we would have to measure how long it takes 
to detect and report the failure :-)
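
something like this, perhaps (a rough python sketch; the ./unsafe_task 
program and what counts as ''reported'' are made up for illustration, 
not part of any existing shootout harness):

    # time how long a language implementation takes to detect and
    # report a failure in one unsafe task.
    import subprocess
    import time

    def measure_failure_report(command, timeout=60.0):
        """run one unsafe task; return (reported, seconds elapsed)."""
        start = time.monotonic()
        try:
            result = subprocess.run(command, capture_output=True,
                                    text=True, timeout=timeout)
        except subprocess.TimeoutExpired:
            # the failure was never detected/reported in time
            return False, timeout
        elapsed = time.monotonic() - start
        # a nonzero exit plus a diagnostic on stderr counts as reported
        reported = result.returncode != 0 and bool(result.stderr.strip())
        return reported, elapsed

    # over thousands of the same kind of unsafe task, count how many
    # failures are actually reported, and how quickly:
    # results = [measure_failure_report(["./unsafe_task", str(i)])
    #            for i in range(1000)]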


bengt