[Shootout-list] fannkuch

Bengt Kleberg bengt.kleberg@ericsson.com
Tue, 24 May 2005 08:08:06 +0200


On 2005-05-23 21:31, Brent Fulgham wrote:
...deleted
> Here's some info that might belong in the FAQ:
> 
> 1.  The timer measurements are made using the 
> BSD::Resource Perl module.  The 'times' method makes 

this one is in the faq. good work.


> two internal calls to 'getrusage' to get this data.
> 
> From the POD
> (http://search.cpan.org/~jhi/BSD-Resource-1.24/Resource.pm):
> 
> "The current implementation uses two getrusage()
> system calls: one with RUSAGE_SELF (the current 
> process) and one with RUSAGE_CHILDREN (all the child 
> processes of the current process that have terminated
> at the time the call is made). Therefore the 
> operation is not 'atomic': the times for the children
> are recorded a little bit later."
> 
> So, this is one potential source of measurement error,
> though probably negligible.

while i think you are correct (i make the same assumption, to be able to 
ignore RUSAGE_SELF when i measure), it could be measured.


> 2.  There was a bug in the 2.6.8 and below Linux
> Kernels where some thread time was not accounted if
> the thread did not 'join' with its parent process
> prior to terminating.  This was fixed in the 2.6.9 
> release, so is not an issue for us.
> 
> 3.  The 'struct timeval' used to hold the user and
> system time has a microsecond resolution.
> 
> IIRC, testing that we did in C using the underlying
> 'getrusage' methods indicated that the resolution
> of the timings was about 0.01 second.
> 
> That's the sum total of available information on the
> resolution of the timings provided by the shootout.

imho the interesting bit (for this particular discussion) is:
''the resolution of the timings was about 0.01 second''.

that is the same as what i get on my computer (SunOS ws12490 5.8 
Generic_117350-16 sun4u sparc SUNW,Sun-Blade-1500). to my mind that 
means a granularity of 10 ms, and that we need to measure at least 
100 ms to get the reliability to 10%. preferably more (1000 ms) to 
drive the reliability towards 1%. i am not good at statistics and i 
could be wrong here. i do think that setting the minimum runtime to the 
same value as the granularity is a bad idea.


in the future, when minibench is replaced with a slightly more flexible 
system, i think it would be possible to measure the granularity before 
each run and then set the minimum runtime from that value, allowing an 
override if somebody thinks it is too much/little.


bengt