[Pkg-octave-devel] Bug#532656: octave3.2_3.2.0-1(mips/unstable): FTBFS on mips. Segfault in regression test.

Rafael Laboissiere rafael at debian.org
Sun Jun 14 20:45:50 UTC 2009


* Rafael Laboissiere <rafael at debian.org> [2009-06-14 15:39]:

> * Rafael Laboissiere <rafael at debian.org> [2009-06-14 12:25]:
> 
> > I think that the different values of x_max and x_min explain the bug on
> > the mips system.  I guess that this is caused by the following lines in
> > pr-output.cc (function set_format):
> > 
> >       int x_max = max_abs == 0.0
> >         ? 0 : static_cast<int> (floor (log10 (max_abs) + 1.0));
> >
> >       int x_min = min_abs == 0.0
> >         ? 0 : static_cast<int> (floor (log10 (min_abs) + 1.0));
> 
> I meant lines 854 to 858 in pr-output.cc, in function 
> set_format (const Complex& c, int& r_fw, int& i_fw)

I think I found the cause of the bug.  The simple program:

///////////////////////////////////////////////////////////////////////////
#include <cmath>
#include <iostream>

int
main (void)
{
    std::cerr << static_cast<int> (floor (log10 (0.0/0.0))) << std::endl;
    return 0;
}
///////////////////////////////////////////////////////////////////////////

when compiled on mips yields 2147483647, while on amd64 and i386 it
yields -2147483648.  (Converting a NaN to int is undefined behavior in
C++, so either result is permissible.)  This explains the different
behavior of the two architectures.
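
Incidentally, those two values are just INT_MAX and INT_MIN for a
32-bit int, so the conversion ends up at opposite ends of the int range
on the two architectures.  A quick check (not part of the original test
case, and assuming a 32-bit int):

///////////////////////////////////////////////////////////////////////////
#include <iostream>
#include <limits>

int
main (void)
{
  // Prints 2147483647 -2147483648 on a 32-bit int, i.e. exactly the
  // values observed on mips and on amd64/i386 respectively.
  std::cerr << std::numeric_limits<int>::max () << " "
            << std::numeric_limits<int>::min () << std::endl;
  return 0;
}
///////////////////////////////////////////////////////////////////////////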

Now, I am wondering whether the code in set_format() makes sense.  The
function reads:

///////////////////////////////////////////////////////////////////////////
static void
set_format (const Complex& c, int& r_fw, int& i_fw)
{
  // [snip]

  double rp = c.real ();
  double ip = c.imag ();

  bool inf_or_nan = (xisinf (c) || xisnan (c));

  bool int_only = (D_NINT (rp) == rp && D_NINT (ip) == ip);

  double r_abs = rp < 0.0 ? -rp : rp;
  double i_abs = ip < 0.0 ? -ip : ip;

  int r_x = r_abs == 0.0
    ? 0 : static_cast<int> (floor (log10 (r_abs) + 1.0));

  // [snip]
///////////////////////////////////////////////////////////////////////////

When r_abs is NaN (as in the bug-triggering case), what is the point of
computing log10 (r_abs) and propagating the result?  I am probably
missing something, but it seems to be pure chance that r_x ends up
negative on amd64 and i386, which is what keeps the bug from triggering
there.
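
For what it is worth, here is a minimal sketch of the kind of guard I
have in mind.  The helper name safe_digits is made up, and I use the
self-comparison NaN test instead of Octave's xisnan only to keep the
example self-contained; this is just an illustration, not a proposed
patch:

///////////////////////////////////////////////////////////////////////////
#include <cmath>
#include <iostream>

// Hypothetical guarded version of the r_x computation in set_format():
// treat NaN like the zero case instead of casting an undefined value.
static int
safe_digits (double r_abs)
{
  if (r_abs == 0.0 || r_abs != r_abs)   // r_abs != r_abs is true only for NaN
    return 0;

  return static_cast<int> (std::floor (std::log10 (r_abs) + 1.0));
}

int
main (void)
{
  std::cerr << safe_digits (0.0 / 0.0) << std::endl;   // prints 0 on any arch
  std::cerr << safe_digits (123.45) << std::endl;      // prints 3
  return 0;
}
///////////////////////////////////////////////////////////////////////////

With a guard like this, r_x would come out the same (0) on every
architecture for NaN input, instead of depending on whatever the FPU
happens to produce when converting NaN to int.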

-- 
Rafael




