Python float vs numpy.float32

May 15, 2017, at 07:40 AM

Using numpy.float32:

import numpy

t = numpy.float32(.3)
x = numpy.float32(1)
r = numpy.float32(-.3)
_t = t + x + r
_t == 1 # -> False

Using regular Python float:

t = .3
x = 1
r = -.3
_t = t + x + r
_t == 1 # -> True

Why?

Answer 1

Python's float is implemented as a C double; from the documentation:

Floating point numbers are usually implemented using double in C; information about the precision and internal representation of floating point numbers for the machine on which your program is running is available in sys.float_info.
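For illustration, you can inspect those limits on your own interpreter (a minimal check; the exact values depend on your platform's C double, though 53-bit IEEE 754 doubles are near-universal):

import sys

# Precision of the C double backing Python's float.
print(sys.float_info.mant_dig)  # 53 mantissa bits on typical IEEE 754 platforms
print(sys.float_info.dig)       # ~15 reliable decimal digits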

Therefore, you are comparing 32-bit and 64-bit precision floating-point numbers. The following will work:

import numpy

t = numpy.float64(.3)
x = numpy.float64(1)
r = numpy.float64(-.3)
_t = t + x + r
_t == 1 # -> True

Answer 2

Floating-point values are inherently inexact on computers. The default Python float is what's called a double-precision floating-point number on most machines, according to https://docs.python.org/2/tutorial/floatingpoint.html. numpy.float32 is a single-precision float; its double-precision counterpart is numpy.float64. This explains the difference here: the float32 literals and intermediate sums are rounded more coarsely, so the rounding error doesn't cancel out.
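To see the precision gap directly, you can print the same literal at full precision (a small sketch; the digits shown are the standard IEEE 754 representations of 0.3):

import numpy

# The same literal is stored with different precision:
print("%.17f" % numpy.float32(.3))  # 0.30000001192092896 (single precision)
print("%.17f" % .3)                 # 0.29999999999999999 (double precision)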

In general, floating-point numbers shouldn't be compared directly using ==. You can use numpy.isclose to deal with the small errors caused by inexact floating-point representations.
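Applied to the example from the question, a tolerance-based comparison succeeds even for the float32 sum (a minimal sketch using numpy.isclose with its default tolerances):

import numpy

t = numpy.float32(.3)
x = numpy.float32(1)
r = numpy.float32(-.3)
_t = t + x + r

print(_t == 1)               # False: exact comparison is defeated by rounding error
print(numpy.isclose(_t, 1))  # True: the result is within the default tolerances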
