How to control the display precision of a NumPy float64 scalar?

I'm writing a teaching document that uses lots of examples of Python code and includes the resulting numeric output. I'm working from inside IPython and a lot of the examples use NumPy.

I want to avoid print statements, explicit formatting or type conversions. They clutter the examples and detract from the principles I'm trying to explain.

What I know:

  • From IPython I can use %precision to control the displayed precision of any float results.

  • I can use np.set_printoptions() to control the displayed precision of elements within a NumPy array.

What I'm looking for is a way to control the displayed precision of a NumPy float64 scalar, which doesn't respond to either of the above. These scalars are returned by a lot of NumPy functions.
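
For concreteness, a minimal illustration of the kind of call that produces such a scalar (np.mean here is just one example; some_function in the transcript below stands in for any of them):

    import numpy as np

    x = np.mean(np.array([0.1, 0.15]))   # NumPy reductions return NumPy scalars
    type(x)                              # numpy.float64, not a plain Python float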

>>> x = some_function()
Out[2]: 0.123456789

>>> type(x)
Out[3]: numpy.float64

>>> %precision 2
Out[4]: '%.2f'

>>> x
Out[5]: 0.123456789

>>> float(x)  # that precision works for regular floats
Out[6]: 0.12

>>> np.set_printoptions(precision=2)

>>> x  # but doesn't work for the float64
Out[8]: 0.123456789

>>> np.r_[x]  # does work if it's in an array
Out[9]: array([0.12])

What I want is

>>> # some formatting command
>>> x = some_function() # that returns a float64 = 0.123456789
Out[2]: 0.12

but I'd settle for:

  • a way of telling NumPy to give me plain Python floats by default, rather than float64, or
  • a way of telling IPython how to handle a float64, kind of like what I can do with _repr_pretty_ for my own classes (a sketch of that hook follows this list).
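
For reference, the _repr_pretty_ hook mentioned in the second bullet looks roughly like this for a user-defined class (Reading is a made-up example, not part of the question):

    class Reading:
        def __init__(self, value):
            self.value = value

        def _repr_pretty_(self, printer, cycle):
            # IPython calls this when building the plain-text display
            printer.text(f"Reading({self.value:.2f})")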


Solution 1:[1]

IPython has formatters (core/formatters.py), each of which holds a dict mapping a type to a format method. There is some awareness of NumPy in the formatters, but nothing for the np.float64 type.

There are a bunch of formatters, for HTML, LaTeX, etc., but text/plain is the one used for console output.
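
One way to see which formatters a session has (run inside IPython; the exact set of keys depends on the IPython version and installed extensions):

    fmts = get_ipython().display_formatter.formatters
    sorted(fmts.keys())   # includes e.g. 'text/html', 'text/latex', 'text/plain'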

We first get the IPython formatter for console text output:

plain = get_ipython().display_formatter.formatters['text/plain']

and then set a formatter for the float64 type. We reuse the formatter that already exists for float, since it already knows about %precision:

plain.for_type(np.float64, plain.lookup_by_type(float))

Now

In [26]: a = float(1.23456789)

In [28]: b = np.float64(1.23456789)

In [29]: %precision 3
Out[29]: '%.3f'

In [30]: a
Out[30]: 1.235

In [31]: b
Out[31]: 1.235

Digging into the implementation, I also found that %precision calls np.set_printoptions() with a matching precision. I didn't know it did this, and it is potentially problematic if the user has already set print options themselves. Following the example above

In [32]: c = np.r_[a, a, a]

In [33]: c
Out[33]: array([1.235, 1.235, 1.235])

we see it is doing the right thing for array elements.
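
One way to confirm that %precision really did touch NumPy's print options (run inside IPython after the %precision 3 call above; the expected value of 3 simply reflects that setting):

    import numpy as np

    np.get_printoptions()['precision']   # expect 3 after %precision 3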

I can do this formatter initialisation explicitly in my own code, but a better fix might be to modify IPython's core/formatters.py at line 677

    @default('type_printers')
    def _type_printers_default(self):
        d = pretty._type_pprinters.copy()
        d[float] = lambda obj, p, cycle: p.text(self.float_format % obj)
        # suggested "fix": register the same printer for np.float64,
        # but only if NumPy has already been imported somewhere, and
        # without adding a hard dependency on it here
        if 'numpy' in sys.modules:
            np_float64 = sys.modules['numpy'].float64
            d[np_float64] = lambda obj, p, cycle: p.text(self.float_format % obj)
        # end suggested fix
        return d

to also handle np.float64 when NumPy has been imported. I'm happy for feedback on this; if I feel brave I might submit a PR.
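
In the meantime, a sketch of doing the registration yourself, e.g. from a file in an IPython startup directory (the ~/.ipython/profile_default/startup/ location is a conventional choice, not something mandated by the solution above):

    from IPython import get_ipython
    import numpy as np

    ip = get_ipython()
    if ip is not None:   # only register when actually running under IPython
        plain = ip.display_formatter.formatters['text/plain']
        # reuse the existing float printer so %precision is respected
        plain.for_type(np.float64, plain.lookup_by_type(float))

With that in a startup file, float64 scalars follow %precision in every new session.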

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
[1] Solution 1: Peter Corke