Memory usage of Python base types (particularly int and float)

This is an example from the Python 3.8.0 interpreter (the behavior is similar in 3.7.5):

>>> import sys
>>> sys.getsizeof(int)
416
>>> sys.getsizeof(float)
416
>>> sys.getsizeof(list)
416
>>> sys.getsizeof(tuple)
416
>>> sys.getsizeof(dict)
416
>>> sys.getsizeof(bool)
416

getsizeof() returns how many bytes a Python object consumes, including the garbage collector overhead (see the documentation). Why do these basic Python classes all consume the same amount of memory?

If we take a look at instances of these classes:

>>> import sys
>>> sys.getsizeof(int())
24
>>> sys.getsizeof(float())
24

The default argument is 0, and both instances occupy the same amount of memory in that case. However, if I pass a nonzero argument:

>>> sys.getsizeof(int(1))
28
>>> sys.getsizeof(float(1))
24

and this is where it gets strange. Why does the instance's memory usage increase for int but not for float?



Solution 1:[1]

In short, it all boils down to how Python represents arbitrarily long integers. A float, by contrast, is a fixed-size (and therefore limited) wrapper around a C double.

In the CPython implementation, every object (source) begins with a reference count and a pointer to the object's type. That's 16 bytes on a 64-bit build.

A float object stores its data as a C double (source), which is 8 bytes. So 16 + 8 = 24 bytes for a float object.
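As a sanity check, the fixed size of floats is easy to observe directly. A minimal sketch, assuming a 64-bit CPython build (the value of a float never changes its footprint):

```python
import sys

# Every float, regardless of its value, occupies the same fixed size:
# a 16-byte object header plus an 8-byte C double (on 64-bit builds).
values = (0.0, 1.0, -2.5, 1e308, float('inf'), float('nan'))
sizes = [sys.getsizeof(x) for x in values]
print(sizes)  # all entries are equal
```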

With integers, the situation is more complicated. Integer objects are represented as variable-sized objects (source), which adds another 8 bytes (an item count) on top of the 16. The digits are stored in an array. Depending on the platform, Python uses either an array of 32-bit unsigned integers holding 30-bit digits, or an array of 16-bit unsigned integers holding 15-bit digits. For small integers there is only one 32-bit entry in the array, so add another 4 bytes: 16 + 8 + 4 = 28 bytes.

If you represent a larger integer, the size grows:

sys.getsizeof(int(2**32))  # prints 32 (24 + 2*4 bytes)
sys.getsizeof(int(2**64))  # prints 36 (24 + 3*4 bytes)
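The growth pattern above can be captured in a small helper. This is only an illustrative sketch (predicted_int_size is a made-up name; it assumes the 30-bit-digit, 4-bytes-per-digit layout with a 24-byte variable-size header described above, and newer CPython versions may report different sizes for small ints):

```python
import sys

def predicted_int_size(n):
    """Predict sys.getsizeof(n) for an int, assuming 30-bit digits
    stored as 4-byte unsigned ints plus a 24-byte variable-size header."""
    if n == 0:
        return 24  # zero is stored with no digits at all
    ndigits = (abs(n).bit_length() + 29) // 30  # digits needed, rounded up
    return 24 + 4 * ndigits

# Compare the prediction against the interpreter's own report
for n in (0, 1, 2**30, 2**32, 2**64):
    print(n, predicted_int_size(n), sys.getsizeof(n))
```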

EDIT:

With sys.getsizeof(int) you're getting the size of the class itself, not of an instance of the class. The same holds for float, bool, and so on.

print(type(int))  # prints <class 'type'>

If you look into the source, there's a lot of stuff under the hood. On my Python 3.6.9 (Linux/64-bit), sys.getsizeof(int) prints 400 bytes.

Solution 2:[2]

Looking at the docs, it's important to observe that:

Only the memory consumption directly attributed to the object is accounted for, not the memory consumption of objects it refers to.

So what can you infer from the fact that sys.getsizeof(int(1)) returns a greater value than sys.getsizeof(float(1))?

Simply that it takes more memory to represent an int than a float. Is this surprising? Possibly not, if we can expect to "do more things" with an int than with a float. We can gauge the "amount of functionality" to a first approximation by counting their attributes:

>>> len(dir(int))
70
>>> len(dir(float))
57
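To see where that extra functionality comes from, we can list the attributes int has but float lacks. The exact set varies by Python version; only a few stable members, such as bit_length and the bitwise operators, are assumed here:

```python
# Attributes present on int but not on float; bit_length and the
# bitwise operators (__and__, __or__, __xor__, ...) are among them.
int_only = sorted(set(dir(int)) - set(dir(float)))
print(len(int_only))
print(int_only)
```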

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: EJoshuaS - Stand with Ukraine
Solution 2: gstukelj