Is torch.as_tensor() the same as torch.from_numpy() for a numpy array on a CPU?
On a CPU, is torch.as_tensor(a) the same as torch.from_numpy(a) for a numpy array a? If not, why not?
From the docs for torch.as_tensor:
if the data is an ndarray of the corresponding dtype and the device is the cpu, no copy will be performed.
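That condition is easy to check directly. A minimal sketch (np.shares_memory is a NumPy utility, not part of the quoted docs):

import numpy as np
import torch

a = np.zeros(3)                         # float64 ndarray on the CPU
t = torch.as_tensor(a)                  # matching dtype and device: no copy
print(np.shares_memory(a, t.numpy()))   # True -- tensor and array use one buffer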
From the docs for torch.from_numpy:
The returned tensor and ndarray share the same memory. Modifications to the tensor will be reflected in the ndarray and vice versa.
In both cases, any change to the resulting tensor changes the original numpy array.
import numpy as np
import torch

a = np.array([[1., 2], [3, 4]])
t1 = torch.as_tensor(a)
t2 = torch.from_numpy(a)

t1[0, 0] = 42.
print(a)
# [[42.  2.]
#  [ 3.  4.]]

t2[1, 1] = 55.
print(a)
# [[42.  2.]
#  [ 3. 55.]]
Also, in both cases, attempting to resize_ the tensor results in an error.
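Continuing the example above (the exact error message varies between PyTorch versions):

try:
    t1.resize_(3, 3)  # the storage is owned by the numpy array and cannot be resized
except RuntimeError as e:
    print(e)          # e.g. "Trying to resize storage that is not resizable"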
Solution 1:[1]
They are basically the same, except that as_tensor is more generic:

- Contrary to from_numpy, it supports a wide range of data types, including lists, tuples, and native Python scalars.
- as_tensor supports changing the dtype and device directly, which is very convenient in practice since the default dtype of a Torch tensor is float32, while for a Numpy array it is float64.
- as_tensor shares memory with the original data if and only if the original object is a Numpy array and the requested dtype, if any, is the same as that of the original data. These are the same conditions as for from_numpy, but for the latter they are always satisfied by design, as sketched below.
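A short sketch of these points (standard PyTorch/NumPy behavior, outputs shown as comments):

import numpy as np
import torch

# as_tensor accepts lists and Python scalars; from_numpy only accepts ndarrays.
print(torch.as_tensor([1, 2, 3]))  # tensor([1, 2, 3])
# torch.from_numpy([1, 2, 3])     # TypeError: expected np.ndarray (got list)

a = np.array([1., 2.])                       # float64
t = torch.as_tensor(a, dtype=torch.float32)  # requested dtype differs: a copy is made
t[0] = 99.
print(a)                                     # [1. 2.] -- the original is untouched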
Solution 2:[2]
Yes, in this case as_tensor and from_numpy are strictly equivalent. From the documentation for torch.as_tensor:
If data is a NumPy array (an ndarray) with the same dtype and device then a tensor is constructed using torch.from_numpy().
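One way to confirm the equivalence is to compare the data pointers of the two tensors (data_ptr is a standard torch.Tensor method returning the address of the first element):

import numpy as np
import torch

a = np.arange(4.)
t1 = torch.as_tensor(a)
t2 = torch.from_numpy(a)
# Both tensors are views of the same array buffer, so the addresses match.
print(t1.data_ptr() == t2.data_ptr())  # True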
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | milembar |
| Solution 2 | qgallouedec |