Indexing in a two-dimensional PyTorch Tensor using another Tensor
Suppose that tensor A is defined as:
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
I'm trying to extract a flat array out of this matrix by using another tensor as indices. For example, if the second tensor is defined as:
0
1
2
3
I want the result of the indexing to be a 1-D tensor with the contents:
1
6
11
16
It doesn't seem to behave like NumPy; I've tried A[:, B],
but it just throws an error about being unable to allocate an insane amount of memory, and I've no idea why!
Solution 1:[1]
You can convert your tensor to a NumPy array. If you are using CUDA, don't forget to move it to the CPU first; otherwise that step isn't needed. Example code is below:
val.data.cpu().numpy()[:,B]
Let me know if it resolves your issue
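Note that `[:, B]` on the NumPy array selects whole columns, whereas the question asks for one element per row. To get the 1-D result described in the question, NumPy's pairwise (fancy) indexing with an explicit row index is needed; a minimal sketch under that assumption, using the A and B from the question:

```python
import numpy as np
import torch

# The 4x4 matrix and the per-row column indices from the question
A = torch.arange(1, 17).reshape(4, 4)
B = torch.tensor([0, 1, 2, 3])

# Move to CPU (a no-op if the tensor is already there), convert to
# NumPy, then pair each row index i with its column index B[i]
a = A.cpu().numpy()
result = a[np.arange(a.shape[0]), B.numpy()]
print(result)  # [ 1  6 11 16]
```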
Solution 2:[2]
PyTorch implements torch.take, which is equivalent to numpy.take.
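torch.take indexes the flattened tensor, so the (row, column) pair (i, B[i]) has to be converted to the flat index i * num_cols + B[i] first. A sketch using the A and B from the question:

```python
import torch

# The 4x4 matrix and the per-row column indices from the question
A = torch.arange(1, 17).reshape(4, 4)
B = torch.tensor([0, 1, 2, 3])

# torch.take treats A as a 1-D tensor, so convert each (row, col)
# pair to a flat index: flat = row * num_cols + col
flat_idx = torch.arange(A.size(0)) * A.size(1) + B
result = torch.take(A, flat_idx)
print(result)  # tensor([ 1,  6, 11, 16])
```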
Solution 3:[3]
1st Approach: using torch.gather
torch.gather(A, 1, B.unsqueeze(dim=1))
(The out-of-place unsqueeze is used here so that B itself is not modified; the in-place unsqueeze_ would change B's shape on every call.) If you want a one-dimensional vector, you can append squeeze at the end:
torch.gather(A, 1, B.unsqueeze(dim=1)).squeeze()
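Put together, a runnable sketch of the gather approach with the A and B from the question:

```python
import torch

A = torch.tensor([[1, 2, 3, 4],
                  [5, 6, 7, 8],
                  [9, 10, 11, 12],
                  [13, 14, 15, 16]])
B = torch.tensor([0, 1, 2, 3])

# gather along dim=1: out[i][0] = A[i][index[i][0]] = A[i][B[i]],
# then squeeze the trailing dimension to get a 1-D tensor
result = torch.gather(A, 1, B.unsqueeze(dim=1)).squeeze()
print(result)  # tensor([ 1,  6, 11, 16])
```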
2nd Approach: using list comprehensions
You can use a list comprehension to select the items at the specific indexes, then concatenate them using torch.stack. An important point here is that you should not use torch.tensor to create the new tensor from the list; if you do, you will break the autograd chain (you cannot calculate gradients through that node):
torch.stack([A[i, B[i]] for i in range(A.size()[0])])
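A runnable sketch of this approach, also showing that gradients flow back to A through torch.stack (the float dtype and requires_grad_ are added here just to demonstrate the autograd point):

```python
import torch

A = torch.arange(1., 17.).reshape(4, 4).requires_grad_()
B = torch.tensor([0, 1, 2, 3])

# Pick A[i, B[i]] for each row, then stack into a 1-D tensor;
# torch.stack keeps the result connected to A's autograd graph
result = torch.stack([A[i, B[i]] for i in range(A.size(0))])
print(result)  # tensor([ 1.,  6., 11., 16.], grad_fn=...)

# Gradients reach A: ones at the selected (i, B[i]) positions
result.sum().backward()
print(A.grad)
```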
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | anlgrses |
| Solution 2 | Najib Ishaq |
| Solution 3 | |