Torch sum a tensor along an axis

How do I sum over the columns of a tensor?

torch.Size([10, 100])    --->    torch.Size([10])


Solution 1:[1]

The simplest and best solution is to use torch.sum().

To sum all elements of a tensor:

torch.sum(x) # gives back a scalar

To sum over all rows (i.e. for each column):

torch.sum(x, dim=0) # size = [ncol]

To sum over all columns (i.e. for each row):

torch.sum(x, dim=1) # size = [nrow]
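
Putting it together for the shapes in the question, here is a minimal runnable sketch (x is a hypothetical stand-in tensor of the asker's size [10, 100]):

import torch

x = torch.randn(10, 100)          # stand-in for the tensor in the question
print(torch.sum(x).shape)         # torch.Size([])    -- a scalar
print(torch.sum(x, dim=0).shape)  # torch.Size([100]) -- one sum per column
print(torch.sum(x, dim=1).shape)  # torch.Size([10])  -- one sum per row, as asked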

Solution 2:[2]

Alternatively, you can use tensor.sum(dim), where for a 2D tensor dim=0 sums over the rows (giving one value per column) and dim=1 sums over the columns (giving one value per row).

In [210]: X
Out[210]: 
tensor([[  1,  -3,   0,  10],
        [  9,   3,   2,  10],
        [  0,   3, -12,  32]])

In [211]: X.sum(1)
Out[211]: tensor([ 8, 24, 23])

In [212]: X.sum(0)
Out[212]: tensor([ 10,   3, -10,  52])

As we can see from the above outputs, in both cases the result is a 1D tensor. If, on the other hand, you wish to retain the number of dimensions of the original tensor in the output, then you have to set the boolean kwarg keepdim to True, as in:

In [217]: X.sum(0, keepdim=True)
Out[217]: tensor([[ 10,   3, -10,  52]])

In [218]: X.sum(1, keepdim=True)
Out[218]: 
tensor([[ 8],
        [24],
        [23]])

Solution 3:[3]

If you have a tensor my_tensor and you wish to sum across the second array dimension (the one with index 1, which is the column dimension if the tensor is 2-dimensional, as yours is), use torch.sum(my_tensor, 1) or, equivalently, my_tensor.sum(1); see the torch.sum documentation.

One thing not mentioned explicitly in the documentation: you can sum across the last array dimension by using -1 (or the second-to-last dimension with -2, etc.).

So, in your example, you could use outputs.sum(1) or torch.sum(outputs, 1), or, equivalently, outputs.sum(-1) or torch.sum(outputs, -1). All of these give the same result, an output tensor of size torch.Size([10]), with each entry being the sum over all columns in a given row of the tensor outputs.

To illustrate with a 3-dimensional tensor:

In [1]: my_tensor = torch.arange(24).view(2, 3, 4); my_tensor
Out[1]: 
tensor([[[ 0,  1,  2,  3],
         [ 4,  5,  6,  7],
         [ 8,  9, 10, 11]],

        [[12, 13, 14, 15],
         [16, 17, 18, 19],
         [20, 21, 22, 23]]])

In [2]: my_tensor.sum(2)
Out[2]:
tensor([[ 6, 22, 38],
        [54, 70, 86]])

In [3]: my_tensor.sum(-1)
Out[3]:
tensor([[ 6, 22, 38],
        [54, 70, 86]])
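
As a quick check that negative and positive indices address the same dimension, one can compare the results directly (continuing the session with the 3D tensor above):

print(torch.equal(my_tensor.sum(2), my_tensor.sum(-1)))  # True
print(torch.equal(my_tensor.sum(0), my_tensor.sum(-3)))  # True: -3 is the first of the three dims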

Solution 4:[4]

Based on the documentation at https://pytorch.org/docs/stable/generated/torch.sum.html:

dim (int or tuple of ints) – the dimension or dimensions to reduce.

In other words:

dim=0 reduces the row dimension: all rows are condensed, i.e. one sum per column.
dim=1 reduces the column dimension: all columns are condensed, i.e. one sum per row.
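
A short sketch of that rule, using a hypothetical 3x4 tensor t:

import torch

t = torch.ones(3, 4)
print(t.sum(dim=0).shape)  # torch.Size([4]): rows are reduced, one sum per column
print(t.sum(dim=1).shape)  # torch.Size([3]): columns are reduced, one sum per row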

Solution 5:[5]

Torch sum along multiple axis or dimensions

Just for the sake of completeness (I could not find it easily) I include how to sum along multiple dimensions with torch.sum which is heavily used in computer vision tasks where you have to reduce along H and W dimensions.

If you have an image x with shape C x H x W and want to compute the average pixel intensity per channel, you could do:

avg = torch.sum(x, dim=(1, 2)) / (H * W)     # sum along (H, W), then normalize
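
A self-contained sketch of that pattern, assuming a random image-like tensor (the names C, H, W are illustrative):

import torch

C, H, W = 3, 32, 32
x = torch.rand(C, H, W)                   # stand-in for an image tensor
avg = torch.sum(x, dim=(1, 2)) / (H * W)  # one average per channel
print(avg.shape)                          # torch.Size([3])

The same result can also be obtained with x.mean(dim=(1, 2)).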

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1:
Solution 2: kmario23
Solution 3: postylem
Solution 4: Frank Xu
Solution 5: JVGD