Neural network initialized with random weights always returns the same output for random inputs
I have a problem with PyTorch in Spyder. A randomly initialized neural network always returns the same output, even for random input tensors. I am running on a local GPU through Spyder. I made sure that the weights are initialized randomly and are not all zeros.
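For what it's worth, this is the kind of check I mean (a small sketch with a standalone Conv2d; PyTorch's default initialization draws the weights from a small non-zero range):

import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=(3, 3))
w = conv.weight.detach()
print(w.min().item(), w.max().item(), w.std().item())  # a spread of values, not all zeros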
Example:
x = torch.rand(1, 3, 360, 640)
x = self.stage_1(x)
x = self.stage_2(x)
x = self.stage_3(x)
x = self.stage_4(x)
x = self.stage_5(x)
x = self.stage_6(x)
x = torch.flatten(x, start_dim=1)
y = torch.rand(1, 3, 360, 640)
y = self.stage_1(y)
y = self.stage_2(y)
y = self.stage_3(y)
y = self.stage_4(y)
y = self.stage_5(y)
y = self.stage_6(y)
y = torch.flatten(y, start_dim=1)
This code always produces y == x: the two outputs are identical element for element.
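(To be precise about the comparison: == on tensors is elementwise, so what I actually verify is exact equality of the whole tensors; torch.equal is the standard PyTorch call for that.)

print(torch.equal(x, y))  # True on every run, even though the inputs are random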
This is the stage class:
import torch
import torch.nn as nn
import torch.nn.functional as F

class VggStage(nn.Module):
    def __init__(self,
                 input_channels: int,
                 output_channels: int) -> None:
        """
        Parameters
        ----------
        input_channels : int
            Number of channels in the input tensor.
        output_channels : int
            Number of channels produced by both convolutions.

        Returns
        -------
        None
        """
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels=input_channels,
                               out_channels=output_channels,
                               kernel_size=(3, 3))
        self.conv2 = nn.Conv2d(in_channels=output_channels,
                               out_channels=output_channels,
                               kernel_size=(3, 3))
        self.max_pool = nn.MaxPool2d(kernel_size=(2, 2),
                                     stride=(2, 2))

    def forward(self,
                x: torch.Tensor) -> torch.Tensor:
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = self.max_pool(x)
        return x
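For completeness, here is a minimal self-contained reproduction. The Net wrapper and the channel widths are my reconstruction (the real widths are not shown above), so treat them as assumptions:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        # Hypothetical VGG-like channel progression; the actual widths are an assumption
        widths = [3, 64, 128, 256, 512, 512, 512]
        self.stages = nn.ModuleList(
            VggStage(c_in, c_out) for c_in, c_out in zip(widths, widths[1:])
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for stage in self.stages:
            x = stage(x)
        return torch.flatten(x, start_dim=1)

net = Net()
with torch.no_grad():
    x = net(torch.rand(1, 3, 360, 640))
    y = net(torch.rand(1, 3, 360, 640))
print(torch.equal(x, y))  # prints True in my setup, which is the problem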
Source: Stack Overflow, licensed under CC BY-SA 3.0.