RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

I am getting the above error with the following code:

    import torch
    from torchvision import models

    def get_model(path, device):
        model = models.vgg16(pretrained=False)
    
        for param in model.parameters():
            param.requires_grad = False
        n_inputs = model.classifier[6].in_features
    
        model.classifier[6] = torch.nn.Sequential(
            torch.nn.Linear(n_inputs, 256), torch.nn.ReLU(), torch.nn.Dropout(0.2),
            torch.nn.Linear(256, 10), torch.nn.LogSoftmax(dim=1))
    
        model.load_state_dict(torch.load(path), map_location=torch.device('cpu'))
        model.to(device)
        model.eval()
        return model
    
    device = torch.device("cpu")
    model = get_model('vgg16.pt', device)


Solution 1:[1]

You are passing map_location to the wrong function: it is a parameter of torch.load, not of model.load_state_dict. The call torch.load(path) tries to deserialize the saved CUDA tensors back onto the GPU before load_state_dict ever runs, which is what raises this error on a CPU-only machine.

The corrected line would look like this:

    model.load_state_dict(torch.load(path, map_location=torch.device('cpu')))
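
For context, here is a minimal sketch of the full loader with that fix applied, assuming the file at path holds a state dict saved from a CUDA-trained VGG16 (the file name vgg16.pt is taken from the question):

    import torch
    from torchvision import models

    def get_model(path, device):
        # Rebuild the architecture exactly as it was when the weights were saved
        model = models.vgg16(pretrained=False)
        for param in model.parameters():
            param.requires_grad = False

        n_inputs = model.classifier[6].in_features
        model.classifier[6] = torch.nn.Sequential(
            torch.nn.Linear(n_inputs, 256), torch.nn.ReLU(), torch.nn.Dropout(0.2),
            torch.nn.Linear(256, 10), torch.nn.LogSoftmax(dim=1))

        # map_location belongs to torch.load: it remaps the saved CUDA storages
        # onto the CPU while deserializing, before load_state_dict sees them
        state_dict = torch.load(path, map_location=torch.device('cpu'))
        model.load_state_dict(state_dict)

        model.to(device)
        model.eval()
        return model

    device = torch.device("cpu")
    model = get_model('vgg16.pt', device)

Passing map_location=device (or the string 'cpu') works as well and keeps the function usable unchanged if it is later run on a machine that does have a GPU.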

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 CherryDT