Converting from .pt model to .h5 model

I am using Google Colab and want to convert a .pt model stored on my Google Drive to a .h5 model. I followed https://github.com/gmalivenko/pytorch2keras and https://www.programmersought.com/article/57938937172/, installed the libraries, and wrote the code below:

%pip install pytorch2keras
%pip install onnx==1.8.1
import numpy as np
from numpy import random
from random import uniform
import torch
from torch.autograd import Variable
input_np = np.random.uniform(0, 1, (1, 10, 32, 32))
input_var = Variable(torch.FloatTensor(input_np))
model='/content/gdrive/MyDrive/model.pt'
pytorch_to_keras(model,input_var,input_shapes= [(10, 32, 32,)], verbose=True)

but it gives me an error like:

WARNING:pytorch2keras:Custom shapes isn't supported now.
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-53-eef217a11c8a> in <module>()
      8 input_var = Variable(torch.FloatTensor(input_np))
      9 model='/content/gdrive/MyDrive/model.pt'
---> 10 pytorch_to_keras(model,input_np,input_shapes= [(10, 32, 32,)], verbose=True)
     11 
     12 

/usr/local/lib/python3.7/dist-packages/pytorch2keras/converter.py in pytorch_to_keras(model, args, input_shapes, change_ordering, verbose, name_policy, use_optimizer, do_constant_folding)
     51     args = tuple(args)
     52 
---> 53     dummy_output = model(*args)
     54 
     55     if isinstance(dummy_output, torch.autograd.Variable):

TypeError: 'str' object is not callable


Solution 1:[1]

Ah, the classic problem of converting from PyTorch to TensorFlow. Many libraries have come and gone over the years, but I've found ONNX to work the most consistently. You could try something like this.

PyTorch is built around a dynamic computational graph, which means a PyTorch model can adapt to different input sizes at runtime. When exporting to ONNX you can specify which axes should stay dynamic. Here is some minimal code to convert a CNN from PyTorch to ONNX.

import onnx
import torch

model = get_model()  # placeholder: load or build your own PyTorch model here
model.eval()

# Run the model once on a sample input (img_size is a placeholder for your input resolution)
example_input = torch.randn((1, 3, img_size, img_size), requires_grad=True)
model(example_input)

# Set input and output names, include more names in the list if your model has more than 1 input/output
input_names = ["input0"]
output_names = ["output0"]

# Set dynamic axes (in this case, make the batch a dynamic dimension)
dynamic_axes = {'input0': {0: 'batch'}, 'output0': {0: 'batch'}}

# Export the model with the above parameters (this writes model.onnx to disk)
torch.onnx.export(
    model, example_input, 'model.onnx', export_params=True, input_names=input_names, output_names=output_names,
    dynamic_axes=dynamic_axes, operator_export_type=torch.onnx.OperatorExportTypes.ONNX
)

# Use ONNX checker to verify integrity of model
onnx_model = onnx.load("model.onnx")
onnx.checker.check_model(onnx_model)

One could also make the height and width dynamic input dimensions (set before calling torch.onnx.export) with

dynamic_axes['input0'][2] = 'height'
dynamic_axes['input0'][3] = 'width'
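
If you want to sanity-check the exported graph numerically, onnxruntime (a separate package, typically installed with pip install onnxruntime) can run the .onnx file so its output can be compared against PyTorch:

import numpy as np
import onnxruntime as ort

# Run the exported ONNX model on the same sample input
sess = ort.InferenceSession('model.onnx', providers=['CPUExecutionProvider'])
onnx_out = sess.run(None, {'input0': example_input.detach().numpy()})[0]

# Compare against the original PyTorch output
torch_out = model(example_input).detach().numpy()
print(np.allclose(torch_out, onnx_out, atol=1e-5))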

Next, we convert our ONNX model to a TensorFlow SavedModel.

from onnx_tf.backend import prepare
import onnx

onnx_model = onnx.load('model.onnx')
tf_model = prepare(onnx_model)      # wraps the ONNX model in a TensorFlow backend representation
tf_model.export_graph('tf_model')   # writes a SavedModel to the 'tf_model' directory

The 'tf_model' directory now contains a TensorFlow SavedModel.
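
Note that this is a generic TensorFlow SavedModel rather than a Keras .h5 file. For inference it can be loaded directly, for example:

import tensorflow as tf

# Load the exported SavedModel directory
loaded = tf.saved_model.load('tf_model')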

Solution 2:[2]

It seems like you haven't imported the pytorch_to_keras function to begin with. Also, the first parameter might need to be the actual model, not the model path. Give these a shot; however, I am not familiar with this library.
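
A rough sketch of what that could look like, assuming the .pt file was saved with torch.save(model, ...) so that it contains the full pickled model (paths and shapes taken from the question):

import numpy as np
import torch
from torch.autograd import Variable
from pytorch2keras import pytorch_to_keras

# Load the actual model object, not just the path to it
model = torch.load('/content/gdrive/MyDrive/model.pt')
model.eval()

# Dummy input matching the model's expected input shape
input_np = np.random.uniform(0, 1, (1, 10, 32, 32))
input_var = Variable(torch.FloatTensor(input_np))

# Convert and, if it succeeds, save the resulting Keras model as .h5
keras_model = pytorch_to_keras(model, input_var, input_shapes=[(10, 32, 32,)], verbose=True)
keras_model.save('/content/gdrive/MyDrive/model.h5')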

Solution 3:[3]

"model" should be a (loaded) pytorch model (not a string)

model = torch.load('your_model.py')
keras_model = pytorch_to_keras(model,input_var,input_shapes= [(10, 32, 32,)], verbose=True)
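
Note that torch.load only gives back a model object like this if the whole model was pickled with torch.save(model, path). If the .pt file only contains a state_dict, you have to build the architecture first and load the weights into it; a minimal sketch, where MyModel is a hypothetical placeholder for your own model class:

from my_project import MyModel  # hypothetical import: your own model definition

# Instantiate the architecture, then load the saved weights into it
model = MyModel()
model.load_state_dict(torch.load('your_model.pt'))
model.eval()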

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

[1] Solution 1: Stanley Zheng
[2] Solution 2: franzmaliszt
[3] Solution 3: Carmine T. Guida