Unable to load libmodelpackage. Cannot make save spec.

I am getting this error when trying to convert a Keras model to a Core ML model:

import coremltools as ct
coreml_model = ct.converters.convert(model)

Here are the versions of the libraries I am using:

Keras 2.8.0
TensorFlow 2.8.0
coremltools 5.2.0
Python 3.10.4
Ubuntu 22.04
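As an aside, a version list like the one above can be gathered programmatically from distribution metadata, without importing the heavy packages themselves, using the standard-library `importlib.metadata`. This is just a convenience sketch; the helper name `report_versions` is my own, for illustration:

```python
import sys
from importlib import metadata

def report_versions(packages=("keras", "tensorflow", "coremltools")):
    """Collect interpreter and package versions for a bug report.

    Reads installed distribution metadata, so nothing is actually imported.
    """
    lines = [f"python {sys.version.split()[0]}"]
    for pkg in packages:
        try:
            lines.append(f"{pkg} {metadata.version(pkg)}")
        except metadata.PackageNotFoundError:
            lines.append(f"{pkg} (not installed)")
    return lines

print("\n".join(report_versions()))
```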

Error:

2022-05-05 15:09:04.213669: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.213796: I tensorflow/core/grappler/devices.cc:66] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2022-05-05 15:09:04.213850: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2022-05-05 15:09:04.214045: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.214141: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.214227: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.214363: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.214445: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.214500: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 7269 MB memory:  -> device: 0, name: NVIDIA GeForce RTX 3080, pci bus id: 0000:01:00.0, compute capability: 8.6
2022-05-05 15:09:04.215812: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1164] Optimization results for grappler item: graph_to_optimize
  function_optimizer: function_optimizer did nothing. time = 0.003ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.

2022-05-05 15:09:04.253705: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.253780: I tensorflow/core/grappler/devices.cc:66] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2022-05-05 15:09:04.253824: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2022-05-05 15:09:04.253982: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.254070: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.254149: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.254262: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.254344: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.254399: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 7269 MB memory:  -> device: 0, name: NVIDIA GeForce RTX 3080, pci bus id: 0000:01:00.0, compute capability: 8.6
2022-05-05 15:09:04.261510: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1164] Optimization results for grappler item: graph_to_optimize
  constant_folding: Graph size after: 42 nodes (-14), 55 edges (-14), time = 2.991ms.
  dependency_optimizer: Graph size after: 41 nodes (-1), 40 edges (-15), time = 0.306ms.
  debug_stripper: debug_stripper did nothing. time = 0.005ms.
  constant_folding: Graph size after: 41 nodes (0), 40 edges (0), time = 1.042ms.
  dependency_optimizer: Graph size after: 41 nodes (0), 40 edges (0), time = 0.229ms.
  debug_stripper: debug_stripper did nothing. time = 0.004ms.

2022-05-05 15:09:04.283289: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.283357: I tensorflow/core/grappler/devices.cc:66] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2022-05-05 15:09:04.283393: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2022-05-05 15:09:04.283593: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.283681: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.283759: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.283864: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.283944: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.283998: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 7269 MB memory:  -> device: 0, name: NVIDIA GeForce RTX 3080, pci bus id: 0000:01:00.0, compute capability: 8.6
2022-05-05 15:09:04.285078: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1164] Optimization results for grappler item: graph_to_optimize
  function_optimizer: function_optimizer did nothing. time = 0.003ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.

2022-05-05 15:09:04.318159: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.318237: I tensorflow/core/grappler/devices.cc:66] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2022-05-05 15:09:04.318283: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2022-05-05 15:09:04.318470: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.318559: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.318637: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.318745: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.318827: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.318882: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 7269 MB memory:  -> device: 0, name: NVIDIA GeForce RTX 3080, pci bus id: 0000:01:00.0, compute capability: 8.6
2022-05-05 15:09:04.328088: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1164] Optimization results for grappler item: graph_to_optimize
  constant_folding: Graph size after: 42 nodes (-14), 55 edges (-14), time = 4.112ms.
  dependency_optimizer: Graph size after: 41 nodes (-1), 40 edges (-15), time = 0.282ms.
  debug_stripper: debug_stripper did nothing. time = 0.004ms.
  constant_folding: Graph size after: 41 nodes (0), 40 edges (0), time = 1.075ms.
  dependency_optimizer: Graph size after: 41 nodes (0), 40 edges (0), time = 0.227ms.
  debug_stripper: debug_stripper did nothing. time = 0.003ms.

Running TensorFlow Graph Passes:   0%|                                                 | 0/6 [00:00<?, ? passes/s]2022-05-05 15:09:04.587585: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.587766: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.587860: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.588042: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.588165: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.588249: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 7269 MB memory:  -> device: 0, name: NVIDIA GeForce RTX 3080, pci bus id: 0000:01:00.0, compute capability: 8.6
Running TensorFlow Graph Passes: 100%|█████████████████████████████████████████| 6/6 [00:00<00:00, 51.88 passes/s]
Converting Frontend ==> MIL Ops: 100%|████████████████████████████████████████| 41/41 [00:00<00:00, 2051.40 ops/s]
Running MIL Common passes: 100%|███████████████████████████████████████████| 34/34 [00:00<00:00, 1104.92 passes/s]
Running MIL Clean up passes: 100%|████████████████████████████████████████████| 9/9 [00:00<00:00, 682.35 passes/s]
Translating MIL ==> NeuralNetwork Ops: 100%|██████████████████████████████████| 67/67 [00:00<00:00, 1622.07 ops/s]
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
Input In [34], in <cell line: 2>()
      1 import coremltools as ct
----> 2 coreml_model = ct.converters.convert(model)

File ~/.pyenv/versions/3.10.4/envs/venv/lib/python3.10/site-packages/coremltools/converters/_converters_entry.py:352, in convert(model, source, inputs, outputs, classifier_config, minimum_deployment_target, convert_to, compute_precision, skip_model_load, compute_units, useCPUOnly, package_dir, debug)
    349     if ext != _MLPACKAGE_EXTENSION:
    350         raise Exception("If package_dir is provided, it must have extension {} (not {})".format(_MLPACKAGE_EXTENSION, ext))
--> 352 mlmodel = mil_convert(
    353     model,
    354     convert_from=exact_source,
    355     convert_to=exact_target,
    356     inputs=inputs,
    357     outputs=outputs,
    358     classifier_config=classifier_config,
    359     transforms=tuple(transforms),
    360     skip_model_load=skip_model_load,
    361     compute_units=compute_units,
    362     package_dir=package_dir,
    363     debug=debug,
    364 )
    366 if exact_target == 'milinternal':
    367     return mlmodel # Returns the MIL program

File ~/.pyenv/versions/3.10.4/envs/venv/lib/python3.10/site-packages/coremltools/converters/mil/converter.py:183, in mil_convert(model, convert_from, convert_to, compute_units, **kwargs)
    144 @_profile
    145 def mil_convert(
    146     model,
   (...)
    150     **kwargs
    151 ):
    152     """
    153     Convert model from a specified frontend `convert_from` to a specified
    154     converter backend `convert_to`.
   (...)
    181         See `coremltools.converters.convert`
    182     """
--> 183     return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)

File ~/.pyenv/versions/3.10.4/envs/venv/lib/python3.10/site-packages/coremltools/converters/mil/converter.py:231, in _mil_convert(model, convert_from, convert_to, registry, modelClass, compute_units, **kwargs)
    224     package_path = _create_mlpackage(proto, weights_dir, kwargs.get("package_dir"))
    225     return modelClass(package_path,
    226                       is_temp_package=not kwargs.get('package_dir'),
    227                       mil_program=mil_program,
    228                       skip_model_load=kwargs.get('skip_model_load', False),
    229                       compute_units=compute_units)
--> 231 return modelClass(proto,
    232                   mil_program=mil_program,
    233                   skip_model_load=kwargs.get('skip_model_load', False),
    234                   compute_units=compute_units)

File ~/.pyenv/versions/3.10.4/envs/venv/lib/python3.10/site-packages/coremltools/models/model.py:346, in MLModel.__init__(self, model, useCPUOnly, is_temp_package, mil_program, skip_model_load, compute_units, weights_dir)
    343     filename = _tempfile.mktemp(suffix=_MLMODEL_EXTENSION)
    344     _save_spec(model, filename)
--> 346 self.__proxy__, self._spec, self._framework_error = _get_proxy_and_spec(
    347     filename, compute_units, skip_model_load=skip_model_load,
    348 )
    349 try:
    350     _os.remove(filename)

File ~/.pyenv/versions/3.10.4/envs/venv/lib/python3.10/site-packages/coremltools/models/model.py:123, in _get_proxy_and_spec(filename, compute_units, skip_model_load)
    120     _MLModelProxy = None
    122 filename = _os.path.expanduser(filename)
--> 123 specification = _load_spec(filename)
    125 if _MLModelProxy and not skip_model_load:
    126 
    127     # check if the version is supported
    128     engine_version = _MLModelProxy.maximum_supported_specification_version()

File ~/.pyenv/versions/3.10.4/envs/venv/lib/python3.10/site-packages/coremltools/models/utils.py:210, in load_spec(filename)
    184 """
    185 Load a protobuf model specification from file.
    186 
   (...)
    207 save_spec
    208 """
    209 if _ModelPackage is None:
--> 210     raise Exception(
    211         "Unable to load libmodelpackage. Cannot make save spec."
    212     )
    214 spec = _Model_pb2.Model()
    216 specfile = filename

Exception: Unable to load libmodelpackage. Cannot make save spec.


Solution 1:

After downgrading Python to 3.9, the issue was resolved. This suggests that coremltools 5.2 does not ship a working libmodelpackage binary for Python 3.10, so the native module fails to import and load_spec raises this exception.
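Until the environment is downgraded, a cheap guard can surface the problem up front with a clearer message than the libmodelpackage exception. This is only a sketch under the assumption that coremltools 5.2 supports CPython 3.7 through 3.9; the range constant and function name are mine, not part of coremltools:

```python
import sys

# Assumed supported interpreter range for coremltools 5.2 (not an official list).
SUPPORTED_RANGE = ((3, 7), (3, 9))

def coremltools_52_supports(version_info=sys.version_info):
    """Return True if this interpreter's major.minor falls in SUPPORTED_RANGE."""
    lo, hi = SUPPORTED_RANGE
    return lo <= (version_info[0], version_info[1]) <= hi

# Call before ct.converters.convert(model) to fail fast:
if not coremltools_52_supports():
    print("Warning: this Python version is likely unsupported by coremltools 5.2; "
          "consider a Python 3.9 environment.")
```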

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1: zs2020