onnx to trt failed - RuntimeError: cannot get YoloLayer_TRT plugin creator
I'm trying to run YOLOv4 (Demo 5) from the TensorRT demos repo on AWS EC2.
I created an EC2 VM with an NVIDIA GPU (using the AMI "Amazon Linux 2 AMI with NVIDIA TESLA GPU Driver"), which reports: NVIDIA-SMI 450.119.01, Driver Version: 450.119.01, CUDA Version: 11.0.
On this EC2 instance I pulled and entered the official TensorRT container with:

```shell
sudo docker run --gpus all -it -v /home/ec2-user/player-detection:/home nvcr.io/nvidia/tensorrt:20.02-py3 bash
```
I did the following steps:

- Ran `python3 -m pip install --upgrade setuptools pip && python3 -m pip install nvidia-pyindex && pip install nvidia-tensorrt`.
- Inside the `yolo/` folder, ran `pip3 install -r requirements.txt` and `pip3 install onnx==1.9.0`.
- Inside the `plugins/` folder, ran `make`.
- Inside the `yolo/` folder, ran `./download_yolo.sh && python3 yolo_to_onnx.py -m yolov4 && python3 onnx_to_tensorrt.py -m yolov4`.
The `python3 onnx_to_tensorrt.py -m yolov4` command failed with:

```
RuntimeError: cannot get YoloLayer_TRT plugin creator
```
From reading https://github.com/jkjung-avt/tensorrt_demos/issues/476, it seems the problem is related to dynamic libraries.
I listed the libraries the plugin links against and got:
```
$ ldd libyolo_layer.so
linux-vdso.so.1 (0x00007fff142a4000)
libnvinfer.so.7 => /usr/lib/x86_64-linux-gnu/libnvinfer.so.7 (0x00007f9673734000)
libcudart.so.11.0 => /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudart.so.11.0 (0x00007f96734af000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f9673126000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f9672f0e000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f9672b1d000)
libcudnn.so.8 => /usr/lib/x86_64-linux-gnu/libcudnn.so.8 (0x00007f96728f4000)
libmyelin.so.1 => /usr/lib/x86_64-linux-gnu/libmyelin.so.1 (0x00007f9672074000)
libnvrtc.so.11.1 => /usr/local/cuda-11.1/targets/x86_64-linux/lib/libnvrtc.so.11.1 (0x00007f966feac000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f966fca4000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f966faa0000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f966f702000)
/lib64/ld-linux-x86-64.so.2 (0x00007f9699135000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f966f4e3000)
libcublas.so.11 => /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcublas.so.11 (0x00007f9668008000)
libcublasLt.so.11 => /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcublasLt.so.11 (0x00007f965a23e000)
```
It seems that some libraries are missing. Also, when I printed all the registered plugins, YoloLayer_TRT was not among them.
Any idea how to solve it?
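For reference, this is roughly how I printed the registered plugins. It is a sketch (not a script from the repo) that assumes the `tensorrt` Python bindings are importable, and returns None when they are not:

```python
def list_plugin_creators():
    """Return the names of all registered TensorRT plugin creators,
    or None when the tensorrt Python bindings are not importable."""
    try:
        import tensorrt as trt
    except ImportError:
        return None
    logger = trt.Logger(trt.Logger.WARNING)
    # Register the built-in plugins shipped in libnvinfer_plugin.
    trt.init_libnvinfer_plugins(logger, "")
    return [creator.name for creator in trt.get_plugin_registry().plugin_creator_list]


if __name__ == "__main__":
    # Without first loading libyolo_layer.so, "YoloLayer_TRT" will be absent.
    print(list_plugin_creators())
```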
Solution 1:[1]

The solution was:

- Change the image tag to `:21.10-py3` (thanks to Hyunwoo Kim!).
- Change `TENSORRT_INCS` to `/usr/include/x86_64-linux-gnu/NvInfer*` and `TENSORRT_LIBS` to `/usr/lib/x86_64-linux-gnu/libnvinfer*`.
- Change the compute capability to 70 (in my environment) - you can check your GPU's compute capability here.
Solution 2:[2]

Did you load your libyolo_layer.so using the ctypes module? It seems that YoloLayer_TRT is a custom plugin built for the YOLO demo, and you have to load the library file before you can use it in Python.
https://github.com/jkjung-avt/tensorrt_demos/issues/476#issuecomment-935225260
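A minimal sketch of what that comment suggests, assuming the shared object sits at `plugins/libyolo_layer.so` (the path produced by `make`; adjust if yours differs):

```python
import ctypes

# Assumed location of the .so built by `make` in the plugins/ folder.
PLUGIN_LIB = "plugins/libyolo_layer.so"


def load_yolo_plugin(path=PLUGIN_LIB):
    """Load the compiled YOLO plugin with RTLD_GLOBAL so that TensorRT's
    plugin registry can see the YoloLayer_TRT creator it registers."""
    try:
        return ctypes.CDLL(path, mode=ctypes.RTLD_GLOBAL)
    except OSError as err:
        raise SystemExit(f"ERROR: failed to load {path}: {err}")
```

The load has to happen in the same process, before the engine is built or deserialized, so that the plugin creator is already registered when TensorRT looks it up.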
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Yagel |
| Solution 2 | Hyunwoo Kim |