setUpNet DNN module was not built with CUDA backend; switching to CPU

I want to run my Python script on the GPU.

I used the command watch nvidia-smi to monitor the GPU processes; unfortunately, the Python script uses only 41 MiB of GPU memory.

This is a part of my code:

import time
import math
import cv2
import numpy as np
labelsPath = "./coco.names"
LABELS = open(labelsPath).read().strip().split("\n")

np.random.seed(42)

weightsPath = "./yolov3.weights"
configPath = "./yolov3.cfg"

net = cv2.dnn.readNetFromDarknet(configPath, weightsPath)
ln = net.getLayerNames()
# names of the YOLO output layers; flatten() handles both the old (Nx1 array)
# and new (flat array) return shapes of getUnconnectedOutLayers()
ln = [ln[i - 1] for i in net.getUnconnectedOutLayers().flatten()]
FR = 0
vs = cv2.VideoCapture(vid_path)  # vid_path is defined earlier in the full script
# vs = cv2.VideoCapture(0)  ## Use this if you want to use the webcam feed
writer = None
(W, H) = (None, None)

fl = 0
q = 0
while True:

    (grabbed, frame) = vs.read()

    if not grabbed:
        break

    if W is None or H is None:
        (H, W) = frame.shape[:2]
        FW=W
        if(W<1075):
            FW = 1075
        FR = np.zeros((H+210,FW,3), np.uint8)

        col = (255,255,255)
        FH = H + 210
    FR[:] = col

    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    start = time.time()
    layerOutputs = net.forward(ln)
    end = time.time()

I tried adding these lines to force it to run on the GPU:

net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

Then, after running the script again, it gives me this message and continues running the script on the CPU:

     [ WARN:0] global /io/opencv/modules/dnn/src/dnn.cpp (1363) setUpNet DNN module was not built with CUDA backend; switching to CPU


Solution 1:[1]

You'll need to build OpenCV from source with CUDA enabled so that it can use your GPU.
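The warning appears because the OpenCV build being imported was compiled without the CUDA backend (the prebuilt opencv-python wheels from pip, for instance, are CPU-only), so the setPreferableBackend/setPreferableTarget calls are silently ignored. You can confirm what your current build supports by inspecting its build report; a minimal check (the exact wording of the CUDA/cuDNN lines varies between builds):

import cv2

# Print the CUDA/cuDNN related lines of the OpenCV build report.
# A build without GPU support typically lists them as "NO".
for line in cv2.getBuildInformation().splitlines():
    if "CUDA" in line or "cuDNN" in line:
        print(line.strip())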

Here is a great tutorial on how to build OpenCV with CUDA support.
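Once OpenCV has been rebuilt with CUDA and cuDNN enabled, and that build is the one Python imports, the two calls from the question take effect. A minimal sketch of selecting and verifying the CUDA target, assuming the same yolov3.cfg/yolov3.weights pair as in the question:

import cv2

net = cv2.dnn.readNetFromDarknet("./yolov3.cfg", "./yolov3.weights")

# On a CUDA-enabled build these calls move inference to the GPU; on a
# CPU-only build the DNN module prints the same warning and falls back.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

# A CUDA-enabled build should report at least one device here.
print("CUDA devices visible to OpenCV:", cv2.cuda.getCudaEnabledDeviceCount())

On Turing-class cards, cv2.dnn.DNN_TARGET_CUDA_FP16 can also be used as the target for half-precision inference, which is usually faster.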

Solution 2:[2]

Compatibility chart of CUDA and cuDNN:

https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html#cudnn-cuda-hardware-versions

Check your GPU's compute capability at: https://en.wikipedia.org/wiki/CUDA

In my case it is 7.5.

In the "GPUs supported" table, CUDA SDK 11.0 – 11.2 supports compute capabilities 3.5 – 8.6 (Kepler (in part), Maxwell, Pascal, Volta, Turing, Ampere), so compute capability 7.5 is covered.

Check this against your supported NVIDIA hardware.

In my case, I was using a Tesla T4, which has a Turing GPU and is compatible with cuDNN.

In the compilation report you can see that CMake reported cuDNN availability as "NO".

I got the Docker image using:

sudo docker pull nvidia/cuda:11.1-cudnn8-runtime-ubuntu18.04

I then compiled OpenCV with CUDA support following: https://www.pyimagesearch.com/2020/02/03/how-to-use-opencvs-dnn-module-with-nvidia-gpus-cuda-and-cudnn/
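After the compile finishes, the CMake summary should list both CUDA and cuDNN as available. This can also be checked from Python inside the container, a small sketch assuming the freshly built cv2 is the one on the import path:

import cv2

# Both calls require a CUDA-enabled build: the device count should match
# what nvidia-smi reports, and printCudaDeviceInfo prints the device name,
# compute capability and memory.
print(cv2.cuda.getCudaEnabledDeviceCount())
cv2.cuda.printCudaDeviceInfo(0)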

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Eric Smith
Solution 2: (no attribution given)