Detection and classification of objects placed in front of a video camera

Through this code, I aim to detect an object placed in front of the video camera in real time and then classify it. My reasoning is the following: I wrote two for loops, the first one for detection; once the object is detected, I apply the classification in the second for loop. I don't know whether this reasoning is correct. When I tested the code, I received this error:

ValueError       Traceback (most recent call last)
<ipython-input-1-88a18bf89e71> in <module>()

     85         for obj_coordinates in objs:
---> 87             x1, x2, y1, y2 = apply_offsets(obj_coordinates, class_offsets)
     88             gray_obj = gray_obj[y1:y2, x1:x2]
     89             try:

/home/nada/Desktop/testforimage/src/utils/inference.pyc in apply_offsets(obj_coordinates, offsets)
     25 
     26 def apply_offsets(obj_coordinates, offsets):
---> 27     x, y, width, height = obj_coordinates
     28     x_off, y_off = offsets
     29     return (x - x_off, x + width + x_off, y - y_off, y + height + y_off)

ValueError: too many values to unpack

Could you please correct the following code and tell me whether my reasoning is correct? Thank you in advance.

import cv2
import numpy as np

video_capture = cv2.VideoCapture(0)
if video_capture.isOpened():
    rval, frame = video_capture.read()  # read() returns a (success, frame) pair
else:
    rval = False
while True:
    rval, frame = video_capture.read()
    # Convert the frame to grayscale and smooth it before looking for contours
    gray_image = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray_image, (5,5) , 0)

    ctrs = cv2.findContours(blur.copy(),cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    rects = [cv2.boundingRect(ctr) for ctr in ctrs]
    # First loop: run the object detector on each candidate region
    for coordinates in rects:
        a1, a2, b1, b2 = apply_offsets(coordinates, obj_offsets)
        gray_image = gray_image[b1:b2, a1:a2]
        try:
            gray_image = cv2.resize(gray_image, obj_target_size)
        except:
            continue

        gray_image = preprocess_input(gray_image, True)
        gray_image = np.expand_dims(gray_image, 0)
        gray_image = np.expand_dims(gray_image, -1)
        objs = obj_detection.predict(gray_image)
        key = cv2.waitKey(1)
        b, g, r = cv2.split(frame)      # get the b, g, r channels
        rgb_img = cv2.merge([r, g, b])  # switch from BGR to RGB
    
        # Second loop: once an object is detected, classify it
        for obj_coordinates in objs:
            x1, x2, y1, y2 = apply_offsets(obj_coordinates, class_offsets)
            gray_obj = gray_obj[y1:y2, x1:x2]
            try:
                gray_obj = cv2.resize(gray_obj, class_target_size)
            except:
                continue

            gray_obj = preprocess_input(gray_obj, True)
            gray_obj = np.expand_dims(gray_obj, 0)
            gray_obj = np.expand_dims(gray_obj, -1)
            class_prediction = class_classifier.predict(gray_obj)
            class_probability = np.max(class_prediction)
            class_label_arg = np.argmax(class_prediction)
            class_text = emotion_labels[class_label_arg]
            class_window.append(class_text)


Solution 1:[1]

In the line ctrs = cv2.findContours(blur.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE), the function returns more than just the contours; it returns their hierarchy as well.

To get the bounding box for each contour, you need to pass only the first output (the contours) to cv2.boundingRect.

Change the line to the following:

ctrs = cv2.findContours(blur.copy(),cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[0]
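
Note that the number of values returned by cv2.findContours differs across OpenCV releases: 3.x returns (image, contours, hierarchy), while 2.x and 4.x return (contours, hierarchy). A minimal version-agnostic sketch (reusing the blur image from the question) takes the second-to-last return value:

result = cv2.findContours(blur.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
ctrs = result[-2]  # the contours are the second-to-last item in every version
rects = [cv2.boundingRect(ctr) for ctr in ctrs]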

Solution 2:[2]

It is your obj_coordinates that does not seem to be a 4-tuple. It is an element of objs, which is produced by obj_detection.predict(gray_image). The code context you shared is insufficient to tell what is wrong in that function.
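
One quick way to diagnose this is to print the shape of objs before looping over it. A minimal sketch, assuming obj_detection is a Keras-style classifier (an assumption, since the question does not show how it is built): such a model's predict typically returns class probabilities of shape (1, num_classes) rather than (x, y, width, height) boxes, which would explain the unpacking error.

objs = obj_detection.predict(gray_image)
print(np.asarray(objs).shape)  # a classifier usually yields (1, num_classes), not (n, 4)

for obj_coordinates in objs:
    # apply_offsets expects an (x, y, width, height) 4-tuple
    if len(obj_coordinates) != 4:
        print("not a bounding box:", obj_coordinates)
        continue
    x1, x2, y1, y2 = apply_offsets(obj_coordinates, class_offsets)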

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Jeru Luke
Solution 2: Victor Paléologue