OpenCV (-215:Assertion failed) !_src.empty() in function 'cvtColor'

So, basically I'm writing a program in Google Colab that will detect faces through the webcam using Python and OpenCV. (I have Ubuntu 19.10, if that helps.)

import cv2
faceCascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

video_capture = cv2.VideoCapture(0)

while True:
  ret, frame = video_capture.read()

  gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

At this point, an assertion error appears:

Traceback (most recent call last)
<ipython-input-94-ca2ba51b9064> in <module>()
      7   ret, frame = video_capture.read()
      8 
----> 9   gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

error: OpenCV(4.1.2) /io/opencv/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'

Nothing else is using the webcam while I'm running this code.



Solution 1:[1]

!_src.empty() means you have an empty frame.

When cv2 can't get a frame from the camera/file/stream, it doesn't raise an error; it simply sets frame to None and ret to False, so you have to check one of these values before converting the frame:

if frame is not None: 
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # ... other code ...
else:
    print("empty frame")
    exit(1)

or

if ret:  # if ret is True:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # ... other code ...
else:
    print("empty frame")
    exit(1)

or

if not ret: 
    print("empty frame")
    exit(1)

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# ... other code ...

or

if frame is None: 
    print("empty frame")
    exit(1)

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# ... other code ...
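
Putting one of these checks back into the original face-detection loop, a minimal sketch (for a local machine with a reachable webcam; Colab is a different story, see the edit below) could look like this:

import cv2

faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
video_capture = cv2.VideoCapture(0)

while True:
    ret, frame = video_capture.read()

    if not ret:  # camera gave no frame - stop instead of crashing in cvtColor
        print("empty frame")
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("Video", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press `q` to quit
        break

video_capture.release()
cv2.destroyAllWindows()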

BTW: you can't use the shorter if frame: because when the camera does return an image, frame is a numpy.array(), and the truth value of a multi-element array is ambiguous, so Python raises a ValueError asking you to use .all() or .any() - but .all() or .any() will themselves fail when frame is None.
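
A quick illustration with a dummy array standing in for a real frame (just a sketch, numpy only):

import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a frame from the camera

try:
    if frame:  # truth value of a multi-element array is ambiguous
        pass
except ValueError as e:
    print(e)  # numpy asks you to use a.any() or a.all()

# and frame.any() would itself fail with AttributeError when frame is None,
# so checking `ret` or `frame is None` is the safer option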


BTW: Sometimes cv2 has trouble finding the Haar cascade file. There is a special variable with the path to the folder containing the .xml files - cv2.data.haarcascades - so you may need

faceCascade = cv2.CascadeClassifier(os.path.join(cv2.data.haarcascades, "haarcascade_frontalface_default.xml"))
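
A wrong path doesn't raise an error at this point - you just get an empty classifier and detectMultiScale fails later with its own assertion - so it is worth checking empty() right after loading (a small sketch; note the os import the line above needs):

import os
import cv2

cascade_path = os.path.join(cv2.data.haarcascades, "haarcascade_frontalface_default.xml")
faceCascade = cv2.CascadeClassifier(cascade_path)

if faceCascade.empty():  # True when the .xml file could not be loaded
    raise IOError("could not load cascade: " + cascade_path)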

EDIT: (2022.05.10)

The main problem may be that Google Colab runs your code on a server, and that server has no access to your local webcam. Only the web browser has access to the local webcam, so you have to use JavaScript to access it, take an image and send it to the server.

Google Colab has an example of how to do this for a single screenshot. See Camera Capture.

I wrote code which also simulates VideoCapture() and cap.read() so they work with video from the local webcam.

https://colab.research.google.com/drive/1a2seyb864Aqpu13nBjGRJK0AIU7JOdJa?usp=sharing

#
# based on: https://colab.research.google.com/notebooks/snippets/advanced_outputs.ipynb#scrollTo=2viqYx97hPMi
#

from IPython.display import display, Javascript
from google.colab.output import eval_js
from base64 import b64decode, b64encode
import numpy as np
import cv2

def init_camera():
  """Create objects and functions in HTML/JavaScript to access local web camera"""

  js = Javascript('''

    // global variables to use in both functions
    var div = null;
    var video = null;   // <video> to display stream from local webcam
    var stream = null;  // stream from local webcam
    var canvas = null;  // <canvas> for single frame from <video> and convert frame to JPG
    var img = null;     // <img> to display JPG after processing with `cv2`

    async function initCamera() {
      // place for video (and eventually buttons)
      div = document.createElement('div');
      document.body.appendChild(div);

      // <video> to display video
      video = document.createElement('video');
      video.style.display = 'block';
      div.appendChild(video);

      // get webcam stream and assign it to <video>
      stream = await navigator.mediaDevices.getUserMedia({video: true});
      video.srcObject = stream;

      // start playing stream from webcam in <video>
      await video.play();

      // Resize the output to fit the video element.
      google.colab.output.setIframeHeight(document.documentElement.scrollHeight, true);

      // <canvas> for frame from <video>
      canvas = document.createElement('canvas');
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      //div.appendChild(canvas); // there is no need to display it to get the image (but you can display it for testing)

      // <img> for image after processing with `cv2`
      img = document.createElement('img');
      img.width = video.videoWidth;
      img.height = video.videoHeight;
      div.appendChild(img);
    }

    async function takeImage(quality) {
      // draw frame from <video> on <canvas>
      canvas.getContext('2d').drawImage(video, 0, 0);

      // stop webcam stream
      //stream.getVideoTracks()[0].stop();

      // get data from <canvas> as a JPG image encoded as base64 with header "data:image/jpg;base64,"
      return canvas.toDataURL('image/jpeg', quality);
      //return canvas.toDataURL('image/png', quality);
    }

    async function showImage(image) {
      // it needs string "data:image/jpg;base64,JPG-DATA-ENCODED-BASE64"
      // it will replace previous image in `<img src="">`
      img.src = image;
      // TODO: create <img> if it doesn't exist,
      // TODO: use `id` to use a different `<img>` for each image - like `name` in `cv2.imshow(name, image)`
    }

  ''')

  display(js)
  eval_js('initCamera()')

def take_frame(quality=0.8):
  """Get frame from web camera"""

  data = eval_js('takeImage({})'.format(quality))  # run JavaScript code to get image (JPG as string base64) from <canvas>

  header, data = data.split(',')  # split header ("data:image/jpg;base64,") and base64 data (JPG)
  data = b64decode(data)  # decode base64
  data = np.frombuffer(data, dtype=np.uint8)  # create numpy array with JPG data

  img = cv2.imdecode(data, cv2.IMREAD_UNCHANGED)  # uncompress JPG data to array of pixels

  return img

def show_frame(img, quality=0.8):
  """Put frame as <img src="data:image/jpg;base64,...."> """

  ret, data = cv2.imencode('.jpg', img)  # compress array of pixels to JPG data

  data = b64encode(data)  # encode base64
  data = data.decode()  # convert bytes to string
  data = 'data:image/jpg;base64,' + data  # join header ("data:image/jpg;base64,") and base64 data (JPG)

  eval_js('showImage("{}")'.format(data))  # run JavaScript code to put image (JPG as string base64) in <img>
                                           # argument in `showImage` needs `" "`


class BrowserVideoCapture():

    def __init__(self, src=None):
        init_camera()

    def read(self):
        return True, take_frame()

cap = BrowserVideoCapture(0)

while True:
    ret, frame = cap.read()
    if ret:
        show_frame(frame)
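
For the original face-detection task, the same loop can run the Haar cascade on every frame coming from the browser and send the annotated image back to the page - a sketch built on the helpers above (it assumes the cells defining BrowserVideoCapture and show_frame have already been run):

import os
import cv2

faceCascade = cv2.CascadeClassifier(os.path.join(cv2.data.haarcascades, "haarcascade_frontalface_default.xml"))

cap = BrowserVideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret or frame is None:  # skip empty frames instead of crashing in cvtColor
        continue

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    show_frame(frame)  # display the processed frame in the <img> created by init_camera()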

Solution 2:[2]

Oh god... I just realized that I was working in the Google Colab virtual environment - that's why it can't connect to my local camera.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1
Solution 2