YOLOv5 image identification and tracking: how to draw a continuous line connecting the previous point and the current point while the object is in frame

I am trying to detect humans and balls in a video input. I am able to identify both objects and draw a bounding box around each one, but how can I draw a continuous line tracing the trajectory in which they are moving? I have downloaded the detect.py file from the YOLOv5 GitHub repo and customized the objects to identify.

I would like to draw a continuous line that connects the previous point and the current point until the object leaves the video frame.

I need to draw a line along the ball's trajectory, like in the image below:

[Image: a ball's flight path drawn as a curved line over the video frame]

# Apply Classifier
if classify:
    pred = apply_classifier(pred, modelc, img, im0s)

# Process detections
for i, det in enumerate(pred):  # detections per image
    if webcam:  # batch_size >= 1
        p, s, im0, frame = path[i], f'{i}: ', im0s[i].copy(), dataset.count
    else:
        p, s, im0, frame = path, '', im0s.copy(), getattr(dataset, 'frame', 0)

    p = Path(p)  # to Path
    save_path = str(save_dir / p.name)  # img.jpg
    txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}')  # img.txt
    s += '%gx%g ' % img.shape[2:]  # print string
    gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
    imc = im0.copy() if opt.save_crop else im0  # for opt.save_crop
    if len(det):
        # Rescale boxes from img_size to im0 size
        det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()

        # Print results
        for c in det[:, -1].unique():
            n = (det[:, -1] == c).sum()  # detections per class
            s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string

        # Write results
        for *xyxy, conf, cls in reversed(det):
            if save_txt:  # Write to file
                xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                line = (cls, *xywh, conf) if opt.save_conf else (cls, *xywh)  # label format
                with open(txt_path + '.txt', 'a') as f:
                    f.write(('%g ' * len(line)).rstrip() % line + '\n')

            if save_img or opt.save_crop or view_img:  # Add bbox to image
                c = int(cls)  # integer class
                label = None if opt.hide_labels else (names[c] if opt.hide_conf else f'{names[c]} {conf:.2f}')
                plot_one_box(xyxy, im0, label=label, color=colors(c, True), line_thickness=opt.line_thickness)
                if opt.save_crop:
                    save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)

    # Print time (inference + NMS)
    print(f'{s}Done. ({t2 - t1:.3f}s)')

    view_img = True
    # Stream results
    if view_img:
        cv2.imshow(str(p), im0)
        cv2.waitKey(1)  # 1 millisecond


Solution 1:[1]

Let's suppose you need to track only a single ball. Once you have detected the ball in all the frames, all you need to do is draw a transparent yellow line from the center of the ball in the first frame where it was detected to its center in the next frame, and so on through the video. The width of the line could be, say, 30% of the ball's width in the current frame. Just keep a list of the object's centers and sizes.
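
A minimal sketch of that idea, assuming you already collect per-frame center points and box widths (the names draw_trail, centers, and widths are illustrative; OpenCV's addWeighted does the blending):

import cv2

def draw_trail(frame, centers, widths, color=(0, 255, 255), alpha=0.5):
    """Overlay a translucent polyline through the collected ball centers."""
    overlay = frame.copy()
    for i in range(1, len(centers)):
        # Line width roughly 30% of the ball's width in that frame
        thickness = max(1, int(0.3 * widths[i]))
        cv2.line(overlay, centers[i - 1], centers[i], color, thickness)
    # Blend the overlay onto the frame so the trail looks transparent
    return cv2.addWeighted(overlay, alpha, frame, 1 - alpha, 0)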

Now, if you have a couple of balls that never intersect, all you need to do is match each ball to the detection that was closest to it over the previous couple of frames.
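
A greedy nearest-center match is usually enough for that case (a sketch; match_tracks and the max_dist cutoff are illustrative):

import math

def match_tracks(tracks, detections, max_dist=80):
    """Append each detected center to the track whose last center is nearest."""
    for cx, cy in detections:
        best, best_d = None, max_dist
        for track in tracks:  # each track is a list of past (x, y) centers
            px, py = track[-1]
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = track, d
        if best is not None:
            best.append((cx, cy))
        else:
            tracks.append([(cx, cy)])  # no track nearby: start a new one
    return tracks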

And finally, if two balls do intersect, find their movement vectors (fit a regression over several frames before and after the moment where the two "ball" objects merge into one, or stop being recognized as balls, and then split apart again), and assign each trajectory according to the balls' historic locations.
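
One way to get such a movement vector is a linear fit over the last few centers, extrapolated one step ahead (a sketch; the window size is an arbitrary choice, and at least two points are needed):

import numpy as np

def predict_next_center(track, window=5):
    """Extrapolate the next (x, y) from a linear fit of recent centers."""
    pts = np.array(track[-window:], dtype=float)  # needs len(track) >= 2
    t = np.arange(len(pts))
    fx = np.polyfit(t, pts[:, 0], 1)  # x(t) ~ a*t + b
    fy = np.polyfit(t, pts[:, 1], 1)  # y(t) ~ c*t + d
    return (int(np.polyval(fx, len(pts))), int(np.polyval(fy, len(pts))))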

If the trajectory lines come out too jittery, smooth the trajectory and the width with a moving median.
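
For that smoothing, a centered moving median applied to the x coordinates, y coordinates, and widths separately works; a small NumPy sketch:

import numpy as np

def moving_median(values, k=5):
    """Smooth a 1-D sequence with a centered moving median of window k."""
    v = np.asarray(values, dtype=float)
    pad = k // 2
    padded = np.pad(v, pad, mode='edge')  # repeat edge values at the ends
    return np.array([np.median(padded[i:i + k]) for i in range(len(v))])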

Solution 2:[2]

The following structure might help:

This is for the case where there is only one detection per frame:


import cv2
import numpy

# A list to store the centroids of past detections
cent_hist = []

def draw_trajectory(frame: numpy.ndarray, cent_hist: list = cent_hist, trajectory_length: int = 50) -> numpy.ndarray:
    # Keep only the most recent `trajectory_length` centroids
    if len(cent_hist) > trajectory_length:
        del cent_hist[:len(cent_hist) - trajectory_length]
    # Connect consecutive centroids with line segments
    for i in range(len(cent_hist) - 1):
        frame = cv2.line(frame, cent_hist[i], cent_hist[i + 1], (0, 0, 0))
    return frame

for i, det in enumerate(pred):  # detections per image
    if webcam:  # batch_size >= 1
        p, s, im0, frame = path[i], f'{i}: ', im0s[i].copy(), dataset.count
    else:
        p, s, im0, frame = path, '', im0s.copy(), getattr(dataset, 'frame', 0)

    p = Path(p)  # to Path
    save_path = str(save_dir / p.name)  # img.jpg
    txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}')  # img.txt
    s += '%gx%g ' % img.shape[2:]  # print string
    gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
    imc = im0.copy() if opt.save_crop else im0  # for opt.save_crop
    if len(det):
        # Rescale boxes from img_size to im0 size
        det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()

        # Print results
        for c in det[:, -1].unique():
            n = (det[:, -1] == c).sum()  # detections per class
            s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string

        # Write results
        for *xyxy, conf, cls in reversed(det):
            if save_txt:  # Write to file
                xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                line = (cls, *xywh, conf) if opt.save_conf else (cls, *xywh)  # label format
                with open(txt_path + '.txt', 'a') as f:
                    f.write(('%g ' * len(line)).rstrip() % line + '\n')

            if save_img or opt.save_crop or view_img:  # Add bbox to image
                c = int(cls)  # integer class
                label = None if opt.hide_labels else (names[c] if opt.hide_conf else f'{names[c]} {conf:.2f}')
                plot_one_box(xyxy, im0, label=label, color=colors(c, True), line_thickness=opt.line_thickness)
                if opt.save_crop:
                    save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)

    # Print time (inference + NMS)
    print(f'{s}Done. ({t2 - t1:.3f}s)')
    
    ### Calculate the centroid here: with one detection per frame, the last
    ### `xyxy` from the loop above is the ball's box (x1, y1, x2, y2)
    if len(det):
        centroid = (int((xyxy[0] + xyxy[2]) / 2), int((xyxy[1] + xyxy[3]) / 2))
        cent_hist.append(centroid)
    im0 = draw_trajectory(im0, cent_hist, 50)

    view_img = True
    # Stream results
    if view_img:
        cv2.imshow(str(p), im0)
        cv2.waitKey(1)  # 1 millisecond

If you want to use this for multiple detections, I would suggest using an object tracking algorithm (the original answer linked to one), which will help you solve the assignment problem (matching detections to tracks across frames) better when you have multiple points.
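
A minimal sketch of that assignment step, assuming SciPy is available (linear_sum_assignment is the Hungarian solver, which pairs every tracked point with a new detection so the total distance is minimal; assign_detections is an illustrative name):

import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_detections(prev_centers, new_centers):
    """Match previous centers to new detections, minimizing total distance."""
    prev = np.array(prev_centers, dtype=float)
    new = np.array(new_centers, dtype=float)
    # Cost matrix of pairwise Euclidean distances
    cost = np.linalg.norm(prev[:, None, :] - new[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))  # (previous index, new index) pairs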

Solution 3:[3]

This fragment (from a tracker class) appends each object's newest centroid to a per-object history and then draws the motion path, with the line thickness tapering along the trail:

# Record the newest centroid for object `idx`
self.centroid_append[idx].append(centroid[idx])

# Draw the motion path segment by segment
for j in range(1, len(self.centroid_append[idx])):
    if self.centroid_append[idx][j - 1] is None or self.centroid_append[idx][j] is None:
        continue
    # Thickness shrinks as j grows, so the trail tapers
    thickness = int(np.sqrt(64 / float(j + 1)) * 2)
    cv2.line(frame, self.centroid_append[idx][j - 1], self.centroid_append[idx][j], self.color, thickness)

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

[1] Solution 1: igrinis
[2] Solution 2: Atharva Gundawar
[3] Solution 3: Krypton