How to share a numpy array between multiple threads in Python?

I'm modifying a yolov5 script, and here I'm trying to pass an array between threads.

import cv2
import threading
from queue import Queue

# cam (a cv2.VideoCapture) and model (the loaded yolov5 model) are created earlier in the script

def detection(out_q):
    while cam.isOpened():
        ref, img = cam.read()
        img = cv2.resize(img, (640, 320))

        result = model(img)
        yoloBbox = result.xywh[0].numpy()  # yolo format
        bbox = result.xyxy[0].numpy()      # pascal format
        for i in bbox:
            out_q.put(i)  # 'i' is a detection of length 6

def resultant(in_q):
    while cam.isOpened():
        ref, img = cam.read()
        img = cv2.resize(img, (640, 320))
        qbbox = in_q.get()
        print(qbbox)

if __name__ == '__main__':
    q = Queue(maxsize=10)

    t1 = threading.Thread(target=detection, args=(q,))
    t2 = threading.Thread(target=resultant, args=(q,))

    t1.start()
    t2.start()

    t1.join()
    t2.join()

I tried this, but it gives me errors like:

Assertion fctx->async_lock failed at libavcodec/pthread_frame.c:155

So is there any other method to pass the array? Any kind of tutorial or solution is appreciated. If there is any misunderstanding in my question, please let me know. Thanks a lot!

Update:

I also tried it like this:

import multiprocessing
import numpy as np

def detection(ns, event):
    # The commented-out block below works: a plain list shares fine
    # a = np.array([1, 2, 3])
    # a = list(a)
    # ns.value = a
    # event.set()
    while cam.isOpened():
        ref, img = cam.read()
        img = cv2.resize(img, (640, 320))

        result = model(img)
        yoloBbox = result.xywh[0].numpy()  # yolo format
        bbox = result.xyxy[0].numpy()      # pascal format
        for i in bbox:
            arr = np.squeeze(np.array(i))
            print("bef: ", arr)
            ns.value = arr   # This is not working
            event.set()

def transfer(ns, event):
    event.wait()
    print(ns.value)

if __name__ == '__main__':
    manager = multiprocessing.Manager()
    namespace = manager.Namespace()
    event = multiprocessing.Event()

    p1 = multiprocessing.Process(target=detection, args=(namespace, event))
    p2 = multiprocessing.Process(target=transfer, args=(namespace, event))
    p1.start()
    p2.start()
    p1.join()
    p2.join()

The value printed for "arr" above is [0 1.8232 407.98 316.46 0.92648 0], but all I get on the receiving side is blank: no error, no warning, only blank. I verified that arr holds a value, and the list and the small numpy array marked as working above do share correctly. So why is the data from the "arr" array blank after sharing, and what should I do?



Solution 1:[1]

So is there any other method to pass the array?

Yes, you could use multiprocessing.shared_memory. It has been part of the standard library since Python 3.8, and PyPI has a backport that allows using it in Python 3.6 and 3.7. See the example in the linked docs to learn how to use multiprocessing.shared_memory with a numpy.ndarray.
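A rough sketch of that documented pattern (the block name "yolo_bbox", the array values, and the dtype are illustrative assumptions, not part of the original answer):

import numpy as np
from multiprocessing import shared_memory

# Producer side: allocate a shared block and copy a detection array into it
bbox = np.array([0, 1.8232, 407.98, 316.46, 0.92648, 0], dtype=np.float64)
shm = shared_memory.SharedMemory(create=True, size=bbox.nbytes, name="yolo_bbox")
shared = np.ndarray(bbox.shape, dtype=bbox.dtype, buffer=shm.buf)
shared[:] = bbox[:]          # copy the data into the shared block

# Consumer side (typically another process): attach by name and read
existing = shared_memory.SharedMemory(name="yolo_bbox")
view = np.ndarray(bbox.shape, dtype=bbox.dtype, buffer=existing.buf)
print(view)

# Clean up once both sides are done
existing.close()
shm.close()
shm.unlink()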

Solution 2:[2]

The answer provided by @Daweo suggesting use of shared memory is correct.

However, it's also worth considering using a lock to 'protect' access to the numpy array (which is not thread-safe), for example along the lines of the sketch below.

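Applied to the threaded version of the question's code, that could look roughly like this (a sketch; shared_bbox and the helper names are illustrative, not from the original answer):

import threading
import numpy as np

lock = threading.Lock()
shared_bbox = np.zeros(6)   # latest detection, shared between the two threads

def publish_bbox(new_bbox):
    # Writer thread: update the shared array while holding the lock
    with lock:
        shared_bbox[:] = new_bbox

def read_bbox():
    # Reader thread: copy the array under the lock, then work on the copy
    with lock:
        return shared_bbox.copy()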

Solution 3:[3]

Okay guys, thanks for the help. I used a multiprocessing queue to share data, and then I moved my program from multiprocessing to threading.

import cv2
import torch

def capture(q):
    cap = cv2.VideoCapture(0)
    while True:
        ref, frame = cap.read()
        frame = cv2.resize(frame, (640, 480))
        q.put(frame)

def det(q):
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', device='cpu')
    model.conf = 0.30    # confidence threshold
    model.classes = [0]  # classes to keep (0 = person, 2 = car)
    model.iou = 0.55     # NMS IoU threshold
    while True:
        mat = q.get()
        det = model(mat)
        bbox = det.xyxy[0].numpy()
        for i in bbox:
            print(i)
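The answer doesn't show how the two functions are wired together; with a multiprocessing queue it would presumably look something like this (a sketch under that assumption; a threading.Thread variant with a queue.Queue works the same way):

import multiprocessing

if __name__ == '__main__':
    q = multiprocessing.Queue(maxsize=10)
    p1 = multiprocessing.Process(target=capture, args=(q,))
    p2 = multiprocessing.Process(target=det, args=(q,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()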

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Daweo
Solution 2: Albert Winestein
Solution 3: rohan099