GStreamer RTSP server with different URLs for payloaders
I am working on a streaming device with a CSI camera input. I want to duplicate the incoming stream with a tee and then expose each branch under a different URL using gst-rtsp-server. My camera allows only one consumer, so it is impossible to run two standalone pipelines. Is this possible? See the pseudo-pipeline below.
source -> tee name=t -> rtsp with url0
                  t. -> rtsp with url1
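In gst-launch notation, the fan-out part alone would look like the following (the fakesinks are placeholders for what would become the two RTSP end points):

```shell
gst-launch-1.0 nvv4l2camerasrc device=/dev/video0 ! tee name=t \
    t. ! queue ! fakesink \
    t. ! queue ! fakesink
```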
Thanks!
EDIT 1:
I tried the first solution, with an appsink/appsrc pair, but I was only half successful. Now I have two pipelines.
nvv4l2camerasrc device=/dev/video0 ! video/x-raw(memory:NVMM), width=1920, height=1080, format=UYVY, framerate=50/1 ! nvvidconv name=conv ! video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=50/1 ! nvv4l2h264enc control-rate=1 bitrate=10000000 preset-level=1 profile=0 disable-cabac=1 maxperf-enable=1 name=encoder insert-sps-pps=1 insert-vui=1 ! appsink name=appsink sync=false
and
appsrc name=appsrc format=3 is-live=true do-timestamp=true ! queue ! rtph264pay config-interval=1 name=pay0
The second pipeline is used to create the media factory. I push the buffers from the appsink to the appsrc in a callback for the new-sample signal, like this:
static GstElement *appsrc;  /* set when the factory's media is constructed */

static GstFlowReturn
on_new_sample_from_sink (GstElement * elt, gpointer data)
{
  GstSample *sample;
  GstFlowReturn ret = GST_FLOW_OK;

  /* pull the encoded sample from the capture pipeline's appsink */
  sample = gst_app_sink_pull_sample (GST_APP_SINK (elt));
  if (appsrc)
    ret = gst_app_src_push_sample (GST_APP_SRC (appsrc), sample);
  /* push_sample takes its own reference; drop ours */
  gst_sample_unref (sample);
  return ret;
}
This works: the video is streamed and can be viewed on a different machine using GStreamer or VLC. The problem is latency; for some reason it is about 3 s. When I merge these two pipelines into one and create the media factory directly, without appsink and appsrc, it works fine without the large latency.
I think that for some reason the appsrc is queuing buffers before it starts pushing them to its source pad. In the debug output below you can see the number of queued bytes it stabilizes at.
0:00:19.202295929 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (1113444 >= 200000)
0:00:19.202331834 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (1113444 >= 200000)
0:00:19.202353818 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1863:gst_app_src_push_internal:<appsrc> queueing buffer 0x7f58039690
0:00:19.222150573 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (1141310 >= 200000)
0:00:19.222184302 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (1141310 >= 200000)
EDIT 2:
I added the max-buffers property to the appsink and the suggested properties to the queues, but it didn't help at all.
I just don't understand how it can buffer so many buffers, and why. If I run my test application with GST_DEBUG=appsrc:5, I get output like this.
0:00:47.923713520 14035 0x7f68003850 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (2507045 >= 200000)
0:00:47.923757840 14035 0x7f68003850 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (2507045 >= 200000)
According to this debug output, it is all queued in the appsrc even though its max-bytes property is set to 200 000 bytes. Maybe I don't understand it correctly, but it looks weird to me.
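Reading the log above, appsrc seems to treat max-bytes only as a soft limit when block=false: it logs "queue filled" and then still queues the buffer, relying on the application to react to its enough-data signal. One mitigation to sketch (an assumption, not a verified fix for this setup) is to let the push side block so that backpressure reaches the capture pipeline instead of the queue growing:

```shell
# Sketch: with block=true, gst_app_src_push_sample() blocks once
# max-bytes is reached, stalling the appsink callback rather than
# letting the internal queue grow past the limit.
appsrc name=appsrc format=3 stream-type=0 is-live=true do-timestamp=true \
    block=true max-bytes=200000 ! queue ! rtph264pay config-interval=1 name=pay0
```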
My pipelines are currently like this.
nvv4l2camerasrc device=/dev/video0 ! video/x-raw(memory:NVMM), width=1920, height=1080, format=UYVY, framerate=50/1 ! queue max-size-buffers=3 leaky=downstream ! nvvidconv name=conv ! video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=50/1 ! nvv4l2h264enc control-rate=1 bitrate=10000000 preset-level=1 profile=0 disable-cabac=1 maxperf-enable=1 name=encoder insert-sps-pps=1 insert-vui=1 ! appsink name=appsink sync=false max-buffers=3
and
appsrc name=appsrc format=3 stream-type=0 is-live=true do-timestamp=true blocksize=16384 max-bytes=200000 ! queue max-size-buffers=3 leaky=no ! rtph264pay config-interval=1 name=pay0
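For context, the appsrc launch string above is handed to the media factory roughly as follows. This is only a sketch, not the full application: the mount path and function names are assumed, and error handling is omitted. The media-configure signal is one place to fetch the appsrc instance that the appsink callback pushes into:

```c
#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

static GstElement *appsrc;  /* consumed by the appsink new-sample callback */

/* Runs when the factory instantiates its pipeline; grab the appsrc. */
static void
media_configure (GstRTSPMediaFactory * factory, GstRTSPMedia * media,
    gpointer user_data)
{
  GstElement *element = gst_rtsp_media_get_element (media);
  appsrc = gst_bin_get_by_name_recurse_up (GST_BIN (element), "appsrc");
  gst_object_unref (element);
}

static void
setup_factory (GstRTSPServer * server)
{
  GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points (server);
  GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new ();

  gst_rtsp_media_factory_set_launch (factory,
      "( appsrc name=appsrc format=3 stream-type=0 is-live=true "
      "do-timestamp=true ! queue ! rtph264pay config-interval=1 name=pay0 )");
  /* share one media pipeline between all clients of this URL */
  gst_rtsp_media_factory_set_shared (factory, TRUE);
  g_signal_connect (factory, "media-configure",
      G_CALLBACK (media_configure), NULL);
  gst_rtsp_mount_points_add_factory (mounts, "/url0", factory);
  g_object_unref (mounts);
}
```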
Solution 1:[1]
I can think of three possibilities:
Use appsink/appsrc (as in this example) to separate the pipeline into something like:
    Capture pipeline                Factory with URL 1
.------------------------.   .-------------------------------.
 v4l2src ! ... ! appsink      appsrc ! encoder ! rtph264pay
'------------------------'   '-------------------------------'
                                 Factory with URL 2
                             .-------------------------------.
                              appsrc ! encoder ! rtph264pay
                             '-------------------------------'
You would manually take out buffers from the appsink and push them into the different appsrcs.
Build something like above, but use something like interpipes or intervideosink in place of the appsink/appsrc to perform the buffer transfer automatically.
Use something like GstRtspSink (paid product though)
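The second option might look like the following with RidgeRun's interpipe elements. This is only a sketch: the channel name cam is assumed, and note that the stock intervideosink/intervideosrc pair carries raw video only, so with those elements the encoder would have to move into each factory:

```shell
# Capture pipeline: publish buffers once on channel "cam"
nvv4l2camerasrc device=/dev/video0 ! nvvidconv ! nvv4l2h264enc ! interpipesink name=cam

# Launch string of each RTSP factory: subscribe to the same channel
( interpipesrc listen-to=cam is-live=true ! rtph264pay config-interval=1 name=pay0 )
```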
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
Solution | Source
---|---
Solution 1 | Michael Gruner