My goal is to read frames from an RTSP server, perform some OpenCV manipulation on them, and write the manipulated frames to a new RTSP server.
I tried the following approach, based on Write in Gstreamer pipeline from opencv in python, but I could not figure out what the appropriate gst-launch-1.0 arguments for creating the RTSP server should be. Can anyone provide suitable gst-launch-1.0 arguments? The ones I tried got stuck at "Pipeline is PREROLLING".
import cv2

cap = cv2.VideoCapture("rtsp://....")
framerate = 25.0

out = cv2.VideoWriter('appsrc ! videoconvert ! '
                      'x264enc noise-reduction=10000 speed-preset=ultrafast tune=zerolatency ! '
                      'rtph264pay config-interval=1 pt=96 !'
                      'tcpserversink host=192.168.1.27 port=5000 sync=false',
                      0, framerate, (640, 480))

counter = 0
while cap.isOpened():
    ret, frame = cap.read()
    if ret:
        out.write(frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

cap.release()
out.release()

I also tried another solution, based on Write opencv frames into gstreamer rtsp server pipeline:
import cv2
import gi

gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import Gst, GstRtspServer, GObject


class SensorFactory(GstRtspServer.RTSPMediaFactory):
    def __init__(self, **properties):
        super(SensorFactory, self).__init__(**properties)
        #self.cap = cv2.VideoCapture(0)
        self.cap = cv2.VideoCapture("rtsp://....")
        self.number_frames = 0
        self.fps = 30
        self.duration = 1 / self.fps * Gst.SECOND  # duration of a frame in nanoseconds
        self.launch_string = 'appsrc name=source is-live=true block=true format=GST_FORMAT_TIME ' \
                             'caps=video/x-raw,format=BGR,width=640,height=480,framerate={}/1 ' \
                             '! videoconvert ! video/x-raw,format=I420 ' \
                             '! x264enc speed-preset=ultrafast tune=zerolatency ' \
                             '! rtph264pay config-interval=1 name=pay0 pt=96'.format(self.fps)

    def on_need_data(self, src, length):
        if self.cap.isOpened():
            ret, frame = self.cap.read()
            if ret:
                data = frame.tostring()
                buf = Gst.Buffer.new_allocate(None, len(data), None)
                buf.fill(0, data)
                buf.duration = self.duration
                timestamp = self.number_frames * self.duration
                buf.pts = buf.dts = int(timestamp)
                buf.offset = timestamp
                self.number_frames += 1
                retval = src.emit('push-buffer', buf)
                print('pushed buffer, frame {}, duration {} ns, durations {} s'.format(self.number_frames, self.duration, self.duration / Gst.SECOND))
                if retval != Gst.FlowReturn.OK:
                    print(retval)

    def do_create_element(self, url):
        return Gst.parse_launch(self.launch_string)

    def do_configure(self, rtsp_media):
        self.number_frames = 0
        appsrc = rtsp_media.get_element().get_child_by_name('source')
        appsrc.connect('need-data', self.on_need_data)


class GstServer(GstRtspServer.RTSPServer):
    def __init__(self, **properties):
        super(GstServer, self).__init__(**properties)
        self.factory = SensorFactory()
        self.factory.set_shared(True)
        self.get_mount_points().add_factory("/test", self.factory)
        self.attach(None)


GObject.threads_init()
Gst.init(None)

server = GstServer()

loop = GObject.MainLoop()
loop.run()

This solution creates the RTSP server and streams to it. I can open the resulting RTSP stream in VLC, but it keeps showing the first frame and never updates with new frames. Does anyone know why?
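One arithmetic detail worth checking in the factory above: under Python 2.7, `1 / self.fps` is integer division, so `self.duration` evaluates to 0 and every pushed buffer gets the same zero pts, which would explain a player that never advances past the first frame. A minimal sketch of the math, assuming `Gst.SECOND` is 10^9 nanoseconds (Python 2 truncation reproduced here with `//`):

```python
GST_SECOND = 10 ** 9  # same value as Gst.SECOND: nanoseconds per second

fps = 30

# What the posted code computes under Python 2.7: 1 / 30 truncates to 0,
# so the per-frame duration (and therefore every pts) collapses to 0.
duration_py2 = (1 // fps) * GST_SECOND
print(duration_py2)  # 0

# Doing the division last keeps the intended per-frame duration.
duration_fixed = GST_SECOND // fps
print(duration_fixed)  # 33333333 ns, i.e. ~1/30 s
```

Writing `self.duration = Gst.SECOND // self.fps` (or `1.0 / self.fps * Gst.SECOND`) would avoid the truncation under either Python version.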
I am looking for any solution that lets me, with low latency, read frames from an RTSP server into OpenCV format, manipulate the frames, and output them to a new RTSP server (which I also need to create). The solution does not have to be based on GStreamer if something better exists.
I am using Ubuntu 16.04 with Python 2.7 and OpenCV 3.4.1.
Posted on 2022-01-24 18:02:44
I did something similar once, reading frames from an RTSP server and processing them in OpenCV. For some reason cv2's VideoCapture did not work for me. So my solution was to use ffmpeg to convert the RTSP input into a raw bitmap stream; for my problem it was enough to read grayscale images at 1 byte per pixel.
The basic idea of the implementation is: launch ffmpeg as a subprocess that decodes the RTSP stream into raw frames on stdout, read fixed-size chunks from that pipe in a background thread, and reshape each chunk into a numpy array.
Here is my code (it is Python 3, but it should be easy to convert to 2.7):
import subprocess
import shlex
import time
from threading import Thread
import os

import numpy as np
import logging


class FFMPEGVideoReader(object):
    MAX_FRAME_WAIT = 5  # seconds to wait for a fresh frame before restarting ffmpeg

    def __init__(self, rtsp_url: str, width: int = 320, height: int = 180) -> None:
        super().__init__()
        self.rtsp_url = rtsp_url
        self.width = width
        self.height = height
        self.process = None
        self._stdout_reader = Thread(target=self._receive_output, name='stdout_reader', daemon=True)
        self._stdout_reader.start()
        self.frame_number = -1
        self._last_frame_read = -1

    def start_reading(self):
        if self.process is not None:
            self.process.kill()
            self.process = None
        # Customize your input/output params here
        command = 'ffmpeg -i {rtsp} -f rawvideo -r 4 -pix_fmt gray -vf scale={width}:{height} -'.format(rtsp=self.rtsp_url, width=self.width, height=self.height)
        logging.debug('Opening ffmpeg process with command "%s"' % command)
        args = shlex.split(command)
        FNULL = open(os.devnull, 'w')
        self.process = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=FNULL)

    def _receive_output(self):
        chunksize = self.width * self.height
        while True:
            while self.process is None:
                time.sleep(1)
            self._last_chunk = self.process.stdout.read(chunksize)
            self.frame_number += 1

    @property
    def frame(self):
        started = time.time()
        while self._last_frame_read == self.frame_number:
            time.sleep(0.125)  # Put your FPS threshold here
            if time.time() - started > self.MAX_FRAME_WAIT:
                logging.warning('Reloading ffmpeg process...')
                self.start_reading()
                started = time.time()
        self._last_frame_read = self.frame_number
        dt = np.dtype('uint8')
        vec = np.frombuffer(self._last_chunk, dtype=dt)
        return np.reshape(vec, (self.height, self.width))


if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG)
    vr = FFMPEGVideoReader('rtsp://192.168.1.10:554/onvif2', width=320, height=180)
    vr.start_reading()
    while True:
        print('update')
        fr = vr.frame
        np.save('frame.npy', fr)

If you need color images, change pix_fmt in the ffmpeg command, read (width * height * channels) bytes, and reshape them correctly with one more axis.
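The color variant described above can be sketched like this (a minimal example; the `chunk` here is a stand-in for what `self.process.stdout.read(chunksize)` would return if ffmpeg were run with `-pix_fmt bgr24`):

```python
import numpy as np

width, height, channels = 320, 180, 3          # bgr24: 3 bytes per pixel

chunksize = width * height * channels           # bytes per color frame
chunk = bytes(chunksize)                        # stand-in for process.stdout.read(chunksize)

# Same reshape as the grayscale reader, with the channel axis added last
frame = np.frombuffer(chunk, dtype=np.uint8).reshape((height, width, channels))
print(frame.shape)  # (180, 320, 3)
```

With `bgr24` the resulting array is already in OpenCV's BGR channel order, so it can be passed to cv2 functions directly.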
Posted on 2022-01-30 21:07:52
Another option is to use the OpenCV VideoWriter to encode H264 frames and send them to a shm sink:

h264_shmsink = cv2.VideoWriter("appsrc is-live=true ! queue ! videoconvert ! video/x-raw, format=BGRx ! nvvidconv ! "
                               "nvv4l2h264enc insert-sps-pps=1 ! video/x-h264, stream-format=byte-stream ! h264parse ! shmsink socket-path=/tmp/my_h264_sock ",
                               cv2.CAP_GSTREAMER, 0, float(fps), (int(width), int(height)))

where width and height are the size of the pushed frames. Then launch the RTSP server with test-launch, using a shmsrc that applies timestamps as the source, such as:

./test-launch "shmsrc socket-path=/tmp/my_h264_sock do-timestamp=1 ! video/x-h264, stream-format=byte-stream, width=640, height=480, framerate=30/1 ! h264parse ! video/x-h264, stream-format=byte-stream ! rtph264pay pt=96 name=pay0 "

This may add some system overhead, but it may be fine for low bitrates, or may need some optimization for higher bitrates.
Posted on 2021-12-20 16:40:40
Not tried, but you could try:
- Replacing nvv4l2h264enc with omxh264enc, as it seems to behave better. I quickly experimented with UDP streaming and found that in my case nvv4l2h264enc used a higher default profile than omxh264enc; even with the fastest preset it kept lagging over UDP, while omxh264enc kept up (maybe I missed some option).
- Adding rtph264pay config-interval=1 after the H264 encoder, between h264parse and rtspclientsink.
- Adding a queue after appsrc.
If it works, you can then try removing any useless parts.
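Applied to the shmsink writer from the previous answer, those suggestions might look like the sketch below. This is an assumption, not a tested pipeline: `omxh264enc` and its `insert-sps-pps` property exist only on platforms shipping the OMX plugins (e.g. Jetson), and the actual `cv2.VideoWriter` call is left commented out because it needs an OpenCV build with GStreamer support.

```python
fps, width, height = 30.0, 640, 480  # must match the frames you push

# Hypothetical adjusted writer pipeline: a queue right after appsrc and
# omxh264enc in place of nvv4l2h264enc. In the shmsink setup, rtph264pay
# config-interval=1 would go on the test-launch side, where the payloader lives.
pipeline = (
    "appsrc is-live=true ! queue ! videoconvert ! video/x-raw, format=BGRx ! "
    "omxh264enc insert-sps-pps=true ! video/x-h264, stream-format=byte-stream ! "
    "h264parse ! shmsink socket-path=/tmp/my_h264_sock"
)

# out = cv2.VideoWriter(pipeline, cv2.CAP_GSTREAMER, 0, fps, (int(width), int(height)))
print("queue" in pipeline and "omxh264enc" in pipeline)  # True
```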
Also note that the Python VideoWriter API may have changed if you upgrade your OpenCV version/build (the second argument is now the backend API to use, such as cv2.CAP_GSTREAMER or cv2.CAP_ANY), but that does not seem to be your case, since you have a working setup.
Hope this helps.
https://stackoverflow.com/questions/51058911