As a beginner, I don't know what to ask, so any help would be appreciated.
I would like to create a project using the Intel RealSense SDK 2.0.
Actually, to start with, I need something simpler, such as the Intel RealSense SDK 2.0 examples;
a code sample would be best, and a tutorial even better.
So far I haven't found anything on Google.
If the above is too difficult, a suggestion on how to get started would be good.
For example, I found "Qt and OpenCV on Windows with MSVC".
Is that a good place to start? Do I need OpenCV to show/display depth images like the project "im" does?
Thanks in advance.
Posted on 2020-06-23 20:54:59
This is a very simple example that uses only Qt and the Intel RealSense SDK.
We first write a class that handles the camera:
#ifndef CAMERA_H
#define CAMERA_H
// Import QT libs, one for threads and one for images
#include <QThread>
#include <QImage>
// Import librealsense header
#include <librealsense2/rs.hpp>
// Let's define our camera as a thread, it will be constantly running and sending frames to
// our main window
class Camera : public QThread
{
Q_OBJECT
public:
// We need to instantiate a camera with both depth and rgb resolution (as well as fps)
Camera(int rgb_width, int rgb_height, int depth_width, int depth_height, int fps);
~Camera() {}
// Member function that handles thread iteration
void run();
// If called it will stop the thread
void stop() { camera_running = false; }
private:
// Realsense configuration structure, it will define streams that need to be opened
rs2::config cfg;
// Our pipeline, main object used by realsense to handle streams
rs2::pipeline pipe;
// Frames returned by our pipeline, they will be packed in this structure
rs2::frameset frames;
// A bool that defines if our thread is running
bool camera_running = true;
signals:
// A signal sent by our class to notify that there are frames that need to be processed
void framesReady(QImage frameRGB, QImage frameDepth);
};
// A function that will convert realsense frames to QImage
QImage realsenseFrameToQImage(const rs2::frame& f);
#endif // CAMERA_H
To fully understand what this class does, I redirect you to these two pages: Signals and Slots, and QThread. This class is a QThread, which means that it can run in parallel with our main window. When a couple of frames are ready, the signal framesReady will be emitted and the window will show the images.
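As an aside on project setup (my addition, not part of the original answer): to build this with qmake, the .pro file has to point at the RealSense SDK. A sketch, assuming the default Windows install location of the SDK (adjust the paths for your machine):

```
QT += core gui widgets
CONFIG += c++11
# Assumed default install path of the RealSense SDK 2.0 -- adjust as needed
INCLUDEPATH += "C:/Program Files (x86)/Intel RealSense SDK 2.0/include"
LIBS += -L"C:/Program Files (x86)/Intel RealSense SDK 2.0/lib/x64" -lrealsense2
```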
Let's start with how to open the camera streams with librealsense:
Camera::Camera(int rgb_width, int rgb_height, int depth_width, int depth_height, int fps)
{
// Enable depth stream with given resolution. Pixel will have a bit depth of 16 bit
cfg.enable_stream(RS2_STREAM_DEPTH, depth_width, depth_height, RS2_FORMAT_Z16, fps);
// Enable RGB stream as frames with 3 channel of 8 bit
cfg.enable_stream(RS2_STREAM_COLOR, rgb_width, rgb_height, RS2_FORMAT_RGB8, fps);
// Start our pipeline
pipe.start(cfg);
}
As you can see, our constructor is very simple; it just opens the pipeline with the given streams.
Now that the pipeline has started, we only need to get the corresponding frames. We will do that in our "run" method, which starts when the QThread is started:
void Camera::run()
{
while(camera_running)
{
// Wait for frames and get them as soon as they are ready
frames = pipe.wait_for_frames();
// Let's get our depth frame
rs2::depth_frame depth = frames.get_depth_frame();
// And our rgb frame
rs2::frame rgb = frames.get_color_frame();
// Let's convert them to QImage
auto q_rgb = realsenseFrameToQImage(rgb);
auto q_depth = realsenseFrameToQImage(depth);
// And finally we'll emit our signal
emit framesReady(q_rgb, q_depth);
}
}
The function that performs the conversion is the following:
QImage realsenseFrameToQImage(const rs2::frame &f)
{
using namespace rs2;
auto vf = f.as<video_frame>();
const int w = vf.get_width();
const int h = vf.get_height();
if (f.get_profile().format() == RS2_FORMAT_RGB8)
{
// QImage does not take ownership of the buffer, so make a deep copy:
// the rs2::frame will be recycled while the image is still on screen
auto r = QImage((uchar*) f.get_data(), w, h, w*3, QImage::Format_RGB888);
return r.copy();
}
else if (f.get_profile().format() == RS2_FORMAT_Z16)
{
// only if you have Qt >= 5.13
auto r = QImage((uchar*) f.get_data(), w, h, w*2, QImage::Format_Grayscale16);
return r.copy();
}
throw std::runtime_error("Frame format is not supported yet!");
}
Our camera is done.
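A hedged aside (my addition, not from the original answer): if your Qt version is older than 5.13, QImage::Format_Grayscale16 is not available. A common fallback is to scale the 16-bit depth values down to 8 bits and use QImage::Format_Grayscale8 instead, which exists in every Qt 5 release. A minimal sketch of that scaling, independent of Qt and librealsense:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Scale raw 16-bit depth values into 8-bit grayscale, clamping at max_depth
// (in depth units). The resulting buffer can be wrapped in a QImage with
// QImage::Format_Grayscale8, using a stride of w*1 instead of w*2.
std::vector<uint8_t> depthToGray8(const uint16_t* data, size_t count, uint16_t max_depth)
{
    std::vector<uint8_t> out(count);
    for (size_t i = 0; i < count; ++i)
    {
        // Clamp so that everything beyond max_depth renders as white
        uint16_t d = std::min(data[i], max_depth);
        out[i] = static_cast<uint8_t>((static_cast<uint32_t>(d) * 255) / max_depth);
    }
    return out;
}
```

Here max_depth is a cutoff you choose yourself (a hypothetical example: 4000 units, which is about 4 m at the sensor's default 1 mm depth scale); everything farther away is clamped to white.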
Now we will define our main window. We need a slot that receives the frames, and two labels where we will put the images:
#ifndef MAINWINDOW_H
#define MAINWINDOW_H
#include <QMainWindow>
#include <QLabel>
class MainWindow : public QMainWindow
{
Q_OBJECT
public:
explicit MainWindow(QWidget *parent = 0);
public slots:
// Slot that will receive frames from the camera
void receiveFrame(QImage rgb, QImage depth);
private:
QLabel *rgb_label;
QLabel *depth_label;
};
#endif // MAINWINDOW_H
We create a simple view for the window, in which the images will be shown vertically.
#include "mainwindow.h"
// Includes needed for the vertical layout and for QPixmap::fromImage
#include <QVBoxLayout>
#include <QPixmap>
MainWindow::MainWindow(QWidget *parent) :
QMainWindow(parent)
{
// Creates our central widget that will contain the labels
QWidget *widget = new QWidget();
// Create our labels with an empty string
rgb_label = new QLabel("");
depth_label = new QLabel("");
// Define a vertical layout
QVBoxLayout *widgetLayout = new QVBoxLayout;
// Add the labels to the layout
widgetLayout->addWidget(rgb_label);
widgetLayout->addWidget(depth_label);
// And then assign the layout to the central widget
widget->setLayout(widgetLayout);
// Lastly assign our central widget to our window
setCentralWidget(widget);
}
Now we need to define the slot function. Its only task is to update the images shown by the labels:
void MainWindow::receiveFrame(QImage rgb, QImage depth)
{
rgb_label->setPixmap(QPixmap::fromImage(rgb));
depth_label->setPixmap(QPixmap::fromImage(depth));
}
And we're done!
Finally, we write our main, which starts the thread and shows the window.
#include <QApplication>
#include "mainwindow.h"
#include "camera.h"
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
MainWindow window;
Camera camera(640, 480, 320, 240, 30);
// Connect the signal from the camera to the slot of the window
QApplication::connect(&camera, &Camera::framesReady, &window, &MainWindow::receiveFrame);
window.show();
camera.start();
return a.exec();
}
Posted on 2020-06-21 14:58:34
From your question, I understand that you are trying to preview the camera data in a Qt application.
Mat img;
QImage img1 = QImage((uchar *) img.data, img.cols, img.rows, img.step, QImage::Format_Indexed8);
(Note that Format_Indexed8 only fits a single-channel 8-bit Mat; a 3-channel color Mat would need Format_RGB888 instead.)
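One caveat with this OpenCV route (my addition, not part of the answer above): OpenCV stores color images as BGR, while QImage::Format_RGB888 expects RGB, so a color Mat needs its channels swapped first (in OpenCV itself, cv::cvtColor with cv::COLOR_BGR2RGB). A sketch of the swap itself, written without the OpenCV dependency:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Swap the B and R channels of an interleaved BGR buffer so it matches the
// RGB order that QImage::Format_RGB888 expects.
std::vector<uint8_t> bgrToRgb(const uint8_t* data, size_t pixel_count)
{
    std::vector<uint8_t> out(pixel_count * 3);
    for (size_t i = 0; i < pixel_count; ++i)
    {
        out[i * 3 + 0] = data[i * 3 + 2]; // R (stored last in BGR)
        out[i * 3 + 1] = data[i * 3 + 1]; // G
        out[i * 3 + 2] = data[i * 3 + 0]; // B (stored first in BGR)
    }
    return out;
}
```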
https://stackoverflow.com/questions/62474665