I have a stereo camera system, and I am trying this MATLAB Computer Vision Toolbox example (http://www.mathworks.com/help/vision/ug/sparse-3-d-reconstruction-from-multiple-views.html) with my own images and camera calibration files. I am using the Caltech Camera Calibration Toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/).
First, following the first example, I calibrated each camera separately, found each camera's intrinsic matrix, and saved them. I also undistorted the left and right images using the Caltech toolbox, so I commented out the corresponding code in the MATLAB example.
Here are the intrinsic camera matrices:
K1 = [1050 0 630; 0 1048 460; 0 0 1];
K2 = [1048 0 662; 0 1047 468; 0 0 1];
By the way, these are from the right and center lenses of a Bumblebee XB3 camera.
Q: Shouldn't they be identical?
Then I performed stereo calibration following the fifth example, and saved the rotation matrix (R) and translation vector (T). Again, I commented out the corresponding code in the MATLAB example.
Here are the rotation and translation matrices:
R = [0.9999 -0.0080 -0.0086; 0.0080 1 0.0048; 0.0086 -0.0049 1];
T = [-120.14 0.55 -1.04];
Then I fed all of these images, calibration files, and camera matrices into the MATLAB example and tried to compute the 3-D point cloud, but the results were not satisfactory. I attach the code below. I think there are two problems here:
1. My epipolar-constraint values are far too large (on the order of 10^16)!
2. I am not sure about the camera matrices, or how to compute them from the R and T produced by the Caltech toolbox!
Other than that, the feature extraction works fine.
It would be great if anyone could help.
clear
close all
clc
files = {'Left1.tif';'Right1.tif'};
for i = 1:numel(files)
    files{i} = fullfile('...\sparse_matlab', files{i});
    images(i).image = imread(files{i});
end
figure;
montage(files); title('Pair of Original Images')
% Intrinsic camera parameters
load('Calib_Results_Left.mat')
K1 = KK;
load('Calib_Results_Right.mat')
K2 = KK;
%Extrinsics using stereo calibration
load('Calib_Results_stereo.mat')
Rotation=R;
Translation=T';
images(1).CameraMatrix=[Rotation; Translation] * K1;
images(2).CameraMatrix=[Rotation; Translation] * K2;
% Detect feature points and extract SURF descriptors in images
for i = 1:numel(images)
    % Detect SURF feature points
    images(i).points = detectSURFFeatures(rgb2gray(images(i).image), ...
        'MetricThreshold', 600);
    % Extract SURF descriptors
    [images(i).featureVectors, images(i).points] = ...
        extractFeatures(rgb2gray(images(i).image), images(i).points);
end
% Visualize several extracted SURF features from the Left image
figure; imshow(images(1).image);
title('1500 Strongest Feature Points from the Left Image');
hold on;
plot(images(1).points.selectStrongest(1500));
indexPairs = ...
matchFeatures(images(1).featureVectors, images(2).featureVectors,...
'Prenormalized', true,'MaxRatio',0.4) ;
matchedPoints1 = images(1).points(indexPairs(:, 1));
matchedPoints2 = images(2).points(indexPairs(:, 2));
figure;
% Visualize correspondences
showMatchedFeatures(images(1).image, images(2).image, matchedPoints1, matchedPoints2, 'montage');
title('Original Matched Features from the Left and Right Images');
% Set a threshold near zero; it is used to eliminate matches that
% correspond to points that do not lie on an epipolar line.
epipolarThreshold = .05;
for k = 1:length(matchedPoints1)
    % Compute the fundamental matrix using the example helper function
    % and evaluate the epipolar constraint
    epipolarConstraint = [matchedPoints1.Location(k,:), 1] ...
        * helperCameraMatricesToFMatrix(images(1).CameraMatrix, images(2).CameraMatrix) ...
        * [matchedPoints2.Location(k,:), 1]';
    %%%% here my epipolarConstraint results are bad %%%%
    % Only consider feature matches where the absolute value of the
    % constraint expression is less than the threshold.
    valid(k) = abs(epipolarConstraint) < epipolarThreshold;
end
validpts1 = images(1).points(indexPairs(valid, 1));
validpts2 = images(2).points(indexPairs(valid, 2));
figure;
showMatchedFeatures(images(1).image,images(2).image,validpts1,validpts2,'montage');
title('Matched Features After Applying Epipolar Constraint');
% convert image to double format for plotting
doubleimage = im2double(images(1).image);
points3D = ones(length(validpts1),4); % store homogeneous world coordinates
color = ones(length(validpts1),3); % store color information
% For all point correspondences
for i = 1:length(validpts1)
    % For each correspondence, build the matrix A
    pointInImage1 = validpts1(i).Location;
    pointInImage2 = validpts2(i).Location;
    P1 = images(1).CameraMatrix'; % transpose to match the convention in [1]
    P2 = images(2).CameraMatrix';
    A = [pointInImage1(1)*P1(3,:) - P1(1,:); ...
         pointInImage1(2)*P1(3,:) - P1(2,:); ...
         pointInImage2(1)*P2(3,:) - P2(1,:); ...
         pointInImage2(2)*P2(3,:) - P2(2,:)];
    % Compute the 3-D location from the singular vector corresponding to
    % the smallest singular value of A
    [~, ~, V] = svd(A);
    X = V(:, end);
    X = X / X(end);
    % Store location
    points3D(i,:) = X';
    % Store pixel color for visualization
    y = round(pointInImage1(1));
    x = round(pointInImage1(2));
    color(i,:) = squeeze(doubleimage(x, y, :))';
end
% add green point representing the origin
points3D(end+1,:) = [0,0,0,1];
color(end+1,:) = [0,1,0];
% show images
figure('units','normalized','outerposition',[0 0 .5 .5])
subplot(1,2,1); montage(files,'Size',[1,2]); title('Original Images')
% plot point-cloud
hAxes = subplot(1,2,2); hold on; grid on;
scatter3(points3D(:,1),points3D(:,2),points3D(:,3),50,color,'fill')
xlabel('x-axis (mm)');ylabel('y-axis (mm)');zlabel('z-axis (mm)')
view(20,24);axis equal;axis vis3d
set(hAxes,'XAxisLocation','top','YAxisLocation','left',...
'ZDir','reverse','Ydir','reverse');
grid on
title('Reconstructed Point Cloud');

Posted on 2014-08-26 02:29:18
First of all, the Computer Vision System Toolbox now includes a Camera Calibrator App for calibrating a single camera, as well as support for programmatic stereo camera calibration. You may find those tools easier to use, because the example you are following and the Caltech calibration toolbox use slightly different conventions.
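For the programmatic route, a stereo calibration could look roughly like this (a sketch only: the checkerboard file names and the square size are placeholders for your own data, and I am assuming a standard checkerboard target):

```matlab
% Sketch of programmatic stereo calibration with the Computer Vision
% System Toolbox. File names below are placeholders.
leftImages  = {'left01.tif', 'left02.tif', 'left03.tif'};
rightImages = {'right01.tif', 'right02.tif', 'right03.tif'};

% Detect checkerboard corners in both sets of images
[imagePoints, boardSize] = detectCheckerboardPoints(leftImages, rightImages);

% World coordinates of the corners; square size in millimeters (placeholder)
squareSize = 30;
worldPoints = generateCheckerboardPoints(boardSize, squareSize);

% Calibrate the stereo pair in one call
stereoParams = estimateCameraParameters(imagePoints, worldPoints);
```

The resulting stereoParams object already uses the toolbox's own conventions, so you can feed it straight into the rest of the toolbox without any transposing.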
The example uses the pre-multiply convention, i.e. row vector times matrix, whereas the Caltech toolbox uses the post-multiply convention (matrix times column vector). That means that if you use the camera parameters from Caltech, you have to transpose the intrinsic matrix and the rotation matrix. This is most likely the main cause of your problems.
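In code, the conversion could look like this (a sketch; KK_left and KK_right stand for the KK variables loaded from your two Caltech result files, and I am taking the left camera as the reference):

```matlab
% Convert Caltech's post-multiply convention (x = K*(R*X + T), column
% vectors) to the example's pre-multiply convention
% ([x y 1] ~ [X Y Z 1]*[R; t]*K, row vectors) by transposing.
K1 = KK_left';    % transposed intrinsics of the reference camera
K2 = KK_right';   % transposed intrinsics of the second camera

% Reference (left) camera sits at the origin
images(1).CameraMatrix = [eye(3); 0 0 0] * K1;

% Second camera: transposed rotation, translation as a row vector
images(2).CameraMatrix = [R'; T'] * K2;
```

Note that your current code applies the same extrinsics to both cameras; the reference camera's matrix should use the identity rotation and zero translation, as above.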
As for the intrinsics of your two cameras being different: that is perfectly normal. All cameras differ slightly.
It would also help to look at the matched features you are using for triangulation. Given that you are reconstructing an elongated object, it does not seem surprising that the reconstructed points form a line in 3-D...
You can also try rectifying the images and doing a dense reconstruction, as in the example I've linked to above.
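A dense reconstruction after rectification could be sketched as follows (assuming stereoParams is a stereoParameters object produced by MATLAB's own stereo calibration; exact function availability depends on your toolbox release):

```matlab
% Sketch of rectification plus dense reconstruction.
I1 = imread('Left1.tif');
I2 = imread('Right1.tif');

% Rectify so that corresponding points lie on the same image rows
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams);

% Dense disparity on the rectified pair
disparityMap = disparity(rgb2gray(J1), rgb2gray(J2));

% Back-project every pixel to 3-D (units follow your calibration target)
xyzPoints = reconstructScene(disparityMap, stereoParams);
```

This gives you a point for every pixel with a valid disparity, rather than only at sparse SURF matches.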
https://stackoverflow.com/questions/25437115