I am trying to understand camera calibration / 3D reconstruction and ran into strange behavior of the cv::fisheye::distortPoints / cv::fisheye::undistortPoints functions. I expected the fisheye model to move a point along the ray connecting it to the principal point (cx, cy), but that is not what happens. Moreover, cv::fisheye::distortPoints and cv::fisheye::undistortPoints are not inverses of each other (as one would expect).
The code below builds a camera matrix and distortion coefficients, then undistorts and re-distorts an arbitrary point. The intrinsics and distortion coefficients are taken from a public dataset.
cv::Mat camera_matrix = cv::Mat::zeros(3,3,CV_64F);
camera_matrix.at<double>(0,0) = 190.9784;
camera_matrix.at<double>(1,1) = 190.9733;
camera_matrix.at<double>(0,2) = 254.9317;
camera_matrix.at<double>(1,2) = 256.8974;
camera_matrix.at<double>(2,2) = 1;
std::cout << "Camera matrix: \n" << camera_matrix << "\n" <<std::endl;
cv::Mat distortion_coefficients(4,1,CV_64F);
distortion_coefficients.at<double>(0) = 0.003482;
distortion_coefficients.at<double>(1) = 0.000715;
distortion_coefficients.at<double>(2) = -0.0020532;
distortion_coefficients.at<double>(3) = 0.000203;
std::cout << "Distortion coefficients\n"<< distortion_coefficients<< "\n" << std::endl;
cv::Mat original_point(1,1,CV_64FC2);
original_point.at<cv::Point2d>(0).x= 7.7;
original_point.at<cv::Point2d>(0).y= 9.9;
cv::Mat undistorted, distorted;
cv::fisheye::undistortPoints(original_point, undistorted, camera_matrix,
distortion_coefficients, cv::Mat(), camera_matrix);
cv::fisheye::distortPoints(undistorted, distorted, camera_matrix, distortion_coefficients);
std::cout << "Original point: " << original_point.at<cv::Point2d>(0).x << " " << original_point.at<cv::Point2d>(0).y << std::endl;
std::cout << "Undistorted point: " << undistorted.at<cv::Point2d>(0).x << " " << undistorted.at<cv::Point2d>(0).y << std::endl;
std::cout << "Distorted point: " << distorted.at<cv::Point2d>(0).x << " " << distorted.at<cv::Point2d>(0).y << std::endl;
The result is:
Camera matrix:
[190.9784, 0, 254.9317;
0, 190.9733, 256.8974;
0, 0, 1]
Distortion coefficients
[0.003482;
0.000715;
-0.0020532;
0.000203]
Original point: 7.7 9.9
Undistorted point: 8905.69 8899.45
Distorted point: 464.919 466.732
A point near the top-left corner ends up near the bottom-right corner.
Is this a bug, or am I misunderstanding something?
cv::fisheye::undistortImage works fine on the dataset images: curved lines are mapped back to straight lines.
What am I missing?
Posted on 2021-05-07 22:29:28
You are missing two things.
Note that cv::fisheye::distortPoints assumes the camera matrix of the undistorted input points is the identity. This means that if you want to feed points produced by undistortPoints() (which were projected with a camera matrix P) back into it, you must first multiply them by P⁻¹.
To normalize the points, you first homogenize them (convert them to 3D), then multiply each homogeneous point by the inverse camera matrix to obtain the normalized point.
You can use fisheye::estimateNewCameraMatrixForUndistortRectify to compute a new camera matrix that balances the valid pixels between source and destination. However, if you want the undistorted points to match the points in an undistorted image, you need to use this new camera matrix in undistortImage as well.
cv::Mat k = cv::Mat::zeros(3,3,CV_64F);
k.at<double>(0,0) = 190.9784;
k.at<double>(1,1) = 190.9733;
k.at<double>(0,2) = 254.9317;
k.at<double>(1,2) = 256.8974;
k.at<double>(2,2) = 1;
std::cout << "Camera matrix: \n" << k << "\n" <<std::endl;
cv::Mat d(4,1,CV_64F);
d.at<double>(0) = 0.003482;
d.at<double>(1) = 0.000715;
d.at<double>(2) = -0.0020532;
d.at<double>(3) = 0.000203;
std::cout << "Distortion coefficients\n"<< d<< "\n" << std::endl;
cv::Mat points_original(1,4,CV_64FC2);
points_original.at<cv::Point2d>(0).x= 7.7;
points_original.at<cv::Point2d>(0).y= 9.9;
points_original.at<cv::Point2d>(1).x= 30;
points_original.at<cv::Point2d>(1).y= 30;
points_original.at<cv::Point2d>(2).x= 40;
points_original.at<cv::Point2d>(2).y= 40;
points_original.at<cv::Point2d>(3).x= 50;
points_original.at<cv::Point2d>(3).y= 50;
cv::Mat nk;
// float balance = 1;
// cv::fisheye::estimateNewCameraMatrixForUndistortRectify(k, d, cv::Size(512,512), cv::Mat::eye(3,3,CV_64FC1), nk, balance);
nk = k;
std::cout << "New Camera matrix: \n" << nk << "\n" <<std::endl;
cv::Mat points_undistorted, points_redistorted;
cv::fisheye::undistortPoints(points_original,points_undistorted,k,d,cv::Mat(),nk);
// {x,y} -> {x,y,1}
std::vector<cv::Point3d> points_undistorted_homogeneous;
cv::convertPointsToHomogeneous(points_undistorted, points_undistorted_homogeneous);
cv::Mat cam_intr_inv = nk.inv();
for(size_t i = 0; i < points_undistorted_homogeneous.size(); ++i){
cv::Mat p(cv::Size(1,3), CV_64FC1);
p.at<double>(0,0) = points_undistorted_homogeneous[i].x;
p.at<double>(1,0) = points_undistorted_homogeneous[i].y;
p.at<double>(2,0) = points_undistorted_homogeneous[i].z;
cv::Mat q = cam_intr_inv * p;
points_undistorted_homogeneous[i].x = q.at<double>(0,0);
points_undistorted_homogeneous[i].y = q.at<double>(1,0);
points_undistorted_homogeneous[i].z = q.at<double>(2,0);
}
std::vector<cv::Point2d> points_undistorted_normalized;
cv::convertPointsFromHomogeneous(points_undistorted_homogeneous, points_undistorted_normalized);
cv::fisheye::distortPoints(points_undistorted_normalized, points_redistorted, k, d);
for(int i = 0; i < points_original.size().width; ++i){
std::cout << "Original point: " << points_original.at<cv::Point2d>(i) << "\n";
std::cout << "Undistorted point: " << points_undistorted.at<cv::Point2d>(i) << "\n";
std::cout << "Redistorted point: " << points_redistorted.at<cv::Point2d>(i) << "\n\n";
}
The result:
Original point: [7.7, 9.9]
Undistorted point: [8905.69, 8899.45]
Redistorted point: [463.048, 464.816]
Original point: [30, 30]
Undistorted point: [8125.4, 8196.15]
Redistorted point: [461.864, 465.638]
Original point: [40, 40]
Undistorted point: [7775.49, 7846.24]
Redistorted point: [461.725, 465.582]
Original point: [50, 50]
Undistorted point: [-3848.98, -3886.37]
Redistorted point: [50, 50]
As you can see, the points near the corner, up to (40, 40), fail the round trip. You might be able to improve this by calibrating again with more checkerboard corners near the image edges (at different angles).
https://stackoverflow.com/questions/66059554