I'm using the Python implementation of the thin plate spline transformer and have run into a problem. When I call warpImage, the image is warped correctly, but when I use estimateTransformation with some manually entered points, those points are not mapped correctly: instead, every point ends up mapped to exactly the same location. Any help would be greatly appreciated! My code is attached below:
splines = cv2.createThinPlateSplineShapeTransformer()
temp = splines.estimateTransformation(reference_coordinate_arr, image_marks_coordinates_arr, matches)
warpedimage = splines.warpImage(image)  # the image warps fine
moved_barcodes = splines.applyTransformation(image_bar_coordinates_arr)[0]  # these coordinates all map to the same location

Posted on 2022-01-06 10:00:17
Thanks a lot for asking this question; I had been looking for spline warping but never found thinPlateTransformation in OpenCV. For me it works in C++. I gave it some sample points, which are probably not collinear, afaik.
#include <opencv2/shape/shape_transformer.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>
#include <vector>
int main()
{
cv::Mat img = cv::imread("C:/data/StackOverflow/Lenna.png");
auto tps = cv::createThinPlateSplineShapeTransformer();
std::vector<cv::Point2f> sourcePoints, targetPoints;
sourcePoints.push_back(cv::Point2f(0, 0));
targetPoints.push_back(cv::Point2f(0, 0));
sourcePoints.push_back(cv::Point2f(0.5*img.cols, 0));
targetPoints.push_back(cv::Point2f(0.5*img.cols, 0.25*img.rows));
sourcePoints.push_back(cv::Point2f(img.cols, 0));
targetPoints.push_back(cv::Point2f(img.cols, 0));
sourcePoints.push_back(cv::Point2f(img.cols, 0.5*img.rows));
targetPoints.push_back(cv::Point2f(0.75*img.cols, 0.5*img.rows));
sourcePoints.push_back(cv::Point2f(img.cols, img.rows));
targetPoints.push_back(cv::Point2f(img.cols, img.rows));
sourcePoints.push_back(cv::Point2f(0.5*img.cols, img.rows));
targetPoints.push_back(cv::Point2f(0.5*img.cols, 0.75*img.rows));
sourcePoints.push_back(cv::Point2f(0, img.rows));
targetPoints.push_back(cv::Point2f(0, img.rows));
sourcePoints.push_back(cv::Point2f(0, 0.5*img.rows/2)); // y accidentally scaled twice here (0.5 and /2), giving 0.25*img.rows
targetPoints.push_back(cv::Point2f(0.25*img.cols, 0.5*img.rows));
std::vector<cv::DMatch> matches;
for (unsigned int i = 0; i < sourcePoints.size(); i++)
matches.push_back(cv::DMatch(i, i, 0));
tps->estimateTransformation(targetPoints, sourcePoints, matches); // this gives the right warping from source to target, but the wrong point transformation
//tps->estimateTransformation(sourcePoints, targetPoints, matches); // this gives wrong warping but right point transformation from source to target
std::vector<cv::Point2f> transPoints;
tps->applyTransformation(sourcePoints, transPoints);
std::cout << "sourcePoints = " << std::endl << " " << sourcePoints << std::endl << std::endl;
std::cout << "targetPoints = " << std::endl << " " << targetPoints << std::endl << std::endl;
std::cout << "transPos = " << std::endl << " " << transPoints << std::endl << std::endl;
cv::Mat dst;
tps->warpImage(img, dst);
cv::imshow("dst", dst);
cv::waitKey(0);
}
This gives the following result:

sourcePoints =
[0, 0;
128, 0;
256, 0;
256, 256;
256, 512;
128, 512;
0, 512;
0, 128]
targetPoints =
[0, 0;
128, 128;
256, 0;
192, 256;
256, 512;
128, 384;
0, 512;
64, 256]
transPos =
[0.0001950264, -5.7220459e-05;
128, -27.710777;
255.99991, -0.00023269653;
337.67929, 279.34125;
255.99979, 512;
127.99988, 570.5177;
-0.00029873848, 511.99994;
-45.164845, -0.20605469]
So it is moving the points, just not in the right direction.
When source and destination are switched in the estimateTransformation call, it gives the correct point values (but the image comes out with the wrong warp):
tps->estimateTransformation(sourcePoints, targetPoints, matches);
sourcePoints =
[0, 0;
128, 0;
256, 0;
256, 256;
256, 512;
128, 512;
0, 512;
0, 128]
targetPoints =
[0, 0;
128, 128;
256, 0;
192, 256;
256, 512;
128, 384;
0, 512;
64, 256]
transPos =
[-4.7683716e-05, -0.00067138672;
128.00008, 127.99954;
256.00012, 0;
192.00012, 256.00049;
255.99988, 512.00049;
127.9995, 383.99976;
-0.00016021729, 512.00049;
64.000031, 255.99982]
Input: [image]
Output: [image]
I just don't understand why the source and target points have to be switched in the estimateTransformation call. At first sight it behaves the opposite of what I would expect...
The source code is based on: https://github.com/opencv/opencv/issues/7084
Posted on 2022-01-25 17:59:40
Noticed my own mistake. The array was set to np.int32 when it should have been np.float32. Changing it to np.float32 fixed all the issues. Thank you all for the feedback!
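In code, the fix amounts to a dtype (and shape) conversion before the points are handed to the transformer; a minimal sketch, with placeholder coordinates and the array name taken from the question:

```python
import numpy as np

# Coordinates built as int32 make applyTransformation collapse every
# point onto the same location; they must be float32, shaped (1, N, 2)
# for the Python bindings.
image_bar_coordinates_arr = np.array([[12, 34], [56, 78]], dtype=np.int32)  # wrong dtype
image_bar_coordinates_arr = image_bar_coordinates_arr.astype(np.float32).reshape(1, -1, 2)  # fixed
```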
https://stackoverflow.com/questions/70601059