I am trying to estimate the head pose from a single image, mostly following this guide: https://towardsdatascience.com/real-time-head-pose-estimation-in-python-e52db1bc606a
Face detection works well: if I plot the image together with the detected landmarks, they align nicely.
I estimate the camera matrix from the image and assume no lens distortion:
size = image.shape
focal_length = size[1]
center = (size[1] / 2, size[0] / 2)
camera_matrix = np.array([[focal_length, 0, center[0]],
                          [0, focal_length, center[1]],
                          [0, 0, 1]], dtype="double")
dist_coeffs = np.zeros((4, 1))  # assuming no lens distortion
I then try to obtain the head pose by matching points extracted from the image to points of a 3D model using solvePnP:
# 3D model points to which the points extracted from the image are matched:
model_points = np.array([
    (0.0, 0.0, 0.0),          # Nose tip
    (0.0, -330.0, -65.0),     # Chin
    (-225.0, 170.0, -135.0),  # Left eye corner
    (225.0, 170.0, -135.0),   # Right eye corner
    (-150.0, -150.0, -125.0), # Left mouth corner
    (150.0, -150.0, -125.0)   # Right mouth corner
])
image_points = np.array([
    shape[30],  # Nose tip
    shape[8],   # Chin
    shape[36],  # Left eye left corner
    shape[45],  # Right eye right corner
    shape[48],  # Left mouth corner
    shape[54]   # Right mouth corner
], dtype="double")
success, rotation_vec, translation_vec = \
    cv2.solvePnP(model_points, image_points, camera_matrix, dist_coeffs)
Finally, I get the Euler angles from the rotation:
rotation_mat, _ = cv2.Rodrigues(rotation_vec)
pose_mat = cv2.hconcat((rotation_mat, translation_vec))
_, _, _, _, _, _, angles = cv2.decomposeProjectionMatrix(pose_mat)
The azimuth is now what I expect: negative when I look left, zero in the middle, and positive when I look right.
The elevation, however, is strange: when I look straight ahead it has a roughly constant magnitude of about 170, but its sign flips randomly from one image to the next. When I look up the sign is positive, when I look down it is negative, and the value decreases the further down I look.
Can anyone explain this output to me?
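To illustrate what I mean by the sign flip: Euler angles are not unique. Assuming decomposeProjectionMatrix reports X-Y-Z (pitch, yaw, roll) angles, a pitch near +170° and a pitch near -10° (with yaw and roll shifted accordingly) can describe the same rotation. A NumPy sketch of this ambiguity (my own illustration, not OpenCV code):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def euler_xyz_to_mat(pitch, yaw, roll):
    """R = Rx(pitch) @ Ry(yaw) @ Rz(roll), angles in degrees."""
    p, y, r = np.radians([pitch, yaw, roll])
    return rot_x(p) @ rot_y(y) @ rot_z(r)

# Two different Euler triples describing the SAME physical rotation:
# (a, b, c) and (a - 180, 180 - b, c - 180) always produce the same matrix.
R1 = euler_xyz_to_mat(170.0, 5.0, 2.0)
R2 = euler_xyz_to_mat(170.0 - 180.0, 180.0 - 5.0, 2.0 - 180.0)  # (-10, 175, -178)
print(np.allclose(R1, R2))  # True
```

So a reported elevation of about ±170° while looking straight ahead is equivalent to a small pitch of about ∓10°, and tiny numerical differences between images can push the decomposition onto either branch.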
Posted on 2020-11-06 00:35:29
OK, it looks like I found a solution: the model points (which I had found in several blog posts) appear to be wrong. The code works with the following combination of model and image points (I don't know why; it was trial and error):
model_points = np.float32([[6.825897, 6.760612, 4.402142],
                           [1.330353, 7.122144, 6.903745],
                           [-1.330353, 7.122144, 6.903745],
                           [-6.825897, 6.760612, 4.402142],
                           [5.311432, 5.485328, 3.987654],
                           [1.789930, 5.393625, 4.413414],
                           [-1.789930, 5.393625, 4.413414],
                           [-5.311432, 5.485328, 3.987654],
                           [2.005628, 1.409845, 6.165652],
                           [-2.005628, 1.409845, 6.165652],
                           [2.774015, -2.080775, 5.048531],
                           [-2.774015, -2.080775, 5.048531],
                           [0.000000, -3.116408, 6.097667],
                           [0.000000, -7.415691, 4.070434]])
image_points = np.float32([shape[17], shape[21], shape[22], shape[26],
                           shape[36], shape[39], shape[42], shape[45],
                           shape[31], shape[35], shape[48], shape[54],
                           shape[57], shape[8]])
https://stackoverflow.com/questions/64679950
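As a sanity check on any such combination of model and image points, one can reproject the 3D model points with the recovered pose and compare them to the detected landmarks; the pixel error should be small. A minimal NumPy-only sketch of the pinhole projection (equivalent to cv2.projectPoints with zero distortion; the pose below is a made-up example, not a result from the question):

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> rotation matrix via Rodrigues' formula."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, dtype=float).ravel() / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(points_3d, rvec, tvec, camera_matrix):
    """Pinhole projection u ~ K (R X + t), assuming no lens distortion."""
    cam = points_3d @ rodrigues(rvec).T + np.asarray(tvec, dtype=float).ravel()
    uvw = cam @ camera_matrix.T
    return uvw[:, :2] / uvw[:, 2:3]

# Made-up pose: camera looking straight at the model from 1000 units away.
K = np.array([[640.0, 0.0, 320.0],
              [0.0, 640.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = project(np.array([[0.0, 0.0, 0.0]]), np.zeros(3), [0.0, 0.0, 1000.0], K)
print(pts)  # the origin projects to the principal point (320, 240)
```

Calling project(model_points, rotation_vec, translation_vec, camera_matrix) and comparing the result with image_points gives a reprojection error in pixels; if it is large, the pose or the model points are off.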