I want to use the MediaPipe face detection module to crop face images out of raw images and videos, in order to build a dataset for emotion recognition.
Is there a way to get the bounding box from the FaceDetection solution?
import cv2
import mediapipe as mp

mp_face_detection = mp.solutions.face_detection
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_face_detection.FaceDetection(
        model_selection=0, min_detection_confidence=0.5) as face_detection:
    while cap.isOpened():
        success, image = cap.read()
        if not success:
            print("Ignoring empty camera frame.")
            continue
        image.flags.writeable = False
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        results = face_detection.process(image)
        # Draw the face detection annotations on the image.
        image.flags.writeable = True
        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
        if results.detections:
            for detection in results.detections:
                mp_drawing.draw_detection(image, detection)
                ##
                '''
                #### here i want to grab the bounding box for the detected faces in order to crop the face image
                '''
                ##
        cv2.imshow('MediaPipe Face Detection', cv2.flip(image, 1))
        if cv2.waitKey(5) & 0xFF == 27:
            break
cap.release()

Thanks
Posted on 2021-12-17 07:19:28
To figure out the format, you can take either of these two approaches:
Check the proto file in MediaPipe:
https://github.com/google/mediapipe/blob/master/mediapipe/framework/formats/detection.proto
Look at location_data. It has a format field, which should be either BOUNDING_BOX or RELATIVE_BOUNDING_BOX (in practice the solution returns RELATIVE_BOUNDING_BOX). Alternatively, check out drawing_utils:
Look at the draw_detection method. The line you need is the one that makes the cv2.rectangle call.
Here is a snippet:

from mediapipe.framework.formats.location_data_pb2 import LocationData

results = face_detection.process(image)
# Draw the face detection annotations on the image.
image.flags.writeable = True
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
if results.detections:
    for detection in results.detections:
        mp_drawing.draw_detection(image, detection)
        ##
        '''
        #### here i want to grab the bounding box for the detected faces in order to crop the face image
        '''
        ##
        location_data = detection.location_data
        if location_data.format == LocationData.RELATIVE_BOUNDING_BOX:
            bb = location_data.relative_bounding_box
            bb_box = [
                bb.xmin, bb.ymin,
                bb.width, bb.height,
            ]
            print(f"RBBox: {bb_box}")

https://stackoverflow.com/questions/69810210
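Since the RELATIVE_BOUNDING_BOX values are fractions of the frame size, the actual crop still needs them scaled to pixels. Here is a minimal sketch of that step; the helper name `crop_relative_bbox` is my own, and it takes the four relative values directly so it can be tested without MediaPipe:

```python
import numpy as np

def crop_relative_bbox(image, xmin, ymin, width, height):
    # Relative coordinates are fractions of the frame size; scale them to
    # pixels and clamp to the image bounds before slicing the array.
    h, w = image.shape[:2]
    x1 = max(int(xmin * w), 0)
    y1 = max(int(ymin * h), 0)
    x2 = min(int((xmin + width) * w), w)
    y2 = min(int((ymin + height) * h), h)
    return image[y1:y2, x1:x2]

# In the detection loop above this would be called as, e.g.:
#   face = crop_relative_bbox(image, bb.xmin, bb.ymin, bb.width, bb.height)
#   cv2.imwrite("face.png", face)

# Quick check on a dummy 640x480 frame:
frame = np.zeros((480, 640, 3), dtype=np.uint8)
face = crop_relative_bbox(frame, 0.25, 0.25, 0.5, 0.5)
print(face.shape)  # (240, 320, 3)
```

Clamping matters because MediaPipe can return boxes that extend slightly past the frame edge, which would otherwise produce an empty or mis-sized crop.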