I'm using the CIFaceFeature class reference to do face detection, and I'm very confused about Core Graphics coordinates versus regular UIKit coordinates. Here is my code:
UIImage *mainImage = [UIImage imageNamed:@"facedetectionpic.jpg"];
CIImage *image = [[CIImage alloc] initWithImage:mainImage];
NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];
NSArray *features = [detector featuresInImage:image];
CGRect faceRect;
for (CIFaceFeature *feature in features)
{
    faceRect = [feature bounds];
}
Pretty standard. Now, according to the official documentation, it says:
bounds — The rectangle that holds the discovered feature. (read-only)
Discussion: The rectangle is in the coordinate system of the image.
When I output faceRect directly, I get: faceRect {{136, 427}, {46, 46}}. When I apply a CGAffineTransform to flip it the right way up, I get incorrect negative coordinates. The image I'm working with is displayed in a UIImageView.
So which coordinate system are these coordinates in? The image's? The image view's? Core Graphics coordinates? Normal coordinates?
Posted on 2013-08-09 10:46:27
I finally figured it out. As the documentation states, the rectangle drawn by CIFaceFeature is in the image's coordinate system. That means the rectangle has the coordinates of the original image. If the view's "scale to fit" option is selected, the image is shrunk down to fit the UIImageView, so you need to convert the old image coordinates into the new view coordinates.
This nice piece of code, adapted from here, does the job for you:
- (CGPoint)convertPointFromImage:(CGPoint)imagePoint {
    CGPoint viewPoint = imagePoint;
    CGSize imageSize = self.setBody.image.size;
    CGSize viewSize = self.setBody.bounds.size;
    CGFloat ratioX = viewSize.width / imageSize.width;
    CGFloat ratioY = viewSize.height / imageSize.height;
    UIViewContentMode contentMode = self.setBody.contentMode;
    if (contentMode == UIViewContentModeScaleAspectFit ||
        contentMode == UIViewContentModeScaleAspectFill) {
        CGFloat scale;
        if (contentMode == UIViewContentModeScaleAspectFit) {
            // Aspect fit: scale by the smaller ratio so the whole image fits.
            scale = MIN(ratioX, ratioY);
        } else /* UIViewContentModeScaleAspectFill */ {
            // Aspect fill: scale by the larger ratio so the view is covered.
            scale = MAX(ratioX, ratioY);
        }
        viewPoint.x *= scale;
        viewPoint.y *= scale;
        // Account for the centering offset that aspect fit/fill applies.
        viewPoint.x += (viewSize.width - imageSize.width * scale) / 2.0f;
        viewPoint.y += (viewSize.height - imageSize.height * scale) / 2.0f;
    }
    return viewPoint;
}
https://stackoverflow.com/questions/18104244