
AVFoundation - detect a face and crop the facial region?

Stack Overflow user
Asked on 2014-06-06 13:30:35
1 answer · 2.6K views · 0 followers · score 3

As the title says, I want to detect a face and then crop the facial region. This is what I have so far:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {

    for (AVMetadataObject *face in metadataObjects) {
        if ([face.type isEqualToString:AVMetadataObjectTypeFace]) {

            // A face was reported by the metadata output; capture a still image
            AVCaptureConnection *stillConnection = [_stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
            stillConnection.videoOrientation = [self videoOrientationFromCurrentDeviceOrientation];
            [_stillImageOutput captureStillImageAsynchronouslyFromConnection:stillConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
                if (error) {
                    NSLog(@"There was a problem");
                    return;
                }

                NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                UIImage *stillImage = [UIImage imageWithData:jpegData];

                // Run Core Image face detection on the captured still
                CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:[CIContext contextWithOptions:nil] options:nil];
                CIImage *ciimage = [CIImage imageWithData:jpegData];

                NSArray *features = [faceDetector featuresInImage:ciimage];
                self.captureImageView.image = stillImage;

                for (CIFeature *feature in features) {
                    if ([feature isKindOfClass:[CIFaceFeature class]]) {
                        CIFaceFeature *faceFeature = (CIFaceFeature *)feature;

                        // Crop the face region out of the still image
                        CGImageRef imageRef = CGImageCreateWithImageInRect([stillImage CGImage], faceFeature.bounds);
                        self.detectedFaceImageView.image = [UIImage imageWithCGImage:imageRef];
                        CGImageRelease(imageRef);
                    }
                }
                //[_session stopRunning];
            }];
        }
    }
}

The code partially works: it detects the face, but it does not crop the face correctly. It always crops the wrong region, sometimes cutting out something else entirely. I have been searching Stack Overflow for an answer, trying this and that, but with no results.
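A likely cause of the wrong crop region in code like this (not confirmed in the thread, but a common pitfall): `CIDetector` reports `faceFeature.bounds` in Core Image coordinates, whose origin is at the bottom-left, while `CGImageCreateWithImageInRect` expects a rect whose origin is at the top-left. Only the y component needs converting. A minimal sketch of that flip, written in plain C with a stand-in rect struct (real code would use CoreGraphics' `CGRect` and the image's pixel height):

```c
/* Stand-in for CGRect so the sketch is self-contained. */
typedef struct { double x, y, width, height; } Rect;

/* Convert a rect from Core Image coordinates (origin bottom-left)
 * to CGImage/UIKit coordinates (origin top-left). x, width and
 * height are unchanged; only y is mirrored about the image height. */
static Rect flip_rect(Rect r, double imageHeight) {
    Rect out = r;
    out.y = imageHeight - (r.y + r.height);
    return out;
}
```

Applying such a conversion before calling `CGImageCreateWithImageInRect` is one thing worth checking when the cropped region lands in the wrong place.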


1 Answer

Stack Overflow user

Accepted answer

Posted on 2014-06-24 14:25:38

Here is the answer:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    // when do we start face detection
    if (!_canStartDetection) return;

    CIImage *ciimage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
    NSArray *features = [_faceDetector featuresInImage:ciimage options:nil];

    // find face feature
    for (CIFeature *feature in features) {

        // if not face feature ignore
        if (![feature isKindOfClass:[CIFaceFeature class]]) continue;

        // face detected
        _canStartDetection = NO;
        CIFaceFeature *faceFeature = (CIFaceFeature *)feature;

        // crop detected face
        CIVector *cropRect = [CIVector vectorWithCGRect:faceFeature.bounds];
        CIFilter *cropFilter = [CIFilter filterWithName:@"CICrop"];
        [cropFilter setValue:ciimage forKey:@"inputImage"];
        [cropFilter setValue:cropRect forKey:@"inputRectangle"];
        CIImage *croppedImage = [cropFilter valueForKey:@"outputImage"];
        // use the cropped image, not the full frame
        UIImage *stillImage = [UIImage imageWithCIImage:croppedImage];
    }
}
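One detail worth knowing about `CICrop`: it does not re-origin the output. The output image's extent is the intersection of the input extent with `inputRectangle`, still expressed in the original coordinate space, which matters when the result is later rendered or converted. A sketch of that intersection in plain C (stand-in rect type, width/height clamped to zero when the rects do not overlap):

```c
/* Stand-in for CGRect; w/h are width and height. */
typedef struct { double x, y, w, h; } Rect;

static double max_d(double a, double b) { return a > b ? a : b; }
static double min_d(double a, double b) { return a < b ? a : b; }

/* Rect intersection: the overlapping region, or a zero-sized
 * rect when the inputs are disjoint. */
static Rect intersect(Rect a, Rect b) {
    double x0 = max_d(a.x, b.x), y0 = max_d(a.y, b.y);
    double x1 = min_d(a.x + a.w, b.x + b.w);
    double y1 = min_d(a.y + a.h, b.y + b.h);
    Rect r = { x0, y0, max_d(0, x1 - x0), max_d(0, y1 - y0) };
    return r;
}
```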

Note that this time I am using AVCaptureVideoDataOutput. Here is the setup code:

// set output for face frames
AVCaptureVideoDataOutput *output2 = [[AVCaptureVideoDataOutput alloc] init];
[_session addOutput:output2];
output2.videoSettings = @{(NSString*)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
output2.alwaysDiscardsLateVideoFrames = YES;
dispatch_queue_t queue = dispatch_queue_create("com.myapp.faceDetectionQueueSerial", DISPATCH_QUEUE_SERIAL);
[output2 setSampleBufferDelegate:self queue:queue];
Score 4
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/24083110