
Save high quality images, do live processing - what is the best approach?

Stack Overflow user
Asked on 2013-05-20 19:21:46
2 answers · 1.1K views · 0 followers · Score: 2

I'm still learning AVFoundation, so I'm not sure how best to handle a problem where I need to capture high-quality still images but process a much lower-quality preview video stream.

I have an app that needs to take high-quality pictures (AVCaptureSessionPresetPhoto), but process the preview video stream using OpenCV, for which a much lower resolution is acceptable. Simply using the basic OpenCV video camera class is no good: setting defaultAVCaptureSessionPreset to AVCaptureSessionPresetPhoto results in the full-resolution frames being passed to processImage, which is very slow indeed.

How can I have a high-quality connection to the device that I can use for capturing the still image, and a low-quality connection that can be processed and displayed? A description of how I would need to set up the session/connections would be very helpful. Is there an open-source example of an app like this?
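(For context, here is a minimal sketch, not from the original question or answers, of one way to wire this up: a single AVCaptureSession at AVCaptureSessionPresetPhoto carrying both an AVCaptureStillImageOutput for the full-resolution capture and an AVCaptureVideoDataOutput for the frames to process. The queue label and the assumption that self is the sample-buffer delegate are illustrative.)

#import <AVFoundation/AVFoundation.h>

// Sketch: one session, two outputs. Assumes 'self' conforms to
// AVCaptureVideoDataOutputSampleBufferDelegate.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetPhoto;

NSError *error = nil;
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (input && [session canAddInput:input]) [session addInput:input];

// High-quality path: full-resolution stills, captured on demand
AVCaptureStillImageOutput *stillOutput = [[AVCaptureStillImageOutput alloc] init];
stillOutput.outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG };
if ([session canAddOutput:stillOutput]) [session addOutput:stillOutput];

// Processing path: BGRA frames delivered to the delegate callback
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.alwaysDiscardsLateVideoFrames = YES;
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[videoOutput setSampleBufferDelegate:self
                               queue:dispatch_queue_create("video.processing", DISPATCH_QUEUE_SERIAL)];
if ([session canAddOutput:videoOutput]) [session addOutput:videoOutput];

[session startRunning];

Whether the video data output actually receives preview-sized rather than full-resolution frames at this preset depends on the device and iOS version; the second answer below reports roughly 1000x700 preview frames on an iPhone 6.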


2 Answers

Stack Overflow user

Answered on 2013-05-20 19:53:08

I did something similar: I grab the pixels in the delegate method, create a CGImageRef from them, then dispatch that to a normal-priority queue, where it gets modified. Since AVFoundation must be using a CADisplayLink for its callback method, it runs at the highest priority. In my particular case I was not grabbing all the pixels, so it worked on an iPhone 4 at 30fps. Depending on the devices you want to run on, you can trade off the number of pixels, fps, and so on.

Another idea is to grab a power-of-two subset of the pixels, for instance every 4th pixel in each row and every 4th row. Again, I did something similar in my apps at 20-30fps. You can then operate further on this smaller image in the dispatched blocks; a sketch follows below.
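(A sketch of that subsampling idea, assuming a 32BGRA pixel buffer that has already been locked with CVPixelBufferLockBaseAddress; the helper name and the caller-provided dest buffer are illustrative, not the answerer's code.)

#import <CoreVideo/CoreVideo.h>
#include <string.h>

// Copy every 4th pixel of every 4th row out of a locked 32BGRA buffer.
// 'dest' must hold at least (width/4 + 1) * (height/4 + 1) * 4 bytes.
static void CopyQuarterImage(CVImageBufferRef imageBuffer, uint8_t *dest)
{
    uint8_t *src = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width  = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    for (size_t y = 0; y < height; y += 4) {            // every 4th row
        const uint8_t *row = src + y * bytesPerRow;
        for (size_t x = 0; x < width; x += 4) {         // every 4th pixel
            memcpy(dest, row + x * 4, 4);               // one BGRA pixel
            dest += 4;
        }
    }
}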

If this seems daunting, offer a bounty for working code.

Code:

// Image is oriented with bottle neck to the left and the bottle bottom on the right
- (void)captureOutput:(AVCaptureVideoDataOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
#if 1   
    AVCaptureDevice *camera = [(AVCaptureDeviceInput *)[captureSession.inputs lastObject] device];
    if(camera.adjustingWhiteBalance || camera.adjustingExposure) NSLog(@"GOTCHA: %d %d", camera.adjustingWhiteBalance, camera.adjustingExposure);
    printf("foo\n");
#endif
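
    // Only look at frames in the saveOne (bottle edge set-up) or saveAll (swath capture) states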

    if(saveState != saveOne && saveState != saveAll) return;


    @autoreleasepool {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
        //NSLog(@"PE: value=%lld timeScale=%d flags=%x", prStamp.value, prStamp.timescale, prStamp.flags);

        /*Lock the image buffer*/
        CVPixelBufferLockBaseAddress(imageBuffer,0); 

        NSRange captureRange;
        if(saveState == saveOne) {
#if 0   // B G R A MODE ! debug: dump the pixel format and the first pixel, then quit
            NSLog(@"PIXEL_TYPE: 0x%lx", CVPixelBufferGetPixelFormatType(imageBuffer));
            uint8_t *newPtr = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
            NSLog(@"ONE VAL %x %x %x %x", newPtr[0], newPtr[1], newPtr[2], newPtr[3]);
            exit(0);
#endif
            [edgeFinder setupImageBuffer:imageBuffer];

            BOOL success = [edgeFinder delineate:1];

            if(!success) {
                dispatch_async(dispatch_get_main_queue(), ^{ edgeFinder = nil; [delegate error]; });
                saveState = saveNone;
            } else {
                bottleRange = edgeFinder.sides;
                xRange.location = edgeFinder.shoulder;
                xRange.length = edgeFinder.bottom - xRange.location;

                NSLog(@"bottleRange 1: %@ neck=%d bottom=%d", NSStringFromRange(bottleRange), edgeFinder.shoulder, edgeFinder.bottom );
                //searchRows = [edgeFinder expandRange:bottleRange];

                rowsPerSwath = lrintf((bottleRange.length*NUM_DEGREES_TO_GRAB)*(float)M_PI/360.0f);
NSLog(@"rowsPerSwath = %d", rowsPerSwath);
                saveState = saveIdling;

                captureRange = NSMakeRange(0, [WLIPBase numRows]);
                dispatch_async(dispatch_get_main_queue(), ^
                    {
                        [delegate focusDone];
                        edgeFinder = nil;
                        captureOutput.alwaysDiscardsLateVideoFrames = YES;
                    });
            }
        } else {
            NSInteger rows = rowsPerSwath;
            NSInteger newOffset = bottleRange.length - rows;
            if(newOffset & 1) {
                --newOffset;
                ++rows;
            }
            captureRange = NSMakeRange(bottleRange.location + newOffset/2, rows);
        }
        //NSLog(@"captureRange=%u %u", captureRange.location, captureRange.length);

        /*Get information about the image*/
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); 
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
        size_t width = CVPixelBufferGetWidth(imageBuffer); 

        // Note Apple sample code cheats big time - the phone is big endian so this reverses the "apparent" order of bytes
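        // 'colorSpace' is an ivar, assumed created elsewhere (e.g. with CGColorSpaceCreateDeviceRGB())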
        CGContextRef newContext = CGBitmapContextCreate(NULL, width, captureRange.length, 8, bytesPerRow, colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little); // Video in ARGB format

assert(newContext);

        uint8_t *newPtr = (uint8_t *)CGBitmapContextGetData(newContext);
        size_t offset   = captureRange.location * bytesPerRow;

        memcpy(newPtr, baseAddress + offset, captureRange.length * bytesPerRow);

        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

        OSAtomicIncrement32(&totalImages);
        int32_t curDepth = OSAtomicIncrement32(&queueDepth);
        if(curDepth > maxDepth) maxDepth = curDepth;

#define kImageContext   @"kImageContext"
#define kState          @"kState"
#define kPresTime       @"kPresTime"

        CMTime prStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);      // when it was taken?
        //CMTime deStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer);          // now?
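        // Box the state, bitmap context, and timestamp for the worker block;
        // ownership of the context passes to the block, which releases it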

        NSDictionary *dict = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSValue valueWithBytes:&saveState objCType:@encode(saveImages)], kState,
            [NSValue valueWithNonretainedObject:(__bridge id)newContext], kImageContext,
            [NSValue valueWithBytes:&prStamp objCType:@encode(CMTime)], kPresTime,
            nil ];
        dispatch_async(imageQueue, ^
            {
                // could be on any thread now
                OSAtomicDecrement32(&queueDepth);

                if(!isCancelled) {
                    saveImages state; [(NSValue *)[dict objectForKey:kState] getValue:&state];
                    CGContextRef context; [(NSValue *)[dict objectForKey:kImageContext] getValue:&context];
                    CMTime stamp; [(NSValue *)[dict objectForKey:kPresTime] getValue:&stamp];

                    CGImageRef newImageRef = CGBitmapContextCreateImage(context); 
                    CGContextRelease(context);
                    UIImageOrientation orient = state == saveOne ? UIImageOrientationLeft : UIImageOrientationUp;
                    UIImage *image = [UIImage imageWithCGImage:newImageRef scale:1.0 orientation:orient]; // imageWithCGImage:  UIImageOrientationUp  UIImageOrientationLeft
                    CGImageRelease(newImageRef);
                    NSData *data = UIImagePNGRepresentation(image);

                    // NSLog(@"STATE:[%d]: value=%lld timeScale=%d flags=%x", state, stamp.value, stamp.timescale, stamp.flags);

                    {
                        NSString *name = [NSString stringWithFormat:@"%d.png", num];
                        NSString *path = [[wlAppDelegate snippetsDirectory] stringByAppendingPathComponent:name];
                        BOOL ret = [data writeToFile:path atomically:NO];
//NSLog(@"WROTE %d err=%d w/time %f path:%@", num, ret, (double)stamp.value/(double)stamp.timescale, path);
                        if(!ret) {
                            ++errors;
                        } else {
                            dispatch_async(dispatch_get_main_queue(), ^
                                {
                                    if(num) [delegate progress:(CGFloat)num/(CGFloat)(MORE_THAN_ONE_REV * SNAPS_PER_SEC) file:path];
                                } );
                        }
                        ++num;
                    }
                } else NSLog(@"CANCELLED");

            } );
    }
}
Score: 1

Stack Overflow user

Answered on 2015-07-17 22:53:46

With AVCaptureSessionPresetPhoto, the session uses a small video preview (about 1000x700 on an iPhone 6) and high-resolution photos (about 3000x2000).

So I use a modified 'CvPhotoCamera' class to process the small preview and take full-size pictures. I posted that code here: https://stackoverflow.com/a/31478505/1994445
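(For orientation, a minimal sketch of plain CvPhotoCamera usage from OpenCV's cap_ios.h, assuming the OpenCV iOS framework is linked; the wrapper class name is illustrative, and the linked answer's actual subclass additionally exposes the small preview frames for processing.)

#import <UIKit/UIKit.h>
#import <opencv2/highgui/cap_ios.h>

// Illustrative wrapper around the stock CvPhotoCamera
@interface PhotoGrabber : NSObject <CvPhotoCameraDelegate>
@property (nonatomic, strong) CvPhotoCamera *camera;
@end

@implementation PhotoGrabber

- (void)startInView:(UIView *)previewView
{
    self.camera = [[CvPhotoCamera alloc] initWithParentView:previewView];
    self.camera.delegate = self;
    // Small live preview on screen, full-size photo on -takePicture
    self.camera.defaultAVCaptureSessionPreset = AVCaptureSessionPresetPhoto;
    [self.camera start];
}

- (void)snap
{
    [self.camera takePicture];
}

#pragma mark - CvPhotoCameraDelegate

// Delivered with the full-resolution (~3000x2000) photo after -takePicture
- (void)photoCamera:(CvPhotoCamera *)photoCamera capturedImage:(UIImage *)image
{
    // save or hand the high-resolution image to OpenCV here
}

- (void)photoCameraCancel:(CvPhotoCamera *)photoCamera
{
}

@end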

Score: 0
Original link:

https://stackoverflow.com/questions/16648394
