iOS: Set AVCaptureDevice output in GrayScale

Asked by a Stack Overflow user on 2016-06-30 11:39:32
1 answer · 2.7K views · 0 followers · 11 votes

I want to implement a custom camera in my app, so I'm building it with AVCaptureDevice.

Now I only want to show gray output in this custom camera, so I'm trying to achieve it using setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains: and AVCaptureWhiteBalanceGains. For this I'm following AVCamManual: Extending AVCam to Use Manual Capture.

Code (Objective-C):
- (void)setWhiteBalanceGains:(AVCaptureWhiteBalanceGains)gains
{
    NSError *error = nil;

    if ( [videoDevice lockForConfiguration:&error] ) {
        AVCaptureWhiteBalanceGains normalizedGains = [self normalizedGains:gains]; // Conversion can yield out-of-bound values, cap to limits
        [videoDevice setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains:normalizedGains completionHandler:nil];
        [videoDevice unlockForConfiguration];
    }
    else {
        NSLog( @"Could not lock device for configuration: %@", error );
    }
}

But for this I have to pass RGB gain values between 1 and 4, so I wrote this method to clamp them between the minimum and maximum:

Code (Objective-C):
- (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains) gains
{
    AVCaptureWhiteBalanceGains g = gains;

    g.redGain = MAX( 1.0, g.redGain );
    g.greenGain = MAX( 1.0, g.greenGain );
    g.blueGain = MAX( 1.0, g.blueGain );

    g.redGain = MIN( videoDevice.maxWhiteBalanceGain, g.redGain );
    g.greenGain = MIN( videoDevice.maxWhiteBalanceGain, g.greenGain );
    g.blueGain = MIN( videoDevice.maxWhiteBalanceGain, g.blueGain );

    return g;
}

I have also tried to get different effects by passing static RGB gain values:

Code (Objective-C):
- (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains) gains
{
    AVCaptureWhiteBalanceGains g = gains;
    g.redGain = 3;
    g.greenGain = 2;
    g.blueGain = 1;
    return g;
}

Now I want to render grayscale on my custom camera (formula: Pixel = 0.30078125f * R + 0.5859375f * G + 0.11328125f * B). I tried this:

Code (Objective-C):
- (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains) gains
{
    AVCaptureWhiteBalanceGains g = gains;

    g.redGain = g.redGain * 0.30078125;
    g.greenGain = g.greenGain * 0.5859375;
    g.blueGain = g.blueGain * 0.11328125;

    float grayScale = g.redGain + g.greenGain + g.blueGain;

    g.redGain = MAX( 1.0, grayScale );
    g.greenGain = MAX( 1.0, grayScale );
    g.blueGain = MAX( 1.0, grayScale );

    g.redGain = MIN( videoDevice.maxWhiteBalanceGain, g.redGain );
    g.greenGain = MIN( videoDevice.maxWhiteBalanceGain, g.greenGain);
    g.blueGain = MIN( videoDevice.maxWhiteBalanceGain, g.blueGain );

    return g;
}

So how can I pass these values between 1 and 4?

Is there any method or scale by which to compare this?

Any help would be appreciated.


1 Answer

Stack Overflow user
Accepted answer · Posted on 2016-07-19 22:59:41

CoreImage provides a large number of filters for adjusting images on the GPU, and they can be used efficiently with video data, whether from a camera feed or a video file.
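
As a quick illustration of that API, here is a minimal sketch (not from the original answer; the function name is hypothetical) that applies the CIColorMonochrome filter to a still image. The realtime pipeline below does the same work once per camera frame.

Code (Swift):

import UIKit
import CoreImage

// Sketch only: apply the CIColorMonochrome filter to a still image.
// Swift 2 era, matching the answer's example below.
func grayscaleVersionOfImage(input: UIImage) -> UIImage? {
    guard let inputImage = CIImage(image: input) else {
        return nil
    }

    let filter = CIFilter(
        name: "CIColorMonochrome",
        withInputParameters: [
            kCIInputImageKey: inputImage,
            "inputColor": CIColor(red: 1.0, green: 1.0, blue: 1.0),
            "inputIntensity": 1.0
        ]
    )

    guard let outputImage = filter?.outputImage else {
        return nil
    }

    // A default CIContext is fine for one-off images; the realtime
    // pipeline below uses an EAGLContext-backed context instead.
    let context = CIContext(options: nil)
    let cgImage = context.createCGImage(outputImage, fromRect: outputImage.extent)
    return UIImage(CGImage: cgImage)
}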

There is an article on objc.io that shows how to do this. The examples there are in Objective-C, but the explanation should be clear enough to follow.

The basic steps are:

  1. Create an EAGLContext configured to use OpenGL ES 2.
  2. Create a GLKView to display the rendered output, using the EAGLContext.
  3. Create a CIContext, using the same EAGLContext.
  4. Create a CIFilter using the CIColorMonochrome CoreImage filter.
  5. Create an AVCaptureVideoDataOutput with an AVCaptureSession.
  6. In the AVCaptureVideoDataOutputDelegate method, convert the CMSampleBuffer to a CIImage, apply the CIFilter to it, and draw the filtered image to the CIContext.

This pipeline keeps the video pixel buffers on the GPU all the way from camera to display and avoids moving data to the CPU, which preserves realtime performance.

To save the filtered video, implement an AVAssetWriter and append the sample buffers in the same AVCaptureVideoDataOutputDelegate where the filtering is done, as sketched below.
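
Here is a minimal sketch of that writing side. It is not part of the answer's example, so treat it as an assumption-laden outline: FilteredVideoWriter, its properties, and appendFilteredImage are placeholder names; the Swift 2-era AVFoundation calls themselves are real. Call appendFilteredImage from the same delegate callback that draws to screen, then call markAsFinished() on the input and finishWritingWithCompletionHandler on the writer when capture stops.

Code (Swift):

import AVFoundation
import CoreImage

// Sketch: wraps an AVAssetWriter that receives filtered frames.
// All names here are placeholders, not from the original answer.
class FilteredVideoWriter {

    private let assetWriter: AVAssetWriter
    private let writerInput: AVAssetWriterInput
    private let adaptor: AVAssetWriterInputPixelBufferAdaptor
    private var sessionStarted = false

    init(outputURL: NSURL, width: Int, height: Int) throws {
        assetWriter = try AVAssetWriter(URL: outputURL, fileType: AVFileTypeQuickTimeMovie)

        writerInput = AVAssetWriterInput(
            mediaType: AVMediaTypeVideo,
            outputSettings: [
                AVVideoCodecKey: AVVideoCodecH264,
                AVVideoWidthKey: width,
                AVVideoHeightKey: height
            ]
        )
        // Frames arrive from a live capture session.
        writerInput.expectsMediaDataInRealTime = true

        // The adaptor supplies a pixel buffer pool we can render into.
        adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: writerInput,
            sourcePixelBufferAttributes: [
                kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA),
                kCVPixelBufferWidthKey as String: width,
                kCVPixelBufferHeightKey as String: height
            ]
        )

        assetWriter.addInput(writerInput)
    }

    // Render the filtered CIImage into a fresh pixel buffer and append it,
    // stamped with the source frame's presentation time.
    func appendFilteredImage(image: CIImage, context: CIContext, time: CMTime) {
        if !sessionStarted {
            assetWriter.startWriting()
            assetWriter.startSessionAtSourceTime(time)
            sessionStarted = true
        }

        guard writerInput.readyForMoreMediaData else { return }
        guard let pool = adaptor.pixelBufferPool else { return }

        var buffer: CVPixelBuffer? = nil
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &buffer)

        if let buffer = buffer {
            context.render(image, toCVPixelBuffer: buffer)
            adaptor.appendPixelBuffer(buffer, withPresentationTime: time)
        }
    }
}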

Here is the full example in Swift.

Example on GitHub

Code (Swift):
import UIKit
import GLKit
import AVFoundation

private let rotationTransform = CGAffineTransformMakeRotation(CGFloat(-M_PI * 0.5)) // rotate camera frames into portrait

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    private var context: CIContext!
    private var targetRect: CGRect!
    private var session: AVCaptureSession!
    private var filter: CIFilter!

    @IBOutlet var glView: GLKView!

    override func prefersStatusBarHidden() -> Bool {
        return true
    }

    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)

        let whiteColor = CIColor(
            red: 1.0,
            green: 1.0,
            blue: 1.0
        )

        filter = CIFilter(
            name: "CIColorMonochrome",
            withInputParameters: [
                "inputColor" : whiteColor,
                "inputIntensity" : 1.0
            ]
        )

        // GL context

        let glContext = EAGLContext(
            API: .OpenGLES2
        )

        glView.context = glContext
        glView.enableSetNeedsDisplay = false

        context = CIContext(
            EAGLContext: glContext,
            options: [
                kCIContextOutputColorSpace: NSNull(),
                kCIContextWorkingColorSpace: NSNull(),
            ]
        )

        let screenSize = UIScreen.mainScreen().bounds.size
        let screenScale = UIScreen.mainScreen().scale

        targetRect = CGRect(
            x: 0,
            y: 0,
            width: screenSize.width * screenScale,
            height: screenSize.height * screenScale
        )

        // Setup capture session.

        let cameraDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

        let videoInput = try? AVCaptureDeviceInput(
            device: cameraDevice
        )

        let videoOutput = AVCaptureVideoDataOutput()
        videoOutput.setSampleBufferDelegate(self, queue: dispatch_get_main_queue())

        session = AVCaptureSession()
        session.beginConfiguration()
        session.addInput(videoInput)
        session.addOutput(videoOutput)
        session.commitConfiguration()
        session.startRunning()
    }

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {

        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            return
        }

        // Wrap the camera's pixel buffer in a CIImage, disabling color
        // management so no extra conversion happens on the GPU.
        let originalImage = CIImage(
            CVPixelBuffer: pixelBuffer,
            options: [
                kCIImageColorSpace: NSNull()
            ]
        )

        // The camera delivers landscape buffers; rotate them for display.
        let rotatedImage = originalImage.imageByApplyingTransform(rotationTransform)

        // Run the monochrome filter over the frame.
        filter.setValue(rotatedImage, forKey: kCIInputImageKey)

        guard let filteredImage = filter.outputImage else {
            return
        }

        // Draw straight into the GLKView's EAGLContext-backed CIContext.
        context.drawImage(filteredImage, inRect: targetRect, fromRect: filteredImage.extent)

        glView.display()
    }

    func captureOutput(captureOutput: AVCaptureOutput!, didDropSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
        let seconds = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
        print("dropped sample buffer: \(seconds)")
    }
}
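
One note on the design choice: CIColorMonochrome uses its own built-in grayscale conversion. If you specifically want the weights from the question (Pixel = 0.30078125 * R + 0.5859375 * G + 0.11328125 * B), a CIColorMatrix filter can apply them directly. A sketch, same Swift 2 era as the example above; the variable names are hypothetical:

Code (Swift):

import CoreImage

// Sketch: grayscale via CIColorMatrix with the question's exact weights.
// Each output channel receives the same weighted sum of R, G and B.
let weights = CIVector(x: 0.30078125, y: 0.5859375, z: 0.11328125, w: 0)

let grayMatrixFilter = CIFilter(
    name: "CIColorMatrix",
    withInputParameters: [
        "inputRVector": weights,
        "inputGVector": weights,
        "inputBVector": weights,
        // Leave alpha untouched.
        "inputAVector": CIVector(x: 0, y: 0, z: 0, w: 1)
    ]
)

Set its kCIInputImageKey per frame exactly as the example does with the monochrome filter.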
5 votes
Original content from Stack Overflow: https://stackoverflow.com/questions/38122040