
How to update a CALayer with a CVPixelBuffer/IOSurface?

Stack Overflow user
Asked 2020-11-17 03:24:13
4 answers · 615 views · score 8

I have an IOSurface-backed CVPixelBuffer that is being updated from an external source at 30fps. I want to render a preview of the image data in an NSView -- what's the best way to do that?

I can directly set the .contents of a CALayer on the view, but that only updates the first time my view updates (or if, say, I resize the view). I've been poring over the docs, but I can't find the correct invocation of needsDisplay on the layer or view to let the view infrastructure know to refresh itself, especially when updates are coming from outside the view.

Ideally I'd just bind the IOSurface to my layer and any changes I make to it would be propagated, but I'm not sure if that's possible.

class VideoPreviewController: NSViewController, VideoFeedConsumer {
    let customLayer : CALayer = CALayer()
    
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do view setup here.
        print("Loaded our video preview")
        
        view.layer?.addSublayer(customLayer)
        customLayer.frame = view.frame
        
        // register our view with the browser service
        VideoFeedBrowser.instance.registerConsumer(self)
    }
    
    override func viewWillDisappear() {
        // deregister our view from the video feed
        VideoFeedBrowser.instance.deregisterConsumer(self)

        super.viewWillDisappear()
    }
    
    // This callback gets called at 30fps whenever the pixelbuffer is updated
    @objc func updateFrame(pixelBuffer: CVPixelBuffer) {

        guard let surface = CVPixelBufferGetIOSurface(pixelBuffer)?.takeUnretainedValue() else {
            print("pixelbuffer isn't IOsurface backed! noooooo!")
            return;
        }

        // Try and tell the view to redraw itself with new contents?
        // These methods don't work
        //self.view.setNeedsDisplay(self.view.visibleRect)
        //self.customLayer.setNeedsDisplay()
        self.customLayer.contents = surface

    }
    
}

Here's a scaled-down version of what I'm trying, based on an NSView rather than an NSViewController, which also doesn't update (or scale) correctly:

class VideoPreviewThumbnail: NSView, VideoFeedConsumer {
   

    required init?(coder decoder: NSCoder) {
        super.init(coder: decoder)
        self.wantsLayer = true
        
        // register our view with the browser service
        VideoFeedBrowser.instance.registerConsumer(self)
    }
    
    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        self.wantsLayer = true
        
        // register our view with the browser service
        VideoFeedBrowser.instance.registerConsumer(self)
    }
    
    deinit{
        VideoFeedBrowser.instance.deregisterConsumer(self)
    }
    
    override func updateLayer() {
        // Do I need to put something here?
        print("update layer")
    }
    
    @objc
    func updateFrame(pixelBuffer: CVPixelBuffer) {
        guard let surface = CVPixelBufferGetIOSurface(pixelBuffer)?.takeUnretainedValue() else {
            print("pixelbuffer isn't IOsurface backed! noooooo!")
            return;
        }
        self.layer?.contents = surface
        self.layer?.transform = CATransform3DMakeScale(
            self.frame.width / CGFloat(CVPixelBufferGetWidth(pixelBuffer)),
            self.frame.height / CGFloat(CVPixelBufferGetHeight(pixelBuffer)),
            CGFloat(1))
    }

}

What am I missing?


4 Answers

Stack Overflow user

Answered 2020-12-04 19:43:08

Maybe I'm wrong, but I think you are updating your NSView from a background thread. (I assume the callback to updateFrame is happening on a background thread.)

If I'm right, then when you want to update the NSView, convert your pixelBuffer into whatever you want (an NSImage?) and then dispatch it onto the main thread.

Pseudo code (I don't use CVPixelBuffer often, so I'm not sure this is the right way to convert to an NSImage):

import AppKit
import CoreImage

let ciImage = CIImage(cvImageBuffer: pixelBuffer)
let context = CIContext(options: nil)

let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)

// createCGImage(_:from:) returns an optional, so unwrap it
guard let cgImage = context.createCGImage(ciImage, from: CGRect(x: 0, y: 0, width: width, height: height)) else { return }

let nsImage = NSImage(cgImage: cgImage, size: CGSize(width: width, height: height))

DispatchQueue.main.async {
    // assign the NSImage to your NSView here
}

Another catch: I did some testing, and it seems you can't assign an IOSurface directly to a CALayer's contents.

I tried this:

    let textureImageWidth = 1024
    let textureImageHeight = 1024

    let macPixelFormatString = "ARGB"
    var macPixelFormat: UInt32 = 0
    for c in macPixelFormatString.utf8.reversed() {
       macPixelFormat *= 256
       macPixelFormat += UInt32(c)
    }

    let ioSurface = IOSurfaceCreate([kIOSurfaceWidth: textureImageWidth,
                    kIOSurfaceHeight: textureImageHeight,
                    kIOSurfaceBytesPerElement: 4,
                    kIOSurfaceBytesPerRow: textureImageWidth * 4,
                    kIOSurfaceAllocSize: textureImageWidth * textureImageHeight * 4,
                    kIOSurfacePixelFormat: macPixelFormat] as CFDictionary)!

    IOSurfaceLock(ioSurface, IOSurfaceLockOptions.readOnly, nil)
    let test = CIImage(ioSurface: ioSurface)
    IOSurfaceUnlock(ioSurface, IOSurfaceLockOptions.readOnly, nil)
    
    v1?.layer?.contents = ioSurface

where v1 is my view. No effect.

Even using a CIImage has no effect (only the last few lines change):

    IOSurfaceLock(ioSurface, IOSurfaceLockOptions.readOnly, nil)
    let test = CIImage(ioSurface: ioSurface)
    IOSurfaceUnlock(ioSurface, IOSurfaceLockOptions.readOnly, nil)
    
    v1?.layer?.contents = test

It works if I create a CGImage:

    IOSurfaceLock(ioSurface, IOSurfaceLockOptions.readOnly, nil)
    let test = CIImage(ioSurface: ioSurface)
    IOSurfaceUnlock(ioSurface, IOSurfaceLockOptions.readOnly, nil)
    
    let context = CIContext.init()
    let img = context.createCGImage(test, from: test.extent)
    v1?.layer?.contents = img
Score 1

Stack Overflow user

Answered 2020-11-30 12:27:22

If there are IBActions that update it, create an observed variable using a didSet block and change its value whenever the IBAction is triggered. Also, remember to put the code you want to run on each update inside that block.

I'd suggest making the variable an Int with a default value of 0, and adding 1 to it on every update.
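A minimal, self-contained sketch of that suggestion (the class and closure names here are hypothetical, not from the question; the closure stands in for "the code to run on update"):

```swift
// Sketch: an observed Int counter; the code in didSet runs on every change.
final class FrameCounter {
    var onUpdate: (() -> Void)?   // stand-in for the view-update code
    var frameCount: Int = 0 {
        didSet { onUpdate?() }    // fires each time frameCount changes
    }
}

let counter = FrameCounter()
var updates = 0
counter.onUpdate = { updates += 1 }
counter.frameCount += 1   // e.g. called from an IBAction
counter.frameCount += 1
print(updates)  // prints "2"
```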

As for the part of the question asking how to display image data on an NSView: you can turn the NSView into an NSImageView, which will get the job done.
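A rough sketch of that NSImageView route, combined with the main-thread point from the other answer (macOS/AppKit; `VideoPreviewImageController` and `frameImageView` are hypothetical names, and this is a sketch under those assumptions rather than a tested implementation):

```swift
import AppKit
import CoreImage

final class VideoPreviewImageController: NSViewController {
    @IBOutlet var frameImageView: NSImageView!   // hypothetical outlet

    // Called at 30fps from the feed; converts the buffer to an NSImage
    // and hands the result to AppKit on the main thread.
    func updateFrame(pixelBuffer: CVPixelBuffer) {
        let ciImage = CIImage(cvImageBuffer: pixelBuffer)
        guard let cgImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else { return }
        let nsImage = NSImage(cgImage: cgImage, size: ciImage.extent.size)
        DispatchQueue.main.async { [weak self] in
            self?.frameImageView.image = nsImage
        }
    }
}
```

In a real app you'd cache the CIContext rather than creating one per frame; contexts are expensive to build.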

Score 0

Stack Overflow user

Answered 2020-12-05 23:30:46

You need to convert the pixel buffer to a CGImage and assign that to the layer, so the main view's layer contents can be changed. Try this code:

@objc
func updateFrame(pixelBuffer: CVPixelBuffer) {
    guard CVPixelBufferGetIOSurface(pixelBuffer) != nil else {
        print("pixelbuffer isn't IOSurface backed! noooooo!")
        return
    }

    // Lock the buffer while reading its base address
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    guard let baseAddr = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    let colorSpace = CGColorSpaceCreateDeviceRGB()

    // Wrap the pixel data in a bitmap context and snapshot it as a CGImage
    guard let cgContext = CGContext(data: baseAddr,
                                    width: width,
                                    height: height,
                                    bitsPerComponent: 8,
                                    bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                    space: colorSpace,
                                    bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue),
          let cgImage = cgContext.makeImage() else { return }

    // Assign the CGImage to the view's backing layer
    self.layer?.contents = cgImage
}
Score 0
Original question on Stack Overflow: https://stackoverflow.com/questions/64864430
