How to get video frames from a CMSampleBuffer in RPScreenRecorder.shared().startCapture?

Stack Overflow user
Asked on 2018-09-18 17:35:16
2 answers · 1.7K views · 0 followers · 2 votes

I am using RPScreenRecorder.shared().startCapture for screen recording and AVAssetWriterInput to encode it into an h264 video file, but it gives me an .mp4 directly. I want the h264 video frames one by one so I can stream them. Is there any way to access the sample buffer data coming from RPScreenRecorder.shared().startCapture? Below is my code; with it I get the whole mp4 file, but I only want the video frames.

Code language: swift
import Foundation
import ReplayKit
import AVKit


class ScreenRecorder
{
    var assetWriter:AVAssetWriter!
    var videoInput:AVAssetWriterInput!

    let viewOverlay = WindowUtil()

    let fileNameTxt = "Test"
    let dir = try? FileManager.default.url(for: .documentDirectory,
                                           in: .userDomainMask, appropriateFor: nil, create: true)
    var sampleFileBuffer : String = ""

    //MARK: Screen Recording
    func startRecording(withFileName fileName: String, recordingHandler:@escaping (Error?)-> Void)
    {
        if #available(iOS 11.0, *)
        {

            let fileURL = URL(fileURLWithPath: ReplayFileUtil.filePath(fileName))
            assetWriter = try! AVAssetWriter(outputURL: fileURL, fileType:
                AVFileType.mp4)
            let videoOutputSettings: Dictionary<String, Any> = [
                AVVideoCodecKey : AVVideoCodecType.h264,
                AVVideoWidthKey : UIScreen.main.bounds.size.width,
                AVVideoHeightKey : UIScreen.main.bounds.size.height
            ];


            videoInput  = AVAssetWriterInput (mediaType: AVMediaType.video, outputSettings: videoOutputSettings)
            videoInput.expectsMediaDataInRealTime = true
            assetWriter.add(videoInput)

            // If the directory was found, we write a file to it and read it back
             let fileURLTxt = dir?.appendingPathComponent(fileNameTxt).appendingPathExtension("txt") 


            RPScreenRecorder.shared().startCapture(handler: { (sample, bufferType, error) in
//print(sample, bufferType, error)

                recordingHandler(error)

                if CMSampleBufferDataIsReady(sample)
                {
                    if self.assetWriter.status == AVAssetWriterStatus.unknown
                    {
                        self.assetWriter.startWriting()
                        self.assetWriter.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sample))
                    }

                    if self.assetWriter.status == AVAssetWriterStatus.failed {
                        print("Error occured, status = \(self.assetWriter.status.rawValue), \(self.assetWriter.error!.localizedDescription) \(String(describing: self.assetWriter.error))")
                        return
                    }

                    if (bufferType == .video)
                    {

                        if self.videoInput.isReadyForMoreMediaData
                        {
                             self.videoInput.append(sample)
                            // self.sampleFileBuffer = self.videoInput as! String
                            // A CMSampleBuffer cannot be cast to String, so write a
                            // textual description of the buffer instead.
                            self.sampleFileBuffer = String(describing: sample)
                            do {
                                try self.sampleFileBuffer.write(to: fileURLTxt!, atomically: true, encoding: .utf8)
                            } catch {
                                print("Failed writing to URL: \(String(describing: fileURLTxt)), Error: " + error.localizedDescription)
                            }


                        }
                    }
                    self.sampleFileBuffer = ""
                }

            }) { (error) in
                recordingHandler(error)

            }
        } else
        {
            // Fallback on earlier versions
        }
    }

    func stopRecording(handler: @escaping (Error?) -> Void)
    {
        if #available(iOS 11.0, *)
        {
            RPScreenRecorder.shared().stopCapture { (error) in
                handler(error)
                self.assetWriter.finishWriting {
                    print(ReplayFileUtil.fetchAllReplays())
                }
            }
        } 
    }


}

2 Answers

Stack Overflow user

Posted on 2018-09-20 07:32:02

In your code, sample is a CMSampleBuffer. Call CMSampleBufferGetImageBuffer() to get a CVImageBuffer. To lock the frame buffer, call CVPixelBufferLockBaseAddress(imageBuffer). In my case the imageBuffer has two planes, Y and UV. Call CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0) to get the address of the Y plane, and call the same function with planeIndex = 1 to get the address of the UV plane.

Once you have a plane's base address, you can read it as a uint8*. Call the CVPixelBufferGetXXX functions to get the width, height, and bytes per row. Don't forget to call CVPixelBufferUnlockBaseAddress when you are done.
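Here is a minimal sketch of that approach, assuming a bi-planar YUV pixel format such as kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange (the function name readPlanes is invented for illustration):

Code language: swift
import CoreMedia
import CoreVideo

// Read the Y and UV planes out of one captured frame.
// Assumes the buffer uses a bi-planar YUV pixel format.
func readPlanes(from sample: CMSampleBuffer) {
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sample) else { return }

    CVPixelBufferLockBaseAddress(imageBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(imageBuffer, .readOnly) }

    // Plane 0 is Y (luma); plane 1 is interleaved UV (chroma).
    for planeIndex in 0..<CVPixelBufferGetPlaneCount(imageBuffer) {
        guard let base = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, planeIndex) else { continue }
        let width = CVPixelBufferGetWidthOfPlane(imageBuffer, planeIndex)
        let height = CVPixelBufferGetHeightOfPlane(imageBuffer, planeIndex)
        let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, planeIndex)

        // The plane is readable as raw uint8 bytes.
        let bytes = base.assumingMemoryBound(to: UInt8.self)
        print("plane \(planeIndex): \(width)x\(height), \(bytesPerRow) bytes per row, first byte \(bytes[0])")
    }
}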

0 votes

Stack Overflow user

Posted on 2020-06-07 08:07:27

It turns out there is a very easy way to do it:

Code language: swift
import CoreGraphics
import CoreMedia
import Foundation
import QuartzCore
import UIKit

// Convert one captured CMSampleBuffer into a UIImage via Core Image.
private func createImage(from sampleBuffer: CMSampleBuffer) -> UIImage? {
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return nil
    }

    let ciImage = CIImage(cvPixelBuffer: imageBuffer)
    let context = CIContext()

    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }

    return UIImage(cgImage: cgImage)
}
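For example, you could call the helper from the capture handler (a hypothetical usage sketch, assuming createImage lives in the same file):

Code language: swift
RPScreenRecorder.shared().startCapture(handler: { (sample, bufferType, error) in
    guard error == nil, bufferType == .video else { return }
    if let frame = createImage(from: sample) {
        // frame is a UIImage for this single video frame;
        // process or stream it however you like.
        print("captured frame of size \(frame.size)")
    }
}) { (error) in
    print("startCapture failed: \(String(describing: error))")
}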

If you want fine-grained control over the process, here is the granular approach, which I found after wasting an afternoon figuring out how to do it manually: https://stackoverflow.com/a/62239338/969967

0 votes
Original content on this page is provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/52383474