AVCaptureSession and AVCaptureMovieFileOutput frame timestamps

Stack Overflow user

Asked 2012-12-04 07:09:38
2 answers · 1.4K views · 0 followers · 11 votes

I am recording a movie with AVCaptureSession and AVCaptureMovieFileOutput. I am also logging accelerometer data and trying to align it with the video.

I am looking for a way to get the time at which the video file recording started. I am currently doing the following:

    currentDate = [NSDate date];
    [output startRecordingToOutputFileURL:fileUrl recordingDelegate:self];

However, according to my tests, the video recording actually starts about 0.12 seconds before startRecordingToOutputFileURL is called. I assume this is because the various video buffers are already full of data, and that data gets added to the file.

Is there a way to get the actual NSDate of the first frame of the video?

2 Answers

Stack Overflow user

Answered 2022-03-03 09:37:21

I ran into the same problem and finally found the answer. I will include all the code below, but the missing piece I was looking for was:

    self.captureSession.masterClock!.time

The masterClock of the captureSession is the clock on which each buffer's relative time (its presentationTimeStamp) is based.
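As a side note for the original question (getting an NSDate for a frame): since every presentationTimeStamp is expressed on this same clock, you can convert a buffer time to wall-clock time by measuring its offset from the clock's current reading. The following is a sketch, not part of the original answer; it assumes `captureSession` is a running AVCaptureSession, and note that on iOS 15.4+ `synchronizationClock` replaces the deprecated `masterClock`:

```swift
import AVFoundation

// Sketch: convert a capture-clock CMTime (e.g. a buffer's
// presentationTimeStamp) to an approximate wall-clock Date.
func wallClockDate(for bufferTime: CMTime, in captureSession: AVCaptureSession) -> Date {
    let clock = captureSession.masterClock! // iOS 15.4+: prefer synchronizationClock
    let now = CMClockGetTime(clock)
    // How long ago, on the capture clock, the buffer was captured.
    let delta = CMTimeSubtract(now, bufferTime)
    return Date().addingTimeInterval(-CMTimeGetSeconds(delta))
}
```

Calling this with the first frame's presentationTimeStamp gives the NSDate the question asks for, to within the accuracy of the two clock reads.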

Full code and explanation

The first thing you want to do is replace the AVCaptureMovieFileOutput with an AVCaptureVideoDataOutput and an AVCaptureAudioDataOutput. So make sure your class implements AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate. They share the same callback, so add it to your class (I will get to the implementation later):

    let videoDataOutput = AVCaptureVideoDataOutput()
    let audioDataOutput = AVCaptureAudioDataOutput()

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // I will get to this
    }

When adding the outputs to the capture session, my code looks like this (you can change the videoOrientation and other things if you need to):

            if captureSession.canAddInput(cameraInput)
                && captureSession.canAddInput(micInput)
//                && captureSession.canAddOutput(self.movieFileOutput)
                && captureSession.canAddOutput(self.videoDataOutput)
                && captureSession.canAddOutput(self.audioDataOutput)
            {
                captureSession.beginConfiguration()
                captureSession.addInput(cameraInput)
                captureSession.addInput(micInput)
//                self.captureSession.addOutput(self.movieFileOutput)
                
                let videoAudioDataOutputQueue = DispatchQueue(label: "com.myapp.queue.video-audio-data-output") //Choose any label you want

                self.videoDataOutput.alwaysDiscardsLateVideoFrames = false
                self.videoDataOutput.setSampleBufferDelegate(self, queue: videoAudioDataOutputQueue)
                self.captureSession.addOutput(self.videoDataOutput)

                self.audioDataOutput.setSampleBufferDelegate(self, queue: videoAudioDataOutputQueue)
                self.captureSession.addOutput(self.audioDataOutput)

                if let connection = self.videoDataOutput.connection(with: .video) {
                    if connection.isVideoStabilizationSupported {
                        connection.preferredVideoStabilizationMode = .auto
                    }
                    if connection.isVideoOrientationSupported {
                        connection.videoOrientation = .portrait
                    }
                }
                
                self.captureSession.commitConfiguration()
                
                DispatchQueue.global(qos: .userInitiated).async {
                    self.captureSession.startRunning()
                }
            }

To write the video the way AVCaptureMovieFileOutput would, you can use an AVAssetWriter. So add the following to your class:

    var videoWriter: AVAssetWriter?
    var videoWriterInput: AVAssetWriterInput?
    var audioWriterInput: AVAssetWriterInput?

    private func setupWriter(url: URL) {
        self.videoWriter = try! AVAssetWriter(outputURL: url, fileType: AVFileType.mov)
        
        self.videoWriterInput = AVAssetWriterInput(mediaType: .video, outputSettings: self.videoDataOutput.recommendedVideoSettingsForAssetWriter(writingTo: AVFileType.mov))
        self.videoWriterInput!.expectsMediaDataInRealTime = true
        self.videoWriter!.add(self.videoWriterInput!)
        
        self.audioWriterInput = AVAssetWriterInput(mediaType: .audio, outputSettings: self.audioDataOutput.recommendedAudioSettingsForAssetWriter(writingTo: AVFileType.mov))
        self.audioWriterInput!.expectsMediaDataInRealTime = true
        self.videoWriter!.add(self.audioWriterInput!)
        
        self.videoWriter!.startWriting()
    }

Every time you want to record, you first need to set up the writer. The startWriting function does not actually start writing to the file; it prepares the writer so it can start writing very soon.

In the next block we add the code to start or stop recording. Note that I still need to fix stopRecording: it actually finishes the recording slightly too early, because the buffers always lag behind. But maybe that does not matter to you.

    var isRecording = false
    var recordFromTime: CMTime?
    var sessionAtSourceTime: CMTime?

    func startRecording(url: URL) {
        guard !self.isRecording else { return }
        self.isRecording = true
        self.sessionAtSourceTime = nil
        self.recordFromTime = self.captureSession.masterClock!.time //This is very important, because based on this time we will start recording appropriately
        self.setupWriter(url: url)
        //You can let a delegate or something know recording has started now
    }
    
    func stopRecording() {
        guard self.isRecording else { return }
        self.isRecording = false
        self.videoWriter?.finishWriting { [weak self] in
            self?.sessionAtSourceTime = nil
            guard let url = self?.videoWriter?.outputURL else { return }
            
            //Notify finished recording and pass url if needed
        }
    }

Finally, the implementation of the delegate callback we mentioned at the beginning of this answer:

    private func canWrite() -> Bool {
        return self.isRecording && self.videoWriter != nil && self.videoWriter!.status == .writing
    }
    
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard CMSampleBufferDataIsReady(sampleBuffer), self.canWrite() else { return }
        
        //sessionAtSourceTime is the first buffer we will write to the file
        if self.sessionAtSourceTime == nil {
            //Make sure we start by capturing the videoDataOutput (if we start with the audio the file gets corrupted)
            guard output == self.videoDataOutput else { return }
            //Make sure we don't start recording until the buffer reaches the correct time (buffer is always behind, this will fix the difference in time)
            guard sampleBuffer.presentationTimeStamp >= self.recordFromTime! else { return }
            self.sessionAtSourceTime = sampleBuffer.presentationTimeStamp
            self.videoWriter!.startSession(atSourceTime: sampleBuffer.presentationTimeStamp)
        }
        
        if output == self.videoDataOutput {
            if self.videoWriterInput!.isReadyForMoreMediaData {
                self.videoWriterInput!.append(sampleBuffer)
            }
        } else if output == self.audioDataOutput {
            if self.audioWriterInput!.isReadyForMoreMediaData {
                self.audioWriterInput!.append(sampleBuffer)
            }
        }
    }

So the most important piece for fixing the time difference is reading self.captureSession.masterClock!.time at the moment you start recording. We then watch the relative time of the incoming buffers until one reaches the time at which you started recording. If you also want an exact end time, just add a variable recordUntilTime and check it in the didOutput sampleBuffer method.
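That recordUntilTime suggestion can be sketched as follows. This is an illustration, not part of the original answer: `recordUntilTime` is a hypothetical property, and the sketch assumes the same `captureSession`, `isRecording`, and `stopRecording()` members as the code above:

```swift
import AVFoundation

// Hypothetical property, following the answer's suggestion: the clock time
// at which the user asked recording to stop.
var recordUntilTime: CMTime?

// Called when the user taps "stop": remember the stop time on the capture
// clock, but keep appending buffers until they catch up to it.
func requestStopRecording() {
    guard isRecording else { return }
    recordUntilTime = captureSession.masterClock!.time
}

// Add at the top of captureOutput(_:didOutput:from:), before appending:
// if let until = recordUntilTime, sampleBuffer.presentationTimeStamp >= until {
//     recordUntilTime = nil
//     stopRecording() // every frame up to the stop request has now been written
//     return
// }
```

This mirrors the start-side check on recordFromTime, so the file covers exactly the interval between the two clock reads.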

1 vote

Stack Overflow user

Answered 2015-12-09 01:40:49

If I understand your question correctly, you want to know the timestamp of the first recorded frame. You could try:

    // A zero-initialized CMTime has no flags set, so it reads as invalid
    // until the first buffer arrives.
    static CMTime captureStartTime;

    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

        if (CMTIME_IS_INVALID(captureStartTime)) {
            captureStartTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        }
        // do the other things you want
    }
0 votes
Original content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/13693444
