I have an AVCaptureSession that includes an AVCaptureScreenInput and an AVCaptureDeviceInput. Both are connected as data output delegates, and I use an AVAssetWriter to write out a single MP4 file.

When writing to a single MP4 file, everything works fine. When I instead switch between multiple AVAssetWriters so that a new file is saved every 5 seconds, a slight audio drop appears whenever the files are concatenated back together with FFmpeg.

Example of the joined video (note the audio drop every 5 seconds):

After a lot of investigation, I believe this happens because the audio and video segments are split at different points / do not start at the same time.

I now believe my algorithm should work, but I don't know how to split an audio CMSampleBuffer. CMSampleBufferCopySampleBufferForRange looks useful here, but I'm not sure how to split at a given time (ideally ending up with one buffer holding all the samples before that time and another holding all the samples after it).
func getBufferUpToTime(sample: CMSampleBuffer, to: CMTime) -> CMSampleBuffer {
    let numSamples = CMSampleBufferGetNumSamples(sample)
    var sout: CMSampleBuffer?
    let endSampleIndex = // how do I get this?
    CMSampleBufferCopySampleBufferForRange(nil, sample, CFRangeMake(0, numSamples), &sout)
    return sout!
}

Posted on 2018-06-23 22:57:31
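As an editorial aside, not part of the original post: assuming the buffer is constant-rate audio whose format description exposes an AudioStreamBasicDescription, the missing sample index can be derived from the buffer's presentation timestamp and sample rate. A rough, untested sketch (`splitAudioBuffer` is a hypothetical helper name):

```swift
import CoreMedia

// Sketch: split an audio CMSampleBuffer at `time`, assuming constant-rate
// audio whose format description carries an AudioStreamBasicDescription.
func splitAudioBuffer(_ sample: CMSampleBuffer, at time: CMTime) -> (before: CMSampleBuffer?, after: CMSampleBuffer?) {
    let numSamples = CMSampleBufferGetNumSamples(sample)
    let start = CMSampleBufferGetPresentationTimeStamp(sample)

    guard let format = CMSampleBufferGetFormatDescription(sample),
          let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(format)?.pointee else {
        return (sample, nil)
    }

    // Whole audio frames between the buffer's start and the cut time.
    let elapsed = CMTimeGetSeconds(CMTimeSubtract(time, start))
    let endSampleIndex = Int(elapsed * asbd.mSampleRate)
    guard endSampleIndex > 0, endSampleIndex < numSamples else { return (sample, nil) }

    var before: CMSampleBuffer?
    var after: CMSampleBuffer?
    // Copy [0, endSampleIndex) into `before` and the remainder into `after`.
    CMSampleBufferCopySampleBufferForRange(nil, sample, CFRangeMake(0, endSampleIndex), &before)
    CMSampleBufferCopySampleBufferForRange(nil, sample, CFRangeMake(endSampleIndex, numSamples - endSampleIndex), &after)
    return (before, after)
}
```

Whether this produces gapless output still depends on the writers receiving both halves with correct timestamps; the accepted answer below sidesteps the problem entirely.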
If you're using AVCaptureScreenInput, then you're not on iOS, right? So I was going to write up how to split sample buffers, but then I remembered that on macOS, AVCaptureFileOutput.startRecording (as opposed to AVAssetWriter) carries this tantalizing comment:

On Mac OS X, if this method is called within the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, the first samples written to the new file are guaranteed to be those contained in the sample buffer passed to that method.

Not dropping samples sounds promising, so if you can live with mov instead of mp4 files, you should be able to get gapless audio for free by using an AVCaptureMovieFileOutput, implementing fileOutputShouldProvideSampleAccurateRecordingStart, and calling startRecording from within didOutputSampleBuffer, like so:
import Cocoa
import AVFoundation

@NSApplicationMain
class AppDelegate: NSObject, NSApplicationDelegate {
    @IBOutlet weak var window: NSWindow!

    let session = AVCaptureSession()
    let movieFileOutput = AVCaptureMovieFileOutput()
    var movieChunkNumber = 0
    var chunkDuration = kCMTimeZero // TODO: synchronize access? probably fine.

    func startRecordingChunkFile() {
        let filename = String(format: "capture-%.2i.mov", movieChunkNumber)
        let url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!.appendingPathComponent(filename)
        movieFileOutput.startRecording(to: url, recordingDelegate: self)
        movieChunkNumber += 1
    }

    func applicationDidFinishLaunching(_ aNotification: Notification) {
        let displayInput = AVCaptureScreenInput(displayID: CGMainDisplayID())
        let micInput = try! AVCaptureDeviceInput(device: AVCaptureDevice.default(for: .audio)!)

        session.addInput(displayInput)
        session.addInput(micInput)

        movieFileOutput.delegate = self
        session.addOutput(movieFileOutput)

        session.startRunning()

        self.startRecordingChunkFile()
    }
}

extension AppDelegate: AVCaptureFileOutputRecordingDelegate {
    func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
        // NSLog("error \(error)")
    }
}

extension AppDelegate: AVCaptureFileOutputDelegate {
    func fileOutputShouldProvideSampleAccurateRecordingStart(_ output: AVCaptureFileOutput) -> Bool {
        return true
    }

    func fileOutput(_ output: AVCaptureFileOutput, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        if let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) {
            if CMFormatDescriptionGetMediaType(formatDescription) == kCMMediaType_Audio {
                let duration = CMSampleBufferGetDuration(sampleBuffer)
                chunkDuration = CMTimeAdd(chunkDuration, duration)
                if CMTimeGetSeconds(chunkDuration) >= 5 {
                    startRecordingChunkFile()
                    chunkDuration = kCMTimeZero
                }
            }
        }
    }
}

https://stackoverflow.com/questions/50961125