I'm trying to capture a photo with depth data using the .builtInDualCamera. I've been following Apple's sample, AVCamFilter (which works for me). From what I understand from the WWDC presentations, all you need to do is configure an AVCapturePhotoOutput to enable depth-data capture, and enable the same on the AVCapturePhotoSettings.
When I run my app, I get a generic error (see below).
If I remove the settings that capture depth data, no error occurs and a photo is captured. Oddly, if I leave the depth-data settings enabled but set a breakpoint in photoOutput(_:AVCapturePhotoOutput, didCapturePhotoFor: AVCaptureResolvedPhotoSettings), the app stops at the breakpoint when I take a picture, and when I immediately continue, the photo plus depth data is captured with no error. iOS 15.2.
I don't know what I'm missing.
Here is the code where I set up the inputs and outputs on my AVCaptureSession:
guard let videoDevice = discoverDevice(from: [.builtInDualCamera]) else {
    fatalError("No dual camera.")
}
guard let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice) else {
    fatalError("Can't create video input.")
}

self.session.beginConfiguration()
self.session.sessionPreset = .photo

guard self.session.canAddInput(videoDeviceInput) else {
    fatalError("Can't add video input.")
}
self.session.addInput(videoDeviceInput)

guard self.session.canAddOutput(photoOutput) else {
    fatalError("Can't add photo output.")
}
self.session.addOutput(photoOutput)
photoOutput.isHighResolutionCaptureEnabled = true

if photoOutput.isDepthDataDeliverySupported {
    photoOutput.isDepthDataDeliveryEnabled = true
} else {
    fatalError("DepthData is not supported by this camera configuration")
}

self.session.commitConfiguration()
self.videoDeviceInput = videoDeviceInput

Here is the code I call when I want to take a picture (taken from the AVCamFilter sample):
sessionQueue.async {
    let photoSettings = AVCapturePhotoSettings(format: [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)])
    if self.photoOutput.isDepthDataDeliveryEnabled {
        photoSettings.isDepthDataDeliveryEnabled = true
        photoSettings.embedsDepthDataInPhoto = false
    }
    self.photoOutput.capturePhoto(with: photoSettings, delegate: self)
}

This is the error I get in the AVCapturePhotoCaptureDelegate method photoOutput(_:AVCapturePhotoOutput, didFinishProcessingPhoto: AVCapturePhoto, error: Error?):
Error capturing photo: Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-16800), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x2829f9650 {Error Domain=NSOSStatusErrorDomain Code=-16800 "(null)"}}

Posted on 2021-12-29 21:13:37
I think I figured it out. It has to do with the frame rate of the input device and the rate at which depth data can be captured. While configuring the inputs and outputs, I call this function before committing the session changes:
private func capFrameRate(videoDevice: AVCaptureDevice) {
    if self.photoOutput.isDepthDataDeliverySupported {
        // Cap the video framerate at the max depth framerate.
        if let frameDuration = videoDevice.activeDepthDataFormat?.videoSupportedFrameRateRanges.first?.minFrameDuration {
            do {
                try videoDevice.lockForConfiguration()
                videoDevice.activeVideoMinFrameDuration = frameDuration
                videoDevice.unlockForConfiguration()
            } catch {
                print("Could not lock device for configuration: \(error)")
            }
        }
    }
}

Once I did this, I was able to get both the image's pixel buffer and its depthData.
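For reference, here is a minimal sketch of the delegate callback that reads both pieces of data once capture succeeds. The delegate class name is an assumption (the original post does not show its delegate type); the delegate method itself and the AVCapturePhoto accessors are standard AVFoundation API.

```swift
import AVFoundation

// Hypothetical delegate type; substitute whatever object you pass to capturePhoto(with:delegate:).
class PhotoCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        if let error = error {
            print("Error capturing photo: \(error)")
            return
        }
        // With isDepthDataDeliveryEnabled and the 32BGRA format above,
        // both the color buffer and the depth map should be populated.
        if let pixelBuffer = photo.pixelBuffer {
            print("Pixel buffer: \(CVPixelBufferGetWidth(pixelBuffer))x\(CVPixelBufferGetHeight(pixelBuffer))")
        }
        if let depthData = photo.depthData {
            let depthMap = depthData.depthDataMap
            print("Depth map: \(CVPixelBufferGetWidth(depthMap))x\(CVPixelBufferGetHeight(depthMap))")
        }
    }
}
```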
My guess is that hitting a breakpoint in the middle of the capture session happened to synchronize the frame rates. Whether or not that's exactly what was going on, setting activeVideoMinFrameDuration to match the depth-data rate is the key.
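To see the mismatch this answer describes, you can print the frame-rate ranges of the active video and depth formats before capping. This is a diagnostic sketch assuming the videoDevice from the question's setup code; the properties used are standard AVCaptureDevice API.

```swift
import AVFoundation

// Assumes `videoDevice` is the configured .builtInDualCamera AVCaptureDevice.
func logFrameRates(for videoDevice: AVCaptureDevice) {
    if let videoRange = videoDevice.activeFormat.videoSupportedFrameRateRanges.first {
        print("Video supports \(videoRange.minFrameRate)-\(videoRange.maxFrameRate) fps")
    }
    if let depthRange = videoDevice.activeDepthDataFormat?.videoSupportedFrameRateRanges.first {
        // Depth typically maxes out well below the video rate (e.g. 24 fps vs 30 fps),
        // which is why uncapped video capture can fail with -11800/-16800.
        print("Depth supports \(depthRange.minFrameRate)-\(depthRange.maxFrameRate) fps")
    }
}
```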
https://stackoverflow.com/questions/70509717