I have a computer-vision application that acquires grayscale images from a sensor and processes them. On iOS, the image acquisition is written in Obj-C, and the image processing is done in C++ with OpenCV. Since I only need the luminance data, I acquire the images in the YUV (or YpCbCr) 4:2:0 bi-planar full-range format and assign the buffer's data to an OpenCV Mat object (see the acquisition code below). This worked fine until the brand-new iOS 13 came out... For some reason, on iOS 13 the images I get are misaligned, showing diagonal stripes. Looking at the images I receive, I suspect this is caused by a change in the ordering of the buffer's Y, Cb and Cr components, or by a change in the buffer's stride. Does anyone know whether iOS 13 introduced such a change, and how I could update my code to avoid it, preferably in a backward-compatible way?
Here is my image acquisition code:
// capture config
- (void)initialize {
    AVCaptureDevice *frontCameraDevice;
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if (device.position == AVCaptureDevicePositionFront) {
            frontCameraDevice = device;
        }
    }
    if (frontCameraDevice == nil) {
        NSLog(@"Front camera device not found");
        return;
    }
    _session = [[AVCaptureSession alloc] init];
    _session.sessionPreset = AVCaptureSessionPreset640x480;
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:frontCameraDevice error:&error];
    if (error != nil) {
        NSLog(@"Error getting front camera device input: %@", error);
    }
    if ([_session canAddInput:input]) {
        [_session addInput:input];
    } else {
        NSLog(@"Could not add front camera device input to session");
    }
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    // This is the default, but making it explicit
    videoOutput.alwaysDiscardsLateVideoFrames = YES;
    if ([videoOutput.availableVideoCVPixelFormatTypes containsObject:
            [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]]) {
        OSType format = kCVPixelFormatType_420YpCbCr8BiPlanarFullRange;
        videoOutput.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithUnsignedInt:format]
                                                                forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    } else {
        NSLog(@"YUV format not available");
    }
    [videoOutput setSampleBufferDelegate:self queue:dispatch_queue_create("extrapage.camera.capture.sample.buffer.delegate", DISPATCH_QUEUE_SERIAL)];
    if ([_session canAddOutput:videoOutput]) {
        [_session addOutput:videoOutput];
    } else {
        NSLog(@"Could not add video output to session");
    }
    AVCaptureConnection *captureConnection = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
    captureConnection.videoOrientation = AVCaptureVideoOrientationPortrait;
}
// acquisition code
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (_listener != nil) {
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);
        NSAssert(format == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, @"Only YUV is supported");
        // The first plane / channel (at index 0) is the grayscale plane
        // See more information about the YUV format
        // http://en.wikipedia.org/wiki/YUV
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);
        void *baseaddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
        CGFloat width = CVPixelBufferGetWidth(pixelBuffer);
        CGFloat height = CVPixelBufferGetHeight(pixelBuffer);
        cv::Mat frame(height, width, CV_8UC1, baseaddress, 0);
        [_listener onNewFrame:frame];
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    }
}

Posted on 2019-10-09 21:14:23
I found the solution to this problem. It was a row-stride issue: apparently, in iOS 13 the row stride of the YpCbCr 4:2:0 8-bit bi-planar buffer changed, perhaps so that it is always a power of two. So in some cases the row stride is no longer the same as the width, which is what happened for me. The fix is easy: just get the row stride from the buffer's plane info and pass it to the constructor of the OpenCV Mat, as follows.
void *baseaddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
size_t height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
cv::Mat frame(height, width, CV_8UC1, baseaddress, bytesPerRow);

Note that I also changed the way I get the width and height, using the dimensions of the plane instead of those of the buffer. For the Y plane they should always be the same, so I am not sure it makes any difference.
Also beware: after the Xcode update that added support for the iOS 13 SDK, I had to uninstall my app from the test device, because otherwise Xcode kept running the old version instead of the newly compiled one.
Posted on 2019-10-09 21:03:34
This is not an answer, but we had a similar problem. I tried capturing at different resolutions, and only one of them (2592x1936) did not work; all the others were fine. For example, changing the resolution to 1440x1920 might be a workaround for your problem.
https://stackoverflow.com/questions/58171534