I am recording live video in my iOS app. On another Stack Overflow page I found that you can use vImage_Buffer to work on my frames.
The problem is that I don't know how to get back to a CVPixelBufferRef from the resulting vImage_Buffer.
Here is the code that was given in the other post:
NSInteger cropX0 = 100,
cropY0 = 100,
cropHeight = 100,
cropWidth = 100,
outWidth = 480,
outHeight = 480;
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
vImage_Buffer inBuff;
inBuff.height = cropHeight;
inBuff.width = cropWidth;
inBuff.rowBytes = bytesPerRow;
size_t startpos = cropY0 * bytesPerRow + 4 * cropX0;
inBuff.data = (unsigned char *)baseAddress + startpos;
unsigned char *outImg = (unsigned char*)malloc(4 * outWidth * outHeight);
vImage_Buffer outBuff = {outImg, outHeight, outWidth, 4 * outWidth};
vImage_Error err = vImageScale_ARGB8888(&inBuff, &outBuff, NULL, 0);
if (err != kvImageNoError) NSLog(@"error %ld", err);

Now I need to convert outBuff into a CVPixelBufferRef.
I assume I need to use vImageBuffer_CopyToCVPixelBuffer, but I am not sure how.
My first attempt fails with an EXC_BAD_ACCESS:

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorSystemDefault, 480, 480, kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
vImage_CGImageFormat format = {
.bitsPerComponent = 8,
.bitsPerPixel = 32,
.bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst, //BGRX8888
.colorSpace = NULL, //sRGB
};
vImageBuffer_CopyToCVPixelBuffer(&outBuff,
&format,
pixelBuffer,
NULL,
NULL,
kvImageNoFlags); // Here is the crash!
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

Any ideas?
Answered on 2018-01-03 13:35:20
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool : YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool : YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
[NSNumber numberWithInt : 480], kCVPixelBufferWidthKey,
[NSNumber numberWithInt : 480], kCVPixelBufferHeightKey,
nil];
CVPixelBufferRef pixbuffer = NULL;
CVReturn status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                               480,
                                               480,
                                               kCVPixelFormatType_32BGRA,
                                               outImg,
                                               4 * 480, // bytesPerRow of the scaled outImg, not of the camera frame
                                               NULL,
                                               NULL,
                                               (__bridge CFDictionaryRef)options,
                                               &pixbuffer);

You should generate a new pixelBuffer like above.
Answered on 2019-11-02 03:40:13
Try AVPlayerLayer, AVCaptureVideoPreviewLayer and/or other CALayer subclasses, and use the layer bounds, frame and position to map your 100x100 pixel area to the 480x480 area. Some vImage notes for your question (different circumstances may apply):

- CVPixelBufferCreateWithBytes will not work with vImageBuffer_CopyToCVPixelBuffer(), because that function needs to copy the vImage_Buffer data into a "clean" or "empty" CVPixelBuffer.

- The inBuff vImage_Buffer just needs to be initialized from the pixel buffer data with vImageBuffer_InitWithCVPixelBuffer(), not manually (unless you know how to initialize the pixel grid using CGContexts etc.).

- vImageScale_ARGB8888 scales the whole CVPixel data to a smaller/larger rectangle. It will not scale a portion/crop area of one buffer into another buffer.

- When you use vImageBuffer_CopyToCVPixelBuffer(), the vImageCVImageFormatRef and vImage_CGImageFormat need to be filled in correctly:

CGColorSpaceRef dstColorSpace = CGColorSpaceCreateWithName(kCGColorSpaceITUR_709);
vImage_CGImageFormat format = {
    .bitsPerComponent = 16,
    .bitsPerPixel = 64,
    .bitmapInfo = (CGBitmapInfo)kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder16Big,
    .colorSpace = dstColorSpace
};
vImageCVImageFormatRef vformat = vImageCVImageFormat_Create(kCVPixelFormatType_4444AYpCbCr16,
                                                            kvImage_ARGBToYpCbCrMatrix_ITU_R_709_2,
                                                            kCVImageBufferChromaLocation_Center,
                                                            format.colorSpace,
                                                            0);
CVPixelBufferRef destBuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                      480, 480,
                                      kCVPixelFormatType_4444AYpCbCr16,
                                      NULL,
                                      &destBuffer);
NSParameterAssert(status == kCVReturnSuccess && destBuffer != NULL);
err = vImageBuffer_CopyToCVPixelBuffer(&sourceBuffer, &format, destBuffer, vformat, 0, kvImagePrintDiagnosticsToConsole);

NOTE: these are settings for 64-bit ProRes with alpha; adjust for 32-bit.
https://stackoverflow.com/questions/28858720