I'm using glReadPixels to grab screenshots of my OpenGL scene and then turning them into a video with AVAssetWriter on iOS 4. My problem is that I need to pass an alpha channel to the video, which only accepts kCVPixelFormatType_32ARGB, while glReadPixels retrieves RGBA. So essentially I need a way to convert my RGBA into ARGB, in other words put the alpha bytes first.
int depth = 4;
unsigned char buffer[width * height * depth];
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, width * height * depth, NULL);
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef image = CGImageCreate(width, height, 8, 32, width * depth, colorSpace, bitmapInfo, ref, NULL, true, kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace); // CGImageCreate retains the color space
UIWindow* parentWindow = [self window];
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey, [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options, &pxbuffer);
NSParameterAssert(status == kCVReturnSuccess);
NSParameterAssert(pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8, depth*width, rgbColorSpace, kCGImageAlphaPremultipliedFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, parentWindow.transform);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CGImageRelease(image);      // release the intermediate CGImage
CGDataProviderRelease(ref); // and its data provider
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer; // chuck pixel buffer into AVAssetWriter
I figured I'd post the whole code, since it might help someone else.
Cheers
Posted on 2010-10-17 01:43:31
Note: I'm assuming 8 bits per channel. If that's not the case, adjust accordingly.
To move the alpha bits to the front, you need to perform a rotation. This is usually most easily expressed with bit shifting.
In this case, you want to shift the RGB bits right by 8 and the A bits left by 24, then put the two values together with a bitwise OR, so it becomes argb = (rgba >> 8) | (rgba << 24).
Posted on 2012-04-27 00:00:43
Better yet, don't encode your video using ARGB; send your AVAssetWriter BGRA frames instead. As I describe in this answer, doing so lets you encode 640x480 video at 30 FPS on an iPhone 4, and up to 20 FPS for 720p video. An iPhone 4S can go all the way to 1080p video at 30 FPS using this technique.
Also, you'll want to make sure you use a pixel buffer pool instead of recreating a pixel buffer each time. Copying the code from that answer, you configure the AVAssetWriter like this:
NSError *error = nil;
assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeAppleM4V error:&error];
if (error != nil)
{
NSLog(@"Error: %@", error);
}
NSMutableDictionary * outputSettings = [[NSMutableDictionary alloc] init];
[outputSettings setObject: AVVideoCodecH264 forKey: AVVideoCodecKey];
[outputSettings setObject: [NSNumber numberWithInt: videoSize.width] forKey: AVVideoWidthKey];
[outputSettings setObject: [NSNumber numberWithInt: videoSize.height] forKey: AVVideoHeightKey];
assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
assetWriterVideoInput.expectsMediaDataInRealTime = YES;
// You need to use BGRA for the video in order to get realtime encoding. I use a color-swizzling shader to line up glReadPixels' normal RGBA output with the movie input's BGRA.
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
[NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
[NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
nil];
assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
[assetWriter addInput:assetWriterVideoInput];
then use this code to grab each rendered frame using glReadPixels():
CVPixelBufferRef pixel_buffer = NULL;
CVReturn status = CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &pixel_buffer);
if ((pixel_buffer == NULL) || (status != kCVReturnSuccess))
{
return;
}
else
{
CVPixelBufferLockBaseAddress(pixel_buffer, 0);
GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixel_buffer);
glReadPixels(0, 0, videoSize.width, videoSize.height, GL_RGBA, GL_UNSIGNED_BYTE, pixelBufferData);
}
// May need to add a check here, because if two consecutive times with the same value are added to the movie, it aborts recording
CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime],120);
if(![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime])
{
NSLog(@"Problem appending pixel buffer at time: %lld", currentTime.value);
}
else
{
// NSLog(@"Recorded pixel buffer at time: %lld", currentTime.value);
}
CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
CVPixelBufferRelease(pixel_buffer);
When using glReadPixels(), you need to color-swizzle your frames, so I employed an offscreen FBO and a fragment shader to do this, using the following code:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
gl_FragColor = texture2D(inputImageTexture, textureCoordinate).bgra;
}
However, on iOS 5.0 there is an even faster way to grab OpenGL ES content than glReadPixels(), which I describe in this answer. The nice thing about that process is that the textures already store content in BGRA pixel format, so you can feed the wrapped pixel buffers straight to an AVAssetWriter without any color conversion and still see very fast encoding.
Posted on 2013-03-26 11:21:08
I know this question has already been answered, but I wanted to make sure people are aware of vImage, part of the Accelerate framework, which is available on both iOS and OS X. My understanding is that Core Graphics uses vImage to do CPU-bound vector operations on bitmaps.
The specific API for converting ARGB to RGBA is vImagePermuteChannels_ARGB8888. There are also APIs for converting RGB to ARGB/XRGB, flipping images, overwriting channels, and more. It's something of a hidden gem!
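To illustrate what the permute does, here is a scalar sketch of the channel permutation. (This is only an illustration of the semantics: the real vImagePermuteChannels_ARGB8888 takes vImage_Buffer structs and a 4-entry permuteMap, and runs vectorized. The map {3, 0, 1, 2} shown here is the one that would turn RGBA into ARGB for this question.)

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar illustration of vImagePermuteChannels_ARGB8888's semantics:
   for each 4-byte pixel, output channel c is taken from input
   channel permuteMap[c]. With map {3, 0, 1, 2}, RGBA becomes ARGB. */
static void permute_channels_8888(const uint8_t *src, uint8_t *dst,
                                  size_t pixelCount,
                                  const uint8_t permuteMap[4]) {
    for (size_t i = 0; i < pixelCount; i++) {
        for (int c = 0; c < 4; c++) {
            dst[4 * i + c] = src[4 * i + permuteMap[c]];
        }
    }
}
```

For example, a single pixel {0x11, 0x22, 0x33, 0x80} (R, G, B, A) comes out as {0x80, 0x11, 0x22, 0x33} (A, R, G, B).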
Update: Brad Larson wrote a great answer to essentially the same question here.
https://stackoverflow.com/questions/3950014