
Creating an IOSurface-backed CVPixelBuffer from YUV data

Stack Overflow user
Asked on 2015-08-05 04:51:00
Answers: 3 · Views: 11.4K · Followers: 0 · Score: 9

So I am receiving raw YUV data in three separate arrays from a network callback (a VoIP application). From what I understand, you cannot create an IOSurface-backed pixel buffer with CVPixelBufferCreateWithPlanarBytes, according to the documentation:

Important: You cannot use CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() with kCVPixelBufferIOSurfacePropertiesKey. Calling CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() will result in CVPixelBuffers that are not IOSurface-backed.

So you have to create the buffer with CVPixelBufferCreate, but how do you get the data from the callback into the CVPixelBufferRef you created?

- (void)videoCallBackWithYPlane:(uint8_t *)yPlane
                         uPlane:(uint8_t *)uPlane
                         vPlane:(uint8_t *)vPlane
                          width:(size_t)width
                         height:(size_t)height
                        yStride:(size_t)yStride
                        uStride:(size_t)uStride
                        vStride:(size_t)vStride
{
    NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                          width,
                                          height,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                          (__bridge CFDictionaryRef)(pixelAttributes),
                                          &pixelBuffer);
    // ...now what?
}

I am not sure what to do from here. Eventually I want to turn this into a CIImage, which I can then render with my GLKView. How do people "put" the data into the buffer once it has been created?


3 Answers

Stack Overflow user

Accepted answer

Answered on 2015-08-05 18:15:16

I figured it out, and it was fairly trivial. Here is the full code. The only issue is that I get a BSXPCMessage received error for message: Connection interrupted, and it takes a while for the video to appear.

NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                      width,
                                      height,
                                      kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                      (__bridge CFDictionaryRef)(pixelAttributes),
                                      &pixelBuffer);
if (result != kCVReturnSuccess) {
    DDLogWarn(@"Unable to create cvpixelbuffer %d", result);
}

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestPlane, yPlane, width * height);
// uvPlane holds the interleaved Cb/Cr samples; numberOfElementsForChroma is width * height / 2
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
memcpy(uvDestPlane, uvPlane, numberOfElementsForChroma);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer]; // success!
CVPixelBufferRelease(pixelBuffer);

I forgot to include the code that interleaves the two U and V planes, but that should not be too bad.

Score: 9

Stack Overflow user

Answered on 2020-01-17 15:44:30

Here is the complete conversion in Objective-C. And to all the geniuses saying "this is trivial": don't patronize anyone! If you are here to help, then help; if you are here to show off how "smart" you are, go do that somewhere else. Here is a link to a detailed explanation of YUV processing: www.glebsoft.com

/// Method to convert YUV buffers to a pixel buffer, in order to feed it to Face Unity methods.
/// The caller is responsible for releasing the returned buffer.
- (CVPixelBufferRef)pixelBufferFromYUV:(uint8_t *)yBuffer uBuffer:(uint8_t *)uBuffer vBuffer:(uint8_t *)vBuffer width:(int)width height:(int)height {
    NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
    CVPixelBufferRef pixelBuffer = NULL;
    /// numberOfElementsForChroma is width * height / 2 because both the U plane
    /// and the V plane are a quarter of the Y plane's size.
    size_t uPlaneSize = width * height / 4;
    size_t vPlaneSize = width * height / 4;
    size_t numberOfElementsForChroma = uPlaneSize + vPlaneSize;

    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                          width,
                                          height,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                          (__bridge CFDictionaryRef)(pixelAttributes),
                                          &pixelBuffer);
    if (result != kCVReturnSuccess) {
        return NULL;
    }

    /// The bi-planar format expects the chroma samples interleaved as CbCrCbCr...,
    /// so build a combined UV plane from the separate U and V buffers.
    uint8_t *uvPlane = calloc(numberOfElementsForChroma, sizeof(uint8_t));
    for (size_t i = 0; i < uPlaneSize; i++) {
        uvPlane[2 * i]     = uBuffer[i]; // Cb
        uvPlane[2 * i + 1] = vBuffer[i]; // Cr
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    memcpy(yDestPlane, yBuffer, width * height);

    uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    memcpy(uvDestPlane, uvPlane, numberOfElementsForChroma);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    free(uvPlane);
    return pixelBuffer;
}
Score: 4

Stack Overflow user

Answered on 2016-09-30 23:29:40

I had a similar problem. Here is what I ended up with in Swift 2.0, pieced together from answers to other questions and from the links below.

func generatePixelBufferFromYUV2(inout yuvFrame: YUVFrame) -> CVPixelBufferRef?
{
    var uIndex: Int
    var vIndex: Int
    var uvDataIndex: Int
    var pixelBuffer: CVPixelBufferRef? = nil
    var err: CVReturn;

    if (pixelBuffer == nil)
    {
        err = CVPixelBufferCreate(kCFAllocatorDefault, yuvFrame.width, yuvFrame.height, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, nil, &pixelBuffer)
        if (err != 0) {
            NSLog("Error at CVPixelBufferCreate %d", err)
            return nil
        }
    }

    if (pixelBuffer != nil)
    {
        CVPixelBufferLockBaseAddress(pixelBuffer!, 0)
        let yBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer!, 0)
        if (yBaseAddress != nil)
        {
            let yData = UnsafeMutablePointer<UInt8>(yBaseAddress)
            let yDataPtr = UnsafePointer<UInt8>(yuvFrame.luma.bytes)

            // Y-plane data
            memcpy(yData, yDataPtr, yuvFrame.luma.length)
        }

        let uvBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer!, 1)
        if (uvBaseAddress != nil)
        {
            let uvData = UnsafeMutablePointer<UInt8>(uvBaseAddress)
            let pUPointer = UnsafePointer<UInt8>(yuvFrame.chromaB.bytes)
            let pVPointer = UnsafePointer<UInt8>(yuvFrame.chromaR.bytes)

            // For the uv data, we need to interleave them as uvuvuvuv....
            let iuvRow = (yuvFrame.chromaB.length*2/yuvFrame.width)
            let iHalfWidth = yuvFrame.width/2

            for i in 0..<iuvRow
            {
                for j in 0..<(iHalfWidth)
                {
                    // UV data for original frame.  Just interleave them.
                    uvDataIndex = i*iHalfWidth+j
                    uIndex = (i*yuvFrame.width) + (j*2)
                    vIndex = uIndex + 1
                    uvData[uIndex] = pUPointer[uvDataIndex]
                    uvData[vIndex] = pVPointer[uvDataIndex]
                }
            }
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer!, 0)
    }

    return pixelBuffer
}

Note: yuvFrame is a struct holding the y, u, and v plane buffers, as well as the width and height. Also, I set the CFDictionary? parameter of CVPixelBufferCreate(...) to nil. If I give it the IOSurface attributes, it fails, complaining either that it is not IOSurface-backed or with error -6683.

Visit these links for more information. This one covers the UV interleaving: How to convert YUV to CIImage for iOS.

And a related question: CVOpenGLESTextureCacheCreateTextureFromImage returns error 6683

Score: 3
Original page content provided by Stack Overflow; translation supported by Tencent Cloud's IT-domain translation engine.
Original link: https://stackoverflow.com/questions/31823673