
Error converting AudioBufferList to CMBlockBufferRef

Stack Overflow user
Asked on 2015-07-19 19:33:55
1 answer · 1.1K views · 0 followers · 9 votes

I am trying to read a video file using AVAssetReader and pass the audio off to Core Audio for processing (adding effects and such) before saving it back to disk using AVAssetWriter. I'd like to point out that if I set the componentSubType on my output node to RemoteIO, things play back correctly through the speakers. This makes me confident that my AUGraph is set up properly, since I can hear it working. Instead, I'm setting the subType to GenericOutput so I can do the rendering myself and get back the adjusted audio.

I'm reading in the audio and passing each CMSampleBufferRef to copyBuffer. This puts the audio into a circular buffer that will be read later.

- (void)copyBuffer:(CMSampleBufferRef)buf {  
    if (_readyForMoreBytes == NO)  
    {  
        return;  
    }  

    AudioBufferList abl;  
    CMBlockBufferRef blockBuffer;  
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(buf, NULL, &abl, sizeof(abl), NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);  

    UInt32 size = (unsigned int)CMSampleBufferGetTotalSampleSize(buf);  
    BOOL bytesCopied = TPCircularBufferProduceBytes(&circularBuffer, abl.mBuffers[0].mData, size);  

    if (!bytesCopied){  
        // Produce failed (ring buffer full): stop accepting bytes until the consumer drains it  
        _readyForMoreBytes = NO;  

        if (size > kRescueBufferSize){  
            NSLog(@"Unable to allocate enough space for rescue buffer, dropping audio frame");  
        } else {  
            if (rescueBuffer == nil) {  
                rescueBuffer = malloc(kRescueBufferSize);  
            }  

            rescueBufferSize = size;  
            memcpy(rescueBuffer, abl.mBuffers[0].mData, size);  
        }  
    }  

    CFRelease(blockBuffer);  
    if (!self.hasBuffer && bytesCopied > 0)  
    {  
        self.hasBuffer = YES;  
    }  
} 
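For readers unfamiliar with TPCircularBuffer, copyBuffer above relies on two properties of it: TPCircularBufferProduceBytes copies all-or-nothing (returning false when there is not enough free space, which is what triggers the rescue-buffer path), and consuming bytes later frees space for the producer. A toy sketch of that contract in plain C (illustrative only; the real library is lock-free and uses a virtual-memory mirroring trick to keep reads contiguous):

```c
#include <stdbool.h>
#include <stdint.h>

// Toy ring buffer illustrating the produce/consume contract copyBuffer
// depends on. Not TPCircularBuffer's actual implementation.
#define RING_CAPACITY 4096

typedef struct {
    uint8_t data[RING_CAPACITY];
    int32_t head; // write position
    int32_t tail; // read position
    int32_t fill; // bytes currently stored
} ToyRingBuffer;

// Mirrors TPCircularBufferProduceBytes: copy len bytes in, or fail
// without copying anything if there is not enough free space.
static bool toy_produce(ToyRingBuffer *rb, const void *bytes, int32_t len) {
    if (len > RING_CAPACITY - rb->fill) return false; // caller retries later
    for (int32_t i = 0; i < len; i++) {
        rb->data[(rb->head + i) % RING_CAPACITY] = ((const uint8_t *)bytes)[i];
    }
    rb->head = (rb->head + len) % RING_CAPACITY;
    rb->fill += len;
    return true;
}

// Mirrors TPCircularBufferTail + TPCircularBufferConsume rolled together:
// copy out up to len bytes and free that space for the producer.
static int32_t toy_consume(ToyRingBuffer *rb, void *out, int32_t len) {
    int32_t n = len < rb->fill ? len : rb->fill;
    for (int32_t i = 0; i < n; i++) {
        ((uint8_t *)out)[i] = rb->data[(rb->tail + i) % RING_CAPACITY];
    }
    rb->tail = (rb->tail + n) % RING_CAPACITY;
    rb->fill -= n;
    return n;
}
```

This is also why `_readyForMoreBytes` is flipped back on from the consumer side (in playbackCallback below): only consumption creates room for the producer again.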

Next I call processOutput. This performs a manual render on the outputUnit. When AudioUnitRender is called it invokes the playbackCallback below, which is what's hooked up as the input callback on my first node. playbackCallback pulls the data off the circular buffer and feeds it into the audioBufferList that was passed in. As I said before, if the output is set to RemoteIO this results in the audio playing correctly through the speakers. When AudioUnitRender finishes it returns noErr and the bufferList object contains valid data. But when I then call CMSampleBufferSetDataBufferFromAudioBufferList I get kCMSampleBufferError_RequiredParameterMissing (-12731).

-(CMSampleBufferRef)processOutput  
{  
    if(self.offline == NO)  
    {  
        return NULL;  
    }  

    AudioUnitRenderActionFlags flags = 0;  
    AudioTimeStamp inTimeStamp;  
    memset(&inTimeStamp, 0, sizeof(AudioTimeStamp));  
    inTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;  
    UInt32 busNumber = 0;  

    UInt32 numberFrames = 512;  
    inTimeStamp.mSampleTime = 0;  
    UInt32 channelCount = 2;  

    AudioBufferList *bufferList = (AudioBufferList*)malloc(sizeof(AudioBufferList)+sizeof(AudioBuffer)*(channelCount-1));  
    bufferList->mNumberBuffers = channelCount;  
    for (int j=0; j<channelCount; j++)  
    {  
        AudioBuffer buffer = {0};  
        buffer.mNumberChannels = 1;  
        buffer.mDataByteSize = numberFrames*sizeof(SInt32);  
        buffer.mData = calloc(numberFrames,sizeof(SInt32));  

        bufferList->mBuffers[j] = buffer;  

    }  
    CheckError(AudioUnitRender(outputUnit, &flags, &inTimeStamp, busNumber, numberFrames, bufferList), @"AudioUnitRender outputUnit");  

    CMSampleBufferRef sampleBufferRef = NULL;  
    CMFormatDescriptionRef format = NULL;  
    CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };  
    AudioStreamBasicDescription audioFormat = self.audioFormat;  
    CheckError(CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, NULL, 0, NULL, NULL, &format), @"CMAudioFormatDescriptionCreate");  
    CheckError(CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numberFrames, 1, &timing, 0, NULL, &sampleBufferRef), @"CMSampleBufferCreate");  
    CheckError(CMSampleBufferSetDataBufferFromAudioBufferList(sampleBufferRef, kCFAllocatorDefault, kCFAllocatorDefault, 0, bufferList), @"CMSampleBufferSetDataBufferFromAudioBufferList");  

    return sampleBufferRef;  
} 


static OSStatus playbackCallback(void *inRefCon,  
                                 AudioUnitRenderActionFlags *ioActionFlags,  
                                 const AudioTimeStamp *inTimeStamp,  
                                 UInt32 inBusNumber,  
                                 UInt32 inNumberFrames,  
                                 AudioBufferList *ioData)  
{  
    int numberOfChannels = ioData->mBuffers[0].mNumberChannels;  
    SInt16 *outSample = (SInt16 *)ioData->mBuffers[0].mData;  

    // Zero the output buffer up front in case the ring buffer can't fill it completely  
    memset(outSample, 0, ioData->mBuffers[0].mDataByteSize);  

    MyAudioPlayer *p = (__bridge MyAudioPlayer *)inRefCon;  

    if (p.hasBuffer){  
        int32_t availableBytes;  
        SInt16 *bufferTail = TPCircularBufferTail([p getBuffer], &availableBytes);  

        int32_t requestedBytesSize = inNumberFrames * kUnitSize * numberOfChannels;  

        int bytesToRead = MIN(availableBytes, requestedBytesSize);  
        memcpy(outSample, bufferTail, bytesToRead);  
        TPCircularBufferConsume([p getBuffer], bytesToRead);  

        if (availableBytes <= requestedBytesSize*2){  
            [p setReadyForMoreBytes];  
        }  

        if (availableBytes <= requestedBytesSize) {  
            p.hasBuffer = NO;  
        }    
    }  
    return noErr;  
} 

The CMSampleBufferRef I'm passing in looks valid (below is a dump of the object from the debugger):

CMSampleBuffer 0x7f87d2a03120 retainCount: 1 allocator: 0x103333180  
  invalid = NO  
  dataReady = NO  
  makeDataReadyCallback = 0x0  
  makeDataReadyRefcon = 0x0  
  formatDescription = <CMAudioFormatDescription 0x7f87d2a02b20 [0x103333180]> {  
  mediaType:'soun'  
  mediaSubType:'lpcm'  
  mediaSpecific: {  
  ASBD: {  
  mSampleRate: 44100.000000  
  mFormatID: 'lpcm'  
  mFormatFlags: 0xc2c  
  mBytesPerPacket: 2  
  mFramesPerPacket: 1  
  mBytesPerFrame: 2  
  mChannelsPerFrame: 1  
  mBitsPerChannel: 16 }  
  cookie: {(null)}  
  ACL: {(null)}  
  }  
  extensions: {(null)}  
}  
  sbufToTrackReadiness = 0x0  
  numSamples = 512  
  sampleTimingArray[1] = {  
  {PTS = {0/1 = 0.000}, DTS = {INVALID}, duration = {1/44100 = 0.000}},  
  }  
  dataBuffer = 0x0  

The buffer list looks like this:

Printing description of bufferList:  
(AudioBufferList *) bufferList = 0x00007f87d280b0a0  
Printing description of bufferList->mNumberBuffers:  
(UInt32) mNumberBuffers = 2  
Printing description of bufferList->mBuffers:  
(AudioBuffer [1]) mBuffers = {  
  [0] = (mNumberChannels = 1, mDataByteSize = 2048, mData = 0x00007f87d3008c00)  
}  

I'm really at a loss here and hoping someone can help. Thanks,

In case it matters, I'm debugging this on the iOS 8.3 simulator, and the audio comes from an mp4 that I shot on my iPhone 6 and then saved to my laptop.

I've read the following questions, but still nothing has worked:

How to convert AudioBufferList to CMSampleBuffer?

Converting an AudioBufferList to a CMSampleBuffer Produces Unexpected Results

CMSampleBufferSetDataBufferFromAudioBufferList returning error 12731

core audio offline rendering GenericOutput

Update

I've poked around some more and noticed that right before AudioUnitRender runs, my AudioBufferList looks like this:

bufferList->mNumberBuffers = 2,
bufferList->mBuffers[0].mNumberChannels = 1,
bufferList->mBuffers[0].mDataByteSize = 2048

mDataByteSize is numberFrames * sizeof(SInt32), i.e. 512 * 4. When I look at the AudioBufferList passed in playbackCallback, the list looks like this:

bufferList->mNumberBuffers = 1,
bufferList->mBuffers[0].mNumberChannels = 1,
bufferList->mBuffers[0].mDataByteSize = 1024

Not really sure where the other buffer goes, or the other 1024 bytes of size.
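The size discrepancy is consistent with a sample-format change across the graph: 512 frames of 16-bit samples per buffer on the input side versus 512 frames of 32-bit samples per channel buffer on the output side. A quick check of the arithmetic (the frame and byte counts come from the question; interpreting the output as 32-bit samples, e.g. the 8.24 fixed-point format AUGraphs commonly use internally, is my assumption):

```c
#include <stdint.h>

// 512 frames per render cycle, as set in processOutput.
enum { kNumberFrames = 512 };

// Input side (playbackCallback): one buffer of 16-bit samples.
static const int32_t inputBytes = kNumberFrames * (int32_t)sizeof(int16_t);

// Output side (AudioUnitRender): 32-bit samples in each channel buffer,
// matching the mDataByteSize = 2048 seen in the debugger.
static const int32_t outputBytesPerBuffer = kNumberFrames * (int32_t)sizeof(int32_t);
```

This gives exactly the 1024-in / 2048-out pattern described above.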

Once the call to render finishes, if I do something like this:

AudioBufferList newbuff;
newbuff.mNumberBuffers = 1;
newbuff.mBuffers[0] = bufferList->mBuffers[0];
newbuff.mBuffers[0].mDataByteSize = 1024;

and then pass that in, the error goes away.

If I try sizing the BufferList before AudioUnitRender with mNumberBuffers set to 1, or with its mDataByteSize set to numberFrames*sizeof(SInt16), I get a -50 when calling AudioUnitRender.

Update 2

I hooked up a render callback so I could inspect the output while I play the sound over the speakers. I noticed that the output going to the speakers also has an AudioBufferList with 2 buffers, and that the mDataByteSize during the input callback is 1024 while in the render callback it's 2048, the same as what I've seen when manually calling AudioUnitRender. When I inspect the data in the rendered AudioBufferList, I notice that the bytes in the two buffers are identical, which means I can just ignore the second buffer. But I'm not sure how to handle the fact that the data is 2048 in size after being rendered instead of the 1024 it is as it's being taken in. Any ideas on why that could be happening? Is it coming out in a more raw form after passing through the audio graph, and that's why the size is doubling?

1 Answer

Stack Overflow user

Answer accepted

Posted on 2015-07-29 19:55:26

It sounds like the issue you're dealing with is an inconsistent number of channels. The reason you're seeing data in blocks of 2048 instead of 1024 is because it is feeding you back two channels (stereo). Check to make sure all of your audio units are properly configured to use mono throughout the entire audio graph, including the Pitch Unit and any audio format descriptions.

One thing to especially watch out for is that calls to AudioUnitSetProperty can fail, so be sure to wrap those in CheckError() as well.
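As a concrete illustration of what "configured for mono" means, here is a sketch of a mono 16-bit LPCM stream format matching the ASBD dump in the question. The struct is redeclared locally so the snippet compiles without the CoreAudio headers; in the real project you would fill in CoreAudio's own AudioStreamBasicDescription and pass it to AudioUnitSetProperty with kAudioUnitProperty_StreamFormat on each unit in the graph, wrapping every call in CheckError() as advised above.

```c
#include <stdint.h>

// Local mirror of CoreAudio's AudioStreamBasicDescription, so this
// snippet is self-contained. Use the real CoreAudio type in practice.
typedef struct {
    double   mSampleRate;
    uint32_t mFormatID;
    uint32_t mFormatFlags;
    uint32_t mBytesPerPacket;
    uint32_t mFramesPerPacket;
    uint32_t mBytesPerFrame;
    uint32_t mChannelsPerFrame;
    uint32_t mBitsPerChannel;
    uint32_t mReserved;
} ASBD;

// Mono, 16-bit, interleaved LPCM at 44.1 kHz -- the format shown in
// the question's CMAudioFormatDescription dump.
static ASBD mono16(void) {
    ASBD f = {0};
    f.mSampleRate       = 44100.0;
    f.mFormatID         = 0x6C70636D; // 'lpcm' (kAudioFormatLinearPCM)
    f.mChannelsPerFrame = 1;
    f.mBitsPerChannel   = 16;
    f.mFramesPerPacket  = 1;
    // For interleaved LPCM these derived fields must stay consistent;
    // an inconsistent ASBD is a common source of -50 errors.
    f.mBytesPerFrame  = (f.mBitsPerChannel / 8) * f.mChannelsPerFrame; // 2
    f.mBytesPerPacket = f.mBytesPerFrame * f.mFramesPerPacket;         // 2
    return f;
}
```

If any unit in the graph is left at a stereo default, the render side will hand back two channel buffers, which matches the doubled 2048-byte output described in Update 2.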

Votes: 1
Original content provided by Stack Overflow; translation supported by Tencent Cloud's Xiaowei engine.
Original link: https://stackoverflow.com/questions/31505111