ioOutputDataPacketSize: UnsafeMutablePointer&lt;UInt32&gt;, outOutputData: UnsafeMutablePointer&lt;AudioBufferList&gt;. outOutputData: pointer to the output AudioBufferList. outPacketDescription: the output packet descriptions. // Set up the input: AudioBufferList inAudioBufferList; inAudioBufferList.mBuffers[0].mDataByteSize = ioNumBytes; In the input callback, hand the user-data buffer over to ioData: AudioBufferList audioBufferList = *(AudioBufferList *)inUserData; ioData->mBuffers[0].mData = audioBufferList.mBuffers[0].mData; ioData->mBuffers[0].mDataByteSize = audioBufferList.mBuffers[0].mDataByteSize; return noErr;
The buffer used by the codec (void *, read/write). kExtAudioFileProperty_PacketTable (AudioFilePacketTableInfo, read/write): sets the packet table. The AudioBufferList struct holds a variable-length array of mNumberBuffers elements:

struct AudioBufferList {
    UInt32      mNumberBuffers;
    AudioBuffer mBuffers[1]; // this is a variable length array of mNumberBuffers elements
#if defined(__cplusplus) && CA_STRICT
public:
    AudioBufferList() {}
private:
    // Copying and assigning a variable length struct is problematic, so generate a compile error.
    AudioBufferList(const AudioBufferList&);
    AudioBufferList& operator=(const AudioBufferList&);
#endif
};
typedef struct AudioBufferList AudioBufferList;

struct AudioBuffer {
    UInt32 mNumberChannels;
    UInt32 mDataByteSize;
    void  *mData;
};

OSStatus ExtAudioFileWriteAsync(ExtAudioFileRef inExtAudioFile, UInt32 inNumberFrames, const AudioBufferList *ioData);
ioOutputDataPacketSize: UnsafeMutablePointer&lt;UInt32&gt;, outOutputData: UnsafeMutablePointer&lt;AudioBufferList&gt;. outOutputData: pointer to the output AudioBufferList. outPacketDescription: the output packet descriptions. The concrete transcoding code follows. First, create an AudioBufferList and store the input data in it. Then set up the output. In the input callback: audioBufferList = *(AudioBufferList *)inUserData; ioData->mBuffers[0].mData = audioBufferList.mBuffers[0].mData; ioData->mBuffers[0].mDataByteSize = audioBufferList.mBuffers[0].mDataByteSize; return noErr;
In that callback, copy the bufferList data over to the AudioUnit; 4. After aligning the left/right channel data in the PlayCallback, hand it back to the AudioUnit. Problems encountered: 1. Memory allocation. When allocating memory for a two-channel AudioBufferList, I tried to allocate buffList.mBuffers[1] directly and found it does not work: an AudioBufferList declares only one buffer by default, so mBuffers[1] is uninitialized memory. The approach that finally worked is to over-allocate the struct itself: buffList = (AudioBufferList *)malloc(sizeof(AudioBufferList) + (numberBuffers - 1) * sizeof(AudioBuffer)); After analyzing the AudioFileFormat and the AudioBufferList structure, my guess was that the problem lay in how the two-channel data format was configured.
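The over-allocation trick above can be sketched in plain C. Local typedefs stand in for the CoreAudioTypes definitions so the sketch is self-contained (on Apple platforms these come from CoreAudioTypes.h); `abl_alloc` is a hypothetical helper name, not a framework API.

```c
#include <stdlib.h>

/* Minimal stand-ins for the CoreAudioTypes definitions (assumption). */
typedef unsigned int UInt32;
typedef struct {
    UInt32 mNumberChannels;
    UInt32 mDataByteSize;
    void  *mData;
} AudioBuffer;
typedef struct {
    UInt32      mNumberBuffers;
    AudioBuffer mBuffers[1]; /* variable-length: mNumberBuffers elements */
} AudioBufferList;

/* Hypothetical helper: allocate a list with `numberBuffers` buffers.
 * Because mBuffers is declared with one element, we add room for the
 * remaining (numberBuffers - 1) AudioBuffer entries ourselves. */
static AudioBufferList *abl_alloc(UInt32 numberBuffers, UInt32 bytesPerBuffer) {
    AudioBufferList *list = (AudioBufferList *)malloc(
        sizeof(AudioBufferList) + (numberBuffers - 1) * sizeof(AudioBuffer));
    if (list == NULL) return NULL;
    list->mNumberBuffers = numberBuffers;
    for (UInt32 i = 0; i < numberBuffers; i++) {
        list->mBuffers[i].mNumberChannels = 1;
        list->mBuffers[i].mDataByteSize   = bytesPerBuffer;
        list->mBuffers[i].mData           = calloc(1, bytesPerBuffer);
    }
    return list;
}
```

With numberBuffers = 2 this makes mBuffers[1] a valid, initialized entry, which is exactly what the direct per-member allocation attempt lacked.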
func captureAudioDataOutput(capture: TRTCAVCapture, sampleBuffer: CMSampleBuffer) {
    var audioBufferList: AudioBufferList = AudioBufferList()
    var blockBuffer: CMBlockBuffer?
    // ... fetch the buffer list from the sample buffer (bufferListOut: &audioBufferList, blockBufferOut: &blockBuffer) ...
    if let data = audioBufferList.mBuffers.mData {
        audioFrame.data = Data(bytes: data, count: Int(audioBufferList.mBuffers.mDataByteSize))
    }
}
inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData) { // TODO: // Use inNumberFrames to work out how much of the data is valid; // the AudioBufferList may hold more space than is actually valid. AudioBufferList *bufferList; // bufferList holds a set of buffers, and the number of buffers is dynamic.
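The "use inNumberFrames to work out how much data is valid" comment reduces to simple arithmetic. A minimal sketch, assuming 16-bit integer PCM (`valid_bytes` is a hypothetical helper, and the 2-byte sample size is an assumption about the stream format):

```c
typedef unsigned int UInt32;

/* The render callback receives inNumberFrames; each buffer's valid byte
 * count is frames * channels-per-buffer * bytes-per-sample. The buffer
 * itself (mDataByteSize) may be larger than this valid region. */
static UInt32 valid_bytes(UInt32 inNumberFrames, UInt32 channelsPerBuffer) {
    const UInt32 bytesPerSample = 2; /* SInt16 PCM assumed */
    return inNumberFrames * channelsPerBuffer * bytesPerSample;
}
```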
*audioBufferList); // Input callback for audio render data.
_audioRender.audioBufferInputCallBack = ^(AudioBufferList * _Nonnull audioBufferList) {
    if (weakSelf.pcmDataCacheLength < audioBufferList->mBuffers[0].mDataByteSize) {
        memset(audioBufferList->mBuffers[0].mData, 0, audioBufferList->mBuffers[0].mDataByteSize);
    } else {
        [weakSelf.pcmDataCache replaceBytesInRange:NSMakeRange(0, audioBufferList->mBuffers[0].mDataByteSize) ...
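The branch logic of that callback can be sketched with plain buffers: on underrun, output silence; otherwise copy one render buffer's worth out of the cache and shift the remainder forward (roughly what the NSMutableData range replacement accomplishes). `fill_from_cache` is a hypothetical name for illustration only.

```c
#include <string.h>

typedef unsigned int UInt32;

/* If the PCM cache holds fewer bytes than the render buffer wants,
 * zero-fill the output (silence). Otherwise copy from the front of the
 * cache and compact it. Returns the number of bytes consumed. */
static UInt32 fill_from_cache(unsigned char *out, UInt32 outBytes,
                              unsigned char *cache, UInt32 *cacheLen) {
    if (*cacheLen < outBytes) {
        memset(out, 0, outBytes);             /* underrun: play silence */
        return 0;
    }
    memcpy(out, cache, outBytes);             /* consume from the front */
    memmove(cache, cache + outBytes, *cacheLen - outBytes);
    *cacheLen -= outBytes;
    return outBytes;
}
```

Zero-filling on underrun matters: handing the AudioUnit whatever stale bytes are in the buffer produces audible glitches.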
Initialization is an expensive operation: it allocates buffers, acquires system resources, and so on. kAudioUnitProperty_SetRenderCallback sets the render callback, and AURenderCallbackStruct is the callback struct. AudioBufferList is the audio buffering data structure:

struct AudioBufferList {
    UInt32 mNumberBuffers;
    AudioBuffer mBuffers[1];
};

Fill in an AudioComponentDescription, call AudioComponentFindNext to obtain the AudioComponent, then call AudioComponentInstanceNew to get the AudioUnit; 3. Initialize the AudioBufferList. AudioUnitInitialize initializes the AudioUnit; 6. Call AudioOutputUnitStart to begin; the AudioUnit then invokes the PlayCallback set earlier, and in that callback the audio data is assigned into the AudioBufferList.
AudioBufferList inBufferList; inBufferList.mNumberBuffers = 1; inBufferList.mBuffers[0] = inBuffer; // 2. Create the encoder output AudioBufferList to receive the encoded data. AudioBufferList outBufferList; outBufferList.mNumberBuffers = 1; outBufferList.mBuffers[0].mNumberChannels = ... Here the AudioBufferList we created for the data awaiting encoding is passed in as inInputDataProcUserData, and in the callback we simply copy it: AudioBufferList bufferList = *(AudioBufferList *)inUserData; ioData->mBuffers[0].mNumberChannels = ...
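The copy-from-user-data pattern above can be sketched in isolation. Framework types are replaced by local typedefs so the sketch compiles anywhere, and `fill_io_data` is a hypothetical name; a real AudioConverter input proc has additional parameters (the converter ref, packet descriptions) omitted here.

```c
#include <stddef.h>

/* Local stand-ins for the CoreAudioTypes definitions (assumption). */
typedef unsigned int UInt32;
typedef struct {
    UInt32 mNumberChannels;
    UInt32 mDataByteSize;
    void  *mData;
} AudioBuffer;
typedef struct {
    UInt32      mNumberBuffers;
    AudioBuffer mBuffers[1];
} AudioBufferList;
typedef int OSStatus;

/* The user-data pointer carries the AudioBufferList we prepared; the
 * callback hands its buffer straight to ioData (pointer hand-off, the
 * PCM bytes themselves are not copied). */
static OSStatus fill_io_data(AudioBufferList *ioData,
                             UInt32 *ioNumberDataPackets,
                             void *inUserData) {
    AudioBufferList bufferList = *(AudioBufferList *)inUserData;
    ioData->mBuffers[0].mNumberChannels = bufferList.mBuffers[0].mNumberChannels;
    ioData->mBuffers[0].mData           = bufferList.mBuffers[0].mData;
    ioData->mBuffers[0].mDataByteSize   = bufferList.mBuffers[0].mDataByteSize;
    *ioNumberDataPackets = 1;
    return 0; /* noErr */
}
```

Note the design choice: because only the mData pointer is handed over, the caller must keep the source buffer alive until the converter call returns.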
self.errorCallBack(error); }); } } + (CMSampleBufferRef)sampleBufferFromAudioBufferList:(AudioBufferList ... The data is copied from the AudioBufferList, and the CMBlockBuffer instance is attached to the CMSampleBuffer instance. if (!capture) { return -1; } // 1. Create AudioBufferList storage to receive the captured data. buffer.mNumberChannels = 1; AudioBufferList buffers; buffers.mNumberBuffers = 1; buffers.mBuffers[0] = buffer; // 2. Fetch the audio PCM data and store it in the AudioBufferList.
const AudioTimeStamp *inTimeStamp, UInt32 inOutputBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData. The fifth parameter, inNumberFrames, is the number of audio frames, and the last parameter carries the returned data in an AudioBufferList; keep that picture in mind for now. // The first parameter is our ioUnit. // Note the last parameter: in this callback ioData is always NULL, so it cannot be passed straight to AudioUnitRender; we have to supply an AudioBufferList of our own.
OSStatus BXPlayerConverterFiller(AudioConverterRef inAudioConverter, UInt32 *ioNumberDataPackets, AudioBufferList *ioData, ...
tap_ProcessCallback(MTAudioProcessingTapRef tap, CMItemCount numberFrames, MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut, ...) AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData
code:status userInfo:nil]; } memset(_aacBuffer, 0, _aacBufferSize); /* Fill the AudioBufferList with data. */ OSStatus inInputDataProc(AudioConverterRef inAudioConverter, UInt32 *ioNumberDataPackets, AudioBufferList *ioData, ...) { ... *ioNumberDataPackets = 1; return noErr; } /** Fill PCM into the buffer */ - (size_t)copyPCMSamplesIntoBuffer:(AudioBufferList *)ioData
On iOS, setting kAudioFormatFlagIsNonInterleaved makes the left- and right-channel data be stored separately in two AudioBuffers of the AudioBufferList.
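What the non-interleaved flag implies for the data layout can be shown with a small sketch: interleaved LRLR... samples are split so each channel lands in its own buffer (one AudioBuffer per channel in the AudioBufferList). `deinterleave_stereo` is a hypothetical helper for illustration, assuming 16-bit PCM.

```c
typedef unsigned int UInt32;
typedef short SInt16;

/* Split interleaved stereo samples (L R L R ...) into two per-channel
 * arrays, matching the layout kAudioFormatFlagIsNonInterleaved asks for. */
static void deinterleave_stereo(const SInt16 *interleaved, UInt32 frames,
                                SInt16 *left, SInt16 *right) {
    for (UInt32 i = 0; i < frames; i++) {
        left[i]  = interleaved[2 * i];
        right[i] = interleaved[2 * i + 1];
    }
}
```

With the flag set, mNumberBuffers is 2 and each AudioBuffer has mNumberChannels = 1; without it, a single buffer holds the interleaved frames with mNumberChannels = 2.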
malloc(pcmBufferSize); } memset(_pcmBuffer, 0, pcmBufferSize); // 6. Create the decode output buffer for the decoder interface. AudioBufferList outAudioBufferList = {0}; outAudioBufferList.mNumberBuffers = 1; outAudioBufferList.mBuffers[0]... The input callback: static OSStatus inputDataProcess(AudioConverterRef inConverter, UInt32 *ioNumberDataPackets, AudioBufferList *ioData, ...
Load the audio from mReaderAudioTrackOutput to obtain a CMSampleBuffer, then use CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer to convert the audio data into an AudioBufferList.
an array of AudioStreamPacketDescription; 2. Initialize the AudioUnit, setting the AVAudioSession category to AVAudioSessionCategoryPlayback; initialize the AudioBufferList.
Its difference from kExtAudioFileError_CodecUnavailableInputNotConsumed is that with the former error the buffer has already been consumed, so the next call must be given a new buffer, while with the latter the same buffer must be provided again. Details: 1. Initialize the AVAudioSession and the AudioBufferList.
CMSampleBufferSetDataBufferFromAudioBufferList(...)[33]: creates the corresponding CMBlockBuffer for the given CMSampleBuffer, with the data copied from an AudioBufferList. AudioBufferList[48]: a group of AudioBuffers. AudioTimeStamp[49]: a data structure that represents a timestamp along several dimensions. [48] AudioBufferList: https://developer.apple.com/documentation/coreaudiotypes/audiobufferlist