We use an AudioUnit input callback to process incoming buffers. The audio unit setup is taken mostly from
https://github.com/robovm/apple-ios-samples/blob/master/aurioTouch/Classes/AudioController.mm
I added some sanity checks in the audio callback. It looks like this:
/// The audio input callback
static OSStatus audioInputCallback(void __unused *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 __unused inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList __unused *ioData)
{
    OSStatus err = noErr;
    if (!*callbackData.audioChainIsBeingReconstructed)
    {
        // we are calling AudioUnitRender on the input bus of AURemoteIO
        // this will store the audio data captured by the microphone in cd.audioBufferList
        err = AudioUnitRender(callbackData.audioUnit, ioActionFlags, inTimeStamp, kInputBus, inNumberFrames, &callbackData.audioBufferList);

        // check if the sample count is set correctly
        assert(callbackData.audioBufferList.mBuffers[0].mDataByteSize == inNumberFrames * sizeof(float));

        // Assert that we only received one buffer
        assert(callbackData.audioBufferList.mNumberBuffers == 1);

        // Copy buffer
        TPCircularBufferCopyAudioBufferList(callbackData.buffer, &callbackData.audioBufferList, inTimeStamp, kTPCircularBufferCopyAll, NULL);
    }
    return err;
}

Now, the assert(callbackData.audioBufferList.mBuffers[0].mDataByteSize == inNumberFrames * sizeof(float)); statement sometimes fails, because the two sizes differ. Can someone explain this behavior?
Posted on 2019-09-10 14:58:51
This is normal behavior on iOS. An iOS Audio Unit callback can be given a number of frames that differs from the originally configured buffer duration. This happens when the OS state changes, when the audio hardware state changes, or when the available hardware format does not exactly match your requested audio format.

Therefore, every audio callback must be written to handle an inNumberFrames that can vary from the requested value, or from the value seen in the previous callback.
https://stackoverflow.com/questions/57852372