
EZAudio custom AudioStreamBasicDescription not working as I expected

Stack Overflow user
Asked on 2014-01-28 19:37:41
1 answer · 1.3K views · 0 followers · Score: 2

Using EZAudio, I want to create as light a mono audioBufferList as possible. In the past I managed 46 bytes per audioBuffer, but the bufferDuration was relatively small. First, if I use the following AudioStreamBasicDescription for both input and output

AudioStreamBasicDescription audioFormat;
audioFormat.mBitsPerChannel   = 8 * sizeof(AudioUnitSampleType);
audioFormat.mBytesPerFrame    = sizeof(AudioUnitSampleType);
audioFormat.mBytesPerPacket   = sizeof(AudioUnitSampleType);
audioFormat.mChannelsPerFrame = 2;
audioFormat.mFormatFlags      = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
audioFormat.mFormatID         = kAudioFormatLinearPCM;
audioFormat.mFramesPerPacket  = 1;
audioFormat.mSampleRate       = 44100;

and use TPCircularBuffer as the transport, I get two buffers in the bufferList, each with an mDataByteSize of 4096, which is definitely too much. So I tried my previous ASBD:

audioFormat.mSampleRate         = 8000.00;
audioFormat.mFormatID           = kAudioFormatLinearPCM;
audioFormat.mFormatFlags        = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
audioFormat.mFramesPerPacket    = 1;
audioFormat.mChannelsPerFrame   = 1;
audioFormat.mBitsPerChannel     = 8;
audioFormat.mBytesPerPacket     = 1;
audioFormat.mBytesPerFrame      = 1;

Now mDataByteSize is 128 and I only get one buffer, but TPCircularBuffer can't handle it correctly. I guess that's because I only want to use one channel. So for now I dropped TPCircularBuffer and tried encoding and decoding the bytes as NSData, or, just for testing, passing the AudioBufferList through directly, but even with the first AudioStreamBasicDescription the sound is too distorted.

My current code:

-(void)initMicrophone{

    AudioStreamBasicDescription audioFormat;
    //*
    audioFormat.mBitsPerChannel   = 8 * sizeof(AudioUnitSampleType);
    audioFormat.mBytesPerFrame    = sizeof(AudioUnitSampleType);
    audioFormat.mBytesPerPacket   = sizeof(AudioUnitSampleType);
    audioFormat.mChannelsPerFrame = 2;
    audioFormat.mFormatFlags      = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
    audioFormat.mFormatID         = kAudioFormatLinearPCM;
    audioFormat.mFramesPerPacket  = 1;
    audioFormat.mSampleRate       = 44100;
    /*/
    audioFormat.mSampleRate         = 8000.00;
    audioFormat.mFormatID           = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags        = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
    audioFormat.mFramesPerPacket    = 1;
    audioFormat.mChannelsPerFrame   = 1;
    audioFormat.mBitsPerChannel     = 8;
    audioFormat.mBytesPerPacket     = 1;
    audioFormat.mBytesPerFrame      = 1;
    //*/


    _microphone = [EZMicrophone microphoneWithDelegate:self withAudioStreamBasicDescription:audioFormat];

    _output = [EZOutput outputWithDataSource:self withAudioStreamBasicDescription:audioFormat];
    [EZAudio circularBuffer:&_cBuffer withSize:128];
}

-(void)startSending{
    [_microphone startFetchingAudio];
    [_output startPlayback];
}

-(void)stopSending{
    [_microphone stopFetchingAudio];
    [_output stopPlayback];
}

-(void)microphone:(EZMicrophone *)microphone
 hasAudioReceived:(float **)buffer
   withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels{
    dispatch_async(dispatch_get_main_queue(), ^{
    });
}

-(void)microphone:(EZMicrophone *)microphone
    hasBufferList:(AudioBufferList *)bufferList
   withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels{
//*
        abufferlist = bufferList;
    /*/
     audioBufferData = [NSData dataWithBytes:bufferList->mBuffers[0].mData length:bufferList->mBuffers[0].mDataByteSize];
     //*/
 dispatch_async(dispatch_get_main_queue(), ^{
 });
}
-(AudioBufferList*)output:(EZOutput *)output needsBufferListWithFrames:(UInt32)frames withBufferSize:(UInt32 *)bufferSize{
    //*
    return abufferlist;
    /*/
     //    int bSize = 128;
     //    AudioBuffer audioBuffer;
     //    audioBuffer.mNumberChannels = 1;
     //    audioBuffer.mDataByteSize = bSize;
     //    audioBuffer.mData = malloc(bSize);
     ////    [audioBufferData getBytes:audioBuffer.mData length:bSize];
     //    memcpy(audioBuffer.mData, [audioBufferData bytes], bSize);
     //
     //
     //    AudioBufferList *bufferList = [EZAudio audioBufferList];
     //    bufferList->mNumberBuffers = 1;
     //    bufferList->mBuffers[0] = audioBuffer;
     //
     //    return bufferList;
    //*/


}

I know the value of bSize in output:needsBufferListWithFrames:withBufferSize: will probably have to change.

My main goal is to create sound that is as light as it can be, in mono, encode it to NSData, and decode it on the output side. Can you tell me where I'm going wrong?


1 Answer

Stack Overflow user

Answered on 2015-03-26 00:29:30

I ran into the same problem, switched to AVAudioRecorder and set the parameters I needed there. I kept EZMicrophone for audio visualization. Here is a link that shows how to do this:

iOS: Audio Recording File Format

Score: 0
Original page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/21404617
