With the help of openFrameworks and this site, http://atastypixel.com/blog/using-remoteio-audio-unit, I have managed to record audio from the microphone into an audio file using Audio Units.
I would now like to stream the file back to an Audio Unit and play the audio. According to "Play an audio file using RemoteIO and Audio Unit", I can use ExtAudioFileOpenURL and ExtAudioFileRead. But how do I actually play the audio data sitting in my buffer?
Here is what I have so far:
static OSStatus setupAudioFileRead() {
    // Construct the file destination URL
    CFURLRef destinationURL = audioSystemFileURL();
    OSStatus status = ExtAudioFileOpenURL(destinationURL, &audioFileRef);
    CFRelease(destinationURL);
    if (checkStatus(status)) {
        ofLog(OF_LOG_ERROR, "ofxiPhoneSoundStream: Couldn't open file to read");
        return status;
    }
    while (TRUE) {
        // Try to fill the buffer to capacity. Note that ExtAudioFileRead
        // updates both framesRead and the byte sizes inside inputBufferList,
        // so they must be reset before every iteration.
        UInt32 framesRead = 8000;
        status = ExtAudioFileRead(audioFileRef, &framesRead, &inputBufferList);
        // Error check
        if (checkStatus(status)) { break; }
        // 0 frames read means EOF.
        if (framesRead == 0) { break; }
        // Play audio???
    }
    return noErr;
}
Posted on 2012-08-03 06:49:34
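The missing piece at the `// Play audio???` comment is usually a ring buffer: the file-reading loop pushes decoded frames in, and the Audio Unit's render callback pops them out and zero-fills any shortfall. Below is a minimal single-threaded sketch of that idea in plain C; the names `RingBuffer`, `rb_write`, and `rb_read` are illustrative, not Core Audio API, and a real render-thread version would need lock-free/atomic index updates.

```c
#include <stdint.h>
#include <string.h>

// Minimal ring buffer of 16-bit samples (capacity must be a power of two
// so the unsigned head/tail indices can wrap safely via modulo).
#define RB_CAPACITY 8
typedef struct {
    int16_t data[RB_CAPACITY];
    unsigned head; // total samples ever written
    unsigned tail; // total samples ever read
} RingBuffer;

// Push up to n samples; returns the number actually written.
unsigned rb_write(RingBuffer *rb, const int16_t *src, unsigned n) {
    unsigned written = 0;
    while (written < n && (rb->head - rb->tail) < RB_CAPACITY) {
        rb->data[rb->head % RB_CAPACITY] = src[written++];
        rb->head++;
    }
    return written;
}

// Pop up to n samples into dst; zero-fills the shortfall, which is what a
// render callback should do to avoid playing stale memory. Returns the
// number of real samples delivered.
unsigned rb_read(RingBuffer *rb, int16_t *dst, unsigned n) {
    unsigned read = 0;
    while (read < n && rb->tail != rb->head) {
        dst[read++] = rb->data[rb->tail % RB_CAPACITY];
        rb->tail++;
    }
    memset(dst + read, 0, (n - read) * sizeof(int16_t));
    return read;
}
```

The file-read loop above would call `rb_write` with the frames it just read, and the playback callback shown in the answer below would call `rb_read` to fill `ioData`.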
From the author of http://atastypixel.com/blog/using-remoteio-audio-unit/ — if you scroll down to the Playback section, try something like this:
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    // Notes: ioData contains buffers (may be more than one!)
    // Fill them up as much as you can. Remember to set the size value in each
    // buffer to match how much data is in the buffer.
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        // Take a pointer, not a copy, so the mDataByteSize we set below
        // actually lands in ioData.
        AudioBuffer *buffer = &ioData->mBuffers[i];
        // Copy from your source buffer to the output buffer. "your buffer"
        // is pseudocode for whatever buffer of file data you maintain.
        UInt32 size = min(buffer->mDataByteSize, your buffer.size);
        memcpy(buffer->mData, your buffer, size);
        buffer->mDataByteSize = size; // indicate how much data we wrote in the buffer
        // To test if your Audio Unit setup is working - comment out the three
        // lines above and uncomment the for loop below to hear random noise
        /*
        UInt16 *frameBuffer = buffer->mData;
        for (int j = 0; j < inNumberFrames; j++) {
            frameBuffer[j] = rand();
        }
        */
    }
    return noErr;
}
If you just want to record from the microphone to a file and play it back, Apple's SpeakHere sample may serve you better.
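The core of that callback is a clamp-copy-and-report pattern: copy at most the output buffer's capacity, then record how many bytes were actually written. That logic can be isolated and checked on its own in plain C; `fill_output` is an illustrative name, not part of any Core Audio API.

```c
#include <stddef.h>
#include <string.h>

// Copy at most dst_capacity bytes from src into dst and return the byte
// count actually written - the value a render callback would store back
// into the buffer's mDataByteSize field.
size_t fill_output(void *dst, size_t dst_capacity,
                   const void *src, size_t src_size) {
    size_t size = src_size < dst_capacity ? src_size : dst_capacity;
    memcpy(dst, src, size);
    return size;
}
```

Clamping matters in both directions: if the source runs short you must report the smaller size, and if the source has more data than the hardware asked for you must not overrun `mData`.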
Posted on 2013-10-16 10:05:25
Basically: 1. Create a RemoteIO unit (see the references on creating a RemoteIO unit);
The above is only a rough outline of the basic steps for playing a file with an Audio Unit on iOS. See Learning Core Audio by Chris Adamson and Kevin Avila for the details.
Posted on 2015-05-14 08:57:22
Here is a relatively simple approach that reuses the Audio Unit from the Tasty Pixel blog mentioned above. In the recording callback, instead of filling the buffer with data from the microphone, you fill it with data read from the file via ExtAudioFileRead. I'll try to paste an example below. Note that this will only work as-is with .caf files.
In your start method, call a readAudio or initAudioFile function that just gathers all the information about the file.
- (void)start {
    readAudio();
    OSStatus status = AudioOutputUnitStart(audioUnit);
    checkStatus(status);
}
Now, in the readAudio method, initialize the audio file reference.
ExtAudioFileRef fileRef;

void readAudio() {
    NSString *name = @"AudioFile";
    NSString *source = [[NSBundle mainBundle] pathForResource:name ofType:@"caf"];
    const char *cString = [source cStringUsingEncoding:NSASCIIStringEncoding];
    CFStringRef str = CFStringCreateWithCString(NULL, cString, kCFStringEncodingMacRoman);
    CFURLRef inputFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, str, kCFURLPOSIXPathStyle, false);
    AudioFileID fileID;
    OSStatus err = AudioFileOpenURL(inputFileURL, kAudioFileReadPermission, 0, &fileID);
    CheckError(err, "AudioFileOpenURL");
    err = ExtAudioFileOpenURL(inputFileURL, &fileRef);
    CheckError(err, "ExtAudioFileOpenURL");
    // Tell ExtAudioFile to convert the file into the client format the
    // Audio Unit uses (audioFormat is the AudioStreamBasicDescription
    // configured elsewhere in the setup).
    err = ExtAudioFileSetProperty(fileRef,
                                  kExtAudioFileProperty_ClientDataFormat,
                                  sizeof(AudioStreamBasicDescription),
                                  &audioFormat);
    CheckError(err, "ExtAudioFileSetProperty");
    CFRelease(str);
    CFRelease(inputFileURL);
}
Now that you have the audio data, the next step is simple. In recordingCallback, read the data from the file instead of from the microphone.
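The snippet above assumes an `audioFormat` ASBD already configured elsewhere. For the mono 16-bit PCM the callback below relies on, the client format would look roughly as follows. This is a sketch: the struct here is a stand-in so it compiles anywhere, and on iOS you would use the real `AudioStreamBasicDescription` from CoreAudioTypes.h together with `kAudioFormatLinearPCM` and the packed/signed-integer format flags.

```c
#include <stdint.h>

// Stand-in for Core Audio's AudioStreamBasicDescription so this sketch
// compiles without the iOS SDK; field names match the real struct.
typedef struct {
    double   mSampleRate;
    uint32_t mFormatID;
    uint32_t mBytesPerPacket;
    uint32_t mFramesPerPacket;
    uint32_t mBytesPerFrame;
    uint32_t mChannelsPerFrame;
    uint32_t mBitsPerChannel;
} ASBD;

// Mono, 16-bit, packed, interleaved linear PCM at the given sample rate.
ASBD make_mono16_format(double sample_rate) {
    ASBD fmt = {0};
    fmt.mSampleRate       = sample_rate;
    fmt.mFormatID         = 0x6C70636D; // 'lpcm' (kAudioFormatLinearPCM)
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    // For packed interleaved LPCM:
    //   bytes per frame  = channels * (bits per channel / 8)
    //   one frame per packet, so bytes per packet = bytes per frame
    fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * (fmt.mBitsPerChannel / 8);
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = fmt.mBytesPerFrame * fmt.mFramesPerPacket;
    return fmt;
}
```

Getting these derived fields consistent matters: `mBytesPerFrame` is exactly why the callback below sizes its buffer as `inNumberFrames * 2`.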
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    // Because of the way our audio format (set up elsewhere) is chosen:
    //   we only need 1 buffer, since it is mono
    //   samples are 16 bits = 2 bytes
    //   1 frame contains exactly 1 sample
    AudioBuffer buffer;
    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    buffer.mData = malloc(inNumberFrames * 2);
    // Put the buffer in an AudioBufferList
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;
    // Obtain the "recorded" samples - actually read from the file.
    // ExtAudioFileRead updates inNumberFrames with the number of frames
    // it actually delivered; 0 means end of file.
    OSStatus err = ExtAudioFileRead(fileRef, &inNumberFrames, &bufferList);
    // Now the samples we just read are sitting in bufferList
    // Process the new data
    [iosAudio processAudio:&bufferList];
    // Release the malloc'ed data in the buffer we created earlier
    free(bufferList.mBuffers[0].mData);
    return noErr;
}
This worked well for me.
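One contract of `ExtAudioFileRead` that the callback above relies on is the in/out frame count: the call may deliver fewer frames than requested near the end of the file, and returns 0 frames once the file is exhausted, which is the caller's cue to stop or rewind. A mock of that contract in plain C (`MockFile` and `mock_read` are illustrative stand-ins, not the Core Audio call):

```c
#include <stdint.h>
#include <string.h>

// In-memory "file" of 16-bit mono samples with a read position.
typedef struct {
    const int16_t *samples;
    size_t length;
    size_t pos;
} MockFile;

// ExtAudioFileRead-style contract: on entry *io_frames is the number of
// frames requested; on exit it is the number actually delivered into dst
// (0 once the file is exhausted).
void mock_read(MockFile *f, uint32_t *io_frames, int16_t *dst) {
    size_t remaining = f->length - f->pos;
    uint32_t n = (*io_frames < remaining) ? *io_frames : (uint32_t)remaining;
    memcpy(dst, f->samples + f->pos, n * sizeof(int16_t));
    f->pos += n;
    *io_frames = n;
}
```

In the real callback this means checking the post-call frame count before handing the buffer to `processAudio:`, and deciding there whether to stop the unit, loop by seeking back to frame 0, or pad with silence.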
https://stackoverflow.com/questions/11784133