
Extracting audio channels from linear PCM

Stack Overflow user
Asked on 2011-01-08 06:46:22
3 answers · 5.1K views · 0 following · Score: 8

I would like to extract one channel of audio from a raw LPCM file, i.e. extract the left or right channel of a stereo LPCM file. The LPCM is 16-bit depth, interleaved, 2-channel, little-endian. From what I gather, the byte order is {LeftChannel, RightChannel, LeftChannel, RightChannel, ...}, and since it is 16-bit depth, each channel has 2 bytes per sample, right?

So my question is: if I want to extract the left channel, would I take the bytes at addresses 0, 2, 4, 6, ..., n*2? And the right channel would be 1, 3, 5, ..., (n*2+1)?

Also, after extracting an audio channel, should I set the format of the extracted channel to 16-bit depth, 1 channel?

Thanks in advance

This is the code I currently use to extract PCM audio with an AssetReader. This code works fine for writing a music file without extracting its channels, so the problem may be caused by the format, or by something else.

Code language: objective-c
NSURL *assetURL = [song valueForProperty:MPMediaItemPropertyAssetURL];
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey, 
                                [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                            //  [NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
                                [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                [NSNumber numberWithBool:NO],AVLinearPCMIsFloatKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                nil];
NSError *assetError = nil;
AVAssetReader *assetReader = [[AVAssetReader assetReaderWithAsset:songAsset
                                                            error:&assetError]
                              retain];
if (assetError) {
    NSLog (@"error: %@", assetError);
    return;
}

AVAssetReaderOutput *assetReaderOutput = [[AVAssetReaderAudioMixOutput 
                                           assetReaderAudioMixOutputWithAudioTracks:songAsset.tracks
                                           audioSettings: outputSettings]
                                          retain];
if (! [assetReader canAddOutput: assetReaderOutput]) {
    NSLog (@"can't add reader output... die!");
    return;
}
[assetReader addOutput: assetReaderOutput];


NSArray *dirs = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectoryPath = [dirs objectAtIndex:0];

//CODE TO SPLIT STEREO
[self setupAudioWithFormatMono:kAudioFormatLinearPCM];
NSString *splitExportPath = [[documentsDirectoryPath stringByAppendingPathComponent:@"monoleft.caf"] retain];
if ([[NSFileManager defaultManager] fileExistsAtPath:splitExportPath]) {
    [[NSFileManager defaultManager] removeItemAtPath:splitExportPath error:nil];
}

AudioFileID mRecordFile;
NSURL *splitExportURL = [NSURL fileURLWithPath:splitExportPath];


OSStatus status =  AudioFileCreateWithURL(splitExportURL, kAudioFileCAFType, &_streamFormat, kAudioFileFlags_EraseFile,
                                          &mRecordFile);

NSLog(@"status is %d",status);

[assetReader startReading];

CMSampleBufferRef sampBuffer = [assetReaderOutput copyNextSampleBuffer];
UInt32 countsamp= CMSampleBufferGetNumSamples(sampBuffer);
NSLog(@"number of samples %d",countsamp);

SInt64 countByteBuf = 0;
SInt64 countPacketBuf = 0;
UInt32 numBytesIO = 0;
UInt32 numPacketsIO = 0;
NSMutableData * bufferMono = [NSMutableData new];
while (sampBuffer) {


    AudioBufferList  audioBufferList;
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
    for (int y=0; y<audioBufferList.mNumberBuffers; y++) {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        //frames = audioBuffer.mData;
        NSLog(@"the number of channel for buffer number %d is %d",y,audioBuffer.mNumberChannels);
        NSLog(@"The buffer size is %d",audioBuffer.mDataByteSize);






        //Append mono left to buffer data
        for (int i=0; i<audioBuffer.mDataByteSize; i= i+4) {
            [bufferMono appendBytes:(audioBuffer.mData+i) length:2];
        }

        //the number of bytes in the mutable data containing mono audio file
        numBytesIO = [bufferMono length];
        numPacketsIO = numBytesIO/2;
        NSLog(@"numpacketsIO %d",numPacketsIO);
        status = AudioFileWritePackets(mRecordFile, NO, numBytesIO, &_packetFormat, countPacketBuf, &numPacketsIO, audioBuffer.mData);
        NSLog(@"status for writebyte %d, packets written %d",status,numPacketsIO);
        if(numPacketsIO != (numBytesIO/2)){
            NSLog(@"Something wrong");
            assert(0);
        }


        countPacketBuf = countPacketBuf + numPacketsIO;
        [bufferMono setLength:0];


    }

    sampBuffer = [assetReaderOutput copyNextSampleBuffer];
    countsamp= CMSampleBufferGetNumSamples(sampBuffer);
    NSLog(@"number of samples %d",countsamp);
}
AudioFileClose(mRecordFile);
[assetReader cancelReading];
[self performSelectorOnMainThread:@selector(updateCompletedSizeLabel:)
                       withObject:0
                    waitUntilDone:NO];

The output format for Audio File Services is as follows:

Code language: objective-c
_streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
_streamFormat.mBitsPerChannel = 16;
_streamFormat.mChannelsPerFrame = 1;
_streamFormat.mBytesPerPacket = 2;
_streamFormat.mBytesPerFrame = 2; // (_streamFormat.mBitsPerChannel / 8) * _streamFormat.mChannelsPerFrame
_streamFormat.mFramesPerPacket = 1;
_streamFormat.mSampleRate = 44100.0;

_packetFormat.mStartOffset = 0;
_packetFormat.mVariableFramesInPacket = 0;
_packetFormat.mDataByteSize = 2;

3 Answers

Stack Overflow user

Accepted answer

Answered on 2011-01-08 07:22:31

Sounds almost right - you have a 16-bit depth, so that means each sample will take 2 bytes. That means the left-channel data will be in bytes {0,1}, {4,5}, {8,9} and so on. Interleaved means the samples are interleaved, not the bytes. Other than that I would try it out and see if you have any problems with your code.

After extracting the audio channel, should I set the format of the extracted channel to 16-bit depth, 1 channel?

After extraction, only one of the two channels remains, so yes, that is correct.

Score: 4

Stack Overflow user

Answered on 2012-02-19 05:44:58

I had a similar problem where the audio sounded 'slow'. The reason for it is that you specified an mChannelsPerFrame of 1 while you have dual-channel sound. Set it to 2 and it should speed up the playback. Also do tell whether, after you do this, the output 'sounds' correct... :)

Score: 1

Stack Overflow user

Answered on 2015-01-13 07:12:53

I am trying to split stereo audio into two mono files (split stereo audio to mono streams on iOS). I have been using your code but can't seem to get it to work. What are the contents of your setupAudioWithFormatMono method?

Score: 0
Original page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/4630969
