How to convert from mono to stereo using AVAudioConverter?

Stack Overflow user
Asked on 2022-01-14 21:22:10
1 answer · 288 views · 0 following · 0 votes

I am trying to use AVAudioEngine instead of AVAudioPlayer because I need to do some per-packet processing as the audio plays, but before I can get to that I need to convert 16-bit, 8 kHz mono audio data to stereo so that AVAudioEngine can play it. Below is my (incomplete) attempt. I am currently stuck on how to get AVAudioConverter to do the mono-to-stereo conversion. If I don't use AVAudioConverter, the iOS runtime complains that the input format doesn't match the output format. If I do use it (as below), the runtime doesn't complain, but the audio doesn't play correctly (probably because I'm not doing the mono-to-stereo conversion properly). Any help is greatly appreciated!

Code language: Swift
  private func loadAudioData(audioData: Data?) {
      // Load audio data into player

      guard let audio = audioData else {return}
      do {
          let inputAudioFormat = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: Double(sampleRate), channels: 1, interleaved: false)
          let outputAudioFormat = self.audioEngine.mainMixerNode.outputFormat(forBus: 0)
          
          if inputAudioFormat != nil {
              let inputStreamDescription = inputAudioFormat?.streamDescription.pointee
              let outputStreamDescription = outputAudioFormat.streamDescription.pointee
              let count = UInt32(audio.count)
              if inputStreamDescription != nil && count > 0 {
                  if let ibpf = inputStreamDescription?.mBytesPerFrame {
                      let inputFrameCapacity = count / ibpf
                      let outputFrameCapacity = count / outputStreamDescription.mBytesPerFrame
                      self.pcmInputBuffer = AVAudioPCMBuffer(pcmFormat: inputAudioFormat!, frameCapacity: inputFrameCapacity)
                      self.pcmOutputBuffer = AVAudioPCMBuffer(pcmFormat: outputAudioFormat, frameCapacity: outputFrameCapacity)
          
                      if let input = self.pcmInputBuffer, let output = self.pcmOutputBuffer {
                          self.pcmConverter = AVAudioConverter(from: inputAudioFormat!, to: outputAudioFormat)
                          input.frameLength = input.frameCapacity
                      
                          let b = UnsafeMutableBufferPointer(start: input.int16ChannelData?[0], count: input.stride * Int(inputFrameCapacity))
                          let bytesCopied = audio.copyBytes(to: b)
                          assert(bytesCopied == count)
          
                          audioEngine.attach(playerNode)
                          audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: nil)
          
                          self.pcmConverter?.convert(to: output, error: nil) { packets, status in
                              status.pointee = .haveData
                              return self.pcmInputBuffer    // I know this is wrong, but i'm not sure how to do it correctly
                          }
                          try audioEngine.start()
                      }
                  }
              }
          }
      }
  }

1 Answer

Stack Overflow user

Answered on 2022-01-15 11:52:05

Speculative, incorrect answer

What about pcmConverter?.channelMap = [0, 0]?

Actual answer

You don't need to use the audio converter's channel map, because mono-to-stereo AVAudioConverters seem to duplicate the mono channel by default. The main problems were that outputFrameCapacity was wrong, and that mainMixerNode's outputFormat was queried before calling audioEngine.prepare() or starting the engine.
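
To make the frame-capacity point concrete, here is a small arithmetic sketch (the 44.1 kHz mixer rate is an assumption for illustration; the real value is device-dependent and comes from mainMixerNode after prepare()):

Code language: Swift

// Hypothetical numbers: 2 seconds of 16-bit mono audio at 8 kHz.
let byteCount = 32_000                  // audio Data.count
let inputBytesPerFrame = 2              // Int16, mono, non-interleaved
let inputSampleRate = 8_000.0
let outputSampleRate = 44_100.0         // assumed mixer rate, for illustration

let inputFrames = byteCount / inputBytesPerFrame          // 16_000 frames
// The converter resamples, so the output buffer needs proportionally
// more frames than the input:
let outputFrames = Double(inputFrames) * outputSampleRate / inputSampleRate
// outputFrames == 88_200. Sizing the output buffer from the byte count
// (count / outputBytesPerFrame), as in the question, undershoots badly.
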

Assuming sampleRate = 8000, the modified solution looks like this:

Code language: Swift
private func loadAudioData(audioData: Data?) throws {
    // Load audio data into player
    
    guard let audio = audioData else {return}
    do {
        audioEngine.attach(playerNode)
        audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: nil)
        audioEngine.prepare() // https://stackoverflow.com/a/70392017/22147
        
        let outputAudioFormat = self.audioEngine.mainMixerNode.outputFormat(forBus: 0)
        guard let inputAudioFormat = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: Double(sampleRate), channels: 1, interleaved: false) else { return }
        
        let inputStreamDescription = inputAudioFormat.streamDescription.pointee
        let outputStreamDescription = outputAudioFormat.streamDescription.pointee
        let count = UInt32(audio.count)
        if count > 0 {
            let ibpf = inputStreamDescription.mBytesPerFrame
            let inputFrameCapacity = count / ibpf
            let outputFrameCapacity = Float64(inputFrameCapacity) * outputStreamDescription.mSampleRate / inputStreamDescription.mSampleRate
            self.pcmInputBuffer = AVAudioPCMBuffer(pcmFormat: inputAudioFormat, frameCapacity: inputFrameCapacity)
            self.pcmOutputBuffer = AVAudioPCMBuffer(pcmFormat: outputAudioFormat, frameCapacity: AVAudioFrameCount(outputFrameCapacity))
            
            if let input = self.pcmInputBuffer, let output = self.pcmOutputBuffer {
                self.pcmConverter = AVAudioConverter(from: inputAudioFormat, to: outputAudioFormat)
                input.frameLength = input.frameCapacity
                
                let b = UnsafeMutableBufferPointer(start: input.int16ChannelData?[0], count: input.stride * Int(inputFrameCapacity))
                let bytesCopied = audio.copyBytes(to: b)
                assert(bytesCopied == count)
                
                self.pcmConverter?.convert(to: output, error: nil) { packets, status in
                    status.pointee = .haveData
                    return self.pcmInputBuffer    // I know this is wrong, but i'm not sure how to do it correctly
                }
                try audioEngine.start()
                
                self.playerNode.scheduleBuffer(output, completionHandler: nil)
                self.playerNode.play()
            }
        }
    }
}
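
Regarding the input block still marked "I know this is wrong": a common pattern (a sketch, not part of the original answer) is to hand the buffer to the converter exactly once and then report end-of-stream, so the converter cannot keep pulling the same data:

Code language: Swift

// Sketch: supply the mono buffer once, then signal that no more data follows.
// `pcmInputBuffer` and `output` are the buffers from the answer above.
var suppliedInput = false
self.pcmConverter?.convert(to: output, error: nil) { _, outStatus in
    if suppliedInput {
        // Everything has already been handed over; tell the converter to stop.
        outStatus.pointee = .endOfStream
        return nil
    }
    suppliedInput = true
    outStatus.pointee = .haveData
    return self.pcmInputBuffer
}

For a one-shot conversion into a buffer sized as above this behaves the same, but it is the safer shape if the converter ever asks for input more than once.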
Votes: 0
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/70716647