
Converting a Float32Array of decoded samples into an AudioBuffer

Stack Overflow user
Asked on 2014-06-12 00:38:25
2 answers · 2.6K views · 0 followers · 2 votes

Because one of the browsers I'm trying to support does not let me decode a particular codec with AudioContext.decodeAudioData(), I'm using Aurora.js to decode the audio files instead.

How can I convert the decoded samples I receive from Aurora.js into an AudioBuffer that I can actually use to play the audio?

This is the code I have so far:

Code language: javascript
var AudioContext = (window.AudioContext || window.webkitAudioContext);
var context = new AudioContext();
var segmentUrls = [
    '/segments/00.wav',
    '/segments/05.wav',
    '/segments/10.wav',
    '/segments/15.wav',
    '/segments/20.wav',
    '/segments/25.wav',
    '/segments/30.wav',
    '/segments/35.wav',
    '/segments/40.wav',
    '/segments/45.wav',
    '/segments/50.wav',
    '/segments/55.wav'
];

Promise.all(segmentUrls.map(loadSound))
    .then(function(buffers) {
        var startAt = 0;
        buffers.forEach(function(buffer) {
            playSound(startAt, buffer);
            startAt += buffer.duration;
        });
    })
    .catch(function(err) {
        console.error(err);
    });

function playSound(offset, buffer) {
    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    source.start(offset);
}

function loadSound(url) {
    return new Promise(function(resolve, reject) {
        var request = new XMLHttpRequest();
        request.open('GET', url, true);
        request.responseType = 'arraybuffer';

        request.onload = function onLoad() {
            resolve(decodeAudioData(request.response));
        };

        request.onerror = function onError() {
            reject('Could not request file');
        };
        request.send();
    });
}

function decodeAudioData(audioData) {
    return new Promise(function(resolve, reject) {
        var asset = AV.Asset.fromBuffer(audioData);
        asset.decodeToBuffer(function(buffer) {
            // Create an AudioBuffer here
        });
    });
}
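A side note on the scheduling above: `source.start(offset)` takes an absolute time on the AudioContext clock, so offsets accumulated from zero only line up if playback is scheduled while `context.currentTime` is still near zero. Anchoring the accumulated offsets to `context.currentTime` is more robust. A minimal sketch (the helper name `cumulativeStartTimes` is illustrative, not part of any API):

```javascript
// Compute absolute start times for back-to-back playback: each segment
// starts where the previous one ends, measured from baseTime.
function cumulativeStartTimes(durations, baseTime) {
  var times = [];
  var t = baseTime;
  for (var i = 0; i < durations.length; i++) {
    times.push(t);
    t += durations[i];
  }
  return times;
}

// In the browser, the .then() handler above could then become:
// var starts = cumulativeStartTimes(
//     buffers.map(function(b) { return b.duration; }),
//     context.currentTime);
// buffers.forEach(function(buffer, i) { playSound(starts[i], buffer); });
```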

2 Answers

Stack Overflow user

Posted on 2014-06-12 02:22:41

You have to create an AudioBuffer with the appropriate size and number of channels, then copy the data from one Float32 buffer into the other.
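A sketch of that copy, under two assumptions about Aurora.js that should be checked against its documentation: that `asset.get('format', …)` reports `channelsPerFrame` and `sampleRate`, and that `decodeToBuffer` delivers all samples as a single interleaved Float32Array. The asset and context are passed in as parameters here purely so the copy logic stays self-contained:

```javascript
// Decode an Aurora.js asset into a Web Audio AudioBuffer.
// `asset` is an AV.Asset; `ctx` is an AudioContext.
function decodeWithAurora(asset, ctx) {
  return new Promise(function(resolve, reject) {
    asset.on('error', reject);
    asset.get('format', function(format) {
      asset.decodeToBuffer(function(interleaved) {
        var channels = format.channelsPerFrame;
        var frameCount = interleaved.length / channels;
        var audioBuffer = ctx.createBuffer(channels, frameCount, format.sampleRate);
        // De-interleave: sample i of channel ch sits at interleaved[i * channels + ch].
        for (var ch = 0; ch < channels; ch++) {
          var channelData = audioBuffer.getChannelData(ch);
          for (var i = 0; i < frameCount; i++) {
            channelData[i] = interleaved[i * channels + ch];
          }
        }
        resolve(audioBuffer);
      });
    });
  });
}

// Hypothetical wiring into the question's decodeAudioData():
// function decodeAudioData(audioData) {
//   return decodeWithAurora(AV.Asset.fromBuffer(audioData), context);
// }
```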

Votes: 0

Stack Overflow user

Posted on 2015-10-28 20:19:27

Here is the MDN snippet for putting data into an AudioBuffer and playing it:

https://developer.mozilla.org/en-US/docs/Web/API/AudioBuffer

Code language: javascript
// Stereo
var channels = 2;

// Create an empty two second stereo buffer at the
// sample rate of the AudioContext
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();    
var frameCount = audioCtx.sampleRate * 2.0;

var myArrayBuffer = audioCtx.createBuffer(channels, frameCount, audioCtx.sampleRate);

button.onclick = function() {
  // Fill the buffer with white noise;
  // just random values between -1.0 and 1.0
  for (var channel = 0; channel < channels; channel++) {
    // This gives us the actual array that contains the data
    var nowBuffering = myArrayBuffer.getChannelData(channel);
    for (var i = 0; i < frameCount; i++) {
      // Math.random() is in [0; 1.0]
      // audio needs to be in [-1.0; 1.0]
      nowBuffering[i] = Math.random() * 2 - 1;
    }
  }

  // Get an AudioBufferSourceNode.
  // This is the AudioNode to use when we want to play an AudioBuffer
  var source = audioCtx.createBufferSource();

  // set the buffer in the AudioBufferSourceNode
  source.buffer = myArrayBuffer;

  // connect the AudioBufferSourceNode to the
  // destination so we can hear the sound
  source.connect(audioCtx.destination);

  // start the source playing
  source.start();

}
Votes: 0
Original page content provided by Stack Overflow; translation supported by Tencent Cloud.
Original link: https://stackoverflow.com/questions/24168163
