I'm trying to grab chunks using an AudioWorklet, as cwilso suggests in the link below: Using web audio api for analyzing input from microphone (convert MediaStreamSource to BufferSource). Unfortunately I can't get it to run. Does anyone know how to use an AudioWorklet to get chunks of data from a stream so I can analyze them? Here is my code:
navigator.mediaDevices.getUserMedia({ audio: true, video: false })
  .then(function(stream) {
    /* use the stream */
    var audioCtx = new (window.AudioContext || window.webkitAudioContext)(); // define audio context
    var source = audioCtx.createMediaStreamSource(stream);
    // Use an AudioWorklet to grab the bits and do detection
    // https://developers.google.com/web/updates/2017/12/audio-worklet
    // https://developers.google.com/web/updates/2018/06/audio-worklet-design-pattern
    // https://webaudio.github.io/web-audio-api/#audioworklet
    audioCtx.audioWorklet.addModule('AudioWorklet.js').then(() => {
      let bypassNode = new AudioWorkletNode(audioCtx, 'bypass-processor');
      // A MediaStreamAudioSourceNode cannot be connected to the context
      // itself; route it through the worklet node instead.
      source.connect(bypassNode).connect(audioCtx.destination);
      // collect chunks for beat detection
      // do BPM detection
    });
  })
  .catch(function(err) {
    /* handle the error */
    alert("Error: " + err);
  });
// Script in a separate file (AudioWorklet.js), as explained in the API docs
class BypassProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    // Single input, single channel.
    const input = inputs[0];
    const output = outputs[0];
    output[0].set(input[0]);
    // Note: alert() does not exist inside the AudioWorkletGlobalScope;
    // use this.port.postMessage() to communicate with the main thread.
    // Return true to keep the processor alive while there is input.
    return true;
  }
}
registerProcessor('bypass-processor', BypassProcessor);

Posted on 2018-08-18 03:45:15
Depending on how much processing you need to do, you may be able to do it in the AudioWorkletNode itself.
If not, you will need to use a MessagePort to pass the data from the AudioWorkletNode to the main thread.
You may also be interested in MessagePort with AudioWorklet, and AudioWorklet with SharedArrayBuffer and Worker.
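A minimal sketch of the MessagePort approach, assuming a bypass processor that posts each 128-frame block to the main thread: the `ChunkCollector` class, the 2048-sample window size, and `detectBPM` are illustrative assumptions, not part of the original answer. The collector itself is plain JavaScript, so it runs anywhere; the wiring at the bottom is browser-only.

```javascript
// Main-thread accumulator for the 128-frame Float32Array blocks that the
// worklet posts via this.port.postMessage(). It copies samples into a
// fixed-size window and hands out a full window once enough have arrived.
// ChunkCollector and the window size are illustrative assumptions.
class ChunkCollector {
  constructor(windowSize = 2048) {
    this.windowSize = windowSize;
    this.buffer = new Float32Array(windowSize);
    this.filled = 0;
  }
  // Append one chunk; returns a copy of the full window when it fills up,
  // otherwise null.
  push(chunk) {
    let out = null;
    for (let i = 0; i < chunk.length; i++) {
      this.buffer[this.filled++] = chunk[i];
      if (this.filled === this.windowSize) {
        out = this.buffer.slice();
        this.filled = 0;
      }
    }
    return out;
  }
}

// Wiring sketch (browser only). In the processor's process() you would add:
//   this.port.postMessage(inputs[0][0].slice());
// and on the main thread, after creating the AudioWorkletNode:
//   const collector = new ChunkCollector(2048);
//   bypassNode.port.onmessage = (e) => {
//     const window = collector.push(e.data);
//     if (window) detectBPM(window); // detectBPM is hypothetical
//   };
```

Posting a copy (`.slice()`) matters because the worklet reuses its input buffers between render quanta; for lower overhead, the SharedArrayBuffer pattern mentioned above avoids the per-block copy entirely.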
https://stackoverflow.com/questions/51901704