How to process audio from the user's microphone

François Beaufort

On the web platform, the Media Capture and Streams API gives you access to the user's camera and microphone. The getUserMedia() method prompts the user for permission to capture a media stream from the camera and/or microphone. That audio stream can then be processed with an AudioWorklet, which runs on a dedicated Web Audio rendering thread and enables very low-latency audio processing.

The following example shows how to process audio from the user's microphone in a performant way.

let stream;

startMicrophoneButton.addEventListener("click", async () => {
  // Prompt the user to use their microphone.
  stream = await navigator.mediaDevices.getUserMedia({
    audio: true,
  });
  const context = new AudioContext();
  const source = context.createMediaStreamSource(stream);

  // Load and execute the module script.
  await context.audioWorklet.addModule("processor.js");
  // Create an AudioWorkletNode. The name of the processor is the
  // one passed to registerProcessor() in the module script.
  const processor = new AudioWorkletNode(context, "processor");

  source.connect(processor).connect(context.destination);
  log("Your microphone audio is being used.");
});

stopMicrophoneButton.addEventListener("click", () => {
  // Stop the stream.
  stream.getTracks().forEach(track => track.stop());

  log("Your microphone audio is not used anymore.");
});
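Note that stopping the tracks releases the microphone, but the AudioContext keeps its rendering thread alive. A small teardown helper can do both; this is a sketch rather than part of the original sample, and the name stopMicrophone is an assumption:

```javascript
// Hypothetical teardown helper: stops every capture track and closes the
// AudioContext so the audio rendering thread is released as well.
async function stopMicrophone(stream, context) {
  for (const track of stream.getTracks()) {
    track.stop();
  }
  // close() returns a promise that resolves once the context has shut down.
  await context.close();
}
```

Calling context.close() also frees the AudioWorklet thread, which track.stop() alone does not.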
// processor.js
// This file is evaluated in the audio rendering thread
// upon context.audioWorklet.addModule() call.

class Processor extends AudioWorkletProcessor {
  process([input], [output]) {
    // Copy the first input to the first output. The input may briefly have
    // no channels (for example, right after the track stops), so guard
    // before copying.
    if (input.length > 0) {
      output[0].set(input[0]);
    }
    // Return true to keep the processor alive.
    return true;
  }
}

registerProcessor("processor", Processor);
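process() receives blocks of 128 samples per channel. Beyond plain copying, a processor can transform each block; the per-channel loop below is a sketch of a simple gain stage (the applyGain name is hypothetical) that could be called from process():

```javascript
// Hypothetical per-block helper: scales one channel of samples by a gain
// value, writing the result into the output channel.
function applyGain(inputChannel, outputChannel, gain) {
  for (let i = 0; i < inputChannel.length; i++) {
    outputChannel[i] = inputChannel[i] * gain;
  }
}
```

In a real processor, the gain would typically come from an AudioParam declared via a static parameterDescriptors getter, which process() receives as its third argument.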

Browser support

MediaDevices.getUserMedia()

Browser Support

  • Chrome: 53.
  • Edge: 12.
  • Firefox: 36.
  • Safari: 11.


Web Audio

Browser Support

  • Chrome: 35.
  • Edge: 12.
  • Firefox: 25.
  • Safari: 14.1.


AudioWorklet

Browser Support

  • Chrome: 66.
  • Edge: 79.
  • Firefox: 76.
  • Safari: 14.1.

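Given the version spread above, feature detection is safer than version sniffing. A minimal check could look like the sketch below; the function name is an assumption, and it takes the global object as a parameter only so it can be exercised outside a browser (in a page you would call it with no argument):

```javascript
// Returns true when both getUserMedia and AudioWorklet (via its node
// constructor) are available on the given global object.
function canProcessMicrophoneAudio(g = globalThis) {
  return Boolean(
    g.navigator &&
    g.navigator.mediaDevices &&
    g.navigator.mediaDevices.getUserMedia &&
    g.AudioWorkletNode
  );
}
```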

Further reading

Demo

Open demo