【Title】: OfflineAudioContext processing takes increasingly longer in Safari
【Posted】: 2022-11-12 06:20:28
【Question】:

I'm processing an audio buffer with an OfflineAudioContext using the following node layout:

[AudioBufferSourceNode] -> [AnalyserNode] -> [OfflineAudioContext]

This works great in Chrome (106.0.5249.119), but in Safari 16 (17614.1.25.9.10, 17614) each run of the analysis takes longer and longer. Both are running on macOS.

Strangely, I have to quit Safari to "reset" the processing time.

I'm guessing there's a memory leak?

Am I doing something wrong in my JavaScript code that would prevent Safari from garbage collecting?

async function processFrequencyData(
  audioBuffer,
  options
) {
  const {
    fps,
    numberOfSamples,
    maxDecibels,
    minDecibels,
    smoothingTimeConstant,
  } = options;

  const frameFrequencies = [];

  const oc = new OfflineAudioContext({
    length: audioBuffer.length,
    sampleRate: audioBuffer.sampleRate,
    numberOfChannels: audioBuffer.numberOfChannels,
  });

  const lengthInMillis = 1000 * (audioBuffer.length / audioBuffer.sampleRate);

  const source = new AudioBufferSourceNode(oc);
  source.buffer = audioBuffer;

  const az = new AnalyserNode(oc, {
    fftSize: numberOfSamples * 2,
    smoothingTimeConstant,
    minDecibels,
    maxDecibels,
  });
  source.connect(az).connect(oc.destination);

  const msPerFrame = 1000 / fps;
  let currentFrame = 0;

  function process() {
    const frequencies = new Uint8Array(az.frequencyBinCount);
    az.getByteFrequencyData(frequencies);

    // const times = new Uint8Array(az.frequencyBinCount);
    // az.getByteTimeDomainData(times);

    frameFrequencies[currentFrame] = frequencies;

    const nextTime = (currentFrame + 1) * msPerFrame;

    if (nextTime < lengthInMillis) {
      currentFrame++;
      const nextTimeSeconds = (currentFrame * msPerFrame) / 1000;
      oc.suspend(nextTimeSeconds).then(process);
    }

    oc.resume();
  }

  oc.suspend(0).then(process);

  source.start(0);
  await oc.startRendering();

  return frameFrequencies;
}

const buttonsDiv = document.createElement('div');
document.body.appendChild(buttonsDiv);

const initButton = document.createElement('button');
initButton.onclick = init;
initButton.innerHTML = 'Load audio'
buttonsDiv.appendChild(initButton);

const processButton = document.createElement('button');
processButton.disabled = true;
processButton.innerHTML = 'Process'
buttonsDiv.appendChild(processButton);

const resultElement = document.createElement('pre');
document.body.appendChild(resultElement)



async function init() {
  initButton.disabled = true;
  resultElement.innerText += 'Loading audio... ';

  const audioContext = new AudioContext();

  const arrayBuffer = await fetch('https://gist.githubusercontent.com/marcusstenbeck/da36a5fc2eeeba14ae9f984a580db1da/raw/84c53582d3936ac78625a31029022c8fdb734b2a/base64audio.txt').then(r => r.text()).then(fetch).then(r => r.arrayBuffer())
  
  resultElement.innerText += 'finished.';

  resultElement.innerText += '\nDecoding audio... ';
  const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
  resultElement.innerText += 'finished.';
  
  processButton.onclick = async () => {
    processButton.disabled = true;
    resultElement.innerText += '\nStart processing... ';
    const t0 = Date.now();
    
    await processFrequencyData(audioBuffer, {
      fps: 30,
      numberOfSamples: 2 ** 13,
      maxDecibels: -25,
      minDecibels: -70,
      smoothingTimeConstant: 0.2,
    });
    
    resultElement.innerText += `finished in ${Date.now() - t0} ms`;
    processButton.disabled = false;
  };
  
  processButton.disabled = false;
}

【Comments】:

    Tags: safari garbage-collection mobile-safari web-audio-api


    【Answer 1】:

    I guess this is indeed a bug in Safari. I can reproduce it by rendering an OfflineAudioContext without any nodes at all. As soon as I use suspend()/resume(), each invocation takes longer than the previous one.

    I'm only speculating here, but I think there may be some internal mechanism that tries to prevent rapid back and forth between the audio thread and the main thread. It feels like one of those login forms where each password attempt takes a little longer than the last.

    Anyway, I think you can avoid using suspend()/resume() for your particular use case. It should be possible to create one OfflineAudioContext per slice instead. To get the same effect, you simply render only that particular slice with each OfflineAudioContext.

    let currentTime = 0;
    
    while (currentTime < duration) {
        const offlineAudioContext = new OfflineAudioContext({
            length: LENGTH_OF_ONE_SLICE,
            sampleRate
        });
        const audioBufferSourceNode = new AudioBufferSourceNode(
            offlineAudioContext,
            {
                buffer
            }
        );
        const analyserNode = new AnalyserNode(offlineAudioContext);
    
        audioBufferSourceNode.start(0, currentTime);
    
        audioBufferSourceNode
            .connect(analyserNode)
            .connect(offlineAudioContext.destination);
        
        await offlineAudioContext.startRendering();
    
        const frequencies = new Uint8Array(analyserNode.frequencyBinCount);
    
        analyserNode.getByteFrequencyData(frequencies);
    
        // do something with the frequencies ...
    
        currentTime += LENGTH_OF_ONE_SLICE / sampleRate; // advance by one slice, in seconds
    }
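    (Not part of the original answer: `LENGTH_OF_ONE_SLICE` above is a placeholder. For the question's fps-based framing it could be derived from the frame rate; `sliceLengthForFps` is a hypothetical helper, not a Web Audio API function.)

```javascript
// Hypothetical helper: derive the per-slice render length (in sample
// frames) from the desired analysis frame rate, so that one rendered
// slice covers exactly one animation frame worth of audio.
function sliceLengthForFps(sampleRate, fps) {
  return Math.round(sampleRate / fps);
}
```

    With fps = 30 and a 44.1 kHz buffer this yields 1470 sample frames per slice.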
    

    I think the only thing missing then is the smoothing, since each slice gets its own AnalyserNode.
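    (A hedged sketch, not from the original answer: the smoothing could be approximated manually by blending each slice's byte frequency data with the previous slice's smoothed values, where `tau` stands in for `smoothingTimeConstant`. Note that a real AnalyserNode smooths the linear magnitude spectrum before the decibel conversion, so this byte-domain blend is only an approximation.)

```javascript
// Approximate AnalyserNode's smoothingTimeConstant across slices:
// out[k] = tau * previous[k] + (1 - tau) * current[k].
// `frames` is an array of Uint8Array frequency frames, one per slice.
function smoothFrames(frames, tau) {
  const smoothed = [];
  let prev = null;
  for (const frame of frames) {
    const out = new Uint8Array(frame.length);
    for (let i = 0; i < frame.length; i++) {
      out[i] = prev === null
        ? frame[i] // first frame has nothing to blend with
        : Math.round(tau * prev[i] + (1 - tau) * frame[i]);
    }
    smoothed.push(out);
    prev = out;
  }
  return smoothed;
}
```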

    【Discussion】:
