This specification describes a high-level Web API for processing and
        synthesizing audio in web applications. The primary paradigm is of an
        audio routing graph, where a number of AudioNode
        objects are connected together to define the overall audio rendering.
        The actual processing will primarily take place in the underlying
        implementation (typically optimized Assembly / C / C++ code), but
        direct script processing and synthesis is
        also supported.
      
The introductory section covers the motivation behind this specification.
        This API is designed to be used in conjunction with other APIs and
        elements on the web platform, notably: XMLHttpRequest [[XHR]] (using
        the responseType and response attributes).
        For games and interactive applications, it is anticipated to be used
        with the canvas 2D [[2dcontext]] and WebGL [[WEBGL]] 3D
        graphics APIs.
      
          Audio on the web has been fairly primitive up to this point and until
          very recently has had to be delivered through plugins such as Flash
          and QuickTime. The introduction of the audio element in
          HTML5 is very important, allowing for basic streaming audio playback.
          But, it is not powerful enough to handle more complex audio
          applications. For sophisticated web-based games or interactive
          applications, another solution is required. It is a goal of this
          specification to include the capabilities found in modern game audio
          engines as well as some of the mixing, processing, and filtering
          tasks that are found in modern desktop audio production applications.
        
The APIs have been designed with a wide variety of use cases [[webaudio-usecases]] in mind. Ideally, it should be able to support any use case which could reasonably be implemented with an optimized C++ engine controlled via script and run in a browser. That said, modern desktop audio software can have very advanced capabilities, some of which would be difficult or impossible to build with this system. Apple's Logic Audio is one such application which has support for external MIDI controllers, arbitrary plugin audio effects and synthesizers, highly optimized direct-to-disk audio file reading/writing, tightly integrated time-stretching, and so on. Nevertheless, the proposed system will be quite capable of supporting a large range of reasonably complex games and interactive applications, including musical ones. And it can be a very good complement to the more advanced graphics features offered by WebGL. The API has been designed so that more advanced capabilities can be added at a later time.
The API supports these primary features:
Processing of audio sources from an audio or
            video media
            element.
          Processing audio received from a remote peer using a
              MediaStreamTrackAudioSourceNode and
              [[!webrtc]].
              Sending a generated or processed audio stream to a remote peer using a
              MediaStreamAudioDestinationNode
              and [[!webrtc]].
              
            Modular routing allows arbitrary connections between different
            AudioNode objects. Each node can have
            inputs and/or outputs. A source
            node has no inputs and a single output. A destination
            node has one input and no outputs. Other nodes such as
            filters can be placed between the source and destination nodes. The
            developer doesn't have to worry about low-level stream format
            details when two objects are connected together; the right thing just
            happens. For example, if a mono audio stream is connected to a
            stereo input it should just mix to left and right channels appropriately.
          
            In the simplest case, a single source can be routed directly to the
            output. All routing occurs within an AudioContext containing a single
            AudioDestinationNode:
          
            Illustrating this simple routing, here's an example playing a single sound:
var context = new AudioContext();
function playSound() {
    var source = context.createBufferSource();
    source.buffer = dogBarkingBuffer;
    source.connect(context.destination);
    source.start(0);
}
          Here's a more complex example with three sources and a convolution reverb send with a dynamics compressor at the final output stage:
            
var context = 0;
var compressor = 0;
var reverb = 0;
var source1 = 0;
var source2 = 0;
var source3 = 0;
var lowpassFilter = 0;
var waveShaper = 0;
var panner = 0;
var dry1 = 0;
var dry2 = 0;
var dry3 = 0;
var wet1 = 0;
var wet2 = 0;
var wet3 = 0;
var masterDry = 0;
var masterWet = 0;
function setupRoutingGraph () {
    context = new AudioContext();
    // Create the effects nodes.
    lowpassFilter = context.createBiquadFilter();
    waveShaper = context.createWaveShaper();
    panner = context.createPanner();
    compressor = context.createDynamicsCompressor();
    reverb = context.createConvolver();
    // Create master wet and dry.
    masterDry = context.createGain();
    masterWet = context.createGain();
    // Connect final compressor to final destination.
    compressor.connect(context.destination);
    // Connect master dry and wet to compressor.
    masterDry.connect(compressor);
    masterWet.connect(compressor);
    // Connect reverb to master wet.
    reverb.connect(masterWet);
    // Create a few sources.
    source1 = context.createBufferSource();
    source2 = context.createBufferSource();
    source3 = context.createOscillator();
    source1.buffer = manTalkingBuffer;
    source2.buffer = footstepsBuffer;
    source3.frequency.value = 440;
    // Connect source1
    dry1 = context.createGain();
    wet1 = context.createGain();
    source1.connect(lowpassFilter);
    lowpassFilter.connect(dry1);
    lowpassFilter.connect(wet1);
    dry1.connect(masterDry);
    wet1.connect(reverb);
    // Connect source2
    dry2 = context.createGain();
    wet2 = context.createGain();
    source2.connect(waveShaper);
    waveShaper.connect(dry2);
    waveShaper.connect(wet2);
    dry2.connect(masterDry);
    wet2.connect(reverb);
    // Connect source3
    dry3 = context.createGain();
    wet3 = context.createGain();
    source3.connect(panner);
    panner.connect(dry3);
    panner.connect(wet3);
    dry3.connect(masterDry);
    wet3.connect(reverb);
    // Start the sources now.
    source1.start(0);
    source2.start(0);
    source3.start(0);
}
          
            Modular routing also permits the output of
            AudioNodes to be routed to an
            AudioParam parameter that controls the behavior
            of a different AudioNode. In this scenario, the
            output of a node can act as a modulation signal rather than an
            input signal.
          
            
function setupRoutingGraph() {
  var context = new AudioContext();
  // Create the low frequency oscillator that supplies the modulation signal
  var lfo = context.createOscillator();
  lfo.frequency.value = 1.0;
  // Create the high frequency oscillator to be modulated
  var hfo = context.createOscillator();
  hfo.frequency.value = 440.0;
  // Create a gain node whose gain determines the amplitude of the modulation signal
  var modulationGain = context.createGain();
  modulationGain.gain.value = 50;
  // Configure the graph and start the oscillators
  lfo.connect(modulationGain);
  modulationGain.connect(hfo.detune);
  hfo.connect(context.destination);
  hfo.start(0);
  lfo.start(0);
}
        The interfaces defined are:
An AudioContext interface, which contains an audio signal graph representing connections between AudioNodes.
          AudioNode interface, which represents
          audio sources, audio outputs, and intermediate processing modules.
          AudioNodes can be dynamically connected together
          in a modular fashion.
          AudioNodes exist in the context of an
          AudioContext.
          AnalyserNode interface, an
          AudioNode for use with music visualizers, or
          other visualization applications.
          AudioBuffer interface, for working with
          memory-resident audio assets. These can represent one-shot sounds, or
          longer audio clips.
          AudioBufferSourceNode interface, an
          AudioNode which generates audio from an
          AudioBuffer.
          AudioDestinationNode interface, an
          AudioNode subclass representing the final
          destination for all rendered audio.
          AudioParam interface, for controlling an
          individual aspect of an AudioNode's functioning,
          such as volume.
          AudioListener interface, which works with
          a PannerNode for spatialization.
          AudioWorklet interface representing a
          factory for creating custom nodes that can process audio directly
          using scripts.
          AudioWorkletGlobalScope interface, the
          context in which AudioWorkletProcessor processing scripts run.
          AudioWorkletNode interface, an
          AudioNode representing a node processed in an
          AudioWorkletProcessor.
          AudioWorkletProcessor interface,
          representing a single node instance inside an audio worker.
          BiquadFilterNode interface, an
          AudioNode for common low-order filters such as
          low-pass, high-pass, band-pass, low-shelf, high-shelf, peaking, notch, and allpass.
            ChannelMergerNode interface, an
          AudioNode for combining channels from multiple
          audio streams into a single audio stream.
          ChannelSplitterNode interface, an 
            AudioNode for accessing the individual channels of an
            audio stream in the routing graph.
          ConstantSourceNode interface, an
          AudioNode for generating a nominally constant output value
          with an AudioParam to allow automation of the value.
          ConvolverNode interface, an
          AudioNode for applying a real-time linear effect (such as the sound of
          a concert hall).
          DelayNode interface, an
          AudioNode which applies a dynamically adjustable
          variable delay.
          DynamicsCompressorNode interface, an
          AudioNode for dynamics compression.
          GainNode interface, an
          AudioNode for explicit gain control. Because
          inputs to AudioNodes support multiple connections
          (as a unity-gain summing junction), mixers can be easily built with GainNodes.
          IIRFilterNode interface, an
          AudioNode for a general IIR filter.
          MediaElementAudioSourceNode interface, an
          AudioNode which is the audio source from an
          audio, video, or other media element.
          MediaStreamAudioSourceNode interface, an
          AudioNode which is the audio source from a
          MediaStream such as live audio input, or from a remote peer.
          MediaStreamTrackAudioSourceNode interface,
          an AudioNode which is the audio source from a
          MediaStreamTrack.
          MediaStreamAudioDestinationNode interface,
          an AudioNode which is the audio destination to a
          MediaStream sent to a remote peer.
          PannerNode interface, an
          AudioNode for spatializing / positioning audio in
          3D space.
          PeriodicWave interface for specifying
          custom periodic waveforms for use by the
          OscillatorNode.
          OscillatorNode interface, an
          AudioNode for generating a periodic waveform.
          StereoPannerNode interface, an
          AudioNode for equal-power positioning of audio
          input in a stereo stream.
          WaveShaperNode interface, an
          AudioNode which applies a non-linear waveshaping
          effect for distortion and other more subtle warming effects.
          There are also several features that have been deprecated from the Web Audio API but not yet removed, pending implementation experience of their replacements:
ScriptProcessorNode interface, an 
            AudioNode for generating or processing audio directly
            using scripts.
          AudioProcessingEvent interface, which is
          an event type used with ScriptProcessorNode
          objects.
          The following conformance classes are defined by this specification:
A user agent is considered to be a conforming implementation if it satisfies all of the MUST-, REQUIRED- and SHALL-level criteria in this specification that apply to implementations.
User agents that use ECMAScript to implement the APIs defined in this specification must implement them in a manner consistent with the ECMAScript Bindings defined in the Web IDL specification [[!WEBIDL]] as this specification uses that specification and terminology.
          This interface represents a set of AudioNode
          objects and their connections. It allows for arbitrary routing of
          signals to an AudioDestinationNode. Nodes are
          created from the context and are then connected together.
        
          BaseAudioContext is not instantiated directly,
          but is instead extended by the concrete interfaces
          AudioContext (for real-time rendering) and
          OfflineAudioContext (for offline rendering).
        
InvalidStateError.
            (AudioBuffers may still be created, through 
            createBuffer, 
            decodeAudioData, or the AudioBuffer constructor.)
          
              An AudioDestinationNode
              with a single input representing the final destination for all
              audio. Usually this will represent the actual audio hardware. All
              AudioNodes actively rendering audio will
              directly or indirectly connect to destination.
            
              The sample rate (in sample-frames per second) at which the
              BaseAudioContext handles audio. It is assumed
              that all AudioNodes in the context run at
              this rate. In making this assumption, sample-rate converters or
              "varispeed" processors are not supported in real-time processing.
              The Nyquist frequency is half this sample-rate value.
            
              This is the time in seconds of the sample frame immediately
              following the last sample-frame in the block of audio most
              recently processed by the context's rendering graph. If the
              context's rendering graph has not yet processed a block of audio,
              then currentTime
              has a value of zero.
            
              In the time coordinate system of currentTime,
              the value of zero corresponds to the first sample-frame in the
              first block processed by the graph. Elapsed time in this system
              corresponds to elapsed time in the audio stream generated by the
              BaseAudioContext, which may not be
              synchronized with other clocks in the system. (For an
              OfflineAudioContext, since the stream is not
              being actively played by any device, there is not even an
              approximation to real time.)
            
              All scheduled times in the Web Audio API are relative to the
              value of currentTime.
            
              When the BaseAudioContext is in the running
              state, the value of this attribute is monotonically increasing
              and is updated by the rendering thread in uniform increments,
              corresponding to one render
              quantum. Thus, for a running context,
              currentTime increases steadily as the system
              processes audio blocks, and always represents the time of the
              start of the next audio block to be processed. It is also the
              earliest possible time when any change scheduled in the current
              state might take effect.
            
              currentTime MUST be read atomically on the control thread before being
              returned.
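              As a non-normative sketch, the following schedules playback relative to
              currentTime (context and buffer are placeholder names for an existing
              AudioContext and a decoded AudioBuffer):
var source = context.createBufferSource();
source.buffer = buffer;
source.connect(context.destination);
// Start half a second from now, in the context's time coordinate system.
source.start(context.currentTime + 0.5);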
            
              An AudioListener which
              is used for 3D spatialization.
            
Describes the current state of the AudioContext, on the control thread.
Resumes the progression of the BaseAudioContext's currentTime when it has been suspended.
When resume is called, execute these steps:
closed reject the promise
              with InvalidStateError, abort these steps, returning
              promise.
              running, resolve promise, return it, and
              abort these steps.
              running.
              Running a control message to resume a BaseAudioContext means running these steps on the rendering thread:
running.
              running:
                    running.
                      statechange at the BaseAudioContext.
                      
              A property used to set the EventHandler for an event
              that is dispatched to BaseAudioContext when
              the state of the AudioContext has changed (i.e. when the
              corresponding promise would have resolved). An event of type
              Event will be dispatched to the event
              handler, which can query the AudioContext's state directly. A
              newly-created AudioContext will always begin in the
              suspended state, and a state change event will be
              fired whenever the state changes to a different state. This event
              is fired before the oncomplete event is
              fired.
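              For illustration, a handler observing state changes might look like this
              (context is a placeholder for an existing AudioContext):
context.onstatechange = function () {
  console.log("AudioContext state changed to: " + context.state);
};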
            
              Creates an AudioBuffer of the given size. The audio data in the
              buffer will be zero-initialized (silent). A NotSupportedError exception MUST be
              thrown if any of the arguments is negative, zero, or outside its
              nominal range.
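              For example, this non-normative sketch creates a one-second stereo buffer at
              the context's sample rate and fills it with white noise (context is assumed
              to be an existing BaseAudioContext):
var noiseBuffer = context.createBuffer(2, context.sampleRate, context.sampleRate);
for (var channel = 0; channel < noiseBuffer.numberOfChannels; channel++) {
  var data = noiseBuffer.getChannelData(channel);
  for (var i = 0; i < data.length; i++) {
    data[i] = Math.random() * 2 - 1; // values in the nominal range [-1, 1]
  }
}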
            
              Asynchronously decodes the audio file data contained in the
              ArrayBuffer. The ArrayBuffer can, for
              example, be loaded from an XMLHttpRequest's
              response attribute after setting the
              responseType to "arraybuffer". Audio
              file data can be in any of the formats supported by the
              audio element. The buffer passed to 
              decodeAudioData has its content-type determined by sniffing,
              as described in [[mimesniff]].
            
Although the primary method of interfacing with this function is via its promise return value, the callback parameters are provided for legacy reasons. The system shall ensure that the AudioContext is not garbage collected before the promise is resolved or rejected and any callback function is called and completes.
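              A non-normative sketch of the promise-based usage, loading audio data with
              XMLHttpRequest (the URL "dogBarking.mp3" is a placeholder; dogBarkingBuffer
              refers to the variable used in the earlier routing example):
var request = new XMLHttpRequest();
request.open("GET", "dogBarking.mp3", true);
request.responseType = "arraybuffer";
request.onload = function () {
  context.decodeAudioData(request.response).then(function (decodedBuffer) {
    dogBarkingBuffer = decodedBuffer;
  }, function (error) {
    console.error("decodeAudioData failed: ", error);
  });
};
request.send();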
              When decodeAudioData is
              called, the following steps must be performed on the control
              thread:
            
IsDetachedBuffer
              (described in [[!ECMASCRIPT]]) on audioData is
              false, execute the following steps:
                
              DataCloneError.
                  When queuing a decoding operation to be performed on another thread, the following steps MUST happen on a thread that is not the control thread nor the rendering thread, called the decoding thread.
decodeAudioData.
            DOMException whose
                  name is "EncodingError".
                  AudioContext if it is different from the
                  sample-rate of audioData.
                  AudioBuffer
                      containing the final result (after possibly sample-rate
                      conversion).
                      
              Factory method for a
              ConstantSourceNode.
            
              Factory method for a ScriptProcessorNode. This
              method is DEPRECATED, as it is intended to be replaced by
              AudioWorkletNode.
              Creates a ScriptProcessorNode for direct
              audio processing using scripts. An
              IndexSizeError exception MUST be thrown if
              bufferSize or
              numberOfInputChannels or
              numberOfOutputChannels are outside the valid
              range.
            
bufferSize parameter determines the
                buffer size in units of sample-frames. If it's not passed in,
                or if the value is 0, then the implementation will choose the
                best buffer size for the given environment, which will be
                a constant power of 2 throughout the lifetime of the node.
                Otherwise if the author explicitly specifies the bufferSize, it
                must be one of the following values: 256, 512, 1024, 2048,
                4096, 8192, 16384. This value controls how frequently the
                audioprocess
                event is dispatched and how many sample-frames need to be
                processed each call. Lower values for
                bufferSize will result in a lower (better)
                latency. Higher values will be necessary
                to avoid audio breakup and glitches. It is recommended for authors
                to not specify this buffer size and allow the implementation to
                pick a good buffer size to balance between latency and audio quality. If the value of this
                parameter is not one of the allowed power-of-2 values listed
                above, an IndexSizeError
                MUST be thrown.
              NotSupportedError must
                be thrown if the number of channels is not supported.
              NotSupportedError must
                be thrown if the number of channels is not supported.
              
              It is invalid for both numberOfInputChannels
              and numberOfOutputChannels to be zero. In
              this case an IndexSizeError MUST be thrown.
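              A minimal, non-normative sketch of the deprecated usage (prefer
              AudioWorkletNode for new code; context is a placeholder for an existing
              AudioContext):
var processor = context.createScriptProcessor(4096, 1, 1);
processor.onaudioprocess = function (event) {
  var input = event.inputBuffer.getChannelData(0);
  var output = event.outputBuffer.getChannelData(0);
  for (var i = 0; i < input.length; i++) {
    output[i] = input[i] * 0.5; // simple attenuation
  }
};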
            
              Factory method for an AnalyserNode.
            
Factory method for a GainNode.
Factory method for a DelayNode. The initial default delay time will be 0 seconds.
NotSupportedError exception MUST be thrown.
              
              Factory method for a BiquadFilterNode
              representing a second order filter which can be configured as one
              of several common filter types.
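              For example, a sketch configuring a low-pass filter (source and context are
              placeholders for an existing source node and AudioContext):
var filter = context.createBiquadFilter();
filter.type = "lowpass";
filter.frequency.value = 1000; // cutoff frequency in Hz
filter.Q.value = 0.7;
source.connect(filter);
filter.connect(context.destination);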
            
              Factory method for an IIRFilterNode
              representing a general IIR Filter.
            
InvalidStateError MUST be
                thrown. A
                NotSupportedError MUST be thrown if the array
                length is 0 or greater than 20.
              InvalidStateError
                MUST be thrown. A
                NotSupportedError MUST be thrown if the array
                length is 0 or greater than 20.
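              A non-normative sketch creating a simple one-pole low-pass filter from
              feedforward and feedback coefficient arrays (the coefficient values are
              illustrative only; source and context are placeholders):
var feedforward = [0.1];
var feedback = [1.0, -0.9]; // y[n] = 0.1 * x[n] + 0.9 * y[n-1]
var iirFilter = context.createIIRFilter(feedforward, feedback);
source.connect(iirFilter);
iirFilter.connect(context.destination);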
              
              Factory method for a WaveShaperNode
              representing a non-linear distortion.
            
              Factory method for a PannerNode.
            
              Factory method for a
              StereoPannerNode.
            
              Factory method for a ConvolverNode.
            
              Factory method for a
              ChannelSplitterNode representing a channel
              splitter. An
              IndexSizeError exception MUST be thrown for invalid
              parameter values.
            
              Factory method for a ChannelMergerNode
              representing a channel merger. An
              IndexSizeError exception MUST be thrown for invalid
              parameter values.
            
              Factory method for an OscillatorNode.
            
              Factory method to create a
              PeriodicWave. When calling this method,
              execute these steps:
            
real and imag
              parameters passed to this factory method to the attributes of the
              same name on o.
              disableNormalization attribute on 
                o to the value of the disableNormalization
                attribute of the constraints attribute passed to
                the factory method.
              constraints.
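              For illustration, a sketch building a custom waveform containing the
              fundamental and a half-amplitude second harmonic, then assigning it to an
              oscillator (context is a placeholder for an existing BaseAudioContext):
var real = new Float32Array([0, 0, 0]);
var imag = new Float32Array([0, 1, 0.5]);
var wave = context.createPeriodicWave(real, imag);
var osc = context.createOscillator();
osc.setPeriodicWave(wave);
osc.connect(context.destination);
osc.start();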
              
            Once created, an AudioContext will continue to play
            sound until it has no more sound to play, or the page goes away.
          
The Web Audio API takes a fire-and-forget approach to audio source scheduling. That is, source nodes are created for each note during the lifetime of the AudioContext, and never explicitly removed from the graph. This is incompatible with a serialization API, since there is no stable set of nodes that could be serialized.
Moreover, having an introspection API would allow content script to be able to observe garbage collections.
The subclasses AudioContext and OfflineAudioContext should be considered expensive objects. Creating these objects may involve creating a high-priority thread, or using a low-latency system audio stream, both having an impact on energy consumption. It is usually not necessary to create more than one AudioContext in a document.
Constructing or resuming a BaseAudioContext subclass involves acquiring system resources for that context. For AudioContext, this also requires creation of a system audio stream. These operations return when the context begins generating output from its associated audio graph.
            Additionally, a user-agent can have an implementation-defined
            maximum number of AudioContexts, after which any attempt to
            create a new AudioContext will fail, throwing NotSupportedError.
          
suspend and close allow authors to release system resources, including threads, processes and audio streams. Suspending a BaseAudioContext permits implementations to release some of its resources, and allows it to continue to operate later by invoking resume. Closing an AudioContext permits implementations to release all of its resources, after which it cannot be used or resumed again.
For example, this can involve waiting for the audio callbacks to fire regularly, or for the hardware to be ready for processing.
          This interface represents an audio graph whose
          AudioDestinationNode is routed to a real-time
          output device that produces a signal directed at the user. In most
          use cases, only a single AudioContext is used per
          document.
        
          An AudioContext is said to be allowed to
          start if the user agent and the system allow audio output in
          the current context. In other words, if the
          AudioContext control thread state is
          allowed to transition from suspended to
          running.
        
          For example, a user agent could require that an
          AudioContext control thread state change to
          running is 
          triggered by a user activation (as described in [[HTML]]).
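          For example, a page might resume a context from within a user activation, as in
          this sketch (the button selector and context are placeholders):
document.querySelector("button").addEventListener("click", function () {
  context.resume().then(function () {
    console.log("AudioContext is now " + context.state);
  });
});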
        
When creating an AudioContext, execute these steps:
control thread state to
              suspended on the AudioContext.
              suspended on the AudioContext.
              Sending a control message to start processing means executing the following steps:
running
              on the AudioContext.
              running.
                  statechange at the AudioContext.
                  It is unfortunately not possible to programmatically notify authors that the creation of the AudioContext failed. User-Agents are encouraged to log an informative message if they have access to a logging mechanism, such as a developer tools console.
This represents the number of seconds of processing latency incurred by the AudioContext passing the audio from the AudioDestinationNode to the audio subsystem. It does not include any additional latency that might be caused by any other processing between the output of the AudioDestinationNode and the audio hardware and specifically does not include any latency incurred by the audio graph itself.
For example, if the audio context is running at 44.1 kHz and the AudioDestinationNode implements double buffering internally and can process and output audio each render quantum, then the processing latency is \((2\cdot128)/44100 = 5.805 \mathrm{ ms}\), approximately.
The estimation in seconds of audio output latency, i.e., the interval between the time the UA requests the host system to play a buffer and the time at which the first sample in the buffer is actually processed by the audio output device. For devices such as speakers or headphones that produce an acoustic signal, this latter time refers to the time when a sample's sound is produced.
              The outputLatency attribute value depends on
              the platform and the connected hardware audio output device. The
              outputLatency attribute value does not change
              for the context's lifetime as long as the connected audio output
              device remains the same. If the audio output device is changed
              the outputLatency attribute value will be
              updated accordingly.
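              For illustration, both latency values can be inspected directly (context is
              a placeholder for an existing AudioContext):
console.log("processing latency: " + context.baseLatency + " s");
console.log("output latency: " + context.outputLatency + " s");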
            
              Returns a new AudioTimestamp instance
              containing two correlated context's audio stream position values:
              the contextTime
              member contains the time of the sample frame which is currently
              being rendered by the audio output device (i.e., output audio
              stream position), in the same units and origin as context's
              currentTime;
              the performanceTime
              member contains the time estimating the moment when the sample
              frame corresponding to the stored contextTime value
              was rendered by the audio output device, in the same units and
              origin as performance.now() (described in
              [[!hr-time-2]]).
            
              If the context's rendering graph has not yet processed a block of
              audio, then getOutputTimestamp call returns
              an AudioTimestamp instance with both members
              containing zero.
            
              After the context's rendering graph has started processing of
              blocks of audio, its currentTime
              attribute value always exceeds the contextTime
              value obtained from getOutputTimestamp
              method call.
            
              The value returned from getOutputTimestamp
              method can be used to get performance time estimation for the
              slightly later context's time value:
            
            function outputPerformanceTime(contextTime) {
                var timestamp = context.getOutputTimestamp();
                var elapsedTime = contextTime - timestamp.contextTime;
                return timestamp.performanceTime + elapsedTime * 1000;
            }
            
              In the above example the accuracy of the estimation depends on
              how close the argument value is to the current output audio
              stream position: the closer the given contextTime is
              to timestamp.contextTime, the better the accuracy of
              the obtained estimation.
            
              The difference between the values of the context's currentTime
              and the contextTime
              obtained from getOutputTimestamp
              method call cannot be considered as a reliable output latency
              estimation because currentTime
              may be incremented at non-uniform time intervals, so outputLatency
              attribute should be used instead.
            
              Suspends the progression of AudioContext's currentTime, allows any
              current context processing blocks that are already processed to
              be played to the destination, and then allows the system to
              release its claim on audio hardware. This is generally useful
              when the application knows it will not need the
              AudioContext for some time, and wishes to temporarily
              release system resource associated with the
              AudioContext. The promise resolves when the frame buffer
              is empty (has been handed off to the hardware), or immediately
              (with no other effect) if the context is already
              suspended. The promise is rejected if the context
              has been closed.
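              A minimal sketch of suspending a context to temporarily release the audio
              hardware (context is a placeholder for an existing AudioContext):
context.suspend().then(function () {
  console.log("suspended; state is " + context.state);
  // Later, context.resume() restores the progression of currentTime.
});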
            
When suspend is called, execute these steps:
closed reject the promise
              with InvalidStateError, abort these steps, returning
              promise.
              suspended,
              resolve promise, return it, and abort these steps.
              suspended.
              Running a control message to suspend an AudioContext means running these steps on the rendering thread:
suspended.
              suspended:
                    suspended.
                      statechange at the AudioContext.
                      
              While an AudioContext is suspended,
              MediaStreams will have their output ignored; that
              is, data will be lost by the real time nature of media streams.
              HTMLMediaElements will similarly have their output
              ignored until the system is resumed. AudioWorkletNodes and
              ScriptProcessorNodes will cease to have their processing
              handlers invoked while suspended, but will resume when the
              context is resumed. For the purpose of AnalyserNode window
              functions, the data is considered as a continuous stream - i.e.
              the resume()/suspend() does not cause
              silence to appear in the AnalyserNode's stream of data. In
              particular, calling AnalyserNode functions repeatedly when
              an AudioContext is suspended MUST return the same data.
            
              Closes the AudioContext, releasing the system
              resources it's using. This will not automatically release all
              AudioContext-created objects, but will suspend the
              progression of the AudioContext's currentTime, and stop
              processing audio data.
            
When close is called, execute these steps:
closed reject the promise
              with InvalidStateError, abort these steps, returning
              promise.
              closed,
              resolve promise, return it, and abort these steps.
              closed.
              Running a control message to close an AudioContext means running these steps on the rendering thread:
suspended.
              closed:
                    closed.
                      statechange at the AudioContext.
                      When an AudioContext has been closed, the implementation can choose to aggressively release more resources than when suspending.
              Creates a MediaElementAudioSourceNode
              given an HTMLMediaElement. As a consequence of calling this
              method, audio playback from the HTMLMediaElement will be
              re-routed into the processing graph of the
              AudioContext.
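              For example, a sketch re-routing an existing audio element through the graph
              (the element selector and context are placeholders):
var mediaElement = document.querySelector("audio");
var mediaSource = context.createMediaElementSource(mediaElement);
mediaSource.connect(context.destination);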
            
              Creates a MediaStreamAudioSourceNode.
            
              Creates a MediaStreamTrackAudioSourceNode.
            
MediaStreamTrack that will act as source.
                The value of its kind
                attribute must be equal to "audio", or an
                InvalidStateError exception MUST be thrown.
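              A non-normative sketch using live input from getUserMedia, showing both the
              stream-based and track-based factory methods (context is a placeholder for
              an existing AudioContext):
navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
  // Source node for the whole MediaStream.
  var streamSource = context.createMediaStreamSource(stream);
  // Source node for a single audio track; its kind must be "audio".
  var track = stream.getAudioTracks()[0];
  var trackSource = context.createMediaStreamTrackSource(track);
  // Either source node could be connected into the graph.
  trackSource.connect(context.destination);
});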
              
              Creates a MediaStreamAudioDestinationNode.
            
            The AudioContextOptions dictionary is used to
            specify a requested latency for an AudioContext.
          
Identify the type of playback, which affects tradeoffs between audio output latency and power consumption.
                The preferred value of the latencyHint is a value
                from AudioContextLatencyCategory. However, a double can
                also be specified for the number of seconds of latency for
                finer control to balance latency and power consumption. It is
                at the browser's discretion to interpret the number
                appropriately. The actual latency used is given by
                AudioContext's baseLatency attribute.
              
                Set the sampleRate
                to this value for the AudioContext that will be created.
                The supported values are the same as the sample rates for an
                AudioBuffer. A
                NotSupportedError exception MUST be thrown if the
                specified sample rate is not supported.
              
                If sampleRate
                is not specified, the preferred sample rate of the output
                device for this AudioContext is used.
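                For example, a context requesting interactive latency and a specific
                sample rate could be constructed as follows (the values are illustrative):
var context = new AudioContext({
  latencyHint: "interactive", // or a number of seconds, e.g. 0.02
  sampleRate: 44100
});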
              
The contextTime member represents a point in the time coordinate system of
            BaseAudioContext's currentTime.
            The performanceTime member represents a point in the time coordinate system of a
            Performance interface implementation (described in
            [[!hr-time-2]]).
            
          OfflineAudioContext is a particular type of
          BaseAudioContext for rendering/mixing-down
          (potentially) faster than real-time. It does not render to the audio
          hardware, but instead renders as quickly as possible, fulfilling the
          returned promise with the rendered result as an
          AudioBuffer.
        
          The OfflineAudioContext is constructed with the same arguments as
          AudioContext.createBuffer. A
          NotSupportedError exception MUST be thrown if any of the
          arguments is negative, zero, or outside its nominal range.
        
              Given the current connections and scheduled changes, starts
              rendering audio. The system shall ensure that the
              OfflineAudioContext is not garbage collected until
              either the promise is resolved and any callback function is
              called and completes, or until the suspend function
              is called.
            
              Although the primary method of getting the rendered audio data is
              via its promise return value, the instance will also fire an
              event named complete for legacy reasons.
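              A non-normative sketch rendering ten seconds of a 440 Hz tone offline:
var offlineContext = new OfflineAudioContext(2, 44100 * 10, 44100);
var osc = offlineContext.createOscillator();
osc.frequency.value = 440;
osc.connect(offlineContext.destination);
osc.start(0);
offlineContext.startRendering().then(function (renderedBuffer) {
  console.log("Rendered " + renderedBuffer.length + " sample-frames.");
});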
            
              When startRendering is
              called, the following steps must be performed on the control
              thread:
            
InvalidStateError, and abort these
              steps.
              When rendering an audio graph on another thread, the following steps MUST happen on a rendering thread that is created for the occasion.
AudioBuffer, with
              a number of channels, length and sample rate equal respectively
              to the numberOfChannels, length and
              sampleRate parameters used when this instance's
              constructor was called.
              length sample-frames of audio into
              buffer.
              complete at this instance, using an instance
                      of OfflineAudioCompletionEvent whose
                      renderedBuffer property is set to
                      buffer.
                      
              Schedules a suspension of the time progression in the audio
              context at the specified time and returns a promise. This is
              generally useful when manipulating the audio graph synchronously
              on OfflineAudioContext.
            
Note that the maximum precision of suspension is the size of the render quantum and the specified suspension time will be rounded down to the nearest render quantum boundary. For this reason, it is not allowed to schedule multiple suspends at the same quantized frame. Also, scheduling should be done while the context is not running to ensure precise suspension.
InvalidStateError.
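              For illustration, a non-normative sketch that suspends an offline render
              half-way through in order to change the graph synchronously:
var offlineContext = new OfflineAudioContext(1, 44100, 44100);
var osc = offlineContext.createOscillator();
var gain = offlineContext.createGain();
osc.connect(gain);
gain.connect(offlineContext.destination);
osc.start(0);
// Schedule the suspension before rendering starts.
offlineContext.suspend(0.5).then(function () {
  gain.gain.value = 0.1; // synchronous graph manipulation at 0.5 s
  offlineContext.resume();
});
offlineContext.startRendering().then(function (renderedBuffer) {
  // renderedBuffer holds one second of rendered audio
});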
              
              The size of the buffer in sample-frames. This is the same as the
              value of the length parameter for the constructor.
            
An EventHandler of type OfflineAudioCompletionEvent. It is the last event fired on an OfflineAudioContext.
            This is an Event object which is dispatched to
            OfflineAudioContext for legacy reasons.
          
                An AudioBuffer containing the rendered audio data.
              
renderedBuffer
                attribute of the event.
              
          AudioNodes are the building blocks of an AudioContext. This interface
          represents audio sources, the audio destination, and intermediate
          processing modules. These modules can be connected together to form
          processing graphs for rendering audio
          to the audio hardware. Each node can have inputs and/or
          outputs. A source node has no inputs and a single
          output. Most processing nodes such as filters will have one input and
          one output. Each type of AudioNode differs in the
          details of how it processes or synthesizes audio. But, in general, an
          AudioNode will process its inputs (if it has
          any), and generate audio for its outputs (if it has any).
        
          Each output has one or more channels. The exact number of channels
          depends on the details of the specific AudioNode.
        
          An output may connect to one or more AudioNode
          inputs, thus fan-out is supported. An input initially has no
          connections, but may be connected from one or more AudioNode
          outputs, thus fan-in is supported. When the
          connect() method is called to connect an output of an
          AudioNode to an input of an AudioNode, we call that a
          connection to the input.
        
Each AudioNode input has a specific number of channels at any given time. This number can change depending on the connection(s) made to the input. If the input has no connections then it has one channel which is silent.
          For each input, an AudioNode performs a
          mixing (usually an up-mixing) of all connections to that input.
          Please see  for more informative
          details, and the 
          section for normative requirements.
        
The processing of inputs and the internal operations of an AudioNode take place continuously with respect to AudioContext time, regardless of whether the node has connected outputs, and regardless of whether these outputs ultimately reach an AudioContext's AudioDestinationNode.
          For performance reasons, practical implementations will need to use
          block processing, with each AudioNode processing
          a fixed number of sample-frames of size block-size. In order
          to get uniform behavior across implementations, we will define this
          value explicitly. block-size is defined to be 128
          sample-frames which corresponds to roughly 3ms at a sample-rate of
          44.1 kHz.
        
AudioNodes can be created in two ways: by using the constructor for this particular interface, or by using the factory method on the BaseAudioContext or AudioContext.
            The BaseAudioContext passed as first argument of the
            constructor of an AudioNode is called the associated BaseAudioContext of the
            AudioNode to be created. Similarly, when using the factory
            method, the associated BaseAudioContext of the
            AudioNode is the BaseAudioContext this factory method
            is called on.
          
To create a new AudioNode of a particular type n using its constructor, with a BaseAudioContext c as first argument, and an associated option object option as second argument, from the relevant global of c, execute these steps:
To create a new AudioNode of a particular type n using its factory method, called on a BaseAudioContext c, execute these steps:
Initializing an object o of interface n that inherits from AudioNode means executing the following steps, given the arguments context and dict passed to the constructor of this interface.
disableNormalization or
                buffer and n is ConvolverNode,
                jump to the beginning of this loop.
                The associated interface for a factory method is the interface of the objects that are returned from this method. The associated option object for an interface is the option object that can be passed to the constructor for this interface.
          AudioNodes are EventTargets, as described in
          DOM [[!DOM]].
          This means that it is possible to dispatch events to
          AudioNodes the same way that other EventTargets
          accept events.
        
computedNumberOfChannels is computed as the
            maximum of the number of channels of all connections. In this mode
            channelCount is ignored.
          computedNumberOfChannels is the exact value as
            specified in channelCount.
          destination parameter is the
                AudioNode to connect to. If the
                destination parameter is an
                AudioNode that has been created using
                another AudioContext, an
                InvalidAccessError MUST be thrown. That is,
                AudioNodes cannot be shared between
                AudioContexts.
              output parameter is an index describing which
                output of the AudioNode from which to
                connect. If this parameter is
                out-of-bound, an IndexSizeError exception MUST be
                thrown. It is possible to connect an
                AudioNode output to more than one input
                with multiple calls to connect(). Thus, "fan-out" is supported.
              input parameter is an index describing which
                input of the destination AudioNode to
                connect to. If this parameter is
                out-of-bounds, an IndexSizeError exception MUST be
                thrown. It is possible to connect an
                AudioNode to another
                AudioNode which creates a cycle:
                an AudioNode may connect to another
                AudioNode, which in turn connects back to
                the first AudioNode. This is allowed only
                if there is at least one DelayNode in the
                cycle or a
                NotSupportedError exception MUST be thrown.
              There can only be one connection between a given output of one specific node and a given input of another specific node. Multiple connections with the same termini are ignored. For example:
    nodeA.connect(nodeB);
    nodeA.connect(nodeB);
    
            will have the same effect as
      nodeA.connect(nodeB);
    
            
              This method returns the destination
              AudioNode object.
            
              Connects the AudioNode to an
              AudioParam, controlling the parameter value
              with an audio-rate signal.
            
destination parameter is the
                AudioParam to connect to. This method does
                not return the destination
                AudioParam object. If destination belongs to an
                AudioNode that belongs to a BaseAudioContext that
                is different from the BaseAudioContext that has created
                the AudioNode on which this method was called, an
                InvalidAccessError MUST be thrown.
              output parameter is an index describing which
                output of the AudioNode from which to
                connect. If the
                parameter is out-of-bound, an
                IndexSizeError exception MUST be thrown.
              
              It is possible to connect an AudioNode output
              to more than one AudioParam with multiple
              calls to connect(). Thus, "fan-out" is supported.
            
              It is possible to connect more than one
              AudioNode output to a single
              AudioParam with multiple calls to connect().
              Thus, "fan-in" is supported.
            
              An AudioParam will take the rendered audio
              data from any AudioNode output connected to
              it and convert it to mono by down-mixing
              if it is not already mono, then mix it together with other such
              outputs and finally will mix with the intrinsic
              parameter value (the value the
              AudioParam would normally have without any
              audio connections), including any timeline changes scheduled for
              the parameter.
            
              There can only be one connection between a given output of one
              specific node and a specific AudioParam.
              Multiple connections with the same termini are ignored. For
              example:
            
      nodeA.connect(param);
      nodeA.connect(param);
    
            will have the same effect as
      nodeA.connect(param);
    
          
              Disconnects all outgoing connections from the
              AudioNode.
            
              Disconnects a single output of the AudioNode
              from any other AudioNode or
              AudioParam objects to which it is connected.
            
AudioNode to disconnect. It disconnects all
                outgoing connections from the given output. If this parameter is out-of-bounds, an
                IndexSizeError exception MUST be thrown.
              
              Disconnects all outputs of the AudioNode that
              go to a specific destination AudioNode.
            
destination parameter is the
                AudioNode to disconnect. It disconnects all
                outgoing connections to the given destination.
                If there is no connection to
                destination, an InvalidAccessError
                exception MUST be thrown.
              
              Disconnects a specific output of the
              AudioNode from a specific destination
              AudioNode.
            
destination parameter is the
                AudioNode to disconnect. If there is no connection to the
                destination from the given output, an
                InvalidAccessError exception MUST be
                thrown.
              output parameter is an index describing which
                output of the AudioNode from which to
                disconnect. If this parameter is
                out-of-bound, an IndexSizeError exception MUST be
                thrown.
              
              Disconnects a specific output of the
              AudioNode from a specific input of some
              destination AudioNode.
            
destination parameter is the
                AudioNode to disconnect. If there is no connection to the
                destination from the given output to the given
                input, an InvalidAccessError exception MUST be
                thrown.
              output parameter is an index describing which
                output of the AudioNode from which to
                disconnect. If this parameter is
                out-of-bound, an IndexSizeError exception MUST be
                thrown.
              input parameter is an index describing which
                input of the destination AudioNode to
                disconnect. If this parameter is
                out-of-bounds, an IndexSizeError exception MUST be
                thrown.
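              For illustration, the different overloads can be used as follows (nodeA and
              nodeB are placeholders; nodeA is assumed to have at least two outputs, for
              example a ChannelSplitterNode, and nodeB at least one input):
nodeA.disconnect();            // remove every outgoing connection
nodeA.disconnect(0);           // remove all connections from output 0
nodeA.disconnect(nodeB);       // remove all connections going to nodeB
nodeA.disconnect(nodeB, 1, 0); // remove only output 1 -> input 0 of nodeB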
              
              Disconnects all outputs of the AudioNode that
              go to a specific destination AudioParam. The
              contribution of this AudioNode to the
              computed parameter value goes to 0 when this operation takes
              effect. The intrinsic parameter value is not affected by this
              operation.
            
destination parameter is the
                AudioParam to disconnect. If there is no connection to the
                destination, an InvalidAccessError
                exception MUST be thrown.
              
              Disconnects a specific output of the
              AudioNode from a specific destination
              AudioParam. The contribution of this
              AudioNode to the computed parameter value
              goes to 0 when this operation takes effect. The intrinsic
              parameter value is not affected by this operation.
            
destination parameter is the
                AudioParam to disconnect. If there is no connection to the
                destination, an InvalidAccessError
                exception MUST be thrown.
              output parameter is an index describing which
                output of the AudioNode from which to
                disconnect. If the
                parameter is out-of-bound, an
                IndexSizeError exception MUST be thrown.
              
              The BaseAudioContext which owns this
              AudioNode.
            
              The number of inputs feeding into the
              AudioNode. For source nodes, this
              will be 0. This attribute is predetermined for many
              AudioNode types, but some
              AudioNodes, like the
              ChannelMergerNode and the
              AudioWorkletNode, have variable number of
              inputs.
            
              The number of outputs coming out of the
              AudioNode. This attribute is predetermined
              for some AudioNode types, but can be
              variable, like for the ChannelSplitterNode
              and the AudioWorkletNode.
            
              channelCount is the number of channels used when
              up-mixing and down-mixing connections to any inputs to the node.
              The default value is 2 except for specific nodes where its value
              is specially determined. This attribute has no effect for nodes
              with no inputs. If this value is set to
              zero or to a value greater than the implementation's maximum
              number of channels the implementation MUST throw a
              NotSupportedError exception.
            
In addition, some nodes have additional channelCount constraints on the possible values for the channel count:
The behavior depends on whether the destination node is the destination of an AudioContext or an OfflineAudioContext.
IndexSizeError exception must be
                    thrown for any attempt to set the count outside this
                    range.
                  InvalidStateError exception MUST
                    be thrown for any attempt to change the value.
                  InvalidStateError exception MUST be
                thrown for any attempt to change the value.
              InvalidStateError exception MUST be
                thrown for any attempt to change the value.
              NotSupportedError exception MUST be
                thrown for any attempt to change the value.
              NotSupportedError
                exception MUST be thrown for any attempt to change it to a
                value greater than two.
              InvalidStateError exception MUST be
                thrown for any attempt to change the value.
              NotSupportedError
                exception MUST be thrown for any attempt to change it to a
                value greater than two.
              channelCountMode determines how channels will be counted when up-mixing and down-mixing connections to any inputs to the node. This attribute has no effect for nodes with no inputs.
In addition, some nodes have additional channelCountMode constraints on the possible values for the channel count mode:
InvalidStateError exception MUST be
                thrown for any attempt to change the value.
              InvalidStateError
                exception must be thrown for any attempt to change the
                value.
              InvalidStateError
                exception must be thrown for any attempt to change the
                value.
              NotSupportedError
                exception MUST be thrown for any attempt to change the
                value.
              NotSupportedError
                exception MUST be thrown for any attempt to set it to
                "max".
              InvalidStateError
                exception MUST be thrown for any attempt to change the
                value.
              NotSupportedError
                exception MUST be thrown for any attempt to set it to
                "max".
              channelInterpretation determines how individual channels will be treated when up-mixing and down-mixing connections to any inputs to the node. This attribute has no effect for nodes with no inputs.
See the section for more information on this attribute.
              When attempting to set this attribute on a
              ChannelSplitterNode, an InvalidStateError
              MUST be thrown.
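              For example, a sketch configuring the mixing behavior of a GainNode's input
              (context is a placeholder for an existing BaseAudioContext):
var gain = context.createGain();
gain.channelCount = 1;
gain.channelCountMode = "explicit";      // use channelCount exactly
gain.channelInterpretation = "discrete"; // no speaker-aware up/down-mixing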
            
This specifies the options that can be used in constructing all AudioNodes. All members are optional. However, the specific values used for each node depends on the actual node.
The following behaviors provide a normative description of the conditions under which an AudioNode is alive, meaning that it MUST be retained in the graph by an implementation. Where these conditions do not apply, AudioNodes MAY be released by an implementation.
There are several types of references:
AudioBufferSourceNodes,
            MediaElementAudioSourceNodes,
            MediaStreamAudioSourceNodes and
            OscillatorNodes. These nodes maintain a
            playing reference to themselves while they are currently
            playing.
            running.
            AudioNode maintains on itself as long as it has
            any internal processing state which has not yet been emitted. For
            example, a ConvolverNode has a tail which
            continues to play even after receiving silent input (think about
            clapping your hands in a large concert hall and continuing to hear
            the sound reverberate throughout the hall). Some
            AudioNodes have this property. Please see
            details for specific nodes.
            MediaStreams keep a
              MediaStreamAudioSourceNode alive as long as the underlying
              MediaStreamTrack that is playing through the
              MediaStreamAudioSourceNode has not ended
              (as per [[!mediacapture-streams]]).
            HTMLMediaElements keep their associated
              MediaElementAudioSourceNode alive as long as the
              HTMLMediaElement is in a state where audio could
              ever be played in the future.
              
                  An HTMLMediaElement that has its
                  src attribute set to "", and all
                  its references dropped allows the
                  MediaElementAudioSourceNode to be released as well
                  (granted nothing keeps the MediaElementAudioSourceNode
                  alive).
                
            Any AudioNodes which are connected in a cycle
            and are directly or indirectly connected to a
            AudioDestinationNode or
            MediaStreamAudioDestinationNode within the
            AudioContext will stay alive as long as the
            AudioContext is alive.
          
The uninterrupted operation of AudioNodes implies that as long as live references exist to a node, the node will continue processing its inputs and evolving its internal state even if it is disconnected from the audio graph. Since this processing will consume CPU and power, developers should carefully consider the resource usage of disconnected nodes. In particular, it is a good idea to minimize resource consumption by explicitly putting disconnected nodes into a stopped state when possible.
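The following non-normative sketch illustrates this advice, assuming an existing AudioContext named context: a source node that has been disconnected keeps its playing reference (and keeps consuming resources) until it is explicitly stopped.

// Assumes an existing AudioContext named context.
const osc = new OscillatorNode(context);
const amp = new GainNode(context);
osc.connect(amp).connect(context.destination);
osc.start();

// Later: disconnecting alone does not stop the oscillator; it still holds a
// playing reference and continues to run, consuming CPU and power.
amp.disconnect();

// Explicitly stopping the source allows the implementation to release it.
osc.stop();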
            When an AudioNode has no references it will be
            deleted. Before it is deleted, it will disconnect itself from any
            other AudioNodes which it is connected to. In
            this way it releases all connection references (3) it has to other
            nodes.
          
            Regardless of any of the above references, it can be assumed that
            the AudioNode will be deleted when its
            AudioContext is deleted.
          
          This is an AudioNode representing the final audio
          destination and is what the user will ultimately hear. It can often
          be considered as an audio output device which is connected to
          speakers. All rendered audio to be heard will be routed to this node,
          a "terminal" node in the AudioContext's routing
          graph. There is only a single AudioDestinationNode per
          AudioContext, provided through the
          destination attribute of
          AudioContext.
        
          The output of an AudioDestinationNode is produced
          by summing its input, allowing the output of an
          AudioContext to be captured into, for
          example, a MediaStreamAudioDestinationNode, or a
          MediaRecorder (described in [[mediastream-recording]]).
        
      numberOfInputs  : 1
      numberOfOutputs : 1
        The AudioDestinationNode can be either the destination of an AudioContext or OfflineAudioContext, and the channel properties depend on what the context is.
For an AudioContext, the defaults are
      channelCount = 2
      channelCountMode = "explicit"
      channelInterpretation = "speakers"
        
          The channelCount can be set to any value less than or equal to
          maxChannelCount.
          An IndexSizeError exception
          MUST be thrown if this value is not within the valid range.
          As a concrete example, if the audio hardware supports 8-channel
          output, then we may set channelCount to 8, and render 8
          channels of output.
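A non-normative sketch of the example above, assuming an existing AudioContext named context:

// Assumes an existing AudioContext named context.
const destination = context.destination;
if (destination.maxChannelCount >= 8) {
  // The hardware end-point supports at least 8 channels,
  // so request 8 channels of output.
  destination.channelCount = 8;
}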
        
For an OfflineAudioContext, the defaults are
      channelCount = numberOfChannels
      channelCountMode = "explicit"
      channelInterpretation = "speakers"
        
          where numberOfChannels is the number of channels
          specified when constructing the OfflineAudioContext. This
          value may not be changed; a
          NotSupportedError exception MUST be thrown if channelCount is changed to a
          different value.
        
              The maximum number of channels that the channelCount
              attribute can be set to. An
              AudioDestinationNode representing the audio
              hardware end-point (the normal case) can potentially output more
              than 2 channels of audio if the audio hardware is multi-channel.
              maxChannelCount is the maximum number of channels
              that this hardware is capable of supporting.
            
          AudioParam controls an individual aspect of an
          AudioNode's functioning, such as volume. The
          parameter can be set immediately to a particular value using the
          value attribute. Or, value changes can be scheduled to
          happen at very precise times (in the coordinate system of
          AudioContext's currentTime attribute), for
          envelopes, volume fades, LFOs, filter sweeps, grain windows, etc. In
          this way, arbitrary timeline-based automation curves can be set on
          any AudioParam. Additionally, audio signals from
          the outputs of AudioNodes can be connected to an
          AudioParam, summing with the intrinsic
          parameter value.
        
          Some synthesis and processing AudioNodes have
          AudioParams as attributes whose values must be taken
          into account on a per-audio-sample basis. For other
          AudioParams, sample-accuracy is not important and the
          value changes can be sampled more coarsely. Each individual
          AudioParam will specify that it is either an
          a-rate parameter, meaning that its values must be taken
          into account on a per-audio-sample basis, or a k-rate
          parameter.
        
          Implementations must use block processing, with each
          AudioNode processing one render quantum.
        
For each render quantum, the value of a k-rate parameter must be sampled at the time of the very first sample-frame, and that value must be used for the entire block. a-rate parameters must be sampled for each sample-frame of the block.
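As a non-normative illustration, the difference can be sketched as follows; computeValueAtTime() is a hypothetical helper standing in for the timeline evaluation defined below, and 128 is used as the render quantum size.

// Non-normative sketch of sampling a parameter over one render quantum.
// computeValueAtTime() is a hypothetical helper representing timeline evaluation.
const RENDER_QUANTUM = 128;
function sampleParam(param, blockStartTime, sampleRate, isARate) {
  const values = new Float32Array(RENDER_QUANTUM);
  if (isARate) {
    // a-rate: evaluate the timeline for every sample-frame of the block.
    for (let i = 0; i < RENDER_QUANTUM; ++i)
      values[i] = computeValueAtTime(param, blockStartTime + i / sampleRate);
  } else {
    // k-rate: evaluate once at the first sample-frame and hold for the block.
    values.fill(computeValueAtTime(param, blockStartTime));
  }
  return values;
}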
          Each AudioParam includes minValue and maxValue attributes that
          together form the nominal range for the parameter. In effect,
          the value of the parameter is clamped to the range \([\mathrm{minValue},
          \mathrm{maxValue}]\). See the section Computation of Value for full details.
        
          An AudioParam maintains a time-ordered event list which
          is initially empty. The times are in the time coordinate system of
          the AudioContext's currentTime attribute. The
          events define a mapping from time to value. The following methods can
          change the event list by adding a new event into the list of a type
          specific to the method. Each event has a time associated with it, and
          the events will always be kept in time-order in the list. These
          methods will be called automation methods:
        
The following rules will apply when calling these methods:
NotSupportedError exception MUST be thrown. In
            other words, it's not ok to schedule a value curve during a time
            period containing other events.
          NotSupportedError exception MUST be thrown if any
            automation method is called at a time which is inside of
            the time interval of a SetValueCurve event at time T and
            duration D.
          
              The parameter's floating-point value. This attribute is
              initialized to the defaultValue.
            
              The effect of setting this attribute is equivalent to calling
              setValueAtTime() with the current
              AudioContext's currentTime and the
              requested value. Subsequent accesses to this attribute's getter
              will return the same value.
            
              Initial value for the value attribute.
            
              The nominal minimum value that the parameter can take. Together
              with maxValue, this forms the nominal range
              for this parameter.
            
              The nominal maximum value that the parameter can take. Together
              with minValue, this forms the nominal range
              for this parameter.
            
Schedules a parameter value change at the given time.
BaseAudioContext's currentTime attribute at
                which the parameter changes to the given value. A RangeError exception MUST be thrown if
                startTime is negative or is not a finite
                number. If startTime is less than currentTime, it is
                clamped to currentTime.
              
              If there are no more events after this SetValue event,
              then for \(t \geq T_0\), \(v(t) = V\), where \(T_0\) is the
              startTime parameter and \(V\) is the
              value parameter. In other words, the value will
              remain constant.
            
If the next event (having time \(T_1\)) after this SetValue event is not of type LinearRampToValue or ExponentialRampToValue, then, for \(T_0 \leq t < T_1\):
              $$
                v(t) = V
              $$
            
            In other words, the value will remain constant during this time interval, allowing the creation of "step" functions.
              If the next event after this SetValue event is of type
              LinearRampToValue or ExponentialRampToValue
              then please see 
              linearRampToValueAtTime or 
              exponentialRampToValueAtTime, respectively.
            
Schedules a linear continuous change in parameter value from the previous scheduled parameter value to the given value.
AudioContext's currentTime attribute
                at which the automation ends. A
                RangeError exception MUST be thrown if endTime is
                negative or is not a finite number. If
                endTime is less than currentTime, it is
                clamped to currentTime.
              
              The value during the time interval \(T_0 \leq t < T_1\) (where
              \(T_0\) is the time of the previous event and \(T_1\) is the
              endTime parameter passed into this method) will be
              calculated as:
            
              $$
                v(t) = V_0 + (V_1 - V_0) \frac{t - T_0}{T_1 - T_0}
              $$
            
              Where \(V_0\) is the value at the time \(T_0\) and \(V_1\) is the
              value parameter passed into this method.
            
If there are no more events after this LinearRampToValue event then for \(t \geq T_1\), \(v(t) = V_1\).
              If there is no event preceding this event, the linear ramp
              behaves as if setValueAtTime(value, currentTime)
              were called where value is the current value of the
              attribute and currentTime is the context currentTime at the time
              linearRampToValueAtTime is called.
            
If the preceding event is a SetTarget event, \(T_0\) and \(V_0\) are chosen from the current time and value of SetTarget automation. That is, if the SetTarget event has not started, \(T_0\) is the start time of the event, and \(V_0\) is the value just before the SetTarget event starts. In this case, the LinearRampToValue event effectively replaces the SetTarget event. If the SetTarget event has already started, \(T_0\) is the current context time, and \(V_0\) is the current SetTarget automation value at time \(T_0\). In both cases, the automation curve is continuous.
Schedules an exponential continuous change in parameter value from the previous scheduled parameter value to the given value. Parameters representing filter frequencies and playback rate are best changed exponentially because of the way humans perceive sound.
              The value during the time interval \(T_0 \leq t < T_1\) (where
              \(T_0\) is the time of the previous event and \(T_1\) is the
              endTime parameter passed into this method) will be
              calculated as:
            
              $$
                v(t) = V_0 \left(\frac{V_1}{V_0}\right)^\frac{t - T_0}{T_1 - T_0}
              $$
            
              where \(V_0\) is the value at the time \(T_0\) and \(V_1\) is the
              value parameter passed into this method. If \(V_0\)
              and \(V_1\) have opposite signs or if \(V_0\) is zero, then
              \(v(t) = V_0\) for \(T_0 \leq t < T_1\).
            
This also implies an exponential ramp to 0 is not possible. A good approximation can be achieved using setTargetAtTime with an appropriately chosen time constant.
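A non-normative sketch of such an approximation (gainNode and context are assumed to exist); the time constant is chosen so that the value is within roughly 1% of zero by the end of the intended fade:

// Assumes gainNode (a GainNode) and context (an AudioContext) exist.
const t = context.currentTime;
const fadeDuration = 1.0;
gainNode.gain.setValueAtTime(1.0, t);
// exponentialRampToValueAtTime(0, ...) would throw a RangeError;
// instead, approach 0 asymptotically with a small time constant.
gainNode.gain.setTargetAtTime(0.0, t, fadeDuration / 5);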
If there are no more events after this ExponentialRampToValue event then for \(t \geq T_1\), \(v(t) = V_1\).
              If there is no event preceding this event, the exponential ramp
              behaves as if setValueAtTime(value, currentTime)
              were called where value is the current value of the
              attribute and currentTime is the context currentTime at the time
              exponentialRampToValueAtTime is called.
            
If the preceding event is a SetTarget event, \(T_0\) and \(V_0\) are chosen from the current time and value of SetTarget automation. That is, if the SetTarget event has not started, \(T_0\) is the start time of the event, and \(V_0\) is the value just before the SetTarget event starts. In this case, the ExponentialRampToValue event effectively replaces the SetTarget event. If the SetTarget event has already started, \(T_0\) is the current context time, and \(V_0\) is the current SetTarget automation value at time \(T_0\). In both cases, the automation curve is continuous.
RangeError
                exception MUST be thrown if this value is equal to 0.
              AudioContext's currentTime attribute
                where the exponential ramp ends. A
                RangeError exception MUST be thrown if endTime is
                negative or is not a finite number. If
                endTime is less than currentTime, it is
                clamped to currentTime.
              Start exponentially approaching the target value at the given time with a rate having the given time constant. Among other uses, this is useful for implementing the "decay" and "release" portions of an ADSR envelope. Please note that the parameter value does not immediately change to the target value at the given time, but instead gradually changes to the target value.
AudioContext's currentTime attribute.
                A RangeError exception MUST be thrown
                if startTime is negative or is not a finite
                number. If startTime is less than currentTime, it is
                clamped to currentTime.
If timeConstant is zero, the output
                value jumps immediately to the final value.
                More precisely, timeConstant is the time it takes a first-order linear continuous time-invariant system to reach the value \(1 - 1/e\) (around 63.2%) given a step input response (transition from 0 to 1 value).
              During the time interval: \(T_0 \leq t\), where \(T_0\) is the
              startTime parameter:
            
              $$
                v(t) = V_1 + (V_0 - V_1)\, e^{-\left(\frac{t - T_0}{\tau}\right)}
              $$
            
            
              where \(V_0\) is the initial value (the .value
              attribute) at \(T_0\) (the startTime parameter),
              \(V_1\) is equal to the target parameter, and
              \(\tau\) is the timeConstant parameter.
            
              If a LinearRampToValue or
              ExponentialRampToValue event follows this event, the
              behavior is described in 
              linearRampToValueAtTime or 
              exponentialRampToValueAtTime, respectively. For all
              other events, the SetTarget event ends at the time of
              the next event.
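A non-normative sketch of a simple attack/decay/sustain/release envelope built with these methods (gainNode and context are assumed to exist; the envelope times are arbitrary):

// Assumes gainNode (a GainNode) and context (an AudioContext) exist.
const now = context.currentTime;
const attack = 0.02, decay = 0.1, sustain = 0.6, release = 0.3;
gainNode.gain.setValueAtTime(0, now);
gainNode.gain.linearRampToValueAtTime(1, now + attack);           // attack
gainNode.gain.setTargetAtTime(sustain, now + attack, decay / 3);  // decay toward sustain
// On note-off at some later time tOff:
//   gainNode.gain.setTargetAtTime(0, tOff, release / 3);         // release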
            
Sets an array of arbitrary parameter values starting at the given time for the given duration. The number of values will be scaled to fit into the desired duration.
A sequence of float values representing a parameter value curve. These values will apply starting at the given time and lasting for the given duration. When this method is called, an internal copy of the curve is created for automation purposes. Subsequent modifications of the contents of the passed-in array therefore have no effect on the AudioParam.
                  An InvalidStateError
                  MUST be thrown if this attribute is a
                  sequence<float> object that has a length
                  less than 2.
                
AudioContext's currentTime attribute
                at which the value curve will be applied. A RangeError exception MUST be thrown if
                startTime is negative or is not a finite
                number. If startTime is less than currentTime, it is
                clamped to currentTime.
              RangeError exception MUST be thrown if
                duration is not strictly positive or is not a
                finite number.
              
              Let \(T_0\) be startTime, \(T_D\) be
              duration, \(V\) be the values array,
              and \(N\) be the length of the values array. Then,
              during the time interval: \(T_0 \le t < T_0 + T_D\), let
            
              $$
                \begin{align*} k &= \left\lfloor \frac{N - 1}{T_D}(t-T_0) \right\rfloor \\
                \end{align*}
              $$
            
            Then \(v(t)\) is computed by linearly interpolating between \(V[k]\) and \(V[k+1]\).
After the end of the curve time interval (\(t \ge T_0 + T_D\)), the value will remain constant at the final curve value, until there is another automation event (if any).
              An implicit call to 
              setValueAtTime is made at time \(T_0 + T_D\) with
              value \(V[N-1]\) so that following automations will start from
              the end of the 
              setValueCurveAtTime event.
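A non-normative usage sketch applying a raised-cosine fade-out over half a second (param and context are assumed to exist):

// Assumes param (an AudioParam) and context (an AudioContext) exist.
const curve = new Float32Array(256);
for (let i = 0; i < curve.length; ++i)
  curve[i] = 0.5 * (1 + Math.cos(Math.PI * i / (curve.length - 1)));
param.setValueCurveAtTime(curve, context.currentTime, 0.5);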
            
              Cancels all scheduled parameter changes with times greater than
              or equal to cancelTime. Cancelling a scheduled
              parameter change means removing the scheduled event from the
              event list. Any active automations whose event time is less than
              cancelTime are also cancelled, and such
              cancellations may cause discontinuities because the original
              value (from before such automation) is restored immediately. Any
              hold values scheduled by cancelAndHoldAtTime
              are also removed if the hold time occurs after
              cancelTime.
            
AudioContext's currentTime attribute.
                A RangeError exception MUST be thrown
                if cancelTime is negative or is not a finite
                number. If cancelTime is less than currentTime, it is
                clamped to currentTime.
              
              This is similar to 
              cancelScheduledValues in that it cancels all scheduled
              parameter changes with times greater than or equal to
              cancelTime. However, in addition, the automation
              value that would have happened at cancelTime is then
              propagated for all future time until other automation events are
              introduced.
            
              The behavior of the timeline in the face of
              cancelAndHoldAtTime is quite complicated when
              automations are running and new automations can be introduced at
              any time after calling cancelAndHoldAtTime and
              before cancelTime is reached. The behavior of
              cancelAndHoldAtTime is therefore specified by the
              following algorithm.
            
              Let \(t_c\) be the value of cancelTime. Then:
              If the last event at or before \(t_c\) is a
                setTarget event, implicitly insert a
                setValueAtTime event at time \(t_c\) with the value
                that the setTarget would have at time \(t_c\).
              If the last event at or before \(t_c\) is a
                setValueCurve with a start time of \(t_3\) and a
                duration of \(d\), effectively replace this event with a
                setValueCurve event with a start time of \(t_3\) and
                a new duration of \(t_c-t_3\). However, this is not a true
                replacement; this automation must take care to produce the same
                output as the original, and not one computed using a different
                duration. (That would cause sampling of the value curve in a
                slightly different way, producing different results.)
              If no events are added, then the automation value after
              cancelAndHoldAtTime is the constant value that
              the original timeline would have had at time \(t_c\).
            
AudioContext's currentTime attribute.
                A RangeError exception MUST be thrown
                if cancelTime is negative or is not a finite
                number. If cancelTime is less than currentTime, it is
                clamped to currentTime.
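A non-normative sketch contrasting the two cancellation methods (param and context are assumed to exist):

// Assumes param (an AudioParam) and context (an AudioContext) exist.
const now = context.currentTime;
param.setValueAtTime(0, now);
param.linearRampToValueAtTime(1, now + 2);

// cancelScheduledValues(now + 1) would remove the ramp entirely and may
// cause a discontinuity as the earlier value is restored.

// cancelAndHoldAtTime removes the ramp but holds the value the ramp
// would have reached at now + 1 (here, 0.5) until further automation.
param.cancelAndHoldAtTime(now + 1);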
There are two different kinds of AudioParams, simple parameters and compound parameters. Simple parameters (the default) are used on their own to compute the final audio output of an AudioNode. Compound parameters are AudioParams that are used with other AudioParams to compute a value that is then used as an input to compute the output of an AudioNode.
The computedValue is the final value controlling the audio DSP and is computed by the audio rendering thread during each rendering time quantum. It must be internally computed as follows:
value attribute, or, if there are any scheduled
            parameter changes (automation events) with times before or at this
            time, the value as calculated from these events. When read, the
            value attribute always returns the intrinsic
            value for the current time. If automation events are removed from a
            given time range, then the intrinsic value will remain
            unchanged and stay at its previous value until either the
            value attribute is directly set, or automation events
            are added for the time range.
            AudioParam buffer.
The nominal range for a computedValue is the range of values this parameter can effectively have. For simple parameters, the computedValue is clamped to the nominal range for this parameter. Compound parameters have their final value clamped to their nominal range after having been computed from the different AudioParam values they are composed of.
When automation methods are used, clamping is still applied. However, the automation is run as if there were no clamping at all. Only when the automation values are to be applied to the output is the clamping done as specified above.
For example, consider a node \(N\) with an AudioParam \(p\) having a nominal range of \([0, 1]\), and the following automation sequence:
            N.p.setValueAtTime(0, 0);
            N.p.linearRampToValueAtTime(4, 1);
            N.p.linearRampToValueAtTime(0, 2)
          The initial slope of the curve is 4, until it reaches the maximum value of 1, at which time the output is held constant. Finally, near time 2, the slope of the curve is -4. This is illustrated in the graph below, where the dashed line indicates what would have happened without clamping, and the solid line indicates the actual expected behavior of the AudioParam due to clamping to the nominal range.
            
            
var curveLength = 44100;
var curve = new Float32Array(curveLength);
for (var i = 0; i < curveLength; ++i)
    curve[i] = Math.sin(Math.PI * i / curveLength);
var t0 = 0;
var t1 = 0.1;
var t2 = 0.2;
var t3 = 0.3;
var t4 = 0.325;
var t5 = 0.5;
var t6 = 0.6;
var t7 = 0.7;
var t8 = 1.0;
var timeConstant = 0.1;
param.setValueAtTime(0.2, t0);
param.setValueAtTime(0.3, t1);
param.setValueAtTime(0.4, t2);
param.linearRampToValueAtTime(1, t3);
param.linearRampToValueAtTime(0.8, t4);
param.setTargetAtTime(.5, t4, timeConstant);
// Compute where the setTargetAtTime will be at time t5 so we can make
// the following exponential start at the right point so there's no
// jump discontinuity.  From the spec, we have
//   v(t) = 0.5 + (0.8 - 0.5)*exp(-(t-t4)/timeConstant)
// Thus v(t5) = 0.5 + (0.8 - 0.5)*exp(-(t5-t4)/timeConstant)
param.setValueAtTime(0.5 + (0.8 - 0.5)*Math.exp(-(t5 - t4)/timeConstant), t5);
param.exponentialRampToValueAtTime(0.75, t6);
param.exponentialRampToValueAtTime(0.05, t7);
param.setValueCurveAtTime(curve, t7, t8 - t7);
        The interface represents the common features of the source nodes such as AudioBufferSourceNode, ConstantSourceNode, and OscillatorNode.
          Before a source is started (by calling start),
          the source node must output silence (0). After a source has been
          stopped (by calling stop),
          the source must then output silence (0).
        
AudioScheduledSourceNode cannot be instantiated directly, but is instead extended by the concrete interfaces for the source nodes.
              A property used to set the EventHandler (described
              in 
              HTML [[!HTML]]) for the ended event that is dispatched
              to AudioScheduledSourceNode node types. When
              the source node has stopped playing (as determined by the
              concrete node), an event of type Event (described in
              
              HTML [[!HTML]]) will be dispatched to the event
              handler.
            
              For all AudioScheduledSourceNodes, the
              onended event is dispatched when the stop time
              determined by stop()
              is reached. For an AudioBufferSourceNode, the
              event is also dispatched when the
              duration has been reached or when the entire
              buffer has been played.
            
              Schedules a sound to playback at an exact time.
              start may only be called once and MUST be called before stop is called
              or an InvalidStateError exception MUST be thrown.
            
when parameter describes at what time
                (in seconds) the sound should start playing. It is in the same
                time coordinate system as the
                AudioContext's currentTime attribute.
                When the signal emitted by the AudioScheduledSourceNode
                depends on the sound's start time, the exact value of
                when is always used without rounding to the
                nearest sample frame. If 0 is passed in for this value or if
                the value is less than currentTime, then the sound will
                start playing immediately. A
                RangeError exception MUST be thrown if when is
                negative.
              
              Schedules a sound to stop playback at an exact time. If
              stop is called again after already having been
              called, the last invocation will be the only one applied; stop
              times set by previous calls will not be applied, unless the
              buffer has already stopped prior to any subsequent calls. If the
              buffer has already stopped, further calls to stop
              will have no effect. If a stop time is reached prior to the
              scheduled start time, the sound will not play.
            
when parameter describes at what time
                (in seconds) the source should stop playing. It is in the same
                time coordinate system as the
                AudioContext's currentTime attribute.
                If 0 is passed in for this value or if the value is less than
                currentTime,
                then the sound will stop playing immediately. A RangeError exception MUST be thrown if
                when is negative.
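A non-normative sketch scheduling one second of output from a source node (context is assumed to exist):

// Assumes context (an AudioContext) exists.
const source = new ConstantSourceNode(context);
source.connect(context.destination);
source.onended = () => console.log('source finished');
source.start(context.currentTime + 0.1);  // begin 100 ms from now
source.stop(context.currentTime + 1.1);   // stop one second later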
              
          Changing the gain of an audio signal is a fundamental operation in
          audio applications. The GainNode is one of the building
          blocks for creating mixers. This
          interface is an AudioNode with a single input and
          single output:
        
      numberOfInputs  : 1
      numberOfOutputs : 1
      channelCountMode = "max";
      channelInterpretation = "speakers";
          Each sample of each channel of the input data of the
          GainNode MUST be multiplied by the
          computedValue of the gain
          AudioParam.
        
This node has no tail-time reference.
Let gain be a new GainNode object. Initialize gain, and return gain.
Represents the amount of gain to apply. This parameter is a-rate. Its nominal range is (-\(\infty\), +\(\infty\)).
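A non-normative sketch of a two-input mixer built from GainNodes (context, source1 and source2 are assumed to exist):

// Assumes context (an AudioContext), source1 and source2 (source AudioNodes) exist.
const gain1 = new GainNode(context, { gain: 0.75 });
const gain2 = new GainNode(context, { gain: 0.25 });
source1.connect(gain1).connect(context.destination);
source2.connect(gain2).connect(context.destination);
// Each input is scaled by its gain and the results sum at the destination.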
            This specifies options to use in constructing a
            GainNode. All members are optional; if not
            specified, the normal defaults are used in constructing the node.
          
gain AudioParam.
            
          A delay-line is a fundamental building block in audio applications.
          This interface is an AudioNode with a single
          input and single output:
        
    numberOfInputs  : 1
    numberOfOutputs : 1
    channelCountMode = "max";
    channelInterpretation = "speakers";
        The number of channels of the output always equals the number of channels of the input.
          It delays the incoming audio signal by a certain amount.
          Specifically, at each time t, with input signal
          input(t), delay time delayTime(t), and output signal
          output(t), the output will be output(t) = input(t -
          delayTime(t)). The default delayTime is 0 seconds
          (no delay).
        
When the number of channels in a DelayNode's input changes (thus changing the output channel count also), there may be delayed audio samples which have not yet been output by the node and are part of its internal state. If these samples were received earlier with a different channel count, they must be upmixed or downmixed before being combined with newly received input so that all internal delay-line mixing takes place using the single prevailing channel layout.
          This node has a tail-time reference such that this node
          continues to output non-silent audio with zero input up to the
          maxDelayTime of
          the node.
        
By definition, a DelayNode introduces an audio processing latency equal to the amount of the delay.
Let node be a new DelayNode object. Initialize node, and return node.
              An AudioParam object representing the amount
              of delay (in seconds) to apply. Its default value is
              0 (no delay). The minimum value is 0 and the maximum value is
              determined by the maxDelayTime argument to the
              AudioContext method createDelay.
            
              If DelayNode is part of a cycle, then
              the value of the delayTime attribute is
              clamped to a minimum of one render
              quantum.
            
              Its nominal range is [0, maxDelayTime], where
              maxDelayTime is the value passed to the
              createDelay method on the
              AudioContext or the maxDelayTime
              member of the DelayOptions dictionary in the
              node constructor.
            
This parameter is a-rate.
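A non-normative sketch of a feedback echo built from a DelayNode and a GainNode (context and source are assumed to exist); the feedback cycle is permitted because it contains a DelayNode:

// Assumes context (an AudioContext) and source (a source AudioNode) exist.
const delay = new DelayNode(context, { delayTime: 0.3, maxDelayTime: 1 });
const feedback = new GainNode(context, { gain: 0.4 });
source.connect(context.destination);       // dry path
source.connect(delay);
delay.connect(feedback).connect(delay);     // feedback loop (a cycle containing a DelayNode)
delay.connect(context.destination);         // wet path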
            This specifies options for constructing a
            DelayNode. All members are optional; if not
            given, the node is constructed using the normal defaults.
          
          This interface represents a memory-resident audio asset (for one-shot
          sounds and other short audio clips). Its format is non-interleaved
          32-bit linear floating-point PCM values with a normal range of \([-1,
          1]\), but values are not limited to this range. It can contain one or
          more channels. Typically, it would be expected that the length of the
          PCM data would be fairly short (usually somewhat less than a minute).
          For longer sounds, such as music soundtracks, streaming should be
          used with the audio element and
          MediaElementAudioSourceNode.
        
          An AudioBuffer may be used by one or more
          AudioContexts, and can be shared between an
          OfflineAudioContext and an
          AudioContext.
        
AudioBuffer has four internal slots:
Let b be a new AudioBuffer object. Respectively assign the values of the attributes numberOfChannels, length, sampleRate of the AudioBufferOptions passed in the constructor to the internal slots [[number of channels]], [[length]], [[sample rate]].
              Set the internal slot [[internal data]] of this
              AudioBuffer to the result of calling CreateByteDataBlock([[length]]
              * [[number of channels]]).
            
This initializes the underlying storage to zero.
The sample-rate for the PCM audio data in samples per second. This MUST return the value of [[sample rate]].
Length of the PCM audio data in sample-frames. This MUST return the value of [[length]].
Duration of the PCM audio data in seconds.
This is computed from the [[sample rate]] and the [[length]] of the AudioBuffer by dividing the [[length]] by the [[sample rate]].
The number of discrete audio channels. This MUST return the value of [[number of channels]].
              According to the rules described in acquire the content either 
              get a reference to or get
              a copy of the bytes stored in [[internal data]] in a new
              Float32Array.
            
                This index value MUST be less than numberOfChannels or an
                IndexSizeError exception MUST be thrown.
              
              The copyFromChannel method copies the samples from
              the specified channel of the AudioBuffer to the
              destination array.
            
              Let buffer be the AudioBuffer buffer with
              \(N_b\) frames, let \(N_f\) be the number of elements in the
              destination array, and \(k\) be the value of
              startInChannel. Then the number of frames copied
              from buffer to destination is
              \(\min(N_b - k, N_f)\). If this is less than \(N_f\), then the
              remaining elements of destination are not modified.
            
                If channelNumber is greater than or equal to the number
                of channels of the AudioBuffer, an IndexSizeError exception MUST be
                thrown.
              If startInChannel is greater than the
                length of the AudioBuffer, an IndexSizeError exception MUST be
                thrown.
              
              The copyToChannel method copies the samples to the
              specified channel of the AudioBuffer, from the
              source array.
            
              Let buffer be the AudioBuffer with \(N_b\)
              frames, let \(N_f\) be the number of elements in the
              source array, and \(k\) be the value of
              startInChannel. Then the number of frames copied
              from source to the buffer is \(\min(N_b
              - k, N_f)\). If this is less than \(N_f\), then the remaining
              elements of buffer are not modified.
            
                If channelNumber is greater than or equal to the number
                of channels of the AudioBuffer, an IndexSizeError exception MUST be
                thrown.
              If startInChannel is greater than the
                length of the AudioBuffer, an IndexSizeError exception MUST be
                thrown.
              
          The methods copyToChannel and
          copyFromChannel can be used to fill part of an array by
          passing in a Float32Array that's a view onto the larger
          array. When reading data from an AudioBuffer's channels, and
          the data can be processed in chunks, copyFromChannel
          should be preferred to calling getChannelData and
          accessing the resulting array, because it may avoid unnecessary
          memory allocation and copying.
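A non-normative sketch processing channel 0 of an AudioBuffer in fixed-size chunks through a single reusable Float32Array (audioBuffer is assumed to exist):

// Assumes audioBuffer (an AudioBuffer) exists.
const chunk = new Float32Array(1024);
for (let offset = 0; offset < audioBuffer.length; offset += chunk.length) {
  // Copies min(length - offset, chunk.length) frames; any remaining
  // elements of |chunk| are left unmodified on the final iteration.
  audioBuffer.copyFromChannel(chunk, 0, offset);
  // ... process the samples in |chunk| ...
}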
        
          An internal operation acquire the
          contents of an AudioBuffer is invoked when the
          contents of an AudioBuffer are needed by some API
          implementation. This operation returns immutable channel data to the
          invoker.
        
When an acquire the content operation occurs on an AudioBuffer, run the following steps:
If IsDetachedBuffer
          on any of the AudioBuffer's ArrayBuffers returns
          true, abort these steps, and return a zero-length
          channel data buffer to the invoker.
          ArrayBuffers for arrays previously returned by
            getChannelData on this AudioBuffer.
          ArrayBuffers and return references to them to the
          invoker.
          ArrayBuffers containing copies of the data to
          the AudioBuffer, to be returned by the next call to
          getChannelData.
          When AudioBufferSourceNode.start is called, it
          acquires the contents of the
          node's buffer. If the operation fails, nothing is
          played.
          When a ConvolverNode's buffer is set to an
          AudioBuffer while the node is connected to an output node, or
          a ConvolverNode is connected to an output node while the
          ConvolverNode's buffer is set to an
          AudioBuffer, it acquires the
          content of the AudioBuffer.
          outputBuffer.
          
          This means that copyToChannel cannot be used to change
          the content of an AudioBuffer currently in use by an
          AudioNode that has acquired the content of an AudioBuffer,
          since the AudioNode will continue to use the data previously
          acquired.
        
            This specifies the options to use in constructing an
            AudioBuffer. The length and
            sampleRate
            members are required. A NotFoundError exception MUST
            be thrown if any of the required members are not specified.
          
          This interface represents an audio source from an in-memory audio
          asset in an AudioBuffer. It is useful for playing audio
          assets which require a high degree of scheduling flexibility and
          accuracy. If sample-accurate playback of network- or disk-backed
          assets is required, an implementer should use
          AudioWorkletNode to implement playback.
        
          The 
          start() method is used to schedule when sound
          playback will happen. The 
          start() method may not be issued multiple times. The
          playback will stop automatically when the buffer's audio data has
          been completely played (if the loop attribute is
          false), or when the stop()
          method has been called and the specified time has been reached.
          Please see more details in the 
          start() and stop()
          description.
        
      numberOfInputs  : 0
      numberOfOutputs : 1
          The number of channels of the output always equals the number of
          channels of the AudioBuffer assigned to the buffer
          attribute, or is one channel of silence if buffer is
          null.
        
This node has no tail-time reference.
          A playhead position for an AudioBufferSourceNode is
          defined as any quantity representing a time offset in seconds,
          relative to the time coordinate of the first sample frame in the
          buffer. Such values are to be considered independently from the
          node's playbackRate and detune parameters.
          In general, playhead positions may be subsample-accurate and need not
          refer to exact sample frame positions. They may assume valid values
          between 0 and the duration of the buffer.
        
          AudioBufferSourceNodes are created with an internal boolean
          slot [[buffer set]], initially set to false.
        
Let node be a new AudioBufferSourceNode object. Initialize node, and return node.
              Represents the audio asset to be played. To set the
              buffer attribute, execute these steps:
            
Let new buffer be the AudioBuffer to be
              assigned to buffer.
              If new buffer is not null and 
                [[buffer set]] is true, throw an
                InvalidStateError and abort these steps.
              If new buffer is not null, set
              [[buffer set]] to true.
              Assign new buffer to the buffer
              attribute.
              
              The speed at which to render the audio stream. Its default
              value is 1. This parameter is k-rate. This is
              a compound parameter with detune. Its
              nominal range is \((-\infty, \infty)\).
            
              An additional parameter, in cents, to modulate the speed at which
              the audio stream is rendered. Its default value is 0. This
              parameter is k-rate. This parameter is a compound
              parameter with playbackRate. Its nominal
              range is \((-\infty, \infty)\).
            
              Indicates if the region of audio data designated by
              loopStart and loopEnd should be played
              continuously in a loop. The default value is false.
            
              An optional playhead position where looping should begin
              if the loop attribute is true. Its default
              value is 0, and it may usefully be set to any value
              between 0 and the duration of the buffer. If
              loopStart is less than 0, looping will begin at 0.
              If loopStart is greater than the duration of the
              buffer, looping will begin at the end of the buffer.
            
              An optional playhead position where looping should end if
              the loop attribute is true. Its value is exclusive
              of the content of the loop. Its default value is 0,
              and it may usefully be set to any value between 0 and the
              duration of the buffer. If loopEnd is less than or
              equal to 0, or if loopEnd is greater than the
              duration of the buffer, looping will end at the end of the
              buffer.
            
Schedules a sound to playback at an exact time.
When this method is called, execute these steps:
If stop has been called on this node, or if an
              earlier call to start has already occurred, an
              InvalidStateError exception MUST be thrown.
              
              Running a control message to start the
              AudioBufferSourceNode means invoking the
              handleStart() function in the playback algorithm which
              follows.
            
when parameter describes at what time
                (in seconds) the sound should start playing. It is in the same
                time coordinate system as the
                AudioContext's currentTime attribute.
                If 0 is passed in for this value or if the value is less than
                currentTime, then the sound will start playing
                immediately. A RangeError exception
                MUST be thrown if when is negative.
                A RangeError exception MUST be thrown if offset is
                negative. If offset is greater than
                loopEnd, playback will begin at
                loopEnd (and immediately loop to
                loopStart). offset is silently
                clamped to [0, duration] when
                startTime is reached, where duration
                is the value of the duration attribute of the
                AudioBuffer set to the buffer
                attribute of this AudioBufferSourceNode.
              duration parameter describes the
                duration of the sound (in seconds) to be played. If this
                parameter is passed, this method has exactly the same effect as
                the invocation of start(when, offset) followed by
                stop(when + duration). A
                RangeError exception MUST be thrown if duration is
                negative.
              
              Schedules a sound to stop playback at an exact time. If
              stop is called again after already having been
              called, the last invocation will be the only one applied; stop
              times set by previous calls will not be applied, unless the
              buffer has already stopped prior to any subsequent calls. If the
              buffer has already stopped, further calls to stop
              will have no effect. If a stop time is reached prior to the
              scheduled start time, the sound will not play.
            
When this method is called, execute these steps:
              If start has not been called on this node, an
              InvalidStateError exception MUST be thrown.
              
              Running a control message to stop the
              AudioBufferSourceNode means invoking the
              handleStop() function in the playback algorithm which
              follows.
            
when parameter describes at what time
                (in seconds) the source should stop playing. It is in the same
                time coordinate system as the
                AudioContext's currentTime attribute.
                If 0 is passed in for this value or if the value is less than
                currentTime,
                then the sound will stop playing immediately. A RangeError exception MUST be thrown if
                when is negative.
              
            This specifies options for constructing an
            AudioBufferSourceNode. All members are
            optional; if not specified, the normal defaults are used in
            constructing the node.
          
buffer to the buffer
              attribute of the AudioBufferSourceNode.
            detune
              AudioParam.
            loop
              attribute.
            loopEnd
              attribute.
            loopStart
              attribute.
            playbackRate
              AudioParam.
            This section is non-normative. Please see the playback algorithm for normative requirements.
            Setting the loop attribute to true causes playback of
            the region of the buffer defined by the endpoints
            loopStart and loopEnd to continue
            indefinitely, once any part of the looped region has been played.
            While loop remains true, looped playback will continue
            until stop() is called, or the scheduled stop time has
            been reached.
          
            The body of the loop is considered to occupy a region from
            loopStart up to, but not including,
            loopEnd. The direction of playback of the looped
            region respects the sign of the node's playback rate. For positive
            playback rates, looping occurs from loopStart to
            loopEnd; for negative rates, looping occurs from
            loopEnd to loopStart.
          
            Looping does not affect the interpretation of the
            offset argument of 
            start(). Playback always starts at the requested
            offset, and looping only begins once the body of the loop is
            encountered during playback.
          
            The effective loop start and end points are required to lie within
            the range of zero and the buffer duration, as specified in the
            algorithm below. loopEnd is further constrained to be
            at or after loopStart. If any of these constraints are
            violated, the loop is considered to include the entire buffer
            contents.
          
Loop endpoints have subsample accuracy. When endpoints do not fall on exact sample frame offsets, or when the playback rate is not equal to 1, playback of the loop is interpolated to splice the beginning and end of the loop together just as if the looped audio occurred in sequential, non-looped regions of the buffer.
Loop-related properties may be varied during playback of the buffer, and in general take effect on the next rendering quantum. The exact results are defined by the normative playback algorithm which follows.
            The default values of the loopStart and
            loopEnd attributes are both 0. Since a
            loopEnd value of zero is equivalent to the length of
            the buffer, the default endpoints cause the entire buffer to be
            included in the loop.
          
            Note that the values of the loop endpoints are expressed as time
            offsets in terms of the sample rate of the buffer, meaning that
            these values are independent of the node's
            playbackRate parameter which can vary dynamically
            during the course of playback.
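A non-normative sketch that loops a two-second region of a buffer (context and audioBuffer are assumed to exist):

// Assumes context (an AudioContext) and audioBuffer (an AudioBuffer) exist.
const node = new AudioBufferSourceNode(context, {
  buffer: audioBuffer,
  loop: true,
  loopStart: 1.0,  // seconds, expressed in terms of the buffer's sample rate
  loopEnd: 3.0
});
node.connect(context.destination);
node.start(context.currentTime, 1.0);  // begin playback at the loop start
node.stop(context.currentTime + 10);   // let the region loop for ten seconds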
          
This normative section specifies the playback of the contents of the buffer, accounting for the fact that playback is influenced by the following factors working in combination, which can vary dynamically during playback:
The algorithm to be followed internally to generate output from an AudioBufferSourceNode conforms to the following principles:
The description of the algorithm is as follows:
let buffer;  // AudioBuffer employed by this node
let context; // AudioContext employed by this node
// The following variables capture attribute and AudioParam values for the node.
// They are updated on a k-rate basis, prior to each invocation of process().
let loop;
let detune;
let loopStart;
let loopEnd;
let playbackRate;
// Variables for the node's playback parameters
let start = 0, offset = 0; // Set by start()
let stop = Infinity;  // Set by stop(), or by start() with a supplied duration
// Variables for tracking node's playback state
let bufferTime = 0, started = false, enteredLoop = false;
let dt = 1 / context.sampleRate;
// Handle invocation of start method call
function handleStart(when, pos, duration) {
  if (arguments.length >= 1) {
    start = when;
  }
  offset = pos;
  if (arguments.length >= 3) {
    stop = when + duration;
  }
}
// Handle invocation of stop method call
function handleStop(when) {
  if (arguments.length >= 1) {
    stop = when;
  } else {
    stop = context.currentTime;
  }
}
// Interpolate a multi-channel signal value for some sample frame.
// Returns an array of signal values.
function playbackSignal(position) {
  /*
    This function provides the playback signal function for buffer, which is a
    function that maps from a playhead position to a set of output signal
    values, one for each output channel. If |position| corresponds to the
    location of an exact sample frame in the buffer, this function returns
    that frame. Otherwise, its return value is determined by a UA-supplied
    algorithm that interpolates between sample frames in the neighborhood of
    position.
    If position is greater than or equal to loopEnd and there is no subsequent
    sample frame in buffer, then interpolation should be based on the sequence
    of subsequent frames beginning at loopStart.
   */
   ...
}
// Generate a single render quantum of audio to be placed
// in the channel arrays defined by output. Returns an array
// of |numberOfFrames| sample frames to be output.
function process(numberOfFrames) {
  let currentTime = context.currentTime; // context time of next rendered frame
  let output = [];  // accumulates rendered sample frames
  // Combine the two k-rate parameters affecting playback rate
  let computedPlaybackRate = playbackRate * Math.pow(2, detune / 1200);
  // Determine loop endpoints as applicable
  let actualLoopStart, actualLoopEnd;
  if (loop) {
    if (loopStart >= 0 && loopEnd > 0 && loopStart < loopEnd) {
      actualLoopStart = loopStart;
      actualLoopEnd = Math.min(loopEnd, buffer.duration);
    } else {
      actualLoopStart = 0;
      actualLoopEnd = buffer.duration;
    }
  } else {
    // If the loop flag is false, remove any record of the loop having been entered
    enteredLoop = false;
  }
  // Render each sample frame in the quantum
  for (let index = 0; index < numberOfFrames; index++) {
    // Check that currentTime is within allowable range for playback
    if (currentTime < start || currentTime >= stop) {
      output.push(0); // this sample frame is silent
      currentTime += dt;
      continue;
    }
    if (!started) {
      // Take note that buffer has started playing and get initial playhead position.
      bufferTime = offset + ((currentTime - start) * computedPlaybackRate);
      started = true;
    }
    // Handle loop-related calculations
    if (loop) {
      // Determine if looped portion has been entered for the first time
      if (!enteredLoop) {
        if (offset < actualLoopEnd && bufferTime >= actualLoopStart) {
          // playback began before or within loop, and playhead is now past loop start
          enteredLoop = true;
        }
        if (offset >= actualLoopEnd && bufferTime < actualLoopEnd) {
          // playback began after loop, and playhead is now prior to the loop end
          enteredLoop = true;
        }
      }
      // Wrap loop iterations as needed. Note that enteredLoop
      // may become true inside the preceding conditional.
      if (enteredLoop) {
        while (bufferTime >= actualLoopEnd) {
          bufferTime -= actualLoopEnd - actualLoopStart;
        }
        while (bufferTime < actualLoopStart) {
          bufferTime += actualLoopEnd - actualLoopStart;
        }
      }
    }
    if (bufferTime >= 0 && bufferTime < buffer.duration) {
      output.push(playbackSignal(bufferTime));
    } else {
      output.push(0); // past end of buffer, so output silent frame
    }
    bufferTime += dt * computedPlaybackRate;
    currentTime += dt;
  } // End of render quantum loop
  if (currentTime >= stop) {
    // end playback state of this node.
    // no further invocations of process() will occur.
  }
  return output;
}
          The following non-normative figures illustrate the behavior of the algorithm in assorted key scenarios. Dynamic resampling of the buffer is not considered, but as long as the times of loop positions are not changed this does not materially affect the resulting playback. In all figures, the following conventions apply:
start at the x origin.
            This figure illustrates basic playback of a buffer, with a simple loop that ends after the last sample frame in the buffer:
            
            This figure illustrates playbackRate interpolation,
            showing half-speed playback of buffer contents in which every other
            output sample frame is interpolated. Of particular note is the last
            sample frame in the looped output, which is interpolated using the
            loop start point:
          
            This figure illustrates sample rate interpolation, showing playback of a buffer whose sample rate is 50% of the context sample rate, resulting in a computed playback rate of 0.5 that corrects for the difference in sample rate between the buffer and the context. The resulting output is the same as the preceding example, but for different reasons.
            This figure illustrates subsample offset playback, in which the offset within the buffer begins at exactly half a sample frame. Consequently, every output frame is interpolated:
            This figure illustrates subsample loop playback, showing how fractional frame offsets in the loop endpoints map to interpolated data points in the buffer that respect these offsets as if they were references to exact sample frames:
            
          This interface represents a constant audio source whose output is
          nominally a constant value. It is useful as a constant source node in
          general and can be used as if it were a constructible
          AudioParam by automating its
          offset or connecting another node to it.
        
The single output of this node consists of one channel (mono).
    numberOfInputs  : 0
    numberOfOutputs : 1
Let node be a new ConstantSourceNode object. Initialize node, and return node.
The constant value of the source. Its default value is 1. This parameter is a-rate. Its nominal range is \((-\infty, \infty)\).
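As a non-normative sketch of the "constructible AudioParam" usage mentioned above, the following example (assuming an existing AudioContext named context) drives the gain AudioParams of two GainNodes from a single automated ConstantSourceNode:
  let constant = new ConstantSourceNode(context, { offset: 0 });
  let gainA = new GainNode(context, { gain: 0 });
  let gainB = new GainNode(context, { gain: 0 });
  // The constant source output is summed into each gain AudioParam,
  // so automating offset controls both gains at once.
  constant.connect(gainA.gain);
  constant.connect(gainB.gain);
  constant.offset.setValueAtTime(0, context.currentTime);
  constant.offset.linearRampToValueAtTime(1, context.currentTime + 2);
  constant.start();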
            This specifies options for constructing a
            ConstantSourceNode. All members are optional;
            if not specified, the normal defaults are used for constructing the
            node.
          
offset
  The initial value for the offset AudioParam of this node.
          This interface represents an audio source from an audio
          or video element.
        
    numberOfInputs  : 0
    numberOfOutputs : 1
          The number of channels of the output corresponds to the number of
          channels of the media referenced by the
          HTMLMediaElement. Thus, changes to the media element's
          src attribute can change the number of channels output
          by this node.
        
This node has no tail-time reference.
Let node be a new MediaElementAudioSourceNode object. Initialize node, and return node.
mediaElement
  The HTMLMediaElement used when constructing this MediaElementAudioSourceNode.
          A MediaElementAudioSourceNode is created given an
          HTMLMediaElement using the AudioContext
          createMediaElementSource() method.
        
          The number of channels of the single output equals the number of
          channels of the audio referenced by the HTMLMediaElement
          passed in as the argument to createMediaElementSource(),
          or is 1 if the HTMLMediaElement has no audio.
        
          The HTMLMediaElement must behave in an identical fashion
          after the MediaElementAudioSourceNode has been created,
          except that the rendered audio will no longer be heard
          directly, but instead will be heard as a consequence of the
          MediaElementAudioSourceNode being connected through the
          routing graph. Thus pausing, seeking, volume, src
          attribute changes, and other aspects of the
          HTMLMediaElement must behave as they normally would if
          not used with a MediaElementAudioSourceNode.
        
  var mediaElement = document.getElementById('mediaElementID');
  var sourceNode = context.createMediaElementSource(mediaElement);
  sourceNode.connect(filterNode);
        
            This specifies the options to use in constructing a
            MediaElementAudioSourceNode.
          
            HTMLMediaElement allows the playback of cross-origin
            resources. Because Web Audio allows inspection of the content of
            the resource (e.g. using a MediaElementAudioSourceNode, and
            a ScriptProcessorNode to read the samples), information
            leakage can occur if scripts from one 
            origin inspect the content of a resource from another 
            origin.
          
            To prevent this, a MediaElementAudioSourceNode MUST output
            silence instead of the normal output of the
            HTMLMediaElement if it has been created using an
            HTMLMediaElement for which the execution of the
            fetch
            algorithm labeled the resource as 
            CORS-cross-origin.
          
The AudioWorklet object allows developers to supply scripts (such as JavaScript or WebAssembly code) to process audio on the rendering thread, supporting custom AudioNodes. This processing mechanism ensures the synchronous execution of the script code with other built-in AudioNodes in the audio graph.
An associated pair of objects must be defined in order to realize this mechanism: AudioWorkletNode and AudioWorkletProcessor. The former represents the interface for the main global scope similar to other AudioNode objects, and the latter implements the internal audio processing within a special scope named AudioWorkletGlobalScope.
            [Figure: the association between AudioWorkletNode and AudioWorkletProcessor]
            Importing a script via the addModule(moduleUrl) method registers class definitions of AudioWorkletProcessor under the AudioWorkletGlobalScope. There are two internal storage areas for the imported class definitions and the active instances created from the definition.
// bypass.js script file, AudioWorkletGlobalScope
registerProcessor("Bypass", class extends AudioWorkletProcessor {
  process (inputs, outputs) {
    // Single input, single channel.
    var input = inputs[0], output = outputs[0];
    output[0].set(input[0]);
  }
});
          
// The main global scope
window.audioWorklet.addModule("bypass.js").then(function () {
  var context = new AudioContext();
  var bypass = new AudioWorkletNode(context, "Bypass");
});
          At the instantiation of AudioWorkletNode in the main global scope, the counterpart AudioWorkletProcessor will also be created in AudioWorkletGlobalScope. These two objects communicate via the asynchronous message passing described in the processing model section.
              The audioWorklet attribute allows access to the
              Worklet object that can import a script containing
              AudioWorkletProcessor class definitions via
              the algorithm defined by [[!worklets-1]].
This special execution context is designed to enable the generation, processing, and analysis of audio data directly using a script in the audio rendering thread. The user-supplied script code is evaluated in this scope to define one or more AudioWorkletProcessor subclasses, which in turn are used to instantiate AudioWorkletProcessors, in a 1:1 association with AudioWorkletNodes in the main scope.
At least one AudioWorkletGlobalScope exists for each AudioContext that contains one or more AudioWorkletNodes. The running of imported scripts is performed by the UA as defined in [[!worklets-1]], in such a way that all scripts are applied consistently to every global scope, and all scopes thus exhibit identical behavior. Beyond these guarantees, the creation of global scopes is transparent to the author and cannot be observed from the main window scope.
            AudioWorkletGlobalScope has a node name to processor
            definition map. This map stores definitions of
            AudioWorkletProcessor with the associated string key.
            Initially it is empty and populated when
            registerProcessor method is called, but this storage
            is internal and is not directly exposed to the user.
          
Registers a class definition derived from AudioWorkletProcessor.
                When the registerProcessor(name,
                processorConstructor) method is called, the
                user agent must run the following steps:
              
                1. If name is the empty string, throw a NotSupportedError exception and abort these steps, because the empty string is not a valid key.
                2. If name already exists as a key in the node name to processor definition map, throw a NotSupportedError exception and abort these steps, because registering a definition with a duplicated key is not allowed.
                3. If the result of IsConstructor(argument=processorConstructor) is false, throw a TypeError and abort these steps.
                4. Let prototype be the result of Get(O=processorConstructor, P="prototype").
                5. If the result of Type(argument=prototype) is not Object, throw a TypeError and abort these steps.
                6. If the result of IsCallable(argument=Get(O=prototype, P="process")) is false, throw a TypeError and abort these steps.
                7. If the result of Get(O=processorConstructor, P="parameterDescriptors") is neither an array nor undefined, throw a TypeError and abort these steps.
                8. Append the key-value pair name → processorConstructor to the node name to processor definition map.
                The class constructor should only be looked up once, thus it does not have the opportunity to dynamically change its definition.
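The following non-normative sketch, assumed to run inside an AudioWorkletGlobalScope, shows registrations that the steps above reject (the name strings are placeholders):
// A subclass with no process() method on its prototype.
class NoProcess extends AudioWorkletProcessor {}

try {
  registerProcessor('', NoProcess);           // step 1: empty key
} catch (e) { /* NotSupportedError */ }

try {
  registerProcessor('not-a-constructor', {}); // step 3: IsConstructor is false
} catch (e) { /* TypeError */ }

try {
  registerProcessor('no-process', NoProcess); // step 6: process is not callable
} catch (e) { /* TypeError */ }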
This interface represents a user-defined AudioNode which lives on the control thread. The user can create an AudioWorkletNode from a BaseAudioContext, and such a node can be connected with other built-in AudioNodes to form an audio graph.
Every AudioWorkletNode has an associated processor reference, initially null, which refers to the AudioWorkletProcessor handling the processing for this node.
            Every AudioWorkletProcessor has an associated active
            source flag, initially true. This flag causes
            the node to be retained in memory and perform audio processing in
            the absence of any connected inputs.
          
            The flag becomes true while the processor is actively producing output and false when it is not. In addition, if the
            constructor, process method, or any
              user-defined class method throws an exception, the processor enters an error state. Note that once an
              AudioWorkletNode reaches this state, the processor will
              output silence throughout its lifetime.
                Let node be a new AudioWorkletNode object.
                Initialize
                node. Perform the construction
                procedure of an AudioWorkletNode and
                the corresponding AudioWorkletProcessor
                object. Return node.
              
                The parameters attribute is a collection of
                AudioParam objects with associated names. This maplike
                object is populated from the list of AudioParamDescriptors
                in the AudioWorkletProcessor class definition at
                instantiation.
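                For example, a named AudioParam can be retrieved from this maplike and automated, as in the following non-normative sketch, which assumes the BitCrusher processor shown later in this section has already been registered and that context is an AudioContext:
  let node = new AudioWorkletNode(context, 'BitCrusher');
  let bitDepth = node.parameters.get('bitDepth');
  bitDepth.setValueAtTime(16, context.currentTime);
  bitDepth.linearRampToValueAtTime(4, context.currentTime + 5);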
              
                Every AudioWorkletNode has an associated
                port which is a 
                MessagePort. It is connected to the port on the
                corresponding AudioWorkletProcessor object allowing
                bidirectional communication between a pair of
                AudioWorkletNode and AudioWorkletProcessor.
              
Indicates the state of the associated processor. The propagation from the actual processor's active source flag to this property is done by queueing a task.
Any state change on the processor will queue a task on the control thread to fire a processorstatechange event at the node.
              The AudioWorkletNodeOptions dictionary can be used
              for the custom initialization of AudioNode
              attributes in the AudioWorkletNode
              constructor. Entries in this dictionary whose names correspond to
              AudioParams in the class definition of an
              AudioWorkletProcessor are used to initialize
              the parameter values upon the creation of a node.
            
            This interface represents audio processing code that runs on the
            audio rendering thread. It lives in an
            AudioWorkletGlobalScope, and the definition of
            the class manifests the actual audio processing mechanism of a
            custom audio node. An AudioWorkletProcessor can
            only be instantiated by the construction of an
            AudioWorkletNode instance. Every
            AudioWorkletProcessor has an associated node
            reference, initially null.
                Returns an AudioContextInfo object that contains
                various states of the associated BaseAudioContext.
              
                Every AudioWorkletProcessor has an associated
                port which is a 
                MessagePort. It is connected to the port on the
                corresponding AudioWorkletNode object, allowing
                bidirectional communication between a pair of
                AudioWorkletNode and AudioWorkletProcessor.
              A user can define a custom audio processor by extending
              AudioWorkletProcessor. The subclass must define a method
              named process() that implements the audio processing
              algorithm, and have a valid static property named
              parameterDescriptors, an iterable
              of AudioParamDescriptors that is looked up by the
              AudioWorkletProcessor constructor to create instances of
              AudioParam in the parameters maplike storage
              of the node. Steps 6 and 7 of 
              registerProcessor() ensure the validity of a given
              AudioWorkletProcessor subclass.
            
An example of a valid subclass is as follows:
class MyProcessor extends AudioWorkletProcessor {
  static get parameterDescriptors() { 
    return [{
      name: 'myParam',
      defaultValue: 0.5,
      minValue: 0,
      maxValue: 1 
    }];
  }
  process(inputs, outputs, parameters) {
    // Get the first input and output.
    var input = inputs[0];
    var output = outputs[0];
    var myParam = parameters.myParam;
    // A simple amplifier for single input and output.
    for (var channel = 0; channel < output.length; ++channel) {
      for (var i = 0; i < output[channel].length; ++i) {
        output[channel][i] = input[channel][i] * myParam[i];
      }
    }
  }
}
            
              The process() method is called synchronously by the
              audio rendering thread at every render quantum, if
              ANY of the following active processing conditions are
              true:
            
              The processor's active source flag is true.
              The method is invoked with the following arguments:
                  inputs of type
                  sequence<sequence<Float32Array>>
                  The input audio buffers from the incoming connections provided
                  by the user agent. inputs[n][m] is a
                  Float32Array of audio samples for the
                  mth channel of the nth input. While the
                  number of inputs is fixed at construction, the number of
                  channels can be changed dynamically.
                
                  If no connections exist to the nth input of the
                  node during the current render quantum, then the content of
                  inputs[n] is an empty array, indicating that
                  zero channels of input are available. This is the only
                  circumstance under which the number of elements of
                  inputs[n] can be zero.
                
                outputs of type
                sequence<sequence<Float32Array>>
                outputs[n][m] is a
                Float32Array object containing the audio samples
                for the mth channel of the nth output. While
                the number of outputs is fixed at construction, the number
                of channels can be changed dynamically.
              parameters of type Object
                An object whose members are Float32Arrays.
                parameters["name"] corresponds to the automation
                values of the AudioParam named
                "name".
              
              The return value of this method controls the lifetime of the
              AudioWorkletProcessor's associated
              AudioWorkletNode. At the conclusion of each call to the
              process() method, the result of applying ToBoolean
              (described in [[!ECMASCRIPT]]) to the return value is assigned to
              the associated AudioWorkletProcessor's active
              source flag. This in turn can affect whether subsequent
              invocations of process() occur; the flag
              change is also propagated by queueing a task on
              the control thread to update the corresponding
              AudioWorkletNode's state property
              accordingly.
              Nodes that actively process only while they have connected inputs should return false from
                process(), which allows the presence or absence of
                connected inputs to determine whether active processing occurs.
                Nodes with a tail-time should continue to return
                true from process() for some period of time after
                inputs is found to contain zero channels. The
                current time may be obtained from the processor's
                contextInfo to measure the start and end of this
                tail-time interval, or the interval could be calculated
                dynamically depending on the processor's internal state.
                Source nodes should return true from
                process() until the point at which they are no
                longer producing an output.
                If no value is returned from process(), the effect is identical to returning
              false (since the effective return value is the falsy
              value undefined). This is a reasonable behavior for
              any AudioWorkletProcessor that is active only when it has
              active inputs.
            
              If process() is not called during some rendering
              quantum due to the lack of any applicable active processing
              conditions, the result is as if the processor emitted
              silence for this period.
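              The following non-normative sketch shows a processor that follows the tail-time guidance above: it keeps returning true for an assumed fixed tail length after its inputs drop to zero channels. It assumes that the processor's contextInfo exposes the current time as a currentTime member.
registerProcessor('TailPassthrough', class extends AudioWorkletProcessor {
  constructor (options) {
    super(options);
    this._tailSeconds = 2;   // assumed tail length for this sketch
    this._lastInputTime = 0;
  }
  process (inputs, outputs) {
    let input = inputs[0];
    let output = outputs[0];
    let now = this.contextInfo.currentTime; // assumed member, see above
    if (input.length > 0) {
      this._lastInputTime = now;
      // Copy as many channels as both input and output provide.
      let channelCount = Math.min(input.length, output.length);
      for (let channel = 0; channel < channelCount; ++channel)
        output[channel].set(input[channel]);
    }
    // Remain active while input is connected or the tail is still sounding.
    return (now - this._lastInputTime) < this._tailSeconds;
  }
});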
            
              The AudioParamDescriptor dictionary is used to
              specify properties for an AudioParam object
              that is used in an AudioWorkletNode.
            
              name
                The name of the AudioParam. A NotSupportedError exception MUST be thrown when a
                duplicated name is found when registering the class definition.
              defaultValue
                The default value of the parameter. If this value is not within the range defined by
                minValue and maxValue, a
                NotSupportedError exception MUST be thrown.
              minValue
                The minimum value of the parameter. A NotSupportedError
                exception MUST be thrown if this value is out of the range of the float
                data type or if it is greater than maxValue. The default for this
                value is the most negative finite single-precision
                floating-point number.
              maxValue
                The maximum value of the parameter. A NotSupportedError
                exception MUST be thrown if this value is out of the range of the float
                data type or if it is smaller than minValue. The default for this
                value is the most positive finite single-precision
                floating-point number.
              The AudioContextInfo dictionary provides an
              AudioWorkletGlobalScope with a view of a
              BaseAudioContext.
            
When the constructor of AudioWorkletNode is invoked in the main global scope, the corresponding AudioWorkletProcessor instance is automatically created in AudioWorkletGlobalScope. After the construction, they maintain the internal reference to each other until the AudioWorkletNode instance is destroyed.
Note that the instantiation of these two objects spans the control thread and the rendering thread.
When AudioWorkletNode(context, nodeName, options) constructor is invoked, the user agent must perform the following steps on the control thread, where the constructor was called.
            1. If nodeName does not exist as a key in the node name to processor definition map, throw a NotSupportedError exception and abort these steps.
            2. Let messageChannel be a new MessageChannel; set the node's port to the value of its port1 attribute.
            3. Let processorPortSerialization be the serialization of messageChannel's port2 attribute.
            4. Queue a control message to construct the corresponding AudioWorkletProcessor on the rendering thread, passing nodeName, processorPortSerialization, and node.
            In order to process a control message for the construction of an
            AudioWorkletProcessor, given a string nodeName, a
            serialization record processorPortSerialization, and an
            AudioWorkletNode node, perform the following
            steps on the rendering thread. If any of these steps throws
            an exception (either explicitly or implicitly), abort the rest of
            the steps and queue a task on the control thread to fire a processorstatechange
            event at node.
          
            1. If the processor definition associated with nodeName in the node name to processor definition map is undefined,
            throw a NotSupportedError DOMException.
            2. Construct an AudioWorkletProcessor from that definition. If the result is not a proper instance of AudioWorkletProcessor, throw an
            "InvalidStateError" DOMException. NOTE: this can occur
            if the author-supplied constructor uses JavaScript's
            return-override feature, or does not properly call
            super().
            3. Set node's processor reference to the newly constructed processor, and queue a task on the control thread to set node's state to
            running and fire a processorstatechange event at the associated
            AudioWorkletNode.
            The following figure illustrates an idealized sequence of events occurring relative to an AudioWorklet:
            The steps depicted in the diagram are one possible sequence of events involving the creation of an AudioContext and an associated AudioWorkletGlobalScope, followed by the creation of an AudioWorkletNode and its associated AudioWorkletProcessor.
            1. window.audioWorklet is
            requested to import a script. No AudioWorkletGlobalScopes
            exist yet, so the script is fetched and added to the Worklet module
            responses map.
            2. An AudioContext is created, and with it an associated AudioWorkletGlobalScope; the imported script is evaluated there, registering a processor definition named
            "Custom1" within the AudioWorkletGlobalScope.
            3. The main scope creates an AudioWorkletNode using the name
            "Custom1" along with an opts
            dictionary of options.
            4. In the AudioWorkletGlobalScope, an AudioWorkletProcessor is instantiated from the registered definition and the opts
            dictionary. This instance is paired with the previously created 
              AudioWorkletNode.
            
              Bitcrushing is a mechanism by which the quality of an audio
              stream is reduced both by quantizing the sample value (simulating
              a lower bit-depth), and by quantizing in time resolution
              (simulating a lower sample rate). This example shows how to use
              AudioParams (in this case, treated as
              a-rate) inside an
              AudioWorkletProcessor.
            
window.audioWorklet.addModule('bitcrusher.js').then(function () {
  let context = new AudioContext();
  let osc = new OscillatorNode(context);
  let amp = new GainNode(context);
  // Create a worklet node. 'BitCrusher' identifies the 
  // AudioWorkletProcessor previously registered when
  // bitcrusher.js was imported. The options automatically
  // initialize the correspondingly named AudioParams.
  let bitcrusher = new AudioWorkletNode(context, 'BitCrusher', { 
    bitDepth: 8, 
    frequencyReduction: 0.5
  });
  osc.connect(bitcrusher).connect(amp).connect(context.destination);
  osc.start();
});
            
registerProcessor('BitCrusher', class extends AudioWorkletProcessor {
  static get parameterDescriptors () {
    return [{
      name: 'bitDepth',
      defaultValue: 12,
      minValue: 1,
      maxValue: 16 
    }, {
      name: 'frequencyReduction',
      defaultValue: 0.5,
      minValue: 0,
      maxValue: 1
    }];
  }
  constructor (options) {
    // We don't need to look at options: only AudioParams are initialized,
    // which were taken care of by the node.
    super(options);
    this._phase = 0;
    this._lastSampleValue = 0;
  }
  process (inputs, outputs, parameters) {
    let input = inputs[0];
    let output = outputs[0];
    let bitDepth = parameters.bitDepth;
    let frequencyReduction = parameters.frequencyReduction;
    for (let channel = 0; channel < output.length; ++channel) { 
      for (let i = 0; i < output[channel].length; ++i) {
        let step = Math.pow(0.5, bitDepth[i]);
        this._phase += frequencyReduction[i];
        if (this._phase >= 1.0) {
          this._phase -= 1.0;
          this._lastSampleValue = 
            step * Math.floor(input[channel][i] / step + 0.5);
        }
        output[channel][i] = this._lastSampleValue;
      }
    }
    // No need to return a value; this node's lifetime is dependent only on its
    // input connections.
  }
});
          
              This example of a simple sound level meter further illustrates
              how to create an AudioWorkletNode subclass
              that acts like a native AudioNode, accepting
              constructor options and encapsulating the inter-thread
              communication (asynchronous) between
              AudioWorkletNode and
              AudioWorkletProcessor in clean method calls
              and attribute accesses. This node does not use any output.
            
class VUMeterNode extends AudioWorkletNode {
  constructor (context, options) {
    // Setting default values for the input, the output and the channel count.
    options.numberOfInputs = 1;
    options.numberOfOutputs = 0;
    options.channelCount = 1;
    options.updatingInterval = options.hasOwnProperty('updatingInterval') 
      ? options.updatingInterval 
      : 100;
    super(context, 'VUMeter', options);
    // States in AudioWorkletNode
    this._updatingInterval = options.updatingInterval;
    this._volume = 0;
    // Handles updated values from AudioWorkletProcessor
    this.port.onmessage = event => {
      if (event.data.volume)
        this._volume = event.data.volume;
    }
    this.port.start();
  }
  get updatingInterval() {
    return this._updatingInterval;
  }
  set updatingInterval (intervalValue) {
    this._updatingInterval = intervalValue;
    this.port.postMessage({ updatingInterval: intervalValue });
  }
  draw () {
    /* Draw the meter based on the volume value. */
  }
}
// The application can use the node when this promise resolves.
let importAudioWorkletNode = window.audioWorklet.addModule('vumeterprocessor.js');
            
// Constants controlling the meter's smoothing and minimum level.
const meterSmoothingFactor = 0.9;
const meterMinimum = 0.00001;
registerProcessor('VUMeter', class extends AudioWorkletProcessor {
  constructor (options) {
    super(options);
    this._volume = 0;
    this._updatingInterval = options.updatingInterval;
    this._nextUpdateFrame = this.interval;
    this.port.onmessage = event => {
      if (event.data.updatingInterval)
        this._updatingInterval = event.data.updatingInterval;
    }
    this.port.start();
  }
  get interval () {
    return this._updatingInterval / 1000 * this.contextInfo.sampleRate;
  }
  process (inputs, outputs, parameters) {
    // Note that the input will be down-mixed to mono; however, if no inputs are
    // connected then zero channels will be passed in.
    if (inputs[0].length > 0) {
      let buffer = inputs[0][0];
      let bufferLength = buffer.length;
      let sum = 0, x = 0, rms = 0;
      // Calculate the squared sum.
      for (let i = 0; i < bufferLength; ++i) {
        x = buffer[i];
        sum += x * x;
      }
      // Calculate the RMS level and update the volume.
      rms = Math.sqrt(sum / bufferLength);
      this._volume = Math.max(rms, this._volume * meterSmoothingFactor);
      // Update and sync the volume property with the main thread.
      this._nextUpdateFrame -= bufferLength;
      if (this._nextUpdateFrame < 0) {
        this._nextUpdateFrame += this.interval;
        this.port.postMessage({ volume: this._volume });
      }
    }
    // Keep on processing if the volume is above a threshold, so that
    // disconnecting inputs does not immediately cause the meter to stop 
    // computing its smoothed value.
    return this._volume >= meterMinimum;
  }
});
            
<script src="vumeternode.js"></script>
<script>
  importAudioWorkletNode.then(function () {
    let context = new AudioContext();
    let oscillator = new OscillatorNode(context);
    let vuMeterNode = new VUMeterNode(context, { updatingInterval: 50 });
    oscillator.connect(vuMeterNode);
    oscillator.start();
    function drawMeter () {
      vuMeterNode.draw();
      requestAnimationFrame(drawMeter);
    }
    drawMeter();
  });
</script>
          
          This interface is an AudioNode which can
          generate, process, or analyse audio directly using a script. This
          node type is deprecated, to be replaced by the
          AudioWorkletNode; this text is only here for informative
          purposes until implementations remove this node type.
        
    numberOfInputs  : 1
    numberOfOutputs : 1
    channelCount = numberOfInputChannels;
    channelCountMode = "explicit";
    channelInterpretation = "speakers";
        There are channelCount constraints and channelCountMode constraints for this node.
          The ScriptProcessorNode is constructed with a
          bufferSize which must be one of the following values: 256,
          512, 1024, 2048, 4096, 8192, 16384. This value controls how
          frequently the audioprocess event is
          dispatched and how many sample-frames need to be processed each call.
          audioprocess
          events are only dispatched if the
          ScriptProcessorNode has at least one input or one
          output connected. Lower numbers for bufferSize will result in
          a lower (better) latency. Higher numbers will
          be necessary to avoid audio breakup and glitches. This value will be picked by the
          implementation if the bufferSize argument to
          createScriptProcessor is not passed in, or is set to 0.
        
          numberOfInputChannels and
          numberOfOutputChannels determine the number of input and
          output channels. It is invalid for both
          numberOfInputChannels and
          numberOfOutputChannels to be zero.
        
var node = context.createScriptProcessor(bufferSize, numberOfInputChannels, numberOfOutputChannels);
              A property used to set the EventHandler (described
              in 
              HTML[[!HTML]]) for the audioprocess
              event that is dispatched to
              ScriptProcessorNode node types. An event of
              type AudioProcessingEvent will be dispatched
              to the event handler.
            
              The size of the buffer (in sample-frames) which needs to be
              processed each time onaudioprocess
              is called. Legal values are (256, 512, 1024, 2048, 4096, 8192,
              16384).
            
            This is an Event object which is dispatched to
            ScriptProcessorNode nodes. It will be removed
            when the ScriptProcessorNode is removed, as the replacement
            AudioWorkletNode uses a different approach.
          
            The event handler processes audio from the input (if any) by
            accessing the audio data from the inputBuffer
            attribute. The audio data which is the result of the processing (or
            the synthesized data if there are no inputs) is then placed into
            the outputBuffer.
          
                The time when the audio will be played in the same time
                coordinate system as the AudioContext's
                currentTime.
              
                An AudioBuffer containing the input audio data. It will have a
                number of channels equal to the
                numberOfInputChannels parameter of the
                createScriptProcessor() method. This AudioBuffer is only valid
                while in the scope of the onaudioprocess
                function. Its values will be meaningless outside of this scope.
              
                An AudioBuffer where the output audio data should be written.
                It will have a number of channels equal to the
                numberOfOutputChannels parameter of the
                createScriptProcessor() method. Script code within the scope of
                the onaudioprocess
                function is expected to modify the Float32Array
                arrays representing channel data in this AudioBuffer. Any
                script modifications to this AudioBuffer outside of this scope
                will not produce any audible effects.
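                A non-normative sketch of such a handler, assuming node was created with createScriptProcessor() as shown above with equal input and output channel counts, copies each input channel to the corresponding output channel with a fixed gain:
  node.onaudioprocess = function (event) {
    let inputBuffer = event.inputBuffer;
    let outputBuffer = event.outputBuffer;
    for (let channel = 0; channel < outputBuffer.numberOfChannels; ++channel) {
      let inputData = inputBuffer.getChannelData(channel);
      let outputData = outputBuffer.getChannelData(channel);
      // Apply a fixed gain of 0.5 while copying input to output.
      for (let i = 0; i < outputData.length; ++i) {
        outputData[i] = 0.5 * inputData[i];
      }
    }
  };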
              
              playbackTime
                The value to be assigned to the playbackTime
                attribute of the event.
              inputBuffer
                The value to be assigned to the inputBuffer
                attribute of the event.
              outputBuffer
                The value to be assigned to the outputBuffer
                attribute of the event.
              
          This interface represents a processing node which positions / spatializes an incoming audio
          stream in three-dimensional space. The spatialization is in relation
          to the AudioContext's AudioListener
          (listener attribute).
        
    numberOfInputs  : 1
    numberOfOutputs : 1
    channelCount = 2;
    channelCountMode = "clamped-max";
    channelInterpretation = "speakers";
        The input of this node is either mono (1 channel) or stereo (2 channels) and cannot be increased. Connections from nodes with fewer or more channels will be up-mixed or down-mixed appropriately.
There are channelCount constraints and channelCountMode constraints for this node.
The output of this node is hard-coded to stereo (2 channels) and cannot be configured.
          The PanningModelType enum determines which
          spatialization algorithm will be used to position the audio in 3D
          space. The default is "equalpower".
        
This node may have a tail-time reference. If the panningModel is set to "HRTF", the node will produce non-silent output for silent input due to the inherent processing for the head responses.
          The DistanceModelType enum determines which
          algorithm will be used to reduce the volume of an audio source as it
          moves away from the listener. The default is "inverse".
        
          In the description of each distance model below, let \(d\) be the
          distance between the listener and the panner; \(d_{ref}\) be the
          value of the refDistance attribute; \(d_{max}\) be the
          value of the maxDistance attribute; and \(f\) be the
          value of the rolloffFactor attribute.
        
A linear distance model which calculates distanceGain according to:
            $$
              1 - f\frac{\max(\min(d, d'_{max}), d'_{ref}) - d'_{ref}}{d'_{max} - d'_{ref}}
            $$
            
            where \(d'_{ref} = \min(d_{ref}, d_{max})\) and \(d'_{max} = \max(d_{ref}, d_{max})\). In the case where \(d'_{ref} = d'_{max}\), the value of the linear model is taken to be \(1-f\).
Note that \(d\) is clamped to the interval \([d'_{ref},\, d'_{max}]\).
An inverse distance model which calculates distanceGain according to:
              $$
                \frac{d_{ref}}{d_{ref} + f (\max(d, d_{ref}) - d_{ref})}
              $$
            
            That is, \(d\) is clamped to the interval \([d_{ref},\, \infty)\). If \(d_{ref} = 0\), the value of the inverse model is taken to be 0, independent of the value of \(d\) and \(f\).
An exponential distance model which calculates distanceGain according to:
              $$
                \left(\frac{\max(d, d_{ref})}{d_{ref}}\right)^{-f}
              $$
            
            That is, \(d\) is clamped to the interval \([d_{ref},\, \infty)\). If \(d_{ref} = 0\), the value of the exponential model is taken to be 0, independent of \(d\) and \(f\).
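            The three models above can be summarized by the following non-normative sketch, where d, refDistance, maxDistance, and rolloffFactor correspond to \(d\), \(d_{ref}\), \(d_{max}\), and \(f\) in the formulas:
function distanceGain(model, d, refDistance, maxDistance, rolloffFactor) {
  switch (model) {
    case 'linear': {
      let dref = Math.min(refDistance, maxDistance);
      let dmax = Math.max(refDistance, maxDistance);
      if (dref === dmax)
        return 1 - rolloffFactor;
      let dClamped = Math.max(Math.min(d, dmax), dref); // clamp d to [d'ref, d'max]
      return 1 - rolloffFactor * (dClamped - dref) / (dmax - dref);
    }
    case 'inverse':
      if (refDistance === 0) return 0;
      return refDistance /
        (refDistance + rolloffFactor * (Math.max(d, refDistance) - refDistance));
    case 'exponential':
      if (refDistance === 0) return 0;
      return Math.pow(Math.max(d, refDistance) / refDistance, -rolloffFactor);
  }
}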
Let node be a new PannerNode object. Initialize node, and return node.
              Specifies the panning model used by this
              PannerNode. Defaults to
              "equalpower".
            
              Sets the x coordinate position of the audio source in a 3D
              Cartesian system. The default value is 0. This parameter is
              a-rate when panningModel is
              "equalpower", k-rate otherwise. Its nominal
              range is \((-\infty, \infty)\).
            
              Sets the y coordinate position of the audio source in a 3D
              Cartesian system. The default value is 0. This parameter is
              a-rate when panningModel is
              "equalpower", k-rate otherwise. Its nominal
              range is \((-\infty, \infty)\).
            
              Sets the z coordinate position of the audio source in a 3D
              Cartesian system. The default value is 0.
              This parameter is a-rate when panningModel is
              "equalpower", k-rate otherwise. Its nominal
              range is \((-\infty, \infty)\).
              Describes the x component of the vector of the direction the
              audio source is pointing in 3D Cartesian coordinate space.
              Depending on how directional the sound is (controlled by the
              cone attributes), a sound pointing away from the listener
              can be very quiet or completely silent. The default value is 1.
              This parameter is a-rate when panningModel is
              "equalpower", k-rate otherwise. Its nominal
              range is \((-\infty, \infty)\).
            
              Describes the y component of the vector of the direction the
              audio source is pointing in 3D cartesian coordinate space. The
              default value is 0. This parameter is a-rate when panningModel is
              "equalpower", k-rate otherwise. Its nominal
              range is \((-\infty, \infty)\).
            
              Describes the z component of the vector of the direction the
              audio source is pointing in 3D Cartesian coordinate space. The
              default value is 0. This parameter is a-rate when panningModel is
              "equalpower", k-rate otherwise. Its nominal
              range is \((-\infty, \infty)\).
            
              Specifies the distance model used by this
              PannerNode. Defaults to
              "inverse".
            
              A reference distance for reducing volume as the source moves further
              from the listener. The default value is 1. A
              RangeError exception must be thrown if this is set
              to a negative value.
              The maximum distance between source and listener, after which the
              volume will not be reduced any further. The default value is
              10000. A RangeError exception must be thrown if this
              is set to a non-positive value.
            
Describes how quickly the volume is reduced as source moves away from listener. The default value is 1.
              The nominal range for the rolloffFactor specifies
              the minimum and maximum values the rolloffFactor can
              have. Values outside the range are clamped to lie within this
              range. The nominal range depends on the distanceModel
              as follows:
            
              linear: \([0, 1]\)
              inverse: \([0, \infty)\)
              exponential: \([0, \infty)\)
              A parameter for directional audio sources, this is an angle, in degrees, inside of which there will be no volume reduction. The default value is 360. The behavior is undefined if the angle is outside the interval [0, 360].
              A parameter for directional audio sources, this is an angle, in
              degrees, outside of which the volume will be reduced to a
              constant value of coneOuterGain. The default
              value is 360. The behavior is undefined if the angle is outside
              the interval [0, 360].
            
              A parameter for directional audio sources, this is the gain
              outside of the coneOuterAngle. The default
              value is 0. It is a linear value (not dB) in the range [0, 1]. An
              InvalidStateError MUST be thrown if the parameter is
              outside this range.
            
              This method is DEPRECATED. It is equivalent to setting
              positionX, positionY, and
              positionZ AudioParams directly.
            
              Sets the position of the audio source relative to the
              listener attribute. A 3D cartesian coordinate
              system is used.
            
              The x, y, z parameters represent the coordinates in
              3D space.
            
The default value is (0,0,0)
              This method is DEPRECATED. It is equivalent to setting
              orientationX, orientationY, and
              orientationZ AudioParams directly.
            
Describes which direction the audio source is pointing in the 3D cartesian coordinate space. Depending on how directional the sound is (controlled by the cone attributes), a sound pointing away from the listener can be very quiet or completely silent.
              The x, y, z parameters represent a direction vector
              in 3D space.
            
The default value is (1,0,0)
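              A non-normative sketch of the non-deprecated equivalent of these two methods, assuming panner is an existing PannerNode and context its AudioContext, sets the individual AudioParams directly:
  let t = context.currentTime;
  // Instead of panner.setPosition(1, 0, -2):
  panner.positionX.setValueAtTime(1, t);
  panner.positionY.setValueAtTime(0, t);
  panner.positionZ.setValueAtTime(-2, t);
  // Instead of panner.setOrientation(0, 0, -1):
  panner.orientationX.setValueAtTime(0, t);
  panner.orientationY.setValueAtTime(0, t);
  panner.orientationZ.setValueAtTime(-1, t);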
            This specifies options for constructing a
            PannerNode. All members are optional; if not
            specified, the normal default is used in constructing the node.
          
            positionX
              The initial value for the positionX AudioParam.
            positionY
              The initial value for the positionY AudioParam.
            positionZ
              The initial value for the positionZ AudioParam.
            orientationX
              The initial value for the orientationX AudioParam.
            orientationY
              The initial value for the orientationY AudioParam.
            orientationZ
              The initial value for the orientationZ AudioParam.
            refDistance
              The initial value for the refDistance attribute of the node.
            maxDistance
              The initial value for the maxDistance attribute of the node.
            rolloffFactor
              The initial value for the rolloffFactor attribute of the node.
            coneInnerAngle
              The initial value for the coneInnerAngle attribute of the node.
            coneOuterAngle
              The initial value for the coneOuterAngle attribute of the node.
            coneOuterGain
              The initial value for the coneOuterGain attribute of the node.
            
            The set of channel
            limitations for StereoPannerNode also applies
            to PannerNode.
          
          This interface represents the position and orientation of the person
          listening to the audio scene. All PannerNode
          objects spatialize in relation to the
          BaseAudioContext's listener. See
          Spatialization/Panning for more details about spatialization.
        
          The positionX, positionY, positionZ parameters represent
          the location of the listener in 3D Cartesian coordinate space.
          PannerNode objects use this position relative to
          individual audio sources for spatialization.
        
          The forwardX, forwardY, forwardZ parameters represent a
          direction vector in 3D space. Both a forward vector and
          an up vector are used to determine the orientation of
          the listener. In simple human terms, the forward vector
          represents which direction the person's nose is pointing. The
          up vector represents the direction the top of a person's
          head is pointing. These values are expected to be linearly
          independent (at right angles to each other), and unpredictable
          behavior may result if they are not. For normative requirements of
          how these values are to be interpreted, see the
          Spatialization/Panning section.
        
              Sets the x coordinate position of the audio listener in a 3D
              Cartesian coordinate space. The default value is 0. This
              parameter is a-rate when used with a PannerNode
              that has a panningModel set to
              "equalpower", k-rate otherwise. Its nominal
              range is \((-\infty, \infty)\).
            
              Sets the y coordinate position of the audio listener in a 3D
              Cartesian coordinate space. The default value is 0. This
              parameter is a-rate when used with a PannerNode
              that has a panningModel set to
              "equalpower", k-rate otherwise. Its nominal
              range is \((-\infty, \infty)\).
            
              Sets the z coordinate position of the audio listener in a 3D
              Cartesian coordinate space. The default value is 0. This
              parameter is a-rate when used with a PannerNode
              that has a panningModel set to
              "equalpower", k-rate otherwise. Its nominal
              range is \((-\infty, \infty)\).
            
              Sets the x coordinate component of the forward direction the
              listener is pointing in 3D Cartesian coordinate space. The
              default value is 0. This parameter is a-rate when used
              with a PannerNode that has a panningModel set to
              "equalpower", k-rate otherwise. Its nominal
              range is \((-\infty, \infty)\).
            
              Sets the y coordinate component of the forward direction the
              listener is pointing in 3D Cartesian coordinate space. The
              default value is 0. This parameter is a-rate when used
              with a PannerNode that has a panningModel set to
              "equalpower", k-rate otherwise. Its nominal
              range is \((-\infty, \infty)\).
            
              Sets the z coordinate component of the forward direction the
              listener is pointing in 3D Cartesian coordinate space. The
              default value is -1. This parameter is a-rate when used
              with a PannerNode that has a panningModel set to
              "equalpower", k-rate otherwise. Its nominal
              range is \((-\infty, \infty)\).
            
Sets the x coordinate component of the up direction the listener is pointing in 3D Cartesian coordinate space. The default value is 0. This parameter is a-rate. Its nominal range is \((-\infty, \infty)\).
Sets the y coordinate component of the up direction the listener is pointing in 3D Cartesian coordinate space. The default value is 1. This parameter is a-rate. Its nominal range is \((-\infty, \infty)\).
Sets the z coordinate component of the up direction the listener is pointing in 3D Cartesian coordinate space. The default value is 0. This parameter is a-rate. Its nominal range is \((-\infty, \infty)\).
              This method is DEPRECATED. It is equivalent to setting
              positionX.value, positionY.value, and
              positionZ.value directly with the given
              x, y, and z values,
              respectively.
            
              Sets the position of the listener in a 3D cartesian coordinate
              space. PannerNode objects use this position
              relative to individual audio sources for spatialization.
            
              The x, y, z parameters represent the coordinates in
              3D space.
            
The default value is (0,0,0)
              This method is DEPRECATED. It is equivalent to setting
              orientationX.value, orientationY.value,
              orientationZ.value, upX.value,
              upY.value, and upZ.value directly with
              the given x, y, z,
              xUp, yUp, and zUp values,
              respectively.
            
Describes which direction the listener is pointing in the 3D cartesian coordinate space. Both a front vector and an up vector are provided. In simple human terms, the front vector represents which direction the person's nose is pointing. The up vector represents the direction the top of a person's head is pointing. These values are expected to be linearly independent (at right angles to each other). For normative requirements of how these values are to be interpreted, see the spatialization section.
              The x, y, z parameters represent a front
              direction vector in 3D space, with the default value being
              (0,0,-1).
            
              The xUp, yUp, zUp parameters represent an up
              direction vector in 3D space, with the default value being
              (0,1,0).
            
This interface represents a processing node which positions an incoming audio stream in a stereo image using a low-cost equal-power panning algorithm. This panning effect is common in positioning audio components in a stereo stream.
    numberOfInputs  : 1
    numberOfOutputs : 1
    channelCount = 2;
    channelCountMode = "clamped-max";
    channelInterpretation = "speakers";
        The input of this node is stereo (2 channels) and cannot be increased. Connections from nodes with fewer or more channels will be up-mixed or down-mixed appropriately.
There are channelCount constraints and channelCountMode constraints for this node.
The output of this node is hard-coded to stereo (2 channels) and cannot be configured.
This node has no tail-time reference.
Let node be a new StereoPannerNode object. Initialize node, and return node.
The position of the input in the output's stereo image. -1 represents full left, +1 represents full right. Its default value is 0, and its nominal range is [-1, 1]. This parameter is a-rate.
            This specifies the options to use in constructing a
            StereoPannerNode. All members are optional; if
            not specified, the normal default is used in constructing the node.
          
            pan
              The initial value for the pan AudioParam.
            Because its processing is constrained by the above definitions,
            StereoPannerNode is limited to mixing no more
            than 2 channels of audio, and producing exactly 2 channels. It is
            possible to use a ChannelSplitterNode,
            intermediate processing by a subgraph of
            GainNodes and/or other nodes, and recombination
            via a ChannelMergerNode to realize arbitrary
            approaches to panning and mixing.
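            A non-normative usage sketch, assuming context is an AudioContext and source is an already-created source node, pans a signal from hard left to hard right over four seconds using the a-rate pan parameter:
  let panner = new StereoPannerNode(context, { pan: -1 });
  source.connect(panner).connect(context.destination);
  panner.pan.setValueAtTime(-1, context.currentTime);
  panner.pan.linearRampToValueAtTime(1, context.currentTime + 4);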
          
This interface represents a processing node which applies a linear convolution effect given an impulse response.
    numberOfInputs  : 1
    numberOfOutputs : 1
    channelCount = 2;
    channelCountMode = "clamped-max";
    channelInterpretation = "speakers";
        The input of this node is either mono (1 channel) or stereo (2 channels) and cannot be increased. Connections from nodes with more channels will be down-mixed appropriately.
There are channelCount constraints and channelCountMode constraints for this node. These constraints ensure that the input to the node is either mono or stereo.
          This node has a tail-time reference such that this node
          continues to output non-silent audio with zero input for the length
          of the buffer.
        
          ConvolverNodes are created with an internal slot [[buffer
          set]], initially set to false.
        
              Let node be a new ConvolverNode object.
              Initialize
              node. Set an internal boolean slot [[buffer
              set]], and initialize it to false. Return
              node.
            
              A mono, stereo, or 4-channel AudioBuffer
              containing the (possibly multi-channel) impulse response used by
              the ConvolverNode. The AudioBuffer must have 1, 2, or 4
              channels or a NotSupportedError exception MUST be
              thrown. This
              AudioBuffer must be of the same sample-rate
              as the AudioContext or a
              NotSupportedError exception MUST be thrown.
              At the time when this attribute is set, the buffer and
              the state of the normalize attribute will be used to
              configure the ConvolverNode with this impulse
              response having the given normalization. The initial value of
              this attribute is null.
            
              To set the buffer attribute, execute these steps:
              1. Let new buffer be the AudioBuffer to be assigned to buffer.
              2. If new buffer is not null and [[buffer set]] is true, throw an InvalidStateError and abort these steps.
              3. If new buffer is not null, set [[buffer set]] to true.
              4. Assign new buffer to the buffer attribute.
              The following text is non-normative. For normative information please see the channel configuration diagrams.
              The ConvolverNode only produces a mono output in the
              single case where there is a single input channel and a
              single-channel buffer. In all other cases, the
              output is stereo. In particular, when the buffer has
              four channels and there are two input channels, the
              ConvolverNode performs matrix "true" stereo convolution.
            
              Controls whether the impulse response from the buffer will be
              scaled by an equal-power normalization when the
              buffer attribute is set. Its default value is
              true in order to achieve a more uniform output level
              from the convolver when loaded with diverse impulse responses. If
              normalize is set to false, then the
              convolution will be rendered with no pre-processing/scaling of
              the impulse response. Changes to this value do not take effect
              until the next time the buffer attribute is set.
            
              If the normalize attribute is false when the
              buffer attribute is set then the
              ConvolverNode will perform a linear
              convolution given the exact impulse response contained within the
              buffer.
            
              Otherwise, if the normalize attribute is true when the
              buffer attribute is set then the
              ConvolverNode will first perform a scaled
              RMS-power analysis of the audio data contained within
              buffer to calculate a normalizationScale given
              this algorithm:
            
function calculateNormalizationScale(buffer)
{
    var GainCalibration = 0.00125;
    var GainCalibrationSampleRate = 44100;
    var MinPower = 0.000125;
    // Normalize by RMS power.
    var numberOfChannels = buffer.numberOfChannels;
    var length = buffer.length;
    var power = 0;
    for (var i = 0; i < numberOfChannels; i++) {
        var channelPower = 0;
        var channelData = buffer.getChannelData(i);
        for (var j = 0; j < length; j++) {
            var sample = channelData[j];
            channelPower += sample * sample;
        }
        power += channelPower;
    }
    power = Math.sqrt(power / (numberOfChannels * length));
    // Protect against accidental overload.
    if (!isFinite(power) || isNaN(power) || power < MinPower)
        power = MinPower;
    var scale = 1 / power;
    // Calibrate to make perceived volume same as unprocessed.
    scale *= GainCalibration;
    // Scale depends on sample-rate.
    if (buffer.sampleRate)
        scale *= GainCalibrationSampleRate / buffer.sampleRate;
    // True-stereo compensation.
    if (numberOfChannels == 4)
        scale *= 0.5;
    return scale;
}
      
            During processing, the ConvolverNode will then take this calculated normalizationScale value and multiply it by the result of the linear convolution resulting from processing the input with the impulse response (represented by the buffer) to produce the final output. Or any mathematically equivalent operation may be used, such as pre-multiplying the input by normalizationScale, or pre-multiplying a version of the impulse-response by normalizationScale.
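            A non-normative usage sketch, assuming context is an AudioContext, sourceNode is an existing source node, and 'impulse-response.wav' is a placeholder URL, loads an impulse response and convolves the source with it (normalize is left at its default of true, so the normalizationScale above is applied):
  let convolver = context.createConvolver();
  fetch('impulse-response.wav')
    .then(response => response.arrayBuffer())
    .then(data => context.decodeAudioData(data))
    .then(decodedBuffer => {
      convolver.buffer = decodedBuffer;
      sourceNode.connect(convolver).connect(context.destination);
    });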
            This specifies options for constructing a
            ConvolverNode. All members are optional; if not
            specified, the node is constructed using the normal defaults.
          
            buffer
              The desired buffer for the ConvolverNode.
              This buffer will be normalized according to the value of
              disableNormalization.
            disableNormalization
              The opposite of the desired initial value for the normalize
              attribute of the ConvolverNode.
            
            Implementations MUST support the following allowable configurations
            of impulse response channels in a ConvolverNode
            to achieve various reverb effects with 1 or 2 channels of input.
          
            The first image in the diagram illustrates the general case, where
            the source has N input channels, the impulse response has K
            channels, and the playback system has M output channels. Because
            ConvolverNode is limited to 1 or 2 channels of
            input, not every case can be handled.
          
            Single channel convolution operates on a mono audio input, using a
            mono impulse response, and generating a mono output. The remaining
            images in the diagram illustrate the supported cases for mono and
            stereo playback where N and M are 1 or 2 and K is 1, 2, or 4.
            Developers desiring more complex and arbitrary matrixing can use a
            ChannelSplitterNode, multiple single-channel
            ConvolverNodes and a
            ChannelMergerNode.
          
            [Figure: supported input and impulse response channel configurations for a ConvolverNode]
            This interface represents a node which is able to provide real-time frequency and time-domain analysis information. The audio stream will be passed un-processed from input to output.
    numberOfInputs  : 1
    numberOfOutputs : 1    Note that this output may be left unconnected.
    channelCount = 1;
    channelCountMode = "max";
    channelInterpretation = "speakers";
        This node has no tail-time reference.
Let node be a new AnalyserNode object. Initialize node, and return node.
              Copies the current frequency data into the passed
              floating-point array. If the array has fewer elements than the
              frequencyBinCount, the excess elements will
              be dropped. If the array has more elements than the
              frequencyBinCount, the excess elements will
              be ignored. The most recent fftSize frames are
              used in computing the frequency data.
            
              If another call to getFloatFrequencyData or
              getByteFrequencyData occurs within the same render
              quantum as a previous call, the current frequency data
              is not updated with the same data. Instead, the previously
              computed data is returned.
            
The frequency data are in dB units.
              Copies the current frequency data into the passed unsigned
              byte array. If the array has fewer elements than the
              frequencyBinCount, the excess elements will
              be dropped. If the array has more elements than the
              frequencyBinCount, the excess elements will
              be ignored. The most recent fftSize frames are
              used in computing the frequency data.
            
              If another call to getByteFrequencyData or
              getFloatFrequencyData occurs within the same
              render quantum as a previous call, the current
              frequency data is not updated with the same data. Instead,
              the previously computed data is returned.
            
The values stored in the unsigned byte array are computed in the following way. Let \(Y[k]\) be the current frequency data as described in FFT windowing and smoothing. Then the byte value, \(b[k]\), is
                  $$
                    b[k] = \left\lfloor
                        \frac{255}{\mbox{dB}_{max} - \mbox{dB}_{min}}
                        \left(Y[k] - \mbox{dB}_{min}\right)
                      \right\rfloor
                  $$
            
              where \(\mbox{dB}_{min}\) is minDecibels and
              \(\mbox{dB}_{max}\) is maxDecibels. If
              \(b[k]\) lies outside the range of 0 to 255, \(b[k]\) is clipped
              to lie in that range.
            
              Copies the current down-mixed time-domain (waveform) data into
              the passed floating-point array. If the array has fewer elements
              than the value of fftSize, the excess
              elements will be dropped. If the array has more elements than
              fftSize,
              the excess elements will be ignored. The most recent fftSize frames are
              returned (after downmixing).
            
              Copies the current down-mixed time-domain (waveform) data into
              the passed unsigned byte array. If the array has fewer elements
              than the value of fftSize, the excess
              elements will be dropped. If the array has more elements than
              fftSize,
              the excess elements will be ignored. The most recent fftSize frames are
              used in computing the byte data.
            
The values stored in the unsigned byte array are computed in the following way. Let \(x[k]\) be the time-domain data. Then the byte value, \(b[k]\), is
              $$
                b[k] = \left\lfloor 128(1 + x[k]) \right\rfloor.
              $$
            If \(b[k]\) lies outside the range 0 to 255, \(b[k]\) is clipped to lie in that range.
              The size of the FFT used for frequency-domain analysis.
              This must be a power of two in the
              range 32 to 32768, otherwise an IndexSizeError
              exception MUST be thrown. The default value is 2048. Note
              that large FFT sizes can be costly to compute.
            
              If the fftSize is changed to a different value, then
              all state associated with smoothing of the frequency data (for
              
              getByteFrequencyData and 
              getFloatFrequencyData) is reset. That is, the
              previous block, \(\hat{X}_{-1}[k]\), used for smoothing over time is set to 0 for
              all \(k\).
            
Half the FFT size.
              minDecibels is the minimum power
              value in the scaling range for the FFT analysis data for
              conversion to unsigned byte values. The default value is -100.
              If the value of this attribute is set
              to a value more than or equal to maxDecibels,
              an IndexSizeError exception MUST be thrown.
            
              maxDecibels is the maximum power
              value in the scaling range for the FFT analysis data for
              conversion to unsigned byte values. The default value is -30.
              If the value of this attribute is set
              to a value less than or equal to minDecibels,
              an IndexSizeError exception MUST be thrown.
            
              A value from 0 -> 1 where 0 represents no time averaging with
              the last analysis frame. The default value is 0.8. If the value of this attribute is set to a value
              less than 0 or more than 1, an IndexSizeError
              exception MUST be thrown.
            
            This specifies the options to be used when constructing an
            AnalyserNode. All members are optional; if not
            specified, the normal default values are used to construct the
            node.
          
fftSize frames are
              used for the down-mixing operation.
            
            In the following, let \(N\) be the value of the
            fftSize attribute of this AnalyserNode.
          
Applying a Blackman window consists of the following operation on the input time domain data. Let \(x[n]\) for \(n = 0, \ldots, N - 1\) be the time domain data. The Blackman window is defined by
          $$
          \begin{align*}
            \alpha &= \mbox{0.16} \\ a_0 &= \frac{1-\alpha}{2} \\
             a_1   &= \frac{1}{2} \\
             a_2   &= \frac{\alpha}{2} \\
             w[n] &= a_0 - a_1 \cos\frac{2\pi n}{N} + a_2 \cos\frac{4\pi n}{N}, \mbox{ for } n = 0, \ldots, N - 1
           \end{align*}
           $$
          
          The windowed signal \(\hat{x}[n]\) is
            $$
              \hat{x}[n] = x[n] w[n], \mbox{ for } n = 0, \ldots, N - 1
            $$
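          A non-normative sketch of the window computation defined above (the function name is illustrative only):
  // Compute the Blackman window w[n] for a block of N samples.
  function blackmanWindow(N) {
      var alpha = 0.16;
      var a0 = (1 - alpha) / 2;
      var a1 = 0.5;
      var a2 = alpha / 2;
      var w = new Float32Array(N);
      for (var n = 0; n < N; n++) {
          w[n] = a0 - a1 * Math.cos(2 * Math.PI * n / N)
                    + a2 * Math.cos(4 * Math.PI * n / N);
      }
      return w;
  }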
          
          Applying a Fourier transform consists of computing the Fourier transform in the following way. Let \(X[k]\) be the complex frequency domain data and \(\hat{x}[n]\) be the windowed time domain data computed above. Then
            $$
              X[k] = \frac{1}{N} \sum_{n = 0}^{N - 1} \hat{x}[n]\, e^{\frac{-2\pi i k n}{N}}
            $$
          for \(k = 0, \dots, N/2-1\).
Smoothing over time of the frequency data consists of the following operation. Let \(\hat{X}_{-1}[k]\) be the smoothed value computed for the previous block, let \(X[k]\) be the current frequency-domain data, and let \(\tau\) be the value of the smoothingTimeConstant
            attribute for this AnalyserNode.
            Then the smoothed value, \(\hat{X}[k]\), is computed by
            $$
              \hat{X}[k] = \tau\, \hat{X}_{-1}[k] + (1 - \tau)\, |X[k]|
            $$
          
          for \(k = 0, \ldots, N - 1\).
Conversion to dB consists of the following operation, where \(\hat{X}[k]\) is computed in smoothing over time:
          $$
            Y[k] = 20\log_{10}\hat{X}[k]
          $$
          
          for \(k = 0, \ldots, N-1\).
            This array, \(Y[k]\), is copied to the output array for
            getFloatFrequencyData. For
            getByteFrequencyData, the \(Y[k]\) is clipped to lie
            between minDecibels and
            maxDecibels and then scaled to fit in an
            unsigned byte such that minDecibels is
            represented by the value 0 and maxDecibels is
            represented by the value 255.
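          A non-normative sketch of the smoothing and dB conversion steps above; magX holds the magnitudes \(|X[k]|\) of the current block, Xprev holds the previous smoothed values, and tau is the smoothingTimeConstant (all names are illustrative only).
  function smoothAndConvertToDb(magX, Xprev, tau) {
      var Y = new Float32Array(magX.length);
      for (var k = 0; k < magX.length; k++) {
          // Smoothing over time.
          Xprev[k] = tau * Xprev[k] + (1 - tau) * magX[k];
          // Conversion to dB.
          Y[k] = 20 * Math.log10(Xprev[k]);
      }
      return Y; // the values copied out by getFloatFrequencyData
  }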
          
          The ChannelSplitterNode is for use in more advanced
          applications and would often be used in conjunction with
          ChannelMergerNode.
        
    numberOfInputs  : 1
    numberOfOutputs : Variable N (defaults to 6) // number of "active" (non-silent) outputs is determined by number of channels in the input
    channelCountMode = "explicit";
    channelInterpretation = "discrete";
        
          This interface represents an AudioNode for
          accessing the individual channels of an audio stream in the routing
          graph. It has a single input, and a number of "active" outputs which
          equals the number of channels in the input audio stream. For example,
          if a stereo input is connected to an
          ChannelSplitterNode then the number of active
          outputs will be two (one from the left channel and one from the
          right). There are always a total number of N outputs (determined by
          the numberOfOutputs parameter to the
          AudioContext method 
           createChannelSplitter()). The default number is 6 if
          this value is not provided. Any outputs which are not "active" will
          output silence and would typically not be connected to anything.
        
          The channelCount is set equal to
          numberOfOutputs. There are channelCount
          constraints and channelCountMode constraints for this
          node.
        
This node has no tail-time reference.
          Please note that in this example, the splitter does not interpret the channel identities (such as left, right, etc.), but simply splits out channels in the order that they are input.
          One application for ChannelSplitterNode is for doing
          "matrix mixing" where individual gain control of each channel is
          desired.
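          A non-normative sketch of such per-channel ("matrix") gain control for a stereo source; stereoSource is assumed to exist.
  var splitter = context.createChannelSplitter(2);
  var merger = context.createChannelMerger(2);
  var leftGain = context.createGain();
  var rightGain = context.createGain();
  stereoSource.connect(splitter);
  splitter.connect(leftGain, 0);       // output 0 carries input channel 0
  splitter.connect(rightGain, 1);      // output 1 carries input channel 1
  leftGain.connect(merger, 0, 0);      // back into output channel 0
  rightGain.connect(merger, 0, 1);     // back into output channel 1
  merger.connect(context.destination);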
        
Let node be a new ChannelSplitterNode object. Initialize node, and return node.
ChannelSplitterNode.
            
          The ChannelMergerNode is for use in more advanced
          applications and would often be used in conjunction with
          ChannelSplitterNode.
        
    numberOfInputs  : Variable N (defaults to 6)
    numberOfOutputs : 1
    channelCount = 1;
    channelCountMode = "explicit";
    channelInterpretation = "speakers";
          This interface represents an AudioNode for
          combining channels from multiple audio streams into a single audio
          stream. It has a variable number of inputs (defaulting to 6), but not
          all of them need be connected. There is a single output whose audio
          stream has a number of channels equal to the number of inputs.
        
To merge multiple inputs into one stream, each input gets downmixed into one channel (mono) based on the specified mixing rule. An unconnected input still counts as one silent channel in the output. Changing input streams does not affect the order of output channels.
There are channelCount constraints and channelCountMode constraints for this node.
This node has no tail-time reference.
          For example, if a default ChannelMergerNode has
          two connected stereo inputs, the first and second input will be
          downmixed to mono respectively before merging. The output will be a
          6-channel stream whose first two channels are filled with the
          first two (downmixed) inputs and the rest of channels will be silent.
        
          Also the ChannelMergerNode can be used to arrange
          multiple audio streams in a certain order for the multi-channel
          speaker array such as a 5.1 surround setup. The merger does not
          interpret the channel identities (such as left, right, etc.), but
          simply combines channels in the order that they are input.
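          A non-normative sketch of arranging six mono sources into a 5.1-ordered stream; the source nodes (frontLeft, frontRight, center, lfe, surroundLeft, surroundRight) are assumed to exist.
  var merger = context.createChannelMerger(6);
  frontLeft.connect(merger, 0, 0);
  frontRight.connect(merger, 0, 1);
  center.connect(merger, 0, 2);
  lfe.connect(merger, 0, 3);
  surroundLeft.connect(merger, 0, 4);
  surroundRight.connect(merger, 0, 5);
  merger.connect(context.destination);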
        
          Let node be a new ChannelMergerNode object. Initialize node, and return node.
ChannelMergerNode.
            
          DynamicsCompressorNode is an
          AudioNode processor implementing a dynamics
          compression effect.
        
Dynamics compression is very commonly used in musical production and game audio. It lowers the volume of the loudest parts of the signal and raises the volume of the softest parts. Overall, a louder, richer, and fuller sound can be achieved. It is especially important in games and musical applications where large numbers of individual sounds are played simultaneously to control the overall signal level and help avoid clipping (distorting) the audio output to the speakers.
    numberOfInputs  : 1
    numberOfOutputs : 1
    channelCount = 2;
    channelCountMode = "explicit";
    channelInterpretation = "speakers";
        This node has no tail-time reference.
Let node be a new DynamicsCompressorNode object. Initialize node, and return node.
              The decibel value above which the compression will start taking
              effect. Its default value is -24. This parameter is
              k-rate. Its nominal range is [-100, 0].
            
              A decibel value representing the range above the threshold where
              the curve smoothly transitions to the "ratio" portion. Its
              default value is 30. This parameter is
              k-rate. Its nominal range is [0, 40].
            
              The amount of dB change in input for a 1 dB change in output. Its
              default value is 12. This parameter is
              k-rate. Its nominal range is [1, 20].
            
A read-only decibel value for metering purposes, representing the current amount of gain reduction that the compressor is applying to the signal. If fed no signal the value will be 0 (no gain reduction).
              The amount of time (in seconds) to reduce the gain by 10dB. Its
              default value is 0.003. This parameter is
              k-rate. Its nominal range is [0, 1].
            
              The amount of time (in seconds) to increase the gain by 10dB. Its
              default value is 0.250. This parameter is
              k-rate. Its nominal range is [0, 1].
            
            This specifies the options to use in constructing a
            DynamicsCompressorNode. All members are
            optional; if not specified the normal defaults are used in
            constructing the node.
          
attack
              AudioParam.
            knee
              AudioParam.
            ratio
              AudioParam.
            release
              AudioParam.
            threshold
              AudioParam.
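            As a non-normative illustration, a DynamicsCompressorNode could be constructed with explicit option values like so (the particular values are illustrative only; sourceNode is assumed to exist):
  var compressor = new DynamicsCompressorNode(context, {
      threshold: -24,   // dB
      knee: 30,         // dB
      ratio: 12,        // dB of input change per dB of output change
      attack: 0.003,    // seconds
      release: 0.25     // seconds
  });
  sourceNode.connect(compressor);
  compressor.connect(context.destination);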
            
          BiquadFilterNode is an
          AudioNode processor implementing very common
          low-order filters.
        
          Low-order filters are the building blocks of basic tone controls
          (bass, mid, treble), graphic equalizers, and more advanced filters.
          Multiple BiquadFilterNode filters can be combined
          to form more complex filters. The filter parameters such as frequency can be
          changed over time for filter sweeps, etc. Each
          BiquadFilterNode can be configured as one of a
          number of common filter types as shown in the IDL below. The default
          filter type is "lowpass".
        
          Both frequency and
          detune form
          a compound parameter and are both a-rate. They are used
          together to determine a computedFrequency value:
        
computedFrequency(t) = frequency(t) * pow(2, detune(t) / 1200)
The nominal range for this compound parameter is [0, Nyquist frequency].
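A non-normative sketch of the relation above; a detune of +1200 cents doubles the effective frequency.
  function computedFrequency(frequency, detune) {
      return frequency * Math.pow(2, detune / 1200);
  }
  // computedFrequency(440, 1200) === 880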
    numberOfInputs  : 1
    numberOfOutputs : 1
    channelCountMode = "max";
    channelInterpretation = "speakers";
        The number of channels of the output always equals the number of channels of the input.
This node has a tail-time reference such that this node continues to output non-silent audio with zero input. Since this is an IIR filter, the filter produces non-zero output forever, but in practice, this can be limited after some finite time where the output is sufficiently close to zero. The actual time depends on the filter coefficients.
A lowpass filter allows frequencies below the cutoff frequency to pass through and attenuates frequencies above the cutoff. It implements a standard second-order resonant lowpass filter with 12dB/octave rolloff.
- frequency
 - The cutoff frequency
 - Q
 - Controls how peaked the response will be at the cutoff frequency. A large value makes the response more peaked. Please note that for this filter type, this value is not a traditional Q, but is a resonance value in decibels.
 - gain
 - Not used in this filter type
 
A highpass filter is the opposite of a lowpass filter. Frequencies above the cutoff frequency are passed through, but frequencies below the cutoff are attenuated. It implements a standard second-order resonant highpass filter with 12dB/octave rolloff.
- frequency
 - The cutoff frequency below which the frequencies are attenuated
 - Q
 - Controls how peaked the response will be at the cutoff frequency. A large value makes the response more peaked. Please note that for this filter type, this value is not a traditional Q, but is a resonance value in decibels.
 - gain
 - Not used in this filter type
 
A bandpass filter allows a range of frequencies to pass through and attenuates the frequencies below and above this frequency range. It implements a second-order bandpass filter.
- frequency
 - The center of the frequency band
 - Q
 - Controls the width of the band. The width becomes narrower as the Q value increases.
 - gain
 - Not used in this filter type
 
The lowshelf filter allows all frequencies through, but adds a boost (or attenuation) to the lower frequencies. It implements a second-order lowshelf filter.
- frequency
 - The upper limit of the frequencies where the boost (or attenuation) is applied.
 - Q
 - Not used in this filter type.
 - gain
 - The boost, in dB, to be applied. If the value is negative, the frequencies are attenuated.
 
The highshelf filter is the opposite of the lowshelf filter and allows all frequencies through, but adds a boost to the higher frequencies. It implements a second-order highshelf filter.
- frequency
 - The lower limit of the frequencies where the boost (or attenuation) is applied.
 - Q
 - Not used in this filter type.
 - gain
 - The boost, in dB, to be applied. If the value is negative, the frequencies are attenuated.
 
The peaking filter allows all frequencies through, but adds a boost (or attenuation) to a range of frequencies.
- frequency
 - The center frequency of where the boost is applied.
 - Q
 - Controls the width of the band of frequencies that are boosted. A large value implies a narrow width.
 - gain
 - The boost, in dB, to be applied. If the value is negative, the frequencies are attenuated.
 
The notch filter (also known as a band-stop or band-rejection filter) is the opposite of a bandpass filter. It allows all frequencies through, except for a set of frequencies.
- frequency
 - The center frequency of where the notch is applied.
 - Q
 - Controls the width of the band of frequencies that are attenuated. A large value implies a narrow width.
 - gain
 - Not used in this filter type.
 
An allpass filter allows all frequencies through, but changes the phase relationship between the various frequencies. It implements a second-order allpass filter.
- frequency
 - The frequency where the center of the phase transition occurs. Viewed another way, this is the frequency with maximal group delay.
 - Q
 - Controls how sharp the phase transition is at the center frequency. A larger value implies a sharper transition and a larger group delay.
 - gain
 - Not used in this filter type.
 
          All attributes of the BiquadFilterNode are
          a-rate AudioParams.
        
Let node be a new BiquadFilterNode object. Initialize node, and return node.
              The type of this BiquadFilterNode. Its
              default value is "lowpass". The exact meaning of the other
              parameters depend on the value of the type
              attribute.
            
              The frequency at which the BiquadFilterNode
              will operate, in Hz. Its default value is 350Hz. It forms a
              compound parameter with detune. Its
              nominal range is [0, Nyquist frequency].
            
              A detune value, in cents, for the frequency. Its default value is
              0. It forms a compound parameter with
              frequency. Its nominal range is \((-\infty,
              \infty)\).
            
The Q factor has a default value of 1. Its nominal range is \((-\infty, \infty)\). This is not used for lowshelf or highshelf filters.
The gain has a default value of 0. Its nominal range is \((-\infty, \infty)\). Its value is in dB units. The gain is only used for lowshelf, highshelf, and peaking filters.
              Given the current filter parameter
              settings, synchronously calculates the frequency response for the
              specified frequencies. The three parameters MUST be
              Float32Arrays of the same length, or an
              InvalidAccessError MUST be thrown.
            
              The frequency response returned MUST be computed with the
              AudioParam sampled for the current processing
              block.
            
This parameter specifies an array of frequencies at which the response values will be calculated.
This parameter specifies an output array receiving the linear magnitude response values.
                  If a value in the frequencyHz parameter is not
                  within [0; sampleRate/2], where sampleRate is
                  the value of the sampleRate
                  property of the AudioContext, the corresponding value
                  at the same index of the magResponse array MUST
                  be NaN.
                
This parameter specifies an output array receiving the phase response values in radians.
                  If a value in the frequencyHz parameter is not
                  within [0; sampleRate/2], where sampleRate is
                  the value of the sampleRate
                  property of the AudioContext, the corresponding value
                  at the same index of the phaseResponse array
                  MUST be NaN.
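              A non-normative sketch of sampling a filter's response at a few frequencies:
  var filter = context.createBiquadFilter();
  var frequencyHz = new Float32Array([100, 1000, 10000]);
  var magResponse = new Float32Array(frequencyHz.length);
  var phaseResponse = new Float32Array(frequencyHz.length);
  filter.getFrequencyResponse(frequencyHz, magResponse, phaseResponse);
  // magResponse holds linear magnitudes and phaseResponse holds radians;
  // out-of-range frequencies produce NaN as described above.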
                
            This specifies the options to be used when constructing a
            BiquadFilterNode. All members are optional; if
            not specified, the normal default values are used to construct the
            node.
          
Q.
            detune.
            frequency.
            gain.
            
            There are multiple ways of implementing the type of filters
            available through the BiquadFilterNode each
            having very different characteristics. The formulas in this section
            describe the filters that a conforming implementation MUST
            implement, as they determine the characteristics of the different
            filter types. They are inspired by formulas found in the Audio EQ
            Cookbook.
          
            The transfer function for the filters implemented by the
            BiquadFilterNode is:
          
  $$
  H(z) = \frac{\frac{b_0}{a_0} + \frac{b_1}{a_0}z^{-1} + \frac{b_2}{a_0}z^{-2}}
              {1+\frac{a_1}{a_0}z^{-1}+\frac{a_2}{a_0}z^{-2}}
  $$
            
          The initial filter state is 0.
The coefficients in the transfer function above are different for each node type. The following intermediate variables are necessary for their computation, based on the computedValue of the AudioParams of the
          BiquadFilterNode.
            Let \(F_s\) be the value of the sampleRate
            attribute for this AudioContext,
            let \(f_0\) be the value of the computedFrequency,
            let \(G\) be the value of the gain
            AudioParam, and let \(Q\) be the value of the Q
            AudioParam.
            
$$
\begin{align*}
  A        &= 10^{\frac{G}{40}} \\
  \omega_0 &= 2\pi\frac{f_0}{F_s} \\
  \alpha_Q &= \frac{\sin\omega_0}{2Q} \\
  \alpha_{Q_{dB}} &= \frac{\sin\omega_0}{2 \cdot 10^{Q/20}} \\
  S        &= 1 \\
  \alpha_S &= \frac{\sin\omega_0}{2}\sqrt{\left(A+\frac{1}{A}\right)\left(\frac{1}{S}-1\right)+2}
\end{align*}
$$
            
            lowpass
            
                $$
                  \begin{align*}
                    b_0 &= \frac{1 - \cos\omega_0}{2} \\
                    b_1 &= 1 - \cos\omega_0 \\
                    b_2 &= \frac{1 - \cos\omega_0}{2} \\
                    a_0 &= 1 + \alpha_{Q_{dB}} \\
                    a_1 &= -2 \cos\omega_0 \\
                    a_2 &= 1 - \alpha_{Q_{dB}}
                  \end{align*}
                $$
              
            highpass
            
                  $$
                    \begin{align*}
                      b_0 &= \frac{1 + \cos\omega_0}{2} \\
                      b_1 &= -(1 + \cos\omega_0) \\
                      b_2 &= \frac{1 + \cos\omega_0}{2} \\
                      a_0 &= 1 + \alpha_{Q_{dB}} \\
                      a_1 &= -2 \cos\omega_0 \\
                      a_2 &= 1 - \alpha_{Q_{dB}}
                    \end{align*}
                  $$
              
            bandpass
            
              $$
                \begin{align*}
                  b_0 &= \alpha_Q \\
                  b_1 &= 0 \\
                  b_2 &= -\alpha_Q \\
                  a_0 &= 1 + \alpha_Q \\
                  a_1 &= -2 \cos\omega_0 \\
                  a_2 &= 1 - \alpha_Q
                \end{align*}
              $$
            
            notch
            
                $$
                  \begin{align*}
                    b_0 &= 1 \\
                    b_1 &= -2\cos\omega_0 \\
                    b_2 &= 1 \\
                    a_0 &= 1 + \alpha_Q \\
                    a_1 &= -2 \cos\omega_0 \\
                    a_2 &= 1 - \alpha_Q
                  \end{align*}
                $$
              
            allpass
            
                $$
                  \begin{align*}
                    b_0 &= 1 - \alpha_Q \\
                    b_1 &= -2\cos\omega_0 \\
                    b_2 &= 1 + \alpha_Q \\
                    a_0 &= 1 + \alpha_Q \\
                    a_1 &= -2 \cos\omega_0 \\
                    a_2 &= 1 - \alpha_Q
                  \end{align*}
                $$
              
            peaking
            
                $$
                  \begin{align*}
                    b_0 &= 1 + \alpha_Q\, A \\
                    b_1 &= -2\cos\omega_0 \\
                    b_2 &= 1 - \alpha_Q\,A \\
                    a_0 &= 1 + \frac{\alpha_Q}{A} \\
                    a_1 &= -2 \cos\omega_0 \\
                    a_2 &= 1 - \frac{\alpha_Q}{A}
                  \end{align*}
                $$
              
            lowshelf
            
                $$
                  \begin{align*}
                    b_0 &= A \left[ (A+1) - (A-1) \cos\omega_0 + 2 \alpha_S \sqrt{A}\right] \\
                    b_1 &= 2 A \left[ (A-1) - (A+1) \cos\omega_0 \right] \\
                    b_2 &= A \left[ (A+1) - (A-1) \cos\omega_0 - 2 \alpha_S \sqrt{A} \right] \\
                    a_0 &= (A+1) + (A-1) \cos\omega_0 + 2 \alpha_S \sqrt{A} \\
                    a_1 &= -2 \left[ (A-1) + (A+1) \cos\omega_0\right] \\
                    a_2 &= (A+1) + (A-1) \cos\omega_0 - 2 \alpha_S \sqrt{A}
                  \end{align*}
                $$
              
            highshelf
            
                $$
                  \begin{align*}
                    b_0 &= A\left[ (A+1) + (A-1)\cos\omega_0 + 2\alpha_S\sqrt{A} \right] \\
                    b_1 &= -2A\left[ (A-1) + (A+1)\cos\omega_0 \right] \\
                    b_2 &= A\left[ (A+1) + (A-1)\cos\omega_0 - 2\alpha_S\sqrt{A} \right] \\
                    a_0 &= (A+1) - (A-1)\cos\omega_0 + 2\alpha_S\sqrt{A} \\
                    a_1 &= 2\left[ (A-1) - (A+1)\cos\omega_0\right] \\
                    a_2 &= (A+1) - (A-1)\cos\omega_0 - 2\alpha_S\sqrt{A}
                  \end{align*}
                $$
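            A non-normative sketch of computing the lowpass coefficients from the formulas above, using the intermediate variables defined for this section (the function name is illustrative only):
  function lowpassCoefficients(f0, Q, Fs) {
      var w0 = 2 * Math.PI * f0 / Fs;
      var alphaQdB = Math.sin(w0) / (2 * Math.pow(10, Q / 20));
      var cosw0 = Math.cos(w0);
      return {
          b0: (1 - cosw0) / 2,
          b1: 1 - cosw0,
          b2: (1 - cosw0) / 2,
          a0: 1 + alphaQdB,
          a1: -2 * cosw0,
          a2: 1 - alphaQdB
      };
  }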
              
            
          IIRFilterNode is an AudioNode
          processor implementing a general IIR Filter. In general, it is best
          to use BiquadFilterNodes to implement
          higher-order filters for the following reasons:
        
However, odd-ordered filters cannot be created, so if such filters are needed or automation is not needed, then IIR filters may be appropriate.
Once created, the coefficients of the IIR filter cannot be changed.
    numberOfInputs  : 1
    numberOfOutputs : 1
    channelCountMode = "max";
    channelInterpretation = "speakers";
        The number of channels of the output always equals the number of channels of the input.
This node has a tail-time reference such that this node continues to output non-silent audio with zero input. Since this is an IIR filter, the filter produces non-zero output forever, but in practice, this can be limited after some finite time where the output is sufficiently close to zero. The actual time depends on the filter coefficients.
Let node be a new IIRFilterNode object. Initialize node, and return node.
              Given the current filter parameter
              settings, synchronously calculates the frequency response for the
              specified frequencies. The three parameters MUST be
              Float32Arrays of the same length, or an
              InvalidAccessError MUST be thrown.
            
This parameter specifies an output array receiving the linear magnitude response values.
                  If a value in the frequencyHz parameter is not
                  within [0; sampleRate/2], where sampleRate is
                  the value of the sampleRate
                  property of the AudioContext, the corresponding value
                  at the same index of the magResponse array MUST
                  be NaN.
                
This parameter specifies an output array receiving the phase response values in radians.
                  If a value in the frequencyHz parameter is not
                  within [0; sampleRate/2], where sampleRate is
                  the value of the sampleRate
                  property of the AudioContext, the corresponding value
                  at the same index of the phaseResponse array
                  MUST be NaN.
                
            The IIRFilterOptions dictionary is used to specify the
            filter coefficients of the IIRFilterNode.
          
IIRFilterNode. This member is required. If
              not specified, a NotFoundError MUST be thrown.
            IIRFilterNode. This member is required. If
              not specified, a NotFoundError MUST be thrown.
            
            Let \(b_m\) be the feedforward coefficients and
            \(a_n\) be the feedback coefficients specified by
            
            createIIRFilter. Then the transfer function of the general IIR
            filter is given by
          
            $$
              H(z) = \frac{\sum_{m=0}^{M} b_m z^{-m}}{\sum_{n=0}^{N} a_n z^{-n}}
            $$
          
          where \(M + 1\) is the length of the \(b\) array and \(N + 1\) is the length of the \(a\) array. The coefficient \(a_0\) cannot be 0. At least one of \(b_m\) must be non-zero.
Equivalently, the time-domain equation is:
            $$
              \sum_{k=0}^{N} a_k y(n-k) = \sum_{k=0}^{M} b_k x(n-k)
            $$
          
          The initial filter state is the all-zeroes state.
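          As a non-normative illustration, the one-pole lowpass \(y(n) = (1 - c)\,x(n) + c\,y(n-1)\) corresponds to feedforward coefficients [1 - c] and feedback coefficients [1, -c] in the transfer function above; sourceNode is assumed to exist.
  var c = 0.9;
  var feedforward = [1 - c];
  var feedback = [1, -c];
  var iir = context.createIIRFilter(feedforward, feedback);
  sourceNode.connect(iir);
  iir.connect(context.destination);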
          WaveShaperNode is an
          AudioNode processor implementing non-linear
          distortion effects.
        
Non-linear waveshaping distortion is commonly used both for subtle non-linear warming and for more obvious distortion effects. Arbitrary non-linear shaping curves may be specified.
    numberOfInputs  : 1
    numberOfOutputs : 1
    channelCountMode = "max";
    channelInterpretation = "speakers";
        The number of channels of the output always equals the number of channels of the input.
          WaveShaperNodes are created with an internal flag curve
          set, initially set to false.
        
If the oversample attribute is set to none, the WaveShaperNode has no tail-time. If the oversample attribute is set to 2x or 4x, the WaveShaperNode can have tail-time caused by the resampling technique used. The duration of this tail-time is therefore implementation-dependent.
Let node be a new WaveShaperNode object. Initialize node, and return node.
The shaping curve used for the waveshaping effect. The input signal is nominally within the range [-1; 1]. Each input sample within this range will index into the shaping curve, with a signal level of zero corresponding to the center value of the curve array if there are an odd number of entries, or interpolated between the two centermost values if there are an even number of entries in the array. Any sample value less than -1 will correspond to the first value in the curve array. Any sample value greater than +1 will correspond to the last value in the curve array.
The implementation must perform linear interpolation between adjacent points in the curve. Initially the curve attribute is null, which means that the WaveShaperNode will pass its input to its output without modification.
              Values of the curve are spread with equal spacing in the [-1; 1]
              range. This means that a curve with an even
              number of values will not have a value for a signal at zero, and a
              curve with an odd number of values will have a
              value for a signal at zero.
            
              An InvalidStateError MUST be thrown if this attribute
              is set with a Float32Array that has a
              length less than 2.
            
              When this attribute is set, an internal copy of the curve is
              created by the WaveShaperNode. Subsequent
              modifications of the contents of the array used to set the
              attribute therefore have no effect: the attribute must be set
              again in order to change the curve.
            
              To set the curve attribute, execute these steps:
            
              1. Let new curve be the Float32Array
                 to be assigned to curve.
              2. If new curve is not null and
                 curve set is true, throw an
                 InvalidStateError and abort these steps.
              3. If new curve is not null, set
                 curve set to true.
              4. Assign new curve to the curve
                 attribute.
              Specifies what type of oversampling (if any) should be used when applying the shaping curve. The default value is "none", meaning the curve will be applied directly to the input samples. A value of "2x" or "4x" can improve the quality of the processing by avoiding some aliasing, with the "4x" value yielding the highest quality. For some applications, it's better to use no oversampling in order to get a very precise shaping curve.
A value of "2x" or "4x" means that the following steps must be performed:
              1. Up-sample the input audio samples to 2x or 4x the sample-rate of
                 the AudioContext. Thus for each render
                 quantum, generate 256 (for 2x) or 512 (for 4x) samples.
              2. Apply the shaping curve.
              3. Down-sample the result back to the sample-rate of the
                 AudioContext. Thus taking the 256 (or 512)
                 processed samples, generating 128 as the final result.
              The exact up-sampling and down-sampling filters are not specified, and can be tuned for sound quality (low aliasing, etc.), low latency, and performance.
Use of oversampling introduces some degree of audio processing latency due to the up-sampling and down-sampling filters. The amount of this latency can vary from one implementation to another.
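A non-normative sketch of a simple soft-clipping curve used with 4x oversampling; the curve shape is purely illustrative, and sourceNode is assumed to exist.
  var shaper = context.createWaveShaper();
  var n = 1024;
  var curve = new Float32Array(n);
  for (var i = 0; i < n; i++) {
      var x = (i / (n - 1)) * 2 - 1;   // map index to [-1, 1]
      curve[i] = Math.tanh(3 * x);     // arbitrary soft-clipping shape
  }
  shaper.curve = curve;
  shaper.oversample = "4x";
  sourceNode.connect(shaper);
  shaper.connect(context.destination);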
            This specifies the options for constructing a
            WaveShaperNode. All members are optional; if
            not specified, the normal default is used in constructing the node.
          
          OscillatorNode represents an audio source
          generating a periodic waveform. It can be set to a few commonly used
          waveforms. Additionally, it can be set to an arbitrary periodic
          waveform through the use of a PeriodicWave
          object.
        
          Oscillators are common foundational building blocks in audio
          synthesis. An OscillatorNode will start emitting sound at the time
          specified by the start() method.
        
Mathematically speaking, a continuous-time periodic waveform can have very high (or infinitely high) frequency information when considered in the frequency domain. When this waveform is sampled as a discrete-time digital audio signal at a particular sample-rate, then care must be taken to discard (filter out) the high-frequency information higher than the Nyquist frequency before converting the waveform to a digital form. If this is not done, then aliasing of higher frequencies (than the Nyquist frequency) will fold back as mirror images into frequencies lower than the Nyquist frequency. In many cases this will cause audibly objectionable artifacts. This is a basic and well understood principle of audio DSP.
There are several practical approaches that an implementation may take to avoid this aliasing. Regardless of approach, the idealized discrete-time digital audio signal is well defined mathematically. The trade-off for the implementation is a matter of implementation cost (in terms of CPU usage) versus fidelity to achieving this ideal.
It is expected that an implementation will take some care in achieving this ideal, but it is reasonable to consider lower-quality, less-costly approaches on lower-end hardware.
          Both frequency and detune are a-rate
          parameters, and form a compound parameter. They are used
          together to determine a computedFrequency value:
        
computedFrequency(t) = frequency(t) * pow(2, detune(t) / 1200)
The OscillatorNode's instantaneous phase at each time is the definite time integral of computedFrequency, assuming a phase angle of zero at the node's exact start time. Its nominal range is [-Nyquist frequency, Nyquist frequency].
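A non-normative sketch: an oscillator detuned by +100 cents (one semitone) above 440 Hz, giving a computedFrequency of roughly 466.16 Hz.
  var osc = context.createOscillator();
  osc.type = "sawtooth";
  osc.frequency.value = 440;
  osc.detune.value = 100;
  osc.connect(context.destination);
  osc.start();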
    numberOfInputs  : 0
    numberOfOutputs : 1 (mono output)
Let node be a new OscillatorNode object. Initialize node, and return node.
              The shape of the periodic waveform. It may directly be set to any
              of the type constant values except for "custom". Doing so MUST throw an
              InvalidStateError exception. The 
              setPeriodicWave() method can be used to set a
              custom waveform, which results in this attribute being set to
              "custom". The default value is "sine". When this attribute is
              set, the phase of the oscillator MUST be conserved.
            
              The frequency (in Hertz) of the periodic waveform. Its default
              value is 440. This parameter is a-rate. It
              forms a compound parameter with detune. Its
              nominal range is [-Nyquist frequency, Nyquist
              frequency].
            
              A detuning value (in cents) which will offset the
              frequency by the given amount. Its default
              value is 0. This parameter is a-rate. It
              forms a compound parameter with frequency.
              Its nominal range is \((-\infty, \infty)\).
            
              Sets an arbitrary custom periodic waveform given a
              PeriodicWave.
            
            This specifies the options to be used when constructing an
            OscillatorNode. All of the members are
            optional; if not specified, the normal default values are used for
            constructing the oscillator.
          
periodicWave,
              then an InvalidStateError
              exception MUST be thrown. If periodicWave
              is specified, then any valid value for type is ignored;
              it is treated as if it were set to "custom".
            OscillatorNode.
            OscillatorNode.
            PeriodicWave for the
              OscillatorNode. If this is specified, then
              any valid value for type is ignored;
              it is treated as if "custom" were specified.
The idealized mathematical waveforms for the various oscillator types are defined here. In summary, all waveforms are defined mathematically to be an odd function with a positive slope at time 0. The actual waveforms produced by the oscillator may differ to prevent aliasing effects.
The oscillator must produce the same result as if a PeriodicWave with the appropriate Fourier series and with normalization enabled were used to create these basic waveforms.
                $$
                  x(t) = \sin t
                $$.
              
            
                $$
                  x(t) = \begin{cases}
                         1 & \mbox{for } 0 \le t < \pi \\
                         -1 & \mbox{for } -\pi < t < 0.
                         \end{cases}
                $$
              
            
                $$
                  x(t) = \frac{t}{\pi} \mbox{ for } -\pi < t \le \pi;
                $$
              
            
                $$
                  x(t) = \begin{cases}
                           \frac{2}{\pi} t & \mbox{for } 0 \le t \le \frac{\pi}{2} \\
                           1-\frac{2}{\pi} (t-\frac{\pi}{2}) & \mbox{for }
                           \frac{\pi}{2} < t \le \pi.
                         \end{cases}
                $$
              
This is extended to all \(t\) by using the fact that the waveform is an
odd function with period \(2\pi\).
            
          PeriodicWave represents an arbitrary periodic waveform to be used
          with an OscillatorNode.
        
Conforming implementations MUST support PeriodicWave up to at least 8192 elements.
                IndexSizeError exception MUST be thrown.
              Float32Array and length 2. Set the
              second element of the [[\imag]] array to 1.
                "sine".
                Float32Array, of
              length both equal to the maximum length of the real
              and imag of the attributes of the
              PeriodicWaveOptions passed in. Make
              a copy of those arrays into their respective internal slots.
              The BaseAudioContext for which to create this PeriodicWave.
Unlike AudioBuffer, PeriodicWaves can't be shared across AudioContexts or OfflineAudioContexts. A PeriodicWave is associated with a particular BaseAudioContext.
The PeriodicWaveConstraints dictionary is used to
          specify how the waveform is normalized.
          If true, the waveform is not normalized; otherwise, the
              waveform is normalized.
            
            The PeriodicWaveOptions dictionary is used to specify
            how the waveform is constructed. If only one of real
            or imag is specified, the other is treated as if it
            were an array of all zeroes of the same length, as specified below
            in description of
            the dictionary members. If neither is given, a
            PeriodicWave is created that must be equivalent
            to an OscillatorNode with type "sine". If both are
            given, the sequences must have the same length; otherwise an
            error of type
            NotSupportedError MUST be thrown.
          
                The real parameter represents an array
                of cosine terms. The first element (index 0) is
                the DC-offset of the periodic waveform. Implementations MUST
                set it to zero when computing the waveform. The second element
                (index 1) represents the fundamental frequency. The third
                element represents the first overtone, and so on.
              
                This defaults to a sequence of all zeroes of the same length as
                imag if imag is given.
              
                The imag parameter represents an array
                of sine terms. The first element (index 0) does
                not exist in the Fourier series. Implementations MUST set it to
                zero when computing the waveform. The second element (index 1)
                represents the fundamental frequency. The third element
                represents the first overtone, and so on.
              
real if real is given.
            The createPeriodicWave() method takes two arrays to specify the Fourier coefficients of the PeriodicWave. Let \(a\) and \(b\) represent the real and imaginary arrays of length \(L\). Then the basic time-domain waveform, \(x(t)\), can be computed using:
            $$
              x(t) = \sum_{k=1}^{L-1} \left(a[k]\cos2\pi k t + b[k]\sin2\pi k t\right)
            $$
          
          This is the basic (unnormalized) waveform.
            If the internal slot [[\normalize]] of this
            PeriodicWave is true (the default), the
            waveform defined in the previous section is normalized so that the
            maximum value is 1. The normalization is done as follows.
          
Let
          $$
            \tilde{x}(n) = \sum_{k=1}^{L-1} \left(a[k]\cos\frac{2\pi k n}{N} + b[k]\sin\frac{2\pi k n}{N}\right)
          $$
          
          where \(N\) is a power of two. (Note: \(\tilde{x}(n)\) can conveniently be computed using an inverse FFT.) The fixed normalization factor \(f\) is computed as follows.
            $$
              f = \max_{n = 0, \ldots, N - 1} |\tilde{x}(n)|
            $$
          
          Thus, the actual normalized waveform \(\hat{x}(n)\) is:
            $$
              \hat{x}(n) = \frac{\tilde{x}(n)}{f}
            $$
          
          This fixed normalization factor must be applied to all generated waveforms.
The builtin oscillator types are created using PeriodicWave objects. For completeness, the coefficients of the PeriodicWave for each of the builtin oscillator types are given here. This is useful if a builtin type is desired but without the default normalization.
In the following descriptions, let \(a\) be the array of real coefficients and \(b\) be the array of imaginary coefficients for createPeriodicWave(). In all cases \(a[n] = 0\) for all \(n\) because the waveforms are odd functions. Also, \(b[0] = 0\) in all cases. Hence, only \(b[n]\) for \(n \ge 1\) is specified below.
                  $$
                    b[n] = \begin{cases}
                             1 & \mbox{for } n = 1 \\
                             0 & \mbox{otherwise}
                           \end{cases}
                  $$
              
            
                  $$
                    b[n] = \frac{2}{n\pi}\left[1 - (-1)^n\right]
                  $$
              
            
                $$
                  b[n] = (-1)^{n+1} \dfrac{2}{n\pi}
                $$
            
            
                  $$
                    b[n] = \frac{8\sin\dfrac{n\pi}{2}}{(\pi n)^2}
                  $$
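            A non-normative sketch of building the first few square-wave partials from the coefficients above; the number of partials used is arbitrary.
  var L = 16;
  var real = new Float32Array(L);   // all zeroes: the waveform is odd
  var imag = new Float32Array(L);
  for (var n = 1; n < L; n++) {
      // b[n] = (2 / (n * pi)) * (1 - (-1)^n), i.e. 4 / (n * pi) for odd n.
      imag[n] = (2 / (n * Math.PI)) * (1 - Math.pow(-1, n));
  }
  var wave = context.createPeriodicWave(real, imag);
  var osc = context.createOscillator();
  osc.setPeriodicWave(wave);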
              
            
          This interface represents an audio source from a
          MediaStream. The track that will be used as the source
          of audio and will be output from this node is the first
          MediaStreamTrack whose kind attribute has
          the value "audio", when alphabetically sorting the
          tracks of this MediaStream by their id
          attribute. Those interfaces are described in
          [[!mediacapture-streams]].
        
The behaviour for picking the track to output is weird for legacy reasons. MediaStreamTrackAudioSourceNode should be used instead.
    numberOfInputs  : 0
    numberOfOutputs : 1
        
          The number of channels of the output corresponds to the number of
          channels of the MediaStreamTrack. If there is no valid
          audio track, then the number of channels output will be one silent
          channel.
        
This node has no tail-time reference.
Let node be a new MediaStreamAudioSourceNode object. Initialize node, and return node.
If the mediaStreamTrack parameter does not reference a
                MediaStreamTrack whose kind attribute
                has the value "audio", an InvalidStateError MUST be
                thrown.
              
              The MediaStream used when constructing this
              MediaStreamAudioSourceNode.
            
            This specifies the options for constructing a
            MediaStreamAudioSourceNode.
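            A non-normative sketch of routing microphone input into the graph using getUserMedia (described in [[!mediacapture-streams]]):
  navigator.mediaDevices.getUserMedia({ audio: true }).then(function(stream) {
      var micSource = context.createMediaStreamSource(stream);
      micSource.connect(context.destination);
  });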
          
          This interface represents an audio source from a
          MediaStreamTrack.
        
    numberOfInputs  : 0
    numberOfOutputs : 1
        
          The number of channels of the output corresponds to the number of
          channels of the MediaStreamTrack.
        
This node has no tail-time reference.
Let node be a new MediaStreamTrackAudioSourceNode object. Initialize node, and return node.
            This specifies the options for constructing a
            MediaStreamTrackAudioSourceNode. This is
            required.
          
          This interface is an audio destination representing a
          MediaStream with a single MediaStreamTrack
          whose kind is "audio". This MediaStream is
          created when the node is created and is accessible via the
          stream attribute. This stream can be used in a similar way
          as a MediaStream obtained via
          getUserMedia(), and can, for example, be sent to a
          remote peer using the RTCPeerConnection (described in
          [[!webrtc]]) addStream() method.
        
    numberOfInputs  : 1
    numberOfOutputs : 0
    channelCount = 2;
    channelCountMode = "explicit";
    channelInterpretation = "speakers";
        The number of channels of the input is by default 2 (stereo).
Let node be a new MediaStreamAudioDestinationNode object. Initialize node, and return node.
              A MediaStream containing a single MediaStreamTrack with the same
              number of channels as the node itself, and whose
              kind attribute has the value "audio".
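              A non-normative sketch of capturing graph output as a MediaStream; sourceNode is assumed to exist, and recording the stream with MediaRecorder assumes the MediaStream Recording API is available.
  var streamDestination = context.createMediaStreamDestination();
  sourceNode.connect(streamDestination);
  // streamDestination.stream can be sent to a remote peer or recorded,
  // for example:
  var recorder = new MediaRecorder(streamDestination.stream);
  recorder.start();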
            
Real-time audio systems that require low latency are often implemented using callback functions, where the operating system calls the program back when more audio has to be computed in order for the playback to stay uninterrupted. Such a callback is called on a high-priority thread (often the highest priority on the system). This means that a program that deals with audio only executes code from this callback, as any buffering between a rendering thread and the callback would naturally add latency or make the system less resilient to glitches.
For this reason, the traditional way of executing asynchronous operations on the Web Platform, the event loop, does not work here, as the thread is not continuously executing. Additionally, a lot of unnecessary and potentially blocking operations are available from traditional execution contexts (Window and Workers), which is undesirable when trying to reach an acceptable level of performance.
Additionally, the Worker model makes creating a dedicated thread necessary for a script execution context, while all AudioNodes usually share the same execution context.
This section specifies what the end result should look like, not how it should be implemented. In particular, instead of using a message queue, implementors can use memory that is shared between threads, as long as the memory operations are not reordered.
The Web Audio API MUST be implemented using a control thread, and a rendering thread.
The control thread is the thread from which the AudioContext is instantiated, and from which authors manipulate the audio graph, that is, from where the operations on a BaseAudioContext are invoked. The rendering thread is the thread on which the actual audio output is computed, in reaction to the calls from the control thread. It can be a real-time, callback-based audio thread, if computing audio for an AudioContext, or a normal thread if rendering an audio graph offline using an OfflineAudioContext.
The control thread uses a traditional event loop, as described in [[HTML]].
The rendering thread uses a specialized rendering loop, described in the section Rendering an audio graph.
Communication from the control thread to the rendering thread is done using control message passing. Communication in the other direction is done using regular event loop tasks.
Each AudioContext has a single control message queue, which is a list of control messages that are operations running on the rendering thread.
Queuing a control message means adding the message to the end of the control message queue of an AudioContext.
Control messages in a control message queue are ordered by time of insertion. The oldest message is therefore the one at the front of the control message queue.
Swapping a control message queue QA with another control message queue QB means executing the following steps:
          For example, successfully calling start() on an
          AudioBufferSourceNode source adds a control
          message to the control message
          queue of the AudioContext source.context.
        
Calling methods on AudioNodes is effectively asynchronous, and MUST be done in two phases, a synchronous part and an asynchronous part. For each method, some part of the execution happens on the control thread (for example, throwing an exception in case of invalid parameters), and some part happens on the rendering thread (for example, changing the value of an AudioParam).
In the description of each operation on AudioNodes and AudioContexts, the synchronous section is marked with a ⌛. All the other operations are executed in parallel, as described in [[HTML]].
The synchronous section is executed on the control thread, and happens immediately. If it fails, the method execution is aborted, possibly throwing an exception. If it succeeds, a control message, encoding the operation to be executed on the rendering thread is enqueued on the control message queue of this rendering thread.
The order of the synchronous and asynchronous sections with respect to other events MUST be the same: given two operations A and B with respective synchronous and asynchronous sections ASync and AAsync, and BSync and BAsync, if A happens before B, then ASync happens before BSync, and AAsync happens before BAsync. In other words, synchronous and asynchronous sections can't be reordered.
Rendering an audio graph is done in blocks of 128 sample-frames. A block of 128 sample-frames is called a render quantum.
Operations that happen atomically on a given thread can only be executed when no other atomic operation is running on another thread.
The algorithm for rendering a block of audio from an AudioContext G with a control message queue Q is as follows.
If the algorithm returns true then it MUST be executed again in the future, to render the next block of audio. Else, the rendering thread yields and the processing stops. The control thread can restart executing this algorithm if needed.
In practice, the AudioContext rendering thread is often running off a system-level audio callback that executes in an isochronous fashion. This callback passes in a buffer that has to be filled with the audio that will be output. The size of the buffer is often larger than a rendering quantum. In this case, multiple invocations of the rendering algorithm will be executed in rapid succession, in the same callback, before returning. After some time, the underlying audio system will call the callback again, and the algorithm will be executed again. This is an implementation detail that should not be observable, apart from the latency implications.
OfflineAudioContext will execute the algorithm continuously, until length (as passed in the OfflineAudioContext constructor) frames have been rendered.
running, return false.
          Muting an AudioNode means that its output MUST be silence for the rendering of this audio block.
Making a buffer available for reading from an AudioNode means putting it in a state where other AudioNodes connected to this AudioNode can safely read from it.
For example, implementations can choose to allocate a new buffer, or have a more elaborate mechanism, reusing an existing buffer that is now unused.
Recording the input of an AudioNode means copying the input data of this AudioNode for future usage.
Computing a block of audio means running the algorithm for this AudioNode to produce 128 sample-frames.
Processing an input buffer means running the algorithm for an AudioNode, using an input buffer and the value(s) of the AudioParam(s) of this AudioNode as the input for this algorithm.
One of the most important considerations when dealing with audio processing graphs is how to adjust the gain (volume) at various points. For example, in a standard mixing board model, each input bus has pre-gain, post-gain, and send-gains. Submix and master out busses also have gain control. The gain control described here can be used to implement standard mixing boards as well as other architectures.
          The inputs to AudioNodes have the ability to
          accept connections from multiple outputs. The input then acts as a
          unity gain summing junction with each output signal being added with
          the others:
        
          In cases where the channel layouts of the outputs do not match, a mix (usually up-mix) will occur according to the mixing rules.
          No clipping is applied at the inputs or outputs of the
          AudioNode to allow a maximum of dynamic range
          within the audio graph.
        
          In many scenarios, it's important to be able to control the gain for
          each of the output signals. The GainNode gives
          this control:
        
          Using these two concepts of unity gain summing junctions and GainNodes, it's possible to construct simple or complex mixing scenarios.
In a routing scenario involving multiple sends and submixes, explicit control is needed over the volume or "gain" of each connection to a mixer. Such routing topologies are very common and exist in even the simplest of electronic gear sitting around in a basic recording studio.
Here's an example with two send mixers and a main mixer. Although possible, for simplicity's sake, pre-gain control and insert effects are not illustrated:
          
          This diagram is using a shorthand notation where "send 1", "send 2",
          and "main bus" are actually inputs to AudioNodes,
          but here are represented as summing busses, where the intersections
          g2_1, g3_1, etc. represent the "gain" or volume for the given source
          on the given mixer. In order to expose this gain, a
          GainNode is used:
        
Here's how the above diagram could be constructed in JavaScript:
  var context = 0;
  var compressor = 0;
  var reverb = 0;
  var delay = 0;
  var s1 = 0;
  var s2 = 0;
  var source1 = 0;
  var source2 = 0;
  var g1_1 = 0;
  var g2_1 = 0;
  var g3_1 = 0;
  var g1_2 = 0;
  var g2_2 = 0;
  var g3_2 = 0;
  // Setup routing graph
  function setupRoutingGraph() {
      context = new AudioContext();
      compressor = context.createDynamicsCompressor();
      // Send1 effect
      reverb = context.createConvolver();
      // Convolver impulse response may be set here or later
      // Send2 effect
      delay = context.createDelay();
      // Connect final compressor to final destination
      compressor.connect(context.destination);
      // Connect sends 1 & 2 through effects to main mixer
      s1 = context.createGain();
      reverb.connect(s1);
      s1.connect(compressor);
      s2 = context.createGain();
      delay.connect(s2);
      s2.connect(compressor);
      // Create a couple of sources
      source1 = context.createBufferSource();
      source2 = context.createBufferSource();
      source1.buffer = manTalkingBuffer;
      source2.buffer = footstepsBuffer;
      // Connect source1
      g1_1 = context.createGain();
      g2_1 = context.createGain();
      g3_1 = context.createGain();
      source1.connect(g1_1);
      source1.connect(g2_1);
      source1.connect(g3_1);
      g1_1.connect(compressor);
      g2_1.connect(reverb);
      g3_1.connect(delay);
      // Connect source2
      g1_2 = context.createGain();
      g2_2 = context.createGain();
      g3_2 = context.createGain();
      source2.connect(g1_2);
      source2.connect(g2_2);
      source2.connect(g3_2);
      g1_2.connect(compressor);
      g2_2.connect(reverb);
      g3_2.connect(delay);
      // We now have explicit control over all the volumes g1_1, g2_1, ..., s1, s2
      g2_1.gain.value = 0.2;  // For example, set source1 reverb gain
      // Because g2_1.gain is an "AudioParam",
      // an automation curve could also be attached to it.
      // A "mixing board" UI could be created in canvas or WebGL controlling these gains.
  }
   
      This section is non-normative. Please see AudioContext lifetime and AudioNode lifetime for normative requirements.
In addition to allowing the creation of static routing configurations, it should also be possible to do custom effect routing on dynamically allocated voices which have a limited lifetime. For the purposes of this discussion, let's call these short-lived voices "notes". Many audio applications incorporate the ideas of notes, examples being drum machines, sequencers, and 3D games with many one-shot sounds being triggered according to game play.
In a traditional software synthesizer, notes are dynamically allocated and released from a pool of available resources. The note is allocated when a MIDI note-on message is received. It is released when the note has finished playing either due to it having reached the end of its sample-data (if non-looping), it having reached a sustain phase of its envelope which is zero, or due to a MIDI note-off message putting it into the release phase of its envelope. In the MIDI note-off case, the note is not released immediately, but only when the release envelope phase has finished. At any given time, there can be a large number of notes playing but the set of notes is constantly changing as new notes are added into the routing graph, and old ones are released.
          The audio system automatically deals with tearing-down the part of
          the routing graph for individual "note" events. A "note" is
          represented by an AudioBufferSourceNode, which
          can be directly connected to other processing nodes. When the note
          has finished playing, the context will automatically release the
          reference to the AudioBufferSourceNode, which in
          turn will release references to any nodes it is connected to, and so
          on. The nodes will automatically get disconnected from the graph and
          will be deleted when they have no more references. Nodes in the graph
          which are long-lived and shared between dynamic voices can be managed
          explicitly. Although it sounds complicated, this all happens
          automatically with no extra handling required.
        
          The low-pass filter, panner, and second gain nodes are directly connected from the one-shot sound. So when it has finished playing the context will automatically release them (everything within the dotted line). If there are no longer any references to the one-shot sound and connected nodes, then they will be immediately removed from the graph and deleted. The streaming source has a global reference and will remain connected until it is explicitly disconnected. Here's how it might look in JavaScript:
var context = 0;
var compressor = 0;
var gainNode1 = 0;
var streamingAudioSource = 0;
// Initial setup of the "long-lived" part of the routing graph
function setupAudioContext() {
    context = new AudioContext();
    compressor = context.createDynamicsCompressor();
    gainNode1 = context.createGain();
    // Create a streaming audio source.
    var audioElement = document.getElementById('audioTagID');
    streamingAudioSource = context.createMediaElementSource(audioElement);
    streamingAudioSource.connect(gainNode1);
    gainNode1.connect(compressor);
    compressor.connect(context.destination);
}
// Later in response to some user action (typically mouse or key event)
// a one-shot sound can be played.
function playSound() {
    var oneShotSound = context.createBufferSource();
    oneShotSound.buffer = dogBarkingBuffer;
    // Create a filter, panner, and gain node.
    var lowpass = context.createBiquadFilter();
    var panner = context.createPanner();
    var gainNode2 = context.createGain();
    // Make connections
    oneShotSound.connect(lowpass);
    lowpass.connect(panner);
    panner.connect(gainNode2);
    gainNode2.connect(compressor);
    // Play 0.75 seconds from now (to play immediately pass in 0)
    oneShotSound.start(context.currentTime + 0.75);
}
      This section is normative.
        The Mixer Gain Structure section above describes how an input to an
        AudioNode can be connected from one or more outputs
        of an AudioNode. Each of these connections from an
        output represents a stream with a specific non-zero number of channels.
        An input has mixing rules for combining the channels from all
        of the connections to it. As a simple example, if an input is connected
        from a mono output and a stereo output, then the mono connection will
        usually be up-mixed to stereo and summed with the stereo connection.
        But, of course, it's important to define the exact mixing
        rules for every input to every AudioNode. The
        default mixing rules for all of the inputs have been chosen so that
        things "just work" without worrying too much about the details,
        especially in the very common case of mono and stereo streams. Of
        course, the rules can be changed for advanced use cases, especially
        multi-channel.
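        As a minimal sketch of the default behavior (assuming context already exists and stereoBuffer is a previously decoded two-channel AudioBuffer), a mono oscillator and a stereo buffer source can feed the same input; the mono connection is up-mixed to stereo and summed with the stereo one:

  var mix = context.createGain();
  var mono = context.createOscillator();     // single-channel output
  var stereo = context.createBufferSource();
  stereo.buffer = stereoBuffer;              // assumed two-channel AudioBuffer
  // Both connect to the same input; the mono stream is up-mixed to stereo
  // and summed with the stereo stream according to the default mixing rules.
  mono.connect(mix);
  stereo.connect(mix);
  mix.connect(context.destination);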
      
To define some terms: up-mixing refers to the process of taking a stream with a smaller number of channels and converting it to a stream with a larger number of channels; down-mixing refers to the process of taking a stream with a larger number of channels and converting it to a stream with a smaller number of channels.
        An AudioNode input uses three basic pieces of
        information to determine how to mix all the outputs connected to it. As
        part of this process it computes an internal value
        computedNumberOfChannels representing the
        actual number of channels of the input at any given time:
      
        The AudioNode attributes involved in channel
        up-mixing and down-mixing rules are defined above. The following is a more precise
        specification of what each of them means.
      
        channelCount
          is used to help compute computedNumberOfChannels.

        channelCountMode
          determines how computedNumberOfChannels will be computed. Once this
          number is computed, all of the connections will be up-mixed or
          down-mixed to that many channels. For most nodes, the default value
          is "max".
            "max":
              computedNumberOfChannels is computed as the maximum of the
              number of channels of all connections. In this mode
              channelCount is ignored.
            "clamped-max":
              same as "max", up to a limit given by channelCount.
            "explicit":
              computedNumberOfChannels is the exact value as specified in
              channelCount.

        channelInterpretation
          determines how the individual channels will be treated. For example,
          will they be treated as speakers having a specific layout, or will
          they be treated as simple discrete channels? This value influences
          exactly how the up-mixing and down-mixing is performed. The default
          value is "speakers".
            "speakers":
              use the up-mix and down-mix equations for mono/stereo/quad/5.1.
              In cases where the number of channels does not match any of
              these basic speaker layouts, revert to "discrete".
            "discrete":
              up-mix by filling channels until they run out, then zero out the
              remaining channels. Down-mix by filling as many channels as
              possible, then dropping the remaining channels.
        For each input of an AudioNode, an implementation
        must:
      
            Compute computedNumberOfChannels.
            For each connection to the input, up-mix or down-mix the
              connection's stream to computedNumberOfChannels according to
              channelInterpretation, and mix it together with the streams from
              all other connections. This is a straight summing of the
              corresponding channels.
          When channelInterpretation
          is "speakers"
          then the up-mixing and down-mixing is defined for specific channel
          layouts.
        
Mono (one channel), stereo (two channels), quad (four channels), and 5.1 (six channels) MUST be supported. Other channel layouts may be supported in future versions of this specification.
    Mono
      0: M: mono
    Stereo
      0: L: left
      1: R: right
    Quad
      0: L:  left
      1: R:  right
      2: SL: surround left
      3: SR: surround right
    5.1
      0: L:   left
      1: R:   right
      2: C:   center
      3: LFE: subwoofer
      4: SL:  surround left
      5: SR:  surround right
Mono up-mix:
    1 -> 2 : up-mix from mono to stereo
        output.L = input;
        output.R = input;
    1 -> 4 : up-mix from mono to quad
        output.L = input;
        output.R = input;
        output.SL = 0;
        output.SR = 0;
    1 -> 5.1 : up-mix from mono to 5.1
        output.L = 0;
        output.R = 0;
        output.C = input; // put in center channel
        output.LFE = 0;
        output.SL = 0;
        output.SR = 0;
Stereo up-mix:
    2 -> 4 : up-mix from stereo to quad
        output.L = input.L;
        output.R = input.R;
        output.SL = 0;
        output.SR = 0;
    2 -> 5.1 : up-mix from stereo to 5.1
        output.L = input.L;
        output.R = input.R;
        output.C = 0;
        output.LFE = 0;
        output.SL = 0;
        output.SR = 0;
Quad up-mix:
    4 -> 5.1 : up-mix from quad to 5.1
        output.L = input.L;
        output.R = input.R;
        output.C = 0;
        output.LFE = 0;
        output.SL = input.SL;
        output.SR = input.SR;
      A down-mix will be necessary, for example, when processing 5.1 source material but playing back in stereo.
  Mono down-mix:
      2 -> 1 : stereo to mono
          output = 0.5 * (input.L + input.R);
      4 -> 1 : quad to mono
          output = 0.25 * (input.L + input.R + input.SL + input.SR);
      5.1 -> 1 : 5.1 to mono
          output = sqrt(0.5) * (input.L + input.R) + input.C + 0.5 * (input.SL + input.SR)
  Stereo down-mix:
      4 -> 2 : quad to stereo
          output.L = 0.5 * (input.L + input.SL);
          output.R = 0.5 * (input.R + input.SR);
      5.1 -> 2 : 5.1 to stereo
          output.L = input.L + sqrt(0.5) * (input.C + input.SL)
          output.R = input.R + sqrt(0.5) * (input.C + input.SR)
  Quad down-mix:
      5.1 -> 4 : 5.1 to quad
          output.L = input.L + sqrt(0.5) * input.C
          output.R = input.R + sqrt(0.5) * input.C
          output.SL = input.SL
          output.SR = input.SR
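      To illustrate, here is a minimal sketch (not part of the specification) applying the 5.1 to stereo down-mix equations above to a single sample-frame, with channels assumed to be ordered [L, R, C, LFE, SL, SR]:

  function downmix51ToStereo(frame) {
    var L = frame[0], R = frame[1], C = frame[2],
        SL = frame[4], SR = frame[5];      // the LFE channel (frame[3]) is dropped
    return [
      L + Math.sqrt(0.5) * (C + SL),       // output.L
      R + Math.sqrt(0.5) * (C + SR)        // output.R
    ];
  }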
  
  // Set gain node to explicit 2-channels (stereo).
  gain.channelCount = 2;
  gain.channelCountMode = "explicit";
  gain.channelInterpretation = "speakers";

  // Set "hardware output" to 4-channels for DJ-app with two stereo output busses.
  context.destination.channelCount = 4;
  context.destination.channelCountMode = "explicit";
  context.destination.channelInterpretation = "discrete";

  // Set "hardware output" to 8-channels for custom multi-channel speaker array
  // with custom matrix mixing.
  context.destination.channelCount = 8;
  context.destination.channelCountMode = "explicit";
  context.destination.channelInterpretation = "discrete";

  // Set "hardware output" to 5.1 to play an HTMLAudioElement.
  context.destination.channelCount = 6;
  context.destination.channelCountMode = "explicit";
  context.destination.channelInterpretation = "speakers";

  // Explicitly down-mix to mono.
  gain.channelCount = 1;
  gain.channelCountMode = "explicit";
  gain.channelInterpretation = "speakers";
        The range of all audio signals at a destination node of any audio graph
        is nominally [-1, 1]. The audio rendition of signal values outside this
        range, or of the values NaN, positive infinity or negative
        infinity, is undefined by this specification.
      
A common feature requirement for modern 3D games is the ability to dynamically spatialize and move multiple audio sources in 3D space. Game audio engines such as OpenAL, FMOD, Creative's EAX, Microsoft's XACT Audio, etc. have this ability.
          Using a PannerNode, an audio stream can be
          spatialized or positioned in space relative to an
          AudioListener. An
          AudioContext will contain a single
          AudioListener. Both panners and listeners have a
          position in 3D space using a right-handed cartesian coordinate
          system. The units used in the coordinate system are not defined, and
          do not need to be because the effects calculated with these
          coordinates are independent/invariant of any particular units such as
          meters or feet. PannerNode objects (representing
          the source stream) have an orientation vector representing
          in which direction the sound is projecting. Additionally, they have a
          sound cone representing how directional the sound is. For
          example, the sound could be omnidirectional, in which case it would
          be heard anywhere regardless of its orientation, or it can be more
          directional and heard only if it is facing the listener.
          AudioListener objects (representing a person's
          ears) have an orientation and up vector
          representing in which direction the person is facing.
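          As a minimal sketch (assuming context and source already exist), a source can be placed three units to the right of a listener at the origin:

  var listener = context.listener;
  listener.setPosition(0, 0, 0);
  listener.setOrientation(0, 0, -1, 0, 1, 0); // facing down -Z, up is +Y
  var panner = context.createPanner();
  panner.setPosition(3, 0, 0);                // three units to the listener's right
  panner.setOrientation(-1, 0, 0);            // projecting sound toward the listener
  source.connect(panner);
  panner.connect(context.destination);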
        
          During rendering, the PannerNode calculates an
          azimuth and elevation. These values are used
          internally by the implementation in order to render the
          spatialization effect. See the Panning Algorithm section for
          details of how these values are used.
        
          The following algorithm must be used to calculate the
          azimuth and elevation for the
          PannerNode:
        
  // Calculate the source-listener vector.
  vec3 sourceListener = source.position - listener.position;
  if (sourceListener.isZero()) {
      // Handle degenerate case if source and listener are at the same point.
      azimuth = 0;
      elevation = 0;
      return;
  }
  sourceListener.normalize();
  // Align axes.
  vec3 listenerFront = listener.orientation;
  vec3 listenerUp = listener.up;
  vec3 listenerRight = listenerFront.cross(listenerUp);
  listenerRight.normalize();
  vec3 listenerFrontNorm = listenerFront;
  listenerFrontNorm.normalize();
  vec3 up = listenerRight.cross(listenerFrontNorm);
  float upProjection = sourceListener.dot(up);
  vec3 projectedSource = sourceListener - upProjection * up;
  projectedSource.normalize();
  azimuth = 180 * acos(projectedSource.dot(listenerRight)) / PI;
  // Source in front or behind the listener.
  float frontBack = projectedSource.dot(listenerFrontNorm);
  if (frontBack < 0)
      azimuth = 360 - azimuth;
  // Make azimuth relative to "front" and not "right" listener vector.
  if ((azimuth >= 0) && (azimuth <= 270))
      azimuth = 90 - azimuth;
  else
      azimuth = 450 - azimuth;
  elevation = 90 - 180 * acos(sourceListener.dot(up)) / PI;
  if (elevation > 90)
      elevation = 180 - elevation;
  else if (elevation < -90)
      elevation = -180 - elevation;
  
      Mono-to-stereo and stereo-to-stereo panning must be supported. Mono-to-stereo processing is used when all connections to the input are mono. Otherwise stereo-to-stereo processing is used.
            This is a simple and relatively inexpensive algorithm which
            provides basic, but reasonable results. It is used for the
            PannerNode when the panningModel
            attribute is set to "equalpower", in which case the
            elevation value is ignored. This algorithm MUST be
            implemented using a-rate parameters.
          
Let azimuth be the value computed in the azimuth and elevation section.
The azimuth value is first transformed to lie within the range [-90, 90] according to:
  // First, clamp azimuth to allowed range of [-180, 180].
  azimuth = max(-180, azimuth);
  azimuth = min(180, azimuth);
  // Then wrap to range [-90, 90].
  if (azimuth < -90)
    azimuth = -180 - azimuth;
  else if (azimuth > 90)
    azimuth = 180 - azimuth;
                A normalized value x is calculated from azimuth for a mono input as:
x = (azimuth + 90) / 180;
Or for a stereo input as:
  if (azimuth <= 0) { // -90 -> 0
    // Transform the azimuth value from [-90, 0] degrees into the range [0, 1].
    x = (azimuth + 90) / 90;
  } else { // 0 -> 90
    // Transform the azimuth value from [0, 90] degrees into the range [0, 1].
    x = azimuth / 90;
  }
                Left and right gain values are calculated as:
    gainL = cos(x * Math.PI / 2);
    gainR = sin(x * Math.PI / 2);
                For mono input, the stereo output is calculated as:
    outputL = input * gainL;
    outputR = input * gainR;
                  Else for stereo input, the output is calculated as:
    if (azimuth <= 0) {
      outputL = inputL + inputR * gainL;
      outputR = inputR * gainR;
    } else {
      outputL = inputL * gainL;
      outputR = inputR + inputL * gainR;
    }
                This panning method requires a set of HRTF (Head-related Transfer Function) impulse responses recorded at a variety of azimuths and elevations. The implementation requires a highly optimized convolution function. It is somewhat more costly than "equalpower", but provides more perceptually spatialized sound.
            
            For a StereoPannerNode, the following algorithm
            MUST be implemented.
          
                    Let pan be the computedValue of the
                    pan AudioParam of this
                    StereoPannerNode.
                  
Clamp pan to [-1, 1].
    pan = max(-1, pan);
    pan = min(1, pan);
                Calculate x by normalizing pan value to [0, 1]. For mono input:
    x = (pan + 1) / 2;
                  For stereo input:
    if (pan <= 0)
      x = pan + 1;
    else
      x = pan;
                Left and right gain values are calculated as:
    gainL = cos(x * Math.PI / 2);
    gainR = sin(x * Math.PI / 2);
                For mono input, the stereo output is calculated as:
    outputL = input * gainL;
    outputR = input * gainR;
                  Else for stereo input, the output is calculated as:
    if (pan <= 0) {
      outputL = inputL + inputR * gainL;
      outputR = inputR * gainR;
    } else {
      outputL = inputL * gainL;
      outputR = inputR + inputL * gainR;
    }
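                For example, a minimal sketch (assuming context and source already exist) panning a source fully to the left:

    var panNode = context.createStereoPanner();
    source.connect(panNode);
    panNode.connect(context.destination);
    // pan = -1 gives x = 0, so gainL = 1 and gainR = 0 per the equations above.
    panNode.pan.value = -1;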
                Sounds which are closer are louder, while sounds further away are quieter. Exactly how a sound's volume changes according to distance from the listener depends on the distanceModel attribute.
During audio rendering, a distance value will be calculated based on the panner and listener positions according to:
  function dotProduct(v1, v2) {
    var d = 0;
    for (var i = 0; i < Math.min(v1.length, v2.length); i++)
      d += v1[i] * v2[i];
    return d;
  }
  var v = panner.position - listener.position;
  var distance = Math.sqrt(dotProduct(v, v));
  
        distance will then be used to calculate distanceGain which depends on the distanceModel attribute. See the distanceModel section for details of how this is calculated for each distance model. The value computed by the distanceModel equations is to be clamped to [0, 1].
          As part of its processing, the PannerNode
          scales/multiplies the input audio signal by distanceGain to
          make distant sounds quieter and nearer ones louder.
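          For example, a minimal sketch (assuming panner is an existing PannerNode) configuring distance-based attenuation:

  // Attenuate with the "inverse" distance model: no attenuation within
  // refDistance, rolling off as distance grows beyond it.
  panner.distanceModel = "inverse";
  panner.refDistance = 1;
  panner.maxDistance = 100;
  panner.rolloffFactor = 1.5;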
        
The listener and each sound source have an orientation vector describing which way they are facing. Each sound source's sound projection characteristics are described by an inner and outer "cone" describing the sound intensity as a function of the source/listener angle from the source's orientation vector. Thus, a sound source pointing directly at the listener will be louder than if it is pointed off-axis. Sound sources can also be omni-directional.
          The following algorithm must be used to calculate the gain
          contribution due to the cone effect, given the source (the
          PannerNode) and the listener:
        
function dotProduct(v1, v2) {
  var d = 0;
  for (var i = 0; i < Math.min(v1.length, v2.length); i++)
    d += v1[i] * v2[i];
  return d;
}
function diff(v1, v2) {
  var v = [];
  for (var i = 0; i < Math.min(v1.length, v2.length); i++)
    v[i] = v1[i] - v2[i];
  return v;
}
// Return a unit-length copy of a vector (represented as an array).
function normalize(v) {
  var magnitude = Math.sqrt(dotProduct(v, v));
  return v.map(function (c) { return c / magnitude; });
}
function coneGain() {
  if (dotProduct(source.orientation, source.orientation) == 0 ||
      ((source.coneInnerAngle == 360) && (source.coneOuterAngle == 360)))
    return 1; // no cone specified - unity gain
  // Normalized source-listener vector
  var sourceToListener = normalize(diff(listener.position, source.position));
  var normalizedSourceOrientation = normalize(source.orientation);
  // Angle between the source orientation vector and the source-listener vector
  var cosAngle = dotProduct(sourceToListener, normalizedSourceOrientation);
  var angle = 180 * Math.acos(cosAngle) / Math.PI;
  var absAngle = Math.abs(angle);
  // Divide by 2 here since API is entire angle (not half-angle)
  var absInnerAngle = Math.abs(source.coneInnerAngle) / 2;
  var absOuterAngle = Math.abs(source.coneOuterAngle) / 2;
  var gain = 1;
  if (absAngle <= absInnerAngle) {
    // No attenuation
    gain = 1;
  } else if (absAngle >= absOuterAngle) {
    // Max attenuation
    gain = source.coneOuterGain;
  } else {
    // Between inner and outer cones
    // inner -> outer, x goes from 0 -> 1
    var x = (absAngle - absInnerAngle) / (absOuterAngle - absInnerAngle);
    gain = (1 - x) + source.coneOuterGain * x;
  }
  return gain;
}
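          For example, a minimal sketch (assuming panner is an existing PannerNode) describing a fairly directional source:

  // Full level within a 60 degree cone around the orientation vector,
  // attenuated to coneOuterGain outside a 180 degree cone, and
  // interpolated in between as in coneGain() above.
  panner.coneInnerAngle = 60;
  panner.coneOuterAngle = 180;
  panner.coneOuterGain = 0.1;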
      
          For web applications, the time delay between mouse and keyboard events (keydown, mousedown, etc.) and a sound being heard is important.
This time delay is called latency and is caused by several factors (input device latency, internal buffering latency, DSP processing latency, output device latency, distance of user's ears from speakers, etc.), and is cumulative. The larger this latency is, the less satisfying the user's experience is going to be. In the extreme, it can make musical production or game-play impossible. At moderate levels it can affect timing and give the impression of sounds lagging behind or the game being non-responsive. For musical applications the timing problems affect rhythm. For gaming, the timing problems affect precision of gameplay. For interactive applications, it generally cheapens the user's experience much in the same way that very low animation frame-rates do. Depending on the application, a reasonable latency can be from as low as 3-6 milliseconds to 25-50 milliseconds.
Implementations will generally seek to minimize overall latency.
          Along with minimizing overall latency, implementations will generally
          seek to minimize the difference between an
          AudioContext's currentTime and an
          AudioProcessingEvent's playbackTime.
          Deprecation of ScriptProcessorNode will make this
          consideration less important over time.
        
          When an acquire the content
          operation is performed on an AudioBuffer, the entire operation
          can usually be implemented without copying channel data. In
          particular, the last step should be performed lazily at the next
          
          getChannelData call. That means a sequence of
          consecutive acquire the content
          operations with no intervening 
          getChannelData (e.g. multiple
          AudioBufferSourceNodes playing the same
          AudioBuffer) can be implemented with no
          allocations or copying.
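          For example, in the following sketch (assuming context exists and sharedBuffer is a previously decoded AudioBuffer), both source nodes acquire the content of the same buffer with no intervening getChannelData call, so an implementation can share the underlying data without copying:

  var src1 = context.createBufferSource();
  var src2 = context.createBufferSource();
  src1.buffer = sharedBuffer;   // both nodes acquire the content of the same buffer
  src2.buffer = sharedBuffer;
  src1.connect(context.destination);
  src2.connect(context.destination);
  src1.start(context.currentTime);
  src2.start(context.currentTime + 0.5);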
        
          Implementations can perform an additional optimization: if 
          getChannelData is called on an AudioBuffer, fresh
          ArrayBuffers have not yet been allocated, and all
          invokers of previous acquire the
          content operations on that AudioBuffer have stopped using
          its data, then the raw data buffers can be recycled
          for use with new AudioBuffers, avoiding any reallocation or
          copying of the channel data.
        
          While no automatic smoothing is done when directly setting the
          value attribute of
          an AudioParam, for certain parameters, smooth
          transitions are preferable to directly setting the value.
        
          Using the 
          setTargetAtTime method with a low
          timeConstant allows authors to perform a smooth
          transition.
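          For example, a minimal sketch (assuming gainNode is an existing GainNode) gliding to a new value over roughly 15 milliseconds instead of setting it abruptly:

  // Instead of gainNode.gain.value = 0, which can produce an audible click:
  gainNode.gain.setTargetAtTime(0, context.currentTime, 0.015);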
        
Audio glitches are caused by an interruption of the normal continuous audio stream, resulting in loud clicks and pops. It is considered to be a catastrophic failure of a multi-media system and must be avoided. It can be caused by problems with the threads responsible for delivering the audio stream to the hardware, such as scheduling latencies caused by threads not having the proper priority and time-constraints. It can also be caused by the audio DSP trying to do more work than is possible in real-time given the CPU's speed.
The W3C TAG is developing a Self-Review Questionnaire: Security and Privacy for editors of specifications to informatively answer.
Per the Questions to Consider
Does this specification deal with personally-identifiable information?
No.
Does this specification deal with high-value data?
No. Credit card information and the like is not used in Web Audio. It is possible to use Web Audio to process or analyze voice data, which might be a privacy concern, but access to the user's microphone is permission-based via getUserMedia.
Does this specification introduce new state for an origin that persists across browsing sessions?
No. AudioWorklet does not persist across browsing sessions.
Does this specification expose persistent, cross-origin state to the web?
Not sure. If audio sample data is loaded cross-origin, it exposes state (whether that sample data resolves or not) to the script origin.
Does this specification expose any other data to an origin that it doesn’t currently have access to?
            Yes. When giving various information on available
            AudioNodes, the Web Audio API potentially
            exposes information on characteristic features of the client (such
            as audio hardware sample-rate) to any page that makes use of the
            AudioNode interface. Additionally, timing
            information can be collected through the
            AnalyserNode or
            ScriptProcessorNode interface. The information
            could subsequently be used to create a fingerprint of the client.
          
Does this specification enable new script execution/loading mechanisms?
No. However, it does use the worker script execution method, defined in that specification.
Does this specification allow an origin access to a user’s location?
No.
Does this specification allow an origin access to sensors on a user’s device?
Not directly. Currently audio input is not specified in this document, but it will involve gaining access to the client machine's audio input or microphone. This will require asking the user for permission in an appropriate way, probably via the getUserMedia() API.
Does this specification allow an origin access to aspects of a user’s local computing environment?
Not sure. It may allow probing of supported sample rates and supported audio codecs. A denial-of-service attack by consuming CPU cycles should also be considered.
Does this specification allow an origin access to other devices?
No.
Does this specification allow an origin some measure of control over a user agent’s native UI?
No, though it could be used to emulate system sounds, making an attack seem more like a local system event.
Does this specification expose temporary identifiers to the web?
No.
Does this specification distinguish between behavior in first-party and third-party contexts?
No.
How should this specification work in the context of a user agent’s "incognito" mode?
No differently.
Does this specification persist data to a user’s local device?
Maybe? Cached impulses or audio sample data stored locally?
Does this specification have a "Security Considerations" and "Privacy Considerations" section?
Yes.
Does this specification allow downgrading default security characteristics?
No.
Please see [[webaudio-usecases]].
This specification is the collective work of the W3C Audio Working Group.
        Members of the Working Group are (at the time of writing, and by
        alphabetical order):
        Adenot, Paul (Mozilla Foundation) - Specification Co-editor; Akhgari,
        Ehsan (Mozilla Foundation); Berkovitz, Joe (Hal Leonard/Noteflight) –
        WG Chair; Bossart, Pierre (Intel Corporation); Carlson, Eric (Apple,
        Inc.); Choi, Hongchan (Google, Inc.); Geelnard, Marcus (Opera
        Software); Goode, Adam (Google, Inc.); Gregan, Matthew (Mozilla
        Foundation); Hofmann, Bill (Dolby Laboratories); Jägenstedt, Philip
        (Opera Software); Kalliokoski, Jussi (Invited Expert); Lilley, Chris
        (W3C Staff); Lowis, Chris (Invited Expert. WG co-chair from December
        2012 to September 2013, affiliated with British Broadcasting
        Corporation); Mandyam, Giridhar (Qualcomm Innovation Center, Inc);
        Noble, Jer (Apple, Inc.); O'Callahan, Robert (Mozilla Foundation);
        Onumonu, Anthony (British Broadcasting Corporation); Paradis, Matthew
        (British Broadcasting Corporation); Raman, T.V. (Google, Inc.);
        Schepers, Doug (W3C/MIT); Shires, Glen (Google, Inc.); Smith, Michael
        (W3C/Keio); Thereaux, Olivier (British Broadcasting Corporation); Toy,
        Raymond (Google, Inc.); Verdie, Jean-Charles (MStar Semiconductor,
        Inc.); Wilson, Chris (Google, Inc.) - Specification Co-editor; ZERGAOUI,
        Mohamed (INNOVIMAX)
      
        Former members of the Working Group and contributors to the
        specification include:
        Caceres, Marcos (Invited Expert); Cardoso, Gabriel (INRIA); Chen, Bin
        (Baidu, Inc.); MacDonald, Alistair (W3C Invited Experts) — WG co-chair
        from March 2011 to July 2012; Michel, Thierry (W3C/ERCIM); Rogers,
        Chris (Google, Inc.) – Specification Editor until August 2013; Wei,
        James (Intel Corporation);
      
See changelog.html.