Audio Device Module (ADM)


The ADM is responsible for driving input (microphone) and output (speaker) audio in WebRTC. Its API is defined in audio_device.h.

Main functions of the ADM are:

  • Initialization and termination of native audio libraries.
  • Registration of an AudioTransport object which handles audio callbacks for audio in both directions.
  • Device enumeration and selection (only for Linux, Windows and Mac OSX).
  • Start/Stop physical audio streams:
    • Recording audio from the selected microphone, and
    • playing out audio on the selected speaker.
  • Level control of the active audio streams.
  • Control of built-in audio effects (Acoustic Echo Cancellation (AEC), Automatic Gain Control (AGC) and Noise Suppression (NS)) for Android and iOS.

ADM implementations reside at two different locations in the WebRTC repository: /modules/audio_device/ and /sdk/. The latest implementations for iOS and Android live under /sdk/, while /modules/audio_device/ contains older versions for mobile platforms as well as implementations for desktop platforms such as Linux, Windows and Mac OSX. This document focuses on the parts in /modules/audio_device/, but implementation-specific details such as threading models are omitted to keep the descriptions as simple as possible.

By default, the ADM in WebRTC is created in WebRtcVoiceEngine::Init but an external implementation can also be injected using rtc::CreatePeerConnectionFactory. An example of where an external ADM is injected can be found in PeerConnectionInterfaceTest where a so-called fake ADM is utilized to avoid hardware dependency in a gtest. Clients can also inject their own ADMs in situations where functionality is needed that is not provided by the default implementations.
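The injection pattern can be sketched with a self-contained mock. Everything below (the trimmed AudioDeviceModule interface, FakeAudioDeviceModule, CreateAdm) is illustrative only, not WebRTC code; in real WebRTC the external ADM is passed as a parameter to rtc::CreatePeerConnectionFactory.

```cpp
#include <cassert>
#include <cstdint>
#include <memory>

// Simplified stand-in for webrtc::AudioDeviceModule; the real interface in
// audio_device.h has many more methods.
class AudioDeviceModule {
 public:
  virtual ~AudioDeviceModule() = default;
  virtual int32_t Init() = 0;
};

// A "fake" ADM with no hardware dependency, in the spirit of the fake ADM
// used by PeerConnectionInterfaceTest.
class FakeAudioDeviceModule : public AudioDeviceModule {
 public:
  int32_t Init() override { return 0; }
};

// Hypothetical factory mirroring the selection logic: an injected ADM wins;
// otherwise a default (internal) one is created, as in WebRtcVoiceEngine::Init.
std::unique_ptr<AudioDeviceModule> CreateAdm(
    std::unique_ptr<AudioDeviceModule> injected_adm) {
  if (injected_adm) {
    return injected_adm;
  }
  // Placeholder: the real code would create the platform's internal ADM here.
  return std::make_unique<FakeAudioDeviceModule>();
}
```

The same "injected implementation wins, otherwise fall back to the default" shape is what makes hardware-free testing possible.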


Background

This section contains a historical background of the ADM API.

The ADM interface is old and has undergone many changes over the years. It used to be much more granular but it still contains more than 50 methods and is implemented on several different hardware platforms.

Some APIs are not implemented on all platforms, and functionality can be spread out differently between the methods.

The most up-to-date implementations of the ADM interface are for iOS and for Android.

Desktop versions have not been updated to comply with the latest C++ style guide, and more work is also needed to improve their performance and stability.


WebRtcVoiceEngine

WebRtcVoiceEngine does not utilize all methods of the ADM, but it still serves as the best example of its architecture and how to use it. For a more detailed view of all methods in the ADM interface, see the ADM unit tests.

Assuming that an external ADM implementation is not injected, a default - or internal - ADM is created in WebRtcVoiceEngine::Init using AudioDeviceModule::Create.

Basic initialization is done using a utility method called adm_helpers::Init, which calls fundamental ADM APIs such as AudioDeviceModule::Init.

WebRtcVoiceEngine::Init also calls AudioDeviceModule::RegisterAudioCallback to register an existing AudioTransport implementation, which handles audio callbacks in both directions and therefore serves as the bridge between the native ADM and the upper WebRTC layers.

Recorded audio samples are delivered from the ADM to the WebRtcVoiceEngine (which owns the AudioTransport object) via AudioTransport::RecordedDataIsAvailable:

int32_t RecordedDataIsAvailable(const void* audioSamples, size_t nSamples, size_t nBytesPerSample,
                                size_t nChannels, uint32_t samplesPerSec, uint32_t totalDelayMS,
                                int32_t clockDrift, uint32_t currentMicLevel, bool keyPressed,
                                uint32_t& newMicLevel)
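A hypothetical receiver for this callback might look as follows. The free function below mimics the member signature but is not WebRTC's implementation; the 10 ms framing noted in the comments reflects WebRTC's usual chunk size, and g_sink is an illustrative destination buffer.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative destination for recorded audio (not a WebRTC structure).
std::vector<uint8_t> g_sink;

// Hypothetical AudioTransport receiver: copies one recorded frame into
// g_sink. WebRTC delivers audio in 10 ms chunks, so nSamples is
// samplesPerSec / 100 samples per channel, and nBytesPerSample covers all
// channels of one sample frame (2 * nChannels for 16-bit PCM).
int32_t RecordedDataIsAvailable(const void* audioSamples, size_t nSamples,
                                size_t nBytesPerSample, size_t nChannels,
                                uint32_t samplesPerSec, uint32_t totalDelayMS,
                                int32_t clockDrift, uint32_t currentMicLevel,
                                bool keyPressed, uint32_t& newMicLevel) {
  (void)nChannels; (void)samplesPerSec; (void)totalDelayMS;
  (void)clockDrift; (void)keyPressed;  // unused in this sketch
  const uint8_t* bytes = static_cast<const uint8_t*>(audioSamples);
  g_sink.assign(bytes, bytes + nSamples * nBytesPerSample);
  newMicLevel = currentMicLevel;  // no AGC adjustment in this sketch
  return 0;  // 0 signals success to the caller
}
```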

Decoded audio samples ready to be played out are delivered by the WebRtcVoiceEngine to the ADM via AudioTransport::NeedMorePlayData:

int32_t NeedMorePlayData(size_t nSamples, size_t nBytesPerSample, size_t nChannels, int32_t samplesPerSec,
                         void* audioSamples, size_t& nSamplesOut,
                         int64_t* elapsed_time_ms, int64_t* ntp_time_ms)

Audio samples are 16-bit linear PCM using regular interleaving of channels within each sample.
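To illustrate the interleaved layout, here is a small standalone helper; InterleavedIndex and MakeStereoBuffer are illustrative names, not WebRTC APIs.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Interleaved 16-bit PCM: for stereo, the buffer is L0 R0 L1 R1 ...
// Index of channel `ch` in sample frame `frame` of an nChannels-wide stream:
size_t InterleavedIndex(size_t frame, size_t ch, size_t nChannels) {
  return frame * nChannels + ch;
}

// Build a stereo buffer of `frames` sample frames with constant left/right
// values (e.g. 480 frames is 10 ms at 48 kHz).
std::vector<int16_t> MakeStereoBuffer(int16_t left, int16_t right,
                                      size_t frames) {
  std::vector<int16_t> buf(frames * 2);
  for (size_t i = 0; i < frames; ++i) {
    buf[InterleavedIndex(i, 0, 2)] = left;   // left channel
    buf[InterleavedIndex(i, 1, 2)] = right;  // right channel
  }
  return buf;
}
```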

WebRtcVoiceEngine also owns an AudioState member, and this class is used as a helper to start and stop audio to and from the ADM. To initialize and start recording, it calls AudioDeviceModule::InitRecording followed by AudioDeviceModule::StartRecording, and to initialize and start playout, AudioDeviceModule::InitPlayout followed by AudioDeviceModule::StartPlayout.

Finally, the corresponding stop methods AudioDeviceModule::StopRecording and AudioDeviceModule::StopPlayout are called, followed by AudioDeviceModule::Terminate.
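The overall start/stop sequence can be sketched with a mock ADM that only records the call order. The method names follow audio_device.h, while MockAdm and RunAudioSession are illustrative:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Mock ADM that records the order of lifecycle calls. The method names
// match audio_device.h; the bodies are stand-ins for real platform code.
class MockAdm {
 public:
  int32_t Init() { calls_.push_back("Init"); return 0; }
  int32_t InitRecording() { calls_.push_back("InitRecording"); return 0; }
  int32_t StartRecording() { calls_.push_back("StartRecording"); return 0; }
  int32_t InitPlayout() { calls_.push_back("InitPlayout"); return 0; }
  int32_t StartPlayout() { calls_.push_back("StartPlayout"); return 0; }
  int32_t StopRecording() { calls_.push_back("StopRecording"); return 0; }
  int32_t StopPlayout() { calls_.push_back("StopPlayout"); return 0; }
  int32_t Terminate() { calls_.push_back("Terminate"); return 0; }
  const std::vector<std::string>& calls() const { return calls_; }

 private:
  std::vector<std::string> calls_;
};

// Drive the lifecycle in the order described above.
void RunAudioSession(MockAdm& adm) {
  adm.Init();
  adm.InitRecording();
  adm.StartRecording();
  adm.InitPlayout();
  adm.StartPlayout();
  // ... audio flows via the registered AudioTransport ...
  adm.StopRecording();
  adm.StopPlayout();
  adm.Terminate();
}
```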