Allow an external audio processing module to be used in WebRTC

[This CL is a rebase of an original CL by solenberg@:
https://codereview.webrtc.org/2948763002/ which in turn was a
rebase of an original CL by peah@:
https://chromium-review.googlesource.com/c/527032/]

This CL adds support for optionally using an externally created audio
processing module in a PeerConnection. Ownership of the module is
shared between the PeerConnection and its external creator.
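
A minimal caller-side sketch (not part of this CL's diff; the exact
CreatePeerConnectionFactory() overload, parameter order, and header
paths are assumptions about the API of this era):

  #include "webrtc/api/audio_codecs/builtin_audio_decoder_factory.h"
  #include "webrtc/api/audio_codecs/builtin_audio_encoder_factory.h"
  #include "webrtc/api/peerconnectioninterface.h"
  #include "webrtc/modules/audio_processing/include/audio_processing.h"

  // Create the audio processing module externally, keep one reference in
  // |*external_apm|, and hand a second reference to the factory.
  rtc::scoped_refptr<webrtc::PeerConnectionFactoryInterface>
  CreateFactoryWithExternalApm(
      rtc::Thread* network_thread,
      rtc::Thread* worker_thread,
      rtc::Thread* signaling_thread,
      rtc::scoped_refptr<webrtc::AudioProcessing>* external_apm) {
    *external_apm = webrtc::AudioProcessing::Create();
    // Ownership is now shared: the module stays alive until both the
    // caller and the factory/PeerConnection machinery drop their refs.
    return webrtc::CreatePeerConnectionFactory(
        network_thread, worker_thread, signaling_thread,
        /*default_adm=*/nullptr,
        webrtc::CreateBuiltinAudioEncoderFactory(),
        webrtc::CreateBuiltinAudioDecoderFactory(),
        /*video_encoder_factory=*/nullptr,
        /*video_decoder_factory=*/nullptr,
        /*audio_mixer=*/nullptr,
        *external_apm);
  }

The caller can keep using |*external_apm| (e.g. to adjust its settings
at run time) for as long as it holds its own reference.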

As part of this, internal ownership of the audio processing module is
moved from VoiceEngine to WebRtcVoiceEngine.
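
As a stripped-down illustration of the resulting ownership inside the
media engine (the real WebRtcVoiceEngine constructor takes many more
parameters; member names and header paths here are assumptions):

  #include <utility>

  #include "webrtc/base/scoped_ref_ptr.h"
  #include "webrtc/modules/audio_processing/include/audio_processing.h"

  class WebRtcVoiceEngine {
   public:
    explicit WebRtcVoiceEngine(
        rtc::scoped_refptr<webrtc::AudioProcessing> apm)
        : apm_(std::move(apm)) {}

    webrtc::AudioProcessing* apm() const { return apm_.get(); }

   private:
    // Shared ownership: the engine drops its reference on destruction,
    // but the module outlives it if the external creator still holds one.
    rtc::scoped_refptr<webrtc::AudioProcessing> apm_;
  };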

BUG=webrtc:7775

Review-Url: https://codereview.webrtc.org/2961723004
Cr-Commit-Position: refs/heads/master@{#18837}
diff --git a/webrtc/test/call_test.h b/webrtc/test/call_test.h
index 39df343..e7b75d6 100644
--- a/webrtc/test/call_test.h
+++ b/webrtc/test/call_test.h
@@ -146,6 +146,8 @@
 
   VoiceEngineState voe_send_;
   VoiceEngineState voe_recv_;
+  rtc::scoped_refptr<AudioProcessing> apm_send_;
+  rtc::scoped_refptr<AudioProcessing> apm_recv_;
 
   // The audio devices must outlive the voice engines.
   std::unique_ptr<test::FakeAudioDevice> fake_send_audio_device_;