Custom Audio Source and Renderer
Introduction
Generally, Agora SDKs use default audio modules for capturing and rendering in real-time communications.
However, these default modules might not meet your development requirements, such as in the following scenarios:
- Your app has its own audio module.
- You need to process the captured audio with a pre-processing library.
- You need flexible device resource allocation to avoid conflicts with other services.
Agora provides a solution to enable a custom audio source and/or renderer in the above scenarios. This article describes how to do so using the Agora Native SDK.
Sample project
Agora provides an open-source demo project on GitHub. You can view the source code or download the project to try it out.
Implementation
Before proceeding, ensure that you have implemented the basic real-time communication functions in your project. For details, see Start a Voice Call or Start Interactive Live Audio Streaming.
Custom audio source
Refer to the following steps to implement a custom audio source in your project:
- Before calling joinChannel, call setExternalAudioSource to specify the custom audio source.
- Implement audio capture and processing yourself using methods from outside the SDK.
- Call pushExternalAudioFrame to send the audio frames to the SDK for later use.
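The steps above can be sketched as follows. This is a minimal, self-contained illustration, not the real SDK API: FakeRtcEngine is a stand-in for the engine object, and the frame format (16-bit mono PCM at 16 kHz, 10 ms frames) is an assumption.

```java
import java.util.ArrayList;
import java.util.List;

public class CustomAudioSourceSketch {
    // Stand-in for the SDK engine; a real app calls the Agora engine instead.
    static class FakeRtcEngine {
        boolean externalSource = false;
        final List<byte[]> pushedFrames = new ArrayList<>();

        void setExternalAudioSource(boolean enabled, int sampleRate, int channels) {
            externalSource = enabled;
        }

        void joinChannel(String channelName) { /* join logic elided */ }

        int pushExternalAudioFrame(byte[] pcm, long timestampMs) {
            if (!externalSource) return -1; // source not enabled before joining
            pushedFrames.add(pcm);
            return 0;
        }
    }

    public static void main(String[] args) {
        FakeRtcEngine engine = new FakeRtcEngine();

        // 1. Specify the custom audio source before joining the channel.
        engine.setExternalAudioSource(true, 16000, 1);
        engine.joinChannel("demo");

        // 2./3. Capture frames yourself and push them to the SDK.
        // 10 ms of 16-bit mono PCM at 16 kHz = 160 samples = 320 bytes.
        for (int i = 0; i < 5; i++) {
            byte[] frame = new byte[320]; // silence; a real app fills this from its capture module
            engine.pushExternalAudioFrame(frame, i * 10L);
        }
        System.out.println("frames pushed: " + engine.pushedFrames.size());
    }
}
```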
API call sequence
Refer to the following diagram to implement the custom audio source:
Audio data transfer
The following diagram shows how the audio data is transferred when you customize the audio source:
- You need to implement the capture module yourself using methods from outside the SDK.
- Call pushExternalAudioFrame to send the captured audio frames to the SDK.
Code samples
Refer to the following code samples to implement the custom audio source in your project.
- Before the local user joins the channel, specify the custom audio source.
- Implement your own audio capture module. After the local user joins the channel, enable the capture module to start capturing audio frames from the custom audio source.
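A capture module like the one described above can be sketched as a paced thread that reads frames from your own source and hands each one to the SDK. This is a hypothetical illustration: AudioPusher stands in for the engine's push call, readFrameFromCustomSource stands in for your capture code, and the 10 ms / 16 kHz mono frame size is an assumption.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class CaptureModuleSketch {
    // Stand-in for the SDK's push entry point.
    interface AudioPusher {
        int pushExternalAudioFrame(byte[] pcm, long timestampMs);
    }

    static class CaptureThread extends Thread {
        private final AudioPusher pusher;
        private final AtomicBoolean running = new AtomicBoolean(true);
        private long timestampMs = 0;

        CaptureThread(AudioPusher pusher) { this.pusher = pusher; }

        // A real app reads from its own device, file, or pre-processing library;
        // here we emit silence of the right size.
        private byte[] readFrameFromCustomSource() {
            return new byte[320]; // 160 samples * 2 bytes (16-bit mono @ 16 kHz, 10 ms)
        }

        @Override
        public void run() {
            while (running.get()) {
                byte[] frame = readFrameFromCustomSource();
                pusher.pushExternalAudioFrame(frame, timestampMs);
                timestampMs += 10;
                try {
                    Thread.sleep(10); // pace frames at the capture interval
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }

        void stopCapture() { running.set(false); }
    }
}
```

Start the thread after the local user joins the channel, and stop it before leaving.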
API reference
Custom audio renderer
Refer to the following steps to implement a custom audio renderer in your project:
- Before calling joinChannel, call setExternalAudioSink to enable and configure the external audio renderer.
- After joining the channel, call pullPlaybackAudioFrame to retrieve the audio data sent by a remote user.
- Use your own audio renderer to process the audio data, then play the rendered data.
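The steps above can be sketched as follows. As with the source example, this is a self-contained stand-in rather than the real SDK API: FakeRtcEngine mimics the engine, and the frame format (16-bit mono PCM at 16 kHz, 320-byte frames) is an assumption.

```java
import java.util.ArrayList;
import java.util.List;

public class CustomAudioRendererSketch {
    // Stand-in for the SDK engine; a real app calls the Agora engine instead.
    static class FakeRtcEngine {
        boolean externalSink = false;

        void setExternalAudioSink(boolean enabled, int sampleRate, int channels) {
            externalSink = enabled;
        }

        void joinChannel(String channelName) { /* join logic elided */ }

        // Returns one frame of remote audio; here it is synthesized silence.
        byte[] pullPlaybackAudioFrame(int lengthInBytes) {
            if (!externalSink) return null; // sink not enabled before joining
            return new byte[lengthInBytes];
        }
    }

    public static void main(String[] args) {
        FakeRtcEngine engine = new FakeRtcEngine();

        // 1. Enable and configure the external renderer before joining.
        engine.setExternalAudioSink(true, 16000, 1);
        engine.joinChannel("demo");

        // 2./3. Pull remote audio and hand it to your own renderer.
        List<byte[]> rendered = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            byte[] frame = engine.pullPlaybackAudioFrame(320);
            rendered.add(frame); // a real app plays this through its own renderer
        }
        System.out.println("frames rendered: " + rendered.size());
    }
}
```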
API call sequence
Refer to the following diagram to implement the custom audio renderer in your project:
Audio data transfer
The following diagram shows how the audio data is transferred when you customize the audio renderer:
- You need to implement the rendering module yourself using methods from outside the SDK.
- Call pullPlaybackAudioFrame to retrieve the audio data sent by a remote user.
Code samples
Refer to the following code samples to implement the custom audio renderer in your project:
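One common pattern is a dedicated playback thread that pulls a frame from the SDK every frame interval and feeds it to your renderer. The sketch below is hypothetical: AudioPuller and FrameRenderer stand in for the engine's pull call and your own playback module, and the frame size is an assumed 10 ms of 16-bit mono PCM at 16 kHz.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class RendererModuleSketch {
    // Stand-in for the SDK's pull entry point.
    interface AudioPuller {
        byte[] pullPlaybackAudioFrame(int lengthInBytes);
    }

    // Stand-in for your own playback module.
    interface FrameRenderer {
        void render(byte[] pcm);
    }

    static class PullThread extends Thread {
        private static final int FRAME_BYTES = 320; // 10 ms of 16-bit mono @ 16 kHz (assumed)
        private final AudioPuller puller;
        private final FrameRenderer renderer;
        private final AtomicBoolean running = new AtomicBoolean(true);

        PullThread(AudioPuller puller, FrameRenderer renderer) {
            this.puller = puller;
            this.renderer = renderer;
        }

        @Override
        public void run() {
            while (running.get()) {
                byte[] frame = puller.pullPlaybackAudioFrame(FRAME_BYTES);
                if (frame != null) {
                    renderer.render(frame); // process, then play through your renderer
                }
                try {
                    Thread.sleep(10); // pace pulls at the frame interval
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }

        void stopPlayback() { running.set(false); }
    }
}
```

Start the thread after joining the channel, and stop it before leaving so no pulls occur on a disconnected engine.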
API reference
Considerations
Performing the following operations requires you to use methods from outside the Agora SDK:
- Manage the capture and processing of audio frames when using a custom audio source.
- Manage the processing and playback of audio frames when using a custom audio renderer.