The audio subsystem allows a user-space application to capture and play back audio. The audio DSP (ADSP) is the component responsible for capture and playback. A compute DSP (CDSP) is also available and can be leveraged for high-performance audio use cases, including keyword activation, far-field voice, echo cancellation, and noise suppression.

The flow for audio capture and playback is shown in the following diagram. The audio subsystem can be accessed in multiple ways, including the GStreamer pulse elements (pulsesink, pulsesrc) and the native PulseAudio client utilities.

Abstraction of the audio subsystem is provided by PulseAudio, the sound server supported on the platform. A GStreamer application can use the pulsesink and pulsesrc elements to render or capture audio, respectively. Captured audio can be forwarded to a ROS 2 node for further processing. The audio subsystem has built-in intelligence in the form of the Qualcomm® Voice Suite, which includes FFV (far-field voice), ECNS (echo cancellation and noise suppression), and SVA (voice activation). Details on the Qualcomm® Smart Audio Platform can be found on the QTI website.
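
Independent of GStreamer, the same PulseAudio paths can be exercised directly with the pulse client utilities. The following is a minimal sketch; the sample file path and capture parameters are illustrative and may differ per image:

```shell
# Play a WAV file and capture a short clip through PulseAudio using the
# standard client utilities (file paths here are illustrative).
SAMPLE=/usr/share/sounds/alsa/Front_Center.wav
if command -v paplay >/dev/null 2>&1; then
  # Render a sample file through the default sink.
  [ -e "$SAMPLE" ] && paplay "$SAMPLE" || echo "sample file not found or playback failed"
  # Record ~2 seconds of 48 kHz stereo PCM from the default source.
  timeout 2 parecord --format=s16le --rate=48000 --channels=2 /tmp/capture.wav \
    || echo "capture failed (is a PulseAudio server running?)"
else
  echo "PulseAudio client utilities not available on this system"
fi
```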

Reference examples

The following examples use the pulsesrc and pulsesink elements to capture audio to an Ogg/Vorbis file and to play one back:

gst-launch-1.0 -v pulsesrc ! queue ! audioconvert ! vorbisenc ! oggmux ! filesink location=alsasrc.ogg

gst-launch-1.0 -v filesrc location=sine.ogg ! oggdemux ! vorbisdec ! audioconvert ! audioresample ! pulsesink
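
The pipelines above use the default source and sink; a specific endpoint can be selected with the device property on pulsesrc/pulsesink. The endpoint names can be listed with pactl, assuming the standard PulseAudio command-line utilities are installed:

```shell
# Enumerate PulseAudio endpoints; the names reported here can be passed to
# pulsesrc/pulsesink via the device property.
if command -v pactl >/dev/null 2>&1; then
  pactl list short sinks || true   # playback endpoints
  pactl list short sources \
    || echo "could not query PulseAudio (is the server running?)"
else
  echo "pactl not available on this system"
fi
```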

The direct ALSA route is also supported; the corresponding ALSA source and sink elements (alsasrc, alsasink) can be used in place of the pulse elements:

gst-launch-1.0 -v alsasrc ! queue ! audioconvert ! vorbisenc ! oggmux ! filesink location=alsasrc.ogg

gst-launch-1.0 -v uridecodebin uri=file:///path/to/audio.ogg ! audioconvert ! audioresample ! autoaudiosink
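
The ALSA elements likewise accept a device property (for example device=hw:0,0). The available ALSA playback and capture devices can be enumerated with the standard alsa-utils tools, where present on the image:

```shell
# List ALSA playback and capture devices; the card/device numbers reported
# here form the hw:<card>,<device> names accepted by alsasrc/alsasink.
if command -v aplay >/dev/null 2>&1; then
  aplay -l || true    # playback devices (may report none in minimal environments)
  arecord -l || true  # capture devices
else
  echo "alsa-utils not available on this system"
fi
```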

Compressed offload playback is supported on the ADSP. This mode saves power: the compressed audio data is sent to the ADSP, which decodes and renders it, allowing the application processor to remain in a low-power state as much as possible (for example, during screen-off music playback).
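
How a compressed stream is routed to the offload path is platform-specific. One common mechanism on Qualcomm platforms is the tinycompress cplay utility; the sketch below assumes it is present, and the card/device numbers and file name are placeholders that must match the platform's compress-capable PCM device:

```shell
# Hypothetical compressed-offload playback via the tinycompress `cplay`
# utility. Card (-c) and device (-d) numbers are placeholders; consult the
# platform's mixer/UCM configuration for the actual compress device.
if command -v cplay >/dev/null 2>&1; then
  cplay -c 0 -d 1 music.mp3 \
    || echo "offload playback failed (wrong device or missing file?)"
else
  echo "cplay (tinycompress) not available on this system"
fi
```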