Adding HDR Video Capabilities to Android Apps

Friday 6/23/23 07:40pm | Posted By Morris Novello

Snapdragon and Qualcomm branded products are products of
Qualcomm Technologies, Inc. and/or its subsidiaries.

Co-written by Sunid Wilson, Director of Engineering, Camera and Satish Goverdhan, Sr. Director of Engineering, Camera

The latest Android devices feature powerful mobile compute and rich media capabilities, with ultra-high quality video in High Dynamic Range (HDR) being one of the most enticing features. HDR allows for brighter, more detailed highlights and shadows, and a wider range of colors compared to Standard Dynamic Range (SDR). However, because SDR remains a prominent format, users need a seamless experience to share HDR content with other users’ SDR devices, as well as external systems like social media sites (e.g., some social media sites support the expansion of HDR from a standard 8-bit JPEG).

To support this, Android 13 requires that any Android device with 10-bit YUV output capability use HLG10 as the baseline for HDR capture while continuing to support SDR. Device makers can optionally support additional HDR standards, including HDR10, HDR10+, and Dolby Vision 8.4.

If you’re an Android app developer integrating a typical camera-to-end-user pipeline that supports HDR, then you’ll want to become more familiar with the Camera2 API package found in the Android API. The API provides low-level access to device-specific functionality and although it requires managing device-specific configurations, it allows you to handle complex use cases.

Let’s take a closer look at what this entails.

Common Terms

Before getting into Camera2, it’s important to understand a few key terms you’ll encounter when implementing a typical camera-to-end-user pipeline:

  • Capture: Capture data from the onboard camera sensor(s) – either for preview or recording.
  • Edit: Process the raw data as HDR or SDR at the codec level. A key phase of this process is tone mapping, which compresses the wide range of captured tonal values into a range suitable for display on a given screen.
  • Encode: Compress the raw data (e.g., for storing and sharing).
  • Transcode: Decompress the video and re-encode it (e.g., to different codec formats, HDR to SDR mode, etc.). Additional alterations can also be made during this phase (e.g., adding watermarks).
  • Decode: Decompress an encoded video for playback.
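To make the tone-mapping step concrete, here is a minimal, framework-free Kotlin sketch using the classic Reinhard operator (a simple global curve used here purely for illustration; real camera pipelines apply standard-defined, hardware-accelerated transfer curves, not this function):

```kotlin
// Illustrative global tone mapping (Reinhard operator): compresses linear HDR
// luminance in [0, inf) into the SDR display range [0, 1).
// This is a teaching sketch, not the curve Android's pipeline actually applies.
fun reinhardToneMap(hdrLuminance: Double): Double =
    hdrLuminance / (1.0 + hdrLuminance)

fun main() {
    // Bright HDR highlights are compressed far more than shadows,
    // preserving relative detail across the whole range.
    for (l in listOf(0.0, 0.5, 1.0, 4.0, 100.0)) {
        println("HDR luminance $l -> SDR ${reinhardToneMap(l)}")
    }
}
```

Note how the curve is monotonic but asymptotic: no input, however bright, exceeds the SDR ceiling, which is the essential property any tone-mapping operator must have.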

Check for HLG Support

The first order of business is to check for HLG support. Android’s Camera2 API provides a straightforward interface for this. As their documentation states: “HLG10 is the baseline HDR standard that device makers must support on cameras with 10-bit”, so start by checking for the presence of a 10-bit camera as shown in their code sample:

val cameraCharacteristics = cameraManager.getCameraCharacteristics(cameraId)
val availableCapabilities =
    cameraCharacteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)
if (availableCapabilities?.contains(
        CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_DYNAMIC_RANGE_TEN_BIT) == true) {
    // The camera supports 10-bit output, so HDR capture is available
}
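Once 10-bit support is confirmed, an app typically queries the camera’s supported dynamic range profiles (on-device via CameraCharacteristics.REQUEST_AVAILABLE_DYNAMIC_RANGE_PROFILES and DynamicRangeProfiles.getSupportedProfiles()) and picks the best one it can handle. Here is a framework-free sketch of that selection logic; the Long constants mirror the documented android.hardware.camera2.params.DynamicRangeProfiles values, but verify them against the current API reference:

```kotlin
// Stand-in constants mirroring android.hardware.camera2.params.DynamicRangeProfiles.
// On a real device, use the framework constants rather than these local copies.
const val PROFILE_STANDARD = 0x1L   // SDR
const val PROFILE_HLG10 = 0x2L      // mandatory baseline for 10-bit cameras
const val PROFILE_HDR10 = 0x4L
const val PROFILE_HDR10_PLUS = 0x8L

// Prefer the most capable HDR profile the camera advertises, falling back to
// the mandated HLG10 baseline, then to SDR.
fun pickDynamicRangeProfile(supported: Set<Long>): Long = when {
    PROFILE_HDR10_PLUS in supported -> PROFILE_HDR10_PLUS
    PROFILE_HDR10 in supported -> PROFILE_HDR10
    PROFILE_HLG10 in supported -> PROFILE_HLG10
    else -> PROFILE_STANDARD
}
```

The chosen Long is what you would then hand to OutputConfiguration.setDynamicRangeProfile() when configuring the capture session.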

HDR Video Capture

The next step is to set up a CameraCaptureSession for a CameraDevice to capture video from the camera. A CameraCaptureSession abstracts the process for capturing images from a camera to one or more target surfaces.

The following code sample, from Android’s HDR Video Capture topic, shows how to create a CameraCaptureSession using different methods based on the OS version:

private fun setupSessionWithDynamicRangeProfile(
        dynamicRange: Long,
        device: CameraDevice,
        targets: List<Surface>,
        handler: Handler? = null,
        stateCallback: CameraCaptureSession.StateCallback
): Boolean {
    if (android.os.Build.VERSION.SDK_INT >=
            android.os.Build.VERSION_CODES.TIRAMISU) {
        val outputConfigs = mutableListOf<OutputConfiguration>()
        for (target in targets) {
            val outputConfig = OutputConfiguration(target)
            // Sets the dynamic range profile, for example DynamicRangeProfiles.HLG10
            outputConfig.setDynamicRangeProfile(dynamicRange)
            outputConfigs.add(outputConfig)
        }

        device.createCaptureSessionByOutputConfigurations(
                outputConfigs, stateCallback, handler)
        return true
    } else {
        device.createCaptureSession(targets, stateCallback, handler)
        return false
    }
}

Note: A preview stream and its shared streams require a low-latency profile, but this is optional for video streams. An application can determine whether there is an extra look-ahead delay for any of the HDR modes by invoking DynamicRangeProfiles.isExtraLatencyPresent() (passing in DynamicRangeProfiles.HDR10_PLUS, DynamicRangeProfiles.HDR10, or DynamicRangeProfiles.HLG10) before invoking setDynamicRangeProfile().

The session object can then be used for both preview and recordings. The code sample below shows how to start a preview by invoking a repeating CaptureRequest:

session.setRepeatingRequest(previewRequest, null, cameraHandler)

Notes:

  • cameraHandler is the thread handler on which the listener should be invoked (or can be set to null to use the current thread).
  • If the application is using different HDR profiles for preview and video, it must check for valid profile combinations using getProfileCaptureRequestConstraints().

A repeating CaptureRequest maintains a continuous stream of frames, without having to continually invoke frame-by-frame capture requests. The first parameter is a CaptureRequest that contains the information required to perform the capture (e.g., capture hardware, output buffer, target surface(s), etc.).

Similarly, a recording is also started using a repeating request. The following example shows this request with a CaptureCallback that can be used to track the capture progress (e.g., started, stopped, etc.):

session.setRepeatingRequest(recordRequest,
        object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(session: CameraCaptureSession,
            request: CaptureRequest, result: TotalCaptureResult) {
        if (currentlyRecording) {
            encoder.frameAvailable()
        }
    }
}, cameraHandler)

HDR10/10+ Video Editing

Video editing is performed using the MediaCodec class. To determine whether the device supports HDR video editing, invoke the getCapabilitiesForType() method, which returns a MediaCodecInfo.CodecCapabilities object, then invoke that object’s isFeatureSupported() method, passing in the FEATURE_HdrEditing string. If the method returns true, the device supports both YUV and RGB input; in this case, the encoder transforms and tone-maps RGBA1010102 to encodable YUV P010. For example, if a user recorded an HDR video in HLG, they can downscale or rotate it, or add a logo or sticker, and save it in HDR format.
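To illustrate the pixel formats involved: RGBA1010102 packs each pixel into a single 32-bit word, with 10 bits per color channel and 2 bits of alpha. Below is a small Kotlin sketch of packing and unpacking that layout (the channel ordering here, with R in the lowest bits, is an assumption matching Android’s RGBA_1010102 buffer format; verify against the buffers you actually receive):

```kotlin
// Pack 10-bit R, G, B and 2-bit A into one 32-bit RGBA1010102 word.
// Assumed layout: R in bits 0-9, G in 10-19, B in 20-29, A in 30-31.
fun packRgba1010102(r: Int, g: Int, b: Int, a: Int): Int =
    (r and 0x3FF) or
    ((g and 0x3FF) shl 10) or
    ((b and 0x3FF) shl 20) or
    ((a and 0x3) shl 30)

// Recover the channels; 10-bit channels span 0..1023 instead of SDR's 0..255,
// which is where HDR's extra highlight and shadow precision lives.
fun unpackRgba1010102(pixel: Int): IntArray = intArrayOf(
    pixel and 0x3FF,
    (pixel ushr 10) and 0x3FF,
    (pixel ushr 20) and 0x3FF,
    (pixel ushr 30) and 0x3
)
```

The four extra value steps per channel (1024 vs. 256) are what the encoder preserves when it converts this layout to YUV P010, which also carries 10 significant bits per sample.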

The TransformationRequest.Builder class’s experimental_setEnableHdrEditing() method can be used to construct a transformation for HDR editing.

HDR Transcoding to SDR

You may need to support transcoding HDR content to SDR to allow for sharing content across different devices or exporting video to other formats. Snapdragon® technology features an optimized pipeline that looks for ways to reduce latency during the transformation and can tone map various HDR formats including HLG10, HDR10, HDR10+, and Dolby Vision (on licensed devices).

You enable transcoding by implementing the Codec.DecoderFactory interface and working with the Media API. Here, you construct a MediaFormat object for your video’s MIME type and pass MediaFormat.KEY_COLOR_TRANSFER_REQUEST to the object’s setInteger() method, along with the MediaFormat.COLOR_TRANSFER_SDR_VIDEO flag.

The following code sample shows an implementation of the interface’s createForVideoDecoding() method. The implementation configures a codec that, when the caller requests it, tone-maps raw video frames to the requested transfer function:

public Codec createForVideoDecoding(
      Format format, Surface outputSurface, boolean enableRequestSdrToneMapping)
      throws TransformationException {

    MediaFormat mediaFormat =
        MediaFormat.createVideoFormat(
            checkNotNull(format.sampleMimeType), format.width, format.height);

    MediaFormatUtil.maybeSetInteger(mediaFormat, MediaFormat.KEY_ROTATION, format.rotationDegrees);
    MediaFormatUtil.maybeSetInteger(
        mediaFormat, MediaFormat.KEY_MAX_INPUT_SIZE, format.maxInputSize);
    MediaFormatUtil.setCsdBuffers(mediaFormat, format.initializationData);
    if (SDK_INT >= 29) {
      // On API level 29 and above, Transformer decodes as many frames as possible in one render
      // cycle. This key ensures no frame dropping when the decoder's output surface is full.
      mediaFormat.setInteger(MediaFormat.KEY_ALLOW_FRAME_DROP, 0);
    }
    if (SDK_INT >= 31 && enableRequestSdrToneMapping) {
      mediaFormat.setInteger(
          MediaFormat.KEY_COLOR_TRANSFER_REQUEST, MediaFormat.COLOR_TRANSFER_SDR_VIDEO);
    }

    @Nullable
    String mediaCodecName = EncoderUtil.findCodecForFormat(mediaFormat, /* isDecoder= */ true);
    if (mediaCodecName == null) {
      throw createTransformationException(format);
    }
    return new DefaultCodec(
        format, mediaFormat, mediaCodecName, /* isDecoder= */ true, outputSurface);
}

See here for the full sample.

The Transformer example in AndroidX Media (a collection of media code samples) shows a more complex example.

Conclusion

Camera2 is a good starting point for Android app developers to add HDR support. With it, you can query for device capabilities at runtime, and provide optional code paths which take full advantage of HDR on supported devices like those powered by Snapdragon mobile platforms. Best of all, these foundations are available for you today!

Be sure to check out our new Android on Snapdragon pages for learning resources on this and other APIs for your Android development.

Stay up to date with the latest in Android development by signing up for our newsletter.


