DashCam Android Application

Skill Level: Intermediate
Area of Focus: Artificial Intelligence, Computer Vision
Operating System: Android

The project is designed to utilize the Qualcomm® Neural Processing SDK, which allows you to tune the performance of AI applications running on Snapdragon® mobile platforms. The Qualcomm Neural Processing SDK is used to convert the trained models from Caffe, Caffe2, ONNX, and TensorFlow to the Snapdragon supported format (.dlc format), allowing developers to enable their AI applications with optimized on-device inference.

The main objective of this project is to develop an Android application that uses the built-in camera to capture objects on a road and a machine learning model to predict the class and location of each of those objects.

Parts used

Below are the items used in this project.

Project components for the DashCam Android application

  1. Mobile Display with QC Dash Cam app
  2. Snapdragon 835 Mobile Hardware Development Kit
  3. External camera setup

Deploying the project

  1. Download code from the GitHub Repository.
  2. Compile the code and run the application from Android Studio to generate the application (apk) file.

How does it work?

The QC_DashCam Android application opens a camera preview, collects the frames, and converts them to bitmaps. The network is built via the Neural Network builder by passing caffe_mobilenet.dlc as the input. Each bitmap is then passed to the model for inference, which returns the prediction and localization of the respective objects.

Prerequisites for Camera Preview

Permission to obtain camera preview frames is declared in AndroidManifest.xml:

        <uses-permission android:name="android.permission.CAMERA" />

To use the camera2 APIs, also declare the camera hardware feature:

        <uses-feature android:name="android.hardware.camera" />
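
Note that on Android 6.0 (API 23) and above, the CAMERA permission declared in the manifest must also be granted at runtime before the preview can start. A minimal sketch using the AndroidX/support-library helpers, inside the Activity; `REQUEST_CAMERA` is a hypothetical app-defined request code:

```java
// Hypothetical request code used to match the permission callback.
private static final int REQUEST_CAMERA = 1;

private void ensureCameraPermission() {
    // Ask for the CAMERA permission only if it has not been granted yet.
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(
                this, new String[]{Manifest.permission.CAMERA}, REQUEST_CAMERA);
    }
}
```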

Loading Model

Code snippet for neural network connection and loading model:

        final SNPE.NeuralNetworkBuilder builder = new SNPE.NeuralNetworkBuilder(application)
                // Allows selecting a runtime order for the network.
                // In the example below use DSP and fall back, in order, to GPU then CPU
                // depending on whether any of the runtimes are available.
                .setRuntimeOrder(DSP, GPU, CPU)
                // Loads a model from DLC file
                .setModel(new File("<model-path>"));
        // Build the network
        network = builder.build();

Capturing Preview Frames

A TextureView is used to render the camera preview. TextureView.SurfaceTextureListener is an interface that can be used to get notified when the surface texture associated with the texture view becomes available.

        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture surfaceTexture, int width, int height) {
            Logger.d(TAG, "onSurfaceTextureAvailable");
            try {
                // ids[0] indicates the rear camera
                mCameraManager.openCamera(ids[0], mCameraCallback, mBackgroundHandler);
            } catch (CameraAccessException e) {
                Logger.e(TAG, "Unable to open camera", e);
            }
        }

Camera Callbacks

The camera callback, CameraDevice.StateCallback, is used for receiving updates about the state of a camera device. In the overridden method below, a surface texture is created to capture the preview and obtain the frames.

        @Override
        public void onOpened(@NonNull CameraDevice cameraDevice) {
            Logger.d(TAG, "onOpened()");
            mCameraDevice = cameraDevice;
            mSurfaceTexture = mTextureView.getSurfaceTexture();
            Surface mSurface = new Surface(mSurfaceTexture);
            try {
                mCameraDevice.createCaptureSession(Arrays.asList(mSurface), new CameraCapture(), null);
                builder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            } catch (CameraAccessException e) {
                Logger.e(TAG, "Unable to create capture session", e);
            }
        }

Getting Bitmap from Texture view

A bitmap of fixed height and width can be obtained from the TextureView in the onCaptureCompleted callback using TotalCaptureResult. That bitmap can be compressed and sent to the model as input.

        @Override
        public void onCaptureCompleted(@NonNull CameraCaptureSession session, @NonNull CaptureRequest request, @NonNull TotalCaptureResult result) {
            super.onCaptureCompleted(session, request, result);
            if (mNetworkLoaded) {
                Bitmap mBitmap = mTextureView.getBitmap(Constants.BITMAP_WIDTH, Constants.BITMAP_HEIGHT);
                // pass mBitmap on for inference
            }
        }

Object Inference

The bitmap image is converted to an RGBA byte array and then pre-processed into a float array of size 300*300*3. The exact pre-processing depends on the input shape the model requires; the processed image must then be converted into a tensor. The prediction API requires a tensor of type Float and returns the object prediction and localization in a <code>Map<String, FloatTensor></code> object.

        private Map<String, FloatTensor> inferenceOnBitmap(Bitmap inputBitmap) {
            final Map<String, FloatTensor> outputs;
            try {
                if (mNeuralNetwork == null || mInputTensorReused == null
                        || inputBitmap.getWidth() != getInputTensorWidth()
                        || inputBitmap.getHeight() != getInputTensorHeight()) {
                    return null;
                }
                // [0.3ms] Bitmap to RGBA byte array (size: 300*300*3 (RGBA..))
                // [2ms] Pre-processing: Bitmap (300,300,4 ints) -> Float Input Tensor (300,300,3 floats)
                final float[] inputFloatsHW3 = mBitmapToFloatHelper.bufferToNormalFloatsBGR();
                if (mBitmapToFloatHelper.isFloatBufferBlack()) {
                    return null;
                }
                mInputTensorReused.write(inputFloatsHW3, 0, inputFloatsHW3.length, 0, 0);
                mTimeStat.stopInterval("i_tensor", 20, false);
                // [31ms on GPU16, 50ms on GPU] execute the inference
                outputs = mNeuralNetwork.execute(mInputTensorsMap);
                mTimeStat.stopInterval("nn_exec ", 20, false);
            } catch (Exception e) {
                return null;
            }
            return outputs;
        }
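
The helper bufferToNormalFloatsBGR referenced above is not shown in the snippet. A minimal sketch of what such pre-processing might look like, assuming mean-128, scale-1/128 normalization and BGR channel order for a Caffe-trained model (the actual constants depend on how the model was trained):

```java
public class Preprocess {

    // Convert ARGB pixels (as returned by Bitmap.getPixels) into a
    // normalized BGR float array in height-width-channel order.
    // Mean 128 and scale 1/128 are assumptions; use the values the
    // model was actually trained with.
    public static float[] pixelsToNormalFloatsBGR(int[] pixels, int width, int height) {
        final float mean = 128.0f;
        final float scale = 1.0f / 128.0f;
        float[] out = new float[width * height * 3];
        for (int i = 0; i < width * height; i++) {
            int p = pixels[i];
            int r = (p >> 16) & 0xFF;
            int g = (p >> 8) & 0xFF;
            int b = p & 0xFF;
            // BGR channel order to match the Caffe-trained model
            out[i * 3]     = (b - mean) * scale;
            out[i * 3 + 1] = (g - mean) * scale;
            out[i * 3 + 2] = (r - mean) * scale;
        }
        return out;
    }
}
```

The resulting float array can then be written into the reusable input tensor exactly as the snippet above does.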

Object Localization

The model returns the respective FloatTensors, from which the bounding box of each object and its class name can be inferred. A Canvas is used to draw a rectangle over the predicted object.

        private void computeFinalGeometry(Box box, Canvas canvas) {
            final int viewWidth = getWidth();
            final int viewHeight = getHeight();
            float y = viewHeight * box.left;
            float x = viewWidth * box.top;
            float y1 = viewHeight * box.right;
            float x1 = viewWidth * box.bottom;
            // draw the text label
            String textLabel = (box.type_name != null && !box.type_name.isEmpty())
                    ? box.type_name : String.valueOf(box.type_id + 2);
            canvas.drawText(textLabel, x + 10, y + 30, mTextPaint);
            // draw the bounding box
            canvas.drawRect(x, y, x1, y1, mOutlinePaint);
        }
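
The source does not show how the raw output floats become Box objects. A hedged sketch of one plausible decoding step, assuming the 7-float record layout [image_id, label, score, xmin, ymin, xmax, ymax] of the Caffe MobileNet-SSD detection_out layer with coordinates normalized to [0, 1]; the Box class and threshold here are illustrative, not the project's actual types:

```java
import java.util.ArrayList;
import java.util.List;

public class BoxDecoder {

    /** One detection: class id, confidence, and normalized [0, 1] corners. */
    public static class Box {
        public final int typeId;
        public final float score;
        public final float left, top, right, bottom;

        Box(int typeId, float score, float left, float top, float right, float bottom) {
            this.typeId = typeId;
            this.score = score;
            this.left = left;
            this.top = top;
            this.right = right;
            this.bottom = bottom;
        }
    }

    // Decode a flat detection tensor into boxes, keeping only detections
    // whose confidence meets the threshold. The 7-float record layout is
    // an assumption based on the Caffe MobileNet-SSD detection_out layer.
    public static List<Box> decode(float[] raw, float scoreThreshold) {
        List<Box> boxes = new ArrayList<>();
        for (int i = 0; i + 7 <= raw.length; i += 7) {
            float score = raw[i + 2];
            if (score < scoreThreshold) {
                continue;
            }
            boxes.add(new Box((int) raw[i + 1], score,
                    raw[i + 3], raw[i + 4], raw[i + 5], raw[i + 6]));
        }
        return boxes;
    }
}
```

Each decoded Box can then be scaled to view coordinates and drawn exactly as computeFinalGeometry does above.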

Sample screenshot of the application with model prediction

How to install the Android application

  • You will need an Android phone running version 7.0 or above.
  • ADB must be installed on the Windows / Linux system.
  • Follow the instructions at https://developer.android.com/studio/command-line/adb.html to install the adb tools on your computer.
  • Install the application using ADB with the following command: adb install qc_dashCam.apk
  • Run the application on the phone.
Rakesh Sankar, Sr. System Architect, GlobalEdge Software, Ltd
Shivanand Pujar, Project Manager, GlobalEdge Software, Ltd
Akshay Kulkarni, Technical Lead, GlobalEdge Software, Ltd
Pooja Prasad, Sr. Software Engineer, GlobalEdge Software, Ltd
Pritam Mukherjee, Software Engineer, GlobalEdge Software, Ltd
Rajagonda Pujari, Module Lead, GlobalEdge Software, Ltd
Patcha Vengamamba, Software Engineer, GlobalEdge Software, Ltd