DashCam ML Model Training

Skill Level: Intermediate
Area of Focus: Artificial Intelligence, Computer Vision
Operating System: Linux

The main objective of this project is to develop a machine learning model that detects objects on the road, such as pedestrians, cars, motorbikes, bicycles, and buses.

The object detection model uses the Single Shot Detector (SSD) algorithm on the MobileNet network architecture. The trained model is then optimized for Snapdragon mobile platforms by converting it to the Deep Learning Container (.dlc) format supported by the Qualcomm Neural Processing SDK.

Requirements

S/W

  1. Ubuntu 16.04 machine
  2. Qualcomm Neural Processing SDK
  3. Python 3.5

H/W

  1. Intel Core i5 or better processor
  2. Minimum 16 GB of system RAM
  3. NVIDIA GTX-series graphics card, GTX 1050 Ti or better (https://www.nvidia.com/en-in/geforce/products/10series/geforce-gtx-1050/)

Why MobileNetSSD

  1. Provides real-time inference frame rates of 8-13 FPS on the Snapdragon 835 hardware development kit
  2. Offers better performance and accuracy on mobile hardware than other detection architectures such as YOLO

How to train the model

Installation of MobileNetSSD and Caffe

  1. Clone the Caffe source code from the Git repository and check out the SSD branch:
          $ git clone https://github.com/weiliu89/caffe.git
          $ cd caffe
          $ git checkout ssd
        
  2. Depending on the processor used (CPU or GPU), install the required dependency packages by following the instructions at http://caffe.berkeleyvision.org/install_apt.html (see the sketch after this list for typical packages and settings).
  3. Build Caffe using the instructions below:
          $ cp Makefile.config.example Makefile.config
          (Make the necessary modifications in Makefile.config for your device configuration)
          $ make -j8
          (Make sure you add $CAFFE_ROOT/python to your PYTHONPATH)
        
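The dependency and Makefile.config notes above leave the details implicit. The following is a minimal sketch for a CPU-only build on Ubuntu 16.04: the package list follows the Caffe Ubuntu installation guide, while the Makefile.config hints and the pycaffe step are only an example, so adjust them for a CUDA build or a different Python version.

          # Dependency packages listed in the Caffe Ubuntu installation guide
          $ sudo apt-get install build-essential libprotobuf-dev libleveldb-dev libsnappy-dev \
              libopencv-dev libhdf5-serial-dev protobuf-compiler libboost-all-dev \
              libgflags-dev libgoogle-glog-dev liblmdb-dev libatlas-base-dev
          # Add python-dev or python3-dev to match the Python version you build pycaffe against
          # Typical Makefile.config edits: uncomment CPU_ONLY := 1 for a CPU build,
          # or leave it commented and point CUDA_DIR at your CUDA install for a GPU build
          $ make -j8
          $ make pycaffe
          # Make pycaffe importable and verify the build
          $ export PYTHONPATH=$CAFFE_ROOT/python:$PYTHONPATH
          $ python -c "import caffe"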

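The section title mentions MobileNetSSD, but the steps above only set up Caffe. The gen_model.sh, train.sh, test.sh, and merge_bn.py scripts used later come from a Caffe MobileNet-SSD implementation; the sketch below assumes the commonly used chuanqi305/MobileNet-SSD repository and introduces $MOBILENETSSD_DIR as a convenience variable (it is referenced later but never defined in the original steps):

          $ cd $CAFFE_ROOT/examples
          $ git clone https://github.com/chuanqi305/MobileNet-SSD.git
          $ export MOBILENETSSD_DIR=$CAFFE_ROOT/examples/MobileNet-SSD

Cloning under $CAFFE_ROOT/examples matters because the training scripts typically invoke the caffe binary through a relative path into the Caffe build tree.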
Getting the data ready

  1. Download the fully convolutional reduced (atrous) VGGNet model by following the given instructions:
          $ git clone https://gist.github.com/2ed6e13bfd5b57cf81d6.git
          $ mv 2ed6e13bfd5b57cf81d6 $CAFFE_ROOT/models/VGGNet/
        
  2. Download the VOC2007 and VOC2012 datasets:
          $ mkdir -p /home/<username>/data
          $ cd /home/<username>/data
          $ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
          $ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
          $ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
          $ tar -xvf VOCtrainval_11-May-2012.tar
          $ tar -xvf VOCtrainval_06-Nov-2007.tar
          $ tar -xvf VOCtest_06-Nov-2007.tar
        
  3. Create the Lightning Memory-Mapped Database (LMDB) files (see the verification sketch after this list):
          $ cd $CAFFE_ROOT
          $ ./data/VOC0712/create_list.sh
          $ ./data/VOC0712/create_data.sh
        
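create_list.sh and create_data.sh print where they write the databases. The paths below are the defaults the SSD scripts use when the VOC data sits under $HOME/data, so treat them as an assumption and adjust if your data root differs:

          # Databases generated by create_data.sh (expected: VOC0712_trainval_lmdb and VOC0712_test_lmdb)
          $ ls $HOME/data/VOCdevkit/VOC0712/lmdb/
          # create_data.sh also drops convenience symlinks under the Caffe tree
          $ ls $CAFFE_ROOT/examples/VOC0712/

These are the directories that PATH_TO_YOUR_TRAIN_LMDB and PATH_TO_YOUR_TEST_LMDB refer to in the training steps below.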

Training the model

  1. Create symbolic links to the training and test LMDBs:
          $ ln -s PATH_TO_YOUR_TRAIN_LMDB trainval_lmdb
          $ ln -s PATH_TO_YOUR_TEST_LMDB test_lmdb

  2. Copy the labelmap_voc.prototxt file from the SSD Caffe tree to the MobileNetSSD directory using the command below:
          $ cp $CAFFE_ROOT/data/VOC0712/labelmap_voc.prototxt $MOBILENETSSD_DIR/labelmap.prototxt

  3. Run gen_model.sh to generate the training and testing prototxt files for the given number of classes. Make sure the number of classes passed to gen_model.sh matches the number of classes defined in labelmap.prototxt (21 for VOC: 20 object classes plus background):
          $ ./gen_model.sh 21

  4. Train the model using train.sh, and keep training until the loss settles between 1.5 and 2.5 (see the loss-monitoring sketch after this list):
          $ ./train.sh

  5. Test the trained model with the test.sh script:
          $ ./test.sh

  6. If necessary, run merge_bn.py to generate a no-bn (batch-norm-merged) Caffe model:
          $ python merge_bn.py --model example/MobileNetSSD_deploy.prototxt --weights snapshot/mobilenet_iter_xxxxxx.caffemodel

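Two of the steps above are easy to get wrong: the class count passed to gen_model.sh and deciding when the loss has settled. The sketch below shows one way to check both; redirecting the solver output to train.log is an assumption (train.sh does not necessarily write a log file on its own), and the grep pattern matches standard Caffe solver output:

          # Count the labels defined in labelmap.prototxt (21 for VOC: 20 classes plus background)
          $ grep -c "label:" labelmap.prototxt
          # Capture the solver output and watch the most recent loss values
          $ ./train.sh 2>&1 | tee train.log
          $ grep "loss = " train.log | tail -n 5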

Once merge_bn.py has run successfully, you'll get two files in the root directory of MobileNetSSD. Together they make up the trained model:

  1. no_bn.prototxt
  2. no_bn.caffemodel

How to Convert the Caffe Model into DLC

Prerequisites

Set up the Qualcomm Neural Processing SDK using the instructions at the link below:

https://developer.qualcomm.com/software/qualcomm-neural-processing-sdk/getting-started

Initialize the Neural Processing SDK environment variables for Caffe (a sketch follows below). To convert the model from Caffe to DLC, you need two files: the prototxt and the caffemodel.
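The SDK ships a bin/envsetup.sh script that wires the SDK and the Caffe build into the environment. A minimal sketch, assuming the SDK is unpacked at $SNPE_ROOT and Caffe was built at $CAFFE_ROOT (flags may differ between SDK releases, so check the setup page linked above):

  $ cd $SNPE_ROOT
  $ source bin/envsetup.sh -c $CAFFE_ROOT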

Convert to DLC using the following command:

  $ snpe-caffe-to-dlc --caffe_txt MobileNetSSD_deploy.prototxt --caffe_bin MobileNetSSD_deploy.caffemodel --dlc caffe_mobilenet_ssd.dlc
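
As a quick sanity check, the SDK's snpe-dlc-info tool can dump the layers and dimensions of the converted container:

  $ snpe-dlc-info -i caffe_mobilenet_ssd.dlc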

Name                     Email              Title/Company
Rakesh Sankar            [email protected]    Sr. System Architect, GlobalEdge Software, Ltd
Shivanand Pujar          [email protected]    Project Manager, GlobalEdge Software, Ltd
Akshay Kulkarni          [email protected]    Technical Lead, GlobalEdge Software, Ltd
Sushant Ahuja            [email protected]    Sr. Software Engineer, GlobalEdge Software, Ltd
Jinka Venkata Saikiran   [email protected]    Sr. Software Engineer, GlobalEdge Software, Ltd
Sahil Munaf Bandar       [email protected]    Software Engineer, GlobalEdge Software, Ltd
Rajagonda Pujari         [email protected]    Module Lead, GlobalEdge Software, Ltd
Patcha Vengamamba        [email protected]    Software Engineer, GlobalEdge Software, Ltd