
Sample App for Image Super Resolution




Introduction

  • This project is a sample Android application for AI-based Image Super Resolution using the Qualcomm® Neural Processing SDK.
  • The model used in this sample is Collapsible Linear Blocks for Super-Efficient Super Resolution (SESR) (https://arxiv.org/abs/2103.09404)
  • The SESR model is also part of the AIMET Model Zoo (https://github.com/quic/aimet-model-zoo/#pytorch-model-zoo)
  • This sample enhances input image resolution by 2x along width and height: if the input resolution is w x h, the output resolution will be 2w x 2h
  • DLC models accept only a fixed input size.
  • If users intend to modify the input size and/or scale factor of the above model, they need to regenerate the model DLC using the SNPE framework (steps given below)
  • If users intend to use a different model in this demo framework, the image pre/post-processing will need to change.
  • The current pre/post-processing is specific to the model used.
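
  The shape of the pre/post-processing follows from the 1x1x128x128 input used in the export step below: a single-channel (luma) tensor in, a single-channel tensor out. A minimal sketch of that kind of processing, assuming the model consumes a normalized Y channel in [0, 1] (the function names here are illustrative, not the app's actual helpers):

  ```python
  import numpy as np

  def pre_process(rgb):
      """Convert an HxWx3 uint8 RGB image to a 1x1xHxW float32 luma tensor.
      Assumes the model expects a normalized Y (luma) channel in [0, 1]."""
      rgb = rgb.astype(np.float32)
      # ITU-R BT.601 luma weights (an assumption; match the model's training recipe)
      y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
      return (y / 255.0)[np.newaxis, np.newaxis, :, :].astype(np.float32)

  def post_process(sr):
      """Map the model's 1x1xHxW float output back to an HxW uint8 luma image."""
      return np.clip(sr[0, 0] * 255.0, 0, 255).astype(np.uint8)

  lr = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
  inp = pre_process(lr)   # shape (1, 1, 128, 128), float32
  out = post_process(np.random.rand(1, 1, 256, 256).astype(np.float32))
  ```

  A real pipeline would also carry the chroma channels alongside (e.g. bicubically upscaled) and merge them with the super-resolved luma.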

Pre-Requisites

Model Architecture and DLC Conversion

Model Overview

Source : https://arxiv.org/pdf/2103.09404.pdf

  • The proposed SESR at training time contains two 5 × 5 and m 3 × 3 linear blocks.
  • There are two long residuals and several short residuals over the 3 × 3 linear blocks.
  • A k × k linear block first uses a k × k convolution to project x input channels to p intermediate channels, which are then projected back to y output channels via a 1 × 1 convolution.
  • Short residuals can further be collapsed into the convolutions.
  • The final inference-time SESR contains just two long residuals and m+2 narrow convolutions, resulting in a VGG-like CNN.
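
The collapse described above (a k × k convolution into p intermediate channels followed by a 1 × 1 projection, with no nonlinearity between) can be folded analytically into a single narrow k × k convolution. A minimal PyTorch sketch of that idea (channel counts and names here are illustrative, not the SESR implementation):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x_ch, p_ch, y_ch, k = 2, 8, 4, 3

# Training-time linear block: k x k expansion followed by 1 x 1 projection
conv_kxk = nn.Conv2d(x_ch, p_ch, k, padding=k // 2)
conv_1x1 = nn.Conv2d(p_ch, y_ch, 1)

# Collapse: compose the two linear maps into one k x k convolution
w2 = conv_1x1.weight.squeeze(-1).squeeze(-1)          # shape (y_ch, p_ch)
collapsed = nn.Conv2d(x_ch, y_ch, k, padding=k // 2)
with torch.no_grad():
    # Combined weight: contract the 1x1 weights with the k x k weights
    collapsed.weight.copy_(torch.einsum('oj,jikl->oikl', w2, conv_kxk.weight))
    # Combined bias: the 1x1 projection applied to b1, plus b2
    collapsed.bias.copy_(conv_1x1.bias + w2 @ conv_kxk.bias)

x = torch.randn(1, x_ch, 16, 16)
out_train = conv_1x1(conv_kxk(x))   # two convolutions (training-time block)
out_infer = collapsed(x)            # one narrow convolution (inference time)
```

Because both maps are linear (and the zero padding is shared), the two outputs match to floating-point precision; this is why the inference-time network reduces to m+2 plain convolutions.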

Steps to convert model to DLC

  • Ensure the pre-requisites mentioned at the beginning of this page are completed
  • Note: As a general practice, please convert PyTorch models to ONNX first, and then convert ONNX to DLC.
  • To convert PyTorch to ONNX, clone the AIMET Model Zoo repo. The steps below assume the repo is checked out as referenced above.
  • Make the below changes in this Python file: inference.py.
  1. Comment below imports:
    #from aimet_torch.quantsim import QuantizationSimModel
    #from aimet_torch.qc_quantize_op import QuantScheme
  2. Replace this block of code:

    with torch.no_grad():
       sr_img = model(img_lr.unsqueeze(0).to(device)).squeeze(0)

    images_sr.append(post_process(sr_img))

    with this block:

    with torch.no_grad():
       sr_img = model(img_lr.unsqueeze(0).to(device)).squeeze(0)
       input_shape = [1, 1, 128, 128]
       input_data = torch.randn(input_shape)
       torch.onnx.export(model, input_data, "super_resolution.onnx", export_params=True, opset_version=11, do_constant_folding=True, input_names=['lr'], output_names=['sr'])
    images_sr.append(post_process(sr_img))
  • Make the below changes in this Jupyter notebook: superres_quanteval.ipynb (Path: aimet-model-zoo\zoo_torch\examples\superres\notebooks\superres_quanteval.ipynb)
    1. Comment below imports:
      #from aimet_torch.quantsim import QuantizationSimModel
      #from aimet_torch.qc_quantize_op import QuantScheme
    2. Set below variables as mentioned:
      DATA_DIR = './data'
      DATASET_NAME = 'Set14'
      use_cuda = False
      model_index = 6
    3. Now create the directory aimet-model-zoo\zoo_torch\examples\superres\notebooks\data\Set14 and put any one image (.jpg) in this folder. This is required to run the notebook.
    4. In section, "Create model instance and load weights" comment all initialization except "model_original_fp32"
    5. In section, "Model Inference" comment all initialization except "IMAGES_SR_original_fp32"
  • Please ignore the last block of code in superres_quanteval.ipynb
  • Now, run superres_quanteval.ipynb. The changes made in inference.py will be used by superres_quanteval.ipynb
  • While running the scripts, the code will automatically download the pre-trained weights "release_sesr_xl_2x.tar.gz" from the AIMET model zoo - https://github.com/quic/aimet-model-zoo/releases/tag/sesr-checkpoint-pytorch
  • With the above steps, the ONNX model should be generated.
  • Convert the ONNX model to DLC with the below command. Note that the input dimensions were fixed in the export code above

    snpe-onnx-to-dlc --input_network super_resolution.onnx
             --output_path super_resolution.dlc
  • For better performance on Snapdragon® SM8550, model quantization is recommended. For quantization, keep the DLC, the "data" directory, and list.txt in one directory. "data" contains raw inputs for model quantization; "list.txt" lists the names of all files in the data directory

    snpe-dlc-quantize --input_dlc super_resolution.dlc --input_list list.txt --use_enhanced_quantizer --use_adjusted_weights_quantizer --axis_quant --output_dlc super_resolution_sesr_opt.dlc --enable_htp --htp_socs sm8550
  • Package the DLC in the application's /src/<package_name>/assets folder
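
  The quantization step above needs a "data" directory of raw float32 tensors and a list.txt naming them, one path per line. A minimal sketch of generating that layout, assuming the 1x1x128x128 input shape used in the export above (this uses random placeholder data for illustration; real calibration should use representative pre-processed images):

  ```python
  import os
  import numpy as np

  def make_calibration_set(out_dir="data", count=8, shape=(1, 1, 128, 128)):
      """Write float32 .raw tensors plus a list.txt naming them, one path per line."""
      os.makedirs(out_dir, exist_ok=True)
      names = []
      for i in range(count):
          # Placeholder data; substitute pre-processed real images for meaningful ranges
          sample = np.random.rand(*shape).astype(np.float32)
          path = os.path.join(out_dir, f"input_{i}.raw")
          sample.tofile(path)
          names.append(path)
      with open("list.txt", "w") as f:
          f.write("\n".join(names) + "\n")
      return names

  files = make_calibration_set()
  ```

  Each .raw file is exactly 1 x 1 x 128 x 128 x 4 bytes; the quantizer reads every path listed in list.txt.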


  • How to change the model?

    • As mentioned above, the current sample is packaged with 128x128 input resolution and 2x upscale, i.e., the output is 256x256
    • If users want to try another resolution/upscale factor, they need to download the respective pre-trained weights from AIMET and generate a DLC using the above steps.
    • If users want to try a different model, they need to check whether the pre/post-processing done for SESR applies to the new model, and modify it accordingly.

    Source Overview

    Source Organization

    • demo: Contains demo video, GIF
    • doc: Contains documentation/images for current project
    • snpe-release: Contains the SDK release binary. It is recommended to **regenerate the DLC and replace the SDK release binary** if there is a significant change in the SDK release over time
    • superresolution: Contains source files in standard Android app format.

    Code Implementation

    Model Initialization
    Before you can use the model, it has to be initialized. The initialization process needs these parameters:

    • Context: Activity or Application context
    • ModelName: Name of the model that you want to use
    • String run_time: A specific runtime (The alternatives for run_time are CPU, GPU, and DSP).
       package com.qcom.imagesuperres;
       ...

       public class SNPEActivity extends AppCompatActivity {
          public static final String SNPE_MODEL_NAME = "SuperResolution_sesr";
          ...

          public Result<SuperResolutionResult> process(Bitmap bmps, String run_time){

             ....

             superResolution = new SuperResolution();

             // The model is initialized in the call below; the model config is set as per SNPE.
             boolean superresInited = superResolution.initializingModel(this, SNPE_MODEL_NAME, run_time);
          }
       }

    Running Model

    After the model is initialized, you can pass a list of bitmaps to be processed.

    As mentioned earlier, DLC models work with a fixed input size. Therefore, the input image must be resized to the size accepted by the DLC before it is passed in.

    This source code uses an operator to aid users in image pre-processing. The sizeOperation parameter of the process method controls how the image is pre-processed, as shown below:

    Note: If the user is already passing the exact input size (128x128), sizeOperation should be set to 1.

    • 0: Without changes
    • 1: if (IMAGE_WIDTH == INPUT_WIDTH and IMAGE_HEIGHT == INPUT_HEIGHT) (user passes the input size required by the model)
    • 2: if ((IMAGE_WIDTH / INPUT_WIDTH) == (IMAGE_HEIGHT / INPUT_HEIGHT)) (input needs to be scaled)
    int sizeOperation = 1;

       // Function to process the enhancement operation

       result = enhancement.process(new Bitmap[] {bmps}, sizeOperation);
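
    The sizeOperation choice above amounts to a simple decision over the image vs. model input dimensions. An illustrative sketch of that logic in Python (the actual app implements this in Java; the function name here is hypothetical):

    ```python
    def choose_size_operation(img_w, img_h, in_w, in_h):
        """Pick the pre-processing mode described above:
        1 = image already matches the model input size (no resize needed),
        2 = aspect ratios match, so one uniform scale reaches the input size,
        0 = pass through without changes."""
        if img_w == in_w and img_h == in_h:
            return 1
        if img_w * in_h == img_h * in_w:  # equal width/height ratios, avoids float division
            return 2
        return 0
    ```

    For example, a 256x256 image against the 128x128 model input selects mode 2 (uniform downscale), while a 200x100 image selects mode 0.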

    Results

    • The processed results are returned in a Result object of generic type SuperResolutionResult.
    • The SuperResolutionResult contains an array of bitmaps each representing an Enhancement process (In the sample there is only one bitmap).
    • Along with this list, there is also the inferenceTime of the process for performance information.

    Release
    Since the model initialization happens on the native side, it will not be garbage collected; therefore, you need to release the model after you are done with it.

       superResolution.freeNetwork();

    Build and run with Android Studio

    Build APK file with Android Studio

    • Clone the git repository.
    • Generate DLC using the steps mentioned above (super_resolution_sesr_opt.dlc)
    • Copy the "snpe-release.aar" file from the Android folder in the Qualcomm Neural Processing SDK release (from Qualcomm Developer Network) into this folder: VisionSolution2-ImageSuperResolution\snpe-release\
    • Copy the DLC generated in step 2 to: VisionSolution2-ImageSuperResolution\superresolution\src\main\assets\ (super_resolution_sesr_opt.dlc)
    • Import the folder VisionSolution2-ImageSuperResolution as a project in Android Studio
    • Compile the project.
    • The output APK file should be generated: superresolution-debug.apk
    • Prepare the Qualcomm Innovators Development Kit to install the application (do not run the APK on an emulator)

    If the Unsigned or Signed DSP runtime is not getting detected, please check the logcat logs for a FastRPC error. The DSP runtime may not be detected due to the SELinux security policy. Try the following commands to set a permissive SELinux policy.

    adb disable-verity
    adb reboot
    adb root
    adb remount
    adb shell setenforce 0

    Install and test the application: superresolution-debug.apk

    adb install -r -t superresolution-debug.apk

    Launch the application.

    The following is the basic "Image Super Resolution" Android app:

    • Select one of the given images from the drop-down list
    • Select the run-time on which to run the model (CPU, GPU, or DSP)
    • Observe the result of the model on screen
    • Also note the performance indicator for the particular run-time, in milliseconds

    Sample results from the application are shown below

    Results

    • The sample app was verified on the Snapdragon SM8550 platform with CPU, GPU, and DSP run-times, using multiple images
    • The demo video and performance details are shown below:

    References

    1. Collapsible Linear Blocks for Super-Efficient Super Resolution - https://arxiv.org/abs/2103.09404
    2. SESR at AIMET model zoo - https://github.com/quic/aimet-model-zoo/#pytorch-model-zoo

    Snapdragon, Qualcomm Neural Processing SDK, and Qualcomm Innovators Development Kit are products of Qualcomm Technologies, Inc. and/or its subsidiaries. AIMET Model Zoo is a product of Qualcomm Innovation Center, Inc.