Forums - Linux sample app to use snpe with a camera input

Linux sample app to use snpe with a camera input
vikaash.kb
Join Date: 16 Aug 21
Posts: 1
Posted: Sun, 2021-08-22 05:41

Hi,

I would like to know if there is a Linux sample app that uses SNPE with a camera input. I have found a similar app for Android at the link below:

 https://developer.qualcomm.com/comment/16880

I am using the QCS610 chip, which supports only Linux, so I couldn't use the example provided at that link.

I could also find sample apps for Linux under snpe-1.50.0.2622/examples/NativeCpp/, but those work on single frames, not on live input from a camera.

Could someone point me to a Linux application that takes input from a live camera and feeds it to SNPE?



Regards,

Vikaash.K.B

ap.arunraj
Join Date: 20 Apr 20
Posts: 15
Posted: Tue, 2021-08-31 07:32

Hello Vikaash,
Let me walk through the process of running inference on camera input with the SNPE runtime on a QCS610-based board, using code snippets. (I assume you have already gone through the tutorial on building a C++ application in the SNPE documentation.)

  1. Get the Camera input:
    This can be done through the GStreamer plugins available on the board and the OpenCV library.
     
    cv::VideoCapture cap("qtiqmmfsrc ldc=TRUE ! video/x-raw, format=NV12, width=1280, height=720, framerate=30/1 ! videoconvert ! appsink");
    cv::Mat frame;
    bool bSuccess = cap.read(frame);
    cv::Mat input_image = preprocess(frame); // Preprocess input image (resize/convert to the model's input format)
    size_t img_size = input_image.channels() * input_image.cols * input_image.rows;
     
  2. Load the DLC Model:
    Load the DLC model, set the runtime, and build the network. Here we will use the CPU as the sole runtime;
    check the Runtime_t documentation for other supported runtimes.

    zdl::DlSystem::RuntimeList runtime_list;
    zdl::DlSystem::Runtime_t runtime_cpu = zdl::DlSystem::Runtime_t::CPU;

    std::unique_ptr<zdl::DlContainer::IDlContainer> container =
            zdl::DlContainer::IDlContainer::open(zdl::DlSystem::String(<path-to-dlc>));
    zdl::SNPE::SNPEBuilder snpeBuilder(container.get());
    runtime_list.add(runtime_cpu);
    // If there are multiple output layers, pass their names instead of the empty list.
    std::unique_ptr<zdl::SNPE::SNPE> model_handler = snpeBuilder.setOutputLayers({})
            .setRuntimeProcessorOrder(runtime_list)
            .build();
  3. Create iTensor:
    Create the built-in SNPE buffer type, ITensor, for loading input (you can also use user-backed buffers; instructions can be found in the SNPE documentation).

    std::unique_ptr<zdl::DlSystem::ITensor> input_tensor =
            zdl::SNPE::SNPEFactory::getTensorFactory().createTensor(model_handler->getInputDimensions());
    zdl::DlSystem::ITensor *tensor_ptr = input_tensor.get();
     
  4. Load input into iTensor:
    Next, load the input from the OpenCV Mat frame into the ITensor. In this case, let's assume the model expects input normalized between -1.0 and 1.0.
     
    float *tensor_ptr_fl = reinterpret_cast<float *>(&(*input_tensor->begin()));
    for (size_t i = 0; i < img_size; i++) {
            tensor_ptr_fl[i] = (static_cast<float>(input_image.data[i]) - 128.f) / 128.f;
    }
     
  5. Get Inference:
    To get predictions for an input, we can use the execute method of the SNPE interface class. This method takes an input ITensor and an output TensorMap to store the results.

    zdl::DlSystem::TensorMap output_tensor_map;
    bool exec_status = model_handler->execute(tensor_ptr, output_tensor_map);
     
  6. Postprocess Output:
    Once inference succeeds, output_tensor_map will hold the output values. Assuming multiple output tensors, a postprocessing snippet looks as follows.

    zdl::DlSystem::StringList out_tensors = output_tensor_map.getTensorNames(); // Get all output tensor names
    // C++ STL map to store each output tensor name and its values as key-value pairs
    std::map<std::string, std::vector<float>> out_itensor_map;

    for (size_t i = 0; i < out_tensors.size(); i++) {
            zdl::DlSystem::ITensor *out_itensor = output_tensor_map.getTensor(out_tensors.at(i));
            std::vector<float> out_vec{ reinterpret_cast<float *>(&(*out_itensor->begin())),
                                        reinterpret_cast<float *>(&(*out_itensor->end())) };
            out_itensor_map.insert(std::make_pair(std::string(out_tensors.at(i)), out_vec));
    }

Note: Add the necessary header files while building the complete application.
For more information, please refer to the SNPE C++ API documentation and the tutorial on building a C++ sample application.
Feel free to contact me for any other query. 

