Forums - Using GStreamer plugin qtimletflite

Using GStreamer plugin qtimletflite
22imonreal
Join Date: 10 Feb 21
Posts: 80
Posted: Thu, 2021-05-27 06:35

Hello,

I was trying to run a TFLite model on the RB5 platform, and I was told that the easiest and simplest way of achieving this would be the GStreamer-based plugin qtimletflite. However, I have not found a comprehensive example or documentation on how to use qtimletflite.

How is qtimletflite used to run a tflite model?

Any help will be very much appreciated.

Thanks

hs.chaya
Join Date: 20 Mar 20
Posts: 23
Posted: Thu, 2021-06-03 04:20

Hi,

Documentation for qtimletflite is available at the link below:

https://developer.qualcomm.com/qualcomm-robotics-rb5-kit/software-refere...

 

To run a sample tflite model with qtimletflite, follow the steps below:

1. Download the sample COCO SSD MobileNet model on the host PC:

wget -O coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip https://storage.googleapis.com/download.tensorflow.org/models/tflite/coc...

2. Unzip the file.

3. Push detect.tflite and labelmap.txt to the /data/misc/camera folder.

4. Create a configuration file with the GStreamer properties. The file extension should be .config.

5. Run the pipeline:

gst-launch-1.0 v4l2src ! jpegdec ! videoconvert ! qtimletflite config=/data/misc/camera/mle_tflite.config model=/data/misc/camera/detect.tflite labels=/data/misc/camera/labelmap.txt postprocessing=detection ! videoconvert ! jpegenc ! filesink location=image.jpeg
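The download/unzip/push steps above can be sketched as a single host-side script. The full model URL is truncated in this post, so it is taken from an environment variable here, and transferring over adb is an assumption; adjust for your setup:

```shell
#!/bin/sh
# Sketch of the host-side setup steps. MODEL_URL must be set to the full
# coco_ssd_mobilenet archive URL (truncated in the post above), and the
# adb push assumes the RB5 is reachable over adb.
set -e
MODEL_ZIP="coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip"
TARGET_DIR="/data/misc/camera"   # destination folder on the RB5

if [ -n "${MODEL_URL:-}" ]; then
    wget -O "$MODEL_ZIP" "$MODEL_URL"      # step 1: download the archive
    unzip -o "$MODEL_ZIP"                  # step 2: extracts detect.tflite, labelmap.txt
    adb push detect.tflite "$TARGET_DIR/"  # step 3: copy model to the board
    adb push labelmap.txt  "$TARGET_DIR/"
fi
echo "model: $MODEL_ZIP -> $TARGET_DIR"
```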


The pipeline above takes frames from the camera source and delivers them to the GStreamer TFLite plugin along with the .tflite model. The TFLite runtime can run on the DSP, GPU, or CPU. Inference results are gathered back in the GStreamer sink for postprocessing, and the output is stored in the file.


 

22imonreal
Join Date: 10 Feb 21
Posts: 80
Posted: Mon, 2021-06-07 07:49

Hi,

Thanks for your reply.

I tried your suggestion but got the following error:

root@qrb5165-rb5:/data/misc/camera# gst-launch-1.0 v4l2src ! jpegdec ! videoconvert ! qtimletflite config=/data/misc/camera/mle_tflite.config model=/data/misc/camera/detect.tflite labels=/data/misc/camera/labelmap.txt postprocessing=detection ! videoconvert ! jpegenc ! filesink location=image.jpeg
Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Error getting capabilities for device '/dev/video0': It isn't a v4l2 driver. Check if it is a v4l1 driver.
Additional debug info:
v4l2_calls.c(94): gst_v4l2_get_capabilities (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
system error: Inappropriate ioctl for device
Setting pipeline to NULL ...
Freeing pipeline ...

My config file is as follows:

org.codeaurora.mle.tflite
input_format = 0
BlueMean = 0.0
GreenMean = 0.0
RedMean = 0.0
BlueSigma = 0.0
GreenSigma = 0.0
RedSigma = 0.0
UseNorm = false
preprocess_type = 1
confidence_threshold = 0.6
model = "/data/misc/camera/detect.tflite"
labels = "/data/misc/camera/labelmap.txt"
delegate = "dsp"
num_threads = 2

How could I solve the problem above?

Thanks in advance.

 

hs.chaya
Join Date: 20 Mar 20
Posts: 23
Posted: Wed, 2021-06-09 00:15

Try connecting a USB camera. When a USB camera is attached, it is assigned device ID 2, i.e. /dev/video2. Please try specifying video2 and test the application with the command below:

gst-launch-1.0 v4l2src device=/dev/video2 ! jpegdec ! videoconvert ! qtimletflite config=/data/misc/camera/mle_tflite.config model=/data/misc/camera/detect.tflite labels=/data/misc/camera/labelmap.txt postprocessing=detection ! videoconvert ! jpegenc ! filesink location=image.jpeg
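If the USB camera does not come up as video2 on your unit, one way to check which node it got is to probe the existing /dev/video* nodes (a sketch; if v4l-utils is installed on the board, `v4l2-ctl --list-devices` gives a more descriptive listing):

```shell
#!/bin/sh
# Probe common /dev/video* nodes and pick the first one that exists.
# On the RB5 a USB camera typically appears as /dev/video2, but this can vary.
DEV=""
for node in /dev/video2 /dev/video0 /dev/video1 /dev/video3; do
    if [ -e "$node" ]; then
        DEV="$node"
        break
    fi
done
echo "camera device: ${DEV:-none found}"
```

The resulting node can then be passed to the pipeline as `v4l2src device=$DEV`.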

22imonreal
Join Date: 10 Feb 21
Posts: 80
Posted: Wed, 2021-06-09 03:39

Hello and thank you for your answer.

After trying your suggestion, I am getting the following:

sh-4.4# gst-launch-1.0 v4l2src device=/dev/video2 ! jpegdec ! videoconvert ! qtimletflite config=/data/misc/camera/mle_tflite.config model=/data/misc/camera/detect.tflite labels=/data/misc/camera/labelmap.txt postprocessing=detection ! videoconvert ! jpegenc ! filesink location=image.jpeg
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for NNAPI.
gbm_create_device(156): Info: backend name is: msm_drm
INFO: Replacing 63 node(s) with delegate (TfLiteNnapiDelegate) node, yielding 2 partitions.

The terminal does not display anything else, and the Weston window remains blank.

How could I get the results?

Thanks

hs.chaya
Join Date: 20 Mar 20
Posts: 23
Posted: Wed, 2021-06-09 22:18

Hi,

gst-launch-1.0 v4l2src device=/dev/video2 ! jpegdec ! videoconvert ! qtimletflite config=/data/misc/camera/mle_tflite.config model=/data/misc/camera/detect.tflite labels=/data/misc/camera/labelmap.txt postprocessing=detection ! videoconvert ! jpegenc ! filesink location=image.jpeg

In the pipeline above, the model output is stored in an image file. If you want the output displayed on the Weston screen instead, you need to use the waylandsink GStreamer plugin in the pipeline.
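For example, the tail of the pipeline could be changed like this (an untested sketch; it assumes Weston is running on the board and the camera is on /dev/video2):

```shell
#!/bin/sh
# Same pipeline as above, but the jpegenc/filesink tail is replaced with
# waylandsink so the annotated frames render on the Weston display.
PIPELINE="v4l2src device=/dev/video2 ! jpegdec ! videoconvert ! \
qtimletflite config=/data/misc/camera/mle_tflite.config \
model=/data/misc/camera/detect.tflite \
labels=/data/misc/camera/labelmap.txt \
postprocessing=detection ! videoconvert ! waylandsink"

# Only launch when running on the board with the camera present.
if command -v gst-launch-1.0 >/dev/null 2>&1 && [ -e /dev/video2 ]; then
    gst-launch-1.0 $PIPELINE
fi
echo "$PIPELINE"
```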

