Hello,
I was trying to run a tflite model on the RB5 platform. I have been told that the easiest and simplest way of achieving this would be to use the GStreamer-based plugin qtimletflite, but I have not found a comprehensive example or documentation on how to use qtimletflite.
How is qtimletflite used to run a tflite model?
Any help will be very much appreciated.
Thanks
Hi,
Details about qtimletflite are given in the link below:
https://developer.qualcomm.com/qualcomm-robotics-rb5-kit/software-refere...
To run a sample tflite model with qtimletflite, follow the steps below:
Download the sample coco ssd mobilenet tflite model on the host PC using the command below:
wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/coc... -O coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip
Unzip the file
Push detect.tflite and labelmap.txt to the /data/misc/camera folder on the device.
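The download-and-push steps above can be sketched as a single host-side script. This is a sketch, assuming adb is used to reach the RB5 (as is typical for this kit); the archive URL is truncated in the original post, so substitute the full TensorFlow storage URL for the placeholder:

```shell
# Download the quantized COCO SSD MobileNet v1 archive.
# <model-archive-url> is a placeholder for the full URL truncated above.
wget <model-archive-url> -O coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip

# The archive contains detect.tflite and labelmap.txt.
unzip coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip

# Push the model and labels to the folder the pipeline expects.
adb push detect.tflite /data/misc/camera/
adb push labelmap.txt /data/misc/camera/
```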
Create a configuration file for the GStreamer plugin's properties; the file extension should be .config.
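For reference, the MLE config file is a small key/value text file. The sketch below is illustrative only — every key name and value here is an assumption about typical MLE preprocessing settings, not a confirmed listing; check the qtimletflite section of the RB5 software reference linked above for the authoritative fields:

```
# /data/misc/camera/mle_tflite.config -- illustrative sketch, key names are assumptions
input_format = 3
BlueMean = 128.0
GreenMean = 128.0
RedMean = 128.0
BlueSigma = 128.0
GreenSigma = 128.0
RedSigma = 128.0
UseNorm = false
confidence_threshold = 0.5
```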
gst-launch-1.0 v4l2src ! jpegdec ! videoconvert ! qtimletflite config=/data/misc/camera/mle_tflite.config model=/data/misc/camera/detect.tflite labels=/data/misc/camera/labelmap.txt postprocessing=detection ! videoconvert ! jpegenc ! filesink location=image.jpeg
The above pipeline takes frames from the camera source and delivers them to the GStreamer TF Lite plugin along with the tflite model. The TF Lite runtime can run on the DSP, GPU, or CPU. Inference results are gathered back in the GStreamer TF sink for postprocessing, and that metadata is stored in the file.
Hi,
Thanks for your reply.
My config file is as follows:
How could I solve the problem above?
Thanks in advance.
Try connecting a USB camera. When a USB camera is attached, it is given device ID 2, i.e. /dev/video2. Please specify video2 and test the application by running the command below:
gst-launch-1.0 v4l2src device=/dev/video2 ! jpegdec ! videoconvert ! qtimletflite config=/data/misc/camera/mle_tflite.config model=/data/misc/camera/detect.tflite labels=/data/misc/camera/labelmap.txt postprocessing=detection ! videoconvert ! jpegenc ! filesink location=image.jpeg
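If it is unclear which /dev/video node the USB camera actually received, one way to check on the device — assuming the v4l-utils package is installed, which is not confirmed in the thread — is:

```shell
# Show which video nodes exist
ls /dev/video*

# Map each node to the driver/camera behind it (requires v4l-utils)
v4l2-ctl --list-devices
```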
Hello, and thank you for your answer.
After trying your suggestion, I am getting the following:
The terminal does not display anything else, and the Weston terminal remains blank.
How could I get the results?
Thanks
Hi,
gst-launch-1.0 v4l2src device=/dev/video2 ! jpegdec ! videoconvert ! qtimletflite config=/data/misc/camera/mle_tflite.config model=/data/misc/camera/detect.tflite labels=/data/misc/camera/labelmap.txt postprocessing=detection ! videoconvert ! jpegenc ! filesink location=image.jpeg
In the above pipeline the output of the model is stored in an image file. If you want the output displayed in the Weston terminal, you need to use the waylandsink GStreamer plugin in the pipeline.
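A sketch of that pipeline with the jpegenc/filesink tail swapped for waylandsink (assuming Weston is running on the device; waylandsink options such as fullscreen are not confirmed here and are left at defaults):

```shell
gst-launch-1.0 v4l2src device=/dev/video2 ! jpegdec ! videoconvert ! \
    qtimletflite config=/data/misc/camera/mle_tflite.config \
        model=/data/misc/camera/detect.tflite \
        labels=/data/misc/camera/labelmap.txt \
        postprocessing=detection ! \
    videoconvert ! waylandsink
```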