I have converted Yolo tflite to .dlc but after running the below command there is no display showing....
gst-launch-1.0 qtiqmmfsrc name=camsrc camera=0 ! video/x-raw\(memory:GBM\),format=NV12,width=1280,height=720,framerate=30/1, camera=0 ! qtimlesnpe config=/data/misc/camera/mle_snpe.config model=/data/misc/camera/yolov4snpe.dlc labels=/data/misc/camera/coco.txt postprocessing=detection ! queue ! qtioverlay ! waylandsink fullscreen=true async=true sync=false
Output:
gbm_create_device(156): Info: backend name is: msm_drm
Setting pipeline to PAUSED ...
gbm_create_device(156): Info: backend name is: msm_drm
** (gst-launch-1.0:31776): WARNING **: 02:02:46.975: Wayland compositor is missing the ability to scale, video display may not work properly. - No viewporter present
** (gst-launch-1.0:31776): WARNING **: 02:02:46.975: Could not bind to zwp_linux_dmabuf_v1
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
There is no display after this command, just a grey screen, and the Qualcomm processor is getting hot. Please provide a document on how to run a custom-trained model on Qualcomm hardware using GStreamer. Thanks.
Hi,
To run a sample TFLite model with the GStreamer plugin qtimletflite on the Qualcomm RB5 board, follow the steps below:
Download the sample COCO SSD MobileNet TFLite model on the host PC with the following command: wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/coc... -O coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip
Unzip the file
Push detect.tflite and labelmap.txt to the /data/misc/camera folder on the board
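The unzip and push steps above can be sketched as follows; this assumes adb is connected to the RB5 (the archive and file names come from the downloaded zip, and the destination path is the one used in the pipeline):

```shell
# On the host PC: extract the model archive into a working directory
unzip coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip -d coco_ssd

# Push the model and the label file to the board (requires a connected device)
adb push coco_ssd/detect.tflite /data/misc/camera/
adb push coco_ssd/labelmap.txt /data/misc/camera/
```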
Create the configuration file with the GStreamer plugin properties; the file extension should be .config
To change the delegate, open the config file and set the delegate value to cpu, gpu, or dsp.
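For reference, an MLE .config file is a plain-text list of key = value pairs. The exact keys vary between SDK releases, so treat the fragment below as an illustrative sketch (the key names and values here are assumptions, not taken from the original post) and compare it against the sample config shipped with your build:

```
BlueMean = 128.0
GreenMean = 128.0
RedMean = 128.0
BlueSigma = 128.0
GreenSigma = 128.0
RedSigma = 128.0
UseNorm = true
confidence_threshold = 0.5
num_threads = 2
delegate = dsp
```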
gst-launch-1.0 v4l2src ! jpegdec ! videoconvert ! qtimletflite config=/data/misc/camera/mle_tflite.config model=/data/misc/camera/detect.tflite labels=/data/misc/camera/labelmap.txt postprocessing=detection ! videoconvert ! jpegenc ! filesink location=image.jpeg
The above pipeline takes frames from the camera source and delivers them to the GStreamer TFLite plugin along with the .tflite model. The TFLite runtime can run on the DSP, GPU, or CPU. Inference results are gathered back in the element for postprocessing, and the resulting frame is encoded and written to image.jpeg by filesink.