Hi,
I am using the inception_v3.dlc generated by the script $SNPE_ROOT/models/inception_v3/scripts/setup_inceptionv3.py. It gives correct predictions with the snpe-net-run binary on the images that setup_inceptionv3.py produces, listed below.
snpe-1.50.0.2622/models/inception_v3/data/cropped$ ls
chairs.jpg chairs.raw notice_sign.jpg notice_sign.raw plastic_cup.jpg plastic_cup.raw raw_list.txt trash_bin.jpg trash_bin.raw
But when I show the same images to the camera and run inference through the qtimlesnpe plugin, I get wrong predictions (e.g. for cabbage I get hammer). The pipeline I use is below.
gst-launch-1.0 -e qtiqmmfsrc ! video/x-raw, format=NV12, width=1280, height=720, framerate=30/1, camera=0 ! qtimlesnpe model=/data/local/tmp/inception_v3/inception_v3.dlc labels=/data/imagenet_slim_labels.txt postprocessing=classification ! qtioverlay ! qtivtransform rotate=1 ! waylandsink async=true sync=false fullscreen=true enable-last-sample=false
Also, I need to test my own custom images with the snpe-net-run binary. What resolution and format should the image be in when it is fed to inception_v3.dlc via snpe-net-run?
Hello prabukumar,
You can refer to $SNPE_ROOT/models/inception_v3/scripts/create_inceptionv3_raws.py for the preprocessing required by the Inception v3 model used in the tutorial.
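For reference, here is a rough sketch of the kind of preprocessing that script performs: resize to the network's 299x299 input, normalize, and dump the tensor as a float32 .raw file. The exact mean/divisor values (128.0 here) and interpolation mode are assumptions based on the SNPE Inception v3 tutorial; please check create_inceptionv3_raws.py in your SDK for the authoritative values.

```python
# Hedged sketch of Inception v3 raw-input preparation for snpe-net-run.
# ASSUMPTIONS: 299x299 RGB input, (pixel - 128) / 128 normalization,
# NHWC float32 layout. Verify against create_inceptionv3_raws.py.
import numpy as np
from PIL import Image

def image_to_raw(img, raw_path, size=299, mean=128.0, divisor=128.0):
    """Resize an RGB image to size x size and write a float32 .raw blob."""
    img = img.convert("RGB").resize((size, size), Image.BILINEAR)
    arr = np.asarray(img).astype(np.float32)
    arr = (arr - mean) / divisor   # normalize roughly to [-1, 1]
    arr.tofile(raw_path)           # flat NHWC float32 buffer
    return arr

# Example with a synthetic image; replace with Image.open("my_image.jpg"):
demo = Image.new("RGB", (1280, 720), color=(120, 60, 200))
arr = image_to_raw(demo, "/tmp/demo.raw")
print(arr.shape, arr.dtype)  # (299, 299, 3) float32
```

You would then list the resulting .raw file path in a raw_list.txt and pass it to snpe-net-run with --input_list, the same way the tutorial's cropped images are used.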
Hi arunraj,
Thanks for your reply.
Could you please point me to a working model that runs correctly with qtimlesnpe? Is there a pretrained .dlc file known to work with the plugin? Also, could you please share the corresponding pipeline?