How could one ensure that a TFLite model runs on the DSP or AIP?
Here is what I have tried. The idea was to use the delegate presented in:
https://www.tensorflow.org/lite/performance/hexagon_delegate#add_the_sha...
but I keep getting the following error:
OSError: libhexagon_nn_skel_v66.so: wrong ELF class: ELFCLASS32
This suggests that libhexagon_nn_skel_v66.so is not built for the RB5's aarch64 architecture: the error indicates a 32-bit ELF binary.
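(As a quick sanity check, the ELF class of a shared library can be read directly: byte 4 of the ELF header, the EI_CLASS field, is 1 for 32-bit and 2 for 64-bit. A minimal stdlib-only sketch:)

```python
def elf_class(path):
    """Return 32 or 64 depending on the ELF class of the file, or None if not ELF."""
    with open(path, "rb") as f:
        header = f.read(5)
    if header[:4] != b"\x7fELF":
        return None  # not an ELF file at all
    return {1: 32, 2: 64}.get(header[4])

# e.g. a library rejected with "wrong ELF class: ELFCLASS32" would return 32 here:
# print(elf_class("/usr/lib/rfsa/adsp/libhexagon_nn_skel_v66.so"))
```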
So, to reiterate: I would like to know how to use the TFLite runtime Python package to run a TFLite model on the DSP of the RB5.
Thanks in advance.
I believe offloading a TFLite model to the DSP via the Hexagon delegate can only be done from a native C++ application, not from Python.
To run a TFLite model on the RB5 DSP using the Hexagon delegate, you can use the existing GStreamer-based plugin "qtimletflite" (simpler and easier), or you can write your own native C++ application to execute the network.
If you opt for the second, your application will need two different libraries, as described below:
1. libhexagon_interface.so:
This library is loaded on the ARM side and acts as a stub that issues RPC calls to the DSP to execute a network.
It has to be built from the TensorFlow repository (using Bazel) if you are running on Android; the RB5 already ships this library, and you can find it in /usr/lib/.
2. libhexagon_nn_skel_v66.so:
The actual library that gets executed on the DSP (it contains Hexagon DSP code and can only run on the cDSP; it is not meant to run on arm/arm64). It can be downloaded from here: https://storage.cloud.google.com/download.tensorflow.org/tflite/hexagon_... (latest).
Again, the RB5 ships this skel library as well (you can find it at /usr/lib/rfsa/adsp on the target).
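(Before building anything yourself, it may be worth confirming the shipped copies are actually present at the paths mentioned above; a trivial check, with the skel file name taken from the paragraph above:)

```python
from pathlib import Path

# Paths where the RB5 is said to ship the stub and skel libraries (see above).
candidates = [
    "/usr/lib/libhexagon_interface.so",
    "/usr/lib/rfsa/adsp/libhexagon_nn_skel.so",
]

for lib in candidates:
    status = "present" if Path(lib).exists() else "missing"
    print(f"{lib}: {status}")
```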
From the application point of view, you can follow a sample application such as label_image (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/exa...) to see how to invoke the Hexagon delegate.
Please note that the Hexagon delegate graduated from experimental and was merged into mainline as of TF v2.3, so the APIs might differ slightly.
Unfortunately, using delegates from Python is not supported.
Thank you for your answer.
I do not necessarily need to run it in Python or using the delegate.
In fact, I just need to be able to run a custom neural network with different inputs: not only images, and not only expressed as raw files. For example, I need to be able to feed in a 640x320 tensor that is 6 channels deep. I have been trying the DLC approach, but I did not know how to feed such a 6-channel input into the snpe-net-run application, which is why I started looking into Python and TFLite.
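(For what it's worth, snpe-net-run consumes raw float32 files listed in an input-list text file, so a 640x320x6 tensor can be fed by serializing it in that layout. A minimal stdlib-only sketch; the file names here are hypothetical, and the tensor is zero-filled as a placeholder:)

```python
from array import array

H, W, C = 640, 320, 6  # the 6-channel input shape mentioned above

# Build a flat float32 buffer in HWC order (zeros as placeholder data).
tensor = array("f", [0.0] * (H * W * C))

with open("input_0.raw", "wb") as f:
    tensor.tofile(f)  # raw little-endian float32, H*W*C*4 bytes

# snpe-net-run reads its inputs from a text file listing one raw file per line.
with open("input_list.txt", "w") as f:
    f.write("input_0.raw\n")
```

Something like `snpe-net-run --container model.dlc --input_list input_list.txt` should then pick it up, assuming the model's input layer expects this shape; the layout (e.g. NHWC vs. NCHW) must match whatever the DLC was converted with.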
Your GStreamer suggestion would also be an option, but I haven't found much documentation. How could I use the qtimletflite plugin? I can't see an example or tutorial in:
https://developer.qualcomm.com/qualcomm-robotics-rb5-kit/software-refere...
Thank you in advance.
Hi,
An example for qtimletflite is given at the link below:
https://developer.qualcomm.com/qualcomm-robotics-rb5-kit/software-refere...
To run a sample TFLite model with qtimletflite, follow the steps below:
1. Download the sample COCO SSD MobileNet TFLite model on the host PC, e.g.:
wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/coc... -O coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip
2. Unzip the file.
3. Push detect.tflite and labelmap.txt to the /data/misc/camera folder on the target.
4. Create a configuration file for the GStreamer plugin properties; the file extension should be .config.
5. To change the delegate, open the config file and set the delegate value to cpu, gpu, or dsp.
6. Run the pipeline:
gst-launch-1.0 v4l2src ! jpegdec ! videoconvert ! qtimletflite config=/data/misc/camera/mle_tflite.config model=/data/misc/camera/detect.tflite labels=/data/misc/camera/labelmap.txt postprocessing=detection ! videoconvert ! jpegenc ! filesink location=image.jpeg
The above pipeline takes frames from the camera source and delivers them to the GStreamer TFLite plugin along with the .tflite model. The TFLite runtime can run on the DSP, GPU, or CPU. Inference results are gathered back in the GStreamer sink for postprocessing, and that metadata is stored in the output file.
Hi,
Thanks for your reply.
My config file is as follows:
How could I solve the problem above?
Thanks in advance.
Try connecting a USB camera. When a USB camera is attached, it is assigned device ID 2, i.e. /dev/video2. Please try video2 and test the application by running the command below:
gst-launch-1.0 v4l2src device=/dev/video2 ! jpegdec ! videoconvert ! qtimletflite config=/data/misc/camera/mle_tflite.config model=/data/misc/camera/detect.tflite labels=/data/misc/camera/labelmap.txt postprocessing=detection ! videoconvert ! jpegenc ! filesink location=image.jpeg
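(To confirm which device node the camera actually received, since the numbering can vary, V4L2 device nodes can simply be listed; a trivial check:)

```python
import glob

# V4L2 device nodes appear as /dev/video*; a USB camera typically adds new ones.
devices = sorted(glob.glob("/dev/video*"))
print(devices if devices else "no V4L2 device nodes found")
```

If the v4l-utils package is available on the target, `v4l2-ctl --list-devices` gives the same information with device names attached.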
I still got an error on RB5:
I have built libhexagon_interface.so (Ubuntu arm64) locally on the RB5 with the latest TensorFlow source code and Bazel.
Could you please let me know the value returned by calling hexagon_nn_version() from the libhexagon_nn_skel.so shipped with this RB5 system version:
cat /proc/version
Hi @xbwee,
The Hexagon delegate requires two components to work correctly:
1. libhexagon_interface.so
2. libhexagon_nn_skel_*.so (v65/v66, depending on the cDSP version).
Since the cDSP on RB5 is a co-processor, an RPC mechanism is used to invoke calls on it. For this, there is a stub/skel mechanism (the stub runs on the ARM cores while the skel runs on the DSP). Here, libhexagon_interface.so acts as the stub, which invokes routines via FastRPC in the skel library running on the DSP. For this stub and skel to work properly, the Hexagon delegate requires matching versions on the target. By default, the RB5 ships with matching libhexagon_interface.so and libhexagon_nn_skel.so files, which you can use directly along with the necessary TFLite libraries and binaries.
If you are compiling the TFLite libraries on your own, then you have to ensure you are using the correct version of the skel, matching the version of your libhexagon_interface.so library. For example, if you are compiling the TFLite v2.8 branch from the mainline GitHub repo, then according to the doc (https://www.tensorflow.org/lite/android/delegates/hexagon) you need to download Hexagon skel library version v1.20.0.1 (link on the Hexagon delegate page) and install the downloaded skel (v66) on the target. This ensures you are using matching versions of the stub and skel for the Hexagon delegate.
Hopefully the above fixes the issue. If not, let me know and we can look into root-causing it further. Also note that logcat logs are very helpful for looking at the issue at hand.
Cheers!
Did anyone try to experiment with this repo:
https://github.com/DoanNguyenTrong/object-detection-tflite-cpp
I followed it, but can't run it on the RB5. If you have tried it successfully, please help me correct a few points!
Thank you!
Hi, unfortunately the versions shipped with the RB5 do not match either.
Nor do the interface library built from the TensorFlow sources and the corresponding skel library downloaded from the TensorFlow website.
How can we ensure that we are using the correct versions?
There seems to be no way to check the versions of these libraries.
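(One way to check, building on the hexagon_nn_version() idea mentioned earlier in the thread: the hexagon_nn stub API exposes `int hexagon_nn_version(int *ver)`, so it can be queried through ctypes on the target. A sketch; it assumes the stub library exports this symbol, and returns None when the library cannot be loaded on the current machine:)

```python
import ctypes

def query_hexagon_nn_version(lib_path="libhexagon_interface.so"):
    """Return the hexagon_nn version as an int, or None if unavailable."""
    try:
        lib = ctypes.CDLL(lib_path)       # dlopen the stub library
        fn = lib.hexagon_nn_version       # resolve the symbol
    except (OSError, AttributeError):
        return None  # library missing, wrong arch, or symbol not exported
    fn.argtypes = [ctypes.POINTER(ctypes.c_int)]
    fn.restype = ctypes.c_int
    ver = ctypes.c_int(0)
    if fn(ctypes.byref(ver)) != 0:
        return None  # the call itself reported failure
    return ver.value

print(query_hexagon_nn_version())
```

Running this on the RB5 against the shipped stub, and comparing the value against the skel you installed, would at least make the mismatch visible.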