After successfully converting MobileNetSSD and running inference with it from native C++ code, I was unable to get any outputs other than "detection_classes:0", even though two more nodes should be present: "--out_node detection_classes --out_node detection_boxes --out_node detection_scores". The outputTensorMap I get back from execute() contains only "detection_classes:0".
The documentation (https://developer.qualcomm.com/docs/snpe/convert_mobilenetssd.html) says:
"
The output layers for the model are:
- Postprocessor/BatchMultiClassNonMaxSuppression
- add
The output buffer names are:
- (classes) detection_classes:0 (+1 index offset)
- (classes) Postprocessor/BatchMultiClassNonMaxSuppression_classes (0 index offset)
- (boxes) Postprocessor/BatchMultiClassNonMaxSuppression_boxes
- (scores) Postprocessor/BatchMultiClassNonMaxSuppression_scores
"
I suspected I could find the missing data in the buffers, but when I call "getInputOutputBufferAttributes(name)" with the different names, only "detection_classes:0" returns anything.
How can I get access to the remaining outputs?
You need to explicitly add the "Postprocessor/BatchMultiClassNonMaxSuppression" layer as an output before loading the network into SNPE.
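A minimal sketch of what that looks like with the SNPE C++ API, assuming the usual zdl::SNPE::SNPEBuilder flow (the function name buildWithDetectionOutputs is mine; adapt it to however you load the container):

```cpp
#include <memory>

#include "DlContainer/IDlContainer.hpp"
#include "DlSystem/StringList.hpp"
#include "SNPE/SNPE.hpp"
#include "SNPE/SNPEBuilder.hpp"

// Sketch: register the NMS layer (and "add") as output layers before
// building the network, so their buffers appear in the output TensorMap
// returned by execute().
std::unique_ptr<zdl::SNPE::SNPE> buildWithDetectionOutputs(
        zdl::DlContainer::IDlContainer* container) {
    zdl::DlSystem::StringList outputLayers;
    outputLayers.append("Postprocessor/BatchMultiClassNonMaxSuppression");
    outputLayers.append("add");  // keeps "detection_classes:0" available too

    return zdl::SNPE::SNPEBuilder(container)
            .setOutputLayers(outputLayers)
            .build();
}
```

With the layers registered this way, getInputOutputBufferAttributes() should also start returning attributes for the _boxes, _scores, and _classes buffer names.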
After this, execute() will produce three additional buffers. One of them partially duplicates an existing one: both "detection_classes:0" and "Postprocessor/BatchMultiClassNonMaxSuppression_classes" hold the class ids. The difference is that "detection_classes:0" is the buffer after the add operation, so its class ids start from 1, while the ids in "Postprocessor/BatchMultiClassNonMaxSuppression_classes" start from 0.
You can take a look at an object detection sample here: https://github.com/elvin-nnov/dldt_tools/blob/feature/amalyshe/snpe_vali...
One more note: there seems to have been a bug in SNPE 1.36 where some class ids came out wrong when the model was executed on the DSP. The problem is gone in 1.38 and 1.39 (I have not verified 1.37).