Hello,
Version 1.6.0 supports Faster-RCNN; however, I am having difficulties using the model and I was wondering if you could help me.
I used the model from:
https://github.com/rbgirshick/py-faster-rcnn/tree/master/models
and managed to convert the network from Caffe to DLC, but I cannot do either of the following: quantize the model, or load the model as part of an Android app.
When trying to quantize, I need to feed the network input via the input-list file, but since Faster-RCNN expects more than one input, I cannot find any documentation explaining the right format for this file.
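For reference, the only input-list format I can find documented is for single-input models, with one raw file path per line. Going by the snpe-net-run documentation I am guessing that multiple inputs go on a single line as name:=path pairs ("data" and "im_info" below are just the input names from the py-faster-rcnn prototxt), but I do not know whether snpe-dlc-quantize accepts this:

data:=inputs/img_0.raw im_info:=inputs/im_info_0.raw
data:=inputs/img_1.raw im_info:=inputs/im_info_1.raw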
When trying to load the model in the Android app, I get an exception.
I feel like I am missing something, but I cannot find the right reference in the documentation.
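In case it helps to see what I am doing, here is a minimal sketch of my loading code using the SNPE Java API (the file name "faster_rcnn.dlc" and the GPU/CPU runtime order are just my choices; I wrap build() so the exception gets logged):

import java.io.File;

import android.app.Application;
import android.util.Log;

import com.qualcomm.qti.snpe.NeuralNetwork;
import com.qualcomm.qti.snpe.SNPE;

public final class ModelLoader {
    public static NeuralNetwork load(Application app) {
        // The converted DLC, copied into the app's files directory.
        final File model = new File(app.getFilesDir(), "faster_rcnn.dlc");
        try {
            return new SNPE.NeuralNetworkBuilder(app)
                    .setModel(model)
                    // Preferred runtime first, CPU as fallback.
                    .setRuntimeOrder(NeuralNetwork.Runtime.GPU,
                                     NeuralNetwork.Runtime.CPU)
                    .build();
        } catch (Exception e) {
            // This is where the exception I mentioned shows up.
            Log.e("ModelLoader", "Failed to load DLC", e);
            return null;
        }
    }
}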
I am equally interested in this...have you made any progress?
Thanks,
Martin
In SNPE 1.6.0, snpe-dlc-quantize cannot quantize models with multiple inputs or outputs, so this model cannot be quantized offline with that tool. However, the SNPE runtime will quantize it on the fly when the model is loaded onto the DSP.
Faster-RCNN support in SNPE has some limitations and deviates slightly from the public py-faster-rcnn project. There is some documentation in the user's guide, in the limitations section (covering the Proposal and ROIPool layers).
We will try to provide further documentation on this.
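As an illustration only (a sketch reusing the same Java builder calls and imports as the loading example above, plus java.io.IOException), asking for the DSP runtime first is what triggers that on-the-fly quantization:

NeuralNetwork loadForDsp(Application app) throws IOException {
    // Prefer the DSP so the unquantized DLC is quantized at load time;
    // fall back to CPU if the DSP runtime is not available on the device.
    return new SNPE.NeuralNetworkBuilder(app)
            .setModel(new File(app.getFilesDir(), "faster_rcnn.dlc"))
            .setRuntimeOrder(NeuralNetwork.Runtime.DSP,
                             NeuralNetwork.Runtime.CPU)
            .build();
}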
How can I set the values of PRE_NMS_TOP_N and POST_NMS_TOP_N for the Proposal layer?
Hi jesliger,
I tried to set multiple output layers when loading the container, but the SNPE object I get back is a NULL pointer, and I cannot find any error log explaining why. Is this a bug in the SNPE SDK?
Thanks!
Dear engineers at Qualcomm,
Is there any plan to implement the Proposal layer so it can run on the GPU?
hi jesliger,
According to the limitations on the Proposal and ROIPool layers, can "max_num_rois" (an attribute of the Proposal layer) only be set to "1", with other values having no effect? My SNPE version is 1.17.0.
Looking forward to your reply, thanks!