Output of the LSTM layer in SNPE not matching with Caffe

praveen.n
Join Date: 3 Jan 18
Posts: 3
Posted: Tue, 2018-03-13 05:31

I am trying to convert and run inference on a model that has convolutional layers and LSTM layers. The model was trained with Caffe. I was able to convert the model to a .dlc with some modifications to the conversion script (snpe_caffe_to_dlc.py); a permutation layer had to be added before the LSTM layer.

Now I am trying to run the .dlc through a sample SNPE inference project (a modification of the NativeCpp example provided in the SDK). I have confirmed that the input to the network is laid out as SNPE expects, and the output of the CNN part matches the Caffe output. However, the output of the SNPE LSTM layer does NOT match the Caffe LSTM output. No error is thrown by my code or by the SNPE libraries; the output simply does not match.

Have I missed something w.r.t. the way the SNPE LSTM layer expects its input tensors? Is there an example of LSTM usage in SNPE?
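For reference, a minimal way to compare the two outputs looks roughly like the sketch below (Python; the prototxt/caffemodel/raw file names and the 'lstm1' blob name are placeholders, and it assumes the SNPE result has been dumped as a flat float32 .raw file):

    # Rough comparison sketch; file names, blob name, and output path are placeholders.
    import numpy as np
    import caffe

    # Reference forward pass in Caffe.
    net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)
    data = np.fromfile('input.raw', dtype=np.float32)
    net.blobs['data'].data[...] = data.reshape(net.blobs['data'].data.shape)
    net.forward()
    caffe_lstm = net.blobs['lstm1'].data.flatten()  # 'lstm1' = top blob of the LSTM layer

    # SNPE writes each output tensor as a flat float32 .raw file.
    snpe_lstm = np.fromfile('output/Result_0/lstm1.raw', dtype=np.float32)

    print('max abs diff:', np.abs(caffe_lstm - snpe_lstm[:caffe_lstm.size]).max())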

ramamoorthy.san...
Join Date: 29 Jan 18
Posts: 2
Posted: Wed, 2018-07-25 23:58

I have also tried converting a TensorFlow-based LSTM model. The conversion to DLC was successful, but loading the network on Android throws the error "error_code=204; error_message=Couldn't find name. None of the specified output layers exist!".

I am not sure whether the error is caused by the input or the output layer name, but I do not see any error when converting to the DLC format with snpe-tensorflow-to-dlc.
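To check which names actually exist in the graph, a small sketch like the one below can list every node in the frozen graph (it assumes TensorFlow 1.x and a frozen 'model.pb'; the file name is a placeholder):

    # List the node names in the frozen graph to pick valid input/output layer names.
    import tensorflow as tf

    graph_def = tf.GraphDef()
    with tf.gfile.GFile('model.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())

    for node in graph_def.node:
        print(node.op, node.name)

The output layer name given to the converter, and the one requested on the Android side, should match one of these node names exactly.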

I need to solve this issue. Is there an example available?

 

349074299
Join Date: 13 Sep 17
Posts: 8
Posted: Tue, 2018-08-14 02:42

Hello praveen.n,

I converted a TensorFlow model with an LSTM, but my .dlc file is only 1 KB and I don't know why.

If I set the output node name to a node before the LSTM, the conversion succeeds. Can you help me?

The LSTM ops I use are:

    # Single BasicLSTMCell with 1422 units, unrolled over layer_h16 with static_rnn.
    rnn_cell_h18 = tf.contrib.rnn.BasicLSTMCell(num_units=1422, forget_bias=0.0)
    layer_h18, states_18 = tf.nn.static_rnn(rnn_cell_h18, layer_h16, dtype=tf.float32)
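
In case it helps, below is a sketch of the same LSTM with an explicitly named output node that can then be passed to snpe-tensorflow-to-dlc as the output node (TensorFlow 1.x; the input shape and the 'lstm_in'/'lstm_out' names are placeholders, and the tf.unstack step is only needed if layer_h16 is a single [batch, time, features] tensor rather than a list of per-timestep tensors):

    import tensorflow as tf

    # Placeholder input; the [batch, time, features] shape is only an example.
    layer_h16 = tf.placeholder(tf.float32, [None, 10, 256], name='lstm_in')

    rnn_cell_h18 = tf.contrib.rnn.BasicLSTMCell(num_units=1422, forget_bias=0.0)

    # static_rnn expects a Python list of per-timestep tensors.
    inputs = tf.unstack(layer_h16, axis=1)
    outputs, states_18 = tf.nn.static_rnn(rnn_cell_h18, inputs, dtype=tf.float32)

    # Fixed name so this node can be given to the converter as the output node.
    layer_h18 = tf.identity(outputs[-1], name='lstm_out')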
