Getting segmentation fault with my converted dlc model
shanesmiskol
Join Date: 19 May 19
Posts: 10
Posted: Thu, 2019-05-23 20:18

I'm trying to get the input and output tensor names of my converted DLC model on an Android phone (just to make sure the model is working correctly), but I keep getting 'Segmentation fault' errors no matter what. Other models, already converted by the party that made the platform I'm working on, work fine; it's only my converted model that fails. I converted my Keras TensorFlow model to a frozen graph (.pb), then converted that to DLC with snpe-tensorflow-to-dlc successfully. But when I load it with loadContainerFromFile() and run this code:

std::unique_ptr<zdl::DlContainer::IDlContainer> container = loadContainerFromFile("/data/openpilot/selfdrive/df/new_dlc.dlc");
std::unique_ptr<zdl::SNPE::SNPE> snpe = setBuilderOptions(container, runtime);
const auto &strListi_opt = snpe->getInputTensorNames();  // <-- segfaults here
const auto &strListo_opt = snpe->getOutputTensorNames();

const auto &strListi = *strListi_opt;  // note: dereferences the Optional without checking it
//assert(strListi.size() == 1);
const char *input_tensor_name = strListi.at(0);

const auto &strListo = *strListo_opt;
assert(strListo.size() == 1);
const char *output_tensor_name = strListo.at(0);

 

I get a segmentation fault at the getInputTensorNames() call. Again, this all works with other models made by other people, but not mine. Is there anything I could have done wrong converting it to a DLC file? To go from Keras .h5 to .pb, I used the freeze_session function found online along with:

frozen_graph = freeze_session(K.get_session(),
                              output_names=[out.op.name for out in model.outputs])
tf.train.write_graph(frozen_graph, "pb_models", model_name + ".pb", as_text=False)

to convert to a pb file. Then I used this to get the input and output node names to use in the conversion command for SNPE:

model = load_model("h5_models/" + model_name + ".h5")
print([out.op.name for out in model.inputs])
print([out.op.name for out in model.outputs])

My Keras TF model takes an array of 5 floats and returns a single float between -1 and 1. That's it; it doesn't need to process any images or anything, just that one 5-number array. The first layer is a Dense layer with 5 nodes, and the last layer is a Dense layer with 1 node. For the input shape in the SNPE conversion command, I used "1,5".
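For reference, the shape contract described above can be sketched as a hand-rolled forward pass. This is illustrative only: the weights are made up and the hidden activation is assumed to be ReLU; the real network is a trained Keras model. It just shows why the SNPE input_dim is "1,5" (a batch of 1 with 5 features) and why the output is one tanh-bounded value.

```python
import math

# Made-up weights, purely for illustration (not the trained model's values).
W1 = [[0.1 * (i + j) for j in range(5)] for i in range(5)]  # 5x5 dense weights
b1 = [0.05] * 5
W2 = [0.2, -0.1, 0.3, 0.15, -0.25]                          # 5x1 dense weights
b2 = 0.0

def forward(x):
    """5 floats in -> one float in [-1, 1], matching the described model."""
    assert len(x) == 5  # corresponds to input_dim "1,5"
    # First Dense layer (5 nodes), ReLU assumed for the hidden activation.
    hidden = [max(0.0, sum(W1[i][j] * x[j] for j in range(5)) + b1[i])
              for i in range(5)]
    # Final Dense layer (1 node) with tanh, which bounds the output to [-1, 1].
    return math.tanh(sum(W2[i] * hidden[i] for i in range(5)) + b2)

out = forward([0.1, 0.2, 0.3, 0.4, 0.5])  # a single float in [-1, 1]
```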

 

Here's the full command I'm using:

 

python snpe-tensorflow-to-dlc --graph /home/shane/model.pb --input_dim dense_1_input "1,5" --out_node "dense_5/BiasAdd" --dlc /home/shane/test.dlc

The SNPE version I used to convert to a dlc file was 1.25.1, and the version I'm running the model on is 1.19.2.0.

If you notice anything out of the ordinary that might be causing SNPE to not work, please let me know. I've been working on this for a week and a half and for the life of me can't figure anything out. Thank you very much.

shanesmiskol
Join Date: 19 May 19
Posts: 10
Posted: Sun, 2019-05-26 01:53

An update on this issue: I ran the code to generate the DLC for the Inception sample, and it still errors when I initialize the SNPE builder, saying: error_message=Undefined error. SNPE model format version detected: 3.0.0; error_component=SNPE

shanesmiskol
Join Date: 19 May 19
Posts: 10
Posted: Sun, 2019-05-26 05:54

Another update! I believe this is because I was unknowingly trying to use my SNPE 1.25-converted model with SNPE 1.19; the model format has changed recently, making my model incompatible with the old SDK.
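This kind of mismatch can be caught up front by comparing the converter's version string against the runtime's. A minimal sketch in plain Python (SNPE provides no such helper; the major/minor comparison is just a heuristic based on the 1.19-vs-1.25 format change seen in this thread):

```python
def parse_version(v):
    """Split a dotted version string like '1.19.2.0' into a tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def model_may_be_incompatible(converter_version, runtime_version):
    """True if the model was converted with a newer SNPE than the runtime.

    Heuristic only: compares major/minor, since the DLC format changed
    between 1.19 and 1.25 (per this thread); a newer-format model can
    crash or fail to load on an older runtime.
    """
    return parse_version(converter_version)[:2] > parse_version(runtime_version)[:2]

# The versions from this thread: converted with 1.25.1, run on 1.19.2.0.
print(model_may_be_incompatible("1.25.1", "1.19.2.0"))  # → True
```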

justinianpopa
Join Date: 23 Oct 18
Posts: 2
Posted: Fri, 2019-05-31 08:51

@shanesmiskol

Did you eventually find a solution?

I am seeing the same issue with a MobileNetV2 retrained on TF 1.11 (as tested from the SNPE docs). The vanilla quantized model from GitHub works fine, but a retrained network fails at the same point you described.

shanesmiskol
Join Date: 19 May 19
Posts: 10
Posted: Fri, 2019-05-31 23:12

@justinianpopa For me, it was that I was trying to use the model I converted with SNPE 1.25 on SNPE 1.19, which was preinstalled on the system I was working on. Once I set my LD_LIBRARY_PATH to the new shared library files that I copied over, it worked fine and I was able to get predictions.

Quick question for you, if you don't mind: there are proprietary models on my system that were converted with 1.19, and I'm having trouble using two different versions of SNPE side by side. Do you have a zip of an SNPE release older than 1.25 you could share, so I can convert my TensorFlow model using the old model format? It would be amazing if so. Thanks!

justinianpopa
Join Date: 23 Oct 18
Posts: 2
Posted: Sun, 2019-06-02 23:02

Thanks for the reply. I'll try re-copying the latest libs and setting LD_LIBRARY_PATH to see if it works.

As for older versions of SNPE, I've found that Qualcomm has an archive of older versions down to 1.19.2 at the link below:

https://developer.qualcomm.com/software/qualcomm-neural-processing-sdk/tools

shanesmiskol
Join Date: 19 May 19
Posts: 10
Posted: Mon, 2019-06-03 01:04
Omg, thank you so much! I even emailed Qualcomm and asked for an older version, and they said they didn't allow that. No idea how I missed the archive. Thanks again, and good luck.
