I'm seeing the error below when trying to convert a .meta file to DLC.
- Is .meta-to-DLC conversion supported? (I have three files, .index, .meta, and .data-0000-of-00001, for each TF network.)
- Do I need to change anything in my input command?
amir@aceslab:~/Documents/TensorFlow-Examples-master_basic/examples/3_NeuralNetworks$ /home/amir/snpe-1.25.1.310/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc --graph my_test_model1.meta --input_dim input "1,28,28,1" --out_node "test_amir1" --dlc test_amir1.dlc
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 980
major: 5 minor: 2 memoryClockRate (GHz) 1.3925
pciBusID 0000:01:00.0
Total memory: 3.95GiB
Free memory: 3.84GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 980, pci bus id: 0000:01:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 980, pci bus id: 0000:01:00.0)
2019-06-07 16:17:03,482 - 106 - ERROR - Conversion failed: ERROR_TF_NODE_NOT_FOUND_IN_GRAPH: Node not found in graph. Node name: test_amir1
Hi,
1. A .meta file can be converted to DLC with the same command used to convert a .pb file to .dlc; pass the .meta file name to the --graph argument.
2. This error is most likely due to an incorrect output node name that is not present in the graph. Please inspect the graph to find the correct output node name.
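As a quick way to check which node names actually exist in the graph, the .meta file can be inspected programmatically. This is a minimal, self-contained sketch using the TF1-style tf.compat.v1 API: it builds a small placeholder graph and saves a checkpoint only so the example runs on its own; with a real model you would call import_meta_graph directly on your existing .meta file (the node name test_amir1 here mirrors the --out_node value from the command above).

```python
import os
import tempfile

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Build a tiny stand-in graph and save a checkpoint so this sketch is
# self-contained; with a real model, skip straight to import_meta_graph
# on your existing .meta file.
x = tf.placeholder(tf.float32, shape=[None, 4], name="input")
w = tf.Variable(tf.ones([4, 2]), name="weights")
out = tf.matmul(x, w, name="test_amir1")

ckpt_prefix = os.path.join(tempfile.mkdtemp(), "my_test_model1")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    tf.train.Saver().save(sess, ckpt_prefix)

# Re-import the graph from the .meta file and list every node name;
# the value passed to --out_node must appear in this list.
tf.reset_default_graph()
tf.train.import_meta_graph(ckpt_prefix + ".meta")
names = [n.name for n in tf.get_default_graph().as_graph_def().node]
print("test_amir1" in names)
```

If the name you pass to --out_node is not in this list, the converter fails with exactly the ERROR_TF_NODE_NOT_FOUND_IN_GRAPH error shown above.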
Hi,
I converted my TF model to .meta; where can I find the output node name?
TF code:
Hi,
Please make the following changes to your code:
1. Before line no. 113 (inside the "with tf.Session() as sess:" block, right before sess.run(init)), please add:
writer = tf.summary.FileWriter("./output", sess.graph)
2. After line no. 138 (saver.save(sess, 'my_test_model'+str(i)), still inside the "with tf.Session() as sess:" block), please add:
writer.close()
This change saves the session graph as a TensorBoard events file inside a newly created "output" folder, which can be visualized with the TensorBoard tool.
After that, re-run the training python file.
3. Once you are done with training, go to the directory containing the training python file you just ran. There will be a new directory named "output"; cd into it in a terminal and run the following command:
tensorboard --logdir="."
4. Once done, open the link shown in the terminal in your web browser (TensorBoard is hosted on a port on localhost, which is displayed in the terminal).
5. Once the page is open, click on "GRAPHS" at the top (in the orange TensorBoard header under the address bar). This will show you the graph of the network you have trained.
6. To get the output node, look for the final node in the graph (its shape will be a small ellipse). Once you click on it, a box appears at the top-right of the page, under the orange bar, with the following information: "Name", "Attributes", "Inputs", "Outputs".
7. From this box, the name under "Outputs" is the node name that you need to give to the snpe-tensorflow-to-dlc conversion tool.
You can also share your graph (the TensorBoard events file) here once you are done with training, and we can help with the same.
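Putting steps 1 and 2 together, the additions to the training script look roughly like this. This is a minimal sketch with a placeholder graph standing in for your network; the exact line numbers, variable names, and training loop in your actual script will differ.

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Placeholder graph standing in for the real network.
x = tf.placeholder(tf.float32, [None, 28 * 28], name="input")
w = tf.Variable(tf.zeros([28 * 28, 10]), name="weights")
logits = tf.matmul(x, w, name="logits")
init = tf.global_variables_initializer()

with tf.Session() as sess:
    # Step 1: create the FileWriter right before sess.run(init);
    # this writes the session graph as a TensorBoard events file.
    writer = tf.summary.FileWriter("./output", sess.graph)
    sess.run(init)
    # ... training loop and saver.save(...) would go here ...
    # Step 2: close the writer after the final save.
    writer.close()
```

After running this, the "./output" folder contains an events.out.tfevents.* file that tensorboard --logdir="." can display.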
Hi!
I am trying to convert a custom TensorFlow model saved as a .pb graph to .dlc using snpe-tensorflow-to-dlc.
I first tried with this command:
I got the input and output node names by running print(model.inputs) and print(model.outputs) in my TensorFlow python script, which gave me the following node names: conv2d_1_input:0 and dense_2/Softmax:0.
The conversion script did not accept the ':0' suffix, so I removed it and ran snpe-tensorflow-to-dlc again.
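For reference, the ':0' suffix is the tensor's output index: TensorFlow tensor names have the form op_name:index, while snpe-tensorflow-to-dlc expects the bare operation name. Stripping the suffix is a one-liner (plain Python, using the names from above):

```python
def tensor_to_op_name(tensor_name: str) -> str:
    """Drop the ':<index>' output suffix from a TensorFlow tensor name."""
    return tensor_name.rsplit(":", 1)[0]

print(tensor_to_op_name("conv2d_1_input:0"))   # conv2d_1_input
print(tensor_to_op_name("dense_2/Softmax:0"))  # dense_2/Softmax
```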
This is the output log I got by running the above command -> log.txt
Now the script says that model conversion completed, but the new .dlc file is only 1.5 KB. So I checked it with snpe-dlc-info and got the following output:
Please advise!!
My custom .pb model for reference.
Tensorboard image of the model
Update:
I now tried with this command:
The script says 'Model conversion completed!', but when I check the output DLC file with snpe-dlc-info:
Bump: Please reply!