Forums - seeing error converting to DLC

amirajaee
Join Date: 31 Jul 18
Posts: 14
Posted: Fri, 2019-06-07 16:19
I'm seeing the error below when trying to convert a .meta file to DLC.
 
  1. Is .meta-to-DLC conversion allowed? (I have three files, .index, .meta, and .data-0000-of-00001, for each TF network)
  2. Do I need to change anything in my conversion command?
 
amir@aceslab:~/Documents/TensorFlow-Examples-master_basic/examples/3_NeuralNetworks$ /home/amir/snpe-1.25.1.310/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc --graph my_test_model1.meta --input_dim input "1,28,28,1" --out_node "test_amir1" --dlc test_amir1.dlc
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 980
major: 5 minor: 2 memoryClockRate (GHz) 1.3925
pciBusID 0000:01:00.0
Total memory: 3.95GiB
Free memory: 3.84GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 980, pci bus id: 0000:01:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 980, pci bus id: 0000:01:00.0)
2019-06-07 16:17:03,482 - 106 - ERROR - Conversion failed: ERROR_TF_NODE_NOT_FOUND_IN_GRAPH: Node not found in graph. Node name: test_amir1
 
gesqdn-forum
Join Date: 4 Nov 18
Posts: 184
Posted: Tue, 2019-06-11 00:26

Hi,

1. Yes, a .meta file can be converted to DLC with the same command used to convert a .pb file to .dlc; just pass the .meta file name to the --graph argument (see the example below).

2. This error is most likely caused by an output node name that does not exist in the graph. Please inspect the graph and confirm the correct output node name.
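For reference, a hedged example of such a command, adapted from the invocation already posted in this thread (the --out_node value is a placeholder and must be replaced with an operation name that actually exists in your graph; the node name given to --input_dim must exist as well):

snpe-tensorflow-to-dlc --graph my_test_model1.meta --input_dim input "1,28,28,1" --out_node <actual_output_node_name> --dlc test_amir1.dlc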

amirajaee
Join Date: 31 Jul 18
Posts: 14
Posted: Wed, 2019-06-12 17:13

Hi,

I converted my TF model to a .meta file; where can I find the output node name?

TF code:

 

""" Convolutional Neural Network.
"""
 
from __future__ import division, print_function, absolute_import
 
import tensorflow as tf
 
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
 
counter=0
for i in range(1, 3):
# Training Parameters
learning_rate = 0.001
num_steps = 200
batch_size = 128
display_step = 10
 
print("******************************&&&&&&&&&&&&&&&&&&&&counter: ", counter)
counter=counter+1
# Network Parameters
num_input = 784 # MNIST data input (img shape: 28*28)
num_classes = 10 # MNIST total classes (0-9 digits)
dropout = 0.75 # Dropout, probability to keep units
 
# tf Graph input
X = tf.placeholder(tf.float32, [None, num_input])
Y = tf.placeholder(tf.float32, [None, num_classes])
keep_prob = tf.placeholder(tf.float32) # dropout (keep probability)
 
 
# Create some wrappers for simplicity
def conv2d(x, W, b, strides=1):
# Conv2D wrapper, with bias and relu activation
x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
x = tf.nn.bias_add(x, b)
return tf.nn.relu(x)
 
 
def maxpool2d(x, k=2):
# MaxPool2D wrapper
return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
  padding='SAME')
 
 
# Create model
def conv_net(x, weights, biases, dropout):
# MNIST data input is a 1-D vector of 784 features (28*28 pixels)
# Reshape to match picture format [Height x Width x Channel]
# Tensor input become 4-D: [Batch Size, Height, Width, Channel]
x = tf.reshape(x, shape=[-1, 28, 28, 1])
 
# Convolution Layer
conv1 = conv2d(x, weights['wc1'], biases['bc1'])
# Max Pooling (down-sampling)
conv1 = maxpool2d(conv1, k=2)
 
# Convolution Layer
conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
# Max Pooling (down-sampling)
conv2 = maxpool2d(conv2, k=2)
 
# Fully connected layer
# Reshape conv2 output to fit fully connected layer input
fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
fc1 = tf.nn.relu(fc1)
# Apply Dropout
fc1 = tf.nn.dropout(fc1, dropout)
 
# Output, class prediction
out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
return out
 
# Store layers weight & bias
weights = {
# 5x5 conv, 1 input, 32 outputs
'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
# 5x5 conv, 32 inputs, 64 outputs
'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
# fully connected, 7*7*64 inputs, 1024 outputs
'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),
# 1024 inputs, 10 outputs (class prediction)
'out': tf.Variable(tf.random_normal([1024, num_classes]))
}
 
biases = {
'bc1': tf.Variable(tf.random_normal([32])),
'bc2': tf.Variable(tf.random_normal([64])),
'bd1': tf.Variable(tf.random_normal([1024])),
'out': tf.Variable(tf.random_normal([num_classes]))
}
 
# Construct model
logits = conv_net(X, weights, biases, keep_prob)
prediction = tf.nn.softmax(logits)
 
# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
 
 
# Evaluate model
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
 
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
 
#saver = tf.train.Saver()
# Start training
with tf.Session() as sess:
 
# Run the initializer
sess.run(init)
 
for step in range(0, num_steps+1):
batch_x, batch_y = mnist.train.next_batch(batch_size)
# Run optimization op (backprop)
sess.run(train_op, feed_dict={X: batch_x, Y: batch_y, keep_prob: 0.8})
if step % display_step == 0 or step == 1:
# Calculate batch loss and accuracy
loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
Y: batch_y,
keep_prob: 1.0})
print("Step " + str(step) + ", Minibatch Loss= " + \
  "{:.4f}".format(loss) + ", Training Accuracy= " + \
  "{:.3f}".format(acc))
 
print("Optimization Finished!")
 
# Calculate accuracy for 256 MNIST test images
print("Testing Accuracy:", \
sess.run(accuracy, feed_dict={X: mnist.test.images[:256],
  Y: mnist.test.labels[:256],
  keep_prob: 1.0}))
 
#saver.save(sess, 'my_test_model'+str(i))
print("*************************file name",'my_test_model'+str(i))
gesqdn-forum
Join Date: 4 Nov 18
Posts: 184
Posted: Tue, 2019-07-23 05:58

Hi,
Please make the following changes to your code:

1. Before line no. 113 of your script (inside the "with tf.Session() as sess:" block, right before sess.run(init)), add:
writer = tf.summary.FileWriter("./output", sess.graph)

2. After line no. 138 of your script (the saver.save(sess, 'my_test_model'+str(i)) call), still inside the "with tf.Session() as sess:" block, add:
writer.close()

Effectively, this writes the session's graph as a TensorBoard events file inside the newly created output folder, so it can be visualized with the TensorBoard tool; see the sketch below.
After that, run the training Python file.
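To make the placement concrete, here is a minimal hedged sketch of the two additions in context, re-using the session block and variable names from the script posted above (everything else stays unchanged):

with tf.Session() as sess:
    # Write the graph to ./output so TensorBoard can display it
    writer = tf.summary.FileWriter("./output", sess.graph)

    # Run the initializer
    sess.run(init)

    # ... training loop and saver.save(sess, 'my_test_model' + str(i)) as in your script ...

    # Flush and close the events file before the session ends
    writer.close()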

3. Once training is done, go to the directory containing the training Python file you just ran. There will be a new directory named "output"; cd into it in a terminal and run the following command:
tensorboard --logdir="."

4. Open the link shown in the terminal in your web browser (TensorBoard is served on a localhost port, which is printed in the terminal).

5. In the browser, click "GRAPHS" at the top (in the orange TensorBoard header under the address bar). This shows the graph of the network you trained.

6. To find the output node, look for the final node in the graph (drawn as a small ellipse). When you click on it, a panel appears at the top right of the page, under the orange bar, listing "Name", "Attributes", "Inputs", and "Outputs".

7. The name listed under "Outputs" in that panel is the node name to pass to the snpe-tensorflow-to-dlc conversion command.

You can also share your graph (TensorBoard events file) here once you are done with the training, and we can help from there.
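As a hedged alternative to the TensorBoard steps above (just a sketch, assuming TF 1.x and the my_test_model1.meta checkpoint mentioned in the first post), the operation names can also be listed programmatically; the output node is usually one of the last entries:

import tensorflow as tf

# Restore the meta graph and print every operation name in it
tf.reset_default_graph()
saver = tf.train.import_meta_graph('my_test_model1.meta')
graph = tf.get_default_graph()
for op in graph.get_operations():
    print(op.name)  # the output node is typically near the end of this list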

pratheekb96
Join Date: 2 Jul 19
Posts: 3
Posted: Fri, 2019-08-02 05:38

Hi!

I am trying to convert a custom TensorFlow model, saved as a .pb graph, to .dlc using snpe-tensorflow-to-dlc.

I first tried this command:

snpe-tensorflow-to-dlc --graph tf_fingers.pb --input_dim "conv2d_1_input" 1,256,256,3 --out_node 'dense_2/Softmax' --dlc './tf_fingers.dlc' --allow_unconsumed_nodes --verbose

I got the input and output node names by running print(model.inputs) and print(model.outputs) in my TensorFlow Python script, which gave me the following tensor names: conv2d_1_input:0 and dense_2/Softmax:0.

The conversion script did not accept the ':0' suffix, so I removed it and ran snpe-tensorflow-to-dlc.
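(For reference, a small hedged sketch of that step, assuming a tf.keras model object named model: the operation name is just the tensor name with its ':0' index stripped.)

input_node = model.inputs[0].name.split(':')[0]    # e.g. 'conv2d_1_input'
output_node = model.outputs[0].name.split(':')[0]  # e.g. 'dense_2/Softmax'
print(input_node, output_node)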

This is the output log I got by running the above command -> log.txt

Now the script says that model conversion completed, but the new .dlc file is only 1.5 KB, so I checked it with snpe-dlc-info and got the following output:

DLC info for: /home/pratheek/Setup/snpe-1.27.1.382/tf_fingers.dlc
Model Version: N/A
Model Copyright:N/A
-----------------------------------------------------------------------------------------------------------------------
| Id | Name             | Type | Inputs           | Outputs          | Out Dims    | Parameters                       |
-----------------------------------------------------------------------------------------------------------------------
| 0  | conv2d_1_input:0 | data | conv2d_1_input:0 | conv2d_1_input:0 | 1x256x256x3 | input_preprocessing: passthrough |
|    |                  |      |                  |                  |             | input_type: default              |
Total parameters: 0 (0 MB assuming single precision float)
Total MACs per inference: 0 (0%)
Converter command: snpe-tensorflow-to-dlc verbose=True out_node=['dense_2/Softmax'] allow_unconsumed_nodes=True model_version=None input_dim=[['conv2d_1_input', '1,256,256,3']] copyright_file=None in_type=None
DLC created with converter version: 1.27.1.382
-----------------------------------------------------------------------------------------------------------------------
 
After finding this forum thread, I tried opening my .pb graph in TensorBoard to find the input and output node names (following the guide in the comment above).
 
According to the 'Outputs' attribute in TensorBoard, the input node name is import/conv2d_1/convolution and the output node name is import/dense_2.
 
Now, conversion with the above node names fails with:
 
 snpe-tensorflow-to-dlc --graph tf_fingers.pb --input_dim 'import/conv2d_1/convolution' 1,256,256,3 --out_node 'import/dense_2' --dlc './tf_fingers.dlc' --allow_unconsumed_nodes --verbose
2019-08-02 15:28:37.021130: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-08-02 15:28:37,042 - 106 - ERROR - Conversion failed: ERROR_TF_NODE_NOT_FOUND_IN_GRAPH: Node not found in graph. Node name: import/conv2d_1/convolution
 
I tried the conversion again by removing the 'import' keyword and got this error:
 
snpe-tensorflow-to-dlc --graph tf_fingers.pb --input_dim 'conv2d_1/convolution' 1,256,256,3 --out_node 'dense_2/Softmax' --dlc './tf_fingers.dlc' --allow_unconsumed_nodes --verbose
 
2019-08-02 15:31:14,487 - 404 - WARNING - ERROR_TF_FALLBACK_TO_ONDEMAND_EVALUATION: Unable to resolve operation output shapes in single pass. Using on-demand evaluation!
2019-08-02 15:31:14,487 - 306 - INFO - INFO_ALL_BUILDING_NETWORK: 
==============================================================
Building Network
==============================================================
2019-08-02 15:31:14,487 - 109 - ERROR - Encountered Error: Cannot feed value of shape (1, 256, 256, 3) for Tensor u'conv2d_1/convolution:0', which has shape '(?, 254, 254, 32)'
Traceback (most recent call last):
  File "/home/pratheek/Setup/snpe-1.27.1.382/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc", line 103, in main
    converter.convert(args.dlc, args.copyright_file, args.model_version, converter_command)
  File "/home/pratheek/Setup/snpe-1.27.1.382/lib/python/snpe/converters/tensorflow/converter.py", line 308, in convert
    self._convert_input_layers()
  File "/home/pratheek/Setup/snpe-1.27.1.382/lib/python/snpe/converters/tensorflow/converter.py", line 321, in _convert_input_layers
    shape = self._graph_helper.get_op_output_shape(input_operation)
  File "/home/pratheek/Setup/snpe-1.27.1.382/lib/python/snpe/converters/tensorflow/util.py", line 179, in get_op_output_shape
    shapes = self._evaluate_tensors_output_shape([tensor])
  File "/home/pratheek/Setup/snpe-1.27.1.382/lib/python/snpe/converters/tensorflow/util.py", line 191, in _evaluate_tensors_output_shape
    outputs_map = self.evaluate_tensors_output(tensors)
  File "/home/pratheek/Setup/snpe-1.27.1.382/lib/python/snpe/converters/tensorflow/util.py", line 242, in evaluate_tensors_output
    outputs = self._session.run(fetches=requiring_evaluation, feed_dict=input_tensors)
  File "/home/pratheek/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 887, in run
    run_metadata_ptr)
  File "/home/pratheek/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1086, in _run
    str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1, 256, 256, 3) for Tensor u'conv2d_1/convolution:0', which has shape '(?, 254, 254, 32)'
 

 

Please advise!!

My custom .pb model for reference. 

TensorBoard image of the model

pratheekb96
Join Date: 2 Jul 19
Posts: 3
Posted: Mon, 2019-08-05 06:26

Update: 

I now tried with this command: 

snpe-tensorflow-to-dlc --graph tf_fingers.pb --input_dim 'conv2d_1_input' 1,256,256,3 --out_node 'dense_2/Softmax' --dlc './tf_fingers.dlc' --allow_unconsumed_nodes --verbose

The script says 'Model conversion completed!', but when I check the output DLC file with snpe-dlc-info I get the following:

snpe-dlc-info -i tf_fingers.dlc 
DLC info for: /home/pratheek/Setup/snpe-1.27.1.382/tf_fingers.dlc
Model Version: N/A
Model Copyright:N/A
-----------------------------------------------------------------------------------------------------------------------
| Id | Name             | Type | Inputs           | Outputs          | Out Dims    | Parameters                       |
-----------------------------------------------------------------------------------------------------------------------
| 0  | conv2d_1_input:0 | data | conv2d_1_input:0 | conv2d_1_input:0 | 1x256x256x3 | input_preprocessing: passthrough |
|    |                  |      |                  |                  |             | input_type: default              |
Total parameters: 0 (0 MB assuming single precision float)
Total MACs per inference: 0 (0%)
Converter command: snpe-tensorflow-to-dlc verbose=True out_node=['dense_2/Softmax'] allow_unconsumed_nodes=True model_version=None input_dim=[['conv2d_1_input', '1,256,256,3']] copyright_file=None in_type=None
DLC created with converter version: 1.27.1.382
-----------------------------------------------------------------------------------------------------------------------
 
Please advise.
pratheekb96
Join Date: 2 Jul 19
Posts: 3
Posted: Wed, 2019-08-07 02:31

Bump: Please reply!

