Forums - Xception does not run on DSP (but works on CPU/GPU)

shiangyong
Join Date: 21 Sep 17
Posts: 15
Posted: Tue, 2018-01-09 12:12

I converted the Xception model in Keras to DLC, and it ran fine on the CPU and GPU of my Snapdragon 820 device.

However, when I try running it on the DSP, the computation just hangs, with no error messages. Any ideas on what could be the cause?

My suspicion: the culprit is the depthwise separable convolution layers in Xception.

BTW, I'm using SNPE 1.8.0.
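For reference, this is roughly the export step I use before converting to DLC: build Xception in Keras and freeze it to a TensorFlow .pb that the SNPE TensorFlow converter can consume (the output file name below is just a placeholder for my setup).

    import tensorflow as tf
    from tensorflow.python.framework import graph_util
    from keras import backend as K
    from keras.applications.xception import Xception

    K.set_learning_phase(0)                  # inference mode (no dropout / BN updates)
    model = Xception(weights='imagenet')     # 299x299x3 input, softmax output

    # Freeze variables into constants so the graph can be converted offline.
    sess = K.get_session()
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), [model.output.op.name])
    tf.train.write_graph(frozen, '.', 'xception_frozen.pb', as_text=False)

The frozen .pb then goes through the SNPE TensorFlow converter to produce the DLC that I run on each runtime.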

jesliger
Join Date: 6 Aug 13
Posts: 75
Posted: Tue, 2018-01-09 12:45

Hi. We have not tested Xception on any runtime. However, DSP support for depthwise conv2d was added in 1.10.1. Do you have access to 1.10.1?
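For context on why that matters for Xception: the network is built almost entirely from separable convolutions, and a separable convolution is just a depthwise conv2d followed by a 1x1 pointwise convolution. A quick sketch of the equivalence in plain TensorFlow ops (the shapes are arbitrary, purely for illustration):

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [1, 32, 32, 16])
    dw = tf.Variable(tf.random_normal([3, 3, 16, 1]))    # one 3x3 filter per input channel
    pw = tf.Variable(tf.random_normal([1, 1, 16, 64]))   # 1x1 "pointwise" mixing filters

    depthwise = tf.nn.depthwise_conv2d(x, dw, strides=[1, 1, 1, 1], padding='SAME')
    separable = tf.nn.conv2d(depthwise, pw, strides=[1, 1, 1, 1], padding='SAME')

So a runtime without depthwise conv2d support is affected on every separable convolution block in Xception.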

 

 

shiangyong
Join Date: 21 Sep 17
Posts: 15
Posted: Tue, 2018-01-09 14:08

Just saw that 1.10.1 is available now; I will give it a try. Thanks!

zf.africa
Join Date: 15 Jun 17
Posts: 51
Posted: Wed, 2018-01-10 21:38

Hi jesliger,

So you have already run MobileNet on the DSP runtime?

I tried SNPE 1.10.1, but the network execution time is only 2~3 ms, which is very strange, and it cannot detect any objects.

When I switch back to 1.8.0, the network works fine.

Of course, I switched libSNPE.so and the header files, and also re-converted the model to match the SDK version.

 

jesliger
Join Date: 6 Aug 13
Posts: 75
Posted: Thu, 2018-01-11 06:01

MobileNet executes and completes on the DSP runtime in SNPE 1.10.1. However, MobileNet is not quantization friendly, and the results are completely incorrect. This is not only on the SNPE DSP runtime; it also occurs in TensorFlow. MobileNet doesn't return correct results in TensorFlow when the weights and activations are quantized. Google is looking into this, as are others.

 

zf.africa
Join Date: 15 Jun 17
Posts: 51
Posted: Thu, 2018-01-11 06:57

Hi jesliger,

According to the documentation, when a non-quantized model is initialized on the DSP runtime, it quantizes the model automatically.

Do you mean a quantized MobileNet model would output incorrect data, no matter how the model is quantized?

jesliger
Join Date: 6 Aug 13
Posts: 75
Posted: Thu, 2018-01-11 07:12

Correct. No matter how MobileNet is quantized, whether offline using snpe-dlc-quantize or by passing the float DLC to SNPE and letting SNPE quantize the model on the fly, the results on the DSP runtime will be incorrect. The same occurs in TensorFlow when you quantize the weights and activations and run it there.
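For anyone wondering what the quantization step does, here is a rough sketch of a generic 8-bit affine quantize/dequantize pass (just the general idea for illustration, not necessarily SNPE's exact implementation):

    import numpy as np

    def quantize_dequantize(x, num_bits=8):
        # Map a float tensor to 8-bit integers and back, roughly what an
        # on-the-fly quantizer does with the tensor's min/max range.
        qmin, qmax = 0, 2 ** num_bits - 1
        scale = (x.max() - x.min()) / (qmax - qmin)
        zero_point = np.round(qmin - x.min() / scale)
        q = np.clip(np.round(x / scale + zero_point), qmin, qmax)   # stored as uint8
        return (q - zero_point) * scale                             # values the runtime effectively computes with

    weights = np.random.randn(3, 3, 32, 64).astype(np.float32)
    err = np.abs(weights - quantize_dequantize(weights)).max()
    print('max per-weight quantization error:', err)

Layers whose weights or activations span a wide range (a few large outliers) end up with a large scale and lose precision, which is one way a network turns out to be quantization unfriendly.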

 

zf.africa
Join Date: 15 Jun 17
Posts: 51
Posted: Thu, 2018-01-11 08:03

Thanks for the information.

It seems this is a MobileNet issue; I will follow it.

shiangyong
Join Date: 21 Sep 17
Posts: 15
Posted: Thu, 2018-01-11 09:35

Hi jesliger,

Just curious, do you know which layers in MobileNet are causing problems when running on the DSP, i.e. which layers are not quantization friendly?

Shiang Yong

jesliger
Join Date: 6 Aug 13
Posts: 75
Posted: Thu, 2018-01-11 11:01

Analysis of this is still ongoing; I cannot comment on the details right now.

Google has published something about it, but I'm not sure it has much detail on exactly which ops aren't working under quantization. See: https://arxiv.org/pdf/1712.05877.pdf

Google put out a MobileNet model that they trained with "fake quantization" that does better, but it is still not as good as float (still not really usable). They mention it here: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/...
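If it helps, the "fake quantization" they refer to is quantization-aware training: fake-quant ops are inserted into the training graph so the forward pass sees 8-bit-rounded weights and activations while gradients stay in float. A rough sketch of how that looked with tf.contrib.quantize in TensorFlow 1.x (the tiny model here is only a stand-in, not MobileNet):

    import tensorflow as tf

    g = tf.Graph()
    with g.as_default():
        images = tf.placeholder(tf.float32, [None, 224, 224, 3])
        labels = tf.placeholder(tf.int32, [None])

        # Tiny stand-in network; the graph rewrite works the same way on MobileNet.
        net = tf.layers.conv2d(images, 8, 3, activation=tf.nn.relu)
        net = tf.reduce_mean(net, axis=[1, 2])          # global average pooling
        logits = tf.layers.dense(net, 10)
        loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

        # Rewrites the graph in place, inserting fake-quant ops that simulate
        # 8-bit weights and activations in the forward pass during training.
        tf.contrib.quantize.create_training_graph(input_graph=g, quant_delay=0)

        train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

The trained checkpoint is then exported with create_eval_graph() so the learned quantization ranges are carried along, which is why that model degrades less under 8-bit inference than a plain float-trained MobileNet.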

 

 

shiangyong
Join Date: 21 Sep 17
Posts: 15
Posted: Wed, 2018-01-31 18:14

Quick update.

I tried running the Xception model with SNPE 1.10.1 and it still does not run. This time it actually resets the adb connection, which is really weird. I hope this will be fixed in the next release.

