I am trying to use both TensorFlow Lite and SNPE .dlc models on Android, but the .dlc model takes a long time to build the network. The relevant part is shown below:
    final SNPE.NeuralNetworkBuilder builder = new SNPE.NeuralNetworkBuilder(application)
            .setDebugEnabled(false)
            .setRuntimeOrder(targetRuntime)
            .setModel(mModel, mModel.available())
            .setCpuFallbackEnabled(true);
    long start = System.currentTimeMillis();
    network = builder.build();
    long end = System.currentTimeMillis();
The .dlc model costs about 1300 ms in builder.build(), while the .tflite model takes only about 10 ms to initialize its interpreter.
Could you help me improve this?
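One common workaround (a sketch, not from the SNPE documentation: it assumes the ~1300 ms build cost is unavoidable and simply hides it) is to start builder.build() once on a background thread during app startup, so the network is ready before the first inference is requested. The generic wrapper below is hypothetical; in the real app the Callable body would be the SNPE builder chain from the snippet above.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: run an expensive one-time build on a background thread and cache it.
// In the real app the Callable would be:
//   () -> new SNPE.NeuralNetworkBuilder(application)...build()
public class PrebuiltNetwork<T> {
    private final Future<T> future;

    public PrebuiltNetwork(Callable<T> expensiveBuild) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        future = executor.submit(expensiveBuild); // build starts immediately
        executor.shutdown();                      // no further tasks needed
    }

    // Blocks only if the build has not finished yet.
    public T get() throws ExecutionException, InterruptedException {
        return future.get();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for builder.build(): sleep mimics the slow SNPE build.
        PrebuiltNetwork<String> net = new PrebuiltNetwork<>(() -> {
            Thread.sleep(100);
            return "network-ready";
        });
        // ... the app can do other startup work here ...
        System.out.println(net.get());
    }
}
```

This does not reduce the build time itself, but if the build overlaps other startup work, the user never waits the full 1300 ms before the first inference.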
Hi weiting_hsiao,
Initialization takes longer in the following cases:
- An un-quantized model is used on the DSP runtime
- A quantized model is used on the CPU or GPU runtime
Check whether your environment matches one of the above.
Thanks,
Jihoon
Hi jihoonk
I checked my environment, and neither of these two cases applies:
my app uses an un-quantized .pb model (converted to .dlc) on the GPU runtime.
Is there any solution to improve the initialization time?
Please try the snpe-diagview tool, a profiling tool that analyzes the output of snpe-net-run. It might help you.
https://developer.qualcomm.com/docs/snpe/tools.html#tools_snpe-diagview
Thanks,
Jihoon
I still don't know how to solve this initialization-time problem.
TensorFlow .tflite: initialization time 10 ms, inference time 250 ms
SNPE .dlc: initialization time 1300 ms, inference time 60 ms
If you have any solution, please let me know.
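For context, the measurements above imply a break-even point between the two runtimes: the .dlc pays 1290 ms more at startup but saves 190 ms per inference, so it wins after about 7 inferences. A small sketch of that arithmetic (the numbers are the ones quoted in this thread; it assumes they stay constant per run):

```java
// Break-even between tflite (fast init, slow inference) and .dlc
// (slow init, fast inference), using the timings from this thread.
public class BreakEven {
    static final double TFLITE_INIT = 10, TFLITE_INFER = 250; // ms
    static final double DLC_INIT = 1300, DLC_INFER = 60;      // ms

    // Number of inferences after which total .dlc time drops below tflite:
    // extra init cost / per-inference saving
    static double breakEven() {
        return (DLC_INIT - TFLITE_INIT) / (TFLITE_INFER - DLC_INFER);
    }

    public static void main(String[] args) {
        System.out.printf("break-even after %.1f inferences%n", breakEven());
        for (int n : new int[]{1, 7, 100}) {
            double tflite = TFLITE_INIT + n * TFLITE_INFER;
            double dlc = DLC_INIT + n * DLC_INFER;
            System.out.printf("n=%d  tflite=%.0f ms  dlc=%.0f ms%n", n, tflite, dlc);
        }
    }
}
```

So if the app runs more than a handful of inferences per session, the one-time 1300 ms build cost is amortized quickly.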