Forums - GPU_timing vs GPU_ub_float_timing

GPU_timing vs GPU_ub_float_timing
mpaschenko
Join Date: 14 Jan 18
Posts: 3
Posted: Sun, 2018-01-21 00:05

Hello.

I was able to benchmark my model on the GPU and saw that there are two result rows: GPU_timing and GPU_ub_float_timing.

GPU_timing inference time: 67280 usec

GPU_ub_float_timing inference time: 1249774 usec

Actual model execution in the sample app takes about 240-320 ms, measured by taking timestamps right before and after the execute() call:

t1 = System.currentTimeMillis();
outputs = mNeuralNetwork.execute(inputs);
t2 = System.currentTimeMillis();
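
For what it's worth, here is a minimal sketch of how I could tighten the measurement: one warm-up call (so one-time GPU initialization and kernel compilation don't skew the number), then averaging over several executions. The execute() call is the same one as in my app; the FloatTensor type and the package names are written from memory of the SDK's Android sample, so please treat them as assumptions:

import com.qualcomm.qti.snpe.FloatTensor;
import com.qualcomm.qti.snpe.NeuralNetwork;

import java.util.Map;

public final class InferenceTimer {

    // Runs one warm-up pass, then averages the wall-clock time of several
    // execute() calls. SNPE class names here are assumptions from memory.
    public static double averageInferenceMillis(NeuralNetwork network,
                                                Map<String, FloatTensor> inputs,
                                                int iterations) {
        // Warm-up: the first call may include GPU kernel compilation and
        // buffer allocation that should not count as steady-state time.
        network.execute(inputs);

        long totalMillis = 0;
        for (int i = 0; i < iterations; i++) {
            long t1 = System.currentTimeMillis();
            network.execute(inputs);
            long t2 = System.currentTimeMillis();
            totalMillis += (t2 - t1);
        }
        return totalMillis / (double) iterations;
    }
}

With something like this I would at least know whether the 240-320 ms includes one-time first-run overhead or is really the steady-state inference time.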
 

Are there any tricks to enable the mode that the GPU_timing row corresponds to (as opposed to GPU_ub_float_timing), or something similar?

And if the model is already executing in the mode that GPU_timing corresponds to, why do my measurements differ from the benchmark by roughly 4x (67 ms vs. ~250 ms)?
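
For context, this is roughly how I create the network in the sample app, with the GPU runtime preferred. The builder calls (setRuntimeOrder, setModel) are recalled from the SDK's image-classification sample and may differ between SNPE versions; I have not found an option that clearly maps to the GPU_ub_float row, so nothing here is meant as the definitive way to pick between the two modes:

import android.app.Application;

import com.qualcomm.qti.snpe.NeuralNetwork;
import com.qualcomm.qti.snpe.SNPE;

import java.io.File;
import java.io.IOException;

public final class NetworkFactory {

    // Prefers the GPU runtime and falls back to CPU if it is unavailable.
    // Builder method names are assumptions based on the SDK's Android sample.
    public static NeuralNetwork buildGpuNetwork(Application application, File dlcFile)
            throws IOException {
        return new SNPE.NeuralNetworkBuilder(application)
                .setRuntimeOrder(NeuralNetwork.Runtime.GPU,
                                 NeuralNetwork.Runtime.CPU)
                .setModel(dlcFile)
                .build();
    }
}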

I'd appreciate an answer. Thanks.

