The development kit I use is the 6490 (778G). When I use SNPE 2.12 to load the YOLOv8s model (INT8 quantized), loading takes about 5.5 seconds. I tried to analyze this process and found that SNPEBuilder.build() takes most of the time. How can I reduce it?
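In SNPE 2.x, most of the time spent in SNPEBuilder.build() on the accelerator path typically goes to preparing the graph for the HTP. One common mitigation is to prepare the graph offline and cache it inside the DLC, so build() only has to load the cached graph. A sketch, assuming the snpe-dlc-graph-prepare tool and flag names from the SNPE 2.x SDK, and assuming the 778G maps to the sm7325 SoC ID; verify both against your SDK version:

```shell
# Offline-prepare the HTP graph and embed the init cache in the DLC,
# so SNPEBuilder.build() can skip on-device graph preparation.
# Flag names assumed from the SNPE 2.x SDK; check `snpe-dlc-graph-prepare --help`.
snpe-dlc-graph-prepare \
    --input_dlc yolov8s_quantized.dlc \
    --output_dlc yolov8s_quantized_cached.dlc \
    --htp_socs sm7325   # SoC ID assumed for the 778G; confirm for your device
```

At runtime, enabling init caching on the builder (e.g. via SNPEBuilder's init-cache option, if your SDK version exposes it) lets the cached graph be reused across loads.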
How to accelerate neural network load when using SNPE 2.12
Posted: Thu, 2023-11-23 17:52
Dear developer,
You can try running the quantized (INT8) model on the HTP backend; that will speed up execution time.
BR.
Wei
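A minimal sketch of selecting the DSP/HTP runtime with the SDK's snpe-net-run tool (flag names taken from the SNPE SDK; verify with `snpe-net-run --help` for your version):

```shell
# Run the quantized DLC on the DSP/HTP backend instead of the default CPU runtime.
snpe-net-run \
    --container yolov8s_quantized.dlc \
    --input_list input_list.txt \
    --use_dsp   # select the DSP/HTP runtime
```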
Dear developer,
I am trying to export yolov8m.pt to ONNX format and then convert it to DLC format using SNPE. However, the exported yolov8m.dlc cannot be run for inference with the SNPE tools. May I ask how you converted it?
su
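For reference, a typical export-and-convert pipeline looks roughly like this (tool names from the Ultralytics CLI and the SNPE SDK; the exact flags are assumptions, so check each tool's `--help`):

```shell
# 1. Export the PyTorch checkpoint to ONNX (Ultralytics CLI).
yolo export model=yolov8m.pt format=onnx opset=12

# 2. Convert the ONNX model to a float DLC.
snpe-onnx-to-dlc --input_network yolov8m.onnx --output_path yolov8m.dlc

# 3. Quantize to INT8 using a list of representative raw input files.
snpe-dlc-quantize --input_dlc yolov8m.dlc \
                  --input_list calib_inputs.txt \
                  --output_dlc yolov8m_quantized.dlc
```

If the resulting DLC fails at inference time, a common cause is unsupported post-processing ops in the exported ONNX graph; exporting without them and doing the post-processing on the host is a frequent workaround.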