In the SNPE (1.19.2) tutorial, the example compiles the source code into an executable. When running the AI model on the phone, we still have to drive it from the command line on the PC side. How can we make the AI model run standalone on the mobile side?
We tried a simple approach: modify "main.cpp", compile the example source into a dynamic library (a ".so" file), and call it from the HAL layer on the phone. But it never succeeded. By moving "return 0;" around in "main.cpp", we found that execution hangs as soon as it reaches an "snpe->..." call (for example, the 168th line in "main.cpp").
Attached is the modified "main.cpp".
Is our idea of calling a dynamic library from the HAL layer correct? If so, we would appreciate your guidance on this question. In addition, what practical approaches would let the AI model run standalone on the phone? Thanks very much in any case.
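For context, this is roughly how we load and call the library from the HAL side. It is a minimal sketch using the standard POSIX dlopen/dlsym API; the library name "libsnpe_runner.so" and the exported entry point "run_model" are placeholders for our own build, not names from the SNPE SDK.

```cpp
#include <dlfcn.h>
#include <cstdio>

// Signature of the C entry point we export from our model .so
// (the name "run_model" is our own convention, not part of SNPE).
typedef int (*run_model_fn)(const char* dlcPath, const char* inputList);

// Load the shared library, resolve the entry point, and run inference.
// Returns the model's return code, or -1 if loading/lookup fails.
int loadAndRun(const char* soPath, const char* dlcPath, const char* inputList) {
    void* handle = dlopen(soPath, RTLD_NOW);
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return -1;
    }
    run_model_fn run =
        reinterpret_cast<run_model_fn>(dlsym(handle, "run_model"));
    if (!run) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return -1;
    }
    int rc = run(dlcPath, inputList);
    dlclose(handle);
    return rc;
}
```

On Android the library must of course be built for the device ABI and placed where the HAL process can find it, and on desktop Linux the caller has to link with -ldl.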
// Input/output buffer handling modes
enum {UNKNOWN, USERBUFFER, ITENSOR};
// Source of user-buffer data
enum {CPUBUFFER, GLBUFFER};

static std::string dlc = "bvlc_alexnet.dlc";     // model container to load
static std::string OutputDir = "output";         // directory for results
const char* inputFile = "target_raw_list.txt";   // list of raw input files
std::string bufferTypeStr = "USERBUFFER";        // selected buffer mode
std::string userBufferSourceStr = "CPUBUFFER";   // selected user-buffer source
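Before building the network, the string settings above have to be mapped onto the enum values. A minimal sketch of that mapping is below; the named enum BufferType and the helper parseBufferType are our own illustration (the original code uses anonymous enums), not SDK API.

```cpp
#include <string>

// Named version of the anonymous buffer-type enum, for illustration only.
enum BufferType { UNKNOWN, USERBUFFER, ITENSOR };

// Map the textual setting (e.g. bufferTypeStr) to its enum value;
// unrecognized strings fall back to UNKNOWN.
BufferType parseBufferType(const std::string& s) {
    if (s == "USERBUFFER") return USERBUFFER;
    if (s == "ITENSOR")    return ITENSOR;
    return UNKNOWN;
}
```

With bufferTypeStr = "USERBUFFER", this yields USERBUFFER, which then selects the user-buffer code path when the network is executed.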