I have tried using a GlBuffer as an SNPE neural network input (SNPE version 1.66).
I am not able to run the DLC with a GlBuffer: snpe->execute() returns false, with no warnings and no errors reported.
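To get any diagnostics at all, I log the last SNPE error after the failed call. A minimal sketch, where inputMap/outputMap are my UserBufferMaps:

```cpp
#include <iostream>
#include "DlSystem/DlError.hpp"

// After execute() returns false, query SNPE's description of the last error.
if (!snpe->execute(inputMap, outputMap)) {
    std::cerr << "SNPE execute failed: "
              << zdl::DlSystem::getLastErrorString() << std::endl;
}
```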
Before creating the input/output UserBuffers, I set the runtime to GPU, set the UserGLConfig, and checked that GLBuffer is set, as follows:
```cpp
NN::NeuralNetworkBufferType setGPUPlatformConfig(zdl::DlSystem::PlatformConfig& platformConfig) {
    // fall back to ordinary user-supplied buffers if GL/CL interop is unavailable
    if (zdl::SNPE::SNPEFactory::isGLCLInteropSupported() == false) {
        return NN::NeuralNetworkBufferType::UserSuppliedBuffer;
    }

    // hand the application's GL context and display to SNPE
    zdl::DlSystem::UserGLConfig userGLConfig;
    userGLConfig.userGLContext = getContext();
    userGLConfig.userGLDisplay = getDisplay();

    zdl::DlSystem::UserGpuConfig userGpuConfig;
    userGpuConfig.userGLConfig = userGLConfig;

    bool result = platformConfig.setUserGpuConfig(userGpuConfig);
    if (result == false) {
        return NN::NeuralNetworkBufferType::UserSuppliedBuffer;
    }

    if (userGLConfig.userGLContext == nullptr || userGLConfig.userGLDisplay == nullptr) {
        return NN::NeuralNetworkBufferType::UserSuppliedBuffer;
    }

    zdl::DlSystem::PlatformConfig::SetIsUserGLBuffer(true);
    const bool isGLBuffer = zdl::DlSystem::PlatformConfig::GetIsUserGLBuffer();
    if (isGLBuffer == false) {
        return NN::NeuralNetworkBufferType::UserSuppliedBuffer;
    }

    return NN::NeuralNetworkBufferType::OpenGlBuffer;
}
```
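For context, this is roughly how I apply the platform config when building the network. A sketch; `container` is my loaded DLC container, and the builder options are what I use in my code:

```cpp
// Build the network on the GPU runtime with the platform config filled in
// by setGPUPlatformConfig() above; container is the loaded DLC.
zdl::DlSystem::PlatformConfig platformConfig;
NN::NeuralNetworkBufferType bufferType = setGPUPlatformConfig(platformConfig);

std::unique_ptr<zdl::SNPE::SNPE> snpe =
    zdl::SNPE::SNPEBuilder(container.get())
        .setRuntimeProcessor(zdl::DlSystem::Runtime_t::GPU)
        .setPlatformConfig(platformConfig)
        .setUseUserSuppliedBuffers(true)
        .build();
```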
I created a UserBuffer pointing to an OpenGL buffer id with preallocated GPU memory.
Input UserBuffer creation looks as follows:
```cpp
zdl::DlSystem::UserBufferEncodingFloat userBufferEncodingFloat;
zdl::DlSystem::UserBufferSourceGLBuffer userBufferSourceGLBuffer;

std::vector<float> cpuExampleInput(bufferSize / bufferElementSize, 0.25);
glBuffer = NN::createGlBuffer(bufferSize, cpuExampleInput.data());

// create SNPE user buffer from the user-backed buffer
zdl::DlSystem::IUserBufferFactory& ubFactory = zdl::SNPE::SNPEFactory::getUserBufferFactory();
snpeUserBackedBuffers.emplace_back(ubFactory.createUserBuffer(&glBuffer, bufferSize, strides,
                                                              &userBufferEncodingFloat,
                                                              &userBufferSourceGLBuffer));

// add the user-backed buffer to the inputMap, which is later on fed to the network for execution
userBufferMap.add(name, snpeUserBackedBuffers.back().get());
```
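NN::createGlBuffer is my own helper. A minimal sketch of what it does; the GL_SHADER_STORAGE_BUFFER target is my choice (it requires GLES 3.1), not something mandated by SNPE:

```cpp
#include <GLES3/gl31.h>

// Minimal sketch of my NN::createGlBuffer helper: allocates a GL buffer
// object and uploads the initial data. SNPE's UserBufferSourceGLBuffer is
// then given the address of this GLuint handle.
GLuint createGlBuffer(size_t bufferSize, const void* initialData) {
    GLuint buffer = 0;
    glGenBuffers(1, &buffer);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, buffer);
    glBufferData(GL_SHADER_STORAGE_BUFFER, bufferSize, initialData, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
    return buffer;
}
```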
Output buffer creation looks as follows:
```cpp
const size_t bufferElementSize = (*bufferAttributesOpt)->getElementSize();
bufferSize = calcSizeFromDims(bufferShape.getDimensions(), bufferShape.rank(), bufferElementSize);
userBuffer.resize(bufferSize / bufferElementSize);

// set the buffer encoding type
zdl::DlSystem::UserBufferEncodingFloat userBufferEncodingFloat;

// create SNPE user buffer from the user-backed buffer
zdl::DlSystem::IUserBufferFactory& ubFactory = zdl::SNPE::SNPEFactory::getUserBufferFactory();
snpeUserBackedBuffers.emplace_back(ubFactory.createUserBuffer(userBuffer.data(), bufferSize, strides,
                                                              &userBufferEncodingFloat));
```
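In both snippets, `strides` is computed the same way as in the SDK's UserBuffer example:

```cpp
// Strides in total bytes, outermost dimension first: the innermost stride is
// the element size, and each preceding stride multiplies in the dimension
// below it (as in the SNPE SDK UserBuffer sample).
std::vector<size_t> strides(bufferShape.rank());
strides[strides.size() - 1] = bufferElementSize;
size_t stride = strides[strides.size() - 1];
for (size_t i = bufferShape.rank() - 1; i > 0; i--) {
    stride *= bufferShape[i];
    strides[i - 1] = stride;
}
```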
I have used the same code to run the DLC with user-supplied (CPU) buffers and it works perfectly fine. However, for optimization purposes, I have an OpenGL framebuffer with a GL_RGB32F texture bound, with channel swap and signed normalized colors (-0.5f, 0.5f), which would ideally be passed directly as the DLC input. The texture size is exactly 448x224x3, as the DLC input requires.
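My assumption for feeding that texture is to copy it into the GL buffer first, since UserBufferSourceGLBuffer takes a buffer object rather than a texture. A sketch of that copy via a pixel pack buffer; `fbo`, `width`, and `height` are placeholders, and on GLES the readable format/type pair may have to be GL_RGBA/GL_FLOAT (query GL_IMPLEMENTATION_COLOR_READ_FORMAT/TYPE):

```cpp
// Copy the FBO color attachment into glBuffer through a pixel pack buffer,
// so the result can be handed to SNPE as a GL buffer object.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, glBuffer);
glReadPixels(0, 0, width, height, GL_RGB, GL_FLOAT, nullptr); // writes at offset 0 of the PBO
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
```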
I also tried running the SNPE example with GL_BUFFER, but the result stays the same.
So, my question is: is there something I have missed? How can I check whether the DLC supports GL_BUFFER (if that is possible at all), and what verification steps should I take in order to run the example code?
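One verification step I already perform, under my assumption that SNPE needs the GL context from UserGLConfig to be current on the thread that calls execute():

```cpp
#include <EGL/egl.h>
#include <cassert>

// Check that the context and display handed to UserGLConfig are actually
// current on the thread that calls snpe->execute().
EGLContext current = eglGetCurrentContext();
EGLDisplay display = eglGetCurrentDisplay();
assert(current != EGL_NO_CONTEXT);
assert(current == userGLConfig.userGLContext);
assert(display == userGLConfig.userGLDisplay);
```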
Is there any example you could provide?
Best regards.
I tried running the example code with the VGG model and input (generated from scripts/create_VGG_raws.py).
Unfortunately, the result stays the same: a segmentation fault after executing the network.
Is the issue resolved in SNPE 2.05?
How should I generate the DLC in order for it to work with GL_BUFFER?
Well, the issue is not resolved in 2.05, or I am missing an important step.