Qualcomm Developer Network September Developer of the Month is Boris Denisenko from Mapbox, located in Minsk, Belarus. Boris is a Senior Software Engineer (Mobile) working on the Vision SDK, a computer vision solution based on machine learning that provides drivers with extra information about traffic situations.
Mapbox was founded as a startup in Washington, DC in 2010, as part of Development Seed, to offer map customization for non-profit customers. It has since grown to include offices in San Francisco, Bangalore, Berlin, and Minsk. Mapbox’s location data platform for mobile and web applications provides developers with data, APIs, and SDKs to add features like maps, search, and navigation to mobile experiences. Customers include Foursquare, Lonely Planet, Facebook, the Financial Times, The Weather Channel, and Snapchat.
The Minsk office and Vision SDK research team was founded at the end of 2017 to develop a fundamentally new method of navigation based on computer vision, machine learning (ML), and augmented reality: the Mapbox Vision SDK.
Do you have any interesting facts about your company?
Over 70,000 active developers build with Mapbox every month. These applications reach more than 300 million people worldwide each month. Our secure data pipeline processes over 225 million miles of anonymized traffic data per day — more than twice the distance from the earth to the sun. To support this, we have over 300 designers, software engineers, cartographers, and strategists … and 15 dogs.
How did QDN tools assist in your development?
We aim for optimized performance of machine learning models on mobile devices, so it’s really important to work with powerful chipsets and a mobile machine learning (ML) backend like the Qualcomm Neural Processing SDK for AI. Thanks to the Neural Processing SDK, our Android Vision SDK performs very well on Snapdragon devices, especially the Snapdragon 835 and 845. It allows us to run three different ML models simultaneously on a video stream at impressive speeds (from 7 to 30+ frames per second). As a result, we have semantic segmentation, object detection (four classes: cars, signs, people, traffic lights), sign classification (100+ classes), and a real-time distance measurement system.
Do you plan on using technologies from QDN on future projects?
QTI provides excellent hardware for mobile phone makers and a large set of developer tools that help us achieve optimal software quality. In addition to using the Qualcomm Neural Processing SDK for AI, we plan to take advantage of the Qualcomm® Hexagon™ DSP and the Hexagon DSP SDK to help increase video processing speed and reduce power consumption. Additional specialized solutions (e.g., the FastCV Library and Snapdragon Math Libraries) will help us make our products even faster and more accurate.
Can you provide us a fun anecdote about your team or company?
The day before we presented our product at SoftBank World 2018, we tried to run the semantic segmentation neural network model on a Snapdragon device using the Neural Processing SDK and got contradictory results. On one hand, it showed good performance on benchmarks; on the other, it was 10 times slower through the Java interfaces. After investigating, we discovered that the Neural Processing SDK itself worked well, but copying the result took 10 times longer than the execution. In a couple of hours, we implemented a new data-copying mechanism by extending our existing ITensor class, after which our system started working great.
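The general pattern behind this kind of fix can be sketched in plain Java. The class and method names below are illustrative, not Mapbox's actual code: the point is the difference between pulling a tensor's output one element at a time (where each accessor call may cross a JNI or SDK boundary) and doing a single bulk transfer into a pre-allocated array.

```java
import java.nio.FloatBuffer;
import java.util.Arrays;

public class TensorCopyDemo {
    // Slow path: read one element at a time. In a real SDK each get()
    // may cross a native boundary, so the copy can dominate inference time.
    static float[] copyElementwise(FloatBuffer src) {
        float[] out = new float[src.remaining()];
        for (int i = 0; i < out.length; i++) {
            out[i] = src.get(i);
        }
        return out;
    }

    // Fast path: a single bulk transfer into a pre-allocated array.
    // duplicate() keeps the source buffer's position untouched.
    static float[] copyBulk(FloatBuffer src) {
        float[] out = new float[src.remaining()];
        src.duplicate().get(out);
        return out;
    }

    public static void main(String[] args) {
        // Stand-in for a model's output tensor.
        float[] data = new float[1024];
        for (int i = 0; i < data.length; i++) data[i] = i * 0.5f;
        FloatBuffer tensor = FloatBuffer.wrap(data);

        float[] a = copyElementwise(tensor);
        float[] b = copyBulk(tensor);
        if (!Arrays.equals(a, b)) throw new AssertionError("copies must match");
        System.out.println("both copies match: " + a.length + " floats");
    }
}
```

Both paths produce identical results; the bulk path simply replaces many boundary crossings with one, which is the essence of the mechanism described above.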
Anything else to share with our developer community?
From the start of our Vision SDK development, we looked for the right tool to run our neural network models on mobile and embedded devices. Currently we feel that Snapdragon processors and tools are the optimal choice. The results delivered by the Snapdragon 845 are amazing, and much better than we could achieve on other platforms or flagship devices in terms of balancing heat, energy consumption, and performance.