Qualcomm products mentioned within this post are offered by
Qualcomm Technologies, Inc. and/or its subsidiaries.
The Qualcomm Developer Network's September Developer of the Month is Daqing Zhou from Airy3D Inc., located in Montreal, Canada. Daqing is a Senior Embedded Software Developer at Airy3D whose work centers on optimizing Airy3D's software algorithms on PC and embedded platforms. Daqing is passionate about making code run as fast as possible and about developing real-time imaging applications.
Airy3D is a 3D computer vision start-up based in Montreal, Canada, founded in 2015. Airy3D's DepthIQ™ platform can convert any single 2D imaging sensor into one that generates both a 2D image and 3D depth data. It combines simple optical physics, via the company's patented Transmissive Diffraction Mask technology, with proprietary algorithms to deliver versatile 3D sensing solutions while preserving 2D performance.
How was Airy3D started?
Airy3D was founded in 2015 by Jonathan Saari, Ji-Ho Cho, and Guillaume Poirier, licensing its initial technology from Cornell University. The company was spun out of TandemLaunch, a Montreal-based deep technology start-up incubator.
What can you tell us about the products you develop?
Airy3D’s DepthIQ depth-sensing platform is a versatile and straightforward solution that is far more computationally efficient than other approaches, while also being significantly lower in cost. DepthIQ is also “sensor agnostic”, meaning it can be customized to any given CMOS sensor specification. DepthIQ is a simple, drop-in solution useful for many burgeoning industries that span mobile, automotive, AR/VR, drones, robotics, smart homes and beyond.
Airy3D’s DepthIQ™ technology can render 3D point cloud images from a single image taken with a single camera (shown as a 2D image). (Images provided by Airy3D)
Where does your team get inspiration?
We all love technology and imaging. We have some avid amateur photographers on our team and it’s always a delight to check out their latest photos and adventures. Working with computational photography is inspiring because we can see the fruit of our efforts daily. Also, our team is very diverse, comprising people with backgrounds in physics, machine vision, machine learning, electronics, and software engineering. It’s always fun to see these people coming together, sharing knowledge, and exchanging ideas.
How are you using QTI technologies in your products?
We are currently working with a Qualcomm® Snapdragon™ 820 development kit and the Qualcomm® Hexagon™ DSP SDK. Our DepthIQ technology is a combination of both hardware and software components. We implemented the software components on the Hexagon DSP and were able to achieve real-time performance.
We use the Hexagon DSP to pre-filter the camera’s raw stream before it’s passed to the Qualcomm ISP (Image Signal Processor). This is a very important feature of the Snapdragon 820 that is not always present on other mobile SoCs, and one that is very powerful for new computational photography applications. Thanks to Qualcomm’s heterogeneous mobile platform, we can calculate depth on the CPU, DSP, or GPU depending on the use case requirements. When minimizing power usage is key, the DSP is a great option.
Do you plan on using QTI technologies on future projects?
We recently raised funding to advance our roadmap toward the first commercial adoptions of our DepthIQ 3D sensor platform with top-tier mobile OEMs in 2019. So, we’ll continue to work with Snapdragon hardware and software technologies for mobile development. We’re impressed with the recent development tools from Qualcomm and are looking forward to seeing how we can further utilize them once we are a qualified development partner.
What are some development tools and resources that you consider essential to the development of your products?
There are multiple tools that are essential to me for doing heterogeneous computing on the CPU, GPU, and DSP. On the CPU, I use x86 AVX and SSE intrinsics, assembly programming, and multithreaded programming; on the GPU, OpenCL and CUDA; and on the DSP, VLIW SIMD programming.
Benchmarking and profiling (cycle counts) on the Snapdragon Arm CPU, along with power consumption measurements, are also very important.
Where do you see the visual intelligence industry in 10 years?
Many trends are converging right now to make visual intelligence ubiquitous: a golden age in machine learning, the proliferation of cameras, and ever-increasing computational processing power. In ten years, most cameras will be able to ‘understand’ the visual context and environment. That understanding will be driven by sensors that capture not just the image but also depth.
Can you provide us a fun fact about your team or company as a whole?
The company name is a riff on the optics term Airy disc. When our very first sensor prototype was produced, it had a minor flaw that caused an Airy disc to appear in the images from that sensor!
We are currently hiring for various positions, so if you have experience with mobile development tools and are passionate about 3D computer vision, we would love to hear from you!