Welcome to the QDN blog! We regularly post on a wide array of topics for our developers, from AI, gaming, XR, robotics, and IoT to Snapdragon tools and 5G. Scroll down to see our most recent posts.
Co-written with Aleksandra Krstic, Alex Bourd and Shuaib Arshad.
Suppose you came home from vacation with a few dozen photos of the Eiffel Tower, the Taj Mahal, or Michelangelo’s David, each taken from a different perspective. What if you wanted to “walk” around them again, if only virtually? That would entail somehow stitching all the 2D images together into a 3D scene you could view from different, freely placed points, as in the video below.
What does it take to live-stream video from a drone at an emergency rescue and response location 100 kilometers (62 miles) away?
Want to accelerate your large language model (LLM) inference workloads without blowing your power budget? Or your cooling budget?
To render stereoscopic views in extended reality (XR) development, how do you treat each view differently and account for the difference in perspective between the eyes? A common XR technique is to use multiple viewports and scissor out a different region of the render target for each eye, but this multi-viewport approach ignores clears and is incompatible with other XR extensions. It can also prevent foveation and slow down the rendering work.
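For context, a minimal sketch of that scissor-per-eye technique might look like the following. This is an illustration only, not code from the post: draw_scene_for_eye() stands in for the application's own draw calls, and the side-by-side eye layout and dimension parameters are assumptions.

```c
/* Sketch of scissor-per-eye stereo rendering with OpenGL ES 3.
 * draw_scene_for_eye() is a hypothetical application callback. */
#include <GLES3/gl3.h>

void draw_scene_for_eye(int eye);   /* supplied by the application */

void render_stereo_side_by_side(GLsizei eye_width, GLsizei eye_height)
{
    glEnable(GL_SCISSOR_TEST);
    for (int eye = 0; eye < 2; ++eye) {
        GLint x = eye * eye_width;  /* left eye at x = 0, right eye offset */

        /* Restrict rendering, and clears, to this eye's half of the target. */
        glViewport(x, 0, eye_width, eye_height);
        glScissor(x, 0, eye_width, eye_height);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        /* Each eye needs its own view/projection matrices to capture the
         * perspective difference; every draw call is issued twice. */
        draw_scene_for_eye(eye);
    }
    glDisable(GL_SCISSOR_TEST);
}
```

Because every draw call is issued once per eye, CPU overhead roughly doubles on top of the drawbacks listed above. Multiview extensions such as GL_OVR_multiview, which broadcast a single set of draw calls to multiple views, are the kind of alternative the full post likely explores.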
Opinions expressed in the content posted here are the personal opinions of the original authors, and do not necessarily reflect those of Qualcomm Incorporated or its subsidiaries ("Qualcomm"). The content is provided for informational purposes only and is not meant to be an endorsement or representation by Qualcomm or any other party. This site may also provide links or references to non-Qualcomm sites and resources. Qualcomm makes no representations, warranties, or other commitments whatsoever about any non-Qualcomm sites or third-party resources that may be referenced, accessible from, or linked to this site.