Solutions Resources

To run neural networks efficiently at the edge on mobile, IoT, and other embedded devices, developers strive to optimize their machine learning (ML) models' size and complexity while taking advantage of hardware acceleration for inference.
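One common size optimization is post-training quantization. As a minimal sketch in plain Python (my own toy code, not tied to any particular SDK), mapping float32 weights to int8 with a per-tensor scale looks roughly like this:

```python
# Sketch of symmetric post-training quantization: map float weights
# into the int8 range [-127, 127] using a single per-tensor scale.

def quantize(weights):
    """Quantize a list of floats to int8 values plus a scale factor."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized values."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize(weights)
approx = dequantize(q, scale)
# Each recovered weight lies within one quantization step of the original.
```

Shrinking weights from 32-bit floats to 8-bit integers cuts model size roughly 4x, which is one reason quantized models suit hardware-accelerated inference at the edge.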
The term image processing encompasses many different tasks, including computational photography, computer vision algorithms, and even basics like image compression.
Edge devices are playing a key role in IIoT, Indus...
In his recent webinar, Accelerating Distributed AI Applications, Ziad Asghar, our Vice President, Product Management, Qualcomm Technologies, Inc., gave an insightful and pragma...
Centralized machine learning (ML) is the ML workflow most of us are familiar with today: training runs on powerful servers that update model parameters using large datasets.
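As a minimal, hedged sketch of that centralized loop (plain Python; a toy one-parameter least-squares model of my own, not from the post), a single server iterates over the full dataset and updates the parameter each epoch:

```python
# Sketch of a centralized training loop: one server holds the model
# parameter and updates it from the entire dataset every epoch.

def train(data, lr=0.1, epochs=100):
    """Fit y = w * x by gradient descent on mean squared error."""
    w = 0.0
    for _ in range(epochs):
        # Gradient of the MSE over the whole (centralized) dataset.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples from y = 2x
w = train(data)  # converges toward w = 2
```

Distributed and federated variants change where the gradient is computed, but the server-side parameter update keeps this same shape.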
Start and stop video recording through an Azure IoT hub. Send the status of your camera recordings to the AWS IoT console. Use GStreamer plug-ins to record video streams at multiple resolutions, including 1080p, via TCP.
New opportunities for mobile game developers to provide heightened levels of immersion are being created at the intersection of television, cinema, and videogames. It becomes even more compelling when these transmedia experiences are combined with 5G, pervasive gaming, and AI.
Are you still running your artificial intelligence workloads in the cloud? That may make sense for training your models, but if your applications depend on techniques like person detection and pose estimation, to name a few, then it’s time you looked into on-device AI.
Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries.
Qualcomm Technologies contributes Hexagon DSP improvements to the open source Apache TVM community to scale AI
There are many types of neural networks being employed by machine learning practitioners today. In this blog we look at three specific types of neural networks to broaden our understanding of how different topologies and layers can solve different problems. We then look at how neural networks...
In contrast to the Qualcomm Neural Processing SDK (which can accelerate a DLC model converted from TF, Caffe, Caffe2, or ONNX), the QRB5165 supports accelerating TFLite models on the Hexagon DSP, GPU, and CPU via NNAPI. Although NNAPI from Google is specific to Android, the NN framework (along with NNHAL 1...
The Qualcomm Neural Processing SDK for AI (formerly known as the Snapdragon Neural Processing Engine (SNPE)) is a software-accelerated, inference-only runtime engine for executing deep neural networks. With the SDK, users can: Execute an arbitrarily deep neural network...
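To make "arbitrarily deep" concrete, here is a framework-free sketch (my own toy code, not the SDK's API) of inference over a stack of fully connected layers whose depth is limited only by the length of the layer list:

```python
# Toy inference over an arbitrarily deep stack of fully connected
# layers with ReLU activations; each layer is a (weights, biases) pair.

def relu(v):
    return [max(0.0, x) for x in v]

def dense(v, weights, biases):
    """One layer: out[j] = relu(sum_i v[i] * weights[i][j] + biases[j])."""
    return relu([
        sum(v[i] * weights[i][j] for i in range(len(v))) + biases[j]
        for j in range(len(biases))
    ])

def run_network(x, layers):
    """Execute the layers in sequence, however many there are."""
    for weights, biases in layers:
        x = dense(x, weights, biases)
    return x

# Two identity layers: a non-negative input passes through unchanged.
identity = ([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
out = run_network([3.0, 5.0], [identity, identity])
```

A real runtime like the SDK's adds graph optimization and hardware dispatch, but the core idea of executing a sequence of layers of unbounded depth is the same.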
Visualization in GoogleNet Inception and ResNet: Visualization helps in exploring the layers responsible for extracting a specific feature. In the process of building a CNN model, visualizing layers is as important as calculating the training error (accuracy) and validation error. It also helps in...
Three classic network architectures for combining layers to increase accuracy: In essence, a neural network replicates the process that humans undergo in learning from their mistakes. In addition to that learning process, convolution in a CNN performs feature extraction. Here are...
Inside the convolution and pooling layers of a CNN: Why do we use the biological concept and term “neural network” in solving real-time problems within artificial intelligence? In an experiment with cats in the mid-1900s, researchers Hubel and Wiesel determined that neurons are structurally...
Applying convolutional neural networks to computer vision: CNNs are useful in extracting features from images. The following pages describe the approaches and differentiators among the most common architectures used in computer vision. Deep Learning and Convolutional Neural Networks for...

Showing 1 - 20 out of 51