FaceBlock

Skill Level: Intermediate
Area of Focus: Computer Vision, Artificial Intelligence
Operating System: Android
Platform/Hardware: Artificial Intelligence

Computer vision privacy protection tool for videos

FaceBlock is a tool that is designed to allow individuals to record themselves without exposing the identities of those around them. Using both deep neural networks and classical machine learning methods, FaceBlock uses facial detection and tracking to block unwanted faces with custom emojis and icons in videos and streams.
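At a high level, a detect-and-track pipeline like this runs a face detector on each frame and then associates new detections with faces from the previous frame, so an emoji stays attached to the same face over time. A minimal sketch of that association step using intersection-over-union (IoU) matching is shown below; the function names and the 0.3 threshold are illustrative assumptions, not taken from the app's source:

```python
def iou(a, b):
    # Boxes given as (x1, y1, x2, y2) corner coordinates.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def match_tracks(prev_boxes, new_boxes, thresh=0.3):
    """Greedily match each new detection to the best-overlapping
    previous box, so track identities carry over between frames."""
    matches = {}   # index of new box -> index of previous box
    used = set()
    for j, nb in enumerate(new_boxes):
        best, best_iou = None, thresh
        for i, pb in enumerate(prev_boxes):
            if i in used:
                continue
            v = iou(pb, nb)
            if v > best_iou:
                best, best_iou = i, v
        if best is not None:
            matches[j] = best
            used.add(best)
    return matches
```

In practice the app combines a neural detector with a classical tracker, but the matching idea is the same: detections that overlap a known face keep that face's emoji.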

With the ubiquity of smartphones, live video streaming has exploded in popularity on platforms like Facebook, Instagram, and YouTube. As more and more people are caught on video every day, significant concerns have been raised over online privacy. We wanted to utilize leading edge solutions from Qualcomm Technologies to help protect the privacy of others in an increasingly streamed and shared world.

[Demo: real-time face tracking and object detection in the app]

Compatibility

In order to run the detection model in real time, GPU acceleration must be used. This is only supported on the Galaxy S8/S8+ and Galaxy S9. The project will likely work on any Android M+ phone. DSP acceleration is not available in this version of the project.

Dependencies

  • Android SDK v27
  • Android NDK v16b
  • TensorFlow v1.8 (optional)
  • TensorFlow Object Detection API (November 17, 2017 Release) (optional)
  • OpenCV4Android v3.2.4 (optional)
  • Qualcomm Neural Processing SDK v1.15 (optional)

The optional items are listed for users who wish to train and deploy their own TensorFlow-based models in the app. They are not required if you simply want to use the provided defaults.

Build Instructions

Begin by setting up your Android development environment and installing the above dependencies. Ensure you have the correct versions of the Android SDK, Android NDK, and Android Studio installed. The converted DLC models are supplied with this project, so this portion of the guide covers only the build process.

After setting up your environment, import your project through Android Studio as an existing project.

Building OpenCV4Android with Tracking
If you are not using the prebuilt APK, you must build OpenCV for Android yourself in order to get access to the "tracking" module. The high-level overview of this process is as follows. First, clone the OpenCV project and the OpenCV extra modules repository:

git clone https://github.com/opencv/opencv.git
git clone https://github.com/opencv/opencv_contrib.git

From here, build OpenCV4Android using the supplied build script, found in opencv/platforms/android. For this project, Android SDK Build-tools 27.0.3 and Android API Level 27 were used. For this step, make sure you have the Ninja build system and the Apache Ant command line tool installed. An example of running the build script is as follows:

export ANDROID_NATIVE_API_LEVEL=27
export ANDROID_PROJECTS_BUILD_TYPE=ANT
./build_sdk.py \
    --sdk_path YOUR_SDK_LOCATION \
    --ndk_path YOUR_NDK_LOCATION \
    --extra_modules_path YOUR_OPENCV_CONTRIB_LOCATION/modules \
    --config ndk-16.config.py YOUR_OPENCV_LOCATION/build_android \
    YOUR_OPENCV_LOCATION

Adding OpenCV4Android to the project
After the OpenCV Android build is complete, we can now import OpenCV as a module in Android Studio.

To import OpenCV4Android as a module, go to the menu "File" -> "New" -> "Module" -> "Import Gradle project". Select the sdk directory in the final build location that you specified, and name the module opencv.

Then, add the dependency into the application module by going to "Open Module Settings" (F4) -> "Dependencies" tab.

Building complete project
After syncing has completed, build the project and deploy it to one of the devices listed above.

Advanced Instructions
This repository contains the TensorFlow checkpoints and SNPE DLC files required to run the app. However, if you wish to train your own model, please refer to the instructions located in the TensorFlow Models Object Detection API repository.

We trained a MobileNetV1 SSD model on the WIDER dataset. Your dataset of choice must be converted to the TFRecord format for training. To convert the model using the SNPE converter, you must first freeze it with the Object Detection export script.
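Before a dataset can be converted to TFRecords, its annotations have to be parsed into per-image bounding boxes. The sketch below assumes a WIDER-style annotation layout (a filename line, a face-count line, then one "x y w h ..." line per face); this layout is an assumption about the dataset format, and the function name is illustrative:

```python
def parse_wider_annotations(lines):
    """Parse WIDER-style annotation text into {filename: [boxes]}.

    Assumed layout (an assumption, not from this project's code):
    a filename line, then a face-count line, then one line per face
    beginning with 'x y w h'. Boxes are returned as (x, y, w, h).
    """
    records = {}
    it = iter(lines)
    for filename in it:
        filename = filename.strip()
        if not filename:
            continue
        count = int(next(it))
        boxes = []
        for _ in range(count):
            parts = next(it).split()
            x, y, w, h = map(int, parts[:4])
            boxes.append((x, y, w, h))
        records[filename] = boxes
    return records
```

Each parsed record would then be serialized into a tf.train.Example and written to a TFRecord file with the Object Detection API's dataset tooling.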

Usage

When you open FaceBlock, the app automatically blocks all faces. Turn off Block All, then tap the screen to unblock the largest face in the center. Tap the emoji icon in the UI to select a different emoji. Use the bar underneath the emoji to change the size of all emojis blocking faces. Turn on Rotate to block faces correctly when the phone is rotated counter-clockwise. The bar at the bottom sets the confidence threshold the app requires before recognizing something as a face.
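The blocking logic described above can be sketched as two steps: drop detections below the confidence threshold, then, when Block All is off, spare the largest face under the selected point. The function and parameter names below are hypothetical, chosen only to illustrate the idea:

```python
def faces_to_block(detections, threshold, unblock_point=None):
    """Return the boxes that should be covered with emojis.

    detections: list of (confidence, (x, y, w, h)) tuples.
    unblock_point: (x, y) location of a tap, or None to block all.
    Names and structure are illustrative, not the app's actual API.
    """
    # Step 1: keep only detections at or above the confidence threshold.
    faces = [box for conf, box in detections if conf >= threshold]
    if unblock_point is None or not faces:
        return faces

    px, py = unblock_point

    def contains(box):
        x, y, w, h = box
        return x <= px <= x + w and y <= py <= y + h

    # Step 2: among faces containing the point, spare the largest one.
    candidates = [b for b in faces if contains(b)]
    if not candidates:
        return faces
    spare = max(candidates, key=lambda b: b[2] * b[3])
    return [b for b in faces if b is not spare]
```

Raising the threshold trades missed faces for fewer false positives (random objects briefly covered by an emoji), which is exactly the trade-off the bottom slider exposes.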

Contributors

Oles Andrienko ([email protected]), Machine Learning Software Engineering Intern, Qualcomm Technologies, Inc.
Alexander Li ([email protected]), Machine Learning Software Engineering Intern, Qualcomm Technologies, Inc.
Zhiyu Liang ([email protected]), Machine Learning Software Engineering Intern, Qualcomm Technologies, Inc.