GPU Utilization - SSD Mobile-net training (Object Detection API)
shubham.jain
Join Date: 28 May 20
Posts: 4
Posted: Fri, 2020-07-10 05:19

While trying to train SSD MobileNet ("ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03") using TensorFlow's Object Detection API (https://github.com/tensorflow/models/tree/master/research/object_detection), I am not able to utilize my GPUs to their full capacity.

I am fine-tuning SSD MobileNet V2 on a custom dataset, but during training only ~160 MiB of memory is used on each GPU. I am training on an NVIDIA GTX 1080 Ti with a batch size of 24.
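For reference, the batch size lives in the train_config block of the pipeline config file named in the training command; raising it is the usual knob for putting more of each GPU's memory to work, assuming the larger batches still fit. A minimal fragment (only the batch_size value comes from this post; the elided fields are whatever the rest of your config already contains):

```
train_config {
  batch_size: 24
  ...
}
```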

I used the following command for training:

python train.py --logtostderr --train_dir=training/ --pipeline_config_path=ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.config

Please refer to the "nvidia-smi" output below:

+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 108...  Off  | 00000000:86:00.0 Off |                  N/A |
| 23%   25C    P8    16W / 250W |    161MiB / 11172MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
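The two numbers that matter in that row are 161MiB used and 0% utilization. To watch them continuously while a training run is going, here is a small stdlib-only sketch (not from the original post; the helper names are mine, but the nvidia-smi query flags are real) that polls the same counters nvidia-smi displays:

```python
import csv
import io
import subprocess

def parse_gpu_stats(csv_text):
    """Parse the CSV emitted by `nvidia-smi --format=csv,noheader,nounits`
    into a list of (memory_used_mib, utilization_pct) tuples, one per GPU."""
    rows = []
    for row in csv.reader(io.StringIO(csv_text)):
        mem, util = (field.strip() for field in row)
        rows.append((int(mem), int(util)))
    return rows

def query_gpu_stats():
    """Ask nvidia-smi for per-GPU memory use and utilization.

    Requires the NVIDIA driver to be installed on the machine.
    """
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=memory.used,utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_gpu_stats(out)
```

Calling query_gpu_stats() in a loop (e.g. once a second from a second terminal) makes it easy to see whether utilization ever rises above 0% once train.py starts stepping; a GPU that stays at 0% with only ~161 MiB allocated often means the process is not actually placing ops on that GPU at all.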

Can anyone please help me in figuring out the method to efficiently utilize the GPU in the training process?


