While trying to train SSD MobileNet ("ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03") with TensorFlow's Object Detection API (https://github.com/tensorflow/models/tree/master/research/object_detection), I am unable to utilize my GPUs to anywhere near their full capacity.
I am fine-tuning SSD MobileNet V2 on a custom dataset, but during training only ~160 MiB of memory is used on each GPU. I am training on NVIDIA GTX 1080 Ti GPUs with a batch size of 24.
I used the following command for training:
python train.py --logtostderr --train_dir=training/ --pipeline_config_path=ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.config
Please refer to the relevant row of the nvidia-smi output below:
+-------------------------------+----------------------+----------------------+
| 2 GeForce GTX 108... Off | 00000000:86:00.0 Off | N/A |
| 23% 25C P8 16W / 250W | 161MiB / 11172MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
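To quantify how little of the card is in use, the Memory-Usage field from that nvidia-smi row can be parsed directly (a small sketch; the row string is copied from the output above):

```python
import re

# The per-GPU row from the nvidia-smi output above.
smi_row = "| 23%   25C    P8    16W / 250W |    161MiB / 11172MiB |      0%      Default |"

# Extract used and total memory (MiB) from the "Memory-Usage" column.
used_mib, total_mib = map(int, re.search(r"(\d+)MiB / (\d+)MiB", smi_row).groups())
utilization = used_mib / total_mib
print(f"{used_mib} MiB of {total_mib} MiB -> {utilization:.1%} of GPU memory in use")
# -> 161 MiB of 11172 MiB -> 1.4% of GPU memory in use
```

So only about 1.4% of the 11 GiB card is allocated, and GPU-Util sits at 0%, which is what makes me suspect the GPU is barely being touched during training.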
Can anyone help me figure out how to make training utilize the GPU efficiently?