Object detection is a challenging but important task for computer vision, especially in the remote sensing domain, where a single image may contain billions of pixels. Among the many methods for object detection, the You Only Look Once (YOLO) algorithm has become a leading technique in recent years, gaining popularity for its ability to perform real-time object detection. While YOLO and its successors have shown excellent results in real-time detection, many object detection tasks demand higher precision and do not require real-time performance. In this paper, YOLOv3 is compared to other deep neural networks (DNNs) for detecting vehicle groups in very high resolution remote sensing imagery (VHR-RSI). A unique centerpoint-based dataset is developed by leveraging a novel data framework and combining quality-assured chips with regions of interest from the xView Challenge Dataset. This dataset is then used to train state-of-the-art models, including two Neural Architecture Search (NAS) variant DNNs for object detection. Additionally, a blind test set is developed to further compare our methods with the YOLOv3 algorithm. The results show that our method detects vehicle groups with a lower false positive rate (FPR) and a higher true positive rate (TPR) than state-of-the-art YOLOv3 models, achieving a 26.70% reduction in F1 score error rate over YOLOv3 on the blind test set.