A novel deep learning model for visibility correction of environmental factors in autonomous vehicles
(1) Dougherty Valley High School, San Ramon, California, (2) DataCore Software, Fort Lauderdale, Florida
https://doi.org/10.59720/22-101

Intelligent vehicles use a combination of video-based object detection and radar data to traverse their surrounding environments safely. However, because even momentary missteps in these systems can cause devastating collisions, the margin of error in their software is small. Furthermore, adverse weather conditions such as rain, snow, and fog sharply increase the likelihood of accidents by reducing visibility and lengthening detection times. In this paper, we hypothesized that a novel object detection system that improves detection accuracy and speed during adverse weather conditions would outperform industry alternatives on average. To do so, the model employs multiple classical deep learning techniques in two separate sub-modules: a Visibility Correction Module (VCM) and an Object Detection Module (ODM). First, the model uses image classification and masking to identify environmental factors frame by frame within an image, and then applies a novel dimensionality reduction network to remove their effects. Next, the corrected images are analyzed to classify and label objects within each frame. The proposed algorithm achieved an average accuracy of 89.72% and outperformed industry alternatives in mean accuracy and detection time, demonstrating the validity and efficiency of using dimensionality reduction to improve object detection.
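The two-stage pipeline described above (a visibility correction stage followed by detection on the corrected frames) can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the function names, the SVD-based rank truncation standing in for the dimensionality reduction network, and the thresholding stub standing in for the ODM are all assumptions made for demonstration.

```python
import numpy as np

def visibility_correction(frame, n_components=8):
    """Toy stand-in for the VCM: reduce a frame's dimensionality by
    truncating its singular value decomposition, keeping only the
    top components. (Illustrative only; the paper's VCM is a learned
    dimensionality reduction network, not a fixed SVD.)"""
    rows = frame.astype(float)
    mean = rows.mean(axis=0)
    centered = rows - mean
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    s = s.copy()
    s[n_components:] = 0.0  # discard low-variance components
    return (u * s) @ vt + mean

def object_detection(frame, threshold=0.5):
    """Toy stand-in for the ODM: flag bright pixels as 'object'.
    A real ODM would classify and label objects per frame."""
    return frame > threshold

def pipeline(frame):
    """Correct the frame first, then detect on the corrected image,
    matching the VCM -> ODM ordering described in the abstract."""
    corrected = visibility_correction(frame)
    return object_detection(corrected)
```

The key design point the sketch preserves is the ordering: detection never sees the raw weather-degraded frame, only the output of the correction stage.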