The Radar Object Detection (ROD) challenge on Radio Frequency Images for Autonomous Driving at ACM ICMR 2021 aims to detect and classify objects in the radar's field of view (FoV), building on the organizers' previous work and their self-collected ROD2021 dataset.
Radar is commonly used in autonomous or assisted driving systems, and it is usually considered a robust and cost-effective sensor even in adverse driving scenarios, e.g., weak/strong lighting and bad weather. However, object detection on radar data is not well explored in either academia or industry. The reasons are three-fold:
- Radar signals, especially radio frequency (RF) data, are not an intuitive type of data like RGB images, so their role in autonomous driving is seriously underestimated.
- Very few public datasets with proper object annotations are available, making it difficult to address the problem with powerful machine learning methods.
- It is noticeably difficult to extract semantic information for object classification from radar signals.
Facing these challenges, the team proposed a novel cross-modality deep learning framework for radar object detection built on the Squeeze-and-Excitation network, which aims to provide more powerful feature representations. Moreover, a novel noisy-detection handling approach is explored in their study to increase the model's robustness to noise. Finally, a novel weighted location fusion strategy is introduced in their framework to further improve detection performance. To empirically investigate the effectiveness of the proposed framework, they conducted extensive experiments on the 2021 ICMR ROD challenge. The results suggest that their framework outperforms related approaches; their method ranked 3rd on the final leaderboard, with an average precision (AP) of 76.1%.
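For context, the Squeeze-and-Excitation (SE) mechanism named above recalibrates channel-wise features: it "squeezes" each channel to a scalar via global average pooling, passes the result through a small bottleneck of two fully connected layers, and rescales the channels with the resulting sigmoid weights. Below is a minimal NumPy sketch of that idea; the random weights, the `reduction` ratio of 4, and the input shape are illustrative assumptions only, not the team's actual learned parameters or architecture.

```python
import numpy as np

def se_block(x, reduction=4):
    """Squeeze-and-Excitation applied to a feature map x of shape (C, H, W).

    The two FC weight matrices are random here purely for illustration;
    in a real network they would be learned end-to-end.
    """
    c, h, w = x.shape
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1  # squeeze FC: C -> C/r
    w2 = rng.standard_normal((c, c // reduction)) * 0.1  # excite FC: C/r -> C

    z = x.mean(axis=(1, 2))                # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)            # FC + ReLU -> (C/r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))    # FC + sigmoid -> per-channel weights in (0, 1)
    return x * s[:, None, None]            # excite: rescale each channel

x = np.ones((8, 4, 4))   # hypothetical feature map: 8 channels, 4x4 spatial
y = se_block(x)
```

Because the sigmoid gates lie strictly in (0, 1), the block can only attenuate channels relative to the input, letting the network emphasize informative channels, which is the stronger feature representation the framework relies on.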