BiCo-Fusion: Bidirectional Complementary LiDAR-Camera Fusion for Semantic- and Spatial-Aware 3D Object Detection
Yang Song
AI Thrust, HKUST(GZ)
Addison Lin Wang
School of Electrical and Electronic Engineering (EEE)
Nanyang Technological University (NTU)
Abstract
3D object detection is an important task that has been widely applied in autonomous driving. A recent trend for this task is to fuse multi-modal inputs, i.e., LiDAR point clouds and camera images. Under this trend, recent methods fuse the two modalities by unifying them in the same 3D space. However, during direct fusion in a unified space, the drawbacks of both modalities (LiDAR features lack detailed semantic information and camera features lack accurate 3D spatial information) are also preserved, diluting the semantic and spatial awareness of the final unified representation. To address this issue, this letter proposes a novel bidirectional complementary LiDAR-camera fusion framework, called BiCo-Fusion, that achieves robust semantic- and spatial-aware 3D object detection. The key insight is to fuse LiDAR and camera features in a bidirectional complementary way, enhancing the semantic awareness of the LiDAR features and the 3D spatial awareness of the camera features. The enhanced features from both modalities are then adaptively fused to build a semantic- and spatial-aware unified representation. Specifically, we introduce Pre-Fusion, consisting of a Voxel Enhancement Module (VEM) that enhances the semantic awareness of voxel features using 2D camera features, and an Image Enhancement Module (IEM) that enhances the 3D spatial awareness of camera features using 3D voxel features. We then introduce Unified Fusion (U-Fusion) to adaptively fuse the enhanced features from the previous stage into a unified representation. Extensive experiments demonstrate the superiority of our BiCo-Fusion over prior art.
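To make the adaptive fusion step more concrete, below is a minimal sketch of one way such modality weighting could be realized: a learned per-voxel gate that blends the enhanced LiDAR and camera features. The module name, the gating design, and the tensor layout are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of adaptive fusion between enhanced LiDAR and camera
# features: a learned per-voxel gate weights the two modalities before blending.
# This is an assumption for illustration, not the paper's formula.
import torch
import torch.nn as nn

class AdaptiveFusionSketch(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Predict a per-voxel weight in [0, 1] from the concatenated features.
        self.gate = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, lidar_feats: torch.Tensor, camera_feats: torch.Tensor) -> torch.Tensor:
        # Both inputs are assumed to be aligned in the same voxel grid: (B, C, Z, Y, X).
        w = self.gate(torch.cat([lidar_feats, camera_feats], dim=1))
        return w * lidar_feats + (1.0 - w) * camera_feats
```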
Overall framework of our BiCo-Fusion
BiCo-Fusion first extracts features from LiDAR and camera data using modality-specific encoders. In Pre-Fusion, the LiDAR voxel features are enhanced with camera semantics by the VEM, and the camera features are made spatial-aware by the IEM. These enhanced features are then adaptively fused in the Unified Fusion (U-Fusion) stage. Finally, the fused features are flattened into bird's-eye-view (BEV) features, which are fed to the detection head for the final predictions.
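As a reading aid, the sketch below mirrors this data flow in PyTorch-style code. The submodule roles, tensor shapes, and the height-flattening step are illustrative assumptions, not the released implementation.

```python
# Minimal sketch of the BiCo-Fusion data flow described above.
# All submodule roles and shapes are illustrative assumptions, not the official code.
import torch
import torch.nn as nn

class BiCoFusionSketch(nn.Module):
    def __init__(self, lidar_encoder, camera_encoder, vem, iem, u_fusion, bev_head):
        super().__init__()
        self.lidar_encoder = lidar_encoder    # voxelizes points -> 3D voxel features
        self.camera_encoder = camera_encoder  # image backbone -> 2D camera features
        self.vem = vem            # Voxel Enhancement Module: adds camera semantics to voxels
        self.iem = iem            # Image Enhancement Module: adds LiDAR spatial cues to images
        self.u_fusion = u_fusion  # adaptive fusion into a unified representation
        self.bev_head = bev_head  # detection head operating on BEV features

    def forward(self, points, images):
        voxel_feats = self.lidar_encoder(points)     # (B, C_l, Z, Y, X)
        image_feats = self.camera_encoder(images)    # (B, N_cam, C_c, H, W)

        # Pre-Fusion: bidirectional complementary enhancement.
        voxel_feats_sem = self.vem(voxel_feats, image_feats)   # semantic-aware voxels
        image_feats_spa = self.iem(image_feats, voxel_feats)   # spatial-aware images

        # Unified Fusion: adaptively merge the enhanced features.
        unified = self.u_fusion(voxel_feats_sem, image_feats_spa)  # (B, C, Z, Y, X)

        # Flatten the height dimension to obtain BEV features for the head.
        b, c, z, y, x = unified.shape
        bev = unified.reshape(b, c * z, y, x)
        return self.bev_head(bev)
```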