BiCo-Fusion: Bidirectional Complementary LiDAR-Camera Fusion for Semantic- and Spatial-Aware 3D Object Detection


  • Yang Song
    AI Thrust, HKUST(GZ)
       

  • Addison Lin Wang
    AI & CMA Thrust, HKUST(GZ)
    Dept. of CSE, HKUST

Abstract

3D object detection is an important task that has been widely applied in autonomous driving. Recently, fusing multi-modal inputs, i.e., LiDAR and camera data, to perform this task has become a new trend. Existing methods, however, either ignore the sparsity of LiDAR features or fail to simultaneously preserve the original spatial structure of LiDAR and the semantic density of camera features due to the modality gap. To address these issues, this letter proposes a novel bidirectional complementary LiDAR-camera fusion framework, called BiCo-Fusion, that achieves robust semantic- and spatial-aware 3D object detection. The key insight is to mutually fuse the multi-modal features, enhancing the semantics of the LiDAR features and the spatial awareness of the camera features, and to adaptively select features from both modalities to build a unified 3D representation. Specifically, we introduce Pre-Fusion, consisting of a Voxel Enhancement Module (VEM) that enhances the semantics of voxel features with 2D camera features and an Image Enhancement Module (IEM) that enhances the spatial characteristics of camera features with 3D voxel features. Both VEM and IEM are bidirectionally updated to effectively reduce the modality gap. We then introduce Unified Fusion to adaptively weight and select features from the enhanced LiDAR and camera streams to build a unified 3D representation. Extensive experiments demonstrate the superiority of our BiCo-Fusion over prior arts.
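To make the adaptive selection in Unified Fusion concrete, below is a minimal sketch of one way such a weighting could work, assuming the enhanced LiDAR and camera features have already been aligned to a shared voxel grid with the same channel width. The module and variable names (`UnifiedFusionSketch`, `gate`, etc.) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class UnifiedFusionSketch(nn.Module):
    """Hypothetical adaptive weighting between two aligned feature streams."""

    def __init__(self, channels: int):
        super().__init__()
        # Predicts a per-voxel, per-channel weight from both modalities.
        self.gate = nn.Sequential(
            nn.Linear(2 * channels, channels),
            nn.Sigmoid(),
        )

    def forward(self, lidar_feat: torch.Tensor, cam_feat: torch.Tensor) -> torch.Tensor:
        # lidar_feat, cam_feat: (N_voxels, C) features enhanced by Pre-Fusion.
        w = self.gate(torch.cat([lidar_feat, cam_feat], dim=-1))
        # Convex combination: w selects the LiDAR stream, (1 - w) the camera stream.
        return w * lidar_feat + (1.0 - w) * cam_feat
```

The gating keeps the output a convex combination of the two streams, so neither modality's signal is discarded outright; the network learns where geometry (LiDAR) or semantics (camera) should dominate.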

Overall framework of our BiCo-Fusion

BiCo-Fusion first extracts features from the LiDAR and camera data with modality-specific encoders. In Pre-Fusion, the LiDAR voxel features are enhanced with camera semantics by the VEM, and the camera features are made spatial-aware by the IEM. The enhanced features are then fused adaptively in the Unified Fusion stage. Finally, the fused features are flattened into BEV features, which are fed to the detection head for the final predictions. A schematic of this pipeline is sketched below.
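The following sketch mirrors the stage ordering described above. The encoder, VEM/IEM, fusion, and head modules are injected placeholders standing in for the paper's components; their names and signatures are assumptions for illustration only.

```python
import torch.nn as nn


class BiCoFusionPipelineSketch(nn.Module):
    """Schematic forward pass: encoders -> Pre-Fusion -> Unified Fusion -> BEV head."""

    def __init__(self, lidar_encoder, camera_encoder, vem, iem, unified_fusion, bev_head):
        super().__init__()
        self.lidar_encoder = lidar_encoder    # e.g., a voxel-based point cloud backbone
        self.camera_encoder = camera_encoder  # e.g., a 2D image backbone
        self.vem = vem                        # injects camera semantics into voxel features
        self.iem = iem                        # injects voxel geometry into camera features
        self.unified_fusion = unified_fusion  # adaptive selection across both streams
        self.bev_head = bev_head              # BEV flattening + detection head

    def forward(self, points, images):
        voxel_feat = self.lidar_encoder(points)
        img_feat = self.camera_encoder(images)
        # Pre-Fusion: bidirectional enhancement across modalities.
        voxel_feat = self.vem(voxel_feat, img_feat)
        img_feat = self.iem(img_feat, voxel_feat)
        # Unified Fusion: adaptively merge into one 3D representation.
        fused = self.unified_fusion(voxel_feat, img_feat)
        # Flatten to BEV features and run the detection head.
        return self.bev_head(fused)
```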


Experimental results


BibTeX