Doctoral Defense - Yunfei Long


About the Event

The Department of Electrical and Computer Engineering  

Michigan State University  

Ph.D. Dissertation Defense  

Thursday, January 30, 2025, 1:00 pm  

Electrical and Computer Engineering Conference Room - 2219 Engineering Building and Zoom

Contact the department or the advisor for Zoom information.

 

ABSTRACT 

PERCEPTION VIA RADAR-CAMERA FUSION FOR AUTONOMOUS DRIVING  

By: Yunfei Long

Advisor: Daniel Morris

 

Reliable sensing of the environment is a key bottleneck in autonomous driving. Among the variety of available sensors, automotive radar stands out for its low cost, robustness to adverse weather, and ability to capture motion. Nevertheless, radar has gained little traction in the computer vision community because its measurements are sparse, low-dimensional, and inaccurate. This work takes a different view of radar's role in perception, exploring how radar can enhance monocular perception across three vision tasks: depth completion, velocity estimation, and 3D object detection.

Depth completion with radar-camera fusion aims to predict dense depths for image pixels given an image and sparse radar points. To handle the ambiguous geometric associations between raw radar pixels and image pixels, we propose the radar-camera pixel depth association (RC-PDA), which maps each radar pixel to nearby image pixels sharing the same depth. We train a model to predict the RC-PDA and use it to enhance and densify radar returns for depth completion (a conceptual sketch of this densification step follows the abstract).

Full velocity estimation for radar points targets the tangential velocity component, which is absent from radar measurements. We present a closed-form solution that recovers point-wise full velocity by combining the radar Doppler velocity with the corresponding optical flow in the image (a simplified sketch of the solve follows the abstract).

3D object detection aims to estimate object categories and 3D bounding boxes; here we focus on using radar to improve the position estimates of monocular detections. To address the discrepancy between radar hits and object centers, we build a model that predicts point-wise 3D object centers, which are then matched with monocularly estimated centers for depth fusion. To cope with the variability in where radar points reflect off a target, we build a second model that estimates radar hit distributions conditioned on object properties predicted by a monocular detector, and spatially matches these distributions against the actual radar hits in the neighborhood of each monocular detection (a simplified matching sketch follows the abstract). This method reveals how radar hits are distributed under different conditions and achieves interpretable position estimation via radar-camera fusion.

Experiments show that the proposed methods achieve state-of-the-art performance on each of these vision tasks via radar-camera fusion. We believe this work contributes new practical solutions for radar-based perception in autonomous driving.
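As a rough illustration of how a predicted association can densify radar returns, the following Python sketch copies each radar depth to the image pixels that the association scores link it to. This is a minimal sketch under stated assumptions, not the dissertation's implementation: the array layout, neighborhood parameterization, and threshold are all hypothetical, and the RC-PDA scores would come from a trained network rather than being given.

import numpy as np

def densify_radar_depth(radar_depth, pda, threshold=0.5):
    # Conceptual RC-PDA-style densification (hypothetical interface).
    # radar_depth : (H, W) depths, 0 where no radar return projects
    # pda         : (H, W, K) association scores between each radar
    #               pixel and K neighborhood offsets (assumed layout)
    H, W, K = pda.shape
    k = int(np.sqrt(K))                      # e.g. K = 25 -> 5x5 patch
    offsets = [(dy, dx)
               for dy in range(-(k // 2), k // 2 + 1)
               for dx in range(-(k // 2), k // 2 + 1)]

    dense = np.zeros_like(radar_depth)
    ys, xs = np.nonzero(radar_depth)         # pixels with radar hits
    for y, x in zip(ys, xs):
        for j, (dy, dx) in enumerate(offsets):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and pda[y, x, j] > threshold:
                # Propagate the radar depth to image pixels predicted
                # to lie on the same surface at the same depth.
                dense[ny, nx] = radar_depth[y, x]
    return dense

The densified depth map would then serve as an enhanced radar input for depth completion.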

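The closed-form velocity recovery can be illustrated with elementary pinhole geometry: differentiating the projection equations in time gives two linear constraints on the 3D velocity, and the Doppler measurement supplies a third, so a single 3-by-3 linear solve yields the full velocity. The sketch below assumes a static pinhole camera with known focal length and point depth (real driving data would also require handling camera ego-motion); the function name and interface are hypothetical.

import numpy as np

def full_velocity_static_camera(p, flow, v_r, f):
    # Solve for 3D velocity from Doppler speed plus optical flow.
    # p    : (3,) radar point (x, y, z) in camera coordinates, z > 0
    # flow : (2,) flow rate (du, dv) of the point's image projection,
    #        in pixels per second
    # v_r  : Doppler (radial) speed measured by the radar, m/s
    # f    : focal length in pixels
    x, y, z = p
    du, dv = flow
    r_hat = p / np.linalg.norm(p)            # radial unit vector

    # Differentiating u = f*x/z and v = f*y/z in time gives two linear
    # constraints on velocity (vx, vy, vz); Doppler adds the third.
    A = np.array([
        [f / z, 0.0,   -f * x / z**2],       # du = (f/z)vx - (fx/z^2)vz
        [0.0,   f / z, -f * y / z**2],       # dv = (f/z)vy - (fy/z^2)vz
        [r_hat[0], r_hat[1], r_hat[2]],      # v_r = r_hat . v
    ])
    b = np.array([du, dv, v_r])
    return np.linalg.solve(A, b)             # full velocity (vx, vy, vz)

# Example: a point 20 m straight ahead moving laterally at 1 m/s gives
# 50 px/s of horizontal flow at f = 1000 px and zero Doppler speed.
p = np.array([0.0, 0.0, 20.0])
v = full_velocity_static_camera(p, flow=(50.0, 0.0), v_r=0.0, f=1000.0)
# v is approximately [1.0, 0.0, 0.0]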
 
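The spatial matching of predicted radar-hit distributions against observed hits can be pictured as a likelihood search over candidate object positions, for example positions swept along the camera viewing ray. In the sketch below, a fixed isotropic Gaussian stands in for the learned, condition-dependent distribution, and all names and parameters are hypothetical.

import numpy as np

def refine_center(candidates, radar_hits, sigma=1.0):
    # Score each candidate bird's-eye-view object center by the
    # log-likelihood of nearby radar hits under a hit distribution
    # centered on it; return the best-supported candidate.
    # candidates : (C, 2) candidate centers, e.g. the monocular center
    #              swept along the viewing ray (hypothetical setup)
    # radar_hits : (N, 2) radar hits near the detection
    scores = np.empty(len(candidates))
    for i, c in enumerate(candidates):
        sq_dist = np.sum((radar_hits - c) ** 2, axis=1)
        # Isotropic Gaussian stand-in for the learned distribution.
        scores[i] = np.sum(-sq_dist / (2.0 * sigma ** 2))
    return candidates[int(np.argmax(scores))]

Because the selected position is the one under which the modeled hit pattern best explains the actual returns, the fusion step remains interpretable.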

Persons with disabilities have the right to request and receive reasonable accommodation. Please call the Department of Electrical and Computer Engineering at 355-5066 at least one day prior to the seminar; requests received after that date will be met when possible.

Tags

Doctoral Defenses

Date

Thursday, January 30, 2025

Time

1:00 PM

Location

2219 Engineering Building and Zoom

Organizer

Yunfei Long