Abstract
Autonomous vehicles must accurately identify a variety of road objects, including cars, pedestrians, and bicycles, to operate safely in inclement weather and limited visibility. Millimeter-wave (mmWave) radar remains reliable in these conditions, whereas LiDAR and camera systems frequently fail. Despite this advantage, existing radar-based detection systems have limitations: they are too computationally costly for real-time use, are often restricted to single-class identification, and handle the sparse, irregular structure of radar point clouds poorly when processed with standard grid-based convolutional neural networks (CNNs). To address the high computational demands that hinder embedded deployment, we present a real-time, low-latency framework for multiclass object classification using mmWave radar. Our method projects radar point clouds into 2-D top-view (TV) and front-view (FV) representations, with the TV projection providing better spatial organization than the FV projection. For feature extraction, we apply the graph Fourier transform (GFT), which uses the graph Laplacian to map the irregular input into a spectral domain, effectively capturing both macrostructures and microdetails. Combined with lightweight classifiers, this approach yielded strong results: random forest (RF) and logistic regression (LR) reached 99.51% and 99.15% accuracy, respectively, with LR achieving an inference latency of just 15 ms/frame. The entire system was implemented on a PYNQ-ZU FPGA, demonstrating scalability, efficiency, and real-time performance for robust multiclass road object classification in self-driving vehicles.
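The GFT-based feature extraction summarized above can be sketched as follows. This is a minimal illustration only, assuming a k-nearest-neighbor graph with Gaussian edge weights and a combinatorial Laplacian; the `gft_features` helper, its parameters, and the toy data are assumptions for exposition, not the paper's exact pipeline:

```python
import numpy as np

def gft_features(points, values, k=4, n_coeffs=8):
    """Hypothetical GFT feature extractor: build a k-NN graph over 2-D
    projected radar points, form the combinatorial Laplacian L = D - W,
    and project a per-point signal onto L's eigenvectors (the graph
    Fourier basis). Low-order coefficients capture macrostructure;
    higher-order ones capture fine detail."""
    n = len(points)
    # Pairwise Euclidean distances between all points.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    sigma = d[d > 0].mean()  # bandwidth for Gaussian edge weights
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]  # k nearest neighbors, skipping self
        W[i, nbrs] = np.exp(-(d[i, nbrs] / sigma) ** 2)
    W = np.maximum(W, W.T)                 # symmetrize the adjacency
    L = np.diag(W.sum(axis=1)) - W         # combinatorial graph Laplacian
    # Graph Fourier basis: eigenvectors of the symmetric Laplacian.
    _, U = np.linalg.eigh(L)
    # GFT of the signal; keep the first n_coeffs spectral coefficients.
    return (U.T @ values)[:n_coeffs]

rng = np.random.default_rng(0)
pts = rng.normal(size=(32, 2))   # toy 2-D top-view point cloud
sig = rng.normal(size=32)        # e.g. per-point radar intensity
feat = gft_features(pts, sig)
print(feat.shape)                # (8,)
```

The resulting fixed-length spectral coefficient vector is the kind of compact feature a lightweight classifier such as RF or LR could consume directly.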