TY - GEN
T1 - Pruning CNNs for LiDAR-based Perception in Resource Constrained Environments
AU - Vemparala, Manoj Rohit
AU - Singh, Anmol
AU - Mzid, Ahmed
AU - Fasfous, Nael
AU - Frickenstein, Alexander
AU - Mirus, Florian
AU - Voegel, Hans Joerg
AU - Nagaraja, Naveen Shankar
AU - Stechele, Walter
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - Deep neural networks provide high accuracy for perception. However, they require high computational power. In particular, LiDAR-based object detection delivers good accuracy and real-time performance, but demands high computation due to expensive feature extraction from point cloud data in the encoder and backbone networks. We investigate the model complexity versus accuracy trade-off using reinforcement learning based pruning for PointPillars, a recent LiDAR-based 3D object detection network. We evaluate the model on the validation dataset of KITTI (80/20 split) according to the mean average precision (mAP) for the car class. We prune the original PointPillars model (mAP 89.84) and achieve a 65.8% reduction in floating point operations (FLOPs) with a marginal accuracy loss. The compression corresponds to a 31.7% reduction in inference time and a 35% reduction in GPU memory on a GTX 1080 Ti.
AB - Deep neural networks provide high accuracy for perception. However, they require high computational power. In particular, LiDAR-based object detection delivers good accuracy and real-time performance, but demands high computation due to expensive feature extraction from point cloud data in the encoder and backbone networks. We investigate the model complexity versus accuracy trade-off using reinforcement learning based pruning for PointPillars, a recent LiDAR-based 3D object detection network. We evaluate the model on the validation dataset of KITTI (80/20 split) according to the mean average precision (mAP) for the car class. We prune the original PointPillars model (mAP 89.84) and achieve a 65.8% reduction in floating point operations (FLOPs) with a marginal accuracy loss. The compression corresponds to a 31.7% reduction in inference time and a 35% reduction in GPU memory on a GTX 1080 Ti.
UR - http://www.scopus.com/inward/record.url?scp=85124936160&partnerID=8YFLogxK
U2 - 10.1109/IVWorkshops54471.2021.9669256
DO - 10.1109/IVWorkshops54471.2021.9669256
M3 - Conference contribution
AN - SCOPUS:85124936160
T3 - IEEE Intelligent Vehicles Symposium, Proceedings
SP - 228
EP - 235
BT - 2021 IEEE Intelligent Vehicles Symposium Workshops, IV Workshops 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 32nd IEEE Intelligent Vehicles Symposium Workshops, IV Workshops 2021
Y2 - 11 July 2021 through 17 July 2021
ER -