DSC: Dense-sparse convolution for vectorized inference of convolutional neural networks

Alexander Frickenstein, Manoj Rohit Vemparala, Christian Unger, Fatih Ayar, Walter Stechele

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

14 Scopus citations

Abstract

The efficient application of Convolutional Neural Networks (CNNs) on automotive-rated and safety-critical hardware accelerators requires an interplay of DNN design optimization, programming techniques and hardware resources. Ad-hoc pruning results in irregular sparsity and compression, leading to very inefficient real-world applications. The proposed methodology, called Dense-Sparse Convolution, therefore strikes the right balance between pruning regularity, quantization and the underlying vectorized hardware. Compute units with different vector word lengths, e.g. CPUs, are used for low-latency inference of the sparse CNNs. The proposed open-source CPU kernel scales with the vector word length and the number of cores.
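The abstract's core idea, balancing pruning regularity against vectorized hardware, can be illustrated with a small sketch: if whole input channels are pruned (regular sparsity), the surviving kernel slices remain dense and the inner loops stay contiguous and vector-friendly, unlike ad-hoc element-wise sparsity. The function below is a hypothetical NumPy illustration of this principle, not the paper's actual CPU kernel; all names and shapes are assumptions.

```python
import numpy as np

def dense_sparse_conv2d(x, w, kept_channels):
    """Hypothetical sketch: 2D convolution that skips pruned input channels.

    x: input feature map, shape (C_in, H, W)
    w: weights, shape (C_out, C_in, kH, kW); slices for pruned channels are zero
    kept_channels: indices of input channels that survived regular pruning
    """
    c_out, _, kh, kw = w.shape
    _, h, wd = x.shape
    oh, ow = h - kh + 1, wd - kw + 1
    y = np.zeros((c_out, oh, ow))
    # Iterate only over the dense (kept) channels; the pruned ones
    # contribute nothing and are never touched.
    for c in kept_channels:
        for i in range(kh):
            for j in range(kw):
                # The multiply-accumulate over the spatial window is a
                # contiguous dense operation, so it vectorizes well.
                patch = x[c, i:i + oh, j:j + ow]
                y += w[:, c, i, j][:, None, None] * patch
    return y
```

Because the sparsity pattern is known at the channel granularity, the compute loop contains no per-element index lookups, which is what makes this layout amenable to SIMD execution on CPUs.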

Original language: English
Title of host publication: Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019
Publisher: IEEE Computer Society
Pages: 1353-1360
Number of pages: 8
ISBN (Electronic): 9781728125060
DOIs
State: Published - Jun 2019
Event: 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019 - Long Beach, United States
Duration: 16 Jun 2019 - 20 Jun 2019

Publication series

Name: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Volume: 2019-June
ISSN (Print): 2160-7508
ISSN (Electronic): 2160-7516

Conference

Conference: 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019
Country/Territory: United States
City: Long Beach
Period: 16/06/19 - 20/06/19
