Parallelized Context Modeling for Faster Image Coding

A. Burakhan Koyuncu, Kai Cui, Atanas Boev, Eckehard Steinbach

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Learning-based image compression has reached the performance of classical methods such as BPG. A common approach is to use an autoencoder network to map the pixel information to a latent space and then to approximate the symbol probabilities in that space with a context model. During inference, the learned context model provides symbol probabilities, which the entropy encoder uses to generate the bitstream. Currently, the most effective context models are autoregressive, but autoregression incurs a very high decoding complexity due to serialized data processing. In this work, we propose a method to parallelize the autoregressive process used for image compression. In our experiments, we achieve a decoding speed that is over 8 times faster than the standard autoregressive context model, with almost no loss in compression performance.
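The serialized-decoding bottleneck described in the abstract can be illustrated with a small scheduling sketch. The following is a minimal, hypothetical illustration, not the paper's actual parallelization scheme: it contrasts a raster-scan schedule, which decodes one latent position per step, with a diagonal wavefront schedule that groups positions whose causal context (left and top neighbors) is already decoded, so each group can be processed in parallel.

```python
# Illustrative sketch only (the paper's method may organize parallelism
# differently): compare the number of sequential decoding steps under a
# raster-scan schedule versus a diagonal "wavefront" schedule.

def raster_schedule(h, w):
    """Fully serial order: one latent position per step -> h*w steps."""
    return [[(y, x)] for y in range(h) for x in range(w)]

def wavefront_schedule(h, w):
    """Anti-diagonal groups: positions with equal y+x share a step.

    Each position's left neighbor (y, x-1) and top neighbor (y-1, x)
    lie on an earlier anti-diagonal, so causality is preserved while
    the number of sequential steps drops from h*w to h+w-1.
    """
    groups = []
    for d in range(h + w - 1):
        group = [(y, d - y) for y in range(max(0, d - w + 1), min(h, d + 1))]
        groups.append(group)
    return groups

h, w = 16, 16
serial = raster_schedule(h, w)
parallel = wavefront_schedule(h, w)
print(len(serial), len(parallel))  # prints "256 31"
```

For a 16x16 latent grid, the serial schedule needs 256 sequential steps while the wavefront schedule needs only 31; a reduction of this order is consistent with the over-8x decoding speedup reported in the abstract, although the actual mechanism used in the paper may differ.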

Original language: English
Title of host publication: 2021 International Conference on Visual Communications and Image Processing, VCIP 2021 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728185514
DOIs
State: Published - 2021
Event: 2021 International Conference on Visual Communications and Image Processing, VCIP 2021 - Munich, Germany
Duration: 5 Dec 2021 - 8 Dec 2021

Publication series

Name: 2021 International Conference on Visual Communications and Image Processing, VCIP 2021 - Proceedings

Conference

Conference: 2021 International Conference on Visual Communications and Image Processing, VCIP 2021
Country/Territory: Germany
City: Munich
Period: 5/12/21 - 8/12/21
