A theoretical analysis of the RDTC space

Ingo Bauermann, Eckehard Steinbach

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Remote navigation in image-based scene representations requires random access to the compressed reference image data in order to compose virtual views. When block-based hybrid video coding concepts are used, the degree of inter-frame dependency introduced during compression affects the effort required to access reference image data and, at the same time, limits the achievable rate-distortion trade-off. If, additionally, a maximum available channel bitrate is taken into account, the traditional rate-distortion (RD) trade-off can be extended to a trade-off between storage rate (R), distortion (D), transmission data rate (T), and decoding complexity (C). In this work we present a theoretical analysis of this RDTC space. Experimental results qualitatively match those predicted by theory. They show that adapting the encoding process to scenario-specific parameters, such as the computational power of the receiver and the channel throughput, can significantly reduce the user-perceived delay or the required storage for RDTC-optimized streams compared to RD-optimized or independently encoded scene representations.
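The RDTC trade-off described in the abstract can be illustrated with a small sketch. The following is a hypothetical example, not the paper's actual algorithm: each candidate encoding mode is assigned a storage rate R, distortion D, transmission data rate T, and decoding complexity C, and a weighted Lagrangian-style cost J = D + λ_R·R + λ_T·T + λ_C·C selects among them. All mode values and weights below are illustrative assumptions.

```python
# Hypothetical sketch of RDTC-style mode selection (illustrative only):
# pick the encoding mode minimizing J = D + lam_r*R + lam_t*T + lam_c*C.
from dataclasses import dataclass


@dataclass
class Mode:
    name: str
    R: float  # storage rate (bits)
    D: float  # distortion (e.g., MSE)
    T: float  # transmission data rate (bits)
    C: float  # decoding complexity (e.g., operation count)


def rdtc_cost(m: Mode, lam_r: float, lam_t: float, lam_c: float) -> float:
    """Weighted RDTC cost; a larger lambda penalizes that resource more."""
    return m.D + lam_r * m.R + lam_t * m.T + lam_c * m.C


def best_mode(modes, lam_r, lam_t, lam_c) -> Mode:
    return min(modes, key=lambda m: rdtc_cost(m, lam_r, lam_t, lam_c))


# Illustrative numbers: an intra-coded block is cheap to decode but costs
# more rate; an inter-coded block saves rate but requires decoding
# reference data first, increasing decoding complexity.
modes = [
    Mode("intra", R=1000, D=4.0, T=1000, C=10),
    Mode("inter", R=400, D=4.5, T=700, C=60),
]

# A receiver with little compute (high lam_c) favors intra coding ...
print(best_mode(modes, lam_r=0.001, lam_t=0.001, lam_c=0.1).name)  # intra
# ... while a powerful receiver (low lam_c) favors inter coding.
print(best_mode(modes, lam_r=0.001, lam_t=0.001, lam_c=0.001).name)  # inter
```

This mirrors the abstract's point: the optimal operating point in the RDTC space depends on scenario-specific parameters such as receiver compute and channel throughput.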

Original language: English
Title of host publication: PACKET VIDEO 2007 - 16th International Packet Video Workshop
Publisher: IEEE Computer Society
Pages: 272-279
Number of pages: 8
ISBN (Print): 1424409810, 9781424409815
DOIs
State: Published - 2007
Event: PACKET VIDEO 2007 - 16th International Packet Video Workshop - Lausanne, Switzerland
Duration: 12 Nov 2007 - 13 Nov 2007

Publication series

Name: PACKET VIDEO 2007 - 16th International Packet Video Workshop

Conference

Conference: PACKET VIDEO 2007 - 16th International Packet Video Workshop
Country/Territory: Switzerland
City: Lausanne
Period: 12/11/07 - 13/11/07
