TY - GEN
T1 - Compression Techniques for MIMO Channels in FDD Systems
AU - Rizzello, Valentina
AU - Zhang, Hanyi
AU - Joham, Michael
AU - Utschick, Wolfgang
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - In this work, we present an innovative application of transformers and vector quantized variational autoencoders (VQ-VAE) to compress multiple-input-multiple-output (MIMO) channels in frequency-division-duplex (FDD) systems. Existing works consider multiple-input-single-output (MISO) channels across all frequencies (subcarriers) of a certain bandwidth, where high compression ratios can be achieved due to the structure of the channels across the frequency domain, or due to their sparsity in the time domain. With this work, we take into account that in reality the channels cannot be observed for all the subcarriers inside the bandwidth; therefore, it is crucial to compress the channels based on a single-subcarrier observation. Simulation results demonstrate that transformers can be used to construct efficient autoencoders with a reduced number of parameters. Furthermore, we show that embedding the quantization during training, using the VQ-VAE framework, helps to achieve better performance compared to post-training quantization based on standard techniques.
AB - In this work, we present an innovative application of transformers and vector quantized variational autoencoders (VQ-VAE) to compress multiple-input-multiple-output (MIMO) channels in frequency-division-duplex (FDD) systems. Existing works consider multiple-input-single-output (MISO) channels across all frequencies (subcarriers) of a certain bandwidth, where high compression ratios can be achieved due to the structure of the channels across the frequency domain, or due to their sparsity in the time domain. With this work, we take into account that in reality the channels cannot be observed for all the subcarriers inside the bandwidth; therefore, it is crucial to compress the channels based on a single-subcarrier observation. Simulation results demonstrate that transformers can be used to construct efficient autoencoders with a reduced number of parameters. Furthermore, we show that embedding the quantization during training, using the VQ-VAE framework, helps to achieve better performance compared to post-training quantization based on standard techniques.
KW - FDD systems
KW - MIMO systems
KW - Transformers
KW - autoencoders
KW - vector quantized variational autoencoders
UR - http://www.scopus.com/inward/record.url?scp=85135402963&partnerID=8YFLogxK
U2 - 10.1109/DSLW53931.2022.9820270
DO - 10.1109/DSLW53931.2022.9820270
M3 - Conference contribution
AN - SCOPUS:85135402963
T3 - 2022 IEEE Data Science and Learning Workshop, DSLW 2022
BT - 2022 IEEE Data Science and Learning Workshop, DSLW 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE Data Science and Learning Workshop, DSLW 2022
Y2 - 22 May 2022 through 23 May 2022
ER -