TY - GEN
T1 - Guided U-Net Aided Efficient Image Data Storing with Shape Preservation
AU - Banerjee, Nirwan
AU - Malakar, Samir
AU - Gupta, Deepak Kumar
AU - Horsch, Alexander
AU - Prasad, Dilip K.
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
N2 - The proliferation of high-content microscopes (∼ 32 GB for a single image) and the increasing amount of image data generated daily have created a pressing need for compact storage solutions. Not only is the storage of such massive image data cumbersome, but it also requires a significant amount of storage and data bandwidth for transmission. To address this issue, we present a novel deep learning technique called Guided U-Net (GU-Net) that compresses images by training a U-Net architecture with a loss function that incorporates shape, budget, and skeleton losses. The trained model learns to select key points in the image that need to be stored, rather than the entire image. Compact image representation differs from image compression: the former assigns an importance to each pixel in an image and selects the most important ones for storage, whereas the latter encodes information from the entire image for more efficient storage. Experimental results on four datasets (CMATER, UiTMito, MNIST, and HeLA) show that GU-Net selects only a small percentage of pixels as key points (3%, 3%, 5%, and 22% on average, respectively), significantly reducing storage requirements while preserving essential image features. Thus, this approach offers a more efficient method of storing image data, with potential applications in a range of fields where large-scale imaging is a vital component of research and development.
AB - The proliferation of high-content microscopes (∼ 32 GB for a single image) and the increasing amount of image data generated daily have created a pressing need for compact storage solutions. Not only is the storage of such massive image data cumbersome, but it also requires a significant amount of storage and data bandwidth for transmission. To address this issue, we present a novel deep learning technique called Guided U-Net (GU-Net) that compresses images by training a U-Net architecture with a loss function that incorporates shape, budget, and skeleton losses. The trained model learns to select key points in the image that need to be stored, rather than the entire image. Compact image representation differs from image compression: the former assigns an importance to each pixel in an image and selects the most important ones for storage, whereas the latter encodes information from the entire image for more efficient storage. Experimental results on four datasets (CMATER, UiTMito, MNIST, and HeLA) show that GU-Net selects only a small percentage of pixels as key points (3%, 3%, 5%, and 22% on average, respectively), significantly reducing storage requirements while preserving essential image features. Thus, this approach offers a more efficient method of storing image data, with potential applications in a range of fields where large-scale imaging is a vital component of research and development.
KW - Budget Loss
KW - Compact Image Representation
KW - Guided U-Net
KW - Shape Loss
KW - Skeleton Loss
KW - Storage Efficient
UR - http://www.scopus.com/inward/record.url?scp=85177170183&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-47634-1_24
DO - 10.1007/978-3-031-47634-1_24
M3 - Conference contribution
AN - SCOPUS:85177170183
SN - 9783031476334
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 317
EP - 330
BT - Pattern Recognition - 7th Asian Conference, ACPR 2023, Proceedings
A2 - Lu, Huimin
A2 - Blumenstein, Michael
A2 - Cho, Sung-Bae
A2 - Liu, Cheng-Lin
A2 - Yagi, Yasushi
A2 - Kamiya, Tohru
PB - Springer Science and Business Media Deutschland GmbH
T2 - 7th Asian Conference on Pattern Recognition, ACPR 2023
Y2 - 5 November 2023 through 8 November 2023
ER -