Abstract
With rapid advances in the field of deep learning, explainable artificial intelligence (XAI) methods were introduced to gain insight into the internal procedures of deep neural networks. Information gathered by XAI methods can help to identify shortcomings in network architectures and image datasets. Recent studies, however, advise handling XAI interpretations with care, as they can be unreliable. In light of this unreliability, this study instead uses meta information produced when applying XAI to enhance the architecture - and thus the prediction performance - of a recently published regression model. This model aimed to contribute to solving the photometric registration problem in the field of augmented reality by regressing the dominant light direction in a scene. The influence of synthetic training data generated with different rendering techniques is furthermore evaluated empirically, bypassing potentially misleading XAI interpretations. In conclusion, this study demonstrates how the prediction performance of the recently published model can be increased by improving the network architecture and training dataset.
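The abstract does not specify which XAI method or framework the authors used. As a loose illustration only, the following PyTorch-style sketch shows one common gradient-based XAI technique (vanilla-gradient saliency) applied to a toy CNN that regresses a 3D dominant light direction; the model name, architecture, and attribution choice are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: gradient-based saliency for a light-direction regressor.
# LightDirectionNet and saliency_map are illustrative stand-ins, not the
# model or XAI method from the paper.
import torch
import torch.nn as nn

class LightDirectionNet(nn.Module):
    """Toy CNN regressing a 3D dominant light direction from an RGB image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 3)  # (x, y, z) light direction

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def saliency_map(model, image):
    """Vanilla-gradient saliency: |d(prediction norm)/d(input)| per pixel."""
    image = image.clone().requires_grad_(True)
    pred = model(image)                          # shape (1, 3)
    pred.norm().backward()                       # scalar surrogate of the regressed direction
    return image.grad.abs().max(dim=1).values    # (1, H, W) attribution map

model = LightDirectionNet().eval()
img = torch.rand(1, 3, 64, 64)                   # placeholder input image
print(saliency_map(model, img).shape)            # torch.Size([1, 64, 64])
```

Attribution maps like this (or their statistics) are one kind of meta information that could inform architecture or dataset changes, which is the general idea the abstract describes.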
Original language | English |
---|---|
Pages (from - to) | 227-234 |
Number of pages | 8 |
Journal | Computer Science Research Notes |
Volume | 3201 |
Issue number | 2022 |
DOIs | |
Publication status | Published - 2022 |
Event | 30th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, WSCG 2022 - Plzen, Czech Republic; Duration: 17 May 2022 → 20 May 2022 |