Abstract
Graph representation of objects and their relations in a scene, known as a scene graph, provides a precise and discernible interface to manipulate a scene by modifying the nodes or the edges in the graph. Although existing works have shown promising results in modifying the placement and pose of objects, scene manipulation often leads to losing some visual characteristics like the appearance or identity of objects. In this work, we propose DisPositioNet, a model that learns a disentangled representation for each object for the task of image manipulation using scene graphs in a self-supervised manner. Our framework enables the disentanglement of the variational latent embeddings as well as the feature representation in the graph. In addition to producing more realistic images due to the decomposition of features like pose and identity, our method takes advantage of the probabilistic sampling in the intermediate features to generate more diverse images in object replacement or addition tasks. Our experiments show that disentangling the feature representations in the latent manifold enables the model to outperform previous works both qualitatively and quantitatively on two public benchmarks.
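To illustrate the kind of per-object disentanglement the abstract describes, the sketch below shows a hypothetical variational node encoder that splits each object's feature into separate appearance and pose latents, each with its own mean/log-variance head and reparameterized sample. This is not the authors' released code; the module names, dimensions, and two-factor split are illustrative assumptions only.

```python
# Hypothetical sketch (not the DisPositioNet implementation): a per-object
# variational encoder that factorizes each scene-graph node feature into
# independent "appearance" and "pose" latent subspaces.
import torch
import torch.nn as nn


class DisentangledNodeEncoder(nn.Module):
    def __init__(self, feat_dim=256, z_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        # Separate heads so the two factors are sampled independently.
        self.appearance_head = nn.Linear(feat_dim, 2 * z_dim)  # mu, logvar
        self.pose_head = nn.Linear(feat_dim, 2 * z_dim)        # mu, logvar

    @staticmethod
    def reparameterize(mu, logvar):
        # Standard VAE reparameterization trick.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, node_feats):
        h = self.backbone(node_feats)
        a_mu, a_logvar = self.appearance_head(h).chunk(2, dim=-1)
        p_mu, p_logvar = self.pose_head(h).chunk(2, dim=-1)
        z_appearance = self.reparameterize(a_mu, a_logvar)
        z_pose = self.reparameterize(p_mu, p_logvar)
        # A downstream decoder would consume both latents; resampling only the
        # appearance latent yields diverse outputs for object replacement.
        return z_appearance, z_pose, (a_mu, a_logvar, p_mu, p_logvar)


if __name__ == "__main__":
    # Encode features of 5 objects from a scene graph.
    encoder = DisentangledNodeEncoder()
    feats = torch.randn(5, 256)
    z_app, z_pose, _ = encoder(feats)
    print(z_app.shape, z_pose.shape)  # torch.Size([5, 64]) torch.Size([5, 64])
```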
Original language | English |
---|---|
Publication status | Published - 2022 |
Event | 33rd British Machine Vision Conference Proceedings, BMVC 2022 - London, United Kingdom. Duration: 21 Nov 2022 → 24 Nov 2022 |
Conference

Conference | 33rd British Machine Vision Conference Proceedings, BMVC 2022 |
---|---|
Country/Territory | United Kingdom |
City | London |
Period | 21/11/22 → 24/11/22 |