TY - JOUR
T1 - D2IFLN
T2 - Disentangled Domain-Invariant Feature Learning Networks for Domain Generalization
AU - Liu, Zhengfa
AU - Chen, Guang
AU - Li, Zhijun
AU - Qu, Sanqing
AU - Knoll, Alois
AU - Jiang, Changjun
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023/12/1
Y1 - 2023/12/1
N2 - Domain generalization (DG) aims to learn a model that generalizes well to an unseen test distribution. Mainstream methods follow the domain-invariant representation learning philosophy to achieve this goal. However, due to the lack of a priori knowledge to determine which features are domain-specific and task-independent, and which are domain-invariant and task-relevant, existing methods typically learn entangled representations, limiting their capacity to generalize to the distribution-shifted target domain. To address this issue, in this article, we propose novel disentangled domain-invariant feature learning networks (D2IFLN) to realize feature disentanglement and facilitate domain-invariant feature learning. Specifically, we introduce a semantic disentanglement network and a domain disentanglement network, disentangling the learned domain-invariant features from both domain-specific class-irrelevant features and domain-discriminative features. To avoid semantic confusion in adversarial learning for domain-invariant feature learning, we further introduce a graph neural network to aggregate semantic features from different domains during model training. Extensive experiments on three DG benchmarks show that the proposed D2IFLN performs better than the state of the art.
AB - Domain generalization (DG) aims to learn a model that generalizes well to an unseen test distribution. Mainstream methods follow the domain-invariant representation learning philosophy to achieve this goal. However, due to the lack of a priori knowledge to determine which features are domain-specific and task-independent, and which are domain-invariant and task-relevant, existing methods typically learn entangled representations, limiting their capacity to generalize to the distribution-shifted target domain. To address this issue, in this article, we propose novel disentangled domain-invariant feature learning networks (D2IFLN) to realize feature disentanglement and facilitate domain-invariant feature learning. Specifically, we introduce a semantic disentanglement network and a domain disentanglement network, disentangling the learned domain-invariant features from both domain-specific class-irrelevant features and domain-discriminative features. To avoid semantic confusion in adversarial learning for domain-invariant feature learning, we further introduce a graph neural network to aggregate semantic features from different domains during model training. Extensive experiments on three DG benchmarks show that the proposed D2IFLN performs better than the state of the art.
KW - Domain generalization (DG)
KW - domain-invariant feature learning
KW - representation disentanglement
UR - http://www.scopus.com/inward/record.url?scp=85153356124&partnerID=8YFLogxK
U2 - 10.1109/TCDS.2023.3264615
DO - 10.1109/TCDS.2023.3264615
M3 - Article
AN - SCOPUS:85153356124
SN - 2379-8920
VL - 15
SP - 2269
EP - 2281
JO - IEEE Transactions on Cognitive and Developmental Systems
JF - IEEE Transactions on Cognitive and Developmental Systems
IS - 4
ER -