D2IFLN: Disentangled Domain-Invariant Feature Learning Networks for Domain Generalization

Zhengfa Liu, Guang Chen, Zhijun Li, Sanqing Qu, Alois Knoll, Changjun Jiang

Research output: Contribution to journal › Article › peer-review



Domain generalization (DG) aims to learn a model that generalizes well to an unseen test distribution. Mainstream methods pursue this goal through domain-invariant representation learning. However, lacking a priori knowledge of which features are domain-specific and task-irrelevant and which are domain-invariant and task-relevant, existing methods typically learn entangled representations, which limits their ability to generalize to a distribution-shifted target domain. To address this issue, in this article we propose novel disentangled domain-invariant feature learning networks (D2IFLN) that realize feature disentanglement and facilitate domain-invariant feature learning. Specifically, we introduce a semantic disentanglement network and a domain disentanglement network, which separate the learned domain-invariant features from both domain-specific class-irrelevant features and domain-discriminative features. To avoid semantic confusion during adversarial domain-invariant feature learning, we further introduce a graph neural network that aggregates semantic features across domains during training. Extensive experiments on three DG benchmarks show that the proposed D2IFLN outperforms the state of the art.
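The paper's networks are not reproduced on this page, but the core idea of separating domain-invariant from domain-specific features can be illustrated with a minimal NumPy sketch: two feature branches plus an orthogonality penalty that pushes them to carry disjoint information. All names, dimensions, and the specific penalty are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Linear feature extractor (stands in for a deep backbone)."""
    return np.tanh(x @ W)

# Hypothetical dimensions: 16-d inputs, two 8-d feature branches.
W_inv = rng.normal(size=(16, 8))   # domain-invariant branch
W_spec = rng.normal(size=(16, 8))  # domain-specific branch

x = rng.normal(size=(4, 16))       # toy batch from one source domain
z_inv = encode(x, W_inv)
z_spec = encode(x, W_spec)

def orthogonality_loss(a, b):
    """Squared Frobenius norm of the cross-correlation between the two
    feature sets; driving it to zero discourages the branches from
    encoding the same information (one common disentanglement penalty)."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return float(np.sum((a.T @ b) ** 2) / a.shape[0])

loss_dis = orthogonality_loss(z_inv, z_spec)
print(z_inv.shape, z_spec.shape, loss_dis)
```

In the full method this penalty would be one term alongside a task loss on the invariant branch and adversarial/domain-classification losses, trained jointly by gradient descent.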

Original language: English
Pages (from-to): 2269-2281
Number of pages: 13
Journal: IEEE Transactions on Cognitive and Developmental Systems
Issue number: 4
State: Published - 1 Dec 2023


Keywords:
  • Domain generalization (DG)
  • domain-invariant feature learning
  • representation disentanglement


