Analysis of loss functions for fast single-class classification

Gil Keren, Sivan Sabato, Björn Schuller

Research output: Contribution to journal › Article › peer-review


Abstract

We consider neural network training in applications with many possible classes, where at test time the task is binary: determining whether a given example belongs to one specific class. We define the single logit classification (SLC) task: training the network so that at test time, membership in a given class can be identified accurately and computationally efficiently, based only on the output logit for that class. We propose a natural principle, the Principle of Logit Separation, as a guideline for choosing and designing losses suitable for the SLC task. We show that the cross-entropy loss function is not aligned with the Principle of Logit Separation, whereas several known loss functions, as well as novel batch loss functions that we propose, are aligned with it. Our experiments show that in almost all cases, losses aligned with the Principle of Logit Separation obtain at least a 20% relative accuracy improvement in the SLC task over losses that are not aligned with it, and sometimes considerably more. Furthermore, we show that fast SLC causes no drop in binary classification accuracy compared to standard classification, in which all logits are computed, and yields a speedup that grows with the number of classes.
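As an illustration of why SLC inference is fast, the following minimal sketch (not the authors' code) contrasts standard inference, which computes all logits of the final linear layer, with single-logit inference, which computes only one. The weight matrix W, bias b, feature vector x, and decision threshold tau are all illustrative assumptions; in practice the threshold would be chosen on validation data.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, d = 10_000, 512  # many classes, as in extreme classification
W = rng.standard_normal((num_classes, d)).astype(np.float32)  # final-layer weights (assumed)
b = rng.standard_normal(num_classes).astype(np.float32)       # final-layer biases (assumed)
x = rng.standard_normal(d).astype(np.float32)                 # features from the network body

# Standard inference: compute every logit, cost O(num_classes * d).
all_logits = W @ x + b

# SLC inference for a single class c: one dot product, cost O(d),
# independent of the number of classes.
c, tau = 42, 0.0                     # tau is a hypothetical threshold
single_logit = W[c] @ x + b[c]
is_class_c = single_logit > tau      # binary decision from this one logit alone

# The single logit agrees with the corresponding entry of the full computation.
assert np.isclose(single_logit, all_logits[c])
```

The sketch makes the claimed speedup concrete: the per-example cost of the binary decision drops from O(num_classes · d) to O(d), so the relative saving grows with the number of classes. Whether thresholding a single logit is accurate enough is exactly what the Principle of Logit Separation addresses.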

Original language: English
Pages (from-to): 337-358
Number of pages: 22
Journal: Knowledge and Information Systems
Volume: 62
Issue number: 1
DOIs
State: Published - 1 Jan 2020
Externally published: Yes

Keywords

  • Classification
  • Extreme classification
  • Neural networks
