Auxiliary image regularization for deep CNNs with noisy labels

Samaneh Azadi, Jiashi Feng, Stefanie Jegelka, Trevor Darrell

Research output: Contribution to conference › Paper › peer-review

22 Scopus citations

Abstract

Precisely labeled data sets with a sufficient number of samples are very important for training deep convolutional neural networks (CNNs). However, many of the available real-world data sets contain erroneously labeled samples, and those errors substantially hinder the learning of accurate CNN models. In this work, we consider the problem of training a deep CNN model for image classification with mislabeled training samples – an issue that is common in real image data sets with tags supplied by amateur users. To solve this problem, we propose an auxiliary image regularization technique, optimized by the stochastic Alternating Direction Method of Multipliers (ADMM) algorithm, that automatically exploits the mutual context information among training images and encourages the model to select reliable images to robustify the learning process. Comprehensive experiments on benchmark data sets clearly demonstrate that our proposed regularized CNN model is resistant to label noise in training data.
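
The abstract only names the ingredients (an auxiliary, group-sparsity-style image regularizer and stochastic ADMM), so the following is a minimal PyTorch-style sketch of the general idea rather than the paper's exact objective or optimizer. It shows a cross-entropy data term plus an l2,1 group-sparse penalty on a coefficient matrix W that ties each training image's feature to a pool of auxiliary image features; the names (features, aux_features, W, lam) and the choice to show only the penalized loss are illustrative assumptions.

```python
# Illustrative sketch only (assumed formulation, not the paper's exact objective):
# a classification loss augmented with a group-sparse (l2,1) auxiliary penalty.
# Rows of W link each training image's feature to a pool of auxiliary image
# features; the l2,1 norm pushes whole rows toward zero, giving the
# "select reliable images" effect the abstract describes. The paper optimizes
# its objective with stochastic ADMM, which this sketch does not reproduce.
import torch
import torch.nn.functional as F

def auxiliary_regularized_loss(logits, labels, features, aux_features, W, lam=0.1):
    """logits: (B, C) class scores, labels: (B,) targets,
    features: (B, D) CNN features of the current batch,
    aux_features: (A, D) features of auxiliary images,
    W: (B, A) learnable coefficients, lam: regularization weight."""
    ce = F.cross_entropy(logits, labels)                 # standard data-fitting term
    recon = ((features - W @ aux_features) ** 2).mean()  # tie images to auxiliary context
    group_sparse = W.norm(dim=1).sum()                   # l2,1 norm over rows of W
    return ce + recon + lam * group_sparse
```

In such a sketch, W would be an extra learnable parameter updated jointly with the network weights by plain SGD; handling the non-smooth group-sparse term with stochastic ADMM, as the paper does, is what makes the optimization practical.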

Original language: English
State: Published - 2016
Externally published: Yes
Event: 4th International Conference on Learning Representations, ICLR 2016 - San Juan, Puerto Rico
Duration: 2 May 2016 – 4 May 2016

Conference

Conference: 4th International Conference on Learning Representations, ICLR 2016
Country/Territory: Puerto Rico
City: San Juan
Period: 2/05/16 – 4/05/16
