Abstract
We introduce Divnet, a flexible technique for learning networks with diverse neurons. Divnet models neuronal diversity by placing a Determinantal Point Process (DPP) over neurons in a given layer. It uses this DPP to select a subset of diverse neurons and subsequently fuses the redundant neurons into the selected ones. Compared with previous approaches, Divnet offers a more principled, flexible technique for capturing neuronal diversity and thus implicitly enforcing regularization. This enables effective auto-tuning of network architecture and leads to smaller network sizes without hurting performance. Moreover, through its focus on diversity and neuron fusing, Divnet remains compatible with other procedures that seek to reduce memory footprints of networks. We present experimental results to corroborate our claims: for pruning neural networks, Divnet is seen to be notably superior to competing approaches.
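The abstract describes a three-step pipeline: build a DPP over the neurons of a layer, select a diverse subset, and fuse the pruned (redundant) neurons into the kept ones. The sketch below is illustrative only, not the paper's exact procedure: it assumes a similarity kernel built from neuron activations, uses a greedy MAP approximation in place of exact DPP sampling, and fuses by least-squares reweighting of the next layer's weights (a common heuristic for absorbing pruned units).

```python
import numpy as np

def dpp_greedy_select(L, k):
    """Greedy MAP approximation for a DPP with kernel L:
    repeatedly add the item that most increases log det(L_S).
    Singular submatrices (perfectly redundant items) are skipped,
    which is what enforces diversity."""
    selected, remaining = [], list(range(L.shape[0]))
    for _ in range(k):
        best, best_logdet = None, -np.inf
        for i in remaining:
            S = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(S, S)])
            if sign > 0 and logdet > best_logdet:
                best, best_logdet = i, logdet
        if best is None:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

def fuse_pruned_neurons(W_out, selected, activations):
    """Fuse pruned neurons into the kept ones: re-fit the next
    layer's weights by least squares so the kept activations best
    reproduce the original layer output."""
    A = activations          # (samples, n) hidden activations
    Y = A @ W_out            # original contribution to next layer
    W_new, *_ = np.linalg.lstsq(A[:, selected], Y, rcond=None)
    return W_new             # shape (len(selected), out_dim)
```

Because the least-squares fit minimizes the reconstruction error over all possible reweightings, fusing can never do worse than simply keeping the selected neurons' original outgoing weights.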
Original language | English
---|---
Publication status | Published - 2016
Externally published | Yes
Event | 4th International Conference on Learning Representations, ICLR 2016 - San Juan, Puerto Rico. Duration: 2 May 2016 → 4 May 2016
Conference
Conference | 4th International Conference on Learning Representations, ICLR 2016
---|---
Country/Territory | Puerto Rico
City | San Juan
Period | 2/05/16 → 4/05/16