Bias Field Robustness Verification of Large Neural Image Classifiers

Patrick Henriksen, Kerstin Hammernik, Daniel Rueckert, Alessio Lomuscio

Research output: Contribution to conference › Paper › peer-review

14 Scopus citations

Abstract

We present a method for verifying the robustness of neural network-based image classifiers against a large class of intensity perturbations that frequently occur in computer vision. These perturbations, or intensity inhomogeneities, can be modelled by a spatially varying, multiplicative transformation of the intensities by a bias field. We illustrate an encoding of bias field transformations into neural network operations to exploit neural network formal verification toolkits. We extend the toolkit VeriNet with the above encoding, GPU support, input-domain splitting and a symbolic interval propagation pre-processing step. Finally, we show that the resulting implementation, VeriNetBF, can analyse models with up to 11M tuneable parameters and 6.5M ReLU nodes trained on the CIFAR-10, ImageNet and NYU fastMRI datasets.
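The abstract describes the perturbation class as a spatially varying, multiplicative bias field applied to pixel intensities. A minimal sketch of such a transformation is shown below, using a low-order 2-D polynomial parametrisation of the field; the polynomial basis is an illustrative assumption, as the abstract does not specify which parametrisation the paper uses.

```python
import numpy as np

def polynomial_bias_field(h, w, coeffs):
    """Spatially varying bias field as a low-order 2-D polynomial.

    Note: the polynomial basis is an assumption for illustration; the
    abstract only states that the perturbation is a spatially varying
    multiplicative bias field.
    """
    # Normalised pixel coordinates in [-1, 1].
    y, x = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                       indexing="ij")
    # Six smooth basis fields: constant, linear, and quadratic terms.
    basis = np.stack([np.ones_like(x), x, y, x * y, x**2, y**2])  # (6, h, w)
    # Weighted sum of basis fields gives the bias field.
    return np.tensordot(coeffs, basis, axes=1)  # (h, w)

def apply_bias_field(image, coeffs):
    """Multiply intensities by the bias field (broadcast over channels)."""
    field = polynomial_bias_field(image.shape[0], image.shape[1], coeffs)
    return image * field[..., None]

# A constant field of 1 (coefficient on the constant basis only)
# leaves the image unchanged; other coefficients tilt or curve the
# intensity profile across the image.
img = np.random.rand(32, 32, 3)
identity = apply_bias_field(img, np.array([1.0, 0, 0, 0, 0, 0]))
```

Because the field is an affine function of its coefficients and acts multiplicatively on the fixed input image, the perturbed image is itself affine in the coefficients, which is what makes the transformation amenable to encoding as neural network operations for verification tools such as VeriNet.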

Original language: English
State: Published - 2021
Event: 32nd British Machine Vision Conference, BMVC 2021 - Virtual, Online
Duration: 22 Nov 2021 - 25 Nov 2021


