TY - GEN
T1 - Statistical priors for efficient combinatorial optimization via graph cuts
AU - Cremers, Daniel
AU - Grady, Leo
PY - 2006
Y1 - 2006
N2 - Bayesian inference provides a powerful framework to optimally integrate statistically learned prior knowledge into numerous computer vision algorithms. While the Bayesian approach has been successfully applied in the Markov random field literature, the resulting combinatorial optimization problems have commonly been treated with rather inefficient and inexact general-purpose optimization methods such as simulated annealing. An efficient method to compute the global optima of certain classes of cost functions defined on binary-valued variables is given by graph min-cuts. In this paper, we propose to reconsider the problem of statistical learning for Bayesian inference in the context of efficient optimization schemes. Specifically, we address the question: which prior information may be learned while retaining the ability to apply graph cut optimization? We provide a framework to learn and impose prior knowledge on the distribution of pairs and triplets of labels. As an illustration, we demonstrate that one can optimally restore binary textures from very noisy images with runtimes on the order of a second while imposing hundreds of statistically learned constraints per pixel.
UR - http://www.scopus.com/inward/record.url?scp=33745851543&partnerID=8YFLogxK
U2 - 10.1007/11744078_21
DO - 10.1007/11744078_21
M3 - Conference contribution
AN - SCOPUS:33745851543
SN - 3540338365
SN - 9783540338369
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 263
EP - 274
BT - Computer Vision - ECCV 2006, 9th European Conference on Computer Vision, Proceedings
T2 - 9th European Conference on Computer Vision, ECCV 2006
Y2 - 7 May 2006 through 13 May 2006
ER -