About

The world is full of uncertainty, and explicitly modeling and reasoning about this uncertainty often helps in making predictions. Justin Domke's research focuses on two related issues: the computational challenges this reasoning presents, and how to make methods work well when the phenomenon being modeled is too complex to capture exactly. Specifically, he works on algorithms for better reasoning in probabilistic graphical models, on learning when an approximate reasoning algorithm is in use, and on integrating probabilistic models with other machine learning tools to address problems too complex to model exactly. These algorithms are often inspired by problems in computer vision.

Domke serves on the program committees of, or reviews for, most major machine learning and computer vision conferences and journals, and was recognized as an outstanding reviewer at CVPR 2011 and NIPS 2013. He is currently on leave for the Spring 2023 semester.