
Speaker

Justin Payan

Abstract

The modern knowledge economy relies on expertise. In important technocratic tasks such as scientific peer review and community question answering, knowledge workers can only fulfill requests they have the expertise, interest, and availability to complete. We develop multiple novel approaches to assign experts to requests, addressing questions of fairness, scalability, assignment quality, and robustness to uncertainty. We use peer review as the primary case study, though Chapter 3 highlights the domain of community question answering. Our algorithms can be applied to other domains where resource-constrained experts are assigned to complete complex requests, such as crowd-sourced editing of knowledge repositories or corporate staff assignment.

Expert assignments must be both fair and welfare-efficient, so that every request receives a reasonably well-qualified set of experts. We first present a set of simple mechanisms that fairly distribute expertise across requests while providing welfare guarantees. Our algorithms, Greedy Expert Round Robin and FairSequence, assign experts in such a way that no request "envies" another request's assigned experts, i.e., no request would prefer another request's set of assigned experts to its own.
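To make the round-robin idea concrete, here is a minimal sketch, assuming a simple score matrix and per-expert load limits; the function round_robin_assign and all of its inputs are hypothetical, and this is not the exact Greedy Expert Round Robin or FairSequence procedure from the thesis. Requests take turns picking their highest-scoring remaining expert, the mechanism shape that underlies round-robin fairness guarantees.

```python
# Illustrative sketch of a round-robin expert assignment (hypothetical;
# not the thesis's exact Greedy Expert Round Robin / FairSequence code).
# Requests take turns greedily picking their highest-scoring available
# expert, subject to per-expert load limits.

def round_robin_assign(scores, expert_capacity, experts_per_request):
    """scores[r][e]: predicted benefit of assigning expert e to request r."""
    n_requests = len(scores)
    n_experts = len(scores[0])
    load = [0] * n_experts                       # current load per expert
    assigned = [set() for _ in range(n_requests)]

    for _ in range(experts_per_request):         # one pick per round
        for r in range(n_requests):              # fixed picking order
            best = max(
                (e for e in range(n_experts)
                 if load[e] < expert_capacity and e not in assigned[r]),
                key=lambda e: scores[r][e],
                default=None,
            )
            if best is not None:
                assigned[r].add(best)
                load[best] += 1
    return assigned


demo = round_robin_assign(
    scores=[[0.9, 0.4, 0.7], [0.8, 0.6, 0.5]],   # 2 requests, 3 experts
    expert_capacity=2,
    experts_per_request=2,
)
print(demo)  # [{0, 2}, {0, 1}]
```

In classic fair division, round robin with additive valuations yields envy-freeness up to one item, which motivates mechanisms of this shape for distributing scarce expertise.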

Although fairness and welfare criteria ensure evenly distributed, high-quality expertise, both depend on how expert performance is quantified. In automated reviewer assignment systems, existing methods for estimating the benefit of assigning each reviewer to each paper can be noisy and ineffective. We take a data-driven perspective on the expert assignment problem, demonstrating how to estimate the benefits of assigning experts to requests more accurately. We train a variety of models to predict answer quality on StackExchange, then compare the constrained assignments of users to questions that each model produces. This study demonstrates the benefits of fully predictive expert assignment.
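As a concrete illustration of how predicted scores can feed a constrained assignment, here is a minimal sketch; it uses SciPy's linear_sum_assignment and handles expert load limits by duplicating columns, which may differ from the solvers and constraints used in the thesis. The function assign_from_predictions and its inputs are hypothetical.

```python
# Sketch: turning predicted answer-quality scores into a constrained
# assignment (illustrative only). An expert with capacity c is duplicated
# c times so a standard one-to-one matching solver respects load limits.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_from_predictions(pred_scores, capacities):
    """pred_scores: (n_questions, n_experts) model outputs.
    capacities[e]: max number of questions expert e may answer."""
    cols = [e for e, c in enumerate(capacities) for _ in range(c)]
    expanded = pred_scores[:, cols]              # duplicate expert columns
    rows, picked = linear_sum_assignment(expanded, maximize=True)
    # If total capacity < n_questions, some questions stay unmatched.
    return {int(q): cols[j] for q, j in zip(rows, picked)}

pred = np.array([[0.9, 0.2], [0.7, 0.8], [0.6, 0.3]])
print(assign_from_predictions(pred, capacities=[2, 1]))
# {0: 0, 1: 1, 2: 0}
```

Under a setup like this, swapping in a better predictive model changes only the score matrix, so assignment quality can be compared across models with the optimization step held fixed.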

No matter how accurate our predictive model is, some uncertainty always remains when we assign experts to requests. Distribution shift can cause our models to make errors, and experts may be unable to perform due to unforeseen circumstances. We discuss two main solutions for hedging against the worst outcomes. The robust optimization framework optimizes over a region that contains the true matching scores with high probability. The stochastic optimization framework assigns experts using a percentile criterion over the assignment objective. We study both the robust and stochastic approaches for utilitarian and egalitarian welfare objectives, and we detail applications in reviewer assignment and community question answering.
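A minimal sketch of the robust idea under a simple interval (box) uncertainty set, which may differ from the uncertainty regions studied in the thesis: if each true score lies within a confidence interval around the model's estimate, the worst case of any fixed assignment is attained at the interval lower bounds, so maximizing worst-case utilitarian welfare reduces to an ordinary assignment problem on those lower bounds. The function robust_assignment and its inputs are hypothetical.

```python
# Sketch of robust assignment under interval (box) uncertainty
# (illustrative; the thesis's uncertainty sets and solvers may differ).
# For additive welfare, the adversary's worst case for any fixed
# assignment sets every score to its lower bound, so the robust optimum
# is the ordinary optimum on the lower confidence bounds.
import numpy as np
from scipy.optimize import linear_sum_assignment

def robust_assignment(mean_scores, widths):
    """mean_scores[r][e] +/- widths[r][e] brackets the true score."""
    lower = mean_scores - widths                 # worst-case scores
    rows, cols = linear_sum_assignment(lower, maximize=True)
    worst_case_welfare = lower[rows, cols].sum()
    return list(zip(rows.tolist(), cols.tolist())), worst_case_welfare

means = np.array([[0.9, 0.5], [0.6, 0.7]])
widths = np.array([[0.6, 0.0], [0.0, 0.0]])      # wide = more uncertain
print(robust_assignment(means, widths))
# ([(0, 1), (1, 0)], 1.1): the high-but-noisy 0.9 loses to safer pairings
```

The stochastic variant replaces this worst case with a chosen percentile of the welfare distribution under a probabilistic model of the scores, trading some pessimism for higher expected quality.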

Expert assignment is a rich problem that must be addressed through both data-analysis and algorithmic lenses. Our work improves the end-to-end expert assignment pipeline, reducing wasted time and increasing productivity for knowledge workers.
