Professor Yuriy Brun of the Manning College of Information and Computer Sciences (CICS) at the University of Massachusetts Amherst has been named a Fellow by IEEE, the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity.

The IEEE Fellow recognition is the organization's highest member grade, with less than 0.1% of voting IEEE members elevated to Fellow annually. Brun is part of the 2025 class of 338 new fellows representing 58 universities, companies, and research centers from around the world. With the latest announcement, a total of five current and seven emeriti CICS faculty are IEEE Fellows.

Brun was cited by IEEE for his "contributions to software bias mitigation and to software engineering automation." His work focuses on increasing the trustworthiness of software by developing techniques that make software less buggy and less biased. He is known for his efforts to simplify the development and deployment of software systems and to ensure they align with societal needs for fairness.

"Yuriy's focus on fair and accurate results in computing exemplifies our college's ethos of computing for the common good," says Laura Haas, Donna M. and Robert J. Manning Dean of CICS. "His commitment to collaborative and multidisciplinary research has made an important impact in the field of computing."

Brun credited his collaborators and UMass support for his success: "I'm grateful to be working with fantastic students and colleagues who push tirelessly beyond the state of the art to generate scientific knowledge and help engineers build software we can all trust. Working in a college that encourages and rewards collaboration helps us enact real-world impact, and our atmosphere that focuses on teamwork makes creating top-quality science possible."

A notable way in which modern software has lost users’ trust is by exhibiting racist or sexist behavior, often because of learning from biased data. Brun's well-known contributions to preventing and correcting bias in machine learning include his 2017 paper, co-authored with CICS Professor Alexandra Meliou, "Fairness Testing: Testing Software for Discrimination," which won an ACM SIGSOFT Distinguished Paper Award that year. The paper, which has been cited over 400 times, came at a time when software fairness was not a common consideration in the development lifecycle and introduced some of the first techniques that automatically test software for bias. Since then, companies such as IBM, Microsoft, and Google have introduced tools to detect and mitigate bias in machine learning software.

In addition, Brun, Associate Professor Philip Thomas, and other colleagues from UMass Amherst and Stanford University developed a framework that fundamentally re-envisions machine learning to produce trustworthy models guaranteed to be safe and fair.

"The positive response this research has received shows that people care about using artificial intelligence and machine learning responsibly," says Brun. "That means ensuring that we learn to build systems that will not discriminate and that will not exacerbate biases from the data."

Beyond his work on bias in machine learning, Brun is also known for his broader contributions to automated software engineering to make software more trustworthy, particularly in developing tools that can automatically detect and fix bugs. In 2015, his paper out of the Laboratory for Advanced Software Engineering Research (LASER) at UMass Amherst demonstrated that existing tools to automatically repair software were invisibly breaking software due to a behavior called "overfitting," where patches make tests pass but fail to generalize to the intended behavior.

In response, his lab developed a now widely used methodology for measuring overfitting and evaluating the quality of a software patch. The methodology has become the standard for evaluating automated program repair techniques and has helped steer the field toward significantly improving patch quality.

Brun's current work further builds toward his vision for engineering "software you can trust"—moving from automated testing and repair of software systems to automatically proving that software is correct. Toward that goal, Brun applies artificial intelligence to formal verification, automating that critical piece of making software systems more trustworthy.

Brun has received numerous honors throughout his career, including an NSF CAREER Award (2015), the SEAMS Most Influential Paper Award (2020), the IEEE TCSC Young Achiever in Scalable Computing Award (2013), and several Distinguished Paper Awards from ACM SIGSOFT (2011, 2017, 2022, and 2023) and ACM SIGPLAN (2019). He has also received the IEEE ICSA Best Paper Award (2017), a Microsoft Research Software Engineering Innovation Foundation Award (2014), a Google Faculty Research Award (2015), a Google Inclusion Research Award (2021), an Amazon Research Award (2021), a Lilly Fellowship for Teaching Excellence (2017), and the College Outstanding Teacher Award (2017). Brun is a Distinguished Member of the ACM, and his doctoral work on privacy in cloud computing was a finalist for the ACM Doctoral Dissertation Competition in 2008.

Brun joined the CICS faculty in 2012. Previously, he was a postdoctoral fellow at the University of Washington. He received his doctorate and master's in computer science from the University of Southern California in 2008 and 2006, respectively, and his master's in engineering along with a double bachelor's in computer science and engineering and in mathematics from the Massachusetts Institute of Technology in 2003.