Abstract: Information access systems support a wide range of human activities and decisions, with varying levels of sensitivity. Existing information seeking systems, including those that employ large language models (LLMs), struggle to provide users with unbiased, diverse, and well-explained responses for exploring the answer space.
In this talk, I will present an overview of our work on interpretable information retrieval. I will describe in more detail our efforts to enhance the intrinsic interpretability of models so that they provide users with diverse and unbiased results; through this enhanced interpretability, we have achieved comparable or even higher effectiveness. I will conclude by highlighting the mutual benefits between interpretable information retrieval and generative AI systems.
Bio: Negin Rahimi is a research assistant professor at the Manning College of Information and Computer Sciences, UMass Amherst, where she is a member of the Center for Intelligent Information Retrieval. Prior to that, she was a postdoctoral researcher at UMass Amherst. She obtained her Ph.D. in computer engineering from the University of Tehran. Her research focuses on information retrieval, with an emphasis on interpretable information access. Her work is supported by Google Research Scholar, Adobe, and NSF awards.