Could your next job interview be with a chatbot? New study seeks to help bring fairness into AI-powered hiring
Landing a job traditionally meant polishing a resume, printing extra copies and sitting across from a hiring manager. Today, the first “person” to evaluate you might not be a person at all — it could be a chatbot powered by artificial intelligence. These automated systems can ask questions, score responses and even recommend who gets hired.
Rice University’s Tianjun Sun has received a National Science Foundation award to lead a two-year collaborative project with the University of Florida examining how AI interview systems work — and how to make them more fair.
For employers, chatbot interviews promise consistency and efficiency. For applicants, though, the stakes are high. What if AI interprets the same answer differently depending on whether it comes from a man or a woman, or from someone with a different cultural background? Those questions are at the heart of Sun’s project.
“Two candidates may give essentially the same answer,” said Sun, assistant professor of psychological sciences. “But the algorithm might process them differently. That can lead to unfair or inaccurate hiring decisions.”
The risks aren’t hypothetical. Studies show that a growing share of companies already use AI tools in hiring, with many relying on chatbots to screen candidates. At the same time, research has found that these systems can reflect or even amplify human bias, sometimes favoring certain groups over others.
Sun’s project, titled “A Process-Driven Approach to Artificial Intelligence Chatbot Interviews,” will study bias at three levels: the predictors, or the language features AI extracts; the outcomes, or the interview scores and recommendations it produces; and the perceptions of job seekers themselves, particularly whether they view the process as fair and transparent. Her lab has already created a prototype chatbot that conducts short interviews and generates a Big Five personality profile, which she uses as a demonstration of how these systems might evolve.
Sun describes her approach as “psychometric AI” — the application of psychological measurement principles to modern algorithms. “Computer scientists often focus on whether an algorithm predicts well,” Sun said. “But psychologists ask a different question: Are we really predicting what we think we’re predicting, and is the process fair?”
Patricia DeLucia, associate dean for research in Rice’s School of Social Sciences, said Sun’s study exemplifies the kind of research that anticipates real-world needs. “Sun’s work is at the cutting edge and will have significant societal impacts as AI becomes more prevalent,” DeLucia said.
If successful, Sun’s research will help establish benchmarks for more ethical AI hiring tools and offer employers ways to design systems that better serve human purposes.