AI Can Deliver Personalized Learning at Scale, Study Shows


A Dartmouth study finds that students use curated chatbots for trusted 24/7 support.
A new Dartmouth study finds that artificial intelligence has the potential to deliver educational support that meets the individual needs of large numbers of students. The researchers are the first to report that students may place more trust in AI platforms programmed to pull answers only from curated expert sources, rather than from massive datasets of general information.
Professor Thomas Thesen and co-author Soo Hwan Park, MED ’25, tracked how 190 medical students in the Geisel School of Medicine used an AI teaching assistant called NeuroBot TA, which provides around-the-clock individualized support for students in Thesen’s Neuroscience and Neurology course.
Thesen and Park built the platform using retrieval-augmented generation, or RAG, a technique that anchors the responses of large language models to specific information sources. This produces more accurate and relevant answers by reducing "hallucinations," AI-generated information that sounds convincing but is false or unsupported.
NeuroBot TA is designed to base its responses on select course materials such as textbooks, lecture slides, and clinical guidelines. Unlike general chatbots, which have been known to invent facts, NeuroBot TA answers only questions it can support with the vetted materials.
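The paper doesn't publish NeuroBot TA's code, but the retrieve-then-refuse pattern it describes can be sketched in a few lines. In the toy Python below, a bag-of-words similarity stands in for a real embedding model, and the corpus, threshold, and refusal message are all hypothetical:

```python
# Minimal RAG sketch; a toy illustration, not the study's implementation.
# A real system would use learned embeddings and an LLM API; a bag-of-words
# similarity stands in here so the example runs on its own.
import math
import re
from collections import Counter

COURSE_MATERIALS = [  # stand-ins for vetted textbook and lecture passages
    "The hippocampus is essential for forming new episodic memories.",
    "Broca's area, in the left frontal lobe, supports speech production.",
]

def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase bag-of-words counts."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def answer(question: str, threshold: float = 0.25) -> str:
    q = embed(question)
    best = max(COURSE_MATERIALS, key=lambda p: cosine(q, embed(p)))
    if cosine(q, embed(best)) < threshold:
        # The guardrail: refuse rather than guess when nothing in the
        # curated corpus supports an answer.
        return "I can't support an answer from the course materials."
    # A real RAG system would insert `best` into the LLM prompt
    # ("Answer using only this passage: ..."); here we return it directly.
    return f"Based on course materials: {best}"

print(answer("Which brain region is needed for new episodic memories?"))
print(answer("Who won the World Cup?"))  # no supporting passage: refusal
```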
Thesen and Park’s study examined whether the RAG approach inspires more trust in student users, and how they might actually integrate such safeguarded systems into their learning. They report in npj Digital Medicine that students overwhelmingly trusted NeuroBot’s curated knowledge more than generally available chatbots.
This pattern indicates that generative AI and RAG have the potential to provide tailored, interactive instruction outside the traditional academic setting, says Thesen, the study's first author and an associate professor of medical education. Park, a co-author who took the Neuroscience and Neurology course as a student, is now a neurology resident at Stanford Health Care.
“This work represents a step toward precision education, meaning the tailoring of instruction to each learner’s specific needs and context,” Thesen says. “We’re showing that AI can scale personalized learning, all while gaining students’ trust. This has implications for future learning with AI, especially in low-resource settings.”
“But first, we need to understand how students interact with and accept AI, and how they will react if guardrails are implemented,” he says.
The study focused on students from two class years who took the course in fall 2023 and fall 2024. Of the students in the study, 143 completed a final survey and commented on their experience using NeuroBot TA. More than a quarter of respondents highlighted the chatbot's trustworthiness and reliability, as well as its convenience and speed, especially when studying for exams. Nearly half thought the software was a useful study aid.
“Transparency builds trust,” Thesen says. “Students appreciated knowing that answers were grounded in their actual course materials rather than drawn from training data based on the entire internet, where information quality and relevance vary.”
The findings also highlight some of the challenges educators may face in implementing generative AI chatbots, Thesen and Park report. Surveys have shown that nearly half of medical students use chatbots at least weekly. In the Dartmouth study, students mainly used NeuroBot TA for fact-checking—which increased dramatically before exams—rather than for in-depth learning or long, engaging discussions.
Some users were also frustrated by the platform's limited scope, which might nudge students toward larger but less quality-controlled chatbots. The study also revealed a particular vulnerability of students interacting with AI: they often lack the expertise to recognize hallucinations, Thesen says.
“We’re now exploring hybrid approaches that could mark RAG-based answers as highly reliable while carefully expanding the breadth of information students can encounter on their learning journey,” he says.
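Thesen doesn't describe how such a hybrid would be built. One plausible shape, sketched below with hypothetical function names and labels, is to answer from the curated corpus when retrieval succeeds, and otherwise fall back to a general model while labeling each answer's provenance so students can weigh it:

```python
# Hypothetical sketch of a hybrid tutor (not from the study): curated
# RAG answers when retrieval succeeds, a labeled general-model fallback
# otherwise. Both helper functions are placeholders.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Answer:
    text: str
    source: str       # "curated-RAG" or "general-LLM"
    reliability: str  # label surfaced to the student

def retrieve(question: str) -> Optional[str]:
    """Placeholder for the RAG retrieval step sketched earlier; returns
    a supporting passage, or None if nothing in the corpus matches."""
    corpus = {"hippocampus": "The hippocampus is essential for new episodic memories."}
    return next((p for k, p in corpus.items() if k in question.lower()), None)

def generate(question: str, context: Optional[str]) -> str:
    """Placeholder for an LLM call; a real system would prompt a model."""
    basis = "course passage" if context else "general knowledge"
    return f"[answer to {question!r} using {basis}]"

def hybrid_answer(question: str) -> Answer:
    passage = retrieve(question)
    if passage is not None:
        return Answer(generate(question, passage), "curated-RAG",
                      "high: grounded in vetted course materials")
    return Answer(generate(question, None), "general-LLM",
                  "unverified: check against course materials")

print(hybrid_answer("What does the hippocampus do?"))
print(hybrid_answer("Summarize the latest neurology trial results."))
```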
While students at institutions like Dartmouth benefit from low student-to-instructor ratios that allow for personalized learning, many institutions around the world lack these resources, Thesen says, leaving students with overcrowded classrooms and limited access to instructors. In these settings, he says, AI tools like NeuroBot TA could have the greatest impact by expanding access to individualized instruction.
That impact is being seen with AI Patient Actor, which was developed in Thesen’s Neuroscience-Informed Learning and Education Lab in 2023. The platform helps medical students hone their communication and diagnostic skills by simulating conversations with patients and providing immediate feedback on students’ performance. AI Patient Actor is now used in medical schools in and outside the United States and in Geisel courses, including the new On Doctoring curriculum.
An August study led by Thesen and Roshini Pinto-Powell, a professor of medicine and medical education and co-director of On Doctoring, found that AI Patient Actor provided first-year medical students with a safe space to test their skills, learn from their mistakes, and identify their strengths and weaknesses.
For NeuroBot TA, Thesen and Park plan to enhance the software with teaching techniques and cognitive science principles known to produce deeper understanding and long-term retention, such as Socratic tutoring and spaced retrieval practice. 
Rather than providing answers, a chatbot would guide students to discover solutions through targeted questioning and dialogue, and quiz them at regular intervals. Future systems could also choose one strategy or the other depending on context, such as preparing for an exam versus routine studying, Thesen and Park suggest.
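The paper doesn't specify how such strategy switching would work. As a purely illustrative sketch, the exam-proximity trigger and canned responses below are hypothetical:

```python
# Hypothetical sketch of context-dependent tutoring, as described above;
# the exam-proximity trigger and canned responses are illustrative only.
def choose_strategy(days_until_exam: int) -> str:
    # Near an exam, favor quick retrieval practice; otherwise favor
    # slower Socratic dialogue that builds understanding.
    return "spaced-retrieval" if days_until_exam <= 7 else "socratic"

def tutor_turn(question: str, days_until_exam: int) -> str:
    if choose_strategy(days_until_exam) == "socratic":
        # Guide rather than answer: reply with a probing question.
        return f"Before I answer, what do you already know about this? ({question})"
    # Spaced retrieval practice: quiz the student on earlier material.
    return f"Quick quiz first, drawn from earlier material related to: {question}"

print(tutor_turn("How does long-term potentiation work?", days_until_exam=30))
print(tutor_turn("How does long-term potentiation work?", days_until_exam=3))
```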
“At a metacognitive level, students, like the rest of us, need to understand when they can use AI to just get a task done, and when and how they should use it for long-term learning,” Thesen says.
“There is an illusion of mastery when we cognitively outsource all of our thinking and learning to AI, but we’re not really learning,” he says. “We need to develop new pedagogies that can positively leverage AI while still allowing learning to occur.”
Morgan Kelly can be reached at morgan.kelly@dartmouth.edu