What new state bans mean for college students and faculty.
Posted September 9, 2025 | Reviewed by Monica Vilhauer, Ph.D.
In 2025, three U.S. states (Utah, Nevada, and Illinois) have taken significant steps to limit the role of artificial intelligence in mental health care. Each state enacted laws that prohibit AI chatbots from providing therapy, diagnosis, or therapeutic decision-making, while still allowing their use for administrative support like scheduling and documentation. Utah’s restrictions went into effect first on May 7, followed by Nevada on July 1, and Illinois on August 4. Enforcement varies by state: In Illinois, for example, the Department of Financial and Professional Regulation is authorized to fine violators up to $10,000 per incident, usually in response to consumer complaints.
Lawmakers in all three states justified the restrictions by pointing to patient safety concerns and the need to clarify where AI fits in health care. The Illinois legislation explicitly drew a boundary between administrative support and clinical practice, while Nevada’s law barred any AI system from providing professional mental or behavioral health care. Utah’s framework focused on regulating “mental health chatbots” used by state residents, requiring disclosures, privacy protections, and clear limitations to prevent AI from impersonating licensed professionals. These measures reflect growing anxiety about what happens when untested tools interact with vulnerable users, especially in moments of crisis.
Dr. Jessica Kizorek and Dr. Otis Kopp, both professors at Florida International University (FIU), have focused their academic inquiry on the intersection of artificial intelligence and emotional anxiety in college students and young professionals. In their research at FIU, a consistent theme emerges: Students worry about AI replacing human roles, yet they use it daily for learning, creativity, and stress relief. As one FIU undergraduate student explained, “One thing that stresses me out about AI is that it’s making navigating the job market much more difficult. Jobs that were historically well-suited for college graduates are now being scaled back because of all the ways AI is smarter and more efficient than us humans.”
Others have expressed concern over cognitive over-reliance. “AI gives me anxiety because it’s making people stupid,” another student noted. “I noticed that I started using AI for everything and it was making my brain used to not thinking. If you just let AI do everything, you won’t develop critical thinking and problem solving skills.” A third articulated the broader uncertainty: “One way AI makes me anxious is the confusing role people play in the equation. What should we use it for? How do we still use our brains in the process? How do we optimize the human-computer interaction so that we’re using it in all the right ways, rather than relying on it as a crutch?” This mirrors the reality in mental health care. AI cannot replicate the accountability, empathy, and nuanced judgment of a trained clinician. However, it can still play a constructive role in well-being when used appropriately in activities like journaling, mindfulness, or creative exploration.
For colleges and universities, the bans raise both academic and practical questions. Students majoring in psychology, sociology, and related mental health fields may find that their training opportunities differ depending on the state where they study. In Illinois and Nevada, campus counseling centers and affiliated clinics are unlikely to integrate AI chatbots into therapy sessions. Faculty and students conducting research may still examine AI tools, but clinical use will be off-limits, and Institutional Review Boards (IRBs) will apply tighter restrictions. In Utah, the rules permit more regulated experimentation with AI, though still not in a therapeutic role. Students interested in exploring digital therapeutics may therefore find more opportunity to study disclosure, compliance, and ethical frameworks in Utah than in Illinois or Nevada.
Counseling centers themselves are also adapting. Human-delivered services remain the standard across the country, but AI tools may still appear behind the scenes, automating intake forms, scheduling, or triage documentation. On campuses in Illinois and Nevada, students seeking chatbot-based emotional support will not find those services officially endorsed or integrated by counseling centers. In Utah, approved chatbot vendors may still be accessible, but they must comply with disclosure and safety requirements. For faculty supervising clinical practicums, these laws mean less emphasis on experimenting with AI as a therapy tool and more focus on policy, ethics, and oversight in the classroom.
The bans also matter for liability and professional responsibility. While penalties generally target providers or companies that market chatbots as therapeutic agents, questions remain about what happens if a patient uses AI tools on their own. Legal scholars suggest that liability typically falls on the tool’s developer unless a licensed professional actively recommended it, but this area is still evolving. Faculty training the next generation of clinicians will need to prepare students to navigate these gray areas, reinforcing the importance of professional accountability even when patients turn to outside tools.
Nationally, the regulatory picture continues to shift. Other states, including New Jersey, Massachusetts, and California, are debating their own measures, many of which aim to prevent chatbots from posing as therapists, require greater transparency, and ensure clinician oversight. At the federal level, agencies like the FDA and the Department of Health and Human Services may eventually regulate AI mental health tools, though Congress has yet to act. One proposed bill would even block states from passing new AI regulations for a decade, creating tension between state and federal approaches.
For psychology and mental health students weighing where to attend college, these differences may factor into their decision-making. Choosing Illinois or Nevada means limited exposure to chatbot therapy in academic or clinical settings, while Utah allows slightly more flexibility under regulation. In other states without bans, students may encounter a wider range of pilot programs and research opportunities, though oversight is tightening everywhere.
Meanwhile, student use of AI outside official channels continues to grow. National surveys show that most teens and young adults have experimented with AI companions for journaling or stress relief. This trend underscores the reality that while legislation limits clinical use, informal reliance on chatbots remains widespread. For campuses, this creates a dual challenge: complying with state law while recognizing that students often turn to AI tools in their personal lives.
In practice, these new laws are less about rejecting technology and more about clarifying its place. For now, AI may support administrative tasks, personal reflection, and research into compliance and ethics, but the role of therapist remains reserved for licensed professionals. For students preparing to enter psychology, sociology, or counseling fields, the state they choose to study in will shape not only their access to certain technologies but also how they learn to navigate the boundaries between innovation, regulation, and human care.
References
Adetayo, A. J. (2024). Reimagining learning through AI art: the promise of DALL-E and MidJourney for education and libraries. Library Hi Tech News. https://doi.org/10.1108/LHTN-01-2024-0005
Alzahrani, L. (2023). Analyzing students’ attitudes and behavior toward artificial intelligence technologies in higher education. International Journal of Recent Technology and Engineering (IJRTE), 11(6), 65-73. https://doi.org/10.35940/ijrte.F7475.0311623
Cortez, P. M., Ong, A. K. S., Diaz, J. F. T., German, J. D., & Jagdeep, S. J. S. S. (2024). Analyzing preceding factors affecting behavioral intention on communicational artificial intelligence as an educational tool. Heliyon, 10(3), e25896. https://doi.org/10.1016/j.heliyon.2024.e25896
Firat, M. (2023). What ChatGPT means for universities: Perceptions of scholars and students. Journal of Applied Learning and Teaching, 6(1), 57-63. https://doi.org/10.37074/jalt.2023.6.1.22
Ghotbi, N., Ho, M. T., & Mantello, P. (2022). Attitude of college students towards ethical issues of artificial intelligence in an international university in Japan. AI & Society, 37, 283-290. https://doi.org/10.1007/s00146-021-01168-2
https://healthjournalism.org/blog/2025/08/states-crack-down-on-ai-for-behavioral-health-care/
https://nypost.com/2025/08/13/us-news/illinois-becomes-third-state-to-restrict-use-of-ai-in-mental-health-industry-as-experts-warn-about-ai-psychosis/
https://www.washingtonpost.com/nation/2025/08/12/illinois-ai-therapy-ban/
https://www.psychiatrist.com/news/illinois-outlaws-ai-in-therapy-sessions/
Eva Ritvo, M.D., is an internationally known speaker, best-selling author, vitality expert, and psychiatrist with a practice in Miami Beach.