Artificial Intelligence in Mental Health Care: Ethical Implications and Assessment of the U.S. Legislation

Academic Level at Time of Presentation

Senior

Major

Cybersecurity & Network Management/Security & Digital Forensics

Minor

Spanish

List all Project Mentors & Advisor(s)

Randall Joyce

Presentation Format

Event

Abstract/Description

In the past few years, the use of Artificial Intelligence (AI) has increased at an exponential rate and has expanded into sectors that had not previously been expected to move toward a technology-based, non-human format. One area that has seen an unexpected increase in the use of AI is the mental health and therapy sector. Because of the extensive training and specialization required of professionals in this industry, combined with the sensitive nature of the work, there are growing concerns about the use of AI in therapeutic capacities. This research follows studies conducted on rule-based models, generative AI (GenAI), and Large Language Models (LLMs) to show the effects of, and growing ethical concerns accompanying, the use of these technologies. There is also a large legal and regulatory gap for their use within the United States. This study assesses each state's AI legislation, whether enacted or in progress, and identifies the specific industries and topics directly affected by its implementation. It also reviews legislation governing AI mental health and psychotherapy services and discusses its impact on the industry. This research aims to sharpen the focus on the ethical and moral limitations of AI in mental health care, the efficacy of the services these technologies are said to provide, and the legality of offering such services within the United States.

Spring Scholars Week 2026

Honors College Senior Thesis Presentations
