Introduction
Artificial intelligence (AI) is increasingly being used in mental healthcare for functions such as clinical decision support, notetaking and transcription, symptom screening and triage, administrative and operational improvements, augmenting the provider-patient relationship, post-visit remote monitoring, and companionship.
This article outlines key laws and regulations that may be applicable when using AI in mental healthcare, including scope of practice restrictions, U.S. Food and Drug Administration (FDA) regulations, and data privacy obligations.
Unauthorized Practice of Therapy
State licensing laws restrict who can provide professional health and mental health services. Each state has rules that define what activities constitute the “practice” of a particular health profession and what licenses are required to provide those services. For example, California law makes it unlawful for anyone other than a licensed physician to advertise or hold themselves out as an individual who “diagnoses, treats, operates for or prescribes for any ailment, blemish, deformity, disease, disfigurement, disorder, injury or other physical or mental health condition.”1
Similar laws apply to mental health professionals. In California, for example, mental health counselors are typically licensed as Licensed Professional Clinical Counselors (LPCCs),2 and California’s scope of practice law prohibits anyone but a licensed LPCC from engaging in the “application of counseling interventions and psychotherapeutic techniques to identify and remediate cognitive, mental, and emotional issues…”3 Under the law, “professional clinical counseling” includes conducting assessments for the purpose of establishing counseling goals and objectives to empower individuals to deal adequately with life situations, reduce stress, experience growth, change behavior, and make well-informed, rational decisions.
State Laws Regulating the Use of AI in Healthcare
Several states enacted laws in 2025 addressing the use of AI in mental healthcare.
- Illinois. In August 2025, Illinois Governor JB Pritzker signed into law Illinois House Bill 1806, known as the Wellness and Oversight for Psychological Resources Act (the WOPR Act), which regulates the use of AI in mental health therapy and psychotherapy services. The WOPR Act prohibits an individual, corporation, or entity from offering therapy or psychotherapy services through the use of AI unless those services are provided by a licensed professional. The WOPR Act also prohibits the use of AI in therapy to: 1) make independent therapeutic decisions, 2) directly interact with clients in any form of therapeutic communication, 3) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional, or 4) detect emotions or mental states. The WOPR Act does not apply to peer support or to self-help materials and educational resources that do not purport to offer therapy or psychotherapy services.
- Nevada. In June 2025, Nevada Governor Joe Lombardo signed Nevada AB 406, which regulates the use of AI for mental and behavioral healthcare. AB 406 prohibits offering AI systems designed to provide services that constitute the practice of professional mental or behavioral healthcare and prohibits making representations that an AI system can provide such care. In addition, AB 406 limits how mental and behavioral healthcare professionals can use AI systems. For more information, see here.
- New York. In May 2025, New York enacted the first law requiring safeguards for AI companions. Scheduled to take effect on November 5, 2025, the law requires operators of AI companions to implement safety measures to detect and address users’ expressions of suicidal ideation or self-harm and to regularly disclose to users that they are not communicating with a human. For more information, see here.
- Utah. In March 2025, Utah Governor Spencer Cox signed Utah HB 452, which establishes new rules for the use of AI mental health chatbots. Among other things, HB 452 prohibits a supplier from offering a mental health chatbot unless the supplier clearly and conspicuously discloses that the mental health chatbot is an AI technology and not a human: 1) before the user accesses the mental health chatbot’s features for the first time; 2) any time the user has not accessed the mental health chatbot’s features for more than seven days; and 3) when a user asks whether AI is being used. For more information, see here.
- California.
- Generative AI in Clinical Communications. Certain healthcare providers that use generative AI to generate patient communications pertaining to patient clinical information must disclose to patients that AI generated the communications and provide instructions on how to contact a human provider.4
- Attorney General Guidance. In January 2025, California Attorney General Rob Bonta issued a legal advisory providing guidance to healthcare providers, insurers, vendors, investors, and other healthcare entities that develop, sell, or use AI about their obligations under California law, including under the state’s consumer protection, anti-discrimination, and patient privacy laws. The advisory highlights how California laws: 1) prohibit unlawful, unfair, and fraudulent business practices, including the marketing of AI that does not comply with state or federal law; 2) prohibit AI from practicing licensed professions such as medicine; 3) prohibit discrimination based on certain protected traits through the use of AI; and 4) restrict the use and disclosure of patient information, including information that is accessed or disclosed by AI. For more information, see here.
- Texas. Healthcare providers that use AI for diagnostic purposes, including to recommend a diagnosis or course of treatment based on a patient’s medical record, must comply with various obligations, including disclosing the use of AI to patients and reviewing all records created with AI in a manner consistent with medical records standards developed by the Texas Medical Board.5
Additional requirements scheduled to take effect include the Colorado AI Act6 and the California Consumer Privacy Act (CCPA) Automated Decision-Making Technology Regulations.7
FDA
The FDA has jurisdiction over a broad range of products, including medical devices.
Medical devices subject to FDA regulation include hardware products, wearables, diagnostic products, and various software applications and mobile apps. The FDA considers a product to be a device subject to its regulation if the product meets the definition at Section 201(h) of the Federal Food, Drug, and Cosmetic Act, which can encompass certain AI-driven tools used in mental health diagnosis, treatment, and patient monitoring, as well as certain clinical decision support tools used by healthcare professionals.8
Data Privacy
Because the use of AI in healthcare generally involves processing consumers’ personal information, various privacy laws may also apply, including:
- Section 5 of the Federal Trade Commission Act. Section 5 of the Federal Trade Commission Act (FTCA) prohibits companies from engaging in unfair or deceptive acts or practices. In April 2025, the FTC announced an order requiring Workado, LLC to stop advertising the accuracy of its AI detection products unless it maintains competent and reliable evidence showing those products are as accurate as claimed. The order settled allegations that Workado promoted its AI Content Detector as “98 percent” accurate in detecting whether text was written by AI or a human when, according to the FTC, independent testing showed the accuracy rate on general-purpose content was just 53 percent.
- Health Privacy Laws. Personal information processed by AI mental health companies may be subject to one or more state or federal privacy laws or regulations governing the processing of individuals’ health information, such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA), state comprehensive consumer privacy laws like the CCPA, and Washington State’s My Health My Data Act (MHMDA).
- Wiretapping and Eavesdropping Laws. Companies that use AI, both in the mental health industry and more broadly, have been the subject of class action lawsuits brought under wiretapping and eavesdropping theories, including under the California Invasion of Privacy Act (CIPA) and the federal Electronic Communications Privacy Act (ECPA). Recent class action lawsuits have alleged that companies using AI tools, including AI meeting notetakers and AI chatbots, “record” or “eavesdrop” on consumers’ communications without their consent.
- Telephone Consumer Protection Act. The Telephone Consumer Protection Act (TCPA) generally prohibits calls using an “artificial or prerecorded voice” to residential and wireless phone lines without prior consent. In February 2024, the Federal Communications Commission (FCC) issued a Declaratory Ruling declaring that AI technologies simulating human voices or using prerecorded voices fall within the meaning of “artificial” voice under the TCPA, requiring an appropriate level of consent before contacting the consumer.9 Additional disclosure requirements for AI-generated calls may soon be on the way, based on a notice of proposed rulemaking issued by the FCC in August 2024.
- Children’s Privacy. Companies handling children’s information must be aware of state and federal laws regulating children’s privacy, which offer protections beyond those for adults.
Contact Us
Wilson Sonsini routinely helps companies navigate complex issues pertaining to AI and healthcare. For more information or assistance, please contact Andrea Linna, Tracy Shapiro, Maneesha Mithal, Eva Yin, Lauren Gallo White, Hale Melnick, Nawa Lodin, Seamus Taylor, or another member of the firm’s Digital Health practice or Data, Privacy, and Cybersecurity practice.
[1] Cal. Bus. & Prof. Code § 2052.
[2] California also licenses Licensed Marriage and Family Therapists (LMFTs) and Licensed Clinical Social Workers (LCSWs), each of which has its own scope of practice.
[3] Cal. Bus. & Prof. Code § 4999.20.
[4] Cal. Health & Safety Code § 1339.75.
[5] Tex. Health & Safety Code § 183.001 et seq.
[6] The Colorado AI Act, which is set to take effect June 30, 2026, will impose obligations on developers and deployers of “high-risk AI systems,” which include those that have an effect on the provision, denial, or cost or terms of healthcare services.
[7] Under the new CCPA regulations promulgated by the California Privacy Protection Agency, businesses that use automated decision-making technologies to make a “significant decision” concerning a consumer will need to fulfill certain notice, opt-out, and access request obligations. “Significant decisions” include decisions that result in the provision or denial of healthcare services. For more information, see here.
[8] A device is defined as “an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including a component part, or accessory which is … (B) intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals, or (C) intended to affect the structure or any function of the body of man or other animals, and which does not achieve its primary intended purposes through chemical action within or on the body of man or other animals and which is not dependent upon being metabolized for the achievement of its primary intended purposes.”
[9] The appropriate level of consent depends on a variety of factors, including but not limited to whether the call involves telemarketing and whether the call is to a residential line or a cell phone.