On June 5, 2025, Nevada Governor Joe Lombardo signed AB 406, a law regulating the use of artificial intelligence (AI) for mental and behavioral healthcare. AB 406 comes as other states, such as Utah and New York, have taken steps to regulate AI chatbots, including AI chatbots providing mental health services. AB 406 prohibits offering AI systems designed to provide services that constitute the practice of professional mental or behavioral healthcare (such as therapy) and prohibits making representations that an AI system can provide such care. In addition, AB 406 limits how mental and behavioral healthcare professionals can use AI systems.[1] AB 406 takes effect on July 1, 2025.
A more detailed summary of the key provisions is below.
Prohibitions on AI Systems Providing Mental and Behavioral Healthcare
AB 406 prohibits AI providers from making available in Nevada an AI system that is specifically programmed to provide a service that would otherwise constitute the practice of professional mental or behavioral healthcare if provided by a person. Under AB 406, “professional mental or behavioral health care” refers to services relating to the diagnosis, treatment, or prevention of mental illness or an emotional or behavioral disorder that are typically provided by a mental or behavioral healthcare provider within the scope of their practice. AB 406 does not prohibit AI systems designed to be used by mental and behavioral health providers for administrative tasks, such as scheduling and billing.
AB 406 also prohibits AI providers from making representations, explicitly or implicitly, that:
- The AI system is capable of providing professional mental or behavioral healthcare;
- A user can obtain professional mental or behavioral healthcare by interacting with the AI system through a simulation of human conversation; or
- The AI system (or any component, feature, embodiment, or avatar) is a provider of mental or behavioral healthcare, a therapist, a psychiatrist, or any other term commonly used to refer to a provider of professional mental health or behavioral healthcare.
AI providers should therefore be cautious about how they market, characterize, and contract for AI systems that are designed to support mental and behavioral health practices. Although AB 406 is notable for explicitly prohibiting the use of AI to provide professional mental or behavioral healthcare, Nevada already makes it unlawful to provide mental health services (or other professional services) without the required license or authorization. This is not unique to Nevada: most states prohibit providing a licensed service without a license, and the use of AI does not create an exception. Whenever AI is deployed alongside a regulated profession, it is crucial to assess whether the service the AI provides requires a licensed practitioner and complies with applicable state professional licensing laws, as violations are frequently subject to criminal penalties.
AB 406 authorizes the Division of Public and Behavioral Health to investigate potential violations and bring actions to recover civil penalties, and each violation is subject to a civil penalty of up to $15,000. Further, under existing law, it is a crime to provide certain professional services without a license. The Division is also directed to develop recommended best practices for the potential use of AI by individuals seeking mental and behavioral healthcare.
Limits on the Use of AI Systems by Mental and Behavioral Health Professionals
AB 406 also prohibits Nevada providers of mental and behavioral healthcare from using AI systems in connection with providing care directly to a patient. However, AB 406 allows these providers to use AI systems to perform administrative support tasks. If an AI system is used to manage billing or notes from patient sessions, the provider must independently review the accuracy of the AI system’s output. Providers who violate these requirements commit unprofessional conduct and are subject to disciplinary action.
Next Steps
Consistent with existing law, and in light of the July 2025 compliance deadline, companies that offer AI products related to mental health in Nevada should take immediate steps to review their marketing language, provider contracts, disclaimers, terms of service, and other public-facing statements to ensure they do not suggest that their AI systems can diagnose, treat, or prevent mental illnesses or emotional or behavioral disorders. Providers of mental and behavioral healthcare should similarly ensure that they are not using AI systems to provide care to patients that would constitute the practice of professional mental or behavioral healthcare, and that they are independently reviewing the outputs of their AI systems when AI is used for administrative support.
Wilson Sonsini Goodrich & Rosati routinely helps companies navigate complex privacy and data security issues and specializes in issues pertaining to AI and healthcare. For more information or assistance with your compliance program, please do not hesitate to contact Maneesha Mithal, Andrea Linna, Hale Melnick, or another member of the firm’s Data, Privacy, and Cybersecurity practice or Digital Health practice.
Taylor Stenberg Erb and Maddie Smurzynski contributed to the preparation of this post.
[1] AB 406 also restricts the use of AI in behavioral and mental healthcare services in schools, which is beyond the scope of this alert.