Introduction

Artificial intelligence (AI) is increasingly being used in mental healthcare for functions such as clinical decision support, notetaking and transcription, symptom screening and triage, administrative and operational improvements, augmenting the provider-patient relationship, post-visit remote monitoring, and companionship.

Continue Reading: Legal Framework for AI in Mental Healthcare

Artificial intelligence (AI) companion apps have been in the news, with Commissioner Melissa Holyoak of the Federal Trade Commission calling for a study on AI companions earlier this month, and lawmakers at the state and federal levels voicing concerns about the technologies. In response, New York has enacted the first law requiring safeguards for AI companions. Scheduled to come into effect on November 5, 2025, the law requires operators of AI companions to implement safety measures to detect and address users’ expressions of suicidal ideation or self-harm and to regularly disclose to users that they are not communicating with a human. Here are some answers to the key questions about the law:

Continue Reading: New York Passes Novel Law Requiring Safeguards for AI Companions

On June 5, 2025, Nevada Governor Joe Lombardo signed AB 406, a law regulating the use of artificial intelligence (AI) for mental and behavioral healthcare. AB 406 comes as other states, such as Utah and New York, have taken steps to regulate AI chatbots, including AI chatbots providing mental health services. AB 406 prohibits offering AI systems designed to provide services that constitute the practice of professional mental or behavioral healthcare (such as therapy) and prohibits making representations that an AI system can provide such care. In addition, AB 406 limits how mental and behavioral healthcare professionals can use AI systems.[1] AB 406 takes effect on July 1, 2025.

Continue Reading: Nevada Passes Law Limiting AI Use for Mental and Behavioral Healthcare

On June 16, 2023, the Federal Trade Commission (FTC) announced a proposed settlement agreement (in the form of a stipulated order) with genetic testing company Vitagene, Inc., now known as 1Health.io, for allegedly misrepresenting its security and privacy practices regarding its data storage, deletion, and usage. The FTC also alleged that the company unfairly changed material privacy policy disclosures without obtaining affirmative consumer consent.

Continue Reading: FTC Announces Proposed Settlement with 1Health.io Genetic Testing Firm for Privacy and Security Violations