Artificial intelligence (AI) companion apps have been in the news, with Commissioner Melissa Holyoak of the Federal Trade Commission calling for a study on AI companions earlier this month, and lawmakers at the state and federal levels voicing concerns about the technologies. In response, New York has enacted the first law requiring safeguards for AI companions. Scheduled to come into effect on November 5, 2025, the law requires operators of AI companions to implement safety measures to detect and address users’ expressions of suicidal ideation or self-harm and to regularly disclose to users that they are not communicating with a human. Here are some answers to the key questions about the law:

How Do I Know If the Law Applies to Me? 

The law applies to operators of AI companions with users within New York. AI companions are defined as systems using AI, generative AI, or “emotional recognition algorithms” that are designed to simulate a sustained human-like relationship with a user by i) retaining information on prior interactions and user preferences to personalize and facilitate ongoing engagement; ii) asking unprompted or unsolicited emotion-based questions that go beyond a direct response to a user prompt; and iii) sustaining an ongoing dialogue concerning personal matters. “Emotional recognition algorithms” are defined as AI that “detects and interprets human emotional signals in text (using natural language processing and sentiment analysis), audio (using voice emotion AI), video (using facial movement analysis, gait analysis, or physiological signals), or a combination thereof.” The term “AI companion” does not include any system used by a business solely for customer service, internal purposes, or employee productivity; or any system used primarily to provide efficiency improvements, research, or technical assistance.

If the Law Applies to Me, What Do I Need to Do to Comply?

Operators of AI companions must ensure that their system contains a protocol to take reasonable measures to detect and address a user’s expression of suicidal ideation or self-harm. At a minimum, the law states that, upon detection of a user’s expression of suicidal ideation or self-harm, the operator must refer the user to a crisis service provider, crisis text line, or another appropriate crisis service.

In addition, the New York law requires operators of AI companions to provide a clear and conspicuous notification, verbally or in writing, that the user is not communicating with a human. The notification need only be provided once per day, but must be repeated at least every three hours during ongoing interactions.
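
By way of illustration only, the sketch below shows one way an operator might track this disclosure cadence in code. It is written in Python, assumes a hypothetical per-user "last notified" timestamp, and reflects one possible reading of the once-per-day and every-three-hours timing described above; the law does not prescribe any particular implementation.

```python
from datetime import datetime, timedelta

# Assumed reading of the cadence (illustrative only): show the AI disclosure
# at a user's first interaction, again at the first interaction of each new
# day, and at least every three hours during an ongoing interaction.
RENOTIFY_INTERVAL = timedelta(hours=3)

AI_DISCLOSURE = "Reminder: you are chatting with an AI companion, not a human."

def should_notify(last_notified: datetime | None, now: datetime) -> bool:
    """Return True if the AI disclosure should be shown (again)."""
    if last_notified is None:
        return True  # first interaction ever: always disclose
    if last_notified.date() != now.date():
        return True  # first interaction of a new day
    return now - last_notified >= RENOTIFY_INTERVAL  # long-running session
```

In practice, the disclosure text, delivery channel (verbal or written), and timing logic should be checked against the final statutory text and any implementing guidance.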

How Will the Law Be Enforced? 

The state attorney general may seek an injunction and civil penalties against operators of AI companions that the attorney general believes have violated or are about to violate these provisions. The attorney general may seek civil penalties of up to $15,000 per day for violations of the notification and safety-measure requirements. The penalties collected will be deposited into a suicide prevention fund, which was also created as part of the state’s fiscal year 2025-26 budget.

What Should I Do Now to Prepare for the November 5 Compliance Date? 

  • Determine whether your AI system constitutes an AI companion under the law.
  • If you’re covered, decide i) whether you will notify users verbally or in writing that they are not interacting with a human, ii) how you will ensure that the notice is clear and conspicuous, and iii) how often you will provide it.
  • Assess how you will detect a user’s expression of suicidal ideation or self-harm. Note that some AI vendors offer products that can help with detection.
  • Consider how you will address expressions of suicidal ideation or self-harm. Some companies are already surfacing messages with crisis-support resources when a user includes phrases suggesting suicidal ideation or self-harm; a simplified sketch of this detect-and-refer pattern appears after this list.
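
By way of illustration only, the sketch below shows the general shape of a detect-and-refer protocol in Python: screen an incoming message against a small set of self-harm-related phrases and, if the message is flagged, respond with a referral to a crisis resource such as the 988 Suicide & Crisis Lifeline. The phrase list, function names, and referral text are illustrative assumptions; production systems typically rely on trained classifiers or vendor moderation tools, which are far more robust than keyword matching.

```python
# Illustrative only: simple keyword matching both over- and under-detects,
# and is no substitute for classifier-based or vendor-provided detection.
SELF_HARM_PHRASES = [
    "want to die",
    "kill myself",
    "end my life",
    "hurt myself",
]

CRISIS_REFERRAL = (
    "It sounds like you may be going through a difficult time. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

def detect_self_harm(message: str) -> bool:
    """Rough keyword screen for expressions of suicidal ideation or self-harm."""
    text = message.lower()
    return any(phrase in text for phrase in SELF_HARM_PHRASES)

def handle_message(message: str) -> str | None:
    """Return a crisis referral for flagged messages, otherwise None."""
    return CRISIS_REFERRAL if detect_self_harm(message) else None
```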

What If I Don’t Offer Apps in New York?

New York is the first state to require safeguards for AI companions; however, other states are taking similar steps to regulate chatbots. For instance, Utah passed SB 226, which came into effect in May 2025 and amends the state’s existing AI disclosure obligations. The law requires companies that use generative AI to interact with individuals in consumer transactions to disclose that the individual is interacting with AI rather than a human, if the individual prompts the company about its use of AI. The law also requires companies to disclose if an individual is interacting with AI in the provision of services in a “regulated occupation” (such as healthcare services) if the use of generative AI is a “high-risk artificial intelligence interaction.” In 2024, Colorado passed comprehensive AI legislation, scheduled to come into effect in February 2026, which requires consumer-facing developers and deployers to provide notice to consumers when they are interacting with an AI system, unless it would be obvious to a reasonable person. (See prior Wilson Sonsini alert). California has also had a “bot disclosure law” in place since 2019 that applies to bots used to “incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.”  

Some states have placed particular emphasis on chatbot regulation in the healthcare context. For instance, Utah passed HB 452 in March 2025, which, among other things, requires operators of “mental health chatbots” to disclose that the chatbot is not human. For further information on this law, see a prior analysis here. Most recently, Nevada passed AB 406, which prohibits using AI to provide certain mental or behavioral health care services. See client alert here.

Wilson Sonsini Goodrich & Rosati routinely helps companies navigate complex privacy and data security issues and specializes in issues pertaining to AI. For more information or assistance with your compliance program, please do not hesitate to contact Maneesha Mithal or another member of the firm’s Data, Privacy, and Cybersecurity practice.

Taylor Stenberg Erb and Maddie Smurzynski contributed to the preparation of this post.