In 2025, lawmakers and enforcement agencies around the globe have kept one issue firmly in the spotlight: the privacy and safety of minors online. This heightened focus shows no sign of abating, with early indications that companies should expect to see more legislative and regulatory initiatives in the year ahead.
In this post, we identify some of the key developments over the last 12 months and outline our predictions about what the next year may bring.
Trend #1. Age Assurance
Age assurance technologies can be an important tool for identifying minors on a service and tailoring their experience. Laws requiring some form of age assurance or age verification—beyond a self-declaration—have been in focus over the last 12 months.
- U.S. We saw a flurry of age assurance laws enacted this year. This included revised attempts at social media regulation (see, e.g., Va. SB 854 (2025)) and age-appropriate design codes (see, e.g., Vt. S69 (2025)), despite consistent legal challenges to these statutory frameworks. Four states also passed laws requiring app stores to provide age category signals to developers, often referred to as “app store accountability acts.” And in December 2025, the House Committee on Energy and Commerce considered a slate of minors’ privacy and online safety bills, which have been advanced to the full House of Representatives.
- Europe. The European Commission (EC) struck a pragmatic chord in its guidelines on the protection of minors (the Guidelines), noting that providers of online platforms should adopt a proportionate, risk-based approach when deciding whether to implement age assurance measures for the purposes of complying with the Digital Services Act (DSA). In the UK, the Online Safety Act 2023 (OSA) sets out relatively clear rules on the purposes for which age assurance should be used. Ofcom’s early enforcement efforts have focused on compliance by websites hosting adult content; however, the regulator is expected to turn its attention to a broader range of services in 2026 and to report on the overall effectiveness of age assurance technologies.
- Rest of World. Australia’s “social media ban” became effective December 10, 2025, preventing minors under 16 from holding accounts on designated platforms. The law places responsibility on platforms to verify a user’s age and to provide various means of doing so. Singapore has taken a different approach, with the country’s digital regulator requiring app stores—rather than service providers—to carry out age assurance under its Code of Practice for Online Safety for App Distribution Services.
Our Prediction: In 2026, we can expect to see further movement in this space globally. In the U.S., states will likely draft copycat laws if existing statutes survive constitutional scrutiny, either in whole or in part. The U.S. Congress will also consider minors’ privacy and online safety legislation again, and the Federal Trade Commission (FTC) will host a workshop on the use and implementation of age assurance technologies. The EU has shown early indications that it may introduce new rules requiring 13- to 16-year-olds to obtain parental consent before accessing social media platforms, video-sharing platforms, and certain artificial intelligence (AI) companions. It is also actively working toward a harmonized age-verification standard. Broadly, we can expect increased discussion of the potential benefits and risks of implementing these technologies.
Trend #2. “Addictive” Features
Increased attention has been paid globally to how products and services directed at minors are designed, especially with respect to so-called “addictive” features.
- U.S. Courts continue to grapple with whether companies can be regulated or sued over certain design decisions about their online services. For example, the U.S. Court of Appeals for the Ninth Circuit partially enjoined California’s Protecting Our Kids from Social Media Addiction Act, holding that certain requirements did not survive strict scrutiny. And in Massachusetts, the state’s Supreme Judicial Court is set to hear argument on whether Section 230 of the Communications Decency Act can shield a platform from liability for designs that allegedly harm minors.
- Europe. At present, addictive design in digital services is addressed primarily through rules on so-called “dark patterns” under existing EU and UK legal frameworks including the DSA, the General Data Protection Regulation (GDPR), and the Unfair Commercial Practices Directive. However, in July 2025 the EC signaled a potential shift in approach by launching a call for evidence in relation to a proposed Digital Fairness Act (DFA), which would address practices such as the “addictive design of digital products.” The DFA is also referenced within the Commission’s 2030 Consumer Agenda.
In the UK, it remains uncertain whether new legislation will be introduced to address addiction in digital services explicitly, or whether the government will continue to rely on existing regulatory frameworks and authorities—such as the OSA and the Information Commissioner’s Office’s Age-Appropriate Design Code—to address these issues.
- Rest of World. Brazil enacted the Estatuto Digital da Criança e do Adolescente (ECA Digital Law), which includes various provisions aimed at protecting minors online. Notably, the ECA Digital Law prohibits companies from offering loot boxes in electronic games that are targeted at, or likely to be accessed by, minors.
Our Prediction: In 2026, we can expect continued discussion around product design and its effects on minors. In the U.S., courts will continue to examine whether certain design features can cause actionable harms and whether companies can be held liable for those designs. Discussions in Europe around the introduction of a DFA will be watched closely, particularly given the EC’s broader push to simplify certain elements of its regulatory framework for digital services. Broadly, we can expect more jurisdictions to start examining how design choices affect minors.
Trend #3. Uptick in Investigations, Enforcement, and Claims
Enforcement actions provide key insight into government priorities. Absent enforcement, an enacted law may not seem like a true priority; announced actions, by contrast, signal the importance of compliance and highlight how regulators are interpreting the law.
- U.S. Both state and federal regulators were active on issues related to minors’ privacy and online safety. The FTC brought numerous actions under its primary children’s privacy framework, the Children’s Online Privacy Protection Act (COPPA). State attorneys general were also very active, announcing various actions related to minors’ privacy and online safety. For example, three states announced a settlement with an educational technology provider following a data breach involving student data; the FTC later announced an action against the same company.
- Europe. Investigations and enforcement actions under the DSA are gathering pace as we approach two years since the legislation first became applicable. The OSA’s “supercomplaint” regime, which allows eligible independent organizations representing the interests of users to submit complaints directly to Ofcom, comes into force on December 31, 2025. Complaints can relate to features or conduct of online services and can be made against individual organizations or categories of services. Investigations in the minors’ privacy space also continue under the GDPR; in December 2025, the UK Information Commissioner’s Office (ICO) announced that it will launch a monitoring program targeting 10 popular mobile games for minors to assess compliance in its priority areas.
Our Prediction: In 2026, we expect to see a continuing regulatory focus on this area, with some of the longer-running investigations coming to a conclusion across jurisdictions. In the U.S., the FTC will likely announce new actions under COPPA. The agency will also scrutinize companies under its new statutory authority, the TAKE IT DOWN Act, as discussed further below. At the state level, state attorneys general will continue to be active in bringing actions to protect minors online, either under standalone legislation or their states’ unfair and deceptive practices frameworks. In particular, we can expect increased collaboration among state attorneys general, resulting in more multi-state settlement announcements. In the UK, we expect to see Ofcom broaden its focus beyond services that host adult content. The regulator has indicated that it will focus on topics including risk assessments, prevention of child sexual abuse material (CSAM), the online experiences of women and girls, and the removal of terror and hate content. In Europe, regulators will likely release their findings from child safety investigations, and we expect regulators to continue issuing information requests, potentially to smaller online platforms. Broadly, we expect an uptick in enforcement from global regulators related to minors’ privacy and safety.
Trend #4. Addressing Potentially Harmful Content for Minors
One factor animating increased regulation and enforcement to protect minors online is their access to certain content perceived as harmful. This content can include sexually explicit material or content that promotes goods or services that are illegal for minors.
- U.S. Many states have passed laws requiring age verification for certain websites that provide content that is obscene for minors. This year, the U.S. Supreme Court upheld one of these laws (Texas HB 1181) in Free Speech Coalition v. Paxton, determining that the law survived intermediate scrutiny and thus did not violate the First Amendment. At the same time that Texas’s core obscene content law was being challenged, the state legislature considered and ultimately passed amendments to the law under Texas HB 581. This law extended age verification requirements to websites that make available AI tools capable of generating content considered obscene for minors, unless such use is prohibited by the company’s terms and the company takes steps to limit the creation of such content.
- Europe. The EC’s Guidelines have provided direction to service providers on the types of content they may need to protect minors from, including age-restricted content and content that is hateful, harmful, or amounts to disinformation. Early investigations under the DSA have focused on the availability of adult content to minors, while jurisdictions including France have introduced specific laws addressing the topic. In July 2025, duties under the OSA to assess and mitigate risks to minors became applicable, requiring regulated services to take steps in relation to broad categories of content deemed “harmful.” The impact of these measures on user experience is now being felt.
- Rest of World. Other governments also began to address content considered harmful to minors. Brazil’s ECA Digital Law requires companies whose online content, products, or services are “inappropriate, inadequate, or prohibited” for minors under 18 to implement measures preventing minors from accessing such material. The law clarifies that age verification technology must be used rather than relying on self-declared ages.
Our Prediction: In 2026, we can expect this trend to solidify its place in the minors’ online safety playbook. In the U.S., states may begin enforcing these laws, especially given the Free Speech Coalition v. Paxton ruling. Congress may also consider whether to implement such a restriction at the federal level. Additionally, we may see more scrutiny of how companies enforce content moderation for minors in light of recent app store accountability acts. In Europe, despite the potentially broad application of laws such as the DSA and OSA, we expect to see a continued focus on the availability of content judged to be most harmful to minors, including adult content and disinformation. Broadly, we can expect more countries to take steps to ensure minors have age-appropriate experiences online, possibly requiring some form of age assurance where certain content is illegal for minors or perceived as harmful.
Trend #5. AI-Generated CSAM and Deepfakes
Increasingly, bad actors are seeking to use generative AI technologies to create harmful content. We expect regulators to continue to wrestle with this issue.
- U.S. Congress passed the TAKE IT DOWN Act this year. The federal law prohibits the knowing distribution of non-consensual intimate images (NCII), including AI-generated deepfakes, and requires platforms to take down NCII within 48 hours of an individual’s request. While the law is not targeted specifically at minors, speakers discussed its impact on minors during the FTC workshop titled “The Attention Economy: How Big Tech Firms Exploit Children and Hurt Families.” Speakers, including FTC Chair Andrew Ferguson, explained that the law will help combat minors’ online exploitation and cyberbullying.
- Europe. Lawmakers are targeting both the creation of harmful content and its distribution. In the UK, a new law announced in November 2025 would give child protection organizations the ability to scrutinize AI models to ensure safeguards are in place to prevent them from generating CSAM. The EU’s proposed CSAM Regulation was also revived at the end of 2025, promising to impose further obligations on providers to assess the risk of their offerings being used for illegal purposes.
- Rest of World. Other countries are also considering how best to tackle these issues. In Singapore, parliament passed the Online Safety (Relief and Accountability) Act 2025, which lists defined online harms and requires prompt takedown by platforms of certain content in response to reports. These online harms include “image-based child abuse,” which covers altered or generated images or recordings.
Our Prediction: In 2026, this trend will continue to be a focus. In the U.S., the FTC will likely focus on enforcing the TAKE IT DOWN Act, perhaps with an emphasis on protecting minors. In Europe, we expect to see ongoing debate about how best to tackle these harms, with CSAM and deepfake-related issues remaining high on regulatory enforcement agendas. Broadly, we can expect that other jurisdictions will consider implementing or expanding existing frameworks aimed at addressing NCII or CSAM to encompass AI-generated or modified images.
Trend #6. AI Companions
When considering minors’ online safety, regulators have begun expressing concerns about how minors interact with AI services.
- U.S. Legislatures and regulators began focusing on AI “companion chatbots” this year. Two states passed laws requiring safeguards for AI companion chatbots (see N.Y. S3008 (2025); Cal. SB 243 (2025)). The FTC also launched an inquiry into consumer-facing AI chatbots as part of a study to understand, in part, what steps companies are taking to protect minors using these AI products.
- Europe. In the UK, the government has indicated that it is exploring whether additional legislation should be introduced to regulate AI chatbots that fall outside the scope of the OSA, and Ofcom has been tasked with outlining its expectations for regulated providers. The EC also issued the “GenAI Code of Practice,” which recommends conducting a systemic risk identification process to identify risks such as CSAM and NCII.
- Rest of World. Australia’s eSafety Commissioner issued four legal notices to companies that provide AI companions, requesting information on the safety measures the companies implement to protect minors from exposure to certain harms, including sexually explicit content and self-harm.
Our Prediction: In 2026, we expect to see continued conversations about how minors engage with these technologies. In the U.S., the Senate will discuss the Guidelines for User Age-verification and Responsible Dialogue Act (known as the GUARD Act), which currently contemplates requiring reasonable age verification of account holders, banning AI chatbots for minors, and criminalizing making available to minors AI chatbots that solicit or produce sexually explicit content. Even if the bill passes, states may continue to regulate in this space if it does not preempt state laws, possibly following either the New York or California model. In Europe, regulators may begin issuing information requests to AI chatbot providers that have not addressed or mitigated risks identified under laws including the GDPR, the DSA, and the OSA. Broadly, we can expect increased focus on how these increasingly popular technologies implement safeguards to ensure that minors using them receive age-appropriate and safe material.
The upshot of all of these initiatives is clear. Safety and privacy issues for minors are top of mind for regulators around the world. Companies providing services to minors will want to pay close attention to developments in these areas in the coming year.
Wilson Sonsini Goodrich & Rosati routinely advises clients on minors’ privacy and online safety laws and regulations and counsels companies facing enforcement actions. For more information, please contact Chris Olsen, Cédric Burton, Tom Evans, Claudia Chan, Rebecca Weitzel Garcia, or another member of the firm’s Data, Privacy, and Cybersecurity practice.