Aletho News

ΑΛΗΘΩΣ

40 State Attorneys General Want To Tie Online Access to ID

The bill’s supporters call it child protection; its architecture looks more like a national ID system for the internet.

Reclaim The Net | February 12, 2026

A bloc of 40 state and territorial attorneys general is urging Congress to adopt the Senate’s version of the controversial Kids Online Safety Act, positioning it as the stronger regulatory instrument and rejecting the House companion as insufficient.

The Act would kill online anonymity and tie online activity and speech to a real-world identity.

Acting through the National Association of Attorneys General, the coalition sent a letter to congressional leadership endorsing S. 1748 and opposing H.R. 6484.

We obtained a copy of the letter.

Their request centers on structural differences between the bills. The Senate proposal would create a federally enforceable “Duty of Care” requiring covered platforms to mitigate defined harms to minors.

Enforcement authority would rest with the Federal Trade Commission, which could investigate and sue companies that fail to prevent minors from encountering content deemed to cause “harm to minors.”

That framework would require regulators to evaluate internal content moderation systems, recommendation algorithms, and safety controls.

S. 1748 also directs the Secretary of Commerce, the FTC, and the Federal Communications Commission to study “the most technologically feasible methods and options for developing systems to verify age at the device or operating system level.”

This language moves beyond platform-level age gates and toward infrastructure embedded directly into hardware or operating systems.

Age verification at that layer would not function without some form of credentialing. Device-level verification would likely depend on digital identity checks tied to government-issued identification, third-party age verification vendors, or persistent account authentication systems.

That means users could be required to submit identifying information before accessing broad categories of lawful online speech. Anonymous browsing depends on the ability to access content without linking identity credentials to activity.

A device-level age verification architecture would establish identity checkpoints upstream of content access, creating records that age was verified and potentially associating that verification with a persistent device or account.

Even if content is not stored, the existence of a verified identity token tied to access creates a paper trail.
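To make that "paper trail" concrete, here is a minimal, hypothetical sketch. Nothing in S. 1748 specifies an implementation; the issuer key, device ID, and token format below are invented for illustration. The point is that even if a verifier stores nothing about what a user views, merely issuing and checking a device-bound age token produces records linking a persistent device identifier to verification events.

```python
# Hypothetical illustration only: not from the bill or any real vendor.
import hmac, hashlib, json, time

ISSUER_KEY = b"issuer-secret"  # held by a hypothetical age-verification vendor

def issue_age_token(device_id: str) -> dict:
    """Issue a signed token asserting the device's holder passed an age check."""
    payload = {"device": device_id, "over_18": True, "issued": int(time.time())}
    sig = hmac.new(ISSUER_KEY, json.dumps(payload, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

# The issuer logs no content the user later views, yet each verification
# still ties a persistent device identifier to a timestamped event:
audit_log = []

def verify(token: dict) -> bool:
    expected = hmac.new(ISSUER_KEY,
                        json.dumps(token["payload"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    ok = hmac.compare_digest(expected, token["sig"])
    if ok:
        audit_log.append((token["payload"]["device"], int(time.time())))
    return ok

token = issue_age_token("device-1234")
assert verify(token)
# audit_log now holds ("device-1234", <timestamp>): a record that this
# device was age-verified, even though no browsing content was stored.
```

Under this (assumed) design, deleting content logs does not delete the linkage; the verification records themselves are the trail.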

Constitutional questions follow. The Supreme Court has repeatedly recognized anonymous speech as protected under the First Amendment. Mandating identity verification before accessing lawful speech raises prior restraint and overbreadth concerns, particularly where the definition of “harm to minors” extends into categories that are legal for adults.

Courts have struck down earlier efforts to impose age verification requirements for online content on First Amendment grounds, citing the chilling effect on lawful expression and adult access.

Despite this history, state officials continue to advocate for broader age verification regimes. Several states have enacted or proposed laws requiring age checks for social media or adult content sites, often triggering litigation over compelled identification and privacy burdens.

The coalition’s letter suggests that state attorneys general are not retreating from that position and are instead seeking federal backing.

The attorneys general argue that social media companies deliberately design products that draw in underage users and monetize their personal data through targeted advertising. They contend that companies have not adequately disclosed addictive features or mental health risks and point to evidence suggesting firms are aware of adverse consequences for minors.

Multiple state offices have already filed lawsuits or opened investigations against Meta and TikTok, alleging “harm” to young users.

At the same time, the coalition objects to provisions in H.R. 6484 that would limit state authority. The House bill contains broader federal preemption language, which could restrict states from enforcing parallel or more stringent requirements. The attorneys general warn that this would curb their ability to pursue emerging online harms under state law. They also fault the House proposal for relying on company-maintained “reasonable policies, practices, and procedures” rather than imposing a statutory Duty of Care.

The Senate approach, the coalition argues, couples enforceable federal standards with preserved state enforcement power.

The coalition calls on the United States House of Representatives to align with the Senate framework, expand the list of enumerated harms to include suicide, eating disorders, compulsive use, mental health harms, and financial harms, and ensure that states retain authority to act alongside federal regulators. The measure has bipartisan sponsorship in the United States Senate.

The policy direction is clear. Federal agencies would study device-level age verification systems, the FTC would police compliance with harm mitigation duties, and states would continue to pursue parallel litigation. Those mechanisms would reshape how platforms design their systems and how users access speech.

Whether framed as child protection or platform accountability, the architecture contemplated by S. 1748 would move identity verification closer to the heart of internet access.

Once age checks are embedded at the operating system level, the boundary between verifying age and verifying identity becomes difficult to maintain.

The internet would be changed forever.

February 13, 2026 - Posted by | Civil Liberties, Full Spectrum Dominance
