Ukraine to ban Russian literature – culture minister
RT | February 12, 2026
The Ukrainian authorities are preparing a draft law to take all Russian and Russian-language books out of circulation, Ukrainian Culture Minister Tatyana Berezhnaya told Interfax-Ukraine in an interview published on Thursday.
Moscow maintains that Kiev’s discriminatory policies against ethnic Russians in Ukraine, as well as its persecution of the Russian language and culture, are among the fundamental causes of the current conflict.
According to Berezhnaya, Ukraine’s media authority is working on a bill to ban Russian books with the support of her ministry. She did not specify whether the measure would only remove them from store shelves or include confiscations from private collections.
Vladimir Zelensky’s predecessor, Pyotr Poroshenko, banned the import of books from Russia and Belarus in 2016, long before the escalation of the Ukraine conflict six years later. Kiev has since systematically removed Russian literature from state curricula and intensified its purge of cultural monuments, memorials, and inscriptions in an effort to erase historical links to Russia.
Kiev has also steadily cracked down on the use of the Russian language in public life, restricting or banning its use in media and in professional spheres. Nevertheless, it remains the first and primary language for many people in Ukraine, especially in metropolitan areas and in the east of the country.
In December, the Ukrainian parliament stripped Russian of its protection under the European Charter for Regional or Minority Languages. Berezhnaya at the time proclaimed that the move would “strengthen Ukrainian” as the state language.
Moscow has noted that this crackdown has largely been ignored by Kiev’s Western backers.
“Human rights – ostensibly so dear to the West – must be inviolable. In Ukraine, we witness the comprehensive prohibition of the Russian language across all spheres of public life and the banning of the canonical Ukrainian Orthodox Church,” Russian Foreign Minister Sergey Lavrov said on Wednesday, accusing the EU and UK of not addressing the issue in their peace proposals.
Russia has long said that stopping the persecution of Russians in Ukraine is one of its core peace demands, which it is ready to continue pursuing through military means if Kiev resists diplomacy.
40 State Attorneys General Want To Tie Online Access to ID
The bill’s supporters call it child protection; its architecture looks more like a national ID system for the internet.
Reclaim The Net | February 12, 2026
A bloc of 40 state and territorial attorneys general is urging Congress to adopt the Senate’s version of the controversial Kids Online Safety Act, positioning it as the stronger regulatory instrument and rejecting the House companion as insufficient.
The Act would kill online anonymity and tie online activity and speech to a real-world identity.
Acting through the National Association of Attorneys General, the coalition sent a letter to congressional leadership endorsing S. 1748 and opposing H.R. 6484.
We obtained a copy of the letter for you here.
Their request centers on structural differences between the bills. The Senate proposal would create a federally enforceable “Duty of Care” requiring covered platforms to mitigate defined harms to minors.
Enforcement authority would rest with the Federal Trade Commission, which could investigate and sue companies that fail to prevent minors from encountering content deemed to cause “harm to minors.”
That framework would require regulators to evaluate internal content moderation systems, recommendation algorithms, and safety controls.
S. 1748 also directs the Secretary of Commerce, the FTC, and the Federal Communications Commission to study “the most technologically feasible methods and options for developing systems to verify age at the device or operating system level.”
This language moves beyond platform-level age gates and toward infrastructure embedded directly into hardware or operating systems.
Age verification at that layer would not function without some form of credentialing. Device-level verification would likely depend on digital identity checks tied to government-issued identification, third-party age verification vendors, or persistent account authentication systems.
That means users could be required to submit identifying information before accessing broad categories of lawful online speech. Anonymous browsing depends on the ability to access content without linking identity credentials to activity.
A device-level age verification architecture would establish identity checkpoints upstream of content access, creating records that age was verified and potentially associating that verification with a persistent device or account.
Even if content is not stored, the existence of a verified identity token tied to access creates a paper trail.
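
Neither S. 1748 nor the coalition letter specifies a technical design, so the following is only a rough sketch of the kind of device-level attestation the article is describing. Every name in it (the vendor key, the token fields, the helper function) is invented for illustration, but it shows why even a token that omits the user's name still links a persistent device identifier to a completed identity check.

```python
# Hypothetical sketch (not drawn from the bill): what a device-level
# "age verified" attestation could look like. All names are illustrative.
import hashlib
import hmac
import json
import time
import uuid

# A verification vendor's signing key. Whoever holds it can tie any
# attestation back to the identity check that produced it.
VENDOR_SIGNING_KEY = b"example-secret-key"

def issue_age_attestation(device_id: str, id_document_hash: str) -> dict:
    """Issue a signed token asserting that this device's user passed an age check.

    The token need not carry the user's name, but it permanently associates
    a persistent device identifier with a completed identity verification.
    """
    payload = {
        "device_id": device_id,                # persistent hardware/OS identifier
        "verified_over_18": True,
        "verified_at": int(time.time()),
        "verification_ref": id_document_hash,  # pointer back to the ID-check record
    }
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(VENDOR_SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

# A site or OS gatekeeper only needs to check the signature, but the record
# that this device was verified, and when, now exists on the vendor's side.
token = issue_age_attestation(
    device_id=str(uuid.uuid4()),
    id_document_hash=hashlib.sha256(b"scanned-drivers-license").hexdigest(),
)
print(json.dumps(token, indent=2))
```

In this sketch the "paper trail" is the vendor-side record implied by `verification_ref`: even if sites never see the underlying ID document, the issuer retains the mapping between the device and the verification event.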
Constitutional questions follow. The Supreme Court has repeatedly recognized anonymous speech as protected under the First Amendment. Mandating identity verification before accessing lawful speech raises prior restraint and overbreadth concerns, particularly where the definition of “harm to minors” extends into categories that are legal for adults.
Courts have struck down earlier efforts to impose age verification requirements for online content on First Amendment grounds, citing the chilling effect on lawful expression and adult access.
Despite this history, state officials continue to advocate for broader age verification regimes. Several states have enacted or proposed laws requiring age checks for social media or adult content sites, often triggering litigation over compelled identification and privacy burdens.
The coalition’s letter suggests that state attorneys general are not retreating from that position and are instead seeking federal backing.
The attorneys general argue that social media companies deliberately design products that draw in underage users and monetize their personal data through targeted advertising. They contend that companies have not adequately disclosed addictive features or mental health risks and point to evidence suggesting firms are aware of adverse consequences for minors.
Multiple state offices have already filed lawsuits or opened investigations against Meta and TikTok, alleging “harm” to young users.
At the same time, the coalition objects to provisions in H.R. 6484 that would limit state authority. The House bill contains broader federal preemption language, which could restrict states from enforcing parallel or more stringent requirements. The attorneys general warn that this would curb their ability to pursue emerging online harms under state law. They also fault the House proposal for relying on company-maintained “reasonable policies, practices, and procedures” rather than imposing a statutory Duty of Care.
The Senate approach couples enforceable federal standards with preserved state enforcement power.
The coalition calls on the House of Representatives to align with the Senate framework, expand the list of enumerated harms to include suicide, eating disorders, compulsive use, mental health harms, and financial harms, and ensure that states retain authority to act alongside federal regulators. The Senate measure has bipartisan sponsorship.
The policy direction is clear. Federal agencies would study device-level age verification systems, the FTC would police compliance with harm mitigation duties, and states would continue to pursue parallel litigation. Those mechanisms would reshape how platforms design their systems and how users access speech.
Whether framed as child protection or platform accountability, the architecture contemplated by S. 1748 would move identity verification closer to the heart of internet access.
Once age checks are embedded at the operating system level, the boundary between verifying age and verifying identity becomes difficult to maintain.
The internet would be changed forever.
