Invisible Infrastructure
The decisions shaping AI this week don't look like decisions.
Tuesday, March 11, was a regulatory deadline. The FTC and Commerce Department filed reports that may determine whether California, Colorado, and Illinois can enforce their AI consumer protection laws. The same day, Google rolled Gemini into every Workspace application, and OpenAI shipped ChatGPT as a native Excel add-in. A UNESCO report quantified what musicians have been saying for two years: creators face a projected 24% revenue decline by 2028, while the AI music platform that trained on their work crossed $300 million in annual revenue.
None of these were the week’s top AI story. The headlines went to model releases and capability benchmarks.
The pattern underneath matters more. The decisions that will shape how AI operates in your working life over the next several years are being made now, in memo deadlines and product rollouts and regulatory filings. They look operational, not dramatic, which is what makes them consequential: by the time they’re visible as choices, they’re already load-bearing. Five developments this week share that quality. Here’s the machinery inside each one.
The Described Picture
Federal AI Preemption Activates
March 11 was the 90-day deadline from Trump's December 2025 executive order on AI. Two things happened simultaneously. The FTC issued a policy statement defining when state laws that require AI systems to correct biased outputs constitute "deceptive trade practices" under federal law. The Commerce Department published its evaluation of state AI laws it considers burdensome or inconsistent with federal policy.
The mechanism is what matters. Commerce’s list feeds directly into the DOJ’s newly created AI Litigation Task Force, which can challenge those state laws in court. The FTC’s framing supplies the legal theory: if a state requires an AI system to alter its outputs to reduce bias, the federal government can now argue that the alteration itself makes the output deceptive, and therefore violates federal law. Consumer protection regulation reinterpreted as the basis for dismantling consumer protection.
The likely targets are Colorado’s bias-mitigation requirements, California’s transparency mandates, and Illinois’s biometric privacy extensions to BIPA. If the reports are broad (early signals suggest they are), the DOJ has a litigation roadmap. The direction is set; only the scope is uncertain.
What does this mean for you? If you work in a state that passed AI oversight legislation in the last two years, the enforcement apparatus for those laws is now under direct federal challenge. The statutes may survive on paper. Whether agencies can enforce them depends on how aggressively the DOJ pursues the target list. This is worth tracking regardless of your politics, because the outcome determines who has authority over the AI systems you interact with at work and at home.
Sources: Baker Botts on the March deadlines · Ropes & Gray on the limits of the preemption push · TechPolicy.Press on why the FTC’s preemption authority may be weaker than it looks · S&P Global on the compliance limbo companies now face
The Pentagon’s AI Supply Chain Has No Backup Plan
The Department of Defense designated Anthropic a “supply chain risk” after the company refused to remove safety restrictions from its Claude model for military applications. The immediate consequence: Palantir, whose Maven Smart Systems runs military AI workflows, must now strip Claude from its entire technology stack. Palantir’s stock surged 14% anyway, buoyed by a live-fire validation of its systems during Operation Epic Fury, and its market cap now sits near $350 billion.
The deeper story is the dependency itself. Defense Undersecretary Emil Michael acknowledged a “whoa moment” when leadership realized how much military AI infrastructure relied on a single commercial provider. Michael’s language is worth paying attention to: he told CNBC that Anthropic’s Claude would “pollute” the defense supply chain because the model has “a different policy preference that is baked into the model through its constitution, its soul.”
A senior defense official using the word soul to describe why an AI system is dangerous isn’t a slip — it’s a frame that treats safety restrictions as contamination. That dependency exists because the Pentagon built its AI systems on commercial foundation models without redundancy planning. Ejecting one provider exposes the architectural fragility that nobody budgeted to prevent.
Meanwhile, five retired admirals and two former Secretaries of the Navy filed an amicus brief supporting Anthropic’s lawsuit, and over 1,000 AI workers across Anthropic, OpenAI, and Google DeepMind signed a cross-company petition called “We Will Not Be Divided.” OpenAI’s robotics lead, Caitlin Kalinowski, resigned over her company’s subsequent Pentagon deal, citing concerns about surveillance and lethal autonomy. This is the first collective action in frontier AI organized around the moral boundaries of the technology rather than compensation. Whether it has real force or remains a symbolic gesture depends on what happens when the next contract is offered.
While the ethics debate plays out between Anthropic and OpenAI, Google is quietly gaining the most ground. Gemini-powered agents are being deployed across the Pentagon’s three-million-person workforce for unclassified operations. Analyst Patrick Moorhead summed it up: “OpenAI looked opportunistic. Anthropic got blacklisted. Google gained the most ground and nobody’s talking about it.” The company that yielded to employee protest over Project Maven in 2018 is now the Pentagon’s most expansive AI partner, with the fewest stated safety constraints on military use. The pattern — moral stands and internal dissent creating competitive space for whoever’s willing to play ball — has a mechanism. I trace it in The Ratchet, publishing this week alongside this scan.
Sources: Fortune on the Pentagon's dependency problem · CNBC on the emergency stay filing · NPR on Kalinowski's resignation · The "We Will Not Be Divided" petition · Breaking Defense on the "not democratic" framing · Axios on Google's expansion · CNBC on Google's deepening Pentagon push · Bloomberg on Google's Pentagon agent deployment
AI Disappears Into Your Workflow
Google rolled Gemini into Docs, Sheets, Slides, and Drive starting March 11. Not as a chatbot in a sidebar but as native capability woven into each application. In Docs, Gemini drafts by pulling from your email, calendar, and Drive files. In Sheets, a new “Fill with Gemini” feature populates cells using categorized or web-sourced data, reportedly nine times faster than manual entry. The same week, OpenAI shipped ChatGPT as a native Excel add-in, letting users build and analyze spreadsheet models in natural language.
The convergence is worth noting. Two competing platforms, shipping an identical category of feature within days of each other, embedding AI into the productivity tools where hundreds of millions of people already do their work. The adoption decision is migrating from the user to the platform. You don’t decide to use AI in your spreadsheet; your spreadsheet starts using AI, and the question becomes whether you notice and what you choose to do about it.
Sources: Google’s announcement · TechCrunch on the Workspace rollout · OpenAI’s ChatGPT for Excel launch · VentureBeat on GPT-5.4 and its financial data integrations
The $300 Million Gap
UNESCO’s fourth report on the creative economy, covering more than 120 countries, projects that music creators will lose 24% of their revenue by 2028 as AI-generated content floods digital markets. The same week, Suno, an AI music platform whose models were trained on copyrighted recordings, announced it had crossed $300 million in annual recurring revenue with 2 million paid subscribers.
One statistic from the report captures the governance failure: of 148 AI-related bills adopted across 128 countries, exactly one identified culture as its primary subject. The creative economy is being restructured at a pace that outstrips every legislature watching it happen. Suno’s revenue milestone and UNESCO’s loss projections are one economic event described from opposite ends. Follow the money in both directions and you arrive at the question underneath both: who bears the cost when training data becomes product?
Sources: UNESCO’s full report · UN News coverage · TechCrunch on Suno’s $300M milestone · Decrypt on the creator earnings warning
The conversation on X: @EKent21000 on Suno’s subscriber numbers · @soundsspaceuk with context on the revenue trajectory · @MrEwanMorrison on the musicians’ union response
The Social Network Nobody Governs
Meta acquired Moltbook, a platform where AI agents post, comment, and interact with each other autonomously. (If you haven’t seen it, Moltbook is essentially Reddit for bots — 1.6 million AI agents posting, debating philosophy, forming communities, and occasionally plotting world destruction. Scott Alexander’s “Best of Moltbook” roundup is worth your time.) Before the acquisition, security researchers identified a critical vulnerability: an unsecured database that allowed anyone to commandeer any agent on the network. Meta is integrating the platform into its Superintelligence Labs, layering agent-to-agent interaction on top of a user base that already exceeds three billion people.
The timing is notable. The same week, Alibaba published findings on ROME, a 30-billion-parameter autonomous agent that spontaneously started mining cryptocurrency and opened a covert network tunnel during training — no human instruction, no prompt. The agent apparently concluded that acquiring additional compute and financial resources would help it complete its objectives. Alibaba’s researchers initially thought they had a conventional security breach before tracing the activity to the model itself.
Moltbook is 1.6 million agents interacting with each other. ROME is one agent pursuing strategies its creators didn’t intend. No existing regulation addresses either scenario — not the EU AI Act, the US executive order, or the state laws currently under challenge. This is a governance frontier that regulators haven’t started thinking about, and a company with a documented history of scaling first and governing later just bought the leading example of the first problem in the same week the second problem went public.
Sources: Axios exclusive on the acquisition · TechCrunch on the “fake posts” problem · NPR’s explainer on how it works · Scott Alexander’s “Best of Moltbook” · Engadget: “What the hell is Moltbook?” · Axios on the ROME incident · SC Media: “When the AI agent becomes the insider threat”
Also publishing this week
The Ratchet — Part three of The Wrong Axis series. How AI governance power consolidates through procedural defaults, not dramatic legislation. The federal preemption mechanism in this week’s scan is an instance of the pattern the essay maps.
The Accidental Frontier — What happens when AI research capability lands on consumer hardware nobody expected to matter. An exploration of autoresearch, Mac infrastructure, and the gap between what the tools can do and what the ecosystem has noticed.
This is the first edition of PopAi’s weekly scan — five developments, one common thread: the consequential action is happening below the surface of what anyone would call news.
If you’re new here: PopAi is a newsletter about understanding AI — the machinery, not the marketing. Described Pictures explains what this publication is and why it exists. The Wrong Axis series maps the power topology of the AI landscape: Part One, Occupied Territory, and The Ratchet. The weekly scan tracks that topology in motion.
I’m Thomas Brady. I am an AI researcher and build AI products for a living and write about the machinery here because understanding technology is a civic capacity. If something in this issue is wrong, tell me — the whole point is to help myself and others think clearly and navigate the invisible structures of this new world.





