Described Pictures
Making the machinery visible
There are roughly 2,500 AI newsletters. Most of them tell you what happened this week in AI. Almost none of them help you understand it.
The gap isn’t information — the internet is drowning in AI information. Papers, podcasts, product announcements, breathless Twitter threads. The gap is comprehension. The ability to take a piece of AI news, understand what’s actually happening underneath the press release, evaluate whether it matters, and decide what — if anything — it means for your work and your life.
That ability is almost entirely absent from the current media landscape, and the absence isn’t accidental. The publications covering AI have incentives that work against the content you need. Labs control what research they publish and when. Newsletters monetize through tool recommendations and affiliate links, biasing them toward hype — every new product is a miracle, every update is a game-changer. Tech press needs early access to labs for scoops, which constrains how critical they can be. Mainstream media needs dramatic framing for clicks, so every story is either salvation or apocalypse.
What’s left out of every layer is the register that matters most: here’s how this actually works, here’s what we know and don’t know, here’s how to think about it for yourself.
PopAi exists to fill that gap.
The tradition that died
For most of the twentieth century, Americans had institutions that did this work. Popular Mechanics, Popular Science, Scientific American under Gerard Piel — these weren’t just magazines. They were informal technical education at civilizational scale, reaching millions of people who learned to reason about technology through a distinctive combination of honest explanation, visual clarity, and respect for the reader’s intelligence.
Popular Mechanics’ founder Henry Haven Windsor — a former city editor who once spent six months disguised as a grip car operator to understand how mechanical workers actually thought — built the magazine on a radical editorial principle. “Most magazines use illustrated articles,” he explained. “We do not. We use described pictures.” Text existed to annotate the image, not the other way around. The thing you couldn’t see clearly was primary. The explanation served it. The magazine’s tagline for over a century — “Written So You Can Understand It” — wasn’t a marketing slogan. It was a binding editorial commitment: we will trust you with real complexity, and in return, we’ll do the work to make it structurally clear.
That inversion — start with the thing that’s invisible or opaque, make it primary, build the explanation around it — is what the best technical communication has always done. It’s what nobody is doing for AI.
That tradition is effectively dead. Popular Science was gutted in a five-minute Zoom call in 2023 after six ownership changes stripped every trace of editorial mission. Popular Mechanics survives but drifts. The digital successors — YouTube, Reddit, Substack — serve fragments of the function but lack what the institutional press provided: curatorial breadth, editorial independence, and the willingness to expose readers to things they didn’t already know they needed to understand.
The result is a civic problem, not just a media one. About two-thirds of American adults lack the scientific literacy to independently evaluate technical claims in policy debates. That number was bad before AI. It’s going to get worse, because AI is the most consequential technology most people will encounter in their working lives, and the information ecosystem around it is failing them.
What PopAi is
PopAi is a newsletter about understanding AI — the machinery, not the marketing.
When a lab releases a new model, the interesting question is never what they announced. You can get that from a dozen sources by lunchtime. The interesting question is how it actually works, what the architecture reveals about its real capabilities and limitations, and what the gap between the technical report and the press release tells you about the incentives in play. PopAi writes at the level of someone who’s smart, works in a technical or knowledge-intensive field, and wants to understand mechanism — not someone who needs a research paper, and not someone who needs a summary.
That requires being honest about uncertainty. Research on trust in science communication shows that expressing doubt — “the evidence suggests but doesn’t prove,” “we don’t know yet” — increases reader trust, because it reduces perceived bias. The AI discourse has collapsed into a doomer-booster binary where both sides share the same flaw: more confidence than the evidence warrants. AI systems are powerful, limited, and poorly understood even by the people building them. A publication worth reading should reflect that reality rather than flattening it into a narrative.
Most AI coverage positions the reader as a spectator watching the future arrive, either thrilled or terrified. PopAi treats you as someone who can evaluate. The point isn’t to hand you conclusions but to build the kind of understanding that lets you evaluate the next claim yourself — the one I haven’t written about yet.
What most AI coverage actually optimizes for
Open any of the big AI newsletters — the ones with a million-plus subscribers — and trace the dependency chain of a typical post. A company announces a new model. The newsletter covers the announcement: benchmarks quoted from the company’s own technical report, a paragraph of context, a verdict. Sometimes there’s a sponsored tool recommendation at the bottom. Sometimes the tool recommendation is the post.
The problem isn’t dishonesty. It’s that the structure optimizes for something other than your understanding. Affiliate revenue requires product mentions. Access to early briefings requires a relationship with labs that critical coverage jeopardizes. Daily publishing cadence requires speed that precludes depth. Each incentive is rational. None of them serves the reader who wants to know what’s actually going on underneath the announcement.
Prediction is the other dominant mode. Will AGI arrive by 2027? Will AI replace your job? Which startup will win? These questions generate enormous engagement because they’re unanswerable — you can argue about them forever without resolving anything. They mirror the doomer-booster binary: both sides are more confident than the evidence warrants, and the confidence is the product, not the analysis.
PopAi doesn’t operate in either register. I’ll make arguments, take positions, tell you when I think the conventional wisdom is wrong. But I’ll show the reasoning, and I’ll mark what’s load-bearing and what’s speculative. PopAi stays independent of labs, tool vendors, and the prediction industry — not because neutrality is a virtue (neutrality is just the absence of a position) but because understanding technology is a civic capacity, and that conviction shapes what I cover and how.
Who this is for
Scientific American’s longtime publisher Gerard Piel described his reader as “someone who knew about one area of science but wanted to know about other areas.” That formulation resolves a tension most publications get wrong: you don’t have to choose between depth and accessibility if you’re writing for people who are already competent thinkers in their own domain.
The software engineer who understands distributed systems but has no mental model for transformer architectures. The biologist who’s fluent in statistics but couldn’t explain gradient descent to a colleague. The product manager who can build a roadmap but can’t evaluate whether a vendor’s AI claims are real or marketing. These people don’t need simplified content. They need content that does the work of making unfamiliar complexity navigable.
There’s a less obvious audience too: the person who just wants to understand the thing everyone’s talking about. Not to build anything or optimize a workflow — just to understand. That impulse — curiosity about the technological world you inhabit — is the same impulse that drove millions of people to subscribe to Popular Mechanics for a century. It deserves the same respect now.
What to expect
One substantive piece per week. One essay that goes deep on something that matters.
The content falls into three registers. Explanatory engineering: how retrieval-augmented generation actually works, what’s going on inside chain-of-thought reasoning, why context windows matter and what their limits mean. The kind of thing you’d want to read before evaluating an AI claim at work next week. Analysis: what a particular development actually changes, for whom, and what it doesn’t change — the part that usually gets lost in the excitement or the panic. And methodology: how to think about a particular class of AI problem. I’m building a framework for AI-assisted work called Absolute Beginners++ that synthesizes several convergent problem-solving traditions into a practical method. The newsletter is where I apply that thinking in public.
The AI news cycle rewards speed. I’m optimizing for shelf life — pieces worth reading six months after publication, not just the morning they arrive.
Why me
I research and build AI for a living — ML platforms at AWS and H2O.ai, AI-integrated products, the plumbing that sits between a model and the thing it’s supposed to do in the real world. I work directly with the systems I’m writing about, which means I know where the press releases diverge from the engineering reality, and I know which limitations are temporary and which are architectural.
Before that, I spent time in Army Special Forces. The core problem AI presents to most people — making high-stakes decisions under uncertainty with incomplete information, in a domain that shifts faster than your mental models — is the same problem unconventional warfare presents. The military’s response is frameworks: repeatable structures for reasoning that hold up when the specific situation is novel. That instinct runs through everything here.
I also have a serious contemplative practice. The hardest part of thinking clearly about AI isn’t technical — it’s attentional. The hype cycle, the FOMO, the constant pressure to adopt the next tool. These are attentional problems before they’re information problems. Noticing when your relationship to a technology is driven by clarity versus reactivity is a skill, and PopAi will occasionally address it directly.
The bet
The bet underneath all of this: understanding reduces fear. The data bears it out — people with greater AI awareness feel more comfortable with AI applications, not less. The gap between how AI experts and the general public feel about the technology isn’t a disagreement about values. It’s an information asymmetry. Experts aren’t more comfortable because they’re naive. They’re more comfortable because they understand the machinery, including where it breaks.
A publication that closes that asymmetry — that helps people build real mental models of how AI systems work, what they can and cannot do, and how to evaluate the claims made about them — does more than inform readers. It changes the quality of the conversation — from anxious speculation toward grounded judgment, from spectators toward participants.
The popular technical press did this for a century with mechanical and electrical technology. Nobody is doing it for AI. PopAi is where I start.
Subscribe if you want to read it. Send it to someone who needs it. And if you think I’m wrong about any of this, tell me — the whole point is to think clearly, which means being willing to update.
You can also find me — and some of my more divergent writing and projects — at https://thbrdy.dev, and you can always reach me at jthomasbrady@proton.me.