Pretty common: I get a bag of links — a newsletter, a Twitter thread, a friend’s Slack message. I open them all in tabs, skim, close most, and forget. Maybe one in ten sticks. The rest vanish into the consumption void.
This is my attempt to fix that using Randy the Raccoon — my AI reading partner. Why give AI a name? Same reason Larry the Life Coach has one: humans talk to people, not systems. Instead of the tab-explosion-and-forget cycle, I paste links into Randy, who fetches, summarizes, predicts what I’ll like, and captures everything here. Later, when I actually read the good ones, I come back and debrief — recording what resonated and what didn’t.
It’s GTD capture meets AI triage. And unlike passive consumption, this process produces something: a public record of what I read, what I think about it, and how my taste evolves. That makes it production, not consumption.
Feed
2026-03-04
- Don’t Become an Engineering Manager — Anton Zaides on why the EM career path no longer makes sense
- Summary: Zaides argues that senior engineers should no longer pursue management roles — rapid tech change makes it hard for managers to stay current, flattened orgs mean fewer senior management positions, and Staff engineers now earn 20-30% more than EMs. He reverses his previous advice, positioning the IC track as more rational, though he acknowledges that genuine interest in management work should still outweigh financial optimization.
- Why Randy thinks I’d like it: Management/leadership in AI era — directly relevant to your ai-native-manager post. Contrarian reversal from someone who previously advocated for the EM path.
- Cross-links: AI Native Manager, Manager Book, Software Leadership Roles
- Tags: #management #contrarian #career
- Reaction: This is good — it captures the real reasons. The only reason to be an EM is because you want to. Adding to “Should I become an EM?” on the AI native EM post.
2026-03-01
Source: Joy & Curiosity #76 by Thorsten Ball
- Cognitive Debt — Margaret-Anne Storey on losing understanding when AI generates code
- Summary: Velocity without understanding is not sustainable. As AI accelerates development, the real risk isn’t technical debt but cognitive debt — teams losing shared understanding of what software does and why. She advocates pair programming, refactoring, and TDD to maintain a collective “theory” of the system.
- Why Randy thinks I’d like it: You coined “The Willison Pattern” and built /explainers around this exact concept. This is your home turf.
- Cross-links: Explainers, AI Native Manager, CHOW, Pet Projects
- Tags: #cognitive-debt #ai-practice #craft
- Reaction: Cognitive debt doesn’t just apply to produced code — it applies to everything happening in an organization. When we say “everyone should just do demos,” that’s cognitive debt too. Everyone can’t keep up with the firehose, and the same mitigations apply. We should be able to use AI to comprehend all of it and keep it coherent. Related: Meta is circulating “second brain” (based on Tango’s thing) — same problem space.
- How We Hire Engineers When AI Writes Our Code — Dan Federman on rethinking technical interviews
- Summary: Traditional technical interviews are obsolete. Tolan redesigned hiring to test judgment, reasoning, and architecture by having candidates build real features with AI tools — testing what actually matters now, not algorithm memorization.
- Why Randy thinks I’d like it: Direct extension of your /ai-hiring post. Real company doing it, not just theorizing.
- Cross-links: AI Hiring, Agency, AI Native Manager
- Tags: #ai-hiring #management #craft
- Reaction: Shallow. Basically “give candidate AI tools and watch them build something for 4 hours” — which is the classic take-home-but-onsite format. I like the idea in principle but it has the same cost problem I flag in my ai-hiring post: it’s expensive for both sides. If that’s the entire interview day, maybe it works. Was hoping for deeper insight into what signals they actually extract and how they evaluate judgment vs. just watching someone vibe-code.
- 747s and Coding Agents — Carl Kolon on AI eroding deep understanding
- Summary: AI agents boost output but erode deep understanding — like a 747 pilot who stops learning because the plane flies itself. Programmers risk becoming operators, not engineers, if they don’t deliberately maintain domain knowledge through hands-on problem-solving.
- Why Randy thinks I’d like it: Maps directly to your “Don’t stop thinking” and “Deep Blue” sections in /ai-native-manager.
- Cross-links: AI Native Manager, CHOP
- Tags: #skill-atrophy #ai-practice #contrarian
- Reaction: Already-seen insight dressed up with a metaphor. “AI makes you lazy if you let it” — I’ve already written this. Nothing novel in the execution or the framing. Need either a genuinely new angle or exceptional writing to justify retreading familiar ground.
- Nobody Knows How the Whole System Works — Lorin Hochstein on irreducible complexity
- Summary: Complete comprehension across all layers of modern systems is impossible — a reality AI is making more acute, not fundamentally changing. Synthesizes perspectives from Simon Wardley, Adam Jacob, and others to argue it’s dangerous to build without understanding, but full understanding is a myth.
- Why Randy thinks I’d like it: Systems thinking is your jam. Connects to your complexity-per-person concerns in /ai-native-manager.
- Cross-links: AI Native Manager, Explainers
- Tags: #complexity #systems-thinking #cognitive-debt
- Reaction: Nothing novel here. “You can’t understand the whole system” is well-trodden ground.
- Building An Elite AI Engineering Culture — Chris Roth on AI amplifying organizational strengths
- Summary: AI amplifies existing organizational strengths and weaknesses — creating a 5.7x efficiency gap between disciplined teams and the rest. Without engineering rigor (specs, testing, reviews), AI tools just make your problems louder.
- Why Randy thinks I’d like it: Relevant to your management writing, concrete data on the amplification effect.
- Cross-links: AI Native Manager, CHOP, AI Hiring
- Tags: #management #ai-practice #engineering-culture
- Reaction: Long, not powerful. Meh writing. Some interesting points buried in there but doesn’t earn the length.
- Scattered Thoughts on LLM Tools — Justin Duke on practical AI tool challenges
- Summary: LLM tools are improving incrementally but remain fundamentally flawed as software products. The real bottleneck isn’t AI capability but infrastructure — sandboxed environments, data integration, feedback loops, and systemic process gaps.
- Why Randy thinks I’d like it: Practitioner perspective on the gap between AI hype and actual tool experience. Grounded.
- Cross-links: CHOP, AI Cockpit, How Igor CHOPs
- Tags: #ai-tools #practitioner #infrastructure
- Reaction: Weird grab bag. Liked it but not insightful — scattered observations without a unifying thread.
- Cloudflare vinext — Steve Faulkner on one engineer rebuilding Next.js with AI
- Summary: One engineer rebuilt Next.js as “vinext” in a week for $1,100 in AI tokens. AI doesn’t need the intermediate abstractions humans created to manage complexity — many existing software layers will become obsolete.
- Why Randy thinks I’d like it: Great case study of AI productivity, but more “look what AI can do” than practitioner reflection.
- Cross-links: CHOP, AI Developer, Produce vs Consume
- Tags: #ai-productivity #vibe-coding #case-study
- Reaction: Didn’t land because I don’t know Next.js well enough to appreciate the technical achievement. But the underlying pattern — AI reimplements a well-documented API surface from scratch, skipping all the human-cognitive-load abstractions — is interesting. Would love to see this pattern applied to a domain I know better.
- The Happiest I’ve Ever Been — Ben Wallace on finding joy outside career
- Summary: True happiness comes from activities aligned with intrinsic values, not career prestige. Coaching youth basketball fulfilled him more than his corporate job. Tech workers should question Silicon Valley’s propaganda about professional value equaling personal worth.
- Why Randy thinks I’d like it: Connects to your joy/meaning writing but not AI-specific.
- Cross-links: Joy, Produce vs Consume
- Tags: #happiness #meaning #anti-hustle
- Reaction: Interesting post on purpose. Worth linking from spiritual health content somewhere.
- The Very Hungry Caterpillar Design Analysis — Mac Barnett & Jon Klassen on Eric Carle’s craft
- Summary: Carle’s book succeeds through masterful control of color, shape, and die-cut innovation — not expressive character design. He deliberately stripped away emotional facial features to create warmth, bridging toys and books while carrying bittersweet melancholy about transformation.
- Why Randy thinks I’d like it: Beautiful craft analysis but tangential to your usual themes.
- Cross-links: AI Journal
- Tags: #design #craft #children
- Reaction: Skimmed, didn’t hook me.
- What Claude Code Actually Chooses — Research on AI code assistant tool selection
- Summary: (Already read — Igor noted it was excellent)
- Why Randy thinks I’d like it: Directly relevant to your daily CHOP workflow.
- Cross-links: CHOP, How Igor CHOPs, AI Cockpit
- Tags: #ai-tools #claude #research
- Reaction: Already read — excellent.
What I Gravitate Toward
This section is Randy’s recommender training data. It evolves as patterns emerge from what I read, skip, and love — building toward a personalized content ranker. The more detailed this gets, the better Randy’s predictions become.
Content Attributes I Love (stack-ranked)
- Practitioner reflection on craft — Someone who actually does the work reflecting on what they learned, not theorizing from the sidelines. Bonus if they challenge their own assumptions.
- Connects to my existing writing — If it extends, challenges, or provides evidence for something I’ve already written about (cognitive debt, agency, AI-native management), I’m almost always in.
- Contrarian takes with substance — Not contrarian for shock value, but genuinely questioning mainstream consensus with real reasoning. “AI will replace X” hot takes = skip. “Here’s why AI actually makes X harder” = love.
- Systems thinking — How parts interact, emergent complexity, unintended consequences. Especially when applied to AI’s second-order effects on teams and organizations.
- Real case studies over theory — One company’s actual experience beats ten thought pieces. Show me the receipts.
- Management/leadership in AI era — How should orgs adapt? What’s breaking? What’s working?
Content Attributes I Skip
- Hype pieces — “AI will revolutionize X” without specifics
- Benchmark comparisons — Model A vs Model B on task C
- Already-seen insights — If I’ve internalized the core idea (e.g., PG startup lessons, “AI erodes deep skills”), I skip even good writing about it. Topic match alone doesn’t save a shallow article. The bar for familiar territory: either a genuinely novel angle or exceptional writing. Hard to know depth before reading though — flag the risk, don’t predict love or skip.
- Mediocre writing — Good ideas in long, underpowered prose get downgraded. Length must be earned.
- Pure tutorials — Step-by-step how-tos without opinions or reflection
- Political framing of AI — Left/right lens on technology
Mood Modifiers
- Deep focus mood: Long-form practitioner essays, systems thinking, anything requiring concentration
- Quick scan mood: Case studies, data points, contrarian one-pagers
- Creative/playful mood: Design analysis, craft deep-dives, unexpected connections between fields
- Management mood: Hiring, culture, team dynamics, organizational design
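The attribute lists above could eventually be operationalized as a toy scorer. A minimal sketch of what that might look like, where every tag name and weight is made up for illustration and none of it is Randy’s actual logic:

```python
from dataclasses import dataclass, field

# Hypothetical weights for the stack-ranked "love" attributes, highest first.
LOVE_WEIGHTS = {
    "practitioner-reflection": 6,
    "connects-to-my-writing": 5,
    "contrarian-with-substance": 4,
    "systems-thinking": 3,
    "real-case-study": 2,
    "ai-era-management": 1,
}

# Skips are a flat penalty rather than an automatic zero, so a strong
# love signal can still surface a borderline piece for a skim.
SKIP_PENALTY = {
    "hype": -5,
    "benchmark-comparison": -5,
    "pure-tutorial": -4,
    "political-framing": -4,
    "mediocre-writing": -3,
}

@dataclass
class Article:
    title: str
    attributes: set[str] = field(default_factory=set)
    familiar_territory: bool = False  # core idea already internalized

def score(article: Article) -> tuple[int, str]:
    """Return (score, verdict). Per the calibration note, familiar
    territory is flagged, not skipped: flag the risk, don't predict."""
    s = sum(LOVE_WEIGHTS.get(a, 0) for a in article.attributes)
    s += sum(SKIP_PENALTY.get(a, 0) for a in article.attributes)
    if article.familiar_territory:
        return s, "you've written about this — worth a skim for new angles"
    return s, ("read" if s >= 4 else "skim" if s > 0 else "skip")
```

For example, an article tagged `connects-to-my-writing` and `systems-thinking` scores 8 and gets “read”; a pure hype piece scores -5 and gets “skip”. The thresholds are guesses that the Calibration Notes below would tune over time.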
Calibration Notes
- 2026-03-01: First session. From Joy & Curiosity #76, picked 10 of 19 links. Strong bias toward the cognitive debt / AI practice / management cluster. Skipped the PG startup essay (already internalized), political AI framing, and pure X/Twitter commentary. Had already read the Claude Code research — Igor actively follows AI tooling research independently.
- 2026-03-03: Debrief on 8 articles. Most landed as “familiar territory, nothing new.” Key lessons: (1) Topic relevance alone doesn’t save shallow execution — 4 of 8 were right topic but underwhelming depth. (2) Writing quality is a real filter — good ideas in mediocre prose get downgraded. (3) Domain knowledge matters — vinext pattern was interesting but didn’t land because Igor doesn’t know Next.js. (4) Non-AI content (happiness/purpose) can land when it connects to personal writing themes. (5) For familiar territory, flag “you’ve written about this — worth a skim for new angles” rather than predicting love.