See also: AI User Perspectives for empirical data (Anthropic's 81K-person survey) on what users want from AI and where they see risk — grounding the theoretical labor-market analysis below.
The Knowledge-to-Wisdom Shift
For decades, knowing things was the scarce resource. School systems, credentials, and job interviews all measured knowledge accumulation. The result: over 1 billion knowledge workers valued for what they knew — lawyers, engineers, consultants, programmers.
AI disrupts this entirely. Models can now absorb entire fields of study in days and outperform human experts in physics, law, and engineering simultaneously, around the clock. Facts and skills are becoming commoditized.
Joe Hudson, executive coach to leaders at OpenAI, Google DeepMind, Anthropic, and Apple, reports that many of his clients are "building the technology that will make their own skills obsolete" and are racing to develop capabilities AI cannot replicate.
The leverage has shifted from what you can do to how you show up while doing it. Competence is now table stakes.
The Three Wisdom Skills
Emotional Clarity
The ability to recognize emotions, feel them, and move forward without being obstructed. Not emotional suppression or management — actually having the emotion without being captured by it.
Why it matters: decisions are fundamentally emotional, not logical. When the emotional center of the brain is impaired, IQ stays the same but simple decisions take hours. Procrastination is an emotional struggle, not a time management problem. As AI handles more execution tasks, humans become more focused on decision-making — where emotional clarity is decisive.
Sam Altman describes emotional clarity as "one of the most critical skills in a post-AGI world."
Discernment
The ability to see clearly and zero in on what matters — especially when drowning in data. Analysis paralysis from data overload is already the #1 source of C-suite decision failure (Deloitte, 2024).
The deeper principle: self-perception enables world perception. If you can't see yourself clearly (blind spots, limiting beliefs, false assumptions), you can't assess external situations accurately. Discernment is as much inner work as analytical skill.
Connection
Deep relational presence — attuning to others, creating psychological safety, vulnerability without performance. Human connection requires mutual embodied presence that AI cannot replicate: nervous systems coregulating, micro-expressions, pulse changes.
Google's Project Aristotle finding: of all factors studied, psychological safety was the #1 predictor of high-performing teams. The Harvard longitudinal study (86 years): quality of relationships at 50 predicts health at 80 better than cholesterol levels.
The Allocation Economy
Dan Shipper's related framing: we're entering the "allocation economy" where everyone becomes a manager. You won't be judged on how much you know but on how well you can allocate and manage resources (including AI agents) to get work done. Being a great manager requires all three wisdom skills.
See Agent Proficiency — agent management is arguably the knowledge work skill with the most staying power, sitting at the intersection of direction-setting and wisdom.
Corporate Precedents
- Satya Nadella (Microsoft) — Made empathy and emotional intelligence central to Microsoft's culture transformation. Market cap: $300B → $3T in 8 years.
- Google's Search Inside Yourself — Mindfulness and emotional intelligence program for engineers. Measurable: lower stress, higher engagement, faster innovation. Now licensed to external executive teams.
- OpenAI executives — Actively investing in wisdom skills through coaching, with awareness that their technical skills face obsolescence.
Knowledge Types Taxonomy
The classic framework (Polanyi, via Bloomfire):
- Explicit knowledge — Easy to articulate, write down, share. Manuals, policies, documentation. AI makes this easier to capture, organize, and retrieve.
- Implicit knowledge — The application of explicit knowledge. Transferable skills: negotiation techniques, sales approaches, conflict resolution, organizational culture absorption. Shared through social interaction; hard to document because it's performed subconsciously.
- Tacit knowledge — Gained from personal experience; the hardest to express. Gut feelings, intuition, "the exact feel for the dough." Non-verbal and context-dependent — experts may not realize they possess it. Requires long-term mentorship, storytelling, and high-trust culture to transfer. This is the knowledge most at risk of being lost when employees leave.
Polanyi's deeper point (often misunderstood): The business KM community (via Nonaka) reduced Polanyi's epistemology to a simple bifurcation: knowledge is either tacit or explicit. But Polanyi's actual framework is about tacit knowing as a process, not a category. "We know more than we can tell" means knowing is always from subsidiary awareness to focal awareness — a dynamic process of indwelling, not a static bucket. The conversion model (tacit → explicit → tacit) that dominates KM literature misrepresents this entirely. This matters for AI: models can capture explicit knowledge and some implicit knowledge, but the process of tacit knowing — the embodied, contextual, indwelling experience — is fundamentally different from retrievable information.
The key challenge: most organizations use tools suited only for explicit knowledge (intranets, folders), while the most valuable knowledge (tacit) requires completely different capture methods — socialization, apprenticeship, narrative exchange.
Every job becomes a software job: Anish Acharya observes that the ambitious view of coding agents is that almost any problem or solution can be expressed in software, making coding capability upstream of all knowledge work. Legal, comms, marketing, HR, and finance will increasingly be software-first. This shifts which knowledge types matter: procedural knowledge becomes executable code; tacit knowledge about when and why to act becomes the human differentiator.
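A minimal illustration of "procedural knowledge becomes executable code" (hypothetical rule and names, not any company's actual policy engine): a handbook rule becomes a function an agent can call, while the judgment about edge cases stays human.

```python
from dataclasses import dataclass

@dataclass
class ExpenseClaim:
    amount_usd: float
    category: str
    has_receipt: bool

# Procedural knowledge, encoded: the handbook rule is now executable.
def auto_approve(claim: ExpenseClaim) -> bool:
    """Approve routine claims; everything else escalates to a human."""
    if not claim.has_receipt:
        return False
    if claim.category == "travel" and claim.amount_usd <= 500:
        return True
    if claim.category == "meals" and claim.amount_usd <= 75:
        return True
    return False  # the tacit part: a human decides when the rule should bend

print(auto_approve(ExpenseClaim(42.0, "meals", True)))    # True
print(auto_approve(ExpenseClaim(900.0, "travel", True)))  # False -> human review
```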
AI's Early Labor Market Impact (2026 Data)
Anthropic's "observed exposure" methodology (Massenkoff & McCrory, 2026): New measure combining theoretical LLM capability (can an LLM theoretically speed up this task?) with real-world Claude usage data, weighting automated and work-related uses more heavily. Key insight: actual AI penetration is far below theoretical capability.
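A sketch of how such an observed-exposure score might be computed; the weights, field names, and numbers below are illustrative assumptions, not the paper's actual specification.

```python
# Hypothetical "observed exposure" score per occupation.
# Weighting scheme is an illustrative assumption, not
# Massenkoff & McCrory's actual specification.

def task_exposure(theoretical_capability: float,
                  usage_share: float,
                  automated_share: float,
                  work_related_share: float) -> float:
    """All inputs in [0, 1]. Observed usage is up-weighted when it is
    automated (agentic) or explicitly work-related."""
    usage_weight = 1.0 + 0.5 * automated_share + 0.5 * work_related_share
    observed = min(1.0, usage_share * usage_weight)
    # Exposure requires both that an LLM *could* speed the task up
    # and that people *actually* use it that way.
    return theoretical_capability * observed

def occupation_exposure(tasks: list[dict]) -> float:
    """Importance-weighted average of task-level exposures."""
    total_w = sum(t["importance"] for t in tasks)
    return sum(t["importance"] * task_exposure(
        t["capability"], t["usage"], t["automated"], t["work_related"])
        for t in tasks) / total_w

# Invented task profile for a programmer-like occupation.
programmer_tasks = [
    {"importance": 0.6, "capability": 0.9, "usage": 0.8, "automated": 0.5, "work_related": 0.9},
    {"importance": 0.4, "capability": 0.7, "usage": 0.3, "automated": 0.1, "work_related": 0.8},
]
print(f"observed exposure: {occupation_exposure(programmer_tasks):.2f}")
```

The gap between `capability` and the usage-discounted score is the paper's key insight in miniature: theoretical exposure overstates what is actually happening.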
Most exposed occupations: computer programmers (75% observed coverage), customer service representatives, data entry keyers (67%), and financial analysts. 30% of workers have zero AI exposure — cooks, motorcycle mechanics, lifeguards, bartenders.
Exposed workers are not the poor — the most exposed group is 16 percentage points more likely to be female and 11pp more likely to be white, earns 47% more on average, and is 4x more likely to hold a graduate degree. AI is currently disrupting higher-educated, higher-paid professionals, not lower-wage workers.
Employment impact so far: no systematic increase in unemployment for highly exposed workers since ChatGPT's release. However, there is tentative evidence that hiring of younger workers (22-25) into exposed occupations has slowed ~14% (just barely statistically significant). Slowed hiring may not show up as unemployment if young workers exit the labor force instead.
The Jevons Paradox of AI Labor (Paweł Huryn): Job postings rose +22.7% YoY (TrueUp, Mar 2026). Entry-level hiring collapsed 73.4% in one year (Ravio, 2025). Both are true simultaneously.
The 160-year-old Jevons pattern: more efficient steam engines increased coal consumption, not reduced it — efficiency made coal viable for thousands of new applications. AI is doing the same to knowledge work roles. The production step gets cheaper; demand for the judgment step increases.
- Engineering: AI writes code faster → more projects become viable → postings up +34.1% YoY. The new projects need architects and system thinkers.
- Security: AI finds vulnerabilities faster → discoveries multiply → triage needs more human judgment. 50% of employers can't fill these roles.
- Product: AI collapses build time → what to build matters more than how → AI PM roles up 465%.
- Content: AI generates 100x more → noise floor rises → curation and taste become the bottleneck.
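A toy model of the Jevons arithmetic (invented numbers, constant-elasticity demand): when AI cuts the per-unit cost of production work and demand for that work is elastic (elasticity above 1), total spending on the work rises even as each unit gets cheaper.

```python
# Toy Jevons arithmetic: invented numbers, constant-elasticity demand.
# quantity ~ price^(-elasticity); total spend = price * quantity.

def total_spend(price: float, elasticity: float, base_q: float = 100.0) -> float:
    quantity = base_q * price ** (-elasticity)
    return price * quantity

for eff_gain in (1.0, 2.0, 5.0):  # AI makes the task 1x / 2x / 5x cheaper
    price = 1.0 / eff_gain
    print(f"{eff_gain:.0f}x cheaper -> total spend {total_spend(price, elasticity=1.5):.0f}")
# With elasticity > 1, each efficiency gain *increases* total spend on the
# work (100 -> 141 -> 224), mirroring coal after the efficient steam engine.
```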
The broken pipeline: The old career path ran through production work (do reps, build volume, become senior). AI handles the reps now. Companies need senior-level thinking; the path that produced it is narrowing. Only 1/3 of employees got any AI training; 52% of developers don't use AI agents at all.
Cognitive Density
The productivity paradox of AI in the workplace has a structural explanation: AI doesn't just automate labor, it changes the type of work remaining.
Before AI, a knowledge worker's day contained significant mental white space — formatting, summarizing, moving data, routine drafts. Tasks that could be done with music playing in the background, low-stakes, mechanical but necessary. This white space functioned as cognitive recovery time embedded in the workday.
AI automated that white space away. What replaced it: higher-stakes orchestration, strategic choices, and activities with much higher cognitive requirements. The day became denser. Not longer — denser. Every hour is now high-stakes work.
This explains the exhaustion many workers report despite productivity tool adoption. The Iron Man suit was supposed to augment human capability while reducing effort; instead, AI removed the low-effort tasks while the high-effort ones remained. Workers are the suit's pilot, running cognitive simulations constantly, with no idle cycles.
Connection to the Red Queen dynamic: The cognitive density problem compounds under competitive pressure. When every company is racing to adopt AI and cascade that pressure through their org (see Business Moats in AI — Fractal of AI Panic), nobody has the bandwidth to step back and notice they're running the wrong race. Cognitive density consumes all available attention, making it structurally harder to think strategically about whether the race itself is the right one.
Connection to the Ironies of Automation: Bainbridge's monitoring failure irony maps directly: when AI handles the execution, the human's job becomes continuous high-attention oversight. This is more cognitively demanding than doing the original task — supervisory roles require sustained vigilance without the natural rhythm of execution to structure time.
Ironies of Automation (Bainbridge, via Kingsbury)
Kyle Kingsbury resurfaces Lisanne Bainbridge's 1983 paper "Ironies of Automation" — originally about power plants and factories — as the essential framework for understanding AI's deskilling effects on knowledge work.
Three core ironies:
- Deskilling — Automation degrades the skills it's supposed to augment. When humans don't practice a skill, their ability atrophies. Software engineers report feeling less able to write code after working with code-generation models. Designers report weakened creative ability after offloading to ML. Students automating reading and writing lose "core skills needed to understand the world and develop one's own thoughts." Doctors using AI polyp detection perform worse at spotting adenomas during colonoscopies.
- Monitoring failure — Humans are bad at overseeing automated processes. If the system executes tasks faster or more accurately than a human, real-time review is essentially impossible. Humans also struggle to maintain vigilance over systems that mostly work — which is why journalists keep publishing fictitious LLM quotes and Tesla's former head of self-driving watched his car crash into a wall.
- Takeover collapse — When an automated system handles things most of the time but occasionally needs human intervention, the operator is out of practice and stumbles. Automated systems can mask failure by handling increasing deviation until catastrophe strikes, thrusting a human into an unexpected regime. Air France 447 is the canonical example: flight controls transitioned to an unfamiliar mode the pilots weren't trained for.
The labor shock spectrum: Kingsbury frames the range of outcomes from "ML turns out to be a normal technology" (massive capital write-down, labor market adapts, we muddle through) to mass displacement of knowledge workers across a broad swath of industries simultaneously — unlike previous automation waves that hit one sector at a time. The UBI solution is "hopelessly naïve": profitable megacorps already fight to avoid taxes and paying workers; no reason to believe AI companies will fund redistribution voluntarily.
New AI-Era Job Categories (Kingsbury)
As ML deploys broadly, new kinds of work emerge at the boundary between human and ML systems:
- Incanters — Specialists in prompting models. LLMs respond unpredictably to threats, flattery, repetition, and lies about financial bonuses. Getting reliable output requires a craft distinct from traditional programming.
- Process Engineers — Design safeguards and layers of review around ML outputs. "Adversarial process which introduces subtle errors to measure whether the error-correction process actually works" — pharmaceutical-plant-level safety engineering applied to AI workflows.
- Statistical Engineers — Control errors in the models themselves. Monitor confidence, detect drift, verify that ML outputs meet quality thresholds across changing distributions (a minimal drift-check sketch follows this list).
- Model Trainers — "A surprising number of people are now employed feeding their human expertise to automated systems." The RLHF contractor workforce — "as the quip goes, 'AI' stands for African Intelligence."
- Meat Shields — People accountable for ML systems under their supervision. When the Chicago Sun-Times published a 64-page AI-generated slop insert, the accountability chain (freelancer → King Features → Hearst → Sun-Times) revealed how companies need human bodies to absorb legal and reputational consequences. Madeleine Clare Elish calls this a "moral crumple zone."
- Haruspices — Named after Roman diviners who read entrails. Responsible for sifting through model inputs, outputs, and internal states to explain behavior post-hoc. Deep investigations into single cases or broader statistical analysis. Could serve ML companies, their users, journalists, courts, or agencies like the NTSB.
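The drift check mentioned under Statistical Engineers, as a minimal sketch using the Population Stability Index; the thresholds are conventional rules of thumb, not a standard, and the data below is simulated.

```python
import numpy as np

def psi(reference: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of model scores.
    Rule of thumb: < 0.1 stable, 0.1-0.25 drifting, > 0.25 investigate."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf           # catch out-of-range scores
    ref_pct = np.histogram(reference, edges)[0] / len(reference)
    rec_pct = np.histogram(recent, edges)[0] / len(recent)
    ref_pct = np.clip(ref_pct, 1e-6, None)          # avoid log(0)
    rec_pct = np.clip(rec_pct, 1e-6, None)
    return float(np.sum((rec_pct - ref_pct) * np.log(rec_pct / ref_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5_000)  # model scores at deployment time
recent = rng.normal(0.4, 1.2, 5_000)     # the input distribution has shifted
score = psi(reference, recent)
print(f"PSI = {score:.3f} -> {'investigate' if score > 0.25 else 'ok'}")
```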
What This Doesn't Mean
This isn't a soft-skills argument against technical depth. The framing is about what becomes differentiating when AI can match technical performance. Deep technical knowledge still matters — it's just no longer sufficient on its own.
Also relevant: AI Careers (where this connects to the bifurcation into big vs. small AI tracks) and Business Moats in AI (where the durable moats increasingly involve human judgment and relationships, not technical capability alone).
Sources
- "Knowledge Work Is Dying—Here's What Comes Next" — Joe Hudson (Every, Apr 2026) (link)
- "Unpacking AI at work: Data work, knowledge work, and values work" — Elmira van den Broek (Apr 2026) (link)
- "Different Types of Knowledge: Implicit, Tacit, and Explicit" — Bloomfire (Betsy Anderson) (link)
- "Notes on AI Apps / Feb 2026" — Anish Acharya (tweet, Apr 2026) (link)
- "Knowledge Management and Polanyi" — Eric M. Straw (academic paper) (link)
- "Knowledge About Knowledge" — [Tier C reference: 131K-word PDF, not fully synthesized. Covers epistemology of knowledge management.] (link)
- "Labor market impacts of AI: A new measure and early evidence" — Maxim Massenkoff & Peter McCrory (Anthropic, 2026) (link)
- "The Paradox Nobody's Naming: More Jobs Than Ever. Fewer People Who Can Do Them." — Paweł Huryn (tweet, Apr 2026) (link)
- "The Future Of Everything Is Lies, I Guess" — Kyle Kingsbury (PDF, Apr 2026) (link)
- "Running Faster to Go Nowhere: The AI Adoption Trap" — BuccoCapital / Educated Guess (Apr 2026) — cognitive density thesis; AI automating away mental white space; Fractal of AI Panic pressure cascade