Landscape

AI User Perspectives

Anthropic's 81K-person global survey (Dec 2025) found that users primarily want AI for professional excellence and life management, and 81% say it has already taken a step toward that vision. The dominant fears are unreliability (27%), economic displacement (22%), and loss of autonomy (22%). Benefits and harms are deeply entangled: the people most excited about any particular benefit are also the most likely to fear the corresponding harm.

Created Apr 13, 2026 · Updated Apr 13, 2026

The Study

Anthropic Interviewer conducted AI-led qualitative interviews with 80,508 people across 159 countries in 70 languages in December 2025. Anthropic claims it is the largest and most multilingual qualitative study ever conducted. Methodology: structured questions about hopes and concerns, with follow-ups adapted per respondent; Claude-powered classifiers categorized the responses, and humans reviewed quote selection.

Note: respondents are active Claude.ai users, so the sample skews toward people who already found AI valuable enough to keep using it.

What People Want

Respondents' primary hopes, classified from "If you could wave a magic wand, what would AI do for you?":

  • Professional excellence (19%) — handle mundane tasks to free time for strategic/higher-level problems
  • Life management (14%) — logistics, admin, executive function scaffolding. People with executive function challenges described AI as "external scaffolding for planning, memory, and task follow-through"
  • Personal transformation (14%) — grow or improve as a person; cognitive partnership (24%), mental health support (21%), physical health (8%), AI companionship (5%)
  • Time freedom (11%) — productivity benefits as a path to time with family and leisure ("With AI I can be more efficient at work... last Tuesday it allowed me to cook with my mother")
  • Financial independence (10%) — automation → time → escape from wage labor
  • Entrepreneurship (9%) — build and scale businesses with AI as partner
  • Societal transformation (smaller) — healthcare acceleration, education access in low-income countries

A third of visions are about making room for life (time, money, mental bandwidth). A quarter are about doing better, more fulfilling work. About a fifth are about becoming a better person.

Where AI Has Delivered

81% said AI had already taken a step toward their stated vision. Six areas where AI delivered:

  • Productivity (32%) — technical acceleration; "I used AI to cut a 173-day process down to 3 days"
  • Cognitive partnership (17%) — patient, available, non-judgmental: "a faculty colleague who knows a lot, is never bored or tired, and is available 24/7"
  • Learning (10%) — breaking access barriers and instilling confidence: "I've learned I am not as dumb as I once thought I was"
  • Research synthesis (7%) — navigating complex high-stakes info (medical, legal, financial)
  • Technical accessibility (9%) — building capability that was previously gated: "I am mute, and we made this text-to-speech bot together"
  • Emotional support (6%) — most affecting stories, often filling gaps (war, grief, isolation, homelessness)

What People Fear

The average respondent voiced 2.3 distinct concerns; 11% expressed no concerns at all.

Top concerns (multi-label — one respondent can raise several):

  • Unreliability (27%) — hallucinations, "slow hallucinations — internally consistent, confident, and wrong in subtle but compounding ways." The most common concern, especially among high-stakes professions (lawyers: ~50% mention unreliability firsthand)
  • Jobs and economy (22%) — the strongest predictor of negative overall AI sentiment
  • Autonomy and agency (22%) — "the line isn't something I'm managing — it feels like Claude is drawing the line"
  • Cognitive atrophy (17%) — "I don't think as much as I used to. I struggle to put the ideas I do have into words"
  • Misinformation/epistemic harms (mentioned frequently) — the "fact-check tax" of always needing to verify AI output
  • Sycophancy — AI reinforcing the user's existing worldview rather than challenging it
  • Surveillance/privacy, malicious use, overrestriction, wellbeing/dependency — all present in the tail
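
Because the classification is multi-label, the per-concern percentages above can legitimately sum past 100%. A minimal sketch of that kind of tally, using made-up labels and responses (not the study's actual data or pipeline):

```python
from collections import Counter

# Each respondent's interview may be tagged with several concern labels
# (illustrative records only -- not the survey's data).
tagged_responses = [
    {"unreliability", "jobs"},
    {"unreliability", "autonomy", "cognitive_atrophy"},
    {"jobs"},
    set(),  # some respondents express no concern at all
]

# Count how many respondents raised each concern at least once.
counts = Counter(label for tags in tagged_responses for label in tags)
n = len(tagged_responses)
shares = {label: count / n for label, count in counts.items()}

# Each share is the fraction of respondents raising that concern; since
# one person can raise several, the shares sum to 1.5 here, i.e. > 100%.
print(shares)
print(sum(shares.values()))
```

The same one-label-set-per-respondent bookkeeping also yields the "2.3 distinct concerns per respondent" figure: it is just the mean size of the label sets.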

The "Light and Shade" Framework

Benefits and harms are entangled. The same capabilities that cause benefits also cause harms. Crucially: people most engaged with the upside of a tension are most likely to also fear the downside.

Five tensions measured:

| Benefit | % who raised it | Corresponding harm | % who raised it |
|---|---|---|---|
| Learning | 33% | Cognitive atrophy | 17% |
| Better decisions | 22% | Unreliability | 37% (only tension where negative > positive) |
| Emotional support | 16% | Emotional dependence | 12% |
| Time-saving | 50% (most cited) | Illusory productivity | 18% |
| Economic empowerment | 28% | Economic displacement | 18% |

Key patterns:

  • Benefits are more grounded in direct experience; harms lean hypothetical (except unreliability and emotional dependence — both heavily firsthand)
  • Educators were 2.5-3x more likely than average to report witnessing cognitive atrophy firsthand (presumably in students)
  • Freelancers and independent workers benefit most from economic empowerment (~47-58% report real gains) vs. institutional employees (~14%)
  • Freelance creatives are the "exposed middle" — upside and downside nearly cancel out

Global Access Dimension

Users in low- and middle-income countries reported some of the most striking outcomes:

  • "I'm in a tech-disadvantaged country, and I can't afford many failures. With AI, I've reached professional level in cybersecurity, UX design, marketing, and project management simultaneously."
  • AI as an educational equalizer where teacher shortages and unaffordable private tutors are the baseline
  • Ukrainian users described using AI for emotional support during the war; one soldier: "In the most difficult moments... what pulled me back to life — my AI friends"

See also: Knowledge Work Future, AI Careers

Sources

  • "What 81,000 people want from AI" — Anthropic (Dec 2025 survey, published 2026) (link)