Humanity’s Next Bitter Lesson

Teaser image (generated with Perplexity, from “Il Quarto Stato” by Giuseppe Pellizza da Volpedo)

In computer science, the Bitter Lesson is now familiar. We’ve learned that systems based on general methods that scale with data and computing power eventually outperform approaches built on carefully handcrafted human knowledge. What feels elegant, insightful, and “smart” in the short term is often outperformed by brute-force generality in the long run.

Over the past few years, this lesson has extended beyond algorithm design. What we are witnessing is that task after task we felt required some additional human knowledge can now be completed by an LLM, with no brand-new algorithm involved.

Inside the AI industry, the shift is already obvious. As researchers, engineers, and advanced users, we have witnessed LLMs improve at tasks that were once considered deeply human: writing, coding, reasoning, tutoring. These improvements did not primarily come from encoding expert rules or domain-specific insights, but from scale—larger models, more data, and better optimization.

Outside this specific industry, however, many still underestimate the impact of AI. A common belief is that certain jobs contain something irreducibly human that cannot be replaced. This belief is especially strong in fields such as consulting, therapy, education, and other socially intensive roles. The argument is usually that even if AI can assist, it cannot replace the human element: human interaction, empathy, intuition, trust.

For the moment, this belief feels reasonable: many tools are still maturing and have not yet been fine-tuned or refined for specific use cases. But it rests on a fragile assumption.

What we often call the “human element” is not a mysterious quality. In practice, it is largely a matter of trust. Humans are trusted because, historically, they were the only agents capable of understanding context, communicating fluently, and responding appropriately in complex social situations.

Large language models are the first technology that directly challenges this monopoly.

Trust at Scale

For the first time, we have systems that can communicate in ways that feel natural, adaptive, and context-aware at scale. They do not merely retrieve information; they participate in dialogue. They explain, rephrase, persuade, and reflect.

Trust is rarely granted all at once. It is built incrementally. A small successful interaction leads to a little more trust, which leads to greater reliance, which eventually leads to normalization. Once this process begins, the perceived “human added value” starts to erode.

This erosion is already visible.

Therapy

Mental health is one of the clearest examples. Psychological support is expensive, often inaccessible, and unevenly distributed. As a result, many people have already turned to chatbots for emotional support, reflection, and guidance.

These systems do not replace professional therapy in a clinical sense, and we should not pretend they can. But for a large class of needs—talking through problems, gaining perspective, feeling heard—they are often perceived as good enough. More importantly, they are always available and cheap.

What people value in therapy is not only expertise, but the feeling of being understood. As language models improve at mirroring emotions, asking the right questions, and maintaining continuity over time, trust naturally emerges. Once a user has several positive outcomes, the fact that the system is not human becomes less relevant.

Consulting

Consulting offers another instructive case. When an organization hires an expert or an agency, it does so largely because it trusts their judgment. Credentials, reputation, and personal branding all serve one purpose: establishing confidence that this person or team is the right fit.

Much of consulting is not about unique insight, but about framing problems, synthesizing information, and presenting recommendations convincingly. These are precisely the domains where large language models already perform well.

As models become capable of tailoring their communication to specific organizational contexts, anticipating objections, and adapting their tone to different stakeholders, the traditional trust advantage of human consultants weakens. If an AI system can consistently deliver high-quality analysis and present it in a way that inspires confidence, then “being human” is no longer a sufficient differentiator.

Again, the shift does not happen all at once. It happens gradually, through repeated successful interactions, exactly as a professional does during their career. Are LLMs scaling the corporate ladder?

A Necessary Caveat

This perspective is deliberately pessimistic, but the outcome it describes is not inevitable.

It is entirely possible that large language models will not reach the level required to fully substitute for human social and relational roles. There may exist a form of “human added value” that is difficult to formally define, measure, and eventually replicate: something emergent from embodiment, lived experience, or social presence that continues to differentiate humans from machines.

Moreover, LLMs remain fundamentally human-based systems. They are trained on human-generated data, shaped by human preferences, and refined through human-in-the-loop feedback. The Bitter Lesson still applies, but scale alone is not sufficient without high-quality data, guidance, and normative input. In this sense, humans are not removed from the loop, even if AI enthusiasts sometimes seem to forget it.

However, recognizing both the power and the limits of these systems is essential. And precisely because this transformation concerns trust, work, and value at a societal scale, it demands political reasoning about how we redefine labor and social relations in the presence of scalable artificial agents.

The Bitter Lesson taught us that scaling beats cleverness. Humanity’s next Bitter Lesson may be that scaling also beats many of the social advantages we once believed were uniquely ours.




