January 26, 2026

The Bitter Lesson: An Uncomfortable Truth Every AI Startup Needs to Face

Josine oude Lohuis

We built an AI assistant for sustainability managers. Companies now use it, and they're willing to pay for it. And yet, the most important question we ask ourselves is "will any of this matter in five years?"

That question comes from something called the bitter lesson.

What We Built (And Why It Works Today)

Sustainability managers face a mess of regulatory requirements, impact calculations, and reporting deadlines. Generic AI tools like ChatGPT can't handle this work reliably. The answers aren't personalized to their industry. The guidance isn't specific enough. And the stakes are too high for hallucinated compliance advice.

So we did what any niche player would do: we packed domain expertise into every layer of our system.

Users enter their goal — say, writing a sustainability report. We break that down into specific tasks: analyze your environmental impact, assess climate risks, draft the report structure. Each task uses domain-specific tools we've built. Impact calculators for cotton water usage. Methods aligned with scientific standards. A curated library of official regulations and trusted resources.
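As a rough sketch of that decomposition (the goal names, task names, and tool names below are illustrative placeholders, not our actual API), each goal maps to a playbook of tasks, and each task to the domain-specific tools it may call:

```python
# Hypothetical goal -> tasks -> tools decomposition, as described above.
# All identifiers here are made up for illustration.

GOAL_PLAYBOOKS = {
    "write_sustainability_report": [
        "analyze_environmental_impact",
        "assess_climate_risks",
        "draft_report_structure",
    ],
}

TASK_TOOLS = {
    "analyze_environmental_impact": ["impact_calculator", "scientific_methods"],
    "assess_climate_risks": ["regulation_library"],
    "draft_report_structure": ["regulation_library", "trusted_resources"],
}

def plan(goal: str) -> list[tuple[str, list[str]]]:
    """Break a user goal into tasks, each paired with its domain tools."""
    return [(task, TASK_TOOLS[task]) for task in GOAL_PLAYBOOKS[goal]]

for task, tools in plan("write_sustainability_report"):
    print(task, "->", ", ".join(tools))
```

The point of the structure is that the domain knowledge lives in the playbooks and tools, not in the model itself.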

This approach works. Sustainability managers get actionable guidance that generic models simply can't provide. That's why companies pay for our tool. That's why we have a business.

But here's the uncomfortable part.

70 Years of AI Research Says We're Doing It Wrong

Rich Sutton's "Bitter Lesson" summarizes seven decades of AI research in one painful insight: general methods that leverage computation always outperform those that leverage human domain knowledge.

The clearest example comes from chess and Go.

For years, researchers tried to beat humans at these games by encoding grandmaster strategies directly into their systems. They studied the best players, catalogued winning moves, and hardcoded that knowledge into their models. Progress was incremental.

Then a team tried something different. They built a system with no human knowledge at all. Just the rules of the game, a goal, and the ability to learn through self-play. Given enough computational power, this approach didn't just match the human-knowledge systems — it demolished them.

The lesson is bitter because it repeats across every domain. Researchers keep trying to build knowledge into their systems. It always helps short-term. It always feels satisfying. And it always plateaus, eventually getting outpaced by simpler systems that scale through learning.

Did We Build the Wrong Thing?

So we sat down with this uncomfortable truth: did we build the wrong thing?

The answer, for now, is no. Today's generic models can't do sustainability managers' work well enough on their own. Our domain expertise creates real value. That's why people pay for it.

But the question isn't whether our approach works today. It's whether Google or OpenAI will release a model next year, or in three years, that can do everything our system does — without all our carefully crafted prompts and specialized tools.

If that happens, we need to have something else to compete on.

What Actually Compounds Over Time

The bitter lesson doesn't say domain knowledge is useless. It says domain knowledge baked into the system gets outpaced by learning. The companies that win are the ones that can learn faster than their competitors.

That reframing changed how we think about our roadmap. We're not just building features. We're building data flywheels. Everything we do now should create assets that help us train and improve our systems over time.

Four things compound for us:

Jobs to be done. Every conversation sustainability managers have with our system tells us what they're actually trying to accomplish. We analyze these interactions constantly. What do they need? Can we solve it today? If not, that gap becomes our training target.

User preferences. Was this answer helpful? Too long? Too generic? This feedback is pure gold for model improvement. We capture it at every interaction.

Domain-specific tools. Some things will always be too specialized for general models. Impact calculators. Regulatory lookups. Industry-specific workflows. These tools remain valuable regardless of how smart base models become.

UX/UI innovation. AI interfaces are still primitive. Chats and graphs barely scratch the surface. The companies that figure out how to deliver AI value effectively — not just accurately — will have a meaningful edge.

Learning How to Learn

Here's the shift in mindset: we're not building an agent that contains what we've discovered. We're building an agent that can discover like we can.

That sounds simple on paper. In practice, it's a complete rethinking of how we allocate engineering time.

We now spend hours each week reviewing conversations users have with our system. For every exchange, we ask: was this actionable? Was it concise? Was it personalized? We tag what worked and what didn't. This annotation data feeds back into our system's learning.

The goal isn't to write better prompts. It's to create the feedback loops that help the system get better by itself, and eventually make better prompts unnecessary.
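As a minimal illustration of that review loop (the rubric labels and data shapes here are hypothetical, not our actual schema), each reviewed exchange can be tagged against a small rubric, and the most frequent failure modes surfaced as the next training targets:

```python
from collections import Counter
from dataclasses import dataclass, field

# Illustrative rubric from the questions above: actionable? concise? personalized?
RUBRIC = ("actionable", "concise", "personalized")

@dataclass
class Annotation:
    """One reviewed exchange: which rubric criteria it passed."""
    conversation_id: str
    passed: set = field(default_factory=set)

    def failures(self) -> list[str]:
        return [c for c in RUBRIC if c not in self.passed]

def training_targets(annotations, top_n=3):
    """Count failure modes across a batch of reviews; the most
    frequent gaps become the next training and evaluation targets."""
    counts = Counter()
    for a in annotations:
        counts.update(a.failures())
    return counts.most_common(top_n)

# Example weekly batch (made-up data)
batch = [
    Annotation("c1", {"actionable", "concise"}),
    Annotation("c2", {"concise"}),
    Annotation("c3", {"actionable"}),
]
print(training_targets(batch))
```

The design choice that matters is the aggregation step: individual annotations are noisy, but counted over a week of conversations they tell you exactly where the system falls short.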

We’re building a flywheel: data comes in, we learn from it, our system improves, we attract more users, more data comes in. In software, this is called a moat. A defensible advantage that gets stronger over time, not weaker.

AI models are improving at a pace that should terrify anyone building on top of them. What feels like a competitive advantage today — that carefully tuned system prompt, that clever workflow — will be table stakes tomorrow.

The companies that survive aren't the ones with the best domain knowledge encoded into their systems. They're the ones that learn fastest from their users.

That's the bitter lesson. And we're building our company around it.

Any questions? Get in touch.
Pieter van Exter
CEO and Co-Founder
pieter.vanexter@linknature.io
