Sometimes the easiest route is the one an 8-year-old will take
16 April 2026

Most professionals don’t struggle to solve problems. What we struggle with is noticing when our default way of solving problems has become the problem.

We call experience “pattern recognition,” but in reality, it’s “pattern enforcement.” The longer we operate within a craft — engineering, operations, customer success, finance — the more our brains learn to compress complexity into familiar moves: a framework, a checklist, a set of best practices that once worked and now feel inevitable. It’s efficient, but it’s also constraining.

Expertise is meant to expand what we can do, yet it often narrows what we can imagine. The same knowledge that makes us valuable can make us predictable.

I was reminded of this in an unexpected place: a conversation about an eight-year-old building a digital library for their home bookshelf. The "adult" solution I presented was a hosted database of record: lending history, due dates, automated chasers, maybe even a little dashboard. A proper system. Clean. Rational. Slightly overbuilt.

The child’s solution was different. Photos of the bookshelf were fed into an AI model; a catalogue came out, and a simple website made the collection browsable for friends. No grand architecture, no operational overhead, no obsession with future-proofing a system that didn’t yet have real users.
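The child's whole pipeline fits in a few lines. The sketch below is illustrative, not a transcript of what they built: `extract_titles` stands in for a hypothetical vision-model call, and the rest is nothing more than a static page generator.

```python
# A minimal sketch of the child's approach: photo in, browsable page out.
# No database, no lending history, no chasers.
from html import escape


def extract_titles(photo_path: str) -> list[str]:
    # Hypothetical step: send the shelf photo to a vision model and
    # parse the returned list of spines. Stubbed here for illustration.
    return ["The Hobbit", "Matilda", "Charlotte's Web"]


def render_catalogue(titles: list[str]) -> str:
    # One static HTML page, titles sorted and escaped for safety.
    items = "\n".join(f"  <li>{escape(t)}</li>" for t in sorted(titles))
    return f"<h1>Our Bookshelf</h1>\n<ul>\n{items}\n</ul>"


if __name__ == "__main__":
    with open("index.html", "w") as f:
        f.write(render_catalogue(extract_titles("shelf.jpg")))
```

That's the entire system: drop `index.html` on any static host and friends can browse the shelf.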

This is not to say that children are smarter or that AI is magical. It is about incentives and identity. My adult solution was a show of competence. The child's solution was a direct response to the job.

Innovation isn’t a tooling problem

When organisations talk about innovation, they typically focus on tools: AI copilots, new platforms, automation. But tools rarely determine the quality of decisions. That comes down to instinct, honed through experience: the instinct that decides what counts as a "serious" solution, which risks are acceptable, and which kinds of complexity signal professionalism.

Maslow's hammer comes to mind here: if the only tool you have is a hammer, every problem looks like a nail. The harder diagnosis is organisational: most companies reward hammering. Speed over rethinking. Outputs over outcomes. Visible effort over invisible simplification.

The result is a predictable pattern. We don't ask, "What is the simplest system that solves this today?" We ask, "What is the most defensible system that won't get me criticised later?" Those are different decisions: one serves the work, the other protects the decision-maker.

Modern work is heavily optimised for repeatability. To most managers and business owners, this reads as an operational necessity. Quarterly targets, performance cycles, and stakeholder expectations push organisations toward solutions the wider business system can understand: solutions that can be justified in a meeting, documented for audit, and defended as aligned with "how we do things."

Our frameworks become organisational shorthand for legitimacy. They serve as both thinking aids and social proof. But when this happens, organisations stop distinguishing between "a good framework" and "good thinking."

Metrics intensify this. When results are evaluated monthly or quarterly, the system selects for approaches that produce predictable increments rather than uncertain leaps. Experimentation becomes theatre: safe bets labelled as innovation.

A strategic observation emerges from this: incentives don't reward creativity; they reward credibility.

And credibility is often purchased with complexity.

This is why the “curse of knowledge” is less about cognition and more about culture. Individuals can notice their own biases, but they can’t will their way out of a system that penalises deviation. In many organisations, the easiest way to fail is to attempt something unfamiliar and be wrong. The safest way to fail is to follow best practice and still miss the mark — because the process itself provides cover.

That’s the hidden cost of expertise inside a high-accountability environment: you learn to optimise for defensibility. You learn to select solutions that are easier to explain than to test.

AI introduces a new layer to this tension. It is tempting to believe that new tools will create new behaviours. More likely, they will accelerate old ones.

If your organisation is rewarded for “shipping,” AI will help it ship more. If it is rewarded for “control,” AI will be constrained into compliance workflows. If it is rewarded for “certainty,” AI will be used to generate better-sounding rationalisations for the same decisions.

Tools don’t alter a system’s logic. Often, they scale whatever the system already rewards.

So what does a different approach look like?

One useful constraint is to ask: if this were the first time we were seeing this problem, and we were not allowed to use any of our existing playbooks, what would we do? Not as an icebreaker, but as a design rule.

The point isn’t to reject best practices. It’s to prevent them from becoming default settings. A best practice is a hypothesis that has worked under certain conditions. Treating it as a law is how organisations fossilise.

For leaders, the organisational implication is uncomfortable: you can’t ask for experimentation while measuring people like operators.

If you want teams to try novel solutions, you have to create “permission to be wrong” that is real, not rhetorical. That means separating exploratory work from performance metrics that penalise variance, evaluating decisions by the quality of the reasoning rather than the familiarity of the method, and making simplification a status move that signals mastery rather than a lack of rigour.

Most organisations don’t have a creativity problem. They have an incentive problem. The system rewards adherence, so it gets adherence.

The eight-year-old’s answer isn’t a blueprint. It’s a mirror. It shows how direct problem-solving looks when it isn’t filtered through professional identity, institutional memory, and the need to be seen as competent.

The real question isn’t whether AI will make organisations more innovative. The question is whether they are willing to change what they reward — because that is what decides which instincts win.