Everyone Has an AI Strategy

Try an experiment. Visit any technology company’s website - or increasingly, any company at all - and measure how long until you see a mention of AI. A few seconds, usually. Often it’s in the hero banner.

TTFAI (Time to First AI) is approaching zero across the industry.

That’s not inherently a problem; AI has tremendous potential to transform how software is built and how businesses operate. But when every company claims the same capability, the claim stops carrying any weight. So the question that matters isn’t “do you have an AI strategy?” It’s “what’s actually behind it?”

The rise of AI-washing

AI-washing follows a familiar human tendency: describing where you’re headed as though you’ve already arrived. A product roadmap becomes a sales pitch, a proof of concept gets added to the features list, and an API call to a third-party model becomes an “AI-powered platform.”

None of these things are lies exactly. But they’re not the same as genuine, defensible AI capability either. In a market where investors, acquirers, and customers are making decisions based on these claims, the difference matters.

What’s behind the curtain

When I assess a company’s AI positioning, I focus on three things:

Data

Foundation models are a commodity - anyone can access GPT-5 or Claude via an API. What’s not a commodity is data: the accumulated, structured, high-quality proprietary dataset that makes a model trained or fine-tuned on it better than the alternative. The questions to ask are simple: does this company have data that nobody else has, is it governed appropriately, and have they built something that takes advantage of it? If the answer is no, the AI story is thinner than it looks.

Architecture

There’s a difference between AI that’s integrated into the core of a product and AI that’s been bolted on top of something built for a different era. The latter isn’t wrong in itself - teams ship what they can. But bolted-on AI is harder to improve, harder to scale, and often delivers a worse user experience than the marketing suggests. I want to understand whether AI is load-bearing or decorative.

Team

Product capability is a function of the capability of the people who build it. Many organisations have spent heavily on people with “AI” in their titles, but the question is whether there are people who understand the fundamentals deeply enough to make good decisions: model selection, data pipelines, and where AI helps versus where it introduces risk. Understanding the capability of the team gives you a better view of the quality of the product.

What AI capability looks like

When assessing AI capability, look for specifics rather than vague statements. Teams that have done the work can tell you exactly where AI sits in their product, what it does, how they measure whether it’s working, and what the failure modes are. They’ve thought about the edge cases and where the model gets it wrong.

Anyone can say “we’re leveraging AI to drive efficiency across our platform.” Fewer people can tell you which model, trained on what data, deployed where in the stack, improving which metric by how much.

The businesses best positioned to capitalise on AI aren’t necessarily the ones who got to zero TTFAI fastest. They’re the ones who, when you ask the hard questions, have the answers ready. Knowing the right questions to ask (and how to interpret the answers) is where it gets interesting.