
Why most artificial intelligence projects fail

· 6 min read · Artificial intelligence · Leadership

The promise of artificial intelligence has created a sense of urgency in many organizations that frequently precedes any understanding of the problem being addressed. Executive committees allocate budget to "AI initiatives" without having precisely defined what operational deficiency they intend to correct, which business metric should improve, or how they will know in three months whether the project has worked. The result is predictable: the model gets integrated, the dashboard is presented, the vendor invoices, and nobody remembers why it all started.

After directly participating in the implementation of a dozen or more AI projects, both in my own organizations and in others, I have identified five recurring causes that explain why these projects end up in the drawer of what could have been. None of them are technical.

First cause: the absence of a well-formulated problem

The founding mistake consists of framing the project in terms of the solution rather than in terms of the problem. "We want to put AI into customer support" is not an objective: it is the anticipation of an answer. The correct objective would be "we want to reduce the average resolution time of first-tier incidents by forty percent". From that objective, it becomes possible to evaluate whether AI is the right tool, or whether there are simpler options that should be ruled out first.

In most of the cases I have seen, the optimal tool was not a language model but a workflow redesign or traditional rule-based automation. The model ended up integrated because it was what the committee wanted to hear, not because it was what the problem demanded.

Second cause: confusing technical capability with business value

Current models are impressive. That impression is the enemy of judgment. A model that summarizes meeting minutes with astonishing technical precision adds no value if the real problem was that attendees were not following through on the commitments made. Technical capability disconnected from business outcome is a spectacle, not a return.

The question we should ask in every project is uncomfortable: if the model works exactly as expected, what concrete figure on the profit and loss statement will change, by how much, and when? If there is no clear answer, the project is not ready to begin.

Third cause: underestimating maintenance cost

The visible cost of an AI project is the initial development. The real cost is maintenance. Models change, vendors update pricing, input data drifts over time, edge cases appear in production, and performance measured at launch deteriorates gradually.

A project that made economic sense at its six-month cost may stop making sense at its real two-year cost. A significant share of abandoned projects are not abandoned because they do not work: they are abandoned because the total cost of ownership becomes unsustainable when the internal technical team lacks the judgment to optimize it.
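The gap between launch cost and lifetime cost can be made concrete with a back-of-the-envelope calculation. The figures below are invented for illustration; only the shape of the arithmetic matters.

```python
# Illustrative total-cost-of-ownership arithmetic. All figures are
# made up; the point is that recurring costs, not the initial build,
# dominate over a realistic time horizon.
def total_cost_of_ownership(months: int,
                            initial_dev: float,
                            monthly_inference: float,
                            monthly_maintenance: float) -> float:
    """Initial build cost plus recurring costs over the horizon."""
    return initial_dev + months * (monthly_inference + monthly_maintenance)

# At six months the recurring share still looks modest...
six_months = total_cost_of_ownership(6, 60_000, 2_000, 4_000)
# ...at two years it has far outgrown the initial development cost.
two_years = total_cost_of_ownership(24, 60_000, 2_000, 4_000)
print(six_months, two_years)  # prints 96000 204000
```

With these hypothetical numbers, a project evaluated at 96,000 over six months actually costs 204,000 over two years, and the recurring portion alone exceeds the entire initial build.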

Fourth cause: lack of instrumentation

It is surprising how often an AI-based system is pushed to production without the minimal instrumentation required to know whether it is working well. How many queries does it handle per day? What percentage does it resolve successfully? What subjective quality does the end user perceive? How much does each interaction cost? Without these answers, any claim about project return is a fabrication.

Instrumentation is not optional. It is the precondition for the project to be evaluable, and therefore, for it to improve. Projects that are not measured never improve: they only age.

Fifth cause: ignorance of how models behave

Language models are tools with specific characteristics: limited context window, tendency to confabulate in the absence of information, sensitivity to prompt format, variability between versions from the same provider. Those who integrate a model without understanding these characteristics end up surprised when the system fails in ways that were foreseeable.

The role of applied AI systems architect is relatively new, and practitioners are scarce. Many organizations delegate the work to their existing development team without external support and make mistakes that stem more from unfamiliarity with the domain than from any lack of technical capability.

What works

Projects that do produce value share observable characteristics. They start from a concrete, measurable problem. They choose the simplest tool that could solve the problem, ruling out AI when it is not necessary. They quantify total cost of ownership before starting. They instrument the system from day one. And they have someone with enough judgment to distinguish when the model is working and when it is pretending.

The role of the external advisor in these projects is frequently to be the voice asking the uncomfortable questions at the beginning, while there is still time to reformulate the project, rather than at the end, when all that remains is to archive the results and learn from the mistake. The questions are the same in every case and the answers determine whether the project will be worth it. Asking them is free. Not asking them can be very expensive.

Does this resonate?

If you believe an external perspective could add value to your organization's technology decisions, a first no-commitment conversation is the natural starting point.