O8 Insight Paper

Why Generic GenAI Fails in Supply Chain

Founder viewpoint · 6 min read · 2026-05-04

A practical lesson from trying to beat the stock market: generic AI sounds clever but cannot solve complex decision systems. Real supply chain progress comes from focused ML models trained on specific problems.

  • Generic AI can sound intelligent and still be useless at solving real-world decision problems — as a stock market experiment proved.
  • Machine learning works when the problem is clearly defined, the data is strong, and feedback eliminates wrong answers.
  • Supply chain is easier than markets because it can be broken into smaller, trainable games: what to order, what to ship, what to load.

I tried to build a model to beat the stock market using the latest agentic AI tools and the best generic LLMs. It did not work. I lost money. That experience taught me something important: generic AI is not enough for complex decision systems. The same lesson applies in supply chain. Real progress comes from focused machine learning models trained on specific problems, using strong data and clear feedback, not from spreading AI thinly across everything.

Like many others, I was intrigued by the promise of the latest agentic AI and generic large language models. If they were so capable, surely they could help identify patterns in the stock market and make money. In practice, they did not. The models could talk. They could analyse. They could generate plausible explanations. But they could not reliably win. There were too many dimensions, too much noise, too many false signals, and too many wrong paths that looked convincing. That was the key lesson: a model can sound intelligent and still be useless at solving a real-world game.

The problem was not that the models were weak in general. The problem was that they were not focused enough. When the search space is too wide, generic AI struggles. It wanders. It produces impressive language, but it does not consistently produce the right answers. It sees too many possibilities and has too little discipline. This is exactly what has happened with many heavily funded attempts to use GenAI to create supply chain AI solutions. Deep pockets do not fix the core issue. If the model is aimed at a vague, high-dimensional problem, it still fails. You cannot solve an ill-defined problem by throwing a bigger generic model at it.

What does work is much more disciplined. Machine learning works when it is supported by three things: a clearly defined problem, as much relevant data as possible for pattern recognition, and a good teacher to eliminate wrong answers and blind alleys. That teacher might be history, expert judgment, business rules, constraints, or feedback from outcomes. But the principle is the same: the model needs guidance. Once you have identified the problem and the desired result, you have a target. And once you have a target, brute force becomes useful.

I think of this as an AI game. You define the target clearly enough that the machine can aim at it. Then you generate a huge number of attempts and find the one that lands closest. It is like firing a million cannonballs at a target and seeing which lands nearest. If you do not know where the target is, brute force is wasteful. If you do know where the target is, brute force becomes powerful. That is the difference between AI as theatre and AI as engineering.
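The cannonball idea above can be sketched in a few lines. This is a toy illustration, not a real planning model: the one-dimensional target, the `spread` parameter, and the distance score are all hypothetical stand-ins for a clearly defined business objective. The point it demonstrates is the one in the text: brute force only works once the target is known, because only then can each attempt be scored.

```python
import random

def fire_cannonballs(target: float, attempts: int = 1_000_000,
                     spread: float = 100.0) -> float:
    """Brute-force search: generate many random attempts, keep the closest.

    The scoring step (abs(shot - target)) is only possible because the
    target has been defined up front. Without a target, every shot is
    equally plausible and the search learns nothing.
    """
    best_shot, best_distance = 0.0, float("inf")
    for _ in range(attempts):
        shot = random.uniform(-spread, spread)   # one "cannonball"
        distance = abs(shot - target)            # score against the known target
        if distance < best_distance:
            best_shot, best_distance = shot, distance
    return best_shot
```

With a million scored attempts, the best shot lands very close to the target; remove the scoring line and the same million attempts are just noise. That is the difference between brute force with a target and brute force without one.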

The stock market is a brutally hard version of this game. It can be done, but the barriers are high. You need serious capital, serious computing power, strong data, and the ability to compete against others doing the same thing. The game is real. But it is expensive, fast-moving, and unforgiving.

Supply chain is easier. Not easy, but easier than the stock market, because the dimensions are fewer and the decisions are more structured. Supply chains are governed by physical flows, planning rules, lead times, capacities, transport limits, supplier choices, service targets, and inventory positions. Most importantly, the problem can be broken down. That is the breakthrough. The challenge is not to solve supply chain in one move. The challenge is to solve a series of smaller games one after another: what to order, which orders to change, how to plan around production constraints, how to plan around transport constraints, which supplier to choose, what shipment to build, what machine to load. Each of these is a specific problem. Each has its own target. Each can be trained. Each can be improved. And when those models are linked together, you get something far more useful than one giant AI black box. You get a chain of focused intelligences.
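A minimal sketch of that chaining idea, under loud assumptions: the two functions below are placeholder rules standing in for trained models, and the names (`decide_order_quantity`, `decide_shipment`), the `State` fields, and the simple max/min logic are all hypothetical. What the sketch shows is the structure the text describes: each game has one narrow target, and the output of one decision feeds the next.

```python
from dataclasses import dataclass

@dataclass
class State:
    stock: int            # units on hand
    demand_forecast: int  # units expected to be needed
    truck_capacity: int   # units one shipment can carry

def decide_order_quantity(s: State) -> int:
    """Game 1: what to order. Target: cover forecast demand above stock."""
    return max(s.demand_forecast - s.stock, 0)

def decide_shipment(s: State, order_qty: int) -> int:
    """Game 2: what to ship. Target: fill the truck without exceeding capacity."""
    return min(order_qty, s.truck_capacity)

# Chain the focused decisions instead of asking one model to solve everything.
state = State(stock=40, demand_forecast=100, truck_capacity=50)
order = decide_order_quantity(state)       # 60 units to order
shipment = decide_shipment(state, order)   # 50 units fit on the truck
```

In practice each placeholder would be a trained model with its own data and feedback loop, but the linkage stays the same: small, well-targeted decisions composed in sequence rather than one opaque end-to-end answer.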

The future of supply chain AI will not come from sprinkling AI fairy dust across the whole organisation and hoping for a broad uplift. It will come from focused machine learning models solving clearly defined problems, one by one, linked together in a practical way, and reducing human involvement proportionately where the machine can genuinely do the job better. That is how real value is created. Not through generic brilliance. Through specificity, data, feedback, and disciplined design.

The biggest mistake in AI is thinking that broader is better. In reality, the breakthrough often comes from making the problem narrower. That is true in markets. And it is even more true in supply chain.

Continue the conversation

Talk with O8 about AI supply planning, download the paper internally, or explore how Organic AI Planner fits into your wider planning stack.