The 30-minute AI audit
You don't need a consulting firm to find your first AI use case.
You need 30 minutes and honest answers.
I've used this approach with my own team. Not because it's fancy. Because it works. Every time we ran this exercise, we found at least one candidate worth pursuing. And every time, it was not the use case anyone expected going in.
The problem most operations leaders face is not a lack of AI ideas. It's the opposite. There are too many possibilities, too many vendor pitches, and too many articles listing "Top 10 AI applications in supply chain." None of them start from your actual process. They start from the technology.
This audit flips that.
The exercise
Set aside 30 minutes. Take a whiteboard or a blank document. Work through three steps in order. No research needed. No tools required. Just what's already in your head and your team's heads.
Step 1: List your top 5 recurring operational decisions (10 minutes)
Not strategic decisions. Not annual planning. The decisions your team makes repeatedly. Weekly. Daily. Sometimes multiple times per day.
Examples from teams I've worked with:
- How much safety stock to hold for a specific product group
- Whether to expedite a late shipment or wait
- Which supplier to allocate a purchase order to when two are available
- When to trigger a production schedule change based on demand signals
- How to prioritize customer orders when capacity is tight
Write them down. Be specific. "Improve demand planning" is not a decision. "Decide whether to override the statistical forecast for product X based on customer intel" is a decision.
Step 2: For each decision, score the data reality (10 minutes)
Three questions per decision. Answer honestly.
Is the data that feeds this decision digital? Meaning: does it live in a system somewhere, or is it in someone's head, on sticky notes, or in personal spreadsheets? If it's not digital, AI has nothing to work with.
Is the data trusted? Will the team actually believe a recommendation based on this data? If planners override system suggestions every time because they know the data is wrong, AI won't change that. It just adds another suggestion to ignore.
Is the data accessible? Can it be queried, extracted, connected to other data sources without six weeks of IT tickets? If getting the data requires a manual export every Monday morning, that's a bottleneck that AI won't solve.
For each decision, you should now have a "yes" or "no" on each of the three questions: digital, trusted, accessible.
Step 3: Score the value (10 minutes)
For each decision, ask: if this decision were made faster or more accurately, would it save measurable money or time?
Not in theory. In practice. Can you point to a number?
"If we got the expedite call right 80% of the time instead of 60%, we'd save approximately EUR 200K per year in air freight." That's measurable value.
"It would be nice to have better forecasts" is not measurable value. Push harder. What does "better" mean? For which products? By how much? What changes if the forecast improves by 5%?
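The expedite example above can be turned into a back-of-envelope number. This is a sketch with assumed inputs: the call volume and the cost per wrong call are hypothetical placeholders, not figures from the text.

```python
# Back-of-envelope value of a better expedite call.
# All inputs below are assumptions for illustration.
expedite_calls_per_year = 500       # assumed: expedite decisions made per year
cost_per_wrong_call = 2_000         # assumed: avg. EUR of unnecessary air freight
accuracy_today = 0.60               # from the example: right 60% of the time
accuracy_target = 0.80              # from the example: right 80% of the time

# Fewer wrong calls x cost per wrong call = annual savings
savings = expedite_calls_per_year * cost_per_wrong_call * (accuracy_target - accuracy_today)
print(f"EUR {savings:,.0f} per year")  # EUR 200,000 per year
```

If you can't fill in numbers like these, even roughly, the decision fails the value test.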
Reading the results
After 30 minutes, you have a simple grid. Five decisions, each scored on data readiness and value.
The decision with "yes, yes, yes" on data and a clear value statement is your first candidate.
Not the sexiest one. Not the one your CEO saw at a conference. Not the one the vendor is pushing. The one where data, process, and value intersect.
In my experience, this is usually something boring. Reorder point calculations. Exception handling prioritization. Supplier allocation based on lead time and reliability. Boring problems that the team deals with constantly and that cost real money when handled poorly.
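The grid readout above can be sketched in a few lines. The decision names, scores, and value figures below are made up for illustration; the filter is the point: only decisions with three "yes" answers and a concrete number qualify.

```python
# Sketch of the audit grid. All decisions and scores are hypothetical examples.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    name: str
    digital: bool           # does the data live in a system?
    trusted: bool           # will the team believe a recommendation based on it?
    accessible: bool        # can it be queried without weeks of IT tickets?
    value_eur_per_year: Optional[int]  # None = no measurable number yet

def first_candidate(decisions):
    """Return the highest-value decision with 'yes, yes, yes' on data."""
    ready = [d for d in decisions
             if d.digital and d.trusted and d.accessible
             and d.value_eur_per_year]
    return max(ready, key=lambda d: d.value_eur_per_year, default=None)

grid = [
    Decision("Safety stock per product group", True, False, True, 150_000),
    Decision("Expedite vs. wait on late shipments", True, True, True, 200_000),
    Decision("Supplier allocation", True, True, False, None),
]

print(first_candidate(grid).name)  # Expedite vs. wait on late shipments
```

Note that the two rejected rows fail for different reasons: one on trust, one on access plus a missing value number. That distinction tells you what to fix before AI becomes relevant.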
Why companies skip this
Most operations teams jump straight to the AI tool. They evaluate platforms, compare vendors, attend demos. That feels productive. It feels like progress.
But they're solving the wrong problem first. The right sequence is:
1. Find the decision
2. Validate the data
3. Prove the value
4. Then pick the tool
Skipping steps 1 through 3 is why pilots fail. You end up with a capable tool pointed at the wrong problem, fed with data nobody trusts, solving something nobody was asking for.
What I've learned doing this
Every team I've run this exercise with was surprised by the result. The highest-potential AI use case was never the one on their roadmap. It was usually something so routine that nobody thought of it as an "AI opportunity." But routine, high-volume, data-backed decisions are exactly where AI adds the most value.
The complicated, strategic, once-a-quarter decisions? Those need human judgment. That's where your experience matters most. Don't automate your best thinking. Automate the repetitive work that eats your team's time and attention.
Your move
Try this with your team this week. 30 minutes. A whiteboard. Five decisions. Honest answers.
If you find a candidate, write a one-page problem statement: what the decision is, who makes it, what data feeds it, and what happens if it improves by 20%. That single page is worth more than any AI strategy deck.
And if you don't find a candidate? That's useful information too. It means your data foundations need work before AI becomes relevant. Better to know now than after six months and EUR 500K spent on a pilot that never reaches production.