Most business owners picture it like this: you hire an AI consultant, shake hands, and by week two, there's some smart tool running in the background saving you time and money while you get on with everything else.
Honestly? That's not how it works. And the gap between that expectation and what actually happens is probably the single biggest reason so many AI projects crash before they ever get going.
So let's talk about what the first 90 days actually look like, based on real client work, real conversations, and real results. Not the glossy version. The real one.
Why Most AI Projects Don't Even Make It to Day 30
There's a stat from MIT that should make every business owner sit up straight: somewhere between 90 and 95 percent of AI projects fail. That's not a rounding error. That's almost every single one.
And here's the thing: it's almost never the technology that's the problem. The technology works fine. What doesn't work is everything around it. No clear data strategy. No KPIs tied to anything that actually matters. Leaders who handed the whole thing to their IT department, said, "Get it done," and then wondered why six months later, nothing had really changed.
Gregory Van Duyse of Leap AI puts it in a way that sticks with you: "We're no longer in the world of information management. We're in the world of intelligence." And you can't treat an intelligence transformation like you're installing a new CRM. It just doesn't work that way. That's why having a real AI roadmap for your business isn't a nice-to-have; it's the difference between wasting money and actually making a change.
Days 1 to 30: Forget Building Anything. This Is About Understanding Everything
What the Data Audit Actually Looks Like
The first month is not about tools. Not about tech. It's about going deep into how your business actually works, and most businesses are surprised by what comes up when someone starts asking the right questions.
For clients who come to Leap AI without a specific project already in mind, the process kicks off with what Gregory calls an AI business assessment. And that means interviewing every single department head, an hour and a half, sometimes two hours each, before anyone even sits down with the CEO. Every department. Every process. Every system.
"We find out exactly what work they do in the department and find out all the information that could help us understand the work," Gregory says. And what usually comes up? Data is sitting in five different places with no connection between them. KPI tracking that's more gut feeling than actual measurement. Work processes that look simple from the outside but have layers of nuance underneath that nobody's ever written down anywhere.
Those are your red flags. And you want to find them in week one, not week eight, when you're already mid-build, and something keeps breaking, and nobody can figure out why.
The Stakeholder Interviews
Here's something that surprises a lot of people about this phase: the most important conversations aren't always with the CEO or the leadership team. They're with the people actually doing the work every day.
Gregory is really clear on this. "The people actually doing the work on the ground floor understand the nuance of work, why in this case we change it a little bit this way, and in that case we change it a little bit in the other way." Managers don't always see that. Owners often don't either. But the person who's been doing that job for four years? They know exactly where the little quirks are and why they exist.
Miss that in the discovery phase, and it will absolutely come back to bite you. Every time. So the interviews are structured to pull that out: what's the actual work, what systems touch it, where does it slow down, and what would genuinely moving the needle actually look like for this team?
Days 31 to 60: Now You Start Building
Getting the Data In and the Model Set Up
Once discovery is done and the scope is locked, the build starts, and this is where things get interesting. The team gets access to the client's systems, starts pulling data together, and builds a working prototype that can actually be tested and poked at.
The best way to understand what this phase really looks like is through a real example. An airport operations client came to Leap AI with a workforce scheduling problem they'd been wrestling with for a long time. They had an experienced team member working on it, an Excel file, and ChatGPT. And they were still stuck.
The reason? It wasn't a language problem, which is what ChatGPT is built for. It was a mathematical optimization problem. Multiple variables, multiple constraints, millions of possible combinations. And as Gregory points out, "humans are not really great at optimising things, especially if there's more than two dimensions to the data. If there are two variables, we're okay. But as soon as we hit three or four variables at the same time, our brains have a hard time."
So the first thing the team did wasn't build an AI tool. They built a mathematical model of the problem itself. Which, as Gregory describes it, "looked more like a report that a mathematician would write." Only once that existed could AI actually be applied to run it properly.
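To make the shape of that kind of problem concrete, here's a minimal sketch of staff scheduling posed as a linear program, solved with SciPy. The shifts, costs, and demand figures here are invented for illustration; the client's real model would have far more variables and constraints.

```python
# Toy workforce-scheduling sketch: minimise staffing cost while meeting
# minimum coverage in every time period. All numbers are hypothetical.
from scipy.optimize import linprog

# covers[p][s] = 1 if shift s is on duty during period p.
covers = [
    [1, 0, 0],  # period 0: morning shift only
    [1, 1, 0],  # period 1: morning and midday overlap
    [0, 1, 1],  # period 2: midday and evening overlap
    [0, 0, 1],  # period 3: evening shift only
]
demand = [4, 7, 6, 3]      # minimum staff required in each period
cost = [300, 320, 280]     # cost per worker assigned to each shift

# linprog expects A_ub @ x <= b_ub, so negate the coverage constraints
# to express "staff on duty in each period >= demand".
A_ub = [[-c for c in row] for row in covers]
b_ub = [-d for d in demand]

res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x)    # optimal staff count per shift
print(res.fun)  # total cost of the optimal schedule
```

With three shifts and four periods this is trivial; the point is that once the constraints are written down, the solver explores the combinations exhaustively, which is exactly the part humans struggle with beyond two or three variables.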
Running the Scenarios and Testing What "Good" Looks Like
With the model built, the system could take the client's Excel data, apply all the constraints and variables, and run optimisation scenarios over three to four hours to surface the best possible outcomes.
The results were hard to argue with. The client's experienced team member had spent three weeks on the problem manually and projected staff costs for the quarter at $1.7 million. The model came back with a solution that brought that down to just over $1 million, roughly $700,000 saved, and it did it in four days instead of three weeks.
But the part that really matters? The model was built to be reused. Every time a new flight schedule comes in now, they just run it again. That's not a one-off win, that's a permanent shift in how the business operates.
"Good" in this phase doesn't just mean the numbers look better on paper. It means the solution is tied to a KPI that actually matters, the client can clearly see the connection to real results, and the people using it understand how it works well enough to trust it.
Days 61 to 90: Validation and Handing It Over
This is the phase where everything gets tested properly, and the client takes ownership of what's been built. The validation meetings aren't a big reveal; they're a walkthrough. Here's what the system does, here's the output, here's how it maps back to the KPIs you actually care about. What's working, what needs tweaking, what comes next.
Gregory calls the whole model co-creation, and this phase is where that really shows up. "We're here working with the CEO, working with his executive team, working with the department heads to co-create a new strategy for the business." It's not a handover. It's a conversation.
By the end of day 90, the client walks away with a validated working system, a full strategy report, clear KPI movement they can point to, and a prioritised list of what to build next. For clients who came in for a full business assessment, that list usually runs to 12 to 20 AI projects, each one stress-tested for return on investment and linked to real outcomes, not vanity metrics.
What You Should Demand From a Consultant Before You Sign Anything
This part matters. A lot of consultants will tell you what you want to hear to close the deal. So before you sign anything, here's what Gregory says you should actually be looking for.
- Don't accept a "hand it over and wait" relationship. The engagements that work are built on co-creation from day one. If a consultant isn't planning to involve you and your team throughout the process, that's a problem.
- Ask to see real work from real clients. Demos, references, and actual before-and-after numbers. Any consultant worth their fee should be able to show you this without blinking.
- Look for a risk-free pilot. Leap AI offers a guarantee on early-stage work. If the client isn't convinced by the time the demo or pilot lands, they get their deposit back. That should be a baseline expectation, not a bonus.
- Maybe the most important one: your consultant should be thinking ahead, not just about what AI can do right now, but where it's going. Gregory uses a hockey analogy for this: "We talk about skating where the puck is going to be. You want your consultant to have an idea of where it's going to go and to position the company to be ready for that."
Right now, AI is doubling its capacity to do real work every three to four months. Gregory puts it plainly, "From March to September, AI will probably quadruple its capacity to do work. It's the same thing as waiting five years in the old world." So a consultant who's only thinking about the next project, not the next three years, isn't the right partner for where this is all going.
The 90 days aren't just about delivering something that works. They're about laying a foundation your business can keep building on, because the businesses that start now, even imperfectly, are going to be in a completely different position than the ones that kept waiting for the right moment.
That moment isn't coming. The window is open right now.
Want to see what your own 90-day AI roadmap could look like? Book a free consultation with Leap AI and find out exactly where to start.