One of the fastest ways to waste time with AI is to point it at the wrong kind of problem.
Not every business process is a good candidate for AI. In fact, many of the workflows teams instinctively reach for are the ones where AI adds the least value. That’s because AI struggles with the things software is already very good at and shines where traditional software starts to break down.
Not All Processes Are AI-Worthy
When teams ask, “Where should we use AI?”, the instinct is often to start with the most visible or critical process. That’s usually a mistake. A useful rule of thumb is something I call the flowchart test.
Think about workflows that look like this:
Do X.
If A, go left.
If B, go right.
Repeat.
These processes are already well understood by software. They map perfectly to code, rules engines, and traditional automation. Trying to solve them with AI usually makes things worse: you introduce variability, make debugging harder, and increase operational risk without gaining much capability. That's not a limitation of AI; it's simply the wrong tool for the job.
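To make the flowchart test concrete, here is a minimal sketch of that kind of workflow. The scenario (a hypothetical order router) and all field names are illustrative; the point is that every input and branch is known in advance, so plain code handles it completely.

```python
# A hypothetical order-routing workflow. Every branch is enumerable,
# so deterministic code covers it fully -- no AI needed.
def route_order(order: dict) -> str:
    if not order.get("in_stock"):
        return "backorder"        # If A, go left.
    if order.get("priority") == "express":
        return "express_queue"    # If B, go right.
    return "standard_queue"       # Otherwise, take the default path.
```

Swapping a model into this function would only add nondeterminism to a process that already has exactly one correct answer per input.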
The Work AI Is Actually Good At
Now compare that to a different kind of process: the messy kind.
Workflows where inputs arrive incomplete or out of order. Where context matters. Where the next step depends on interpretation rather than strict rules. This is the kind of work humans handle instinctively: we ask clarifying questions, we fill in the gaps, and we adjust when things don’t line up. Traditional software struggles here because it requires everything to be defined up front. This is exactly where AI becomes useful.
Modern models are surprisingly good at navigating messy, contextual work. They can interpret partial information, infer intent, and move a process forward even when the path isn’t perfectly defined. That’s why so many high-value AI use cases look less like transactions and more like conversations.
This is where AI thrives.
Why AI Excels in Messy Work
Modern AI systems excel at:
- Understanding context instead of relying on rigid structure
- Working with incomplete information
- Asking the next best question
- Driving toward an outcome instead of following a script
This makes AI a natural fit for work that:
- Doesn’t follow a single path
- Can’t be fully specified in advance
- Feels more conversational than transactional
In other words: “human-shaped work.”
The “Objective + Tools” Model
A useful way to design AI-friendly processes is to stop over-specifying steps and start designing for outcomes.
Instead of defining every path, define two things:
- The objective: What “done” looks like
- The tools: What actions is AI allowed to take
Then let the system figure out how to get there. The agent doesn’t need a flowchart. It needs a goal, constraints, and the ability to act. This is the fundamental shift behind building AI that actually works in production.
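The "objective + tools" shape can be sketched as a small loop. This is an illustrative skeleton, not a specific framework's API: the objective is a predicate that says when the work is done, the tools are the only actions the agent may take, and `choose_action` stands in for the model call (stubbed here).

```python
# A minimal "objective + tools" loop. We define what "done" looks like
# and which actions are allowed; the agent decides the path.
from typing import Callable

def run_agent(state: dict,
              objective: Callable[[dict], bool],
              tools: dict[str, Callable[[dict], dict]],
              choose_action: Callable[[dict], str],
              max_steps: int = 10) -> dict:
    for _ in range(max_steps):
        if objective(state):            # "done" is defined, not the path
            return state
        name = choose_action(state)     # the model picks the next step
        if name in tools:               # constraint: allowed actions only
            state = tools[name](state)
    return state                        # give up after max_steps
```

Note what is absent: there is no branch per scenario. The constraints live in the tool list and the step budget, and the objective, not a script, decides when the loop ends.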
A Simple Example: Conversational Intake
Consider a simple but common workflow: collecting information from a user.
You need:
- Name
- Contact details
- A few qualifying answers
In a traditional system, this becomes a rigid form:
- Field order matters
- Missing inputs break the flow
- Users must adapt to the system
Now imagine a conversational interface instead.
The user replies with partial answers, out of order, and with ambiguity.
AI can:
- Recognize what was provided
- Identify what’s missing
- Ask the right follow-up
- Validate inputs in real time
There’s no clean flowchart here. There’s an objective (completed intake) and tools (messaging, validation, storage) to get there.
That’s an agent.
A Good Rule of Thumb
AI doesn’t replace good software engineering. It complements it.
- If a process feels mechanical → automate it with code.
- If it feels human, conversational, or adaptive → AI is probably the right tool.
The biggest wins come from embracing that difference, not fighting it.
FAQs
How do I know if a process is a good candidate for AI?
If you can fully describe it as a flowchart with deterministic steps, it’s usually better solved with code or rules. If inputs are messy and context matters, AI is often a better fit.
Why is AI a bad fit for rule-driven workflows?
Because AI introduces variability. For predictable processes, variability increases risk and reduces debuggability without adding value.
What does “objective + tools” mean in agent design?
It means defining what “done” looks like and what actions are allowed, then letting the AI adapt within boundaries instead of following a rigid script.
What are examples of “messy” processes where AI works well?
Conversational intake, support triage, document review, exception handling, internal search, and workflows that involve incomplete or ambiguous inputs. When these capabilities become the core of a product rather than a feature inside one, that’s AI-native app development.
How do you make AI safe in messy workflows?
Constrain actions, validate outputs with tools, log everything, monitor quality and drift, and keep humans in the loop for high-impact decisions.
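One way those guardrails fit together is a thin wrapper around every tool call: allow-list the action, validate the output, and log the result. The names here are illustrative, not a particular framework's API.

```python
# Guardrails for agent tool calls: constrain, validate, log.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")

ALLOWED = {"send_message", "save_record"}   # illustrative allow-list

def guarded_call(name, fn, payload, validate):
    if name not in ALLOWED:                 # constrain actions
        log.warning("blocked tool call: %s", name)
        return None
    result = fn(payload)
    if not validate(result):                # validate outputs
        log.error("invalid output from %s", name)
        return None
    log.info("tool %s succeeded", name)     # log everything
    return result
```

High-impact actions (refunds, deletions, outbound commitments) would simply stay off the allow-list and route to a human instead.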