Agentic systems vs. deterministic workflows for AI-powered search
Over the past few months, I’ve spent a lot of time thinking about how to build “AI-powered search”. Where do LLMs actually fit in a search architecture?
In particular, I’ve been weighing the different ways to leverage AI to improve search: when should you build a deterministic workflow, and when should you hand the keys to an agent?
In this article, I’m compiling my thoughts on:
- AI-powered workflows vs. agentic search systems
- The trade-offs between the two
- When to use which
Introduction
Agentic search means using a Large Language Model (LLM) to drive a program that can call search tools (vector search, web search, APIs…) with some level of autonomy.
An application is agentic when the LLM decides what to do next. In contrast, an AI-powered workflow is when your code controls the flow, and the LLM only handles specific, well-scoped steps.
Search agent example: answer questions over a large corpus by drafting a research plan, running multiple retrieval calls, and synthesizing the results.
Search workflow example: searching for a specific database record using natural language, where the LLM only helps interpret the query.
Comparison overview
| Feature | AI-powered workflow | Agentic search |
|---|---|---|
| Control flow | Deterministic code orchestrates LLM calls | LLM-centered loop decides next steps |
| LLM’s role | Specialist for selection & compilation | Central planner & orchestrator |
| Best for | Precise queries in a known domain | Open-ended, multi-step research |
| Strength | Predictability, testability, domain grounding | Flexibility, autonomy, extensibility |
| Weakness | Inflexible, tightly coupled | Unpredictable performance, weaker determinism |
Workflows: precision and predictability
An AI-powered workflow is a fixed pipeline: your code orchestrates everything, and the LLM is just a helper.
To search for a database record using natural language, we can:
- Tokenize the user query
- Use an LLM to:
- Select candidates (e.g., matching resource types)
- Compile an intermediate representation that encodes the user’s intent
- Run fully deterministic code to build full-text, semantic, or hybrid search queries
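The pipeline above can be sketched as follows. This is a minimal illustration, not a real implementation: the two `llm_*` functions are hypothetical stubs standing in for well-scoped LLM calls, and the schema (resource types, filter syntax) is invented for the example.

```python
import re

# Hypothetical stub: in practice this would be an LLM call with a
# constrained prompt listing the schema's known resource types.
def llm_select_resource_type(tokens):
    known_types = {"invoice", "customer", "order"}
    for token in tokens:
        if token in known_types:
            return token
    return "unknown"

# Hypothetical stub: in practice an LLM call that emits a structured
# intermediate representation (IR) of the user's intent.
def llm_compile_intent(tokens, resource_type):
    stopwords = {resource_type, "find", "the", "for"}
    return {
        "resource": resource_type,
        "filters": [t for t in tokens if t not in stopwords],
    }

def build_search_query(ir):
    """Fully deterministic step: turn the IR into a search query."""
    clauses = [f"type:{ir['resource']}"] + [f"text:{f}" for f in ir["filters"]]
    return " AND ".join(clauses)

def workflow(user_query):
    tokens = re.findall(r"\w+", user_query.lower())  # 1. tokenize
    resource = llm_select_resource_type(tokens)      # 2. LLM selects candidates
    ir = llm_compile_intent(tokens, resource)        # 3. LLM compiles an IR
    return build_search_query(ir)                    # 4. deterministic query build

print(workflow("find the invoice for acme"))  # type:invoice AND text:acme
```

Because the IR is a plain data structure, it can be snapshot-tested against a corpus of example queries, which is what makes this style easy to regression-test.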
Pros
- High determinism: easy to regression-test the intermediate representation (it’s almost like parsing).
- Domain precision: tightly grounded in your own schema.
- Predictable cost: fixed number of LLM calls per query.
Cons
- Inflexible: new behaviors (e.g., “analyze this, then search”) need new workflows.
- Tightly coupled: optimized for one domain and set of tools.
Agents: flexibility and autonomy
In agentic search, the LLM is the orchestrator. It receives a goal and a toolbox, then runs a loop like Plan → Execute → Evaluate.
Chroma’s agentic search cookbook does exactly this:
- For each query, the LLM decides which tools to call, with which parameters, and whether to continue or stop.
- The code only defines the tools and the loop; the control flow lives in the LLM.
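A minimal version of that loop might look like the sketch below. The `fake_llm` function is a stub: a real agent would call an LLM API that returns the next tool call (typically as structured JSON), but the shape of the loop is the point.

```python
# Toolbox: the code defines the tools...
TOOLS = {
    "vector_search": lambda q: f"vector results for {q!r}",
    "web_search": lambda q: f"web results for {q!r}",
}

def fake_llm(goal, history):
    """Stubbed planner: search once, then decide to stop and synthesize.
    A real LLM would plan, pick tools, and evaluate results here."""
    if not history:
        return {"tool": "vector_search", "args": goal}
    return {"tool": None, "answer": f"synthesis of {len(history)} result(s)"}

def agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):            # ...and the loop,
        action = fake_llm(goal, history)  # but the control flow lives in the LLM
        if action["tool"] is None:
            return action["answer"]       # the model decided to stop
        result = TOOLS[action["tool"]](action["args"])
        history.append(result)            # feed results back for evaluation
    return "max steps reached"

print(agent("compare retrieval strategies"))
```

The `max_steps` guard is the kind of harness engineering the cons below allude to: without it, the model alone decides when the loop ends.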
Pros
- General framework: reusable across domains.
- Great for complexity: multi-hop reasoning and long research chains.
- Extensible: add a capability by adding a tool.
Cons
- Operational noise: variable latency and cost.
- Weaker determinism: harder to debug or guarantee strict formats (requires more harness engineering).
- More hallucination risk: unless you enforce strong constraints.
Which one to choose?
In the end, it’s a classic trade-off between control and flexibility.
- Choose an AI-powered workflow when you need predictable, reliable, highly precise behavior in a well-defined domain and your UX requires quick results.
- Choose an agentic search system when your problem is open-ended, requires complex reasoning, and the user can afford to wait for in-depth results.
The good news is that the two approaches are complementary. You can leverage LLMs to build small, deterministic workflows and expose them as tools to a higher-level agent. The workflows give you reliability; the agent gives you autonomy.
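To make the composition concrete, here is a hedged sketch of the hybrid pattern: a deterministic workflow wrapped as a single tool in an agent’s toolbox. All names here are illustrative, not a real framework API.

```python
def record_search_workflow(query: str) -> str:
    """Deterministic workflow from earlier in the article: reliable,
    testable, grounded in one domain's schema (stubbed here)."""
    return f"records matching {query!r}"

# The agent's toolbox: the whole workflow appears as one opaque tool.
TOOLS = {"record_search": record_search_workflow}

def agent_step(tool_name: str, args: str) -> str:
    """One agent step. In a real system the LLM would choose the tool
    name and arguments; here we invoke it directly for illustration."""
    return TOOLS[tool_name](args)

print(agent_step("record_search", "overdue invoices"))
```

The agent retains open-ended planning, while each tool call drops into a pipeline with the predictability and testability described in the workflow section.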