Agentic systems vs. deterministic workflows for AI-powered search

December 24, 2025

Over the past few months, I’ve spent a lot of time thinking about building “AI-powered search”. Where do LLMs actually fit in a search architecture?

In particular, I’ve been thinking about the different ways to leverage AI to improve search: when should you build a deterministic workflow, and when should you hand the keys to an agent?

In this article, I’m compiling my thoughts on:

  • AI-powered workflows vs. agentic search systems
  • The trade-offs between both
  • When to use which

Introduction

Agentic search means using a Large Language Model (LLM) to run a program that can call search tools (vector search, web search, APIs…) with some level of autonomy.

An application is agentic when the LLM decides what to do next. In contrast, an AI-powered workflow is when your code controls the flow, and the LLM only handles specific, well-scoped steps.

Search agent example: answer questions over a large corpus by drafting a research plan, running multiple retrieval calls, and synthesizing the results.

Search workflow example: searching for a specific database record using natural language, where the LLM only helps interpret the query.

Comparison overview

Feature       | AI-powered workflow                            | Agentic search
Control flow  | Deterministic code orchestrates LLM calls      | LLM-centered loop decides next steps
LLM’s role    | Specialist for selection & compilation         | Central planner & orchestrator
Best for      | Precise queries in a known domain              | Open-ended, multi-step research
Strength      | Predictability, testability, domain grounding  | Flexibility, autonomy, extensibility
Weakness      | Inflexible, tightly coupled                    | Unpredictable performance, weaker determinism

Workflows: precision and predictability

An AI-powered workflow is a fixed pipeline: your code orchestrates everything, and the LLM is just a helper.

To search for a database record using natural language, we can:

  1. Tokenize the user query
  2. Use an LLM to:
    • Select candidates (e.g., matching resource types)
    • Compile an intermediate representation that encodes the user’s intent
  3. Run fully deterministic code to build full-text, semantic, or hybrid search queries
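The three steps above can be sketched as follows. This is a minimal illustration, not a real implementation: `interpret_query` stands in for the single LLM call (here stubbed with a keyword lookup), and all names and the intermediate-representation format are assumptions.

```python
def tokenize(query: str) -> list[str]:
    """Step 1: tokenize the user query (trivially, by whitespace)."""
    return query.lower().split()


def interpret_query(tokens: list[str]) -> dict:
    """Step 2 (the LLM step, stubbed): select candidate resource types and
    compile an intermediate representation of the user's intent.
    In a real system this would be one structured-output LLM call."""
    known_types = {"invoice", "customer", "order"}  # hypothetical schema
    return {
        "resource_types": [t for t in tokens if t in known_types],
        "keywords": [t for t in tokens if t not in known_types],
    }


def build_search_query(ir: dict) -> dict:
    """Step 3: fully deterministic code compiles the intermediate
    representation into a concrete search query (here, a hybrid one)."""
    return {
        "filter": {"type": {"$in": ir["resource_types"]}},
        "full_text": " ".join(ir["keywords"]),
    }


ir = interpret_query(tokenize("unpaid invoice from ACME"))
query = build_search_query(ir)
```

Because the LLM only produces the intermediate representation, steps 1 and 3 are plain code you can unit-test, and the LLM output itself can be regression-tested like a parser’s output.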

Pros

  • High determinism: easy to regression-test the intermediate representation (it’s almost like parsing).
  • Domain precision: tightly grounded in your own schema.
  • Predictable cost: fixed number of LLM calls per query.

Cons

  • Inflexible: new behaviors (e.g., “analyze this, then search”) need new workflows.
  • Tightly coupled: optimized for one domain and set of tools.

Agents: flexibility and autonomy

In agentic search, the LLM is the orchestrator. It receives a goal and a toolbox, then runs a loop like Plan → Execute → Evaluate.

Chroma’s agentic search cookbook does exactly this:

  • For each query, the LLM decides which tools to call, with which parameters, and whether to continue or stop.
  • The code only defines the tools and the loop; the control flow lives in the LLM.
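The loop described above can be sketched like this. It is a deliberately skeletal illustration: `call_llm` is a scripted stub standing in for a real model call, and the tool names and decision format are assumptions, not any particular framework’s API.

```python
def vector_search(q: str) -> str:
    """Hypothetical retrieval tool over a private corpus."""
    return f"docs about {q}"


def web_search(q: str) -> str:
    """Hypothetical web search tool."""
    return f"web results for {q}"


TOOLS = {"vector_search": vector_search, "web_search": web_search}


def call_llm(goal: str, history: list) -> dict:
    """Stubbed planner: a real agent would ask the LLM which tool to call
    next (and with what input), or whether to stop, given the goal and
    the tool results gathered so far."""
    if not history:
        return {"action": "vector_search", "input": goal}
    if len(history) == 1:
        return {"action": "web_search", "input": goal}
    return {"action": "stop", "answer": " / ".join(r for _, r in history)}


def run_agent(goal: str, max_steps: int = 5) -> str:
    """Plan -> Execute -> Evaluate: the code only defines the tools and
    the loop; the control flow lives in the (stubbed) LLM."""
    history = []
    for _ in range(max_steps):
        decision = call_llm(goal, history)       # Plan / Evaluate
        if decision["action"] == "stop":
            return decision["answer"]
        result = TOOLS[decision["action"]](decision["input"])  # Execute
        history.append((decision["action"], result))
    return "step budget exhausted"


answer = run_agent("LLM search architectures")
```

Note the `max_steps` cap: because the LLM decides when to stop, a step budget is the simplest guardrail against runaway loops, and part of the “harness engineering” mentioned below.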

Pros

  • General framework: reusable across domains.
  • Great for complexity: multi-hop reasoning and long research chains.
  • Extensible: add a capability by adding a tool.

Cons

  • Operational noise: variable latency and cost.
  • Weaker determinism: harder to debug or guarantee strict formats (requires more harness engineering).
  • More hallucination risk: unless you enforce strong constraints.

Which one to choose?

In the end, it’s a classic trade-off between control and flexibility.

  • Choose an AI-powered workflow when you need predictable, reliable, highly precise behavior in a well-defined domain and your UX requires quick results.
  • Choose an agentic search system when your problem is open-ended, requires complex reasoning, and the user can afford to wait for in-depth results.

The good news is that the two approaches are complementary. You can use LLMs to build small, deterministic workflows and expose them as tools to a higher-level agent. Workflows give you reliability; the agent gives you autonomy.
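Concretely, wrapping a workflow as an agent tool can look like the sketch below. All names are illustrative: the point is only that the agent sees a plain function behind a tool registry and never needs to know it is a fixed pipeline internally.

```python
def record_search_workflow(query: str) -> str:
    """A small deterministic workflow (like the one described earlier),
    exposed behind a simple function interface. Stubbed here."""
    tokens = query.lower().split()
    return f"records matching {tokens}"


# The agent's toolbox: each entry pairs a callable with a description
# the planner LLM would use to decide when to invoke it.
AGENT_TOOLS = {
    "record_search": {
        "fn": record_search_workflow,
        "description": "Find a database record from a natural-language query.",
    },
}


def dispatch(tool_name: str, tool_input: str) -> str:
    """Executes a tool call the agent's planner has decided on."""
    return AGENT_TOOLS[tool_name]["fn"](tool_input)


result = dispatch("record_search", "unpaid ACME invoice")
```

The reliability boundary is the tool interface: everything inside `record_search_workflow` stays testable and deterministic, while the agent above it keeps its open-ended planning loop.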