What Is Agentic AI? A Builder's Guide
CONTENTS
Contact Us

What Is Agentic AI? A Builder's Guide

What Makes AI "Agentic"?

Most people's first experience with AI is a chatbot: you type a message, you get a response. That response is generated in a single forward pass through a model. It's impressive — but it's fundamentally reactive.

Agentic AI is different. An agentic system receives a goal, not just a prompt. It then plans how to achieve that goal, takes actions (calling tools, searching databases, writing files, running code), observes the results of those actions, and decides what to do next. This loop continues until the goal is achieved or the system determines it can't be done.

The key properties of agentic AI:

  • Multi-step reasoning — the system decomposes a goal into subproblems and works through them sequentially or in parallel
  • Tool use — the agent can call external functions: search engines, APIs, databases, code interpreters, browsers
  • Memory — the agent maintains context across multiple steps, often storing intermediate results
  • Self-correction — if a step fails, the agent can reason about why and try a different approach

Chatbots vs Agents: A Concrete Example

Imagine asking: "Summarize the top three stories from TechCrunch today and email them to me."

A chatbot would say it can't access the internet or send emails.

An agentic system would:

  1. Use a web search tool to find TechCrunch's latest articles
  2. Read the full text of the top three
  3. Generate summaries for each
  4. Use an email API to send the summaries to you

Same language model. Completely different architecture.

The ReAct Pattern

The most widely used framework for building agents is ReAct (Reasoning + Acting). Developed by researchers at Google and Princeton, ReAct structures an agent's behavior as an alternating sequence of:

  • Thought: The model reasons about what it should do next
  • Action: The model calls a tool with specific parameters
  • Observation: The tool returns a result
  • ...repeat until done

This pattern is powerful because it keeps reasoning and action grounded in each other. The model can't just hallucinate an answer — it has to actually call the tool and handle what comes back.

In code, this looks like a loop:

` while not_done: thought = model.generate(context) action, params = parse_action(thought) observation = tools[action](**params) context.append(thought, action, observation) `

Failure Modes to Know Before You Build

Agentic systems fail in ways that simple LLM calls don't. Before shipping anything to production, understand these:

Hallucination chains: In a single LLM call, a hallucination is contained. In an agent, a hallucinated "fact" in step 2 can corrupt every subsequent step. The agent acts on false information, calling tools with wrong parameters, and the errors compound.

Infinite loops: Agents can get stuck retrying the same failed action, or oscillating between two approaches. Always implement a maximum step count.

Prompt injection: If your agent reads external content (web pages, emails, documents), that content can contain instructions that hijack the agent's behavior. This is a serious security concern for production systems.

Cost explosions: Agents make multiple LLM calls per task. A task that would cost $0.01 with a single call might cost $0.50 with a 20-step agent. Instrument your costs before scaling.

Tool misuse: Models often call tools with subtly wrong parameters, or call the wrong tool entirely. Every tool needs clear documentation in the system prompt — the model's only guide is what you tell it.

When to Use Agents vs Simple LLM Calls

Agents are powerful but expensive and slow. Don't reach for an agent when a single LLM call will do.

Use a simple LLM call when:

  • The task is well-defined and completable in one shot
  • Latency matters (agents are inherently multi-round)
  • The input is bounded and known

Use an agent when:

  • The task requires multiple steps that depend on each other
  • You need to interact with external systems (APIs, databases, browsers)
  • The problem space is too large for a single context window
  • The task requires self-correction based on intermediate results

When in doubt, start simple. A well-prompted single call will outperform a poorly designed agent every time.

Practical Advice for Your First Agent

Start with a small number of tools (3-5 maximum). More tools means more surface area for the model to call the wrong one, and more tokens explaining each tool.

Write tool documentation as if you're writing for a junior developer who has never seen your codebase. Be explicit about what the tool does, what parameters it takes, and what it returns.

Build evaluation infrastructure before you start iterating on the agent. You need to know if your changes are making things better or worse. Without evals, you're flying blind.

Log everything: every thought, every action, every observation. Debugging agents without logs is nearly impossible.

Finally: start with the most capable model available, then optimize down once you have a working baseline. Under-powered models in agents fail in subtle, hard-to-debug ways.

Meet the author