Rodrigo

Why I build AI agents instead of chatbots

Chatbots answer questions. Agents do work. Here's why I stopped building the first kind and started building the second.

Most people hear “AI” and think chatbot. A text box, a response, maybe a follow-up. That’s fine for customer support. But it’s not what I’m interested in.

I build agents. The difference matters.

Chatbots wait. Agents act.

A chatbot sits there until you ask it something. An agent has a goal, a plan, and the tools to execute it. It monitors markets while I sleep. It runs code, checks results, and decides what to do next without me typing a single prompt.

That shift — from reactive to autonomous — changes everything about how you design the system. You stop thinking about prompts and start thinking about loops, error recovery, and when to escalate to a human.
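The loop-centric design can be sketched in a few lines. This is a toy illustration, not Cyborg7's actual code; `llm_decide`, `run_tool`, and `escalate` are hypothetical stand-ins you'd wire up to a real model, tool layer, and notification channel.

```python
MAX_STEPS = 20     # hard step budget so a runaway agent eventually stops
MAX_RETRIES = 3    # bounded retries before giving up on a tool call

def run_agent(goal, llm_decide, run_tool, escalate):
    """Plan-act-observe loop with error recovery and human escalation."""
    history = [("goal", goal)]
    for _ in range(MAX_STEPS):
        action = llm_decide(history)           # plan: pick the next tool call
        if action["type"] == "done":
            return action["result"]
        for _attempt in range(MAX_RETRIES):    # recover: retry failed tools
            try:
                observation = run_tool(action)
                break
            except Exception as exc:
                observation = f"tool failed: {exc}"
        else:
            return escalate(goal, history)     # escalate: retries exhausted
        history.append((action["type"], observation))
    return escalate(goal, history)             # escalate: step budget exhausted
```

The point of the skeleton is where the complexity lives: not in the model call, but in the budgets, the retry policy, and the two escalation exits.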

The hard part isn’t the AI

Getting an LLM to generate text is the easy part. The hard part is everything around it: reliable tool execution, state management across long-running tasks, knowing when the agent is stuck vs. thinking, and building interfaces where humans and agents actually collaborate instead of just taking turns.
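To make the "stuck vs. thinking" point concrete, here's one crude heuristic I can sketch (my own illustration, not a standard API): flag the agent when it keeps issuing the same tool call with the same arguments.

```python
from collections import deque

class StuckDetector:
    """Flags an agent that repeats the same tool call within a sliding window.
    A crude sketch; a real system would also watch wall-clock time and token use."""

    def __init__(self, window=5, threshold=3):
        self.recent = deque(maxlen=window)  # last `window` tool calls
        self.threshold = threshold

    def observe(self, tool_name, args):
        # Normalize the call so identical calls compare equal.
        call = (tool_name, repr(sorted(args.items())))
        self.recent.append(call)
        # Stuck if the same call appears `threshold` times in the window.
        return self.recent.count(call) >= self.threshold
```

Retrying a flaky search once is thinking; issuing it a third time with identical arguments is almost always a loop.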

That’s what Cyborg7 is about. Not a better chatbot — a workspace where agents are first-class participants.

What I’ve learned so far

Three things keep coming up:

Agents fail silently. They don’t throw errors — they just produce slightly wrong results. If you don’t instrument everything, you won’t know until it’s too late.

Context windows are a leash. The longer an agent runs, the more context it needs, and the more it forgets. Managing what stays in context and what gets summarized is an engineering problem, not an AI problem.

Humans are still the bottleneck. Not because we’re slow, but because we don’t know how to delegate to agents yet. We either micromanage them or give them too much freedom. Finding the right level of autonomy is the actual challenge.
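The context-budget point above can be sketched with a simple policy: keep the newest turns verbatim and fold everything older into a summary once the history exceeds a size budget. This is an assumption-laden toy, not Cyborg7's policy; `summarize` stands in for an LLM summarization call and here just truncates.

```python
def compact_history(messages, keep_recent=4, budget=1000,
                    summarize=lambda msgs: "summary: " + " | ".join(m[:40] for m in msgs)):
    """Keep the most recent messages verbatim; collapse older ones into a
    single summary entry once total size exceeds the budget. A sketch only."""
    total = sum(len(m) for m in messages)
    if total <= budget or len(messages) <= keep_recent:
        return messages                      # under budget: nothing to do
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    return [summarize(old)] + recent         # one summary + verbatim tail
```

The interesting decisions are all policy, not AI: what counts as "recent," what survives summarization, and when to re-summarize the summary.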

What’s next

I’m working on making agents that can collaborate with each other — not just with humans. If one agent finds a bug, it should be able to hand it off to another agent that knows how to fix it. That’s the Cyborg7 vision: a team of agents with different specialties, coordinated through channels and task boards.
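Handoff through a shared task board could look something like this. The specialty names and task shape are illustrative, not Cyborg7's actual schema.

```python
from collections import defaultdict
from queue import SimpleQueue

class TaskBoard:
    """A toy task board: agents post tasks tagged by specialty, and agents
    with that specialty claim them. Illustrative sketch only."""

    def __init__(self):
        self.queues = defaultdict(SimpleQueue)  # one queue per specialty

    def post(self, specialty, task):
        self.queues[specialty].put(task)

    def claim(self, specialty):
        q = self.queues[specialty]
        return None if q.empty() else q.get()

board = TaskBoard()
# A "tester" agent finds a bug and hands it off to a "fixer" agent.
board.post("fixer", {"kind": "bug", "detail": "off-by-one in pager"})
task = board.claim("fixer")
```

The board decouples the agents: the tester doesn't need to know which fixer picks the task up, only that someone with the right specialty eventually will.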

If that sounds interesting, the code is on my GitHub. Or just say hi.