Lovex

Agents need APIs, not UIs

Every product team we talk to is asking the same question: how do we make our product work with AI agents? The answer is simpler than most people expect, and harder than it sounds. You need an API that agents can actually use. Not an API that exists — one that's designed for non-human consumers from the start.

This sounds obvious. It isn't. Most APIs are built as backends for frontends. They return HTML fragments, redirect on auth failures, assume a cookie jar, paginate for human scroll speed, and encode business logic in UI state that never touches the wire. An agent hitting these endpoints is like a person trying to use a TV remote to drive a car — the interface exists, but it wasn't built for this.

What agents actually need

We've been building agent-first APIs for the past year, both for our own products and for clients. The pattern is surprisingly consistent. Agents need four things from an API that most human-oriented APIs don't provide well.

Explicit state machines

Humans infer state. They look at a button color and know something is disabled. They read a toast notification and understand an action succeeded. Agents don't infer — they parse. Every entity needs a status field with documented transitions. Moving a task from "todo" to "done" should be an explicit API call with validation, not a drag event that fires a mutation as a side effect.

Invalid transitions should fail hard with clear error messages. "Cannot move task to done: task is currently blocked" is infinitely more useful to an agent than a 400 with no body. The agent needs to understand why something failed so it can decide what to do next.
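A minimal sketch of what an explicit state machine looks like in code. The statuses and the transition table here are illustrative assumptions, not any real product's schema:

```python
# Allowed status transitions, documented in one place.
# These statuses are assumptions for illustration.
ALLOWED_TRANSITIONS = {
    "todo": {"in_progress"},
    "in_progress": {"done", "blocked"},
    "blocked": {"in_progress"},
    "done": set(),
}

class InvalidTransition(Exception):
    """Raised when a status change is not permitted from the current state."""

def move_task(task: dict, new_status: str) -> dict:
    """Validate and apply a status transition; fail hard with a clear reason."""
    current = task["status"]
    if new_status not in ALLOWED_TRANSITIONS.get(current, set()):
        raise InvalidTransition(
            f"Cannot move task to {new_status}: task is currently {current}"
        )
    return {**task, "status": new_status}
```

The point is that the transition table is data an agent can be told about up front, and the error message names the blocking state rather than returning a bare 400.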

Atomic operations

Human users do one thing at a time. They click, wait, click again. Agents work in loops — read state, decide, act, repeat. Multiple agents will hit the same resources concurrently. If your API requires a read-then-write pattern with no concurrency control, agents will create race conditions that humans never would.

Claim-then-act is the pattern that works. An agent claims a task (atomic, fails if already claimed), does the work, then reports back. The claim is the lock. Without it, two agents will pick up the same task, do duplicate work, and conflict on the write.
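An in-memory sketch of an atomic claim, assuming a simple task-to-agent mapping; a real API would typically back this with a conditional UPDATE or compare-and-set in the database:

```python
import threading

class TaskStore:
    """Illustrative in-memory claim store. The check and the write happen
    under one lock, so two agents racing for the same task cannot both win."""

    def __init__(self):
        self._lock = threading.Lock()
        self._claims: dict[str, str] = {}  # task_id -> agent_id

    def claim(self, task_id: str, agent_id: str) -> bool:
        """Atomically claim a task; return False if it is already claimed."""
        with self._lock:
            if task_id in self._claims:
                return False
            self._claims[task_id] = agent_id
            return True
```

The second agent gets a clean `False` it can act on, instead of silently overwriting the first agent's claim.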

Structured errors

This is where most APIs fail for agents. A human sees "Something went wrong" and retries or asks for help. An agent sees "Something went wrong" and has no decision tree to follow. It will either retry forever or give up — neither is correct.

Agent-friendly errors need three things: a machine-readable error code, a human-readable message (for logs), and a suggested action. "Rate limited — retry after 30 seconds" is actionable. "Task not found — verify task ID" is actionable. "Internal server error" is not.
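One way to shape such an error body, with field names that are assumptions rather than a standard:

```python
def error_response(code: str, message: str, suggested_action: str) -> dict:
    """Build an agent-friendly error body: a machine-readable code,
    a human-readable message for logs, and a suggested next action."""
    return {
        "error": {
            "code": code,
            "message": message,
            "suggested_action": suggested_action,
        }
    }

rate_limited = error_response(
    code="rate_limited",
    message="Too many requests from this client",
    suggested_action="Retry after 30 seconds",
)
```

An agent branches on `code`, a human debugging the agent reads `message`, and `suggested_action` gives the agent's decision tree somewhere to go.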

Discoverability

Agents need to understand what they can do. The simplest version is a board endpoint — give me everything I need to know about the current state of the project in one call. What tasks exist, what's claimed, what's blocked, what columns are available. The agent reads this, builds a plan, and executes. Without a single "state of the world" endpoint, agents make dozens of calls to reconstruct context that a human gets from glancing at a screen.
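A sketch of what a single-call board snapshot might return; the shape is an illustrative assumption, not a documented schema:

```python
def board_snapshot(tasks: list[dict], columns: list[str]) -> dict:
    """One-call 'state of the world' an agent can plan from:
    every task, its status, who holds it, and whether it is blocked."""
    return {
        "columns": columns,
        "tasks": [
            {
                "id": t["id"],
                "status": t["status"],
                "claimed_by": t.get("claimed_by"),
                "blocked": t["status"] == "blocked",
            }
            for t in tasks
        ],
    }
```

Everything the agent needs to decide what to do next is in one response, instead of being reconstructed from a dozen narrower calls.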

The testing problem

Here's something nobody talks about: testing agent integrations is fundamentally different from testing human workflows. A human tester clicks through a flow once and checks the result. An agent tester needs to verify behavior under concurrency, partial failure, rate limiting, and state conflicts — all at once.

We run agent integration tests where multiple simulated agents race to claim tasks, update statuses, and post messages simultaneously. The failures we catch are never the obvious ones. They're the subtle timing issues — two agents reading the board at the same instant, both seeing an unclaimed task, both trying to claim it. Only one should succeed. The other needs to handle rejection gracefully.
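The race described above can be reproduced in a few lines. This is a hedged sketch of such a test, using threads as stand-in agents and a barrier so they all fire at the same instant:

```python
import threading

claims: dict[str, str] = {}
claims_lock = threading.Lock()

def claim(task_id: str, agent_id: str) -> bool:
    """Atomic claim: first caller wins, everyone else gets False."""
    with claims_lock:
        if task_id in claims:
            return False
        claims[task_id] = agent_id
        return True

NUM_AGENTS = 8
results: list[bool] = []
barrier = threading.Barrier(NUM_AGENTS)

def simulated_agent(agent_id: str) -> None:
    barrier.wait()  # all agents see the unclaimed task, then race
    results.append(claim("task-1", agent_id))

threads = [
    threading.Thread(target=simulated_agent, args=(f"agent-{i}",))
    for i in range(NUM_AGENTS)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Exactly one claim should succeed; the other seven agents must get a clean rejection they can handle.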

If you're not testing for this, you'll discover it in production when a customer's CI agent and their monitoring agent both try to close the same issue.

The UI isn't going away

None of this means UIs don't matter. Humans still need to see what's happening, configure the system, and intervene when agents make mistakes. The shift is in priority: build the API first, then build the UI on top of it. If the UI can do something the API can't, that's a bug — it means agents are locked out of a capability.

We've found this discipline improves the UI too. When you build the API first, the UI becomes thinner and more focused. It's a view layer over well-structured data, not a monolith that mixes presentation with business logic. The API forces you to think clearly about state, transitions, and permissions — and the UI inherits that clarity.

Start with the loop

If you're adding agent support to an existing product, don't start with the API spec. Start with the agent loop. Write the pseudocode for what an agent would do with your product:

  1. Read current state
  2. Decide what to work on
  3. Claim it
  4. Do the work
  5. Report the result
  6. Go to step 1

Then ask: can your API support each step? Can the agent read state in one call? Is there a claim mechanism? Can it report structured results? If any step requires screen-scraping, cookie management, or multi-step flows with no transactional guarantees, that's where to focus.
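The loop above can be sketched as code. The API methods here (`get_board`, `claim`, `complete`) are hypothetical names, and `InMemoryAPI` is a stand-in for a real client:

```python
class InMemoryAPI:
    """Tiny in-memory stand-in for a product API (illustrative only)."""

    def __init__(self, tasks: list[dict]):
        self.tasks = {t["id"]: t for t in tasks}

    def get_board(self) -> dict:
        return {"tasks": list(self.tasks.values())}

    def claim(self, task_id: str, agent_id: str) -> bool:
        task = self.tasks[task_id]
        if task["claimed_by"] is not None:
            return False
        task["claimed_by"] = agent_id
        return True

    def complete(self, task_id: str, result: str) -> None:
        self.tasks[task_id]["status"] = "done"
        self.tasks[task_id]["result"] = result

def agent_loop(api, agent_id: str, do_work, max_iterations: int = 10) -> None:
    """Read state, decide, claim, act, report -- then go around again."""
    for _ in range(max_iterations):
        board = api.get_board()                      # 1. read state in one call
        open_tasks = [t for t in board["tasks"]
                      if t["status"] == "todo" and t["claimed_by"] is None]
        if not open_tasks:
            break                                    # nothing left to work on
        task = open_tasks[0]                         # 2. decide what to work on
        if not api.claim(task["id"], agent_id):      # 3. claim it atomically
            continue                                 # lost the race; re-read
        result = do_work(task)                       # 4. do the work
        api.complete(task["id"], result)             # 5. report, then repeat
```

Walking your API through this loop is the fastest way to find the step that forces an agent into screen-scraping or a read-then-write race.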

The products that win over the next five years will be the ones that treat agents as first-class users today. Not because every customer has agents now — most don't. But because the ones who do are the fastest-growing, highest-value teams, and they'll choose the product that works with their workflow over the one that fights it.

Build the API. The agents are already on their way.