
AI Agents

Bringing agentic AI into an enterprise low-code platform.

role Lead Product Designer
timeline Late 2024 – Early 2025
platform Web · Enterprise SaaS
product Offline
Agentic AI · LCAP · Enterprise
Agent configuration interface — query creator adapted for agentic AI

Appsmith's users had spent years building internal tools on the platform — dashboards, admin panels, workflow apps. The next question was obvious: what if those tools could think?

AI Agents was Appsmith's first serious bet on AI — a feature that would let users build autonomous agents directly inside their existing applications. Not a separate product, not a bolt-on. A native capability that sat alongside everything they'd already built, and could access all of it.

I was brought in mid-stream, initially to solve a specific problem. I ended up touching almost every part of the product before it shipped.

// context

By late 2024, the agentic AI space was moving fast, and Appsmith needed to move with it. The vision was clear enough: users could create an AI agent, configure it, connect it to their data sources, and deploy it inside an existing Appsmith application. The agent would handle queries, run workflows, and take actions — all within the controlled environment the user had already built.

A designer was already working on the end-user experience — the interface a business user would see when interacting with a finished agent. A development team was building the underlying technology. What wasn't yet solved was the creator experience: how would a developer or technical user actually build and configure one of these agents?

That's where I came in.

// my role

I was brought in initially to focus on the configuration side — the developer-facing experience of setting up an agent: defining its behaviour, connecting it to data, selecting and tuning models. Once that was in a stable place, I was asked to pivot to the end-user experience and lead a significant visual redesign of the chat interface. In the final phase, I contributed to post-launch efforts around marketing and positioning.

I wore three different hats on this project — sometimes in the same week. Configuration UX, visual design, and eventually product marketing. That breadth was unusual and worth naming.

// the configuration challenge

The first problem was the one I was hired to solve. How should a developer configure an AI agent inside Appsmith?

The team had decided to build within the existing query creator — Appsmith's established interface for connecting to data sources and writing queries. The instinct was right: reuse familiar patterns rather than introduce an entirely new mental model. But adapting that interface for agentic AI required understanding what agentic AI actually needed from a configuration interface — which meant I had to learn it first.

Query creator — existing interface adapted for agent configuration

I needed to understand:

// 01
RAG (Retrieval-Augmented Generation)
How agents access and reason over external data sources, and what that means for how users connect and prioritise those sources.
// 02
Tool calling
How agents take actions (querying a database, calling an API, writing a record) and how users define, scope, and trust those capabilities.
// 03
Model selection and configuration
How to present meaningful choices around model behaviour without overwhelming users who aren't AI researchers.

Tool calling was the hardest concept to surface. Users needed to understand what the agent could do without needing to understand how it worked. The sketch below shows one way these three concerns might fit together in a single configuration.
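
For readers unfamiliar with these concepts, here is a minimal sketch of how the three concerns could combine into one agent configuration. It is illustrative only: the names (`DataSourceRef`, `ToolDefinition`, `requiresConfirmation`, and so on) are hypothetical and are not Appsmith's actual schema, which is covered by NDA.

```typescript
// Hypothetical agent configuration shape — illustrative only,
// not Appsmith's actual schema.

/** A data source the agent can retrieve from and reason over (the RAG side). */
interface DataSourceRef {
  id: string;
  kind: "database" | "api" | "document_store";
  priority: number; // lower = consulted first when sources conflict
}

/** An action the agent may take (tool calling), with an explicit scope. */
interface ToolDefinition {
  name: string;                  // e.g. "create_ticket"
  description: string;           // shown to users so they can judge what it does
  permissions: "read" | "write"; // what the tool is allowed to touch
  requiresConfirmation: boolean; // pause for a human before acting?
}

/** Model behaviour, reduced to a few meaningful choices. */
interface ModelConfig {
  model: string;       // e.g. "gpt-4o"
  temperature: number; // 0 = deterministic, higher = more exploratory
  maxTokens: number;
}

interface AgentConfig {
  systemPrompt: string; // the agent's defined behaviour
  dataSources: DataSourceRef[];
  tools: ToolDefinition[];
  model: ModelConfig;
}

// Example: a support agent that can read order data and file tickets.
const supportAgent: AgentConfig = {
  systemPrompt: "You help staff resolve customer order issues.",
  dataSources: [{ id: "orders_db", kind: "database", priority: 1 }],
  tools: [
    {
      name: "create_ticket",
      description: "Files a support ticket in the helpdesk",
      permissions: "write",
      requiresConfirmation: true,
    },
  ],
  model: { model: "gpt-4o", temperature: 0.2, maxTokens: 1024 },
};
```

Note how each tool carries a human-readable description and a confirmation flag: the configuration surface is exactly where users decide what the agent is allowed to do and how much to trust it, which is the legibility problem the list above describes.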

This required a lot of research, a lot of working sessions with the PM and engineering team, and a lot of designs that didn't survive contact with technical reality. The process was iterative in the most honest sense — not a clean sprint structure, but a rolling conversation between what the product could do and what a user could reasonably be expected to configure.

Tool calling — surfacing agent capabilities
Model selection — configuration options

// the chat interface redesign

Once the configuration work had reached a stable point, the focus shifted. Management wanted serious effort on the end-user experience — the chat interface a business user would interact with to use a finished agent. The existing design wasn't landing visually, and I was asked to lead a redesign.

This was a different kind of problem. Configuration is about clarity and control — users need to understand exactly what they're setting up. A chat interface is about trust and fluency — users need to feel like the agent understands them, and that the experience is responsive and alive.

Over several focused sprints, I redesigned the visual language of the chat experience: response presentation, loading states, error handling, the way sources and reasoning were surfaced. The goal was to make the agent feel capable without overpromising — a balance that AI interfaces get wrong in both directions.

Before — initial chat design
After — redesigned chat interface
The redesign wasn't about making it look better — it was about making the agent feel trustworthy. Every state, every transition, had to earn the user's confidence rather than assume it.
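
One way to make "every state" concrete is to enumerate the states explicitly so that none is left to chance. A minimal sketch in TypeScript, with hypothetical names rather than the shipped implementation:

```typescript
// Hypothetical model of chat response states — illustrative, not the
// shipped implementation. Every state the UI can be in is named, so
// every state gets a deliberate visual treatment.
type AgentResponse =
  | { status: "thinking" }                       // loading state
  | { status: "streaming"; partialText: string } // tokens arriving
  | { status: "complete"; text: string; sources: string[] }
  | { status: "error"; message: string; retryable: boolean };

// Rendering becomes an exhaustive switch over the states above.
function renderLabel(r: AgentResponse): string {
  switch (r.status) {
    case "thinking":
      return "Thinking…";
    case "streaming":
      return r.partialText;
    case "complete":
      return r.text;
    case "error":
      return r.retryable ? "Something went wrong. Retry?" : r.message;
  }
}
```

Modelling responses as a discriminated union means a state can't silently ship without a designed treatment: under strict settings, the compiler can flag any case the switch forgets.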

// user testing and launch

Before launch, we ran user testing sessions to pressure-test the experience with real users. The feedback shaped several improvements — particularly around the configuration flow and how the agent communicated uncertainty or failure.

[X]/[Y] participants successfully configured a working agent without assistance
[X]% wanted to see what data the agent had access to before trusting its answers

AI Agents launched in early 2025.

AI Agents — shipped product

// after launch — and what didn't work

The product shipped. It did what it was designed to do. And then it didn't take off the way we expected.

After launch, I noticed a disconnect between the product and how it was being marketed. An external team had built the marketing site, and while the product had evolved significantly during development, the marketing hadn't kept pace. The value proposition wasn't landing. The language wasn't matching what the product actually did, or who it was actually for.

I pitched a set of changes — content, framing, design — and we ran with it. We created variants, tested different approaches, tried to find the message that would connect.

It didn't move the needle. The honest conclusion: the product worked. The technology was sound. But we hadn't found market fit — and no amount of marketing refinement was going to substitute for that.

That's a real and useful thing to learn. It's also what led directly to the decision to rethink the company's direction entirely — which became the starting point for Kite.

The product was solving a real technical problem. The market just wasn't ready to buy it in the way we were selling it.

// key decisions

Adapting an existing interface rather than building a new one

The decision to configure agents inside the existing query creator rather than creating a dedicated interface was a bet on familiarity. For Appsmith's developer users, the query creator was already understood — they'd used it to connect to databases and APIs. Extending it to handle agent configuration reduced the cognitive load of learning a new tool, even as the underlying concepts were new. The tension was that the query creator's mental model (inputs → outputs) didn't map cleanly onto agentic behaviour (ongoing, stateful, action-taking). Designing within that constraint pushed the configuration UI to be more explicit about things the query creator had previously left implicit.

Configuration screen — agentic concepts surfaced within the existing query creator

Surfacing agent reasoning in the chat interface

One of the more contested decisions in the chat redesign was how much of the agent's reasoning to show end users. Show too little and the agent feels like a black box — outputs appear without any sense of how the agent arrived at them. Show too much and the interface becomes technical and hard to scan. We landed on a progressive disclosure approach: a clean response surface by default, with the option to expand and see sources and reasoning steps. This let the interface work for both users who wanted to trust the output and users who needed to verify it.

Expanded reasoning view — sources and steps on demand
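
In data terms, that progressive disclosure might look something like the sketch below (the names are hypothetical, not the shipped schema): the reasoning detail travels with every response but stays collapsed until the user asks for it.

```typescript
// Hypothetical shape for progressive disclosure of agent reasoning.
// Names are illustrative, not the shipped schema.
interface ReasoningStep {
  summary: string; // e.g. "Queried the orders database"
  source?: string; // where the information came from, if applicable
}

interface AgentMessage {
  answer: string;             // the clean response surface, shown by default
  reasoning: ReasoningStep[]; // available, but hidden until expanded
}

// Reasoning is collapsed by default; expanding is the user's explicit
// choice, so verification never crowds out readability.
function visibleContent(msg: AgentMessage, expanded: boolean) {
  return expanded
    ? { answer: msg.answer, steps: msg.reasoning }
    : { answer: msg.answer };
}
```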

// reflection

This was the project where I had to move fastest and learn most from scratch. Agentic AI in 2024 was genuinely new design territory — the patterns didn't exist yet, and working out how to make tool calling or RAG legible to a non-technical user required building understanding from the ground up before I could put pen to paper.

The post-launch chapter taught me something about the limits of design. A product can be well-designed and still fail to find its market. When that happens, the instinct is to redesign — to find the version of the thing that connects. Sometimes that's the right call. Sometimes the problem is upstream of design entirely, and no amount of iteration on the interface or the marketing site will substitute for a clearer answer to who this is actually for.

That question — who is this actually for — is what we set out to answer next. And that became Kite.

Note: final design files and prototypes are covered under NDA.