In this tutorial you will build an AI coding assistant that can read files, search code, propose plans, edit files, and run commands — all with the user in control. The display stack turns the AI from a blind executor into a responsible collaborator.
This is the most compelling use of the display stack: the AI shows you a plan before making changes, presents diffs for review, and asks for permission before running destructive commands. The user sees real UI at every decision point, not just text.
Prerequisites: You should have completed Getting Started and read The Display Stack.
You will build a coding agent where the user can say “Refactor the auth module to use JWT” and the app will:
- Search the codebase and show the results as a card (`pushAndForget`)
- Propose a step-by-step plan and wait for approval (`pushAndWait`)
- Show a diff for each file edit and wait for your decision (`pushAndWait`)
- Ask permission before running commands, then show the output (`pushAndWait` + `pushAndForget`)

The display stack turns every critical decision into a UI checkpoint. The AI never makes changes you haven't approved.
A coding agent is different from the travel planner. The travel planner's tools are entirely browser-based — they show UI and collect input, nothing more. A coding agent needs to read files, write files, and run shell commands. Those operations require a server.
Here is how the pieces fit together:
- `createChatHandler` is a thin LLM proxy. It forwards your conversation to OpenAI or Anthropic and streams back the response. It sends tool schemas (name, description, parameters) to the LLM so the AI knows what tools are available — but it does not execute tools.
- Tool `do` functions run in the browser. When the AI requests a tool call, `useGlove` executes the `do` function client-side. This is why the travel planner works — its tools only use the display stack and pure computation.
- When a tool needs server access, its `do` function calls a Next.js API route via `fetch`. The API route runs on the server with full Node.js access.

The flow for a tool like `edit_file`:

1. The AI requests `edit_file` with a path, an old string, and a new string.
2. The `do` function runs in the browser — it calls the server API route to read the file.
3. The `do` function pushes a diff preview onto the display stack (`pushAndWait`) — this is browser-side.
4. If the user approves, the `do` function calls another API route to write the file.

The display stack stays client-side (that is where React renders). The heavy lifting happens server-side through API routes. The `do` function is the bridge between them.
Start from a Next.js project with Glove installed:
```bash
pnpm add glove-core glove-react glove-next zod
```

```ts
import { createChatHandler } from "glove-next";

// This is the LLM proxy — it does NOT execute tools.
// It sends tool schemas to the AI and streams back responses.
export const POST = createChatHandler({
  provider: "anthropic",
  model: "claude-sonnet-4-20250514",
});
```

Since tool `do` functions run in the browser, you need server-side API routes for anything that requires Node.js — file reads, file writes, and shell commands. Create three routes:
```ts
// app/api/fs/read/route.ts
import { readFile } from "fs/promises";
import { resolve } from "path";
import { NextResponse } from "next/server";

// The project root that the agent can access
const PROJECT_ROOT = process.cwd();

function safePath(relativePath: string): string {
  const resolved = resolve(PROJECT_ROOT, relativePath);
  // Prevent path traversal outside the project
  if (!resolved.startsWith(PROJECT_ROOT)) {
    throw new Error("Path outside project root");
  }
  return resolved;
}

export async function POST(req: Request) {
  const { path } = await req.json();
  try {
    const content = await readFile(safePath(path), "utf-8");
    return NextResponse.json({ content });
  } catch (err: any) {
    return NextResponse.json(
      { error: err.message },
      { status: 400 },
    );
  }
}
```

```ts
// app/api/fs/write/route.ts
import { readFile, writeFile } from "fs/promises";
import { resolve } from "path";
import { NextResponse } from "next/server";

const PROJECT_ROOT = process.cwd();

function safePath(relativePath: string): string {
  const resolved = resolve(PROJECT_ROOT, relativePath);
  if (!resolved.startsWith(PROJECT_ROOT)) {
    throw new Error("Path outside project root");
  }
  return resolved;
}

export async function POST(req: Request) {
  const { path, oldString, newString } = await req.json();
  try {
    const fullPath = safePath(path);
    const content = await readFile(fullPath, "utf-8");
    if (!content.includes(oldString)) {
      return NextResponse.json(
        { error: "old_string not found in file" },
        { status: 400 },
      );
    }
    const updated = content.replace(oldString, newString);
    await writeFile(fullPath, updated);
    return NextResponse.json({ success: true });
  } catch (err: any) {
    return NextResponse.json(
      { error: err.message },
      { status: 400 },
    );
  }
}
```

```ts
// app/api/fs/exec/route.ts
import { exec } from "child_process";
import { promisify } from "util";
import { NextResponse } from "next/server";

const execAsync = promisify(exec);

// Allowlist of safe command prefixes
const ALLOWED_PREFIXES = [
  "npm test", "pnpm test", "npx ", "pnpm ",
  "git status", "git diff", "git log",
  "ls", "cat", "rg ", "grep ",
];

export async function POST(req: Request) {
  const { command } = await req.json();
  // Only allow known-safe commands
  const isAllowed = ALLOWED_PREFIXES.some((p) =>
    command.startsWith(p),
  );
  if (!isAllowed) {
    return NextResponse.json(
      { error: `Command not allowed: ${command}` },
      { status: 403 },
    );
  }
  try {
    const { stdout, stderr } = await execAsync(command, {
      timeout: 30000,
      cwd: process.cwd(),
    });
    return NextResponse.json({
      output: (stdout + stderr).trim() || "(no output)",
    });
  } catch (err: any) {
    return NextResponse.json({
      output: err.stderr || err.message,
      error: true,
    });
  }
}
```

Notice the security measures: path traversal prevention on the file routes, and a command allowlist on the exec route. In a real application, you would add authentication and more restrictive sandboxing.
Now build the client-side tools. Each tool's do function calls the server API routes via fetch, then uses the display stack to show results.
The read_file tool has no render function — it is invisible to the user. The AI reads files silently to build context.
```ts
import { z } from "zod";
import type { ToolConfig } from "glove-react";

export const readFileTool: ToolConfig = {
  name: "read_file",
  description: "Read the contents of a file. Returns the full text.",
  inputSchema: z.object({
    path: z.string().describe("File path relative to the project root"),
  }),
  async do(input) {
    // Call the server API route — file system access happens server-side
    const res = await fetch("/api/fs/read", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ path: input.path }),
    });
    const data = await res.json();
    if (data.error) return `Error: ${data.error}`;
    return data.content;
  },
  // No render — this tool doesn't show UI
};
```

The search_code tool calls the server to run `rg`, then shows results as a persistent card using pushAndForget. The user sees what the AI found, but the tool does not wait — the AI keeps working.
```tsx
import { z } from "zod";
import type { ToolConfig, SlotRenderProps } from "glove-react";

export const searchCode: ToolConfig = {
  name: "search_code",
  description:
    "Search the codebase for a pattern. Returns matching files and lines. " +
    "Shows results as a card in the UI.",
  inputSchema: z.object({
    pattern: z.string().describe("Regex pattern to search for"),
    glob: z.string().optional().describe("File glob filter, e.g. '*.ts'"),
  }),
  async do(input, display) {
    // Build the rg command and run it on the server
    const globFlag = input.glob ? ` --glob '${input.glob}'` : "";
    const command = `rg --json '${input.pattern}'${globFlag}`;
    const res = await fetch("/api/fs/exec", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ command }),
    });
    const data = await res.json();
    // Parse ripgrep JSON output into a readable format
    const matches = (data.output || "")
      .split("\n")
      .filter(Boolean)
      .map((line: string) => {
        try { return JSON.parse(line); } catch { return null; }
      })
      .filter((m: any) => m?.type === "match")
      .map((m: any) => ({
        file: m.data.path.text,
        line: m.data.line_number,
        text: m.data.lines.text.trim(),
      }))
      .slice(0, 20);
    // Show results as a persistent card — pushAndForget
    if (matches.length > 0) {
      await display.pushAndForget({
        input: { pattern: input.pattern, matches },
      });
    }
    return JSON.stringify(matches);
  },
  render({ data }: SlotRenderProps) {
    const { pattern, matches } = data as {
      pattern: string;
      matches: { file: string; line: number; text: string }[];
    };
    return (
      <div style={{ padding: 16, borderRadius: 12, background: "#141414", border: "1px solid #262626" }}>
        <p style={{ fontSize: 12, color: "#888", marginBottom: 8 }}>
          Search results for <code style={{ color: "#9ED4B8" }}>{pattern}</code>
          {" "}— {matches.length} match{matches.length !== 1 ? "es" : ""}
        </p>
        <div style={{ display: "flex", flexDirection: "column", gap: 4 }}>
          {matches.map((m, i) => (
            <div
              key={i}
              style={{
                fontFamily: "monospace",
                fontSize: 12,
                padding: "4px 8px",
                borderRadius: 4,
                background: "#0a0a0a",
              }}
            >
              <span style={{ color: "#888" }}>{m.file}:{m.line}</span>
              {" "}
              <span style={{ color: "#ededed" }}>{m.text}</span>
            </div>
          ))}
        </div>
      </div>
    );
  },
};
```

Before making any changes, the AI should explain what it plans to do and wait for approval. This tool is entirely client-side — no server route needed. It only uses the display stack.
```tsx
import { z } from "zod";
import type { ToolConfig, SlotRenderProps } from "glove-react";

export const proposePlan: ToolConfig = {
  name: "propose_plan",
  description:
    "Present a step-by-step plan to the user for approval before " +
    "making changes. ALWAYS use this before editing files. " +
    "Blocks until the user approves or rejects.",
  inputSchema: z.object({
    title: z.string().describe("Plan title, e.g. 'Refactor auth to JWT'"),
    steps: z
      .array(
        z.object({
          title: z.string().describe("Step title"),
          description: z.string().describe("What this step does"),
        }),
      )
      .describe("Ordered list of planned changes"),
  }),
  // This tool is pure display stack — no server call needed
  async do(input, display) {
    const approved = await display.pushAndWait({ input });
    return approved
      ? "Plan approved — proceed with the changes."
      : "Plan rejected — ask the user what they want to change.";
  },
  render({ data, resolve }: SlotRenderProps) {
    const { title, steps } = data as {
      title: string;
      steps: { title: string; description: string }[];
    };
    return (
      <div style={{ padding: 16, border: "1px solid #9ED4B8", borderRadius: 12 }}>
        <p style={{ fontWeight: 600, marginBottom: 12 }}>{title}</p>
        <ol style={{ listStyle: "none", padding: 0, display: "flex", flexDirection: "column", gap: 6 }}>
          {steps.map((step, i) => (
            <li
              key={i}
              style={{
                display: "flex",
                flexDirection: "column",
                gap: 2,
                padding: "6px 10px",
                borderRadius: 6,
                background: "#0a0a0a",
              }}
            >
              <div style={{ display: "flex", alignItems: "center", gap: 8 }}>
                <span style={{ fontSize: 10, fontWeight: 700, color: "#9ED4B8" }}>
                  {i + 1}
                </span>
                <strong style={{ fontSize: 13 }}>{step.title}</strong>
              </div>
              <span style={{ fontSize: 12, color: "#888", paddingLeft: 18 }}>
                {step.description}
              </span>
            </li>
          ))}
        </ol>
        <div style={{ display: "flex", gap: 8, marginTop: 12 }}>
          <button
            onClick={() => resolve(true)}
            style={{
              padding: "8px 16px",
              border: "none",
              borderRadius: 6,
              background: "#22c55e",
              color: "#fff",
              cursor: "pointer",
            }}
          >
            Approve Plan
          </button>
          <button
            onClick={() => resolve(false)}
            style={{
              padding: "8px 16px",
              border: "none",
              borderRadius: 6,
              background: "#262626",
              color: "#888",
              cursor: "pointer",
            }}
          >
            Reject
          </button>
        </div>
      </div>
    );
  },
};
```

The description says “ALWAYS use this before editing files.” This is how you encode safety rules — through tool descriptions. The AI reads the description and follows it. Combined with the system prompt (step 7), this creates a reliable approval gate.
When the AI edits a file, it should show you what it is about to change. This tool combines both patterns: it calls the server to read the file, shows a diff using pushAndWait, and if approved, calls the server again to write the file.
```tsx
import { z } from "zod";
import type { ToolConfig, SlotRenderProps } from "glove-react";

export const editFile: ToolConfig = {
  name: "edit_file",
  description:
    "Edit a file by replacing a specific string. Shows a diff preview " +
    "and waits for user approval before writing. Use this for all " +
    "code modifications.",
  inputSchema: z.object({
    path: z.string().describe("File path relative to project root"),
    oldString: z.string().describe("The exact text to find and replace"),
    newString: z.string().describe("The replacement text"),
  }),
  async do(input, display) {
    // Step 1: Read the file from the server to verify the edit is valid
    const readRes = await fetch("/api/fs/read", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ path: input.path }),
    });
    const readData = await readRes.json();
    if (readData.error) return `Error: ${readData.error}`;
    if (!readData.content.includes(input.oldString)) {
      return "Error: old_string not found in file.";
    }
    // Step 2: Show the diff and wait for approval (client-side display stack)
    const approved = await display.pushAndWait({
      input: {
        path: input.path,
        oldString: input.oldString,
        newString: input.newString,
      },
    });
    if (!approved) return "Edit rejected by user.";
    // Step 3: Write the file on the server
    const writeRes = await fetch("/api/fs/write", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        path: input.path,
        oldString: input.oldString,
        newString: input.newString,
      }),
    });
    const writeData = await writeRes.json();
    if (writeData.error) return `Error: ${writeData.error}`;
    return "File updated successfully.";
  },
  render({ data, resolve }: SlotRenderProps) {
    const { path, oldString, newString } = data as {
      path: string;
      oldString: string;
      newString: string;
    };
    return (
      <div style={{ padding: 16, borderRadius: 12, border: "1px solid #333" }}>
        <p style={{ fontSize: 12, color: "#888", marginBottom: 8 }}>
          Edit: <code style={{ color: "#9ED4B8" }}>{path}</code>
        </p>
        {/* Removed lines */}
        <div style={{ marginBottom: 8 }}>
          {oldString.split("\n").map((line, i) => (
            <div
              key={`old-${i}`}
              style={{
                fontFamily: "monospace",
                fontSize: 12,
                padding: "2px 8px",
                background: "rgba(239, 68, 68, 0.1)",
                color: "#ef4444",
                borderLeft: "3px solid #ef4444",
              }}
            >
              - {line}
            </div>
          ))}
        </div>
        {/* Added lines */}
        <div style={{ marginBottom: 12 }}>
          {newString.split("\n").map((line, i) => (
            <div
              key={`new-${i}`}
              style={{
                fontFamily: "monospace",
                fontSize: 12,
                padding: "2px 8px",
                background: "rgba(34, 197, 94, 0.1)",
                color: "#22c55e",
                borderLeft: "3px solid #22c55e",
              }}
            >
              + {line}
            </div>
          ))}
        </div>
        <div style={{ display: "flex", gap: 8 }}>
          <button
            onClick={() => resolve(true)}
            style={{
              padding: "8px 16px",
              border: "none",
              borderRadius: 6,
              background: "#22c55e",
              color: "#fff",
              cursor: "pointer",
            }}
          >
            Apply Edit
          </button>
          <button
            onClick={() => resolve(false)}
            style={{
              padding: "8px 16px",
              border: "none",
              borderRadius: 6,
              background: "#262626",
              color: "#888",
              cursor: "pointer",
            }}
          >
            Reject
          </button>
        </div>
      </div>
    );
  },
};
```

This is the pattern that makes AI coding assistants trustworthy. The do function talks to the server to read the file, shows a diff in the browser, and only writes to the server after the user approves. The server never sees the write request unless the user clicked Apply.
Running shell commands is the most dangerous capability. The display stack adds two layers of safety: a permission prompt before executing, and an output card after.
This tool uses both display stack patterns in a single call — pushAndWait for the permission gate, then pushAndForget to show the output.
```tsx
import { z } from "zod";
import type { ToolConfig, SlotRenderProps } from "glove-react";

export const runCommand: ToolConfig = {
  name: "run_command",
  description:
    "Run a shell command. Shows the command for user approval first, " +
    "then displays the output. Use for running tests, installing " +
    "packages, git operations, or build commands.",
  inputSchema: z.object({
    command: z.string().describe("The shell command to run"),
    reason: z.string().describe("Why this command needs to run"),
  }),
  async do(input, display) {
    // Step 1: Ask permission in the browser (pushAndWait)
    const approved = await display.pushAndWait({
      input: { command: input.command, reason: input.reason, phase: "permission" },
    });
    if (!approved) return "Command rejected by user.";
    // Step 2: Execute on the server
    const res = await fetch("/api/fs/exec", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ command: input.command }),
    });
    const data = await res.json();
    // Step 3: Show output in the browser (pushAndForget)
    await display.pushAndForget({
      input: {
        command: input.command,
        output: data.output,
        phase: data.error ? "error" : "output",
      },
    });
    if (data.error) return `Command failed: ${data.output}`;
    return data.output;
  },
  render({ data, resolve }: SlotRenderProps) {
    const { phase } = data as { phase: string };
    // Permission prompt (pushAndWait — resolve is available)
    if (phase === "permission") {
      const { command, reason } = data as {
        command: string;
        reason: string;
        phase: string;
      };
      return (
        <div style={{ padding: 16, border: "1px dashed #f59e0b", borderRadius: 12 }}>
          <p style={{ fontSize: 12, color: "#f59e0b", fontWeight: 600, marginBottom: 8 }}>
            Run command?
          </p>
          <div
            style={{
              fontFamily: "monospace",
              fontSize: 13,
              padding: "8px 12px",
              background: "#0a0a0a",
              borderRadius: 6,
              marginBottom: 8,
            }}
          >
            $ {command}
          </div>
          <p style={{ fontSize: 12, color: "#888", marginBottom: 12 }}>{reason}</p>
          <div style={{ display: "flex", gap: 8 }}>
            <button
              onClick={() => resolve(true)}
              style={{
                padding: "8px 16px",
                border: "none",
                borderRadius: 6,
                background: "#22c55e",
                color: "#fff",
                cursor: "pointer",
              }}
            >
              Run
            </button>
            <button
              onClick={() => resolve(false)}
              style={{
                padding: "8px 16px",
                border: "none",
                borderRadius: 6,
                background: "#262626",
                color: "#888",
                cursor: "pointer",
              }}
            >
              Deny
            </button>
          </div>
        </div>
      );
    }
    // Output display (pushAndForget — no resolve needed)
    const { command, output } = data as {
      command: string;
      output: string;
      phase: string;
    };
    const isError = phase === "error";
    return (
      <div
        style={{
          padding: 16,
          borderRadius: 12,
          borderLeft: `3px solid ${isError ? "#ef4444" : "#333"}`,
          background: "#141414",
        }}
      >
        <p style={{ fontSize: 12, color: "#888", marginBottom: 4 }}>
          $ {command}
        </p>
        <pre
          style={{
            fontFamily: "monospace",
            fontSize: 12,
            color: isError ? "#ef4444" : "#ededed",
            whiteSpace: "pre-wrap",
            lineHeight: 1.5,
            margin: 0,
          }}
        >
          {output}
        </pre>
      </div>
    );
  },
};
```

The render function handles both phases by checking `data.phase`. For the permission prompt, it uses resolve (the user must respond). For the output card, there is no resolve call — it is fire-and-forget.
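If you want the compiler to enforce the two phases, a discriminated union over `phase` lets TypeScript narrow each branch instead of casting twice. A sketch; the type and helper names are ours, and the fields mirror the run_command tool above:

```typescript
// Discriminated union keyed on `phase`: TypeScript narrows the shape
// in each branch, so permission slots can't be misread as output slots.
type RunCommandSlot =
  | { phase: "permission"; command: string; reason: string }
  | { phase: "output" | "error"; command: string; output: string };

function describeSlot(slot: RunCommandSlot): string {
  if (slot.phase === "permission") {
    // Narrowed: `reason` is available here, `output` is not.
    return `awaiting approval: ${slot.command} (${slot.reason})`;
  }
  // Narrowed: `output` is available here.
  return `${slot.phase}: ${slot.command} -> ${slot.output}`;
}
```

A render function written against this union would replace the two `data as { ... }` casts with a single typed parameter.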
```ts
import { GloveClient } from "glove-react";
import { readFileTool } from "./tools/read-file";
import { searchCode } from "./tools/search-code";
import { proposePlan } from "./tools/propose-plan";
import { editFile } from "./tools/edit-file";
import { runCommand } from "./tools/run-command";

export const gloveClient = new GloveClient({
  // Points to the LLM proxy — NOT where tools execute
  endpoint: "/api/chat",
  systemPrompt: `You are a careful, thorough coding assistant. You help
users understand and modify their codebase.

Your workflow:
1. When given a task, start by reading relevant files and searching
   the codebase to understand the current state.
2. ALWAYS use propose_plan before making any changes. Present a clear
   step-by-step plan and wait for approval.
3. After the plan is approved, make changes one file at a time using
   edit_file. Each edit shows a diff for review.
4. After all edits, use run_command to run tests or verify the changes.
5. If a test fails, read the error, explain it, and propose a fix.

Rules:
- Never edit a file without showing a plan first.
- Never run a command without explaining why.
- If the user rejects a plan or edit, ask what they want to change.
- Show search results when you find something relevant.
- Keep explanations concise — the UI speaks for itself.`,
  tools: [readFileTool, searchCode, proposePlan, editFile, runCommand],
});
```

The `endpoint` points to the LLM proxy. The server API routes (`/api/fs/read`, `/api/fs/write`, `/api/fs/exec`) are called by the tools directly.
```tsx
"use client";

import { useState } from "react";
import { useGlove } from "glove-react";

export default function CodingAgent() {
  const {
    timeline,
    streamingText,
    busy,
    sendMessage,
    slots,
    renderSlot,
  } = useGlove();
  const [input, setInput] = useState("");

  function handleSubmit(e: React.FormEvent) {
    e.preventDefault();
    if (!input.trim() || busy) return;
    sendMessage(input.trim());
    setInput("");
  }

  return (
    <div style={{ maxWidth: 700, margin: "2rem auto" }}>
      <h1>Coding Agent</h1>
      <div>
        {timeline.map((entry, i) => {
          if (entry.kind === "user")
            return <div key={i} style={{ margin: "1rem 0" }}><strong>You:</strong> {entry.text}</div>;
          if (entry.kind === "agent_text")
            return <div key={i} style={{ margin: "1rem 0" }}><strong>Agent:</strong> {entry.text}</div>;
          if (entry.kind === "tool")
            return (
              <div key={i} style={{ margin: "0.5rem 0", fontSize: "0.85rem", color: "#888" }}>
                {entry.name} — {entry.status}
              </div>
            );
          return null;
        })}
      </div>
      {streamingText && (
        <div style={{ opacity: 0.7 }}><strong>Agent:</strong> {streamingText}</div>
      )}
      {/* Display stack — plans, diffs, permission prompts, output cards */}
      {slots.length > 0 && (
        <div style={{ margin: "1rem 0", display: "flex", flexDirection: "column", gap: "0.5rem" }}>
          {slots.map(renderSlot)}
        </div>
      )}
      <form onSubmit={handleSubmit} style={{ display: "flex", gap: "0.5rem" }}>
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Describe what you want to change..."
          disabled={busy}
          style={{ flex: 1, padding: "0.5rem", fontFamily: "monospace" }}
        />
        <button type="submit" disabled={busy}>Send</button>
      </form>
    </div>
  );
}
```

Run the dev server:

```bash
pnpm dev
```

Try these prompts:
- Ask where something is defined or used — the AI calls `search_code` and a results card appears

Here is a summary of the architecture. Understanding this split is key to building tools that need server access:
| Piece | Where it runs | Why |
|---|---|---|
| `createChatHandler` | Server | Proxies to OpenAI/Anthropic. Sends tool schemas, streams responses. |
| Tool `do` functions | Browser | Called by `useGlove` when the AI requests a tool call. |
| Tool `render` functions | Browser | React components that show in the display stack. |
| `/api/fs/*` routes | Server | File reads, writes, and shell commands via Node.js APIs. |
| Display stack | Browser | `pushAndWait` and `pushAndForget` manage React components. |
The do function is the bridge. It runs in the browser, so it can call display.pushAndWait() to show UI. And it can call fetch() to reach server API routes for operations that need Node.js. This is what makes the pattern work — the display stack and the server are both accessible from the same function.
| Action | Without display stack | With display stack |
|---|---|---|
| Search | AI silently reads results | Results card visible to user |
| Plan | AI describes changes in text | Structured plan with Approve/Reject buttons |
| Edit | AI writes to file directly | Diff preview with Apply/Reject buttons |
| Command | AI runs commands blindly | Permission prompt, then output card |
The AI still orchestrates everything. But the user approves every mutation. This is the difference between a tool that helps you code and a tool that codes at you.
The coding agent showcases a reusable pattern for any tool that performs a mutation through a server:
```ts
async do(input, display) {
  // Gate: show preview, wait for approval (browser — pushAndWait)
  const approved = await display.pushAndWait({ input: { ... } });
  if (!approved) return "Rejected";

  // Execute: call the server API route (server — fetch)
  const res = await fetch("/api/...", { method: "POST", body: ... });

  // Display: show result (browser — pushAndForget)
  await display.pushAndForget({ input: { output: res.data } });
  return res.data;
}
```

Gate, execute, display. The gate ensures the user consents. The execute happens on the server. The display shows the result. This pattern works for file edits, database writes, API calls, email sends, deployments — anything where the operation needs server access and “undo” is expensive.
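When several tools repeat this shape, it can be factored into a helper. This is a sketch under assumptions: `Display` mirrors the two methods used throughout this tutorial, and `gatedMutation` is our name for illustration, not a Glove export:

```typescript
// Gate-execute-display, factored out. The gate runs first; the mutation
// only executes after approval; the result is shown fire-and-forget.
type Display = {
  pushAndWait(args: { input: unknown }): Promise<boolean>;
  pushAndForget(args: { input: unknown }): Promise<void>;
};

async function gatedMutation<T>(
  display: Display,
  preview: unknown,
  execute: () => Promise<T>,
): Promise<T | "Rejected by user."> {
  const approved = await display.pushAndWait({ input: preview });
  if (!approved) return "Rejected by user.";
  const result = await execute();
  await display.pushAndForget({ input: { output: result } });
  return result;
}
```

An edit_file-style tool would pass its diff data as `preview` and the write `fetch` as `execute`.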
| Tool | Pattern | Why |
|---|---|---|
| `read_file` | No display | Silent server call — AI builds context |
| `search_code` | `pushAndForget` | Show results, AI keeps working |
| `propose_plan` | `pushAndWait` | Must approve before any changes |
| `edit_file` | `pushAndWait` | Must review diff before server writes |
| `run_command` | Both | `pushAndWait` for permission, `pushAndForget` for output |
See also: `pushAndWait`, `pushAndForget`, and `SlotRenderProps`.