Claude Code (Part 9)

Neil Haddley · February 22, 2026

Using an AI coding agent to create an AI coding agent

AI · claude-code · ai-agent · anthropic · agentic-coding

In a twist of recursive thinking, I built a local AI coding agent using Claude Code, another AI coding agent. I started with a single prompt to define the tech stack (Next.js 15, TypeScript, Tailwind, and Ollama with qwen2.5-coder:7b) and built up to a full implementation with a three-column web UI and a CLI. Along the way I worked through file trees, chat interfaces, and project-specific instructions via AGENT.md — and ended up with a fully functional AI coding agent, built by an AI coding agent.

Prompt

MARKDOWN
Build a lightweight, local AI coding agent (inspired by Open Hands) with a browser UI and a CLI. The agent uses Ollama (default model `qwen2.5-coder:7b`).

## Tech Stack
- **Next.js 15** (App Router) + TypeScript + Tailwind CSS for UI
- **CLI**: TypeScript, run with `tsx`
- **LLM**: Ollama (URL configurable)

## Web UI (Three‑Column Layout)
- **Left sidebar (224px)**: File tree (depth 4, ignores blacklisted dirs).
- **Center**: File viewer.
- **Right sidebar (384px)**: Chat panel with SSE‑streamed agent responses, expandable tool call badges, model selector (only tool‑capable models enabled).
- **Workspace**: Folder picker (path saved to localStorage); validated on next load.

Do you have any questions?

Claude asked which tools the agent should be able to call

Claude asked whether the center file viewer should be editable

Claude asked what the CLI should do

I approved all edits for the session

The build completed successfully

The application was running in the browser

The app prompted me to select a workspace folder

I asked the agent to write a bash script that prints 'hello world!'

The model suggested invoking the write_file tool, but nothing happened

I asked Claude Code to troubleshoot the issue

Claude Code diagnosed the problem

Claude Code updated the application

I tested the fix by asking the agent to write hello.sh again

I asked the agent to modify hello.sh to accept a name argument

I asked the agent to run hello.sh with argument 'Neil'

I ran /init to generate a CLAUDE.md file

PROMPT
If AGENT.md present in workspace root, append (≤10k chars) to system prompt as `## Project instructions`.

I asked Claude Code to add AGENT.md support to the agent

Claude Code added support for AGENT.md

I created an example AGENT.md file in the workspace

I asked the agent to create goodbye.sh using the AGENT.md project instructions

Prompt

MARKDOWN
## User Interface – Center Column (Split View)
The center column is divided into two resizable sections:

- **Top – File Viewer**: Displays the content of the selected file (read‑only). Its height shrinks when the terminal panel is opened.
- **Bottom – Terminal Panel**: A collapsible panel (default height 260px) that can be expanded or hidden via a 26‑px toggle bar (▸ Terminal) at the very bottom of the UI.

### Terminal Panel Features
- **Command Execution**: Runs shell commands via `spawn('sh', ['-c', ...])`, streaming real‑time output over SSE (`/api/terminal`).
- **Color‑Coded Output**:
  - `$` command – blue
  - stdout – white
  - stderr – red
  - system messages (e.g., "Command killed") – grey italic
- **Command History**: Navigate with up/down arrows (deduplicated, capped at 100 entries).
- **Process Control**:
  - **Kill button** – sends abort signal via `AbortController` (also responds to Ctrl+C).
  - **Clear button** – resets the output display.

I described the terminal panel feature to Claude Code

The terminal panel was running in the browser

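The command-execution core of the terminal panel, as described above, can be sketched as a small helper around `spawn('sh', ['-c', ...])`. This is an illustrative reconstruction, not the app's actual code: the function name `runInTerminal` and the `onOutput` callback stand in for the `/api/terminal` route writing SSE frames.

```typescript
import { spawn } from "node:child_process";

// Run a shell command in the workspace and forward stdout/stderr as it
// arrives. An AbortSignal (from an AbortController wired to the Kill
// button / Ctrl+C) terminates the child process.
function runInTerminal(
  command: string,
  cwd: string,
  onOutput: (stream: "stdout" | "stderr", data: string) => void,
  signal?: AbortSignal,
): Promise<number | null> {
  return new Promise((resolvePromise) => {
    const child = spawn("sh", ["-c", command], { cwd, signal });
    child.stdout.on("data", (d) => onOutput("stdout", d.toString()));
    child.stderr.on("data", (d) => onOutput("stderr", d.toString()));
    child.on("close", (code) => resolvePromise(code));
    // Fires on spawn failure or when the AbortSignal kills the process.
    child.on("error", () => resolvePromise(null));
  });
}
```

In the real route each `onOutput` call would become one SSE frame, which is what lets the browser render output line by line while the command is still running.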
Command line interface (CLI)

agent command line interface (CLI)

File: src/lib/agent.ts

Signature

TYPESCRIPT
export async function runAgent(
  messages: Message[],
  model: string,
  workspace: string,
  ollamaUrl = DEFAULT_OLLAMA_URL,   // default: "http://localhost:11434"
  emit: (event: SSEEvent) => void,
): Promise<void>

Purpose

Runs the agentic loop for a single conversation turn. It sends messages to an Ollama model, executes any tool calls the model requests, and feeds results back until the model produces a final text response. Results are delivered incrementally via the emit callback rather than returned — this keeps the function transport-agnostic (it is used by both the HTTP SSE route and the CLI).
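The emit-driven contract can be sketched with a minimal consumer. The `SSEEvent` union below mirrors the events listed in the SSE Events table; `makeConsoleEmit` is a hypothetical CLI-style consumer, not code from the project.

```typescript
// Event union mirroring the SSE Events table (field names assumed).
type SSEEvent =
  | { type: "text"; content: string }
  | { type: "tool_call"; id: string; name: string; args: unknown }
  | { type: "tool_result"; id: string; name: string; result: string }
  | { type: "error"; message: string }
  | { type: "done" };

// A transport-agnostic consumer: the CLI would print each entry as it
// arrives; the HTTP route would serialize the same events as SSE frames.
function makeConsoleEmit(): {
  emit: (e: SSEEvent) => void;
  transcript: string[];
} {
  const transcript: string[] = [];
  const emit = (e: SSEEvent) => {
    switch (e.type) {
      case "text":
        transcript.push(e.content);
        break;
      case "tool_call":
        transcript.push(`[tool ${e.name}]`);
        break;
      case "tool_result":
        transcript.push(`[result ${e.name}]`);
        break;
      case "error":
        transcript.push(`[error: ${e.message}]`);
        break;
      case "done":
        transcript.push("[done]");
        break;
    }
  };
  return { emit, transcript };
}
```

Because `runAgent` only ever calls `emit`, the same agent loop serves both front ends without knowing which one it is driving.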

Parameters

| Parameter | Description |
|---|---|
| `messages: Message[]` | Conversation history from the caller (user/assistant turns). |
| `model: string` | Ollama model name, e.g. `"qwen2.5-coder:7b"`. |
| `workspace: string` | Absolute path to the workspace directory. Passed to every tool call and used to load AGENT.md. |
| `ollamaUrl: string` | Base URL of the Ollama instance. Defaults to `http://localhost:11434`. |
| `emit: (event: SSEEvent) => void` | Callback invoked for every observable event (text chunks, tool calls, errors, done). |

SSE Events Emitted

| `type` | When | Key fields |
|---|---|---|
| `text` | Model produces output text (streamed in ~40-char chunks) | `content: string` |
| `tool_call` | Before a tool is executed | `id`, `name`, `args` |
| `tool_result` | After a tool returns | `id`, `name`, `result` |
| `error` | Ollama request fails or streaming throws | `message: string` |
| `done` | Loop exits normally | (none) |
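On the browser side, each of these events arrives as one SSE frame (`data: {...}` followed by a blank line). A minimal frame parser might look like this; the exact route path and wire format in the app are assumptions.

```typescript
// Loose event shape: `type` plus whatever key fields the event carries.
type SSEEvent = { type: string; [key: string]: unknown };

// Parse one SSE frame ("data: {...}") into an event object.
// Returns null for frames without a data line (comments, retry hints).
function parseSSEFrame(frame: string): SSEEvent | null {
  const dataLine = frame.split("\n").find((l) => l.startsWith("data: "));
  if (!dataLine) return null;
  return JSON.parse(dataLine.slice("data: ".length)) as SSEEvent;
}
```

The chat panel would feed frames from a streaming `fetch` response through this parser and dispatch on `event.type` to append text, render tool call badges, or close the stream on `done`.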

How It Works

1. History initialisation
CODE
system prompt (base rules + optional AGENT.md content)
  └─ buildSystemPrompt(workspace) reads AGENT.md from workspace root,
     appends it under "## Project instructions" (capped at 10 000 chars)
incoming messages (user / assistant turns)
2. The loop (max MAX_TOOL_ITERATIONS = 20 iterations)
CODE
┌─────────────────────────────────────────────────────────┐
│  chat(ollamaUrl, model, history, TOOL_DEFINITIONS)      │  non-streaming
│    → response.message                                   │
└────────────────────────┬────────────────────────────────┘
                         │
          ┌──────────────▼──────────────┐
          │ tool_calls present?         │
          └──────┬──────────────┬───────┘
                NO              YES
                │               │
                ▼               ▼
         ┌─────────┐    ┌─────────────────────────────┐
         │ Final   │    │ For each tool call:         │
         │ response│    │  emit tool_call             │
         │ path    │    │  dispatchTool(workspace, …) │
         └────┬────┘    │  emit tool_result           │
              │         │  push result into history   │
              │         └──────────────┬──────────────┘
              │                        │ loop ──────────────►
              ▼
  textContent non-empty?
      YES → chunk & emit (simulate streaming, 10 ms delay)
      NO  → chatStream() for true streaming from Ollama
  emit done
3. Tool call resolution (native vs. text fallback)

Ollama models on the TOOL_CAPABLE_PREFIXES list return tool calls in the structured message.tool_calls field. Models that don't support native function calling may embed JSON in their text response instead. extractTextToolCalls() (src/lib/parseToolCalls.ts) handles the fallback:

1. Scans for markdown code fences containing JSON.

2. Tries to parse the entire string as JSON.

3. Uses a bracket scanner for partial/embedded JSON objects.

When the fallback fires, msg.tool_calls and msg.content are normalised before being pushed to history so that subsequent turns remain consistent.
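The three fallback strategies can be sketched in one function. This is an illustrative reconstruction of the idea behind `extractTextToolCalls()`, not the code in `src/lib/parseToolCalls.ts`; the `TextToolCall` shape and field names are assumptions.

```typescript
interface TextToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

function tryParseCall(candidate: string): TextToolCall | null {
  try {
    const parsed = JSON.parse(candidate.trim());
    if (parsed && typeof parsed.name === "string") {
      return { name: parsed.name, arguments: parsed.arguments ?? {} };
    }
  } catch {
    /* not valid JSON */
  }
  return null;
}

function extractTextToolCalls(text: string): TextToolCall[] {
  // 1. Prefer a fenced code block containing JSON; 2. else try the whole string.
  const fence = text.match(/`{3}(?:json)?\s*([\s\S]*?)`{3}/);
  const direct = tryParseCall(fence ? fence[1] : text);
  if (direct) return [direct];

  // 3. Bracket scanner: find the first balanced {...} embedded in prose.
  const start = text.indexOf("{");
  if (start === -1) return [];
  let depth = 0;
  for (let i = start; i < text.length; i++) {
    if (text[i] === "{") depth++;
    else if (text[i] === "}" && --depth === 0) {
      const embedded = tryParseCall(text.slice(start, i + 1));
      return embedded ? [embedded] : [];
    }
  }
  return [];
}
```

The bracket scanner is what rescues responses like "I will call {...} now", where the JSON is valid but surrounded by conversational text.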

4. Available tools

Defined in TOOL_DEFINITIONS (src/lib/tools.ts) and dispatched via dispatchTool:

| Tool name | What it does | Key limits |
|---|---|---|
| `read_file` | Reads a workspace file, returns content with line numbers | Path traversal blocked by `resolveSafe()` |
| `write_file` | Creates or overwrites a file; creates parent dirs | Sets `0o755` for `.sh`, `.py`, etc. |
| `run_command` | Runs a shell command via `exec` in the workspace dir | 30 s timeout, 1 MB output buffer |
| `search_files` | `grep -r -n` across the workspace | Up to 100 matches |
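The `resolveSafe()` guard mentioned in the table is the piece that keeps tool calls inside the workspace. A minimal sketch of the idea (the real implementation may differ):

```typescript
import { resolve, sep } from "node:path";

// Resolve a tool-supplied relative path against the workspace root and
// reject anything that escapes it (e.g. "../etc/passwd" or absolute paths
// outside the workspace).
function resolveSafe(workspace: string, relativePath: string): string {
  const root = resolve(workspace);
  const target = resolve(root, relativePath);
  if (target !== root && !target.startsWith(root + sep)) {
    throw new Error(`Path escapes workspace: ${relativePath}`);
  }
  return target;
}
```

Checking the resolved absolute path, rather than scanning the input for `..`, also catches escapes hidden behind symlink-free tricks like `a/../../etc`.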
5. Final response delivery

After the last tool round-trip the model should produce a plain text response (no further tool calls). Two paths:

- `textContent` is set — the non-streaming chat() call already has the full text. It is broken into 40-character chunks and emitted with a 10 ms artificial delay to give the UI a smooth streaming feel.

- `textContent` is empty — the model returned an empty content body (some Ollama models do this). A second, streaming chatStream() call is made and each chunk is forwarded directly.
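The first path's chunking is simple enough to sketch directly; the 40-character chunk size and 10 ms delay come from the description above, while the function name is illustrative.

```typescript
const CHUNK_SIZE = 40;
const CHUNK_DELAY_MS = 10;

// Split a complete response into ~40-char chunks and emit each after a
// short delay, so the UI renders it like a live stream.
async function emitChunked(
  text: string,
  emit: (e: { type: "text"; content: string }) => void,
): Promise<void> {
  for (let i = 0; i < text.length; i += CHUNK_SIZE) {
    emit({ type: "text", content: text.slice(i, i + CHUNK_SIZE) });
    await new Promise((r) => setTimeout(r, CHUNK_DELAY_MS));
  }
}
```

Since the non-streaming `chat()` call already returned the full text, the delay is purely cosmetic; it just keeps the chat panel's behaviour consistent with the truly streamed `chatStream()` path.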

agent worked example

console.log output