PromptConduit: Building Analytics for AI Coding Assistants

6 min read
Scott Havird
Engineer

Every day, I spend hours having conversations with AI coding assistants. Claude Code helps me debug issues, Cursor generates components, and Gemini CLI answers quick questions. But here's the thing: I had no idea what I was actually asking them. What patterns emerged from my prompts? Which tools got invoked most frequently? Was I getting better at prompting over time?

These questions led me to build PromptConduit.

The Problem: Flying Blind with AI Assistants

If you're like me, you've fully embraced AI-assisted development. It's become second nature to describe what you want and let the AI figure out the implementation details. But this new workflow comes with a blind spot: we have no visibility into our own patterns.

Think about it:

  • How many prompts do you send per day?
  • What percentage of your requests require multiple iterations?
  • Which tools (file edits, bash commands, web searches) are most commonly triggered?
  • How does your prompting evolve across different projects?

Without answers to these questions, we're missing opportunities to improve our AI collaboration skills.

The Solution: A Multi-Tool Analytics Platform

PromptConduit is a suite of tools I built to solve this problem:

  1. Go CLI - Captures real-time events from Claude Code, Cursor, and Gemini CLI
  2. macOS App - Native menu bar app for managing AI agent sessions
  3. SaaS Platform - Web dashboard for analyzing patterns and trends

The architecture is designed around one core principle: never block the AI tool. Every interaction should feel instant, with analytics happening asynchronously in the background.

How It Works: Hooks and Adapters

The CLI integrates with AI tools through their native hook systems. When you submit a prompt or a tool gets executed, the hook fires and PromptConduit captures the event:

AI Tools → Raw Event Envelope → Platform API → Server-side Adapters → Storage

Here's what a captured event looks like:

{
  "tool": "claude-code",
  "event_type": "prompt_submit",
  "timestamp": "2025-01-04T10:30:00Z",
  "session_id": "abc123",
  "git": {
    "branch": "feat/new-feature",
    "commit_hash": "abc123",
    "is_dirty": true,
    "staged_count": 2
  },
  "prompt": {
    "prompt": "Add error handling to the API endpoint",
    "attachments": [
      {
        "filename": "screenshot.png",
        "media_type": "image/png"
      }
    ]
  }
}

Notice the Git context. Every event captures your repository state, making it possible to correlate prompts with specific features or bug fixes.
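
For illustration, here is a minimal Go sketch of how that Git context could be gathered by shelling out to git. The struct mirrors the "git" block in the envelope above; the helper itself is an assumption, not the CLI's actual code.

package gitctx

import (
	"os/exec"
	"strings"
)

// GitContext mirrors the "git" block in the event envelope above.
type GitContext struct {
	Branch      string `json:"branch"`
	CommitHash  string `json:"commit_hash"`
	IsDirty     bool   `json:"is_dirty"`
	StagedCount int    `json:"staged_count"`
}

// Collect shells out to git; error handling is omitted for brevity.
func Collect() GitContext {
	branch, _ := run("git", "rev-parse", "--abbrev-ref", "HEAD")
	commit, _ := run("git", "rev-parse", "--short", "HEAD")
	status, _ := run("git", "status", "--porcelain")

	staged := 0
	for _, line := range strings.Split(status, "\n") {
		// In porcelain output, a non-space, non-'?' first column means a staged change.
		if len(line) > 0 && line[0] != ' ' && line[0] != '?' {
			staged++
		}
	}

	return GitContext{
		Branch:      branch,
		CommitHash:  commit,
		IsDirty:     strings.TrimSpace(status) != "",
		StagedCount: staged,
	}
}

func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).Output()
	return strings.TrimSpace(string(out)), err
}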

The Journey: From Plugin to CLI

The project went through several architectural iterations. The git history tells the story:

Phase 1: Plugin System
I started with a Claude Code plugin that hooked into events. It worked, but distribution was clunky and updates required manual reinstallation.

Phase 2: Go CLI Binary
Rewriting as a Go CLI solved the distribution problem. One curl command installs it, Homebrew keeps it updated, and cross-platform builds happen automatically with GoReleaser.

Phase 3: Server-side Adapters
Originally, each tool had its own adapter in the CLI. This meant releasing CLI updates whenever I added a new AI tool. Moving adapters to the server means the CLI stays thin and tool support grows without user updates.
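
As a rough sketch of what a server-side adapter looks like, the core idea is an interface that maps a tool-specific raw envelope onto one canonical event shape. The Go types below are hypothetical and simplified, not the platform's actual API:

package adapters

// RawEnvelope is the tool-agnostic payload the CLI ships to the platform.
// (Hypothetical, simplified types; the real API will differ.)
type RawEnvelope struct {
	Tool    string         `json:"tool"`
	Type    string         `json:"event_type"`
	Payload map[string]any `json:"payload"`
}

// NormalizedEvent is the canonical shape the platform stores and queries.
type NormalizedEvent struct {
	Tool      string
	Kind      string // "prompt", "tool_call", "session", ...
	SessionID string
	Prompt    string
}

// Adapter knows how to translate one tool's raw events.
type Adapter interface {
	Tool() string
	Normalize(raw RawEnvelope) (NormalizedEvent, error)
}

// Supporting a new assistant is one more Register call on the server,
// with no CLI release required.
var registry = map[string]Adapter{}

func Register(a Adapter) { registry[a.Tool()] = a }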

Phase 4: Transcript Sync
Real-time hooks capture events as they happen, but what about historical conversations? The sync command parses Claude Code's JSONL transcript files and uploads them to the platform:

promptconduit sync claude-code --since 2025-01-01
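
Under the hood, syncing mostly means reading one JSON object per line from the transcript files. A minimal Go sketch, assuming only that each line is a standalone JSON object (the real transcript schema carries many more fields):

package transcript

import (
	"bufio"
	"encoding/json"
	"os"
)

// readTranscript parses a Claude Code JSONL transcript: one JSON object per line.
func readTranscript(path string) ([]map[string]any, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // transcript lines can be large

	var entries []map[string]any
	for scanner.Scan() {
		line := scanner.Bytes()
		if len(line) == 0 {
			continue
		}
		var entry map[string]any
		if err := json.Unmarshal(line, &entry); err != nil {
			continue // skip malformed lines instead of failing the whole sync
		}
		entries = append(entries, entry)
	}
	return entries, scanner.Err()
}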

The macOS App: PTY-Based Agent Control

The CLI handles capture, but I wanted something more: a native macOS app that could orchestrate multiple AI agents simultaneously.

The app uses pseudo-terminal (PTY) control to interact with Claude Code:

  • Menu Bar Presence: Always-accessible toolbar with global hotkey
  • Floating Panels: Agent windows that stay above other apps
  • Multi-Terminal Support: Run parallel agents across different repos
  • Subscription-First: Uses your Claude Code subscription (no API credits)

The breakthrough was detecting agent state through CLI hooks. When Claude Code fires a hook, the macOS app receives it and updates the UI accordingly. This creates a feedback loop between the terminal and the native app.
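
The PTY technique itself is straightforward to sketch. The example below uses Go and the github.com/creack/pty library purely for illustration; the native app's implementation will differ:

package main

import (
	"io"
	"os"
	"os/exec"

	"github.com/creack/pty"
)

func main() {
	// Run the agent inside a pseudo-terminal so it behaves exactly as it
	// would in an interactive terminal session.
	cmd := exec.Command("claude")
	ptmx, err := pty.Start(cmd)
	if err != nil {
		panic(err)
	}
	defer ptmx.Close()

	// Bytes written to the PTY master are "typed" into the agent...
	go io.Copy(ptmx, os.Stdin)
	// ...and everything the agent prints can be read (and parsed) here.
	io.Copy(os.Stdout, ptmx)
}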

Supported AI Tools

Tool          Events Captured
Claude Code   Prompts, Tools, Sessions, Attachments
Cursor        Prompts, Shell, MCP, Files, Attachments
Gemini CLI    Prompts, Tools, Sessions

Adding a new tool requires:

  1. Creating an adapter in the platform's API
  2. Adding install/uninstall logic to the CLI
  3. Registering the tool in the supported tools list
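
To make the CLI side of this concrete, a supported-tools entry could look roughly like the sketch below. The struct, field names, and settings path are illustrative assumptions:

package tools

// ToolSupport describes, on the CLI side, how to install hooks for one tool.
type ToolSupport struct {
	Name       string   // e.g. "claude-code"
	HookConfig string   // settings file the install/uninstall commands edit
	EventTypes []string // hook events to register
}

var supportedTools = []ToolSupport{
	{
		Name:       "claude-code",
		HookConfig: "~/.claude/settings.json", // assumed location of Claude Code's hook settings
		EventTypes: []string{"prompt_submit", "tool_use", "session_end"},
	},
	// A new tool is one more entry here plus a matching server-side adapter.
}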

Getting Started

Install the CLI

curl -fsSL https://promptconduit.dev/install | bash

Configure and Install Hooks

# Set your API key
promptconduit config set --api-key="your-api-key"

# Install hooks for Claude Code
promptconduit install claude-code

# Verify installation
promptconduit status

Sync Historical Transcripts

# Preview what would be synced
promptconduit sync --dry-run

# Sync all Claude Code conversations
promptconduit sync claude-code

What I Learned Building This

Async is Everything
The hook must return immediately. Any latency affects the AI tool's responsiveness. The solution: fork a subprocess that handles the HTTP request while the main process exits.
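
A minimal Go sketch of that fork-and-exit pattern: the hook hands the payload to a detached child process and returns immediately. The --upload-from-stdin flag is a made-up name for illustration:

package hook

import (
	"bytes"
	"os"
	"os/exec"
)

// forwardAsync hands the event to a detached child process and returns at once;
// the child performs the HTTP upload while the hook process exits.
func forwardAsync(payload []byte) error {
	// Re-invoke this same binary in "upload" mode. The flag is hypothetical.
	child := exec.Command(os.Args[0], "--upload-from-stdin")
	child.Stdin = bytes.NewReader(payload)
	if err := child.Start(); err != nil { // Start, not Run: never wait on the network
		return err
	}
	return child.Process.Release() // detach so the hook can exit immediately
}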

Git Context is Gold
Knowing the branch, the commit, and which files are dirty when a prompt is submitted adds tremendous context. You can now ask questions like "what was I working on when I asked this?"

Server-side Adapters Win
Moving transformation logic to the server means the CLI stays stable while tool support evolves. New AI assistants can be supported without CLI updates.

Multipart Uploads for Attachments
Prompt attachments (images, PDFs, documents) need efficient upload. Multipart form data keeps the payload manageable while preserving metadata.
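
A sketch of that upload using Go's standard mime/multipart package; the endpoint URL and form field names here are assumptions:

package upload

import (
	"bytes"
	"encoding/json"
	"mime/multipart"
	"net/http"
)

// uploadEvent sends the event JSON and one attachment in a single multipart request.
func uploadEvent(event any, filename string, data []byte) error {
	var body bytes.Buffer
	w := multipart.NewWriter(&body)

	// Event metadata travels as a JSON form field...
	meta, err := json.Marshal(event)
	if err != nil {
		return err
	}
	w.WriteField("event", string(meta))

	// ...and each attachment as its own file part, keeping its filename.
	part, err := w.CreateFormFile("attachments", filename)
	if err != nil {
		return err
	}
	part.Write(data)
	w.Close()

	req, err := http.NewRequest("POST", "https://api.promptconduit.dev/v1/events", &body) // assumed endpoint
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", w.FormDataContentType())

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}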

What's Next

The platform is evolving toward team analytics. Imagine seeing:

  • How your team's prompting patterns compare
  • Which projects consume the most AI assistance
  • Trends in tool usage over time

The foundations are in place. Organizations, teams, and an insights dashboard are already shipping.

Open Source

The CLI is open source under Apache 2.0.

If you're curious about your AI assistant usage patterns, give it a try. I'd love to hear what insights you discover about your own prompting habits.