
8 posts tagged with "development"


Ways of Working at the Speed of Thought

· 9 min read
Scott Havird
Engineer at Georgia-Pacific · ex-WarnerMedia Innovation Lab (ContentAI) · decade shipping AI-powered platforms


Product talked to a BA. The BA talked to a scrum master. The scrum master scheduled a refinement. Engineers showed up, debated the shape of the feature, poked at the data model, estimated it, and dropped it into the backlog. Eventually — maybe next sprint, maybe next quarter, maybe never — somebody wrote the code.

That was the shape of enterprise software for two decades. We built elaborate coordination machinery around a single hard fact: writing the code was the slow, expensive, error-prone step. Everything else existed to protect that scarce resource.

That fact is no longer true. And most enterprises are still running the coordination machinery for a world that doesn't exist anymore.

TL;DR

The idea-to-code distance has collapsed. Enterprises that keep the old orchestration machinery wrapped around AI-assisted engineers are paying relay-race taxes on every sprint. Invest in platform architecture (what's good for engineers is a force multiplier for agents), analyze your own prompt patterns before you build skills, dismantle ceremonies designed to coordinate humans, and give the restless builders a direct line to the business.

AI Coding Tool Comparison 2026: Claude Code vs Cursor vs GitHub Copilot vs Windsurf

· 11 min read
Scott Havird
Engineer at Georgia-Pacific · ex-WarnerMedia Innovation Lab (ContentAI) · decade shipping AI-powered platforms


I use multiple AI coding tools every day. Not because I'm indecisive — because different tools genuinely excel at different tasks. After a year of tracking my usage with PromptConduit and monitoring releases through Havoptic, I have a data-informed perspective on where each tool shines and where it falls short.

This isn't a surface-level feature checklist. It's an honest assessment from someone who ships production code with these tools daily, tracks their release velocity, and measures their impact on productivity.

TL;DR

Honest head-to-head across six AI coding tools — Claude Code, Cursor, Copilot, Windsurf, Gemini CLI, Codex CLI — from a year of daily use tracked through PromptConduit. The short answer: Claude Code dominates agentic work, Cursor wins autocomplete, and the rest have specific niches.

Claude Code Hooks: A Complete Guide to Automating Your AI Coding Workflow

· 10 min read
Scott Havird
Engineer at Georgia-Pacific · ex-WarnerMedia Innovation Lab (ContentAI) · decade shipping AI-powered platforms


If you've been using Claude Code for more than a week, you've probably noticed a pattern: you keep telling it the same things. "Run prettier after editing." "Don't touch the .env file." "Run the tests before you stop." These aren't complex instructions — they're rules. And rules shouldn't depend on an LLM remembering to follow them.

That's exactly what Claude Code hooks solve. They're deterministic automation that runs at specific points in Claude Code's lifecycle, executed by the harness itself — not by Claude. If you configure a hook to format code after every edit, it will format code after every edit. No exceptions. No "I forgot."
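As a concrete illustration, the "format after every edit" rule can be expressed as a PostToolUse hook in `.claude/settings.json`. This is a minimal sketch, not a copy of the guide's configs: the matcher pattern and the `jq` extraction of `tool_input.file_path` assume the hook receives the tool payload as JSON on stdin, and the exact schema may differ between Claude Code versions.

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs npx prettier --write"
          }
        ]
      }
    ]
  }
}
```

Because the harness runs this command after every matching tool call, the formatting happens whether or not the model remembers the rule.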

TL;DR

Claude Code hooks turn shaky CLAUDE.md instructions into deterministic automation — the harness runs them, not the LLM. This guide covers the seven practical patterns I use in production: formatting, secret protection, notifications, quality gates, and more. Each with config and rationale.

How to Measure AI Coding Assistant Productivity: A Framework for Engineering Teams

· 11 min read
Scott Havird
Engineer at Georgia-Pacific · ex-WarnerMedia Innovation Lab (ContentAI) · decade shipping AI-powered platforms


Here's a question I get asked constantly: "How do you know if AI coding tools are actually making your team more productive?"

It's a fair question. Engineering leaders are investing real budget in Claude Code, Cursor, and GitHub Copilot seats. Developers are restructuring their workflows around these tools. But when someone asks for data — actual numbers on impact — most teams have nothing to show.

I've been working on this problem for over a year, first as an engineering leader trying to justify AI tooling investments at Georgia-Pacific, and then by building PromptConduit to close the analytics gap. Here's the framework I've developed for measuring what actually matters.

TL;DR

Most teams can't prove AI coding ROI because they measure the wrong things. This framework focuses on concrete metrics — commit-assistance rate, PR throughput, cycle-time deltas — instead of vanity numbers. Works across Claude Code, Cursor, and Copilot, and pairs with PromptConduit for automated collection.
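One of those metrics, commit-assistance rate, is simple to approximate from git history alone. The sketch below is an illustrative stand-in, not PromptConduit's actual implementation: it counts commits whose message carries an assistant trailer (such as the `Co-Authored-By: Claude` line Claude Code adds by default, an assumption you should adjust to your own tools) and divides by total commits.

```python
def commit_assistance_rate(commit_messages, markers=("Co-Authored-By: Claude",)):
    """Fraction of commits whose message contains an AI-assistant marker.

    commit_messages: list of full commit message strings, e.g. obtained
    from `git log --format=%B%x00` and split on the NUL separator.
    markers: trailer substrings to look for; the default is an assumption
    based on Claude Code's co-author trailer convention.
    """
    if not commit_messages:
        return 0.0
    assisted = sum(
        1 for msg in commit_messages
        if any(marker in msg for marker in markers)
    )
    return assisted / len(commit_messages)


# Hypothetical sample history: 2 of 4 commits carry the trailer.
messages = [
    "Fix pagination bug\n\nCo-Authored-By: Claude <noreply@anthropic.com>",
    "Bump dependencies",
    "Add release page\n\nCo-Authored-By: Claude <noreply@anthropic.com>",
    "Update README",
]
print(commit_assistance_rate(messages))  # 0.5
```

Tracked weekly, this single number already answers the "is anyone actually using the seats we bought" question without any vendor dashboard.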

Havoptic: I Built a Visual Release Tracker Because I Couldn't Keep Up

· 8 min read
Scott Havird
Engineer at Georgia-Pacific · ex-WarnerMedia Innovation Lab (ContentAI) · decade shipping AI-powered platforms


Here's a confession: I can't keep up with release notes. Claude Code, Cursor, Windsurf, Gemini CLI, Copilot CLI, Codex CLI, Kiro – they're all shipping at breakneck speed, and every week there's a new version with features that could change how I work. But reading through changelogs? My eyes glaze over by paragraph two. I'm a visual person. I need to see what changed, not read a wall of text about it.

That frustration is why I built Havoptic.

TL;DR

Havoptic is an open-source visual release tracker for AI coding tools — Claude Code, Cursor, Windsurf, Copilot, and more. Built with React on Cloudflare because I got tired of reading four different changelogs to find out what changed each week.

PromptConduit: Building Analytics for AI Coding Assistants

· 6 min read
Scott Havird
Engineer at Georgia-Pacific · ex-WarnerMedia Innovation Lab (ContentAI) · decade shipping AI-powered platforms


Every day, I spend hours having conversations with AI coding assistants. Claude Code helps me debug issues, Cursor generates components, and Gemini CLI answers quick questions. But here's the thing: I had no idea what I was actually asking them. What patterns emerged from my prompts? Which tools got invoked most frequently? Was I getting better at prompting over time?

These questions led me to build PromptConduit.

TL;DR

Claude Code and Cursor ship without an analytics layer. PromptConduit fills that gap — it captures, parses, and visualizes prompts across AI coding tools. After tracking 18,700+ prompts across both tools, I have hard data on what engineers actually ask AI tools to do.
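The capture-and-aggregate step can be sketched in a few lines. This is a hypothetical stand-in for illustration, not PromptConduit's real pipeline: it assumes prompt events arrive as JSON Lines with `tool` and `prompt` fields, a made-up schema for the example.

```python
import json
from collections import Counter


def tool_usage(jsonl_text):
    """Count prompts per tool from a JSON Lines event stream.

    Each non-empty line is assumed to be an object like
    {"tool": "claude-code", "prompt": "..."} -- a hypothetical schema
    for this sketch, not PromptConduit's actual storage format.
    """
    counts = Counter()
    for line in jsonl_text.splitlines():
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        counts[event["tool"]] += 1
    return counts


sample = """
{"tool": "claude-code", "prompt": "debug the failing test"}
{"tool": "cursor", "prompt": "generate a settings component"}
{"tool": "claude-code", "prompt": "refactor the hook config"}
"""
print(tool_usage(sample))  # Counter({'claude-code': 2, 'cursor': 1})
```

Once prompts land in a structure like this, the interesting questions (which tools get invoked most, how prompt patterns shift over time) become ordinary aggregation queries.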

Claude Code Template: Accelerating AI-Assisted Development

· 6 min read
Scott Havird
Engineer at Georgia-Pacific · ex-WarnerMedia Innovation Lab (ContentAI) · decade shipping AI-powered platforms


The future of software development is here, and it's conversational. With AI coding assistants becoming increasingly sophisticated, the way we structure and approach development projects is evolving rapidly. Today, I'm excited to share the Claude Code Template – a comprehensive starter template designed to maximize productivity in AI-assisted development workflows.

TL;DR

Claude Code starter template: devcontainer, custom slash commands, hooks, CLAUDE.md patterns, and analytics wired in from the first commit. Designed to get a team productive with AI-assisted development on day one, not week four.

AI Agents and the Future of Development: Lessons from a Hackathon

· 7 min read
Scott Havird
Engineer at Georgia-Pacific · ex-WarnerMedia Innovation Lab (ContentAI) · decade shipping AI-powered platforms


What happens when you give a small team of developers one week, a pile of AI tools, and the audacity to think we could build something meaningful? This is our story from the KOLO AI Hackathon – a journey into what agent-led development might actually look like.

TL;DR

Seven days, one small team, a pile of AI tools, and a genuine attempt to build something real at the KOLO AI Hackathon. What we learned about agent-led development, where it breaks, and why the future of shipping is closer than most teams think.