3 posts tagged with "productivity"


Ways of Working at the Speed of Thought

· 9 min read
Scott Havird
Engineer at Georgia-Pacific · ex-WarnerMedia Innovation Lab (ContentAI) · decade shipping AI-powered platforms

Product talked to a BA. The BA talked to a scrum master. The scrum master scheduled a refinement. Engineers showed up, debated the shape of the feature, poked at the data model, estimated it, and dropped it into the backlog. Eventually — maybe next sprint, maybe next quarter, maybe never — somebody wrote the code.

That was the shape of enterprise software for two decades. We built elaborate coordination machinery around a single hard fact: writing the code was the slow, expensive, error-prone step. Everything else existed to protect that scarce resource.

That fact is no longer true. And most enterprises are still running the coordination machinery for a world that doesn't exist anymore.

TL;DR

The idea-to-code distance has collapsed. Enterprises that keep the old orchestration machinery wrapped around AI-assisted engineers are paying a relay-race tax on every sprint. Invest in platform architecture (what's good for engineers is a force multiplier for agents), analyze your own prompt patterns before building skills, dismantle ceremonies designed to coordinate humans, and give the restless builders a direct line to the business.

How to Measure AI Coding Assistant Productivity: A Framework for Engineering Teams

· 11 min read
Scott Havird
Engineer at Georgia-Pacific · ex-WarnerMedia Innovation Lab (ContentAI) · decade shipping AI-powered platforms

Here's a question I get asked constantly: "How do you know if AI coding tools are actually making your team more productive?"

It's a fair question. Engineering leaders are investing real budget in Claude Code, Cursor, and GitHub Copilot seats, and developers are restructuring their workflows around these tools. But when someone asks for data, actual numbers on impact, most teams have nothing to show.

I've been working on this problem for over a year, first as an engineering leader trying to justify AI tooling investments at Georgia-Pacific, and then by building PromptConduit to close the analytics gap. Here's the framework I've developed for measuring what actually matters.

TL;DR

Most teams can't prove AI coding ROI because they measure the wrong things. This framework focuses on concrete metrics (commit-assistance rate, PR throughput, cycle-time deltas) instead of vanity numbers. It works across Claude Code, Cursor, and Copilot, and pairs with PromptConduit for automated collection.
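To make the metrics above concrete, here is a minimal sketch of how two of them could be computed from per-commit records. The record shape (`ai_assisted`, `cycle_hours` fields) is an illustrative assumption, not the schema of PromptConduit or any real tool.

```python
from statistics import mean

# Hypothetical per-commit records; in practice these would come from
# your VCS and assistant telemetry, not a hard-coded list.
commits = [
    {"sha": "a1", "ai_assisted": True,  "cycle_hours": 6.0},
    {"sha": "b2", "ai_assisted": False, "cycle_hours": 14.0},
    {"sha": "c3", "ai_assisted": True,  "cycle_hours": 5.5},
    {"sha": "d4", "ai_assisted": True,  "cycle_hours": 8.0},
]

def commit_assistance_rate(commits):
    """Share of commits where the AI assistant contributed code."""
    return sum(c["ai_assisted"] for c in commits) / len(commits)

def cycle_time_delta(commits):
    """Mean cycle time of assisted commits minus unassisted commits.

    A negative value means assisted work landed faster on average.
    """
    assisted = [c["cycle_hours"] for c in commits if c["ai_assisted"]]
    unassisted = [c["cycle_hours"] for c in commits if not c["ai_assisted"]]
    return mean(assisted) - mean(unassisted)

print(f"assistance rate: {commit_assistance_rate(commits):.0%}")
print(f"cycle-time delta: {cycle_time_delta(commits):+.1f} h")
```

The point of the sketch is the shape of the measurement, not the numbers: both metrics are ratios or deltas over the same commit stream, so they stay comparable as the team or tool mix changes.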

Building AI Teams with CrewAI

· 4 min read
Scott Havird
Engineer at Georgia-Pacific · ex-WarnerMedia Innovation Lab (ContentAI) · decade shipping AI-powered platforms

Have you ever wished you could assemble a team of AI experts to tackle your projects? Imagine having a researcher who never sleeps, an analyst who processes data in seconds, and a writer who crafts perfect content – all working together seamlessly. This isn't science fiction; it's possible today with CrewAI.

TL;DR

A production-ready starter template for building intelligent multi-agent teams with CrewAI. It covers agent roles, task orchestration, and the guardrails that turn a demo into something you can actually deploy, based on lessons from real AI agent systems.