Scott Havird
Engineer at Georgia-Pacific · ex-WarnerMedia Innovation Lab (ContentAI) · decade shipping AI-powered platforms

Your Cloudflare Pages Redirect Is Probably Backwards

· 9 min read

Here's a scenario that's more common than it should be: you set up a site on Cloudflare Pages, configure a redirect so that www points to your apex domain, and everything looks fine. Then six months later you check Google Search Console and your impressions are split across four different URL variants. Half your SEO equity is leaking because both www.yourdomain.com and yourdomain.com are being indexed as separate pages.

The culprit isn't your code. It's Cloudflare Pages silently redirecting your apex domain to www — the exact opposite of what you intended.

I ran into this on my own site and spent longer than I'd like to admit tracing it down. Here's exactly what happens and how to fix it permanently.

TL;DR

Cloudflare Pages silently flips the www-vs-apex redirect based on which domain you added first. This post walks through the diagnosis with curl, why the Rulesets API cannot fix it, and the Page Rule plus proxied A-record combination that codifies the correct canonical in Terraform.
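The diagnosis comes down to checking which direction the 301 points. In the post that's a `curl -sI` call; as a hedged illustration, here is a small Python helper of my own that classifies the `Location` header the same way (example.com is a placeholder domain, not from the post):

```python
from urllib.parse import urlparse

def redirect_direction(requested_url: str, location_header: str) -> str:
    """Classify a redirect as 'www->apex', 'apex->www', or 'other'.

    `location_header` is the Location value a server returns for
    `requested_url` (what you'd see from `curl -sI`).
    """
    src = urlparse(requested_url).hostname or ""
    dst = urlparse(location_header).hostname or ""
    if src.startswith("www.") and src[4:] == dst:
        return "www->apex"  # apex is canonical: usually what you want
    if dst.startswith("www.") and dst[4:] == src:
        return "apex->www"  # the silent flip this post is about
    return "other"

# The backwards case: requesting the apex bounces you to www.
print(redirect_direction("https://example.com/", "https://www.example.com/"))
```

If that call reports `apex->www` against your own domains, you have the flipped canonical the post describes.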

AI Coding Tool Comparison 2026: Claude Code vs Cursor vs GitHub Copilot vs Windsurf

· 11 min read

I use multiple AI coding tools every day. Not because I'm indecisive — because different tools genuinely excel at different tasks. After a year of tracking my usage through PromptConduit and monitoring releases through Havoptic, I have a data-informed perspective on where each tool shines and where it falls short.

This isn't a surface-level feature checklist. It's an honest assessment from someone who ships production code with these tools daily, tracks their release velocity, and measures their impact on productivity.

TL;DR

Honest head-to-head across six AI coding tools — Claude Code, Cursor, Copilot, Windsurf, Gemini CLI, Codex CLI — from a year of daily use tracked through PromptConduit. The short answer: Claude Code dominates agentic work, Cursor wins autocomplete, and the rest have specific niches.

Claude Code Hooks: A Complete Guide to Automating Your AI Coding Workflow

· 10 min read

If you've been using Claude Code for more than a week, you've probably noticed a pattern: you keep telling it the same things. "Run prettier after editing." "Don't touch the .env file." "Run the tests before you stop." These aren't complex instructions — they're rules. And rules shouldn't depend on an LLM remembering to follow them.

That's exactly what Claude Code hooks solve. They're deterministic automation that runs at specific points in Claude Code's lifecycle, executed by the harness itself — not by Claude. If you configure a hook to format code after every edit, it will format code after every edit. No exceptions. No "I forgot."
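As a concrete sketch of that determinism, here is what a PreToolUse hook protecting .env files might look like as a small Python script. This is my own illustration: the stdin event shape (`tool_input.file_path`) and the exit-code-2 blocking convention follow my reading of the hooks documentation, so verify both against your Claude Code version.

```python
#!/usr/bin/env python3
"""PreToolUse hook sketch: refuse any edit that targets a .env file.

Assumptions (check the hooks docs for your version): the harness passes
a JSON event on stdin with the tool's arguments under `tool_input`, and
exit code 2 blocks the tool call, feeding stderr back to Claude.
"""
import json
import os
import sys

def should_block(event: dict) -> bool:
    """True when the tool call targets a .env file (.env, .env.local, ...)."""
    name = os.path.basename(event.get("tool_input", {}).get("file_path", ""))
    return name == ".env" or name.startswith(".env.")

def main() -> None:
    event = json.load(sys.stdin)
    if should_block(event):
        print("Blocked: .env files are off-limits.", file=sys.stderr)
        sys.exit(2)  # exit code 2 = block the call; stderr goes to Claude
    sys.exit(0)

# main()  # uncomment when installing this script as a hook command
```

You would register a script like this under the PreToolUse hooks in your settings file with a matcher for the edit/write tools; the exact config keys vary by version, so treat the wiring as a sketch rather than copy-paste config.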

TL;DR

Claude Code hooks turn shaky CLAUDE.md instructions into deterministic automation — the harness runs them, not the LLM. This guide covers the seven practical patterns I use in production: formatting, secret protection, notifications, quality gates, and more. Each with config and rationale.

How to Measure AI Coding Assistant Productivity: A Framework for Engineering Teams

· 11 min read

Here's a question I get asked constantly: "How do you know if AI coding tools are actually making your team more productive?"

It's a fair question. Engineering leaders are investing real budget in Claude Code, Cursor, and GitHub Copilot seats. Developers are restructuring their workflows around these tools. But when someone asks for data — actual numbers on impact — most teams have nothing to show.

I've been working on this problem for over a year, first as an engineering leader trying to justify AI tooling investments at Georgia-Pacific, and then by building PromptConduit to close the analytics gap. Here's the framework I've developed for measuring what actually matters.

TL;DR

Most teams can't prove AI coding ROI because they measure the wrong things. This framework focuses on concrete metrics — commit-assistance rate, PR throughput, cycle-time deltas — instead of vanity numbers. Works across Claude Code, Cursor, and Copilot, and pairs with PromptConduit for automated collection.

Havoptic: I Built a Visual Release Tracker Because I Couldn't Keep Up

· 8 min read

Here's a confession: I can't keep up with release notes. Claude Code, Cursor, Windsurf, Gemini CLI, Copilot CLI, Codex CLI, Kiro – they're all shipping at breakneck speed, and every week there's a new version with features that could change how I work. But reading through changelogs? My eyes glaze over by paragraph two. I'm a visual person. I need to see what changed, not read a wall of text about it.

That frustration is why I built Havoptic.

TL;DR

Havoptic is an open-source visual release tracker for AI coding tools — Claude Code, Cursor, Windsurf, Copilot, and more. Built with React on Cloudflare because I got tired of reading four different changelogs to find out what changed each week.

PromptConduit: Building Analytics for AI Coding Assistants

· 6 min read

Every day, I spend hours having conversations with AI coding assistants. Claude Code helps me debug issues, Cursor generates components, and Gemini CLI answers quick questions. But here's the thing: I had no idea what I was actually asking them. What patterns emerged from my prompts? Which tools got invoked most frequently? Was I getting better at prompting over time?

These questions led me to build PromptConduit.

TL;DR

Claude Code and Cursor ship without an analytics layer. PromptConduit fills that gap — it captures, parses, and visualizes prompts across AI coding tools. After tracking 18,700+ prompts across both tools, I have hard data on what engineers actually ask AI tools to do.

Building Scalable Video Generation with Remotion and Docker: A Developer's Complete Guide

· 7 min read

If you've ever wanted to programmatically generate videos at scale, you've probably discovered that traditional video editing tools don't cut it for automated workflows. Enter Remotion—a React-based video generation framework that lets you create videos using familiar web technologies. But here's the kicker: when you combine Remotion with Docker and GitHub Actions, you get a production-ready video generation pipeline that can scale from your laptop to the cloud.

TL;DR

Scalable programmatic video generation using Remotion, Docker, and GitHub Actions. Turn React components into rendered MP4s in CI — reproducible, parallelizable, and free of the flakiness that comes with running a headless browser on a developer laptop.

Claude Code Template: Accelerating AI-Assisted Development

· 6 min read

The future of software development is here, and it's conversational. With AI coding assistants becoming increasingly sophisticated, the way we structure and approach development projects is evolving rapidly. Today, I'm excited to share the Claude Code Template – a comprehensive starter template designed to maximize productivity in AI-assisted development workflows.

TL;DR

Claude Code starter template: devcontainer, custom slash commands, hooks, CLAUDE.md patterns, and analytics wired in from the first commit. Designed to get a team productive with AI-assisted development on day one, not week four.

AI Agents and the Future of Development: Lessons from a Hackathon

· 7 min read

What happens when you give a small team of developers one week, a pile of AI tools, and the audacity to think they could build something meaningful? This is our story from the KOLO AI Hackathon – a journey into what agent-led development might actually look like.

TL;DR

Seven days, one small team, a pile of AI tools, and a genuine attempt to build something real at the KOLO AI Hackathon. What we learned about agent-led development, where it breaks, and why the future of shipping is closer than most teams think.

Building AI Teams with CrewAI

· 4 min read

Have you ever wished you could assemble a team of AI experts to tackle your projects? Imagine having a researcher who never sleeps, an analyst who processes data in seconds, and a writer who crafts perfect content – all working together seamlessly. This isn't science fiction; it's possible today with CrewAI.

TL;DR

Production-ready starter template for building intelligent multi-agent teams with CrewAI. Covers agent roles, task orchestration, and the guardrails that turn a demo into something you can actually deploy — based on lessons from real AI agent systems.

AI Agents as Enterprise UI

· 4 min read

In the evolving landscape of enterprise architecture, we're witnessing a paradigm shift that could fundamentally transform how businesses interact with their applications. I've observed a compelling trend: the emergence of AI agents as direct intermediaries between users and data layers, effectively replacing traditional UI components. This architectural evolution promises to streamline enterprise applications while significantly reducing infrastructure complexity.

TL;DR

AI agents are about to replace traditional UI layers in enterprise applications. This post argues that the chat-plus-tools pattern is not a chatbot skin — it is a fundamental reshaping of how employees interact with enterprise software, and what that means for SaaS builders.

Auto Sizzle Reel

· 15 min read

Building an AI-Powered Sizzle Reel Extraction Engine: Technical Architecture and Business Rules

A deep dive into the technical implementation of automated video content curation using multi-cloud AI services

TL;DR

Deep dive into the AI-powered sizzle-reel engine built for HBO Max at WarnerMedia. Fuses AWS Rekognition, Azure Video Indexer, and GCP Video Intelligence through the ContentAI platform to auto-extract the most compelling 30–90 second clips from hours of source video.

Comscore POC - Content Taxonomy

· 5 min read

Advertisers want to ensure their ads appear alongside video content that is relevant to their message. Third-party vendors offer intelligent video services that classify videos using the industry-standard IAB taxonomy.

What is IAB? The IAB develops technical standards and best practices for targeted advertising. The IAB also works to educate agencies, brands, and businesses on the importance of digital advertising consent while standardizing how companies run advertisements to comply with GDPR consent guidelines.

To meet our goal, we created a workflow that includes two extractors. The first extractor calls Comscore's Media API to classify videos. The second extractor takes the results from the first extractor and maps the IAB Taxonomy provided by Comscore to WarnerMedia's taxonomy.
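That second extractor is essentially a lookup with a fallback. Here is an illustrative sketch of the idea; the IDs, labels, and mapping below are invented for the example, not the actual Comscore output or WarnerMedia taxonomy:

```python
# Illustrative mapping from IAB category ids to an internal taxonomy.
# The ids, labels, and mapping here are made up for the example.
IAB_TO_INTERNAL = {
    "IAB1": "arts-entertainment",
    "IAB1-5": "movies",
    "IAB17": "sports",
}

def map_category(iab_id: str):
    """Map an IAB id to the internal taxonomy, falling back to the
    parent tier (e.g. 'IAB17-3' -> 'IAB17') for unseen subcategories."""
    if iab_id in IAB_TO_INTERNAL:
        return IAB_TO_INTERNAL[iab_id]
    parent = iab_id.split("-")[0]
    return IAB_TO_INTERNAL.get(parent)  # None when nothing maps

print(map_category("IAB1-5"))   # movies
print(map_category("IAB17-3"))  # sports (via parent fallback)
```

The parent-tier fallback matters in practice: classification APIs return subcategories you have not explicitly mapped, and falling back keeps those videos usable instead of dropping them.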

TL;DR

POC for adding Comscore's contextual-classification API to the ContentAI platform — enriching scene-level metadata with IAB categories so downstream ad and content decisions can reason about context, not just detected objects.

Stitchy POC

· 9 min read

Goal

Given the rising interest in short-form content, hyper-personalization, and meme-making, we wanted to explore some new ideas for using AI in video-making and matching. For Stitchy, we wanted to learn whether we could match videos based on the words spoken, and specifically on their timing. With that achieved, the next objective was to subjectively test how entertaining these stitched videos are. We believe there may be latent commercial and/or marketing applications for Stitchy, including physical locations (AT&T Stores, HP Store, WMIL, etc.) as well as inside other applications and online experiences.

TL;DR

Stitchy: assemble a video mash-up where each word in a target phrase is cut from a different clip in the archive. A playful demo of phoneme-level scene search sitting on top of the ContentAI transcript index.

Getting started - exploring extracted data

· 5 min read

Say you have been tasked with building an application that requires knowing when celebrities are on screen. Where do you get started with ContentAI? Do you need to bring your own video? What if you don't have one? Is there existing data you can look at to start researching and exploring? The answer is yes, there is!

TL;DR

Walkthrough of a ContentAI POC dataset: what the extraction pipeline produces, how to load the JSON, and what questions the data can answer. Aimed at engineers getting started with computer-vision output in a real media archive.

React video player and konva

· 2 min read
Demo of React video player with Konva bounding box overlays tracking objects in video

Motivation

I work with very talented data scientists and engineers. Many of the models they build identify objects and actions in video, and those models produce raw results in CSV or JSON format. It is difficult to validate the results just by looking at the raw data. We have tried feeding the data into 3rd-party tools to visualize it in graphs, and there are plenty of Python projects that can draw bounding boxes on images or videos, but there doesn't seem to be a good solution for visualizing bounding boxes on a video in a website.

TL;DR

Interactive video player that overlays live computer-vision bounding boxes on playback using React, Konva, and Material-UI. Lets reviewers see exactly what an ML model sees, frame by frame — essential for tuning detection thresholds.

FaceMatcher3000 POC

· 10 min read

Goal

At the WMIL (WarnerMedia Innovation Lab), an interesting project was being considered: a dynamic in-person experience. The idea originated from a “how might we create immersive experiences for Lab visitors with our content” brainstorm.

As the ideation continued, the WMIL folks and the ContentAI folks talked through different ideas for matching users with celebrities in our content. As we explored various extractors that could be relevant, we collectively settled on a first lightweight proof of concept: “can we match a person to a scene or moment in one of our movies or shows?” Thus was born… FaceMatcher3000.

TL;DR

FaceMatcher3000: voice-command-driven face and scene lookup across a video catalog. Spoken query → keyword extraction → scene-based results. Built on top of ContentAI's extraction pipeline as a natural-language interface to computer-vision output.

Redenbacher POC

· 7 min read

Overview

Popcorn (details below) is a new app concept from WarnerMedia that is being tested in Q2 2020. The current version involves humans curating “channels” of content, which is sourced through a combination of automated services, crowdsourcing, and editorially selected clips. Project Redenbacher takes the basic Popcorn concept, but fully automates the content/channel selection process through an AI engine.

Popcorn Overview

Popcorn is a new experience, most easily described as “TikTok meets HBOMax”. WarnerMedia premium content is “microsliced” into small clips (“kernels”) and organized into classes (“channels”). Users can easily browse channels by swiping left/right, and watch and browse within a channel by swiping up/down. Kernels are short (under a minute) and can be Favorited or Shared directly from the app. Channels are organized thematically, for example “Cool Car Chases”, “Huge Explosions”, or “Underwater Action Scenes”. Channels comprise content curated by the HBO editorial team, content surfaced by popularity (most watched), and content selected by AI/algorithms.


Redenbacher Variant

Basic Logic

In Redenbacher, instead of selecting a specific channel, the user starts with a random clip. The AI selects a Tag that’s associated with the clip, and automatically queues up a next piece of content that has that same Tag in common. This continues until the user interacts with the stream.
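That tag-chaining loop is simple to sketch. The clip catalog and Tags below are invented for illustration, not the actual Popcorn data:

```python
import random

# Illustrative clip catalog: clip id -> Tags. Names are invented.
CLIPS = {
    "clip-a": {"car chase", "explosion"},
    "clip-b": {"explosion", "underwater"},
    "clip-c": {"underwater", "dance"},
    "clip-d": {"dance"},
}

def next_clip(current: str, seen: set, rng: random.Random):
    """Pick a Tag from the current clip, then queue an unseen clip that
    shares it; return None when the chain dead-ends."""
    tags = list(CLIPS[current])
    rng.shuffle(tags)
    for tag in tags:
        candidates = [c for c, t in CLIPS.items()
                      if tag in t and c != current and c not in seen]
        if candidates:
            return rng.choice(candidates)
    return None

# Walk the stream from a starting clip until the user would interact
# (or the chain runs out of unseen clips sharing a tag).
rng = random.Random(7)
clip, seen = "clip-a", {"clip-a"}
while clip is not None:
    print(clip)
    clip = next_clip(clip, seen, rng)
    if clip is not None:
        seen.add(clip)
```

Tracking `seen` is the one subtlety: without it, two clips that share a Tag can ping-pong forever instead of drifting through the catalog.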

TL;DR

Redenbacher POC: generate bite-sized vertical clips from long-form content by combining scene understanding with pacing rules. TikTok-style format meets WarnerMedia's ContentAI — short-form repurposing of the existing catalog without manual editing.

Condo/Hotel Computer Vision Ideas

· 2 min read

My family and I were on our annual beach vacation during COVID-19, and I had a few ideas on how to help out the condo/hotel industry.

TL;DR

Notes from exploring computer vision for the hospitality industry during COVID-19 — elevator usage analytics, amenity occupancy monitoring, and what signals matter when density matters. Practical CV applied to operational questions.

Scene Finder POC

· 8 min read

Goal

We began this project as an exploration around up-levelling the capabilities of “assistants” (AI, chatbot, virtual, etc.) in the specific field of media & entertainment. As voice (and other) assistant platforms increase in capability, we believe they will focus on more generic features, which gives us the opportunity to specialize in the entertainment vertical. For this phase of work, we are exploring what an “Entertainment Assistant” might do and how it might function.

One such function would be, for example, a voice-driven search where the user doesn’t exactly know what they are looking for: “Show me the first time we see Jon Snow in Game of Thrones” or “Show me dance scenes from classic movies” or “Show me that scene from Friends where they say ‘we were on a break!’”.

In other words: respond to a voice-based command, filter out the relevant keywords, and deliver to the user all the matching scene-based results.
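In sketch form, the pipeline is keyword extraction followed by a scan of the scene index. The toy scene index and stopword list below are my own illustrations, not the ContentAI pipeline's actual data:

```python
import re

# Toy scene index: scene id -> terms produced by the extraction pipeline.
# Ids, terms, and the stopword list are illustrative.
SCENES = {
    "got-s1e1-03": {"jon snow", "winterfell", "game of thrones"},
    "friends-s3e16-12": {"ross", "rachel", "we were on a break", "friends"},
    "singin-rain-07": {"dance", "gene kelly", "rain"},
}

STOPWORDS = {"show", "me", "the", "that", "scene", "from", "where", "they",
             "say", "first", "time", "we", "see", "in", "of", "on", "a"}

def keywords(query: str) -> list:
    """Filter a spoken query down to the relevant keywords."""
    tokens = re.findall(r"[a-z']+", query.lower())
    return [t for t in tokens if t not in STOPWORDS]

def find_scenes(query: str) -> list:
    """Return scene ids matching the query, best matches first."""
    kws = set(keywords(query))
    hits = []
    for scene_id, terms in SCENES.items():
        words = set(re.findall(r"[a-z']+", " ".join(terms)))
        score = len(kws & words)
        if score:
            hits.append((score, scene_id))
    hits.sort(reverse=True)
    return [scene_id for _, scene_id in hits]

print(find_scenes("Show me the first time we see Jon Snow in Game of Thrones"))
# ['got-s1e1-03']
```

The real system ranks over far richer extraction output than a bag of words, but the shape (speech-to-text, keyword filtering, scored scan of the scene index) is the same.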

TL;DR

Scene Finder POC: take a spoken query, filter to the relevant keywords, and return every video scene that matches. A proof of voice-driven search across large video libraries using the ContentAI extraction pipeline.

Speech Bubble Experiment

· 3 min read

Why

Closed captions have been around for a while; it's time to innovate on them.

TL;DR

Experiment replacing traditional closed captions with AR-style speech bubbles anchored to the speaker. Built on React, AWS, and a computer-vision speaker-detection step — part of exploring more engaging and more accessible video captions.

WWDC 2017

· 2 min read

Goal

As someone who loves exploring new technologies, I was thrilled to try out Apple's ARKit at the 2017 WWDC conference. I drove to the nearest Apple store to purchase the latest iPhone, just so I could experiment with ARKit on day one.

TL;DR

WWDC 2017 recap: experimenting with ARKit on the day it shipped by placing 3D Cartoon Network characters into augmented reality on iPhone. A snapshot of what felt possible the instant ARKit became public to developers.

Speed Detection

· 4 min read

Motivation

I live in a neighborhood with a 25mph speed limit. I've noticed that many people drive much faster than the speed limit. I wanted to capture how fast people were driving in my neighborhood. I set up a camera to capture the speed of cars driving by. I used a Hikvision CCTV to capture the video and a Python script with OpenCV to detect the speed of the cars.
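Once detection hands you an object track, the speed itself is just unit math. This sketch separates that math from the OpenCV detection step; the calibration numbers are invented, and it assumes (my assumption, not the original setup's) a camera roughly perpendicular to the road:

```python
def estimate_mph(pixel_displacement: float, frames_elapsed: int,
                 fps: float, pixels_per_foot: float) -> float:
    """Convert an object's pixel travel across frames into miles per hour.

    `pixels_per_foot` must be calibrated against a known real-world
    distance visible in the frame.
    """
    feet = pixel_displacement / pixels_per_foot
    seconds = frames_elapsed / fps
    feet_per_second = feet / seconds
    return feet_per_second * 3600 / 5280  # ft/s -> mph

# Invented calibration: a car tracked 352 px over 20 frames at 30 fps,
# with 8 px covering one foot of road.
speed = estimate_mph(352, 20, fps=30, pixels_per_foot=8)
print(f"{speed:.1f} mph")  # 45.0 mph -- well over the 25 mph limit
```

Calibration dominates accuracy here: get `pixels_per_foot` wrong by 20% and every reading is off by 20%, which is why a known reference distance in frame matters more than the detector.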

TL;DR

Built a neighborhood speed-detection rig using a Hikvision CCTV camera, Python, and OpenCV. Pulled frames over RTSP, detected vehicles, and computed speed against the 25mph limit — early computer-vision work that foreshadowed later ContentAI pipelines.

Tumbling Timmy on App Store

· 4 min read

Goal

Hey there! I'm super excited to announce that my fun and quirky physics-based game, Tumbling Timmy, is now available on the App Store! If you've been following my journey as a solo entrepreneur since the game's initial launch…

TL;DR

Tumbling Timmy — an Angry-Birds-style physics puzzler — launched on the iOS App Store after finding early traction on Windows Phone. Solo-dev project; a reminder that platform reach matters as much as the game itself.

Welcome

· One min read

Welcome! Thank you for stopping by. This is my first post for my first blog.

I want to use this blog to highlight quick ideas and prototypes with potential use cases.

TL;DR

Welcome post. This blog captures quick experiments and working prototypes across computer vision, AI, and engineering — written from the perspective of an engineer who ships small things fast and writes about what actually worked.