2 posts tagged with "aws_rekognition_video_faces"

Getting started - exploring extracted data

· 5 min read
Scott Havird
Engineer at Georgia-Pacific · ex-WarnerMedia Innovation Lab (ContentAI) · decade shipping AI-powered platforms

Say you've been tasked with building an application that needs to know when celebrities are on screen. Where do you get started with ContentAI? Do you need to bring your own video? What if you don't have one? Is there existing data you can look at to start researching and exploring? The answer is yes, we do!

TL;DR

Walkthrough of a ContentAI POC dataset: what the extraction pipeline produces, how to load the JSON, and what questions the data can answer. Aimed at engineers getting started with computer-vision output in a real media archive.
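As a taste of what "loading the JSON" can look like, here is a minimal sketch. It assumes the extractor output resembles AWS Rekognition's `GetFaceDetection` response shape (a `Faces` array with millisecond `Timestamp` values); the inline sample is illustrative, and real extractor output will carry more fields.

```python
import json

# Hypothetical sample shaped like AWS Rekognition GetFaceDetection output;
# a real extraction result would include bounding boxes, landmarks, etc.
sample = """
{
  "Faces": [
    {"Timestamp": 0,    "Face": {"Confidence": 99.8}},
    {"Timestamp": 500,  "Face": {"Confidence": 98.1}},
    {"Timestamp": 4000, "Face": {"Confidence": 97.5}}
  ]
}
"""

data = json.loads(sample)
# Rekognition timestamps are milliseconds from the start of the video;
# convert to seconds to answer "when is a face on screen?"
times = [f["Timestamp"] / 1000 for f in data["Faces"]]
print(times)  # [0.0, 0.5, 4.0]
```

From a list like `times`, it is a short step to grouping nearby detections into on-screen intervals.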

FaceMatcher3000 POC

· 10 min read
Scott Havird
Engineer at Georgia-Pacific · ex-WarnerMedia Innovation Lab (ContentAI) · decade shipping AI-powered platforms

Goal

At the WMIL (WarnerMedia Innovation Lab), an interesting project was under consideration: a dynamic in-person experience. The idea originated from a "how might we create an immersive experience for Lab visitors with our content?" type of brainstorm.

As the ideation continued, the WMIL and ContentAI teams discussed different ideas around matching users with celebrities in our content. As we explored extractors that could be relevant, we collectively settled on a first lightweight proof of concept: "Can we match a person to a scene or moment in one of our movies or shows?" Thus was born… FaceMatcher3000.

TL;DR

FaceMatcher3000: voice-command-driven face and scene lookup across a video catalog. Spoken query → keyword extraction → scene-based results. Built on top of ContentAI's extraction pipeline as a natural-language interface to computer-vision output.
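The "keyword extraction → scene-based results" step can be sketched as a simple overlap ranking. The scene index and function names below are illustrative assumptions, not the actual FaceMatcher3000 implementation:

```python
# Hypothetical scene index: each scene carries a set of keyword tags
# derived from the extraction pipeline's computer-vision output.
scenes = [
    {"title": "Rooftop chase", "tags": {"rooftop", "chase", "night"}},
    {"title": "Diner scene",   "tags": {"diner", "coffee", "morning"}},
]

def match_scenes(query: str) -> list[str]:
    """Rank scenes by how many query keywords overlap their tags."""
    keywords = set(query.lower().split())
    ranked = sorted(
        scenes,
        key=lambda s: len(keywords & s["tags"]),
        reverse=True,
    )
    # Keep only scenes with at least one keyword hit.
    return [s["title"] for s in ranked if keywords & s["tags"]]

print(match_scenes("show me the rooftop chase"))  # ['Rooftop chase']
```

In a real system the spoken query would first pass through speech-to-text, and the tag sets would come from the extractors rather than being hand-written.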