
Scott Havird

Advertisers want to ensure their ads appear alongside video content that is relevant to their message. Third-party vendors provide intelligent video services that classify videos using the industry-standard IAB taxonomy.

What is the IAB? The Interactive Advertising Bureau (IAB) develops technical standards and best practices for targeted advertising. It also works to educate agencies, brands, and businesses on the importance of digital advertising consent while standardizing how companies run advertisements to comply with GDPR consent guidelines.

To meet our goal, we created a workflow with two extractors. The first extractor calls Comscore's Media API to classify videos. The second takes the results from the first and maps the IAB taxonomy categories provided by Comscore to WarnerMedia's taxonomy.
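As a rough illustration of what the second extractor does, here is a minimal TypeScript sketch of the taxonomy mapping step. The category names, lookup table, and result shapes are assumptions for the example, not Comscore's actual API output or WarnerMedia's real taxonomy.

```typescript
// Hypothetical sketch of the second extractor: mapping Comscore IAB
// categories to an internal WarnerMedia taxonomy. Names and shapes are
// illustrative only.
interface ComscoreClassification {
  videoId: string;
  iabCategories: string[]; // e.g. ["Automotive", "Sports"]
}

interface MappedClassification {
  videoId: string;
  wmCategories: string[];
}

// Example lookup table; a real mapping would cover the full taxonomy.
const IAB_TO_WM: Record<string, string> = {
  Automotive: "Cars & Motors",
  Sports: "Sports & Fitness",
  "Movies & Television": "Entertainment",
};

function mapToWarnerMediaTaxonomy(
  input: ComscoreClassification
): MappedClassification {
  const wmCategories = input.iabCategories
    .map((c) => IAB_TO_WM[c])
    .filter((c): c is string => Boolean(c)); // drop categories with no mapping
  return { videoId: input.videoId, wmCategories };
}
```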


Goal

Given the rising interest in short-form content, hyper-personalization, and meme-making, we wanted to explore new ideas for using AI in video-making and matching. For Stitchy, we wanted to learn whether we could match videos based on the words spoken and, specifically, the timing of those words. With that achieved, the next objective was to subjectively test how entertaining these stitched videos are. We believe there may be latent commercial and/or marketing applications for Stitchy, including physical locations (AT&T Stores, HP Store, WMIL, etc.) as well as inside other applications and online experiences.
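As a rough sketch of the timing-based matching idea (not Stitchy's actual algorithm), one way to score a pair of clips is to count words spoken at similar offsets. The transcript shape and timing tolerance below are assumptions.

```typescript
// Illustrative sketch: score how well two clips "stitch" by counting shared
// words spoken at similar offsets from the start of each clip.
interface TimedWord {
  word: string;
  start: number; // seconds from the start of the clip
}

function timingMatchScore(
  a: TimedWord[],
  b: TimedWord[],
  toleranceSec = 0.25
): number {
  if (!a.length || !b.length) return 0;
  let matches = 0;
  for (const wa of a) {
    const hit = b.some(
      (wb) =>
        wb.word.toLowerCase() === wa.word.toLowerCase() &&
        Math.abs(wb.start - wa.start) <= toleranceSec
    );
    if (hit) matches++;
  }
  // Normalize by the shorter transcript so scores fall in [0, 1].
  return matches / Math.min(a.length, b.length);
}
```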

react video player and konva

Motivation

I work with very talented data scientists and engineers. Many of the models they are building are used to identify objects and actions in a video, and those models produce raw results in CSV or JSON format. It is difficult to validate their results just by looking at the raw data. We have tried feeding the data into third-party tools to visualize it in graphs, and there are plenty of Python projects that can draw bounding boxes on images or videos, but there doesn't seem to be a good solution for visualizing bounding boxes over a video on a website.
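This is roughly the approach we were after: a transparent canvas layered on top of the video player, with boxes drawn from the model output. The sketch below uses react-player and react-konva; the detection shape, dimensions, and component name are assumptions for illustration, and a real overlay would also sync boxes to the current playback time.

```typescript
// Minimal sketch of overlaying model output on a video in the browser,
// using react-player for playback and react-konva for drawing.
import React from "react";
import ReactPlayer from "react-player";
import { Stage, Layer, Rect, Text, Group } from "react-konva";

interface Detection {
  label: string;
  x: number; // pixel coordinates relative to the player
  y: number;
  width: number;
  height: number;
}

const WIDTH = 640;
const HEIGHT = 360;

export function AnnotatedVideo({
  url,
  detections,
}: {
  url: string;
  detections: Detection[];
}) {
  return (
    <div style={{ position: "relative", width: WIDTH, height: HEIGHT }}>
      <ReactPlayer url={url} width={`${WIDTH}px`} height={`${HEIGHT}px`} playing controls />
      {/* Transparent canvas stacked on top of the video */}
      <Stage
        width={WIDTH}
        height={HEIGHT}
        style={{ position: "absolute", top: 0, left: 0, pointerEvents: "none" }}
      >
        <Layer>
          {detections.map((d, i) => (
            <Group key={i}>
              <Rect x={d.x} y={d.y} width={d.width} height={d.height} stroke="red" strokeWidth={2} />
              <Text x={d.x} y={d.y - 16} text={d.label} fill="red" fontSize={14} />
            </Group>
          ))}
        </Layer>
      </Stage>
    </div>
  );
}
```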


Goal

At the WMIL (WarnerMedia Innovation Lab), an interesting project was being considered: a dynamic in-person experience. The idea originated from a “how might we create an immersive experience for Lab visitors with our content” type of brainstorm.

As the ideation continued, the WMIL folks and the ContentAI folks talked about different ideas around matching users with celebrities in our content. As we explored various extractors that could be relevant, we collectively settled on a first lightweight proof of concept: “can we match a person to a scene or moment in one of our movies or shows?” Thus was born… FaceMatcher3000.
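Stripped down, the core of the idea is nearest-neighbor matching over face embeddings. The sketch below is not the actual FaceMatcher3000 implementation; the embedding source, catalog shape, and similarity measure are assumptions for illustration.

```typescript
// Sketch: compare a visitor's face embedding against precomputed embeddings
// for faces detected in scenes, and return the closest moment.
interface SceneFace {
  title: string;      // movie or show
  timestamp: number;  // seconds into the title
  embedding: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function bestMatch(visitor: number[], catalog: SceneFace[]): SceneFace | undefined {
  let best: SceneFace | undefined;
  let bestScore = -Infinity;
  for (const face of catalog) {
    const score = cosineSimilarity(visitor, face.embedding);
    if (score > bestScore) {
      bestScore = score;
      best = face;
    }
  }
  return best;
}
```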


Overview

Popcorn (details below) is a new app concept from WarnerMedia that is being tested in Q2 2020. The current version involves humans curating “channels” of content, which is sourced through a combination of automated services, crowdsourcing, and editorially selected clips. Project Redenbacher takes the basic Popcorn concept, but fully automates the content/channel selection process through an AI engine.

Popcorn Overview

Popcorn is a new experience, most easily described as “TikTok meets HBOMax”. WarnerMedia premium content is “microsliced” into small clips (“kernels”) and organized into classes (“channels”). Users can easily browse between channels by swiping left/right, and watch and browse within a channel by swiping up/down. Kernels are short (under a minute) and can be Favorited or Shared directly from the app. Channels are organized thematically, for example “Cool Car Chases”, “Huge Explosions”, or “Underwater Action Scenes”. Channels comprise content selected by the HBO editorial team, content surfaced by popularity/most-watched metrics, and content chosen by AI/algorithms.

Demo

removed

Redenbacher Variant

Basic Logic

In Redenbacher, instead of selecting a specific channel, the user starts with a random clip. The AI selects a Tag associated with that clip and automatically queues up the next piece of content that shares the same Tag. This continues until the user interacts with the stream.
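A minimal sketch of that selection loop might look like the following. The clip and tag shapes are assumptions, and the real engine presumably weighs more signals than a single shared Tag.

```typescript
// Sketch of the Redenbacher selection loop described above.
interface Clip {
  id: string;
  tags: string[];
}

function pickRandom<T>(items: T[]): T {
  return items[Math.floor(Math.random() * items.length)];
}

// Given the clip that just played, queue the next clip that shares one of its tags.
function nextClip(current: Clip, catalog: Clip[]): Clip | undefined {
  const tag = pickRandom(current.tags);
  const candidates = catalog.filter(
    (c) => c.id !== current.id && c.tags.includes(tag)
  );
  return candidates.length ? pickRandom(candidates) : undefined;
}
```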


Goal

We began this project as an exploration around up-leveling the capabilities of “assistants” (AI, chatbot, virtual, etc.) in the specific field of media & entertainment. As the overall platform of voice (and other) assistants increases in capability, we believe they will focus on more generic features, which gives us the opportunity to specialize in the entertainment vertical. For this phase of work, we are exploring the concept of what an “Entertainment Assistant” might do, and how it might function.

One such function would be, for example, a voice-driven search where the user doesn’t know exactly what they are looking for: “Show me the first time we see Jon Snow in Game of Thrones”, “Show me dance scenes from classic movies”, or “Show me that scene from Friends where they say ‘we were on a break!’”.

In other words: respond to a voice-based command, extract the relevant keywords, and deliver to the user all the matching scene-based results.
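A very rough sketch of that flow, assuming simple keyword matching over a scene index (a real assistant would use proper NLU and a much richer index; the stop-word list and scene shape here are illustrative):

```typescript
// Sketch: pull keywords out of an utterance and match them against an index
// of scene descriptions.
interface Scene {
  title: string;
  description: string;
  startSec: number;
}

const STOP_WORDS = new Set([
  "show", "me", "the", "that", "from", "where", "they", "say", "in", "we", "see",
]);

function extractKeywords(utterance: string): string[] {
  return utterance
    .toLowerCase()
    .replace(/[^a-z0-9\s]/g, "")
    .split(/\s+/)
    .filter((w) => w && !STOP_WORDS.has(w));
}

function findScenes(utterance: string, index: Scene[]): Scene[] {
  const keywords = extractKeywords(utterance);
  return index.filter((scene) => {
    const haystack = `${scene.title} ${scene.description}`.toLowerCase();
    return keywords.every((k) => haystack.includes(k));
  });
}

// e.g. findScenes("Show me dance scenes from classic movies", sceneIndex)
```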