
6 posts tagged with "contentai"


Comscore POC - Content Taxonomy

· 4 min read
Scott Havird
Engineer

Advertisers want to ensure their ads appear alongside video content that is relevant to their message. Third-party vendors provide intelligent video services that classify videos using the industry-standard IAB taxonomy.

What is the IAB? The IAB (Interactive Advertising Bureau) develops technical standards and best practices for targeted advertising. The IAB also works to educate agencies, brands, and businesses on the importance of digital advertising consent while standardizing how companies run advertisements to comply with GDPR consent guidelines.

To meet our goal, we created a workflow with two extractors. The first extractor calls Comscore's Media API to classify videos. The second extractor takes the results from the first and maps the IAB taxonomy returned by Comscore to WarnerMedia's taxonomy.
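To illustrate the shape of the second extractor, here is a minimal sketch of mapping IAB categories from a Comscore result onto an internal taxonomy. The category names, mapping table, and result format below are illustrative assumptions, not the actual workflow code.

```python
# Hypothetical sketch of the second extractor: translating IAB categories
# returned by Comscore into an internal WarnerMedia taxonomy.
# The mapping table and result shape are illustrative only.

IAB_TO_WM = {
    "IAB17 Sports": "wm.sports",
    "IAB1-6 Music": "wm.entertainment.music",
    "IAB2 Automotive": "wm.lifestyle.automotive",
}

def map_taxonomy(comscore_result: dict) -> list:
    """Translate the IAB categories in a classification result into the
    internal taxonomy, dropping categories that have no mapping."""
    iab_categories = comscore_result.get("categories", [])
    return [IAB_TO_WM[c] for c in iab_categories if c in IAB_TO_WM]

# Example: a classification result produced by the first extractor
print(map_taxonomy({"categories": ["IAB17 Sports", "IAB9 Hobbies"]}))
# -> ['wm.sports']
```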

Stitchy POC

· 8 min read
Scott Havird
Engineer

Goal

Given the rising interest in short-form content, hyper-personalization, and meme-making, we wanted to explore new ideas for using AI in video-making and matching. For Stitchy, we wanted to learn whether we could match videos based on the words spoken and, specifically, the timing of those words. With that achieved, the next objective was to subjectively test the entertainment value of these stitched videos. We believe there may be latent commercial and/or marketing applications for Stitchy, including physical locations (AT&T Stores, HP Store, WMIL, etc.) as well as other applications and online experiences.
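As a rough sketch of the matching idea (not the actual Stitchy implementation), two clips could be paired when they share a word spoken at roughly the same offset within each clip. The word-level transcript format and the timing tolerance below are assumptions for illustration.

```python
# Illustrative sketch: match two clips when they share a word spoken at
# roughly the same offset. Transcripts are assumed to be lists of
# (word, start_time_in_seconds) pairs.

def timing_matches(transcript_a, transcript_b, tolerance=0.5):
    """Return (word, time_a, time_b) tuples where both clips say the same
    word within `tolerance` seconds of the same offset."""
    matches = []
    for word_a, t_a in transcript_a:
        for word_b, t_b in transcript_b:
            if word_a.lower() == word_b.lower() and abs(t_a - t_b) <= tolerance:
                matches.append((word_a, t_a, t_b))
    return matches

clip_a = [("never", 1.2), ("gonna", 1.5), ("give", 1.8)]
clip_b = [("never", 1.3), ("stop", 1.6), ("give", 1.7)]
print(timing_matches(clip_a, clip_b))
# -> [('never', 1.2, 1.3), ('give', 1.8, 1.7)]
```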

FaceMatcher3000 POC

· 9 min read
Scott Havird
Engineer

Goal

At the WMIL (WarnerMedia Innovation Lab), an interesting project was being considered: a dynamic in-person experience. The idea originated from a “how might we create an immersive experience for Lab visitors with our content” type of brainstorm.

As the ideation continued, the WMIL and ContentAI teams talked through different ideas for matching users with celebrities in our content. As we explored various extractors that could be relevant, we collectively settled on a first lightweight proof of concept: “can we match a person to a scene or moment in one of our movies or shows?” Thus was born… FaceMatcher3000.
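A minimal sketch of that matching step, assuming face embeddings have already been produced (for example, by a face-recognition extractor) for the visitor photo and for faces detected in scenes. The embeddings, threshold, and scene IDs below are illustrative, not the actual FaceMatcher3000 pipeline.

```python
# Sketch of matching a visitor to a scene via face-embedding similarity.
# Embeddings and the similarity threshold are assumptions for illustration.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_scene_match(visitor_embedding, scene_faces, threshold=0.8):
    """scene_faces: list of (scene_id, embedding) pairs. Returns the closest
    scene above the similarity threshold, or None if nothing is close enough."""
    if not scene_faces:
        return None
    score, scene_id = max(
        (cosine_similarity(visitor_embedding, emb), sid) for sid, emb in scene_faces
    )
    return scene_id if score >= threshold else None
```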

Redenbacher POC

· 7 min read
Scott Havird
Engineer

Overview

Popcorn (details below) is a new app concept from WarnerMedia that is being tested in Q2 2020. The current version involves humans curating “channels” of content, which is sourced through a combination of automated services, crowdsourcing, and editorially selected clips. Project Redenbacher takes the basic Popcorn concept, but fully automates the content/channel selection process through an AI engine.

Popcorn Overview

Popcorn is a new experience, most easily described as “TikTok meets HBO Max”. WarnerMedia premium content is “microsliced” into small clips (“kernels”) and organized into classes (“channels”). Users can browse between channels by swiping left/right and watch and browse within a channel by swiping up/down. Kernels are short (under a minute) and can be Favorited or Shared directly from the app. Channels are organized thematically, for example “Cool Car Chases”, “Huge Explosions”, or “Underwater Action Scenes”. Channels are populated with content selected by the HBO editorial team, by popularity/most-watched metrics, and by AI/algorithms.
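A rough sketch of the content model described above; the field names are assumptions for illustration, not the actual Popcorn schema.

```python
# Illustrative data model for kernels and channels; field names are assumed.
from dataclasses import dataclass, field

@dataclass
class Kernel:
    clip_id: str
    title: str
    duration_seconds: float          # kernels run under a minute
    tags: list = field(default_factory=list)

@dataclass
class Channel:
    name: str                        # e.g. "Cool Car Chases"
    source: str                      # "editorial", "most_watched", or "ai"
    kernels: list = field(default_factory=list)
```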

Demo

removed

Redenbacher Variant

Basic Logic

In Redenbacher, instead of selecting a specific channel, the user starts with a random clip. The AI selects a tag associated with that clip and automatically queues up the next piece of content that shares the same tag. This continues until the user interacts with the stream.
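Here is a minimal sketch of that tag-chaining logic. The catalog structure, clip fields, and the `user_interacted` hook are hypothetical, used only to illustrate the flow described above.

```python
# Illustrative sketch of Redenbacher's tag-chaining autoplay loop.
# Clips are assumed to be dicts with "id" and "tags"; catalog is a list of clips.
import random

def next_clip(current_clip, catalog, already_seen):
    """Pick a tag from the current clip and queue another clip that shares it."""
    tag = random.choice(current_clip["tags"])
    candidates = [c for c in catalog
                  if tag in c["tags"] and c["id"] not in already_seen]
    return random.choice(candidates) if candidates else None

def autoplay(catalog, user_interacted):
    clip = random.choice(catalog)            # start with a random clip
    seen = {clip["id"]}
    while clip and not user_interacted():    # continue until the user interacts
        yield clip
        clip = next_clip(clip, catalog, seen)
        if clip:
            seen.add(clip["id"])
```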

Scene Finder POC

· 6 min read
Scott Havird
Engineer

Goal

We began this project as an exploration into up-leveling the capabilities of “assistants” (AI, chatbot, virtual, etc.) in the specific field of media & entertainment. As voice (and other) assistant platforms increase in capability, we believe they will focus on more generic features, which gives us the opportunity to specialize in the entertainment vertical. For this phase of work, we are exploring what an “Entertainment Assistant” might do and how it might function.

One such function would be, for example, a voice-driven search where the user doesn't know exactly what they are looking for: “Show me the first time we see Jon Snow in Game of Thrones” or “Show me dance scenes from classic movies” or “Show me that scene from Friends where they say ‘we were on a break!’”.

In other words: respond to a voice command, extract the relevant keywords, and deliver all the matching scene-based results to the user.
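A simplified sketch of that pipeline, assuming the voice command has already been transcribed to text. The stopword list and the keyword-to-scene index are assumptions for illustration, not the actual Scene Finder implementation.

```python
# Illustrative pipeline: transcribed command -> keywords -> scene lookup.
# The stopword list and index structure are assumptions.

STOPWORDS = {"show", "me", "the", "that", "from", "where", "they",
             "say", "scene", "scenes"}

def extract_keywords(command: str) -> set:
    words = command.lower().replace("'", "").replace("!", "").split()
    return {w for w in words if w not in STOPWORDS}

def find_scenes(command: str, scene_index: dict) -> list:
    """scene_index maps keyword -> list of scene IDs; return scenes that
    match every extracted keyword present in the index."""
    hits = [set(scene_index.get(k, [])) for k in extract_keywords(command)]
    hits = [h for h in hits if h]
    return sorted(set.intersection(*hits)) if hits else []

index = {"dance": ["scene-12"], "classic": ["scene-12", "scene-40"]}
print(find_scenes("Show me dance scenes from classic movies", index))
# -> ['scene-12']
```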