UNIVERSAL CITY, Calif. — Content distribution used to be very simple, according to Scott Maddux, VP of business development for TiVo: create programming, broadcast it over the air, and rest assured that the audience was very contained.
But between 2005 and 2007, things began changing, and quickly: mobile video found its footing, YouTube became a huge hit, and OTT video gained popularity. That was also around the time millennials and Gen Z began to come of age as groups the entertainment business needed to cater to, he added, speaking May 25 at the HITS: Spring 2017 event.
“If you’re that same network, you’re thinking about TV Everywhere, you’ve got apps, traditional web portals,” and dozens of OTT, SVOD, transactional VOD, ad-supported VOD and other distribution channels, on hundreds of devices, to worry about, Maddux said.
This is where artificial intelligence and machine learning come into play, helping to curate and enrich content metadata so the right assets land in front of the right audiences, Maddux said during his presentation “Harnessing Machine Learning, A.I. & Metadata to Power-up Your Catalog & Super Glue Content to Viewers.”
“You’re thinking about social media, you’re thinking about Facebook, Snapchat, your audiences are engaging everywhere,” he said. “It’s exciting, but it gets a little messy.” It means content owners need to be much more closely aligned with their consumers, and that’s where machine learning, as applied to metadata, can be a benefit. Metadata has become one of the most powerful tools in media and entertainment, with uses in search optimization, ad targeting, and recommendations.
Images are critical to driving discovery, the factual metadata for content has to be precise, purchase options need to be on point, and formats and distribution channels all need to be accounted for, Maddux said. And when it comes to machine learning, TiVo continuously ingests data from thousands of sources, pulling from content owners, distributors and vendors, then parsing it, cleaning it up, and matching the correct data with the correct content.
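That parse-clean-match pipeline can be illustrated with a minimal sketch. This is not TiVo's actual system; the record shapes, field names and matching rule (normalized title plus release year) are assumptions chosen for illustration only:

```python
import re

def normalize_title(title):
    """Lowercase, strip punctuation and a leading article so variant feeds line up."""
    t = re.sub(r"[^\w\s]", "", title.lower())
    t = re.sub(r"^(the|a|an)\s+", "", t)
    return re.sub(r"\s+", " ", t).strip()

def match_records(catalog, feed):
    """Match incoming feed records to canonical catalog entries.

    catalog: list of dicts with "id", "title", "year" (the clean reference data).
    feed:    list of dicts with "source_id", "title", "year" (a vendor feed).
    Returns (matches, unmatched): a source_id -> catalog id map, plus leftovers.
    """
    index = {(normalize_title(e["title"]), e["year"]): e["id"] for e in catalog}
    matches, unmatched = {}, []
    for rec in feed:
        key = (normalize_title(rec["title"]), rec["year"])
        if key in index:
            matches[rec["source_id"]] = index[key]
        else:
            unmatched.append(rec)
    return matches, unmatched
```

Real pipelines add fuzzy matching, confidence scores and human review queues; the point here is only the shape of the problem, reconciling dirty, duplicated titles against one canonical catalog.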
“What’s happening today for TiVo, is we see a lot of applications for this in the near future, and what we’re doing … through the technologies available, like facial recognition and video signal processing, we know which actor is in what scene, what character is in what scene, and every name that appears in the credits, object identification … and brands,” Maddux said.
For Amazon, machine learning and AI have been used since 1995 to improve existing processes and products, and to help create new product categories, including Amazon Alexa and Echo.
Amazon Web Services recently launched its own AI Services division, extending these capabilities to customers and making it simple for developers to build AI into their applications without being constrained by cost or technology, according to David Pearson, head of business development of AI Services for Amazon Web Services.
His presentation — “Recognizing the Impact of AI in M&E” — showed how AI has progressed to the point where insights into audience engagement and speech recognition have become true benefits for media and entertainment companies.
“And in terms of metadata, what [AI] has enabled is the ability for us to do these detection [services],” Pearson said. “Companies have been doing this for quite some time, at great expense. The science has been there, and what we’ve been focused [on] with the managed services we provide, is the ability to make it easy to use, and easily accessible, so anyone can work in this space.”
Scenes, objects and faces can all be recognized in media and entertainment with AI, and put to great use, Pearson said. One use case might be gauging viewers’ reactions as a show airs, offering near-instant feedback on how engaged consumers are, he added.
“Anyone with audio and visual assets can build an index that refers to the times, locations, the frames, anything, so you can look up who was in what and when,” Pearson said. C-SPAN is among those doing facial indexing with its videos, with nearly 100,000 faces indexed to date, giving it the ability to recognize faces on screen almost immediately.
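The index Pearson describes can be sketched with a toy data structure. In practice the sightings would come from a recognition service scanning the footage; this sketch (all names hypothetical, not C-SPAN's or AWS's implementation) shows only the downstream lookup, who appeared in which video and when:

```python
from collections import defaultdict

class AppearanceIndex:
    """Toy index mapping a recognized name to (video_id, seconds) sightings."""

    def __init__(self):
        self._index = defaultdict(list)

    def record(self, name, video_id, seconds):
        """Log one sighting of a recognized face or credit."""
        self._index[name].append((video_id, seconds))

    def appearances(self, name, video_id=None):
        """Return all sightings of a name, optionally limited to one video."""
        hits = self._index.get(name, [])
        if video_id is not None:
            hits = [h for h in hits if h[0] == video_id]
        return sorted(hits)
```

With such an index populated per frame or per scene, "who was in what and when" becomes a simple key lookup rather than a manual review of the footage.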
Pearson also pointed out another interesting use of AI: chat bots, where an AI application converses in real time with consumers.
Meanwhile, in the presentation “No Monetization, Mo’ Problems: How AI Can Drive Revenue for OTT Services,” David Kulczar, senior offering manager of IBM Cloud Video, stressed how crucial it is today for content companies to know how audiences are using and reacting to content, and how technologies like natural language processing can be put to use.
“You can monetize what was previously very difficult to monetize,” he said, by building new correlations between audiences and content, finding a “360-degree” view of audiences, and better targeting ads toward the right viewers.
This year IBM Cloud Video plans to launch an AI-powered content solution, one that analyzes a company’s content and brings back a “deep understanding” of how consumers respond to it. “Once you have this information you can create the holy grail of solutions, how to advertise, how to market, how to personalize and stop churn,” Kulczar said. That includes preparing content for regions with different regulations, automatically editing it to adhere to the rules of stricter countries.
He shared a scenario where someone watched, say, a Tom Hanks movie, where the story takes place in Paris, with a theme of personal loss. Oh, and Hanks’ character owns a pet. “Once I can understand not just what the consumer watched, but how and why they watched it, I can unlock more power [with the content],” Kulczar said. “I know very specific pieces that will help me recommend extremely relevant content, and advertise and market to that person.”
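Kulczar's scenario amounts to content-based recommendation over rich attributes. A minimal sketch (not IBM's product; the tag sets and the simple Jaccard-overlap scoring are assumptions for illustration) of ranking candidates by how much they share with what the viewer just watched:

```python
def jaccard(a, b):
    """Overlap score between two attribute sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(watched_tags, candidates, top_n=2):
    """Rank candidate titles by tag overlap with the viewer's last watch.

    watched_tags: attributes of the watched title (actor, setting, theme...).
    candidates:   dict of title -> attribute set.
    """
    scored = sorted(candidates.items(),
                    key=lambda kv: jaccard(watched_tags, kv[1]),
                    reverse=True)
    return [title for title, _ in scored[:top_n]]
```

A production system would weight attributes (an actor match may matter more than a setting match) and fold in the "how and why" behavioral signals Kulczar mentions, but the core idea is the same: richer metadata yields finer-grained matches.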
Advanced data, analytics and cognitive solutions have become more readily available today to help content companies find actionable insights, Kulczar said. It’s just up to those content companies to make use of them, he added.
HITS: Spring is the largest gathering of the L.A. entertainment community’s most senior IT executives and technologists. More than 500 people attended HITS: Spring on May 25 at the Sheraton Universal Hotel in Los Angeles. Produced by the Media & Entertainment Services Alliance (MESA), in partnership with the Hollywood IT Society (HITS), the Content Delivery & Security Association (CDSA), and the Smart Content Council, HITS: Spring is presented by Entertainment Partners, with sponsorship by Box, TiVo, Avanade, Amazon Web Services, Expert System, IBM, MarkLogic, MediaSilo, Microsoft Azure, Composite Apps, Deluxe, EIDR, HGST, SAS, Sohonet, Sony DADC NMS, Zaszou IT Consulting and Ooyala.
For more information visit HollywoodITSummit.com.