
M&E Journal: Boosting Online Audience Engagement, Loyalty, Retention and Revenue

By Hillary Henderson, Senior Director of Product Strategy & Management, IBM Watson Media and IBM Cloud Video

By 2021, it’s projected that 82 percent of all IP traffic will be video-related, according to a 2017 forecast by Cisco. Viewers will continue to consume more video content. Advertisers are taking note, too, as they are expected to spend almost $30 billion on online video in 2018, according to a Global Ad Trends Report by WARC.

This growing demand is creating an exciting and challenging time for content providers as they strive to offer viewers more options, better ways to search and discover video, and relevant, interesting recommendations.

Thankfully, these aspects are being advanced through machine learning and artificial intelligence (AI), helping publishers and content owners put video assets that will resonate in front of the right viewer.

Recommendation and discovery needs

More content means more options, but it also presents end users with the challenge of finding something that interests them. Over-the-top (OTT) providers have invested heavily in user experience (UX) design, and some truly impressive interfaces are out there that are visually appealing while also aiding navigation and search. Many providers are on the right path toward building large libraries of compelling content as well, creating an experience that can keep viewers engaged and, importantly, coming back for more. A key goal for many is getting the right content in front of the right audience… or even in front of the right individual viewer.

This is where discovery and recommendations come into play. For discovery, this means properly tagging assets with metadata. Traditional categories like “genre” can work, although they feel like an antiquated way of classifying videos. There are deeper ways to dive into content, from actors all the way down to concepts, and going deeper can greatly aid the end viewer. For example, someone might be interested in watching not just a program on food, but one on desserts in particular. If their search turns up results related specifically to confections, and not food in general, they are more likely to engage with that content.
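To make the idea concrete, here is a minimal sketch, in Python, of concept-level tagging for discovery; the field names, catalog entries and search helper are illustrative assumptions, not part of any particular platform.

```python
# A minimal sketch (hypothetical field names) of concept-level tagging for discovery.
# Assets tagged only by genre would all match a broad "food" query; concept tags let a
# search for "desserts" surface confection-related titles specifically.

from dataclasses import dataclass, field

@dataclass
class VideoAsset:
    title: str
    genre: str                                    # traditional, coarse-grained category
    actors: list = field(default_factory=list)
    concepts: list = field(default_factory=list)  # deeper, machine-generated tags

CATALOG = [
    VideoAsset("Pastry Masters", "food", concepts=["desserts", "baking", "confections"]),
    VideoAsset("Street Eats", "food", concepts=["savory", "travel"]),
]

def search_by_concept(catalog, query):
    """Return assets whose concept tags match the query, not just the broad genre."""
    return [a for a in catalog if query.lower() in (c.lower() for c in a.concepts)]

print([a.title for a in search_by_concept(CATALOG, "desserts")])  # ['Pastry Masters']
```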

Helping viewers discover content through search is an important element, although it can be seen as reactive, responding to queries and browsing. Recommendation is more proactive: a process of keeping someone engaged. A good recommendation engine can have viewers watching for long spans of time, jumping from content to content and potentially seeing more ads as well. However, a recommendation engine faces hurdles in keeping this content relevant to the individual viewer.

Present challenges for recommendations

The value in recommendations is clear: keep a viewer interested by recommending content they are likely to watch. Unfortunately, many implementations today fall short of this goal. In fact, 44 percent of consumers state that generated recommendations are rarely or never what they want to watch, according to a 2017 IBM report. Furthermore, the same survey found that only 10 percent of consumers watch most or all of the shows or movies recommended to them.

This makes current recommendation methods appear inadequate. More content means more possibilities to match a viewer with appealing content, but also more chances that something is recommended that doesn’t appeal to them.

Thankfully, improvements are happening that are advancing how recommendation engines work. Processes infused with AI go deeper into assets and strengthen the basis on which content is recommended.

Cognitive learning and recommendations

Advancements in machine learning and AI are allowing publishers to parse and catalog a huge amount of information related to individual video assets. As libraries grow, the value of AI increases because it scales: a system can automatically skim through videos and identify concepts, context and more, a process that would traditionally require someone to watch content manually, once or even multiple times, to achieve a similar result.

Furthermore, this depth of information lends itself to evaluation and analysis, allowing AI to surface patterns and similarities and to recommend content that better appeals to the end viewer.

As a result, content providers using advanced AI today have a clear advantage over competitors using legacy approaches that have proven inadequate at consistently recommending appealing content. This is in part because they are not reliant on a very limited set of criteria, such as genre or actors alone, when making a recommendation.

IBM Watson Media is helping drive these AI-driven advancements, having introduced IBM Video Recommendations for content owners and publishers. By partnering with IRIS.TV, a cloud-based personalized video programming system, IBM Watson Media can now provide a solution that lets organizations tap into cognitive methods and give their viewers well-informed recommendations. The solution works by transforming video assets into pools of data descriptors and then categorizing content into taxonomies in new ways.

This transcends traditional video metadata with enriched insight drawn from the video itself, with information ranging from objects to language while also including more abstract aspects like emotive mood. As a result, the possibilities for understanding and examining informational relationships are immeasurably broader than with traditional metadata, ultimately making it possible to approach assets from the angle of what makes them appealing or interesting to an individual viewer.
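As a rough illustration of what such a descriptor might look like, the sketch below shows one hypothetical enriched record; the field names and values are assumptions for illustration, not IBM Watson Media's actual data model.

```python
# A minimal sketch (all field names hypothetical) of the kind of enriched descriptor
# such a pipeline might produce for one asset: it captures what is seen and said in
# the video plus more abstract signals like emotive mood, going beyond title/genre metadata.

enriched_descriptor = {
    "asset_id": "vid-0042",
    "objects":  ["kitchen", "whisk", "cake"],        # detected visually
    "keywords": ["recipe", "frosting", "bake-off"],  # from speech-to-text / language analysis
    "mood":     {"joy": 0.72, "excitement": 0.55},   # abstract, emotive signals
    "taxonomy": ["lifestyle/food/desserts"],         # category the descriptor maps into
}
```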

How AI can work for content providers

A cognitive solution to facilitate recommendations starts by analyzing assets and grows by learning from viewing habits. Using IBM Video Recommendations, the solution begins by automatically extracting rich metadata from video content and mapping it to categories. This enriched metadata is then used to build taxonomies that help drive the recommendations engine.
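A minimal sketch of that mapping step, under assumed data shapes, might look like the following; the descriptor format, taxonomy paths and build_taxonomy_index helper are illustrative, not the product's actual implementation.

```python
# A minimal sketch, under assumed data shapes, of the step described above: enriched
# metadata extracted per asset is mapped to categories, and the resulting taxonomy
# index is what a recommendations engine can query. Names are illustrative only.

from collections import defaultdict

def build_taxonomy_index(descriptors):
    """Group asset IDs under each taxonomy path found in their enriched metadata."""
    index = defaultdict(set)
    for d in descriptors:
        for path in d.get("taxonomy", []):
            index[path].add(d["asset_id"])
    return index

descriptors = [
    {"asset_id": "vid-0042", "taxonomy": ["lifestyle/food/desserts"]},
    {"asset_id": "vid-0107", "taxonomy": ["lifestyle/food/desserts", "lifestyle/travel"]},
]

index = build_taxonomy_index(descriptors)
# Candidate pool for a viewer who engages with dessert content:
print(sorted(index["lifestyle/food/desserts"]))  # ['vid-0042', 'vid-0107']
```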

Algorithms involving consumption and audience data are then used to surface the right video, taking in a wide range of factors, including geographic location and even the device being used. This is not a one-and-done process, though, as the solution learns based on perceived preference, for example whether a viewer is more likely to want short-form or long-form content. In fact, the solution can continuously optimize which videos are streamed to individual viewers to create personalized experiences.
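The sketch below illustrates, in simplified form, how consumption and contextual signals could be combined into a ranking and how a viewer profile could be updated as they watch; the weights, thresholds and field names are assumptions, not the actual algorithm.

```python
# A minimal sketch, not the vendor's algorithm, of how consumption and audience signals
# might be combined to rank candidate videos for one viewer. The weights, the
# short/long-form preference update, and all field names are illustrative assumptions.

def score(video, viewer):
    s = 0.0
    # Match on learned duration preference (updated as the viewer watches).
    form = "short" if video["duration_sec"] < 300 else "long"
    s += viewer["form_preference"].get(form, 0.0)
    # Contextual signals such as region and device.
    if viewer["region"] in video.get("popular_regions", []):
        s += 0.5
    if viewer["device"] == "mobile" and form == "short":
        s += 0.25
    return s

def update_preference(viewer, watched_video, completed: bool):
    """Continuously adjust the viewer profile from observed behavior."""
    form = "short" if watched_video["duration_sec"] < 300 else "long"
    delta = 0.1 if completed else -0.05
    viewer["form_preference"][form] = viewer["form_preference"].get(form, 0.0) + delta

viewer = {"region": "US", "device": "mobile", "form_preference": {"short": 0.3, "long": 0.1}}
candidates = [
    {"id": "vid-0042", "duration_sec": 180, "popular_regions": ["US"]},
    {"id": "vid-0107", "duration_sec": 2400, "popular_regions": ["UK"]},
]
ranked = sorted(candidates, key=lambda v: score(v, viewer), reverse=True)
print([v["id"] for v in ranked])  # the short-form US title ranks first for this profile
```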

The platform includes tools that publishers can use to create and manage custom business rules, controlling what content is recommended to audiences and targeting users based on the content they are watching, where they are located, or the platform they are on.
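A rough sketch of how such business rules might be expressed and applied is shown below; the rule shape and matching logic are illustrative assumptions rather than the platform's actual API.

```python
# A minimal sketch of custom business rules as described above; the rule shape and
# matching logic are assumptions, not the platform's actual rule format.

RULES = [
    # Only recommend kids' content to sessions already watching kids' content.
    {"if_watching": "kids", "allow_only": "kids"},
    # Don't surface long-form titles to viewers on mobile, in this example policy.
    {"if_platform": "mobile", "block_tag": "long-form"},
]

def apply_rules(candidates, session):
    allowed = candidates
    for rule in RULES:
        if rule.get("if_watching") and rule["if_watching"] in session["current_tags"]:
            allowed = [c for c in allowed if rule["allow_only"] in c["tags"]]
        if rule.get("if_platform") == session["platform"]:
            allowed = [c for c in allowed if rule["block_tag"] not in c["tags"]]
    return allowed

session = {"current_tags": ["kids"], "platform": "web"}
candidates = [{"id": "a", "tags": ["kids"]}, {"id": "b", "tags": ["drama", "long-form"]}]
print([c["id"] for c in apply_rules(candidates, session)])  # ['a']
```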

In terms of implementation, the solution is enabled via the IRIS.TV Adaptive Plugin in the player header, while features or events can be customized through plugin options. AI-generated metadata is ingested and analyzed from the publisher’s content management system, while Programming Strategist enriches metadata and structures the taxonomy as needed. The solution can also be set up with brand safety for advertisers in mind, preventing their ads from running adjacent to undesired content.
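While the plugin's own options are vendor-specific and not reproduced here, the brand-safety idea itself can be sketched in a few lines; the exclusion-list structure and brand_safe helper below are purely illustrative assumptions.

```python
# A minimal sketch of the brand-safety idea mentioned above: before an ad is paired
# with a recommended video, the placement is checked against an advertiser's exclusion
# list so the ad never runs adjacent to undesired content. Categories are illustrative.

ADVERTISER_EXCLUSIONS = {"family_brand": {"violence", "mature"}}

def brand_safe(advertiser, video_tags):
    """Allow the ad only if none of the video's tags are on the advertiser's exclusion list."""
    return not (ADVERTISER_EXCLUSIONS.get(advertiser, set()) & set(video_tags))

print(brand_safe("family_brand", ["desserts", "baking"]))   # True  -> ad can run
print(brand_safe("family_brand", ["mature", "thriller"]))   # False -> placement blocked
```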

Using AI and analytics to optimize

An analytics dashboard is also included with the solution; it assesses audience engagement and helps relate a value to content across a variety of parameters. Analytics are also provided for programming optimization, helping publishers understand the relationship between content and consumption patterns.
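As an illustration of the kind of roll-up such a dashboard might surface, the sketch below aggregates hypothetical playback events by taxonomy category; the event fields and metrics are assumptions, not the dashboard's actual schema.

```python
# A minimal sketch (assumed event shape) of the kind of aggregation an analytics
# dashboard might surface: engagement rolled up by taxonomy category, which helps
# relate a value to content and shows which categories resonate with viewers.

from collections import defaultdict

events = [
    {"category": "lifestyle/food/desserts", "watch_sec": 240, "completed": True},
    {"category": "lifestyle/food/desserts", "watch_sec": 300, "completed": True},
    {"category": "news/politics",           "watch_sec": 45,  "completed": False},
]

totals = defaultdict(lambda: {"watch_sec": 0, "plays": 0, "completions": 0})
for e in events:
    t = totals[e["category"]]
    t["watch_sec"] += e["watch_sec"]
    t["plays"] += 1
    t["completions"] += e["completed"]

for cat, t in totals.items():
    print(cat,
          "avg watch:", t["watch_sec"] / t["plays"],
          "completion rate:", t["completions"] / t["plays"])
```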

A benefit of this is the ability to discover what type of content is resonating with viewers. This might then shape future licensing decisions or influence what kind of content is created, helping to improve asset yield.

Increasing engagement, increasing revenue

Ultimately, an advanced recommendation solution that uses machine learning can help present personalized video streams or title-by-title recommendations that are likely to engage viewers.

It does this by drawing on a wide range of factors, from environmental influences to knowledge about the various attributes of a video asset. This in turn offers more relevant recommendations to the end user, increasing video consumption and ultimately revenue through things like bolstered ad impressions.
