IBM is leveraging artificial intelligence (AI) to automate the closed captioning process as part of its latest offering, Watson Captioning, according to David Kulczar, senior offering manager at IBM Watson Media.
The new standalone service provides businesses with a “scalable solution that saves time and money,” streamlining workflows to “maximize productivity” and using machine learning technology to “increase caption accuracy over time,” he told the Media & Entertainment Services Alliance (MESA) by email Feb. 6.
In a blog post, he noted that recent years have seen a significant shift toward video as the dominant form of media. “Considering this momentum, the importance of closed captioning has only increased,” he said.
But “delivering closed captions at scale is challenging for media and entertainment companies—they are costly to create, and the manual undertaking can be burdensome to production teams,” he pointed out. Also an issue is the “ever-changing compliance landscape, wherein adapting closed captions to meet regional or industrial guidelines can be tricky,” he said.
Watson Captioning helps solve those challenges, he said, noting it’s a “customizable offering that provides flexibility and productivity, can be easily managed across compliance standards, and has the potential to transform industries beyond media and entertainment.”
The new IBM offering provides a seamless user experience via tools including Machine Generated Captions, Embedded Smart Layout, Watson Caption Editor and Live Captioning, he said. Each company using it has the option to “input a unique glossary of words and phrases for proper context and accuracy,” he pointed out.
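A custom glossary like the one described could work along these lines. The sketch below is purely illustrative — the function names and glossary entries are assumptions for demonstration, not part of any actual Watson Captioning API — but it shows the basic idea of correcting commonly misheard terms after transcription.

```python
# Hypothetical sketch of glossary-based caption correction; the names
# below are illustrative and not part of any actual Watson Captioning API.
import re

def apply_glossary(caption: str, glossary: dict[str, str]) -> str:
    """Replace commonly misheard terms with entries from a custom glossary."""
    for heard, correct in glossary.items():
        # Whole-word, case-insensitive replacement
        caption = re.sub(rf"\b{re.escape(heard)}\b", correct, caption,
                         flags=re.IGNORECASE)
    return caption

# Example: a broadcaster's glossary mapping likely mis-transcriptions
# of brand names to their proper spellings
glossary = {"watts on": "Watson", "eye bm": "IBM"}
print(apply_glossary("Eye BM announced watts on captioning today", glossary))
# → IBM announced Watson captioning today
```

In practice a production system would apply such corrections with phonetic matching and context awareness rather than plain string substitution, but the glossary concept — company-specific terms override the generic speech model — is the same.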
In the future, Watson-generated captions will “deliver sound/audio descriptions that can be edited and formatted in real-time,” he said, adding: “From there, adapting captions to meet compliance standards is far easier. Live Captioning provides broadcasters with the ability to cover content ranging from nightly news, to live sporting events—bringing reliable captioning to viewers in near real-time.”
The Caption Editor tool, meanwhile, gives users the ability to edit captions in real time within the interface. Because of the embedded Smart Layout, lines within captions are automatically separated based on natural pauses in speech or by punctuation, he said. Watson-generated captions are “backed by AI and machine learning, and increase in accuracy and efficiency over time,” he added.
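Punctuation-based line breaking of the kind attributed to Smart Layout can be sketched roughly as follows. This is loosely modeled on the behavior described above, not IBM's actual implementation; the character budget and splitting rules are assumptions.

```python
# Illustrative sketch of punctuation-based caption line breaking, loosely
# modeled on the Smart Layout behavior described above; not IBM's code.
import re

def split_caption(text: str, max_chars: int = 32) -> list[str]:
    """Break a caption into display lines at punctuation, keeping lines short."""
    # Split after sentence-ending or clause punctuation
    clauses = re.split(r"(?<=[.,;?!])\s+", text.strip())
    lines = []
    for clause in clauses:
        # Further wrap any clause that exceeds the per-line character budget
        while len(clause) > max_chars:
            cut = clause.rfind(" ", 0, max_chars)
            if cut == -1:
                break
            lines.append(clause[:cut])
            clause = clause[cut + 1:]
        lines.append(clause)
    return lines

for line in split_caption("Good evening, and welcome to the nightly news. "
                          "Our top story tonight covers the storm."):
    print(line)
```

A real captioning system would also weigh speech-pause timings from the audio, as the article notes, rather than relying on punctuation alone.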
He went on to say: “The regulatory environment surrounding compliant video content is often variable, with the rules differing based on industry, geographic location, and delivery medium. With Watson Captioning, companies can take automatically generated captions and easily alter them to their specific compliance needs. This solution not only addresses regulations surrounding captioning but can aid in compliance for video content at large. By adding a layer of searchable, textual data to video libraries, flagging content that includes profanity or violence is made simpler.”
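Once captions exist as timestamped text, flagging becomes a simple search over that text layer. The sketch below is hypothetical — the data shapes and term list are illustrative assumptions, not a Watson interface — but it shows why a textual layer makes this kind of scan trivial compared with analyzing raw video.

```python
# Hypothetical sketch of how a searchable caption layer could flag content;
# the data structures and term list are illustrative, not a Watson API.
def flag_segments(captions: list[tuple[float, str]],
                  terms: set[str]) -> list[float]:
    """Return timestamps of caption segments containing any flagged term."""
    flagged = []
    for timestamp, text in captions:
        # Normalize each word: strip trailing punctuation, lowercase
        words = {w.strip(".,!?").lower() for w in text.split()}
        if words & terms:  # any flagged term appears in this segment
            flagged.append(timestamp)
    return flagged

captions = [(0.0, "Welcome back to the show"),
            (5.2, "That was a damn close call!")]
print(flag_segments(captions, {"damn"}))
# → [5.2]
```

The returned timestamps could then drive review queues or automated edits, which is the compliance workflow the quote describes.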
While it’s obvious how Watson Captioning can benefit media companies, use cases “reach far beyond just media and entertainment,” he said. For example, the captioning technology could increase education accessibility, he noted.
As the popularity of video content continues to increase, companies must make sure audiences can access reliable closed captioning, and Watson Captioning “aims to tackle the efficiency gap in this process, empowering companies to better meet audience needs while promoting a streamlined workflow,” he said.