NEW YORK — Adobe, Avid and Amazon Web Services (AWS) were among the Media & Entertainment Services Alliance (MESA) members that showcased their latest products, services and strategies at NAB Show New York Oct. 16-17 at Jacob Javits Convention Center.
Adobe is “focused on helping broadcast customers get to the finish line quicker,” Jeff Pedersen, a company marketing manager, said at the show. To help accomplish that, “it’s all about our efficiency and so what we’re really focused on” at the show was previewing the Auto Reframe feature that’s “coming soon” to Adobe Premiere Pro video editing software, he noted.
Adobe unveiled Auto Reframe in September, at the IBC 2019 show in Amsterdam, where it said the new feature is powered by the company’s Adobe Sensei artificial intelligence (AI) and machine learning (ML) platform. Auto Reframe automatically reframes and reformats video content so that the same project can be published in different aspect ratios, from square to vertical to cinematic 16:9 versions, the company said. The new feature, which Adobe called a “must-have in the age of content and platform proliferation” in its announcement, will launch late this year, it said at the time.
Adobe also spotlighted Content-Aware Fill, a new feature for After Effects that the company introduced in the spring, Pedersen pointed out. That’s “another amazing Adobe Sensei feature” and has “received many positive reviews and has won a lot of awards already,” he said.
In promoting Content-Aware Fill earlier this year, Adobe said at its website that “removing an unwanted object or area from a video can be a time-consuming and complex process.” With the feature, users “can remove any unwanted objects such as mics, poles, and people from your video with a few simple steps,” it said, adding: “Powered by Adobe Sensei, this feature is temporally aware, so it automatically removes a selected area and analyzes frames over time to synthesize new pixels from other frames. Simply by drawing a mask around an area, After Effects can instantly replace it with new image detail from other frames. The tool gives you the option to help get the fill to blend seamlessly with the rest of the image. The Content-Aware Fill panel contains various options to help you remove unwanted objects and fill transparent areas.”
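The temporal idea Adobe describes — borrowing pixels from neighboring frames to cover a removed object — can be illustrated with a toy sketch. This is not Adobe’s algorithm (Content-Aware Fill is far more sophisticated, handling camera motion and synthesizing detail); it simply shows the core intuition under a strong simplifying assumption of a locked-off camera, where each masked pixel is filled with the median of that pixel’s values in the other frames:

```python
from statistics import median

def temporal_fill(frames, mask):
    """Toy temporal fill: replace masked pixels in the first frame
    with the median of the same pixel across the remaining frames.
    `frames` is a list of 2D grids of pixel intensities; `mask` is a
    same-shaped grid of booleans (True = pixel to remove).
    Assumes a locked-off camera, so other frames reveal the background."""
    filled = [row[:] for row in frames[0]]  # copy frame 0
    for y, mask_row in enumerate(mask):
        for x, masked in enumerate(mask_row):
            if masked:
                # gather this pixel's value from the remaining frames
                samples = [f[y][x] for f in frames[1:]]
                filled[y][x] = median(samples)
    return filled

# A 2x2 clip: an "object" (value 99) sits at (0,0) in frame 0 only.
frames = [
    [[99, 10], [10, 10]],
    [[12, 10], [10, 10]],
    [[14, 10], [10, 10]],
    [[12, 10], [10, 10]],
]
mask = [[True, False], [False, False]]
print(temporal_fill(frames, mask))  # → [[12, 10], [10, 10]]
```

The median makes the fill robust to a stray frame where the background is briefly occluded, which is one reason analyzing many frames over time beats copying from a single neighboring frame.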
Adobe sees AI being used for creativity as a “major trend” that will be a “continued game changer for video production,” Pedersen said at NAB Show New York. With tools including Auto Reframe and Content-Aware Fill, Adobe is “going to be saving video pros time on tedious tasks so they can maximize their output and reach more audiences and enhance creativity,” he said.
Another thing that Adobe is seeing, “in an age of social media, is news and sports broadcasters need to deliver new content as soon as possible to keep their community engaged,” he said, adding: “With the ability to shoot, edit and upload on the go,” Adobe’s Premiere Rush “helps production crews create polished videos that enable viewers to feel” like they’re “part of the action.”
“Probably the biggest thing we’re showing” at NAB Show New York related to Avid’s MediaCentral media production and management platform was the new MediaCentral Publisher app, according to Raymond Thompson, director of product marketing at the company.
It’s a “significantly improved version” of the publish capabilities that Avid offered previously, he said, noting the company is “white-labeling” the product from Wildmoka. Significant features include the fact that it “auto provisions everything in the cloud, whereas in the past,” with the older version, “you had to actually provision all the transcoding and the content yourself,” he said.
The new, Software-as-a-Service (SaaS) version also “gives us a broader toolset for people to massage the content,” including through the addition of closed captions, the ability to change aspect ratios and the ability to add calls to action, he said. But server-side “ad insertion is probably the biggest thing,” he said, noting that means one can monetize the content using MediaCentral “across different platforms.”
For the distribution side, meanwhile, “we increased the amount of social media platforms to pretty much all of them,” he said, noting “we had only supported like three of them before” and now it’s up to about 15.
In announcing MediaCentral Publisher in September, Avid said it enables media companies to “create content, add graphics and branding, and publish news and sports videos quickly to social media to boost viewership and drive additional revenues.”
Among the other initiatives that Avid touted at NAB Show New York were the new integrations with Haivision Secure Reliable Transport (SRT) Hub that it announced in September, Thompson said. Avid said last month that the integration “enables intelligent live- and file-based cloud media routing across the Microsoft Azure network over IP for cost-effective, high-quality and secure delivery into Avid cloud and on-premise editorial workflows.”
SRT Hub is “basically a routing service in the cloud,” Thompson explained at the show, adding the ingest tool can also run on-prem. This is “enabling news, sports and remote live workflows – so, REMI productions,” he said.
In terms of trends overall, “the move to the cloud is actually happening,” he said, adding: “We’re still on the front end of people migrating workflows to the cloud.” The recent announcement at IBC that Disney and Microsoft entered into a five-year partnership that will look for new ways to create, produce and distribute content using the Microsoft Azure cloud was “big because if Disney is willing to put their content in the cloud, it must be secure enough for everybody else,” he told MESA. “That’s a big step forward,” he said, adding: “What we’re seeing now is people moving away from just doing” proofs of concept “to actually doing real productions in the cloud.”
There is also an “acceleration” of Internet Protocol (IP) “as a way to not only do contribution and distribution, but as a way to sort of do end-to-end production,” he said, predicting: “When 5G gets rolled out, that’s going to be a very big change for the market because I think it’s going to be sort of the next push that further migrates people to” over-the-top (OTT) services and to mobile.
Overcoming latency issues and disaster recovery were among the challenges for video production in the cloud that were tackled by Usman Shakeel, global head of solutions architecture at AWS, and other technology and broadcast executives during a conference panel session called “Re-Tooling for the Cloud.”
“Things are going to fail” within the hardware and systems that broadcasters use, Shakeel said. Therefore, it’s important to “architect for failure and make sure that we have resiliency built in,” he told attendees.
“Customers are asking for” zero latency and “it’s obviously needed,” he went on to say. To help accomplish that, “we’re always looking at expanding our regional footprint,” he said. Pointing to one specific case, he said zero latency will be “very important” for online betting, so it’s important for companies to work together to “come up with solutions” and the right architecture for that, he said. “Are we there yet? No.”
In an interview after the panel session, Shakeel told MESA that broadcasters are increasingly “leveraging the cloud as much as possible” today as they work to modernize their workflows for reasons that include the fact that it’s “cost efficient” and “flexible.” Shifting to the cloud also allows for the leveraging of AI and ML to help handle all the data that companies are dealing with today in their workflows, he said.
AWS services are all built keeping in mind these and other challenges that broadcasters and its other customers are dealing with, he noted. For example, it offers low-latency live video streaming through AWS Elemental Live, AWS Elemental MediaStore and Amazon CloudFront, processing and delivery services that provide faster-than-broadcast (sub-six-second) live streaming video workflows using standard HLS or DASH protocols, according to Shakeel.
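The sub-six-second figure reflects how segmented protocols such as HLS and DASH behave: players typically buffer a few segments before starting playback, so glass-to-glass latency is roughly the segment duration times the buffer depth, plus encode and delivery overhead. A back-of-the-envelope sketch (the buffer depth and overhead values here are illustrative assumptions, not AWS specifications):

```python
def estimate_segmented_latency(segment_duration_s, buffered_segments=3,
                               encode_overhead_s=1.0, delivery_overhead_s=0.5):
    """Rough glass-to-glass latency estimate for segmented streaming
    (HLS/DASH). Players commonly hold ~3 segments before playback
    starts, so shorter segments are the main lever for cutting latency.
    Overhead figures are illustrative assumptions."""
    return (segment_duration_s * buffered_segments
            + encode_overhead_s + delivery_overhead_s)

# Classic 6-second segments leave you well behind the live edge:
print(estimate_segmented_latency(6))  # → 19.5
# 1-second segments get under the ~6 s "faster than broadcast" mark:
print(estimate_segmented_latency(1))  # → 4.5
```

This is why low-latency streaming work centers on shrinking segments (or delivering partial segments) rather than just speeding up the CDN.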
On the ML front, he noted that AWS was showcasing its Automated Metadata Extraction and Analysis capabilities at NAB Show New York. These are cloud-based approaches to “unlocking the value of content libraries”: they integrate ML and analytics services to automatically extract metadata from live or on-demand video streams using a combination of Amazon Transcribe, AWS Elemental MediaConvert, AWS Elemental MediaLive, AWS Elemental MediaPackage, Amazon Rekognition and Amazon Comprehend.
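In a pipeline like the one described, each ML service returns its results asynchronously, and the application’s job is to merge them into a single timecoded index of the content. A minimal sketch of that merge step, using simplified event shapes (timestamped label and word pairs standing in for the much richer real service responses):

```python
def build_metadata_index(label_events, transcript_events, bucket_s=10):
    """Merge ML outputs into a per-time-bucket metadata index.
    `label_events`: (timestamp_s, label) pairs, e.g. from video label
    detection; `transcript_events`: (timestamp_s, word) pairs, e.g.
    from speech-to-text. Buckets are `bucket_s` seconds wide.
    (Simplified event shapes; real service responses are richer.)"""
    index = {}
    for ts, label in label_events:
        bucket = int(ts // bucket_s) * bucket_s
        index.setdefault(bucket, {"labels": set(), "words": []})
        index[bucket]["labels"].add(label)
    for ts, word in transcript_events:
        bucket = int(ts // bucket_s) * bucket_s
        index.setdefault(bucket, {"labels": set(), "words": []})
        index[bucket]["words"].append(word)
    return index

labels = [(2.1, "Stadium"), (4.8, "Crowd"), (12.0, "Scoreboard")]
words = [(3.0, "welcome"), (3.4, "back"), (11.2, "touchdown")]
idx = build_metadata_index(labels, words)
print(sorted(idx))               # → [0, 10]
print(sorted(idx[0]["labels"]))  # → ['Crowd', 'Stadium']
print(idx[10]["words"])          # → ['touchdown']
```

An index keyed by time bucket is what makes a library searchable — a query like “touchdown” can jump straight to the 10-second window where both the transcript and the visual labels agree.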
AWS was also touting its High Dynamic Range (HDR) offerings at NAB Show New York, he noted. It demonstrated the conversion of any combination of HLG, HDR10 and SDR sources for playout as HDR10 and HLG channels at any resolution, which he told MESA “enables broadcasters to quickly and cost-effectively bring HDR channels to the market with a mix of existing SDR and HDR content.”
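Converting among SDR, HDR10 and HLG sources means remapping pixel values through each format’s transfer function. As a flavor of the math involved — this illustrates the published standard, not AWS’s converter implementation — here is the HLG opto-electrical transfer function (OETF) from ITU-R BT.2100, which maps normalized scene light to a signal value:

```python
import math

# HLG OETF constants from ITU-R BT.2100
A = 0.17883277
B = 1 - 4 * A                   # 0.28466892
C = 0.5 - A * math.log(4 * A)   # ~0.55991073

def hlg_oetf(e):
    """Map normalized scene linear light e in [0, 1] to an HLG signal
    value in [0, 1], per the ITU-R BT.2100 OETF: a square-root segment
    for dark scene light, a log segment for bright scene light."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return A * math.log(12 * e - B) + C

print(round(hlg_oetf(1 / 12), 3))  # → 0.5 (the sqrt/log crossover point)
print(round(hlg_oetf(1.0), 3))     # → 1.0 (peak scene light, full signal)
```

The square-root lower segment is what makes HLG broadly compatible with SDR displays, which is why a mixed SDR/HDR library can be played out as a single HLG channel.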