M+E Europe

CPS 2023: Convergent Risks Explores AI Security Risks

The growth of generative artificial intelligence (AI) and AI in general stands to have a major impact on security compliance, Convergent Risks executives said at the Dec. 5 Content Production Summit at The Culver Theater in Culver City, California.

Generative AI is in nearly every conversation and will touch everything we do, from email and social media through to new SaaS applications we develop and our use of third-party solutions, according to Convergent Risks.

Risk will increase in proportion to the level of AI interaction and, without defined best practice and security compliance, it is difficult to be confident that AI is working for us rather than against us in public cloud environments.

“It seems like everybody goes to ChatGPT and asks it to do something” now, Convergent Risks CEO and founder Chris Johnson said during the session “How AI Impacts Security Compliance.”

“I didn’t do that,” he said. “So in the break I thought I’d better do that, so what I did is I went to ChatGPT and I asked it to find a convergent British chatbot in L.A. that would be connecting to a worldwide audience across multiple time zones and label it. And this is what it came up with. It came up with me wearing a pair of sunglasses, a set of pyjamas and slippers.”

Meanwhile, “AI technologies are now commonplace,” he pointed out. “They’re in every component of our supply chain…. But actually it’s about more than just filming and broadcasting TV. It’s across music, it’s across gaming, and it’s across publishing. So every part of the media and entertainment industry is heavily impacted by AI and will only be even more so.”

He added: “We know that AI is going to bring speed, it’s going to bring efficiency, scale and reach, loads and loads of good things. It’s going to financially probably de-risk multimillion-dollar projects if it’s done correctly. That’s only really ever going to be achievable if we deploy it securely [and] we take adequate measures to monitor it and regulate it.”

In fact, he warned: “That’s got to happen.” He pointed to a recent Forbes article that said AI in media and entertainment is currently a $13 billion-plus business. “That’s a lot of brass,” he said. “With a predicted growth rate of 26 percent annually, that means that by the time we get to our zero-trust target date of 2030, if we get there, then it’s going to be worth something like $100 billion. So we probably need to protect that. Or do we need to be protected from it? Because some of the stuff today is really quite scary.”

A “downside” is “bad actors,” he said, noting: “A bad threat actor at all levels of competency in the future is going to be capable of attacking us. That’s a real worry in itself. We can be sure that AI will be used throughout our supply chains. We can be sure that those threat actors are going to use AI against that. So, therefore, it makes common sense that we’re going to have to use AI as a countermeasure to the threat actor. [It’s] a bit complicated,” he conceded. “But that’s what I think [is] going to need to happen.”

Johnson led off the session by announcing a new partnership with technology company Digital Silence. “Our business is becoming more and more technical [and] we needed to expand,” he said.

Also participating in the session were Justin Whitehead, founder and CEO of Digital Silence, and moderator Mathew Gilliat-Smith, EVP at Convergent Risks.

Produced by MESA, the Content Production Summit was presented by Fortinet, and sponsored by Convergent Risks, Friend MTS, Amazon Studios Technology, Indee, NAGRA, EIDR, and Eluv.io, in association with CDSA and the Hollywood IT Society (HITS).