MARINA DEL REY, Calif. — Around April 2016, while the run-up to the presidential election was heating up, a funny thing happened on both Twitter and Facebook.
Thousands of dummy profiles, some of which had been dormant for years, all began springing to life. And they all began sharing takes on society that included extremist and highly polarized language, including anti-Semitic and racially tinged posts. Many were targeted at well-known people with millions of followers and friends, in the hopes the posts would be shared and retweeted as far and as wide as possible.
It was a Russian-led campaign to sow civil unrest, according to Jonathon Morgan, CEO of cybersecurity firm New Knowledge and founder of Data for Democracy. And it worked.
“It’s having a pretty significant impact,” he said Dec. 5, speaking at the annual Content Protection Summit, presented by the Content Delivery & Security Association (CDSA). “If you make it trend, you make it true.”
Morgan’s firm estimates that 170 million Facebook accounts are fake, around 13% of all Facebook profiles. That’s an insane number, he said, noting that the U.S. population is only 326 million. Pair that with how easily false information is spread and shared on social media platforms, and you’ve got a perfect storm for a disinformation campaign unlike any other.
And it’s not just social trends and politics that are in trolls’ sights. Increasingly, brands — including media and entertainment — are finding themselves targeted. Morgan pointed to Netflix and its March announcement that it was appointing former Obama U.N. ambassador Susan Rice to its board of directors. Sure enough, a number of “likely automated” social media accounts began driving an anti-Netflix narrative among far-right online communities.
It’s a serious issue in the digital age, Morgan said, where brand manipulation and fabricated public outrage can cost a company millions of dollars in revenue.
“The broader problem here is we’re eroding trust with the systems we’ve set up to communicate,” Morgan said. “If you can’t believe anything you see, or everyone believes what they want, we’re going to have a difficult time functioning in society.”
There is hope. Facebook, Twitter and other outlets are much more aware of the possibility of social media manipulation, and a brand that finds itself under attack should immediately reach out to those platforms to head off a trolling campaign before it gets too big, Morgan said. News outlets are more aware as well, and are not as quick to report something as “news” just because it’s trending.
The 2018 CDSA Content Protection Summit was presented by SafeStream, and sponsored by EdgeScan, Microsoft Azure, LiveTiles, Aspera, Amazon Web Services, Convergent Risks, Dolby, Illumio, NAGRA, EIDR, the Trusted Partner Network (TPN), Videocites, Human-i-t, Telesoft and Bob Gold and Associates, and was produced by the Media & Entertainment Services Alliance (MESA) in association with CDSA, the Hollywood IT Society (HITS), the Smart Content Council and Women in Technology Hollywood (WiTH).