M+E Daily

2018: The Year of ‘Citizen AI’

This is the final installment of a three-part piece on the advances made in artificial intelligence in 2018, by Yves Bergquist, founder and CEO of AI company Corto, and director of the AI and Neuroscience in Media Project at the Entertainment Technology Center at the University of Southern California (ETC@USC). Parts one and two can be read here and here.

Read all three segments before attending the Jan. 23 webinar “AI and the Future of Entertainment Data,” featuring Bergquist, MarkLogic CTO Matt Turner, and MESA president Guy Finley.

The Year of ‘Citizen AI’

It’s no surprise that a year dominated by GDPR, Russian troll farms, and the Cambridge Analytica scandal also saw an enormous amount of debate around how to make AI models more transparent, explainable, and accountable. At least 18 countries saw their election results affected by “fake news,” and Congressional investigations and academic reports revealed that Russian disinformation campaigns reached 126 million people in 2016 and 2017. And although it’s unclear what real impact those campaigns may have had, 2018 was the year that civil society seriously contemplated “rogue machine intelligence.”

Naturally, this led to a lot of political theater in the U.S. Congress, but also to real and robust debate about how to make AI more friendly to society. Some real action was taken as well: in a vivid case of fighting fire with fire, hundreds of millions of dollars were spent in 2018 developing machine learning applications capable of classifying fake news. This includes Facebook, obviously, as well as MIT, whose Computer Science and Artificial Intelligence Laboratory (CSAIL), in cooperation with the Qatar Computing Research Institute, is at the forefront of this fight. Its current models use supervised learning-enabled language models that not only detect fake news with 65% accuracy, but also reliably predict whether a news piece leans left or right on the political spectrum.
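
The basic recipe described here, supervised classification over labeled news text, is straightforward to sketch. The example below is a minimal, purely illustrative toy (TF-IDF features plus logistic regression on an invented four-article dataset); it is not the CSAIL/QCRI system, which relies on far richer language models and source-level features.

```python
# Minimal sketch of a supervised "fake vs. reliable" news classifier.
# Illustrative only: the labeled corpus is invented and tiny.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical labeled corpus: article text paired with 0 = reliable, 1 = fake.
articles = [
    "Senate passes budget bill after lengthy debate",
    "Scientists confirm miracle berry cures all known diseases overnight",
    "Central bank holds interest rates steady amid inflation concerns",
    "Secret world government admits to controlling weather with satellites",
]
labels = [0, 1, 0, 1]

X_train, X_test, y_train, y_test = train_test_split(
    articles, labels, test_size=0.5, stratify=labels, random_state=0
)

# Word unigrams and bigrams give the model some stylistic signal
# (sensational phrasing) in addition to topical words.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```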

In the same vein, a lot of money was spent in 2018 trying to make AI more legible and explainable to humans, in order to more easily spot bias. This year, Amazon famously scrapped its resume-classifying tool which, it turned out, had been trained on heavily biased data and consistently rejected women. Explainability has been an area of technical focus for a few years now, most notably thanks to DARPA’s “Explainable AI” effort, but 2018 saw major technical advances in the field, including much more experimentation around the famous Local Interpretable Model-Agnostic Explanations (LIME) framework introduced by University of Washington researchers. LIME cleverly allows humans to “mess with” input data and observe the impact of those perturbations on the resulting prediction, and at this point it is nearly a standard in the field.
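
LIME’s core idea, perturb the input and watch how the black box’s prediction moves, can be sketched in a few lines. The snippet below is a simplified from-scratch illustration of that perturb-and-fit-a-local-surrogate loop for text, not the LIME library itself (which adds smarter sampling and feature selection); the “black box” classifier here is an invented stand-in for any trained model.

```python
# Simplified illustration of the LIME idea for text: drop random words, query
# the black-box model on each perturbed version, then fit a weighted linear
# surrogate whose coefficients approximate each word's local effect.
import numpy as np

def black_box_predict(texts):
    """Stand-in black box: returns P(fake) driven by a few 'sensational' words."""
    triggers = {"miracle", "secret", "shocking"}
    hits = [sum(w in triggers for w in t.lower().split()) for t in texts]
    return np.array([1 - 1 / (1 + h) for h in hits])

def explain(text, predict_fn, n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    words = text.split()
    # Binary masks over the words: 1 = keep, 0 = drop.
    masks = rng.integers(0, 2, size=(n_samples, len(words)))
    masks[0] = 1  # keep the unperturbed instance in the sample
    perturbed = [" ".join(w for w, keep in zip(words, m) if keep) for m in masks]
    preds = predict_fn(perturbed)
    # Weight perturbed samples by similarity to the original (fraction of words kept).
    w = np.sqrt(masks.mean(axis=1))[:, None]
    X = np.hstack([masks, np.ones((n_samples, 1))])  # mask features + intercept
    # Weighted least-squares surrogate; drop the intercept, keep per-word effects.
    coefs = np.linalg.lstsq(w * X, w.ravel() * preds, rcond=None)[0][:-1]
    return sorted(zip(words, coefs), key=lambda wc: -abs(wc[1]))

if __name__ == "__main__":
    for word, effect in explain("secret miracle cure shocks doctors", black_box_predict)[:3]:
        print(f"{word}: {effect:+.3f}")
```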

There are many such efforts currently underway, and it’s almost certain that 2019 will see a lot more action in this field.

Overall, AI ethics is so hot that it has become a brand new functional area of work, with companies like Google publicly enacting “principles” (which now often include not participating in militarized AI research) and others, like DeepMind or Facebook, creating whole new “Ethics and Society” departments. In 2018, the Nuffield Foundation launched “Ada,” a charitable trust dedicated to educating “digital ethicists,” no doubt one of the hottest new job titles of 2019.

Which, actually, can be cause for concern.

Just like in 2017 and the years before, the public debate around AI ethics lacked, for the most part, even a basic understanding of what AI is, can be, or could become. Too often we are shown a picture of “rogue,” almighty, and impenetrable AI black boxes running wild through society, solidifying majority bias and spreading misery. Too often we assume that AI works, or that truly autonomous decision systems are around the corner. They’re not. An AI system, or even a machine learning system supporting human decisions, doesn’t work, and likely won’t ever work, without considerable human help and input.

What is true is that, as the research community ramps up these systems, we’ll collectively need to put more insight and effort into making statistical models legible and accountable, and into assessing the societal implications of machine-augmented human decision-making.

But as more attention gets dedicated to AI ethics (as it should) and serious money is spent building a whole industry around it, the number of people with little technical understanding of AI, serious political (or commercial) skin in the game, and an unlimited thirst for attention is about to explode. This is a dangerous combination at a time when governments and legislatures are starting to take up the topic. The 2020 presidential election will be the first in which AI is a real issue. So brace yourselves for more stupid in 2019.

AI in the Sheets, Stupid in the Streets

If you had to pick one proverb to describe AI today, let it be University of Washington computer scientist Pedro Domingos’ famous line:

“The problem is not that artificial intelligence will get too smart and take over the world, the problem is that it’s too stupid and already has.”

This is a stark reality of the AI domain, and 2018 drove it home once again: the leaps and bounds we see in AI and machine learning research still haven’t been translated into meaningfully intelligent products. Sure, the Alexas and Google Homes of the world are impressive, ubiquitous, and fast improving. But they’re still a far cry from the powerful frameworks that research teams (often at the same companies) have developed. Some companies tackled this issue head-on in 2018: Google put DeepMind, which it bought for a half billion dollars in 2014, on a much shorter leash, asking it to contribute more directly to product development, especially in the health space. But by and large, the “applied AI chasm” is here, and it’s growing.

Take autonomous vehicles, for example. Despite tens of billions of dollars invested, thousands of the world’s most brilliant minds at work, and tens of millions of miles driven experimentally throughout 2018, not a single fully autonomous vehicle has yet been deployed commercially.

2018 was indeed not kind to autonomous vehicles. Pilots proved frustrating. Rollouts were delayed. The fatal crash that killed a pedestrian in Tempe, Ariz., in March 2018 brought Uber’s program to a halt. The company had to settle with Waymo in their legal battle over IP theft. Its pilots in Arizona and Pittsburgh were shut down, and its California permit wasn’t renewed (the Pittsburgh program has only recently restarted). Tesla’s (very limited) Autopilot driving function was linked to three accidents in 2018, one of them fatal. It is reportedly not great at discriminating between stationary and moving cars, likely a by-product of the Tesla research team’s exaggerated enthusiasm for object recognition and neural network-driven supervised learning over a more flexible (and intelligent) hybrid model that includes probabilistic reasoning (see above).

Even Google’s Waymo, rightly considered an industry leader (thanks to Google’s billions of dollars invested over a decade), has delayed its full rollout, despite its frantic one-million-miles-per-month training regimen. Latest reports indicate that its current Phoenix pilot (limited to an exclusive club of only 400 “early riders”), which started in April 2017, has been running smoothly, and that the company’s Waymo One taxi service will launch for “several hundred” new members in the same area sometime in 2019.

Meanwhile, Zoox, one of Waymo’s biggest competitors, received in December 2018 the State of California’s very first permit to move beyond its pilot program and start transporting passengers (it must keep a safety driver behind the wheel and is not allowed to charge customers). Considering how conservative California is compared to Arizona, this is a big milestone.

In both cases, it’s interesting to see that 2018 brought a very substantial shift in technical architectures, away from an “all-(deep)-learning” approach and toward a more hybrid model that combines high-end machine learning with a fair amount of traditional if/then rule-based programming, which both regulators and lawyers agree is more reliable and transparent.
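
The shape of such a hybrid stack is easy to sketch: a learned perception model proposes probabilistic detections, and a deterministic, auditable rule layer decides what the vehicle is actually allowed to do. The toy example below is purely illustrative; the detection fields, thresholds, and rules are invented and do not reflect any vendor’s actual architecture.

```python
# Toy illustration of a hybrid driving stack: a learned model outputs
# probabilistic detections, and a deterministic if/then safety layer makes the
# final call. All fields and thresholds are invented for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "vehicle"
    confidence: float   # learned model's confidence in [0, 1]
    distance_m: float   # estimated distance ahead, in meters

def ml_perception(sensor_frame) -> List[Detection]:
    """Stand-in for a neural-network perception model."""
    # In reality this would run camera/lidar data through a trained network.
    return [Detection("pedestrian", confidence=0.62, distance_m=18.0)]

def rule_based_planner(detections: List[Detection], speed_mps: float) -> str:
    """Deterministic, auditable if/then layer sitting on top of the ML output."""
    for d in detections:
        # Rule 1: brake for any plausible pedestrian inside the stopping envelope.
        if d.label == "pedestrian" and d.confidence > 0.3 and d.distance_m < 2.5 * speed_mps:
            return "EMERGENCY_BRAKE"
        # Rule 2: slow down for low-confidence obstacles rather than ignoring them.
        if d.confidence <= 0.3 and d.distance_m < 40:
            return "SLOW_AND_REASSESS"
    return "CONTINUE"

if __name__ == "__main__":
    frame = None  # placeholder sensor input
    action = rule_based_planner(ml_perception(frame), speed_mps=12.0)
    print(action)  # pedestrian at 18 m, stopping envelope 30 m -> EMERGENCY_BRAKE
```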

They’re right. But if/then statements are not AI. And herein lies the problem: the tech is, for the most part, reliable enough to be carefully implemented. But mindsets, organizations, and business models are not.

It is much more than a challenge of skill or even education. Sure, the gaping deficit of AI talent is a big factor holding back the application of AI in the enterprise. This is why MIT announced in 2018 that it would spend $1 billion on a dedicated “AI college.” Having more executives specifically trained in artificial intelligence will definitely create a much stronger pull toward applied AI. But it’s unlikely that this will, in and of itself, dislodge the enterprise bottleneck. What is needed is a true cultural revolution: organizations large and small will have to transform how they think about human vs. machine knowledge, and how to augment the former with the latter. They will have to rethink how they approach organizational agency and the power of the human mind to control and reduce risk. To deploy AI at scale, they will have to gradually shift some of that power and responsibility over to machines they don’t yet fully trust or fully understand.

Register for the free Jan. 23 MarkLogic and MESA webinar “AI and the Future of Entertainment Data” here.