M+E Daily

AI in 2018: More Deep Learning Extensions and Crazy Rich Bayesians

This is the second installment of a three-part piece on the advances made in artificial intelligence in 2018, by Yves Bergquist, founder and CEO of AI company Corto, and director of the AI and Neuroscience in Media Project at the Entertainment Technology Center at the University of Southern California (ETC@USC). Part one can be read here.

Read all three segments before attending the Jan. 23 webinar “AI and the Future of Entertainment Data,” featuring Bergquist, MarkLogic CTO Matt Turner, and MESA president Guy Finley.

More Deep Learning Extensions

With a new academic paper published every half hour or so in 2018, machine learning is still by far the most vigorous domain of AI. And within machine learning, Deep Learning (also called Deep Neural Networks) still dominates the field. This year saw many extensions of DL into new areas, especially natural language.

Fast.ai’s ULMFiT, the Allen Institute’s ELMo and of course Google’s BERT all used new DL architectures to deliver breakthrough accuracy across Natural Language Processing (NLP), ensuring that 2019 and 2020 will see massive improvements in text analysis and chatbot deployment. Exhibit A: Google’s Duplex, demoed in May 2018, is a voice assistant that can make a restaurant reservation while sounding exactly like a human. It’s still very narrow in its domain of application, and rather rough around the edges, but the research community made a lot of progress this year. By all accounts, NLP was the area of AI that saw the most action in 2018, and it’s where we can expect a lot of progress in 2019.
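To make the pretrained-language-model workflow these systems popularized a little more concrete (train once on huge text corpora, then reuse the model for a downstream task), here is a minimal sketch using the Hugging Face transformers library. The library and default model are illustrative assumptions on my part, not tooling described in this article or tied to ULMFiT, ELMo or BERT specifically.

```python
# Minimal sketch of reusing a pretrained BERT-family model for a downstream
# NLP task. The `transformers` library and its default sentiment model are
# illustrative choices, not part of the systems named above.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a pretrained model
print(classifier("The new voice assistant sounds exactly like a human."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```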

Computer vision has made so much progress in the last few years that it didn’t see as much action as NLP in 2018. Generative Adversarial Networks (GANs), in particular the BigGAN model, delivered ever more freakishly real image and video synthesis (new images generated from a training set of images). NVIDIA’s vid2vid video synthesis technique, which outputs completely photorealistic video from a set of input “sketches,” blew everyone away at NeurIPS 2018.
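For readers curious about what “adversarial” means here, the sketch below shows the basic GAN recipe in PyTorch: a generator learns to produce samples that a discriminator cannot tell apart from real data. The network sizes and the random stand-in for real images are placeholders; this is the core idea, not BigGAN or vid2vid.

```python
# A minimal GAN sketch: generator G tries to fool discriminator D,
# while D learns to separate real samples from generated ones.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # illustrative sizes

G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.randn(32, data_dim)  # stand-in for a batch of real images

for step in range(1000):
    # Discriminator step: push real samples toward 1, generated toward 0
    fake = G(torch.randn(32, latent_dim)).detach()
    d_loss = bce(D(real_batch), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator output 1 on fakes
    g_loss = bce(D(G(torch.randn(32, latent_dim))), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```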

These trends, combined with Google’s first commercial deployment of its long-awaited Tensor Processing Unit (TPU) chips, optimized for training very large and very deep neural nets (through hardware stacks of up to … 180 petaflops), put Deep Learning once again at the center of expectations for 2019.

Perhaps even more importantly, 2018 was the year when the research community started asking really smart questions about the limitations of Deep Learning.

In an MIT Technology Review article in November 2018, one of the pioneers of the field, Yoshua Bengio (who, along with two co-authors, wrote the field’s most-referenced book on the method in 2015), publicly expressed doubts about the ability of Deep Learning to solve more general problems than computer vision, signal processing or NLP. In a stunning nod to a rival school of thought in AI, Bengio even mentioned extending Deep Learning to “things like reasoning, learning causality, and exploring the world in order to learn and acquire information.”

This may not sound like anything special to the uninitiated, but it’s a massive deal for the AI research community, which for decades has been divided between those who think that Deep Learning is sufficient to generate human-like General Intelligence (the “Connectionists”), and those who argue that DL needs to be integrated with other methods to allow for more complex probabilistic reasoning from smaller, sparser and more ambiguous data (the “Symbolists”).

Bengio’s words generated a storm of controversy within the research community, and since this is 2018, both camps had their gloves off. Two of the most prominent AI researchers, Yann LeCun (another pioneer of Deep Learning) and Gary Marcus (former head of AI research at Uber, and a vocal critic of Deep Learning), squared off on Twitter in a deliciously belligerent manner. This was well summarized by Marcus in a phenomenal Medium blog post.

Bengio’s words, and the ensuing debate, are highly significant because they indicate a very substantial tipping of the scale away from the “All DL All the Time” mentality that has dominated AI research since 2010, creating more pull (and research funding) for the very niche but vastly promising field of “Neural-Symbolic AI,” which seeks to integrate DL (which is basically fancy curve-fitting) with probabilistic and Bayesian-style reasoning so that AI applications can better generalize to new domains with less data, less computation, and less supervision.
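To make “Bayesian-style reasoning from smaller, sparser data” concrete, here is a toy sketch, a Beta-Bernoulli update in Python chosen purely for illustration: from just three observations it produces a full posterior distribution over an unknown rate, with uncertainty that is explicit and shrinks as evidence accumulates.

```python
# Toy Bayesian update: a handful of observations already yields a usable
# posterior, with honest uncertainty -- the small-data reasoning the
# symbolist/Bayesian camp emphasizes. Prior and data are illustrative.
from scipy.stats import beta

prior_a, prior_b = 1, 1          # uniform prior over the unknown success rate
observations = [1, 1, 0]         # three data points: two successes, one failure

post_a = prior_a + sum(observations)
post_b = prior_b + len(observations) - sum(observations)
posterior = beta(post_a, post_b)

print(posterior.mean())          # posterior mean, ~0.6
print(posterior.interval(0.9))   # wide 90% credible interval from sparse data
```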

Crazy Rich Bayesians

Yoshua Bengio’s veiled criticism of Deep Learning capped a year that saw a spectacular revival of the probabilistic, symbolist and Bayesian school of AI. It is no surprise that one of the most important non-fiction books of 2018, Judea Pearl’s “The Book of Why,” received such attention and universal praise in the AI community: its criticism of purely data-driven methods (“causal questions can never be answered from data alone”) and its advocacy for causal, symbolic reasoning were a welcome breath of fresh air for a field that has had great success applying large datasets and deep neural network architectures to many intelligence problems, but felt fundamentally “stuck” in its approach to General Intelligence.
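Pearl’s point that “causal questions can never be answered from data alone” is often illustrated with Simpson’s paradox. The sketch below uses the classic kidney-stone treatment numbers (a standard statistics-textbook example, not data from this article): treatment A looks worse overall yet better within every subgroup, and only a causal model of how patients were assigned tells you which comparison to trust.

```python
# Simpson's paradox: the same data supports opposite conclusions depending
# on how you slice it, so the causal structure (stone size influencing both
# treatment choice and outcome) must come from outside the data.
# Numbers are the classic kidney-stone example from the statistics literature.
success = {
    ("A", "small"): (81, 87),    ("B", "small"): (234, 270),
    ("A", "large"): (192, 263),  ("B", "large"): (55, 80),
}

def rate(pairs):
    wins = sum(w for w, n in pairs)
    total = sum(n for w, n in pairs)
    return wins / total

for size in ("small", "large"):
    a = rate([success[("A", size)]])
    b = rate([success[("B", size)]])
    print(f"{size} stones: A {a:.0%} vs B {b:.0%}")     # A wins in both groups

a_all = rate([success[("A", "small")], success[("A", "large")]])
b_all = rate([success[("B", "small")], success[("B", "large")]])
print(f"overall:      A {a_all:.0%} vs B {b_all:.0%}")  # yet B wins overall
```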

Pearl’s magnificent book was also a tip of the hat to the small but incredibly vibrant research community on Probabilistic Reasoning, which, from Pei Wang’s Non-Axiomatic Reasoning System at Temple University to the OpenCog team’s Probabilistic Logic Networks, has been working on this problem for more than a decade, in near-total isolation from the mainstream AI community. It’s telling that both of these teams have started receiving millions of dollars from large tech companies in the past few years.

2018 was the year when this probabilistic approach started appearing in the mainstream. Sure, big tech companies like Google, Netflix, Amazon and Facebook have long used probabilistic graphical models to make content recommendations, but this year saw some real innovation in the field, including from leading organizations such as DeepMind and OpenAI, both of which made serious headway toward integrating Deep Learning with Bayesian learning.
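As one simple flavor of what “integrating Deep Learning with Bayesian learning” can look like (an illustrative example, not the specific DeepMind or OpenAI work referenced above), Monte Carlo dropout keeps dropout active at inference time and treats the spread of repeated forward passes as a rough uncertainty estimate:

```python
# Monte Carlo dropout: a cheap way to graft Bayesian-style uncertainty onto
# a deep net by sampling many stochastic forward passes. Network shape and
# input are placeholders; this is only one illustrative DL + Bayesian hybrid.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

x = torch.randn(1, 10)            # a single illustrative input
model.train()                     # keep dropout stochastic at inference time

with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])

print("mean prediction:", samples.mean().item())
print("predictive std :", samples.std().item())   # spread = uncertainty estimate
```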

Many brilliant minds have long had a strong hunch that this is a promising path toward General Intelligence, so this area of research is one that everybody will have their eyes on in 2019.

Visit MESAlliance.org or read the M&E Daily newsletter on Tuesday, Jan. 22 for part three of Yves Bergquist’s look at AI in 2018: “The Year of Citizen AI and AI in the Sheets, Stupid in the Streets.” Register for the free Jan. 23 MarkLogic and MESA webinar “AI and the Future of Entertainment Data” here.