This is the first of a three-part piece on the advances made in artificial intelligence in 2018, by Yves Bergquist, founder and CEO of AI company Corto, and director of the AI and Neuroscience in Media Project at the Entertainment Technology Center at the University of Southern California ([email protected]).
Read all three segments before attending the Jan. 23 webinar “AI and the Future of Entertainment Data,” featuring Bergquist, MarkLogic CTO Matt Turner, and MESA president Guy Finley.
2018 was a pivotal year in AI. With buzz way down and real-life applications way up, practitioners, policymakers and community leaders started having serious discussions about it. There was intense and necessary talk about accountability, fairness, and transparency. In the lab, scientists started broadening the application of Deep Learning with great results, and even started tinkering with hybrid neural net-probabilistic graph models that many (including this author) consider key to more general AI. Challenges still abound in the private sector, but they're getting clearer, and the goal of seeing real AI products and services is nearer than ever.
(Before you head into this post, be aware that I have a strict definition of AI, which includes machine learning but is not interchangeable with it. In this view, a learning architecture, even a Deep Neural Network, performs only two of the three functions required to qualify as AI: 1. It represents knowledge, and 2. It extracts patterns from these representations [curve-fitting]. To qualify as AI, an application must also have agency in the domain it learns from: it must both learn and take action. If it also performs reasoning [AI's fourth leg], even better, but I don't view reasoning as necessary. Agency, on the other hand, is. By this definition, a self-driving car, a game-playing application, or even a thermostat are all AI applications. Deep Learning-powered classifiers are not. They are powerful machine learning applications. The difference between learning and intelligence is agency. I'm not alone in this: such eminent AI experts as Alan Mackworth [chair of AI at the University of British Columbia], Peter Norvig [director of research at Google], and Shane Legg [co-founder of DeepMind] have all put agency at the heart of their own definitions of AI.)
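The learning-versus-agency distinction above can be made concrete with a deliberately minimal sketch (the code and names here are illustrative, not from any real system): a classifier stops at a prediction, while even a humble thermostat closes the loop by acting on its own domain.

```python
def classify(temp_c):
    """A 'learning'-style output: a label, with no effect on the world."""
    return "cold" if temp_c < 20.0 else "warm"


class Thermostat:
    """An agent in the article's sense: it both perceives and takes action."""

    def __init__(self, setpoint=20.0):
        self.setpoint = setpoint
        self.heater_on = False

    def step(self, temp_c):
        # Perceive the temperature, then act: toggling the heater
        # changes the very domain the next reading comes from.
        self.heater_on = temp_c < self.setpoint
        return "heat" if self.heater_on else "idle"


agent = Thermostat(setpoint=20.0)
print(classify(18.0))    # the classifier stops at a label: "cold"
print(agent.step(18.0))  # the agent acts on its domain: "heat"
```

The classifier and the thermostat look at the same input; only the thermostat has agency, which is exactly the line the definition above draws.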
How you feel about what happened in artificial intelligence in 2018 depends on where you look from.
If you’re a researcher, 2018 was truly a pivotal year. No ludicrous “AI against humanity” meme to fight. You got to play with a gaggle of exciting new toys like Facebook’s PyTorch machine learning library. You got to witness the deliciously pugnacious debate between two giants of the field, Yann LeCun and Gary Marcus, about whether or not Deep Learning can truly generalize to broader domains.
And of course, you got to tinker with emerging and thrilling methods like probabilistic graph models and deep learning-enabled natural language processing.
If you’re an executive in a forward-thinking organization full of machine intelligence ambitions, you’re part of the lucky few who probably got to apply this new crop of machine learning frameworks to real-world business problems such as logistics, drug discovery, autonomous driving, chatbots, or image/video/audio analysis. Congrats, you have the best job in the world.
If you’re an AI enthusiast in one of the many “race to the bottom” organizations focused entirely on protecting their ever-shrinking margins, you’re probably still fighting 2017’s dragons: myopic leadership, fragmented data, business unit politics, and clueless legal departments.
You might find solace in the fact that you’re in the vast majority. Indeed, even PwC’s upbeat “Annual Review” of AI in 2018 stated that only 27% of the 1,000 large companies they surveyed had “already implemented AI in multiple areas.” Given that this is a survey, and that it didn’t define AI, you can expect a lot of clever Excel macros to have snuck in there.
It’s unlikely that more than 20% of organizations have machine learning applications (it’s not AI unless you’re a Google or a hedge fund) running in production.
Even in 2018, applied AI is still very hard. Make no mistake: the tech IS there. What’s happening is that 19th century organizational models are trying to solve 21st century problems with 22nd century technology. It’s largely a problem of mindsets and organizations.
To apply AI in the enterprise, you need five things most companies don’t have: educated leaders, lots of properly curated data, lots of money to experiment, and a large and diverse team of absurdly expensive quants (Google’s DeepMind has almost a thousand employees). Most importantly, and most rarely, you need the freedom to apply it all to pursue uncertain goals through unproven methods … and fail miserably. Despite its progress, AI is still very experimental. And it takes a very special kind of organization to put “give me $10 million and maybe you’ll get something” into action.
The lab is where all of these conditions are met, not the boardroom. And once again it’s in the lab that 2018 was the most generous.
Visit MESAlliance.org or read the M&E Daily newsletter on Jan. 18 for part two of Yves Bergquist’s look at AI in 2018: “More Deep Learning Extensions and Crazy Rich Bayesians.” Register for the free Jan. 23 MarkLogic and MESA webinar “AI and the Future of Entertainment Data” here.