Existing research in machine learning and artificial intelligence has been constrained by a focus on specific tasks, chosen for their perceived importance to human intelligence, their expected practical impact, their suitability for testing and comparison, or simply as an accident of research trends.
However, the intelligence landscape extends far beyond our current capabilities, with many unexplored dimensions that present themselves as new opportunities for research.
This symposium explores this landscape across three main topics: a broader perspective of the possible kinds of intelligence beyond human intelligence, better measurements providing an improved understanding of research objectives and breakthroughs, and a more purposeful analysis of where progress should be made in this landscape in order to best benefit society.
Cynthia Dwork is the Gordon McKay Professor of Computer Science at the Paulson School of Engineering and Applied Sciences, the Radcliffe Alumnae Professor at the Radcliffe Institute for Advanced Study, and an Affiliated Faculty Member at Harvard Law School. She has done seminal work in distributed computing, cryptography, and privacy-preserving data analysis, and spearheaded the invention of differential privacy. Her most recent foci include ensuring validity in exploratory data analysis (especially via differential privacy) and fairness in classification.
Zoubin Ghahramani is Professor of Information Engineering at the University of Cambridge with joint appointments at Carnegie Mellon University, University College London and the Alan Turing Institute. He was recently appointed Chief Scientist at Uber. He was one of the founding members of the Gatsby Computational Neuroscience Unit in London, and was elected a Fellow of the Royal Society in 2015. Zoubin has contributed to many fields of AI including Gaussian processes, non-parametric Bayesian methods, clustering, approximate inference algorithms, graphical models, Monte Carlo methods, and semi-supervised learning.
Alison Gopnik is Professor of Psychology and Affiliate Professor of Philosophy at the University of California, Berkeley. She is known, amongst many other things, for developing the "theory theory", championing children's capacity to employ theory-based reasoning. She was the first cognitive scientist to apply probabilistic models to children's learning, particularly using the causal Bayes net framework. In the past 15 years she has applied computational ideas to many areas of early cognitive development, including the learning of physical and social concepts.
Demis Hassabis is co-founder and CEO of DeepMind, a neuroscience-inspired AI company which develops general-purpose learning algorithms and uses them to tackle some of the world’s most pressing challenges. A child chess prodigy, Demis coded the classic game Theme Park aged 17. After graduating from Cambridge University, he founded videogames company Elixir Studios and completed a PhD in cognitive neuroscience at University College London. Science declared his research one of 2007’s top breakthroughs. He is a five-time World Games Champion, recipient of the Royal Society’s Mullard Award, and a Fellow of the Royal Society of Arts and the Royal Academy of Engineering, winning the Academy's Silver Medal. In 2017 he featured in the Time 100 list of most influential people.
Katja Hofmann is a researcher at the Machine Intelligence and Perception group at Microsoft Research Cambridge, where she is the lead researcher for Project Malmo. Using the popular game Minecraft as a platform, Project Malmo aims to develop artificial intelligences that can interpret complex environments and collaborate with other agents, including humans. A key goal of the project is to assist the broader research community in developing new approaches to reinforcement learning. Outside of Project Malmo, Katja works on applying machine learning to information retrieval in order to improve online search and recommendation systems.
Lucia Jacobs is a biologist and Professor of Psychology and Neuroscience at the University of California, Berkeley, where she leads the Laboratory of Cognitive Biology. She is interested in how a mind is created from the building blocks of learning over evolutionary time, starting at the dawn of multicellular life with the evolution of spatial navigation. She studies two domains: spatial cognition, how limbic brain structures, such as the hippocampus and olfactory systems, diversify and adapt to the demands of ecological niches, and "cognition in the wild", the decision processes of tree squirrels as a model behavioral economic system.
Gary Marcus is a scientist, bestselling author, entrepreneur, and AI contrarian. He was CEO and Founder of the machine learning startup Geometric Intelligence, recently acquired by Uber. As a Professor of Psychology and Neural Science at NYU, he has published extensively in fields ranging from human and animal behavior to neuroscience, genetics, and artificial intelligence, often in leading journals such as Science and Nature. As a writer, he contributes frequently to The New Yorker and The New York Times, and is the author of four books, including The Algebraic Mind, Kluge: The Haphazard Evolution of the Human Mind, and The New York Times Bestseller, Guitar Zero. He is also editor of the recent book, The Future of the Brain: Essays By The World's Leading Neuroscientists, featuring the 2014 Nobel Laureates May-Britt and Edvard Moser.
David Runciman is Professor of Politics and Head of the Department of Politics and International Studies at the University of Cambridge. He writes regularly for the London Review of Books and has published numerous books including The Confidence Trap: A History of Democracy in Crisis. His interests include various aspects of contemporary political philosophy and contemporary politics and he has recently worked on the dangers of Artificial Intelligence and Artificial Agents. David is interested in the difference between robots and artificial corporations and markets as well as the effects of franchising out decisions to machines on democracy.
Joshua Tenenbaum is Professor at the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology and is leader of the Computational Cognitive Science Group, where he studies computational models of human learning and inference. He has written numerous influential papers, including 'How to Grow a Mind', exploring how computational models can address deep questions about the nature and origin of human thought. His work combines empirical methods and formal approaches with a focus on probabilistic models, and has narrowed the gap between AI and the capacities of human learners.
The three presentations (including questions at the end) will cover different areas of the broad intelligence landscape: types of intelligence as observed in biology, types of intelligence from a cognitive science perspective, and how new breakthroughs in machine intelligence are taking us to places previously unexplored by human intelligence and other kinds of biological intelligent behaviour.
Some other questions that will be raised to the panel are: "What are the general principles pervading all kinds of intelligent behaviour? And, in contrast, what are the more specific strategies that are very successful for more narrow niches of behaviour?".
This session will have three presentations, also starting from a view on types of intelligence, focusing on the differences between human-like AI and other forms of AI, why human-like AI is important scientifically and technologically, how we might go about building more general and collaborative AI, and how we can measure it.
Some other questions that will be raised to the panel are: "How can we categorize and compare in a fair and nuanced way the kinds of AI systems that are being created?" "Why is it so difficult to create benchmarks that go beyond specific tasks?"
This session will have two presentations on what the priority areas are in the intelligence landscape to benefit society, why they have been chosen and how the progress is evaluated. We will also pay attention to what we can learn about our needs from the history and politics of artificial agency (corporations, states, etc.) in democratic societies.
Some other questions that will be raised to the panel are: "Should AI systems be more task-oriented or ability-oriented? More data-oriented or knowledge-oriented? More rational or more emotional? More self-centered or social?" "What about organizations that combine humans and AI systems?" "Can we agree on a list of challenges and corresponding evaluation benchmarks to accelerate research for beneficial AI?"