Murray Shanahan
Murray Patrick Shanahan is a professor of Cognitive Robotics at Imperial College London,[3] in the Department of Computing, and a senior scientist at DeepMind.[4] He researches artificial intelligence, robotics, and cognitive science.[1][5]

Education
Shanahan was educated at Imperial College London[6] and completed his PhD at the University of Cambridge in 1987,[7] supervised by William F. Clocksin.[2]

Career and research
At Imperial College, in the Department of Computing, Shanahan was a postdoc from 1987 to 1991 and an advanced research fellow from 1991 to 1995. At Queen Mary & Westfield College, he was a senior research fellow from 1995 to 1998. Shanahan then joined the Department of Electrical Engineering at Imperial, and in 2005 moved to the Department of Computing, where he was promoted from Reader to Professor in 2006.[6]

Shanahan was a scientific advisor for Alex Garland's 2014 film Ex Machina.[8] Garland credited Shanahan with correcting an error in his initial scripts regarding the Turing test.[9] Shanahan is on the external advisory board of the Cambridge Centre for the Study of Existential Risk.[10][11]

In 2016, Shanahan and his colleagues published a proof of concept for "Deep Symbolic Reinforcement Learning", a hybrid AI architecture that combines symbolic AI with neural networks and exhibits a form of transfer learning.[12][13] In 2017, citing "the potential (brain drain) on academia of the current tech hiring frenzy" as a concern, Shanahan negotiated a joint position at Imperial College London and DeepMind.[4] The Atlantic and Wired UK have characterized Shanahan as an influential researcher.[14][15]

Books
In 2010, Shanahan published Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds, a book that helped inspire the 2014 film Ex Machina.[16] The book argues that cognition revolves around a process of "inner rehearsal" by an embodied entity working to predict the consequences of its physical actions.[17]

In 2015, Shanahan published The Technological Singularity, which runs through various scenarios following the invention of an artificial intelligence that makes better versions of itself and rapidly outcompetes humans.[18] The book aims to be an evenhanded primer on the issues surrounding superhuman intelligence.[19] Shanahan takes the view that we do not know how superintelligences will behave: whether they will be friendly or hostile, predictable or inscrutable.[20]

Shanahan also authored Solving the Frame Problem (MIT Press, 1997) and co-authored Search, Inference and Dependencies in Artificial Intelligence (Ellis Horwood, 1989).[6]

Views
Shanahan said in 2014 about existential risks from AI: "The AI community does not think it's a substantial worry, whereas the public does think it's much more of an issue. The right place to be is probably in-between those two extremes." He added that "it's probably a good idea for AI researchers to start thinking (now) about the (existential risk) issues that Stephen Hawking and others have raised."[21]

Shanahan said in 2018 that there was no need to panic yet about an AI takeover, because multiple conceptual breakthroughs would be needed for artificial general intelligence (AGI), and "it is impossible to know when (AGI) might be achievable". He stated that AGI would come hand in hand with true understanding, enabling, for example, safer automated vehicles and medical diagnosis applications.[22][23] In 2020, Shanahan characterized AI as lacking the common sense of a human child.[24]