In 2014, Schmidhuber formed a company, Nnaisense, to work on commercial applications of artificial intelligence in fields such as finance, heavy industry and self-driving cars. Sepp Hochreiter, Jaan Tallinn, and Marcus Hutter are advisers to the company.[2] Sales were under US$11 million in 2016; however, Schmidhuber states that the current emphasis is on research and not revenue. Nnaisense raised its first round of capital funding in January 2017. Schmidhuber's overall goal is to create an all-purpose AI by training a single AI in sequence on a variety of narrow tasks.[13]
Research
In the 1980s, backpropagation did not work well for deep learning with long credit assignment paths in artificial neural networks. To overcome this problem, Schmidhuber (1991) proposed a hierarchy of recurrent neural networks (RNNs) pre-trained one level at a time by self-supervised learning.[14] The hierarchy uses predictive coding to learn internal representations at multiple self-organizing time scales, which can substantially facilitate downstream deep learning. The RNN hierarchy can be collapsed into a single RNN by distilling a higher-level chunker network into a lower-level automatizer network.[14][15] In 1993, a chunker solved a deep learning task whose depth exceeded 1000.[16]
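The predictive-coding idea behind the history compressor can be illustrated with a minimal sketch: a low-level predictor consumes every input, and only the inputs it fails to predict are passed up to a slower, higher level. The toy predictor below is untrained, and its parameters, alphabet, and structure are illustrative assumptions, not code from the 1991 paper.

```python
# Sketch of the neural history compressor principle: forward only the
# unpredicted (surprising) inputs to the next, slower level of the hierarchy.
import numpy as np

rng = np.random.default_rng(0)

class TinyRNNPredictor:
    """Toy RNN that tries to predict the next one-hot symbol (untrained here)."""
    def __init__(self, n_symbols, n_hidden=16):
        self.Wxh = rng.normal(0, 0.1, (n_hidden, n_symbols))
        self.Whh = rng.normal(0, 0.1, (n_hidden, n_hidden))
        self.Why = rng.normal(0, 0.1, (n_symbols, n_hidden))
        self.h = np.zeros(n_hidden)

    def step(self, x_onehot):
        self.h = np.tanh(self.Wxh @ x_onehot + self.Whh @ self.h)
        return int(np.argmax(self.Why @ self.h))   # predicted next symbol

n_symbols = 4
level1 = TinyRNNPredictor(n_symbols)
level2_input = []                                  # what the slower level-2 RNN would see

sequence = [0, 1, 2, 3, 0, 1, 2, 3]                # toy repeating sequence
for t in range(len(sequence) - 1):
    x = np.eye(n_symbols)[sequence[t]]
    prediction = level1.step(x)
    if prediction != sequence[t + 1]:              # unexpected input: pass it upward
        level2_input.append((t + 1, sequence[t + 1]))

print("symbols the higher level must handle:", level2_input)
```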
In 1991, Schmidhuber published adversarial neural networks that contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss.[6][17][7][8] The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. This was called "artificial curiosity." In 2014, this principle was used in the generative adversarial network (GAN), where the environmental reaction is 1 or 0 depending on whether the first network's output is in a given set. GANs were the state of the art in generative modeling during the 2015–2020 period.
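The adversarial zero-sum principle, in its 2014 GAN form, can be illustrated with a minimal one-dimensional sketch in which a generator and a discriminator are updated against each other by hand-derived gradient steps. The toy data distribution, scalar parameterization, and learning rate are illustrative assumptions, not code from the original papers.

```python
# Minimal 1-D GAN sketch of the adversarial (zero-sum) principle: a generator G
# and a discriminator D are trained against each other on toy Gaussian data.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Real data: samples from N(3, 0.5). Generator: G(z) = a*z + b with z ~ N(0, 1).
a, b = 0.1, 0.0            # generator parameters
w, c = 0.1, 0.0            # discriminator D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(5000):
    x_real = rng.normal(3.0, 0.5)
    z = rng.normal()
    x_fake = a * z + b

    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)

    # Discriminator ascends log D(x_real) + log(1 - D(x_fake)).
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator ascends log D(G(z)) (non-saturating form of the minimax game).
    d_fake = sigmoid(w * x_fake + c)        # recompute with the updated D
    grad_x = (1 - d_fake) * w               # d log D / d x_fake
    a += lr * grad_x * z
    b += lr * grad_x

print(f"learned generator: x = {a:.2f}*z + {b:.2f}  (real data ~ N(3, 0.5))")
```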
Schmidhuber supervised the 1991 diploma thesis of his student Sepp Hochreiter,[18] which he considered "one of the most important documents in the history of machine learning".[15] It studied the neural history compressor[14] and, more importantly, analyzed and overcame the vanishing gradient problem. This led to the long short-term memory (LSTM), a type of recurrent neural network. The name LSTM was introduced in a technical report (1995), leading to the most-cited LSTM publication (1997), co-authored by Hochreiter and Schmidhuber.[19]
This was not yet the standard LSTM architecture that is used in almost all current applications. The standard LSTM architecture was introduced in 2000 by Felix Gers, Schmidhuber, and Fred Cummins.[20] Today's "vanilla LSTM" using backpropagation through time was published with his student Alex Graves in 2005,[21][22] and its connectionist temporal classification (CTC) training algorithm[23] in 2006. CTC was applied to end-to-end speech recognition with LSTM. By the 2010s, the LSTM had become the dominant technique for a variety of natural language processing tasks including speech recognition and machine translation, and was widely implemented in commercial technologies such as Google Neural Machine Translation,[24] Google Voice transcription[25] and search,[26] and Siri.[27]
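A single forward step of a standard LSTM cell with input, forget, and output gates can be sketched as follows; the weight shapes, random initialization, and toy input sequence are illustrative assumptions rather than any published implementation.

```python
# Sketch of one step of a standard LSTM cell (input, forget and output gates).
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

n_in, n_hidden = 8, 16
# One weight matrix per gate, acting on the concatenated [input, previous hidden state].
W = {g: rng.normal(0, 0.1, (n_hidden, n_in + n_hidden)) for g in "ifoc"}
b = {g: np.zeros(n_hidden) for g in "ifoc"}

def lstm_step(x, h_prev, c_prev):
    z = np.concatenate([x, h_prev])
    i = sigmoid(W["i"] @ z + b["i"])        # input gate
    f = sigmoid(W["f"] @ z + b["f"])        # forget gate
    o = sigmoid(W["o"] @ z + b["o"])        # output gate
    c_tilde = np.tanh(W["c"] @ z + b["c"])  # candidate cell update
    c = f * c_prev + i * c_tilde            # cell state carries information across time
    h = o * np.tanh(c)                      # hidden state / output
    return h, c

h, c = np.zeros(n_hidden), np.zeros(n_hidden)
for x in rng.normal(size=(5, n_in)):        # run a toy 5-step sequence
    h, c = lstm_step(x, h, c)
print(h.shape, c.shape)
```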
In 2014, the state of the art was training "very deep neural networks" with 20 to 30 layers.[28] Stacking too many layers led to a steep reduction in training accuracy,[29] known as the "degradation" problem.[30] In May 2015, Rupesh Kumar Srivastava, Klaus Greff, and Schmidhuber used LSTM principles to create the highway network, a feedforward neural network with hundreds of layers, much deeper than previous networks.[8][31][32] In December 2015, the residual neural network (ResNet) was published, which is a variant of the highway network.[30][33]
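A highway layer can be sketched as a gated mix of a transformation and an identity carry, y = T(x)·H(x) + (1 − T(x))·x; removing the gate yields the residual form y = x + F(x). The dimensions, initialization, and the negative gate bias below are illustrative assumptions, not code from the cited papers.

```python
# Sketch of a highway layer (LSTM-style gating in a feedforward network) and
# its ungated residual special case.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

n = 32
W_H, b_H = rng.normal(0, 0.1, (n, n)), np.zeros(n)
W_T, b_T = rng.normal(0, 0.1, (n, n)), np.full(n, -2.0)   # bias the gate toward carrying x

def highway_layer(x):
    H = np.tanh(W_H @ x + b_H)      # candidate transformation
    T = sigmoid(W_T @ x + b_T)      # transform gate in (0, 1)
    return T * H + (1.0 - T) * x    # gated mix of transformation and identity carry

def residual_layer(x):
    return x + np.tanh(W_H @ x + b_H)   # ResNet-style layer: ungated skip connection

x = rng.normal(size=n)
print(highway_layer(x)[:3], residual_layer(x)[:3])
```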
In 1992, Schmidhuber published the fast weight programmer, an alternative to recurrent neural networks.[9] It has a slow feedforward neural network that learns by gradient descent to control the fast weights of another neural network through outer products of self-generated activation patterns; the fast weight network itself operates on the inputs.[10] This was later shown to be equivalent to the unnormalized linear Transformer.[34][10][35] Schmidhuber used the terminology "learning internal spotlights of attention" in 1993.[36]
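The fast weight programmer idea can be sketched as a slow network that emits two activation patterns per step, whose outer product is added to a fast weight matrix that is then applied to the input; this accumulation of outer products is what later work related to unnormalized linear attention. The untrained random slow network and all dimensions below are illustrative assumptions.

```python
# Sketch of the fast weight programmer principle: a slow net "programs" the
# fast weights of another net via outer products of self-generated patterns.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_key = 8, 8
W_slow = rng.normal(0, 0.1, (2 * n_key, n_in))    # slow net (would be trained by gradient descent)
W_fast = np.zeros((n_key, n_key))                 # fast weights, rewritten at every step

def step(x):
    global W_fast
    kv = W_slow @ x
    k, v = np.tanh(kv[:n_key]), np.tanh(kv[n_key:])  # self-generated activation patterns
    W_fast = W_fast + np.outer(v, k)                  # program the fast net via an outer product
    return W_fast @ x                                 # fast net processes the input

for x in rng.normal(size=(5, n_in)):
    y = step(x)
print(y[:3])
```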
In 2011, Schmidhuber's team at IDSIA with his postdoc Dan Ciresan also achieved dramatic speedups of convolutional neural networks (CNNs) on fast parallel computers called GPUs. An earlier CNN on GPU by Chellapilla et al. (2006) was 4 times faster than an equivalent implementation on CPU.[37] The deep CNN of Dan Ciresan et al. (2011) at IDSIA was already 60 times faster[38] and achieved the first superhuman performance in a computer vision contest in August 2011.[39] Between 15 May 2011 and 10 September 2012, these CNNs won four more image competitions[40][41] and improved the state of the art on multiple image benchmarks.[42] The approach has become central to the field of computer vision.[41] It is based on CNN designs introduced much earlier by Kunihiko Fukushima.[43][41]
Credit disputes
Schmidhuber has controversially argued that he and other researchers have been denied adequate recognition for their contributions to the field of deep learning, in favour of Geoffrey Hinton, Yoshua Bengio and Yann LeCun, who shared the 2018 Turing Award for their work in deep learning.[2][44][45] He wrote a "scathing" 2015 article arguing that Hinton, Bengio and LeCun "heavily cite each other" but "fail to credit the pioneers of the field".[45] In a statement to The New York Times, Yann LeCun wrote that "Jürgen is manically obsessed with recognition and keeps claiming credit he doesn't deserve for many, many things... It causes him to systematically stand up at the end of every talk and claim credit for what was just presented, generally not in a justified manner."[2] Schmidhuber replied that LeCun did this "without any justification, without providing a single example,"[46] and published details of numerous priority disputes with Hinton, Bengio and LeCun.[47][48]
The term "schmidhubered" has been jokingly used in the AI community to describe Schmidhuber's habit of publicly challenging the originality of other researchers' work, a practice seen by some in the AI community as a "rite of passage" for young researchers. Some suggest that Schmidhuber's significant accomplishments have been underappreciated due to his confrontational personality.[49][44]
He has been referred to as the "father of modern AI" or similar,[62] the "father of Generative AI,"[63] and also the "father of deep learning."[64][55] Schmidhuber himself, however, has called Alexey Grigorevich Ivakhnenko the "father of deep learning,"[65][66] and gives credit to many even earlier AI pioneers.[15]
Views
Schmidhuber is a proponent of open-source AI and believes that it will become competitive with commercial closed-source AI.[8] He believes that AI is less threatening than nuclear weapons and does not pose a new existential threat.[58][59]
Since the 1970s, Schmidhuber has wanted to create "intelligent machines that could learn and improve on their own and become smarter than him within his lifetime."[8] He differentiates between two types of AIs: tool AI, such as systems for improving healthcare, and autonomous AIs that set their own goals, perform their own research, and explore the universe. He has worked on both types for decades.[8] He expects the next stage of evolution to be self-improving AIs that will succeed human civilization as the next step in the universe's trend towards ever-increasing complexity, and he expects AI to colonize the visible universe.[8]
^ Schmidhuber, Jürgen (1991). "A possibility for implementing curiosity and boredom in model-building neural controllers". Proc. SAB'1991. MIT Press/Bradford Books. pp. 222–227.
^ Schmidhuber, Jürgen (1 November 1992). "Learning to control fast-weight memories: an alternative to recurrent nets". Neural Computation. 4 (1): 131–139. doi:10.1162/neco.1992.4.1.131. S2CID 16683347.
^ Schlag, Imanol; Irie, Kazuki; Schmidhuber, Jürgen (2021). "Linear Transformers Are Secretly Fast Weight Programmers". ICML 2021. Springer. pp. 9355–9366.
^ Schmidhuber, Jürgen (2010). "Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990–2010)". IEEE Transactions on Autonomous Mental Development. 2 (3): 230–247. doi:10.1109/TAMD.2010.2056368. S2CID 234198.
^ Graves, Alex; Fernández, Santiago; Gomez, Faustino; Schmidhuber, Juergen (2006). "Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks". Proceedings of the International Conference on Machine Learning, ICML 2006: 369–376. CiteSeerX 10.1.1.75.6306.
^ Wu, Yonghui; Schuster, Mike; Chen, Zhifeng; Le, Quoc V.; Norouzi, Mohammad; Macherey, Wolfgang; Krikun, Maxim; Cao, Yuan; Gao, Qin; Macherey, Klaus; Klingner, Jeff; Shah, Apurva; Johnson, Melvin; Liu, Xiaobing; Kaiser, Łukasz; Gouws, Stephan; Kato, Yoshikiyo; Kudo, Taku; Kazawa, Hideto; Stevens, Keith; Kurian, George; Patil, Nishant; Wang, Wei; Young, Cliff; Smith, Jason; Riesa, Jason; Rudnick, Alex; Vinyals, Oriol; Corrado, Greg; Hughes, Macduff; Dean, Jeff (8 October 2016). "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation". arXiv:1609.08144 [cs.CL]. Retrieved May 14, 2017.
^ Schmidhuber, Jürgen (1993). "Reducing the ratio between learning complexity and number of time-varying variables in fully recurrent nets". ICANN 1993. Springer. pp. 460–463.
^ Fukushima, Kunihiko (1980). "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position". Biological Cybernetics. 36 (4): 193–202. doi:10.1007/bf00344251. PMID 7370364. S2CID 206775608.