Jürgen Schmidhuber

Jürgen Schmidhuber (born 17 January 1963)[1] is a German computer scientist noted for his work in the field of artificial intelligence, specifically artificial neural networks. He is a scientific director of the Dalle Molle Institute for Artificial Intelligence Research in Switzerland.[2] He is also director of the Artificial Intelligence Initiative and professor of the Computer Science program in the Computer, Electrical, and Mathematical Sciences and Engineering (CEMSE) division at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia.[3][4]

Jürgen Schmidhuber
Schmidhuber speaking at the AI for GOOD Global Summit in 2017
Born: 17 January 1963[1]
Alma mater: Technical University of Munich
Known for: Long short-term memory, Gödel machine, artificial curiosity, meta-learning
Fields: Artificial intelligence
Institutions: Dalle Molle Institute for Artificial Intelligence Research
Website: people.idsia.ch/~juergen

He is best known for his foundational and highly cited[5] work on long short-term memory (LSTM), a neural network architecture that became the dominant technique for various natural language processing tasks in research and commercial applications in the 2010s. He also introduced principles of meta-learning, generative adversarial networks[6][7][8] and linear transformers,[9][10][8] all of which are widespread in modern AI.

Career

Schmidhuber completed his undergraduate degree (1987) and PhD (1991) at the Technical University of Munich in Germany.[1] His PhD advisors were Wilfried Brauer and Klaus Schulten.[11] He taught there from 2004 until 2009. From 2009[12] until 2021, he was a professor of artificial intelligence at the Università della Svizzera Italiana in Lugano, Switzerland.[1]

He has served as the director of the Dalle Molle Institute for Artificial Intelligence Research (IDSIA), a Swiss AI lab, since 1995.[1]

In 2014, Schmidhuber founded the company Nnaisense to work on commercial applications of artificial intelligence in fields such as finance, heavy industry and self-driving cars. Sepp Hochreiter, Jaan Tallinn, and Marcus Hutter are advisers to the company.[2] Sales were under US$11 million in 2016; however, Schmidhuber said that the emphasis was on research rather than revenue. Nnaisense raised its first round of capital funding in January 2017. Schmidhuber's overall goal is to create an all-purpose AI by training a single AI in sequence on a variety of narrow tasks.[13]

Research

In the 1980s, backpropagation did not work well for deep learning with long credit assignment paths in artificial neural networks. To overcome this problem, Schmidhuber (1991) proposed a hierarchy of recurrent neural networks (RNNs) pre-trained one level at a time by self-supervised learning.[14] The hierarchy uses predictive coding to learn internal representations at multiple self-organizing time scales, which can substantially facilitate downstream deep learning. The hierarchy can be collapsed into a single RNN by distilling a higher-level "chunker" network into a lower-level "automatizer" network.[14][15] In 1993, a chunker solved a deep learning task whose depth exceeded 1000.[16]
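The compression principle can be illustrated with a deliberately simplified, non-neural sketch (the trivial predictor below stands in for a learned RNN chunker and is illustrative, not Schmidhuber's implementation):

```python
# Toy sketch of the history-compressor idea: a lower level tries to predict
# its next input; only UNPREDICTED inputs (surprises) are passed up, so the
# higher level sees a compressed stream on a slower, self-organizing time scale.
def compress(seq, predictor):
    out, prev = [], None
    for x in seq:
        if prev is None or predictor(prev) != x:
            out.append(x)          # surprising input: forward to higher level
        prev = x
    return out

# Trivial stand-in predictor (a real chunker would be a learned RNN):
# predict that the next symbol simply repeats the previous one.
print(compress(list("aaabbbbcc"), lambda prev: prev))  # only change points survive
```

The better the lower level predicts, the fewer events reach the higher level, which is what shortens the effective credit assignment path.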

In 1991, Schmidhuber published adversarial neural networks that contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss.[6][17][7][8] The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. This was called "artificial curiosity." In 2014, this principle was used in a generative adversarial network where the environmental reaction is 1 or 0 depending on whether the first network's output is in a given set. This can be used to create realistic deepfakes.[8]
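The zero-sum dynamic can be illustrated with a deliberately tiny scalar version of the 2014 GAN formulation; the one-parameter "networks", learning rates, and target distribution below are all illustrative:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
mu_real = 3.0          # "real" data: noisy samples near 3 (toy target set)
g = -2.0               # generator: outputs g + noise (hypothetical 1-parameter model)
w, b = 0.0, 0.0        # discriminator/predictor: D(x) = sigmoid(w*x + b)
lr = 0.05

for step in range(3000):
    real = mu_real + random.gauss(0, 0.3)
    fake = g + random.gauss(0, 0.3)
    # Discriminator: gradient descent on cross-entropy, labeling real as 1, fake as 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        d = sigmoid(w * x + b)
        w -= lr * (d - label) * x
        b -= lr * (d - label)
    # Generator (zero-sum side): gradient ascent to make D misclassify its
    # output as real, i.e. to INCREASE the discriminator's error on fakes.
    d = sigmoid(w * fake + b)
    g += lr * (1.0 - d) * w    # derivative of log D(fake) w.r.t. the fake

# After training, g has drifted from -2 toward the real data around 3.
```

One network's improvement is the other's loss signal: the predictor minimizes its classification error while the generator maximizes it, the same adversarial structure as the 1991 curiosity setup.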

Schmidhuber supervised the 1991 diploma thesis of his student Sepp Hochreiter[18] and called it "one of the most important documents in the history of machine learning".[15] It not only tested the neural history compressor,[14] but also analyzed and overcame the vanishing gradient problem. This led to the deep learning method called long short-term memory (LSTM), a type of recurrent neural network. The name LSTM was introduced in a technical report (1995) that led to the most cited LSTM publication (1997), co-authored by Hochreiter and Schmidhuber.[19] The standard LSTM architecture used in almost all current applications was introduced in 2000 by Felix Gers, Schmidhuber, and Fred Cummins.[20] Today's "vanilla LSTM" using backpropagation through time was published with his student Alex Graves in 2005,[21][22] followed by its connectionist temporal classification (CTC) training algorithm[23] in 2006. CTC enabled end-to-end speech recognition with LSTM. By the 2010s, LSTM had become the dominant technique for a variety of natural language processing tasks, including speech recognition and machine translation, and was widely implemented in commercial technologies such as Google Translate and Siri.[24] LSTM has become the most cited neural network of the 20th century[15] and has been called "arguably the most commercial AI achievement".[24]
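The gating mechanism that distinguishes LSTM from a plain RNN can be sketched for a single scalar cell; the weight layout below is illustrative (real cells use matrices and vectors):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W):
    """One step of a scalar 'vanilla' LSTM cell (gates as in Gers et al., 2000).

    W maps each gate name to illustrative (w_x, w_h, bias) scalars.
    """
    i = sigmoid(W['i'][0]*x + W['i'][1]*h + W['i'][2])    # input gate
    f = sigmoid(W['f'][0]*x + W['f'][1]*h + W['f'][2])    # forget gate
    o = sigmoid(W['o'][0]*x + W['o'][1]*h + W['o'][2])    # output gate
    g = math.tanh(W['g'][0]*x + W['g'][1]*h + W['g'][2])  # candidate value
    c = f * c + i * g        # additive cell-state path counters vanishing gradients
    h = o * math.tanh(c)     # hidden state exposed to the rest of the network
    return h, c
```

With a strongly positive forget-gate bias the cell state passes through nearly unchanged from step to step, so error signals can be carried across long time lags instead of shrinking exponentially as in a plain RNN.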

In 2015, Rupesh Kumar Srivastava, Klaus Greff, and Schmidhuber used LSTM principles to create the Highway network, a feedforward neural network with hundreds of layers, much deeper than previous networks.[8][25][26] Seven months later, the ImageNet 2015 competition was won with an open-gated or gateless Highway network variant, the residual neural network.[27] This has become the most cited neural network of the 21st century.[15]
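The Highway idea can be sketched with a single scalar unit; the scalar weights here are illustrative, while real layers use full weight matrices:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def highway_unit(x, wh, bh, wt, bt):
    # Scalar highway unit: y = T(x)*H(x) + (1 - T(x))*x. The LSTM-style
    # transform gate T decides how much of the input is transformed and how
    # much is carried through unchanged on the "highway".
    h = math.tanh(wh * x + bh)    # candidate transformation H(x)
    t = sigmoid(wt * x + bt)      # transform gate T(x)
    return t * h + (1.0 - t) * x

# With a strongly negative gate bias the unit is almost an identity map, so
# hundreds of such layers can be stacked without blocking gradient flow:
y = 0.7
for _ in range(300):
    y = highway_unit(y, 1.0, 0.0, 0.0, -20.0)
print(y)  # still very close to 0.7 after 300 layers
```

A residual network removes the gate entirely, computing y = H(x) + x, which is why it is described as an open-gated or gateless Highway variant.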

Since 2018, transformers have overtaken the LSTM as the dominant neural network architecture in natural language processing[28] through large language models such as ChatGPT. As early as 1992, Schmidhuber published an alternative to recurrent neural networks[9] that is now called a Transformer with linearized self-attention[8][10][29][15] (save for a normalization operator). It learns internal "spotlights of attention":[30] a slow feedforward neural network learns by gradient descent to control the fast weights of another neural network through outer products of self-generated activation patterns called FROM and TO (now called key and value in self-attention).[10] This fast-weight attention mapping is then applied to a query pattern.
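The fast-weight retrieval step can be sketched as follows; in this toy version the key/value (FROM/TO) vectors are given directly, whereas in the actual scheme a slow network would generate them:

```python
# Toy sketch of attention via fast weights (unnormalized linearized
# self-attention): outer products of key/value pairs are summed into a fast
# weight matrix W, which is then applied to a query. Vectors are illustrative.
def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

def matvec(M, x):
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def fast_weight_attention(keys, values, query):
    d = len(query)
    W = [[0.0] * d for _ in range(d)]        # fast weight matrix
    for k, v in zip(keys, values):           # additive outer-product updates
        upd = outer(v, k)
        W = [[W[i][j] + upd[i][j] for j in range(d)] for i in range(d)]
    return matvec(W, query)                  # retrieval: W applied to the query

# Because W is the sum of v_i k_i^T, the result equals sum_i (k_i . q) v_i:
# attention weights without the softmax normalization of modern Transformers.
print(fast_weight_attention([[1.0, 0.0]], [[0.0, 2.0]], [1.0, 0.0]))  # [0.0, 2.0]
```

A query matching a stored key retrieves the associated value, scaled by the dot-product similarity; the missing softmax is the "normalization operator" noted above.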

In 2011, Schmidhuber's team at IDSIA, with his postdoc Dan Ciresan, achieved dramatic speedups of convolutional neural networks (CNNs) on fast parallel computers called GPUs. An earlier CNN on GPU by Chellapilla et al. (2006) was 4 times faster than an equivalent CPU implementation.[31] The deep CNN of Ciresan et al. (2011) at IDSIA was 60 times faster[32] and achieved the first superhuman performance in a computer vision contest in August 2011.[33] Between 15 May 2011 and 10 September 2012, these fast and deep CNNs won no fewer than four image competitions[34][35] and significantly improved on the best performance in the literature for multiple image databases.[36] The approach has become central to the field of computer vision.[35] It builds on CNN designs introduced much earlier by Yann LeCun et al. (1989),[37] who applied the backpropagation algorithm to a variant of Kunihiko Fukushima's original CNN architecture, the neocognitron,[38] later combined with J. Weng's max-pooling method.[39][35]

Credit disputes

Schmidhuber has controversially argued that he and other researchers have been denied adequate recognition for their contributions to the field of deep learning, in favour of Geoffrey Hinton, Yoshua Bengio and Yann LeCun, who shared the 2018 Turing Award for their work in deep learning.[2][24][40] He wrote a "scathing" 2015 article arguing that Hinton, Bengio and LeCun "heavily cite each other" but "fail to credit the pioneers of the field".[40] In a statement to the New York Times, Yann LeCun wrote that "Jürgen is manically obsessed with recognition and keeps claiming credit he doesn't deserve for many, many things... It causes him to systematically stand up at the end of every talk and claim credit for what was just presented, generally not in a justified manner."[2] Schmidhuber replied that LeCun did this "without any justification, without providing a single example,"[41] and published details of numerous priority disputes with Hinton, Bengio and LeCun.[42][43]

The term "schmidhubered" has been jokingly used in the AI community to describe Schmidhuber's habit of publicly challenging the originality of other researchers' work, a practice seen by some in the AI community as a "rite of passage" for young researchers. Some suggest that Schmidhuber's significant accomplishments have been underappreciated due to his confrontational personality.[44][24]

Recognition

Schmidhuber received the Helmholtz Award of the International Neural Network Society in 2013,[45] and the Neural Networks Pioneer Award of the IEEE Computational Intelligence Society in 2016[46] for "pioneering contributions to deep learning and neural networks."[1] He is a member of the European Academy of Sciences and Arts.[47][12]

He has been referred to as the "father of modern AI" or similar,[57] and also the "father of deep learning."[58][50] Schmidhuber himself, however, has called Alexey Grigorevich Ivakhnenko the "father of deep learning,"[59] and gives credit to many even earlier AI pioneers.[15]

Views

Schmidhuber states that "in 95% of all cases, AI research is really about our old motto, which is make human lives longer and healthier and easier."[54] He admits that "the same tools that are now being used to improve lives can be used by bad actors," but emphasizes that "they can also be used against the bad actors."[53]

He does not believe AI poses a "new quality of existential threat," and is more worried about the old nuclear warheads which can "wipe out human civilization within two hours, without any AI."[8] "A large nuclear warhead doesn’t need fancy face recognition to kill an individual. No, it simply wipes out an entire city with 10 million inhabitants."[8]

Since the 1970s, Schmidhuber has wanted to create "intelligent machines that could learn and improve on their own and become smarter than him within his lifetime."[8] He differentiates between two types of AIs: AI tools directed by humans, in particular for improving healthcare, and more interesting AIs that "are setting their own goals," inventing their own experiments and learning from them, like curious scientists. He has worked on both types for decades,[8] and has predicted that scaled-up versions of AI scientists will eventually "go where most of the physical resources are, to build more and bigger AIs." Within "a few tens of billions of years, curious self-improving AIs will colonize the visible cosmos in a way that's infeasible for humans. Those who don't won't have an impact."[8] He said: "don't think of humans as the crown of creation. Instead, view human civilization as part of a much grander scheme, an important step (but not the last one) on the path of the universe from very simple initial conditions toward more and more unfathomable complexity. Now it seems ready to take its next step, a step comparable to the invention of life itself over 3.5 billion years ago."[8]

He strongly supports the open-source movement, and thinks it is going to "challenge whatever big-tech dominance there might be at the moment," also because AI keeps getting 100 times cheaper per decade.[8]

References