The New AI: An interview with Juergen Schmidhuber


Here’s an interview with a very interesting individual, Juergen Schmidhuber, a German artificial intelligence researcher who has worked on recurrent neural networks, Gödel machines, universal learning algorithms, artificial evolution, robotics, and neural-network-based financial forecasting. From 2004 to 2009 he was professor of Cognitive Robotics at the Technical University of Munich. Since 1995 he has been co-director of the Swiss AI Lab IDSIA in Lugano, and since 2009 also professor of Artificial Intelligence at the University of Lugano. In honor of his achievements he was elected to the European Academy of Sciences and Arts in 2008.

The interview was conducted by Sander Olson, a correspondent for NextBigFuture.com:

Question: You have a plan to build an “optimal scientist”. What do you mean by that?

Answer: An optimal scientist excels at exploring and then better understanding the world and what can be done in it. Human scientists are suboptimal and limited in many ways. I’d like to build an artificial one smarter than myself (my colleagues claim that should be easy) who will then build an even smarter one, and so on. This seems to be the most efficient way of using and multiplying my own little bit of creativity.

Question: You believe that by 2028 computers will have computing power equivalent to that of a human brain. Can a sufficiently powerful digital computer mimic all of the processes and activities of a brain?

Answer: It would be surprising if such a computer could not mimic all of the brain’s processes, since there is no evidence that neurons engage in activities that cannot be mimicked by digital logic processes.

Question: Some AI critics claim that classical digital computation is not suited to brain-like processes such as pattern recognition.

Answer: There is simply no evidence to support such claims. All available evidence indicates that pattern recognition, planning, and reward maximization through decision making are computable processes, given sufficient computational power.

Question: What is the “New AI” developed at the Swiss AI Lab IDSIA?

Answer: Most traditional artificial intelligence (AI) systems of the past decades are either very limited, or based on heuristics, or both. The new millennium, however, has brought substantial progress in the field of theoretically optimal algorithms for prediction, search, inductive inference based on Occam’s razor, general problem solving, universal decision making, and reward optimization for agents embedded in unknown environments of a very general type. That’s the New AI: AI as a Formal Science. Heuristics come and go – theorems are for eternity.

Question: Traditional neural networks have serious limitations. To what extent can recurrent neural networks (RNNs) overcome these limitations?

Answer: Traditional neural networks, also known as feedforward neural networks, are the simplest type of neural network: information flows in only one direction, forward. The human brain, however, is a recurrent neural network (RNN): a network of neurons with feedback connections, essentially a general computer. It can learn many behaviors / sequence processing tasks / algorithms / programs that are not learnable by traditional machine learning methods. These capabilities explain the rapidly growing interest in artificial RNNs for technical applications: general computers which can learn algorithms to map input sequences to output sequences, with or without a teacher. They are computationally more powerful and biologically more plausible than feedforward networks and other adaptive approaches. Our “Long Short-Term Memory” (LSTM) RNNs have recently achieved state-of-the-art results in time series prediction, adaptive robotics and control, connected handwriting recognition, and other sequence learning problems.
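
To make the feedforward-vs-recurrent distinction concrete, here is a minimal sketch in Python (a toy illustration, not Schmidhuber's or IDSIA's code; the layer sizes and weights are arbitrary): a feedforward layer has no memory, while a recurrent layer feeds its hidden state back into itself at every time step, so its output can depend on the whole input history.

```python
# Toy contrast between a feedforward layer and a vanilla recurrent layer.
# Dimensions and weights are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5
W_in = rng.standard_normal((n_hidden, n_in))       # input -> hidden
W_rec = rng.standard_normal((n_hidden, n_hidden))  # hidden -> hidden (the feedback)

def feedforward_step(x):
    # No memory: the output depends only on the current input x.
    return np.tanh(W_in @ x)

def rnn_forward(xs):
    # Feedback: the hidden state h is carried from one time step to the next,
    # so the network can, in principle, learn sequence-processing behavior.
    h = np.zeros(n_hidden)
    states = []
    for x in xs:
        h = np.tanh(W_in @ x + W_rec @ h)
        states.append(h.copy())
    return states

sequence = [rng.standard_normal(n_in) for _ in range(4)]
print(feedforward_step(sequence[-1]))  # depends only on the last input
print(rnn_forward(sequence)[-1])       # depends on the whole sequence
```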

Question: So our brains are also RNNs?

Answer: Yes, although we do not understand all their details, and currently they are still clearly more complex than the artificial RNNs we are using. A human brain incorporates about 100 trillion synapses, which presumably are trainable parameters. Our current artificial RNNs have only about half a million such parameters. We are constrained by the limitations of current hardware. But every decade our hardware capabilities increase by a factor of 100-1000. That is, within a couple of decades we should have artificial RNNs whose computational power exceeds that of human brains.
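
A quick back-of-the-envelope check of those numbers (just the arithmetic implied by the answer, taking parameter counts as the yardstick, not a forecast): growing from roughly half a million parameters toward the brain's roughly 100 trillion synapses at 100-1000x per decade takes on the order of three to four decades.

```python
# Sketch of the arithmetic behind the hardware-scaling argument above.
import math

current_params = 5e5    # "about half a million" trainable parameters today
brain_synapses = 1e14   # "about 100 trillion synapses"
ratio = brain_synapses / current_params

for growth_per_decade in (100, 1000):
    decades = math.log(ratio) / math.log(growth_per_decade)
    print(f"{growth_per_decade}x per decade -> ~{decades:.1f} decades "
          f"(~{10 * decades:.0f} years)")
# Roughly 4.2 decades at 100x per decade, 2.8 decades at 1000x per decade.
```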

Question: You oversee the CogBotLab in Munich and the IDSIA Robot Lab. How are they different from other robot labs?

Answer: They focus on robots that learn. By contrast, many other robotics labs focus on pre-programmed robots that solve clearly defined practical tasks but do not learn from trial and error and other types of experience.

Question: Speaking of robotics, how important is embodiment for AGI learning?

Answer: It is essential. The general problem of AI is about embedded agents capable of interacting with their environment: robots. IDSIA’s recent optimality results for agents embedded in initially unknown worlds precisely address this general case.

Question: How close are we to implementing a Godel machine for a learning robot?

Answer: The Gödel machine formalizes I. J. Good’s informal remarks (1965) on an “intelligence explosion” through self-improving “super-intelligences”. It is a self-referential universal problem solver that interacts with its environment and simultaneously searches for a program that can rewrite its own software in a theoretically optimal way. But it must first find a mathematical proof that the rewrite will indeed improve its performance, given some user-defined performance measure defining the goal to be achieved. (We may initialize the Gödel machine with my former postdoc Marcus Hutter’s asymptotically fastest algorithm for all well-defined problems, such that it will be at least asymptotically optimal even before the first self-rewrite.) Currently one of my postdocs at IDSIA is working on a first Gödel machine implementation. How long will it take to transfer this type of research to a real robot? I hesitate to make bold predictions – let’s proceed incrementally.
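
For readers who want a feel for the mechanism, here is a deliberately oversimplified sketch of the Gödel machine's control flow (all names are hypothetical, and the hard part, the proof searcher, is reduced to a stub): the machine keeps running its current problem solver and only rewrites its own code once a formal proof of improvement has been found.

```python
# Oversimplified Gödel-machine-style control loop (a sketch, not the real thing).

def proves_improvement(solver, rewrite, utility_spec):
    """Stub for the proof searcher: should return True only when a formal
    proof exists that applying `rewrite` increases expected utility under
    `utility_spec`. Here it never finds one, so no rewrite is applied."""
    return False

def godel_machine_step(solver, candidate_rewrites, utility_spec):
    # Conceptually, proof search runs alongside normal problem solving;
    # self-modification happens only after a proof of improvement is found.
    for rewrite in candidate_rewrites:
        if proves_improvement(solver, rewrite, utility_spec):
            return rewrite(solver)   # provably useful self-rewrite
    return solver                    # otherwise keep the current solver

# Example: with no provable rewrites available, the solver stays unchanged.
identity_solver = lambda problem: None
print(godel_machine_step(identity_solver, [], utility_spec=None) is identity_solver)  # True
```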

Question: How do you understand sentience?

Answer: Consciousness and sentience may be viewed as simple by-products of problem solving and data compression. As we interact with the world to achieve goals, we are constructing internal models of the world, predicting and compressing the data histories we are observing. If the predictor / compressor is an artificial RNN, it will create feature hierarchies: lower-level neurons corresponding to simple feature detectors similar to those found in human brains, higher-layer neurons typically corresponding to more abstract features, but fine-grained where necessary. Like any good compressor, the RNN will learn to identify shared regularities among different already existing internal data structures, and generate prototype encodings or “symbols” for frequently occurring observation sub-sequences, to shrink the storage space needed for the whole. Self-consciousness may be viewed as a by-product of this, since there is one thing that is involved in all actions and sensory inputs of the agent, namely, the agent itself. To efficiently encode the entire data history, it will profit from creating some sort of internal prototype symbol or code (e.g., a neural activity pattern) representing itself. Whenever this representation is actively used, say, by activating the corresponding neurons through new incoming sensory inputs or otherwise, the agent could be called self-aware or conscious. No need to see this as a mysterious process – it’s just a natural by-product of compressing the observation history by efficiently encoding frequent observations.
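
As a toy illustration of the “symbols for frequent sub-sequences” idea (my own example, not from the interview): a simple pair-replacement compressor gives the most frequent adjacent pair in an observation history its own fresh code, and because the agent itself shows up in every observation, its marker quickly ends up inside such codes.

```python
# Toy grammar-style compression: replace the most frequent adjacent pair
# in an observation history with a new prototype symbol.
from collections import Counter

def compress_once(history):
    """Replace the most frequent adjacent pair with a fresh symbol."""
    pairs = Counter(zip(history, history[1:]))
    if not pairs:
        return history, None
    (a, b), count = pairs.most_common(1)[0]
    if count < 2:
        return history, None              # nothing frequent enough to encode
    symbol = f"<{a}+{b}>"                 # prototype code for the frequent pair
    out, i = [], 0
    while i < len(history):
        if i + 1 < len(history) and history[i] == a and history[i + 1] == b:
            out.append(symbol); i += 2
        else:
            out.append(history[i]); i += 1
    return out, symbol

# "self" appears in every observation the agent makes, so it is quickly
# absorbed into the compressor's prototype symbols.
history = ["self", "see", "self", "see", "self", "move", "self", "see"]
compressed, symbol = compress_once(history)
print(symbol, compressed)
```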

3 responses to “The New AI: An interview with Juergen Schmidhuber”

  1. Great interview.
    Schmidhuber seems an interesting person, and he’s optimistic about building human-level AI.

    I read about his work and his colleagues’ work at IDSIA (like Hutter’s AIXI), very formal and mathematical.
    I’m not knowledgeable enough to judge, but I hope he is right.

  2. Ross

    The Darwin series, designed by Gerald Edelman and his team at the Neurosciences Institute, is brain-based, not pre-programmed, and possibly ten years ahead of (this) schedule.

    The focus, however, is different, so results will be as well. They are not attempting to build brains to ‘think’ or ‘search’, but rather organisms to act.

    Best of luck to the “new AI”. We surely need it.

    Those interested in cautionary tales might enjoy Ken Wilber’s thoughts on intelligence (carbon or silicon, analog or digital) in “Boomeritis”.

  3. Carl

    One thought re: RNNs: the assumption here is that you’re working with one really big computer. What about a botnet?
