Inverse features Joe Carmichael’s interview with artificial intelligence pioneer Jürgen Schmidhuber, who claims that we’ve been making artificially intelligent programs since 1991. His argument actually does make a weird kind of sense, but I’m far from being an expert in the field. What do experts say?
You claim that some A.I.s are already conscious. Could you explain why?
I would claim we have had little, rudimentary conscious learning systems for at least 25 years. Even back then, I proposed rather general learning systems consisting of two modules.
One of them, a recurrent network controller, learns to translate incoming data — such as video and pain signals from the pain sensors, and hunger information from the hunger sensors — into actions. For example, whenever the battery’s low, there are negative numbers coming from the hunger sensors. The network learns to translate all these incoming inputs into action sequences that lead to success. For example, reach the charging station in time whenever the battery is low, but without bumping into obstacles such as chairs or tables, so that you don’t wake up those pain sensors.
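The controller he describes can be sketched as a tiny recurrent network that maps sensor readings to actions. The sketch below is purely illustrative — the weights are random rather than learned, and all names, sizes, and the two-sensor observation format (`[hunger, pain]`) are my own assumptions, not Schmidhuber’s actual 1990s architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

class RecurrentController:
    """Hypothetical sketch: a recurrent net mapping sensors to actions."""

    def __init__(self, n_in=2, n_hidden=8, n_actions=4):
        # Random, untrained weights; a real agent would learn these
        # by reinforcement learning to maximize reward over its lifetime.
        self.W_in = rng.normal(0, 0.5, (n_hidden, n_in))
        self.W_rec = rng.normal(0, 0.5, (n_hidden, n_hidden))
        self.W_out = rng.normal(0, 0.5, (n_actions, n_hidden))
        self.h = np.zeros(n_hidden)  # hidden state carries history

    def step(self, obs):
        # Update the hidden state from the current observation and the
        # previous state, then pick the highest-scoring action.
        self.h = np.tanh(self.W_in @ obs + self.W_rec @ self.h)
        scores = self.W_out @ self.h
        return int(np.argmax(scores))

controller = RecurrentController()
# Observation: [hunger, pain]; a low battery shows up as negative "hunger".
obs = np.array([-1.0, 0.0])  # battery low, no pain
action = controller.step(obs)
```

Because the hidden state is fed back at every step, the chosen action can depend on the whole history of observations, not just the current sensor frame — which is what lets such a controller learn multi-step action sequences like navigating around a chair to reach the charger.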
The agent’s goal is to maximize pleasure and minimize pain until the end of its lifetime. This goal is very simple to specify, but it’s hard to achieve because you have to learn a lot. Consider a little baby, who has to learn for many years how the world works, and how to interact with it to achieve goals.
Since 1990, our agents have tried to do the same thing, using an additional recurrent network — an unsupervised module, which essentially tries to predict what is going to happen. It looks at all the actions ever executed, and all the observations coming in, and uses that experience to learn to predict the next thing given the history so far. Because it’s a recurrent network, it can learn to predict the future — to a certain extent — in the form of regularities, with something called predictive coding.
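The core idea of that second module — learn to predict the next observation from the history so far, and let the prediction error drive learning — can be shown with a toy. The sketch below is a deliberate simplification (an echo-state-style network where only the readout is trained with an online delta rule), not the actual 1990 architecture; the signal, sizes, and learning rate are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "world": a periodic signal the model must learn to predict.
T = 400
signal = np.sin(np.linspace(0, 16 * np.pi, T + 1))

# Fixed random recurrent network (echo-state simplification): only the
# readout weights w_out are trained, from the one-step prediction error.
n_hidden = 30
W_in = rng.normal(0, 0.5, n_hidden)
W_rec = rng.normal(0, 0.1, (n_hidden, n_hidden))
w_out = np.zeros(n_hidden)
h = np.zeros(n_hidden)
lr = 0.05

errors = []
for t in range(T):
    # Recurrent update: hidden state summarizes the history so far.
    h = np.tanh(W_in * signal[t] + W_rec @ h)
    pred = w_out @ h                # predict the next observation
    err = signal[t + 1] - pred      # prediction error drives learning
    w_out += lr * err * h           # online delta rule on the readout
    errors.append(err ** 2)

early = np.mean(errors[:50])   # mean squared error, first 50 steps
late = np.mean(errors[-50:])   # mean squared error, last 50 steps
```

As the model accumulates experience, its one-step predictions improve, so the squared prediction error in the last 50 steps ends up well below that of the first 50 — the "regularities" in the data get absorbed into the predictor.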
There’s much more at Inverse.