Humans, AI won't be replacing you... yet

Photo: A staff member at the booth of French company Aldebaran Robotics communicates with its humanoid robot NAO at the International Robot Exhibition 2013 in Tokyo, November 8, 2013.

Around 2045, argues author Ray Kurzweil, machines will become smarter than people.

In his popular 2005 book, The Singularity Is Near, Kurzweil calls the moment when this happens the "singularity". The bedrock idea is that machines with artificial intelligence (AI) matching the human level can be built within the lifetimes of those of us alive right now.

With greater and faster processing power, such a machine would be able to reprogram itself into one more intelligent than itself. And since that new machine would be more intelligent than the most intelligent one people can build, it would have superhuman intelligence.

But once this happens, this super-intelligent machine would go on to reprogram itself into a machine that is even more intelligent. This hyper-intelligent machine would then reprogram itself into an ultra-intelligent machine and so on, exponentially, perhaps without limit.
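To make the shape of this runaway loop concrete, here is a minimal sketch in Python - purely illustrative, not from Kurzweil's argument; the function name and the fixed amplification factor are assumptions - showing how any constant gain per generation compounds exponentially, while a gain at or below break-even never takes off:

```python
# Toy illustration (hypothetical): each generation of machine redesigns
# its successor, multiplying capability by a fixed amplification factor.
def simulate_takeoff(initial_capability: float, amplification: float, generations: int):
    """Return the capability level after each round of self-improvement."""
    levels = [initial_capability]
    for _ in range(generations):
        # The current machine builds the next one; a factor > 1.0 compounds
        # exponentially, while a factor <= 1.0 never "takes off".
        levels.append(levels[-1] * amplification)
    return levels

# Starting at human level (1.0), even a modest 10% gain per generation
# grows without bound: 1.0, 1.1, 1.21, ... roughly 2.6 after 10 rounds.
print(simulate_takeoff(1.0, 1.10, 10))
```

Whether real AI systems could sustain such a constant (or growing) amplification factor is, of course, exactly what the singularity debate is about.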

Based on known rates of advancement in computer processing power and related fields, Kurzweil figures that the tipping point in this hypothetical process of self-amplifying growth in AI capacity will come in 2045. If it does, with limitless intelligence on tap, the biggest worry is whether these ultra-intelligent machines might render humanity completely redundant.

All this was just a fringe idea that serious academics wouldn't touch until Australian National University philosopher David Chalmers published the first formal analysis of the singularity's possibility in a peer-reviewed journal. His 2010 paper in the Journal of Consciousness Studies attracted responses from 26 experts in various fields, which were published in the same journal in 2012.

Professor Chalmers' analysis of what is already known in AI, neuroscience and the philosophy of consciousness led him to feel that human-level AI would be "distinctly possible" before 2100.

Because of hardware and software advances, such a system would have the capacity to amplify intelligence. If this amplification happens recursively - as a procedure that can repeat itself indefinitely - the singularity would be possible, he felt, "within centuries" rather than decades.

However, critics argue that the whole enterprise could founder at the very first step: emulating normal human intelligence. Proponents implicitly assume that this intelligence is located in the brain, an organ they liken to a computer - that is, a machine.

If, as is likely, the human brain is more than a machine, then it cannot be perfectly emulated. That is, even if the brain does function partly like a mechanical computing system, it also has non-mechanical processes.

But if this is so, then no machine can perfectly emulate a brain. And if no machine can, an AI system will not actually have a mind, which means it cannot attain even the human level of intelligence - the singularity's takeoff point.

Proponents put the cart before the horse because they ignore the very question of what intelligence really is. Real human intelligence depends on cognition, which is always carried out by a living person, so it is "embodied", and always within specific human situations, so it is "situated".

This human cognition comes about through interaction with the environment, and this interaction is always carried out through the body's finely tuned sensory and motor systems. That is, the senses deliver environmental stimuli to your mind, and your mind directs the actions you enact in the real world through your body and limbs.
