Press "Enter" to skip to content

Will robots be the death of us?

Are we in danger of being enslaved by intelligent computer programs? Will our own creations rise up against us and spell the end of humanity? Not any time soon, although it is a foreseeable possibility if we don’t heed the warnings of our technology leaders.

Technology has become an integral part of our daily lives. In the course of a single day you might use an alarm clock, television, microwave, computer, laptop, smartphone, and handheld music player. Savvy technology users could add a smartwatch, a Fitbit or other fitness tracker, and Google Glass or an Oculus headset to their everyday arsenal of technology.

Within the last 15 years, technology has gone from a negligible presence to an integral part of daily life for most people. For example, roughly 40 million iPhones were sold worldwide in 2010, with sales climbing to 169 million in 2014, only four years later.

With technology so ubiquitously available, it requires no stretch of the imagination to see how our lives are made easier through a partnership with technology. Why shouldn’t we take it to the next level? Google, Apple, and IBM have been working on artificial intelligence, or A.I., for years with projects such as Google’s DeepMind and self-driving car, IBM’s Watson, and Apple’s Siri.

A.I. is the next step toward a fully automated world. A world where your alarm clock tells your coffee maker to start a fresh pot when you get out of bed. A world where you can climb into your car and it will deliver you to your exact destination with no need for a steering wheel or gas and brake pedals.

However, if humans no longer had to perform basic tasks for themselves, would that free up time for higher thinking or merely deepen our dependence on the machines we build?

Individuals working at some of the foremost A.I. companies and universities have attached their names to an open letter recommending research priorities for developing A.I. in a way that is safe for humanity. The letter states that “because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.” The statement sounds innocuous enough, but it invites serious thought about the nature of human intent and our possible dealings with A.I.

Those who signed the letter work at companies and universities such as Berkeley, Harvard, Microsoft, DeepMind (acquired by Google), IBM, Oxford, MIT, Google, Cambridge, Tesla and SpaceX. Elon Musk (of Tesla and SpaceX) and Stephen Hawking are among those who openly support the letter.

The dialogue around A.I. has been building for some time now. Back in August 2014, Elon Musk tweeted that “we need to be super careful with AI. Potentially more dangerous than nukes.” In December 2014, Professor Stephen Hawking told the BBC that “the development of full artificial intelligence could spell the end of the human race.” But what does he mean by “full artificial intelligence”?

Michael Gazzaniga, a leading researcher in cognitive neuroscience, observed two years ago that researchers “can barely figure out the C. elegans, a little worm that has only around five or six hundred neurons” (the worm in fact has just 302). Currently the best laboratories in the world can simulate on the order of one billion neurons. The human brain has around 100 billion.

There is no need to worry about A.I. becoming smart enough to overpower its human creators in the near future. However, at the rate technology is advancing, it is sensible to put measures in place to guide the research and development of A.I.

Some worry that A.I. will become a problem if put in charge of intelligence databases and weapons of mass destruction at the governmental level. We can only speculate about how long such research has been going on, but we do know that the government has been working on A.I. for military applications and weaponization.

“There is no guarantee that a future, in which robots and computers will become so smart and clever that they will be able to manipulate us to their own ends, will never occur,” said Paul Benioff, the physicist credited with first proposing a theoretical model of the quantum computer.

We have been trained to equate A.I. with danger. Some of Hollywood’s most successful science fiction films deal with the subject. Movies such as “A.I. Artificial Intelligence,” “The Matrix,” “Transformers” and “I, Robot” show us how easily the beneficial robots we create can turn against us.

We only need to worry about machines once they begin to change their own programming and to develop complex thoughts and agendas of their own. Most of the machinery currently available to consumers runs on simple, rule-based yes-or-no logic. Cinema imagines how such machines might take those further steps. There is a fine line between beneficial and malevolent, a line we have the potential to cross within the next 10 to 15 years.

For now, the human race is safe from the threat of robot overlords. Sensibility and pragmatism are always helpful in situations like these. We should be glad that the world’s leaders in A.I. have put practices in place to safeguard against malevolent robots and, most importantly, against ourselves.
