
What is the Scariest Thing about Artificial Intelligence?

The scariest thing about artificial intelligence is not that it may replace our jobs, destroy the planet in pursuit of its own goals, or do away with humans by accident (or not by accident).

Though AI has been in the spotlight for the past nine or ten years with the rise of Google Assistant, Siri, Alexa, and AI-operated GPS, all the way to your Amazon recommendations, the idea can be traced back to antiquity. Classical philosophers attempted to decode the patterns of human emotion and behavior and to express them as symbolic systems, proof that the idea of AI is quite old. But it was on a summer morning in 1956, at a conference at Dartmouth College, that scientists gathered to discuss machines that could think. That is where the term 'artificial intelligence' was coined and formalized.

The basic aim of developing AI has been to aid human needs and activities, by outdoing humans.

THE UNDERSTANDING OF ARTIFICIAL INTELLIGENCE

Say you have data that varies over time. If a machine can learn the pattern in that data, it can make predictions based on what it has learned. Patterns in one, two, or three dimensions are easy for humans to notice, understand, and learn, but machines can learn across hundreds or thousands of dimensions, spotting high-dimensional patterns that humans cannot.
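As a toy illustration of "learning a pattern to make predictions" (the data and the fitting method here are illustrative choices of this sketch, not drawn from any particular AI system), a machine can fit a trend to values observed over time and extrapolate the next one:

```python
# A minimal sketch in pure Python: "learn" the trend in values that vary
# over time, then predict the next point. The data is made up.

def fit_line(ys):
    """Ordinary least-squares fit of y = a*t + b over t = 0, 1, 2, ..."""
    n = len(ys)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    a = (sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys))
         / sum((t - t_mean) ** 2 for t in ts))
    b = y_mean - a * t_mean
    return a, b

data = [2.0, 4.1, 5.9, 8.2, 10.0]   # values observed at times 0..4
a, b = fit_line(data)
prediction = a * len(data) + b       # extrapolate to the next time step
print(round(prediction, 1))          # the machine's "prediction"
```

With two or three features a human could eyeball this trend; the point of machine learning is that the same idea scales to thousands of dimensions at once.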

LEARNING ALGORITHMS USED FOR AI

Supervised learning – training an algorithm on data that also contains the answers. For instance, to train a machine to identify your friends, you first have to identify them for it.

Unsupervised learning – training the algorithm so that the machine figures out patterns on its own, without being given the answers.

Once it learns these patterns, it can make predictions that humans can't even come close to.
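The contrast between the two styles can be sketched on toy one-dimensional data (the heights, labels, and both miniature algorithms below are hypothetical illustrations, not any production technique):

```python
# Supervised vs. unsupervised learning on toy data, in pure Python.

def nearest_centroid_train(points, labels):
    """Supervised: the training data comes WITH the answers (labels)."""
    groups = {}
    for p, lab in zip(points, labels):
        groups.setdefault(lab, []).append(p)
    # One "centroid" (average) per label.
    return {lab: sum(ps) / len(ps) for lab, ps in groups.items()}

def nearest_centroid_predict(centroids, x):
    """Predict the label whose centroid is closest to x."""
    return min(centroids, key=lambda lab: abs(centroids[lab] - x))

def two_means(points, iters=10):
    """Unsupervised: no labels; the algorithm finds two clusters itself."""
    a, b = min(points), max(points)      # initial guesses for the centres
    for _ in range(iters):
        ga = [p for p in points if abs(p - a) <= abs(p - b)]
        gb = [p for p in points if abs(p - a) > abs(p - b)]
        a, b = sum(ga) / len(ga), sum(gb) / len(gb)
    return a, b

# Supervised: labelled heights -> predict a label for a new height.
model = nearest_centroid_train([150, 155, 180, 185],
                               ["short", "short", "tall", "tall"])
print(nearest_centroid_predict(model, 178))   # -> tall

# Unsupervised: the same heights, no labels; two groups emerge on their own.
print(two_means([150, 155, 180, 185]))        # -> (152.5, 182.5)
```

The difference is entirely in what the data carries: the supervised version is told which heights count as "short" or "tall", while the unsupervised version only discovers that the values fall into two groups.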

The thing about AI is that it is developing at a terrifying speed.

Google and Amazon are spending billions on developing AI capabilities. The Pentagon is researching the same technology to develop drones capable of precise strikes.

The AI being developed can think for itself, making decisions and predictions the way humans do: based on experience and belief.

Along the same lines, a team at MIT created the world's first "psychopathic" artificial intelligence, to see how its perspective would change after exposure to the darkest parts of the internet.

Meet NORMAN

[Image: Norman. Image credit: USATODAY.com]

Amid tech leaders like Elon Musk raising concerns over the expanding reach and possible dominance of machines, this team at MIT developed Norman to demonstrate that artificial intelligence can only think what it is fed. It can be unfair, biased, or very understanding, depending on what it is fed and taught.

Norman, an algorithm trained to understand pictures, emerged from the team's experiments as a decidedly not-so-optimistic AI.

Norman was exposed to the darkest parts of Reddit: from that website, the algorithm was shown images of people dying in gruesome ways.

Norman’s responses were compared to a ‘normal’ AI’s responses.

The following observations and conclusions were published by the team, as reported by the BBC:

When asked to describe an abstract drawing, a "normal" algorithm chooses cheery interpretations like "a group of birds sitting on top of a tree branch."

Norman sees a man being electrocuted.

Where “normal” AI sees a couple of people standing next to each other, Norman sees a man jumping from a window.

THE RORSCHACH INKBLOT TEST

Norman's responses were compared with those of a regular image-recognition network as both generated text descriptions for Rorschach inkblots, a popular psychological test used to examine a person's personality characteristics and emotional functioning, and also employed to detect underlying thought disorders.

[Images: Rorschach inkblot test cards, with the captions generated by the standard AI and by Norman]

Norman behaved just the way it was taught: fed the darkest of human emotions, it responded the way a human exposed to them might have. The way an AI perceives the world and behaves reflects the data used to train it, and that makes its thought process unsettlingly similar to ours.

Imagine AI refusing to do something major just because it doesn’t ‘feel’ like it.

Imagine AI incriminating someone based on racial prejudices.

The scariest thing about AI is how similar it is to us.
