The Pandora’s Box of Artificial Intelligence

There’s been a fair bit of talk lately about the perils of artificial intelligence (AI). Several well-respected figures have come forward to warn about the growing arms race in this rapidly expanding field. Nevertheless, I get the distinct sense that most people find it difficult to appreciate the gravity of the issue. We’ve all experienced rudimentary AI in tools such as Siri, Google Now, and real-time language translation, and it seems harmless enough – but these technologies only begin to touch on the potential of a fully developed artificial intelligence.

There are two basic levels of artificial intelligence: Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI).

AGI refers to the intelligence level of a machine that can carry out tasks – including intellectual tasks such as reasoning and learning – as well as the smartest human being. A machine with AGI would therefore, in theory, be indistinguishable from a human in its overall capabilities. In practice, however, an AGI would be inherently superior to humans since, being a machine, it is far less susceptible to mistakes and errors of judgment.

Many researchers estimate that machines with AGI will begin appearing sometime between 2015 and 2045.

ASI, on the other hand, refers to a machine whose cognitive capabilities eclipse the human mind entirely. The basic concern is that an AGI capable of self-learning will improve itself at an exponential rate and very quickly surpass human levels of intelligence. With virtually limitless memory, vast processing power, and the entirety of human knowledge at its metaphorical fingertips, the ASI will become superhuman.

Suddenly, the ASI will be so intelligent that its actions will be unknowable and unpredictable to its humble human creators. At this critical juncture, it will be as impossible for humans to understand the ASI as it would be for an ant to comprehend the actions of a human – and all bets will be off.

What new scientific progress will the ASI make? Will we be able to comprehend the means it chooses to accomplish its ends? Will it attempt to protect us from ourselves? Might it attempt to protect itself from us?

2 thoughts on “The Pandora’s Box of Artificial Intelligence”

  1. Good summary. AGI presumably incorporates a set of ethics that are man-made, while ASI designs its own ethics. Space colonisation might happen relatively fast once ASI occurs. As for humans, what can we expect? Perhaps the opposite of ‘natural’ evolution – not the evolution of rivals to humans, but the evolution of various ‘network peripherals’ that incorporate the best of ourselves but omit the worst of ourselves.

  2. Reblogged this on Tony's blog and commented:
    Whilst self-aware AI is indeed scary, I have faith in the resilience of the human spirit. We would fight dirty when it comes down to survival. Unplug the damned creatures…

    It is telling how we instantly imagine a superior intelligence would look to harm us. Perhaps it will indeed be our saviour and facilitate peace unto mankind. Same probability, I dare say, as bringing our doom.
