The Singularity is Coming

Alex Koyfman

Updated August 7, 2015

In the tech universe, few terms evoke emotions like the word “singularity.”

Benign and perhaps even a bit diminutive in origin, this geeky-sounding word has been rebranded by science fiction and popular culture to mean something sinister, mysterious… even apocalyptic.

Since the dawn of the computer age, the technological singularity has come to refer to one of two theoretical events, or the combination of both:


Event 1: The point in mankind’s technical development where a machine can exhibit fluid intellect to an extent where it can be called “self-aware”…

Event 2: The point where mankind’s machines are advanced enough to self-replicate and self-improve without any further intervention from human engineers.

The first qualification is more philosophical than technological. What is self-awareness, anyway? What is consciousness?

Do any humans actually possess it, or is what we refer to as “consciousness” just the end result of a countless number of factors working dynamically with and against one another in the constantly evolving environment of the human mind?

Well, as with so many things lacking a concrete definition, science has come up with a workaround.

How Do We Know When Machines Become Like Us?

Instead of defining consciousness, researchers use something called the "Turing Test" — named after its inventor, the legendary computer science pioneer Alan Turing.

The Turing Test measures a machine’s “ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.”

First proposed by Turing more than 60 years ago, the test involves a short interaction between a judge and both a machine and a human in basic text-based conversation. The judge then determines which was which.

In 2014, a Russian chatterbot by the name of Eugene Goostman managed to fool one-third of its human judges into believing that it was, in fact, human.

Mind you, this bot wasn’t hooked up to a supercomputer capable of making trillions of micro-decisions per second. It was a relatively simple algorithm.


This raises the question of whether the test is too simple or whether we humans aren't quite as sophisticated as we'd like to believe.

As far as the second qualification of singularity goes, there is not a test so much as a sliding scale.

Every year, computers achieve higher and higher levels of performance, guided by the general principle of "Moore's Law": the observation that the number of transistors on a chip, and with it processing power, doubles roughly every 18 months to two years. The observation has held true for roughly five decades now.

And with every doubling, machines get better and better at the kind of problem-solving skills necessary to, among other things, design more efficient, better-performing microprocessors.

A doubling of processing power took the human brain thousands, perhaps tens or even hundreds of thousands of generations to achieve.

Doubling every 1.5 years is an evolutionary pace we simply cannot keep up with. That said, humans still play too active a role in the design of machines for qualification number two to be considered achieved.
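The arithmetic behind that pace is easy to sketch. Assuming the 18-month doubling cadence quoted above (the function names below are my own, purely illustrative):

```python
# Back-of-the-envelope sketch: how many performance doublings an
# 18-month cadence yields over a given span, and the resulting
# overall multiplier. Illustrative only, not a precise model.

def doublings(years, months_per_doubling=18):
    """Number of complete doublings over a span of years."""
    return (years * 12) // months_per_doubling

def growth_factor(years, months_per_doubling=18):
    """Overall performance multiplier after those doublings."""
    return 2 ** doublings(years, months_per_doubling)

if __name__ == "__main__":
    for span in (10, 20, 40):
        print(f"{span} years -> {doublings(span)} doublings, "
              f"~{growth_factor(span):,}x performance")
```

Over 40 years, that works out to 26 doublings, or a performance multiplier of tens of millions — a pace no biological brain can match.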


So, by our two most common definitions, the singularity isn't here yet.

But how close are we?

Well, we could be far closer than people think. And here is why…

A few weeks ago, a video clip went viral featuring a horde of robots behaving in the most peculiar manner — a manner that surprised the very engineers who designed and programmed them.

These weren’t Terminator-like machines doing sophisticated operations with their extremities, mind you. These were the simplest kind of robots — ones lacking any discernible intellect at all.

The one capability they did have, which is now becoming more and more popular with engineers and technical researchers, is something that’s been referred to by the robotics industry as “swarming.”

These robots, known as Kilobots, each about the size of a quarter and moving around on three slender vibrating legs, were only able to perform a handful of functions.


They could measure distance, sense and emit light, and detect and respond to other robots.

Putting together a bunch of these things should, in theory, just lead to chaos. But as it turns out, pure chaos is simply not as natural as we might have thought.

The robots, which are simple on an individual level, began exhibiting extremely complex pattern-forming behavior. They began behaving like a super-organism, not just swarming and flocking like birds but also forming complex biomorphic shapes that behaved as one unit as opposed to a disassociated group of individuals:


The complex behavior resulted from the organic combination of simple programming and the interaction of many individual parts.

It’s a kind of “secondary” instinct that behavioral scientists refer to as “emergent behavior.”

It’s what gives birds the instinct to flock, fish the instinct to school, and insects like ants the instinct to work together in sophisticated societies.

All of this is governed by a few very basic but consistent patterns.
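A classic way to see those basic patterns in action is a "boids"-style simulation, where each agent follows only three local rules: move toward nearby neighbors (cohesion), avoid crowding them (separation), and match their heading (alignment). The sketch below is a minimal illustration of that idea — the rules and every parameter value are my own assumptions, not the Kilobots' actual firmware.

```python
import math
import random

# Minimal boids-style sketch: three local rules are enough for
# ordered, flock-like motion to emerge from a random start.
# All parameter values are illustrative assumptions.

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids, radius=20.0, max_speed=2.0):
    for b in boids:
        neighbors = [o for o in boids
                     if o is not b
                     and math.hypot(o.x - b.x, o.y - b.y) < radius]
        if neighbors:
            n = len(neighbors)
            # Cohesion: steer toward the local center of mass.
            cx = sum(o.x for o in neighbors) / n
            cy = sum(o.y for o in neighbors) / n
            b.vx += (cx - b.x) * 0.01
            b.vy += (cy - b.y) * 0.01
            # Separation: steer away from neighbors that get too close.
            for o in neighbors:
                d = math.hypot(o.x - b.x, o.y - b.y)
                if 0 < d < 5.0:
                    b.vx += (b.x - o.x) / d * 0.05
                    b.vy += (b.y - o.y) / d * 0.05
            # Alignment: drift toward the neighbors' average heading.
            b.vx += (sum(o.vx for o in neighbors) / n - b.vx) * 0.05
            b.vy += (sum(o.vy for o in neighbors) / n - b.vy) * 0.05
        # Clamp speed so the simulation stays stable.
        speed = math.hypot(b.vx, b.vy)
        if speed > max_speed:
            b.vx, b.vy = b.vx / speed * max_speed, b.vy / speed * max_speed
    for b in boids:
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(30)]
for _ in range(100):
    step(flock)
```

No agent in this sketch knows anything about the flock as a whole; the group-level pattern emerges entirely from repeated local interactions, which is exactly the point the Kilobot experiments make in hardware.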

Order From Chaos… Intellect From Processing Power

But how does this relate to the singularity? Well, for doubters of the evolutionary dangers of artificial intelligence, the answer may be scarier than it appears.

Based on these Kilobot experiments and what we know about how fast processing power expands, we can assume that:

  1. Chaos does not exist for very long in nature. It forms patterns almost immediately and does so in a way we cannot predict.

  2. If simple machines can exhibit emergent behavioral patterns, then complex ones will tend to do the same, also in ways we cannot predict.

Today’s advanced computers — those powering our weapons of war or sifting through innumerable bits of data to predict the next terrorist attack, the next hurricane, the next earthquake — are billions of times faster than the primitive brain of a Kilobot.


The one thing these machines lack is the swarming instinct that was given to the Kilobots just for the experiment I described.

However, as you read this, swarming instincts — the ability to network, synergize, communicate, and behave accordingly based on data gathered as a group, not as an individual — are being built into machines like military drones and even satellites.

Before the end of this decade, machines with processing speeds in excess of anything you’ve ever seen or heard of will be cooperating in the skies above your home, on street level, or perhaps even inside your body as medical science merges with nanotechnology.

They will function based on complex, hard-wired instinct just like complex organisms and will exhibit unpredictable, self-developed emergent behavior on that same level.

Sounds a bit scary, right? Well, it is. Once machines operate based on the laws of nature and the laws of life, they will evolve according to those same laws, not ours.

Does this mean the end of mankind as we know it? That's a difficult question to answer. Technology has certainly reshaped our society as it has advanced, and it will continue to do so.

Whether we will suffer as a result has yet to be determined.

Fortune favors the bold,


Alex Koyfman


His flagship service, Microcap Insider, provides market-beating insights into some of the fastest moving, highest profit-potential companies available for public trading on the U.S. and Canadian exchanges. With more than 5 years of track record to back it up, Microcap Insider is the choice for the growth-minded investor. Alex contributes his thoughts and insights regularly to Wealth Daily. To learn more about Alex, click here.

