
The Future of AI: Separating Facts from Fiction
AI Will Be an Ordinary Part of Life—But Not in the Way We Imagine Now

Computers powered by artificial intelligence have proven their capability by beating the world chess champion and defeating the best human players on the game show Jeopardy. As computer systems with AI continue to evolve, will they develop intelligence that matches, and eventually exceeds, human intelligence?

There are many misconceptions about artificial intelligence (AI). The most common predictions we hear about the future of AI are the following:

  • It’s going to threaten mankind.
  • It’s just a decade away.
  • It will never be achieved.

While none of these predictions is likely to be entirely right or entirely wrong, the truth is that powerful artificial intelligence will inevitably arise. How soon that happens, and to what extent it will be incorporated into our everyday lives, will depend on user acceptance and supporting technologies, but we may very well see tremendous, life-altering changes in this technology in our lifetime.

Life may be completely different 25 years from now. And possibilities that seem laughable to us now could become an ordinary part of everyday life in the next 50 years. Humans could work smarter…or they may not have to work at all, at least not in the way we work right now. We could live longer, possibly becoming immortal, or we may lose our very humanity in the process and simply go extinct. As far-fetched as these possibilities seem, we will still discuss them with an open mind. After all, the idea of having powerful computers in our pockets or on our wrists may have also sounded ridiculous just a few decades ago, but here we are. One thing is certain: if, or rather when, we achieve human-level AI, life will never be the same again.

We’re Getting to Know AI

Many different definitions of artificial intelligence have been proposed by philosophers, researchers and computer scientists, but so far no one definition satisfies everyone. Alan Turing proposed the “Turing test” to judge the ability of a machine to exhibit intelligent behavior, hinging on whether it can engage in natural language conversations in a way that is indistinguishable from a human being. Other such tests include the “coffee test” (a machine has to go into an average American home, figure out how to make coffee, and then prepare it), the “robot college student test” (a machine has to enroll in a university, take the same classes that humans would, pass the tests and obtain a degree), and the “employment test” (a machine has to work an economically important job and perform as well or better than humans perform in the same position).

Artificial intelligence is a broad concept, and AI comes in many forms. We’re already using it every day, often without realizing it. When Google predicts what we’re going to type in the search bar, it uses a form of “weak AI,” or “artificial narrow intelligence.” Siri and Cortana show more sophisticated, but still narrow, AI. They operate within a limited, predefined range and show no signs of consciousness or self-awareness.

In contrast to narrow artificial intelligence, John Searle, author and professor of philosophy at the University of California, Berkeley, introduced the philosophical idea of “strong AI,” often equated with “artificial general intelligence.” The artificial general intelligence of a hypothetical machine is its ability to apply intelligence to any problem, rather than just one specific problem, in the same way that human beings do. On the strong AI view, such a machine would have a conscious, sentient mind.

Searle coined the term “strong AI” in 1980 while presenting his famous Chinese room thought experiment, which purports to show that strong AI is a fallacy. He later challenged the claim made by Ray Kurzweil that it is possible for a computer running a program to have a “mind” and “consciousness.” Searle also presented a variant of his Chinese room argument to challenge Kurzweil’s claim that a computer could defeat the world chess champion using its machine intelligence—which it eventually did. However, Searle argued that the machine was never actually playing chess, but merely manipulating “a bunch of meaningless symbols.”

Spiritual Machines

Ray Kurzweil, in his famous 1999 book The Age of Spiritual Machines, outlines his vision of technology as it will progress through the 21st century. He presented his “Law of Accelerating Returns” to explain the emergence of artificial intelligence and its impact on the future course of humanity. The law states that “the rate of change in a wide variety of evolutionary systems (including but not limited to the growth of technologies) tends to increase exponentially.”

Kurzweil further expounded on his theory in his 2001 essay “The Law of Accelerating Returns” to explain how paradigm shifts have become, and will continue to become, increasingly common, leading to radical change. He suggests that a technological singularity will occur before the end of the 21st century, around 2045. The essay begins:

An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense ‘intuitive linear’ view. So we won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). The ‘returns,’ such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to the Singularity—technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.

Source: Wikipedia
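The arithmetic behind a figure like “20,000 years of progress” can be sketched with a toy model. The snippet below is a minimal illustration, not Kurzweil’s exact calculation: it assumes the rate of progress doubles every ten years and sums, year by year, how much progress “at today’s rate” that implies over a century. The function name and the doubling period are our own assumptions for illustration.

```python
def equivalent_progress(calendar_years: int, doubling_years: float = 10.0) -> float:
    """Years of progress 'at today's rate' packed into a calendar span,
    assuming the rate of progress doubles every `doubling_years`.
    (A toy model of the Law of Accelerating Returns, not Kurzweil's
    exact calculation.)"""
    # In year t the rate is 2**(t / doubling_years) times today's rate;
    # summing year by year gives the equivalent progress.
    return sum(2 ** (t / doubling_years) for t in range(calendar_years))

# With the rate doubling every decade, 100 calendar years pack in
# roughly 14,000 "years" of progress at today's rate, the same order
# of magnitude as the 20,000-year figure quoted above.
print(round(equivalent_progress(100)))
```

Changing the assumed doubling period shifts the total dramatically, which is the point of the exponential argument: almost all of the century’s progress, in this model, happens in its final decades.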

Kurzweil’s track record of predictions about the future progression of technology has been widely praised as “uncannily right.” However, many of his predictions have also drawn criticism.

AI Could Change Humanity Forever

Some of the predictions that Kurzweil makes may seem like fantasy to most of us. Peter Diamandis, an engineer, physician and cofounder of the X Prize Foundation, as well as cofounder and executive chairman of Singularity University, calls Kurzweil’s technology predictions for the next 25 years exciting. He summarized some of them in his article:

  1. By the 2020s, most diseases will go away as nanobots become smarter than current medical technology. Normal human eating can be replaced by nanosystems. The Turing test begins to be passable. Self-driving cars begin to take over the roads, and people won’t be allowed to drive on highways.
  2. By the 2030s, virtual reality will begin to feel 100% real. We will be able to upload our mind/consciousness by the end of the decade.
  3. By the 2040s, non-biological intelligence will be a billion times more capable than biological intelligence (a.k.a. us). Nanotech foglets will be able to make food out of thin air and create any object in the physical world at a whim.
  4. By 2045, we will multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud.

Yes, that sounds mostly like science fiction. But as Diamandis points out, “It’s not about the predictions. It’s about what the predictions represent.”

Kurzweil did admit that he thought (wrongly) that autonomous cars would be in use by 2009. “Now that’s not completely wrong,” he said. “If I had said 2015, I think it would’ve been correct, but they’re still not in mainstream use. So even the [predictions] that were wrong were directionally correct.” If Kurzweil’s predictions about the future, like most of his predictions of the past, come true, our lives in the years to come and the lives of future generations could be radically different from what we’re accustomed to today.

Humans Could Live Longer or Even Achieve Immortality

Death is the termination of all biological functions that sustain a living organism. It remains one of nature’s biggest mysteries, and we accept it as inevitable. But if we look back through human history, we find that continuing advances in medical science and technology have already prolonged our lives: global average life expectancy has risen from about 26 years for early humans to about 71 years today. Can technology take it a step further, or perhaps enable mankind to make a quantum leap?

Rapid developments in the field of stem cell therapy promise new ways to treat or prevent many diseases and conditions, which could potentially elongate the human lifespan. Add AI into the mix, and we could be staring at the possibility of achieving immortality. Now, hang on! How is that possible? That’s surely an overly optimistic view of the future. But let’s keep an open mind.

Why does death occur? Diamandis writes that current theory offers two explanations. “First, we deplete our reserves of stem cells during the course of our lives. Second, our stem cells undergo various ‘epigenetic’ changes (insertions, deletions, mutations) over the course of life, making them less accurate and less adaptable. Basically, the repairmen of our body die off and go senile.”

Could humans alter this naturally occurring biological course? Kurzweil believes that nanotechnology will enable humans to defeat death for good and achieve immortality. He envisions intelligent, Wi-Fi-enabled nanobots traveling through the bloodstream, routinely replenishing and replacing worn-down cells in different parts of the body. Kurzweil goes on to say that nanotech will not only extend human life but also expand it: nanobots would not only stop the body from aging, they could also help reverse it. Moreover, he believes that “within 15 years, humans will be implanted with nanobots that will connect their brains to the internet, allowing for vastly accelerated cognition. Ten years after that, most of our thinking ‘will be done online.’” By 2030, humans will be hybrids, he claims.

An extensive, two-part paper on AI also touches upon Kurzweil’s idea of immortality. Kurzweil believes humans will eventually reach a point where they are entirely artificial, which ultimately implies that humans could live forever. By the time humans are no longer purely biological beings, humanity will have fundamentally changed. He calls this the singularity.

Sorting Facts from Fiction

We can rule out the possibility of artificial general intelligence (AGI) altogether, or imagine that it will destroy humankind, but the truth is that we’re all pretty much guessing here. Computer scientists, physicists and futurists continue to debate the topic; meanwhile, the average user of technology remains skeptical and largely oblivious. After all, as humans we are biased toward linear thinking, and anything that challenges our concept of normality raises doubts.

In addition, sci-fi movies and novels have typically portrayed the future of AI in a way that makes people uncomfortable about its emergence. Unavoidably, our perception of AI has been colored by those sensational portrayals. Perhaps it is simply too much for us, at this point in time, to imagine immortality and the singularity, but can we truly deny the possibility?

Paul Allen is one of many prominent tech leaders to dispute the claim, made by Ray Kurzweil and Vernor Vinge, that the singularity will arrive by 2045. Kurzweil bases his predictions on his understanding of Moore’s Law, the basis for his “Law of Accelerating Returns,” which predicts exponential growth of technologies. The interesting thing is that while computer scientists, philosophers and futurists argue that the prospect of strong AI and the singularity arising within the next few decades is far-fetched, they don’t outright deny the possibility of machines achieving human-level AI in this century. In fact, many feel it’s inevitable, and many express concern about it.

Here are the opinions of some leading technologists and scientists about the emergence of AI.

“I don’t understand why some people are not concerned.” – Bill Gates

“I think we should be very careful about artificial intelligence…If I had to guess at what our biggest existential threat is, it’s probably that.” – Elon Musk

“The development of full artificial intelligence could spell the end of the human race.” – Stephen Hawking

These statements come from some of the most renowned figures of our modern technological world. They are smart people, so if they are worried, it’s worth asking why. As some tech visionaries and futurists argue, if we’re ever able to achieve AI’s holy grail, human-level general intelligence, it may very well be our best and last invention as biological human beings.
