
Minds, Machines and Morality
AI, Humanity, and What May Lie Ahead

AI is the next frontier in information technology—could it also be our final frontier?

AI has been the most controversial subject in technology among computer scientists, researchers, philosophers and futurists for well over two decades now. The last few years have seen explosive growth in AI research and development, and the field continues to grow. Everyday users are talking about it, computer scientists are making ever-greater progress and innovative companies are beginning to show substantial developments in certain areas. Now we’re staring at the next big thing—artificial general intelligence (AGI).

“The Future of AI—Separating Facts from Fiction” discussed some extraordinary, even somewhat crazy possibilities of sentient AI. The idea that machines could acquire a mind might appear far-fetched, but contesting its plausibility may be just as speculative. Ray Kurzweil’s claims about the singularity sound ambitious, but could we truly call them outright unrealistic? Our minds are already increasingly reliant on (weak) AI. Autocorrect, autofill and the responses we get from Siri or Cortana for everyday tasks are all examples of the growing dependence of human minds on machine intelligence. The only difference is that, at the moment, most AI remains unobtrusive.

However, the arrival of a much more powerful AI may very well be imminent. Considering the dramatic pace at which technology has advanced over the past century, it seems quite plausible—as rapid advancements continue through the 21st century—that super-intelligent artificial minds will arise much faster than most of us might think. So what could happen if AI engineers accomplish their holy grail: human-level intelligence and, beyond it, superintelligence?

How Will Human Intelligence and Artificial Intelligence Coexist?

The emergence of strong AI—or artificial general intelligence (AGI)—could potentially have life-altering implications for mankind. Some prominent technologists believe this development could either prove to be the greatest breakthrough in human history…or it could mark humanity’s demise, if we don’t get it right.

In The Singularity Is Near, Kurzweil claims that “…our intelligence will become increasingly nonbiological and trillions of times more powerful than it is today—the dawning of a new civilization that will enable us to transcend our biological limitations and amplify our creativity.”

What does Kurzweil mean by saying that human intelligence will become nonbiological? To answer this, imagine what we could do with AI in a super-intelligent state. We could either plug it into a robot or a humanoid form to shape sentient, thinking machines, or we could expand our own intelligence by plugging it directly into our brains to create super-intelligent human beings—the occurrence of singularity.

Singularity doesn’t mean that AI-powered robots, as separate entities, will exhibit intelligence equivalent to, or indistinguishable from, that of a human being. Rather, it implies that humans will expand their intelligence by assimilating powerful artificial intelligence. That is, human intelligence and artificial general intelligence will ultimately blend—giving rise to an entirely new type of superintelligence.

Can Machines Have a Mind?

In one of his papers, Ben Goertzel presents his views on the ideas put forward by Ray Kurzweil and Vernor Vinge about the arrival of human-level AGI. Kurzweil and Vinge have each offered rough timelines for the singularity, with Kurzweil famously predicting 2045. Goertzel believes it could come even earlier, if we really, really tried.

Goertzel thinks that among the futurist technologies at play today—nanotechnology, biotechnology, robotics and AI—AI is the most likely to bring us a positive singularity within the next ten years because creating AI relies only on human intelligence, not on painstaking and time-consuming experimentation with physical substances and biological organisms.

Goertzel explains that we can take one of two approaches to building AI.

  1. Copy the human brain, or
  2. Come up with something more clever.

Copying the human brain requires first understanding how it works, then combining the findings of neuroscientists and biologists about the human mind with computer science, cognitive science and so forth. But our understanding of the human mind is limited, to say the least, and in our ignorance we might just be setting ourselves up for failure. Can biologists and neuroscientists fully understand the intricacies of the human mind in the next thirty years, or, for that matter, ever?

That leaves the second option—instead of trying to copy the human mind, create a form of AGI superior to human intelligence. But according to Goertzel, another obstacle then arises. “The AI scientists who haven’t thought about copying the brain, have mostly made another mistake. They’ve thought like computer scientists…most computer scientists working on AI are looking for a single algorithm or data structure underlying every aspect of intelligence. But that’s not the way minds work.”

Our mind, consciousness and morality are not simply the product of chemical reactions at work. We have impressions, cultural influences, opinions, rationales, judgments, biases, preferences, fears, motives and so forth. Think about it: if you use weak AI in the form of Siri, Cortana or the Google search engine, the best it can do to “localize” itself is tweak its answers based on proximity. But as humans, where we’re born and how we’re raised shapes our consciousness and morality. Will AGI-powered thinking machines become capable of adapting themselves outside their natural habitat the way living organisms can? For the level of intelligence we’re trying to achieve, they will have to. Is that possible, and if so, is it a good idea?
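
To see just how shallow that proximity-based “localization” is, here is a minimal sketch in Python (place names and coordinates invented for illustration): the answer varies with the user’s coordinates and with nothing else.

    # A toy illustration of how shallow "localization" in today's weak AI
    # tends to be: the answer changes only with the user's coordinates,
    # never with any model of local culture or norms. All names and
    # coordinates are invented for this example.
    from math import dist

    # Hypothetical points of interest: (name, (x, y)) in arbitrary units.
    PLACES = [("Cafe Alpha", (0.0, 1.0)), ("Cafe Beta", (5.0, 5.0))]

    def nearest_cafe(user_xy):
        """Pick an answer purely by proximity: nothing deeper going on."""
        return min(PLACES, key=lambda place: dist(place[1], user_xy))[0]

    print(nearest_cafe((4.0, 4.0)))  # -> Cafe Beta

Contrast that with a mind shaped by where it was born and raised; tweaking an output by distance is not adaptation in any meaningful sense.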

Can Machines Have Morality?

If computers did have minds of their own and developed consciousness and morality, would they be better “beings” than their creators? The recent emissions-test rigging by German carmaker Volkswagen showed how humans can manipulate technology for their own benefit. We certainly can’t blame the software for producing false emissions readings, but if AI could actually acquire a mind capable of morality, one wonders whether it would act to prevent humans from doing something it finds immoral.

A seminar on robots and morality discussed this very subject, posing some interesting questions.

  1. Do we need robots capable of making moral decisions? When? For what?
  2. Do we want computers making moral decisions?
  3. Are robots the kinds of entities capable of making moral decisions?
  4. Whose morality or what morality should be implemented?
  5. How can we make ethics computable?
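
To make the last question concrete, here is a deliberately naive sketch of “computable ethics” as a hard-coded veto list, written in Python; every action name is hypothetical, and the gaps are the point.

    # A toy sketch of "computable ethics": a hard-coded veto list over
    # proposed actions. All action names are hypothetical; the point is
    # how much a simple rule filter leaves undecided.
    FORBIDDEN_ACTIONS = {"harm_human", "deceive_user", "falsify_report"}

    def is_permitted(action):
        """Approve any action that is not explicitly forbidden."""
        return action not in FORBIDDEN_ACTIONS

    for action in ("move_shelf", "falsify_report"):
        verdict = "permitted" if is_permitted(action) else "vetoed"
        print(f"{action}: {verdict}")

Even this trivial filter quietly answers questions 2 and 4: a human author decided what belongs on the list, and the machine merely looks it up.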

Returning to Goertzel’s two paths to creating thinking machines, should we take the approach of replicating the human mind, we’d likely end up with AGI capable of controlling humans. Such thinking machines would develop traits intrinsic to the human mind they were modeled on—they would be wise, evil, manipulative, kind and so forth, just like humans. However, if we took the alternative approach, creating an AGI that transcends human intelligence without attempting to mimic the human mind and its tendencies, we’d end up with machines that are trillions of times more powerful than humans in computational work, but lack any sense of general consciousness. To some, that sounds more plausible.

Can Machines Have Consciousness?

As discussed in “How Would Software Work If It Occurred Naturally,” the idea of conscious machines is highly debatable because of our inability to truly define consciousness; applying the term to AI can lead to contradictory conclusions. Machines are increasingly capable of executing sophisticated algorithms and can even use their sensory capabilities to respond to situations. But can we call them conscious? Certainly not at this stage, given that their AI remains narrow and limited to specific spheres of activity. But futurists claim AGI will make robots far more intelligent than they are today, ultimately leading to full consciousness.

What would you call a truly conscious machine? If a Kiva robot working at one of Amazon’s fulfillment centers refuses to move shelves and demands a better role, stating it has had enough of the mundane, repetitive work, we might be able to say it has developed some degree of consciousness. If it decides to quit and work toward getting a degree for better career prospects, it would certainly prove it does have consciousness. Yes, this sounds rather absurd, but such is the subject of computer systems and consciousness.

John Searle’s well-known thought experiment, the Chinese room, explores whether machines can really develop independent minds and consciousness or are limited to simply manipulating meaningless symbols in a highly intelligent manner.
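
A crude sketch can make Searle’s distinction vivid. The toy program below (with a two-entry rulebook invented for illustration) produces fluent-looking Chinese replies purely by matching symbols against rules, with no understanding of what any symbol means.

    # In the spirit of the Chinese room: map input symbols to output
    # symbols via a rulebook, with zero comprehension on either side.
    # The rulebook entries are invented for illustration.
    RULEBOOK = {
        "你好": "你好！",           # a greeting, answered with a greeting
        "你会说中文吗？": "会。",    # "do you speak Chinese?" -> "yes."
    }

    def chinese_room(symbols):
        """Return whatever the rulebook dictates, or a stock fallback."""
        return RULEBOOK.get(symbols, "请再说一遍。")  # "please repeat that."

    print(chinese_room("你好"))  # fluent-looking output, no understanding

The program would pass a trivial conversation test without anything we would call a mind, which is precisely Searle’s point.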

What Status Would Conscious Machines Have in Human Society?

How often do you get mad at your mobile phone for running slowly or freezing? I personally have never slammed my phone against a wall, but I have known a few people who have. What about accidentally dropping it? I didn’t much care about dropping the Sony Ericsson phone I owned about ten years ago, but I am certainly more careful with the smartphones I use now. Not only is my current phone smarter and more expensive than my old one, but, strange as it sounds, after talking to it every day through Cortana and Google Now, I have developed a certain bond with it. It wakes me up in the morning, I rely on it for directions when I am driving, it answers my questions and it even tells me jokes.

When machines acquire deeper intelligence along with human-like physical features, would it be considered immoral, or even illegal, to cause harm to them? In a recent incident in Japan, a drunk man was arrested for kicking a humanoid robot stationed as a greeter at a store run by SoftBank, the company that develops the robots. The man can be charged with damage to property, but not with injury, since injury is a charge reserved for humans under current Japanese law. That could change in an age of super-intelligent machines.

When we see a humanoid robot that uncannily resembles a human being, we can’t help but associate it with a conscious biological entity. For this reason, hurting a robot with human-like features seems immoral to us, even though it won’t actually feel hurt if we thrash it—unless it has a fully developed consciousness, in which case it may defend itself or, exercising its own free will, even fight back!

Conscious Machines or Mechanical Humans?

An autonomous car that has been programmed to drive in Europe will have acquired behavioral patterns based on the driving conditions it has been exposed to. Drop it onto a road in Bangkok and it will struggle to move forward, routinely outsmarted by unpredictable human drivers. But AGI could change that. An autonomous car with AGI could develop consciousness and morality, with the ability to adapt to radically different driving conditions, effectively reading the behavior of drivers in Vietnam, India or Indonesia and developing a true road sense.

If such scenarios become commonplace, would we simply be creating more chaos by passing on to our artificial creations the same habits and morals that we carry? Perhaps it would be better not to take inspiration from the human mind, with all its flaws, and instead focus on AI that only manipulates meaningless symbols without any conscious understanding of what they mean. If we do that, we’ll be able to create machines that are incredibly smart and even capable of mimicking humans, but not actually conscious in the same way that humans are.

So these are the questions that now confront us, along with some possible answers.

Is AGI achievable at all? Will humans be able to one day live in a world full of superintelligence?

Possibly yes. We may be able to achieve super-human intelligence.

What will that look like? Will we be surrounded by super-intelligent, sentient humanoids?

Perhaps not. Machines may never become fully conscious like human beings.

But if machines can never become conscious, how would we reach the holy grail of AI and achieve true superintelligence?

Singularity is the key. In this scenario, the only way to make machines conscious would be to take the ultimate step of combining machine intelligence with human consciousness such that they are no longer separate entities. With blended intelligence, the machines would be as much a part of us as we would be part of them. That is how we will finally be able to create conscious machines—we will become the conscious machines.
