The globally renowned physicist Stephen Hawking has issued another warning: without global unity, technology may eventually overcome mankind. Is he right about this one?
Two of his previous warnings were that artificial intelligence is a double-edged sword and that, as a species, we have only about 1,000 years to colonize another planet before we begin to wither away.
One of the brightest minds alive, Hawking admits he is often seen as an alarmist. But that's just the thing with those who see where things are headed before the rest of us: their warnings tend to sound extreme. AI has the capacity to overthrow our dominance simply because it can evolve to become far more intelligent than we are, both as individuals and as a species.
“Since civilization began, aggression has been useful since it has definite survival advantages,” Prof. Hawking told the NY Times. “It is hard-wired into our genes by Darwinian evolution. Now, however, technology has advanced at such a pace that this aggression may destroy us all by nuclear or biological war. We need to control this inherited instinct by our logic and reason.”
As for the fact that we're rapidly exhausting our planet's resources, that isn't news. If the Earth could speak, it would say we haven't been very good tenants, and it would be hard to argue otherwise.
Humanity is facing a lot of problems right now and it’s easy to see how our hopes could turn to artificial intelligence. It could offer solutions to many of our dilemmas but the alarming rate of its evolution could also spell doom.
And that wouldn’t be because a super-intelligent AI would be malicious by nature, but because there’s a good chance it could evolve to not give a rat’s ass about humans and their misery. If its goals differed from ours, we’d simply be in its way. And what do we do with obstacles? We remove them. Simple as that, and well illustrated by Hawking’s anthill analogy:
“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project, and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”
The solution Hawking proposes could fix this one problem but it brings up an even scarier alternative. A unified global government. Now where have we heard that before? Ah right, the New World Order. Good times, good times.
The problems posed by unchecked AI progress could in theory be solved by the emergence of a centralized, worldwide organization whose purposes would include keeping a close eye on the evolution of artificial intelligence. That way, it could identify and solve any potential problem before it became serious. Hawking believes that a prerequisite for such an organization would be a global government. But this could pose an even bigger complication to our already difficult lives.
We know how human beings are. Give a man even a whiff of power and he’ll start abusing it in no time at all. If a global government took control, tyranny would follow almost immediately. Any doubts? Look at the world leaders of today. Are empathy and benevolence among their traits? Because to me, it seems those words aren’t even in their vocabulary.
And today, with the world divided among different governments, each one is busy undermining and fighting the others. Under a global government, those petty squabbles wouldn’t take up its time and energy; both would be directed mainly at controlling the population. We’re not people to them, we’re resources.
Hawking wanted to end his warning on a sweet note:
“All this may sound a bit doom-laden, but I am an optimist. I think the human race will rise to meet these challenges.”
Hope you’re right, Professor, hope you’re right.