“By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” 
― Eliezer Yudkowsky

The rise of AI and the fight for our future.

There’s a new kid on the block. An elitist, rarely smiling loner we didn’t really know much about until recently. A changer. A shapeshifter. A disruptor who cares little for the counsel of industrial ‘elders’ or the calm waters of the status quo. Unlike any character the ancients ever studied, he does what he wants and goes where he wants.

You see, from the time humans discovered how to make tents from bent branches, or fire by striking stones, we have come a long way. Today there are million-dollar drones, flying cars, self-driving vehicles, gene-editing technologies: feats almost too amazing to believe.

But not all advancement is positive. Even as we record major strides in the health sector and begin the countdown to the day cancer is no longer a threat, we must face our realities squarely.

Some believe that humanity’s embrace of artificial intelligence is the step that will eventually trigger our extinction as a species. Extremely intelligent systems trained to process exabytes of data in minutes would not have to do much to disable our defences, should they choose to. Notable among the AI skeptics are the late esteemed astrophysicist Stephen Hawking and Elon Musk, both of whom have warned openly about the future threat AI poses.

But is there any merit to their fears?

A research project by Google’s DeepMind suggested that as AI agents become more capable, they also become more aggressive in how they pursue their goals.

Here’s what happened. DeepMind built a simple game called Gathering, in which AI agents compete to collect apples. The agents were also armed with laser beams to use when necessary. While apples were abundant, the agents cooperated. As the apples diminished, however, their behaviour gradually changed, and they began to attack each other.

Quite interestingly, the more advanced AIs were less interested in “bonding” and attacked others frequently, regardless of the state of the game’s economy – abundance or scarcity.
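The dynamic DeepMind reported can be illustrated with a toy simulation. To be clear, the code below is not the Gathering environment itself but a hedged sketch: an `aggression` parameter stands in for an agent’s capability, and scarcity doubles the chance it fires its beam instead of gathering.

```python
import random

def simulate(apples, steps, aggression, seed=0):
    """Toy two-agent resource game loosely inspired by the dynamic
    described above. NOT DeepMind's actual Gathering environment:
    'aggression' is a made-up stand-in for agent capability."""
    rng = random.Random(seed)  # seeded for reproducible runs
    scores = [0, 0]
    zaps = 0
    for _ in range(steps):
        for agent in (0, 1):
            scarce = apples < 5
            # A more "aggressive" agent fires its beam more often,
            # and scarcity doubles that tendency.
            p_zap = aggression * (2.0 if scarce else 1.0)
            if rng.random() < min(p_zap, 1.0):
                zaps += 1               # attack the other agent; no apple gained
            elif apples > 0:
                apples -= 1
                scores[agent] += 1      # peacefully collect an apple
    return scores, zaps

# Compare a timid agent pair with an aggressive one under the same scarcity.
_, zaps_timid = simulate(apples=8, steps=50, aggression=0.05)
_, zaps_aggressive = simulate(apples=8, steps=50, aggression=0.4)
```

Under these assumptions, the more “aggressive” (more capable, in the analogy) agents spend far more of their turns attacking than gathering, mirroring the pattern DeepMind observed.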

And what happens when there are no apples left to compete for at all?

The more crucial question is: should the same scenario play out in real life, how would these AIs respond when their main competition is humans?

Is Elon Musk right after all?

Do humans have an exit plan? Should we enact laws mandating a failsafe protocol in case the conspiracy theorists suddenly turn out to be right? Do we have measures in place to ensure companies comply with such a protocol?

And if it is governments’ responsibility to make laws regulating these strides, who checks the governments as they secretly pursue their own agendas to dominate the world?

The rise of AI and the fight for our future is as crucial as any discourse we are having today about global warming and climate change.


This post first appeared here on February 8, 2019.

You can also read some more of my writing here.