The topic of when the next world war will happen has been a subject of debate ever since World War II ended. Since then, people have been making predictions about everything from a nuclear holocaust to a massive chemical war or even a zombie outbreak as the catalyst, with China, Russia, North Korea, the Middle East or the US as the key instigators. If a new report published by the Centre for a New American Security (CNAS) is anything to go by though, it could well be those pesky robots after all. Perhaps James Cameron is a prophet and not just a movie visionary.
The report claims that Chinese officials see the global scramble to build AI as an arms race and a security risk that could lead to conflict. In particular, conflict could be sparked completely unintentionally by AI getting things wrong, escalating a situation in ways humans never intended. It's a scenario which Gregory C. Allen, an adjunct senior fellow at CNAS and the author of the new report, described in a recent interview with The Verge:
The specific scenario described to me [by one anonymous Chinese official] is unintentional escalation related to the use of a drone… The point made to me was that it’s not clear how either side will interpret certain behaviours [involving autonomous equipment], the side sending out an autonomous drone will think it’s not a big deal because there’s no casualty risk, while the other side could shoot it down for the same reason. But there’s no agreed framework on what message is being sent by either sides’ behaviour.
In both the report and the article, Allen goes on to explain how drones, both large and small, are being utilised by the military, with capabilities that include autopiloting and performing simple tasks like flying to specific locations autonomously.
It’s something which is rather frighteningly easy to see happening. Companies are going to great lengths to improve AI and machine learning capabilities, and militaries are eager to put those advances to use.
To make things even scarier, Allen’s research suggests that even though a lot of progress has already been made, especially by the US and China – the only two countries believed to have autonomous drones – not nearly enough is known about the technology, making the likelihood of something going wrong even higher.
There are definitely pockets of real expertise on this issue (in the US) but there’s not the widespread comprehension there needs to be
Let’s hope that governments come to their senses and keep some level of human decision-making in the process for the time being, before handing things over to the robots. Like waiting for the birth of John Connor.
However, as Allen explains further, it’s not all down to the AI itself; there is also the element of perceived risk. Because drones remove the risk to human life, militaries may be more inclined to push boundaries with them, triggering further escalation as a result.
I understand that governments will always feel the need to arm themselves, but the thought of what AI-driven weaponry could mean is certainly a scary one. Of course, robots and AI could also solve all the world’s problems and lead to world peace, but that’s not profitable, so I unfortunately struggle to see arms companies even allowing for that possibility in their programming.
Last Updated: February 8, 2019