Rise of the Machines: How to Avoid an AI Apocalypse

With “Terminator: Dark Fate” opening this month, we analyse how to keep the risks of AI in check

The Terminator is back on the big screen, giving us a worrying glimpse of a post-apocalyptic world destroyed by Artificial Intelligence (AI) gone bad. The new Terminator: Dark Fate* film is a direct sequel to 1991's Terminator 2: Judgment Day*, which saw the fictional Skynet AI become self-aware and turn on its creators. When its human masters panicked and tried to shut it down, Skynet launched a nuclear missile attack on an unsuspecting population, plunging humankind into a war against the killer robots.

“AI is more than a matter of making good technology; it is also a matter of making good policy”

In reality, a malevolent, all-conquering supercomputer probably isn't going to arrive any time soon, but AI has already started to beat humans at games such as chess and the Chinese strategy game Go. What's more, AI is increasingly present in our everyday lives, from the TV recommendations offered by Netflix* to the voice-controlled smart speakers we use at home. While AI is set to transform society in positive ways, from powering autonomous cars to diagnosing cancer, there is an element of risk if we don't fully consider its impact and set boundaries for its use.

Bias in AI

There is a danger that AI is evolving faster than our ability to understand it. While harmful AI probably won't come in the form of cyborg assassins sent from the future, it could affect us in more subtle ways, an area that requires further research. One of the biggest challenges in developing useful AI is bias. AI systems inevitably reflect the unconscious assumptions of the people who build them, and if the people who build AIs all belong to the same demographic, then algorithmic bias is inevitable.
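
The organisations involved don't publish their audit tooling, but as a rough, hypothetical sketch, one simple way to make this kind of bias visible is to report a model's accuracy for each demographic group separately rather than as a single overall number. The groups and results below are invented purely for illustration.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic group, whether the model's
# prediction was correct). A real audit would use far larger samples.
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, ok in results:
    total[group] += 1
    correct[group] += int(ok)

# Report accuracy per group; a large gap is a warning sign that the system
# works far better for some people than for others.
for group in sorted(total):
    print(f"{group}: {correct[group] / total[group]:.0%} accuracy "
          f"({total[group]} samples)")
```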

The quality of the data that AI is trained on also matters. We're already seeing the results of limited training data in voice recognition systems: often trained on 'standard' male speech, they have been known to struggle with female voices and local accents. In a more extreme example, a medical AI algorithm could offer skewed predictions if it has been trained on data from only one demographic of patients. Along with bias and data quality, privacy is a key concern in how AI is used. Some form of standardisation and regulation is likely to be needed in future to protect privacy and address ethical concerns.
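
As a small, purely illustrative sketch (the dataset and categories are made up), auditing how a training set is distributed across the populations a system must serve can flag this kind of skew before a model is ever trained.

```python
from collections import Counter

# Hypothetical voice-recognition training set: (speaker group, accent).
# The labels and proportions are invented to illustrate the skew.
training_samples = [
    ("male", "standard"), ("male", "standard"), ("male", "standard"),
    ("male", "regional"), ("female", "standard"), ("male", "standard"),
]

by_group = Counter(group for group, _ in training_samples)
by_accent = Counter(accent for _, accent in training_samples)

# Report each category's share of the data; a heavily skewed share predicts
# the failure mode described above, where under-represented voices are
# recognised far less reliably.
for label, counts in (("speaker group", by_group), ("accent", by_accent)):
    for key, n in counts.most_common():
        print(f"{label} '{key}': {n} samples ({n / len(training_samples):.0%})")
```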

Shaping Ethical Frameworks

In 2015, a number of tech leaders, including the late Stephen Hawking, Tesla* and SpaceX* boss Elon Musk and Apple* co-founder Steve Wozniak, signed an open letter calling for research into the societal impacts of AI. The letter, Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter, outlines the benefits of AI but urges caution in order to avoid the potential pitfalls of such a revolutionary technology.

While some large tech firms are attempting to develop their own ethical frameworks, there are also a number of collaborative initiatives researching the ethics of AI. Intel is among the backers of the Partnership on AI* initiative, which is focused on shaping the technology's future. With more than 90 partners, including Amazon*, Google* and Facebook*, the Partnership on AI aims to develop and share best practices, advance public understanding, provide an open platform for discussion and support efforts to develop socially beneficial AI.

AI for Good

Intel is already working to develop uses of AI that have a positive impact on the world through its Intel® AI for Social Good initiative. As part of the project, Intel supports organisations with AI technologies and expertise to help them accelerate their work. It also carries out research and supports efforts to ensure that AI is more transparent, less biased and more accessible to all.

The initiative covers a wide range of projects including the TrailGuard AI* camera. This is designed to detect potential poachers entering wildlife reserves and alert park rangers in near real-time so that endangered animals can be protected. "AI can be used in so many different ways," said Anna Bethke, Head of AI for Social Good at Intel. "The overall aim of this initiative is to use the technology we've created to help as many individuals as possible."

A Matter of Good Policy

Earlier this year, Intel published a white paper outlining its plans for a national AI strategy in the US, echoing the efforts of China, India, the UK, France and the European Union, which have already announced formal plans for AI. "AI is more than a matter of making good technology; it is also a matter of making good policy," said Naveen Rao, corporate VP and general manager of Intel's AI Products Group, and David Hoffman, associate general counsel and global privacy officer at Intel, in a joint editorial. "And that’s what a robust national AI strategy will do: continue to unlock the potential of AI, prepare for AI’s many ramifications, and keep the U.S. among leading AI countries."

Businesses, governments and other organisations must work together to build a framework for ethical AI. By ensuring that AI is transparent, free from bias, based on quality data and designed for positive use, we can avoid an AI apocalypse and unlock the full potential of this transformative technology.

*Other names and brands may be claimed as the property of others