Andrew Wheen, Principal consultant
You have probably noticed advertisements on the web which have a knack of knowing exactly what you are interested in. You may have sought help from a digital assistant such as Alexa or Siri, and you will almost certainly have spoken to intelligent machines on the telephone. Whether you are aware of it or not, artificial intelligence (AI) is playing an increasingly important part in your life.
AI can deliver enormous benefits. For example, it is already being used to diagnose illnesses and improve the safety of self-driving cars. However, as in the case of any powerful technology, AI also has a darker side. In the hands of malign individuals or organisations, AI is a powerful weapon that could be used to cause serious harm.
A report by Cambridge University has warned that the malicious use of AI could be a threat to global stability. The authors anticipate novel threats such as:
- Highly believable fake videos that impersonate prominent figures to manipulate public opinion
- Automated hacking
- Finely-targeted spam emails using information scraped from social media
- Exploiting the vulnerabilities of AI systems through adversarial examples and data poisoning
- Crashing fleets of autonomous vehicles
- Turning commercial drones into face-targeting missiles
- Holding critical infrastructure to ransom
Thankfully, a public debate over the dangers of AI is now under way, with prominent voices from the science and technology community weighing in. The late Professor Stephen Hawking warned that AI needs to be controlled or it could do severe damage to humanity. Similarly, Elon Musk, CEO of Tesla and SpaceX, has stated that AI is a fundamental risk to the existence of human civilisation, and has called for tougher government regulation. However, this view has been dismissed by Facebook CEO Mark Zuckerberg, who prefers to focus on the benefits of AI, such as better disease diagnosis and fewer car crashes.
AI enthusiasts argue that humans will never lose control of AI because it is humans that decide when and where the technology should be deployed – but the same can also be said about nuclear weapons.
History has taught us that we cannot hold back the march of science. International treaties and government regulation can help but, as recent attempts to stop nuclear proliferation have shown, they are unlikely to have much impact on rogue states. The problem has to be addressed from a number of different angles:
Take the lead in developing AI and identifying the threats
If the technology is developed as quickly as possible in strong, stable democracies, it can be adequately supervised, with problems identified and mitigated early.
Invest now to counter the threats
Investment is required to establish and maintain defences against AI-based crime, terrorism and warfare. Recent experience with cyber threats has shown that developing effective countermeasures can be a slow and difficult process.
Engage wider society in the debate
Open discussion should be stimulated to ensure that public opinion is not left behind by the pace of AI developments. Ethics committees (similar to those used in medicine) should be created to monitor developments and set boundaries.
Transparency is crucial
Users must be made aware when AI is being used in a product or service, and the reasons for its use should be clear from the outset.
Develop the regulatory environment now
Law-makers and regulators must keep pace with the speed of AI developments. Technological progress will not slow down to accommodate their more ponderous ways of working.
The increasing use and development of AI will undoubtedly bring efficiencies to a range of industries, with many benefits in store for the infrastructure industry. However, in the rush to develop this exciting technology, we must be mindful of the potential risks in order to prevent unexpected problems further down the line.
For more information read Artificial intelligence: Opportunity or threat? by Andrew Wheen, a white paper which introduces the technology and sets out some benefits and challenges.