Maximising the benefits of artificial intelligence for infrastructure calls for strong governance to minimise the risks

Quick take

Our implementation of AI on major projects has delivered significant cost savings and efficiency improvements

Governance structures to ensure responsible use of AI must be put in place from the outset to maximise integrity and foresight

Read our six recommendations to support organisations in using AI responsibly

Ensuring AI is used responsibly is critical to safety and security

The announcement earlier this year of the UK government’s AI Opportunities Action Plan means more organisations than ever before are looking to artificial intelligence (AI) to unlock improved productivity. The plan estimates that – if AI is fully embraced – the gains could be worth up to an average of £47bn to the UK each year over a decade. Mott MacDonald’s digital project principal in UK and Europe, John Farrow, considers how the infrastructure industry can deliver this potential without compromising on safety and security.

Racks of servers in a data centre.

Our experience of working with major government clients to implement AI means we know that realising this figure securely and responsibly is not going to be straightforward, but it is not unrealistic – we have seen the benefits firsthand. A recent deployment of AI on one part of a major infrastructure project, tracking drawings and the use of design codes, has the potential to deliver a cost saving of £0.5M in work hours. On a multimillion-pound scheme this may sound like a drop in the ocean, but if AI could be rolled out to improve efficiency in the same way on other parts of the project, the value grows.

The launch of the government’s AI plan creates urgency to develop, procure, deploy and adopt AI technology and capabilities. Nonetheless, like any new technology, AI carries risks that need to be addressed – and if the right understanding is not in place, the risks could be greater than with other digital solutions. These challenges make the need for human oversight and robust governance around AI adoption paramount.

AI risks

The potential for biased algorithms, privacy and data protection concerns, and the ethical implications of algorithmic and autonomous decision-making are significant hurdles. AI might take actions that are not easily understood or anticipated by human operators, and AI systems might expand the attack surface for cyber threats. Malicious actors might target AI systems directly or use AI to enhance their attacks on critical infrastructure.

Overreliance on AI decision support tools could lead to mistakes and operational inefficiencies, potentially leading to the failure of assets. Without proper governance, the risks associated with AI could outweigh its benefits, leading to public distrust and regulatory action.

Ground truthing – evaluating the responses given by AI against known answers – and using synthetic test queries to manually check how the AI is working are essential. This has been underlined in work we have proposed for a major government contract, where developing the business case requires ensuring compliance with 8,000 documents. That would be almost impossible with manual techniques, but by using a retrieval augmented generation database and searching with large language model AI, the time involved can be significantly reduced. However, checking and evaluation are still important to ensure confidence in the output and to understand how to improve accuracy and reliability.
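The checking approach described above can be sketched in miniature. The snippet below is an illustrative toy, not the system proposed for the contract: a simple keyword-overlap retriever stands in for a real retrieval augmented generation pipeline (which would use embeddings and a large language model), and the document set, identifiers and test queries are invented for the example.

```python
from typing import Dict, List, Tuple

# Toy stand-in for the compliance document set (illustrative names only).
DOCUMENTS: Dict[str, str] = {
    "DS-001": "concrete cover requirements for reinforced structures",
    "DS-002": "fire safety design codes for station buildings",
    "DS-003": "drainage and flood risk design standards",
}

def retrieve(query: str, documents: Dict[str, str]) -> str:
    """Return the id of the document sharing the most words with the query.

    A real retrieval augmented generation pipeline would use embeddings and
    a large language model; keyword overlap stands in here so the sketch
    stays self-contained.
    """
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(documents[d].split())))

# Synthetic test queries with known correct answers – the "ground truth".
SYNTHETIC_TESTS: List[Tuple[str, str]] = [
    ("which code covers fire safety in stations", "DS-002"),
    ("what is the required concrete cover", "DS-001"),
    ("standards for flood risk and drainage", "DS-003"),
]

def evaluate(tests: List[Tuple[str, str]], documents: Dict[str, str]) -> float:
    """Run every synthetic query and report the fraction answered correctly."""
    hits = sum(retrieve(query, documents) == expected for query, expected in tests)
    return hits / len(tests)
```

Here `evaluate(SYNTHETIC_TESTS, DOCUMENTS)` returns the retrieval accuracy; a drop in that score after a change to the document set or the retriever is the signal to investigate before trusting the output.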

Why governance matters

AI governance refers to the frameworks, policies and practices that ensure AI systems are developed and deployed responsibly. Ethics in AI deployment goes beyond compliance with regulations – it involves a commitment to doing what is right, even when not mandated by law. 

As an example of the challenges we help address, we recently collaborated with a major government client to develop a safe and ethical AI-based analysis solution, which ensured that confidential information remained protected while creating a comprehensive data dashboard. The quantity and type of data meant that manual techniques would be too labour intensive, but the nature of the data led to questions over whether the processing could be done safely with AI. The technical solution may have taken only a few days to create, but before reaching that stage it was important for the teams to consider what could go wrong with using AI and the impact of any failure.

To overcome the legal, security and ethical challenges, we bring together subject matter experts with technical experts in AI to advise clients on the risk, while also challenging, testing and checking the solution to ensure the AI is acting as intended.  

However, the ethical issues need to be considered more broadly than on a project-by-project basis. When organisations integrate AI governance and ethics into the core of their operations, they not only safeguard against potential pitfalls but also demonstrate leadership in the responsible adoption of emerging technologies. While the government’s AI plan presents the potential for the technology to support its growth agenda, it also places an emphasis on responsible adoption – and that level of AI maturity takes time to develop within organisations.

Six steps for responsible AI use

Organisations that want to use AI responsibly should:

  • Tailor AI governance to organisational principles and risk appetite – articulate acceptable uses of AI and measures for addressing potential ethical issues, while building a coherent narrative around the approach to AI that aligns with the values of the organisation.
  • Have leadership support and organisation-wide buy-in – commitment from leadership is required at the earliest opportunity to promote best practices and influence cultural change within the organisation.
  • Start slowly and build out – leverage existing governance structures and processes to build an AI assessment and use external support until AI maturity is reached to minimise the risks.
  • Incorporate diverse voices – involve customers, communities and other stakeholders in discussions about how AI is used on projects to build trust in the use of the technology.
  • Promote AI literacy – ensure all employees have the skills, confidence, capabilities and tools they need to embrace the responsible adoption of AI, but also understand the limits of their knowledge as well as the capabilities of the technology.
  • Monitor regulatory developments – stay informed about evolving AI regulations and adjust policy and practices as necessary.

Understand your risk

As AI continues to shape and reshape industries, the importance of fit-for-purpose governance and ethical decision-making cannot be overstated. Each AI use case comes with its own ethical dilemmas and governance concerns. From perpetuating biases to introducing new security vulnerabilities, organisations that push forward without understanding and mitigating these risks will face challenges ahead. Irresponsible AI applications and practices not only increase risk to an organisation and its people but can also negatively impact the communities they serve and society at large.

Embracing responsible AI is not just about staying competitive – it’s about leading with integrity and foresight, enabling navigation of the AI landscape with confidence. Starting the journey now will prepare organisations for a future where AI is integral to all aspects of infrastructure development and management. The first step is likely to be an assessment of AI skills across your organisation and supply chain.
