The Economist has described big data as the new oil, although I gather some insurers are calling it the new asbestos. Big data will impact almost every person in the world and almost all businesses. Its potential for transformative good is extraordinary, as is its potentially destructive power if we don’t pay sufficient attention to security and privacy.
Artificial intelligence (AI), big data’s bigger brother, is another thing entirely. Brute computing power will soon replicate much of what human intelligence can accomplish, and soon afterwards what the aggregate intelligence of all humans can do. This creates enormous opportunities to transform how we approach everything, and the power of AI seems likely to expand exponentially if Moore’s Law continues to hold.
In the infrastructure world we think of social outcomes as our ultimate goal. Better and faster journeys to work. Eliminated diseases. Reduced inequalities. But what would be the social outcome of exponential and unconstrained AI? Much has been written on this subject, such as the idea that humans might become little more than the machines’ pets – if they want us. Sooner, though, we seem to face a risk of dramatic increases in inequality, with a tiny fraction of machine makers taking almost all the wealth, and low-paid jobs or no work for anyone else. But on what timescale? The technology will probably get there quickly, but how will it be adopted, and what role will existing governance structures, laws, institutions and infrastructures play in enabling or inhibiting its deployment?
This raises profound social questions, even if we find ways, like universal basic incomes, to mitigate the worst of the inequalities. The technologists’ answer is that we need to find meaning outside work, and maybe that’s right for some, but will whole populations be content to either be wealthy technocrats, cut each other’s hair, perfect their ballet or live a life of permanent idleness? That doesn’t feel like human nature, or something that I for one could consciously impose on other people. It might also provoke social uproar on a level that would leave the French Revolution looking like a storm in a teacup. Intrinsic characteristics of humanity adapt several orders of magnitude more slowly than technology changes.
Standing back, AI is beginning to feel uncannily like climate change. Both are initiated by humans, both are definitely happening but their course is unpredictable, and both could drive exponential change in the way we live. In the case of AI this is both a wonderful opportunity and a significant but unpredictable risk. One could say the same in hindsight about climate change – the Industrial Revolution gave birth to modern developed societies, but was predicated on burning fossil fuels, leaving behind a planet-threatening legacy.
There is also the issue of the ‘global commons’: the responsibility and ownership we all have over the challenges this new development creates. Everyone, at least in the developed world, has contributed to climate change, but no individual person, company or state felt any burden to address the issues until recently, resulting in a lethargic response that may have come too late. In the case of AI, the world’s corporations and start-ups are locked in a battle for competitive advantage, with the field advancing exponentially. In economic terms, no one person, company or state feels an incentive to consider the global consequences of what this could unleash, although the effects will be felt by all.
As with climate change, exploring and protecting the global commons will be phenomenally difficult. Decisions must be taken in a context full of uncertainty and partial insight, and the whole issue is ripe for political polarisation. However, there is at least one big difference between climate change and AI. Climate change is convenient for most people to ignore: the impacts feel distant, and many people don’t want to change what they do today. With AI there is a natural fear factor, it being relatively easy to paint dystopian pictures. The costs of restricting the impacts of AI on individuals today would be low (assuming we set such restrictions sensibly and don’t, for example, forego the immense healthcare benefits it offers in the short term). So it would take far less political bravery to act. It’s good to see a number of Silicon Valley’s luminaries using their profile to highlight the issues.
However, the big lesson of climate change is not to leave the consensus building too late. As a society we need to be debating this widely, learning from the successes and failures of the climate change debate. We need to explore what limitations on AI would benefit humankind, and how to establish the global governance to achieve them.
We live in interesting times, with AI set to bring immense benefits, but we must debate potential pitfalls before it’s too late.
This article was first published on Infrastructure Intelligence's Digital Transformation Hub, sponsored by Mott MacDonald.