David Shrier, a prominent expert in large-scale, technology-driven change, holds the position of Professor of Practice, AI and Innovation at Imperial College Business School and serves as the co-Director of the Trusted AI Institute. He provided guidance to the European Parliament on the EU AI Act and advised the Commonwealth Secretariat during the creation of the Commonwealth Fintech Toolkit.
Shrier co-founded the Trust::Data Initiative at MIT, fostering collaboration between private sector companies and governmental organizations. Additionally, he established the Data Academy for the United Nations, offering training on utilizing data in humanitarian crises. Shrier’s upcoming book, ‘Basic AI: A Human Guide to Artificial Intelligence,’ is scheduled for release in January 2024 (Harvard Business Publishing; Little Brown).
In a recent article for the ‘Horizons’ journal, you refer to AI as ‘humanity’s greatest existential crisis.’ Do you believe the widespread concern about AI is justified?
Panic is not justified, but a significant level of concern and focus is necessary. We must invest more effort in understanding how AI will impact human society in the coming months and years. Additionally, we need to explore how AI can be utilized to address major challenges facing humanity.
AI presents a unique opportunity to address critical issues such as inclusion, equity, the climate crisis, and human health. Paradoxically, it is both a challenge to our traditional notions of labor and the economy and a potential answer to the impending demographic cliff. We need a deeper understanding of the risks and opportunities that AI presents.
You introduced the term ‘flash growth’ to describe how technologies disrupt societies. What does this term mean?
We’re familiar with the concept of a ‘flash crash’ in financial markets, where AI-driven trading systems can cause sudden, sharp swings in prices. ‘Flash growth’ is a similar concept, but it refers to the rapid, society-wide adoption of new technologies. In the past two years, we’ve witnessed technologies spread at unprecedented speed, thanks to decades of government investment in telecommunications infrastructure: widespread, low-cost devices and networks have enabled applications and services to be adopted almost overnight.
This quick, society-scale adoption of new technologies presents a challenge for government response. How can governments future-proof their oversight of technologies like AI?
Governments need to pursue parallel streams of action. Regulations should be in place to protect society, but they should be developed through a consultative process involving multiple stakeholders, and principles-based regulation is preferable to rules-based regulation. Simultaneously, regulators must be trained on new technologies so they can adapt existing rules while a more deliberate regulatory process unfolds.
Some regions have been proactive in continuously training regulators and fostering communication between regulators and innovators; others lag behind due to a lack of investment in regulator education. The growth potential of AI, estimated to add 10% to global GDP by 2032, depends on government capacity and on support for private sector action.
Who has been successful in navigating the AI landscape?
The United States and China have heavily invested in AI and are reaping the benefits. Interestingly, the UK, with a smaller budget and population, ranks third globally in AI productivity. Other notable countries include Israel, Switzerland, and to some extent, India.
You have experience bringing together the public and private sectors. What is your key takeaway?
Mariana Mazzucato’s work highlights the crucial role of government investment in driving innovation. Rather than cutting government programs in the hope that the private sector will fill the gap, the focus should be on strengthening platforms for public-private cooperation. Government investment in academic programs, coupled with translational mechanisms that carry research through to commercial application, is vital for fostering innovation.
The private sector, represented by figures like Elon Musk and Mark Zuckerberg, plays a significant role in digital tech and AI. Is this concentration of power concerning?
It becomes problematic when a few individuals or companies wield disproportionate power. The concentration of power in a small number of tech companies raises concerns about who controls our collective future. It is crucial to involve a diverse range of stakeholders in the discussions and decisions that will shape AI and digital technologies.