The TechCrunch Global Affairs project examines the increasingly intertwined relationship between the tech industry and global politics.
Geopolitical actors have always used technology to achieve their goals. Unlike other technologies, artificial intelligence (AI) is more than just a tool. We don’t want to anthropomorphize AI or suggest it has its own agenda. It is not—yet—a moral agent. But it is rapidly becoming a key determinant of our collective destiny. We believe that due to the unique characteristics of AI – and its impact on other fields, from biotechnology to nanotechnology – it already threatens the foundations of global peace and security.
The rapid pace of AI development, coupled with the scope of new applications (the global AI market is expected to grow more than ninefold from 2020 to 2028), means that AI systems are widely deployed without sufficient legal oversight or full consideration of their ethical impacts. This discrepancy, often referred to as the pacing problem, has left legislatures and executive bodies simply unable to cope.
After all, the impacts of new technologies are often difficult to predict. Smartphones and social media were integrated into everyday life long before we fully appreciated their potential for misuse. Likewise, it has taken time to realize the implications of facial recognition technology for privacy and human rights.
Some countries will deploy AI to manipulate public opinion by determining what information people see and using surveillance to restrict free speech.
In the longer term, we cannot know which lines of research currently underway will lead to innovations, how those innovations will interact with each other, or how they will affect the wider environment.
These problems are particularly acute with AI because the means by which learning algorithms arrive at their conclusions are often inscrutable. When side effects occur, it may be difficult or impossible to determine the cause. And systems that continually learn and change their behavior cannot be tested once and certified as safe for all time.
AI systems can act with little or no human intervention. You don’t have to read a science fiction novel to imagine dangerous scenarios. Autonomous systems risk undermining the principle that there should always be an agent, human or corporate, who can be held accountable for actions in the world, especially when it comes to matters of war and peace. We cannot hold the systems themselves to account, and those who deploy them will argue that they are not responsible when the systems act unpredictably.
In short, we believe that our societies are not prepared for AI – politically, legally or ethically. The world is also unprepared for how AI will transform geopolitics and the ethics of international relations. We identify three ways this could happen.
First, developments in AI will alter the balance of power between nations. Technology has always shaped geopolitical power. In the 19th and early 20th centuries, the international order was based on emerging industrial capabilities—steamships, airplanes, etc. Later, control of oil and natural gas resources became more important.
All major powers are acutely aware of the potential of AI to advance their national agendas. In September 2017, Vladimir Putin told a group of schoolchildren: “the one who becomes the leader [in AI] will become the master of the world.” While the United States currently leads in AI, Chinese tech companies are advancing rapidly and are arguably superior in the development and application of specific areas such as facial recognition software.
The domination of AI by major powers will exacerbate existing structural inequalities and contribute to new forms of inequality. Countries that already do not have access to the Internet and depend on the largesse of the richest countries will be left behind. AI-powered automation will transform employment patterns in ways that advantage some national economies over others.
Second, AI will empower a new set of geopolitical actors beyond nation states. In some ways, leading digital technology companies are already more powerful than many countries. As French President Emmanuel Macron asked in March 2019: “Who can claim to be sovereign, on their own, against the digital giants?”
The recent invasion of Ukraine is an example of this. National governments responded by imposing economic sanctions on the Russian Federation. But arguably at least as impactful were the decisions of companies such as IBM, Dell, Meta, Apple and Alphabet to cease operations in the country.
Similarly, when Ukraine feared the invasion would disrupt its internet access, it turned to help not from a friendly government, but from tech entrepreneur Elon Musk. Musk responded by activating his Starlink satellite internet service in Ukraine and delivering receivers, allowing the country to continue communicating.
The digital oligopoly, with access to the large and growing datasets that fuel machine learning algorithms, is rapidly becoming an AI oligopoly. Given their immense wealth, large American and Chinese companies can either develop new applications themselves or acquire smaller companies that invent promising tools. Machine learning systems could also help the AI oligopoly circumvent national regulations.
Third, AI will open up possibilities for new forms of conflict. These range from influencing public opinion and election results in other countries through fake media and manipulated social media posts, to interfering with the functioning of critical infrastructure in other countries, such as electricity, transport or communications.
Such forms of conflict will prove difficult to manage, prompting a complete rethinking of arms control instruments, which are not suited to these new coercive tools. Current arms control negotiations require adversaries to clearly perceive each other’s capabilities and their military utility; but while nuclear bombs, for example, are limited in their development and application, almost anything is possible with AI, as capabilities can develop both rapidly and opaquely.
Without binding treaties limiting their deployment, autonomous weapon systems assembled from off-the-shelf components will eventually be made available to terrorists and other non-state actors. There is also a high probability that misunderstood autonomous weapons systems could unintentionally trigger conflicts or escalate existing hostilities.
The only way to mitigate the geopolitical risks of AI and provide the agile, comprehensive oversight it will need is open dialogue about its benefits, limitations, and complexities. The G20 is one potential venue where a new international governance mechanism could be created to engage the private sector and other key stakeholders.
It is widely recognized that international security, economic prosperity, public good and human well-being depend on managing the proliferation of deadly weapons systems and climate change. We believe they will increasingly depend at least as much on our collective ability to shape the development and trajectory of AI and other emerging technologies.