The Convergence of Artificial Intelligence and Quantum Computing: Unravelling the Complex Geopolitical Nexus

Cameron Wood & Alex Krijger

Artificial Intelligence (AI) has the potential to become the greatest technological advancement of our lifetime, with PwC estimating it could contribute $15.7 trillion to the global economy by 2030. As such, the increasing significance of the interplay between AI and Quantum Computing (QC) is garnering the interest of technologists and policymakers.

Substantial private and public sector investments have been driven by the relatively recent availability of big data, used to train AI programs, coupled with the development of machine learning technology. Changing demographics have also contributed, as AI is increasingly seen to stimulate productivity and growth in an age of shrinking and aging populations.

However, the primary enabler of AI development has been the increasing computational power of hardware. This brings us to QC, which allows for the processing of massive amounts of data swiftly, theoretically providing AI with the speed and agility required for real-time intelligent decision making.

Currently, AI relies on classical computing methods, which evaluate possible choices through repeated trial and error. QC surpasses binary computation by using quantum states to explore many possible outcomes in a single process. Google demonstrated the significance of this in 2019, when its quantum computer completed in just 200 seconds a task it claimed would take a classical computer 10,000 years.

Without becoming lost in technical details, AI aims to create computational systems that replicate human intelligence, while QC can exponentially accelerate an AI algorithm's information processing, enhancing its accuracy and effectiveness. This paves the way for rapid advances in data processing, forecasting, process optimization, and complex problem solving through machine learning and neural networks.

Strategic Implications of Quantum Integrated Artificial Intelligence

The convergence of AI and QC presents strategic implications in various domains. Economically, developing AI systems and harnessing quantum advancements can position states for significant advantages, revolutionizing industries and driving innovation. Key areas such as personalized medicine, genomic sequencing, drug discovery, defensive cryptography, sustainability, climate change modelling, financial modelling, logistics, and cybersecurity are expected to benefit from QC-enabled AI. Benefits will spill over into related sectors and enhance productivity across multiple industries concurrently. Conversely, a state's failure to keep pace will invariably lead to technological and economic disadvantages.

In the military sphere, quantum computing has the potential to revolutionize capabilities, optimizing logistics, cyber warfare, command & control, strategic planning, and situational awareness. Additionally, the development of lethal autonomous weapons systems allows for new opportunities in environments where risks to civilians are minimal, like air, sea, and outer space. However, obvious concerns about the ethical implications of lethal autonomous weapon systems persist, as do broader concerns about the ignition of arms races and the destabilisation of global security.

The cybersecurity landscape will also be profoundly impacted. While AI & QC can enhance defences against cyber threats, they also amplify them. AI-driven attacks, powered by quantum algorithms, could exploit current encryption vulnerabilities with devastating effectiveness. As such, improving the security of critical infrastructure and defending AI systems themselves against adversarial manipulation is a crucial challenge for policymakers and security experts.

AI & QC convergence will also transform intelligence operations by enabling faster data processing and pattern recognition. It will enhance signals intelligence, cyber intelligence, and predictive analytics, revolutionizing intelligence gathering and adversary monitoring. The primary question in this area is how to ensure transparency, sufficient oversight and accountability of intelligence operators as non-human decision making becomes more prevalent.

Additionally, using QC-enabled AI to monitor social, economic, political, and environmental trends also carries significant socio-political implications. While enabling preventive interventions and the avoidance of dangers, it also grants governments tremendous power to control citizens, shape mindsets, and interfere in other countries' domestic affairs. The capacity of QC-enabled AI to exert soft power, shaping perceptions of reality through the collection and dissemination of information, cannot be overstated.

Great Power Competition

State actors are engaged in a race for AI & QC supremacy. This has led to the formulation of national strategies by numerous countries, highlighting the area as a flashpoint for great power competition. Several leading governments, including the UK, Germany, Canada, Israel, India, Japan, Russia, and Australia, have allocated significant funds to develop their capabilities over the next five years. However, the US and China stand out as the main players, with substantial investments, research capabilities, and vast amounts of data at their disposal.

China aims to be the preeminent AI & QC leader by 2030 and is accordingly pursuing aggressive development policies. China has already invested in quantum projects like the Micius Quantum Science Satellite, quantum networks connecting Beijing and Shanghai, and the world's largest quantum laboratory.

China has a natural advantage over its western counterparts through its large population, data abundance, and weaker antitrust and privacy laws. Furthermore, the CCP shares foreign technology, acquired through espionage, with national R&D laboratories, while also guiding their development focus based on strategic state objectives. Despite this, Chinese tech firms also remain driven by the same powerful market incentives as their western counterparts.

While concerns exist about the use of AI as a tool of socio-political control in democracies, China's authoritarian government, social credit system, and extensive surveillance architecture intensify this threat. However, western policymakers are primarily concerned with the interstate implications of Chinese dominance, particularly as China closes the gap in AI technology while already leading in subfields of quantum technology, like communications. Though Chinese capabilities are inherently hard to gauge, western policymakers' concerns are telling.

By comparison, the US has demonstrated a strong interest in the sector and has favoured a free market approach, supporting private industry and emphasising R&D over state influence. The US benefits from a thriving high-tech industry, including large multinational corporations and dynamic startups rich in qualified human capital, supported by significant investments from both the public and private sectors. As a result, leading US corporations have developed capabilities functionally comparable to those of state actors on a technical level.

This public/private interplay, while central to the US model, has caused problems. In 2018, Google promised not to develop military-related AI, announcing its withdrawal from a contract with the Department of Defense, citing employees' ethical concerns. Additionally, US multinationals are not fully American firms, with many holding offshore assets and dispersing corporate profits overseas. Western tech companies also typically pursue their own goals and can be reluctant to engage publicly with their governments, which subsequently struggle to shape them through regulation.

While the US's AI & QC strategy may lack coherence, its policy of 'Chinese containment' is clearly defined. Designed to stifle the development of the Chinese high-tech industry, it is characterised by restrictions on access to key areas such as semiconductors, capital, know-how, and data flows.

These physical hardware restrictions have limited China's access to high-tech equipment, such as the extreme ultraviolet photolithography machines used in semiconductor manufacturing. Dutch company ASML, the sole producer of these machines, is under export controls due to US pressure, meaning the only three countries home to such lithography-equipment manufacturers have now blocked exports to China.

US companies investing in sensitive Chinese sectors have also drawn heavy criticism from lawmakers, who claim such investment advances Chinese capabilities at a time of heightened global tension. As recently as this month, the Biden administration announced it was working on new rules to restrict the flow of investment into Chinese companies working specifically on AI, advanced semiconductors, and quantum technologies.

The US also aims to restrict the flow of knowledge into China. The scarcity of individuals with high-level understanding of these technologies has led to restrictions, or required dispensations, for PhD students from certain countries studying AI or other sensitive technologies. Such measures, though antithetical to the traditional openness of western academia, are already becoming more prevalent.

The Geopolitical Dynamics

Globally, two trends are emerging simultaneously. Firstly, AI & QC nationalisation represents the intervention of states in the economy to direct and regulate technological development. States aim to prevent QC-enabled AI from becoming a threat and to ensure technological advances are consistent with state objectives. Further motivations for increased state control include the reluctance of western corporations to share valuable industrial secrets with their governments and the emergence of information industrial complexes, often with military ties, that carry significant economic and political influence.

Secondly, despite increased global connectivity, we can observe growing AI nationalism: the employment of AI to serve national interests and enhance state power. This zero-sum approach stems from the rise of nationalist political forces, great power competition, and the recognition that QC-enabled AI technologies can be used to influence the domestic affairs of other nations, posing a direct threat to national sovereignty.

Both 'state-centric' approaches offer potential for destabilization. They foster competition not only in deploying QC-enabled AI systems, but also in acquiring and utilizing the essential resource for their development: data. China and the US already possess significant data holdings and are positioned to acquire even more in the future. As such, the advantages of AI & QC predominantly reside within advanced countries.

This raises concerns about the rise of cyber neo-colonialism, where wealthier and more technologically advanced states exploit data from economically disadvantaged countries in need of advanced technology. This loss of data control leads to a deterioration of sovereignty, resulting in significant economic and political repercussions, ultimately leaving states more susceptible to foreign interference.

European Strategy

In many ways, Europe is caught geopolitically between China and the US. Though the war in Ukraine is an exception, the EU typically struggles to create a unified foreign policy due to the veto powers of member states. Despite the EU expressing concerns that 'Chinese containment' policies will harm global supply chains, the European Commission has continued exploring ways to screen outward investment as part of an economic security strategy.

European Commission President Ursula von der Leyen described targeting “a small number of sensitive technologies where investment can lead to the development of military capabilities that pose risks to national security”. This reflects concern that the technological gap between Europe and the great tech powers is growing, exacerbated by the lack of either European domestic innovation or a unified strategy.

Creating this 'unified European strategy' is essential to navigating this technological revolution effectively and is critical to maintaining security and economic competitiveness. To effectively manage the development of AI & QC, Europe, together with the UK, North America, and other western allies, must foster international cooperation through joint research initiatives, data sharing, and security protocols. This will reduce strategic risk and lessen the competition for technological dominance that fuels geopolitical tensions between western liberal democracies.

Europe is ideally positioned, politically and geographically, to form strategic alliances to direct data flows, pool resources, and advance capabilities. Strategic public-private partnerships can be conducive to technology transfers that drive innovation and promote the responsible use of these technologies.

A strategic technology control architecture that clearly categorises QC and AI technologies by strategic significance is necessary to balance the freedom of commerce that fosters innovation against state security priorities, and to help European innovators protect their intellectual property from both legal and illegal acquisition.

Additionally, significant economic power and a near-unparalleled capacity to regulate provide the EU a unique opportunity to mould global standards to its benefit. The General Data Protection Regulation (GDPR) is a model of how the EU can influence global standards. Although it applies only to EU citizens, non-European countries are forced to adopt regulatory standards comparable to the GDPR to gain access to the lucrative single market. In essence, this exports European values on data privacy and data sharing internationally.

Europe should shape the geopolitical landscape by controlling the flow of data through regulation, protecting what is rapidly becoming one of the most important strategic resources. European liberal democratic values should shape AI & QC development through smart regulation and governance frameworks. Regulations should prioritize privacy, data protection, and responsible data handling. Transparent, accountable, and ethical AI systems can ensure compatibility with human rights, prevent discrimination, and promote inclusivity. Safety and security should be encouraged, as should AI systems that address sustainability and other societal challenges, by emphasizing human well-being and human-centric design.

The promotion of interdisciplinary collaboration between technologists, security experts, and geopolitical analysts will provide policymakers with a more holistic understanding of the implications of this technological convergence, dissolving inhibitive informational asymmetries and helping maintain adaptive policy frameworks that keep pace with these evolving technologies.

With this in mind, the proposed European Artificial Intelligence Act offers an opportunity to create robust regulation that shapes this technology in line with European values. The proposed law is a significant step toward regulating AI and currently focuses on applications with the potential for human harm. The Act covers facial recognition and the scraping of biometric data, requires greater transparency from AI system creators, and mandates risk assessments before deployment.

However, over-regulation can create environments that are difficult for private businesses to navigate, translating into slower growth and innovation. This is a criticism frequently levelled against the GDPR, which is often cited as the reason for a 4:1 disparity between US and EU tech 'unicorns' (private startups valued at one billion USD or more). The resulting size differential leaves European firms and human capital susceptible to acquisition by foreign tech giants.

As such, lawmakers should be cautious. While regulatory power is a tool at the EU’s disposal, which should certainly be used, relying on regulation alone is inadequate. Europe needs to foster innovation through a healthy domestic market to remain competitive. Initiatives like the European AI Alliance, the Quantum Flagship, and the NATO Innovation Fund all hold promise, but future success requires long-term planning and extensive capital allocation, which is currently lacking.

Developing QC-enabled AI should be a collective goal of the community of nations: a tool for shared development, not national supremacy. Understanding the synergistic relationship between AI and QC, assessing its impact, and comprehending the geopolitical dynamics it unleashes are crucial for policymakers. Like all new technologies, its potential and effects on the world's economic and political order remain uncertain. However, policymakers must proactively address the risks and challenges, while leveraging the potential benefits of these transformative technologies, to spur collaborative innovation and produce better results for humanity.

Cameron Wood is Cofounder of Sarissa Labs – a cyber defence & advisory firm.

Alex Krijger is Founder and Managing Partner of Krijger & Partners – a geopolitical risk advisory firm.