Navigating the Geopolitical Stakes of Artificial Intelligence

Northwestern Buffett organized a day-long symposium on the geopolitics of AI

Artificial intelligence (AI) has emerged as a key driver of geopolitical power imbalances, fueling the competition for technological supremacy and economic dominance and intensifying global disparities. The theorized potential of AI technologies to transform industries, enhance military capabilities, and influence societal norms has far-reaching implications that transcend borders, raising urgent questions about international regulation.

To explore these critical issues, the Northwestern Roberta Buffett Institute for Global Affairs, Northwestern Security & AI Lab (NSAIL), and Insight Research Ireland Centre for Data Analytics at University College Cork organized a symposium on AI and geopolitics. On January 16, leading strategists, researchers, and policymakers discussed how AI technologies influence global power dynamics, national security, economic development, and the frameworks for governance and cooperation between nations.

“We're in the midst of at least three major, interlaced transformations to the world order that all have AI at their center and cannot be understood without each other,” said Deborah Cohen, Northwestern Buffett director, during the event’s opening remarks. “There's generative AI, which is changing our economies, societies, and politics in ways that we've only started to grasp. There's the geopolitical transformation that we've seen playing out over the past few decades as US supremacy has given way to a multipolar world. And there's the climate crisis, to which AI data centers, with their vast energy requirements, seem at this moment to be an accelerant.”



The symposium addressed three key questions:

  • What are the geopolitical risks and opportunities associated with AI development?
  • What strategies are being developed to prevent the misuse of AI?
  • How do regulatory strategies and frameworks attempt to build guardrails around AI to protect citizens, promote responsible and ethical AI development, and shape the future of AI in a way that benefits humanity?

The European Union Artificial Intelligence Act

The European Union Artificial Intelligence Act (EU AI Act) is the world’s first comprehensive legislation aimed at regulating AI. The EU’s risk-based framework categorizes AI systems into unacceptable-, high-, limited-, and minimal-risk tiers, with stringent requirements for high-risk applications in areas like healthcare, education, and law enforcement. The EU AI Act prioritizes ethical AI principles, including human oversight, fairness, and the protection of fundamental rights. Practices deemed to pose unacceptable risk, such as mass surveillance, real-time biometric identification in public spaces, and predictive policing, are banned.



Speaker: Barry O’Sullivan, Visiting Professor of Computer Science at Northwestern Engineering, Professor of Computer Science and IT at University College Cork, Ireland, Director of the Insight Research Ireland Centre for Data Analytics, and Director of the Research Ireland Centre for Research Training in AI

Key Takeaways

The Act prioritizes ethical safeguards and fundamental rights. O’Sullivan noted that it is important for the EU to build upon the regulation in ways that increase the pace of innovation in the Union. There is a fear, he explained, that the stringent rules for high-risk application domains like healthcare, education, employment, and public services may compromise innovation in these areas. Developers may redirect their efforts to less-regulated markets like the US and China, where innovation takes precedence, leaving the EU overly reliant on imported advanced AI technology.

Compliance with the EU AI Act is resource-intensive, requiring organizations to conduct risk assessments, ensure technical robustness, and train employees in AI literacy. Startups and small-to-medium enterprises may struggle to bear these costs, potentially stifling competition and favoring larger corporations.

The EU AI Act’s detailed requirements, including the shifting roles of providers and deployers, introduce a complex legal framework. This may lead to progress-halting debates and disputes about whether specific systems qualify as AI and how liabilities are assigned.

Barry O’Sullivan

International Governance of AI

The panel discussed the complex interplay between geopolitics and the international governance of AI, emphasizing how national strategic interests and power dynamics — particularly between technologically advanced nations like the US and China — overshadow regulatory considerations.

While geopolitical tensions influence and impede efforts to create global frameworks on the development and deployment of AI, intergovernmental agreements like the EU AI Act, the Council of Europe’s Framework Convention on Artificial Intelligence, and the Organization for Economic Co-operation and Development (OECD) AI Principles aim to establish standards that balance innovation, equity, and safety.



Participants


  • Yaron Gamburg, Research Associate at the Institute for National Security Studies in Tel Aviv, Israel
  • Maria Vanina Martinez, Tenured Scientist at the Artificial Intelligence Research Institute of the Spanish National Research Council in Barcelona, Spain
  • Ruby Scanlon (SESP ’22), Research Assistant in the Technology and National Security Program at the Center for a New American Security in Washington, DC, who studied international relations and social policy at Northwestern’s School of Education and Social Policy
  • Moderator: Neha Jain, Professor of Law at Northwestern Pritzker School of Law and Deputy Director of the Buffett Institute

Key Takeaways

Country-specific and regional AI governance reflects dynamic geopolitical priorities. Favoring a light-handed approach to avoid stifling technological growth, the US has focused on leveraging its AI leadership for national security. Scanlon noted that the US encourages voluntary collaboration between the private sector and agencies like the National Institute of Standards and Technology for AI model testing and accountability. Israel is among the innovation-focused countries tailoring regulations to specific AI applications, such as interventional clinical trials using AI-based tools. In Latin America — where AI is forecast to boost the region’s GDP by more than 5 percent by 2030, according to a report by The Economist — governments have shown interest in adopting flexible, risk-based regulations inspired by the EU model. According to Martinez, the Milei administration in Argentina is moving toward complete deregulation of AI to catalyze innovation and foreign investment.

Collaborative frameworks aim to expand strategic partnerships and strengthen diplomatic relations. In addition to the 38 OECD member countries, several non-members have signed on as adherents to the OECD AI Principles, including Argentina, Brazil, Egypt, Malta, Peru, Romania, Singapore, Ukraine, and Uruguay. The Council of Europe’s Framework Convention on Artificial Intelligence, meanwhile, was drafted by the 46 member states in conjunction with observer states and several non-member states. Gamburg explained that a new India-Israel coalition strives to foster the development of mutually beneficial advanced technologies through a proposed middle path between the stringent EU and relatively lenient US regulatory models.

Political and economic imperatives outweigh ethical concerns in many regions, sometimes at the expense of societal and human rights protections. Martinez underscored that AI development often relies on labor and data harvesting from developing nations. Workers in regions like Africa, Asia, and Latin America label data and curate content, often under exploitative conditions, prompting calls for better safeguards. Due to the global power imbalance, developing nations in the so-called Global South are being left behind in the race toward artificial general intelligence — and thus advocate for international cooperation to enhance local capacity while preserving their autonomy.

Notable Quotes

Maria Vanina Martinez
On the role of international governance in the Global South

"The Global South look up to international agreements and governing frameworks as a way to protect themselves, both because they understand that maybe their own countries do not have the power to do so and because they may not actually want to. Another aspiration from the Global South is that these kinds of agreements will lead to an opportunity to transfer knowledge and generate local capacity.”

Ruby Scanlon
On the competitive race for AI dominance and how it impedes our ability to develop safe and responsible AI

“The United States and China are racing to the frontier, and that is going to cause leading AI labs to be hasty with their deployment of models and not sufficiently red-team them, perhaps, with government agencies and deploy them a little too quickly without proper regard for safety measures.”

AI, Deepfakes, and Malign Ops

V.S. Subrahmanian explored the role of AI in creating and combating deepfakes and influence operations, highlighting its dual-use nature. Malicious actors use large language models and AI techniques such as reinforcement learning to dynamically alter their behavior, learn from what they observe, and evade detection. The Northwestern Security & AI Lab (NSAIL) team is a global leader among a growing multidisciplinary community developing and deploying AI technologies to address these global threats. The Global Online Deepfake Detection System (GODDS), for example, is a tool that helps verified journalists substantiate the authenticity of audio, images, and videos. GODDS uses 20 predictive models to test whether an artifact is real or fake and incorporates contextual variables, which improves its predictive capability by up to 15 percent.
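
The sketch below is a rough illustration of the ensemble idea described above: average the verdicts of several detectors, then nudge the result with a contextual signal. The model names, the 0.15 context weight, and the adjustment rule are assumptions made for illustration; the article does not specify how GODDS actually combines its 20 models and contextual variables.

    # Hypothetical sketch of an ensemble deepfake detector in the spirit of
    # GODDS. Model names, weights, and the context-adjustment rule are
    # illustrative assumptions, not the actual GODDS implementation.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Detector:
        name: str
        predict: Callable[[bytes], float]  # returns P(fake) in [0, 1]

    def ensemble_verdict(artifact: bytes, detectors: list[Detector],
                         context_score: float = 0.0) -> tuple[float, str]:
        """Average per-model fake probabilities, then nudge the result with a
        bounded contextual signal (e.g., source reputation, metadata checks)."""
        p_fake = sum(d.predict(artifact) for d in detectors) / len(detectors)
        p_fake = min(1.0, max(0.0, p_fake + 0.15 * context_score))
        return p_fake, ("likely fake" if p_fake >= 0.5 else "likely real")

    # Stand-in detectors; a real system would wrap 20 trained models.
    detectors = [Detector("frequency_artifacts", lambda a: 0.82),
                 Detector("lip_sync_consistency", lambda a: 0.64)]
    print(ensemble_verdict(b"video-bytes", detectors, context_score=0.5))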



Speaker: V.S. Subrahmanian, Walter P. Murphy Professor of Computer Science in Northwestern Engineering, Faculty Fellow at Northwestern Buffett, and Director of NSAIL

Key Takeaways

Democratic governments are increasingly considering using deepfakes in covert operations. Subrahmanian coauthored a report with Daniel W. Linna Jr. and Daniel Byman examining hypothetical scenarios in which democratic governments might consider using deepfakes to advance their foreign policy objectives and the potential harms this use might pose to democracy. Decisions about creating or using deepfakes, especially by governments, must consider efficacy, harms, legality, and traceability, requiring robust governance and ethical guidelines.

What used to be a human-in-the-loop cat-and-mouse game is increasingly being automated, with attackers and defenders dynamically adapting to each other’s strategies in real time. Because malicious actors adjust their tactics based on defender actions, and vice versa, adopting a predictive approach — training systems to anticipate future tactics rather than merely reacting to current ones — is essential.

Bot farms balance two competing objectives: maximizing influence and minimizing detection of fraudulent accounts. Bots are designed to act subtly, avoiding detection by being less overtly positive or negative in their messaging. They aim to influence through quantity rather than intensity, illustrating a shift toward more nuanced tactics in influence operations.
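
One way to picture this trade-off is as a simple weighted objective in which each posting strategy is scored by expected influence minus a penalty for detection risk. The strategy names and numbers below are purely hypothetical, chosen to show why a subtle, high-volume strategy can win once detection is costly.

    # Illustrative model of the bot-farm trade-off: score each posting
    # strategy by expected influence minus a penalty for detection risk.
    # Strategies and numbers are hypothetical, not measured values.
    strategies = {
        # name: (expected_influence, detection_risk), both in [0, 1]
        "high-volume, subtle tone": (0.6, 0.2),
        "low-volume, strident tone": (0.8, 0.7),
    }

    def utility(influence: float, risk: float, risk_penalty: float = 1.0) -> float:
        """Net value of a strategy once detection carries a cost."""
        return influence - risk_penalty * risk

    best = max(strategies, key=lambda name: utility(*strategies[name]))
    print(best)  # -> "high-volume, subtle tone" when risk_penalty = 1.0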

V.S. Subrahmanian

Economic Impacts of AI

The economic disparities in AI adoption across regions and industries are influenced by factors such as regulatory environments, infrastructure readiness, and cultural attitudes toward risk. The panel discussed the barriers to developing and deploying use-case-specific enterprise AI systems, including the need for operational agility and compliance with varied regulatory environments, while acknowledging the low-hanging fruit of enhancing white-collar workforce productivity, optimizing operations, and automating customer service.



Participants


  • David Bray, Distinguished Fellow and Chair of the Loomis Accelerator with the Alfred Lee Loomis Innovation Council at the non-partisan Henry L. Stimson Center, and former Chief Information Officer at the US Federal Communications Commission
  • Johan Harvard, Global AI Advisory Lead at the Tony Blair Institute for Global Change in London
  • Sandeep Mehta, Advisory Board Member of the Ethical AI Governance Group, and former Chief Technology Officer at the Hartford Financial Services Group
  • Moderator: Daniel W. Linna Jr., Senior Lecturer and Director of Law and Technology Initiatives at Northwestern

Key Takeaways

The sector-specific variability in the return on investment (ROI) of AI reflects the substantial investment in infrastructure, data standardization, and workforce training required to make the systems effective. Harvard explained that the ROI calculus is improving for many stakeholders, making AI worthwhile in certain sectors, but the upfront effort remains a prohibitive barrier. Rather than expecting quick wins from an out-of-the-box solution, achieving meaningful ROI with AI requires a strategic, long-term approach.

While AI offers real productivity gains, the expectations surrounding its transformative power may be over-hyped. AI applications provide tangible, incremental improvements to existing systems and workflows, Mehta noted, but they are not so-called “killer apps.” Mehta reported that, in the finance sector, bullish projections estimate 30 percent productivity gains, but the actual gains are measured at five or six percent.

The focus on generative AI has diverted attention and resources from other promising approaches. Unlike deep learning, which requires extensive training on vast datasets, active inference is modeled after how humans learn and make predictions with limited information. Bray noted that active inference may offer more privacy-preserving, energy-efficient, and data-efficient solutions, but it has been overshadowed by the industry’s sunk costs in, and focus on, generative AI, limiting its exploration and adoption.
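
As a toy illustration of the predict-observe-update loop at the heart of active inference, the sketch below maintains a belief over hidden states, predicts the next observation, and updates the belief to reduce surprise. It is a didactic Bayesian example with assumed probabilities, not a faithful active-inference implementation.

    # Toy predict-observe-update loop in the spirit of active inference.
    # All probabilities are assumed for illustration.
    import math

    # Belief over two hidden states of the world, e.g. a door being open or closed.
    belief = {"open": 0.5, "closed": 0.5}

    # Likelihood of observing light, given each hidden state.
    likelihood = {"open": 0.9, "closed": 0.2}

    def surprise(p_observed: float) -> float:
        """Surprise is the negative log probability of what was actually observed."""
        return -math.log(p_observed)

    def update(belief: dict, saw_light: bool) -> dict:
        """Bayesian belief update after a single observation."""
        posterior = {}
        for state, prior in belief.items():
            p_obs = likelihood[state] if saw_light else 1 - likelihood[state]
            posterior[state] = prior * p_obs
        total = sum(posterior.values())
        return {state: p / total for state, p in posterior.items()}

    # Predict: probability of seeing light under the current belief.
    p_light = sum(belief[s] * likelihood[s] for s in belief)
    print(f"predicted P(light) = {p_light:.2f}, surprise if seen = {surprise(p_light):.2f}")

    # Observe light, then update the belief to reduce future surprise.
    belief = update(belief, saw_light=True)
    print(belief)  # belief shifts toward "open" after one observation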

Notable Quotes

David Bray
On the geopolitical advantage of autocratic nations that can invest in AI without free-market-driven constraints and profit motives

“Generative AI, based on deep learning, is currently not making a profit. Despite the massive revenue we are seeing, these companies are actually losing money at the end of the day. And if you think about that geopolitically, that means countries that do not separate their private sector from their public sector — that are more autocratic — can propel generative AI efforts further, because there won't be a reconciliation in terms of have the books been balanced.”

Sandeep Mehta
On the economic dynamics of the AI industry, in which infrastructure suppliers are poised to reap the most significant financial rewards in the competitive global landscape

“Most of the people who are going to make money are the makers of the picks and shovels in a gold rush. It's a GPU race, whether it's amongst the US providers or internationally. There are a bunch of startups that are developing AI chips that are going to be far more energy efficient that are going to get deployed, so that game will also evolve.”

International Cooperation & AI

The panel explored the role of international cooperation in advancing the development, regulation, and application of AI through shared expertise, collaborative research, and ethical governance. Through partnerships such as the 2023 US-EU Administrative Arrangement on Artificial Intelligence for the Public Good, scientific and technological cooperation can leverage AI to tackle grand challenges in healthcare, education, disaster management, and public service delivery. Under the agreement, the US and EU aim to share findings and resources with international partners, which is critical to efforts to bridge the digital divide.



Participants


  • Daniel Byman, Professor and Director of the Security Studies Program at Georgetown University, and Director of the Warfare, Irregular Threats, and Terrorism Program at the Center for Strategic and International Studies
  • Juha Heikkilä, Adviser for Artificial Intelligence, European Commission
  • Romain Murenzi, Professor of Physics at Worcester Polytechnic Institute, and former Rwandan Minister of Education, Science and Technology, and Information Communication Technologies
  • Moderator: V.S. Subrahmanian

Key Takeaways

Unequal access to broadband and electricity, prerequisites for leveraging AI effectively, is a significant barrier to entry for countries in the Global South. Murenzi noted that, during the AI boom in 2023, developing countries were rebuilding their economies and education infrastructure post-COVID-19. International cooperation, through funding programs and partnerships, is essential to ensure technologies are tailored to local contexts and don’t further exacerbate disparities.

Expertise and innovation predominantly lie within private corporations, emphasizing the need for public-private partnerships and ethical oversight. The private sector possesses the resources, talent, and computational power necessary to drive innovation, placing these companies at the forefront of AI advancement. However, this concentration of expertise creates a dependency on private entities for progress, potentially sidelining public interests and societal goals.

Effective collaboration between private companies, governments, and academia is necessary to create synergies and build local capacities, particularly in developing nations. Byman explained that university researchers play an important role in developing AI technologies and safety protocols that are beneficial to society but have less value commercially and thus are not likely to be pursued by the profit-driven private sector.

Notable Quotes

Juha Heikkilä
On the role of the EU in shaping global AI norms

“We want to build coalitions with those that share the desire for regulatory guardrails and the democratic governance of AI in a way that benefits us all. We seek this both in terms of bilateral relations and also in multilateral international forums.”



Romain Murenzi
On capacity building through access to technology

“Giving a low-power device, a phone — not a smartphone — to a million people, two million, ten million, is like putting a quantum charge in a quantum vacuum. Suddenly, millions of people who didn't participate in the economy become taxpayers. That money can help to build educational and health infrastructure.”