The rise of artificial intelligence (AI) has profound implications for economic growth, security, and governance. As global AI dialogues progress, the incoming Trump administration can play a decisive role in shaping these discussions. Leading negotiations around global AI standards can help the United States manage security risks and ensure that emerging frameworks align with its values and interests. Falling behind in these discussions poses the risk of ceding leadership to competitors—particularly China, which is eager to influence the global AI landscape.
Discussions about advanced AI governance tend to focus on the United States and China. Leading AI developers such as OpenAI, Google, Anthropic, Meta, Microsoft, and xAI are all based in the United States. Although China has lagged behind the United States in frontier AI progress, its status as a global economic superpower makes it a natural part of the conversation. The United Kingdom also deserves mention for hosting the world’s first AI Safety Summit and establishing the world’s best-resourced AI Safety Institute. While AI security discussions often focus on these nations, emerging trends suggest that another region could become important for global AI security: the Middle East.
Middle East investments in AI
The Middle East has rapidly embraced AI, investing not just in research and consumer applications but also in the critical infrastructure that powers AI. Countries such as Saudi Arabia, the United Arab Emirates (UAE), and Israel have recognized the strategic importance of AI and are dedicating significant resources to building the advanced data centers and hardware ecosystems required for its development.
Both Saudi Arabia and the UAE plan to double their data center capacity over the next few years, and these plans have been accompanied by more than $100 billion in funding for work relating to semiconductors, AI, and related fields. The Business Times estimates that AI will contribute $96 billion to the UAE’s economy and $135 billion to Saudi Arabia’s. The UAE has also invested in AI developers, such as G42, which recently struck a $1.5-billion deal with Microsoft, and the Technology Innovation Institute, which has open-sourced impressive models through its Falcon series. Israel, a recognized leader in technology, has similarly invested in high-powered data centers, partnering with companies such as Dell and NVIDIA to push the boundaries of what this technology can achieve.
Data center security: a national security priority
Possessing powerful data centers confers more than just economic benefits; it also brings significant responsibilities. Nations that host leading data centers will wield disproportionate influence over global AI governance dialogues. Although the United States dominates the global data center market, most of those data centers cannot be used effectively for advanced AI development. As the computational workloads needed for frontier AI training and inference grow exponentially, the most advanced AI systems will require new data centers. Thus, as Saudi Arabia, the UAE, and Israel invest more in cutting-edge infrastructure, they might play an increasingly significant role in shaping the global AI ecosystem. This makes it even more essential for the United States to deepen its engagement with these nations to develop shared standards and norms.
Data center security presents another critical challenge. Recent research, including a report by RAND, has highlighted the importance of protecting model weights—the parameters that encode the capabilities of an AI system. If adversaries steal these weights, they could fine-tune systems for malicious purposes or accelerate their AI research, exacerbating an already dangerous AI race. Model weights can be stolen during either the training stage (in which weights are actively updated and stored) or the inference stage (in which finalized weights could be accessed through side-channel attacks). Nations hosting powerful data centers must implement robust safeguards to protect against internal and external threats. A failure to secure these centers could lead to catastrophic consequences if malicious actors gain access to sensitive AI systems.
An opportunity to shape regional AI security discussions
While the benefits of increased US-Middle East AI cooperation are compelling, developing a constructive AI dialogue with leading players in the region comes with hurdles. Efforts to establish meaningful cooperation are complicated by the absence of formal diplomatic relations between Israel and Saudi Arabia, heightened regional tensions, and competition with China for influence in the Middle East and North Africa. Despite these challenges, the urgency of advancing appropriate AI security standards should compel the United States to act. With its track record of bold and creative diplomacy in the region, the incoming Trump administration will be well-equipped to lead these AI dialogues.
One potential strategy the Trump administration could consider is adopting a phased approach that begins with a trilateral dialogue involving the United States, the UAE, and Israel. Both the UAE and Israel have strong ties to the United States and are well-positioned to collaborate and share best practices on issues like data center security, model weight protection, and the development of AI security standards. These early discussions could focus on aligning standards for AI infrastructure security and exploring cooperative research opportunities to mitigate risks. Nations with emerging AI industries, such as India, could also be meaningfully included in these dialogues, either through existing cooperation mechanisms like I2U2 or a new structure developed by the Trump administration.
In the longer term, these dialogues could expand to include Saudi Arabia and other countries in the region. While formal diplomatic relations between Israel and Saudi Arabia remain elusive, efforts toward normalization could create opportunities for discussion. Even without formal agreements, quiet diplomatic efforts could pave the way for Saudi participation.
Ultimately, this cooperation could evolve into a broader regional framework for AI and global security. This would allow the United States and its Middle East partners to present a more cohesive front in global AI discussions, countering the influence of adversaries. Aligning AI security standards would also strengthen US alliances with regional players, fostering economic and technological interdependence at a time when competition with China over AI dominance is intensifying.
As the world’s leader in AI innovation, the United States can capitalize on its position of strength to shape the global conversation. By adopting a phased approach that begins with trilateral collaboration and gradually expands to include broader regional participation, the United States can lead the way in shaping a safer AI future that aligns with its strategic interests. This is not just an opportunity but an imperative, as the decisions made today could fundamentally shape the trajectory of AI development.
Akash Wasil is a senior research associate at the Center for International Governance Innovation (CIGI), specializing in the intersection of AI and national security. Prior to his focus on AI policy, he was a National Science Foundation-funded PhD student at the University of Pennsylvania, where he researched innovative applications of technology and machine learning in mental healthcare. Akash earned his BA from Harvard University and graduated Phi Beta Kappa.
Image: Tesla CEO and X owner Elon Musk appears on a screen as he virtually attends the Future Investment Initiative (FII) in Riyadh, Saudi Arabia October 29, 2024. REUTERS/Hamad I Mohammed