
Presented by Zia H Shah MD
China: National AI Self-Reliance Initiative
- Project & Description: China’s government is pushing a broad AI self-reliance initiative rather than a single named project. After the debut of ChatGPT, Chinese tech firms (Baidu, Alibaba, Huawei, etc.) rushed to build indigenous large language models (ERNIE Bot, Tongyi Qianwen, PanGu, Baichuan, etc.) under state encouragement. By mid-2024, authorities had approved over 180 domestic LLMs for public use (atlanticcouncil.org). These models are trained on Chinese data and aligned with Beijing’s censorship and values requirements.
- Agencies/Organizations: The drive is led by central agencies (Ministry of Science and Technology, Cyberspace Administration of China) with state-funded labs and companies. Tech giants (Baidu, Alibaba, Tencent, iFLYTEK) and research institutions (e.g. Tsinghua University) are all developing models. Indigenous chipmakers (Huawei’s HiSilicon, Cambricon) are also involved in reducing reliance on foreign GPUs (merics.org).
- Goals & Scope: Strategic independence is the core goal (merics.org). China views AI as critical to economic and national security, especially given U.S. export controls on chips. The aim is an “independent and controllable” AI ecosystem across the stack – domestic semiconductors, frameworks, data, and models (merics.org). This includes building national GPU clusters and cloud platforms so Chinese firms can train models without U.S. technology.
- Key Achievements: Multiple Chinese LLMs matching mid-tier Western models have been released (e.g. Baidu’s ERNIE 3.5, Alibaba’s Tongyi, Huawei’s PanGu series). Local AI chips (like Huawei Ascend) now power some training runs (merics.org). Adoption is rising; many Chinese enterprises and government services use these models instead of foreign ones. Hardware self-sufficiency remains a challenge, but China’s AI labs have pooled resources to progress despite chip bans (atlanticcouncil.org).
- Independence Policy: Beijing’s policy explicitly calls for AI sovereignty. In April 2025, President Xi Jinping urged a “nationwide mobilization” to achieve “self-reliance and self-strengthening” in AI by building an entirely independent tech ecosystem (merics.org). China’s 2017 New Generation AI Plan and recent five-year plans all emphasize reducing dependence on foreign AI. Officials frequently cite U.S. sanctions as justification for a homegrown approach. As MERICS analysts note, China has made “independent and controllable” AI a key objective of its tech strategy (merics.org), aiming to sanction-proof its AI sector from external pressures (merics.org).
Russia: Domestic LLMs and Sovereign AI Drive
- Project & Description: Under sanctions and cut off from Western AI services, Russia is pursuing “sovereign AI” to cultivate domestic models and infrastructure. A flagship project is Sberbank’s GigaChat – a Russian-language ChatGPT alternative launched in 2023 (turingpost.com). GigaChat is a multimodal LLM (text and images) based on Sber’s internal research (the NeONKA model) and built upon earlier Russian NLP models (ruGPT-3, FRED, etc.) (turingpost.com). Rival tech firm Yandex has its own LLMs, such as YaLM 2.0 (a 100B-parameter model), integrated into its services (turingpost.com). These efforts provide Russian users with AI tools in Russian, hosted on local servers.
- Agencies/Organizations: Key players include Sberbank (a state-linked bank turned tech provider), Yandex (Russia’s largest search/tech company), and government bodies like the Ministry of Digital Development. The government’s National AI Center and leading universities support research, but much of the development is done by state-affiliated companies. Military and security agencies are also reportedly developing AI for defense. President Putin has highlighted Sber and Yandex as leaders in this domain (russiapost.info).
- Goals & Scope: Russia’s goal is tech sovereignty – ensuring it can meet AI needs internally without foreign tech. This means developing Russian-language LLMs, domestic cloud and supercomputers, and adapting AI to Russian law and censorship guidelines. Use cases span civilian (chatbots, enterprise AI) and military applications (intelligence analysis, autonomous systems) (russiapost.info). The scope includes building large GPU clusters (Sber’s “Christofari” supercomputer) and exploring domestic chip design, although hardware is a bottleneck due to sanctions (russiapost.info).
- Key Achievements: GigaChat saw adoption by ~15,000 Russian companies within its first year (reuters.com). Yandex’s models are deployed in its products (e.g. the search assistant “Alisa”). Russia has fine-tuned many open-source models on Russian data (e.g. the ruGPT-3 series). Sber’s latest GigaChat version (2025) includes improved “reasoning” capabilities for science and coding tasks (reuters.com). These models run on domestic infrastructure – a point Sber emphasizes as a selling point for Russian clients (data stays in-country) (reuters.com).
- Independence Policy: The concept of “sovereign AI” is now explicit in Russia. In late 2024, President Putin introduced the term, stressing that “sovereignty is a highly important component” of AI progress (russiapost.info). He argued Russia can “join the ranks of leaders” in AI with its own solutions (russiapost.info). The national AI strategy (updated 2022) highlights reducing reliance on foreign software. Putin rhetorically asked why a nation of 146 million should “rely on AI from [abroad]” instead of creating its own. This posture has only hardened since the war in Ukraine, as Western AI services (like ChatGPT) are largely unavailable in Russia (turingpost.com). Officials claim domestic AI will bolster national security, protect local data, and uphold Russian “information sovereignty.”
India: IndiaAI Mission for Indigenous AI
- Project & Description: India launched the IndiaAI Mission in 2024 as a comprehensive sovereign AI initiative (trade.gov). The mission encompasses developing homegrown LLMs, large-scale AI compute infrastructure, and datasets for Indian languages. A $1.25 billion (₹10,300 crore) program over five years was approved in March 2024 to fund this effort (reuters.com). Part of this plan is to create a National AI Compute Grid (“IndiaAI Compute”) with up to 10,000 GPUs, and to support domestic AI startups and research (lawfaremedia.org; reuters.com). The government is enhancing its cloud and supercomputing capabilities so that AI models can be developed and deployed within India.
- Agencies/Organizations: The mission is led by the Ministry of Electronics and IT (MeitY) along with NITI Aayog and Digital India Corporation. A new apex body named IndiaAI was formed to coordinate AI efforts. Indian tech companies and institutes (IITs, IISc, CDAC) are partners for research. The initiative also leverages the National Supercomputing Mission centers for infrastructure. In 2023, the government empaneled providers for an AI cloud platform to offer compute access to researchers (lawfaremedia.org).
- Goals & Scope: The aim is to achieve “Atmanirbhar” (self-reliant) AI capability. This includes developing LLMs tuned to Indian languages and contexts (Hindi, Tamil, etc.), so that AI services understand local culture and accents. It also addresses national security – e.g. ensuring India isn’t forced to use foreign AI that might be biased or unreliable on sensitive issues (theguardian.com). The scope spans funding AI startups, building a repository of curated Indian data, and using AI in sectors like healthcare, agriculture, and governance (reuters.com). Essentially, India wants both the models and the compute infrastructure on Indian soil, reducing dependency on US or Chinese AI providers.
- Key Achievements: The funding has kick-started projects to create India’s own foundation models. By late 2024, teams at IITs and in industry were developing bilingual (English–Indian language) models and translation systems. The government’s PARAM Siddhi supercomputer was used for some model training runs. India has also released open datasets for Indian languages and built AI tools like Bhashini (a translation platform) to support model training. Though India’s LLMs are still smaller in scale, the mission envisions models comparable to GPT-3 in capability (theguardian.com). In parallel, India’s largest tech companies (TCS, Reliance) have announced plans for AI platforms aligned with the mission.
- Independence Policy: Indian officials explicitly frame this as reducing reliance on foreign AI. The government statement launching IndiaAI noted a “strategic need for indigenous AI”. One driving concern is data sovereignty – that using foreign models could send sensitive Indian data abroad or embed biases (babl.ai). For example, India’s defense ministry warned against Chinese LLMs after one claimed a disputed region wasn’t part of India (theguardian.com). Thus, the mission’s mandate is to create “Indian-owned and located” AI infrastructure (pm.gc.ca). Prime Minister Modi has often spoken of making India a “global AI hub” built on Indian talent, and the Mission was launched to realize that vision with self-sufficient capability (trade.gov).
Japan: Domestic LLM Development for Japanese Language
- Project & Description: Japan is investing heavily in homegrown large language models to ensure AI systems understand the Japanese language and culture. A major effort in 2023–2024 was training a ChatGPT-like model on Japan’s Fugaku supercomputer (scientificamerican.com). This project, led by RIKEN and the Tokyo Institute of Technology with companies like Fujitsu and NEC, is creating an open Japanese LLM (targeting ~30B parameters) trained primarily on Japanese text (scientificamerican.com). Separately, in 2025 the government announced support for a new national Japanese LLM that pools domestic data to reduce dependence on U.S. and Chinese AI (babl.ai). This model will be tuned to Japanese facts and societal context.
- Agencies/Organizations: Key players include RIKEN (a government research institute) and the National Institute of Information and Communications Technology (NICT), which is contributing decades of Japanese-language data (babl.ai). The project is backed by the Ministry of Internal Affairs and Communications and METI (Economy, Trade and Industry) (babl.ai). Tech companies like Preferred Networks (an AI startup) are development partners (babl.ai). Cloud provider Sakura Internet is slated to host the model in domestic data centers to keep infrastructure local (babl.ai).
- Goals & Scope: Japan’s goal is an LLM that “thinks” in Japanese, capturing nuances that foreign models miss (scientificamerican.com). Culturally, this means adhering to local norms (politeness in writing, proper use of honorifics) and handling Japan-specific knowledge correctly. Strategically, Japan seeks to reduce reliance on American and Chinese AI due to security and economic concerns (babl.ai). Officials worry foreign AI could misrepresent facts about Japan or leak sensitive data (babl.ai). Thus the scope includes ensuring the model aligns with Japan’s historical and factual perspective (for instance, not echoing Chinese stances on territorial disputes) (babl.ai). The project also aims to support Japanese industries – e.g. by providing an AI that understands local healthcare or legal terminology better than English-trained models.
- Key Achievements: Using Fugaku (one of the world’s fastest supercomputers), Japan trained a sizable Japanese-first model expected to be released in 2024 (scientificamerican.com). Meanwhile, several companies have launched commercial Japanese LLMs: NEC deployed an in-house generative model in 2023 for enterprise use, and SoftBank is investing in its own AI suite (scientificamerican.com). An open Japanese model from RIKEN’s team is anticipated, and a more specialized science-focused 100B-parameter model (for generating research hypotheses) is in development with government funding (scientificamerican.com). These efforts indicate Japan’s progress toward having competitive domestic AI that can rival GPT-4 on Japanese tasks.
- Independence Policy: The Japanese government explicitly emphasizes digital sovereignty in AI. In late 2023 it announced plans to “develop a homegrown AI model to reduce dependence on U.S. and Chinese systems” (babl.ai). Officials cited security worries that foreign AI could funnel Japanese data abroad or distort Japan’s positions (babl.ai). Japan’s AI strategy now calls for domestically controlled AI infrastructure, similar to how it maintains independent navigation satellites. The government has budgeted hundreds of millions for these AI projects (scientificamerican.com), reflecting a policy decision that foundational models are a national strategic asset that Japan must cultivate internally.
South Korea: “AI Champions” Program for Sovereign Models
- Project & Description: South Korea in 2025 launched its most ambitious sovereign AI program to date – a ₩530 billion (≈ $390 million) initiative to back homegrown LLMs (techcrunch.com). The government selected five local “AI champions” – LG AI Research, SK Telecom, Naver, NCSoft (NC AI), and the startup Upstage – to develop large-scale Korean-language foundation models (techcrunch.com). This program, often dubbed Korea’s “AI Giants” project, will funnel funding and access to government data to these companies as they build advanced models (on the order of tens of billions of parameters) tailored to Korean language and needs.
- Agencies/Organizations: The initiative is led by the Ministry of Science and ICT. Each chosen company brings unique resources: e.g. LG’s Exaone model (which melds language and reasoning) (techcrunch.com), SK Telecom’s “A.” (AITT) model aimed at enterprise use, Naver’s HyperCLOVA X (an upgrade to its 2021 Korean LLM), NCSoft’s NC AI for gaming and cloud, and Upstage’s models for office productivity. The government will periodically review progress and winnow the field from five to the two top performers, which will lead Korea’s sovereign AI effort long term (techcrunch.com).
- Goals & Scope: Seoul’s primary goal is to “cut reliance on foreign AI technologies” and bolster national security by keeping data and AI capability in-country (techcrunch.com). This means developing Korean-language models that can serve local businesses and government agencies without needing OpenAI’s or Google’s APIs. A strong cultural component is included: models must handle Korean idioms, honorifics, and context better than English-trained models. Additionally, by controlling its own models, Korea can enforce stricter data privacy and tailor the AI to national regulations. The scope also extends to AI chips and cloud – South Korea is supporting projects in AI semiconductors (e.g. from Samsung) to underpin these models.
- Key Achievements: Prior to this program, Korean firms had already made progress: Naver’s HyperCLOVA (2021) was one of the first 200B-parameter non-English LLMs. In 2023, Kakao Brain released KoGPT models and SKT launched an LLM-based assistant. With the new funding, LG’s Exaone 4.0 and Upstage’s Solar model have shown competitive benchmark scores (techcrunch.com). South Korean models are now offered via APIs to Korean banks, hospitals, and government offices, providing an on-premise alternative to ChatGPT (koreaherald.com). The country is also building a national AI computing cluster to support these models. By late 2025, early versions of the five consortiums’ models were being pilot-tested in Korean public services.
- Independence Policy: South Korea’s government frames sovereign AI as essential to maintaining digital independence. Officials note that relying on U.S. models could expose sensitive local data or leave Korean users vulnerable if foreign services are cut off (findarticles.com). Thus, the 2023 national AI strategy explicitly funded a “national AI stack to reduce dependence on foreign models” (tecknexus.com). South Korea also cites the need to preserve Korean language and culture in AI outputs. The Science Ministry stated this investment will “strengthen national security and keep a tighter control over data in the AI era” (techcrunch.com). In short, the policy recognizes AI as critical infrastructure that Korea aims to own domestically, similar to its approach in semiconductors and 5G.
Taiwan: TAIDE – Trustworthy AI Dialogue Engine
- Project & Description: Taiwan’s TAIDE (Trustworthy AI Dialogue Engine) is a homegrown large language model project launched in 2023 (lawfaremedia.org). With roughly $7.4 million in government funding, TAIDE was built by fine-tuning Meta’s Llama models on Taiwanese local data (lawfaremedia.org); a minimal illustration of this kind of fine-tuning workflow appears at the end of this section. The result is a family of Chinese–English bilingual chat models aligned with Taiwan’s democratic values and factual context. The motivation was to offer Taiwanese society a domestic chatbot alternative to counter the influence of Chinese AI systems that are constrained by Beijing’s censorship. TAIDE’s developers incorporated local news and government data so that the AI’s answers reflect Taiwan’s reality (for example, on political history) rather than PRC narratives (lawfaremedia.org).
- Agencies/Organizations: TAIDE was developed by a coalition including Taiwan’s National Center for High-Performance Computing (NCHC) – which provided training infrastructure (lawfaremedia.org) – and academic partners (likely Academia Sinica and local universities). The project was overseen by digital affairs authorities and supported by the Ministry of Science and Technology. Taiwan’s tech firms contributed expertise in Mandarin NLP. The model and its API are hosted locally in Taiwan.
- Goals & Scope: The primary goal is information security and cultural protection. Taipei saw a need for an AI that would not enforce “core socialist values” (as mainland Chinese models do by law) (lawfaremedia.org). TAIDE is intended to uphold Taiwan’s democratic values, use Traditional Chinese script, and know local facts (holidays, figures, places) accurately. More broadly, Taiwan aims to strengthen its digital sovereignty by not depending on either Chinese or U.S. AI providers. The TAIDE project is part of a national strategy to build AI resilience against mis/disinformation. It is scoped to produce not just one model but a “family of models” continuously improved with local data (lawfaremedia.org).
- Key Achievements: TAIDE released initial versions (based on Llama 2) in 2023, with the model accessible for public testing. It demonstrated greater factual accuracy on Taiwan-specific queries than baseline models. Academia Sinica also beta-launched an LLM for Traditional Chinese text processing around the same time (lawfaremedia.org). Taiwan upgraded the NCHC’s supercomputing facilities with NVIDIA H100 GPUs to support larger models (lawfaremedia.org). By 2024, TAIDE models were being evaluated as chat assistants in government services to see whether they handle Mandarin and English inquiries reliably. This represents a first step toward Taiwan’s longer-term plan for independent AI-driven applications in the public sector.
- Independence Policy: Taiwan’s government has explicitly framed TAIDE as a national security measure. The model was conceived to “protect Taiwan with a domestic AI alternative” amid concerns that relying on Chinese chatbots could subtly bias information toward Beijing’s standpoint (lawfaremedia.org). Given cross-strait tensions, Taiwan is keen that its AI systems are under local control and free from authoritarian influence. This aligns with Taiwan’s broader digital sovereignty policies (e.g. localized data centers, telecom independence). In sum, TAIDE illustrates Taiwan’s policy that critical AI tools – like information chatbots – should be developed at home to safeguard the country’s political and cultural autonomy (lawfaremedia.org).
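The approach described above for TAIDE – fine-tuning an open Llama-family model on locally curated text rather than training from scratch – is the same basic recipe several other countries in this survey follow. The sketch below illustrates that general workflow with the Hugging Face transformers, datasets, and peft libraries. It is a minimal illustration, not TAIDE’s actual pipeline: the base-model ID, the corpus file, and the hyperparameters are placeholder assumptions.

```python
# Minimal sketch (assumptions noted below) of adapting an open Llama-family model to
# local-language data with parameter-efficient LoRA fine-tuning.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "meta-llama/Llama-2-7b-hf"   # assumed base checkpoint (license acceptance required)
LOCAL_CORPUS = "local_news_corpus.jsonl"  # hypothetical JSONL file with a "text" field

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA: train small adapter matrices instead of all base-model weights.
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

# Tokenize the local-language corpus.
dataset = load_dataset("json", data_files=LOCAL_CORPUS, split="train")
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)
tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="local-lm-adapter",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("local-lm-adapter")  # saves adapter weights only, to load alongside the base model
```

The design point this illustrates is why fine-tuning is attractive for smaller national programs: only a few million adapter parameters are trained, so a modest GPU cluster (such as NCHC’s) can localize a large open model without the cost of pretraining from scratch.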
Singapore: SEA–LION – Southeast Asian Language LLMs
- Project & Description: SEA‑LION (Southeast Asian Languages in One Network) is Singapore’s sovereign AI project focused on building LLMs fluent in the languages of Southeast Asia (lawfaremedia.org). Launched in 2023 with about S$70 million (~US$52 million) in funding (lawfaremedia.org), SEA‑LION has developed multiple open-source language models (V1, V2, and recently V3) covering 11 languages, including Bahasa Indonesia, Malay, Vietnamese, Thai, Tagalog, and Burmese. The first versions were trained from scratch on nearly a trillion tokens of regional language text (lawfaremedia.org). Subsequent versions fine-tuned Meta’s Llama models with Southeast Asian language data and instructions (lawfaremedia.org). These models are intended to ensure local languages are well served by AI, rather than being sidelined by English-centric systems.
- Agencies/Organizations: The project is led by AI Singapore and the National Research Foundation (NRF) (lawfaremedia.org), in collaboration with the Infocomm Media Development Authority (IMDA). It pulls expertise from local universities (NUS, NTU) and regional partners. Amazon Web Services provided cloud GPU resources for training (via AWS Asia-Pacific) (lawfaremedia.org). The models are hosted on government platforms and also published to Hugging Face for global access (a brief usage sketch appears at the end of this section). Singapore’s Smart Nation initiative oversees these efforts to align with the national digital strategy.
- Goals & Scope: The goal is to achieve “sovereign capabilities in LLMs” for Singapore and its neighbors (lawfaremedia.org). Practically, this means AI that understands and generates less-common languages like Lao or Malay with high competency – something foreign models struggle with. Culturally, the project aims to protect and promote national languages and local context in AI (lawfaremedia.org). By open-sourcing SEA‑LION, Singapore also hopes to set a regional standard and reduce reliance on English-based models from the US. The scope includes not only language models but also evaluation benchmarks for these languages (the project has its own leaderboard to track performance) (lawfaremedia.org). Ultimately, it’s about digital sovereignty: Singapore wants a say in AI developments affecting its multi-lingual society, rather than being a pure consumer of foreign AI.
- Key Achievements: SEA‑LION v1 (2023) produced LLMs up to 7B parameters that, despite being relatively small, outperformed larger models in sentiment analysis for SEA languages (lawfaremedia.org). SEA‑LION v2 (2024) built on Llama 2 and achieved strong results in following instructions in Thai, Vietnamese, and other regional languages (lawfaremedia.org). By late 2025, a third iteration using Google’s open Gemma model was released, further improving accuracy. Independent evaluations show SEA‑LION models often exceed other open models in understanding Southeast Asian text (lawfaremedia.org). These models have been deployed in translation services and chatbots by Singapore’s government, showcasing local viability.
- Independence Policy: Singapore’s ministers have stated that SEA‑LION was initiated due to a “strategic need to develop sovereign [AI] capabilities” (lawfaremedia.org). The concern was that if Singapore relies only on Silicon Valley models, local languages and values might be neglected. By investing in its own AI, Singapore ensures its multiracial, multilingual society is reflected in the technology. This feeds into a broader Smart Nation policy of tech self-sufficiency. While Singapore collaborates internationally, it clearly chose to fund a domestic LLM to assert control over critical AI infrastructure and knowledge, aligning with its digital sovereignty ethos (lawfaremedia.org).
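Because SEA‑LION checkpoints are published openly on Hugging Face (as noted above), a developer can load one and prompt it locally. The snippet below is a minimal usage sketch rather than an official AI Singapore example: the model ID is a placeholder assumption for whichever SEA‑LION release is current, and it assumes a GPU with enough memory plus the accelerate package for device placement.

```python
# Minimal sketch of loading an open, regionally focused model from Hugging Face and
# prompting it in a Southeast Asian language. The model ID is an assumed placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "aisingapore/sea-lion-7b-instruct"  # assumed repository name; substitute the actual release

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,   # halves memory use on recent GPUs
    device_map="auto",            # spreads layers across available devices (needs accelerate)
    trust_remote_code=True,
)

# Prompt in Bahasa Indonesia ("Briefly explain what artificial intelligence is.")
# to exercise the regional-language coverage the project targets.
prompt = "Jelaskan secara singkat apa itu kecerdasan buatan."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```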
United Arab Emirates: Falcon LLM and National AI Strategy
- Project & Description: The UAE has emerged as a leader in sovereign AI with its Falcon series of large language models. The Falcon LLM project began at Abu Dhabi’s government-funded Technology Innovation Institute (TII) in 2023. TII released Falcon 40B (a 40-billion-parameter model) as open source in 2023 and followed later that year with Falcon 180B, one of the world’s most powerful open models. In late 2024, it unveiled Falcon 3 – a set of smaller, optimized models (1B to 10B parameters) designed to run even on laptops (tii.ae). These models are freely available for research and commercial use under a permissive license. The Falcon project showcases the UAE’s intent to compete with AI giants by building its own advanced models.
- Agencies/Organizations: Falcon is developed by the Technology Innovation Institute (TII), under the umbrella of the Advanced Technology Research Council (ATRC) – a UAE government body. TII’s AI and Digital Science Research Center, led by top AI scientists, spearheads model training. The UAE’s Mohamed bin Zayed University of AI (MBZUAI) supports research and talent development. On the hardware side, TII initially trained Falcon on Amazon’s cloud (AWS) clusters (lawfaremedia.org), and the UAE has since been investing in local supercomputing capacity (the new G42 Cloud AI supercomputer, etc.).
- Goals & Scope: The UAE’s goal is to become a global “AI powerhouse” while assuring technological sovereignty. By building Falcon in-house, the UAE gains control over the model’s code, data, and usage – unlike relying on closed models from abroad. A key aim is to imbue the AI with Arabic language proficiency and cultural knowledge, as well as to address local market needs (government services, Arabic business applications) which foreign models may not handle. More broadly, this fits into the UAE’s national AI strategy (initiated with its Ministry of AI in 2017) to diversify the economy via tech innovation. The scope spans fundamental R&D (creating state-of-the-art models) to deployment (Falcon is used in UAE government chatbots and services). By open-sourcing, the UAE also seeks global influence: Falcon’s code availability encourages adoption, positioning it as a de facto standard in certain domains.
- Key Achievements: Falcon 40B, upon release, topped some leaderboards for open models and was downloaded widely by developers. Falcon 180B (released in late 2023) is among the largest open models globally and demonstrated capabilities close to GPT-3.5 on many tasks. The Falcon 2 series (11B-parameter models, including a vision-language model) showcased the UAE’s ability to iterate quickly (reuters.com). These successes have attracted international partnerships – e.g. IBM and Microsoft have engaged with the UAE’s AI ecosystem. Domestically, Falcon models are being adapted for Arabic; the UAE is also hosting AI hackathons to encourage local uses of Falcon. All this was achieved within a short span, highlighting the UAE’s “punching above its weight” in AI (reuters.com).
- Independence Policy: UAE officials underline that developing Falcon proves the UAE “can be a major player” in AI and compete with the best globally (reuters.com). The project was partly motivated by concerns about having to choose between American and Chinese technology. Notably, the UAE’s push initially drew U.S. scrutiny, leading it to pivot away from Chinese chips to avoid pressure (reuters.com) – underscoring the desire for autonomy. By owning its models, the UAE ensures it isn’t beholden to tech giants’ pricing or geopolitics. This is aligned with the UAE’s broader vision of digital sovereignty and becoming an exporter of AI solutions. As ATRC Secretary General Faisal Al Bannai put it, the UAE is “demonstrating it can really compete … globally” with its sovereign AI efforts (reuters.com).
Saudi Arabia: HUMAIN – Arabic First LLM
- Project & Description: Saudi Arabia is rapidly investing in sovereign AI, highlighted by the 2025 launch of HUMAIN Chat, the Kingdom’s first home-grown Arabic large language model product (thenationalnews.com). Developed by Saudi AI company HUMAIN, the core model is called ALLAM (with a 34B-parameter version reported), designed specifically for the Arabic language and Gulf cultural contexts (w.media). HUMAIN Chat was released in beta to Saudi users in 2025 as a secure, Arabic-first chatbot. Beyond this, Saudi Arabia’s wider AI initiative (part of Vision 2030) includes building AI research centers and training local talent, all aimed at reducing reliance on American tech.
- Agencies/Organizations: HUMAIN is a Saudi AI startup backed by the Public Investment Fund (PIF), the country’s sovereign wealth fund (thenationalnews.com). PIF’s support underscores state endorsement of building national AI champions. The Saudi Data & AI Authority (SDAIA) and the National Center for AI are government bodies setting frameworks (e.g. an AI ethics code) and likely facilitating private-public collaboration. The King Abdullah University of Science and Technology (KAUST) has also established an AI center and supercomputing upgrades to support such projects.
- Goals & Scope: Saudi Arabia’s goal is to become a regional leader in AI while preserving Arabic language and data sovereignty. Officials note that global AI models poorly handle Arabic dialects and could pose data security risks (thenationalnews.com). Thus HUMAIN Chat is positioned as a secure alternative for government and business – ensuring sensitive Saudi data (from ministries or companies) is processed locally, not by foreign APIs (thenationalnews.com). Culturally, the scope includes reflecting Arab cultural nuances and values in AI responses. Strategically, Saudi Arabia views sovereign AI as part of its digital infrastructure, complementing its investments in data centers and undersea cables. The country is even reportedly buying tens of thousands of high-end GPUs to build up AI computing capacity for these projects.
- Key Achievements: By 2025, HUMAIN released ALLAM 34B, an Arabic LLM which quickly ranked among the top Arabic-focused models. It can handle modern standard Arabic and some dialects better than English-trained models. Saudi Arabia also set up the “Global AI Hub”, a program to train 25,000 Saudis in AI, indicating progress in human capital. The HUMAIN Chat app’s beta received positive local feedback for understanding Saudi-specific queries better than ChatGPT. Additionally, Saudi technical universities have begun to fine-tune open-source models on Arabic text (e.g. Arabic GPT-2 variants for research). These are early milestones toward a broader Saudi AI ecosystem anchored on local models.
- Independence Policy: Saudi leaders frame sovereign AI as both an economic opportunity and a matter of digital sovereignty. The investments align with Vision 2030’s aim to localize advanced technology. Saudi officials have pointed to incidents of biased or incorrect outputs from foreign AI about Middle Eastern issues as motivation to “build our own AI that knows us”. By controlling AI development (and regulating it, as with the 2023 AI Principles document), the Kingdom seeks to guard against foreign tech dominance. As CIO magazine noted, with HUMAIN, Saudi Arabia is “challenging global dominance with homegrown infrastructure and language models”, aiming to lead in Arabic AI and ensure its AI future isn’t dependent on Big Tech (cio.com).
United Kingdom: National Foundation Model Taskforce and Compute
- Project & Description: The UK is pursuing sovereign AI through a National Foundation Model Taskforce established in 2023. The taskforce was launched with an initial £100 million to develop “Britain’s own” safe and reliable foundation models (in areas like LLMs) (gov.uk). It is tasked with both advancing core model capabilities and enabling public-sector adoption of AI. In parallel, the UK unveiled a Compute Roadmap in 2025, committing £2 billion by 2030 to dramatically expand national supercomputing for AI (digitalpolicyalert.org; techuk.org). This includes building a new AI research supercomputer (sometimes nicknamed “BritGPT compute”) to support domestic model training.
- Agencies/Organizations: The Foundation Model Taskforce reports directly to the Prime Minister and the Department for Science, Innovation & Technology (gov.uk). It is modeled on the COVID-19 Vaccines Taskforce, indicating high-level government involvement. The taskforce brings together government experts (e.g. from GCHQ’s AI unit) and industry researchers (some from DeepMind, etc.). On infrastructure, UK Research and Innovation (UKRI) and the Met Office are involved in the new supercomputing builds. The Alan Turing Institute and British AI startups are also consulted to align research goals.
- Goals & Scope: The UK’s goals are to ensure it has sovereign capabilities in foundation models – meaning the ability to develop and run cutting-edge AI within the UK (gov.uk). Part of this is economic and strategic: the UK wants to be a leader in AI safety and innovation, not merely a consumer of US models. The scope includes training large English-language models that could power government services (healthcare, defense, etc.) with appropriate controls. There is also an emphasis on AI safety research (the UK is setting up an AI Safety Institute) to guide these models. By building its own compute infrastructure, Britain seeks to avoid being bottlenecked by access to foreign cloud GPUs. Overall, the initiative covers model R&D, computing power, startup support, and regulatory readiness.
- Key Achievements: In Budget 2023 the UK set aside £900 million for an exascale supercomputer and AI-specific compute, which is now taking shape (gov.uk). By mid-2024, the Taskforce had begun funding projects – for example, it supported the open-source BritGPT model by a UK startup and fine-tuning of existing models for the NHS. The UK’s “AI Research Resource” computing cluster (intended to rival U.S. infrastructure) is under procurement, aiming to be one of the most powerful in Europe by 2024–25. On the policy side, the UK hosted a global AI Safety Summit (2023) to show leadership in the field. While its proprietary model efforts are nascent, these investments signal progress toward a distinctly British AI ecosystem.
- Independence Policy: The UK government explicitly stated that the Taskforce investment will “ensure sovereign [AI] capabilities” and build the UK’s “sovereign national” AI capacity (gov.uk). British officials don’t frame this as rejecting allied tech (indeed, the UK works closely with U.S. labs), but they stress the importance of domestic control. As the policy paper put it, having British-built AI will “cement the UK’s position as a science and technology superpower by 2030” (gov.uk). In practice, that means both the know-how and the infrastructure for top-tier AI must reside in the UK. The Taskforce’s creation, in the words of PM Rishi Sunak, is about “making sure UK values and ideas shape this technology” and that Britain is not left dependent on a few foreign companies for critical AI capabilities (gov.uk).
France: Investing in National AI Compute and Models
- Project & Description: France’s government has actively invested in “digital sovereignty” for AI. A signature project is the upgrade of the Jean Zay supercomputer (owned by CNRS near Paris) to support large-scale AI model training. In 2023, France spent about €40 million to install 1,500 new Nvidia H100 GPUs in Jean Zay (lawfaremedia.org), turning it into an AI powerhouse open to French researchers and startups. This infrastructure was notably used to train the BLOOM open-source multilingual LLM in 2022 and continues to support new model development (lawfaremedia.org). France is also directly backing AI startups – for example, it contributed funding to Mistral AI, a French company that released a 7B-parameter open model in 2023 (lawfaremedia.org).
- Agencies/Organizations: The effort is led by the Ministry of Higher Education and Research and coordinated through national bodies like GENCI (the national high-performance computing agency) for supercomputers. The state investment bank Bpifrance has provided financing to AI firms (including Mistral AI) (lawfaremedia.org). The French military’s innovation agency (DGA) also funds certain AI projects for defense needs. Politically, President Macron and the Digital Minister have championed “tech sovereignty” programs that include AI. The INRIA research institute and companies like Atos (Eviden) took part in deploying the new hardware for AI at Jean Zay (lawfaremedia.org).
- Goals & Scope: France’s goals are to ensure French researchers and companies have domestic access to world-class AI computation and to foster French-made foundation models. By boosting national supercomputers, France wants to free its AI community from dependence on U.S. cloud providers (lawfaremedia.org). Another goal is nurturing AI that reflects French and European values (e.g. transparency, neutrality in language). The scope spans supporting multilingual models (BLOOM was trained in 46 languages with French leadership) and sector-specific AI (France has initiatives in health AI and language-translation AI under sovereign control). Ultimately, France sees sovereign AI as crucial for economic competitiveness and cultural influence (protecting the French language in the AI era).
- Key Achievements: The Jean Zay AI supercomputer became one of the top public AI compute resources in Europe after its upgrade (lawfaremedia.org). This enabled French and European teams to train models like BLOOM (176B parameters) entirely in Europe (lawfaremedia.org). In 2023, the French startup Mistral AI, backed by public funds, released an open-source 7B LLM that gained global attention for its quality – an early win for France’s strategy (lawfaremedia.org). Another company, Aleph Alpha (based in Germany but collaborating closely with France), built a 70B multilingual model with support from the French and German governments (lawfaremedia.org). France also launched the “Confiance.ai” program focusing on trustworthy AI R&D. By 2025, France was hosting one of the first “AI factories” under the EU’s plan – essentially a center of excellence to incubate sovereign AI solutions.
- Independence Policy: France is an outspoken proponent of “strategic autonomy” in technology. French leaders argued during the EU AI Act debates that Europe must not only regulate AI but also invest in it, or else become dependent (lawfaremedia.org). The French digital strategy explicitly includes developing domestic AI capabilities as a pillar. President Macron has stated that Europe needs sovereign AI to uphold its values and not be dominated by American or Chinese systems. The investment in compute and startups is a concrete execution of this policy: as the Lawfare analysis noted, France’s upgrades are part of a broader strategy to make domestic AI computing accessible to French players (lawfaremedia.org). This political support is ensuring France doesn’t “fall behind in the AI race” by relying solely on Silicon Valley (lawfaremedia.org).
Germany: Backing Local AI Labs and Infrastructure
- Project & Description: Germany’s approach to sovereign AI has focused on funding domestic AI research and leveraging European computing infrastructure. In 2023, the German government co-funded Aleph Alpha, an AI lab developing the Luminous family of large language models (competitors to GPT-3) (lawfaremedia.org). This support helped Aleph Alpha scale up its models (up to 70B parameters, multilingual) and offer them to German industry and government with data residency in Germany. Germany is also a key player in EuroHPC supercomputing – it hosts the upcoming JUPITER exascale system (the EU’s first), which will power AI research. Already, Germany’s JUWELS and other European supercomputers such as Sweden’s Berzelius have been used for training models like Aleph Alpha’s (lawfaremedia.org).
- Agencies/Organizations: The Federal Ministry for Economic Affairs and Climate Action (BMWK) and the Federal Ministry of Education and Research (BMBF) have programs for AI innovation that directed funds to companies like Aleph Alpha (lawfaremedia.org). The German state of Baden-Württemberg, where Aleph Alpha is based, supported the company as well. On infrastructure, the Jülich Supercomputing Centre and the EuroHPC Joint Undertaking involve German participation in building large systems for AI. German research institutes (DFKI, the Fraunhofer Society) are working on German-language models and AI applications for public administration, under the umbrella of the national AI strategy.
- Goals & Scope: Germany’s goals are to ensure it has an AI ecosystem not entirely dependent on U.S. tech. This means cultivating German AI providers (like Aleph Alpha) that can serve local needs (e.g. a German-language medical chatbot respecting German privacy laws). Another goal is integrating AI into German industry (Manufacturing 4.0, etc.) with tools that can be audited and trusted – easier if they are developed domestically. The scope of support ranges from fundamental model research (e.g. funding for new model training runs on German supercomputers) to applied AI in sectors like automotive and finance, with preference for open or sovereign solutions. Germany also strongly advocates for EU-wide AI sovereignty, often coupling its efforts with France’s in EU initiatives.
- Key Achievements: Aleph Alpha has delivered a series of LLMs (for example, Luminous-20B and 65B) that perform well on European-language tasks and are deployed in pilot projects with German ministries. Germany’s existing HPC systems have enabled these models to be trained within Europe. In 2024, the model “OpenGPT-X” (a European LLM project involving German and French partners) released a first version, showcasing cross-border collaboration. Germany’s automotive companies have begun using native AI models for engineering design and supply-chain optimization. By hosting the JUPITER exascale supercomputer (operational in 2025), Germany achieved a major milestone – securing one of the world’s top AI computation resources on European soil (ecmwf.int). This will greatly accelerate sovereign model development in Germany and the EU.
- Independence Policy: German officials emphasize “technologische Souveränität” (technological sovereignty). Former Chancellor Angela Merkel and current leaders have argued that Europe’s digital future must not be left to outside companies. This led to initiatives like GAIA-X (a European cloud) and similar thinking in AI. The support of Aleph Alpha and others in 2023 was explicitly to avoid falling behind in AI and to have homegrown options (lawfaremedia.org). Germany also tends to stress ethical AI; having its own models helps enforce its strict privacy and safety standards. In sum, Germany’s policy is that while it will use global AI tools, it also wants at least a few German/European AI platforms in the fray, to secure both economic benefits and autonomy in critical tech.
Spain: National LLMs for Spanish and Co-Official Languages
- Project & Description: Spain has launched a project to develop national large language models that understand Spanish and the country’s co-official languages (Catalan, Basque, Galician). Announced in 2023 as part of Spain’s digital agenda, this initiative will produce a family of foundation models for text and possibly speech (lawfaremedia.org). Some models are being trained from scratch on Spanish corpora, while others will adapt existing open-source models. A new supercomputer, MareNostrum 5 in Barcelona, is earmarked for training these models (lawfaremedia.org). The goal is to have Spain’s own GPT-like systems for use in everything from government chatbots to industry-specific assistants, all tuned to Iberian linguistic nuances.
- Agencies/Organizations: The effort is overseen by Spain’s Secretary of State for Digitalization and AI. Key players include the Barcelona Supercomputing Center (BSC), which hosts MareNostrum 5, and the national AI research program (Vision AI). The project likely involves universities (e.g. Polytechnic University of Catalonia) and the Spanish Royal Academy for language resources. It aligns with the EU’s broader funded projects (Spain is part of the EU’s EuroHPC AI projects and was selected to host an “AI Innovation Hub”). Local startups like Bitext or TELMI might contribute their language datasets.
- Goals & Scope: The primary goal is an AI that truly masters Spanish and Spain’s regional languages, which are underrepresented in current global models. By doing so, Spain ensures that vital digital services (like legal document summarizers or educational chatbots) work accurately in Spanish and don’t force reliance on English-language tools. Another aim is to bolster Spain’s tech sovereignty and innovation – having its own LLMs could spawn local AI enterprises and reduce cloud spending on foreign APIs. The scope includes making these models open or widely accessible so businesses and researchers in Spain can build on them (lawfaremedia.org). Culturally, it’s also about preserving linguistic diversity in the AI era (Basque and Catalan, for example, have unique needs that a Spain-trained model would address).
- Key Achievements: By late 2024, Spain’s team had reportedly trained smaller baseline models in Spanish and Catalan and was preparing a larger model (tens of billions of parameters). They have compiled one of the most comprehensive Spanish text datasets, including literature, legal texts, and web data. Early internal versions of the model showed better performance in Spanish than Google’s or OpenAI’s on certain tasks, such as distinguishing formal from informal address (tú vs. usted). Installation of MareNostrum 5 (with thousands of H100 GPUs) began, promising a big leap in training capacity (lawfaremedia.org). Spain has also rolled out a Spanish-language AI assistant for public administrative help as a pilot, based on an earlier medium-sized model – a stepping stone toward the full national LLM.
- Independence Policy: Spain frames its AI initiative as part of “Tecnología con Soberanía” – technology with sovereignty. The government wants core AI that aligns with European values and supports the Spanish language. Spain’s National AI Strategy (España Digital 2026) explicitly calls for developing open, sovereign AI solutions so that Spain isn’t strictly dependent on large U.S. providers. Officials also highlight language: Spanish is the world’s second-most-spoken native tongue, and Spain sees an opportunity (and responsibility) to have AI that serves the Spanish-speaking world in an unbiased, culturally appropriate manner. This policy outlook has driven public investment into national LLM development, ensuring Spain has a stake in the AI race on its own terms (lawfaremedia.org).
Canada: AI Sovereign Compute Strategy
- Project & Description: Canada in 2024 announced a Canadian AI Sovereign Compute Strategy as part of a broader plan to “secure Canada’s AI advantage.” The strategy involves a $2 billion+ investment to build and provide access to AI computing infrastructure on Canadian soil (pm.gc.ca). The immediate step is an AI Compute Access Fund to give researchers and startups near-term cloud compute, while the longer-term step is developing large-scale, Canadian-owned AI supercomputers. The strategy complements Canada’s existing AI leadership (the Montreal and Toronto AI hubs) by ensuring the next generation of foundation models can be developed domestically instead of solely at foreign tech companies.
- Agencies/Organizations: The initiative was announced by the Prime Minister and is led by the Ministry of Innovation, Science and Industry. It will likely be executed through organizations like Compute Canada or the Digital Research Alliance, which manage HPC resources, and Canada’s AI institutes (Vector Institute, MILA, AMII). The government also led a funding round for Cohere, a Toronto-based AI company making LLMs (lawfaremedia.org), to anchor talent in Canada. Partnerships with hardware firms (potentially Nvidia) are envisioned as Canada builds new computing clusters (lawfaremedia.org).
- Goals & Scope: The main goal is to ensure Canadian AI researchers and companies are not dependent on foreign cloud platforms for large-scale computing (pm.gc.ca). By having sovereign computing, Canada can train its own large models (including bilingual English–French models or domain-specific models for health and climate) under Canadian privacy rules and values. It’s also about economic growth: scaling up domestic AI infrastructure should attract talent and investment, keeping Canada at the cutting edge (Canada was an early AI pioneer and wants to maintain that status). The scope includes building at least one world-class AI supercomputer and associated data centers across Canada, as well as continuing funding of homegrown AI model developers (like Cohere, which focuses on enterprise LLMs).
- Key Achievements: As of 2024, the government had set aside $2.4 billion in the federal budget for the AI package, including roughly $1.5 billion for the compute strategy (pm.gc.ca). A consultation was launched to design the sovereign compute network (covering what hardware to buy, where to locate it, etc.). Meanwhile, Canada’s existing Niagara supercomputer in Ontario was upgraded to support AI workloads as an interim measure. The Cohere LLM (which Canada backs) has grown in usage and is offered as a Canadian alternative to OpenAI for enterprise customers. The early formation of an AI Safety Institute (with $50 million in funding) also ensures Canada’s models will be developed with oversight (pm.gc.ca). While the new infrastructure is in progress, these steps have laid the groundwork for Canada’s independent AI ecosystem.
- Independence Policy: Canada’s move comes from a position of strength (a “world-leading AI ecosystem,” per Trudeau (pm.gc.ca)) but with a clear intent to remain sovereign. The policy explicitly mentions catalyzing “Canadian-owned and located AI infrastructure” (pm.gc.ca). This reflects concerns that if all AI compute is rented from U.S. giants, Canadian innovators could be hamstrung or data could leave the jurisdiction. By investing in itself, Canada ensures Canadian values (like bilingualism, multicultural inclusion, and privacy) are built into the AI layer. The government’s language emphasizes opportunity – that Canadian ideas and values should help shape this globally in-demand technology, hence the need for made-in-Canada solutions (pm.gc.ca). This stance aligns with Canada’s broader strategy of fostering homegrown tech champions and retaining talent.
Brazil: Brazilian Artificial Intelligence Plan (PBIA)
- Project & Description: Brazil in 2024 unveiled the Brazilian Artificial Intelligence Plan (Plano Brasileiro de Inteligência Artificial – PBIA), a sweeping initiative to develop fully homegrown AI capabilities across the public and private sectors (brazilreports.com). The plan includes a budget of R$23 billion (~US$4 billion) through 2028 (brazilreports.com). A central project under the PBIA is upgrading Brazil’s top supercomputer (Santos Dumont, in Petrópolis) to rank among the world’s top five in performance (brazilreports.com) – effectively creating a national AI supercomputing hub. The PBIA also outlines dozens of AI projects in health, education, agriculture, defense, and environmental monitoring, all using Brazil-developed AI models and tools (brazilreports.com).
- Agencies/Organizations: The plan is led by Brazil’s Ministry of Science, Technology and Innovation, with input from the Ministry of Digital Communications and others. The National Laboratory for Scientific Computing (LNCC) operates the Santos Dumont supercomputer and is a key player in implementation (brazilreports.com). A network of federal universities and innovation centers is involved in R&D. Politically, President Luiz Inácio Lula da Silva himself launched the plan, indicating top-level support (brazilreports.com). The government is also likely partnering with domestic tech companies (such as Petrobras’ CENPES for AI in energy, or Embraer for AI in aviation) to drive applied research.
- Goals & Scope: The overarching goal is to achieve technological autonomy in AI and boost Brazil’s competitiveness (brazilreports.com). Officials explicitly state they want Brazil to create its own AI mechanisms “instead of relying on AI from China, the United States, South Korea, or Japan” (brazilreports.com). That entails developing Brazilian LLMs (including Portuguese-language models for government use), AI-powered public services (like automated health diagnostics for remote areas), and even domestic AI chips in the long run (brazilreports.com). The scope is comprehensive: a first phase integrates AI into many public services (with immediate-impact projects), and a second phase focuses on long-term capacity – upgrading hardware, training AI talent, and establishing an excellence network (brazilreports.com). Sustainability is also in scope, ensuring sufficient renewable energy for new data centers (brazilreports.com).
- Key Achievements: By mid-2024, Brazil had announced the procurement to expand Santos Dumont with cutting-edge GPUs, aiming to massively increase its AI training capacity (brazilreports.com). Some early AI projects under the PBIA have rolled out, e.g. autonomous disinfection robots in hospitals and AI systems for analyzing satellite imagery of the Amazon rainforest (brazilreports.com). Brazil’s government has also launched Portuguese-language AI assistants for citizens (for example, an AI chatbot on a government portal) as pilots. The PBIA’s emphasis on open source means Brazil is adapting models like BLOOM and GPT-J to Portuguese with local data – these fine-tuned models are expected to be released for public use in academia and industry.
- Independence Policy: Brazil’s leadership has been vocal that a country of over 200 million must not remain an AI importer. At the PBIA launch, President Lula rhetorically asked, “Why can’t we have our own [AI]?”, stressing that Brazil should not have to depend on foreign AI from the US or China (brazilreports.com). This sentiment of national pride and sovereignty underpins the plan. It is also tied to digital inclusion – ensuring AI serves Brazilian society broadly (Portuguese content, local needs) rather than foreign companies’ priorities. By investing billions, Brazil is asserting that it wants a say in the AI revolution and to develop AI that aligns with its national interests and values. This policy marks a shift for Brazil into a more assertive tech stance: leveraging its large market and talent to build independent AI capacity for the future (brazilreports.com).
Sources: Recent government and institutional reports, news articles, and analyses of national AI initiatives (lawfaremedia.org; reuters.com; brazilreports.com), as cited above. Each country section includes inline citations to these sources for verification.