3 Sources
[1]
Russia to give itself sweeping powers to ban or restrict foreign AI tools
March 20 (Reuters) - Foreign AI tools like Claude, ChatGPT and Gemini could be banned or restricted inside Russia if they fail to adhere to new rules that would give Moscow sweeping powers to regulate the sector, according to government proposals published online. The proposals, published by Russia's Ministry for Digital Development, would extend to the burgeoning AI sector Russia's drive to establish a sovereign internet - protected from foreign influence and respecting what it calls "traditional Russian spiritual and moral values." Russia's Ministry for Digital Development said in a statement that the new rules were designed to "help protect citizens from covert manipulation and discriminatory algorithms." RESTRICTING CROSS-BORDER AI TECHNOLOGY The initiative, which is likely to benefit home-grown AI tools being developed by state lender Sberbank and technology group Yandex, has been made public at a time when the Russian state is tightening state control over the internet. The regulations are expected to enter into force next year after further review and government approval. "The operation of cross-border artificial intelligence technologies may be prohibited or restricted in cases specified by the legislation of the Russian Federation," the rules state. The state-run RIA news agency reported on Friday that foreign AI tools would fall under the new rules because they inevitably transferred the data of Russian citizens abroad. "Cross-border artificial intelligence technologies refers to all foreign AI models, including ChatGPT, Claude and Gemini, where the use of such models results in user data, queries and dialogues being transmitted to the developers of these models outside Russia," RIA cited a specialised technology lawyer, Kirill Dyakov, as saying. All three models mentioned by Dyakov were developed by U.S. companies: OpenAI, Anthropic and Alphabet's Google (GOOGL.O), respectively.
Other foreign but open AI tools, such as China's Qwen or DeepSeek, could however be safely adapted and rolled out in a closed environment on the proprietary infrastructure of Russian government organisations and companies, Dyakov said, since any data processed would remain within that infrastructure. RIA said AI models used by more than 500,000 people per day would need to store Russian user information on Russian territory for three years to be compliant under the new regulatory regime. Western tech companies have in the past refused to comply with such demands. (Reporting by Andrew Osborn; Editing by Tomasz Janowski)
[2]
Russia to give itself sweeping powers to ban or restrict foreign AI tools - The Economic Times
Russia is set to introduce new rules for artificial intelligence. Foreign AI tools may face bans or restrictions if they fail to comply. These regulations aim to protect citizens and promote domestic AI development. The new regime will likely come into effect next year. Russian user data will need to be stored locally for compliance. Foreign AI tools like Claude, ChatGPT and Gemini could be banned or restricted inside Russia if they fail to adhere to new rules that would give Moscow sweeping powers to regulate the sector, according to government proposals published online. The proposals, published by Russia's Ministry for Digital Development, would extend to the burgeoning AI sector Russia's drive to establish a sovereign internet - protected from foreign influence and respecting what it calls "traditional Russian spiritual and moral values." Russia's Ministry for Digital Development said in a statement that the new rules were designed to "help protect citizens from covert manipulation and discriminatory algorithms." RESTRICTING CROSS-BORDER AI TECHNOLOGY The initiative, which is likely to benefit home-grown AI tools being developed by state lender Sberbank and technology group Yandex, has been made public at a time when the Russian state is tightening state control over the internet. The regulations are expected to enter into force next year after further review and government approval. "The operation of cross-border artificial intelligence technologies may be prohibited or restricted in cases specified by the legislation of the Russian Federation," the rules state. The state-run RIA news agency reported on Friday that foreign AI tools would fall under the new rules because they inevitably transferred the data of Russian citizens abroad.
"Cross-border artificial intelligence technologies refers to all foreign AI models, including ChatGPT, Claude and Gemini, where the use of such models results in user data, queries and dialogues being transmitted to the developers of these models outside Russia," RIA cited a specialised technology lawyer, Kirill Dyakov, as saying. All three models mentioned by Dyakov were developed by U.S. companies: OpenAI, Anthropic and Alphabet's Google, respectively. Other foreign but open AI tools, such as China's Qwen or DeepSeek, could however be safely adapted and rolled out in a closed environment on the proprietary infrastructure of Russian government organisations and companies, Dyakov said, since any data processed would remain within that infrastructure. RIA said AI models used by more than 500,000 people per day would need to store Russian user information on Russian territory for three years to be compliant under the new regulatory regime. Western tech companies have in the past refused to comply with such demands. (Reporting by Andrew Osborn; Editing by Tomasz Janowski)
[3]
Russia to give itself sweeping powers to ban or restrict foreign AI tools
March 20 (Reuters) - Foreign AI tools like Claude, ChatGPT and Gemini could be banned or restricted inside Russia if they fail to adhere to new rules that would give Moscow sweeping powers to regulate the sector, according to government proposals published online. The proposals, published by Russia's Ministry for Digital Development, would extend to the burgeoning AI sector Russia's drive to establish a sovereign internet - protected from foreign influence and respecting what it calls "traditional Russian spiritual and moral values." Russia's Ministry for Digital Development said in a statement that the new rules were designed to "help protect citizens from covert manipulation and discriminatory algorithms." RESTRICTING CROSS-BORDER AI TECHNOLOGY The initiative, which is likely to benefit home-grown AI tools being developed by state lender Sberbank and technology group Yandex, has been made public at a time when the Russian state is tightening state control over the internet. The regulations are expected to enter into force next year after further review and government approval. "The operation of cross-border artificial intelligence technologies may be prohibited or restricted in cases specified by the legislation of the Russian Federation," the rules state. The state-run RIA news agency reported on Friday that foreign AI tools would fall under the new rules because they inevitably transferred the data of Russian citizens abroad. "Cross-border artificial intelligence technologies refers to all foreign AI models, including ChatGPT, Claude and Gemini, where the use of such models results in user data, queries and dialogues being transmitted to the developers of these models outside Russia," RIA cited a specialised technology lawyer, Kirill Dyakov, as saying. All three models mentioned by Dyakov were developed by U.S. companies: OpenAI, Anthropic and Alphabet's Google, respectively. 
Other foreign but open AI tools, such as China's Qwen or DeepSeek, could however be safely adapted and rolled out in a closed environment on the proprietary infrastructure of Russian government organisations and companies, Dyakov said, since any data processed would remain within that infrastructure. RIA said AI models used by more than 500,000 people per day would need to store Russian user information on Russian territory for three years to be compliant under the new regulatory regime. Western tech companies have in the past refused to comply with such demands. (Reporting by Andrew Osborn; Editing by Tomasz Janowski)
Russia's Ministry for Digital Development has published proposals granting Moscow authority to ban or restrict foreign AI tools including ChatGPT, Claude, and Gemini. The regulations extend Russia's sovereign internet initiative to AI, requiring platforms with over 500,000 daily users to store Russian user data locally for three years. Expected to take effect next year, the move benefits domestic players like Sberbank and Yandex.
Russia is moving to assert control over artificial intelligence within its borders through new regulatory proposals that could fundamentally reshape how foreign AI tools operate in the country. The Ministry for Digital Development has published rules granting Moscow sweeping powers to ban or restrict foreign AI tools like ChatGPT, Claude, and Gemini if they fail to comply with strict data localization requirements [1]. These regulations extend Russia's long-standing push for a sovereign internet, one insulated from foreign influence and aligned with what officials describe as "traditional Russian spiritual and moral values." The Ministry for Digital Development stated the new rules aim to "help protect citizens from covert manipulation and discriminatory algorithms" [2].

The proposed regulations specifically target cross-border AI technologies, which technology lawyer Kirill Dyakov defines as "all foreign AI models, including ChatGPT, Claude and Gemini, where the use of such models results in user data, queries and dialogues being transmitted to the developers of these models outside Russia" [3]. The regulations stipulate that AI models used by more than 500,000 people per day must store Russian user data on Russian territory for three years to remain compliant. This requirement poses a significant challenge for Western tech companies including OpenAI, Anthropic, and Google, which developed the three AI models explicitly mentioned in the proposals. These companies have historically refused similar data localization demands from Russia.

The initiative arrives amid Moscow's broader effort to tighten state control over the internet and assert digital sovereignty. The regulations are expected to enter into force next year after further review and government approval [2]. According to the published rules, "the operation of cross-border artificial intelligence technologies may be prohibited or restricted in cases specified by the legislation of the Russian Federation." This broad language grants Russian authorities considerable discretion in determining which foreign AI tools can operate within the country, making compliance not just a technical matter of data privacy but a political calculation for international AI providers.
Source: Reuters
The new regulatory framework appears designed to promote domestic AI development and favor homegrown solutions. State lender Sberbank and technology group Yandex, both developing their own AI capabilities, stand to gain from restrictions on foreign competitors [3]. Interestingly, the regulations may leave room for certain foreign but open AI tools. Dyakov noted that models like China's Qwen or DeepSeek could potentially be adapted and deployed in closed environments on the proprietary infrastructure of Russian government organizations and companies, since any data processed would remain within that infrastructure [2]. This distinction suggests Russia may differentiate between commercial AI services that transmit data abroad and open-source models that can be deployed locally. For global AI providers, the choice becomes stark: either establish local data infrastructure, accepting Russia's framing of "protecting citizens from manipulation," or exit the Russian market entirely. Given Western tech companies' historical reluctance to comply with Russian data demands, the latter outcome appears more likely, potentially fragmenting the global AI landscape along geopolitical lines.

Summarized by Navi