4 Sources
[1]
European Parliament blocks AI on lawmakers' devices, citing security risks | TechCrunch
The European Parliament has reportedly blocked lawmakers from using the baked-in AI tools on their work devices, citing cybersecurity and privacy risks with uploading confidential correspondence to the cloud. Per an email seen by Politico, the parliament's IT department said it could not guarantee the security of the data uploaded to the servers of AI companies and that the full extent of what information is shared with AI companies is "still being assessed." As such, the email said, "It is considered safer to keep such features disabled."

Uploading data to AI chatbots, like Anthropic's Claude, Microsoft's Copilot, and OpenAI's ChatGPT, for example, means that U.S. authorities can demand the companies that run the chatbots turn over information about their users. AI chatbots also typically rely on using information that users provide or upload to improve their models, increasing the chance that potentially sensitive information uploaded by one person may be shared and seen by other users.

Europe has some of the strongest data protection rules in the world. But the European Commission, the executive body that oversees the 27-member state bloc, last year floated new legislative proposals aimed at relaxing its data protection rules to make it easier for tech giants to train their AI models on Europeans' data, drawing ire from critics who said the move caves in to U.S. technology giants.

The move to restrict European lawmakers from accessing AI products on their devices comes as several EU member countries reevaluate their relationships with U.S. tech giants, which remain subject to U.S. law and the unpredictable whims and demands of the Trump administration. In recent weeks, the U.S. Department of Homeland Security has sent hundreds of subpoenas demanding U.S. tech and social media giants turn over information about people, including Americans, who have been publicly critical of the Trump administration's policies.
Google, Meta, and Reddit complied in several cases, even though the subpoenas had not been issued by a judge and were not enforced by a court.
[2]
European Parliament bars lawmakers from AI tools
Who knows where that helpful email summary is being generated? The European Parliament has reportedly turned off AI features on lawmakers' devices amid concerns about content going where it shouldn't.

According to Politico, staff were notified that AI features on corporate devices (including tablets) were disabled because the IT department could not guarantee data security. The bone of contention is that some AI assistants require the use of cloud services to perform tasks including email summarization, and so send the data off the device - a challenge for data protection. It's unfortunate for device vendors that promote on-device processing, but the European Parliament's tech support desk reportedly stated: "As these features continue to evolve and become available on more devices, the full extent of data shared with service providers is still being assessed. Until this is fully clarified, it is considered safer to keep such features disabled."

The Register contacted the European Parliament for comment.

Data privacy and AI services have not been the greatest of bedfellows. Studies have shown that employees regularly leak company secrets via assistants, and on-device AI services are a focus of vendors amid concerns about exactly what is being sent to the cloud. The thought of confidential data being sent to an unknown location in the cloud to generate a helpful summary has clearly worried lawmakers, which is why there is a blanket ban. However, the issue has less relevance if the process occurs on the device itself.

The Politico report noted that day-to-day tools, such as calendar applications, are not affected by the edict. The ban is temporary until the tech boffins can clarify what is being shared and where it is going. The European Parliament has scrutinized AI over recent years and has enacted the world's first legislation specifically designed to address perceived risks from the technology.
The ban, alongside guidance to steer lawmakers away from using the services for Parliament business, reflects fears about where the data could end up rather than anything specific to AI itself. The guidance also advised against granting third-party AI apps broad access to data, which seems a sensible instruction regardless of where a user works. ®
[3]
The European Parliament pulls back AI from its own devices
The European Parliament has taken a rare and telling step: it has disabled built-in artificial intelligence features on work devices used by lawmakers and staff, citing unresolved concerns about data security, privacy, and the opaque nature of cloud-based AI processing. The decision, communicated to Members of the European Parliament (MEPs) in an internal memo this week, reflects a deepening unease at the heart of European institutions about how AI systems handle sensitive data.

The Parliament's IT department concluded that it could not guarantee the safety of certain AI-driven functions, notably writing assistants, text summarization tools, virtual assistants, and web page summary features, because they rely on cloud-based processing that sends data off the device. In a workplace where draft legislation, confidential correspondence, and internal deliberations circulate daily, even momentary exposure of sensitive information is viewed as unacceptable. For now, the measures apply only to these native, built-in AI features on Parliament-issued tablets and smartphones, not to everyday apps like email or calendars. The institution has declined to specify which operating systems or device manufacturers are affected, citing the "sensitive nature" of cybersecurity matters.

The internal memo did more than announce a software rollback. It advised lawmakers to review AI settings on their personal phones and tablets, warning them against exposing work emails, documents, or internal information to AI tools that "scan or analyze content," and urging caution with third-party AI applications that seek broad access to data. This guidance implicitly acknowledges a larger truth: for many elected officials and staff, the boundary between official and personal devices is porous. The Parliament's approach underscores that risks are not confined to issued hardware but extend into the consumer technology choices of its own members.
The move is the latest in a series of precautionary steps by EU institutions. In 2023 the Parliament banned the use of TikTok on staff devices over similar data concerns, and ongoing debates have questioned the use of foreign-developed productivity software. Some lawmakers have even suggested moving away from Microsoft products in favor of European alternatives, part of a broader push for digital sovereignty.

That push is not abstract. The EU's Artificial Intelligence Act, the world's first comprehensive regulatory framework on AI, has been in force since 2024 and imposes obligations on AI providers and users alike, categorizing systems by risk and demanding transparency, traceability, and human oversight. Yet the Parliament's latest action reveals a paradox: while Europe seeks to regulate and shape AI at scale, it is simultaneously wary of the very tools it aims to master. Stopping short of a full ban on AI use, the institution is essentially saying that in certain contexts, the technology is too unpredictable to trust, especially when critical information could leak outside secure boundaries.

The Parliament's decision may seem narrowly targeted, but it carries broader implications. It signals that even for progressive regulators who have championed innovation alongside rights protections, the practical limits of AI integration are now a central concern. Cybersecurity teams within government institutions are not merely technologists; they are custodians of trust in an era when data is both an asset and a vulnerability. For businesses and citizens watching Europe's regulatory trajectory, this episode is instructive. It suggests that the EU's approach to AI will not only be legal and ethical but deeply pragmatic. Regulations may promote responsible innovation, but European institutions are prepared to pull back when security and control are at stake.
As AI capabilities continue to evolve and become embedded in devices worldwide, the Parliament's cautionary step highlights a core tension of the digital age: balancing the potential of AI with its unseen and unquantified risks. Whether other governments follow suit, or whether this stance influences corporate and product strategy, remains to be seen. In the meantime, the message from Brussels is unmistakable: when it comes to AI and sensitive data, trust but verify is no longer enough.
[4]
EU Parliament bans AI use on government work devices as security fears rise
Workers also asked to exercise caution when using personal devices and AI for work tasks

The European Parliament has turned off built-in AI features on the devices it issues employees due to cybersecurity and data protection concerns. An internal memo cited by Politico said the IT department could not guarantee the security of certain AI tools, particularly those that rely on cloud services that send data off-device instead of processing locally. While the European Parliament is said to be assessing the extent of the data shared with service providers to potentially re-enable some AI tools, they've been turned off for now.

"Some of these features use cloud services to carry out tasks that could be handled locally, sending data off the device," the letter reads, suggesting that local processing would be the safer option. Although access to generative AI tools has been cut for now, the European Parliament hasn't cut access to core workplace tools like emails, calendars and office apps. Nor did the Parliament mention which AI features or systems are involved.

While the data security argument has merit, European officials have also been ramping up efforts to turn their backs on US Big Tech, including Microsoft. A company that just so happens to offer operating system, productivity and AI software to European officials. Might tech sovereignty also be playing a role in the AI ban?

The support desk also asked workers to "consider applying similar precautions" on their own personal devices, which includes "avoid[ing] granting broad access to data" and not sharing sensitive info with AI chatbots. A European Parliament spokesperson told Politico it "constantly monitor[s] cybersecurity threats and quickly deploys the necessary measures to prevent them."
The European Parliament has disabled built-in AI features on lawmakers' work devices, citing unresolved cybersecurity and privacy concerns. The IT department cannot guarantee data security when AI tools send sensitive information to cloud servers. The move reflects growing unease about how AI systems handle confidential data and highlights tensions between AI integration and data protection in government institutions.
The European Parliament has taken a precautionary step by disabling built-in AI features on work devices issued to lawmakers and staff, citing cybersecurity and privacy concerns that remain unresolved [1]. According to an internal memo seen by Politico, the Parliament's IT department concluded it could not guarantee the security of data uploaded to cloud servers operated by AI companies [1]. The decision affects AI-driven functions including writing assistants, text summarization tools, virtual assistants, and web page summary features that rely on cloud-based AI tools rather than on-device processing [3].
Source: The Next Web
The core issue centers on how cloud-based AI tools handle sensitive information. When lawmakers use AI assistants like ChatGPT, Copilot, or similar services, data must be sent off the device to external cloud servers for processing [2]. The Parliament's tech support desk stated that "the full extent of data shared with service providers is still being assessed" and that "it is considered safer to keep such features disabled" until this is fully clarified [2]. This concern is particularly acute in a workplace where draft legislation, confidential correspondence, and internal deliberations circulate daily [3]. AI chatbots also typically rely on information that users provide or upload to improve their models, increasing the chance that potentially sensitive information uploaded by one person may be shared and seen by other users [1].
Source: TechCrunch
The internal memo did more than announce a software rollback. It advised Members of the European Parliament to review AI settings on their personal phones and tablets, warning them against exposing work emails, documents, or internal information to AI tools that "scan or analyze content" [3]. Workers were also asked to "consider applying similar precautions" on their own personal devices, including avoiding granting broad access to data and not sharing sensitive information with AI chatbots [4]. This guidance implicitly acknowledges that for many elected officials and staff, the boundary between official and personal devices is porous, and that risks extend beyond issued hardware into members' own consumer technology choices [3].

The move to restrict AI on lawmakers' devices comes as several EU member countries reevaluate their relationships with US Big Tech companies, which remain subject to U.S. law and the unpredictable demands of the Trump administration [1]. Uploading data to AI chatbots from companies like OpenAI and Microsoft means that U.S. authorities can demand these companies turn over information about their users [1]. In recent weeks, the U.S. Department of Homeland Security has sent hundreds of subpoenas demanding U.S. tech and social media giants turn over information about people, including Americans, who have been publicly critical of the Trump administration's policies [1]. Some European lawmakers have even suggested moving away from Microsoft products in favor of European alternatives, part of a broader push for tech sovereignty [3].
The Parliament's decision reveals a paradox: while Europe seeks to regulate and shape AI at scale through its AI Act, the world's first comprehensive regulatory framework on AI, in force since 2024, it is simultaneously wary of the very tools it aims to master [3]. The ban is temporary until the tech boffins can clarify what is being shared and where it is going [2]. Day-to-day tools such as calendar applications and core workplace tools like email and office apps are not affected by the edict [2][4]. The institution has declined to specify which operating systems or device manufacturers are affected, citing the "sensitive nature" of cybersecurity matters [3].

This is not the first time the European Parliament has taken precautionary measures. In 2023, the Parliament banned the use of TikTok on staff devices over similar data concerns [3]. Studies have shown that employees regularly leak company secrets via AI assistants, and on-device AI services are a focus of vendors amid concerns about exactly what is being sent to the cloud [2]. A European Parliament spokesperson told Politico it "constantly monitor[s] cybersecurity threats and quickly deploys the necessary measures to prevent them" [4]. The message from Brussels is unmistakable: when it comes to AI and sensitive information, trust but verify is no longer enough [3]. Whether other governments follow suit, or whether this stance influences corporate and product strategy around data processing and data privacy, remains to be seen [3].
Source: The Register
Summarized by Navi