3 Sources
[1]
Why the EU's anonymisation method may not survive the GDPR test
Sergei Vassilvitskii, distinguished scientist at Google since 2012, has written to Brussels warning that the Commission's proposed anonymisation scheme for forced search-data sharing is, by his red team's own demonstration, breakable in 120 minutes. The decision deadline is 27 July.

There is a familiar genre of corporate complaint in EU regulatory proceedings: a US technology company protests a Brussels rule, frames the protest as a defence of user welfare, and is dismissed by regulators as making a self-interested argument in privacy clothing. The Reuters exclusive published on Tuesday makes that dismissal harder than usual. Sergei Vassilvitskii, who has been a distinguished scientist at Google since 2012 and is one of the most-cited researchers in the field of differential privacy, has written to the European Commission warning that the Commission's proposed anonymisation method for forced search-data sharing is breakable in less than two hours. His exact words, in written comments to Reuters republished in the syndicated wire, were: "We are concerned because the EC's approach to anonymisation fails to protect Europeans' privacy: our red team managed to re-identify users in less than two hours." The number is unusually specific. It is also, on the technical literature, plausible.

The proceeding sits inside the Digital Markets Act, the EU's flagship competition framework for so-called gatekeeper platforms. On 27 January 2026, the Commission opened formal specification proceedings against Google under Article 6(11) of the DMA, which obliges gatekeeper search engines to grant third-party rivals access to anonymised ranking, query, click and view data on fair, reasonable and non-discriminatory (FRAND) terms.
Per the Commission's own press materials, the proceeding is intended to specify, with operational precision, four things: the scope of the data that has to be shared, the anonymisation method that will be applied to it, the conditions of access, and the eligibility of AI chatbot providers (OpenAI, Anthropic, and others) to receive it. Google's compliance deadline is 27 July 2026. Failure to meet it could result in DMA charges with fines up to 10 per cent of the company's global annual revenue. The Register noted in mid-April that Google has accumulated roughly €9.71bn in European antitrust fines since 2017, so the financial calculus on this proceeding is, even by Google's standards, material.

What makes the proceeding unusual is that the proposed remedy, search-data sharing, is itself privacy-sensitive in ways most DMA remedies are not. The Information Technology and Innovation Foundation, in a 1 May filing, flagged the same fundamental tension: forcing a search engine to make user-search data available to rivals is, by definition, expanding the surface area on which user-search data can be exploited. The Chamber of Progress raised parallel concerns the same week, and CyberInsider warned that the proposal could enable large-scale surveillance if anonymisation methods proved insufficient. Vassilvitskii's intervention is the technical specification of that concern.

Anonymisation, in the modern privacy literature, is not a binary property of a dataset. It is a probabilistic property that depends on (a) the data itself, (b) the auxiliary information an attacker has access to, and (c) the technique used to anonymise. Vassilvitskii's research career, per his Google Research profile, has focused specifically on differential privacy, the mathematical framework for measuring and bounding the re-identification risk in released datasets.
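To make the differential-privacy framing concrete, here is a minimal sketch of the Laplace mechanism, the textbook construction for releasing a statistic with a provable bound on what any one user's presence can reveal. This is an illustrative toy, not Google's method and not the Commission's proposed scheme; the function names and parameters are invented for the example.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from a zero-mean Laplace distribution
    via the standard inverse-CDF construction."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy.

    Adding or removing a single user changes the count by at most
    `sensitivity`, so Laplace noise with scale sensitivity/epsilon
    bounds how much the released value can depend on any one user.
    """
    return true_count + laplace_noise(sensitivity / epsilon)

# Smaller epsilon => more noise => stronger privacy, lower utility.
released = dp_count(true_count=1000, epsilon=0.5)
```

The point of the framework is exactly the probabilistic framing described above: privacy is not a yes/no property of the released dataset but a quantified bound (epsilon) on re-identification risk, which holds regardless of what auxiliary information an attacker brings.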
His 2025 ACM SIGKDD paper on differentially private datasets for Google's Topics API is one of the more rigorously documented applications of the framework to a live commercial system. The two-hour claim, in that frame, is an empirical statement, not a rhetorical one. Vassilvitskii's red team, working from a sample of search-engine query data anonymised under the Commission's proposed method, was able to re-identify individual users within two hours. The anonymisation technique the Commission has proposed, in his framing, falls into a category of methods (typically combinations of pseudonymisation, aggregation, and noise injection) that have been demonstrated for over a decade to be vulnerable to linkage attacks when the underlying queries are sufficiently distinctive.

That vulnerability is not theoretical. In 2006, an anonymised release of AOL search data led to multiple users being identified by name within days, including a famous New York Times reconstruction of one specific user. The same principle applies, more starkly, to modern search data, which is now vastly more granular than the 2006 corpus and far easier to cross-reference against the public web.

There is a delicate political question Google has to navigate here. The company has spent the past decade arguing, sometimes credibly and sometimes not, that user privacy is one of its core commitments. The same company is now subject to a Commission proceeding that seeks to compel it to share user data with rivals on competition grounds. The argument that doing so would harm user privacy, regardless of whether it is technically correct, is open to the obvious counter-charge that Google's privacy concern has activated suspiciously alongside its commercial interest. The Vassilvitskii intervention is, on the available reporting, an attempt to defuse that counter-charge by anchoring the privacy argument in a researcher whose career independence and technical credibility are harder to dismiss.
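The mechanics of the AOL-style linkage attack mentioned earlier can be sketched in a few lines. This is a toy with invented data loosely echoing the 2006 case, not the red team's actual method: pseudonymised query logs are joined against auxiliary public information, and a sufficiently distinctive combination of queries singles a user out.

```python
# Toy linkage attack: pseudonymisation alone does not stop
# re-identification when queries are distinctive enough to
# cross-reference against auxiliary (public) information.

# "Anonymised" log: user IDs replaced with opaque pseudonyms.
pseudonymised_log = {
    "user_7f3a": ["weather berlin", "landscapers lilburn ga", "arnold family"],
    "user_c91d": ["weather berlin", "football scores"],
}

# Auxiliary knowledge an attacker might hold, e.g. scraped from
# the public web: traits that map onto distinctive query content.
auxiliary = {
    "T. Arnold of Lilburn, GA": {"landscapers lilburn ga", "arnold family"},
}

def link(log: dict, aux: dict) -> dict:
    """Match each pseudonym to any identity whose known traits
    all appear in that pseudonym's query history."""
    matches = {}
    for pseudonym, queries in log.items():
        qset = set(queries)
        for identity, traits in aux.items():
            if traits <= qset:  # every known trait appears in the log
                matches[pseudonym] = identity
    return matches

# user_7f3a is re-identified despite the pseudonym; user_c91d,
# whose queries are generic, is not.
print(link(pseudonymised_log, auxiliary))
```

Noise injection and aggregation raise the cost of this join but, absent a formal guarantee of the differential-privacy kind, rarely eliminate it when the query distribution has a long tail of unique entries.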
He has not just written a letter; he met with Commission officials in person on Wednesday and has, per his own framing, proposed alternative anonymisation guardrails that would meet the DMA's competitive intent without producing the re-identification risk his red team has demonstrated.

Whether the Commission accepts that framing is a separate question. The political pressure on the proceeding runs in both directions: AI competitors (OpenAI, Anthropic, Perplexity, Mistral) want access to Google's search data on the most permissive possible terms, both because it would substantially improve their commercial positions in retrieval-augmented generation and because the precedent itself, that gatekeeper search data is shareable on FRAND terms, is strategically valuable. Privacy advocates and researchers, of which Vassilvitskii is now publicly one, want the most restrictive possible terms. The Commission has six months to thread the needle.

Vassilvitskii's intervention lands inside a Brussels regulatory environment that is itself under unusual strain. TNW reported earlier this year on Europe's broader struggle over whether to dismantle parts of its own regulatory architecture in order to compete more effectively with the US, with several recent moves to soften AI Act provisions and accelerate competitive responses to US dominance in the model layer. We have tracked the AI Act's enforcement timeline, with high-risk system rules entering into force in August. The DMA proceeding against Google sits alongside that calendar, but on the competition rather than the safety axis.

There is also a wider transatlantic dimension. TNW has covered the EU's tightening posture on Chinese-origin connectivity infrastructure in parallel, and the broader picture is one in which Europe is simultaneously trying to constrain US gatekeepers (DMA), Chinese vendors (Cybersecurity Act recommendations), and its own regulatory drag on European AI startups.
The Vassilvitskii letter complicates that trilemma by raising the possibility that the EU's own competition remedies, designed to weaken US gatekeeper positions, are themselves creating user-privacy exposure that the EU's privacy framework (GDPR) was built to prevent. It is, on a sober read, the kind of regulatory tension Europe has not previously had to resolve. TNW's earlier coverage of the Italy-OpenAI ChatGPT GDPR enforcement established the principle that EU data-protection law applies extraterritorially to AI systems trained on European data. The same principle, applied to the DMA's data-sharing remedy, suggests that any anonymisation method the Commission specifies has to clear not just the DMA's competition test but also the GDPR's privacy test. The Commission has, in effect, written itself a problem in which the two tests pull in opposite directions.

Three things will determine the trajectory of the proceeding. The first is whether the Commission revises its anonymisation specification before the 27 July decision deadline. Vassilvitskii's red team result, if reproduced or independently confirmed, would make a continued specification of the original method increasingly difficult to defend. TNW has covered the EU's broader push for digital sovereignty, and an outcome in which the Commission's headline remedy fails its own privacy test would be the kind of outcome that European regulators tend to avoid by quiet revision rather than public reversal.

The second is whether the AI chatbot providers, OpenAI, Anthropic, Mistral, and others, who are the ostensible beneficiaries of the data-sharing rule, take a public position on the privacy question. So far they have not. Their commercial interest in obtaining the data on the most permissive possible terms is in tension with their public reputational interest in being seen as privacy-respecting model operators. The longer the Vassilvitskii framing remains uncontested, the harder that tension becomes to manage.
The third is whether the European Court of Justice eventually has to rule on whether DMA remedies that produce GDPR-violating outcomes are themselves legal under EU law. That is the kind of constitutional question Brussels has, until now, managed to avoid. The Vassilvitskii letter makes it more plausible that the question is asked, by Google in court, by privacy advocates in court, or by a national data-protection authority pre-empting the Commission's specification.

None of this excuses Google's commercial interest in the outcome. The company would, by any honest reading, prefer not to share its search data with rivals at all, and its privacy argument is being deployed in service of a pre-existing competitive position. What has changed is that the privacy argument is now being made by someone whose technical credibility is harder to write off and whose career has been spent in the specific sub-field that the Commission's proposed remedy depends on. The decision deadline is 27 July. Vassilvitskii's two-hour figure has, on the public record, been entered into the proceeding's evidence base. Whether it produces a revision, a delay, a litigation track, or a quiet political accommodation is the question Brussels has roughly twelve weeks to answer.
[2]
Top Google scientist says EU data measures pose privacy risk for users
A top Google scientist has warned European Union regulators that the EU proposal to share search engine data with rivals like OpenAI could expose private user information. Google's expert will meet EU officials to voice concerns. The company fears modern AI tools could re-identify users from anonymized data. Regulators will decide on measures by July 27.

A top Google scientist sent a warning to EU antitrust regulators on Tuesday that its proposal requiring the company to share search engine data with rivals such as OpenAI risked exposing users' private information, the sternest rebuke yet in a tussle over Google's lucrative business model. The European Commission, which acts as the EU competition enforcer, has in recent years cracked down on Big Tech via a slew of legislation to ensure that users have more choices and smaller rivals room to compete, which has, however, triggered the ire of the US government. Sergei Vassilvitskii, who has held the title of distinguished scientist at Google since 2012 and is regarded as a leader in his field, will meet EU antitrust officials on Wednesday to voice his concerns and propose a broader approach with better guardrails. The meeting comes a month after the Commission outlined a series of steps that Google should take to allow rival search engines to access search data such as ranking, query, click and view data on fair, reasonable and non-discriminatory terms. The EU proposal, which will be finalised in the coming weeks following feedback from interested parties, has triggered a furious response from Google, which called it regulatory overreach that could jeopardise users' privacy and security.
The issue is the Commission's proposed method for ensuring personal data is anonymised, Vassilvitskii said, underlining fears that it may not be strong enough to prevent modern AI tools from sifting through the data to identify people. "We are concerned because the EC's approach to anonymization fails to protect Europeans' privacy: our red team managed to re-identify users in less than two hours," he said in exclusive written comments to Reuters. Google's AI red team is a group of hackers that simulates a variety of realistic adversary activities to highlight potential vulnerabilities and weaknesses and come up with fixes. "We are eager to share our technical expertise and work with the EC to establish the right guardrails and protect Europeans from privacy harm," Vassilvitskii said. Regulators will decide by July 27 on the exact measures which Google will have to implement. Failure to do so could see the company charged with breaching the Digital Markets Act, which seeks to rein in the power of Big Tech, and penalised with a fine that could be as much as 10% of its global annual revenue.
[3]
Top Google scientist says EU data measures pose privacy risk for users
BRUSSELS, May 5 (Reuters) - A top Google scientist sent a warning to EU antitrust regulators on Tuesday that its proposal requiring the company to share search engine data with rivals such as OpenAI risked exposing users' private information, the sternest rebuke yet in a tussle over Google's lucrative business model. The European Commission, which acts as the EU competition enforcer, has in recent years cracked down on Big Tech via a slew of legislation to ensure that users have more choices and smaller rivals room to compete, which has, however, triggered the ire of the U.S. government. Sergei Vassilvitskii, who has held the title of distinguished scientist at Google since 2012 and is regarded as a leader in his field, will meet EU antitrust officials on Wednesday to voice his concerns and propose a broader approach with better guardrails. The meeting comes a month after the Commission outlined a series of steps that Google should take to allow rival search engines to access search data such as ranking, query, click and view data on fair, reasonable and non-discriminatory terms. The EU proposal, which will be finalised in the coming weeks following feedback from interested parties, has triggered a furious response from Google, which called it regulatory overreach that could jeopardise users' privacy and security. The issue is the Commission's proposed method for ensuring personal data is anonymised, Vassilvitskii said, underlining fears that it may not be strong enough to prevent modern AI tools from sifting through the data to identify people. "We are concerned because the EC's approach to anonymization fails to protect Europeans' privacy: our red team managed to re-identify users in less than two hours," he said in exclusive written comments to Reuters. Google's AI red team is a group of hackers that simulates a variety of realistic adversary activities to highlight potential vulnerabilities and weaknesses and come up with fixes.
"We are eager to share our technical expertise and work with the EC to establish the right guardrails and protect Europeans from privacy harm," Vassilvitskii said. Regulators will decide by July 27 on the exact measures which Google will have to implement. Failure to do so could see the company charged with breaching the Digital Markets Act which seeks to rein in the power of Big Tech and penalised with a fine that could be as much as 10% of its global annual revenue. (Reporting by Foo Yun Chee, Editing by Nick Zieminski)
Sergei Vassilvitskii, a distinguished scientist at Google, has warned the European Commission that its proposed anonymisation method for search data sharing is vulnerable to re-identification in less than two hours. The EU's Digital Markets Act requires Google to share search engine data with rivals like OpenAI by July 27, but the company's AI red team demonstrated critical flaws in the privacy safeguards.
Sergei Vassilvitskii, a distinguished scientist at Google since 2012 and a leading researcher in differential privacy, has issued a stark warning to the European Commission about its proposed data-sharing requirements. In exclusive written comments to Reuters, Vassilvitskii revealed that Google's AI red team managed to re-identify users in less than two hours when testing the EU's anonymisation method [1][2]. The privacy risk stems from the European Commission's approach to forced search-data sharing under the Digital Markets Act, which requires Google to share search engine data with rivals such as OpenAI on fair, reasonable and non-discriminatory terms.

Source: Market Screener

The proceeding sits within the Digital Markets Act, the EU's flagship competition framework for gatekeeper platforms. On January 27, 2026, the European Commission opened formal specification proceedings against Google under Article 6(11) of the DMA, which obliges gatekeeper search engines to grant third-party rivals access to anonymised ranking, query, click and view data [1]. The compliance deadline is July 27, 2026, and failure to meet it could result in fines up to 10% of Google's global annual revenue [2][3]. EU antitrust regulators are finalising the proposal in the coming weeks following feedback from interested parties.

The issue centers on whether modern AI tools can pierce through anonymisation safeguards to identify individual users. Google's AI red team, a group of hackers that simulates realistic adversary activities to highlight potential vulnerabilities, demonstrated that the Commission's proposed method fails to protect user data adequately [2]. Vassilvitskii's research career has focused specifically on differential privacy, the mathematical framework for measuring and bounding re-identification risk in released datasets [1]. The vulnerability is not theoretical: in 2006, an anonymised release of AOL search data led to multiple users being identified by name within days, demonstrating that anonymisation techniques combining pseudonymisation, aggregation, and noise injection remain vulnerable to linkage attacks when queries are sufficiently distinctive [1].
Source: ET
Vassilvitskii met with EU antitrust officials on Wednesday to voice his concerns and propose a broader approach with better guardrails to protect Europeans from privacy harm [2][3]. The proposal has triggered concerns beyond Google: the Information Technology and Innovation Foundation flagged that forcing a search engine to share search data with rivals expands the surface area on which user-search data can be exploited, while the Chamber of Progress and CyberInsider warned the proposal could enable large-scale surveillance if anonymisation methods proved insufficient [1]. The European Commission has cracked down on Big Tech via a slew of legislation to ensure users have more choices and smaller rivals room to compete, though this has triggered concerns from the US government [2]. Google has accumulated roughly €9.71 billion in European antitrust fines since 2017, making the financial calculus on this proceeding material even by Google's standards [1].