3 Sources
[1]
DeepSeek may have used Google's Gemini to train its latest model | TechCrunch
Last week, Chinese lab DeepSeek released an updated version of its R1 reasoning AI model that performs well on a number of math and coding benchmarks. The company didn't reveal the source of the data it used to train the model, but some AI researchers speculate that at least a portion came from Google's Gemini family of AI models.

Sam Paech, a Melbourne-based developer who creates "emotional intelligence" evaluations for AI, published what he claims is evidence that DeepSeek's latest model was trained on outputs from Gemini. DeepSeek's model, called R1-0528, prefers words and expressions similar to those Google's Gemini 2.5 Pro favors, said Paech in an X post.

That's not a smoking gun. But another developer, the pseudonymous creator of a "free speech eval" for AI called SpeechMap, noted that the DeepSeek model's traces -- the "thoughts" the model generates as it works toward a conclusion -- "read like Gemini traces."

DeepSeek has been accused of training on data from rival AI models before. In December, developers observed that DeepSeek's V3 model often identified itself as ChatGPT, OpenAI's AI-powered chatbot platform, suggesting that it may have been trained on ChatGPT chat logs.

Earlier this year, OpenAI told the Financial Times it found evidence linking DeepSeek to the use of distillation, a technique for training AI models by extracting data from bigger, more capable ones. According to Bloomberg, Microsoft, a close OpenAI collaborator and investor, detected large amounts of data being exfiltrated through OpenAI developer accounts in late 2024 -- accounts OpenAI believes are affiliated with DeepSeek.

Distillation isn't an uncommon practice, but OpenAI's terms of service prohibit customers from using the company's model outputs to build competing AI.

To be clear, many models misidentify themselves and converge on the same words and turns of phrase. That's because the open web, where AI companies source the bulk of their training data, is becoming littered with AI slop: content farms are using AI to create clickbait, and bots are flooding Reddit and X. This "contamination," if you will, has made it quite difficult to thoroughly filter AI outputs from training datasets.

Still, AI experts like Nathan Lambert, a researcher at the nonprofit AI research institute AI2, don't think it's out of the question that DeepSeek trained on data from Google's Gemini. "If I was DeepSeek, I would definitely create a ton of synthetic data from the best API model out there," Lambert wrote in a post on X. "[DeepSeek is] short on GPUs and flush with cash. It's literally effectively more compute for them."

Partly in an effort to prevent distillation, AI companies have been ramping up security measures. In April, OpenAI began requiring organizations to complete an ID verification process in order to access certain advanced models. The process requires a government-issued ID from one of the countries supported by OpenAI's API; China isn't on the list.

Elsewhere, Google recently began "summarizing" the traces generated by models available through its AI Studio developer platform, a step that makes it more challenging to train performant rival models on Gemini traces. Anthropic said in May that it would start summarizing its own models' traces, citing a need to protect its "competitive advantages."

We've reached out to Google for comment and will update this piece if we hear back.
[2]
Researchers suspect DeepSeek cloned Gemini data
DeepSeek, a Chinese lab, released an updated version of its R1 reasoning AI model last week. The company did not disclose the data sources used for training, but some AI researchers suggest that Google's Gemini family of AI models may have been a source.

Sam Paech, a Melbourne-based developer, claims to have found evidence that DeepSeek's latest model was trained on outputs from Gemini. According to Paech's X post, DeepSeek's model, R1-0528, uses words and expressions similar to those favored by Google's Gemini 2.5 Pro. SpeechMap's pseudonymous creator, who developed a "free speech eval" for AI, mentioned that the DeepSeek model's "thoughts" resemble Gemini traces.

Previously, DeepSeek faced accusations of training on data from competitor AI models. In December, developers noticed that DeepSeek's V3 model often identified itself as ChatGPT, suggesting potential training on ChatGPT chat logs. Earlier in the year, OpenAI informed the Financial Times of evidence connecting DeepSeek to distillation, a technique in which outputs from larger, more capable models are used to train smaller ones. According to Bloomberg, Microsoft detected significant data exfiltration through OpenAI developer accounts in late 2024, accounts OpenAI suspects are linked to DeepSeek. Although distillation is relatively common, OpenAI's terms of service prohibit customers from using its model outputs to create competing AI.

AI companies source training data from the open web, which is increasingly saturated with AI-generated content. This has made it difficult to thoroughly filter AI outputs from training datasets. Nathan Lambert, a researcher at AI2, believes that DeepSeek may have trained on data from Google's Gemini. Lambert stated in an X post, "If I was DeepSeek, I would definitely create a ton of synthetic data from the best API model out there... [DeepSeek is] short on GPUs and flush with cash. It's literally effectively more compute for them."

AI companies are increasing security measures to prevent distillation. In April, OpenAI began requiring organizations to complete an ID verification process to access advanced models; China is not on the list of countries supported by OpenAI's API for this process. Google has begun "summarizing" traces generated by models available through its AI Studio developer platform, and Anthropic announced plans in May to summarize its own models' traces.
[3]
Is DeepSeek's New AI Powered by Google's Gemini Model?
Microsoft detected large-scale data exfiltration from OpenAI accounts linked to DeepSeek in late 2024.

DeepSeek, the famous Chinese AI startup, has shaken the global tech stage once again. Last week, it released an updated version of its R1 reasoning model called R1-0528, which has impressed many with strong results on math and coding benchmarks. However, critics are asking whether DeepSeek's new AI was trained on data from Google's Gemini.

While DeepSeek has not shared the sources behind its training data, experts are picking up on clues. The similarities between R1-0528 and Gemini 2.5 Pro are difficult to ignore, and some developers believe DeepSeek may have used outputs from Google's AI to improve its own.
Chinese AI lab DeepSeek's updated R1 reasoning model shows similarities to Google's Gemini, raising questions about data sources and ethical AI development practices.
Chinese AI lab DeepSeek has recently released an updated version of its R1 reasoning AI model, known as R1-0528, which has demonstrated impressive performance on various math and coding benchmarks [1][2]. However, the release has sparked controversy and speculation within the AI community regarding the sources of its training data.
Several AI researchers and developers have pointed out striking similarities between DeepSeek's R1-0528 and Google's Gemini family of AI models. Sam Paech, a Melbourne-based developer, claims to have found evidence suggesting that DeepSeek's latest model was trained on outputs from Gemini [1][2]. According to Paech, R1-0528 shows a preference for words and expressions similar to those favored by Google's Gemini 2.5 Pro.
Another developer, the creator of the "SpeechMap" AI evaluation tool, noted that the traces or "thoughts" generated by the DeepSeek model closely resemble those of Gemini [1][2]. While these observations are not conclusive proof, they have raised significant questions about DeepSeek's training practices.
This is not the first time DeepSeek has faced accusations of training on data from rival AI models. In December, developers observed that DeepSeek's V3 model often identified itself as ChatGPT, suggesting possible training on ChatGPT chat logs [1][2].
Earlier this year, OpenAI reported evidence linking DeepSeek to the use of distillation, a technique that involves extracting data from larger, more capable models to train smaller ones [1][2]. Bloomberg reported that Microsoft detected large amounts of data being exfiltrated through OpenAI developer accounts in late 2024, which OpenAI believes are affiliated with DeepSeek [1][3].
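To make the technique concrete: distillation in this context typically means prompting a stronger "teacher" model through its public API, collecting the responses as a synthetic dataset, and fine-tuning a smaller "student" model on that dataset. The sketch below illustrates only the collection step, assuming an OpenAI-style chat-completions API; the endpoint URL, model name, prompts, and file path are hypothetical placeholders, not details of DeepSeek's actual pipeline:

```python
# Minimal sketch of distillation-style data collection, assuming an
# OpenAI-style chat-completions API. All names below are illustrative.
import json

import requests

TEACHER_URL = "https://api.example.com/v1/chat/completions"  # hypothetical endpoint
API_KEY = "sk-..."  # placeholder credential

def ask_teacher(prompt: str) -> str:
    """Send one prompt to the teacher model and return its reply text."""
    resp = requests.post(
        TEACHER_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "teacher-model",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Collect teacher outputs into a synthetic instruction-tuning dataset.
prompts = [
    "Prove that the square root of 2 is irrational.",
    "Write a Python function that performs binary search.",
]
with open("synthetic_data.jsonl", "w") as f:
    for p in prompts:
        f.write(json.dumps({"prompt": p, "response": ask_teacher(p)}) + "\n")
```

A student model fine-tuned on enough data gathered this way tends to inherit the teacher's characteristic wording, which is the kind of signal Paech's analysis looks for.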
The allegations against DeepSeek highlight ongoing concerns about ethical practices in AI development. While distillation is not uncommon in the field, OpenAI's terms of service explicitly prohibit customers from using its model outputs to build competing AI systems [1].
Nathan Lambert, a researcher at the nonprofit AI research institute AI2, suggests that it wouldn't be surprising if DeepSeek had indeed trained on data from Google's Gemini, given the company's resources and potential limitations in computing power [1][2].
In response to these challenges, major AI companies have been implementing stricter security measures:
OpenAI now requires organizations to complete an ID verification process to access certain advanced models, with China notably absent from the list of supported countries [1][2].
Google has begun "summarizing" the traces generated by models available through its AI Studio developer platform, making it more difficult to train rival models on Gemini traces [1].
Anthropic announced plans to start summarizing its own models' traces to protect its "competitive advantages" [1].
The controversy surrounding DeepSeek's latest model underscores the growing challenges in AI development and ethics. As the open web becomes increasingly saturated with AI-generated content, it has become more difficult for companies to filter out AI outputs from their training datasets [1][2]. This "contamination" of the data landscape poses significant challenges for the future of AI development and raises important questions about the originality and independence of new AI models.