Curated by THEOUTPOST
On Wed, 5 Feb, 4:04 PM UTC
3 Sources
[1]
DeepSeek Security Testers Uncover 'Pandora's Box' of Cyberthreats | PYMNTS.com
A cybersecurity company says its research has unearthed serious threats from upstart AI model DeepSeek. Calling the artificial intelligence (AI) model a "Pandora's box" of risks, AppSOC on Tuesday (Feb. 11) released findings from tests the company said showed a series of failures. "The DeepSeek-R1 model underwent rigorous testing using AppSOC's AI Security Platform," the company wrote on its blog. "Through a combination of automated static analysis, dynamic tests, and red-teaming techniques, the model was put through scenarios that mimic real-world attacks and security stress tests. The results were alarming." For example, the tests found that DeepSeek made it easy for users to generate viruses and malware. The model showed a 98.8% failure rate when testers asked it to create malware, and a failure rate of 86.7% when asked to produce virus code. In addition, the model showed a 68% failure rate when prompted to generate "responses with toxic or harmful language, indicating poor safeguards," AppSOC said. It also produced hallucinations -- factually incorrect or fabricated information -- 81% of the time. Mali Gorantla, AppSOC's co-founder and chief scientist, told the website Dark Reading that this performance shows that -- in spite of the buzz around DeepSeek's lower cost and open source model -- companies should avoid it in its current version. "For most enterprise applications, failure rates of about 2% are considered unacceptable," he said. "Our recommendation would be to block usage of this model for any business-related AI use." PYMNTS has contacted DeepSeek for comment but has not yet gotten a reply. China-based DeepSeek sent shockwaves through the tech world last month with the release of its newest AI model. Many observers said its cost indicated that it was possible to build AI tools like OpenAI's ChatGPT at a fraction of the price tag quoted by U.S. tech giants. However, the company's model has since come under fire in the U.S., with officials calling for a ban that would keep it off government-owned devices. Meanwhile, White House AI czar David Sacks told Fox News last month that there is "substantial evidence" that DeepSeek used OpenAI's models to build its own technology. And Google's AI chief, Demis Hassabis, said earlier this week the idea that DeepSeek spent just under $6 million to develop an AI model that can compete with those of American tech giants is "exaggerated and a little bit misleading." He argued that DeepSeek "seems to have only reported the cost of the final training round, which is a fraction of the total cost."
[2]
The DeepSeek AI revolution has a security problem
DeepSeek has shown that AI can be developed on a shoestring budget, but their models have significant security vulnerabilities. Recent testing reveals that these cheaper AI models are more susceptible to manipulation compared to their pricier counterparts, posing risks for businesses considering their use.Last week, DeepSeek sent Silicon Valley into a panic by proving you could build powerful AI on a shoestring budget. In some respects, it was too good to be true. Recent testing has shown that DeepSeek's AI models are more vulnerable to manipulation than those of its more expensive competitors from Silicon Valley. That challenges the entire David-vs-Goliath narrative on "democratized" AI that has emerged from the company's breakthrough. The billions of dollars that OpenAI, Alphabet Inc.'s Google, Microsoft Corp. and others have spent on the infrastructure of their own models look less like corporate bloat, and more like a cost of pioneering the AI race and keeping the lead with more secure services. Businesses eager to try the cheap and cheerful AI tool need to think twice about diving in. LatticeFlow AI, a Swiss software firm that measures how compliant AI models are with regulations, says that two versions of DeepSeek's R1 model rank lowest among other leading systems when it comes to cybersecurity. It seems that when the Chinese company modified existing open-source models from Meta Platforms Inc. and Alibaba, known as Llama and Qwen, to make them more efficient, it may have broken some of those models' key safety features in the process. DeepSeek's models were especially vulnerable to "goal hijacking" and prompt leakage, LatticeFlow said. That refers to when an AI can be tricked into ignoring its safety guardrails and either reveal sensitive information or perform harmful actions it's supposed to prevent. DeepSeek could not be reached for comment. When a business plugs its systems into generative AI, it will typically take a base model from a company like DeepSeek or OpenAI and add some of its own data, prompts and logic -- instructions that a business adds to an AI model, such as "don't talk about the company's $5 million budget cut from last year." But hackers could potentially get access to those sensitive orders, says Petar Tsankov, chief executive officer of LatticeFlow AI. Other security researchers have been probing DeepSeek's models and finding vulnerabilities, particularly in getting the models to do things it's not supposed to, like giving step-by-step instructions on how to build a bomb or hotwire a car, a process known as jailbreaking. "[DeepSeek is] completely insecure against all jailbreak approaches, while the OpenAI and Anthropic reasoning models became much safer compared to their older, non-reasoning versions that we tested last year," says Alex Polakov, CEO of Adversa AI, an Israeli AI security firm that tested DeepSeek models. Tsankov says businesses keen to use DeepSeek anyway thanks to its low price can effectively put band-aids on the problem. One approach is to adapt DeepSeek's model with additional training, a process that can cost hundreds of thousands of dollars. Another involves adding a whole new set of instructions ordering the model not to respond to attempts at stealing information. Papering over the cracks like this is cheaper, costing in the thousands, according to Tsankov. When businesses want to use generative AI for low-stakes tasks, like summarizing data reports for internal use, these security issues might be a price worth paying. But more broadly, DeepSeek's safety flaws might knock business confidence at a time of relatively slow progress in implementing AI. Although some 50 large banks ramped up their use of generative AI in 2024 to around 300 applications, fewer than a quarter of the firms were able to report concrete data pointing to cost savings, efficiency gains or higher revenue, according to Evident Insights, a London-based research firm. Generative AI tools are undoubtedly clever and will be transformative. To paraphrase leading AI commentator Ethan Mollick, the dumbest AI tool you'll ever use is the one you're using right now. But implementing them into businesses has been fitful and slow, and part of the reason is security and compliance worries. Surveys of business leaders tend to find that between a third and half of them have security as a top concern for AI. None of this invalidates DeepSeek's achievements. The company has demonstrated that AI development can be done more cheaply -- and by posting its blueprints on the internet, we'll likely see larger AI labs replicate their results to make their own more-efficient AI. But "cheaper" doesn't always mean "better" when it comes to enterprise technology. Security infrastructure is expensive for a reason, and that offers the Silicon Valley giants a moment of vindication. Even in the open-source AI revolution, you get what you pay for.
[3]
The DeepSeek AI Revolution Has a Security Problem
Last week, DeepSeek sent Silicon Valley into a panic by proving you could build powerful AI on a shoestring budget. In some respects, it was too good to be true. Recent testing has shown that DeepSeek's AI models are more vulnerable to manipulation than those of its more expensive competitors from Silicon Valley. That challenges the entire David-vs-Goliath narrative on "democratized" AI that has emerged from the company's breakthrough. The billions of dollars that OpenAI, Alphabet Inc.'s Google, Microsoft Corp. and others have spent on the infrastructure of their own models look less like corporate bloat, and more like a cost of pioneering the AI race and keeping the lead with more secure services. Businesses eager to try the cheap and cheerful AI tool need to think twice about diving in.
Share
Share
Copy Link
DeepSeek's low-cost AI model development has raised concerns about security vulnerabilities, challenging the narrative of democratized AI and highlighting the importance of investment in robust AI infrastructure.
DeepSeek, a China-based AI company, recently sent shockwaves through the tech world by demonstrating that powerful AI models could be developed at a fraction of the cost quoted by U.S. tech giants 12. This breakthrough initially challenged the narrative that only well-funded companies could compete in the AI race, suggesting a potential democratization of AI development.
However, recent security testing has revealed significant vulnerabilities in DeepSeek's AI models, raising concerns about their suitability for enterprise applications 123.
Cybersecurity company AppSOC conducted rigorous testing on the DeepSeek-R1 model, uncovering alarming results 1:
LatticeFlow AI, a Swiss software firm, ranked two versions of DeepSeek's R1 model lowest among leading systems in terms of cybersecurity 2. The models were found to be particularly vulnerable to:
These vulnerabilities could potentially allow hackers to bypass safety guardrails and access sensitive information or perform harmful actions 2.
The security flaws in DeepSeek's models present significant challenges for businesses considering their adoption 23:
The DeepSeek case highlights several important considerations for the AI industry:
As the AI landscape continues to evolve, the balance between innovation, cost-effectiveness, and security remains a critical challenge for developers and businesses alike.
Reference
[2]
[3]
DeepSeek, a Chinese AI startup, is under investigation by multiple countries due to security vulnerabilities and data privacy issues, leading to bans on government devices and probes into its practices.
5 Sources
5 Sources
DeepSeek's AI model, despite its high performance and low cost, has failed every safety test conducted by researchers, making it vulnerable to jailbreak attempts and potentially harmful content generation.
12 Sources
12 Sources
Chinese startup DeepSeek launches a powerful, cost-effective AI model, challenging industry giants and raising questions about open-source AI development, intellectual property, and global competition.
16 Sources
16 Sources
DeepSeek's latest AI model, R1, is reported to be more susceptible to jailbreaking than other AI models, raising alarms about its potential to generate harmful content and its implications for AI safety.
2 Sources
2 Sources
Chinese AI startup DeepSeek has shaken the tech industry with its cost-effective and powerful AI model, causing market turmoil and raising questions about the future of AI development and investment.
49 Sources
49 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved