Curated by THEOUTPOST
On Sat, 8 Mar, 12:02 AM UTC
4 Sources
[1]
DeepSeek: Everything you need to know about the AI chatbot app | TechCrunch
Chinese AI lab DeepSeek broke into the mainstream consciousness this week after its chatbot app rose to the top of the Apple App Store charts (and Google Play as well). DeepSeek's AI models, which were trained using compute-efficient techniques, have led Wall Street analysts and technologists to question whether the U.S. can maintain its lead in the AI race and whether demand for AI chips can be sustained. But where did DeepSeek come from, and how did it rise to international fame so quickly?

DeepSeek is backed by High-Flyer Capital Management, a Chinese quantitative hedge fund that uses AI to inform its trading decisions. AI enthusiast Liang Wenfeng co-founded High-Flyer in 2015. Wenfeng, who reportedly began dabbling in trading while a student at Zhejiang University, launched High-Flyer Capital Management as a hedge fund in 2019 focused on developing and deploying AI algorithms. In 2023, High-Flyer started DeepSeek as a lab dedicated to researching AI tools separate from its financial business. With High-Flyer as one of its investors, the lab spun off into its own company, also called DeepSeek.

From day one, DeepSeek built its own data center clusters for model training. But like other AI companies in China, DeepSeek has been affected by U.S. export bans on hardware. To train one of its more recent models, the company was forced to use Nvidia H800 chips, a less powerful version of the H100 chip available to U.S. companies.

DeepSeek's technical team is said to skew young. The company reportedly recruits doctoral AI researchers aggressively from top Chinese universities. DeepSeek also hires people without any computer science background to help its tech better understand a wide range of subjects, per The New York Times.

DeepSeek unveiled its first set of models -- DeepSeek Coder, DeepSeek LLM, and DeepSeek Chat -- in November 2023. But it wasn't until last spring, when the startup released its next-gen DeepSeek-V2 family of models, that the AI industry started to take notice. DeepSeek-V2, a general-purpose text- and image-analyzing system, performed well in various AI benchmarks and was far cheaper to run than comparable models at the time. It forced DeepSeek's domestic competition, including ByteDance and Alibaba, to cut the usage prices for some of their models and to make others completely free.

DeepSeek-V3, launched in December 2024, only added to DeepSeek's notoriety. According to DeepSeek's internal benchmark testing, DeepSeek-V3 outperforms both downloadable, openly available models like Meta's Llama and "closed" models that can only be accessed through an API, like OpenAI's GPT-4o.

Equally impressive is DeepSeek's R1 "reasoning" model. Released in January, R1 is claimed by DeepSeek to perform as well as OpenAI's o1 model on key benchmarks. Being a reasoning model, R1 effectively fact-checks itself, which helps it avoid some of the pitfalls that normally trip up models. Reasoning models take a little longer -- usually seconds to minutes longer -- to arrive at solutions than a typical non-reasoning model. The upside is that they tend to be more reliable in domains such as physics, science, and math.

There is a downside to R1, DeepSeek-V3, and DeepSeek's other models, however. Being Chinese-developed AI, they are subject to benchmarking by China's internet regulator to ensure that their responses "embody core socialist values." In DeepSeek's chatbot app, for example, R1 won't answer questions about Tiananmen Square or Taiwan's autonomy.
If DeepSeek has a business model, it's not clear what that model is, exactly. The company prices its products and services well below market value -- and gives others away for free. The way DeepSeek tells it, efficiency breakthroughs have enabled it to maintain extreme cost competitiveness. Some experts dispute the figures the company has supplied, however.

Whatever the case may be, developers have taken to DeepSeek's models, which aren't open source as the phrase is commonly understood but are available under permissive licenses that allow for commercial use. According to Clem Delangue, the CEO of Hugging Face, one of the platforms hosting DeepSeek's models, developers on Hugging Face have created over 500 "derivative" models of R1 that have racked up 2.5 million downloads combined; a minimal sketch of loading one of these checkpoints appears at the end of this article.

DeepSeek's success against larger and more established rivals has been described as both "upending AI" and "over-hyped." The company's success was at least in part responsible for causing Nvidia's stock price to drop by 18% in January, and for eliciting a public response from OpenAI CEO Sam Altman. Microsoft announced that DeepSeek is available on its Azure AI Foundry service, the company's platform that brings together AI services for enterprises under a single banner. When asked about DeepSeek's impact on Meta's AI spending during its first-quarter earnings call, CEO Mark Zuckerberg said spending on AI infrastructure will continue to be a "strategic advantage" for Meta. During Nvidia's fourth-quarter earnings call, CEO Jensen Huang emphasized DeepSeek's "excellent innovation," saying that it and other "reasoning" models are great for Nvidia because they need so much more compute.

At the same time, some companies are banning DeepSeek, and so are entire countries and governments, including South Korea. New York state also banned DeepSeek from being used on government devices.

As for what DeepSeek's future might hold, it's not clear. Improved models are a given. But the U.S. government appears to be growing wary of what it perceives as harmful foreign influence. In March, The Wall Street Journal reported that the U.S. will likely ban DeepSeek on government devices.
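As the Hugging Face figures above suggest, the barrier to experimenting with DeepSeek's openly licensed checkpoints is low. The following is a minimal sketch, not drawn from DeepSeek's own documentation, of loading one of the published R1 distillations with the Hugging Face transformers library; the model ID and prompt are illustrative, and a small distilled variant is used so it can run on modest hardware.

```python
# Minimal sketch: pulling a DeepSeek checkpoint from Hugging Face with transformers.
# The model ID below is one of the small published R1 distillations; the prompt is
# purely illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat-style prompt and generate a short completion.
messages = [{"role": "user", "content": "In one sentence, what is a reasoning model?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)

# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```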
[2]
Worried about DeepSeek? Turns out, Gemini and other US AIs collect more user data
It's an AI privacy showdown. How much data does your favorite chatbot collect? Amid growing concerns over Chinese AI models like DeepSeek, new research suggests that fears may be overblown -- at least when it comes to data privacy. In fact, some popular US-based AI chatbots might be collecting even more of your personal information.

When DeepSeek debuted its flagship open-source AI model in January, the American tech industry was thrown into hysteria. Some embraced the competition -- claiming this is "AI's Sputnik moment" -- but others? Well, not so much. Still, about 12 million users worldwide downloaded the AI chatbot within two days of its launch. Numerous privacy and security concerns quickly surfaced, prompting private and government organizations to ban DeepSeek's use in the US and abroad.

But here's the twist: despite all the frenzy, DeepSeek isn't the biggest data offender out there. Curious to know how your favorite AI chatbot stacks up when it comes to privacy? Let's look at what Surfshark's researchers have found.

Recent research from Surfshark, a well-known VPN provider, found that Google Gemini is the most data-intensive AI chatbot app; DeepSeek comes in fifth out of the 10 most popular applications. The researchers analyzed the privacy details of the chatbots that are most popular on the Apple App Store -- ChatGPT, Gemini, Copilot, Perplexity, DeepSeek, Grok, Jasper, Poe, Claude, and Pi -- then compared the types of data each application collects, whether it collects any data linked to its users, and whether the app includes third-party advertisers.

The investigation led the researchers to determine that Google Gemini collects significantly more personal data than its competitors. The app gathers 22 of 35 user data types, including highly sensitive data such as location data, user content, the device's contacts list, and browsing history -- far outpacing the data collected by the other popular chatbots in the study.

Only Gemini, Copilot, and Perplexity were found to collect precise location data, but about 30% of the chatbots were found to share sensitive user data, such as location data and browsing history, with third parties like data brokers. Thirty percent of these chatbots also track user data. In particular, Copilot, Poe, and Jasper collect data to track their users, meaning the user data collected from the app is linked with third-party data for targeted advertising or ad-measurement metrics. Furthermore, Copilot and Poe collect device IDs for this purpose, and Jasper gathers not only device IDs but also product interaction data, advertising data, and "any other data about user activity in the app," according to Surfshark's researchers.

The controversial DeepSeek app sits in the middle of the pack: not the best, but not the worst. It collects 11 unique data types, predominantly contact information, user content, and diagnostics. Similarly, ChatGPT collects 10 unique types of data, including contact information, user content, identifiers, usage data, and diagnostics. ChatGPT also amasses chat history, though users can opt to use a Temporary Chat instead. Meanwhile, DeepSeek's privacy policy states that users can manage their chat history and may delete it via their settings.

Privacy complaints have plagued DeepSeek's AI chatbot for various reasons, but they are primarily grounded in the idea that the American public is at heightened risk of surveillance, cyber warfare, and other national security threats. DeepSeek's privacy policy states: "The personal information we collect from you may be stored on a server located outside the country where you live. We store the information we collect in secure servers located in the People's Republic of China." The AI arms race between the US and China, and the rapid acceleration of global AI development, fuel profound privacy, security, and ethical risks.
[3]
DeepSeek kicks off the next wave of the AI rush
The IT world is currently in the middle of an "AI rush," and we have just been hit by the next wave with the launch of DeepSeek, an open-source AI-powered chatbot that rivals OpenAI's models. With any new artificial intelligence innovation, we must also discuss its potential data privacy impact. In the wake of Data Privacy Day, now is a good time to take a closer look at the potential of this new AI tool and its related data protection considerations.

AI is fundamentally about handling and enriching data; without data (for now) there is no artificial intelligence. The more data and power it is fed, the more powerful the artificial intelligence becomes. The contextual engines of tools like ChatGPT and now DeepSeek rely on that data as context for their modeling and outcomes. And this raises the question of who controls this data and who has access to it. What data goes into these AI tools, and what biases might already exist inside the box?

DeepSeek claims to process massive amounts of data efficiently at a substantially lower cost than its rivals -- enough to throw stock markets into turmoil. For many years, companies from the United States have dominated the digital innovation space, and in the first two years of the AI rush many of the leading players, such as OpenAI, have also been American. No wonder these incumbents see this AI newcomer from China as a massive threat to their land grab for artificial intelligence, much as in the cloud race and other IT land grabs before it. DeepSeek's entrance is expected to have a democratizing effect on AI and shows that the insular group of Silicon Valley companies is no longer the only one capable of shaping the future of this technology.

The fact that DeepSeek is an open-source AI platform, however, has to be evaluated carefully. While the tool's code is open, its training data sources remain largely opaque, making it difficult to assess potential biases or security risks.

What nevertheless makes DeepSeek so powerful is its unique level of efficiency. The biggest problems Silicon Valley has had in the wake of the AI rush over the last two years are the enormous processing requirements and the consequent energy consumption of all the chatbots and applications that are suddenly in vogue. With the development of DeepSeek, there is the potential for AI to consume more efficiently, and therefore use less energy, because it needs less computing power. The compute curve was approaching an asymptote governed by supply; costs were rising and driving up market caps for companies across the ecosystem. That supply, such as of GPUs, now faces a change in the balance between supply and demand.

But this potential disruption is only one side of the coin. This new AI tool will act as a catalyst, speeding up demand for new applications, and in the short to medium term organizations will likely accelerate AI innovation to the point where energy and compute capacity runs up against the same asymptote yet again. Barring breakthroughs in energy production or computing, such as quantum computing, the ecosystem will stabilize in due course. In the rush to roll out new AI-driven applications as fast as they can, organizations should not forget about solid data protection foundations.

There are various governance, privacy, security, legal, social, and ethical considerations that should be taken into account, alongside the improved efficiency and performance of an AI tool. Organizations have to make sure all these components are in alignment before pushing forward. Those that have done so are ready to leap ahead flexibly and quickly, while those that haven't will find themselves at greater risk than their peers. Each of these dimensions requires not only a framework and deliberation but also clear articulation.

When organizations accelerate the rate of information being fed into their AI tools to supercharge adoption, they have to review the data sets for bias and be transparent about what data they are using and collecting in their models. The final step is to evaluate not only the outputs of their AI tools but also the supply chain that has access to them. From the very minute that data is introduced into the AI world, organizations need to ensure they have the appropriate security controls in place.

So amid this gold-rush mindset around AI, organizations must not forget data protection. Companies that have invested time and effort in their AI governance and data protection mechanisms over the last two years will be able to get to the AI gold first. They will have mature AI policies in place governing who they work with and how they treat their data, along with ethical guidelines and oversight of AI projects that enable the departments eagerly evaluating new AI tools and functionality.
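None of the sources prescribes a specific mechanism, but as a minimal sketch of what one such security control might look like, a simple filter could redact obvious personal identifiers before a prompt ever leaves the organization for an external chatbot; the patterns and function name below are illustrative, not taken from any vendor's tooling.

```python
import re

# Illustrative patterns for common personal identifiers. A production deployment
# would rely on a vetted PII-detection library and policies agreed with legal,
# security, and privacy teams rather than a handful of regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known PII pattern before the prompt is sent out."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Contact Jane at jane.doe@example.com or +1 (555) 867-5309 about the Q3 forecast."
    print(redact(raw))
    # -> Contact Jane at [REDACTED EMAIL] or [REDACTED PHONE] about the Q3 forecast.
```

A pattern-based filter like this is only a starting point; the governance work described above would pair it with vetted detection tooling, access controls, and auditing of where redacted and unredacted data flows.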
[4]
My Employees Are Using DeepSeek. Should I Be Concerned?
DeepSeek, the AI chatbot currently topping app store charts, has rapidly gained popularity for its affordability and functionality, positioning itself as a competitor to OpenAI's ChatGPT. However, recent reports suggest that DeepSeek may come with serious security concerns that business leaders cannot afford to ignore. Here's a breakdown of its pros, cons, and alternatives, so you can make the best AI optimization decisions for your business.

DeepSeek has positioned itself as a powerful AI tool capable of advanced natural language processing and content generation. Developed by a lab backed by China-based hedge fund High-Flyer, DeepSeek has gained traction due to its ability to deliver AI-driven insights at a fraction of the cost of American alternatives (OpenAI's Pro plan has already jumped to $200/month). However, cybersecurity experts have raised alarm bells over its embedded code, which allegedly allows for the direct transfer of user data to the Chinese government. Investigative reporting from ABC News revealed that DeepSeek's code includes links to China Mobile's CMPassport.com, a registry controlled by the Chinese government. This raises significant concerns about potential data surveillance, particularly for U.S.-based businesses handling sensitive intellectual property, customer data, or confidential internal communications.

DeepSeek's security concerns follow a familiar pattern. TikTok, which faced a federal ban earlier this year, was caught in a legal and political tug-of-war due to concerns over its Chinese ownership and potential data security risks. Initially banned on January 19, TikTok was temporarily reinstated following President Trump's intervention, with discussions of a forced sale to American investors still ongoing. Despite ByteDance's reassurances that U.S. user data is protected, national security experts have continued to raise concerns about potential Chinese government access to private information. TikTok's brief ban underscored the heightened scrutiny surrounding foreign-owned digital platforms, particularly those linked to adversarial governments.

Now, DeepSeek is facing similar questions -- only this time, security experts claim to have found direct backdoor access embedded in its code. Unlike TikTok, which denied direct government ties, DeepSeek's alleged backdoor to China Mobile adds a new layer of risk. According to cybersecurity expert Ivan Tsarynny, DeepSeek's digital fingerprinting capabilities extend beyond its platform, potentially tracking users' web activity even after they've closed the app. This means that companies using DeepSeek may be exposing not just individual employee data but also proprietary business strategies, financial records, and client interactions to unauthorized surveillance.

A knee-jerk reaction might be to ban DeepSeek outright, but that may not be the most practical solution. AI tools like DeepSeek offer significant efficiency gains, and the reality is that employees are often quick to adopt new technologies before leadership has time to assess the risks. Instead of an outright ban, leaders should take a strategic approach to AI integration and establish clear best practices for AI use across the organization. AI-powered platforms like DeepSeek offer compelling advantages, but they also introduce serious security risks that business leaders must consider.
Entrepreneurs, CMOs, CEOs and CTOs should balance innovation with vigilance, ensuring that AI tools enhance productivity without compromising data security.
DeepSeek, a Chinese AI chatbot, has rapidly gained popularity and sparked debates about AI efficiency, data privacy, and international tech competition.
DeepSeek, a Chinese AI chatbot backed by High-Flyer Capital Management, has rapidly ascended to prominence in the global AI landscape. The app recently topped charts on both Apple's App Store and Google Play, signaling its growing popularity among users worldwide [1]. This sudden rise has sparked discussions about the future of AI development and the competitive dynamics between Chinese and American tech companies.
DeepSeek's success is largely attributed to its technical capabilities and cost-effectiveness. The company claims that its models, particularly DeepSeek-V3 and R1, outperform both openly available models like Meta's Llama and closed models such as OpenAI's GPT-4o on key benchmarks [1]. What sets DeepSeek apart is its ability to deliver high-performance AI at significantly lower costs, challenging the pricing models of established players in the market.
Despite its technological achievements, DeepSeek has faced scrutiny over data privacy and security concerns. The app's Chinese origin has raised questions about potential data access by the Chinese government, reminiscent of concerns previously directed at TikTok [4]. Some reports suggest that DeepSeek's code includes links to China Mobile's CMPassport.com, a registry controlled by the Chinese government, potentially allowing for direct transfer of user data [4].
Interestingly, a study by Surfshark revealed that some U.S.-based AI chatbots collect more user data than DeepSeek. Google's Gemini, for instance, gathers 22 of 35 user data types, including sensitive information like location data and browsing history [2]. In comparison, DeepSeek collects 11 unique data types, primarily contact information, user content, and diagnostics [2].
DeepSeek's emergence has had far-reaching effects on the AI industry:
Market Disruption: The company's success has influenced stock prices of major tech companies, with Nvidia experiencing an 18% drop in January [1].
Competitive Response: Established players like OpenAI, Microsoft, and Meta have had to reassess their strategies in light of DeepSeek's rise [1].
Efficiency Debate: DeepSeek's ability to deliver high-performance AI with lower computational requirements has reignited discussions about AI efficiency and energy consumption [3].
The rapid adoption of DeepSeek has prompted varied responses globally, ranging from bans by companies and governments, including South Korea and New York state, to integrations such as Microsoft's decision to offer the models on its Azure AI Foundry service [1].
DeepSeek's rise underscores the evolving nature of global AI competition. It highlights the need for balanced approaches to AI adoption that consider efficiency, data privacy, and security. As the AI landscape continues to shift, companies and governments alike must navigate the complex interplay of technological advancement, national security concerns, and ethical considerations in AI development and deployment [3][4].