Curated by THEOUTPOST
On Thu, 30 Jan, 12:02 AM UTC
14 Sources
[1]
DeepSeek R1 is now available on Nvidia, AWS, and GitHub as DeepSeek-based models on Hugging Face shoot past 3,000
Microsoft also has future local deployment plans for DeepSeek

Having taken the AI world by storm in recent weeks, DeepSeek has now made significant strides in expanding the accessibility of its advanced reasoning models. The company has announced its flagship DeepSeek R1 model is now available on multiple platforms, including Nvidia, AWS, and GitHub. DeepSeek's open-source nature allows developers to build models based on its architecture, and, at press time, there were 3,374 DeepSeek-based models available on the collaborative AI-model development platform Hugging Face.

On AWS, DeepSeek-R1 models are now accessible through Amazon Bedrock, which simplifies API integration, and Amazon SageMaker, which enables advanced customization and training, supported by AWS Trainium and Inferentia for optimized cost efficiency. AWS also offers DeepSeek-R1-Distill, a lighter version, through Amazon Bedrock Custom Model Import. This serverless deployment simplifies infrastructure management while maintaining scalability.

Nvidia has integrated DeepSeek-R1 as a NIM microservice, leveraging its Hopper architecture, FP8 Transformer Engine acceleration, and NVLink connectivity to deliver real-time, high-quality responses. The model, which features 671 billion parameters and a 128,000-token context length, utilizes test-time scaling for improved accuracy. Running on an HGX H200 system, DeepSeek-R1 can generate up to 3,872 tokens per second.

Microsoft's Azure AI Foundry and GitHub have further expanded DeepSeek's reach, offering developers a secure and scalable platform to integrate AI into their workflows. Microsoft has also implemented extensive safety measures, including content filtering and automated assessments, and says it plans to offer distilled versions of DeepSeek-R1 for local deployment on Copilot+ PCs in the future.
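The throughput and context-length figures above can be combined for a rough sense of scale. A quick back-of-the-envelope check (throughput and context figures from the article; the workload itself is hypothetical):

```python
TOKENS_PER_SECOND = 3_872   # DeepSeek-R1 throughput on an HGX H200, per Nvidia
CONTEXT_LENGTH = 128_000    # the model's maximum context window, in tokens

# How long would it take to generate a full context's worth of output
# at the quoted peak rate?
seconds = CONTEXT_LENGTH / TOKENS_PER_SECOND
print(f"~{seconds:.0f} seconds to emit {CONTEXT_LENGTH:,} tokens")  # ~33 seconds
```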
DeepSeek-R1 took the world by storm by offering a powerful, cost-efficient AI model with advanced reasoning capabilities and has dethroned popular AI models like ChatGPT. R1 was reportedly trained for just $6 million, with its most advanced versions being about 95% cheaper than comparable models from OpenAI.
[2]
DeepSeek R1 hits Microsoft's AI catalog -- OpenAI won't like this
DeepSeek R1 is now available in the model catalog on Azure AI Foundry and GitHub, expanding Microsoft's portfolio of over 1,800 AI models. The model is designed for seamless integration into enterprise systems while ensuring compliance with security and responsible AI standards. DeepSeek R1 offers a powerful and cost-efficient AI model that enables developers to harness advanced AI capabilities with minimal infrastructure investment. It facilitates rapid experimentation, iteration, and integration, enhancing the speed at which enterprises can deploy AI solutions. Developers benefit from built-in evaluation tools, allowing them to compare outputs, benchmark performance, and scale their AI applications effectively.

Microsoft emphasizes the importance of safety and security in AI development. DeepSeek R1 has undergone extensive safety evaluations, including automated assessments of its behavior and security reviews, to mitigate potential risks. Azure AI Content Safety provides built-in content filtering by default, along with opt-out options. Additionally, the Safety Evaluation System enables customers to test their applications prior to deployment, ensuring a secure environment for AI solutions.

To use DeepSeek R1 from the model catalog, users must sign up for an Azure account if they do not already have one. After locating DeepSeek R1 in the catalog, they can open the model card, click deploy, and receive the inference API and key, along with access to the playground. Deployment is swift, taking less than a minute to generate the necessary API and key. Users can also explore additional resources and guides on GitHub.

Future developments include the release of distilled versions of the DeepSeek R1 model for local use on Copilot+ PCs, as announced in the Windows Developer blog post.
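Once deployment hands you the inference endpoint and key, calling the model is an ordinary HTTP request. A minimal sketch of assembling that request, assuming the deployment exposes the common OpenAI-style chat-completions payload shape (the endpoint URL, path, and field names below are illustrative assumptions; use the exact values shown on your deployment's model card):

```python
import json

def build_chat_request(endpoint: str, api_key: str, user_prompt: str):
    """Assemble the URL, headers, and JSON body for a chat request.

    Assumption: the deployed model accepts an OpenAI-style
    chat-completions payload; substitute the schema from your model card.
    """
    url = f"{endpoint}/chat/completions"  # hypothetical path
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",  # the key issued at deployment
    }
    body = json.dumps({
        "messages": [{"role": "user", "content": user_prompt}],
        "max_tokens": 256,
    })
    return url, headers, body

url, headers, body = build_chat_request(
    "https://my-deployment.example.invalid",  # placeholder endpoint
    "YOUR-API-KEY",
    "Summarize test-time scaling in one sentence.",
)
print(url)
```

From here, any HTTP client (for example `urllib.request` or `requests`) can send the request and parse the JSON response.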
During a recent earnings call, Microsoft CEO Satya Nadella discussed the AI arena, noting that advancements in AI, like those from DeepSeek, exemplify ongoing innovation in the sector. He acknowledged the rapid evolution of AI technology, comparing its trajectory to past shifts in computing paradigms. Despite the emergence of competitors, Nadella expressed confidence in the positive implications for broader market consumption and application development, particularly as costs for inference computing decline.
[3]
Microsoft Makes DeepSeek R1 Available on Azure and GitHub
Customers will soon be able to run DeepSeek R1's distilled models locally on Copilot+ PCs. DeepSeek R1 has been added to the Azure AI Foundry and GitHub model catalogue, expanding the platform's AI portfolio. The model is now accessible for businesses looking to integrate advanced AI solutions while maintaining security and reliability standards. Microsoft announced on its official blog that DeepSeek R1 is available on its enterprise-ready Azure AI Foundry platform, which supports over 1,800 models. "Bringing models like DeepSeek R1 to Azure AI Foundry allows businesses to scale AI-powered applications with speed and security," said Asha Sharma, corporate vice president of AI Platform at Microsoft. "Customers will soon be able to run DeepSeek R1's distilled models locally on Copilot+ PCs, as well as on the vast ecosystem of GPUs available on Windows. Beyond Copilot+ PCs, the most powerful AI workstation for local development is a Windows PC running WSL2, powered by NVIDIA RTX GPUs," said Microsoft chief Satya Nadella during the recent earnings call on Wednesday. He further added that DeepSeek has introduced real innovations, some of which even OpenAI discovered in o1. "Now, of course, those innovations are becoming commoditised and will be widely used," he said. According to DeepSeek, R1 is a cost-efficient AI model that enables developers to incorporate AI capabilities with minimal infrastructure investment. Azure AI Foundry provides built-in model evaluation tools, allowing users to test, benchmark, and deploy AI applications efficiently. Microsoft emphasised its commitment to AI safety and compliance. DeepSeek R1 has undergone red teaming, security reviews, and automated behaviour assessments. Azure AI Content Safety includes built-in content filtering, with options for users to opt out. The Safety Evaluation System helps businesses test AI applications before deployment. 
To use DeepSeek R1, developers can search for the model in the Azure AI Foundry catalogue, access the model card, and deploy it to obtain an inference API and key. Users can test the model in a playground environment before integrating it into applications. DeepSeek R1 is also available on GitHub, where developers can find additional resources and integration guides. Microsoft stated that future versions of the model would be available in distilled formats for local deployment on Copilot+ PCs. This follows Microsoft and OpenAI's investigation into whether the Chinese AI startup used OpenAI's output to train its model. A recent report states that OpenAI has found evidence suggesting DeepSeek used its proprietary models to develop an open-source competitor, raising concerns about a possible intellectual property breach.
[4]
Microsoft adds DeepSeek R1 to Azure AI Foundry and GitHub
Microsoft has added DeepSeek R1 to Azure AI Foundry and GitHub, showing that even a lumbering tech giant can be nimble when it needs to be. DeepSeek R1 is only one of more than 1,800 models in the Azure AI Foundry catalog yet the speed at which it was brought on board is of note. Its inclusion will no doubt ruffle a few feathers among the C-suite at OpenAI. OpenAI, which is heavily backed by Microsoft, has claimed it has evidence that China's DeepSeek used its model for training. R1, the supposedly more efficiently trained LLM that only emerged last week, wreaked havoc on Monday by battering the share price of many US tech corporations as investors started to question if billions of dollars of GPUs are actually needed to train AI. Microsoft said of the latest addition to its Azure AI Foundry cloudy portfolio: "DeepSeek R1 has undergone rigorous red teaming and safety evaluations, including automated assessments of model behavior and extensive security reviews to mitigate potential risks." The plan is to also bring Distilled DeepSeek R1 models to Copilot+ PCs, with the first release, DeepSeek-R1-Distill-Qwen-1.5B, being made available in the AI toolkit and the 7B and 14B variants following soon. Qualcomm Snapdragon X-powered kit is the first to support the NPU-optimized versions, with Intel Core Ultra 200V coming later. The availability of cloud-hosted DeepSeek R1 on Azure AI Foundry made local versions inevitable. Microsoft said: "With the speed and power characteristics of the NPU-optimized version of the DeepSeek R1 models, users will be able to interact with these groundbreaking models entirely locally." On the swift addition of DeepSeek R1 to Microsoft's platform, the US biz said: "This rapid accessibility - once unimaginable just months ago - is central to our vision for Azure AI Foundry: bringing the best AI models together in one place to accelerate innovation and unlock new possibilities for enterprises worldwide." 
Microsoft might have moved rapidly to add R1 to its catalog, but it is unclear whether it has done anything about the censorship of DeepSeek's models. The Reg asked DeepSeek about Tiananmen Square and in a terse reply, it stated: "Sorry, that's beyond my current scope. Let's talk about something else." The chatbot was, however, more than happy to spill the beans on the US Capitol attack on January 6, 2021.
[5]
Microsoft says you can run DeepSeek R1 right on your laptop
Microsoft has made an interesting move in being quick to support the DeepSeek R1 reasoning model on its Azure cloud computing platform and GitHub tool for developers, not long after setting its sights legally on the China-based company. Microsoft has announced that it will make the new DeepSeek AI model available in "NPU-optimized" versions aligned with Windows 11 Copilot+ PCs and compatible with the components they run. It will first roll out a version for Qualcomm Snapdragon X devices, then one for Intel Lunar Lake PCs, and finally a variant for AMD Ryzen AI 9 processors. Additionally, Microsoft will add the DeepSeek-R1-Distill-Qwen-1.5B model to its Microsoft AI Toolkit for developers, and will also make available 7B and 14B versions.

"These optimized models let developers build and deploy AI-powered applications that run efficiently on-device, taking full advantage of the powerful NPUs in Copilot+ PCs," Microsoft said in a blog post. Notably, Microsoft has requirements for Windows 11 Copilot+ PCs to process AI models, which include at least 256GB of storage, 16GB of RAM, and an NPU capable of at least 40 TOPS. Windows Central noted some PCs with older NPUs may not be able to run these models locally.

The move to support DeepSeek may have come because Microsoft is looking to lessen its reliance on OpenAI for its artificial intelligence needs, while working on its proprietary models and introducing more third-party models to help power its Microsoft 365 Copilot AI product, according to Reuters. The swift support from Microsoft could be a win for DeepSeek by way of quelling privacy and data-sharing concerns when using products running the model. The company has confirmed that its data servers are in China, which could be a challenge for some U.S. users, the publication added.
Meanwhile, reports indicate that Microsoft is investigating whether DeepSeek used illegal practices to train the very models it is now planning to host on its own platform. Microsoft is a primary investor in OpenAI, and the company is now on alert after a White House official stated it was "possible" DeepSeek "stole intellectual property from the United States." Prior research suggested that DeepSeek may have used a process called distillation to extract knowledge from OpenAI's models. The process entails two models having a teacher-student dynamic, so one can learn from the other's outputs. DeepSeek has marketed its model as open source with a low operating cost, notably trained using lower-powered Nvidia chips. DeepSeek is taking the tech world by storm, and we haven't seen the last of it. In addition to criticisms around DeepSeek's censorship, users are already finding ways to jailbreak the AI model.
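The teacher-student dynamic described above can be sketched numerically: the teacher's output probabilities become soft training targets that the student learns to match. A minimal, self-contained illustration with toy logits (no real models involved; the numbers are made up for demonstration):

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw logits into a probability distribution.
    A higher temperature softens the distribution, exposing more of the
    teacher's information about near-miss answers."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): how far the student's distribution q is from the
    teacher's p. Minimizing this is the core of a distillation loss."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher_logits = [4.0, 1.5, 0.2]  # toy values standing in for a teacher model
student_logits = [3.0, 2.0, 0.5]  # toy values standing in for a student model

teacher_probs = softmax(teacher_logits, temperature=2.0)
student_probs = softmax(student_logits, temperature=2.0)
loss = kl_divergence(teacher_probs, student_probs)
print(f"distillation loss: {loss:.4f}")
```

In actual training, this loss would be computed over many prompts and backpropagated through the student's weights; the sketch shows only the target-matching objective itself.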
[6]
Microsoft rolls out DeepSeek's AI model on Azure
(Reuters) - Microsoft has made Chinese startup DeepSeek's R1 artificial intelligence model available on its Azure cloud computing platform and GitHub tool for developers, the U.S. company said on Wednesday. The AI model will be available in the model catalog on the platforms and will join more than 1,800 models that Microsoft is offering. DeepSeek last week launched a free AI assistant that it says uses less data at a fraction of the cost of incumbent services. By Monday, the assistant had overtaken U.S. rival ChatGPT in downloads from Apple's App Store, sparking panic among tech stock investors. The move comes as Microsoft has been looking to reduce its dependence on ChatGPT maker OpenAI. The company has been working to add internal and third-party AI models to power its flagship AI product Microsoft 365 Copilot, Reuters reported last month. Microsoft also said customers would soon be able to run the R1 model locally on their Copilot+ PCs, a move that could potentially ease privacy and data-sharing concerns over the use of the model. DeepSeek has said it stores user information in servers in China, which could be a sticking point in its U.S. adoption. Meanwhile, Microsoft and OpenAI are probing if data output from OpenAI's technology was obtained in an unauthorized manner by a group linked to DeepSeek, Bloomberg News reported on Tuesday. DeepSeek bursting onto the AI scene has prompted rivals to respond, with OpenAI boss Sam Altman saying the company will "pull up some releases" - following which it released a tailored version of ChatGPT for U.S. government agencies on Tuesday. China's Alibaba also released a new version of its Qwen 2.5 AI model on Wednesday, an unusual timing, considering it marked the first day of the Lunar New Year. (Reporting by Deborah Sophia in Bengaluru; Editing by Maju Samuel)
[7]
In a surprise move, Microsoft announces DeepSeek R1 is coming to Copilot+ PCs - here's how to get it
DeepSeek R1 will have three Copilot+ versions that will roll out over time. DeepSeek has seriously shaken up the AI world with an LLM that is seemingly cheaper to train, more power-efficient, and yet equally intelligent compared to its rivals. While Meta, Google, OpenAI and others scramble to decipher how DeepSeek's R1 model got so impressive out of nowhere - with OpenAI even claiming it copied ChatGPT to get there - Microsoft is taking the 'if you can't beat them, join them' approach instead.

Microsoft has announced that, following the arrival of DeepSeek R1 on Azure AI Foundry, you'll soon be able to run an NPU-optimized version of DeepSeek's AI on your Copilot+ PC. This feature will roll out first to Qualcomm Snapdragon X machines, followed by Intel Core Ultra 200V laptops, and AMD AI chipsets. It'll start by making DeepSeek-R1-Distill-Qwen-1.5B available in the Microsoft AI Toolkit for developers, before later unlocking the more powerful 7B and 14B versions. While these aren't as impressive as the 32B and 70B variants also at its disposal, the 14B and lower versions of DeepSeek can run on-device. This mitigates one of the main concerns with DeepSeek - that data shared with the AI could end up on unsecured foreign servers - with Microsoft adding that "DeepSeek R1 has undergone rigorous red teaming and safety evaluations" to further reduce possible security risks.

To start using DeepSeek's on-device Copilot+ build once it's available, you'll need an Azure account - you can sign up on Microsoft's official website if you don't already have one. Your next step is to boot up Azure AI Foundry and search for DeepSeek R1. Then hit 'Check out model' on the Introducing DeepSeek R1 card, before clicking 'Deploy', then 'Deploy' again in the window that pops up. After a few moments the Chat Playground option should open up, and you can start chatting away with DeepSeek on-device.
If you haven't yet used DeepSeek, two big advantages you'll find when you install it are that it's free (at least for now), and that it shows you its 'thinking' as it develops its responses. Other AI models, like ChatGPT, go through the same thought process but don't show it to you, meaning you have to refine your prompts through trial and error until you get what you want. Because you can see its process, and where it might have gone off on the wrong track, you can more easily and precisely tweak your DeepSeek prompts to achieve your goals. As the 7B and 14B variants unlock, you should see DeepSeek R1's Azure model improve, though if you want to test it out you might want to do so sooner rather than later. Given Microsoft's deep partnership with OpenAI, we expect it won't treat this emerging rival kindly if DeepSeek does turn out to have been copied from ChatGPT - potentially removing it from Azure, a step it may have no choice about if the AI faces bans in the US, Italy, and other regions.
[8]
Microsoft brings a DeepSeek model to its cloud | TechCrunch
Microsoft's close partner and collaborator, OpenAI, might be suggesting that DeepSeek stole its IP and violated its terms of service. But Microsoft still wants DeepSeek's shiny new models on its cloud platform. Microsoft today announced that R1, DeepSeek's so-called reasoning model, is available on the Azure AI Foundry service, Microsoft's platform that brings together a number of AI services for enterprises under a single banner. In a blog post, Microsoft said that the version of R1 on Azure AI Foundry has "undergone rigorous red teaming and safety evaluations," including "automated assessments of model behavior and extensive security reviews to mitigate potential risks." In the near future, Microsoft said, customers will be able to use "distilled" flavors of R1 to run locally on Copilot+ PCs, Microsoft's brand of Windows hardware that meets certain AI readiness requirements. "As we continue expanding the model catalog in Azure AI Foundry, we're excited to see how developers and enterprises leverage [...] R1 to tackle real-world challenges and deliver transformative experiences," continued Microsoft in the post.

The addition of R1 to Microsoft's cloud services is a curious one, considering that Microsoft reportedly initiated a probe into DeepSeek's potential abuse of its and OpenAI's services. According to security researchers working for Microsoft, DeepSeek may have exfiltrated a large amount of data using OpenAI's API in the fall of 2024. Microsoft, which also happens to be OpenAI's largest shareholder, notified OpenAI of the suspicious activity, per Bloomberg. But R1 is the talk of the town, and Microsoft may have been persuaded to bring it into its cloud fold while it still holds allure. It's unclear whether Microsoft made any modifications to the model to improve its accuracy - or to combat its censorship.
According to a test by information-reliability organization NewsGuard, R1 provides inaccurate answers or non-answers 83% of the time when asked about news-related topics. A separate test found that R1 refuses to answer 85% of prompts related to China, possibly a consequence of the government censorship to which AI models developed in the country are subject.
[9]
Microsoft embraces OpenAI competitor DeepSeek on its AI hosting service
Fresh on the heels of a controversy in which ChatGPT maker OpenAI accused the Chinese company behind DeepSeek R1 of using its AI model outputs against its terms of service, OpenAI's largest investor Microsoft announced on Wednesday that it will now host DeepSeek R1 on its Azure cloud service. DeepSeek R1 has been the talk of the AI world for the past week because it is a freely available simulated reasoning model that reportedly matches OpenAI's o1 in performance -- while allegedly being trained for a fraction of the cost. Azure allows software developers to rent computing muscle from machines hosted in Microsoft-owned data centers, as well as rent access to software that runs on them. "R1 offers a powerful, cost-efficient model that allows more users to harness state-of-the-art AI capabilities with minimal infrastructure investment," wrote Microsoft Corporate Vice President Asha Sharma in a news release. DeepSeek R1 runs at a fraction of the cost of o1, at least through each company's own services. Comparative prices for R1 and o1 were not immediately available on Azure, but DeepSeek lists R1's API cost as $2.19 per million output tokens, while OpenAI's o1 costs $60 per million output tokens. That's a massive discount for a model that performs similarly to o1-pro in various tasks. On its face, the decision to host R1 on Microsoft servers is not unusual: The company offers access to over 1,800 models on its Azure AI Foundry service with the hopes of allowing software developers to experiment with various AI models and integrate them into their products. In some ways, whatever model they choose, Microsoft still wins because it's being hosted on the company's cloud service.
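The quoted per-token prices translate directly into per-workload costs. A quick back-of-the-envelope comparison (the two prices come from the article; the monthly token count is a hypothetical workload):

```python
R1_PRICE = 2.19    # USD per million output tokens (DeepSeek's listed API price)
O1_PRICE = 60.00   # USD per million output tokens (OpenAI's o1)

def output_cost(tokens: int, price_per_million: float) -> float:
    """Cost in USD for generating `tokens` output tokens."""
    return tokens / 1_000_000 * price_per_million

# Hypothetical workload: 50 million output tokens per month.
tokens = 50_000_000
print(f"R1: ${output_cost(tokens, R1_PRICE):,.2f}")  # $109.50
print(f"o1: ${output_cost(tokens, O1_PRICE):,.2f}")  # $3,000.00
print(f"o1 is {O1_PRICE / R1_PRICE:.1f}x more expensive per output token")
```

Note this compares published API list prices for output tokens only; input-token pricing and tier differences would change the exact ratio.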
[10]
NPU-Optimized Versions Of DeepSeek's R1 Model Will Now Be Running On Copilot+ Windows 11 PCs, With The More Advanced Variants Arriving Later
DeepSeek's R1 might be at the center of an investigation initiated by Microsoft as to whether or not the AI model was trained on OpenAI's data outputs, but that has not stopped the software giant from bringing an NPU-optimized version of the model to its Copilot+ PCs. The company made an announcement a short while ago, stating that the first release will be DeepSeek-R1-Distill-Qwen-1.5B and will be available via the Microsoft AI Toolkit for developers. The Washington-based firm intends to bring more advanced models down the road, but for now, it is taking small steps.

Where U.S. companies are sweating over DeepSeek's popularity, Microsoft has embraced it and its R1 AI model, and in its blog post has announced that an optimized version will be arriving for machines powered by Qualcomm's Snapdragon X chipsets, followed by Intel's Core Ultra 200V and others. After DeepSeek-R1-Distill-Qwen-1.5B, Microsoft intends to introduce the 7B and 14B variants soon, with developers being able to take full advantage of them. To get started, Microsoft has provided instructions on how to get DeepSeek running on your machine:

"To see DeepSeek in action on your Copilot+ PC, simply download the AI Toolkit VS Code extension. The DeepSeek model optimized in the ONNX QDQ format will soon be available in AI Toolkit's model catalog, pulled directly from Azure AI Foundry. You can download it locally by clicking the "Download" button. Once downloaded, experimenting with the model is as simple as opening the Playground, loading the "deepseek_r1_1_5" model, and sending it prompts."

The company has made other optimizations to ensure that DeepSeek's R1 runs efficiently and locally on NPU-based hardware, and if you want to check out Microsoft's efforts, you can click on the source link below. There is no update on whether the company will drop its probe into DeepSeek's AI model allegedly being trained on OpenAI outputs, but we will provide updates on that at a later time, so stay tuned.
[11]
Microsoft ports DeepSeek's AI to Copilot+ PCs, and their NPUs
Only Snapdragon PCs have access, for now. Intel and AMD will come later. Microsoft has taken the hot new AI model, DeepSeek, and made it available for Copilot+ PCs -- and on the NPU, no less.

If it seems like every other week something important is happening in the AI space -- well, that's not too far from the truth. And while it's sometimes difficult to tell which developments are the most significant, the recent release of Chinese developer DeepSeek's model has shaken the AI industry deeply, specifically because the cost to train it was, according to the company, much less than that of other Western models. That sent American stocks plunging, including those involved in the chip and AI industries. DeepSeek was also in the news for leaking sensitive information, and because the company's mobile app slurped up user data. But DeepSeek also released its model on GitHub as open source, meaning that others could examine its source code and sanitize it. Microsoft has done just that.

Running DeepSeek isn't as easy as simply navigating to a website, like Google's Gemini or Microsoft's Copilot. Microsoft has released the DeepSeek model to use in the cloud, but it's DeepSeek for the PC that's more interesting. Microsoft has "distilled" the DeepSeek model for use on PCs, specifically addressing Copilot+ PCs that use the Qualcomm Snapdragon X platform. (Intel's Core Ultra 200 chips will be supported later.) According to a blog post, Microsoft has already released a compact version of DeepSeek, DeepSeek-R1-Distill-Qwen-1.5B, with 7-billion and 14-billion-parameter models scheduled to release soon. AI models with fewer parameters are typically less precise but run more quickly. LLMs and AI chatbots typically run on the CPU or GPU, because of the prevalence of both. Microsoft's use of the NPU puts that logic on the hardware it was designed for: powerful yet efficient AI processing.
"These optimized models let developers build and deploy AI-powered applications that run efficiently on-device, taking full advantage of the powerful NPUs in Copilot+ PCs," Microsoft said. To see DeepSeek in action on your Copilot+ PC, you'll need to download Microsoft's AI Toolkit VS Code extension, Microsoft said. You'll then need to download and install the "deepseek_r1_1_5" model. Microsoft has quietly worked to bring AI locally to your PC, especially with the ongoing work behind Phi Silica and the promise of AI functions running locally within Windows itself. The DeepSeek port shows that it's possible.
[12]
Microsoft is now hosting DeepSeek R1, even though it suspects it of illegal data abuse
A hot potato: Microsoft is raising eyebrows after announcing that it will host DeepSeek R1 on its Azure cloud service. The decision comes just days after OpenAI accused DeepSeek of violating its terms of service by allegedly using ChatGPT outputs to train its system, allegations Microsoft is currently investigating.

DeepSeek R1 began making waves in the AI world when it launched last week. Chinese developer DeepSeek touted it as a freely available simulated reasoning model that rivals OpenAI's o1 in performance but at a fraction of the training cost. While OpenAI has priced its o1 model at $60 per million output tokens, DeepSeek lists R1 at just $2.19 per million - a remarkable contrast that sank stocks of AI-adjacent companies like Nvidia.

Microsoft's decision to host R1 on Azure is not too unusual on its surface. The tech giant already offers over 1,800 AI models through its Azure AI Foundry, giving developers access to a variety of AI systems for experimentation and integration. Microsoft doesn't discriminate, since it profits from any AI platform operating on its cloud infrastructure. However, the decision seems ironic since OpenAI has spent the last week aggressively criticizing the model for distilling ChatGPT outputs. OpenAI claims the AI startup violated its terms of service by using "distillation," as reported by Fox News. Distillation is when developers train an AI model using outputs from a more advanced system. Suspicions arose after users discovered that an earlier model, DeepSeek V3, sometimes referred to itself as "ChatGPT," suggesting that DeepSeek used OpenAI-generated data to fine-tune its system. The move also seems somewhat hypocritical, considering Microsoft security researchers reportedly launched an ethics probe into DeepSeek on Wednesday. Anonymous sources claim that the investigation focuses on whether DeepSeek extracted substantial amounts of data through OpenAI's API during the fall of 2024.
Despite the frustrations with DeepSeek, OpenAI CEO Sam Altman has publicly welcomed the competition. In a tweet on Monday, Altman acknowledged R1's cost efficiency, calling it "an impressive model" but vowing that OpenAI would soon deliver "much better results." Analysts expect the company may release a new model, o3-mini, as early as today.

OpenAI's outcry over DeepSeek's data practices is notable given its own history of alleged data abuse. The New York Times has filed a lawsuit against OpenAI and Microsoft, accusing them of using copyrighted journalism without permission. OpenAI has also struck deals with publishers and online communities, such as The Associated Press, to access user-generated data for training. The whole situation exposes the AI industry's hypocritical relationship with data ownership. Investment firm Andreessen Horowitz, another OpenAI investor, argued in a 2023 legal filing that training AI models should not be considered copyright infringement, as they merely "extract information" from existing works. If OpenAI truly believes in that principle, then DeepSeek is just playing by the same rules. The current landscape of the AI industry is more or less a free-for-all. We have no laws on the books to govern AI directly, and those laws that affect it indirectly, like copyright and trade laws, are twisted into a favorable interpretation by the AI firms that are breaking them.
[13]
Inside Microsoft's quick embrace of DeepSeek
Distilled R1 models can now run locally on Copilot Plus PCs, starting with Qualcomm Snapdragon X first and Intel chips later. This brings a lot more AI capabilities to Windows, and it's something Microsoft was already working on with its Phi Silica language models. Sources tell me Microsoft is also looking at the prospect of bringing R1 to some of its Copilot tools for businesses. Microsoft is currently anticipating that more businesses will use its AI tools in the coming months, particularly its AI agent capabilities. Models like R1 could help Microsoft sell more access to Copilot inside business apps, low-code platforms, and other industry-specific tools at a lower cost to businesses.
[14]
Microsoft Snapdragon X Copilot+ PCs get local DeepSeek-R1 support -- Intel, AMD in the works
Microsoft just announced that it will release NPU-optimized versions of DeepSeek-R1, allowing it to take advantage of AI-optimized hardware found in Copilot+ PCs. According to the Windows Blog, the feature will first arrive on Qualcomm Snapdragon X PCs, to be followed by Intel Core Ultra 200V (Lunar Lake) and other chips. The initial release will feature DeepSeek-R1-Distill-Qwen-1.5B, which an AI research team from UC Berkeley has found is the smallest model that delivers correct answers, but larger models featuring 7 billion and 14 billion parameters will arrive shortly thereafter.

DeepSeek's optimizations meant that it needed 11x less compute versus its Western competitors, making it a great model to run on consumer devices. The release also uses Windows Copilot Runtime, so developers can call on-device DeepSeek APIs within their apps. Furthermore, Microsoft claims that this NPU-optimized version of DeepSeek will deliver "very competitive time to first token and throughput rates, while minimally impacting battery life and consumption of PC resources." This means that Copilot+ PC users can expect the power and performance of competing models like Meta's Llama 3 and OpenAI's o1 while still getting great battery life.

That said, DeepSeek's availability on Copilot+ PCs is geared more toward programmers and developers than consumers. Perhaps Microsoft is using it to encourage them to build more apps that take advantage of AI PCs, as many people still don't see the need for them, and market research suggests users often purchase these devices only because they're the main option available nowadays. Another thing that got us curious is Microsoft's preferential treatment for Qualcomm Snapdragon X PCs at this time. While it launched the Copilot+ branding with these chips last July, the latest mainstream Intel and AMD laptops now also have built-in NPUs.
AMD has even released instructions for running it on Ryzen AI CPUs and Radeon GPUs, and the company claims that its Radeon RX 7900 XTX runs DeepSeek better than Nvidia's RTX 4090. Whatever the case, we're excited about the possibilities DeepSeek unlocks for AI. Since it's open source, nearly anyone can download the model and run it locally, and others can build upon the advancements and optimizations the original model has put into place.
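As a concrete illustration of local use, here is a minimal sketch of querying a locally hosted DeepSeek-R1 distill through an OpenAI-compatible chat endpoint, the kind exposed by popular local runners such as Ollama or LM Studio. The URL, port, and model tag below are assumptions about a typical local setup, not anything specified by Microsoft or AMD:

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str, temperature: float = 0.6) -> dict:
    """Build an OpenAI-style chat-completion payload for a local runner."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "stream": False,
    }

def ask_local_deepseek(prompt: str,
                       url: str = "http://localhost:11434/v1/chat/completions",
                       model: str = "deepseek-r1:1.5b") -> str:
    """POST the payload to a local OpenAI-compatible server and return the reply.
    The URL/port (Ollama's default) and model tag are assumptions about your setup."""
    body = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

To actually get a reply, start a runner serving the distilled model on the assumed port and call `ask_local_deepseek("Why is the sky blue?")`.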
Microsoft integrates DeepSeek R1 into its Azure AI Foundry and GitHub, expanding AI model accessibility while raising questions about competition and intellectual property in the AI industry.
Microsoft has made a significant move in the AI landscape by integrating DeepSeek R1, a powerful and cost-efficient AI model, into its Azure AI Foundry and GitHub platforms. This integration expands Microsoft's AI portfolio to over 1,800 models, offering developers and enterprises enhanced access to advanced AI capabilities 1 2.
DeepSeek R1 is now accessible through multiple platforms, including Nvidia, AWS, and GitHub. On AWS, it's available via Amazon Bedrock and SageMaker, offering API integration and advanced customization options 1. Nvidia has integrated DeepSeek-R1 as a NIM microservice, leveraging its Hopper architecture for high-performance computing 1.
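As a hedged sketch of what the Bedrock path looks like from code, the snippet below builds a request body and invokes the model with boto3's `bedrock-runtime` client. The model ID and the request/response field names are assumptions for illustration; check the model's listing in the Bedrock console for the exact identifier and schema:

```python
import json

def build_r1_body(prompt: str, max_tokens: int = 512, temperature: float = 0.6) -> str:
    """Serialize a request body for an R1-style text model on Bedrock.
    Field names follow a common Bedrock text-generation schema and may need
    adjusting to the exact DeepSeek-R1 listing."""
    return json.dumps({
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    })

def invoke_r1(prompt: str, model_id: str = "us.deepseek.r1-v1:0",
              region: str = "us-east-1") -> str:
    """Call the model through the Bedrock runtime. Requires AWS credentials;
    the default model_id and response shape are assumptions."""
    import boto3  # deferred so the payload helper works without the SDK installed
    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.invoke_model(modelId=model_id, body=build_r1_body(prompt))
    return json.loads(response["body"].read())["choices"][0]["text"]
```

Keeping the payload builder separate from the SDK call makes it easy to unit-test the request shape without AWS credentials.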
Microsoft has announced plans to offer distilled versions of DeepSeek R1 for local deployment on Copilot+ PCs. This move aims to bring AI capabilities directly to users' devices, with support for various hardware configurations including Qualcomm Snapdragon X, Intel Core Ultra, and AMD Ryzen AI processors 3 5.
Microsoft emphasizes the importance of safety in AI development. DeepSeek R1 has undergone extensive safety evaluations, including automated assessments and security reviews. Azure AI Content Safety features built-in content filtering, and a Safety Evaluation System allows customers to test their applications before deployment 2.
The integration of DeepSeek R1 into Microsoft's ecosystem has significant implications for the AI industry. It potentially challenges OpenAI's position, as DeepSeek R1 offers comparable capabilities at a fraction of the training cost 1. This move could accelerate innovation and competition in the AI sector 2.
The rapid adoption of DeepSeek R1 by Microsoft has raised questions about intellectual property and training practices. OpenAI, backed by Microsoft, has claimed evidence that DeepSeek used its models for training, potentially breaching intellectual property rights 3 4.
The introduction of DeepSeek R1 has already impacted the tech industry, affecting stock prices of major US tech corporations. This development challenges the notion that extensive GPU resources are necessary for training advanced AI models 4. As the AI landscape continues to evolve, the integration of DeepSeek R1 into major platforms signals a potential shift in the dynamics of AI development and deployment.
DeepSeek's open-source R1 model challenges OpenAI's o1 with comparable performance at a fraction of the cost, potentially revolutionizing AI accessibility and development.
6 Sources
DeepSeek R1, a new open-source AI model, demonstrates advanced reasoning capabilities comparable to proprietary models like OpenAI's GPT-4, while offering significant cost savings and flexibility for developers and researchers.
21 Sources
Chinese AI startup DeepSeek releases a major upgrade to its V3 language model, showcasing improved performance and efficiency. The open-source model challenges industry leaders with its ability to run on consumer hardware.
16 Sources
AWS and Microsoft Azure have integrated DeepSeek's R1 AI model into their platforms, offering customers access to this cost-effective and high-performing Chinese AI model. This move highlights the evolving AI landscape and its impact on major tech companies.
2 Sources
Cerebras Systems announces hosting of DeepSeek's R1 AI model on US servers, promising 57x faster speeds than GPU solutions while addressing data privacy concerns. This move reshapes the AI landscape, challenging Nvidia's dominance and offering a US-based alternative to Chinese AI services.
2 Sources