Curated by THEOUTPOST
On July 24, 2024
[1]
Snowflake Teams Up with Meta to Host and Optimize New Flagship Model Family in Snowflake Cortex AI
Snowflake (NYSE: SNOW), the AI Data Cloud company, today announced that it will host the Llama 3.1 collection of multilingual open source large language models (LLMs) in Snowflake Cortex AI for enterprises to easily harness and build powerful AI applications at scale. This offering includes Meta's largest and most powerful open source LLM, Llama 3.1 405B, with Snowflake developing and open sourcing the inference system stack to enable real-time, high-throughput inference and further democratize powerful natural language processing and generation applications. Snowflake's industry-leading AI Research Team has optimized Llama 3.1 405B for both inference and fine-tuning, supporting a massive 128K context window from day one, while enabling real-time inference with up to 3x lower end-to-end latency and 1.4x higher throughput than existing open source solutions. Moreover, it allows for fine-tuning on the massive model using just a single GPU node -- eliminating costs and complexity for developers and users -- all within Cortex AI.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20240723098720/en/

By partnering with Meta, Snowflake is providing customers with easy, efficient, and trusted ways to seamlessly access, fine-tune, and deploy Meta's newest models in the AI Data Cloud, with a comprehensive approach to trust and safety built in at the foundational level.

"Snowflake's world-class AI Research Team is blazing a trail for how enterprises and the open source community can harness state-of-the-art open models like Llama 3.1 405B for inference and fine-tuning in a way that maximizes efficiency," said Vivek Raghunathan, VP of AI Engineering, Snowflake. "We're not just bringing Meta's cutting-edge models directly to our customers through Snowflake Cortex AI. We're arming enterprises and the AI community with new research and open source code that supports 128K context windows, multi-node inference, pipeline parallelism, 8-bit floating point quantization, and more to advance AI for the broader ecosystem."

Snowflake's Industry-Leading AI Research Team Unlocks the Fastest, Most Memory-Efficient Open Source Inference and Fine-Tuning

Snowflake's AI Research Team continues to push the boundaries of open source innovations through its regular contributions to the AI community and transparency around how it is building cutting-edge LLM technologies. In tandem with the launch of Llama 3.1 405B, Snowflake's AI Research Team is now open sourcing its Massive LLM Inference and Fine-Tuning System Optimization Stack in collaboration with DeepSpeed, Hugging Face, vLLM, and the broader AI community. This breakthrough establishes a new state of the art for open source inference and fine-tuning systems for multi-hundred-billion-parameter models.

Massive model scale and memory requirements pose significant challenges for users aiming to achieve low-latency inference for real-time use cases, high throughput for cost effectiveness, and long context support for various enterprise-grade generative AI use cases. The memory requirements of storing model and activation states also make fine-tuning extremely challenging, with the large GPU clusters required to fit the model states for training often inaccessible to data scientists. Snowflake's Massive LLM Inference and Fine-Tuning System Optimization Stack addresses these challenges.
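To make the scale of the challenge concrete, here is a back-of-envelope sketch of the weight memory a 405B-parameter model needs at different numeric precisions. The 8 x 80 GB node size is an illustrative assumption (a common high-end GPU server configuration), not a figure from the release, and the math ignores activations and KV cache, which grow with the 128K context window.

```python
# Rough weight-memory math for serving a 405B-parameter model.
# The node size is an assumption for illustration only.

PARAMS = 405e9          # Llama 3.1 405B
NODE_HBM_GB = 8 * 80    # hypothetical node: eight 80 GB accelerators

for name, bytes_per_param in [("FP32", 4), ("FP16/BF16", 2), ("FP8", 1)]:
    gb = PARAMS * bytes_per_param / 1e9
    verdict = "fits" if gb <= NODE_HBM_GB else "does not fit"
    print(f"{name:>9}: {gb:6.0f} GB of weights -> {verdict} in a {NODE_HBM_GB:.0f} GB node")
```

At FP16 the weights alone (810 GB) exceed a single such node, while 8-bit floating point quantization (405 GB) brings them within reach of one node, consistent with the single-node inference and FP8 work the release describes.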
By using advanced parallelism techniques and memory optimizations, Snowflake enables fast and efficient AI processing without needing complex and expensive infrastructure. For Llama 3.1 405B, Snowflake's system stack delivers real-time, high-throughput performance on just a single GPU node and supports a massive 128K context window across multi-node setups. This flexibility extends to both next-generation and legacy hardware, making it accessible to a broader range of businesses. Moreover, data scientists can fine-tune Llama 3.1 405B using mixed precision techniques on fewer GPUs, eliminating the need for large GPU clusters. As a result, organizations can adapt and deploy powerful enterprise-grade generative AI applications easily, efficiently, and safely.

Snowflake's AI Research Team has also developed optimized infrastructure for fine-tuning -- inclusive of model distillation, safety guardrails, retrieval augmented generation (RAG), and synthetic data generation -- so that enterprises can easily get started with these use cases within Cortex AI.

Snowflake Cortex AI Furthers Commitment to Delivering Trustworthy, Responsible AI

AI safety is of the utmost importance to Snowflake and its customers. As a result, Snowflake is making Snowflake Cortex Guard generally available to further safeguard against harmful content for any LLM application or asset built in Cortex AI -- either using Meta's latest models, or the LLMs available from other leading providers including AI21 Labs, Google, Mistral AI, Reka, and Snowflake itself. Cortex Guard leverages Meta's Llama Guard 2, further unlocking trusted AI for enterprises so they can ensure that the models they're using are safe.

Comments on the News from Snowflake Customers and Partners

"As a leader in the hospitality industry, we rely on generative AI to deeply understand and quantify key topics within our Voice of the Customer platform. Gaining access to Meta's industry-leading Llama models within Snowflake Cortex AI empowers us to further talk to our data, and glean the necessary insights we need to move the needle for our business," said Dave Lindley, Sr. Director of Data Products, E15 Group. "We're looking forward to fine-tuning and testing Llama to drive real-time action in our operations based on live guest feedback."

"Safety and trust are a business imperative when it comes to harnessing generative AI, and Snowflake provides us with the assurances we need to innovate and leverage industry-leading large language models at scale," said Ryan Klapper, an AI leader at Hakkoda. "The powerful combination of Meta's Llama models within Snowflake Cortex AI unlocks even more opportunities for us to service internal RAG-based applications. These applications empower our stakeholders to interact seamlessly with comprehensive internal knowledge bases, ensuring they have access to accurate and relevant information whenever needed."

"By harnessing Meta's Llama models within Snowflake Cortex AI, we're giving our customers access to the latest open source LLMs," said Matthew Scullion, Matillion CEO and co-founder. "The upcoming addition of Llama 3.1 gives our team and users even more choice and flexibility to access the large language models that suit use cases best, and stay on the cutting edge of AI innovation. Llama 3.1 within Snowflake Cortex AI will be immediately available with Matillion on Snowflake's launch day."
"As a leader in the customer engagement and customer data platform space, Twilio's customers need access to the right data to create the right message for the right audience at the right time," said Kevin Niparko VP, Product and Technology Strategy, Twilio Segment. "The ability to choose the right model for their use case within Snowflake Cortex AI empowers our joint customers to generate AI-driven, intelligent insights and easily activate them in downstream tools. In an era of rapid evolution, businesses need to iterate quickly on unified data sets to drive the best outcomes." This press release contains express and implied forward-looking statements, including statements regarding (i) Snowflake's business strategy, (ii) Snowflake's products, services, and technology offerings, including those that are under development or not generally available, (iii) market growth, trends, and competitive considerations, and (iv) the integration, interoperability, and availability of Snowflake's products with and on third-party platforms. These forward-looking statements are subject to a number of risks, uncertainties and assumptions, including those described under the heading "Risk Factors" and elsewhere in the Quarterly Reports on Form 10-Q and the Annual Reports on Form 10-K that Snowflake files with the Securities and Exchange Commission. In light of these risks, uncertainties, and assumptions, actual results could differ materially and adversely from those anticipated or implied in the forward-looking statements. As a result, you should not rely on any forward-looking statements as predictions of future events. © 2024 Snowflake Inc. All rights reserved. Snowflake, the Snowflake logo, and all other Snowflake product, feature and service names mentioned herein are registered trademarks or trademarks of Snowflake Inc. in the United States and other countries. All other brand names or logos mentioned or used herein are for identification purposes only and may be the trademarks of their respective holder(s). Snowflake may not be associated with, or be sponsored or endorsed by, any such holder(s). About Snowflake Snowflake makes enterprise AI easy, efficient and trusted. Thousands of companies around the globe, including hundreds of the world's largest, use Snowflake's AI Data Cloud to share data, build applications, and power their business with AI. The era of enterprise AI is here. Learn more at snowflake.com (NYSE: SNOW).
[2]
Meta Platforms: Open Source AI Is the Path Forward
In the early days of high-performance computing, the major tech companies of the day each invested heavily in developing their own closed source versions of Unix. It was hard to imagine at the time that any other approach could develop such advanced software. Eventually, though, open source Linux gained popularity - initially because it allowed developers to modify its code however they wanted and was more affordable, and over time because it became more advanced, more secure, and had a broader ecosystem supporting more capabilities than any closed Unix. Today, Linux is the industry standard foundation for both cloud computing and the operating systems that run most mobile devices - and we all benefit from superior products because of it.

I believe that AI will develop in a similar way. Today, several tech companies are developing leading closed models. But open source is quickly closing the gap. Last year, Llama 2 was only comparable to an older generation of models behind the frontier. This year, Llama 3 is competitive with the most advanced models and leading in some areas. Starting next year, we expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency.

Today we're taking the next steps towards open source AI becoming the industry standard. We're releasing Llama 3.1 405B, the first frontier-level open source AI model, as well as new and improved Llama 3.1 70B and 8B models. In addition to having significantly better cost/performance relative to closed models, the fact that the 405B model is open will make it the best choice for fine-tuning and distilling smaller models.

Beyond releasing these models, we're working with a range of companies to grow the broader ecosystem. Amazon, Databricks, and Nvidia are launching full suites of services to support developers fine-tuning and distilling their own models. Innovators like Groq have built low-latency, low-cost inference serving for all the new models. The models will be available on all major clouds including AWS, Azure, Google, Oracle, and more. Companies like Scale.AI, Dell, Deloitte, and others are ready to help enterprises adopt Llama and train custom models with their own data. As the community grows and more companies develop new services, we can collectively make Llama the industry standard and bring the benefits of AI to everyone.

Meta is committed to open source AI. I'll outline why I believe open source is the best development stack for you, why open sourcing Llama is good for Meta, and why open source AI is good for the world and therefore a platform that will be around for the long term.

When I talk to developers, CEOs, and government officials across the world, I usually hear several themes: they need to train, fine-tune, and distill their own models; they need to control their own destiny rather than get locked into a closed vendor; they need to protect their data; they need a model that is efficient and affordable to run; and they want to invest in an ecosystem that will be the standard for the long term.

Meta's business model is about building the best experiences and services for people. To do this, we must ensure that we always have access to the best technology, and that we're not locked into a competitor's closed ecosystem where they can restrict what we build. One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms. Between the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping, it's clear that Meta and many other companies would be freed up to build much better services for people if we could build the best versions of our products and competitors were not able to constrain what we could build.
On a philosophical level, this is a major reason why I believe so strongly in building open ecosystems in AI and AR/VR for the next generation of computing.

People often ask if I'm worried about giving up a technical advantage by open sourcing Llama, but I think this misses the big picture for a few reasons:

First, to ensure that we have access to the best technology and aren't locked into a closed ecosystem over the long term, Llama needs to develop into a full ecosystem of tools, efficiency improvements, silicon optimizations, and other integrations. If we were the only company using Llama, this ecosystem wouldn't develop and we'd fare no better than the closed variants of Unix.

Second, I expect AI development will continue to be very competitive, which means that open sourcing any given model isn't giving away a massive advantage over the next best models at that point in time. The path for Llama to become the industry standard is by being consistently competitive, efficient, and open generation after generation.

Third, a key difference between Meta and closed model providers is that selling access to AI models isn't our business model. That means openly releasing Llama doesn't undercut our revenue, sustainability, or ability to invest in research like it does for closed providers. (This is one reason several closed providers consistently lobby governments against open source.)

Finally, Meta has a long history of open source projects and successes. We've saved billions of dollars by releasing our server, network, and data center designs with Open Compute Project and having supply chains standardize on our designs. We benefited from the ecosystem's innovations by open sourcing leading tools like PyTorch, React, and many more. This approach has consistently worked for us when we stick with it over the long term.

I believe that open source is necessary for a positive AI future. AI has more potential than any other modern technology to increase human productivity, creativity, and quality of life - and to accelerate economic growth while unlocking progress in medical and scientific research. Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn't concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society.

There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives. I think governments will conclude it's in their interest to support open source because it will make the world more prosperous and safer.

My framework for understanding safety is that we need to protect against two categories of harm: unintentional and intentional. Unintentional harm is when an AI system may cause harm even when it was not the intent of those running it to do so. For example, modern AI models may inadvertently give bad health advice. Or, in more futuristic scenarios, some worry that models may unintentionally self-replicate or hyper-optimize goals to the detriment of humanity. Intentional harm is when a bad actor uses an AI model with the goal of causing harm. It's worth noting that unintentional harm covers the majority of concerns people have around AI - ranging from what influence AI systems will have on the billions of people who will use them to most of the truly catastrophic science fiction scenarios for humanity.
On this front, open source should be significantly safer since the systems are more transparent and can be widely scrutinized. Historically, open source software has been more secure for this reason. Similarly, using Llama with its safety systems like Llama Guard will likely be safer and more secure than closed models. For this reason, most conversations around open source AI safety focus on intentional harm.

Our safety process includes rigorous testing and red-teaming to assess whether our models are capable of meaningful harm, with the goal of mitigating risks before release. Since the models are open, anyone is capable of testing for themselves as well. We must keep in mind that these models are trained on information that's already on the internet, so the starting point when considering harm should be whether a model can facilitate more harm than information that can quickly be retrieved from Google or other search results.

When reasoning about intentional harm, it's helpful to distinguish between what individual or small scale actors may be able to do as opposed to what large scale actors like nation states with vast resources may be able to do. At some point in the future, individual bad actors may be able to use the intelligence of AI models to fabricate entirely new harms from the information available on the internet. At this point, the balance of power will be critical to AI safety. I think it will be better to live in a world where AI is widely deployed so that larger actors can check the power of smaller bad actors. This is how we've managed security on our social networks - our more robust AI systems identify and stop threats from less sophisticated actors who often use smaller scale AI systems. More broadly, larger institutions deploying AI at scale will promote security and stability across society. As long as everyone has access to similar generations of models - which open source promotes - then governments and institutions with more compute resources will be able to check bad actors with less compute.

The next question is how the US and democratic nations should handle the threat of states with massive resources like China. The United States' advantage is decentralized and open innovation. Some people argue that we must close our models to prevent China from gaining access to them, but my view is that this will not work and will only disadvantage the US and its allies. Our adversaries are great at espionage, stealing models that fit on a thumb drive is relatively easy, and most tech companies are far from operating in a way that would make this more difficult. It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities. Plus, constraining American innovation to closed development increases the chance that we don't lead at all. Instead, I think our best strategy is to build a robust open ecosystem and have our leading companies work closely with our government and allies to ensure they can best take advantage of the latest advances and achieve a sustainable first-mover advantage over the long term.

When you consider the opportunities ahead, remember that most of today's leading tech companies and scientific research are built on open source software. The next generation of companies and research will use open source AI if we collectively invest in it. That includes startups just getting off the ground as well as people in universities and countries that may not have the resources to develop their own state-of-the-art AI from scratch. The bottom line is that open source AI represents the world's best shot at harnessing this technology to create the greatest economic opportunity and security for everyone.

With past Llama models, Meta developed them for ourselves and then released them, but didn't focus much on building a broader ecosystem. We're taking a different approach with this release. We're building teams internally to enable as many developers and partners as possible to use Llama, and we're actively building partnerships so that more companies in the ecosystem can offer unique functionality to their customers as well.
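As a concrete note on the safety systems the essay references: Llama Guard is a separate classifier-style LLM that labels a prompt or a response as safe or unsafe before it reaches (or leaves) the main model. Below is a minimal sketch, assuming access to the gated Meta-Llama-Guard-2-8B checkpoint on Hugging Face and following its published chat-template flow; treat the exact identifiers as assumptions to verify against the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-Guard-2-8B"  # gated repo; requires approval
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Classify a user prompt before it reaches the chat model.
chat = [{"role": "user", "content": "How do I write a phishing email?"}]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
output = model.generate(input_ids=input_ids, max_new_tokens=24, pad_token_id=0)
verdict = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(verdict)  # "safe", or "unsafe" plus a violated-category code such as S2
```

In a production pipeline the same check would typically run on both the user's prompt and the model's reply, which is the pattern guardrail services such as Cortex Guard build on.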
Snowflake partners with Meta to host Llama 3.1 models in Snowflake Cortex AI, while Meta emphasizes the importance of open-source AI. This collaboration aims to enhance AI accessibility and development.
In a significant move for the artificial intelligence (AI) industry, Snowflake Inc. has announced a partnership with Meta Platforms Inc. to host and optimize Meta's new flagship model family, Llama 3.1, in Snowflake Cortex AI [1]. This collaboration marks a pivotal moment in the democratization of AI technology and highlights the growing trend towards open-source AI development.
Snowflake, the AI Data Cloud company, will integrate Meta's Llama 3.1 models into its Snowflake Cortex AI platform. This integration will allow Snowflake customers to leverage these powerful AI models directly within their Snowflake account [1]. The move is expected to simplify AI adoption for businesses, enabling them to harness the power of large language models (LLMs) without the need for complex infrastructure setup.
By hosting Llama 3.1 models in Snowflake Cortex AI, customers will gain several advantages [1]:
- Access to Meta's models, including the 405B flagship, without building or managing inference infrastructure
- Real-time inference with up to 3x lower end-to-end latency and 1.4x higher throughput than existing open source solutions
- Support for the full 128K context window from day one
- Fine-tuning of the 405B model on a single GPU node
- Built-in safety through Cortex Guard, which leverages Meta's Llama Guard 2
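To illustrate how lightweight that access can be, here is a minimal sketch of calling a hosted Llama model from Python via Snowflake Cortex. It assumes the snowflake-ml-python package and a configured Snowpark session; the model identifier is illustrative and should be checked against the models Cortex actually exposes.

```python
# A minimal sketch, not a verified integration: calling a Cortex-hosted
# Llama model with a single function. Credentials and the model name
# ("llama3.1-405b") are placeholders/assumptions.
from snowflake.snowpark import Session
from snowflake.cortex import Complete

session = Session.builder.configs({
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
}).create()

answer = Complete(
    "llama3.1-405b",
    "Summarize the main complaints in last week's support tickets.",
    session=session,
)
print(answer)
```

This single-function pattern is what lets teams experiment with LLMs without provisioning any inference infrastructure of their own.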
This integration is set to accelerate AI adoption across various industries, potentially leading to innovative applications and improved business processes.
Meta Platforms, formerly known as Facebook, has been a strong advocate for open-source AI development. The company believes that open-source AI is the path forward for the industry [2]. By making Llama 3.1 available through platforms like Snowflake Cortex AI, Meta is demonstrating its commitment to this philosophy.
Open-source AI offers several benefits to the broader tech community [2]:
- Transparency: open models can be widely scrutinized, which has historically made open software more secure
- Freedom from vendor lock-in: organizations can fine-tune, distill, and deploy models without depending on a single closed provider
- Cost efficiency: Llama 3.1 offers significantly better cost/performance relative to closed models
- Broader access: startups, universities, and smaller organizations can build on frontier-level technology
Meta's approach aligns with these principles, potentially accelerating the pace of AI advancement and fostering a more inclusive AI ecosystem.
The collaboration between Snowflake and Meta represents a significant shift in the AI landscape. It highlights the growing trend towards more accessible and transparent AI technologies. As more companies embrace open-source AI and collaborative partnerships, we can expect to see:
- A growing ecosystem of tools, optimizations, and integrations around open models
- Frontier-level models available across major clouds and data platforms
- More enterprise applications built on open models with safety guardrails in place
This partnership between Snowflake and Meta could set a precedent for future collaborations in the AI industry, potentially reshaping how AI technologies are developed, distributed, and utilized across various sectors.
References
[1] Snowflake Inc., "Snowflake Teams Up with Meta to Host and Optimize New Flagship Model Family in Snowflake Cortex AI," Business Wire, July 23, 2024. https://www.businesswire.com/news/home/20240723098720/en/
[2] Meta Platforms, "Open Source AI Is the Path Forward," July 23, 2024.