© 2024 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On October 16, 2024
2 Sources
[1]
DataStax merges its data stack with Nvidia's development tools to simplify AI development and fine-tuning - SiliconANGLE
The database company DataStax Inc. is teaming up with Nvidia Corp. as it strives to become the data platform of choice for enterprises' artificial intelligence initiatives. In an announcement today, the company said it's integrating its AI capabilities with the Nvidia AI Enterprise platform. The company claims the new integrated offering, dubbed the "DataStax AI Platform, Built with Nvidia AI," can reduce development time of AI applications that leverage proprietary data by up to 60% in some cases, and provides everything developers need to fine-tune their models and improve the accuracy of their responses.

DataStax said it's offering a complete solution for AI that covers everything from data ingestion and retrieval to application development and deployment, together with continuous training. The key components include DataStax's Langflow platform, an open-source visual framework for building retrieval-augmented generation (RAG) applications. DataStax launched the Langflow platform earlier this year, after acquiring Logspace, the creator of the open-source Langflow project. DataStax also supplies its integrated data management tools, centered on its flagship NoSQL database Astra DB with integrated vector search, hybrid search, knowledge graph, RAG, real-time analytics, streaming and other capabilities.

DataStax became one of the first traditional database companies to add vector search functionality last year, enabling unstructured data to be stored as vector embeddings for easier retrieval by large language models. That update paved the way for DataStax's RAGStack offering, an "out-of-the-box RAG solution." RAG is a key technique in AI development that makes it possible to provide additional context to LLMs from outside data sources, allowing models to deliver more accurate query responses and improving the performance of generative AI applications.
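The RAG pattern described above can be sketched in a few lines: embed the documents, rank them by similarity to the query, and prepend the best matches to the prompt. The snippet below is an illustrative toy, with bag-of-words vectors standing in for learned embeddings and an in-memory list standing in for a vector database such as Astra DB; it is not DataStax's or Nvidia's actual API.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    # Real systems use learned dense embeddings stored in a vector database.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank all documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Augment the prompt with retrieved context before calling an LLM.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The office cafeteria opens at 8 am.",
    "A refund is issued to the original payment method.",
]
print(build_prompt("What is the refund policy?", docs))
```

The retrieval step is what lets the model answer from proprietary data it was never trained on; swapping in real embeddings and a vector store changes the plumbing, not the shape of the pattern.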
DataStax said AI demands extremely diverse kinds of data, so an integrated platform that provides access to all of it is preferable to bolting on separate tools for vector search, knowledge graphs and so on.

Meanwhile, the Nvidia AI Enterprise platform adds a host of other capabilities for AI developers, including Nvidia's NeMo Retriever tool, which makes it easy to connect individual LLMs to very specific datasets, and NeMo Curator, a data curation tool for building large datasets for pre-training and fine-tuning models. Other Nvidia components include NeMo Customizer, a performant and scalable microservice that simplifies model fine-tuning and alignment for domain-specific applications. NeMo Evaluator aids development by automating the evaluation process to test the accuracy of fine-tuned AI applications, while NeMo Guardrails makes it possible to add safeguards and prevent toxic or biased outputs. Nvidia AI Enterprise also integrates multimodal PDF data extraction capabilities, providing a blueprint for ingesting unstructured data from PDF files, and NIM Agent Blueprints, a catalog of pre-trained and customizable AI workflows for creating and deploying AI applications.

The DataStax AI Platform, Built with Nvidia AI, looks to be the complete package for AI developers, and companies will be hard-pressed to find a more comprehensive platform for building and deploying their AI models. Whether or not it's the best platform of its kind remains to be seen, but DataStax is boosting its chances of success by making it as flexible as possible. Enterprises can deploy the platform on any of the major public clouds - Amazon Web Services, Microsoft Azure or Google Cloud - as well as in on-premises environments, the company said. That last option makes it especially useful for enterprises in heavily regulated industries such as insurance, finance and healthcare.
The integration makes sense because a lot of customers are using both platforms anyway, the company added. It explained that one of the problems enterprises face when bolting together various disparate tools for AI is that things have a habit of breaking down. For instance, the online travel agency Priceline.com LLC was already using DataStax's AI offerings in combination with Nvidia's NeMo tools, and it was spending a lot of time trying to make everything work smoothly. "It will greatly reduce AI development time," said Priceline Chief Technology Officer Angela McArthur. "Having them integrated will greatly reduce the complexity for companies like us."

Constellation Research Inc. analyst Holger Mueller said the integrated offering is interesting because it brings together Nvidia's proven infrastructure with a reliable platform-as-a-service vendor in DataStax. "The partnership makes it clear that Nvidia has ambitions in software too, and it will help the company in that regard," the analyst said. "It makes it much easier for joint customers to feed their data into Nvidia's software and hardware and get their generative AI apps up and running. Some companies might be concerned about the dependencies they're entering through this partnership, but most won't worry as they just want to build their first AI-powered applications."

DataStax says the integrated platform will also provide more accuracy, giving developers more dynamic control over the data they feed into each AI application so they can improve its responses. That's especially important because companies are increasingly trying to use generative AI to improve productivity, with things such as PDF-driven chatbots for customer service and AI-powered analytics tools for surfacing business insights. "The companies we're talking to see these use cases as laying the groundwork for what they really want to do," said DataStax Chief Executive Chet Kapoor.
"They want to build 'transformational' AI projects that fundamentally transform how they operate and optimize for their customers."
[2]
DataStax looks to help enterprises stuck in AI 'development hell', with a little help from Nvidia
DataStax has been steadily expanding its data platform in recent years to help meet the growing needs of enterprise AI developers. Today the company is taking the next step forward with the launch of the DataStax AI Platform, Built with Nvidia AI.

The new platform integrates DataStax's existing database technology, including DataStax Astra for cloud-native deployments and the DataStax Hyper-Converged Database (HCD) for self-managed deployments. It also includes the company's Langflow technology, which is used to help build out agentic AI workflows. The Nvidia enterprise AI components include technologies that will help accelerate and improve organizations' ability to rapidly build and deploy models. Among the Nvidia enterprise components in the stack are NeMo Retriever, NeMo Guardrails and NIM Agent Blueprints. According to DataStax, the new platform can reduce AI development time by 60% and handle AI workloads 19 times faster than current solutions.

"Time to production is one of the things we talk about; building these things takes a bunch of time," Ed Anuff, Chief Product Officer at DataStax, told VentureBeat. "What we've seen has been that a lot of folks are stuck in development hell."

How Langflow enables enterprises to benefit from agentic AI

Langflow, DataStax's visual AI orchestration tool, plays a crucial role in the new AI platform. Langflow allows developers to visually construct AI workflows by dragging and dropping components onto a canvas. These components represent various DataStax and Nvidia capabilities, including data sources, AI models and processing steps. This visual approach significantly simplifies the process of building complex AI applications.
"What Langflow allows us to do is surface all of the DataStax capabilities and APIs, as well as all of the Nvidia components and microservices, as visual components that can be connected together and run in an interactive way," Anuff said.

Langflow is also the critical technology that brings agentic AI to the new DataStax platform. According to Anuff, the platform facilitates the development of three main types of agents:

Task-oriented agents: These agents can perform specific tasks on behalf of users. For example, in a travel application, an agent could assemble a vacation package based on user preferences.

Automation agents: These agents operate behind the scenes, handling tasks without direct user interaction. They often involve APIs communicating with other APIs and agents, facilitating complex automated workflows.

Multi-agent systems: This approach involves breaking down complex tasks into subtasks handled by specialized agents.

What the Nvidia-DataStax combination enables for enterprise AI

The combination of Nvidia's capabilities with DataStax's data platform and Langflow will help enterprise AI users in a number of ways, according to Anuff. He explained that the Nvidia integration will allow enterprise users to more easily invoke custom language models and embeddings through a standardized NIM microservices architecture. By using Nvidia's microservices, users can also tap into Nvidia's hardware and software capabilities to run these models efficiently. Guardrails support is another key addition that will help DataStax users prevent unsafe content and model outputs.

"The guardrails capability is one of the features that I think probably has the most developer and end-user impact," Anuff said. "Guardrails are basically a sidecar model that is able to recognize and intercept unsafe content that is either coming from the user, ingestion or through stuff retrieved from databases."
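The "sidecar" guardrail pattern Anuff describes can be sketched as a checker that screens the user's input, the retrieved context and the model's output independently of the model itself. In the sketch below, a keyword blocklist is a deliberately crude stand-in for the dedicated safety model a real system such as NeMo Guardrails would use; none of these names come from NeMo's actual API.

```python
from dataclasses import dataclass

# Placeholder policy: a real guardrail runs a separate "sidecar" safety model.
# This keyword list exists only to make the control flow concrete.
BLOCKLIST = {"ssn", "password", "credit card"}

@dataclass
class GuardResult:
    allowed: bool
    reason: str = ""

def check(text: str) -> GuardResult:
    lowered = text.lower()
    for term in BLOCKLIST:
        if term in lowered:
            return GuardResult(False, f"flagged term: {term}")
    return GuardResult(True)

def guarded_answer(user_input: str, retrieved: list[str], model) -> str:
    # Screen every stage: the user's input, each retrieved document,
    # and finally the model's own output.
    stages = [("input", user_input), *(("retrieval", r) for r in retrieved)]
    for stage, text in stages:
        verdict = check(text)
        if not verdict.allowed:
            return f"Request blocked at {stage} stage ({verdict.reason})."
    output = model(user_input, retrieved)
    return output if check(output).allowed else "Response withheld by guardrails."

# Usage with a stub in place of a real LLM:
stub = lambda q, ctx: "Our policy covers refunds within 30 days."
print(guarded_answer("What is your refund policy?", ["Refunds: 30 days."], stub))
```

The point of the sidecar design is that the safety check wraps the model rather than living inside it, so the same guard can screen content arriving from users, from ingestion, or from database retrieval.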
The Nvidia integration will also help enable continuous model improvement. Anuff explained that NeMo Curator allows enterprise AI users to identify additional content that can be used for fine-tuning purposes. The overall impact of the integration is to help enterprises benefit from AI faster and in a cost-efficient way. Anuff noted that it's an approach that doesn't necessarily have to rely entirely on GPUs, either.

"The Nvidia enterprise stack actually is able to execute workloads on CPUs as well as GPUs," Anuff said. "GPUs will be faster and generally are going to be where you want to put these workloads, but if you want to offload some of the stuff to CPUs for cost savings in areas where it doesn't matter, it lets you do that as well."
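A curation step of the kind Anuff attributes to NeMo Curator can be sketched as a filter-and-deduplicate pass over candidate fine-tuning records; real pipelines layer fuzzy deduplication, language identification and quality classifiers on top of basics like these. This is an illustrative sketch, not Curator's API.

```python
import hashlib

def curate(records: list[str], min_words: int = 5) -> list[str]:
    """Minimal curation pass: whitespace normalization, a length filter,
    and exact-duplicate removal via content hashing."""
    seen: set[str] = set()
    kept: list[str] = []
    for rec in records:
        text = " ".join(rec.split())           # normalize whitespace
        if len(text.split()) < min_words:      # drop fragments too short to train on
            continue
        digest = hashlib.sha256(text.lower().encode()).hexdigest()
        if digest in seen:                     # drop exact duplicates (case-insensitive)
            continue
        seen.add(digest)
        kept.append(text)
    return kept

raw = [
    "Reset a password from the account settings page.",
    "reset a password   from the account settings page.",
    "ok thanks",
]
print(curate(raw))  # only one normalized record survives
```

Even a pass this simple illustrates why curation matters for fine-tuning: duplicated and low-content records waste training compute and can skew the resulting model toward whatever happens to repeat in the corpus.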
DataStax integrates its data stack with Nvidia's AI Enterprise platform, promising to reduce AI development time by up to 60% and handle AI workloads 19 times faster than current solutions.
DataStax, a leading database company, has announced a significant collaboration with Nvidia, integrating its AI capabilities with the Nvidia AI Enterprise platform. This partnership aims to address the growing demand for efficient AI development tools in the enterprise sector [1].
The newly introduced "DataStax AI Platform, Built with Nvidia AI" promises to reduce AI application development time by up to 60% in some cases. This integrated platform covers the entire AI development lifecycle, from data ingestion and retrieval to application development and deployment, including continuous training [1].
Key components of the platform include:

- Langflow, DataStax's open-source visual framework for building RAG and agentic AI applications
- Astra DB, the company's flagship NoSQL database with integrated vector search, hybrid search, knowledge graph, real-time analytics and streaming
- The DataStax Hyper-Converged Database (HCD) for self-managed deployments
- RAGStack, DataStax's out-of-the-box RAG solution
The Nvidia AI Enterprise platform brings several crucial tools to the collaboration:

- NeMo Retriever, for connecting LLMs to specific datasets
- NeMo Curator, for building large datasets for pre-training and fine-tuning
- NeMo Customizer, a microservice that simplifies model fine-tuning and alignment
- NeMo Evaluator, which automates accuracy testing of fine-tuned applications
- NeMo Guardrails, for preventing toxic or biased outputs
- Multimodal PDF data extraction and NIM Agent Blueprints, a catalog of pre-trained, customizable AI workflows
The DataStax AI Platform offers deployment flexibility, supporting major public cloud platforms (AWS, Azure, Google Cloud) and on-premises environments. This versatility makes it particularly attractive for enterprises in heavily regulated industries such as insurance, finance, and healthcare [1].
DataStax's Langflow plays a crucial role in the new AI platform by allowing developers to visually construct AI workflows. It supports the development of three main types of agents:

- Task-oriented agents, which perform specific tasks on behalf of users, such as assembling a vacation package in a travel application
- Automation agents, which operate behind the scenes without direct user interaction, often with APIs communicating with other APIs and agents
- Multi-agent systems, which break complex tasks into subtasks handled by specialized agents
The DataStax-Nvidia collaboration addresses several key challenges in enterprise AI development:

- Long development cycles, with many teams "stuck in development hell"
- The fragility of bolting together disparate tools for vector search, knowledge graphs and model serving
- Unsafe or biased model outputs, addressed through guardrails
- Continuous model improvement, via curation of additional data for fine-tuning
Analysts view this partnership as a strategic move that combines Nvidia's proven infrastructure with DataStax's reliable platform-as-a-service offering. While some companies might be concerned about potential dependencies, the integrated solution is expected to accelerate the development of AI-powered applications for many enterprises [1].