Curated by THEOUTPOST
On Wed, 9 Oct, 8:02 AM UTC
2 Sources
[1]
Distributional raises $19M to automate AI model and app testing
Distributional, an AI testing platform founded by Intel's former GM of AI software, Scott Clark, has closed a $19 million Series A funding round led by Two Sigma Ventures. Clark says that Distributional was inspired by the AI testing problems he ran into while applying AI at Intel, and -- before that -- his work at Yelp as a software lead in the company's ad-targeting division. "As the value of AI applications continues to grow, so do the operational risks," he told TechCrunch. "AI product teams use our platform to proactively and continuously detect, understand, and address AI risk before it introduces risk in production." Clark came to Intel by way of an acquisition. In 2020, Intel acquired SigOpt, a model experimentation and management platform that Clark co-founded. Clark stayed on, and in 2022 he was appointed VP and GM of Intel's AI and supercomputing software group. At Intel, Clark says that he and his team were frequently hamstrung by AI monitoring and observability issues. AI is non-deterministic, Clark pointed out -- meaning that it generates different outputs given the same piece of data. Add to that fact that AI models have many dependencies (like software infrastructure and training data), and pinpointing bugs in an AI system can feel like searching for a needle in a haystack. According to a 2024 Rand Corporation survey, over 80% of AI projects fail. Generative AI is proving to be a particular challenge for companies, with a Gartner study predicting that a third of deployments will be abandoned by 2026. "It requires writing statistical tests on distributions of many data properties," Clark said. "AI needs to be continuously and adaptively testing through the lifecycle to catch behavioral change." Clark created Distributional to try to abstract away this AI auditing work somewhat, drawing on techniques he and SigOpt's team developed while working with enterprise customers. Distributional can automatically create statistical tests for AI models and apps to a developer's specifications, and organize the results of these tests in a dashboard. From that dashboard, Distributional users can work together on test "repositories," triage failed tests, and recalibrate tests if and where necessary. The entire environment can be deployed on-premises (although Distributional also offers a managed plan), and integrated with popular alerting and database tools. "We provide visibility across the organization into what, when, and how AI applications were tested and how that has changed over time," Clark said, "and we provide a repeatable process for AI testing for similar applications by using sharable templates, configurations, filters, and tags." AI is indeed an unwieldy beast. Even the top AI labs have weak risk management. A platform like Distributional's could ease the testing burden, and perhaps even help companies achieve ROI. At least, that's Clark's pitch. "Whether instability, inaccuracy, or the dozens of other potential challenges, it can be hard to identify AI risk," he said. "If teams fail to get AI testing right, they risk AI applications never making it into production. Or, if they do productionalize, they risk these applications behaving in unexpected and potentially harmful ways with no visibility into these issues." Distributional isn't first to market with tech to probe and analyze an AI's reliability. Kolena, Prolific, Giskard, and Patronus are among the many AI experimentation solutions out there. 
Tech giants such as Google Cloud, AWS, and Azure also offer model evaluation tools. So why would a customer choose Distributional?

Well, Clark asserts that Distributional -- which is on the cusp of commercializing its product suite -- delivers a more "white glove" experience than many. Distributional takes care of installation, implementation, and integration for clients, and provides AI testing troubleshooting (for a fee).

"Monitoring tools often focus on higher-level metrics and specific instances of outliers, which gives a limited sense of consistency, but without insights on broader application behavior," Clark said. "The goal of Distributional's testing is to enable teams to get to a definition of desired behavior for any AI application, confirm that it still behaves as expected in production and through development, detect when this behavior changes, and figure out what needs to evolve or be fixed to reach a steady state once again."

Flush with new cash from its Series A, Distributional plans to expand its technical team, with a focus on the UI and AI research engineering sides. Clark said that he expects the company's workforce to grow to 35 people by the end of the year, as Distributional embarks on its first wave of enterprise deployments.

"We have secured significant funding in the course of just a year since we were founded, and, even with our growing team, are in a position to capitalize over the next few years on this massive opportunity," Clark added.

Andreessen Horowitz, Operator Collective, Oregon Venture Fund, Essence VC, and Alumni Ventures also participated in Distributional's Series A. To date, the San Francisco-based startup has raised $30 million.
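Neither Clark's quotes nor the article show what one of these statistical tests actually looks like, and Distributional hasn't published its internals. As a minimal sketch of the idea Clark describes -- testing the distribution of a data property rather than asserting on any single output -- the hypothetical example below compares one property (response length) between an accepted baseline run and a fresh production sample, using a two-sample Kolmogorov-Smirnov test:

```python
# Hypothetical sketch of a distributional test -- NOT Distributional's actual code.
# Compares one output property (response length) between a baseline run and a
# fresh production sample; a low p-value flags a behavioral shift worth triaging.
from scipy.stats import ks_2samp

def response_lengths(outputs):
    """Reduce raw model outputs (strings) to one scalar property per output."""
    return [len(text.split()) for text in outputs]

def test_length_distribution(baseline_outputs, production_outputs, alpha=0.01):
    baseline = response_lengths(baseline_outputs)
    production = response_lengths(production_outputs)
    # Two-sample KS test: are both samples plausibly from the same distribution?
    result = ks_2samp(baseline, production)
    return {
        "statistic": result.statistic,
        "p_value": result.pvalue,
        "passed": result.pvalue >= alpha,  # fail -> distribution likely shifted
    }

# In practice one such test would exist per tracked property
# (length, sentiment, latency, refusal rate, and so on).
```

Because the model is non-deterministic, no single output is a reliable signal; a test of this shape only fails when the whole distribution of a property drifts away from the baseline.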
[2]
Distributional raises $19M to enhance reliability of AI testing for enterprises - SiliconANGLE
Artificial intelligence testing platform provider Distributional Inc. announced today that it had raised $19 million in new funding to support its mission of making AI reliable for enterprise use.

Founded in 2023 by Scott Clark, the former general manager of AI software at Intel Corp., Distributional is building an enterprise platform for consistent, adaptive and reliable AI testing. The company's platform tests the consistency of any AI or machine learning application, giving AI engineering and product teams confidence in the reliability of their AI applications.

Distributional argues that, unlike traditional software testing, AI testing needs to be done more consistently and adaptively over time, on a meaningful amount of data, because AI is inherently "probabilistic and dynamic." Added into the mix is the ongoing need to avoid the operational risk of deploying faulty products, given the impact on a business's financial, regulatory and reputational bottom line.

The platform tests applications such as generative AI, which Distributional says is particularly unreliable since it is prone to non-determinism, or varying outputs from a given input. Generative AI is also said to be more likely to be non-stationary, with many shifting components that are outside of developers' control.

Distributional helps automate AI testing by intelligently suggesting ways to augment application data, proposing tests, and enabling a feedback loop that adaptively calibrates these tests for each AI application. The platform allows AI product teams to continuously identify, understand and mitigate AI risks before they affect customers. By proactively addressing potential issues, the service ensures the reliability and consistency of AI applications across their lifecycle.

Distributional's platform also offers an Extensible Test Framework that allows teams to gather and enhance data, run tests and respond to alerts through adaptive calibration or debugging. It does so while integrating with existing datastores, workflow systems and alerting platforms to provide a self-managed solution within customer environments.

Additional features include a Configurable Test Dashboard and Intelligent Test Automation that allow teams to collaborate on test workflows, analyze results and scale AI testing. These features can also help fine-tune testing processes across all AI applications so that teams can maintain reliability and adapt to dynamic AI environments.

Two Sigma Ventures LP led the Series A round, with Andreessen Horowitz, Operator Collective, Oregon Venture Fund, Essence Venture Capital and Alumni Ventures Group also participating.
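SiliconANGLE describes "a feedback loop that adaptively calibrates these tests" without detailing the mechanism. One common way to implement such a loop -- offered here purely as an assumption, not as Distributional's documented behavior -- is to recompute a test's pass threshold whenever reviewers accept a flagged run as normal:

```python
# Hypothetical adaptive-calibration loop, assumed for illustration only.
# When reviewers accept a flagged run as expected behavior, its measurement
# joins the baseline and the test threshold is recomputed from the new data.
import statistics

class AdaptiveThreshold:
    def __init__(self, baseline_values, k=3.0):
        self.values = list(baseline_values)
        self.k = k  # how many standard deviations count as "drift"

    @property
    def limit(self):
        mean = statistics.fmean(self.values)
        std = statistics.stdev(self.values)
        return mean + self.k * std

    def check(self, value):
        return value <= self.limit

    def accept(self, value):
        # Reviewer marked this run as expected behavior: recalibrate around it.
        self.values.append(value)

threshold = AdaptiveThreshold([0.11, 0.12, 0.10, 0.13])  # e.g. observed error rates
if not threshold.check(0.19):
    # A human triages the failure; if it's the "new normal", fold it in.
    threshold.accept(0.19)
```

The appeal of this pattern is that tests track an application's accepted behavior over time instead of failing forever against a stale baseline.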
Distributional, an AI testing platform founded by former Intel AI software GM Scott Clark, raises $19 million in Series A funding to automate and enhance AI model and application testing for enterprises.
Distributional, an AI testing platform founded by Scott Clark, former GM of AI software at Intel, has closed a $19 million Series A funding round led by Two Sigma Ventures [1][2]. The San Francisco-based startup aims to revolutionize the way enterprises test and ensure the reliability of their AI models and applications.
Scott Clark, drawing from his experiences at Intel and Yelp, identified critical issues in AI testing that inspired the creation of Distributional. The non-deterministic nature of AI, coupled with its numerous dependencies, makes pinpointing bugs in AI systems a complex task [1].
"As the value of AI applications continues to grow, so do the operational risks," Clark explained to TechCrunch [1]. This sentiment is supported by alarming statistics: a 2024 Rand Corporation survey revealed that over 80% of AI projects fail, while a Gartner study predicts that a third of generative AI deployments will be abandoned by 2026 [1].
Distributional's platform offers a comprehensive solution to these challenges:
Automated Statistical Testing: The platform can automatically create and run statistical tests for AI models and applications based on developer specifications [1] (a hypothetical specification is sketched after this list).
Collaborative Dashboard: Users can work together on test "repositories," triage failed tests, and recalibrate as needed [1].
Flexible Deployment: Distributional offers both on-premises deployment and a managed plan, integrating with popular alerting and database tools [1].
Extensible Test Framework: Teams can gather and enhance data, run tests, and respond to alerts through adaptive calibration or debugging [2].
Intelligent Test Automation: The platform helps fine-tune testing processes across all AI applications, ensuring reliability in dynamic AI environments [2].
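Taken together, these features imply a declarative workflow: a developer states which properties of an application matter, and the platform expands that into a test suite. As a purely hypothetical illustration (none of the field names below come from Distributional's documentation), such a specification might look like this:

```python
# Hypothetical developer specification for an AI application's test suite.
# All field names are illustrative assumptions; Distributional has not
# published its actual configuration format.
TEST_SPEC = {
    "application": "support-chatbot",
    "properties": ["response_length", "sentiment", "refusal_rate"],
    "baseline": "runs/q4-accepted",        # accepted reference runs
    "schedule": "hourly",                   # continuous, not one-off
    "alerting": {"channel": "pagerduty", "min_severity": "warning"},
}

def build_tests(spec):
    """Expand a spec into one distributional test per tracked property."""
    return [
        {"name": f"{spec['application']}::{prop}",
         "property": prop,
         "baseline": spec["baseline"]}
        for prop in spec["properties"]
    ]

for test in build_tests(TEST_SPEC):
    print(test["name"])  # e.g. support-chatbot::response_length
```

Keeping the specification declarative is also what would make the "sharable templates, configurations, filters, and tags" Clark mentions possible: the same spec could be reused across similar applications.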
While Distributional isn't the first to market with AI testing solutions, Clark asserts that their "white glove" experience sets them apart. The company handles installation, implementation, and integration for clients, and provides AI testing troubleshooting services [1].
"We provide visibility across the organization into what, when, and how AI applications were tested and how that has changed over time," Clark stated, emphasizing the platform's ability to create a repeatable process for AI testing [1].
With the new funding, Distributional plans to expand its technical team, focusing on UI and AI research engineering. The company expects to grow to 35 employees by the end of the year as it embarks on its first wave of enterprise deployments [1].
As AI continues to play a crucial role in enterprise operations, Distributional's platform could significantly ease the testing burden and help companies achieve better ROI on their AI investments. By providing a robust solution for AI testing, Distributional aims to increase the success rate of AI projects and mitigate potential risks associated with AI deployments [1][2].
The successful funding round, which also saw participation from Andreessen Horowitz, Operator Collective, Oregon Venture Fund, Essence VC, and Alumni Ventures, brings Distributional's total funding to $30 million [1][2]. This substantial investment underscores the growing importance of AI testing and reliability in the enterprise sector.