2 Sources
[1]
Zencoder just launched an AI that can replace days of QA work in two hours
Zencoder, the artificial intelligence coding startup founded by serial entrepreneur Andrew Filev, announced today the public beta launch of Zentester, an AI-powered agent designed to automate end-to-end software testing. This critical but often sluggish step can delay product releases by days or weeks.

The new tool represents Zencoder's latest attempt to distinguish itself in the increasingly crowded AI coding assistant market, where companies are racing to automate not just code generation but entire software development workflows. Unlike existing AI coding tools that focus primarily on writing code, Zentester targets the verification phase -- ensuring software works as intended before it reaches customers.

"Verification is the missing link in scaling AI-driven development from experimentation to production," said Filev in an exclusive interview with VentureBeat. The CEO, who previously founded project management company Wrike and sold it to Citrix for $2.25 billion in 2021, added: "Zentester doesn't just generate tests -- it gives developers the confidence to ship by validating that their AI-generated or human-written code does what it's supposed to do."

The announcement comes as the AI coding market undergoes rapid consolidation. Last month, Zencoder acquired Machinet, another AI coding assistant with over 100,000 downloads. At the same time, OpenAI reached an agreement to acquire coding tool Windsurf for approximately $3 billion (the agreement was reported in May). The moves underscore how companies are rushing to build comprehensive AI development platforms rather than point solutions.

Why software testing has become the biggest roadblock in AI-powered development

Zentester addresses a persistent challenge in software development: the lengthy feedback loops between developers and quality assurance teams. In typical enterprise environments, developers write code and send it to QA teams for testing, often waiting several days for feedback. By then, developers have moved on to other projects, creating costly context switching when issues are discovered.

"In a typical engineering process, after a developer builds a feature and sends it to QA, they receive feedback several days later," Filev told VentureBeat. "By then, they've already moved on to something else. This context switching and back-and-forth -- especially painful during release crunches -- can stretch simple fixes into week-long ordeals."

Early customer Club Solutions Group reported dramatic improvements, with CEO Mike Cervino stating, "What took our QA team a couple of days now takes developers 2 hours."

The timing is particularly relevant as AI coding tools generate increasingly large volumes of code. While tools like GitHub Copilot and Cursor have accelerated code generation, they have also created new quality assurance challenges. Filev estimates that if AI tools increase code generation by 10x, testing requirements will similarly increase by 10x -- overwhelming traditional QA processes.

How Zentester's AI agents click buttons and fill forms like human testers

Unlike traditional testing frameworks that require developers to write complex scripts, Zentester operates on plain English instructions.
The AI agent can interact with applications like a human user -- clicking buttons, filling forms, and navigating through software workflows -- while validating both frontend user interfaces and backend functionality.

The system integrates with existing testing frameworks, including Playwright and Selenium, rather than replacing them entirely. "We absolutely do not like people abandoning stuff -- that's part of our DNA," Filev said. "We feel that AI should leverage the processes and tools that already exist in industry."

Zentester offers five core capabilities: developer-led quality testing during feature development, QA acceleration for comprehensive test suite creation, quality improvement for AI-generated code, automated test maintenance, and autonomous verification in continuous integration pipelines.

The tool represents the latest addition to Zencoder's broader multi-agent platform, which includes coding agents for generating software and unit testing agents for basic verification. The company's "Repo Grokking" technology analyzes entire code repositories to provide context, while an error-correction pipeline aims to reduce AI-generated bugs.

The battle for AI coding dominance heats up as billions pour into automation tools

The launch intensifies competition in the AI development tools market, where established players like Microsoft's GitHub Copilot and newer entrants like Cursor are vying for developer mindshare. Zencoder's approach of building specialized agents for different development phases contrasts with competitors focused primarily on code generation.

"At this point, there are three strong coordination products in the market that are production grade: it's us, Cursor, and Windsurf," Filev said in a recent interview. "For smaller companies, it's becoming harder and harder to compete."

The company claims superior performance on industry benchmarks, reporting 63% success rates on SWE-Bench Verified tests and approximately 30% on the newer SWE-Bench Multimodal benchmark -- results Filev says double previous best performances. Industry analysts note that end-to-end testing automation represents a logical next step for AI coding tools, but successful implementation requires a sophisticated understanding of application logic and user workflows.

What enterprise buyers need to know before adopting AI testing platforms

Zencoder's approach offers both opportunities and challenges for enterprise customers evaluating AI testing tools. The company's SOC 2 Type II, ISO 27001 and ISO 42001 certifications address security and compliance concerns critical for large organizations.

However, Filev acknowledges that enterprise caution is warranted. "For enterprises, we don't advocate changing software development lifecycles completely, yet," he said. "What we advocate is AI-augmented, where now they can have quick AI code review and acceptance testing that reduces the amount of work that needs to be done by the next party in the pipeline."

The company's integration strategy -- working within existing development environments like Visual Studio Code and JetBrains IDEs rather than requiring platform switches -- may appeal to enterprises with established toolchains.

The race to automate software development from idea to deployment

Zentester's launch positions Zencoder to compete for a larger share of the software development workflow as AI tools expand beyond simple code generation.
The company's vision extends to full automation from requirements to production deployment, though Filev acknowledges current limitations.

"The next jump is going to be requirements to production -- the whole thing," Filev said. "Can you now pipe it so that you could have natural language requirements and then AI could help you break it down, build architecture, build code, build review, verify that, and ship it to production?"

Zencoder offers Zentester through three pricing tiers: a free basic version, a $19 per user per month business plan, and a $39 per user per month enterprise option with premium support and compliance features.

For an industry still debating whether artificial intelligence will replace programmers or simply make them more productive, Zentester suggests a third possibility: AI that handles the tedious verification work while developers focus on innovation. The question is no longer whether machines can write code -- it's whether they can be trusted to test it.
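To make concrete what the plain-English approach described above is replacing, here is a minimal sketch of a conventional scripted end-to-end check of a checkout flow, written against Playwright, one of the frameworks the company says Zentester works alongside. The URL, selectors, and product details are hypothetical; this illustrates the traditional scripted style, not Zencoder's code.

// checkout.spec.ts -- illustrative only; storefront, selectors, and copy are invented.
import { test, expect } from '@playwright/test';

test('complete the checkout with the saved payment method', async ({ page }) => {
  // Open the (hypothetical) storefront and add an item, as a human tester would.
  await page.goto('https://shop.example.com');
  await page.getByRole('button', { name: 'Add to cart' }).first().click();
  await page.getByRole('link', { name: 'Cart' }).click();

  // Start checkout and pick the saved payment method.
  await page.getByRole('button', { name: 'Checkout' }).click();
  await page.getByLabel('Saved payment method').check();
  await page.getByRole('button', { name: 'Place order' }).click();

  // Validate the visible outcome.
  await expect(page.getByText('Order confirmed')).toBeVisible();
});

Every locator and assertion in a script like this has to be written and maintained by hand, which is the overhead a plain-language instruction is meant to absorb.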
[2]
Zencoder accelerates vibe coding with instant, AI-powered software verification - SiliconANGLE
Generative artificial intelligence coding startup Zencoder says it's finally able to help developer teams shift from "vibe coding" to production-ready applications with its latest tool, which automates the verification process for newly generated code.

Announced today and available in beta test now, Zentester is an AI-powered agent that can quickly validate any piece of code, including code created with the aid of generative tools. It's designed to verify newly written code the moment it's created, directly within the developer's workflow.

It's the latest addition to Zencoder's arsenal of AI coding agents. The company already offers a legion of coding assistants for tasks such as code generation, repair, test generation, optimization and documentation, and last month debuted a platform for developers to create their own, highly customized AI coding tools. Now it's going further, bringing AI-generated software tests into the development pipeline with Zentester.

As the startup explains, AI coding bots have created a significant gap between actually writing code and the ability of teams to ship reliable software. The code creation process has been accelerated dramatically, but testing is still a slow and laborious process. To close this gap, Zentester enables end-to-end testing and faster feedback loops to provide instant verification that software actually works as it's supposed to.

Zencoder said that traditional software engineering involves the developer building a new feature and sending it to quality assurance, with feedback returned anywhere from a few hours to a few days later. By the time that feedback arrives, the developer has already moved on to the next feature, and if a fix or alteration is required, that means stopping what they're working on and going back to perform the needed fix. Once done, the fixed code needs to go back to QA to be tested again, and if it's still not working correctly, it goes back to the developer once more. It's a lot of context switching, and developers can quickly find themselves overwhelmed, with simple fixes stretching into weeks-long ordeals.

To put an end to this back-and-forth, Zentester looks at newly written applications from the same perspective as human users do. Its verification process involves clicking on buttons, completing forms and navigating through various menus and features, validating both the user interface and the backend responses. Developers can "talk" to Zentester in plain language: they can ask it to "complete the checkout with the saved payment method," and it will verify whether that process works. That eliminates the need for the complex scripts traditionally used in testing.

In addition, Zentester slots into both the development and engineering workflows. Developers can use it as they're writing code to catch issues immediately, while engineers can focus on creating more comprehensive tests to fully validate every possible scenario. It can even be employed by Zencoder's own AI agents, enabling them to self-verify the code they create; when they do, they remember the mistakes they made, which helps them improve their coding over time.

Zentester can also help fix existing software tests that may "break" as the application evolves because of compatibility issues.
In this case, it simply adapts the test in line with the changes made to the underlying code.

Early adopters have confirmed that Zentester's automated verification helps. According to Zencoder, teams with early access have reported a 30% improvement in first-time pass rates for verified AI commits, leading to a significant acceleration in developer velocity.

Zencoder founder and Chief Executive Andrew Filev said verification is the "missing link" required to scale AI-powered software development. "Zentester doesn't just generate tests, it gives developers the confidence to ship by validating that their AI-generated or human-written code does what it's supposed to do," he promised.
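The maintenance problem described here is easy to picture with a conventional script: locators tied to page structure stop working whenever the markup changes. The fragment below is a hypothetical Playwright example of that brittleness, alongside the more resilient locator style teams use to limit it; the page, selectors, and text are invented, and this is not Zentester code.

// feedback.spec.ts -- illustrative only; page structure and wording are invented.
import { test, expect } from '@playwright/test';

test('submit the feedback form', async ({ page }) => {
  await page.goto('https://app.example.com/feedback');
  await page.getByLabel('Your comments').fill('The new dashboard is great.');

  // Brittle: tied to DOM structure and styling classes, so any redesign breaks it.
  // await page.locator('div.main > form > div:nth-child(3) > button.btn-primary').click();

  // More resilient: tied to what the user sees, so it survives markup changes.
  await page.getByRole('button', { name: 'Send feedback' }).click();

  await expect(page.getByText('Thanks for your feedback')).toBeVisible();
});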
Zencoder introduces Zentester, an AI agent that automates end-to-end software testing, potentially compressing days of QA work into about two hours. The tool aims to bridge the gap between rapid AI-driven code generation and reliable software deployment.
Zencoder, an artificial intelligence coding startup, has launched the public beta of Zentester, an AI-powered agent designed to automate end-to-end software testing. This innovative tool aims to significantly reduce the time required for quality assurance (QA) processes, potentially replacing days of work with just two hours of automated testing 1.
As AI coding tools generate increasingly large volumes of code, the need for efficient verification has become a critical bottleneck in the software development process. Zentester addresses this challenge by providing instant validation of newly written code, directly within the developer's workflow 2.
Andrew Filev, Zencoder's founder and CEO, emphasizes the importance of verification in scaling AI-driven development: "Verification is the missing link in scaling AI-driven development from experimentation to production. Zentester doesn't just generate tests -- it gives developers the confidence to ship by validating that their AI-generated or human-written code does what it's supposed to do" 1.
Unlike traditional testing frameworks that require complex scripts, Zentester operates on plain English instructions. The AI agent interacts with applications like a human user, performing actions such as clicking buttons, filling forms, and navigating through software workflows. This approach allows for validation of both frontend user interfaces and backend functionality 1.
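As a rough sketch of what validating both the frontend and the backend looks like in a conventional end-to-end framework, the hypothetical Playwright test below drives the UI and then checks a backend endpoint within the same run. The application, endpoint, and field names are assumptions for illustration, not part of Zentester's documented behavior.

// profile-update.spec.ts -- illustrative only; app, endpoint, and labels are invented.
import { test, expect } from '@playwright/test';

test('profile update is reflected in the UI and the backend', async ({ page }) => {
  // Frontend: change the display name through the UI, as a human tester would.
  await page.goto('https://app.example.com/settings');
  await page.getByLabel('Display name').fill('Ada Lovelace');
  await page.getByRole('button', { name: 'Save' }).click();
  await expect(page.getByText('Settings saved')).toBeVisible();

  // Backend: confirm the API now returns the updated value.
  // page.request reuses the browser session's cookies, so the call runs under the same login.
  const response = await page.request.get('https://app.example.com/api/profile');
  expect(response.ok()).toBeTruthy();
  const profile = await response.json();
  expect(profile.displayName).toBe('Ada Lovelace');
});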
Key features of Zentester include:
- Developer-led quality testing during feature development
- QA acceleration for comprehensive test suite creation
- Quality improvement for AI-generated code
- Automated test maintenance as applications evolve
- Autonomous verification in continuous integration pipelines
Zentester integrates with popular testing frameworks like Playwright and Selenium, allowing developers to leverage existing tools and processes. This integration enables seamless adoption within current development environments 1.
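The reports don't spell out how Zentester attaches to an existing suite, but the kind of setup it would sit alongside is familiar: a Playwright project configured to run both locally and in a CI pipeline. The sketch below is a generic example of such a configuration with placeholder values; it is not a Zencoder-specific config.

// playwright.config.ts -- a typical existing setup; directory, URL, and settings are placeholders.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './e2e',                        // where the team's end-to-end specs already live
  retries: process.env.CI ? 2 : 0,         // retry flaky tests only in CI runs
  reporter: [['list'], ['html', { open: 'never' }]],
  use: {
    baseURL: 'https://staging.example.com', // environment under test (hypothetical)
    trace: 'on-first-retry',                // keep traces when a test has to be retried
  },
});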
Early adopters of Zentester have reported significant improvements in their development processes. Club Solutions Group, an early customer, saw dramatic time savings, with CEO Mike Cervino stating, "What took our QA team a couple of days now takes developers 2 hours" 1.
Teams with early access to Zentester have reported a 30% improvement in first-time pass rates for verified AI commits, leading to a notable acceleration in developer velocity 2.
The launch of Zentester intensifies competition in the AI development tools market, where established players like Microsoft's GitHub Copilot and newer entrants like Cursor are vying for developer mindshare. Zencoder's approach of building specialized agents for different development phases sets it apart from competitors primarily focused on code generation 1.
Zencoder claims superior performance on industry benchmarks, reporting 63% success rates on SWE-Bench Verified tests and approximately 30% on the newer SWE-Bench Multimodal benchmark -- results Filev says double previous best performances 1.
The introduction of Zentester represents a significant step towards fully automated software development workflows. By addressing the verification bottleneck, Zencoder aims to enable developers to move from "vibe coding" to production-ready applications more quickly and confidently 2.
As the AI coding market undergoes rapid consolidation, with recent acquisitions like Zencoder's purchase of Machinet and OpenAI's acquisition of Windsurf, the industry is moving towards comprehensive AI development platforms rather than point solutions 1.
The success of tools like Zentester could reshape software development processes, reducing the time and resources required for QA while improving overall code quality and reliability.