End-to-end tests are essential for ensuring the reliability of your application, but they can also be a source of frustration. Even small changes to the user interface can cause tests to fail, leading developers and QA teams to spend hours troubleshooting.
In this blog post, I'll show you how to use AI tools like ChatGPT or Copilot to automatically fix Playwright tests. You'll learn how to generate an AI prompt for any failing test and attach it to your HTML report. From there, you can copy and paste the prompt into an AI tool for quick suggestions on fixing the test. Join me to streamline your testing process and improve application reliability!
By following these steps, you can enhance your end-to-end testing process and make fixing Playwright tests a breeze.
To detect a failed test in Playwright, you can create a custom fixture that checks the test result during the teardown phase, after the test has completed. If `testInfo.error` is set and the test won't be retried, the fixture generates a helpful prompt. Check out the code snippet below:
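Here's a minimal sketch of such a fixture. The fixture name `fixWithAi` and the `buildAiPrompt` helper are my own placeholders; the pieces of the prompt are assembled in the sections below:

```ts
import { test as base, type Page, type TestInfo } from '@playwright/test';

// Hypothetical helper — its body is assembled step by step in the sections below.
async function buildAiPrompt(testInfo: TestInfo, page: Page): Promise<string> {
  return `Fix this Playwright test.\nError:\n${testInfo.error?.message ?? ''}`;
}

export const test = base.extend<{ fixWithAi: void }>({
  fixWithAi: [
    async ({ page }, use, testInfo) => {
      await use(); // run the test body first
      // Teardown: the test has finished, so its result is known.
      const willRetry = testInfo.retry < testInfo.project.retries;
      if (testInfo.error && !willRetry) {
        const prompt = await buildAiPrompt(testInfo, page);
        await testInfo.attach('Fix with AI', {
          body: prompt,
          contentType: 'text/plain',
        });
      }
    },
    { auto: true }, // runs for every test without explicit opt-in
  ],
});
```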
I'll start with a simple proof-of-concept prompt (you can refine it later):
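Something like this; the wording is illustrative, and the `{placeholders}` are filled in when the prompt is built at teardown:

```ts
// Proof-of-concept prompt — refine the wording for your own project.
export const promptTemplate = `
You are an expert in Playwright testing.
Fix the error in the Playwright test below.

Error:
{error}

Test code snippet:
{snippet}

ARIA snapshot of the page:
{ariaSnapshot}
`;
```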
Playwright stores the error message in `testInfo.error.message`. However, it includes ANSI escape codes for coloring output in the terminal (such as `\u001b[2m` or `\u001b[22m`):
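Inside the fixture, a small regex is enough to strip them out (this pattern covers the `m`-terminated color sequences, not every possible ANSI escape):

```ts
// Remove ANSI color sequences such as "\u001b[2m" so the prompt stays readable.
const stripAnsi = (text: string): string =>
  text.replace(/\u001b\[\d+(?:;\d+)*m/g, '');

const errorMessage = stripAnsi(testInfo.error?.message ?? '');
```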
This cleaned-up message can be inserted into the prompt template.
The test code snippet is crucial for AI to generate the necessary code changes. Playwright often includes these snippets in its reports, for example:
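A failing assertion is typically rendered as a code frame like this (an illustrative example, not output from a real run):

```
   4 |   await page.goto('https://playwright.dev');
>  5 |   await expect(page).toHaveTitle(/Playwrite/);
     |                      ^
   6 | });
```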
You can see how Playwright internally generates these snippets. I've extracted the relevant logic into a helper function that retrieves the source code lines from the error stack trace:
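Here's a simplified sketch of that helper; the name `getCodeSnippet` is mine, and the stack-frame parsing is deliberately naive:

```ts
import fs from 'node:fs';

// Pull the failing file and line out of the first matching stack frame
// and return the surrounding source lines as a snippet for the prompt.
function getCodeSnippet(error: { stack?: string }, contextLines = 3): string {
  // Matches frames like "at ... (/path/to/example.spec.ts:5:22)".
  const frame = error.stack?.match(/([^\s()]+\.(?:spec|test)\.[jt]s):(\d+):\d+/);
  if (!frame) return '';
  const [, file, lineStr] = frame;
  const lineNo = Number(lineStr);
  const source = fs.readFileSync(file, 'utf8').split('\n');
  const start = Math.max(0, lineNo - 1 - contextLines);
  const end = Math.min(source.length, lineNo + contextLines);
  return source.slice(start, end).join('\n');
}
```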
ARIA snapshots, introduced in Playwright 1.49, provide a structured view of the page's accessibility tree. Here's an example ARIA snapshot showing the navigation menu on the Playwright homepage:
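It looks roughly like this (abridged and reconstructed from memory, so the live site may differ):

```yaml
- navigation "Main":
  - link "Playwright logo Playwright"
  - link "Docs"
  - link "API"
  - button "Node.js"
  - link "Community"
```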
While ARIA snapshots are primarily used for snapshot comparison, they are also a game-changer for AI prompts in web testing. Compared to raw HTML, ARIA snapshots offer:

- a far more compact representation of the page, which consumes fewer tokens;
- less noise: no styling attributes, scripts, or wrapper elements;
- a user-facing view (roles and accessible names) that maps naturally onto Playwright locators.
When the prompt is built, you can attach it to the test using `testInfo.attach()`:
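For example, assuming `prompt` holds the assembled text:

```ts
await testInfo.attach('Fix with AI', {
  body: prompt,
  contentType: 'text/plain',
});
```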
Now, whenever a test fails, the HTML report will include an attachment labeled "Fix with AI."
When it comes to using ChatGPT for fixing tests, you typically have to implement the suggested changes manually. You can make this process much more efficient with Copilot: instead of pasting the prompt into ChatGPT, open the Copilot edits window in VS Code and paste the prompt there. Copilot will then recommend code changes that you can quickly review and apply, all without leaving your editor.
Check out this demo video of fixing a test with Copilot in VS Code:
Vitaliy Potapov created a fully working GitHub repository demonstrating the "Fix with AI" workflow. Feel free to explore it, run tests, check out the generated prompts, and fix errors with AI help.
To integrate the "Fix with AI" flow into your own project, follow these steps:

1. Add the prompt-building fixture described above to your project and import the extended `test` in your spec files (see the sketch after this list).
2. Run your tests and open the HTML report to see the "Fix with AI" attachment under any failed test.
3. From there, simply copy and paste the prompt into ChatGPT or GitHub Copilot, or use Copilot's edits mode to automatically apply the code changes.
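For step 1, a spec file would import the extended `test` instead of the stock one (assuming the fixture is exported from a local `./fixtures` file):

```ts
import { expect } from '@playwright/test';
import { test } from './fixtures'; // the extended test with the fixWithAi fixture

test('homepage has Playwright in the title', async ({ page }) => {
  await page.goto('https://playwright.dev');
  await expect(page).toHaveTitle(/Playwright/);
});
```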