You might have noticed over the past few weeks that our test reports look different. Failed tests are clustered, the test run details page is packed with more information, and new buttons have appeared leading to new features. Let me explain.
From the start we knew that after test generation and execution, we would have to tackle test maintenance next. It's the reason many teams stay away from end-to-end testing entirely: it's time-consuming and annoying. So we made it our goal to cut the time spent on fixing tests. By a lot.
I'm really happy to show you the first set of features:
When tests break, we automatically pre-classify the root cause of the failure
No more hitting a red wall of broken tests. We sift through the failures, identify the culprit, and cluster the tests so you can take action fast. No more digging through logs in search of root causes.
Not sure where that failed click was supposed to go? Lost the context of a test case?
Maybe you manage many tests, or you didn't create the test yourself and don't have the context. `Compare runs` shows you the timeline history of previous runs, so you can step through the flow under test for that particular test case.
We've put everything you need to debug a test case in one place.
An annotated timeline to step through the test run sequence. Locator code. The error log. An agent summary of what it tried to do and why it thinks it failed. The original prompt. Options to compare runs, or to investigate further with Playwright traces or by debugging locally.
From here you can also let the Agent have a go at healing the test automatically with our `attempt auto-fix` feature.
We tested the new tools on a SaaS customer's test suite to see roughly how much time they could save. I've put the results into a basic graph.
—
Let me know what else would help you maintain your test suite faster and better. Simply reply to this email; I'm genuinely interested in your input.