Add option to re-run only the tests which failed during last run #329
Doing this gracefully in the test runner as it exists today would require a lot of infrastructure for creating hidden state: some sort of notion of a …
Is it possible to get the priority upped? It's been over a year since the original issue, and I've started using IntelliJ just to execute the failed tests.
Just FYI, WebStorm provides this feature. AFAIK the test runner provides a JSON API that is used by WebStorm, for example. Perhaps a simple command line tool could launch the test runner in that way; it would probably not be too hard to build an external Dart command line tool that does the same, using the JSON API of the test runner to get the information about failed tests and run those again.
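A minimal sketch of such a wrapper, assuming the JSON reporter's `testStart`/`testDone` events and a `pub run test` invocation (the wrapper itself and its file layout are invented here, and as far as I know multiple `--name` flags are intersected rather than unioned, hence the single alternation regex):

```dart
// Hypothetical external wrapper: run the test runner once with the JSON
// reporter, collect the names of failed tests, then re-run only those by
// building a single --name regex.
import 'dart:convert';
import 'dart:io';

Future<void> main(List<String> args) async {
  // First pass: run everything with the machine-readable JSON reporter.
  final first = await Process.start(
      'pub', ['run', 'test', '--reporter', 'json', ...args]);
  first.stderr.drain(); // discard stderr so the child process can't block

  final namesById = <int, String>{}; // testID -> name, from testStart events
  final failed = <String>[];

  await for (final line in first.stdout
      .transform(utf8.decoder)
      .transform(const LineSplitter())) {
    final event = jsonDecode(line) as Map<String, dynamic>;
    if (event['type'] == 'testStart') {
      final test = event['test'] as Map<String, dynamic>;
      namesById[test['id'] as int] = test['name'] as String;
    } else if (event['type'] == 'testDone' &&
        event['hidden'] != true &&
        event['result'] != 'success') {
      failed.add(namesById[event['testID'] as int]!);
    }
  }
  await first.exitCode;

  if (failed.isEmpty) {
    print('No failures to re-run.');
    return;
  }

  // Second pass: join the escaped names into one regex, since passing
  // several --name flags narrows (a test must match all of them).
  final pattern = failed.map(RegExp.escape).map((n) => '^$n\$').join('|');
  final rerun = await Process.start(
      'pub', ['run', 'test', '--name', pattern, ...args],
      mode: ProcessStartMode.inheritStdio);
  exit(await rerun.exitCode);
}
```

This keys everything off the test name, which ties into the uniqueness question discussed further down the thread.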
This would still require a lot of infrastructure, and there are a lot of other features I could spend an equivalent amount of time on for probably more benefit in the long run. In general, I think this is a better feature for other tools that have a more graceful way to preserve state: either wrappers like WebStorm, or live test runners that use the test backend API.
Yes, I meant to build such a tool outside of the test package.
@matanlurey @jakemac53 The 'Rerun failed tests' feature is there in IntelliJ IDEA and WebStorm. @matanlurey The test framework requires that your test files (those that contain …
Yeah! I use this all the time. Sometimes, though, I'm running things on the CLI, and it would be nice to have the same functionality available :)
Another workflow idea: is it possible to dump an output of the failing tests such that it can be easily fed back in as the cases to run? That way the state isn't managed by the test runner itself. Bonus points if failing sets of tests can be easily merged, etc.
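One way that workflow could look, assuming each run dumps its failing test names to a plain text file with one name per line (the file format and script name here are invented): a tiny tool unions any number of those dumps and prints a `--name` regex that can be fed straight back to the runner.

```dart
// Hypothetical helper: merge one or more "failed test" dump files and print
// a single --name regex covering the union, e.g.
//   pub run test --name "$(dart merge_failures.dart run1.txt run2.txt)"
import 'dart:io';

void main(List<String> failureFiles) {
  // Union the test names from every dump file, skipping blank lines.
  final union = <String>{
    for (final path in failureFiles) ...File(path).readAsLinesSync(),
  }..removeWhere((name) => name.trim().isEmpty);

  if (union.isEmpty) return;
  print(union.map(RegExp.escape).map((name) => '^$name\$').join('|'));
}
```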
This is something we intend to fix. It would be useful to also output which tests failed, without the exception; something like a printed summary.
Note that we now have the common … This could be as simple as a JSON map of test keys to statuses (could just be a bool, or an int for more than pass/fail). Test keys would need to take into account the suite, name, and platform at a minimum. The main question is whether we only care about exactly one previous run, or want to store the previous state for all tests ever run. It would be hard to know when we could evict a test from the cache (if a test got renamed, for instance, we would potentially keep its old key around forever). We could alternatively store only a list of failed tests, since that is all we really care about.
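As a concrete illustration of the shape being described (the file location, key format, and helper names below are all made up), the cache could literally be a JSON map from a composite key of suite path, platform, and test name to a pass/fail bool, with the "failures only" alternative just being the keys whose value is false:

```dart
// Hypothetical cache shape, e.g. stored at .dart_tool/test/last_run.json:
//   {"test/foo_test.dart|vm|parses empty input": true, ...}
import 'dart:convert';
import 'dart:io';

String testKey(String suitePath, String platform, String testName) =>
    '$suitePath|$platform|$testName';

Map<String, bool> readStatuses(File cache) => cache.existsSync()
    ? (jsonDecode(cache.readAsStringSync()) as Map<String, dynamic>)
        .map((key, value) => MapEntry(key, value as bool))
    : <String, bool>{};

void writeStatuses(File cache, Map<String, bool> statuses) =>
    cache.writeAsStringSync(jsonEncode(statuses));

// The "only store failures" alternative is just the keys mapped to false.
Iterable<String> failedKeys(Map<String, bool> statuses) =>
    statuses.entries.where((entry) => !entry.value).map((entry) => entry.key);
```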
What is the "test key" here? Do we already have a unique ID for test cases? If it's based on the string names, I don't think those are guaranteed to be unique, are they?
True, they aren't guaranteed to be unique, but I don't think that was ever intentionally supported either. I don't think we have any way to disambiguate two tests in the same suite with the same name, nor do I think that is common or worth supporting in perpetuity.
We could, for instance, do this as an opt-in feature with a flag called something like …
Yet another option would be a very basic interactive mode for the test runner which supports re-running all tests, or just the failing tests, etc.
Would this be usable by editors (e.g. a long-running process they can send re-run requests to)?
Sure, you could imagine it having two modes: a user mode which just watches for keystrokes (r, etc.), and a machine mode that implements some sort of protocol.
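A rough sketch of what the user-mode half might look like, assuming the runner exposes callbacks for "run everything" and "run the last failures" (both callback names and the key bindings are placeholders):

```dart
// Placeholder sketch of the keystroke-driven "user mode"; runAll and
// runFailures stand in for whatever the runner would actually expose.
import 'dart:convert';
import 'dart:io';

void watchKeystrokes({
  required void Function() runAll,
  required void Function() runFailures,
}) {
  // Read raw, unbuffered keystrokes (only works when attached to a terminal).
  stdin.echoMode = false;
  stdin.lineMode = false;

  stdin.transform(utf8.decoder).listen((key) {
    switch (key) {
      case 'a': // re-run everything
        runAll();
        break;
      case 'r': // re-run only the tests that failed last time
        runFailures();
        break;
      case 'q': // quit
        exit(0);
    }
  });
}
```

The machine mode would presumably reuse the existing JSON protocol over the same long-running process, so editors could send re-run requests instead of keystrokes.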
Just an idea, but I think it's a relatively common case that you only want to re-run the tests which failed during the last run (because you just made some changes to fix them). Today you either end up re-running all tests (which you eventually want to do anyway, but ideally not until after you fix the specific ones that were failing), or you manually specify each of the tests you want to retry, which is cumbersome.