Planned number of tests is ignored #8
Comments
@RReverser I'm sorry, I don't quite understand the problem. Could you please give an example? I'd be glad to check it.
@zoubin It's simply that the test runner crashes due to some global issue, so the output is incomplete (not all the assertions were written out).
@RReverser Can you share an example? When I try to simulate by adding a
I think I understand.
@Hypercubed Well, I actually emitted the plan line right after the TAP version line, so this seems to be a slightly different case. Let me try later today.
How do other TAP transformers act in this case? tap-spec seems to do what I suggested: https://github.com/scottcorgan/tap-spec/blob/master/index.js#L70
Yes, test failures when the counts mismatch is what I originally asked for, too.
Add check for plans on output, fixes #8
So a simple artificial example to show the problem:

```shell
> echo "TAP version 13
1..4
ok 1
ok 2
" | ../js/node_modules/.bin/tap-summary
------------------------------------ Tests -------------------------------------
```

(still happening in published 2.1.1, not sure why this issue has been closed)
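The mismatch in this example (a `1..4` plan but only two assertions) could be detected by comparing the parsed plan range against the number of assertions seen. A minimal sketch in JavaScript; the `plans`/`asserts` field shapes mirror the parser output quoted later in this thread, and `planMismatch` is a hypothetical helper, not tap-summary's actual API:

```javascript
// Sketch: compare the planned assertion count against the assertions
// actually parsed. `results.plans` is an array of { from, to } plan
// objects; `results.asserts` is the array of parsed assertions.
function planMismatch(results) {
  // Sum every plan line: a "1..4" plan contributes 4, a "1..0" plan 0.
  const planned = results.plans.reduce(
    (sum, p) => sum + Math.max(0, p.to - p.from + 1), 0);
  const actual = results.asserts.length;
  if (results.plans.length === 0) {
    return { missingPlan: true, planned: 0, actual };
  }
  return { missingPlan: false, planned, actual, missing: planned - actual };
}

// The truncated example above: 4 planned, only 2 assertions seen.
const results = {
  plans: [{ type: 'plan', raw: '1..4', from: 1, to: 4 }],
  asserts: [{ number: 1, ok: true }, { number: 2, ok: true }]
};
console.log(planMismatch(results));
// { missingPlan: false, planned: 4, actual: 2, missing: 2 }
```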
@zoubin @Hypercubed Can you please reopen the issue?
@RReverser, I'll look into checking the plan counts. Have you seen other TAP consumers that handle the plan counts correctly? I'm wondering how they handle subtests or the concatenated output from multiple tests. I can't reopen; I'm just a user. 😞
The "plan" line can be written multiple times in the case of subtests or globbing, so we need to sum the plan counts and compare the total to the assertion count. Something like:
How does that sound? @RReverser @zoubin Edit: Probably should be
@RReverser This issue got automatically closed when I merged #9.
@Hypercubed I just made you a collaborator :)
@zoubin Thanks. I'll still go through the PR process. I was playing with this. I guess we need to decide: if the planned tests don't match the assertion count, do we:
Unfortunately, not yet (not in JS at least), but this is part of the spec so I'd really want them to.
IIUC the 2nd option; I'm for it. I was thinking about reporting every test starting from the mismatched one as failed, so that if N tests were planned but only M assertions were found, the summary would say that (N - M) tests failed.
@zoubin if it's alright with you, I'll work on this. No hard fails, just output in the summary and the proper exit code. If plans exceed asserts, the difference counts as fails. If no plan is found, it reports that and the exit code is set to 1. I'm not sure there can ever be a case where the asserts exceed the plans; that should already be a fail.
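The exit-code rules proposed above could be sketched as follows (a minimal illustration under the stated assumptions; `exitCode` and its parameters are hypothetical names, not tap-summary's implementation):

```javascript
// Sketch of the proposed rules:
// - no plan line found: report it and exit 1
// - planned assertions exceeding those seen count as failures
// - any failure (real or from the shortfall) means exit 1
function exitCode(planned, assertsSeen, failed, hasPlan) {
  if (!hasPlan) return 1;
  const missing = Math.max(0, planned - assertsSeen);
  return (failed + missing) > 0 ? 1 : 0;
}

console.log(exitCode(4, 2, 0, true));  // 2 planned assertions missing -> 1
console.log(exitCode(4, 4, 0, true));  // all planned assertions seen  -> 0
console.log(exitCode(0, 0, 0, false)); // no plan line at all          -> 1
```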
@Hypercubed I'm also with the 2nd option. Just for your information, there is a special case:

```javascript
test('empty', function (t) {
  t.end()
})
```

The plan line will be
If it is always
Then planned will be '>= 0'.
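The empty-test special case comes down to how the plan line is parsed. A sketch of plan-line parsing that keeps planned counts at zero or above (`parsePlan` is a hypothetical helper; per the TAP format, a `1..0` plan declares zero tests):

```javascript
// Parse a TAP plan line like "1..4" or "1..0" into a planned count.
// "1..0" (e.g. a test that only calls t.end()) plans zero assertions,
// so the planned count is always >= 0.
function parsePlan(line) {
  const m = /^(\d+)\.\.(\d+)/.exec(line);
  if (!m) return null; // not a plan line
  const from = Number(m[1]);
  const to = Number(m[2]);
  return Math.max(0, to - from + 1);
}

console.log(parsePlan('1..4')); // 4
console.log(parsePlan('1..0')); // 0 (empty test)
console.log(parsePlan('ok 1')); // null (not a plan line)
```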
The issue closed again when #10 was merged. |
Looks so, thanks @Hypercubed! One rather unrelated question/issue: is it by design that tap-summary doesn't support output of just
but does support
? The spec allows the first one as valid too, whereas comment lines are optional and their behavior is implementation-dependent (I'm outputting TAP from a custom non-JS producer according to the spec, so I had to match the tape format and add comments just so the output would be accepted by tap-summary).
tap-summary is supposed to be a reporter, which consumes the output of a parser which processes the TAP output. |
The reason I asked here is because

```json
{
  "tests": [],
  "asserts": [
    { "type": "assert", "raw": "ok 1 test 1", "ok": true, "number": 1, "name": "test 1", "test": 0 },
    { "type": "assert", "raw": "ok 2 test 2", "ok": true, "number": 2, "name": "test 2", "test": 0 },
    { "type": "assert", "raw": "ok 3 test 3", "ok": true, "number": 3, "name": "test 3", "test": 0 },
    { "type": "assert", "raw": "ok 4 test 4", "ok": true, "number": 4, "name": "test 4", "test": 0 }
  ],
  "versions": [
    { "type": "version", "raw": "TAP version 13" }
  ],
  "results": [],
  "comments": [],
  "plans": [
    { "type": "plan", "raw": "1..4", "from": 1, "to": 4 }
  ],
  "pass": [
    { "type": "assert", "raw": "ok 1 test 1", "ok": true, "number": 1, "name": "test 1", "test": 0 },
    { "type": "assert", "raw": "ok 2 test 2", "ok": true, "number": 2, "name": "test 2", "test": 0 },
    { "type": "assert", "raw": "ok 3 test 3", "ok": true, "number": 3, "name": "test 3", "test": 0 },
    { "type": "assert", "raw": "ok 4 test 4", "ok": true, "number": 4, "name": "test 4", "test": 0 }
  ],
  "fail": []
}
```
@RReverser Thanks for the information. I'll check it later in #11
Got issues where tests were silently failing at 60% of the suite, and tap-summary reported as if all the tests passed (it's hard to notice the problem unless you pay attention to the number reported). As per the TAP specification, this is exactly the kind of problem the plan is supposed to solve: the reporter should compare the total number of assertions with the count given in the plan and report a failure if they don't match (or if the plan wasn't provided at all; it should appear either at the beginning or at the very end of the suite).