fix: fixed quiet_mode test to ignore unimportant logs #3795
Conversation
Signed-off-by: Meet Soni <[email protected]>
Codecov Report: All modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##             main    #3795      +/-   ##
==========================================
- Coverage   80.37%   80.20%   -0.18%
==========================================
  Files         808      808
  Lines       11977    11983       +6
  Branches     1595     1598       +3
==========================================
- Hits         9627     9611      -16
- Misses       1923     1955      +32
+ Partials      427      417      -10

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
This test doesn't really test what we need here.
If you set stdout/stderr to /dev/null then, sure, you get the same effect that the -q flag is supposed to have. But it doesn't test whether -q itself is working, because it's dumping all the output into the void.
It might help to imagine that someone has quiet mode hooked up to an email bot: you want it to email you if there's some kind of catastrophic failure of cve-bin-tool that we couldn't predict. But you don't want it to mail you every day with boring "downloading" messages or "unpacked a file" messages or "connection failed, retrying." For any error we can predict (like periodic network timeouts), you only want an appropriate error code to tell you if something went wrong (e.g. imagine the error codes being shown on a dashboard of thousands of runs).
In our case, the quiet mode test is supposed to make sure that all output goes through an appropriate log priority (e.g. no rogue print statements), and that all exceptions are handled correctly (e.g. no surprise Python stack traces; we should emit a log.error, and if the issue halts the tool entirely we should set an appropriate error code).
There was some misunderstanding: the problem is with our implementation of quiet_mode, so I'm adding a commit to address that.
Regarding stack traces, there is inconsistency in the code: many parts lack effective error handling, and corresponding issues can be filed.
Signed-off-by: Meet Soni <[email protected]>
I agree that there's a lot of room for improvement, but I should probably be clear about my priorities:
In that last case, violating quiet mode would be appropriate regardless.
Yeah, commits to address the extra log messages are the correct way to go here. Setting stderr/stdout to /dev/null would mean that even if quiet mode was broken this test would pass, and that would make it pretty useless as a regression test.
Yes, by using critical logs we can reduce the noise without affecting debugging quality. I think these should be handled separately, as an issue for network-related problems already exists: #1431
Also, do we want anything else in this one?
Yes, I think this will work better. Thanks!
Signed-off-by: Meet Soni <[email protected]>
Fixes: #2382