Improve error message when tests use closed stdout/stderr (capture) #5743
Comments
Thanks @gaborbernat for the detailed description. Indeed it seems like pytest should automatically clean up the handlers somehow; not sure how to do it, though, while ensuring we don't break existing things... cc @Thisch |
I don't think pytest should automatically clean up any of a user's resources as it's arguably a correctness problem with the test suite or user code |
If not cleanup, we should alter the capsys stream to raise a more meaningful error on write than "stream closed", and maybe point out the test that the capsys was pulled from? |
yeah I'm fine with a better error message -- somehow notify the user that they're reassigning globals to the captured streams |
Oh right, I agree. Initially I thought we were not cleaning our handlers correctly, but I misunderstood the issue. |
Actually IIUC the tests are not reassigning sys.stdout/sys.stderr at all. |
Needs more investigation, but if the message originates from here: Lines 420 to 427 in 2d613a0
We just need to raise a more meaningful message if the buffer is already closed at this point. |
yeah I didn't say they're modifying the sys module |
Yeah, we basically just need an exception there saying this is now a defunct stream, and pointing out which test it came from, so the user can clean up properly in that test to fix the issue. |
Gotcha! 👍
Agreed. 👍 |
Do we have a test case to reproduce this? I know it has been described in general terms above, and the members discussing it seem to be very aware of the problem, but I don't know it, and if I want to fix it I would like to be sure that I really fix it. My proposed approach is to write a unit test that reproduces the problem, then add a try/except block and emit something useful as an error. Please note that I am a newbie; be gentle. |
I can't remember how to make it outright crash, but this is close:

import logging
import sys

logger = logging.getLogger(__name__)


def test(capsys):
    logger.addHandler(logging.StreamHandler(sys.stdout))
    logger.setLevel(logging.DEBUG)


def test2():
    logger.error('ohai')

$ pytest t.py -s
============================= test session starts ==============================
platform linux -- Python 3.6.8, pytest-5.3.2, py-1.8.0, pluggy-0.13.1
rootdir: /tmp/x
collected 2 items
t.py .--- Logging error ---
Traceback (most recent call last):
File "/usr/lib/python3.6/logging/__init__.py", line 996, in emit
stream.write(msg)
ValueError: I/O operation on closed file.
Call stack:
File "/tmp/x/venv/bin/pytest", line 8, in <module>
sys.exit(main())
File "/tmp/x/venv/lib/python3.6/site-packages/_pytest/config/__init__.py", line 92, in main
config=config
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/hooks.py", line 286, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/manager.py", line 93, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/manager.py", line 87, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/tmp/x/venv/lib/python3.6/site-packages/_pytest/main.py", line 239, in pytest_cmdline_main
return wrap_session(config, _main)
File "/tmp/x/venv/lib/python3.6/site-packages/_pytest/main.py", line 196, in wrap_session
session.exitstatus = doit(config, session) or 0
File "/tmp/x/venv/lib/python3.6/site-packages/_pytest/main.py", line 246, in _main
config.hook.pytest_runtestloop(session=session)
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/hooks.py", line 286, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/manager.py", line 93, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/manager.py", line 87, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/tmp/x/venv/lib/python3.6/site-packages/_pytest/main.py", line 270, in pytest_runtestloop
item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/hooks.py", line 286, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/manager.py", line 93, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/manager.py", line 87, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/tmp/x/venv/lib/python3.6/site-packages/_pytest/runner.py", line 81, in pytest_runtest_protocol
runtestprotocol(item, nextitem=nextitem)
File "/tmp/x/venv/lib/python3.6/site-packages/_pytest/runner.py", line 96, in runtestprotocol
reports.append(call_and_report(item, "call", log))
File "/tmp/x/venv/lib/python3.6/site-packages/_pytest/runner.py", line 182, in call_and_report
call = call_runtest_hook(item, when, **kwds)
File "/tmp/x/venv/lib/python3.6/site-packages/_pytest/runner.py", line 207, in call_runtest_hook
lambda: ihook(item=item, **kwds), when=when, reraise=reraise
File "/tmp/x/venv/lib/python3.6/site-packages/_pytest/runner.py", line 234, in from_call
result = func()
File "/tmp/x/venv/lib/python3.6/site-packages/_pytest/runner.py", line 207, in <lambda>
lambda: ihook(item=item, **kwds), when=when, reraise=reraise
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/hooks.py", line 286, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/manager.py", line 93, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/manager.py", line 87, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/tmp/x/venv/lib/python3.6/site-packages/_pytest/runner.py", line 131, in pytest_runtest_call
item.runtest()
File "/tmp/x/venv/lib/python3.6/site-packages/_pytest/python.py", line 1435, in runtest
self.ihook.pytest_pyfunc_call(pyfuncitem=self)
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/hooks.py", line 286, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/manager.py", line 93, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/manager.py", line 87, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/tmp/x/venv/lib/python3.6/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/tmp/x/venv/lib/python3.6/site-packages/_pytest/python.py", line 166, in pytest_pyfunc_call
result = testfunction(**testargs)
File "/tmp/x/t.py", line 14, in test2
logger.error('ohai')
Message: 'ohai'
Arguments: ()
.
============================== 2 passed in 0.01s =============================== |
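(As a side note, not part of the original comment: a minimal sketch of how the leaking test in the repro above could clean up after itself, so that later tests never write to pytest's already-closed capture stream. The `handler` variable and the try/finally are illustrative additions, not the original code.)

```python
import logging
import sys

logger = logging.getLogger(__name__)


def test(capsys):
    # Attach the handler only for the duration of this test, so it never
    # outlives the sys.stdout replacement installed by pytest's capturing.
    handler = logging.StreamHandler(sys.stdout)
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)
    try:
        logger.debug("captured while this test runs")
    finally:
        logger.removeHandler(handler)


def test2():
    logger.error("ohai")  # no leaked handler, so no write to a closed stream
```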
Cool, thanks. I know this is off-topic, but what is the protocol for reserving an issue for your own development? There is nothing on the contributor page. |
There isn't really one beyond what you've already done :) "I want to work on this" is usually sufficient! |
if you need specific help let us know, would be happy to guide you through! |
OK, at work on it and will report back. |
@betchern0t |
@blueyed, while your proposed solution is undoubtedly superior, I was looking for a simple task to get me started. I have lots of dev/IT experience, but not so much Python and no experience with this codebase. I am quite happy for someone else to supersede my work when they are able. I have found a suitable place to intervene and am just about to look at how to get the test name, test file name, and line number to place in the error message. |
imo making it "just work" is not the correct solution here -- the code which triggers this has a bug (it leaks global state as a side effect of the test). Improving the error message is (imo) the only solution here. |
I have now looked at this issue and #14 and #5502. It seems to me that -- as has been said elsewhere -- the issue is between pytest hijacking std* and adding a stream buffer, and other things assuming that std* are standard file descriptors. As such there are actually a number of places where we could possibly catch the exception and write an indicative error message. But this works most of the time, so I need a valid test case to indicate which path through the code it is. My thanks to @asottile, but that test case runs perfectly for me. I tried some of the other test cases in the other two issues, but the only one I can make work/fail is:

import atexit
import logging

LOGGER = logging.getLogger(__name__)


def test_1():
    pass


atexit.register(lambda: LOGGER.error("test in atexit"))
logging.basicConfig()

Looking at this, I am not sure this is a relevant test. My understanding of this code is that it attempts to log something at a very late stage of process teardown, which I would expect to produce this kind of error. What is more, there is nothing to say beyond what is produced; there isn't a specific test to point to. |
IIUC we don't need to actually catch the exception anywhere, we just need to raise a better error message here: Lines 420 to 427 in 622995a

Instead of:

self.buffer.write(obj)

something like:

try:
    self.buffer.write(obj)
except (IOError, ValueError) as e:
    raise type(e)(
        "Error from stream created by test {nodeid}".format(nodeid=self.test_nodeid)
    )

So the error message will help the user find the offending test. |
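(As an aside, a self-contained sketch of this idea, assuming a hypothetical wrapper class and a `test_nodeid` attribute filled in by the capture machinery; this is not pytest's actual implementation.)

```python
import io


class CaptureStream:
    """Illustrative stand-in for pytest's capture stream wrapper."""

    def __init__(self, buffer, test_nodeid):
        self.buffer = buffer
        self.test_nodeid = test_nodeid  # e.g. "t.py::test"

    def write(self, obj):
        try:
            return self.buffer.write(obj)
        except (IOError, ValueError) as e:
            # Re-raise the same exception type, but point back at the test
            # whose captured stream is being used after it was closed.
            raise type(e)(
                "write to a stream captured during test {!r}; that stream was "
                "closed when the test finished".format(self.test_nodeid)
            )


# Demo: writing after the underlying buffer is closed yields the hinted error.
buf = io.StringIO()
stream = CaptureStream(buf, "t.py::test")
buf.close()
try:
    stream.write("ohai")
except ValueError as exc:
    print(exc)
```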
Thank you, @nicoddemus. The issue is that my analysis found several places where this error can occur under different circumstances. I would have to go back and look to see exactly which places I found, but at least two are in the capture file. I had pretty much what you suggest in the first place, before looking at the second. The issue is that there isn't a hard fault, so I could end up patching this in many locations and still not fix the issue. |
You mean several places in capture.py? I would focus only on the place reported by the issue, because it is the situation described here. |
@nicoddemus, I mean several points in capture.py. There is nothing definitive in the issue to point to the bug. Yes, I could put a try/except block in write -- what I call catching the exception -- but my testing suggested other locations. |
In case this helps as another example, I can't run pytest on Windows 10 with pywinauto without seeing the Fatal exception/ValueError. Minimum example:

import logging

import pytest
from pywinauto.application import Application


def test_case():
    assert True

Output:

requirements.txt

The below log output is from the extended script shown here: pywinauto/pywinauto#858 (comment)

Windows Error Message: |
Thanks @KyleKing, can you please confirm that passing |
Thanks, the fatal error still occurs with the I think the root cause is due to changes in |
I'm having trouble replicating the Fatal Exception from comtypes without running pytest. I've experimented with calling the method in a thread, but the script doesn't report an error. I created an example test case that uses a snippet from pywinauto, but doesn't require the full package to be imported:

import comtypes
import comtypes.client


def test_case():
    comtypes.client.GetModule('UIAutomationCore.dll')
    ui_automation_client = comtypes.gen.UIAutomationClient
    iuia = comtypes.CoCreateInstance(
        ui_automation_client.CUIAutomation().IPersist_GetClassID(),
        interface=ui_automation_client.IUIAutomation,
        clsctx=comtypes.CLSCTX_INPROC_SERVER,
    )
    iuia.GetRootElement()
    assert True

What is pytest doing that is leading to the Fatal Exception? (I've tried modifying the related package's source code, but debugging this issue is tricky with such an unhelpful traceback/error. Should I create a new issue?)

Edited 23 Jan 2020 and 24 Jan 2020. Update 24 Jan 2020: I don't know why this fatal error was occurring, but I disabled the pop-ups, which allows me to run through my pywinauto test suite and carry on 😄 |
See pytest-dev/pytest#5743 for the details and an explanation of the fix.
@nicoddemus @asottile
This is still an issue with the latest pytest across tests (e.g. when one test changes the logging config and adds a stream handler pointing to stdout, a subsequent test that doesn't set up logging and tries to write to the log will fail).
I think the issue boils down to the fact that if you set up a StreamHandler and you enable capsys, the actual output stream will be a pytest stream that is closed and thrown away at the next test in the test suite. So when a subsequent test tries to write to it, you get this error. Adding the following fixture fixed this:
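(The fixture itself did not survive in this copy of the comment; the snippet below is a reconstruction consistent with the description -- an autouse fixture that removes any handlers a test added to the root logger -- and the fixture name is illustrative.)

```python
import logging

import pytest


@pytest.fixture(autouse=True)
def _reset_root_logger_handlers():
    # Snapshot the root logger's handlers before the test and strip any
    # handlers added during the test, so a handler pointing at pytest's
    # capture stream cannot leak into later tests.
    root = logging.getLogger()
    before = root.handlers[:]
    yield
    for handler in root.handlers[:]:
        if handler not in before:
            root.removeHandler(handler)
```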
Should pytest always do this automatically? Granted, for this to work ideally you would need to do it for all loggers, not just the root one. In this case, for me, this was enough, as I had only set up a custom stream handler on the root logger.
In an ideal world, though, I would expect pytest to clean up logging configuration at the end of tests that requested the caplog fixture.
Originally posted by @gaborbernat in #14 (comment)