Skipped flaky part of test_time #25894
Conversation
Codecov Report

@@           Coverage Diff           @@
##           master   #25894   +/-   ##
=======================================
  Coverage   91.47%   91.47%
=======================================
  Files         175      175
  Lines       52863    52863
=======================================
  Hits        48357    48357
  Misses       4506     4506
=======================================

Continue to review full report at Codecov.
Codecov Report

@@            Coverage Diff             @@
##           master   #25894      +/-   ##
==========================================
+ Coverage   91.47%   91.47%   +<.01%
==========================================
  Files         175      175
  Lines       52863    52863
==========================================
+ Hits        48357    48358       +1
+ Misses       4506     4505       -1
==========================================

Continue to review full report at Codecov.
@pytest.mark.xfail(strict=False, reason="Unreliable test")
def test_time_change_xlim(self):
    t = datetime(1, 1, 1, 3, 30, 0)
    deltas = np.random.randint(1, 20, 3).cumsum()
I believe the random ints for deltas are what cause the failures.
Potentially, though probably best to investigate after this PR.
ok, so let's see if this works
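If the random deltas discussed above do turn out to be the culprit, one way to make the data deterministic would be to draw from a seeded generator instead of the global one. This is a sketch only, not part of this PR; the fixture name, test name, and seed value are made up for illustration.

import numpy as np
import pytest


@pytest.fixture
def fixed_deltas():
    # Seed a local RandomState so every run draws the same values (the seed is arbitrary).
    rng = np.random.RandomState(42)
    # Same shape as the original call: three cumulative offsets drawn from [1, 20).
    return rng.randint(1, 20, 3).cumsum()


def test_deltas_are_stable(fixed_deltas):
    # Re-drawing with the same seed yields identical offsets, so the test data
    # no longer varies between CI runs.
    expected = np.random.RandomState(42).randint(1, 20, 3).cumsum()
    assert (fixed_deltas == expected).all()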
Workaround for #25875 to get CI passing. I kept the working part of the test intact and split off the failing piece into a separate test, which may be more explicit anyway.
I haven't been able to reproduce this locally, so I plan to either keep the original issue open or create a new one for a more permanent fix, which may require a full refactor of the test.
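As a rough illustration of the split described above: the test names and bodies below are placeholders rather than the actual pandas tests; only the xfail marker mirrors the diff shown earlier.

from datetime import datetime

import numpy as np
import pytest


def test_time_stable_part():
    # Placeholder for the portion of the original test that passes reliably;
    # it keeps running unconditionally on every CI build.
    t = datetime(1, 1, 1, 3, 30, 0)
    assert (t.hour, t.minute) == (3, 30)


@pytest.mark.xfail(strict=False, reason="Unreliable test")
def test_time_change_xlim():
    # Placeholder for the flaky portion. With strict=False, a pass is reported
    # as XPASS rather than an error and a failure is reported as XFAIL,
    # so CI stays green either way.
    t = datetime(1, 1, 1, 3, 30, 0)
    deltas = np.random.randint(1, 20, 3).cumsum()
    assert len(deltas) == 3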
@gfyoung