test_irradiance.test_perez_components fails #390

Closed
KonstantinTr opened this issue Oct 27, 2017 · 7 comments

@KonstantinTr
Contributor

Here is the error message I keep receiving

E       AssertionError: Series are different
E       
E       Series values are different (25.0 %)
E       [left]:  [0.0, 31.4635076992, 0.0, 45.4597896363]
E       [right]: [0.0, 31.46046871, nan, 45.45539877]
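The telling mismatch is at position 2, where the computed series has 0.0 but the expected series has NaN. A minimal sketch of that kind of comparison failure, reusing the values from the error message above:

```python
import numpy as np
import pandas as pd
from pandas.testing import assert_series_equal

# One side has 0.0 where the other has NaN, and the non-zero values also
# differ beyond the default tolerance, so the comparison raises.
left = pd.Series([0.0, 31.4635076992, 0.0, 45.4597896363])
right = pd.Series([0.0, 31.46046871, np.nan, 45.45539877])

try:
    assert_series_equal(left, right)
except AssertionError:
    print("Series are different")
```

Note that `assert_series_equal` treats a NaN on one side against a 0.0 on the other as a difference, regardless of tolerance settings.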
@cwhanse
Member

cwhanse commented Oct 28, 2017

I don't know why the test is producing different results. But I think the fix is to remove the shortcut that was taken in test_irradiance. At the head of the file are calls to solar_position.get_solarposition and .get_clearsky that provide ephemeris and irradiance values for the unit tests; it is possible that a change to one of these functions is causing the test failure. In my view, it's not the best design for a unit test to implicitly rely on other functions.

What would be better is to explicitly code the solar position and irradiance values in the test file. Over time these values may diverge from the functions we use to create them, but we'll have unit tests that are isolated to one function.
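A rough sketch of that isolation pattern, with a hypothetical stand-in function and made-up values in place of pvlib's real inputs:

```python
import pandas as pd
from pandas.testing import assert_series_equal

# Hypothetical stand-in for the function under test.
def scale_irradiance(dni, factor):
    return dni * factor

# Inputs and expected outputs are pinned as literals, so the test no longer
# depends on get_solarposition()/get_clearsky() running at module import.
times = pd.DatetimeIndex(['2017-10-27 12:00', '2017-10-27 13:00'])
dni = pd.Series([900.0, 850.0], index=times)       # made-up input values
expected = pd.Series([450.0, 425.0], index=times)  # precomputed by hand

assert_series_equal(scale_irradiance(dni, 0.5), expected)
print("test isolated to one function")
```

If the upstream functions later change, these literals stay fixed, so a failure here points unambiguously at the function under test.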

Other opinions?

@wholmgren
Member

I agree with Cliff regarding the bad test design, though I'm less sure that it's really the cause of this particular test failure.

It could be that the failure is related to a difference between your environment and the test environments. Can you run pandas.show_versions() and paste the results here?
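For reference, that diagnostic is a single call that prints the Python, OS, and dependency versions (including bottleneck) to stdout:

```python
import pandas as pd

# Prints installed versions of pandas and its optional dependencies.
pd.show_versions()
```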

@wholmgren wholmgren added this to the 0.5.2 milestone Nov 1, 2017
@KonstantinTr
Contributor Author

INSTALLED VERSIONS

commit: None
python: 3.6.3.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 94 Stepping 3, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None

pandas: 0.20.3
pytest: 3.2.1
pip: None
setuptools: 36.5.0.post20170921
Cython: 0.26.1
numpy: 1.13.3
scipy: 0.19.1
xarray: None
IPython: 6.1.0
sphinx: 1.6.3
patsy: 0.4.1
dateutil: 2.6.1
pytz: 2017.2
blosc: None
bottleneck: 1.2.1
tables: 3.4.2
numexpr: 2.6.2
feather: None
matplotlib: 2.1.0
openpyxl: 2.4.8
xlrd: 1.1.0
xlwt: 1.3.0
xlsxwriter: 1.0.2
lxml: 4.1.0
bs4: 4.6.0
html5lib: 0.9999999
sqlalchemy: 1.1.13
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
pandas_gbq: None
pandas_datareader: None

@wholmgren
Member

It may be due to bottleneck's NaN sum rules: nan + nan = 0. Try removing bottleneck and rerunning the test. Also note that the new pandas 0.21 release disables bottleneck for sum operations.
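The rule in question can be illustrated with NumPy's NaN-skipping sum, which follows the same convention (bottleneck itself is not needed for the demonstration):

```python
import numpy as np

all_nan = np.array([np.nan, np.nan])

# A plain sum propagates NaN...
print(np.sum(all_nan))     # nan

# ...but a NaN-skipping sum treats an all-NaN input as 0.0, which is how a
# NaN expected in the test series can come back as 0.0 from the computation.
print(np.nansum(all_nan))  # 0.0
```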

@KonstantinTr
Contributor Author

I've removed the bottleneck lib and now the test passes.
@wholmgren Should I close the issue, or should we redesign the test per Cliff's suggestions?

@wholmgren
Member

Seems to me that we've found and solved the problem, so I suggest closing the issue if you're satisfied. There are a number of other tests that should be redesigned in the same way, so maybe we should make a separate issue for cataloging them.

@cwhanse
Member

cwhanse commented Nov 2, 2017

I agree - it merits a separate issue.
