
Configure and implement integration tests for DashR-related repos #82


Closed
rpkyle opened this issue Apr 10, 2019 · 2 comments

Comments

rpkyle (Contributor) commented Apr 10, 2019

Background

Commits to master and dev branches of the DashR repo are currently permitted without passing code and style checks.

This is undesirable; we should implement small, app-based tests that can be automated, as in Dash for Python. This will likely involve setting up both CircleCI and Percy, and creating a repo of Python integration test scripts for DashR.

These lines demonstrate how a Dash for Python app is inlined; we'd need to modify this a bit for DashR, but it's relatively straightforward:

https://github.com/plotly/dash/blob/6cee22c4518714770567405f77eacf42b7dc8c51/tests/test_integration.py#L37-L51

The exception is the call_count pattern, here:

https://github.com/plotly/dash/blob/6cee22c4518714770567405f77eacf42b7dc8c51/tests/test_integration.py#L53
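The reason call_count is the exception is that it relies on state shared between the app process and the test runner. A minimal stdlib-only sketch of the idea (the dash tests use multiprocessing.Value for this; the callback below is a hypothetical stand-in, not the actual test code):

```python
from multiprocessing import Value

# Shared counter, visible to both the app process and the test runner.
call_count = Value('i', 0)

def update_output(value):
    # Hypothetical callback body: bump the shared counter on every invocation.
    with call_count.get_lock():
        call_count.value += 1
    return value

update_output('hello')
update_output('world')
print(call_count.value)  # prints 2
```

This only works when the app and the test share a Python process tree, which is exactly what breaks once the app is written in R.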

And here is a sample call to Percy:

https://github.com/plotly/dash/blob/6cee22c4518714770567405f77eacf42b7dc8c51/tests/test_integration.py#L63

Proposed workflow

The following workflow is suggested, given that official Percy bindings exist for Python but not for R:

  • Implement a minimalist CircleCI configuration for the R repos (easier, portable); a sample config is here.
  • Modify L37-51 of https://github.com/plotly/dash/blob/master/tests/IntegrationTests.py to support R testing, and create a DashR-test repo with the updated scripts
  • test_integration.py is the most important one to implement, the others (test_configs.py, test_react.py, test_resources.py) can follow later
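For the first bullet, a minimal CircleCI 2.0 configuration might look roughly like the following (the image name, install step, and test path are assumptions for illustration, not the actual sample config):

```yaml
version: 2
jobs:
  build:
    docker:
      # Hypothetical image: Python drives the tests, R runs the app.
      - image: circleci/python:3.6
    steps:
      - checkout
      - run:
          name: Install R
          command: sudo apt-get update && sudo apt-get install -y r-base
      - run:
          name: Run integration tests
          # Hypothetical path to the ported test scripts.
          command: python -m pytest tests/test_integration.py
```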

Implementation-specific details

  • We might consider verbosely printing JSON for these simple apps into an htmlDiv, or at least relevant prop and value fields; this would provide a bit more data for debugging purposes
  • The return value of callbacks can be made global in R by using the superassignment operator <<-
  • When we eventually implement linting support, this workflow for TravisCI is likely adaptable to the process of configuring lintr to work with CircleCI

(While we don't have a concrete style guide for R like PEP8, and R linting tools aren't quite as mature as pylint or flake8, RStudio does provide the lintr package, and we should adopt it as well.)
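As a sketch, a lintr step could be wired into the CircleCI config along these lines (the lint_package() call and the exit-status handling are assumptions; lintr does not fail the process on findings by default, so we'd have to translate lint count into an exit code ourselves):

```yaml
- run:
    name: Lint R sources
    command: |
      Rscript -e 'lints <- lintr::lint_package(); print(lints); quit(status = as.integer(length(lints) > 0))'
```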

@alexcjohnson

@rpkyle rpkyle self-assigned this Apr 10, 2019
alexcjohnson (Collaborator) commented

Thanks @rpkyle, really looking forward to having this set up! Just a few points of clarification:

  • The setup you describe above is for end-to-end integration tests, and the idea is to have Python launch a dashR app, then interact with that app via Python the same way we do in dashPy tests. The only pattern in the dashPy tests that seems hard this way is stuff like call_count because it's direct communication between two Python processes (the app and the test runner). The obvious solution to this is to simply have the dashR app put that info into the page (as a callback output) and have the test code inspect the page, like all the rest of the tests.
  • There may be some tests that make sense to do just purely in R - things like errors that are thrown when you set up callbacks wrong. For this purpose it probably makes sense to have a separate test suite using testthat or whatever. But that's kind of a known process, and I think end-to-end testing is more important in the short run so I'd concentrate on that for now.
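The "put call_count into the page" idea above can be sketched in plain Python (a dict stands in for the rendered DOM here; in the real test the runner would read the element's text back via Selenium):

```python
# Hedged sketch: instead of sharing a counter across processes, the app
# renders the count into the page and the test inspects the page.

page = {}        # stands in for the rendered DOM
n_calls = 0      # app-side state, local to the app process

def update_output(value):
    # Hypothetical callback: write both the output and the call count
    # into the page as ordinary callback outputs.
    global n_calls
    n_calls += 1
    page['output'] = value
    page['call-count'] = str(n_calls)

update_output('hello')
update_output('world')
print(page['call-count'])  # prints 2
```

Because the count travels through the page rather than through shared memory, it no longer matters which language the app process is written in.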

rpkyle (Contributor, Author) commented Sep 3, 2019

We've implemented several integration tests already with @byronz's help, and adding them is now firmly part of the feature implementation workflow. A few examples:

In addition, we've begun adding unit tests using R's testthat package to replace the old ones which are now largely stale due to API changes:

The latter are initiated in CircleCI after integration tests finish, but it might make sense to have these launch first:

```yaml
- run:
    name: 🔎 Unit tests
    command: |
      sudo Rscript -e 'testthat::test_dir("tests/")'
```
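If the ordering were reversed, a CircleCI workflows stanza could gate the integration job on the unit tests; a rough sketch (the job names here are hypothetical, not our actual config):

```yaml
workflows:
  version: 2
  test:
    jobs:
      - unit-tests
      - integration-tests:
          requires:
            - unit-tests  # unit tests launch first; integration waits on them
```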

Closing as the current workflow satisfies the original focus of this issue.
