Integration test #88

Closed · 2 of 3 tasks

tcompa opened this issue Jun 15, 2022 · 5 comments

Comments

tcompa commented Jun 15, 2022

As of our last meeting today:

Let's add an integration test which runs on a small dataset (I would aim at something like 2 wells of 2x2 sites, with just a couple of Z levels and a handful of channels). The plan is to run yokogawa_to_zarr + illumination_correction + maximum_intensity_projection, and then to compare the three resulting zarr files with reference ones.
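
For concreteness, the comparison against reference zarrs could look roughly like the sketch below (the paths, file names, and the inner zarr component are made up for illustration; the real test would point at the actual outputs of the three tasks):

```python
import numpy as np
import zarr

def assert_zarr_matches_reference(result_zarr, reference_zarr, component="B/03/0/0"):
    """Load one image array from each zarr and compare pixel values."""
    result = zarr.open(f"{result_zarr}/{component}", mode="r")[:]
    reference = zarr.open(f"{reference_zarr}/{component}", mode="r")[:]
    np.testing.assert_allclose(result, reference)

# One comparison per task output (names are placeholders).
for name in ["plate.zarr", "plate_corrected.zarr", "plate_mip.zarr"]:
    assert_zarr_matches_reference(f"output/{name}", f"reference/{name}")
```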

Some more work is probably needed before adding this test becomes compelling, at least on a few related issues.

tcompa commented Jun 29, 2022

As of e079db2c0dd067b2f67253e1646cfe69d074c519, we have a simple test that runs on fake images (see tests/Example/run_fake_workflow.sh).

For now it just runs as an example. The plan is to transform it into an actual test (doing everything in a python script, rather than in a bash script), where we run a complex workflow (on a tiny fake dataset) and compare the results of each step with some reference results.

tcompa referenced this issue in fractal-analytics-platform/fractal-client Jun 29, 2022

tcompa commented Jun 29, 2022

As of 40d00de04e6197f5426da841aedebf042c21050f we have a python script in the tests folder (test_workflow_on_fake_data.py), which runs a trivial workflow on fake data. At the moment it simply tests that the run gets through without errors/exceptions, but later we'll add an actual validation of output against some known values.
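
A rough sketch of how that later validation could look (the zarr layout, shapes, and reference values below are placeholders, not actual numbers from the fake dataset):

```python
import numpy as np
import zarr

def validate_output(zarr_path):
    """Check structure and a simple statistic of the workflow output."""
    image = zarr.open(f"{zarr_path}/B/03/0/0", mode="r")
    assert image.shape == (1, 2, 64, 64)   # channels, Z, Y, X (placeholder)
    assert image.dtype == np.uint16
    # Placeholder reference value, to be replaced once reference data are pinned down.
    np.testing.assert_allclose(image[:].mean(), 123.4, rtol=1e-6)
```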

tcompa commented Aug 29, 2022

From fractal-analytics-platform/fractal-client#69 (comment):

Very good question! We could save a small dataframe with actual measurements and use an np.testing.assert_almost_equal check on them, or something similar (e.g. https://pandas.pydata.org/docs/reference/api/pandas.testing.assert_frame_equal.html?)
For initial checks of the dataframe, we'd have to sample a few example measurements. If we have some test data, I have a workflow to visualize measurements back on label images that allows us to cross-check whether they make sense. I can manually check and then we can just ensure that those measurements don't change.
Maybe the 2x2 test case is reasonably small? Or would we want to go for something smaller (e.g. smaller images maybe? Just one FOV?)
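
A rough sketch of what such a dataframe check could look like (the file paths and tolerances are placeholders):

```python
import pandas as pd
from pandas.testing import assert_frame_equal

# Measurements produced by the test workflow vs. a small, manually verified reference.
measured = pd.read_csv("output/measurements.csv")                  # placeholder path
reference = pd.read_csv("tests/data/reference_measurements.csv")   # placeholder path
assert_frame_equal(measured, reference, check_exact=False, rtol=1e-5)
```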

tcompa commented Aug 30, 2022

There is a useful example of a fractal-server-agnostic pipeline file in fractal-analytics-platform/fractal-client#150. It lets us test a plain python workflow, i.e. a collection of tasks with their I/O definitions.
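
As a rough illustration of what "a collection of tasks with their I/O definitions" means here (the task bodies below are trivial placeholders, not the real fractal tasks, which take richer arguments):

```python
# Placeholder task implementations, standing in for the actual tasks.
def yokogawa_to_zarr(zarr_path):
    print(f"converting raw images into {zarr_path}")

def illumination_correction(zarr_path):
    print(f"applying illumination correction to {zarr_path}")

def maximum_intensity_projection(zarr_path):
    print(f"writing MIP for {zarr_path}")

# The pipeline boils down to an ordered list of (task, kwargs) pairs,
# which a plain python script can execute without any fractal-server machinery.
PIPELINE = [
    (yokogawa_to_zarr, {"zarr_path": "output/plate.zarr"}),
    (illumination_correction, {"zarr_path": "output/plate.zarr"}),
    (maximum_intensity_projection, {"zarr_path": "output/plate.zarr"}),
]

for task, kwargs in PIPELINE:
    task(**kwargs)
```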

tcompa transferred this issue from fractal-analytics-platform/fractal-client Sep 20, 2022
tcompa added a commit that referenced this issue Sep 20, 2022
* Include OME-NGFF validation;
* Use real data from Zenodo
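
The OME-NGFF validation mentioned in the commit above could, in its simplest form, check the NGFF metadata of the produced zarr, roughly as sketched here (a structural check only; the actual test may rely on full JSON-schema validation):

```python
import zarr

def check_ngff_image_metadata(image_zarr_path):
    """Minimal structural check of OME-NGFF image metadata."""
    group = zarr.open_group(image_zarr_path, mode="r")
    multiscales = group.attrs["multiscales"]
    assert isinstance(multiscales, list) and len(multiscales) >= 1
    for dataset in multiscales[0]["datasets"]:
        # Every multiscale level must point to an existing array in the group.
        assert dataset["path"] in group
```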

tcompa commented Sep 20, 2022

Closing in favor of #2 (which will be closed by #90).

tcompa closed this as completed Sep 20, 2022
Repository owner moved this from TODO to Done in Fractal Project Management Sep 20, 2022
jluethi moved this from Done to Done Archive in Fractal Project Management Oct 5, 2022