
Remove rows & cols parameter from yokogawa to Zarr #13


Closed
jluethi opened this issue Jul 28, 2022 · 5 comments
Labels
High Priority Current Priorities & Blocking Issues

Comments

@jluethi
Collaborator

jluethi commented Jul 28, 2022

To support search-first, we move away from the logic of wells consisting of rows & columns of images. Given the metadata parsing (#25), we now know the position of each field of view, in physical units relative to the center of the acquisition position (e.g. roughly -1000 to +1000 in x & y in our examples).

We can use this positional metadata to decide where each FOV is placed in the well zarr array and how large the well zarr array should be (combination of position, pixel sizes & image dimensions in pixels). Thus, we should be able to handle the parsing of our existing test sets without needing row & column information.
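This combination of position, pixel size & image dimensions can be sketched as follows. All names, column layouts, and numbers below are illustrative assumptions, not the actual task implementation:

```python
# Hypothetical sketch: derive per-FOV pixel offsets and the overall well-array
# shape from positional metadata. Pixel size, image size, and positions are
# made-up example values.
pixel_size_um = 0.65          # assumed pixel size in x & y (micrometers)
img_shape_px = (2160, 2560)   # assumed image size (y, x) in pixels

# FOV positions (y, x) in physical units, relative to the acquisition center
fov_positions_um = [(-1000.0, -1000.0), (-1000.0, 664.0), (300.0, -1000.0)]

# Shift the origin so the smallest position maps to pixel (0, 0)
min_y = min(p[0] for p in fov_positions_um)
min_x = min(p[1] for p in fov_positions_um)

offsets_px = [
    (round((y - min_y) / pixel_size_um), round((x - min_x) / pixel_size_um))
    for y, x in fov_positions_um
]

# The well array must be large enough to contain every FOV
well_shape_px = (
    max(oy for oy, _ in offsets_px) + img_shape_px[0],
    max(ox for _, ox in offsets_px) + img_shape_px[1],
)
```

The same recipe works for any FOV layout, which is what makes it a stepping stone towards search-first parsing.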

If we generalize that, it will be a much smaller step towards the full parsing of search-first data (see #8). Plus, it reduces the inputs a user needs to provide.

@jluethi jluethi transferred this issue from fractal-analytics-platform/fractal-client Sep 2, 2022
@jluethi jluethi added the High Priority Current Priorities & Blocking Issues label Sep 7, 2022
@jluethi
Copy link
Collaborator Author

jluethi commented Sep 13, 2022

@mfranzon where are we at with this?

Big picture: we mostly want to parse the positions from the metadata table whenever it is available. There is also the related idea listed as 3) in #47.

There is also the somewhat simpler case of full rectangular wells (like most of our test cases so far): based on very few parameters (pixel sizes, bit depth, grid dimensions & image sizes), we can calculate all the metadata parameters for this simple case without needing metadata files or dataframes.
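For that rectangular case, the offsets follow directly from the grid; a minimal sketch, with purely illustrative numbers:

```python
# Hypothetical sketch for the full-rectangular-well case: with only grid
# dimensions and image size, FOV pixel offsets can be computed without any
# metadata table. Values are illustrative.
rows, cols = 2, 3
img_h, img_w = 2160, 2560   # assumed image size in pixels (y, x)

# Row-major grid of FOV offsets
offsets_px = [
    (r * img_h, c * img_w) for r in range(rows) for c in range(cols)
]
well_shape_px = (rows * img_h, cols * img_w)
```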

Goals:

  1. Parse positions based on metadata
    Later: optionally support parsing with rows & cols parameters. If it's easy to keep that optional while building metadata-based parsing as the default, let's do that (it would simplify using some of the larger test cases where it's hard to generate the correct metadata). If not, let's drop rows & cols, build only the metadata-table-based parsing, and worry about adding this functionality later.
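Once offsets are known, placing each FOV in the well array is a simple paste at the computed position. A toy sketch, using a NumPy array as a stand-in for the on-disk zarr array (sizes, offsets, and data are all made up):

```python
import numpy as np

# Hypothetical sketch: write each FOV into the larger well array at its
# pixel offset derived from positional metadata. A NumPy array stands in
# for the zarr array; values are illustrative.
img_shape = (4, 6)                      # toy FOV size (y, x)
offsets_px = [(0, 0), (0, 6), (4, 0)]   # toy offsets from positional metadata

well = np.zeros((8, 12), dtype=np.uint16)
for i, (oy, ox) in enumerate(offsets_px, start=1):
    fov = np.full(img_shape, i, dtype=np.uint16)  # stand-in image data
    well[oy:oy + img_shape[0], ox:ox + img_shape[1]] = fov
# Regions with no FOV stay at the fill value (zero)
```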

@mfranzon
Collaborator

Hi! Sorry for the delay; unfortunately, I only started working on this today. Here is the first commit, which is not working yet but is not far from the solution. What is missing is the delayed handling of image loading and the stacking along the z-index.
I hope to close it during the day.
ad458c9
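The missing pieces (delayed loading and z-stacking) could look roughly like this. This is a sketch only: `load_plane` is a hypothetical stand-in for a real TIFF reader, and the shapes are made up:

```python
import numpy as np
import dask.array as da
from dask import delayed

# Hypothetical stand-in for reading one z-plane from disk (e.g. a TIFF)
def load_plane(z):
    return np.full((4, 5), z, dtype=np.uint16)

# Wrap each plane as a lazy dask array; nothing is loaded yet
planes = [
    da.from_delayed(delayed(load_plane)(z), shape=(4, 5), dtype=np.uint16)
    for z in range(3)
]

# Stack the lazy planes along a new z axis -> shape (z, y, x)
stack = da.stack(planes, axis=0)
volume = stack.compute()  # triggers the delayed loads
```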

@tcompa
Collaborator

tcompa commented Sep 14, 2022

work in progress by @mfranzon:

Screenshot from 2022-09-14 09-55-05
Screenshot from 2022-09-14 09-55-38

At the moment the final zarr file has roughly the same size as the original one (that is, the zeros are not actually stored on disk).
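This size observation matches how chunked stores such as zarr can handle empty regions: a chunk that contains only the fill value need not be written at all. A NumPy-only sketch of the idea (chunk size and data are illustrative, and this only models the behavior, it is not zarr itself):

```python
import numpy as np

# Conceptual sketch: count how many chunks of a mostly-empty well array
# would actually need storing if all-zero chunks are skipped.
well = np.zeros((8, 8), dtype=np.uint16)
well[0:4, 0:4] = 7            # one FOV's worth of real data
chunk = 4                     # illustrative chunk edge length

stored = 0
for oy in range(0, well.shape[0], chunk):
    for ox in range(0, well.shape[1], chunk):
        block = well[oy:oy + chunk, ox:ox + chunk]
        if block.any():       # only non-empty chunks are written
            stored += 1
# Here only 1 of the 4 chunks holds data, so disk usage stays close to
# the size of the original images.
```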

@tcompa
Collaborator

tcompa commented Sep 14, 2022

There is now an example in examples/02_cardio_tiny_dataset_sparse, producing output similar to the one in the previous comment.

@tcompa
Collaborator

tcompa commented Sep 19, 2022

Closed with #80 and #79.

@tcompa tcompa closed this as completed Sep 19, 2022
Repository owner moved this from TODO to Done in Fractal Project Management Sep 19, 2022
@jluethi jluethi moved this from Done to Done Archive in Fractal Project Management Oct 5, 2022