
Handle overlap between input images #15


Closed
jluethi opened this issue Jul 26, 2022 · 6 comments

@jluethi (Collaborator) commented Jul 26, 2022

The main case of Yokogawa parsing we've implemented so far is a grid without overlap. Next, given the progress with metadata parsing & ROI definitions, we want to support search-first data where the FOVs are not on a grid but at arbitrary positions (see #8). As an initial implementation, we can focus on arbitrary positions without overlap between FOVs.

The step after that is handling overlap between FOVs. I don't know whether we have test data for search-first with overlap, but we certainly have grids with overlap (e.g. FOVs always overlap by about 10%). I'm not fully sure how we want to handle that in our current setup, where a well is a single FOV.

We could consider a direction where we only save to Zarr once registration has been done, but that could get tricky memory-wise.
For grids, we could also save the data initially as a non-overlapping grid and later have a registration/merging task that combines the images.

This has the downside that we have to actually merge (i.e. edit) the raw data and save the modified version; we probably have to live with that if we want to go in this direction.
The other somewhat annoying downside is the question of how we handle metadata: the metadata from the microscope already defines an overlap between the FOVs, which is most likely not how we'd want to save the data initially (see the sketch below).
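As a rough illustration of that remapping, here is a minimal Python sketch (not the actual parsing code; it assumes a regular grid, and the tile size and overlap values are hypothetical):

```python
# Minimal sketch: remap overlapping microscope grid positions to a
# non-overlapping on-disk grid. Assumes a regular grid where the stage
# steps by (tile size - overlap) pixels; all numbers are hypothetical.
tile_x, tile_y = 2560, 2160          # FOV size in pixels
overlap = 0.10                       # FOVs overlap by about 10%
step_x = round(tile_x * (1 - overlap))
step_y = round(tile_y * (1 - overlap))

# Raw positions as defined by the microscope metadata (overlapping grid)
raw_positions = [(col * step_x, row * step_y) for row in range(2) for col in range(2)]

# Shifted positions for the non-overlapping layout: grid index * full tile size
shifted_positions = [
    (round(x / step_x) * tile_x, round(y / step_y) * tile_y)
    for x, y in raw_positions
]
print(shifted_positions)  # [(0, 0), (2560, 0), (0, 2160), (2560, 2160)]
```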

Let's use this issue to discuss how we'll handle this. We may decide that it is not in scope for the search-first milestone, but we should start thinking about how we'll eventually tackle it.

@jluethi (Collaborator, Author) commented Aug 23, 2022

An overview of the options for storing overlap on disk in Zarr files:

a) We never store overlap on disk (very limiting; stitching would only be possible during parsing).
b) We shift FOVs to save them as a grid without overlap. If stitching is performed, it is baked into the image. Where do we save the overlapping positions? In an extended part of the ROI table (e.g. have position_x and raw_position_x)? They could then be used as an optional input to a stitching algorithm (the initial positions of all tiles); see the sketch after this list.
c) We save a multi-FOV Zarr file that contains the positional information for each FOV. Benefits: it could be visualized in any way the user desires (e.g. blending overlaps) and maintains all raw data & positions. Downsides: massive complexity, much harder to process in our current infrastructure, visualization doesn't scale at all, and file numbers scale very badly.
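A minimal sketch of what such an extended ROI table for option b could look like (pandas is used purely for illustration; apart from position_x/raw_position_x from the proposal above, all column names and values are hypothetical):

```python
import pandas as pd

# Hypothetical extended ROI table for option b: each FOV stores the shifted
# (non-overlapping) position used on disk plus the raw microscope position.
roi_table = pd.DataFrame(
    {
        "FieldIndex": ["FOV_1", "FOV_2"],
        "position_x": [0, 2560],        # shifted, non-overlapping layout
        "position_y": [0, 0],
        "raw_position_x": [0, 2304],    # original stage position (10% overlap)
        "raw_position_y": [0, 0],
    }
)

# A later stitching task could use raw_position_* as the initial tile
# positions and bake the registered result into the fused image.
print(roi_table)
```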

I would go with b for the moment and declare c out of scope. It is potentially interesting to implement at a later point, but its complexity is fairly high. Option b takes a bit of additional parsing, but is doable.
This also depends a bit on what kind of downstream processing we need to enable; see the discussion on processing iii) here: #11

@jluethi (Collaborator, Author) commented Aug 29, 2022

Given the discussion in #11, we can stay with b here. If we do stitching, we do it early on and then just have the fused array.

Until stitching happens, we can just save the shifted tiles, so let's keep things simple :)

jluethi transferred this issue from fractal-analytics-platform/fractal-client on Sep 2, 2022
@tcompa (Collaborator) commented Sep 29, 2022

Quoting from #11 (comment):

What's the status of this issue, given recent PRs (mainly #80 and #106)?

There has been progress on:

* Input (2) (a grid of FOVs with overlaps) with processing (ii) (within-FOV per-ROI processing).

* Input (3) (arbitrary FOV positions without overlaps) with processing (ii) (within-FOV per-ROI processing).

Are these two use cases somewhat complete for the moment?

It seems that the main missing part is processing (i) (stitching) for inputs (1) or (2). Should we start this discussion (in #15 or in a new "stitching" issue)?

@jluethi (Collaborator, Author) commented Sep 29, 2022

We should organize a test set to check how well the processing of images with overlap is working. Given recent progress, the parsing may already work sufficiently. We can then follow up on stitching here: #116

@gusqgm Do you have any test sets ready, e.g. a single well imaged with overlap?

jluethi added the Priority (Important, but not the highest priority) label on Oct 5, 2022
@gusqgm commented Oct 28, 2022

Hey!
Update from my side: I do have a dataset; the information can be found here. Basically, this is an intentional overlap imaged as a grid, with a 50-pixel overlap between FOVs. It is quite a large dataset, so let's check runtime first, and if needed I can strip it down to just 2x2 FOVs.

@jluethi (Collaborator, Author) commented Dec 13, 2022

We achieved the parsing side of overlap handling; see #10 (comment)

Thus, we can close this issue.
If we want to discuss stitching the overlaps, we have a backlog issue for that: #116

jluethi closed this as completed on Dec 13, 2022
Repository owner moved this from TODO to Done in Fractal Project Management on Dec 13, 2022