
Detection of Tiles in CZI dataset when there is only angles #10

Open
Xqua opened this issue Jul 13, 2018 · 4 comments

Comments

@Xqua

Xqua commented Jul 13, 2018

Hi,

Maybe this is intended behavior? But I have a simple 4-angle, 1-timepoint dataset, and the new version of Multiview Reconstruction detects 4 tiles in it.

        <Tile>
          <id>0</id>
          <name>0</name>
          <location>-274.326 13003.322 -827.0389999999996</location>
        </Tile>
        <Tile>
          <id>1</id>
          <name>1</name>
          <location>-80.05799999999998 13003.421999999999 -103.23700000000007</location>
        </Tile>
        <Tile>
          <id>2</id>
          <name>2</name>
          <location>606.991 13003.266 -335.45899999999983</location>
        </Tile>
        <Tile>
          <id>3</id>
          <name>3</name>
          <location>415.08399999999983 13003.25 -1058.6810000000005</location>
        </Tile>

It also detects my 4 angles:

        <Angle>
          <id>0</id>
          <name>0</name>
          <axis>0.0 1.0 0.0</axis>
          <degrees>2.777778E-4</degrees>
        </Angle>
        <Angle>
          <id>1</id>
          <name>1</name>
          <axis>0.0 1.0 0.0</axis>
          <degrees>90.0000072</degrees>
        </Angle>
        <Angle>
          <id>2</id>
          <name>2</name>
          <axis>0.0 1.0 0.0</axis>
          <degrees>180.0000144</degrees>
        </Angle>
        <Angle>
          <id>3</id>
          <name>3</name>
          <axis>0.0 1.0 0.0</axis>
          <degrees>270.00002159999997</degrees>
        </Angle>

But I'm pretty sure I did not tile my samples at any point.

Is this normal behavior?

PS: The dataset was generated on the Zeiss Z1

@hoerldavid
Contributor

Hi @Xqua ,

I'm assuming you have installed BigStitcher? Doing so also updates the Multiview Reconstruction.

In the work on BigStitcher, we introduced the Tile attribute (in addition to the existing Channel, Illumination, Angle, TimePoint), representing the (x,y,z)-stage coordinates at which an image was acquired.
Since MVR and BigStitcher share the same data model (essentially, they are two 'modes' of the same plugin), you are seeing Tiles in the Multiview Reconstruction.

Since the stage coordinates typically differ for (tiled) acquisitions from multiple angles, we decided to assign a separate Tile to every (x,y,z,angle)-combination (instead of one tile shared by all angles). So it is perfectly normal to see 4 Tiles in a 4-angle dataset (each angle has its own tile).
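This assignment can be sketched in a few lines of Python (an illustration of the rule described above, not BigStitcher code; the coordinates are rounded values from the XML in the issue): each distinct (x, y, z, angle) combination gets its own Tile id, so a single-position, 4-angle acquisition still yields 4 Tiles.

```python
# Illustrative sketch, not BigStitcher internals: one Tile per unique
# ((x, y, z) stage location, angle) combination, numbered in order of appearance.
views = [
    # ((x, y, z) stage location, angle in degrees) -- rounded from the XML above
    ((-274.326, 13003.322, -827.039), 0.0),
    ((-80.058, 13003.422, -103.237), 90.0),
    ((606.991, 13003.266, -335.459), 180.0),
    ((415.084, 13003.250, -1058.681), 270.0),
]

tile_ids = {}
for loc, angle in views:
    # a new Tile id is created only for previously unseen (location, angle) pairs
    tile_ids.setdefault((loc, angle), len(tile_ids))

print(len(tile_ids))  # 4: each angle sits at a distinct stage position, hence its own Tile
```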

I hope this explains what the 'Tiles' are and where they come from.

Best,
David

@Xqua
Author

Xqua commented Jul 13, 2018

Makes complete sense!

It should actually also make the RANSAC part much faster in theory, if you are starting from a "good" starting point!

Actually, if you have this info, you could also do a phase correlation alignment without beads, no?

@hoerldavid
Contributor

Hi @Xqua

I don't think the RANSAC speed is affected that much, since we will still do it with all interest points of two images (as long as they have nonempty overlap). The main parameter to affect the speed of the pairwise RANSAC would be RANSAC iterations (https://imagej.net/BigStitcher_Registration#Specific_Registration_Options). But I think you are correct, in theory we could just use the interest points from the overlap volume to have a much smaller candidate set and speed up the process.
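The candidate-restriction idea mentioned above can be sketched in plain Python (a toy illustration under my own assumptions, not MVR code): intersect the two tiles' bounding boxes, then keep only the interest points that fall inside that intersection as RANSAC candidates.

```python
# Sketch: restrict RANSAC candidates to interest points inside the overlap
# of two tiles' axis-aligned bounding boxes. Illustrative only, not MVR code.
def overlap_box(lo1, hi1, lo2, hi2):
    """Axis-aligned intersection of two boxes, or None if they do not overlap."""
    lo = tuple(max(a, b) for a, b in zip(lo1, lo2))
    hi = tuple(min(a, b) for a, b in zip(hi1, hi2))
    return (lo, hi) if all(l < h for l, h in zip(lo, hi)) else None

def points_in_box(points, box):
    """Keep only the points lying inside the given (lo, hi) box."""
    lo, hi = box
    return [p for p in points
            if all(l <= c <= h for c, l, h in zip(p, lo, hi))]

# toy example: two 100-unit cubes offset by 60 along x -> a 40-unit overlap slab
box = overlap_box((0, 0, 0), (100, 100, 100), (60, 0, 0), (160, 100, 100))
points = [(10, 50, 50), (70, 50, 50), (95, 20, 80)]
candidates = points_in_box(points, box)
print(candidates)  # only points with x >= 60 survive: [(70, 50, 50), (95, 20, 80)]
```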

Regarding the phase correlation alignment, that is exactly what we do in BigStitcher. Our basic workflow there is:

  1. For every (Angle, TimePoint)-combination, we align the Tiles using pairwise phase correlation followed by global optimization. Here, the (x,y,z)-metadata can speed things up considerably, since we only have to align the overlapping volumes to get the relative shift of two images. Also, without metadata, we would have to do all-to-all alignment, which still works most of the time, but is obviously much slower.
  2. Optionally, we can do an interest point-based affine refinement of the registration (and/or correct chromatic aberrations if we have structures that are visible in all channels, e.g. autofluorescence in tissue samples)
  3. Then we can do the MVR with the interest points of all (already aligned) tiles of a view grouped to get the final alignment.
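The pairwise phase correlation in step 1 can be sketched with NumPy (a minimal sketch, not BigStitcher's actual implementation, which among other things handles subpixel accuracy): normalize the cross-power spectrum of the two images, invert it, and read the relative shift off the correlation peak.

```python
# Minimal phase-correlation sketch (NumPy only; not BigStitcher's implementation):
# recovers the integer translation between two images of equal size.
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the circular shift d such that a(x) ~= b(x - d)."""
    Fa, Fb = np.fft.fftn(a), np.fft.fftn(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12           # phase-only (normalized) spectrum
    corr = np.fft.ifftn(cross).real          # sharp peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks beyond half the image size into negative shifts
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, a.shape))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))  # known ground-truth shift
print(phase_correlation_shift(shifted, img))  # (5, -3)
```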

We also have expert options (https://imagej.net/BigStitcher_Advanced_stitching) in BigStitcher that would allow you to align Angles using phase correlation (among other things). This will only do a translation model, but you can use it on pre-rotated views (we will then use virtually transformed images as the input). But still, since the rotation from metadata will probably not be 100% exact, the interest point-based registration should be better for multi-view alignment. It often also works if you do not have beads in your samples, as long as there are sufficiently prominent local minima or maxima.

Best,
David

@Xqua
Author

Xqua commented Jul 16, 2018

Thanks a lot for this info !

I'll have to play with this, as I have a dataset that has no beads (well, it had beads in the wrong channel ...), and I kind of put it aside until I'd have the time to write up a phase correlation algorithm!

I might come bug you sometime in the future when I try it!
