Reconstruct SiMView Images Using BigStitcher

by Eric Wait
Jul 6, 2021
data-and-analysis, microscopy
imagej, scripts, simview

Overview #

There are four steps to this process. First is to create a dataset file that allows BigStitcher to interpret the data. Second is to run a macro that will flip the opposing views so they are roughly aligned. Third is to find the best parameters that register the views together. Fourth is to register and export the reconstructed data.


Initial Setup #

First, we must install ImageJ and the BigStitcher plugin. To do so:

  1. Install ImageJ
  2. Open ImageJ and use the menu to go to Help -> Update
  3. Click on the Manage update sites button
  4. Within the list select BigStitcher

    Select BigStitcher from the Update Sites

  5. Click Close and then Close again.
  6. Close ImageJ
  7. Install the macros for this tutorial.
  8. Download the current macros from https://github.com/aicjanelia/SiMView/blob/master/src/imagej/simview_transform.ijm and https://github.com/aicjanelia/SiMView/blob/master/src/imagej/simview_fuse.ijm.
  9. Place these files into the plugins folder of ImageJ (Fiji.app\plugins).
  10. Verify that BigStitcher and the macros have been installed by looking in the Plugins menu

    Menu item for BigStitcher and Installed Macros


Create a BigStitcher Dataset #

BigStitcher uses XML to store the metadata that describes a dataset. The data within the XML tells BigStitcher how to interpret the images as a multi-dimensional image. We are going to use the batch processing functionality of BigStitcher to reduce the number of popup windows.

  1. Open ImageJ and navigate to the Define Dataset menu of BigStitcher: Plugins -> BigStitcher -> Batch Processing -> Define dataset...

    Open BigStitcher from Plugins

    Create Dataset - BigStitcher

    Create Dataset - Batch Processing

    Create Dataset - Define Dataset

  2. BigStitcher needs to know how to read the images. In this case select SimView Dataset Loader (RAW) and create a name for this dataset. You can rename the dataset from dataset.xml to something more meaningful or leave it as is.
    • Do not use special characters or spaces in the name and include .xml at the end.

      Read Images

  3. Point BigStitcher to the directory you would like to process. You can browse to the directory by clicking on Browse, you can type in the path, or you can drag a folder into the text box.

    Select Directory

  4. If you selected the root directory, there are most likely two subdirectories within this folder, Config and SPM00. The images that you want to combine reside in SPM00. Select this directory as the experiment.

    Select Images Directory

  5. Now BigStitcher will look at each of the images to gather the dimensions of the dataset.
    • This can take a long time depending on your connection to the images. In other words, this step will take longer for images on a remote server and less time for images stored on a fast hard drive connected to the computer. The log window will say All files found. when this step is complete.
  6. Once all of the images have been evaluated, BigStitcher needs to know how to read them for further processing. In this case select SimView Dataset Loader (RAW) and create a name for this dataset. You can rename the dataset from dataset.xml to something more meaningful or leave it as is.
    • Do not use special characters or spaces in the name and include .xml at the end.
  7. Click OK.
  8. A window will open showing the parameters that BigStitcher has learned from the images and metadata. However, two of them are typically incorrect. Open one of the frames in the dataset; these are typically stored in the SPM00/TM000000/ANG00 directory. Look at the number of Z slices captured, which is the maximum value after PLN in the file names. This will be the Z Size in the Image loading section. Remember that these numbers start at zero, so add one to the number after PLN, e.g. PLN0120.tif turns into 121 for Z size. Also, ensure that the Pixel distance Z is correct for this dataset.

    Provide Number of Z Slices

  9. Click OK. The log window should now state that the dataset is saved and the location it was saved to.
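The Z-size arithmetic from the steps above can be sketched in a few lines of Python (illustrative only; the file names and the z_size helper are hypothetical, not part of BigStitcher):

```python
# Compute the Z size BigStitcher expects from SiMView file names such as
# "...PLN0120.tif". Plane numbers start at zero, so the Z size is the
# maximum PLN index plus one (e.g. PLN0120 -> 121).
import re

def z_size(filenames):
    planes = [int(m.group(1)) for f in filenames
              if (m := re.search(r"PLN(\d+)", f))]
    return max(planes) + 1

print(z_size(["img_PLN0000.tif", "img_PLN0119.tif", "img_PLN0120.tif"]))  # 121
```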

Flip Images from Second Camera #

The images from one of the cameras will have the X axis reversed compared to the opposing camera. To correct this, we will use the macro called simview transform.

  1. Run the macro in ImageJ by using the menu Plugins -> simview transform.
    • If you don’t see simview transform in this list, refer back to the setup section.
  2. A file browser will open. Select the xml file created at the end of creating a dataset.
    • If you did not include .xml in the name when creating the dataset, the file will not have an xml extension in the name.

      Select Dataset File

  3. After clicking Open on the file selection dialog, a new window will pop up. Enter the number of pixels in the X dimension. This is necessary for the flipped images to be re-centered.

    Enter the X Dimension of the Images

  4. Click OK. This step should complete quickly. You will know that it is done when the last lines in the Log window state “Done applying transforms” and “Saved xml” with a path to where it was saved.
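The correction the macro applies can be thought of as a mirror flip followed by a re-centering translation. The sketch below (illustrative Python, not part of the macro) applies the same two transforms the macro appends, using a hypothetical width of 2048:

```python
# Mirror a point along X (an affine with -1 in the X scale position),
# then translate by the image width to re-center the flipped view.
import numpy as np

width = 2048.0  # hypothetical X dimension entered in the macro dialog

flip = np.array([[-1.0, 0.0, 0.0, 0.0],
                 [ 0.0, 1.0, 0.0, 0.0],
                 [ 0.0, 0.0, 1.0, 0.0]])  # same values as the macro's affine
recenter = np.array([width, 0.0, 0.0])    # same values as the macro's translation

def transform(p):
    # Apply the flip affine in homogeneous coordinates, then the translation
    return flip @ np.append(p, 1.0) + recenter

print(transform(np.array([0.0, 10.0, 5.0])))     # x = 0 maps to x = 2048
print(transform(np.array([2048.0, 10.0, 5.0])))  # x = 2048 maps to x = 0
```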

Register the Views #

Now that the opposing views face the same way, we can register (align) them so that each object completely overlaps in the final image. This will be done in a few steps: the first alignment is a coarse adjustment, followed by finer adjustments until you are satisfied with the results.

  1. First, we must open the dataset with BigStitcher through the menu Plugins -> BigStitcher -> BigStitcher.

    Menu item to start BigStitcher

  2. A window will pop up asking for the location of the dataset. You can browse to the directory by clicking on Browse, you can type in the path, or you can drag a folder into the text box.
  3. Click OK.

    Opening a Dataset

  4. The dataset will open in the Multiview Explorer window. This is where most of the operations will start, mainly through a right-click menu.

    Multiview Explorer Window


Open Images for Viewing #

Open a viewer to evaluate the registration process.

  1. Start viewing the data by selecting the first frame. In the above image, you will see that all of the TimePoint zero rows are highlighted. This is done by clicking on the first row, holding down the shift key, and clicking on the fourth row. Now right click on one of the highlighted rows and select Display in BigDataViewer (on/off). This will open a BigDataViewer window.
    • It can take a long time for this new window to display data, depending on how fast your connection is to the image data.

      Open BigDataViewer

  2. The initial view will show each view in white, which is not that helpful. Click on the Multiview Explorer (arrow) window and press the C key to toggle through the various color schemes as shown below.

    BigDataViewer

    BigDataViewer Red Green

    BigDataViewer Cyan Magenta

    BigDataViewer Green Cyan

    BigDataViewer Magenta Red

  3. Even with the different color scheme, you may find the image to be too bright. There is a hidden interface on the right side of the view window. Move your mouse to the right edge of the view window and an arrow will pop out. Click on it to get a brightness control.

    Brightness UI

  4. Select all of the views (rows) in the top panel.

    Change Brightness of All of The Views At Once

  5. Change the max value to where there is contrast throughout the image.
  6. Click on the arrows on the left of this panel to hide it again. The images should look much better now.

    View of Data With Better Contrast


Create Interest Points #

BigStitcher uses interest points in each image to find the best transformation of one of the images to match the other. We will make two sets of interest points that will be used with different registration techniques for coarse and fine adjustments.

  1. Right click on the rows of the first frame that are highlighted. Select Detect Interest Points.

    Menu Item To Create Interest Points

  2. The first set of interest points will be used for the largest movement between the images and does not need to be too precise. To know which interest points we are using later, we will label these points coarse as shown in the label image below. For these interest points to actually be coarser than the others, we will downsample Z by a factor of two as shown in the downsample image below.

    Label the Coarse Interest Points

    Downsample in Z for Coarse Interest Points

  3. In the next window, select a view that you know has good signal. You will select a different view when creating the fine interest points. Often the first view is sufficient.

    Select the View to Use For Interest Point Parameters

  4. Once the images have been read in, two windows will open. One will be for changing the interest point parameters, the other will be for viewing the results given those parameters.
    • This may take some time to open.

      Parameters for Calculating Interest Points

      Interest Points Drawn on Coarse View

  5. Change the slider on the bottom of the image viewer until you get to a plane that has clear objects to be detected.
  6. Move or draw a box that covers the objects to be detected. The image window showing the points uses the standard ImageJ tools for manipulation (e.g. zooming, selecting, etc.).
  7. Move the top slider, called Sigma 1, until the green circles are about the size of the objects.
  8. Move the bottom slider, called Threshold, until the false positives are removed.
  9. When satisfied, click Done.
  10. It might take some time to detect the interest points depending on the speed of your computer. Open the Interest Point Explorer by right clicking on the highlighted rows in the Multiview Explorer to see how many points were detected.

    Interest Point Explorer

  11. Repeat the steps above. Change the label to fine, downsample to one, and select a different view.
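Conceptually, the Sigma and Threshold sliders drive a blob detector along the lines of this difference-of-Gaussians sketch (illustrative Python using scipy; BigStitcher's actual detector differs in its details, and the parameter values here are hypothetical):

```python
# Difference-of-Gaussians interest-point sketch: blur at two scales,
# subtract, and keep local maxima above a threshold. Sigma sets the blob
# size to respond to; the threshold removes weak (false) detections.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def detect_points(img, sigma=2.0, threshold=0.1):
    dog = gaussian_filter(img, sigma) - gaussian_filter(img, 1.6 * sigma)
    peaks = (dog == maximum_filter(dog, size=5)) & (dog > threshold)
    return np.argwhere(peaks)

img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0      # one bright object on a dark background
print(detect_points(img))    # point(s) near the object's center
```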

Register Views #

Now that we have interest points calculated, we can use them to register the views together. We will use the coarse interest points to make a big change to get the views relatively close. We will then use the fine interest points to make incremental changes to finalize the registration.
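As a mental model for the fine step, the iterative-closest-point refinement can be sketched as follows (illustrative Python restricted to a translation model, whereas BigStitcher fits an affine; max_dist plays the role of the Maximal distance for correspondence parameter, and the point data is hypothetical):

```python
# ICP sketch: repeatedly match each moving point to its nearest target
# point within max_dist, then shift by the mean residual of the matches.
import numpy as np

def icp_translation(moving, target, max_dist=7.0, iters=10):
    moving = moving.astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(moving[:, None, :] - target[None, :, :], axis=2)
        nearest = d.argmin(axis=1)                       # closest target per point
        keep = d[np.arange(len(moving)), nearest] < max_dist
        if not keep.any():
            break                                        # no correspondences left
        shift = (target[nearest[keep]] - moving[keep]).mean(axis=0)
        moving += shift                                  # translation-only update
    return moving

target = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
moving = target + np.array([3.0, -2.0, 1.0])  # views misaligned by a small offset
print(np.round(icp_translation(moving, target) - target, 3))  # residuals near zero
```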

  1. From the Multiview Explorer, right click on one of the selected rows and select Register using Interest Points....

    Menu Item To Register Views

  2. For the coarse movement, we will select the Precise descriptor-based (translation invariant) algorithm.

    Choose Coarse Registration Algorithm

  3. Below that we will select the coarse interest points.

    Choose Coarse Interest Points

  4. Click OK.
  5. Keep the defaults on the next window Register: Register Timepoints Individually.
  6. Ensure that Affine is being used for the Transformation Model.

    Leave Defaults and Choose Affine Transformation Model

  7. Click OK.
  8. Keep the defaults for the regularization parameters. Click OK.

    Keep Default Regularization Parameters

  9. You should see in the BigDataViewer window that the images are now closely aligned. To see how the views are registered along a different axis, click on the BigDataViewer. Use the shift key plus the axis you would like to view along, e.g. shift + X will view along the X axis. You can also use shift + ctrl + mouse wheel to zoom in and out of the current view. The mouse wheel alone will traverse the axis you are viewing along. If you ever lose the image in the viewer, click back on the Multiview Explorer window and type R. This will reset the translations along the axis you are viewing. Switch to each of the axes in BigDataViewer, typing R each time, to completely reset your view back to default.

    Better Registration Between Views

    View Along X Axis

    View Along Y Axis

  10. Now that we have a relatively good registration, we can refine it further. Follow the above steps but change the following:
    • The registration algorithm should now be Assign closest-points with ICP (no invariance).
    • Change to the fine interest points.
    • Change the transformation model to Affine.
    • For the first time, change the Maximal distance for correspondence (px) to 7. If you run the fine adjustments again, you can lower this to 5 or even 3.

      Choose Fine Registration Algorithm

      Choose Fine Interest Points

      Choose Fine Registration Model and Error

  11. Repeat the fine adjustment until you are satisfied with the results.
  12. Click Save in the Multiview Explorer.
  13. You have now registered the first frame of the movie.

Apply Registration and Export Images #

Up to this point, you have registered the views on the first frame. This next step will apply the same registration across the entire time-lapse. Additionally, for future viewing and processing, it is desirable to fuse these views into a single stack per frame and channel. For this, we will use the macro called simview fuse.
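The first thing the fuse macro does is copy the frame-0 registration to every other timepoint; the idea can be sketched like this (illustrative Python mirroring BigStitcher's Duplicate Transformations step in spirit; the transform matrices are hypothetical placeholders):

```python
# Duplicate the first frame's view transformation to every other frame,
# so the whole time-lapse uses the registration solved on frame 0.
import numpy as np

def duplicate_transforms(transforms_by_frame, source=0):
    t0 = transforms_by_frame[source]
    return {frame: t0.copy() for frame in transforms_by_frame}

frames = {0: np.eye(3, 4),       # registration solved on frame 0
          1: np.zeros((3, 4)),   # unregistered placeholder
          2: np.zeros((3, 4))}
out = duplicate_transforms(frames)
print(out[2])  # identical to frame 0's transform
```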

  1. Run the macro in ImageJ by using the menu Plugins -> simview fuse
    • If you don’t see simview fuse in this list, refer back to the setup section.
  2. A file browser will open. Select the xml file used for the steps above.

    Choose the XML file to Process

  3. Another window will open. Enter the number of frames to process. This will start from the first frame and fuse all frames up to the frame entered here.

    Enter the Number of Frames to Process

  4. This process will take a significant amount of time depending on your connection to the data and your processor. To monitor the progress, you can watch the fused directory. This directory will be in the same place as your xml file. All fused images will be placed there. Once you see the last frame in this folder, you are done.

Complete #

Congratulations! At this point you have reconstructed a SiMView dataset.


Code #

simview_transform.ijm #

// simview_transform.ijm
// Mirrors the second camera's views (angle 0-1) along the X axis, then
// translates them by the image width so they roughly align with the
// opposing views.

dataset_file = File.openDialog("Select xml file");
Dialog.create("X dimension");
Dialog.addNumber("X dimension", 2048);
Dialog.show();
x_dimension = Dialog.getNumber();

// Flip along X: an affine with -1 in the X scale position
run("Apply Transformations",
    "select=" + dataset_file + " " +
    "apply_to_angle=[Single angle (Select from List)] " +
    "apply_to_channel=[All channels] " +
    "apply_to_illumination=[All illuminations] " +
    "apply_to_tile=[All tiles] " +
    "apply_to_timepoint=[All Timepoints] " +
    "processing_angle=[angle 0-1] " +
    "transformation=Affine " +
    "apply=[Current view transformations (appends to current transforms)] " +
    "same_transformation_for_all_timepoints " +
    "same_transformation_for_all_channels " +
    "all_timepoints_all_channels_illumination_0_angle_0-1=[-1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0]");

// Re-center: translate the flipped views back by the X dimension
run("Apply Transformations",
    "select=" + dataset_file + " " +
    "apply_to_angle=[Single angle (Select from List)] " +
    "apply_to_channel=[All channels] " +
    "apply_to_illumination=[All illuminations] " +
    "apply_to_tile=[All tiles] " +
    "apply_to_timepoint=[All Timepoints] " +
    "processing_angle=[angle 0-1] " +
    "transformation=Translation " +
    "apply=[Current view transformations (appends to current transforms)] " +
    "same_transformation_for_all_timepoints " +
    "same_transformation_for_all_channels " +
    "all_timepoints_all_channels_illumination_0_angle_0-1=[" + x_dimension + ", 0.0, 0.0]");

print("Done applying transforms.\nSet registration for first frame and then run fuse macro.");

simview_fuse.ijm #

// simview_fuse.ijm
// Copies the registration solved on the first frame to every timepoint,
// then fuses each timepoint into a single stack in a "fused" directory.

dataset_file = File.openDialog("Select xml file");
Dialog.create("Number of frames to fuse");
Dialog.addNumber("Number of Frames", 1);
Dialog.show();
num_frames = Dialog.getNumber();

// Apply frame 0's view transformations to all other timepoints
run("Duplicate Transformations",
    "apply=[One timepoint to other timepoints] " +
    "select=" + dataset_file + " " +
    "apply_to_angle=[All angles] " +
    "apply_to_channel=[All channels] " +
    "apply_to_illumination=[All illuminations] " +
    "apply_to_tile=[All tiles] " +
    "source=0 " +
    "target=[All Timepoints] " +
    "duplicate_which_transformations=[Replace all transformations]");

print("Finished applying transforms.\n");

// Create the output directory next to the xml file
output_dir = File.getDirectory(dataset_file);
output_dir = output_dir + File.separator + "fused";
if (!File.exists(output_dir))
{
    File.makeDirectory(output_dir);
    print("Making directory " + output_dir);
}

// Fuse each timepoint; SiMView timepoints are numbered from zero, so the
// loop starts at 0 and fuses num_frames frames in total
for (i = 0; i < num_frames; i++)
{
    run("Fuse dataset ...",
        "select=" + dataset_file + " " +
        "process_angle=[All angles] " +
        "process_channel=[All channels] " +
        "process_illumination=[All illuminations] " +
        "process_tile=[All tiles] " +
        "process_timepoint=[Single Timepoint (Select from List)] processing_timepoint=[Timepoint " + i + "] " +
        "bounding_box=[Currently Selected Views] " +
        "downsampling=1 " +
        "pixel_type=[16-bit unsigned integer] " +
        "interpolation=[Linear Interpolation] " +
        "image=[Precompute Image] " +
        "produce=[All views together] " +
        "fused_image=[Save as (compressed) TIFF stacks] " +
        "output_file_directory=" + output_dir + " filename_addition=fused_" + i + "t");
}

print("DONE\nWrote files to " + output_dir + "\n");


Last modified Jul 8, 2021