Reconstruct SiMView Images Using BigStitcher
by Eric Wait, Jul 6, 2021
Overview
There are four steps to this process. First, create a dataset file that allows BigStitcher to interpret the data. Second, run a macro that flips the opposing views so they are roughly aligned. Third, find the best parameters to register the views together. Fourth, register and export the reconstructed data.
Initial Setup
First we must install ImageJ and the BigStitcher plugin. To do so:

1. Install ImageJ.
2. Open ImageJ and use the menu to go to `Help -> Update`.
3. Click on the `Manage update sites` button.
4. Within the list, select `BigStitcher`.
5. Click `Close` and then `Close` again.
6. Close ImageJ.
7. Install the macros for this tutorial:
   - Download the current macros from https://github.com/aicjanelia/SiMView/blob/master/src/imagej/simview_transform.ijm and https://github.com/aicjanelia/SiMView/blob/master/src/imagej/simview_fuse.ijm.
   - Place these files into the plugins folder of ImageJ (`Fiji.app\plugins`).
8. Verify that `BigStitcher` and the macros have been installed by looking in the `Plugins` menu.
Create a BigStitcher Dataset
BigStitcher uses XML to store the metadata that describes a dataset. The data within the XML tells BigStitcher how to interpret the images as a multi-dimensional image. We are going to use the batch-processing functionality of BigStitcher to reduce the number of popup windows.
1. Open ImageJ and navigate to the Define Dataset menu of BigStitcher: `Plugins -> BigStitcher -> Batch Processing -> Define dataset...`
2. Point BigStitcher to the directory you would like to process. You can browse to the directory by clicking on `Browse`, you can type in the path, or you can drag a folder into the text box.
3. If you selected the root directory, there are most likely two subdirectories within this folder, `Config` and `SPM00`. The images that you will want to combine reside in `SPM00`. Select this directory as the experiment.
4. Now BigStitcher will look at each of the images to gather the dimensions of the dataset.
   - This can take a long time depending on your connection to the images. In other words, this step will take longer for images on a remote server and less time for images stored on a fast hard drive connected to the computer. The log window will say `All files found.` when this step is complete.
5. Once all of the images have been evaluated, BigStitcher needs to know how to read the images for further processing. In this case select `SimView Dataset Loader (RAW)` and create a name for this dataset. You can change the name from `dataset.xml` to something more meaningful or leave it as is.
   - Do not use special characters or spaces in the name, and include `.xml` at the end.
6. Click `OK`.
7. A window will open showing the parameters that BigStitcher has learned from the images and metadata. However, two of them are typically incorrect. Open one of the frames in the dataset; these are typically stored in the `SPM00/TM000000/ANG00` directory. Look at the number of Z slices captured: this is the max value after `PLN` in the file names, and it will be the `Z Size` in the `Image loading` section. Remember that these numbers start at zero, so add one to the number after `PLN`, e.g. `PLN0120.tif` turns into `121` for `Z size`. Also, ensure that the `Pixel distance Z` is correct for this dataset.
8. Click `OK`. The log window should now report that the dataset has been saved and the location it was saved to.
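The Z Size arithmetic described above can be double-checked with a short script. This is only an illustrative sketch, not part of the tutorial's macros, and the full file names used below are hypothetical examples in the `PLN####.tif` style shown above.

```python
import re

def z_size_from_filenames(filenames):
    """Z size = (largest PLN index among the frame's files) + 1,
    because plane numbering starts at zero."""
    planes = [int(m.group(1)) for name in filenames
              for m in re.finditer(r"PLN(\d+)", name)]
    return max(planes) + 1

# e.g. a frame whose last plane is PLN0120.tif has a Z size of 121
```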
Flip Images from Second Camera
The images from one of the cameras will have the X axis in the reverse direction compared to the opposing camera. To correct this, we will use the macro called `simview transform`.
1. Run the macro in ImageJ by using the menu `Plugins -> simview transform`.
   - If you don't see `simview transform` in this list, refer back to the setup section.
2. A file browser will open. Select the xml file created at the end of creating a dataset.
   - If you did not include `.xml` in step 5 of creating a dataset, the file will not have an xml extension in its name.
3. After clicking `Open` on the file-select dialog, a new window will pop up. Enter the number of pixels in the X dimension. This is necessary for the flipped images to be re-centered.
4. Click `OK`. This step should finish quickly. You will know it is done when the last lines in the Log window state "Done applying transforms" and "Saved xml" with the path where it was saved.
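Conceptually, the two transforms the macro appends reduce to x' = width − x: a mirror about x = 0 (the −1 affine term), followed by a translation by the image width that brings the view back into frame. A minimal numeric sketch (the width value is just an example):

```python
def flip_and_recenter(x, width):
    """Mirror an X coordinate and shift it back by the image width,
    mimicking the flip + translation pair the macro appends."""
    return -x + width

# points near the left edge map near the right edge, and vice versa
```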
Register the Views
Now that the opposing views face the same way, we can register (align) the views so that objects completely overlap in the final image. This will be done in a few steps: the first alignment is a coarse adjustment, followed by finer adjustments until you are satisfied with the results.
1. First we must open the dataset with BigStitcher through the menu `Plugins -> BigStitcher -> BigStitcher`.
2. A window will pop up asking for the location of the dataset. You can browse to it by clicking on `Browse`, you can type in the path, or you can drag a folder into the text box.
3. Click `OK`.
4. The dataset will open in the `Multiview Explorer` window. This is where most of the operations will start from, mainly through a right-click menu.
Open Images for Viewing
Open a viewer to evaluate the registration process.

1. Start viewing the data by selecting the first frame. In the above image, you will see that all of the `TimePoint` zero rows are highlighted. This is done by clicking on the first row, holding down the shift key, and clicking on the fourth row. Now right-click on one of the highlighted rows and select `Display in BigDataViewer (on/off)`. This will open a BigDataViewer window.
   - It can take a long time for this new window to display data, depending on how fast your connection is to the image data.
2. The initial view will show each view in white, which is not that helpful. Click on the `Multiview Explorer` window and press the `C` key to toggle through the various color schemes as shown below.
3. Even with a different color scheme, you may find the image to be too bright. There is a hidden interface on the right side of the view window. Move your mouse to the right edge of the view window and an arrow will pop out. Click on it to get a brightness control.
   - Select all of the views (rows) in the top panel.
   - Change the max value until there is contrast throughout the image.
   - Click on the arrows on the left of this panel to hide it again. The images should look much better now.
Create Interest Points
BigStitcher uses interest points in each image to find the best transformation to match one image to the other. We will make two sets of interest points that will be used with different registration techniques for the coarse and fine adjustments.
1. Right-click on the highlighted rows of the first frame and select `Detect Interest Points`.
2. The first set of interest points will be used for the largest movement between the images and does not need to be too precise. So that we know which interest points we are using later, we will label these points `coarse`. For these interest points to actually be coarser than the others, we will downsample Z by two times.
3. In the next window, select a view that you know to have good signal. You will select a different view when creating the fine interest points. Often the first view is sufficient.
4. Once the images have been read in, two windows will open: one for changing the interest point parameters, and one for viewing the results given those parameters.
   - These may take some time to open.
5. Change the slider on the bottom of the image viewer until you get to a plane that has clear objects to be detected.
6. Move or draw a box that covers the objects to be detected. The image window showing the points uses the standard ImageJ tools for manipulation (e.g. zooming, selecting, etc.).
7. Move the top slider, called `Sigma 1`, until the green circles are about the size of the objects.
8. Move the bottom slider, called `Threshold`, until the false positives are removed.
9. When satisfied, click `Done`.
10. It might take some time to detect the interest points, depending on the speed of your computer. Open the `Interest Point Explorer` by right-clicking on the highlighted rows in the Multiview Explorer to see how many points were detected.
11. Repeat the steps above, changing the label to `fine`, the downsampling to one, and selecting a different view.
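The detector behind these sliders is difference-of-Gaussians based, which is why its two main controls are a size (`Sigma 1`) and a `Threshold`. The sketch below is an illustrative 2D stand-in, not BigStitcher's actual implementation: it band-pass filters the image at the chosen scale and keeps bright local maxima.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detect_interest_points(img, sigma1, threshold):
    """Difference-of-Gaussians blob detection: respond to objects
    roughly sigma1 pixels in scale, then keep bright local maxima."""
    dog = gaussian_filter(img, sigma1) - gaussian_filter(img, sigma1 * 1.6)
    points = []
    for y in range(1, dog.shape[0] - 1):
        for x in range(1, dog.shape[1] - 1):
            patch = dog[y - 1:y + 2, x - 1:x + 2]
            if dog[y, x] > threshold and dog[y, x] == patch.max():
                points.append((y, x))
    return points
```

Raising `threshold` prunes false positives, just as the bottom slider does in the detection window.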
Register Views
Now that we have interest points calculated, we can use them to register the views together. We will use the coarse interest points to make a big change to get the views relatively close. We will then use the fine interest points to make incremental changes to finalize the registration.
1. From the Multiview Explorer, right-click on one of the selected rows and select `Register using Interest Points...`.
2. For the coarse movement, select the `Precise descriptor-based (translation invariant)` algorithm.
3. Below that, select the coarse interest points.
4. Click `OK`.
5. Keep the defaults on the next window (`Register: Register Timepoints Individually`).
6. Ensure that `Affine` is being used for the `Transformation Model`.
7. Click `OK`.
8. Keep the defaults for the regularization parameters. Click `OK`.
9. You should see in the BigDataViewer window that the images are now closely aligned. To see how the views are registered along a different axis, click on the BigDataViewer. Use the `shift` key plus the axis you would like to view along, e.g. `shift + X` will view along the X axis. You can also use `shift + ctrl + mouse wheel` to zoom in and out of the current view. The `mouse wheel` alone will traverse the axis you are viewing along. If you ever lose the image in the viewer, click back on the Multiview Explorer window and type `R`. This will reset the translations along the axis you are viewing. Switch to each of the axes in BigDataViewer, pressing `R` each time, to completely reset your view back to default.
10. Now that we have a relatively good registration, we can refine it further. Follow the above steps but change the following:
    - The registration algorithm should now be `Assign closest-points with ICP (no invariance)`.
    - Change to the `fine` interest points.
    - Change the transformation model to `Affine`.
    - The first time, change the `Maximal distance for correspondence (px)` to 7. If you run the fine adjustments again, you can lower this to 5 or even 3.
11. Repeat the refinement step until you are satisfied with the results.
12. Click `Save` in the Multiview Explorer. You have now registered the first frame of the movie.
Apply Registration and Export Images
Up to this point, you have registered the views on the first frame. This next step will apply the same registration across the entire time-lapse. Additionally, for future viewing and processing, it is desirable to fuse these views into a single stack per frame and channel. For this, we will use the macro called `simview fuse`.
1. Run the macro in ImageJ by using the menu `Plugins -> simview fuse`.
   - If you don't see `simview fuse` in this list, refer back to the setup section.
2. A file browser will open. Select the `xml` file used for the steps above.
3. Another window will open. Enter the number of frames to process. This will start from the first frame and fuse all frames up to the frame entered here.
4. This process will take a significant amount of time based on your connection to the data and your processor. To monitor the progress, you can watch the `fused` directory. This directory will be in the same place as your `xml` file. All fused images will be placed here. Once you see the last frame in this folder, you are done.
Complete
Congratulations! At this point you have reconstructed a SiMView dataset.
Code
simview_transform.ijm
```
// Ask for the BigStitcher dataset xml created in the steps above.
dataset_file = File.openDialog("Select xml file");

// Ask for the image width in X, needed to re-center the flipped views.
Dialog.create("X dimension");
Dialog.addNumber("X dimension", 2048);
Dialog.show();
x_dimension = Dialog.getNumber();

// Flip the second camera's views along X by appending an affine
// transform whose X scale term is -1.
run("Apply Transformations",
    "select=" + dataset_file + " " +
    "apply_to_angle=[Single angle (Select from List)] " +
    "apply_to_channel=[All channels] " +
    "apply_to_illumination=[All illuminations] " +
    "apply_to_tile=[All tiles] " +
    "apply_to_timepoint=[All Timepoints] " +
    "processing_angle=[angle 0-1] " +
    "transformation=Affine " +
    "apply=[Current view transformations (appends to current transforms)] " +
    "same_transformation_for_all_timepoints " +
    "same_transformation_for_all_channels " +
    "all_timepoints_all_channels_illumination_0_angle_0-1=[-1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0]");

// Translate the flipped views back into the image by the width in X.
run("Apply Transformations", "select=" + dataset_file + " " +
    "apply_to_angle=[Single angle (Select from List)] " +
    "apply_to_channel=[All channels] " +
    "apply_to_illumination=[All illuminations] " +
    "apply_to_tile=[All tiles] " +
    "apply_to_timepoint=[All Timepoints] " +
    "processing_angle=[angle 0-1] " +
    "transformation=Translation " +
    "apply=[Current view transformations (appends to current transforms)] " +
    "same_transformation_for_all_timepoints " +
    "same_transformation_for_all_channels " +
    "all_timepoints_all_channels_illumination_0_angle_0-1=[" + x_dimension + ", 0.0, 0.0]");

print("Done applying transforms.\nSet registration for first frame and then run fuse macro.");
```
simview_fuse.ijm
```
// Ask for the BigStitcher dataset xml and how many frames to fuse.
dataset_file = File.openDialog("Select xml file");
Dialog.create("Number of frames to fuse");
Dialog.addNumber("Number of Frames", 1);
Dialog.show();
num_frames = Dialog.getNumber();

// Copy the registration of the first timepoint to all timepoints.
run("Duplicate Transformations",
    "apply=[One timepoint to other timepoints] " +
    "select=" + dataset_file + " " +
    "apply_to_angle=[All angles] " +
    "apply_to_channel=[All channels] " +
    "apply_to_illumination=[All illuminations] " +
    "apply_to_tile=[All tiles] " +
    "source=0 " +
    "target=[All Timepoints] " +
    "duplicate_which_transformations=[Replace all transformations]");
print("Finished applying transforms.\n");

// Write the fused stacks into a "fused" directory next to the xml file.
output_dir = File.getDirectory(dataset_file);
output_dir = output_dir + File.separator() + "fused";
if (!File.exists(output_dir))
{
    File.makeDirectory(output_dir);
    print("Making directory " + output_dir);
}

// Fuse each timepoint (ids 1 through num_frames - 1) into a single
// stack and save it as a compressed TIFF.
for (i = 1; i < num_frames; i++)
{
    run("Fuse dataset ...",
        "select=" + dataset_file + " " +
        "process_angle=[All angles] " +
        "process_channel=[All channels] " +
        "process_illumination=[All illuminations] " +
        "process_tile=[All tiles] " +
        "process_timepoint=[Single Timepoint (Select from List)] processing_timepoint=[Timepoint " + i + "] " +
        "bounding_box=[Currently Selected Views] " +
        "downsampling=1 " +
        "pixel_type=[16-bit unsigned integer] " +
        "interpolation=[Linear Interpolation] " +
        "image=[Precompute Image] " +
        "produce=[All views together] " +
        "fused_image=[Save as (compressed) TIFF stacks] " +
        "output_file_directory=" + output_dir + " filename_addition=fused_" + i + "t");
}
print("DONE\nWrote files to " + output_dir + "\n");
```
Last modified Jul 8, 2021