Stabilize and Select Region of Interest Using BigStitcher


by Eric Wait
Jul 30, 2020
data-and-analysis

Who this is for #

There are times when your sample moves quickly through the Field of View (FOV) while your interest lies within the sample. For example, you may want to track vesicles within a cell while that cell is migrating across the FOV. It is difficult to fully understand the vesicle motion without the cellular motion removed; ideally, the cell would remain fixed in the FOV. One technique is to register each frame to the next, making the objects appear stationary within the FOV. The BigStitcher plugin for ImageJ was originally designed to register or stitch together large static images. It has since been extended to stabilize time-lapse images by registering between frames.

This tutorial will walk you through a typical Lattice Lightsheet movie in which we would like to remove the sample movement. This same technique is broadly applicable to any set of images that can be read into ImageJ. The scenario presented here is as follows:

  1. Remove sample motion from time-lapse,
  2. Select a Region of Interest (ROI),
  3. Export stabilized ROI to be imported into another software solution such as Imaris.

Initial Setup #

First we must install ImageJ and the BigStitcher plugin. To do so:

  1. Install ImageJ
  2. Open ImageJ and use the menu to go to Help -> Update
  3. Click on the Manage update sites button
  4. Within the list select BigStitcher

    Select BigStitcher from the Update Sites

  5. Click Close and then Close again.
  6. Restart ImageJ
  7. Verify that BigStitcher has been installed by looking in the Plugins menu

    Menu item to start BigStitcher


Make a BigStitcher Data File #

BigStitcher uses an XML file to understand how the data is stored and to track any internal processing, so you will need to generate one of these per dataset. BigStitcher also allows you to convert your data into a multi-resolution data type. See our discussion here to help you determine which file format is right for you. Even though this exercise will ultimately export tiff files, it is best to work with a multi-resolution HDF5 file in BigStitcher. This will allow for interactive viewing of the dataset, which is key for making the ROI selection.

When you first open the BigStitcher plugin, you will be presented with the following window. This window allows you to either open a dataset that has previously been created or create a new one now. You will want to click on Define a new dataset.

The next window will ask for the best way to read in the images. In our case, we are reading tiff files, so the Automatic Loader will be best. This is also where you should name the output file.

This next window asks where your images are located. After you click Browse, choose the folder, not an individual file. You will know the reader has found all of your files when they appear in the list below Selected files. After you click OK, there may be a considerable delay before the next window appears.

If it has been a long time since the last window and the next one still has not appeared, check the Log window for activity. If there are still images to be read, the Log window will show the path to the last image read. This should give you an indication of how much longer this step will take. The timing of many of the subsequent steps is determined by your storage speed. If the files are on a network drive, this step may take a considerable amount of time (hours).

The next window asks for help understanding how the name of each file maps to different dimensions. In our case we have a channel field that starts with _c and is zero padded, as indicated in yellow in the figure below. The time field starts with _t and is zero padded, as indicated in red below. Make the appropriate selections in the dropdown boxes.

Optionally, you can define the space that is represented by each voxel. Here I have filled out values that correspond to a typical LLSM dataset.

Because we do not have any tiles, we will tell BigStitcher not to move them. Click OK to move to the next window.

The next step is to tell BigStitcher how we would like the image data to be stored relative to the internal xml file. To save space, you can leave your files as is. However, we will ultimately want to look through the data to create a bounding box for ROI extraction. Keeping the image data as tiff files will make this process slow and painful. So, I recommend that you convert your data into a multi-resolution HDF5 file. Make sure that your destination for this file is large enough to store the entire dataset.

This next window allows you to change how the HDF5 file is created. Leave the defaults. The only change needed here is to select the output directory where you want the HDF5 file stored. Remember to place this somewhere that can accommodate your entire dataset.

Converting may take a long time depending on your computational power and storage speed. On large datasets, it is not unreasonable to see this step take more than five hours. Slow network speeds or less capable computers may see this time double. Please be patient. The log window is a good indicator of how far along this process is.
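For reference, the dataset definition and HDF5 conversion above correspond to the Define dataset call in the full macro at the end of this post. This abridged fragment shows the key parameters; the directory paths and project name are placeholders you would replace with your own:

```ijm
// Convert tiff files into a multi-resolution HDF5 dataset (abridged; paths are placeholders)
run("Define dataset ...",
	"define_dataset=[Automatic Loader (Bioformats based)] " +
	"project_filename=myproject.xml " +
	"path=/data/timelapse " +
	"pattern_0=Channels " +
	"pattern_1=TimePoints " +
	"modify_voxel_size? " +
	"voxel_size_x=0.104 voxel_size_y=0.104 voxel_size_z=0.26348 voxel_size_unit=um " +
	"how_to_load_images=[Re-save as multiresolution HDF5] " +
	"dataset_save_path=/data/bigstitcher " +
	"use_deflate_compression " +
	"export_path=/data/bigstitcher/myproject"
);
```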


Create Interest Points #

Next we are going to find interest points within the image that will be registered between frames. Before this step, you will want to know which channel contains objects that either are static or that you would like to be static relative to the rest of the data. For our example, channel one has a membrane marker and we want the cells to appear as if they are not moving in the field of view.

Click on the Channel column in the Multiview Explorer window to group the channels all together.

Select the first frame of the first channel (in this case) and then scroll to the last frame. Hold down the shift key and click on the row that indicates the last frame.

Right click on one of the blue rows and select Detect Interest Points.

A new window will open. The only change here is to name the interest points. Here I’ve called these interest points edges. Then click OK.

On this next window, just ensure that you are using the CPU version in the last dropdown. There are GPU versions, but they require additional setup beyond the scope of this tutorial. Then click OK.

On this next window, you can select the time point on which you would like to make your adjustments. I’ve selected a frame about a quarter of the way into the sequence to account for any extreme photobleaching that may have occurred. Then click OK.

The next two windows let you refine the parameters used to identify the interest points. The sigma sliders adjust the size of the spots that will be detected. The threshold slider controls how strong the contrast must be for a spot to be detected.

Move these around and scroll through the Z position by using the slider at the bottom of the image window. Once you are satisfied with the current results, click Done.

There may be a lag as BigStitcher collects the data to run the processing. Use the Log window as a progress indicator of which frame is currently being processed.

When this completes, don’t forget to click save in the Multiview Explorer window.
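These steps are scripted by the Detect Interest Points call in the full macro at the end of this post. This abridged fragment shows the settings chosen above (the edges label, CPU processing, and the sigma/threshold values from the interactive adjustment); the xml path is a placeholder:

```ijm
// Detect interest points on channel 1 of every frame (abridged; xml path is a placeholder)
run("Detect Interest Points for Registration",
	"select=/data/bigstitcher/myproject.xml " +
	"process_timepoint=[All Timepoints] " +
	"process_channel=[Single channel (Select from List)] " +
	"processing_channel=[channel 1] " +
	"type_of_interest_point_detection=Difference-of-Gaussian " +
	"label_interest_points=edges " +
	"sigma=6.5 threshold=0.019 find_maxima " +
	"compute_on=[CPU (Java)]"
);
```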


Register Images #

Now that we have interest points in channel 1, we would like those points to appear static between each frame. To do this, we need to select the same frames that we created the interest points on. Then right click on one of the rows highlighted in blue. Select Register using Interest Points.

The next window is used to define the registration parameters. For this example, we are trying to remove the movement of cells through the field of view. The motion in this case is purely translational, meaning the sample moves as a rigid object in X, Y, and Z. Because this is the expected motion, we will select the Precise descriptor-based (translation invariant) algorithm. We also want the cells to appear static through the entire sequence, so we will select All-to-all timepoints matching (global optimization) as the time option.

Also ensure that you have the proper interest points selected. We named our interest points edges in the previous section. Then click OK.

On the next window there are four changes to make.

  1. Check Consider each timepoint as a rigid unit
  2. Change the transformation model to Translation
  3. Uncheck Regularize model
  4. Change the RANSAC error to 7

Then click OK.

Leave this window with its defaults. Then click OK.

After grouping, there is a delay before you see any processor use. Watch the Log window to know which frame is currently being processed. When this process is done, you will get a graph showing the registration quality for each frame.

When this completes, don’t forget to click save in the Multiview Explorer window.

To check to see how many of the interest points were used in the registration, right click on a frame and choose Interest Point Explorer (on/off).
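The registration settings above (the translation model, each timepoint as a rigid unit, a RANSAC error of 7, and all-to-all timepoint matching) appear in the Register Dataset call of the full macro at the end of this post. An abridged fragment, with a placeholder xml path:

```ijm
// Register channel 1 across all frames using the "edges" interest points (abridged)
run("Register Dataset based on Interest Points",
	"select=/data/bigstitcher/myproject.xml " +
	"process_timepoint=[All Timepoints] " +
	"processing_channel=[channel 1] " +
	"registration_algorithm=[Precise descriptor-based (translation invariant)] " +
	"registration_over_time=[All-to-all timepoints matching (global optimization)] " +
	"interest_points=edges " +
	"consider_each_timepoint_as_rigid_unit " +
	"transformation=Translation " +
	"allowed_error_for_ransac=7"
);
```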

Apply the registration to all channels #

We currently only have interest points and registration data for the first channel. We would like all channels to be registered as a single rigid unit. To do this, we will copy the registration from the first channel to all of the others. Make sure you have clicked save in the Multiview Explorer window.

Now we close all of the BigStitcher windows. Once all of the windows are closed, use the Plugins menu to navigate to:

Multiview Reconstruction -> Batch Processing -> Tools -> Duplicate Transformations.

We want to apply the transformation of One channel to the other channels. Then click OK.

We need to select the xml file that we created back in Make a BigStitcher Data File. Click the Browse button and then select the appropriate xml file. Leave the rest as defaults. Click OK.

On this last window, change the source channel to the one that contains the registration. In our case it is channel 1. Then click OK.

This process should be quite quick. You can verify that all of the channels are registered by opening the dataset in BigStitcher again. Use Plugins -> BigStitcher -> BigStitcher. Use the Browse button to select the xml file if the path listed is incorrect. When you scroll through the sequence, you should see all of the channels move in concert.
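This step is also captured in the full macro at the end of this post; the Duplicate Transformations call below copies channel 1’s registration to all channels (the xml path is a placeholder):

```ijm
// Copy the registration from channel 1 to every other channel (abridged)
run("Duplicate Transformations",
	"apply=[One channel to other channels] " +
	"select=/data/bigstitcher/myproject.xml " +
	"apply_to_timepoint=[All Timepoints] " +
	"source=1 target=[All Channels] " +
	"duplicate_which_transformations=[Replace all transformations]"
);
```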


Create Bounding Box for Region of Interest (ROI) #

In this example, we do not need the entire field of view (FOV). A large FOV is captured knowing that the interesting cells will transit through the view of the microscope. By selecting out just a small region that contains our cell of interest, we can reduce the data set considerably. This means that any subsequent analysis using this ROI will be much faster. To do this we first make a bounding box that contains our ROI.

Open the data in BigStitcher that you would like to extract an ROI from. Select a frame that has the clearest view of the object you are interested in. Here we choose all of the channels so that we can compare the structures on this frame.

If the image is not opened in a viewer, you can right click on one of the selected rows and choose Display in BigDataViewer (on/off). Then move your mouse to the middle of the rightmost side of the viewer. You should see a blue button appear on the right side as shown below. Click on this to change your viewing preferences.

On this window, I realized that I was only interested in the first (0) and the fourth (3) channels, so I deactivated the others. You can click in the rightmost column to change the color of each channel to whatever gives you the best contrast. I prefer green and magenta because they are similar in brightness to my eye, and when you add them together you get white. While you have a channel row selected, you can use the two dots on the slider in the middle to change the brightness of that color’s display. Once you have the view you prefer, click on the blue button to collapse this panel.

Now go back to the Multiview Explorer window and right click on one of the selected rows. Choose Define Bounding Box from this menu.

This will bring up a window that will ask you how you want to create a bounding box. I am going to show how to create this in the interactive mode. Name your bounding box something unique so that you can distinguish between multiples on the same dataset. I’m only going to make one so I’ll just call it roi1. Click OK.

Now you will see a box within your image in the viewer and a control window. Use the control window to restrict both X and Y. Rotate the volume to then restrict Z. Don’t forget to check your results on different frames. When you are satisfied, click OK.

The next window will show you what the resulting image dimensions will be. Click OK.

Don’t forget to click save in the Multiview Explorer window.
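In the full macro at the end of this post, this step is the Define Bounding Box call; it still opens the interactive viewer so you can drag out the box as described above (the xml path and ROI name are placeholders):

```ijm
// Define the ROI bounding box interactively in BigDataViewer (abridged)
run("Define Bounding Box",
	"select=/data/bigstitcher/myproject.xml " +
	"process_timepoint=[All Timepoints] " +
	"process_channel=[All channels] " +
	"bounding_box=[Define using the BigDataViewer interactively] " +
	"bounding_box_name=roi_1"
);
```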


Export Images #

In this example, the intent is to get ROIs for processing in a separate software package. Unfortunately, not many other software packages will read the HDF5 file that was created by BigStitcher. So what we will do is export only the ROI information to tiff files. To do this, select all of the frames and channels (you can just use Ctrl-A in the Multiview Explorer window). Then right click and select Image Fusion.

This next window defines the export parameters. The important ones are:

  1. Choose the bounding box that you defined in Create Bounding Box
  2. Change the pixel type to 16-bit unsigned integer
  3. Check the Preserve original data anisotropy
  4. Change the output to tiff stacks in the Fused image dropdown.

Click OK.

The last window will ask where you would like the ROI images to go. Now you have registered ROI sequences that are much more manageable to process and work with generally. If you have multiple datasets to process and would like to set the parameters for each, follow the automated process below.
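The export settings above map onto the Fuse call in the full macro at the end of this post. An abridged fragment, with placeholder paths:

```ijm
// Fuse the registered data within the bounding box and save as tiff stacks (abridged)
run("Fuse",
	"select=/data/bigstitcher/myproject.xml " +
	"process_timepoint=[All Timepoints] " +
	"process_channel=[All channels] " +
	"bounding_box=roi_1 " +
	"pixel_type=[16-bit unsigned integer] " +
	"preserve_original " +
	"produce=[Each timepoint & channel] " +
	"fused_image=[Save as (compressed) TIFF stacks] " +
	"output_file_directory=/data/bigstitcher/roi_1"
);
```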


TL;DR Automate It for Me #

Going through all of the above steps can be daunting if you have a lot of movies to process. If your data looks and behaves similarly to the example data above, you can use an ImageJ macro. The only two places where you will have to interact with the data are:

  1. Selecting the dataset to process
  2. Defining the bounding box for the ROI.

I have included the code as it is today at the end of this post. However, if you want to check for an updated version, it will be on GitHub:

Find this on GitHub
/*************************************************************
 * USER DEFINES
 ************************************************************/
input_dir = getDirectory("Choose TimeLapse Directory");

Dialog.create("BigStitcher Setup");
Dialog.addString("Image Directory:", input_dir, 64);
Dialog.addString("Output Directory Name:", "bigstitcher", 32);
Dialog.addString("ROI Name:", "roi_1");
Dialog.addNumber("Voxel Size xy:", 0.104);
Dialog.addNumber("Voxel Size z:", 0.26348);
Dialog.show();

input_dir = Dialog.getString();
output_dir_name = Dialog.getString();
roi_dir_name = Dialog.getString();
voxel_size_xy = Dialog.getNumber();
voxel_size_z = Dialog.getNumber();

root_dir = File.getParent(input_dir) + File.separator;
proj_str = File.getName(root_dir);
date_dir = File.getName(File.getParent(root_dir));
project_name = date_dir + "_" + proj_str;

output_dir = root_dir + output_dir_name + File.separator;
roi_output_dir = output_dir + roi_dir_name + File.separator;
output_data_name = output_dir + project_name;

print("Input dir: " + input_dir + "\nOutput dir: " + output_dir + "\nROI dir: " + roi_output_dir + "\nProject Name: " + project_name + "\nOutput Data Name: " + output_data_name + "\nVoxel Size: (" + voxel_size_xy + "x, " + voxel_size_xy + "y, " + voxel_size_z + "z)\n\n");
/************************************************************
************************************************************/

// make output directory if it doesn't already exist
if (!File.isDirectory(output_dir))
{
	File.makeDirectory(output_dir);
}
if (!File.isDirectory(roi_output_dir))
{
	File.makeDirectory(roi_output_dir);
}

// Convert data from tiff files to HDF5 for interactive viewing in BigDataViewer
run("Define dataset ...",
	"define_dataset=[Automatic Loader (Bioformats based)] " +
	"project_filename=" + project_name + ".xml " + 
	"path=" + input_dir + " " +
	"exclude=10 " +
	"pattern_0=Channels " +
	"pattern_1=TimePoints " + 
	"modify_voxel_size? " +
	"voxel_size_x=" + voxel_size_xy + 
	" voxel_size_y=" + voxel_size_xy + 
	" voxel_size_z=" + voxel_size_z + 
	" voxel_size_unit=um " + 
	"move_tiles_to_grid_(per_angle)?=[Do not move Tiles to Grid (use Metadata if available)] " + 
	"how_to_load_images=[Re-save as multiresolution HDF5] " + 
	"dataset_save_path=" + output_dir + " " +
	"subsampling_factors=[{ {1,1,1}, {2,2,1}, {4,4,2} }] " +
	"hdf5_chunk_sizes=[{ {32,16,8}, {16,16,16}, {16,16,16} }] " +
	"timepoints_per_partition=1 " +
	"setups_per_partition=0 " + 
	"use_deflate_compression " +
	"export_path=" + output_data_name
);

// Calculate the interest points
run("Detect Interest Points for Registration",
	"browse=" + output_dir + " " +
	"select=" + output_data_name + ".xml " +
	"process_angle=[All angles] " +
	"process_channel=[Single channel (Select from List)] " +
	"process_illumination=[All illuminations] " +
	"process_tile=[All tiles] " +
	"process_timepoint=[All Timepoints] " +
	"processing_channel=[channel 1] " +
	"type_of_interest_point_detection=Difference-of-Gaussian " +
	"label_interest_points=edges " +
	"subpixel_localization=[3-dimensional quadratic fit] " +
	"interest_point_specification=[Advanced ...] " +
	"downsample_xy=[Match Z Resolution (less downsampling)] downsample_z=1x " +
	"sigma=6.5 " +
	"threshold=0.019 " +
	"find_maxima " +
	"compute_on=[CPU (Java)]"
);

// Register between interest points
run("Register Dataset based on Interest Points",
	"select=" + output_data_name + ".xml " +
	"process_angle=[All angles] " +
	"process_channel=[Single channel (Select from List)] " +
	"process_illumination=[All illuminations] " +
	"process_tile=[All tiles] " +
	"process_timepoint=[All Timepoints] " +
	"processing_channel=[channel 1] " +
	"registration_algorithm=[Precise descriptor-based (translation invariant)] " +
	"registration_over_time=[All-to-all timepoints matching (global optimization)] " +
	"registration_in_between_views=[Only compare overlapping views (according to current transformations)] " +
	"interest_points=edges " +
	"consider_each_timepoint_as_rigid_unit " +
	"fix_views=[Fix first view] " +
	"map_back_views=[Do not map back (use this if views are fixed)] " +
	"transformation=Translation " +
	"number_of_neighbors=3 " +
	"redundancy=1 " +
	"significance=3 " +
	"allowed_error_for_ransac=7 " +
	"ransac_iterations=Normal " +
	"show_timeseries_statistics " +
	"interestpoint_grouping=[Group interest points (simply combine all in one virtual view)] " +
	"interest=5"
);

// Duplicate Transformations
run("Duplicate Transformations",
	"apply=[One channel to other channels] " +
	"select=" + output_data_name + ".xml " + 
	"apply_to_angle=[All angles] " +
	"apply_to_illumination=[All illuminations] " +
	"apply_to_tile=[All tiles] " +
	"apply_to_timepoint=[All Timepoints] " +
	"source=1 target=[All Channels] " +
	"duplicate_which_transformations=[Replace all transformations]"
);

// create a bounding box
run("Define Bounding Box",
	"select=" + output_data_name + ".xml " +
	"process_angle=[All angles] " +
	"process_channel=[All channels] " +
	"process_illumination=[All illuminations] " +
	"process_tile=[All tiles] " +
	"process_timepoint=[All Timepoints] " +
	"bounding_box=[Define using the BigDataViewer interactively] "+
	"bounding_box_name=" + roi_dir_name
);

// export the ROI defined by the bounding box
run("Fuse",
	"select=" + output_data_name + ".xml " +
	"process_angle=[All angles] " +
	"process_channel=[All channels] " +
	"process_illumination=[All illuminations] " +
	"process_tile=[All tiles] " +
	"process_timepoint=[All Timepoints] " +
	"bounding_box=" + roi_dir_name + " " +
	"downsampling=1 " +
	"pixel_type=[16-bit unsigned integer] " +
	"interpolation=[Linear Interpolation] " +
	"image=[Precompute Image] " +
	"interest_points_for_non_rigid=[-= Disable Non-Rigid =-] " +
	"blend preserve_original " +
	"produce=[Each timepoint & channel] " +
	"fused_image=[Save as (compressed) TIFF stacks] " +
	"output_file_directory=" + roi_output_dir + " " +
	"filename_addition=[]"
);


Last modified Jul 4, 2021