Fluorescent Microscopy (Methods in Molecular Biology, 2440) 1071620509, 9781071620502

This volume provides both experienced and new microscopists with methods and protocols to perform fluorescence microscopy.


English Pages 382 [369] Year 2022


Table of contents :
Preface
Contents
Contributors
Part I: Basic Fluorescence Imaging
Chapter 1: Fluorescence Microscopy: A Field Guide for Biologists
1 Introduction
2 Foundational Concepts
2.1 The Photophysical Properties of Fluorescence Probes
2.2 The Light Path of a Standard Fluorescence Microscope
2.3 Magnification, Spatial Resolution, and Sampling
2.4 Capturing and Digitizing Images
3 Transferring Concepts to Practice: The Fluorescence Imaging Experimental Workflow
3.1 Define Problem
3.2 Select Model and Optimize Sample Preparation
3.3 Select Imaging Technique
3.4 Optimize Image Acquisition
3.4.1 Know your Light Path(s)
3.4.2 Select Filter Sets
3.4.3 Place Sample and Focus
3.4.4 Capture Images
3.5 Analyze Data and Interpret Results
3.6 Act on Extracted Knowledge
4 Next Steps
References
Chapter 2: Three-Dimensional Simultaneous Imaging of Nucleic Acids and Proteins During Influenza Virus Infection in Single Cells Using Confocal Microscopy
1 Introduction
2 Materials
2.1 Fluorescence In Situ Hybridization (FISH) and Immunofluorescence
2.2 Image Acquisition
2.3 Image Analysis and Deconvolution
3 Methods
3.1 Sample Preparation for FISH and Immunofluorescence Staining
3.2 Multicolor Fluorescence In Situ Hybridization and Immunofluorescence
3.3 Determining the Point Spread Function of the Microscope
3.4 Image Acquisition
3.5 Image Deconvolution
3.6 Quantification of Colocalization Between Two or more Fluorescently Labeled Targets
3.6.1 Defining Cellular Boundaries
3.6.2 Channel Masking to Include or Exclude Subcellular Region of Interest
3.6.3 Define Spots for all Channels to Be Analyzed
4 Notes
References
Chapter 3: Optimizing Long-Term Live Cell Imaging
1 Introduction
2 Materials
2.1 J774A.1 Cell Culture and Transfection
2.2 Preparation of IgG-Opsonized Beads
2.3 Preparation of Fluorescently Labeled Bacteria
2.4 Microscopy
3 Methods
3.1 Preparation of J774A.1 Macrophages
3.1.1 J774A.1 Cell Culture
3.1.2 J774A.1 Transfection
3.2 Preparation of Phagocytic Target Mimics
3.2.1 Preparation of IgG-Opsonized Beads
3.2.2 Preparation of Fluorescently Labeled E. coli BL21
3.3 Optimizing Live Cell Imaging
3.3.1 General Considerations: Signal Strength and Temporal Resolution
3.3.2 General Considerations: Maintaining the Focal Plane and Sample Integrity
3.3.3 Special Consideration: 3D Live Cell Imaging
3.4 Configuring Acquisition Parameters for Imaging Phagocytosis
3.5 Quantification of Localization of Fluorescent Markers
4 Notes
References
Part II: Quantitative Approaches
Chapter 4: Monitoring Transmembrane and Peripheral Membrane Protein Interactions by Förster Resonance Energy Transfer Using Fluorescence Lifetime Imaging Microscopy
1 Introduction
1.1 Fluorescence
1.2 Förster Resonance Energy Transfer (FRET)
1.3 Fluorescence Lifetime Microscopy (FLIM)
2 Materials
2.1 Cell Culture
2.2 Vectors
2.3 Transfection Reagents
2.4 Microscopy
3 Methods
3.1 Cell Culture
3.2 Cloning, Transformation, and Plasmid Purification
3.3 Transient Transfection
3.4 Imaging Media
3.5 Fluorescence Lifetime Imaging (FLIM)
3.6 Image Analysis
3.7 Statistical Analysis
4 Notes
References
Chapter 5: Bimolecular Fluorescence Complementation to Visualize Protein-Protein Interactions in Cells
1 Introduction
2 Materials
2.1 Addition of Fluorophore Fragments to Proteins of Interest
2.2 Preparation of Cells
2.3 Immunostaining
2.4 Mounting Coverslips on Slides and Visualizing the Interaction Via Microscopy
3 Methods
3.1 Adding Fluorophore Fragments to Proteins of Interest
3.2 Preparation of Cells
3.3 Immunostaining
3.4 Mounting Coverslips on Slides and Visualizing the Interaction Via Microscopy
4 Notes
References
Chapter 6: Monitoring Cellular Responses to Infection with Fluorescent Biosensors
1 Introduction
2 Materials
2.1 A549 Cell Culture and Apoptosis
2.2 J774A.1 Macrophage Transfection and Culture
2.3 Preparation of Pathogen Mimics
2.4 Microbial Degradation Reporters
2.5 Microscopy
3 Methods
3.1 General Considerations
3.2 Preparation of A549 Cells for Monitoring LPS-Induced Apoptosis
3.3 Culture and Preparation of J774 Macrophages
3.3.1 J774A.1 Cell Culture
3.3.2 Transfection of Biosensors into J774A.1 Cells
3.3.3 Preparation of Pathogen Mimics
3.3.4 Preparation of Bacteria Expressing a Degradation Bioreporter
3.4 Imaging Biosensors During Apoptosis and Phagocytosis
3.4.1 Imaging Apoptosis
3.4.2 Imaging Phagocytic Signaling
3.4.3 Imaging Pathogen Degradation
4 Notes
References
Chapter 7: Quantifying Endothelial Transcytosis with Total Internal Reflection Fluorescence Microscopy (TIRF)
1 Introduction
2 Materials
3 Methods
3.1 Cell Preparation
3.2 Preparation for Imaging
3.3 Coverslip Preparation
3.4 Microscope Setup
3.5 Imaging
3.6 MATLAB Image Processing
4 Notes
References
Chapter 8: Measurement of Minute Cellular Forces by Traction Force Microscopy
1 Introduction
2 Materials
2.1 Preparation of Polyacrylamide Gels
2.2 Functionalization of the Polyacrylamide Gel Surface
2.3 Cell Culture
2.4 Image Acquisition
2.5 Image Analysis
3 Methods
3.1 Preparation of Polyacrylamide Gels
3.2 Functionalization of the Polyacrylamide Gel Surface
3.3 Cell Culture
3.4 Image Acquisition
3.5 Image Analysis
4 Notes
References
Part III: Tissue and Live Animal Imaging
Chapter 9: Quantitative Immunofluorescent Imaging of Immune Cells in Mucosal Tissues
1 Introduction
2 Materials
2.1 Solutions
2.2 Supplies
2.3 Equipment
2.4 Antibodies
3 Methods
3.1 Tissue Processing
3.2 Staining
3.3 Imaging
3.4 Automated Image Analysis for Compact Cells: CD3+ Cells
3.4.1 Preparation of Representative Fields-of-View for Algorithm Optimization
3.4.2 Preprocessing of Whole Tissue Section Scans to Improve Accuracy of Automated CD3+ Cell Counting
3.4.3 Automated CD3+ Cell Counting
3.4.4 High Throughput Automated CD3+ Cell Counting
3.5 Pixel-Based Quantification of Irregularly Shaped Cells: CD4+ Cells
3.5.1 Preparation of Representative Fields-of-View for Pipeline Optimization
3.5.2 Percent Signal Coverage Determination (Fig. 6)
3.5.3 High-Throughput Pixel-Based Quantification of CD4+ Cells
4 Notes
References
Chapter 10: Intravital Microscopy Techniques to Image Wound Healing in Mouse Skin
1 Introduction
2 Materials
2.1 Skin Punch Biopsy Wound Model
2.2 Intravital Microscope
2.3 Extended Surgical Anesthetic Maintenance
2.4 Surgical Tools/Materials
2.5 Labelling Antibodies and/or Contrast Agents
3 Methods
3.1 Wound Generation
3.2 Initiating Imaging Equipment and Software
3.3 Preparing the Animal for IVM
3.3.1 Induction and Maintenance of Anesthesia
3.3.2 Administration of Labeling Reagents
3.3.3 Surgery to Facilitate IVM
3.4 Imaging of the Wound Site
4 Notes
References
Chapter 11: Quantifiable Intravital Light Sheet Microscopy
1 Introduction
2 Materials
2.1 Zebrafish Lines
2.2 Printing and Cleaning Custom Mounts
2.3 Imaging Medium for Zebrafish Embryos
2.4 Mounting Medium and Equipment
2.5 Screening, Short-Term and Long-Term Imaging
2.6 Image Analysis, Segmentation, and Cell Tracking
3 Methods
3.1 Mount Design, Printing, Curing, and Cleaning (2-3 Days)
3.2 Initial Microscope Setup (30 min to 1 h)
3.3 Embryo Screening and Mounting (30 min to 2 h)
3.4 Light Sheet Alignment (5-10 min)
3.5 Spatial Resolution Optimization (1-2 Days)
3.6 Temporal Optimization (1-2 Days)
3.7 Viability and Photobleaching Studies (1-2 Weeks)
4 Notes
References
Chapter 12: Hydrophobic and Hydrogel-Based Methods for Passive Tissue Clearing
1 Introduction
2 Materials
2.1 Mouse Transcardial Perfusion and Fixation
2.2 PACT Clearing
2.3 iDISCO+ Clearing
2.4 Primary and Secondary Antibodies
3 Methods
3.1 Mouse Transcardial Perfusion and Fixation
3.2 PACT Clearing
3.3 iDISCO+ Clearing
4 Notes
References
Chapter 13: Expansion Microscopy of Larval Zebrafish Brains and Zebrafish Embryos
1 Introduction
2 Materials
2.1 Anchoring
2.2 Gelation
2.3 Digestion
2.4 Expansion and Imaging
3 Methods
3.1 Anchoring
3.2 Gelation
3.3 Digestion
3.4 Expansion
3.5 Imaging Recommendations
3.6 Image Processing and Analysis
4 Notes
References
Part IV: Super-Resolution Approaches
Chapter 14: Super-Resolution Radial Fluctuations (SRRF) Microscopy
1 Introduction
2 Materials
2.1 Microscope
2.2 Sample Preparation
2.2.1 Materials
2.2.2 Cell Lines
2.2.3 Fluorescent Dyes, Antibodies, and Plasmids
2.2.4 Immunostaining Reagents
2.3 Software
2.3.1 Image Acquisition Software
2.3.2 Image Processing Software
3 Methods
3.1 Sample Preparation
3.1.1 General Considerations
3.1.2 Fixation and Permeabilization
3.1.3 DNA Staining
3.1.4 Actin Staining
3.1.5 Immunostaining
3.2 Image Acquisition
3.2.1 General Protocol
3.2.2 Detailed Protocol Using SlideBook 6
3.3 Image Processing (SRRF)
3.3.1 General Considerations
3.3.2 Protocol
4 Notes
References
Chapter 15: Sample Preparation for Multicolor STED Microscopy
Abbreviations
1 Introduction
2 Materials
2.1 Buffers
2.2 Antibodies
2.3 Cells
2.4 Reagents
2.5 Lab Ware
2.6 Microscope
3 Methods
3.1 Sample Preparation Considerations
3.2 Choosing Fluorophores for Single Color STED Experiments
3.3 Choosing Fluorophores for Two Color STED Experiments
3.4 General Considerations and Controls for Multichannel Imaging Experiments
3.5 Making Subdiffraction Fluorescent Bead Samples
3.6 Cell Culture
3.7 Immunofluorescence
3.8 Imaging Considerations
3.9 Postimaging Considerations
4 Notes
References
Chapter 16: Single-Molecule Localization Microscopy of Subcellular Protein Distribution in Neurons
1 Introduction
2 Materials
2.1 Cell Culture and Transfection
2.2 HaloTag Labeling
2.3 Fixation and Immunolabeling
2.4 Imaging Buffers and Mounting
2.5 Microscope Setup
2.6 Software
3 Methods
3.1 Transfection of Dissociated Hippocampal Rat Neurons
3.2 Live-Cell HaloTag Labeling
3.3 Fixation
3.4 Immunolabeling
3.5 Sample Preparation and Mounting
3.5.1 Live-Cell PALM
3.5.2 dSTORM
3.6 PALM Imaging
3.7 dSTORM Imaging (on Alexa 647 or HaloLigand JF646)
3.8 Data Processing
3.9 Visualization and Data Analysis
4 Notes
References
Chapter 17: Measuring the Lateral Diffusion of Plasma Membrane Receptors Using Raster Image Correlation Spectroscopy
1 Introduction
2 Materials
2.1 Cell Culture
2.2 Fluorescence Labeling of Receptors
2.3 Calibration of Focal Volume Waist (ω0)
2.4 Microscopy and RICS Analysis
3 Methods
3.1 RAW 264.7 Cell Culture Protocol
3.2 Fluorescence Staining
3.3 Sample Preparation for Calibration of Focal Volume Waist (ω0)
3.4 Microscopy
3.5 RICS Analysis Using SimFCS
4 Notes
References
Chapter 18: Nanometer-Scale Molecular Mapping by Super-resolution Fluorescence Microscopy
1 Introduction
2 Materials
2.1 Coverslips
2.2 Fixation
2.3 Sample Staining
2.4 Sample Imaging
3 Methods
3.1 Experimental Considerations
3.1.1 Imaging Depth of the Structure of Interest
3.1.2 Selection of Reference Marker(s)
3.1.3 Labeling Strategy: Antibodies or Fluorescent Proteins
3.2 3D-SIM Imaging
3.3 STORM Imaging
3.4 Molecular Mapping Analysis
4 Notes
References
Part V: Micrograph Analysis
Chapter 19: Visualizing and Quantifying Data from Time-Lapse Imaging Experiments
1 Introduction
2 Materials
2.1 Mapping Dynamics of Structures
2.2 Plotting Signals from Time-Lapse Imaging
3 Methods
3.1 Mapping Dynamics of Structures
3.1.1 Encoding Dynamics with Temporal Color Coding
3.1.2 Kymographs
3.1.3 Visualizing Fluctuations in Intensity
3.1.4 Visualizing a Change in Area
3.2 Plotting Signals from Time-Lapse Imaging
3.2.1 Visualizing the Dynamics of Intensity
3.2.2 Visualizing Ratiometric Data
4 Notes
References
Chapter 20: Automated Microscopy Image Segmentation and Analysis with Machine Learning
1 Introduction
2 Concepts and Definitions
2.1 Images
2.2 Annotations
2.3 Model
2.4 Training
2.5 Objective Function
2.6 Epoch
2.7 Hyperparameter
2.8 Batch
2.9 Optimizer
2.10 Overfitting
2.11 Early-Stopping
2.12 Regularization
3 Methods
3.1 Preparing the Images
3.1.1 Handling of Images
3.1.2 Number of Samples
3.1.3 Metadata
3.1.4 Normalization of the Images
3.2 Generating the Annotations
3.3 Training the Models
3.3.1 Random Forest
3.3.2 Deep Learning
3.4 Evaluation
4 Notes
References
Index

Methods in Molecular Biology 2440

Bryan Heit Editor

Fluorescent Microscopy

METHODS IN MOLECULAR BIOLOGY

Series Editor
John M. Walker
School of Life and Medical Sciences
University of Hertfordshire
Hatfield, Hertfordshire, UK

For further volumes: http://www.springer.com/series/7651

For over 35 years, biological scientists have come to rely on the research protocols and methodologies in the critically acclaimed Methods in Molecular Biology series. The series was the first to introduce the step-by-step protocols approach that has become the standard in all biomedical protocol publishing. Each protocol is provided in readily reproducible step-by-step fashion, opening with an introductory overview, a list of the materials and reagents needed to complete the experiment, and followed by a detailed procedure that is supported with a helpful notes section offering tips and tricks of the trade as well as troubleshooting advice. These hallmark features were introduced by series editor Dr. John Walker and constitute the key ingredient in each and every volume of the Methods in Molecular Biology series. Tested and trusted, comprehensive and reliable, all protocols from the series are indexed in PubMed.

Fluorescent Microscopy Edited by

Bryan Heit Department of Microbiology and Immunology, Center for Human Immunology, The University of Western Ontario, London, ON, Canada; Robarts Research Institute, London, ON, Canada

Editor Bryan Heit Department of Microbiology and Immunology Center for Human Immunology The University of Western Ontario London, ON, Canada Robarts Research Institute London, ON, Canada

ISSN 1064-3745 ISSN 1940-6029 (electronic) Methods in Molecular Biology ISBN 978-1-0716-2050-2 ISBN 978-1-0716-2051-9 (eBook) https://doi.org/10.1007/978-1-0716-2051-9 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Science+Business Media, LLC, part of Springer Nature 2022 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Humana imprint is published by the registered company Springer Science+Business Media, LLC, part of Springer Nature. The registered company address is: 1 New York Plaza, New York, NY 10004, U.S.A.

Preface

In 1665 Robert Hooke published Micrographia, which contained the first biological drawings of organisms viewed through a microscope. Nine years later Antonie van Leeuwenhoek published the first observations of microorganisms—protozoans—and two years later made the first recorded observations of bacteria. These publications not only launched microbiology as a new scientific discipline but also cemented the microscope as a quintessential biological tool. Improvements in microscope optics and the development of staining techniques over the intervening centuries made microscopy an indispensable tool in many scientists' laboratories. Although fluorescence was first observed by Clarke in 1819, it took nearly a century before the first fluorescence microscope was constructed by Zeiss and Reichert. The widespread adoption of fluorescence microscopy by biologists required one additional development: the ability to label specific biological targets with fluorescent dyes, with the first of these approaches established in the 1940s by Ellinger and Hirt. The ability to directly label biological objects of interest—and, starting in the early 1990s, the ability to use genetically encoded fluorophores—led to an explosion of fluorescence-based techniques and improvements in optical instruments that today provide a plethora of microscopy techniques to biologists.

While the development of this array of techniques and technologies has provided an unprecedented ability to directly view biological processes at the cellular and molecular level, it has also led to increasingly complex and specialized methods. These can be hard for new microscopy users to learn, while it can be challenging for established labs to keep up with the development of new and improved methodologies. Moreover, these techniques require careful acquisition, handling, and analysis to ensure the integrity and accuracy of the resulting data. Mastering these instruments and analyses can be challenging, and minor errors in sample preparation, acquisition, or analysis can produce unusable data—or even worse—erroneous results.

This book has been designed to help the new and experienced microscopist master a range of fluorescence microscopy techniques. It begins with introductory chapters covering foundational concepts, basic sample labeling, and acquisition. It then builds these core concepts into advanced techniques for imaging intermolecular interactions and for imaging cells in tissues and animals, ultimately leading to methods for imaging objects and processes occurring on spatial scales below the resolution limit of optical microscopy. Finally, the last two chapters of this book describe methods to produce objective, reproducible, and highly quantitative measurements from the resulting images. Many of these chapters focus on the use of free and open-source software and approaches, ensuring that these methods are accessible to most laboratories and researchers.

I hope that this book will be a valuable resource to new microscopists operating their first instruments and to experienced users looking to expand their microscopy repertoire.

London, ON, Canada

Bryan Heit

Contents

Preface . . . v
Contributors . . . ix

PART I: BASIC FLUORESCENCE IMAGING

1  Fluorescence Microscopy: A Field Guide for Biologists . . . 3
   Lucy H. Swift and Pina Colarusso
2  Three-Dimensional Simultaneous Imaging of Nucleic Acids and Proteins During Influenza Virus Infection in Single Cells Using Confocal Microscopy . . . 41
   Richard Manivanh, Seema S. Lakdawala, and Jennifer E. Jones
3  Optimizing Long-Term Live Cell Imaging . . . 57
   Alex Lac, Austin Le Lam, and Bryan Heit

PART II: QUANTITATIVE APPROACHES

4  Monitoring Transmembrane and Peripheral Membrane Protein Interactions by Förster Resonance Energy Transfer Using Fluorescence Lifetime Imaging Microscopy . . . 77
   Janhavi Nagwekar, Caterina Di Ciano-Oliveira, and Gregory D. Fairn
5  Bimolecular Fluorescence Complementation to Visualize Protein–Protein Interactions in Cells . . . 91
   Cassandra R. Edgar and Jimmy D. Dikeakos
6  Monitoring Cellular Responses to Infection with Fluorescent Biosensors . . . 99
   Amena Aktar, Kasia M. Wodz, and Bryan Heit
7  Quantifying Endothelial Transcytosis with Total Internal Reflection Fluorescence Microscopy (TIRF) . . . 115
   Erika Jang, Siavash Ghaffari, and Warren L. Lee
8  Measurement of Minute Cellular Forces by Traction Force Microscopy . . . 125
   Valentin Jaumouillé

PART III: TISSUE AND LIVE ANIMAL IMAGING

9  Quantitative Immunofluorescent Imaging of Immune Cells in Mucosal Tissues . . . 143
   Lane B. Buchanan, Zhongtian Shao, Yuan Chung Jiang, Abbie Lai, Thomas J. Hope, Ann M. Carias, and Jessica L. Prodger
10 Intravital Microscopy Techniques to Image Wound Healing in Mouse Skin . . . 165
   Madison Turk, Jeff Biernaskie, Douglas J. Mahoney, and Craig N. Jenne
11 Quantifiable Intravital Light Sheet Microscopy . . . 181
   Holly C. Gibbs, Sreeja Sarasamma, Oscar R. Benavides, David G. Green, Nathan A. Hart, Alvin T. Yeh, Kristen C. Maitland, and Arne C. Lekven
12 Hydrophobic and Hydrogel-Based Methods for Passive Tissue Clearing . . . 197
   Frank L. Jalufka, Sun Won Min, Madison E. Platt, Anna L. Pritchard, Theodore E. Margo, Alexandra O. Vernino, Megan A. Kirchhoff, Ryan T. Massopust, and Dylan A. McCreedy
13 Expansion Microscopy of Larval Zebrafish Brains and Zebrafish Embryos . . . 211
   Ory Perelsman, Shoh Asano, and Limor Freifeld

PART IV: SUPER-RESOLUTION APPROACHES

14 Super-Resolution Radial Fluctuations (SRRF) Microscopy . . . 225
   Jayme Salsman and Graham Dellaire
15 Sample Preparation for Multicolor STED Microscopy . . . 253
   Walaa Alshafie and Thomas Stroh
16 Single-Molecule Localization Microscopy of Subcellular Protein Distribution in Neurons . . . 271
   Jelmer Willems, Manon Westra, and Harold D. MacGillavry
17 Measuring the Lateral Diffusion of Plasma Membrane Receptors Using Raster Image Correlation Spectroscopy . . . 289
   Sara Makaremi and Jose Moran-Mirabal
18 Nanometer-Scale Molecular Mapping by Super-resolution Fluorescence Microscopy . . . 305
   Vito Mennella and Zhen Liu

PART V: MICROGRAPH ANALYSIS

19 Visualizing and Quantifying Data from Time-Lapse Imaging Experiments . . . 329
   Eike K. Mahlandt and Joachim Goedhart
20 Automated Microscopy Image Segmentation and Analysis with Machine Learning . . . 349
   Anthony Bilodeau, Catherine Bouchard, and Flavie Lavoie-Cardinal

Index . . . 367

Contributors AMENA AKTAR • Department of Microbiology and Immunology, Center for Human Immunology, The University of Western Ontario, London, ON, Canada WALAA ALSHAFIE • Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada SHOH ASANO • Internal Medicine Research Unit, Pfizer, Cambridge, MA, USA OSCAR R. BENAVIDES • Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA JEFF BIERNASKIE • Department of Comparative Biology and Experimental Medicine, Faculty of Veterinary Medicine, University of Calgary, Calgary, AB, Canada; Alberta Children’s Hospital Research Institute, University of Calgary, Calgary, AB, Canada ANTHONY BILODEAU • Universite´ Laval, Que´bec, QC, Canada; CERVO Brain Research Center, Que´bec, QC, Canada CATHERINE BOUCHARD • Universite´ Laval, Que´bec, QC, Canada; CERVO Brain Research Center, Que´bec, QC, Canada LANE B. BUCHANAN • Department of Microbiology and Immunology, Western University, London, ON, Canada ANN M. CARIAS • Department of Cellular and Developmental Biology, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA PINA COLARUSSO • Department of Physiology and Pharmacology, Live Cell Imaging Laboratory, Snyder Institute for Chronic Diseases, University of Calgary, Calgary, AB, Canada GRAHAM DELLAIRE • Department of Pathology, Dalhousie University, Halifax, NS, Canada; Department of Biochemistry, Dalhousie University, Halifax, NS, Canada CATERINA DI CIANO-OLIVEIRA • Keenan Research Centre for Biomedical Science, St. Michael’s Hospital, Unity Health Toronto, Toronto, ON, Canada JIMMY D. DIKEAKOS • Department of Microbiology and Immunology, The University of Western Ontario, London, ON, Canada CASSANDRA R. EDGAR • Department of Microbiology and Immunology, The University of Western Ontario, London, ON, Canada GREGORY D. FAIRN • Keenan Research Centre for Biomedical Science, St. Michael’s Hospital, Unity Health Toronto, Toronto, ON, Canada; Department of Pathology, Dalhousie University, Halifax, NS, Canada LIMOR FREIFELD • Department of Biomedical Engineering, Technion—Israel Institute of Technology, Haifa, Israel SIAVASH GHAFFARI • Keenan Centre for Biomedical Research, St. Michael’s Hospital, Toronto, ON, Canada HOLLY C. GIBBS • Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA; Microscopy and Imaging Center, Texas A&M University, College Station, TX, USA JOACHIM GOEDHART • Section Molecular Cytology, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands DAVID G. GREEN • Department of Cell and Systems Biology, University of Toronto, Toronto, ON, Canada


NATHAN A. HART • Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA BRYAN HEIT • Department of Microbiology and Immunology, Center for Human Immunology, The University of Western Ontario, London, ON, Canada; Robarts Research Institute, London, ON, Canada THOMAS J. HOPE • Department of Cellular and Developmental Biology, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA FRANK L. JALUFKA • Department of Biology, Texas A&M University, College Station, TX, USA ERIKA JANG • Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, ON, Canada VALENTIN JAUMOUILLE´ • Department of Molecular biology and Biochemistry, Simon Fraser University, Burnaby, BC, Canada CRAIG N. JENNE • Snyder Institute for Chronic Diseases, University of Calgary, Calgary, AB, Canada; Department of Microbiology, Immunology and Infectious Diseases, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada YUAN CHUNG JIANG • Department of Microbiology and Immunology, Western University, London, ON, Canada JENNIFER E. JONES • Department of Microbiology & Molecular Genetics, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA MEGAN A. KIRCHHOFF • Department of Biology, Texas A&M University, College Station, TX, USA ALEX LAC • Department of Microbiology and Immunology, The University of Western Ontario, London, ON, Canada ABBIE LAI • Department of Microbiology and Immunology, Western University, London, ON, Canada SEEMA S. LAKDAWALA • Department of Microbiology & Molecular Genetics, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA FLAVIE LAVOIE-CARDINAL • CERVO Brain Research Center, Que´bec, QC, Canada; De´ partement de psychiatrie et de neurosciences, Universite´ Laval, Que´bec, QC, Canada AUSTIN LE LAM • Department of Microbiology and Immunology, The University of Western Ontario, London, ON, Canada WARREN L. LEE • Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, ON, Canada; Keenan Centre for Biomedical Research, St. Michael’s Hospital, Toronto, ON, Canada; Departments of Biochemistry, Medicine and the Interdepartmental Division of Critical Care, University of Toronto, Toronto, ON, Canada ARNE C. LEKVEN • Department of Biology and Biochemistry, University of Houston, Houston, TX, USA ZHEN LIU • Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong SAR, China HAROLD D. MACGILLAVRY • Division of Cell Biology, Neurobiology and Biophysics, Department of Biology, Faculty of Science, Utrecht University, Utrecht, The Netherlands EIKE K. MAHLANDT • Section Molecular Cytology, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands DOUGLAS J. MAHONEY • Snyder Institute for Chronic Diseases, University of Calgary, Calgary, AB, Canada; Department of Microbiology, Immunology and Infectious Diseases, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada; Alberta Children’s Hospital Research Institute, University of Calgary, Calgary, AB, Canada;


Department of Biochemistry and Molecular Biology, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada; Arnie Charbonneau Cancer Institute, Calgary, University of Calgary, Calgary, AB, Canada KRISTEN C. MAITLAND • Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA; Microscopy and Imaging Center, Texas A&M University, College Station, TX, USA SARA MAKAREMI • School of Biomedical Engineering, McMaster University, Hamilton, ON, Canada RICHARD MANIVANH • Department of Microbiology & Molecular Genetics, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA THEODORE E. MARGO • Department of Biology, Texas A&M University, College Station, TX, USA RYAN T. MASSOPUST • Texas A&M Institute for Neuroscience (TAMIN), Texas A&M University, College Station, TX, USA DYLAN A. MCCREEDY • Department of Biology, Texas A&M University, College Station, TX, USA; Texas A&M Institute for Neuroscience (TAMIN), Texas A&M University, College Station, TX, USA VITO MENNELLA • MRC Toxicology Unit, School of Biological Sciences, University of Cambridge, Cambridge, UK SUN WON MIN • Department of Biology, Texas A&M University, College Station, TX, USA JOSE MORAN-MIRABAL • School of Biomedical Engineering, McMaster University, Hamilton, ON, Canada; Department of Chemistry and Chemical Biology, McMaster University, Hamilton, ON, Canada JANHAVI NAGWEKAR • Keenan Research Centre for Biomedical Science, St. Michael’s Hospital, Unity Health Toronto, Toronto, ON, Canada ORY PERELSMAN • Department of Biomedical Engineering, Technion—Israel Institute of Technology, Haifa, Israel MADISON E. PLATT • Department of Biology, Texas A&M University, College Station, TX, USA ANNA L. PRITCHARD • Department of Biology, Texas A&M University, College Station, TX, USA JESSICA L. PRODGER • Department of Microbiology and Immunology, Western University, London, ON, Canada; Department of Epidemiology and Biostatistics, Western University, London, ON, Canada JAYME SALSMAN • Department of Pathology, Dalhousie University, Halifax, NS, Canada SREEJA SARASAMMA • Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA ZHONGTIAN SHAO • Department of Microbiology and Immunology, Western University, London, ON, Canada THOMAS STROH • Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada LUCY H. SWIFT • Department of Physiology and Pharmacology, Live Cell Imaging Laboratory, Snyder Institute for Chronic Diseases, University of Calgary, Calgary, AB, Canada MADISON TURK • Snyder Institute for Chronic Diseases, University of Calgary, Calgary, AB, Canada; Department of Microbiology, Immunology and Infectious Diseases, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada


ALEXANDRA O. VERNINO • Department of Biology, Texas A&M University, College Station, TX, USA MANON WESTRA • Division of Cell Biology, Neurobiology and Biophysics, Department of Biology, Faculty of Science, Utrecht University, Utrecht, The Netherlands JELMER WILLEMS • Division of Cell Biology, Neurobiology and Biophysics, Department of Biology, Faculty of Science, Utrecht University, Utrecht, The Netherlands KASIA M. WODZ • Department of Microbiology and Immunology, Center for Human Immunology, The University of Western Ontario, London, ON, Canada ALVIN T. YEH • Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA

Part I Basic Fluorescence Imaging

Chapter 1

Fluorescence Microscopy: A Field Guide for Biologists

Lucy H. Swift and Pina Colarusso

Abstract

Optical microscopy is a tool for observing objects, and features within objects, that are not visible to the unaided eye. In the life sciences, fluorescence microscopy has been widely adopted because it allows us to selectively observe molecules, organelles, and cells at multiple levels of organization. Fluorescence microscopy encompasses numerous techniques and applications that share a specialized technical language and concepts that can create barriers for researchers who are new to this area. Our goal is to meet the needs of researchers new to fluorescence microscopy by introducing the essential concepts and mindset required to navigate and apply this powerful technology in the laboratory.

Key words: Optical microscopy, Fluorescence microscopy, Magnification, Resolution, Numerical aperture, Digital image, Imaging workflow, Image acquisition, Lateral resolution, Nyquist sampling, Data management plan

1 Introduction

The optical microscope is a symbol of scientific discovery. Ask a child to draw a scientist and chances are the picture will include a microscope. The optical microscope is a standard fixture in education and research and its intuitive design allows us to explore samples from onion skins to human cells with relative ease. Although capturing images can be straightforward, it is difficult to make informed decisions at every step of an experiment without training in this discipline. Perhaps you want to examine the fate of a particular cell during development or decode why a novel virus readily infects the lungs. To achieve rigorous and reproducible results, it is vital to develop the required mindset and skills. To do so takes time and patience, just like learning to fix an engine, skate backward, or knit a sock. Yet you will be rewarded by systematic progress as you uncover new insights about the miniature worlds that surround us. At its heart, microscopy is a tool that helps researchers observe objects, and features within objects, that cannot be seen with the



unaided eye. Along with resolving minute features in a specimen, microscopes must also provide contrast to make these features apparent against the background. That is, a microscopic image must include variations in color and/or brightness or it will be invisible to the eye. Microscope techniques are grouped according to the way they implement contrast. Diverse techniques have been developed such as phase contrast [1], darkfield [1], polarization [2], differential interference contrast (DIC) microscopy [3] as well as fluorescence microscopy [4]. Fluorescence microscopy generates contrast from a type of luminescence that occurs when certain molecules, denoted as fluorophores, emit photons while being irradiated with light of characteristic frequencies. Fluorescence is rapid, occurring on the order of a few nanoseconds after photon excitation. The fluorescence is generally shifted to a lower frequency compared to the excitation light, and this difference allows us to isolate the weak fluorescence signal from the intense excitation light by using filters that can separate the different colors of light. This leads to the characteristic high contrast appearance of a fluorescence image, with bright features against a dark background, much like stars illuminating the night sky. The dark background means that fluorescence microscopy is highly sensitive; indeed, the technique can detect emission from a single fluorophore [5]. Fluorescence is also selective at multiple scales of organization; individual molecules, organelles, and cells can be labeled with a variety of extrinsic and genetically encoded fluorophores. Here we present a practical introduction to fluorescence microscopy for the biologist. We start by defining and describing several foundational concepts, and then illustrating how to transfer these concepts into practice. We emphasize how to make informed choices at every step of the experimental workflow. Informed choices save time, precious samples, and reagents, and help you extract the maximum information and value from experiments. Throughout, we highlight information that is typically transferred informally in the laboratory but not always mentioned directly in the literature.

2 Foundational Concepts

2.1 The Photophysical Properties of Fluorescence Probes

Fluorescence is a photophysical process that occurs when molecules absorb and quickly reemit light within a few nanoseconds. Fluorescence is only exhibited by molecules with specific molecular symmetries [6], and we know these as fluorophores. When fluorophores absorb and reemit light, they do so between one or more defined electronic energy levels. These transitions involve changes within the fluorophore’s electron cloud, as well as molecular vibrations and rotations. Thus, each energy state of the molecule


Fig. 1 A simplified Jablonski diagram illustrating the ground electronic state (S0) and first excited state of a fluorophore (S1). Each energy level is further subdivided into vibrational (v) and rotational levels, although only representative vibrational levels are depicted for clarity. Arrows indicate transitions among the different energy levels. Once a fluorophore absorbs a photon, it is excited to the upper electronic energy level. The fluorophore quickly relaxes to the lowest vibrational-rotational level of S1. Next the fluorophore reemits the photon as it drops back down to the ground electronic state S0. Usually the energy of the emitted photon is slightly less than the energy of the absorbed photon

corresponds to specific electronic, vibrational, and rotational levels. Figure 1 depicts the first two electronic energy states of a typical fluorophore, with the ground state labelled S0 and the first excited state labelled S1. Each electronic state is subdivided into discrete levels that correspond to molecular vibrations. They are not shown for clarity, but each vibrational level is further subdivided into rotational levels. The energy of a photon is directly proportional to its frequency, ν, and inversely proportional to its wavelength (λ):

E = hν = hc/λ    (1)

where h is Planck's constant, a fundamental physical constant [7], and c is the speed of light in the material. As the wavelength of


Fig. 2 The excitation and emission spectra of the fluorophore DiI (1,1′-dioctadecyl-3,3,3′,3′-tetramethylindocarbocyanine perchlorate). Spectral data courtesy of Chroma Technologies. The Stokes shift (*) is the difference between the excitation and emission maxima. Note the emission spectrum is shifted to longer wavelengths (and lower energies) compared to the excitation spectrum

electromagnetic radiation decreases, the energy and frequency increase. Fluorescence microscopy involves energies in the visible range (with some overlap into the near-infrared and ultraviolet). As we move from red to violet in the visible spectrum, the energy and frequency increase, while the wavelength decreases. Transitions between molecular energy states are governed by quantum mechanical selection rules, in which “allowed” transitions are likely, while “forbidden” transitions have a low probability of occurring [6]. When light of the appropriate energy shines on a fluorophore, it can be excited from the ground state to the first electronic excited state as long as the transitions between the states are allowed. The electronic transitions observed in fluorescence microscopy are excited from about 350–800 nm and involve simultaneous vibrational and rotational transitions. As shown in Fig. 1, a molecule can absorb light and undergo a transition from the ground state S0 to the upper electronic state S1. Next the fluorophore quickly relaxes to the lowest vibrational-rotational level of S1 and then reemits the photon as it drops back down to the ground electronic state S0. As the energy of the absorbed photon is slightly greater than the emitted energy, the energy difference allows us to use filters and related spectral devices to separate the weaker fluorescence emission from the intense light used to excite fluorescence.
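To put Eq. 1 into practice, the short script below converts a wavelength into photon energy. The wavelengths used here (488 nm and 561 nm, two common excitation laser lines) are illustrative values chosen for this sketch, not examples taken from the chapter.

```python
# Photon energy from wavelength, E = h*c/lambda (Eq. 1).
H = 6.626e-34   # Planck's constant (J*s)
C = 2.998e8     # speed of light (m/s)

def photon_energy_joules(wavelength_nm: float) -> float:
    """Return the energy (J) of a single photon of the given wavelength (nm)."""
    return H * C / (wavelength_nm * 1e-9)

for wl in (488, 561):  # two common excitation wavelengths (illustrative)
    e_j = photon_energy_joules(wl)
    e_ev = e_j / 1.602e-19  # convert joules to electron volts
    print(f"{wl} nm -> {e_j:.3e} J ({e_ev:.2f} eV)")
```

Running it confirms the relationship described above: the shorter wavelength (488 nm) corresponds to the higher photon energy.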


The photophysical properties of fluorophores are important to consider when selecting the best probe or probes for an imaging experiment. When designing an experiment, it is important to reference and/or measure several properties of probes, including the excitation and emission spectra, brightness, and photostability. These parameters influence the quality of the experimental results, including spatial and spectral resolution, the relative brightness of signal versus background, and sample viability when working with living specimens. Advanced techniques may require attention to additional properties, including fluorescence lifetime [8] and fluorescence anisotropy [9].

In the Jablonski diagram shown in Fig. 1, we only depict one of the millions of electronic transitions that occur when imaging with standard fluorescence microscopes. The excitation and emission spectra plot the relative probabilities of the multiple electronic transitions (and the accompanying vibrational and rotational transitions) as a function of wavelength. They are visual representations of the multiple photophysical events that occur when a population of fluorophores absorb and then reemit photons. It is important to refer to the detailed excitation and emission spectra when designing an experiment and choosing the appropriate filter settings required to image a given fluorophore. When designing experiments, it is important to look up the appropriate wavelength ranges for exciting and detecting the fluorescence of the selected probe(s). Sometimes product guides list a single excitation and emission wavelength; this information is incomplete and can be misleading because most fluorescence probes can be excited and emit over a broad range of wavelengths.

Often in fluorescence microscopy the terms "absorption spectrum" and "excitation spectrum" are used interchangeably, but they are not synonymous. An absorption spectrum measures the attenuation of light as a function of wavelength as it passes through a sample. An absorbed photon may or may not participate in generating fluorescence. Conversely, an excitation spectrum measures the wavelengths of light that can be used to generate fluorescence, with the height of the curve at each point illustrating the efficacy of each wavelength. Compared to an excitation spectrum, an absorption spectrum captures the effect of many different light-molecule interactions, including those that lead to fluorescence. In practice, either spectrum is a useful reference, but the absorption spectrum may not exactly match the excitation spectrum.

To illustrate excitation and emission spectra, let's consider the fluorophore DiI (1,1′-dioctadecyl-3,3,3′,3′-tetramethylindocarbocyanine perchlorate), a popular probe for studying lipid distribution in cells and tissues. Figure 2 illustrates the excitation and emission spectra of this molecule from 400–700 nm. The excitation spectrum reveals which wavelengths the molecule absorbs to produce fluorescence, while the emission spectrum depicts which


wavelengths are emitted. Note that the excitation spectrum is at a shorter wavelength and thus higher energy than the emission spectrum. This arises because the fluorophore loses energy as it drops to the lowest level of the excited electronic state, before releasing a photon (Fig. 1). Most often, the excitation and emission spectra are mirror images because the upper and lower electronic states have similar energy level distributions. In Fig. 2, the spectral maxima for excitation (555 nm) and emission (569 nm) represent the most probable transitions for both photophysical events. The difference between the spectral maxima (here 14 nm) is known as the Stokes shift. The Stokes shift allows us to separate the emitted signal from the excitation light by using optical filters or related devices. The Stokes shift varies by fluorophore and is also influenced by the chemical environment. A large Stokes shift is generally favorable because it makes it easier to separate excitation light from emission light. Spectra are available on numerous vendor websites and are often encoded in acquisition software on microscopes.

When using more than one fluorophore, it is important to minimize spectral overlap between the excitation and emission spectra of the different probes, or you may obtain and interpret artifactual results [10–13]. For example, you may obtain a false positive if the emission from one probe is visible using the microscope filters and settings for a second probe. Additional caution is needed because most reference spectra have been recorded in solvents rather than in cells and tissues. As fluorescence spectra can shift in wavelength depending on their environment, the reference spectra may not be identical to the spectra of the probes within your sample. The emission spectrum of a fluorophore can be measured directly in the sample with a spectral imaging system [11].

To design fluorescence imaging experiments, it is important to evaluate probes for brightness. Fluorescence spectra, however, are reported in relative and arbitrary units and thus cannot be used to compare the brightness of different fluorophores. To check brightness, it is important to evaluate photophysical properties that yield a quantitative measure of the efficiency of light absorption and the subsequent reemission. The Beer–Lambert law tells us that the absorbance A of a molecule at a given wavelength λ is given by:

A(λ) = εbc,    (2)

where ε is the molar absorption coefficient (also known as extinction coefficient), b is the physical path length (sample thickness), and c is the concentration of the molecular species. When comparing fluorophores, the greater the value of the extinction coefficient, the greater the number of photons absorbed. The second photophysical property that determines brightness is the quantum yield Φ, which is a measure of how many absorbed photons will be reemitted by the molecule. It is calculated by determining the fraction of absorbed photons that are reemitted as fluorescent photons:

Φ = (number of photons emitted) / (number of photons absorbed)    (3)

Table 1 Comparing relative brightness of fluorophores [14]. This table shows different photophysical properties of lipophilic dyes. Brightness is proportional to ε × Φ, where ε is the molar absorption coefficient (M⁻¹ cm⁻¹) and Φ is the quantum yield. Note that these data are recorded in solvents, and results may vary for biological samples

Fluorophore (common name) | ε (λ in nm)   | ΦF   | Brightness | Solvent used
DiR                       | 240,000 (742) | 0.28 | 67,200     | Ethanol
Sulforhodamine B          | 99,000 (553)  | 0.7  | 69,300     | Ethanol
Nile red                  | 38,000 (519)  | 0.7  | 26,600     | Dioxane
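As a quick check on Table 1, the snippet below computes brightness as ε × Φ for the three dyes. The numerical values are taken directly from the table; the variable and dictionary names are just illustrative.

```python
# Relative brightness = molar absorption coefficient (M^-1 cm^-1) x quantum yield (Table 1).
dyes = {
    # name: (epsilon in M^-1 cm^-1, quantum yield)
    "DiR": (240_000, 0.28),
    "Sulforhodamine B": (99_000, 0.7),
    "Nile red": (38_000, 0.7),
}

for name, (epsilon, phi) in dyes.items():
    brightness = epsilon * phi
    print(f"{name}: brightness = {brightness:,.0f}")
# Prints 67,200; 69,300; and 26,600, matching the Brightness column of Table 1.
```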

The brightness of the probe is proportional to ε × Φ, the product of the molar absorption (extinction) coefficient and the quantum yield. As an example, Table 1 lists the relative brightness of three lipophilic probes.

Along with brightness, it is important to consider the photostability of a probe. For example, many probes used in flow cytometry are bright but are not suitable for fluorescence microscopy. The simplified Jablonski diagram in Fig. 1 does not depict the myriad photophysical processes that can accompany fluorescence. This includes photochemical reactions that can destroy the fluorophore so that it no longer emits light, a process known as photobleaching [15, 16]. Photobleaching occurs when the fluorophore decomposes at the molecular level, typically after undergoing repeated cycles of the fluorescence process [16]. Photobleaching leads to a decrease in signal over time. It can also induce phototoxicity in living samples, because the by-products of this process are often free radicals that induce damage to cells.

2.2 The Light Path of a Standard Fluorescence Microscope

To develop expertise in fluorescence microscopy, it is important to learn how to identify the various paths that light can travel from the source to the detector. A simple fluorescence microscope may have a single light path from the light source to the eyepiece, or two if it is also equipped with a camera. Advanced systems can have ten or more light paths. From the simplest to the most complex system, a light path in a fluorescence microscope contains a successive series of optical components that can be organized by function. These functions include:

• Exciting the fluorescence (light sources)
• Resolving spatial features (lenses)
• Separating the light by spectral range (filters)
• Detecting the fluorescence emission (cameras or other detectors)
• Converting the fluorescence signal to a digital image (computer and electronics)
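To make the idea of organizing a light path by function concrete, here is a minimal sketch that models these stages as an ordered series of components. The specific component names and the simple printout are illustrative only; they are not a description of any particular instrument.

```python
# A light path modeled as an ordered list of (component, function) stages,
# loosely following the functional grouping described in the text.
widefield_path = [
    ("LED light source", "excite the fluorescence"),
    ("excitation filter", "select the excitation wavelength range"),
    ("dichroic mirror", "reflect excitation toward the sample, pass emission"),
    ("objective", "illuminate the sample and resolve spatial features"),
    ("emission filter", "block residual excitation, pass the emission band"),
    ("camera", "detect the fluorescence emission"),
    ("computer", "convert the signal to a digital image"),
]

def trace_light_path(path):
    """Print each stage in order, mirroring the advice to trace the path step by step."""
    for i, (component, role) in enumerate(path, start=1):
        print(f"{i}. {component}: {role}")

trace_light_path(widefield_path)
```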

When you begin working with a new microscope, start by tracing the light path from the light source to the detector. Try to identify the different components in the light path and describe how they work together to form the image that you view or acquire digitally. If you take the time to trace the light paths in a microscope when learning a new application, you will find it easier to troubleshoot problems and optimize your imaging. To apply the light path to laboratory practice, let us consider how light propagates from the light source to the detector in a widefield fluorescence microscope. The term "widefield" refers to techniques where the image is captured simultaneously across the field of view using a camera. The advantages of widefield systems are their low cost, straightforward and modular design, and high sensitivity [17]. They are excellent choices for imaging thin preparations such as cultured cells and for live-cell work. A representative light path highlighting the components of a widefield microscope is shown in Fig. 3:

1. Light is emitted from one or more intense light sources, commonly arc lamps and light emitting diodes (LEDs) [18]. Arc lamps emit a broad range of wavelengths but require spectral filtering, generate heat and sometimes emit high UV levels. They are slowly being replaced by LEDs, solid state devices that consume less energy and require less maintenance.

2. The excitation filter transmits specific wavelengths while reflecting others; here it transmits the required excitation wavelengths while reflecting the undesired wavelengths back to the source. Ideally, the filter transmission curve matches the excitation spectrum of the fluorophore of interest.

3. The excitation light is reflected by a dichroic mirror, which is oriented at 45° relative to the sample. The excitation light passes through the objective and is delivered to the sample. As the sample is irradiated, fluorophores emit light of lower energy and longer wavelengths than the excitation light.

4. The fluorescence emission is captured by the objective, which provides an enlarged and spatially resolved image of the fluorescence emitted by the sample. The excitation light is reflected to the source.

5. The fluorescence emission passes through the dichroic filter and the emission filter. Both the dichroic filter and emission filter ideally transmit wavelength ranges that match the


Fig. 3 Representative light path for a widefield microscope. Light is emitted from an intense light source (1). The excitation filter transmits light over a specific wavelength range (2). The excitation light is reflected by a dichroic mirror, which is oriented at 45° relative to the sample. The dichroic reflects light with shorter wavelengths and transmits light of longer wavelengths (3). Fluorophores emit light of lower energy and longer wavelength compared to the excitation light. The fluorescence emission is captured by the objective, which provides an enlarged and spatially resolved image of the fluorescence labelling on the sample (4). The signal passes through the dichroic mirror and then an emission filter matched to the fluorescence spectrum of the fluorophore of interest. The emission filter provides a layer of security to ensure that the much brighter excitation light is blocked, or it may overwhelm the fluorescence signal or create high background (5). The image is viewed through the eyepieces (6) OR the image is projected onto the camera (7)

fluorescence emission of the fluorophore of interest. The dichroic and emission filters are required to ensure that the much brighter excitation light is blocked. Otherwise, the fluorescence signal may be obscured by the much brighter excitation light.


6. The image is viewed through the eyepieces. Although having a second light path for visual inspection adds cost, and can lead to confusion, eyepieces allow for quick setup and provide a larger field of view compared to the camera.

OR

7. The image is projected onto the camera, typically a charge-coupled device (CCD) or scientific complementary metal-oxide semiconductor (sCMOS) [19]. Cameras are also known as two-dimensional array detectors because they contain multiple photosensitive elements, also known as photosites, which allow multiple points on an image to be detected at the same time. The camera converts the photons into electrical signals that are then converted into digital files for computational visualization and analysis.

Widefield fluorescence microscopes, like most fluorescence microscopes, are configured for reflection imaging. In fluorescence imaging, the objective serves a dual function by illuminating the sample as well as by collecting the fluorescence emission. This arrangement is known as epifluorescence because the excitation and emission light both pass through the objective (from the Greek epi, meaning "upon"). The microscope is known as "upright" if the objective is above the sample and "inverted" if the objective is below the sample.

Tracing and systematically testing the components in a light path can help diagnose issues with imaging. For example, if you want to quickly inspect your sample but are unable to see any fluorescence emission through the eyepieces, try tracing the light path step by step. Is your light source switched off, turned to minimum intensity, or is a shutter closed? Did you choose an incorrect filter set? Did you switch the light path to image with the camera rather than with the eyepiece? As shown here, the light path can help you systematically work through problems; otherwise, troubleshooting can be a frustrating exercise.

Although it requires more time initially, we recommend practicing with a standard sample before attempting your first imaging experiment. If you develop imaging skills ahead of time, when you start with experiments, you will be able to focus on the biological questions rather than the mechanics of operating a microscope. A standard sample need not be expensive, and you can use a discarded slide with robust fluorescence. Another option is to purchase standard slides from educational supply companies. Pollen or plant specimen slides are useful because they contain pigments that are naturally autofluorescent. Standard slides are also useful for troubleshooting when you start experiments. They allow you to check that you have set up the microscope properly


Standard slides are also useful for troubleshooting when you start experiments. They allow you to check that you have set up the microscope properly and reveal whether any issues encountered lie with your experimental sample rather than the equipment.

Often widefield microscopes are also equipped for white-light imaging, including techniques such as brightfield, phase contrast, or DIC. White-light techniques do not require any external labels and can be used alongside fluorescence to provide a view of the sample as a whole, with less photodamage. However, white-light imaging modes can create confusion when trying to identify the light paths in a microscope. Fluorescence imaging is configured for reflection, where the excitation and emission light both travel through the objective. By contrast, white-light imaging for life science research is typically set up in transmission, with a condenser illuminating the sample on one side and the objective forming the image on the other. New operators sometimes attempt to set up their microscope or diagnose problems with the epifluorescence light path by adjusting components in the white-light transmission path. To avoid this source of potential confusion, it is important to familiarize yourself with all the light paths of the microscope, not just the fluorescence ones. A final note of advice on white-light imaging: while fluorescence light paths rarely need alignment, white-light paths take more time and effort to set up properly. If you have a white-light path, after bringing your sample into focus, ensure that you align the microscope for "Koehler illumination," as it is the required first step for white-light imaging [20, 21]. After you set up Koehler illumination, refer to the manufacturer instructions for your specific imaging mode.

Fluorescence microscopy allows us to label cells, organelles, proteins, or other cellular constituents and identify them using tags that emit over different wavelength ranges. However, the detectors, such as the cameras used in research-grade fluorescence microscopes, are typically monochrome. This means they detect intensity but not colour, and thus optical filters are required to distinguish among the fluorescent probes. It is important to compare the fluorophore excitation and emission spectra to the transmission curves for the excitation and emission filters available in your microscope so that the appropriate fluorophores can be selected. When looking up information about fluorophores, it is common to see only the excitation and emission maxima listed. However, to fully understand how your fluorophore will perform with your microscope filter sets, it is important to consider the full spectral curves and compare them to the full filter transmission curves. This allows you to determine how well matched your filter set is to your fluorophore and whether there will be any overlap. In Fig. 4a, the excitation and emission spectra of GFP are illustrated, along with a filter set optimized for imaging this fluorophore. The percent transmission of the excitation, dichroic, and emission filters are also plotted.

Fig. 4 The excitation and emission spectra of a fluorophore should be matched to the channel, that is, the wavelength ranges used for excitation and emission. The spectra and filter transmission curves should be carefully evaluated for optimal overlap. In (a), "GFP spectra and matched filter set," the filter set was designed for imaging GFP: the excitation and emission spectra of the fluorophore align with the transmission curves for the excitation, dichroic, and emission filters. In (b), "GFP spectra and mismatched filter set," the filter set is designed for imaging YFP, and the GFP excitation and emission spectra align poorly with the excitation, dichroic, and emission filter curves compared to the filter set in (a). In both panels, excitation and fluorescence emission (arb. units) are plotted against wavelength (400–600 nm)

By inspection, the excitation and emission filters are seen to match the fluorophore spectra. The dichroic mirror reflects most of the excitation light (toward the sample) and transmits almost all of the fluorescence emission (toward the detector).
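This kind of inspection can also be done numerically. The sketch below is a minimal, hypothetical example rather than the authors' workflow: it models GFP-like excitation and emission spectra as Gaussians with assumed peaks and widths, models the filters as ideal bandpasses, and estimates the fraction of each spectrum that falls within the corresponding passband. Real spectra and transmission curves downloaded from vendor websites can be substituted for the arrays used here.

```python
import numpy as np

wavelengths = np.arange(400, 651)  # nm

def gaussian(wl, peak, width):
    """Crude stand-in for a measured spectrum (arbitrary units)."""
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

def bandpass(wl, low, high):
    """Idealized filter: transmits 100% inside the band, 0% outside."""
    return ((wl >= low) & (wl <= high)).astype(float)

# Hypothetical GFP-like spectra (peak positions and widths are illustrative only)
gfp_ex = gaussian(wavelengths, peak=488, width=15)
gfp_em = gaussian(wavelengths, peak=510, width=18)

# Hypothetical GFP filter set: excitation 470/40, emission 525/50
ex_filter = bandpass(wavelengths, 450, 490)
em_filter = bandpass(wavelengths, 500, 550)

def fraction_transmitted(spectrum, filt):
    """Fraction of the spectrum's area that passes the filter."""
    return np.trapz(spectrum * filt, wavelengths) / np.trapz(spectrum, wavelengths)

print(f"Excitation captured: {fraction_transmitted(gfp_ex, ex_filter):.0%}")
print(f"Emission captured:   {fraction_transmitted(gfp_em, em_filter):.0%}")
```

Repeating the calculation with a mismatched (YFP-style) filter set shows quantitatively how much signal is lost.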


We can conclude that this filter set is a suitable choice for imaging GFP. In Fig. 4b, the excitation and emission spectra for GFP are shown with a nonoptimal filter set that is designed for imaging YFP. The excitation filter overlaps with the emission spectrum and only transmits a small range of the wavelengths that excite GFP. The dichroic reflects a large portion of the wavelengths of light emitted by GFP, and the emission filter does not transmit the peak wavelengths emitted. Note that although the match is less than perfect, it may still be usable. As filters are expensive, it is always good to test whether the existing filters meet the demands of the imaging experiment rather than purchasing a new set for a single use case.

For multicolor imaging experiments, separate filter sets are used to differentiate among the two or more probes used to label the sample. Each filter set corresponds to a defined spectral range that is used for excitation and emission, corresponding to what we know as "channels." Carefully matching the probe spectra to the appropriate filters is essential because each channel should detect the emission from one intended target. If there is spectral contamination between the different channels (spectral cross talk and bleed-through are the technical terms), you may not be able to analyze your data. For more information, we direct you to several excellent reviews [10–13].

2.3 Magnification, Spatial Resolution, and Sampling

Like all microscopes, fluorescence microscopes spatially resolve features beyond the acuity of human vision. When working with optical microscopes, magnification is sometimes confused with the spatial resolving power of the system. Magnification is the difference in size of the image relative to the object. When imaged by a detector, we call this lateral magnification, while when observed by eye it is referred to as angular magnification [22]. Magnification, however, does not tell us how well the lens resolves individual points on an image. For example, a microscope may be equipped with two 20× objectives, but one objective may be better at resolving finer spatial features. This is because objectives and their resolving power are defined by the numerical aperture (NA), not magnification. Numerical aperture refers to the ability of an objective to collect light and resolve objects. For an epifluorescence microscope, where the objective acts as the condenser, NA is calculated as follows:

NA = n sin θ,    (4)

where n is the refractive index of the medium between the front lens and the sample, and θ is the acceptance angle of the objective (Fig. 5). Most microscope objectives have NA values between 0.3 and 1.45.


Fig. 5 A visual representation of the acceptance angle θ of an objective lens. It is defined as the half-angle of the largest cone of light that can propagate through the lens

Air objectives have a maximum NA of about 0.9 and are designed to work with no immersion medium between the front lens and the sample. To achieve an NA > 1, the immersion medium can be changed from air to water, oil, silicone, or some other substance. Objectives are commonly labelled by the name of the immersion medium and whether the objective is placed in it directly or separated from the medium by a coverslip or similar barrier. For example, water objectives can be used as "dipping" or "immersion" objectives, while oil objectives are almost always used as immersion objectives. Some specialized objectives can perform with multiple immersion media using both dipping and immersion configurations.

As NA increases, the spatial resolving power of an optical microscope improves, but only up to a point. Standard optical microscopes can, at best, resolve features on the order of 200–250 nm. This restriction arises from diffraction, a fundamental property of light–matter interactions [22]. Diffraction occurs whenever light interacts with an edge or aperture, such as a lens. Diffraction creates an interference pattern that is critical to image formation but is also associated with effects that degrade the quality of an image. In fluorescence microscopy, the heart of the microscope imaging system is the objective. When a point of light passes through the objective, the diffraction pattern that is formed consists of a central bright disk surrounded by concentric rings that fall off in brightness as they extend outward (Fig. 6a) [22]. This characteristic pattern is known as the Airy diffraction pattern; it arises whenever light travels through a circular aperture such as an objective lens. In light microscopy, the central disk is denoted "the Airy disk" and the surrounding rings are known as "Airy rings."
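For readers who want to see where this pattern comes from, the short sketch below evaluates the textbook radial intensity profile of an Airy pattern, I(x) = [2 J1(x)/x]^2, using SciPy's first-order Bessel function. It is a generic calculation, not the authors' code; the figure in this chapter was generated with the POPPY package cited in the caption to Fig. 6.

```python
import numpy as np
from scipy.special import j1  # first-order Bessel function of the first kind

# Dimensionless radial coordinate x = (2*pi/wavelength) * NA * r
x = np.linspace(1e-6, 20, 4000)          # start just above 0 to avoid 0/0
intensity = (2 * j1(x) / x) ** 2         # normalized Airy pattern, I(0) = 1

# Locate the first minimum numerically; theory places it at x = 3.83, which is
# where the familiar r ~ 0.61*wavelength/NA resolution estimate comes from.
window = (x > 2) & (x < 5)
first_min = x[window][np.argmin(intensity[window])]
print(f"First minimum near x = {first_min:.2f} (theory: 3.83)")
```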


Fig. 6 Airy disk and resolution. (a) When imaging a subresolution feature such as a 100 nm bead, the resulting diffraction pattern has a characteristic appearance that consists of a bright central disk surrounded by alternating bright and dark rings that gradually fall off in intensity. The intensities are displayed using a logarithmic scale to highlight the diffraction rings. The radius of the Airy disk, r, is the lateral (x,y) resolution limit of the microscope (see Eq. 5). Here we show the diffraction pattern in a single image plane, but it also extends in 3D. Plot generated using POPPY, an open-source optical propagation Python package originally developed for the James Webb Space Telescope project [23]. (b) From top to bottom: two subresolution features are well resolved; not resolved; just resolved when the centers of the Airy disks are separated by r. This is known as the resolution limit

By convention, we define the resolution limit of a standard widefield microscope by considering two bright subresolution features on the sample, represented as the central bright Airy disks in Fig. 6b. Three cases are shown, in which the points are said to be "resolved," "not resolved," and "just resolved." The "just resolved" case arises when the central disks in the Airy diffraction pattern are separated by a distance equal to the radius of each disk:

r(x,y) = 0.6λ / NA,    (5)

where λ is the emission wavelength and NA is the numerical aperture of the objective. The value r(x,y) is known as the Rayleigh criterion for lateral (x,y) resolution [22]. Here we limit our description to a single imaging plane (x,y), and simplify r(x,y) to r, but it is important to note that the diffraction pattern extends in three dimensions. How we define resolution in the z direction is covered in the following chapter. Note that the r value is also defined differently depending on the technique [24].

As r decreases, the resolution improves. This is because r is a measure of how finely features can be resolved on a sample: an r value of 200 nm yields more spatial definition than an r of 500 nm. Also note that as wavelength decreases, r decreases, and as NA increases, r decreases. If you want to improve resolution, you can image with shorter wavelengths and/or increase the NA of your objective.
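As a quick illustration of Eq. 5, the hedged sketch below tabulates r for a few hypothetical wavelength/NA combinations; the objective parameters are examples, not recommendations.

```python
# Lateral resolution estimate from Eq. 5: r = 0.6 * wavelength / NA
def rayleigh_r_nm(emission_wavelength_nm: float, na: float) -> float:
    return 0.6 * emission_wavelength_nm / na

# Illustrative emission wavelengths (nm) and objective NAs (assumed values)
for wavelength_nm in (460, 520, 660):
    for na in (0.45, 0.75, 1.40):
        r = rayleigh_r_nm(wavelength_nm, na)
        print(f"emission {wavelength_nm} nm, NA {na:.2f} -> r ~ {r:.0f} nm")
```

For example, at 660 nm emission a 0.45 NA objective gives r of roughly 0.88 μm, while a 0.75 NA objective gives roughly 0.53 μm, consistent with the values quoted for Fig. 7 below.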


Fig. 7 Numerical aperture (NA), not magnification, sets the spatial resolution of a microscope objective. (a) The diagrams show the light collection and angle of acceptance of two different 20× objectives that have different NA values. The objective on the left has an NA of 0.45 and the objective on the right has an NA of 0.75. (b) The corresponding images obtained with the 20× objective depicted above each image. Although the same magnification (20×) was used, the features in the right image are better resolved than in the left image, because the NA is higher. Images record the fluorescence signal from a secondary antibody (STAR RED, Abberior, Göttingen, Germany) that targets a primary antibody used to label von Willebrand factor. Images were acquired using a laser scanning confocal microscope (Nikon A1R microscope, Ti2-E stand, NIS-Elements v 5.02 software; Nikon Canada, Mississauga, Ontario, Canada). Image acquisition settings: 639 nm laser excitation and an emission filter with transmission between approximately 660–735 nm. Scale bar = 10 μm

However, be careful when imaging with wavelengths near 400 nm. Most microscope optics are tailored for visible light longer than 450 nm and perform poorly near 400 nm. It is better to work with wavelengths above 450 nm or choose an objective designed specifically for shorter wavelengths.

To illustrate the effect of NA on spatial resolution, Fig. 7 shows two images captured with two different 20× objectives that have different NA. The samples are dermal endothelial cells that have been stained using immunofluorescence to highlight von Willebrand factor, a large multimeric glycoprotein that is stored in Weibel-Palade bodies.


Although both images are captured at the same magnification, the features in the image on the left (NA = 0.45, r = 0.89 μm) are not as well resolved as in the image on the right (NA = 0.75, r = 0.53 μm). This highlights that NA, not magnification, sets the spatial resolution. When choosing an objective, consider the experimental aims. Here, if the aim of the experiment were to investigate the morphology and number of Weibel-Palade bodies following treatment with different drugs, the 20×/0.75 NA objective would be a better choice for image acquisition than the 20×/0.45 NA objective. Unfortunately, this concept is often misapplied when magnification is prioritized without considering NA. When choosing an objective, NA is central because it sets the limit on the resolution of the imaging experiment.

Besides NA, there are a number of other specifications to consider when choosing the best objective for the application [25]. As one example, the working distance of an objective can affect the imaging parameters. Oil immersion objectives may provide the highest resolution, but they cannot be used to focus through standard plasticware; rather, they require specialized thin dishes or cells grown on coverslips. In addition, when working with multicolor applications, it is critical to choose objectives with a high degree of chromatic correction, designated apochromats, that minimize misregistration among different spectral channels. In some cases, applications like phase contrast or total internal reflection fluorescence (TIRF) [26] demand the use of a specific objective to perform any imaging at all. These and other key objective specifications are inscribed on the objective barrel as shown in Fig. 8, which depicts a stylized air objective as well as an objective with a correction collar. These are only two representative examples of the dozens of objectives available for any fluorescence microscope. Table 2 summarizes the key specifications of microscope objectives and their practical implications for fluorescence imaging of biological samples.

Diffraction not only limits spatial resolution but also leads to blur and haze that degrade overall image quality. Techniques such as confocal, deconvolution, and multiphoton microscopy, collectively known as "optical sectioning" techniques, can improve the spatial resolution in x, y, and z, as described in the following chapters. To image with the highest resolution, superresolution techniques such as structured illumination microscopy (SIM), stimulated emission depletion (STED), and point localization can be applied, as described in Chapters 14–18.

2.4 Capturing and Digitizing Images

The image formed by the microscope is a real object, just like your sample, though it is composed of photons rather than atoms and molecules. The focused image is projected onto a detector that converts the photons into an electrical signal, which, in turn, is converted into a digital file that we can visualize and analyze. In a widefield microscope, images are detected by CCD or sCMOS


Fig. 8 Two representative objectives and typical specifications are depicted. The top schematic shows an oil immersion objective and the bottom schematic depicts an air objective with a correction collar. Table 2 summarizes the specifications and their relevance

cameras. These cameras contain photosites arranged in a two-dimensional array that can capture multiple points on the image at the same time. Each photosite generates an electrical signal that should be proportional to the number of photons captured. The electrical signal is a continuous signal that is digitized into discrete intensity readings. The resulting digital file contains a two-dimensional array of numbers organized into rows and columns that correspond to each photosensitive element on the camera (Fig. 9). These individual elements are known as "pixels." Each pixel encodes the emission intensity as well as the x and y positions for each point on the original image. The digital files are then used to visualize and analyze the data computationally.
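To make the idea of an image as a two-dimensional array concrete, the following minimal sketch builds a small synthetic 8-bit "image" with NumPy and reads out individual pixel intensities; the array and its values are illustrative only, and a real acquisition would instead be loaded from disk with a TIFF reader.

```python
import numpy as np

# Synthetic stand-in for a camera frame: 512 x 512 pixels, 8-bit depth
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)

print(image.shape)        # (rows, columns) -> (512, 512)
print(image.dtype)        # uint8: intensity values from 0 to 255
print(image[100, 200])    # intensity at row 100, column 200
print(image.min(), image.max())
```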


Table 2 Common specifications for objectives used in fluorescence microscopy. These specifications are often inscribed on the barrel of the objective.

Manufacturer
Definition: Microscope objectives from different vendors are usually not interchangeable. They have threads, dimensions, and optical properties that are tailored for the vendor's systems.
Practical notes: You can buy specialized adaptors to interchange objectives. If you interchange objectives, carefully assess performance, such as colour registration among spectral channels, and make sure that the objective has enough clearance if you mount it on a rotatable turret.

Special properties
Definition: Optical designs customized for specific applications.
Practical notes: Applications like phase contrast and total internal reflection (TIRF) microscopy require an objective designed for the application.

Magnification
Definition: Lateral magnification provided relative to the sample size.
Practical notes: Common objectives range from 4× to 100×. Colour coding for magnification: red, 4× or 5×; yellow, 10×; green, 16× or 20×; light blue, 40× or 50×; bright blue, 60× or 63×; white, 100×.

Working distance (WD)
Definition: The distance from the front lens of the objective to the specimen.
Practical notes: As you increase the NA, WD typically decreases. Oil-immersion lenses have a short WD, typically very close to the specimen. If using an inverted microscope, you can image through cell culture dishes with most 4×, 10×, and 20× objectives. Standard plastic dishes are not compatible with high-NA objectives; specialized dishes or cells mounted on coverslips are needed.

Numerical aperture (NA)
Definition: The greater the numerical aperture, the higher the spatial resolution in (x,y).
Practical notes: The NA appears after the magnification, usually in the format magnification/NA, such as 10×/0.3. A higher NA often means a shorter working distance. We have not considered resolution in z; refer to Chapter 2 for more information.

Immersion medium and/or dipping objective
Definition: The medium between the front lens and the sample surface or coverslip. Some objectives are dipping objectives and are placed directly in the immersion medium without a coverslip or other barrier.
Practical notes: Air, dipping, or immersion medium. If using oil, research the different kinds because they have different optical properties that are suitable for different applications. Dipping objectives have a second white ring below the magnification colour-code ring.

Tube length
Definition: Infinity for modern objectives; 160 or 170 mm for older objectives.
Practical notes: Modern microscope optics are infinity corrected to allow for the placement of optical components in the light path.

Coverslip, if applicable
Definition: "0" means no coverslip; standard coverslip thickness is 0.17 mm.
Practical notes: Objectives are designed to work with no coverslip, a coverslip of fixed thickness (typically 0.17 mm), or a variable range of thicknesses. The appropriate condition is listed after the tube length (usually the infinity symbol on modern microscopes).

Objective type
Definition: All lenses, including objectives, exhibit aberrations that can affect the quality of the image. The three major aberrations are spherical aberration, chromatic aberration, and field curvature. Chromatic aberration poses the greatest risk to the integrity of data sets: images collected using different spectral channels do not register properly, so images are offset in x, y, or z, which can lead to erroneous interpretations. Spherical aberration arises because rays at the edges of lenses do not focus at the same point as those that travel through the center. A perfect image is perpendicular to the axis of the objective; field curvature means the image formed is slightly curved at the edges and thus out of focus.
Practical notes: Objective types and degree of correction for optical aberrations: spherical aberration (Sph), chromatic aberration (Chr), field curvature (Fld). Numbers report how many wavelengths are corrected.
  Achromat: Sph 1, Chr 2, Fld no
  Plan achromat: Sph 1, Chr 2, Fld yes
  Fluorite: Sph 2–3, Chr 2–3, Fld no
  Plan fluorite: Sph 3–4, Chr 2–4, Fld yes
  Plan apochromat: Sph 3–4, Chr 4–5, Fld yes

Iris or collars
Definition: Adjust for plate or chamber thickness, coverslip thickness, and temperature.
Practical notes: Collars are often used to correct for different properties. These can include correcting for chamber thickness and/or improving optical quality when imaging into thick specimens and/or imaging at different temperatures. Irises limit the cone of light and thus the NA; if you are imaging with fluorescence, the iris must be fully open to achieve the full NA and thus spatial resolution.

When a camera or related detector captures an image on a fluorescence microscope, the image is digitized, meaning that it is converted from a continuous or analogue signal (the real image) to one consisting of discrete data points (the signal detected at each detector element). Whenever we convert a continuous signal to a digital one, we must consider the sampling rate, that is, the extent to which we must sample to create an accurate representation of the original. The image formed by the microscope is continuous, but we convert it into discrete pixel readings so that we can visualize and analyze the signal computationally. To capture the image at high fidelity without losing information, we must sample the image with high enough spatial detail. This sampling rate is governed by a fundamental theorem of signal processing, known as the Nyquist Theorem. The Nyquist Theorem states that a continuous signal must be sampled at at least twice the highest frequency in the original [27]. For a fluorescence image, the highest spatial frequency corresponds to the lateral resolution limit r. When imaging, if you do not want to lose spatial detail, you should configure your system so that

p ≤ Mtotal r / 2,    (6)

where p is the pixel length (assuming square pixels), Mtotal is the total magnification of the system, and r is the smallest lateral (x,y) distance resolvable by the system, given by Eq. 5. If the smallest resolvable feature is 1 μm, then each image pixel should correspond to about 500 nm at the sample. As long as the Nyquist Theorem is followed, the spatial fidelity of the microscopic image is preserved and sampling artifacts are avoided.

If we do not sample the image with enough pixels per resolution element r, we denote this as "undersampling." An undersampled image does not preserve all the spatial information that was resolved in the original image and can create artifacts.
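The back-of-the-envelope sketch below applies Eqs. 5 and 6 to check whether a camera's physical pixel size satisfies Nyquist sampling for a given objective; the wavelength, NA, magnification, and pixel size are hypothetical values chosen only to illustrate the arithmetic.

```python
# Illustrative Nyquist check based on Eqs. 5 and 6 (all values are assumptions)
emission_nm = 520.0       # emission wavelength
na = 1.40                 # objective numerical aperture
magnification = 60.0      # total magnification onto the camera
camera_pixel_um = 6.5     # physical photosite size of the camera, in microns

r_um = 0.6 * (emission_nm / 1000.0) / na           # Eq. 5, lateral resolution
max_pixel_um = magnification * r_um / 2.0          # Eq. 6, largest allowed pixel
sample_pixel_um = camera_pixel_um / magnification  # pixel size projected onto sample

print(f"r = {r_um:.3f} um, pixel at sample = {sample_pixel_um:.3f} um")
print("Nyquist satisfied" if camera_pixel_um <= max_pixel_um else "Undersampled")
```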


Fig. 9 Images are composed of individual pixel readings arranged in a two-dimensional (x,y) array. A fluorescence image of living cells labelled with a fluorescent probe that localizes to mitochondria is shown. The area of the cell bounded by the white box is shown below at higher zoom. The image is composed of pixels that encode intensity readings. The image is 8-bit, meaning that the possible intensity values range from 0 to 255. Representative intensity (I) pixel readings are indicated. The image was recorded using a spinning disk confocal microscope (Leica DMI6000 stand, Diskovery Flex system equipped with an Andor iXon Ultra 897 camera, MetaMorph software; Quorum Technologies, Guelph, Ontario, Canada). Image acquisition settings: 63×/1.35 NA objective, 488 nm laser excitation and an emission filter with transmission between approximately 500–575 nm to match the emission spectrum of the organelle probe MitoTracker Green FM (ThermoFisher Scientific, Canada). Scale bar = 20 μm

For example, in Fig. 10, the fluorescence image on the left has been sampled with fewer pixels per micron than the one on the right. This sampling is inadequate if we want to obtain the highest resolution possible from the experiment. Conversely, if we sample more than required, our images will contain more pixels and require more computer storage but will not contain any more detail. This is known as "oversampling" or "empty magnification." Another drawback of oversampling is that it can slow down image acquisition and even degrade the quality of imaging, because it can lead to photobleaching.


Fig. 10 The effect of sampling on spatial resolution. An autofluorescent pollen grain (#304264, Carolina Biological Supply Company, NC, USA) imaged using a laser scanning confocal microscope (Nikon A1R microscope, Ti2-E stand, NIS-Elements v 5.02 software; Nikon Canada, Mississauga, Ontario, Canada) and a 20×/0.75 NA objective. The image on the left is undersampled and not all spatial resolution has been preserved. This image has been sampled with fewer pixels per micron (1.27 μm/pixel or 0.8 pixels/μm). The pollen on the right has been sampled according to the Nyquist Theorem (0.18 μm/pixel or 5.5 pixels/μm). When the image is sampled according to the Nyquist Theorem, features of the pollen are better resolved. For example, the spikes of the pollen are visible and can be counted and measured in the image on the right. Image acquisition settings: 561 nm laser excitation and an emission filter with transmission between approximately 570–625 nm. Scale bar = 10 μm

Widefield microscopes should typically be configured so that the r obtained with the highest numerical aperture objective is sampled by at least two adjacent image pixels [28]. If you are not sure whether a system is sampling at Nyquist, you can calibrate your system by acquiring an image of a stage micrometer or other standard, such as a USAF 1951 target, and then calculating the number of adjacent pixels per unit length in microns [29]. On systems such as laser scanning confocal or multiphoton microscopes, the sampling frequency is set by the operator rather than determined by a camera. Often, the default settings are not configured for optimal sampling but rather for speed. For this reason, it is important to check your sampling frequency on these systems.

Sampling at the rate specified by the Nyquist Theorem, however, is not an absolute requirement but rather depends on the research question and application. If you are not interested in imaging features close to the resolution limit, then undersampling may be appropriate, especially if you need to acquire images quickly and with minimal photobleaching. Yet if you are exploring spatial relationships on the order of 200–500 nm, it is critical to ensure that you are sampling at the highest fidelity possible. Nyquist sampling is also relevant to imaging z-sections, described in Chapter 2, and to sampling in the time domain, as in live cell imaging (Chapter 3).
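As a worked example of that calibration step, the sketch below converts a hypothetical stage micrometer measurement into a pixel size and compares it to the Nyquist requirement; the numbers are invented for illustration only.

```python
# Hypothetical calibration: 55 pixels span one 10 um division of a stage micrometer
pixels_per_division = 55
division_um = 10.0
pixel_size_um = division_um / pixels_per_division   # ~0.18 um/pixel at the sample

r_um = 0.53                    # assumed lateral resolution of the objective (Eq. 5)
nyquist_limit_um = r_um / 2.0  # largest pixel size (at the sample) that preserves r

print(f"Measured pixel size: {pixel_size_um:.2f} um/pixel")
print("OK" if pixel_size_um <= nyquist_limit_um else "Undersampled")
```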


For each pixel in a digital imaging file, the intensity readings and their ranges are set by the analogue-to-digital conversion (ADC) electronics. The ADC converts the analogue signal (the electrical signal generated by the photons hitting the photosites) to a discrete integer value within a specific range. These readings are expressed from 0 to 2^N − 1 (equivalently 1 to 2^N), where N is the bit depth. The pixels in an 8-bit image will range from 0 to 255 (equivalently 1 to 256), while those in a 12-bit image will range from 0 to 4095 (or 1 to 4096). It is possible to record the same image using either bit depth. In each case, the 0 value represents the lowest possible value, while the top value (255 or 4095) represents the highest possible intensity, corresponding to the maximum number of photons that can be detected at each photosite. The bit depth determines how many gradations are used to capture the original signal, and a higher bit depth captures the signal with finer detail. This is like measuring a volume of liquid (73 mL) using a 100 mL graduated cylinder with markings every mL (high bit depth) versus a 100 mL beaker with markings every 50 mL (low bit depth).

When working with digital images, you may not see a difference between an 8-bit and a 12-bit image because the human eye can, at best, distinguish about 700 shades of grey [30]. However, if you collect your image at 12-bit but save it as an 8-bit image, important information about your signal will be irrevocably lost. It is critical to acquire and save your data at higher bit depths if you plan to carry out quantitative analysis of your images. Additionally, compressing data reduces the fidelity of an image and can introduce artifacts. When saving data, avoid file compression and save your data sets in the original proprietary format. When exporting files, use a lossless format such as PNG or TIF and avoid JPEG, which compresses your data.

Another common misconception is that the detectors used in research-grade fluorescence microscopes detect images in colour. Except for cameras used for histology, most detectors are monochrome and only record the intensity of light, not its colour. The images are acquired using filters, and then we assign a colour lookup table (LUT) that, by convention, is often chosen to match the fluorescence emission peak. Although this is the convention, it is also appropriate and sometimes preferable to use a different colour scheme; however, to avoid confusion, it is good practice to report the spectral range used to detect the signal.

When using colour to display images, it is important to use palettes that are accessible to people with colour vision deficiencies. The prevalence of inherited colour vision deficiencies varies, affecting 3–8% of males worldwide; rates for females are much lower [31, 32]. Colour vision deficiencies can also develop due to disease or environmental exposure.


The most common deficiency affects red–green perception. This means the common red–green palette used to depict fluorescence images is uninterpretable to a significant fraction of the population and can create problems when working with colleagues who may interpret the data in unexpected ways.
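As one practical option, a single-channel image can be displayed with a perceptually uniform, colour-blind-friendly colormap instead of a pure red or green LUT. The minimal sketch below applies Matplotlib's built-in "magma" colormap to a synthetic image; this is simply an illustration of the idea, not a display convention prescribed in this chapter, and any colour scheme used should still be reported alongside the detected spectral range.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic single-channel "fluorescence" image (values stand in for intensities)
yy, xx = np.mgrid[0:256, 0:256]
image = np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2) / (2 * 40.0 ** 2))

# Display with a perceptually uniform colormap rather than a red or green LUT
plt.imshow(image, cmap="magma")
plt.colorbar(label="intensity (arb. units)")
plt.title("Single channel displayed with an accessible LUT")
plt.show()
```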

3 Transferring Concepts to Practice: The Fluorescence Imaging Experimental Workflow

Now let us apply the foundational concepts described above to the experimental workflow, as illustrated in Fig. 11. Ideally, consider these steps as an iterative cycle over the lifetime of a project. For example, you may have to carry out a few tests before finalizing your experimental design or selecting your preferred imaging mode. In addition, if you acquire all your imaging data before attempting to analyze it, you may find yourself repeating experiments. This is because image analysis often highlights issues in the workflow, and you may need to adjust your sample preparation and/or image acquisition settings to extract the information that you need from the images.

Fig. 11 The steps in a fluorescence imaging experiment. This workflow should not be treated as strictly linear. You may need to revisit the different steps as your experiment progresses


3.1 Define Problem

Fluorescence microscopy requires an intensive investment of time, effort, and funding, and the goal is rigor and reproducibility [12, 33]. Before beginning an imaging experiment, make sure that you have well-defined aims with specific goals. As with any research and development project, every step should be systematically mapped out, performed, and documented. It is important to define your project aims and how you will achieve them. Capturing images does not circumvent the need for controls or for documenting your work. Anecdotally, through working with hundreds of students and researchers, we note that people new to the field tend to start off with a flurry of image acquisition rather than taking the time for experimental design, sample optimization, and analysis. Even the most seasoned microscopists experience the siren call of the single representative image rather than taking the time to acquire systematic and statistically rigorous data.

3.2 Select Model and Optimize Sample Preparation

The models used in fluorescence imaging range from individual molecules to living organisms. Whether imaging an individual ion channel, root stem, or blood flow, the imaging tools and approaches developed for fluorescence imaging bridge these levels of organization. Careful sample preparation is essential for successful microscopy experiments. Nonfluorescent samples can be labelled by a number of methods, such as labelling organelles with specific dyes, expressing proteins tagged with fluorescent proteins [34], using fluorescently tagged DNA probes (such as DNA-PAINT) [35], and immunofluorescence [36]. Careful consideration should be given to how a sample will be stained and which fluorophores will be used. Always take the time to research suitable probes to address your research questions and check that they are compatible with your microscope filters. For immunofluorescence applications, it is also important to validate your antibody choice [37–39] to ensure you are labelling the intended structure or molecule.

When choosing probes, it is important to compare the excitation and emission spectra to the spectral channels available on the microscope and to consider fluorophore brightness. It is also important to know the performance capability of the system itself: a probe may be bright, but the light source on a microscope may be dim within that spectral range, or the camera may exhibit lower sensitivity there. The intensity of a light source versus wavelength and the spectral sensitivity curves of detectors are available on vendor websites. It is a good idea to refer to these, along with probe information, as you plan an imaging experiment. When imaging only one target, it is preferable to choose a bright fluorophore for the most sensitive spectral channel on the microscope. When imaging more than one target, it is a good idea to assign the optimal channel to the target with the lowest abundance in the sample.



Another important consideration is the number of targets that can be imaged in one experiment. Finally, advanced techniques such as live cell imaging, optogenetics, multiphoton imaging, and superresolution often require specialized probes that take more time and effort to identify if you are new to the field. It is important to consider the possible sources of error and/or artifacts arising from the choice of model and labelling method at the planning stage.

Proper controls are critical for the success of fluorescence microscopy experiments. In addition to experimental controls, you should also prepare positive and negative controls for imaging [12, 40]. For example, when carrying out multicolor experiments, it is important to test samples labelled with each fluorophore alone, as you may obtain false positives if there is overlap among the spectral channels. It is also important to check an unlabelled sample for autofluorescence arising from endogenous fluorophores. Autofluorescence often looks hazy and diffuse but can also mimic specific antibody labelling. Sometimes you can take advantage of this inherent signal to simplify your experimental design.

3.3 Select Imaging Technique

To choose a suitable microscope, consider your research aims and the availability of instrumentation. When reviewing techniques and systems, it is helpful to recognize that you will need to prioritize and balance different requirements. For example, if you need to capture images at high speed, you may not be able to acquire images with a high signal-to-noise ratio (S/N). In fluorescence microscopy, various approaches are used to quantitatively measure S/N [41], but here we use this term qualitatively to mean the extent of signal over the background. Other considerations are the number of probes and spectral ranges to be used, whether you will image single planes (2D) or volumes (3D), and the fraction of the sample area or volume you need to image for statistical rigor. For delicate samples, you also need to monitor photobleaching and, if working with living specimens, phototoxicity.

As a starting point, consider whether your sample is compatible with the optical setup of your microscope. Different microscopy techniques constrain the way you can mount and observe your sample, and you should check the requirements for your desired technique before planning any experiments. For guidance on which techniques are best suited to your research, we recommend the following chapters in this book, in addition to overviews in the literature [24, 42]. If you have the time and resources, sometimes the best way to find the optimal microscope is to test applications on different systems to determine the advantages and drawbacks of each. When you work on your own research question, with your own samples, you will directly experience and recognize the choices and compromises involved with each technique and application.


If you do not have access to the optimal system for your experiment, this does not mean all is lost. You can go far with a widefield fluorescence microscope, because it will help build the groundwork and identify whether a more sophisticated system is truly needed. A common misconception in the imaging field is that a more complex and more expensive system must out-perform a basic one, but this is not always the case. Sometimes a widefield microscope may be your best choice, when considering your research model and aims as well as other factors such as sensitivity, time, and cost. If you do need access to an advanced system, there are newly emerging national and international consortiums that are addressing the need for shared and open resources.

Although fluorescence microscopy is a powerful technique, it reveals only a minuscule fraction of the numerous organelles and thousands of proteins, lipids, and other molecular constituents within a specimen. You may be pleased that the fluorescence images of your living cells look crisp and bright; yet if you don't take the time to examine them using white-light imaging, you might miss that they are stressed and exhibit condensed nuclei, swollen mitochondria, and blebbing. Alternatively, you may think that you are looking at individual isolated cells, but when you inspect your cells using white light, you notice that your staining is highlighting a subpopulation of a monolayer. White-light imaging also exposes your specimens to a much lower dose of light compared to fluorescence imaging, so there is reduced risk of photobleaching. When you first inspect your sample through the eyepiece, use white-light imaging whenever possible to bring your sample into focus. This will minimize photobleaching and will reduce photodamage and phototoxicity if imaging live cells.

3.4 Optimize Image Acquisition

Once you have planned your imaging experiment and prepared your samples, the next step is to acquire images at the microscope. This can be daunting, especially as technologies advance and options at the microscope increase. However, there are several general principles that apply in fluorescence imaging. Applying these principles to experimental practice will help to ensure your imaging is optimal.

3.4.1 Know your Light Path(s)

When you sit down at a microscope for the first time, you might wonder where to start. We recommend always tracing the light path when working with a system that is new to you. Can you identify your light source(s)? Do you know what fluorescence filter sets are available and how to select them? Does the system have any add-ons or extras, such as different stages? What objectives are available? How is the image detected, and where? If you can identify some of the components on the microscope, then it will be easier to navigate the image acquisition software.


Although commercial packages have been slowly improving the user interface experience, the array of options and controls can be confusing or even overwhelming. In addition to tracing the light path, take time to learn how to operate the microscope using a standard slide before moving to your experimental samples. Once you are confident on a system, you will find that your experiments progress faster, because you will not be held back by basic operator errors.

3.4.2 Select Filter Sets

First, place the appropriate filters in the light path. You should have already matched your fluorophores to the microscope filter sets as part of your experimental design. If you did not do this in advance, you will need to check which filters best match the probes used in your sample. When the specifications for two filter sets have similar transmission curves, we recommend testing both with your sample to determine which gives the best imaging for your experimental needs.

3.4.3 Place Sample and Focus

Start your imaging session with an air objective positioned for maximum clearance from the slide. When switching between objectives, use the electronic controls if available. If your system is not automated, switch positions by handling the objective turret (nosepiece). Do not press or pull on the objectives themselves when switching between them, as unnecessary strain can misalign the lens assemblies within objectives. Objectives are the heart of the imaging system and are costly, and if you are not careful, wear and tear will affect the quality of the images acquired.

Next, position your sample securely on the microscope stage. Microscopes are equipped with manual and/or motorized stages, each suited for different applications. For example, your microscope might be equipped with a piezo stage that allows fast and accurate collection of z-stacks and/or a live cell imaging stage that allows you to control the local environment by modifying temperature, humidity, and CO2. When using an inverted microscope (objective below the sample), take care that the objective is centered within the insert that holds your sample, and secure your sample in place with a pressure fitting or paper tape.

Now that the sample is secured, bring it into focus using the eyepiece or detector. It is a good idea to start with an air objective even if the experiment requires imaging with an oil immersion or other short working distance objective. Once you focus using the air objective, switch to the oil immersion objective, which may need further focusing to fine-tune the image. Using the air objective to set up the imaging helps minimize the risk of driving the objective into the sample. Although the front lenses on objectives frequently have protective spring mechanisms, these cannot protect against extreme carelessness, such as rapidly crashing the objective into the sample.


If you take the time to familiarize yourself with focusing an objective, ideally with a standard slide, you will have a much easier time when carrying out your experiments and will minimize the risk of damaging the objective and related components.

When using an oil immersion objective, extra care is needed to prevent damage. Do not be tempted to add too much oil to the objective. Oil can creep into the spring mechanism of the objective, causing seizing, or can leak into the optics, which can mean a costly repair or worse. For optimal imaging, ensure that you clean an oil objective at regular intervals. Cleaning methods vary by laboratory and imaging center, and there is lively debate about how best to clean oil immersion objectives. Therefore, if you are not sure how to clean an objective, ask the person in charge of the microscope. If there is no one overseeing the microscope, you can ask the vendor.

Once your sample is in focus with the desired objective, you need to select regions of interest. When choosing where to image, consider possible bias. Ideally, the images captured represent your population of interest, without selectively choosing areas that support your hypothesis. To limit bias when choosing your fields of view, there are several options, including: using white light to focus your image and choose a field of view (this is also gentle on your cells); using the fluorescence channel of a nonexperimental marker (such as the nucleus) to focus and choose a field of view; using tiling to cover a larger area of the sample; and labelling your samples with codes so that your imaging is blinded to the condition. Also consider how you will approach the statistical analysis of your data, as this may influence which and how many images are acquired.

3.4.4 Capture Images

After focusing with the eyepiece and choosing your field of view, switch to the camera to start acquiring data. Sometimes when you switch to the camera, the image is perfectly in focus. This optimal configuration, when both light paths share a common focal plane, is known as "parfocality." However, often you need to adjust the focus slightly because the light path to the eyepiece and the light path to the detector are not identical. If you see a large change in focus between the two light paths, your microscope may need to be adjusted by the person entrusted with its care or by the vendor.

When you acquire images, it is important to monitor the image and the intensity readings at each pixel to ensure the image meets the needs of your experiment. For example, you may want a bright image with high S/N for immunofluorescence imaging of fixed cells, but you may sacrifice S/N for speed and/or viability when working with living specimens. To make these choices, you should monitor the intensity readings encoded in your images.


You could rely on the brightness of the image itself, but images displayed by the software can be artificially enhanced for maximum brightness and can be deceptive (beware the "Autoscale" button). The image histogram is an invaluable tool for checking pixel intensity as you acquire images. Histograms plot the number of pixels versus the possible intensity values (Fig. 12). Different microscopes offer different ways to adjust the signal intensity. A widefield microscope with a camera will allow you to increase or decrease the exposure time and may also let you modify the camera gain and the intensity of the excitation light source. Histogram displays are a standard feature of acquisition software, and you should monitor them closely when setting your light source intensity and exposure time.

As with all aspects of microscopy, setting your signal can involve compromises. Figure 12 depicts three images of von Willebrand factor contained in Weibel-Palade bodies in dermal microvascular endothelial cells. Each image was acquired using different exposure settings, with the exposure time increasing from left to right. The camera is 12-bit, meaning each pixel intensity reading can range between 0 and 4095, where 0 is displayed as black, 4095 as white, and values in between as greyscale. In Fig. 12a, the readings range between 0 and 1000. Having a weak signal may be necessary for live cell or other photosensitive experiments, but if imaging fixed cells, you can improve image quality by increasing the brightness of your signal. In Fig. 12b, more of the intensity range is used, and the pixel intensity values range between 0 and 3000. Figure 12c illustrates saturation. This means that some pixels in the image have reached the maximum possible intensity value (4095) and the image no longer accurately represents the fluorescence signal because the photosites are being overfilled with photons. This is much like trying to accurately measure 105 mL when the 100 mL graduated cylinder is overflowing. If you were to continue to increase the signal intensity, more and more pixels would reach this maximum value and information contained in the image would be lost.

Sometimes operators adjust the brightness and contrast of an image by manipulating the histogram settings so the image looks brighter. The "Autoscale" function in software does this automatically, conveniently brightening the image for you by adjusting the brightness and contrast. This is acceptable, as long as you recognize that adjusting the image histogram does not change the acquisition settings and, by extension, the signal recorded by the camera. In short, by adjusting the histogram display settings, you have not changed the pixel intensity values themselves, only how they are displayed. Therefore, you may get a false sense of the S/N of the data. To change the pixel intensity readings, you must modify the microscope settings by adjusting the light source intensity, camera exposure, or other instrumental settings.
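A histogram and saturation check can also be scripted when reviewing saved data. The sketch below is a minimal, assumed example for a 12-bit image stored as a NumPy array (for instance loaded from a TIFF file); it counts saturated pixels and plots the intensity histogram with log-scaled counts, mirroring the displays in Fig. 12.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder for a 12-bit acquisition; replace with your own loaded image array
rng = np.random.default_rng(1)
image = rng.poisson(lam=800, size=(512, 512)).clip(0, 4095).astype(np.uint16)

max_value = 4095                                  # 12-bit full scale
saturated = np.count_nonzero(image == max_value)  # pixels at the top of the range
print(f"Saturated pixels: {saturated} ({100 * saturated / image.size:.3f}%)")

# Histogram of pixel intensities, log-scaled counts as in Fig. 12
plt.hist(image.ravel(), bins=256, range=(0, max_value), log=True)
plt.xlabel("Greyscale intensity reading")
plt.ylabel("Number of pixels (log scale)")
plt.show()
```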

Fig. 12 Using histograms to optimize the intensity values within an image. The same region of a cell was imaged using different exposure settings; each panel is shown with its histogram of the number of pixels (log scale) versus greyscale intensity reading (0–4000). (a) This image has weak signal and it is difficult to observe the features in the image. The pixel intensities fall between 0 and 1000. (b) This image has optimal signal. The features are visible and easily distinguished. The pixel intensities fall between 0 and 3000. (c) This image is saturated. Although this may look pleasing to the eye, the saturation makes features appear larger and causes coalescing between features that should be resolved. Saturation can lead you to misinterpret your data sets. Sample: dermal microvascular endothelial cells labelled using indirect immunofluorescence to detect von Willebrand factor and STAR RED (Abberior, Göttingen, Germany). Images were acquired using a widefield microscope (Nikon Ti2 stand, NIS Elements software, ORCA Flash 2 sCMOS camera; Nikon Canada, Mississauga, Ontario, Canada). Image acquisition settings: 60×/1.4 NA objective, 640 nm LED excitation, and an emission filter with transmission between approximately 669–741 nm. Scale bars = 20 μm


When you acquire images, it is important to optimize your settings at the beginning of your experiment and then adhere to them throughout the course of your project. In addition to optimizing your acquisition settings, consider the order in which you image samples. It may be tempting to acquire images of all control slides first, and then image slides from your different experimental conditions. However, as light sources and systems warm up, there can be slight changes in excitation light intensity and other inconsistencies. Therefore, we recommend alternating your samples, so that you acquire images of one control, then one experimental condition. We also suggest that all images for a data set be acquired on the same day, because performance can vary from day to day. If the experiment cannot be completed in one imaging session, consider re-imaging one or two samples when you return to the microscope, as an internal check for comparison between sessions.

Some microscopes are also equipped with additional features. It is worthwhile investigating the capabilities of your microscope as you learn to use the system so that you can optimize your image acquisition and be aware of any limitations. Common features include the following:

• A motorized stage that allows you to increase your field of view by taking multiple images covering an entire area or around a set point ("tiling"). These images can then be stitched together to create one large image.

• Autofocus capabilities that allow you to maintain focus when running time-lapse imaging experiments, for example with live cells.

• The ability to acquire z-stacks, which allow you to visualize your sample in 3D.

• A live cell imaging stage that controls temperature, atmospheric gas composition, and humidity.

• Perfusion apparatus for imaging cells under flow conditions.

• Photoactivation apparatus for photobleaching or photoactivation experiments.

3.5 Analyze Data and Interpret Results

Image analysis should be considered from the start, when designing experiments. Do not treat the workflow in Fig. 11 as linear, as the analysis methodology may require further optimization of sample preparation and/or image acquisition. It is a good idea to check whether others in your group have developed approaches to address related questions, or to search the literature for publications that provide detailed protocols. Here, as in the other steps of the experimental workflow, you must make choices depending on your research question, background, and access to software packages.
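As a flavour of what such an analysis might look like in code, the following sketch is a generic, hypothetical pipeline using scikit-image (not a protocol from this chapter): it thresholds a single-channel image, labels the resulting objects, and reports their sizes and mean intensities. Any real analysis should be validated against manually checked images.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def measure_objects(image: np.ndarray):
    """Segment bright objects and report area and mean intensity per object."""
    threshold = threshold_otsu(image)          # automatic global threshold
    mask = image > threshold                   # binary mask of candidate objects
    labels = label(mask)                       # connected-component labelling
    results = []
    for region in regionprops(labels, intensity_image=image):
        results.append((region.label, region.area, region.mean_intensity))
    return results

# Example with a synthetic image; replace with your own data
rng = np.random.default_rng(2)
demo = rng.normal(100, 10, size=(256, 256))
demo[60:80, 60:80] += 200                      # a bright "object"
for obj_id, area, mean_int in measure_objects(demo):
    print(f"object {obj_id}: area = {area} px, mean intensity = {mean_int:.1f}")
```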


There are many open source and commercial packages available for image analysis. If you have a choice, check which package works best for you. If you are new to image analysis, it is important to develop image analysis skills in parallel with learning to operate the microscope so that you can review and analyze your data before carrying out your next experiments. This is because data that look acceptable by eye are sometimes not amenable to image analysis. If you wait to acquire all the images before starting to analyze the data, you may have to go back and repeat your experiments or invest time in post-hoc justification. As you develop your image analysis protocol, ask one or more colleagues to test your approach. A good starting point is Fiji (Fiji Is Just ImageJ) [43], as it is widely used for image analysis in the life sciences, and we also draw your attention to an excellent introductory textbook that is also open source [41].

Data management and storage are also critical. You should preserve all original data sets, as these contain the metadata that encode the image acquisition settings required to reproduce the imaging. If using a manual system, take the time to note down the acquisition settings. As part of any project, you should save your data sets where they are secure. If you do not have access to a server with regular backups, keep copies of your data in different locations. Additionally, make sure that your supervisor or team leader has an organized and searchable archive of all your data sets, as well as a detailed protocol for the image analysis, similar to how you document all other experimental details in your logbooks. These are but a few of the considerations that should be part of a data management plan (DMP), which is now mandated by multiple granting agencies. A DMP is a document that clearly defines roles and responsibilities when it comes to documenting, acquiring, maintaining, storing, and sharing data at all stages of the project [44–46]. A number of guides and checklists are available to the research community to aid in the development of DMPs, and these are useful tools for keeping the project on track at every stage of its life cycle [47–49].

3.6 Act on Extracted Knowledge

3.6 Act on Extracted Knowledge

When you present, publish, or otherwise communicate your results and conclusions, it is important that you include information about every step in the experimental workflow, as described above. Reproducibility in the life sciences is an ongoing concern, and a recent paper highlights that most publications do not adhere to best practices for reporting imaging data [50]. Ensure that you keep up to date with best practices [12, 33, 51] and clearly report and document every step of your experiment, including image analysis. If you have developed custom software, follow guidelines for documenting and sharing code as well. If you can, consider depositing the original data sets in centralized repositories. In this way, you will help improve standards and model good practices, as you create a body of work for you and the research community to build and act upon.

4 Next Steps

We have introduced concepts designed to develop the skills and mindset needed for successful microscopy experiments. Microscopy is a practice that demands continual learning and refinement. As you start applying microscopy to your research, we encourage you to read the literature, consult with experienced colleagues, and seek out learning opportunities through online collaboration and/or the numerous onsite workshops offered worldwide. Besides reinforcing concepts and skills, these opportunities facilitate the transfer of ideas and experiences. As Michael Polanyi stated, “we know more than we can tell” [52], and when you reach out to others, you will learn valuable informal information that does not always translate to text. Whether you are a newcomer or a seasoned expert, you can help deepen your expertise and that of others by creating vibrant communities that sustain and nurture good practice.

Acknowledgments

We thank Mr. Joel Glover (University of Calgary) for preparing the objective schematics shown in the figures and Dr. Craig Brideau (University of Calgary) for providing critical feedback on the manuscript. We also thank Mr. Devin Aggarwal from Dr. Kamala Patel’s laboratory at the University of Calgary, who kindly prepared the dermal endothelial samples illustrated in the figures.

References

1. Murphy DB, Davidson MW (2012) Phase contrast microscopy and darkfield microscopy. In: Fundamentals of light microscopy and electronic imaging. Wiley, Hoboken, NJ, pp 115–133
2. Murphy DB, Davidson MW (2012) Polarization microscopy. In: Fundamentals of light microscopy and electronic imaging. Wiley, Hoboken, NJ, pp 153–171
3. Murphy DB, Davidson MW (2012) Differential interference contrast microscopy and modulation contrast microscopy. In: Fundamentals of light microscopy and electronic imaging. Wiley, Hoboken, NJ, pp 173–197

4. Lichtman JW, Conchello JA (2005) Fluorescence microscopy. Nat Methods 2:910–919
5. Shashkova S, Leake MC (2017) Single-molecule fluorescence microscopy review: shedding new light on old problems. Biosci Rep 37:20170031
6. Harris DC, Bertolucci MD (1989) Symmetry and spectroscopy: an introduction to vibrational and electronic spectroscopy. Dover, New York
7. Mohr PJ, Newell DB, Taylor BN (2016) CODATA recommended values of the fundamental physical constants: 2014. Rev Mod Phys 88:035009


8. Lakowicz JR, Szmacinski H, Nowaczyk K et al (1992) Fluorescence lifetime imaging. Anal Biochem 202:316–330
9. Lakowicz JR (2006) Fluorescence anisotropy. In: Principles of fluorescence spectroscopy. Springer, New York, NY, pp 353–382
10. Dunn KW, Kamocka MM, McDonald JH (2011) A practical guide to evaluating colocalization in biological microscopy. Am J Physiol Cell Physiol 300:723–742
11. Zimmermann T (2005) Spectral imaging and linear unmixing in light microscopy. Adv Biochem Eng Biotechnol 95:245–265
12. Payne-Tobin Jost A, Waters JC (2019) Designing a rigorous microscopy experiment: validating methods and avoiding bias. J Cell Biol 218:1452–1466
13. Aaron JS, Taylor AB, Chew TL (2018) Image co-localization – co-occurrence versus correlation. J Cell Sci 131
14. Taniguchi M, Lindsey JS (2018) Database of absorption and fluorescence spectra of >300 common compounds for use in PhotochemCAD. Photochem Photobiol 94:290–327
15. Diaspro A, Chirico G, Usai C et al (2006) Photobleaching. In: Handbook of biological confocal microscopy, 3rd edn. Springer, US, pp 690–702
16. Demchenko AP (2020) Photobleaching of organic fluorophores: quantitative characterization, mechanisms, protection. Methods Appl Fluoresc 8:022001
17. Shaw PJ (2006) Comparison of widefield/deconvolution and confocal microscopy for three-dimensional imaging. In: Handbook of biological confocal microscopy, 3rd edn. Springer, US, pp 453–467
18. Love D, Goodhand I (2019) Optimal LED filtering for fluorescence microscopy. Microsc Today 27:26–30
19. Jerome WG(J) (2017) Practical guide to choosing a microscope camera. Microsc Today 25:24–29
20. Keller HE (2003) Proper alignment of the microscope. Methods Cell Biol 2003:45–56
21. Inoué S, Spring KR (1997) Practical aspects of microscopy. In: Video microscopy: the fundamentals. Plenum Press, New York
22. Jenkins FA, White HE (1976) Fundamentals of optics. McGraw-Hill, New York
23. Perrin MD, Soummer R, Elliott EM et al (2012) Simulating point spread functions for the James Webb Space Telescope with WebbPSF. In: Clampin MC, Fazio GG, MacEwen HA et al (eds) Space telescopes and instrumentation 2012: optical, infrared, and millimeter wave. SPIE, Amsterdam, p 84423D

24. Fritzky L, Lagunoff D (2013) Modern trends in imaging XII: advanced methods in fluorescence microscopy. Anal Cell Pathol 36:5–17
25. Ross ST, Allen JR, Davidson MW (2014) Practical considerations of objective lenses for application in cell biology. In: Methods in cell biology. Academic Press Inc., pp 19–34
26. Mattheyses AL, Simon SM, Rappoport JZ (2010) Imaging with total internal reflection fluorescence microscopy for the cell biologist. J Cell Sci 123:3621–3628
27. Weik MH (2000) Nyquist theorem. In: Computer science and communications dictionary. Springer, US, pp 1127–1127
28. Murphy DB, Davidson MW (2012) Diffraction and spatial resolution. In: Fundamentals of light microscopy and electronic imaging. Wiley, Hoboken, NJ, pp 103–113
29. Laboratory for Optical and Computational Instrumentation (LOCI), University of Wisconsin-Madison. Spatial calibration – ImageJ. https://imagej.net/SpatialCalibration
30. Kimpe T, Tuytschaever T (2007) Increasing the number of gray shades in medical display systems – how much is enough? J Digit Imaging 20:422–432
31. U.S. National Library of Medicine (2015) Color vision deficiency. MedlinePlus Genetics
32. Hasrod N, Rubin A (2016) Defects of colour vision: a review of congenital and acquired colour vision deficiencies. African Vision and Eye Health 75
33. Lee JY, Kitaoka M (2018) A beginner’s guide to rigor and reproducibility in fluorescence imaging experiments. Mol Biol Cell 29:1519–1525
34. Specht EA, Braselmann E, Palmer AE (2017) A critical and comparative review of fluorescent tools for live-cell imaging. Annu Rev Physiol 79:93–117
35. Schnitzbauer J, Strauss MT, Schlichthaerle T et al (2017) Super-resolution microscopy with DNA-PAINT. Nat Protoc 12:1198–1228
36. Allan V (1999) Protein localization by fluorescence microscopy: a practical approach. Oxford University Press, Oxford
37. Ivell R, Teerds K, Hoffman GE (2014) Proper application of antibodies for immunohistochemical detection: antibody crimes and how to prevent them. Endocrinology 155:676–687
38. Bordeaux J, Welsh A, Agarwal S et al (2010) Antibody validation. BioTechniques 48:197–209

39. Weller MG (2018) Ten basic rules of antibody validation. Anal Chem Insights 13:1177390118757462
40. Waters JC (2009) Accuracy and precision in quantitative fluorescence microscopy. J Cell Biol 185:1135–1148
41. Bankhead P (2014) Analyzing fluorescence microscopy images with ImageJ
42. Thorn K (2016) A quick guide to light microscopy in cell biology. Mol Biol Cell 27:219–222
43. Schindelin J, Arganda-Carreras I, Frise E et al (2012) Fiji: an open-source platform for biological-image analysis. Nat Methods 9:676–682
44. Michener WK (2015) Ten simple rules for creating a good data management plan. PLoS Comput Biol 11:e1004525
45. Schiermeier Q (2018) Data management made simple. Nature 555:403–405
46. Williams M, Bagwell J, Nahm Zozus M (2017) Data management plans, the missing perspective. J Biomed Inform 71:130–142


47. MIT Libraries. Write a data management plan | Data management. https://libraries.mit.edu/data-management/plan/write/
48. Tools to assist you with the creation of a DMP. https://library.concordia.ca/research/data/dm-plans.php?guid=onlinetools
49. Donnelly M. Checklist for a data management plan (v3.0, 17 March 2011). http://dmponline.dcc.ac.uk
50. Marqués G, Pengo T, Sanders MA (2020) Imaging methods are vastly underreported in biomedical research. eLife 9:1–10
51. Nelson G, Boehm U, Bagley S et al. QUAREP-LiMi: a community-driven initiative to establish guidelines for quality assessment and reproducibility for instruments and images in light microscopy. http://arxiv.org/abs/2101.09153
52. Polanyi M (2009) The tacit dimension. University of Chicago Press, Chicago

Chapter 2

Three-Dimensional Simultaneous Imaging of Nucleic Acids and Proteins During Influenza Virus Infection in Single Cells Using Confocal Microscopy

Richard Manivanh, Seema S. Lakdawala, and Jennifer E. Jones

Abstract

Three-dimensional imaging is a powerful tool for examining the spatial distribution of intracellular molecules like nucleic acids, proteins, and organelles in cells and tissues. Multicolor fluorescence imaging coupled with three-dimensional spatial information provides a platform to explore the relationship between different cellular features and molecules. We have previously developed a pipeline to study the intracellular localization of influenza virus genomic segments within an infected cell. Here, we describe the staining of multiple viral RNA segments in cells infected with influenza virus by combined fluorescence in situ hybridization (FISH) and immunofluorescence, and the quantification of colocalization between viral segments. This chapter will cover the acquisition and analysis of 3D images by the widely used laser scanning confocal microscope. These strategies can be applied to a wide range of biological processes and modified to examine colocalization of other cellular features.

Key words Confocal microscopy, 3D imaging, Fluorescence in situ hybridization, Fluorescence microscopy, Influenza virus, Deconvolution, Point spread function, Huygens, Imaris, Image acquisition, Multicolor

1 Introduction

Three-dimensional (3D) imaging reveals information about processes as they occur within the volume of a cell, organ, or organism and has proven to be a crucial tool in cellular biology [1]. While two-dimensional widefield epifluorescence imaging has the advantages of ease of use and convenience, confocal microscopy yields superior resolution in the z-dimension [2]. The difference lies in the illumination methods: conventional widefield microscopy illuminates a large area with each exposure, whereas confocal microscopy reduces noise from out-of-focus light by scanning focused beams of light to capture a single x/y plane or slice [2]. Iteratively imaging slices through the z-direction of a cell or tissue allows for rendering of 3D volumes [3].


Fig. 1 (a) Influenza virion and viral ribonucleoprotein (vRNP). Inset: The influenza virus genome is comprised of 8 single-stranded vRNA segments decorated with viral nucleoproteins and the heterotrimeric (PB2, PB1, PA) polymerase complex. (b) Sets of ~18–40 bp oligonucleotide probes, each conjugated to a fluorophore, are designed to anneal along the sequence of the target RNA molecule. (Created with BioRender.com)

However, spherical aberration of the system causes diffraction patterns that differ between the lateral and axial planes (x/y or x/z), distorting a spherical object into a football- or hourglass-shape [3]. Distortion in the z-plane can be challenging for intracellular colocalization studies; we will cover methods used to account for this distortion.

Fluorescence microscopy has become commonplace in biological research. However, understanding the limitations of labeling density and resolution is paramount to accurate image analysis. FISH is extensively utilized to visualize the intracellular localization of nucleic acids and can be further combined with immunofluorescence to simultaneously gain information on the localization of nucleic acid and protein targets [4–6]. In this chapter, we utilize combined FISH and immunofluorescence to visualize influenza virus viral RNA (vRNA) and protein puncta at high resolution in 3D. The influenza virus genome is comprised of eight vRNA segments bound by several viral proteins, termed viral ribonucleoproteins (vRNPs) (Fig. 1a) [7–9]. Association and assembly of multiple vRNPs transpires in the cytoplasm as they traffic to the membrane [4, 5]. We present a method for examining the cellular compartments in which nucleic acid and protein targets colocalize, detailing sample preparation and staining, image acquisition, and analysis of colocalized puncta in 3D confocal images.

Thus, the methods described here can be applied to understand complex molecular interactions in cellular biology.

2 Materials

All solutions are prepared using RNase-free consumables (e.g., pipette tips). Workspace and equipment should be treated with an RNase decontamination solution. Solutions containing formamide and paraformaldehyde should be handled in a chemical fume hood and disposed of according to proper waste disposal regulations. Tissue culture work and any work involving infectious material should be performed in a biosafety cabinet using approved safety precautions. Prepare and store all solutions at room temperature unless otherwise indicated. Consult with your institutional biosafety officer prior to work with infectious or hazardous agents.

2.1 Fluorescence In Situ Hybridization (FISH) and Immunofluorescence

1. Circular coverslips: 12 mm diameter glass, 0.16–0.19 mm thickness (#1.5).
2. Tissue culture plates: Sterile, tissue culture-treated 24-well polystyrene plates.
3. Microscope slides: 25 × 75 × 1.00 mm glass.
4. Phosphate-buffered saline (PBS): 137 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4, 1.8 mM KH2PO4, pH 7.4.
5. Media: Minimum Essential Medium containing 10% fetal bovine serum (FBS), 2% L-glutamine, and 1% penicillin–streptomycin.
6. Cells: Madin-Darby canine kidney (MDCK) epithelial cells.
7. Virus: Influenza virus strain A/California/07/2009 (H1N1).
8. Fixative: 4% paraformaldehyde in RNase-free PBS. Store at 4 °C (see Note 1).
9. Cell permeabilization buffer: 70% molecular biology grade ethanol in RNase-free water. Store at −20 °C or below.
10. 20× saline-sodium citrate (SSC): 0.3 M sodium citrate, 3 M NaCl in DEPC-treated H2O.
11. Dextran sulfate sodium salt: 20% solution in DEPC-treated H2O (see Note 2).
12. Wash buffer: 10% formamide, 2× SSC in DEPC-treated H2O (see Note 3).
13. Hybridization buffer: 10% formamide, 2× SSC, 10% dextran sulfate sodium salt solution, 2 mg/mL RNase-free bovine serum albumin (BSA), 1 mg/mL E. coli tRNA, 2 mM vanadyl-ribonucleoside complex (VRC) in DEPC-treated H2O. VRC and tRNA should be thawed on ice.


14. FISH probes: Sets of ~18–40 bp oligonucleotide probes are designed and purchased from BioSearch Technologies and are specific to each RNA segment (Fig. 1b). In this chapter, we use probes specific for the PA and HA RNA segments of influenza virus, commercially labeled with Quasar 570 and 670, respectively (see Note 4).
15. Primary antibody: mouse anti-nucleoprotein. Store at −20 °C.
16. Secondary antibody: goat anti-mouse Alexa Fluor 488. Store at 4 °C.
17. 4′,6-Diamidino-2-phenylindole (DAPI): Aliquot and store at −20 °C.
18. Diamond Antifade Mountant: Store at −20 °C.
19. RNase decontamination solution.

2.2 Image Acquisition

1. Diffraction-limited fluorescent beads for determining the point spread function (PSF) of the microscope: 0.1 μm TetraSpeck beads.
2. Confocal microscope: A variety of confocal microscopes can be used and should be chosen based on the desired experimental parameters. One consideration for the appropriate microscope system is the available laser lines. For the purposes specified in this chapter, we use an Olympus FV1000 confocal microscope with commonly available laser lines: 405 nm to excite DAPI, 488 nm to excite Alexa Fluor 488, 559 nm to excite Quasar 570, and 635 nm to excite Quasar 670 (see Note 5).
3. Imaging software: Follow the manufacturer’s recommendations when choosing software compatible with your chosen microscope.

2.3 Image Analysis and Deconvolution

1. Huygens Professional (version 19.04; Scientific Volume Imaging B.V.).
2. Imaris (version 8.4.2; Bitplane A.G.).
3. MATLAB plugin (R2018a; MathWorks).

3 Methods

3.1 Sample Preparation for FISH and Immunofluorescence Staining

1. Grow MDCK cells on coverslips placed in wells of a 24-well plate to about 90% confluence (see Note 6).
2. Infect cells with influenza virus at the desired multiplicity of infection and duration.
3. Aspirate growth media and wash three times with PBS.
4. Fix the cells in 300–500 μL of fixative per well for 20 min at room temperature.


5. Rinse cells three times with PBS.
6. Permeabilize with pre-chilled (−20 °C) permeabilization buffer at 4 °C overnight. Samples can be stored for up to 7 days in 70% EtOH at 4 °C. Seal plates with Parafilm to prevent ethanol evaporation.

3.2 Multicolor Fluorescence In Situ Hybridization and Immunofluorescence

The following steps should be performed in a dark chemical fume hood to protect the light-sensitive fluorescently labeled probes and to mitigate the health risk posed by formamide. It is also important that all work is done in an RNase-free environment (i.e., pipette tips should be certified DNase/RNase-free and the workspace cleaned with RNase decontamination solution).

1. Prepare wash and hybridization buffers.
2. Aspirate the permeabilization buffer from the coverslips and rehydrate the cells with wash buffer for 5 min on a rocker at room temperature.
3. Hybridize the cells at 28 °C overnight in hybridization buffer containing a panel of fluorescently conjugated probes. Protect your samples from light (see Note 7).
4. Aspirate the hybridization buffer and wash three times with wash buffer for 5 min each on a rocker at room temperature. If no immunofluorescence staining is to be performed, skip to step 7.
5. Incubate coverslips with primary antibody diluted in wash buffer for 1 h on a rocker at room temperature.
6. Aspirate the antibody solution and wash coverslips three times with wash buffer for 5 min each on a rocker at room temperature.
7. Dilute DAPI in wash buffer with or without secondary antibody, depending on whether immunofluorescence is performed. Incubate coverslips for 1 h on a rocker at room temperature.
8. Aspirate the DAPI-containing solution and wash three times with wash buffer for 5 min each on a rocker at room temperature.
9. Pipette a sufficient volume of mounting medium to encompass the entire coverslip (typically 7–10 μL) onto a clean microscope slide and invert the coverslip onto the mounting medium without trapping air bubbles. Allow the medium to set overnight (see Note 8).

3.3 Determining the Point Spread Function of the Microscope

1. Clean a glass coverslip and slide by rinsing with ethanol and let dry completely.
2. Dilute TetraSpeck beads in ethanol and pipette 5 μL onto the center of the coverslip. The bead solution will wick to fill the surface area of the coverslip. Allow to dry completely (see Note 9).


3. Pipette 5 μL of mounting medium onto a glass slide and invert the coverslip onto the slide so that the beads are sandwiched between the slide and coverslip. Allow the medium to dry overnight at room temperature. Protect from light.
4. Image the TetraSpeck beads using the desired magnification and the step size required for Nyquist sampling. For our system, we use a 60× oil-immersion objective with an optical zoom of 4 and an image size of 1024 × 1024 pixels. With a step size of 0.17 μm, the resultant voxel size is ~50 × 50 × 170 nm.
5. Open the image file in Imaris.
6. Assess the chromatic aberration of each channel by first designating one channel as the reference channel; no adjustments will be made to this channel.
7. Empirically determine whether any adjustments are required in the x, y, or z dimensions to maximize overlap of the test channel with the reference channel (see Note 10).
8. Record these values to apply during image analysis.
9. Repeat for all other channels.

3.4 Image Acquisition

1. Identify a cell of interest using either the epifluorescence mode of the confocal microscope or a low-resolution scanning mode on the confocal.
2. For each channel, adjust the laser power and photomultiplier tube voltage and/or gain to optimize the signal-to-noise ratio (SNR). The SNR for each channel can be approximated by the signal intensity profile found in the lookup table (abbreviated LUT) (see Note 11).
3. Use fluorescence to determine the vertical range of the cell of interest and designate the number of z-slices required for Nyquist sampling (see the worked example after this list). For our system, we use a 60× oil-immersion objective with an optical zoom of 4 and an image size of 1024 × 1024 pixels. With a step size of 0.17 μm, the resultant voxel size is ~50 × 50 × 170 nm (see Note 12).
4. Image each channel sequentially by line to avoid bleed-through of fluorescence from one channel into another. Kalman line averaging of 2–4 will increase the SNR. A pixel dwell time of 2–4 μs is recommended for optimal resolution (see Note 13).
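To make the sampling criterion in steps 3–4 concrete, the short calculation below (a sketch in Python, not part of the original protocol) applies the resolution formula from Note 18 together with the factor-of-two sampling rule from Note 12 to the laser lines used here; the NA corresponds to the 60× oil-immersion objective described above.

```python
# Worked example of the Nyquist check described above (see Notes 12 and 18):
# lateral resolution limit R = wavelength / (2 * NA), and Nyquist sampling
# requires a pixel size of no more than R / 2.
def resolution_limit_nm(wavelength_nm: float, na: float) -> float:
    """Lateral resolution limit, R = lambda / (2 NA)."""
    return wavelength_nm / (2.0 * na)

na = 1.4  # 60x oil-immersion objective
for wavelength_nm in (488, 559, 635):  # excitation laser lines used here
    r = resolution_limit_nm(wavelength_nm, na)
    print(f"{wavelength_nm} nm: R = {r:.0f} nm, Nyquist pixel <= {r/2:.0f} nm")

# R is ~175-230 nm across these lines, so the ~50 nm x-y pixel size quoted in
# step 3 comfortably satisfies Nyquist sampling in the lateral dimensions.
```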

3.5 Image Deconvolution

1. Open the experimental image files in Huygens.
2. Confirm the microscope parameters used during imaging by selecting “Edit microscopic parameters” under the “Edit” tab.


3. Select an embedding medium from the dropdown menu from which to approximate the refractive index of the mounting medium used. ProLong Diamond is closest to glycerol, with a refractive index of 1.47.
4. Press “Set all verified” to confirm the microscopic parameters associated with the image file. Press “Accept” to return to the main menu.
5. To empirically determine a suitable background intensity value for each channel, select “Twin Slicer” mode under the “Visualization” tab.
6. Scan through the z-stack until a slice is found with relatively high background noise.
7. Drag the pointer across the image through both the background and the cell of interest. A graph of the intensity profile under the region of interest (ROI) will populate one panel of the Twin Slicer.
8. Use the intensity profile to choose a value reflective of the background signal for each channel.
9. To begin deconvolution, select “Deconvolution Wizard” under the “Deconvolution” tab. Press “Enter Wizard” to begin.
10. The next page will prompt the investigator to designate the point spread function. Import the file obtained in Subheading 3.3 to account for the optical diffraction of each laser.
11. Select any channel and enter the background signal determined in step 8.
12. Run the classic maximum likelihood estimation (CMLE) deconvolution with an SNR of 20 and no more than 40 iterations (an open-source sketch of this step is shown after this list).
13. Inspect the results of the deconvolution and adjust the background intensity value if necessary (see Note 14).
14. Repeat for all channels.
15. Save the deconvolved image as a .ics file (see Note 15).
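For readers without access to Huygens, the sketch below illustrates the same kind of operation (iterative, PSF-based deconvolution of a single channel) using the open-source Richardson–Lucy implementation in scikit-image. It is offered purely as an analogue of the CMLE step, not as the Huygens algorithm; file names and the background value are placeholders, and results will differ from Huygens output.

```python
# Hedged sketch of iterative deconvolution with an open-source tool. This is
# NOT the Huygens CMLE algorithm, only an analogous Richardson-Lucy approach.
# Assumes scikit-image and tifffile; file names and background are placeholders.
import numpy as np
import tifffile
from skimage import restoration

stack = tifffile.imread("cell_channel1.tif").astype(float)        # (z, y, x)
psf = tifffile.imread("measured_psf_channel1.tif").astype(float)  # bead PSF
psf /= psf.sum()                                                  # normalize

background = 100.0                      # per-channel value chosen as in step 8
stack = np.clip(stack - background, 0, None)

# The third positional argument is the iteration count (kept at <= 40, as
# above); clip=False preserves the original intensity scale.
deconvolved = restoration.richardson_lucy(stack, psf, 40, clip=False)
tifffile.imwrite("cell_channel1_decon.tif", deconvolved.astype(np.float32))
```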

3.6 Quantification of Colocalization Between Two or More Fluorescently Labeled Targets

3.6.1 Defining Cellular Boundaries

1. Open the .ics file in Imaris and select 3D View (see Note 16).
2. Correct for chromatic aberration by selecting “Channel Shift” under the “Image Processing” tab.
3. Uncheck all channels but one.
4. Enter the channel shift values determined for that channel in Subheading 3.3, steps 6–9, and press “Apply.”
5. Repeat for all channels.


Fig. 2 Image rendering for three-color colocalization in Imaris. (a) Imaris 3D View workspace icons. Use the Surfaces and Cells icons to render the cell and the Spots icon to render fluorescent foci. (b) Image analysis workflow using a representative image of an MDCK cell infected with influenza virus and stained by FISH and immunofluorescence. Left to right: unprocessed confocal image, image after deconvolution, nuclear and cell border surface rendering, spot rendering for quantification of colocalization. Scale bar is 7 μm

6. Define the nuclear surfaces: on the panel on the left-hand side when 3D View is selected, there are several icons to choose from (Fig. 2). Select the Surfaces feature (Fig. 2a, blue icon) to create a nuclear surface (Fig. 2b). Press the blue arrow on the bottom of the panel to proceed.
7. Select the source channel for DAPI (405 nm) in the surface construction wizard. Enter a smoothing detail of 0.25–0.5 μm. Press the blue arrow on the bottom of the panel to proceed.
8. All nuclear surfaces detected will populate the 3D View. Visually inspect these surfaces and adjust the Absolute Intensity threshold value using the slider in the surface construction wizard, if necessary. Press the green arrow on the bottom of the panel to finalize the nuclear surfaces.
9. Define the surface for the cell boundary of the cell of interest by deselecting “Volume” on the left-hand panel of the 3D View.
10. Create a second surface by selecting the Surfaces feature (Fig. 2a, blue icon) as in step 6.
11. Select “Skip automatic creation, edit manually.” Press the blue arrow on the bottom of the panel to proceed.
12. Under the “Contour” tab of the surface construction wizard, select the “Mode” tab and choose a “Drawing Mode”; we use either Time or Distance. Adjusting the parameters for vertex insertion will alter the frequency at which new vertices are inserted during drawing, thereby changing the resolution of the Surface.


13. Pan over to the right-hand side of Imaris and, under “Pointer,” make sure that “Select” is highlighted.
14. Press the “Draw” button at the bottom of the surface construction wizard. Using either the FISH signal or a cytoskeletal protein stain to define the cell boundary, a contour around the signal can be manually drawn on each image slice. The contours are then combined to render the 3D cell volume.
15. Repeat for each slice of the image by changing the Slice Position in the surface construction wizard.
16. When all slices are complete, press Create Surface in the surface construction wizard (see Note 17).

3.6.2 Channel Masking to Include or Exclude Subcellular Region of Interest

1. To restrict analysis to the cytoplasmic signal from the cell of interest, we use the nuclear surface as a mask.
2. Select the nuclear Surface created in Subheading 3.6.1, steps 6–8, from the left-hand panel.
3. Using the pointer, select the nuclear surface of the cell of interest. It will be highlighted yellow.
4. Select “Edit” from the icons that appear.
5. Under “Mask Properties,” select “Mask Selection.”
6. From the dropdown menu, choose any channel corresponding to those which must be segmented.
7. Check “Duplicate channel before applying mask.”
8. Under “Mask Settings,” check “Set voxels inside surface to:” and enter a value of zero. A new channel will be created with the nuclear signal eliminated.
9. Repeat for all remaining channels of interest.
10. Select the cell boundary Surface created in Subheading 3.6.1, steps 9–16, from the left-hand panel.
11. Select “Edit” from the icons that appear.
12. Under “Mask Properties,” select “Mask All.”
13. From the dropdown menu, choose any of the masked channels corresponding to those which must be segmented.
14. Check “Duplicate channel before applying mask.”
15. Under “Mask Settings,” check “Set voxels outside surface to:” and enter a value of zero. A new “Masked” channel will be created with any fluorescent signal outside of the cell boundary eliminated.
16. Repeat for all remaining Masked channels (a conceptual sketch of these two masking operations is shown after this list).
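The two masking operations above can also be expressed outside Imaris as simple boolean operations on voxel arrays. The sketch below is conceptual only; it assumes the channel and the nuclear and cell-boundary masks have been exported as same-sized TIFF stacks, and all file names are placeholders.

```python
# Conceptual sketch of the masking logic above using boolean voxel masks:
# zero the voxels inside the nuclear surface, then zero everything outside
# the cell boundary, leaving only cytoplasmic signal. File names are placeholders.
import tifffile

channel = tifffile.imread("channel_PA_vRNA.tif")              # (z, y, x) intensities
nuclear_mask = tifffile.imread("nuclear_surface_mask.tif") > 0
cell_mask = tifffile.imread("cell_boundary_mask.tif") > 0

cytoplasmic = channel.copy()
cytoplasmic[nuclear_mask] = 0   # "Set voxels inside surface to" zero
cytoplasmic[~cell_mask] = 0     # "Set voxels outside surface to" zero

tifffile.imwrite("channel_PA_vRNA_cytoplasmic.tif", cytoplasmic)
```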


3.6.3 Define Spots for all Channels to Be Analyzed

1. Quantitative analysis will utilize the doubly masked channels for cytoplasmic signal constructed in Subheading 3.6.2.
2. Under the 3D View panel, select “Volume.”
3. Select the “Statistics” icon from the menu that appears below.
4. Record the “Data Intensity Standard Deviation” values for all doubly masked channels created in Subheading 3.6.2. These will be used for spot thresholding.
5. Under the 3D View panel, select “Spots” (Fig. 2a, orange icon). Press the blue arrow on the bottom of the spot creation wizard to proceed.
6. Select a source channel (the doubly masked channel for a given vRNA segment).
7. Enter the dimensions in x and y for spot detection. We use 0.3 μm (see Note 18).
8. Select “Model PSF-elongation along Z-axis” and set the desired dimensions. We use 1 μm. Press the blue arrow to proceed.
9. Spots are generated by thresholding on the “Intensity center” value for a given channel. We set the “Intensity center” threshold for a given channel to 2× the standard deviation of that channel (recorded in step 4). Press the green arrow to proceed to building spots (see Note 19).
10. Repeat steps 5–9 for all other doubly masked channels to be analyzed to create spots from each channel.
11. Analyze spot colocalization. This analysis is performed using the “Colocalization of Spots” MATLAB extension. Imaris’s default package is designed for analysis of pairwise colocalization between two spot populations; hence, we provide a modified extension program that allows for colocalization analysis of up to four different spot populations. The modified extension, called “XTSpotsColocalizeFISH4.m,” can be found at https://github.com/Lakdawala-Lab/MatLab-Extensions. Upload the downloaded file to the Imaris MATLAB XTensions directory, usually located in the Bitplane Program folder housing the Imaris installation.
12. Under the 3D View panel, select any of the spots created in steps 1–10.
13. Select the Tools icon (red cog or flower shape) from the panel that appears below. Select “Colocalization of Spots” from the Tools menu.
14. Identify colocalization of spots using a distance threshold of 0.3 μm, the diffraction-limited pixel size (see Note 18; the underlying distance-threshold logic is illustrated in the sketch after this list). This analysis will report statistics for single (uncolocalized) spots and for two-color and three-color spots (two or three different spots colocalized, respectively).


Fig. 3 Quantification of colocalization. (a) Distribution of a subset of spots within a cell after analysis in Imaris. Distances are in relation to both the nuclear periphery and cell border, as determined by the nuclear and cell border Surfaces. (b) Proportion of nucleoprotein-positive spots that colocalize with one (blue), two (teal), or three vRNA spots (yellow) in fifteen different cells. Adapted from Jones et al. [10]

15. Import surfaces and spots into a Cell made in Subheading 3.6.1 by selecting the nuclear surfaces under the 3D View panel.
16. Using the pointer, select the nucleus of interest. The selected nucleus will turn yellow.
17. Under the 3D View panel, select the “Cells” icon (Fig. 2a, yellow icon) and manually edit the cell.


18. Select “Skip automatic creation, edit manually.” Press the blue arrow on the bottom of the panel to proceed.
19. Select “Import Surface to Cell.” Choose the cell border surface created in Subheading 3.6.1. Under “Import as,” choose “Cell.” Under “Selection,” choose “All.” Press OK.
20. Select “Import Surface to Cell.” Choose the nuclear surface created in Subheading 3.6.1. Under “Import as,” choose “Nuclei.” Under “Selection,” choose “Selection.” Press OK.
21. Select “Import Spots to Vesicles.” Select any of the colocalized spots created in steps 11–14. Enter the desired name to be associated with those spots. Under “Selection,” choose “All.” Press OK. Repeat for all other spots created (see Note 20).
22. Export the data, including quantification of colocalization and the absolute distances of each spot from the nuclear and cellular surfaces: under the 3D View panel, select the Cell built in steps 15–21.
23. Select the “Statistics” icon from the panel that appears.
24. Click on the icon at the bottom of the wizard depicting stacked floppy disks to export a Microsoft Excel document containing all statistics (see Note 21).
25. The values, including the proportion of colocalized and single vRNA-containing foci, can then be plotted. Alternatively, the distribution of spots (or vesicles) in relation to defined cellular features can also be examined (Fig. 3).
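As referenced in step 14, the sketch below shows the distance-threshold logic underlying spot colocalization, implemented with SciPy for illustration only; it is not the Lakdawala-Lab MATLAB extension, and it handles a single pairwise comparison. Spot centroids are assumed to have been exported in micrometres (for example from the Imaris Statistics tab), and file names are placeholders.

```python
# Illustrative pairwise spot colocalization by distance threshold (0.3 um),
# shown with SciPy; the Imaris/MATLAB extension above performs the multi-way
# version of this comparison. Input CSVs hold x, y, z centroids in micrometres.
import numpy as np
from scipy.spatial import cKDTree

spots_pa = np.loadtxt("spots_PA_xyz_um.csv", delimiter=",")  # shape (N, 3)
spots_ha = np.loadtxt("spots_HA_xyz_um.csv", delimiter=",")  # shape (M, 3)

threshold_um = 0.3
distances, _ = cKDTree(spots_ha).query(spots_pa, k=1)   # nearest HA spot per PA spot
colocalized = distances <= threshold_um

print(f"{colocalized.sum()} of {len(spots_pa)} PA spots lie within "
      f"{threshold_um} um of an HA spot ({100 * colocalized.mean():.1f}%)")
```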

4 Notes

1. The pH may drop in unbuffered aqueous solutions of paraformaldehyde, resulting in decomposition of the paraformaldehyde and formation of formaldehyde gas [11]. Buffered solutions of paraformaldehyde may be stored at 4 °C, protected from light, for up to 2 weeks.
2. Commonly available as a 50% solution, dextran sulfate solution is extremely viscous. A 20% solution is easier to manipulate and can be stored at room temperature for several months. Master mixes containing dextran sulfate should be made in excess of at least three additional reactions to account for pipetting loss.
3. For 50 mL of wash buffer: add 5 mL of 20× SSC and 5 mL of formamide to 40 mL of DEPC-treated water. Formamide deionizes RNA, thereby stabilizing it.


4. Ensure probe specificity for a particular vRNA segment by comparing base pair complementarity with the sequences of the other seven vRNAs, which are all in negative-sense orientation. Additionally, assess probe complementarity against positive-sense viral complementary/messenger RNA (cRNA/mRNA) and host mRNA. We exclude sequences that exhibit 10 or more base pairs of complementarity. Reconstitute FISH probes in Tris-EDTA (TE) buffer, pH 8, at 25 μM. Aliquot and store at −20 °C. We routinely use FISH probes conjugated to the following fluorophores: Alexa Fluor 488, Quasar 570, CAL Fluor Red 590, and Quasar 670. Probes conjugated to Alexa Fluor 488 are purchased as amine-terminal oligonucleotides and conjugated to the fluorophore using the Invitrogen Amine Labeling kit as per the manufacturer’s recommendations. Probes can be multiplexed for multicolor imaging provided the imaging system can achieve spectral separation of the utilized fluorophores. We have previously published an analysis of five-color FISH that included a nuclear DAPI stain in combination with four FISH probes targeting distinct viral RNA segments labeled with the following fluorophores: Alexa Fluor 488, Quasar 570, CAL Fluor Red 590, and Quasar 670 [5]. Recently, we have adapted this method to combine FISH with immunofluorescence, such that we can visualize three distinct vRNA segments in addition to viral nucleoprotein and DAPI [10].
5. If more than four fluorophores are desired, the chosen microscope must have specialized features, such as a pulsed white light laser excitation source and acousto-optical beam splitters (e.g., Leica SP8 confocal microscope) and/or spectral unmixing capabilities to distinguish between fluorophores excited by the same laser line (e.g., Nikon C2 confocal microscope). These systems are capable of spectral separation of two or more emission spectra, as we have previously published [10]. If these or other such advanced tools are used, investigators should verify the sensitivity and specificity of their imaging parameters using single-color controls.
6. Influenza virus replication in MDCK cells is well studied. However, the protocol described here is readily adapted to other cell types, and the cells appropriate for each study should be determined by the investigator. An alternative to glass coverslips is glass-bottomed chambered slides on which cells are grown, infected, processed for staining, and imaged directly on the slide itself. If glass-bottomed chambered slides are used for the experimental samples, they should also be used during the determination of the point spread function of the microscope in Subheading 3.3.


7. We empirically determine a suitable working stock concentration for each probe by diluting in TE buffer (typically in the range of 1:2.5–1:50). Working stocks may be stored at 4 °C for several months. Probes from the working stock are added to the hybridization buffer at 1:100 for a final concentration of 5–100 nM (a worked check of this arithmetic is given after these notes).
8. Allow the mounting medium to warm to room temperature to reduce its viscosity. Use a large-bore pipette tip to minimize bubbles. Do not vortex, as this may also create bubbles. If bubbles arise, they can be removed by brief centrifugation. Allow the medium to set on its own, as the application of pressure can crush or distort three-dimensional structures. Insufficient mounting medium can cause air bubbles to form at the edges of the coverslip; excess mounting medium may cause the coverslip to slide.
9. To determine an appropriate bead density, test a range of dilutions (typically 1:10,000–1:1,000,000). Serially dilute the beads to mitigate pipetting error with small volumes. The beads are quite small and can be difficult to locate under the microscope, so testing a range of bead densities will be helpful in locating the correct z-plane of the bead population. Diluting the beads in ethanol allows the preparation to evaporate quickly. The beads will adhere to the coverslip glass. Alternatively, beads can be placed on the surface of the coverslip on a hotplate set at 37 °C, which will reduce the drying time and help immobilize the beads on the coverslip.
10. Adjustments for chromatic aberration may be imperfect and should be applied sparingly. Avoid making adjustments to channels that produce overcorrections in any dimension (x, y, or z). Once determined, chromatic aberration adjustments may be applied to all subsequent experiments. In the event that the microscope undergoes maintenance (e.g., laser alignment or replacement), the steps in Subheading 3.3 should be repeated to ensure that these adjustments remain accurate for that imaging system.
11. A negative control, such as unstained cells, mock-infected and stained cells, or cells stained with secondary antibody but no primary antibody, will help determine the extent of nonspecific signal. It is recommended to perform this step using a cell other than the true cell of interest, as photobleaching may occur during scanning. This should be done with every experiment when establishing imaging parameters.
12. The Nyquist theorem specifies that a signal should be sampled at twice its frequency to be faithfully reproduced. In imaging, the sampling rate corresponds to pixel size and the frequency corresponds to structure size, with smaller structures having higher frequency. Therefore, the pixel size should be no more than half the size of the smallest structure to be resolved.


13. Note that with each additional scan, more photons reach the sample, thereby increasing the likelihood of photobleaching. Image appropriate controls with the same imaging parameters to assess nonspecific background.
14. Optimal background values will vary between microscopes, fluorophores, and experiments. Generally, sufficient deconvolution is achieved when the background value selected is a few orders of magnitude below the region of the intensity profile corresponding to background signal. Caution should be taken to avoid excessive deconvolution, which can distort or eliminate structures present in the raw image file.
15. For Imaris software version 9.6 and above, conversion of the .ics file to a .ims file is necessary.
16. Our chapter assumes a working knowledge of the Imaris image analysis software from Oxford Instruments. Basic tutorials are available on the Imaris website (https://imaris.oxinst.com/products/imaris-start) and more specific analysis features are also covered there (https://imaris.oxinst.com/tutorials).
17. The fluorescent signal may be dim in some slices. Use the “Display Adjustment” window to manually adjust the brightness of each channel, if necessary, while defining the cell border.
18. These values are determined by the limit of resolution (R), defined by the formula R = λ/(2NA), where λ is the excitation wavelength and NA is the numerical aperture of the objective used. For a 60× oil-immersion objective with an NA of 1.4, R varies between 175 and 230 nm for λ in the range of 488–647 nm.
19. This threshold is chosen based on the assumption that the signal intensity is binomially distributed, such that a 2× standard deviation threshold eliminates ~95% of all signal intensity [12]. Other threshold factors may be more appropriate for other applications.
20. Any type of “Spots” feature can be imported into the Cell. In our studies, we import the singles, doubles, triples, or quadruples. Specifically, we use the antibody staining specific for nucleoprotein to examine vRNP segments; thus, the 1-vRNA-containing spots are technically doubles (i.e., one vRNA colocalized with NP protein). Based on the specific research question, the inclusion of spots can be modified for further downstream analysis.
21. The statistical features to be exported can be customized in the “Preferences” section located under the “Edit” tab.
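The dilution arithmetic referenced in Note 7 can be checked quickly; the short sketch below (illustrative only, in Python) reproduces the quoted 5–100 nM final range from the 25 μM stock.

```python
# Worked check of Note 7: a 25 uM probe stock diluted 1:2.5-1:50 into a working
# stock and then 1:100 into hybridization buffer gives a 5-100 nM final range.
stock_uM = 25.0
for working_dilution in (2.5, 50.0):
    working_uM = stock_uM / working_dilution
    final_nM = working_uM / 100.0 * 1000.0   # 1:100 into buffer; uM -> nM
    print(f"1:{working_dilution:g} working stock -> {final_nM:g} nM final")
# Prints 100 nM for the 1:2.5 working stock and 5 nM for the 1:50 working stock.
```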


References

1. Paddock SW, Eliceiri KW (2014) Laser scanning confocal microscopy: history, applications, and related optical sectioning techniques. Methods Mol Biol 1075:9–47
2. Bayguinov PO, Oakley DM, Shih CC, Geanon DJ, Joens MS, Fitzpatrick JAJ (2018) Modern laser scanning confocal microscopy. Curr Protoc Cytom 85(1):e39
3. Miyashita T (2015) Confocal microscopy for intracellular co-localization of proteins. Methods Mol Biol 1278:515–526
4. Chou YY, Heaton NS, Gao Q, Palese P, Singer RH, Lionnet T (2013) Colocalization of different influenza viral RNA segments in the cytoplasm before viral budding as shown by single-molecule sensitivity FISH analysis. PLoS Pathog 9(5):e1003358
5. Lakdawala SS, Wu Y, Wawrzusin P, Kabat J, Broadbent AJ, Lamirande EW, Fodor E, Altan-Bonnet N, Shroff H, Subbarao K (2014) Influenza A virus assembly intermediates fuse in the cytoplasm. PLoS Pathog 10(3):e1003971
6. Nturibi E, Bhagwat AR, Coburn S, Myerburg MM, Lakdawala SS (2017) Intracellular colocalization of influenza viral RNA and Rab11A is dependent upon microtubule filaments. J Virol 91(19)

7. Bouvier NM, Palese P (2008) The biology of influenza viruses. Vaccine 26(Suppl 4):D49–D53
8. Arranz R, Coloma R, Chichon FJ, Conesa JJ, Carrascosa JL, Valpuesta JM, Ortin J, Martin-Benito J (2012) The structure of native influenza virion ribonucleoproteins. Science 338(6114):1634–1637
9. Moeller A, Kirchdoerfer RN, Potter CS, Carragher B, Wilson IA (2012) Organization of the influenza virus replication machinery. Science 338(6114):1631–1634
10. Jones JE, Le Sage V, Padovani GH, Calderon M, Wright ES, Lakdawala SS (2021) Parallel evolution between genomic segments of seasonal human influenza viruses reveals RNA-RNA relationships. eLife 10:e66525. https://doi.org/10.7554/eLife.66525
11. Helander KG (2000) Formaldehyde prepared from paraformaldehyde is stable. Biotech Histochem 75(1):19–22
12. Femino AM, Fay FS, Fogarty K, Singer RH (1998) Visualization of single RNA transcripts in situ. Science 280(5363):585–590

Chapter 3

Optimizing Long-Term Live Cell Imaging

Alex Lac, Austin Le Lam, and Bryan Heit

Abstract

Live cell microscopy has become a common technique for exploring dynamic biological processes. When combined with fluorescent markers of cellular structures of interest, or fluorescent reporters of a biological activity of interest, live cell microscopy enables precise temporally and spatially resolved quantitation of the biological processes under investigation. However, because living cells are not normally exposed to light, live cell fluorescence imaging is significantly hindered by the effects of photodamage, which encompasses photobleaching of fluorophores and phototoxicity to the cells under observation. In this chapter, we outline several methods for optimizing and maintaining long-term imaging of live cells while simultaneously minimizing photodamage. This protocol demonstrates the intracellular trafficking of early and late endosomes following phagocytosis using both two- and three-dimensional imaging, but it can easily be modified to image any biological process of interest in nearly any cell type.

Key words Live cell imaging, Fluorescence microscopy, Time-lapse, Photodamage, Photobleaching, Phototoxicity, Acquisition parameters, z-stack, Phagocytosis, Intracellular trafficking

1 Introduction

The advent of live cell fluorescence microscopy allows for the in-depth investigation of cellular and molecular dynamics and function. While the imaging of chemically fixed cells is relatively easy and can be used to investigate many biological questions, including quantification of cellular morphology and determination of the subcellular localization of molecules of interest, the fixation process kills the cells and halts all cellular activity, thereby precluding the study of dynamic processes [1]. Live cell imaging allows these dynamics to be observed directly but incurs two primary limitations that rarely affect fixed cell imaging. These limitations arise from the physical process central to fluorescence microscopy—the use of high-intensity excitation light to excite fluorophores in order to produce a detectable emission [2].

The first of these limitations is photobleaching. During excitation, a fluorophore absorbs a shorter wavelength (higher energy) photon, causing an electron in the fluorophore to enter an excited state.


Some of this energy is lost, after which the electron returns to the ground state through the emission of a longer wavelength (lower energy) photon—a process repeated so long as the fluorophore is continually exposed to excitation light [2]. However, the excited state of the fluorophore is sensitive to oxidation; therefore, exposure to prolonged or high-intensity illumination leads to the chemical inactivation of fluorophores [3, 4].

A similar phenomenon drives the second major limitation to live cell imaging—phototoxicity. While the excitation wavelengths used for live cell imaging are not sufficiently energetic to be directly ionizing, they can excite electrons in biomolecules in a manner similar to how a fluorophore is excited. As in fluorophores, these excited electrons are prone to oxidation, thereby accelerating oxidative damage to cells beyond what the intrinsic antioxidant systems can handle [3–5]. In addition to directly damaging biomolecules, excitation energy can directly drive the formation of reactive oxygen species (ROS) [5]. While the effects of ROS produced during metabolism can be diminished by several cellular detoxifying mechanisms, fluorescent illumination during live cell imaging can easily overwhelm these defensive processes [6].

For simplicity, we will refer to the combined effects of photobleaching and phototoxicity as “photodamage” in this chapter. Thus, during live cell imaging a researcher must balance excitation intensity, the duration of individual exposures, and the image acquisition rate (framerate) to limit photodamage while retaining sufficient signal strength and temporal resolution to quantify the processes under investigation. Further complicating matters, living cells also require specific environmental conditions and, therefore, many parameters including temperature, CO2 levels, pH, and osmolarity must be tightly controlled to maintain cells in their physiological state on the microscope. As such, achieving reproducible and representative data mandates that live cell microscopy approaches minimize light exposure and mimic normal environmental conditions to ensure that cells are kept alive in their physiological state throughout the entirety of the experiment’s duration.

2 Materials

2.1 J774A.1 Cell Culture and Transfection

1. Autoclaved (sterile) #1.5 thickness, 18 mm diameter circular glass coverslips.
2. 12-well (22.1 mm diameter wells) tissue culture plates.
3. T25 polystyrene tissue culture-treated flasks.
4. J774A.1 macrophage cell line (TIB-67, American Type Culture Collection).


5. Phosphate Buffered Saline (PBS): 137 mM NaCl, 10 mM Na2HPO4, 1.8 mM KH2PO4.
6. Dulbecco’s Modified Eagle Medium (DMEM), serum-free.
7. DMEM + 10% fetal bovine serum (FBS).
8. 5 mL pipettes.
9. Cell scrapers.
10. Hemocytometer.
11. Rab5A-GFP construct (Addgene #56417).
12. Rab7A-mCherry construct (Addgene #55127).
13. FuGENE HD transfection reagent.
14. 37 °C + 5% CO2 tissue culture incubator.

2.2 Preparation of IgG-Opsonized Beads

1. 5 μm diameter polystyrene/divinylbenzene (PS/DVB) microspheres, 10% solids (Bangs Labs).
2. Whole rat IgG, from serum (50 mg/mL stock solution).
3. PBS.

2.3 Preparation of Fluorescently Labeled Bacteria

1. BL21 competent E. coli.
2. Luria–Bertani (LB) bacterial culture medium.
3. Bacterial culture tubes.
4. Heat block or water bath heated to 70 °C.
5. Shaking bacterial culture incubator.
6. PBS.

2.4 Microscopy

1. Widefield or confocal microscope equipped with appropriate white light optics (differential interference contrast (DIC) or phase contrast), appropriate filter sets or laser lines (e.g., GFP (Ex: 490 ± 20 nm, Em: 525 ± 50 nm), RFP/mCherry (Ex: 555 ± 25 nm, Em: 605 ± 50 nm), and far-red (Ex: 645 ± 30 nm, Em: 690 nm long-pass)), an EM-CCD or CMOS camera (widefield) or a high-sensitivity confocal detector (PMT, GaAsP, etc.), a minimum of a 60× magnification objective lens, a heated and CO2-perfused microscope stage, and image-capture software with time-lapse capabilities.
2. Leiden chamber which fits 18 mm circular coverslips.
3. Imaging buffer: 150 mM NaCl, 5 mM KCl, 1 mM MgCl2, 100 μM EGTA, 2 mM CaCl2, buffered with 20 mM sodium bicarbonate (for stages perfused with 5% CO2) or 20 mM HEPES (for air-perfused stages), pH 7.4. Immediately before imaging add 1% (v/v) of 200 g/L sterile D-glucose and 2% (v/v) FBS. If imaging for more than 4 h, use the complete medium normally used to grow the cells.


4. Rutin hydrate (optional).
5. FIJI image analysis software [7].

3 Methods

3.1 Preparation of J774A.1 Macrophages

Live cell fluorescence microscopy is a useful technique for exploring molecular dynamics in cultured primary or immortalized cells. Phagocytosis—the engulfment, killing, and degradation of microbes by phagocytes—is a dynamic process which starts with the recognition and internalization of a microbe into a plasma membrane-derived vacuole, termed the phagosome [8]. The phagosome then undergoes a vesicular trafficking process in which early endosomes, late endosomes, and lysosomes sequentially fuse with the phagosome, thereby delivering the hydrolytic enzymes that kill and degrade the engulfed microbe [9, 10]. The resulting microbial antigens are then loaded onto Major Histocompatibility Complex class II (MHC II) for presentation to the adaptive immune system [11, 12]. This dynamic process occurs over time-scales ranging from tens of seconds (engulfment of a microbe into a phagosome) [13, 14], to minutes (delivery of degradative enzymes through fusion of endosomes and lysosomes with the phagosome) [15, 16], to hours (degradation of the microbe) [17], to days (presentation of microbe-derived antigens to other immune cells) [18]. Each of these steps can be quantified using live cell microscopy.

For these assays, we are using J774A.1 immortalized mouse macrophages, but a range of macrophage and dendritic cell lines, as well as cultured primary cells, can also be used for phagocytosis experiments. Various approaches can be used to fluorescently label structures of interest, including chemical dyes that bind to known subcellular targets (e.g., DRAQ5 labeling of DNA) and transgenes bearing either intrinsically fluorescent tags such as green fluorescent protein (GFP) or tags that covalently bind to cell-permeant fluorophores (e.g., HaloTag) [19, 20]. These transgenes can be introduced into cells through a range of transfection methods, including electroporation, lentiviral delivery, or chemical transfection reagents.

Here, we demonstrate the use of live cell imaging to track the vesicular trafficking of the early endosome marker Rab5-GFP and the lysosome marker Rab7-mCherry, as these compartments fuse with phagosomes to deliver the hydrolytic enzymes that then degrade the pathogen [21, 22]. While focusing on phagocytosis, the methods described in this chapter can be applied to investigate other dynamic cellular events in other cell types.

3.1.1 J774A.1 Cell Culture

1. Culture J774A.1 cells in 5 mL of DMEM + 10% FBS in T25 flasks at 37 °C + 5% CO2.


2. Once the culture reaches 80% confluency, passage the cells by rinsing with 1 mL of sterile PBS, followed by the addition of 5 mL of fresh DMEM + 10% FBS. Using a cell scraper, gently scrape the cells into suspension.
3. Using the highest speed on your pipette-aid, draw the entire volume vigorously through the pipette 3 times to break up any cell clusters.
4. Dilute the cells 1:5 by adding 1 mL of suspended cells to a new T25 flask containing 4 mL of fresh prewarmed DMEM + 10% FBS. Incubate cells at 37 °C + 5% CO2.

3.1.2 J774A.1 Transfection

1. Aseptically place individual 18 mm coverslips into the wells of a 12-well plate.
2. Add 1 mL of fresh DMEM + 10% FBS into each coverslip-containing well and place the plate into a 37 °C + 5% CO2 incubator to prewarm the plate and medium.
3. Using a cell scraper, scrape a >60% confluent culture of J774A.1 cells into suspension as per Subheading 3.1.1. Count the cells on a hemocytometer.
4. To each coverslip/medium-containing well, add 2.5 × 10⁵ cells and incubate for 18–24 h at 37 °C + 5% CO2 (see Note 1).
5. Prepare the DNA for transfection as per the manufacturer’s instructions. For each well to be transfected, add 150 μL of serum-free DMEM to a sterile 1.5 mL microfuge tube, to which 3.3 μg of the desired DNA construct(s) is added (see Note 2). If using more than one construct, add an equimolar amount of each construct for a total of 3.3 μg (e.g., 1.6 μg of Rab5-GFP + 1.6 μg of Rab7-mCherry DNA; see the worked example after this list). Briefly vortex the tube to mix (see Notes 3 and 4).
6. Add 10 μL of FuGENE HD to the DNA solution and flick the tube 15–20 times to mix.
7. Incubate at room temperature for 15 min before adding dropwise to the cells. Gently shake the plate to evenly distribute the transfection mixture.
8. After 18–24 h, replace the medium with fresh DMEM + 10% FBS (see Note 5). Incubate an additional 12–24 h at 37 °C + 5% CO2 to allow for cell recovery and transgene expression.
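Step 5 calls for an equimolar mix of constructs totalling 3.3 μg of DNA. Because molar amount scales with plasmid length, equimolar only equals equal mass when the plasmids are of similar size; the sketch below shows the general calculation. The plasmid sizes used are hypothetical placeholders, not the actual sizes of the constructs listed above.

```python
# Split a total DNA mass so that each plasmid contributes an equal number of
# moles: mass_i is proportional to plasmid length. Sizes below are hypothetical.
def equimolar_masses_ug(total_ug, plasmid_sizes_bp):
    total_bp = sum(plasmid_sizes_bp.values())
    return {name: total_ug * bp / total_bp for name, bp in plasmid_sizes_bp.items()}

sizes_bp = {"Rab5-GFP": 5500, "Rab7-mCherry": 6100}   # placeholder plasmid sizes
print(equimolar_masses_ug(3.3, sizes_bp))
# -> ~1.6 and ~1.7 ug here; for similarly sized plasmids this reduces to
#    roughly equal masses, as in the example given in step 5.
```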

3.2 Preparation of Phagocytic Target Mimics

Fluorescently labeled bacteria or IgG-opsonized pathogen mimics (beads) are used in phagocytosis assays as phagocytic targets, with the choice of target determined by the imaging and biological needs of the experiment. The large size of the beads allows them to be detected via white-light microscopy and provides better resolution of fluorescent markers on/in the phagosome, but has the drawback that these beads cannot be degraded by the macrophage.


In comparison, bacteria need to be labeled with a fluorophore to allow for their detection after phagocytosis, and their small size makes localization studies more difficult, but bacteria are physiologically relevant targets that will be degraded, processed into antigens, and presented to adaptive immune cells. The preparation of both bead-based mimics and fluorescent bacteria is described below.

3.2.1 Preparation of IgG-Opsonized Beads

1. Vortex the PS/DVB microspheres to ensure that they are evenly suspended and then transfer 10 μL into a 1.5 mL microfuge tube containing 1 mL of PBS. Briefly vortex to mix.
2. Centrifuge the beads at 5000 × g for 1 min and remove the supernatant.
3. Resuspend the beads in 100 μL of PBS.
4. Add 10 μL of whole rat IgG to the tube. Incubate for 60–90 min at room temperature on a rotator, or overnight at 4 °C.
5. Wash the microspheres by adding an additional 1 mL of PBS and centrifuging at 5000 × g for 1 min. Remove the supernatant.
6. Resuspend the microspheres in 100 μL of PBS.

3.2.2 Preparation of Fluorescently Labeled E. coli BL21

1. Prepare 5 mL of LB medium in a sterile bacterial culture tube.
2. Inoculate the LB medium with BL21 bacteria and grow overnight to stationary phase at 37 °C with 200 RPM shaking.
3. Transfer 100 μL of overnight culture into a 1.5 mL microfuge tube. Heat-kill bacteria by incubating at 70 °C for 10 min.
4. Rinse the cells twice by adding 1 mL of PBS and centrifuging at 3500 × g for 30 s.
5. Remove the supernatant and resuspend in 100 μL of PBS. Add 0.5 μL of Cell Proliferation Dye eFluor 670 (see Note 6).
6. Incubate at room temperature for 20 min, protected from light.
7. Quench excess dye by adding 900 μL of LB medium into the tube, followed by a 3 min incubation at room temperature.
8. Pellet cells by centrifuging at 3500 × g for 30 s.
9. Resuspend the pellet in 100 μL of PBS. Store at 4 °C and protect from light until needed. Bacteria can be used for up to 5 days after preparation.

3.3 Optimizing Live Cell Imaging

As photodamage is a major limitation during live cell imaging, it is important to design your experiment and acquisition parameters to limit these effects. A significant but underappreciated issue is the presence of photosensitizers, such as riboflavin and pyridoxal, in many tissue culture media formulations [23]. Using media


Fig. 1 Effect of Photoprotectants and Media Formulation on the Photostability of Cytosolic GFP. Rab5-GFP was expressed in J774A.1 cells and exposed to high-intensity (~3 W) excitation in medium containing either photosensitizers (DMEM), photosensitizers and the photoprotectant rutin (DMEM + Rutin), or in medium formulated without photosensitizers (Imaging Media). (a) Representative images of the photostability of Rab5-GFP in different types of medium. (b) Quantification of green fluorescence intensity over time, with brightness normalized to the intensity in each cell at the first timepoint. In DMEM, GFP had a half-life of 21.13 s; this increased to 35.95 s with the addition of rutin and further to 54.80 s in imaging media. Data is presented as mean intensity ± SD and is representative of a minimum of 5 cells. Scale bars are 10 μm

formulated without these photosensitizers, and/or the addition of photoprotectants such as the plant flavonoid rutin, has been shown to improve photostability [23]. As an example, cytosolically expressed Rab5-GFP photobleached rapidly in riboflavin- and pyridoxal-containing DMEM, whereas the addition of rutin to DMEM, or replacing DMEM with imaging buffer lacking these photosensitizers, greatly increased the half-life of GFP under identical imaging conditions (Fig. 1).

3.3.1 General Considerations: Signal Strength and Temporal Resolution

Beyond photodamage are issues arising from focal drift, temporal resolution, and image quality during acquisition [24, 25]. Imaging a sample over time results in cumulative damage in the form of photobleaching and phototoxicity. In essence, there is a finite "budget" of excitation photons that can be used without undue photodamage to the sample [26]. When performing live cell imaging, this budget must be allocated over three categories: signal strength (determined by the intensity and duration of excitation used to capture a single timepoint), the rate of image capture, and the duration of the experiment. The strength of a signal depends on several factors, including the intensity and duration of excitation, the brightness of the fluorophore (i.e., how efficiently a fluorophore converts excitation energy to fluorescence), label density, and


the sensitivity of the detector. Typically, image acquisitions using longer exposures at lower excitation intensity cause less photodamage than shorter exposures at higher intensity [3, 27]. Many chemical fluorophores and fluorescent proteins are now available, allowing brighter fluorophores to be selected for most applications (see Note 7). While it may be tempting to overexpress a fluorescent protein to increase label density, it is important to ensure that this overexpression does not lead to aberrant function or localization of the protein. Signal strength can also be improved at the detector. If imaging with an electron-multiplying CCD (EMCCD) camera, or with a confocal PMT or GaAsP detector, the detector gain (electron-multiplying gain in the case of an EMCCD) can be increased to amplify the signal [28]. While this can greatly increase signal brightness for a given exposure time and excitation intensity, it also tends to increase noise in the image, typically placing a practical limit on the gain that can be applied before image quality is compromised. In addition, pixels can be binned (i.e., the signal in neighbouring pixels summed together) to create a larger and brighter pixel [29]. While this can greatly increase signal strength without adding additional noise, the resulting summation of pixels reduces the resolution of the image; for example, the smallest binning option (2 × 2 pixel binning) reduces the number of pixels in the image fourfold.

Optimizing these exposure settings is crucial for live cell imaging, as the excitation energy used to acquire each timepoint determines the rate of photodamage and, therefore, the number of images that can be acquired before the cumulative damage to the sample precludes further imaging. The other two factors affecting the photon budget, temporal resolution and experimental duration, are tightly linked to, and dictated by, the exposure settings. At a given exposure setting, there will be a set number of images that can be collected before photodamage prevents further imaging. As such, increasing the temporal resolution (i.e., shortening the time between images) will reduce the maximum possible experimental duration, while decreasing the temporal resolution will enable longer experiments.

Balancing temporal resolution and experimental duration can be challenging and, in some cases, may not be possible. When these needs cannot be met, it is usually necessary to divide the experiment into discrete time periods and to image these periods in separate experiments. For example, when imaging phagocytosis it is not uncommon to image the engulfment of the phagocytic target, which typically lasts only a few minutes, at a high frame rate, and then to image the degradation of the phagocytic target in a separate experiment at a lower frame rate. By dividing the process into temporally separated segments, the phagocytic process can be imaged in its totality while maintaining the necessary temporal resolution during the different stages of the process.
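As a concrete illustration of the binning described above, the following minimal numpy sketch (not part of the protocol) sums each 2 × 2 block of pixels, roughly quadrupling the signal per (larger) pixel while halving the resolution in x and y.

```python
# Minimal sketch of 2 x 2 pixel binning with numpy; image values are synthetic.
import numpy as np

def bin_2x2(image: np.ndarray) -> np.ndarray:
    """Sum each 2 x 2 block of a 2D image (height and width must be even)."""
    h, w = image.shape
    return image.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

rng = np.random.default_rng(0)
frame = rng.poisson(lam=5, size=(512, 512)).astype(np.uint16)
binned = bin_2x2(frame)
print(frame.shape, "->", binned.shape)        # (512, 512) -> (256, 256)
print(frame.mean() * 4, "~", binned.mean())   # mean signal per pixel is roughly 4x higher
```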


3.3.2 General Considerations: Maintaining the Focal Plane and Sample Integrity

The long duration of some live cell experiments, combined with mechanical movement within the microscope and thermal expansion driven by changes in temperature, can make it difficult to maintain the desired focal plane during an experiment. Many microscopes come equipped with adaptive focus systems that actively monitor the distance between the objective lens and the sample and move the focus should this distance change [25]. Long-term imaging all but requires this feature, and if available, it should be used. For mammalian cells, a heated stage and lens warmer are required to maintain physiological temperatures, and for imaging lasting more than a few hours, a system to perfuse humidified air + 5% CO2 is recommended to maintain the pH and osmolarity of the medium. If used, the heated stage and lens warmer should be prewarmed at least 30 min prior to imaging to ensure that temperature equilibrium is reached and to limit focal drift caused by thermal expansion of the microscope. If available, imaging multiple regions of the sample using tiling or point-visiting features can greatly increase the number of cells observed in each experiment, thereby improving the likelihood of capturing rapid or rare biological events and increasing the amount of data collected.

3.3.3 Special Consideration: 3D Live Cell Imaging

Most live cell imaging experiments are performed using a single focal plane, but in some cases, three-dimensional imaging of cells is required to capture the biological process under investigation. This is typically achieved using z-stacking, where multiple images at different focal points in the sample, at the same x- and y-location, are captured at each timepoint. These can then be combined to create a composite 3D image of the cell. The use of this form of imaging with live cells can be challenging, as it greatly increases the exposure of the sample to excitation light during each timepoint. This issue can be compounded by the time it takes to capture multiple z-slices at a single timepoint, with cell movement during the acquisition of the z-stack introducing motion artifacts [30, 31]. However, with appropriate attention to optimizing imaging conditions, z-stacking can be used with live cell imaging to generate a 3D time series (Fig. 2) [32, 33]. New imaging technologies, such as light-sheet microscopy, have significantly improved our ability to capture live cell 3D images with reduced photobleaching. This imaging modality is described further in Chapter 11.
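For reference, the per-timepoint maximum-intensity z-projection often used to display such data (as in Fig. 2) can be sketched in a few lines of Python. This is an illustrative example only, not part of the protocol, and the array dimensions used below are hypothetical.

```python
# Minimal sketch: flatten a 3D time series (T x Z x Y x X) into maximum-intensity
# z-projections, one per timepoint. Data here are synthetic stand-ins.
import numpy as np

def max_project(stack_tzyx: np.ndarray) -> np.ndarray:
    """Collapse the z-axis of a T x Z x Y x X array by taking the per-pixel maximum."""
    return stack_tzyx.max(axis=1)

# Synthetic example: 30 timepoints, 11 z-slices of 256 x 256 pixels
# (roughly matching a 5 um stack acquired at 500 nm spacing).
rng = np.random.default_rng(1)
movie = rng.integers(0, 4096, size=(30, 11, 256, 256), dtype=np.uint16)
projected = max_project(movie)
print(projected.shape)  # (30, 256, 256)
```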

3.4 Configuring Acquisition Parameters for Imaging Phagocytosis

1. Prewarm the microscope stage and lens to 37 °C and perfuse with 5% CO2.
2. Mount the coverslip containing transfected cells into a prewarmed Leiden chamber.
3. Fill the imaging chamber with the appropriate volume of prewarmed medium (see Notes 8 and 9), and transfer the Leiden


Fig. 2 3D Live Cell Recording of Endosome and Lysosome Intracellular Dynamics. Live cell fluorescence microscopy was performed to observe the localization of Rab5-GFP (early endosomes) and Rab7-mCherry (late endosomes and lysosomes) expressed in J774A.1 macrophages. A 5 μm thick z-stack was captured at 4-min intervals with 500 nm spacing and deconvolved in Leica Application Suite X using an iterative blind deconvolution algorithm. The image is presented as a maximum-intensity z-projection of each timepoint. Scale bar is 10 μm

chamber to the prewarmed stage of the microscope with a 60× or higher magnification objective lens in place.
4. Using the red channel, identify a transfected macrophage (see Note 10).
5. Using this cell, configure the microscope's acquisition software to capture a time series with 4 min intervals between timepoints, with an experimental duration of 2 h. At each timepoint acquire a white light image (DIC or phase contrast) and images of all fluorescent labels (e.g., green channel for Rab5, red channel for Rab7, and far-red channel if using fluorescently labeled bacteria) (see Notes 11 and 12).
6. Optional: tiling or point-visiting can be used with slower acquisitions to collect images from multiple regions in the sample. If available, configure adaptive focus control settings to ensure the desired focal plane is maintained throughout the experiment.
7. If collecting z-stacks: (1) find the focal plane corresponding to the bottom of the cell and set this point as the beginning of the stack; (2) find the focal plane corresponding to the top of the cell; if the cell is expected to change shape during acquisition, raise the focal plane 1–2 μm above this point to account for any thickening of the cell, and set this point as the end of the stack; (3) adjust the number of steps and z-step sizes to ensure that


images are captured with appropriate spacing in the z-axis. Generally, larger z-step sizes (and therefore fewer steps) are preferred to prevent overexposure to light and photobleaching (see Note 13).
8. Carefully add 10 μL of IgG-opsonized beads prepared in Subheading 3.2.1, or 25–50 μL of fluorescently labeled bacterial suspension prepared in Subheading 3.2.2, into the Leiden chamber. Using a pipette set to ~1/4 the volume of the Leiden chamber, carefully mix the medium in the Leiden chamber to ensure an even distribution of the beads or bacteria. Check to make sure that the focal plane has not shifted (see Note 14).
9. Monitor the acquisition. If necessary, exposure settings, the focal plane, image acquisition rate, and other imaging settings can be adjusted during the experiment (see Note 15).
10. Once imaging is complete, export the data in a format that can be read by image analysis software.

3.5 Quantification of Localization of Fluorescent Markers

Many forms of image quantification can be performed on live cell images, including approaches such as kymographs or particle tracking to visualize movement, intensity or ratiometric measurements to quantify localization, colocalization analyses, and morphological measurements. These measurements can be more challenging than with fixed-cell images, as factors such as photobleaching can complicate intensity-based measurements. We cannot cover all forms of analyses here, but as an example, we will illustrate how to use FIJI/ImageJ to perform a photobleaching-corrected quantification of Rab5 and Rab7 recruitment to pathogen mimic-containing phagosomes (Fig. 3a). More advanced forms of live cell image analyses can be found in Chapter 19 of this book.
1. Import the saved image file into FIJI.
2. Separate the colour channels using the "Split Channels" command found under the "Image → Color" menu.
3. Check the green and red channels for photobleaching. If present, it will be apparent as a dimming of the intensity of the green or red signal in the transfected cell. Generally, photobleaching correction is only needed if more than 10% of the signal is lost over the experiment.
4. If a photobleaching correction is required, select the channel to be corrected and run the "Bleach Correction" command under the "Image → Adjust" menu. We recommend the "Histogram Matching" mode of bleach correction. Repeat this correction on any channel where it is required.
5. Ensure that only "Mean gray value" is selected under the "Analyze → Set Measurements" command.


Fig. 3 Quantification of Rab5 and Rab7 Localization to Phagosomes. Live cell fluorescence microscopy was used to track the localization of Rab5-GFP and Rab7-mCherry to phagosomes formed following the phagocytosis of IgG-coated PS/DVB beads. Data is presented where 0 min is the timepoint when the bead is first fully engulfed by the macrophage. (a) Images of Rab5 and Rab7 on a maturing phagosome (arrowhead and insets). (b) Normalized intensity of Rab5 and Rab7 on the phagosome indicated in panel A. Scale bar is 10 μm

6. On the first timepoint where a phagosome has formed, identified as a fully engulfed bead, use the "Image → Duplicate" command. Duplicate only the current timepoint and only the channels you wish to quantify (e.g., green/Rab5 and red/Rab7 channels; see Note 16).
7. On the duplicate image, use the oval (circle) selection tool on the ImageJ/FIJI toolbar to draw a region of interest (ROI) around the phagosome. Add this ROI to the ROI Manager using the "Add to Manager" command under the "Edit → Selection" menu. Repeat this for all phagosomes in the image.
8. Using the oval selection tool, select a cell-free region and add this ROI to the ROI Manager as a measurement of the background fluorescence in the image.
9. In the ROI Manager, select "Multi Measure" under the "More" button. In the pop-up, select the "Measure all slices" and "One row per slice" options, then click "OK". This will measure the mean intensity inside each selection in each channel, placing the data for each channel in a separate row and each ROI in a separate column.
10. Copy this data to a spreadsheet. Subtract the background intensity from the intensity of the fluorescent markers in each phagosome.
11. Close the duplicate image, and then move to the next timepoint in the original time series. Duplicate the timepoint as described in step 6, above. Click on the first ROI in the ROI Manager and adjust its position and size to keep the


same phagosome selected. Update the selection by clicking the "Update" button in the ROI Manager. Repeat this process for all selections, and add additional selections should new phagosomes form. Do not forget to add a background ROI for each timepoint.
12. Repeat steps 9 and 10 to quantify the intensity of Rab5 and Rab7 on phagosomes in the current timepoint, then repeat steps 11 and 12 for all subsequent timepoints. Ensure that the intensity values for each phagosome are kept separate so that each phagosome can be analyzed individually.
13. For individual phagosomes, data can be plotted simply as intensity (Fig. 3b).
14. If multiple phagosomes are to be averaged, intensity and time normalization may need to be performed. If intensity normalization is required, for each phagosome, normalize the Rab5 intensity data as i_t = (I_t − min_τ) / (max_τ − min_τ), where i_t is the normalized Rab5 intensity at time t, I_t is the measured Rab5 intensity at time t, and min_τ and max_τ are the minimum and maximum Rab5 intensities measured on that phagosome, respectively. This will scale the intensity of Rab5 on the phagosome to a range of 0–1. Normalize the Rab7 signal using the same approach.
15. There should be no need to perform a time normalization if a well-defined morphological or other hallmark was used to define when intensity measurements were started (see Note 16). However, if a clear starting point is missing, an easy-to-identify hallmark should be chosen (e.g., the first timepoint when Rab7 intensity matches or exceeds Rab5 intensity on the phagosome), and the timing of each phagosome's intensity data adjusted such that this event is set as t = 0.
16. Once intensity and time have been normalized, average the Rab5 and Rab7 intensities at each timepoint across all phagosomes in the sample. This data can be plotted as mean ± SD or can be used as a single repeat in an experimental series.
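The normalization and averaging in steps 14–16 can equivalently be performed outside a spreadsheet. The numpy sketch below is illustrative only; the intensity values are hypothetical and are assumed to be background-subtracted and aligned so that t = 0 corresponds to full engulfment.

```python
# Minimal sketch of steps 14-16: min-max normalize each phagosome's intensity trace
# to the 0-1 range, then average the normalized traces across phagosomes.
import numpy as np

def normalize_trace(intensity: np.ndarray) -> np.ndarray:
    """Scale a 1D intensity trace I_t to i_t = (I_t - min) / (max - min)."""
    lo, hi = intensity.min(), intensity.max()
    return (intensity - lo) / (hi - lo)

# Hypothetical data: rows = phagosomes, columns = timepoints (background-subtracted).
rab5 = np.array([[120.0, 340.0, 410.0, 260.0, 150.0],
                 [ 90.0, 280.0, 300.0, 210.0, 100.0]])

normalized = np.vstack([normalize_trace(trace) for trace in rab5])
mean_trace = normalized.mean(axis=0)          # mean normalized intensity per timepoint
sd_trace = normalized.std(axis=0, ddof=1)     # SD across phagosomes
print(mean_trace, sd_trace)
```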

4 Notes

1. If using other cell types, plate cells at a density such that they will be 50–60% confluent at the time of transfection. Typically, for 18 mm coverslips, this requires between 2.5 × 10^5 and 1 × 10^6 cells/coverslip.
2. J774A.1 cells and other macrophage cell lines can be difficult to transfect. It is recommended to use transfection reagents that are specifically designed for macrophages or hard-to-transfect


cell lines. In our experience, FuGENE HD works well for most murine and human macrophage cell lines.
3. As some transfection reagents can increase cell permeability, it is recommended to refrain from adding antibiotics/antimycotics to transfected cells as they can be toxic to permeabilized cells.
4. When possible, use fluorophores that are excited at longer wavelengths (e.g., red and far-red fluorophores) as they are often more resistant to photobleaching, and their excitation wavelengths cause less phototoxicity.
5. Replacing the media of transfected cells reduces potential cytotoxicity and altered cellular behaviour due to the transfection reagent.
6. Many fluorophores can be used to track bacteria used in phagocytosis assays. Covalent labels such as proliferation dyes or protein-reactive dyes are preferred, as noncovalent reagents (e.g., DNA dyes such as DAPI or DRAQ5) will quickly disperse following killing of the bacteria.
7. For a database of genetically encoded fluorophores with information on their brightness, see the fluorescent protein database (www.fpbase.org) [20].
8. If your imaging chamber lacks CO2 perfusion, medium can be buffered with HEPES instead of sodium bicarbonate in order to maintain physiological pH while imaging in air.
9. For short-term imaging (

... >5 different fluorophores in the one laser system, it is possible to image the same animal with ease ... four different channels, with one often occupied by second harmonic emission

Sample conservation:
Confocal: Due to out-of-focal-plane excitation, using multiple lasers of shorter wavelengths can increase the risk of photobleaching of fluorophores (a critical consideration for long-term imaging experiments).
Multiphoton: Since multiphoton lasers only excite the sample at the focal plane, out-of-focus photobleaching is not an issue. However, if the laser power is too high, the sample/tissue itself can be "burnt" and destroyed^a

^a Sequential scanning refers to the separation of lasers and detectors in time, allowing the user to choose which laser lines and detectors are active at any one time, allowing for better separation of fluorescence spectra and more accurate, better-resolved images

The following method will highlight the use of both confocal and multiphoton microscopy in a wounded skin model in mice [14, 15]. This example will assist readers in troubleshooting the technique of IVM, from surgical expertise to optimal imaging methods for various experimental questions.


2 Materials

2.1 Skin Punch Biopsy Wound Model

1. Electric animal fur clippers.
2. 70% Ethanol.
3. Isoflurane gas anesthetic machine with a nose-cone adapter.
4. 2 mm biopsy punches.

2.2 Intravital Microscope

1. Inverted confocal fluorescent microscope (preferably resonant scanning), with spectral detection, AND/OR alternate imaging platform utilizing a multiphoton pulsed laser and nondescanned detectors (see Note 1).
2. Computer with appropriate image capture and analysis software (see Note 2).
3. Heated microscope stage.
4. Glass coverslip (thickness 0.12–0.19 mm), sized to fit the window on the imaging stage.
5. Blenderm® tape.

2.3 Extended Surgical Anesthetic Maintenance

1. Ketamine.
2. Xylazine.
3. Sterile saline.
4. 1 mL slip tip syringe.
5. Polyethylene tubing (Ø 0.28 mm, 15–20 cm length).
6. 30 G × 1/2 in. needles (2).
7. Heat lamp.
8. 70% Ethanol.
9. Gauze.
10. Transpore® tape.
11. Cotton swabs.

2.4 Surgical Tools/Materials

1. Sterile tissue scissors.
2. Sterile forceps (×2).
3. Needle drivers.
4. Surgical stage.
5. Sutures (PERMA-HAND silk 5-0, c-31 reverse cutting).
6. Small vessel cautery iron.
7. Surgical lamp.
8. 35 mm Petri dish.
9. Glass slides (25 × 75 mm).
10. Phosphate-buffered saline.


11. Mineral oil.
12. 70% Ethanol.
13. Kimwipes® tissue or gauze.

2.5 Labelling Antibodies and/or Contrast Agents

1. 1 mL slip-tip syringe.
2. 2–20 μL pipettor.
3. Pipette tips.
4. Microfuge tubes.
5. Fluorescently conjugated antibodies.
6. Vessel contrast dye (e.g., FITC-conjugated albumin).
7. Sterile PBS.

3 Methods

3.1 Wound Generation

1. Set up the small animal veterinary isoflurane machine as per the manufacturer's directions.
2. Place the mouse into a plastic induction chamber attached to the isoflurane machine.
3. Set the oxygen flow rate to 0.5–1 L/min.
4. Set the isoflurane vaporizer to 5% to induce anesthesia; reduce to 2% for maintenance.
5. Quickly remove the anesthetized mouse from the induction chamber, replace the induction chamber with a breathing tube attached to a nose cone, and place the nose cone over the mouse's nose and mouth. Ensure the proper anesthesia plane is maintained.
6. Remove fur on the hind flank of the mouse using electric animal clippers.
7. Wipe the newly shaved area with 70% ethanol to remove loose hair and to sterilize the wound site.
8. Reposition the mouse onto its right side, make a pinch of skin just lateral to the right side of the mouse's spine, and hold it against a solid surface (Fig. 1i–iii).
9. Using the 2 mm biopsy punch, place half of the punch over the edge of the skin fold and press down firmly, punching through both layers of the folded skin.
10. Once the skin is unfolded, a full 2 mm circular biopsy wound will remain in the right back flank of the mouse.
11. Let the mouse slowly recover from the anesthetic in a clean cage, sternum down, under a heat lamp. Observe the mouse until it becomes active again (see Note 3).


Fig. 1 Schematic of preparation of a skin biopsy punch model for inverted confocal resonant-scanning intravital microscopy. Generation of the wound: (i) Anesthetize the mouse. (ii) Position the mouse on its side, pinch the skin overlaying the animal's back, extend this skin out, and hold it against a solid surface. Make a half-circle punch through both sides of the skin. (iii) When the skin is released, the wound will take the shape of a full circle, slightly right of midline. Preparation for imaging: (iv) Cut the skin up the midline and reflect the flap outward. (v) Place the mouse on its back on the microscope stage. (vi) Confirm the position of the skin wound site relative to the imaging window to ensure the wound is located on the cover glass.

3.2 Initiating Imaging Equipment and Software

1. Turn on confocal and multiphoton imaging components and computer. Open the software.
2. Fix microscope cover glass (thickness 0.12–0.19 mm) over the imaging window of the heated inverted imaging stage using Blenderm® tape.
3. Turn on the power supply to the heated stage and set at 37 °C.
4. Set up the imaging parameters in the acquisition software applicable to your fluorophores of choice.
5. Allow appropriate time for lasers and detectors to warm up (see Note 4).


3.3 Preparing the Animal for IVM

1. Make an anesthetic solution of Ketamine (200 μg/g mouse) and Xylazine (10 μg/g mouse) in sterile saline.

3.3.1 Induction and Maintenance of Anesthesia

2. Inject the appropriate volume of anesthetic (based on the weight of the mouse) into the peritoneal cavity using a 1 mL slip tip syringe and a 30 G × 1/2 in. needle.
3. Assess the anesthetic plane by testing the mouse's reflexes. Pinch the footpad of the mouse firmly with forceps. The mouse should not withdraw its foot if the appropriate plane of anesthesia is achieved. This will take a minimum of 10 min and can vary between mice. Do not proceed until the mouse no longer withdraws its foot following a pinch reflex test.
4. Construct a venous catheter to be placed in the tail vein. This is achieved by making a catheter from a 1 mL slip tip syringe, polyethylene tubing (Ø 0.28 mm, length ~15 cm), and two 30 G × 1/2 in. needles. Break one of the needles from the hub (using a needle driver or hemostat, carefully bend it back and forth until it breaks cleanly from the hub) and insert the blunt end of the needle into one end of the catheter tubing. This can be aided by hemostats or forceps. Fill the 1 mL syringe with saline and attach the second 30 G needle. Insert the second needle (attached to the syringe) into the other end of the tubing. Be sure to clear the air from the tubing dead space by slowly depressing the syringe plunger, allowing the saline to fill the tubing and drip out of the end with the detached needle inserted.
5. Place the mouse on its side and wipe the tail with 70% ethanol.
6. Using forceps, place the needle on the end of the catheter tubing, bevel up, into the tail vein of the mouse. Successful cannulation will show a flashback of blood into the tubing and will allow easy (smooth, low resistance to plunger depression) delivery of the contents of the syringe into the blood stream. This can be visualized by a lightening of the vessel as the saline is pushed through (use only a small amount (20 μL) of saline to test proper insertion of the catheter, to avoid hypervolemic shock in the animal).
7. Secure the catheter in place. Place the tail onto a piece of Transpore tape and place a wooden cotton swab dowel (with the cotton ends removed) next to the tail where the needle is inserted. Fold the Transpore tape over the wooden splint and inserted needle and press gently to secure (see Note 5).
8. Once the venous catheter is in place, fluorescently conjugated antibodies or contrast agents can be injected into the mouse. Injecting labeling reagents prior to surgery allows sufficient time for circulation (approximately 10 min) and allows the label to enter tissues that may develop perfusion defects resulting from the surgical preparation.
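The injection volume in step 2 follows from simple dosing arithmetic based on the doses given in step 1. The Python sketch below is illustrative only and is not part of the published protocol; the mouse weight and target injection volume shown are hypothetical placeholders.

```python
# Illustrative dosing arithmetic (not from the protocol): drug mass required for the
# combined ketamine/xylazine solution, using the 200 ug/g and 10 ug/g doses from step 1.
# The mouse weight and the 250 uL target injection volume are hypothetical examples.

def required_mass_ug(weight_g: float, dose_ug_per_g: float) -> float:
    """Total drug mass (ug) needed for one animal at the stated dose."""
    return weight_g * dose_ug_per_g

mouse_g = 25.0
injection_ul = 250.0  # hypothetical target IP injection volume
ketamine_ug = required_mass_ug(mouse_g, 200.0)   # 5000 ug
xylazine_ug = required_mass_ug(mouse_g, 10.0)    # 250 ug
print(f"Mix {ketamine_ug:.0f} ug ketamine and {xylazine_ug:.0f} ug xylazine "
      f"per {injection_ul:.0f} uL of sterile saline for a {mouse_g:.0f} g mouse.")
```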

3.3.2 Administration of Labeling Reagents

1. Prepare a mixture of antibodies (typically 1–3 μg of each fluorescently conjugated antibody per animal) and/or contrast dyes in a microfuge tube using a pipette, and top up the total volume to a minimum of 10 μL with saline (see Notes 6 and 7). Care must be taken in selecting labeling antibodies that do not interfere with desired biological functions (see Note 8) and are accessible to the tissue being imaged (see Note 9).
2. Draw back the plunger of a new 1 mL slip-tip syringe to generate ~200 μL of dead air space. Using a pipette, load the antibody mixture into the tip of the syringe. The antibody mixture will remain in the tip of the syringe, held by surface tension.
3. Replace the syringe attached to the tail vein catheter with the one containing the antibody mixture. Slowly depress the plunger, stopping immediately once the antibody mixture is injected into the catheter, ensuring you do not inject any of the dead air space located behind the antibody mixture (to avoid introducing an air bubble into the animal's circulation).
4. Remove the syringe used for administering the antibody mixture and replace it with a syringe containing sterile saline. Avoid the introduction of air bubbles. Dispense 100 μL of saline to flush the antibody mixture through the catheter tubing and into the mouse's circulation. Antibodies take approximately 10 min for optimal intravascular labeling.
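The volumes for the mixture in step 1 can be worked out from the antibody stock concentrations, as in Note 6. The Python sketch below is illustrative only; the antibody targets listed are those shown in the figures, while the per-mouse masses and stock concentrations are hypothetical placeholders.

```python
# Illustrative arithmetic (not from the protocol): volume of each antibody stock needed
# for a target mass per animal, plus the saline top-up to the 10 uL minimum from step 1.
# Masses and stock concentrations below are hypothetical examples.

def stock_volume_ul(target_ug: float, stock_mg_per_ml: float) -> float:
    """Volume (uL) of an antibody stock delivering target_ug micrograms.

    1 mg/mL equals 1 ug/uL, so ug divided by (mg/mL) gives uL.
    """
    return target_ug / stock_mg_per_ml

panel = {  # name: (target ug per mouse, stock mg/mL) -- hypothetical values
    "anti-CD31":  (3.0, 0.5),
    "anti-Ly6G":  (3.0, 0.5),
    "anti-F4/80": (2.0, 0.2),
}
volumes = {name: stock_volume_ul(ug, conc) for name, (ug, conc) in panel.items()}
antibody_ul = sum(volumes.values())
saline_ul = max(0.0, 10.0 - antibody_ul)  # top up to the 10 uL minimum if needed
print(volumes, f"saline top-up: {saline_ul:.1f} uL")
```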

3.3.3 Surgery to Facilitate IVM

1. Place the mouse onto the surgical board on its stomach, with its head facing away from you. Secure all 4 limbs to the surgical board with Transpore tape to ensure stability (Fig. 1iii–vi; Supplemental Video 1).
2. Cover the midline of the mouse's back with mineral oil. This will keep fur from entering the imaging site.
3. Make a small incision just above the base of the tail, then continue to cut up the entire midline of the back, stopping at a point perpendicular to the shoulders of the front limbs. Use the cautery iron if any bleeding occurs along the incision site.
4. Reflect the skin on the right side of the mouse (where the wound is located) away from the body, and carefully remove the connecting fascia. Do not make any perpendicular cuts into the reflected skin, so as to preserve proper blood supply.
5. Using a 35 mm petri dish for support, reflect the skin over the petri dish (elevating the skin flap and providing a stable work surface for the removal of fascia, etc.), use 5-0 suture to make 2 retractions (1 on either end of the skin flap), and, with Transpore tape, secure the sutured skin flap to a clean glass slide, leaving the inside of the 2 mm biopsy wound exposed.


6. The wound may have a thin layer of fascia on top, which will need to be carefully removed using forceps and scissors (Supplemental Video 2; see Note 10). Ensure you do not cut or damage any vessels.
7. Carefully remove the tape securing the animal to the surgical stage and position the animal on the imaging stage (Supplemental Video 3). Add a small amount of saline onto the wound site to keep the tissue hydrated. The mouse will now be on its back, with the reflected skin facing down onto the cover glass (over the imaging window). Observe from below to ensure proper contact between the wounded area and the cover glass (ensure there are no air bubbles in the imaging field). If the experiment exceeds 30 min, efforts must be made to maintain tissue moisture over time; for example, a saline-soaked Kimwipe can be placed over the wounded area.
8. Transport the mouse, on the imaging stage, to the microscope. Ensure the heated stage is plugged in.
9. Replace the catheter syringe with a new 1 mL syringe containing the ketamine/xylazine solution, being careful not to introduce air bubbles, allowing for top-up of anesthetic as needed throughout the imaging time frame. Constantly monitor anesthetic depth using the toe pinch reflex method previously described.

3.4 Imaging of the Wound Site

1. Once the mouse/stage has been transferred to the microscope, ensure the prep is still stable; tighten the stabilizing sutures or reposition the skin flap if necessary. Perform a toe pinch test to ensure the mouse is properly anesthetized.
2. Locate the wound and ensure the lens is directly under (or over, if using an upright microscope) the wounded site before locating the focal plane. Once the tissue is in focus, use the edges of the wound as a landmark; this will ensure you are imaging the tissue proximal to the wound site.
3. Each imaging platform will require specific settings and configurations; however, common aspects to consider when imaging the wound and surrounding environment include:
Brightness/Contrast for image acquisition: Optimize the laser power and the sensitivity (gain) of the detectors; this will greatly affect the quality of the imaging. Increase laser power to a level that is just sufficient to excite the fluorescent molecules, then increase detector gain to ensure adequate signal is detected. This strategy will protect the sample from thermal injury and will limit photobleaching, while ensuring no image pixels become saturated. If using confocal, this can also be aided by narrowing or opening the pinhole. The brightest true signal


should be collected while trying to avoid nonspecific emitted light (background). In addition, many tissues can produce autofluorescence at various excitation wavelengths (which need to be determined empirically). This signal can be used for landmarking and/or structural detail, but may also interfere with collection of information from fluorescent labels/dyes. Choosing fluorescent labels/dyes that avoid autofluorescent wavelengths may be a necessary strategy when using confocal imaging. Autofluorescence is not commonly observed when using multiphoton microscopy. There is also less need for adjusting settings to reduce out-of-focus light, as only the focal plane of the sample will be excited. The power of the laser will depend on the depth of the focal plane of interest.
Imaging Speed: When acquiring videos, the settings must be appropriate for the temporal event you are trying to capture; consider, for example, the difference between imaging platelets in a blood vessel, which requires a high frame rate, and a cell crawling in the tissue, which requires only one frame every 10 or more seconds. Various settings will affect the speed of acquisition. Increasing the number of image-capture sequences (cycling individual excitation lasers on and off in a sequence for each imaging frame) will reduce capture speed. Multiphoton imaging is faster on average, as typically only a single sequence is required. Averaging (see Note 11) can also dictate the imaging speed, by increasing the number of times each position in the sample needs to be illuminated and imaged. For fast-moving objects, if sequences cannot be reduced, lowering the averaging can provide faster acquisition speeds. Slower-moving objects that can afford more time between laser passes can be averaged many times to increase the signal-to-noise ratio, generating cleaner, more accurate images. However, more laser passes can lead to photobleaching of less stable fluorescent proteins.
Resolution: Depending on the research question, the ability to resolve increasingly smaller details can be more or less important. Confocal microscopes are known in the world of light microscopy for their ability to optically "section" the sample and provide 3D reconstructions of the tissue (Fig. 2). When using a scanning confocal microscope, smaller pinhole sizes (down to 1 Airy unit) will increase the resolving power in all three dimensions. Resolution will diminish quickly with imaging depth in the z-plane. When the research question relies on high resolution, multiphoton imaging should be considered if possible. With longer wavelengths reaching greater tissue depths, and the ability to excite only one focal plane at a time, the z-resolution is


Fig. 2 Representative images from a biopsy wound model imaged with single and multiphoton microscopy. Imaging of a biopsy skin wound model following injection of fluorescently labeled antibodies (CD31 and CD49b [endothelium and platelets], red; Ly6G [neutrophils], cyan; F4/80 [macrophage], blue; CD11b [myeloid cells], green; second harmonic generation [collagen], magenta). (a) Stitched and maximum-projected image of vasculature around a wound site in the skin, captured with resonant scanning confocal microscopy. Dashed white lines highlight vessels. (b) Corresponding stitched image to (a) using multiphoton imaging. (c) 3D reconstruction of a stitched confocal image and corresponding multiphoton imaging (d). White dashed lines highlight vessels, scale bars represent 300 μm

much better than what can be achieved with confocal light microscopy, and at greater tissue depths. These sequential z-planes can be "reassembled" to generate a 3D model of the tissue (Fig. 3) or can be "flattened" into a single focal plane where each pixel represents the maximum fluorescence signal collected for that location in any one of the focal planes contained in the z-stack (Fig. 4). Importantly, the laser power needs to be increased to facilitate imaging at greater depths within a tissue, enhancing the risk of thermal tissue damage when imaging deep within the tissue. Laser power attenuation will be required when making z-stacks using a multiphoton laser, and often takes optimization, increasing the difficulty of image capture. Additionally, using a lens-based focus drive will reduce sample movement/vibration from a moving stage, aiding in sample stability and image quality.
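As a rough illustration of how sequential scanning and averaging trade against speed (see "Imaging Speed" above), the back-of-the-envelope Python sketch below estimates the time per final image. The 8 kHz line rate matches the resonant scanner listed in Note 1; the frame size, averaging, and sequence counts are assumptions, and bidirectional scanning and hardware overheads are ignored.

```python
# Rough, illustrative frame-time estimate; not a vendor calculation.

def frame_time_s(lines: int, line_rate_hz: float, line_average: int, sequences: int) -> float:
    """Approximate seconds per final image: each line is scanned line_average times,
    and the whole frame is repeated once per excitation sequence."""
    return lines * line_average * sequences / line_rate_hz

# 1024-line frame on an 8 kHz resonant scanner (assumed values for illustration).
fast = frame_time_s(lines=1024, line_rate_hz=8000, line_average=1, sequences=1)
slow = frame_time_s(lines=1024, line_rate_hz=8000, line_average=8, sequences=3)
print(f"{fast * 1000:.0f} ms vs {slow * 1000:.0f} ms per frame")  # ~128 ms vs ~3072 ms
```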


Fig. 3 Representative confocal and multiphoton 3D images from a biopsy wound model. Following injection of fluorescently labeled antibodies (CD31 and CD49b [endothelium and platelets], red; Ly6G [neutrophils], cyan; F4/80 [macrophage], blue; CD11b [myeloid cells], green; second harmonic generation [collagen], magenta), a consecutive z-stack of focal planes was collected using confocal microscopy and rendered as a 3D reconstruction of the wound site (a). Corresponding 3D model generated using multiphoton imaging (b). Cross-sectional projection of the confocal (c) and the multiphoton (d) 3D models. White dashed lines highlight vessels, scale bars represent 300 μm

4 Notes

1. The protocol described here utilizes a Leica SP8 inverted microscope (Leica Microsystems, Concord, Ontario, Canada), equipped with 405-, 488-, 552-, and 638-nm excitation lasers, an 8 kHz tandem scan head, and spectral detectors (conventional PMT and hybrid HyD detectors). This platform is also equipped with a tunable multiphoton (MP) pulsed laser (700–1040 nm) (Newport Corporation, Irvine, CA) and external PMT and HyD detectors (Leica). A 25× 0.95 NA water immersion lens is recommended.


Fig. 4 Maximum projected images of z-stack imaging of a skin blood vessel. Images from a biopsy wound model comparing maximum projection renderings of focal plane z-stacks collected by confocal (a) and multiphoton (b) intravital imaging. Cellular targets were labelled by i.v. injection of fluorescently conjugated antibodies (CD31 and CD49b [endothelium and platelets], red; Ly6G [neutrophils], cyan; F4/80 [macrophage], blue; CD11b [myeloid cells], green; second harmonic generation [collagen], magenta). White dashed lines highlight vessels, scale bars represent 100 μm

2. Leica Application Suite X Version 3.5.6.21594 is used for image capture/analysis.
3. With small (2 mm) wounds, analgesic may not be necessary. However, larger wound models may require further recovery procedures, including dressing and analgesic administration, for example, Meloxicam (Metacam®, 25 mg/kg reconstituted in 100 μL of sterile saline, administered subcutaneously 24 h post wounding) [15]. These requirements may vary by institution.
4. Each system will require some time for lasers to warm up. The Leica system used in these studies takes 15–20 min on average for lasers and detectors to reach optimal operating temperature. Individual warm-up times for each system will be recommended by the manufacturer.
5. A jugular vein catheter can be substituted for tail vein cannulation.
6. ~4 μg of each antibody/mouse (8 μL of a 0.5 mg/mL antibody) can be used as a starting point; this should be experimentally determined depending on the marker and tissue imaged.
7. Fluorophore selection should be based on the microscope configurations/capability, as well as the tissue imaged. For example, brighter fluorophores can be used for lower expression


Table 2
Choosing fluorophores for your intravital imaging experiment: tested suggestions

Excitation \ Emission | ~410–470 nm | ~500–550 nm | ~570–620 nm | ~640–680 nm | ~700–780 nm
405 nm | BV-421, Q-Tracker, BFP | Q-Tracker | Q-Tracker | Q-Tracker | Q-Tracker
488 nm | n/a | FITC, AF-488, GFP | PE | PerCP-Cy5.5^a | -
552 nm | n/a | n/a | PE, AF-555/594, TdTomato, mCherry | - | -
638 nm | n/a | n/a | n/a | AF-647, APC, eF-660 | AF-700, iRFP

A list of fluorophores commonly used for intravital imaging. Excitation wavelengths are representative of a common four-laser confocal microscope setup (not all systems will be the same). This table can be used as a starting point for designing in vivo staining panels. "n/a" indicates that a laser line cannot be used to generate emission at lower wavelengths; "-" indicates no tested suggestion. BFP, GFP, TdTomato, mCherry, and iRFP are transgenic fluorescent reporter proteins. AF, Alexa Fluor; BV, Brilliant Violet; eF, eFluor
^a Tandem dyes do not always work well in vivo

markers. In addition, it can be difficult to completely separate bleed-through of emission from other channels, so if two colors that are close in spectra need to be used together, choosing structures that are morphologically distinct (such as lymphocytes and vasculature) can help ensure you are looking at true signal. A list of fluorophores commonly used for intravital imaging is presented in Table 2 [16].
8. The use of fluorophore-conjugated antibodies to label cells in vivo can potentially induce biological effects (blocking protein function [e.g., adhesion molecules], Fc-receptor binding and subsequent cell activation, complement activation, cell depletion). Every antibody that will be used for IVM should be characterized/vetted carefully.
9. Antibodies can only diffuse so far into certain dense tissues. If inadequate labelling results when using fluorescently labeled antibodies, transgenic mice expressing fluorescent proteins driven by a cell-specific promoter may be required.
10. Dissecting away the connective tissue from the wound site is important for producing the best-quality, best-resolved imaging. Even small amounts of extra tissue will scatter light and lead to out-of-focus and autofluorescent images.
11. Averaging is an imaging technique used to increase the signal-to-noise ratio and/or enhance signal detection. Averaging can


be done by frame or by line. Line averaging is more common in laser scanning confocal microscopy since it has less impact on acquisition speed. The laser sweeps over an image line by line. Higher line averages take the electron data collected on each laser pass and average it, giving a normalized representation of the number of electrons collected per pixel of the image. The higher the line average, the cleaner the image will appear.

References
1. van Zuylen J (1981) The microscopes of Antoni van Leeuwenhoek. J Microsc 121:309–328
2. Subramaniam S (2005) Bridging the imaging gap: visualizing subcellular architecture with electron tomography. Curr Opin Microbiol 8(3):316–322
3. Herschel JFW (1845) On a case of superficial colour presented by a homogeneous liquid internally colourless. Philos Trans R Soc London 135:143–145
4. Stokes GG (1852) On the change of refrangibility of light. Philos Trans R Soc London 142:463–562
5. Ellinger P, Hirt A (1929) Mikroskopische Beobachtungen an lebenden Organen mit Demonstrationen (Intravitalmikroskopie). Arch Exp Pathol Phar 147:63
6. Davis R, Surewaard B, Turk M, Carestia A, Lee W, Petri B, Urbanski S, Coffin C, Jenne C (2020) Optimization of in vivo imaging provides a first look at mouse model of non-alcoholic fatty liver disease (NAFLD) using intravital microscopy. Front Immunol 10:2988. https://doi.org/10.3389/fimmu.2019.02988
7. Kammertoens T, Friese C, Arina A, Idel C, Briesemeister D, Rothe M, Ivanov A, Szymborska A, Patone G, Kunz S, Sommermeyer D, Engels B, Leisegang M, Textor A, Joerg Fehling H, Fruttiger M, Lohoff M, Herrmann A, Yu H, Weichselbaum R, Uckert W, Hubner N, Gerhardt H, Beule D, Schreiber H, Blankenstein T (2017) Tumour ischaemia by interferon-γ resembles physiological blood vessel regression. Nature 545:98–102. https://doi.org/10.1038/nature22311
8. Ito K, Smith BR, Parashurama N, Yoon J-K, Song SY, Miething C, Mallick P, Lowe S, Gambhir SS (2012) Unexpected dissemination patterns in lymphoma progression revealed by serial imaging within a murine lymph node. Cancer Res 72(23):6111–6118. https://doi.org/10.1158/0008-5472.CAN-12-2579

9. Kondo H, Ratcliffe C, Hooper S, Dunsby C, Anderson K, Ellis J, Macrae J, Hennequart M, Anderson K, Sahai E (2021) Single-cell resolved imaging reveals intra-tumor heterogeneity in glycolysis, transitions between metabolic states, and their regulatory mechanisms. Cell Rep 34:108750. https://doi.org/10.1016/j.celrep.2021.108750
10. Naumenko V, Jenne CN, Mahoney DJ (2016) Intravital microscopy for imaging the tumor microenvironment in live mice. Methods Mol Biol 1458:217–230. https://doi.org/10.1007/978-1-4939-3801-8_16
11. Paddock S (1999) Confocal laser scanning microscopy. Biotechniques 27:992–1004
12. Helmchen F, Denk W (2005) Deep tissue two-photon microscopy. Nat Methods 2:932–940. https://doi.org/10.1038/NMETH818
13. Phan TG, Bullen A (2010) Practical intravital two-photon microscopy for immunological research: faster, brighter, deeper. Immunol Cell Biol 88:438–444. https://doi.org/10.1038/icb.2009.116
14. Rahmani W, Liu Y, Rosin NL, Kline A, Raharjo E, Yoon J, Stratton JA, Sinha S, Biernaskie J (2018) Macrophages promote wound-induced hair follicle regeneration in a CX3CR1- and TGF-β1-dependent manner. J Invest Dermatol 138:2111–2122. https://doi.org/10.1016/j.jid.2018.04.010
15. Abbasi S, Sinha S, Labit E, Rosin NL, Yoon G, Rahmani W, Jaffer A, Sharma N, Hagner A, Shah P, Arora R, Yoon J, Islam A, Uchida A, Chang CK, Stratton JA, Scott RW, Rossi F, Underhill T, Biernaskie J (2020) Distinct regulatory programs control the latent regenerative potential of dermal fibroblasts during wound healing. Cell Stem Cell 27:396–412. https://doi.org/10.1016/j.stem.2020.07.008
16. Lichtman J, Conchello J (2005) Fluorescence microscopy. Nat Methods 2:910–919. https://doi.org/10.1038/NMETH817

Chapter 11
Quantifiable Intravital Light Sheet Microscopy
Holly C. Gibbs, Sreeja Sarasamma, Oscar R. Benavides, David G. Green, Nathan A. Hart, Alvin T. Yeh, Kristen C. Maitland, and Arne C. Lekven

Abstract
Live imaging of zebrafish embryos that maintains normal development can be difficult to achieve due to a combination of sample mounting, immobilization, and phototoxicity issues that, once overcome, often still results in image quality sufficiently poor that computer-aided analysis or even manual analysis is not possible. Here, we describe our mounting strategy for imaging the zebrafish midbrain–hindbrain boundary (MHB) with light sheet fluorescence microscopy (LSFM) and pilot experiments to create a study-specific set of parameters for semiautomatically tracking cellular movements in the embryonic midbrain primordium during zebrafish segmentation.

Key words Zebrafish, Midbrain–hindbrain boundary, Light sheet fluorescence microscopy, Bioimage analysis

1 Introduction

The zebrafish is a choice model organism for studying dynamic developmental processes with intravital microscopy. Zebrafish are externally fertilized vertebrates whose robust embryos are amenable to facile microinjection and cell transplant experiments, and they are beloved by a research community that has fostered a fully stocked molecular and genetic toolbox (zfin.org, zebrafish.org, zrc.kit.edu, zfish.cn). Zebrafish embryos develop rapidly, forming a body plan in less than 24 h that is small enough to image with one multiview acquisition on a light sheet fluorescence microscope (LSFM). The embryos are transparent, and pigment-free mutants improve the image quality attainable at later developmental stages. Taking full advantage of the benefits of this model organism, however, can be unexpectedly challenging. When imaging zebrafish embryos, it is critical to optimize parameters to minimize interference with normal developmental processes while attaining adequate image quality and spatiotemporal resolution for processing and analysis. Practically speaking, this means: (1) selecting optimal


fluorophores, (2) finding a suitable mounting solution to keep a structure that is changing shape, position, and orientation over time accessible for imaging, (3) aligning the LSFM at the start of each experiment, and (4) realizing one is likely to be both fatigued and rushed at the critical step of setting the imaging parameters and executing data collection. Due to these challenges, we advise piloting the sample preparation, imaging parameters, and candidate components of an image analysis pipeline during a thoughtful pause between exploratory and formal imaging studies. LSFM generates data with a relative ease that contrasts with the sophisticated parallel and high-performance computing solutions required for comprehensive analysis, making it tempting, but perilous, to rush forward with data collection. While image analysis tools are ever-improving, there is serious risk of acquiring images that cannot be analyzed even manually. It is incumbent on the study's PI to require scientists and trainees to establish a clear quantitative endpoint to a study and a thorough protocol, with input and collaboration from imaging scientists and bioimage analysts if need be [1].
Choosing the correct microscope and objective(s) is an important consideration, and problems here will become apparent during pilot experiments. LSFM achieves optical sectioning by restricting illumination light, rather than photon collection, to a thin plane, so the overall light dose is smaller and the collection efficiency higher compared to confocal microscopy. Generally, for long-term imaging studies of cellular movements, LSFM should be a suitable choice that minimizes phototoxicity and photobleaching. For shorter imaging sessions requiring higher spatiotemporal resolution over a smaller field of view, confocal and lattice light-sheet systems could be better solutions [2, 3]. If greater imaging depth is required, multiphoton imaging or imaging with adaptive optics or near-infrared wavelengths could be suitable [4–6]. Among the various light sheet designs, the standard orthogonal dual-axis design known originally as selective plane illumination microscopy (SPIM) is useful, though single-objective light sheet designs provide increased speed and ease of sample access and mounting [7, 8].
With these considerations in mind, we present a strategy for pilot experiments with a bidirectional SPIM LSFM (Fig. 1). We are interested in morphogenesis of the optic tectum of the zebrafish, which is derived from the dorsal portion of the midbrain primordium, so we describe the task of defining the spatiotemporal resolution and image quality required for segmentation and semiautomatic tracking of fluorescent nuclei in this region with FIJI plug-ins. Bioimage analysis tools are constantly evolving to tackle challenging data sets, and currently there is no fully automated solution we can aim to use [9]. Rather, we aim to optimize our imaging parameters to minimize the amount of human supervision required (see Note 1).


Fig. 1 Guidance for minimizing risk of data waste in intravital imaging experiments

2 Materials

2.1 Zebrafish Lines
1. Tg(otx2:citrine-H2b) reporter line constructed with AN and FM enhancer as in Kurokawa et al. [10] (see Note 2).

2.2 Printing and Cleaning Custom Mounts
1. Elegoo Mars UV photocuring LCD resin printer (0.045 mm xy resolution, 0.01 mm minimum layer thickness).
2. Photocuring resin (Elegoo Mars 3D rapid resin, VOC free, transparent) (see Note 3).
3. 100% ethanol.
4. UV LED light (40 mW).
5. dH2O.

2.3 Imaging Medium for Zebrafish Embryos

1. 10× fish water (Instant Ocean, 600 mg in 1 L of dH2O).
2. 20× tricaine (ethyl 3-aminobenzoate methanesulfonate, that is, MS-222; 4 mg in 100 mL of tank water, buffered to pH 7.0).
3. 100× PTU (1-phenyl-2-thiourea, 0.3 g PTU in 100 mL fish water).
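For reference, the volumes of each stock needed to prepare a 1× working imaging medium (as made in Subheading 3.2, step 5: 2.5 mL of 20× tricaine, 0.5 mL of 100× PTU, and 5 mL of 10× fish water in a 50 mL final volume) follow directly from the fold-concentrations above. The Python sketch below is illustrative only and is not part of the protocol.

```python
# Minimal sketch: volume of each N-fold stock needed to reach 1x in a final volume.

def stock_volume_ml(final_volume_ml: float, stock_fold: float) -> float:
    """Volume of an N-fold stock required to reach 1x in the final volume."""
    return final_volume_ml / stock_fold

final_ml = 50.0
stocks = {"10x fish water": 10, "20x tricaine": 20, "100x PTU": 100}
volumes = {name: stock_volume_ml(final_ml, fold) for name, fold in stocks.items()}
top_up_ml = final_ml - sum(volumes.values())
print(volumes, f"dH2O to top up: {top_up_ml:.1f} mL")  # 5.0, 2.5, 0.5 mL; top up 42.0 mL
```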

2.4 Mounting Medium and Equipment

1. 1.2% agarose stock (w/v, 1.2 g in 100 mL 1× fish water) in 1 mL aliquots (Sigma A9045).
2. 0.2 μm TetraSpeck multicolor fluorescent beads (Invitrogen T7280).


3. 1–2 mm inner diameter glass capillaries and plunger (Zeiss).
4. Mounting apparatus, including mount, sleeve, and embryo mold.
5. Dissecting microscope, fine forceps, petri dishes, disposable pipettes.

2.5 Screening, Short-Term and Long-Term Imaging

1. Nikon fluorescence stereomicroscope.
2. Zeiss Z.1 light sheet fluorescence microscope (or other multiview LSFM).
3. 20×/1.0 NA detection objective (Zeiss, part no. 421452-9700-000), two 10×/0.2 NA illumination objectives (Zeiss, part no. 400900-9000-710).
4. Zeiss Z.1 sample chamber and, optionally, environmental control system.
5. Optional power meter slide (Thorlabs S175C).

2.6 Image Analysis, Segmentation, and Cell Tracking

1. Analysis workstation: 128 GB RAM, i9-9900K 3.6 GHz 16-thread CPU, GeForce RTX 2080 Ti GPU, 2 TB M.2 NVMe drive, 8 TB SATA SSD with Windows 10 Pro (see Note 4).
2. FIJI [11] plugins: CLIJx [12], StarDist 2D [13], Labkit, TrackMate [14], BigStitcher [15], and MaMuT [16].
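The segmentation and tracking in this protocol are performed with the FIJI plugins listed above. For piloting, an equivalent quick check of nucleus segmentation can also be sketched in Python; the example below is a minimal, hedged sketch that assumes the stardist and csbdeep Python packages are installed and uses a synthetic image in place of real LSFM data.

```python
# Minimal pilot check of nucleus segmentation using the StarDist pretrained 2D model.
# Synthetic "nuclei" stand in for real data; on real images, per-study retraining
# and parameter tuning may be needed.
import numpy as np
from csbdeep.utils import normalize
from stardist.models import StarDist2D

# Synthetic frame: a few bright Gaussian spots on a noisy background.
rng = np.random.default_rng(2)
img = np.zeros((256, 256), dtype=np.float32)
yy, xx = np.mgrid[0:256, 0:256]
for cy, cx in rng.integers(30, 226, size=(12, 2)):
    img += 1000.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 6.0 ** 2))
img += rng.normal(0, 20, img.shape)

model = StarDist2D.from_pretrained("2D_versatile_fluo")  # downloads on first use
labels, details = model.predict_instances(normalize(img))
print("nuclei detected:", labels.max())
```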

3 Methods

3.1 Mount Design, Printing, Curing, and Cleaning (2–3 Days)

Images from the zebrafish midbrain-hindbrain boundary (MHB) will be clearest on an orthogonal dual-axis LSFM if the detection objective is on the dorsal side of the neural tube and the illumination axis is on the left-right axis of the embryo. While elegant solutions for imaging zebrafish over long periods of time in toto have been reported [17], we have found that when using capillaries or FEP tubes, the MHB is difficult to position and drifts and rotates over time, obscuring cellular details from the view of the detection objective. To address this problem, we have developed a custom 3D-printed solution that incorporates elements from previous designs [18, 19], including agarose molding and trimming, shown in Fig. 2.
1. Print an excess of holders and molds so you have extra on hand during mounting.
2. Wash the prints in 100 mL of fresh 100% ethanol at room temperature overnight.
3. Cure the prints with UV light for 15 min on each side.


Fig. 2 Mounting embryos. (a) Brightest embryos are screened for imaging (arrow). (b) Dechorionated embryos in clean fish water. (c) Cylindrical sample holders on the bottom mold. Top mold has a well printed on it (inset). (d) Molds clamped together and filled with 1.2% agarose and bead mixture. (e, f) After the top mold is removed, agarose wells are exposed for embryo mounting. (g) Well-oriented embryo. (h, i) Trimming agarose for tail extension. (j) Imaging chamber (Scale bar = 1 mm)

4. Soak prints in dH2O at room temperature overnight.
5. Air dry and store at room temperature in a clean compartment.

3.2 Initial Microscope Setup (30 min to 1 h)

The performance of microscope optics depends on clean, smooth surfaces coated with fragile films. Check that the lenses and other surfaces are clean each time you use the microscope.
1. Couple the 20×/1.0 NA, n = 1.33 objective to the microscope with the water adapter.
2. Screw the 10×/0.2 NA, n = 1.0 objectives into the left and right illumination paths.


3. Inspect the coverslip glass on the imaging chamber. Clean with 70% ethanol on a q-tip if necessary, or replace with new cover glass.
4. Secure the imaging chamber in the microscope by sliding it in the dovetail joint until it reaches the end of the rail and the o-ring-equipped port is tight against the detection lens.
5. Prepare 50 mL of 1× fish water with tricaine and PTU by adding 2.5 mL of 20× tricaine stock and 0.5 mL of 100× PTU to 5 mL of 10× fish water stock, bringing the final volume to 50 mL with dH2O. PTU is toxic, so dispose of it properly and wear gloves.
6. Fill the imaging chamber with fish water containing tricaine and PTU.
7. The Z.1 internal temperature during operation is 28–29 °C, which is ideal for zebrafish development; however, if timing is a critical aspect of your measurements, it is recommended to equilibrate the microscope to the desired temperature.

3.3 Embryo Screening and Mounting (30 min to 2 h)

3.3 Embryo Screening and Mounting (30 min to 2 h)

When embryos are harvested, they should be thoroughly rinsed and transferred with netting to clean, freshly prepared fish water.
1. Screen embryos on a widefield fluorescence microscope (Fig. 2a).
2. Dechorionate the brightest embryos under a dissecting microscope (Fig. 2b).
3. If the embryos have primary motor activity (after the 18 somite stage), add tricaine to prevent them from disrupting the agarose during gelling/mounting.
4. Melt 1.2% agarose in fish water by warming several 1 mL aliquots to 70 °C on the heat block. After the agarose has melted, lower the temperature to 40 °C.
5. Add 1 μL of TetraSpeck beads to 1 mL of agarose, vortexing thoroughly to mix.
6. Pipette the 1.2% agarose–bead mixture into a holder clamped with the surrounding mold (Fig. 2d). Let the agarose gel for 10 min.
7. Remove the embryo mold and add excess water to one of the now exposed wells.
8. Draw an embryo into a capillary tube or Pasteur pipette and carefully place it into one of the wells tail first, gently orienting the left-right axis of the embryo with the left-right axis of the cylinder using forceps and rolling it so the MHB constriction is facing the detection objective (Fig. 2g). Continue to ensure the embryo is hydrated or surface tension will burst the yolk.


9. Cover the embryos with a small amount of 1.2% agarose–bead mixture and allow it to gel for 5 min, carefully adjusting the embryo orientation as needed.
10. Remove the holder with the mounted embryo from the bottom mold and gently cut away a small section of agarose from underneath the embryo to make room for tail growth using forceps, taking care not to create a vacuum that would pull the mounted embryo further down into the well and burst the yolk (Fig. 2h, i).
11. Examine the embryo. If all is well, couple the mount to the microscope stage (Fig. 2j); otherwise, start the same process with another holder in the mold.

3.4 Light Sheet Alignment (5–10 min)

This procedure has been well described for this instrument [20], so we only briefly reiterate it here.
1. Lower the sample into the imaging chamber.
2. Rotate the embryo so the MHB is facing the detection objective (Fig. 3c).
3. Select 488 nm excitation, 505–550 nm detection, a low power level, and a short exposure. Begin live acquisition. To avoid photobleaching the sample while you are aligning the light sheets, move the embryo out of focus so that you only see agarose beads.
4. Turn the pivot scan on to suppress shadowing artifacts.
5. Both the left and right sheets need to be positioned in the focal plane of the detection objective and coaligned. Adjust the light sheet position for one side, digitally zooming in to confirm that the shape of the PSF (measured from the sub-resolution–sized beads in the gel) looks symmetric (Fig. 3a) and not asymmetric (Fig. 3b). Then, similarly bring the right-side illumination into alignment. Switch back and forth between sides until the images are matched.
6. Recheck that the left- and right-side illumination are coaligned in the embryo (Fig. 3e–h).
7. If you will be fusing dual-illumination images on the fly, check the quality of the fused images with a small test volume (Fig. 3d). The Z.1 can fuse images with either the maximum or average pixel intensity (Fig. 3i–l). Taking advantage of this feature prohibits attempting more sophisticated fusion algorithms later, but saves half the data size. Since even minor misalignments destroy subsequent image quality (Fig. 3m, n), we prefer to keep the raw left and right images.

Fig. 3 Importance of proper alignment. (a) Symmetric PSF shows the light sheet is in focus. (b) Asymmetry of the PSF shows the light sheet is misaligned (Scale bar = 0.5 μm). (c, d) Zebrafish embryo at 14 ss (Scale bar = 100 μm). (e–h) Left vs. right illumination shows signal intensity is decreased at the far side of the embryo (see arrows on intensity profiles taken from the white line in e). (i–l) Average and max fusion; max fusion better preserves the peak prominence of nuclei. (m, n) Misalignment of left and right illumination superimposes multiple planes within the embryo into one image, obscuring individual cell nuclei peaks (arrow). (o, p) Preprocessing steps can assist feature detection, but quantitative analysis of fluorescence should always be done on raw data


3.5 Spatial Resolution Optimization (1–2 Days)


Cell nuclei generally become smaller and more densely packed throughout MHB morphogenesis, so we have selected prim-5 as the time point at which to test the requisite spatial resolution, when we anticipate segmentation will be most challenging.
1. Set the zoom to 1×, providing adequate space for the anterior movement of the MHB (Fig. 4a). Lateral pixel dimensions will be 0.23 × 0.23 μm with a FOV of 441.6 μm.
2. Collect an image stack with Nyquist sampling for the z-step at a laser power that excites signal well enough to utilize the 16-bit encoding range while avoiding saturation. Here we used a light sheet thickness of 4.58 μm sampled at a 0.48 μm z-step for 400 slices, with 10% of 50 mW 488 nm excitation (measured as 1.2 mW after the illumination objective) and 100 ms exposure time. Laser power can be relatively high here to provide good SNR for ground-truth images but may be too high for long-term imaging.
3. Fuse the left and right views and examine the embryo morphology with a maximum intensity projection (MIP) of an image stack (in FIJI, Image/Stacks/Z Project). The size of one 1920 × 1920 × 400 pixel volume is 2.75 GB. This image can serve as a control for expected morphology at the termination of a time-lapse.
4. Select a smaller region of interest and crop the dataset. The subvolume here is 87 MB.
5. Preprocess the data with max fusion, CLAHE, and top-hat background subtraction.
6. Downsample this test dataset along the z-axis 2×, 4×, and 6× (Image/Scale) to simulate z-steps of increasing size. Save xz sections for subsequent analysis.
7. Draw ground truth ROIs on the full-resolution xz section with the freehand selections tool, and save these selections with the ROI Manager.
8. Run the StarDist 2D plugin on all xz sections, saving the output ROIs (Fig. 4c–f).
9. Compare StarDist outputs to the ground truth (GT) annotation with measurements (Analyze/Set Measurements) of area and major and minor axis length (Fig. 4g–i).
10. Create a ground truth of nuclei count and position in the test volume by labeling nuclei manually in the highest resolution


Fig. 4 Assessing the effect of axial resolution on segmentation and spot detection. (a) MIP of 24 hpf Tg(otx2:citrine-H2b) embryos (Scale bar = 50 μm). (b) Section shows cell nuclei are distinguishable with this lateral resolution (0.23 μm square pixel) (Scale bar = 50 μm). (c–f) Axial resolution is decreased from 0× (full resolution, 0.48 μm z-step) to 6×, and xz images were segmented with StarDist 2D (Scale bar = 20 μm). (g–i) Comparison of ground truth (GT) to automated segmentation. (j) Labeling nuclei in 3D using Labkit and BDV. (k, l) 3D spot detection with TrackMate shown in hyperstacks or the 3D viewer (Scale bar = 20 μm)

volume using Labkit (Plugins/Segmentation/Labkit) (Fig. 4j). We find 1642 nuclei in the test volume.
11. The annotation from step 10 can be used to evaluate spot-detection algorithms, which can be a robust alternative to segmentation for cell tracking (Fig. 4k, l). Open the raw test volume (0× downsampled) in TrackMate (Plugins/Tracking/TrackMate). Using the LoG detector, we find the 0×, 2×, 4×, and 6× downsampled volumes are reported to have 1663, 1787, 1885, and 1500 spots, respectively, using a 5 μm blob diameter and a threshold of 150. 2× downsampling leads to spurious detections, but a tolerable amount that can be quickly removed manually (a comparable spot-counting sketch is given below).
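For readers who prefer to prototype the spot-count comparison outside FIJI, the sketch below uses scikit-image's LoG blob detector on a cropped volume. It is only an illustration of the same idea, not the TrackMate implementation; the file name, threshold, and crop are assumptions, so the counts will not match the numbers above exactly.

```python
import numpy as np
from skimage.feature import blob_log               # LoG detector, works on 3D arrays
from skimage.transform import downscale_local_mean

def count_spots(volume, voxel_um, diameter_um=5.0, threshold=0.05):
    """Count LoG blobs of roughly diameter_um in an anisotropic volume."""
    vol = volume.astype(np.float32)
    vol /= vol.max()                                     # normalize so threshold is comparable
    sigma_um = (diameter_um / 2.0) / np.sqrt(vol.ndim)   # blob radius ~ sigma * sqrt(ndim)
    sigma_px = [sigma_um / v for v in voxel_um]          # per-axis sigma in pixels (z, y, x)
    blobs = blob_log(vol,
                     min_sigma=[0.7 * s for s in sigma_px],
                     max_sigma=[1.3 * s for s in sigma_px],
                     num_sigma=3, threshold=threshold)
    return len(blobs)

# vol = tifffile.imread("cropped_test_volume.tif")   # hypothetical cropped subvolume
# for f in (1, 2, 4, 6):                             # simulate coarser z-sampling
#     down = downscale_local_mean(vol, (f, 1, 1))
#     print(f, count_spots(down, voxel_um=(0.48 * f, 0.23, 0.23)))
```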


Fig. 5 Evaluation of temporal resolution requirements with MaMuT. (a) MIP of 14 ss Tg(otx2:citrine-H2b) embryos (Scale bar = 50 μm). (b) Selected tracks from MaMuT. (c) Cell division dynamics can be captured manually but not semiautomatically with time steps of 5 min. MaMuT tracking performs well with time steps of 4 min (Scale bar = 10 μm)

12. Judging from the data, nuclei can be identified and segmented reasonably well with 1 μm z-steps. If desired, similarly explore downsampling other dimensions.

3.6 Temporal Optimization (1–2 Days)

Nuclei in the MHB undergo interkinetic nuclear migration between the basal and apical sides of the neural tube, dividing at the ventricular zone (apical side) (Fig. 5a–c). Approximately 5–10 μm in size, they have previously been reported to move at an average instantaneous velocity of 0.5–1 μm/min [21]. To keep each object overlapping in time, which is important for such densely packed objects, we therefore expect we need to image at least every 5 min (see the sketch at the end of this subheading). Langenberg and Brand have reported successfully tracking nuclei in this system using 3–4 min time steps [22], but it is worth exploring whether this interval could be lengthened.
1. Acquire a 3–4 h time-lapse with temporal steps of 5 min using the same settings as in Subheading 3.5, except substitute 1 μm for the z-step, decrease the laser power to 5% (0.63 mW), and lengthen the exposure time to 150 ms. Additionally, add 20 μm of empty space above the embryo to the limits of the stack to allow space for the dorsal displacement of the MHB during its morphogenesis. At any signs of cell stress, stop and repeat the acquisition with lower laser power.
2. Once acquired, open the .czi file with BigStitcher by defining a dataset with the Automatic Loader (Bioformats based) and convert it to multiresolution .hdf5/.xml format using the default mipmap settings and deflate compression. This unfused test dataset is 96 GB uncompressed and 65 GB after compression.
3. Select all views (time points and left/right illumination), and right click in BigStitcher to access the processing menu. Select “detect interest points”, which are the stationary TetraSpeck beads, using the Difference-of-Gaussian detector with sub-pixel


localization, choosing “weak and small” signal to exclude moving cell nuclei.
4. “Register time points” using the bead interest points with an affine transformation and an “all-to-all” global optimization within a reasonable range (10 volumes). Resave the .xml.
5. Open MaMuT and start a new annotation with the previously saved .hdf5/.xml.
6. Open the MaMuT Viewer and TrackScheme Viewer. Select a few nuclei to track in the first frame and try automated tracking in steps of 10 frames, adjusting the parameters as needed (Annotations tab, main MaMuT GUI window). Note the performance (Fig. 5b, c).
7. With these settings for data acquisition, we find 5 min is not adequate for semiautomated tracking and the laser power is too high. After repeating steps 1–6, a time step of 4 min and a laser power of 1.5% (0.3 mW) were found to give optimal performance in MaMuT, with tracks requiring intervention every 5 frames on average.
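The sampling-interval estimate given at the start of this subheading can be checked with a quick back-of-the-envelope calculation (a sketch using only the numbers quoted above):

```python
# A nucleus should still overlap its previous position between frames, i.e.,
# displacement per frame should not exceed roughly one nucleus diameter.
nucleus_diameter_um = 5.0      # lower bound of the reported 5-10 um size
velocity_um_per_min = 1.0      # upper bound of the reported 0.5-1 um/min velocity
max_interval_min = nucleus_diameter_um / velocity_um_per_min
print(f"Image at least every {max_interval_min:.0f} min")   # -> 5 min
```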

3.7 Viability and Photobleaching Studies (1–2 Weeks)

With the spatial and temporal resolution specified, it is still necessary to test whether the laser power interferes with normal development and, possibly, what minimum image quality is required.
1. Acquire a 15–20 h time-lapse with the settings in Subheading 3.6, except adjust the time step to 4 min and the laser power to 1.5% (0.3 mW) (Fig. 6a–f).
2. Examine the health of the embryo with brightfield after the experiment is finished (Fig. 6f) and compare its morphology to the data acquired in Subheading 3.5.
3. Convert the .czi to .hdf5/.xml as described in Subheading 3.6 and follow steps 3–6. File conversion will take several hours on an analysis workstation. Our test data set is 730 GB (unfused) without compression and 428 GB after compression.
4. Track nuclei, noting the performance of the semiautomated spot-linking algorithms as before to determine if the intervention required is tolerable (Fig. 6g–i, k–m).
5. Photobleaching, which may occur, presents an opportunity to characterize the performance of the selected image analysis tools with varying data quality. Use BigStitcher to fuse selected image volumes and choose a representative plane at each time point to manually segment into foreground and background. Using FIJI’s mask operations, calculate the signal-to-noise ratio (SNR) and contrast ratio (CR) (see Fig. 6j). We achieved manual annotation by starting with StarDist labels and correcting them with the ROI Manager, allowing us to also calculate the


Fig. 6 Long-term tracking. (a–f) Tg(otx2:citrine-H2b) embryos were imaged from 14 ss to prim-25 (Scale bar = 50 μm). (g–i, k–m) Testing semiautomated tracking with MaMuT to quantify cell division and nuclear movement in a tightly packing epithelium (Scale bar = 10 μm). (j) Data quality and segmentation performance decrease across time (FG, foreground; BG, background; SNR_1 = (FG_ave − BG_ave)/BG_std; SNR_2 = FG_ave/BG_std; CR = FG_ave/BG_ave; JI = overlap/union)

pixel-wise Jaccard Index (JI) with the CLIJx “Jaccard Index on Two Binary Images” operation (these metrics are sketched below).
6. Use the data from step 5 in combination with tracking performance to select the lower bounds of usable data quality. We observe that segmentation and tracking still perform reasonably well down to a CR of 2, allowing us to lower the laser power to 1% to preserve data quality across our temporal window of interest (see Note 5).
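The metrics used in step 5 follow directly from the definitions in the Fig. 6 legend; a minimal NumPy sketch (the function names are ours, not part of any FIJI plugin) is:

```python
import numpy as np

def quality_metrics(img, fg_mask, bg_mask):
    """SNR and contrast ratio as defined in the Fig. 6 legend."""
    fg = img[fg_mask].astype(float)
    bg = img[bg_mask].astype(float)
    return {"SNR_1": (fg.mean() - bg.mean()) / bg.std(),
            "SNR_2": fg.mean() / bg.std(),
            "CR": fg.mean() / bg.mean()}

def jaccard_index(mask_a, mask_b):
    """Pixel-wise Jaccard index (overlap / union) of two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
```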


4 Notes

1. Semiautomated image analysis provides the annotator with intimate views of the behavior of the imaged biological system and should be carried out by someone with interest in, and background knowledge of, the biological questions at hand.
2. Sometimes it is more practical to use an existing transgenic reporter, even if one does not have access to the optimal excitation lines and filters for the study. Here we have tested a citrine reporter line originally developed for a multiphoton system, with excitation at 488 nm (some Zeiss Z.1 LSFMs come equipped with a 515 nm line, but we currently lack this capability). It is wise to select longer wavelength, low toxicity fluorophores. An extensive catalog of genetically encoded fluorophores can be found at FPbase.org [23].
3. Some printed photo resins are toxic to zebrafish embryos, especially if not properly cured and cleaned [24]. Formlabs biocompatible dental resin is recommended for early stages of development, as it has met FDA biocompatibility standards. Other Formlabs resins are also reportedly safe with zebrafish embryos [25]. We have tested the resin in this protocol with embryos exposed from 12 ss to 3 dpf and found that properly cleaned printed parts fail to induce developmental abnormalities.
4. This protocol makes use of software tools that employ a variety of approaches to using the available hardware for storage and computation. CLIJ and StarDist utilize GPUs, so a performant GPU is helpful to further accelerate computation, and larger GPU RAM will allow larger data sets to be computed on by the GPU. FIJI’s built-in functions mostly utilize CPU computation. Some, but not all, of these functions have been written to utilize multiple cores (e.g., TrackMate, BigStitcher, and MaMuT), so a balance of processor speed and number of cores is desirable to benefit from multiprocessing while maintaining good performance of single-core processes. BigStitcher and MaMuT handle data sets too large for the system’s RAM by reading an optimized data storage structure (.xml/.hdf5) from disk (lazy loading/processing; a minimal example of such plane-wise lazy reading is sketched after these notes). These processes can be accelerated with disk storage that has fast read/write capability, such as NVMe drives, though SATA solid state drives (SSDs) can also perform sufficiently well. Spinning hard drives should be avoided, except for long-term storage.
5. We highlighted some open source tools currently available, but this landscape is large and dynamic. We find that the image.sc forum is a responsive and valuable resource for finding candidate tools and troubleshooting common issues with open-source software.
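As an illustration of the lazy, plane-wise access mentioned in Note 4, the sketch below reads a single plane from an .hdf5 file with h5py. The dataset path follows the typical BigDataViewer layout (timepoint/setup/resolution level), which should be verified for your own export; the file name and plane index are placeholders.

```python
import h5py

with h5py.File("dataset.h5", "r") as f:     # hypothetical file name
    cells = f["t00000/s00/0/cells"]         # dataset handle only; nothing is read yet
    plane = cells[200, :, :]                # only this z-plane is pulled from disk
    print(cells.shape, cells.dtype, plane.mean())
```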


Acknowledgments

We thank Dr. Bruce Riley, Dr. Jennifer Dong, and Dr. Jo-Ann Fleming for care and maintenance of zebrafish colonies and facilities, and we thank Texas A&M University’s Microscopy and Imaging Center for access to the Zeiss Z.1 LSFM. The plasmid containing citrine:H2b was a gift from S. Megason. This work was supported by the Silicon Valley Community Foundation (CZI Imaging Scientist Program, 2019-198168) and the National Institutes of Health (R01 NS088564 and R21 NS109504).

References

1. Wait EC, Reiche MA, Chew T-L (2020) Hypothesis-driven quantitative fluorescence microscopy – the importance of reverse-thinking in experimental design. J Cell Sci 133:jcs250027
2. Gutzman JH, Sahu SU, Kwas C (2015) Non-muscle myosin IIA and IIB differentially regulate cell shape changes during zebrafish brain morphogenesis. Dev Biol 397:103–115
3. Kesavan G, Machate A, Hans S, Brand M (2020) Cell-fate plasticity, adhesion and cell sorting complementarily establish a sharp midbrain-hindbrain boundary. Development 147:dev185882
4. Dray N, Bedu S, Vuillemin N, Alunni A, Coolen M, Krecsmarik M, Supatto W, Beaurepaire E, Bally-Cuif L (2015) Large-scale live imaging of adult neural stem cells in their endogenous niche. Development 142:3592–3600
5. Gibbs HC, Chang-Gonzalez A, Hwang W, Yeh AT, Lekven AC (2017) Midbrain-hindbrain boundary morphogenesis: at the intersection of Wnt and Fgf signaling. Front Neuroanat 11:64
6. Liu T-L, Upadhyayula S, Milkie DE et al (2018) Observing the cell in its native state: imaging subcellular dynamics in multicellular organisms. Science 360:eaaq1392
7. Kumar M, Kishore S, Nasenbeny J, McLean DL, Kozorovitskiy Y (2018) Integrated one- and two-photon scanned oblique plane illumination (SOPi) microscopy for rapid volumetric imaging. Opt Express 26:10
8. Voleti V, Patel KB, Li W, Perez Campos C et al (2018) Real-time volumetric microscopy of in vivo dynamics and large-scale samples with SCAPE 2.0. Nat Methods 15:1054–1062
9. Ulman V, Maska M, Magnusson KEG et al (2017) An objective comparison of cell

tracking algorithms. Nat Methods 14(12):1141–1152
10. Kurokawa D, Sakurai Y, Inoue A et al (2006) Evolutionary constraint on Otx2 neuroectoderm enhancers-deep conservation from skate to mouse and unique divergence in teleost. PNAS 103(51):19350–19355
11. Schindelin J, Arganda-Carreras I, Frise E et al (2012) Fiji: an open-source platform for biological-image analysis. Nat Methods 9(7):676–682
12. Haase R, Royer LA, Steinbach P et al (2020) CLIJ: GPU-accelerated image processing for everyone. Nat Methods 17:5–6
13. Weigert M, Schmidt U, Haase R, Sugawara K, Myers G (2020) Star-convex polyhedra for 3D object detection and segmentation in microscopy. Proc IEEE/CVF WACV 2020:3666–3673
14. Tinevez J-Y, Perry N, Schindelin J et al (2017) TrackMate: an open and extensible platform for single-particle tracking. Methods 115:80–90
15. Horl D, Rusak FR, Preusser F et al (2019) BigStitcher: reconstructing high-resolution image datasets of cleared and expanded samples. Nat Methods 16:870–874
16. Wolff C, Tinevez J-Y, Pietzsch T (2018) Multi-view light-sheet imaging and tracking with the MaMuT software reveals the cell lineage of a direct developing arthropod limb. eLife 7:e34410
17. Kaufmann A, Mickoleit M, Weber M, Huisken J (2012) Multilayer mounting enables long-term imaging of zebrafish development in a light sheet microscope. Development 139:3242–3247
18. Megason S (2009) In toto imaging of embryogenesis with confocal time-lapse microscopy. In: Lieschke GJ et al (eds)


Zebrafish: Methods in molecular biology, vol 546. Humana Press, New York, NY, pp 317–332
19. Hirsinger E, Steventon B (2017) A versatile mounting method for long term imaging of zebrafish development. J Vis Exp 119:e55210
20. Schmied C, Tomancak P (2016) Sample preparation and mounting of Drosophila embryos for multiview light sheet microscopy. In: Dahmann C (ed) Drosophila: methods and protocols, Methods in molecular biology, vol 1478. Humana Press, New York, NY, pp 189–202
21. Leung L, Klopper AV, Grill SW, Harris WA, Norden C (2011) Apical migration of nuclei during G2 is a prerequisite for all nuclear motion in zebrafish neuroepithelia. Development 138:5003–5013

22. Langenberg T, Brand M (2005) Lineage restriction maintains a stable organizer cell population at the zebrafish midbrain-hindbrain boundary. Development 132:3209–3216
23. Lambert TJ (2019) FPbase: a community-editable fluorescent protein database. Nat Methods 16:277–278
24. Oskui SM, Diamante G, Liao C et al (2016) Assessing and reducing the toxicity of 3D-printed parts. Environ Sci Technol Lett 3:1–6
25. Kleinhans DS, Lecaudey V (2019) Standardized mounting method of (zebrafish) embryos using a 3D-printed stamp for high-content, semi-automated confocal imaging. BMC Biotechnol 19:68

Chapter 12

Hydrophobic and Hydrogel-Based Methods for Passive Tissue Clearing

Frank L. Jalufka, Sun Won Min, Madison E. Platt, Anna L. Pritchard, Theodore E. Margo, Alexandra O. Vernino, Megan A. Kirchhoff, Ryan T. Massopust, and Dylan A. McCreedy

Abstract

Optical tissue clearing enables the precise imaging of cellular and subcellular structures in whole organs and tissues without the need for physical tissue sectioning. By combining tissue clearing with confocal or lightsheet microscopy, 3D images can be generated of entire specimens for visualization and large-scale data analysis. Here we demonstrate two different passive tissue clearing techniques that are compatible with immunofluorescent staining and lightsheet microscopy: PACT, an aqueous hydrogel–based clearing method, and iDISCO+, an organic solvent–based clearing method.

Key words Tissue clearing, Lightsheet microscopy, Passive CLARITY technique, PACT, iDISCO+

1 Introduction

The study of whole organs or tissues and their cellular components and structures has been historically limited by their natural opacity, which is caused by the high concentration of lipids and proteins that scatter light as it traverses through the tissue [1, 2]. The main method of circumventing tissue opacity has been the use of histological tissue sections. While these thin tissue sections can be used to generate precise images of subcellular structures, they often lack 3D spatial context and can be subject to tears and distortions caused by the physical sectioning method [1]. Tissue clearing methods have existed for more than a century; however, several modern techniques for tissue clearing have emerged in recent years that are compatible with fluorescent labeling and volumetric imaging. Tissue clearing techniques reduce the opacity of organs and tissues predominantly through the removal of lipids and matching the refractive index (RI) of the sample with the RI of the optical imaging media. Volumetric imaging of the



optically cleared sample can then be achieved with confocal or lightsheet microscopy to generate 3D images. While many different tissue clearing methods and procedures have been developed in the past two decades, they can generally be grouped into organic solvent–based, hyperhydrating, and hydrogel-based techniques [2–10]. Organic solvent–based techniques, also called hydrophobic tissue clearing, rely on dehydration and delipidation of the tissue prior to RI matching [2–4, 6, 7, 11]. Organic solvent-based clearing techniques are often rapid and straightforward, while causing tissue shrinkage and hardening. Hydrogel-based techniques, also called tissue-transformative tissue clearing, often utilize an acrylamide-based hydrogel to stabilize tissue proteins followed by delipidation with sodium dodecyl sulfate (SDS) and RI matching [2–4, 8, 10, 12, 13]. While inherently slower, hydrogel-based techniques help preserve endogenous epitopes and are often directly compatible with aqueous tissue labeling methods [1–4, 8, 10]. Here we demonstrate the optical clearing and 3D lightsheet imaging of mouse spinal cord, brain, and muscle tissue using two previously described tissue clearing methods: a hydrogel-based protocol, passive CLARITY technique (PACT) [10, 12], and an organic solvent–based protocol, iDISCO+ [7, 11] (Figs. 1, 2, 3). Both approaches are capable of optically clearing a variety of tissues and are compatible with immunofluorescent staining and lightsheet imaging, allowing for 3D visualization and analysis of the specimen. Several factors should be considered when selecting between PACT and iDISCO+ for tissue clearing experiments including microscope compatibility, tissue distortion, clearing efficiency, and preservation of endogenous or transgenic fluorescence (for a detailed comparison of clearing techniques, see Matryba et al. [2]).

2 Materials

Prepare all solutions using ultrapure 18 MΩ-cm water and store all reagents at room temperature (RT), unless otherwise indicated. Diligently follow all waste disposal regulations when disposing of waste materials.

2.1 Mouse Transcardial Perfusion and Fixation

1. 10× phosphate-buffered saline (PBS): Dissolve 80 g (8% w/v) of NaCl, 2 g (0.2% w/v) of KCl, 14.4 g (1.44% w/v) of Na2HPO4 (dibasic anhydrous), and 2.4 g (0.24% w/v) of KH2PO4 (monobasic anhydrous) in 800 mL of water. Adjust the pH to 7.4 with HCl or NaOH and add water to 1 L final volume.
2. 1× PBS: For 1 L of solution, dilute 100 mL of 10× PBS with 900 mL of ddH2O. Store at 4 °C and keep on ice while in use.


Fig. 1 PACT and iDISCO+ clearing of the intact adult mouse spinal cord. (a) Macroscopic images of an intact adult mouse spinal cord cleared with the PACT protocol, with representative images showing before clearing, after delipidation, and after RI matching in RIMS. (b) Macroscopic images of an intact adult mouse spinal cord cleared with the iDISCO+ protocol, with representative images showing before clearing, after dehydration and


Fig. 2 PACT and iDISCO+ clearing of the adult mouse muscle tissue. (a) Macroscopic images of an intact adult mouse extensor digitorum longus (EDL) muscle cleared with the PACT protocol, with representative images showing before clearing, after delipidation, and after RI matching in RIMS. (b) Macroscopic images of an intact adult mouse EDL muscle cleared with the iDISCO+ protocol, with representative images showing before clearing, after dehydration and embedding in agarose, and after RI matching in ethyl cinnamate. The background grid in (a) and (b) consists of 2.5 mm squares. (c) Lightsheet image of immunofluorescent staining for choline acetyltransferase (ChAT) of a sternomastoid muscle cleared by the PACT protocol, processed using Imaris to produce a 3D rendering

3. 4% Paraformaldehyde (PFA): For 250 mL of solution, dilute 31.25 mL of 32% PFA solution (EM grade) in 218.75 mL of 1× PBS. Store at 4 °C and keep on ice while in use.
4. Perfusion pump with tubing and a blunt tip needle.

2.2 PACT Clearing

1. 40% Acrylamide: Dissolve 40 g of acrylamide in 80 mL of water. Adjust the volume to 100 mL with water. Store at 4 °C.
2. A4P0 monomer solution (4% acrylamide): For 40 mL of solution, combine 4 mL of 10× PBS and 4 mL of 40% acrylamide in 32 mL of ddH2O. Chill at 4 °C for several hours or on ice for 30 min. Add 100 mg of VA-044 thermal initiator and mix until fully dissolved.
3. Boric Acid Buffer (BB): For a 1 M solution, dissolve 61.83 g (6.183% w/v) of boric acid and 12 g (1.2% w/v) of NaOH in 900 mL of water. Once the sodium hydroxide and boric acid are fully dissolved, add water to a final volume of 1 L and adjust the pH to 8.5 with NaOH.

Fig. 1 (continued) embedding in agarose, and after RI matching in ethyl cinnamate. The background grid in (a) and (b) consists of 2.5 mm squares. (c, d) Lightsheet images of PACT (c) and iDISCO+ (d) cleared spinal cord with immunofluorescent staining for NeuN, processed in Imaris to produce a 3D rendering of the thoracic region of the spinal cord. (e) (i) Lightsheet image of PACT cleared spinal cord with immunofluorescent staining for choline acetyltransferase (ChAT), processed in Imaris to produce a 3D rendering of the thoracic region of the spinal cord. (ii) Magnified view of motor neurons in the PACT cleared spinal cord from a ChAT-Cre x Confetti mouse. Motor neurons stochastically express different transgenic fluorophores (YFP in yellow and RFP in magenta)


4. 10% Sodium Dodecyl Sulfate (SDS): For 1 L, dissolve 100 g of SDS in 800 mL of ddH2O. Heat the solution to 68 °C to dissolve the SDS. Let the solution cool to RT and add the remaining water to 1 L.
5. Clearing Solution (8% SDS in 0.2 M BB): For 12.5 mL of solution, dilute 10 mL of 10% SDS with 2.5 mL of 1 M BB (a quick check of the resulting final concentrations is sketched after this list).
6. Boric Acid Buffer with Triton (BBT): For 1 L of solution, dilute 200 mL of 1 M BB in 799 mL of water and add 1 mL (0.1% v/v) of Triton X-100.
7. Immunostaining Buffer: For 1 mL of solution, combine 20 μL (2% v/v) of normal donkey serum and 0.01% (w/v) sodium azide in 980 μL of BBT.
8. 0.22 μm Syringe filter.
9. Single-use syringe (3, 5, or 10 mL).
10. 0.1 M Phosphate Buffer (PB): Dissolve 3.1 g of NaH2PO4 (monobasic; monohydrate) and 10.9 g of Na2HPO4 (dibasic; anhydrous) in 900 mL of water. Bring to a final volume of 1 L with water and a final pH of 7.4.
11. 0.02 M PB: Combine 200 mL of 0.1 M PB with 800 mL of water.
12. 1.5% Agarose: Dissolve 300 mg of low-melting temperature agarose in 20 mL of 0.02 M PB. Heat the solution to dissolve the agarose. Aliquot 1 mL into 1.5 mL microcentrifuge tubes and store at 4 °C.
13. Refractive Index Matching Solution (RIMS): For ~45 mL of RIMS, add 30 g of Nycodenz to 22.5 mL of 0.02 M PB and rock or stir for several hours. Once fully dissolved, measure the RI using a refractometer and adjust to a target RI of 1.455–1.458 with 0.02 M PB. Add sodium azide to a final concentration of 0.01% (w/v). Light-protect the solution with foil and store at 4 °C.
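As a quick sanity check of the final concentrations produced by the recipes above (the numbers are taken directly from the list; the helper function is ours), consider the following sketch:

```python
# Final concentration after mixing: stock concentration scaled by volume fraction.
def final_conc(stock_conc, stock_vol, total_vol):
    return stock_conc * stock_vol / total_vol

# Clearing Solution: 10 mL of 10% SDS + 2.5 mL of 1 M BB (12.5 mL total)
print(final_conc(10, 10, 12.5), "% SDS")        # 8.0 % SDS
print(final_conc(1, 2.5, 12.5), "M BB")         # 0.2 M BB

# BBT: 200 mL of 1 M BB plus 1 mL Triton X-100 made up to ~1 L
print(final_conc(1, 200, 1000), "M BB")         # 0.2 M BB
print(final_conc(100, 1, 1000), "% Triton")     # 0.1 % (v/v) Triton X-100
```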

2.3 iDISCO+ Clearing

Add sodium azide to a concentration of 0.02% (w/v) to all stock solutions.
1. PTx.2: For 1 L of solution, add 2 mL of Triton X-100 to 998 mL of 1× PBS and stir until fully dissolved.
2. Heparin Stock: For 5 mL of a 10 mg/mL solution, dissolve 50 mg of heparin sodium salt in 4 mL of water. Once dissolved, use water to bring the final volume to 5 mL.
3. PTwH: For 1 L of solution, add 2 mL (0.2% v/v) of Tween 20 to 997 mL of 1× PBS and stir until fully dissolved. Then add 1 mL of the 10 mg/mL heparin stock solution.
4. Permeabilization Solution: For 500 mL of solution, use 400 mL of PTx.2, 11.5 g (2.3% w/v) of glycine, and 100 mL (20% v/v) of dimethyl sulfoxide (DMSO).


Fig. 3 PACT and iDISCO+ clearing of the adult mouse brain hemispheres. (a) Macroscopic images of an adult mouse left brain hemisphere cleared with the PACT protocol, with representative images showing before clearing, after delipidation, and after agarose embedding and RI matching in RIMS. (b) Macroscopic images of an intact adult mouse right brain hemisphere cleared with the iDISCO+ protocol, with representative images showing before clearing, after dehydration and embedding in agarose, and after RI matching in ethyl cinnamate. The background grid in (a) and (b) consists of 2.5 mm squares. (c, d) Lightsheet images of immunofluorescent staining for tyrosine hydroxylase (TH) in the PACT cleared left brain hemisphere (c) and the iDISCO+ cleared right brain hemisphere (d). Images were processed in Imaris to produce 3D renderings

5. Blocking Solution: For 50 mL, use 42 mL PTx.2, 3 mL (6% v/v) of Normal Donkey Serum, and 5 mL (10% v/v) of DMSO.


6. 5% Peroxide Solution (H2O2): For 6 mL of solution, dilute 1 mL of 30% H2O2 in 5 mL of methanol.
7. Primary Immunolabeling Solution: For 1 mL of solution, use 920 μL of PTwH with 50 μL (5% v/v) of DMSO and 30 μL (3% v/v) of Normal Donkey Serum with the desired concentration of primary antibody.
8. Secondary Immunolabeling Solution: For 1 mL of solution, use 970 μL of PTwH with 30 μL (3% v/v) of Normal Donkey Serum and the desired concentration of secondary antibodies.
9. 2% Agarose: Dissolve 400 mg of low-melting temperature agarose in 20 mL of 0.02 M PB. Heat the solution to dissolve the agarose. Aliquot 1 mL into 1.5 mL microcentrifuge tubes and store at 4 °C.

2.4 Primary and Secondary Antibodies

1. Primary Antibodies: rabbit anti-NeuN (1:200), chicken anti-tyrosine hydroxylase (TH; 1:200), rabbit anti-TH (1:200), goat anti-choline acetyltransferase (ChAT; 1:500).
2. Secondary Antibodies (1:200): donkey anti-rabbit IgG Alexa Fluor® 488, donkey anti-goat IgG Alexa Fluor® 647, donkey anti-chicken IgG Alexa Fluor® 488 (see Note 1).

3 Methods

The following protocols describe the perfusion, dissection, and optical clearing of mouse tissue using two different clearing procedures, PACT and iDISCO+. All procedures are carried out at RT unless otherwise specified. All animal procedures must be conducted in strict compliance with the Institutional Animal Care and Use Committee (IACUC) or equivalent regulatory agency.

3.1 Mouse Transcardial Perfusion and Fixation

1. Set up a workspace in a fume hood with enough space to perform the perfusion. Set up a wire stage inside of a wide, shallow container for fluid collection on an absorbent mat.
2. Prepare fresh 1× PBS and 4% PFA and chill at 4 °C until ready for use. Keep on ice during the procedure to keep chilled.
3. Prime the perfusion pump with 5–10 mL of chilled 1× PBS (see Note 2).
4. Anesthetize or euthanize mice according to approved procedures. Once the animal appears anesthetized, perform a bilateral toe pinch on both hind paws to assess motor response. Do not proceed until the animal fails to respond to the toe pinch on both sides.


5. Perform the perfusion using 25–30 mL of chilled 1× PBS solution followed by 25–30 mL of chilled 4% PFA solution at a flow rate of approximately 5 mL per minute.
6. Carefully dissect out the organ or tissue wanted for clearing. Brain and muscle tissue can be harvested directly, and the spinal column can be removed if the spinal cord is wanted.
7. Place the dissected organs/tissue in a 15 mL conical with 10 mL of chilled 4% PFA solution, making sure the tissue is fully submerged.
8. Incubate samples in the 4% PFA solution at 4 °C overnight.
9. Transfer samples to 40 mL of 1× PBS in a 50 mL conical and allow samples to wash overnight at 4 °C.
10. Dissect out the spinal cord from the spinal column using microscissors and Adson forceps (see Note 3). Transfer the cord to a petri dish and carefully remove the meninges using fine forceps. Transfer the clean and dissected spinal cord to 14 mL of 1× PBS and wash for a minimum of 1 h (see Note 4).

3.2 PACT Clearing

1. Prepare 12.5 mL of A4P0 solution per sample (see Note 5).
2. Place each sample in 12.5 mL of A4P0 monomer solution in a 15 mL conical sealed with parafilm to prevent any leakage.
3. Incubate overnight at 4 °C with gentle rocking.
4. Place the conical within a rack in a vacuum desiccator, remove the cap, and apply house vacuum (100–150 Torr) to degas the samples for 30–60 min or until bubbles stop forming within the A4P0 monomer solution (see Note 6). Flood the desiccator chamber with nitrogen gas for 1–2 min. Quickly cap the conical and seal with parafilm.
5. Incubate the sample at 37 °C for 3 h.
6. Remove samples from the hydrogel solution (see Note 7). Wash samples in 14 mL of 1× PBS in a new 15 mL conical with gentle shaking for 5 min at RT.
7. Transfer the sample from PBS to a new conical with 12.5 mL of Clearing Solution. Seal the lid of the conical with parafilm and incubate with gentle rocking at 37 °C (see Note 8).
8. Replace the Clearing Solution every other day until samples have cleared (see Note 9).
9. Once cleared, transfer samples to a new 15 mL conical and wash with 14 mL of BBT with gentle shaking at RT for ~1 h. Repeat this wash for a total of 5 times, with the final wash going overnight (see Note 10).
10. Transfer the sample to 1 mL of Immunostaining Buffer with the desired primary antibodies in a 1.5 mL microcentrifuge tube (see Notes 11 and 12).


11. Incubate with gentle shaking at RT for 2–6 days (see Note 13).
12. Transfer samples to a 15 mL conical and wash with 14 mL of BBT solution containing 0.01% (w/v) sodium azide for ~1 h with gentle shaking. Repeat this wash 4 more times with the fifth wash going overnight.
13. Transfer samples to a new microcentrifuge tube with 1 mL of Immunostaining Buffer containing the appropriate secondary antibodies. When preparing secondary antibodies in Immunostaining Buffer, make 20% more solution than necessary (make 1.2 mL when you need 1 mL) and filter the solution with a 0.22 μm syringe filter prior to adding it to samples. The extra solution accounts for volume lost in the filter. Incubate samples at RT with gentle shaking for 2–6 days (see Notes 13 and 14).
14. Transfer the sample to a light-protected 15 mL conical and wash with 14 mL of BBT solution containing 0.01% (w/v) sodium azide for ~1 h with gentle shaking. Repeat this wash 4 more times with the fifth wash going overnight.
15. Prepare enough RIMS to equilibrate each sample in 5 mL and fill the sample chamber for lightsheet imaging.
16. Prepare 1.5% agarose and keep it melted using a heat block set to 42 °C (see Note 15).
17. Mount the sample in 1.5% agarose in a histology mold. Once solidified, trim the agarose around the tissue, leaving enough agarose on at least one side for mounting in the lightsheet microscope (see Note 16).
18. Place the mounted sample into 5 mL of RIMS, making sure that the entire sample is submerged in the solution.
19. Equilibrate samples for 1–2 days protected from light at RT. The sample is ready for lightsheet imaging once the sample and agarose are optically clear (see Note 17).

3.3 iDISCO+ Clearing

1. Following sample fixation and dissection, dehydrate samples using a series of methanol/H2O solutions (5 mL each of 20%, 40%, 60%, 80%, 100%; v/v) for 1 h each.
2. Incubate samples in 5 mL of 100% methanol for 1 h at RT and then place on ice for 30 min.
3. Incubate samples overnight in a 5 mL solution of 66% dichloromethane (DCM)/33% methanol (v/v) at RT with gentle shaking.
4. Briefly wash samples twice in 3 mL of 100% methanol at RT and then place on ice for 30 min.
5. Bleach samples in 5 mL of chilled 5% H2O2 solution overnight at 4 °C.


6. Rehydrate samples with a series of methanol/H2O solutions (5 mL each of 80%, 60%, 40%, 20%; v/v) for 1 h each, followed by a final wash in 5 mL of 1× PBS for 1 h.
7. Wash the sample twice with 12 mL of PTx.2 for 1 h per wash.
8. Incubate samples in 1 mL of Permeabilization Solution for 1 day at 37 °C (see Note 18).
9. Block samples with 1 mL of Blocking Solution for 1 day at 37 °C.
10. Incubate samples with primary antibody in 1 mL of Primary Immunolabeling Solution for 2–6 days at 37 °C (see Note 13).
11. Wash samples with 12 mL of PTwH 5 times for 1 h each with the fifth wash going overnight.
12. Incubate samples with secondary antibody in 1 mL of Secondary Immunolabeling Solution for 2–6 days at 37 °C (see Notes 13 and 14).
13. Wash samples with 12 mL of PTwH 5 times for 1 h each with the fifth wash going overnight.
14. Mount samples in 2% agarose in a histology mold. Once solidified, trim the agarose around the tissue, leaving enough agarose on at least one side for mounting in the lightsheet microscope (see Note 19).
15. Dehydrate samples using a series of methanol/H2O solutions (5 mL each of 20%, 40%, 60%, 80%, 100%, 100%; v/v) for 1 h each (see Note 20).
16. Incubate samples in a 5 mL solution of 66% DCM/33% methanol (v/v) with shaking for 3 h.
17. Incubate samples twice in 5 mL of 100% DCM for 15 min with shaking.
18. Incubate samples in 5 mL of ethyl cinnamate, making sure the entire sample is submerged. The sample is ready for lightsheet imaging when the sample and agarose are optically clear [14, 15] (see Note 21).

4 Notes

1. Antibodies not listed may still be compatible with either PACT or iDISCO+. Testing for compatibility and appropriate antibody concentration is necessary for each type of tissue.
2. If performing multiple perfusions, keep 1× PBS and 4% PFA on ice and aliquot 40 mL of each into marked conicals for use.
3. For spinal cord dissection, start at the ventral surface of the cervical end of the spinal column, and carefully make bilateral incisions through the pedicles of a single vertebra using


microscissors. Once a vertebra is cut on both sides, peel off the loose bone by carefully pulling it toward the lumbar end with forceps or tweezers. Be careful not to cut the spinal cord when making the bilateral incisions. Repeat down the length of the spinal column until the spinal cord is completely exposed. Then, starting from the lumbar end of the cord, cut the spinal nerves to remove the cord from the spinal column.
4. The spinal nerves that stray from the lumbar region can be used to facilitate removal of the meninges. Occasionally wet the spinal cord with PBS as needed to prevent the tissue from drying out. Exercise caution when removing meninges around the lumbar and cervical enlargements, as pulling too hard can damage the spinal cord. Removal of the meninges is a critical step toward uniform clearing. As a possible stopping point, samples can be kept in PBS with 0.01% (w/v) sodium azide at 4 °C.
5. VA-044 is the thermal initiator for the hydrogel formation. Once it is dissolved in solution, limit the time off ice and prolonged exposure to light until the monomer solution has diffused through the tissue.
6. When degassing the samples, gentle rocking of the desiccator on a rotary rocker or by gentle hand rocking helps facilitate the removal of gas bubbles from the sample.
7. The solution should be thick and viscous like corn syrup. Lack of viscosity may indicate ineffective hydrogel polymerization.
8. Make sure that the sample is able to invert inside the conical in order to facilitate movement of the SDS solution through the sample during the clearing process. The conical or rocker should be placed within a container to collect any SDS that may leak from the conical during the clearing process.
9. Various tissues clear at different speeds. Small muscle samples clear in about 1 day, whole spinal cords can take 5–7 days, and brain hemispheres 7–10 days. An indication of effective clearing is being able to clearly make out the volume markings on the conical through the center of the tissue.
10. Possible stopping point. Samples can be stored in BBT or 0.2 M BB with 0.01% (w/v) sodium azide at 4 °C.
11. Immunofluorescent staining may not be necessary to visualize compatible endogenous or transgenic fluorescence (such as tdTomato, YFP, and RFP) in the sample with PACT clearing.
12. Larger or smaller samples may need more or less staining buffer. Small samples such as small muscles or parts of the spinal cord can be stained using 500 μL of buffer in 0.6 mL microcentrifuge tubes, while brain hemispheres may require 3–5 mL of staining buffer in 5 mL tubes.


13. Depending on the thickness of the sample, immunofluorescent staining may take several days to adequately label the center of the sample. The antibody solution should be replaced every 2 days.
14. Once secondary antibodies are introduced, keep samples light protected at all times. This can be done with foil wrapped around the sample tubes and by using dark, light-blocking microcentrifuge tubes.
15. Once the agarose is made, divide it into 1 mL aliquots in microcentrifuge tubes and keep melted on a heat block. Any unused aliquots can be stored at 4 °C and remelted at 95 °C at a later time.
16. If you have a compatible lightsheet microscope, small samples such as muscles and spinal cords can be mounted in agarose inside of an insulin syringe. Remove the needle tip of a 0.5 cc insulin syringe and fill it about two-thirds of the way up with agarose. Place the sample in agarose and pull the sample and remaining agarose up into the syringe. The syringe should be nearly full of agarose with the sample located near the tip of the syringe. Be careful not to introduce any air bubbles near the sample or between the agarose and the syringe plunger.
17. Possible stopping point, as mounted tissues can be stored in RIMS with 0.01% (w/v) sodium azide at 4 °C. Mounted tissues may also be washed with 0.2 M BB for 1 h and broken out of the agarose and restained, remounted, or stored in 0.2 M BB with 0.01% (w/v) sodium azide at 4 °C.
18. In order to prevent oxidation of samples, all steps should be conducted using fully closed and filled tubes.
19. Syringe mounting of samples is not compatible with the dehydration steps of iDISCO+, so it is not recommended to attempt syringe mounting with iDISCO+ cleared samples.
20. Optional step: Samples can be left overnight in 100% methanol at room temperature after dehydration.
21. For imaging iDISCO+ cleared samples, ethyl cinnamate can be used in the lightsheet chamber in place of dibenzyl ether.

Acknowledgments

This work was funded by Mission Connect, a program of TIRR Foundation. Use of the Texas A&M Microscopy and Imaging Center is acknowledged.


References

1. Ariel P (2017) A beginner’s guide to tissue clearing. Int J Biochem Cell Biol 84:35–39. https://doi.org/10.1016/j.biocel.2016.12.009
2. Matryba P, Kaczmarek L, Gołąb J (2019) Advances in ex situ tissue optical clearing. Laser Photonics Rev 13(8). https://doi.org/10.1002/lpor.201800292
3. Ueda HR, Erturk A, Chung K, Gradinaru V, Chedotal A, Tomancak P, Keller PJ (2020) Tissue clearing and its applications in neuroscience. Nat Rev Neurosci 21(2):61–79. https://doi.org/10.1038/s41583-019-0250-1
4. Richardson DS, Lichtman JW (2015) Clarifying tissue clearing. Cell 162(2):246–257. https://doi.org/10.1016/j.cell.2015.06.067
5. Susaki EA, Ueda HR (2016) Whole-body and whole-organ clearing and imaging techniques with single-cell resolution: toward organism-level systems biology in mammals. Cell Chem Biol 23(1):137–157. https://doi.org/10.1016/j.chembiol.2015.11.009
6. Erturk A, Becker K, Jahrling N, Mauch CP, Hojer CD, Egen JG, Hellal F, Bradke F, Sheng M, Dodt HU (2012) Three-dimensional imaging of solvent-cleared organs using 3DISCO. Nat Protoc 7(11):1983–1995. https://doi.org/10.1038/nprot.2012.119
7. Renier N, Wu Z, Simon DJ, Yang J, Ariel P, Tessier-Lavigne M (2014) iDISCO: a simple, rapid method to immunolabel large tissue samples for volume imaging. Cell 159(4):896–910. https://doi.org/10.1016/j.cell.2014.10.010
8. Chung K, Wallace J, Kim SY, Kalyanasundaram S, Andalman AS, Davidson TJ, Mirzabekov JJ, Zalocusky KA, Mattis J, Denisin AK, Pak S, Bernstein H, Ramakrishnan C, Grosenick L, Gradinaru V, Deisseroth K (2013) Structural and molecular interrogation of intact biological systems. Nature 497(7449):332–337. https://doi.org/10.1038/nature12107
9. Tainaka K, Murakami TC, Susaki EA, Shimizu C, Saito R, Takahashi K, Hayashi-Takagi A, Sekiya H, Arima Y, Nojima S, Ikemura M, Ushiku T, Shimizu Y, Murakami M, Tanaka KF, Iino M, Kasai H, Sasaoka T, Kobayashi K, Miyazono K,

Morii E, Isa T, Fukayama M, Kakita A, Ueda HR (2018) Chemical landscape for tissue clearing based on hydrophilic reagents. Cell Rep 24(8):2196–2210.e2199. https://doi.org/10.1016/j.celrep.2018.07.056
10. Yang B, Treweek JB, Kulkarni RP, Deverman BE, Chen CK, Lubeck E, Shah S, Cai L, Gradinaru V (2014) Single-cell phenotyping within transparent intact tissue through whole-body clearing. Cell 158(4):945–958. https://doi.org/10.1016/j.cell.2014.07.017
11. Renier N, Adams EL, Kirst C, Wu Z, Azevedo R, Kohl J, Autry AE, Kadiri L, Umadevi Venkataraju K, Zhou Y, Wang VX, Tang CY, Olsen O, Dulac C, Osten P, Tessier-Lavigne M (2016) Mapping of brain activity by automated volume analysis of immediate early genes. Cell 165(7):1789–1802. https://doi.org/10.1016/j.cell.2016.05.007
12. Treweek JB, Chan KY, Flytzanis NC, Yang B, Deverman BE, Greenbaum A, Lignell A, Xiao C, Cai L, Ladinsky MS, Bjorkman PJ, Fowlkes CC, Gradinaru V (2015) Whole-body tissue stabilization and selective extractions via tissue-hydrogel hybrids for high-resolution intact circuit mapping and phenotyping. Nat Protoc 10(11):1860–1896. https://doi.org/10.1038/nprot.2015.122
13. Gradinaru V, Treweek J, Overton K, Deisseroth K (2018) Hydrogel-tissue chemistry: principles and applications. Annu Rev Biophys 47:355–376. https://doi.org/10.1146/annurev-biophys-070317-032905
14. Klingberg A, Hasenberg A, Ludwig-Portugall I, Medyukhina A, Mann L, Brenzel A, Engel DR, Figge MT, Kurts C, Gunzer M (2017) Fully automated evaluation of total glomerular number and capillary tuft size in nephritic kidneys using lightsheet microscopy. J Am Soc Nephrol 28(2):452–459. https://doi.org/10.1681/ASN.2016020232
15. Masselink W, Reumann D, Murawala P, Pasierbek P, Taniguchi Y, Bonnay F, Meixner K, Knoblich JA, Tanaka EM (2019) Broad applicability of a streamlined ethyl cinnamate-based clearing procedure. Development 146(3). https://doi.org/10.1242/dev.166884

Chapter 13

Expansion Microscopy of Larval Zebrafish Brains and Zebrafish Embryos

Ory Perelsman, Shoh Asano, and Limor Freifeld

Abstract

Since its introduction in 2015, expansion microscopy (ExM) has allowed imaging of a broad variety of biological structures in many models at nanoscale resolution. Here, we describe in detail a protocol for the application of ExM in whole brains of zebrafish larvae and intact embryos, and discuss the considerations involved in the imaging of nonflat, whole-organ or whole-organism samples more broadly.

Key words Superresolution microscopy, Expansion microscopy, Zebrafish, Neuroscience, Development

1 Introduction

Expansion microscopy (ExM) is a fluorescent nanoscopy technique that allows capturing how molecules of interest are organized within all types of biological samples, including large intact tissues, at nanoscale resolution. Since ExM is based on the physical magnification of the sample itself, made possible by its embedding in an expandable hydrogel, it obviates the need for specialized optics and microscopy systems. In standard ExM, the sample is expanded by a factor of ~4, and the effective resolution is improved by this factor, from ~300 nm in standard confocal microscopes to ~70–80 nm. While ExM was first demonstrated to work in cultured cells and mouse brain slices [1, 2], it has since been applied to a far broader variety of model organisms and sample types, including whole brains – of larval zebrafish [3, 4] and flies [5] – or entire organisms such as C. elegans [6] and both zebrafish and Drosophila embryos [3, 7] (reviewed in ref. [8]). Imaging of expanded samples is facilitated by the fact that the samples are clear and mostly composed of water, implying minimal light scattering and background. Thus, expanded samples facilitate obtaining both high-resolution and high-quality images.


In this chapter, we describe a protocol for ExM imaging of zebrafish embryos and of the brains of zebrafish larvae; this is important for allowing the incorporation of nanoscale structural information into studies of development and neural function often conducted using these models [3]. Our protocol is based on, and very similar to, the original proExM protocol, which has been described in detail [9]. While we focus on the details of tissue preparation, handling, and expansion protocol steps for these specific sample types, we also point out the considerations and modifications that can be made to generalize this protocol to other nonflat whole-organ or whole-organism samples. Imaging such samples allows for capturing the spatial distribution of proteins at nanoscale resolution and putting this information in the context of an entire organ or organism. We note that since ExM increases the original sample volume by two orders of magnitude (the exact number varies between samples), the volume that must be imaged can be considerable. For the zebrafish samples described here it reaches several cubic millimeters. This size must be taken into consideration in planning the imaging and analysis pipelines, and we discuss these considerations as well. In its protein-preservation version (proExM, [2, 10]), ExM is applicable to many tissue samples that have been stained using standard antibodies or that express fluorescent proteins.

2 Materials

2.1 Anchoring

1. 10 mg/mL Acryloyl X (AcX): Dissolve 5 mg of AcX (ThermoFisher Scientific, A20770) in 500 μL of anhydrous DMSO (ThermoFisher Scientific, D12345) (see Note 1), and store in 20 μL aliquots at −20 °C for up to 2 months.
2. 10× PBS.
3. Polystyrene 24-well plate.
4. Glass Pasteur pipette with a rubber or latex bulb or a 10 mL pipette pump.
5. Diamond scriber.

2.2 Gelation

1. Ice bucket with ice.
2. Monomer solution stock (also referred to as “Stock X”): Mix 2.25 mL of 38 g/100 mL sodium acrylate (Sigma-Aldrich, 408220), 0.625 mL of 40 g/100 mL acrylamide (Fisher Scientific, BP1402-1; see Note 2), 0.75 mL of 2 g/100 mL bisacrylamide solution (Fisher Scientific, BP1404-250), 4 mL of 29.2 g/100 mL sodium chloride, 1 mL of 10× PBS, and 0.775 mL of ultrapure water. This gives rise to 9.4 mL of stock solution, which we divide into 1 mL aliquots (in Eppendorf


tubes) stored at −20 °C. This stock solution can be stored for several months before use.
3. 4-Hydroxy-TEMPO (4-HT) (Alfa Aesar, A12497): Make a 0.005 g/mL stock of 1 mL in ultrapure water. Store at −20 °C. This stock solution can be stored for several months before use.
4. N,N,N′,N′-Tetramethylethane-1,2-diamine (TEMED) (Sigma-Aldrich, T7024): Make a 0.1 g/mL stock of 1 mL in ultrapure water. Store at −20 °C. Long-term storage (for over a month) is not recommended.
5. Ammonium persulfate (APS) (ThermoFisher Scientific, 17874): Make a 0.1 g/mL stock of 1 mL in ultrapure water. Store at −20 °C. Long-term storage (for over a month) is not recommended.
6. Silicone isolators (see Note 3).
7. #1 rectangular coverslips.
8. Plain microscope slides.
9. Polystyrene 12-well plate.
10. An incubator that can be set to a temperature of 37 °C.

2.3 Digestion

1. Stock solution 1: Mix 1 mL of 1 M Tris, pH 8.0, with 1 mL of 10% Triton X-100 solution, 1 mL of 0.5 M EDTA, and 7 mL of ultrapure water to obtain 10 mL of stock digestion solution (final concentrations: 100 mM Tris, 50 mM EDTA, 1% Triton X-100). This solution can be stored at 4 °C for weeks.
2. Stock solution 2: Mix 0.935 g of sodium chloride in 10 mL of water to obtain a 1.6 M stock.
3. Proteinase K (NEB, P8107S).
4. Utrecht series 6150 watercolor brush, size 1 (or similar).
5. Single-edge razor blade or scalpel.
6. An incubator that can be set to a temperature of 50 °C.

2.4 Expansion and Imaging

1. 1× PBS.
2. DAPI (Sigma-Aldrich, D9542): Make a stock of 2.855 mM DAPI by dissolving 1 mg in 1 mL of 1× PBS.
3. Polystyrene 6-well plate.
4. Disposable plastic pipettes.
5. 0.1% (w/v) poly-L-lysine solution.
6. 60 mm × 15 mm petri dishes.
7. Coverslips/disposable spatulas.
8. Utrecht series 6150 watercolor brush, size 1 (or similar).


9. Optional: Absorbent paper points (e.g., Fisher Scientific, 50-379-946).

3 Methods

This section begins with a stained and washed larval zebrafish brain or zebrafish embryo sample that is kept in PBS at 4 °C and is protected from light at all times (see Note 4 regarding previous steps and ref. [11] for a recommended staining protocol). We recommend imaging the sample pre-expansion before applying the steps listed below to verify the staining quality and to allow subsequent computation of the expansion factor by comparing the size of the sample (or of labeled identifiable features within it) before and after expansion.
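One simple way to carry out that computation once both measurements are in hand is sketched below; the measured values are placeholders, and only the ~4× factor and ~300 nm confocal resolution limit come from the text above.

```python
# Expansion factor from a feature measured before and after expansion,
# plus the resulting effective resolution and volume increase.
pre_um, post_um = 100.0, 420.0          # hypothetical pre/post measurements of a feature
factor = post_um / pre_um               # linear expansion factor (~4x expected)
effective_res_nm = 300.0 / factor       # ~300 nm confocal resolution / expansion factor
volume_increase = factor ** 3           # why imaged volumes grow by ~two orders of magnitude
print(f"{factor:.1f}x linear, ~{effective_res_nm:.0f} nm effective resolution, "
      f"~{volume_increase:.0f}x volume")
```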

3.1 Anchoring

1. Prepare the final AcX solution by diluting the stock 1:100 (to a final concentration of 0.1 mg/mL) in 1× PBS (see Note 5).
2. Cut the tip of a glass Pasteur pipette using a diamond pen such that the tip diameter will be significantly larger than the sample.
3. Transfer the sample, using the pipette, to a well in a 24-well plate (see Note 6, Fig. 1a). If handling more than one sample, place each sample in a separate well (see Note 7). Remove any excess PBS around the sample to the extent possible (we use a fine, 10 μL pipette to get as close as possible to the sample).
4. Add 300 μL of AcX solution to each well containing a sample (the sample should be covered in this solution; hence a larger volume should be used if necessary).
5. Keep samples in the anchoring solution for at least 6 h at room temperature (the reaction can also be left overnight). Make sure the samples are protected from light.
6. Replace the AcX solution with 1× PBS and wash 2 × 15 min in 1× PBS (see Note 8).

3.2 Gelation

1. Prepare an ice bucket and keep all gelling-related solutions (4-HT, APS, TEMED, and "Stock X") on ice throughout the following procedures.
2. Prepare a gelation chamber on a plain microscope slide using silicone isolators as spacers (Fig. 1b). The isolators must be firmly attached to the slide to prevent liquids from leaking out of the wells.
3. Move one sample at a time to a silicone well, using a glass Pasteur pipette with a cut tip as in the anchoring step, with a minimal amount of PBS around it (see Note 9).



Fig. 1 Sample hydrogel embedding and handling procedures. (a) A 24-well plate with samples immersed in AcX. The glass Pasteur pipette used for sample transfer is shown. (b) The gel-chamber components, consisting of a silicone isolator, a coverslip, and a coverglass. (c) The assembled gel-chamber containing gels (in the top row). (d) Removing the coverslip covering the chamber after gelation. (e) Gels on a slide after chamber disassembly. (f) Cutting the gel around the sample. (g) Moving the gel to a 12-well plate for digestion using a paintbrush

4. To make 200 μL of gelling solution: First, prepare 196 μL of no-APS gelling solution by mixing 188 μL of monomer solution (Stock X, see Note 10) with 4 μL of 4-HT stock (0.005 g/mL) and 4 μL of TEMED stock (0.1 g/mL). Add 4-HT before adding TEMED, and thoroughly mix the solution using a vortex mixer after adding each ingredient.
5. Carefully remove the PBS around the sample (see Note 9).
6. Quickly add 4 μL APS to the solution made in step 4 to form the final gelling solution. Mix using a vortex mixer, and apply 40 μL of it to each sample-containing well inside the chamber. We find 40 μL of liquid to completely fill the well (see Note 11).
7. Close the chamber by carefully placing a coverslip on top of it (the coverslip must be large enough to cover all wells, see Note 12, Fig. 1c).
8. Place the gelling chamber at 4 °C for 45 min to 1 h (see Note 13). Protect it from light. The gel will still be in a liquid state at the end of this step.
9. Place the gelling chamber at 37 °C, in a humid environment (e.g., placing a 200 mL beaker of water near the gels), for 2–3 h (see Note 14). Protect it from light. At the end of this step, the gel should be fully polymerized.

3.3 Digestion

1. Make 1 mL of digestion buffer for each sample by mixing together equal amounts of the two stock solutions (stock solution 1 and stock solution 2).
2. Carefully remove the top coverslip from the chamber (Fig. 1d). Then, remove the silicone isolator from the slide. The gels may stick to either the coverslip or to the microscope slide (Fig. 1e). Cut out the gel portion containing the sample (see Note 15, Fig. 1f). If the sample is on the coverslip, it can be moved back to the slide prior to cutting (see Note 16). Dispose of the remaining gel and the previously removed coverslip.
3. Add Proteinase K (1:50, or a final concentration of 16 units/mL) to the digestion buffer, and mix gently (see Note 17).
4. Fill one well with digestion solution. If the gel is cut to a square with a side of about 2 mm, 1 mL of mixed digestion solution is enough for one gel placed in a 12-well plate.
5. Transfer the gel gently by "sliding" it from the chamber into the well using a brush (see Note 16, Fig. 1g).
6. Place the plate at 50 °C overnight.

3.4 Expansion

1. Wash out the digestion solution once with 1× PBS (see Note 18). Disposable plastic pipettes can be used for this and all subsequent washes.
2. If desired, it is possible to stain the gel with DAPI at this stage (see Note 19). In this manner, the entire sample will be stained, which may facilitate its localization under the microscope during post-expansion imaging. For this purpose, the digestion solution should be washed 3 × 5 min with 1× PBS. Dilute the DAPI stock 1:10,000 in 1× PBS to obtain a final concentration of 285.5 nM. Remove the PBS and apply 1 mL of DAPI at this concentration for 20 min at room temperature. Wash again 3 × 5 min with 1× PBS.
3. Remove all liquid from the well. Gently load the gel onto a cut coverslip, using a brush, and slide it into a 6-well plate (see Note 20).
4. Wash 4 × 20 min with ultrapure water (see Note 21). Washing must be continued until the gel volume stops increasing during washes.
5. We recommend conducting low-resolution imaging of the gel at this point, to establish the desired up/down orientation for subsequent imaging. Since expanded gels are thick, it is beneficial to orient them such that the target structures are positioned as close to the objective as possible (i.e., close to the gel top if imaging with an upright microscope, and close to the gel bottom if imaging with an inverted one).
6. For high-resolution, motion-artifact-free imaging, prepare a poly-L-lysine–coated small petri dish (see Note 22). Apply poly-L-lysine for 20 min and then wash 3 × 2 min with ultrapure water. Let the dish dry in air.
7. When the gel is fully expanded, remove the water from the well, gently load the gel onto a coverslip (or a plastic spatula) using a brush, and slide it to the center of the petri dish (see Note 23).



8. Let the gel stick to the dish for 1 min, and then gently fill the dish with ultrapure water, until the gel is fully covered. The gel is now ready for imaging.

3.5 Imaging Recommendations

Expanded gels are completely transparent and invisible when immersed in water. While this is highly beneficial for imaging, it can make it challenging to localize the sample within the gel at the start of an imaging session. We thus recommend first localizing the sample within the gel using a fluorescent stereoscope (example images of a gelled larval zebrafish brain sample, before and after expansion, obtained with an Olympus MVX10 macro fluorescence microscope, are shown in Fig. 2). The objective of an optical sectioning microscope can then be immediately positioned at the sample location within the gel (see images of planes within expanded samples, captured with optical sectioning microscopes, Fig. 3). We further note that, due to the expanded sample size, imaging with a standard confocal microscope is highly time-consuming unless only a very small portion of the sample is imaged. We also find such microscopes to bleach the fluorescent molecules within the gels quite rapidly. Thus, light-sheet or spinning disk confocal microscopes constitute a better choice. Similarly, long-working-distance or water dipping objectives are recommended to allow imaging the entire depth of the gel.

3.6 Image Processing and Analysis

ExM image processing largely follows standard practices for conventional fluorescence microscopy images. Thus, recommendations for a general workflow vary based on the instrumentation used, and naturally also on the research question being addressed. Nevertheless, some ExM studies have provided significant detail regarding the processing pipelines applied to the generated datasets (see, e.g., [5, 12, 13]). In general, workflows often start with broadly applicable preprocessing steps, such as illumination correction and 3D deconvolution, before image registration and fusing of multiple datasets into one seamless volume (if applicable). Luckily, computational tools for these particular tasks are plentiful and often free of charge. They range from simple plugins for image processing software (such as ImageJ or napari [14]), to programs written in MATLAB/Python, to stand-alone applications. To name a few examples, illumination correction can be performed using CIDRE [15] or Intensify3D [16]; registration with BigWarp [17] or CMTK [18]; and stitching with BigStitcher [19] or using distributed gradient-domain processing [20] (note that this is not a comprehensive list). Once the final corrected and optimized image has been generated, it will be subjected to image visualization and analysis. These particular tasks can range from straightforward to daunting, and this largely depends on the overall 3D image size and the type of



Fig. 2 Pre- and post-expansion low-resolution imaging examples. An HuC:H2B-GCaMP6s larval zebrafish brain (missing a part of the hindbrain) before and after expansion. The GCaMP signal was amplified by staining with chicken anti-GFP and Alexa 488 anti-chicken antibodies. (a, b) Pre- and post-expansion images taken under the same conditions, reflecting the significant increase in the sample size. Scale bar: 500 μm. (c, d) Same images as in A and B, rotated and cropped to better reflect sample shape. Scale bar: 100 μm

Fig. 3 Post-expansion imaging examples. (a) An expanded larval zebrafish brain imaged with a Zeiss Z.1 light-sheet microscope (using a 20×, 1.0 NA objective). (b) A ~3.5 h post-fertilization embryo imaged with a spinning disk microscope (3i, using a 20×, 1.0 NA objective). Scale bars: 10 μm (physical size post-expansion ~40 μm)

analyses required. The specialized (often costly) visualization and analysis tools necessary for larger images convert images into a more accessible container format so that only relevant subregions of the images need to be displayed or analyzed. Self-made 3D image analysis workflows in relevant programming languages (such as MATLAB, Python, Java, etc.) are very flexible and can often yield impressive results, but are less accessible for non-computer scientists. As in many image processing tasks, the user benefits greatly from a workstation with high disk bandwidth (SSDs), a large amount of RAM, many CPU cores, and a powerful graphics card. A few examples of tools that may be useful in this context are ClearVolume [21], BigDataViewer [22], and Vaa3D [23]. In summary, while the increased file size of the datasets often requires more computational resources, ExM data enjoys



exceptionally low levels of autofluorescence, an effective de-crowding of highly clustered signals, and a near absence of many imaging aberrations, all of which improve many of the processing and analysis steps. Unlocking the full potential of ExM therefore requires combining the increase in resolution with the proper image processing and analysis steps.
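As a small, optional illustration of the kind of lightweight inspection step that can precede these pipelines, the sketch below loads a single-channel ExM volume and displays it in napari with the voxel size rescaled to pre-expansion units. It assumes the tifffile and napari packages are installed; the file name, voxel size, and expansion factor are hypothetical placeholders.

```python
# Inspect an expanded-sample volume in napari, annotated with biological
# (pre-expansion) voxel sizes so on-screen distances reflect the original tissue.
import tifffile
import napari

expansion_factor = 4.0               # measured as described in Subheading 3 above
voxel_size_um = (2.0, 0.3, 0.3)      # post-expansion (z, y, x) voxel size from the microscope

volume = tifffile.imread("expanded_brain_488.tif")  # hypothetical 3D stack (z, y, x)

# Divide by the expansion factor to express the scale in pre-expansion units.
scale_biological = tuple(s / expansion_factor for s in voxel_size_um)

viewer = napari.Viewer()
viewer.add_image(volume, name="ExM volume", scale=scale_biological, colormap="green")
napari.run()  # start the viewer event loop when running as a script
```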

4 Notes

1. We recommend adding the DMSO to the original AcX bottle and mixing carefully.
2. We use this preprepared acrylamide solution for convenience and safety considerations. Since its concentration is distinct from that of the stock used in the original protocol, we adjust the monomer solution stock recipe accordingly.
3. We find it beneficial to use isolators of fixed thickness for casting gels around thick samples instead of assembling a chamber by stacking multiple coverslips to establish the required thickness (as in ref. [4]). Nevertheless, the standard coverslip-based chamber described in ref. [9] can also be used (as in ref. [3]).
4. Since signal is lost in the expansion process (due to the spread of fluorophores over a large volume, fluorophore quenching during gel polymerization, and potential additional fluorescence loss during digestion), high-quality staining with antibodies providing well-preserved signals in ExM (see [2]) is a prerequisite for successful ExM imaging. We use the staining protocol described in ref. [11], applying it to isolated larval zebrafish brains (dissected shortly after fixation) rather than whole larval zebrafish. We perform all staining (of either embryos or larvae) in 2 mL Eppendorf tubes containing 10–20 samples and use relatively high antibody concentrations (1:200–400 for both primaries and secondaries) and long incubation times (~3 days at 4 °C). Samples must be protected from light to the extent possible at all times.
5. It is important to allow AcX to diffuse through the tissue and react with it in its entirety. AcX can be diluted in an alternative buffer (e.g., MES-based saline adjusted to pH 6.0) to facilitate this for large tissue samples [2].
6. We find our small samples to be very "sticky," and thus to tend to nonreversibly attach to plastic pipettes. This can also happen with glass pipettes, and we thus recommend placing the sample container close to the well plate and making the transfer as quickly as possible.
7. We separate samples to maintain their identities throughout pre-expansion imaging, expansion, and post-expansion imaging.



8. The sample, in 1× PBS, can be stored at 4 °C at this stage, but for best results, we recommend continuing to gelation immediately after this step.
9. The sample should be immersed in PBS so that it does not dry out before gelling takes place, yet the small wells can only contain ~40 μL of fluid, and thus care must be taken not to flood them when transferring the samples into them. In addition, all PBS around the sample must be removed just prior to gelation so as not to dilute the gelling solution.
10. The monomer solution must be thoroughly thawed and mixed, using a vortex mixer, before its use. Make sure the solution is completely clear before using it, as some of its components may precipitate before freezing.
11. When adding the gelling solution, try to position the sample as close to the center of the well as possible, so that gel polymerization around it will be optimal.
12. It is important not to trap air inside the chamber when placing the coverslip on top of it, which requires placing it in one smooth, quick motion: first placing one edge of the coverslip on the spacer, then moving the other edge down onto it.
13. We extended the duration of this incubation with respect to the original protocol [9] to provide sufficient time for the gel monomer solution to diffuse throughout our thick sample. An incubation time of 1 h was found to suffice for tissues up to 500 μm thick (see also [3, 5]). More broadly, the duration of this incubation step should be adjusted according to the size and type of the sample being expanded.
14. The duration allowed for polymerization here is also extended with respect to the original proExM protocol, and is sufficient for samples up to 500 μm thick.
15. To minimize the physical impact of this step on the gel, we recommend placing the razor blade/scalpel at a minimal, yet safe, distance from the sample edge and cutting through the thickness of the gel by moving the blade from top to bottom. Then, the blade can be used to move the excess gel away from the sample.
16. If the gel sticks to the slide or coverslip, it may help to immerse it in a small amount of digestion solution, so that it can be released using the brush.
17. The digestion protocol may need to be adjusted according to the properties of the tissue being digested. Full homogenization must be achieved in this step to allow isotropic gel expansion. A fully expanded gel has smooth boundaries and is completely clear; any deviation from this constitutes an indication that the digestion protocol must be improved. For larval zebrafish brains and embryos we found it sufficient to use a final concentration of 25 mM EDTA (following [12]), a relatively high concentration of Proteinase K, and a high digestion temperature, with respect to the original proExM protocol [9]. It is also possible to avoid using Proteinase K in the digestion step, instead applying a disruption/denaturation procedure using, for example, SDS [2, 24, 25].
18. Note that after the digestion step, the gel is slightly expanded with respect to its original size.
19. While AcX does not mediate binding of DNA to the gel, DNA likely gets trapped in the gel, allowing its staining with DAPI. Nevertheless, an alternative anchoring scheme must be used to localize nucleotides in expanded gels [9, 12].
20. A disposable spatula can also be used to "pick up" the gel from one well and move it to another. Care must be taken so as not to physically damage the gel at this point.
21. When the well is filled with water, it is impossible to see the gel within it. Thus, when removing water from the well during washes, it is important to avoid applying a suction force to the gel with the pipette. We find it helpful to direct the pipette to an edge of the well until the gel emerges from the water, allowing the pipette to be placed next to it. It can be advantageous to use disposable plastic pipettes with a small tip in this step.
22. It is possible to minimize the amount of poly-L-lysine used by drawing a circle of about 2 cm on the bottom of the petri dish and applying the solution only above this area (instead of throughout the dish).
23. The gel must be quite dry to attach to the poly-L-lysine–treated dish. Thus, it is useful to move it over the coverslip and dry out the remaining water (e.g., using absorbent paper points) before transferring the gel to the dish.

Acknowledgments The authors thank Ms. Yarden Levinsky for help with demonstration of the expansion procedure, Mr. Nitay Aspis for ExM imaging of a larval zebrafish brain and Dr. Nitsan Dahan from the LS&E microscopy core facility for technical assistance with light-sheet microscopy imaging. We also thank the Baier lab for kindly providing us with the s1181Et;UAS:Kaede fish [26] shown in Fig. 3a and the Engert lab for kindly providing us with the actb2: H2B-EGFP fish [27] shown in Fig. 3b. Limor Freifeld is funded by the Zuckerman STEM Leadership Program.



References 1. Chen F, Tillberg PW, Boyden ES (2015) Expansion microscopy. Science 347:543–548 2. Tillberg PW, Chen F, Piatkevich KD et al (2016) Protein-retention expansion microscopy of cells and tissues labeled using standard fluorescent proteins and antibodies. Nat Biotechnol 34:987–992 3. Freifeld L, Odstrcil I, Fo¨rster D et al (2017) Expansion microscopy of zebrafish for neuroscience and developmental biology studies. Proc Natl Acad Sci 114:E10799–E10808 4. Mu Y, Bennett DV, Rubinov M et al (2019) Glia accumulate evidence that actions are futile and suppress unsuccessful behavior. Cell 178: 27–43 5. Gao R, Asano SM, Upadhyayula S et al (2019) Cortical column and whole-brain imaging with molecular contrast and nanoscale resolution. Science 363 6. Yu C-C (Jay), Barry NC, Wassie AT, et al (2020) Expansion microscopy of C. elegans. eLife 9:e46249 7. Tsai A, Muthusamy AK, Alves MR et al (2017) Nuclear microenvironments modulate transcription from low-affinity enhancers. elife 6: e28975 8. Tillberg PW, Chen F (2019) Expansion microscopy: scalable and convenient superresolution microscopy. Annu Rev Cell Dev Biol 35:683–701 9. Asano SM, Gao R, Wassie AT et al (2018) Expansion microscopy: protocols for imaging proteins and RNA in cells and tissues. Curr Protoc Cell Biol 80:e56 10. Chozinski TJ, Halpern AR, Okawa H et al (2016) Expansion microscopy with conventional antibodies and fluorescent proteins. Nat Methods 13:485–488 11. Randlett O, Wee CL, Naumann EA et al (2015) Whole-brain activity mapping onto a zebrafish brain atlas. Nat Methods 12: 1039–1046 12. Zhao Y, Bucur O, Irshad H et al (2017) Nanoscale imaging of clinical specimens using pathology-optimized expansion microscopy. Nat Biotechnol 35:757–764 13. Alon S, Goodwin DR, Sinha A et al (2021) Expansion sequencing: spatially precise in situ transcriptomics in intact biological systems. Science 371 14. napari contributors (2019) napari: a multidimensional image viewer for python. https:// doi.org/10.5281/zenodo.3555620

15. Smith K, Li Y, Piccinini F et al (2015) CIDRE: an illumination-correction method for optical microscopy. Nat Methods 12:404–406 16. Yayon N, Dudai A, Vrieler N et al (2018) Intensify3D: normalizing signal intensity in large heterogenic image stacks. Sci Rep 8:4311 17. Bogovic JA, Hanslovsky P, Wong A, et al (2016) Robust registration of calcium images by learned contrast synthesis. In: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), pp. 1123–1126 18. Rohlfing T, Maurer CR (2003) Nonrigid image registration in shared-memory multiprocessor environments with application to brains, breasts, and bees. IEEE Trans Inf Technol Biomed 7:16–25 19. Ho¨rl D, Rusak FR, Preusser F et al (2019) BigStitcher: reconstructing high-resolution image datasets of cleared and expanded samples. Nat Methods 16:870–874 20. Kazhdan M, Surendran D, Hoppe H (2010) Distributed gradient-domain processing of planar and spherical images. ACM Trans Graph 29:1–11 21. Royer LA, Weigert M, Gu¨nther U et al (2015) ClearVolume: open-source live 3D visualization for light-sheet microscopy. Nat Methods 12:480–481 22. Pietzsch T, Saalfeld S, Preibisch S et al (2015) BigDataViewer: visualization and processing for large image data sets. Nat Methods 12: 481–483 23. Peng H, Bria A, Zhou Z et al (2014) Extensible visualization and analysis for multidimensional images using Vaa3D. Nat Protoc 9:193–208 24. Damstra HGJ, Mohar B, Eddison M et al (2021) Visualizing cellular and tissue ultrastructure using ten-fold robust expansion microscopy (TREx). bioRxiv. https://doi. org/10.1101/2021.02.03.428837 25. Sarkar D, Kang J, Wassie AT et al (2020) Expansion revealing: Decrowding proteins to unmask invisible brain nanostructures. bioRxiv. https://doi.org/10.1101/2020.08.29. 273540 26. Scott EK (2009) The Gal4/UAS toolbox in zebrafish: new approaches for defining behavioral circuits. J Neurochem 110:441–456 27. Xiong F, Ma W, Hiscock TW et al (2014) Interplay of cell shape and division orientation promotes robust morphogenesis of developing epithelia. Cell 159:415–427

Part IV Super-Resolution Approaches

Chapter 14

Super-Resolution Radial Fluctuations (SRRF) Microscopy

Jayme Salsman and Graham Dellaire

Abstract

Super-resolution Radial Fluctuations (SRRF) imaging is a computational approach to fixed and live-cell super-resolution microscopy that is highly accessible to life science researchers since it uses common microscopes and open-source software plugins for ImageJ. This allows users to generate super-resolution images using the same equipment, fluorophores, fluorescent proteins and methods they routinely employ for their studies without specialized sample preparations or reagents. Here, we discuss a step-by-step workflow for acquiring and analyzing images using the NanoJ-SRRF software developed by the Ricardo Henriques group, with a focus on imaging chromatin. Increased accessibility of affordable super-resolution imaging techniques is an important step in extending the reach of this revolution in cellular imaging to a greater number of laboratories.

Key words SRRF, Super-resolution, Chromatin, sCMOS, EMCCD

1 Introduction

The wavelength of visible light limits our ability to resolve cellular structures smaller than 200–300 nm. This physical limitation of visible light for imaging subcellular structures can be overcome using imaging techniques that either employ higher energy photons and electrons, like X-ray and electron microscopy [1, 2]—which are not amenable to live-cell imaging—or by using specialized super-resolution microscopy systems such as structured illumination (SIM) and stimulated emission depletion (STED) microscopy [3, 4]. However, each of these methods requires increasingly specialized equipment and reagents that are not widely available to all research institutions or laboratories. A potential solution for the average life science researcher with access to a widefield or confocal microscope is super-resolution radial fluctuations (SRRF) imaging, which is a novel computational approach to fixed and live-cell super-resolution microscopy developed by Ricardo Henriques and colleagues [5, 6]. This method takes advantage of the inherent fluctuation of emitted photons





from a fluorophore or fluorescent protein over time, which occurs to varying degrees as these molecules (or proteins) transition through dark nonfluorescent states and interact with their environment [7–9]. These oscillations in emitted fluorescence are captured in a time series of 100 to 1000s of images using widely available scientific complementary metal oxide semiconductor (sCMOS) or electron-multiplying charge-coupled device (EMCCD) cameras through a microscope. Once captured, the SRRF algorithm [5, 6] is used to predict a more precise localization of the fluorophore emitting the photons based on radial fluctuations in the fluorescence detected on the sCMOS or EMCCD chip, with true fluorescence signal emitted from a fluorophore expected to exhibit a higher degree of radial symmetry in its point spread function (PSF) than the background fluorescence in the captured images over time. As such, physical camera resolution (pixel size of the chip array) and magnification of the microscopy system will affect the precision of localization, which we will detail below. Nonetheless, at least a two- to threefold enhancement in resolution can be expected using SRRF [4]. The temporal analysis across multiple frames allows for additional "de-noising" by suppressing image noise that does not correlate between image frames. In many ways, this temporal correlation of fluorescence is very similar to super-resolution optical fluctuation imaging (SOFI) [10]. However, the SOFI technique does not incorporate additional spatial information from the analysis of radiality, and thus is not capable of enhancing the resolution of images derived from more stable fluorescent molecules or proteins such as green fluorescent protein (GFP) [11]. Further accuracy and resolution gains are possible with SRRF by employing reversibly photoswitchable fluorescent proteins [12], and fluorescent probes and dyes that fluctuate in fluorescence or can be made to photoswitch using redox buffer systems [4]. One limitation of SRRF and SOFI approaches is that they cannot reach the localization precision and resolution enhancement of single-molecule localization microscopy techniques such as photoactivation localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), which readily achieve 20 nm resolution of cellular structures [4, 13]. However, SRRF and SOFI approaches are applicable to a wider array of fluorophores, do not require special buffers or imaging conditions, and can be applied to images captured on standard widefield and confocal microscopes available to most laboratories to achieve resolutions in the range of 60–120 nm. It should also be noted that the current iteration of the SRRF algorithm does not improve axial (Z-axis) resolution; however, techniques that improve axial resolution over widefield imaging, such as point-scan confocal, total internal reflection fluorescence (TIRF), and spinning disk confocal microscopy, are fully compatible with SRRF [5]. In this methods chapter, we will primarily focus on the use of SRRF with spinning disk confocal microscopy.



Although the first release of the SRRF algorithm, called NanoJ-SRRF, is implemented as an ImageJ plugin that is publicly available, the algorithm remains under active development. For example, recent updates include the NanoJ-CORE and NanoJ-SQUIRREL suite of tools for mapping and evaluation of super-resolution optical artifacts [11]. This was an important if not essential update to SRRF, as it provides a means of estimating resolution and optimizing imaging in a reproducible way that minimizes algorithm-induced artifacts. In this chapter, we employ a custom SRRF algorithm provided by Dr. Ricardo Henriques (Instituto Gulbenkian de Ciência, Portugal and University College London, UK) called NanoJ-LiveSRRF (used recently by Stubb et al. [14] and Lee et al. [15]). Thus, although the principles we outline below for sample preparation, image acquisition and processing by SRRF will remain largely unchanged, in the future the actual software menus, options and layout may change based on the version of SRRF software available at the time. Generating super-resolution images using the SRRF algorithm requires three basic steps: sample preparation, image acquisition and image processing. Proper sample preparation and planning can improve the results of SRRF analysis and, fortunately, most fixation and staining protocols for widefield or confocal fluorescence microscopy will be compatible with SRRF. Image acquisition consists of collecting a series of images at high temporal frequency which will be analyzed to create a single SRRF frame. For standard fluorophores within the visible light range (380–700 nm), the best resolution will be achieved when the objective lens and camera are set up to provide an effective pixel size of ~90–110 nm. The acquired images are then processed using NanoJ-SRRF and NanoJ-SQUIRREL software for producing super-resolution images and error-mapping/resolution estimation, respectively. In this chapter, we describe how to approach these three steps to produce super-resolution images of fixed human cell lines using standard staining techniques, common microscopy equipment and freely available software.

2 Materials

2.1 Microscope

Images in this paper were acquired using a Marianas combination widefield and spinning disk microscope (Intelligent Imaging Innovations, 3i) based on a Zeiss Axio Cell Observer equipped with 63× (NA 1.4) and 100× (NA 1.46) Plan-Apochromat oil-immersion lenses and SlideBook 6.0 acquisition software (3i). For use of acquired images in ImageJ (version 1.53c), SlideBook images were exported as 16-bit Open Microscopy Environment (OME) tagged image file format (TIF) images (see Note 1). However, as noted above, a large variety of widefield and confocal microscopes can be used for SRRF imaging [5], and the Bio-Formats plugin for ImageJ can import a large number of native file formats (see Note 1). Widefield microscopy images were acquired using LED-based illumination via a SPECTRA III light engine (Lumencor) using the 63× (1.4 NA) objective lens and a Prime BSI back-illuminated sCMOS camera (Teledyne Photometrics) coupled to a 1.0× tube lens adaptor, resulting in images with an effective pixel size of 103 nm/pixel (see Note 2). For spinning disk confocal microscopy, micrographs were acquired using a Yokogawa CSU-X1 spinning disk coupled to an mSwitcher unit (3i) and a custom laser launch equipped with 405 nm, 488 nm, 561 nm, and 640 nm lasers. For the best resolution, images were acquired using the 100× (NA 1.46) objective lens and an Evolve 512 EMCCD camera (Teledyne Photometrics) coupled to a 1.22× tube lens adaptor, resulting in images with an effective pixel size of 131 nm/pixel (see Note 2).

2.2 Sample Preparation

2.2.1 Materials

1. Glass coverslips, 18 × 18 mm, 0.17 mm thickness (no. 1.5).
2. Superfrost glass slides.
3. Needle nose tweezers for manipulating coverslips.
4. 6-well tissue culture plates.

2.2.2 Cell Lines

1. U2OS human osteosarcoma cells (ATCC, HTB-96) were maintained in DMEM supplemented with 10% fetal calf serum at 37 °C and 5% CO2 in a humidified atmosphere.
2. U2OS cells expressing the GFP variant Clover fused to the N-terminus of PML were generated using a CRISPR/Cas9 knock-in strategy [16] and maintained as above.

2.2.3 Fluorescent Dyes, Antibodies, and Plasmids

1. DNA stains: DAPI (4′,6-diamidino-2-phenylindole). 5-TMR-Hoechst was a gift from Dr. Grazvydas Lukinavicius (Max Planck Institute for Biophysical Chemistry, Goettingen, Germany) [17].
2. Mouse monoclonal (clone E-11) anti-PML primary antibody.
3. Alexa Fluor 555– or Alexa Fluor 647–conjugated donkey anti-mouse or donkey anti-rabbit IgG secondary antibodies.
4. Alexa Fluor 647–conjugated phalloidin.

2.2.4 Immunostaining Reagents

For best results, all reagents should be made fresh with high-quality, analytical grade reagents and ultrapure distilled water.

1. Phosphate-buffered saline (PBS), pH 7.4 (see Note 3).
2. Fixation buffer: 4% paraformaldehyde in PBS. Can be made fresh or, preferably, purchased as a 16% paraformaldehyde, methanol-free solution packaged in glass ampules under inert gas (EM grade) and diluted to 4% just before use with PBS.
3. Permeabilization buffer: 0.1% Triton X-100 in PBS, pH 7.4. Prepare a 5% Triton X-100 stock solution (2.5 mL of Triton X-100 in 50 mL of distilled H2O) and dilute 1:50 with PBS to a final concentration of 0.1% (see Note 4).
4. Blocking buffer: 4% bovine serum albumin (BSA) in PBS. Add 2 g of BSA to a 50 mL conical tube and bring to 50 mL with PBS. Mix gently at room temperature until the BSA is completely solubilized.
5. Fluorescence mounting medium with antifade and a refractive index as close to glass as possible (RI ~1.52) (see Note 5).

2.3 Software

2.3.1 Image Acquisition Software

Most microscope driver and image acquisition software packages should contain the functionality required for acquiring images for SRRF. In particular, the user will need the ability to set the illumination intensity and image capture rate to achieve at least 100 frames at ideally 10 ms per frame (or 100 frames/s). This is partly dependent on camera sensitivity and speed and may require image intensification and/or cropping to achieve high frame rates. However, acceptable SRRF reconstructions are possible at acquisition rates of 30–50 ms/frame and up to 500 frames total captured (see Note 6). Here, we use SlideBook 6.0 (Intelligent Imaging Innovations Inc. (3i), Denver CO; https://www.intelligentimaging.com/slidebook).

2.3.2 Image Processing Software

The software required for SRRF analysis is freely available as an ImageJ plugin (see below) and will perform best on computers with a graphics processing unit (GPU) capable of running Open Computing Language (OpenCL) commands—which includes most commercially available GPUs (a minimal scripted check for OpenCL-capable devices is sketched after the software list below). We suggest using GPUs with at least 4 GB of RAM, and cards with more GPU computing units will exhibit faster computation. At the time of writing this chapter, NanoJ-SRRF runs well on most Nvidia GPUs including 4000 Quadro series, GeForce GTX 10 series, Titan series and both RTX 20 and 30 series cards. Here we employed an Nvidia Titan V GPU with 12 GB of RAM and 5120 CUDA GPU computation cores (gift of Nvidia Corp.).

1. ImageJ (Version 1.53c, Java 1.8.0_172 (64-bit)): (https://imagej.nih.gov/ij/download.html) or Fiji (https://imagej.net/Fiji) [18].
2. NanoJ-SRRF (Version 1.14) [5]: (https://github.com/HenriquesLab/NanoJ-SRRF) and the custom NanoJ-LiveSRRF algorithm available on request from Dr. Ricardo Henriques (Instituto Gulbenkian de Ciência, Oeiras, Portugal).



3. NanoJ-SQUIRREL, part of the NanoJ-CORE software (Version 2.1) [19]: (https://github.com/superresolusian/NanoJ-SQUIRREL) (see Note 7).
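As mentioned above, NanoJ-SRRF relies on OpenCL for GPU acceleration. The following optional sketch (assuming the third-party pyopencl package is installed; it is not part of the NanoJ software) lists the OpenCL platforms and devices visible on the workstation, which can help confirm that a suitable GPU is available before installing the plugins.

```python
# List OpenCL platforms and devices on this workstation.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name}")
    for device in platform.get_devices():
        mem_gb = device.global_mem_size / 1024**3
        print(f"  Device: {device.name} ({mem_gb:.1f} GB global memory)")
```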

3 Methods

3.1 Sample Preparation

3.1.1 General Considerations

1. Fixation techniques that preserve the three-dimensional organization of cells produce the best images. Standard 4% paraformaldehyde fixation yields good results.
2. Imaging live cells for SRRF analysis can be achieved with cell-permeable dyes (e.g., SiR-Hoechst) and fluorescent protein–tagged proteins (e.g., enhanced GFP). For live cell imaging, brighter and more photostable dyes are preferred to minimize phototoxicity from longer exposure times and higher light source powers.
3. When imaging fixed cells, the choice of fluorophore can affect image quality and SRRF performance. Generally, bright fluorophores such as the Alexa Fluor dyes that are resistant to photobleaching (intrinsically or by using appropriate mounting media with antifade reagents) work well. However, to make the best use of the SRRF algorithm's capabilities for enhancing resolution, the best fluorophores are both bright and exhibit high rates of intermittent fluorescence emission (see Note 8).
4. Like other super-resolution techniques, direct labeling of samples should produce higher resolution images and more accurate positional information following SRRF analysis. For example, imaging of chromatin using direct stains like DAPI will typically provide the best resolution, whereas indirect labeling with fluorophore-conjugated secondary antibodies (e.g., to histones) can place the fluorophore over 30 nm away from the target, thus decreasing spatial resolution. SRRF is also compatible with fluorescent proteins, such as GFP and its brighter derivatives (e.g., Clover [20], mNeonGreen [21]), and with sizes of about 2–4 nm, fusion of these proteins to the protein of interest will provide a good alternative to antibody labeling. The same rationale applies to SRRF imaging of Halo- and SNAP-tagged proteins. Similarly, fluorophore-conjugated nanobodies [22, 23], which are ~15 kDa and range in diameter from 2 to 3 nm in size, can provide the antigen specificity, brightness and photostability of antibody-conjugated dyes (e.g., Alexa Fluor dyes) but allow much higher spatial resolution by super-resolution techniques including SRRF (reviewed in ref. [24]).
5. Choice of mounting medium can be an important factor for obtaining aberration-free images and to best take advantage of the additional resolution provided by super-resolution techniques such as SRRF, which will otherwise be affected by spherical aberration. For this reason, when using high-NA oil-immersion lenses, the best results can be obtained using mounting media with antifade that has a refractive index (RI) close to that of cover glass (i.e., 1.52), several of which are available commercially (see Note 5).

3.1.2 Fixation and Permeabilization

1. Grow cells (e.g., U2OS) in 6-well cluster plates on 18 mm square glass coverslips at 50–70% confluency.
2. Remove spent media from cells and rinse 2 times with 1 mL of PBS.
3. Fix cells with 4% paraformaldehyde in PBS at room temperature for 20 min. Perform fixation in the dark if fluorophores are present in the live sample (e.g., GFP-tagged proteins).
4. Remove fix and rinse 2 times with 1 mL of PBS.
5. Permeabilize cells with 0.1% Triton X-100 in PBS for 5 min at room temperature.
6. Rinse cells 2 times with 1 mL of PBS and store in 1 mL of PBS before proceeding with the next steps, depending on subsequent methods.

3.1.3 DNA Staining

1. Dilute DNA stain stocks to 1 μg/mL (DAPI) or 1 μM 5-TMR-Hoechst in PBS and incubate with fixed and permeabilized cells for 30 min at room temperature in the dark. 2. Wash cells twice with PBS for 5 min at room temperature in the dark. 3. Mount stained cells onto glass microscope slides using one drop of fluorescence mounting medium with antifade reagent. Be mindful that the side of the coverslip containing the cells is placed down into the mounting media. 4. Store slides in the dark overnight to allow the mounting media to harden or seal the edges of the coverslips with nail polish. Slides will be ready to image the next day.

3.1.4 Actin Staining

1. Dilute Alexa Fluor 647 phalloidin stocks to 5 units/mL (165 nM) in PBS and incubate with fixed and permeabilized cells for 30 min at room temperature in the dark. 2. Wash cells twice with PBS for 5 min at room temperature in the dark. 3. Mount stained cells onto glass microscope slides using one drop of fluorescence mounting medium with antifade reagent. Be mindful that the side of the coverslip containing the cells is placed down into the mounting media.



4. Store slides in the dark overnight to allow the mounting media to harden or seal the edges of the coverslips with nail polish. The slides will be ready to image the next day.

3.1.5 Immunostaining

1. Fixed and permeabilized cells are blocked with 4% BSA in PBS for 30 min at room temperature.
2. Dilute primary antibody in blocking solution (mouse anti-PML, clone E-11 (1:200)) and incubate at room temperature for 1 h in a humidifying chamber or sealed dish. To minimize volumes, coverslips can be immunostained by placing the coverslip cell side down onto a 50 μL drop of primary antibody solution.
3. Wash cells 3 times for 5 min with PBS.
4. Dilute fluorophore-conjugated secondary antibodies in blocking solution and incubate at room temperature for 45 min in a humidifying chamber or sealed dish.
5. Wash cells 3 times for 5 min with PBS.
6. If desired, incubate for 20 min with a DNA visualization stain (e.g., DAPI) diluted as above in PBS, then rinse with PBS 3 times.
7. Mount stained cells onto glass microscope slides using one drop of fluorescence mounting medium with antifade reagent. Be mindful that the side of the coverslip containing the cells is placed down into the mounting media.
8. Store slides in the dark overnight to allow the mounting media to harden or seal the edges of the coverslips with nail polish. Slides will be ready to image the next day.

3.2 Image Acquisition

General Considerations for Image Acquisition

1. SRRF is compatible with both widefield and confocal microscopy (spinning disk and point scanning) [5].
2. For optimum SRRF imaging, objective lens magnification, coupling optics and camera chip pixel size should allow images to be captured at an effective pixel size in the range of 90–105 nm/pixel, but good resolution gains are seen up to 130 nm/pixel (see Note 2).
3. Acquisition software and camera setup should be capable of 100 frames/s imaging for optimal SRRF imaging. However, in practice this is not always possible due to camera sensitivity, density/brightness of fluorophores and during live cell imaging. In these cases, 40–50 frames/s may be sufficient for substantial gains in resolution upon SRRF reconstruction.
4. Fixed versus live cell imaging considerations: Although most antifade solutions will greatly reduce photobleaching, one must be aware of photoconversion of certain dyes. For example, DAPI, which has blue light emission, is known to photoconvert after prolonged illumination to a green light-emitting state that may interfere with imaging of GFP or fluors such as Alexa 488 [25]. For live cell imaging, phototoxicity is the chief concern when using the longer imaging times or higher intensity illumination required for the fast and bright image acquisition needed for SRRF. In addition, movement of cells or organelles during imaging can be an issue during live-cell imaging if trying to observe cellular phenomena that occur at temporal scales below 1 s. For general microscope or cell movement (e.g., within a living embryo), the built-in vibration correction of the current SRRF algorithm can compensate for these movements and improve resolution during reconstruction [14].
5. Background correction: Due to the inherent noise in both EMCCD and sCMOS cameras, background correction (or flat-field correction) is recommended for optimal SRRF results. For background correction, one should take an image stack of the same cropped size and acquisition conditions (illumination intensity, camera intensification and integration time), for the same number of frames per second, as the imaging stack to be corrected. This is done by moving to an area of the coverslip with no cells or stained structures in the fluorescence channel being collected (see Note 9). The resulting "blank" image stack is then subtracted frame-by-frame from the SRRF acquisition image stack. This can usually be done within the acquisition software under "image math" or in ImageJ as a postprocess prior to SRRF reconstruction as described below (Subheading 3.3, step 3).
6. In the following protocol, we use arrows (→) to indicate how to navigate the SlideBook software hierarchies (e.g., Home tab → Export → 16-bit TIFF (OME)).

3.2.1 General Protocol

1. Turn on microscope, camera, light source and image acquisition software. 2. Select an objective lens and camera combination that will result in optimal effective pixel resolution of resulting micrographs (i.e., ~90–100 nm pixel size) (see Note 2). 3. Find an object of interest to image and crop the camera window field of view to maximize camera speed (see Note 6). 4. Test capture parameters (light source power, exposure time, camera intensification for EMCCD cameras) for each fluorescence channel to ensure sufficient signal intensity is acquired. 5. Capture at least 100 frames at about 10 ms per frame for each fluorescence channel. 16- or 12-bit capture is recommended (see Note 6).



6. Without changing the cropped camera window field of view, navigate to an area of the slide with no signal for each fluorescence channel.
7. Capture background images for each fluorescence channel using the same capture settings as above.
8. Export images in OME-TIFF format (or similar) for analysis in ImageJ.

3.2.2 Detailed Protocol Using SlideBook 6

1. Turn on microscope, cameras and light sources. 2. Open image acquisition software (e.g., SlideBook 6). 3. Create a new slide file (Ctrl + N) and save. 4. Open the Focus (Ctrl + F) and Capture (Ctrl + E) windows. 5. In the Focus Window, select the Camera, Objective and Filter Set appropriate for your sample (Fig. 1). 6. In the Focus Window under the Camera tab, select the Live button to begin viewing your sample (Fig. 1). Find an object of interest and place the object in the top 1/3 of the field of view (see Note 6). 7. Crop the field of view for image acquisition by using the Select Tool which is under the Select heading in the Regions group of the Home tab (Fig. 1). Once the field of view is selected with the rectangular marquee, press the Update button in the Camera tab of the Focus Window (Fig. 1). 8. Adjust capture parameters to ensure enough photons are being captured by the camera. In the Focus Window, monitor the histogram and adjust exposure time, light source power and, if using an EMCCD camera, intensification levels to ensure a peak pixel intensity of at least 20,000 on a 16-bit image (Fig. 2). Exposure time should not exceed 30 ms and should ideally be 10 ms to collect 100 frames per second. If capturing multiple channels for the same sample, repeat this step for each channel (see Note 10). 9. Once capture parameters have been optimized for each channel, images can be collected using the Stream function in the Focus Window (Fig. 3). Under the Stream Tab, select the “capture” checkbox and set it to at least 100 frames. Ensure the Exposure time is set correctly (e.g., 10 ms). Finally, turn on the light source (e.g., “Open Alt”) and immediately begin the image capture by pressing the “Start” button for the image stream. 10. Wait for the captured image to appear in the Slide file before turning off the light source to preserve the sample (see Note 11).



Fig. 1 Image acquisition windows in SlideBook 6. Screen captures of the focus window, tool bar and camera window in SlideBook 6 showing the relevant buttons for image acquisition steps 5, 6 and 7 (Subheading 3.2) as indicated with boxes

11. If capturing multiple channels for the same sample, repeat steps 9 and 10 for each channel. Make note of the capture parameters (i.e., exposure time, light source power, intensification setting) for each channel (see Note 12).
12. Capture background correction images. Without changing the dimensions of the cropped field of view, turn on the light source and navigate to an area of the slide without any signal to capture the background signal noise. For each channel, use the same settings as for the image capture and collect background images. Different fields of view can be used for each channel to ensure an appropriate background image devoid of specific signal (see Note 13).
13. Export images for analysis in ImageJ. For each of the captured sample and background images, export as 16-bit Open Microscopy Environment-Tagged Image File Format (OME-TIFF) images (Home tab → Export → 16-bit TIFF (OME)).
14. Export a scale bar reference image. Open one of the captured images in SlideBook and add a scale bar (Ctrl + Shift + S). Export this image using Home tab → Export → RGB TIFF (24 bit). The scale bar on this reference image can be



Fig. 2 Image acquisition parameter settings. Screen captures of the focus window and camera tab contents in SlideBook 6 showing the relevant buttons for setting image acquisition parameters (Subheading 3.2, step 8) as indicated with boxes

superimposed on or transferred to the processed SRRF images after analysis.

3.3 Image Processing (SRRF)

3.3.1 General Considerations

For image processing, each multiframe 16-bit OME-TIFF stack exported from the image acquisition software is processed in ImageJ using the NanoJ-SRRF (or NanoJ-liveSRRF) plugin to produce a single SRRF frame. The quality and accuracy of the processed images will be dependent on the quality of the capture parameters (e.g., signal intensity, frame rate, number of frames), the attributes of the fluorophore, the nature of the staining protocol (e.g., direct labeling with dyes vs indirect labeling with antibodies) and the morphology of the structure being imaged (e.g., actin fibers vs chromatin). Optimizing the parameters to generate the best SRRF images will require some trial and error, so in order to minimize bias from the subjective determination of image quality, it is important to perform error mapping using the NanoJ-SQUIRREL plugin.



Fig. 3 Stream to disk setup parameters. Screen captures of the focus window and stream tab contents in SlideBook 6 showing the relevant buttons for image acquisition (Subheading 3.2, step 9) as indicated with boxes

In the following protocol, we use arrows (→) to indicate how to navigate the ImageJ software hierarchies (e.g., Image → Adjust → Brightness and Contrast).

3.3.2 Protocol

1. Open ImageJ and ensure the NanoJ-SRRF and NanoJ-SQUIRREL plugins are installed. If using NanoJ-LiveSRRF, be sure it is installed as well (see Note 14).
2. Open the 16-bit OME-TIFF image series for both the sample and the background images (Ctrl + O or File → Open).
3. Perform background subtraction with ImageJ tools by selecting Process → Image Calculator to open the image calculator window (Fig. 4). In the Image Calculator window, select the image stack you want to process as Image 1 (e.g., an image series of DAPI-stained cells), select "Subtract" from the Operation pull-down menu and select the appropriate background image series as Image 2 (Fig. 4). This will generate a background-corrected image series that will be used in the next step. An example of the impact of background correction on the resulting SRRF reconstruction is shown in Fig. 5. A scripted alternative to this subtraction is sketched after the figure legends below.

Fig. 4 Background correction in ImageJ. Screen captures showing how to perform background correction in ImageJ. Image stack and background stack files are opened in ImageJ. The image calculator window is used to perform image subtraction (black box) of Image 2 (background image file) from Image 1 (image stack file) which generates a background-corrected output image stack

Fig. 5 Background subtraction reduces image noise. (a) Widefield fluorescent images of DAPI-stained U2OS cells were imaged with a 63× (1.4 NA) objective lens and a Prime BSI back-illuminated sCMOS camera. Images were processed using the NanoJ-liveSRRF algorithm (radius: 3, sensitivity: 3, average temporal analysis) in ImageJ without background correction (b), producing images with increased background noise and patterning (inset B). The same image in (a) was processed with background subtraction (c) to produce an image in which background noise and patterning is virtually eliminated (inset C)
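For users who prefer to script the background subtraction in step 3 outside of the ImageJ GUI, the sketch below performs the same frame-by-frame subtraction with numpy and tifffile (file names are hypothetical; clipping at zero mirrors the behavior of the 16-bit Image Calculator subtraction).

```python
# Frame-by-frame background subtraction of a "blank" stack from an acquisition stack.
import numpy as np
import tifffile

signal = tifffile.imread("dapi_stack.ome.tif").astype(np.int32)        # (frames, y, x)
background = tifffile.imread("dapi_background.ome.tif").astype(np.int32)

corrected = np.clip(signal - background, 0, 65535).astype(np.uint16)   # keep 16-bit range

tifffile.imwrite("dapi_stack_bgcorrected.tif", corrected)
```

The corrected stack can then be opened in ImageJ and passed to the SRRF plugin exactly as in step 4.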



Fig. 6 Adjustment of SRRF parameter settings. (a) Screen capture showing how to open the SRRF parameter window in ImageJ. (b) Screen capture of the SRRF Parameter window. Arrows indicate the radius, sensitivity, and temporal analysis settings that are commonly adjusted. The "OK" button (black box) is used to run the SRRF analysis once the parameters have been selected

4. Run NanoJ-LiveSRRF by selecting Plugins → NanoJ-LiveSRRF → liveSRRF (or NanoJ-SRRF by selecting Plugins → NanoJ-SRRF → SRRF Analysis) to bring up the SRRF Parameters window (Fig. 6).



Table 1 Suggested starting parameters for SRRF analysis of various cellular structures

Feature    Stain                         Radius    Sensitivity    Magnification    Temporal analysis (AVG vs STD)
DNA        DAPI                          2         1              5                AVG
Actin      Phalloidin-Alexa Fluor 647    1         1.5            5                AVG
PML NBs    Anti-PML, Alexa Fluor 555     3         2              5                AVG

5. Select SRRF settings from the SRRF parameter window (Fig. 6). The main parameters affecting the error, resolution and subjective quality of the SRRF reconstruction are the radius and sensitivity, and whether averaging or standard deviation temporal analysis methods are employed during reconstruction (see Note 15). Some common settings that can be used as a starting point are found in Table 1. Press OK to run. An example of how varying the sensitivity at an optimum radius value can affect the subjective quality of the image versus RSP error is shown in Fig. 7 for two different DNA dyes (DAPI and 5-tetramethylrhodamine-Hoechst (5-TMR-Hoechst)) [17].
6. After the image is rendered, the software will produce an interpolated image and AVG and/or STD image(s). Before saving these images, one should reset the brightness and contrast settings using ImageJ → Image → Adjust → Brightness and Contrast (or Ctrl + Shift + C) to bring up the B&C (brightness and contrast) window. Press the "Reset" button in the B&C window in order to restore and preserve the full bit depth of the image.
7. Perform error mapping with NanoJ-SQUIRREL by selecting Plugins → NanoJ-SQUIRREL → Calculate error map, RSE and RSP, which will open the "Calculate error map, RSE and RSP" window (Fig. 8). Select the interpolated image from steps 5 and 6 as the "Reference image" and the SRRF image (AVG or STD) from steps 5 and 6 as the "Super resolution reconstruction image" and press "OK" (Fig. 8). This will produce an error map image and an "RSE and RSP value" output table, examples of which are shown in Fig. 7.
8. Prepare to perform average resolution estimation using Fourier Ring Correlation (FRC). To begin this process, first split the original, background-corrected image stack (step 3) into two new image stacks using Plugins → NanoJ-SQUIRREL → Tools → Split image sequence into odd and even frames.

Fig. 7 SRRF analysis and error mapping of chromatin imaging. U2OS cells were stained with the fluorescent DNA stains DAPI (a) (ex: 405 nm, em: 461 nm) and 5-TMR-Hoechst (b) (ex: 560 nm, em: 580 nm) to label chromatin and imaged by spinning disk (SD) confocal microscopy using a 100× objective (1.46 NA) and Evolve 512 EMCCD camera. Background-corrected images (i) were processed in ImageJ using the NanoJ-liveSRRF algorithm with the indicated radius and sensitivity settings and average temporal analysis. Images of DAPI and 5-TMR-Hoechst-stained cells with resolution-scaled Pearson's correlation (RSP) values closest to 1 (least error) are indicated with teal boxes and the NanoJ-SQUIRREL-derived error maps for these images are included as indicated (ii). The spinning disk (SD) confocal images (i) and the best SRRF images (teal borders) were merged (iii) and the improvement in resolution was visualized by line scanning (blue arrows in (iii) and graphs). NanoJ-SQUIRREL was used to estimate average resolution by Fourier ring correlation (FRC) and produce an FRC map (iv)




Fig. 8 NanoJ-SQUIRREL error mapping window. Screen capture showing the NanoJ-SQUIRREL “Calculate Error Map, RSE and RSP” window. The pull-down windows for selecting the reference image (Interpolated image) and SRRF image for analysis are indicated with boxes

9. Run SRRF analysis as in steps 4–6 for each of the split image stacks to create an "odd" and an "even" SRRF reconstruction.
10. Combine the two SRRF-processed reconstructions into a single, two-frame image stack using the Image → Stacks → Tools → Concatenate function and select the odd and even SRRF reconstructions for combining in one image stack.
11. With the new odd/even SRRF image stack highlighted, perform average resolution estimation using FRC by selecting Plugins → NanoJ-SQUIRREL → Calculate-FRC-map, which will open the Calculate FRC Map window. In the Calculate FRC Map window the user can enter the pixel size (in nm) as determined by the camera and microscope setup (see Note 2). The number of blocks per axis is set to 10 by default; however, the optimal number of blocks will depend on the pixel dimensions of the image (see Note 16). Press OK to run the analysis, which will generate an FRC mean resolution map (example in Fig. 7) and an FRC-Resolution table indicating the mean estimated resolution (see Note 16).
12. If analyzing multicolor images, repeat steps 4–7 for each channel, adjusting the SRRF settings and running the error mapping (NanoJ-SQUIRREL) as necessary.
13. Make any adjustments to brightness and contrast to optimize image quality by opening the B&C menu: Image → Adjust → Brightness and Contrast (or Ctrl + Shift + C).
14. The output SRRF reconstructions are 32-bit images; therefore, the user will usually need to convert these images to 16-bit for use in other software as required: Image → Type → 16-Bit.

Fig. 9 SRRF analysis improves resolution of spinning disk confocal microscopy images. U2OS cells were stained with DAPI to visualize chromatin, phalloidin-Alexa 647 to visualize actin, and immunostained to detect PML NBs with Alexa Fluor 555–conjugated secondary antibodies. Cells were imaged by spinning disk (SD) confocal microscopy using a 100× objective (1.46 NA) and an Evolve 512 EMCCD camera. Background-corrected images were processed in ImageJ using the NanoJ-liveSRRF algorithm (using the parameters in Table 1). The indicated spinning disk (SD) confocal inset images and the SRRF inset images (white boxes) were merged (SRRF/SD) and the improvement in resolution was visualized by graphing line scans (line indicated by blue arrows). White arrows in the inset images (bottom row) indicate PML NBs


15. Save the converted images in TIFF format: File → Save As → Tiff.
16. If you have imaged different fluorophores in the same field of view and processed the images from each fluorescence channel separately using the SRRF algorithm, these images can be assembled into a single multicolor image. To begin, open each SRRF-processed image file you want to merge (i.e., the images saved in step 15 for each fluorophore imaged). Select Image → Stacks → Tools → Concatenate and select the "keep original images" checkbox.
17. Confirm that the images are in the desired order (Image 1 = Red, Image 2 = Green, Image 3 = Blue), then select Image → Color → Stack to RGB (a minimal sketch of this merge is given below). An example of a processed multicolor SRRF image for three different cellular structures (chromatin, actin, PML NBs) that shows improvements in resolution and clarity over unprocessed spinning disk confocal images is shown in Fig. 9. Note that the multicolor images in this figure have been false-colored in Adobe Photoshop (CS3).
18. Save the new merged image as your processed SRRF image: File → Save As → Tiff.
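The following is a minimal sketch of the RGB merge in steps 16–17 performed outside ImageJ, assuming numpy and tifffile are installed; the file names and channel assignments are hypothetical examples and the min–max rescaling differs from ImageJ's display-range-based conversion.

```python
# Minimal sketch: merge three single-channel SRRF reconstructions into one
# RGB composite (red, green, blue order), approximating Stack to RGB.
import numpy as np
import tifffile

def to_uint8(img):
    """Rescale a single channel to 0-255 for an 8-bit RGB composite."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi <= lo:
        return np.zeros(img.shape, dtype=np.uint8)
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

red = to_uint8(tifffile.imread("srrf_channel_red.tif"))
green = to_uint8(tifffile.imread("srrf_channel_green.tif"))
blue = to_uint8(tifffile.imread("srrf_channel_blue.tif"))

rgb = np.stack([red, green, blue], axis=-1)  # shape: (y, x, 3)
tifffile.imwrite("srrf_merged_rgb.tif", rgb)
```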

4 Notes

1. We recommend, for best software interoperability, that the OME-TIFF format is used for importing 16-bit micrographs into ImageJ for SRRF analysis (see https://docs.openmicroscopy.org/ome-model/5.6.3/ome-tiff/specification.html). However, it may also be possible to use the Bio-Formats plugin for ImageJ to directly import micrographs as 16-bit TIFs from the native file formats of a number of popular commercial microscope software packages (see https://imagej.net/BioFormats).
2. Typically, if set up correctly, your image acquisition software will automatically generate the effective pixel size in your images, taking into account the camera CCD/sCMOS chip pixel size, the magnification of the objective lens used to capture the image, and any coupling optics that may be used to better illuminate the CCD or CMOS chip. For example, for a 6.5 micron sCMOS chip as found on the Prime BSI coupled with a 63× objective and 1.0× coupling optics, one can acquire ~103 nm/pixel images (i.e., 6500 nm/pixel divided by (63 × 1.0) = ~103 nm/pixel). For the Evolve 512 camera, which has a 16 micron EMCCD chip, 131 nm/pixel images can be obtained using a 100× objective and 1.22× coupling optics (i.e., 16,000 nm/pixel divided by (100 × 1.22) = ~131 nm/pixel).
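The arithmetic in Note 2 can be wrapped in a small helper for other camera/objective combinations; the two calls below reproduce the worked examples from the text, and any other inputs are hypothetical.

```python
# Minimal sketch of the effective pixel size calculation in Note 2.
def effective_pixel_size_nm(chip_pixel_nm, objective_mag, coupling_mag=1.0):
    """Effective sample-plane pixel size = chip pixel size / total magnification."""
    return chip_pixel_nm / (objective_mag * coupling_mag)

print(effective_pixel_size_nm(6500, 63, 1.0))     # Prime BSI + 63x / 1.0x: ~103 nm/pixel
print(effective_pixel_size_nm(16000, 100, 1.22))  # Evolve 512 + 100x / 1.22x: ~131 nm/pixel
```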

3. For ease and consistency, we often purchase sterile 1× PBS for microscopy, rather than diluting from the 10× stocks that we make ourselves for more general-purpose uses.
4. 100% Triton X-100 is quite viscous and can be difficult to remove completely from pipette tips when transferring small amounts. Therefore, one might also choose to make up a 20% Triton X-100 intermediate stock solution by carefully pouring 10 mL of Triton X-100 into a 50 mL conical tube, using the graduations on the tube as a guide. Bring the volume up to the 50 mL mark with distilled water and rock at room temperature until it is fully dissolved. Use this 20% stock to make the 5% stock by diluting it with distilled water.
5. There are a variety of commercial fluorescent mounting media to choose from. The refractive index (RI) and antifade properties are the most important considerations for SRRF microscopy, and the RI should be as close to that of glass (i.e., 1.52) as possible. Some examples of common mounting media include aqueous media such as Vectashield (VectorLabs, RI: 1.44), and hard-set media such as Prolong™ Gold (ThermoFisher, RI: 1.46) or Prolong Glass™ (ThermoFisher, RI: 1.52). Self-hardening mounting media add the convenience of not requiring additional sealing (e.g., with nail polish). However, there is a trade-off between the use of aqueous media versus higher-RI hard-set mounting media in regard to sample morphology preservation, as the latter will induce shrinking and flattening of samples.
6. Very few cameras are capable of full 16-bit frame-rate acquisition at 100 frames/s. However, increased frame rates are possible through a combination of streaming images directly to disk (which is an option in SlideBook 6.0 software (3i)), camera intensification (only available on EMCCD or interline CCD cameras), dropping the bit depth of image acquisition to 12-bit (the minimum suggested acquisition bit depth), and/or cropping the region of interest (ROI) for image acquisition. For example, on the Evolve 512 EMCCD camera it is possible to achieve a frame rate of ~100 frames/s with 10 ms illumination, image intensification at 200–400, 16-bit depth, and the ROI cropped to 180 × 180 pixels. In addition, some sCMOS cameras, like the Prime BSI, do not have image intensification capability, and the rolling-shutter readout of the image is from the top of the sensor to the bottom. Therefore, increased frame rates on the Prime BSI can be achieved by increasing illumination and by cropping the top 1/4 to 1/3 of the field of view as the capture ROI. It is never recommended to use the bin function


of the camera to achieve higher frame rates, as this will severely degrade SRRF performance.
7. There is a known issue with NanoJ-SQUIRREL having conflicts with certain Java versions and certain GPUs that results in a software crash. If you encounter this issue, there is a CPU-based version of NanoJ-SQUIRREL that can be obtained on GitHub here: https://github.com/superresolusian/NanoJ-SQUIRREL/releases/tag/v1.1-alpha
8. We routinely use the Alexa Fluor series of dyes (Thermo Scientific), although the cyanine series of dyes (Cy2, Cy3, Cy5) also works well. Of these dyes, Alexa 647 combines high fluorescence, resistance to photobleaching, and superior "blinking" or intermittent fluorescence emission compared with other dyes when used with common mounting media. This combination of attributes makes Alexa 647 one of the most optimal dyes for SRRF, taking advantage of both temporal and radial fluorescence correlation.
9. For obtaining images for background correction, it may be easier if cells are plated at lower density to ensure cell-free areas for background imaging. In addition, small (i.e., cropped) fields of view make it easier to find empty spaces. Finally, you can scrape cells from an edge of the coverslip before mounting, using a pipette tip, cell scraper, or razor blade, to ensure there is a cell-free part of the coverslip for imaging background on the same slide as your sample.
10. When using SlideBook 6 capture software, one can also adjust the capture parameters using the Capture Window. While using the Capture Window, one can use the "Live" and/or "Test" buttons to quickly see how adjustments to exposure time and light-source power affect pixel intensity and saturation by looking at the histogram in the Capture Window. For multicolor imaging, it can also be easier to optimize each channel using the Test button in the Capture Window before switching back to the Focus Window for acquisition.
11. When using SlideBook 6, there is a delay between when the camera has finished collecting data and when the image file appears in the slide file. If you are using high light-source power or are capturing for long time frames, then it is possible to close the shutter for the light source (e.g., press the "Close alt" button for the light source, Fig. 3) after the camera has stopped but before the image appears in the slide file. This will help preserve your sample but runs the risk of closing the shutter too soon, while the camera is still collecting data.
12. When collecting images for multiple fluorophores in the same field of view, it is best to image the longest (lowest energy) wavelengths first to minimize damage, photobleaching, and


photoconversion of your sample. This is especially true if using UV light, which should be imaged last.
13. Background images need to be matched for capture parameters (exposure time, light-source power) and the field of view. Therefore, a new background image will need to be collected whenever any of these variables are changed. However, if one is imaging multiple fields of view with the same dimensions (i.e., the same cropped window) and the capture parameters are the same, then the same background reference can be used for all images captured in this way.
14. ImageJ (or the Fiji package containing ImageJ), NanoJ-SRRF, and NanoJ-SQUIRREL can all be downloaded freely from the websites listed in Subheading 2.3. However, NanoJ-SRRF and NanoJ-SQUIRREL can also be installed as ImageJ plugins using the ImageJ updater. To access this from ImageJ or Fiji, select Help → Update, which will open the ImageJ Updater window. At the bottom left of the Updater window, select the Manage Update Sites button, which will bring up the Manage Update Sites window. From here, scroll down and select the NanoJ-SRRF and NanoJ-SQUIRREL plugins. Press the Close button in the Manage Update Sites window, then press Apply Changes in the ImageJ Updater window (Fig. 10).
15. The main parameters to consider when initiating a SRRF reconstruction are the magnification (the default is 5, but reducing this number will reduce run time and GPU RAM requirements), the radius (i.e., the ring radius in pixels at which gradient convergence is calculated), the sensitivity (which increases contrast and the signal-to-noise ratio (SNR) of the SRRF reconstruction at the cost of high-frequency information and increased error), and whether to employ average (AVG) or standard-deviation (STD) temporal analysis during reconstruction. The NanoJ-SQUIRREL ImageJ plugin allows an estimation of error by the resolution-scaled error (RSE), representing the root-mean-square error between the reference and resolution-scaled image, and the RSP (resolution-scaled Pearson coefficient), which is the Pearson correlation coefficient between the reference and resolution-scaled images, with values truncated between −1 and 1 [19]. In addition, this software also has a built-in tool for estimating the average resolution by Fourier ring correlation [26]. Typically, for a given subcellular structure, labeling/fixation scheme, and fluorophore combination, parameters giving the lowest error (i.e., the RSP value closest to 1) and highest resolution (smallest estimate of resolution by Fourier ring correlation (FRC)) will produce the best results. In addition, we have provided a table with suggested starting radius and sensitivity values for commonly imaged subcellular structures (Table 1), but users should use these only as a starting


Fig. 10 Installation of NanoJ plugins. Screen captures showing how to install NanoJ plugins as described in Note 14. Select the Help menu (1), then the Update option (2), to bring up the ImageJ Updater window. Press the "Manage update sites" button (3) to open the "Manage update sites" window and select the NanoJ plugins to install (4). Close (5) the "Manage update sites" window and press the "Apply changes" button (6)

point, as optimal values may be different for a given acquisition system.
16. To get the most accurate estimate of resolution, it is often necessary to try running the analysis with different "blocks per axis" values. Increase the number of blocks in 10-block increments, rerunning the FRC analysis each time. The software will generate an FRC-resolution table after each iteration that lists the mean resolution as well as the standard deviation (SD), minimum, and maximum. Monitor the changes to these values with each iteration and use the estimated resolution associated with the block number that provides the smallest SD. The optimum number of blocks per axis will be affected by the pixel dimensions making up the image and, as a general approach, blocks-per-axis values that are about 1/30th (range 1/20th to 1/40th) of the number of pixels in the largest dimension provide good resolution estimates. For example,


for a 1200 × 600 pixel image, 40 blocks per axis (1200 pixels ÷ 30) should provide good results (see the sketch below).
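The quality metrics described in Notes 15 and 16 can be illustrated with a few lines of numpy. This is only an illustration of the underlying formulas under the stated assumptions (equal-sized float images); it is not a replacement for NanoJ-SQUIRREL, which also handles the rescaling and intensity matching between the reference and super-resolution images.

```python
# Minimal sketch of RSE, RSP, and the blocks-per-axis rule of thumb.
import numpy as np

def rse(reference, resolution_scaled):
    """Root-mean-square error between reference and resolution-scaled image."""
    return float(np.sqrt(np.mean((reference - resolution_scaled) ** 2)))

def rsp(reference, resolution_scaled):
    """Pearson correlation coefficient between the two images (range -1 to 1)."""
    return float(np.corrcoef(reference.ravel(), resolution_scaled.ravel())[0, 1])

def suggested_blocks_per_axis(image_shape, divisor=30):
    """~1/30th of the largest pixel dimension, e.g., 1200 x 600 px -> 40 blocks."""
    return max(1, round(max(image_shape) / divisor))

print(suggested_blocks_per_axis((1200, 600)))  # 40, as in Note 16
```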

Acknowledgments

Microscope upgrades for SRRF imaging were funded by an Equipment Grant from the Dalhousie Medical Research Foundation (DMRF) and a Research Tools and Instruments (RTI) grant from the Natural Sciences and Engineering Research Council of Canada (NSERC). We would also like to thank the Nvidia Corporation for the gift of the Titan V GPU used in this study, obtained through their Higher Education and Research grants program. We would also like to thank Dr. Ricardo Henriques (Instituto Gulbenkian de Ciência, Portugal and University College London, UK) and his research group for their advice and access to the development-stage NanoJ-LiveSRRF software. Finally, we thank Dr. Gražvydas Lukinavičius and Dr. Jonas Bucevičius (Research group for chromatin labeling and imaging, Max Planck Institute for Biophysical Chemistry, Göttingen, Germany) for the generous gift of the 5-TMR-Hoechst DNA dye used in this chapter.

References 1. Dellaire G, Nisman R, Bazett-Jones DP (2004) Correlative light and electron spectroscopic imaging of chromatin in situ. Methods Enzymol 375:456–478 2. Guo J, Larabell CA (2019) Soft X-ray tomography: virtual sculptures from cell cultures. Curr Opin Struct Biol 58:324–332 3. Feng H, Wang X, Xu Z et al (2018) Superresolution fluorescence microscopy for single cell imaging. Adv Exp Med Biol 1068:59–71 4. Jacquemet G, Carisey AF, Hamidi H et al (2020) The cell biologist's guide to superresolution microscopy. J Cell Sci 133 5. Gustafsson N, Culley S, Ashdown G et al (2016) Fast live-cell conventional fluorophore nanoscopy with ImageJ through superresolution radial fluctuations. Nat Commun 7:12471 6. Laine RF, Tosheva KL, Gustafsson N et al (2019) NanoJ: a high-performance open-source super-resolution microscopy toolbox. J Phys D Appl Phys 52:163001 7. Acharya A, Bogdanov AM, Grigorenko BL et al (2017) Photoinduced chemistry in fluorescent proteins: curse or blessing? Chem Rev 117:758–795

8. Bagshaw CR, Cherny D (2006) Blinking fluorophores: what do they tell us about protein dynamics? Biochem Soc Trans 34:979–982 9. van de Linde S, Sauer M (2014) How to switch a fluorophore: from undesired blinking to controlled photoswitching. Chem Soc Rev 43: 1076–1087 10. Dertinger T, Colyer R, Iyer G et al (2009) Fast, background-free, 3D super-resolution optical fluctuation imaging (SOFI). Proc Natl Acad Sci U S A 106:22287–22292 11. Culley S, Tosheva KL, Matos Pereira P et al (2018) SRRF: Universal live-cell super-resolution microscopy. Int J Biochem Cell Biol 101: 74–79 12. Zhang X, Chen X, Zeng Z et al (2015) Development of a reversibly switchable fluorescent protein for super-resolution optical fluctuation imaging (SOFI). ACS Nano 9:2659–2667 13. Almada P, Culley S, Henriques R (2015) PALM and STORM: into large fields and high-throughput microscopy with sCMOS detectors. Methods 88:109–121 14. Stubb A, Laine RF, Miihkinen M et al (2020) Fluctuation-based super-resolution traction force microscopy. Nano Lett 20:2230–2245

15. Lee J, Salsman J, Foster J et al (2020) Lipid-associated PML structures assemble nuclear lipid droplets containing CCTα and Lipin1. Life Sci Alliance 3. https://doi.org/10.26508/lsa.202000751 16. Pinder J, Salsman J, Dellaire G (2015) Nuclear domain "knock-in" screen for the evaluation and identification of small molecule enhancers of CRISPR-based genome editing. Nucleic Acids Res 43:9379–9392 17. Bucevičius J, Keller-Findeisen J, Gilat T et al (2019) Rhodamine–Hoechst positional isomers for highly efficient staining of heterochromatin. Chem Sci 10:1962–1970 18. Schindelin J, Arganda-Carreras I, Frise E et al (2012) Fiji: an open-source platform for biological-image analysis. Nat Methods 9:676–682 19. Culley S, Albrecht D, Jacobs C et al (2018) Quantitative mapping and minimization of super-resolution optical imaging artifacts. Nat Methods 15:263–266 20. Lam AJ, St-Pierre F, Gong Y et al (2012) Improving FRET dynamic range with bright green and red fluorescent proteins. Nat Methods 9:1005–1012


21. Shaner NC, Lambert GG, Chammas A et al (2013) A bright monomeric green fluorescent protein derived from Branchiostoma lanceolatum. Nat Methods 10:407–409 22. Fabricius V, Lefèbre J, Geertsema H et al (2018) Rapid and efficient C-terminal labeling of nanobodies for DNA-PAINT. J Phys D Appl Phys 51:474005 23. Carrington G, Tomlinson D, Peckham M (2019) Exploiting nanobodies and Affimers for superresolution imaging in light microscopy. MBoC 30:2737–2740 24. de Beer MA, Giepmans BNG (2020) Nanobody-based probes for subcellular protein identification and visualization. Front Cell Neurosci 14:573278 25. Jež M, Bas T, Veber M et al (2013) The hazards of DAPI photoconversion: effects of dye, mounting media and fixative, and how to minimize the problem. Histochem Cell Biol 139:195–204 26. Nieuwenhuizen RPJ, Lidke KA, Bates M et al (2013) Measuring image resolution in optical nanoscopy. Nat Methods 10:557–562

Chapter 15

Sample Preparation for Multicolor STED Microscopy

Walaa Alshafie and Thomas Stroh

Abstract

Stimulated emission depletion (STED) microscopy is one of the optical superresolution microscopy (SRM) techniques, more recently also referred to as nanoscopy, that have risen to popularity among biologists during the past decade. These techniques keep pushing the physical boundaries of optical resolution toward the molecular scale. Thereby, they enable biologists to image cellular and tissue structures at a level of almost molecular detail that was previously only achievable using electron microscopy. All the while, they retain the advantages of light microscopy, in particular with regard to sample preparation and flexibility of imaging. Commercially available SRM setups have become more and more available and also increasingly sophisticated, both in terms of optical performance and, importantly, ease of use. Institutional microscopy core facilities now offer widespread access to these types of systems. However, the field has grown so rapidly, and keeps growing, that biologists can easily be overwhelmed by the multitude of available techniques and approaches. From this vast array of SRM modalities, STED stands out in one respect: it is essentially an extension of an advanced confocal microscope. Most experienced users of confocal microscopy will find the transition to STED microscopy relatively easy as compared with some other SRM techniques. This also applies to STED sample preparation. Nonetheless, because resolution in STED microscopy depends not only on the wavelength of the incident light and the numerical aperture of the objective, but crucially also on the square root of the intensity of the depletion laser and, in general, on the photochemical interaction of the fluorophore with the depletion laser, some additional considerations are necessary in STED sample preparation. Here we describe single-color staining of the somatostatin receptor subtype 2A (SSTR2A) and dual-color staining of the trans-Golgi network protein TGN38 and the t-SNARE syntaxin-6 for STED in the endocrine cell line AtT20, as well as STED imaging of the samples, providing the protocols in as general a form as possible. The protocols in this chapter are used in this way in an institutional microscopy core facility.

Key words Superresolution microscopy, Stimulated emission depletion (STED), Optical resolution, Immunocytochemistry, Organic fluorophores, Antibodies, Nanobodies, SSTR2A, TGN-38, Syntaxin-6

Abbreviations

BSA  Bovine serum albumin
DMEM  Dulbecco's Modified Eagle's Medium
FBS  Fetal bovine serum
NA  Numerical aperture
NGS  Normal goat serum
PFA  Paraformaldehyde solution
PBS  Phosphate-buffered saline
PSF  Point spread function
RI  Refractive index
RT  Room temperature
SNR  Signal to noise ratio
SRM  Super-resolution microscopy
SSTR2A  Somatostatin receptor subtype 2A (mouse)
STED  Stimulated emission depletion
TGN38  A type 1 transmembrane protein located in the trans-Golgi network

1 Introduction

The resolving power of a fluorescence-based light microscope is limited by the physical nature of light as an electromagnetic wave. As the wave propagates, it diffracts, or bends, as it encounters a slit or an aperture like the one present in the microscope's optical path. This diffraction smears out the light emitted from a point in the specimen, when imaged by a wide-field microscope, into an Airy disk (also known as the point spread function). This limit was defined by the German physicist Ernst Abbe in 1873 [1, 2]. Abbe determined the diffraction limit through the equation d = λ/(2NA), where λ is the wavelength of light and NA is the numerical aperture of the light-collecting lens. In theory, for 488 nm light, the smallest distance that can be resolved by a wide-field microscope for a pair of points in the specimen is approximately 200 nm in XY and 600 nm in Z. The actual resolution of the wide-field microscope is degraded by the contribution of light from above and below the focal plane to approximately 500 nm in XY. The confocal concept was introduced by Minsky to improve the optical sectioning capability of the wide-field microscope for specimens that are thicker than the focal plane [3]. In a typical confocal microscope, the laser is focused into a point and scanning mirrors are used to sweep the laser point across the specimen, generating an image pixel by pixel. While the laser point excites fluorescent molecules throughout the entire cone of illumination, a pinhole aperture is placed in the emission path to reject the majority of the out-of-focus fluorescence. Thus, only fluorescence originating in the focal volume is captured by the detector. This rejection of out-of-focus light improves the resolution slightly, toward its theoretical value of approximately 250 nm in XY and 600 nm along the Z axis. In 1994, Hell and Wichmann [4, 5] proposed the idea of stimulated emission depletion (STED) microscopy, which provides sub-diffraction resolution by using a donut-shaped depletion (STED) laser superimposed, with a slight delay, onto the region excited by the regular excitation beam also used in confocal imaging.
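As a worked instance of the Abbe limit introduced above (the numerical aperture value below is an assumed, typical high-NA oil objective, not a value specified in the text):

```latex
% Worked example of the lateral diffraction limit, assuming NA = 1.4.
\[
d_{xy} \;=\; \frac{\lambda}{2\,\mathrm{NA}}
       \;=\; \frac{488\ \mathrm{nm}}{2 \times 1.4}
       \;\approx\; 174\ \mathrm{nm}
       \quad (\text{in practice on the order of } 200\ \mathrm{nm})
\]
```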


The donut-shaped depletion beam is usually generated by placing a vortex phase plate in the path of the depletion laser, which produces a donut-shaped focal pattern. The STED donut suppresses the fluorescence emission from the fluorophores located in the outer areas of the excited spot, resulting in a reduction in the effective width of the PSF. This suppression is achieved through stimulated emission. In brief, when an excited fluorophore encounters a depletion photon that matches the energy difference between the excited and the ground state, it is stimulated to emit a photon with properties identical to the incident photon. The STED depletion wavelength is chosen to match the tail of the emission spectrum of the excited fluorophore, so that the stimulated-emission photons are of a longer wavelength and can be excluded from detection. Following this stimulated emission, the fluorescent molecule returns to the ground state (S0) and no fluorescence is detected. Thus, in STED microscopy the resolution is related not only to the wavelength of the incident light and the NA of the objective but also to the square root of the intensity of the STED laser (I): d ≈ λ / (2NA √(1 + I/Is)), where Is is the STED intensity at which half of the fluorescence is quenched. While 2D STED does not improve axial resolution, 3D STED can improve the Z resolution to below 130 nm (typically 100 nm, depending on the sample). However, this comes at the cost of reduced lateral STED resolution. The most common way of generating the 3D STED effect is the use of two superimposed, incoherent STED beams, one producing the donut-shaped focus and confining the fluorescence laterally, and the other producing a bottle-shaped focus and confining the fluorescence axially [6, 7]. In all these cases, achieving the STED effect, and the extent of the improvement in resolution that can be achieved, is critically dependent on sample quality, in particular the choice of fluorophores to match the depletion lasers and the labeling density. Here we will describe protocols for single- and dual-color STED sample preparation and microscopy as they are used in a microscopy core facility serving mainly neurobiologists and cell biologists. In particular, we will illustrate staining for the somatostatin receptor subtype 2A in single-color STED in the endocrine cell line AtT20 (Fig. 1), and for the t-SNARE syntaxin-6 as well as the trans-Golgi network protein TGN-38 in dual-color STED in the same cells (Fig. 2).
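A worked instance of the STED resolution scaling just given; the wavelength, NA, and depletion intensity below are illustrative assumptions only, not values from the text.

```latex
% Worked example of the STED resolution formula, assuming lambda = 640 nm,
% NA = 1.4, and a depletion intensity of I = 30 I_s.
\[
d \;\approx\; \frac{\lambda}{2\,\mathrm{NA}\,\sqrt{1 + I/I_s}}
  \;=\; \frac{640\ \mathrm{nm}}{2 \times 1.4 \times \sqrt{1 + 30}}
  \;\approx\; 41\ \mathrm{nm}
\]
```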

2 Materials

2.1 Buffers

1. PBS–0.05% saponin: To make 50 mL, add 25 mg of saponin to 50 mL of PBS and mix well. Store at 4 °C.
2. Permeabilization–blocking buffer: PBS–0.05% saponin + 2% BSA + 5% NGS; prepare fresh. To make 10 mL, add 200 mg BSA and 0.5 mL NGS to 5 mL of PBS–0.05% saponin buffer in a 15 mL conical tube. Rock gently at 4 °C until the BSA is dissolved. Fill up to 10 mL with PBS–0.05% saponin buffer. Keep on ice until use.


Fig. 1 Comparison of resolution in single color confocal versus STED microscopy. (a) AtT20 cells were incubated for 3 min with 100 nM [D-Trp8]-SOM-14, then fixed and immunostained for SSTR2A, and the signal was visualized by secondary antibodies conjugated to Alexa Fluor 488 (a). The boxed areas in the upper panels are shown at higher magnification in the lower panels. Images were acquired using the 488 nm excitation and 595 nm STED lasers. The images were acquired as a single optical section through the cells. They were cropped and brightness/contrast were adjusted using ImageJ. Fluorescent spots appearing as single, large endocytic organelles by confocal are clearly resolved as consisting of several discrete endosomes by STED microscopy. The dashed lines represent the cell boundaries in the overview images. Scale bars = 1 μm. (b) Plot profile analysis of a line passing through a cytoplasmic fluorescent spot representing internalizing receptors, which appears as a single entity by confocal microscopy. In the STED image (lower right panel in a) and the corresponding intensity plot one can clearly resolve that there are two separate endocytic vesicles underlying the diffraction-limited spot in the confocal image


Fig. 2 Two-color confocal versus STED microscopy. AtT20 cells were labeled for the trans-Golgi network using antibodies against TGN38 and syntaxin-6 and secondary antibodies conjugated to Alexa Fluor 594 and STAR RED, respectively. The 2D images represent an enlarged region of the labeled TGN and its associated vesicles. They were acquired using a single depletion laser (775 nm, pulsed) in both the orange and red channels. Note the improvement in XY resolution over confocal microscopy in the STED images in both color channels achieved with this approach. Due to the perfect alignment of both channels in the single-depletion-laser imaging approach, it is also easily possible to state, based on the STED image, that both proteins are located in very close but clearly separate zones in the area of the trans-Golgi network. Images were cropped and adjusted for brightness and contrast using ImageJ. Scale bars = 1 μm

3. Antibody dilution buffer: PBS–0.05% saponin–1% normal goat serum; prepare fresh. To make 10 mL, add 100 μL of NGS to 10 mL of PBS–0.05% saponin buffer in a 15 mL conical tube and mix. Keep on ice until use.

2.2 Antibodies

1. Monoclonal rabbit anti-SSTR2A diluted 1:3000 in antibody dilution buffer.
2. Polyclonal rabbit anti-TGN38 diluted 1:100 in antibody dilution buffer.
3. Mouse anti-syntaxin-6 diluted 1:500 in antibody dilution buffer.


4. Alexa Fluor 488-conjugated goat anti-rabbit secondary antibodies diluted 1:800 in antibody dilution buffer.
5. Alexa Fluor 594-conjugated goat anti-rabbit secondary antibodies diluted 1:800 in antibody dilution buffer.
6. Goat anti-mouse STAR RED secondary antibodies diluted 1:100 in antibody dilution buffer.

2.3 Cells

1. AtT20/D16-16, mouse corticotropic pituitary adenoma cell line.
2. Round #1.5 coverslips (0.17 mm thickness), coated, for example, with poly-L-lysine. Use high-quality coverslips with minimal tolerance in thickness.

2.4 Reagents

1. Tetraspec fluorescent beads, 0.1 μm diameter.
2. Bovine serum albumin (BSA).
3. Normal goat serum (NGS).
4. Fetal bovine serum (FBS).
5. Dulbecco's Modified Eagle's Medium (DMEM).
6. Poly-L-lysine, Sigma-Aldrich.
7. Penicillin–streptomycin.
8. Trypsin/EDTA.
9. [D-Trp8]-somatostatin-14.
10. Phosphate-buffered saline (PBS), pH 7.4.
11. Paraformaldehyde (PFA) (4% w/v) in PBS.
12. Saponin.
13. Triton X-100.
14. ProLong Gold mounting medium.

2.5 Lab Ware

1. Bottle-top filter, 0.2 μm.
2. 100 mm cell culture dish.
3. Cell culture dish, 4-well.
4. Conical tube, 1.5 mL.
5. Conical tube, 15 mL.
6. Conical sterile tube, 50 mL.
7. Cover glasses #1.5.
8. Cryovials.
9. Cryobox.
10. Parafilm.
11. Pipette tips, P200.
12. Pipette tips, P1000.


13. Plastic serological pipette, 5 mL.
14. Plastic serological pipette, 10 mL.

2.6 Microscope

Commercial Abberior Expert Line STED microscope equipped with:
1. 405, 488, 561, and 640 nm excitation lasers.
2. Two pulsed STED depletion lasers at 595 nm and 775 nm.
3. Olympus Plan-Apo 100×/1.40 NA oil objective.
4. Avalanche photodiode (APD) detectors.

3 Methods

The major concerns in STED experiments are mainly related to fluorescent labels and sample preparation. In STED microscopy, sensitivity and speed are potentially at odds with the goal of achieving super-high resolution in the realm of a few dozen nanometers. Fluorescent dye molecules suffer from the high-energy excitation needed when using the depletion laser in order to be able to collect enough photons from the remaining, greatly reduced excited spot. This may result in pronounced photobleaching. However, optimal sample fixation and mounting are also critical to the outcome of STED experiments. In addition, the high photosensitivity of living cells makes live-cell STED imaging experiments highly difficult. Below are some basic considerations for a successful STED experiment.

3.1 Sample Preparation Considerations

1. Seed the cells on coverslips of appropriate thickness to prevent refractive index (RI) mismatch, which could result in spherical aberrations. Most objective lenses are designed to be used with 170 μm-thick coverslips (#1.5 coverslips) (see Note 1).
2. Sample thickness using live or fixed cells is usually only up to 10 μm. However, thick samples can also be imaged, for instance frozen brain sections ranging from 10 to 40 μm in thickness. In such experiments, clearing the tissue is mandatory to enable deep penetration of the depletion laser without losing its modulation and, as a consequence, the STED effect. We recommend SCALE buffer for this purpose when working with frozen sections [8].
3. When imaging tissue sections, it is critical to mount the sections directly onto the coverslip, then place them onto an appropriately sized drop of embedding medium on the object slide.
4. Fixation must be optimized for every cell or tissue type. Generally, cross-linking approaches (aldehyde fixation) or precipitation methods (methanol, ethanol, or acetone) are most commonly used.


5. The most commonly used aldehyde fixative is freshly prepared formaldehyde solution (typically 2–4% w/v) buffered to a neutral pH, prepared from paraformaldehyde powder. The freshly depolymerized paraformaldehyde solution can be stored at 4 °C in the dark for 1–2 weeks, or be aliquoted and stored at −20 or −80 °C for longer storage. Methanol fixation is appropriate if studying cytoskeletal proteins such as tubulin or other structural proteins. Methanol should be chilled to −20 °C before use.
6. Mount the sample in a high-refractive-index mounting medium with antifade agents, for instance ProLong Gold, ProLong Diamond, or comparable compounds.
7. These embedding media cure when in contact with air. This may be desirable to produce long-lasting samples. However, the flattening artifacts introduced by the curing of the embedding medium may be deleterious to the imaging when studying larger organelles with a complex 3D structure (e.g., the Golgi apparatus) or when doing 3D STED. In such cases, seal the coverslip immediately with nail polish to prevent the embedding medium from curing.
8. Position the coverslip near the center of the slide.

3.2 Choosing Fluorophores for Single Color STED Experiments

To avoid the excessive photobleaching imposed in STED imaging experiments by the necessity of high-power excitation, great attention must be paid to the fluorophores. In general, the ideal fluorophores for STED imaging need to meet the following criteria:
1. Be bright, of high quantum yield and high photostability, exhibiting low photobleaching (see Note 2; Table 1).
2. The excitation spectrum of the fluorophore must be compatible with the available excitation laser wavelengths on your STED microscope (Table 1).
3. The tail of emission of the dye must overlap with the STED depletion wavelength.
4. The dye should not be excited by the STED wavelength.
5. An example of a good fluorophore/laser combination is, for instance, Atto 647, STAR RED, or STAR 635P (Table 1) when depleted with the 775 nm STED laser. Because of the optimal interaction of these dyes with the 775 nm depletion laser, comparatively low excitation is needed to excite the respective fluorophore. The lower excitation required for this setting is advantageous because it minimizes photobleaching. Similarly, the 595 nm STED laser interacts optimally with any photostable green-emitting fluorophore, for instance Alexa Fluor 488 (Table 1).

Table 1 List of some of the most commonly used fluorophores for STED microscopy

| Fluorophore | λex | λSTED | Reference |
|---|---|---|---|
| Organic fluorophores | | | |
| Alexa Fluor 488 | 490 nm | 595 nm | [11–14], https://www.thermofisher.com/ |
| Star 488 | 503 nm | 595 nm | https://abberior.shop/ |
| Oregon Green 488 | 501 nm | 595 nm | [15], https://www.thermofisher.com/ |
| Cy3 | 555 nm | 660 nm | [16], https://www.jacksonimmuno.com/ |
| Star 580 | 587 nm | 660 and 775 nm | [17], https://abberior.shop/ |
| Alexa Fluor 594 | 590 nm | 660 and 775 nm | [18, 19], https://www.thermofisher.com/ |
| Atto 594 | 603 nm | 660 and 775 nm | [20], https://www.atto-tec.com/ |
| Atto647N | 646 nm | 775 nm | [21, 22], https://www.atto-tec.com/ |
| Star 635P | 638 nm | 775 nm | [21, 22], https://abberior.shop/ |
| KK114 (= STAR RED) | 638 nm | 775 nm | [20, 21], https://abberior.shop/ |
| Live cell dyes and fluorescent proteins | | | |
| GFP | 490 nm | 595 nm | [13, 14, 23] |
| YFP | 514 and 488 nm (a) | 595 nm | [13, 14, 23] |
| mCitrine | 516 nm | 595 nm | [24, 25] |
| mNeptune2 | 586 nm | 775 nm | [26] |
| MitoPB Yellow | 488 nm | 660 nm | [27] |
| ATTO590 (b) | 590 nm | 775 nm | [28] |
| SiR-based probes and tags (b) | 650 nm | 775 nm | [28–30] |
| SNAP-, HALO-, or CLIP-tags | λex and λSTED dependent on the ligands used | | [31] |
| Exchangeable labels (examples): membrane stains (Nile Red, FM4-64), actin labeling peptide (Lifeact-Alexa Fluor 594), DNA dyes (JF646-Hoechst or SiR-Hoechst) | | | [32] |

(a) YFP can be excited using the 488 nm laser line in case a dedicated 514 nm line is not available
(b) As substrates for SNAP- or HALO-tags

Here we used, for instance, the 488 nm excitation laser with the 595 nm depletion laser to image internalizing SSTR2A vesicles labeled with Alexa Fluor 488 in AtT20 cells


stimulated with a somatostatin analog ([D-Trp8]-somatostatin-14) (Fig. 1). On the other hand, when the 775 nm STED laser is used to deplete an orange-emitting dye, higher depletion and excitation laser powers are needed due to the less optimal interaction of the orange fluorophore with the 775 nm STED laser. The higher excitation laser requirement results in comparatively faster photobleaching of the fluorophore. In this case, we find that labeling density/signal abundance plays a role in defining whether the orange dye can be depleted with the pulsed 775 nm STED laser. Signal abundance can reflect the protein concentration in the sample under ideal conditions, or it may simply reflect the primary–secondary antibody interaction or any poorly optimized step along the staining protocol. Thus, great attention must be paid to optimizing every single step of the staining protocol, especially the choice of the primary antibody and its concentration, as well as the secondary antibody concentration (see Note 2).
6. If doing live-cell experiments, far-red fluorophores are preferable because of the reduced phototoxicity of the required excitation as compared with shorter-wavelength dyes.

3.3 Choosing Fluorophores for Two Color STED Experiments

Double-immunolabeling STED imaging experiments can be approached in two ways. The appropriate choice of fluorophore combinations is always critical to the success of the experiment (see Note 3):
1. Two-channel imaging with one depletion wavelength: In this approach a single depletion laser beam is used in both color channels. Therefore, the coalignment between the depletion laser and the two excitation beams is inherently perfect, since only one depletion laser is used and the location of its central null-intensity spot is detected in both channels. However, both dyes must have at least sufficient STED efficiency at the same depletion wavelength. The most common microscope setup used to perform this type of experiment is the pulsed 775 nm depletion laser together with a red-emitting fluorophore and an orange-emitting dye with sufficient interaction efficiency with the depletion laser. The sample is labeled with an orange dye such as Alexa Fluor 594 and a far-red dye such as STAR RED (Table 1). However, because the 775 nm laser interacts suboptimally with the orange dye, the abundance of the target antigen in the sample plays a role, as described above (cf. Subheading 3.2, step 5). In our present example, when performing double immunolabeling for the t-SNARE syntaxin-6 and the trans-Golgi protein TGN38, it was optimal to use the red fluorophore, STAR RED, which has a better interaction with the 775 nm depletion laser than Alexa Fluor 594, to label the less abundant target, syntaxin-6, and the orange dye, Alexa Fluor 594, to label the more abundant TGN38 (Fig. 2).


So, an extra optimization step is required when using the one-STED-laser approach to find the optimal target–fluorophore combination (see Note 4).
2. Two-channel imaging with two depletion wavelengths: In this approach, different excitation/depletion laser pairs are used for each fluorophore. The sample is, for instance, labeled with a green-emitting dye such as Alexa Fluor 488 and a red-emitting dye such as STAR RED, and the stimulated emission depletion is done using the 595 and 775 nm lasers, respectively (Table 1). In this approach both fluorophores interact potentially equally well with their respective depletion lasers, so that optimal depletion will be achieved in both channels. Perfect overlap of the two depletion laser beams at the focal plane is a concern in this approach. This has to be verified before every experiment to avoid false-positive or false-negative colocalization findings. For that, a Tetraspec bead sample is an ideal control for checking the laser coalignment by superimposing the respective bead images from the two channels. Alternatively, one target protein can be labeled with both fluorophores used in the experiment in a separate control experiment. In this control, the colocalized signals should be perfectly superimposed. In case of misalignment, the images can be realigned after acquisition using any image processing software, for example the manual drift correction plugin or the TransformJ plugin of the freeware ImageJ (a minimal sketch of such a shift correction is given below). Another highly important consideration when using the two-depletion-laser approach is to image the longer wavelength first; otherwise the 595 nm depletion laser will bleach all signal in any longer-wavelength channels.
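The following is a minimal sketch of the kind of post-acquisition shift correction mentioned above, written with plain numpy rather than the named ImageJ plugins. It assumes two same-sized 2D arrays (for example, the two channels of a bead control image), estimates only an integer-pixel translation, and wraps around the image edges; it is an illustration, not the plugins' actual algorithm.

```python
# Minimal sketch: estimate and apply an integer-pixel channel shift
# by FFT-based cross-correlation.
import numpy as np

def estimate_shift(reference, moving):
    """Estimate the (dy, dx) shift to apply to `moving` so it aligns with `reference`."""
    corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(moving)))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    shape = np.array(corr.shape)
    shift = np.array(peak, dtype=int)
    # Map peaks in the upper half of the spectrum to negative shifts.
    mask = shift > shape // 2
    shift[mask] -= shape[mask]
    return tuple(shift)

def apply_shift(moving, shift):
    """Apply an integer-pixel (dy, dx) shift; edges wrap around."""
    return np.roll(moving, shift, axis=(0, 1))

# Usage: shift = estimate_shift(bead_channel_1, bead_channel_2)
#        aligned = apply_shift(bead_channel_2, shift)
```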

3.4 General Considerations and Controls for Multichannel Imaging Experiments

1. Choosing fluorophores with well-separated excitation and emission spectra is preferred (Table 1).
2. To control for cross-talk, also referred to as bleed-through, a series of single-labeled and unlabeled controls is needed, as follows:
2.1 Prepare samples with single-labeled controls for each fluorophore in your multilabeling protocol.
2.2 Use the fully labeled sample to adjust the image acquisition parameters for each fluorophore separately, including illumination intensity, laser power, and scan speed.
2.3 Adjust acquisition parameters starting with a relatively low depletion laser power (e.g., 20 or 30%) and low excitation to avoid photobleaching, then increase the power of both the excitation and depletion lasers incrementally as needed to achieve your desired resolution and contrast while avoiding saturation. If your microscope does not automatically adjust for Nyquist sampling, be mindful of choosing a pixel size amenable to achieving the resolution you desire (see Note 5).


2.4 Use the same settings for imaging the control samples.
2.5 Image the single-labeled controls through all channels and carefully analyze potential cross-talk in the unlabeled channels.
2.6 Adjust the imaging parameters in each channel so that there is no or minimal cross-talk into the unlabeled channels.
2.7 To control for autofluorescence, image with each filter set an unlabeled sample (omission of primary antibody) that has gone through the staining protocol.
3. Do not label with DAPI or Hoechst fluorophores in STED experiments, because they can be excited by the 595 nm depletion laser and cause a diffuse background.
4. If staining the nuclei is necessary as a counterstain, ensure that the STED depletion laser does not excite the confocal counterstain.
5. In dual-channel STED experiments using two different depletion lasers, always image the two channels sequentially. Always image the longer-wavelength channel first to avoid the shorter-wavelength depletion laser bleaching the longer-wavelength fluorophores.

3.5 Making Subdiffraction Fluorescent Bead Samples

1. Dilute fluorescent beads (e.g., 100-nm diameter Tetraspec fluorescent spheres) with distilled water to 1:5000 (v/v) before use.
2. Sonicate the beads for at least 5 min in an ultrasonic bath to disperse aggregates.
3. Apply 10 μL of bead suspension onto a poly-L-lysine–coated coverslip, spread it evenly using the edge of a second coverslip, and let it air-dry.
4. Mount using the same medium as for your biological samples and cover with a #1.5 coverslip (see Note 1).
5. Seal the coverslip with nail polish.
6. Bead samples are used to routinely calibrate the microscope.

3.6 Cell Culture

1. All cell manipulations must be done in a Class II biosafety cabinet with sterile media and solutions prewarmed to 37 °C.
2. Cells are cultured in a 10 cm dish with 10 mL of complete DMEM medium containing 10% FBS and penicillin–streptomycin in a 37 °C incubator with 5% CO2.


3. Wash the confluent adherent cells twice with 5 mL of sterile PBS and discard the PBS.
4. Add 2 mL of trypsin to the cell plates.
5. Incubate the plates in the 37 °C–5% CO2 incubator for 2–3 min until the cells have detached.
6. Verify under the microscope that the cells have detached and then inactivate the trypsin by adding 5 mL of complete DMEM.
7. Collect the cells in a 10 mL conical tube.
8. Centrifuge at 1.2 × g for 3 min to pellet the cells and discard the supernatant.
9. Resuspend each cell pellet in complete DMEM and count the cells.
10. Plate approximately 100,000 AtT20 cells onto poly-L-lysine–treated #1.5 coverslips in 4-well dishes.
11. Transfer the plates to the 37 °C–5% CO2 incubator for overnight incubation prior to immunostaining.
12. For cell passage, transfer 10% of the cells into a 10 cm dish containing 10 mL of complete DMEM.

3.7 Immunofluorescence

In general, a conventional immunofluorescence protocol may be used to prepare samples for STED imaging. However, additional precautions are important to ensure high photostability and brightness (photon yield) to allow for high contrast and good results in STED microscopy (see Note 2).
1. Fix the cells for 15 min using 4% PFA in PBS.
2. Wash three times in PBS, 5 min each.
3. Block nonspecific binding sites for 15 min at room temperature using PBS–2% BSA–5% normal goat serum (from the same species as the secondary antibody host, in our case NGS) and permeabilize the membranes by adding the appropriate detergent for your sample to the blocking buffer. In our hands, 0.05% saponin provided the best compromise between structural preservation and efficient permeabilization.
4. Dilute the antibodies to their final concentrations in antibody dilution buffer (PBS–0.05% saponin–1% normal goat serum); the appropriate antibody dilution must be established experimentally for your respective application (see Note 6).
5. Centrifuge each antibody solution at 15,000 × g for at least 1 min to precipitate aggregates.
6. Incubate overnight at 4 °C with the primary antibodies. Do not allow the sample to become dry.
7. At least an omission-of-primary-antibody negative control for each antibody in your staining protocol should be carried out in each experiment to determine the level of nonspecific binding of the secondary antibodies.


For that, incubate one coverslip with antibody dilution buffer without primary antibody overnight at 4 °C and then continue treating it as the other experimental coverslips (see Note 7).
8. Wash three times, 5 min each, with PBS.
9. Incubate for 1 h with the appropriate species-specific secondary antibodies conjugated to the appropriate fluorophores in antibody dilution buffer. The dilution of the secondary antibodies should be optimized to obtain the best signal-to-noise ratio (SNR) (see Notes 2, 8; cf. Table 1).
10. Wash three times (5 min each) with PBS.
11. Mount with an appropriate mounting medium with antifade reagent, for example ProLong Gold or Diamond, onto glass slides (see Notes 9–12).

3.8 Imaging Considerations

1. Ensure that the STED depletion beam and the excitation beam are well aligned before starting the experiment. Align them if necessary. Many commercial systems provide autoalignment tools; use them freely.
2. Averaging: The improved resolution of STED microscopy requires a small pixel size in order to conform with the Nyquist sampling paradigm, which requires that the pixel size be 2.3 times smaller than the resolution of the microscope. Therefore, in STED microscopy, when aiming at 50 nm resolution or less, the intensity collected at each pixel is going to be very low. It is therefore recommended to average fluorescence from multiple frames or line scans at the same position rather than increasing the excitation laser power too much, to avoid undue bleaching of the sample (see Note 13).
3. Fast scanning is recommended to reduce photobleaching.
4. It is not usually necessary to use the full power of the STED depletion laser to achieve super-resolution (this depends on the resolution required to answer the experimental question). The incremental increase of depletion power allows for resolution scaling in STED microscopy. To this end, an optimal balance is necessary between achieving the required resolution by narrowing the point spread function (PSF) through increased STED depletion power and maintaining a good SNR. The STED laser power should be preoptimized for each fluorophore and sample; generally, 30% depletion power is a good starting point (see Note 14).
5. The choice of fluorophores depends on the wavelengths of the depletion and excitation lasers available on your STED system (Table 1).


6. Time-gated detection is recommended because the signal that is detected during the first nanosecond or so has not been fully depleted by the STED beam and will therefore mostly contribute to a blurred background. Implementing gated detection allows rejection of the non-optimally depleted early signal arriving at the detector and thus collection of only the highly resolved signal [9].
7. Scanning in sequential mode is recommended for most dual-color experiments to reduce cross-talk between channels. Only if the fluorescent dyes are sufficiently spectrally separated may they be acquired simultaneously (see Note 15).
8. When available, pixel-by-pixel or line-by-line acquisition schemes should be used to minimize drift between the color channels in multilabeling experiments.

3.9 Postimaging Considerations

1. If fluorophores with poorly separated emission spectra need to be used in multicolor experiments, it is recommended to apply linear unmixing as a postprocessing step in order to quantitatively separate the signals from the respective dyes (a minimal sketch is given after this list).
2. Image deconvolution will help to improve the resolution and SNR of the acquired images, even in STED microscopy. However, the restored image has to be critically examined to exclude the introduction of any artifacts.
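The following is a minimal sketch of linear spectral unmixing as referred to in item 1. It assumes that the detected channel intensities are a linear mixture of the dye contributions and that a mixing matrix has been measured from single-labeled controls; the matrix values and image sizes below are hypothetical examples, not parameters from the text.

```python
# Minimal sketch: per-pixel linear unmixing by least squares.
import numpy as np

# mixing[i, j] = fraction of dye j's signal detected in channel i (hypothetical values)
mixing = np.array([[1.00, 0.15],   # channel 1: mostly dye A, some bleed-through of dye B
                   [0.10, 1.00]])  # channel 2: mostly dye B, some bleed-through of dye A

def unmix(channels, mixing_matrix):
    """channels: array of shape (n_channels, y, x); returns per-dye images."""
    n_ch, ny, nx = channels.shape
    pixels = channels.reshape(n_ch, -1)                          # (n_channels, n_pixels)
    dyes, *_ = np.linalg.lstsq(mixing_matrix, pixels, rcond=None)
    return dyes.reshape(-1, ny, nx)

# Example with synthetic data: two 64 x 64 channels
channels = np.random.rand(2, 64, 64)
unmixed = unmix(channels, mixing)
```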

4 Notes

1. The thickness of the coverslip must be chosen according to the objective lens specifications. Some objectives can be corrected for a different cover glass thickness or have a correction collar that can be adjusted to match the thickness of the cover glass.
2. Choose photostable, bright fluorophores whose emission spectrum peak lies close to the wavelength of the STED laser beam in order to be depleted with minimal power, for example the STAR RED dye with the 775 nm STED laser.
3. Reduce signal cross-talk in multicolor STED images by choosing fluorophores with clearly separated spectral properties.
4. For multicolor experiments, using the single-STED-laser approach removes the risk of laser misalignment; however, an extra optimization step is required to determine the best secondary antibody to use for a given primary antibody.
5. The Nyquist sampling theorem dictates that any object in the analog world must be sampled at least 2.3 times in order to be correctly represented in the digital world. In microscopy terms, if you want to be able to resolve 50 nm, your pixel size must be 21.7 nm or smaller (see the sketch after these notes).


6. Small probes such as camelid antibodies (nanobodies), as compared with conventional antibodies (IgG, etc.), are advantageous in experiments requiring measurements of structures. Their small size makes them a much smaller, more negligible spacer between the antigen of interest and the fluorophore (which is what the microscope localizes), thus improving the resolution and accuracy of measurements.
7. Proper verification of the specificity and selectivity of primary antibodies is a must in all immunocytochemistry, not only for STED experiments. The best controls in this regard are stainings of cells/sections from knock-out animals for your antigen of interest in parallel with your experimental samples. If a knockout is not available, siRNA knockdown or knockout by CRISPR/Cas9 offers a good alternative. Of course, these negative controls for antibody specificity need not be analyzed by STED, but all primary antibodies used in STED experiments should be verified in this or a comparably reliable manner [10].
8. Far-red fluorophores are preferable for live-cell experiments because of reduced phototoxicity.
9. The refractive index of the mounting medium should match that of the objective lens as closely as possible to avoid optical aberrations.
10. Seal coverslips with nail polish immediately after mounting if you want to prevent 3D artifacts through shrinkage during curing of the mounting medium.
11. If imaging tissue sections rather than cells, mount the sections directly onto a coated coverslip first, then place them slowly onto a drop of appropriate mounting medium on the object slide.
12. Clearing might be needed for thick samples.
13. Use highly magnifying objectives of high numerical aperture, usually 100× oil-immersion objectives, NA 1.4 or higher. Bear in mind that such objectives are usually optimized to work with a coverslip thickness of 0.17 mm (coverslip #1.5).
14. While using 100% of the power of the STED depletion laser leads to the best theoretical resolution in STED microscopy, it is not necessarily advisable to go that high when imaging biological samples. This will lead to rapid photobleaching of your sample by requiring very high excitation energy to generate enough detectable photons from the tiny nondepleted area. Rather, adjust the STED laser power to a level optimal for a given sample to achieve the resolution you need through narrowing the PSF while maintaining a good SNR (resolution scaling).


15. Always image sequentially, imaging the longer-wavelength channel first, to prevent the shorter-wavelength depletion laser from bleaching the longer-wavelength fluorophore when using multiple depletion lasers!
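The Nyquist rule of thumb in Note 5 amounts to a one-line calculation; the helper below simply reproduces it, with the 50 nm example from the note.

```python
# Minimal sketch of the Nyquist rule of thumb from Note 5: the pixel size
# should be at least 2.3-fold smaller than the resolution you aim to achieve.
def nyquist_pixel_size_nm(target_resolution_nm, sampling_factor=2.3):
    """Largest pixel size (nm) that still samples the target resolution adequately."""
    return target_resolution_nm / sampling_factor

print(nyquist_pixel_size_nm(50))  # ~21.7 nm, as in Note 5
```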

Acknowledgments

This work was supported by Discovery Grant RGPIN/060962016 from the Natural Sciences and Engineering Research Council of Canada (NSERC) and an operating grant to improve and develop super-resolution microscopy at the Montreal Neurological Institute by the Bachynski Family Foundation to T.S. W.A. was the holder of a Jeanne Timmins Costello Fellowship from the Montreal Neurological Institute. We are deeply thankful to Naomi Takeda for administrative support.

References 1. Abbe E (1873) Beiträge zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung. Arch Mikrosk Anat 9:413–468 2. Abbe E (1884) Note on the proper definition of the amplifying power of a lens or a lens-system. J R Microsc Soc 4:348–351 3. Inoué S (2006) Handbook of biological confocal microscopy. Springer, Boston 4. Hell SW, Wichmann J (1994) Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy. Opt Lett 19(11):780–782 5. Westphal V, Hell SW (2005) Nanoscale resolution in the focal plane of an optical microscope. Phys Rev Lett 94(14):143903 6. Han KY et al (2009) Three-dimensional stimulated emission depletion microscopy of nitrogen-vacancy centers in diamond using continuous-wave light. Nano Lett 9(9):3323–3329 7. Willig KI et al (2007) STED microscopy with continuous wave beams. Nat Methods 4(11):915–918 8. Hama H et al (2011) Scale: a chemical approach for fluorescence imaging and reconstruction of transparent mouse brain. Nat Neurosci 14(11):1481–1488 9. Birk UJ (2017) Super-resolution microscopy. Wiley-VCH Verlag GmbH 10. Laflamme C et al (2019) Implementation of an antibody characterization procedure and

application to the major ALS/FTD disease gene C9ORF72. eLife 8 11. Vicidomini G et al (2011) Sharper low-power STED nanoscopy by time gating. Nat Methods 8(7):571–573 12. Bianchini P et al (2015) STED nanoscopy: a glimpse into the future. Cell Tissue Res 360(1):143–150 13. Chéreau R, Tønnesen J, Nägerl UV (2015) STED microscopy for nanoscale imaging in living brain slices. Methods 88:57–66 14. Tønnesen J et al (2011) Two-color STED microscopy of living synapses using a single laser-beam pair. Biophys J 101(10):2545–2552 15. Moneron G et al (2010) Fast STED microscopy with continuous wave fiber lasers. Opt Express 18(2):1302–1309 16. Deng S et al (2018) Effects of donor and acceptor's fluorescence lifetimes on the method of applying Forster resonance energy transfer in STED microscopy. J Microsc 269(1):59–65 17. Oracz J et al (2017) Photobleaching in STED nanoscopy and its dependence on the photon flux applied for reversible silencing of the fluorophore. Sci Rep 7(1):11354 18. Ding JB, Takasaki KT, Sabatini BL (2009) Supraresolution imaging in brain slices using stimulated-emission depletion two-photon laser scanning microscopy. Neuron 63(4):429–437


19. Wildanger D et al (2009) A compact STED microscope providing 3D nanoscale resolution. J Microsc 236(1):35–43
20. Göttfert F et al (2013) Coaligned dual-channel STED nanoscopy and molecular diffusion analysis at 20 nm resolution. Biophys J 105(1):L01–L03
21. Wurm CA et al (2012) Novel red fluorophores with superior performance in STED microscopy. Optical Nanoscopy 1(1):1–7
22. Fendl S et al (2017) STED imaging in Drosophila brain slices. Methods Mol Biol 1563:143–150
23. Tam J, Merino D (2015) Stochastic optical reconstruction microscopy (STORM) in comparison with stimulated emission depletion (STED) and other imaging methods. J Neurochem 135(4):643–658
24. Griesbeck O et al (2001) Reducing the environmental sensitivity of yellow fluorescent protein. Mechanism and applications. J Biol Chem 276(31):29188–29194
25. Hein B, Willig KI, Hell SW (2008) Stimulated emission depletion (STED) nanoscopy of a fluorescent protein-labeled organelle inside a living cell. Proc Natl Acad Sci U S A 105(38):14271–14276

26. Wegner W et al (2017) In vivo mouse and live cell STED microscopy of neuronal actin plasticity using far-red emitting fluorescent proteins. Sci Rep 7(1):11781
27. Wang C et al (2019) A photostable fluorescent marker for the superresolution live imaging of the dynamic structure of the mitochondrial cristae. Proc Natl Acad Sci U S A 116(32):15817–15822
28. Bottanelli F et al (2016) Two-colour live-cell nanoscale imaging of intracellular targets. Nat Commun 7(1):1–5
29. Lukinavičius G et al (2013) A near-infrared fluorophore for live-cell super-resolution microscopy of cellular proteins. Nat Chem 5(2):132–139
30. Erdmann RS, Toomre D, Schepartz A (2017) STED imaging of Golgi dynamics with Cer-SiR: a two-component, photostable, high-density lipid probe for live cells. Methods Mol Biol 1663:65–78
31. Vicidomini G, Bianchini P, Diaspro A (2018) STED super-resolved microscopy. Nat Methods 15(3):173–182
32. Spahn C et al (2019) Whole-cell, 3D, and multicolor STED imaging with exchangeable fluorophores. Nano Lett 19(1):500–505

Chapter 16

Single-Molecule Localization Microscopy of Subcellular Protein Distribution in Neurons

Jelmer Willems, Manon Westra, and Harold D. MacGillavry

Abstract

Over the past years several forms of superresolution fluorescence microscopy have been developed that offer the possibility to study cellular structures and protein distribution at a resolution well below the diffraction limit of conventional fluorescence microscopy (50 mW).

An adjustable mirror for adjusting the angle of illumination. Camera for fluorescence detection: sCMOS (ORCA Flash 4, Hamamatsu), with an effective pixel size of 117 nm (see Note 3). Integrated filters are used to split far-red emission onto the right side of the camera and blue-green-red emission spectra on the left side. The imaging chamber can be temperature controlled. Continuous feedback control over the focus position is critical. This is built into some microscope models (as is the case for the NanoImager), or can be added with separate accessories.

2.6 Software

Integrated ONI software is used for detection and fitting of single-molecule blinking events. Alternatively, freely available options can be used: 3D-DAOSTORM [24], Picasso [25], ThunderSTORM [26], DoM [27], ZOLA-3D [28], Fit3Dspline (integrated into SMAP) [29], SMAP [30], and DECODE [31]. For additional processing steps, we use MATLAB.

3 Methods

All steps are performed at room temperature, unless mentioned otherwise. See Fig. 2a for a flowchart indicating which steps to follow for each individual method. For direct labeling with antibodies, go to Subheading 3.3.

3.1 Transfection of Dissociated Hippocampal Rat Neurons

Perform all steps in a sterile flow hood. DNA plasmids for the generation of a Halo knock-in are transfected on day in vitro (DIV) 3, and those expressing an intrabody on DIV 14 (see Note 4). A sketch for scaling the reagent volumes of steps 1–3 to several coverslips follows this list.

1. Prepare fresh 300 μL BP incubation medium and 500 μL BP full medium for each coverslip to be transfected. Warm to 37 °C.

2. Prepare the Lipofectamine mix by diluting 3.3 μL Lipofectamine 2000 in 100 μL BrainPhys neuronal medium (without supplements) per coverslip to be transfected. Incubate for 5 min at room temperature.

3. For each coverslip, prepare a 1.5 mL microtube with 1 μg of DNA and add 100 μL BrainPhys neuronal medium (without supplements).

4. Add the Lipofectamine mix to the DNA mix, gently mix using a pipette, and incubate for 30 min. The DNA–Lipofectamine complex is stable for several hours at room temperature.

5. Transfer 50% (500 μL) of the conditioned medium from each well to a new 12-well plate. Add 500 μL fresh BP full medium to each well of this new plate. Place this 'new plate' in the incubator.

6. Add 300 μL BP incubation medium to each coverslip with neurons.

7. Using a pipette, gently drop the DNA–Lipofectamine mix onto the cells and place the plate in the incubator for 1–2 h.

8. Transfer the coverslips to the 'new plate'.

9. Grow the neurons until DIV 21. Refresh half the medium with new BP full medium once a week.

For HaloTag labeling, go to Subheading 3.2. For PALM imaging, go to Subheading 3.5.
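The per-coverslip volumes above scale linearly with the number of coverslips. The following minimal MATLAB sketch tabulates a master mix; the number of coverslips and the 10% pipetting excess are assumptions, not part of the protocol.

% Minimal sketch: scale the per-coverslip volumes in steps 1-3 to a full
% experiment. nCoverslips and the 10% pipetting excess are assumptions.
nCoverslips = 6;
excess      = 1.1;   % pipette a little extra to avoid running short
vol.BPincubation_uL  = 300 * nCoverslips * excess;   % step 1
vol.BPfull_uL        = 500 * nCoverslips * excess;   % step 1
vol.lipofectamine_uL = 3.3 * nCoverslips * excess;   % step 2
vol.lipoMedium_uL    = 100 * nCoverslips * excess;   % step 2
vol.DNA_ug           = 1   * nCoverslips;            % step 3 (1 ug per coverslip)
vol.dnaMedium_uL     = 100 * nCoverslips * excess;   % step 3
disp(vol)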

3.2 Live-Cell HaloTag Labeling

See Note 5 for more information about self-labeling enzymes. During the labeling procedure, prevent exposure to direct light as much as possible.

1. Prepare an incubation chamber with a piece of parafilm.

2. Dilute the HaloLigand (JF646) 1:1000 in conditioned medium (1 mL of medium is enough for 12 coverslips). Mix well by pipetting up and down.

3. Place drops of ~80 μL on the parafilm and gently place the coverslips upside down on these drops. Place the incubation chamber in the incubator (37 °C, 5% CO2) for 15 min.

4. Transfer the coverslips back to the conditioned medium and continue with fixation (Subheading 3.3) (see Note 5).

3.3 Fixation

See Note 6 for more information about the importance of fixation and other methods.

1. Freshly prepare and prewarm PEM-PFA mixture at 37 °C.

2. Remove medium from cells using a vacuum pump and add 500 μL fixative to the coverslip. Perform this step according to the PFA MSDS and handling protocols.

3. Incubate for 5–10 min with PEM-PFA.

4. Wash 3 times for 5 min with PBS-Gly (1 mL). Samples can be stored and kept stable for several days at 4 °C in PBS.

If no antibody staining is performed, go to Subheading 3.5.

3.4 Immunolabeling

1. Incubate coverslips with ~250 μL blocking buffer for 1 h at 37 °C (see Note 7).

2. Prepare primary antibody dilutions in antibody buffer (50 μL per coverslip).

3. Prepare an incubation chamber with parafilm. Place drops (~50 μL) of the antibody mixture on the parafilm and gently place the coverslips upside down on the drops.

4. Incubate for 2 h at room temperature or overnight at 4 °C (see Note 7).

5. Wash three times for 5 min with PBS-Gly.

6. Dilute secondary Alexa 647-conjugated antibodies 1:400 in antibody buffer (50 μL per coverslip).

7. Incubate the coverslips as in step 3 for 1 h at room temperature.

8. Wash three times for 5 min with PBS-Gly.

9. Postfixation (optional): Wash once with PBS (no glycine), perform another fixation with PEM-PFA for 5 min, and wash three times with PBS-Gly (see Note 8).

10. Store the coverslips in PBS until mounting. Samples remain stable for several days if kept at 4 °C and protected from light.

3.5 Sample Preparation and Mounting

3.5.1 Live-Cell PALM

1. Preheat the microscope chamber to 37 °C.

2. Preheat the extracellular buffer to 37 °C and filter (15 nm for STORM and >25 nm for PALM imaging. This difference has to do with the fact that organic dyes provide more photons per emission event compared to fluorescent proteins, and thus a better average localization precision.

5. Filtering on photon count: Additional filtering on photon count might help to reduce the amount of noise in the dataset due to localizations derived from nonemission events. Note that localizations with a low photon count often have a low localization precision as well, and most are probably filtered out when filtering just on the localization precision (a minimal sketch applying both filters follows).
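As a rough illustration of these two filters, the following MATLAB sketch removes localizations with poor precision or low photon count from an exported localization table. The file name, the column names (precision_nm, photons), and the photon cutoff are assumptions; adapt them to the output format of your fitting software.

% Minimal sketch: filter a localization list on precision and photon count.
% Column names and the photon cutoff are assumptions, not fixed values.
locs = readtable('localizations.csv');   % hypothetical export file
precCutoff_nm = 15;                      % ~15 nm (dSTORM) or ~25 nm (PALM)
minPhotons    = 500;                     % assumed noise cutoff
keep = locs.precision_nm <= precCutoff_nm & locs.photons >= minPhotons;
fprintf('Kept %d of %d localizations (%.1f%%).\n', ...
        nnz(keep), height(locs), 100*nnz(keep)/height(locs));
locsFiltered = locs(keep, :);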

3.9 Visualization and Data Analysis

1. Rendering/binning: The most common method of visualizing SMLM datasets is by binning the localizations into pixels. The localizations are converted to pixels and plotted as Gaussian distributions with the standard deviation set by the localization precision. Figure 3 shows examples of rendered superresolution plots with their diffraction-limited images, for both PALM imaging of mEos3.2-tagged intrabodies targeting PSD95 and dSTORM imaging of HaloTag-GluA1 and antibody-tagged Bassoon. Commonly used pixel sizes for rendering are in the range of 10–50 nm, or half the average localization precision. Combining a rendered image with a diffraction-limited snapshot of the same region allows visualization of the improvement in resolution and judgement of image quality, as much can be learned by "just looking at the thing" [32].

Fig. 3 SMLM of synaptic proteins. (a) Illustration of the expected localization of different synaptic proteins in the dendrite or axon. Scaffolding protein Bassoon localizes in the active zone of the presynaptic bouton, glutamate receptor GluA1 is localized on the dendritic membrane with an enrichment on the postsynaptic membrane, and PSD95 is a postsynaptic scaffolding protein. (b) Examples of dSTORM acquisitions of Bassoon using immunolabeling, endogenously tagged GluA1 with HaloTag, and live-cell PALM of PSD95 using expression of mEos3.2-fused intrabodies. Comparing the diffraction-limited image on the left part with the SMLM acquisition. Scale bars: 2 μm, zooms: 300 nm

2. Analyzing SMLM datasets: Extracting information about protein distribution from SMLM datasets can be challenging and depends highly on your research question. A good way to start is exploring the heterogeneity in protein density using the molecular coordinates of the localizations instead of rendered images (Fig. 4). Density can thus not only be determined from pixel intensity but can be calculated directly from the molecular coordinates. Examples of these are the so-called local density values and Voronoi diagrams. See Fig. 4b for a comparison of these different plotting methods.

3. The local density value is calculated as the number of localizations within a given radius (in this case 5 times the mean nearest neighbor distance (MNND)) [33]. To calculate the local density, we use the MATLAB functions knnsearch (for determining the NND) and rangesearch (for the local density). The outcome can be used as a color code for plotting, as well as for further analysis. We use 5 times the MNND as this normalizes for differences in overall localization density across the field of view and between datasets.

4. Voronoi diagrams aim to segment localizations into areas, reflecting the density based on the distance of each localization to its neighbors. Thus, the area of each so-called Voronoi cell reflects its density relative to the overall density of the acquisition. Voronoi diagrams can be generated using the MATLAB function voronoi. The area of individual Voronoi cells can be calculated from the output vertices. A minimal sketch of both the local density and the Voronoi calculation follows this list.

5. Statistical analysis: Both the local density and Voronoi diagrams can yield important information about protein density. Further image analysis, including cluster detection, is often very specific to distinct biological questions. We therefore refer to other sources for more information regarding SMLM postimaging analysis options, including cluster detection, segmentation, protein counting, and colocalization [19, 34, 35].
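A minimal MATLAB sketch of these two density measures follows, assuming xy is an N-by-2 array of molecular coordinates in nm. The placeholder random coordinates and the use of voronoin/polyarea (rather than voronoi) to obtain cell areas are assumptions; knnsearch and rangesearch require the Statistics and Machine Learning Toolbox.

% Minimal sketch of the local-density and Voronoi evaluations described above.
xy = 1000 * rand(5000, 2);                   % placeholder coordinates (nm)

% Local density: neighbors within 5x the mean nearest-neighbor distance
[~, d]  = knnsearch(xy, xy, 'K', 2);         % d(:,2) = nearest-neighbor distance
radius  = 5 * mean(d(:,2));
idx     = rangesearch(xy, xy, radius);
localDensity = cellfun(@numel, idx) - 1;     % exclude the point itself

% Voronoi cell areas (voronoin returns vertices V and one cell C{i} per point)
[V, C] = voronoin(xy);
area = nan(size(xy,1), 1);
for i = 1:numel(C)
    if all(C{i} ~= 1)                        % skip unbounded cells (vertex 1 = Inf)
        vx = V(C{i},1);  vy = V(C{i},2);
        k  = convhull(vx, vy);               % order the (convex) cell vertices
        area(i) = polyarea(vx(k), vy(k));
    end
end
scatter(xy(:,1), xy(:,2), 4, localDensity, 'filled'); colorbar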


Fig. 4 Data visualization and evaluation. (a) Rendered image from a SMLM acquisition of PSD95.FingR-mEos3.2. (b) Examples of different visualization and evaluation options. Rendering: localizations are converted to pixels by plotting them as Gaussians with integrated density 1 and the localization precision as standard deviation. Localizations: Plotting the centroids of the fluorophores. Local density: Each localization is color-coded for the number of localizations within a given radius. Voronoi: Boundaries can be drawn that assign each localization to their own area that includes all points closer to that localization than any other localization. This area reflects its density relative to the overall density. Scale bar: 2 μm, zoom: 200 nm

4 Notes

1. Dye selection for dSTORM. Alexa 647 is considered the best dye for dSTORM, but other dyes work as well, and new dyes suitable for dSTORM are constantly being developed, mostly in the far-red emission spectrum. Besides Alexa 647 and JF646, another dye that works well in our hands is CF568. CF568 is a bit more difficult to get into the dark state (see also Note 12), and is thus not advised for very dense protein structures or proteins with high expression levels. Note that imaging CF568 requires a different laser for excitation than described in the protocol (where Alexa 647 and JF646 are used).

2. Choice of labeling method. In our experience, protein abundance and the availability of specific antibodies are the main challenges that impact the quality of labeling, and thus the choice of method. For example, PALM on endogenous proteins is only feasible for medium to highly expressed proteins. For STORM, endogenous tagging of a protein with HaloTag can be used, but with the note that the dye to protein ratio is


much lower than labeling with antibodies. Alternatively, and not described here, proteins can be tagged with other fluorescent proteins like GFP or small epitope tags like HA, FLAG, and ALFA-tag, which can be subsequently labeled with organic dyes using antibodies or nanobodies, significantly amplifying the signal and making it possible to perform dSTORM.

3. Pixel size. The optimal pixel size for a given experiment depends on the number of expected photons and background. Usually, a pixel size in the range of 100–160 nm is used, based on the point spread function [2].

4. Optimal DIV for transfection of hippocampal rat neurons. For genomic tagging of a gene (coding for a protein of interest) we advise to transfect at a relatively young age (DIV 2–5). As neurons mature, the transfection efficiency drops quite significantly. Also, and especially for proteins with a low turnover, a longer window between transfection and imaging allows for more of the protein pool to be replaced with the tagged version [22]. For exogenous expression of a fusion protein or intrabody, the optimal window between transfection and imaging day has to be optimized for individual constructs, but we usually use 3–7 days between transfection and imaging. If exogenous expression of a recombinant fusion protein is used, it is critical to make sure the level of overexpression does not alter the localization of the protein.

5. HaloTag Ligand labeling. HaloTag is a haloalkane dehalogenase enzyme which is designed to covalently bind to synthetic ligands (HaloLigand) [13] (Fig. 2a). The HaloTag can be coupled to the protein of interest, for example via genomic tagging (as used in this protocol), or through exogenous expression of HaloTag fusion proteins. The HaloLigand is commercially available conjugated to organic dyes. As the HaloTag–HaloLigand binding is enzymatic, live-cell labeling is preferred over labeling of already fixed samples. In addition to the protocol described here, always check the protocol of the supplier and adjust if needed. The HaloTag Ligand-JF646 used in this protocol is membrane permeable. Thus, both intracellular and extracellular HaloTag-fused proteins are labeled. The optimal length of labeling has to be determined experimentally. Extensive washing of the HaloLigand before fixation is not needed. Besides the potential harmful effects of washing on living cells, most fixatives do not react with the HaloLigand as it is not a protein. The washing steps after fixation will remove any unbound HaloLigand.

6. Type and length of fixation. A good fixation protocol is considered one of the most important steps of any superresolution imaging technique as it is key to the preservation of the cell's


ultrastructure. Sometimes, glutaraldehyde is used as a fixative in addition to, or to replace, PFA. When used, additional quenching steps (to reduce autofluorescence) using fresh NaBH4 are advised. Alternatively, fixation using ice-cold methanol is sometimes used, but this is not compatible with all antibodies and might negatively influence the ultrastructure, particularly of membrane-associated complexes, more than PFA. Therefore, we do not recommend this for SMLM. Although 10 min of PEM-PFA fixation is the standard, staining quality can benefit from optimizing the duration of fixation.

7. Blocking and immunolabeling. The protocol described here is a general protocol for immunolabeling that is used in our lab. Other blocking reagents like bovine serum albumin (BSA) can be used instead of NGS. Also, the duration and temperature of the antibody incubation step, as well as the antibody concentration, have to be optimized experimentally for each protein labeling.

8. Postfixation. Although not a must, postfixation allows for better preservation of the staining if imaging is not performed directly after labeling. Postfixed cells can be stored for several days in PBS at 4 °C.

9. Temperature control of the microscope. Preheating and controlling the temperature of the microscope system is also advised for fixed samples. During an acquisition, heat is produced which might cause some drift, which in our hands is reduced if the temperature is already stable at around 30 °C. Therefore, we also advise keeping the STORM buffer and glass slides at least at room temperature.

10. Exposure time, frame rate, and laser power. A longer exposure time/lower frame rate allows for more photons to be collected from single emission events, which positively influences the localization precision (how photon count, pixel size, and background enter the precision estimate is sketched after these notes). However, a long exposure time can also increase the chance of overlapping single molecules, and when performing PALM in live cells, the movement of molecules within the exposure time of a single frame reduces the localization precision due to motion blurring. Depending on the camera, imaging smaller ROIs can allow for a higher frame rate. Alternatively, higher laser power can be used, but this reduces the lifetime of an emission event.

11. Illumination angle. SMLM experiments are generally performed with near-TIRF illumination, or so-called oblique illumination. Oblique illumination can be achieved by changing the angle of illumination toward full TIRF. Adjust the angle so that in-focus fluorescence events are mostly retained, but out-of-focus events are not excited.

12. Induction of the dark state. In the presence of the reducing STORM buffer, high laser power will turn molecules into the


dark state. The most common issue faced is the inability to reduce the number of blinks per frame, causing individual emission events to overlap. Using a higher concentration of MEA in the STORM buffer can help. Alternatively, β-mercaptoethanol (BME) is sometimes used instead of MEA, depending on the organic dye used for imaging. In our hands, using a few short pulses of the 488 nm or 561 nm laser can help to bring more far-red emitting dyes to the dark state, but with the risk of irreversible photobleaching.
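Notes 3 and 10 both come back to how pixel size, photon count, and background set the localization precision. The Thompson–Larson–Webb estimate cited above (ref. [2]) ties these quantities together; the following minimal MATLAB sketch evaluates it for a few illustrative photon counts. The PSF width, background level, and photon numbers are assumed values, not measurements from this protocol.

% Minimal sketch of the Thompson-Larson-Webb precision estimate (ref. 2),
% relating photon count, pixel size, and background (see Notes 3 and 10).
% Numbers are illustrative.
s = 150;                 % PSF standard deviation (nm), assumed
a = 117;                 % effective pixel size (nm), as in Subheading 2.5
b = 2;                   % background noise (photons per pixel), assumed
N = [250 500 1000 5000]; % detected photons per emission event
sigma2 = s^2 ./ N + a^2/12 ./ N + 8*pi*s^4*b^2 ./ (a^2 * N.^2);
fprintf('%6d photons -> ~%.1f nm precision\n', [N; sqrt(sigma2)]);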

Acknowledgments This work was supported by the Netherlands Organization for Scientific Research (ALW-VIDI 171.029 to H.D.M.) and the European Research Council (ERC-StG 716011 to H.D.M.). References 1. Vangindertael J, Camacho R, Sempels W, Mizuno H, Dedecker P, Janssen KPF (2018) An introduction to optical super-resolution microscopy for the adventurous biologist. Methods Appl Fluoresc 6(2):022003. https://doi.org/10.1088/2050-6120/aaae0c 2. Thompson RE, Larson DR, Webb WW (2002) Precise nanometer localization analysis for individual fluorescent probes. Biophys J 82(5): 2775–2783. https://doi.org/10.1016/ S0006-3495(02)75618-X 3. Gould TJ, Verkhusha VV, Hess ST (2009) Imaging biological structures with fluorescence photoactivation localization microscopy. Nat Protoc 4(3):291–308. https://doi.org/ 10.1038/nprot.2008.246 4. Betzig E, Patterson GH, Sougrat R, Lindwasser OW, Olenych S, Bonifacino JS, Davidson MW, Lippincott-Schwartz J, Hess HF (2006) Imaging intracellular fluorescent proteins at nanometer resolution. Science 313(5793): 1642–1645. https://doi.org/10.1126/sci ence.1127344 5. Hess ST, Girirajan TP, Mason MD (2006) Ultra-high resolution imaging by fluorescence photoactivation localization microscopy. Biophys J 91(11):4258–4272. https://doi.org/ 10.1529/biophysj.106.091116 6. Rust MJ, Bates M, Zhuang X (2006) Subdiffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat Methods 3(10):793–795. https://doi.org/ 10.1038/nmeth929

7. Heilemann M, van de Linde S, Schuttpelz M, Kasper R, Seefeldt B, Mukherjee A, Tinnefeld P, Sauer M (2008) Subdiffractionresolution fluorescence imaging with conventional fluorescent probes. Angew Chem Int Ed Engl 47(33):6172–6176. https://doi.org/10. 1002/anie.200802376 8. Jungmann R, Avendano MS, Woehrstein JB, Dai M, Shih WM, Yin P (2014) Multiplexed 3D cellular super-resolution imaging with DNA-PAINT and exchange-PAINT. Nat Methods 11(3):313–318. https://doi.org/ 10.1038/nmeth.2835 9. Balzarotti F, Eilers Y, Gwosch KC, Gynna AH, Westphal V, Stefani FD, Elf J, Hell SW (2017) Nanometer resolution imaging and tracking of fluorescent molecules with minimal photon fluxes. Science 355(6325):606–612. https:// doi.org/10.1126/science.aak9913 10. Li H, Vaughan JC (2018) Switchable fluorophores for single-molecule localization microscopy. Chem Rev 118(18):9412–9454. https:// doi.org/10.1021/acs.chemrev.7b00767 11. Dempsey GT, Vaughan JC, Chen KH, Bates M, Zhuang X (2011) Evaluation of fluorophores for optimal performance in localization-based super-resolution imaging. Nat Methods 8(12):1027–1036. https://doi. org/10.1038/nmeth.1768 12. Samanta S, Gong W, Li W, Sharma A, Shim I, Zhang W, Das P, Pan W, Liu L, Yang Z, Qu J, Kim JS (2019) Organic fluorescent probes for stochastic optical reconstruction microscopy (STORM): recent highlights and future

possibilities. Coord Chem Rev 380:17–34. https://doi.org/10.1016/j.ccr.2018.08.006 13. Los GV, Encell LP, McDougall MG, Hartzell DD, Karassina N, Zimprich C, Wood MG, Learish R, Ohana RF, Urh M, Simpson D, Mendez J, Zimmerman K, Otto P, Vidugiris G, Zhu J, Darzins A, Klaubert DH, Bulleit RF, Wood KV (2008) HaloTag: a novel protein labeling technology for cell imaging and protein analysis. ACS Chem Biol 3(6):373–382. https://doi.org/10.1021/cb800025k 14. Keppler A, Gendreizig S, Gronemeyer T, Pick H, Vogel H, Johnsson K (2003) A general method for the covalent labeling of fusion proteins with small molecules in vivo. Nat Biotechnol 21(1):86–89. https://doi.org/10.1038/nbt765 15. Gautier A, Juillerat A, Heinis C, Correa IR Jr, Kindermann M, Beaufils F, Johnsson K (2008) An engineered protein tag for multiprotein labeling in living cells. Chem Biol 15(2):128–136. https://doi.org/10.1016/j.chembiol.2008.01.007 16. Jacquemet G, Carisey AF, Hamidi H, Henriques R, Leterrier C (2020) The cell biologist's guide to super-resolution microscopy. J Cell Sci 133(11). https://doi.org/10.1242/jcs.240713 17. Schermelleh L, Ferrand A, Huser T, Eggeling C, Sauer M, Biehlmaier O, Drummen GPC (2019) Super-resolution microscopy demystified. Nat Cell Biol 21(1):72–84. https://doi.org/10.1038/s41556-018-0251-8 18. Wait EC, Reiche MA, Chew TL (2020) Hypothesis-driven quantitative fluorescence microscopy - the importance of reverse-thinking in experimental design. J Cell Sci 133(21). https://doi.org/10.1242/jcs.250027 19. Wu YL, Tschanz A, Krupnik L, Ries J (2020) Quantitative data analysis in single-molecule localization microscopy. Trends Cell Biol 30(11):837–851. https://doi.org/10.1016/j.tcb.2020.07.005 20. Jimenez A, Friedl K, Leterrier C (2020) About samples, giving examples: optimized single molecule localization microscopy. Methods 174:100–114. https://doi.org/10.1016/j.ymeth.2019.05.008 21. Yang X, Specht CG (2020) Practical guidelines for two-color SMLM of synaptic proteins in cultured neurons. In: Yamamoto N, Okada Y (eds) Single molecule microscopy in neurobiology. Neuromethods. Humana, New York, NY. https://doi.org/10.1007/978-1-0716-0532-5_9

287

22. Willems J, de Jong APH, Scheefhals N, Mertens E, Catsburg LAE, Poorthuis RB, de Winter F, Verhaagen J, Meye FJ, MacGillavry HD (2020) ORANGE: a CRISPR/Cas9based genome editing toolbox for epitope tagging of endogenous proteins in neurons. PLoS Biol 18(4):e3000665. https://doi.org/10. 1371/journal.pbio.3000665 23. Gross GG, Junge JA, Mora RJ, Kwon HB, Olson CA, Takahashi TT, Liman ER, EllisDavies GC, McGee AW, Sabatini BL, Roberts RW, Arnold DB (2013) Recombinant probes for visualizing endogenous synaptic proteins in living neurons. Neuron 78(6):971–985. https://doi.org/10.1016/j.neuron.2013. 04.017 24. Babcock H, Sigal YM, Zhuang X (2012) A high-density 3D localization algorithm for stochastic optical reconstruction microscopy. Opt Nanoscopy 1(6). https://doi.org/10.1186/ 2192-2853-1-6 25. Schnitzbauer J, Strauss MT, Schlichthaerle T, Schueder F, Jungmann R (2017) Superresolution microscopy with DNA-PAINT. Nat Protoc 12(6):1198–1228. https://doi.org/ 10.1038/nprot.2017.024 26. Ovesny M, Krizek P, Borkovec J, Svindrych Z, Hagen GM (2014) ThunderSTORM: a comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging. Bioinformatics 30(16):2389–2390. https://doi.org/10.1093/bioinformatics/ btu202 27. Chazeau A, Katrukha EA, Hoogenraad CC, Kapitein LC (2016) Studying neuronal microtubule organization and microtubuleassociated proteins using single molecule localization microscopy. Methods Cell Biol 131: 127–149. https://doi.org/10.1016/bs.mcb. 2015.06.017 28. Aristov A, Lelandais B, Rensen E, Zimmer C (2018) ZOLA-3D allows flexible 3D localization microscopy over an adjustable axial range. Nat Commun 9(1):2409. https://doi.org/10. 1038/s41467-018-04709-4 29. Li Y, Mund M, Hoess P, Deschamps J, Matti U, Nijmeijer B, Sabinina VJ, Ellenberg J, Schoen I, Ries J (2018) Realtime 3D single-molecule localization using experimental point spread functions. Nat Methods 15(5):367–369. https://doi.org/ 10.1038/nmeth.4661 30. Ries J (2020) SMAP: a modular superresolution microscopy analysis platform for SMLM data. Nat Methods 17(9):870–872. https://doi.org/10.1038/s41592-0200938-1


31. Speiser A, Muller L-R, Matti U, Obara CJ, Legant WR, Kreshuk A, Macke JH, Ries J, Turaga SC (2020) Deep learning enables fast and dense single-molecule localization with high accuracy. BioRxiv. https://doi.org/10.1101/ 2020.10.26.355164 32. Mund M, Ries J (2020) How good are my data? Reference standards in superresolution microscopy. Mol Biol Cell 31(19): 2093–2096. https://doi.org/10.1091/mbc. E19-04-0189 33. MacGillavry HD, Song Y, Raghavachari S, Blanpied TA (2013) Nanoscale scaffolding domains within the postsynaptic density concentrate synaptic AMPA receptors. Neuron

78(4):615–622. https://doi.org/10.1016/j. neuron.2013.03.009 34. Khater IM, Nabi IR, Hamarneh G (2020) A review of super-resolution single-molecule localization microscopy cluster analysis and quantification methods. Patterns (N Y) 1(3): 100038. https://doi.org/10.1016/j.patter. 2020.100038 35. Baddeley D, Bewersdorf J (2018) Biological insight from super-resolution microscopy: what we can learn from localization-based images. Annu Rev Biochem 87:965–989. https://doi.org/10.1146/annurev-biochem060815-014801

Chapter 17

Measuring the Lateral Diffusion of Plasma Membrane Receptors Using Raster Image Correlation Spectroscopy

Sara Makaremi and Jose Moran-Mirabal

Abstract

Raster image correlation spectroscopy (RICS) enables detecting and quantifying diffusion in live cells using standard commercial laser scanning confocal microscopes. Here, we describe a protocol based on RICS for measuring the lateral diffusion of two immunoreceptors within the plasma membrane of the macrophage cell line RAW 264.7. The sample images and measurements presented in this chapter were obtained from RICS analysis of Toll-like receptor 2 (TLR2) and cluster of differentiation 14 (CD14), which are transmembrane and membrane-anchored receptors, respectively. A step-by-step guideline is provided to acquire raster-scanned images and to extract the diffusion coefficients using RICS analysis.

Key words CD14, TLR2, RAW 264.7, Macrophage, Immune receptor, Receptor mobility

1 Introduction

The diffusion of biological species involved in cellular processes can be visualized and quantified using a versatile toolbox of fluorescence microscopy techniques, such as fluorescence recovery after photobleaching [1], fluorescence correlation spectroscopy [2, 3], image correlation spectroscopy [4], and single particle tracking [5]. In the past two decades, raster image correlation spectroscopy (RICS) has been frequently used to measure the diffusion coefficient of proteins in live cells [6–10]. The principles behind RICS can be found in detail in the seminal papers by Gratton and Digman [11, 12]. In brief, RICS measures the intensity fluctuations caused by the movement of fluorescent molecules. Since RICS is performed using confocal laser-scanning microscopy (CLSM), the intensities of adjacent pixels along the scan line are captured one after the other while the laser illumination dwells at each pixel for a very brief period of time (typically on the order of microseconds). Therefore, there is an inherent time-dependent structure in each image frame, and the intensities of pixels contained in one frame can be correlated to determine the characteristic correlation decay times corresponding to various dynamic processes, such as the diffusion or association of fluorescent species. The spatial correlation depends on the diffusion coefficient, the pixel dwell time, and the pixel size. In this chapter, we detail a procedure based on RICS that we have used [6] to measure the diffusion coefficient of two immunoreceptors, TLR2 and CD14, which diffuse laterally within the plasma membrane of macrophages. It is recommended that the experimental procedure explained here be used after reviewing the theory provided in the original protocols for RICS [13].
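The quantity that RICS ultimately fits is the 2D spatial autocorrelation of the intensity fluctuations in each raster-scanned frame. The model fitting itself is done in SimFCS as described below; the following minimal MATLAB sketch only computes that autocorrelation for a single frame. The file name is a placeholder, and the simple normalization ignores the moving-average detrending that is applied in practice to remove immobile structures.

% Minimal sketch: 2D spatial autocorrelation of a single raster-scanned
% frame (the quantity RICS fits). I is a 2D image matrix of photon counts.
I  = double(imread('frame.tif'));          % hypothetical file name
dI = I - mean(I(:));                       % intensity fluctuations
G  = fftshift(ifft2(abs(fft2(dI)).^2));    % autocorrelation via Wiener-Khinchin
G  = real(G) / (numel(I) * mean(I(:))^2);  % normalize to the mean intensity squared
imagesc(G); axis image; colorbar
title('Spatial autocorrelation G(\xi,\psi)')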

2 Materials

All experiments with cells should be performed in a clean, sterile biosafety cabinet (BSC level 2), and all containers coming in and out of BSC should be thoroughly sprayed and/or wiped down with 70% ethanol for sterility.

2.1 Cell Culture

1. RAW 264.7 macrophage cell line (TIB-71; ATCC, Manassas, VA).
2. RPMI-1640 (or DMEM) supplemented with 10% fetal bovine serum (FBS), 1% L-glutamine, and 1% penicillin–streptomycin.
3. Phosphate buffered saline (1× PBS) autoclaved for sterility.
4. Trypsin–EDTA.
5. Cell lifter.
6. Surface-treated sterile tissue culture flask (Falcon 75 cm2 rectangular canted neck with vented cap).
7. Glass Bottom dish (35 mm dish, 20 mm Microwell, No. 1.5 cover glass, 0.16–0.19 mm thickness, MatTek, Ashland, MA).
8. Hemocytometer and trypan blue for cell counting.
9. Water bath (37 °C).
10. Level 2 biosafety cabinet.
11. 37 °C, 5% CO2 incubator.
12. Light microscope.
13. Centrifuge.

2.2 Fluorescence Labeling of Receptors

1. RPMI-1640 (or DMEM) without phenol red supplemented with 10% FBS.
2. Mouse anti-TLR2/CD282 antibody labeled with Alexa Fluor 647 (0.2 mg/mL, BD Biosciences, Mississauga, ON, Canada).
3. Mouse anti-CD14 antibody labeled with APC (excitation: 633–647 nm, emission: 660 nm, 0.2 mg/mL, eBioscience, San Diego, CA, USA).


4. 1× PBS.
5. 4 °C fridge.
6. Sterile syringe filter, 0.2 μm pore size.
7. Sterile syringe (10 mL) with Luer-lock tip.
8. Vortex mixer.
9. L-Ascorbic acid (powder form, MW: 176.12 g/mol).
10. Ascorbic Acid Stock Solution: Prepare a 100 mM stock solution by dissolving 0.176 g ascorbic acid in 10 mL PBS. Filter the solution using a syringe filter with 0.2 μm pore size for sterility.

2.3 Calibration of Focal Volume Waist (ω0)

1. Cy5-dUTP Stock Solution: Cy5-labeled dUTP (Aminoallyl-dUTP-Cy®5 triethylammonium salt solution, 95.0% HPLC, Sigma-Aldrich, Oakville, ON, Canada) is dissolved in 18.2 MΩ cm water to a concentration of 10 mM. The Cy5-dUTP solution can be aliquoted and stored protected from light at 14 °C until needed.
2. Glass Bottom dish (should be identical to the imaging dish used for cells, Subheading 2.1).
3. 10 mM TRIS-HCl (pH 7.5) buffer.
4. Bovine serum albumin (BSA) powder.
5. Blocking Solution: 10 mg/mL BSA solution in 10 mM Tris–HCl (pH 7.5).

2.4 Microscopy and RICS Analysis

1. Laser scanning confocal microscope (inverted Nikon Microscope ECLIPSE LV100ND, Nikon Canada Inc., Mississauga, ON, Canada) in combination with the imaging software (NIS-Elements AR 4.30.02, Nikon) for imaging and data acquisition.
2. 60×/1.40 NA Plan Apo λ (Nikon) oil-immersion objective (see Note 1).
3. Stage top incubator (37 °C) in combination with an objective heating collar (TOKAI HIT, INUBG2ATW-TIZW) for live cell imaging.
4. ImageJ software (NIH, Bethesda, MD).
5. SimFCS 2.0 software (available for free at www.lfd.uci.edu, provided by the Laboratory for Fluorescence Dynamics at the University of California Irvine, Irvine, CA).

3 Methods

3.1 RAW 264.7 Cell Culture Protocol

1. Culture RAW 264.7 cells in a 75 cm2 flask with 10 mL of supplemented RPMI until 60–75% confluency is reached (see Note 2).

2. When 60–75% confluency is reached, remove the RPMI by tilting the flask and aspirating the media collected in the corner of the flask. Then add 10 mL of warm (37 °C) PBS and, after a gentle wash through rocking by hand, aspirate the PBS.

3. Add 5 mL trypsin–EDTA to the flask and incubate (37 °C) for 5–10 min. RAW 264.7 cells adhere well to tissue culture flasks. Use a light microscope to check if the cells are still adhered to the flask. By focusing on the flask surface, the adhered cells can be clearly identified as being immobile, while the dislodged cells will appear to be floating in the media. The incubation time should not be extended, even if the cells are not dislodged, because this risks the viability of the cells.

4. Add 5 mL of supplemented RPMI to the flask to dilute the trypsin and gently use the cell lifter to scrape the cells off the flask. Collect the cell suspension from the flask (see Note 3).

5. Centrifuge the cell suspension for 5 min (~515 × g). Remove the supernatant and resuspend the cell pellet in 10 mL of fresh, prewarmed (37 °C) RPMI.

6. Seed the cells into a new flask at a 1:2 or 1:3 dilution. Higher dilutions (up to 1:5) should be used when the cells do not need to be passaged for 3–4 days. Use trypan blue and the hemocytometer to stain and count the cells if necessary (a worked counting-and-seeding calculation is sketched after this list). The greater the dilution, the longer it takes for the cells to reach confluency (see Note 4).

7. Incubate the cells at 37 °C. Monitor cell growth and refresh the media when the red RPMI gradually changes to orange.

8. Twenty-four hours before imaging, repeat steps 2–5 and plate the cells at a density of 100,000 per Glass Bottom dish, filled with 2 mL supplemented RPMI. Immediately after adding the cells, swirl the dish around to make sure the cells are spread evenly. Incubate the cells at 37 °C for 24 h.

9. After 24 h, use the light microscope to check if the cells have adhered to the glass.
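If the hemocytometer count of step 6 is used to guide the seeding in step 8, the arithmetic is as follows. The example counts, the trypan blue dilution factor, and the standard 10^4 hemocytometer conversion factor are assumptions used only for illustration.

% Minimal sketch: from a hemocytometer count (step 6) to the volume of cell
% suspension needed to seed 100,000 cells per dish (step 8).
squares       = [112 98 105 101];          % live cells per large square (example)
dilution      = 2;                          % 1:1 dilution with trypan blue (assumed)
cellsPerML    = mean(squares) * 1e4 * dilution;
targetCells   = 1e5;                        % cells per glass-bottom dish
volPerDish_uL = targetCells / cellsPerML * 1e3;
fprintf('%.2e cells/mL -> add %.0f uL of suspension per dish.\n', ...
        cellsPerML, volPerDish_uL);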

3.2 Fluorescence Staining

1. Remove the cell culture medium and wash the sample three times with PBS. After the final wash, replace the PBS with RPMI (1 mL) without phenol red supplemented with 10% FBS (see Note 5). 2. Add 1 μL of mouse anti-TLR2/CD282 antibody labeled with Alexa Fluor 647 (0.2 mg/mL) or 1 μL of mouse anti-CD14


antibody labeled with APC (0.2 mg/mL) to the RPMI and cover the sample to prevent photobleaching by exposure to ambient light.

3. Immediately after staining, store the sample at 4 °C for 1 h to prevent receptor internalization.

4. Remove the staining medium and wash the sample three times with PBS. After the final wash, the PBS should be replaced with RPMI (2 mL) without phenol red supplemented with 10% FBS.

5. Add 50 μL of the Ascorbic Acid Stock Solution to the 2 mL RPMI in the imaging dish to achieve a 2.5 mM final concentration (a quick dilution check is sketched after this list). Ascorbic acid acts as an oxygen scavenger that aids in preventing photobleaching and thus improves the fluorophore's emissive properties. The ascorbic acid solution should be prepared fresh before the imaging is performed.
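The final ascorbic acid concentration in step 5 follows from C1·V1 = C2·(V0 + V1); this minimal MATLAB check (values taken from the step above) shows the result is approximately 2.4 mM, i.e., the ~2.5 mM target.

% Minimal sketch: dilution check for step 5 (ascorbic acid).
C1 = 100;            % stock concentration (mM)
V1 = 0.050;          % volume of stock added (mL)
V0 = 2.0;            % RPMI already in the dish (mL)
C2 = C1 * V1 / (V0 + V1);
fprintf('Final ascorbic acid concentration: %.2f mM (target ~2.5 mM)\n', C2);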

3.3 Sample Preparation for Calibration of Focal Volume Waist (ω0)

1. Add 1–2 mL of the Blocking Solution to the Glass Bottom dish to cover the imaging glass and incubate for 1–4 h at room temperature, or 24 h at 4 °C.

2. Dilute the Cy5-dUTP Stock Solution to 30 nM in 10 mM TRIS-HCl. This solution should be made fresh every time prior to experimentation.

3. Remove the Blocking Solution and wash 3 times with 1× PBS. Immediately after the final wash, add 500 μL of the Cy5-dUTP solution to the imaging glass and cover the dish to prevent photobleaching.

3.4 Microscopy

A detailed protocol to perform RICS experiments is provided by Gratton and coworkers [13]. Here, we explain how RICS can be used to measure the diffusion coefficient of TLR2 and CD14 in the plasma membrane of macrophages adhered to a glass-bottom dish.

1. Set up the microscope according to the manufacturer's guidelines. Select the appropriate optical paths, laser line, dichroic, and band-pass filters using the microscope control software. For live cell imaging, make sure the stage top incubator is at 37 °C.

2. Select the 60×/1.40 NA Plan Apo λ oil-immersion objective (or the highest NA from the objectives available in the microscope). Place an oil drop directly on the objective lens, then place the glass-bottom dish on the microscope stage and adjust the focus to find the approximate focal plane through the specimen. For imaging Cy5-dUTP in solution, focus the laser in the bulk of the solution, slightly above the glass surface. For imaging cells, use the eyepieces and the appropriate optical path for brightfield illumination to locate the cells and adjust


the focus so that the cell shape can be seen clearly. Once a cell is located, scan the field of view with confocal laser scanning with a ~10–15 μs pixel dwell time and adjust the focus.

3. Select the 647 nm excitation laser line and attenuate the power to ~

250 nm in plane) obtained by conventional fluorescence


microscopy methods, which are limited by resolution as described by the Abbe equation [7]. Super-resolution fluorescence microscopy breaks the diffraction barrier by two main strategies: either through spatial patterning of the excitation illumination (as in SIM [8] and stimulated emission depletion (STED) [9]/reversible saturable optical fluorescence transitions (RESOLFT) [10]) or by detecting single molecules separated in space and time to enable high-resolution localization (as in (fluorescence) photoactivated localization microscopy ((F)PALM) [11, 12], stochastic optical reconstruction microscopy (STORM) [13], and DNA-based point accumulation for imaging in nanoscale topography (DNA-PAINT) [14, 15]). We refer to recent reviews for a detailed discussion of the principles of these technologies, newer implementations and applications [4, 5, 16, 17]. This chapter will focus on the experimental workflow of two complementary superresolution modalities: 3D-SIM [18], which is used for fast, high-throughput multicolor imaging to map individual components and identify reference markers [19, 20], and STORM, which is used to acquire the highest resolution information [21, 22]. In our experience, most applications benefit from a dual imaging approach [19, 23, 24], which is facilitated by the coexistence of both microscopy modalities in commercial platforms. 3D-SIM is a wide-field microscope-based technique that reaches higher resolution by generating a spatially structured light illumination on the sample plane [18]. The purpose of the light modulation is to create a three-dimensional interference pattern on the sample—a phenomenon named the moiré pattern—which shifts high-frequency fluorescence information not detectable by the microscope to the observable range of frequencies [16, 18, 25]. 3D-SIM requires straightforward sample preparation, similar to confocal/widefield imaging, and allows easy implementation of multicolor imaging experiments, but provides an increase in resolution by only a factor of two in its most common commercial implementation (~125 nm laterally and ~250 nm axially). The recent implementation of lattice illumination strategies [26] and a novel iterative reconstruction algorithm (SIM2) has, however, pushed this technology forward, allowing live super-resolution imaging at ~255 frames per second (fps) and with up to 60 nm in-plane resolution (~200 nm axially) (Fig. 1) on commercially available platforms [27]. In contrast, single molecule localization techniques such as STORM and (F)PALM break the diffraction barrier by detecting stochastically activated, diffraction-limited single fluorophores, which are first well separated in space and time, then localized with high precision by Gaussian fitting of the fluorophore intensity distribution [28]. These methods reach higher resolution (routinely ~25 nm laterally and ~50 nm axially by commercial setups),


Fig. 1 Structured Illumination microscopy. 2D maximum intensity projections of 3D volume micrographs of nasal primary airway epithelial cells labeled with rabbit anti-RSPH4A (red) and mouse anti-alpha-tubulin (green) antibodies, recognized with Alexa 488 and 555 labelled secondaries, respectively. Cells were imaged with 3D-SIM (Zeiss Elyra 7) and reconstructed with SIM^2

but require careful control of fluorophore photoactivation and density, and some extra effort in terms of sample preparation [16, 28]. STED/RESOLFT super-resolution imaging can provide an alternative to the dual SIM-STORM approach, combining both throughput and high resolution [5]. Interestingly, some of the strengths of STED and STORM imaging have also recently been combined in a novel experimental setup named MINFLUX, achieving nanometer scale resolution (1–3 nm) [29]. An alternative strategy to overcome the diffraction barrier is expansion microscopy, a methodology that increases resolution by sample expansion instead of optical methods, and therefore can be integrated with super-resolution fluorescence imaging to increase the resolution further. This methodology has been described recently in detail elsewhere [30]. In this chapter, we describe a SIM-STORM integrated approach to map components of the centrosome and cilia in human cells. Centrosomes and cilia are cellular organelles with ultrastructures well characterized by electron microscopy and sizes right below the diffraction limit (centrosome: 0.3–0.5 μm (length), ~0.2 μm (diameter); cilia: 80–200 nm (diameter)), which have historically been the objects of many super-resolution microscopy studies [19, 31, 32], therefore providing optimal examples to present as a proof of principle [33].

2 Materials

2.1 Coverslips

1. #1.5, 0.16–0.19 mm, 25 mm diameter round coverslips (see Notes 1, 2).
2. #1.5, 0.16–0.19 mm, 12 mm diameter round coverslips (see Notes 1, 2).
3. Coverslip mounting chamber: Chamlide CMB (CM-B25-1, Live Cell Instrument).
4. Coverslip-bottomed petri dish (FD35-100, Coherent Scientific).

2.2 Fixation

The protocols for fixation and permeabilization vary depending on the components to image; the user can refer to ref. [33] for representative protocols used in centrosome and cilia biology. Extreme care should be taken in finding the best conditions to minimize background and preserve ultrastructure. When available, electron microscopy images of high-pressure frozen and freeze-substituted material or images from ice-frozen samples from cryo-EM—the gold standards for cell architecture preservation—should be used to guide evaluation of super-resolution images [19, 34].

1. 4% paraformaldehyde (EM grade 15710, Electron Microscopy Sciences).
2. 0.1% glutaraldehyde (EM grade 619, Electron Microscopy Sciences).
3. −20 °C cold histological grade anhydrous methanol.

2.3 Sample Staining

1. 0.1% sodium borohydride (Merck, Sigma-Aldrich) in 1× PBS.
2. Washing buffer: 1× PBS (pH 7.4).
3. Blocking buffer: 3% BSA in 1× PBS with 0.2% Triton X-100 for PFA fixation; 3% BSA in PBS with 0.05% Tween for methanol fixation.
4. Secondary antibodies: goat anti-rabbit Alexa 488-conjugated IgGs, goat anti-mouse Alexa 555-conjugated IgGs, and goat anti-rabbit Alexa 647-conjugated F(ab')2.

2.4 Sample Imaging

1. 3D-SIM mounting medium: 5% n-propyl gallate in 80% glycerol (refractive index 1.451).
2. STORM imaging buffer: 50 mM Tris–HCl (pH 8.0), 10 mM NaCl, and 10% glucose (a preparation calculation is sketched after this list).
3. STORM oxygen-depleting system: 2.5 mM protocatechuic acid (PCA, Sigma-Aldrich 37580) and 50 nM protocatechuic dioxygenase (PCD, Sigma-Aldrich P8279); another commonly used oxygen scavenger is 60 mg/mL glucose oxidase and 6 mg/mL catalase.


4. STORM reducing reagent: 10 mM mercaptoethylamine (MEA) and 50 mM β-mercaptoethanol (β-ME). 5. STORM photon boosting reagent: 2 mM Cyclooctatetraene (COT, Sigma Aldrich 138924). 6. Fiducial markers: 100 nm Tetraspeck fluorescent beads (Thermo Fisher Scientific, T7279).
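The composition of the STORM imaging buffer in item 2 translates into masses as follows; the chosen volume is arbitrary, the molar masses of Tris base (121.14 g/mol) and NaCl (58.44 g/mol) are standard values, and in practice the Tris is usually added from a pH-adjusted concentrated stock, so treat this MATLAB sketch only as an illustration of the arithmetic.

% Minimal sketch: masses for the STORM imaging buffer of item 2
% (50 mM Tris-HCl pH 8.0, 10 mM NaCl, 10% (w/v) glucose).
V_L       = 0.010;                             % 10 mL of buffer (arbitrary)
tris_g    = 0.050 * 121.14 * V_L;              % Tris base, 121.14 g/mol
nacl_g    = 0.010 * 58.44  * V_L;              % NaCl, 58.44 g/mol
glucose_g = 0.10  * V_L * 1000;                % 10% w/v = 0.1 g per mL
fprintf('For %.0f mL: %.3f g Tris, %.4f g NaCl, %.1f g glucose\n', ...
        V_L*1e3, tris_g, nacl_g, glucose_g);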

3 Methods

3.1 Experimental Considerations

3.1.1 Imaging Depth of the Structure of Interest

When planning a super-resolution imaging experiment, a critical factor to consider is the distance of the structure of interest from the objective. As a general rule, the closer the structure to image is to the objective the better, as less scattering and fewer optical aberrations are introduced. In practice, most 3D-SIM commercial platforms allow for imaging of structures at a depth up to ~10–15 μm from the coverslip; at increasing depth, the contrast of the illumination pattern is reduced, limiting the acquisition of high-frequency information and in turn reconstruction of superresolution images [26, 35]. The most recent commercial systems, however, include lattice SIM illumination strategies, which do not suffer from a reduction of the structured illumination pattern contrast, thereby allowing imaging to depths up to ~100 μm [26, 35, 36]. Single molecule imaging platforms are instead more sensitive to the distance from the objective, as the number of photons collected from a single fluorophore is reduced progressively away from the objective, allowing for effective nanometer scale imaging only a few micrometres away from the coverslip.

3.1.2 Selection of Reference Marker(s)

To build a nanometer scale map of a cellular structure or organelle, it is important to first identify suitable reference markers, which allow to orient the structure of interest in the cell (e.g., end-on, tilted or side view) and to provide a molecular reference for measuring the distance of unknown components (Fig. 2a). Reference markers should preferably be well-characterized structural components, which are known to be stably associated with the organelle/protein assembly of interest in a certain cellular state (cell cycle resting state, etc.) [19]. It is advisable to select at least two reference markers, one that is used to gather relative measurements in the lateral, in-plane direction of the structure and another in the axial direction (Fig. 2a). In our previous studies aimed at determining the organization of protein assemblies in centrosome and cilia, we have used for example a protein associated with the PCM (CEP152) and one with the distal region of the centriole (CENTRIN) to map the position of


Fig. 2 Selection of reference markers. Figure adapted from Nguyen et al. Developmental Cell, 2020 and Sydor et al. Elife, 2016 with permission from Elsevier and eLife science publications. (a) Cartoon depiction of the strategy for mapping proteins of interest (green) relative to reference markers (red) on centrioles or basal bodies, the base of cilia. The radial distance, or distance from the centriolar/basal body center, is calculated either from end-on view by dividing each of the ring diameter measurements by two (left) or from side views by measuring the lateral distance of proteins positioned across centriole/basal body (middle). Axial distance is measured relative to the centriole/basal body proximal end (red). (b) Representative example of reference markers used to map the position of a novel centriolar protein (PPP1R35) on mother centrioles. 3D-SIM micrographs show U2OS cells expressing GFP-PPP1R35 and co-stained with either CEP152 to label the proximal end of the mother centriole or CENTRIN to label the distal end

components along the longitudinal axis of the mother centriole (Fig. 2b) [37]. The position of a component within a certain structure can be more accurately achieved if several reference markers are used systematically. In the case of a previously uncharacterized human centrosomal protein, PPP1R35, we have collected hundreds of

[Fig. 3 panel data: distance from PPP1R35 (nm) for each marker — Centrin 230 ± 50, POC5 160 ± 50, POC1B 140 ± 60, CPAP 60 ± 30, CEP135 90 ± 40, SAS6 120 ± 40, CEP250 170 ± 50; the panels are arranged from the distal to the proximal end of the centriole.]

Fig. 3 Super-resolution molecular mapping using multiple reference markers. Figure adapted from Sydor et al. eLife, 2016, with permission of eLife Sciences Publications. (a) The position of PPP1R35 was mapped in the mother centriole by measuring distances between the fluorescence maxima of PPP1R35 and various centriolar markers in the proximal and distal region on the same z-plane. The distances between fluorescence intensity maxima are depicted in a Tukey box-and-whiskers plot, with the whiskers representing data within an interquartile range of 1.5 and the square indicating the mean. (b) A cartoon depiction of the localization of PPP1R35 in the centriole relative to the markers, with the average distances and standard deviations noted

3D-SIM super-resolution images to measure the distance of PPP1R35 from several centriolar reference markers of the proximal (SAS6, CEP135, CPAP, CEP250) and distal (CETN1, POC1B, POC5) regions to locate its position along the centrosomal lateral and longitudinal axes (Fig. 3) [37]. Once a suitable reference marker is identified, it is good practice to label it in mapping experiments with a fluorophore always in the same wavelength, preferably chosen to be a longer wavelength (e.g., 555) than that of the component to measure (e.g., 488) [23]. This allows for maximizing the resolution of the unknown component to measure.

3.1.3 Labeling Strategy: Antibodies or Fluorescent Proteins

To make accurate measurements, it is important to reduce the absolute distance of the component to be measured from the fluorescent probe that we use in super-resolution imaging. To minimize this distance, a fluorescent protein expressed as a fusion to the protein/component of interest can be used, however this is not always the best choice for nanometer scale mapping when the fluorescent tag interferes with function, expression levels cannot be carefully controlled or molecule copy numbers are low. Fluorescent proteins are relatively small (5 nm) compared to antibodies (10 nm), but emit less photons, reducing the achievable resolution in STORM imaging. Alternative tags that could be coupled with fluorescent dyes such as HALO [38]/SNAP [39]/CLIP [40] are good alternatives to increase resolution, especially in cases when live cell STORM is required.


The use of a primary–secondary antibody complex for imaging is less desirable, especially in STORM, since the distance from the component to measure can reach up to 15–20 nm, thereby causing localization uncertainty. Moreover, a variable number of secondary antibodies can bind to a single primary antibody molecule, further increasing the size of the antibody–antigen complex and its variance [33, 41]. Therefore, if the antibody amount is not a limiting factor, it is advisable to conjugate the fluorophore of interest directly to the primary antibody, or alternatively to antigen-binding fragments (F(ab')2), single chain variable fragments (scFv), or nanobodies if available [33, 41].

3.2 3D-SIM Imaging

1. Sample preparation. Grow cells on a high precision glass coverslip and proceed with established fixation and labeling protocols for the proteins of interest. For preservation of centrosomal cytoskeletal structures, we recommend as a starting point rapid fixation in −20 °C cold methanol (stored with molecular sieves to reduce water content driven by humidity) or 4% PFA. As secondary antibodies, we have successfully used Alexa 488-, 555-, and 647- and DyLight 405-conjugated IgGs, but any antibodies linked to strongly emitting fluorophores compatible with the emission filters of the microscope platform used will suffice. Before sample mounting, equilibrate the mounting medium at RT for 15–30 min to allow precise pipetting of the viscous solution. Add 15–20 μL on the coverslip and mount on cover glass with the preferred sealant.

2. Before starting data collection, check the stability of the laser and other microscope hardware by performing a data acquisition and reconstruction with a 100 nm Tetraspeck fluorescent beads sample. This will also be useful to determine the parameters for correction of the chromatic aberration (see step 8). If the microscope is used very frequently, the hardware can be left continuously on, while lasers are turned off when not in use to increase lifetime.

3. Let the sample equilibrate at room temperature (RT) in the imaging chamber for 30 min (see Note 3). Note that the immersion oil is sensitive to temperature and should be kept in the imaging chamber if possible, or equilibrated with the sample before imaging, due to refractive index changes (see Note 4).

4. Define the exposure time of the individual channels so that at least a 5–10 (linear 3D-SIM system) or 3–5 (lattice SIM) signal-to-noise ratio over the background is achieved (where possible) in the area near the structure to image (Fig. 4a). Limit the exposure time to a minimum to reduce the chance of drift during image acquisition (see Notes 3, 5). This is particularly important because in SIM imaging multiple images (phases, rotations) need to be acquired per reconstruction of each


Fig. 4 Importance of signal to noise and selection of an appropriate Wiener filter value on 3D-SIM image reconstruction. Maximum intensity projection of a micrograph of a centrosome labeled with anti-Plp antibody imaged by 3D-SIM at different signal to noise (top) or reconstructed with different noise Wiener filter values (bottom)

individual Z plane. For fixed samples, check if there is any movement of structures during acquisition in images originating from different phases and rotations of a single z plane to identify sample drift.

5. Collect a Z stack including a few Z slices below and above the structure of interest. Older commercial microscopes allow acquisition of stacks of ~10 μm thickness, while newer systems using lattice SIM allow acquisition up to 100 μm (see Note 6).

6. Run a test reconstruction to check the quality of the data. Adjust the noise Wiener filter to maximize resolution by collecting high-frequency information. We routinely use noise filter levels of 4 to 6 for high-resolution, consistent reconstructions. See the example in Fig. 4b. Note that each commercial platform has its own nomenclature for Wiener filtering (see Note 7).

7. Once satisfied with the results, start collecting datasets, avoiding any disturbance in the room including changes of temperature, abrupt movements, etc. Reconstruct using identical reconstruction parameters to ensure similar resolution levels – these are generally detailed in the text file associated with the SIM reconstruction.


8. Correct for chromatic aberration by aligning the datasets in three dimensions using the calibration file obtained with the Tetraspeck fluorescent beads.

9. Export whole-volume images as TIFF files to be further analyzed in ImageJ/Fiji [42] or other commercially available software.

10. Analyze the position of the protein of interest relative to reference markers by using 3D-SIM images in which the fluorescence maxima of both the protein of interest and the corresponding reference protein lie on the same Z plane. The distance between the peak maxima of the two markers can be determined using the caliper function built into commercial platform software (a minimal Fiji alternative is sketched below).
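Where a caliper tool is not available, the same peak-to-peak measurement can be reproduced on the exported TIFF volumes. The ImageJ macro below is only a minimal sketch of such a measurement: it assumes a two-channel composite image is open in Fiji and that a straight-line selection has been drawn across both labeled structures on the chosen Z plane; channel numbers are assumptions and should be adapted.

```
// Minimal sketch: distance between the intensity maxima of two channels
// along a user-drawn line (assumes a 2-channel composite and a line ROI).
getPixelSize(unit, pixelWidth, pixelHeight);  // physical pixel size of the image
Stack.setChannel(1);
p1 = getProfile();                            // intensity profile of channel 1
Stack.setChannel(2);
p2 = getProfile();                            // intensity profile of channel 2
max1 = 0; max2 = 0;
for (i = 0; i < lengthOf(p1); i++) {
    if (p1[i] > p1[max1]) max1 = i;           // position of the channel 1 peak
    if (p2[i] > p2[max2]) max2 = i;           // position of the channel 2 peak
}
print("Peak-to-peak distance: " + abs(max1 - max2) * pixelWidth + " " + unit);
```

For noisy profiles, fitting a Gaussian to each peak, or averaging the measurement over several structures, gives a more robust estimate than the raw maximum.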

3.3 STORM Imaging

1. Prepare the cells to be imaged. Cells are grown on 25 mm diameter round coverslips and fitted into a Chamlide CMB chamber to facilitate buffer exchange/replacement during STORM imaging. Cells can also be cultured and imaged directly on a coverslip-bottomed petri dish. Alternatively, samples can be sandwiched between a clean coverslip and a slide, with imaging buffer filled in between, for short imaging sessions (within 1 h).

2. Stain the samples. Samples can be stained according to standard immunofluorescence staining protocols except for (1) the choice of specific secondary antibodies and (2) the application of fresh ~10 mM NaBH4 to quench cell autofluorescence after the chemical fixation step. STORM imaging requires dyes with high photon numbers and low duty cycles to achieve high spatial resolution (see Note 8). Commercially available secondary antibodies include goat anti-rabbit/mouse Alexa 647, goat anti-rabbit/mouse Alexa 555, and goat anti-rabbit/mouse Alexa 488. Commercial secondary antibodies usually carry multiple fluorophores per antibody, which is not ideal for structures with high density. Alternatively, secondary antibodies can be labeled with organic dyes in the lab to control the antibody/fluorophore ratio, to access more color channels (Cy3B or DyLight 750), or to enable dual-color imaging by using different activators, thereby avoiding differences in optical performance between channels. An additional post-fixation step can be used to stabilize the antigen-antibody interaction for long-term sample storage. Depending on the drift correction method, fiducial markers such as fluorescent beads can be added to the sample.

3. Stabilize and calibrate the microscope. It is generally recommended to turn on the microscope 30–60 min before the experiment to fully stabilize the optical platform and warm up the lasers. During this time, the laser intensity at the objective can be measured with a power meter to ensure the excitation laser power is within ~1–3 kW/cm2. The activation laser power can in general be much lower (10- to 1000-fold less). A Tetraspeck bead sample can be imaged first to obtain a Z calibration curve for 3D-STORM and to align the different chromatic channels in multicolor experiments. Laser power measurement, Z calibration, and channel alignment should be performed on a monthly basis.


4. Prepare the imaging buffer. The STORM imaging buffer contains an oxygen-depleting system to prevent fluorophore photobleaching. 10 mM MEA and/or 50 mM β-ME is added to promote the photoswitching of the dyes between the off and on states (see Notes 9 and 10). For single-channel Cy5/Alexa 647 imaging, β-ME is preferred; for dual-color imaging together with other fluorophores, MEA is recommended. Optionally, COT can be added to increase the photon output per switching cycle of certain dyes (see Notes 11 and 12).

5. Put the sample on the microscope stage and check through the eyepiece first to identify the cells of interest. Mark the positions to be imaged so that they can be located quickly later on. When everything is ready, change the storage buffer to imaging buffer (the STORM imaging buffer needs to be replaced every 1 h during imaging).

6. Switch the imaging path to the EMCCD/sCMOS camera, turn on the laser, and find the focus of the structure of interest under weak laser excitation at a low camera frame rate.

7. Select the light illumination mode. For structures of interest that are near the coverslip, TIRF illumination can be used (see Note 9).

Fig. 6 GAS8 distribution in multiciliated cells of a healthy control and of a patient with CCDC39 loss-of-function mutations (p.Thr277ArgfsStop3 / p.Arg599Stop). Figure adapted from Liu et al., Science Translational Medicine, 2020, with permission from the American Association for the Advancement of Science. (a) GAS8 localizes to motile cilia in a healthy control multiciliated cell (left) but is trapped at the transition fiber rather than the transition zone in patients with CCDC39 loss of function (right). (b) GAS8 forms 96-nanometer repeats along motile cilia in the healthy control. Left: comparison between STORM imaging and conventional fluorescence imaging. Insets in white boxes are high-magnification images of three ciliary regions showing the periodic distribution. Middle: high-magnification image of an intact cilium from the STORM image on the left. Right: histogram of the GAS8 96-nm repeat distribution in the left image, with a mean of 95.7 nm, a median of 95.8 nm, and an SD of 10.2 nm (n = 223). (c) Left: STORM imaging shows that GAS8 forms a nine-cluster ring structure at the transition fiber. Right: averaging of STORM images shows the nine-cluster ring structure


The STORM approach on patient cells suggests that CCDC39 in motile cilia is required for GAS8 localization to the axonemal region, providing mechanistic insight into the effect of loss of CCDC39 in patients affected by primary ciliary dyskinesia.

4 Notes

1. It is strongly suggested to use high-precision glass coverslips (thickness #1.5, 0.17 mm) for super-resolution microscopy experiments to minimize aberrations due to refractive index (RI) changes.

2. To minimize background fluorescence, it is recommended to preclean coverslips with a plasma cleaner apparatus, which is the best choice when available (H2O2 gas treatment for 2 min is sufficient). Alternatively, glass coverslips can be treated with 1 M KOH for 15 min in a sonicator bath, followed by extensive washes with Milli-Q water.

3. During SIM imaging, when the sample drifts or moves during a live experiment by a distance larger than the microscope resolution within a single stack acquisition [52, 53], it is common to observe the formation of a striped pattern and/or of a deformed object, due to the incorrect position assignment of the different side bands after reconstruction. The most common cause of drift is a change in temperature; therefore we suggest extensive equilibration of the sample in the imaging chamber. For a detailed theoretical analysis of the image artifacts commonly observed after SIM reconstruction, refer to [54].

4. Care should be taken in selecting an immersion oil with an appropriate refractive index for the sample to image, to avoid refractive index mismatch.

5. Light scattering due to local changes of refractive index in the sample causes a reduction of the image signal to noise and a lower contrast of the SIM illumination pattern, which negatively affect the SIM image reconstruction by reducing the effective resolution. This problem can be further exacerbated by a mismatch of the refractive index between the sample, the imaging medium, and the objective immersion material [35, 55]. For SIM imaging, we generally use a glycerol-based n-propyl gallate mounting medium, which has a refractive index of 1.451, close to the RI of the oil commonly used on objective lenses (1.518) [20]. It is important to note that some commercial mounting media can give suboptimal results because of their low refractive index or because of changes in their optical properties during the curing process. For exceptional results, it is suggested to match the RI of the sample by using oils of different refractive indexes available for purchase (Cargille). If a refractive index mismatch is present, it can be detected as an asymmetrical point spread function when looking at a bead sample in X/Z or Y/Z sections [55, 56].



6. The problem of low modulation contrast of the diffraction pattern has historically been a limiting factor in the application of linear 3D-SIM to tissue specimens at depths larger than 10–15 μm, causing image reconstruction failure. However, current lattice SIM illumination features high pattern modulation, achieving contrast up to 100 μm depth and allowing in-depth imaging of tissues and organoids [26].

7. After reconstruction, SIM images might present pixels with negative values, which are introduced when excessive low-frequency filtering is applied to the image. This can be a potential concern when quantification of image intensity is needed [57, 58]. This issue is addressed by accurate selection of the algorithm's Wiener filter parameters during reconstruction.

8. The reconstructed SIM image should be carefully examined for the presence of phase shifts, intensity differences between angles, as well as the overall image modulation contrast. These elements give an idea of the overall quality of the sample and its compatibility with SIM imaging. In the reconstructed image, the Fourier transform of the 3D volume should be analyzed for symmetry and homogeneity, which are lost upon incorrect sideband assignment [8, 16, 18, 25].

9. A common problem in STORM imaging is low spatial resolution of the final image. This is detected automatically by most commercially available software and can be confirmed by looking at the standard deviation of the coordinates of a single fluorescent antibody molecule on the coverslip undergoing multiple excitation cycles. One of the main causes is a reduction of the laser power reaching the imaging plane due to misalignment/drift of the laser path. This can be confirmed by measurement with a power meter and solved by accurate laser alignment. When the laser power cannot be increased any further, the use of TIRF illumination, elongation of the exposure time, or addition of COT to the imaging buffer can be considered to increase the number of photons detected per frame, which will increase the single-molecule localization precision (see the approximate relation given after these notes).


10. Selection of the right imaging buffer is critical in single-molecule imaging as it modulates the photoswitching properties of the fluorescent dyes, which in turn affects both the resolution and the capacity to resolve the structure. The STORM imaging buffer should be replaced frequently (~1 h) to maintain constant imaging conditions; the oxygen-scavenging system lasts only a few hours when continuously in contact with air (also see Note 8).

11. Reducing agents affect the photoswitching properties. A buffer containing the reducing agent 2-mercaptoethanol (β-ME) or mercaptoethylamine (MEA, cysteamine) can be selected as a starting point for structures with densely packed molecules such as centrosomes and cilia. In this type of buffer, Alexa Fluor 647 has a lower on–off duty cycle; that is, it spends less time in a fluorescent state, avoiding large numbers of overlapping molecules (also see Note 8).

12. In addition to the localization precision, which can be improved by collecting more of the emitted photons, the final resolution of the image is also restricted by the sampling frequency determined by the Nyquist-Shannon sampling theorem [59], which in simple terms states that the sample needs to be densely labeled, or enough molecules need to be acquired, to reveal a complete pattern. If the reconstructed image appears sparse or incomplete and only a limited number of molecule centers are identified, possible causes include: (1) the imaging buffer is depleted, so molecules are bleached immediately rather than temporarily turned off; (2) not enough frames were collected; (3) too many molecules overlap due to simultaneous activation. To partially overcome the last issue, most commercial analysis software provides built-in solutions to fit multiple Gaussians when molecules emit in the same frame. In practice, after optimization of all the above-mentioned steps, the sampling frequency is often determined by the antibody quality or the fluorescent protein maturation rate, so the selection of a high-quality antibody with high specificity is often key. Increasing the antibody concentration or elongating the antibody incubation time are common practices to optimize labeling efficiency. A mixture of antibody cocktails or tandem fluorescent proteins can also be considered.

13. Insufficient washing often leads to accumulation of secondary antibodies nonspecifically bound to the structure of interest, thereby introducing high background noise in the final image. Use of cleaned coverslips or an increase in washing volume and time are good starting points to reduce background, as well as careful selection of chemical fixation methods and high-quality antibodies.
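As a rough guide to Notes 9 and 12, the localization precision improves with the square root of the number of photons detected per switching event. This is the standard first-order approximation used in single-molecule localization microscopy (it neglects background, pixelation, and drift) and is given here only for orientation:

$$\sigma_{\mathrm{loc}} \approx \frac{\sigma_{\mathrm{PSF}}}{\sqrt{N}}$$

where σ_PSF is the standard deviation of the point spread function (on the order of 100 nm for a high-NA objective) and N is the number of detected photons; collecting, for example, four times more photons per molecule roughly halves σ_loc.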


Acknowledgments

This work was funded by the UK Medical Research Council intramural project MC_UU_00025/13 to Vito Mennella. We are grateful to Jonathan Shewring at Zeiss for help with data acquisition and reconstruction with the Elyra 7 and the SIM2 algorithm.

References

1. Stahl PL et al (2016) Visualization and analysis of gene expression in tissue sections by spatial transcriptomics. Science 353(6294):78–82 2. Lundberg E, Borner GHH (2019) Spatial proteomics: a powerful discovery tool for cell biology. Nat Rev Mol Cell Biol 20(5):285–302 3. Sydor AM et al (2015) Super-resolution microscopy: from single molecules to supramolecular assemblies. Trends Cell Biol 25(12):730–748 4. Sigal YM, Zhou R, Zhuang X (2018) Visualizing and discovering cellular structures with super-resolution microscopy. Science 361(6405):880–887 5. Sahl SJ, Hell SW, Jakobs S (2017) Fluorescence nanoscopy in cell biology. Nat Rev Mol Cell Biol 18(11):685–701 6. Schur FK (2019) Toward high-resolution in situ structural biology with cryo-electron tomography and subtomogram averaging. Curr Opin Struct Biol 58:1–9 7. Heintzmann R, Ficz G (2007) Breaking the resolution limit in light microscopy. Methods Cell Biol 81:561–580 8. Gustafsson MG (2000) Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J Microsc 198(Pt 2):82–87 9. Klar TA et al (2000) Fluorescence microscopy with diffraction resolution barrier broken by stimulated emission. Proc Natl Acad Sci U S A 97(15):8206–8210 10. Hofmann M et al (2005) Breaking the diffraction barrier in fluorescence microscopy at low light intensities by using reversibly photoswitchable proteins. Proc Natl Acad Sci U S A 102(49):17565–17569 11. Betzig E et al (2006) Imaging intracellular fluorescent proteins at nanometer resolution. Science 313(5793):1642–1645 12. Hess ST, Girirajan TP, Mason MD (2006) Ultra-high resolution imaging by fluorescence photoactivation localization microscopy. Biophys J 91(11):4258–4272 13. Rust MJ, Bates M, Zhuang X (2006) Subdiffraction-limit imaging by stochastic optical

reconstruction microscopy (STORM). Nat Methods 3(10):793–795 14. Sharonov A, Hochstrasser RM (2006) Widefield subdiffraction imaging by accumulated binding of diffusing probes. Proc Natl Acad Sci U S A 103(50):18911–18916 15. Schnitzbauer J et al (2017) Super-resolution microscopy with DNA-PAINT. Nat Protoc 12(6):1198–1228 16. Schermelleh L et al (2019) Super-resolution microscopy demystified. Nat Cell Biol 21(1): 72–84 17. Mockl L, Moerner WE (2020) Superresolution microscopy with single molecules in biology and beyond-essentials, current trends, and future challenges. J Am Chem Soc 142(42):17828–17844 18. Gustafsson MG et al (2008) Threedimensional resolution doubling in wide-field fluorescence microscopy by structured illumination. Biophys J 94(12):4957–4970 19. Mennella V et al (2012) Subdiffractionresolution fluorescence microscopy reveals a domain of the centrosome critical for pericentriolar material organization. Nat Cell Biol 14(11):1159–1168 20. Schermelleh L et al (2008) Subdiffraction multicolor imaging of the nuclear periphery with 3D structured illumination microscopy. Science 320(5881):1332–1336 21. Bates M et al (2007) Multicolor superresolution imaging with photo-switchable fluorescent probes. Science 317(5845): 1749–1753 22. Huang B et al (2008) Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy. Science 319(5864):810–813 23. Nguyen QPH et al (2020) Comparative superresolution mapping of basal feet reveals a modular but distinct architecture in primary and motile cilia. Dev Cell 55(2):209–223. e7 24. Liu Z et al (2020) A quantitative superresolution imaging toolbox for diagnosis of motile ciliopathies. Sci Transl Med 12(535): eaay0071

25. Mennella V (2016) Structured illumination microscopy. In: Bradshaw RA, Stahl PD (eds) Encyclopedia of cell biology. Elsevier, Amsterdam, Netherlands, pp 86–98 26. Wu Y, Shroff H (2018) Faster, sharper, and deeper: structured illumination microscopy for biological imaging. Nat Methods 15(12):1011–1019 27. Löschberger A, Novikau Y, Netz R, Spindler MC, Benavente R, Klein T, Sauer M, Kleppe I (2021) Super-resolution imaging by dual iterative structured illumination microscopy. bioRxiv. https://doi.org/10.1101/2021.05.12.443720 28. Huang B, Bates M, Zhuang X (2009) Super-resolution fluorescence microscopy. Annu Rev Biochem 78:993–1016 29. Sahl SJ, Hell SW (2019) High-resolution 3D light microscopy with STED and RESOLFT. In: Bille JF (ed) High resolution imaging in microscopy and ophthalmology: new frontiers in biomedical optics. Springer Nature, Cham (CH), pp 3–32 30. Wassie AT, Zhao Y, Boyden ES (2019) Expansion microscopy: principles and uses in biological research. Nat Methods 16(1):33–41 31. Mennella V et al (2014) Amorphous no more: subdiffraction view of the pericentriolar material architecture. Trends Cell Biol 24(3):188–197 32. Lawo S et al (2012) Subdiffraction imaging of centrosomes reveals higher-order organizational features of pericentriolar material. Nat Cell Biol 14(11):1148–1158 33. Mennella V, Hanna R, Kim M (2015) Subdiffraction resolution microscopy methods for analyzing centrosome organization. Methods Cell Biol 129:129–152 34. Bowler M et al (2019) High-resolution characterization of centriole distal appendage morphology and dynamics by correlative STORM and electron microscopy. Nat Commun 10(1):993 35. Gao L et al (2014) 3D live fluorescence imaging of cellular dynamics using Bessel beam plane illumination microscopy. Nat Protoc 9(5):1083–1101 36. York AG et al (2012) Resolution doubling in live, multicellular organisms via multifocal structured illumination microscopy. Nat Methods 9(7):749–754 37. Sydor AM et al (2018) PPP1R35 is a novel centrosomal protein that regulates centriole length in concert with the microcephaly protein RTTN. elife 7


38. Los GV et al (2008) HaloTag: a novel protein labeling technology for cell imaging and protein analysis. ACS Chem Biol 3(6):373–382 39. Keppler A et al (2003) A general method for the covalent labeling of fusion proteins with small molecules in vivo. Nat Biotechnol 21(1):86–89 40. Gautier A et al (2008) An engineered protein tag for multiprotein labeling in living cells. Chem Biol 15(2):128–136 41. Szymborska A et al (2013) Nuclear pore scaffold structure analyzed by super-resolution microscopy and particle averaging. Science 341(6146):655–658 42. Schindelin J et al (2012) Fiji: an open-source platform for biological-image analysis. Nat Methods 9(7):676–682 43. Winterflood CM et al (2015) Dual-color 3D superresolution microscopy by combined spectral-demixing and biplane imaging. Biophys J 109(1):3–6 44. Testa I et al (2010) Multicolor fluorescence nanoscopy in fixed and living cells by exciting conventional fluorophores with a single wavelength. Biophys J 99(8):2686–2694 45. Ries J (2020) SMAP: a modular superresolution microscopy analysis platform for SMLM data. Nat Methods 17(9):870–872 46. Sage D et al (2019) Super-resolution fight club: assessment of 2D and 3D singlemolecule localization microscopy software. Nat Methods 16(5):387–395 47. Liu Z et al (2020) Super-resolution microscopy and FIB-SEM imaging reveal parental centriole-derived, hybrid cilium in mammalian multiciliated cells. Dev Cell 55(2):224–236. e6 48. Gingras AC, Abe KT, Raught B (2019) Getting to know the neighborhood: using proximitydependent biotinylation to characterize protein complexes and map organelles. Curr Opin Chem Biol 48:44–54 49. May DG, Roux KJ (2019) BioID: a method to generate a history of protein associations. Methods Mol Biol 2008:83–95 50. Reiter JF, Leroux MR (2017) Genes and molecular pathways underpinning ciliopathies. Nat Rev Mol Cell Biol 18(9):533–547 51. Shi X et al (2017) Super-resolution microscopy reveals that disruption of ciliary transition-zone architecture causes Joubert syndrome. Nat Cell Biol 19(10):1178–1188 52. Kner P et al (2009) Super-resolution video microscopy of live cells by structured illumination. Nat Methods 6(5):339–342


53. Shao L et al (2011) Super-resolution 3D microscopy of live whole cells using structured illumination. Nat Methods 8(12):1044–1046 54. Demmerle J et al (2017) Strategic and practical guidelines for successful structured illumination microscopy. Nat Protoc 12(5):988–1010 55. Ball G et al (2012) A cell biologist’s guide to high resolution imaging. Methods Enzymol 504:29–55 56. Wallace W, Schaefer LH, Swedlow JR (2001) A workingperson’s guide to deconvolution in light microscopy. BioTechniques 31(5): 1076–1078. 1080, 1082 passim

57. Righolt CH et al (2014) Three-dimensional structured illumination microscopy using Lukosz bound apodization reduces pixel negativity at no resolution cost. Opt Express 22(9): 11215–11227 58. Righolt CH et al (2013) Image filtering in structured illumination microscopy using the Lukosz bound. Opt Express 21(21): 24431–24451 59. Thompson MA et al (2010) Molecules and methods for super-resolution imaging. Methods Enzymol 475:27–59

Part V Micrograph Analysis

Chapter 19

Visualizing and Quantifying Data from Time-Lapse Imaging Experiments

Eike K. Mahlandt and Joachim Goedhart

Abstract

One obvious feature of life is that it is highly dynamic. The dynamics can be captured by movies that are made by acquiring images at regular time intervals, a method that is also known as time-lapse imaging. Looking at movies is a great way to learn more about the dynamics in cells, tissue, and organisms. However, science is different from Netflix, in that it aims for a quantitative understanding of the dynamics. The quantification is important for the comparison of dynamics and to study the effects of perturbations. Here, we provide detailed processing and analysis methods that we commonly use to analyze and visualize our time-lapse imaging data. All methods use freely available open-source software and example data that are available from an online data repository. The step-by-step guides together with the example data allow for fully reproducible workflows that can be modified and adjusted to visualize and quantify other data from time-lapse imaging experiments.

Key words: Time-lapse, Dynamics, Fluorescence imaging, Data visualization, Image analysis, Fluorescent protein, Optogenetics, Biosensor, Open-source software

1 Introduction

An image is a representation of numbers in two dimensions. The simplest image that is acquired by fluorescence microscopy shows the distribution of fluorescence intensities in two spatial dimensions (x and y). However, it is very common that the output from an imaging setup is more complicated than a single image. This adds more dimensions to the imaging data. The two best-known examples of experimental data with more than two dimensions are data from time-lapse imaging, which adds the dimension time, and data from 3D imaging, which adds another spatial dimension. The addition of spectroscopic information (wavelength, fluorescence lifetime, anisotropy, photobleaching rate) brings yet another dimension to the imaging data. In Table 1, types of multidimensional datasets that are commonly encountered in fluorescence imaging are listed. Anything beyond a 2-dimensional dataset poses a data visualization challenge.



Table 1 Examples of common types of multidimensional data acquired by fluorescence imaging. Here, x, y, z are spatial dimensions, t is time, λ is a spectral dimension, and τ is lifetime

Type | Dimensions | Explanation
2D   | x,y        | Image
2D   | x,t        | Kymograph
2D   | x,z        | Orthogonal image/slice
3D   | x,y,z      | Images acquired at different axial positions, yielding a 3D image
3D   | x,y,t      | Images acquired over time, also known as a movie or time-lapse
3D   | x,y,λ      | An image with spectral information; the simplest format is an image acquired at two wavelengths
3D   | x,y,τ      | An image with fluorescence lifetime information
4D   | x,y,z,t    | A 3D image that is acquired at multiple time points (also known as a 4D image)
4D   | x,y,t,λ    | An image with spectral information that is acquired at multiple time points
4D   | x,y,t,τ    | An image with lifetime information that is acquired at multiple time points

Here, we will specifically address data from time-lapse imaging. Representing data with three spatial dimensions (3D imaging), as obtained from confocal imaging, 3D-STORM, or light-sheet microscopy, needs a different approach [1, 2] and will not be discussed here. The choice of the type of data visualization depends on the information that is extracted from the data. Below, we explain some of the data visualizations that we use in our research. This document is formatted as a "protocol," with the necessary materials listed in Subheading 2, the protocols in Subheading 3, and notes in Subheading 4. The materials in Subheading 2 comprise (open-source) software, code, and data and are all freely available online. The step-by-step guides in Subheading 3 should be self-explanatory and are tested on macOS and Windows operating systems. Since it is impossible to test the software and protocol for each possible computer configuration, you may need to update software or tweak the protocol to make it work on your system. If you run into problems and find a solution, we would be happy to hear about it as it will help us to improve the workflows. The preferred way to communicate issues is through GitHub: https://github.com/JoachimGoedhart/TimeLapse-Imaging-DataViz/issues. The expected outcome for each of the workflows is shown as a figure or plot at the end of the protocol. In general, the interpretation and biological significance of the result can be found in the papers that we refer to in the introduction of each workflow.


2 Materials

2.1 Mapping Dynamics of Structures

1. FIJI; https://fiji.sc [3]. We use text between brackets to indicate the sequence of menu items that needs to be selected to reach a function, for instance [File > New > Image...] to create a new image (see Note 1).

2. The LUTs that are used here are available in the GitHub repository: https://github.com/JoachimGoedhart/TimeLapse-Imaging-DataViz.

3. To install a LUT, open the folder with LUTs: [File > Show Folder > LUTs]. Copy the .lut files to this folder. The LUTs are available after restarting FIJI.

4. Example data; https://doi.org/10.5281/zenodo.4501412.

2.2 Plotting Signals from Time-Lapse Imaging

1. FIJI; https://fiji.sc [3].

2. Set the measurements to only measure mean gray values: [Analyze > Set Measurements...] and select "Mean gray value," "Display label," and set "Decimal places" to 2.

3. The macros that are used here are available in the GitHub repository: https://github.com/JoachimGoedhart/TimeLapse-Imaging-DataViz.

4. To install a macro, use [Plugins > Macro > Install...] and locate the .ijm file. The macro will appear in the menu [Plugins > Macro]. After closing FIJI, the macro is removed from the menu. If you want to use the macro more often, you can add it to the plugins menu: [File > Show Folder > Plugins]. Move the .ijm file into this folder. After restarting FIJI, the macro is available from the Plugins menu.

5. Example data; https://doi.org/10.5281/zenodo.4501412.

6. PlotTwist; https://huygens.science.uva.nl/PlotTwist [4].

7. R; https://www.r-project.org/.

8. R package "tidyverse" (https://tidyverse.tidyverse.org). The package can be installed in RStudio from the menu: [Tools > Install Packages...].

9. RStudio; https://rstudio.com/products/rstudio.

3 Methods

3.1 Mapping Dynamics of Structures

The acquisition of fluorescence images at defined time points can be used to examine the dynamics of objects in the images. There are several ways to do this and four relatively straightforward methods are described below. The methods described here are used to visualize the dynamics of cells and within cells. More advanced


methods can be found elsewhere, for instance tracking of objects [5] or measuring correlations between subsequent frames [6].

3.1.1 Encoding Dynamics with Temporal Color Coding

A very straightforward way of visualizing dynamics in an image format is by applying a unique color to each frame of a movie. In this way, the color encodes time. This method works very well for showing the trajectory of objects in an image. In this example, we use temporal color coding to display the migration of white blood cells that are labeled with a fluorophore on a 2D monolayer of endothelial cells.

1. Open the file "201029_PMN_channel_06-2.tif" in FIJI by dropping it onto the main window (see Note 2).

2. [Image > Hyperstacks > Temporal-Color Code] (see Note 3).

3. Choose a LUT; here we selected "Fire" (Fig. 1). Make sure to select the box "Create Time Color Scale Bar" (see Note 4).

4. Two images will be displayed, one with the color-coded time-lapse and another with the LUT, showing which frames are represented by the different colors. For the final result, we combine the LUT with the color-coded image:

5. Activate the "color time scale" window and select the contents: [Edit > Selection > Select All].

6. Copy the selected contents, [Edit > Copy], and paste into the image "MAX_colored": [Edit > Paste]. Move the LUT to the right location. Deselecting it (by clicking next to the ROI) fixes the location. The result is shown in Fig. 2.

3.1.2 Kymographs

In image analysis, a kymograph is an image in which one dimension is space and the other dimension is time. Kymographs are used to visualize the movement of structures or organelles. An image with one spatial and one temporal dimension can be generated directly by time-lapse imaging of a line (instead of a frame) with a confocal microscope. Usually, however, kymographs are created from movies that are obtained by time-lapse imaging. To this end, a line is defined in the image by the user, which is used to create the kymograph. Here, we use a kymograph to analyze the dynamics of microtubules that are labeled at their tips with fluorescently tagged EB3 [7]. Midway through the time-lapse, nocodazole is added to disrupt microtubule integrity and dynamics.

1. Locate the image sequence "EB3-timelapse.tif" on your computer and open it in FIJI.

2. Alternatively, this file can be retrieved through a URL [File > Import > URL...] with https://github.com/JoachimGoedhart/TimeLapse-Imaging-DataViz/raw/main/3_1_2_KymoGraph/EB3-timelapse.tif.


Fig. 1 Screenshots of a submenu and the window that pops up when the “Temporal-Color Code” is selected

Fig. 2 Migration of a white blood cell visualized by applying the temporal color code. In this figure the “Fire” look-up table was used to define the colors for the different frames

Fig. 3 Screenshots of the main window of FIJI with the line tool selected and highlighted by a black circle


Fig. 4 A kymograph of microtubule dynamics, in which moving tips are visible as diagonal lines. Time progresses in the vertical direction from the top to the bottom and the horizontal direction is a spatial dimension (reflecting the selected line)

Fig. 5 A still image from a time-lapse imaging dataset that shows the line (in yellow) from which the kymograph (shown in Fig. 4) was generated

3. Draw a line in the image, using the line tool (Fig. 3) from the main window in FIJI (see Note 5).

4. Retrieval of the ROI that was used here can be done by drag and drop of the file onto the main window of FIJI.

5. To reproduce the line that was used in this example, drop the file "Line-for-Kymograph.roi" onto the main window.


6. Generate a kymograph: [Analyze > Multi Kymograph > Multi Kymograph], Linewidth 1 (a minimal macro that builds a kymograph manually is sketched after this list).

7. Stretch the image in the time dimension to improve the visualization: [Image > Scale...] with "Y Scale:" set to 4 (and "X Scale:" to 1); the result is shown in Fig. 4 (see Note 6).

8. To save the original image with the colored line that was selected to generate the kymograph, first convert the data to allow the display of color: [Image > Type > RGB color].

9. Add the selected line to the ROI manager: [Analyze > Tools > ROI Manager...] and press the button "Add."

10. To make the line more visible, select the ROI from the list, press "Properties," and change the Width to 2.

11. To show the line in the image, use the button [More » Fill].

12. To use only this frame from the stack, use [Image > Duplicate...] and deselect "Duplicate stack" (see Note 7).

13. Save the image (Fig. 5) as a .PNG file.
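If the Multi Kymograph plugin is unavailable, the same result can be obtained with a few lines of ImageJ macro code. The sketch below is an illustration only, not one of the macros distributed with this chapter; it assumes the time-lapse stack is the active image and that a straight-line selection has been drawn with the line tool.

```
// Build a kymograph by hand: one line profile per time point becomes one row.
src = getTitle();                 // title of the time-lapse stack
frames = nSlices;
setSlice(1);
width = lengthOf(getProfile());   // number of pixels along the line
newImage("Manual kymograph", "32-bit black", width, frames, 1);
for (t = 1; t <= frames; t++) {
    selectWindow(src);
    setSlice(t);
    profile = getProfile();       // intensity along the line at time t
    selectWindow("Manual kymograph");
    for (x = 0; x < width; x++)
        setPixel(x, t - 1, profile[x]);
}
resetMinAndMax();                 // adjust the display range of the kymograph
```

As in step 7, the result can be stretched along the time axis for display; moving EB3 comets again appear as diagonal lines.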

3.1.3 Visualizing Fluctuations in Intensity

We have recently explored another way to visualize dynamics in data acquired by time-lapse imaging [8]. In this case, we wanted to quantify how the intensities of fluorescence signals change over time by determining the coefficient of variation (standard deviation divided by the mean) on a pixel-by-pixel basis. The imaging data was acquired from endothelial cells of which the plasma membrane was labeled with the fluorescent protein mNeonGreen tagged with a CaaX box [8].

1. Open the sequence "mNeon-CAAX.tif".

2. Other sources of fluctuations or changes in intensity must first be corrected for. Image drift can be corrected as described for "Visualizing a Change in Area" but is not necessary here. Intensity loss due to bleaching is corrected by [Image > Adjust > Bleach Correction] with "Correction Method:" set to "Exponential Fit."

3. The resulting image "DUP_mNeon-CAAX.tif" is used to determine the standard deviation in groups of 10 images. For instance, if we have 100 images, these are grouped in batches of 1–10, 11–20, 21–30, and so on. In FIJI, this is achieved as follows: [Image > Stacks > Tools > Grouped Z Project...].

4. Choose "Standard Deviation" as the projection method and a group size of 10 (Fig. 6).

5. On the same sequence, the average per 10 frames is calculated: select the window "DUP_mNeon-CAAX.tif", then select [Image > Stacks > Tools > Grouped Z Project...] and choose "Average Intensity" as the projection method and a group size of 10.


Fig. 6 Screenshots of a submenu and the window that pops up when the “Grouped Z Project. . .” is selected

Fig. 7 Screenshot of the image Calculator window with the settings that are used to divide the stack with the standard deviation of intensities over the stack with the average intensity


Fig. 8 The result of the workflow that shows intensity fluctuations in a time-lapse imaging dataset. The lighter the color, the larger the change in intensity during the time-lapse. The light colors are interpreted as areas where the membranes are highly dynamic

“Average Intensity” as the projection method and group size 10. 6. Divide the stack with standard deviation data by the stack with average intensities (Fig. 7) by using [Process > Image Calculator. . .] and selecting Image1: STD_DUP_mNeon-CAAX.tif, Operation: Divide and Image2: AVG_DUP_mNeon-CAAX.tif. Make sure that that both checkboxes are selected (“Create new window” and “32-bit (float) result”). 7. The resulting stack “Result of STD_DUP_mNeon-CAAX.tif” shows how the intensity-averaged standard deviation changes over time. To generate an average image, choose [Image > Stacks > Z Project. . .] and “Project type” is “Average Intensity” (“Start slice” is 1 and “Stop slice” is 35). 8. A LUT can be applied to the map to better show the differences in dynamics: [Image > Lookup Tables]. Here, we have selected the LUT “morgenstemning” [9] (see Subheading 2.1 on how to install this LUT). 9. The image can be optimized by adjusting the brightness and contrast manually: [Image > Adjust > Brightness/Contrast. . .]. 10. To reproduce the image displayed here, use [Process > Enhance Contrast] with “Saturated pixels:” set to 0.3%. 11. For compatibility with downstream applications (text editors, presentation software) it is useful to save the result (Fig. 8) in PNG format: [File > Save As > PNG. . .].


3.1.4 Visualizing a Change in Area

Cells may change their shape in response to perturbations. Endothelial cells respond to thrombin with a Rho GTPase-mediated cell contraction, which strongly reduces their cell area. To visualize the change in area between two (user-selected) time points, we use a colormap that shows reduced, increased, and unchanged cell area [10]. The data used here is from an endothelial cell that expresses a membrane marker (mTurquoise2-CaaX). The change in cell area is triggered by treating the cells with thrombin [10, 11].

1. Open the image "2010230_BOEC-mTq2-CaaX.tif" from your computer (or use the URL: https://zenodo.org/record/4501412/files/2010230_BOEC-mTq2-CaaX.tif).

2. Remove image shift by using the plugin "Linear Stack Alignment with SIFT": [Plugins > Registration > Linear Stack Alignment with SIFT] and use the default settings.

3. Crop the stack to exclude the cells at the edges by drawing an ROI with the Rectangle tool and removing everything outside of the ROI: [Image > Crop].

4. Get frame 4: [Image > Duplicate...]. Set the "Title" to Frame4 and uncheck "Duplicate stack."

5. Repeat to get frame 25; name this image Frame25.

6. [Process > Filters > Gaussian Blur...] and set Sigma (Radius) to 2.

7. [Image > Adjust > Threshold...] and use the button "Set" to set the "Low threshold level" to 270 (and the "High threshold level" to 65,535). After pressing the button "OK," press the button "Apply" in the Threshold window (make sure that the "Dark background" box is checked).

8. Repeat the previous step for image Frame25 with the same threshold settings.

9. Both images are now "binary" images with values of either 0 (background) or 255 (foreground). Use [Process > Binary > Fill Holes] to fill holes in both binary images.

10. On image Frame4: [Process > Math > Subtract...], Value: 254.

11. On image Frame25: [Process > Math > Subtract...], Value: 253.

12. Sum the images: [Process > Image Calculator...], Image1: Frame4, Operation: Add, Image2: Frame25, and check "Create new window."

13. The resulting image has pixel values of 0, 1, 2, and 3.

14. To apply a colormap that depicts these different pixel values, take the file "AreaLUT.lut" and drop this file on the main window. When this LUT is used, red depicts the area loss, blue is expanded area, and white is unchanged between the two selected frames (Fig. 9). A macro sketch covering steps 4–12 is shown after this list.


Fig. 9 A four color image that visualizes a change in cellular area. The white area is unchanged, the red color indicates a loss of area and the blue color indicates a gain in area. As such, the red color visualizes cell retraction, and the blue color depicts cell expansion or protrusion

15. The LUT can be modified to change the colors: [Image > Color > Edit LUT...].
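The ImageJ macro below is a minimal sketch, not one of the macros distributed with this chapter, covering steps 4–12. It assumes the registered, cropped stack is the active image; the frame numbers, blur radius, and threshold follow the values used above and should be adapted to other datasets.

```
// Compare cell area between frame 4 and frame 25 of the active stack.
src = getTitle();
setSlice(4);
run("Duplicate...", "title=Frame4");             // current slice only
selectWindow(src);
setSlice(25);
run("Duplicate...", "title=Frame25");
frames = newArray("Frame4", "Frame25");
for (i = 0; i < 2; i++) {
    selectWindow(frames[i]);
    run("Gaussian Blur...", "sigma=2");
    setThreshold(270, 65535);
    setOption("BlackBackground", true);
    run("Convert to Mask");                       // cell = 255, background = 0
    run("Fill Holes");
}
selectWindow("Frame4");
run("Subtract...", "value=254");                  // cell pixels -> 1
selectWindow("Frame25");
run("Subtract...", "value=253");                  // cell pixels -> 2
imageCalculator("Add create", "Frame4", "Frame25");
// Result: 0 = background, 1 = area lost, 2 = area gained, 3 = unchanged.
```

Applying the "AreaLUT.lut" colormap to the result, as in step 14, then colors retraction, protrusion, and unchanged area.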

3.2 Plotting Signals from Time-Lapse Imaging

In our work, we use genetically encoded biosensors to report on intracellular processes. These biosensors may report on changes in calcium levels, kinase activity, or lipid signaling. In addition, we use chemo- and optogenetic switches to steer cellular processes. In all of these cases, we analyze changes in optical signals over time. The signal can simply be the fluorescence intensity of a reporter, but it may also be the ratio of two intensities. Ratios of signals are more robust to photobleaching, cell shape changes, and sample drift than an intensity measured from a single channel. In the next workflows, we will explain how intensities can be quantified and visualized over time.

3.2.1 Visualizing the Dynamics of Intensity

The LEXY probe consists of a fluorescent protein and a light-inducible nuclear export sequence [12]. In resting cells, the fluorescence is mainly located in the nucleus. When illuminated with blue light, the nuclear export signal is exposed, and the probe moves out of the nucleus. In the absence of blue light, the system reverts to the original state and the probe accumulates in the nucleus. Here we quantify the nuclear intensity over time, to demonstrate the nuclear export induced by blue light.


Fig. 10 A plot that shows the change in nuclear intensity over time and how it depends on exposure to blue light. When the light is on, the nuclear intensity decreases and when the light is off, the intensity increases. The black lines represent the data from four different cells. The data was scaled between 0 and 1

1. Open the data "210203_LEXY_combined.tif" from your computer (or use the URL: https://zenodo.org/record/4501412/files/210203_LEXY_combined.tif).

2. Find an area with no fluorescence, which can be used as background. You may need to stretch the contrast to find the optimal background region: [Image > Adjust > Brightness/Contrast...].

3. Draw an ROI that represents the background signal.

4. Run the macro Subtract_Measured_Background, which will, for each image in the stack, subtract the average background value calculated from the ROI (a sketch of what such a macro does is shown at the end of this subheading). See Subheading 2.2 for instructions on how to install the macro.

5. Activate the ROI manager: [Analyze > Tools > ROI Manager...].

6. Draw an ROI in the image sequence to select the nucleus of a cell and add it to the ROI manager with "Add [t]".

7. Repeat the selection of nuclei and the addition of ROIs to the ROI manager until all nuclei are marked.

8. Rename the ROIs: select an ROI from the list and use [Rename...] in the ROI manager. Use a more informative name; here we use Cell-01, Cell-02, and so on.

9. Save the set of ROIs: [More » Save...].


10. To reproduce the ROIs that are used in this example, drop the file "RoiSet.zip" onto the main window.

11. Select all ROIs in the ROI manager, except for the background ROI if it is in the list (select the first label, then select the last label in the list while pressing the Shift key).

12. Make sure that the measurements are correctly defined: [Analyze > Set Measurements...] and select "Mean gray value," "Display label," and set "Decimal places" to 2.

13. Measure the mean gray value in all ROIs: [More » Multi Measure] and select "Measure all slices"; make sure that "One row per slice" is not selected (see Note 8).

14. Save the table with the results in csv format: [File > Save As...].

15. The open-source web application PlotTwist will be used to plot the data: https://huygens.science.uva.nl/PlotTwist/.

16. The PlotTwist app needs an Internet connection. It disconnects when it is inactive for a while. Both issues can be addressed by running it offline from RStudio; for instructions see: https://github.com/JoachimGoedhart/PlotTwist.

17. Select "Upload (multiple) file(s)" and use "Browse..." to locate the Results.csv file, or drop the file on the "Browse..." button to upload the data.

18. The app splits the "Label" column into three columns, of which the "Sample" column is used to identify the different ROIs.

19. Select the checkbox "These data are Tidy."

20. Set the x-axis variable to "Slice" and the y-axis variable to "Mean." The default settings for the identifiers of the sample and condition ("Sample" and "id," respectively) can be used for these data.

21. Optional: a data normalization can be applied in PlotTwist. Several options are available when the checkbox "Data normalization" is activated. Here, we correct for differences in intensity due to different expression levels between cells. Select the option "Rescale between 0 and 1."

22. To visualize the data, click on the "Plot" tab. A plot of the data is shown, and the plot can be modified to improve the visualization.

23. To store the settings of the user interface, use the button "Clone current setting" above the plot.

24. The URL can be stored and used later to reproduce the exact settings. For instance, the URL that belongs to the plot shown here is https://huygens.science.uva.nl:/PlotTwist/?data=3;TRUE;TRUE;zero_one;1,5;&vis=dataasline;1;;;1;;&layout=;;;;;;;6;X;480;600&color=none&label=TRUE;Light induced nuclear export in single cells;TRUE;Slice;Normalized nuclear intensity;TRUE;26;24;18;8;;;&stim=TRUE;both;1,20,20,55,55,65,65,101,101,112,112,148,148,182;on,off,on,off,on,off,on,off;blue,orange&.


25. To use this setting, copy-paste the URL into a browser. Upload the data and select the x-axis and y-axis variables. Click on the "Plot" tab and a plot that looks identical to the one shown here (Fig. 10) should appear.

26. A limitation of this workflow is that you end up with "Slices" on the x-axis instead of time. This can be fixed with an R-script, as will be explained in Subheading 3.2.2. An alternative is to run the ImageJ macro "Add-timing-to-results.ijm", which adds a column "Time" to the results window that is calculated from the slice number and a user-defined time interval.
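For reference, the essence of a background-subtraction macro such as Subtract_Measured_Background.ijm can be written in a few lines. The sketch below is an illustration only, not the code distributed in the repository; it assumes a background ROI has been drawn on the active stack.

```
// Subtract the mean intensity of the background ROI from every frame.
for (s = 1; s <= nSlices; s++) {
    setSlice(s);
    getStatistics(area, mean);                     // mean inside the background ROI
    run("Select None");                            // apply the subtraction to the whole frame
    run("Subtract...", "value=" + mean + " slice");
    run("Restore Selection");                      // bring the background ROI back
}
```

Running such a macro once per channel reproduces the manual background subtraction used in Subheadings 3.2.1 and 3.2.2.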

3.2.2 Visualizing Ratiometric Data

Yellow cameleon is a FRET-based calcium biosensor [13]. The ratio of the intensities at two wavelengths is used as a read-out. When calcium levels are high, the cyan intensity is low and the yellow intensity is high; when calcium levels are low, this is reversed. Therefore, a plot of the ratio of the yellow over the cyan signal is often shown to visualize calcium dynamics over time. Here, we used yellow cameleon to measure calcium oscillations triggered by histamine in HeLa cells. The histamine receptor is deactivated by the addition of the antagonist pyrilamine [14] near the end of the time-lapse.

1. The data consists of individual frames that are located in the folder YCam. The data is available here: https://zenodo.org/record/4501412/files/YCaM.zip.

2. [File > Import > Image Sequence...] and locate the folder "YCam" with the images and press "Open." To open the data acquired from the CFP emission channel, enter "CFP" for "File name contains:" and make sure that the checkbox "Sort names numerically" is selected.

3. Rename the image sequence to CFP: [Image > Rename...].

4. Repeat the previous two steps to get the YFP data.

5. The background (or offset) is removed by determining the average background signal from a region in the image that does not contain signal from cells. First, draw an ROI in a region that represents background. Run the macro "Subtract_Measured_Background" (https://github.com/JoachimGoedhart/TimeLapse-Imaging-DataViz/tree/main/Macros).

6. Repeat the background subtraction for the YFP image sequence.


Fig. 11 A plot that shows the change in fluorescence intensity over time for the CFP and YFP channels. The black lines represent the data from four different cells. The baseline fluorescence, determined from the average of the first 5 frames, was normalized to 1

7. The images can be saved (and are available as CFP.tif and YFP.tif).

8. ROIs that define individual cells can be drawn and added to the ROI manager (see also Subheading 3.2.1).

9. To reproduce the ROIs that are used in this example, drop the file "RoiSet.zip" onto the main window.

10. Make sure that the measurements are correctly defined: [Analyze > Set Measurements...] and select "Mean gray value," "Display label," and set "Decimal places" to 2.

11. To obtain the fluorescence intensity from multiple ROIs over time, select the ROIs in the ROI manager. Next, apply [More » Multi Measure] and select "Measure all 121 slices." Deselect the other two checkboxes. The resulting window with data can be saved, [File > Save As...], in csv format. The resulting file is named "Results-CFP.csv".

12. Repeat this for the YFP data and name the file with the results "Results-YFP.csv".

13. At this point, the results can be inspected with the online visualization tool PlotTwist: choose the option "Upload (multiple) file(s)" and drag the files Results-CFP.csv and Results-YFP.csv simultaneously onto the "Browse..." button to upload them.


Fig. 12 A plot that shows the change in the YFP/CFP ratio induced by histamine over time. The ratio is used to indicate the intracellular calcium levels. The lines represent the data from four different cells, indicated with unique colors. The baseline fluorescence, determined from the average of the first 5 frames, was normalized to 1

14. Select the checkbox "These data are Tidy."

15. Select the variables: x-axis = Slice, y-axis = Mean, Identifier of samples = "Sample," and Identifier of conditions = "id."

16. Activate the checkbox "Data normalization" and use the default setting "Fold change over baseline (I/I0)".

17. Press the "Plot" tab to show the plot (Fig. 11).

18. The default plot will show up and the visualization can be further optimized. Below, we explain how the data can be processed in R to normalize the data, calculate the ratios, and visualize them as a plot (the normalization is also written out as a formula at the end of this section). See Subheading 2.2 for instructions on how to install R and RStudio.

19. Make sure that the R-script "Plot_Ratiometric-data" and the csv files "Results-YFP" and "Results-CFP" are located in the same folder. Make sure to use these file names, otherwise the script will not find the data.

20. Open the script in RStudio by double-clicking on the R-script.


Fig. 13 A plot that shows the change in the YFP/CFP ratio induced by histamine over time. The ratio is used to indicate the intracellular calcium levels. The data from four different cells, indicated with unique colors, is shown as a small multiple. The baseline fluorescence, determined from the average of the first 5 frames, was normalized to 1

21. In RStudio go to [Session > Set Working Directory > To Source File Location].

22. The code can be run line by line to understand what each line of code does:

23. Position the cursor in the line you want to run and press Ctrl+Enter (Cmd+Enter on macOS), or use the menu: [Code > Run Selected Line(s)].

24. After running a line of code, the cursor moves down, so the previous step can be repeated until all the code is executed (see Note 9).

25. The script will produce a plot (Fig. 12), which is saved in the working directory as "Ratio-plot.png".

26. The script will also produce a CSV file with normalized data that is saved in the working directory as "Normalized_data.csv". This file can be used as input for PlotTwist: drop the file on the "Browse..." button, select "These data are Tidy", and choose "Time" as the variable for the x-axis and "ratio" (or any of the other columns) for the y-axis.


27. Click on the "Plot" tab to see the plot and adjust the visualization.

28. To reproduce the plot, start PlotTwist with this URL: https://huygens.science.uva.nl:/PlotTwist/?data=3;TRUE;;fold;1,5;&vis=dataasline;1;;;1;TRUE;&layout=;;TRUE;0,220;;TRUE;;6;X;480;600&color=none&label=TRUE;Calcium oscillations induced by histamine;TRUE;Time [s];Normalized Ratio;TRUE;24;18;18;8;;;TRUE&stim=;bar;;;&.

29. Upload the file "Normalized_data.csv", select "ratio" as the variable for the y-axis, and click on the "Plot" tab to show the plot (Fig. 13).
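For reference, the quantity plotted in Figs. 12 and 13 can be written out explicitly. This is a restatement of the normalization described above under the assumption that the baseline is the average of the first 5 frames (as stated in the figure legends); it is not a formula copied from the R script:

$$R_i(t)=\frac{Y_i(t)}{C_i(t)}, \qquad \tilde{R}_i(t)=\frac{R_i(t)}{\tfrac{1}{5}\sum_{t'=1}^{5} R_i(t')}$$

where $Y_i(t)$ and $C_i(t)$ are the background-subtracted mean YFP and CFP intensities of cell $i$ in frame $t$, and $\tilde{R}_i(t)$ is the normalized ratio shown on the y-axis.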

4 Notes

1. Use [Help > Search] to find the location of a function in a menu.

2. There are alternatives to open files. Alternative 1: [File > Open...] and locate the file on your computer. Alternative 2: [File > Import > URL...] and copy-paste this URL into the text field: https://github.com/JoachimGoedhart/TimeLapse-Imaging-DataViz/raw/main/3_1_1_TemporalColor/201029_PMN_channel_06-2.tif. Alternative 3: [File > Import > URL...] and copy-paste this URL into the text field: https://zenodo.org/record/4501412/files/201029_PMN_channel_06-2.tif.

3. When the function does not work, you may use the macro that is available on GitHub: https://github.com/JoachimGoedhart/TimeLapse-Imaging-DataViz/blob/main/Macros/Temporal-Color_Code.ijm. See Subheading 2.2 for instructions on how to install the macro.

4. The choice of the LUT will influence the clarity of the visualization. The "Fire" LUT that comes with FIJI is a safe first choice, but it is recommended to try a couple of different LUTs.

5. The position of the line can be stored by adding it to the ROI manager: [Analyze > Tools > ROI Manager...] and press the button "Add." The line can be permanently saved by using [More > Save...] in the ROI manager.

6. KymoButler [15] is an online tool that quantifies track length, duration, and velocity from a kymograph: https://www.wolframcloud.com/objects/deepmirror/Projects/KymoButler/KymoButlerForm. The cloud app accepts a maximum of 150,000 pixels, so it makes sense to crop and analyze the upper half of the image.


7. This can also be done in other ways, for instance by [Edit > Copy] followed by [File > New > Internal Clipboard].

8. The results are in a "tidy format," which implies that all the measured mean intensities are in a single column that is named "Mean." Two other columns, "Label" and "Slice," hold information on the ROI and the slice number, respectively. The tidy format is well suited for handling the data with open-source applications such as R and Python.

9. To run the entire script at once, select all the code (Ctrl+A, or Cmd+A on macOS) and press Ctrl+Enter (Cmd+Enter on macOS).

Acknowledgments

We thank Janine Arts and Jaap van Buul (Sanquin Research and Landsteiner Laboratory, Amsterdam, the Netherlands) for providing the data of the membrane dynamics and Marten Postma (University of Amsterdam, the Netherlands) for useful discussions. This work was supported by an NWO ALW-OPEN grant ALWOP.306 (EKM). We are grateful for all the input, comments, and solutions from the active communities on Stack Overflow, Twitter, and other fora that share their knowledge and expertise (you know who you are).

References

1. Long F, Zhou J, Peng H (2012) Visualization and analysis of 3D microscopic images. PLoS Comput Biol 8:e1002519 2. Pietzsch T, Saalfeld S, Preibisch S et al (2015) BigDataViewer: visualization and processing for large image data sets. Nat Methods 12:481–483 3. Schindelin J, Arganda-Carreras I, Frise E et al (2012) Fiji: an open-source platform for biological-image analysis. Nat Methods 9:676–682 4. Goedhart J (2020) PlotTwist: a web app for plotting and annotating continuous data. PLoS Biol 18:e3000581 5. Tinevez J-Y, Perry N, Schindelin J et al (2017) TrackMate: an open and extensible platform for single-particle tracking. Methods 115:80–90 6. Meddens MBM, Pandzic E, Slotman JA et al (2016) Actomyosin-dependent dynamic spatial patterns of cytoskeletal components drive mesoscale podosome organization. Nat Commun 7:13127 7. Chertkova AO, Mastop M, Postma M et al (2020) Robust and bright genetically encoded fluorescent markers for highlighting structures and compartments in mammalian cells. bioRxiv. https://doi.org/10.1101/160374

8. Arts JJG, Mahlandt EK, Grönloh MLB et al (2021) Endothelial junctional membrane protrusions serve as hotspots for neutrophil transmigration. elife. https://doi.org/10.7554/eLife.66074 9. Geissbuehler M, Lasser T (2013) How to display data by color schemes compatible with red-green color perception deficiencies. Opt Express 21:9862–9874 10. Reinhard NR, Mastop M, Yin T et al (2017) The balance between Gαi-Cdc42/Rac and Gα12/13-RhoA pathways determines endothelial barrier regulation by sphingosine-1-phosphate. Mol Biol Cell 28(23):3371–3382 11. Mastop M, Reinhard NR, Zuconelli CR et al (2018) A FRET-based biosensor for measuring Gα13 activation in single cells. PLoS One 13(3):e0193705 12. Niopek D, Wehler P, Roensch J et al (2016) Optogenetic control of nuclear protein export. Nat Commun 7:10624


13. Nagai T, Yamada S, Tominaga T et al (2004) Expanded dynamic range of fluorescent indicators for Ca2+ by circularly permuted yellow fluorescent proteins. Proc Natl Acad Sci U S A 101:10554
14. Van Unen J, Rashidfarrokhi A, Hoogendoorn E et al (2016) Quantitative single-cell analysis of signaling pathways activated immediately downstream of histamine receptor subtypes. Mol Pharmacol 90(3):162–176
15. Jakobs MA, Dimitracopoulos A, Franze K (2019) KymoButler, a deep learning software for automated kymograph analysis. eLife 8:e42288

Chapter 20

Automated Microscopy Image Segmentation and Analysis with Machine Learning

Anthony Bilodeau, Catherine Bouchard, and Flavie Lavoie-Cardinal

Abstract

The development of automated quantitative image analysis pipelines requires thoughtful considerations to extract meaningful information. Commonly, extraction rules for quantitative parameters are defined and agreed upon beforehand to ensure repeatability between annotators. Machine/Deep Learning (ML/DL) now provides tools to automatically extract the set of rules needed to obtain quantitative information from images (e.g. segmentation, enumeration, classification). Many parameters must be considered in the development of proper ML/DL pipelines. We herein present the important vocabulary and the necessary steps to create a thorough image segmentation pipeline, and discuss technical aspects that should be considered when developing automated image analysis pipelines with ML/DL.

Key words Quantitative analysis, Segmentation, Deep learning, Machine learning, Microscopy

1 Introduction

Quantitative analysis of an image consists of extracting characteristics that can be expressed numerically. The number of instances, the morphology, or the organization of a biological structure are all quantifiable parameters in optical microscopy images. Quantitative analysis approaches should generally be based on a well-defined extraction technique and be repeatable: two users repeating a quantification on the same dataset should obtain similar results (inter-expert agreement).

Recent developments in the field of microscopy provide researchers with optical devices capable of high-throughput data generation. These microscopy techniques bring to light heterogeneous patterns, textures, and shapes of biological structures from the macro- to the nanoscale [41]. This significantly increases the complexity of the analysis process and challenges classical user-based analysis techniques. Due to the complexity of the structures within the images, a single highly trained user is often charged with the long and tedious task of extracting quantitative parameters from the complete set of acquired images. Hence, fewer images will be analyzed for the final quantification, and annotation imprecision may occur due to labeling fatigue [27, 49].

The field of computer vision has evolved tremendously in recent years with the development of Machine/Deep Learning (ML/DL) models, more specifically with the deployment of Deep Neural Networks (DNNs) [2, 28]. Because they can be implemented on graphics processing units (GPUs), inference with a DNN on a given image is fast enough to keep pace with high-throughput data generation. DNNs have a high representation capability even in high-dimensional settings, which makes them suitable for automatically learning general representations from an image [28]. DNNs have now been applied to solve many computer vision tasks, such as image classification or segmentation [11, 24, 29]. Given their great success with natural images, the developed models were recently introduced in the field of biomedical and microscopy image analysis [9, 13, 15, 16, 26, 27, 32, 38, 46, 47]. They are now routinely applied to the quantification of diverse biological structures in microscopy images, with variable levels of performance.

When applied to image analysis, the core idea of ML/DL is to teach a model to reproduce a specific task on unseen images drawn from the same distribution as the dataset. Various tasks can be addressed, such as classification, detection, and segmentation. In this chapter, the primary focus is on segmentation. The segmentation of a structure of interest refers to classifying each pixel of the image as belonging, or not, to that structure. The classification assigns an ID (e.g. cell or background) to each pixel of the image. Neighboring pixels that share the same ID are considered part of the same object or structure. The segmentation of a structure/object then allows the extraction of characteristics of the underlying structure, such as its size, shape, and position. To automatically generate the segmentation mask of one or more structures of interest from an image, a DNN needs to learn from multiple examples [24, 28]. Each example image has to be associated with its corresponding annotations (ground truth annotations), against which the DNN compares its predictions during the learning phase. Once trained, a DNN may be deployed to segment the biological structures on unseen images (images that were not used in the training process).

This chapter presents a guideline for automated quantitative analysis of biological structures using machine and deep learning, more specifically in the context of an object segmentation task. Subheading 2 introduces the vocabulary of ML/DL-assisted image analysis. The necessary steps to obtain an automated segmentation pipeline are discussed in the Methods section (Subheading 3). A complementary Notes section introduces important aspects that should be considered when developing ML/DL strategies for microscopy image analysis.

2 Concepts and Definitions

2.1 Images

An image is composed of many small elements called pixels, each of which has an associated numerical value. An image is defined by its size (C × H × W), where C refers to the number of channels/colors and H and W are the height and width in pixels.
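As a quick illustration of this convention (a minimal sketch; the file name and the use of the tifffile/NumPy libraries are our own choices, not requirements of any specific pipeline):

import numpy as np
import tifffile

# Create and save a small two-channel image, then read it back (C x H x W).
stack = np.random.randint(0, 255, size=(2, 512, 512), dtype=np.uint8)
tifffile.imwrite("example_stack.tif", stack)

image = tifffile.imread("example_stack.tif")
print(image.shape)  # (2, 512, 512): C = 2 channels, H = W = 512 pixels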

2.2 Annotations

Image annotation refers to highlighting and identifying (e.g. tracing the contours, classifying) the structures of interest in an image. The annotations are used by the ML/DL models to derive a set of rules or learn a general representation from the image. The ML/DL model will attempt to reproduce the annotations, which are often referred to as the ground truth. Depending on the desired task, there are different types of annotations. For example, class labels are associated with every individual image to train a classification model. In a segmentation task, the annotations can consist of binary masks of the structures of interest. The precision of the annotations will dictate the level of supervision of the models, ranging from fully supervised, when the masks precisely highlight the contours of the structures, to weakly supervised (e.g. bounding boxes, scribbles, circles) (see Fig. 1).

2.3 Model

A ML model is the combination of algorithms and parameters that are learned and then used to make predictions from data.

2.4 Training

Training a model refers to determining the values of all of the model's parameters that optimize its performance on the training dataset. Without training, a ML/DL model would not perform better than randomly generated predictions. In DL, training involves optimizing the weights and biases of the neural network.

2.5 Objective Function

The objective function (also termed cost or loss function in DL) is the function that the model aims to minimize. For a segmentation task, the objective function is generally the mean difference (either absolute or squared) between the expert annotations and the predicted segmentation. For gradient-based training methods, the objective function must be differentiable.
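For example, a common pixel-wise squared-error objective over N pixels, comparing the ground truth y_i with the prediction of a model with parameters θ, can be written as (a standard textbook form, included here only for reference):

L(\theta) = \frac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{y}_i(\theta) \right)^2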

2.6 Epoch

An epoch is one iteration of the model over the complete training dataset. One epoch is completed when each sample from the training set has contributed to updating the parameters of the model. The maximum number of epochs is a hyperparameter. The training can also be set to run indefinitely and be stopped using a convergence criterion (see Subheading 2.11).

2.7 Hyperparameter

Fig. 1 Annotations of biological structures with different levels of supervision. Left: image of the GFP-GOWT1 mouse stem cells dataset from the Cell Tracking Challenge [47, 53]

The hyperparameters are the parameters that are defined prior to the training process, whereas the parameters are the weights and biases of the model that are derived from the learning process (see Note 1). Common hyperparameters include, for example, the learning rate and the batch size for DNNs, the number and maximum depth of trees in a random forest, or the regularization constant in support vector machines (SVM). Hyperparameters should always be tuned using the validation set, most simply with a grid search, i.e. by training the model sequentially with all combinations of hyperparameters from a predefined range (see the short sketch below).
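A minimal grid-search sketch with scikit-learn for a random forest (the feature matrix, labels, and hyperparameter ranges below are placeholders; in a real pipeline the search is scored on the validation data):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Placeholder pixel-wise features (n_samples x n_features) and binary labels.
X = np.random.rand(1000, 16)
y = np.random.randint(0, 2, size=1000)

# Candidate hyperparameter values to explore (illustrative ranges only).
param_grid = {
    "n_estimators": [50, 100, 200],  # number of trees in the forest
    "max_depth": [5, 10, None],      # maximum depth of each tree
}

# One model is trained per combination and scored on held-out folds.
search = GridSearchCV(RandomForestClassifier(), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)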

2.8 Batch

To speed up the training process, the training data is generally processed in batches (i.e. several images at a time) instead of one by one. The batch size determines the frequency at which the model's parameters are updated, as they are updated once at the end of each batch.

2.9 Optimizer

The optimizer is the algorithm used to update the parameters of the model. In DL, the most basic optimizer is gradient descent, which directly uses the derivative of the loss function to reduce the loss toward a minimum. The other commonly used optimizers use the same principle with added complexity, such as stochasticity [21], momentum [36], and adaptive learning rates [22].

2.10 Overfitting

A model is said to have overfitted the data when it performs significantly better on the training data than on the validation data; in other words, the model has learned its training data by heart and does not generalize well to unseen data. Overfitting is a challenge for any machine learning method and must always be addressed, commonly by using early-stopping (see Subheading 2.11) or regularization (see Subheading 2.12).

2.11 Early-Stopping

For models that are trained iteratively, such as DNNs, early-stopping involves interrupting the training process before the predefined maximum number of iterations (epochs) is reached. Early-stopping criteria are defined using the difference between the validation and training scores to avoid overfitting, or using the convergence of the training score to avoid unnecessary training steps.

2.12 Regularization

Regularization is a technique used to avoid overfitting by encouraging simpler solutions over more complex ones [19]. Complex models often perform better on the training set but can generalize poorly to unseen data. The most commonly used regularization techniques for gradient-based methods are L1 and L2 regularization [12], which add a penalty corresponding to the sum of the absolute values and of the squared values of the model's weights, respectively, encouraging lighter models [14].
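Written out for a base loss L, model weights w, and regularization constant λ, these penalties take the standard forms (included here only for reference):

L_{L1}(w) = L(w) + \lambda \sum_j |w_j|

L_{L2}(w) = L(w) + \lambda \sum_j w_j^2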

3 Methods

A guideline is detailed below to facilitate the development of an automated segmentation pipeline for microscopy images. The focus is primarily on the segmentation of biological structures, but the same strategy can be applied to obtain segmentation masks in other research fields.

3.1 Preparing the Images

The first step in any automated segmentation pipeline is to obtain a set of images on which the developed models will be trained.

3.1.1 Handling of Images

In the context of quantitative analysis of microscopy data, the images can be obtained, for example, from online datasets or from in-house existing datasets. Multiple platforms exist to help with organizing the set of images that are used to train the model, for example the Open Microscopy Environment (OMERO) or the Neurodata Without Borders initiative [1, 39].

3.1.2 Number of Samples

There is no universal rule for how many samples are needed to train a ML model, as this number is highly dependent on factors such as (1) the difficulty of the task, (2) the variability of the structures within the images, and (3) their quality (e.g. noise level). The training dataset must contain multiple instances of all classes of objects (including rare features, morphological variations, and outliers). Indeed, ML/DL models will generalize to new images that are similar to those used in the training process, i.e. that share common features. Hence, increasing the diversity of images in the training set, while remaining constrained to images that are relevant to the task at hand, is required to obtain a flexible model capable of handling a large manifold of images (see Note 2).

3.1.3 Metadata

Another important aspect of images is their metadata [1, 40]. Each image in a dataset should have metadata associated with it. The metadata should be able to answer the who/what/when/where questions (see [30] for best practices). Keeping track of the metadata has many advantages: it facilitates the sharing of data with other groups, allows the removal of images with specific characteristics from the set (e.g. acquired on microscope X), and is essential to ensure that experiments are reproducible.

3.1.4 Normalization of the Images

The dynamic range of pixel intensities within a dataset may vary; for instance, images may be acquired on an 8-bit or a 16-bit detector. This implies that models which learn from the pixel intensities need to handle the complete dynamic range of intensity values. To facilitate the learning task, it is common practice to normalize the images within a given range of values. Common normalization techniques used in ML/DL are whitening and min-max normalization [14, 18]. Whitening rescales the intensity of each pixel so that the mean of each image is zero and its standard deviation is equal to one. Min-max normalization rescales the pixel intensities to the [0, 1] range using the minimum and maximum value of each image. Neither method performs systematically better than the other on all datasets. Hence, when training the models, both normalization methods should be compared using the performance achieved on the validation set (a minimal sketch of both is given below).
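A minimal NumPy sketch of the two normalizations (the function names are ours and the random image is a placeholder):

import numpy as np

def whiten(image):
    # Rescale so the image has zero mean and unit standard deviation.
    image = image.astype(np.float32)
    return (image - image.mean()) / (image.std() + 1e-8)

def min_max(image):
    # Rescale pixel intensities to the [0, 1] range.
    image = image.astype(np.float32)
    return (image - image.min()) / (image.max() - image.min() + 1e-8)

# Example on a placeholder 16-bit-like image.
img = np.random.randint(0, 65535, size=(256, 256))
print(whiten(img).mean(), whiten(img).std(), min_max(img).min(), min_max(img).max())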

3.2 Generating the Annotations

In supervised ML/DL, the annotation task is a critical step (see Note 3). It provides the necessary ground truth annotations with which the algorithms will be trained and validated [28]. The performance of the models is expected to be directly related to the quality of the annotations they are provided with; therefore, the quality and precision of the annotations are a key aspect of the training process.

A common open-source software package used in microscopy and the life sciences is Fiji (or ImageJ) [42]. Fiji provides various tools to facilitate the annotation process and dataset generation. For instance, the recent demonstration of U-Net-assisted segmentation of microscopy images [13] provides a Fiji plug-in to annotate and train a model entirely within the Fiji software. Other ML/DL tools such as Ilastik [5] or Cellpose [46] provide their own software to facilitate annotation and training. In some specific cases, the development of dedicated annotation software may be required to handle the specific needs of a particular dataset [3, 27].

The type of annotation used to train the model depends on the analysis task. For example, in an enumeration task (e.g. counting cells), marking the position of each cell should be sufficient to train an algorithm to identify single cells in an image [13, 50]. Precise annotation of the object boundaries is often required for quantitative analysis of morphological parameters (e.g. size, perimeter, number of sub-structures). Such precise information on object boundaries provides more detail about the structure but inevitably results in longer manual annotation times.


In a fully supervised training setting, the annotations need to precisely follow the contours of the annotated objects. There are numerous examples of segmentation of biological objects that use fully supervised annotations and achieve high segmentation performance [5, 13, 16, 25, 26, 46]. In a weakly supervised setting, the precision of the annotations is deliberately reduced to alleviate the annotation burden. Reducing the level of supervision shortens the annotation time [6, 27, 29], but it can also lead to less precise segmentation maps from the DL/ML models in comparison to their fully supervised counterparts [20, 35]. In some cases, weakly supervised learning may be needed to increase the possible number of annotated samples. Examples of weak supervision include polygonal bounding boxes [27], rectangular bounding boxes [51], or scribbles [5]. The level of supervision and the precision of the annotations will have a strong impact on the performance of the ML/DL models on a specific dataset. When training in a weakly supervised setting, it may be very useful to precisely re-annotate the testing dataset in order to better quantify the performance of the model.

Prior knowledge of the type of objects to segment is very important in the design of an optimal DL/ML analysis strategy. There are two common types of segmentation tasks: (1) semantic and (2) instance segmentation. For semantic segmentation, each structure type needs not only to be precisely segmented but also classified. This means that each structure will be associated with a specific ID during the annotation task (e.g. cell body, nucleus, neurites, etc.). Semantic segmentation is particularly interesting in contexts where several structures, patterns, or textures are present within a single image (see Fig. 1). Instance segmentation refers to the segmentation of multiple instances of a single object category within the field of view (e.g. single cells). This requires that all objects be annotated with a different ID in order to separate each instance during training. Instance segmentation is interesting in contexts where both the enumeration of objects (e.g. cell counting) and their morphological parameters (e.g. area, shape) are required [5, 13, 26, 46, 47].

3.3 Training the Models

In ML/DL, the complete set of images and their associated annotations forms the dataset (see Fig. 2). The images contained within the dataset are used to train the supervised models, i.e. to learn a mapping function between the input image and the desired output (see Note 4). To solve a segmentation task with ML/DL approaches, the dataset is split into three parts: training, validation, and testing [2] (see Note 5). As a rule of thumb, the split ratio should be around 70%, 15%, and 15%, respectively (a minimal sketch of such a split is given below).
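A minimal sketch of a random, image-level 70/15/15 split (the file names and ratios are illustrative; any reproducible split at the level of whole images serves the same purpose):

import numpy as np

filenames = [f"image_{i:03d}.tif" for i in range(100)]  # placeholder file names

rng = np.random.default_rng(seed=0)  # fixed seed for a reproducible split
shuffled = rng.permutation(filenames)

n_train = int(0.70 * len(shuffled))
n_valid = int(0.15 * len(shuffled))

train_set = shuffled[:n_train]
valid_set = shuffled[n_train:n_train + n_valid]
test_set = shuffled[n_train + n_valid:]  # held out for the final evaluation only
print(len(train_set), len(valid_set), len(test_set))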

The training and validation sets are both used during the training phase. The training set allows the model to build a set of rules or a representation to predict the segmentation masks. The validation set is not seen by the learning model during training: the representation is not updated using the validation dataset, but it allows the user to approximate the performance on unseen data, fine-tune the hyperparameters, track overfitting, or apply early-stopping. The testing dataset is used only for the final evaluation of the trained and tuned models. Images of the testing dataset must not be part of the training/validation datasets, and the testing dataset should under no circumstances be used to fine-tune a trained model or to choose the best state of a model during training.

Fig. 2 Schematic representation of a quantitative microscopy analysis pipeline for an automated segmentation task

Two different approaches for automated segmentation of biological structures will be discussed. The first approach uses a classical machine learning procedure, more precisely a random forest [2]. The second approach is based on DL models. In both cases, the task is to classify each pixel in the image (e.g. structure or background) to generate a precise segmentation map of the objects.

3.3.1 Random Forest

In a random forest (RF) classifier [8], multiple filters are used to extract features or textures from the image [44]. These filters are combined to produce a feature vector associated with each position (pixel) in the image [48]. The training of the algorithm aims at classifying the pixels based on this set of input features. Since the number of features for each pixel is large, the RF classifier can learn rules that perform well across different images. This property is referred to as the generalization capacity of the model. Several all-in-one feature extraction formulas, for example the combination of Gaussian and Hessian filters or the use of histograms of gradients, have been developed [48]. The RF algorithm is a very accessible tool to segment biological structures in microscopy images thanks to all-in-one analysis frameworks such as Ilastik [5, 45] and the Trainable Weka Segmentation [4, 7].

Fig. 3 Training steps of a RF classifier on a segmentation task. (a) Images from the dataset are annotated at the pixel level and various features are extracted. (b) The features from each annotated pixel are extracted from the image and serve as a training sample for the RF algorithm. (c) At inference, the same features are extracted from the image and the trained RF algorithm infers the ID of each pixel

To solve a segmentation task, the training of the RF is done at the pixel level, i.e. each pixel is considered as a training example (see Fig. 3). Multiple random decision trees (the forest) are constructed from the features extracted at each annotated pixel in the training dataset [2, 5]. Each tree aims at predicting the class of a given pixel, and a majority vote over the trees generates the final prediction of the class (i.e. ID) of the pixel. Neighboring pixels with the same ID are then considered part of the same object, which creates the segmentation map. In this case, the validation set is used to compare the performance obtained with different hyperparameters of the random forest algorithm (e.g. number of trees, depth, extracted features, etc.). Following the training and tuning phases, the performance of the random forest is reported on the testing dataset. Because training is performed on individual pixels, the number of objects that must be annotated to obtain a suitable performance is often small compared to other approaches (e.g. deep neural networks).

RF algorithms, for example using the Ilastik software, were shown to perform well on a variety of different segmentation tasks [5, 10, 44]. While this approach is capable of segmenting various biological structures, it is limited by the features that the user chooses to extract. Therefore, complex feature engineering may be required to solve a particular segmentation task if the random forest model fails to learn from the default extracted features.
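A minimal illustration of pixel-wise RF training with scikit-image filters and scikit-learn (the image, annotations, and choice of features below are placeholders; frameworks such as Ilastik provide a much richer feature bank and an interactive workflow):

import numpy as np
from skimage.filters import gaussian, sobel
from sklearn.ensemble import RandomForestClassifier

# Placeholder image and pixel-level annotations (0 = background, 1 = structure).
image = np.random.rand(128, 128)
labels = (image > 0.6).astype(int)

# One feature vector per pixel: raw intensity, smoothed intensity, edge strength.
features = np.stack(
    [image, gaussian(image, sigma=2), sobel(image)], axis=-1
).reshape(-1, 3)

# Train on every annotated pixel, then predict an ID for each pixel.
rf = RandomForestClassifier(n_estimators=100, max_depth=10)
rf.fit(features, labels.ravel())
segmentation = rf.predict(features).reshape(image.shape)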

3.3.2 Deep Learning

DNNs were introduced recently to perform automated microscopy image analysis [38]. DNNs are excellent feature extractors that can solve complex tasks such as instance and semantic segmentation [14, 28]. A DNN model is a combination of different layers (i.e. filters) that provides an abstract representation of its input, here an image, in order to solve different types of tasks (e.g. classification, enumeration, segmentation, etc.). The following explanation of a DNN model applied to microscopy image analysis assumes 2-dimensional input data (see Fig. 4).

As a first step, feature extraction is performed with convolution kernels, or filters, of size N × N parameterized by weights θ. A size of 3 × 3 is often used in the literature [43]. The kernel is slid across the entire field of view with a predefined step size (generally 1 pixel) (see Fig. 4a). At each step, the kernel is multiplied element-wise with the section of the image it overlaps and the products are summed. The results of these operations are combined to generate an activation map. The successive combination of multiple convolution layers allows the efficient extraction of activation maps of increasing complexity. DNN models are inspired by the human visual system, i.e. there is a hierarchical composition of activations that leads to the formation of an object [28, 34]. An object is composed of sub-units that are an assembly of several textures, which are themselves a combination of edges, intensities, and colors. Thus, to carry out the segmentation of an object, the DNN model needs to sequentially extract this relevant information from the image [34].

Convolution kernels extract local information (N × N) from the image and thus have a small receptive field, which does not allow the combination of features that are spatially distant in the image. To cope with this small receptive field, pooling layers are introduced to merge extracted features, for instance to combine edges into textures (see Fig. 4b, c). A pooling layer keeps the maximal or average numerical value of the image within a small kernel window. A max-pooling layer of size 2 × 2 is often used with a step of 2 in both directions to downsample the image by a factor of 2. This effectively increases the receptive field of the next convolution kernel. Sequentially combining multiple convolution layers with pooling layers results in a network that is capable of associating textures into sub-units and sub-units into a complete structure (i.e. an object) [28]. The number of convolution filters is generally doubled following each contracting step to increase the representation capability of the network.
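A minimal PyTorch sketch of these two building blocks, showing how a 3 × 3 convolution produces a set of activation maps and how 2 × 2 max-pooling halves the spatial size (channel counts and image size are arbitrary):

import torch
import torch.nn as nn

x = torch.randn(1, 1, 256, 256)  # one single-channel 256 x 256 image

conv = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, padding=1)
pool = nn.MaxPool2d(kernel_size=2, stride=2)

features = torch.relu(conv(x))  # 16 activation maps, still 256 x 256
downsampled = pool(features)    # 16 activation maps, now 128 x 128
print(features.shape, downsampled.shape)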


Fig. 4 Schematic of the components of DNNs. (a) A convolution kernel is slid across the entire field of view to extract a feature map. (b) A pooling layer downsamples the input image to merge semantically similar features. (c) The receptive field of the convolution kernel increases with the depth of the DL model

Normalization layers are applied after the convolution layers to stabilize the training of the models [18] and are followed by non-linear activation functions [2]. There are multiple non-linear activation functions [14]; the most widely used is the rectified linear unit (ReLU) [33]

ReLU(a) = max(0, a),    (1)

where a is the value of the activation from the convolution layer. Successively combining the convolution layer, normalization layer, non-linear activation function, and pooling layer allows the DNN to obtain a rich representation of the input image. As mentioned above, each pooling layer downscales the input image. Hence, using multiple layers effectively results in a segmentation map that is smaller than the original image. This is not desired, as the goal of the DNN is to perform a pixel-wise segmentation. In order to obtain an output segmentation with pixel-wise mapping, upsampling layers are used [31]. The DNN first encodes the input image into a rich latent representation from which it is able to decode a dense segmentation [31]. One of the most widely used architectures in the field of biomedical image analysis is the U-Net model [13, 38]. The U-Net model uses a similar procedure with an encoder and a decoder component. In the U-Net implementation, however, skip connections between layers at the same depth facilitate the propagation of spatial information through the network [38].

Using the techniques described above, DNNs are able to automatically extract features from the input image. However, to learn which features should be extracted, the network requires feedback. This feedback is obtained by comparing the prediction of the model with the ground truth annotations, which results in a pixel-wise classification error. After receiving many examples, the DNN will be able to generate a segmentation map that is similar to the one generated by an expert on the same image. The pixel-wise classification error is computed using an objective function (or loss function, e.g. cross-entropy or mean squared error) [14]. The error is then backpropagated through the model using the backpropagation algorithm [14]. The backpropagation of the error through the DNN allows the adjustment of the weights θ in order to minimize the classification error. Adjustments made to the weights depend on the local gradient of the objective function: the weights are updated in the direction in which the error diminishes [14].

The training of the DNN is an iterative process. At each iteration, the network predicts the segmentation masks on a subset of images and compares its predictions with the ground truth annotations. The objective function is calculated between the predicted and ground truth segmentation masks. Finally, an algorithm such as stochastic gradient descent updates the weights θ of the DNN

θ ← θ − ηg,    (2)

where η is the learning rate and g is the gradient estimate. While convergence of the objective function to the global minimum is not guaranteed, it should converge to a local minimum [14].
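A minimal PyTorch sketch of one such training iteration on a tiny, illustrative encoder-decoder (the architecture, loss, and data are placeholders and not the U-Net of [38]):

import torch
import torch.nn as nn

# Tiny encoder-decoder producing one logit per pixel (illustrative only).
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                              # encode: downsample by 2
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode="nearest"),  # decode: back to input size
    nn.Conv2d(16, 1, kernel_size=1),
)

loss_fn = nn.BCEWithLogitsLoss()                          # pixel-wise objective
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # update rule of Eq. 2

# One batch of placeholder images and ground truth masks.
images = torch.randn(4, 1, 64, 64)
masks = torch.randint(0, 2, (4, 1, 64, 64)).float()

prediction = model(images)         # forward pass
loss = loss_fn(prediction, masks)  # compare prediction with the annotations
optimizer.zero_grad()
loss.backward()                    # backpropagate the pixel-wise error
optimizer.step()                   # update the weights
print(loss.item())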

3.4 Evaluation

Once the model is trained, the next step is to report its performance on an unseen set of images, i.e. the testing set. Again, it is of the utmost importance that no images in the testing set be included in the training and validation sets. To evaluate the performance, it is common procedure to report different metrics which provide information on the errors of the model. The predictions of the model are compared to ground truth annotations carefully generated by an expert beforehand (see Notes 6–8). One approach is to report the confusion matrix (CM) of the predicted segmentation. A CM reports the different errors that the model makes on the testing set, i.e. true positives/negatives and false positives/negatives. In the case of multi-class predictions, the CM allows the user to see which classes are more difficult for the DNN to predict. From the CM, the user can also derive common metrics such as precision, recall, and F1-score [10, 52]. Other metrics are often used in the literature, for instance the intersection over union (IOU), or Jaccard index, which compares the predicted segmentation with the ground truth annotations [17, 23, 52] (a minimal sketch of these pixel-level metrics is given below). In the case of instance segmentation, further metrics have been derived to account simultaneously for the detection of an object and for the precision of its segmentation. In this context, it is common to report a detection metric (F1-score) as a function of the level of association (IOU) of a predicted object with the objects from the ground truth maps [10].
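A minimal NumPy sketch of these pixel-level metrics for a binary segmentation (real evaluations typically rely on dedicated, well-tested implementations):

import numpy as np

def pixel_metrics(pred, truth):
    # pred and truth are boolean masks of the same shape.
    tp = np.sum(pred & truth)   # true positives
    fp = np.sum(pred & ~truth)  # false positives
    fn = np.sum(~pred & truth)  # false negatives
    precision = tp / (tp + fp + 1e-8)
    recall = tp / (tp + fn + 1e-8)
    f1 = 2 * precision * recall / (precision + recall + 1e-8)
    iou = tp / (tp + fp + fn + 1e-8)  # intersection over union (Jaccard index)
    return precision, recall, f1, iou

# Placeholder masks: a square object and a slightly shifted prediction.
truth = np.zeros((64, 64), dtype=bool)
truth[20:40, 20:40] = True
pred = np.zeros((64, 64), dtype=bool)
pred[22:42, 22:42] = True
print(pixel_metrics(pred, truth))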


4 Notes

Some technical aspects of the development of an automated segmentation pipeline are discussed below.

1. Selection of hyperparameters
The absence of a rule of thumb for the selection of hyperparameters can be an overwhelming challenge when first developing ML/DL methods. A good place to start is to look at publications that use a similar method for a related task and begin with the published hyperparameters. If possible, the hyperparameters can be explored using a grid-search approach (see Subheading 2.7). In the case of DL models requiring long training periods, a grid search may not be feasible; in that case, the hyperparameters can be tuned by inspecting the evolution of the validation and training losses. If the loss oscillates significantly between high and low values, consider lowering the learning rate or increasing the batch size. If the loss converges but the actual performance does not, consider using a different objective function better adapted to the specific task.

2. Data augmentation
Data augmentation consists of artificially increasing the number of training examples. For example, an image can be flipped or rotated, and the intensity of its pixels may be scaled, all resulting in slightly different training examples (see the short sketch after this Notes list). More advanced data augmentation techniques may apply elastic transformations to an image [38]. Data augmentation should be done carefully to preserve the semantics of the image. For instance, a digit 6 rotated by 180° becomes a 9, so this augmentation would not be suitable for the classification of digits.

3. Validation of the ground truth annotations
Proofing of the annotations should not be neglected, as discrepancies within the annotations will negatively impact the trained algorithms. The annotations should thus be checked for missed cases or errors. Validation of the annotations by multiple experts, or a dedicated user study, may be required to assess intra- and inter-expert variability. Variations between expert annotations should be considered when analyzing the performance of a given ML/DL approach.

4. Pretraining
Pretraining a DNN on another dataset can reduce the effective number of training samples required from a new dataset. To ensure reliable and reproducible performance, the dataset on which the model was pretrained should be similar in nature to the new dataset. For instance, it was recently shown by [37] that pretraining a model on natural images does not improve the performance of a model on a biomedical detection task. A similar consideration should be made when training on microscopy images.

5. Training/Validation datasets
The images used in the training and validation datasets cannot be used for the quantitative analysis after training, nor for the evaluation of the training performance. The trained models might produce unrepresentative results on these images, as they were previously shown in the training phase, which would yield a biased quantitative analysis of the structures of interest. For example, higher detection rates or more precise segmentation masks are generally obtained for the training/validation datasets than for the testing dataset.

6. Small structures
For microscopy images, the structures of interest are often relatively small compared to the field of view (e.g. single cell, bacteria, sub-cellular structure, etc.). Most of the performance evaluation metrics of DL models were designed for natural images, in which the structures of interest occupy a large portion of the image. Because a larger proportion of a small object's pixels lie on its border, weaker performance is often measured for the segmentation of small structures. Metrics that account for this border effect may need to be chosen depending on the task at hand [52].

7. Inter-expert agreement
The inter-expert agreement should be reported when possible. This metric serves as a gold standard on how two different experts would extract the quantitative information from the same image. Ideally, a ML/DL model should attain a performance similar to or higher than the inter-expert agreement.

8. Class imbalance
In microscopy images there is often a natural class imbalance toward the background pixels. As mentioned above, the objects of interest are often small compared to the field of view, resulting in an increased number of background pixels. This can be visualized by plotting the distribution of the pixel intensities. It is important to acknowledge the class imbalance in the evaluation phase by choosing metrics that are less sensitive to this imbalance. The pixel classification accuracy is a metric which is sensitive to class imbalance: for example, if the structures of interest compose only 5% of the image, then a model predicting background over the entire image would obtain a classification accuracy of 95%. The precision, recall, F1-score, and IOU metrics are less sensitive to class imbalance than the pixel classification accuracy, as they do not depend on the number of background pixels.
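Referring to Note 2, a minimal NumPy sketch of simple augmentations (flips, 90° rotations, and intensity scaling); more elaborate transforms, e.g. elastic deformations, are provided by dedicated libraries:

import numpy as np

def augment(image, mask, rng):
    # Apply the same geometric transform to the image and its annotation mask.
    if rng.random() < 0.5:
        image, mask = np.fliplr(image), np.fliplr(mask)  # horizontal flip
    k = int(rng.integers(0, 4))
    image, mask = np.rot90(image, k), np.rot90(mask, k)  # rotation by k * 90 degrees
    # Intensity scaling is applied to the image only, never to the annotations.
    image = image * rng.uniform(0.8, 1.2)
    return image, mask

rng = np.random.default_rng(seed=0)
img = np.random.rand(64, 64)
msk = (img > 0.7).astype(np.uint8)
aug_img, aug_msk = augment(img, msk, rng)
print(aug_img.shape, aug_msk.sum())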


Acknowledgements

We thank Annette Schwerdtfeger for careful proofreading of the manuscript. Funding was provided by grants from the Natural Sciences and Engineering Research Council of Canada and the Neuronex Initiative (National Science Foundation, Fonds de recherche du Québec – Santé). F.L.C. is a Canada Research Chair Tier II; A.B. and C.B. are both supported by a PhD scholarship from the Fonds de recherche du Québec – Nature et technologies (FRQNT) and an excellence scholarship from the FRQNT strategic cluster UNIQUE, and C.B. by a Leadership and Scientific Engagement Award from Université Laval.

References

1. Allan C, Burel JM, Moore J et al (2012) OMERO: flexible, model-driven data management for experimental biology. Nat Methods 9(3):245–253
2. Alpaydin E (2009) Introduction to machine learning. MIT Press, Cambridge, MA
3. Anderson J, Mohammed S, Grimm B et al (2011) The Viking viewer for connectomics: scalable multi-user annotation and summarization of large volume data sets. J Microsc 241(1):13–28
4. Arganda-Carreras I, Kaynig V, Rueden C et al (2017) Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification. Bioinformatics 33(15):2424–2426
5. Berg S, Kutra D, Kroeger T et al (2019) ilastik: interactive machine learning for (bio)image analysis. Nat Methods 1–7
6. Bilodeau A, Delmas C, Parent M et al (2021) MICRA-Net: MICRoscopy Analysis Neural Network to solve detection, classification, and segmentation from a single simple auxiliary task. bioRxiv 2021.06.29.448970
7. Boykov Y, Kolmogorov V (2004) An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Trans Pattern Anal Mach Intell 26:1124–1137
8. Breiman L (2001) Random forests. Mach Learn 45(1):5–32
9. Caicedo JC, Goodman A, Karhohs KW et al (2019a) Nucleus segmentation across imaging experiments: the 2018 Data Science Bowl. Nat Methods 16(12):1247–1253
10. Caicedo JC, Roth J, Goodman A et al (2019b) Evaluation of deep learning strategies for nucleus segmentation in fluorescence images. Cytom Part A 95(9):952–965

11. Cordts M, Omran M, Ramos S et al (2016) The Cityscapes dataset for semantic urban scene understanding. arXiv:1604.01685 [cs]
12. Cortes C, Mohri M, Rostamizadeh A (2012) L2 regularization for learning kernels. arXiv preprint arXiv:1205.2653
13. Falk T, Mai D, Bensch R et al (2019) U-Net: deep learning for cell counting, detection, and morphometry. Nat Methods 16(1):67
14. Goodfellow I, Bengio Y, Courville A (2016) Deep learning. MIT Press. http://www.deeplearningbook.org
15. Gupta A, Harrison PJ, Wieslander H et al (2019) Deep learning in image cytometry: a review. Cytom Part A 95(4):366–380
16. He K, Gkioxari G, Dollár P et al (2018) Mask R-CNN. arXiv:1703.06870 [cs]
17. Hossin M, Sulaiman M (2015) A review on evaluation metrics for data classification evaluations. Int J Data Min Knowl Manag Process 5(2):1
18. Ioffe S, Szegedy C (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167 [cs]
19. Kawaguchi K, Kaelbling LP, Bengio Y (2017) Generalization in deep learning. arXiv preprint arXiv:1710.05468
20. Khoreva A, Benenson R, Hosang J et al (2017) Simple does it: weakly supervised instance and semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 876–885
21. Kiefer J, Wolfowitz J et al (1952) Stochastic estimation of the maximum of a regression function. Ann Math Stat 23(3):462–466


22. Kingma DP, Ba J (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980
23. Kotsiantis S, Kanellopoulos D, Pintelas P et al (2006) Handling imbalanced datasets: a review. GESTS Int Trans Comput Sci Eng 30(1):25–36
24. Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. In: Advances in neural information processing systems, pp 1097–1105
25. Kromp F, Fischer L, Bozsaky E et al (2019) Deep learning architectures for generalized immunofluorescence based nuclear image segmentation. arXiv:1907.12975 [cs, q-bio]
26. Kromp F, Bozsaky E, Rifatbegovic F et al (2020) An annotated fluorescence image dataset for training nuclear segmentation methods. Sci Data 7(1):262
27. Lavoie-Cardinal F, Bilodeau A, Lemieux M et al (2020) Neuronal activity remodels the F-actin based submembrane lattice in dendrites but not axons of hippocampal neurons. Sci Rep 10(1):11960
28. LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521(7553):436
29. Lin TY, Maire M, Belongie S et al (2014) Microsoft COCO: common objects in context. In: European conference on computer vision. Springer, pp 740–755
30. Linkert M, Rueden CT, Allan C et al (2010) Metadata matters: access to image data in the real world. J Cell Biol 189(5):777–782
31. Long J, Shelhamer E, Darrell T (2015) Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 3431–3440
32. Moen E, Bannon D, Kudo T et al (2019) Deep learning for cellular image analysis. Nat Methods 1–14
33. Nair V, Hinton GE (2010) Rectified linear units improve restricted Boltzmann machines. In: ICML
34. Olah C, Mordvintsev A, Schubert L (2017) Feature visualization. Distill. https://distill.pub/2017/feature-visualization
35. Papandreou G, Chen LC, Murphy KP et al (2015) Weakly- and semi-supervised learning of a deep convolutional network for semantic image segmentation. In: Proceedings of the IEEE international conference on computer vision, pp 1742–1750
36. Qian N (1999) On the momentum term in gradient descent learning algorithms. Neural Netw 12(1):145–151

37. Raghu M, Zhang C, Kleinberg J et al (2019) Transfusion: understanding transfer learning for medical imaging. In: Wallach H, Larochelle H, Beygelzimer A, d'Alché-Buc F, Fox E, Garnett R (eds) Advances in neural information processing systems, vol 32. Curran Associates, Inc., pp 3347–3357
38. Ronneberger O, Fischer P, Brox T (2015) U-Net: convolutional networks for biomedical image segmentation. In: International conference on medical image computing and computer-assisted intervention. Springer, pp 234–241
39. Rübel O, Tritt A, Dichter B et al (2019) NWB:N 2.0: an accessible data standard for neurophysiology. bioRxiv 523035
40. Sarkans U, Chiu W, Collinson L et al (2021) REMBI: Recommended Metadata for Biological Images – enabling reuse of microscopy data in biology. Nat Methods 1–5
41. Schermelleh L, Ferrand A, Huser T et al (2019) Super-resolution microscopy demystified. Nat Cell Biol 21(1):72
42. Schindelin J, Arganda-Carreras I, Frise E et al (2012) Fiji: an open-source platform for biological-image analysis. Nat Methods 9(7):676
43. Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556
44. Sommer C, Gerlich DW (2013) Machine learning in cell biology – teaching computers to recognize phenotypes. J Cell Sci 126(24):5529–5539
45. Sommer C, Straehle C, Koethe U et al (2011) Ilastik: interactive learning and segmentation toolkit. In: 2011 IEEE international symposium on biomedical imaging: from nano to macro. IEEE, pp 230–233
46. Stringer C, Wang T, Michaelos M et al (2021) Cellpose: a generalist algorithm for cellular segmentation. Nat Methods 18(1):100–106
47. Ulman V, Maška M, Magnusson KE et al (2017) An objective comparison of cell tracking algorithms. Nat Methods 14(12):1141–1152
48. Vicar T, Balvan J, Jaros J et al (2019) Cell segmentation methods for label-free contrast microscopy: review and comprehensive comparison. BMC Bioinformatics 20(1):360
49. Vohs KD, Baumeister RF, Schmeichel BJ et al (2008) Making choices impairs subsequent self-control: a limited-resource account of decision making, self-regulation, and active initiative. J Pers Soc Psychol 94(5):883–898

50. Xie Y, Xing F, Kong X et al (2015) Beyond classification: structured regression for robust cell detection using convolutional neural network. In: International conference on medical image computing and computer-assisted intervention. Springer, pp 358–365
51. Yang L, Zhang Y, Zhao Z et al (2018) BoxNet: deep learning based biomedical image segmentation using boxes only annotation. arXiv preprint arXiv:1806.00593


52. Yeghiazaryan V, Voiculescu ID (2018) Family of boundary overlap metrics for the evaluation of medical image segmentation. J Med Imaging 5(1):015006
53. Bártová E, Šustáčková G, Stixová L, Kozubek S, Legartová S, Foltánková V (2011) Recruitment of Oct4 protein to UV-damaged chromatin in embryonic stem cells. PLoS One 6(12):e27281

INDEX A Abbe, E. ................................................................ 254, 306 Adaptive focus ...........................................................65, 66 Antibody IgG...................... 147, 148, 203, 228, 268, 308, 312 labeled................................... 176, 177, 290, 292–294 nanobody .......................................230, 268, 284, 312 primary.................................................. 18, 44, 45, 54, 93, 94, 96, 147, 162, 163, 203, 204, 206, 228, 232, 264–268, 275, 278, 312 secondary .............................................. 18, 44, 54, 93, 95, 96, 147, 162, 203, 205, 206, 208, 228, 230, 232, 244, 256, 257, 262, 265, 266, 275, 308, 312, 314 Apoptosis ............................................. 100–103, 106–111 Autofluorescence...............................................29, 84, 96, 144, 145, 151–154, 158, 175, 219, 285, 314

B Bacteria .................................................59, 61, 62, 66, 67, 70, 71, 104–106, 110–112, 166, 362 Beads pathogen mimic ....................... 61, 71, 105, 108, 112 Tetraspeck .................................................. 44–46, 183, 186, 191, 309, 312, 314 Bimolecular fluorescence complementation (BiFC) ......................................................91–96 Biosensor ............................................... 99–112, 339, 342 Brightness ...................................... 4, 7–9, 16, 28, 32, 33, 55, 63, 64, 70, 122, 123, 153, 164, 174, 230, 232, 237, 241, 243, 256, 257, 337, 340

C Cell culture A549 .......................................................100–101, 103 AtT20/D16-16....................................................... 258 BJ-5ta..................................................... 129, 131, 132 endothelial cells .........................................18, 34, 116, 118, 122, 332, 335, 338 HEK293T..................................................... 83, 84, 88 HeLa ...........................................................92, 94, 342 J774A.1 .................................................58–61, 63, 66, 69, 101, 103–105, 108–112

MDCK ................................................... 43, 44, 48, 53 RAW264.7............................................................... 299 U2OS........................... 228, 231, 239, 242, 244, 310 Collagen........................................................144, 176–178 Confocal microscopy ........................................18, 24, 25, 41–55, 59, 87, 101, 129, 135, 167, 168, 175–177, 182, 211, 217, 225–228, 242, 244, 254, 256, 257, 291 Coverslip ...............................................16, 43, 58, 83, 92, 100, 115, 128, 150, 169, 186, 213, 228, 258, 274, 308 CRISPR ....................................................... 228, 268, 273 Cryostat ....................................................... 146, 148, 162

D Darkfield ............................................................................ 4 Deconvolution ..............19, 44, 46–48, 55, 66, 217, 267 Differential Interference Contrast (DIC) .................4, 13, 59, 66, 101, 107–110, 133, 134 Diffraction .................................................. 16, 17, 19, 42, 44, 47, 50, 168, 254, 256, 272, 273, 279–282, 306, 307, 322 Diffusivity ...................................................................... 121 Digital images bit depth .................................................................... 26 CCD.....................................................................19, 83 CMOS............................................................... 59, 102 compression............................................................... 26 EM-CCD .......................................................... 59, 102 lookup table (LUT) .................................................. 26 Nyquist theorem .................................................23, 25 pixels .....................................................20, 23–26, 120 Undersampling/under sampling/ under-sampling........................................23, 25 Direct Stochastic Optical Reconstruction Microscopy (dSTORM) ............................... 272–274, 276, 278, 279, 281–284, 316 Drosophila ..................................................................... 211

E Electron microscopy ..................116, 225, 305, 307, 308 Expansion ............................................. 65, 106, 212–214, 216–219, 221, 307, 339 Expansion Microscopy (ExM).....................211–221, 307

Bryan Heit (ed.), Fluorescent Microscopy, Methods in Molecular Biology, vol. 2440, https://doi.org/10.1007/978-1-0716-2051-9, © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Science+Business Media, LLC, part of Springer Nature 2022

367

FLUORESCENT MICROSCOPY

368 Index

Exposure time ........................................33, 64, 109, 163, 189, 191, 230, 233–235, 248, 279, 285, 312, 315, 316 Extracellular matrix (ECM).......................................... 125

G

F

H

Fiducial marker/reference marker ............ 306, 309–311, 314, 317 Fixation ...................................................57, 96, 146, 149, 198–200, 203–205, 219, 227, 228, 230, 231, 248, 259, 260, 273, 275, 277, 278, 284, 285, 308, 312, 314, 323 FLIM-FRET ..............................................................82, 86 Fluorescence in situ hybdridization (FISH).................................. 42–45, 48, 49, 53 Fluorescence lifetime imaging microscopy (FLIM) .....................................................77–89 Fluorophores ..........................................4, 42, 57, 79, 91, 100, 116, 129, 144, 167, 182, 200, 219, 226, 255, 271, 306, 332 CFP ................................................................. 151, 152 DAPI......................................... 53, 70, 152, 230, 264 emission .......................................... 4, 6–8, 11, 13, 14, 28, 57, 80, 129, 168, 230, 255, 260, 263, 267, 271, 273, 312, 316 excitation ............................................ 6–8, 10, 11, 13, 14, 53, 57, 58, 63, 64, 70, 79, 82, 86, 112, 167, 168, 179, 194, 260, 262, 263, 266, 315 excited state ................................................5, 6, 58, 79 fluorescent protein ...................................... 60, 64, 70, 226, 230, 261 GFP .............................................................. 13, 14, 60, 144, 179, 230, 231, 261 ground state ...................................... 5, 6, 58, 79, 255 Jablonski diagram................................................5, 7, 9 lifetime .......................................................... 79–82, 86 mCherry ................................... 57, 60, 105, 108, 109 mNeonGreen........................................................... 230 spectrum ........................................... 6, 8, 10, 11, 144, 167, 168, 260, 267, 273, 316 Stokes shift,6 ............................................................... 8 wavelength.......................................... 6–8, 10, 11, 14, 57, 58, 70, 112, 167, 168, 179, 260, 262–264, 266, 267, 269, 311, 316 YFP............................................................14, 200, 261 Focus........................................................... 12, 13, 19, 22, 30–32, 35, 41, 65, 66, 107–109, 120, 133, 135, 159, 167, 168, 174, 175, 179, 187, 188, 212, 226, 234–237, 247, 254, 255, 276, 285, 293–296, 306, 315, 316, 350, 353 Fo¨rster resonance energy transfer/fluorescence resonance energy transfer (FRET) ........77–89, 91, 100, 111, 126, 342

HaloTag ................................60, 273–275, 277, 281–284 HIV-1 ................................................................... 144, 145

Gelation ........................................................212–215, 220 Github.......................................................... 247, 330, 331

I IgG...............................................59, 61, 62, 67, 68, 101, 105, 109, 147, 148, 203, 228, 268, 308, 312 Image acquisition software ................................... 30, 229, 234, 236, 245 Image analysis 3D ....................................................... 41–55, 65, 175, 218, 230, 245, 306, 314 annotations ..................................................... 192, 350 colocalization................................. 42, 47–49, 67, 282 deep learning ..........................................350, 358–360 deep neural networks .............................................. 357 intensity ......................... 67, 110, 133, 134, 153, 156 kymograph........................................................ 67, 332 machine learning ............................................ 350, 358 masking.................................................................... 192 maximum intensity projection (MIP) ........... 133, 134 normalization .......................................................... 354 ratiometric ................................................67, 110, 342 region of interest (ROI) ................................. 68, 134, 157, 246, 295, 299 segmentation ..........................................184, 349–362 statistics...................................................................... 32 temporal color coding ............................................ 332 training ..........................................353, 356, 360, 362 validation ............................................... 356, 360, 362 volume ...........................................187, 192, 217, 322 Image analysis software Big Stitcher .............................................................. 184 BigWarp ................................................................... 217 CellProfiler .................................... 145, 147, 155–158 CIDRE..................................................................... 217 FIJI is just ImageJ (FIJI) ............................ 36, 60, 67, 68, 101, 110, 112, 129, 133, 147, 182, 184, 192, 194, 229, 248, 314, 331–335, 346, 354 Huygens...............................................................44, 46 ImageJ........................................ 36, 67, 68, 101, 110, 112, 129, 134, 145, 147, 151–161, 164, 217, 227–229, 233–235, 237–240, 242, 245, 248, 249, 291, 295, 314, 342, 354 Imaris .................................... 44, 46–50, 55, 200, 202 Intensify3D.............................................................. 217 LabKIT ........................................................... 184, 190 MaMuT ..................................................184, 191–194

FLUORESCENT MICROSCOPY Index 369 MATLAB .......................................... 44, 50, 117, 120, 129, 217, 218, 276, 281, 282 NanoJ....................................................................... 249 Open Microscopy Environment (OMERO) .......................................... 227, 353 particle image velocity........................... 127, 129, 134 SimFCS 2.0 .................................................... 291, 296 Stardist 2D............................................................... 184 TrackMate.............................................. 184, 190, 194 Immunofluorescence ........................................18, 28, 32, 34, 42–45, 48, 53, 148, 149, 265–266, 314 Influenza virus ...........................................................41–55 Intravital microscopy (IVM) ............................... 165–181

L
Leiden chamber, 59, 65, 67, 106–108
Light path, 9–15, 22, 30–32, 83, 87, 133
Light sheet/light sheet fluorescence microscopy (LSFM), 65, 181–194, 217, 330
Live-cell imaging, 29, 35, 57–71, 83, 106, 122, 225, 230, 232, 233, 273, 293
Low-density lipoprotein (LDL), 116–120

M
Macrophage, 58, 60–61, 66, 68–71, 101, 103–106, 109–112, 127, 176–178, 290, 293, 299, 301
Mean-squared displacement (MSD), 121
Mechanosensing, 125, 136
Mice
  anesthetization
    isoflurane, 169, 170
    ketamine, 169, 172, 174
    xylazine, 169, 172, 174
  biopsy punch, 169–171
  catheterization, 172–174, 178
  fixation, 198–200
  imaging window, 171, 174
  oxygen, 170
  perfusion, 198–200, 203–204
  surgery, 172, 173
Mucosal associated lymphoid tissues (MALT), 144
Multiphoton, 19, 25, 82, 167–169, 171, 175–178, 182, 194

P
Pathogen, 60, 100, 103, 104, 108–111, 143, 144, 166
Pathogen mimics, 61, 71, 101, 105, 108, 109, 111, 112
pH, 43, 52, 53, 58, 59, 65, 70, 81, 83, 85, 88, 100, 102, 122, 146, 183, 198, 200, 201, 213, 219, 228, 229, 258, 260, 275, 291, 308
Phagocytosis, 60–62, 64, 65, 68, 70, 102, 104, 106–111, 126
Phase contrast, 4, 13, 19, 21, 59, 66, 101, 107, 108
Photoactivated localization microscopy (PALM), 226, 272–274, 276–279, 281–283, 285, 306, 316
Photobleaching, 9, 24, 25, 29, 30, 35, 54, 58, 63, 65, 67, 70, 81, 86, 102, 106, 107, 112, 116, 117, 120, 162, 168, 175, 182, 187, 192–193, 230, 232, 247, 259, 260, 262, 263, 268, 279, 286, 293–295, 315, 329, 339
Photoconvert/photoconversion, 233, 248, 272
Photodamage, 13, 58, 62–64, 70, 103, 111
Photon budget, 64
Photophysical process, 4, 9
Photoprotectant, 63
Phototoxicity, 29, 30, 58, 63, 70, 107, 112, 182, 230, 233, 262, 268
Pinhole, 167, 168, 174, 175, 254, 294
Point spread function (PSF), 44–47, 53, 187, 188, 226, 254, 255, 266, 269, 284, 297, 302, 322
Polarization, 4
Polyacrylamide, 126, 128, 130–132, 134
Proteinase, 213, 216, 221

R
Raster image correlation spectroscopy (RICS), 289–302
Resolution, 7, 41, 58, 83, 106, 127, 167, 181, 211, 226, 254, 271, 294, 305

S
Sample illumination
  dichroic, 10
  field of view (FOV), 246
  filters, 15, 31, 205
  Koehler illumination/Köhler illumination, 13
  lasers, 268, 279, 289
  magnification, 21, 108, 109
  numerical aperture, 25
  oil objective, 16, 259
  water objective, 16
  white-light, 13, 30
Signal-to-noise ratio, 29, 46, 88, 106, 116, 175, 192, 266, 273, 312
Single-molecule localization microscopy (SMLM), 271–286
Spinning disk confocal, 24, 129, 135, 217, 226, 228, 242, 245
Stimulated emission depletion microscopy (STED)
  depletion wavelength, 260, 262, 263
  one color STED, 255, 260–262
  two color STED, 262–263
Structured illumination microscopy (SIM/3D-SIM), 19, 225, 306–314, 319, 321, 322
Subdiffraction/sub-diffraction, 254, 264, 319
Super resolution radial fluctuations (SRRF), 135, 225–250
Super-resolution/super resolution, 136, 225, 227, 230, 231, 241, 266, 305–323

T
Temporal resolution, 58, 63, 64, 106, 181, 191, 192, 294
Timelapse/time-lapse, 35, 59, 102, 107, 108, 110, 189, 191, 192, 294, 295, 329–347
Tissue clearing
  CLARITY, 198
  clearing solution, 201, 204
  hydrogel, 197–208
  hydrophobic, 197–208
  iDISCO+, 198–203, 205–206, 208
  PACT, 198–204, 206, 207
Total internal reflection (TIRF), 19, 116–121, 226, 274, 285, 315, 322
Total internal reflection fluorescence (TIRF) microscopy, 21, 115–123, 226, 315
Traction force microscopy (TFM), 125–137
Transcytosis, 115–123
Transfection, 58–61, 69, 84, 87, 88, 95, 101, 104, 105, 111, 216, 274, 276, 284
Transfection reagent, 59, 60, 69, 70, 83, 84, 92, 94, 101
Transgene, 60, 61

V
Vector, 83, 84, 132, 134, 356

Z
Zebrafish
  advantages, 181
  embryo screening, 186
  lines, 183
  mounts, 184
  viability, 192–193
Z-stack/Z stack, 31, 35, 46, 47, 65, 66, 71, 133, 176–178, 313