Fiber-Optic Measurement Techniques [2 ed.] 0323909574, 9780323909570

Fiber Optic Measurement Techniques is an indispensable collection of key optical measurement techniques essential for de


English Pages 844 [846] Year 2022


Table of contents :
Front Cover
Fiber-Optic Measurement Techniques
Copyright
Contents
Preface to first edition
Preface to the second edition
Chapter 1: Fundamentals of optical devices
1.1. Introduction
1.2. Laser diodes and LEDs
1.2.1. Pn junction and energy diagram
1.2.2. Direct and indirect semiconductors
1.2.3. Carrier confinement
1.2.4. Spontaneous emission and stimulated emission
1.2.5. Light-emitting diodes (LEDs)
1.2.5.1. P-I curve
1.2.5.2. Modulation dynamics
1.2.6. Laser diodes (LDs)
1.2.6.1. Rate equations
1.2.6.2. Steady state solutions of rate equations
1.2.6.3. Threshold carrier density
Threshold current density
P-J relationship above threshold
Side-mode suppression ratio (SMSR)
Turn-on delay
Small-signal modulation response
Laser noises
Relative intensity noise (RIN)
Phase noise
Mode partition noise
1.2.7. Single-frequency semiconductor lasers
1.2.7.1. DFB laser diode
1.2.7.2. External cavity laser diode
1.2.7.3. Integrated tunable lasers
1.3. Photodetectors
1.3.1. Pn-junction photodiodes
1.3.2. Responsivity and bandwidth
1.3.3. Electrical characteristics of a photodiode
1.3.4. Photodetector noise and SNR
1.3.4.1. Noise-equivalent power (NEP)
1.3.5. Avalanche photodiodes (APDs)
1.3.6. APD used as single photon detectors
1.4. Optical fibers
1.4.1. Reflection and refraction
1.4.1.1. Fresnel reflection coefficients
1.4.1.2. Special cases
Normal incidence
Critical angle
1.4.1.3. Optical field phase shift between the incident and the reflected beams
1.4.1.4. Brewster angle (total transmission ρ=0)
1.4.2. Propagation modes in optical fibers
1.4.2.1. Geometric optics analysis
1.4.2.2. Mode analysis using electromagnetic field theory
1.4.2.3. Numerical aperture
1.4.3. Optical fiber attenuation
1.4.4. Group velocity and dispersion
1.4.4.1. Phase velocity and group velocity
1.4.4.2. Group velocity dispersion
1.4.4.3. Sources of chromatic dispersion
1.4.4.4. Modal dispersion
1.4.4.5. Polarization mode dispersion (PMD)
1.4.5. Nonlinear effects in an optical fiber
1.4.5.1. Stimulated Brillouin scattering
1.4.5.2. Stimulated Raman scattering
1.4.5.3. Kerr effect nonlinearity and nonlinear Schrödinger equation
1.5. Optical amplifiers
1.5.1. Optical gain, gain bandwidth, and saturation
1.5.2. Semiconductor optical amplifiers
1.5.2.1. Steady-state analysis
1.5.2.2. Gain dynamics of SOA
Optical wavelength conversion using cross-gain saturation
Wavelength conversion using FWM in SOA
Optical phase modulation in an SOA
1.5.3. Erbium-doped fiber amplifiers (EDFAs)
1.5.3.1. Absorption and emission cross sections
1.5.3.2. Rate equations
1.5.3.3. EDFA design considerations
Forward pumping and backward pumping
EDFAs with AGC and APC
1.5.3.4. EDFA gain flattening
1.5.4. Raman amplification in optical fiber
1.6. External electro-optic modulator
1.6.1. Basic operation principle of electro-optic modulators
1.6.2. Frequency doubling and duobinary modulation
1.6.3. Optical single-sideband modulation
1.6.4. I/Q modulation of complex optical field
1.6.5. Bias point stabilization of an I/Q modulator
1.6.6. Optical modulators using electro-absorption effect
References
Chapter 2: Basic mechanisms and instrumentation for optical measurement
2.1. Introduction
2.2. Grating-based optical spectrum analyzers
2.2.1. General specifications
2.2.2. Fundamentals of diffraction gratings
2.2.2.1. Measure the diffraction angle spreading when the input only has a single frequency
2.2.2.2. Sweep the signal wavelength while measuring the output at a fixed diffraction angle
2.2.3. Basic OSA configurations
2.2.3.1. OSA based on a double monochromator
2.2.3.2. OSA with polarization sensitivity compensation
2.2.3.3. Consideration of focusing optics
2.2.3.4. Optical spectral meter using photodiode array
2.3. Scanning FP interferometer
2.3.1. Basic FPI configuration and transfer function
2.3.1.1. Free spectral range (FSR)
2.3.1.2. Half-power bandwidth (HPBW)
2.3.1.3. Finesse
2.3.1.4. Contrast
2.3.2. Scanning FPI spectrum analyzer
2.3.3. Scanning FPI basic optical configurations
2.3.4. Optical spectrum analyzer using the combination of grating and FPI
2.4. Mach-Zehnder interferometers
2.4.1. Transfer matrix of a 2x2 optical coupler
2.4.2. Transfer function of an MZI
2.4.3. MZI used as an optical filter
2.5. Michelson interferometers
2.5.1. Operating principle of a Michelson interferometer
2.5.2. Measurement and characterization of Michelson interferometers
2.5.3. Sagnac loop mirror
2.6. Optical wavelength meter
2.6.1. Operating principle of a wavelength meter based on Michelson interferometer
2.6.2. Wavelength coverage and spectral resolution
2.6.2.1. Wavelength coverage
2.6.2.2. Spectral resolution
2.6.2.3. Effect of signal coherence length
2.6.3. Wavelength calibration
2.6.4. Wavelength meter based on Fizeau wedge interferometer
2.7. Optical ring resonators and their applications
2.7.1. Ring resonator power transfer function and Q-factor
2.7.2. Ring resonators as tunable optical filters
2.7.3. Label-free biosensors based on high-Q ring resonators
2.7.4. Electro-optic modulators based on ring resonators
2.8. Optical polarimeter
2.8.1. General description of lightwave polarization
2.8.2. The Stokes parameters and the Poincaré sphere
2.8.3. Optical polarimeters
2.9. Measurement based on coherent optical detection
2.9.1. Operating principle
2.9.2. Receiver SNR calculation of coherent detection
2.9.2.1. Heterodyne and homodyne detection
2.9.2.2. Signal-to-noise ratio in coherent detection receivers
2.9.3. Balanced coherent detection and polarization diversity
2.9.4. Phase diversity in coherent homodyne detection
2.9.5. Coherent OSA based on swept frequency laser
2.10. Waveform measurement
2.10.1. Oscilloscope operating principle
2.10.2. Digital sampling oscilloscopes
2.10.3. High speed real-time digital analyzer
2.10.4. High-speed sampling of optical signal
2.10.4.1. Nonlinear optical sampling
2.10.4.2. Linear optical sampling
2.10.4.3. Sampling oscilloscope based on single-photon detection
2.10.4.4. High-speed electric ADC using optical techniques
2.10.5. Short optical pulse measurement using an autocorrelator
2.11. LIDAR and OCT
2.11.1. Light detection and ranging (LIDAR)
2.11.1.1. Pulsed LIDAR with direct detection
2.11.1.2. FMCW LIDAR and pulse compression
2.11.2. OCT
2.12. Optical network analyzer
2.12.1. S-Parameters and RF network analyzer
2.12.2. Optical network analyzers
2.12.2.1. Scalar optical network analyzer
2.12.2.2. Vector optical network analyzer
References
Chapter 3: Characterization of optical devices
3.1. Introduction
3.2. Characterization of RIN, linewidth, and phase noise of semiconductor lasers
3.2.1. Measurement of relative intensity noise (RIN)
3.2.2. Measurement of laser linewidth and phase noise
3.2.2.1. Self-homodyne and self-heterodyne detection
3.2.2.2. Coherent envelope detection and complex optical field detection
3.2.2.3. Non-Lorentzian phase noise and Lorentzian-equivalent linewidth
3.2.3. Multi-heterodyne technique to characterize spectral properties of semiconductor laser frequency combs
3.3. Measurement of electro-optic modulation response
3.3.1. Characterization of intensity modulation response
3.3.1.1. Frequency-domain characterization
3.3.1.2. Time-domain characterization
3.3.2. Measurement of frequency chirp
3.3.2.1. Modulation spectral measurement
3.3.2.2. Measurement utilizing fiber dispersion
3.3.3. Time-domain measurement of modulation-induced chirp
3.4. Wideband characterization of an optical receiver
3.4.1. Characterization of photodetector responsivity and linearity
3.4.2. Frequency domain characterization of photodetector response
3.4.3. Photodetector bandwidth characterization using source spontaneous-spontaneous beat noise
3.4.4. Photodetector characterization using short optical pulses
3.5. Characterization of optical amplifiers
3.5.1. Measurement of amplifier optical gain
3.5.2. Measurement of static and dynamic gain tilt
3.5.2.1. Static gain tilt
3.5.2.2. Dynamic gain tilt
3.5.3. Optical amplifier noise
3.5.4. Optical domain characterization of ASE noise
3.5.5. Impact of ASE noise in electrical domain
3.5.5.1. Signal-spontaneous emission beat noise
3.5.5.2. Spontaneous-spontaneous beat noise spectral density
3.5.6. Noise figure definition and its measurement
3.5.6.1. Noise figure definition
3.5.6.2. Optical domain measurement of noise figure
3.5.6.3. Electrical domain characterization of a noise figure
3.5.7. Time-domain characteristics of EDFA
3.5.8. Characterization of fiber Raman amplification
3.5.8.1. Noise characteristics of Raman amplifiers
3.5.8.2. Forward/backward hybrid pumping and 2nd-order pumping
3.5.8.3. RIN transfer from the pump to the optical signal
3.5.8.4. Characterization of fiber Raman amplifiers
3.6. Characterization of passive optical components
3.6.1. Fiber-optic couplers
3.6.2. Fiber Bragg grating filters
3.6.3. WDM multiplexers and demultiplexers
3.6.3.1. Thin film-based interference filters
3.6.3.2. Arrayed waveguide gratings
3.6.4. Characterization of optical filter transfer functions
3.6.4.1. Modulation phase-shift technique
3.6.4.2. Interferometer technique
3.6.5. Optical isolators and circulators
3.6.5.1. Optical isolators
3.6.5.2. Optical circulators
References
Chapter 4: Optical fiber measurement
4.1. Introduction
4.2. Classification of fiber types
4.2.1. Standard optical fibers for transmission
4.2.2. Specialty optical fibers
4.3. Measurement of fiber mode-field distribution
4.3.1. Near-field, far-field, and mode-field diameter
4.3.2. Far-field measurement techniques
4.3.3. Near-field measurement techniques
4.4. Fiber attenuation measurement and OTDR
4.4.1. Cutback technique
4.4.2. Optical time-domain reflectometers
4.4.3. Improvement considerations of OTDR
4.5. Fiber dispersion measurements
4.5.1. Intermodal dispersion and its measurement
4.5.1.1. Pulse distortion method
4.5.1.2. Frequency-domain measurement
4.5.2. Chromatic dispersion and its measurement
4.5.2.1. Modulation phase shift method
4.5.2.2. Baseband AM response method
4.5.2.3. Interferometric method
4.6. Polarization mode dispersion (PMD) measurement
4.6.1. Representation of fiber birefringence and PMD parameter
4.6.2. Pulse delay method
4.6.3. The Interferometric method
4.6.4. Poincaré arc method
4.6.5. Fixed analyzer method
4.6.6. The Jones Matrix method
4.6.7. The Mueller Matrix method
4.7. Determination of polarization-dependent loss
4.8. PMD sources and emulators
4.9. Measurement of fiber non-linearity
4.9.1. Measurement of stimulated Brillouin scattering coefficient
4.9.2. Measurement of the stimulated Raman scattering coefficient
4.9.3. Measurement of Kerr effect non-linearity
4.9.3.1. Non-linear index measurement using SPM
4.9.3.2. Non-linear index measurement using FWM
4.9.3.3. Non-linear index measurement using cross-phase modulation
4.9.3.4. Non-linear index measurement using modulation instability
References
Chapter 5: Fiber-based optical metrology and spectroscopy techniques
5.1. Introduction
5.2. Discrete fiber-optic sensors
5.2.1. Fiber-optic sensors based on optical path loss
5.2.2. Fiber-optic sensors based on interferometry
5.2.3. Fiber-optic sensors based on Faraday rotation of polarization
5.2.4. Fiber-optic gyroscopes
5.2.5. Fiber-optic sensors based on fiber Bragg gratings
5.2.6. Fiber-optic sensors based on Fabry-Perot interferometers
5.3. Distributed fiber sensors
5.3.1. Phase-sensitive OTDR
5.3.2. Brillouin and Raman OTDR
5.3.2.1. Measurements based on Brillouin scattering
5.3.2.2. Measurements based on Raman scattering
5.3.3. Interferometer-based distributed fiber sensors
5.4. Optical frequency combs and their applications
5.4.1. Basic definitions of optical frequency comb parameters
5.4.2. Femtosecond fiber lasers
5.4.3. Frequency stabilization of optical frequency combs
5.4.4. Precision metrology based on optical frequency combs
5.4.5. Measurements based on coherent dual combs
5.5. Nonlinear spectroscopy and microscopy based on femtosecond fiber lasers
5.5.1. Soliton self-frequency shift and generation of λ-tunable femtosecond pulses
5.5.2. Two-photon fluorescence microscopy based on λ-switchable femtosecond pulses excitation
5.5.3. CRS spectroscopy based on λ-tunable femtosecond pulses excitation
5.5.4. CRS microscopy based on λ-tunable femtosecond pulses excitation
References
Chapter 6: Optical system performance measurements
6.1. Introduction
6.2. Overview of fiber-optic transmission systems
6.2.1. Optical system performance considerations
6.2.2. Receiver BER and Q
6.2.3. System Q estimation based on eye diagram parameterization
6.2.4. Bit error rate testing
6.2.4.1. Pattern generator
6.2.4.2. Error detection
6.3. Receiver sensitivity measurement and OSNR tolerance
6.3.1. Receiver sensitivity and power margin
6.3.2. OSNR margin and required OSNR (R-OSNR)
6.3.3. BER versus decision threshold measurement
6.3.4. EVM and BER for high order complex modulation
6.4. Waveform distortion measurements
6.5. Jitter measurement
6.5.1. Basic jitter parameters and definitions
6.5.2. Jitter detection techniques
6.5.2.1. Jitter measurement based on sampling oscilloscope
6.5.2.2. Jitter measurement based on a phase detector
6.5.2.3. Jitter measurement based on a BER-T scan
6.6. In situ monitoring of linear propagation impairments
6.6.1. In situ monitoring of chromatic dispersion
6.6.2. In situ PMD monitoring
6.6.2.1. Basic operating principle
6.6.2.2. PMD monitoring using coherent detection
6.6.2.3. Difference between fiber DGD and the DGD experienced by an optical signal
6.6.3. In situ PDL monitoring
6.7. Measurement of non-linear crosstalks in WDM systems
6.7.1. Cross-phase modulation and pump-probe based measurement techniques
6.7.1.1. Measure XPM-induced phase modulation
6.7.1.2. Measure XPM-induced intensity modulation
6.7.1.3. Characterization of electrostriction non-linearity based on coherent detection
6.7.2. FWM-induced crosstalk in optical systems
6.7.3. Create WDM crosstalk channels with spectrally shaped broadband Gaussian noise
6.8. Optical performance monitoring based on coherent optical transceivers
6.8.1. Estimating system OSNR with a digital coherent transceiver
6.8.2. Measuring non-linear phase shift in a fiber-optic system with a digital coherent transceiver
6.8.2.1. Measurement using a single transceiver
6.8.2.2. Multi-span measurements with a recirculating loop and a separate coherent receiver
6.9. Optical system performance evaluation based on required OSNR
6.9.1. Measurement of R-OSNR due to chromatic dispersion
6.9.2. Measurement of R-OSNR due to fiber non-linearity
6.9.3. Measurement of R-OSNR due to optical filter misalignment
6.10. Fiber-optic recirculating loop
6.10.1. Operation principle of a recirculating loop
6.10.2. Measurement procedure and time control
6.10.3. Optical gain adjustment in the loop
References
Chapter 7: Measurement errors
7.1. Introduction
7.1.1. Error classification and reporting
7.2. Measurement error statistics
7.2.1. Effective sample size in the presence of serial correlations
7.3. Central limit theorem (CLT)
7.3.1. Approximations related to the central limit theorem
7.4. Identifying candidate outliers
7.5. Error estimates of measurement combinations
7.5.1. Error estimates for combinations of uncorrelated measurement samples
7.5.2. The weighted mean
7.6. Linear least squares fitting of data
7.6.1. Fitting evaluation based on a chi-square merit function
7.6.2. A fitting example
References
Index
Back Cover

FIBER-OPTIC MEASUREMENT TECHNIQUES


FIBER-OPTIC MEASUREMENT TECHNIQUES Second Edition RONGQING HUI Electrical Engineering and Computer Science Department, The University of Kansas, Lawrence, KS, United States

MAURICE O’SULLIVAN Ciena, Ottawa, ON, Canada

Academic Press is an imprint of Elsevier 125 London Wall, London EC2Y 5AS, United Kingdom 525 B Street, Suite 1650, San Diego, CA 92101, United States 50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom Copyright © 2023 Elsevier Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions. This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein). Notices Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein. ISBN: 978-0-323-90957-0 For information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals

Publisher: Mara E. Conner Acquisitions Editor: Tim Pitts Editorial Project Manager: Aera F. Gariguez Production Project Manager: Nirmala Arumugam Cover Designer: Miles Hitchen Typeset by STRAIVE, India

Contents

Preface to first edition  vii
Preface to the second edition  ix

1. Fundamentals of optical devices  1
1.1 Introduction  1
1.2 Laser diodes and LEDs  3
1.3 Photodetectors  30
1.4 Optical fibers  42
1.5 Optical amplifiers  83
1.6 External electro-optic modulator  114
References  134

2. Basic mechanisms and instrumentation for optical measurement  137
2.1 Introduction  137
2.2 Grating-based optical spectrum analyzers  138
2.3 Scanning FP interferometer  151
2.4 Mach-Zehnder interferometers  164
2.5 Michelson interferometers  169
2.6 Optical wavelength meter  179
2.7 Optical ring resonators and their applications  187
2.8 Optical polarimeter  204
2.9 Measurement based on coherent optical detection  211
2.10 Waveform measurement  226
2.11 LIDAR and OCT  250
2.12 Optical network analyzer  283
References  292

3. Characterization of optical devices  297
3.1 Introduction  297
3.2 Characterization of RIN, linewidth, and phase noise of semiconductor lasers  297
3.3 Measurement of electro-optic modulation response  339
3.4 Wideband characterization of an optical receiver  357
3.5 Characterization of optical amplifiers  366
3.6 Characterization of passive optical components  414
References  444

4. Optical fiber measurement  447
4.1 Introduction  447
4.2 Classification of fiber types  448
4.3 Measurement of fiber mode-field distribution  457
4.4 Fiber attenuation measurement and OTDR  464
4.5 Fiber dispersion measurements  475
4.6 Polarization mode dispersion (PMD) measurement  489
4.7 Determination of polarization-dependent loss  517
4.8 PMD sources and emulators  520
4.9 Measurement of fiber non-linearity  524
References  553

5. Fiber-based optical metrology and spectroscopy techniques  557
5.1 Introduction  557
5.2 Discrete fiber-optic sensors  559
5.3 Distributed fiber sensors  576
5.4 Optical frequency combs and their applications  594
5.5 Nonlinear spectroscopy and microscopy based on femtosecond fiber lasers  614
References  646

6. Optical system performance measurements  651
6.1 Introduction  651
6.2 Overview of fiber-optic transmission systems  652
6.3 Receiver sensitivity measurement and OSNR tolerance  676
6.4 Waveform distortion measurements  698
6.5 Jitter measurement  701
6.6 In situ monitoring of linear propagation impairments  710
6.7 Measurement of non-linear crosstalks in WDM systems  727
6.8 Optical performance monitoring based on coherent optical transceivers  752
6.9 Optical system performance evaluation based on required OSNR  767
6.10 Fiber-optic recirculating loop  777
References  788

7. Measurement errors  793
7.1 Introduction  793
7.2 Measurement error statistics  797
7.3 Central limit theorem (CLT)  800
7.4 Identifying candidate outliers  802
7.5 Error estimates of measurement combinations  804
7.6 Linear least squares fitting of data  807
References  817
Index  819

Preface to first edition

Modern fiber-optic communications date back to the early 1960s when Charles Kao theoretically predicted that high-speed messages could be transmitted long distances over a narrow glass waveguide, which is now commonly referred to as an optical fiber. In 1970, a team of researchers at Corning successfully fabricated optical fibers using fused-silica with a loss of less than 20dB/km at 633nm wavelength. The Corning breakthrough was the most significant step toward the practical application of fiber-optic communications. Over the following several years, fiber losses dropped dramatically, aided both by improved fabrication methods and the shift to longer wavelengths, where fibers have inherently lower attenuation. Meanwhile, the prospect of long distance fiber-optic communication intensified research and development efforts in semiconductor lasers and other related optical devices. Near-infrared semiconductor lasers and LEDs operating at 810nm, 1320nm and 1550nm wavelengths were developed to fit into the low loss windows of silica optical fibers. The bandwidth in the 1550nm wavelength window alone can be as wide as 80nm, which is approximately 10THz. In order to make full and efficient use of this vast bandwidth, many innovative technologies have been developed, such as single frequency and wavelength tunable semiconductor lasers, dispersion shifted optical fibers, optical amplifiers, wavelength division multiplexing as well as various modulation formats and signal processing techniques.

In addition to optical communications, fiber-optics and photonic technologies have found a variety of other applications ranging from precision metrology, to imaging, to photonic sensors. Various optical measurement techniques have been proposed and demonstrated in research, development, maintenance and trouble-shooting of optical systems. Different optical systems demand different measurement and testing techniques based on the specific application and the key requirements of each system. Over the years, fiber-optic measurement has become a stand-alone research discipline, which is both interesting and challenging.

In general, optical measurements can be categorized into instrumentation and measurement methodology. In many cases, the measurement capability and accuracy are limited by the instruments used. Therefore, a good understanding of operation principles and performance limitations of basic optical instruments is essential in the design of experimental setups and to achieve the desired measurement speed and accuracy. From a methodology point of view, a familiarity with various basic measurement system configurations and topologies is necessary, which helps in determining how to make


the most efficient use of the available instrumentation, how to extract useful signals, and how to interpret and process the results.

The focus of this book is the measurement techniques related to fiber-optic systems, subsystems and devices. Since both optical systems and optical instruments are built upon various optical components, basic optical devices are discussed in Chapter 1, which includes semiconductor lasers and LEDs, photodetectors, fundamental properties of optical fibers, optical amplifiers and optical modulators. Familiarity with the characteristics of these individual building blocks is essential for the understanding of optical measurement setups and optical instrumentation. Chapter 2 introduces basic optical instrumentation, such as optical spectrum analyzers, optical wavelength meters, Fabry-Perot, Mach-Zehnder and Michelson interferometers, optical polarimeters, high-speed optical and RF oscilloscopes and network analyzers. Since coherent optical detection is a foundation for an entire new category of optical instrumentation, the fundamental principle of coherent detection is also discussed in this chapter, which helps in the understanding of linear optical sampling and the vectorial optical network analyzer. In Chapter 3, we discuss techniques of characterizing optical devices such as semiconductor lasers, optical receivers, optical amplifiers and various passive optical devices. Optical and optoelectronic transfer functions, intensity and phase noises and modulation characteristics are important parameters to investigate. Chapter 4 discusses measurement of optical fibers, including attenuation, chromatic dispersion, polarization mode dispersion and optical nonlinearity. Finally, Chapter 5 is dedicated to the discussion of measurement issues related to optical communication systems.

Instead of describing performance and specification of specific instruments, the major purpose of this book is to outline the fundamental principles behind each individual measurement technique. Most of the techniques described here can be applied to various different application fields. A good understanding of the fundamental principles behind these measurement techniques is a key to making the best use of available instrumentation, to obtain the best possible results and to develop new and innovative measurement techniques and instruments.

Preface to the second edition

Many new fiber-optic measurement techniques and topics have developed in the 10+ years since the 1st edition of this book. We have included some of these in this 2nd edition. In Chapter 1, fundamental optical devices, we now include discussions of integrated tunable lasers (ITLA), single-photon detectors, and I/Q modulators. A description of distributed Raman amplification, which has come into more common use in optical communication systems and sensors, has also been added. In Chapter 2, Basic Instrumentation for Optical Measurement, we now provide a treatment of optical ring resonators, for use as optical filters, sensors, and electro-optic modulators. We have added a discussion of real-time digital sampling oscilloscope for high-speed waveform measurements. Although used under different circumstances, LiDAR and OCT are based on similar optical measurement principles. These techniques are treated together in Section 2.10 to facilitate their comparison. For Chapter 3, Characterization of optical devices, we have added a discussion of non-Gaussian phase noise in diode lasers and described measurements that characterize diode laser-based frequency combs. Distributed Raman amplification in fiber-optic systems and the associated characterization techniques are also new to this chapter. Chapter 4, Optical fiber measurement, remains unchanged apart from the addition of new fiber types and parameters in Section 4.1. Chapter 5, Fiber-based optical metrology and spectroscopy techniques, is introduced in this 2nd edition. It describes different types of fiber-optic sensors, measurement principles, mechanisms, and their applications. Optical frequency comb-based femtosecond lasers have, in recent years, become an important precision metrology tool promising many applications. Section 5.4 discusses femtosecond fiber lasers and spectroscopy techniques based on frequency combs and dual-frequency combs. Nonlinear spectroscopy and multiphoton microscopy techniques based on soliton self-frequency shift for fast wavelength tuning of femtosecond pulses are discussed in Section 5.5. Major updates in Chapter 6 include additional discussions of complex optical field modulation and high-order modulation, a more consolidated discussion of cross-phase modulation and pump-probe-based measurement techniques, as well as optical performance monitoring of optical systems and networks based on coherent optical transceivers as opposed to specialized optical measurement equipment. Chapter 7, Measurement errors, is another addition to this edition. Estimates of measurement error are essential to the design and use of measurements. This chapter is a brief examination of error categories, some relevant statistics, statistical trends, and


combination of errors and estimation of errors on coefficients from weighted linear least square fitting of functions to measurement data.

Fiber-optic measurement methods evolve with the advancement of technologies and tools and at a pace with our understanding of fundamental physics. It is not possible to include all measurement techniques. Here, we have strived to keep a balance between the breadth and the depth, and to discuss measurement techniques in a systematic way. We hope this 2nd edition can better serve the fiber-optic R&D community.

Rongqing Hui
Maurice O'Sullivan

CHAPTER 1

Fundamentals of optical devices

1.1 Introduction

In an optical communication system, information is delivered by optical carriers. The signal can be encoded into optical intensity, frequency, and phase for transmission and be detected at the receiver. As illustrated in Fig. 1.1.1, the simplest optical system has an optical source, a detector, and various optical components between them, such as an optical coupler and optical fiber. In this simple system, the electrical signal is modulated directly onto the light source such that the intensity, the wavelength, or the phase of the optical carrier is encoded by the electrical signal. This modulated optical signal is coupled into an optical fiber and delivered to the destination, where it is detected by an optical receiver. The optical receiver detects the received optical signal and recovers the electrical signal encoded on the optical carrier.

Fig. 1.1.1 Configuration of the simplest optical communication system.

Simple optical systems like the one shown in Fig. 1.1.1 are usually used for low data rate and short-distance optical transmission. First, direct modulation of a semiconductor laser source has a number of limitations such as frequency chirp and a poor extinction ratio. Second, the attenuation in the transmission optical fiber limits the system reach between the transmitter and the receiver. In addition, although the low-loss window of a single-mode fiber can be as wide as hundreds of terahertz, the transmission capacity is limited by the modulation speed of the transmitter and the receiver electrical bandwidth. To overcome these limitations, a number of new technologies have been introduced in modern optical communication systems, including single-frequency semiconductor lasers, external electro-optic modulators, the wavelength-division multiplexing (WDM) technique, and optical amplifiers (Kaminow et al., 2008).

Fig. 1.1.2 shows the block diagram of a WDM optical system with N wavelength channels. In this system, each information channel is modulated onto an optical carrier with a specific wavelength through an external electro-optic modulator. All the optical carriers are combined into a single optical fiber through a wavelength-division multiplexer. The power of the combined optical signal is boosted by an optical amplifier and sent to the transmission optical fiber. Along the fiber transmission line, the optical signal is periodically amplified by in-line optical amplifiers to overcome the transmission loss of the optical fiber. At the destination, a wavelength-division demultiplexer is used to separate optical carriers at different wavelengths, and each wavelength channel is detected separately to recover the data carried on each channel. In this system, the WDM configuration allows the full use of the wide band offered by the optical fiber, and optical amplification technology significantly extended the overall reach of the optical system. Meanwhile, because of the dramatically increased optical bandwidth, transmission distance, and optical power levels in the fiber, other sources of performance impairments, such as chromatic dispersion, fiber nonlinearity, polarization-mode dispersion, and amplified spontaneous noise generated by optical amplifiers, start to become significant. Understanding and characterization of these impairments have become important parts of fiber-optic system research.

Fig. 1.1.2 Block diagram of a WDM optical system using external modulation.

Another very important area in fiber-optic systems research is optical modulation format and optoelectronic signal processing. In addition to intensity modulation, optical systems based on phase modulation are gaining momentum, demonstrating excellent performance. Furthermore, due to the availability and much improved quality of single-frequency tunable semiconductor lasers, optical receivers based on coherent detection are becoming more and more popular. Many signal processing techniques previously developed for radio frequency systems can now be applied to optical systems such as orthogonal frequency-division multiplexing (OFDM) and code-division multiple access (CDMA). Obviously, the overall system


performance not only is determined by the system architecture but also depends on the characteristics of each individual optical device. On the other hand, from an optical testing and characterization point of view, knowledge of functional optical devices as building blocks and of optical signal processing techniques is essential for the design and construction of measurement setups and optical instrumentation. It also helps to understand the capabilities and limitations of optical systems.

This chapter introduces fundamental optical devices that are often used in fiber-optic systems and optical instrumentation. The physics background and basic properties of each optical device will be discussed. Section 1.2 introduces optical sources such as semiconductor lasers and light-emitting diodes (LEDs). Basic properties, including optical power, spectral width, and optical modulation, will be discussed. Section 1.3 presents optical detectors. Responsivity, optical and electrical bandwidth, and the signal-to-noise ratio are important characteristics. Section 1.4 reviews the basic properties of optical fibers, which include mode-guiding mechanisms, attenuation, chromatic dispersion, polarization mode dispersion, and nonlinear effects. Section 1.5 discusses optical amplifiers, which include semiconductor optical amplifiers (SOA) and erbium-doped fiber amplifiers (EDFA). Distributed Raman amplification in fiber systems is also discussed in this section. Although SOAs are often used for optical signal processing based on their high-speed dynamics, EDFAs are more convenient for application as in-line optical amplifiers in WDM optical systems due to their slow carrier dynamics and low crosstalk between high-speed channels. Distributed Raman amplification utilizing transmission fiber as the gain medium is able to provide a better optical signal-to-noise ratio than both SOA and EDFA but requires high power pump lasers. The last section in this chapter is devoted to the discussion of external electro-optic modulators, which are widely used in high-speed optical transmission systems. Both LiNbO3-based Mach-Zehnder modulators and electro-absorption modulators will be discussed, including in-phase/quadrature modulators which are able to provide complex optical field modulation.

1.2 Laser diodes and LEDs

In optical systems, signals are carried by photons, and therefore, an optical source is an essential part of every optical system. Although there are various types of optical sources, semiconductor-based LEDs are most popular in fiber-optic systems because they are small, reliable, and, most important, their optical output can be rapidly modulated by the electrical injection current, which is commonly referred to as direct modulation. Semiconductor lasers and LEDs are based on forward-biased pn junctions, and the output optical powers are proportional to the injection electric current.




1.2.1 Pn junction and energy diagram

Fig. 1.2.1 shows a homojunction between a p-type and an n-type semiconductor. For standalone n-type and p-type semiconductor materials, the Fermi level is closer to the conduction band in n-type semiconductors, and it is closer to the valence band in p-type semiconductors, as illustrated in Fig. 1.2.1A. Fig. 1.2.1B shows that once a pn junction is formed, under thermal equilibrium, the Fermi level will be unified across the structure. This happens because high-energy free electrons diffuse from the n-side to the p-side and low-energy holes diffuse in the opposite direction; as a result, the energy level of the p-type side is increased compared to the n-type side. Meanwhile, because free electrons migrate from the n-type side to the p-type side, uncovered protons left over at the edge of the n-type semiconductor create a positively charged layer on the n-type side.

Fig. 1.2.1 (A) Band diagram of separate n-type and p-type semiconductors, (B) band diagram of a pn junction under equilibrium, and (C) band diagram of a pn junction with forward bias.


Similarly, a negatively charged layer is created at the edge of the p-type semiconductor due to the loss of holes. Thus, a built-in electrical field, and thus a potential barrier, is created at the pn junction, which pulls the diffused free electrons back to the n-type side and holes back to the p-type side, a process commonly referred to as carrier drift. Because of this built-in electrical field, neither free electrons nor holes exist at the junction region, and therefore, this region is called the depletion region or space charge region. Without an external bias, there is no net carrier flow across the pn junction due to the exact balance between carrier diffusion and carrier drift. When the pn junction is forward-biased as shown in Fig. 1.2.1C, excess electrons and holes are injected into the n-type and the p-type sections, respectively. This carrier injection reduces the potential barrier and pushes excess electrons and holes to diffuse across the junction area. In this process, excess electrons and holes recombine inside the depletion region to generate photons. This is called radiative recombination.

1.2.2 Direct and indirect semiconductors

One important rule of the radiative recombination process is that both energy and momentum must be conserved. Depending on the shape of their band structure, semiconductor materials can be generally classified as having direct bandgap or indirect bandgap, as illustrated in Fig. 1.2.2, where E is the energy and k is the momentum. For direct semiconductors, holes at the top of the valence band have the same momentum as the electrons at the bottom of the conduction band. In this case, electrons directly recombine with the holes to emit photons, and the photon energy is equal to the bandgap. For indirect semiconductors, on the other hand, holes at the top of the valence band and electrons at the bottom of the conduction band have different momentum. Any recombination between electrons in the conduction band and holes in the valence band would require a significant momentum change. Although a photon can have considerable energy hν, where h is Planck's constant and ν is the optical frequency,

Fig. 1.2.2 Illustration of direct bandgap (A) and indirect bandgap (B) of semiconductor materials.




its momentum hν/c is much smaller, with c the speed of light, and cannot compensate for the momentum mismatch between the electrons and the holes. Therefore, radiative recombination is considered impossible in indirect semiconductor materials unless a third particle (for example, a phonon created by crystal lattice vibration) is involved and provides the required momentum.

1.2.3 Carrier confinement

In addition to the requirement of using direct bandgap semiconductor material, another important requirement for LEDs is carrier confinement. In early LEDs, materials with the same bandgap were used at both sides of the pn junction, as shown in Fig. 1.2.1. This is referred to as a homojunction. In this case, carrier recombination happened over the entire depletion region, with a width of 1–10 μm depending on the diffusion constant of the electrons and the holes. This wide depletion region makes it difficult to achieve a high carrier concentration. To overcome this problem, the double heterojunction was introduced, in which a thin layer of semiconductor material with a slightly smaller bandgap is sandwiched in the middle of the junction region between the p-type and the n-type sections. This concept is illustrated in Fig. 1.2.3. In this structure, the thin layer has a slightly smaller bandgap, which attracts the concentration of carriers when the junction is forward-biased; therefore, this layer is referred to as the active region of the device. The carrier confinement is a result of the bandgap discontinuity. The sandwich layer can be controlled to be on the order of 0.1 μm, which is several orders of magnitude thinner than the depletion region of a homojunction; thus, very high levels of carrier concentration can be realized at a certain injection current. In addition to providing carrier confinement, another advantage of using a double heterostructure is that it also provides useful photon confinement. By using a material with a slightly higher refractive index for the sandwich layer, a dielectric waveguide is formed. This dielectric optical waveguide provides a mechanism to confine photons within the active layer, and therefore, very high photon density can be achieved.

Fig. 1.2.3 Illustration of semiconductor double heterostructure.


1.2.4 Spontaneous emission and stimulated emission


As discussed, radiative recombination between electrons and holes creates photons, but this is a random process. The energy is conserved in this process, which determines the frequency of the emitted photon as ν = ΔE/h, where ΔE is the energy gap between the conduction band electron and the valence band hole that participated in the process and h is Planck's constant. However, the phase of the emitted lightwave is not predictable. Indeed, since semiconductors are solids, the energies of carriers are not at discrete levels; instead, they are continuously distributed within energy bands following the Fermi-Dirac distribution, as illustrated by Fig. 1.2.4. Fig. 1.2.4A shows that different electron-hole pairs may be separated by different energy gaps, and ΔEi might not be equal to ΔEj. Recombination of different electron-hole pairs will produce emission at different wavelengths. The spectral width of the emission is determined by the statistical energy distribution of the carriers, as illustrated by Fig. 1.2.4B. Spontaneous emission is created by the spontaneous recombination of electron-hole pairs. The photon generated from each recombination event is independent, although statistically the emission frequency falls into the spectrum shown in Fig. 1.2.4B. The frequencies, the phases, and the direction of propagation of the emitted photons are not correlated. This is illustrated in Fig. 1.2.5A.

Fig. 1.2.4 Illustration of an energy band in semiconductors and the impact on the spectral width of radiative recombination. (A) Energy distributions of electrons and holes. (B) Probability distribution of the frequency of emitted photons.
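The relation ν = ΔE/h quoted above maps a recombination energy gap directly to an emission frequency and wavelength. As a quick numerical illustration (not from the book), the following minimal sketch evaluates it for an assumed 0.8 eV gap, typical of InGaAsP material emitting near 1550 nm:

```python
# Minimal sketch (not from the book): nu = dE/h links the recombination energy
# gap to the emission frequency and wavelength. The 0.8 eV value is an assumed
# example bandgap, roughly that of InGaAsP emitting near 1550 nm.
h = 6.626e-34     # Planck's constant (J*s)
c = 3.0e8         # speed of light (m/s)
q = 1.602e-19     # electron charge (C), converts eV to J

delta_e_ev = 0.8                  # energy gap of the recombining pair (eV, assumed)
nu = delta_e_ev * q / h           # emission frequency (Hz)
wavelength = c / nu               # emission wavelength (m)
print(f"nu = {nu/1e12:.1f} THz, wavelength = {wavelength*1e9:.0f} nm")
```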

Fig. 1.2.5 Illustration of spontaneous emission (A) and stimulated emission (B).




Stimulated emission, on the other hand, is created by stimulated recombination of electron-hole pairs. In this case, the recombination is induced by an incoming photon, as shown in Fig. 1.2.5B. Both the frequency and the phase of the emitted photon are identical to those of the incoming photon. Therefore, photons generated by the stimulated emission process are coherent, which results in narrow spectral linewidth.

1.2.5 Light-emitting diodes (LEDs)

Light emission in an LED is based on the spontaneous emission of the forward-biased semiconductor pn junction. The basic structures are shown in Fig. 1.2.6 for surface-emitting and edge-emitting LEDs. For surface-emitting diodes, light emits in the direction perpendicular to the active layer. The active area of the surface is equally bright, and the emission angle is isotropic, which is called Lambertian. The optical power emission pattern can be described as P(θ) = P0 cos θ, where θ is the angle between the emitting direction and the surface normal and P0 is the optical power viewed from the direction of the surface normal. By properly designing the shape of the bottom metal contact, the active emitting area can be made circular to maximize the coupling efficiency to an optical fiber.

Fig. 1.2.6 Illustration of surface emission (A) and edge emission (B) LEDs.


1.2.5.1 P-I curve

For edge-emitting diodes, on the other hand, light emits in the same direction as the active layer. In this case, a waveguide is usually required in the design, where the active layer has a slightly higher refractive index than the surrounding layers. Compared to surface-emitting diodes, the emitting area of edge-emitting diodes is usually much smaller and asymmetric, which is determined by the width and thickness of the active layer.

For an LED, the emitted optical power is linearly proportional to the injected electrical current, as shown in Fig. 1.2.7. This is commonly referred to as the P-I curve. In the ideal case, the recombination of each electron-hole pair generates a photon. If we define the power efficiency dP/dI as the ratio between the emitted optical power Popt and the injected electrical current I, we have

dP/dI = hν/q = hc/(λq)   (1.2.1)

where q is the electron charge, h is Planck's constant, c is the speed of light, and λ is the wavelength. In practical devices, in addition to radiative recombination, there is also nonradiative recombination, which does not produce photons. The internal quantum efficiency is defined as

ηq = Rr/(Rr + Rnr)   (1.2.2)

where Rr and Rnr are the rates of radiative and nonradiative recombination, respectively. Another factor that reduces the slope of the P-I curve is that not all the photons generated through radiative recombination are able to exit the device. Various effects contribute to this efficiency reduction, such as internal material loss, interface reflection, and emitting angle. The external efficiency is defined by

ηext = Remit/Rr   (1.2.3)

where Remit is the rate of the generated photons that actually exit the LED.

Fig. 1.2.7 LED emitting power is linearly proportional to the injection current.

Considering both internal quantum efficiency and external efficiency, the slope of the P-I curve should be

dP/dI = ηq ηext hc/(λq)

(1.2.4)

Because of the linear relationship between the output optical power and the injection current, the emitted optical power of an LED is

Popt = ηq ηext (hc/(λq)) I

(1.2.5)

In general, the internal quantum efficiency of an LED can be on the order of 70%. However, since an LED is based on spontaneous emission and the photon emission is isotropic, its external efficiency is usually less than 5%. As a rule of thumb, for ηq = 75%, ηext = 2%, and a wavelength of λ = 1550 nm, the output optical power efficiency is approximately 12 μW/mA.

1.2.5.2 Modulation dynamics

In an LED active layer, the increase of the carrier population is proportional to the rate of external carrier injection minus the rate of carrier recombination. Therefore, the rate equation of the carrier population Nt is

dNt(t)/dt = I(t)/q − Nt(t)/τ

(1.2.6)

where τ is referred to as the carrier lifetime. In general, τ is a function of carrier density, and the rate equation does not have a closed-form solution. To simplify, if we assume τ is a constant, Eq. (1.2.6) can be easily solved in the frequency domain as

Ñt(ω) = [Ĩ(ω)τ/q]/(1 + jωτ)

(1.2.7)

where Ñt(ω) and Ĩ(ω) are the Fourier transforms of Nt(t) and I(t), respectively. Eq. (1.2.7) demonstrates that the carrier population can be modulated through injection current modulation, and the 3-dB modulation bandwidth is B3dB = 1/τ. Because the photons are created by radiative carrier recombination and the optical power is proportional to the carrier density, the modulation response of the optical power has the same bandwidth of 1/τ:

Popt(ω) = Popt(0)/√(1 + (ωτ)²)

(1.2.8)

where Popt(0) is the optical power at DC. The typical carrier lifetime of an LED is on the order of nanoseconds, and therefore the modulation bandwidth is in the 100 MHz to 1 GHz range, depending on the structure of the LED. It is worth noting that since the optical power is proportional to the injection current, the modulation bandwidth can be defined as either electrical or optical. Here comes a


practical question: To fully support an LED with an optical bandwidth of Bopt, what electrical bandwidth Bele is required for the driving circuit? Since the optical power is proportional to the electrical current, Popt(ω) ∝ I(ω), and the driver electrical power is proportional to the square of the injection current, Pele(ω) ∝ I²(ω), the driver electrical power is proportional to the square of the LED optical power, Pele(ω) ∝ P²opt(ω), that is,

Pele(ω)/Pele(0) = [Popt(ω)/Popt(0)]²

(1.2.9)

If at a certain frequency the driver electrical power is reduced by 3 dB compared to its DC value, the optical power is only reduced by 1.5 dB at that frequency; equivalently, a 3-dB optical bandwidth corresponds to a 6-dB electrical bandwidth.

Example 1.1

Consider an LED emitting in the λ = 1550 nm wavelength window. The internal quantum efficiency is 70%, the external efficiency is 2%, the carrier lifetime is τ = 20 ns, and the injection current is 20 mA. Find: (1) the output optical power of the LED and (2) the 3-dB optical bandwidth and the required driver electrical bandwidth.

Solution: (1) The output optical power is

Popt = ηq ηext (hc/(λq)) I = 0.7 × 0.02 × (6.63 × 10⁻³⁴ × 3 × 10⁸ × 20 × 10⁻³)/(1550 × 10⁻⁹ × 1.6 × 10⁻¹⁹) = 0.225 mW

(2) To find the 3-dB optical bandwidth, we use

Popt(ω) = Popt(0)/√(1 + ω²τ²)

For the 3-dB optical bandwidth, 20 log[Popt(ωopt)/Popt(0)] = −3 dB, that is, 10 log(1 + ω²optτ²) = 3 dB. Therefore, the angular frequency of the optical bandwidth is ωopt = 1/τ = 50 Mrad/s, which corresponds to a frequency fopt = ωopt/(2π) ≈ 8 MHz.

For the 3-dB electrical bandwidth, 20 log[Pele(ωele)/Pele(0)] = −3 dB. This is equivalent to 20 log[P²opt(ωele)/P²opt(0)] = −3 dB. Therefore, ωele = 0.64/τ = 32 Mrad/s, and fele ≈ 5.1 MHz.
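The numbers in Example 1.1, and the optical-versus-electrical bandwidth convention used there, can be reproduced with a few lines of code. The following is a minimal sketch (not part of the original text) that evaluates Eqs. (1.2.5) and (1.2.8) with the example's assumed parameter values:

```python
# Minimal numerical check of Example 1.1 (assumed values from the example:
# eta_q = 0.7, eta_ext = 0.02, tau = 20 ns, I = 20 mA, lambda = 1550 nm).
import math

h = 6.626e-34      # Planck's constant (J*s)
c = 3.0e8          # speed of light (m/s)
q = 1.602e-19      # electron charge (C)

eta_q, eta_ext = 0.7, 0.02   # internal and external efficiencies
wavelength = 1550e-9         # emission wavelength (m)
tau = 20e-9                  # carrier lifetime (s)
current = 20e-3              # injection current (A)

# Eq. (1.2.5): emitted optical power of the LED
p_opt = eta_q * eta_ext * h * c / (wavelength * q) * current
print(f"Output optical power: {p_opt*1e3:.3f} mW")                   # ~0.225 mW

# Eq. (1.2.8): Popt(w)/Popt(0) = 1/sqrt(1 + (w*tau)^2)
# 3-dB optical bandwidth (20*log10 convention of the example): w*tau = 1
w_opt = 1.0 / tau
print(f"3-dB optical bandwidth:    {w_opt/(2*math.pi)/1e6:.1f} MHz")  # ~8 MHz

# 3-dB electrical bandwidth: Pele ~ Popt^2, so 40*log10(Popt ratio) = -3 dB
# -> 1 + (w*tau)^2 = 10**(3/20) -> w*tau ~ 0.64
w_ele = math.sqrt(10**(3/20) - 1) / tau
print(f"3-dB electrical bandwidth: {w_ele/(2*math.pi)/1e6:.1f} MHz")  # ~5.1 MHz
```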

1.2.6 Laser diodes (LDs)

Semiconductor laser diodes are based on the stimulated emission of the forward-biased semiconductor pn junction. Compared to LEDs, LDs have higher spectral purity and




higher external efficiency because of the spectral and spatial coherence of stimulated emission. One of the basic requirements of laser diodes is optical feedback. Consider an optical cavity of length L as shown in Fig. 1.2.8, where the semiconductor material in the cavity provides an optical gain g and an optical loss α per unit length; the refractive index of the material in the cavity is n, and the reflectivities of the facets are R1 and R2. The lightwave travels back and forth in the longitudinal (z) direction in the cavity. After each roundtrip in the cavity, the optical field change is

Ei+1/Ei = √(R1R2) exp(ΔG + jΔΦ)   (1.2.10)

where the total phase shift of the optical field is

ΔΦ = (2π/λ)·2nL   (1.2.11)

and the net optical gain coefficient of the optical field is

ΔG = (Γg − α)·2L   (1.2.12)

Fig. 1.2.8 A laser cavity with facet reflectivity R1 and R2, optical gain g, optical loss α, and refractive index n.

where g is the optical gain coefficient in (cm⁻¹) and α is the material absorption coefficient, also in (cm⁻¹). 0 < Γ < 1 is a confinement factor. Since not all of the optical field is confined within the active region of the waveguide, Γ is defined as the ratio between the optical field in the active region and the total optical field. To support a self-sustained oscillation, the optical field has to repeat itself after each roundtrip. Therefore,

√(R1R2) exp(ΔG + jΔΦ) = 1   (1.2.13)

This is a necessary condition of oscillation, which is also commonly referred to as the threshold condition. Eq. (1.2.13) can be further decomposed into a phase condition and a threshold gain condition. The phase condition is that after each roundtrip, the optical phase change must be a multiple of 2π, ΔΦ = 2mπ, where m is an integer. One important implication of this phase condition is that it can be satisfied by multiple wavelengths,

λm = 2nL/m

(1.2.14)

This explains the reason that a laser may emit at multiple wavelengths, which are generally referred to as multiple longitudinal modes.

Example 1.2

For an InGaAsP semiconductor laser operating in a 1550 nm wavelength window, if the effective refractive index of the waveguide is n ≈ 3.5 and the laser cavity length is L = 300 μm, find the wavelength spacing between adjacent longitudinal modes.

Solution: Based on Eq. (1.2.14), the wavelength spacing between the mth and the (m + 1)th modes can be found as Δλ ≈ λ²m/(2nL). Assuming λm = 1550 nm, this mode spacing is Δλ = 1.144 nm, which corresponds to a frequency separation of approximately Δf = 143 GHz.

The threshold gain condition is that after each roundtrip, the amplitude of the optical field does not change, that is, √(R1R2) exp{(Γgth − α)2L} = 1, where gth is the optical field gain at threshold. Therefore, in order to achieve the lasing threshold, the optical gain has to be high enough to compensate for both the material attenuation and the optical loss at the mirrors,

Γgth = α − ln(R1R2)/(4L)

(1.2.15)
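Eqs. (1.2.14) and (1.2.15) are easy to evaluate numerically. The sketch below (not from the book) reproduces the mode spacing of Example 1.2 and then estimates the required threshold modal gain from Eq. (1.2.15); the facet reflectivities R1 = R2 = 0.3 and the loss value used for the threshold estimate are illustrative assumptions only:

```python
# Minimal sketch (not from the book): longitudinal-mode spacing of Example 1.2
# and the threshold modal gain of Eq. (1.2.15). R1, R2 and alpha are assumed.
import math

wavelength = 1550e-9   # wavelength (m)
n_eff = 3.5            # effective refractive index
L = 300e-6             # cavity length (m)

# Mode spacing from Eq. (1.2.14): delta_lambda ~ lambda^2 / (2 n L)
d_lambda = wavelength**2 / (2 * n_eff * L)
d_freq = 3.0e8 * d_lambda / wavelength**2
print(f"Mode spacing: {d_lambda*1e9:.3f} nm  ({d_freq/1e9:.0f} GHz)")  # ~1.144 nm, ~143 GHz

# Threshold gain from Eq. (1.2.15): Gamma*g_th = alpha - ln(R1*R2)/(4L)
alpha = 20e2           # material loss, 20 cm^-1 expressed in m^-1 (assumed)
R1 = R2 = 0.3          # facet power reflectivities (assumed, cleaved facets)
gamma_gth = alpha - math.log(R1 * R2) / (4 * L)
print(f"Required modal gain Gamma*g_th: {gamma_gth/1e2:.1f} cm^-1")
```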

In semiconductor lasers, the optical field gain coefficient is a function of the carrier density in the laser cavity, which depends on the rate of carrier injection,

g(N) = a(N − N0)

(1.2.16)

In this expression, N is the carrier density in (cm⁻³), and N0 is the carrier density required to achieve material transparency. a is the differential gain coefficient in (cm²); it indicates the gain per unit length along the laser cavity per unit carrier density. In addition, due to the limited emission bandwidth, the differential gain is a function of the wavelength, which can be approximated as parabolic,

a(λ) = a0[1 − ((λ − λ0)/Δλg)²]   (1.2.17)

for |λ − λ0| ≪ Δλg, where a0 is the differential gain at the central wavelength λ = λ0 and Δλg is the spectral bandwidth of the material gain. It must be noted that the material gain coefficient g is not equal to the actual gain of the optical power. When an optical field travels along the active waveguide,

E(z, t) = E0 e^{(Γg−α)z} e^{j(ωt−βz)}

(1.2.18)

where E0 is the optical field at z = 0, ω is the optical frequency, and β = 2π/λ is the propagation constant. The envelope of the optical power along the waveguide is then


P(z) = |E(z)|^2 = P_0\,e^{2(\Gamma g - \alpha)z}    (1.2.19)

Combining Eqs. (1.2.16), (1.2.17), and (1.2.19), we have

P(z,\lambda) = |E(z)|^2 = P(z,\lambda_0)\exp\left[-2\Gamma g_0 z\left(\frac{\lambda - \lambda_0}{\Delta\lambda_g}\right)^2\right]    (1.2.20)

where g0 = a0(N − N0) is the peak gain at λ = λ0, and P(z, λ0) = P(0, λ0)e^{2(Γg0−α)z} is the peak optical power at position z.

As shown in Fig. 1.2.9, although there are a large number of longitudinal modes that all satisfy the phase condition, the threshold gain condition can be reached only by a small number of modes near the peak of the gain spectrum.

Fig. 1.2.9 Gain and loss profile. Vertical bars show the wavelengths of longitudinal modes.

1.2.6.1 Rate equations
Rate equations describe the nature of interactions between photons and electrons in the active region of a semiconductor laser. Useful characteristics such as output optical power versus injection current, modulation response, and spontaneous emission noise can be found by solving the rate equations.

\frac{dN(t)}{dt} = \frac{J}{qd} - \frac{N(t)}{\tau} - 2\Gamma v_g a(N - N_0)P(t)    (1.2.21)

\frac{dP(t)}{dt} = 2\Gamma v_g a(N - N_0)P(t) - \frac{P(t)}{\tau_{ph}} + R_{sp}    (1.2.22)

where N(t) is the carrier density and P(t) is the photon density within the laser cavity; they have the same unit (cm⁻³). J is the injection current density in (A/cm²), d is the thickness of the active layer, vg is the group velocity of the lightwave in (cm/s), and τ and τph are the electron and photon lifetimes, respectively.



Rsp is the rate of spontaneous emission; it represents the density of spontaneously generated photons per second that are coupled into the lasing mode, so the unit of Rsp is (cm⁻³ s⁻¹). On the right-hand side of Eq. (1.2.21), the first term is the number of electrons injected into each unit volume per second; the second term is the electron density reduction per second due to spontaneous recombination; and the third term represents the electron density reduction rate due to stimulated recombination, which is proportional to both the material gain and the photon density. The same term 2Γvga(N − N0)P(t) also appears in Eq. (1.2.22) because each stimulated recombination event generates a photon; therefore, the first term on the right-hand side of Eq. (1.2.22) is the rate of photon density increase due to stimulated emission. The second term in Eq. (1.2.22) is the photon density decay rate due to both material absorption and photon leakage through the two mirrors. If we distribute the mirror losses into the cavity, an equivalent mirror loss coefficient with the unit of (cm⁻¹) can be defined as

\alpha_m = -\frac{\ln(R_1 R_2)}{4L}    (1.2.23)

In this way, the photon lifetime can be expressed as

\tau_{ph} = \frac{1}{2v_g(\alpha + \alpha_m)}    (1.2.24)

where α is the material attenuation coefficient. Using this photon lifetime expression, the photon density rate Eq. (1.2.22) can be simplified as

\frac{dP(t)}{dt} = 2v_g\left[\Gamma g - (\alpha + \alpha_m)\right]P(t) + R_{sp}    (1.2.25)

where g is the material gain as defined in Eq. (1.2.16). Eqs. (1.2.21) and (1.2.22) are coupled differential equations, and generally, they can be solved numerically to predict the static as well as dynamic behaviors of semiconductor lasers.

1.2.6.2 Steady state solutions of rate equations
In the steady state, d/dt = 0, and rate Eqs. (1.2.21) and (1.2.22) can be simplified as

\frac{J}{qd} - \frac{N}{\tau} - 2\Gamma v_g a(N - N_0)P = 0    (1.2.26)

2\Gamma v_g a(N - N_0)P - \frac{P}{\tau_{ph}} + R_{sp} = 0    (1.2.27)

With this simplification, the equations can be solved analytically, which helps us understand some basic characteristics of semiconductor lasers.
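Although the steady-state solutions below are analytic, the coupled rate Eqs. (1.2.21) and (1.2.22) are also easy to integrate numerically. The Python sketch below steps the two equations forward in time with a simple Euler integrator to show the build-up toward steady state; all parameter values are illustrative assumptions, not taken from the text:

import numpy as np

q = 1.6e-19          # electron charge (C)
d = 0.2e-4           # active layer thickness (cm), assumed
tau = 2e-9           # carrier lifetime (s), assumed
tau_ph = 2e-12       # photon lifetime (s), assumed
vg = 8.5e9           # group velocity (cm/s), assumed
a = 5e-16            # differential gain (cm^2), assumed
N0 = 1e18            # transparency carrier density (cm^-3), assumed
Gamma = 0.3          # confinement factor, assumed
Rsp = 1e21           # spontaneous emission rate into the mode (cm^-3 s^-1), assumed
J = 3e3              # injection current density (A/cm^2), switched on at t = 0

N, P = 0.0, 0.0      # carrier and photon densities (cm^-3)
dt = 1e-13           # time step (s)
for step in range(int(5e-9 / dt)):          # simulate 5 ns
    dN = J/(q*d) - N/tau - 2*Gamma*vg*a*(N - N0)*P   # Eq. (1.2.21)
    dP = 2*Gamma*vg*a*(N - N0)*P - P/tau_ph + Rsp    # Eq. (1.2.22)
    N, P = N + dN*dt, P + dP*dt
print("steady-state N ~ %.3e cm^-3, P ~ %.3e cm^-3" % (N, P))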


1.2.6.3 Threshold carrier density
Assume that Rsp, τph, and α are constants. Eq. (1.2.27) can be expressed as

P = \frac{R_{sp}}{1/\tau_{ph} - 2\Gamma v_g a(N - N_0)}    (1.2.28)

Eq. (1.2.28) indicates that when the value of 2Γvga(N − N0) approaches that of 1/τph, the photon density approaches infinity, and this operation point is called threshold. Therefore, the threshold carrier density is defined as

N_{th} = N_0 + \frac{1}{2\Gamma v_g a\tau_{ph}}    (1.2.29)

Because the photon density should always be positive, 2Γvga(N − N0) < 1/τph is necessary, which requires N < Nth. Practically, the carrier density N can be increased to approach the threshold carrier density Nth by increasing the injection current density. However, the threshold carrier density level can never be reached: after the carrier density increases to a certain level, the photon density increases dramatically and stimulated recombination becomes significant, which, in turn, reduces the carrier density. Fig. 1.2.10 illustrates the relationships among the carrier density, the photon density, and the injection current density.

Fig. 1.2.10 Photon density P(J) and carrier density N(J) as functions of injection current density J. Jth is the threshold current density, and Nth is the threshold carrier density.

Threshold current density
As shown in Fig. 1.2.10, for a semiconductor laser, the carrier density increases linearly with the injection current density up to a certain level. Above that level, the carrier density increase suddenly saturates due to the significant contribution of stimulated recombination. The current density corresponding to that saturation point is called the threshold current density. Generally, above threshold, the laser output is dominated by stimulated emission, whereas below threshold, spontaneous emission is the dominant mechanism, similar to that in an LED, and the output optical power is usually very small. In this case,


stimulated recombination is negligible; therefore, Eq. (1.2.26) can be simplified as J/(qd) = N/τ for below-threshold operation. At the threshold point, N = Nth, as expressed in Eq. (1.2.29); therefore, the corresponding threshold current density is

J_{th} = \frac{qd}{\tau}N_{th} = \frac{qd}{\tau}\left(N_0 + \frac{1}{2\Gamma v_g a\tau_{ph}}\right)    (1.2.30)

P-J relationship about threshold
In general, the desired operation region of a laser diode is above threshold, where high-power coherent light is generated by stimulated emission. Combining Eqs. (1.2.26) and (1.2.27), we have

\frac{J}{qd} = \frac{N}{\tau} + \frac{P}{\tau_{ph}} - R_{sp}    (1.2.31)

As shown in Fig. 1.2.10, in the above-threshold regime, the carrier density is approximately equal to its threshold value (N ≈ Nth). In addition, since Nth/τ = Jth/(qd), we have

\frac{J}{qd} = \frac{J_{th}}{qd} + \frac{P}{\tau_{ph}} - R_{sp}    (1.2.32)

Therefore, the relationship between the photon density and the current density above the laser threshold is

P = \frac{\tau_{ph}}{qd}(J - J_{th}) + \tau_{ph}R_{sp}    (1.2.33)

Apart from the spontaneous emission contribution τphRsp, which is usually very small, the photon density is linearly proportional to the injection current density for J > Jth, and the slope is dP/dJ = τph/(qd). How, then, is the photon density related to the output optical power? Assume that the laser active waveguide has a length l, width w, and thickness d, as shown in Fig. 1.2.11. The output optical power is the flow of photons through the facets, which can be expressed as

P_{opt} = P\cdot(lwd)\cdot h\nu\cdot 2\alpha_m v_g    (1.2.34)

Fig. 1.2.11 Illustration of the dimension of a laser cavity.


where P·(lwd) is the total photon number and P·(lwd)hν is the total optical energy within the cavity. αm is the mirror loss in (cm⁻¹), which accounts for the photons that escape through the facet mirrors, and αmvg represents the photon escape rate per second. The factor 2 accounts for photons traveling in both directions along the cavity and escaping through the two end mirrors. Neglecting the contribution of spontaneous emission in Eq. (1.2.33), combining Eqs. (1.2.33), (1.2.24), and (1.2.34), and considering that the injection current is related to the current density by I = J·wl, we have

P_{opt} = \frac{(I - I_{th})h\nu}{q}\cdot\frac{\alpha_m}{\alpha_m + \alpha}    (1.2.35)

This is the total output optical power exiting from the two laser facets. Since α is the rate of material absorption and αm is the photon escape rate through the facet mirrors, αm/(α + αm) represents an external efficiency.

Side-mode suppression ratio (SMR)
As illustrated in Fig. 1.2.9, in a laser diode, the phase condition can be satisfied by multiple wavelengths, which are commonly referred to as multiple longitudinal modes. The gain profile has a maximum in the middle, and one of the longitudinal modes is closest to the threshold gain condition. This mode usually has the highest power and is the main mode. However, the power in the modes adjacent to the main mode may not be negligible for many applications that require single-mode operation. To consider the multimode effect in a laser diode, the rate equation for the photon density of the mth mode can be written as

\frac{dP_m(t)}{dt} = 2v_g\Gamma g_m(N)P_m(t) - \frac{P_m(t)}{\tau_{ph}} + R_{sp}    (1.2.36)

where gm(N) = a(N − N0) is the optical field gain for the mth mode. Since all the longitudinal modes share the same pool of carrier density, the rate equation for the carrier density is

\frac{dN(t)}{dt} = \frac{J}{qd} - \frac{N(t)}{\tau} - \sum_k 2\Gamma v_g g_k(N)P_k(t)    (1.2.37)

Using the parabolic approximation for the material gain,

g(N,\lambda) = g_0(N)\left\{1 - \left[(\lambda - \lambda_0)/\Delta\lambda_g\right]^2\right\}

and letting λm = λ0 + mΔλl, where Δλl is the mode spacing as shown in Fig. 1.2.9, there are approximately 2M + 1 modes if there are M modes on each side of the main mode (−M < m < M), with M ≈ Δλg/Δλl.


Therefore, the field gain for the mth mode can be expressed as a function of the mode index m as

g_m(N) = g_0(N)\left[1 - \left(\frac{m}{M}\right)^2\right]    (1.2.38)

The steady-state solution of the photon density rate equation of the mth mode is

P_m = \frac{R_{sp}}{1/\tau_{ph} - 2\Gamma v_g g_m(N)}    (1.2.39)

The gain margin for the main mode (m = 0) is defined as

\delta = \frac{1}{\tau_{ph}} - 2\Gamma v_g g_0(N) = \frac{R_{sp}}{P_0}    (1.2.40)

where P0 is the photon density of the main mode. Substituting the main-mode gain margin of Eq. (1.2.40) into Eq. (1.2.39), the photon density of the mth mode is

P_m = \frac{R_{sp}}{\delta + 2\Gamma v_g\left[g_0(N) - g_m(N)\right]} = \frac{R_{sp}}{\delta + 2\Gamma v_g g_0(N)(m^2/M^2)}    (1.2.41)

The power ratio between the main mode and the mth mode is then

SMR = \frac{P_0}{P_m} = \frac{\delta + 2\Gamma v_g g_0(N)(m^2/M^2)}{\delta} = 1 + \frac{P_0}{R_{sp}}\,2\Gamma v_g g_0(N)\frac{m^2}{M^2}    (1.2.42)

Eq. (1.2.42) indicates, first of all, that the side-mode suppression ratio is proportional to m² because high-index modes are farther away from the main mode, and their gain is farther away from the threshold. In addition, the side-mode suppression ratio is proportional to the photon density of the main mode. The reason is that at a high photon density level, stimulated emission is predominantly higher than spontaneous emission; thus, the side modes, which benefit mainly from spontaneous emission, become weaker relative to the main mode.
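As a rough numerical illustration of Eq. (1.2.42), the Python sketch below evaluates the suppression of the first few side modes; all parameter values are illustrative assumptions, not taken from the text:

import numpy as np

Gamma = 0.3          # confinement factor, assumed
vg = 8.5e9           # group velocity (cm/s), assumed
g0 = 100.0           # main-mode material gain (cm^-1), assumed
P0 = 1e15            # main-mode photon density (cm^-3), assumed
Rsp = 1e21           # spontaneous emission rate (cm^-3 s^-1), assumed
M = 20               # number of modes on each side of the main mode, assumed

delta = Rsp / P0     # gain margin of the main mode, Eq. (1.2.40)
for m in range(1, 4):
    smr = 1 + 2 * Gamma * vg * g0 * (m**2 / M**2) / delta   # Eq. (1.2.42)
    print("mode m=%d: SMR = %.1f dB" % (m, 10 * np.log10(smr)))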

Turn-on delay In directly modulated laser diodes, when the injection current is suddenly switched on from below to above threshold, there is a time delay between the signal electrical pulse and the output optical pulse. This is commonly referred to as turn-on delay. Turn-on delay is mainly caused by the slow response of the carrier density below threshold. It needs a certain amount of time for the carrier density to build up and to reach the threshold level. To analyze this process, we have to start from the rate equation at the low injection level J1 below threshold, where photon density is very small, and the stimulated recombination term is negligible in Eq. (1.2.21):



\frac{dN}{dt} = \frac{J(t)}{qd} - \frac{N(t)}{\tau}    (1.2.43)

Suppose J(t) jumps from J1 to J2 at time t = 0. If J2 is above the threshold current level, the carrier density is expected to switch from N1 to a level very close to the threshold Nth, as shown in Fig. 1.2.12.

Fig. 1.2.12 Illustration of the laser turn-on delay. Injection current is turned on at t = 0 from J1 to J2, but both photon density and carrier density will require a certain time delay to build up toward their final values.

Eq. (1.2.43) can be integrated to find the time required for the carrier density to increase from N1 to Nth:

t_d = \int_{N_1}^{N_{th}}\left[\frac{J(t)}{qd} - \frac{N(t)}{\tau}\right]^{-1}dN = \tau\ln\left(\frac{J_2 - J_1}{J_2 - J_{th}}\right)    (1.2.44)

where J1 = qdN1/τ and Jth = qdNth/τ. Because the laser threshold is reached only for t ≥ td, the actual starting time of the laser output optical pulse is at t = td. In practical applications, this time delay td may limit the speed of optical modulation in optical systems. Since td is proportional to τ, a laser diode with a shorter spontaneous emission carrier lifetime may help reduce the turn-on delay. Another way to reduce the turn-on delay is to bias the low-level injection current J1 very close to the threshold. However, this may result in a poor extinction ratio of the output optical pulse.

Small-signal modulation response
In a semiconductor laser, both the optical power and the operation wavelength can be modulated by the injection current. In Section 1.2.5, we showed that the modulation speed of an LED is inversely proportional to the carrier lifetime. For a laser diode operating above threshold, the modulation speed is expected to be much faster than that of an LED because of the contribution of stimulated recombination. When a laser is modulated by a small current signal δJ(t) around a static operation point Js: J = Js + δJ(t), the carrier


density will be N = Ns + δN(t), where Ns and δN(t) are the static and small-signal responses, respectively, of the carrier density. Rate Eq. (1.2.21) can be linearized for the small-signal response as

\frac{d\,\delta N(t)}{dt} = \frac{\delta J(t)}{qd} - \frac{\delta N(t)}{\tau} - 2\Gamma v_g aP\,\delta N(t)    (1.2.45)

Here, for simplicity, we have assumed that the impact of photon density modulation is negligible. Eq. (1.2.45) can be easily solved in the frequency domain as

\delta\tilde{N}(\omega) = \frac{\delta\tilde{J}(\omega)}{qd}\cdot\frac{1}{j\omega + 1/\tau + 2\Gamma v_g aP}    (1.2.46)

where \delta\tilde{J}(\omega) and \delta\tilde{N}(\omega) are the Fourier transforms of δJ(t) and δN(t), respectively. If we define an effective carrier lifetime τeff such that

\frac{1}{\tau_{eff}} = \frac{1}{\tau} + 2\Gamma v_g aP    (1.2.47)

the 3-dB modulation bandwidth of the laser will be B3dB = 1/τeff. For a laser diode operating well above threshold, stimulated recombination is much stronger than spontaneous recombination, i.e., 2ΓvgaP ≫ 1/τ, and therefore, τeff ≪ τ. This is the major reason that the modulation bandwidth of a laser diode is much larger than that of an LED. In this simplified modulation response analysis, we have assumed that the photon density is a constant, and therefore, there is no coupling between the carrier density rate equation and the photon density rate equation. A more precise analysis has to solve the coupled rate equations. A direct consequence of the coupling between the carrier density and the photon density is that, for a sudden increase of injection current, the carrier density will first increase, which will increase the photon density. But the photon density increase tends to reduce the carrier density through stimulated recombination. Therefore, there can be an oscillation of both carrier density and photon density immediately after the injection current is switched on. This is commonly referred to as relaxation oscillation. Detailed analysis of laser modulation can be found in Agrawal (2012). A unique characteristic of a semiconductor laser is that, in addition to direct intensity modulation, its oscillation frequency can also be modulated by the injection current. This frequency modulation originates from the carrier density-dependent refractive index of the material within the laser cavity. Since the refractive index is a parameter of the laser phase condition shown in Eq. (1.2.14), a change of the refractive index will change the resonance wavelength of the laser cavity. A direct modulation of a laser diode by the injection current will therefore introduce both intensity modulation and phase modulation. This optical phase modulation is usually referred to as chirp. The ratio between the rate of change of the emitted optical field phase and the normalized rate of change of the photon density is defined by the well-known linewidth enhancement factor αlw as (Henry, 1982)


\alpha_{lw} = 2P\frac{d\varphi/dt}{dP/dt}    (1.2.48)

and therefore, the optical frequency shift is related to the photon density modulation as

\delta f = \frac{d\varphi}{dt} = \frac{\alpha_{lw}}{2P}\frac{dP}{dt}    (1.2.49)

αlw is an important parameter of the laser diode, which is determined both by the semiconductor material and by the laser cavity structure. For intensity modulation-based optical systems, lasers with smaller chirp are desired to minimize the spectral width of the modulated optical signal. On the other hand, for optical frequency modulation-based systems such as frequency-shift keying (FSK), lasers with large chirp can be beneficial.
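To make the turn-on delay of Eq. (1.2.44) and the effective-lifetime-limited modulation bandwidth of Eq. (1.2.47) concrete, the short Python sketch below evaluates both for an assumed set of operating parameters (all numerical values are illustrative assumptions):

import numpy as np

tau = 2e-9            # spontaneous carrier lifetime (s), assumed
Gamma = 0.3           # confinement factor, assumed
vg = 8.5e9            # group velocity (cm/s), assumed
a = 5e-16             # differential gain (cm^2), assumed
P = 5e15              # photon density above threshold (cm^-3), assumed

# Turn-on delay, Eq. (1.2.44): bias J1 and drive J2 given relative to Jth
J_th = 1.0                          # normalize Jth to 1
J2 = 2.0                            # drive level, twice threshold (assumed)
for J1 in (0.1, 0.5, 0.9):          # low-level bias as a fraction of Jth
    td = tau * np.log((J2 - J1) / (J2 - J_th))
    print("J1 = %.1f Jth: turn-on delay = %.2f ns" % (J1, td * 1e9))

# Effective carrier lifetime and modulation bandwidth, Eq. (1.2.47)
tau_eff = 1.0 / (1.0/tau + 2*Gamma*vg*a*P)
print("tau_eff = %.1f ps, B_3dB ~ %.1f GHz" % (tau_eff*1e12, 1e-9/tau_eff))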

Laser noises
Relative intensity noise (RIN)
In semiconductor lasers, the output optical power may fluctuate due to the existence of spontaneous emission, thus producing intensity noise. Since the quality of a laser output depends on the ratio between the noise power and the total optical power, a commonly used measure of laser intensity noise is the relative intensity noise (RIN), which is defined as

RIN = \frac{S_P(\omega)}{P_{opt}^2}    (1.2.50)

where SP(ω) is the intensity noise power spectral density and Popt is the total optical power. Obviously, SP(ω) increases with the increase of the total optical power. RIN is a convenient way to characterize laser quality. Generally, RIN is a function of frequency; it peaks around the relaxation oscillation frequency because of the interaction between carrier density and photon density. The unit of RIN is (Hz⁻¹), or (dB/Hz) as a relative measure.

Phase noise
Phase noise is a measure of the spectral purity of the laser output. Fig. 1.2.13 shows that a spontaneous emission event not only generates intensity variation but also produces phase variation.

Fig. 1.2.13 Optical field vector diagram. Illustration of optical phase noise generated due to spontaneous emission events.

The spectral width caused by phase noise is commonly referred


to as the spectral linewidth, which is proportional to the rate of spontaneous emission and inversely proportional to the photon density: Δω ∝ Rsp/(2P). In contrast to other types of lasers, a unique feature of the semiconductor laser is the dependency of the optical phase on the photon density through the linewidth enhancement factor αlw, as shown by Eq. (1.2.48). The photon density variation introduced by each spontaneous emission event will cause a change in the optical phase through the change of the carrier density. This effect turns out to be much stronger than the direct phase noise process illustrated in Fig. 1.2.13, and the overall linewidth expression of a laser diode is

\Delta\omega = \frac{R_{sp}}{2P}\left(1 + \alpha_{lw}^2\right)    (1.2.51)

where the second term is the contribution of the photon density-dependent refractive index. This is where the term linewidth enhancement factor came from. For typical semiconductor lasers, the value of αlw varies between 2 and 6; therefore, it enhances the laser linewidth by 4 to 36 times (Henry, 1982).

Mode partition noise The output from a semiconductor laser can have multiple longitudinal modes as shown in Fig. 1.2.9 if the material gain profile is wide enough. All these longitudinal modes compete for carrier density from a common pool. Although several different modes may have similar gain, the winning mode will consume most of the carrier density, and thus, the power of other modes will be suppressed. Since the values of gain seen by different modes are not very different, spontaneous emission noise, external reflection, or temperature change may introduce a switch from one mode to another mode. This mode hopping is random and is usually associated with intensity fluctuation. In addition, if the external optical system has wavelength-dependent loss, this mode hopping will inevitably introduce additional intensity noise for the system.

1.2.7 Single-frequency semiconductor lasers
So far, we have only considered laser diodes in which the resonator consists of two parallel mirrors. This simple structure is called a Fabry-Perot resonator, and lasers made with this structure are usually called Fabry-Perot lasers, or simply FP lasers. An FP laser diode usually operates with multiple longitudinal modes because the phase condition can be met by a large number of wavelengths and the reflectivity of the mirrors is not wavelength selective. In addition to mode partition noise, multiple longitudinal modes occupy a wide optical bandwidth, which results in poor bandwidth efficiency and low tolerance to chromatic dispersion of the optical system. The definition of a single-frequency laser can be confusing. An absolute single-frequency laser does not exist because of phase noise and frequency noise. A single-frequency laser diode may simply be a laser diode with a single longitudinal mode.


A more precise definition of a single-frequency laser is a laser that not only has a single mode but whose mode also has a very narrow spectral linewidth. To achieve single-mode operation, the laser cavity has to have a special wavelength selection mechanism. One way to introduce wavelength selectivity is to add a grating along the active layer, which is called distributed feedback (DFB). The other way is to add an additional mirror outside the laser cavity, which is referred to as an external cavity.

1.2.7.1 DFB laser diode
A DFB laser diode is a very popular device that is widely used in optical communication systems. Fig. 1.2.14 shows the structure of a DFB laser, where a corrugated grating is written just outside the active layer, providing a periodic refractive index perturbation (Kogelnik and Shank, 1972). As in an FP laser, the lightwave resonating within the cavity is composed of two counter-propagating waves, as shown in Fig. 1.2.14B. However, in the DFB structure, the index grating creates a mutual coupling between the two waves propagating in opposite directions, and therefore, mirrors on the laser facets are no longer needed to provide the optical feedback. Because the grating is periodic, constructive interference between the two waves happens only at certain wavelengths, which provides a mechanism of wavelength selection for the laser cavity. To resonate, the wavelength has to match the grating period, and the resonant condition of a DFB laser is thus

\lambda_g = 2n\Lambda    (1.2.52)

This is called the Bragg wavelength, where Λ is the grating pitch and n is the effective refractive index of the optical waveguide. For wavelengths away from the Bragg wavelength, the two counter-propagated waves do not enhance each other along the way; therefore, self-oscillation cannot be sustained for these wavelengths.

Fig. 1.2.14 (A) Structure of a DFB laser with a corrugated grating just outside the active layer and (B) an illustration of the two counter-propagating waves in the cavity.


Fig. 1.2.15 (A) A uniform DFB grating and (B) a DFB grating with a quarter-wave shift in the middle.

Another way to understand this distributed feedback is to treat the grating as an effective mirror. As shown in Fig. 1.2.15, from a reference point in the middle of the cavity looking left and right, the effect of the grating on each side can be viewed as an equivalent mirror, as in an FP laser cavity, with an effective reflectivity Reff. This effective reflectivity is frequency dependent, as

R_{eff} \propto \left|1 - \frac{\sin x}{x}\right|^2    (1.2.53)

where x = πLc(λ − λg)/(2vgλg²), vg is the group velocity, c is the speed of light, and L is the cavity length. If the grating is uniform, this effective reflectivity has two major resonance peaks separated by a deep stop band. As a result, a conventional DFB laser generally has two degenerate longitudinal modes, and the wavelength separation between these two modes is Δλ = 4vgλg²/(Lc). Although residual reflectivity from the laser facets may help to suppress one of the two degenerate modes by breaking the symmetry of the transfer function, this is usually not reliable enough for mass production. The most popular technique to ensure single-mode operation is to add a quarter-wave shift in the middle of the Bragg grating, as shown in Fig. 1.2.15B. This λ/4 phase shift introduces a phase discontinuity in the grating and results in a strong reflection peak at the middle of the stopband, as shown in Fig. 1.2.16B. This ensures single longitudinal mode operation of the laser diode at the Bragg wavelength.

Fig. 1.2.16 Effective reflectivity spectra of (A) a uniform DFB grating and (B) a DFB grating with a quarter-wave shift in the middle.

1.2.7.2 External cavity laser diode
The operation of a semiconductor laser is sensitive to external optical feedback (Lang and Kobayashi, 1980). Even a −40 dB optical feedback is enough to bring a laser from single-frequency operation into chaos. Therefore, an optical isolator has to be used at the output of a laser diode to prevent optical feedback from external optical interfaces. On the other hand, precisely controlled external optical feedback can be used to create wavelength-tunable lasers with very narrow spectral linewidth.


The configuration of a grating-based external cavity laser is shown in Fig. 1.2.17, where the laser facet reflectivities are R1 and R2 and the external grating has a wavelength-dependent reflectivity R3(ω).

Fig. 1.2.17 Configuration of an external cavity semiconductor laser, where the external feedback is provided by a reflective grating.

In this complex cavity configuration, the reflectivity R2 of the facet facing the external cavity has to be replaced by an effective reflectivity Reff, as shown in Fig. 1.2.17,

R_{eff}(\omega) = \left\{\sqrt{R_2} + (1 - R_2)\sqrt{R_3(\omega)}\sum_{m=1}^{\infty}\left[R_2R_3(\omega)\right]^{\frac{m-1}{2}}e^{jm\omega\tau_e}\right\}^2    (1.2.54)

If the external feedback is small enough (R3 ≪ 1), only one roundtrip needs to be considered in the external cavity. Then, Eq. (1.2.54) can be simplified as

R_{eff}(\omega) \approx R_2\left\{1 + \frac{(1 - R_2)\sqrt{R_3(\omega)}}{\sqrt{R_2}}\right\}^2    (1.2.55)

Then, the mirror loss αm shown in Eq. (1.2.23) can be modified by replacing R2 with Reff. Fig. 1.2.18 illustrates the contributions of the various loss terms: α1 is the reflection loss of the grating, which is wavelength selective, and α2 and α3 are the resonance losses between R1 and R2 and between R2 and R3, respectively. Combining these three contributions, the total wavelength-dependent loss αm has only one strong low-loss wavelength, which determines the lasing wavelength.

Fig. 1.2.18 Illustration of resonance losses between R1 and R2 (α2) and between R2 and R3 (α3). α1 is the reflection loss of the grating, and αm is the combined mirror loss. Lasing threshold is reached only by one mode at λL.

In practical external cavity laser applications, an

antireflection coating is used on the laser facet facing the external cavity to reduce R2, and the wavelength dependency of both α1 and α2 can be made very small compared to that of the grating; therefore, a large wavelength tuning range can be achieved by rotating the angle of the grating while maintaining single longitudinal mode operation. External optical feedback not only helps to obtain wavelength tuning but also changes the spectral linewidth of the emission. The linewidth of an external cavity laser can be expressed as

\Delta\nu = \frac{\Delta\nu_0}{1 + k\cos\left(\omega_0\tau_e + \tan^{-1}\alpha_{lw}\right)}    (1.2.56)

where Δν0 is the linewidth of the laser diode without external optical feedback, ω0 is the oscillation angular frequency, αlw is the linewidth enhancement factor, and k represents the strength of the optical feedback. When the feedback is not very strong, this feedback strength can be expressed as

k = \frac{\tau_e}{\tau}\cdot\frac{(1 - R_2)\sqrt{R_3}}{\sqrt{R_2}}\sqrt{1 + \alpha_{lw}^2}    (1.2.57)

where τ = 2nL/c and τe = 2neLe/c are the roundtrip delays of the laser cavity and the external cavity, respectively, with L and Le the lengths of the laser cavity and the external cavity. n and ne are the refractive indices of the two cavities. Eq. (1.2.56) shows that the linewidth of the external cavity laser depends on the phase of the external feedback. To obtain a narrow linewidth, precise control of the external


cavity length is critical; a mere λ/2 variation in the length of the external cavity can change the linewidth from its minimum to its maximum. This is the reason why an external cavity has to meet very stringent mechanical stability requirements. An important observation from Eq. (1.2.57) is that the maximum linewidth reduction is proportional to the cavity length ratio Le/L. This is because there is no optical propagation loss in the external cavity, and the photon lifetime is increased by increasing the external cavity length. In addition, when photons travel in the external cavity, there is no power-dependent refractive index; this is the reason for including the factor √(1 + αlw²) in Eq. (1.2.57). In fact, if the antireflection coating is perfect such that R2 = 0, this ideal external cavity laser can be termed an extended-cavity laser because it becomes a two-section cavity with one of the sections passive. With R2 = 0, α2 and α3 in Fig. 1.2.18 become wavelength independent; in this case, the laser operation becomes very stable, and the linewidth is no longer a function of the phase of the external optical feedback. The extended-cavity laser diode linewidth can simply be expressed as (Hui and Tao, 1989)

\Delta\nu = \frac{\Delta\nu_0}{1 + \frac{\tau_e}{\tau}\sqrt{1 + \alpha_{lw}^2}}    (1.2.58)
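A quick numerical feel for Eqs. (1.2.56)–(1.2.58) can be obtained with the Python sketch below, which evaluates the feedback strength k and the resulting linewidths for an assumed parameter set (all values are illustrative assumptions):

import numpy as np

c = 3e8
n, L = 3.5, 300e-6        # laser cavity index and length (assumed)
ne, Le = 1.0, 0.05        # external cavity index and length, 5 cm (assumed)
R2, R3 = 0.05, 0.2        # AR-coated facet and grating reflectivities (assumed)
alpha_lw = 4.0            # linewidth enhancement factor (assumed)
dnu0 = 10e6               # solitary laser linewidth, 10 MHz (assumed)

tau = 2 * n * L / c       # laser cavity roundtrip delay
tau_e = 2 * ne * Le / c   # external cavity roundtrip delay

# Feedback strength, Eq. (1.2.57)
k = (tau_e / tau) * (1 - R2) * np.sqrt(R3) / np.sqrt(R2) * np.sqrt(1 + alpha_lw**2)
# Best-case linewidth of Eq. (1.2.56), taking cos(...) = 1
dnu_ext = dnu0 / (1 + k)
# Extended-cavity case (R2 -> 0), Eq. (1.2.58)
dnu_ideal = dnu0 / (1 + (tau_e / tau) * np.sqrt(1 + alpha_lw**2))
print("k = %.0f, external-cavity linewidth ~ %.1f kHz, extended-cavity ~ %.1f kHz"
      % (k, dnu_ext / 1e3, dnu_ideal / 1e3))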

Grating-based external cavity lasers are commercially available, and they are able to provide a >60 nm continuous wavelength tuning range in a 1550-nm wavelength window.

When incoming photons have energy higher than the bandgap of the semiconductor (hν > Eg), they may break up initially neutral electron-hole pairs into free electrons and holes. Then, under the influence of the electric field, the holes (electrons) move to the right (left) and create an electrical current flow, known as a photocurrent.

Fig. 1.3.1 Band diagram of a pn junction with reverse bias.

Since the p-region and the n-region are both highly conductive and the electrical field is built up only within the depletion region, where the photons are absorbed to generate the photocurrent, it is desirable to have a thick depletion layer so that the photon absorption efficiency can be improved. This is usually accomplished by adding an undoped (or very lightly doped) intrinsic layer between the p-type and the n-type layers to form the so-called PIN structure. In this way, the depletion region can be made thick, which is determined in the fabrication. Typically, the thickness of the intrinsic layer is on the order of 100 μm. Another desired property of the photodiode is a sufficiently large optical window to accept the incoming photons. A practical photodiode geometry is shown in

Fig. 1.3.2, where the optical signal comes from the p-type side of the wafer, and the optical window size can be large, independent of the thickness of the intrinsic layer. The photon absorption process in a PIN structure is illustrated in Fig. 1.3.3, where WP, WN, and WI are the thicknesses of p-type, n-type, and the intrinsic layers, respectively, and the absorption coefficients are αp, αn, and αi for these three regions.

Fig. 1.3.2 Geometry of a typical PIN photodetector, where the optical signal is injected from the p-type side.

Fig. 1.3.3 Illustration of photon absorption in a PIN structure.

Assume that each photon absorbed within the intrinsic layer produces an electrical carrier, whereas the photons absorbed outside the intrinsic layer are lost; the quantum efficiency η can then be defined by the ratio between the number of electrical carriers generated and the number of photons injected:

\eta = (1 - R)\exp\left(-\alpha_p W_P\right)\left[1 - \exp\left(-\alpha_i W_I\right)\right]    (1.3.1)

where R is the surface reflectivity of the device. Obviously, a necessary condition for Eq. (1.3.1) is that each incoming photon has a higher energy than the bandgap (hν > Eg); otherwise, η is equal to zero because no carriers can be generated.
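The Python sketch below evaluates Eq. (1.3.1) together with the responsivity of Eq. (1.3.2) in the next subsection; the layer thicknesses, absorption coefficients, and surface reflectivity are illustrative assumptions, not values given in the text:

import numpy as np

# Illustrative assumptions for a PIN structure
R_surf = 0.04          # surface reflectivity (with AR coating), assumed
alpha_p = 1e4          # p-layer absorption coefficient (cm^-1), assumed
alpha_i = 0.68e4       # intrinsic-layer absorption coefficient (cm^-1), assumed
W_P = 0.2e-4           # p-layer thickness (cm), assumed
W_I = 3e-4             # intrinsic-layer thickness (cm), assumed

# Quantum efficiency, Eq. (1.3.1)
eta = (1 - R_surf) * np.exp(-alpha_p * W_P) * (1 - np.exp(-alpha_i * W_I))
# Responsivity, Eq. (1.3.2), at 1550 nm
q, h, c, lam = 1.6e-19, 6.626e-34, 3e8, 1550e-9
resp = eta * q * lam / (h * c)
print("eta = %.2f, responsivity = %.2f A/W" % (eta, resp))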

1.3.2 Responsivity and bandwidth
As discussed previously, the quantum efficiency defines the number of electrical carriers generated for every incoming photon; the responsivity R, on the other hand, defines how many milliamperes of photocurrent can be generated for every milliwatt of signal optical power. If each photon generates a carrier, this efficiency will be R = q/(hν) = qλ/(hc), where q is the electron charge, hν is the photon energy, c is the speed of light, and λ is the signal wavelength. Considering the nonideal quantum efficiency η, the photodiode responsivity will be

R = \frac{I\ (\mathrm{mA})}{P\ (\mathrm{mW})} = \eta\frac{q}{h\nu} = \eta\frac{q\lambda}{hc}    (1.3.2)

It is interesting to note that the responsivity is linearly proportional to the wavelength of the optical signal. With the increase of wavelength, the energy per photon becomes smaller, and each photon is still able to generate a carrier, but with a lower energy. Therefore, the responsivity becomes higher at longer wavelengths. However, when the wavelength is too long and the photon energy is too low, the responsivity suddenly drops to zero because the necessary condition hν > Eg is no longer satisfied, as shown in Fig. 1.3.4.

Fig. 1.3.4 Photodiode responsivity versus wavelength.

The longest wavelength at which a photodiode still has nonzero responsivity is called the cutoff wavelength, which is λc = hc/Eg, where Eg is the bandgap of the

Fundamentals of optical devices

semiconductor material used to make the photodiode. Typically, a silicon-based photodiode has a cutoff wavelength at about 900 nm, and an InGaAs-based photodiode can extend the wavelength to approximately 1700 nm. For that reason, in optical communication systems at 1550 nm wavelengths, Si photodiodes cannot be used. For example, for a photodiode operating at 1550 nm wavelength, if the quantum efficiency is η ¼ 0.65, the responsivity can be easily found, using Eq. (1.3.2), as R  0:81 (mA/mW). The responsivity can be improved using an antireflection coating at the surface to reduce the reflectivity R. Another way to improve the responsivity is to increase the thickness of the intrinsic layer so that fewer photons can escape into the n-type layer, as shown in Fig. 1.3.3. However, this can result in a slower response speed of the photodiode. Now, let’s see why. Suppose the input optical signal is amplitude modulated as P ðt Þ ¼ P 0 1 + km ejωt (1.3.3) where P0 is the average optical power, ω is the modulation angular frequency, and km is the modulation index. The generated electrical carriers are then distributed along the z-direction within the intrinsic region, as shown in Fig. 1.3.3. Assume that carrier drift velocity is vn under the electrical field; the photocurrent density distribution will be J ðz, tÞ ¼ J 0 ½1 + km exp jωðt  z=vn Þ

(1.3.4)

where J0 is the average current density. The total electrical current will be the collection of contributions across the intrinsic layer thickness:

I(t) = \int_0^{W_I}J(z,t)\,dz = J_0W_I\left[1 + \frac{k_m}{j\omega\tau_t}\left(e^{j\omega\tau_t} - 1\right)e^{j\omega t}\right]    (1.3.5)

where τt = WI/vn is the carrier drift time across the intrinsic region. Neglecting the DC parts in Eqs. (1.3.3) and (1.3.5), the photodiode response can be obtained as

H(\omega) = \frac{I_{ac}}{P_{ac}} = \frac{\eta}{j\omega\tau_t}\left(e^{j\omega\tau_t} - 1\right) = \eta\,\exp\left(\frac{j\omega\tau_t}{2}\right)\mathrm{sinc}\left(\frac{\omega\tau_t}{2\pi}\right)    (1.3.6)

where η = J0/P0 is the average responsivity. The 3-dB bandwidth of |H(ω)|² can be easily found as

\omega_c = \frac{2.8}{\tau_t} = 2.8\frac{v_n}{W_I}    (1.3.7)

Clearly, increasing the thickness of the intrinsic layer will slow the response speed of the photodiode. The carrier drift speed is another critical parameter for detector speed. vn increases with the external bias voltage, but it saturates at approximately 8 × 10⁶ cm/s in silicon when the field strength is about 2 × 10⁴ V/cm. As an example,


for a silicon-based photodiode with a 10 μm intrinsic layer thickness, the cutoff bandwidth will be about fc ≈ 3.6 GHz. The carrier mobility in InGaAs is usually much higher than that in silicon; therefore, ultra-high-speed photodiodes can be made. Another important parameter affecting the speed of a photodiode is the junction capacitance, which can be expressed as

C_j = \frac{\varepsilon_i A}{W_I}    (1.3.8)

where εi is the permittivity of the semiconductor and A is the junction area. Usually, large-area photodiodes have lower speed due to their large junction capacitance.
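The transit-time-limited bandwidth of Eq. (1.3.7) and the junction capacitance of Eq. (1.3.8) can be estimated with the Python sketch below; the detector diameter, permittivity value, and 50-Ω load used for the RC estimate are illustrative assumptions following the silicon example in the text:

import numpy as np

vn = 8e6              # saturated drift velocity in Si (cm/s)
W_I = 10e-4           # intrinsic layer thickness, 10 um expressed in cm
eps0 = 8.85e-14       # vacuum permittivity (F/cm)
eps_r = 11.7          # relative permittivity of Si (assumed)
A = np.pi * (50e-4 / 2)**2   # junction area of a 50-um-diameter detector (cm^2), assumed

# Transit-time-limited 3-dB bandwidth, Eq. (1.3.7)
tau_t = W_I / vn
fc = 2.8 / (2 * np.pi * tau_t)
# Junction capacitance, Eq. (1.3.8), and RC-limited bandwidth for an assumed 50-ohm load
Cj = eps0 * eps_r * A / W_I
f_RC = 1 / (2 * np.pi * 50 * Cj)
print("transit-limited fc = %.1f GHz, Cj = %.3f pF, RC-limited f = %.1f GHz"
      % (fc / 1e9, Cj * 1e12, f_RC / 1e9))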

1.3.3 Electrical characteristics of a photodiode
The terminal electrical characteristic of a photodiode is similar to that of a conventional diode; its current-voltage relationship is shown in Fig. 1.3.5. The diode equation is

I_j = I_D\left[\exp\left(\frac{V}{xV_T}\right) - 1\right]    (1.3.9)

where VT = kT/q is the thermal voltage (VT ≈ 25 mV at room temperature), 1 < x < 2 is a device structure-related parameter, and ID is the reverse saturation current, which may range from picoamperes to nanoamperes, depending on the structure of the device.

Fig. 1.3.5 Photodiode current-voltage relationship.

When a photodiode is forward biased (please do not try this; it could easily damage the photodiode), current flows in the forward direction, which is exponentially proportional to the bias voltage. On the other hand, when the photodiode is reverse biased, the reverse current is approximately equal to the reverse saturation current ID when there is no optical signal received by the photodiode. With the increase of the signal optical power, the reverse current increases linearly, as described by Eq. (1.3.2). Reverse bias also helps increase the detection speed, as described in the last section. This is the normal

Fundamentals of optical devices

bias

bias

Optical signal

Optical signal Amp RL

Output voltage

(a)

RF

Amp

Output voltage

(b)

Fig. 1.3.6 Two often used preamplifier circuits: (A) a voltage amplifier and (B) a transimpedance amplifier.

operation region of a photodiode. When the reverse bias is too strong (V  VB), the diode may break down, where VB is the breakdown voltage. To construct an optical receiver, a photodiode has to be reverse biased, and the photocurrent has to be amplified. Fig. 1.3.6 shows two typical electrical circuit examples of optical receivers. Fig. 1.3.6A is a voltage amplifier, where the load resistance seen by the photodiode can be high. High load resistance makes the amplifier highly sensitive; we will see later that the thermal noise is relatively small. However, the frequency bandwidth of the amplifier, which is inversely proportional to the parasitic capacitance and the load resistance, may become narrow. On the other hand, in the transimpedance amplifier shown in Fig. 1.3.6B, the equivalent load resistance seen by the photodiode can be low. This configuration is usually used in optical receivers, which require high speed and wide bandwidth.

1.3.4 Photodetector noise and SNR
In optical communication systems and electro-optic measurement setups, the signal quality depends not only on the signal level itself but also on the signal-to-noise ratio (SNR). To achieve a high SNR, the photodiode must have high quantum efficiency and low noise. We have already discussed the quantum efficiency and the responsivity of a photodiode; in this section, we discuss the noise sources associated with photodetection. It is straightforward to find the expression of the signal photocurrent as

I_s(t) = RP_s(t)    (1.3.10)

where Is(t) is the signal photocurrent, Ps(t) is the signal optical power, and R is the photodiode responsivity as defined in Eq. (1.3.2). The major noise sources in a photodiode can be categorized as thermal noise, shot noise, and dark current noise. Because of the random nature of these noises, the best way to specify them is to use their statistical values, such as spectral density, power, and bandwidth. Thermal noise, which is generated by the load resistor, is a white noise. Within a spectral bandwidth B, the mean-square thermal noise current can be expressed as

\left\langle i_{th}^2\right\rangle = \frac{4kTB}{R_L}    (1.3.11)

where RL is the load resistance, k is Boltzmann's constant, and T is the absolute temperature. A large load resistance helps reduce the thermal noise; however, as discussed in the last section, the receiver bandwidth will be reduced by increasing the RC constant. In most high-speed optical receivers, 50 Ω is usually used as the standard load resistance.

Shot noise arises from the statistical nature of photodetection. For example, if 1 μW of optical power at 1550 nm wavelength is received by a photodiode, it means that statistically about 7.8 trillion photons hit the photodiode every second. However, these photons are not synchronized, and they arrive randomly. The generated photocurrent fluctuates as a result of this random nature of photon arrival. Shot noise is also a wideband noise, and the mean-square shot noise current is proportional to the signal photocurrent detected by the photodiode. Within a receiver bandwidth B, the mean-square shot noise current is

\left\langle i_{sh}^2\right\rangle = 2qI_sB = 2qRP_sB    (1.3.12)

Dark current noise is associated with the constant current that exists when no light is incident on the photodiode, which is the reason it is called dark current. As shown in Fig. 1.3.5, this dark current is the same as the reverse saturation current. Because of the statistical nature of the carrier generation process, this dark current also has a variance, and its mean-square value is

\left\langle i_{dk}^2\right\rangle = 2qI_DB    (1.3.13)

where ID is the dark current of the photodiode. Having discussed the signal and noise photocurrents, we can now put them together to discuss the SNR. The SNR is usually defined as the ratio of the signal electrical power to the noise electrical power. This is equivalent to the ratio of their mean-square currents,

SNR = \frac{\left\langle I_s^2(t)\right\rangle}{\left\langle i_{th}^2\right\rangle + \left\langle i_{sh}^2\right\rangle + \left\langle i_{dk}^2\right\rangle + \left\langle i_{amp}^2\right\rangle}    (1.3.14)

where ⟨i²amp⟩ is the equivalent mean-square noise current of the electrical preamplifier. In many optical systems, the optical signal is very weak when it reaches the receiver. In these cases, thermal noise is often the dominant noise. If we only consider thermal noise and neglect the other noise terms, the SNR will be simplified as

SNR_{thermal} = \frac{R^2R_L}{4kTB}P_s^2 \propto P_s^2    (1.3.15)

Here, the SNR is proportional to the square of the incident optical power.

Fundamentals of optical devices

On the other hand, if the incident optical signal is very strong, shot noise will become the dominant noise source. If we consider only shot noise and neglect other noises, the SNR will be SN Rshot ¼

R2 P 2s RP s ¼ ∝ Ps 2qRP s B 2qB

(1.3.16)

In this case, the SNR is linearly proportional to the incident optical power. Example 1.3 For a photodiode operating in a 1550 nm wavelength window with the following parameters: η ¼ 0.85, RL ¼ 50 Ω, ID ¼ 5 nA, T ¼ 300 K, and B ¼ 1GHz, find the output SNR versus incident optical power when separately considering various noise sources. Solution: This problem can be solved using Eqs. (1.3.10)–(1.3.13). As a result, Fig. 1.3.7 shows the calculated SNR versus the input optical power in dBm. In this example, the thermal noise remains the limiting factor to the SNR for most of the signal optical power region. When the signal power is higher than approximately 0 dBm, shot noise becomes the limiting factor. Also, the thermal noise-determined SNR has a slope of 2 dB/1 dB, whereas the shot noise-determined SNR has a slope of 1 dB/1 dB, as discussed in Eqs. (1.3.15) and (1.3.16). In this example, the impact of dark current noise is always negligible. In general, dark current noise is only significant in low-speed optical receivers where load resistance RL is very high and the thermal noise level is therefore very low.

Fig. 1.3.7 Decomposing SNR by its three contributing factors.
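A minimal Python sketch reproducing the calculation of Example 1.3 is given below; it evaluates the three SNR contributions of Eqs. (1.3.11)–(1.3.16) over a range of input powers (the responsivity is computed from η = 0.85 at 1550 nm, and the remaining values follow the example):

import numpy as np

q, h, c, kB = 1.6e-19, 6.626e-34, 3e8, 1.38e-23
lam, eta = 1550e-9, 0.85
RL, ID, T, B = 50.0, 5e-9, 300.0, 1e9
R = eta * q * lam / (h * c)              # responsivity (A/W)

P_dBm = np.arange(-30, 11, 5)            # input optical power (dBm)
Ps = 1e-3 * 10**(P_dBm / 10)             # in watts
Is = R * Ps                              # signal photocurrent, Eq. (1.3.10)

snr_th = 10*np.log10(Is**2 / (4*kB*T*B/RL))   # thermal-noise-limited SNR
snr_sh = 10*np.log10(Is**2 / (2*q*Is*B))      # shot-noise-limited SNR
snr_dk = 10*np.log10(Is**2 / (2*q*ID*B))      # dark-current-limited SNR
for p, a, b, d in zip(P_dBm, snr_th, snr_sh, snr_dk):
    print("%4d dBm: thermal %5.1f dB, shot %5.1f dB, dark %5.1f dB" % (p, a, b, d))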


1.3.4.1 Noise-equivalent power (NEP)
NEP is another useful parameter to specify a photodetector. NEP is defined as the minimum optical power required to obtain a unity SNR in a 1-Hz bandwidth. Usually, only thermal noise is considered in the definition of NEP. From Eq. (1.3.15), if we let R²RLPs²/(4kTB) = 1, it follows that

NEP = \frac{P_s}{\sqrt{B}} = \sqrt{\frac{4kT}{R^2R_L}}    (1.3.17)

According to this definition, the unit of NEP is (W/√Hz). Some manufacturers specify NEP for their photodiode products. Obviously, a small NEP is desired for high-quality photodiodes. As an example, at room temperature, for a photodetector operating in a 1550-nm wavelength window with quantum efficiency η = 0.85 and load resistance RL = 50 Ω, the NEP value is approximately NEP = 1.72 × 10⁻¹¹ W/√Hz. In an optical system requiring the measurement of low levels of optical signals, the NEP of the optical receiver has to be low enough to guarantee the required SNR and thus the accuracy of the measurement.
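The NEP number quoted above can be checked with a few lines of Python, using the same parameter values as in the text:

import numpy as np

q, h, c, kB = 1.6e-19, 6.626e-34, 3e8, 1.38e-23
eta, lam, RL, T = 0.85, 1550e-9, 50.0, 300.0
R = eta * q * lam / (h * c)              # responsivity (A/W)
NEP = np.sqrt(4 * kB * T / (R**2 * RL))  # Eq. (1.3.17)
print("NEP = %.2e W/sqrt(Hz)" % NEP)     # ~1.7e-11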

1.3.5 Avalanche photodiodes (APDs)
The typical responsivity of a PIN photodiode is limited to the level of approximately 1 mA/mW because the quantum efficiency η cannot be higher than 100%. To further increase the detection responsivity, avalanche photodiodes were introduced, in which the photocurrent is internally multiplied before going to the load. A very high electrical field is required to initiate the carrier multiplication; therefore, the bias voltage needed by an APD can be as high as 100 V. Under a very strong electrical field, an electron is able to gain sufficient kinetic energy so that it can knock several electrons loose from neutral electron-hole pairs; this is called ionization. These newly generated free electrons are also able to gain sufficient kinetic energy under the same electrical field and create more free carriers; this is commonly referred to as the avalanche effect. Fig. 1.3.8 illustrates this carrier multiplication process, where one input electron generates five output electrons and four holes.

Fig. 1.3.8 Illustration of the carrier multiplication process.

Fundamentals of optical devices

Fig. 1.3.9 shows a commonly used APD structure, where a highly resistive intrinsic semiconductor material is deposited as an epitaxial layer on a heavily doped p-type (p+) substrate. Then, a p-type implantation or diffusion is made on top of the intrinsic layer to create a thin p-type layer. Finally, another heavily doped n-type (n+) epitaxial layer is deposited on the top. Because the charge density at the n+ p junction suddenly changes the sign, the built-in electrical field is very strong. This electrical field density is further intensified when a high reverse biasing across the device is applied. When a photon is absorbed in the intrinsic region, it generates a free carrier. The electron moves toward the avalanche region under the applied electrical field. Once it enters the avalanche region, it is quickly accelerated by the very strong electrical field within the region to initiate the ionization and avalanche process. The result of the avalanche process is that each input photon may generate multiple free electrons and holes. In this case, the responsivity expression of the photodiode needs to be modified as qλ (1.3.18) hc where MAPD is defined as the APD gain. Since the avalanche process depends on the applied external voltage, the APD gain strongly depends on the reverse bias. A commonly used simple expression of APD gain is RAPD ¼ M APD RPIN ¼ M APD η

M APD ¼

1 1  ðV =V B ÞnB

(1.3.19)

where nB is a parameter that depends on the device structure and the material. V is the applied reverse bias voltage, and VB is defined as the breakdown voltage of the APD, and Eq. (1.3.19) is valid only for V < VB. Obviously, when the reverse bias voltage approaches the breakdown voltage VB, the APD gain approaches infinite.

Fig. 1.3.9 (A) APD layer structure, (B) charge density distribution, and (C) electrical field density profile.


In addition to the APD gain, another important parameter of an APD is its frequency response. In general, the avalanche process slows the response time of the APD, and this bandwidth reduction is proportional to the APD gain. A simplified equation describing the frequency response of the APD gain is

M_{APD}(\omega) = \frac{M_{APD,0}}{\sqrt{1 + (\omega\tau_e M_{APD,0})^2}}    (1.3.20)

where MAPD,0 = MAPD(0) is the APD gain at DC as given by Eq. (1.3.19), and τe is an effective transit time, which depends on the thickness of the avalanche region and the speed of the carrier drift. Therefore, the 3-dB bandwidth of the APD gain is

f_c = \frac{1}{2\pi\tau_e M_{APD,0}}    (1.3.21)

In practical applications, the frequency bandwidth requirement has to be taken into account when choosing the APD gain. Due to carrier multiplication, the signal photocurrent is amplified by a factor MAPD as

I_{s,APD}(t) = RM_{APD}P_s(t)    (1.3.22)

As far as the noises are concerned, since the thermal noise is generated in the load resistor RL, it is not affected by the APD gain. However, both the shot noise and the dark current noise are generated within the photodiode, and they are enhanced by the APD gain. Within a receiver bandwidth B, the mean-square shot noise current in an APD is

\left\langle i_{sh,APD}^2\right\rangle = 2qRP_sBM_{APD}^2F(M_{APD})    (1.3.23)

The dark current noise in an APD is

\left\langle i_{dk,APD}^2\right\rangle = 2qI_DBM_{APD}^2F(M_{APD})    (1.3.24)

In both Eqs. (1.3.23) and (1.3.24), F(MAPD) is a noise figure associated with the random nature of the carrier multiplication process in the APD. This noise figure increases with the APD gain MAPD. The following simple expression is found to fit well with measured data for most practical APDs:

F(M_{APD}) = (M_{APD})^x    (1.3.25)

where 0 ≤ x ≤ 1, depending on the material. For commonly used semiconductor materials, x = 0.3 for Si, x = 0.7 for InGaAs, and x = 1 for Ge avalanche photodiodes. From a practical application point of view, the APD has advantages over a conventional PIN photodiode when the received optical signal is very weak and the receiver SNR is limited by thermal noise. In quantum-noise-limited optical receivers, such as coherent detection receivers, the APD should, in general, not be used, because it would increase the noise level and introduce extra limitations in the electrical bandwidth.
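The Python sketch below ties together Eqs. (1.3.19), (1.3.21), and (1.3.25): it evaluates the APD gain versus reverse bias, the corresponding gain-bandwidth trade-off, and the excess noise factor. The breakdown voltage, exponent nB, and effective transit time are illustrative assumptions for an InGaAs APD; only x = 0.7 comes from the text:

import numpy as np

VB = 40.0          # breakdown voltage (V), assumed
nB = 3.0           # structure-dependent exponent, assumed
tau_e = 2e-12      # effective transit time (s), assumed
x = 0.7            # excess-noise exponent for InGaAs (from the text)

for V in (20.0, 30.0, 35.0, 38.0):
    M = 1.0 / (1.0 - (V / VB)**nB)        # APD gain, Eq. (1.3.19)
    fc = 1.0 / (2 * np.pi * tau_e * M)    # 3-dB bandwidth of the gain, Eq. (1.3.21)
    F = M**x                              # noise figure, Eq. (1.3.25)
    print("V=%4.1f V: M=%5.1f, fc=%5.1f GHz, F(M)=%5.1f" % (V, M, fc / 1e9, F))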

1.3.6 APD used as single photon detectors
For an APD used as a linear photodetector, the APD is biased slightly below the breakdown threshold VBK. In this case, the photocurrent is linearly proportional to the signal optical power, and the APD gain is typically less than 25 dB. In circumstances where the level of the optical signal is extremely low, no more than one photon may arrive within a short time window. Photon counting is applicable in this type of situation, where an APD can be used as a single-photon detector (Jiang et al., 2007). Fig. 1.3.10 explains the operation principle of a single-photon detector, in which Np is the number of arriving photons, Ip is the photocurrent, and VB is the reverse bias voltage. For application as a single-photon counter, the reverse bias voltage of the APD is set to be higher than the breakdown voltage (VB > VBK) so that the detector operates in an unstable condition. Because of the strong impact ionization effect above the breakdown voltage, a single arriving photon is enough to trigger an avalanche breakdown of the APD and generate a large electric current at the milliampere level. In order to detect another photon that may arrive later, the APD has to be reset to the pre-breakdown state by temporarily reducing the bias to a level below the breakdown voltage. This operation is commonly referred to as the Geiger mode (Donnelly et al., 2006). The number of photons can be measured by counting the number of photocurrent pulses. The reset window has to be placed immediately after each photo-induced breakdown event, and the width of the reset window, tres, has to be small enough to allow high-speed photon counting. This bias reset is commonly referred to as quenching. In the example shown in Fig. 1.3.10, the third photon is not counted because its arrival time is within the bias reset window.

Fig. 1.3.10 Operation principle of a single-photon detector. Np: number of photons received, Ip: photocurrent, VB: applied biasing voltage.

In practical applications, the resetting of the electric bias



has to be triggered by each photodetection event, which can be achieved by either passive quenching or active quenching, as shown in Fig. 1.3.11. Passive quenching is accomplished at the component level by using a series load resistor to provide a negative feedback. As shown in Fig. 1.3.11A, upon each avalanche event, a large photocurrent pulse is created, which results in a large voltage drop VR across the load resistor. Thus, the voltage applied to the APD is reduced to VS − VR, which is lower than the breakdown voltage VBD of the APD. On the other hand, active quenching, shown in Fig. 1.3.11B, is a circuit-level approach which uses an amplified negative feedback circuit to switch off the electric bias after each avalanche event. Active quenching usually allows better control of the reset pulse width and depth.

Fig. 1.3.11 Biasing circuits for passive (A) and active (B) quenching of a Geiger mode APD.

1.4 Optical fibers
The optical fiber is the most important component in fiber-optic communication systems as well as in many fiber-based optical measurement setups. The basic structure of an optical fiber is shown in Fig. 1.4.1; it has a central core, a cladding, and an external coating to protect and strengthen the fiber. Light guiding and propagation in an optical fiber are based on total internal reflection between the core and the cladding. Because of the extremely low attenuation in silica, optical signals can be transmitted over very long distances along an optical fiber without significant loss in optical power. To understand the mechanism of optical propagation in fiber, we start by discussing the fundamental concepts of lightwave reflection and refraction at an optical interface.


Fig. 1.4.1 Basic structure of an optical fiber.


1.4.1 Reflection and refraction
An optical interface is generally defined as a plane across which the optical properties are discontinuous. For example, a water surface is an optical interface because the refractive index suddenly changes from n = 1 in the air to n = 1.3 in the water. To simplify our discussion, the following assumptions have been made: (1) plane wave propagation, (2) linear medium, (3) isotropic medium, and (4) a smooth planar optical interface.

As illustrated in Fig. 1.4.2, an optical interface is formed between two optical materials with refractive indices of n1 and n2, respectively. A plane optical wave is projected onto the optical interface at an incident angle θ1 (with respect to the surface normal). The optical wave is linearly polarized, and its field amplitude vector can be decomposed into two orthogonal components, E//^i and E⊥^i, parallel and perpendicular to the plane of incidence. At the optical interface, part of the energy is reflected to the same side of the interface, and the other part is refracted across the interface. Snell's law,

n_2\sin\theta_2 = n_1\sin\theta_1    (1.4.1)

tells the propagation direction of the refracted lightwave with respect to that of the incident wave. The change of the wave propagation direction is proportional to the refractive index difference across the interface. Snell's law is derived based on the fact that the phase velocity along the z-direction should be continuous across the interface.

Fig. 1.4.2 Plane wave reflection and refraction at an optical interface.


Since the phase velocities in the z-direction at the two sides of the interface are vp1 = c/(n1 sin θ1) and vp2 = c/(n2 sin θ2), Eq. (1.4.1) is obtained by setting vp1 = vp2. Because Snell's law was obtained without any assumption about the lightwave polarization state or wavelength, it is independent of these parameters. An important implication of Snell's law is that θ2 > θ1 when n2 < n1.

1.4.1.1 Fresnel reflection coefficients
To find the strength and the phase of the optical field that is reflected to the same side of the interface, we have to treat the optical field components E//i and E⊥i separately. An important fact is that the optical field components parallel to the interface must be continuous at both sides of the interface. Let us first consider the field components E//i, E//t, and E//r. (They are all parallel to the incidence plane but not to the interface.) They can be decomposed into components parallel and perpendicular to the interface; the parallel components are E//i cos θ1, E//t cos θ2, and E//r cos θ1, respectively, as can be seen from Fig. 1.4.2. Because of the field continuity across the interface, we have

(E//i − E//r) cos θ1 = E//t cos θ2    (1.4.2)

At the same time, the magnetic field components associated with E//i, E//t, and E//r have to be perpendicular to the incidence plane, and they are H⊥i = √(ε1/μ1) E//i, H⊥t = √(ε2/μ2) E//t, and H⊥r = √(ε1/μ1) E//r, respectively, where ε1 and ε2 are the electrical permittivities and μ1 and μ2 are the magnetic permeabilities of the optical materials at the two sides of the interface. Since H⊥i, H⊥t, and H⊥r are all parallel to the interface (although perpendicular to the incidence plane), magnetic field continuity requires H⊥i + H⊥r = H⊥t. Assuming that μ1 = μ2, √ε1 = n1, and √ε2 = n2, we have

n1 E//i + n1 E//r = n2 E//t    (1.4.3)

Combining Eqs. (1.4.2) and (1.4.3), we can find the field reflectivity:

ρ// = E//r / E//i = (n1 cos θ2 − n2 cos θ1) / (n1 cos θ2 + n2 cos θ1)    (1.4.4)

Using Snell's law, Eq. (1.4.4) can also be written as

ρ// = [−n2² cos θ1 + n1 √(n2² − n1² sin²θ1)] / [n2² cos θ1 + n1 √(n2² − n1² sin²θ1)]    (1.4.5)

where variable θ2 is eliminated. Similar analysis can also find the reflectivity for optical field components perpendicular to the incident plane as


ρ⊥ = E⊥r / E⊥i = (n1 cos θ1 − n2 cos θ2) / (n1 cos θ1 + n2 cos θ2)    (1.4.6)

or, equivalently,

ρ⊥ = [n1 cos θ1 − √(n2² − n1² sin²θ1)] / [n1 cos θ1 + √(n2² − n1² sin²θ1)]    (1.4.7)

The power reflectivities for the parallel and perpendicular field components are therefore

R// = |ρ//|² = |E//r / E//i|²    (1.4.8)

and

R⊥ = |ρ⊥|² = |E⊥r / E⊥i|²    (1.4.9)

Then, according to energy conservation, the power transmission coefficients can be found as

T// = |E//t / E//i|² = 1 − |ρ//|²    (1.4.10)

and

T⊥ = |E⊥t / E⊥i|² = 1 − |ρ⊥|²    (1.4.11)

In practice, for an arbitrary incident polarization state, the input field can always be decomposed into E// and E⊥ components, and each can be treated independently.

1.4.1.2 Special cases
Normal incidence
This is the case in which the light is launched perpendicular to the material interface, so that θ1 = θ2 = 0 and cos θ1 = cos θ2 = 1. The field reflectivity then simplifies to

ρ// = ρ⊥ = (n1 − n2) / (n1 + n2)    (1.4.12)

Note that there is no phase shift between the incident and reflected fields if n1 > n2 (the phase of both ρ// and ρ⊥ is zero). On the other hand, if n1 < n2, there is a π phase shift for both ρ// and ρ⊥ because they both become negative. With normal incidence, the power reflectivity is

R// = R⊥ = |(n1 − n2) / (n1 + n2)|²    (1.4.13)
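As a quick numerical check of these formulas, the following short Python sketch evaluates the field reflectivities of Eqs. (1.4.5) and (1.4.7); the glass/air example at normal incidence is only an illustration.

```python
import numpy as np

def fresnel(n1, n2, theta1):
    """Field reflectivities of Eqs. (1.4.5) and (1.4.7); theta1 in radians.
    The square root is evaluated as a complex number so the same expressions
    remain valid beyond the critical angle (total internal reflection)."""
    s = np.sqrt(np.asarray(n2**2 - n1**2 * np.sin(theta1)**2, dtype=complex))
    rho_par = (-n2**2 * np.cos(theta1) + n1 * s) / (n2**2 * np.cos(theta1) + n1 * s)
    rho_perp = (n1 * np.cos(theta1) - s) / (n1 * np.cos(theta1) + s)
    return rho_par, rho_perp

# Normal incidence at a glass/air interface (n1 = 1.5, n2 = 1.0):
rho_par, rho_perp = fresnel(1.5, 1.0, 0.0)
print(abs(rho_par)**2, abs(rho_perp)**2)   # both approximately 0.04 (Eq. 1.4.13)
```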


This equation is very often used to evaluate optical reflections. For example, the reflection at an open fiber end is approximately 4%. This is because the refractive index of the fiber core is n1 ≈ 1.5 (silica fiber) and the refractive index of air is n2 = 1. Therefore,

R = |(n1 − n2) / (n1 + n2)|² = |(1.5 − 1) / (1.5 + 1)|² = 0.2² = 0.04 ≈ −14 dB

In practical optical measurement setups using optical fibers, if optical connectors are not properly terminated, the reflections from the fiber-end surfaces can cause significant measurement errors.

Critical angle
The critical angle is defined as the incidence angle θ1 at which total reflection happens at the interface. According to the Fresnel Eqs. (1.4.5) and (1.4.7), the only possibility for |ρ//|² = |ρ⊥|² = 1 is to have n2² − n1² sin²θ1 = 0, or

θ1 = θc = sin⁻¹(n2 / n1)    (1.4.14)

where θc is defined as the critical angle. Obviously, the existence of a critical angle depends on the interface condition: a real solution θc = sin⁻¹(n2/n1) can only be found if n1 > n2, and therefore, total reflection can only happen when a light beam is launched from a high-index material toward a low-index material. It is important to note that at a larger incidence angle θ1 > θc, n2² − n1² sin²θ1 < 0 and √(n2² − n1² sin²θ1) becomes imaginary. Eqs. (1.4.5) and (1.4.7) clearly show that if √(n2² − n1² sin²θ1) is imaginary, both |ρ//|² and |ρ⊥|² are equal to 1. The important conclusion is that for all incidence angles satisfying θ1 > θc, total internal reflection happens with R = 1.

Evanescent field: If the total reflection condition is satisfied, there is no optical power flow across the interface. However, because of the continuity constraint, the optical field on the other side of the interface does not suddenly drop to zero; this residual field is known as the evanescent field. As illustrated in Fig. 1.4.3A, when the incidence angle is smaller than the critical angle, θ1 < θc, the reflection is partial, and the optical field that propagates in the z-direction (normal to the interface) in the n2 medium can be described by E(z) = E0 e^(−jβz·z), where βz = (2π/λ) n2 cos θ2 is the propagation constant projected onto the z-direction. Based on Snell's law, this projected propagation constant can be expressed as βz = (2π/λ) √(n2² − n1² sin²θ1). The value of βz decreases with increasing incidence angle,



Fig. 1.4.3 Illustration of evanescent wave for (A) partial reflection with θ1 < θc and (B) total reflection with θ1 > θc.

and βz becomes zero when the incidence angle equals the critical angle (θ1 = θc). Increasing the incidence angle further, θ1 > θc, makes βz imaginary, βz = −j(2π/λ)√(n1² sin²θ1 − n2²) = −jα, where

α = (2π/λ) √(n1² sin²θ1 − n2²)

is defined as the attenuation parameter of the evanescent field on the n2 side of the interface, and the optical field is then

E(z) = E0 e^(−jβz·z) = E0 e^(−αz)

As illustrated in Fig. 1.4.3B, the penetration depth ze of the evanescent field across the interface is usually defined as the distance at which the field is reduced by 1/e, and thus ze = 1/α. This penetration depth depends not only on the values of n1 and n2 but also on the wavelength of the optical signal and the incidence angle. As an example, Fig. 1.4.4 shows the penetration depth as a function of the incidence angle for two different materials, glass (n1 = 1.5) and silicon (n1 = 3.5), in air (n2 = 1) at a signal wavelength of 1.5 μm.


Fig. 1.4.4 Penetration depth of evanescent field as the function of incidence angle, for air/glass interface and air/silicon interface.


This figure indicates that the evanescent field penetration depth ze is much shorter than the wavelength of the optical signal, especially when the index difference (n1 − n2) is large and the incidence angle approaches π/2. This unique property of tight field confinement has been utilized to make evanescent-field photonic biosensors, in which only the molecules immobilized on the waveguide surface are illuminated by the optical field (Taitt et al., 2005); this will be further discussed in Section 2.7.

1.4.1.3 Optical field phase shift between the incident and the reflected beams
(a) When θ1 < θc (partial reflection and partial transmission), both ρ// and ρ⊥ are real, and therefore, there is no phase shift for the reflected wave at the interface.
(b) When total internal reflection happens, θ1 > θc, √(n2² − n1² sin²θ1) is imaginary, and the Fresnel Eqs. (1.4.5) and (1.4.7) can be written as

ρ// = [−n2² cos θ1 + j n1 √(n1² sin²θ1 − n2²)] / [n2² cos θ1 + j n1 √(n1² sin²θ1 − n2²)]    (1.4.15)

ρ⊥ = [n1 cos θ1 − j √(n1² sin²θ1 − n2²)] / [n1 cos θ1 + j √(n1² sin²θ1 − n2²)]    (1.4.16)

Therefore, the phase shifts of the parallel and the perpendicular electrical field components are, respectively,

ΔΦ// = arg(E//r / E//i) = 2 tan⁻¹[√(n1² sin²θ1 − n2²) / (n1 cos θ1)]    (1.4.17)

ΔΦ⊥ = arg(E⊥r / E⊥i) = 2 tan⁻¹[n1 √((n1/n2)² sin²θ1 − 1) / (n2 cos θ1)]    (1.4.18)

This optical phase shift happens at the optical interface, and it has to be taken into account in optical waveguide design, as will be discussed later.

1.4.1.4 Brewster angle (total transmission, ρ// = 0)
Consider a light beam launched onto an optical interface. If the input electrical field is parallel to the incidence plane, there exists a specific incidence angle θB at which the reflection is zero, so that the energy is totally transmitted across the interface. This angle θB is defined as the Brewster angle. Consider the Fresnel Eq. (1.4.5) for the parallel field component: solving this equation for ρ// = 0 with θ1 as a free parameter, the only solution is tan(θ1) = n2/n1, and therefore, the Brewster angle is


θB = tan⁻¹(n2 / n1)    (1.4.19)

Two important points need to be noted: (1) The Brewster angle is only relevant for the polarization component whose electrical field vector is parallel to the incidence plane. For the perpendicular polarization component, total transmission never happens, no matter how θ1 is chosen. (2) ρ// = 0 happens only at one angle, θ1 = θB. This is very different from the critical angle, where total reflection happens for all incidence angles within the range θc < θ1 < π/2. The Brewster angle is often used to minimize optical reflections, and it can also be used for polarization selection. Fig. 1.4.5 shows an example of the optical field reflectivities ρ// and ρ⊥ and their corresponding phase shifts ΔΦ// and ΔΦ⊥ at an optical interface between two materials with n1 = 1.5 and n2 = 1.4. In this example, the critical angle is θc ≈ 69 degrees, and the Brewster angle is θB ≈ 43 degrees.
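The two characteristic angles of Fig. 1.4.5, together with the evanescent-field penetration depth discussed above, can be reproduced with a few lines of Python. The 80-degree incidence angle in the penetration-depth part is an arbitrary illustrative choice.

```python
import numpy as np

# Critical angle (Eq. 1.4.14) and Brewster angle (Eq. 1.4.19) for the
# interface of Fig. 1.4.5 (n1 = 1.5, n2 = 1.4).
n1, n2 = 1.5, 1.4
theta_c = np.degrees(np.arcsin(n2 / n1))   # about 69 degrees
theta_B = np.degrees(np.arctan(n2 / n1))   # about 43 degrees
print(f"theta_c = {theta_c:.1f} deg, theta_B = {theta_B:.1f} deg")

# Evanescent-field penetration depth z_e = 1/alpha at a glass/air interface
# (n1 = 1.5, n2 = 1.0) for a 1.5 um signal and an illustrative 80 deg incidence.
lam, n1g, n2a, theta1 = 1.5e-6, 1.5, 1.0, np.radians(80.0)
alpha = (2 * np.pi / lam) * np.sqrt(n1g**2 * np.sin(theta1)**2 - n2a**2)
print(f"z_e = {1/alpha*1e6:.2f} um")   # well below one wavelength
```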

1.4.2 Propagation modes in optical fibers


As mentioned earlier in this chapter, an optical fiber is a cylindrical glass bar with a core, a cladding, and an external coating, as shown in Fig. 1.4.1. To confine and guide the lightwave signal within the fiber core, total internal reflection is required at the core-cladding interface.


Fig. 1.4.5 Field reflectivities and phase shifts versus incidence angle. The optical interface is formed with n1 = 1.5 and n2 = 1.4.



Fig. 1.4.6 Illustration and index profiles of step-index and graded-index fibers.

According to what we have discussed in Section 1.4.1, this requires the refractive index of the core to be higher than that of the cladding. Practical optical fibers can be divided into two categories: step-index fibers and graded-index fibers. The index profiles of these two types of fibers are shown in Fig. 1.4.6. In a step-index fiber, the refractive index is n1 in the core and n2 in the cladding, with an index discontinuity at the core-cladding interface. A lightwave signal is bounced back and forth at this interface, which forms guided modes propagating in the longitudinal direction. In a graded-index fiber, on the other hand, the refractive index in the core gradually decreases along the radius. A generalized Fresnel analysis indicates that in a medium with a continuously varying index profile, a light trace always bends toward the region of higher refractive index. In fact, this graded-index profile creates a self-focusing effect within the fiber core to form an optical waveguide (Meumann, 1988). Although graded-index fibers form a unique category, they are usually made for multimode applications; the popular single-mode fibers are step-index fibers. Because of their popularity and simplicity, we will focus our analysis on step-index fibers. A rigorous description of wave propagation in optical fibers requires solving Maxwell's equations and applying the appropriate boundary conditions. In this section, we first use a geometric ray-trace approximation to provide a simplified analysis, which helps us understand the basic physics of wave propagation. Then, we present the electromagnetic field analysis, which provides precise mode-cutoff conditions.

1.4.2.1 Geometric optics analysis
In this geometric optics analysis, different propagation modes in an optical fiber can be seen as rays traveling at different angles. There are two types of light rays that can propagate along the fiber: skew rays and meridional rays. Fig. 1.4.7A shows an example of skew rays, which are not confined to any particular plane along the fiber.



Fig. 1.4.7 Illustration of fiber propagation modes in geometric optics: (A) skew ray trace and (B) meridional ray trace.

Although skew rays represent the general case of fiber modes, they are difficult to analyze. A simplified special case is the meridional rays shown in Fig. 1.4.7B, which are confined to the meridian plane containing the symmetry axis of the fiber. The analysis of meridional rays is relatively easy and provides a general picture of ray propagation along an optical fiber. Consider the meridional rays shown in Fig. 1.4.7B. This is a two-dimensional problem in which the optical field propagates in the longitudinal direction z and its amplitude varies in the transversal direction r. We define β1 = n1ω/c = 2πn1/λ (in rad/m) as the propagation constant in a homogeneous core medium with refractive index n1. Each fiber mode can be explained as a light ray traveling at a certain angle, as shown in Fig. 1.4.8. Therefore, for the ith mode propagating in the +z direction, the propagation constant can be decomposed into a longitudinal component βzi and a transversal component ki1 such that

β1² = βzi² + ki1²    (1.4.20)

Then, the optical field vector of the ith mode can be expressed as

Ei(r, z) = Ei0(r, z) exp{j(ωt − βzi z)}    (1.4.21)

where Ei0(r, z) is the field amplitude of the mode.



Fig. 1.4.8 Decomposition of the propagation vector β1 into longitudinal and transversal components.

Since the mode is propagating in the fiber core, both ki1 and βzi must be real. First, for ki1 to be real in the fiber core, we must have

ki1² = β1² − βzi² ≥ 0    (1.4.22)

The physical meaning of this real propagation constant in the transversal direction is that the lightwave propagates in the transverse direction but is bounced back and forth between the core-cladding interfaces. This creates a standing-wave pattern in the transverse direction, like in a resonant cavity. In addition, ki1 can only take discrete values because the standing-wave pattern in the transversal direction requires phase matching. This is the reason that the propagating optical modes in a fiber are discrete. Now let us look at what happens in the cladding. Because the optical mode is guided in the fiber core, there should be no energy propagating in the transversal direction in the cladding (otherwise, optical energy would leak away). Therefore, ki2 has to be imaginary in the cladding, that is,

ki2² = β2² − βzi² < 0    (1.4.23)

where the subscript 2 represents parameters in the cladding and β2 = n2ω/c = 2πn2/λ is the propagation constant in the homogeneous cladding medium. Note that since the optical field has to propagate with the same phase velocity in the z-direction in both the core and the cladding, βzi has the same value in both Eqs. (1.4.22) and (1.4.23). Eqs. (1.4.22) and (1.4.23) can be simplified as

βzi / β1 ≤ 1    (1.4.24)

and

βzi / β2 > 1    (1.4.25)


Bringing Eqs. (1.4.24) and (1.4.25) together with β2 = β1 n2/n1, we can find the necessary condition for a propagation mode:

1 ≥ βzi / β1 > n2 / n1    (1.4.26)

It is interesting to note that in Fig. 1.4.8, θi is, in fact, the incidence angle of the ith mode at the core-cladding interface. The triangle in Fig. 1.4.8 clearly shows that βzi/β1 = sin θi. This turns Eq. (1.4.26) into 1 ≥ sin θi > n2/n1, which is the same as the definition of the critical angle given by Eq. (1.4.14). The concept of discrete propagation modes comes from the fact that the transversal propagation constant ki1 in the fiber core can only take discrete values in order to satisfy the standing-wave condition in the transverse direction. Since β1 is a constant, the propagation constant in the z-direction, βzi² = β1² − ki1², can only take discrete values as well. Equivalently, the ray angle θi can only take discrete values within the range defined by 1 ≥ sin θi > n2/n1.

The geometric optics description given here is simple, and it qualitatively explains the general concept of fiber modes. However, it is not adequate for obtaining quantitative mode field profiles and cutoff conditions. Therefore, electromagnetic field theory has to be applied by solving Maxwell's equations with the appropriate boundary conditions, which we discuss next.

1.4.2.2 Mode analysis using electromagnetic field theory
Mode analysis in optical fibers can be accomplished more rigorously by solving Maxwell's equations and applying the boundary conditions defined by the fiber geometry and parameters. We start with the classical Maxwell's equations,

∇ × E = −μ ∂H/∂t    (1.4.27)

∇ × H = ε ∂E/∂t    (1.4.28)

The complex electrical and magnetic fields are represented by their amplitudes and phases,

E(t, r) = E0 exp{j(ωt − k·r)}    (1.4.29)

H(t, r) = H0 exp{j(ωt − k·r)}    (1.4.30)

Since the fiber material is passive and there is no generation source within the fiber,

∇·E = 0    (1.4.31)


Using the vector identity

∇ × (∇ × E) ≡ ∇(∇·E) − ∇²E = −∇²E    (1.4.32)

and combining Eqs. (1.4.27)–(1.4.32) yields

∇ × (∇ × E) = −jωμ (∇ × H) = −jωμ (jωε E)    (1.4.33)

and the Helmholtz equation,

∇²E + ω²με E = 0    (1.4.34)

Similarly, a Helmholtz equation can also be obtained for the magnetic field H:

∇²H + ω²με H = 0    (1.4.35)

The next task is to solve the Helmholtz equations for the electrical and magnetic fields. Because the geometric shape of an optical fiber is cylindrical, we can take advantage of this axial symmetry to simplify the analysis by using cylindrical coordinates. In cylindrical coordinates, the electrical field can be decomposed into radial, azimuthal, and longitudinal components, E = ar·Er + aφ·Eφ + az·Ez and H = ar·Hr + aφ·Hφ + az·Hz, where ar, aφ, and az are unit vectors. With this separation, the Helmholtz Eqs. (1.4.34) and (1.4.35) can be decomposed into separate equations for Er, Eφ, Ez, Hr, Hφ, and Hz. However, these components are not completely independent. In fact, classic electromagnetic theory indicates that in cylindrical coordinates the transversal field components Er, Eφ, Hr, and Hφ can be expressed as combinations of the longitudinal field components Ez and Hz. This means that Ez and Hz need to be determined first, and then all other field components can be found. In cylindrical coordinates, the Helmholtz equation for Ez is

∂²Ez/∂r² + (1/r) ∂Ez/∂r + (1/r²) ∂²Ez/∂φ² + ∂²Ez/∂z² + ω²με Ez = 0    (1.4.36)

Since Ez = Ez(r, φ, z) is a function of r, φ, and z, Eq. (1.4.36) cannot be solved analytically as it stands. We assume a standing wave in the azimuthal direction and a propagating wave in the longitudinal direction; then, the variables can be separated as

Ez(r, φ, z) = Ez(r) e^(−jlφ) e^(−jβz·z)    (1.4.37)

where l = 0, ±1, ±2, … is an integer. Substituting Eq. (1.4.37) into Eq. (1.4.36), we obtain a one-dimensional wave equation:

∂²Ez(r)/∂r² + (1/r) ∂Ez(r)/∂r + (n²ω²/c² − βz² − l²/r²) Ez(r) = 0    (1.4.38)


This is commonly referred to as a Bessel equation because its solutions can be expressed in terms of Bessel functions. For a step-index fiber with a core radius a, the index profile can be expressed as

n = n1 (r ≤ a);  n = n2 (r > a)    (1.4.39)

where we have assumed that the diameter of the cladding is infinite. The Bessel Eq. (1.4.38) has solutions only for discrete βz values, which correspond to discrete modes. The general solutions of the Bessel Eq. (1.4.38) can be expressed in Bessel functions as

Ez(r) = A·Jl(Ulm·r) + A′·Yl(Ulm·r)  (r ≤ a);  Ez(r) = C·Kl(Wlm·r) + C′·Il(Wlm·r)  (r > a)    (1.4.40)

where Ulm² = β1² − βz,lm² and Wlm² = βz,lm² − β2² represent the equivalent transversal propagation constants in the core and cladding, respectively, with β1 = n1ω/c and β2 = n2ω/c as defined before, and βz,lm is the propagation constant in the z-direction. This is similar to the vectorial relation of propagation constants shown in Eq. (1.4.20) in the geometric optics analysis. However, we now have two mode indices, l and m; their physical meanings are the numbers of amplitude maxima of the standing-wave patterns in the azimuthal and the radial directions, respectively. In Eq. (1.4.40), Jl and Yl are the first and the second kind of Bessel functions of the lth order, and Kl and Il are the first and the second kind of modified Bessel functions of the lth order. Their values are shown in Fig. 1.4.9. A, A′, C, and C′ in Eq. (1.4.40) are constants that need to be determined from the appropriate boundary conditions.
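To visualize the two pieces of Eq. (1.4.40), the short sketch below evaluates Jl inside the core and Kl in the cladding with SciPy and matches their amplitudes at r = a. The values of U, W, and a are illustrative placeholders, not solutions of the actual eigenvalue equation.

```python
import numpy as np
from scipy.special import jv, kv

# Radial field shape of Eq. (1.4.41): oscillatory J_l(U r) in the core and
# decaying K_l(W r) in the cladding. U, W, and a are illustrative values.
l = 1              # azimuthal mode index
a = 4.5e-6         # core radius (m)
U = 1.8 / a        # transversal constant in the core (1/m), assumed
W = 1.2 / a        # decay constant in the cladding (1/m), assumed

r_core = np.linspace(0.0, a, 50)
r_clad = np.linspace(a, 3 * a, 50)

A = 1.0
C = A * jv(l, U * a) / kv(l, W * a)    # enforce field continuity at r = a

Ez_core = A * jv(l, U * r_core)
Ez_clad = C * kv(l, W * r_clad)
print(Ez_core[-1], Ez_clad[0])          # equal: the field is continuous at r = a
```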


Fig. 1.4.9 Bessel function (top) and modified Bessel functions (bottom).


The first boundary condition is that the field amplitude of a guided mode must be finite at the center of the core (r = 0). Since Yl(0) = −∞, one must set A′ = 0 to ensure that Ez(0) has a finite value. The second boundary condition is that the field amplitude of a guided mode must vanish far away from the core (r → ∞). Since Il(∞) ≠ 0, one must set C′ = 0 to ensure that Ez(∞) = 0. With A′ = C′ = 0, Eq. (1.4.40) is simplified, and for the mode index (l, m), Eq. (1.4.37) becomes

Ez,lm(r, φ, z) = A·Jl(Ulm·r)·e^(−jlφ)·e^(−jβz,lm·z)  (r ≤ a);  Ez,lm(r, φ, z) = C·Kl(Wlm·r)·e^(−jlφ)·e^(−jβz,lm·z)  (r > a)    (1.4.41)

Mathematically, the modified Bessel function closely follows an exponential decay, Kl(Wlm·r) ∝ exp(−Wlm·r), so that Kl(Wlm·r) represents an exponential decay of the optical field with r in the fiber cladding. For a propagating mode, Wlm > 0 is required to ensure that energy does not leak through the cladding. In the fiber core, the Bessel function Jl(Ulm·r) oscillates, as shown in Fig. 1.4.9, which represents a standing-wave pattern in the core along the radial direction. For a propagating mode, Ulm ≥ 0 is required to ensure this standing-wave pattern in the fiber core. It is interesting to note that, based on the definitions Ulm² = β1² − βz,lm² and Wlm² = βz,lm² − β2², the requirement Wlm > 0 and Ulm ≥ 0 is equivalent to β2² < βz,lm² ≤ β1², or (n2/n1)² < (βz,lm/β1)² ≤ 1. In terms of the ray picture, a guided mode corresponds to total internal reflection at the core-cladding interface, so that θi > θc is required, as shown in Fig. 1.4.10, where θc = sin⁻¹(n2/n1) is the critical angle of the core-cladding interface. With this requirement


Fig. 1.4.10 Illustration of light coupling into a step-index fiber.



on θi, there is a corresponding requirement on the incidence angle θa at the fiber end surface. It is easy to see from the drawing that sin θ1 = √(1 − sin²θi), and by Snell's law,

n0 sin θa = n1 √(1 − sin²θi)

If total reflection happens at the core-cladding interface, which requires θi ≥ θc, then sin θi ≥ sin θc = n2/n1. This requires the incidence angle θa to satisfy the condition

n0 sin θa ≤ √(n1² − n2²)    (1.4.46)

The numerical aperture is defined as

NA = √(n1² − n2²)    (1.4.47)

For a weak optical waveguide such as a single-mode fiber, the difference between n1 and n2 is very small (no more than about 1%). Using Δ = (n1 − n2)/n1 to define the normalized index difference between the core and the cladding, Δ must also be very small (Δ ≪ 1). In this case, the expression for the numerical aperture can be simplified to

NA ≈ n1 √(2Δ)    (1.4.48)

In most cases, fibers are placed in air, so n0 = 1. Since sin θa ≈ θa is valid when sin θa ≪ 1 (weak waveguide), Eq. (1.4.46) reduces to

θa ≤ n1 √(2Δ) = NA    (1.4.49)

From this discussion, the physical meaning of the numerical aperture is very clear. Light entering the fiber within the cone defined by the acceptance angle, as shown in Fig. 1.4.11, is converted into guided modes and propagates along the fiber; outside this cone, light coupled into the fiber radiates into the cladding. Similarly, light exiting a fiber has a divergence angle also defined by the numerical aperture. This is often used to design the focusing optics when a collimated beam is needed at the fiber output.


Fig. 1.4.11 Light can be coupled to an optical fiber only when the incidence angle is smaller than the numerical aperture.


Typical parameters of a single-mode fiber are NA ≈ 0.1–0.2 and Δ ≈ 0.2%–1%. Therefore, θa ≈ sin⁻¹(NA) ≈ 5.7–11.5 degrees. This is a very small angle, and it makes coupling light into a single-mode fiber difficult: not only does the source spot size have to be small (~80 μm²), but the beam angle also has to be within about 10 degrees. With the definition of the numerical aperture in Eq. (1.4.47), the V-number of a fiber can be expressed as a function of NA:

V = (2πa / λ) NA

Another important fiber parameter is the cutoff wavelength λc. It is defined such that the second lowest mode ceases to exist when the signal wavelength is longer than λc; therefore, when λ < λc, a single-mode fiber becomes multimode. According to Eq. (1.4.45), the cutoff wavelength is

λc = πd·NA / 2.405

where d is the core diameter of the step-index fiber. As an example, for a typical standard single-mode fiber with n1 = 1.47, n2 = 1.467, and d = 9 μm, the numerical aperture is

NA = √(n1² − n2²) = 0.0939

The maximum incidence angle at the fiber input is

θa = sin⁻¹(0.0939) = 5.38°

and the cutoff wavelength is

λc = πd·NA / 2.405 ≈ 1.1 μm
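These numbers are easy to verify; the following sketch recomputes NA, the acceptance angle, and the cutoff wavelength, and, as an extra illustration, the V-number at 1550 nm (the 1550 nm wavelength is an assumption, not part of the example).

```python
import numpy as np

# Single-mode fiber example: n1 = 1.47, n2 = 1.467, core diameter 9 um.
n1, n2, d = 1.47, 1.467, 9e-6
a = d / 2

NA = np.sqrt(n1**2 - n2**2)              # Eq. (1.4.47)
theta_a = np.degrees(np.arcsin(NA))      # acceptance angle in air (n0 = 1)
lambda_c = np.pi * d * NA / 2.405        # cutoff wavelength
V_1550 = 2 * np.pi * a * NA / 1550e-9    # V-number at 1550 nm (illustrative)

print(f"NA = {NA:.4f}, theta_a = {theta_a:.2f} deg")
print(f"lambda_c = {lambda_c*1e6:.2f} um, V(1550 nm) = {V_1550:.2f}")  # V < 2.405
```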

Example 1.4
To reduce the Fresnel reflection, the end surface of a fiber connector can be made nonperpendicular to the fiber axis. This is usually referred to as an APC (angled physical contact) connector. If the fiber has a core index n1 = 1.47 and a cladding index n2 = 1.467, what is the minimum angle φ such that the Fresnel reflection from the fiber end facet does not become a guided fiber mode?
Solution: To solve this problem, we use the ray-trace method and consider three extreme light beam angles in the fiber. The surface has to be designed such that, after reflection at the fiber end surface, none of these three light beams can be coupled into a guided mode in the backward propagation direction. As illustrated in Fig. 1.4.12A, first, for the light beam propagating in the fiber axial direction (z-direction), the reflected beam from the end surface makes an


angle θ with respect to the surface normal of the fiber sidewall, θ = π/2 − 2φ. In order for this reflected light beam not to become a guided mode of the fiber, θ < θc is required, where θc is the critical angle defined by Eq. (1.4.14). Therefore, the first requirement on φ is

φ > π/4 − θc/2    (Ex 1.4.1)

Second, for the light beam propagating at the critical angle of the fiber, as shown in Fig. 1.4.12B, the beam makes an angle θ1 with respect to the surface normal of the fiber end surface, which is related to φ by θ1 = (π/2 − θc) + φ. The angle θ of the beam reflected from the end surface, measured with respect to the sidewall normal, is then θ = π − θc − 2θ1 = θc − 2φ. This angle also has to be smaller than the critical angle, that is, θc − 2φ < θc, or

φ > 0    (Ex 1.4.2)

In the third case, shown in Fig. 1.4.12C, the light beam propagates at the critical angle of the fiber but on the opposite side compared with the ray trace shown in Fig. 1.4.12B. This produces the smallest θ1 angle, θ1 = φ − (π/2 − θc), and correspondingly the biggest θ angle, θ = π − θc − 2θ1 = 2π − 3θc − 2φ. Again, this θ angle has to be smaller than the critical angle, θ < θc, that is,

φ > π − 2θc    (Ex 1.4.3)

Since in this example


Fig. 1.4.12 Illustration of an angle-polished fiber surface.



θc = sin⁻¹(n2/n1) = sin⁻¹(1.467/1.47) = 86.34°

The three constraints given by Equations (Ex 1.4.1), (Ex 1.4.2), and (Ex 1.4.3) become φ > 1.83 degrees, φ > 0, and φ > 7.3 degrees, respectively. Obviously, in order to satisfy all three conditions, the required surface angle is φ > 7.3 degrees. In fact, as an industry standard, commercial APC connectors usually have a surface tilt angle of φ = 8 degrees.
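The three constraints of Example 1.4 can be evaluated numerically as a check:

```python
import numpy as np

# Constraints on the APC polish angle phi from Example 1.4 (n1 = 1.47, n2 = 1.467).
n1, n2 = 1.47, 1.467
theta_c = np.arcsin(n2 / n1)          # critical angle, about 86.34 deg

phi_1 = np.pi / 4 - theta_c / 2       # Ex (1.4.1)
phi_2 = 0.0                           # Ex (1.4.2)
phi_3 = np.pi - 2 * theta_c           # Ex (1.4.3)
phi_min = max(phi_1, phi_2, phi_3)    # the binding constraint

print(f"theta_c = {np.degrees(theta_c):.2f} deg")
print(f"phi > {np.degrees(phi_1):.2f}, {np.degrees(phi_2):.2f}, "
      f"{np.degrees(phi_3):.2f} deg  ->  phi_min = {np.degrees(phi_min):.2f} deg")
```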

1.4.3 Optical fiber attenuation
The optical fiber is an ideal medium for carrying optical signals over long distances. Attenuation is one of the most important parameters of an optical fiber; to a large extent, it determines how far an optical signal can be delivered at a detectable power level. Several sources contribute to fiber attenuation, such as absorption, scattering, and radiation. Material absorption is mainly caused by photo-induced molecular vibration, which absorbs the signal optical power and turns it into heat. Pure silica molecules absorb optical signals in the ultraviolet (UV) and infrared (IR) wavelength bands. At the same time, there are often impurities inside the silica material, such as OH⁻ ions, which may be introduced during the fiber preform fabrication process. These impurity molecules create additional absorption in the fiber. Typically, OH⁻ ions have high absorption around 700, 900, and 1400 nm; these are commonly referred to as water absorption peaks. Scattering loss arises from microscopic defects and structural inhomogeneities. In general, the optical characteristics of scattering depend on the size of the scatterer in comparison to the signal wavelength. In optical fibers, however, the scatterers are typically much smaller than the wavelength of the optical signal, and in this case the scattering is characterized as Rayleigh scattering. A very important spectral property of Rayleigh scattering is that the scattering loss is inversely proportional to the fourth power of the wavelength; therefore, Rayleigh scattering loss is high in the short-wavelength region. Over the last few decades, the loss of optical fiber has been decreased significantly by reducing the OH⁻ impurity in the material and eliminating defects in the structure. However, absorption by silica molecules in the UV and IR wavelength regions and Rayleigh scattering still constitute the fundamental limits to the loss of silica-based optical fibers. Fig. 1.4.13 shows the typical attenuation spectra of silica fibers. The dotted line shows the attenuation of old fibers made before the 1980s. In addition to the strong water absorption peaks, their attenuation is generally higher than that of new fibers due to material impurities and waveguide scattering. Three wavelength windows have been used since the 1970s for optical communications, at 850, 1310, and 1550 nm, where the optical attenuation



Fig. 1.4.13 Attenuation of old (dotted line) and new (solid line) silica fibers. The shaded regions indicate the three telecommunication wavelength windows.

has local minima. In the early days of optical communication, the first wavelength window at 850 nm was used partly because of the availability of GaAs-based laser sources, which emit in that wavelength window. Advances in longer-wavelength semiconductor lasers based on InGaAs and InGaAsP pushed optical communications toward the second and the third wavelength windows at 1310 and 1550 nm, where the optical losses are significantly lower and optical systems can reach longer distances without regeneration. Another category of optical loss that may occur in optical fiber cables is radiation loss, which is mainly caused by fiber bending. Microbending, usually caused by irregularities in the pulling process, may introduce coupling between the fundamental optical mode and higher-order radiation modes and thus create loss. Macrobending, often introduced by cabling and fiber handling, causes the spreading of optical energy from the fiber core into the cladding. For example, for a standard single-mode fiber, bending loss starts to be significant when the bending diameter is smaller than approximately 30 cm. Mathematically, the complex representation of an optical wave propagating in the z-direction is

E(z, t) = E0 exp[j(ωt − kz)]    (1.4.50)

where E0 is the complex amplitude, ω is the optical frequency, and k is the propagation constant. Considering attenuation in the medium, the propagation constant becomes complex:

k = β − j α/2    (1.4.51)

where β = 2πn/λ is the real propagation constant and α is the power attenuation coefficient. By separating the real and the imaginary parts of the propagation constant, Eq. (1.4.50) can be written as

E(z, t) = E0 exp[j(ωt − βz)] · exp(−αz/2)    (1.4.52)

The average optical power can be simply expressed as

P(z) = P0 e^(−αz)    (1.4.53)

where P0 is the input optical power. Note that here the unit of α is Neper per meter. The attenuation coefficient α of an optical fiber can be obtained by measuring the input and the output optical power:

α = (1/L) ln[P0 / P(L)]    (1.4.54)

where L is the fiber length and P(L) is the optical power measured at the output of the fiber. However, engineers like to use decibels (dB) to describe fiber attenuation and dB/km as the unit of the attenuation coefficient. If we define αdB as the attenuation coefficient in dB/km, then the optical power level along the fiber can be expressed as

P(z) = P0 · 10^(−αdB·z/10)    (1.4.55)

Similar to Eq. (1.4.54), for a fiber of length L, αdB can be estimated using

αdB = (10/L) log[P0 / P(L)]    (1.4.56)

Comparing Eqs. (1.4.54) and (1.4.56), the relationship between α and αdB is found to be

αdB/α = 10 log[P0/P(L)] / ln[P0/P(L)] = 10 log(e) = 4.343    (1.4.57)

or simply, αdB = 4.343·α. αdB is the more convenient parameter for evaluating fiber loss. For example, for an 80-km-long fiber with an attenuation coefficient of αdB = 0.2 dB/km, the total fiber loss is simply 80 × 0.2 = 16 dB. On the other hand, if a complex optical field expression is required to solve wave propagation equations, α has to be used instead. In practice, the symbol α is used to represent the optical loss coefficient regardless of whether it is expressed in Neper/m or in dB/km, but one should be very clear about which unit is being used when numerical values are involved. The following is an interesting example that may help in understanding the impact of fiber numerical aperture and attenuation.
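A small sketch of the unit bookkeeping described above, using the 80 km, 0.2 dB/km example:

```python
import numpy as np

# Neper/m versus dB/km attenuation coefficients, Eq. (1.4.57), for an
# 80 km span with alpha_dB = 0.2 dB/km.
alpha_dB = 0.2                         # dB/km
L = 80.0                               # km

total_loss_dB = alpha_dB * L           # 16 dB
alpha_neper = alpha_dB / 4.343         # Neper/km
P_ratio = np.exp(-alpha_neper * L)     # P(L)/P0 from Eq. (1.4.53)

print(f"total loss = {total_loss_dB:.1f} dB")
print(f"P(L)/P0 = {P_ratio:.4f} ({10*np.log10(P_ratio):.1f} dB)")  # consistent
```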


Example 1.5

A very long step-index single-mode fiber has a numerical aperture NA = 0.1 and a core refractive index n1 = 1.45. Assume that the fiber end surface is ideally antireflection-coated and that the fiber loss is caused only by Rayleigh scattering. Find the reflectivity of the fiber Rref = Pb/Pi, where Pi is the optical power injected into the fiber and Pb is the optical power reflected back to the input fiber terminal, as illustrated in Fig. 1.4.14A.
Solution: In this problem, Rayleigh scattering is the only source of attenuation in the fiber. Each scattering event scatters the light uniformly into all directions, filling the entire 4π solid angle. However, only the small part of the scattered light falling within the numerical aperture can be converted into a backward-guided mode within the fiber core and travel back to the input; the rest radiates into the cladding and is lost. For simplicity, we assume that the scatterers are located only along the center of the fiber core, as shown in Fig. 1.4.14B. Therefore, the conversion efficiency from the scattered light to the light captured by the fiber is

η = 2π(1 − cos θ1) / (4π)    (1.4.58)

where the numerator is the area of the spherical cap shown as the shaded area in Fig. 1.4.14B, whereas the denominator 4π is the total area of the unit sphere. θ1 is the maximum trace angle of the guided mode with respect to the fiber axis. Since the numerical aperture is defined as the maximum acceptance angle θa at the fiber entrance as given in Eq. (1.4.49), θ1 can be expressed as a function of NA as


Fig. 1.4.14 Illustrations of fiber reflection (A) and scattering in fiber core (B).



θ1 = (n0/n1) NA = 0.1/1.45 = 0.069    (1.4.59)

where n0 = 1 is used for the index of air. Substituting this value of θ1 into Eq. (1.4.58), the ratio between the captured and the total scattered optical power is found to be η = 1.19 × 10⁻³ = −29 dB. Now let us consider a short fiber section Δz located at position z along the fiber. The optical power at this location is P(z) = Pi e^(−αz), where α is the fiber attenuation coefficient. The optical power loss within this section is

ΔP(z) = |dP(z)/dz| Δz = α Pi e^(−αz) Δz

Since we assumed that the fiber loss is caused only by Rayleigh scattering, this power loss ΔP(z) is equal to the total power scattered within the short section, and ηΔP(z) is the portion of the scattered power that is turned into the guided mode traveling back to the fiber input. However, while traveling from location z back to the fiber input, this light is attenuated again by the factor e^(−αz). Considering that the fiber is composed of many short sections and adding up the contributions from all sections, the total reflected optical power is

Pb = Σ η |ΔP(z)| e^(−αz) = ∫₀^∞ η α Pi e^(−2αz) dz = η Pi / 2    (1.4.60)

Therefore, the reflectivity is

Rref = Pb / Pi = η / 2 = 5.95 × 10⁻⁴ = −32 dB    (1.4.61)

This result looks surprisingly simple. The reflectivity, sometimes referred to as return loss, only depends on the fiber numerical aperture and is independent of the fiber loss. The physical explanation is that since Rayleigh scattering is assumed to be the only source of loss, increasing the fiber loss will increase both scattered signal generation and its attenuation. In practical single-mode fibers, this approximation is quite accurate, and the experimentally measured return loss in a standard single-mode fiber is between –31 and –34 dB.
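The numbers of Example 1.5 can be reproduced directly:

```python
import numpy as np

# Rayleigh backscatter return loss of Example 1.5: capture efficiency
# (Eq. 1.4.58) with theta1 from Eq. (1.4.59), and Rref = eta/2 (Eq. 1.4.61).
NA, n1, n0 = 0.1, 1.45, 1.0
theta1 = (n0 / n1) * NA                        # small-angle form, radians
eta = 2 * np.pi * (1 - np.cos(theta1)) / (4 * np.pi)
R_ref = eta / 2

print(f"eta  = {eta:.2e} ({10*np.log10(eta):.1f} dB)")      # about -29 dB
print(f"Rref = {R_ref:.2e} ({10*np.log10(R_ref):.1f} dB)")  # about -32 dB
```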

1.4.4 Group velocity and dispersion
When an optical signal propagates along an optical fiber, not only is the signal optical power attenuated, but different frequency components within the optical signal also propagate at slightly different speeds. This frequency dependence of the propagation speed is commonly known as chromatic dispersion.

1.4.4.1 Phase velocity and group velocity
Neglecting attenuation, the electric field of a single-frequency plane optical wave propagating in the z-direction is often expressed as


E(z, t) = E0 exp[jΦ(t, z)]    (1.4.62)

where Φ(t, z) = ω0t − β0z is the optical phase, ω0 is the optical frequency, and β0 = 2πn/λ = nω0/c is the propagation constant. The phase front of this lightwave is the plane on which the optical phase is constant:

ω0t − β0z = constant    (1.4.63)

The propagation speed of the phase front is called the phase velocity, which can be obtained by differentiating both sides of Eq. (1.4.63):

vp = dz/dt = ω0/β0    (1.4.64)

Now consider that this single-frequency optical wave is modulated by a sinusoidal signal of frequency Δω. The electrical field is then

E(z, t) = E0 exp[j(ω0t − βz)] cos(Δωt)    (1.4.65)

This modulation splits the single-frequency lightwave into two frequency components. At the input (z = 0) of the optical fiber, the optical field is

E(0, t) = E0 e^(jω0t) cos(Δωt) = (E0/2)[e^(j(ω0+Δω)t) + e^(j(ω0−Δω)t)]    (1.4.66)

Since the wave propagation constant β = nω/c is linearly proportional to the frequency of the optical signal, the two frequency components at ω0 ± Δω will have two different propagation constants β0 ± Δβ. Therefore, the general expression of the optical field is

E(z, t) = (E0/2){e^(j[(ω0+Δω)t−(β0+Δβ)z]) + e^(j[(ω0−Δω)t−(β0−Δβ)z])} = E0 e^(j(ω0t−β0z)) cos(Δωt − Δβz)    (1.4.67)

Here E0 e^(j(ω0t−β0z)) represents the optical carrier, which is identical to that given in Eq. (1.4.62), whereas cos(Δωt − Δβz) is an envelope carried by the optical carrier. In fact, this envelope represents the information that is modulated onto the optical carrier. The propagation speed of this information-carrying envelope is called the group velocity. Similar to the derivation of the phase velocity, the group velocity is found by differentiating both sides of (Δωt − Δβz) = constant, which yields vg = dz/dt = Δω/Δβ. With an infinitesimally low modulation frequency, Δω → dω and Δβ → dβ, so the general expression of the group velocity is


vg = dω/dβ    (1.4.68)

In a nondispersive medium, the refractive index n is a constant independent of the frequency of the optical signal. In this case, the group velocity is equal to the phase velocity: vg = vp = c/n. However, in many practical optical materials the refractive index n(ω) is a function of the optical frequency, and therefore vg ≠ vp in these materials. Over a unit length, the propagation phase delay is equal to the inverse of the phase velocity:

τp = 1/vp = β0/ω0    (1.4.69)

Similarly, the propagation group delay over a unit length is defined as the inverse of the group velocity:

τg = 1/vg = dβ/dω    (1.4.70)

1.4.4.2 Group velocity dispersion
To understand group velocity dispersion, consider two sinusoids at frequencies Δω ± δω/2 modulated onto an optical carrier of frequency ω0. When propagating along a fiber, each modulating frequency component illustrated in Fig. 1.4.15 has its own group velocity; over a unit fiber length, the group delay difference between these two frequency components is

δτg = (dτg/dω) δω = d/dω(dβ/dω) δω = (d²β/dω²) δω    (1.4.71)

Obviously, this group delay difference is introduced by the frequency dependence of the propagation constant. In general, the frequency-dependent propagation constant β(ω) can be expanded in a Taylor series around a central frequency ω0:


Fig. 1.4.15 Spectrum of two-tone modulation on an optical carrier, where ω0 is the carrier frequency and Δω ± δω/2 are the modulation frequencies.


β(ω) = β(ω0) + (dβ/dω)|ω=ω0 (ω − ω0) + (1/2)(d²β/dω²)|ω=ω0 (ω − ω0)² + … = β(ω0) + β1(ω − ω0) + (1/2)β2(ω − ω0)² + …    (1.4.72)

where

β1 = dβ/dω    (1.4.73)

represents the group delay and

β2 = d²β/dω²    (1.4.74)

is the group delay dispersion parameter. If the fiber is uniform with length L, using Eq. (1.4.71), the relative time delay between two frequency components separated by δω is

Δτg = β2 · L · δω    (1.4.75)

Sometimes it is more convenient to use the wavelength separation δλ instead of the frequency separation δω between the two frequency (or wavelength) components. In this case, the relative delay over a unit fiber length can be expressed as

δτg = (dτg/dλ) δλ ≡ D δλ    (1.4.76)

where D = dτg/dλ is another group delay dispersion parameter. The relationship between the two dispersion parameters D and β2 can be found as



D = dτg/dλ = (dω/dλ)(dτg/dω) = −(2πc/λ²) β2    (1.4.77)

For a fiber of length L, the relative time delay between two wavelength components separated by δλ is then

Δτg = D · L · δλ    (1.4.78)

In practical fiber-optic systems, the relative delay between different wavelength components is usually measured in picoseconds, the wavelength separation is expressed in nanometers, and the fiber length is measured in kilometers. Therefore, the most commonly used units for β1, β2, and D are ps/km, ps²/km, and ps/(nm·km), respectively.
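As a sanity check on Eq. (1.4.77), the short snippet below converts a dispersion parameter D into β2; the value D = 17 ps/(nm·km) near 1550 nm is a typical figure for standard single-mode fiber, used here only as an illustration.

```python
import numpy as np

# Convert D to beta2 using Eq. (1.4.77): beta2 = -D * lambda^2 / (2*pi*c).
c = 3e8                                   # m/s
lam = 1550e-9                             # m
D_si = 17e-12 / (1e-9 * 1e3)              # 17 ps/(nm km) expressed in s/m^2

beta2 = -D_si * lam**2 / (2 * np.pi * c)  # s^2/m
print(f"beta2 = {beta2*1e27:.1f} ps^2/km")  # about -21.7 ps^2/km
```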


1.4.4.3 Sources of chromatic dispersion
The physical origin of chromatic dispersion is the wavelength dependence of the propagation constant β(λ). Both the material property and the waveguide structure may contribute to this wavelength dependence of β(λ); these contributions are referred to as material dispersion and waveguide dispersion, respectively. Material dispersion originates from the wavelength-dependent refractive index of the material, n = n(λ), so that the wavelength-dependent propagation constant is β(λ) = 2πn(λ)/λ. For a unit fiber length, the wavelength-dependent group delay is

τg = dβ(λ)/dω = −(λ²/2πc) dβ(λ)/dλ = (1/c)[n(λ) − λ dn(λ)/dλ]    (1.4.79)

The group delay dispersion between two wavelength components separated by δλ is then

δτg = (dτg/dλ) δλ = −(λ/c)(d²n(λ)/dλ²) δλ    (1.4.80)

Therefore, material-induced dispersion is proportional to the second derivative of the refractive index. Waveguide dispersion can be explained by the wavelength-dependent angle of the light ray propagating inside the fiber core, as illustrated by Fig. 1.4.8. For a guided mode, the projection of the propagation constant in the z-direction, βz, has to satisfy β2² < βz² ≤ β1², where β1 = kn1, β2 = kn2, and k = 2π/λ, which is equivalent to n2 < βz/k ≤ n1.

1.4.4.4 Modal dispersion
In a multimode fiber, different propagation modes travel as rays at different angles θi, bounded by the critical angle θc = sin⁻¹(n2/n1). Since the group velocity of the fastest ray trace is c/n1 (here, we assume n1 is a constant), the group velocity of the slowest ray trace is (c/n1) sin θc = cn2/n1². Therefore, for a fiber of length L, the maximum group delay difference is approximately

δTmax = (n1 L/c) · (n1 − n2)/n2    (1.4.89)

1.4.4.5 Polarization mode dispersion (PMD)
Polarization mode dispersion is a special type of modal dispersion that exists in single-mode fibers. It is worth noting that two fundamental modes actually coexist in a single-mode fiber. As shown in Fig. 1.4.18, these two modes are orthogonally polarized. In an optical fiber with perfect cylindrical symmetry, the two modes have the same cutoff condition, and they are referred to as degenerate modes. However, practical optical fibers may not have perfect cylindrical symmetry and can exhibit birefringence; therefore, these two fundamental modes may propagate at different speeds. Birefringence in an optical fiber is usually caused by small perturbations of the structural geometry as well as by anisotropy of the refractive index. The sources of the perturbations can be categorized as intrinsic and extrinsic. Intrinsic perturbation refers to permanent structural perturbations of the fiber geometry, which are often caused by errors in the manufacturing process. The effects of intrinsic perturbation include (1) a noncircular


Fig. 1.4.18 Illustration of optical field vector across the core cross section of a single-mode fiber. Two degenerate modes exist in a single-mode fiber.


fiber core, which causes what is called geometric birefringence, and (2) nonsymmetric stress originating from a nonideal preform, which causes what is usually called stress birefringence. Extrinsic perturbation, on the other hand, refers to perturbations due to random external forces in the cabling and installation processes; extrinsic perturbation also causes both geometric and stress birefringence. The effect of birefringence is that the two orthogonally polarized modes, HE11x and HE11y, experience slightly different propagation constants when they travel along the fiber, and therefore their group delays become different. Assuming that the effective indices in the core of a birefringent fiber are nx and ny for the two polarization modes, their corresponding propagation constants are βx = ωnx/c and βy = ωny/c, respectively. Due to birefringence, βx and βy are not equal, and their difference is

Δβ = βx − βy = (ω/c) Δneff    (1.4.90)

where Δneff = n// − n⊥ is the effective differential refractive index of the two modes. For a fiber of length L, the relative group delay between the two orthogonal polarization modes is

Δτg = (n// − n⊥) L/c = L Δneff / c    (1.4.91)

This is commonly referred to as the differential group delay (DGD). As a result of fiber birefringence, the state of polarization of the optical signal rotates while propagating along the fiber because of the accumulated relative phase change ΔΦ between the two polarization modes:

ΔΦ = ω Δneff L / c    (1.4.92)

According to Eq. (1.4.92), when an optical signal is launched into a birefringent fiber, the evolution of its state of polarization can be driven by changes in the fiber length L, the differential refractive index Δneff, or the lightwave signal frequency ω. The fiber length L = Lp at which the state of polarization of the optical signal completes a full ΔΦ = 2π rotation is defined as the birefringence beat length. On the other hand, for a fixed fiber length L, the polarization state of the optical signal can also be varied by changing the frequency; for a complete polarization rotation, the required change of optical frequency is

Δωcycle = 2πc / (L Δneff) = 2π / Δτg    (1.4.93)
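A minimal numerical illustration of Eqs. (1.4.91)–(1.4.93); the birefringence Δneff = 1 × 10⁻⁶ and the 100 m fiber length are assumed values, not taken from the text.

```python
# DGD, birefringence beat length, and the frequency period of a full
# polarization rotation, Eqs. (1.4.91)-(1.4.93). dn_eff and L are assumptions.
c = 3e8            # m/s
lam = 1550e-9      # m
dn_eff = 1e-6      # effective differential index (illustrative)
L = 100.0          # fiber length (m)

dtau = L * dn_eff / c          # Eq. (1.4.91), DGD in seconds
L_beat = lam / dn_eff          # length giving delta_Phi = 2*pi in Eq. (1.4.92)
df_cycle = c / (L * dn_eff)    # Eq. (1.4.93) divided by 2*pi, in Hz

print(f"DGD = {dtau*1e15:.0f} fs, beat length = {L_beat:.2f} m, "
      f"delta_f_cycle = {df_cycle/1e9:.0f} GHz")
```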

In modern high-speed optical communications using a single-mode fiber, polarization mode dispersion has become one of the most notorious sources of transmission


performance degradation. Due to the random nature of the perturbations that cause birefringence, polarization mode dispersion in an optical fiber is a stochastic process. Its characterization and measurement are discussed in more detail in Chapter 4.

Example 1.6
A 1550 nm optical signal from a multi-longitudinal-mode laser diode has two discrete wavelength components separated by 0.8 nm. There are two pieces of optical fiber: one is a standard single-mode fiber with a chromatic dispersion parameter D = 17 ps/nm/km at 1550 nm, and the other is a step-index multimode fiber with core index n1 = 1.48, cladding index n2 = 1.46, and core diameter d = 50 μm. Both fibers have the same length of 20 km. Find the maximum signal data rate that can be carried by each of these two fibers.
Solution: For the single-mode fiber, chromatic dispersion is the major source of pulse broadening of the optical signal. In this example, the chromatic dispersion-induced pulse broadening is

ΔtSMF = D · L · Δλ = 17 × 20 × 0.8 = 272 ps

For the multimode fiber, the major source of pulse broadening is modal dispersion. Using Eq. (1.4.89), this pulse broadening is

ΔtMMF ≈ (n1 L/c) · (n1 − n2)/n2 = 1.35 μs

Obviously, the multimode fiber introduces pulse broadening more than three orders of magnitude larger than the single-mode fiber. The data rate of the optical signal can be on the order of 1 Gb/s if the single-mode fiber is used, whereas it is limited to less than 1 Mb/s with the multimode fiber.
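The two broadening estimates of Example 1.6, recomputed:

```python
# Pulse broadening of Example 1.6: chromatic dispersion in the SMF (Eq. 1.4.78)
# and modal dispersion in the MMF (Eq. 1.4.89).
c = 3e8                                   # m/s

D, L_km, dlam = 17.0, 20.0, 0.8           # ps/(nm km), km, nm
dt_smf = D * L_km * dlam                  # in ps
print(f"SMF broadening: {dt_smf:.0f} ps")         # 272 ps

n1, n2, L_m = 1.48, 1.46, 20e3            # indices and length in meters
dt_mmf = (n1 * L_m / c) * (n1 - n2) / n2  # in seconds
print(f"MMF broadening: {dt_mmf*1e6:.2f} us")     # about 1.35 us
```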

1.4.5 Nonlinear effects in an optical fiber
The fiber parameters we have discussed so far, such as attenuation, chromatic dispersion, and modal dispersion, are all linear effects: their values do not change with the signal optical power. The effects of fiber nonlinearity, in contrast, depend on the optical power density inside the fiber core. The typical optical power carried by an optical fiber may not seem very high, but since the fiber core cross-section is very small, the power density can be high enough to cause significant nonlinear effects. For example, for a standard single-mode fiber, the cross-section area of the core is about 80 μm²; if the fiber carries 10 mW of average optical power, the power density is as high as 12.5 kW/cm². Stimulated Brillouin scattering (SBS), stimulated Raman scattering (SRS), and the Kerr effect are the three most important nonlinear effects in silica optical fibers.


1.4.5.1 Stimulated Brillouin scattering
Stimulated Brillouin scattering (SBS) originates from the interaction between the signal photons and traveling acoustic waves, also called acoustic phonons (Boyd, 1992), much like the sound wave that can be generated when you blow air into an open-ended tube. Through SBS, the signal lightwave is modulated by the traveling acoustic wave. Stokes photons are generated in this process, and the frequency of the Stokes photons is downshifted from the original optical frequency. The amount of this frequency shift can be estimated approximately by

Δf = 2 f0 · Vd / (c/n1)    (1.4.94)

where n1 is the refractive index of the fiber core, c/n1 is the group velocity of the lightwave in the fiber, Vd is the acoustic-wave velocity in the fiber longitudinal direction, and f0 is the original frequency of the optical signal. In a silica-based optical fiber, the sound velocity along the longitudinal direction is Vd = 5760 m/s. Assuming a refractive index of n1 = 1.47 in the fiber core at a 1550 nm signal wavelength, this SBS frequency shift is about 11 GHz. SBS is narrowband, and the generated Stokes photons propagate only in the direction opposite to the original photons; therefore, the scattered energy is always counter-propagating with respect to the signal. In addition, since SBS relies on the resonance of an acoustic wave with a very narrow spectral linewidth, the effective SBS bandwidth is only about 20 MHz. Therefore, SBS efficiency is high only when the linewidth of the optical signal is narrow. When the optical power is high enough, SBS turns signal photons into frequency-shifted Stokes photons that travel in the opposite direction. If another optical signal travels in the same fiber, in the same direction, and at the same frequency as this Stokes wave, it can be amplified by the SBS process. Based on this, the SBS effect has been used to make optical amplifiers; however, the narrow amplification bandwidth of SBS limits their applications. On the other hand, in optical fiber communications, the effect of SBS introduces an extra loss for the optical signal and sets an upper limit on the amount of optical power that can be used in the fiber. In commercial fiber-optic systems, an effective way to suppress the effect of SBS is to frequency-modulate the optical carrier and increase the spectral linewidth of the optical source to a level much wider than 20 MHz. The acoustic vibration generated in the SBS process also travels in the transversal direction of the fiber, generating a resonance between the center and the circumference of the fiber cladding. The frequency of this SBS-induced transversal resonance is at the megahertz level, determined by the fiber cladding radius of about 60 μm and a shear sound velocity of approximately vs = 3740 m/s, which is slower than that in the longitudinal direction. The SBS-induced acoustic resonance in the fiber transversal direction does not create additional power loss for the optical signal, but it may create a nonlinear phase modulation on the optical signal through the vibration of the refractive index.
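Plugging the quoted numbers into Eq. (1.4.94):

```python
# Brillouin frequency shift, Eq. (1.4.94), with the values quoted in the text
# (Vd = 5760 m/s, n1 = 1.47, 1550 nm signal).
c = 3e8
lam = 1550e-9
n1, Vd = 1.47, 5760.0

f0 = c / lam
delta_f = 2 * f0 * Vd / (c / n1)     # equivalently 2 * n1 * Vd / lam
print(f"SBS frequency shift = {delta_f/1e9:.1f} GHz")   # about 11 GHz
```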


1.4.5.2 Stimulated Raman scattering
Stimulated Raman scattering (SRS) is an inelastic process in which a photon of the incident optical signal (pump) stimulates molecular vibration of the material and loses part of its energy; because of this energy loss, the photon is reemitted at a lower frequency (Smith, 1972). The vibrational energy given to the molecules is referred to as an optical phonon. Instead of relying on acoustic vibration as in SBS, SRS in a fiber is caused by molecular-level vibration of the silica material. Through the SRS process, pump photons are progressively absorbed by the fiber, while new photons, called Stokes photons, are created at a downshifted frequency. Unlike SBS, where the Stokes wave propagates only in the backward direction, the Stokes waves produced by SRS propagate in both the forward and the backward directions. Therefore, SRS can be used to amplify both co- and counter-propagating optical signals if their frequencies fall within the SRS bandwidth. The spectral bandwidth of SRS is also much wider than that of SBS: as shown in Fig. 1.4.19, in silica-based optical fibers the maximum Raman efficiency occurs at a frequency shift of about 13.2 THz, and the bandwidth can be as wide as 10 THz. Optical amplifiers based on the SRS effect have become popular in recent years because of their unique characteristics compared to other types of optical amplifiers. On the other hand, SRS may also create interchannel crosstalk in wavelength-division multiplexed (WDM) optical systems: in a fiber carrying multiple wavelength channels, the SRS effect transfers energy from short-wavelength (higher-frequency) channels to long-wavelength (lower-frequency) channels.

Fig. 1.4.19 A normalized Raman gain spectrum of a silica fiber (normalized Raman gain versus frequency shift, 0–35 THz).


1.4.5.3 Kerr effect nonlinearity and nonlinear Schrödinger equation
Kerr effect nonlinearity arises from the fact that the refractive index of an optical material is a weak function of the optical power density:

n = n_0 + n_2\,\frac{P}{A_{eff}}   (1.4.95)

where n0 is the linear refractive index of the material, n2 is the nonlinear index coefficient, P is the optical power, and Aeff is the effective cross-section area of the optical waveguide, so that P/Aeff represents the optical power density. Considering both linear and nonlinear effects, a nonlinear differential equation is often used to describe the envelope of the optical field propagating along an optical fiber (Agrawal, 2001):

\frac{\partial A(t,z)}{\partial z} + \frac{i\beta_2}{2}\frac{\partial^2 A(t,z)}{\partial t^2} + \frac{\alpha}{2}A(t,z) - i\gamma |A(t,z)|^2 A(t,z) = 0   (1.4.96)

This equation is known as the nonlinear Schrödinger (NLS) equation. A(t, z) is the complex amplitude of the optical field, and the fiber parameters β2 and α are the group delay dispersion and the attenuation, respectively. γ is the nonlinear parameter:

\gamma = \frac{n_2\,\omega_0}{c\,A_{eff}}   (1.4.97)

On the left side of Eq. (1.4.96), the second term represents the effect of chromatic dispersion, the third term is the optical attenuation, and the last term represents the nonlinear phase modulation caused by the Kerr effect. To understand the physical meaning of each term in the nonlinear Schrödinger equation, we can consider dispersion and nonlinearity separately. First, we consider only the dispersion effect and assume

\frac{\partial A}{\partial z} + \frac{i\beta_2}{2}\frac{\partial^2 A}{\partial t^2} = 0   (1.4.98)

This equation can easily be solved in the Fourier domain as

\tilde{A}(\omega, L) = \tilde{A}(\omega, 0)\exp\left(j\frac{\omega^2}{2}\beta_2 L\right)   (1.4.99)

where L is the fiber length, \tilde{A}(\omega, L) is the Fourier transform of A(t, L), and \tilde{A}(\omega, 0) is the optical field at the fiber input. Consider two frequency components separated by δω; their phase difference at the end of the fiber is

\delta\Phi = \frac{\delta\omega^2}{2}\beta_2 L   (1.4.100)

If we let δΦ = ωc δt, where δt is the difference in arrival time at the fiber end between these two frequency components and ωc is the average frequency, δt can be found as δt ≈ δω β2 L. Then, converting β2 into D using Eq. (1.4.77), we find


\delta t \approx D\cdot L\cdot\delta\lambda   (1.4.101)

where δλ = δω λ²/(2πc) is the wavelength separation between these two components. In fact, Eq. (1.4.101) is identical to Eq. (1.4.78). Now let's neglect dispersion and consider only fiber attenuation and Kerr effect nonlinearity. The nonlinear Schrödinger equation is then simplified to

\frac{\partial A(t,z)}{\partial z} + \frac{\alpha}{2}A(t,z) = i\gamma |A(t,z)|^2 A(t,z)   (1.4.102)

If we start by considering the optical power, P(z, t) = |A(z, t)|², Eq. (1.4.102) gives P(z, t) = P(0, t)e^{−αz}. Then, we can use a normalized variable E(z, t) such that

A(z,t) = \sqrt{P(0,t)}\,\exp\left(-\frac{\alpha z}{2}\right)E(z,t)   (1.4.103)

Eq. (1.4.102) becomes

\frac{\partial E(z,t)}{\partial z} = i\gamma P(0,t)\,e^{-\alpha z}E(z,t)   (1.4.104)

and the solution is

E(z,t) = E(0,t)\exp[j\Phi_{NL}(t)]   (1.4.105)

where

\Phi_{NL}(t) = \gamma P(0,t)\int_0^L e^{-\alpha z}\,dz = \gamma P(0,t)\,L_{eff}   (1.4.106)

with

L_{eff} = \frac{1 - e^{-\alpha L}}{\alpha} \approx \frac{1}{\alpha}   (1.4.107)

known as the nonlinear length of the fiber, which depends only on the fiber attenuation (where e^{−αL} ≪ 1 is assumed). For a standard single-mode fiber operating in the 1550-nm wavelength window, the attenuation is about 0.2 dB/km (or 4.6 × 10⁻⁵ Np/m), and the nonlinear length is approximately 21.7 km. According to Eq. (1.4.106), the nonlinear phase shift ΦNL(t) follows the time-dependent change of the optical power. The corresponding optical frequency change is

\delta f(t) = -\frac{1}{2\pi}\gamma L_{eff}\frac{\partial}{\partial t}[P(0,t)]   (1.4.108)

or the corresponding signal wavelength modulation,


\delta\lambda(t) = \frac{\lambda^2}{2\pi c}\gamma L_{eff}\frac{\partial}{\partial t}[P(0,t)]   (1.4.109)
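For a rough sense of scale, the sketch below evaluates Eqs. (1.4.106)–(1.4.109) in Python for an assumed Gaussian input pulse; the nonlinear parameter γ and the pulse shape are illustrative assumptions (typical single-mode fiber values), not data from the text.

import numpy as np

# SPM nonlinear phase and wavelength chirp, Eqs. (1.4.106)-(1.4.109)
alpha = 0.2 / 4.343e3             # fiber loss, 0.2 dB/km converted to Np/m
L = 100e3                         # fiber length (m), long enough that exp(-alpha*L) << 1
Leff = (1 - np.exp(-alpha * L)) / alpha   # nonlinear length, Eq. (1.4.107), ~21.7 km
gamma = 1.3e-3                    # nonlinear parameter (1/W/m), assumed typical value
lam, c = 1550e-9, 3e8

# Assumed Gaussian pulse: 10 mW peak power, 50 ps 1/e half-width
t = np.linspace(-200e-12, 200e-12, 2001)
P0t = 10e-3 * np.exp(-(t / 50e-12) ** 2)

phi_nl = gamma * Leff * P0t                                            # Eq. (1.4.106)
dlam = (lam**2 / (2 * np.pi * c)) * gamma * Leff * np.gradient(P0t, t) # Eq. (1.4.109)

print(f"L_eff = {Leff/1e3:.1f} km")
print(f"peak nonlinear phase   = {phi_nl.max():.3f} rad")
print(f"peak wavelength excursion = {abs(dlam).max()*1e12:.1f} pm")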

Fig. 1.4.20 illustrates the waveform of an optical pulse and the corresponding nonlinear phase shift. This phase shift is proportional to the signal waveform, an effect known as self-phase modulation (SPM) (Stolen and Lin, 1978). If the fiber had no chromatic dispersion, this phase modulation alone would not distort the signal waveform when the optical intensity is detected at the fiber output. However, when fiber chromatic dispersion is considered, the wavelength deviation created by SPM at the leading and falling edges of the optical pulse, as shown in Fig. 1.4.20, introduces a group delay mismatch between the two edges and therefore creates waveform distortion. For example, if the fiber has anomalous dispersion (D > 0), short wavelength components travel faster than long wavelength components; the blue-shifted falling edge then travels faster than the red-shifted leading edge, and the pulse is compressed by the SPM process. On the other hand, if the fiber dispersion is normal (D < 0), the blue-shifted falling edge travels more slowly than the red-shifted leading edge, and the pulse spreads. In the discussion of self-phase modulation, we considered only one wavelength channel in the fiber, whose optical phase is affected by the intensity of the same channel. If more than one wavelength channel travels in the same fiber, the situation becomes more complicated, and crosstalk between channels is created by Kerr effect nonlinearity.

Fig. 1.4.20 (A) Optical pulse waveform, (B) nonlinear phase shift, and (C) wavelength shift introduced by self-phase modulation.


Now, let's consider a system with only two wavelength channels; the combined optical field is

A(z,t) = A_1(z,t)\,e^{j\theta_1} + A_2(z,t)\,e^{j\theta_2}   (1.4.110)

where A1 and A2 are the optical field amplitudes of the two wavelength channels and θ1 = nω1z/c and θ2 = nω2z/c are the optical phases of the two optical carriers. Substituting Eq. (1.4.110) into the nonlinear Schrödinger equation (1.4.96) and collecting the terms containing e^{jθ1} and e^{jθ2}, respectively, results in two separate equations:

\frac{\partial A_1}{\partial z} + \frac{i\beta_2}{2}\frac{\partial^2 A_1}{\partial t^2} + \frac{\alpha}{2}A_1 = j\gamma|A_1|^2 A_1 + j2\gamma|A_2|^2 A_1 + j\gamma A_1^2 A_2^{*}\,e^{j(\theta_1-\theta_2)}   (1.4.111)

\frac{\partial A_2}{\partial z} + \frac{i\beta_2}{2}\frac{\partial^2 A_2}{\partial t^2} + \frac{\alpha}{2}A_2 = j\gamma|A_2|^2 A_2 + j2\gamma|A_1|^2 A_2 + j\gamma A_2^2 A_1^{*}\,e^{j(\theta_2-\theta_1)}   (1.4.112)

Each of these two equations describes the propagation characteristics of an individual wavelength channel. On the right side of each equation, the first term represents the effect of self-phase modulation as described above, the second term represents cross-phase modulation (XPM), and the third term is responsible for another nonlinear phenomenon called four-wave mixing (FWM). XPM originates from the nonlinear phase modulation of one wavelength channel by the optical power change of the other channel (Islam et al., 1987). Similar to SPM, it requires chromatic dispersion of the fiber to convert this nonlinear phase modulation into waveform distortion. Since the signal waveforms carried by the wavelength channels are usually not synchronized with each other, the precise time-dependent characteristic of the crosstalk is less predictable, and statistical analysis is normally used to estimate the effect of XPM-induced crosstalk. FWM can be understood as follows: two optical carriers copropagating along an optical fiber beat with each other and modulate the refractive index of the fiber at their difference frequency, and a third optical carrier propagating along the same fiber is phase-modulated by this index modulation, acquiring two extra modulation sidebands (Hill et al., 1978; Inoue, 1992). If the frequencies of the three optical carriers are ωj, ωk, and ωl, the new frequency component created by this FWM process is

\omega_{jkl} = \omega_j + \omega_k - \omega_l = \omega_j - \Delta\omega_{kl}   (1.4.113)

where l ≠ j and l ≠ k, and Δωkl = ωl − ωk is the frequency spacing between channel k and channel l. If only two original optical carriers are involved, as in our example (j ≠ k), the situation is known as degenerate FWM. Fig. 1.4.21 shows the wavelength relations of degenerate FWM, where the two original carriers at ωj and ωk beat in the fiber, creating an index modulation at the frequency Δωjk = ωk − ωj. The original optical carrier at ωj is then phase-modulated at the frequency Δωjk, creating two modulation sidebands at ωi = ωj − Δωjk and ωk = ωj + Δωjk. Similarly, the optical carrier at ωk is also phase-modulated at the frequency Δωjk and creates two sidebands at ωj = ωk − Δωjk and ωl = ωk + Δωjk. In this process, the original two carriers become four, which is probably where the name "four-wave mixing" came from.


Fig. 1.4.21 Degenerate four-wave mixing, where two original carriers at ωj and ωk create four new frequency components at ωi, ωj, ωk, and ωl (with ωi < ωj < ωk < ωl).

FWM is an important nonlinear phenomenon in optical fiber; it introduces interchannel crosstalk when multiple wavelength channels are used. The buildup of the new frequency components generated by FWM depends on the phase match between the original carriers and the new frequency component as they travel along the fiber. Therefore, the efficiency of FWM is very sensitive to the dispersion-induced relative walk-off between the participating wavelength components. In general, the optical field amplitude of the FWM component created in a fiber of length L is

A_{jkl}(L) = j\gamma\sqrt{P_j(0)P_k(0)P_l(0)}\int_0^L e^{-\alpha z}\exp(j\Delta\beta_{jkl}\,z)\,dz   (1.4.114)

where Pj(0), Pk(0), and Pl(0) are the input powers of the three optical carriers and

\Delta\beta_{jkl} = \beta_j + \beta_k - \beta_l - \beta_{jkl}   (1.4.115)

is the propagation constant mismatch, with βj = β(ωj), βk = β(ωk), βl = β(ωl), and βjkl = β(ωjkl). Expanding β as β(ω) = β0 + β1(ω − ω0) + (β2/2)(ω − ω0)² and using the frequency relation given in Eq. (1.4.113), we find

\Delta\beta_{jkl} = -\beta_2(\omega_j - \omega_l)(\omega_k - \omega_l)

Here, for simplicity, we have neglected the dispersion slope and assumed that the dispersion is constant over the entire frequency region. This propagation constant mismatch can also be expressed as a function of the corresponding wavelengths:

\Delta\beta_{jkl} = \frac{2\pi c D}{\lambda^2}(\lambda_j - \lambda_l)(\lambda_k - \lambda_l)   (1.4.116)

where the dispersion parameter β2 is converted to D using Eq. (1.4.77). The integration of Eq. (1.4.114) yields

A_{jkl}(L) = j\gamma\sqrt{P_j(0)P_k(0)P_l(0)}\;\frac{e^{(j\Delta\beta_{jkl}-\alpha)L} - 1}{j\Delta\beta_{jkl} - \alpha}   (1.4.117)

The power of the FWM component is then

P_{jkl}(L) = \eta_{FWM}\,\gamma^2 L_{eff}^2\,P_j(0)P_k(0)P_l(0)   (1.4.118)

where L_{eff} = (1 - e^{-\alpha L})/\alpha is the fiber nonlinear length and

\eta_{FWM} = \frac{\alpha^2}{\Delta\beta_{jkl}^2 + \alpha^2}\left[1 + \frac{4e^{-\alpha L}\sin^2(\Delta\beta_{jkl}L/2)}{(1 - e^{-\alpha L})^2}\right]   (1.4.119)

is the FWM efficiency. In most practical cases, when the fiber is long enough, e^{−αL} ≪ 1 holds, and the FWM efficiency can be simplified to

\eta_{FWM} \approx \frac{\alpha^2}{\Delta\beta_{jkl}^2 + \alpha^2}   (1.4.120)

In this simplified expression, the FWM efficiency no longer depends on the fiber length. The reason is that, as long as e^{−αL} ≪ 1, the optical power at fiber lengths far beyond the nonlinear length, and thus the nonlinear contribution there, is already significantly reduced. Considering the propagation constant mismatch given in Eq. (1.4.116), the FWM efficiency can be reduced either by increasing the fiber dispersion or by increasing the channel separation. Fig. 1.4.22 shows the FWM efficiency for several different fiber dispersion parameters calculated with Eqs. (1.4.116) and (1.4.120), where the fiber loss is α = 0.25 dB/km and the operating wavelength is 1550 nm. Note that in these calculations the unit of the attenuation α has to be Np/m when using Eqs. (1.4.117)–(1.4.120). As an example, for two wavelength channels with 1 nm channel spacing, the FWM efficiency increases by approximately 25 dB when the chromatic dispersion is decreased from 17 ps/nm/km to 1 ps/nm/km. Consequently, in WDM optical systems using dispersion-shifted fiber, interchannel crosstalk introduced by FWM may become a legitimate concern, especially when the optical power is high.

Fig. 1.4.22 Four-wave mixing efficiency ηFWM versus wavelength separation for D = 1, 2, 10, and 17 ps/nm/km, calculated for α = 0.25 dB/km, λ = 1550 nm.
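The calculation behind Fig. 1.4.22 can be sketched in a few lines of Python; this only restates Eqs. (1.4.116) and (1.4.120) for the degenerate two-channel case, with the 1 nm spacing taken from the example in the text.

import numpy as np

# FWM efficiency for two channels (degenerate FWM), Eqs. (1.4.116) and (1.4.120)
c = 3e8
lam = 1550e-9
alpha = 0.25 / 4.343e3                     # 0.25 dB/km converted to Np/m
dlam = 1e-9                                # 1 nm channel spacing (example from the text)

for D_ps in (1, 2, 10, 17):                # dispersion in ps/nm/km
    D = D_ps * 1e-6                        # converted to s/m^2
    # with two channels, (lam_j - lam_l) = (lam_k - lam_l) = dlam in Eq. (1.4.116)
    dbeta = (2 * np.pi * c * D / lam**2) * dlam**2
    eta = alpha**2 / (dbeta**2 + alpha**2)             # Eq. (1.4.120)
    print(f"D = {D_ps:2d} ps/nm/km: eta_FWM = {10*np.log10(eta):6.1f} dB")
# The gap between D = 1 and D = 17 ps/nm/km comes out near 25 dB, as noted in the text.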


1.5 Optical amplifiers
The introduction of the optical amplifier is one of the most important advances in optical fiber communications. Linear optical amplifiers are often used to compensate for losses in optical communication systems and networks caused by fiber attenuation, optical power splitting, and other factors. Optical amplifiers can also be used to perform nonlinear optical signal processing and waveform shaping when they are operated in the nonlinear regime. Fig. 1.5.1 shows a typical multispan point-to-point optical transmission system in which optical amplifiers perform various functions. The postamplifier (post-amp) is used at the transmitter to boost the optical power; sometimes the post-amp is integrated with the transmitter to form a powerful "extended" optical transmitter. The post-amp is especially useful when the power of a transmitter has to be split into a number of broadcasting outputs in an optical network, to compensate for the splitting loss. The basic requirement for a post-amp is to supply a high enough output optical power. In-line optical amplifiers (line-amps) are used along the transmission system to compensate for the attenuation caused by optical fibers. In high-speed optical transmission systems, line-amps are often spaced periodically along the fiber link, approximately one for every 80 km of transmission distance. The basic requirement for a line-amp is to provide high enough optical gain. In addition, to support wavelength-division multiplexed (WDM) optical systems, a line-amp needs to have a wide optical bandwidth and a flat optical gain within that bandwidth; gain equalization techniques are often used in line-amps. Also, the gain of a line-amp has to be linear to prevent nonlinear crosstalk between different wavelength channels. A preamplifier is often used immediately before the photodiode in an optical receiver to form the so-called preamplified optical receiver. In addition to the requirement of high optical gain, the most important qualification of a preamplifier is low noise, since the sensitivity of a preamplified optical receiver largely depends on the noise characteristics of the preamplifier. Because of these different applications, the various types of optical amplifiers generally have to be designed and optimized differently for best performance.


Fig. 1.5.1 Illustration of a point-to-point optical transmission system and the functions of optical amplifiers. Tx: transmitter, Rx: receiver.


The most popular optical amplifiers used in optical communications and other electro-optic systems are semiconductor optical amplifiers (SOAs) and erbium-doped fiber amplifiers (EDFAs). In fiber-optic systems, Raman amplification can also be used to provide additional optical gain. In this section, we first discuss the common characteristics of optical amplifiers, such as optical gain, gain bandwidth, and gain saturation. Then we discuss specific configurations and applications of SOAs, EDFAs, and Raman amplifiers separately. While this section provides only the basic concepts, configurations, and gain mechanisms of optical amplifiers, more detailed characteristics, including optical noise, optical signal-to-noise ratio (OSNR), noise figure, and their measurements, are discussed in Chapter 3.

1.5.1 Optical gain, gain bandwidth, and saturation
As in a laser, the most important component of an optical amplifier is the gain medium. An optical amplifier differs from a laser in that it does not require optical feedback, and the optical signal passes through the gain medium only once. The optical signal is amplified through the stimulated emission process in the gain medium, where the carrier density is inverted, as discussed in Section 1.2.4. At the same time, spontaneous emission also exists in the gain medium, creating optical noise that is added to the amplified signal. In continuous wave operation, the propagation equation for the optical field E in the gain medium along the longitudinal direction z can be expressed as

\frac{dE}{dz} = \left[\frac{g}{2} + jn\right]E + \rho_{sp}   (1.5.1)

where g is the optical power gain coefficient at position z, n is the refractive index of the optical medium, and ρsp is the spontaneous emission factor. In general, the local gain coefficient g is a function of the optical frequency because of the limited optical bandwidth, and also a function of the optical power because of the saturation effect. For homogeneously broadened optical systems, gain saturation does not affect the frequency dependence of the gain; therefore, we can use the expression

g(f, P) = \frac{g_0}{1 + 4\left(\frac{f - f_0}{\Delta f}\right)^2 + \frac{P}{P_{sat}}}   (1.5.2)

where g0 is the linear gain, Δf is the FWHM bandwidth and f0 the central frequency of the material gain, and Psat is the saturation optical power. Neglecting the spontaneous emission term, the optical power amplification can be easily evaluated using Eq. (1.5.1):

\frac{dP(z)}{dz} = g(f, P)\,P(z)   (1.5.3)


When the signal optical power is much smaller than the saturation power (P ≪ Psat), the system is linear, and the solution of Eq. (1.5.3) is

P(L) = P(0)\,G_0   (1.5.4)

where L is the amplifier length, P(0) and P(L) are the input and output optical powers, respectively, and G0 is the small-signal optical gain,

G_0 = \frac{P(L)}{P(0)} = \exp\left[\frac{g_0 L}{1 + 4(f - f_0)^2/\Delta f^2}\right]   (1.5.5)

It is worth noting that the FWHM bandwidth of the optical gain G0 is

B_0 = \Delta f\sqrt{\frac{\ln 2}{g_0 L - \ln 2}}   (1.5.6)

which is different from the material gain bandwidth Δf: although Δf is a constant, the optical gain bandwidth decreases as the peak gain g0L increases. If the signal optical power is strong enough, nonlinear saturation has to be considered. To simplify, if we consider only the peak gain at f = f0, the optical power evolution along the amplifier is found from

\frac{dP(z)}{dz} = \frac{g_0 P(z)}{1 + P(z)/P_{sat}}   (1.5.7)

This equation can be converted into

\int_{P(0)}^{P(L)}\left(\frac{1}{P} + \frac{1}{P_{sat}}\right)dP = g_0 L   (1.5.8)

and its solution is

\ln\left[\frac{P(L)}{P(0)}\right] + \frac{P(L) - P(0)}{P_{sat}} = g_0 L   (1.5.9)

We know that G0 = exp(g0L) is the small-signal gain; if we define G = P(L)/P(0) as the large-signal gain, Eq. (1.5.9) can be written as

\frac{P(0)}{P_{sat}} = \frac{\ln(G/G_0)}{1 - G}   (1.5.10)

With the increase of the signal optical power, the large-signal optical gain G will decrease. Fig. 1.5.2 shows the large-signal gain versus the input optical signal power when the small-signal optical gain is 30 dB and the saturation optical power is 3 dBm.



Fig. 1.5.2 Optical signal gain versus input power calculated with Eq. (1.5.10). G0 = 30 dB, Psat = 3 dBm.
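Eq. (1.5.10) is transcendental in G but easy to solve numerically; the sketch below reproduces the operating points of Fig. 1.5.2 and the 3-dB input saturation power of Eq. (1.5.11). The bisection routine and the chosen input-power grid are implementation choices, not from the text.

import numpy as np

# Large-signal gain from Eq. (1.5.10): P(0)/Psat = ln(G/G0)/(1 - G)
G0_dB, Psat_dBm = 30.0, 3.0
G0 = 10 ** (G0_dB / 10)
Psat = 10 ** (Psat_dBm / 10)              # mW

def gain(P0_mW):
    """Solve Eq. (1.5.10) for G by bisection (1 < G < G0)."""
    f = lambda G: np.log(G / G0) / (1 - G) - P0_mW / Psat
    lo, hi = 1.0 + 1e-9, G0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for P0_dBm in (-50, -40, -30, -25):
    G = gain(10 ** (P0_dBm / 10))
    print(f"Pin = {P0_dBm} dBm -> G = {10*np.log10(G):.1f} dB")

# 3-dB input saturation power, Eq. (1.5.11)
P3dB = Psat * 2 * np.log(2) / (G0 - 2)
print(f"P3dB(0) = {10*np.log10(P3dB):.1f} dBm")    # about -25.6 dBm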

The 3-dB saturation input power P3dB(0) is defined as the input power at which the large-signal optical gain is reduced by 3 dB. From Eq. (1.5.10), P3dB(0) is found as

P_{3dB}(0) = P_{sat}\,\frac{2\ln 2}{G_0 - 2}   (1.5.11)

In the example shown in Fig. 1.5.2, P3dB(0) is about −25.6 dBm. Obviously, the 3-dB saturation input power depends on the small-signal optical gain G0: when G0 is high, P3dB(0) becomes smaller. In a similar way, we can also define a 3-dB saturation output power P3dB(L), which is much less dependent on the small-signal optical gain. Since P3dB(L) = G·P3dB(0) with G = G0/2,

P_{3dB}(L) \approx P_{sat}\ln 2   (1.5.12)

where we have assumed that G0 ≫ 2.

1.5.2 Semiconductor optical amplifiers
A semiconductor optical amplifier (SOA) is similar to a semiconductor laser operating below threshold. It requires an optical gain medium and an optical waveguide, but it does not require an optical cavity (Connelly, 2002; Shimada and Ishio, 1992; Olsson, 1992). An SOA can be made from a laser diode with antireflection coating on each facet; the optical signal passes through the gain medium only once, and the device is therefore also known as a traveling-wave optical amplifier. Because of the close similarity between an SOA and a laser diode, the analysis of the SOA can be based on the carrier density rate equation already discussed in Section 1.2:

\frac{dN(z,t)}{dt} = \frac{J}{qd} - \frac{N(z,t)}{\tau} - 2\Gamma v_g a(N - N_0)P(z,t)   (1.5.13)

where N(z, t) is the carrier density and P(z, t) is the photon density within the SOA optical waveguide, J is the injection current density, d is the thickness of the active layer, vg is


the group velocity of the lightwave, τ is the spontaneous emission carrier lifetime, a is the differential optical field gain, N0 is the transparency carrier density, and Γ is the optical confinement factor of the waveguide.

1.5.2.1 Steady-state analysis
Let's start with the steady-state characteristics of an SOA. In the steady state, d/dt = 0, and therefore

\frac{J}{qd} - \frac{N(z,t)}{\tau} - 2\Gamma v_g a(N - N_0)P(z,t) = 0   (1.5.14)

If the optical signal is very small (P(z, t) ≈ 0), the small-signal carrier density is N = Jτ/qd, and the small-signal material gain is

g_0 = 2a(N - N_0) = 2a\left(\frac{J\tau}{qd} - N_0\right)   (1.5.15)

We can then use this small-signal gain g0 as a parameter to investigate the general case in which the signal optical power may be large. Eq. (1.5.14) can thus be written as

\frac{g_0}{2a} - (N - N_0) - 2\Gamma v_g\tau a(N - N_0)P(z,t) = 0   (1.5.16)

In the large-signal case, if we define g = 2a(N − N0), Eq. (1.5.16) yields

g = \frac{g_0}{1 + P(z,t)/P_{sat}}   (1.5.17)

where

P_{sat} = \frac{1}{2\Gamma v_g\tau a}   (1.5.18)

is the saturation photon density; it is an SOA parameter that is independent of the actual signal optical power. The saturation photon density is related to the saturation optical power by

P_{sat,opt} = P_{sat}\,w\,d\,v_g\,hf   (1.5.19)

where w is the waveguide width (wd is the cross-section area) and hf is the photon energy. Therefore, the corresponding saturation optical power is

P_{sat,opt} = \frac{w\,d\,hf}{2\Gamma\tau a}   (1.5.20)

As an example, for an SOA with waveguide width w = 2 μm, active layer thickness d = 0.25 μm, optical confinement factor Γ = 0.3, differential field gain a = 5 × 10⁻²⁰ m², and carrier lifetime τ = 1 ns, the saturation optical power is approximately 22 mW.


We need to point out that in the analysis of semiconductor lasers we often assume that the optical power along the laser cavity is uniform; this is known as the mean-field approximation. In an SOA, on the other hand, the signal optical power at the output side is much higher than at the input side, the two differing by the SOA optical gain. In this case, the mean-field approximation is no longer accurate, and the optical power evolution along the SOA has to be modeled by a z-dependent equation,

\frac{dP_{opt}(z)}{dz} = \Gamma g\,P_{opt}(z) - \alpha_{int}P_{opt}(z)   (1.5.21)

where αint is the internal loss of the waveguide. Because the optical power varies significantly along the waveguide, the material gain is also a function of the position z due to power-induced saturation. Similar to Eq. (1.5.2), the optical gain can be expressed as a function of the optical power,

g(f, P, z) = \frac{g_0}{1 + 4\left(\frac{f - f_0}{\Delta f}\right)^2 + \frac{P_{opt}(z)}{P_{sat,opt}}}   (1.5.22)

Therefore, at f = f0 and neglecting the waveguide loss αint, the large-signal optical gain can be obtained using an equation similar to Eq. (1.5.10):

\frac{P_{opt}(0)}{P_{sat,opt}} = \frac{\ln(G/G_0)}{1 - G}   (1.5.23)

where Popt(0) is the input signal optical power and G0 = exp(Γg0L) is the small-signal optical gain.

1.5.2.2 Gain dynamics of an SOA
Because of its short carrier lifetime, the optical gain of an SOA can change quickly. Fast gain dynamics is one of the unique properties of semiconductor optical amplifiers as opposed to fiber amplifiers, which are discussed later in this chapter. The consequence of fast gain dynamics is twofold: (1) it introduces crosstalk between different wavelength channels through cross-gain saturation, and (2) it can be used to implement all-optical switching based on the same cross-gain saturation mechanism. Using the same carrier density rate equation as Eq. (1.5.13) but converting photon density into optical power,

\frac{dN(z,t)}{dt} = \frac{J}{qd} - \frac{N(z,t)}{\tau} - \frac{2\Gamma a}{hfA}(N - N_0)P_{opt}(z,t)   (1.5.24)

where A is the waveguide cross-section area. To focus on the time domain dynamics, it would be helpful to eliminate the spatial variable z. This can be achieved by investigating the dynamics of the total carrier population Ntot,


N_{tot}(t) = \int_0^L N(z,t)\,dz

Then Eq. (1.5.24) becomes

\frac{dN_{tot}(t)}{dt} = \frac{I}{qd} - \frac{N_{tot}(t)}{\tau} - \frac{2\Gamma a}{hfA}\int_0^L [N(z,t) - N_0]P_{opt}(z,t)\,dz   (1.5.25)

where I = JL is the total injection current. Considering the relationship between the optical power and the optical gain shown in Eq. (1.5.21), the last term of Eq. (1.5.25) can be expressed as

2\Gamma a\int_0^L [N(z,t) - N_0]P_{opt}(z,t)\,dz = P_{opt}(L,t) - P_{opt}(0,t)   (1.5.26)

Here, Popt(0, t) and Popt(L, t) are the input and output optical waveforms, respectively; we have neglected the waveguide loss αint. We have also assumed that the shape of the power distribution along the amplifier does not change, which implies that the rate of optical power change is much slower than the transit time of the amplifier. For example, if the length of an SOA is L = 300 μm and the refractive index of the waveguide is n = 3.5, the transit time of the SOA is approximately 3.5 ps. Under these approximations, Eq. (1.5.25) can be simplified to

\frac{dN_{tot}(t)}{dt} = \frac{I}{qd} - \frac{N_{tot}(t)}{\tau} - \frac{P_{opt}(0,t)}{hfA}[G(t) - 1]   (1.5.27)

where G(t) = Popt(L, t)/Popt(0, t) is the optical gain. On the other hand, the optical gain of the amplifier can be expressed as

G(t) = \exp\left[2\Gamma a\int_0^L (N(z,t) - N_0)\,dz\right] = \exp\{2\Gamma a[N_{tot}(t) - N_0 L]\}   (1.5.28)

that is,

N_{tot}(t) = \frac{\ln G(t)}{2\Gamma a} + N_0 L

or

\frac{dN_{tot}(t)}{dt} = \frac{1}{2\Gamma a\,G(t)}\frac{dG(t)}{dt}   (1.5.29)

Eqs. (1.5.27) and (1.5.29) can be used to investigate how an SOA responds to a short optical pulse. When a short and intense optical pulse is injected into an SOA with the pulse width T ≪ τ, within this short time interval of optical pulse duration, the


contribution of the current injection I/qd and of the spontaneous recombination Ntot/τ is very small and can be neglected. With this simplification, Eq. (1.5.27) becomes

\frac{dN_{tot}(t)}{dt} \approx -\frac{P_{opt}(0,t)}{hfA}[G(t) - 1]   (1.5.30)

Eqs. (1.5.29) and (1.5.30) can be combined to eliminate dNtot/dt, and the resulting equation can be integrated over the pulse duration from t = 0 to t = T,

\ln\left[\frac{1 - 1/G(T)}{1 - 1/G(0)}\right] + \frac{1}{\tau P_{sat,opt}}\int_0^T P_{opt}(0,t)\,dt = 0   (1.5.31)

where Psat,opt is the saturation power defined in Eq. (1.5.20), and G(0) and G(T) are the SOA optical gains immediately before and immediately after the optical pulse, respectively. Eq. (1.5.31) can be rewritten as

G(T) = \frac{1}{1 - \left(1 - \dfrac{1}{G_0}\right)\exp\left[-\dfrac{W_{in}(T)}{\tau P_{sat,opt}}\right]}   (1.5.32)

where W_{in}(T) = \int_0^T P_{opt}(0,t)\,dt is the input optical pulse energy. An SOA is often used in nonlinear applications that exploit this fast cross-gain saturation effect: an intense optical pulse at one wavelength saturates the optical gain of the amplifier, and optical signals at other wavelengths passing through the same SOA are therefore optically modulated. Fig. 1.5.3 illustrates the short-pulse response of an SOA. Assume that the input optical pulse is so short that its width can be neglected; then, immediately after the pulse, the optical gain of the amplifier is given by Eq. (1.5.32). As an example, for an SOA with a saturation power Psat,opt = 110 mW and a carrier lifetime τ = 1 ns, the optical gain suppression ratio versus input pulse energy is shown in Fig. 1.5.4 for three different small-signal gains of the SOA. With a high small-signal optical gain, it is relatively easy to achieve a high gain suppression ratio; for example, for an input optical pulse energy of 10 pJ, achieving 10 dB of gain suppression requires a small-signal gain of at least 20 dB, as shown in Fig. 1.5.4. Although the gain suppression in an SOA is fast, limited to some extent only by the width of the input optical pulse, the gain recovery after the pulse can be quite long, because the recovery process depends on the carrier lifetime of the SOA. After the optical pulse passes through the SOA, the gain starts to recover toward its small-signal value due to the constant carrier injection into the SOA. Since the photon density inside the SOA is low without an input optical signal, stimulated emission can be neglected during the gain recovery, and the carrier population rate equation is

\frac{dN_{tot}(t)}{dt} = \frac{I}{qd} - \frac{N_{tot}(t)}{\tau}   (1.5.33)



Fig. 1.5.3 Illustration of short pulse response of optical gain and carrier population of an SOA: (A) short optical pulse, (B) optical gain of the SOA, and (C) carrier population.


Fig. 1.5.4 Gain suppression ratio versus input optical pulse energy in an SOA with 110 mW saturation optical power and 1 ns carrier lifetime.
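A minimal sketch of Eq. (1.5.32), using the parameter values quoted for Fig. 1.5.4 (Psat,opt = 110 mW, τ = 1 ns); it reproduces the roughly 10 dB gain suppression at 10 pJ pulse energy for a 20 dB small-signal gain mentioned in the text. The helper function name is, of course, only for illustration.

import numpy as np

# Gain of an SOA immediately after a short, intense pulse, Eq. (1.5.32)
tau = 1e-9            # carrier lifetime (s)
Psat = 110e-3         # saturation optical power (W)

def gain_after_pulse(G0_dB, Win):
    G0 = 10 ** (G0_dB / 10)
    return 1.0 / (1.0 - (1.0 - 1.0 / G0) * np.exp(-Win / (tau * Psat)))

for G0_dB in (10, 20, 30):
    GT = gain_after_pulse(G0_dB, 10e-12)          # 10 pJ input pulse
    suppression_dB = G0_dB - 10 * np.log10(GT)
    print(f"G0 = {G0_dB} dB: G(T) = {10*np.log10(GT):.1f} dB, "
          f"suppression = {suppression_dB:.1f} dB")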

The solution of Eq. (1.5.33) is

N_{tot}(t) = [N_{tot}(t_s) - N_{tot}(0)]\exp\left(-\frac{t - t_s}{\tau}\right) + N_{tot}(0)   (1.5.34)

where Ntot(0) = Ntot(∞) is the small-signal carrier population of the SOA. Since the optical gain is exponentially proportional to the carrier population, as shown in Eq. (1.5.28),




\ln\left[\frac{G(t)}{G_0}\right] = \ln\left[\frac{G(t_s)}{G_0}\right]\exp\left(-\frac{t - t_s}{\tau}\right)   (1.5.35)
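The exponential recovery in Eq. (1.5.35) is easy to tabulate; the short sketch below assumes the gain has been suppressed from a 20 dB small-signal value down to 10 dB at t = ts and uses τ = 1 ns, both illustrative values rather than data from the text.

import numpy as np

# Gain recovery after a saturating pulse, Eq. (1.5.35)
tau = 1e-9                      # carrier lifetime (s)
G0_dB, Gts_dB = 20.0, 10.0      # small-signal gain and suppressed gain at t = ts (assumed)

for dt in (0.0, 0.5e-9, 1e-9, 2e-9, 3e-9):
    ln_ratio = np.log(10 ** ((Gts_dB - G0_dB) / 10)) * np.exp(-dt / tau)
    G_dB = G0_dB + 10 * np.log10(np.exp(ln_ratio))
    print(f"t - ts = {dt*1e9:3.1f} ns: G = {G_dB:.1f} dB")
# The gain returns to within 1 dB of G0 only after roughly two to three carrier lifetimes.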

Eq. (1.5.35) clearly shows that the dynamics of the optical gain recovery depend primarily on the carrier lifetime τ of the SOA.

Optical wavelength conversion using cross-gain saturation
One important application of the gain saturation effect in an SOA is all-optical wavelength conversion, shown schematically in Fig. 1.5.5. In this application, a high-power, intensity-modulated optical signal at wavelength λ2 is combined with a relatively weak CW signal at λ1 before both are injected into an SOA. The optical gain of the SOA is then modulated by the signal at λ2 through the SOA gain saturation effect, as illustrated in Fig. 1.5.5B. At the same time, the weak CW optical signal at wavelength λ1 is amplified by the time-varying gain of the SOA, so its output waveform is complementary to that of the optical signal at λ2. At the output of the SOA, an optical bandpass filter is used to select the wavelength component at λ1 and reject that at λ2. In this way, the wavelength of the optical carrier is converted from λ2 to λ1. The process is all-optical, without the need for electrical modulation.


Fig. 1.5.5 All-optical wavelength conversion using the cross-gain saturation effect of an SOA: (A) system configuration and (B) illustration of the principle.


Wavelength conversion using FWM in an SOA
In the last section, we discussed four-wave mixing (FWM) in an optical fiber, where two optical carriers at different frequencies, ω1 and ω2, copropagate in the fiber and cause a refractive index modulation at their difference frequency Ω = ω2 − ω1; new FWM frequency components are created by this phase modulation of the optical fiber. The concept of FWM in an SOA is similar, except for a different nonlinear mechanism: in an SOA, the two optical carriers modulate the carrier population Ntot, and therefore the optical gain, at the difference frequency between the two carriers. Rigorous analysis of FWM in an SOA can be performed by numerically solving the rate equations; however, to understand the physical mechanism, we can simplify the analysis using the mean-field approximation. With this approximation, the carrier population rate equation, Eq. (1.5.25), can be written as

\frac{dN_{tot}(t)}{dt} = \frac{I}{qd} - \frac{N_{tot}(t)}{\tau} - \frac{2\Gamma a}{hfA}[N_{tot}(t) - N_{0,tot}]P_{ave}(t)   (1.5.36)

where Pave is the spatially averaged optical power and N0,tot is the transparency carrier population. Since two optical carriers coexist in the SOA, the total optical power inside the SOA is the combination of the two,

P_{ave}(t) = \left|E_1 e^{j\omega_1 t} + E_2 e^{j\omega_2 t}\right|^2   (1.5.37)

where ω1 and ω2 are the optical frequencies and E1 and E2 are the field amplitudes of the two carriers, respectively. Eq. (1.5.37) can be expanded as

P_{ave}(t) = P_{DC} + 2E_1 E_2\cos(\Delta\omega t)   (1.5.38)

where P_{DC} = E_1^2 + E_2^2 is the constant part of the power and Δω = ω2 − ω1 is the frequency difference between the two carriers. As a consequence of this optical power modulation at frequency Δω, as shown in Eq. (1.5.38), the carrier population is also modulated at the same frequency through gain saturation:

N_{tot}(t) = N_{DC} + \Delta N\cos(\Delta\omega t)   (1.5.39)

where NDC is the constant part of the carrier population and ΔN is the magnitude of the carrier population modulation. Substituting Eqs. (1.5.38) and (1.5.39) into the carrier population rate equation, Eq. (1.5.36), and separating the DC and time-varying terms, we can determine ΔN and NDC as

\Delta N = -\frac{2(N_{DC} - N_{0,tot})E_1 E_2/P_{sat,opt}}{1 + P_{DC}/P_{sat,opt} + j\Delta\omega\tau}   (1.5.40)

and


N_{DC} = \frac{I\tau/(qd) + N_{0,tot}P_{DC}/P_{sat,opt}}{1 + P_{DC}/P_{sat,opt}}   (1.5.41)

Since the optical gain of the amplifier is exponentially proportional to the carrier population, the magnitude of the carrier population modulation reflects the efficiency of the FWM. Eq. (1.5.40) indicates that the FWM efficiency is low when the carrier frequency separation Δω is high, and that the effective FWM bandwidth is mainly determined by the carrier lifetime τ of the SOA. In a typical SOA the carrier lifetime is on the order of a nanosecond, which would limit the effective FWM bandwidth to the order of a gigahertz. Experimentally, however, FWM in an SOA has been observed even when the frequency separation between the two optical carriers was as wide as the terahertz range, as illustrated in Fig. 1.5.6. The wide FWM bandwidth measured in SOAs indicates that the gain saturation is determined not only by the interband carrier recombination lifetime τ but also by other effects, such as intraband carrier relaxation and carrier heating, which must be considered to explain the ultrafast carrier dynamics. In Fig. 1.5.6, the normalized FWM efficiency, defined as the ratio between the power of the FWM component and that of the original optical carrier, can be modeled as a multiple-time-constant system (Zhou et al., 1993):

\eta_{FWM} \propto \left|\sum_n \frac{k_n}{1 + i\Delta\omega\tau_n}\right|^2   (1.5.42)

where τn and kn are the time constant and the relative weight of each carrier recombination mechanism. To fit the measured curve, three time constants had to be used, with k1 = 5.65e^{1.3i}, k2 = 0.0642e^{1.3i}, k3 = 0.0113e^{1.53i}, τ1 = 0.2 ns, τ2 = 650 fs, and τ3 = 50 fs (Zhou et al., 1993).


Fig. 1.5.6 Normalized FWM signal power versus frequency detune.
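As a qualitative sketch of Eq. (1.5.42), the Python fragment below evaluates the three-time-constant model with the kn and τn values quoted above (the complex phases are taken as printed and only set the interference between terms) and normalizes to the low-detune value. It is meant only to show how the three lifetimes produce the slow roll-off seen in Fig. 1.5.6, not to reproduce the measured curve exactly.

import numpy as np

# Multi-time-constant FWM efficiency model, Eq. (1.5.42)
k = np.array([5.65 * np.exp(1.3j), 0.0642 * np.exp(1.3j), 0.0113 * np.exp(1.53j)])
tau = np.array([0.2e-9, 650e-15, 50e-15])        # s

def eta_fwm(detune_Hz):
    dw = 2 * np.pi * detune_Hz
    return np.abs(np.sum(k / (1 + 1j * dw * tau))) ** 2

eta_ref = eta_fwm(1e9)                           # reference at 1 GHz detune
for f in (1e9, 10e9, 100e9, 1e12, 4e12):
    print(f"detune = {f/1e9:7.0f} GHz: eta = {10*np.log10(eta_fwm(f)/eta_ref):6.1f} dB")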



Fig. 1.5.7 Wavelength conversion using FWM in an SOA: (A) system configuration and (B) illustration of the frequency relationship of FWM, where ωFWM = 2ω1 − ω2.

The most important application of FWM in an SOA is frequency conversion, as illustrated in Fig. 1.5.7. A high-power CW optical carrier at frequency ω1, usually referred to as the pump, is injected into the SOA together with an optical signal at frequency ω2, known as the probe. Due to FWM in the SOA, a new frequency component is created at ωFWM = 2ω1 − ω2, and a bandpass optical filter is used to select this FWM component. In addition to performing frequency conversion from ω2 to ωFWM, this process also performs frequency conjugation, as shown in Fig. 1.5.7B: the upper (lower) modulation sideband of the probe signal is translated into the lower (upper) sideband of the wavelength-converted FWM component. This frequency conjugation effect has been used to combat chromatic dispersion in fiber-optic systems. Suppose the optical fiber has anomalous dispersion, so that high-frequency components travel faster than low-frequency components. If an SOA performing wavelength conversion and frequency conjugation is placed in the middle of the fiber link, the fast-traveling upper sideband of the first half of the system is translated into the lower sideband, which travels more slowly over the second half of the link. Signal waveform distortion due to fiber chromatic dispersion can therefore be canceled by this midspan frequency conjugation.

Optical phase modulation in an SOA
As we discussed in Section 1.2 of this chapter, in a semiconductor laser a change in the carrier density not only changes the optical gain but also changes the refractive index of the material and thus introduces an optical phase change. The same is true in semiconductor optical amplifiers. The photon density P at the output of an optical amplifier can be related to the material gain g as P = Pin exp(gL), where Pin is the input photon density and L is the SOA length. If there is a small change in the carrier density that induces a change of the


material gain by Δg, the corresponding change in the photon density is ΔP = P·ΔgL. At the same time, the optical phase is changed by Δφ = ωΔnL/c, where Δn is the corresponding change of the refractive index and ω is the signal optical frequency. According to the definition of the linewidth enhancement factor αlw in Eq. (1.2.48), the gain change and the phase change are related by

\alpha_{lw} = 2P\frac{\Delta\varphi}{\Delta P} = \frac{2\Delta\varphi}{\Delta g L}   (1.5.43)

If the phase changes from φ1 to φ2 while the material gain changes from g1 to g2, so that the optical gain changes from G1 = exp(g1L) to G2 = exp(g2L), Eq. (1.5.43) gives

\varphi_2 - \varphi_1 = -\frac{\alpha_{lw}}{2}\ln\left(\frac{G_2}{G_1}\right)   (1.5.44)

Optical phase modulation has been used to build all-optical switches. A typical configuration is shown in Fig. 1.5.8, where two identical SOAs are used in a Mach-Zehnder interferometer setting. A control optical beam at wavelength λ2 is injected into one of the two SOAs to create an imbalance of the interferometer through the optical power-induced phase change; a π/2 phase change in one interferometer arm changes the interferometer operation from constructive to destructive interference, switching off the optical signal at wavelength λ1. The major advantage of an all-optical switch based on phase modulation in an SOA is that the extinction ratio can be made very high by carefully balancing the amplitude and phase of the two interferometer arms, whereas in a gain saturation-based optical switch a very strong pump power is required to achieve a reasonable level of gain saturation, as illustrated in Fig. 1.5.5B. In addition, phase modulation is usually less wavelength sensitive than gain saturation, which allows a wider wavelength separation between the signal and the pump (Wang et al., 2008). To summarize, a semiconductor optical amplifier has a structure similar to that of a semiconductor laser except that it lacks optical feedback. The device is small and can be integrated with other optical devices, such as laser diodes and detectors, on a common platform.


Fig. 1.5.8 All-optical switch using SOAs in a Mach-Zehnder configuration.
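Eq. (1.5.44) gives a quick feel for how much gain change the control beam must induce in one arm of the interferometer; the sketch below assumes a linewidth enhancement factor αlw = 5 (a typical value, not a figure from the text) and tabulates the resulting phase shift for a few gain compressions.

import numpy as np

# Phase change versus gain change in an SOA, Eq. (1.5.44)
alpha_lw = 5.0                    # linewidth enhancement factor (assumed typical value)

def phase_shift(dG_dB):
    """Phase change when the optical gain is compressed by dG_dB (decibels)."""
    return -0.5 * alpha_lw * np.log(10 ** (-dG_dB / 10))

for dG_dB in (1, 2, 3, 5):
    print(f"gain compression {dG_dB} dB -> phase shift {phase_shift(dG_dB):.2f} rad")

# With alpha_lw = 5, a bit under 3 dB of gain compression already gives a pi/2 phase
# shift, which is why the interferometric switch needs far less pump power than a
# purely gain-saturation-based switch.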


Because of its fast gain dynamics, an SOA can often be used for all-optical switches and other optical signal processing purposes. However, for the same reason, an SOA is usually not suitable to be used as a line amplifier in a multiwavelength optical system. Cross-gain modulation in the SOA will create significant crosstalk between wavelength channels. For that purpose, the optical fiber amplifier is a much better choice.

1.5.3 Erbium-doped fiber amplifiers (EDFAs)
The erbium-doped fiber amplifier is one of the most popular optical devices in modern optical communication systems as well as in fiber-optic instrumentation. EDFAs provide many advantages over SOAs in terms of high gain, high optical power, low crosstalk between wavelength channels, and easy optical coupling from and to optical fibers. The basic structure of an EDFA, shown in Fig. 1.5.9, is composed of an erbium-doped fiber, a pump laser, an optical isolator, and a wavelength-division multiplexer. In contrast to SOAs, an EDFA is optically pumped and therefore requires a pump source, which is usually a high-power semiconductor laser. The wavelength-division multiplexer (WDM) combines the short-wavelength pump with the longer-wavelength signal; a WDM combiner is used instead of a simple optical coupler to avoid the combining loss, as discussed in detail in the next section. The optical isolator minimizes the impact of optical reflections from the interfaces of optical components: since an EDFA may provide a significant amount of optical gain, even a small amount of optical reflection may cause oscillation and degrade EDFA performance. Some high-gain EDFAs may also employ another optical isolator at the input side.

The optical pumping process in an erbium-doped fiber is usually described by a simplified three-level energy system, as illustrated in Fig. 1.5.10A. The bandgap between the ground state and the excited state is approximately 1.268 eV; therefore, pump photons at 980 nm wavelength are able to excite ground-state carriers to the excited state and create population inversion. The carriers stay in the excited state for only about 1 μs, after which they decay into a metastable state through a nonradiative transition; in this process, the energy loss is turned into mechanical vibrations in the fiber. The energy band of the metastable state extends roughly from 0.8 to 0.84 eV, which corresponds to a wavelength band from 1480 to 1550 nm.


Fig. 1.5.9 Configuration of an EDFA. EDF: erbium-doped fiber, WDM: wavelength-division multiplexer.



Fig. 1.5.10 Simplified energy band diagrams of erbium ions in silica: (A) three-level system and (B) two-level system.

Within the metastable energy band, carriers tend to move down from the top of the band to somewhere near the bottom through a process known as intraband relaxation. Finally, radiative recombination happens when carriers step down from the bottom of the metastable state to the ground state and emit photons in the 1550 nm wavelength region. The carrier lifetime in the metastable state is on the order of 10 ms, four orders of magnitude longer than the carrier lifetime in the excited state. Therefore, with constant optical pumping at 980 nm, almost all the carriers accumulate in the metastable state, and the three-level system can be simplified into a two-level system for most practical applications (Desurvire, 1994). An erbium-doped fiber can also be pumped at 1480 nm wavelength, which corresponds to the bandgap between the top of the metastable state and the ground state. In this case, 1480 nm pump photons excite carriers from the ground state directly to the top of the metastable state, from which the carriers relax down to the bottom of the metastable band. Typically, 1480 nm pumping is more efficient than 980 nm pumping because it does not involve the nonradiative transition from the excited state to the metastable state, and it is therefore often used for high-power optical amplifiers. However, amplifiers with 1480 nm pumps usually have higher noise figures than those with 980 nm pumps, as will be discussed later.

1.5.3.1 Absorption and emission cross sections
Absorption and emission cross sections are two very important properties of erbium-doped fibers. Although the name cross section may seem to suggest a geometric size of the fiber, it does not: the physical meanings of the absorption and emission cross sections in an erbium-doped fiber are the absorption and emission efficiencies at each wavelength.


Let's start with the pump absorption efficiency Wp, defined as the probability per second that a ground-state carrier is pumped to the metastable state. If the pump optical power is Pp, the number of incoming pump photons per second is Pp/hfp, where fp is the optical frequency of the pump. The pump absorption efficiency is then

W_p = \frac{\sigma_a P_p}{h f_p A}   (1.5.45)

where A is the effective fiber core cross-section area and σa is the absorption cross section. In other words, the absorption cross section is defined as the ratio of the pump absorption efficiency to the density of the pump photon flow rate:

\sigma_a = W_p\Big/\left(\frac{P_p}{h f_p A}\right)   (1.5.46)

Since the unit of Wp is s⁻¹ and the unit of the pump photon flow rate density Pp/(hfpA) is s⁻¹m⁻², the unit of the absorption cross section is m², which is probably where the term absorption cross section comes from. As mentioned, it has nothing to do with the geometric cross section of the fiber. We also need to note that the absorption cross section does not represent attenuation; it indicates the efficiency of energy conversion from photons to excited carriers. Similarly, we can define an emission cross section as

\sigma_e = W_s\Big/\left(\frac{P_s}{h f_s A}\right)   (1.5.47)

where Ps and fs are the emitted signal optical power and frequency, respectively, and Ws is the stimulated emission efficiency, defined as the probability per second that a carrier in the metastable state recombines to produce a signal photon. It has to be emphasized that both the absorption and the emission cross sections are properties of the erbium-doped fiber itself; they are independent of the operating conditions of the fiber, such as the pump and signal optical power levels. At each wavelength, both absorption and emission exist, because photons can be absorbed to generate carriers through stimulated absorption while, at the same time, new photons can be generated through stimulated emission. Fig. 1.5.11 shows an example of the absorption and emission cross sections; both are clearly functions of wavelength. If the carrier densities in the ground state and the metastable state are N1 and N2, respectively, the net stimulated emission rate per unit volume is Re = Ws(λs)N2 − Wp(λs)N1, where Ws(λs) and Wp(λs) are the emission and absorption efficiencies of the erbium-doped fiber at the signal wavelength λs.



Fig. 1.5.11 Absorption and emission cross sections of HE980 erbium-doped fiber.

Considering the definitions of the absorption and emission cross sections, this net emission rate can be expressed as

R_e = \frac{\Gamma_s\,\sigma_e(\lambda_s)P_s}{h f_s A}\left[N_2 - \frac{\sigma_a(\lambda_s)}{\sigma_e(\lambda_s)}N_1\right]   (1.5.48)

where a field confinement factor Γs is introduced to account for the overlap between the signal optical field and the erbium-doped area in the fiber core. The physical meaning of this net emission rate is the number of photons generated per second per unit volume.

1.5.3.2 Rate equations
The rate equation for the carrier density in the metastable state, N2, is

\frac{dN_2}{dt} = \frac{\Gamma_p\,\sigma_a(\lambda_p)P_p}{h f_p A}N_1 - \frac{\Gamma_s\,\sigma_e(\lambda_s)P_s}{h f_s A}\left[N_2 - \frac{\sigma_a(\lambda_s)}{\sigma_e(\lambda_s)}N_1\right] - \frac{N_2}{\tau}   (1.5.49)

On the right side of this equation, the first term is the contribution of carrier generation by stimulated absorption of the pump (Γp is the overlap factor at the pump wavelength); the second term represents the net stimulated recombination, which consumes the upper-level carrier density; and the last term is spontaneous recombination, in which upper-level carriers spontaneously recombine to generate spontaneous emission, with τ the spontaneous emission carrier lifetime. For simplicity, we have neglected the emission term at the pump wavelength because usually σe(λp) ≪ σa(λp). In Eq. (1.5.49), both the lower (N1) and the upper (N2) state population densities are involved. However, these two carrier densities are not independent; they are related by


N_1 + N_2 = N_T   (1.5.50)

where NT is the total density of erbium ions doped into the fiber; the energy state of each erbium ion is either in the ground level or in the metastable level. In the steady state (d/dt = 0), if only one signal wavelength and one pump wavelength are involved, Eq. (1.5.49) can be easily solved with the help of Eq. (1.5.50):

\frac{N_2}{N_T} = \frac{\Gamma_p\sigma_a(\lambda_p)f_s P_p + \Gamma_s\sigma_a(\lambda_s)f_p P_s}{\Gamma_p\sigma_a(\lambda_p)f_s P_p + \Gamma_s f_p P_s[\sigma_e(\lambda_s) + \sigma_a(\lambda_s)] + h f_s f_p A/\tau}   (1.5.51)

Note that since the optical powers of both the pump and the signal change significantly along the fiber, the upper-level carrier density is also a function of the position z along the fiber, N2 = N2(z). To find the z-dependent optical powers of the pump and the signal, propagation equations need to be established for them. The propagation equations for the optical signal at wavelength λs and for the pump at wavelength λp are

\frac{dP_s(z)}{dz} = g(z,\lambda_s)P_s(z)   (1.5.52)

\frac{dP_p(z)}{dz} = -\alpha(z,\lambda_p)P_p(z)   (1.5.53)

where

g(z,\lambda_s) = \Gamma_s[\sigma_e(\lambda_s)N_2(z) - \sigma_a(\lambda_s)N_1(z)]   (1.5.54)

is the effective gain coefficient at the signal wavelength and

\alpha(z,\lambda_p) = -\Gamma_p[\sigma_e(\lambda_p)N_2(z) - \sigma_a(\lambda_p)N_1(z)]   (1.5.55)

is the effective absorption coefficient at the pump wavelength. These propagation equations can be easily understood by looking at, for example, the term σe(λs)N2(z)Ps(z) = Ws·hfsA·N2(z), which is the optical power generated per unit length at position z; similarly, σa(λs)N1(z)Ps(z) = Wa·hfsA·N1(z) is the optical power absorbed per unit length at position z. Note that the unit of both g and α is m⁻¹. The two propagation equations (1.5.52) and (1.5.53) are mutually coupled through the carrier densities N2(z) and N1(z) = NT − N2(z). Once the position-dependent carrier density N2(z) is found, the performance of the EDFA is known, and the overall signal optical gain of the amplifier can be found as

G(\lambda_s) = \exp\left\{\Gamma_s[\sigma_e(\lambda_s) + \sigma_a(\lambda_s)]\int_0^L N_2(z)\,dz - \Gamma_s\sigma_a(\lambda_s)N_T L\right\}   (1.5.56)

It depends on the accumulated carrier density along the fiber length.
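As an illustration of Eq. (1.5.51), the sketch below evaluates the local inversion level N2/NT in Python for an assumed set of fiber parameters; the cross sections, overlap factors, core area, and lifetime are typical order-of-magnitude values chosen for the example, not data from the text.

import numpy as np

# Local inversion level N2/NT from Eq. (1.5.51), two-level model
h, c = 6.626e-34, 3e8
lam_p, lam_s = 980e-9, 1550e-9
f_p, f_s = c / lam_p, c / lam_s

# Assumed, typical erbium-doped fiber parameters (illustrative only)
sig_a_p = 2.5e-25      # pump absorption cross section (m^2)
sig_a_s = 2.5e-25      # signal absorption cross section (m^2)
sig_e_s = 3.5e-25      # signal emission cross section (m^2)
Gam_p = Gam_s = 0.7    # overlap factors
A = 1.2e-11            # doped core area (m^2), roughly 2 um radius
tau = 10e-3            # metastable-state lifetime (s)

def inversion(Pp, Ps):
    num = Gam_p * sig_a_p * f_s * Pp + Gam_s * sig_a_s * f_p * Ps
    den = (Gam_p * sig_a_p * f_s * Pp
           + Gam_s * f_p * Ps * (sig_e_s + sig_a_s)
           + h * f_s * f_p * A / tau)
    return num / den

for Pp_mW in (1, 10, 50, 100):
    print(f"Pp = {Pp_mW:3d} mW, Ps = 0.1 mW: N2/NT = {inversion(Pp_mW*1e-3, 0.1e-3):.2f}")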


In practice, both the pump and the signal are externally injected into the EDF, and they differ only by wavelength; there can be multiple pump channels and multiple signal channels. Therefore, Eqs. (1.5.52) and (1.5.53) can be generalized as

\frac{dP_k(z)}{dz} = g_k(z,\lambda_k)P_k(z)   (1.5.57)

where the subscript k indicates the kth channel at wavelength λk, and the gain coefficient for the kth channel is

g_k(z) = \Gamma_k[\sigma_e(\lambda_k)N_2(z) - \sigma_a(\lambda_k)N_1(z)]   (1.5.58)

In terms of the impact on the carrier density, the only difference between the pump and the signal is that at the signal wavelength σe > σa, whereas at the pump wavelength we usually have σa ≫ σe. Strictly speaking, both the signal and the pump participate in the emission and absorption processes. In the general approach, the upper-level carrier density depends on the optical power of all the wavelength channels, and the rate equation of N2 is

\frac{dN_2}{dt} = -\sum_j \frac{P_j(z)}{A h f_j}\,\Gamma_j\left[\sigma_e(\lambda_j)N_2 - \sigma_a(\lambda_j)N_1\right] - \frac{N_2}{\tau}   (1.5.59)

Considering the expressions in Eqs. (1.5.57) and (1.5.58), Eq. (1.5.59) can be simplified to

\frac{dN_2(t,z)}{dt} = -\sum_j \frac{1}{A h f_j}\frac{dP_j(z)}{dz} - \frac{N_2}{\tau}   (1.5.60)

In the steady state, d/dt = 0, and Eq. (1.5.60) can be written as

N_2(z) = -\tau\sum_j \frac{1}{A h f_j}\frac{dP_j(z)}{dz}   (1.5.61)

In addition, Eq. (1.5.57) can be integrated on both sides:

\int_{P_{k,in}}^{P_{k,out}}\frac{dP_k(z)}{P_k(z)} = \int_0^L g_k(z)\,dz = \ln(P_{k,out}) - \ln(P_{k,in}) = \ln\left(\frac{P_{k,out}}{P_{k,in}}\right)   (1.5.62)

Since

g_k(z) = \Gamma_k\{\sigma_e(\lambda_k)N_2(z) - \sigma_a(\lambda_k)[N_T - N_2(z)]\} = \Gamma_k\{[\sigma_e(\lambda_k) + \sigma_a(\lambda_k)]N_2(z) - \sigma_a(\lambda_k)N_T\}

using Eq. (1.5.61), we have


g_k(z) = -\Gamma_k\left\{[\sigma_e(\lambda_k) + \sigma_a(\lambda_k)]\,\tau_{sp}\sum_j \frac{1}{A h f_j}\frac{dP_j(z)}{dz} + \sigma_a(\lambda_k)N_T\right\}   (1.5.63)

gk(z) can be integrated over the erbium-doped fiber length L:

\int_0^L g_k(z)\,dz = -\Gamma_k\left\{[\sigma_e(\lambda_k) + \sigma_a(\lambda_k)]\,\tau_{sp}\sum_j \frac{P_{j,out} - P_{j,in}}{A h f_j} + \sigma_a(\lambda_k)N_T L\right\}

Defining

a_k = \Gamma_k\,\sigma_a(\lambda_k)N_T   (1.5.64)

S_k = \frac{\Gamma_k\,\tau_{sp}[\sigma_e(\lambda_k) + \sigma_a(\lambda_k)]}{A}   (1.5.65)

we then have

\int_0^L g_k(z)\,dz = -\left(S_k\sum_j \frac{P_{j,out} - P_{j,in}}{h f_j} + a_k L\right)   (1.5.66)

The right side of Eq. (1.5.66) can be rewritten as

-\left(S_k\sum_j \frac{P_{j,out} - P_{j,in}}{h f_j} + a_k L\right) = -S_k\left(\sum_j \frac{P_{j,out}}{h f_j} - \sum_j \frac{P_{j,in}}{h f_j}\right) - a_k L = -S_k[Q_{OUT} - Q_{IN}] - a_k L

where

Q_{OUT} = \sum_j \frac{P_{j,out}}{h f_j} \quad\text{and}\quad Q_{IN} = \sum_j \frac{P_{j,in}}{h f_j}

are the total photon flow rates at the output and the input, respectively (including both signals and pump). Combining Eqs. (1.5.62) and (1.5.66), we have (Saleh et al., 1990)

P_{k,out} = P_{k,in}\,e^{-a_k L}\,e^{-S_k(Q_{OUT} - Q_{IN})}   (1.5.67)


In this equation, both ak and Sk are properties of the EDF. QIN is the total photon flow rate at the input, which is known, whereas QOUT, the total photon flow rate at the output, is not. To find QOUT, one can solve the following equation iteratively:

Q_{OUT} = \sum_{k=1}^{N}\frac{1}{h f_k}P_{k,in}\exp(S_k Q_{IN} - a_k L)\exp(-S_k Q_{OUT})   (1.5.68)

In this equation, everything is known except QOUT, and a simple numerical algorithm is enough to solve it. Once QOUT is obtained from Eq. (1.5.68), the optical gain for each channel can be found using Eq. (1.5.67). Although Eqs. (1.5.67) and (1.5.68) provide a semianalytical formulation for investigating the performance of an EDFA, this formulation has several limitations: (1) The calculation neglects the carrier saturation caused by amplified spontaneous emission (ASE) noise; when the optical gain is high, the ASE noise may be significant and contributes to saturating the EDFA gain, so the quasi-analytical formulation is accurate only when the EDFA gain is low. (2) It predicts only the relationship between the input and output optical power levels; the power evolution P(z) along the EDF is not given. (3) It calculates only the accumulated carrier population ∫₀ᴸ N2(z)dz in the EDF; the actual carrier density distribution is not provided. Since ASE noise is generated and amplified in a distributed manner along the EDF, this semianalytical method cannot accurately calculate the ASE noise at the EDFA output. Therefore, although it serves as a quick evaluation tool, it does not give accurate enough results when the optical gain is high.
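A minimal sketch of how Eqs. (1.5.67) and (1.5.68) can be solved numerically for one pump and one signal channel; the values of ak, Sk, the fiber length, and the input powers are arbitrary illustrative assumptions, not fiber data from the text, and bisection is used simply because QOUT − (right-hand side) is monotonic in QOUT.

import numpy as np

# Semianalytical EDFA model (Saleh et al., 1990), Eqs. (1.5.67) and (1.5.68)
h, c = 6.626e-34, 3e8
lam = np.array([980e-9, 1550e-9])        # pump and signal wavelengths
f = c / lam
P_in = np.array([50e-3, 1e-5])           # 50 mW pump, -20 dBm signal (W)
a = np.array([0.9, 0.7])                 # a_k = Gamma_k*sigma_a(lam_k)*N_T (1/m), assumed
S = np.array([1.5e-16, 3.5e-16])         # S_k = Gamma_k*tau*(sigma_e+sigma_a)/A (s), assumed
L = 10.0                                 # erbium-doped fiber length (m)

Q_in = np.sum(P_in / (h * f))            # total input photon flow rate (1/s)

def rhs(Q_out):                          # right-hand side of Eq. (1.5.68)
    return np.sum(P_in / (h * f) * np.exp(S * Q_in - a * L) * np.exp(-S * Q_out))

lo, hi = 0.0, Q_in                       # the fixed point lies between 0 and Q_in
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mid - rhs(mid) < 0 else (lo, mid)
Q_out = 0.5 * (lo + hi)

P_out = P_in * np.exp(-a * L) * np.exp(-S * (Q_out - Q_in))   # Eq. (1.5.67)
print(f"residual pump = {P_out[0]*1e3:.1f} mW, "
      f"signal gain = {10*np.log10(P_out[1]/P_in[1]):.1f} dB")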


Fig. 1.5.12 Illustration of signal and ASE noise in an EDFA: P^f_s(λ, 0) and P^f_s(λ, L) represent the input and output optical signal (including the pump) in the forward direction; P^f_SP(λ) and P^b_SP(λ) represent the forward- and backward-propagating ASE.


Precise modeling of EDFA performance has to involve numerical simulations. The model has to consider the optical pump and optical signals at various wavelengths. Because the spontaneous emission noise generated along the erbium-doped fiber propagates in both the forward and the backward directions, it has to be considered as well. Fig. 1.5.12 illustrates the evolution of the optical signals and the ASE noise. P^f_s(λ, 0) is the combination of the input optical signal and the input pump. After propagating through the erbium-doped fiber, the optical signal is amplified and the optical pump is depleted, so that P^f_s(λ, L) is the combination of the output optical signal and the remnant pump. Because of carrier inversion, spontaneous emission is generated along the fiber and is amplified in both directions. P^f_SP(λ) is the ASE in the forward direction, which is zero at the fiber input and nonzero at the fiber output. Similarly, P^b_SP(λ) is the ASE in the backward direction, which is zero at the fiber output and nonzero at the fiber input. In the steady state, the propagation equations for the optical signal (including the pump), the forward ASE, and the backward ASE are, respectively,

$$\frac{dP_s^f(\lambda,z)}{dz} = P_s^f(\lambda,z)\,g(\lambda,z) \qquad (1.5.69)$$

$$\frac{dP_{SP}^f(\lambda,z)}{dz} = P_{SP}^f(\lambda,z)\,g(\lambda,z) + 2n_{sp}(z)\,g(\lambda,z) \qquad (1.5.70)$$

$$\frac{dP_{SP}^b(\lambda,z)}{dz} = -P_{SP}^b(\lambda,z)\,g(\lambda,z) - 2n_{sp}(z)\,g(\lambda,z) \qquad (1.5.71)$$

where n_sp is the spontaneous emission factor, which depends on the carrier inversion level N₂ as

$$n_{sp}(z) = \frac{N_2(z)}{N_2(z)-N_1(z)} = \frac{N_2(z)}{2N_2(z)-N_T} \qquad (1.5.72)$$

The carrier density N₂ is described by the steady-state carrier density rate equation

$$\frac{N_2}{\tau} + \int_\lambda \left[\frac{P_s^f(\lambda,z)}{Ahc/\lambda} + \frac{P_{SP}^f(\lambda,z)}{Ahc/\lambda} + \frac{P_{SP}^b(\lambda,z)}{Ahc/\lambda}\right] g(\lambda,z)\,d\lambda = 0 \qquad (1.5.73)$$

where

$$g(\lambda,z) = \Gamma(\lambda)\left\{\left[\sigma_e(\lambda)+\sigma_a(\lambda)\right]N_2(z) - \sigma_a(\lambda)N_T\right\} \qquad (1.5.74)$$

is the gain coefficient and Γ(λ) is the wavelength-dependent confinement factor. Eqs. (1.5.69) through (1.5.73) are coupled through N₂(z) and can be solved numerically. A complication in this numerical analysis is that the ASE noise has a continuous spectrum that varies with wavelength. The ASE spectrum therefore has to be divided into narrow slices, and within each slice the noise power can be treated as if it were at a single wavelength. If the ASE spectrum is divided into m wavelength slices, Eqs. (1.5.70) and (1.5.71) each must be split into m equations. Since the carrier density N₂(z) is a function of z, the calculation usually divides the erbium-doped fiber into many short sections, and within each section N₂ can be assumed constant. As shown in Fig. 1.5.12, if the calculation starts from the input side of the fiber, we know that the forward ASE noise at the fiber input (z = 0) is zero, and we also know the power levels of the input optical signal and the pump. However, we do not know the level of the backward ASE noise at z = 0. To solve the rate equations, we have to assume a value of P^b_SP(λ) at z = 0; the equations can then be solved section by section along the fiber. At the end of the fiber, an important boundary condition has to be met: the backward ASE noise must vanish, P^b_SP(λ, L) = 0. If this condition is not satisfied, the calculation is repeated with a modified initial guess of P^b_SP(λ) at z = 0. This process usually has to be repeated several times, until the backward ASE noise at the fiber output is lower than a certain level (say, −40 dBm); the results are then considered accurate. A simplified numerical sketch of such a calculation is given below.
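The sketch below is a deliberately simplified, single-ASE-slice version of such a model, with all cross sections, ion density, core area, and ASE bandwidth assumed for illustration only. Instead of guessing the backward ASE at z = 0 and shooting, it alternates forward and backward integration sweeps, which enforces the same boundary conditions; the spontaneous source term is written in the Giles-model form 2hνΔν·Γσ_e(λ)N₂, playing the role of the 2n_sp g term in Eqs. (1.5.70) and (1.5.71).

```python
import numpy as np

h = 6.626e-34
# --- assumed EDF parameters (illustrative only) ---
L, n_sec = 20.0, 2000                 # fiber length (m), number of sections
dz       = L / n_sec
A        = 1.3e-11                    # doped-area cross section (m^2)
NT       = 6.0e24                     # total Er ion density (1/m^3)
tau      = 10e-3                      # upper-state lifetime (s)
Gam      = 1.0                        # confinement factor
sa_s, se_s = 2.5e-25, 3.5e-25         # signal cross sections @ 1550 nm (m^2)
sa_p, se_p = 2.2e-25, 0.0             # pump cross sections @ 980 nm (m^2)
nu_s, nu_p = 2.998e8/1550e-9, 2.998e8/980e-9
dnu      = 3.0e12                     # ASE lumped into a single 3-THz slice (Hz)
Ps_in, Pp_in = 1e-6, 75e-3            # -30 dBm signal, 75 mW forward pump

def N2_of(Pp, Ps, Pa_f, Pa_b):
    """Steady-state N2 from Eq. (1.5.73): linear in N2, solved directly."""
    num, den = 0.0, 1.0 / tau
    for P, nu, sa, se in ((Pp, nu_p, sa_p, se_p),
                          (Ps + Pa_f + Pa_b, nu_s, sa_s, se_s)):
        phi  = Gam * P / (A * h * nu)         # photon flux factor
        num += phi * sa * NT
        den += phi * (sa + se)
    return num / den

def g(N2, sa, se):
    """Gain coefficient, Eq. (1.5.74)."""
    return Gam * ((se + sa) * N2 - sa * NT)

Pa_b = np.zeros(n_sec + 1)                    # backward ASE, initial guess: zero
for _ in range(6):                            # alternate sweeps until self-consistent
    Pp = np.empty(n_sec + 1); Pp[0] = Pp_in
    Ps = np.empty(n_sec + 1); Ps[0] = Ps_in
    Pa_f = np.zeros(n_sec + 1)
    N2 = np.empty(n_sec)
    for k in range(n_sec):                    # forward sweep: pump, signal, forward ASE
        N2[k] = N2_of(Pp[k], Ps[k], Pa_f[k], Pa_b[k])
        gs, gp = g(N2[k], sa_s, se_s), g(N2[k], sa_p, se_p)
        src = 2 * h * nu_s * dnu * Gam * se_s * N2[k]    # spontaneous source (W/m)
        Pp[k+1]   = Pp[k] * np.exp(gp * dz)
        Ps[k+1]   = Ps[k] * np.exp(gs * dz)
        Pa_f[k+1] = Pa_f[k] * np.exp(gs * dz) + src * dz
    Pa_b[n_sec] = 0.0                         # boundary condition at z = L
    for k in range(n_sec - 1, -1, -1):        # backward sweep: backward ASE
        gs  = g(N2[k], sa_s, se_s)
        src = 2 * h * nu_s * dnu * Gam * se_s * N2[k]
        Pa_b[k] = Pa_b[k+1] * np.exp(gs * dz) + src * dz

print("signal gain      : %.1f dB" % (10 * np.log10(Ps[-1] / Ps_in)))
print("forward ASE power: %.1f dBm" % (10 * np.log10(Pa_f[-1] / 1e-3)))
```

A realistic model would use many ASE slices and measured cross-section spectra; the structure of the sweeps, however, stays the same.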


Fig. 1.5.13 Example of EDFA optical gain versus wavelength at different input optical signal power levels.

Fig. 1.5.13 shows an example of EDFA optical gain versus wavelength at different input optical signal power levels. This example was obtained using a 35-m Lucent HE20P EDF and a 75-mW forward pump at 980 nm. The gain spectrum of an EDFA is not uniform across its bandwidth, and its shape cannot be represented by a simple mathematical formula. There is typically a narrow gain peak around 1530 nm and a relatively flat region between 1540 and 1560 nm. The gain and the spectral shape of an EDFA depend on the emission and absorption cross sections of the EDF, the pump power, and the signal power. Fig. 1.5.13 shows that as the input signal optical power is increased from −30 to −10 dBm, the optical gain at 1550 nm decreases from 36 to 27 dB, which is caused by gain saturation. In addition, the spectral shape of the optical gain also changes with the signal optical power: as the signal power increases, the gain tends to tilt toward the longer-wavelength side, and the short-wavelength peak around 1530 nm is suppressed. This phenomenon is commonly referred to as dynamic gain tilt.

1.5.3.3 EDFA design considerations

Forward pumping and backward pumping

EDFA performance, to a large extent, depends on the pumping configuration. Both forward pumping and backward pumping can be used in an EDFA. In general, an EDFA can


Fig. 1.5.14 Configuration of an EDFA with both a forward pump and a backward pump.

have either a forward pump or a backward pump; some EDFAs have both forward and backward pumps. As illustrated in Fig. 1.5.14, forward pumping refers to a pump propagating in the same direction as the signal, whereas in backward pumping the pump travels in the direction opposite to the optical signal. An EDFA with a forward pump alone usually has a slightly lower saturation optical power, and its forward ASE noise level is also relatively low; however, the backward ASE noise level at the input side of the erbium-doped fiber can be higher than in the backward-pumped case. The reason is that the pump power is strong at the fiber input and is significantly reduced at the fiber output, because the pump energy is converted into signal amplification along the fiber. Because of the weak pump at the EDF output, the carrier inversion level there is low, which limits the output signal optical power. On the other hand, since the pump power and thus the carrier inversion are high near the fiber input, the backward-propagating spontaneous emission noise experiences high amplification in that region; therefore, the backward ASE power is strong at the fiber input. With backward pumping, the EDFA has the potential to provide a higher output signal optical power, but the forward ASE noise level can also be high, while the backward ASE noise level at the input side of the erbium-doped fiber is relatively low. In this case, the pump power is strong at the fiber output and is significantly reduced by the time it reaches the input side. Because of the strong pump power at the EDF output and the high carrier inversion there, the output signal optical power can be higher than in the forward-pumping configuration. Fig. 1.5.15A and C show the power levels of the pump, the signal, the forward ASE, and the backward ASE along the erbium-doped fiber for the forward-pumping and backward-pumping configurations. Fig. 1.5.15B and D show the optical spectra at the EDF output, which include both the amplified signal channels and the broadband ASE noise. This figure was obtained with 25 m of EDF, 100 mW of pump power at 980 nm, and four signal channels, each with −20 dBm power at the EDFA input.

EDFAs with AGC and APC

Automatic gain control (AGC) and automatic power control (APC) are important features in practical EDFAs used in optical communication systems and networks.


Fig. 1.5.15 (A) and (C): Optical power levels of pump, signal, forward ASE, and backward ASE along the erbium-doped fiber for forward pumping (A) and backward pumping (C) configurations. (B) and (D): Optical spectra at the EDFA output.


Since the optical gain of an EDFA depends on the signal optical power, system performance will be affected by signal power fluctuations and by the add/drop of optical channels. Therefore, AGC and APC are usually used in in-line optical amplifiers to regulate the optical gain and the output signal power of an EDFA. Because both the optical gain and the output signal power depend on the pump power, automatic control can be accomplished by adjusting the pump power. Fig. 1.5.16 shows an EDFA design with AGC in which two photodetectors, PDI and PDO, detect the input and the output signal optical powers. An electronic circuit compares these two power levels, calculates the optical gain of the EDFA, and generates an error signal to adjust the injection current of the pump laser if the EDFA gain differs from the target value. If the EDFA carries multiple WDM channels, and we assume that the responsivity of each photodetector is wavelength independent within the EDFA bandwidth, this AGC configuration controls the overall gain of the total optical power such that

$$G = \frac{\sum_{j=1}^{N} P_{j,out}}{\sum_{j=1}^{N} P_{j,in}} = \text{constant} \qquad (1.5.75)$$

where N is the number of wavelength channels. In this case, although the gain of the total optical power is fixed by the feedback control, the optical gain of each individual wavelength channel may vary, depending on the gain flatness over the EDFA bandwidth. In long-distance optical transmission, many in-line optical amplifiers may be concatenated along the system. If the optical gain of each EDFA is held at a fixed level, any loss fluctuation along the fiber system will cause the output optical power to vary, and the optical system may become unstable. APC, on the other hand, regulates the total output optical power of an EDFA, as shown in Fig. 1.5.17. In this configuration, only one photodetector is used to detect


Fig. 1.5.16 Configuration of an EDFA with automatic gain control. PDI: input photodiode; PDO: output photodiode.


Fig. 1.5.17 Configuration of an EDFA with automatic power control.

the total output signal optical power. The injection current of the pump laser is controlled to keep this measured power at the desired level, such that

$$\sum_{j=1}^{N} P_{j,out} = \text{constant} \qquad (1.5.76)$$

The advantage of APC is that it isolates optical power fluctuations along the system, because a loss change in one span does not propagate to other spans. However, since the total power is regulated, channel add/drop in WDM systems will affect the optical power of each individual channel. Therefore, in advanced optical communication systems, EDFAs are controlled by intelligent systems taking into account the number of wavelength channels, the optical signal-to-noise ratio, and other important system parameters.

1.5.3.4 EDFA gain flattening

As shown in Fig. 1.5.13, the optical gain of an EDFA changes with wavelength within the gain bandwidth. In a WDM optical system, the wavelength-dependent gain of EDFAs makes the transmission performance different from channel to channel and thus greatly complicates system design and provisioning. In addition, dynamic gain tilt in an EDFA may also affect system design and transmission performance when the signal power changes or channels are added or dropped. In high-end optical amplifiers, the gain variation versus wavelength can be compensated by a passive optical filter, as shown in Fig. 1.5.18A. To flatten the gain spectrum, an optical filter whose transfer function complements the original EDFA gain spectrum is used with the EDFA, so that the overall optical gain does not vary with wavelength; this is illustrated in Fig. 1.5.19. However, passive gain flattening is effective only at one particular optical gain of the amplifier: any change in the EDFA operating condition or the signal optical power would change the gain spectrum and therefore require a different filter. To address the dynamic gain tilt problem, a spatial light modulator (SLM) can be used, as shown in Fig. 1.5.18B. In this configuration, the output of the EDFA is demultiplexed,


Fig. 1.5.18 EDFA gain flattening using (A) a passive filter and (B) a spatial light modulator.


Fig. 1.5.19 Illustration of EDFA gain flattening using an optical filter.

and the different wavelength components are dispersed in the spatial domain. A multipixel, one-dimensional SLM is then used to attenuate each wavelength component individually, equalizing the optical gain across the EDFA bandwidth. With the rapid advance of liquid crystal technology, such SLMs are commercially available. Since each wavelength component is addressed independently and dynamically in this configuration, the effect of dynamic gain tilt can be reduced.


1.5.4 Raman amplification in optical fiber

In Section 1.4, we briefly discussed the mechanism of stimulated Raman scattering (SRS) in an optical fiber, which is caused by the interaction between the optical signal photons and the energy states of the silica molecules. In this scattering process, a photon spends part of its energy to excite a molecule from a lower to a higher vibrational or rotational state; the quantum of this excitation is known as an optical phonon. The energy ΔE_e of the optical phonon determines the frequency difference between the input photons and the scattered photons, commonly referred to as the Stokes wave:

$$f_p - f_s = \frac{\Delta E_e}{h} \qquad (1.5.77)$$

where f_p and f_s are the frequencies of the input and the scattered photons, respectively, and h is Planck's constant. If the material has a regular crystal lattice, the vibrational or rotational energy levels of the molecules are well defined, and the spectrum of the photons scattered through the SRS process is relatively narrow. For amorphous materials such as glasses, the SRS spectrum is quite wide because the vibrational energy levels are not sharply defined. Fig. 1.4.19 shows the normalized SRS spectrum as a function of the frequency shift f = f_p − f_s in a silica fiber. The frequency shift peaks at about 13 THz, and the FWHM width of the spectrum is approximately 7 THz. Although the spectral shape of SRS is not Lorentzian, it provides a mechanism for optical amplification, in which a high-power laser beam is used as the pump to amplify optical signals at frequencies about 13 THz lower than that of the pump. Because the SRS process is effective in both the forward and the backward directions in an optical fiber, a Raman amplifier can be bidirectional (Headley and Agrawal, 2005; Bromage, 2004). When a pump wave with an optical power density I_p(z) propagates along an optical fiber, a Stokes wave at a longer wavelength can be generated and amplified along the fiber in both the forward and backward directions with a power density I_s(f, z). Energy transfer between the pump and the Stokes wave can be described by the following coupled wave equations:

$$\pm\frac{dI_s(f,z)}{dz} = -\alpha_s I_s(f,z) + g_R(f,z)\,I_p(z)\,I_s(f,z) \qquad (1.5.78)$$

$$\pm\frac{dI_p(f,z)}{dz} = -\alpha_p I_p(f,z) - \frac{hf_p}{hf_s}\,g_R(f,z)\,I_p(z)\,I_s(f,z) \qquad (1.5.79)$$

where the ± sign represents forward (+) and backward (−) propagating Stokes waves and pump, g_R(f, z) is the SRS gain coefficient, which generally depends on the frequency difference f between the pump and the Stokes waves, as shown in Fig. 1.4.19, α_s and α_p are the fiber attenuation coefficients at the Stokes and the pump wavelengths, respectively, and hf_s is the Stokes photon energy. We have assumed that the SRS gain coefficient may be nonuniform along the fiber and thus is a function of z. In general,


Eqs. (1.5.78) and (1.5.79) are mutually coupled; however, in most Raman amplifiers the pump power is much higher than that of the Stokes wave, and pump depletion can be neglected for simplicity. In such a case, the second term on the right-hand side of Eq. (1.5.79) can be dropped, so the pump power along the fiber is simply described by I_FP(z) = I_FP,in e^{−α_p z} for forward pumping, with I_FP,in the input pump intensity at z = 0, and by I_BP(z) = I_BP,in e^{−α_p(L−z)} for backward pumping, with I_BP,in the input pump intensity at z = L. Assume that an optical signal at the Stokes frequency, P_s(f, 0), is injected at the fiber input z = 0 and propagates in the forward direction. Neglecting the saturation effect due to spontaneous emission, the signal optical power along the fiber can be found by integrating Eq. (1.5.78):

$$P_s(f,z) = P_s(f,0)\exp\left[\frac{g_R(f)}{\alpha_p A_{eff}}P_{FP,in}\left(1-e^{-\alpha_p z}\right) - \alpha_s z\right] \qquad (1.5.80)$$

for forward pumping, and

$$P_s(f,z) = P_s(f,0)\exp\left[\frac{g_R(f)}{\alpha_p A_{eff}}P_{BP,in}\left(e^{\alpha_p (z-L)}-e^{-\alpha_p L}\right) - \alpha_s z\right] \qquad (1.5.81)$$

for backward pumping, where P_{FP,BP} = I_{FP,BP}A_{eff} and P_s = I_s A_{eff} are the pump and the signal optical powers, respectively, A_{eff} is the effective cross-sectional area of the fiber, and L is the fiber length. We have also assumed that the Raman gain coefficient g_R(f) remains constant along the fiber. Because pump depletion is neglected, the signal optical gain over the entire fiber length, z = L, is the same for both the forward-pump and the backward-pump schemes:

$$G(f,L) = \exp\left[\frac{g_R(f)}{\alpha_p A_{eff}}P_{P,in}\left(1-e^{-\alpha_p L}\right) - \alpha_s L\right] \qquad (1.5.82)$$

where P_{P,in} represents P_{FP,in} or P_{BP,in}, depending on which pump scheme is used. The Raman gain coefficient is a material property that is proportional to the resonant contribution associated with molecular vibrations. In a silica fiber operating in the 1550-nm wavelength window, the peak Raman gain coefficient is on the order of g_R = 2–10 × 10⁻¹⁴ m/W, depending on the fabrication process and doping. Fig. 1.5.20 shows the normalized signal optical power as a function of position along the fiber for different input pump power levels. For forward pumping, as shown in Fig. 1.5.20A, the signal optical power increases near the input side of the fiber, where the pump power is relatively high, and decreases after a certain distance when the pump power becomes weak and the Raman gain diminishes; eventually, the optical signal sees only the fiber attenuation. Increasing the pump power increases the Raman gain and moves the peak of the signal power further into the fiber. For backward pumping, the optical signal injected at z = 0 primarily experiences fiber attenuation at the beginning, and the Raman gain is effective only near the final section of the fiber, as shown in Fig. 1.5.20B, where the pump power is high.
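The short sketch below evaluates Eqs. (1.5.80) through (1.5.82) numerically. The parameter values are taken from those quoted in the caption of Fig. 1.5.20 (with a 0.8-W pump, one of the plotted curves); the unit conversions and the grid size are implementation choices, not from the text.

```python
import numpy as np

# Parameters as quoted for Fig. 1.5.20 (0.8-W pump chosen as an example)
gR      = 5.8e-14                    # Raman gain coefficient (m/W)
Aeff    = 80e-12                     # effective core area (m^2)
L       = 100e3                      # fiber length (m)
alpha_p = 0.3 / 4.343 / 1e3          # pump loss: 0.3 dB/km -> Np/m
alpha_s = 0.25 / 4.343 / 1e3         # signal loss: 0.25 dB/km -> Np/m
P_pump  = 0.8                        # launched pump power (W)

z = np.linspace(0.0, L, 1001)

# Eq. (1.5.80): forward pumping, normalized signal power Ps(z)/Ps(0)
G_fwd = np.exp(gR / (alpha_p * Aeff) * P_pump * (1 - np.exp(-alpha_p * z)) - alpha_s * z)

# Eq. (1.5.81): backward pumping
G_bwd = np.exp(gR / (alpha_p * Aeff) * P_pump
               * (np.exp(alpha_p * (z - L)) - np.exp(-alpha_p * L)) - alpha_s * z)

# Eq. (1.5.82): net gain over the whole span (identical for both pumping schemes)
G_total_dB = 10 * np.log10(np.exp(gR / (alpha_p * Aeff) * P_pump
                                  * (1 - np.exp(-alpha_p * L)) - alpha_s * L))

print("net gain over %d km span: %.1f dB" % (L / 1e3, G_total_dB))
print("end-of-span values agree: %.2f dB (forward) vs %.2f dB (backward)"
      % (10 * np.log10(G_fwd[-1]), 10 * np.log10(G_bwd[-1])))
```

Plotting G_fwd and G_bwd in dB versus z reproduces the qualitative behavior of Fig. 1.5.20: the same end-to-end gain, but very different power excursions along the span.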


Fig. 1.5.20 Normalized signal optical power as a function of position along a 100-km fiber with forward pumping (A) and backward pumping (B) for different pump power levels. Parameters used: Raman gain coefficient g_R = 5.8 × 10⁻¹⁴ m/W, fiber effective core area A_eff = 80 μm², spontaneous scattering factor n_sp = 1.2, and fiber attenuation α_p = 0.3 dB/km and α_s = 0.25 dB/km at the pump and Stokes wavelengths, respectively.

Although the signal optical power emerging from the end of the fiber (at L = 100 km in Fig. 1.5.20A and B) is the same for forward and backward pumping at the same pump power level, the power profiles along the fiber are quite different. Because the optical gain of a fiber Raman amplifier is created in the optical fiber itself, both discrete and distributed Raman amplifiers can be constructed, as shown in Fig. 1.5.21. For a discrete Raman amplifier, both the optical pump and the optical fiber are packaged together as a gain block, similar to an EDFA. The peak Raman gain frequency is approximately 13 THz lower than the optical frequency of the pump laser, with the gain spectral shape determined by the silica material, as shown in Fig. 1.4.19. Thus, by properly selecting the wavelength of the pump laser, Raman gain can be created in wavelength windows that are not covered by SOAs or EDFAs. Distributed Raman amplification, on the other hand, is specific to fiber-optic communication systems, where the transmission fiber itself is used to provide the Raman gain; equivalently, the propagation loss of the optical fiber is compensated, or at least partly compensated, by the Raman gain. A comparison of the noise figures of discrete and distributed Raman amplification will be discussed in Chapter 3.

Fig. 1.5.21 Basic configurations of fiber Raman amplifiers: (A) a localized Raman amplifier, which includes pump lasers and optical fiber inside the same package, and (B) a distributed Raman amplifier using the transmission fiber as the gain medium. WDM couplers combine the optical signal and the optical pumps into the fiber used as the gain medium.

1.6 External electro-optic modulator

In an optical transmitter, encoding an electrical signal into the optical domain can be accomplished either by direct injection-current modulation of a laser diode or by electro-optic


modulation using an external modulator. The speed of direct modulation is usually limited by carrier dynamics. In addition, the frequency chirp induced by direct modulation may significantly degrade the transmission performance of the optical signal in high-speed systems. External modulation, on the other hand, can potentially achieve >100 GHz modulation bandwidth with well-controlled frequency chirp. Therefore, external modulation is widely adopted in long-distance and high-speed optical communication systems. From an application point of view, an electro-optic modulator is typically used after a laser source that provides a constant optical power. The external modulator simply manipulates this optical signal according to the applied voltage, thus converting the electrical signal into the optical domain. Various types of external modulators have been developed to perform intensity modulation and phase modulation, as well as complex optical field modulation.

1.6.1 Basic operation principle of electro-optic modulators

In some crystal materials, the refractive indices are functions of the electrical field applied to them, and the index change is linearly proportional to the applied field magnitude as δn = α_EO E, where E is the electrical field and α_EO is the linear electro-optic coefficient. The most popular electro-optic material used so far for electro-optic modulators is LiNbO₃, which has a relatively high electro-optic coefficient. The electro-optic phase modulator is the simplest external modulator; it is made of a LiNbO₃ optical waveguide and a pair of electrodes, as shown in Fig. 1.6.1. If the length of the electrodes is L, the separation between the two electrodes is d, and the applied voltage is V, the optical phase change introduced by the linear electro-optic effect is


Fig. 1.6.1 Electro-optic phase modulator.

$$\varphi(V) = \frac{2\pi\alpha_{EO}L}{\lambda d}V \qquad (1.6.1)$$

The modulation efficiency, defined as dφ/dV, is directly proportional to the electrode length L and inversely proportional to the electrode separation d. Increasing the length and reducing the separation of the electrodes increases the modulation efficiency but inevitably increases the parasitic capacitance and thus reduces the modulation speed. Another speed limitation comes from the device transit time t_d = L/v_p, where v_p = c/n is the propagation speed in a waveguide of refractive index n. For a 10-mm-long LiNbO₃ waveguide with n = 2.1, the transit time is approximately 70 ps, which sets a speed limit on the modulation. In very high-speed electro-optic modulators, traveling-wave electrodes are therefore used, in which the RF modulating signal propagates in the same direction as the optical signal. This provides phase matching between the RF and the optical signal and eliminates the transit-time-induced speed limitation. Based on the same electro-optic effect, an optical intensity modulator can also be made. A very popular external intensity modulator is made of planar waveguide circuits arranged in a Mach-Zehnder interferometer (MZI) configuration, as shown in Fig. 1.6.2. In this configuration, the input optical signal is equally split into the two interferometer arms and then recombined. Similar to the phase modulator, an electrical field is applied across one of the two MZI arms to introduce an additional phase delay and thus control the differential phase between the two arms. If the phase delays of the two MZI arms are φ₁ and φ₂, respectively, the output optical field is


Fig. 1.6.2 Electro-optic modulator based on MZI configuration.


$$E_0 = \frac{1}{2}\left(e^{j\varphi_1} + e^{j\varphi_2}\right)E_i \qquad (1.6.2)$$

where E_i is the complex field of the input optical signal. Eq. (1.6.2) can be rewritten as

$$E_0 = \cos\left(\frac{\Delta\varphi}{2}\right)e^{j\varphi_c/2}E_i \qquad (1.6.3)$$

where φ_c = φ₁ + φ₂ is the average (common-mode) phase delay and Δφ = φ₁ − φ₂ is the differential phase delay of the two arms. The input-output power relationship of the modulator is then

$$\frac{P_0}{P_i} = \cos^2\left(\frac{\Delta\varphi}{2}\right) \qquad (1.6.4)$$

where P_i = |E_i|² and P_0 = |E_0|² are the input and output powers, respectively. Obviously, in this intensity-modulator transfer function, the differential phase delay between the two MZI arms plays the major role. Again, if we use L and d for the length and the separation of the electrodes and α_EO for the linear electro-optic coefficient, the differential phase shift is

$$\Delta\varphi(V) = \varphi_0 + \frac{2\pi}{\lambda}\frac{\alpha_{EO}V}{d}L \qquad (1.6.5)$$

where φ₀ is the initial differential phase without the applied electrical signal. Its value may vary from device to device and may change with temperature, mainly due to fabrication tolerances. In addition, if the driving electrical voltage has a DC bias and an AC signal, the DC bias can be used to control this initial phase φ₀. A convenient parameter to specify the efficiency of an electro-optic intensity modulator is V_π, defined as the voltage required to change the optical power transfer function from the minimum to the maximum. From Eqs. (1.6.4) and (1.6.5), V_π can be found as

$$V_\pi = \frac{\lambda d}{2\alpha_{EO}L} \qquad (1.6.6)$$

V_π is obviously a device parameter that depends on the device structure as well as the material electro-optic coefficient. With the use of V_π, the power transfer function of the modulator can be simplified as

$$T(V) = \frac{P_0}{P_i} = \cos^2\left(\varphi_0 + \frac{\pi V}{2V_\pi}\right) \qquad (1.6.7)$$

 P0 πV 2 ¼ cos φ0 + T ðV Þ ¼ (1.6.7) Pi 2V π Fig. 1.6.3 illustrates the relationship between the input electrical voltage waveform and the corresponding output optical signal waveform. Vb0 is the DC bias voltage, which


Fig. 1.6.3 Electro-optic modulator transfer function and input (electrical)/output (optical) waveforms.

determines the initial phase φ₀ in Eq. (1.6.7). The DC bias is an important operational parameter because it determines the electrical-to-optical (E/O) conversion efficiency. If the input voltage signal is bipolar, the modulator is usually biased at the quadrature point, as shown in Fig. 1.6.3. This corresponds to an initial phase φ₀ = ±π/4, depending on whether the positive or the negative slope of the transfer function is selected. With this DC bias, the E/O transfer function of the modulator has its best linearity and allows the largest signal voltage swing, which is ±V_π/2. In this case, the output optical power is

$$P_0(t) = \frac{P_i}{2}\left[1 - \sin\left(\frac{\pi V(t)}{V_\pi}\right)\right] \qquad (1.6.8)$$

Although this transfer function is nonlinear, that is not a significant concern for binary digital modulation, where the electrical voltage switches between −V_π/2 and +V_π/2 and the output optical power switches between zero and P_i. For analog modulation, however, the nonlinear transfer function may introduce signal waveform distortion. For example, if the modulating electrical signal is a sinusoid, V(t) = V_m cos(Ωt), and the modulator is biased at the quadrature point with φ₀ = mπ − π/4, where m is an integer, the output optical power can be expanded in a Bessel series:

$$P_0(t) = \frac{P_i}{2} + P_i J_1(x)\cos(\Omega t) - P_i J_3(x)\cos(3\Omega t) + P_i J_5(x)\cos(5\Omega t) + \cdots \qquad (1.6.9)$$

where J_n(x) is the nth-order Bessel function with x = πV_m/V_π. In this equation, the first term is the average output power, the second term is the linear modulation product, and the remaining terms are high-order harmonics caused by the nonlinear transfer function of the modulator. To minimize nonlinear distortion, the amplitude of the modulating voltage signal has to be small, such that V_m ≪ V_π/2, so that the high-order terms in the Bessel series can be neglected.
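As a quick numerical check of this small-signal argument, the sketch below evaluates the first- and third-order Bessel terms of Eq. (1.6.9) for a few drive amplitudes. The switching voltage V_π = 5 V and the drive amplitudes are assumed values, chosen only for illustration.

```python
import numpy as np
from scipy.special import jv   # Bessel functions J_n(x)

Vpi = 5.0                      # assumed switching voltage (V)
Pi  = 1.0                      # input optical power (normalized)

for Vm in (0.1, 0.5, 1.0, 2.5):           # assumed drive amplitudes (V)
    x  = np.pi * Vm / Vpi                 # modulation index x = pi*Vm/Vpi of Eq. (1.6.9)
    p1 = Pi * jv(1, x)                    # fundamental (linear) term
    p3 = Pi * jv(3, x)                    # third-harmonic distortion term
    print("Vm = %4.2f V: fundamental %.3e, 3rd harmonic %.3e, ratio %.1f dB"
          % (Vm, p1, abs(p3), 10 * np.log10(p1 / abs(p3))))
```

The printed ratio drops rapidly as V_m grows, which is the quantitative version of the statement that V_m must be kept well below V_π/2 for low-distortion analog modulation.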


So far, we have only discussed the intensity transfer function of the external modulator, as given by Eq. (1.6.7), whereas the optical phase has not been discussed. In fact, an external optical modulator may also introduce a modulation chirp, similar to that of directly modulated semiconductor lasers but originating from a different mechanism and with a much smaller chirp parameter. To investigate the frequency chirp of an external modulator, the input/output field relation given by Eq. (1.6.3) has to be used. If the optical phase φ₂ is modulated by an external electrical signal from its static value φ₂₀ to φ₂₀ + δφ(t), we can write φ_c = φ₁ + φ₂₀ + δφ(t) and Δφ = φ₁ − φ₂₀ − δφ(t), and Eq. (1.6.3) becomes

$$E_0 = \cos\left(\frac{\varphi_0 - \delta\varphi(t)}{2}\right)e^{j[\varphi_{c0}+\delta\varphi(t)]/2}E_i \qquad (1.6.10)$$

where φ_c0 = φ₁ + φ₂₀ and φ₀ = φ₁ − φ₂₀ are the static values of the common-mode and differential phases. Obviously, the phase of the optical signal is modulated, which is represented by the factor e^{jδφ(t)/2}. As defined in Eq. (1.2.48), the chirp parameter is the ratio between the phase modulation and the intensity modulation. It can be found from Eq. (1.6.10) that

$$\frac{dP_0}{dt} = \frac{P_i}{2}\sin\left(\varphi_0 - \delta\varphi(t)\right)\frac{d\,\delta\varphi(t)}{dt} \qquad (1.6.11)$$

Therefore, the equivalent chirp parameter is

$$\alpha_{lw} = 2P_0\left.\frac{d\,\delta\varphi(t)/dt}{dP_0/dt}\right|_{\delta\varphi(t)=0} = \frac{1+\cos\varphi_0}{\sin\varphi_0} \qquad (1.6.12)$$

Not surprisingly, the chirp parameter is a function only of the DC bias. The reason is that, although the phase modulation efficiency dδφ(t)/dt is independent of the bias, the efficiency of the normalized intensity modulation is a function of the bias. At φ₀ = (2m + 1)π, the chirp parameter is α_lw = 0 because the output optical power is zero, whereas at φ₀ = 2mπ the chirp parameter is α_lw = ∞ because the intensity modulation efficiency is zero. It is also interesting to note that the sign of the chirp parameter can be positive or negative, depending on whether the DC bias sits on the positive or the negative slope of the power transfer function. This adjustable chirp can be exploited in system design, for example, to compensate for chromatic dispersion in the optical fiber. For most applications, however, chirp is not desired, and external modulators with zero chirp have been developed. Zero-chirp modulators can be built on a balanced MZI configuration with antisymmetric driving of the two MZI arms, as shown in Fig. 1.6.4. In this case, the two MZI arms have the same physical length, but the electrical fields are applied in opposite directions across the two arms, creating antisymmetric phase modulation.


Fig. 1.6.4 Zero-chirp electro-optic modulator based on a balanced MZI configuration with antisymmetric driving of the two arms.

Recall that in Eq. (1.6.3), the common-mode phase delay, which determines the chirp, is φc ¼ φ1 + φ2. If both ϕ1 and ϕ2 are modulated by the same amount δϕ(t) but with opposite signs φ1 ¼ φ10 + δφ and φ2 ¼ φ20  δφ, then φc ¼ φ10 + φ20 will not be time-dependent, and therefore, no optical phase modulation is introduced. For the differential phase delay, Δφ ¼ φ10  φ20 + 2δφ(t), which doubles the intensity modulation efficiency.

1.6.2 Frequency doubling and duobinary modulation

Electro-optic modulators based on the MZI configuration have been used widely in high-speed digital optical communications because of their high modulation bandwidth and low frequency chirp. In addition, thanks to their unique bias-dependent modulation characteristics, electro-optic modulators can also be used to perform advanced electro-optic signal processing, such as frequency doubling and single-sideband modulation. These signal-processing capabilities are useful not only for optical communications but also for optical measurements and instrumentation. Frequency doubling is relatively easy to explain, as illustrated in Fig. 1.6.5. In order for the output intensity-modulated optical signal to double the frequency of the driving RF signal, the modulator should be biased at either the minimum or the maximum transmission point. Referring to the power transfer function of Eq. (1.6.7), the bias should make the initial phase φ₀ = mπ ± π/2. Therefore,

$$P_0 = P_i\,\frac{1-\cos\left(\pi V(t)/V_\pi\right)}{2} \qquad (1.6.13)$$

If the input RF signal is a sinusoid, V(t) = V_m cos(Ωt), then the output is

$$P_0 = \frac{P_i}{2}\left[1-\cos\left(\frac{\pi V_m}{V_\pi}\cos(\Omega t)\right)\right] \qquad (1.6.14)$$

This can be expanded into a Bessel series as

$$P_0 = \frac{P_i}{2}\left[1 - J_0\!\left(\frac{\pi V_m}{V_\pi}\right) + 2J_2\!\left(\frac{\pi V_m}{V_\pi}\right)\cos(2\Omega t) - 2J_4\!\left(\frac{\pi V_m}{V_\pi}\right)\cos(4\Omega t) + \cdots\right] \qquad (1.6.15)$$


Fig. 1.6.5 Electro-optic modulator transfer function and input (electrical)/output (optical) waveforms.

In this optical output, the fundamental frequency component is at 2Ω, which is twice the input RF frequency. The absolute values of J₂(x) and J₄(x), shown in the inset of Fig. 1.6.5 for convenience, indicate the relative amplitudes of the second- and fourth-order harmonics. When the modulation amplitude is small enough, πV_m/V_π ≪ 1, Bessel terms higher than the second order can be neglected, leaving only a DC term and a frequency-doubled component. With overmodulation, this technique can also be used to generate a quadrupled frequency by increasing the amplitude of the modulating RF signal such that πV_m/V_π ≈ 5; in practice, however, a large-amplitude RF signal at high frequency is usually difficult to generate. It is important to notice the difference between the power transfer function and the optical field transfer function of an external modulator, as shown in Fig. 1.6.6. Because the power transfer function is the square of the field transfer function, its periodicity is doubled by this squaring operation. Consequently, if the modulator is biased at the minimum power transmission point, the field transfer function has its best linearity there. Although the detected optical intensity waveform is frequency-doubled compared to the driving RF signal, the waveform of the output optical field is quasi-linearly related to the input RF waveform. In this case, in the optical spectrum of the signal, the separation between each modulation sideband and the optical carrier equals the RF modulating frequency Ω rather than 2Ω. However, if this optical signal is detected by a photodiode, only the optical intensity is received and the phase information is removed; the received RF spectrum in the electrical domain therefore contains a discrete component at 2Ω, twice the RF modulating frequency. This unique property of the MZI-based electro-optic modulator has been used to create the duobinary optical modulation format in high-speed optical transmission systems, which requires only half the optical bandwidth of direct amplitude modulation at the same data rate (Penninckx et al., 1997).
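The short sketch below evaluates Eq. (1.6.13) in the time domain for a sinusoidal drive and confirms that the detected intensity has its strongest tone at twice the RF drive frequency. The switching voltage, drive amplitude, RF frequency, and sampling grid are all assumed values, used only for illustration.

```python
import numpy as np

Vpi, Vm, f_rf = 5.0, 1.0, 10e9        # assumed: Vpi = 5 V, 1-V drive, 10-GHz RF tone
fs, N = 32 * f_rf, 4096               # sampling rate and FFT length (chosen so f_rf is on-grid)
t = np.arange(N) / fs

V  = Vm * np.cos(2 * np.pi * f_rf * t)
P0 = 0.5 * (1 - np.cos(np.pi * V / Vpi))    # Eq. (1.6.13): bias at the transmission null

spec  = np.abs(np.fft.rfft(P0)) / N
freqs = np.fft.rfftfreq(N, 1 / fs)
k = 1 + np.argmax(spec[1:])                 # strongest non-DC spectral line of the intensity
print("strongest intensity tone at %.1f GHz (drive at %.1f GHz)"
      % (freqs[k] / 1e9, f_rf / 1e9))
```

Increasing Vm toward πV_m/V_π ≈ 5 in this sketch makes the 4Ω component dominant, which is the frequency-quadrupling regime mentioned above.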


Fig. 1.6.6 Electro-optic modulator power transfer function and field transfer function.

1.6.3 Optical single-side modulation

In general, under sinusoidal modulation an electro-optic modulator generates two sidebands, one on each side of the optical carrier, which is usually referred to as double-sideband modulation. Since these two sidebands carry redundant information, removing one of them does not affect signal transmission. Single-sideband optical signals occupy a narrower spectral bandwidth and thus provide better bandwidth efficiency in multiwavelength WDM systems; optical signals with narrower spectral bandwidth also suffer less from chromatic dispersion. These are the major reasons that optical single-sideband (OSSB) modulation is attractive (Smith et al., 1997; Hui et al., 2002). A straightforward way to generate OSSB is to use a notch optical filter, which directly removes one of the two optical sidebands, but this technique requires stringent wavelength synchronization between the notch filter and the transmitter. OSSB can also be generated using an electro-optic modulator with a balanced MZI structure and two electrodes, as shown in Fig. 1.6.7. If we assume that the frequency of the optical carrier is ω₀ and the RF modulating signal is a sinusoid at frequency Ω, then, based on Eq. (1.6.2), the output optical field is

$$E_0(t) = \frac{E_i}{2}\left[\exp\left(j\omega_0 t + j\varphi_1(t)\right) + \exp\left(j\omega_0 t + j\varphi_2(t)\right)\right] \qquad (1.6.16)$$


Fig. 1.6.7 Optical single-sideband modulation using a dual-electrode MZI electro-optic modulator.

which includes both the positive and the negative optical sidebands. φ₁(t) and φ₂(t) are the phase delays of the two MZI arms. Now suppose that these two phase delays can be modulated independently by voltage signals v₁(t) and v₂(t), respectively. These two RF signals have the same amplitude and the same frequency, but there is a relative RF phase difference θ between them, so that v₁(t) = V_b + V_m cos(Ωt + θ) and v₂(t) = V_m cos(Ωt), where V_b is a DC bias and V_m is the amplitude. Therefore, the two phase terms in Eq. (1.6.16) are

$$\varphi_1(t) = \varphi_{01} + \beta\cos(\Omega t + \theta) \quad \text{and} \quad \varphi_2(t) = \beta\cos(\Omega t)$$

where β = πV_m/V_π is the modulation index and φ₀₁ = πV_b/V_π is the relative phase due to the DC bias. The output optical field can then be expressed as

$$E_0(t) = \frac{E_i}{2}e^{j\omega_0 t}\left\{\exp\left[j\varphi_{01} + j\beta\cos(\Omega t+\theta)\right] + \exp\left[j\beta\cos(\Omega t)\right]\right\} \qquad (1.6.17)$$

If the modulator is biased at the quadrature point (for example, φ₀₁ = −π/2 so that e^{jφ₀₁} = −j), Eq. (1.6.17) becomes

$$E_0(t) = \frac{E_i}{2}e^{j\omega_0 t}\left\{\exp\left[j\beta\cos(\Omega t)\right] - j\exp\left[j\beta\cos(\Omega t+\theta)\right]\right\} \qquad (1.6.18)$$

Now we can see that the relative RF phase shift between the signals applied to the two electrodes has a dramatic effect on the spectral properties of the modulated optical signal. In the first case, if the relative RF phase shift between the two modulating signals v₁(t) and v₂(t) is 180°, that is, θ = (2m − 1)π with m an integer, Eq. (1.6.18) becomes

$$E_0(t) = \frac{E_i}{2}e^{j\omega_0 t}\left[\exp\left(j\beta\cos\Omega t\right) - j\exp\left(-j\beta\cos\Omega t\right)\right] \qquad (1.6.19)$$


Mathematically, the trigonometric exponentials can be expanded into Bessel series using the Jacobi-Anger identities,

$$\exp\left(j\beta\cos\Omega t\right) = \sum_{k=-\infty}^{\infty} j^k J_k(\beta)e^{jk\Omega t} \qquad (1.6.20a)$$

$$\exp\left(j\beta\sin\Omega t\right) = \sum_{k=-\infty}^{\infty} J_k(\beta)e^{jk\Omega t} \qquad (1.6.20b)$$

Considering that J₋ₖ(β) = (−1)ᵏJₖ(β), Eq. (1.6.19) can be written as

$$E_0(t) = \frac{E_i e^{j\omega_0 t}}{2}\sum_{k=-\infty}^{\infty} j^k J_k(\beta)\left(e^{jk\Omega t} - je^{-jk\Omega t}\right) \qquad (1.6.21)$$

For k = 0, the carrier component is 0.5J₀(β)E_i e^{jω₀t}(1 − j). For k = ±1, the components at the two first-order sidebands are 0.5(1 + j)J₁(β)E_i e^{j(ω₀±Ω)t}, and the second harmonics on the two sides, corresponding to k = ±2, are 0.5(−1 + j)J₂(β)E_i e^{j(ω₀±2Ω)t}. This is a typical double-sideband optical spectrum, in which the major frequency components are ω₀, ω₀ ± Ω, ω₀ ± 2Ω, and so on. Fig. 1.6.8A shows an example of the power spectral density of a double-sideband modulated optical signal.


Fig. 1.6.8 Example of calculated power spectral densities of the modulated optical signal: (A) double-sideband modulation, (B) single-sideband modulation with the upper sideband suppressed, and (C) single-sideband modulation with the lower sideband suppressed. ω₀ is the optical carrier frequency, Ω is the modulation frequency, and the modulation index is β = 0.1.


The amplitude of the desired modulation sidebands at ω₀ ± Ω is proportional to [J₁(β)]², while the amplitude of the second-order harmonics at ω₀ ± 2Ω is proportional to [J₂(β)]². In this example, a modulation index of β = 0.1 is used, so that J₁(β)/J₂(β) ≈ 40 and the second-order harmonic power is about 32 dB lower than the fundamental-frequency component in the optical spectrum. The power transfer function of Eq. (1.6.8) can be obtained by squaring both sides of Eq. (1.6.19), except that the modulation index is doubled because of the push-pull driving of the two electrodes. While the output of the power (or envelope) transfer function can be measured by placing a photodiode after the modulator, the output of the field transfer function is in the optical domain, representing the signal optical spectrum. The quadrature point of the power transfer function is different from the quadrature point of the optical field transfer function; that is why the second-order harmonic does not appear in Eq. (1.6.9) for the envelope transfer function but is present in the optical spectrum of Eq. (1.6.21). In the second case, if the relative RF phase shift between the two RF modulating signals is 90°, for example θ = 2mπ − π/2, Eq. (1.6.18) becomes

$$E_0(t) = \frac{E_i}{2}e^{j\omega_0 t}\left\{\exp\left[j\beta\cos(\Omega t)\right] - j\exp\left[j\beta\sin(\Omega t)\right]\right\} \qquad (1.6.22)$$

Again, based on the Jacobi-Anger identities of Eq. (1.6.20), Eq. (1.6.22) can be written as

$$E_0(t) = \frac{E_i e^{j\omega_0 t}}{2}\sum_{k=-\infty}^{\infty} J_k(\beta)\left(j^k - j\right)e^{jk\Omega t} \qquad (1.6.23)$$

In this case, for k = 0 the carrier component is still 0.5J₀(β)E_i e^{jω₀t}(1 − j), but for k = ±1 the components of the positive and negative sidebands are zero and jJ₁(β)E_i e^{j(ω₀−Ω)t}, respectively. This is an optical single-sideband (OSSB) spectrum with the positive sideband suppressed, and the energy in the negative sideband is doubled compared to the optical double-sideband spectrum. One can also choose to suppress the negative optical sideband while keeping the positive one, either by changing the DC bias to φ₀₁ = π/2, so that e^{jφ₀₁} = j in Eq. (1.6.17), or by changing the RF phase shift to θ = 2mπ + π/2 in Eq. (1.6.18). Fig. 1.6.8B and C show single-sideband modulated optical spectra with the upper and the lower sideband suppressed, respectively. In single-sideband modulation, the energy of the suppressed sideband is transferred to the opposite sideband; as a result, the power of the remaining sideband is increased by 3 dB compared to double-sideband modulation.

1.6.4 I/Q modulation of complex optical field

Although intensity modulation is widely used in optical communication systems, complex optical field modulation has been shown to be much more efficient in terms of


transmission capacity, as both the intensity and the optical phase can be used to carry information. An electro-optic modulator based on a single Mach-Zehnder interferometer (MZI) allows both intensity modulation and phase modulation with proper selection of the bias voltage and of the polarity and phase of the driving RF signals on the two arms. However, phase modulation and intensity modulation cannot be addressed independently in a single Mach-Zehnder electro-optic modulator, which restricts the degrees of freedom in the modulation of the complex optical field. In this section, we discuss an electro-optic in-phase/quadrature (I/Q) modulator, which is capable of complex optical field modulation. The configuration of an electro-optic I/Q modulator is shown in Fig. 1.6.9A. It is also based on the MZI structure, but an independent MZI (MZIi and MZIq) is built into each arm of an outer MZI (MZIc). In addition to the driving electric signal vI,Q, each inner modulator (MZIi and MZIq) also has a DC bias vBI,BQ to control its differential phase. A separate bias voltage vP is used to adjust the relative optical phase between the two arms of MZIc (Tsukamoto et al., 2006; Cho et al., 2006). Based on Eq. (1.6.3), both MZIi and MZIq have the field transfer function

$$E_{0,i,q} = E_{i,i,q}\cos\left(\frac{\pi v_{I,Q}}{2V_{\pi,RF}} + \frac{\pi v_{BI,BQ}}{2V_{\pi,DC}}\right)e^{j\varphi_{ci,q}/2} \qquad (1.6.24)$$

where E_{0,i,q} and E_{i,i,q} are the output and input fields of each inner MZI, V_{π,RF} and V_{π,DC} are the V_π values corresponding to the high-frequency signal and the DC bias electrodes, as they may have different modulation efficiencies, and φ_{ci,q} are the common-mode phase shifts of MZIi and MZIq. Considering the additional differential phase introduced by vP in MZIc, the overall optical field transfer function of the I/Q modulator is

$$E_0 = \frac{E_i}{2}\left[\cos\left(\frac{\varphi_I+\varphi_{BI}}{2}\right)e^{-j\varphi_{IQ}/2} + \cos\left(\frac{\varphi_Q+\varphi_{BQ}}{2}\right)e^{j\varphi_{IQ}/2}\right] \qquad (1.6.25)$$

where E_i is the input optical field, φ_{I,Q} = πv_{I,Q}/V_{π,RF} and φ_{BI,BQ} = πv_{BI,BQ}/V_{π,DC} are the signal and bias phases of MZIi and MZIq. The differential phase of MZIc is

  φQ + φBQ jφIQ =2 φI + φBI  jφIQ =2 Ei cos e (1.6.25) + cos E0 ¼ e 2 2 2 where Ei is the input optical field, φI, Q ¼ πvI, Q/vπ, RF, and φBI, BQ ¼ πvBI, BQ/vπ, DC are signal and biasing phases of MZIi and MZIq. The differential phase of MZIc is vI

Im(E0)

vBI 1-j

Ei

E0

MZIi MZIq

vQ

vBQ (a)

1+j Re(E0)

MZIc vp

Phase shift

-1+j

-1-j (b)

Fig. 1.6.9 (A) Configuration of an electro-optic I-Q modulator based on the combination of three MZIs, (B) normalized constellation diagram of complex modulated QPSK optical signal.

Fundamentals of optical devices P φIQ ¼ (φcq + φp  φci), which is the combination of the φci, φcq, and φp ¼ VπV introduced π,DC by the bias control voltage VP. To simplify the analysis, we assume that both MZIi and MZIq are chirp-free, which can be achieved with antisymmetric electric driving of the two electrodes as shown in Fig. 1.6.4 so that common-mode phase modulation is zero for both MZIi and MZIq (φci ¼ φcq ¼ 0). In order to be able to linearly translate the input complex electrical signal to the complex optical field at the modulator output, the optimum bias condition is to set both MZIi and MZIq at the minimum power transmission point (φBI ¼ φBQ ¼  π), and the relative optical phase shift between the two arms of MZIc at φIQ ¼ π/2. Then, the overall optical field transfer function of MZIc is

    v Q ðt Þ vI ðt Þ Eo 1 ¼ sin π + j sin π (1.6.26) Ei 2 Vπ Vπ

Here, a constant common phase delay φ_IQ/2 has been neglected, as it does not change the modulation characteristics. The two free parameters v_I(t) and v_Q(t) in Eq. (1.6.26) allow the in-phase and the quadrature components of the optical field to be modulated independently, representing the real and the imaginary parts of a complex modulating signal. This capability of complex optical field modulation enables various applications in optical communication systems and coherent optical sensors. As an example, for QPSK modulation, as illustrated in Fig. 1.6.9B, the complex optical field E_0 takes one of four possible positions in the constellation diagram: E_i(1 + j)/2, E_i(1 − j)/2, E_i(−1 + j)/2, and E_i(−1 − j)/2. These are obtained by setting the driving voltages (v_I, v_Q) to the four combinations (V_π/2, V_π/2), (V_π/2, −V_π/2), (−V_π/2, V_π/2), and (−V_π/2, −V_π/2), respectively. For analog modulation, a small-signal approach can be used when v_{I,Q}(t) ≪ V_π, so that Eq. (1.6.26) can be linearized as

$$E_0 \approx E_i\frac{\pi}{2V_\pi}\left[v_I(t) + jv_Q(t)\right] \qquad (1.6.27)$$

In conventional double-sideband modulation with a real-valued RF signal, the upper and the lower modulation sidebands form a complex-conjugate pair and carry redundant information. An I/Q modulator allows single-sideband modulation, as well as double-sideband modulation in which the two sidebands carry independent information channels. To understand this application, assume D₁(t) and D₂(t) are two data sequences carrying independent information. We can construct two voltage waveforms as

$$v_I(t) = \left[D_1(t) + D_2(t)\right]\cos(\Delta\omega t) \qquad (1.6.28a)$$

$$v_Q(t) = \left[D_1(t) - D_2(t)\right]\sin(\Delta\omega t) \qquad (1.6.28b)$$


where Δω is an RF carrier frequency. Using the linearized transfer function of the I/Q modulator, the output optical field is

$$E_0(t) \approx \sqrt{P_i}\,e^{j\omega_0 t}\frac{\pi}{2V_\pi}\left[(D_1+D_2)\cos(\Delta\omega t) + j(D_1-D_2)\sin(\Delta\omega t)\right] = \sqrt{P_i}\,\frac{\pi}{2V_\pi}\left\{D_1\exp\left[j(\omega_0+\Delta\omega)t\right] + D_2\exp\left[j(\omega_0-\Delta\omega)t\right]\right\} \qquad (1.6.29)$$

where E_i = √P_i e^{jω₀t} is the input optical field, P_i is the input optical power, which is constant, and ω₀ is the frequency of the optical carrier. The spectrum of the optical field E₀(t) in Eq. (1.6.29) has two sidebands, one on each side of the optical carrier ω₀, but these two sidebands are not redundant: the upper and the lower sidebands carry the data channels D₁(t) and D₂(t) independently. Fig. 1.6.10 shows the spectrum of such a complex modulated optical field, with the upper and the lower modulation sidebands carrying independent information channels. Eq. (1.6.29) also indicates that in this complex modulated optical field the optical carrier component is suppressed, because both MZIi and MZIq are biased at the minimum power transmission point. This improves the power efficiency of an optical communication system, as the optical carrier would otherwise only be a CW component.
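The sketch below evaluates the I/Q transfer function of Eq. (1.6.26) for the four QPSK drive combinations listed above. The switching voltage and the bit-to-symbol mapping are assumptions made for illustration only.

```python
import numpy as np

Vpi = 5.0                                   # assumed switching voltage (V)

def iq_field(vI, vQ, Ei=1.0):
    """Ideal I/Q modulator transfer function, Eq. (1.6.26)."""
    return 0.5 * Ei * (np.sin(np.pi * vI / Vpi) + 1j * np.sin(np.pi * vQ / Vpi))

# QPSK: drive each arm to +/- Vpi/2, giving the four constellation points of Fig. 1.6.9B.
# The bit labels are an assumed Gray-like mapping, not taken from the text.
for bits, (vI, vQ) in {"00": ( Vpi/2,  Vpi/2), "01": ( Vpi/2, -Vpi/2),
                       "10": (-Vpi/2,  Vpi/2), "11": (-Vpi/2, -Vpi/2)}.items():
    E = iq_field(vI, vQ)
    print("bits %s -> E0/Ei = %+.2f%+.2fj" % (bits, E.real, E.imag))
```

Replacing the four discrete drive levels with the waveforms of Eqs. (1.6.28a) and (1.6.28b), and keeping |v_{I,Q}| ≪ V_π, reproduces the two independent sidebands of Eq. (1.6.29).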

1.6.5 Bias point stabilization of an I/Q modulator

For high-quality modulation with an I/Q modulator, maintaining the optimum bias condition is critically important. As mentioned above, the optimum bias condition requires both MZIi and MZIq to be at the minimum power transmission point (φ_BI = φ_BQ = −π), while the relative optical phase shift between the two arms of MZIc has to be at the quadrature point φ_IQ = π/2. Because of unavoidable manufacturing inaccuracies and thermal effects due to operating temperature changes, the bias voltages v_BI, v_BQ,


Fig. 1.6.10 Illustration of a modulated optical spectrum with carrier suppression and independent information channels carried by the upper and the lower modulation sidebands.


and V_P need to be adjusted adaptively to maintain the optimum bias condition, normally through feedback control. In the feedback control configuration shown in Fig. 1.6.11 (Cho and Nazarathy, 2010), a small portion of the signal optical power at the modulator output is tapped off and detected by a photodiode. The photocurrent is digitized and processed digitally to control the three DC bias voltages of the modulator after digital-to-analog conversion. Based on Eq. (1.6.25), the photocurrent tapped from the modulator output is

$$I_0 = \eta|E_i|^2\left[2 + \cos(\varphi_I+\varphi_{BI}) + \cos(\varphi_Q+\varphi_{BQ}) + 4\cos\left(\frac{\varphi_I+\varphi_{BI}}{2}\right)\cos\left(\frac{\varphi_Q+\varphi_{BQ}}{2}\right)\cos\varphi_{IQ}\right] \qquad (1.6.30)$$

where η is a proportionality parameter accounting for the combined effect of optical coupler loss, photodiode responsivity, and electrical preamplifier gain. If the bias voltages are slightly away from their optimum values, with errors φ_{er,I} = φ_{BI} + π, φ_{er,Q} = φ_{BQ} + π, and φ_{er,IQ} = φ_{IQ} − π/2, the photocurrent is

$$I_0 = \eta|E_i|^2\left[2 - \cos(\varphi_I+\varphi_{er,I}) - \cos(\varphi_Q+\varphi_{er,Q}) - 4\sin\left(\frac{\varphi_I+\varphi_{er,I}}{2}\right)\sin\left(\frac{\varphi_Q+\varphi_{er,Q}}{2}\right)\sin\varphi_{er,IQ}\right] \qquad (1.6.31)$$


Fig. 1.6.11 Stabilizing the bias voltages of an I/Q modulator through a feedback control circuit. Inset: example of arbitrary waveforms of vI and vQ with white Gaussian statistics.


Practically, feedback stabilization may depend on the specific signal waveforms v_I(t) and v_Q(t). Here, we use arbitrary waveforms with Gaussian statistics, shown in the inset of Fig. 1.6.11, to illustrate the stabilization process. In fact, for a multicarrier optical communication system such as orthogonal frequency-division multiplexing (OFDM), or for a system with a high-level modulation format such as M-ary quadrature amplitude modulation (M-QAM), the waveforms resemble band-limited Gaussian noise without noticeable discrete levels. Fig. 1.6.12A and B show the average value ⟨I₀⟩ and the variance σ₀² of the photocurrent I₀ when the bias errors φ_{er,I} and φ_{er,Q} vary between ±π, with φ_{er,IQ} = 0. The minimum values of both ⟨I₀⟩ and σ₀² are obtained at φ_{er,I} = φ_{er,Q} = 0, which sets the target for the feedback control of v_BI and v_BQ. Fig. 1.6.12C and D show ⟨I₀⟩ and σ₀² as functions of φ_{er,I} only, with φ_{er,Q} = 0 and φ_{er,IQ} = 0. In comparison to ⟨I₀⟩, σ₀² is more sensitive to changes of φ_{er,I} in the vicinity of φ_{er,I} ≈ 0, but it has two minima within a 2π period, whereas ⟨I₀⟩ has a single global minimum at φ_{er,I} ≈ 0.


Fig. 1.6.12 (A) and (B): Average value ⟨I₀⟩ (A) and variance σ₀² (B) of the photocurrent I₀ as functions of the bias errors φ_{er,I} and φ_{er,Q}, with φ_{er,IQ} = 0. (C) and (D): Average value ⟨I₀⟩ (C) and variance σ₀² (D) of the photocurrent I₀ as functions of φ_{er,I}, with φ_{er,IQ} = 0 and φ_{er,Q} = 0.


Similarly, Fig. 1.6.13A and B show ⟨I₀⟩ and σ₀² as functions of φ_{er,I} and φ_{er,Q}, but with φ_{er,IQ} = 0.1π. Although the bias error φ_{er,IQ} modifies the shapes of Fig. 1.6.13A and B compared with Fig. 1.6.12A and B, the minima of both ⟨I₀⟩ and σ₀² remain at φ_{er,I} = φ_{er,Q} = 0. In order to find the optimum bias of MZIc and minimize the bias error φ_{er,IQ}, a normalized parameter σ_n² = σ₀²/⟨I₀⟩ can be used. Fig. 1.6.14 shows σ_n² as a function of φ_{er,IQ}


Fig. 1.6.13 Average value ⟨I₀⟩ (A) and variance σ₀² (B) of the photocurrent I₀ as functions of the bias errors φ_{er,I} and φ_{er,Q}, with φ_{er,IQ} = 0.1π.


Fig. 1.6.14 Normalized variance σ_n² of the photocurrent I₀ as a function of the bias error φ_{er,IQ} of MZIc, with φ_{er,I} = φ_{er,Q} = 0.


with φ_{er,I} = φ_{er,Q} = 0, which indicates that the optimum bias voltage V_P of MZIc can be found by minimizing σ_n². The procedure for I/Q modulator bias control based on the measured photocurrent I₀ can be summarized in the following steps (Cho and Nazarathy, 2010): (1) adjust v_BI and v_BQ to minimize ⟨I₀⟩; (2) adjust V_P to minimize σ_n²; and (3) if the minimum of ⟨I₀⟩ was obtained within |φ_{er,I}| < π/2 and |φ_{er,Q}| < π/2, further optimize v_BI and v_BQ to minimize σ₀², since σ₀² is more sensitive to φ_{er,I} and φ_{er,Q} than ⟨I₀⟩ and has only one minimum within −π/2 < φ_{er,I}, φ_{er,Q} < π/2. The slopes d⟨I₀⟩/dv_BI, d⟨I₀⟩/dv_BQ, dσ₀²/dv_BI, dσ₀²/dv_BQ, dσ_n²/dv_BI, and dσ_n²/dv_BQ can be measured by applying small perturbations to the control voltages, making sure that the operating point stays within the regions −π/2 < φ_{er,I}, φ_{er,Q} < π/2 and −π/2 < φ_{er,IQ} < π/2. In general, for a commercial coherent optical transmitter, the feedback control system serving the I/Q modulator usually has a bandwidth much lower than that of the transmitter itself. This means that only the low-frequency components of the signal are used to create I₀(t) for feedback control of the modulator bias. Caution therefore has to be taken if the modulating signal has no low-frequency content, for example, in a digital subcarrier system with only high-frequency subcarriers. In such a case, the feedback control of the I/Q modulator has to be modified, or a low-frequency signal has to be added just to support the control loop.
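To make the role of the monitor signals concrete, the sketch below evaluates Eq. (1.6.31) with Gaussian-distributed drive phases (emulating the OFDM/QAM-like waveforms of Fig. 1.6.11) and sweeps the I-arm bias error. The rms phase swing, the unity η, and the sweep values are assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
# Gaussian drive phases standing in for band-limited, noise-like signal waveforms (assumed rms)
phi_I = rng.normal(0.0, 0.4 * np.pi, 200_000)
phi_Q = rng.normal(0.0, 0.4 * np.pi, 200_000)

def photocurrent(er_I, er_Q, er_IQ, eta=1.0):
    """Monitor photocurrent of Eq. (1.6.31) for given bias errors (radians)."""
    return eta * (2 - np.cos(phi_I + er_I) - np.cos(phi_Q + er_Q)
                  - 4 * np.sin((phi_I + er_I) / 2) * np.sin((phi_Q + er_Q) / 2)
                    * np.sin(er_IQ))

for er in (-0.3, -0.1, 0.0, 0.1, 0.3):        # sweep the I-arm bias error (in units of pi)
    I0 = photocurrent(er * np.pi, 0.0, 0.0)
    print("phi_er_I = %+0.1f*pi : <I0> = %.4f, var(I0) = %.4f"
          % (er, I0.mean(), I0.var()))
```

The printed averages are smallest at zero bias error, which is the observable that step (1) of the control procedure exploits; repeating the sweep for er_IQ with the normalized variance reproduces the behavior of Fig. 1.6.14.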

1.6.6 Optical modulators using electro-absorption effect

As discussed in the last section, electro-optic modulators made of LiNbO₃ in an MZI configuration have good performance in terms of high modulation speed and low frequency chirp. However, these modulators cannot be integrated with semiconductor lasers because of the difference in material systems. Consequently, LiNbO₃-based external modulators are always standalone devices with input and output fiber pigtails, and their insertion loss is typically on the order of 3 to 5 dB. Difficulties in high-speed packaging and input/output optical coupling make LiNbO₃ modulators expensive; therefore, they are usually used in high-speed, long-distance optical systems. In addition, LiNbO₃-based electro-optic modulators are generally sensitive to the state of polarization of the input optical signal, so a polarization-maintaining fiber has to be used to connect the modulator to the source laser. This prevents this type of modulator from performing remodulation at locations far away from the laser source. Electro-absorption (EA) modulators can be made from the same types of semiconductor materials, such as group III-V compounds, as are used for semiconductor lasers. By appropriate doping, the bandgap of the material can be engineered such that it does not absorb the signal photons, so the material is transparent to the input optical signal. However, when an external reverse voltage is applied across the pn junction, the bandgap is shifted to the level of the signal photon energy, mainly through the well-known Stark effect.


Then, the material starts to absorb the signal photons and converts them into photocurrent, similarly to what happens in a photodiode. Therefore, the optical transmission coefficient through this material is a function of the applied voltage. Because electro-absorption modulators are made of semiconductor materials, they can usually be monolithically integrated with semiconductor lasers, as illustrated in Fig. 1.6.15. In this case, the DFB laser section is driven by a constant injection current IC, thus producing a constant optical power. The EA section is separately controlled by a reverse-biased voltage that determines the strength of absorption. The optical field transfer function of an EA modulator can be expressed as

E0(t) = Ei exp{−(Δα[V(t)]/2)L − jΔβ[V(t)]L}   (1.6.32)

where Δα[V(t)] is the bias voltage-dependent power attenuation coefficient, which originates from the electro-absorption effect in reverse-biased semiconductor pn junctions, Δβ[V(t)] is the voltage-dependent phase coefficient introduced by the electro-optic effect, and L is the length of the EA modulator. In general, both Δα and Δβ are strongly nonlinear functions of the applied voltage V, as shown in Fig. 1.6.16, and are determined by the material bandgap structure as well as the specific device configuration.
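A short numerical illustration of Eq. (1.6.32) is sketched below in Python. The tabulated Δα(V) and Δβ(V) values, the interpolation, and the device length are assumed placeholders standing in for measured characteristics such as those in Fig. 1.6.16; they are not data taken from the figure.

```python
import numpy as np

# Assumed (illustrative) voltage-dependent absorption [1/mm] and phase [rad/mm] coefficients
V_tab     = np.array([-5.0, -4.0, -3.0, -2.0, -1.0, 0.0])   # bias voltage (V)
alpha_tab = np.array([ 2.5,  2.0,  1.4,  0.8,  0.3, 0.05])  # delta-alpha per unit length
beta_tab  = np.array([ 0.9,  0.8,  0.6,  0.4,  0.2, 0.05])  # delta-beta per unit length

L = 0.2  # EA section length (mm), arbitrary example value

def ea_field_transfer(v):
    # Field transfer factor exp(-delta_alpha*L/2 - j*delta_beta*L) of Eq. (1.6.32) at bias v
    d_alpha = np.interp(v, V_tab, alpha_tab)
    d_beta = np.interp(v, V_tab, beta_tab)
    return np.exp(-0.5 * d_alpha * L - 1j * d_beta * L)

v = -2.0
t = ea_field_transfer(v)
print(f"power transmission = {abs(t)**2:.3f}, phase = {np.angle(t):.3f} rad at V = {v} V")
```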

Fig. 1.6.15 EA modulator integrated with a DFB laser source.

Fig. 1.6.16 Phase and absorption coefficients of an EA modulator (Cartledge, 1998).


From an application point of view, EA modulators can be monolithically integrated with semiconductor lasers on the same chip; therefore, they are relatively economical and more compact. On the other hand, because both the absorption and the chirp parameters of an EA modulator are nonlinear functions of the bias voltage, the overall modulation performance is generally not as good as that of a LiNbO3-based external optical modulator. In addition, since there is generally no optical isolator between the semiconductor laser source and the integrated EA modulator, optical reflection from the EA modulator often affects the wavelength and the phase of the laser itself (Hashimoto et al., 1992). From a measurement point of view, the characterization of a DFB laser-EA modulator unit is relatively more complicated due to the nonlinear absorption and phase transfer functions of the modulator, as well as the interaction between the laser and the modulator.

References
Agrawal, G.P., 2001. Nonlinear Fiber Optics, third ed. Academic Press.
Agrawal, G., 2012. Long-Wavelength Semiconductor Lasers. Springer, ISBN: 9401169969.
Berger, J.D., Zhang, Y., Grade, J.D., Lee, H., Hrinya, S., Jerman, H., 2001. Widely tunable external cavity diode laser based on a MEMS electrostatic rotary actuator. In: Optical Fiber Communication Conference, Paper Td2-1, March 17–22.
Boyd, R., 1992. Nonlinear Optics. Academic Press.
Bromage, J., 2004. Raman amplification for fiber communications systems. J. Lightwave Technol. 22 (1), 79.
Cartledge, J.C., 1998. Comparison of effective parameters for semiconductor Mach-Zehnder optical modulators. J. Lightwave Technol. 16, 372–379.
Cho, P.S., Khurgin, J.B., Shpantzer, I., 2006. Closed-loop bias control of optical quadrature modulator. IEEE Photon. Technol. Lett. 18 (21), 2209–2211.
Cho, P.S., Nazarathy, M., 2010. Bias control for optical OFDM transmitters. IEEE Photon. Technol. Lett. 22 (14), 1030–1032.
Connelly, M.J., 2002. Semiconductor Optical Amplifiers. Kluwer Academic Publishers.
Desurvire, E., 1994. Erbium-Doped Fiber Amplifiers: Principles and Applications. John Wiley.
Donnelly, J.P., Duerr, E.K., McIntosh, K.A., et al., 2006. Design considerations for 1.06-μm InGaAsP-InP Geiger-mode avalanche photodiodes. IEEE J. Quantum Electron. 42, 797–809.
Gloge, D., 1971. Weakly guiding fibers. Appl. Opt. 10 (10), 2252–2258.
Hashimoto, J.-I., Nakano, Y., Tada, K., 1992. Influence of facet reflection on the performance of a DFB laser integrated with an optical amplifier/modulator. IEEE J. Quantum Electron. 28 (3), 594–603.
Headley, C., Agrawal, G., 2005. Raman Amplification in Fiber Optical Communication Systems. Academic Press.
Henry, C.H., 1982. Theory of the linewidth of semiconductor lasers. IEEE J. Quantum Electron. 18, 259.
Hill, K.O., Johnson, D.C., Kawasaki, B.S., MacDonald, R.I., 1978. CW three-wave mixing in single-mode optical fibers. J. Appl. Phys. 49, 5098.
Hui, R., Tao, S., 1989. Improved rate-equation for external cavity semiconductor lasers. IEEE J. Quantum Electron. 25 (6), 1580.
Hui, R., Zhu, B., Huang, R., Allen, C., Demarest, K., Richards, D., 2002. Subcarrier multiplexing for high-speed optical transmission. J. Lightwave Technol. 20, 417–427.
Inoue, K., 1992. Polarization effect on four-wave mixing efficiency in a single-mode fiber. IEEE J. Quantum Electron. 28, 883–894.
Islam, M.N., et al., 1987. Cross-phase modulation in optical fibers. Opt. Lett. 12 (8), 625.


Jiang, X., Itzler, M.A., Ben-Michael, R., Slomkowski, K., 2007. InGaAsP-InP avalanche photodiodes for single photon detection. IEEE J. Sel. Top. Quantum Electron. 13, 895–905.
Kaminow, I., Li, T., Willner, A. (Eds.), 2008. Optical Fiber Telecommunications V (A and B). Academic Press. ISBN-13: 978-0-12-374171.
Keiser, G., 2000. Optical Fiber Communications, third ed. McGraw-Hill.
Kogelnik, H., Shank, C.V., 1972. Coupled-wave theory of distributed feedback lasers. J. Appl. Phys. 43 (5), 2327.
Lang, R., Kobayashi, K., 1980. External optical feedback effects on semiconductor injection laser properties. IEEE J. Quantum Electron. 16 (3), 347–355.
Neumann, E.G., 1988. Single-Mode Fibers: Fundamentals, Springer Series in Optical Sciences, vol. 57. Springer-Verlag, ISBN: 3-540-18745-6.
Olsson, N.A., 1992. Semiconductor optical amplifiers. Proc. IEEE 80, 375–382.
Penninckx, D., Chbat, M., Pierre, L., Thiery, J.P., 1997. The phase-shaped binary transmission (PSBT): a new technique to transmit far beyond the chromatic dispersion limit. IEEE Photon. Technol. Lett. 9, 259–261.
Saleh, A.A.M., Jopson, R.M., Evankow, J.D., Aspell, J., 1990. Accurate modeling of gain in erbium-doped fiber amplifiers. IEEE Photon. Technol. Lett. 2, 714–717.
Shimada, S., Ishio, H. (Eds.), 1992. Optical Amplifiers and their Applications. John Wiley.
Smith, R.G., 1972. Optical power handling capacity of low loss optical fibers as determined by stimulated Raman and Brillouin scattering. Appl. Opt. 11 (11), 2489.
Smith, G.H., Novak, D., Ahmed, Z., 1997. Overcoming chromatic dispersion effects in fiber-wireless systems incorporating external modulators. IEEE Trans. Microwave Theory Tech. 45, 1410–1415.
Stolen, R.H., Lin, C., 1978. Self-phase modulation in silica optical fibers. Phys. Rev. A 17 (4), 1448.
Taitt, C.R., Anderson, G.P., Ligler, F.S., 2005. Evanescent wave fluorescence biosensors. Biosens. Bioelectron. 20 (12), 2470–2487.
Tsukamoto, S., Katoh, K., Kikuchi, K., 2006. Coherent demodulation of optical multilevel phase-shift-keying signals using homodyne detection and digital signal processing. IEEE Photon. Technol. Lett. 18 (10), 1131–1133.
Wang, J.P., et al., 2008. Efficient performance optimization of SOA-MZI devices. Opt. Express 16 (5), 3288–3292.
Zhang, D., et al., 2012. Compact MEMS external cavity tunable laser with ultra-narrow linewidth for coherent detection. Opt. Express 20, 19670–19682.
Zhou, J., et al., 1993. Terahertz four-wave mixing spectroscopy for study of ultrafast dynamics in a semiconductor optical amplifier. Appl. Phys. Lett. 63 (9), 1179–1181.


CHAPTER 2

Basic mechanisms and instrumentation for optical measurement

2.1 Introduction

Characterization and testing of optical devices and systems require essential tools and instrumentation. A good understanding of the basic operation principles of various test instruments, their general specifications, advantages, and limitations helps us design experimental setups, optimize test procedures, and minimize measurement errors. Optical measurement is a fast-moving field because of the innovations and rapid advances in optical devices and systems, as well as emerging applications. The complexity and variety of optical systems require high levels of flexibility in testing and characterization. Realistically, it is not feasible for a single centralized laboratory to possess all the specialized instrumentation to satisfy all measurement requirements. It is important to be flexible in configuring measurement setups based on the available tools and instruments. In doing so, it is also important to understand the implications for measurement accuracy, efficiency, and limitations, and their trade-offs. This chapter introduces a number of basic optical measurement mechanisms and instruments. The focus is on the discussion of the basic operating principles, general specifications, unique advantages, and limitations of each measurement tool. Section 2.2 discusses grating-based optical spectrum analyzers (OSAs), the most popular instruments in almost every optics laboratory. Section 2.3 introduces scanning Fabry-Perot interferometers, which can usually provide better spectral resolution than a grating-based OSA but with a narrower range of wavelength coverage. Sections 2.4 and 2.5 describe Mach-Zehnder and Michelson interferometers. Their optical configurations are similar, but the analysis and specific applications may differ. The Sagnac loop mirror is another type of fiber-optic interferometer often used in nonlinear applications, which is discussed at the end of Section 2.5. Section 2.6 introduces optical wavelength meters based on a Michelson interferometer. Although the wavelength of an optical signal can be measured by an OSA, a wavelength meter is application specific and usually provides much better wavelength measurement accuracy and efficiency. Section 2.7 discusses optical transfer functions of optical ring resonators and their applications in optical measurement and instrumentation, as well as their application in electro-optic modulators. Section 2.8 presents various designs of optical polarimeters, which are used to characterize polarization states of optical signals passing through optical devices or systems.


Section 2.9 introduces optical measurement techniques based on coherent detection. In recent years, high-quality single-frequency tunable semiconductor lasers have become increasingly reliable and affordable. This makes coherent detection a viable candidate for many high-end measurement applications that require high spectral resolution and detection sensitivity. Section 2.10 discusses various high-speed optical waveform measurement techniques, including analog and digital oscilloscopes, traditional sampling oscilloscopes versus real-time digital sampling oscilloscopes, linear and nonlinear optical sampling, and optical autocorrelation. Section 2.11 is devoted to the discussion of LIDAR and OCT. Although these two optical measurement techniques have very different applications, both are based on the accurate measurement of the roundtrip time between an optical source and a target. The last section of this chapter, Section 2.12, discusses optical network analyzers. Based on an idea similar to that of an RF network analyzer, which operates in the electrical domain, an optical network analyzer characterizes complex transfer functions of a device or system, but in the optical domain. The purpose of this chapter is to set up a foundation for basic understanding of optical measurement mechanisms, methodologies, and their theoretical background. Similar operation principles and methodologies can be extended to create various specialized instrumentations, as discussed in later chapters.

2.2 Grating-based optical spectrum analyzers

An optical spectrum analyzer is an instrument used to measure the spectral density of a lightwave signal at different wavelengths. It is one of the most useful pieces of equipment in fiber-optic system and device measurement, especially when wavelength division multiplexing is introduced into systems where different data channels are carried by different wavelengths. In addition to being able to identify the wavelength of an optical signal, an optical spectrum analyzer is often used to find the optical signal power level at each wavelength channel, evaluate the optical signal-to-noise ratio and optical crosstalk, and check the optical bandwidth when an optical carrier is modulated.

2.2.1 General specifications

The most important parameter an OSA provides is the optical spectral density versus wavelength. The unit of optical spectral density is usually expressed in watts per hertz [W/Hz], defined as the optical power within a bandwidth of one hertz measured at a certain wavelength. The most important qualities of an OSA can be specified by the following parameters (Derickson, 1998; Loewen and Popov, 1997):
(1) Wavelength range. The maximum wavelength range the OSA can cover while guaranteeing the specified performance. Although a large wavelength range is desired, the practical limitation comes from the applicable wavelength window of the optical filters, photodetectors, and other optical devices. The typical wavelength range covered by a commercially available grating-based OSA is from 400 to 1700 nm.
(2) Wavelength accuracy. Specifies how accurately the OSA measures the wavelength. Most commercial OSAs separately specify absolute wavelength accuracy and relative wavelength accuracy. Absolute wavelength accuracy specifies how accurate the measured absolute wavelength value is, which is often affected by the wavelength calibration. Relative wavelength accuracy tells how accurate the measured wavelength separation between two optical signals is, which is mainly determined by the nonlinearity of the optical filters. In typical OSAs, wavelength accuracy of less than 0.1 nm can be achieved.
(3) Resolution bandwidth. Defines how finely an OSA slices the signal optical spectrum during the measurement. As an OSA measures the signal optical spectral density, which is the total optical power within a specified bandwidth, a smaller resolution bandwidth means a more detailed characterization of the optical signal. However, the minimum resolution bandwidth of an OSA is usually limited by the narrowest bandwidth the optical system of the OSA can provide and by the lowest detectable optical power of the receiver. The finest optical resolution bandwidth of a grating-based commercial OSA ranges from 0.1 to 0.01 nm.
(4) Sensitivity. Specifies the minimum measurable signal optical power before it reaches the background noise floor. The detection sensitivity is therefore basically determined by the noise characteristic of the photodiode used inside the OSA. For a short-wavelength OSA, the detection sensitivity is generally better due to the use of a silicon photodiode, which covers wavelengths from 400 to 1000 nm. For wavelengths from 1000 to 1700 nm, an InGaAs photodiode has to be used; its noise level is generally higher, and the detection sensitivity is therefore poorer compared to short-wavelength OSAs. Commercially available OSAs can provide a detection sensitivity of −120 dBm in the 400–1700 nm wavelength range.
(5) Maximum power. The maximum allowable signal optical power before the OSA detection system is saturated. A typical OSA can tolerate 20 dBm signal optical power or higher.
(6) Calibration accuracy. Specifies how accurate the absolute optical power reading is in the measurement. Typically, a calibration accuracy of less than 0.5 dB can be achieved in a commercial OSA.
(7) Amplitude stability. Specifies the maximum allowable fluctuation of the power reading over time when the actual input signal optical power is constant. Typical amplitude stability of a commercial OSA is less than 0.01 dB per minute.
(8) Dynamic range. The maximum distinguishable amplitude difference between two optical signals whose wavelengths are a certain number of nanometers apart. This becomes a concern because in practical OSAs, if a weak optical signal is in close vicinity to a strong optical signal, the weak signal may become unmeasurable because the receiver is overwhelmed by the strong signal; obviously, this effect depends on the wavelength separation between the two optical signals. The typical dynamic range of a commercial OSA is about 60 dB for a 0.8-nm wavelength interval and 52 dB for a 0.2-nm wavelength interval.
(9) Frequency sweep rate. Specifies the speed at which an OSA sweeps over the wavelength during the measurement. It depends on the measurement wavelength span and the resolution bandwidth used. Generally, the number of measurement samples an OSA takes in each sweep is equal to the span width divided by the resolution bandwidth. In practical applications, choosing the sweep speed also depends on the power level of the optical signal. At low power levels, averaging may have to be performed in the detection, which slows down the sweep speed.
(10) Polarization dependence. Specifies the maximum allowable fluctuation of the power reading while changing the state of polarization of the optical signal. Polarization dependence of an OSA is usually caused by the polarization-dependent transmission of the optical system, such as the gratings and optical filters used in the OSA. Commercial OSAs usually have less than 0.1 dB polarization dependence.
An OSA measures the signal optical power within the resolution bandwidth at various wavelengths; obviously, this can be accomplished by a tunable narrowband optical filter. Although there are various types of optical filters, considering the stringent specifications, such as wide wavelength range, fine optical resolution, high dynamic range, and so on, the options are narrowed down, and the most commonly used optical filter for OSA applications is the diffraction grating. High-quality gratings can have a wide wavelength range and reasonably good resolution, and they can be mechanically tuned to achieve swept wavelength selection. In this section, we discuss the operation principle and various configurations of the grating-based optical spectrum analyzer.

2.2.2 Fundamentals of diffraction gratings

Most commercial optical spectrum analyzers are based on diffraction gratings because of their superior characteristics in terms of resolution, wavelength range, and simplicity (Loewen and Popov, 1997). A diffraction grating is a mirror onto which very closely spaced thin groove lines are etched, as illustrated in Fig. 2.2.1.

Fig. 2.2.1 Illustration of a diffraction grating with period d, grating normal nG, and groove normal nV.

Fig. 2.2.2 Lightwave incident and diffracted from a diffraction grating.

Important parameters defining a diffraction grating include the grating period d, which is the distance between adjacent grooves, the grating normal nG, which is perpendicular to the grating surface, and the groove normal nV, which is perpendicular to the groove surface. As illustrated in Fig. 2.2.2, when a collimated optical signal is projected onto a grating at an incident angle α (with respect to the grating normal nG) and a receiver collects the diffracted lightwave at an angle β, the path length difference between adjacent ray traces can be expressed as

Δ = d(sin β − sin α)   (2.2.1)

Assume that the input field amplitude of each ray is A0 (the total field amplitude within a spatial width of a grating slot) and that N lightwave rays project onto the grating. Then, at the receiver, the fields of all the ray traces add up, and the total electrical field is

A(β) = A0U0(β){1 + exp[−j(2π/λ)Δ] + exp[−j(2π/λ)2Δ] + exp[−j(2π/λ)3Δ] + … + exp[−j(2π/λ)(N − 1)Δ]}
     = A0U0(β){1 − exp[−j(2π/λ)NΔ]}/{1 − exp[−j(2π/λ)Δ]}   (2.2.2)

Here, U0(β) is the angle-dependent diffractive efficiency of the field at each grating slot:


U0(β) = (1/d)∫_0^d exp[−j(2π/λ)x sin β] dx = exp[−j(πd/λ)sin β]·[sin(πd sin β/λ)/(πd sin β/λ)]   (2.2.3)

Since

[1 − exp(−jNx)]/[1 − exp(−jx)] = exp[−j(N − 1)x/2]·sin(Nx/2)/sin(x/2)   (2.2.4)

the overall optical field transfer function from the input to the output is

H(λ, β) = A(λ, β)/(NA0) = sinc(πd sin β/λ)·{sin(πNΔ/λ)/[N sin(πΔ/λ)]}·exp{−j[(N − 1)πΔ/λ + πd sin β/λ]}   (2.2.5)

and the corresponding power transfer function is

T(λ, β) = |H(λ, β)|² = sinc²(πd sin β/λ)·sin²(πNΔ/λ)/[N² sin²(πΔ/λ)]   (2.2.6)

Obviously, the power transfer function reaches its peak values when Δ = mλ, where the grating order m is an integer, so the grating equation is defined as

mλ = d(sin β − sin α)   (2.2.7)

Although this grating equation defines the wavelengths and the diffraction angles whereby the power transmission finds its maxima, Eq. (2.2.6) is a general equation describing the overall grating transfer function versus wavelength. To better understand this grating transfer function, let’s look at a few special cases.
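Before turning to those cases, note that Eq. (2.2.6) is straightforward to evaluate numerically. The short Python sketch below probes the interference term around a transmission peak (the Δ = 0 direction is used purely for convenience) with the same d, N, and λ values quoted later for Fig. 2.2.5; the incidence angle and the sample angles are illustrative assumptions, not values from the text.

```python
import numpy as np

d, N, lam, alpha = 1e-6, 10_000, 1550e-9, 0.0   # grating period (m), groove count, wavelength (m), incidence (rad)

def grating_T(beta):
    # Power transfer function of Eq. (2.2.6) at diffraction angle beta (rad)
    delta = d * (np.sin(beta) - np.sin(alpha))            # path-length difference, Eq. (2.2.1)
    envelope = np.sinc(d * np.sin(beta) / lam) ** 2       # np.sinc(x) = sin(pi*x)/(pi*x)
    x = np.pi * delta / lam
    interference = 1.0 if abs(np.sin(x)) < 1e-12 else (np.sin(N * x) / (N * np.sin(x))) ** 2
    return envelope * interference

# Probe the angular selectivity around a transmission peak (here the specular direction, Delta = 0)
for off_deg in (0.0, 0.005, 0.02):
    print(f"offset {off_deg:5.3f} deg -> T = {grating_T(np.radians(off_deg)):.3e}")

# Half-angular width of the peak from Eq. (2.2.12)
print("half width (deg):", np.degrees(lam / (N * d * np.cos(alpha))))
```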

2.2.2.1 Measure the diffraction angle spreading when the input only has a single frequency

In this case, we can find the spatial resolution of the grating by measuring the output optical intensity versus the diffraction angle β. For an mth order grating, one of the power transmission maxima occurs at a diffraction angle βm, which satisfies the grating equation d sin βm = mλ + d sin α, or equivalently,

Nd sin βm = mNλ + Nd sin α   (2.2.8)

Suppose that the adjacent transmission minimum happens at an angle βm + Δβ; then we can write

Nd sin(βm + Δβ) = (mN + 1)λ + Nd sin α   (2.2.9)

so that


sin(βm + Δβ) − sin βm = λ/(Nd)   (2.2.10)

For a high-resolution grating, suppose this angle Δβ is small enough; Eq. (2.2.10) can then be linearized as

sin(β + Δβ) − sin β ≈ Δβ cos β   (2.2.11)

Then we can find the value of Δβ, which is the half-angular width of the transmission peak:

Δβ = λ/(Nd cos β)   (2.2.12)

Δβ specifies the angular spreading of the diffracted light when the input has a single wavelength λ. For high-resolution OSAs, the value of Δβ should be small. High angular resolution can be achieved using a large effective-area grating because the total width of the grating illuminated by the optical signal is D = (N − 1)d ≈ Nd.

2.2.2.2 Sweep the signal wavelength while measuring the output at a fixed diffraction angle

In this case, we can find the frequency resolution of the grating by measuring the output optical intensity as a function of the input signal wavelength. Here, β is fixed, and λ is varying. For an mth order grating and at the output diffraction angle β, the transmission maximum happens when the input signal is tuned to a wavelength λm. The relationship between β and λm is given by the grating equation

Nd(sin β − sin α) = Nmλm   (2.2.13)

To evaluate the width of this transmission peak, it is useful to find the position of the transmission minimum around the peak. Assume that a transmission minimum happens at (λm − Δλ); according to the grating equation,

Nd(sin β − sin α) = (mN + 1)(λm − Δλ)   (2.2.14)

Combining Eqs. (2.2.13) and (2.2.14), we find

Δλ = λ/(mN + 1) ≈ λ/(mN)   (2.2.15)

Δλ specifies the wavelength resolution of the grating-based optical filter when the output is measured at a fixed diffraction angle. Eq. (2.2.15) reveals that a high-resolution OSA requires a large number of grating grooves N to be illuminated by the input signal. For a


certain input beam size D, this requires a high groove density or a small groove period d. Commercially available gratings usually have groove densities ranging from 400 to 1200 lines per millimeter. Gratings with higher groove densities suffer from larger polarization sensitivities and a narrower wavelength range. A high diffraction order m also helps improve the wavelength resolution. However, at high diffraction angles, the grating efficiency is usually decreased, although there are special designs to enhance the grating efficiency at a specific diffraction order.

Example 2.1
To achieve a spectral resolution of 0.08 nm at a signal wavelength of 1550 nm, a second-order grating (m = 2 in Eq. 2.2.7) with a groove density of 800 lines/mm is used. What is the minimum diameter of the collimated input beam?

Solution: According to Eq. (2.2.15), to achieve a resolution bandwidth of 0.08 nm, the minimum number of grooves illuminated by the input optical signal is

N = λ/(mΔλ) = 1550/(2 × 0.08) = 9688

Since the grating has a groove density of 800 lines/mm, the required beam diameter is

D = 9688/800 = 12.1 mm

Another often used specification for a grating is the grating dispersion, which is defined as the diffraction angle change induced by a unit wavelength change of the optical signal. Using the grating equation given by Eq. (2.2.7), assuming that the input angle is fixed, and differentiating both sides of the equation, we have mΔλ = d cos β Δβ. Therefore, the grating dispersion can be expressed as

Disp = Δβ/Δλ = m/(d cos β)   (2.2.16)
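The numbers in Example 2.1 and the dispersion of Eq. (2.2.16) can be reproduced with a few lines of Python; the 30-degree diffraction angle used for the dispersion estimate is an assumed value chosen only for illustration.

```python
import numpy as np

lam, d_lam, m = 1550e-9, 0.08e-9, 2   # wavelength, target resolution, grating order (Example 2.1)
groove_density = 800e3                # lines per meter (800 lines/mm)

N = lam / (m * d_lam)                 # required number of illuminated grooves, Eq. (2.2.15)
D = N / groove_density                # required collimated beam diameter
print(f"N = {N:.0f} grooves, beam diameter = {D * 1e3:.1f} mm")

# Angular dispersion of Eq. (2.2.16), using an assumed diffraction angle of 30 degrees
d = 1 / groove_density                # groove period (m)
beta = np.radians(30)
disp = m / (d * np.cos(beta))         # radians of angle per meter of wavelength
print(f"dispersion = {disp * 1e-9 * 1e3:.2f} mrad/nm")
```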

2.2.3 Basic OSA configurations

The heart of a grating-based optical spectrum analyzer is a monochromator, as schematically illustrated in Fig. 2.2.3. In this basic optical configuration, the input optical signal is converted into a collimated beam through a lens system and launched onto a grating. The grating diffracts and disperses the wavelength components of the optical signal into different diffraction angles. A tunable mechanical aperture (slit) is used to select various wavelength components within the signal spectrum, and a photodiode converts the selected wavelength component into an electrical signal. The electrical signal is then amplified, digitized, and recorded or displayed on the OSA screen.


Fig. 2.2.3 Basic configuration of an optical spectrum analyzer.

The photodiode measures the total optical power within the spectral bandwidth determined by the width of the slit. Considering the grating transfer function and the width of the optical aperture, the measured value can be converted into an optical power spectral density in units of mW/nm or dBm/nm. In a practical OSA, there is usually a button to select the desired resolution bandwidth for each measurement, which is accomplished by selecting the width of the optical aperture. Reducing the width of the aperture allows finer slices of the optical spectrum to be cut; however, the minimum wavelength resolution is determined by the grating quality and the beam size, as described in Eq. (2.2.15). In addition, the focusing optics also needs to be designed properly to minimize aberration while maintaining a large beam size on the grating.

2.2.3.1 OSA based on a double monochromator

Spectral resolution and measurement dynamic range are the two most important parameters of an OSA, and both of them depend on the grating transfer function. These parameters can be improved by (1) increasing the groove-line density of the grating and (2) increasing the size of the beam launched onto the grating. Practically, gratings with groove densities higher than 1200/mm are difficult to make, and they usually suffer from high polarization sensitivity and limited wavelength coverage. On the other hand, an increased beam size demands very high-quality optical systems and high uniformity of the grating. It also proportionally increases the physical size of the OSA. One often used method to improve OSA performance is to let the optical signal diffract off the grating twice, as shown in Fig. 2.2.4. In this double-pass configuration, if the two gratings are identical, the power transfer function will be

T2(λ, β) = |T1(λ, β)|²   (2.2.17)

where T1(λ, β) is the power transfer function of a single grating as given by Eq. (2.2.6). Fig. 2.2.5 shows an example of the grating power transfer function versus the relative diffraction angle β. The grating groove density is 1000 lines/mm, and the beam size


Fig. 2.2.4 Optical spectrum analyzer based on a double-pass monochromator.

Fig. 2.2.5 Transfer functions of (A) single-pass and (B) double-pass gratings with d = 1 μm, N = 10,000, and λ = 1550 nm.

launched onto the grating is 10 mm. Therefore, there are 10,000 groove lines involved in the interference. The signal wavelength is λ = 1550 nm. Using the double-pass configuration, the spectral resolution defined by Eqs. (2.2.12) and (2.2.15) does not change; however, it does improve the sideband suppression and thus increases the dynamic range of the OSA.

2.2.3.2 OSA with polarization sensitivity compensation

For practical OSA applications, especially when the input is from an optical fiber, the signal polarization state is not typically controlled, and it can vary randomly. This requires an OSA to operate independently of polarization. However, diffraction gratings are typically polarization sensitive. The diffraction efficiency of a grating is usually different for input optical field polarizations parallel (s-plane) or perpendicular (p-plane) to the direction of the groove lines. This polarization sensitivity is especially pronounced when the groove density is high. Fig. 2.2.6 shows an example of a plane-ruled reflectance grating with 1000 line/mm groove density. The diffraction efficiencies for signals polarized on the s-plane and the


Fig. 2.2.6 Example of diffraction efficiency of a plane-ruled reflectance grating versus wavelength for optical signal polarized on the s-plane and p-plane (Standard Diffraction Gratings, 2022).

p-plane are generally different, and this difference is a function of wavelength. This polarization sensitivity needs to be compensated. There are several techniques that can be used in the design of an OSA; the most popular ones are polarization diversity and polarization averaging. In the polarization diversity configuration, two photodiodes are used to detect the polarization components in the p-plane and the s-plane separately, as shown in Fig. 2.2.7. Since the grating diffraction efficiencies for these two polarizations can be measured precisely and they are deterministic, each detector only needs to compensate for the grating efficiency in its corresponding polarization. By adding the calibrated electrical signals from the two detectors, the polarization sensitivity can easily be minimized through electronic signal processing. Polarization averaging is a technique that can be used in double-pass monochromators, as shown in Fig. 2.2.8, where a half-wave plate is used to rotate the polarization by 90 degrees. In this way, a signal entering the first grating on the p-plane will be rotated to the s-plane for the second grating. If the grating diffraction efficiencies for the p-plane

Fig. 2.2.7 Optical spectrum analyzer with polarization diversity detection. PBS, polarization beam splitter.


Fig. 2.2.8 Optical spectrum analyzer with polarization averaging.

and the s-plane are Ap(λ) and As(λ), respectively, the overall efficiency after a double pass will be A(λ) = Ap(λ)As(λ), which is independent of the polarization state of the optical signal. There are various commercial OSAs, each with its unique design. However, their operations are based on similar principles, as described previously. Fig. 2.2.9 shows an example of a practical optical system design of a double-pass OSA. In this design, the optical signal is folded back and forth in the optical system and diffracted by the same grating

Fig. 2.2.9 A practical optical system configuration of a double-pass OSA with polarization averaging using a λ/2 plate. This OSA can also be used as a tunable optical filter (Derickson, 1998).


twice. A λ/2 waveplate is used to rotate the polarization state of the signal by 90 degrees between the two diffractions. This OSA can also be used as a tunable optical filter by selecting the optical output instead of the electrical signal from Photodiode 1. To better explain the operation mechanism, the light-path sequence is marked by 1, 2, 3, and 4 in Fig. 2.2.9, each with a direction arrow.

2.2.3.3 Consideration of focusing optics

In an OSA, although the performance is largely determined by the grating characteristics, the focusing optics also plays an important role in ensuring that the optimum performance is achieved. We must consider several key parameters in the design of an OSA focusing optics system:
(1) Size of the optical input aperture and the collimating lens. The collimated beam quality depends largely on the optical input aperture size and the divergence angle. In principle, if the input signal is from a point source with an infinitesimal spot size, it can always be converted into an ideally collimated beam through an optical lens system. However, if the optical input is from a standard single-mode fiber that has a core diameter of 9 μm and a maximum divergence angle of 10 degrees, the lens system will not be able to convert this into an ideally collimated beam. This idea is illustrated in Fig. 2.2.10. Since the quality of an OSA depends on the precise angular dispersion of the grating, any beam divergence will create degradations in the spectral resolution.
(2) Output focusing lens. After the optical signal is diffracted off the grating, different wavelength components within the optical signal will travel at different angles, and the output focusing lens will then focus them to different locations at the output aperture, as shown in Fig. 2.2.11. Ideally, the focusing spot size has to be infinitesimal to ensure the optimum spectral resolution. However, the minimum focusing spot size is limited by the diffraction of the lens system, which is usually called an Airy disk. This diffraction-limited spot size depends on the focal length f and the diameter of the lens D as

w = 2λf/(πD)   (2.2.18)

Fig. 2.2.10 Illustration of beam divergence when the input aperture is big.


Fig. 2.2.11 Illustration of beam divergence when the input aperture is large.

A large collimated beam size and a large lens are preferred to increase the spectral resolution and reduce the diffraction-limited spot size. However, this will increase the size of the OSA.
(3) Aberration induced by the lens. Aberration increases the focusing spot size and therefore degrades the resolution. Aberration is directly proportional to the beam diameter; therefore, the beam diameter cannot be too large.
(4) Output slit or pinhole. To select diffracted light of different wavelengths, a smaller exit pinhole gives better spectral resolution but less signal energy (poorer SNR). Therefore, a longer averaging time usually has to be used when a small resolution bandwidth is chosen.
(5) Photodetection. Requires photodiodes with wide spectral coverage or sometimes multiple photodiodes (each covering a certain wavelength range). Response calibration must be made, and the noise level must be low.

2.2.3.4 Optical spectral meter using photodiode array

The traditional grating-based OSA uses a small optical aperture for spectral selection. To measure an optical spectrum, one has to mechanically scan the angle of the grating or the position of the optical aperture. Mechanical moving parts potentially make the OSA less robust and reliable; like an automobile engine, after running for years it will wear out. For applications requiring continuous monitoring of the optical spectrum, this reliability issue is an important concern. In recent years, miniaturized optical spectrometers have been used in WDM optical systems to monitor signal quality and the optical signal-to-noise ratio. In this type of optical communication system application, the spectrometers must operate continuously for many years, and therefore mechanical moving parts need to be eliminated. A simple spectrometer design without mechanical moving parts uses a one-dimensional photodiode array as the detector. The configuration is shown in Fig. 2.2.12, where each pixel in the photodiode array measures a slice of the optical spectrum. Parallel electronic processing can be used to perform data acquisition and spectrum analysis. A most often used detector array is the one-dimensional charge-coupled device (CCD).


Fig. 2.2.12 Photodiode array-based optical spectrometer.

Silicon-based CCDs have been used widely for imaging sensors; they are reliable, with low noise, high sensitivity, and, most important, low cost. However, silicon-based CCDs only cover wavelength regions from the visible to approximately 1 μm. For long-wavelength applications, such as in the 1550 nm optical communication window, InGaAs-based linear arrays are commercially available with pixel numbers ranging from 512 to 2048 and pixel widths ranging from 10 to 50 μm. In addition to having higher noise levels than silicon-based CCDs, an InGaAs detector array costs much more than its silicon counterpart. This high cost, to a large extent, limits the applications of OSA-based optical performance monitors in commercial communication networks. For a commercially available compact OSA operating in the 1550 nm wavelength window using an InGaAs linear array, the spectral resolution is typically on the order of 0.2 nm, and the wavelength coverage can be in both the C-band (1530–1562 nm) and the L-band (1565–1600 nm). Such an OSA is made primarily for all-optical performance monitoring of optical networks. The limitation in the spectral resolution is mainly due to the limited grating dispersion and the small beam size, as well as the limited pixel count of the diode array.
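To get a feel for what limits the resolution of such a diode-array spectrometer, the sketch below estimates the wavelength span covered by one pixel from the grating dispersion of Eq. (2.2.16), the focal length of the output optics, and the pixel pitch; all numerical values are assumptions chosen only for illustration.

```python
import numpy as np

m, groove_density = 1, 1000e3   # grating order and groove density (lines per meter); assumed values
beta = np.radians(20)           # assumed diffraction angle
f = 0.05                        # focal length of the focusing optics (m), assumed
pixel_pitch = 25e-6             # detector pixel width (m), within the range quoted above

d = 1 / groove_density                       # groove period (m)
angular_dispersion = m / (d * np.cos(beta))  # rad per meter of wavelength, Eq. (2.2.16)
linear_dispersion = f * angular_dispersion   # focal-plane displacement per meter of wavelength
print(f"wavelength per pixel ~ {pixel_pitch / linear_dispersion * 1e9:.2f} nm")
```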

2.3 Scanning FP interferometer

In the last section, we discussed optical spectrum analyzers based on diffraction gratings. The major advantage of grating-based OSAs is their wide wavelength coverage; a commercial OSA usually covers a wavelength range from 400 to 1700 nm. However, it is not easy to achieve a very fine spectral resolution using a grating-based OSA, which is limited by the groove-line density and the maximum optical beam diameter. For example, for a first-order grating with a line density of 1200/mm operating in the 1550 nm wavelength window, to achieve a frequency resolution of 100 MHz, which is equivalent to 0.0008 nm wavelength resolution, the required optical beam diameter would have to be approximately 1.6 m; this is simply too big to be realistic. On the other hand, an ultra-high-


resolution optical spectrum analyzer can be made using a scanning Fabry-Perot interferometer (FPI). In this section, we discuss the operating principle and realization of FPI-based optical spectrum analyzers.

2.3.1 Basic FPI configuration and transfer function

The basic configuration of an FPI is shown in Fig. 2.3.1, where two parallel mirrors, both having power reflectivity R, are separated by a distance d (Hernandez, 1986; Vaughan, 1989). If a light beam is launched onto the mirrors at an incident angle α, a part of the light will penetrate through the left mirror and propagate to the right mirror at point A. At this point, part of the light will pass through the mirror, and the other part will be reflected back to the left mirror at point B. This process is repeated many times until the amplitude is significantly reduced by the multiple reflection losses. The propagation phase delay can be found after each roundtrip between the two mirrors. The path length difference between adjacent output traces, as illustrated in Fig. 2.3.1, is

ΔL = AB + BC − AD = 2d cos α   (2.3.1)

After passing through the second mirror, the optical field amplitudes of Light Rays 1, 2, and 3 can be expressed, respectively, as

E0(f) = Ei(f)·√(1 − R)·exp(−jβnx0)·√(1 − R)
E1(f) = Ei(f)·√(1 − R)·exp(−jβnx0)·exp(−jβnΔL)·R·√(1 − R)
E2(f) = Ei(f)·√(1 − R)·exp(−jβnx0)·exp(−j2βnΔL)·R²·√(1 − R)

Fig. 2.3.1 Illustration of a Fabry-Perot interferometer with two partially reflecting mirrors separated by a distance d.


where Ei(f) is the input optical field, f is the optical frequency, x0 is the propagation delay of Beam 1 shown in Fig. 2.3.1, β = 2π/λ = 2πf/c is the propagation constant, and n is the refractive index of the material between the two mirrors. The general expression for the optical field that has experienced N roundtrips between the two mirrors is

EN(f) = Ei(f)·√(1 − R)·exp(−jβnx0)·exp(−jNβnΔL)·R^N·√(1 − R)   (2.3.2)

Adding all the components together at the FPI output, the total output electrical field is

Eout(f) = Ei(f)(1 − R)exp(−jβnx0)·Σ_{m=0}^{∞} exp(−jmβnΔL)R^m = Ei(f)(1 − R)exp(−jβnx0)/[1 − R·exp(−jβnΔL)]   (2.3.3)

Because of the coherent interference between the various output ray traces, the transfer function of an FPI becomes frequency dependent. The field transfer function of the FPI is then

H(f) = Eout(f)/Ei(f) = (1 − R)exp(−j2πfnx0/c)/[1 − R·exp(−j2πfnΔL/c)]   (2.3.4)

The power transfer function is the square of the field transfer function:

TFP(f) = |H(f)|² = (1 − R)²/[(1 − R)² + 4R sin²(πfnΔL/c)]   (2.3.5)

Equivalently, this power transfer function can also be expressed using the signal wavelength as the variable:

TFP(λ) = (1 − R)²/[(1 − R)² + 4R sin²(2πnd cos α/λ)]   (2.3.6)

For a fixed signal wavelength, Eq. (2.3.6) is a periodic transfer function of the incidence angle α. For example, if a point light source illuminates an FPI, as shown in Fig. 2.3.2, a group of bright rings will appear on the screen behind the FPI. The diameters of the rings depend on the thickness of the FPI as well as on the signal wavelength. To provide a quantitative demonstration, Fig. 2.3.3 shows the FPI power transfer function versus the incidence angle at the input. To obtain Fig. 2.3.3, a power reflectivity of the mirrors R = 0.5, a mirror separation d = 300 μm, and a media refractive index n = 1 were used. The solid line shows the transfer function with the signal wavelength at λ = 1540.4 nm. At this wavelength, the center of the screen is dark. When the signal


Fig. 2.3.2 Illustration of the circular fringe pattern when a noncollimated light source is launched onto a screen through an FPI.

Fig. 2.3.3 Transmission versus the beam incident angle to the FPI with mirror reflectivity R = 0.5, mirror separation d = 300 μm, and media refractive index n = 1. Calculations were made at two wavelengths: λ = 1538.5 nm (dashed line) and λ = 1540.4 nm (solid line).

wavelength is changed to 1538.5 nm, as shown by the dashed line, the center of the screen becomes bright. Note too that with an increase of the mirror separation d, the angular separation between transmission peaks becomes smaller, and the rings on the screen become more crowded. In fiber-optic FPI applications, because of the small numerical aperture of the single-mode fiber, collimated light is usually used, and one can assume that the incidence angle is approximately α = 0. This is known as a collinear configuration. With α = 0, Eq. (2.3.6) can be simplified to

TFP(λ) = (1 − R)²/[(1 − R)² + 4R sin²(2πnd/λ)]   (2.3.7)


Fig. 2.3.4 Transmission versus the signal wavelength with mirror separation d = 5 mm, media refractive index n = 1, and mirror reflectivity (A) R = 0.95 and (B) R = 0.8.

In this simple case, the power transmission is a periodic function of the signal optical frequency. Fig. 2.3.4 shows two examples of power transfer functions in a collinear FPI configuration where the mirror separation is d = 5 mm, the media refractive index is n = 1, and the mirror reflectivity is R = 0.95 for Fig. 2.3.4A and R = 0.8 for Fig. 2.3.4B. With a higher mirror reflectivity, the transmission peaks become narrower, and the transmission minima become lower. Therefore, the FPI has better frequency selectivity with high mirror reflectivity. In the derivation of the FPI transfer function, we have assumed that there is no loss within the cavity, but in practice, optical loss always exists. After each roundtrip in the cavity, the reduction in the optical field amplitude should be ηR instead of just R. The extra loss factor η may be introduced by cavity material absorption and beam misalignment. The major cause of beam misalignment is that the two mirrors that form the FPI are not exactly parallel. As a consequence, after each roundtrip, the beam exiting the FPI only partially overlaps with the previous beam; therefore, the strength of interference between them is reduced. For simplicity, we still use R to represent the FPI mirror


reflectivity; however, we have to bear in mind that this is an effective reflectivity, which includes the effect of cavity loss. The following are a few parameters that are often used to describe the properties of an FPI.

2.3.1.1 Free spectral range (FSR)

The FSR is the frequency separation Δf between adjacent transmission peaks of an FPI. For a certain incidence angle α, the FSR can be found from the transfer function shown in Eq. (2.3.6) as

FSR = Δf = c/(2nd cos α)   (2.3.8)

The FSR is inversely proportional to the cavity optical length nd. For a simple Fabry-Perot type laser diode, if the cavity length is d = 300 μm and the refractive index is n = 3.5, the FSR of this laser diode is approximately 143 GHz. If the laser operates in the 1550 nm wavelength window, this FSR is equivalent to 1.14 nm. This is the mode spacing of the laser, as discussed in Section 1.2.

2.3.1.2 Half-power bandwidth (HPBW)

The HPBW is the width of each transmission peak of the FPI power transfer function, which indicates the frequency selectivity of the FPI. From Eq. (2.3.6), if we assume that at f = f1/2 the transfer function is reduced to half its peak value, then

4R sin²(2πf1/2 nd cos α/c) = (1 − R)²

Assuming that the transmission peak is narrow enough, which is the case for most FPIs, sin(2πf1/2 nd cos α/c) ≈ (2πf1/2 nd cos α/c), and f1/2 can be found as

f1/2 = (1 − R)c/(4πnd√R cos α)

Therefore, the full width of the transmission peak is

HPBW = 2f1/2 = (1 − R)c/(2πnd√R cos α)   (2.3.9)

In most applications, a large FSR and a small HPBW are desired for good frequency selectivity. However, these two parameters are related through an FPI quality parameter known as finesse.

2.3.1.3 Finesse

The finesse of an FPI is related to the percentage of the transmission window within the free spectral range. It is defined by the ratio between the FSR and the HPBW:


F = FSR/HPBW = π√R/(1 − R)   (2.3.10)

Finesse is a quality measure of an FPI that depends only on the effective mirror reflectivity R. Technically, a very high R is hard to obtain because the effective reflectivity depends not only on the quality of the mirrors themselves but also on the mechanical alignment between the two mirrors. Current state-of-the-art technology can provide finesse of up to a few thousand.

2.3.1.4 Contrast

Contrast is the ratio between the transmission maximum and the transmission minimum of the FPI power transfer function. It specifies the ability of wavelength discrimination if the FPI is used as an optical filter. Again, from the transfer function shown in Eq. (2.3.6), the highest transmission is Tmax = 1, and the minimum transmission is Tmin = (1 − R)²/[(1 − R)² + 4R]. Therefore, the contrast of the FPI is

C = Tmax/Tmin = 1 + 4R/(1 − R)² = 1 + (2F/π)²   (2.3.11)

The physical meanings of the FSR, HPBW, and contrast are illustrated in Fig. 2.3.4B.

Example 2.2
The reflection characteristics of a Fabry-Perot filter can be used as a narrowband notch filter, as illustrated in Fig. 2.3.5. To achieve a power rejection ratio of 20 dB, what is the maximum allowable loss of the mirrors?

Solution: If the FP interferometer is ideal and the mirrors have no reflection loss, the wavelength-dependent power transmission of an FP filter is given by Eq. (2.3.7) as

Fig. 2.3.5 Using an FPI as a notch filter.


T(λ) = (1 − R)²/[(1 − R)² + 4R sin²(2πnd/λ)]

and the reflection of the FPI is

RFP(λ) = 1 − T(λ) = 4R sin²(2πnd/λ)/[(1 − R)² + 4R sin²(2πnd/λ)]

If this were true, the notch filter would be ideal because RFP(λ) = 0 whenever (2nd/λ) is an integer. However, in real devices, absorption, scattering, and non-ideal beam collimation contribute to reflection losses; therefore, the wavelength-dependent power transmission of an FP filter becomes

T(λ) = (1 − R)²/[(1 − Rη)² + 4Rη sin²(2πnd/λ)]

where η < 1 accounts for the power loss of each reflection on the mirror. Then, the FPI power reflectivity is

RFP(λ) = [(1 − Rη)² − (1 − R)² + 4Rη sin²(2πnd/λ)]/[(1 − Rη)² + 4Rη sin²(2πnd/λ)]

Obviously, the minimum reflection of the FPI happens at wavelengths where sin²(2πnd/λ) = 0; therefore, the value of the minimum reflection is

RFP(λ) = 1 − (1 − R)²/(1 − Rη)²

In order for this minimum reflectivity to be −20 dB (0.01), the loss factor has to satisfy

η ≥ (1/R)[1 − (1 − R)/√0.99]

Suppose that R = 0.9; the requirement for the excess reflection loss of the mirror is then η > 0.9994. That means only about 0.12% power loss is allowed in each roundtrip in the FP cavity, which is not usually easy to achieve.
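The mirror-loss requirement of Example 2.2 can be checked numerically. The sketch below sweeps the per-reflection loss factor η for the assumed R = 0.9 and reports the worst-case (peak-transmission) reflectivity of the notch, confirming the η > 0.9994 bound.

```python
import numpy as np

R = 0.9                                            # mirror reflectivity assumed in Example 2.2
eta = np.linspace(0.999, 1.0, 11)                  # per-reflection retained-power factor
R_fp_min = 1 - (1 - R) ** 2 / (1 - R * eta) ** 2   # reflectivity at the FPI transmission peaks

for e, r in zip(eta, R_fp_min):
    print(f"eta = {e:.4f} -> minimum notch reflectivity = {10 * np.log10(max(r, 1e-12)):6.1f} dB")

# Closed-form bound from the example: eta >= (1/R) * (1 - (1 - R)/sqrt(0.99))
print("required eta >=", (1 - (1 - R) / np.sqrt(0.99)) / R)
```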

2.3.2 Scanning FPI spectrum analyzer

The optical spectrum analyzer based on a scanning FPI is a popular optical instrument because of its superior spectral resolution. Typically, the resolution of a grating-based OSA is on the order of 0.08 nm, which is about 10 GHz in the 1550 nm wavelength window. A commercial scanning FPI can easily provide spectral resolution better than 10 MHz, which is approximately 0.08 pm in the 1550 nm wavelength window. However, the major disadvantage of an FPI-based OSA is its relatively narrow wavelength coverage. Since the collinear configuration (α = 0) is most often used in fiber-optic systems, we only consider this


case in the following description; the simplified FPI transfer function is given by Eq. (2.3.7). The power transfer function of an FPI is periodic in the frequency domain. From Eq. (2.3.7), the wavelength λm corresponding to the mth transmission peak can be found as

λm = 2nd/m   (2.3.12)

By changing the length of the cavity, this peak transmission wavelength will move. When the cavity length is scanned by an amount of

δd = λm/(2n)   (2.3.13)

the mth transmission peak frequency is scanned over one entire free spectral range. This is the basic mechanism of making an OSA using an FPI. To measure the signal spectral density over a wavelength band corresponding to an FSR, we only need to sweep the cavity length by approximately half a wavelength. This mechanical scanning can usually be accomplished using a voltage-controlled piezoelectric transducer (PZT); thus, the mirror displacement (or equivalently, the change in cavity length d) is linearly proportional to the applied voltage. Fig. 2.3.6 shows a block diagram of optical spectrum measurement using a scanning FPI. The FPI is driven by a sawtooth voltage waveform, and therefore, the FPI mirror displacement is linearly scanned. As a result, the peak transmission frequency of the FPI transfer function is also linearly scanned. A photodiode is used at the FPI output to convert the optical signal into an electrical waveform, which is then displayed on an oscilloscope. To synchronize with the mirror scanning, the oscilloscope is triggered by the sawtooth waveform. As shown in Fig. 2.3.7, if the swing of the driving voltage is between V0 and V1, the frequency scan of the transmission peak will be from f0 to f1 with

f1 − f0 = ηPZT(V1 − V0)·(2n/λ)·FSR   (2.3.14)

Fig. 2.3.6 Block diagram of optical spectrum measurement using a scanning FPI.

Fig. 2.3.7 (A) FPI driving voltage waveform and (B) transmission peak frequency versus time.

where ηPZT is the PZT efficiency, defined as the ratio between the mirror displacement and the applied voltage. In this spectrum measurement system, the spectral resolution is determined by the width of the FPI transmission peak, and the frequency coverage is limited by the FSR of the FPI. Because of the periodic nature of the FPI transfer function, the spectral content of the optical signal to be measured must be limited to within one FSR of the FPI. All the spectral components outside an FSR will be folded together, thus introducing measurement errors. This concept is illustrated in Fig. 2.3.8, where the frequency scanning of three FPI transmission peaks is displayed. The two signal spectral components are separated by more than an FSR, and they are selected by two different transmission peaks of the FPI. As a result, they are folded together in the measured oscilloscope trace. Because of the limited frequency coverage, a scanning FPI is often used to measure narrow optical spectra where a high spectral resolution is required. To convert the measured oscilloscope waveform p(t) into an optical spectrum p(f), a precise calibration is required to determine the relationship between time t and frequency f.

Fig. 2.3.8 Illustration of spectrum folding when the signal spectrum exceeds an FSR. (A) Original signal optical spectrum, (B) FPI response, and (C) measured oscilloscope trace.


Fig. 2.3.9 Calibration procedure converting time to frequency.

Theoretically, this can be done using Eq. (2.3.14) and knowledge of the sawtooth waveform V(t). In practice, this calibration can easily be done experimentally. As shown in Fig. 2.3.9, we can apply a sawtooth waveform with a high swing voltage so that the frequency of each FPI transmission peak swings wider than an FSR. A single-wavelength source needs to be used in the measurement. Since the FPI is scanning over more than an FSR, two signal peaks can be observed on the oscilloscope trace, corresponding to the spectrum measured by two adjacent FPI transmission peaks. Obviously, the frequency separation between these two peaks on the oscilloscope trace should correspond to one free spectral range of the FPI. If the measured time separation between these two peaks is Δt on the oscilloscope, the conversion between the time scale of the oscilloscope and the corresponding frequency should be

f = (FSR/Δt)·t   (2.3.15)

We need to point out that what we have just described is a relative calibration of the frequency scale. This allows us to determine the frequency scale for each time division measured on the oscilloscope. However, it does not provide absolute frequency calibration. In practice, since the frequency of an FPI transmission peak can swing across an entire FSR with a cavity length change of merely half a wavelength, absolute wavelength calibration is not a simple task. A cavity length change on the order of a micrometer can easily be introduced by a temperature change or a change in the mechanical stress. However, the relative frequency measurement can still be very accurate because the FSR is not affected by micrometer-level cavity length changes. Although absolute wavelength calibration can be done by adding a fixed wavelength reference, FPI-based optical spectrometers are traditionally not built to make accurate measurements of the absolute wavelength of the optical signal. Rather, they are often used to measure source linewidth, modulation sidebands, and other situations in which a high spectral resolution is required.
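A minimal sketch of the relative calibration of Eq. (2.3.15) is given below, assuming an example cavity length and a measured peak separation Δt; both numbers are placeholders rather than measured values.

```python
import numpy as np

c = 299792458.0
n, d = 1.0, 5e-3                 # assumed FPI cavity: refractive index and mirror separation (m)
fsr = c / (2 * n * d)            # free spectral range, Eq. (2.3.8) at normal incidence

delta_t = 2.0e-3                 # assumed measured time between the two reference peaks (s)
t = np.linspace(0.0, 4.0e-3, 5)  # oscilloscope time samples (s)
f_rel = (fsr / delta_t) * t      # relative optical frequency axis, Eq. (2.3.15)

for ti, fi in zip(t, f_rel):
    print(f"t = {ti * 1e3:3.1f} ms -> relative frequency = {fi / 1e9:5.1f} GHz")
```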

161

162

Fiber optic measurement techniques

2.3.3 Scanning FPI basic optical configurations There are various types of commercially available scanning FPIs, such as plano-mirror design, confocal-mirror design, and all-fiber design. Plano-mirror design uses a pair of flat mirrors that are precisely parallel to each other. As shown in Fig. 2.3.10, one of the two mirrors is mounted on a PZT so that its position can be controlled by an electrical voltage. The other mirror is attached to a metal ring that is mounted on a mechanical translation stage to adjust the angle of the mirror. The major advantage of this plano-mirror design is its flexibility. The cavity length can be adjusted by sliding one of the mirrors along the horizontal bars of the frame. However, in this configuration, the optical alignment between the two mirrors is relatively difficult. The requirement for the beam quality of the optical signal is high, especially when the cavity length is long. The finesse of this type of the FPI is usually less than 100. The confocal-mirror design uses a pair of concave mirrors whose radii of curvature are equal to half their separation, resulting in a common focus point in the middle of the cavity, as shown in Fig. 2.3.11. An FPI using this confocal configuration usually has much higher finesse than using plano-mirror configuration. The reason is that the focusing of the incident beam reduces possible finesse degradation due to mirror surface

Mirrors Input

Output

PZT ring

Adjusting screw

Metal frame

Fig. 2.3.10 Configuration of a scanning FPI using plano mirrors.

Mirrors Input

Output

PZT ring

Metal frame

Adjusting screw

Fig. 2.3.11 Configuration of a scanning FPI using concave mirrors.

Basic mechanisms and instrumentation for optical measurement

imperfections. This configuration also has better tolerance to the quality of the incident beam. However, the cavity length of the FPI is determined by the curvature of the mirror, which cannot be adjusted. Therefore, the FPI using confocal-mirror design has a fixed free spectral range and is not as flexible as that using plano mirrors. For an FPI using free-space optics, beam and mirror alignments are usually not easy tasks. Even a slight misalignment may result in a significant reduction of the cavity finesse. For applications in fiber-optic systems, an FPI based on all-fiber configuration is attractive for its simplicity and compactness. An example of an all-fiber FPI is shown in Fig. 2.3.12 (Clayton et al., 1991). In this configuration, micromirrors are made in the fiber, and a small air gap between fibers makes it possible for the scanning of cavity length. Antireflection (AR) coating is applied on each side of the air gap to eliminate the unwanted reflection. The entire package of an all-fiber FPI can be made small enough to mount onto a circuit board. Because the optical signal within the cavity is mostly confined inside the single-mode optical fiber and it does not need additional optical alignment, an all-fiber FPI can provide much higher finesse compared to free-space PFIs. A state-of-the-art all-fiber FPI can provide a finesse of as high as several thousand.

2.3.4 Optical spectrum analyzer using the combination of grating and FPI In the last two sections, we have discussed OSAs based on diffractive gratings and FPIs. A grating-based OSA has wide wavelength coverage, but the spectral resolution is limited by the groove-line density of the grating and the size of the collimated optical beam on the grating. A desktop OSA can usually provide 0.05 nm spectral resolution, and a miniature OSA using photodiode array provides about 0.25 nm resolution in a 1550 nm wavelength window. On the other hand, a scanning FPI-based OSA can have much higher spectral resolution by using long cavity length. However, since the transfer function of an FPI is periodic, the spectral coverage is limited to a free spectral range. The ratio between the wavelength coverage and spectral resolution is equal to the finesse.

PZT

AR

Fiber

Steel Bracket

Steel Bracket Mirror

PZT

Fig. 2.3.12 All-fiber FPI.

Mirror Glass ferrule

163

164

Fiber optic measurement techniques

Input FPI

Spectrum preselected by FPI

Grating

Sweep

δφ

Data acquisition and control

PD array

Electrical scanning

Fig. 2.3.13 High-resolution OSA design combining an FPI and a grating.

To achieve both high spectral resolution and wide wavelength coverage, we can design an OSA that combines an FPI and a grating. The operation principle of the high-resolution OSA using the combination of an FPI and a grating is illustrated in Fig. 2.3.13. The optical signal is transformed into discrete narrowband slices by the FPI with the wavelength separation equal to the FSR between each other. After the transmission grating, each wavelength slice is dispersed into a light beam at a certain spatial angle with the angular width of ∗ ∗ Δφ. Theoretically, Δφ is determined by the convolution between spectral bandwidth of the FPI and the angle resolution of the grating. Because of this angular width, the beam has a width ΔL when it reaches the surface of the photodiode array. It is assumed that each photodiode has a width of d, and then there will be n ¼ ΔL/d photodiodes simultaneously illuminated by this light beam. Adding up the photocurrents generated by all the n associated photodiodes, an electrical signal can be obtained that is linearly proportional to the optical power within the optical bandwidth selected by each transmission peak of the FPI. As shown in the example of Fig. 2.3.13, there are m light beams (the figure shows m ¼ 4), each corresponding to a specific transmission peak of the FPI transfer function. Therefore, for each FPI setting, the signal optical spectral density can be measured simultaneously at m separate wavelengths. If each transmission peak of the FPI is linearly swept across an FSR, the measurement will be able to cover a continuous wavelength range of m times the FSR. In this OSA, the frequency sweep of an FPI is converted into angular sweep of the light beams at the output of the grating and thus the spatial position on the surface of the photodiode array. With proper signal processing, this OSA can provide a high spectral resolution, as well as a wide spectral coverage (Hui, 2004, patent 6,697,159).

2.4 Mach-Zehnder interferometers The Mach-Zehnder interferometer (MZI) is one of the oldest optical instruments, in use for more than a century. The basic configuration of an MZI is shown in Fig. 2.4.1, which

Basic mechanisms and instrumentation for optical measurement

Mirror

Mirror

Output

Input Beam splitter

Beam combiner

Fig. 2.4.1 Basic configuration of a Mach-Zehnder interferometer.

consists of two optical beam splitters (combiners) and two mirrors to alter beam directions. The beam splitter splits the incoming optical signal into two equal parts. After traveling through two separate arms, these two beams recombine at the beam combiner. The nature of beam interference at the combiner depends, to a large extent, on the coherence length of the optical signal which will be discussed later in Section 2.6 (e.g., Eq. 2.6.7). If the path length difference between these two arms is shorter than the coherent length of the optical signal, the two beams interfere with each other at the combiner coherently. If the two beams are in phase at the combiner, the output optical power is equal to the input power; otherwise, if they are antiphase, the output optical power is equal to zero. In fiber-optic systems, the beam splitter and the combiner can be replaced by fiber couplers; therefore, all-fiber MZIs can be made. Because of the wave guiding mechanism, a fiberbased MZI can be made much more compact than an MZI using free space optics. In each case, the basic components to construct an MZI are optical couplers and optical delay lines.

2.4.1 Transfer matrix of a 2 × 2 optical coupler In Fig. 2.4.1, the beam splitter and the beam combiner are essentially the same device, and they can be generalized as 2  2 optical couplers. As shown in Fig. 2.4.2, a 2  2 optical coupler has two inputs and two outputs. The input/output relationship of both free-space and fiber-based 2  2 couplers as shown in Fig. 2.4.2 can be represented as a transfer matrix (Pietzsch, 1989; Green, 1991): b1 s11 s12 a1 ¼ (2.4.1) b2 s21 s22 a2 b2

b1

a2

b2

b1

a1

(a)

a1

a2

(b)

Fig. 2.4.2 2  2 optical couplers with (A) free-space optics and (B) fiber optics.

165

166

Fiber optic measurement techniques

where a1, a2 and b1, b2 are electrical fields at the two input ports, respectively. This device is reciprocal, which means that the transfer function will be identical if the input and the output ports are exchanged; therefore, s12 ¼ s21

(2.4.2)

We also assume that the coupler has no loss so that energy conservation applies; therefore, the total output power is equal to the total input power: jb1 j2 + jb2 j2 ¼ ja1 j2 + ja2 j2

(2.4.3)

Eq. (2.4.1) can be expanded into jb1 j2 ¼ js11 j2 ja1 j2 + js12 j2 ja2 j2 + s11 s∗12 a1 a∗2 + s∗11 s12 a∗1 a2

(2.4.4)

jb2 j2 ¼ js21 j2 ja1 j2 + js22 j2 ja2 j2 + s21 s∗22 a1 a∗2 + s∗21 s22 a∗1 a2

(2.4.5)

and

Where * indicates complex conjugate. Using the energy conservation condition, Eqs. (2.4.4) and (2.4.5) yield js11 j2 + js12 j2 ¼ 1

(2.4.6)

js21 j2 + js22 j2 ¼ 1

(2.4.7)

s11 s∗12 + s21 s∗22 ¼ 0

(2.4.8)

and

Since s12 ¼ s21, Eqs. (2.4.6) and (2.4.7) give js11 j2 ¼ js22 j2

(2.4.9)

For a coupler, if ε is the fraction of the optical power coupled from Input Port 1 to Output Port 2, the same coupling ratio will be from Input Port 2 to Output Port 1. For pffiffiffi the optical field, this coupling ration is ε. Then the optical field coupling fromffi Input pffiffiffiffiffiffiffiffiffiffi Port 1 to Output Port Port pffiffiffiffiffiffiffiffiffiffi ffi 1 or from Input pffiffiffiffiffiffiffiffiffiffi ffi 2 to Output Port 2 will be 1  ε. If we assume s11 ¼ 1  ε is real, s22 ¼ 1  ε should also be real because of the symmetry assumption. Then we can assume there is a phase shift for the cross-coupling term, pffiffiffi s12 ¼ s21 ¼ εe jΦ . The transfer matrix is then " pffiffiffiffiffiffiffiffiffiffiffi jΦ pffiffiffi # a1 b1 ε 1ε e ¼ jΦ pffiffiffi pffiffiffiffiffiffiffiffiffiffiffi (2.4.10) b2 e ε 1  ε a2

Basic mechanisms and instrumentation for optical measurement

Using Eq. (2.4.8), we have ejΦ + ejΦ ¼ 0

(2.4.11)

The solution of Eq. (2.4.11) is Φ ¼ π/2 and ejΦ ¼ j. Therefore, the transfer matrix of a 2  2 optical coupler is # " pffiffiffiffiffiffiffiffiffiffiffi pffiffiffi a1 b1 1ε j ε ¼ pffiffiffi (2.4.12) pffiffiffiffiffiffiffiffiffiffiffi b2 j ε 1  ε a2 An important conclusion of Eq. (2.4.12) is that there is a 90-degree relative phase shift between the direct pass (s11, s22) and cross-coupling (s12, s21). This property is important for several applications, including phase-balanced detection in coherent receivers.

2.4.2 Transfer function of an MZI An MZI is composed of two optical couplers and optical delay lines. It can be made by free space optics as well as guided wave optics. Using 2  2 couplers, a general MZI may have two inputs and two outputs, as shown in Fig. 2.4.3. The transfer function of an MZI can be obtained by cascading the transfer functions of two optical couplers and that of the optical delay line. Suppose the optical lengths of the delay lines in arm1 and arm2 are n1L1 and n2L2, respectively, the transfer matrix of the two delay lines is simply c1 exp ðjφ1 Þ 0 b1 ¼ (2.4.13) 0 exp ðjφ2 Þ b2 c2 where φ1 ¼ (2π/λ)n1L1 and φ2 ¼ (2π/λ)n2L2 are the phase delays of the two delay lines. If we further assume that the two optical couplers are identical with a power splitting ratio of ε, # # " pffiffiffiffiffiffiffiffiffiffiffi pffiffiffi " pffiffiffiffiffiffiffiffiffiffiffi pffiffiffi a1 d1 ejφ1 0 1ε j ε 1ε j ε ¼ pffiffiffi (2.4.14) pffiffiffiffiffiffiffiffiffiffiffi pffiffiffiffiffiffiffiffiffiffiffi p ffiffi ffi jφ d2 e 2 j ε 1  ε a2 j ε 1ε 0 Delay line

a1

a2

Coupler 1

b1

c1

b2

c2

Coupler 2

Fig. 2.4.3 Illustration of an MZI with two inputs and two outputs.

d1

d2

167

168

Fiber optic measurement techniques

To achieve the highest extinction ratio, most MZIs use a 50% power splitting ratio for the optical couplers. In that case, ε ¼ 0.5 and the equation can be simplified as d1 a1 1 ðejφ1  ejφ2 Þ jðejφ1 + ejφ2 Þ ¼ (2.4.15) jφ jφ jφ jφ 2 jðe 1 + e 2 Þ ðe 1  e 2 Þ a2 d2 If the input optical signal is only at Input Port 1 and Input Port 2 is disconnected (a2 ¼ 0), the optical field at the two output ports will be    1 Δφ (2.4.16) d 1 ¼ ejφ1  ejφ2 a1 ¼ ejφ0 sin a 2 2 1 and d2 ¼

  jðejφ1 + ejφ2 Þ Δφ a a1 ¼ jejφ0 cos 2 1 2

(2.4.17)

where φ0 ¼ (φ1 + φ2)/2 is the average phase delay and Δφ ¼ (φ1  φ2) is the differential phase shift of the two MZI arms. Then the optical power transfer function from Input Port 1 to Output Port 1 is

d1

2 πf T 11 ¼ ¼ sin ðn L  n1 L 1 Þ (2.4.18) a1 a2 ¼0 c 2 2 and the optical power transfer function from Input Port 1 to Output Port 1 is

d2

2 πf T 12 ¼ ¼ cos ðn L  n1 L 1 Þ (2.4.19) a1 a2 ¼0 c 2 2 where f ¼ c/λ is the signal optical frequency and c is the speed of light. Obviously, the optical powers coming out of the two output ports are complementary such that T11 + T12 ¼ 1, which results from energy conservation, since we assumed that there is no loss. The transmission efficiencies T11 and T12 of an MZI are the functions of both the wavelength λ and the differential optical length Δl ¼ n2L2  n1L1 between the two arms. Therefore, an MZI can usually be used in two different categories: optical filter and electro-optic modulator.

2.4.3 MZI used as an optical filter The application of MZI as an electro-optic modulator was discussed in Chapter 1, where the differential arm length can be modulated to modulate the optical transfer function. A more classic application of an MZI is as an optical filter, where wavelength-dependent characteristics of T11 and T12 are used, as illustrated in Fig. 2.4.4A. For example, if we assume that the two arms have the same refractive index n1 ¼ n2 ¼ 2.387 and the physical

Basic mechanisms and instrumentation for optical measurement

Transmission

1

(a)

0.8 0.6 0.4 0.2 0 1540

1541

1542

1543

1544 1545 1546 Wavelength (nm)

λ1 λ2 λ3 λ4 λ5 λ6 λ7 λ8

1547

λ1

1548

λ5

λ3

1549

1550

λ7

Output 1

Input MZI

Output 2

(b)

λ2

λ4

λ6

λ8

Fig. 2.4.4 (A) MZI power transfer functions T11 (solid line) and T12 (dashed line) with n1 ¼ n2 ¼ 2.387, L1  L2 ¼ 0.5 mm. (B) An MZI is used as a wavelength interleaver.

length difference between the two arms is L1  L2 ¼ 0.5 mm, the wavelength spacing between adjacent transmission peaks of each output arm is Δλ 

λ2 ¼ 2nm n2 L 2  n1 L 1

and the wavelength spacing between adjacent transmission peaks of the two output ports is equal to Δλ/2 ¼ 1nm. In addition to using an MZI as an optical filter, it is also often used as a wavelength interleaver, as shown in Fig. 2.4.4B (Cao et al., 2004; Oguma et al., 2004). In this application, a wavelength division multiplexed optical signal with narrow channel spacing can be interleaved into two groups of optical signals each with a doubled channel spacing compared to the input.

2.5 Michelson interferometers The optical configuration of a Michelson interferometer is similar to that of an MZI except that only one beam splitter is used and the optical signal passes bidirectionally in the interferometer arms. The basic optical configuration of a Michelson interferometer is shown in Fig. 2.5.1. It can be made by free-space optics as well as fiber-optic components. In a free-space Michelson interferometer, a partial reflection mirror with 50% reflectivity is used to split the input optical beam into two parts. After traveling a distance, each

169

Fiber optic measurement techniques

E0

Scan

Ei

Scan

Mirror

170

L1 L2

Ei PZT

Coupler

Reflector

E0

Mirror

(a)

(b)

Fig. 2.5.1 Basic configurations of (A) free-space optics and (B) a fiber-optic Michelson interferometer. PZT, piezoelectric transducer.

beam is reflected to recombine at the partial reflection mirror. The interference between these two beams is detected by the photodetector. One of the two end mirrors is mounted on a translation stage so that its position can be scanned to vary the interference pattern. The configuration of a fiber-optics-based Michelson interferometer compares to the free-space version except that the partial reflection mirror is replaced by a 3-dB fiber coupler and the light beams are guided within optical fibers. The scan of the arm length can be accomplished by stretching the fiber in one of the two arms using a piezoelectric transducer. Because the fiber can be easily bent and coiled, the size of a fiber-optic interferometer can be much smaller. In free-space Michelson interferometers, there is no birefringence in the air, and polarization rotation is generally not a problem. However, for fiber-based Michelson interferometers, random birefringence in the fiber causes polarization rotation in each arm of the interferometer. The optical signals reflected from the mirrors in the two arms might not have the same polarization state when they combine at the fiber coupler. This polarization mismatch may cause significant reduction of the extinction ratio and degrade the performance of the interferometer. This problem caused by random birefringence of the fiber can be solved using Faraday mirrors, as shown in Fig. 2.5.2 (Kersey et al., 1991). The Faraday mirror is composed of a 45º Faraday rotator and a total reflection mirror. The polarization of the optical signal rotates 45 degree in each direction; therefore, EA EA⬘

EC

A

C

Delay

EF Faraday mirror

EC⬘ Faraday mirrors

EB EB⬘

B

D

ED ED⬘

E// E⊥

EG

Fig. 2.5.2 Illustration of a fiber Michelson interferometer using Faraday mirrors.

Basic mechanisms and instrumentation for optical measurement

the total polarization rotation is 90 degree after a roundtrip in the Faraday mirror. Because of their applications in optical measurement and optical sensors, standard Faraday mirrors with fiber pigtails are commercially available.

2.5.1 Operating principle of a Michelson interferometer For the fiber directional coupler using the notations in Fig. 2.5.2, suppose the power splitting ratio is ε, the optical field transfer matrix is #" ! # " ! # " pffiffiffiffiffiffiffiffiffiffiffi pffiffiffi 1ε j ε EA EC ¼ pffiffiffi (2.5.1) pffiffiffiffiffiffiffiffiffiffiffi ! ! j ε 1  ε EB ED Optic fields are generally vectorial, and their polarization states may change along the fiber because of the birefringence. Because of fiber propagation delays in the two arms and their polarization rotation, the relationship between reflected fields (EC’, ED’) and input fields (EC, ED) can be expressed in a transfer matrix form: "! # " #" ! #  U == exp ðjωτ1  jϕ1 Þ 0 EC EF ¼ (2.5.2)  ==  ! ! 0 V exp ðjωτ2  jϕ2 Þ E D EG where τ1 and τ2 are propagation delays of the two fiber arms, ω is the optical frequency, φ1 and φ2 are initial optical phases, and U// and V // are tensors representing the birefringence effect in the two fiber arms seen by the optical signal in the // polarization. At the end of each fiber arm, a Faraday mirror with the reflectivity of R1 and R2, respectively, reflects the optical signal and also rotates its polarization state by 90 degree. The backward propagating optical field will see the birefringence of the fiber in the perpendicular polarization compared to the forward propagating signal. The transfer matrix for the backward propagated optical fields is "! # " ! # ½U ? R1 exp ðjωτ1  jϕ1 Þ 0 E C0 EF ¼ (2.5.3) ! ! ? 0 ½V R2 exp ðjωτ2  jϕ2 Þ E G E D0 where U? and V? are tensors representing the birefringence effect in the two fiber arms seen by the optical signal in the ? polarization. Combining Eqs. (2.5.2) and (2.5.3), the roundtrip transmission matrix is "! # E C0 !

E D0

"

¼

 U == U ? R1 exp ðj2ωτ1  jϕ1 Þ 0

0

 == ?  V V R2 exp ðj2ωτ2  jϕ2 Þ

#" ! # EC ! ED

(2.5.4)

Because of the reciprocity principle, after a roundtrip, both [U//U?] and [V//V?] will be independent of the actual birefringence of the fiber. Therefore, in the following

171

172

Fiber optic measurement techniques

analysis, the polarization effect will be neglected and [U//U?] ¼ [V//V?] ¼ 1 will be assumed. Finally, the backward-propagated optical fields passing through the same fiber directional coupler are #" ! # " ! # " pffiffiffiffiffiffiffiffiffiffiffi pffiffiffi 1ε j ε EC E A0 ¼ pffiffiffi (2.5.5) pffiffiffiffiffiffiffiffiffiffiffi ! ! j ε 1  ε ED E B0 Therefore, the overall transfer matrix from the input (EA, EB) to the output (EA’, EB’) is

"! # E A0

" pffiffiffiffiffiffiffiffiffiffiffi 1ε ¼ pffiffi j E

# pffiffi j E pffiffiffiffiffiffiffiffiffiffiffi ! 1ε E B0 " pffiffiffiffiffiffiffiffiffiffiffi R1 exp ðj2ωτ1  j2ϕ1 Þ 0 1ε pffiffi 0 R2 exp ðj2ωτ2  j2ϕ2 Þ j E

#" ! # pffiffi j E EA pffiffiffiffiffiffiffiffiffiffiffi ! 1  ε EB (2.5.6)

For unidirectional application, this fiber Michelson interferometer can be used as an optical filter. Letting EB ¼ 0, the filter transfer function is

! 2

0

E T ¼ !B ¼ εð1  εÞjR1 exp ðj2ωτ1  j2ϕ1 Þ + R2 exp ðj2ωτ2  j2ϕ2 Þj2

E A   ¼ εð1  εÞ R21 + R22 + 2R1 R2 cos ð2ωΔτ + 2ϕÞ (2.5.7) where Δτ ¼ τ1 τ2 is the differential roundtrip delay and ϕ ¼ (ϕ1  ϕ2)/2 is the initial phase difference between the two arms. If we further assume that the two Faraday mirrors have the same reflectivity and, neglecting the propagation loss of the fiber, R1 ¼ R2 ¼ R, then, T ¼ 4εð1  εÞR2 cos 2 ½ωðτ1  τ2 Þ + ϕ ¼ 4εð1  εÞR2 cos2 ð2πf Δτ + ϕÞ

(2.5.8)

Except for the transmission loss term 4ε(1  ε), Eq. (2.5.8) is equivalent to an ideal MZI transfer function, and the infinite extinction ratio is due to the equal roundtrip loss in the two arms R1 ¼ R2. Fig. 2.5.3 shows the transmission loss of the Michelson interferometer as the function of the power splitting ratio of the coupler. Even though the minimum loss is obtained with a 3-dB (50%) coupling ratio of the fiber coupler, the extinction ratio of the Michelson interferometer transfer function is independent of the power splitting ratio of the fiber coupler. This relaxes the coupler selection criterion, and relatively inexpensive couplers can be used. Fig. 2.5.3 indicates that even with 10% variation of the power splitting ratio in the fiber coupler, the additional insertion loss for the filter is less than 0.2 dB.

Basic mechanisms and instrumentation for optical measurement

4*alpha(1-alpha) 0

Transmission loss (dB)

−0.1 −0.2 −0.3 −0.4 −0.5 −0.6 −0.7 −0.8 30

35

40

45 50 55 60 Power splitting ratio (%)

65

70

Fig. 2.5.3 MZ filter power loss caused by fiber coupler splitting ratio change. Fiber length difference 1cm 0

Power transfer function (dB)

−5 −10

R12 = 98%

−15

R22 = 68%

−20 −25

R22 = 78%

−30

R22 = 88%

−35 −40

R22 = 98%

−45 −50

0

2

4

6

8 10 12 Frequency (GHz)

14

16

18

20

Fig. 2.5.4 Power transfer function for different reflectivities of Faraday mirrors.

Fig. 2.5.4 shows a typical power transfer function of a Michelson interferometerbased optical filter with a 1-cm optical path length difference between the two arms, which corresponds to a free spectrum range of 10 GHz. The four curves in Fig. 2.5.4 were obtained with R21 ¼ 98% while R22 ¼ 98%, 88%, 78%, and 68%, respectively. Obviously, when the reflectivity difference between the two arms increases, the filter extinction ratio decreases.

173

Fiber optic measurement techniques

80 70 Power extinction ratio (dB)

174

60 50 40 30 20 10 0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 Power reflectivity ratio between the two reflectors

1

Fig. 2.5.5 Extinction ratio of the power transfer function as the function of power reflectivity ratio R21/R22 between the two arms.

Fig. 2.5.5 systemically shows the extinction ratio of the transfer function versus the power reflectivity ratio (R1/R2)2 between the two arms. To guarantee the extinction ratio of the transfer function is higher than 30 dB, for example, the difference of power reflectivities between the two arms must be less than 10%.

2.5.2 Measurement and characterization of Michelson interferometers The characterization of the Michelson interferometer power transfer function is straightforward using an optical spectrum analyzer, as shown in Fig. 2.5.6. In this measurement setup, a wideband light source is used to provide the probing signal, which can simply be an EDFA without the input. Since the optical spectrum from the wideband source is flat, the measurement of the optical spectrum at the interferometer output is equivalent to the filter power transfer function. For accurate measurement, optical isolators have to be used Wideband light source

Isolator

Isolator

Optical spectrum Analyzer

Interferometer

FM

FM

Fig. 2.5.6 Experimental setup to measure the transfer function of a Michelson interferometer. FM, Faraday mirror.

Power (5 dB/div)

Basic mechanisms and instrumentation for optical measurement

1550

1550.5 Wavelength (nm)

1551

Fig. 2.5.7 Optical spectrum measured by an optical spectrum analyzer with 0.05 nm spectral resolution. Dashed curve is the theoretical fit.

at both the input and the output side of the interferometer to prevent unwanted reflections, which may potentially introduce wavelength-dependent errors. Fig. 2.5.7 shows an example of the optical spectrum at the output of a Michelson interferometer measured by an optical spectrum analyzer with 0.05nm spectral resolution. The free spectral range of this interferometer is approximately 0.25 nm, and the extinction ratio shown in Fig. 2.5.7 is only approximately 16 dB. Part of the reason for the poor extinction ratio measured by a grating-based OSA is due to the insufficient spectral resolution. The theoretically sharp rejection band shown by the dashed cure in Fig. 2.5.7 is smoothed out by the spectrum analyzer. Therefore, a scanning Fabry-Perot interferometer with high spectral resolution may be more appropriate for this measurement.

2.5.3 Sagnac loop mirror Sagnac loop mirror is another classic optical interferometer based on a 2  2 coupler. The configuration of a fiber-optic Sagnac loop mirror is shown in Fig. 2.5.8 in which the two output ports of a 2  2 fiber directional coupler are connected. The input optical field Ea is split into Ec and Ed, and they propagate in the clockwise (CW) and counterclockwise (CCW) directions, respectively. After traversing the fiber loop with propagation delay and attenuation, Ed becomes Ec0 , and Ec becomes Ed0 , and they reenter the 2  2 coupler from the right-hand side to become Ea0 , and Eb0 , as shown in Fig. 2.5.8. The field reflectivity and transmissivity of the Sagnac loop mirror are then defined as Ea0 /Ea, and Eb0 /Ea, respectively. Light source

CW CCW

n·L

Fig. 2.5.8 Configuration of a fiber-optic Sagnac loop.

175

176

Fiber optic measurement techniques

Assume that the coupler has a power coupling coefficient ε. The length of the fiber in the loop is L with a refractive index n. Based on Eq. (2.5.1), the field transfer function of the 2  2 fiber coupler is # " pffiffiffiffiffiffiffiffiffiffiffi pffiffiffi Ea Ec 1ε j ε ¼ pffiffiffi (2.5.9) pffiffiffiffiffiffiffiffiffiffiffi Ed j ε 1ε 0 pffiffiffiffiffiffiffiffiffiffiffi pffiffiffi that is, Ec ¼ 1  εE a , Ed ¼ j εE a: After the CW and CCW waves traveling through the fiber delay line in the opposite directions, neglecting the propagation loss, the optical reentering the coupler are, pffiffiffiffiffiffiffiffiffiffiffifields pffiffiffi jϕ 0 jϕ 0 jϕ jϕ E c ¼ E d e ¼ j εE a e and Ed ¼ Ec e ¼ 1  εE a e , where ϕ0 ¼ (2π/λ)nL is the linear propagation phase shift. Thus, # pffiffiffi 0 " pffiffiffiffiffiffiffiffiffiffiffi pffiffiffi Ea j ε 1ε j ε (2.5.10) ¼ pffiffiffi pffiffiffiffiffiffiffiffiffiffiffi pffiffiffiffiffiffiffiffiffiffiffi E a ejϕ0 E0b 1E j ε 1ε pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi so that, E 0a ¼ j2 εð1  εÞE a ejϕ0 , and Eb0 ¼ (1  2ε)Eae jϕ0. Therefore, the power reflectivity of the Sagnac loop is

2 R ¼ E 0a =E a ¼ 4εð1  εÞ (2.5.11) and the power transmissivity is

2 T ¼ E 0b =Ea ¼ ð1  2εÞ2

(2.5.12)

Because we have neglected both the excess loss of the directional coupler and propagation loss of the fiber in the loop, this Sagnac loop should be energy conserved. In fact, based on Eqs. (2.5.11) and (2.5.12), this energy conservation rule can be verified as T + R ¼ ð1  2εÞ2 + 4εð1  εÞ ¼ 1  4ε + 4ε2 + 4ε  4ε2 ¼ 1 Fig. 2.5.9 shows the power transmissivity (solid line) and reflectivity (dashed line) of a Sagnac loop as the function of the power splitting ratio ε of the 2  2 coupler. When a 3-dB coupler is used with ε ¼ 0.5, the Sagnac loop has R ¼ 1 and T ¼ 0. That is equivalent to a perfect reflector or an ideal mirror. If a fiber-optic Sagnac loop is used in nonlinear application, for example, if the signal is a femtosecond pulse train, nonlinear phase shift due to the power-dependent refractive index will have to be considered. In such a case, the optical fields reentering the 2  2 coupler pffiffiffi after traversing the loop in the opposite directions are E 0c ¼ Ed ejϕCCW ¼ j εE a ejϕCCW p ffiffiffiffiffiffiffiffiffiffi ffi and E0d ¼ Ec ejϕCW ¼ 1  εEa ejϕCW . The phase shift of the CW and the CCW proppffiffiffi agated waves in the loop depend on their respective power levels, Ed ¼ j εE a ϕCW ¼ ϕ0 + γ jE c j2 L ¼ ϕ0 + γ ð1  εÞjEa j2 L ¼ ϕ0 + ð1  εÞϕNL

Power transfer functions (dB)

Basic mechanisms and instrumentation for optical measurement

1

R

0.8

0.6

0.4

T

0.2

0 0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

Power splitting ratio Fig. 2.5.9 Sagnac loop power transmissivity (solid line) and reflectivity (dashed line) as the function of the power splitting ratio ε of the 2  2 coupler.

ϕCCW ¼ ϕ0 + γ jE d j2 L ¼ ϕ0 + γεjEa j2 L ¼ ϕ0 + εϕNL where ϕNL ¼ γ jE a j2 L

(2.5.13)

is the nonlinear phase shift. γ ¼ 2πn2/(Aeff λ) is the nonlinear parameter of the fiber, n2 is the nonlinear refractive index and Aeff is the effective core area of the fiber, λ is the wavelength of the optical signal, and ϕ0 is linear propagation phase delay of the fiber loop. Thus, the field transfer matrix shown in Eq. (2.5.10) for the linear system becomes # pffiffiffi 0 " pffiffiffiffiffiffiffiffiffiffiffi pffiffiffi Ea j εejϕCCW 1ε j ε ¼ pffiffiffi Ea (2.5.14) pffiffiffiffiffiffiffiffiffiffiffi pffiffiffiffiffiffiffiffiffiffiffi jϕ E0b 1  εe CW j ε 1ε pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi so that E 0a ¼ j2 εð1  εÞE a ejϕCCW , and Eb0 ¼ (1  2ε)Eae jϕCW The nonlinear power reflectivity and transmissivity of the Sagnac loop are (Doran and Wood, 1988)

0 2

E R ¼

a

¼ 2εð1  εÞf1 + cos ½ð1  2εÞϕNL g (2.5.15) Ea

0 2

E T ¼

b

¼ 1  2εð1  εÞf1 + cos ½ð1  2εÞϕNL g (2.5.16) Ea For a standard single-mode fiber with a nonlinear refractive index n2 ¼ 2.35  1020m2/W, with an effective core area Aeff ¼ 8.1  1011m2, and at a signal wavelength

177

Fiber optic measurement techniques

1

Power transfer functions (dB)

178

0.8

R 0.6

0.4

T

0.2

0 0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

Power splitting ratio Fig. 2.5.10 Power reflectivity (thick lines) and transmissivity (thin lines) of a 0.5-m Sagnac loop as the function of power splitting ratio ε, the three different signal peak power levels of 0 W, 5 kW, and 10 kW.

λ ¼ 1560nm, the nonlinear parameter is approximately γ ¼ 2πn2/(Aeffλ) ¼ 1.2m1kW1. This means, for a loop length of L ¼ 1 m, each kW of the signal optical power will introduce 1.2 rad of nonlinear phase shift. Fig. 2.5.10 shows the reflectivity (thick lines) and transmissivity (thin lines) of a L ¼ 0.5 m long Sagnac loop mirror with signal optical power levels of 0 W (solid lines), 5 kW (dashed lines), and 10 kW (dotted lines) as the function of the power splitting ratio of the fiber coupler. Note that with a 3-dB fiber coupler (ε ¼ 0.5), both the reflectivity and the transmissivity are independent of the signal power level, with R ¼ 1 and T ¼ 0. This is because with ε ¼ 0.5, CW and CCW waves have the same optical power so that ϕCW ¼ ϕCCW, and the loop phase is perfectly balanced between waves propagating in the opposite directions. The impact of nonlinear phase on the power transfer function of a Sagnac loop mirror is more significant when the splitting ratio ε is further away from 0.5, as shown in Fig. 2.5.10. Fig. 4.3.11 shows the power reflectivity and transmissivity of the L ¼ 0.5 m Sagnac loop mirror as the function of the optical signal peak power inside the loop for three different power splitting ratios of ε¼ 0.55, 0.6, and 0.65. Fig. 4.3.11 indicates that when used in the reflection mode, a Sagnac loop mirror can provide nonlinear saturation as the power reflectivity is reduced with the increase of the signal optical power. On the other hand, when used in the transmission mode, a Sagnac loop can saturable transmission loss. This can be used to squeeze time-domain optical pulses (Fig. 2.5.11). Similar to fiber-based Michelson and Mach-Zehnder interferometers, birefringenceinduced signal polarization rotation is an important issue as the optical interference requires the match of the polarization state. For most applications, the length of the Sangac loop does not need to be very long, so that polarization-maintaining (PM) fiber may be used to avoid the need of polarization control.

Basic mechanisms and instrumentation for optical measurement

Power transfer funcons (dB)

1

= 0.55 0.8

R = 0.6 = 0.65

0.6

0.4

= 0.65 = 0.6

T

0.2

= 0.55 0 0

2

4

6

8

10

12

14

16

18

20

Signal peak power (kW) Fig. 2.5.11 Power reflectivity (thick lines) and transmissivity (thin lines) of a 0.5 m Sagnac loop as the function of signal peak power level for three different 2  2 coupler power splitting ratio: ε ¼ 0.55, 0.6, and 0.65.

2.6 Optical wavelength meter In optical communication and optical sensor systems, wavelength of the laser source is an important parameter, especially when multiple wavelengths are used in the system. The wavelength of an optical signal can be measured by an OSA; however, the measurement accuracy is often limited by the spectral resolution of the OSA, which is usually in the order of 0.1 nm for a compact version. The most popular optical wavelength meter is based on a Michelson interferometer, which is structurally simpler than an OSA and can easily be made into a small package using fiber-optic components (Snyder, 1982; Monchalin et al., 1981). A Michelson interferometer-based wavelength meter can typically provide a measurement accuracy of better than 0.001 nm. An optical wavelength meter can be built from a simple Michelson interferometer by scanning the optical length of one of the two interferometer arms. Based on the Michelson interferometer configuration shown in Fig. 2.6.1, assuming that the input optical field is Ei, the power splitting ratio of the coupler is ε, and the optical lengths are L1 and L2, for the two arms, then the optical field reaches the receiving photodetector is pffiffiffiffiffiffiffiffiffiffiffipffiffiffi E0 ¼ 1  ε εE i ½ exp jð2βL 1 + ϕ1 Þ + exp jð2βL 2 + ϕ2 Þe jωt



pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ϕ1 + ϕ2 ϕ1  ϕ2 jωt cos βΔL + e ¼ 2 εð1  εÞE i exp j βðL 1 + L 2 Þ + (2.6.1) 2 2 where β ¼ 2πn/λ is the propagation constant, n is the refractive index, ΔL ¼ L1  L2 is the length difference between the two arms, and ω is the optical frequency. φ1 and φ2 are the initial phase shift of the two arms, which can be introduced by the beam splitter as well as the reflecting mirrors.

179

Fiber optic measurement techniques

Detector Scan

E0 Ei L1

Mirror

180

L2 Mirror

Fig. 2.6.1 Basic configurations of a wavelength meter based on a Michelson interferometer.

Since the photodiode is a square-law detection device, the photocurrent generated by the photodiode is directly proportional to the optical power: I ¼ ηjE0 j2 ¼ 2εð1  εÞP i ηf1 + cos ð2β + ΔϕÞg

(2.6.2)

where η is the detector responsivity and Δϕ ¼ (ϕ1  ΔLϕ2) is the initial phase difference between the two arms when ΔL ¼ 0. Obviously, if the path length of one of the two arms is variable, the photocurrent changes with ΔL.

2.6.1 Operating principle of a wavelength meter based on Michelson interferometer In the simplest case, the optical signal has a single wavelength λ. Eq. (2.6.2) indicates that when the length of ΔL scans, the photocurrent I passes through maximums and minimums alternatively. By analyzing the photocurrent I as a function of ΔL, the signal wavelength λ can be determined. For example, every time the photocurrent crosses the half-maximum point, Eq. (2.6.2) becomes cos(4πΔL/λ + Δϕ) ¼ 0. The interval between adjacent crossings δL, as shown in Fig. 2.6.2, can be recorded, satisfying the following equation: δL ¼π λ and therefore, the signal wavelength can be found as λ ¼ 4δL. In principle, to find the signal wavelength, the arm length of the interferometer only needs to scan for a half wavelength. However, in practice, because of the measurement uncertainty and noise associated with the optical signal, the arm length usually has to scan for much longer than a wavelength. This long scan length provides multiple measures of δL, and accurate wavelength estimation can be obtained by averaging the measured δL values. The wavelength measurement technique we described is also known as fringe counting. The key to this measurement technique is the knowledge of the exact differential length variation of the interferometer arm. The initial phase difference Δϕ in Eq. (2.6.2) does not make a real impact as long as it does not change with time. 4π

Basic mechanisms and instrumentation for optical measurement

1 0.9 Normalized photo-current

0.8 0.7 0.6 dL

0.5 0.4 0.3 0.2 0.1 0 −10

−8

−6

−4

−2

0 DL (μm)

2

4

6

8

1 0

Fig. 2.6.2 Normalized photocurrent versus differential arm length. Signal wavelength λ ¼ 1550 nm.

So far, we have discussed the simplest case where the signal has only a single wavelength. Now let’s look at the case when the optical signal has two wavelength components. Assuming that these two wavelength components have an equal amplitude Ei/2 and their frequencies are ω1 and ω2, respectively, the combined field at the photodiode is   exp ½ jβ1 ðL 1 + L 2 Þ exp ð jω1 tÞ cos ðβ1 ΔL Þ + E 0 ¼ 0:5E i (2.6.3) exp ½ jβ2 ðL 1 + L 2 Þ exp ð jω2 tÞ cos ðβ2 ΔL Þ where the initial phase difference Δϕ is neglected for simplicity and a 50% splitting ratio of the optical coupler is assumed. The photocurrent can then be found as   cos ð2β1 ΔL Þ + cos ð2β2 ΔL Þ I ¼ ηP i 1 + (2.6.4) 2 In obtaining Eq. (2.6.4), we have removed the cross-term which contains a timedependent factor exp[ j(ω1  ω2)t]. This fast-changing random phase can be averaged out because of the much slower moving speed of the interferometer arms. Fig. 2.6.3 shows the normalized photocurrent versus differential arm length ΔL, and the two wavelength components in the optical signal are λ1 ¼ 1350 nm and λ2 ¼ 1550 nm. In this case, a simple fringe counting would not be sufficient to determine the wavelengths in the optical signal. However, Fig. 2.6.3 shows distinct features of the mixing between the two wavelength components. Applying a fast Fourier transformation (FFT) on the I(ΔL) characteristic given by Eq. (2.6.4), we should be able to determine the two distinct wavelengths within the optical signal.

181

Fiber optic measurement techniques

1

Normalized photo-current

182

0.8

0.6

0.4

0.2

0 −20

−15

−10

−5

0 DL (μm)

5

10

15

20

Fig. 2.6.3 Normalized photocurrent versus differential arm length. The optical signal has two equalamplitude wavelength components at λ1 ¼ 1350 nm and λ2 ¼ 1550 nm.

In general, to measure an optical signal with multiple wavelengths using a Michelson interferometer, the FFT is always used to determine the wavelengths that contained in the optical signal. This technique is indeed equivalent to analyzing the optical spectrum, and therefore, in principle, it can be used to construct an optical spectrum analyzer. However, an OSA made by a Michelson interferometer would only be suitable to measure optical signals with discrete wavelength components; it cannot be used to measure power spectral densities of optical noises. This will be discussed in the next section.

2.6.2 Wavelength coverage and spectral resolution In a wavelength meter based on the Michelson interferometer, the accuracy of determining the signal wavelength, to a large extent, depends on how accurately the differential length ΔL is measured during the scanning. Since the FFT is used to convert the measured data from the length (ΔL) domain into the frequency (f ) domain, the wavelength coverage and the spectral resolution of the wavelength meter will be determined by the step size and the total range of the scanning interferometer arm, respectively. 2.6.2.1 Wavelength coverage A good wavelength meter has to be able to cover a large wavelength range. For example, in an WDM system C-band extends from 1530 to 1560 nm. To measure all wavelength components within this 30 nm bandwidth, the wavelength coverage of the wavelength meter has to be wide enough. Because the wavelengths are determined by the Fourier transformation of the measured photocurrent versus differential arm length, a smaller step size of ΔL will result in a wider wavelength coverage of the measurement.

Basic mechanisms and instrumentation for optical measurement

Frequency domain

Time domain

dt

(a)

frange

(b)

Fig. 2.6.4 Relationship between (A) time-domain sampling interval δt and (B) frequency-domain coverage frange.

It is well known that to perform FFT on a time-domain waveform cos(ωt), if the time-domain sampling interval is δt as illustrated in Fig. 2.6.4, the corresponding frequency domain coverage will be frange ¼ 1/(2δt) after the FFT. It can be understood because a smaller time-domain sampling interval picks up fast variation features of the signal, and it is equivalent to a wider frequency bandwidth. Similarly, for the wavelength meter, the photocurrent is proportional to cos(2ωΔL/c) for each frequency component, as shown in Eq. (2.6.2). If the sampling interval (step size) of the differential arm length ΔL is δl, after the FFT, the frequency-domain coverage will be c f range ¼ (2.6.5) 4dl For example, in order for the wavelength meter to cover the entire C-band from 1530 nm to 1560 nm, frange has to be as wide as 3750 GHz. In this case, the step size of the differential arm length has to be δl 20μm. 2.6.2.2 Spectral resolution Another important parameter of a wavelength meter is its spectral resolution, which specifies its ability to identify two closely spaced wavelength components. Again, because the wavelengths are determined by the Fourier transformation of the measured photocurrent versus differential arm length, a larger traveling distance of the mirror will result in a finer wavelength resolution of the measurement. In general, for a time-domain waveform cos(ωt), if the total sampling window in the time domain is T, as illustrated in Fig. 2.6.5, the corresponding sampling interval in the frequency domain is fresolution ¼ 1/T after the FFT. Obviously, a longer time-domain window helps to pick up slow variation features of the signal, and it is equivalent to a finer frequency resolution. For a wavelength meter, the photocurrent is proportional to cos(2ωΔL/c) for each frequency component. If the differential arm length of the interferometer scans for a total amount of Lspan, after FFT the sampling interval in the frequency domain will be

183

184

Fiber optic measurement techniques

Frequency domain

Time domain

f resolution

T

(a)

(b)

Fig. 2.6.5 Relationship between (A) time-domain sampling window T and (B) frequency-domain resolution fresolution.

f resolution ¼

c 2L span

(2.6.6)

For example, in order for the wavelength meter to have a spectral resolution of 5 GHz, which is approximately 0.04 nm in a 1550-nm wavelength window, the interferometer arm has to scan for a total distance of Lspan 3cm. 2.6.2.3 Effect of signal coherence length A wavelength meter based on the Michelson interferometer utilizes beam interference between its two arms. One fundamental requirement for interference is that the two participating beams have to be mutually coherent. This requires that the path length difference between the two interferometer arms be shorter than the coherence length of the optical signal to be measured. For an optical signal with linewidth of Δflw, its coherence length is defined as c L coherent ¼ (2.6.7) πΔf lw In the derivation of Michelson interferometer Eq. (2.6.2), we have assumed that the phase difference between the optical signal traveled through the two arms is fixed and not varying with time: Δϕ ¼ constant. This is true only when the arm’s length difference is much less than the signal coherence length. If the arm’s length difference is longer than the signal coherence length, the coherent interference will not happen, and the photocurrent will become a random function of time. This restricts the distance that the mirror can scan such that c L span ≪ (2.6.8) πΔf lw Then, since the traveling distance of the mirror is limited by the coherence length of the optical signal, according to Eq. (2.6.6), the achievable spectral resolution will be limited to

Basic mechanisms and instrumentation for optical measurement

f resolution ≫

πΔf lw 2

(2.6.9)

Since the available spectral resolution has to be much wider than the spectral linewidth of the optical signal, Michelson interferometer-based wavelength meter is better suited to measure optical signals with discrete wavelength components. On the other hand, a wavelength meter may not be suitable to measure power spectral density of random optical noise, such as amplified spontaneous emission generated by optical amplifiers. This is one of the major differences between a wavelength meter and an OSA.

2.6.3 Wavelength calibration Throughout our discussion so far, it is important to note that the accuracy of wavelength measurement relies on the accurate estimation of the differential arm length ΔL. In practice, absolute calibration of mirror displacement is not easy to accomplish mechanically. Therefore, wavelength calibration using a reference light source is usually used in commercial wavelength meters. Fig. 2.6.6 shows the block diagram of a typical wavelength meter with an in situ calibration using a HeNe laser as the wavelength reference. The reference light is launched into the same system simultaneously with the optical signal to be measured. Two independent photodiodes are used to detect the signal and the reference separately. The outputs from both photodetectors are digitized through an A/D converter before performing the FFT operation. Since the wavelength of the reference is precisely known (for example λref ¼ 632.8 nm for a HaNe laser), this reference wavelength can be used to calibrate the differential arm length ΔL of the interferometer. Then, this ΔL is used as a determining parameter in the FFT of the signal channel to precisely predict the wavelength of the unknown signal.

Display

FFT

A/D

PD 1

PD 2 Moving

Mirror

Signal

HeNe laser Mirror

Fig. 2.6.6 Calibration of wavelength meter using a wavelength reference laser.

185

Fiber optic measurement techniques

2.6.4 Wavelength meter based on Fizeau wedge interferometer In a Michelson interferometer-based optical wavelength meter as described in the previous sections, the scan of the interference fringe is accomplished by mechanically scanning the position of one of the two mirrors. A Fizeau wedge interferometer-based wavelength meter equivalently spreads the interference fringe in the spatial domain and uses a photodiode array to simultaneously acquire the fringe pattern. Therefore, no moving part is needed. As illustrated in Fig. 2.6.7, in a Fizeau wedge interferometer-based wavelength meter, the incoming optical signal is collimated and launched onto a Fizeau wedge whose front surface is partially reflective and the back facet is a total reflector. The two reflective surfaces are not parallel, and there is a small angle ϕ between them. Therefore, the light beams reflected from the front and the back facets converge at the surface of the photodiode array, and the angle between these two reflected beams is 2ϕ. From simple geometric optics analysis, the normalized interference pattern formed on the onedimensional photodiode array can be obtained as the function of the position x:   2π P ðxÞ ¼ 1 + cos x + ϕ0 (2.6.10) Λ where Λ ¼ λ/(2ϕ) is the fringe spacing on the diode array and ϕ0 is a constant phase that represents the phase at x ¼ 0. Since the angle ϕ of the Fizeau wedge is fixed, a Fourier transform on the measured interference pattern P(x) determines the wavelength of the optical signal. Similar to the Michelson interferometer-based wavelength meter, the spectral resolution of the Fizeau wedge-based wavelength meter is determined by the total length of the diode array L and the angle of the Fizeau wedge ϕ: c f resolution ¼ (2.6.11) 2ϕL The overall range of wavelength coverage is determined by the width of each pixel l of the diode array c f range ¼ (2.6.12) 4ϕl

Fizeau wedge

x φ

2f Diode array

DSP and display

Optical signal

Data acquisition

186

Fig. 2.6.7 Optical configuration of a Fizeau wedge interferometer-based wavelength meter.

Basic mechanisms and instrumentation for optical measurement

For example, if a diode array has 1024 pixels and the width of each pixel is 25 μm, the angle of the Fizeau wedge is ϕ ¼ 10 degree, and the frequency resolution is about 33 GHz, which is approximately 0.26 nm in the 1550 nm wavelength window. The wavelength coverage range is about 138 nm. It is important to point out the difference between the spectral resolution and the wavelength measurement accuracy. The spectral resolution specifies the resolving power. For example, if the optical signal has two frequency components and their frequency difference is δf, to distinguish these two frequency components, the spectral resolution of the wavelength meter has to satisfy fresolution < δf. On the other hand, the wavelength measurement accuracy depends on both the frequency resolution and the noise level of the measurement system. Although the signal-to-noise ratio can be enhanced by averaging, it will inevitably slow the measurement. Since a Fizeau wedge-based wavemeter is based on the measurement of the spatial interference pattern introduced by the optical wedge, the optical flatness must be guaranteed. In addition, the wavefront curvature of the incident beam may also introduce distortions on the fringe pattern, which has to be compensated for. Furthermore, if the optical wedge is made by a solid silica glass, chromatic dispersion may also be a potential source of error, which causes different wavelength components to have different optical path length inside the wedge. A number of designs have been proposed and demonstrated to solve these problems (Gray et al., 1986; Reiser and Lopert, 1988). In comparison to the Michelson interferometer-based wavelength meter, the biggest advantage of a Fizeau wedge-based wavemeter is its potentially compact design, which does not need any moving part. The disadvantage is the spectral resolution, which is mainly limited by the size of the linear diode array and the total differential optical delay through the wedge.

2.7 Optical ring resonators and their applications An optical ring resonator is an interferometer based on multipass optical interference. High quality optical ring resonators have a wide range of applications, including tunable optical filtering, biomedical sensing, and electro-optic modulation. Reducing the roundtrip loss in the ring is the most important issue to improve the frequency selectivity of a ring resonator. In this section, we discuss the basic properties, major applications, and techniques to characterize optical ring resonators, as well as using the ring resonator as a tool for precision measurements.

2.7.1 Ring resonator power transfer function and Q-factor The basic configuration of a fiber-optic ring resonator is shown in Fig. 2.7.1A in which an optical ring is connected with a fiber for the input and output through a 2  2 optical coupler. The power transfer function of this 2-port optical device can be calculated based on the transfer function of a 2  2 optical coupler discussed earlier in this chapter.

187

Fiber optic measurement techniques

1 a1 Pin

a2

b1 b2

Pout

Transmission

188

0.8 0.6 0.4 0.2 0 −5

0 5 10 15 Relative phase (rad.)

20

Fig. 2.7.1 (A) Fiber-based ring resonator and (B) power transfer function.

Assume that the power splitting ratio of the fiber coupler is ε, the length of the fiber ring is L, and the refractive index of the fiber is n; based on the notation in Fig. 2.7.1A, pffiffiffiffiffiffiffiffiffiffiffi pffiffiffi b1 ¼ a1 1  ε + ja2 ε (2.7.1a) pffiffiffiffiffiffiffiffiffiffiffi pffiffiffi b2 ¼ ja1 ε + a2 1  ε (2.7.1b) The fiber ring then connects the output port b2 to the input port a2, and thus, pffiffiffi a2 ¼ ηb2 ejϕ (2.7.2) where η is the power transmission coefficient of the fiber, ϕ ¼ 2πfτ is the phase delay introduced by the fiber ring, and τ ¼ nL/c is the time delay of the ring. Combining these three equations, the power transfer function can be obtained as:

2

2

pffiffiffi jϕ

pffiffiffiffiffiffiffiffiffiffiffi

b1 ε η e

pffiffiffiffiffiffiffiffiffiffiffi T ðϕÞ ¼



¼ 1  ε  (2.7.3) p ffiffi ffi jϕ a1

1  e η 1  ε It is easy to find that if the fiber loss in the ring is neglected, η ¼ 1, the power transfer function will be independent of the phase delay of the ring so that T(ϕ) ≡ 1. In practice, any transmission media should have loss so that η < 1 even though it can be very close to unity. Fig. 2.7.1B shows the transfer function with fiber coupler power splitting ratio ε ¼ 0.1 and the transmission coefficient η ¼ 0.998. Obviously, this is a periodic notch filter. The transmission stays near to 100% for most of the phases except at the resonance frequencies of the ring, where the power transmission is minimized. These transmission minima correspond to the phase delay of ϕ ¼ 2mπ along the ring, where m are integers, and therefore, the free spectral range (FSR) of this ring resonator filter is Δf ¼ c/(nL), where c is the speed of light. The value of the minimum transmission can be found from Eq. (2.7.3) as pffiffiffipffiffiffiffiffiffiffiffiffiffiffi 1ε2 η 1ε+η T min ¼ (2.7.4) pffiffiffipffiffiffiffiffiffiffiffiffiffiffi 1  2 η 1  ε + ηð1  εÞ

Basic mechanisms and instrumentation for optical measurement

Minimum transmission

1 0.8

h = 99.9%

0.6 0.4

h = 99%

h = 95%

0.2 0

0

0.02

0.04

0.06

0.08

0.1

Splitting ratio e

Fig. 2.7.2 Minimum power transmission of a ring resonator-based filter.

From a notch filter application point of view, a high extinction ratio requires the minimum transmission Tmin to be as small as possible. Fig. 2.7.2 shows the minimum power transmission of a ring resonator-based filter as the function of the power splitting ratio of the fiber coupler. Three different loss values are used for the fiber ring in Fig. 2.7.2. The optimum power splitting ratio depends on the loss of the fiber ring. In fact, in order for the minimum transmission to be zero, η ¼ 1  ε is required. In addition to the extinction ratio, finesse is also an important parameter to specify a ring resonator. The finesse of a ring resonator is defined by the FSR divided by the FWHM (full width at half maximum) of the transmission notch. If we use the optimized value of the splitting ratio such that η ¼ 1  ε, the ring resonator transfer function will be simplified to 

pffiffiffi

η 1  ejϕ 2

T ðϕÞ ¼

(2.7.5) 1  ηejϕ Assuming that at a certain phase, ϕΔ, this power transfer function is T(ϕΔ) ¼ 0.5, we can find that

2 1 1  4η + η ϕΔ ¼ cos (2.7.6) 2η Since the FSR corresponds to 2π in the phase change, the finesse of the resonator can be expressed as Finesse ¼

2π π   ¼ 14η + η2 2ϕΔ 1 cos 2η

(2.7.7)

Fig. 2.7.3 shows the calculated finesse as the function of the roundtrip loss, 1 η, of the fiber ring. Apparently high finesse requires low loss of the optical ring. In practice this

189

Fiber optic measurement techniques

105 104

Finesse

190

103 102 101 10−4

10−3 10−2 Fiber ring loss (1−h)

10−1

Fig. 2.7.3 Calculated finesse as a function of the fiber ring loss.

loss can be introduced either by the fiber loss or by the excess loss of the directional coupler. Similar to the finesse, there is also a Q-value which defines the quality of a ring resonator, Q¼

λm δλm

(2.7.8)

where λm is the resonance wavelength at which the transmission is minimum, that is, λm ¼ τc/m so that ϕ ¼ 2mπ with m, an integer. δλm is the FWHM of the absorption peak around λm in the wavelength domain. Because the roundtrip phase is related to the wavelength by ϕ ¼ 2πτc/λ, its derivative is δϕ ¼  2πτcδλ/λ2; thus, δλm at λm can be expressed as a function of ϕΔ as λ2m ð2ϕΔ Þ (2.7.9) 2πτc where ϕΔ is defined in Eq. (2.7.6), and the negative sign is removed because δλm is a measure of linewidth. We can then find the simple relation between the Q-value and the finesse defined in Eq. (2.7.7) by δλm ¼

Q ¼ m  Finesse

(2.7.10)

where m ¼ τc/λm is an integer representing the order of the cavity resonance. From an application point of view, the biggest problem with the fiber-based ring resonator is probably the birefringence in the fiber. If there is a polarization rotation of the optical signal in the fiber ring, the interference effect after each roundtrip will be reduced, and thus, the overall transfer function will be affected. Therefore, planar lightwave circuit technology might be more appropriate to make high-Q ring resonators compared to allfiber devices. Ring resonators have been used in evanescent wave biosensors, in which attachment of biomolecules on the surface of the waveguide changes the effective propagation delay

Basic mechanisms and instrumentation for optical measurement

of the optical signal due to the interaction of biomolecules with the evanescent wave. Thus, the concentration of certain biomolecules can be quantified by monitoring the wavelength shift of the transmission notches of the optical transfer function (Sun and Fan, 2011). Ring resonators with Q > 107 have been reported, which can potentially make ring-based biosensors very attractive. The configuration of the optical ring resonator has also been used to make electro-optic modulators. In this application, an electrode is applied on the ring so that the roundtrip optical delay can be modulated by the applied electric voltage. For an optical signal at a certain wavelength near the transmission notch, intensity modulation can be accomplished by modulating the notch wavelength on and off the optical signal wavelength (Xu et al., 2006).

2.7.2 Ring resonators as tunable optical filters The power transfer function of a resonator constructed with a single ring can be used as an optical notch filter as shown in Fig. 2.7.1. As the roundtrip phase delay ϕ ¼ 2πfτ is proportional to the optical frequency f, the variable in Eq. (2.7.3) can be changed from ϕ to the optical frequency f so that

2

pffiffiffi

pffiffiffiffiffiffiffiffiffiffiffi ε ηe j2πτf

pffiffiffiffiffiffiffiffiffiffiffi Tð f Þ ¼ 1  ε  (2.7.11) p ffiffi ffi j2πτf

1e η 1  ε where τ ¼ nL/c is the ring cavity roundtrip time with n, the refractive index, c, the speed of light, and L, the cavity length. This power transfer function is periodical with the free spectral range (FSR) determined by the cavity length, and the spectral width of the notch is determined by both the cavity length and the cavity roundtrip loss. Multiple optical rings can be concatenated to form more sophisticated transfer functions with desired spectral shapes. Fig. 2.7.4 shows two examples of multiring T1

Tk

T2

TN

a1

dN

(a) k=1

k=N

k=2

b1

dN+1 d1

a2

c2

b3

d3

bN

c1

b2

d2 a3

c3

aN

a1

cN+1

(b) Fig. 2.7.4 Illustration of multiple optical ring resonators arranged in two different configurations.

191

192

Fiber optic measurement techniques

configurations. In Fig. 2.7.4A, N optical rings are linked along the same bus so that the overall transfer function is the multiplication of transfer functions of individual rings,

2

2 pffiffiffiffiffi j2πτk f

YN

pffiffiffiffiffiffiffiffiffiffiffiffiffi

d N ε η e k

k ffiffiffiffiffiffiffiffiffiffiffiffi ffi p T ðf Þ ¼



¼ 1  ε  (2.7.12)

pffiffiffiffiffi k k¼1 a1 1  ej2πτk f ηk 1  εk For an integrated optical circuit fabricated on a planar substrate, the length of each waveguide ring and the coupling ratio can be very precisely controlled. This allows the design of microring-based optical filters with multiple notches at desired wavelengths. Fig. 2.7.4B shows another type of ring topology to construct a 4-port device with multiple ring resonators. In this configuration, the derivation of the transfer function is more complicated because of the coupling between adjacent rings. For the kth ring shown in Fig. 2.7.4B, the transfer function of the 2  2 coupler is # " pffiffiffiffiffiffiffiffiffiffiffiffiffi pffiffiffiffi bk ak 1  εk j εk ¼ pffiffiffiffi (2.7.13) pffiffiffiffiffiffiffiffiffiffiffiffiffi dk 1  εk c k j εk where εk is the power coupling ratio. By rearranging the input and output ports, this coupler transfer function is equivalent to pffiffiffiffiffiffiffiffiffiffiffiffiffi ak ck 1  1  εk 1 ¼ pffiffiffiffi (2.7.14) pffiffiffiffiffiffiffiffiffiffiffiffiffi j εk dk 1 1  εk bk The section between the two (k and k + 1) adjacent directional couplers consists of two parallel delay lines representing the top and bottom parts of the ring, and the transfer function is " pffiffiffiffiffiffi # 0 ejϕk1 ηk1 c k ak+1 ¼ jϕ pffiffiffiffiffiffi (2.7.15) e k2 = ηk2 0 bk+1 dk where ηk1, ηk2, φk1, and φk2 are propagation losses and phase delay of the top part and the bottom part waveguides of the kth ring, respectively. Combining Eqs. (2.7.14) and (2.7.15), we have the transfer matrix of the kth ring, 3 2 rffiffiffiffiffiffi rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ηk1 jϕk1 1  εk jϕk1 6j e 7 j ηk1 e εk εk ak+1 7 ak 6 ¼ 6 rffiffiffiffiffiffiffiffiffiffiffiffiffi (2.7.16) 7 rffiffiffiffiffiffiffiffiffiffi 5 bk 4 1  εk jϕ bk+1 1 jϕ j e k2 j e k2 ηk2 εk ηk2 εk The overall transfer matrix can then be obtained through multiplication of transfer matrices of all the N rings,

Basic mechanisms and instrumentation for optical measurement

8 > > >

> e 7>  ηk1 = 6 εk e εk c N +1 N 6 j 7 ¼ pffiffiffiffiffiffiffiffiffiffi 7 6 ffiffiffiffiffiffiffiffiffiffi r ffiffiffiffiffiffiffiffiffiffiffiffi ffi r k¼1 4 εN +1 > 5> dN +1 1  εk jϕk2 1 jϕk2 > > > >  e e : ; ηk2 εk ηk2 εk pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi a1  1  εN +1 1 (2.7.17) pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 1 1  εN +1 b1 In a special case with only a single ring, as the configuration shown in Fig. 2.7.1A, the transfer function can be derived from Eq. (2.7.16) with k ¼ 1 and a2 ¼ b2 (without the 2nd ring). It can be shown that the same expression as Eq. (2.7.3) can be derived with ϕ ¼ ϕ11 + ϕ12 and η ¼ η11η12 representing the phase delay and the propagation loss of the entire ring, respectively. Another special case is a single optical ring with two couplers, one on each side of a ring as shown in the inset of Fig. 2.7.5. In this case, the transfer matrix expression in Eq. (2.7.17) can be reduced to c2 M 11 M 12 a1 ¼ (2.7.18) d2 M 21 M 22 b1 with

Power transfer function (linear)

1 0.8

b1

0.6 0.4

a2

c1

b2

a1

0.2 0

d1

-0.3

c2

d2

-0.2

-0.1

0

0.1

0.2

0.3

Phase angle (π) Fig. 2.7.5 Power transfer functions of a 4-port optical ring resonator with η1 ¼ η2 ¼ 0.98 and ε1 ¼ ε2 ¼ 0.01. Solid line: jb1/a1 j2 and dashed line: j d2/a1 j2.

193

194

Fiber optic measurement techniques

qffiffiffiffiffiffiffiffiffiffi and A ¼ j η 1ε1 ε2 ejϕ12

hpffiffiffiffiffiffiffiffiffiffiffiffiffi pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi i M 11 ¼ A 1  ε1  ηð1  ε2 Þejϕ hpffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi i M 12 ¼ A ηð1  ε1 Þð1  ε2 Þejϕ  1 hpffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi pffiffiffi i M 21 ¼ A ð1  ε1 Þð1  ε2 Þ  ηejϕ hpffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi pffiffiffiffiffiffiffiffiffiffiffiffiffii M 22 ¼ A ηð1  ε1 Þejϕ  1  ε2

(2.7.19a) (2.7.19b) (2.7.19c) (2.7.19d)

12

Assume the input signal is only from port a1; the power transfer functions from a1 to b1 and from a1 to d2 can be found as

2

2

b1

¼ M 11 (2.7.20)

a1

M 12 and

2

2

d 2

¼ M 21  M 22 M 11 (2.7.21)

a1

M 12 Examples of these two transfer functions with η1 ¼ η2 ¼ 0.98 and ε1 ¼ ε2 ¼ 0.01 are shown as the function of the ring roundtrip phase ϕ in Fig. 2.7.5. There is a slightly higher loss in jd2/a1 j2 in comparison to jb1/a1 j2 because of an extra path loss across the ring. The transfer function of a general multiring configuration shown in Fig. 2.7.4B can be calculated from the matrix formula in Eq. (2.7.17). By carefully selecting sizes of the rings and coupling coefficients of the couplers, different filter transfer functions can be obtained. Extensive works have been done in the design and fabrication of multringbased optical filters with different types of materials (Orta et al., 1995; Cheng et al., 2017; Pernice et al., 2012; Rabus et al., 2002; Griffel, 2000).

2.7.3 Label-free biosensors based on high-Q ring resonators Optical sensing relies on the interaction between biomolecules to be targeted and the signal optical field. Strong optical field can certainly help improve detection sensitivity. For a high-Q optical ring resonator at the resonance wavelength, optical field can circulate in the cavity for a long time so that the field intensity can be significantly enhanced by a factor proportional to the finesse of the cavity. The basic idea of optical sensing based on optical ring resonators is that biomolecules immobilized on the surface of the optical ring will change the effective optical length of the ring and thus change the resonance wavelength. Fig. 2.7.6A illustrates the basic configuration of an optical sensor based on a ring resonator. When biomolecules are attached on the surface of the optical ring, they

Basic mechanisms and instrumentation for optical measurement

Pin

Pout Bio molecules

(a)

Field distribution

Transmissivity

1 0.8 0.6 0.4 0.2 0 1351

(b) 1351.5

1352

1352.5

Wavelength (nm)

Fig. 2.7.6 (A) Basic configuration of an evanescent wave optical sensor based on a ring resonator and (B) power transfer functions of a ring resonator with slightly different effective refractive indices (δn ¼ 5  105 between adjacent curves).

interact with the evanescent field of the propagation mode of the waveguide, and the effective optical length of the ring can be changed slightly through the change in the effective index of the waveguide or the change in the circumference of the ring. This will introduce a change of the resonance wavelength. The number of biomolecules immobilized on the ring is linearly proportional to the measured change in the resonance wavelength. The solid line in Fig. 2.7.6B shows an example of the power transfer function Pout(λ)/ Pin(λ) of a ring resonator with circumference L ¼ 80 μm, refractive index n ¼ 1.47, ring propagation loss η ¼ 0.995, and ε ¼ 0.005. This results in a finesse of 667 according to Eq. (2.7.7), and a Q-value of approximately 87 at 1350 nm wavelength according to Eq. (2.7.10). The dashed, dotted, and dash-dotted lines in Fig. 2.7.6B show that the wavelength of transmission notch is shifted by changing the effective index of the waveguide by an increment of δn ¼ 5  105 between adjacent curves. Experimentally, the wavelength change of the transmission notch can be measured to estimate the number of biomolecules immobilized on the waveguide surface which caused the effective refractive index change. A sharp transmission notch, corresponds to a high Q-value of the ring resonator, is essential to allow the precise measurement of very small changes in the effective refractive index. Practically, the four most important issues in ring resonator-based evanescent wave optical sensors are (1) create optical rings with high enough Q-values, (2) find mechanisms to efficiently bind biomolecules on the surface of an optical ring, (3) maximize the induced change in the effective optical cavity length, and (4) precise measurement of the induced resonance wavelength change. A high Q-value is essential for the high sensitivity of sensing which is equivalent to the long interaction length between the probing optical signal and the target analyte. To better understand the interaction length of sensing, we can relate the Q-value to a cavity ring downtime τCR, which is often used to describe the photon lifetime in the cavity. The inverse of the ring cavity power transfer function shown in Eq. (2.7.3) can be approximated with a Lorentzian shape, and the normalized power spectrum can be expressed as

195

196

Fiber optic measurement techniques



E ðf Þ 2 1

E ðf Þ ¼ ðf  f Þ2 + ðδf =2Þ2 m m m

(2.7.22)

where fm is the resonance frequency and δfm is the full width at half maximum (FWHM) of the mth notch. Since fm ¼ c/λm and δfm ¼  cδλm/λ2m, the Q definition in Eq. (2.7.8), which is related to the FWHM of the transfer function, can be rewritten as Q ¼  fm/δfm, and Eq. (2.7.22) becomes

E ðf Þ 2 1

(2.7.23)

E ðf Þ ¼ ðf  f Þ2 + ðf =2QÞ2 m m m This Lorentzian shape of the transfer function is equivalent to an energy decay in the time domain as dU m ðtÞ ω ¼  m U m ðt Þ Q dt

(2.7.24)

where Um(t) is the energy of the mth mode as the function of time. The solution of Eq. (2.7.24) is U m ðtÞ ¼ U m ð0Þ exp ðt=τCR Þ

(2.7.25)

where the equivalent cavity ring downtime is τCR ¼ Q=f m

(2.7.26)

and the equivalent optical path length in the cavity is L EQ ¼ τCR c ¼ Qλm

(2.7.27)

As an example, for a ring cavity at a resonance wavelength λm ¼ 1310 nm and Q ¼ 106, the cavity ring downtime is approximately τCR ¼ Qλm/c  4.4ns. This is equivalent to a light traveling over a distance of 1.3 m in the cavity before the energy level is reduced to 1/e of its original value. For evanescent wave biosensors, optical fiber rings cannot be directly used because the thick cladding prevents optical signal in the fiber core from interacting with materials outside the cladding. In addition to planar waveguide optical rings on substrates, practical optical resonator geometries often used for evanescent wave sensing include sphere, cylinder, and toroid, as illustrated in Fig. 2.7.7. In these geometries, the mechanism of the optical signal traveling along the circumferences of the rings is known as the whisperinggallery mode (WGM). The WGM was originally discovered for whispering sound waves which was found to travel along the wall of the St Paul Cathedral’s gallery, where the whispering-gallery waves are guided by the wall curvature. The guiding mechanism of the WGM of light

Basic mechanisms and instrumentation for optical measurement

Sphere

Tube

Toroid

Fig. 2.7.7 Images of optical sphere, tube, and toroid which can be used as ring resonators.

waves is the total internal reflection on the inner surface of an optically transparent sphere (tube or toroid) as illustrated by ray tracing in Fig. 2.7.8A. Thus, the wave of WGM travels beneath the surface of the sphere with the field of evanescent wave extending outside the interface with the depth discussed in Chapter 1. Optical signal coupling into and out of the ring can be accomplished by a tapered optical fiber placed in close vicinity of the ring. The coupling coefficient ε is determined by both the fiber diameter in the coupling region and the separation between the fiber and the ring (Cai et al., 2000). The resonance condition of the WGM is the phase match of the signal optical wave after each roundtrip around the circumference, the same as the fiber-based optical ring resonators discussed previously. The binding of the analyte on the surface is equivalent to an increase in the radius of the ring from r to r + δr as illustrated in Fig. 2.7.8B, which increases the cavity length and the corresponding redshift of the resonance frequency. Practically, the layer of the analyte binding on the ring surface is very thin on the nanometer level. For a certain thickness δr of the analyte layer, the introduced change δλanalyte to the mth order resonance wavelength λm is   δr δn + (2.7.28) δλm,analyte ¼ λm r n

Tapered coupling region Output fiber

Input fiber

r Cross section of optical sphere

(a)

r+δr Binding of bio molecules on the surface

(b)

Fig. 2.7.8 (A) Ray trace illustration of whispering-gallery mode and optical coupling with a tapered optical fiber for input/output. (B) Radius increase due to biomolecules binding on the surface.

197

198

Fiber optic measurement techniques

where δr/r and δn/n are relative changes of the ring radius and the refractive index, respectively, as the analyte which replaces the original medium may have a different refractive index. Because the cavity Q-value defined in Eq. (2.7.8) is Q ¼ λm/δλm with δλm, the linewidth of resonance, in order to resolve wavelength shift introduce by the binding of analyte, δλm < δλm,analyte is required, that is,   δr δn 1 (2.7.29) + Q> r n While δn/n is determined by the materials, a smaller size of the microring resonator can help improve the detection efficiency. For example, for a microring cavity with Q ¼ 105, in order to detect an analyte layer of thickness δr ¼ 1 nm, r < 100 μm is required assuming δn ¼ 0. Binding of biomolecules may also create absorption and scattering of optical signal. It is equivalent to the reduction of power transmission coefficient η in Eq. (2.7.3), which tends to reduce the Q-value of the cavity and degrade the detection sensitivity. Ring-based evanescent wave biochemical sensors have been demonstrated with various geometrics and configurations. More detailed descriptions including molecular binding mechanisms, microfluidic structure, and detection sensitivity analysis can be found in a number of review articles (Fan et al., 2008; Vollmer and Yang, 2012; Jiang et al., 2020).

2.7.4 Electro-optic modulators based on ring resonators In addition to optical filtering and biochemical sensing, optical ring resonators can also be used to make electro-optic modulators. As ring resonators are frequency selective, a modulator based on the ring resonator has the advantage of selectively modulating a desired wavelength channel while letting other channels to pass through with minimum loss. Because of the compact size of microring resonators and the potential for low power consumption, electro-optic modulators based on microring resonators have attracted significant interest. Especially, in integrated silicon photonic circuits, modulators based on microring resonators are the most common. Ring-based electro-optic modulators are usually fabricated with planar waveguide circuits so that electrodes can be added for modulation. Fig. 2.7.9A shows a silicon photonic ring modulator made on silicon on insulator (SOI) platform (Xu et al., 2007). The electrodes are made of heavily doped silicon in the substrate layer on both sides of the ring waveguide. This forms a p-i-n junction, and carriers can be injected into the silicon waveguide through electric biasing, which allows the electronic control of the refractive index of the waveguide. The cross section of the silicon waveguide typically has the height of 200 nm and the width of 450 nm, as shown in Fig. 2.7.9B where the radius of the waveguide ring is r. The coupling between the straight waveguide and the ring is determined

Basic mechanisms and instrumentation for optical measurement

a1 Pin

Pout V

a2

Waveguide

p+ doped electrode

SiO2 p+

n+ doped electrode

Si

b1

SiO2

b2 r

n+

Cross section

(a)

(b)

(c)

Fig. 2.7.9 (A) configuration of a waveguide ring modulator made on the SOI platform, (B) cross-section of the waveguide, and (C) coupling between the straight waveguide and the ring resonator.

by the gap between them. The coupling region is a 2  2 waveguide coupler, and its power transfer function is presented by Eq. (2.7.3), and the Q-value of the ring defined by Q ¼ λ/δλ is only dependent on the loss of the ring, where λ is the wavelength of the selected absorption peak, and δλ is the FWHM of the power transmission notch. The Q-value of a ring resonator can be measured by an optical spectrometer, and the transmission coefficient of the ring η can be calculated from the Q-value based on Eqs. (2.7.6) through (2.7.9) as Q¼

2πrng π   λ cos 1 14η + η2 2η

(2.7.30)

where ng is the group index of the waveguide, r is the radius of the ring, and η is the waveguide transmission coefficient of the ring from b2 to a2 as shown in Fig. 2.7.9C which is similar to that shown in Fig. 2.7.1A. Based on Power Transfer Function (Eq. 2.7.3), if the waveguide transmission coefficient η is known, the optimum splitting ratio ε of the coupler should satisfy η ¼ 1  ε so that the power transmission at the resonance wavelength λ0 is T(λ0) ¼ 0. For a silicon microring modulator, the effective index ng can be reduced by carrier injection, which provides the mechanism of roundtrip phase modulation. The phase modulation modulates the resonance wavelength of the ring so that the transmission loss of an optical signal at λ0 will be reduced as illustrated in Fig. 2.7.10. A resonance wavelength change by a linewidth δλ is equivalent to a phase change of ϕΔ as defined by Eqs. (2.7.6) and (2.7.9). Based on the relation between the finesse and the Q-value defined by Eq. (2.7.10), it can be found that the relative change of the refractive index required to cause the change of the resonance wavelength by a linewidth δλ is

ng,0  ng 1 δng ≡ (2.7.31) ¼ ng,0 Q where ng, 0 and ng are the effective indices of waveguide before and after carrier injection, respectively. It usually needs to shift the resonance wavelength by more than a δλ to ensure that the loss is low enough to represent the high transmission state of the signal.

199

Fiber optic measurement techniques

1 0.8

T (linear)

200

0.6

Shift by 4dλ

0.4

Shift by dλ

0.2 0 -0.08



-0.06

-0.04

-0.02

0

0.02

0.04

Wavelength (λ- λ0) (nm) Fig. 2.7.10 Power transfer function of a microring resonator near a resonance wavelength. Electrooptic modulation changes both the refractive index and the loss of the ring so that the resonance wavelength can be shifted, and the peak absorption is also reduced. Solid line: ideal transfer function without carrier injection, dashed line: resonance wavelength is shifted by a linewidth δλ, and dotted line: resonance wavelength is shifted by a 4δλ. λ0 is the optical signal wavelength.

The dotted line in Fig. 2.7.10 shows that when the resonance is shifted by 4δλ, the transmission loss at signal wavelength λ0 is reduced to less than 1 dB. It has been found that the index reduction in a silicon p-i-n junction waveguide is on the order of Δn  2.1  103 by the carrier density increase of ΔN ¼ 1018cm3 so that the efficiency is nΔ ¼ Δn/ ΔN  2.1  1021cm3. Thus, the carrier density change required to shift the ring resonance wavelength by 4δλ should be on the order of ΔN ¼ (4ng, 0/Q)  4.8  1020cm3. Meanwhile, carrier injection will also increase the absorption loss of the waveguide (decrease the value of η); this results in the reduction of the Q-value and the broadening of the resonance linewidth, as also shown in Fig. 2.7.10. As an example, for a a silicon ring modulator operating in the 1550 nm wavelength window with the waveguide group index ng ¼ 4.3, ring radius r ¼ 6 μm, and the quality factor Q ¼ 105, the roundtrip loss of the ring can be found as η  0.9967 ¼ 0.014dB by numerically solving Eqs. (2.7.7) through (2.7.10). The optimum splitting ratio of the 2  2 coupler is ε ¼ 1  η ¼ 0.0032. The resonance condition of a ring resonator requires λ ¼ 2πrng/m with m, an integer. Resonance wavelengths within the telecommunication’s C-band (1529 nm - 1565 nm) are 1529.3 nm, 1543.87 nm, and 1558.71 nm with a free spectral range of ΔλFSR ¼

λ2  14:82nm 2πrng

(2.7.32)

Consider a signal wavelength at λ0 ¼ 1543.87nm which is one of the resonance wavelengths of the resonator, and assume the efficiency of carrier-induced complex index change for the silicon waveguide is Δng/ΔN ¼  (2.1  j0.4)  1021cm3, where the imaginary part of Δng represents the loss, and ΔN is the carrier density change. The real

Basic mechanisms and instrumentation for optical measurement

part of the group index change has to be Re{Δng} ¼ 4ng/Q ¼ 1.72  104 for a resonance wavelength change of 4δλ, which corresponds to a carrier density increase of ΔN ¼ 8.19  1016cm3. Because of the carrier-induced propagation loss of the silicon waveguide, the complex group index change is Δng ¼ 4ng/Q ¼ (1.72 + j0.1638)  104. This corresponds to a phase change of Δϕ ¼ 4π 2rΔng/λ0 ¼ 0.026 + j0.0025. Based on the transfer function of Eq. (2.7.3), the power loss at ϕ ¼ Δϕ is T(Δϕ) ¼ 0.964 ¼  0.16dB as shown in Fig. 2.7.11. Because a ring resonator is largely transparent for wavelengths away from the resonance wavelength, a number of ring resonators can be cascaded to modulate an optical source which emits multiple wavelengths, as shown in Fig. 2.7.12A. In this case, each ring resonator is dedicated to modulate the optical signal at one of the wavelengths without affecting all other wavelength channels. The wavelength window that this composite 1

|t|2 (linear)

0.8 0.6 0.4 0.2 0 1543.5

1543.6

1543.7

λ 01543.9

1543.8

1544

1544.1

Wavelength (nm) Fig. 2.7.11 Power transfer function of a microring resonator near a resonance wavelength λ0 ¼ 1543.87nm without (solid line) and with carrier injection (dashed line). v1(t)

λ1 λ1 λ2 λ3

λN

DEMUX

Multi-wavelength laser

λ2

λ2

MZM

v2(t) MZM

vN(t) λ1 λ2 λ3

λN

λN

Modulated output

v1(t) λ1

(b)

vN(t)

λN

MUX

(a)

v2(t)

Multi-wavelength laser

Modulated output

MZM

Fig. 2.7.12 Modulation of a light source with multiple wavelength channels by the cascade of multiple ring modulators (A) and by conventional Mach-Zehnder modulators (MZM) (B).

201

202

Fiber optic measurement techniques

ring modulator can cover is determined by the FSR is each ring. Practically, for microring-based modulators with the ring diameter on the order of a few micrometers and the FSR on the order of tens of nanometers, a very small fabrication error could change the resonance wavelength significantly. In order to match the resonance wavelength of a ring resonator to the ITU wavelength grid, thermal tuning can be used to correct any fabrication error in the resonance wavelength as well as to compensate for environmental changes. This can be done by integrating a resistive heater near the waveguide ring. In comparison, if Mach-Zehnder modulators are used for the source with multiple wavelengths, a WDM EDMUX has to be used to separate different wavelength channels, and a MUX has to be used to combine them after electro-optic modulation as shown in Fig. 2.7.12B. For the electro-optic modulators based on microrings discussed so far, we have used the static power transfer function of Eq. (2.7.3). We have also shown that a high Q-value is necessary to increase the modulation efficiency and to reduce the required density of carrier injection. However, a high Q-value may become a limiting factor for the modulation speed because of the long photon lifetime. The Q-value is defined by Q ¼ λ0/δλ with λ0 and δλ representing the operation wavelength and the width of the resonance line, respectively. The photon lifetime-limited modulation bandwidth can be estimated from Li et al. (2004), c δf ¼ (2.7.33) λ0 Q where δf ¼ δλ  c/λ20, and c is the speed of light in vacuum. For a modulator operating in a 1550-nm wavelength window, in order for the photon lifetime-limited modulation bandwidth δf > 10GHz, the Q-value has to be less than 2  104. In addition, factors affecting modulation speed also include parasitic capacitance of the electrodes and carrier response time of the p-i-n junction. There are a number of research papers describing these effects both experimentally and mathematically (Xu et al., 2007; Baba et al., 2013; Sacher and Poon, 2009). Although the carrier response time of a p-i-n junction waveguide is typically on the nanosecond level, the relationship between optical transmission and carrier density is highly nonlinear. A small change in the group index of the ring waveguide can introduce a big change in the optical transfer function. In fact, based on the ring modulator power transfer function of Eq. (2.7.3), assume the optimum coupling coefficient ε ¼ 1  η; the transfer function can be written as T ðϕÞ ¼ 2η

1  cos ϕ 1 + η2  2η cos ϕ

(2.7.34)

Assume the optical signal is at the resonance wavelength of the ring without carrier injection, and a small phase deviation δϕ caused by carrier injection can be linearized with cosδϕ  1  δϕ2/2 so that

Basic mechanisms and instrumentation for optical measurement

T ðδϕÞ ¼ 1  



1

1+

η ð1ηÞ2

δϕ2

1  2

¼1 1+

(2.7.35)

δN ξ

ð1ηÞ λ0 pffiffi is a parameter depending on the specific device and material used where ξ ¼ 2πrn η Δ with nf ¼ δn/δN representing the efficiency of carrier-induced index change, and δN is the carrier density change. Fig. 2.7.13 shows the optical signal transmission loss through the ring modulator as the function of normalized injection carrier density δN/ξ. This highly nonlinear transfer function indicates that a sharp reduction of transmission loss can be introduced by a relatively small change of the normalized carrier density. As an example, for a ring modulator operating at λ0 ¼ 1550nm with r ¼ 6 μm, Q ¼ 105 (so that η  0.9967), nΔ ¼ 2.1  1021cm3, and the value of ξ is approximately 6.5  1016cm2. Assume a carrier lifetime τ for the p-i-n junction; with a step function current injection, the carrier density increase follows the standard charging function,   δN ðtÞ ¼ N B 1  et=τ (2.7.36)

where NB is the final carrier density determined by the injection current level, and at t ¼ τ the carrier density reaches to NB(1  e1)  0.632NB. Fig. 2.7.14A shows the normalized carrier density δN/NB as the function of the normalized time t/τ for the current turn-on transition. Because of the nonlinear relation between carrier density and the signal transmission loss, the switch on of the optical signal can be much faster depending on the target carrier density NB. Fig. 2.7.14B shows the signal transmission loss as the function of the normalized time with different NB levels. When the modulator is overdriven with NB much 0

T(dB)

-10

-20

-30

-40

-50

0

1

2

3

4

5

δN/ξ Fig. 2.7.13 Optical signal transmission loss through the ring modulator as the function of normalized injection carrier density.

203

Fiber optic measurement techniques

δN/N B (linear)

1 0.8 0.6 0.4

(a)

0.2 0 0 0

0.2

0.4

0.6

0.8

1

1.2

1.4

1.6

1.8

2

NB=ξ

-10

T(dB)

204

NB=2ξ -20

NB=5ξ NB=10ξ

-30

-40

0

0.2

(b) 0.4

0.6

0.8

1

1.2

1.4

1.6

1.8

2

t/τ Fig. 2.7.14 (A) normalized carrier density as the function of the normalized time during injection current turn-on transition, (B) signal transmission loss as the function of normalized time for different NB levels.

higher than ξ, the optical signal switch-on time can be much shorter than the carrier lifetime τ. Although we have only discussed intensity modulation with microring modulators, it is also possible to realize complex optical I/Q modulation based on these devices (Zhang et al., 2008; Sacher and Poon, 2009; Dong et al., 2012). This is because the optical field transfer function of a microring is complex, and carrier injection is able to change both the power transmission and the phase of the optical signal. An I/Q modulator has been demonstrated with two microrings each on an arm of a Mach-Zehnder interferometer structure.

2.8 Optical polarimeter It is well known that an electromagnetic wave has four basic parameters including amplitude, frequency, phase, and the state of polarization. For an optical signal, its amplitude is represented by the brightness, and its frequency is represented by the color; both can be distinguished by a human eye. They can be easily measured by a power meter and a wavelength meter or an optical spectrum analyzer in an optical communication system. On the other hand, the state of polarization of an optical signal is less intuitive because human

Basic mechanisms and instrumentation for optical measurement

eyes are not polarization sensitive. In optical communications and optical sensors, the effect of optical polarization can be significant and must be considered. As an additional degree of freedom, the state of polarization in a lightwave signal can be used to carry information in an optical system. At the same time, the polarization effect may also degrade the performance of an optical fiber transmission system through polarization mode dispersion. The polarimeter is an instrument that can be used to measure the state of polarization of a lightwave signal; it is useful in many applications, such as optical system monitoring and characterization.

2.8.1 General description of lightwave polarization A propagating lightwave can be represented by electromagnetic field vectors in the plane that is perpendicular to the wave propagation direction. The complex electric field envelope can be expressed as (Born and Wolf, 1990; Collett, 1992) !

!

!

E ¼ a x Ex0 cos ðωt  kzÞ + a y E y0 cos ðωt  kz  φÞ

! ax

(2.8.1)

! ay

and are unit vectors and ϕ is the relative phase delay between the two where orthogonal field components. The polarization of the optical signal is determined by the pattern traced out on the transverse plane of the electrical field vector over time. Fig. 2.8.1 illustrates two-dimensional descriptions of various polarization states of polarized lights, such as linear polarization, circular polarization, and elliptical polarization. If we rewrite the optical field vector as shown in Eq. (2.8.1) into two scalar expressions,

x

x

Ex = Ey , f = π/4

Ex = Ey , f = 0

x

Ex = 2 Ey , f = 0 Ex = 2 Ey , f = π/4

x

Ex = Ey , f = π/2

x

Ex = Ey , f = 3π/4 y

y

y

y

y

y

y

y

x

x

Ex = 2 Ey , f = π/2

x

Ex = 2 Ey , f = 3π/4

Fig. 2.8.1 Two-dimensional view of polarization states of a polarized light.

205

206

Fiber optic measurement techniques

E x ¼ E x0 cos ðωt  kzÞ

(2.8.2a)

Ey ¼ E y0 cos ðωt  kz  φÞ

(2.8.2b)

The pattern of the optical field traced on a fixed xy-plane can be described by the polarization ellipse, E 2y Ex Ey E 2x + 2 cos φ ¼ sin 2 φ (2.8.3) Ex0 E y0 E 2x0 E2y0 Generally, an optical signal may not be fully polarized. That means that the x-component and the y-component of the light are not completely correlated with each other; in other words, the phase difference ϕ between them may be random. Many light sources in nature are unpolarized, such as sunlight, whereas optical signals from lasers are mostly polarized. An optical signal can usually be divided into a fully polarized part and a completely unpolarized part. The degree of polarization (DOP) is often used to describe the polarization characteristics of a partially polarized light, which is defined as DOP ¼

P polarized P polarized + P unpolarized

(2.8.4)

where Ppolarized and Punpolarized are the powers of the polarized part and unpolarized part, respectively. The DOP is therefore the ratio between the power of the polarized part and the total power. The DOP is equal to zero for an unpolarized light and is equal to unity for a fully polarized light.

2.8.2 The stokes parameters and the Poincare sphere The state of polarization of a lightwave signal is represented by the maximum amplitudes Ex0, Ey0, in the x and y directions and a relative phase ϕ between them, as shown in Eq. (2.8.1). In addition, if the signal is not completely polarized, the DOP also has to be considered. Because of the multiple parameters, the representation and interpretation of polarization states are often confusing. A Stokes vector is one of the most popular tools to describe the state of polarization of an optical signal. A Stokes vector is determined by four independent Stokes parameters, which can be represented by optical powers in various specific reference polarization states. Considering an optical signal with the electrical field given by Eq. (2.8.1), the four Stokes parameters are defined as S0 ¼ P

2 S1 ¼ jEx0 j2  Ey0

S2 ¼ 2jE x0 j Ey0 cos φ

S3 ¼ 2jE x0 j Ey0 sin φ

(2.8.5) (2.8.6) (2.8.7) (2.8.8)

Basic mechanisms and instrumentation for optical measurement

where P is the total power of the optical signal. For an ideally polarized lightwave signal, since there are only three free variables, Ex0, Ey0, and ϕ, in Eq. (2.8.1), the four Stokes parameters should not be independent, and the total optical power is related to S1, S1 and S1 by qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi S21 + S22 + S23 ≡ P (2.8.9) where S0 represents the total power of the optical signal, S0 ¼ j Ex0 j2 + j Ey0 j2 ¼ P. However, if the optical signal is only partially polarized, S21 + S22 + S23 only represents the power of the polarized part Ppolarized, which is smaller than the total optical power because S0 ¼ P ¼ Ppolarized + Punpolarized. In this case, the Stokes parameters can be normalized by the total power S0, and the normalized Stokes parameters have three elements: s1 ¼

S1 S S ,s ¼ 2 ,s ¼ 3 S0 2 S0 3 S0

(2.8.10)

According to the definition of DOP given by Eq. (2.8.4), qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi s21 + s22 + s23 ¼ DOP

(2.8.11)

These normalized Stokes parameters can be displayed in a real three-dimensional space. Any state of polarization can be represented as a vector s ¼ [s1 s2 s3], as shown in Fig. 2.8.2. The value of each normalized Stokes parameter ranges between 1 and +1. If the optical signal is fully polarized, the normalized Stokes vector endpoint is always on a sphere with unit radius, which is commonly referred to as Poincare sphere. On the other hand, if the optical signal is partially polarized, the endpoint of the normalized Stokes vector should be within the unit shell and the length of the vector is equal to the (0, 0, 1)

s3

s ζ s1

ψ

s2

(0, 1, 0)

(1, 0, 0)

Fig. 2.8.2 Three-dimensional representation of a normalized Stokes vector.

207

208

Fiber optic measurement techniques

DOP of the optical signal, as defined by Eq. (2.8.11). To convert variables from a threedimensional Cartesian coordinate to a spherical polar coordinate, an azimuthal angle 0 < Ψ < 2π and a polar angle –π/2 < ζ < π/2 must be introduced with s1 ¼ jsj cos ζ cos Ψ ,s2 ¼ jsj cos ζ sin Ψ ,s3 ¼ jsj sin ζ

(2.8.12)

Since each polarization state of an optical signal can be represented by a specific location on the Poincare sphere, it is useful to identify a few special locations on the Poincare sphere, as shown in Fig. 2.8.3, and their relationships with the polarization states described in Fig. 2.8.1. For circular polarization with j Ex0 j ¼ j Ey0 j and φ ¼  π/2, Eqs. (2.8.6)–(2.8.10) indicate that s1 ¼ 0, s2 ¼ 0 and s3 ¼  j sj. Therefore, the North Pole and the South Pole on the Poincare sphere represent circular polarizations of the optical signal. Specifically, the North and South Poles represent right and left circular polarization, respectively. For a linear polarization, φ ¼ 0 so that s3 ¼ 0.Therefore, the Stokes vector endpoint of an optical signal with linear polarization has to be located on the equator of the Poincare sphere. As examples, for a horizontal linear polarization jEy0 j ¼ 0, s1 ¼ js j and s2 ¼ s3 ¼ 0, whereas for a vertical linear polarization j Ex0 j ¼ 0, s1 ¼  js j and s2 ¼ s3 ¼ 0. These two polarization states are represented at (1,0,0) and (1.0,0), respectively, on the equator. For optical signals with 45∘ linear polarizations, jEx j ¼ jEy j and ϕ ¼ 0 or π so that s1 ¼ s3 ¼ 0 and s2 ¼  j sj. These two polarization states are therefore located at (0, 1, 0) on the equator. (S0, 0, 0) [S1 ¼ S0, S2 ¼ 0 and S3 ¼ 0] represents either horizontal or vertical polarized light (because either jEx j ¼ jEy j or j Ey j ¼ jEx j to make S2 ¼ 0 and S3 ¼ 0). In general, right and left elliptical polarization states occupy the Northern and the Southern Hemisphere, respectively, on the Poincare sphere. The azimuthal angle ψ s3 Right-hand elliptical

Right-hand circular (0, 0, 1) Vertical linear (0, 0, 1)

s2 −45⬚ linear (0, −1, 0)

+45⬚ linear (0, 1, 0)

s1 Left-hand elliptical Horizontal linear (0, 0, 1)

Left-hand circular (0, 0, −1)

Fig. 2.8.3 Polarization states represented on a Poincare sphere.

Basic mechanisms and instrumentation for optical measurement

indicates the spatial tilt of the ellipse, and the polar angle ζ is related to the differential optical phase ϕ between Ex and Ex by sinζ ¼ sin 2α sin φ where α ¼ tan1(jEx0 j/j Ey0 j).

2.8.3 Optical polarimeters An optical polarimeter is an instrument used to measure the Stokes parameters of optical signals (Chipman, 1994; Madsen et al., 2004; Wang and Weiner, 2006). According to the definition of Stokes parameters given by Eqs. (2.8.5)–(2.8.8), they can be obtained by measuring the optical powers after the optical signal passes through polarization-sensitive optical devices such as linear polarizer and retarders. If we define the following seven measurements: P ¼ total optical power P0∘¼optical power measured after a linear horizontal polarizer P90∘¼optical power measured after a linear vertical polarizer P+45∘¼optical power measured after a linear +45 degree polarizer P45∘¼optical power measured after a linear –45 degree polarizer P+45∘, λ/4 ¼ optical power measured after a λ/4 retarder and a linear +45 degree polarizer P45∘, λ/4 ¼ optical power measured after a λ/4 retarder and a linear –45 degree polarizer Then the Stokes parameters can be obtained as S0 ¼ P 0∘ + P 90∘

(2.8.13)

S1 ¼ P 0∘  P 90∘

(2.8.14)

S2 ¼ P +45∘  P 45∘

(2.8.15)

S3 ¼ P +45∘ ,λ=4  P 45∘ ,λ=4

(2.8.16)

However, these seven measurements are not independent. In fact, only four measurements are necessary to determine the Stokes parameters. For example, using P0∘, P90∘, P+45∘, and P+45∘, λ/4, we can find S2 and S3 by S2 ¼ P +45∘  P 45∘ ¼ 2P +45∘  S0

(2.8.17)

S3 ¼ P +45∘ ,λ=4  P 45∘ ,λ=4 ¼ 2P +45∘ ,λ=4  S0

(2.8.18)

Based on these relations, most polarimeters rely on four independent power measurements, either in parallel or sequential. As shown in Fig. 2.8.4, a parallel measurement technique uses a 1  4 beam splitter that splits the input optical signal into four equal parts. A polarizer is used in each of the first three branches, with 0, 90, and 45 degree orientation angles. The fourth branch has a λ/4 waveplate and a polarizer oriented at 45 degree. The principal axis is the waveplate, aligned with 0 degree so that it introduces an

209

Fiber optic measurement techniques

0⬚ Pol.

P0⬚

PD

Input optical signal

1 x 4 beam splitter

210

90⬚ Pol. PD 45⬚ Pol. PD

P90⬚ P45⬚

45⬚ Pol. P45⬚,l/4 PD λ/4 retarder

Signal processing & display

Fig. 2.8.4 Block diagram of a polarimeter with parallel detection. Pol., polarizer; PD, photodetector.

additional π/2 phase shifts between the x and y components of the optical field. A photodetector detects the optical power of each branch, providing the required parameters of P0∘, P90∘, P+45∘, and P+45∘, λ/4, respectively. These optical power values are then analyzed by a signal processing system, and Stokes parameters can be calculated using Eqs. (2.8.13), (2.8.14), (2.8.17), and (2.8.18). Most polarimeters have graphic interfaces that allow the display of the measured Stokes parameters on a Poincare sphere. Another type of polarimeter uses sequential measurement, which requires only one set of polarizers and waveplates, as shown in Fig. 2.8.5. The simplest measurement setup is shown in Fig. 2.8.5A, where a λ/4 waveplate and a polarizer are used and their principal axis can be rotated manually. The power of optical signal passing through these devices is measured and recorded with a photodetector followed by an electronic circuit. The four

Input optical signal

(a)

Manually adjustable retarder

PD

Record

Manually adjustable Polarizer

Electrically switchable retarders Fixed Polarizer Input optical signal PD B A C

(b)

Computer control & synchronization

Data acquisition & display

Fig. 2.8.5 Block diagram of polarimeters with sequential detection. (A) Manual adjustment (B) automated measurement.

Basic mechanisms and instrumentation for optical measurement

Table 2.8.1 State of polarizations (SOPs) at locations A, B, and C in Fig. 2.8.5B with different sets of retarder optic-axis orientations; RHC, right-hand circular. SOP at point A

First retarder optic axis

SOP at point B

Second retarder optic axis

SOP at point C

Equivalent PD power

0 degree 90 degree 45 degree RHC

90 degree 135 degree 90 degree 135 degree

0 degree RHC RHC 0 degree

0 degree 135 degree 135 degree 0 degree

0 0 0 0

P0 ∘ P90∘ P+45∘ P+45∘,

degree degree degree degree

λ/4

measurements of P0∘, P90∘, P+45∘, and P+45∘, λ/4 as described previously can be performed one at a time by properly adjusting the orientations (or removing, if required) of the λ/4 waveplate and the polarizer. This measurement setup is simple; however, we have to make sure that the polarization state of the optical signal does not change during each set of measurements. Since manual adjustment is slow, the determination of Stokes parameters could take as long as several minutes. It is not suitable for applications in fiber-optic systems where polarization variation may be fast. To improve the speed of sequential measurement, electrically switchable retarders can be used as shown in Fig. 2.8.5B. In this configuration, the orientation of the optical axis can be switched between 90 and 135 degree for the first retarder and between 135 degree (equivalent to –45 degree) and 0 degree for the second retarder. The optical axis of the polarizer is fixed at 0 degree. Table 2.8.1 shows the state of polarizations (SOAs) at Locations A, B, and C (referring to Fig. 2.8.5B) with different combinations of retarders optic-axis orientations. Four specific combinations of retarders optic-axis orientations can convert input SOPs of 0 degree linear, 90 degree linear, 45 degree linear, and right-hand circular into 0 degree linear at the output, which is then selected by the fixed polarizer. Therefore, the optical powers measured after the polarizer should be equivalent to P0∘, P90∘, P+45∘, and P+45∘, λ/4 as defined previously. Because of the rapid development of liquid crystal technology, fast switchable liquid crystal retarders are commercially available with submillisecond-level switching speed, which allows the Stokes parameters to be measured using the sequential method. As shown in Fig. 2.8.5, a computerized control system is usually used to synchronize data acquisition with instantaneous retarder orientation status. Therefore, the measurement can be fast and fully automatic.

2.9 Measurement based on coherent optical detection Coherent detection is a novel detection technique that is useful both in fiber-optic communication systems (Betti et al., 1995) and optical measurement and sensors. In a direct detection optical receiver, thermal noise is usually the dominant noise, especially when the power level of an optical signal is low when it reaches the receiver.

211

212

Fiber optic measurement techniques

In coherent detection, a strong local oscillator is used, mixing with the optical signal at the receiver and effectively amplifying the weak optical signal. Thus, compared to direct detection, coherent detection has much improved detection sensitivity. In addition, coherent detection also provides superior frequency selectivity because of the wavelength tunability of local oscillator acting as an optical frequency and/or phase reference. The coherent detection technique was investigated extensively in the 1980s to increase receiver sensitivity in optical communication systems. However, the introduction of EDFA in the early 1990s made coherent detection less attractive, mainly for two reasons: (1) EDFA provides sufficient optical amplification without the requirement of an expensive local oscillator, and (2) EDFA has wide gain bandwidth to support multichannel WDM optical systems. Nevertheless, coherent detection still has its own unique advantages because optical phase information is preserved, which is very important in many practical applications, including optical communication, optical metrology, instrumentation, and optical signal processing.

2.9.1 Operating principle Coherent detection originates from radio communications, where a local carrier mixes with the received RF signal to generate a product term. As a result, the received RF signal can be demodulated or frequency translated. A block diagram of coherent detection is shown in Fig. 2.9.1. In this circuit, the received signal m(t) cos (ωsct) has an information-carrying amplitude m(t) and an RF carrier at frequency ωsc, whereas the local oscillator has a single frequency at ωloc. The RF signal multiplies with the local oscillator in the RF mixer, generating the sum and the difference frequencies between the signal and the local oscillator. The process can be described by the following equation: mðtÞ cos ðωsc tÞ  cos ðωloc tÞ ¼

mðtÞ f cos ½ðωsc + ωloc Þt + cos ½ðωsc  ωloc Þtg (2.9.1) 2

A lowpass filter is usually used to eliminate the sum frequency component, and thus, the baseband signal can be recovered if the frequency of the local oscillator is equal to that of the signal (ωloc ¼ ωsc). When the RF signal has multiple and very closely spaced frequency channels, excellent frequency selection in coherent detection can be m(t) cos (wsct)

Mixer Low-pass

m(t) cos[(wsc − wloc)t]

Local oscillator cos(wloct )

Fig. 2.9.1 Block diagram of coherent detection in radio communications.

Basic mechanisms and instrumentation for optical measurement

accomplished by tuning the frequency of the local oscillator. This technique has been used in ratio communications for many years, and high-quality RF components such as RF mixers and amplifiers are standardized. For coherent detection in lightwave systems, although the fundamental principle is similar, its operating frequency is many orders of magnitude higher than the radio frequencies; thus, the required components and circuit configurations are different. In a lightwave coherent receiver, mixing between the input optical signal and the local oscillator is done in a photodiode, which is a square-law detection device. A typical schematic diagram of coherent detection in a lightwave system is shown in Fig. 2.9.2, where the incoming optical signal and the optical local oscillator are combined in an optical coupler. The optical coupler can be made by a partial reflection mirror in free space or by a fiber directional coupler in guided wave optics. To match the polarization state between the input optical signal and the local oscillator (LO), a polarization controller is used; it can be put either at the LO or at the input optical signal. A photodiode converts the composite optical signal into an electrical domain and performs mixing between the signal and LO. Consider the field vector of the incoming optical signal as ! E s ðt Þ

!

¼ As ðt Þ exp ðjωs t + jφs ðt ÞÞ

(2.9.2)

and the field vector of the local oscillator as ! E LO ðtÞ

!

¼ ALO exp ðjωLO t + jφLO Þ

(2.9.3)

! ! where As ðtÞand ALO ðt Þare the real amplitudes of the incoming signal and the LO, respec-

tively, ωs and ωLO are their optical frequencies, and ϕs(t) and ϕLO are their optical phases. According to the optical transfer function of the 2  2 optical coupler, #" ! # " ! # " pffiffiffiffiffiffiffiffiffiffiffi pffiffiffi 1ε j ε ES E1 ¼ pffiffiffi (2.9.4) pffiffiffiffiffiffiffiffiffiffiffi ! ! j ε 1  ε E LO E2

where ε is the power coupling coefficient of the optical coupler. In this simple coherent receiver configuration, only one of the two outputs of the 2  2 coupler is used, and the other output is terminated. The composite optical signal at the coupler output is E1(t )

Es(t )

i (t)

2x2

ELO(t )

E2(t )

PD

PC LO

Fig. 2.9.2 Block diagram of coherent detection in a lightwave receiver. PC, polarization controller; PD, photodiode; LO, local oscillator.

213

214

Fiber optic measurement techniques

! E 1 ðt Þ

¼

pffiffiffiffiffiffiffiffiffiffiffi! pffiffiffi! 1  εE s ðt Þ + j εE Lo ðt Þ

(2.9.5)

As discussed in Section 1.3, a photodiode has a square-law detection characteristic, and the photocurrent is proportional to the square of the input optical signal, that is,

! 2

iðt Þ ¼ R E 1 ðtÞ n o pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi! ! ¼ R ð1  εÞjAs ðtÞj2 + εjALO j2 + 2 εð1  εÞAs ðtÞ  ALO cos ðωIF t + ΔφðtÞÞ (2.9.6) where Ris the responsivity of the photodiode, ωIF ¼ ωs  ωLO is the frequency difference between the signal and the LO, which is referred to as the intermediate frequency (IF), and Δφ(t) ¼ φs(t)  φLO is their relative phase difference. We have neglected the sum frequency term in Eq. (2.9.6) because ωs + ωLO will still be in the optical domain and will be eliminated by the RF circuit that follows the photodiode. The first and the second terms on the right side of Eq. (2.9.6) are the direct detection components of the optical signal and the LO. The last term is the coherent detection term, which mixes the optical signal with the LO. Typically, the LO is a laser operating in continuous waves. Suppose the LO laser has pffiffiffiffiffiffiffiffi no intensity noise; ALO ¼ P LO is a constant, where PLO is the optical power of the LO. In general, the optical

LO is much stronger than that of the optical signal

power!of the

2 ! such that jAs ðtÞj ≪ As ðtÞ  ALO . Therefore, the only significant component in Eq. (2.9.6) is the last term, and thus, pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi iðtÞ  2R εð1  εÞ cos θ P s ðtÞ  P LO cos ðωIF t + Δφðt ÞÞ ! As ðtÞ

(2.9.7)

! ALO,

where cosθ results from the dot product of  which is the angle of polarization state mismatch between the optical signal and the LO. Ps(t) ¼ j As(t)j2 is the signal optical power. Eq. (2.9.7) shows that the detected electrical pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ffi signal level depends also on the coefficient of the optical coupler ε. Since 2 εð1  εÞ reaches the maximum value of 1 when ε ¼ ½, 3-dB couplers are usually used in the coherent detection receiver. In addition, in the ideal case, if the polarization state of the LO matches the incoming optical signal, cosθ ¼ 1; therefore, the photocurrent of coherent detection is simplified as pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi iðtÞ  R P s ðtÞ  P LO cos ðωIF t + ΔφÞ (2.9.8) Eq. (2.9.8) shows that coherent detection shifts the spectrum of the optical signal from the optical carrier frequency ωs to an intermediate frequency ωIF, which can be handled by an RF circuit. In general, the coherent detection can be categorized as homodyne detection if ωIF ¼ 0 or heterodyne detection if ωIF 6¼ 0.

Basic mechanisms and instrumentation for optical measurement

2.9.2 Receiver SNR calculation of coherent detection 2.9.2.1 Heterodyne and homodyne detection In coherent detection, the photocurrent is proportional to the field of the optical signal, and thus, the phase information of the optical signal is preserved. In the simplest coherent detection case, assuming the matched polarization states between the signal and the LO, the photocurrent is iðt Þ ¼ RAs ðtÞALO cos ðωIF t + ΔφðtÞÞ

(2.9.9)

In a heterodyne configuration, the intermediate frequency ωIF is usually much higher than the bandwidth of the signal carried by Ps(t). To convert the IF signal into baseband, two basic techniques can be used. One of these techniques, shown in Fig. 2.9.3A, uses an RF bandpass filter and an RF detector. This is usually referred to as RF direct detection. Another technique, shown in Fig. 2.9.3B, uses an RF local oscillator at the intermediate frequency ωIF, which mixes with the heterodyne signal to recover the baseband signal. This second technique is called RF coherent detection. Although both techniques recover baseband signals from the intermediate frequency, each has its unique characteristics. Direct RF detection is relatively insensitive to the IF frequency variation because the width of the bandpass filter can be chosen wider than the signal bandwidth, but on the other hand, a wider filter bandwidth also increases the noise level. RF coherent detection has higher detection efficiency than direct RF detection because a strong RF local oscillator effectively amplifies the signal level. In addition, the signal-to-noise ratio of RF coherent detection is 3-dB higher than the direct RF detection because only the in-phase noise component is involved, and the quadrature noise component can be eliminated. Obviously, RF coherent detection is sensitive to BPF Es(t) Heterodyne i (t) detection

wIF

RF detection P (t) s ( )2

BPF

wIF

(a)

Es(t)

Heterodyne i (t) detection

RF mixing

Ps(t)

wIF

(b)

wIF

Fig. 2.9.3 (A) Heterodyne optical detection and direct RF detection and (B) heterodyne optical detection and RF coherent detection. PC, polarization controller; PD, photodiode; LO, local oscillator.

215

216

Fiber optic measurement techniques

the variation of the intermediate frequency, and typically, a frequency locked loop needs to be used. In coherent homodyne detection, the intermediate frequency is set to ωIF ¼ 0, that is, the frequency of the LO is equal to the signal optical carrier. Therefore, baseband information carried in the input optical signal is directly obtained in the homodyne detection as iðt Þ ¼ RAs ðtÞALO cos ΔφðtÞ

(2.9.10)

Although homodyne detection appears simpler than heterodyne detection, it requires both frequency locking and phase locking between the LO and the optical signal, because a random phase Δφ(t) in Eq. (2.9.10) may introduce signal fading.

2.9.2.2 Signal-to-noise ratio in coherent detection receivers

In an optical receiver with coherent detection, the power of the LO is usually much stronger than that of the received optical signal; therefore, the SNR is mainly determined by the shot noise caused by the LO. If we consider only thermal noise and shot noise, the receiver SNR is

SNR = \frac{\langle i^2(t)\rangle}{\langle i_{th}^2\rangle + \langle i_{sh}^2\rangle}    (2.9.11)

where

\langle i^2(t)\rangle = \frac{R^2 P_s(t) P_{LO}}{2}    (2.9.12)

is the RF power of the signal,

\langle i_{th}^2\rangle = \frac{4kTB}{R_L}    (2.9.13)

is the thermal noise power, and

\langle i_{sh}^2\rangle = qR\left[P_s(t) + P_{LO}\right]B    (2.9.14)

is the shot noise power, where B is the receiver electrical bandwidth, R_L is the load resistance, and q is the electron charge. The SNR can then be expressed as

SNR = \frac{R^2 P_s(t) P_{LO}}{2B\left\{4kT/R_L + qR\left[P_s(t) + P_{LO}\right]\right\}}    (2.9.15)

Fig. 2.9.4A shows an SNR comparison between direct detection and coherent detection, where we assume that the receiver bandwidth is B = 10 GHz, the load resistance is R_L = 50 Ω, the photodiode responsivity is R = 0.75 A/W, the temperature is T = 300 K, and the power of the LO is fixed at P_LO = 20 dBm.

Fig. 2.9.4 (A) Comparison between direct detection and coherent detection when P_LO is 20 dBm and (B) effect of LO power when P_s is fixed at −35 dBm. Other parameters: B = 10 GHz, R_L = 50 Ω, R = 0.75 A/W, and T = 300 K.

Obviously, coherent detection significantly improves the SNR compared to direct detection, especially when the signal optical power level is low. In fact, if the LO power is strong enough, the shot noise caused by the LO can be much higher than the thermal noise, and Eq. (2.9.15) can be simplified to

SNR_{coh} \approx \frac{R P_s(t)}{2Bq}    (2.9.16)

In this approximation, we also assumed that P_LO ≫ P_s, which is usually true. Therefore, the SNR in a coherent detection receiver is linearly proportional to the power of the input optical signal P_s, whereas in a direct detection receiver SNR ∝ P_s². Fig. 2.9.4B shows that the SNR in coherent detection also depends on the power of the LO. If the LO power is not strong enough, the full benefit of coherent detection is not achieved, and the SNR is a function of the LO power. When the LO power is sufficiently strong, the SNR no longer depends on the LO because the strong-LO approximation is valid, and the SNR is accurately represented by Eq. (2.9.16).
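The SNR expressions above are easy to evaluate numerically. The sketch below uses the parameter values quoted for Fig. 2.9.4; the direct-detection SNR formula used for comparison is an assumption of this sketch rather than an expression from the text.

```python
import numpy as np

# Receiver parameters from the example around Fig. 2.9.4.
q, k = 1.602e-19, 1.381e-23        # electron charge (C), Boltzmann constant (J/K)
B, RL, R, T = 10e9, 50.0, 0.75, 300.0
P_lo = 1e-3 * 10 ** (20 / 10.0)    # LO power: +20 dBm in watts

Ps_dBm = np.linspace(-60, -10, 101)
Ps = 1e-3 * 10 ** (Ps_dBm / 10.0)

# Coherent detection, Eq. (2.9.15)
snr_coh = R**2 * Ps * P_lo / (2 * B * (4 * k * T / RL + q * R * (Ps + P_lo)))

# Direct detection (thermal- plus shot-noise-limited); this exact expression
# is an assumption of the sketch, used only for comparison.
snr_dir = (R * Ps) ** 2 / (4 * k * T * B / RL + 2 * q * R * Ps * B)

# Strong-LO (shot-noise-limited) approximation, Eq. (2.9.16)
snr_approx = R * Ps / (2 * B * q)

for p, c, d in zip(Ps_dBm[::25], snr_coh[::25], snr_dir[::25]):
    print(f"Ps = {p:5.1f} dBm  SNR_coh = {10*np.log10(c):6.2f} dB  "
          f"SNR_dir = {10*np.log10(d):7.2f} dB")
```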

2.9.3 Balanced coherent detection and polarization diversity

Coherent detection using the simple configuration shown in Fig. 2.9.2 has both the mixed-frequency term and the direct-detection terms shown in Eq. (2.9.6). In the analysis of the last section, we assumed that the local oscillator has no intensity noise and that the direct-detection term of the signal is negligible. In practice, however, the direct-detection term may overlap with the IF component and introduce crosstalk. In addition, since the optical power of the LO is significantly higher than that of the optical signal, any intensity noise of the LO introduces excess noise in coherent detection. Another important concern is that a strong LO may saturate the electrical preamplifier after the photodiode. Therefore, the first two direct-detection terms in Eq. (2.9.6) may degrade the performance of coherent detection. To solve this problem and eliminate the effect of direct detection, a balanced coherent detection configuration is often used.

Fig. 2.9.5 Block diagram of balanced coherent detection.

The schematic diagram of balanced coherent detection is shown in Fig. 2.9.5. Instead of the single photodetector of Fig. 2.9.2, two photodetectors connected head-to-toe are used in this balanced configuration. Based on the transfer function of the 2×2 optical coupler given in Eq. (2.9.4), the optical fields at the two output ports are

\vec{E}_1(t) = \left[\vec{E}_s(t) + j\vec{E}_{LO}(t)\right]/\sqrt{2}    (2.9.17)

\vec{E}_2(t) = j\left[\vec{E}_s(t) - j\vec{E}_{LO}(t)\right]/\sqrt{2}    (2.9.18)

and the photocurrents generated by the two photodiodes are

i_1(t) = \frac{R}{2}\left[|A_s(t)|^2 + |A_{LO}|^2 + 2A_s(t)A_{LO}\cos(\omega_{IF}t + \Delta\varphi)\cos\theta\right]    (2.9.19)

i_2(t) = \frac{R}{2}\left[|A_s(t)|^2 + |A_{LO}|^2 - 2A_s(t)A_{LO}\cos(\omega_{IF}t + \Delta\varphi)\cos\theta\right]    (2.9.20)

Therefore, the direct-detection components can be eliminated by subtracting these two photocurrents, and we obtain

\Delta i(t) = 2R A_s(t) A_{LO}\cos(\omega_{IF}t + \Delta\varphi)\cos\theta    (2.9.21)

In the derivation of Eqs. (2.9.17)–(2.9.21), we assumed that the power-coupling coefficient of the optical coupler is ε = 0.5 (a 3-dB coupler) and that the two photodetectors have identical responsivities. In practical applications, paired photodiodes with matched characteristics are commercially available for this purpose. Although the direct-detection components are eliminated in Eq. (2.9.21), the effect of polarization mismatch, represented by cosθ, still affects the coherent detection efficiency. Since the polarization state of the optical signal is usually not predictable after transmission over a long distance, the photocurrent may fluctuate over time and cause receiver fading. Polarization diversity is a technique that overcomes this polarization-mismatch-induced receiver fading in coherent systems. The block diagram of polarization diversity in a coherent receiver is shown in Fig. 2.9.6.


Fig. 2.9.6 Block diagram of coherent detection with polarization diversity.

A polarization beam splitter (PBS) separates the input optical signal into horizontal (E_s//) and vertical (E_s⊥) polarization components. The polarization state of the local oscillator is aligned midway between the two principal axes of the PBS so that the LO power is equally split between the two PBS outputs, that is, E_LO,// = E_LO,⊥ = E_LO/√2. As a result, the photocurrents at the outputs of the two balanced detection branches are

\Delta i_1(t) = \sqrt{2}\,R A_s(t) A_{LO}\cos(\omega_{IF}t + \Delta\varphi)\cos\theta    (2.9.22)

\Delta i_2(t) = \sqrt{2}\,R A_s(t) A_{LO}\cos(\omega_{IF}t + \Delta\varphi)\sin\theta    (2.9.23)

where θ is the angle between the polarization state of the input optical signal and the principal axis of the PBS, with E_s// = E_s cosθ and E_s⊥ = E_s sinθ. Both photocurrents are squared by RF power detectors before they are combined to produce the RF power of the coherently detected signal,

P_{IF}(t) = \Delta i_1^2 + \Delta i_2^2 = 2R^2 P_s(t) P_{LO}\cos^2(\omega_{IF}t + \Delta\varphi)    (2.9.24)

This RF power is independent of the polarization state of the input optical signal. A lowpass filter can then be used to select the baseband information carried by the input optical signal.
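A minimal numerical check of Eqs. (2.9.22)–(2.9.24), with assumed illustrative amplitudes, confirms that the combined RF power is independent of the polarization angle θ:

```python
import numpy as np

# Sketch of Eqs. (2.9.22)-(2.9.24): the squared-and-summed outputs of the two
# balanced branches give an RF power independent of the polarization angle.
R, A_s, A_lo = 0.8, 1.0, 100.0        # illustrative values (assumed)
w_if, dphi = 2 * np.pi * 1e9, 0.3     # IF angular frequency and phase offset
t = np.linspace(0, 2e-9, 2001)        # two IF periods

for theta in np.deg2rad([0, 30, 60, 90]):
    di1 = np.sqrt(2) * R * A_s * A_lo * np.cos(w_if * t + dphi) * np.cos(theta)
    di2 = np.sqrt(2) * R * A_s * A_lo * np.cos(w_if * t + dphi) * np.sin(theta)
    p_if = di1**2 + di2**2            # Eq. (2.9.24): 2 R^2 Ps Plo cos^2(...)
    # Time-averaged RF power equals R^2 Ps Plo regardless of theta.
    print(f"theta = {np.rad2deg(theta):4.0f} deg  <P_IF> = {p_if.mean():.3e}")
```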

2.9.4 Phase diversity in coherent homodyne detection

In coherent homodyne detection, the photocurrent is proportional to the field of the optical signal, and thus the phase information of the optical signal is preserved. At the same time, phase noise of the LO and of the received optical signal also plays an important role, because i(t) = R A_s(t) A_LO cosΔφ(t), as given by Eq. (2.9.10). If Δφ(t) varies randomly with time, the photocurrent also fluctuates randomly and causes signal fading. Traditionally, a phase-locked loop can be used to overcome this signal fading problem. Assuming that the variation of Δφ(t) is much slower than the data rate on the optical carrier, the cosΔφ(t) term can be treated as a slowly varying envelope. Fig. 2.9.7 schematically shows such a phase-locked loop, which includes a lowpass filter that selects the slowly varying envelope, a power detector, and a phase control unit.


Fig. 2.9.7 Coherent homodyne detection with a phase locked feedback loop.

The optical phase of the LO is adjusted by feedback from the phase control unit to maximize the detected power level such that cosΔφ(t) = 1. In practice, a coherent homodyne system requires narrow-linewidth lasers for both the transmitter and the LO to keep the phase noise low. The implementation of an adaptive phase-locked loop is also expensive, which limits the application of coherent homodyne receivers.

Another technique to minimize phase-noise-induced signal fading in a coherent homodyne system is phase diversity. As shown in Fig. 2.9.8, the phase diversity coherent receiver is based on a 90 degree optical hybrid. It is well known that the transfer matrix of a conventional 3-dB 2×2 optical fiber coupler is given by

\begin{bmatrix} \vec{E}_1 \\ \vec{E}_2 \end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & j \\ j & 1 \end{bmatrix}\begin{bmatrix} \vec{E}_S \\ \vec{E}_{LO} \end{bmatrix}    (2.9.25)

In the balanced detection coherent receiver of Fig. 2.9.5, which uses a 3-dB 2×2 fiber coupler, the cross products in the two output arms are 180 degrees apart after the photodetectors, as shown in Eqs. (2.9.19) and (2.9.20). Therefore, the conventional 2×2 fiber coupler can be classified as a 180 degree hybrid.


Fig. 2.9.8 Block diagram of a phase diversity coherent homodyne receiver.

An ideal 90 degree optical hybrid should have the transfer matrix

\begin{bmatrix} \vec{E}_1 \\ \vec{E}_2 \end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & \exp(j\pi/4) \\ \exp(j\pi/4) & 1 \end{bmatrix}\begin{bmatrix} \vec{E}_S \\ \vec{E}_{LO} \end{bmatrix}    (2.9.26)

Applying this 90 degree optical hybrid in the receiver of Fig. 2.9.8 and assuming matched polarization states between the input optical signal and the LO,

i_1(t) = R A_s(t) A_{LO}\cos\!\left(\Delta\varphi(t) - \frac{\pi}{4}\right)    (2.9.27)

i_2(t) = R A_s(t) A_{LO}\sin\!\left(\Delta\varphi(t) - \frac{\pi}{4}\right)    (2.9.28)

where, for simplicity, we have neglected the direct-detection components, and ω_IF = 0 is assumed because of homodyne detection. Obviously, in this circuit there is a quadrature relationship between i_1(t) and i_2(t), so that

i_1^2(t) + i_2^2(t) = R^2 P_s(t) P_{LO}    (2.9.29)

which is independent of the differential phase Δφ(t). It is apparent that in the phase diversity receiver configuration of Fig. 2.9.8, the key device is the 90 degree optical hybrid. Unfortunately, the transfer matrix of Eq. (2.9.26) cannot be provided by a simple 2×2 fiber coupler; in fact, a quick test shows that Eq. (2.9.26) does not satisfy energy conservation because |E_1|² + |E_2|² ≠ |E_s|² + |E_LO|². A number of optical structures have been proposed to realize the 90 degree optical hybrid, but the most straightforward way is to use a specially designed 3×3 fiber coupler with the following transfer matrix (Epworth, 2005):

\begin{bmatrix} E_1 \\ E_2 \\ E_3 \end{bmatrix} = \begin{bmatrix} \sqrt{0.2} & \sqrt{0.4}\,e^{j3\pi/4} & \sqrt{0.4}\,e^{j3\pi/4} \\ \sqrt{0.4}\,e^{j3\pi/4} & \sqrt{0.2} & \sqrt{0.4}\,e^{j3\pi/4} \\ \sqrt{0.4}\,e^{j3\pi/4} & \sqrt{0.4}\,e^{j3\pi/4} & \sqrt{0.2} \end{bmatrix}\begin{bmatrix} E_s \\ E_{LO} \\ 0 \end{bmatrix}    (2.9.30)

where only two of the three ports are used on each side of the coupler, so that it operates as a 2×2 coupler. On the input side, the two selected ports connect to the input optical signal and the LO, whereas the first two output ports provide

E'_1 = \sqrt{0.2}\,E_s(t) + \sqrt{0.4}\,e^{j3\pi/4} E_{LO}    (2.9.31)

E'_2 = \sqrt{0.4}\,e^{j3\pi/4} E_s(t) + \sqrt{0.2}\,E_{LO}    (2.9.32)

After the photodiodes, and neglecting the direct-detection components, the AC parts of the two photocurrents are

i_1(t) = 2R\sqrt{0.08}\,A_s(t) A_{LO}\cos\!\left(\Delta\varphi(t) - \frac{3\pi}{4}\right)    (2.9.33)

i_2(t) = 2R\sqrt{0.08}\,A_s(t) A_{LO}\sin\!\left(\Delta\varphi(t) - \frac{3\pi}{4}\right)    (2.9.34)

Therefore, after squaring and combining, the receiver output is

i_1^2(t) + i_2^2(t) = 0.32\,R^2 P_s(t) P_{LO}    (2.9.35)

Compare this result with Eq. (2.9.29), where an ideal 90 degree hybrid was used. There is a signal RF power reduction of approximately 5 dB using the 3  3 coupler. Because one of the three output ports is not used, obviously a portion of the input optical power is dissipated through this port.
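The behavior of the 3×3-coupler phase diversity receiver can be verified numerically. The sketch below, with assumed amplitudes and responsivity, evaluates Eqs. (2.9.31)–(2.9.35) and confirms that i₁² + i₂² equals 0.32R²P_sP_LO regardless of the signal phase:

```python
import numpy as np

# Sketch of Eqs. (2.9.31)-(2.9.35): phase diversity with the 3x3 coupler.
# Amplitudes and responsivity are illustrative assumptions.
R, A_s, A_lo = 0.8, 1.0, 10.0
c = np.sqrt(0.4) * np.exp(1j * 3 * np.pi / 4)      # cross-coupling coefficient

for dphi in np.linspace(0, 2 * np.pi, 5):
    E_s = A_s * np.exp(1j * dphi)                  # signal field (phase = dphi)
    E_lo = A_lo                                    # LO field taken as reference
    E1 = np.sqrt(0.2) * E_s + c * E_lo             # Eq. (2.9.31)
    E2 = c * E_s + np.sqrt(0.2) * E_lo             # Eq. (2.9.32)
    # Photocurrents with the direct-detection terms removed (AC parts only)
    i1 = R * (abs(E1)**2 - 0.2 * abs(E_s)**2 - 0.4 * abs(E_lo)**2)
    i2 = R * (abs(E2)**2 - 0.4 * abs(E_s)**2 - 0.2 * abs(E_lo)**2)
    total = i1**2 + i2**2
    # Eq. (2.9.35) predicts 0.32 R^2 Ps Plo, independent of the signal phase.
    print(f"dphi = {dphi:4.2f} rad  i1^2+i2^2 = {total:.4f} "
          f"(expected {0.32 * R**2 * A_s**2 * A_lo**2:.4f})")
```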

2.9.5 Coherent OSA based on swept frequency laser

As discussed previously, coherent detection offers not only high detection sensitivity but also excellent frequency selectivity, because the coherent detection process converts the optical spectrum into the RF domain. Optical spectrum analysis using a tunable laser and coherent detection has become practical because tunable semiconductor lasers with wide continuous tuning ranges and fast wavelength-sweeping speeds are now widely available. Because the linewidth of tunable lasers is at the sub-megahertz level, unprecedented spectral resolution can be obtained with coherent OSAs (Baney et al., 2002). Fig. 2.9.9 shows the block diagram of a coherent OSA; it is basically the balanced coherent detection receiver discussed in the previous section, except that the LO is a wavelength-swept laser that sweeps across the wavelength window of interest. The purpose of using balanced coherent detection is to minimize the direct-detection component and improve the measurement signal-to-noise ratio.


Fig. 2.9.9 Block diagram of a coherent OSA. LPF, lowpass filter; ADC, analog-to-digital converter; DSP, digital signal processing.

Because the detected differential photocurrent is linearly proportional to the optical field of the input optical signal E_s(t), this setup allows the measurement of the incident optical field strength. First, consider a monochromatic input optical signal

E_s(t) = A_s(t)\exp\left[j2\pi\nu_s t + j\varphi_s(t)\right]    (2.9.36)

where A_s(t) is the amplitude of the incident light, ν_s is the mean optical frequency, and φ_s(t) is the phase noise. The LO is a swept-frequency laser whose optical field is

E_{LO}(t) = A_{LO}(t)\exp\!\left[j2\pi\int_0^t \nu_{LO}(\tau)\,d\tau + j\varphi_{LO}(t)\right]    (2.9.37)

where A_LO(t) and φ_LO(t) are the amplitude and phase noise of the local oscillator, ν_LO(τ) = ν_0 + γτ is the instantaneous scanning frequency of the local oscillator so that \int_0^t \nu_{LO}(\tau)\,d\tau = \nu_0 t + \frac{1}{2}\gamma t^2, ν_0 is the initial scanning frequency, and γ is the rate of change of frequency with time. At the balanced coherent receiver, the differential photocurrent detected by the photodiodes is

\Delta i(t) = 2R|A_s(t)||A_{LO}(t)|\cos(\pi\gamma t^2 + \psi)    (2.9.38)

where R is the responsivity of the two photodiodes and ψ is the combined phase of the LO and the input optical signal. Assuming that the temporal response of the lowpass electrical filter is Gaussian, the differential photocurrent after the filter becomes

\Delta i(t) = 2R|A_s(t)||A_{LO}(t)|\exp\!\left[-\left(\frac{t}{\tau}\right)^2\right]\cos(\pi\gamma t^2 + \psi)    (2.9.39)

where τ is equal to the filter bandwidth divided by γ. Fig. 2.9.10 shows an example of the measured photocurrent as a function of time, which represents the signal optical spectrum. In this setup, a 1550-nm tunable external-cavity semiconductor laser with a 100-kHz spectral linewidth was used as the local oscillator.

Fig. 2.9.10 (A) A detected coherent detection photocurrent of an external-cavity laser (yellow solid line; dark gray in print versions) and theoretical calculation result (red solid line; light gray in print versions). (B) Optical spectrum of the external-cavity laser.

A balanced photodiode pair was used, with each photodiode connected to one output of the 3-dB coupler, which ensures subtraction of the direct-detection intensity noise in this heterodyne coherent detection receiver. The wavelength of the local oscillator was swept linearly under computer control. In this experiment, the wavelength scanning rate was 0.5 nm/s, which is equivalent to approximately 62.5 GHz/s. After the balanced coherent detection, a lowpass electrical filter with a bandwidth of approximately 500 kHz was used. The electrical signal was amplified by a logarithmic amplifier with a 90-dB dynamic range. A 5-MHz, 12-bit A/D converter was used for data acquisition, operated at 2 megasamples per second. The sampling interval was therefore equivalent to 31.25 kHz, which defined the frequency resolution of the data acquisition. It is important to note that the actual spectral resolution of this OSA was determined by the spectral linewidth of the local oscillator, which was 100 kHz in this case.

As the frequency of the LO sweeps, the incident optical field E_s is effectively sampled in the frequency domain by coherent detection. Therefore, the spectral density of the signal optical field is represented by the average fringe envelope described by Eq. (2.9.39), and the power spectral density can be calculated from the square of the average envelope amplitude. In Fig. 2.9.10, the input optical signal is a single-frequency laser with narrow linewidth; the beating with the swept-frequency LO therefore simply produces a photocurrent proportional to cos(πγt² + ψ), as shown in Fig. 2.9.10A, where the measured photocurrent is shown as the yellow line and the calculated result as the red line. Ideally, if the local oscillator has a linear phase, the phase of the input optical signal can be extracted from this measurement. However, a practical LO using an external-cavity semiconductor laser may have phase discontinuities during frequency sweeping, especially when the frequency tuning range is large, so this phase measurement may not be reliable. If LO phase continuity can be guaranteed during a frequency sweep, this coherent OSA technique could also be used to measure the phase distortion introduced by fiber chromatic dispersion in an optical system.

In a coherent OSA using a frequency-swept LO, an important aspect is the translation of the measured time-dependent photocurrent waveform into an optical power spectrum; this translation depends on the speed of the LO frequency sweep. The resolution of the coherent OSA is determined by the linewidth of the local oscillator and the bandwidth of the receiver filter, and the choice of the lowpass filter bandwidth must take the LO frequency scanning rate into account. Assuming that the LO frequency scanning rate is γ and the desired coherent OSA frequency resolution is Δf, the scanning time across a resolution bandwidth Δf is

\Delta t = \frac{\Delta f}{\gamma}    (2.9.40)

Equivalently, to obtain the signal spectral density within a time interval Δt, the electrical filter bandwidth B should be wide enough that

B \geq \frac{1}{\Delta t} = \frac{\gamma}{\Delta f}    (2.9.41)

In this measurement example, γ = 62.5 GHz/s and B = 500 kHz, so the spectral resolution is approximately 125 kHz according to Eq. (2.9.41). Since the LO linewidth is only 100 kHz, it has a negligible effect on the spectral resolution of this OSA. Although a wider filter bandwidth gives higher spectral resolution, it also admits more wideband noise and thus degrades the receiver signal-to-noise ratio.

For measuring a wideband optical signal, coherent detection is equivalent to a narrowband optical filter: by continuously scanning the LO frequency across the range of interest, the signal power spectral density is obtained. Because of its excellent spectral resolution and detection sensitivity, a coherent OSA can be used as an optical system performance monitor, able to identify modulation data rates as well as modulation formats from the measured optical spectral characteristics. As an example, Fig. 2.9.11A and B show the measured optical spectra of a 10-Gb/s optical system with RZ and NRZ modulation formats, and Fig. 2.9.12A and B show the measured spectra of a 2.5-Gb/s optical system with RZ and NRZ modulation formats.
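The resolution bookkeeping of Eqs. (2.9.39)–(2.9.41) can be reproduced with a few lines of code. The sketch below uses the sweep rate and filter bandwidth of the example above; the unit amplitudes in the chirped beat note are illustrative assumptions.

```python
import numpy as np

# Numbers from the measurement example in the text.
gamma = 62.5e9        # LO frequency sweep rate (Hz/s), i.e., 0.5 nm/s near 1550 nm
B_filter = 500e3      # receiver lowpass-filter bandwidth (Hz)

delta_f = gamma / B_filter        # Eq. (2.9.41): resolvable optical-frequency step
delta_t = delta_f / gamma         # Eq. (2.9.40): dwell time per resolution element
print(f"spectral resolution ~ {delta_f/1e3:.1f} kHz, "
      f"dwell time per resolution element ~ {delta_t*1e6:.1f} us")

# Chirped beat note of Eq. (2.9.39) around the zero-IF crossing
# (unit amplitudes and zero phase offset assumed, purely for illustration).
R, A_s, A_lo, psi = 1.0, 1.0, 1.0, 0.0
tau = B_filter / gamma            # Gaussian envelope width used in the text
t = np.linspace(-5 * tau, 5 * tau, 4001)
di = 2 * R * A_s * A_lo * np.exp(-(t / tau) ** 2) * np.cos(np.pi * gamma * t ** 2 + psi)
print(f"peak beat-note amplitude: {np.max(np.abs(di)):.2f} (arb. units), "
      f"envelope FWHM ~ {2 * tau * np.sqrt(np.log(2)) * 1e6:.1f} us")
```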

Fig. 2.9.11 Optical spectrum of 10 Gb/s RZ (left) and 10 Gb/s NRZ (right) signals measured by a COSA. Horizontal: 15.6 GHz/div. Vertical: 8 dB/div.

Fig. 2.9.12 Optical spectrum of (A) 2.5 Gb/s RZ and (B) 2.5 Gb/s NRZ signals measured by a COSA. Horizontal: 7.8 GHz/div. Vertical: 8 dB/div.

Clearly, the coherent OSA reveals the detailed spectral features of the optical signals. This enables precise identification of the data rate and modulation format of the signal in the optical domain, which cannot be achieved by conventional grating-based OSAs.

2.10 Waveform measurement

Waveform measurement is a very important aspect of both electronic and optical systems. An oscilloscope is commonly used to characterize signal waveforms, transient effects, and distortion of the system response, as well as eye diagrams in digital communication systems. To display high-speed time-domain waveforms, an oscilloscope must have sufficient bandwidth, low noise, and excellent response linearity. Although the traditional oscilloscope is based on analog detection and linear display, the rapid development of high-speed silicon technology and digital electronic circuits has made digital oscilloscopes increasingly popular. In addition, the combination with computerized digital signal processing and display has greatly improved the capability and flexibility of oscilloscopes. In this section, we discuss the operating principles of various types of oscilloscopes, including analog oscilloscopes, digital oscilloscopes, and sampling oscilloscopes, as well as their application in characterizing digital optical communication systems.

2.10.1 Oscilloscope operating principle

The traditional oscilloscope is based on the analog format, in which the input time-domain electrical signal is linearly amplified and displayed on the screen. Fig. 2.10.1 illustrates the operating principle of an analog oscilloscope based on the cathode ray tube (CRT). The input signal to be measured is linearly amplified and applied directly to a pair of vertical deflection plates inside the CRT. This results in an angular deflection of the electron beam in the vertical direction that is linearly proportional to the amplitude of the electrical signal.

Fig. 2.10.1 Illustration of an analog oscilloscope using a cathode ray tube.

The time scale is introduced by a sweep voltage source inside the oscilloscope, which ramps at a uniform rate and is applied to a pair of horizontal deflection plates in the CRT. This generates a repetitive and uniform left-to-right horizontal motion on the screen. The direction of the high-speed electrons produced by the electron gun inside the CRT is steered by the vertical and horizontal deflectors, and therefore the electron beam projected on the phosphor screen precisely traces the waveform of the electrical signal to be measured.

Fig. 2.10.2 shows the block diagram of an analog oscilloscope. Each input signal channel passes through a preamplifier, a delay line, and a power amplifier before it is connected to the vertical deflection plates of the CRT. A trigger can be selected from either a signal channel or a separate external trigger input. The purpose of triggering is to provide the timing to start each horizontal sweep of the CRT; for periodic electrical signals, the trigger period should therefore be either the same as, or an exact multiple of, the signal period. A trigger level can be set by the trigger generator and compared with the selected trigger signal. At the moment the trigger signal amplitude reaches the trigger level, a pulse is created by the trigger generator, which sets the starting time of each horizontal sweep. A linear ramp signal drives the horizontal sweep, and an amplifier adjusts the horizontal driving amplitude (Fig. 2.10.3). The electron beam inside the CRT is normally blocked unless an unblanking signal is applied; the unblanking gate is activated only during the ramp period, which determines the display time window. An electrical delay line in each signal channel allows horizontal adjustment of the waveform on the oscilloscope screen. Finally, a trigger hold-off pulse marks the end of the display window.

In recent years, rapid advances in digital electronic technologies, such as microprocessors, digital signal processing, and memory storage, have made digital oscilloscopes more and more popular.

Fig. 2.10.2 Block diagram of an analog oscilloscope.

Fig. 2.10.3 Waveforms and their synchronization at different parts of the oscilloscope.

Fig. 2.10.4 Block diagram of a digital oscilloscope.

The combination of an oscilloscope with a computer enables many new capabilities and functionalities that could not be achieved previously. Fig. 2.10.4 shows the block diagram of a basic digital oscilloscope. In a digital oscilloscope, an A/D converter samples the signal of each channel and converts the analog voltage into a digital value, which is stored in memory. The sampling frequency and timing are determined by the time base of a crystal oscillator. At a later time, the series of sampled digital values can be retrieved from memory and a graph of signal voltage versus time can be reconstructed and displayed on the screen. The graphic display of a digital oscilloscope is quite flexible, using digital display technologies such as a liquid-crystal panel or even a computer screen. As in analog oscilloscopes, triggering is also necessary in digital oscilloscopes to determine the exact instant at which to begin signal capture. Two different triggering modes are commonly used in a digital oscilloscope.

Fig. 2.10.5 Illustration of edge triggering in a digital oscilloscope.

The most often used triggering mode is edge triggering. As shown in Fig. 2.10.5, in this mode a trigger pulse is generated at the moment the input signal passes through a threshold voltage called the trigger level. The trigger pulse provides the exact timing to start each data acquisition process. To ensure the timing accuracy of the trigger pulse, the trigger level can be adjusted so that the slope of the signal at the trigger level is steep enough. An oscilloscope usually also allows the selection of either a positive-slope or a negative-slope trigger, as illustrated in Fig. 2.10.5. The general rules of trigger-level selection are that (a) it has to be within the signal amplitude range, (b) near the trigger level the signal should have the smallest time jitter, and (c) at the trigger level the signal should have the highest slope.

Another trigger mode often used in a multichannel digital oscilloscope is pattern triggering. Instead of selecting a trigger level, the trigger condition for pattern triggering is defined as a set of states of the multichannel inputs, and a trigger pulse is produced when the defined pattern appears. Fig. 2.10.6 shows an example of pattern triggering in a four-channel digital oscilloscope, where the selected trigger pattern is high-low-high-low (HLHL). In this case, as soon as the preselected pattern (high, low, high, and low for channels 1, 2, 3, and 4, respectively) appears, a trigger pulse is generated. Pattern triggering is useful for isolating simple logic patterns in digital systems.

In a digital oscilloscope, the speed of the horizontal sweep is determined by the selection of a time base. The sweep speed control, labeled time/division, is usually set in a 1, 2, 5 sequence, such as 1 ms/division, 2 ms/division, 5 ms/division, 10 ms/division, and so on.

Fig. 2.10.6 Pattern triggering in a four-channel digital oscilloscope using HLHL as the trigger pattern.

To accurately evaluate the transient time of the measured waveform, the horizontal sweeping speed of the oscilloscope needs to be precise; in a digital oscilloscope, the precision of the time base is mainly determined by the accuracy of the crystal oscillator.

The sweep control in an oscilloscope allows the selection of single sweep or normal sweep. If the single-sweep button is pushed, the signal is acquired once and displayed on the screen when the trigger event is recognized; acquisition then stops until the single-sweep button is pressed again. In normal sweep, the oscilloscope operates in continuous-sweep mode: as long as trigger pulses are generated regularly, the signal data are continually acquired and the graph on the display screen is continually updated. However, if the trigger condition is not met, the sweep stops. To prevent the horizontal sweep from stopping, the autotrigger mode may be used. This is similar to normal sweep except that, if the trigger condition is not met within a certain time (the trigger timeout), a forced sweep is initiated by an artificial trigger. In autotrigger mode, sweeping therefore never really stops, even if the trigger condition is not met; the disadvantage is that the forced sweep is not synchronized to the signal, so unless the periodic signal shares the oscilloscope time base, the phase of the measured waveform will not be stable. In addition to the sweep-speed selection, the horizontal phase of the measured waveform can be adjusted by changing the relative delay between the signal and the trigger pulse; as shown in Fig. 2.10.4, this is accomplished by a tunable trigger delay.

2.10.2 Digital sampling oscilloscopes

As discussed in the last section, data acquisition in a digital oscilloscope is accomplished by digital sampling and data storage. To capture enough detail of the waveform, the sampling has to be continuous and fast compared to the time scale of the waveform; this is referred to as real-time sampling. According to the Nyquist criterion, f_s ≥ 2f_sig,max is required to characterize the waveform, where f_s is the sampling frequency and f_sig,max is the maximum signal frequency. For a sinusoidal waveform, the maximum frequency is equal to the fundamental frequency f_sig, and sampling at the Nyquist rate, that is, two samples per period, is sufficient to reproduce the waveform. However, this presumes that the waveform is known to be sinusoidal before the measurement. In general, for an unknown waveform, the maximum frequency f_sig,max can be much higher than the fundamental frequency f_sig, so to capture waveform details the sampling frequency has to be much higher than f_sig; typically f_s ≥ 5f_sig is necessary for an accurate measurement. That means that to characterize a 10-GHz signal, the sampling frequency must be higher than 100 GHz, which is a very challenging task for the amplifiers and electronic circuits in the oscilloscope. The purpose of a sampling oscilloscope is to measure high-speed signals using relatively low sampling rates, thus relaxing the requirements on the electronic circuits.

The operating principle of a sampling oscilloscope is based on undersampling of repetitive waveforms. Although only a small number of samples may be acquired within each period of the waveform, after sampling over many periods and combining the data, the waveform can be reconstructed with a large number of sampling points per period. This method is also called equivalent-time sampling, and it has two basic requirements: (1) the waveform must be repetitive, and (2) a stable trigger and a precisely controlled relative delay must be available.

Fig. 2.10.7A illustrates the principle of waveform sampling. Since the signal waveform must be repetitive, a periodic trigger pulse train is used, and the synchronization between the first trigger pulse and the signal waveform is determined by the trigger-level selection. A discrete data point is sampled at the moment of each trigger pulse. By setting the trigger period to be slightly longer than the signal period by ΔT, the data samples fall at different positions within successive periods of the signal waveform. Since only one data point is acquired within each trigger period, this is, by definition, sequential sampling. If the total number of sampling points required to reconstruct the waveform on the screen is N and the time window of the waveform to display is ΔT_dsp, the sequential delay of each sampling event within a signal period should be

\Delta T = \frac{\Delta T_{dsp}}{N - 1}    (2.10.1)

Fig. 2.10.7B shows the reconstructed waveform; ΔT represents the time-domain resolution of the sampling, and it can be chosen very short to achieve high resolution while still using a relatively low sampling frequency.
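A minimal sketch of sequential equivalent-time sampling, using an assumed repetitive test waveform, shows how one sample per trigger period reconstructs the waveform on the fine grid defined by Eq. (2.10.1):

```python
import numpy as np

# Minimal sketch of sequential (equivalent-time) sampling: one sample is taken
# per trigger, with the trigger period longer than the signal period by dT.
# The test waveform and timing values are illustrative assumptions.
T_sig = 1e-9                     # signal period (1 ns)
N = 101                          # samples needed to fill the display window
T_dsp = T_sig                    # display one full signal period
dT = T_dsp / (N - 1)             # Eq. (2.10.1): sequential delay per trigger
T_trig = T_sig + dT              # trigger period slightly longer than the signal period

def waveform(t):
    """Repetitive test signal with period T_sig."""
    return np.sin(2 * np.pi * t / T_sig) + 0.3 * np.sin(6 * np.pi * t / T_sig)

sample_times = np.arange(N) * T_trig      # real acquisition instants
samples = waveform(sample_times)          # one point per trigger period
equiv_time = np.arange(N) * dT            # equivalent time within one signal period

# The reconstructed waveform matches the signal sampled on the fine dT grid.
print(np.allclose(samples, waveform(equiv_time)))
print(f"acquired {N} points in {sample_times[-1] * 1e9:.1f} ns "
      f"to cover a {T_dsp * 1e9:.1f} ns display window")
```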

Fig. 2.10.7 Illustration of (A) sequential sampling method and (B) the waveform reconstructed after sequential sampling.

A compromise has to be made, though, when choosing a high resolution: data acquisition takes longer because a large number of signal periods has to be involved. A further disadvantage is that, since only one sample is taken per signal period, the measurement time can become very long when the signal period is long.

Random sampling is an alternative sampling algorithm, illustrated in Fig. 2.10.8. In random sampling, the signal is continuously sampled at a frequency that is independent of the trigger frequency; therefore, the number of data points sampled within a trigger period may be more or less than one, depending on the frequency difference between the sampling clock and the trigger. During the measurement, each sample is stored in memory together with the time at which it was taken. The time difference between each sampling instant and the nearest trigger is then collected and used to arrange the samples and reconstruct the signal waveform on the display screen. Assume that the sampling period is ΔT_s, the trigger period is ΔT_tg, and the time window of the signal to display on screen is ΔT_dsp. If ΔT_dsp = ΔT_tg, then on average N_ave = ΔT_tg/ΔT_s data points are sampled within every trigger period, and the total time required to accumulate N sampling points for graphic display is T_N = NΔT_s = (N/N_ave)ΔT_tg. With random sampling, the sampling rate is not restricted to one sample per trigger period as in sequential sampling, so the sampling efficiency can be higher, limited only by the sampling speed the oscilloscope can offer. The major disadvantage is that, because the sampling clock has a different frequency from the trigger, the reconstructed sampling points within the display window may not be equally spaced, as shown in Fig. 2.10.8B.

Fig. 2.10.8 Illustration of (A) random sampling method and (B) the waveform reconstructed after random sampling.

Therefore, more samples are usually required than in sequential sampling to reconstruct the original signal waveform. It is interesting to note that for all the oscilloscopes and sampling methods discussed so far, a trigger is always required to start the sweep and synchronize data sampling with the signal. In recent years, the capability of high-speed digital signal processing has greatly expanded, bringing benefits to many technical areas, including instrumentation. Microwave transition analysis is a sampling technique that does not require a separate trigger: it repetitively samples the signal and uses DSP algorithms such as the FFT to determine the fundamental frequency and higher-order harmonics (or subharmonics) of the signal. Once the fundamental period of the signal is determined, the measurement can be performed in a way similar to sequential sampling.
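The random-sampling reconstruction described above can be sketched as follows; the waveform, trigger period, and sampling period are illustrative assumptions:

```python
import numpy as np

# Sketch of random (asynchronous) equivalent-time sampling: the ADC runs at a
# rate unrelated to the trigger, and each sample is re-timed modulo the
# trigger period before display.
T_trig = 1e-9                          # trigger period = signal period (1 ns)
T_s = 0.1234567e-9                     # sampling period, unrelated to T_trig

def waveform(t):
    """Repetitive test signal with period T_trig."""
    return np.sin(2 * np.pi * t / T_trig) + 0.3 * np.cos(4 * np.pi * t / T_trig)

t_sample = np.arange(800) * T_s        # raw acquisition instants
values = waveform(t_sample)
t_equiv = np.mod(t_sample, T_trig)     # time relative to the preceding trigger

order = np.argsort(t_equiv)            # fold all samples onto one signal period
t_equiv, values = t_equiv[order], values[order]

# The folded samples trace the original waveform but are not equally spaced.
print(np.allclose(values, waveform(t_equiv)))
print(f"sample spacing ranges from {np.diff(t_equiv).min():.2e} s "
      f"to {np.diff(t_equiv).max():.2e} s")
```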

2.10.3 High-speed real-time digital analyzer

The digital sampling oscilloscopes discussed in the previous section are able to measure high-speed waveforms, but those waveforms must be repetitive. For many applications the waveforms are random, so digital sampling oscilloscopes cannot be used. High-speed optical communication is a good example: signal data rates can be higher than 100 Gb/s, but the waveforms are generally not repetitive. The capability of measuring waveforms in real time is desirable for characterizing system performance, including pattern-dependent transient effects and time synchronization among different channels.

A traditional digital oscilloscope, with the general block diagram shown in Fig. 2.10.4, could be used to measure random waveforms if the bandwidth of the amplifiers were wide enough and the analog-to-digital converters (ADCs) were fast enough. Usually, the sampling speed of the ADCs is the bottleneck: as determined by the Nyquist theorem, the ADC sampling speed must be more than twice the analog bandwidth of the signal. For example, a sampling rate of at least 2 GS/s is required to measure a waveform with 1-GHz bandwidth. Because of the rapid advance of CMOS digital electronics, ADC sampling speeds have increased considerably over the past 20 years, enabling a new generation of digital oscilloscopes for real-time waveform measurements with multi-GHz analog bandwidths. To produce ultra-high-speed real-time digital oscilloscopes for applications requiring >25-GHz bandwidths, a number of techniques have been used, including synchronous time interleaving, frequency down-conversion, and balanced frequency down-conversion.

Fig. 2.10.9A shows the block diagram of synchronous time interleaving for one channel of a digital oscilloscope. The signal is split into two equal parts after a low-noise preamplifier, and each part is sampled by an ADC. The sampling times of the two ADCs are interleaved so that the combined sampling speed is doubled compared to using only one ADC.

Fig. 2.10.9 (A) Block diagram of a single channel digital oscilloscope based on synchronous time interleaving. (B) Sample time interleaving between the two ADCs to double the overall sampling speed.

Time interleaving of the sampling events can be achieved in different ways; the example illustrated in Fig. 2.10.9B is based on leading-edge and trailing-edge triggering from a square-wave clock. The signal waveform can then be de-interleaved and stitched together from the sampled data stored in the two memories by a digital signal processing (DSP) unit. As for the sampling oscilloscope described in the previous section, precise time interleaving requires a stable clock with low time jitter and accurately controlled sampling times for the ADCs.

In addition to time interleaving, the sampling rate of a digital oscilloscope can also be increased through frequency down-conversion, with the block diagram shown in Fig. 2.10.10. In this example, the signal with bandwidth B is first split into two frequency bands, each with a bandwidth of B/2, by a frequency demultiplexer.

Fig. 2.10.10 Block diagram of a single channel digital oscilloscope based on frequency down-conversion.

The high-frequency band is down-converted by mixing with a local oscillator. The signals in the two branches of Fig. 2.10.10, each now with bandwidth B/2, are then amplified and lowpass filtered to remove high-frequency noise and the frequency-aliasing components caused by mixing, and sampled by two ADCs. Time interleaving can also be applied to the signal in each frequency band to further increase the sampling speed. The original waveform can be reconstructed from the data stored in the memories by DSP, which performs frequency up-conversion and time synchronization where necessary. Digital filtering in the DSP unit can also help remove any spectral overlap between the two bands caused by the slow roll-off of the analog filters in the demultiplexer. A delay line is placed in the arm without frequency conversion to balance the differential delay caused by mixing and frequency down-conversion in the other arm. However, the differential delay introduced in the frequency down-conversion process can be frequency dependent, which can affect the accuracy of waveform reconstruction and needs to be carefully compensated in the DSP process.

Balanced frequency down-conversion is another sampling technique, which avoids the differential delay and amplitude difference caused by conventional frequency down-conversion. In the block diagram shown in Fig. 2.10.11, for the purpose of illustration, the signal spectrum is divided into a high-frequency band A_U and a low-frequency band A_L.

Fig. 2.10.11 Block diagram of a single channel digital oscilloscope based on the balanced frequency down-conversion. Frequency conversion and spectral flipping in the upper and the lower branches are also illustrated.

The signal is equally split into two copies by a power splitter, so that the same signal A(t) appears in both arms of the splitter output. The signals in the upper and lower arms are mixed with [1 + 2cos(2πBt)] and [1 − 2cos(2πBt)], respectively; the DC part is represented by a direct path, and the AC part is represented by mixing with a local oscillator (LO) at a frequency f_LO = B. The RF mixing process flips the signal spectrum, so that A_L shifts to the high-frequency band and becomes Ā_L, and A_U shifts to the low-frequency band and becomes Ā_U, where Ā_x represents the frequency-flipped copy of A_x with x = L, U. As there is a 180 degree phase shift of the LO between the mixers in the upper and lower arms of the power splitter output, the signals reaching the ADCs in these two arms after lowpass filtering are V₁ = A_L + Ā_U and V₂ = A_L − Ā_U, respectively. The original waveform A(t) can be digitally reconstructed from A_L = (V₁ + V₂)/2 and Ā_U = (V₁ − V₂)/2, where Ā_U is converted back to A_U through frequency up-conversion. This frequency-conversion technique is balanced, which minimizes the differential delay between the upper and lower branches in the block diagram. Although the frequency f_LO used for RF mixing is the same as that of the ADC sampling clock, there is no need for phase synchronization between them. For that reason, this technique is also known as asynchronous time interleaving (Knierim and Lamb, 2016).
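The balanced down-conversion and digital reconstruction can be illustrated with a short simulation. The sketch below substitutes ideal FFT brick-wall filters for the analog lowpass filters and uses bin-aligned test tones, so the reconstruction is essentially exact; all parameters are illustrative assumptions.

```python
import numpy as np

# Sketch of balanced frequency down-conversion ("asynchronous time
# interleaving") for one oscilloscope channel, using ideal FFT filters.
N, fs = 4096, 16e9
t = np.arange(N) / fs
df = fs / N
B = 1024 * df                              # full signal bandwidth (4 GHz here)

def band_filter(x, f_lo, f_hi):
    """Ideal band filter keeping spectral content with f_lo <= f <= f_hi."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < f_lo) | (f > f_hi)] = 0.0
    return np.fft.irfft(X, len(x))

# Test signal with content below and above B/2 (bin-aligned tones).
A = (np.cos(2 * np.pi * 300 * df * t + 0.4)          # low band, < B/2
     + 0.8 * np.cos(2 * np.pi * 700 * df * t - 1.1)  # high band
     + 0.5 * np.cos(2 * np.pi * 900 * df * t + 2.0)) # high band

lo = 2 * np.cos(2 * np.pi * B * t)
V1 = band_filter(A * (1 + lo), 0, B / 2)   # upper arm:  A_L + flipped(A_U)
V2 = band_filter(A * (1 - lo), 0, B / 2)   # lower arm:  A_L - flipped(A_U)

A_L = (V1 + V2) / 2                        # recovered low band
A_U_flip = (V1 - V2) / 2                   # flipped copy of the high band
A_U = band_filter(A_U_flip * lo, B / 2, B) # up-convert and re-flip the high band

A_rec = A_L + A_U
print("max reconstruction error:", np.max(np.abs(A_rec - A)))
```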

2.10.4 High-speed sampling of optical signals

The measurement of optical waveforms in the time domain is important for the characterization of optical devices, systems, and networks. The simplest and most common way to measure an optical waveform is to use an oscilloscope equipped with an optical detector, as shown in Fig. 2.10.12. To measure high-speed and typically weak optical signals, the photodiode must have a wide bandwidth and low noise. For a photodiode with 1-A/W responsivity and a 50-Ω load, an optical power of −20 dBm (10 μW) produces only 0.5 mV at the photodetector output; a low-noise preamplifier is therefore usually required before the signal is sent to the electrical oscilloscope. In modern optical communication and optical signal processing systems, short optical pulses at the picosecond and even femtosecond level are often used, which would require an extremely wide bandwidth of the photodiodes and amplifiers to characterize these waveforms.

Fig. 2.10.12 Block diagram of an optical oscilloscope.

2.10.4.1 Nonlinear optical sampling

Direct optical sampling, shown in Fig. 2.10.13, has been used to measure ultrafast optical waveforms without the need for an ultra-wideband photodetector. In this configuration, the optical signal under test is mixed with a short pulse train in a nonlinear optical mixer. The mixer produces an output only at the times when the reference pulses are present; therefore, the optical signal is sampled by these short pulses. The electrical bandwidth of the photodiode only needs to exceed the repetition rate of the short pulses and can be much narrower than the bandwidth of the optical signal under test. An optical sampling oscilloscope can be built on this sampling technique with the same operating principle and time alignment as an electrical sampling oscilloscope.

The key device in optical sampling is the nonlinear optical mixer, which can be realized with nonlinear optical components such as semiconductor optical amplifiers (SOAs), nonlinear optical fibers, and periodically poled LiNbO3 (PPLN) (Jundt et al., 1991; Fejer et al., 1992). A simple example of such a nonlinear mixer, shown in Fig. 2.10.14, is based on four-wave mixing (FWM) in a nonlinear medium, which can be either an SOA or a piece of highly nonlinear fiber. Assume that the wavelengths of the short-pulse mode-locked laser and of the optical signal to be measured are λ_p and λ_s, respectively. Degenerate FWM between these two wavelength components creates a frequency-conjugated component at λ_c = λ_pλ_s/(2λ_s − λ_p). This conjugate wave can be selected by an optical bandpass filter (OBPF) before it is detected by a photodiode. As discussed in Section 1.4.5, the power of the FWM component is proportional to the product of the pump power squared and the signal power in the time domain, P_FWM(t) ∝ P_p²(t)P_s(t).

Fig. 2.10.13 Direct sampling of high-speed optical waveform.

Fig. 2.10.14 Illustration of optical sampling in a nonlinear medium using FWM. OBPF, optical bandpass filter; MLL, mode-locked laser; PD, photodiode.

This process samples the signal optical power precisely at the timing of each short optical pulse. One of the challenges of optical sampling is the low nonlinear coefficient of photonic devices and optical fibers; both the optical signal and the sampling pulses therefore have to be optically amplified prior to mixing. Another challenge is that dispersion in the optical devices or fibers creates a walk-off between the signal and the sampling pulse, leaving a relatively narrow optical bandwidth available for the optical signal. Nevertheless, optical sampling has been a research topic for many years, and various techniques have been proposed to increase the sampling efficiency and the optical bandwidth (Diez et al., 1999; Westlund et al., 2004; Nogiwa et al., 1999; Kikuchi et al., 1998; Li et al., 2004).

Although nonlinear optical sampling is a powerful technique for characterizing ultrafast optical waveforms, it only measures the optical intensity, and the phase information carried by the optical signal is usually lost. In many cases optical phase information is important, especially in phase-modulated optical systems, so alternative techniques are required to characterize the complex field of the optical signal.
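A simple numerical sketch of the sampling relation P_FWM(t) ∝ P_p²(t)P_s(t), with assumed pulse width, repetition rate, and signal waveform, shows how integrating each FWM burst reproduces the signal power at the pulse instants:

```python
import numpy as np

# Sketch of nonlinear optical sampling: each short pump pulse reads out the
# signal power through P_FWM ~ Pp^2 * Ps.  All values are illustrative.
T_rep, tau_p = 100e-12, 1e-12          # 10-GHz sampling pulses, 1-ps width
fs = 2e13                              # 50-fs simulation grid
t = np.arange(0, 128 * T_rep, 1 / fs)

Ps = 1.0 + 0.5 * np.sin(2 * np.pi * 0.8e9 * t)      # signal power waveform
pulse_times = (np.arange(128) + 0.5) * T_rep
Pp = sum(np.exp(-((t - t0) / tau_p) ** 2) for t0 in pulse_times)

P_fwm = Pp ** 2 * Ps                   # instantaneous FWM power
# A slow photodiode integrates each FWM burst; emulate this by summing the
# simulated power in a window around each pump pulse.
samples = np.array([P_fwm[np.abs(t - t0) < 5 * tau_p].sum() for t0 in pulse_times])
samples /= samples.max()

# The normalized samples reproduce Ps at the pulse instants (correlation ~1).
print(np.corrcoef(samples, Ps[np.searchsorted(t, pulse_times)])[0, 1])
```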

2.10.4.2 Linear optical sampling

It is well known that coherent detection preserves the phase information of the optical signal; the principle of coherent detection was presented in Section 2.9. In fact, linear optical sampling can be understood as a modified coherent homodyne detection that uses a short-pulse source as the local oscillator (Dorrer et al., 2003; Dorrer, 2006a,b). The circuit configuration of linear optical sampling is shown in Fig. 2.10.15: a 90 degree optical hybrid combines the optical signal with a mode-locked short-pulse laser. Although the 90 degree optical hybrid may be constructed in various ways, the following discussion conveys the general operating principle. Consider the field of the optical signal to be measured,

E_s(t) = A_s(t)\exp\left[j\omega_s t + j\varphi_s(t)\right]    (2.10.2)

and the field of the mode-locked short pulse train,

E_{LO}(t) = A_{LO}(t)\exp\left(j\omega_{LO} t + j\varphi_{LO}\right)    (2.10.3)

where A_s(t) and A_LO(t) are the real amplitudes of the incoming signal and the LO, respectively, ω_s and ω_LO are their optical frequencies, and φ_s(t) and φ_LO are their optical phases. Since the LO is a short pulse train, A_LO(t) ≠ 0 only within discrete time windows around t_N = t_0 + NT, where t_0 is an arbitrary starting time, T is the pulse period, and N is an integer.

Fig. 2.10.15 Block diagram of linear optical sampling.

Using the circuit shown in Fig. 2.10.15 and following the derivation of coherent homodyne detection with phase diversity in Section 2.9.4, we can derive expressions similar to Eqs. (2.9.27) and (2.9.28):

i_R(t) = R A_s(t) A_{LO}(t)\cos\!\left(\Delta\varphi(t) - \frac{\pi}{4}\right)    (2.10.4)

i_I(t) = R A_s(t) A_{LO}(t)\sin\!\left(\Delta\varphi(t) - \frac{\pi}{4}\right)    (2.10.5)

where Δφ(t) = φ_s(t) − φ_LO is the phase difference between the optical signal to be measured and the LO. In this linear sampling system, the detection speed of the photodiodes is higher than the pulse repetition rate but much lower than both the signal modulation speed and the speed set by the width of the LO pulses; the photodiodes therefore integrate over each pulse window, and only one effective data point is acquired per pulse period. Eqs. (2.10.4) and (2.10.5) can thus be written as

i_R(t_N) = \mathrm{Re}\left[\tilde{A}(t_N)\right]    (2.10.6)

i_I(t_N) = \mathrm{Im}\left[\tilde{A}(t_N)\right]    (2.10.7)

where

\tilde{A}(t_N) = R\,A_s(t_N)e^{j\varphi_s(t_N)}\,A_{LO}(t_N)e^{-j(\varphi_{LO} + \pi/4)}    (2.10.8)

If the amplitude of the LO pulses is constant over time and the phase of the LO does not change within the measurement time window, \tilde{A}(t_N) is, in fact, the complex field of the input optical signal sampled at t = t_N. In practice, if the variation of the LO optical phase is slow enough, the measurement can be corrected through digital signal processing. The results of linear sampling can be represented on a constellation diagram, as illustrated in Fig. 2.10.16A: the complex optical field sampled by each LO pulse is represented as a dot on the complex plane, with the radius representing the field amplitude and the polar angle representing the optical phase. Fig. 2.10.16B shows an example of a constellation diagram of a 10-Gb/s BPSK signal generated by a Mach-Zehnder modulator (Dorrer, 2006a,b).
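A small sketch of Eqs. (2.10.4)–(2.10.8), using an assumed QPSK-like phase pattern and illustrative amplitudes, shows how the two photocurrents recover the complex field samples that form the constellation:

```python
import numpy as np

# Sketch of linear optical sampling: a QPSK-like test field is sampled by the
# LO pulse train; iR and iI recover its constellation.  Responsivity,
# amplitudes, and the data pattern are illustrative assumptions.
rng = np.random.default_rng(0)
R, A_lo = 0.8, 1.0
N = 16                                          # number of LO pulses / symbols

phi_s = rng.choice([0.25, 0.75, 1.25, 1.75], N) * np.pi   # QPSK phases
A_s = np.full(N, 1.0)                                     # constant amplitude
dphi = phi_s - 0.0                              # LO phase taken as zero reference

# Photocurrents integrated over each LO pulse (one sample per pulse period)
iR = R * A_s * A_lo * np.cos(dphi - np.pi / 4)  # Eq. (2.10.4)
iI = R * A_s * A_lo * np.sin(dphi - np.pi / 4)  # Eq. (2.10.5)

# Complex field samples, Eq. (2.10.8), rotated back by the known pi/4 offset
A_tilde = (iR + 1j * iI) * np.exp(1j * np.pi / 4)
print(np.round(np.angle(A_tilde) / np.pi, 2))   # recovered phases in units of pi
```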

Fig. 2.10.16 (A) Illustration of constellation diagram to represent complex optical field. (B) An example of a 10-Gb/s BPSK signal generated by an MZ modulator (Dorrer, 2006a,b).

In this example, the signal optical phase is switched between 0 and π while the amplitude shows negligible change.

2.10.4.3 Sampling oscilloscope based on single-photon detection

The digital sampling oscilloscopes described in Section 2.10.2 measure electrical waveforms by sampling over many repetition periods of the signal and reconstructing the waveform. High-speed modulated optical waveforms can likewise be sampled with short optical pulses using the linear or nonlinear techniques discussed in Sections 2.10.4.1 and 2.10.4.2. Another optoelectronic technique, which allows the measurement of broadband modulated but extremely weak optical signals, is single-photon counting. Single-photon counting based on avalanche photodiodes (APDs) and time-correlated photon-counting electronics has traditionally been used in molecular spectroscopy, which requires very high detection sensitivity. Operating in the Geiger mode, a single photon can trigger an electrical pulse in an APD, as described in Section 1.3.6, and each pulse event can be recorded by a digital electronic circuit. This detection technique has been used to construct a digital sampling oscilloscope for optical signals with >100 GHz modulation bandwidth and <10 ps time jitter; a state-of-the-art low-jitter niobium nitride-based SPD can allow 200-fs time resolution of sampling (Wang et al., 2020). For an SNSPD with a maximum photon-counting rate of a few MHz and 100 dark counts per second, the measurable extinction ratio of the waveform can reach 40 dB. Because a digital sampling oscilloscope measures periodic waveforms, the signal-to-noise ratio (SNR) can be improved by averaging over a large number of periods. In addition to the requirement of low timing jitter in the single-photon detection, the long-term averaging requires a clock with a very stable phase. Fig. 2.10.19 shows examples of reconstructed eye diagrams of binary modulated optical waveforms at 32, 40, 90, and 102 Gb/s, obtained with 64 fW average input optical power and 120 s acquisition time. For the same acquisition time, increasing the signal data rate reduces the SNR because of the increased bandwidth, as well as the increased susceptibility to clock phase stability and photon-counting timing jitter.

2.10.4.4 High-speed electrical ADC using optical techniques

As discussed in the last section, a sampling oscilloscope can only measure repetitive signals because it relies on sparse sampling and waveform reconstruction, so the measurement of ultrafast, nonrepetitive electrical signals remains challenging. In recent years, digital signal processing (DSP) has become ubiquitous, and the analog-to-digital converter (ADC) is the key component that translates an analog signal into the digital domain for processing. To sample a high-speed, nonrepetitive analog signal, the sampling rate must satisfy the well-known Nyquist criterion, and the speed of electrical sampling is often the limiting factor. Optical techniques can help increase the sampling speed of electrical signals and realize ultra-high-speed ADCs. Fig. 2.10.20 shows an example of sampling an ultrafast electrical signal with the help of optical techniques (Yariv and Koumans, 1998; Han and Jalali, 2003).
A group of short-pulsed mode-locked lasers is used, each operating at a different central wavelength.


Fig. 2.10.19 Constructed eye diagrams of binary modulated optical waveforms at (A) 32 Gb/s, (B) 40 Gb/s, (C) 90 Gb/s, and (D) 102 Gb/s. Waveforms were obtained with input average optical power of 64 fW and an acquisition time of 120 s (Wang et al., 2020). (Used with permission.)
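The eye-diagram reconstruction above is essentially a time-correlated photon-counting histogram: each detected photon is time-tagged relative to the signal clock, and the counts accumulated in each time bin rebuild the periodic waveform. Below is a minimal sketch of that binning step; the function name, photon-tag data, and period value are illustrative assumptions, not from the text.

```python
import numpy as np

def reconstruct_waveform(photon_times_s, period_s, n_bins=200):
    """Fold photon time tags into one signal period and histogram them.

    photon_times_s: 1-D array of photon arrival times (seconds), measured
                    against a clock synchronized to the periodic signal.
    period_s:       repetition period of the optical waveform.
    Returns (bin_centers, counts); counts is proportional to optical power.
    """
    phase = np.mod(photon_times_s, period_s)        # position within one period
    counts, edges = np.histogram(phase, bins=n_bins, range=(0.0, period_s))
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts

# Illustrative use with synthetic tags: photons arrive only during the first
# half of each period, mimicking an on-off modulated optical waveform.
rng = np.random.default_rng(0)
period = 50e-12                                      # assumed repetition period
which_period = rng.integers(0, 100_000, size=50_000)
tags = which_period * period + rng.uniform(0, period / 2, size=50_000)
centers, counts = reconstruct_waveform(tags, period) # counts trace the on-off shape
```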


Fig. 2.10.20 ADC using optical sampling.

The low-repetition-rate pulse trains from the different lasers are interleaved to form a high-repetition-rate pulse train. This pulse train passes through an electro-optic modulator, which is driven by the electrical signal to be sampled, so that the amplitude of the high-repetition-rate optical pulse train is linearly proportional to the modulating electrical signal. The modulated optical signal is then split by a wavelength division demultiplexer into its wavelength components, each of which is a low-repetition-rate pulse train. At this point, each low-rate pulse train can easily be detected by a relatively low-speed optical receiver and converted into a digital signal by a low-speed ADC. These channels can then be reconstructed in the digital domain to recover the original waveform.
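The digital reconstruction step amounts to interleaving the N low-rate sample streams according to the known wavelength (and therefore time-slot) ordering. A minimal sketch of that bookkeeping is given below; the channel count, ordering, and test signal are illustrative assumptions, not values from the text.

```python
import numpy as np

def interleave_channels(channel_samples):
    """Recombine N time-interleaved ADC channels into one waveform.

    channel_samples: list of N equal-length arrays; channel k holds the
    samples taken at times (k + m*N)*Ts, m = 0, 1, 2, ...
    """
    chans = [np.asarray(c, dtype=float) for c in channel_samples]
    n_ch = len(chans)
    out = np.empty(n_ch * len(chans[0]))
    for k, c in enumerate(chans):
        out[k::n_ch] = c              # place channel k in every Nth output slot
    return out

# Example: 4 wavelength channels, each sampled at 1/4 of the aggregate rate
t = np.arange(400)
signal = np.sin(2 * np.pi * 0.01 * t)
channels = [signal[k::4] for k in range(4)]   # what the slow ADCs would record
recovered = interleave_channels(channels)
assert np.allclose(recovered, signal)
```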

2.10.5 Short optical pulse measurement using an autocorrelator

Time-domain characterization of ultrashort optical pulses is usually challenging because it requires very high speed of the photodetector, the amplifying electronics, and the oscilloscope. In recent years, ultrafast optics has become an important tool for optical communication, optical signal processing, and optical measurement and testing. Optical autocorrelators are popular instruments for characterizing ultrashort optical pulses. The fundamental operating principle of an autocorrelator is the correlation of the optical pulse with itself. Experimentally, autocorrelation can be accomplished in a Michelson interferometer configuration, as shown in Fig. 2.10.21. In this setup, the short optical pulse is split into two replicas which are recombined at a focal point. A key device in this setup is a second harmonic generation (SHG) nonlinear crystal, which acts as an optical "mixer." This mixing process generates a second harmonic wave ESHG that is proportional to the product of the two input optical signals E1 and E2, and the frequency of the second harmonic wave is equal to the sum of the frequencies of the two participating optical signals:

ESHG(ω1 + ω2) ∝ E1(ω1) · E2(ω2)        (2.10.9)

In the specific case of autocorrelation, as shown in Fig. 2.10.21, the mixing happens between the original signal and its time-delayed replica.


Fig. 2.10.21 Optical autocorrelation using second harmonic generation.

The SHG process then simply produces the frequency-doubled output. In the time domain, if the relative delay between the two pulses is τ, the SHG output is

ESHG(t, τ) ∝ E(t) · E(t − τ)        (2.10.10)

where E(t) and E(t − τ) represent the original optical signal and its time-delayed replica, respectively. Note that the fundamental frequency of ESHG(t, τ) is doubled compared to the input optical signal, and the purpose of the short-pass optical filter after the SHG crystal in Fig. 2.10.21 is to block the original optical frequency ω and allow only the frequency-doubled output 2ω of the SHG to be detected. Based on square-law detection, the photocurrent generated at the photodetector is

ISHG(t, τ) ∝ |E(t) · E(t − τ)|²        (2.10.11)

If the speed of the photodetector is high enough, the optical phase walk-off between the two pulses can be observed, and the envelope of the measured waveform, shown as the dotted line in Fig. 2.10.22, represents the convolution between the two pulses.


Fig. 2.10.22 Illustration of the measured waveforms of the autocorrelator by a high-speed detector or by a low-speed detector.


On the other hand, if the speed of the photodetector is low, the impact of the optical phase cannot be observed, and the measured waveform is simply the convolution of the intensities of the two pulses:

ISHG(τ) ∝ ∫ P(t) · P(t − τ) dt        (2.10.12)

where the integration extends over all time and P(t) = |E(t)|² is the optical intensity of the pulse.

Fig. 2.10.23 shows the optical circuit of the autocorrelator. The collimated input optical beam is split by a beam splitter (a partially reflecting mirror). One part of the beam (Path 1) goes through a fixed delay formed by Mirror 1 and a retroreflector and is reflected into the nonlinear crystal through the beam splitter and the concave mirror. The other part of the beam (Path 2) is directed into a scanning delay line that consists of a parallel pair of mirrors (Mirror 2 and Mirror 3) mounted on a spinning wheel and a fixed mirror (Mirror 4). The optical signal reflected from this scanning delay line is also sent to the nonlinear crystal through the beam splitter and the concave mirror. The optical beams from Path 1 and Path 2 overlap in the nonlinear crystal, creating nonlinear wave mixing between them. In this case, the SHG crystal operates noncollinearly; therefore, it is relatively easy to spatially separate the second harmonic wave from the fundamental waves. The mixing product is directed onto a photodetector. Because of the frequency doubling of this nonlinear mixing process, the photodetector should be sensitive to a wavelength which is approximately half of the input optical signal wavelength. A short-pass optical filter is usually placed in front of the photodetector to further block the optical signal at the original wavelength. Because of the autocorrelation process discussed previously, an SHG component is generated only when optical pulses from the two paths overlap in time at the nonlinear crystal. Therefore, the waveform of the optical pulses measured by an autocorrelator is the self-convolution of the original optical pulses.
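As a quick numerical check of Eq. (2.10.12), the intensity autocorrelation of a pulse can be computed directly; for a Gaussian pulse its FWHM comes out about 1.414 times wider than the pulse itself, consistent with the deconvolution factor quoted later in this section. This is only a sketch; the pulse width and time grid are arbitrary choices.

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled, single-peaked curve."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    return x[above[-1]] - x[above[0]]

# Gaussian intensity pulse with 100 fs FWHM, written in the form of Eq. (2.10.15)
t = np.linspace(-1e-12, 1e-12, 4001)          # time grid (s)
sigma_actual = 100e-15
P = np.exp(-2.77 * (t / sigma_actual) ** 2)

# Intensity autocorrelation, Eq. (2.10.12): I(tau) = integral of P(t) P(t - tau) dt
I_ac = np.correlate(P, P, mode="same") * (t[1] - t[0])

ratio = fwhm(t, I_ac) / fwhm(t, P)
print(f"autocorrelation FWHM / pulse FWHM = {ratio:.3f}")   # ~1.414 for Gaussian
```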


Fig. 2.10.23 Optical configuration of an autocorrelator.



Fig. 2.10.24 SHG is generated by the cross-correlation between the optical signals arriving from the two paths: (A) optical pulse train and (B) second harmonic signal.

The measured waveform can be displayed as a function of the differential time delay τ between the two paths in the autocorrelator, as shown in Fig. 2.10.24. Because the result is an autocorrelation, one disadvantage of the autocorrelator is that the actual optical pulse width cannot be precisely determined unless the pulse shape is known. For example, the pulses produced by a Ti:sapphire laser are usually assumed to have a sech² shape, and the pulse width measured by an autocorrelator (with a low-speed photodetector) is then approximately 1.543 times the original pulse width; this factor is 1.414 for Gaussian pulses. There are several types of nonlinear crystals available for SHG. For example, BBO (beta barium borate) crystals are often used for autocorrelation because they have relatively high second-order nonlinear coefficients. It was also reported that an unbiased light-emitting diode (LED) can be used as the nonlinear medium for autocorrelation through the two-photon absorption effect (Reid et al., 1997). The linear scan of the differential delay τ between the two paths is accomplished by the rotating wheel, as illustrated in Fig. 2.10.25A.


Fig. 2.10.25 (A) Illustration of optical path length change introduced by rotating the wheel and (B) waveform of differential delay versus time.


If the separation between the parallel mirrors along the optical path is D, the path length difference is a function of the wheel rotation angle ϕ:

ΔL = 2D sin(ϕ) ≈ 2Dϕ        (2.10.13)

where we have assumed that the light beam can be effectively retroreflected only within a small rotation angle ϕ of the wheel. Beyond this angular range, the light beam does not reach Mirror 4, and this region is referred to as the dead zone. If the wheel rotation frequency is f, the differential delay can be expressed as a function of time:

τ(t) = ΔL/c ≈ 2Dϕ/c = (4πfD/c) t        (2.10.14)

This is a periodic ramp signal, and there is a dead time slot within each period when the optical beam is outside the mirror coverage, as shown in Fig. 2.10.25B. When τ is zero or an exact multiple of the period of the optical pulse train, pulses from the two paths overlap and produce a strong second harmonic signal. Because of the nature of autocorrelation, the measured pulse waveform in the SHG is the self-convolution of the original optical signal pulses, and the actual pulse width of the original optical signal under test can be calculated from the measured SHG trace with a correction factor that depends on the pulse shape. Fig. 2.10.26 shows the typical experimental setup using an autocorrelator to characterize short optical pulses. The second harmonic signal generated by the detector in the autocorrelator is amplified and displayed on an oscilloscope. A periodic electrical signal provided by the autocorrelator, at the spin rate of the rotating wheel, is used to trigger the oscilloscope and to provide the time base. With the measured waveform V(t) on the oscilloscope, the corresponding autocorrelation function ISHG(τ) can be found by converting the time t to the differential delay τ using Eq. (2.10.14). A more precise calibration can be accomplished by adjusting the position of the retroreflection mirror (see Fig. 2.10.23): shifting the retroreflection mirror by a distance Δx changes the differential delay by Δτ = 2Δx/c, so if the pulse peak of the measured oscilloscope trace shifts by a time ΔT, the calibration factor is τ = [2Δx/(cΔT)] t.
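A short worked example of this time-axis calibration, under assumed numbers (the mirror shift and observed trace shift are illustrative, not from the text):

```python
# Calibrate the autocorrelator time axis from a known retroreflector shift.
c = 3.0e8                      # speed of light (m/s)

delta_x = 0.5e-3               # retroreflector moved by 0.5 mm (assumed)
delta_T = 2.0e-3               # observed peak shift on the oscilloscope: 2 ms (assumed)

# Shifting the mirror by delta_x changes the true delay by 2*delta_x/c,
# so oscilloscope time t maps to differential delay tau = k * t with:
k = 2 * delta_x / (c * delta_T)            # seconds of delay per second of trace

trace_time = 1.0e-3                        # a point read off the trace (s)
tau = k * trace_time
print(f"calibration factor k = {k:.3e}")
print(f"t = {trace_time:.1e} s on the scope corresponds to tau = {tau:.3e} s")
```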


Fig. 2.10.26 Measurement setup using an autocorrelator.


Another important issue is that, because of the nature of autocorrelation, the relationship between the measured pulse width and the actual pulse width of the optical signal depends on the time-domain shape of the pulse. For example, if the pulse shape is Gaussian,

P(t) = P0 exp[−2.77 (t/σactual)²]        (2.10.15)

where σactual is the FWHM of the pulse, the FWHM of the pulse measured with an autocorrelator is σmeasured = 1.414σactual. If the optical pulse shape is hyperbolic secant,

P(t) = P0 sech²(1.736 t/σactual)        (2.10.16)

the FWHM of the pulse measured with an autocorrelator is σmeasured = 1.543σactual.

For the autocorrelator discussed so far, only the intensity of the optical pulse is measured, and the phase information is lost. In addition, since the measurement is based on autocorrelation, detailed time-domain features of the optical pulse are smoothed out. Frequency-resolved optical gating, known as FROG, has been proposed to solve these problems; it provides the capability of characterizing both the intensity and the phase of an optical pulse. Fig. 2.10.27 shows the basic optical configuration of FROG based on SHG, where the key differences from a conventional autocorrelator are the added spectral analysis of the second harmonic signal and the signal processing used to recover the intensity and phase information of the optical pulse. As indicated in Eq. (2.10.10), in the SHG process, if we neglect the constant multiplication factor, the field of the second harmonic signal is ESHG(t, τ) = E(t)·E(t − τ), and its power spectral density can be obtained as the squared magnitude of a one-dimensional Fourier transform:

SFROG(ω, τ) = |∫ E(t) E(t − τ) exp(−jωt) dt|²        (2.10.17)

where the integral is taken over −∞ < t < ∞.
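A minimal numerical sketch of Eq. (2.10.17): build the SHG product E(t)E(t − τ) for a grid of delays and Fourier transform along t to obtain the FROG trace. The chirped-Gaussian test pulse and grid sizes are arbitrary assumptions used only for illustration.

```python
import numpy as np

# Test pulse: chirped Gaussian complex field (arbitrary parameters)
t = np.linspace(-400e-15, 400e-15, 512)        # time grid (s)
t0 = 60e-15                                    # pulse duration parameter
chirp = 5e27                                   # linear chirp rate (rad/s^2)
E = np.exp(-(t / t0) ** 2) * np.exp(1j * chirp * t ** 2)

delays = np.linspace(-300e-15, 300e-15, 256)
dt = t[1] - t[0]

def delayed(field, tau):
    """E(t - tau) by interpolating the real and imaginary parts separately."""
    return (np.interp(t - tau, t, field.real, left=0, right=0)
            + 1j * np.interp(t - tau, t, field.imag, left=0, right=0))

# SHG-FROG trace: S(omega, tau) = |FT_t{ E(t) E(t - tau) }|^2, Eq. (2.10.17)
trace = np.empty((len(delays), len(t)))
for i, tau in enumerate(delays):
    product = E * delayed(E, tau)
    trace[i] = np.abs(np.fft.fftshift(np.fft.fft(product))) ** 2

freq = np.fft.fftshift(np.fft.fftfreq(len(t), dt))   # frequency axis of the trace
```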


Fig. 2.10.27 Optical configuration of FROG based on SHG.


Through the measurement using the setup shown in Fig. 2.10.27, the FROG trace SFROG(ω, τ) is obtained experimentally as a two-dimensional data array representing the spectrum of the SHG signal at each differential delay τ. The next step is to determine the actual optical field of the signal pulse E(t) from the knowledge of SFROG(ω, τ) (DeLong et al., 1996). This is a numerical process based on the phase-retrieval algorithm known as the iterative Fourier transform (Fienup, 1982). Fig. 2.10.28 shows examples of FROG traces of optical pulses with various phase characteristics, where I(t) = |E(t)|² is the pulse intensity and φ(t) is the relative phase of the pulse. The first and second rows show the time-domain and frequency-domain descriptions of the pulse, respectively. The bottom row shows the measured FROG traces, in which the horizontal axis indicates the differential time delay τ and the vertical axis is the frequency-domain information measured with the spectrometer. Among the different phase shapes shown in this example, the first column in Fig. 2.10.28 is a transform-limited pulse with a constant phase, the second and the third columns are optical pulses with negative and positive chirps, respectively, and the fourth column is an optical pulse with significant self-phase modulation, so that the pulse spectrum is split into two peaks. It is evident that the FROG spectrograms reveal most of the pulse features, both intensity and phase; however, the SHG-based FROG is not able to directly determine the sign of the frequency chirp, as seen by comparing the second and third columns. There are a number of FROG techniques based on other nonlinear mechanisms which are able to distinguish the sign of the chirp. A more detailed description of these techniques can be found in a review paper (Trebino et al., 1997).

Fig. 2.10.28 Example of the FROG measurements. Top row: pulse intensity (solid line) and phase (dashed line). Bottom row: measured 2D FROG traces (Trebino et al., 1997). (Used with permission.)


2.11 LIDAR and OCT

Optical reflectometry measures the target distance and (or) speed through the roundtrip time delay of light, and it has enabled a variety of practical applications. The target can be a moving object, such as a vehicle, or an optical interface which creates reflection or scattering. The well-known techniques of LIDAR (light detection and ranging) and OCT (optical coherence tomography) are essentially based on the same principle of optical reflectometry. LIDAR has been widely used for optical altimeters, remote sensing, target distance and velocity detection, and 3-dimensional (3D) surface imaging. In comparison, OCT is more specific to 3D volumetric imaging for biomedical applications, with a relatively small volume on the submillimeter to centimeter scale and fine resolution on the micrometer level. While both LIDAR and OCT are normally used for free-space applications, an OTDR (optical time-domain reflectometer), also based on optical reflectometry, is an instrument specifically designed to characterize distributed loss along fiber-optic systems for fault location and troubleshooting. The details of the OTDR will be discussed in Chapter 4.

The most straightforward approach to optical reflectometry is to use short, intensity-modulated optical pulses and measure the roundtrip time τ, which is related to the distance L by

τ = 2nL/c        (2.11.1)

where c is the speed of light, n is the refractive index of the medium, and the factor 2 accounts for the roundtrip. Intuitively, as a short optical pulse helps pinpoint a specific time, the precision of the roundtrip time determination is limited by the temporal width of the optical pulses. By the Fourier transform, the spectral width of an optical pulse is inversely proportional to its temporal width, and thus the range resolution can be defined as

Rres = cTw/(2n) = c/(2nB)        (2.11.2)

where Tw is the temporal width and B is the spectral width of the pulses. As an example, a 1 ns optical pulse (B = 1 GHz) provides an approximately 15 cm range resolution for a measurement in free space with n = 1. Eq. (2.11.2) indicates that a finer range resolution can be achieved by using an optical signal with a broad spectral bandwidth. This principle also applies to optical signal modulation techniques other than short optical pulses. In general, the spectral width of an optical signal can be broadened by intensity modulation, frequency modulation, or complex optical field modulation. Another option is to directly use incoherent light, or light with a short coherence length, which naturally has a broad spectrum.
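A quick numerical check of Eq. (2.11.2) for a few signal bandwidths (free space, n = 1); the bandwidth values chosen here are only examples:

```python
c = 3.0e8          # speed of light (m/s)
n = 1.0            # refractive index (free space)

for B in (100e6, 1e9, 10e9):                 # signal bandwidth (Hz)
    R_res = c / (2 * n * B)                  # Eq. (2.11.2)
    print(f"B = {B/1e9:5.2f} GHz  ->  range resolution = {R_res*100:6.2f} cm")
# B = 1 GHz gives ~15 cm, as quoted in the text.
```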


Various techniques have been developed for optical reflectometry to achieve fine range resolution, extend the measurable distance, increase the speed of measurement, and improve the detection sensitivity so that a reduced signal optical power can be used. Practically, it is not feasible to achieve all of these desirable performance targets with existing technologies at a reasonable cost, and therefore, different practical approaches attempt to optimize the system performance for specific applications. In this section, we discuss the operating principles, system configurations, and major applications of LIDAR and OCT.

2.11.1 Light detection and ranging (LIDAR)

A LIDAR system typically consists of a transmitter which emits an optical signal and a receiver which detects the signal reflected or scattered off the target, as illustrated in Fig. 2.11.1. The range L between the LIDAR transceiver (transmitter + receiver) and the target can be measured through the roundtrip delay of the optical signal. For 3D imaging, the roundtrip time provides the resolution in the axial direction, and the lateral resolution has to be provided by other mechanisms. Flash and beam-steering are the two basic types of LIDAR system architectures that provide lateral resolution for 3D imaging. A beam-steering LIDAR focuses the entire optical signal power onto a single spot on the target at each moment, and the position of the light spot is scanned across the field of view (FoV). A number of techniques have been used for beam scanning with the collimation of a telescope, including micromirrors on scanning galvanometers, microelectromechanical system (MEMS)-based micromirrors, and optical phased arrays based on liquid crystals.


Fig. 2.11.1 Basic configurations of monostatic (A) and bistatic (B) LIDAR systems for ranging and (or) imaging.


The major advantage of the beam-steering LIDAR architecture is the relatively high signal strength, because only a concentrated light spot illuminates the target at a time and a single photodetector is used in the receiver. Beam-steering LIDAR can be used both for 3D imaging, for example in autonomous driving of automobiles, and for remote sensing, for example performing trace-gas spectroscopy along the light path. Flash LIDAR, on the other hand, distributes the light across the entire FoV on the target at once, and a photodetector array is used at the receiver for imaging. While for beam-steering LIDAR the lateral image resolution is determined by the spot size on the target, the lateral image resolution of a flash LIDAR is determined by the pixel size of the photodetector array and the collimating optics in the receiver. Without the need for beam scanning, the imaging rate of a flash LIDAR can be much faster than that of a beam-steering LIDAR.

In terms of light collimation and focusing, a LIDAR transceiver can also be categorized as monostatic or bistatic. A monostatic LIDAR uses a single telescope for collimating and steering the optical signal to the target, as well as for collecting the echo from the target, as shown in Fig. 2.11.1A. An optical circulator can be used to separate the forward- and backward-propagating optical signals at the back of the telescope. A single photodetector is usually used in the receiver of a monostatic LIDAR, which detects the scattered signal power from the illuminated point on the target. A bistatic LIDAR, shown in Fig. 2.11.1B, uses separate telescopes for the transmitter and the receiver. Although both flash and beam-steering LIDARs can be built with a bistatic transceiver, it is most suitable for flash LIDAR, with a photodetector array at the focal plane of the receiver telescope for the best lateral image resolution.

Although the range resolution (the axial resolution for 3D imaging) given by Eq. (2.11.2) is related only to the spectral width of the optical signal, the range accuracy of a LIDAR system also depends on the signal-to-noise ratio (SNR) at the receiver as

σR = Kc/(B√SNR)        (2.11.3)

where K is a proportionality factor depending on the waveform of the LIDAR signal (Pierrottet et al., 2008). Considering a uniform scattering of the optical signal on the target across the entire 4π solid angle, the return loss of the optical signal from the target is

PR/Ps = ηA/(2πL²)        (2.11.4)

where Ps and PR are the transmitted and received signal powers, respectively, L is the target distance, A is the receiver telescope aperture area, and η < 1 represents the combined impact of propagation loss in the medium and scattering loss at the target. This quadratic dependence of the return loss on the target distance makes long-range LIDAR operation quite challenging in terms of providing an acceptable SNR level. Thus, for a given signal optical power, improving the SNR is critically important in LIDAR waveform design and receiver architecture.


2.11.1.1 Pulsed LIDAR with direct detection

A pulsed LIDAR system uses intensity-modulated short optical pulses and measures the roundtrip time of each pulse scattered from the target. The transmitted waveform Ps(t) and received waveform PR(t) are shown in Fig. 2.11.2, where the transmitted pulse train has a pulse width Tw and a repetition period TP. The received pulse train has the same repetition period TP, but with a time delay τ with respect to the transmitted pulses, which represents the roundtrip time between the LIDAR transceiver and the target. Because of the repetitive nature of the transmitted pulse train, the maximum unambiguous range Lmax is limited by the period TP as Lmax < cTP/(2n). For a pulsed LIDAR with direct detection using a photodiode, the major noise source in the receiver is the thermal noise, which is independent of the received signal optical power, and the SNR can be found as

SNR = (R·PRp)²/(4kTB/RL)        (2.11.5)

where R is the photodiode responsivity, k is the Boltzmann constant, T is the absolute temperature, RL is the load resistance, and PRp represents the peak power of the received optical pulse (Fig. 2.11.2). Substituting Eqs. (2.11.5) and (2.11.4) into Eq. (2.11.3) and assuming K = 1, the range accuracy can be expressed as

σR = [c/(R·PRp)]·√(4kT/(RL·B)) = [2πcL²/(R·PSp·ηA)]·√(4kT/(RL·B))        (2.11.6)

where PSp represents the peak power of the transmitted pulses. As an example, Fig. 2.11.3 shows the range accuracy σR calculated from Eq. (2.11.6) as a function of the signal peak power PSp from the transmitter and the target range L. We have assumed a receiver telescope of 6-in. (152.4 mm) aperture diameter, a target scattering efficiency η = 10%, a signal bandwidth B = 1 GHz (corresponding to a 1-ns pulse width), a photodiode responsivity R = 1 A/W, a load resistance RL = 50 Ω, and room temperature T = 300 K. A constant level of 10 cm is also shown in the figure for reference.


Fig. 2.11.2 Illustration of pulse trains from the transmitter and at the receiver.


Fig. 2.11.3 Calculated range accuracy of direct detection as the functions of target distance L and pulse peak power Psp of the transmitter.
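The calculation behind Fig. 2.11.3 can be reproduced as a short sketch of Eq. (2.11.6), using the parameter values listed in the text; the grid of distances and peak powers below is an arbitrary choice.

```python
import numpy as np

# Parameters quoted in the text for Fig. 2.11.3
c, k, T = 3.0e8, 1.38e-23, 300.0      # m/s, J/K, K
R, RL, B = 1.0, 50.0, 1e9             # A/W, ohm, Hz
eta = 0.10                            # target scattering efficiency
A = np.pi * (0.1524 / 2) ** 2         # 6-in. (152.4 mm) aperture area, m^2

def range_accuracy(L, P_sp):
    """Range accuracy sigma_R from Eq. (2.11.6), direct detection."""
    P_rp = P_sp * eta * A / (2 * np.pi * L ** 2)      # received peak power, Eq. (2.11.4)
    return (c / (R * P_rp)) * np.sqrt(4 * k * T / (RL * B))

for L in (15.0, 50.0, 100.0):                         # target distance (m)
    for P_sp in (1.0, 10.0, 60.0):                    # transmitted peak power (W)
        print(f"L = {L:5.0f} m, Psp = {P_sp:4.0f} W -> sigma_R = "
              f"{range_accuracy(L, P_sp)*100:7.2f} cm")
# With 1 W the 10 cm level is reached only below ~15 m; ~60 W extends it to ~100 m,
# consistent with the discussion of Fig. 2.11.3.
```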

Fig. 2.11.3 indicates that in order to maintain a 10 cm range accuracy with a 1-W (30 dBm) signal peak power, the maximum measurable distance is less than 15 m. A signal peak power as high as 60 W (48 dBm) is required to extend the maximum range to 100 m at the same range accuracy. Pulse amplification with peak powers up to the kW level can be accomplished in solid-state laser systems based on mechanisms such as Q-switching. With a fiber-optic approach in the 1550 nm wavelength window, an EDFA can be used to amplify the pulses. As the gain saturation time of an EDFA is on the order of milliseconds, determined by the carrier lifetime, its output is limited by a maximum average power; thus, the pulse peak power can be N times higher than the average power when the pulse duty cycle is 1/N. Nevertheless, the requirement of high pulse peak power is a major disadvantage of a pulsed LIDAR using direct detection with a photodiode. Extremely high pulse peak power may cause damage to optical components and raises safety concerns for practical applications.

To overcome this problem, single-photon avalanche photodetectors (SPAPDs) have been used to replace linear photodiodes in pulsed LIDAR receivers. For application as a single-photon detector, as discussed in Section 1.3.6, the reverse bias voltage of an APD is set higher than its breakdown voltage (VB > VBK) so that the detector operates in an unstable condition. A single arriving photon is enough to trigger an avalanche breakdown of the APD and generate a large electric current. Immediately after the breakdown event, the SPAPD must be reset by temporarily reducing the bias below VBK and then raising it back to the prebreakdown state VB > VBK to wait for the next photon to arrive. This breakdown and recovery process may take more than 100 ns.


As SPAPDs and their arrays can be mass fabricated with complementary metal-oxide-semiconductor (CMOS)-compatible processes and monolithically integrated with other electronic circuits, their cost can be significantly reduced compared to discrete components (Niclass et al., 2005; Kawahito et al., 2007). The single-photon detection sensitivity of the SPAPD is especially suitable for LIDAR applications, in which the return loss is proportional to the square of the target range and the optical signal power arriving at the receiver is extremely low. The CMOS-based SPAPD array is ideal for LIDAR imaging with a large number of pixels. However, a major limitation of the SPAPD is the recovery time of >100 ns after each photon-induced breakdown event, which limits the speed of measurement. Because of the very high photon multiplication gain, SPAPDs are susceptible to noise contributions from both the detector itself and ambient light within the detectable wavelength window that leaks into the detector. The false detection rate due to noise can be reduced by averaging over multiple repetitive pulses. As LIDAR pulses are periodic, the recording can be time gated by synchronizing with the transmitted pulse repetition rate and pulse width; this technique is known as time-correlated single-photon counting (TCSPC), but it inevitably reduces the speed of measurement. The averaging can also be performed across multiple parallel pixels in an SPAPD array, but this reduces the lateral resolution of imaging.

2.11.1.2 FMCW LIDAR and pulse compression

For a pulsed LIDAR waveform, a narrow pulse width helps improve the range resolution, and a low duty cycle helps increase the pulse peak power for a given average power so that the SNR can be increased. However, photon damage of optical components as well as eye safety can become major concerns for long-range LIDAR systems using very high pulse peak power. Frequency-modulated continuous-wave (FMCW) LIDAR uses a frequency-chirped waveform and long pulses to achieve a range resolution comparable to that of a short-pulsed LIDAR with a comparable spectral bandwidth. Through pulse compression, which is discussed below, the detection sensitivity of an FMCW LIDAR is determined by the pulse energy rather than by the peak power.

Fig. 2.11.4 explains the concept of pulse compression in a LIDAR system based on frequency-chirped long pulses. The pulse is carried by a sine wave whose frequency is linearly increased from f1 to f2, so that the time-dependent frequency within the pulse duration Tw can be expressed as

f(t) = f1 + [(f2 − f1)/Tw] t        (2.11.7)

The chirped signal waveform within the pulse duration is

As(t) = At sin[2π ∫₀ᵗ f(ξ) dξ] = At sin{2π [f1 + ((f2 − f1)/(2Tw)) t] t}        (2.11.8)

where At is the amplitude of the transmitted waveform. After a delay from the target, the received signal waveform becomes


Fig. 2.11.4 Illustration of FMCW LIDAR waveforms and time-frequency diagram. (A) Transmitted (solid line) and received (dotted line) LIDAR pulse with frequency chirping. (B) Frequency as the function of time for the transmitted (solid line) and received (dashed line) pulse. (C) Normalized spectrum V(F) after dechirping, illustrating spectral resolution and the relation to range resolution.

AR(t) = Ar sin{2π [f1 + ((f2 − f1)/(2Tw)) (t − τ)] t}        (2.11.9)

where Ar is the amplitude of the received waveform and τ is the roundtrip delay. This long pulse of width Tw can be compressed by mixing the received waveform AR(t) with the transmitted waveform As(t), a process known as dechirping. Filtering out the frequency-doubled component, the frequency down-converted waveform after mixing is

v(t) = ½ At Ar sin{2π [∫₀ᵗ f(ξ) dξ − ∫₀ᵗ f(ξ − τ) dξ]} = ½ At Ar sin[2π ((f2 − f1)/Tw) τ t]        (2.11.10)


The RF spectrum of v(t) can be obtained through a Fourier transform V(F) = FFT{v(t)}, where FFT represents the fast Fourier transform and F represents the down-converted RF frequency. Ideally, this frequency down-converted RF signal V(F) should have a single frequency at

fR = [(f2 − f1)/Tw] τ        (2.11.11)

which carries the information of the roundtrip time τ between the transceiver and the target. However, because of the limited pulse width Tw, v(t) is truncated, with v(t) = 0 outside the time window 0 ≤ t ≤ Tw, so the spectral resolution of V(F) is ΔF = 1/Tw. As the conversion from F to the delay time is determined by the chirping slope (f2 − f1)/Tw, the time-domain resolution is Δτ = 1/|f2 − f1|, as illustrated in Fig. 2.11.4C. Finally, the range resolution can be found as

Rres = (c/2n) Δτ = c/(2n|f2 − f1|)

With the chirping bandwidth B = |f2 − f1|, this is identical to Eq. (2.11.2), the range resolution of the short-pulsed LIDAR. Here, for simplicity, we have neglected the impact of the time mismatch between the transmitted and the received pulses in the bandwidth estimation.

Direct detection: Fig. 2.11.5 shows the block diagram of a monostatic FMCW LIDAR with direct detection, where a laser output is intensity modulated by a chirped waveform As(t). The modulated optical signal is then amplified and sent to a telescope for collimation and launched toward the target. The optical signal scattered from the target is collected by the same telescope and sent to a photodetector for envelope detection. The detected envelope AR(t) is then mixed with a delayed version Asd(t) of the original modulating waveform for dechirping, as described by Eq. (2.11.10).


Fig. 2.11.5 Block diagram of an FMCW LIDAR with direct detection


In the overlap region between Asd(t) and AR(t), the dechirped electric waveform v(t) has a single frequency fR defined by Eq. (2.11.11), which carries the information of the target roundtrip delay and thus the target range. This frequency can be obtained through a fast Fourier transform (FFT) in signal processing. Although the pulse width Tw used in the FMCW LIDAR can be much longer than that of a short-pulse-based LIDAR system, the range resolution is not compromised, as it is determined by the chirping bandwidth. The dechirping process involves all the signal energy within the overlap region of the pulse in the determination of the fR value; therefore, this dechirping process is also known as pulse compression.

In a practical LIDAR system, the transmitted signal optical power is usually very high, and any spurious reflections from optical interfaces in the telescope can be orders of magnitude higher than the signal scattered from the target and collected by the telescope. These spurious reflections may overwhelm the receiver frontend. In order to avoid the impact of optical reflections from within the transmitter, photodetection at the receiver has to be performed in a silent period when the transmitter does not send out the signal. Fig. 2.11.6A shows the pulse train of an FMCW LIDAR with pulse width Tw0 and repetition period Tp. Assuming the roundtrip delay from the target is Td, the pulse width Tw0 and the period Tp can be designed so that the received pulse falls within the time window where the transmitter is silent, as shown in Fig. 2.11.6C. For efficient dechirping, a delayed replica of the transmitted waveform is used to mix with the received signal, and the delay Td1, shown in Fig. 2.11.6B, can be set based on a rough estimate of the target roundtrip time Td. Their difference τ = Td − Td1 determines the RF frequency fR after dechirping, as defined by Eq. (2.11.11). After mixing, the pulse width of v(t) is reduced to Tw = Tw0 − τ, as shown in Fig. 2.11.6D, so that a smaller τ value helps maintain a high pulse compression efficiency because of the larger pulse overlap. But τ has to be large enough for fR to be measurable within the pulse window and to avoid 1/f noise in the electronic circuits.
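The dechirping (pulse-compression) processing described above can be sketched numerically: generate a linear chirp, delay it by the target roundtrip time, multiply it with the reference chirp, keep the low-frequency part of the product, and locate the beat frequency fR with an FFT. The parameter values below are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Assumed chirp parameters (illustrative)
f1, f2 = 0.5e9, 1.5e9          # chirp from 0.5 to 1.5 GHz
Tw = 10e-6                     # pulse width: 10 us
fs = 8e9                       # sampling rate (Hz)
c = 3.0e8

t = np.arange(0, Tw, 1 / fs)
slope = (f2 - f1) / Tw
phase = 2 * np.pi * (f1 * t + 0.5 * slope * t ** 2)   # phase of Eq. (2.11.8)

target_range = 150.0                                   # m (assumed)
tau = 2 * target_range / c                             # roundtrip delay
n_delay = int(round(tau * fs))

tx = np.sin(phase)                                     # transmitted chirp As(t)
rx = np.roll(tx, n_delay)                              # delayed echo (noise-free sketch)
rx[:n_delay] = 0.0

# Dechirp: multiply echo by reference chirp; the difference-frequency term sits
# at fR = slope * tau, Eq. (2.11.11). The sum-frequency term is discarded by
# inspecting only the low-frequency part of the spectrum.
mixed = tx * rx
spectrum = np.abs(np.fft.rfft(mixed * np.hanning(len(mixed))))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
band = (freqs > 1e6) & (freqs < 0.5e9)                 # baseband search window
fR = freqs[band][np.argmax(spectrum[band])]

est_range = c * fR / (2 * slope)                       # invert fR = slope * 2R/c
print(f"beat frequency fR = {fR/1e6:.2f} MHz, estimated range = {est_range:.1f} m")
```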


Fig. 2.11.6 Illustration of transmitted (A), delayed (B), and received (C) FMCW LIDAR pulse trains, and frequency chirping (D and E).


Since Td1 is set in the LIDAR transceiver and is therefore a known parameter, the actual target roundtrip delay can be obtained from the measured fR as

Td = [Tw/(f2 − f1)] fR + Td1        (2.11.12)

Another important issue in the FMCW LIDAR is the linearity of the chirping. Ideally, the chirping rate Rchirp = (f2 − f1)/Tw is a constant, so that the instantaneous frequency f(t) in Eq. (2.11.7) is linearly proportional to the time t within each pulse. A nonlinear chirping characteristic can be induced by a frequency-dependent chirping rate Rchirp(f), and as a result, fR after dechirping, as shown in Fig. 2.11.6E, will not be a constant value. This is equivalent to a degradation of the range resolution.

Coherent heterodyne detection: For an FMCW LIDAR, linear detection is required at the receiver, which prevents the use of single-photon avalanche photodetectors, and the SNR is limited by the thermal noise if direct detection is used. Coherent detection is a promising approach for an FMCW LIDAR to improve the SNR and the range accuracy.


Fig. 2.11.7 (A) Block diagram of an FMCW LIDAR with coherent heterodyne detection. (B) Illustration of optical and IF frequency down-conversion.


Fig. 2.11.7A shows the block diagram of an FMCW LIDAR system with coherent heterodyne detection. In this system, the laser output, with a constant amplitude and an optical frequency f0, is split into two parts. One part is modulated by the chirped waveform through an electro-optic modulator and amplified by an optical amplifier before being sent to the telescope. The other part is frequency shifted by fm through an acousto-optic modulator (AOM) and is used as the optical local oscillator (LO). This frequency-shifted LO mixes with the optical signal scattered from the target in a balanced optical detector for heterodyne detection, yielding a photocurrent

i = 2R √(PRp(t)·PLO) cos[2πfm t + φ(t)]        (2.11.13)

where PRp(t) is the scattered signal optical power within the pulse duration, PLO is the power of the LO, fm is the intermediate frequency (IF), φ(t) is the differential phase variation between the signal and the LO, and R is the photodiode responsivity. This photocurrent signal in the IF domain is selected by a bandpass filter with a bandwidth 2Be, with Be the modulation bandwidth. After IF envelope detection, the signal is down-converted to the baseband with an electric power of 2R²PRp(t)PLO. This two-step frequency down-conversion process is illustrated in Fig. 2.11.7B. The baseband signal is then dechirped by mixing with the delayed version of the original chirped waveform and processed with an FFT, similar to the procedure previously described.

The major advantage of coherent detection is the improved SNR compared to direct detection. In coherent detection with a high enough LO power, the major noise source at the optical receiver is the shot noise caused by the LO, with an electric noise power ⟨ish²⟩ ≈ 4qRPLO·Be, where q is the electron charge and an IF bandwidth of 2Be is used. The electric signal-to-noise power ratio of coherent heterodyne detection is therefore

SNRcoh = RPRp/(2qBe)        (2.11.14)

In comparison, for direct detection, the SNR is

SNRdir = R²PRp²/(4kTBe/RL)        (2.11.15)

where k is the Boltzmann constant, T is the absolute temperature, and RL is the load resistance. Fig. 2.11.8 shows the SNR comparison between direct detection and coherent detection based on Eqs. (2.11.14) and (2.11.15), with R = 1 mA/mW, Be = 50 MHz, RL = 50 Ω, and T = 300 K. While the SNR reduction versus signal power reduction is 20 dB/decade for direct detection, it is 10 dB/decade for coherent detection. Thus, coherent detection with quantum-limited detection efficiency is superior for LIDAR systems operating in a very low signal power regime.


Fig. 2.11.8 Comparison of SNR as the function of received optical signal power for direct detection (dashed line) and coherent detection (solid line).
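The comparison in Fig. 2.11.8 follows directly from Eqs. (2.11.14) and (2.11.15); a minimal sketch with the parameter values stated in the text is given below (the signal-power sweep range is an arbitrary choice).

```python
import numpy as np

# Parameters used in the text for Fig. 2.11.8
R = 1.0            # responsivity, 1 mA/mW = 1 A/W
Be = 50e6          # electrical bandwidth (Hz)
RL = 50.0          # load resistance (ohm)
T = 300.0          # temperature (K)
q = 1.602e-19      # electron charge (C)
k = 1.38e-23       # Boltzmann constant (J/K)

P_dBm = np.arange(-90, -29, 1.0)          # received signal power sweep
P_w = 1e-3 * 10 ** (P_dBm / 10)           # convert dBm to watts

snr_coh = R * P_w / (2 * q * Be)                    # Eq. (2.11.14)
snr_dir = (R * P_w) ** 2 / (4 * k * T * Be / RL)    # Eq. (2.11.15)

for p, sc, sd in zip(P_dBm[::20], snr_coh[::20], snr_dir[::20]):
    print(f"P = {p:5.0f} dBm : coherent {10*np.log10(sc):6.1f} dB, "
          f"direct {10*np.log10(sd):6.1f} dB")
# Coherent SNR falls 10 dB per decade of signal power, direct detection 20 dB.
```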

Coherent homodyne detection and optical domain dechirping: For the FMCW LIDAR with coherent heterodyne detection discussed above, the signal electric power is insensitive to the optical phase noise φ(t) because of the IF envelope detection. For coherent homodyne detection, with the block diagram shown in Fig. 2.11.9, no frequency shift is used for the LO (fm = 0), and the optical signal is directly down-converted to the baseband with an electric bandwidth Be. This reduces the required optical receiver bandwidth by half compared to heterodyne detection and thus improves the SNR by 3 dB. However, the impact of laser phase noise has to be considered, which can be handled with either a phase-locked loop or phase diversity detection, as described in Section 2.8. In the homodyne configuration shown in Fig. 2.11.9, dechirping is accomplished in the electric domain by mixing the recovered baseband signal with a delayed version of the original chirped waveform to obtain fR. This dechirping process can be further simplified by performing it in the optical domain together with the homodyne detection. In the configuration shown in Fig. 2.11.10A (Adany et al., 2009), instead of using a CW LO, the modulated optical waveform, identical to that sent to the telescope (with an appropriate delay), is used as the LO, which therefore carries the chirp information. After coherent homodyne detection, the photocurrent is

i(t) = 2R·ARp(t)·ARp(t − τ)        (2.11.16)

where ARp(t) = √(PRp(t)) is the optical field of the received optical signal and τ is the relative delay between the optical signal and the LO. We have neglected the differential phase noise and assumed that a phase diversity optical receiver is used. This is equivalent to the dechirping processing based on Eq. (2.11.10), and the relative delay τ can be found through spectral analysis of i(t). The frequency down-conversion and dechirping process are illustrated in Fig. 2.11.10B.



Fig. 2.11.9 (A) Block diagram of an FMCW LIDAR with coherent homodyne detection and RF dechirping. (B) Illustration of optical frequency down-conversion and RF dechirping.

In addition to eliminating the RF dechirping, the receiver bandwidth requirement of this simplified homodyne configuration can be much lower than the chirping bandwidth Be. Because the dechirping process is performed in the optical domain, the receiver electrical bandwidth only needs to be broad enough to accommodate the dechirped waveform at the frequency fR, which can be set to the low-megahertz region by adjusting the delay line for the LO.

FMCW LIDAR based on complex optical fields: For the FMCW LIDAR discussed above, only the amplitude of the optical signal was used, and the modulated optical signal has double sidebands carrying redundant chirping information. Complex optical field modulation based on an in-phase/quadrature (I/Q) modulator can enhance the capability of coherent LIDAR by applying independent chirping information to the two modulation sidebands around the optical carrier. A phase-diversity optical receiver based on a 90° optical hybrid coupler also allows complex optical field detection.



Fig. 2.11.10 (A) Block diagram of a simplified FMCW LIDAR with coherent homodyne detection and optical dechirping. (B) Illustration of optical frequency down-conversion and dechirping.

In the following, we discuss techniques that exploit the independence of the two modulation sidebands to (1) double the overall chirping bandwidth and (2) measure the target range and velocity simultaneously. Fig. 2.11.11 shows the block diagram of an FMCW LIDAR with complex field modulation and phase diversity detection. Similar to the coherent LIDAR previously discussed, a continuous-wave (CW) laser output at an optical frequency f0 is split into two parts by a fiber directional coupler.


Fig. 2.11.11 Block diagram of an FMCW LIDAR with coherent complex optical field modulation and phase diversity detection.


One part is sent into an I/Q electro-optic modulator, whereas the other part is sent directly to the receiver as the LO for coherent homodyne detection. The real and imaginary parts of the complex modulation waveform can be generated digitally, converted to the analog domain through two digital-to-analog converters (DACs), and sent to the I and Q electrodes of the I/Q modulator after driving amplifiers. As illustrated in Fig. 2.11.11, an I/Q electro-optic modulator consists of two intensity modulators in the two arms of a Mach-Zehnder interferometer (MZI) configuration and a phase shifter in one of the two arms. Both intensity modulators inside the I/Q modulator are biased at the minimum transmission point, and the phase shifter is biased at the quadrature point with a 90° constant phase shift between the two MZI arms. As previously discussed in Section 1.6.4, with the applied signal voltage waveforms VI and VQ, the optical field at the output of the I/Q modulator is

Eo = Es[sin(πVI/Vπ) + j sin(πVQ/Vπ)]·exp(j2πf0t) ≈ (π/Vπ)Es(VI + jVQ)·exp(j2πf0t)        (2.11.17)

where f0 is the optical carrier frequency, Es is the optical field amplitude at the modulator input, and Vπ is the voltage required for the transfer function of each intensity modulator to change from its minimum to its maximum. In order to place a chirp waveform FHU(t) on the upper sideband and FHL(t) on the lower sideband of the optical carrier f0, the complex modulating signal has to be

Vcx = exp[j2πFHU(t)t] + j·exp[j2πFHL(t)t]        (2.11.18)

The real and imaginary parts of Vcx are

Re{Vcx} = cos[2πFHU(t)t] − sin[2πFHL(t)t]        (2.11.19a)

Im{Vcx} = sin[2πFHU(t)t] + cos[2πFHL(t)t]        (2.11.19b)

(1) Extending the chirp bandwidth: In order to utilize both modulation sidebands to double the overall chirp bandwidth, the linear chirping frequencies applied to the upper and the lower sidebands can be

FHU(t) = f1 + [(f2 − f1)/Tw0] t        (2.11.20a)

FHL(t) = −{f2 − [(f2 − f1)/Tw0] t}        (2.11.20b)

respectively, within each modulating pulse of width Tw0 (0 ≤ t ≤ Tw0), as illustrated in Fig. 2.11.12A. During the pulse window Tw0, the modulation frequency of the upper sideband is linearly chirped from f1 to f2, while the frequency of the lower sideband is linearly chirped from −f2 to −f1, so that the total chirping bandwidth is doubled from |f2 − f1| to 2|f2 − f1|.
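A minimal sketch of synthesizing the two drive waveforms from Eqs. (2.11.18)-(2.11.20): build the complex signal with the up-chirp on the upper sideband and the down-chirp on the negative-frequency lower sideband, then take its real and imaginary parts as VI and VQ. The sample rate, chirp frequencies, and pulse width are assumed values for illustration.

```python
import numpy as np

# Assumed chirp parameters
f1, f2 = 0.5e9, 4.0e9          # chirp band on each sideband (Hz)
Tw0 = 1e-6                     # pulse width (s)
fs = 16e9                      # DAC sample rate (Hz)
VD = 1.0                       # drive amplitude (arbitrary units)

t = np.arange(0, Tw0, 1 / fs)
F_HU = f1 + (f2 - f1) / Tw0 * t            # Eq. (2.11.20a), upper sideband
F_HL = -(f2 - (f2 - f1) / Tw0 * t)         # Eq. (2.11.20b), lower sideband

# Complex modulating signal, Eq. (2.11.18)
V_cx = np.exp(1j * 2 * np.pi * F_HU * t) + 1j * np.exp(1j * 2 * np.pi * F_HL * t)

V_I = VD * V_cx.real                       # drives the I electrode
V_Q = VD * V_cx.imag                       # drives the Q electrode

# Small-signal modulator output (baseband, carrier removed), per Eq. (2.11.17):
E_out = V_I + 1j * V_Q                     # proportional to (VI + jVQ)
spectrum = np.fft.fftshift(np.abs(np.fft.fft(E_out)))
freqs = np.fft.fftshift(np.fft.fftfreq(len(t), 1 / fs))
# The spectrum shows energy from +f1..+f2 and from -f2..-f1, i.e., a total chirp
# bandwidth of 2|f2 - f1| around the suppressed carrier.
```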


Fig. 2.11.12 (A) Linear chirp applied on the upper (FHU(t)) and the lower (FHL(t)) modulation sidebands, (B) align the chirp waveforms of the upper and the lower sidebands in the digital processing to double the overall chirping bandwidth.

Then, the voltage waveforms VI(t) and VQ(t) representing the real and the imaginary parts of the complex waveform should be

VI = VD{cos[2πFHU(t)t] − sin[2πFHL(t)t]}        (2.11.21)

VQ = VD{sin[2πFHU(t)t] + cos[2πFHL(t)t]}        (2.11.22)

where VD is the amplitude of the driving voltage signal. At the LIDAR receiver, the received optical field ER scattered from the target contains a time delay τ with respect to the launched optical field, that is, ER ∝ [VI(t − τ) + jVQ(t − τ)]·exp[j2πf0(t − τ)]. This optical signal mixes with the LO at the optical frequency f0 in the phase diversity optical receiver, consisting of a 90° optical hybrid coupler and balanced photodiodes, and the complex field ER of the optical signal can be linearly reconstructed from the two photocurrents, iI(t) ∝ VI(t − τ) and iQ(t) ∝ VQ(t − τ). The actual magnitude of the photocurrents depends on the photodiode responsivity, the strength of the LO, and the scattering and coupling losses from the target. These photocurrent signals can be digitized by the ADCs for digital postprocessing. Because the photocurrent signals represent the real and the imaginary parts of the optical signal, the recovered complex optical field is

icx(t) = iI(t) + j·iQ(t) ∝ exp{j2π[f1 + ((f2 − f1)/Tw0)(t − τ)]t} + j·exp{−j2π[f2 − ((f2 − f1)/Tw0)(t − τ)]t}        (2.11.23)

The two terms in Eq. (2.11.23) represent the two independently chirped waveforms, with modulating frequencies linearly swept from f1 to f2 in the upper sideband and from −f2 to −f1 in the lower sideband, all within the same time window of pulse duration Tw0.


As illustrated in Fig. 2.11.12B, in the digital postprocessing the chirp waveforms on the upper and the lower modulation sidebands are realigned to form a continuous linear chirp with an extended bandwidth of 2|f2 − f1|. This also doubles the effective pulse width in the time domain. In order to realize the full benefit of the improved range resolution, phase continuity of the combined chirp waveform is critically important. Since the lowest chirp frequency on each sideband is f1, there is a frequency gap of 2f1 between the two sidebands, which can cause a phase discontinuity when combining the waveforms carried by the upper and the lower sidebands. As the phase of the RF chirping waveform is digitally generated and well defined, this phase discontinuity can be corrected digitally in the frequency down-conversion process. More specifically, the waveforms (both the transmitted waveform and the echo from the target) acquired from one sideband have to be shifted by Tw0 in time and by 2f1 in frequency with respect to the waveforms of the other sideband, as illustrated in Fig. 2.11.12B. Special attention must also be paid when synthesizing the chirp waveforms in the waveform generator to ensure an equal phase at the beginning and the end of the waveform. Because the two modulation sidebands are based on the same optical carrier, the laser phase noise does not introduce differential phase variations between them that would affect the chirp reconstruction. To illustrate the impact of phase discontinuity, Fig. 2.11.13 shows the down-converted RF spectra (after mixing between the transmitted and received waveforms) composed of two chirp sections, where the width of the spectral peak indicates the range resolution. This example is calculated with a chirping bandwidth of |f2 − f1| = 3.5 GHz on each sideband and a pulse width of Tw ≈ Tw0 = 1 μs.


Fig. 2.11.13 Impact of phase discontinuity on range resolution; the horizontal axis is the relative frequency F − fR in MHz. Dashed line: only one sideband used. Thin lines: both sidebands used but with phase discontinuity (indicated in the legend). Thick solid line: both sidebands used with no phase discontinuity.


If only a single sideband is used, the 3 dB width of the dechirped spectral peak is 1/Tw = 1 MHz, shown as the dashed line in Fig. 2.11.13, corresponding to a range resolution Rres = c/(2|f2 − f1|) ≈ 4.3 cm. When both sidebands are used, although the overall chirping bandwidth is doubled after combining the waveforms of the two sidebands, the position of the dechirped spectral peak varies with the phase discontinuity, and thus the range resolution cannot be improved. Only when the phase discontinuity is zero, shown as the bold solid curve in Fig. 2.11.13, is the width of the main spectral peak minimized to 0.5 MHz, corresponding to a range resolution of 2.15 cm.

Fig. 2.11.14A shows the spectra of the digitally generated chirp waveform and of the waveform received from target scattering in an experiment with 3.5 GHz bandwidth on each side of the optical carrier. Within the pulse width of 1 μs, the modulation frequency sweeps from 0.5 to 4 GHz for the upper sideband and from 4 to 0.5 GHz for the lower sideband. In this experiment, a small amount of the signal optical power is tapped before the telescope and reflected from a fixed distance via an optical fiber and a reflector for range calibration. The frequency down-converted RF spectra after dechirping are shown in Fig. 2.11.14B, where the horizontal axis has been converted from frequency F to range R through R = [cTw/(2(f2 − f1))]·F.


By combining the dechirped waveforms from the upper and the lower sidebands, the spectrum shown as the bold line provides a range resolution of approximately 2 cm, half of that obtained using either the upper or the lower sideband alone. The reference reflector used in the experiment not only provided range calibration but also helped minimize the phase discontinuity between the dechirped waveforms from the two sidebands (by minimizing the spectral width of the reference peak).


Fig. 2.11.14 (A) Spectra of digitally generated chirp waveform (red dashed line, dark gray in print version) and waveform detected at the receiver (blue solid line, light gray in print version) and (B) dechirped RF spectra with both sidebands used (bold black line) and using only one of the two sidebands (red and green thin lines).


(2) Measuring range and velocity simultaneously: In addition to the range measurement, the velocity of a moving target can also be measured by a LIDAR based on the Doppler effect. For a short-pulsed Doppler LIDAR with direct detection, a sine-wave intensity modulation at a constant frequency fCW is applied over the duration Tw of each transmitted pulse. When the target is moving, the scattered signal acquires a Doppler frequency shift fD = 2v/λ, where v is the longitudinal velocity of the target and λ is the laser wavelength, so that the frequency of the scattered signal becomes f = fCW + fD. This modulation frequency shift can be detected at a direct-detection receiver by comparing the transmitted and received amplitude modulation frequencies through a Fourier analysis to determine the target velocity. Theoretically, fCW determines the maximum unambiguous velocity that can be measured without spectral aliasing of the received signal. The Doppler frequency shift fD can be either positive or negative, depending on the direction of the velocity (moving toward or away from the observer). Long optical pulses are usually desired to ensure the accuracy of velocity measurements, as the velocity resolution is determined by vres = λ/(2Tw), where Tw is the pulse duration. On the other hand, as discussed previously, short optical pulses are required to achieve high range resolution if the target distance needs to be measured.

For an FMCW LIDAR that uses a linear frequency chirp f(t) = f1 + [(f2 − f1)/Tw]t, as shown in Eq. (2.11.7), the target range can be determined from the dechirped frequency fR = (f2 − f1)τ/Tw, where τ is the roundtrip delay from the target. For a moving target, an additional Doppler frequency shift fD is superimposed on fR. In order to separate the contributions of target velocity and distance in the dechirped frequency, a triangular chirp can be used, as shown in Fig. 2.11.15, which contains both an up-ramp and a down-ramp segment within each pulse. For a moving target, the dechirped frequencies measured during the up-ramp and the down-ramp sections are fR+ and fR−, respectively, and the range and the velocity can be determined independently from these two frequencies:

fR = [(f2 − f1)/Tw] τ = (fR+ − fR−)/2        (2.11.24)

fd = 2v/λ = (fR+ + fR−)/2        (2.11.25)
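A short numerical example of Eqs. (2.11.24) and (2.11.25): given the beat frequencies measured on the up-ramp and down-ramp, recover the roundtrip delay, range, and line-of-sight velocity. All input values are assumed for illustration.

```python
c = 3.0e8                      # speed of light (m/s)
lam = 1550e-9                  # laser wavelength (m)

# Assumed triangular-chirp parameters
f1, f2 = 0.5e9, 4.0e9          # chirp start / stop frequencies (Hz)
Tw = 1e-6                      # duration of each ramp (s)

# Assumed measured (signed) beat frequencies on the two ramps
fR_up = 120e6                  # Hz
fR_down = -80e6                # Hz

fR = (fR_up - fR_down) / 2                    # Eq. (2.11.24), range term
fd = (fR_up + fR_down) / 2                    # Eq. (2.11.25), Doppler term

tau = fR * Tw / (f2 - f1)                     # roundtrip delay
target_range = c * tau / 2
velocity = fd * lam / 2                       # from fd = 2v/lambda

print(f"range = {target_range:.2f} m, line-of-sight velocity = {velocity:.2f} m/s")
```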

Dechirping for the up-ramp and down-ramp sections must be performed separately, within their respective temporal slots, to determine the direction of the target velocity. In general, fine range resolution can be obtained with an FMCW LIDAR by increasing the chirping bandwidth, whereas a simple continuous wave at a single frequency is preferred in a Doppler LIDAR for velocity measurements, as it provides the most accurate results by avoiding the possible complications of wideband modulation and the difficulty of maintaining chirp linearity.

Basic mechanisms and instrumentation for optical measurement


Fig. 2.11.15 Illustration of FMCW LIDAR waveforms and time-frequency diagram with triangle frequency chirping. (A) Transmitted (solid line) and received (dotted line) LIDAR pulses with triangle frequency chirping. (B) Frequency as a function of time for the transmitted (solid line) and received (dashed line) pulses.

For simultaneous range and velocity measurements, a composite LIDAR waveform consisting of a wideband chirp and a single-frequency tone may help optimize the overall performance. This can be accomplished by complex optical field modulation with an I/Q modulator to create a wideband linear chirp on one modulation sideband and a constant modulation frequency on the opposite sideband. In this way, the resolution of the velocity measurement no longer depends on the chirping bandwidth, so the system can be more flexible. The direction of the velocity can be read directly from the spectrum of the received signal, without the need for the temporally truncated spectral analysis described by Eqs. (2.11.24) and (2.11.25). Because of the wide modulation bandwidth of the electro-optic I/Q modulator, this LIDAR system allows a wide chirping bandwidth on the upper sideband to ensure fine range resolution, and a high enough modulation frequency on the lower sideband to allow the measurement of a large unambiguous vector velocity, while maintaining a relatively simple optical system configuration. Similar to Eqs. (2.11.21) and (2.11.22), and using the frequency up-ramp as an example, in order to create a wideband chirp waveform which linearly sweeps from f1 to f2 on the upper optical sideband, and a sine-wave modulation at a constant frequency fCW on the lower optical sideband, the two voltage waveforms used to drive the I and Q arms of the I/Q modulator are

VI(t) = VD{cos[2πf1t + π(f2 − f1)t²/Tw] + sin(2πfCWt)}        (2.11.26)

VQ(t) = VD{sin[2πf1t + π(f2 − f1)t²/Tw] + cos(2πfCWt)}        (2.11.27)

The complex optical field at the modulator output is then


E0 ∝ cos{2π[f0 + f1 + (f2 − f1)t/(2Tw)]t} + sin(2πf0t − 2πfCWt)        (2.11.28)

This is a carrier-suppressed optical signal with independent modulating signals carried on the upper and the lower sidebands. The optical field ER scattered from the target contains a time delay τ and a Doppler frequency shift fd compared to the launched optical field so that

ER ∝ cos{2π[f0 + f1 + (f2 − f1)(t − τ)/(2Tw) + fd]t} + sin(2πf0t − 2πfCWt − 2πfdt)        (2.11.29)

After mixing with the optical LO at optical frequency f0 in a phase-diversity coherent homodyne receiver, the complex optical field is linearly translated to the RF domain as

icx(t) ∝ exp{j2π[f1 + (f2 − f1)(t − τ)/(2Tw) + fd]t} + j·exp[−j2π(fCW + fd)t]        (2.11.30)

This is similar to Eq. (2.11.23) except for the second term, which carries a constant frequency in the negative sideband with the information of the Doppler shift fd, whereas the positive sideband corresponds to the time-delayed saw-tooth chirping signal. Dechirping can be applied by mixing the positive sideband with the original chirping waveform to find the dechirped frequency (f2 − f1)τ/Tw + fd and thus the target range. The Doppler frequency shift fd, which is known from the negative-sideband measurement, can easily be subtracted to make the range measurement accurate. If the triangle chirping shown in Fig. 2.11.15 is applied on the upper sideband, the Doppler frequency shift can also be isolated from the dechirped frequency (Eqs. 2.11.24 and 2.11.25).
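To make the composite waveform of Eqs. (2.11.26) and (2.11.27) concrete, the sketch below generates the two drive voltages digitally, the way an arbitrary-waveform transmitter might. The sample rate, chirp bandwidth, tone frequency, and drive amplitude are illustrative assumptions only; no modulator nonlinearity, bias error, or windowing is included.

import numpy as np

# Illustrative parameters (assumed, not tied to a specific instrument)
fs = 21.42e9            # DAC sample rate (Sa/s)
Tw = 0.6884e-6          # chirp duration (s)
f1, f2 = 0.0, 4.3e9     # chirp start/stop offset frequencies (Hz)
f_cw = 5.3545e9         # constant tone on the opposite sideband (Hz)
VD = 1.0                # normalized drive amplitude

t = np.arange(0.0, Tw, 1.0 / fs)
chirp_phase = 2 * np.pi * f1 * t + np.pi * (f2 - f1) * t**2 / Tw

# Eqs. (2.11.26) and (2.11.27): chirp on one sideband, single tone on the other
V_I = VD * (np.cos(chirp_phase) + np.sin(2 * np.pi * f_cw * t))
V_Q = VD * (np.sin(chirp_phase) + np.cos(2 * np.pi * f_cw * t))

# Under the usual linearized, carrier-suppressed approximation, the complex
# baseband field is proportional to V_I + jV_Q, which places the chirp at
# positive frequencies and the constant tone at negative frequencies.
spectrum = np.fft.fftshift(np.fft.fft(V_I + 1j * V_Q))
freqs = np.fft.fftshift(np.fft.fftfreq(len(t), 1.0 / fs))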


Fig. 2.11.16 Power spectral densities of (A): transmitted optical spectrum and (B): detected complex electrical signal described by Eq. (2.11.30).


Fig. 2.11.16A shows an example of a transmitted optical spectrum of such a LIDAR system. In this example, the LIDAR transmitter was built based on an optical transmitter for 10 Gb/s coherent optical fiber communication with the capability of electronic-domain precompensation. Two 21.42-GS/s DACs with 6-bit resolution were equipped in the transmitter, which converted the digital modulation signal into analog form to drive the two RF ports of the I/Q modulator. The amplitude of the RF modulating signal can be varied by adjusting the gain of the two RF power amplifiers after the DACs. This transmitter has a 2^15-bit-long internal digital memory, so that the maximum temporal length of the arbitrary waveform generated by this transmitter was 1.5298 μs. The laser wavelength was set at 1549.65 nm in the experiment. In the waveform design, the upper sideband was loaded with a saw-tooth linear chirp with 4.3 GHz bandwidth. A raised-cosine window function was applied to the chirping signal to minimize the edge effect on each optical pulse. The pulse repetition period was Tp = 1.5298 μs, with a pulse duration Tw = 0.6884 μs. The lower sideband was loaded with a single-frequency modulation at fCW = 5.3545 GHz, which corresponds to a maximum measurable nonaliasing velocity of 4.149 km/s. For the signal on the lower sideband, the pulse width equals the repetition period Tp, so that the amplitude of the sinusoid is constant. The modulation frequency fCW = 5.3545 GHz was chosen to minimize the phase discontinuity between data frames. Fig. 2.11.16 was measured with a coherent heterodyne technique by mixing the optical signal with another narrow-linewidth laser, so that the optical spectrum was shifted to the RF domain and measured by an RF spectrum analyzer. The optical spectrum in Fig. 2.11.16A clearly shows a constant frequency component on the left side for velocity measurement, a residual optical carrier in the middle, and a wideband frequency chirp on the right side for range measurement. In addition, the residual modulation sideband on the opposite side of the optical carrier is also visible, but with >20 dB power suppression ratio. Note that there are also a few spectral lines in Fig. 2.11.16A which were generated from nonlinear mixing between the residual optical carrier, the single-tone modulation, and its unsuppressed sideband. This is attributed to the nonlinear transfer function of the system as well as the saturation of the photodiode in the heterodyne spectral measurement apparatus, since the power of the optical LO was relatively high. The optical signal from the transmitter was collimated onto a target as shown in Fig. 2.11.11. To create a target with nonzero velocity, a spinning disk was used as the target. The surface of the disk was covered with retroreflective tape so that the reflected optical signal could be easily collected. The disk was placed approximately 1.6 m away from the telescope. The longitudinal velocity seen by the laser beam can be adjusted by changing either the beam position along the radius of the disk or the angle between the laser beam and the disk surface normal. Note that the velocity is zero if the disk surface normal is in the same direction as the laser beam; the sign of the longitudinal velocity seen by the laser can also be flipped through this angle change. The spinning speed of the disk was approximately 2316 rpm (revolutions per minute), and the beam position on the disk was about 4.5 cm off the center. At the LIDAR receiver, the optical signal reflected from the spinning disk was combined with the LO through the 90-degree optical hybrid.


The two output ports were detected by two photodiodes, digitized, and combined to reconstruct the complex optical field as described by Eq. (2.11.30). With real-time LIDAR operation in mind, the recorded time-domain digital signal was truncated into lengths equal to the pulse repetition time Tp = 1.5298 μs, corresponding to an update rate of approximately 653 kHz. This composite signal is converted into the frequency domain through an FFT process. Fig. 2.11.16B shows a typical spectrum of the complex electrical signal measured at the LIDAR receiver. This figure, covering the full spectral window, demonstrates the digital recovery of a complex optical field spectrum comparable to that shown in Fig. 2.11.16A, and the two optical sidebands can be utilized for independent purposes. Velocity detection is accomplished by measuring the Doppler-induced frequency shift of the single tone on the lower optical sideband. Fig. 2.11.17 shows the expanded view at that single tone for the target with a positive and a negative velocity, obtained by changing the orientation angle of the spinning disk. In the two measurements, both the radial position of the laser beam on the disk and the distance between the LIDAR and the disk remained unchanged. The solid line and the dashed line in Fig. 2.11.17 represent the spectra of the received signal with the target moving toward and away from the observer, respectively. Because of the Doppler effect, the measured frequency is different from the original single-tone modulation frequency of fCW = −5.3545 GHz. The precise target velocity can be obtained by measuring this frequency shift. The bottom and the top horizontal axes of Fig. 2.11.17 indicate the frequency f and the corresponding velocity calculated from v = (|f| − |fCW|)λ/2. The target velocities indicated by the solid and the dashed spectra are +5.81 m/s and −7.62 m/s, corresponding to disk orientation angles of 32 degrees and −44 degrees, respectively.
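As a quick check of the velocity scale used in Fig. 2.11.17, the snippet below converts a measured single-tone peak frequency into a signed velocity using v = (|f| − |fCW|)λ/2. The peak frequency used here is a made-up illustration, not a measured value.

# Illustrative conversion from a measured tone frequency to velocity;
# the peak frequency below is a hypothetical value, not measured data.
wavelength = 1549.65e-9       # m
f_cw = -5.3545e9              # nominal single-tone frequency (Hz)
f_peak = -5.3620e9            # hypothetical measured peak frequency (Hz)

velocity = (abs(f_peak) - abs(f_cw)) * wavelength / 2
print(f"longitudinal velocity = {velocity:+.2f} m/s")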


Fig. 2.11.17 Zoomed-in view at the single-tone modulation frequency for velocity measurement. Solid and dashed lines represent spectra measured when the target moves toward and away from the observer, respectively.



Fig. 2.11.18 Dechirped signal spectra for distance measurements. Solid and dashed lines represent spectra measured when the target moves toward and away from the observer, respectively.

Based on the 3-dB width of the spectra, the resolution of the velocity measurement in this experiment is approximately 0.5 m/s, which agrees with the theoretical value predicted by vres = λ/(2Tp). Fig. 2.11.18 shows the dechirped spectra for range measurements. Dechirping was performed by mixing the received signal, digitally selected from the upper sideband of the spectrum shown in Fig. 2.11.16B, with the original chirping waveform. However, for a moving target, the Doppler effect shifts the detected signal by a frequency fd which is a function of the target velocity. This Doppler frequency shift has an impact on the dechirping process. The solid and the dashed lines in Fig. 2.11.18 represent dechirped spectra obtained for the target with the positive and the negative velocity, as discussed previously. The distance between the LIDAR transmitter and the target can be calculated by

R = (fR − fd)cTw/(2B)        (2.11.31)

where fR is the peak frequency of the dechirped spectrum, fd is the Doppler frequency shift obtained from the velocity measurement described above, and B = |f2 − f1| is the chirping bandwidth. Note that in Fig. 2.11.18, there is an extra frequency component at fc = 0.43 GHz in each spectrum, but at a much lower amplitude. This originates from the back reflection off the fiber APC (angled physical contact) terminal which leads to the telescope. Since the optical signal reflected from the fiber terminal is not affected by the Doppler frequency shift caused by the target velocity, the peak frequency of this component is identical for the two measurements. In fact, the actual distance between the fiber terminal (at the telescope) and the target can simply be determined from the frequency difference between the spectral peaks caused by the target and the fiber terminal as R0 = (fR − fc − fd)cTw/(2B).
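The sketch below applies Eq. (2.11.31) and the reference-corrected form R0 = (fR − fc − fd)cTw/(2B) to a pair of hypothetical peak frequencies; the numbers are illustrative assumptions rather than the measured values reported next.

# Hypothetical dechirped peak frequencies (Hz); not the measured data
f_R = 0.505e9        # dechirped peak from the target
f_c = 0.430e9        # dechirped peak from the fiber APC terminal reflection
f_d = 7.5e6          # Doppler shift obtained from the velocity measurement

c = 3e8              # m/s
Tw = 0.6884e-6       # chirp duration (s)
B = 4.3e9            # chirping bandwidth |f2 - f1| (Hz)

# Eq. (2.11.31): range referenced to the dechirping time origin
R = (f_R - f_d) * c * Tw / (2 * B)

# Range referenced to the fiber terminal, immune to delays inside the instrument
R0 = (f_R - f_c - f_d) * c * Tw / (2 * B)
print(f"R = {R:.3f} m, R0 = {R0:.3f} m")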


This resulted in a target distance (from the fiber terminal) of R0 ≈ 1.613 m. Based on the 3-dB width of the dechirped spectrum, the range resolution is determined to be approximately 2.8 cm. This roughly agrees with the calculated range resolution Rres = c/(2B) = 3.49 cm, where Rres is defined as the width between the minima on the two sides of the central peak, which is slightly wider than the full width at half maximum (FWHM). So far in this section, we have discussed various LIDAR techniques and configurations. The actual design and implementation of a LIDAR system depend on the specific application, performance requirements, and cost expectations. For example, automobile LIDAR systems have to be economical and reliable, whereas a long-range spaceborne LIDAR system would require extremely high detection sensitivity. Because of the rapid development of optical fiber communication systems, fiber-optic components and integrated photonic circuits have become more mature and affordable. In addition, the high-speed DSP capability developed for high-speed coherent fiber-optic systems has also enabled new design concepts and lower-cost implementations of LIDAR systems.

2.11.2 OCT
OCT stands for optical coherence tomography, a technique often used for high-resolution 3D imaging, or tomography in medical terms. Similar to the LIDAR discussed previously, OCT measures reflectivity as a function of distance (Giles, 1997), known as the z-scan, and the scan in the horizontal xy-plane can be performed by beam steering. LIDAR operation is mostly based on modulated optical waveforms, either optical pulses or FMCW with pulse compression through dechirping, so that the resolution is determined by the pulse width or the chirping bandwidth. In comparison, OCT often relies on the interference of the optical carrier itself, so that the resolution can be much finer, down to the micrometer level. In this section, we will discuss two basic categories of OCT implementations, namely, the optical low-coherence reflectometer (OLCR) and Fourier-domain OCT (FOCT).
(1) Optical low-coherence reflectometer (OLCR)
The operation of an OLCR system is based on a Michelson interferometer configuration (Huang et al., 1991), as shown in Fig. 2.11.19. In this simplified OLCR configuration, a low-coherence optical source with a wide spectral width is divided evenly between the reference and test arms using a 3-dB fiber coupler. The optical delay in the reference arm can then be varied by moving the scanning mirror.


Fig. 2.11.19 Simplified OLCR optical configuration.


The reflected signals from each arm travel back through the coupler, where they are recombined and received at the photodiode. Due to the nature of the coupler, half the reflected power is also directed back to the source, but it is attenuated by the isolator. In the arrangement shown in Fig. 2.11.19, coherent interference will appear at the photodiode if the difference in optical length between the reference and test arms is less than the source coherence length. The coherence length LC of the light source is determined by its spectral width Δλ according to the definition LC = λ²/(nΔλ), where n is the refractive index of the test material and λ is the average source wavelength. The incident optical power on the photodiode leads to a photocurrent described by

I = R[PREF + PDUT + 2√(PREF·PDUT)·cos(φREF(t) − φDUT(t))]        (2.11.32)

where R is the responsivity of the photodiode, PREF is the reflected reference signal with optical phase φREF(t), and PDUT is the reflected test signal with phase φDUT(t). One assumption made in using this photocurrent expression is matched polarization states between the reference and test reflected signals incident upon the photodetector. This matched state maximizes the interference signal created from the combined reflected power. Conversely, signals received at the photodiode with orthogonal states of polarization will create no interference signal, even when the optical length difference of the two arms is within the coherence length, LC, of the source. An illustration of the interference signal received at the detector is shown in Fig. 2.11.20. As shown, the signal is a function of the length difference z, with the peak occurring when the optical lengths of the two arms are equal (z = 0). The DC value, IAVE, is a result of the constant powers, PREF and PDUT, reflected from each arm of the OLCR arrangement. The sinusoidal wave represents the interference between the two reflected signals received at the photodiode when the delays become equal. As the reference arm length changes by one-half the average source wavelength, λ/2, the sinusoidal interference signal completes one period. This is a result of the signal in the reference arm traveling twice across the variable delay, in effect doubling the distance.
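The interference fringe and its coherence envelope (Fig. 2.11.20) can be illustrated with a minimal numerical sketch. The model below assumes a Gaussian source spectrum, so the fringe contrast decays with a Gaussian envelope whose width is set by the coherence length; the source parameters and reflected powers are illustrative assumptions.

import numpy as np

# Assumed source parameters (illustrative only)
lam0 = 1.3e-6               # center wavelength (m)
dlam = 60e-9                # source spectral width (m)
n = 1.0                     # air path in the scanning delay line
Lc = lam0**2 / (n * dlam)   # coherence length as defined in the text

# Path-length mismatch z between the two arms
z = np.linspace(-20e-6, 20e-6, 4001)

P_ref, P_dut, R = 1.0, 0.1, 0.8   # reflected powers (a.u.) and responsivity

# Eq. (2.11.32) with an assumed Gaussian coherence envelope; the fringe period
# is lam0/2 because the reference beam crosses the variable delay twice.
envelope = np.exp(-(2 * z / Lc) ** 2)
I = R * (P_ref + P_dut +
         2 * np.sqrt(P_ref * P_dut) * envelope * np.cos(4 * np.pi * z / lam0))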


Fig. 2.11.20 Example of detected interference signal current.



According to Eq. (2.11.32), when the optical length difference of the two arms is greater than a coherence length, the phases of the two paths are uncorrelated, varying randomly with respect to one another. Since the bandwidth of the photodetector is much smaller than the optical frequency at which the phase difference varies, a constant current will be observed at the photodiode output. However, once the length difference decreases to less than LC, the phase difference, φREF(t) − φDUT(t), no longer averages to zero. Therefore, even though the photodetector bandwidth is unable to follow the resulting sinusoidal interference signal, it will recognize the signal envelope shown in Fig. 2.11.20. The 3-dB width of this envelope, Δz, is the spatial resolution of the OLCR setup. The resolution is ultimately determined by the coherence length of the source,

Δz ≈ LC/2 = A·λ²/(2nΔλ)        (2.11.33)

where the factor A is governed by the spectral shape of the low-coherence source. For example, A = 0.44 for a Lorentzian, A = 0.88 for a Gaussian, and A = 1.2 for a rectangular line shape (Derickson, 1998). Fig. 2.11.21 shows a typical OLCR spatial resolution as a function of the source spectral bandwidth for A = 1 and n = 1.5. As shown, a spatial resolution of less than 10 μm is achieved by implementing a source with at least 100 nm spectral bandwidth. The measurement resolution is also of interest in determining the ability of an OLCR system to detect multiple reflections in the test arm. To be observed, the spacing of adjacent reflections must be greater than the spatial resolution, Δz. In this manner, any number of reflections can be detected as long as they lie within the system dynamic range. In practical applications, the optical reflection from the device under test (DUT) is usually much weaker than that from the reference arm.
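As a worked example of Eq. (2.11.33), the snippet below evaluates the OLCR spatial resolution for a few source bandwidths, using A = 0.88 (Gaussian line shape) and n = 1.5 as assumed inputs.

# Spatial resolution of an OLCR per Eq. (2.11.33); inputs are assumptions.
A = 0.88          # Gaussian source line shape
n = 1.5           # refractive index of the test material
lam0 = 1.3e-6     # center wavelength (m)

for dlam_nm in (20, 50, 100):
    dlam = dlam_nm * 1e-9
    dz = A * lam0**2 / (2 * n * dlam)
    print(f"source bandwidth {dlam_nm:3d} nm -> resolution {dz*1e6:.1f} um")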


Fig. 2.11.21 OLCR spatial resolution versus source spectral bandwidth, assuming A ¼ 1.



Fig. 2.11.22 Practical OLCR using balanced detection and phase modulation.

Therefore, the Michelson interferometer is strongly unbalanced. Consequently, although the average photocurrent is strong, the signal contrast ratio is weak. To minimize the DC component in the photocurrent, balanced photodetection is often used, as shown in Fig. 2.11.22. In addition, an optical phase modulation in the reference arm can help improve the measurement signal-to-noise ratio. This can be accomplished using a PZT, which stretches the fiber. This phase modulation at frequency fm converts the self-homodyne detection into a heterodyne detection with the IF frequency at fm. It shifts the signal electrical spectrum away from DC, thus reducing the low-frequency noise introduced in the receiver electronics. A polarization controller can be inserted in either the reference or the test arm to match the polarization states and maximize the detection efficiency. Another limitation of an OLCR is that the maximum measurement range is usually limited to a few centimeters, mainly determined by the length coverage of the scanning optical delay line. Range extension has been proposed using a pair of retroreflectors in the optical delay line (Takada et al., 1995). By letting the light bounce back and forth between the two retroreflectors N times, the length coverage of the delay line is increased N times. In addition to extra insertion loss in the delay line, another disadvantage of this method is that both the motor step size and the mechanical errors of the translation stage (which controls the position of the retroreflectors in the scanning delay line) are amplified N times, degrading the length resolution of the measurement. Although there have been numerous demonstrations of OLCR for many applications, the major limitation of the OLCR technique is the requirement of a swept optical delay line. Since the sweep of the optical delay is usually accomplished by mechanical means, the sweeping speed is generally slow, and the long-term reliability may not be guaranteed. To overcome this problem, Fourier-domain reflectometry was proposed, based on the fact that scanning the length of the optical delay line can be replaced by sweeping the wavelength of the optical source. Since wavelength sweeping in lasers can be made fast and without the need for mechanical tuning, Fourier-domain reflectometry has the potential to be more practical.



Fig. 2.11.23 Simplified Fourier-domain OCT optical configuration.

(2) Fourier-domain OCT (FOCT)
As shown in Fig. 2.11.23, FOCT is also based on a Michelson interferometer, similar to the OLCR, but a wavelength-swept laser is used instead of a low-coherence light source. In this configuration, the reference arm has a fixed length. Although the wavelength-swept laser source needs a wide wavelength tuning range, at each point in time its spectral linewidth is narrow. The interference between the optical signals from the test and the reference arms is detected by a photodiode, and the reflection in the DUT as a function of distance can be obtained by a Fourier-domain analysis of the interference signal. The operating principle of FOCT is illustrated in Fig. 2.11.24, in which the optical frequency of the laser source is f(t) = f1 + (f2 − f1)t/Tw, which linearly sweeps from f1 to f2 during a time interval Tw; the source optical bandwidth is therefore B0 = |f2 − f1|. After being reflected from the DUT, the signal from the test arm is mixed with that reflected from the reference mirror at the optical receiver, where the optical signal is "dechirped." In the simplest case, if the DUT has only one discrete reflection interface and the differential delay between the reference mirror and the reflection interface in the DUT is τ, the mixed signal will produce only one constant frequency at the receiver, which indicates the location of the reflection within the DUT through the dechirped frequency fR = (f2 − f1)τ/Tw. In general, if the DUT has continuous reflections with distributed intensity, a Fourier analysis can be used to translate the measured frequency components into the longitudinal distribution of reflections at various locations within the DUT.
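The dechirp-and-FFT idea can be sketched numerically. The toy model below synthesizes the photodiode beat signal for a single reflection at an assumed depth, takes an FFT, and converts the dominant beat frequency back to a depth by inverting fR = (f2 − f1)τ/Tw together with x = cτ/(2n); all parameters are illustrative assumptions.

import numpy as np

# Illustrative swept-source parameters (assumptions, not a specific instrument)
B0 = 6.25e12            # optical sweep bandwidth f2 - f1 (Hz), roughly 50 nm at 1550 nm
Tw = 10e-6              # sweep duration (s)
fs = 200e6              # receiver sampling rate (Sa/s)
n = 1.45                # refractive index of the sample

tau = 2 * n * 0.5e-3 / 3e8      # roundtrip delay of a reflector 0.5 mm deep

t = np.arange(0.0, Tw, 1.0 / fs)
# Beat (dechirped) signal at the constant frequency fR = B0 * tau / Tw
beat = np.cos(2 * np.pi * (B0 * tau / Tw) * t)

spectrum = np.abs(np.fft.rfft(beat * np.hanning(len(t))))
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
fR = freqs[np.argmax(spectrum[1:]) + 1]          # skip the DC bin

depth = 3e8 * (fR * Tw / B0) / (2 * n)           # recovered reflector depth (m)
print(f"beat frequency {fR/1e6:.2f} MHz -> depth {depth*1e3:.3f} mm")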


Fig. 2.11.24 Illustration of the Fourier-domain OCT operation principle.


At this point, readers may have recognized that the chirp/dechirp process presented in Fig. 2.11.24 looks almost identical to that of the FMCW LIDAR shown in Fig. 2.11.4. While this is true, the major difference is that for an FMCW LIDAR, the chirp is applied on the modulation envelope of the optical carrier, whereas for FOCT, the carrier optical frequency itself is linearly swept from f1 to f2 within a time window Tw. As a result, the spatial resolution in the longitudinal dimension of a FOCT is inversely proportional to the source optical bandwidth, δx = c/(2nB0), or equivalently, δx = λ0²/(2nΔλ), where Δλ = (λ0²/c)B0 is the chirp bandwidth in wavelength, n is the refractive index of the material, and λ0 is the central wavelength of the wavelength-swept source. For example, if the source wavelength is in the 1550 nm window, a wavelength sweep over a 50 nm bandwidth can provide a spatial resolution of approximately 17 μm in an optical fiber with n = 1.45. The FOCT implementation shown in Fig. 2.11.23 is essentially a self-coherent detection setup in which the optical local oscillator (LO) is provided by the same laser source as the optical signal. Based on the fundamental principle of coherent homodyne detection, if the 3-dB coupler is ideal, the signal component of the photocurrent obtained at the output of the photodetector is

Is(t) = 2R√(Pr·Pt)·exp{j2π[f(t) − f(t − τ)]t + jφ(t)} = 2R√(Pr·Pt)·exp{j[2πfRt + φ(t)]}        (2.11.34)

where Pr and Pt are the optical powers that come from the reference and the test arms, respectively, τ is the roundtrip propagation delay difference between the reference and the test arms, fR = f(t) − f(t − τ) = (f2 − f1)τ/Tw is the dechirped frequency, and φ(t) is the differential phase noise between the signal and the LO. The direct-detection component is neglected in Eq. (2.11.34) for simplicity. The dechirped frequency fR can be found from the photocurrent through a fast Fourier transform, and the impact of the differential phase noise φ(t) can be minimized through digital filtering. For a FOCT, dechirping does not require RF mixing, as it takes place in the photodiode, and the location of the reflection interface inside the DUT can be easily calculated by

x = cτ/(2n) = cTw·fR/(2n(f2 − f1))        (2.11.35)

In contrast to OLCR, where a low-coherence light source was used, the wavelength-swept light source used in the FOCT is highly coherent. In fact, for a FOCT, the maximum range coverage of the measurement Δx is directly proportional to the coherence length of the laser source, which is inversely proportional to the spectral linewidth δλ of the laser, by

Δxmax = λ0²/(2nδλ)        (2.11.36)


This means that the optical signals reflected from the test and the reference arms should always be mutually coherent, so that their path length difference can be found by the fast Fourier transform. For example, suppose the spectral linewidth of a swept laser source is 2 GHz, which is equivalent to δλ = 16 pm in the 1550 nm wavelength window; it can provide an approximately 51 mm maximum spatial coverage. This length coverage is not enough to diagnose failures along a fiber-optic system, but it is enough to characterize the detailed structures of optical components or typical biomedical samples. For 3D imaging or section scanning, the object has continuous reflections along the longitudinal direction of the probing light beam, and therefore both the power of the reflected light Pt(τ) and the dechirped frequency fR(τ) depend on the position of the reflection, and the photocurrent is a collection of all the reflection contributions,

Is(t) = 2R·Σi √(Pr·Pt(τi))·exp{j[2πfR(τi)t + φ(t)]}        (2.11.37)

The Fourier transform of the photocurrent Is(t) then provides the information of the distributed reflection along the signal beam path.
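The two design numbers quoted above, about 17 μm resolution for a 50 nm sweep and about 51 mm range coverage for a 2 GHz (16 pm) instantaneous linewidth, follow directly from δx = λ0²/(2nΔλ) and Eq. (2.11.36); the snippet below reproduces that arithmetic with the same assumed values.

# Resolution and maximum range coverage of a FOCT (illustrative arithmetic)
lam0 = 1550e-9        # central wavelength (m)
n = 1.45              # refractive index of the sample (optical fiber)
c = 3e8

sweep_bw = 50e-9      # wavelength sweep range (m)
linewidth_hz = 2e9    # instantaneous laser linewidth (Hz)

dlam_linewidth = lam0**2 * linewidth_hz / c      # about 16 pm

dx = lam0**2 / (2 * n * sweep_bw)                # spatial resolution
dx_max = lam0**2 / (2 * n * dlam_linewidth)      # Eq. (2.11.36), range coverage

print(f"resolution ~ {dx*1e6:.1f} um, coverage ~ {dx_max*1e3:.1f} mm")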

One unique challenge in the FOCT, though, is the requirement of a high-quality wavelength-swept laser source, especially one with a high wavelength sweep speed. Although small-footprint distributed Bragg reflector (DBR) semiconductor lasers can provide >20 nm wavelength tuning range with about 100 MHz linewidth, the wavelength sweep of a DBR laser is usually not continuous across the entire 20 nm wavelength band, and therefore it cannot be used for the FOCT. In practical FOCT systems, fiber-based ring lasers are often used with an intra-cavity scanning bandpass filter for the frequency sweep, as shown in Fig. 2.11.25. In this fiber laser configuration, a semiconductor optical amplifier (SOA) is used as the gain medium.


Fig. 2.11.25 (A) Configuration of a wavelength-swept fiber ring laser. FR, Faraday rotator; FPI, Fabry-Perot interferometer; ISO, isolator; SOA, semiconductor optical amplifier. (B) Illustration of the optical bandwidth Δλ and the linewidth δλ of the laser.


Two optical isolators are used to ensure the unidirectionality of the optical signal and prevent unwanted optical reflections. A swept narrowband optical filter is used to select the oscillation wavelength in this fiber ring cavity. In general, the maximum sweeping spectral width of the laser, Δλ, is determined by the bandwidth of the SOA used in the ring. Because of the saturation effect of the SOA, this spectral bandwidth is in general wider than the small-signal 3-dB bandwidth of the SOA itself. Commercial fiber-pigtailed SOAs in the 1320 nm region usually have >50 nm bandwidth, which can potentially provide a ring laser bandwidth of more than 70 nm. The spectral linewidth of the laser output, δλ, on the other hand, is determined by the narrowband optical filter in the loop. A commercially available fiber Fabry-Perot tunable filter can have a finesse of 2000. Therefore, if one chooses a free spectral range (FSR) of 100 nm, the width of the passband can be on the order of 0.05 nm. Generally, because of the mode-competition mechanism in the homogeneously broadened gain medium, the linewidth of the ring laser may be slightly narrower than the 3-dB passband of the tunable filter. A Faraday rotator in the ring cavity is used to minimize the polarization sensitivity of the lasing mode. For practical FOCT applications, the wavelength sweep rate of the laser source is an important parameter that, to a large extent, determines the speed of measurement. In a fiber ring laser, after the bandpass optical filter selects a certain wavelength, it takes a certain time for the laser to build up the lasing mode at that wavelength. Theoretically, when the lasing threshold is reached, the optical power should reach its saturation value Psat. If the net optical gain of each roundtrip is g (including the amplifier gain and the cavity loss) and the ASE noise selected by the narrowband filter is δPASE, it requires N roundtrips to build up the ASE to the saturation power, that is,

Psat = δPASE·g^N        (2.11.38)

where δPASE = PASE(δλf/Δλ), PASE is the total ASE noise power within the bandwidth Δλ of the SOA, and δλf is the bandwidth of the optical filter. Since the time required for each roundtrip in the ring cavity is τcavity = Ln/c, with L the cavity length, the time required to build up the lasing mode is

δt = [Ln/(c·log g)]·log[(Psat·Δλ)/(PASE·δλf)]        (2.11.39)

For example, if PASE = 1 mW, Psat = 10 mW, Δλ = 120 nm, δλf = 0.135 nm, L = 2.4 m, g = 15 dB, and n = 1.46, we find δt ≈ 30.73 ns. The maximum wavelength sweeping speed vmax can then be determined by requiring that, within the time interval δt, the wavelength tuning be less than the bandwidth of the filter; in this example, vmax ≈ δλf/δt = 4.39 nm/μs. Considering the total SOA bandwidth of Δλ = 120 nm, the limitation on the repetition rate of the wavelength sweep is approximately

fmax ≈ (1/π)(vmax/Δλ) ≈ 11.65 kHz        (2.11.40)

where the factor 1/π accounts for the higher sweep speed of a sinusoidal sweep compared to a linear, unidirectional sweep, assuming the filter sweep covers the same spectral range (Huber et al., 2005). If the tuning frequency is higher than fmax, the lasing mode does not have enough time to establish itself, and the output power will be weak and unstable.
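The numbers in this example can be reproduced with a few lines of arithmetic, shown below; the parameter values are the same illustrative ones used in the text.

import numpy as np

# Parameters from the worked example in the text
P_ase, P_sat = 1e-3, 10e-3                # W
d_lambda, d_lambda_f = 120e-9, 0.135e-9   # SOA bandwidth and filter bandwidth (m)
L, n_fiber = 2.4, 1.46                    # cavity length (m) and refractive index
g_db = 15.0                               # net roundtrip gain (dB)
c = 3e8

g = 10 ** (g_db / 10)
# Eq. (2.11.39): time to build up the lasing mode from filtered ASE
dt = (L * n_fiber / (c * np.log10(g))) * np.log10((P_sat / P_ase) * (d_lambda / d_lambda_f))

v_max = d_lambda_f / dt                   # maximum sweep speed (m of wavelength per second)
f_max = (1 / np.pi) * v_max / d_lambda    # Eq. (2.11.40), sweep repetition limit

print(f"build-up time ~ {dt*1e9:.2f} ns")
print(f"max sweep speed ~ {v_max*1e3:.2f} nm/us")
print(f"max sweep repetition rate ~ {f_max/1e3:.2f} kHz")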


One technique to increase the wavelength-sweeping speed is to synchronize the repetition time of the wavelength sweep with the roundtrip time of the ring cavity; that is, the narrowband optical filter is tuned periodically at the roundtrip time, or a harmonic of the roundtrip time, of the ring cavity. This allows the delay line in the ring cavity to store the wavelength-swept optical signal. This signal reenters the narrowband filter and the SOA at the exact time when the filter is tuned to the particular wavelength of the optical signal. In this case, the lasing mode does not have to build up from spontaneous emission at each wavelength; therefore, a much higher sweeping speed, up to a few hundred kilohertz, is permitted (Huber et al., 2006; Wojtkowski et al., 2004). For example, for a fiber ring laser with a fiber length L = 2 km and a refractive index n = 1.47, the roundtrip time is nL/c = 9.8 μs, corresponding to a repetition frequency of 102 kHz. This allows the repetition rate of the laser frequency sweep to be an integer multiple of 102 kHz. The FOCT eliminates the requirement of a mechanically tunable delay in the reference arm of the interferometer; however, it shifts the tunability requirement to the wavelength of the laser source. Without a tunable laser source, an alternative technique is to use a fixed wideband light source in the FOCT but to employ an optical spectrometer at the detection side, as shown in Fig. 2.11.26. In this case, if the target has only one reflection interface, the optical spectrum measured at the end of the interferometer will be

I(f) ∝ Rref + RDUT + 2√(Rref·RDUT)·cos(4πfx/c)        (2.11.41)

where Rref and RDUT are the power reflectivities of the reference arm and the DUT, respectively, and x is the length difference between the two reflection interfaces. The value of x can be determined by a Fourier transform of the optical spectrum.


Fig. 2.11.26 Fourier-domain reflectometry using a fixed wideband source and an optical spectrometer for optical length determination.


Using the setup shown in Fig. 2.11.26, the spatial resolution of the measurement is determined by the spectral bandwidth Δλ of the light source, δx = λ0²/(2nΔλ), whereas the length coverage is determined by the spectral resolution δλ of the optical spectrometer, Δx = λ0²/(2nδλ), where λ0 is the central wavelength of the source and n is the refractive index of the DUT material. Although this measurement technique does not need a tunable laser, it places stringent requirements on the optical spectrometer. For the measurement to be fast, a grating- and CCD-array-based optical spectrometer is often used, allowing the parallel processing of all signal wavelength components simultaneously; in addition, no moving part is required in the spectrometer, as discussed at the end of Section 2.2. Practically speaking, commercially available compact CCD-array-based spectrometers usually have a low spectral resolution, typically not better than 0.5 nm, and therefore the length coverage is less than 1.6 mm when operating in the 1550 nm wavelength window. Another problem is that CCD arrays are available in silicon, which is sensitive only up to about 1 μm wavelength. For long-wavelength operation such as 1320 or 1550 nm, InGaAs-based diode arrays must be used, which are very expensive, especially with a large number of pixels in the array. Although tunable filter-based optical spectrometers, such as a scanning FPI, can be used in this setup, the detection efficiency is rather poor. The reason is that the optical source emits energy that simultaneously covers a wide wavelength band, whereas the tunable filter in the spectrometer only picks up a narrow slice of the spectrum at any given time. Therefore, most of the optical energy provided by the source is wasted in the process, and thus the detection efficiency is low. Finally, it is important to note that polarization state mismatch between the signals from the two interferometer arms is an important issue affecting the performance of both OLCR and FOCT. Polarization diversity, similar to that used in coherent optical communication receivers, can be used to minimize the polarization-related signal fading, but it will increase the complexity of the system.

2.12 Optical network analyzer
The network analyzer is one of the basic instruments in the research and development of RF, microwave, and, more recently, lightwave technologies. An optical network analyzer is based on the same concept as an RF network analyzer except for the difference in operating frequencies. Therefore, our discussion starts with RF network analyzers.

2.12.1 S-Parameters and RF network analyzer
An RF network analyzer performs accurate measurements of the ratio of the transmitted signal to the incident signal and of the reflected signal to the incident signal. Typically, an RF network analyzer has a frequency-swept source and a detector that allow the measurement of both the amplitude and the phase responses at each frequency, producing complex transfer functions of RF devices and systems.



Fig. 2.12.1 S-parameter of a two-port network.

In general, the linear characteristics of a two-port RF network can be defined by its S-parameters, as illustrated in Fig. 2.12.1. By definition, the S-parameters relate the input, the output, and the reflected fields by

[b1]   [s11  s12] [a1]
[b2] = [s21  s22] [a2]        (2.12.1)

where s11, s12, s21, and s22 are matrix elements, each providing a specific relation between the input and the output:

s11 = b1/a1 |a2=0 = (reflected from port 1)/(incident into port 1)        (2.12.2)

s12 = b1/a2 |a1=0 = (transmitted from port 2 to port 1)/(incident into port 2)        (2.12.3)

s21 = b2/a1 |a2=0 = (transmitted from port 1 to port 2)/(incident into port 1)        (2.12.4)

s22 = b2/a2 |a1=0 = (reflected from port 2)/(incident into port 2)        (2.12.5)

In general, each element of the S-parameter matrix is both wavelength-dependent and complex, representing both the amplitude and the phase of the network properties. Fig. 2.12.2 shows the block diagram of a transmission/reflection (T/R) test set, which can be used to measure the S-parameters of RF components. In this test set, a swept synthesizer is used as the signal source, which supplies the stimulus for the stimulus-response test system. We can either sweep the frequency of the source or sweep its power level, depending on the measurement requirement. Part of the signal from the source is sent to the device under test (DUT), and part of it is split off through a directional coupler as the reference. To measure s11, the signal reflected from the DUT is directed to an RF coherent receiver, where it is mixed with the reference signal that serves as the local oscillator. An ADC is used to convert the mixed intermediate-frequency (IF) signal into a digital format, which is then sent for signal processing and display.
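As a minimal illustration of how the S-parameter definitions are used, the sketch below builds a 2x2 S-matrix for a hypothetical two-port device and computes the outgoing waves from given incident waves per Eq. (2.12.1). The particular numbers are arbitrary examples, not data for any real device.

import numpy as np

# Hypothetical two-port S-matrix at one frequency (complex, arbitrary example)
S = np.array([[0.05 * np.exp(1j * 0.3),  0.9 * np.exp(-1j * 1.2)],
              [0.9 * np.exp(-1j * 1.2),  0.08 * np.exp(1j * 0.1)]])

a = np.array([1.0 + 0j, 0.0 + 0j])    # unit incident wave at port 1 only (a2 = 0)
b = S @ a                             # Eq. (2.12.1): outgoing waves b1, b2

# With a2 = 0, b1/a1 and b2/a1 directly give s11 and s21 (Eqs. 2.12.2 and 2.12.4)
s11, s21 = b[0] / a[0], b[1] / a[0]
print(f"|s11| = {abs(s11):.3f}, |s21| = {abs(s21):.3f}")
print(f"insertion loss = {-20*np.log10(abs(s21)):.2f} dB, "
      f"return loss = {-20*np.log10(abs(s11)):.2f} dB")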


Fig. 2.12.2 Block diagram of a transmission/reflection test set.

Since RF coherent detection is used, both the amplitude and the phase information of the reflected signal from Port 1 of the DUT can be determined, which yields s11. Similarly, the signal transmitted from Port 1 to Port 2 can also be coherently detected by setting the switch to the other RF mixer on the left; this measures the parameter s21. Note that in this T/R test set, the stimulus RF power always comes out of Test Port 1, and Test Port 2 is always connected to a receiver in the analyzer. To measure s12 and s22, the stimulus RF source has to be connected to the output of the DUT, and therefore one must disconnect the DUT, turn it around, and reconnect it to the test set. In doing this, the test set may need to be recalibrated, which often degrades the measurement accuracy.


Fig. 2.12.3 Block diagram of an RF network analyzer. DC, directional coupler.


Fig. 2.12.3 shows the block diagram of an RF network analyzer, which allows both forward and reverse measurements on the DUT; these are needed to characterize all four S-parameters. The variable attenuators are used to set the stimulus signal power levels, and the bias tees are used to add a CW bias level to the signal, if necessary. The directional couplers (DCs) separate the stimulus and the reflected signals from the DUT. In the coherent receiver, the signal reflected from (or transmitted through) the DUT is mixed with the frequency-shifted local oscillator and selected by narrow bandpass filters (BPFs) before being sent to the analog-to-digital converters (ADCs). The constant frequency shift fIF in the local oscillator ensures heterodyne RF detection with the intermediate frequency at fIF. The central frequency of the BPF is determined by the intermediate frequency fIF, and to achieve high detection sensitivity, the bandwidth of the BPF is usually very narrow. A central electronic control circuit synchronizes the swept-frequency oscillator and the signal processing unit to find the S-parameters of the DUT. The network analyzer can be used to characterize both RF devices and complicated RF systems. For example, an optical transmission system can be regarded as a two-port RF system, and its transmission performance can be characterized by a network analyzer, as illustrated in Fig. 2.12.4. In an optical transmission system, the transmitter (Tx) converts the RF input into an optical signal, optical fibers and optical amplifiers deliver the optical signal to the destination, and the receiver (Rx) converts the received optical signal back to the RF domain. Therefore, the frequency response of the overall system can be measured by the complex S21 parameter. In this measurement, an important detail we need to pay attention to is the time delay of the transmission system. For example, for a 100-km-long optical fiber, the propagation delay is approximately 500 μs.


Fig. 2.12.4 Measurement of transmission characteristics using an RF network analyzer.


Since the local oscillator in the network analyzer is linearly swept, during this long time delay the mixed frequency between the signal transmitted through the system and the local oscillator will likely fall outside the bandwidth of the BPF; as a consequence, no response will be measured by the network analyzer. In most commercial RF network analyzers, a "step sweep" function is available, in which the local oscillator stays at each frequency for a certain amount of time before jumping to the next frequency. By choosing the dwell time of the step sweep to be longer than the system delay, the BPF in the coherent receiver of the network analyzer will be able to capture the heterodyne signal.
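The delay figure quoted above, and a corresponding minimum dwell time for the step sweep, amount to the simple arithmetic below; the fiber group index is an assumed typical value.

# Propagation delay of a long fiber link and a step-sweep dwell-time estimate
c = 3e8
n_group = 1.468          # assumed group index of standard single-mode fiber
length = 100e3           # fiber length (m)

delay = n_group * length / c
dwell_margin = 2.0       # choose the dwell time a few times longer than the delay

print(f"propagation delay ~ {delay*1e6:.0f} us")
print(f"suggested minimum dwell time per frequency step ~ {dwell_margin*delay*1e6:.0f} us")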

2.12.2 Optical network analyzers
An optical network analyzer is an instrument to characterize the transfer function of optical components and systems. Although the transfer function of an optoelectronic system can be measured by an RF network analyzer, as illustrated in Fig. 2.12.4, the system must have an electrical-to-optical (E/O) converter and an optical-to-electrical (O/E) converter as the terminals of the optical system.
2.12.2.1 Scalar optical network analyzer
To be able to characterize an optical system itself, without optoelectronic conversions at its terminals, an optical network analyzer can be extended from an RF network analyzer by providing calibrated E/O and O/E converters. Fig. 2.12.5 shows the block diagram of a scalar optical network analyzer. The E/O converter consists of a wavelength-tunable laser and a wideband external electro-optic intensity modulator, which converts the RF modulation into optical modulation. A wideband photodetector and a low-noise preamplifier are used in the optical receiver, which converts the optical signal back into the RF domain. An optical directional coupler is used at the transmitter port, providing the capability of measuring the reflection from the DUT. An optical switch at the receiver port selects between the measurement of transmission (S21) and reflection (S11).


Fig. 2.12.5 Block diagram of a “scalar” lightwave network analyzer.


A conventional RF network analyzer is used to provide the RF stimulus, detection, signal processing, and display. Since the transmitter and receiver pair is unidirectional, this optical network analyzer only measures S21 and S11; obviously, S12 and S22 can be measured by reversing the direction of the DUT. In fact, this optical network analyzer simply provides a calibrated electro-optic transmitter/receiver pair, which serves as an interface between the RF network analyzer and the optical device or system to be tested. Integrating the wideband transmitter/receiver pair into the same instrument makes it easy to calibrate the overall transfer function, including the RF network analyzer and the E/O and O/E converters. This enables an optical network analyzer to accurately characterize the wideband response of optical devices and systems up to 50 GHz. It is important to notice that in the optical network analyzer block diagram shown in Fig. 2.12.5, intensity modulation and direct detection are used for the E/O and O/E conversion. The instrument is effective in measuring the response of the DUT to an intensity-modulated lightwave signal at various stimulating frequencies, determined by the RF network analyzer, and wavelengths, determined by the tunable laser. However, the optical phase information of the DUT cannot be determined in these measurements due to the intensity-modulation/direct-detection nature of the E/O and O/E conversion. For this reason, this optical network analyzer is usually referred to as a scalar network analyzer.
2.12.2.2 Vector optical network analyzer
Generally, to be able to measure optical phase information, coherent detection has to be used in the E/O-O/E interface box. However, another measurement technique, based on an optical interferometric method, appears to be more effective and simpler for obtaining the vector transfer function of the DUT. Fig. 2.12.6 shows the block diagram of a vector optical network analyzer using the interferometric technique (Glombitza and Brinkmeyer, 1993; Froggatt et al., 1999; Luna Technologies, 2022). A wavelength-swept tunable laser is used to provide the optical stimulus.




Fig. 2.12.6 Block diagram of a “vector” optical network analyzer.


Two optical interferometers are used to provide relative optical delays. The first interferometer is made of polarization-maintaining fiber, with a polarization-maintaining coupler (PMC) equally splitting the optical signal into the two branches. At the output of this first interferometer, the polarization axis of one of the PM fiber branches is rotated by 90 degrees, and a polarization beam splitter (PBS) is used to combine the optical signals. Since the optical signals carried by the two branches are orthogonal to each other after combining, there is no interference between them. Thus, by the classic definition, this is not a real interferometer; instead, it merely produces a differential delay between the two orthogonally polarized components. Assume the field of the input optical signal is

Ein = ax·A·e^{jωt}        (2.12.6)

where ax is a unit vector indicating the polarization orientation, A is a complex envelope, and ω is the frequency of the optical signal. Then, after the first interferometer, the optical field is

E1 = (A/√2)[ax·exp(jω(t − τ01)) + ay·exp(jω(t − τ02))]        (2.12.7)

where ax and ay are two mutually orthogonal unit vectors, and τ01 and τ02 are the time delays of the two interferometer branches. The second interferometer in Fig. 2.12.6 is composed of two conventional 3-dB optical couplers. The lower branch is a simple optical delay line, and the upper branch connects to the DUT. Another PBS is used at the end to split the optical signal into two orthogonal polarization components, s and p, before they are detected by two photodiodes. The optical field at the output of the second interferometer is composed of two components:

E2 = E2L + E2U        (2.12.8)

where E2L and E2U are the optical signals that come from the lower and upper branches of the second interferometer, respectively,

E2L = (A/(2√2))[ax·e^{jω(t−τ01)} + ay·e^{jω(t−τ02)}]·e^{−jωτ11}        (2.12.9)

E2U = (A/(2√2))·T̂(jω)[ax·e^{jω(t−τ01)} + ay·e^{jω(t−τ02)}]·e^{−jωτ12}        (2.12.10)

where τ11 and τ12 are the time delays of the lower and upper branches, and T̂(jω) is the vector transfer function of the DUT, which can be expressed as a 2 × 2 Jones matrix:

T̂(jω) = [Txx(jω)  Txy(jω)]
         [Tyx(jω)  Tyy(jω)]        (2.12.11)


The polarization controller (Pol.) before the second interferometer is adjusted such that, after passing through the lower branch of the second interferometer, each field component in E1 is equally split into the s and p branches. This can be explained simply as a 45-degree rotation between the two orthogonal bases (ax, ay) and (s, p), as illustrated in the inset of Fig. 2.12.6. Based on this setting, the E2L component is equally split into the s and p branches, and their optical fields are, respectively,

E2L,s = (A/4)[e^{jω(t−τ01−τ11)} + e^{jω(t−τ02−τ11)}]        (2.12.12)

E2L,p = (A/4)[e^{jω(t−τ01−τ11)} − e^{jω(t−τ02−τ11)}]        (2.12.13)

Similarly, for the signal passing through the upper branch of the second interferometer, the corresponding optical fields reaching the s and p detectors are, respectively,

E2U,s = (A/4)[Txx·e^{jω(t−τ12−τ01)} + Txy·e^{jω(t−τ12−τ02)} + Tyx·e^{jω(t−τ12−τ01)} + Tyy·e^{jω(t−τ12−τ02)}]        (2.12.14)

E2U,p = (A/4)[Txx·e^{jω(t−τ12−τ01)} + Txy·e^{jω(t−τ12−τ02)} − Tyx·e^{jω(t−τ12−τ01)} − Tyy·e^{jω(t−τ12−τ02)}]        (2.12.15)

The composite optical fields at the s and p photodetectors can then be obtained by combining these two contributing components as

Es = E2U,s + E2L,s = (|Ein|/4)[(Txx + Tyx)e^{−jω(τ12+τ01)} + (Txy + Tyy)e^{−jω(τ12+τ02)} + e^{−jω(τ11+τ01)} + e^{−jω(τ11+τ02)}]e^{jωt}        (2.12.16)

and

Ep = E2U,p + E2L,p = (|Ein|/4)[(Txx − Tyx)e^{−jω(τ12+τ01)} + (Txy − Tyy)e^{−jω(τ12+τ02)} + e^{−jω(τ11+τ01)} − e^{−jω(τ11+τ02)}]e^{jωt}        (2.12.17)

Since photodiodes are square-law detection devices, the photocurrents in the s and p branches are Is = η|Es|² and Ip = η|Ep|², where η is the responsivity of the photodiode. It can easily be found that there are four discrete propagation delay components in the expression of the photocurrent in each branch: e^{jωδτ0}, e^{jωδτ1}, e^{jω(δτ1−δτ0)}, and e^{jω(δτ1+δτ0)}, where δτ0 = τ02 − τ01 is the relative delay in the first interferometer and δτ1 = τ12 − τ11 is the relative delay in the second interferometer.



Fig. 2.12.7 Illustration of the differential frequency term generated by an interferometer and a frequency-swept source. dω/dt is the frequency slope, and δτ is the differential time delay between the two interferometer arms.

From a signal processing point of view, it is not generally easy to distinguish terms corresponding to different propagation delays in a photocurrent signal. However, as illustrated in Fig. 2.12.7, if the tunable laser source is linearly swept, these discrete propagation delay terms will each have a different frequency and can easily be separated by Fourier analysis. If the tunable laser is swept at a rate of dω/dt, the four discrete delay terms e^{jωδτ0}, e^{jωδτ1}, e^{jω(δτ1−δτ0)}, and e^{jω(δτ1+δτ0)} become four discrete frequency components in the photocurrent: δτ0(dω/dt), δτ1(dω/dt), (δτ1 − δτ0)(dω/dt), and (δτ1 + δτ0)(dω/dt), respectively. Among these frequency components, only (δτ1 − δτ0)(dω/dt) and (δτ1 + δτ0)(dω/dt) are useful, because their coefficients are proportional to the DUT transfer matrix elements. A simple derivation from Eqs. (2.12.16) and (2.12.17) yields

Is = η|Es|² ∝ T11·exp[j(δτ1 − δτ0)(dω/dt)t] + T12·exp[j(δτ1 + δτ0)(dω/dt)t]        (2.12.18)

and

Ip = η|Ep|² ∝ T21·exp[j(δτ1 − δτ0)(dω/dt)t] + T22·exp[j(δτ1 + δτ0)(dω/dt)t]        (2.12.19)

where

[T11  T12]   [(Txx + Tyx)  (Txy + Tyy)]
[T21  T22] = [(Txx − Tyx)  (Txy − Tyy)]        (2.12.20)

is the base-rotated transfer function of the DUT. In practical implementations, the photocurrent signals Is and Ip are digitized and analyzed through digital signal processing. The amplitudes at the frequencies (δτ1 − δτ0)(dω/dt) and (δτ1 + δτ0)(dω/dt) can be calculated by fast Fourier transform (FFT), and the matrix elements T11, T12, T21, and T22 can be obtained. Finally, the vector transfer function of the DUT can be expressed as

T̂(jω) = [Txx  Txy] = (1/2)[(T11 + T21)  (T12 + T22)]
         [Tyx  Tyy]        [(T11 − T21)  (T12 − T22)]        (2.12.21)

This linear transfer function is a 2 × 2 Jones matrix, and each element is generally a function of optical frequency.
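The last few steps, picking the two beat-frequency amplitudes out of each digitized photocurrent and converting them to Jones-matrix elements via Eqs. (2.12.20) and (2.12.21), can be sketched as below. The beat frequencies and synthetic photocurrents are illustrative assumptions, not a model of any particular instrument.

import numpy as np

fs = 10e6                       # assumed sampling rate of Is and Ip (Sa/s)
t = np.arange(0, 1e-3, 1 / fs)  # one sweep segment
f_lo, f_hi = 0.8e6, 1.3e6       # assumed beats (dt1 - dt0)*dw/dt and (dt1 + dt0)*dw/dt

# Synthetic photocurrents per Eqs. (2.12.18)/(2.12.19) with made-up T elements
T11, T12, T21, T22 = 0.6 + 0.2j, 0.1 - 0.3j, 0.05 + 0.4j, 0.7 - 0.1j
Is = T11 * np.exp(2j * np.pi * f_lo * t) + T12 * np.exp(2j * np.pi * f_hi * t)
Ip = T21 * np.exp(2j * np.pi * f_lo * t) + T22 * np.exp(2j * np.pi * f_hi * t)

def tone(x, f):
    # Complex amplitude of the e^{j2*pi*f*t} component of x (DFT projection)
    return np.mean(x * np.exp(-2j * np.pi * f * t))

m11, m12 = tone(Is, f_lo), tone(Is, f_hi)
m21, m22 = tone(Ip, f_lo), tone(Ip, f_hi)

# Eq. (2.12.21): rotate back to the (x, y) basis of the DUT Jones matrix
T_hat = 0.5 * np.array([[m11 + m21, m12 + m22],
                        [m11 - m21, m12 - m22]])
print(np.round(T_hat, 3))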



Fig. 2.12.8 Optical configuration of a vector optical network analyzer that is capable of both transmission and reflection measurements (Van Wiggeren et al., 2003).

This transfer function contains all the information required to characterize the performance of the DUT, such as optical loss, chromatic dispersion, and polarization effects. Note that the vector optical network analyzer shown in Fig. 2.12.6 is unidirectional, and only measures the transmission characteristics of the DUT. This configuration can be modified to measure both transmission and reflection characteristics, as shown in Fig. 2.12.8 (Van Wiggeren et al., 2003; Freundorfer, 1991). In this configuration, the reference signal does not go through the differential polarization delay, and therefore, compared to the configuration in Fig. 2.12.6, Eqs. (2.12.12) and (2.12.13) should be replaced by

E2L,s = E2L,p = A2·e^{jω(t−τ0)}        (2.12.22)

where A2 is the reference signal amplitude at the receiver and τ0 is the delay of the reference signal. Here, we have assumed that the polarization controller (Pol.2) in the reference path is set such that the reference signal is equally split into the s and p polarizations at the receivers. Therefore, there will be only three discrete frequency components in this configuration: (τ01 − τ0)(dω/dt), (τ02 − τ0)(dω/dt), and (τ02 − τ01)(dω/dt), where τ01 and τ02 in this case represent the overall propagation delays (from source to detector) of the optical signals passing through the lower and upper branches of the single interferometer. Among these three frequency components, the coefficients of the first two are related to the DUT transfer functions, and similar procedures to those described previously can then be used to determine the vector transfer function of the DUT.

References
Adany, P., Allen, C., Hui, R., 2009. Chirped lidar using simplified homodyne detection. J. Lightwave Technol. 27 (16), 3351–3357.
Baba, T., Akiyama, S., Imai, M., Hirayama, N., Takahashi, H., Noguchi, Y., Horikawa, T., Usuki, T., 2013. 50-Gb/s ring-resonator-based silicon modulator. Opt. Express 21 (10), 11869–11876.


Baney, D.M., Szafraniec, B., Motamedi, A., 2002. Coherent optical spectrum analyzer. IEEE Photon. Technol. Lett. 14 (3), 355–357. Betti, S., De Marchis, G., Iannone, E., 1995. Coherent Optical Communications Systems. Wiley Series in Microwave and Optical Engineering, Wiley-Interscience. Born, M., Wolf, E., 1990. Principles of Optics, seventh ed. Cambridge University Press. Cai, M., Painter, O., Vahala, K.J., 2000. Observation of critical coupling in a fiber taper to a silicamicrosphere whispering-gallery mode system. Phys. Rev. Lett. 85, 74. Cao, S., Chen, J., Damask, J.N., Doerr, C.R., Guiziou, L., Harvey, G., Hibino, Y., Li, H., Suzuki, S., Wu, K.-Y., Xie, P., 2004. Interleaver technology: comparisons and applications requirements. J. Lightwave Technol. 22 (1), 281–289. Cheng, X., Hong, J., Spring, A.M., Yokoyama, S., 2017. Fabrication of a high-Q factor ring resonator using LSCVD deposited Si3N4 film. Opt. Express 7 (7), 2182–2187. Chipman, R.A., 1994. Polarimetry. In: Bass, M. (Ed.), Handbook of Optics, second ed. vol. 2. McGrawHill, New York. Clayton, J.B., El, M.A., Freeman, L.J., Lucius, J., Miller, C.M., 1991. Tunable Optical Filter. US Patent #5,073,004,. Collett, E., 1992. Polarized Light: Fundamentals and Applications. Dekker. DeLong, K.W., Fittinghoff, D.N., Trebino, R., 1996. Practical issues in ultrashort-laser-pulse measurement using frequency-resolved optical gating. IEEE J. Quantum Electron. 32 (7), 1253–1264. Derickson, D., 1998. Fiber Optic Test and Measurement. Prentice Hall PTR. Diez, S., Ludwig, R., Schmidt, C., Feiste, U., Weber, H.G., 1999. 160-Gb/s optical sampling by gaintransparent four-wave mixing in a semiconductor optical amplifier. IEEE Photon. Technol. Lett. 11 (11), 1402–1404. Dong, P., Xie, C., Chen, L., Fontaine, N.K., Chen, Y.-K., 2012. Experimental demonstration of microring quadrature phase-shift keying modulators. Opt. Lett. 37 (7), 1178–1180. Doran, N.J., Wood, D., 1988. Nonlinear-optical loop mirror. Opt. Lett. 13, 56–58. Dorrer, C., 2006a. High-speed measurements for optical telecommunication systems. IEEE J. Sel. Top. Quantum Electron. 12 (4), 843–858. Dorrer, C., 2006b. Monitoring of optical signals from constellation diagrams measured with linear optical sampling. J. Lightwave Technol. 24 (1), 313–321. Dorrer, C., Kilper, D.C., Stuart, H.R., Raybon, G., Raymer, M.G., 2003. Linear optical sampling. IEEE Photon. Technol. Lett. 15 (12), 1746–1748. Epworth, R., 2005. 3 Fibre I and Q Coupler. US Patent #6,859,586,. Fan, X., White, I.M., Shopova, S.I., Zhu, H., Suter, J.D., Sun, Y., 2008. Sensitive optical biosensors for unlabeled targets: a review. Anal. Chim. Acta 620, 8–26. Fejer, M.M., Magel, G.A., Jundt, D.H., Byer, R.L., 1992. Quasi-phase-matched second harmonic generation: tuning and tolerances. IEEE J. Quantum Electron. 28 (11), 2631–2654. Fienup, J.R., 1982. Phase retrieval algorithms: a comparison. Appl. Opt. 21 (15), 2758–2769. Freundorfer, A.P., 1991. A coherent optical network analyzer. IEEE Photon. Technol. Lett. 3 (12), 1139–1142. Froggatt, M., Erdogan, T., Moore, J., Shenk, S., 1999. Optical frequency domain characterization (OFDC) of dispersion in optical fiber Bragg gratings. In: Bragg Gratings, Photosensitivity, and Poling in Glass Waveguides. OSA Technical Digest Series, Optical Society of America, Washington, DC. Paper FF2. Giles, C.R., 1997. Lightwave applications of fiber Bragg gratings. J. Lightwave Technol. 15 (8). Glombitza, U., Brinkmeyer, E., 1993. 
Coherent frequency domain reflectometry for characterization of single-mode integrated optical waveguides. J. Lightwave Technol. 11, 1377–1384. Gray, D.F., Smith, K.A., Dunning, F.B., 1986. Simple compact Fizeau wavemeter. Appl. Opt. 25 (8), 1339–1343. Green, P.E., 1991. Fiber Optic Networks. Prentice-Hall. Griffel, G., 2000. Synthesis of optical filters using ring resonator arrays. IEEE Photon. Technol. Lett. 12 (7), 810–812. Han, Y., Jalali, B., 2003. Differential photonic time-stretch analog-to-digital converter. In: CLEO 2003. Hernandez, G., 1986. Fabry-Perot Interferometers. Cambridge University Press.

293

294

Fiber optic measurement techniques

Huang, D., Swanson, E.A., Lin, C.P., Schuman, J.S., Stinson, W.G., Chang, W., Hee, M.R., Flotte, T., Gregory, K., Puliafito, C.A., Fujimoto, J.G., 1991. Optical coherence tomography. Science 254, 1178–1181. Huber, R., Wojtkowski, M., Taira, K., Fujimoto, J.G., Hsu, K., 2005. Amplified, frequency swept lasers for frequency domain reflectometry and OCT imaging: design and scaling principles. Opt. Express 13 (9), 3513–3528. Huber, R., Wojtkowski, M., Fujimoto, J.G., 2006. Fourier domain mode locking (FDML): a new laser operating regime and applications for optical coherence tomography. Opt. Express 14 (8), 3225–3237. Hui, R., 2004. Optical Domain Signal Analyzer. US Patent #6,697,159,. Jiang, X., Qavi, A.J., Huang, S.H., Yang, L., 2020. Whispering-gallery sensors. Matter 3, 371–392. Jundt, D.H., Magela, G.A., Fejer, M.M., Byer, R.L., 1991. Periodically poled LiNbO3 for high-efficiency second-harmonic generation. Appl. Phys. Lett. 59 (21), 2657–2659. Kawahito, S., et al., 2007. A CMOS time-of-flight range image sensor with gates-on-field-oxide structure. IEEE Sensors J. 7 (12), 1578–1586. Kersey, A.D., Marrone, M.J., Davis, M.A., 1991. Polarisation-insensitive fibre optic Michelson interferometer. Electron. Lett. 27 (6), 518–520. Kikuchi, K., Futami, F., Katoh, K., 1998. Highly sensitive and compact cross-correlator for measurement of picosecond pulse transmission characteristics at 1550 nm using two-photon absorption in Si avalanche photodiode. Electron. Lett. 34 (22), 2161–2162. Knierim, D.G., Lamb, J.S., 2016. Test and Measurement Instrument Including Asynchronous TimeInterleaved Digitizer Using Harmonic Mixing. US Patent #9306590B2. Li, J., Westlund, M., Sunnerud, H., Olsson, B.-E., Karlsson, M., Andrekson, P.A., 2004. 0.5-Tb/s eyediagram measurement by optical sampling using XPM-induced wavelength shifting in highly nonlinear fiber. IEEE Photon. Technol. Lett. 16 (2), 566–568. Loewen, E.G., Popov, E., 1997. Diffraction Gratings and Applications, first ed. CRC Press, ISBN: 0824799232. Luna Technologies, 2022. Fiber Optic Component Characterization. www.lunatechnologies.com. Madsen, C., et al., 2004. Integrated optical spectral polarimeter for signal monitoring and feedback to a PMD compensator. J. Opt. Netw. 3 (7), 490–500. Monchalin, J.P., Kelly, M.J., Thomas, J.E., Kurnit, N.A., Szoeke, A., Zernike, F., Lee, P.H., Javan, A., 1981. Accurate laser wavelength measurement with a precision two-beam scanning Michelson interferometer. Appl. Opt. 20, 736–757. Niclass, C., et al., 2005. Design and characterization of a CMOS 3-D image. IEEE J. Solid-State Circuits 40 (9), 1847–1854. Nogiwa, S., Ohta, H., Kawaguchi, Y., Endo, Y., 1999. Improvement of sensitivity in optical sampling system. Electron. Lett. 35 (11), 917–918. Oguma, M., Kitoh, T., Inoue, Y., Mizuno, T., Shibata, T., Kohtoku, M., Kibino, Y., 2004. Compact and low-loss interleave filter employing latticeform structure and silica-based waveguide. J. Lightwave Technol. 22 (3), 895–902. Orta, R., Savi, P., Tascone, R., Trinchero, D., 1995. Synthesis of Mu1 triple-ring-resonator filters for optical systems. IEEE Photon. Technol. Lett. 7 (12), 1447–1449. Pernice, W.H.P., Xiong, C., Tang, H.X., 2012. High Q micro-ring resonators fabricated from polycrystalline aluminum nitride films for near infrared and visible photonics. Opt. Express 20 (11), 12261–12269. Pierrottet, D.F., Amzajerdian, F., Petway, L., Barnes, B., Lockard, G., Rubio, M., 2008. Linear FMCW laser radar for precision range and vector velocity measurements. 
In: Proceedings of the Materials Research Society Symposium. vol. 1076. (1076-K04-06). Pietzsch, J., 1989. Scattering matrix analysis of 3  3 fiber couplers. J. Lightwave Technol. 7 (2), 303–307. Rabus, D.G., Hamacher, M., Troppenz, U., Heidrich, H., 2002. Optical filters based on ring resonators with integrated semiconductor optical amplifiers in GaInAsP-InP. IEEE J. Sel. Top. Quantum Electron. 8 (6), 1405–1411. Reid, D.T., Padgett, M., McGowan, C., Sleat, W., Sibbett, W., 1997. Light-emitting diodes as measurement devices for femtosecond laser pulses. Opt. Lett. 22, 233–235. Reiser, C., Lopert, R.B., 1988. Laser wavemeter with solid Fizeau wedge interferometer. Appl. Opt. 27 (17), 3656–3660. Sacher, W.D., Poon, J.K.S., 2009. Microring quadrature modulators. Opt. Lett. 34 (24), 3878–3880.

Basic mechanisms and instrumentation for optical measurement

Snyder, J.J., 1982. Laser wavelength meters. Laser Focus 18, 55–61. Standard Diffraction Gratings, 2022. https://www.newport.com/c/diffraction-gratings. Sun, Y., Fan, X., 2011. Optical ring resonators for biochemical and chemical sensing. Anal. Bioanal. Chem. 399 (1), 205–211. Takada, K., Yamada, H., Hibino, Y., Mitachi, S., 1995. Range extension in optical low coherence reflectometry achieved by using a pair of retroreflectors. Electron. Lett. 31 (18), 1565–1567. Trebino, R., DeLong, K.W., Fittinghoff, D.N., Sweetser, J.N., Krumb€ ugel, M.A., Richman, B.A., Kane, D.J., 1997. Measuring ultrashort laser pulses in the time-frequency domain using frequencyresolved optical gating. Rev. Sci. Instrum. 68 (9), 3277–3295. Van Wiggeren, G.D., Motamedi, A.R., Baney, D.M., 2003. Single-scan interferometric component analyzer. IEEE Photon. Technol. Lett. 15, 263–265. Vaughan, J.M., 1989. The Fabry-Perot Interferometer: History, Practice and Applications. Adam Hilger, Bristol, England. Vollmer, F., Yang, L., 2012. Label-free detection with high-Q microcavities: a review of biosensing mechanisms for integrated devices. Nano 1, 267–291. Wang, S.X., Weiner, A.M., 2006. A complete spectral Polarimeter design for Lightwave communication systems. J. Lightwave Technol. 24 (11), 3982–3991. Wang, X., et al., 2020. Oscilloscopic capture of greater-Than-100 GHz, ultra-low power optical waveforms enabled by integrated Electrooptic devices. J. Lightwave Technol. 38 (1), 166–173. Westlund, M., Sunnerud, H., Olsson, B.-E., Andrekson, P.A., 2004. Simple scheme for polarizationindependent all-optical sampling. IEEE Photon. Technol. Lett. 16 (9), 2108–2110. Wojtkowski, M., Srinivasan, V.J., Ko, T.H., Fujimoto, J.G., Kowalczyk, A., Duker, J.S., 2004. Ultrahighresolution, high-speed, Fourier domain optical coherence tomography and methods for dispersion compensation. Opt. Express 12 (11), 2404–2422. Xu, Q., Schmidt, B., Shakya, J., Lipson, M., 2006. Cascaded silicon micro-ring modulators for WDM optical interconnection. Opt. Express 14 (20), 9430–9435. Xu, Q., Manipatruni, S., Schmidt, B., Shakya, J., Lipson, M., 2007. 12.5 Gbit/s carrier-injection-based silicon micro-ring silicon modulators. Opt. Express 15 (2), 430–436. Yariv, A., Koumans, R.G.M.P., 1998. Time interleaved optical sampling for ultrahigh speed AID conversion. Electron. Lett. 34 (21), 2012–2013. Zhang, L., Yang, J.Y., Li, Y., Beausoleil, R.G., Willner, A.E., 2008. Optical Fiber Communication Conference (OFC 2008). Optical Society of America. Paper OWL5.

295

This page intentionally left blank

CHAPTER 3

Characterization of optical devices

3.1 Introduction
Optical devices are the building blocks of optical systems and networks. The performance and reliability of a complex optical network depend heavily on the quality of its optical components. As the capability requirements of optical systems and networks have increased, various new optical devices have been created with unprecedented quality and rich functionalities. As the complexity of optical systems increases, the requirements for qualification of optical components also become more and more stringent. Therefore, a good understanding of the various techniques used to characterize optical devices is critical for optical system and network research, development, implementation, and troubleshooting. This chapter is devoted to measurement and characterization techniques for fundamental optical devices. Sections 3.2 and 3.3 discuss the measurement of semiconductor lasers, including relative intensity noise, phase noise, intensity and phase modulation responses, as well as frequency chirp characterization. Frequency-dependent and non-Lorentzian phase noises are also discussed, and semiconductor laser-based optical frequency combs are used as examples to demonstrate a multi-heterodyne detection technique. Section 3.4 presents techniques for testing wideband optical receivers, in both the frequency domain and the time domain. The characterization of optical amplifiers is discussed in Section 3.5, which includes wavelength-dependent gain, noise, gain saturation, noise figure, and dynamic properties. Distributed Raman amplification in optical fiber systems is also included, with different pumping schemes. At the end of this chapter, Section 3.6 discusses the configurations and working principles of a few passive optical devices, including optical fiber couplers, Bragg grating filters, WDM multiplexers and demultiplexers, optical isolators, and circulators. These basic fiber-optic components are indispensable in building fiber-optic systems as well as optical testing apparatus.

3.2 Characterization of RIN, linewidth, and phase noise of semiconductor lasers
The semiconductor laser is a key device in an optical transmitter; it translates electrical signals into the optical domain. It is one of the most important components in lightwave communication systems, photonic sensors, and optical instrumentation. When operating above the threshold, the output optical power of a laser diode is linearly


proportional to the injection current, so the electrical signal can be converted into an optical signal through direct modulation of the injection current. However, direct modulation of a laser diode not only introduces optical intensity modulation but also creates a frequency modulation on the optical carrier, which is known as frequency chirp. Frequency chirp broadens the signal optical spectrum and introduces transmission performance degradation in high-speed optical systems due to the dispersive nature of the transmission media. To overcome this problem, external electro-optic modulators are often used in high-speed and long-distance optical systems. In this case, the laser diode operates in continuous wave mode while the external modulator encodes the electrical signal into the optical domain. Optical system performance, to a large extent, depends on the characteristics of the laser source. Important parameters such as optical power, wavelength, spectral linewidth, relative intensity noise, modulation response, and modulation chirp are all practical concerns in an optical transmitter. Some of these properties have relatively simple definitions and can be measured straightforwardly; for example, the optical power of a semiconductor laser can be directly measured by an optical power meter, and its wavelength can be measured by either a wavelength meter or an optical spectrum analyzer. Other properties of semiconductor lasers are defined rather non-intuitively, and their measurement often requires a good understanding of the underlying physical mechanisms and the limitations of the various measurement techniques. This section is devoted to the measurement and explanation of these properties in semiconductor lasers.

3.2.1 Measurement of relative intensity noise (RIN)
In semiconductor lasers operating above the threshold, although stimulated emission dominates the emission process, a small percentage of photons are still generated by spontaneous emission. The optical intensity noise in a semiconductor laser originates from these spontaneously emitted photons. As a result, when a semiconductor laser operates in CW mode, spontaneous emission makes both the carrier density and the photon density fluctuate around their equilibrium values (Agrawal and Dutta, 1986). In general, the intensity noise caused by spontaneous emission in a laser diode is wideband; however, this noise can be further amplified by the resonance of the relaxation oscillation in the laser cavity, which modifies the noise spectral density. As illustrated in Fig. 3.2.1, each spontaneous increase in the carrier density increases the optical gain of the laser medium and thus increases the photon density in the laser cavity. However, this increased photon density consumes more carriers in the cavity, creating gain saturation in the medium; as a result, the photon density tends to decrease due to the reduced optical gain. This, in turn, increases the carrier density because of the reduced saturation effect. This resonance process is strongest at a


Fig. 3.2.1 Explanation of a relaxation oscillation process in a semiconductor laser.

specific frequency ΩR, which is determined by the optical gain G, the differential gain dG/dN, and the photon density P in the laser cavity as

$$\Omega_R^2 = G \cdot P \cdot \frac{dG}{dN} \qquad (3.2.1)$$

Due to relaxation oscillation, the intensity noise of a semiconductor laser is a function of frequency. The normalized intensity noise spectral density can be fit by the following expression:

$$H(\Omega) \propto \left| \frac{\Omega_R^2 + B\Omega^2}{j\Omega\left(j\Omega + \gamma\right) + \Omega_R^2} \right|^2 \qquad (3.2.2)$$

where B and γ are damping parameters that depend on the specific laser structure and bias condition. Fig. 3.2.2 shows examples of the normalized intensity noise spectral density with

Fig. 3.2.2 Normalized intensity noise spectral density with 10 GHz relaxation oscillation frequency.


three different damping parameters. The relaxation oscillation frequency used in this figure is ΩR = 2π × 10 GHz. At frequencies much higher than the relaxation oscillation frequency, the dynamic coupling between the photon density and the carrier density is weak, and therefore the intensity noise becomes independent of frequency. In addition to relaxation oscillation, external optical feedback to a semiconductor laser may also change the spectral distribution of the intensity noise (Tkach and Chraplyvy, 1986; Gimlett and Cheung, 1989). Such feedback can be introduced by external reflectors, such as connectors and other optical devices, which form external cavities together with the output mirror of the semiconductor laser. The resonance in the external cavity may be strong enough to cause excessive carrier density fluctuation in the laser and result in strong optical power fluctuation. The distinct feature of optical feedback-induced intensity noise is that it has well-defined periodic resonance peaks in the frequency domain, with a period of Δf = c/(2nL), where L is the length of the external cavity, c is the speed of light, and n is the refractive index of the external cavity. Another source of optical intensity noise is frequency noise to intensity noise conversion. In an optical system, frequency instabilities of the laser source can be converted into intensity noise through a frequency-dependent transfer function as well as through chromatic dispersion. For example, Fabry-Perot cavities can be formed by multiple reflectors (e.g., connectors) in an optical system, which introduce frequency-dependent transmission characteristics. As illustrated in Fig. 3.2.3, when the laser frequency fluctuates, the transmission efficiency varies, thus causing intensity fluctuation. In addition, if the transmission system is dispersive, different frequency components arrive at the detector at different times; interference between them also causes frequency noise to intensity noise conversion (Wang and Petermann, 1992). In general, the optical intensity noise in an optical system is linearly proportional to the signal optical power; therefore, it is more convenient to normalize the intensity noise by the total optical power. Relative intensity noise (RIN) is defined as the ratio between the noise power spectral density and the total power. Fig. 3.2.4A shows the block diagram of a RIN measurement system, which consists of a wideband photodetector, an electrical preamplifier, and an RF spectrum analyzer.
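The frequency dependence of Eq. (3.2.2) can be illustrated with a short Python sketch. It is only a minimal numerical illustration in the spirit of Fig. 3.2.2; the damping values B and gamma below are arbitrary assumptions, not parameters of any particular laser.

import numpy as np

f = np.linspace(1e8, 30e9, 3000)           # analysis frequencies [Hz]
Omega = 2 * np.pi * f                       # angular frequency [rad/s]
Omega_R = 2 * np.pi * 10e9                  # 10-GHz relaxation oscillation, as in Fig. 3.2.2

def intensity_noise_psd_db(Omega, Omega_R, B, gamma):
    """Normalized |H(Omega)|^2 of Eq. (3.2.2), in dB relative to its low-frequency value."""
    H2 = np.abs((Omega_R**2 + B * Omega**2) /
                (1j * Omega * (1j * Omega + gamma) + Omega_R**2))**2
    return 10 * np.log10(H2 / H2[0])

for gamma in (2e9, 1e10, 5e10):             # assumed damping rates [rad/s]
    psd_db = intensity_noise_psd_db(Omega, Omega_R, B=0.5, gamma=gamma)
    print(f"gamma = {gamma:.1e} rad/s -> peak enhancement {psd_db.max():.1f} dB "
          f"near {f[np.argmax(psd_db)]/1e9:.1f} GHz")

The smaller the damping rate, the stronger the resonance peak near ΩR; well above the resonance the normalized spectral density flattens, consistent with the discussion above.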


Fig. 3.2.3 Illustration of frequency noise to intensity noise conversion through frequency-dependent transmission.


Fig. 3.2.4 System block diagram to measure laser RIN without (A) and with (B) intensity modulation of the optical signal.

To characterize the RIN, the laser diode under test is operated in continuous wave mode. Obviously, without intensity noise in the laser, one would only be able to see a DC component on the RF spectrum analyzer. Due to intensity noise, the optical power fluctuation introduces photocurrent fluctuation, and the spectral density of this photocurrent fluctuation is measured by the RF spectrum analyzer. Because of the square-law detection principle, the electrical power Pele generated at the output of a photodiode is proportional to the square of the received optical power Popt, that is, Pele ∝ Popt². Therefore, the RIN can be defined as the ratio between the noise power spectral density and the signal power in the electrical domain,

$$\mathrm{RIN}(\omega) = \mathcal{F}\left\{ \frac{\left\langle \left( P_{opt} - P_{opt,ave} \right)^2 \right\rangle}{P_{opt,ave}^2} \right\} \qquad (3.2.3)$$

where Popt,ave is the average optical power and $\mathcal{F}$ denotes the Fourier transformation. $R^2\langle (P_{opt} - P_{opt,ave})^2\rangle$ is the mean-square intensity fluctuation converted into the electrical domain, and its Fourier transform is the noise power spectral density in the electrical domain. RPopt is the signal photocurrent, and R²Popt² is the signal electrical power. Therefore, the definition of RIN can also be simply expressed as

$$\mathrm{RIN} = \frac{S_P(\omega)}{R^2 P_{opt,ave}^2} \qquad (3.2.4)$$

where SP(ω) is the electrical noise power spectral density measured on the RF spectrum analyzer. Obviously, the noise power spectral density SP(ω) increases with the square of the average optical power Popt,ave². RIN is a convenient way to characterize


optical signal quality, and the result is not affected by uncertainties in the optical attenuation of the system; therefore, absolute calibration of the photodetector responsivity is not required. The unit of RIN is [Hz⁻¹], or [dB/Hz] as a relative measure. Practically, RIN is a convenient parameter to use in optical system performance calculations. For example, in an analog system, if RIN is the only source of noise, the signal-to-noise ratio (in the electrical domain) can be easily determined by

$$\mathrm{SNR} = \frac{m^2}{2} \cdot \frac{1}{\displaystyle\int_0^B \mathrm{RIN}(f)\, df} \qquad (3.2.5)$$

where m is the modulation index and B is the receiver bandwidth. Good-quality semiconductor lasers usually have RIN levels lower than −155 dB/Hz, and it is usually challenging to accurately measure the RIN at such a low level. This requires the measurement system to have a much lower noise level. Considering thermal noise and quantum noise in the photodiode and the noise of the electrical preamplifier, the measured RIN can be expressed as

$$\mathrm{RIN}_{measure} = \frac{S_p + \sigma_{shot}^2 + \sigma_{th}^2 + \sigma_{amp}^2}{R^2 P_{opt}^2} = \mathrm{RIN}_{laser} + \mathrm{RIN}_{error} \qquad (3.2.6)$$

where

$$\mathrm{RIN}_{error} = \frac{2q}{R P_{opt}} + \frac{4kT/R_L}{R^2 P_{opt}^2} + \frac{kT\left( F_A G_A + F_{SA} - 1 \right)/G_A}{R^2 P_{opt}^2} \qquad (3.2.7)$$

is the measurement error due to instrument noise; σ²shot = 2qRPopt is the shot noise power spectral density, σ²th = 4kT/RL is the thermal noise power spectral density at the photodiode, and σ²amp = kT(FAGA + FSA − 1)/GA is the equivalent noise power spectral density introduced by the electrical preamplifier and the RF spectrum analyzer. FA is the noise figure of the electrical preamplifier, FSA is the noise figure of the spectrum analyzer front end, and GA is the gain of the preamplifier. Fig. 3.2.5 shows the RIN measurement error estimated with R = 0.75 A/W, RL = 50 Ω, T = 300 K, FA = FSA = 3 dB, and GA = 30 dB. The measurement error decreases with increasing signal optical power, and eventually the quantum noise becomes the dominant noise in the high-power region. In this particular case, in order to reach a −180 dB/Hz error level (which is considered much lower than the actual laser RIN of −155 dB/Hz), the signal optical power has to be higher than 0 dBm. As RIN is the ratio between the noise power spectral density and the average power, the accuracy of the average power measurement is critically important. Practically, the noise power spectral density can be measured by a high-speed photodiode followed by an RF preamplifier and a spectrum analyzer, as shown in Fig. 3.2.4A; it is quite challenging to


Fig. 3.2.5 Error in RIN versus input optical signal power.

ensure that the frequency response of this setup is flat down to zero frequency for the average power measurement. Although the average signal optical power can be easily measured by a calibrated optical power meter, the responsivity of the high-speed photodiode and the gain of the RF amplifier used in the noise measurement are not easy to calibrate. One way to overcome this calibration problem is to apply an intensity modulation to the optical signal, as illustrated in Fig. 3.2.4B. In this configuration, the laser source under test is intensity modulated by a sine wave at a frequency fm, so that the signal optical power reaching the photodiode is Ps(t) = (P0/2)[1 + m cos(2πfmt)], where P0 is the laser power and 0 ≤ m ≤ 1 is the modulation index. After photodetection and RF amplification, the AC component of the electrical signal measured by the RF spectrum analyzer is v(t) = η(P0/2)m cos(2πfmt), where η is the transimpedance gain of the receiver consisting of the photodiode and the electrical amplifier. With 100% modulation index (m = 1), the peak of the modulation sideband at fm measured by the RF spectrum analyzer is equivalent to 50% of the laser average power measured by this system. As the noise power spectral density measured by the same system experiences the same transimpedance gain η, the RIN can be obtained without requiring a DC measurement. The modulation frequency fm can be in the MHz region, just high enough to avoid the uncertain low-frequency response of the measurement system. The following example provides the detailed steps of a RIN measurement, assuming a Mach-Zehnder (MZ) intensity modulator is used.
Step 1: Bias the MZ intensity modulator at the quadrature point, and modulate the optical signal at 100% modulation index. This can be confirmed by a time-domain waveform measurement using an oscilloscope. To ensure that the modulator is biased at the quadrature point, the measured waveform needs to be symmetric. To ensure 100% modulation index, the waveform has to reach the clipping point on both sides (top and bottom).


Fig. 3.2.6 (A) 50 MHz modulation peak with an amplitude of −21.6 dBm, (B) measured RF noise power spectral density (PSD) and instrument noise floor with 1 MHz resolution bandwidth, (C) laser RIN after calibration.

Step 2: With 100% intensity modulation, 50% of the optical power is in the modulation sideband, which can be seen on the RF spectrum analyzer. The power level of the spectral peak at f = fm is denoted as PRF0, with the unit of [W]. This identifies the average optical power level on the RF spectrum analyzer, which is required as the denominator in the RIN calculation. This is shown in Fig. 3.2.6A, where fm = 50 MHz and PRF0 = −21.6 dBm.
Step 3: Switch off the modulation (the modulator is still biased at the quadrature point) so that 50% of the optical power from the source passes through the modulator. The RF spectral density measured by the spectrum analyzer is PRF(f), with the unit of [W/RB], where RB is the resolution bandwidth. This is shown in Fig. 3.2.6B, where the instrument noise floor Pnoise(f), including contributions from photodetection noise and amplifier noise, is also shown; this floor needs to be removed in the RIN calculation.
Step 4: The actual RIN can be calculated by

$$\mathrm{RIN}(f) = \frac{P_{RF}(f) - P_{noise}(f)}{P_{RF0} \cdot \mathrm{RB}}$$

which has the unit of [Hz⁻¹], or [dB/Hz]. Fig. 3.2.6C shows the calibrated RIN of a diode laser, which has a resonance peak at around 3.5 GHz. This resonance peak originates from the classic relaxation oscillation, and the oscillation frequency scales with the square root of the optical power. In this example, due to the limitation of the instrumentation noise, the minimum measurable RIN level is on the order of −140 dB/Hz.

Example 3.1
Consider the RIN measurement system shown in Fig. 3.2.4A. The resolution bandwidth of the RF spectrum analyzer is set to 3 MHz. When the laser source is DC biased at an operation current of IB = 50 mA, the measured noise level on the spectrum analyzer is −80 dBm at 1 GHz. Then the injection current of the laser is directly modulated by a sinusoid at fm = 1 MHz, which swings from threshold to 50 mA. A very narrow peak at 1 MHz is measured on the spectrum analyzer with an amplitude of −5 dBm. Find the RIN of this laser at 1 GHz.


Solution: Typically, the power-level reading on the RF spectrum analyzer is the RF power within one resolution bandwidth. Therefore, the reading of −80 dBm on the spectrum analyzer in this case corresponds to a power spectral density of Sp = −80 dBm/3 MHz = −144.78 dBm/Hz. When the laser is analog modulated with index m, the root-mean-square electrical power measured on the spectrum analyzer is −5 dBm. Since this is a sinusoidal modulation, the spectral bandwidth at the modulation frequency is much narrower than the 3 MHz resolution bandwidth. Therefore, the equivalent total electrical signal power converted from the optical signal is approximately −2 dBm (at a modulation index of m = 1, the tone carries one half of the effective signal power). Therefore, the RIN of this laser at 50 mA bias is RIN = −144.78 − (−2) = −142.78 dB/Hz.
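The arithmetic of Example 3.1 can be restated in a few lines of Python; this is only a numerical check of the steps worked out above, with the same readings used as inputs.

import math

RB = 3e6                       # spectrum analyzer resolution bandwidth [Hz]
noise_reading_dBm = -80.0      # noise level at 1 GHz, per resolution bandwidth
tone_reading_dBm = -5.0        # narrow modulation peak at 1 MHz

# Noise power spectral density in dBm/Hz
Sp_dBm_per_Hz = noise_reading_dBm - 10 * math.log10(RB)

# With 100% modulation index, the tone carries half of the equivalent signal
# electrical power, so the signal reference level is 3 dB above the tone reading.
signal_dBm = tone_reading_dBm + 3.0

RIN_dB_per_Hz = Sp_dBm_per_Hz - signal_dBm
print(f"Sp = {Sp_dBm_per_Hz:.2f} dBm/Hz, signal = {signal_dBm:.1f} dBm, "
      f"RIN = {RIN_dB_per_Hz:.2f} dB/Hz")
# Expected output: Sp = -144.77 dBm/Hz, signal = -2.0 dBm, RIN ≈ -142.8 dB/Hz

The same bookkeeping applies to the Step 1–4 calibration of Fig. 3.2.6, with PRF0 taking the place of the modulation-tone reading and PRF(f) − Pnoise(f) taking the place of the noise reading.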

3.2.2 Measurement of laser linewidth and phase noise
Phase noise is an important issue in semiconductor lasers, especially when they are used in coherent optical systems where the optical phase information is utilized. Phase noise in semiconductor lasers originates from spontaneous emission. A spontaneous emission event not only generates variation in the photon density but also produces phase variation (Agrawal and Dutta, 1986). In addition, the photon density-dependent refractive index in the laser cavity significantly enhances the phase noise, which makes the linewidth of semiconductor lasers much wider than that of other types of solid-state lasers (Henry, 1986). Assume that the optical field in a laser cavity is

$$E(t) = \sqrt{P(t)}\, \exp\{ j[\omega_0 t + \varphi(t)] \} \qquad (3.2.8)$$

where P(t) is the photon density inside the laser cavity, ω0 is the central optical frequency, and φ(t) is the time-varying part of the optical phase. A differential rate equation that describes the phase variation in the time domain is (Henry, 1983)

$$\frac{d\varphi(t)}{dt} = F_\varphi(t) - \frac{\alpha_{lw}}{2P} F_P(t) \qquad (3.2.9)$$

where αlw is the linewidth enhancement factor of the semiconductor laser, which accounts for the coupling between intensity and phase variations. Fφ(t) and FP(t) are Langevin noise terms for phase and intensity. They are random, and their statistical measures are

$$\left\langle F_P(t)^2 \right\rangle = 2 R_{sp} P \qquad (3.2.10)$$

and

$$\left\langle F_\varphi(t)^2 \right\rangle = \frac{R_{sp}}{2P} \qquad (3.2.11)$$

where Rsp is the spontaneous emission factor of the laser.


Although we use Eq. (3.2.9) directly without derivation, the physical meaning of the two terms on its right-hand side is clear. The first term is the direct contribution of spontaneous emission to the phase variation: each spontaneous emission event randomly adds a photon, which changes the optical phase as illustrated in Fig. 1.2.13 in Chapter 1. The second term in Eq. (3.2.9) reflects the fact that each spontaneous emission event also changes the carrier density, and this carrier density variation, in turn, changes the refractive index of the material. The index change then alters the resonance condition of the laser cavity and thus introduces a phase change of the emitted optical field. Eq. (3.2.9) can be solved by integration:

$$\varphi(t) = \int_0^t \frac{d\varphi(t')}{dt'}\, dt' = \int_0^t F_\varphi(t')\, dt' - \frac{\alpha_{lw}}{2P}\int_0^t F_P(t')\, dt' \qquad (3.2.12)$$

Taking an ensemble average, the mean square phase fluctuation can be expressed as

$$\left\langle \varphi(t)^2 \right\rangle = \left\langle \left[ \int_0^t F_\varphi(t')\, dt' - \frac{\alpha_{lw}}{2P}\int_0^t F_P(t')\, dt' \right]^2 \right\rangle = \int_0^t \left\langle F_\varphi(t')^2 \right\rangle dt' + \left(\frac{\alpha_{lw}}{2P}\right)^2 \int_0^t \left\langle F_P(t')^2 \right\rangle dt'$$
$$= \frac{R_{sp}}{2P}|t| + \left(\frac{\alpha_{lw}}{2P}\right)^2 2R_{sp}P\,|t| = \frac{R_{sp}}{2P}\left(1 + \alpha_{lw}^2\right)|t| \qquad (3.2.13)$$

Since φ(t) is a Gaussian process,

$$\left\langle e^{j\varphi(t)} \right\rangle = e^{-\frac{1}{2}\left\langle \varphi(t)^2 \right\rangle} \qquad (3.2.14)$$

Then the optical power spectral density is

$$S_{op}(\omega) = \int_{-\infty}^{\infty} \left\langle E(t)E^*(0) \right\rangle e^{-j\omega t}\, dt = P \int_{-\infty}^{\infty} e^{-j(\omega - \omega_0)t}\, e^{-\frac{1}{2}\left\langle \varphi(t)^2 \right\rangle}\, dt \qquad (3.2.15)$$

The normalized optical power spectral density has a Lorentzian shape, which can be found as

$$S_{op}(\omega) = \frac{\dfrac{R_{sp}}{4P}\left(1 + \alpha_{lw}^2\right)}{\left[\dfrac{R_{sp}}{4P}\left(1 + \alpha_{lw}^2\right)\right]^2 + (\omega - \omega_0)^2}$$

or

$$S_{op}(f) = \frac{\Delta v}{2\pi\left[ f^2 + \left(\Delta v/2\right)^2 \right]} \qquad (3.2.16)$$

where the FWHM linewidth of this Lorentzian-shaped spectrum is (Henry, 1983)

$$\Delta v = \frac{\Delta\omega}{2\pi} = \frac{R_{sp}}{4\pi P}\left(1 + \alpha_{lw}^2\right) \qquad (3.2.17)$$

This formula is commonly referred to as the modified Schawlow-Townes formula because of the introduction of the linewidth enhancement factor αlw. The spectral linewidth of a non-modulated laser is a good indicator of the phase noise, which is a measure of the coherence of the lightwave signal. Other measures such as coherence time and coherence length can be related to the linewidth. Coherence time is defined as

$$t_{coh} = \frac{1}{\pi \Delta\nu} \qquad (3.2.18)$$

It is the time over which a lightwave may still be considered coherent; in other words, it is the time interval within which the phase of a lightwave is still predictable. Similarly, coherence length is defined by

$$L_{coh} = t_{coh} v_g = \frac{v_g}{\pi \Delta\nu} \qquad (3.2.19)$$

It is the propagation distance over which a lightwave signal maintains its coherence, where vg is the group velocity of the optical signal. As a simple example, for a lightwave signal with 1 MHz linewidth, the coherence time is approximately 320 ns and the coherence length is about 95 m in free space. In principle, the linewidth of an optical signal can be directly measured by an optical spectrum analyzer. However, the finest resolution of a conventional OSA is on the order of 0.1 nm, which is approximately 12 GHz in the 1550 nm wavelength window, whereas the linewidth of a commercial single-longitudinal-mode laser diode (such as a DFB or DBR laser) is on the order of 1–100 MHz. In this section, we discuss two techniques that are commonly used to measure the linewidth of semiconductor lasers: coherent detection and self-homodyne detection.
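Eqs. (3.2.18) and (3.2.19) lend themselves to a one-line calculation; the short Python sketch below simply reproduces the 1-MHz example quoted above (the function name is a convenience chosen here, not part of any standard library).

import math

c = 299792458.0                 # free-space speed of light [m/s]

def coherence(delta_nu_hz, v_g=c):
    """Return (t_coh, L_coh) for a given FWHM linewidth and group velocity, Eqs. (3.2.18)-(3.2.19)."""
    t_coh = 1.0 / (math.pi * delta_nu_hz)
    return t_coh, t_coh * v_g

t_coh, L_coh = coherence(1e6)   # the 1-MHz linewidth example from the text
print(f"t_coh = {t_coh*1e9:.0f} ns, L_coh = {L_coh:.0f} m (free space)")
# A ~10 MHz DFB laser, by the same relation, has a free-space coherence length of only ~10 m.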


Fig. 3.2.7 Self-homodyne method to measure the linewidth of an optical signal.


the optical signal under test mixes with a delayed version of itself in a photodetector. Unlike the Mach-Zehnder interferometers discussed in Chapter 2, where the differential delay has to be much shorter than the coherence length of the optical signal to maintain mutual coherence, the differential delay in a self-homodyne linewidth measurement setup needs to be much longer than the coherence length. We can use the same expression as Eq. (3.2.8) for the input signal optical field,

$$E(t) = \sqrt{P}\exp\left[j\left(\omega t + \varphi(t)\right)\right]$$

After the Mach-Zehnder interferometer, the composite optical field is

$$E_T(t) = A_1 \exp\{ j[\omega(t-\tau_1) + \varphi(t-\tau_1)] \} + A_2 \exp\{ j[\omega(t-\tau_2) + \varphi(t-\tau_2)] \} \qquad (3.2.20)$$

where τ1 and τ2 are the propagation delays of the two interferometer arms, and A1 and A2 are the amplitudes of the fields emerging from these two arms. In a Mach-Zehnder interferometer, the differential delay Δτ = |τ2 − τ1| is an important parameter. For applications such as filters, modulators, and interleavers, which rely on coherent interference between signals carried by the two arms, the interferometer has to operate in the coherent regime with Δτ much shorter than the signal coherence length (Δτ ≪ tcoh). For self-homodyne linewidth measurement, on the other hand, the interferometer has to operate in the incoherent regime with a differential delay Δτ much longer than the signal coherence length (Δτ ≫ tcoh). In this case, the two terms on the right-hand side of Eq. (3.2.20) are not coherently related. Since the phase noise has white Gaussian statistics, with Δτ ≫ tcoh, Δφ(t) = φ(t) − φ(t − Δτ) is also a stationary process with statistics independent of time t, and thus Eq. (3.2.20) can be written as

$$E_T(t) = A_1 \exp\{ j[\omega t + \varphi(t)] \} + A_2 \exp\{ j[\omega t - \omega\Delta\tau + \varphi(t-\Delta\tau)] \} \qquad (3.2.21)$$

The two terms on the right-hand side of Eq. (3.2.21) are mutually incoherent, equivalent to the contributions of two lasers with identical central frequency and the same phase noise characteristics, and ωΔτ is a constant phase. Upon photodetection, which mixes the two terms of Eq. (3.2.21) in a photodiode, the normalized power spectral density of the photocurrent is the auto-convolution of the optical signal power spectral density Sp,s(f),

$$S_{IF}(f) = S_{p,s}(f) \otimes S_{p,s}(f) = \frac{1}{1 + \left( \dfrac{f}{\Delta v_{IF}/2} \right)^2} \qquad (3.2.22)$$

where ΔvIF is the FWHM spectral width measured by the RF spectrum analyzer. In this homodyne detection, the intermediate frequency is fIF = 0, and the measured RF spectral linewidth is twice the actual optical signal spectral linewidth Δv due to the self-mixing process: ΔvIF = 2Δv. Since the center of the RF peak is at zero frequency and the negative part of the spectrum is folded onto the positive side, only


half of the spectrum can be seen on the RF spectrum analyzer. Therefore, the width of this single-sided spectrum is equal to the linewidth of the optical signal. Self-homodyne linewidth measurement is simpler than coherent detection because it does not require a tunable local oscillator; however, it has a few disadvantages. First, for the interferometer to work in the incoherent regime, the length difference between the two arms has to be large enough so that Δτ ≫ tcoh and Eq. (3.2.22) is valid. This requirement may be difficult to satisfy when the linewidth of the light source is very narrow. For example, for a light source with a linewidth of Δν = 10 kHz, the coherence length is approximately Lcoh = c/(πΔν) ≈ 10 km in air. To ensure an accurate measurement of this linewidth, the length of the delay line in one of the two interferometer arms has to be much longer than 10 km, even in fiber. Another problem of self-homodyne measurement is that the central RF frequency is at zero, whereas most RF spectrum analyzers have high noise levels in this very low-frequency region. Laser intensity noise (usually strong at low frequencies, such as 1/f noise) may also significantly affect the measurement accuracy. To improve the performance of self-homodyne measurement, it would be desirable to move the intermediate frequency away from DC and let fIF > Δv to avoid the accuracy concerns in the low-frequency region. This leads to the use of self-heterodyne detection. Fig. 3.2.8 shows the setup of the self-heterodyne linewidth measurement technique, which is similar to the self-homodyne setup except that an optical frequency shifter is used in one of the two interferometer arms. As a result, the frequency of the optical signal is shifted by fIF in one arm, whereas in the other arm the optical frequency is not changed. The mixing of the optical signal with its frequency-shifted version in the photodiode creates an intermediate frequency at fIF. Therefore, the normalized RF spectral density in the electrical domain will be

$$S_{IF}(f) = S_{p,s}(f - f_{IF}) \otimes S_{p,s}(f) = \frac{1}{1 + \left( \dfrac{f - f_{IF}}{\Delta v_{IF}/2} \right)^2} \qquad (3.2.23)$$

Again, ΔvIF is the FWHM width of the RF spectral peak, and ΔvIF = 2Δv. The frequency shift is typically on the order of a few hundred megahertz, which is much higher than the typical linewidth of a semiconductor laser and is also sufficient to move


Fig. 3.2.8 Self-heterodyne method to measure the linewidth of an optical signal.


the IF spectrum away from the noisy low-frequency region (Okoshi et al., 1980; Horak and Loh, 2006). It is worthwhile to point out that slow laser frequency variations within the differential delay time can make the intermediate frequency unstable. This fIF fluctuation will affect the accuracy of the linewidth measurement (often causing an overestimate of the linewidth). Since the spectral shape of the laser line is known to be Lorentzian, as shown in Eq. (3.2.23), one can measure the linewidth at the −20 dB point and then calculate the linewidth at the −3 dB point using a Lorentzian fit. This may significantly increase the measurement tolerance to noise and thus improve the accuracy.

Example 3.2

Assume the full RF spectral width measured at the −20 dB point is ΔvIF,−20dB = 1 GHz in a delayed self-heterodyne measurement of a laser diode. What is the 3-dB linewidth of this laser?
Solution: Assuming a Lorentzian line shape, the RF spectrum after coherent detection is

$$S_p(f) = \frac{1}{1 + \left( \dfrac{f - f_0}{\Delta v_{IF}/2} \right)^2}$$

where ΔvIF is the 3-dB (FWHM) IF spectral width. At the −20 dB point of the IF spectrum, with f = f−20dB, the Lorentzian shape gives

$$1 + \left( \frac{f_{-20dB} - f_0}{\Delta v_{IF}/2} \right)^2 = 100$$

where 2(f−20dB − f0) = ΔvIF,−20dB. Therefore,

$$\Delta v_{IF} = \frac{\Delta v_{IF,-20dB}}{\sqrt{99}}$$

For this problem, the full width at the −20 dB point is ΔvIF,−20dB = 1 GHz, that is, 2(f−20dB − f0) = 1 GHz, so we find ΔvIF ≈ 100 MHz.

ΔvIF is the FWHM (−3 dB) linewidth of the IF power spectral density, and the 3-dB linewidth of the optical signal is therefore approximately Δv ≈ 50 MHz. The most popular frequency shifter used for self-heterodyne detection is the acousto-optic frequency modulator (AOFM). The frequency shift in an AOFM is introduced by the interaction between the lightwave signal and a traveling-wave RF signal. An oversimplified explanation is that the frequency of the lightwave signal is shifted, through the Doppler effect, by the moving grating created by the traveling RF wave, as illustrated in Fig. 3.2.9. In contrast to a conventional frequency modulator, which typically creates two modulation sidebands, an AOFM shifts the signal optical frequency in only one direction. Another unique advantage of an AOFM is its polarization independence, which is desirable for practical linewidth measurements in a fiber-optic system.
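The −20 dB to −3 dB conversion used in Example 3.2 is easily scripted; the Python sketch below is only a bookkeeping aid for delayed self-heterodyne data (the function name is a local convenience, not a standard routine).

import math

def linewidth_from_20dB_width(width_20dB_hz):
    """Return (IF 3-dB width, optical 3-dB linewidth) from the full -20 dB IF width."""
    dv_if = width_20dB_hz / math.sqrt(99.0)   # Lorentzian: full -20 dB width = sqrt(99) * FWHM
    return dv_if, dv_if / 2.0                 # self-heterodyne mixing doubles the linewidth

dv_if, dv = linewidth_from_20dB_width(1e9)    # Example 3.2: 1 GHz full width at -20 dB
print(f"IF FWHM ~ {dv_if/1e6:.0f} MHz, optical linewidth ~ {dv/1e6:.0f} MHz")

Working from the −20 dB points rather than the −3 dB points makes the fit far less sensitive to the noise riding on the top of the measured IF peak.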



Fig. 3.2.9 Illustration of the operating principle of an acousto-optic frequency shifter.

Fig. 3.2.10 Photographs of RF spectrum analyzer screen for delayed self-homodyne (A) and delayed self-heterodyne (B) linewidth measurements of the same laser diode.

Fig. 3.2.10A shows an example photograph of the RF spectrum analyzer screen for a delayed self-homodyne measurement of a DFB laser diode. After a Lorentzian fit, the curve drops to −3 dB with respect to the peak at approximately 2.4 MHz. According to the discussion above, this single-sided Lorentzian spectral width is equal to the laser spectral linewidth, that is, Δv ≈ 5.2 MHz. Fig. 3.2.10B shows the linewidth measurement result of the same laser diode obtained with the delayed self-heterodyne method. An acousto-optic frequency modulator (AOFM) is used in one of the two interferometer arms to create an intermediate frequency fIF = 80 MHz. The spectrum shown in Fig. 3.2.10B has a classic Lorentzian shape with a central frequency of 80 MHz and an FWHM spectral width of approximately 2.1 MHz. Further, the spectral width at the −20 dB point is measured to be about 34 MHz. Based on Example 3.2, the 3-dB width ΔvIF is found to be approximately 7 MHz, and the laser spectral linewidth should be Δv ≈ 3.4 MHz. The major reason for the less accurate result of the self-homodyne method is the uncertainty of the RF spectrum analyzer in the very low frequency region.



Fig. 3.2.11 Linewidth measurement using coherent envelope detection.

It is worthwhile to reiterate that in linewidth measurements using the delayed self-homodyne or self-heterodyne techniques, the differential delay between the two interferometer arms has to be much longer than the coherence length of the laser source under test. This ensures incoherent mixing between the optical signal and its delayed version.

3.2.2.2 Coherent envelope detection and complex optical field detection
For the coherent envelope detection receiver shown in Fig. 3.2.11, the incoming lightwave signal mixes with an optical local oscillator (LO) in a photodiode, which downshifts the signal from an optical frequency to an intermediate frequency (IF). Since an RF spectrum analyzer can have excellent spectral resolution, the detailed spectral shape of the frequency-downshifted IF signal can be precisely measured (Nazarathy et al., 1989). Similar to Eq. (2.9.6) in Chapter 2, if the splitting ratio of the fiber directional coupler is 50%, after the input optical signal mixes with the local oscillator in the photodiode, the photocurrent is

$$I(t) = R|E_1(t)|^2 = \frac{R}{2}\left[ |E_s(t)|^2 + |E_{LO}|^2 + E_s(t)E_{LO}^* + E_s^*(t)E_{LO} \right] \qquad (3.2.24)$$

where Es and ELO are the complex optical fields of the input lightwave signal and the local oscillator, respectively. Neglecting the direct detection contributions, the useful part of the photocurrent, which produces the IF component, is the cross term

$$i(t) = \frac{R}{2}\left[ E_s(t)E_{LO}^*(t) + E_s^*(t)E_{LO}(t) \right] \qquad (3.2.25)$$

This time-domain multiplication between the signal and the local oscillator is equivalent to a convolution between the two optical spectra in the frequency domain. Therefore, the IF power spectral density measured by the RF spectrum analyzer is

$$S_{IF}(f) \propto S_{p,s}(f) \otimes S_{p,LO}(f) \qquad (3.2.26)$$

where Sp,s and Sp,LO are the power spectral densities of the input lightwave signal and the local oscillator, respectively. Suppose these two power spectral densities are both Lorentzian,

$$S_{p,s}(f) = \frac{1}{1 + \left( \dfrac{f - f_{s0}}{\Delta v/2} \right)^2} \qquad (3.2.27)$$

and

$$S_{p,LO}(f) = \frac{1}{1 + \left( \dfrac{f - f_{LO0}}{\Delta v_{LO}/2} \right)^2} \qquad (3.2.28)$$

where Δv and ΔvLO are the FWHM spectral linewidths, and fs0 and fLO0 are the central frequencies of the signal and the local oscillator, respectively. The normalized IF power spectral density measured by the RF spectrum analyzer is then

$$S_{IF}(f) = \frac{1}{1 + \left( \dfrac{f - f_{IF}}{(\Delta v + \Delta v_{LO})/2} \right)^2} \qquad (3.2.29)$$

where fIF = |fs0 − fLO0| is the central frequency of the heterodyne IF signal. Obviously, the measured IF signal linewidth in the electrical domain is the sum of the linewidths of the incoming lightwave signal and the local oscillator,

$$\Delta v_{IF} = \Delta v + \Delta v_{LO} \qquad (3.2.30)$$
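The additivity of the two Lorentzian widths can be checked numerically. The Python sketch below (an illustration only; the 10 MHz and 100 kHz linewidths are simply the values used in the example of Fig. 3.2.12) convolves the spectra of Eqs. (3.2.27) and (3.2.28) and measures the FWHM of the result.

import numpy as np

def lorentzian(f, f0, fwhm):
    return 1.0 / (1.0 + ((f - f0) / (fwhm / 2.0))**2)

f = np.arange(-250e6, 250e6, 0.05e6)          # baseband frequency grid [Hz]
S_sig = lorentzian(f, 0.0, 10e6)              # signal: 10 MHz linewidth
S_lo = lorentzian(f, 0.0, 0.1e6)              # local oscillator: 100 kHz linewidth

S_if = np.convolve(S_sig, S_lo, mode="same")  # IF spectrum (up to the shift to f_IF)
S_if /= S_if.max()
fwhm_if = np.ptp(f[S_if >= 0.5])              # width of the half-maximum region
print(f"IF FWHM ~ {fwhm_if/1e6:.1f} MHz (Eq. (3.2.30) predicts ~10.1 MHz)")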


Fig. 3.2.12 illustrates an example of frequency translation from optical domain to RF domain by coherent heterodyne detection, as well as the linewidth relationship among the signal, the local oscillator, and the IF beating note. In this case, the central frequencies


Fig. 3.2.12 Frequency translation and linewidth relations in coherent detection.


of the optical signal and the local oscillator are 193,500 GHz (1550.388 nm) and 193,501 GHz (1550.396 nm), and their linewidths are 10 MHz and 100 kHz, respectively. As a result of coherent detection, the central RF frequency is 1 GHz and the RF linewidth is 10.1 MHz. Although the operating principle of linewidth measurement using coherent heterodyne detection is straightforward, a few practical issues need to be considered in the measurement. First, the linewidth of the local oscillator has to be narrow enough. In fact, ΔvLO ≪ Δv is usually required to accurately measure the linewidth of an optical signal. Second, the wavelength of the local oscillator has to be continuously tunable to be able to translate the coherent beat note into the measurable frequency range of an RF spectrum analyzer. Generally, grating-based external-cavity lasers can be used for this purpose, providing both narrow spectral linewidth and a wide continuous tuning range. Third, the polarization states of the signal and the local oscillator have to be aligned to maximize the coherent detection efficiency. This is usually accomplished with a polarization controller. Finally, slow frequency drifts of both the signal and the local oscillator can potentially make the intermediate frequency unstable. This fIF fluctuation will affect the accuracy of the linewidth measurement (often causing an overestimation of the linewidth). Since the spectral shape of the laser line is known to be Lorentzian, as shown in Eq. (3.2.29), one can also measure the linewidth at the −20 dB point, as discussed in Example 3.2, and then calculate the linewidth at the −3 dB point using a Lorentzian fit. This may significantly increase the measurement tolerance to noise and thus improve the accuracy. Coherent envelope detection with the setup shown in Fig. 3.2.11 is the simplest form of coherent detection, which detects a real-valued photocurrent proportional to the amplitude of the signal optical field. An RF spectrum analyzer can then be used to measure the IF power spectral density and to evaluate the spectral linewidth. A more sophisticated coherent detection setup, which uses phase diversity detection as discussed in Chapter 2, can provide the capability of complex optical field detection. Fig. 3.2.13 shows the block diagram of a coherent receiver for complex optical field detection based on phase diversity. A low phase noise tunable laser is used as the local oscillator (LO) with the optical frequency tuned to match the laser under test for

Fig. 3.2.13 Block diagram for complex optical field detection based on a phase-diversity coherent homodyne receiver. BPD, balanced photodiodes; TIA, transimpedance amplifier; ADC, analog-to-digital converter; PC, polarization controller.


homodyne detection. A 90° optical hybrid coupler separates the in-phase (I) and quadrature (Q) components of the optical signal and sends them to two balanced photodetectors, as discussed in Section 2.9, to create two independent photocurrents. These photocurrents are then amplified by two transimpedance amplifiers and translated into voltage signals vI(t) and vQ(t) before being sampled into the digital domain for signal processing. The voltage signals are

$$v_I(t) = G_{TIA} R A_s(t) A_{LO}(t) \cos\left[ \Delta\varphi(t) - \frac{\pi}{4} \right] \qquad (3.2.31)$$

$$v_Q(t) = G_{TIA} R A_s(t) A_{LO}(t) \sin\left[ \Delta\varphi(t) - \frac{\pi}{4} \right] \qquad (3.2.32)$$

where As(t) and ALO(t) are the real amplitudes of the incoming signal and the LO, respectively, Δφ(t) = φs(t) − φLO(t) with φs(t) and φLO(t) the optical phases of the signal and the LO, R is the photodetector responsivity, and GTIA is the transimpedance gain of the electrical amplifier. Further assume that the LO is ideal, with both amplitude and phase noise negligible, so that ALO(t) = ALO, φLO(t) = 0, and Δφ(t) = φs(t). Then the two components can be written as

$$v_I(t) = G_{TIA} R A_{LO} A_s(t) \cos\left[ \varphi_s(t) - \frac{\pi}{4} \right] = G_{TIA} R A_{LO}\, \mathrm{Re}\left[ E_s(t) e^{-j\pi/4} \right] \qquad (3.2.33)$$

$$v_Q(t) = G_{TIA} R A_{LO} A_s(t) \sin\left[ \varphi_s(t) - \frac{\pi}{4} \right] = G_{TIA} R A_{LO}\, \mathrm{Im}\left[ E_s(t) e^{-j\pi/4} \right] \qquad (3.2.34)$$

where Re(x) and Im(x) represent the real and imaginary parts of x, respectively, and Es(t) is the signal complex optical field. The complex signal optical field can then be reconstructed as

$$E_s(t) = \eta\left[ v_I(t) + j\, v_Q(t) \right] \qquad (3.2.35)$$

where η = e^{jπ/4}/(GTIA·R·ALO) is a proportionality constant. In practical implementations, the voltage signals vI(t) and vQ(t) corresponding to the I- and Q-components are digitized and recorded for digital processing to reconstruct the complex optical field. In a typical laboratory setup, a high-speed real-time digital analyzer, discussed in Section 2.10.3, with two input ports is often used for ADC and data acquisition in the coherent I/Q detection setup shown in Fig. 3.2.13. Fig. 3.2.14A shows an example of vI(t) (dashed line) and vQ(t) (solid line) waveforms measured by a real-time digital analyzer with a 25 GS/s sampling rate. In the experiment, the laser diode under test is a DFB laser with a linewidth on the MHz level, and a tunable external cavity semiconductor laser is used as the local oscillator (LO) with a much lower spectral linewidth (8 MHz despite the same value of σφ²(τ). Δv of each spectrum in Fig. 3.2.16B was estimated through Lorentzian fitting, commonly used in delayed self-heterodyne or coherent receiver setups, by measuring the 20-dB linewidth Δv20dB so that Δv = Δv20dB/√99, as explained in Example 3.2. Fig. 3.2.17A shows the Lorentzian-equivalent linewidths evaluated by Eq. (3.2.42) at different sampling intervals τ. The value of τ was changed by decimating the phase sequences, which were originally generated at a high sampling rate of 20 GS/s. The term "sampling frequency," with a unit of Hz, is used here to represent 1/τ of the decimated sequences, not to be confused with the sampling rate, which has a unit of samples/s, used to generate the original phase sequences (or to acquire digital sequences in a measurement setup). As expected, the results show that for white FM noise (ideal Lorentzian field PSD), a constant linewidth, 1 MHz in this case, is obtained from the phase variance with Eq. (3.2.42), independent of the sampling frequency 1/τ. Thus, a low-speed digital receiver with a bandwidth of only a few hundred MHz may suffice for characterizing the phase noise through linewidth estimation. On the other hand, the measured Lorentzian-equivalent linewidth can vary drastically with the sampling frequency for non-white FM noise. Therefore, much higher sampling frequencies are required to evaluate σφ²(τ) at frequencies comparable to the symbol rate of practical coherent systems (usually >5 GBaud). Ideally, sampling the phase noise information at the transmission symbol rate would be desirable to measure the phase noise variance for assessing the CPR performance, which typically operates on Ts-spaced samples. However, additive noise that commonly exists in the


Fig. 3.2.17 Lorentzian-equivalent linewidths of the phase noise sequences used to obtain Fig. 3.2.16 (A) without and (B) with the effect of additive instrumentation noise included. BW, bandwidth.


measurement setup (induced by, e.g., photodiode shot noise and electronic circuit noise) can drastically overestimate the measured phase noise variance if wide measurement bandwidths are used (Maher and Thomsen, 2011). Thus, limiting the measurement bandwidth is also required to reduce the impact of instrumentation noise. Nevertheless, Fig. 3.2.17A shows that even with non-white FM noise, the Lorentzian-equivalent linewidths evaluated at 5 GHz sampling frequency can be reasonably accurate representations of σφ²(τ) at higher frequencies. In fact, limiting the signal bandwidth to 5 GHz (±2.5 GHz) affected the measurement of the Lorentzian-equivalent linewidth only marginally at the sampling frequency (1/τ) of 5 GHz. Fig. 3.2.17B shows the effect of additive noise on the measurement with (dashed lines) and without (solid lines) applying the 5 GHz bandwidth limitation. In the simulation, before extracting the signal phase, instrumentation noise with a white Gaussian PSD of −68 dB/Hz for the real and imaginary parts was added to the unity-power signal optical field, resulting in a total SNR of 35 dB over a 10 GHz bandwidth. Even with this high SNR, the Lorentzian-equivalent linewidths at 20 GHz were overestimated by approximately 400% without band-limiting. Limiting the measurement bandwidth to 5 GHz resulted in a more accurate estimation of the Lorentzian-equivalent linewidth at the 5 GHz sampling frequency for all three examples of FM noise used in this simulation, with only 23% overestimation on average. In practice, the optimum measurement bandwidth will depend on the level of the additive noise and the specific phase noise characteristics of the laser. However, the examples here suggest that a sampling frequency of 5 GHz is sufficient for measuring the Lorentzian-equivalent linewidths of lasers with non-white FM noise profiles similar to those shown in Fig. 3.2.16. This also indicates that a digital receiver with a sampling rate of at least 5 GS/s is required for this characterization purpose. Experimental examples include the measurement of FM-noise PSDs of an external cavity laser (ECL), a DFB laser, and two single-section InAs/InP QD-MLLs with different repetition frequencies. QD-MLLs are mode-locked laser sources that produce multiple spectral lines with equal spacing over a wide range of wavelengths (Vedala et al., 2017; Huynh et al., 2014). Their use has been demonstrated in multiple-lane and WDM applications (Moscoso-Mártir et al., 2018; Kemal et al., 2017a, 2017b; Eiselt et al., 2017). The two QD-MLLs used in this measurement both operate in the C-band, with 11-GHz and 25-GHz frequency spacing between adjacent spectral lines, denoted by "11G-MLL" and "25G-MLL," respectively, in Fig. 3.2.18. The measurement setup is shown in Fig. 3.2.13, and a tunable ECL with 10 within the same sampling frequency range. FWHM linewidths, Δυ, measured from the PSDs of the beat tones, shown in the inset of Fig. 3.2.18B, were comparable to the Lorentzian-equivalent linewidths calculated at the lowest sampling frequency of 0.1 GHz for all lasers. This is because low sampling frequencies are closer to the flat low-frequency region of the FM-noise PSDs (see Fig. 3.2.18A) for these lasers, which is closely related to the FWHM linewidths. The results in Fig. 3.2.18 illustrate the ambiguity of Δυ as a parameter to describe the phase noise of lasers with non-white FM noise.
Furthermore, the QD-MLLs with Δυ values of 17 MHz and 9 MHz for the 11G-MLL and the 25G-MLL, respectively, have Lorentzian-equivalent linewidths of 1 MHz and 900 kHz near 5 GHz


sampling frequency, comparable with the 700 kHz linewidth of the DFB laser. Note that if τ is equal to the symbol period Ts in a digital coherent receiver, the abscissa in Fig. 3.2.18B represents the symbol rate of the system. Despite their relatively large FWHM linewidths, QD-MLLs exhibit performance similar to that of the DFB laser in coherent systems at practical symbol rates (Al-Qadi et al., 2020).
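The Lorentzian-equivalent linewidth estimation discussed above can be sketched numerically. The Python example below is only an illustration: it assumes the relation Δv(τ) = var[Δφ(τ)]/(2πτ) between the τ-spaced phase-increment variance and the linewidth, which is consistent with Eqs. (3.2.13) and (3.2.17) for white FM noise (⟨φ(t)²⟩ = 2πΔv|t|); Eq. (3.2.42) referenced in the text is not reproduced here, and the simulated 1-MHz white-FM laser and record length are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)
fs = 20e9                        # sampling rate of the recorded phase [Sa/s]
n = 2_000_000                    # number of samples (0.1 ms record)
dv_true = 1e6                    # linewidth of the simulated white-FM laser [Hz]

# White-FM (Wiener) phase: increments are Gaussian with variance 2*pi*dv/fs per sample.
phi = np.cumsum(rng.normal(0.0, np.sqrt(2 * np.pi * dv_true / fs), n))

for decim in (1, 4, 20, 200):    # sampling frequencies of 20, 5, 1, and 0.1 GHz
    tau = decim / fs
    dphi = np.diff(phi[::decim])                 # tau-spaced phase increments
    dv_est = np.var(dphi) / (2 * np.pi * tau)    # Lorentzian-equivalent linewidth
    print(f"1/tau = {1/tau/1e9:5.1f} GHz -> estimated linewidth {dv_est/1e6:.2f} MHz")

For this white-FM case the estimate stays near 1 MHz regardless of the sampling frequency, mirroring the flat curve of Fig. 3.2.17A; for non-white FM noise the same estimator becomes strongly dependent on 1/τ, which is the central point of the discussion above.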

3.2.3 Multi-heterodyne technique to characterize spectral properties of semiconductor laser frequency combs
Coherent optical frequency combs have been used in precision metrology as well as in optical communications (Hall, 2006; Hänsch, 2006). Mutually coherent optical carriers in wavelength division multiplexed (WDM) systems with coherent detection allow spectral overlap between adjacent channels to increase the spectral efficiency (Ellis and Gunning, 2005; Jinno et al., 2009). Owing to the phase orthogonality between adjacent channels, the crosstalk caused by modulation-induced spectral overlap between adjacent channels can be eliminated with electrical-domain signal processing. Passively mode-locked fiber lasers, which will be discussed in Chapter 5, can generate high-quality coherent frequency combs, often used in metrology. These fiber lasers typically have repetition rates on the order of 100 MHz because of their relatively long cavity lengths. A number of other techniques have also been demonstrated to generate coherent frequency combs, including dispersive parametric mixing in highly nonlinear fibers and ring resonators (Ataie et al., 2015; Matsko et al., 2011), and carrier-suppressed single-sideband modulation in fiber-optic re-circulating loops (Li et al., 2011). In comparison, mode-locked semiconductor laser frequency combs (Duan et al., 2009; Sato, 2003; Rafailov et al., 2007; Lu et al., 2008) have the unique advantages of miniature size, low power consumption, and high reliability meeting telecommunication standards. However, due to the spontaneous emission and frequency chirp inherent to semiconductor lasers, the frequency and phase stability of these lasers are generally inferior to those of fiber-based combs with passive mode-locking. Although common-mode optical phase fluctuations contribute to the spectral linewidth, the impact of differential-mode phase noise is usually the major concern in multi-wavelength applications. Common-mode phase noise refers to the phase noise of each individual spectral line, whereas differential-mode phase noise refers to the relative phase variations between different spectral lines. An ultrashort optical pulse train with a repetition time TR corresponds to a comb structure in the frequency domain with a frequency separation F = 1/TR between adjacent spectral lines. An ideal frequency comb has a constant repetition frequency F, and all spectral lines are mutually coherent. But a practical frequency comb, especially a device based on a passively mode-locked diode laser, always has phase noise and relative intensity noise (RIN), which hampers its applications in both metrology and optical communications. The optical phases of the spectral lines in a passively mode-locked diode laser are related,


Optical phases of the spectral lines in a passively mode-locked diode laser are related, and it has been predicted theoretically that the phase noise of each optical spectral line can be expressed as (Rosales et al., 2012)

φn(t) = φr(t) + Δφr,n(t) = φr(t) + (r − n) δφ(t)    (3.2.43)

where φr(t) is the time-varying common-mode phase of a specific spectral line with line index r, Δφr,n(t) is the differential phase between spectral lines n and r where n is a variable, and δφ(t) is the intrinsic differential-mode phase (IDMP), defined as the differential phase between adjacent spectral lines. The optical phase φn(t) of each spectral line is a random process, and the spectral linewidth can also be evaluated as (Paschotta et al., 2006)

Δν = 2π f² Sφ(f)    (3.2.44)

where Sφ(f) is the frequency-dependent power spectral density (PSD) of the optical phase variation φ(t). For simplicity, we assume that the phase noise φ(t) is a white Gaussian random walk; then its PSD is proportional to f⁻², and thus f²Sφ(f) should be independent of frequency. For an optical frequency comb with the phase noise described by Eq. (3.2.43), the spectral linewidth of the nth spectral line can be expressed as (Habruseva et al., 2009)

Δνn = Δνr + [(λn − λr)/(F λr²/c)]² Δνdiff    (3.2.45)

where Δνr is the common-mode spectral linewidth of the reference spectral line r at wavelength λr, λn is the wavelength of the nth line, and Δνdiff is the intrinsic differential linewidth attributed to the IDMP noise between adjacent spectral lines separated by the pulse repetition frequency F. While the common-mode linewidth in a passively mode-locked diode laser originates from spontaneous emission and can be predicted by the modified Schawlow-Townes formula (Paschotta, 2004), the differential-mode linewidth is mainly attributed to inter-pulse timing jitter. Theoretically, if there is no correlation between the common-mode and the differential-mode phase noises, the timing jitter δtj(t) is linearly proportional to the IDMP δφ(t) as (Rosales et al., 2012)

δtj(t) = δφ(t)/(2πF)    (3.2.46)

This indicates that the timing jitter δtj(t) is also a white Gaussian random walk. The statistical nature of the timing jitter can be quantified by its standard deviation σ, which is proportional to the square root of the observation time T. That is, σ(T) = √(D·T), where D is commonly referred to as a diffusion constant (Rosales et al., 2012). From the measurement point of view, common-mode phase noise can be measured straightforwardly by characterizing each spectral line separately, whereas to measure differential phase noise, multiple spectral lines have to be used simultaneously so that the phase correlation among these lines is included. Passively mode-locked fiber lasers and diode-pumped solid-state lasers such as Ti:Sapphire and Nd:YAG lasers usually have repetition rates lower than 100 MHz, allowing a large number of discrete optical spectral lines to be mixed and measured within the electrical bandwidth, say 10 GHz, of a wideband photodiode and RF spectrum analyzer. In this way, relative phase variations and mutual coherence between different spectral lines can be measured. For passively mode-locked semiconductor lasers with tens-of-GHz repetition rates, which are desirable for WDM optical communications, the receiver would require THz electrical bandwidth to include multiple spectral lines simultaneously, which is not feasible. In the following, we show that this problem can be solved by a multi-heterodyne detection method that allows the simultaneous downshift of a large number of optical spectral lines from an optical frequency comb into the electrical domain (Klee et al., 2013a,b). Common-mode and differential-mode phase noises can then be obtained by analyzing the electrical-domain waveforms. As shown in Fig. 3.2.13, in a coherent I/Q receiver the optical signal from the laser under test mixes with the LO in balanced photodiodes after a 90° optical hybrid; both the in-phase (I) and quadrature (Q) components of the complex optical field can be measured simultaneously and used to reconstruct the complex optical field. For illustration purposes, Fig. 3.2.19A shows a frequency comb under test (CUT) with six discrete spectral lines a1, a2, …, a6 and a frequency spacing F between adjacent lines. By mixing with the LO in an I/Q coherent receiver, a complex RF spectrum can be obtained which is the frequency-downshifted replica of the complex optical spectrum of the CUT. The maximum number of discrete spectral lines that can be measured is restricted to nmax ≤ ⌊2Be/F⌋, where Be is the single-side electrical bandwidth of the coherent receiver. For the QD-MLLs with F > 10 GHz, only a few spectral lines can be measured at each LO wavelength setting owing to the limited bandwidth Be. This limitation can be lifted to allow the measurement of time-dependent phase relations among a large number of spectral lines. This is accomplished with a multi-heterodyne technique, which uses a reference frequency comb as the LO in coherent heterodyne detection, as illustrated in Fig. 3.2.19B. The reference comb, with an optical bandwidth B0, has a repetition frequency F + δf, which differs slightly from the repetition frequency F of the CUT. Assuming the first spectral line b1 of the reference comb is a frequency Δ away from the closest spectral line a1 of the CUT, coherent mixing between bn and an creates an RF spectral line en at frequency [Δ + (n − 1)δf] (with n = 1 to 7 in this example) on the positive-frequency side of the RF spectrum. Meanwhile, mixing between bn and an+1 creates an RF spectral line dn at frequency [Δ + (n − 1)δf] − F on the negative-frequency side, as shown in Fig. 3.2.19B. This coherent multi-heterodyne mixing translates the CUT with line spacing F into an RF comb with line spacing δf ≪ F. To avoid frequency aliasing, an optical bandwidth B0 ≤ F²/δf is required.

327

328

Fiber optic measurement techniques


Fig. 3.2.19 (A) Illustration of coherent I/Q mixing between comb under-test (CUT) with a repetition frequency F and a local oscillator (LO) with a single spectral line. (B) Coherent I/Q mixing between CUT and a reference comb with a repetition frequency F + δf. Double-ended arrows indicate mixing between spectral lines and single-ended arrows indicate locations of resultant spectral lines in the RF domain.

The maximum number of spectral lines that can be measured is then nmax ≤ B0/F ≤ F/δf, assuming that the single-side electrical bandwidth of the coherent receiver is Be ≥ F. Note that if simple coherent detection with a single photodiode is used, only the amplitude of the optical field is detected. In that case, the maximum number of spectral lines that can be measured in the RF domain without spectral aliasing is determined by nmax ≤ F/(2δf), which is only half of that obtained with coherent I/Q detection. For a more general analysis, the complex optical fields of the CUT and the reference comb, respectively, can be written as superpositions of discrete frequency components,

A(t) = Σ_{n=1}^{N} an exp{j[2πfAn·t + φAn(t)]}    (3.2.47)

B(t) = Σ_{n=1}^{N} bn exp{j[2πfBn·t + φB(t)]}    (3.2.48)

where an and bn are real amplitudes, fAn and φAn are the frequency and phase of the nth spectral line of the CUT, and fBn and φB are the frequency and phase of the nth spectral line of the reference comb, where we assume that φB is stable and independent of the line number n. N is the total number of spectral lines of the reference comb.


With coherent I/Q mixing, the photocurrents are iI(t) ∝ Re(A*B) and iQ(t) ∝ Im(AB*) for the I and Q channels, respectively, where Re(x) and Im(x) represent the real and the imaginary parts of x, and "*" denotes the complex conjugate. With coherent I/Q detection, the two photocurrents can be combined to form complex RF waveforms and further decomposed into discrete frequency components:

i1(t) = iI(t) − j·iQ(t) = ξA*B = ξ Σ_{n=1}^{N} Σ_{k=1}^{N} an bk exp{j[2π(k − n)F·t + 2πΔ·t + 2π(k − 1)δf·t − φAn(t) + φB(t)]}    (3.2.49)

i2(t) = iI(t) + j·iQ(t) = ξAB* = ξ Σ_{n=1}^{N} Σ_{k=1}^{N} an bk exp{j[2π(n − k)F·t − 2πΔ·t − 2π(k − 1)δf·t + φAn(t) − φB(t)]}    (3.2.50)

where ξ is a proportionality constant, δf is the constant repetition-frequency difference between the CUT and the reference comb, F is the repetition frequency of the CUT, and Δ is the frequency offset at n = k = 1. Double-sided spectra can be obtained from the Fourier transforms of the photocurrents i1(t) and i2(t) of Eqs. (3.2.49) and (3.2.50), respectively. As illustrated in Fig. 3.2.19B, each spectral line in the RF domain is a frequency-downshifted optical spectral line of the CUT. On the positive RF sideband of Fig. 3.2.19B, each line is the mixing between An and Bn (k = n in Eq. (3.2.49)), while on the negative side of the RF spectrum, each line is the mixing between An+1 and Bn (k = n − 1 in Eq. (3.2.49)). For k > n and k < n − 1, the RF spectral lines have frequencies higher than F and normally fall outside the bandwidth of the receiver. Without loss of generality, consider the mth spectral line (set k = n = m in Eq. (3.2.49)) on the positive side of the RF spectrum, which is the Fourier transform of

i1m(t) = ξ am bm exp{j[2π(m − 1)δf·t + 2πΔ·t − φAm(t) + φB(t)]}    (3.2.51)

Decomposing the phase noise φAm(t) into a common-mode phase φr(t) and a differential phase Δφr,m(t) as defined in Eq. (3.2.43) for the CUT, Eq. (3.2.51) becomes

i1m(t) = ξ am bm exp{j[2π(m − 1)δf·t + 2πΔ·t − φr(t) − Δφr,m(t) + φB(t)]}    (3.2.52)

Similarly, letting k = n, Eq. (3.2.50) becomes

i2(t) = ξ Σ_{n=1}^{N} an bn exp{j[−2π(n − 1)δf·t − 2πΔ·t + φr(t) + Δφr,n(t) − φB(t)]}    (3.2.53)


where Δφr,m(t) and Δφr,n(t) are the differential phases between spectral lines m and r, and between n and r, respectively, for the CUT. As both i1(t) and i2(t) obtained from the coherent receiver can be digitized and recorded, digital signal processing (DSP) such as filtering and mixing can be performed offline in the digital domain. Selecting the mth spectral component i1m(t) from the positive-frequency side of the RF spectrum with a digital filter and mixing it with the negative-frequency side of the spectrum i2(t), the complex conjugate of the mixing product is

{i1m(t)·i2(t)}* = ξ² am bm Σ_{n=1}^{N} an bn exp{j[2π(n − m)δf·t + Δφmn(t)]}    (3.2.54)

where Δφmn(t) = Δφr,m(t) − Δφr,n(t) is the phase difference between the mth and nth spectral lines. This digital mixing process allows the differential phase noise to be separated from the common-mode phase noise.
In the experiment, a quantum-dot mode-locked laser (QD-MLL) is used as the CUT, which is a single-section InAs/InP quantum-dot (QD) mode-locked laser (MLL) with a pulse repetition frequency of 11 GHz. The laser emits phase-locked discrete spectral lines ranging from 1540 to 1550 nm. A detailed description of the laser structure can be found in Lu et al. (2008). The spectrum shown in Fig. 3.2.20A was measured by an optical spectrum analyzer (OSA) with 0.01 nm spectral resolution. All experiments reported here were performed with 400 mA constant bias current on the laser, and the optical power at the output of the PM fiber pigtail was approximately 10 mW. This operating point was chosen to obtain an optimally flat optical spectrum in the wavelength window from 1540 to 1550 nm. First, the beat signal of adjacent spectral lines can be measured with direct detection using a high-speed photodiode and an electrical spectrum analyzer (ESA) (Finch et al., 1990).
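As a rough illustration of the offline filtering and mixing described by Eqs. (3.2.49) through (3.2.54), the following Python sketch extracts the relative phase Δφmn(t) between two comb lines from recorded complex waveforms. The function name and the brick-wall FFT filters are illustrative choices and not part of the original experiment.

```python
import numpy as np

def differential_phase(i1, i2, fs, delta, df, m, n):
    """Sketch of the offline mixing of Eq. (3.2.54): recover the relative
    phase between spectral lines m and n of the comb under test.

    i1, i2 : complex photocurrent records iI - j*iQ and iI + j*iQ
    fs     : sampling rate of the digitizer (Hz)
    delta  : frequency offset Delta between the first CUT and reference lines (Hz)
    df     : repetition-frequency difference delta_f (Hz)
    """
    t = np.arange(len(i1)) / fs
    f_m = delta + (m - 1) * df            # RF frequency of the m-th line in i1

    # digital band-pass filter: select the m-th line on the positive-frequency side
    spec = np.fft.fft(i1)
    freqs = np.fft.fftfreq(len(i1), d=1.0 / fs)
    i1m = np.fft.ifft(spec * (np.abs(freqs - f_m) < df / 2.0))

    # mix with i2 and take the complex conjugate of the product (Eq. 3.2.54)
    prod = np.conj(i1m * i2)

    # the component at (n - m)*delta_f carries Delta_phi_mn(t); down-convert
    # by the known oscillation and low-pass filter around DC
    base = prod * np.exp(-1j * 2 * np.pi * (n - m) * df * t)
    spec2 = np.fft.fft(base)
    freqs2 = np.fft.fftfreq(len(base), d=1.0 / fs)
    base_lp = np.fft.ifft(spec2 * (np.abs(freqs2) < df / 2.0))
    return np.unwrap(np.angle(base_lp))   # Delta_phi_mn(t) up to a constant
```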


Fig. 3.2.20 (A) Optical spectral density of the diode laser frequency comb measured with 0.01 nm resolution bandwidth. (B) RF spectra of the 1st- and the 2nd-order beating notes and Lorentzian fitting, where the frequency has been shifted by the central frequency fn (n = 1, 2) of each peak.


Fig. 3.2.20B shows the 1st- and 2nd-order beating spectra recorded by the ESA with 25 GHz RF bandwidth. The central frequency of each peak has been downshifted to 0 for comparison. The peak of the 1st-order beating note at f1 = 11 GHz is the mixing between all adjacent spectral lines, and the 2nd-order beating note at f2 = 22 GHz is the mixing between all next-nearest lines. These mixing spectra can be fitted to a Lorentzian line shape, P(f) = 1/{1 + [(f − fn)/(Δν/2)]²}, where fn is the central frequency and Δν is the FWHM linewidth of the power spectral density. The continuous lines in Fig. 3.2.20B show the Lorentzian fits, with FWHM linewidths of 2.9 kHz and 9.1 kHz for the 1st-order and 2nd-order mixing peaks, respectively. The narrow RF linewidths shown in Fig. 3.2.20B indicate that adjacent optical spectral lines of the QD-MLL are highly correlated, with low IDMP noise. However, the optical phase noise of the individual lines can be much larger, with linewidths of some tens of megahertz. The spectral linewidths of individual lines can be measured using coherent heterodyne detection by mixing the QD-MLL output with an external-cavity tunable laser.
For directly modulated semiconductor lasers, modulation bandwidths beyond 20 GHz have been demonstrated. However, further increasing the modulation bandwidth to 40 GHz appears to be quite challenging, mainly limited by the carrier lifetime as well as by the parasitic effects of the electrode. Another well-known property of direct modulation in a semiconductor laser is the associated frequency modulation, commonly referred to as frequency chirp, as discussed in Chapter 1. To improve the performance of optical modulation, various types of external modulators have been developed to perform intensity modulation as well as phase modulation, as discussed in Section 1.6. In this section, we primarily discuss how to characterize the modulation response.

3.3.1 Characterization of intensity modulation response

The intensity modulation response of an optical transmitter can be measured either in the frequency domain or in the time domain. Usually, it is relatively easy for a frequency-domain measurement to cover a wide, continuous range of modulation frequencies and provide a detailed frequency response of the transmitter. Time-domain measurement, on the other hand, measures waveform distortion, which includes all the non-linear effects that cannot be captured by small-signal measurements in the frequency domain. In this section, we discuss frequency-domain and time-domain characterization techniques separately.

3.3.1.1 Frequency-domain characterization

The frequency response is a measure of how fast an optical transmitter can be modulated. Typically, the modulation efficiency of a semiconductor laser or an external modulator is a function of the modulation frequency Ω. Frequency-domain characterization is a popular way to find features of the device response such as the cut-off frequency, the uniformity of the in-band response, and resonance frequencies. Many time-domain waveform features can be predicted from frequency-domain characterizations. A straightforward way to characterize the frequency-domain response of an optical transmitter is to use an RF network analyzer and a calibrated wideband optical receiver, as shown in Fig. 3.3.2. In this measurement, the transmitter has to be properly DC biased so that it operates in the desired condition. The RF network analyzer is set to measure the S21 parameter. In this mode, port 1 of the network analyzer provides a frequency-swept RF signal, which is fed into the transmitter under test. The transmitter converts this frequency-swept RF modulation into the optical domain, and the optical receiver then converts the optical modulation back to the RF domain and sends it to port 2 of the network analyzer for detection.


Fig. 3.3.2 Frequency domain characterization of an optical transmitter using an RF network analyzer.

To characterize the modulation property of the transmitter, the bandwidths of both the network analyzer and the optical receiver should be wide enough, and the receiver should be calibrated in advance so that its transfer function can be excluded from the measurement. Fig. 3.3.3 shows an example of the measured amplitude modulation response of a DFB laser under direct modulation of its injection current (Vodhanel et al., 1989). With increasing average output optical power of the laser, the modulation bandwidth increases and the relaxation oscillation peak becomes strongly damped. This is because at high optical power levels the carrier lifetime is short due to strong stimulated recombination.

Fig. 3.3.3 Example of a direct modulation response of a DFB laser diode at different optical power levels.
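Because the measured S21 trace contains the product of the transmitter and receiver responses, the calibrated receiver response must be divided out (subtracted in dB). The short Python sketch below shows this de-embedding step together with a simple −3 dB bandwidth search; the function names and the assumption of a common frequency grid are illustrative, not a prescribed procedure.

```python
import numpy as np

def deembed_transmitter_response(s21_db, receiver_response_db):
    """Remove the calibrated receiver response from a measured S21 trace.

    s21_db               : measured S21 magnitude of the whole link (dB)
    receiver_response_db : calibrated receiver (PD + RF amplifier) response (dB),
                           sampled on the same frequency grid
    """
    s21_db = np.asarray(s21_db, dtype=float)
    rx_db = np.asarray(receiver_response_db, dtype=float)
    return s21_db - rx_db  # transmitter modulation response in dB

def bandwidth_3db(freq_hz, response_db):
    """Find the -3 dB modulation bandwidth relative to the low-frequency value."""
    freq_hz = np.asarray(freq_hz, dtype=float)
    response_db = np.asarray(response_db, dtype=float)
    below = np.where(response_db < response_db[0] - 3.0)[0]
    return freq_hz[below[0]] if below.size else None
```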


The advantage of frequency-domain measurement is its simplicity and high sensitivity due to the use of an RF network analyzer. Because the frequency of the interrogating RF signal sweeps continuously across the bandwidth of interest, detailed frequency-domain characteristics of the optical transmitter can be obtained in a single sweep. However, the receiver at port 2 of the network analyzer only selects the very frequency component sent out from port 1. Higher-order frequency harmonics generated by the non-linear transfer characteristics of the electro-optic circuits cannot be measured. In general, as a disadvantage, the frequency-domain measurement only provides the small-signal linear response and is incapable of characterizing non-linear effects in the electro-optic circuits. To find the non-linear characteristics of an optical transmitter, measurement of the strengths of high-order harmonics is necessary, especially for analog system applications. This measurement can be performed using an RF signal generator (usually known as a frequency synthesizer) and an RF spectrum analyzer, as shown in Fig. 3.3.4. To measure the non-linear response, the transmitter is modulated with a sinusoidal signal at frequency Ω generated by the synthesizer. If the transmitter response is non-linear, several discrete frequency components will be measured on the RF spectrum analyzer. In addition to the fundamental frequency at Ω, RF energy will exist at the second-order, third-order, and higher-order harmonic frequencies 2Ω, 3Ω, and so on. The kth-order harmonic distortion parameter is defined as

HDk = P(Ωk)/P(Ω1)    (3.3.2)

where Ωk = kΩ with k = 2, 3, 4, …, P(Ωk) is the RF power at the kth-order harmonic frequency, and P(Ω1) is the power at the fundamental frequency, as shown in Fig. 3.3.5A. From this measurement, the total harmonic distortion (THD) can be calculated according to Eq. (3.3.3).


Fig. 3.3.4 Frequency domain characterization of an optical transmitter using an RF spectrum analyzer.


Fig. 3.3.5 Illustration of (A) the measurements of second and third orders of harmonic distortions and (B) inter-modulation distortion on an RF spectrum analyzer.



Fig. 3.3.6 Examples of (A) measured second-order distortion and (B) inter-modulation distortion of a laser diode under direct modulation (Helms, 1991). (Used with permission.)

THD = Σ_{k=2}^{∞} P(Ωk)/P(Ω1)    (3.3.3)

In general, harmonic distortions in directly modulated laser diodes are functions of both the modulation frequency and the modulation index m. A larger modulation index implies a wider swing of the signal magnitude and therefore often results in higher harmonic distortions, as indicated in Fig. 3.3.6A. Another type of distortion caused by the non-linear response of a transmitter is referred to as inter-modulation distortion (IMD). IMD is created by the non-linear mixing between two or more discrete frequency components of the RF signal in the transmitter. In this non-linear mixing process, new frequency components are created. If there are originally three frequency components at Ωi, Ωj, and Ωk, new frequency components will be created at Ωijk = Ωi ± Ωj ± Ωk, where i, j, and k are integers. In the simplest case, if i = j or j = k, only two original frequency components are involved, which is similar to the case of degenerate four-wave mixing in a non-linear fiber system discussed in Chapter 1. For example, two original frequency components at Ω1 and Ω2 will generate two new frequency components at 2Ω1 − Ω2 and 2Ω2 − Ω1, as shown in Fig. 3.3.5B. These two new frequency components are created by two closely spaced original frequency components, which have the highest power compared to the others; they are usually the most damaging terms for optical system performance. The IMD parameter is defined as

IMD(Ωimd) = P(Ωimd)/P(Ωi)    (3.3.4)

where Ωimd is the frequency of the new frequency component and P(Ωimd) is the power at this frequency, as shown in Fig. 3.3.5B. Fig. 3.3.6B shows an example of the measured IMD(Ωimd) versus the average modulation frequency (Ω1 + Ω2)/2 (Helms, 1991). Again, a higher modulation index usually results in higher IMD.
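The distortion metrics of Eqs. (3.3.2) through (3.3.4) are simple power ratios, so they can be evaluated directly from the powers read off the spectrum analyzer. The sketch below, with a hypothetical function name and example numbers, converts dBm readings to linear units and reports HDk, THD, and optionally IMD in dBc.

```python
import numpy as np

def harmonic_and_imd_metrics(p_fund_dbm, p_harmonics_dbm, p_imd_dbm=None):
    """Compute HD_k, THD (Eq. 3.3.3) and IMD (Eq. 3.3.4) from powers read
    off an RF spectrum analyzer (all inputs in dBm)."""
    p1 = 10 ** (p_fund_dbm / 10.0)                       # fundamental, mW
    pk = 10 ** (np.asarray(p_harmonics_dbm) / 10.0)      # harmonics, mW
    out = {
        "HDk_dBc": 10 * np.log10(pk / p1),               # each HD_k in dBc
        "THD_dBc": 10 * np.log10(pk.sum() / p1),         # THD in dBc
    }
    if p_imd_dbm is not None:
        p_imd = 10 ** (np.asarray(p_imd_dbm) / 10.0)
        out["IMD_dBc"] = 10 * np.log10(p_imd / p1)       # Eq. (3.3.4)
    return out

# example: fundamental at -10 dBm, 2nd/3rd harmonics at -45 and -60 dBm
print(harmonic_and_imd_metrics(-10.0, [-45.0, -60.0]))
```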


In lightwave CATV systems, a large number of sub-carrier channels are modulated onto the same transmitter, and even a low level of inter-modulation distortion may generate a significant amount of inter-channel crosstalk. For CATV applications, both THD and IMD should generally be lower than −55 dBc.

3.3.1.2 Time-domain characterization

In binary modulated digital optical systems, waveform distortions, represented by the eye-closure penalty, may be introduced by the limited frequency bandwidth as well as by various non-linear effects in the transmitter. Time-domain measurement directly characterizes waveform distortion, which is most relevant to digital optical systems. In addition, the frequency-domain response of the transmitter can be evaluated indirectly from a pulse response measurement performed in the time domain. As shown in Fig. 3.3.7, time-domain characterization of an optical transmitter requires a waveform generator, a wideband optical receiver, and a high-speed oscilloscope. A unique advantage of time-domain measurement is the ability to characterize the transient effects during the switching of the signal level from low to high and vice versa. In general, the transient effect depends not only on the frequency bandwidth of the transmitter but also on the specific patterns of the input signal waveform. Therefore, pseudorandom waveforms, which contain a wide variety of data pattern combinations, are generally used. We discussed time-domain waveform characterization in Section 2.9, where both electrical-domain sampling and optical-domain sampling were presented. More detailed descriptions of time-domain waveform measurement and eye diagram evaluation are given in Chapter 5, where we discuss optical transmission systems.

3.3.2 Measurement of frequency chirp

Frequency chirp is the frequency modulation associated with the intensity modulation of an optical transmitter. Frequency chirp usually broadens the spectral bandwidth of the modulated optical signal and introduces additional performance degradation when the optical signal propagates through a dispersive medium. The origins and physical mechanisms of frequency chirp in directly modulated laser diodes and in electro-optic external modulators were discussed in Chapter 1; here we discuss techniques for measuring the effect of frequency chirp.


Fig. 3.3.7 Block diagram of time-domain measurement of optical transmitter.


Let us consider an optical field that has both intensity and phase fluctuations:

E(t) = √P(t) · e^{jφ(t)} = exp{jφ(t) + 0.5 ln P(t)}    (3.3.5)

where P(t) = |E(t)|² is the optical power. In this expression, the real part of the exponent is 0.5 ln P(t) and the imaginary part is φ(t). The ratio between the time derivatives of the imaginary and real parts is equivalent to the ratio between the phase modulation and the intensity modulation, and is defined as the chirp parameter:

αlw = 2 [dφ(t)/dt] / [d{ln P(t)}/dt] = 2P(t) [dφ(t)/dt] / [dP(t)/dt]    (3.3.6)

The frequency deviation is equal to the time derivative of the phase modulation, Δω = dφ(t)/dt; therefore, the intensity modulation-induced optical frequency shift is

Δω = (αlw/2) (1/P) (dP/dt)    (3.3.7)

With sinusoidal modulation at frequency Ω and modulation index m, the signal optical power is P(t) = P0(1 + m sin Ωt) and its derivative is

dP/dt = P0 m Ω cos Ωt

where m is the modulation index, P0 is the average power, and Ω is the RF angular frequency of the modulating signal. Then, according to Eq. (3.3.7), the induced frequency modulation is

Δω = (αlw/2) · m Ω cos Ωt / (1 + m sin Ωt)    (3.3.8)

If the modulation index is small enough (m ≪ 1), the maximum frequency shift is approximately

Δωmax ≈ (αlw/2) m Ω    (3.3.9)

This indicates that, through the modulation chirp, the signal optical spectral bandwidth is broadened by the intensity modulation; this broadening is proportional to the linewidth enhancement factor αlw as well as to the modulation index m. As we discussed in Chapter 1, frequency chirp may exist in both direct modulation of semiconductor lasers and external modulation. In both cases, if the linewidth enhancement factor αlw is known, the amount of spectral broadening can be evaluated with Eq. (3.3.9). Several techniques are available to characterize modulation-induced frequency chirp in optical transmitters. The most often-used techniques include modulation spectral measurement, dispersion measurement, and interferometric measurement.
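As a quick numerical check of Eqs. (3.3.7) through (3.3.9), the following sketch computes the chirp-induced frequency deviation from a simulated sinusoidally modulated power waveform and compares its peak with the small-signal estimate. The chosen αlw, modulation index, and sampling rate are arbitrary example values, not measured data.

```python
import numpy as np

alpha_lw = 3.0            # assumed linewidth enhancement factor (example value)
m, f_mod = 0.1, 1e9       # intensity modulation index and modulation frequency (Hz)
fs = 100e9                # sampling rate of the simulated waveform (Hz)

t = np.arange(0, 20e-9, 1 / fs)
P = 1e-3 * (1 + m * np.sin(2 * np.pi * f_mod * t))   # P(t) = P0(1 + m sin(Omega t))

# Eq. (3.3.7): delta_omega(t) = (alpha/2) * (1/P) * dP/dt
dPdt = np.gradient(P, 1 / fs)
d_omega = 0.5 * alpha_lw * dPdt / P

# compare the numerical peak deviation with the small-m estimate of Eq. (3.3.9)
print(d_omega.max() / (2 * np.pi) / 1e6, "MHz (numerical peak)")
print(0.5 * alpha_lw * m * f_mod / 1e6, "MHz (Eq. 3.3.9 estimate)")
```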


3.3.2.1 Modulation spectral measurement

When an optical transmitter is modulated by a sinusoid at frequency Ω with modulation index m, its output optical power is P(t) = P0(1 + m sin Ωt). Meanwhile, due to the chirp, its optical phase is also modulated. If the modulation index is small enough (m ≪ 1), then according to the definition of the linewidth enhancement factor in Eq. (3.3.6), the phase modulation is

φ(t) = (αlw m/2) sin Ωt    (3.3.10)

Then the complex optical field can be expressed as

E(t) = √P0 · √(1 + m sin Ωt) · exp{j[ωt + (m αlw/2) sin Ωt]}    (3.3.11)

Since the modulation index is small, Eq. (3.3.11) can be linearized to simplify the analysis:

E(t) ≈ √P0 · [1 + (m/2) sin Ωt] · exp{j[ωt + (m αlw/2) sin Ωt]}    (3.3.12)

The right side of Eq. (3.3.12) can be expanded into a Bessel series:

E(t) = √P0 Σ_k Jk(m αlw/2) exp[j(ω + kΩ)t]
     − j(m/4)√P0 {Σ_k Jk(m αlw/2) exp[j(ω + (k + 1)Ω)t] − Σ_k Jk(m αlw/2) exp[j(ω + (k − 1)Ω)t]}    (3.3.13)

where Jk(mαlw/2) is the kth-order Bessel function. Eq. (3.3.13) indicates that if the transmitter is chirp-free (αlw = 0), there should be only two modulation sidebands, one on each side of the carrier, which is the signature of pure intensity modulation. In general, however, when αlw ≠ 0, there will be additional modulation sidebands in the optical spectrum, as indicated by Eq. (3.3.13). The amplitude of each modulation sideband, determined by Jk(mαlw/2), is a function of the modulation index m as well as of the linewidth enhancement factor αlw. Therefore, by measuring the amplitude of each modulation sideband, the linewidth enhancement factor of the transmitter can be determined. In this way, the modulation chirp measurement becomes a spectral measurement. However, since the sideband separation, which is determined by the RF modulation frequency, is usually in the multi-megahertz range, most optical spectrum analyzers cannot resolve the fine spectral features created by the modulation.


Fig. 3.3.8 Optical heterodyne spectrum of a directly modulated DFB laser with m = 0.1, where β = mαlw/2 is the phase modulation index.

Coherent heterodyne detection is probably the best way to perform this measurement. It translates the modulated optical spectrum into the RF domain, where it can be measured by an RF spectrum analyzer, as discussed in Chapter 2. Fig. 3.3.8 shows an example of a directly modulated DFB laser diode measured by coherent heterodyne detection. In this particular case, the modulation frequency is 1 GHz and the intensity modulation index is m = 0.1. When the phase modulation index β = mαlw/2 is equal to 2.405, the carrier component is suppressed because J0(2.405) = 0, as shown in Fig. 3.3.8A. When the phase modulation index is increased to β = 3.832, the nearest non-vanishing modulation sidebands on both sides of the carrier are the J2²(β) components, whereas the J1²(β) components are nulled because β = 3.832 is a root of J1(β) = 0. This allows precise determination of the linewidth enhancement factor of the transmitter from the known intensity modulation index m, which in turn can be obtained by a time-domain measurement of the modulated intensity waveform.
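Since the sideband amplitudes in Eq. (3.3.13) are set by Bessel functions, the nulling conditions can be found numerically. The sketch below, which uses scipy's Bessel functions and a hypothetical helper name, locates the J0 and J1 zeros quoted above and converts them to αlw for a separately measured intensity modulation index m.

```python
import numpy as np
from scipy.special import jv
from scipy.optimize import brentq

def sideband_powers_db(beta, orders=(0, 1, 2)):
    """Relative power of the carrier and sidebands, |J_k(beta)|^2 in dB."""
    return {k: 20 * np.log10(abs(jv(k, beta))) for k in orders}

# carrier null at the first zero of J0; first-sideband null at the first
# non-trivial zero of J1
beta_carrier_null = brentq(lambda b: jv(0, b), 2.0, 3.0)     # ~2.405
beta_sideband_null = brentq(lambda b: jv(1, b), 3.0, 4.5)    # ~3.832

m = 0.1   # intensity modulation index measured separately in the time domain
print(sideband_powers_db(2.0))
print("alpha_lw at carrier null :", 2 * beta_carrier_null / m)   # beta = m*alpha/2
print("alpha_lw at J1 null      :", 2 * beta_sideband_null / m)
```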



Fig. 3.3.9 Experimental setup for frequency chirp measurement using a dispersive fiber and an RF network analyzer.

3.3.2.2 Measurement utilizing fiber dispersion

As mentioned previously, frequency chirp broadens the signal optical spectrum and introduces system performance degradation when the transmission fiber is dispersive. For measurement purposes, this effect can be used to evaluate the chirp parameter of a transmitter. The experimental setup is shown in Fig. 3.3.9, where the transmitter under test is modulated by a frequency-swept RF signal from a network analyzer (port 1). The optical output from the transmitter passes through an optical fiber that has a predetermined amount of chromatic dispersion. The transmitted optical signal is then detected by a wideband receiver, with an optical amplifier, if necessary, to increase the optical power for detection. The RF signal detected by the receiver is sent back to the receiving port (port 2). The RF network analyzer measures the S21 parameter of the entire optoelectronic system to determine its overall transfer function (Devaux et al., 1993). To explain the operating principle, we assume that the modulation index m is small and the modulation efficiency is linear. Because of the chirp effect, both the optical power and the optical phase are modulated. According to Eq. (3.3.11), the modulated complex optical field is

E(t) = √P0 · √(1 + m sin(2πft)) · exp[j(m αlw/2) sin(2πft)] · e^{j2πf0t}    (3.3.14)

where f and f0 are the applied RF modulation frequency and the optical carrier frequency, respectively. Again, for small-signal modulation (m ≪ 1), a linear approximation can be applied so that

E(t) ≈ √P0 · [1 + (m/2) sin(2πft)] · [1 + j(αlw m/2) sin(2πft)] · e^{j2πf0t}    (3.3.15)

Neglecting higher-order terms in m, Eq. (3.3.15) can be simplified to

E(t) ≈ √P0 · [1 + m((1 + jαlw)/4) e^{j2πft} + m((1 + jαlw)/4) e^{−j2πft}] · e^{j2πf0t}    (3.3.16)


Obviously, this modulated optical spectrum has two discrete sidebands at f0 ± f, one on each side of the optical carrier f0. Due to the chromatic dispersion of the optical fiber, each frequency component has a slightly different propagation constant β:

β(f) = β0 + 2πβ1(f − f0) + (2π)²(β2/2)(f − f0)² + …    (3.3.17)

By definition, β1 = 1/vg is the inverse of the group velocity, β2 = −λ0²D/(2πc) is the dispersion coefficient, and λ0 = c/f0 is the center wavelength of the optical carrier. D is the fiber dispersion parameter with the unit of ps/nm/km. The propagation constants at the carrier, the upper sideband, and the lower sideband can then be expressed as

β(f0) = β0    (3.3.18)

β+1 = β(f0 + f) = β0 + 2πf/vg − πλ0²Df²/c    (3.3.19)

β−1 = β(f0 − f) = β0 − 2πf/vg − πλ0²Df²/c    (3.3.20)

Taking chromatic dispersion into account, after propagating over a fiber of length L, each of the three frequency components at f0 and f0 ± f experiences a slightly different propagation delay. At the fiber output, the optical field is

E(t) ≈ √P0 · [e^{−jβ0L} + m((1 + jαlw)/4) e^{j2πft} e^{−jβ+1L} + m((1 + jαlw)/4) e^{−j2πft} e^{−jβ−1L}] · e^{j2πf0t}    (3.3.21)

At the photodetector, the photocurrent is proportional to the square of the optical field magnitude, I(t) = η|E(t)|², where η is the responsivity of the photodiode. Collecting only the photocurrent components at the modulation frequency f (in fact, this is what an RF network analyzer does), we have

I(f) ≈ ηP0 m [((1 + jαlw)/2) exp(jπλ0²Df²L/c) + ((1 − jαlw)/2) exp(−jπλ0²Df²L/c)]
     = ηP0 m √(1 + αlw²) cos(πλ0²Df²L/c + tan⁻¹ αlw)    (3.3.22)

In this derivation, higher-order harmonics at 2f, 3f, and so on have been neglected. Eq. (3.3.22) indicates that the RF transfer function of the system is strongly frequency-dependent, and the spectral features of this transfer function largely depend on the value of the chirp parameter αlw. One important feature of this transfer function is the set of resonance zeros that occur when the following condition is satisfied:

πλ0²Df²L/c + tan⁻¹(αlw) = kπ + π/2    (3.3.23)


where k = 0, 1, 2, 3, … is an integer. The solutions to Eq. (3.3.23) are found at the discrete frequencies f = fk, where

fk = √{ [c/(2DLλ0²)] · [1 + 2k − (2/π) tan⁻¹(αlw)] }    (3.3.24)

By measuring the frequencies at which the transfer function goes to zero, the chirp parameter αlw can be evaluated. In practical measurements, the following steps are suggested:
1. Find the accumulated fiber dispersion DL. This can be accomplished by measuring at least two discrete zero-transfer frequencies, for example, fk and fk+1. From Eq. (3.3.24), it follows that

DL = c / [(f²k+1 − f²k) λ0²]    (3.3.25)

which is independent of the chirp parameter of the transmitter.
2. Once the accumulated fiber dispersion is known, the laser chirp parameter αlw can be found from Eq. (3.3.24) using the frequency of any transfer zero fk.
This measurement technique provides not only the absolute value of αlw but also its sign. Fig. 3.3.10 shows the calculated zero-transfer frequencies versus the chirp parameter in a system with 50 km of standard single-mode fiber with D = 17 ps/nm-km. In order to observe high orders of transfer nulls, a wide-bandwidth receiver and RF network analyzer have to be used. In general, a network analyzer with a bandwidth of 20 GHz is adequate for most such measurements.
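The two-step procedure above can be coded directly. The sketch below, with a hypothetical function name and hypothetical example null frequencies, recovers the accumulated dispersion from two adjacent nulls via Eq. (3.3.25) and then inverts Eq. (3.3.24) for αlw.

```python
import numpy as np

c = 3e8  # speed of light, m/s

def chirp_from_notches(f_k, f_k1, k, lambda0=1550e-9):
    """Recover accumulated dispersion and chirp parameter from two adjacent
    transfer-function nulls f_k and f_{k+1} (Eqs. 3.3.24-3.3.25).

    f_k, f_k1 : null frequencies in Hz, of orders k and k+1
    k         : integer order of the first null
    """
    DL = c / ((f_k1**2 - f_k**2) * lambda0**2)       # Eq. (3.3.25), in s/m
    # invert Eq. (3.3.23): pi*lambda0^2*DL*f_k^2/c + arctan(alpha) = k*pi + pi/2
    alpha = np.tan(k * np.pi + np.pi / 2 - np.pi * lambda0**2 * DL * f_k**2 / c)
    return DL, alpha

# hypothetical nulls for roughly 50 km of standard single-mode fiber
DL, alpha = chirp_from_notches(8.5e9, 14.8e9, k=0)
print("DL =", DL * 1e3, "ps/nm   alpha_lw =", alpha)   # 1 s/m = 1000 ps/nm
```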


Fig. 3.3.10 Transmission notch frequencies versus αlw in a system of 50 km standard single-mode fiber with D = 17 ps/nm-km.


Fig. 3.3.11 Voltage-dependent transmission of an electro-absorption modulator.

The following is a practical example of the chirp measurement of an electro-absorption (EA) modulator. For this modulator, the chirp parameter is a function of the bias voltage, and the value of the chirp can be either positive or negative depending on the bias level applied to the device. The voltage-dependent transfer function of this particular EA modulator is shown in Fig. 3.3.11, where the optical power transmission efficiency T(V) decreases with increasing negative bias voltage V. Fig. 3.3.12 shows the RF transfer functions measured with the experimental setup of Fig. 3.3.9, together with the transfer functions calculated using Eq. (3.3.22). Four transfer functions are shown in Fig. 3.3.12, each measured at a different EA modulator bias voltage. In the experimental setup, 50 km of standard single-mode fiber was used with a dispersion parameter of D = 17.2 ps/nm-km; therefore, the accumulated dispersion of the fiber system is approximately 860 ps/nm. To fit the measured transfer function at a certain bias voltage using Eq. (3.3.22), the only fitting parameter that can be adjusted is the chirp parameter αlw. By performing the transfer-function measurement at various bias voltage levels, the chirp parameter versus bias voltage can be obtained, as shown in Fig. 3.3.13. It is interesting to note that the chirp parameter αlw(V) exhibits a singularity around the −1.8 V bias voltage. The reason is that in this particular device, the optical transmission efficiency T versus bias voltage reaches a local minimum at a bias level of approximately −1.8 V, where the slope dT/dV changes sign. Meanwhile, the phase modulation efficiency dΦ/dV is non-zero in this region. Therefore, based on the chirp parameter defined by Eq. (3.3.6), the value of αlw becomes infinite at this bias level. Based on this measurement, the signal optical phase versus the bias voltage of the EA modulator can also be obtained by integrating Eq. (3.3.6) as

Φ(V) = ∫ αlw(V) [1/(2P(t))] [dP(t)/dt] dt = ∫ αlw(V) [1/(2P(V))] dP    (3.3.26)

where P(V) is the output optical power from the EA modulator. Since the input optical power is constant during the measurement, the output power P(V) is proportional to the optical transmission efficiency of the device.
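Eq. (3.3.26) can be evaluated numerically from the measured αlw(V) and transmission T(V) curves, for example with a cumulative trapezoidal integration as sketched below (the function name and argument conventions are illustrative). Note that near the bias where dT/dV → 0 and αlw diverges, the integrand αlw·(dT/dV)/(2T) remains finite, which is why the phase curve in Fig. 3.3.14 is continuous.

```python
import numpy as np

def phase_vs_bias(v, alpha_lw, transmission):
    """Integrate Eq. (3.3.26) to obtain the relative optical phase versus bias.

    v            : bias voltage samples (V), monotonically ordered
    alpha_lw     : measured chirp parameter at each bias point
    transmission : measured power transmission T(V) (linear scale), proportional
                   to the output power P(V) for constant input power
    """
    v = np.asarray(v, float)
    a = np.asarray(alpha_lw, float)
    T = np.asarray(transmission, float)
    dT = np.gradient(T, v)                       # dP/dV up to a constant factor
    integrand = a / (2.0 * T) * dT               # alpha/(2P) * dP/dV
    # cumulative trapezoidal integration over bias voltage
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(v)
    return np.concatenate(([0.0], np.cumsum(steps)))   # radians, relative phase
```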


Fig. 3.3.12 Measured (red; dark gray in the printed version) and calculated (black) transfer functions of the EA modulator at bias voltages of −0.3 V, −1.2 V, −1.7 V, and −1.8 V, respectively.


Fig. 3.3.13 Measured chirp parameter αlw versus EA bias voltage.


Fig. 3.3.14 Measured optical phase versus the EA bias voltage.

Based on Eq. (3.3.26) and the measured chirp parameter shown in Fig. 3.3.13, the bias-dependent signal optical phase can be obtained as shown in Fig. 3.3.14. This figure indicates that with increasing reverse bias, the signal optical phase first goes negative and then swings back to positive. Even though the chirp parameter exhibits a singularity around the −1.8 V bias level, the phase-versus-voltage curve is continuous. From a practical application point of view, what actually affects the optical system performance is the signal optical phase rather than the chirp parameter.

3.3.3 Time-domain measurement of modulation-induced chirp

Frequency-domain measurement of modulation chirp is a powerful yet relatively simple tool. However, since it is based on swept-frequency small-signal modulation, it usually provides only the linear performance of the transmitter. To measure intensity-dependent waveform distortion, a large-signal characterization in the time domain is usually required. Fig. 3.3.15 shows an example of time-domain characterization of transmitter chirp using a setup based on a balanced Mach-Zehnder interferometer (MZI) (Kruger and Kruger, 1995). In this configuration, a waveform generator is used to modulate the transmitter under test. The optical signal is split equally into the two arms of a balanced MZI, which is formed by two optical fibers between two 3-dB fiber couplers. A small portion of the output optical signal from the MZI is detected and used as feedback to control the relative phase mismatch of the MZI arms, whereas the rest is sent to a wideband optical receiver and a high-speed sampling oscilloscope for the waveform measurement. To understand the operating principle, let us first consider the transfer function of an MZI, as described in Eq. (3.2.28).


Fig. 3.3.15 Time-domain measurement of transmitter chirp using MZI.

It is important to point out that in the application presented here, the MZI needs to operate in the coherent regime, where the differential time delay between the MZI arms is much shorter than the source coherence time (Δτ ≪ tcoh). Under this condition,

φ(t) − φ(t − Δτ) ≈ [dφ(t)/dt] Δτ = Δω(t)Δτ    (3.3.27)

where Δω(t) is the modulation-induced frequency deviation and Δτ is the relative time delay between the two MZI arms. Therefore, in the coherent regime, the MZI transfer function can be expressed as

I(t) = R{P1(t) + P2(t) + 2√(P1(t)P2(t)) cos[Δω(t)Δτ + Δφ]}    (3.3.28)

where P1(t) and P2(t) represent the powers of the modulated optical signal in the two MZI arms, R is the photodiode responsivity, and Δφ = ω0Δτ is the phase constant determined by the MZI setting, with ω0 the central frequency of the source. In the setup shown in Fig. 3.3.15, the differential arm length of the MZI can be adjusted by a piezo-electric transducer (PZT) so that the bias phase can be precisely tuned to the quadrature point, Δφ = π/2 + 2kπ, where k is an integer. At this particular bias point, the photocurrent waveform is

I(t) = IQ(t) = R{P1(t) + P2(t) − 2√(P1(t)P2(t)) sin[Δω(t)Δτ]}    (3.3.29)

The frequency chirp Δω(t) can then be expressed as

Δω(t) = (1/Δτ) sin⁻¹{[P1(t) + P2(t) − IQ(t)/R] / [2√(P1(t)P2(t))]}    (3.3.30)


Based on Eq. (3.3.30), to measure the frequency chirp Δω(t) we need to know P1(t), P2(t), IQ(t), and the photodiode responsivity R. The measurement procedure can be summarized as follows:
1. Measure the frequency-domain transfer function of the MZI using an optical spectrum analyzer; it should be a sinusoid. The period δω of this sinusoidal transfer function is related to the differential delay between the two arms by Δτ = 1/δω. This measurement may be accomplished with a CW wideband light source or with a tunable laser and a power meter.
2. Use a modulated transmitter and block the first arm of the MZI to measure the photocurrent waveform I2(t) generated by the second MZI arm: I2(t) = RP2(t).
3. Use the same modulated transmitter while blocking the second arm of the MZI to measure the photocurrent waveform I1(t) generated by the first MZI arm: I1(t) = RP1(t).
4. Convert Eq. (3.3.30) into the following form:

Δω(t) = δω · sin⁻¹{[I1(t) + I2(t) − IQ(t)] / [2√(I1(t)I2(t))]}    (3.3.31)

MZI transfer function

Then the time-dependent frequency deviation can be determined with single-arm currents I1(t), I2(t), quadrature current waveform IQ(t), and the period of the MZI δω. During the measurement, special attention has to be paid such that the same modulation data pattern and the same modulation index have to be maintained for step 2 and step 3. Although the measurement procedure we have described looks quite simple, the requirement of blocking of MZI arms may disturb the operating condition of the experimental setup. This is especially problematic if an all-fiber MZI is used; each reconnection of a fiber connector may introduce slightly different loss and therefore create measurement errors. On the other hand, since the phase of the MZI can be easily adjusted with the PZT phase control, the measurement procedure can be modified to avoid the necessity of blocking the MZI arms (Saunders et al., 1994). Again, referring to the MZI transfer function given by Eq. (3.3.28), if the MZI is biased at point A in Fig. 3.3.16, where Δφ ¼ 2mπ + π/2 and the slope is positive, the photocurrent waveform is δω

A

Fig. 3.3.16 Biasing a MZI in the positive and negative quadrature points.


Again, referring to the MZI transfer function given by Eq. (3.3.28), if the MZI is biased at point A in Fig. 3.3.16, where Δφ = 2mπ + π/2 and the slope is positive, the photocurrent waveform is

IQ+(t) = R{P1(t) + P2(t) + 2√(P1(t)P2(t)) sin[Δω(t)Δτ]}    (3.3.32)

On the other hand, if the MZI is biased at point B, where Δφ = 2mπ − π/2 and the slope is negative, the photocurrent waveform becomes

IQ−(t) = R{P1(t) + P2(t) − 2√(P1(t)P2(t)) sin[Δω(t)Δτ]}    (3.3.33)

In practice, the change of the bias point from A to B in Fig. 3.3.16 can be achieved by stretching the length of one of the two MZI arms by half a wavelength using the PZT. We can then define the following two new parameters:

IAM = [IQ+(t) + IQ−(t)]/2 = R{P1(t) + P2(t)}    (3.3.34)

IFM = [IQ+(t) − IQ−(t)]/2 = 2R√(P1(t)P2(t)) sin[Δω(t)Δτ]    (3.3.35)

where IAM depends only on the intensity modulation, whereas IFM depends on both the intensity and the frequency modulation. To find the frequency modulation Δω(t), we can combine Eqs. (3.3.34) and (3.3.35) so that

Δω(t) = δω · sin⁻¹{[IFM(t)/IAM(t)] · [P1(t) + P2(t)] / [2√(P1(t)P2(t))]}    (3.3.36)

If both fiber couplers are ideal 3-dB (50%) couplers, then P1(t) = P2(t) and Eq. (3.3.36) simplifies to

Δω(t) = δω · sin⁻¹{[IQ+(t) − IQ−(t)] / [IQ+(t) + IQ−(t)]}    (3.3.37)

This measurement is relatively simple and does not require blocking the MZI arms. In general, the fiber couplers may not be exactly 3 dB; in that case, we may have to use P2(t) = ηP1(t), where η has a value very close to unity for good couplers and determines the extinction ratio of the MZ interferometer:

Δω(t) = δω · sin⁻¹{[(1 + η)/(2√η)] · [IQ+(t) − IQ−(t)] / [IQ+(t) + IQ−(t)]}    (3.3.38)

Since η is a parameter of the fiber couplers, the value of (1 + η)/(2√η) can usually be calibrated easily. It is worthwhile to note that in chirp measurements using an MZI, the differential delay between the two interferometer arms, Δτ = 1/δω, is an important parameter. The following are a few considerations in selecting the Δτ value of the MZI (a short numerical sketch of the chirp recovery based on Eq. (3.3.37) follows the list):


1. Coherence requirement, which requires the MZI arm length difference to be short enough that Δτ ≪ tcoh. This is the necessary condition to justify the linear approximation of Eq. (3.3.27). Otherwise, if the MZI arm length difference is too long, the MZI is no longer coherent for an optical signal with a wide linewidth, and Eq. (3.3.28) is not accurate.
2. Sensitivity requirement, which favors a longer Δτ. Eqs. (3.3.32) and (3.3.33) show that the FM/AM conversion efficiency at the quadrature point is proportional to sin(ΔωΔτ). Increasing Δτ helps to increase the measured signal level IQ and improve the signal-to-noise ratio.
3. Frequency bandwidth requirement, which favors a short Δτ. This frequency bandwidth refers to the maximum frequency deviation Δωmax of the laser source that can be measured by the MZI. This maximum frequency deviation must be smaller than half the free spectral range, δω/2, of the MZI as illustrated in Fig. 3.3.16: Δωmax < δω/2 = 1/(2Δτ). Beyond this limit, the MZI transfer function would provide multiple solutions. In practice, to maintain a good FM/AM conversion efficiency, Δωmax < 1/(8Δτ) is usually required.
4. Modulation speed requirement, which also requires a short Δτ. Within the differential delay time Δτ, there should be negligible modulation-induced intensity variation of the optical signal. Therefore, the differential delay must be much smaller than the inverse of the maximum modulation frequency fm: Δτ ≪ 1/fm. As an example, for a 40 GHz modulation frequency the differential delay should be much shorter than 25 ps; a delay of a few picoseconds corresponds to an arm length difference of approximately 0.8 mm in air.
In the design of the chirp measurement, all four of these requirements must be considered and traded off. Fig. 3.3.17 shows an example of the measured waveforms of intensity modulation and the associated frequency modulation of an unbalanced LiNbO3 external modulator (Saunders et al., 1994). This waveform-dependent frequency chirp usually cannot be precisely characterized by small-signal frequency-domain measurements.
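As a small illustration of Eqs. (3.3.37) and (3.3.38), the sketch below recovers the chirp waveform Δω(t) from the two quadrature-biased photocurrent records; the function name and the clipping of the arcsine argument are illustrative choices.

```python
import numpy as np

def chirp_from_quadrature_waveforms(iq_plus, iq_minus, delta_omega_mzi, eta=1.0):
    """Recover the frequency chirp waveform from the two quadrature-biased
    MZI photocurrents (Eqs. 3.3.37 and 3.3.38).

    iq_plus, iq_minus : photocurrent waveforms at bias points A and B
    delta_omega_mzi   : period of the MZI transfer function, delta_omega = 1/delta_tau
    eta               : coupler imbalance P2 = eta * P1 (eta = 1 for ideal couplers)
    """
    iq_plus = np.asarray(iq_plus, float)
    iq_minus = np.asarray(iq_minus, float)
    ratio = (iq_plus - iq_minus) / (iq_plus + iq_minus)
    correction = (1.0 + eta) / (2.0 * np.sqrt(eta))   # equals 1 for ideal 3-dB couplers
    return delta_omega_mzi * np.arcsin(np.clip(correction * ratio, -1.0, 1.0))
```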

Fig. 3.3.17 Example of intensity modulation (dashed line) and frequency modulation (solid line) of a LiNbO3 intensity modulator. (Data extracted from Saunders et al., 1994.)

3.4 Wideband characterization of an optical receiver

In optical systems, an optical receiver converts the incoming signal from the optical domain to the electrical domain. An optical receiver usually consists of a photodetector and an electrical circuit for transimpedance amplification and signal manipulation. Important parameters of an optical receiver include the photodetector responsivity, bandwidth, flatness of the frequency response within the bandwidth, noise figure, linearity, and signal wavelength coverage. Optical receiver characterization and calibration are important for both optical communication and instrumentation, as they directly affect optical system performance and measurement accuracy. In this section, we discuss

techniques to characterize optical receivers, with a focus on the wideband characterization of their frequency response.

3.4.1 Characterization of photodetector responsivity and linearity

Photodetector responsivity (defined by Eq. (1.3.2) in Chapter 1) is a measure of the optical-to-electrical conversion efficiency of a photodetector and is usually expressed as the photocurrent (in mA) generated by each milliwatt of optical signal. Ideally, the responsivity R should be a constant, but in practice R can be a function of both the signal wavelength and the signal optical power. Though the wavelength dependency of the responsivity of a photodiode is typically weak, the power dependency of R determines the linearity of the photodetection, which may introduce saturation and high-order harmonics in the optical receiver. Fig. 3.4.1 shows a measurement setup to characterize the wavelength-dependent responsivity of a photodetector.


Fig. 3.4.1 Experimental setup to calibrate photodetector responsivity. TIA, trans-impedance amplifier.


A tunable laser is used to provide a wavelength-tunable light source, and a variable attenuator adjusts the optical power level sent to the photodetector under test. The photocurrent signal is usually quite weak, especially since the signal optical power level must be kept low to avoid the non-linear effects of photodetection; therefore, a transimpedance amplifier (TIA) must be used to linearly convert the photocurrent into a voltage signal. A voltmeter is then used to measure the voltage at the output of the TIA, which is proportional to the photocurrent generated by the photodetector. A calibrated optical power meter measures the absolute power level of the optical signal, providing the basis for calibrating the measurement setup. In this setup, by scanning the wavelength of the tunable laser, the wavelength-dependent responsivity of the photodetector can be easily characterized. In practice, the responsivity of a photodiode is relatively insensitive to changes in the signal wavelength, except for specially designed wavelength-selective photodetectors, and the linewidth of the tunable laser does not have to be very narrow. In fact, in most cases a wavelength resolution of 1 nm is fine enough to characterize a photodiode, and most tunable lasers have a limited wavelength tuning range (typically around 30 nm).
A source optical bandwidth of 30 nm corresponds to about 3750 GHz in the 1550 nm wavelength window, whereas the RF frequency range of interest for characterizing a photodetector is less than 100 GHz. Therefore, B0 − Δf ≈ B0, and

Ssp-sp(Δf) ≈ R²(Δf) ρ² B0    (3.4.8)

The frequency dependence of the measured RF noise spectral density is determined only by the frequency response R(Δf) of the photodetector. The block diagram of the measurement setup, shown in Fig. 3.4.6, is very simple. The photodetector directly detects the wideband optical noise and sends the converted electrical spectrum to an RF spectrum analyzer. One major disadvantage of this measurement technique is that the electrical noise spectral density generated by the photodetector, which carries the frequency response information, is usually very low. For example, for a 1 mW optical source with 30 nm bandwidth and a photodetector responsivity of 1 mA/mW, the spontaneous-spontaneous emission beat noise spectral density generated at the photodetector, which is the useful signal in this measurement, is approximately −157 dBm/Hz. This spectral density is close to the electrical noise floor of a typical wideband RF spectrum analyzer, and therefore the amplitude dynamic range of the measurement is severely limited. It is interesting to note from Eq. (3.4.8) that if the total power of the optical source is fixed at Pi, the spontaneous-spontaneous emission beat noise is inversely proportional to the optical bandwidth B0 of the source, as given by Eq. (3.4.9).


Fig. 3.4.6 Block diagram of photodetector bandwidth characterization using a wideband light source and an RF spectrum analyzer.


Ssp-sp(Δf) ≈ R²(Δf) Pi² / B0    (3.4.9)

For example, if the source optical power is 1 mW but the bandwidth is only 3 nm, then for a 1 mA/mW photodetector responsivity the spontaneous-spontaneous emission beat noise spectral density will be approximately −147 dBm/Hz, which is 10 dB higher than in the case of a 30 nm source bandwidth. Although reducing the source optical bandwidth may help increase the measurement amplitude dynamic range, the fundamental requirement B0 ≫ f0 has to be met for Eq. (3.4.9) to be valid, where f0 is the measurement electrical bandwidth. In general, the frequency response of a photodetector does not vary dramatically over a narrow frequency interval; therefore, discrete sampling in the frequency domain is sufficient. In this case, a Fabry-Perot (FP) filter can be used to select discrete and periodic frequency lines from a wideband ASE noise source. Fig. 3.4.7 shows the block diagram of the measurement setup, where a wide-spectrum LED is used to generate the spontaneous emission optical noise and an FP filter is used to select a frequency comb. An EDFA is then used to amplify the filtered optical spectrum so that the optical noise spectral density at the transmission peaks of the FP filter can be very high. There are two reasons for using a relatively low total optical power while having high peak optical powers at selected frequencies. First, the EDFA is usually limited by its total optical power; if the finesse of the FP filter is F, the peak spectral density at the transmission peaks is ρFP = ρF/π, where ρ is the uniform power spectral density without the FP filter. Second, a photodetector usually has a limit on the total input optical power to avoid saturation and damage. If the free spectral range of the FP filter is FSR, the mixing between the various discrete optical frequency components generates a series of discrete RF frequency components at the photodetector with a frequency interval of FSR. In this application, the FSR is the actual sampling interval in the frequency domain. The RF spectral density measured at the spectrum analyzer can be approximated as (Baney et al., 1994)

R2 ðf ÞP 2i F X 1 f k  FSR2 k B0 π 1+

(3.4.10)


Fig. 3.4.7 Block diagram of photodetector bandwidth characterization using an FP filter to increase measurement dynamic range.


where Δv is the FWHM width of the FP filter transmission peaks and B0 is the optical bandwidth of each FP transmission peak. Comparing Eq. (3.4.10) with Eq. (3.4.9), for the same total input optical power Pi to the photodetector, the peak RF spectral density is increased by a factor of F/π. For a typical fiber FP filter with a finesse of 500, an improvement in the measurement dynamic range of more than 20 dB can easily be obtained. Using this technique, B0 ≫ f0 can easily be maintained, and the measured RF noise amplitude can also be significantly increased. One important concern with this technique is the frequency resolution of the measurement. In the electrical domain, the frequency sampling interval is equal to the FSR of the FP interferometer, which can easily be made as fine as the megahertz regime by increasing the cavity length.
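As a quick numerical check of the two dynamic-range arguments above, the following sketch (a minimal Python example using only the numbers quoted in the text) evaluates the improvement obtained by narrowing the source bandwidth from 30 nm to 3 nm and the F/π enhancement of an FP filter with a finesse of 500.

import math

# Spontaneous-spontaneous beat noise spectral density scales as 1/B0 (Eq. 3.4.9),
# so narrowing the source optical bandwidth raises the measured RF noise floor.
B0_wide_nm, B0_narrow_nm = 30.0, 3.0
improvement_bandwidth_dB = 10 * math.log10(B0_wide_nm / B0_narrow_nm)

# With an FP filter of finesse F, the peak spectral density rises by F/pi (Eq. 3.4.10).
finesse = 500
improvement_fp_dB = 10 * math.log10(finesse / math.pi)

print(f"Bandwidth narrowing 30 nm -> 3 nm: +{improvement_bandwidth_dB:.1f} dB")
print(f"FP filter with finesse {finesse}: +{improvement_fp_dB:.1f} dB")

The first number reproduces the 10 dB figure mentioned above, and the second evaluates to about 22 dB, consistent with the "more than 20 dB" improvement quoted for a finesse of 500.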

3.4.4 Photodetector characterization using short optical pulses
Both the coherent heterodyne technique and the spontaneous-spontaneous emission noise beating technique we have discussed measure linear characteristics of the photodetector because they both characterize the small-signal frequency response. Short optical pulses can also be used to characterize photodetector response because of their wide spectral bandwidths. Unlike the frequency-domain measurement techniques we have described, the short-pulse technique measures the time-domain response. With the recent development of ultrafast fiber optics, picosecond actively mode-locked semiconductor lasers and femtosecond passively mode-locked fiber lasers are readily available. Fig. 3.4.8 shows the block diagram of an experimental setup to characterize a photodetector in the time domain, where the output of a femtosecond laser source is split into two parts by a beam splitter. One part is directly fed to the photodetector under test, which converts the short optical pulses into electrical pulses. A high-speed sampling oscilloscope is then used to measure the electrical signal waveform. The other part of the laser source is sent to an optical receiver, which measures the pulse arrival time and its repetition rate in order to trigger the sampling oscilloscope. In this case, the speed of the optical receiver only needs to be much faster than the repetition rate of the optical pulse train for triggering purposes.


Fig. 3.4.8 Block diagram of photodetector bandwidth characterization in the time domain using short optical pulses and a sampling oscilloscope.


Theoretically, the response of the photodetector can be obtained by comparing the shape of the output electrical pulses with that of the input optical pulses. In practice, however, the duration of the femtosecond optical pulses is usually much shorter than the time constant of the photodetector; therefore, the width of the input optical pulses can be neglected for most applications. For example, 100 fs optical pulses in the 1550 nm wavelength window have a bandwidth of approximately 1 THz, whereas the bandwidth of a photodetector is usually less than 100 GHz. Generally, the time-domain measurement provides more information than frequency-domain techniques because the frequency response of the photodetector can easily be obtained by a Fourier analysis of the time-domain waveform measured by the oscilloscope. The unique advantage of the time-domain technique is that it measures the actual waveform from the photodetector, which includes transient effects and saturation. For example, a photodiode may have different responses and time constants on the leading edge and on the falling edge of a signal pulse. Sometimes, although the input optical pulses are symmetrical, the output electrical pulses are not, showing a relatively short rise time and a much longer falling tail due to slow carrier transport outside the junction region of the photodiode. In addition, the power-dependent saturation effect of the photodetector can also be evaluated by measuring the change of the electrical waveform as the amplitude of the input optical pulses is increased. The major limitation of this time-domain measurement is perhaps the limited speed of the sampling oscilloscope. Sampling oscilloscopes with 50 GHz bandwidth are commercially available, whereas a sampling oscilloscope with a bandwidth higher than 100 GHz is difficult to find.
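The Fourier-analysis step mentioned above is straightforward to carry out numerically. The sketch below is a minimal illustration using a synthetic detector output pulse with a fast rise and a slower exponential tail, as described in the text; in an actual measurement, the sampled oscilloscope waveform would simply replace the synthetic array, and the rise/fall times used here are arbitrary placeholders.

import numpy as np

# Synthetic sampling-oscilloscope record: 1 ps sampling interval over a 2 ns window.
dt = 1e-12
t = np.arange(0, 2e-9, dt)

# Hypothetical photodetector response to a near-impulse optical pulse:
# ~5 ps rise and a ~20 ps falling tail (asymmetric, as discussed in the text).
pulse = (1 - np.exp(-t / 5e-12)) * np.exp(-t / 20e-12)

# The frequency response is the magnitude of the Fourier transform of the impulse response.
spectrum = np.abs(np.fft.rfft(pulse))
freq = np.fft.rfftfreq(len(pulse), d=dt)
response_dB = 20 * np.log10(spectrum / spectrum[0])

# Estimate the -3 dB bandwidth from the computed frequency response.
f3dB = freq[np.argmax(response_dB < -3)]
print(f"Estimated -3 dB bandwidth: {f3dB / 1e9:.1f} GHz")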

3.5 Characterization of optical amplifiers
The general characteristics of optical amplifiers have been introduced in Chapter 1. The desired performance of an optical amplifier may depend on the specific application. For example, in optical communication systems, a line amplifier for a multi-wavelength WDM system should have high gain over a wide bandwidth to accommodate a large number of wavelength channels. The gain over this bandwidth should be flat to ensure uniform performance for all the channels, and the noise figure should be low so that the accumulated ASE noise along the entire system with multiple line amplifiers is not too high. On the other hand, for a preamplifier used in an optical receiver, the noise figure is the most important concern. For an SOA used for optical signal processing, gain dynamics may be important to enable ultrahigh-speed operation of the optical system. The general characterization techniques and necessary equipment for EDFA and SOA measurements are similar except for some specific characteristics. In this section, we discuss a number of often-used optical amplifier characterization techniques for their important properties such as optical gain, gain dynamics, optical bandwidth, and optical noise.


3.5.1 Measurement of amplifier optical gain
Since the basic purpose of an optical amplifier is to provide optical gain in an optical system, the characterization of optical gain is one of the most important issues that determine the quality of an optical amplifier. In general, the optical gain of an optical amplifier is a function of both the signal wavelength and the signal optical power: G = G(λ, P). In addition, when operating at room temperature, both EDFA and SOA can be modeled as homogeneously broadened systems, and therefore, the optical gain at one wavelength can be saturated by the optical power at other wavelengths, which is commonly referred to as cross-gain saturation:

G(\lambda) = \frac{G_0(\lambda)}{1 + \sum_k P(\lambda_k)/P_{sat}(\lambda_k)}    (3.5.1)

where G0(λ) is the small-signal optical gain at wavelength λ, P(λk) is the optical power, and Psat(λk) is the saturation power at wavelength λk. The optical gain of an amplifier can be measured using an optical power meter, an optical spectrum analyzer, or even an electrical spectrum analyzer. The important concerns include a wide enough optical bandwidth of the source to cover the entire amplifier bandwidth, a calibrated signal power variation to accurately measure the saturation effect, and a way to remove the impact of the ASE noise, which might affect the accuracy of the optical gain measurement. A typical experimental setup to measure the gain spectrum of an optical amplifier uses a tunable laser and an optical power meter, as shown in Fig. 3.5.1. The tunable laser provides the optical signal that is injected into the optical amplifier through a calibrated variable optical attenuator (VOA) and an optical isolator. After passing through the EDFA under test, the amplified optical signal is measured by an optical power meter. Since the EDFA not only amplifies the optical signal but also produces wideband amplified spontaneous emission noise, known as ASE noise, a narrowband optical filter is used to select the optical signal and reject the ASE noise.


Fig. 3.5.1 Block diagram of EDFA gain measurement based on an optical power meter.


A calibration path consisting of an identical ASE noise filter and an optical power meter can usually be realized by simply removing the EDFA from the measurement path. In this setup, the wavelength-dependent nature of the optical amplifier gain can be measured by scanning the wavelength of the tunable laser across the wavelength window of interest. Of course, the ASE noise filter must be tunable as well so that it can be synchronized with the wavelength of the tunable laser source. Assume that the optical signal power at the input of the EDFA is Ps and the EDFA optical power gain is G; then the signal optical power at the EDFA output is GPs. Taking into account the ASE optical noise produced by the EDFA with power spectral density ρ, and assuming that the ASE noise is wideband, the total ASE noise power after the narrowband ASE optical filter will be

P_{ASE} = \int_{-B_0/2}^{B_0/2} \rho(f)\,df \approx B_0\rho    (3.5.2)

where B0 is the ASE filter optical bandwidth. Then the total optical power that enters the power meter is approximately

P_{measured} = GP_s + B_0\rho    (3.5.3)

The measured optical gain is therefore

G_{measured} \equiv \frac{P_{measured}}{P_s} = \frac{GP_s + B_0\rho}{P_s} = G + B_0\rho/P_s    (3.5.4)

In this expression, the second term is the measurement error introduced by the residual ASE noise. This error term is proportional to the output ASE noise divided by the input signal optical power. Since B0ρ/Ps = G(B0ρ/Pout), where Pout = GPs is the output signal power, this term is in fact proportional to the optical gain of the amplifier. Nevertheless, because the optical gain G is usually a large number, it is more relevant to evaluate the relative error, which is (1 + B0ρ/Pout). To ensure the accuracy of the measurement, the input optical signal power Ps cannot be too small; otherwise, the second term in Eq. (3.5.4) would be significant. On the other hand, because of the gain saturation effect in the optical amplifier, the optical gain G is typically a function of the input optical power Ps. Practically, to measure the small-signal optical gain of the amplifier, the input signal optical power has to be small enough to avoid gain saturation. This usually requires the signal input optical power to be Ps ≪ Psat/G, or simply Pout = GPs ≪ Psat, where Psat is the saturation optical power at the signal wavelength. Combining the ASE noise limitation and the nonlinearity limitation, the input signal optical power should be chosen such that B0ρ/G ≪ Ps ≪ Psat/G, or equivalently B0ρ ≪ Pout ≪ Psat. Small-signal gain is the most fundamental property of an optical amplifier; it is typically a function of wavelength and, by definition, it is not a function of the signal optical power (because the signal is assumed to be very small).
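The choice of input signal power can be checked numerically. The sketch below is a minimal example using the illustrative EDFA values of the next paragraph; it estimates the filtered ASE power B0ρ with ρ_ASE = 2nsp·hf·(G−1) (derived later in Section 3.5.3) and prints the resulting bounds B0ρ/G ≪ Ps ≪ Psat/G.

import math

h, c = 6.626e-34, 3e8
lam = 1550e-9                    # assumed signal wavelength
f = c / lam

# Illustrative EDFA parameters (same as the example that follows in the text)
NF_dB, G_dB, Psat_dBm = 5.0, 30.0, 13.0
B0_nm = 1.0                      # ASE filter optical bandwidth

G = 10 ** (G_dB / 10)
two_nsp = 10 ** (NF_dB / 10)     # linear noise figure ~ 2*nsp for large gain (see Section 3.5.6)
B0_Hz = c * (B0_nm * 1e-9) / lam**2

# ASE power reaching the power meter: B0*rho with rho = 2*nsp*h*f*(G-1)
P_ASE = two_nsp * h * f * (G - 1) * B0_Hz
dBm = lambda p: 10 * math.log10(p / 1e-3)

print(f"ASE power in {B0_nm} nm filter: {dBm(P_ASE):.1f} dBm")
print(f"Lower bound  B0*rho/G : {dBm(P_ASE / G):.1f} dBm")
print(f"Upper bound  Psat/G   : {Psat_dBm - G_dB:.1f} dBm")

Running this reproduces the numbers used in the following example: roughly −13 dBm of filtered ASE power and an allowed input power range of about −43 dBm to −17 dBm.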


As an example, for a typical EDFA with a 5 dB noise figure, 13 dBm saturation power, and 30 dB small-signal gain, if the ASE filter has an optical bandwidth of 1 nm, the ASE noise optical power that reaches the power meter is approximately −13 dBm. Therefore, the input signal optical power for this measurement should satisfy −43 dBm ≪ Ps ≪ −17 dBm, and a signal optical power on the order of Ps = −30 dBm should be appropriate. In most practical applications, the signal optical power may not satisfy the small-signal condition. In this case, the measured optical gain is referred to as large-signal gain, or sometimes static gain. The large-signal optical gain spectrum can be measured using the same setup as shown in Fig. 3.5.1, where the tunable laser provides a constant but high optical power across the wavelength range of interest. In general, the large-signal optical gain G = G(λ, P) is a function of both wavelength and signal optical power. It is important to note that the wavelength dependency of the large-signal optical gain is usually different from that of the small-signal optical gain. The reason is that, because the optical gain is wavelength dependent, the output optical power varies with wavelength even though the input signal power from the tunable laser is constant, and the saturation effect is stronger at wavelengths where the gain is higher. The measurement setup shown in Fig. 3.5.1 can also be modified by replacing the optical filter and optical power meter with an optical spectrum analyzer, as shown in Fig. 3.5.2. In this case, the tunable ASE noise filter is no longer required because the spectral resolution of an OSA is determined by the bandwidth of an internal tunable optical filter of the OSA. Therefore, the major advantage of using an OSA is the adjustable resolution bandwidth, which allows the measurement of OSNR at different optical signal wavelengths. In addition, a high-quality OSA also provides a much higher dynamic range in the measurement than an optical power meter combined with a fixed-bandwidth tunable optical filter.

Fig. 3.5.2 Block diagram of EDFA gain measurement based on an optical spectrum analyzer (OSA).


Fig. 3.5.3 Block diagram of EDFA gain measurement based on an electrical spectrum analyzer (ESA).

The major advantage of optical gain measurement using an optical power meter or an OSA is its simplicity and the low bandwidth requirement for the electronics. However, the measurement results may be sensitive to the ASE noise created by the optical amplifier itself, especially when the signal level is very small. For example, for an OSA with a spectral resolution of 0.1 nm, if the ASE noise spectral density of the EDFA is −20 dBm/nm, the total ASE noise power within the resolution bandwidth is −30 dBm. In this case, if the signal optical power at the EDFA output is less than −20 dBm for small-signal gain measurement, there will be a significant error (>1 dB) because ASE noise is added to the signal. Another way to measure EDFA gain is to use an electrical spectrum analyzer (ESA), which is able to provide a much finer spectral resolution and thus collects much less ASE noise power. The experimental setup using an ESA is shown in Fig. 3.5.3. In this measurement, the output power of a tunable laser is modulated by a sinusoid at frequency Ω. After the modulated optical signal is amplified, it is detected by a photodetector and measured by an electrical spectrum analyzer. By measuring the amplitude of the RF tone at the modulation frequency Ω and comparing it with the value measured at the reference arm (without the EDFA), the optical gain of the EDFA can be evaluated. In the gain calculation, since the photocurrent measured by the photodetector is proportional to the optical power, the RF power is proportional to the square of the received optical power. Therefore, 1 dB of RF power change corresponds to a 0.5 dB change in the EDFA optical gain. Using the notation in Fig. 3.5.3, Pe/Pe0 = G², where Pe and Pe0 are the RF powers measured after the EDFA and at the reference arm (without the EDFA), respectively. Since an ESA can have a much finer spectral resolution (as low as 30 Hz) than an OSA, the contribution from the wideband ASE noise can be made negligible. The modulation frequency in this measurement does not have to be very high. It is sufficient as long as the modulated signal can be easily measured by an ESA and is away from the very low-frequency region where the background noise of an ESA is high. A few tens of megahertz should be appropriate.
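Because the detected RF power scales with the square of the optical power, converting the ESA reading into optical gain is a one-line calculation. The sketch below is a minimal helper; the function name and example numbers are illustrative only, not from the text.

def edfa_gain_dB(P_e_dBm: float, P_e0_dBm: float) -> float:
    """Optical gain from the RF tone powers measured with (P_e) and without (P_e0)
    the EDFA in the path: Pe/Pe0 = G^2, so the gain in dB is half the RF power difference."""
    return (P_e_dBm - P_e0_dBm) / 2


# Example: the RF tone grows by 40 dB when the EDFA is inserted -> 20 dB optical gain.
print(edfa_gain_dB(-20.0, -60.0))  # 20.0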

3.5.2 Measurement of static and dynamic gain tilt
As we just discussed, the small-signal gain of an optical amplifier can be measured with a very weak optical signal, whereas large-signal optical gain can be evaluated when the output optical signal is comparable to the saturation power of the optical amplifier. In general, both small-signal and large-signal optical gains are functions of signal wavelength.


Gain flatness of an optical amplifier is one of the most important parameters, especially when the amplifier is used for WDM operation. Gain tilt is the slope of the optical gain across a certain wavelength window, which is an indication of the gain flatness of the amplifier.

3.5.2.1 Static gain tilt
The static gain tilt can be sub-categorized into small-signal and large-signal gain tilts. By definition, the small-signal gain tilt near a central wavelength λ0 and within a wavelength window 2Δλ is

m_{sm}(\lambda_0) = \frac{G_{sm}(\lambda_0 + \Delta\lambda) - G_{sm}(\lambda_0 - \Delta\lambda)}{2\Delta\lambda}    (3.5.5)

where G_{sm}(\lambda_0 \pm \Delta\lambda) is the small-signal gain at wavelengths λ0 ± Δλ. Obviously, to obtain the small-signal gain tilt, the optical signal power has to be much lower than the saturation power of the amplifier. Likewise, the large-signal gain tilt is defined as the gain slope around wavelength λ0 and within a wavelength window 2Δλ but with the input signal optical power Pin, that is,

m_{st}(\lambda_0, P_{in}) = \frac{G_{st}(\lambda_0 + \Delta\lambda, P_{in}) - G_{st}(\lambda_0 - \Delta\lambda, P_{in})}{2\Delta\lambda}    (3.5.6)

where G_{st}(\lambda_0 \pm \Delta\lambda, P_{in}) is the large-signal optical gain at wavelengths λ0 ± Δλ with input signal optical power Pin. Both the small-signal gain tilt and the large-signal gain tilt are considered static gain tilt, which involves only one wavelength-tunable optical signal at the input. Therefore, the static gain tilts can be measured with the simple experimental setups shown in Fig. 3.5.1 or Fig. 3.5.2.

3.5.2.2 Dynamic gain tilt
In practical optical systems, multiple optical signals are usually present, and cross-gain saturation may occur between them. The optical gain spectrum is likely to change if a strong optical signal at a certain fixed wavelength is injected into the optical amplifier. Fig. 3.5.4 shows the experimental setup to measure the gain spectrum change of an optical amplifier caused by the presence of a strong optical signal, commonly referred to as the holding beam. In this measurement, a strong holding beam with power Ph at a fixed wavelength λh is combined with a wavelength-tunable optical probe with a weak power Ps before they are fed into the optical amplifier. By sweeping the wavelength of the small-signal optical probe, the gain spectrum can be measured at various power levels of the holding light. Since the holding beam usually has a much higher power than the small-signal tunable probe (Ph ≫ Ps), this system basically measures the small-signal gain spectrum under a strong holding light.


Fig. 3.5.4 Block diagram of an optical amplifier dynamic gain tilt meter.


Fig. 3.5.5 Small-signal optical gain spectrum of an EDFA with the existence of a holding beam at 1550 nm at various power levels.

In this case, the total output optical power of the amplifier is essentially determined by the holding beam: Pout ≈ G(λh)Ph. Fig. 3.5.5 shows an example of the measured small-signal gain spectrum of an EDFA under the influence of a strong holding beam. In this example, the EDFA is made of 35 m of erbium-doped fiber (Lucent HE20P), and the pump power at 980 nm is 75 mW. A holding beam at 1550 nm wavelength is injected into the EDFA together with the small-signal probe. By increasing the power Ph of the holding beam from −30 dBm to −10 dBm, not only is the overall optical gain level decreased, but the shape of the gain spectrum is also changed; this effect is commonly known as the dynamic gain tilt.
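A rough feel for the gain compression caused by a holding beam can be obtained directly from the cross-gain saturation model of Eq. (3.5.1). The sketch below uses purely hypothetical values of the small-signal gain and of an input-referred saturation power; note that Eq. (3.5.1) by itself predicts a compression that is uniform (in dB) across wavelength, so the change of spectral shape seen in Fig. 3.5.5 additionally reflects the longitudinal redistribution of carrier density discussed below.

import math

G0_dB = 35.0      # assumed small-signal gain at the probe wavelength
Psat_mW = 0.05    # assumed input-referred saturation power at the holding wavelength

for Ph_dBm in (-30, -25, -20, -15, -10):     # holding-beam powers used in Fig. 3.5.5
    Ph_mW = 10 ** (Ph_dBm / 10)
    G = 10 ** (G0_dB / 10) / (1 + Ph_mW / Psat_mW)   # Eq. (3.5.1) with one saturating channel
    print(f"Ph = {Ph_dBm:4d} dBm  ->  probe gain {10 * math.log10(G):5.1f} dB")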


Dynamic gain tilt is defined as the slope of the small-signal gain around a central wavelength λ0, within a wavelength range 2Δλ, and in the presence of a strong holding light with power Ph:

m_{dy}(\lambda_0, P_h) = \frac{G_{sm}(\lambda_0 + \Delta\lambda, P_h) - G_{sm}(\lambda_0 - \Delta\lambda, P_h)}{2\Delta\lambda}    (3.5.7)

where G_{sm}(\lambda_0 \pm \Delta\lambda, P_h) is the small-signal gain at wavelengths λ0 ± Δλ with a holding beam of optical power Ph. The physical mechanism behind the cross-gain saturation effect is the homogeneous broadening of the gain medium in the amplifier. Homogeneous broadening originates from the fact that the carrier density in the upper energy band tends to maintain a thermal equilibrium distribution across the energy band. After photons of a certain wavelength are released through stimulated emission, the carrier density at the corresponding energy position is reduced due to band-to-band transitions. As a result, carriers at other energy positions quickly move in, and therefore, the carrier density distribution across the energy band tends to remain unchanged. Because of the homogeneous broadening, a large signal at any wavelength within the gain bandwidth can saturate the amplifier optical gain over the entire gain bandwidth. It is important to note that both EDFA and SOA have homogeneously broadened gain media. The major difference between them is the carrier lifetime, which determines the speed of saturation. The typical carrier lifetime of an EDFA is on the order of 1 to 10 ms, whereas the carrier lifetime of an SOA is on the nanosecond or even picosecond level. In optical communication systems with data rates higher than megabits per second, the dynamic gain tilt in an EDFA is determined by the average power of the optical signal at each wavelength, whereas an SOA may suffer severe inter-symbol crosstalk between channels due to its fast carrier response. (We will discuss the measurement of gain dynamics later.) Though homogeneous broadening is responsible for cross-gain saturation in an optical amplifier, spectral hole burning is another effect that tends to reduce cross-gain saturation. Spectral hole burning happens when carriers at other energy levels within the energy band cannot move in fast enough to fill the vacancy caused by strong stimulated recombination at a certain energy level. In this case, thermal equilibrium is not reached, and the carrier density at a particular energy level can be much lower than at the adjacent energy levels. This can be caused by a strong optical signal at a certain wavelength, which strongly saturates the optical gain at that particular wavelength. In fact, spectral hole burning is a desired property for WDM optical system applications because it reduces nonlinear crosstalk between different wavelength channels. At room temperature, the effect of spectral hole burning is very weak; however, at a sufficiently low temperature, intra-band relaxation can be greatly reduced and spectral hole burning can be significantly increased.


Crosstalk-free operation of an EDFA has been demonstrated by cooling the EDF below 70 K (Goldstein et al., 1993). The same analogy can be used to explain polarization hole burning, which is related to the spin direction of the electrons. In addition to homogeneous broadening and spectral hole burning, in optical amplifiers, especially in fiber-based amplifiers, the carrier density also changes along the longitudinal direction of the amplifier. In fact, in an EDFA, a strong holding beam not only reduces the upper-level carrier density but also changes the carrier density distribution along the erbium-doped fiber, which is the major reason for the change in the gain spectrum profile.

3.5.3 Optical amplifier noise
In an optical amplifier, amplification is supported by stimulated emission in a gain medium with carrier inversion. At the same time, the inverted carrier population in the gain medium not only supports coherent stimulated emission, which amplifies the incoming optical signal; the carriers also recombine spontaneously to generate spontaneous emission photons. These spontaneously emitted photons are not coherent with the input signal, and they constitute optical noise. To make things worse, the spontaneous emission photons are also amplified by the gain medium while they travel through the optical amplifier; thus the optical noise generated by an optical amplifier is commonly known as amplified spontaneous emission (ASE). By its nature, ASE noise is random in wavelength, phase, and state of polarization. Generally, the level of ASE noise produced by an optical amplifier depends on both the optical gain and the level of carrier inversion in the gain medium. The ASE noise power spectral density, with the unit of [W/Hz], can be expressed as

\rho_{ASE}(\lambda) = 2n_{sp}\frac{hc}{\lambda}\left[G(\lambda) - 1\right]    (3.5.8)

In this expression, G(λ) is the optical gain of the amplifier at wavelength λ, h is Planck's constant, c is the speed of light, and the factor 2 indicates two orthogonal polarization states. nsp is the unitless spontaneous emission factor that is a function of the carrier inversion level in the gain medium and is defined by

n_{sp} = \frac{\sigma_e N_2}{\sigma_e N_2 - \sigma_a N_1}    (3.5.9)

where σe and σa are the emission and absorption cross sections of the erbium-doped fiber, and N1 and N2 are the carrier densities at the lower and upper energy levels, respectively. In general, nsp ≥ 1, and the minimum value of 1 is reached when the carrier inversion is complete, with N1 = 0 and N2 = NT, where NT is the total doping density of erbium ions in the fiber core. This expression is simple, but it is accurate only if both N1 and N2 are constant along the fiber length, which is usually not true. Although N1 + N2 = NT is always constant, the ratio between N1 and N2 usually varies along the fiber. In general,


if N1(z) and N2(z) are functions of z, the spontaneous emission factor can be generalized in an integral form:

n_{sp} = \frac{\displaystyle\int_0^L \sigma_e N_2(z)\,dz}{\displaystyle\int_0^L \sigma_e N_2(z)\,dz - \int_0^L \sigma_a N_1(z)\,dz}    (3.5.10)

Obviously, since both the optical gain and the cross-section values are functions of wavelength, the ASE noise spectral density is also a function of wavelength. Within a certain optical bandwidth B0, the total noise optical power can be calculated by integration:

P_{ASE} = \int_{-B_0/2}^{B_0/2} \rho_{ASE}(\lambda)\,d\lambda    (3.5.11)

If the optical bandwidth is narrow enough and the ASE noise spectral density is flat over this bandwidth, the noise power will be directly proportional to the optical bandwidth, and Eq. (3.5.11) can be simplified as

P_{ASE} \approx 2n_{sp}hf\left[G(f) - 1\right]B_0    (3.5.12)

For an optical amplifier, the measurement of optical gain spectrum G(λ) was discussed in the last section. But this is not enough to accurately calculate ASE noise spectral density, because the spontaneous emission factor nsp is a function of both wavelength and the carrier inversion level. Experimental methods of ASE noise measurement are extremely important in the characterization of optical amplifiers.
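To make the preceding expressions concrete, the sketch below numerically evaluates the length-averaged nsp of Eq. (3.5.10) for an assumed, purely illustrative inversion profile N2(z) and cross-section values, and then converts it into an ASE spectral density and the ASE power within a 1 nm bandwidth using Eqs. (3.5.8) and (3.5.12). All profile and parameter values are placeholders, not data from the text.

import numpy as np

h, c = 6.626e-34, 3e8
lam = 1550e-9

# Hypothetical EDF parameters and inversion profile along a 10 m fiber
L = 10.0
z = np.linspace(0.0, L, 1000)
dz = z[1] - z[0]
sigma_e, sigma_a = 5e-25, 3e-25      # emission/absorption cross sections (m^2), assumed
N_T = 1e25                           # total Er ion density (m^-3), assumed
N2 = N_T * (0.9 - 0.03 * z)          # upper-level density decaying along the fiber
N1 = N_T - N2

# Length-averaged spontaneous emission factor, Eq. (3.5.10)
num = np.sum(sigma_e * N2) * dz
den = num - np.sum(sigma_a * N1) * dz
nsp = num / den

# ASE spectral density (Eq. 3.5.8) and power within a 1 nm bandwidth (Eq. 3.5.12)
G_dB = 30.0
G = 10 ** (G_dB / 10)
rho_ase = 2 * nsp * (h * c / lam) * (G - 1)      # W/Hz, both polarizations
B0 = c * 1e-9 / lam**2                            # 1 nm expressed in Hz
P_ase_dBm = 10 * np.log10(rho_ase * B0 / 1e-3)

print(f"nsp = {nsp:.2f}, rho_ASE = {rho_ase:.2e} W/Hz, P_ASE(1 nm) = {P_ase_dBm:.1f} dBm")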

3.5.4 Optical domain characterization of ASE noise
Since ASE noise is a random process, it is relatively easy to characterize by its statistical values, such as the optical power spectral density, which can be measured by an optical spectrum analyzer, as shown in Fig. 3.5.6.

Fig. 3.5.6 ASE noise measurement using an OSA.


By comparing the optical spectra measured before and after the optical amplifier, we can evaluate both the optical gain and the optical noise spectral density. Although this experimental setup is simple and the measurement is easy to perform, the accuracy is often affected by the linewidth of the optical signal and the limited dynamic range of the OSA. In practice, an optical signal has a Lorentzian line shape; if the OSNR is high enough, the measured linewidth of the optical signal can be quite wide at the ASE noise level, which is many dB below the peak, as illustrated by Fig. 3.5.7. As a consequence, the ASE noise spectral density at the optical signal peak wavelength cannot be directly measured because it is hidden behind the strong optical signal. Using an OSA with a finer spectral resolution would not help. The reason is that the ASE noise level is measured by its spectral density, which is the noise power per resolution bandwidth. Since the optical signal from a laser diode is usually narrowband, decreasing the OSA resolution bandwidth only increases the amplitude difference between the signal peak and the measured noise level on the OSA. In general, it is more difficult to accurately evaluate the noise level when the OSNR is very high. If the ASE noise spectral density is known to be flat within the bandwidth covered by the optical signal, the measurement should be accurate, but this is not guaranteed. In most cases, the characteristics of the noise are not well known, and the accuracy of the measurement may become a significant concern when the spectral width of the optical signal is too wide. One simple way to overcome this problem is to interpolate between the noise levels measured on each side of the optical signal, shown as points A and B in Fig. 3.5.7. This allows us to estimate the noise spectral density at the signal peak wavelength. Another way to find the noise spectral density at the wavelength of a strong optical signal is to use a polarizer to eliminate the optical signal, as illustrated by Fig. 3.5.8. This measurement is based on the assumption that the optical signal is polarized (its degree of polarization is 100%), whereas the ASE noise is unpolarized.

Fig. 3.5.7 Illustration of ASE power spectral density measured by an OSA.


Fig. 3.5.8 ASE noise measurement using an OSA and a polarizer to null the strong optical signal. PC, polarization controller.

If this is true, a polarizer always blocks 50% of the ASE noise regardless of its orientation. Meanwhile, the polarizer can completely block the single-polarization optical signal if the polarization state of the optical signal is adjusted to be orthogonal to the principal axis of the polarizer. In this measurement, the polarization state of the optical signal is first adjusted to fully pass through the polarizer so that its power GPs can be measured. The second step is to completely block the optical signal by adjusting the polarization controller; in this case, the ASE noise level ρ′ASE(λs) at exactly the signal wavelength λs can be measured, which is in fact 3 dB lower than its actual value. The OSNR can then be calculated as OSNR(λs) = GPs/[2ρ′ASE(λs)].
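The arithmetic of the polarization-nulling method can be written out explicitly. The sketch below is a minimal example with placeholder OSA readings; the factor of 2 (3 dB) restores the half of the ASE noise that is blocked by the polarizer.

import math

# Two OSA readings taken within the same resolution bandwidth at the signal wavelength:
P_signal_dBm = -5.0    # polarizer aligned with the signal: measures G*Ps
P_noise_dBm = -38.0    # signal nulled: this is rho'_ASE, 3 dB below the true ASE level

# OSNR(lambda_s) = G*Ps / (2 * rho'_ASE), i.e., add 3 dB back to the nulled noise reading
osnr_dB = P_signal_dBm - (P_noise_dBm + 10 * math.log10(2))
print(f"OSNR = {osnr_dB:.1f} dB (referred to the OSA resolution bandwidth)")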

3.5.5 Impact of ASE noise in electrical domain
Although the noise performance of an optical amplifier can be characterized in the optical domain, as described in the last section, it can also be characterized in the electrical domain after the ASE noise is detected by a photodetector, as shown in Fig. 3.5.9. In general, both the spectral resolution and the dynamic range of an electrical spectrum analyzer can be several orders of magnitude better than those of an optical spectrum analyzer, so an electrical-domain measurement should be able to provide better accuracy. In addition, since in an optical system the optical signal will eventually be converted into the electrical domain through a photodiode, the electrical domain SNR would be the ultimate parameter for many applications. Understanding the relationship between the optical noise spectral density and the corresponding electrical SNR is essential in the design and performance characterization of optical receivers.


Fig. 3.5.9 ASE noise measurement in an electrical domain.



In the system shown in Fig. 3.5.9, the photodiode performs square-law detection, and the photocurrent is

I = RP = R\left|E_{sig} + E_{noise}\right|^2    (3.5.13)

where Esig and Enoise are the fields of the amplified optical signal and the optical noise, respectively, R = ηq/(hf) is the responsivity, η is the quantum efficiency, and f is the optical frequency. In the electrical domain, in addition to thermal noise and shot noise, the ASE noise created by the optical amplifier introduces two additional noise terms, often referred to as signal-spontaneous emission beat noise and spontaneous-spontaneous beat noise. The origins of thermal noise and shot noise were described in Section 1.3.4. In an optical system with amplifiers, the shot noise is generated by both the optical signal and the ASE noise within the receiver optical bandwidth B0. The shot noise power spectral density (single-sideband in the electrical domain) can be expressed as

S_i(f) = 2qR\langle P_{out} + B_0\rho_{ASE}\rangle    (3.5.14)

where ρASE is the optical spectral density of the ASE noise, Pout = GPs = |Esig|² is the output optical power of the optical amplifier, G is the amplifier gain, and Ps is the input signal optical power to the amplifier. The total shot noise power generated by the photodiode within a receiver bandwidth Be is

P_{shot} = \int_0^{B_e} S_i(f)\,df = 2qR\langle GP_s + B_0\rho_{ASE}\rangle B_e    (3.5.15)

Since the average electrical power converted from the optical signal and the ASE noise is \langle P_e\rangle = R^2\langle GP_s + B_0\rho_{ASE}\rangle^2, the relative intensity noise (RIN) measured after the photodiode can be found as

RIN_e = \frac{S_i(f)}{\langle P_e\rangle} = \frac{2qR\langle GP_s + B_0\rho_{ASE}\rangle}{R^2\langle GP_s + B_0\rho_{ASE}\rangle^2}    (3.5.16)

This is the ratio between the noise power spectral density and the average signal power (converted from the optical signal). The unit of RINe is [Hz⁻¹]. When the optical SNR is high enough, GPs ≫ B0ρASE, the expression for RINe can be simplified to

RIN_e \approx \frac{2q}{R\langle GP_s\rangle} = \frac{2hf}{\eta\langle GP_s\rangle}    (3.5.17)

Note that this electrical domain relative intensity noise RINe depends on the photodiode quantum efficiency η. It is sometimes convenient to define a RIN parameter that is independent of the photodetector characteristics. Thus, an optical RIN can be defined as RINo = η·RINe, and this optical RIN is independent of the photodiode quantum efficiency η:

RIN_o = \frac{2hf}{\langle GP_s\rangle}    (3.5.18)

This optical RIN can also be interpreted as the electrical RIN when the photodiode quantum efficiency η is 100%. The noise level measured in the electrical domain after photodetection consists of several components generated by different mixing processes in the photodiode. In addition to the noise terms directly generated by the photodiode, as described in Chapter 1, there will be signal-spontaneous emission beat noise and spontaneous-spontaneous emission beat noise, which are unique to receivers in optically amplified systems.

3.5.5.1 Signal-spontaneous emission beat noise
Signal-spontaneous emission beat noise is generated by the mixing between the amplified optical signal and the ASE noise in the photodiode. Therefore, signal-spontaneous beat noise exists only in the electrical domain after photodetection. As illustrated in Fig. 3.5.10, at the photodiode, the amplified optical signal GPs mixes with a bin of ASE noise with spectral density ρ(Δω), which is separated from the optical signal by Δω in the optical frequency domain. The electrical field at the input of the photodiode is

E_i(\Delta\omega) = \sqrt{GP_s}\exp(j\omega t) + \sqrt{\rho(\Delta\omega)}\exp\left(j(\omega - \Delta\omega)t\right)    (3.5.19)

where ω is the optical frequency of the carrier. Similar to coherent detection, the photocurrent is

I_i(\Delta\omega) = R|E_i|^2 = R\left\{GP_s + \rho(\Delta\omega) + 2\sqrt{GP_s\,\rho(\Delta\omega)}\cos(\Delta\omega t)\right\}    (3.5.20)

The first and the second terms of Eq. (3.5.20) are both DC components, whereas the last term is a time-varying photocurrent at frequency Δω. Therefore, the RF power spectral density at frequency Δω can be found as


S_{i,DSB}(\Delta\omega) = 2R^2 GP_s\,\rho(\Delta\omega)    (3.5.21)


Fig. 3.5.10 Illustration of signal-spontaneous beat noise generation.


This is a double-sideband RF spectral density because ρ(Δω) and ρ(−Δω) produce the same amount of RF beating noise; therefore, the single-sideband spectral density of the signal-spontaneous beat noise is

S_{i,SSB}(\Delta\omega) = 4R^2 GP_s\,\rho(\Delta\omega)    (3.5.22)

Considering that the optical signal is usually polarized while the ASE noise is unpolarized, only half the ASE power can coherently mix with the optical signal. In this case, the actual signal-spontaneous emission beat noise RF spectral density is

S_{s-sp}(\Delta\omega) = 2R^2 GP_s\,\rho(\Delta\omega)    (3.5.23)

Considering the contributions of both the amplified optical signal and the ASE noise, the average power in the electrical domain after photodetection is

\langle P_e\rangle = R^2\left[GP_s + \int_{-B_0/2}^{B_0/2}\rho_{ASE}(\Delta\omega)\,d(\Delta\omega)\right]^2 = R^2\left[GP_s + P_{ASE}\right]^2    (3.5.24)

where P_{ASE} = \int_{-B_0/2}^{B_0/2}\rho_{ASE}(\Delta\omega)\,d(\Delta\omega) is the total spontaneous emission noise power within the optical bandwidth B0. Thus, the signal-spontaneous beat noise-limited RIN is

RIN_{s-sp}(\Delta\omega) = \frac{S_{s-sp}(\Delta\omega)}{\langle P_e\rangle} = \frac{2GP_s\,\rho(\Delta\omega)}{\left[GP_s + P_{ASE}\right]^2}    (3.5.25)

3.5.5.2 Spontaneous-spontaneous beat noise spectral density
Though signal-spontaneous emission beat noise is generated by the mixing between the optical signal and the ASE noise, spontaneous-spontaneous beat noise is generated by the beating between different frequency components of the ASE noise at the photodiode. Like signal-spontaneous beat noise, spontaneous-spontaneous beat noise exists only in the electrical domain. Fig. 3.5.11 illustrates that the beating between two frequency components ρ(ω) and ρ(ω + Δω) in a photodetector generates an RF component at Δω. Assuming that the optical bandwidth of the ASE noise is B0 and the ASE noise spectral density is constant within this optical bandwidth, as shown in Fig. 3.5.12A, the spontaneous-spontaneous beat noise generation process can be described by a self-convolution of the ASE noise optical spectrum.


Fig. 3.5.11 Illustration of spontaneous-spontaneous beat noise generation between two ASE frequency components.


Fig. 3.5.12 Illustration of spontaneous-spontaneous beat noise generation.

The size of the shaded area in Fig. 3.5.12A is a function of the frequency separation Δω, and the result of the correlation is a triangle function of Δω, as shown in Fig. 3.5.12B. Another important consideration in calculating the spontaneous-spontaneous beat noise is polarization. In general, the ASE noise is unpolarized, with half its energy in the TE mode and half in the TM mode, whereas optical mixing happens only between noise components in the same polarization. The single-sideband spontaneous-spontaneous beat noise electrical spectral density can be found as

S_{sp-sp}(\Delta\omega) = \frac{R^2\rho^2}{2}\left(B_0 - \Delta\omega\right)    (3.5.26)

Again, the unit of the spontaneous-spontaneous beat noise is [A²/Hz]. In most cases, the electrical bandwidth is much less than the optical bandwidth (Δω ≪ B0); therefore, Eq. (3.5.26) is often expressed as

S_{sp-sp} \approx R^2\rho^2 B_0/2    (3.5.27)

which is a white noise within a reasonable RF receiver bandwidth. It is important to notice that in some textbooks, the RF spectral density of spontaneous-spontaneous beat noise is expressed as S_{sp-sp} = 2R^2\rho^2 B_0, which seems to be four times higher than that predicted by Eq. (3.5.27). The reason for this discrepancy is their use of the single-polarization ASE noise spectral density ρ = n_{sp}hf[G(f) − 1], which is half of the unpolarized ASE noise spectral density given by Eq. (3.5.12). We can also define a spontaneous-spontaneous beat noise-limited relative intensity noise as RIN_{sp-sp}(Δω) = S_{sp-sp}(Δω)/⟨Pe⟩, where ⟨Pe⟩ is the average RF power created by the spontaneous emission on the photodetector. For each polarization, the optical power received by the photodiode is

\langle P\rangle_{/\!/} = \langle P\rangle_{\perp} = \frac{1}{2}\int_{-B_0/2}^{B_0/2}\rho_{ASE}(\omega)\,d\omega    (3.5.28)

After photodetection, the ASE noise-induced average power in the electrical domain is


\langle P_e\rangle = R^2\left\{\langle P\rangle_{\perp}^2 + \langle P\rangle_{/\!/}^2\right\} = \frac{1}{2}R^2\left[\rho B_0\right]^2    (3.5.29)

Therefore, the spontaneous-spontaneous beat noise-limited RIN is

RIN_{sp-sp} = \frac{S_{sp-sp}}{\langle P_e\rangle} = \frac{1}{B_0}    (3.5.30)

The unit of RIN is [1/Hz].
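The noise terms derived in this section are easy to evaluate side by side. The sketch below is a hedged numerical example with assumed amplifier and receiver values; it computes the shot, signal-spontaneous, and spontaneous-spontaneous noise spectral densities from Eqs. (3.5.14), (3.5.23), and (3.5.27), together with the corresponding RINs of Eqs. (3.5.25) and (3.5.30).

import math

h, c, q = 6.626e-34, 3e8, 1.602e-19
lam = 1550e-9
f = c / lam

# Assumed example values
R = 0.9                   # responsivity (A/W)
G_dB, Ps_dBm = 20.0, -20.0
nsp = 1.5
B0 = c * 1e-9 / lam**2    # 1 nm optical filter bandwidth in Hz

G = 10 ** (G_dB / 10)
Ps = 1e-3 * 10 ** (Ps_dBm / 10)
rho = 2 * nsp * h * f * (G - 1)            # unpolarized ASE spectral density (Eq. 3.5.12)
P_ase = rho * B0

S_shot = 2 * q * R * (G * Ps + P_ase)      # Eq. (3.5.14), A^2/Hz
S_s_sp = 2 * R**2 * G * Ps * rho           # Eq. (3.5.23), A^2/Hz
S_sp_sp = R**2 * rho**2 * B0 / 2           # Eq. (3.5.27), A^2/Hz

RIN_s_sp = 2 * G * Ps * rho / (G * Ps + P_ase) ** 2   # Eq. (3.5.25), 1/Hz
RIN_sp_sp = 1 / B0                                    # Eq. (3.5.30), 1/Hz

print(f"shot:  {S_shot:.2e} A^2/Hz")
print(f"s-sp:  {S_s_sp:.2e} A^2/Hz   RIN = {10*math.log10(RIN_s_sp):.1f} dB/Hz")
print(f"sp-sp: {S_sp_sp:.2e} A^2/Hz   RIN = {10*math.log10(RIN_sp_sp):.1f} dB/Hz")

With these placeholder values, the signal-spontaneous beat noise dominates the shot noise, which is the usual situation in optically preamplified receivers.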

3.5.6 Noise figure definition and its measurement
3.5.6.1 Noise figure definition
In electrical amplifiers, the noise figure is defined as the ratio between the input electrical SNR and the output electrical SNR. For an optical amplifier, both the input and the output are optical signals. To use a similar noise figure definition, a photodetector has to be used at both the input and the output, and therefore

F = \frac{SNR_{in}}{SNR_{out}}    (3.5.31)

where SNRin and SNRout are the input and output signal-to-noise power ratios in the electrical domain, which can be measured using the setup shown in Fig. 3.5.13. It is important to note that at the input side of the optical amplifier there is essentially no optical noise. However, since a photodetector is used to convert the input optical signal into the electrical domain, quantum noise will be introduced even if the photodetector is ideal. Therefore, SNRin at the amplifier input side is determined by the shot noise generated in the photodetector. Assuming that the photodetector has an electrical bandwidth Be, the total shot noise power is P_{shot} = 2qRP_sB_e, where Ps is the input optical signal power. At the same time, the detected signal electrical power at the optical amplifier input side is (RP_s)^2, so the input SNR is

Fig. 3.5.13 Measurement of optical amplifier noise figure.


SNR_{in} = \frac{(RP_s)^2}{2qRP_sB_e} = \frac{RP_s}{2qB_e}    (3.5.32)

At the output of the optical amplifier, in addition to the amplified optical signal, ASE noise is also generated by the amplifier. After photodetection, the major noise sources are the shot noise, the signal-spontaneous beat noise, and the spontaneous-spontaneous beat noise. Since the spontaneous-spontaneous beat noise can be significantly reduced by using a narrowband optical filter in front of the photodetector, only the shot noise and the signal-spontaneous beat noise are included in the calculation of SNRout. Considering that the signal electrical power is (RGP_s)^2, the signal-spontaneous beat noise electrical power is 2R^2GP_s\rho_{ASE}B_e = 4R^2GP_s n_{sp}hf(G-1)B_e, and the shot noise power is P_{shot} = 2qRGP_sB_e, one can easily find

SNR_{out} = \frac{RGP_s}{\left[2q + 4Rn_{sp}hf(G-1)\right]B_e}    (3.5.33)

Therefore, the noise figure as defined in Eq. (3.5.31) is

F = \frac{1}{G}\cdot\frac{4Rn_{sp}hf(G-1) + 2q}{2q} = 2n_{sp}\eta\,\frac{(G-1)}{G} + \frac{1}{G}

where η is the quantum efficiency of the photodetector. Strictly speaking, the noise figure is a property of the optical amplifier, which should be independent of the properties of the photodetector used for the measurement. The precise definition of the optical amplifier noise figure therefore includes the statement that the photodetector used for the SNR detection has 100% quantum efficiency (η = 1), and thus

F = 2n_{sp}\frac{(G-1)}{G} + \frac{1}{G}    (3.5.34)

In most practical applications, the amplifier optical gain is large (G ≫ 1), and therefore, the noise figure can be simplified as

F \approx 2n_{sp}    (3.5.35)

From the definition of nsp given by Eq. (3.5.10), it is obvious that nsp ≥ 1, and the minimum value of nsp = 1 happens with complete carrier inversion, when N1 = 0 and N2 = NT. Therefore, the minimum noise figure of an optical amplifier is 2, which is 3 dB. Although the definition of the noise figure is straightforward and the expression is simple, there is no simple way to predict the spontaneous emission factor nsp of an optical amplifier, and therefore, experimental measurements need to be performed.


3.5.6.2 Optical domain measurement of noise figure
Although the optical amplifier noise figure originates from the electrical domain definition given by Eq. (3.5.31), its value can be evaluated either by optical domain measurements or by electrical domain measurements. Based on Eqs. (3.5.8) and (3.5.34), the noise figure of an optical amplifier can be expressed as a function of the ASE noise spectral density ρASE and the optical gain G of the amplifier:

F = \frac{\lambda\rho_{ASE}}{hcG} + \frac{1}{G}    (3.5.36)
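As a numerical illustration of Eq. (3.5.36), the following sketch converts a measured gain and an ASE spectral density (expressed in the usual dBm per 0.1 nm form of an OSA reading; both numbers are placeholders) into a noise figure in dB.

import math

h, c = 6.626e-34, 3e8
lam = 1550e-9

# Assumed measured values
G_dB = 25.0
rho_dBm_per_0p1nm = -28.0     # ASE density read from the OSA in a 0.1 nm resolution bandwidth

G = 10 ** (G_dB / 10)
B_res = c * 0.1e-9 / lam**2                       # 0.1 nm expressed in Hz
rho_W_per_Hz = 1e-3 * 10 ** (rho_dBm_per_0p1nm / 10) / B_res

F = lam * rho_W_per_Hz / (h * c * G) + 1 / G      # Eq. (3.5.36)
print(f"Noise figure = {10 * math.log10(F):.2f} dB")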

Using the techniques described in Sections 3.5.1 and 3.5.4, the ASE noise spectral density and the optical gain of an optical amplifier can be precisely characterized; therefore, the noise figure of the amplifier can be obtained. In general, both the ASE noise spectral density and the optical gain of an amplifier are functions of signal wavelength; thus, the noise figure also depends on the wavelength. In addition, the value of the noise figure depends on the signal optical power because of gain saturation. Therefore, the commonly accepted definition of the noise figure assumes the small-signal condition, in which the gain saturation effect is negligible. In practice, the accuracy of the optical domain measurement of the noise figure is mainly limited by the spectral resolution and the dynamic range of the OSA, especially when the input optical signal level is low and the noise figure varies significantly with wavelength.

3.5.6.3 Electrical domain characterization of a noise figure
Because of the direct relationship between the ASE noise spectral density in the optical domain and the induced electrical noise after photodetection, as discussed in the last section, it is possible to characterize optical amplifier noise in the electrical domain. Thanks to the excellent accuracy and frequency selectivity of RF circuits, electrical domain characterization has unique advantages in many circumstances. Fig. 3.5.14 shows the ASE characterization technique using an electrical spectrum analyzer, which allows the precise measurement of the detected electrical noise spectral densities before and after the optical amplifier. To avoid measurements at very low RF frequencies, where electrical spectrum analyzers are often inaccurate, the optical source is modulated by a sinusoid on the order of tens of megahertz. At the calibration interface, if the signal laser source is ideal, the electrical SNR at the photodetector should be given by Eq. (3.5.32). However, in practice, a signal laser is usually not ideal, and its performance can be specified by its relative intensity noise RINsource. This source intensity noise will generate additional electrical noise in the photodetector, and by definition, the spectral density of this additional electrical noise is


Fig. 3.5.14 ASE noise characterization using an electrical spectrum analyzer (ESA).

\rho_{source} = (RP_s)^2\cdot RIN_{source}    (3.5.37)

Considering this source intensity noise, the SNR at the calibration interface becomes

SNR_{in} = \frac{RP_s}{\left[2q + RIN_{source}\,RP_s\right]B_e}    (3.5.38)

Similarly, at the output of the optical amplifier, the detected signal power in the electrical domain is G²(RPs)², the amplified noise spectral density caused by the source relative intensity noise is ρsource = G²(RPs)²RINsource, and most importantly, the signal-spontaneous emission beat noise created by the optical amplifier has to be included. Thus, the output SNR can be expressed as

SNR_{out} = \frac{GRP_s}{\left[2q + RIN_{source}\,GRP_s + 4Rn_{sp}hf(G-1)\right]B_e}    (3.5.39)

Therefore, the measured noise figure is

F = \frac{2q + RIN_{source}\,GRP_s + 4Rn_{sp}hf(G-1)}{\left[2q + RIN_{source}\,RP_s\right]G}    (3.5.40)

This equation shows the impact of the source relative intensity noise. Obviously, if RINsource = 0, Eq. (3.5.40) will be identical to Eq. (3.5.34).

3.5.7 Time-domain characteristics of EDFA
It is well known that SOAs and EDFAs have very different time responses. Whereas an SOA has a sub-nanosecond time constant that is often exploited for all-optical signal processing, an EDFA has a long time constant on the order of milliseconds, which is ideal for WDM optical systems with negligible crosstalk between channels.


The time-domain response of the carrier density N2 in an optical amplifier can be expressed by a simplified first-order differential equation:

\frac{dN_2}{dt} = -\frac{N_2}{\tau_{eff}}    (3.5.41)

where τeff is the effective carrier lifetime, which depends on both the spontaneous emission carrier lifetime and the operating conditions, such as the photon density within the amplifier. To switch the carrier density from the initial value Nint to the final value Nfin, the transient process can be described by the following equation:

N_2(t) = N_{fin} + \left(N_{int} - N_{fin}\right)\exp\left(-\frac{t}{\tau_{eff}}\right)    (3.5.42)

and this transition is illustrated in Fig. 3.5.15. The instantaneous carrier density is an important parameter of an optical amplifier which, to a large extent, determines the time-dependent optical gain of the amplifier. Although a time-invariant carrier density is desired for the stable operation of an optical amplifier, the carrier density may be changed by variations in the pump power, the signal optical power, and sudden channel add/drop in a WDM system.
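Eq. (3.5.42) is simple to evaluate; the short sketch below tabulates the transient for placeholder initial and final carrier densities and an assumed 1 ms effective lifetime, showing that the transition is essentially complete after a few τ_eff.

import numpy as np

tau_eff = 1e-3                 # assumed effective carrier lifetime: 1 ms
N_int, N_fin = 1.0, 0.6        # normalized initial and final carrier densities (placeholders)

for t in np.array([0, 0.5, 1, 2, 3, 5]) * tau_eff:
    N2 = N_fin + (N_int - N_fin) * np.exp(-t / tau_eff)   # Eq. (3.5.42)
    print(f"t = {t/tau_eff:3.1f} tau_eff   N2 = {N2:.3f}")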


Fig. 3.5.15 Carrier density transient between two values.

Fig. 3.5.16 Illustration of EDFA response to a square-wave optical signal.


Fig. 3.5.16 illustrates the general response of an EDFA to a square-wave optical signal. If the repetition period of the square wave is much longer than the carrier lifetime, the EDFA provides the small-signal gain only immediately after the optical signal is turned on, and at these instants, the amplified output optical power is high. After that, the carrier density inside the amplifier is reduced exponentially over time due to strong stimulated recombination, as indicated by Eq. (3.5.42), and the optical gain of the amplifier decreases. The carrier density and the optical gain fluctuate between the small-signal values and the saturated values within each cycle. Since the saturation effect is determined by the total optical power of the signal, crosstalk may happen in WDM optical systems during the add/drop of wavelength channels. On the other hand, the relatively slow carrier dynamics in an EDFA can be utilized to measure its noise performance using the time-gating technique (Baney and Dupre, 1992; Baney, 1998). As discussed in Section 3.5.4, the accuracy of optical domain characterization of an EDFA is often limited by the spectral resolution and dynamic range of the OSA. It is usually difficult to measure the ASE noise spectral density at wavelengths very close to a strong optical signal, as illustrated in Fig. 3.5.7. Fig. 3.5.17A shows the measurement setup of the time-gated technique to characterize the ASE noise performance of an EDFA. The input and the output sides of the EDFA are each gated by an optical shutter, and the two shutters have complementary on-off states, as shown in Fig. 3.5.17B. The repetition period of the gating waveform is chosen to be much shorter than the carrier lifetime of the EDFA; thus, the carrier density is determined by the average optical power that enters the EDFA. Though the optical signal from the source laser provides a useful saturation effect in the EDFA, it never reaches the OSA, because the two gating switches are always complementary.

Fig. 3.5.17 EDFA noise characterization using a time-gating technique: (A) test setup and (B) gating waveforms.


In this way, the OSA is able to measure the ASE noise spectral density of the EDFA under various operating conditions and different optical signal levels, while the measurement accuracy is not hampered by a strong amplified optical signal reaching the OSA. In practice, however, even though the carrier density fluctuation over time is small when the repetition period of the gating signal is much shorter than the carrier lifetime, this fluctuation may still impact the accuracy of the measurement to some extent. As an example, assume the carrier lifetime of the EDFA is 1 ms and the gating pulse repetition period is 10 μs with a 50% duty cycle. The maximum fluctuation of the carrier density is then approximately 0.5%. In terms of the accuracy requirement of optical amplifier measurements, since a large number of in-line optical amplifiers may be used in an optical system, even a small amount of uncertainty in each amplifier parameter may translate into a significant error in estimating the system performance.
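The 0.5% figure quoted in this example follows directly from the exponential transient of Eq. (3.5.42); a one-line numerical check with the stated lifetime and gating period:

import math

tau_eff = 1e-3          # carrier lifetime, 1 ms
T_gate = 10e-6          # gating repetition period, 10 us
duty = 0.5

# Largest relative decay of N2 during one half-period of the gating waveform
fluctuation = 1 - math.exp(-duty * T_gate / tau_eff)
print(f"Maximum carrier density fluctuation ~ {100 * fluctuation:.2f}%")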

3.5.8 Characterization of fiber Raman amplification
Both SOAs and EDFAs are discrete optical amplifiers, and their basic characteristics and measurement techniques have been discussed in the previous sections, including optical gain, optical noise, and noise figure. Fiber Raman amplifiers, on the other hand, utilize stimulated Raman scattering to provide optical gain in an optical fiber. A Raman amplifier can be made either discrete or distributed, so its noise figure definition can differ from that of other types of optical amplifiers. In addition, forward pumping, backward pumping, or both can be used for a fiber Raman amplifier, and the profile of the signal power evolution along the fiber depends on the pumping scheme, which complicates the characterization of Raman amplifiers.

3.5.8.1 Noise characteristics of Raman amplifiers
Note that in the calculation of the Raman gain using Eq. (1.5.82) in Chapter 1, we used a small-signal approximation which neglected the saturation effect caused by the power Ps(f) at the Stokes frequency. We also neglected the impact of spontaneous emission in the wave-propagation Eqs. (1.5.78) and (1.5.79) in Chapter 1. In fact, the spontaneous emission generated along the fiber at the Stokes frequency can be amplified by the Raman effect, like the amplified spontaneous emission (ASE) in an EDFA. The ASE noise power spectral density at the Stokes wavelength can be calculated from a differential equation similar to Eq. (1.5.78), but with an additional noise source term. Without losing generality, we use the forward-propagated ASE noise power spectral density as an example:

\frac{d\rho_{ASE}(z)}{dz} = \left[\frac{g_R P_p(z)}{A_{eff}} - \alpha_s\right]\rho_{ASE}(z) + 2n_{sp}hf_s\,g_R\frac{P_p(z)}{A_{eff}}    (3.5.43)


where Pp(z) is the pump power along the fiber, n_{sp} = 1/\left[1 - \exp(-hf/k_BT)\right] is a temperature-dependent spontaneous scattering factor with kB the Boltzmann constant, T the absolute temperature, and f the pump-Stokes frequency difference. The ASE noise power spectral density at the fiber output can be found as

\rho_{ASE,FP}(L) = 2n_{sp}hf_s\,G(L)\,g_R\frac{P_{FP,in}}{A_{eff}}\int_0^L \frac{e^{-\alpha_p z}}{G_{FP}(z)}\,dz    (3.5.44)

for forward pumping, and

\rho_{ASE,BP}(L) = 2n_{sp}hf_s\,G(L)\,g_R\frac{P_{BP,in}}{A_{eff}}\int_0^L \frac{e^{-\alpha_p(L-z)}}{G_{BP}(z)}\,dz    (3.5.45)

for backward pumping, where G(L) is the Raman gain over the fiber length L, as defined in Eq. (1.5.82) in Chapter 1, which is the same for forward and backward pumping, whereas G_FP(z) and G_BP(z) are the position-dependent gains for the forward and backward pumping schemes, defined as

G_{FP}(z) = \frac{P_s(f_s,z)}{P_s(f_s,0)} = \exp\left[\frac{g_R P_{FP,in}}{A_{eff}}\,\frac{1 - e^{-\alpha_p z}}{\alpha_p} - \alpha_s z\right]    (3.5.46)

G_{BP}(z) = \frac{P_s(f_s,z)}{P_s(f_s,0)} = \exp\left[\frac{g_R P_{BP,in}}{A_{eff}}\,\frac{e^{-\alpha_p(L-z)} - e^{-\alpha_p L}}{\alpha_p} - \alpha_s z\right]    (3.5.47)

The noise figure of a fiber Raman amplifier can be calculated based on the procedure outlined in Section 3.5.6, for forward pumping and backward pumping, respectively:

NF_{FP} = \frac{\rho_{ASE,FP}}{G(L)hf_s} + \frac{1}{G(L)} = 2n_{sp}g_R\frac{P_{FP,in}}{A_{eff}}\int_0^L \frac{e^{-\alpha_p z}}{G_{FP}(z)}\,dz + \frac{1}{G(L)}    (3.5.48)

NF_{BP} = \frac{\rho_{ASE,BP}}{G(L)hf_s} + \frac{1}{G(L)} = 2n_{sp}g_R\frac{P_{BP,in}}{A_{eff}}\int_0^L \frac{e^{-\alpha_p(L-z)}}{G_{BP}(z)}\,dz + \frac{1}{G(L)}    (3.5.49)

In addition to the pumping scheme (forward or backward), the noise figure of a fiber Raman amplifier also depends on the Raman gain coefficient, the fiber length, the fiber losses at the pump and the Stokes wavelengths, and the pump power. In general, both the gain and the noise figure also depend on the optical signal power at the Stokes frequency fs, which may saturate the Raman gain, but this signal-induced pump depletion is not considered here for simplicity of the analytical formulas. Fig. 3.5.18 shows the noise figures calculated from Eqs. (3.5.48) and (3.5.49). For the case of forward pumping shown in Fig. 3.5.18A, ASE noise is generated primarily near the input end of the fiber and is significantly attenuated by the fiber while propagating to the output, and thus the noise figure is relatively low.


Fig. 3.5.18 Noise figure as a function of the fiber length for forward pumping (A) and backward pumping (B) at different pump power levels. Parameters used are as follows: Raman gain coefficient gR = 5.8 × 10⁻¹⁴ m/W, fiber effective core area Aeff = 80 μm², spontaneous scattering factor nsp = 1.2, and fiber attenuation parameters αp = 0.3 dB/km and αs = 0.25 dB/km at the pump and the Stokes wavelengths, respectively.
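Eqs. (3.5.48) and (3.5.49) involve only one-dimensional integrals and are straightforward to evaluate numerically. The sketch below uses the parameters listed in the caption of Fig. 3.5.18, with an example pump power of 600 mW and a 100 km fiber, and prints the net gain and the two noise figures; it is meant to reproduce the trends of Fig. 3.5.18 qualitatively, not the exact curves.

import numpy as np

# Parameters from the caption of Fig. 3.5.18
g_R = 5.8e-14                        # m/W
A_eff = 80e-12                       # m^2
n_sp = 1.2
alpha_p = 0.3 / 4.343 / 1000.0       # 0.3 dB/km converted to 1/m
alpha_s = 0.25 / 4.343 / 1000.0      # 0.25 dB/km converted to 1/m

P_pump = 0.6                         # 600 mW pump (example)
L = 100e3                            # 100 km fiber (example)
z = np.linspace(0, L, 20000)
dz = z[1] - z[0]

# Net signal gain G(L): on-off Raman gain times fiber loss (same for both pump directions)
L_eff = (1 - np.exp(-alpha_p * L)) / alpha_p
G_L = np.exp(g_R * P_pump * L_eff / A_eff - alpha_s * L)

# Position-dependent gains, Eqs. (3.5.46) and (3.5.47)
G_FP = np.exp(g_R * P_pump / A_eff * (1 - np.exp(-alpha_p * z)) / alpha_p - alpha_s * z)
G_BP = np.exp(g_R * P_pump / A_eff
              * (np.exp(-alpha_p * (L - z)) - np.exp(-alpha_p * L)) / alpha_p - alpha_s * z)

# Noise figures, Eqs. (3.5.48) and (3.5.49)
NF_FP = 2 * n_sp * g_R * P_pump / A_eff * np.sum(np.exp(-alpha_p * z) / G_FP) * dz + 1 / G_L
NF_BP = 2 * n_sp * g_R * P_pump / A_eff * np.sum(np.exp(-alpha_p * (L - z)) / G_BP) * dz + 1 / G_L

dB = lambda x: 10 * np.log10(x)
print(f"G(L) = {dB(G_L):.1f} dB")
print(f"NF (forward pump)  = {dB(NF_FP):.1f} dB")
print(f"NF (backward pump) = {dB(NF_BP):.1f} dB")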

The noise figure becomes almost independent of the fiber length when the pump power is high enough. This is because both the signal amplification and the ASE noise generation happen near the input side of the fiber, and they are equally attenuated by the rest of the fiber. With the same signal optical gain as for forward pumping, backward pumping has a much higher noise figure, as shown in Fig. 3.5.18B. This is because with a backward pump, the ASE noise is primarily generated near the output side of the fiber where the pump power is highest, and the fiber attenuation has a much larger impact on the optical signal than on the generated ASE noise. For this reason, the noise figure shown in Fig. 3.5.18B increases with the fiber length. For a forward-pumped fiber Raman amplifier with a spontaneous scattering factor nsp = 1.2, the noise figure is slightly higher than 4 dB when the fiber is long enough and the pump power is sufficiently high, as shown in Fig. 3.5.18A. This is similar to the intrinsic noise figure (without considering input and output connection losses), 3.8 dB, of an EDFA with nsp = 1.2. However, for a backward-pumped fiber Raman amplifier, the noise figure can be much higher, on the order of 15 dB, as shown in Fig. 3.5.18B. But this does not mean that a backward-pumped Raman amplifier is not useful. On the contrary, backward-pumped Raman amplification is widely used to extend the reach of fiber-optic systems beyond what can be accomplished with EDFAs. Here we explain why. For a discrete fiber Raman amplifier, as illustrated in Fig. 3.5.19A, the length of the fiber can be chosen to maximize the Raman gain depending on the available pump power. In that case, the fiber Raman amplifier is packaged in a box, and it has no advantage in terms of noise figure compared to an EDFA. However, a popular application of fiber-based Raman amplification is distributed Raman amplification, in which the transmission fiber itself is used as the amplification medium, as illustrated in Fig. 3.5.19B. In such a case, the length of the fiber is usually longer than 50 km, and the overall amplification G(L) can even be lower than 0 dB.


Fig. 3.5.19 (A) A localized Raman amplifier with both forward and backward pumps. (B) A distributed Raman amplifier using transmission fiber as the gain medium. (C) A model which separates a distributed Raman amplifier into a transmission fiber span and a localized amplifier.

G(L) is low. It is important to note that in a distributed Raman amplifier, since the transmission fiber is used as the amplification medium, the attenuation of the transmission fiber has already been counted against the overall Raman gain (Bristiel et al., 2004). For a fair comparison with localized optical amplifiers such as SOAs and EDFAs, we can separate the transmission fiber attenuation, e^{-α_s L}, from an equivalent localized Raman amplifier as shown in Fig. 3.5.19C. The effective optical gain of this localized amplifier is then G_eff = G(L)·e^{α_s L}, and thus the effective noise figure of this localized amplifier is NF_eff = NF·e^{-α_s L}. For example, in Fig. 1.5.20 in Chapter 1, which was obtained with the same set of parameters as Fig. 3.5.18, the signal optical gain is about 2 dB with 600 mW pump power and 100 km fiber length. Considering the 25 dB fiber loss without the Raman pump, this is equivalent to an effective optical gain of approximately 27 dB when used as a distributed amplifier. Fig. 3.5.20 shows the effective noise figure of the equivalent localized amplifier as a function of the fiber length, for forward pumping and backward pumping, respectively. All parameters used are the same as those in Figs. 1.5.20 and 3.5.18.
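Since G_eff = G(L)·e^{α_s L} and NF_eff = NF·e^{-α_s L}, the effective-gain and effective-noise-figure bookkeeping reduces to simple dB arithmetic. The minimal sketch below applies it to the example quoted above (2 dB net gain over 100 km with 0.25 dB/km signal loss); the noise figure value passed in is only a placeholder for illustration, not a value from the text.

```python
# Effective gain and effective noise figure of a distributed Raman amplifier:
# G_eff = G(L)*exp(alpha_s*L) and NF_eff = NF*exp(-alpha_s*L).
# In dB, this simply adds or subtracts the span loss.

def effective_gain_db(net_gain_db, span_loss_db):
    # e.g. 2 dB net gain + 25 dB span loss -> 27 dB equivalent localized gain
    return net_gain_db + span_loss_db

def effective_nf_db(nf_db, span_loss_db):
    # NF_eff can come out negative in dB; see the discussion around Fig. 3.5.20
    return nf_db - span_loss_db

span_loss_db = 0.25 * 100                      # 0.25 dB/km over 100 km
print(effective_gain_db(2.0, span_loss_db))    # ~27 dB, as quoted in the text
print(effective_nf_db(20.0, span_loss_db))     # placeholder NF of 20 dB -> -5 dB effective
```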


Fig. 3.5.20 Effective noise figure NF_eff versus fiber length for forward pumping (A) and backward pumping (B) at different pump power levels. All parameters are the same as those used to obtain Figs. 1.5.20 and 3.5.18.

The negative values of the effective noise figure in Fig. 3.5.20 seem to suggest that the output SNR is higher than the input SNR, which would clearly violate the fundamental rules of physics. But bear in mind that this is an "equivalent" noise figure, which effectively sets the fiber attenuation to zero at the signal wavelength. In fact, even for the "distributed" Raman amplifier, the overall SNR at the output is still lower than that at the input, and the physically relevant noise figure is NF rather than NF_eff.

For distributed fiber Raman amplification with forward pumping, although the noise figure is much smaller than with backward pumping, a few other issues need to be considered in optical system applications. With forward pumping, the optical signal co-propagates with the pump along the fiber, so the relative intensity noise (RIN) of the pump laser can be transferred into both the intensity noise (Bromage, 2004) and the phase noise (Martinelli et al., 2006; Xu et al., 2016) of the optical signal. This degrades the system performance, especially when high-power pump lasers, which usually have high levels of RIN, are used. For backward pumped fiber Raman amplification, the counter-propagation of the pump and the signal makes the bandwidth of the RIN transfer function much narrower than in the forward pumping scheme, because the pump and the signal walk off from each other at approximately twice the group velocity of light in the fiber. Another concern with forward pumping is the impact of fiber non-linearity due to the increased signal optical power along the fiber. The accumulated non-linear phase shift can be calculated by integrating the signal optical power along the fiber as \Phi_{NL} = \gamma\int_0^L P_s(z)\,dz, where γ is the non-linear parameter of the fiber defined by Eq. (1.4.97). With Raman amplification and forward pumping, the signal optical power is higher than the input level over a large portion of the fiber, as shown in Fig. 1.5.20A in Chapter 1, and thus the induced non-linear phase Φ_NL can be high. With backward pumping, however, the signal optical power is kept at a low level over most of the fiber, except in the very last portion where the Raman gain is high. Therefore, a backward Raman pump does not introduce significant non-linear phase shift.
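As a rough illustration of this point, the sketch below integrates the signal power profile implied by the undepleted-pump gain expressions of Eqs. (3.5.46) and (3.5.47) to compare Φ_NL for forward and backward pumping. The non-linear parameter γ and the input signal power are assumed example values, not taken from the text.

```python
import numpy as np

# Compare Phi_NL = gamma * integral(Ps(z) dz) for forward and backward pumping,
# using the undepleted-pump gain profiles of Eqs. (3.5.46)/(3.5.47).

gR, Aeff = 5.8e-14, 80e-12              # Raman gain coefficient (m/W), effective area (m^2)
ap, as_  = 0.3/4.343e3, 0.25/4.343e3    # attenuation, dB/km converted to 1/m
gamma    = 1.3e-3                       # assumed Kerr non-linear parameter, 1/(W*m)
Ps0      = 1e-3                         # assumed input signal power, 1 mW
L, Pp    = 100e3, 0.6                   # fiber length (m) and pump power (W)

z  = np.linspace(0.0, L, 5000)
gr = gR * Pp / (Aeff * ap)
G_fwd = np.exp(gr * (1 - np.exp(-ap*z)) - as_*z)                     # Eq. (3.5.46)
G_bwd = np.exp(gr * (np.exp(ap*(z-L)) - np.exp(-ap*L)) - as_*z)      # Eq. (3.5.47)

dz = z[1] - z[0]
for name, G in (("forward", G_fwd), ("backward", G_bwd)):
    phi_nl = gamma * np.sum(Ps0 * G) * dz      # accumulated non-linear phase, radians
    print(f"{name:8s} pumping: Phi_NL ~ {phi_nl:.3f} rad")
```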


3.5.8.2 Forward/backward hybrid pumping and 2nd-order pumping

For distributed Raman amplification with either forward or backward pumping, the signal power variation P_sig(z) along the fiber can be quite significant, and the power of the pump laser needs to be high enough to produce sufficient Raman gain to compensate for the fiber loss. The combination of forward and backward pumping can help reduce the signal power variation along the fiber and increase the overall Raman gain. For the purpose of comparison, Figs. 3.5.21–3.5.23 show examples of distributed Raman amplification system characteristics with forward pumping, backward pumping, and forward + backward hybrid pumping, respectively. These figures were obtained from numerical simulations based on typical parameters of standard single-mode fiber. While the detailed equations and procedures of such numerical simulations can be found in several reference papers and are outside the focus of this book, Figs. 3.5.21–3.5.23 show the evolution of pump and signal power levels along the fiber and the optical spectrum at the fiber output, which indicates the optical signal-to-noise ratio (OSNR). Four WDM signal channels are used at wavelengths of 1540 nm, 1545 nm, 1550 nm, and 1555 nm with 10 dBm power per channel. The pump power is adjusted to provide approximately 14 dB Raman gain to compensate for the loss of the 70 km fiber. Fig. 3.5.21 shows Raman amplification with only forward pumping. In this case, the power of the 1450 nm pump decreases exponentially along the fiber as shown in Fig. 3.5.21A, and the optical signal channels in the 1550 nm window are amplified near the input of the fiber where the pump power is high, as shown in Fig. 3.5.21B. Note that the derivative of the P_sig(z) curve represents the local optical gain, with P_sig(z) the signal optical power.

Fig. 3.5.21 Distributed Raman amplification for a 70 km SMF using 300 mW forward pump at 1450 nm wavelength. (A) Pump power evolution as the function of position along the fiber. (B) Powers of 4 signal channels along the fiber. (C) Optical spectrum at the fiber output with 0.1 nm resolution bandwidth.


Fig. 3.5.22 Distributed Raman amplification for a 70 km SMF using 350 mW backward pump at 1450 nm wavelength. (A) Pump power evolution as the function of position along the fiber. (B) Powers of 4 signal channels along the fiber. (C) Optical spectrum at the fiber output with 0.1 nm resolution bandwidth.


Fig. 3.5.23 Distributed Raman amplification for a 70 km SMF using both forward (170 mW) and backward (170 mW) pumping at 1450 nm wavelength. (A) Forward (solid line) and backward (dashed line) pump power evolution as the function of position along the fiber. (B) Powers of 4 signal channels along the fiber. (C) Optical spectrum at the fiber output with 0.1 nm resolution bandwidth.

The maximum signal optical power can reach a level approximately 5 dB higher than that at the fiber input, which makes the forward pumped Raman system vulnerable to fiber non-linearity as discussed before. In addition, as the pump and the signal propagate in the same direction along the fiber, the relative intensity noise (RIN) of the pump laser can easily translate into both RIN and phase noise of the optical signal, which will be discussed later in this section. However, the major advantage of forward


Raman pumping is the relatively low ASE noise and high OSNR at the fiber output. The reason is that most of the ASE noise is generated near the input end of the fiber, where the pump power is the highest, and this noise is significantly attenuated by the fiber before reaching the output side. Fig. 3.5.21C shows the optical spectrum at the output of the fiber span, which includes the 4 signal channels and the broadband ASE noise created by the Raman process. The OSNR is approximately 43 dB in this case, with 0.1 nm resolution bandwidth for the ASE noise. Fig. 3.5.22 shows Raman amplification with only a backward pump, where the pump power evolution along the fiber shown in (A) is reversed compared to the case of forward pumping because of the opposite propagation direction. The maximum Raman gain is near the end of the fiber where the pump power is the maximum, as shown in Fig. 3.5.22B. The signal optical power does not rise much above its level at the fiber input, so the non-linear phase shift is lower than in the system with a forward pump. However, because the ASE noise is primarily generated near the output port of the fiber where the pump power is high, and this ASE noise is not attenuated by the fiber before exiting, the OSNR is approximately 38 dB, as shown in Fig. 3.5.22C, which is about 5 dB lower than with forward pumping. In fact, low OSNR is the most noticeable disadvantage of Raman amplification with backward pumping. Forward and backward pumps can also be applied simultaneously to provide high Raman gain, which is typically used in systems with long fiber spans and high losses. For comparison, we still use a 70 km SMF to demonstrate the characteristics of forward/backward hybrid pumping, as shown in Fig. 3.5.23, with 170 mW power for both the forward and backward pump lasers. In this case, both the forward and backward pumps are at 1450 nm wavelength, and their power levels are attenuated along their respective propagation directions. The 4 optical signal channels are amplified both at the input side by the forward pump and at the output side by the backward pump. The maximum signal power level along the fiber is about 1.5 dB higher than that at the input, which is lower than with forward-only pumping. The OSNR shown in Fig. 3.5.23C is approximately 40 dB for each signal channel, which is lower than the 43 dB OSNR with forward-only pumping but higher than the 38 dB OSNR with backward-only pumping. From the analysis presented above, it is apparent that with forward pumping the maximum Raman gain is at the input end of the fiber, which is responsible for the high maximum signal power level along the fiber, making the system vulnerable to fiber non-linear effects. On the other hand, with backward pumping, the maximum ASE noise is generated at the output end of the fiber, which is responsible for the poor OSNR. Intuitively, moving the maximum Raman gain away from the fiber ends may help improve the performance of distributed Raman amplification. One technique to move the maximum Raman gain away from the input and/or output end(s) of the fiber is to use 2nd-order pumping. Second-order pumping amplifies the 1st-order pump with another pump at a shorter wavelength.


Fig. 3.5.24 Raman gain coefficients of the 2nd-order pump at 1360 nm (dashed line) and the 1st-order pump at 1450 nm (solid line). P2nd, P1st, and Psig illustrate power levels of 2nd-order pump, the 1st-order pump, and the optical signal, respectively.

Fig. 3.5.24 shows an example of the 2nd-order pumping wavelength and gain diagram, where the dashed line indicates the Raman gain coefficient of the 2nd-order pump at 1360 nm wavelength, which has its peak Raman gain near 1450 nm to amplify the 1st-order Raman pump at that wavelength. The solid line is the gain coefficient of the 1st-order pump, which has its highest peak near the signal wavelength at 1550 nm. Without the 1450 nm 1st-order pump, the 1360 nm 2nd-order pump by itself has negligible Raman gain on the 1550 nm optical signal. By applying a strong 2nd-order pump and a relatively weak 1st-order pump, the maximum Raman gain can be moved away from the input (forward pump) and the output (backward pump) ends of the fiber. Fig. 3.5.25 shows the characteristics of a distributed Raman system with a 2nd-order pump P_2nd = 780 mW at 1360 nm wavelength and a 1st-order pump P_1st = 10 mW at 1450 nm wavelength, both in the backward direction. As indicated in Fig. 3.5.25, the power of the 2nd-order pump decreases monotonically along its propagation direction due to fiber attenuation as well as the energy transfer to the 1st-order pump through the Raman process. As a result, the 1st-order pump is amplified from 10 mW to approximately 110 mW at its peak, which occurs around 12 km from the fiber output terminal. Because the Raman gain of the optical signals in the 1550 nm wavelength window is produced only by the 1st-order pump at 1450 nm, the major Raman amplification region is between z = 40 km and z = 65 km, and in this region the slope of P_sig(z) is positive, as shown in Fig. 3.5.25B. The optical spectrum shown in Fig. 3.5.25C indicates that the OSNR is approximately 40 dB, which is 2 dB higher than when using only a 1st-order backward pump, as shown in Fig. 3.5.22C.
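The mechanism behind this behavior, energy transfer from a strong 2nd-order pump to a weak 1st-order pump, can be illustrated with a highly simplified pair of coupled equations for the two backward pumps, sketched below. The attenuation values and the Raman gain efficiency g21 between 1360 nm and 1450 nm are assumed example numbers, and signal depletion and the photon-energy ratio are ignored, so the result only qualitatively resembles Fig. 3.5.25.

```python
import numpy as np

# Simplified 2nd-order -> 1st-order backward pump power transfer.  zeta is the
# distance from the fiber output, measured along the pumps' propagation direction.

a2nd = 0.35 / 4.343   # 2nd-order pump attenuation, dB/km -> 1/km (assumed)
a1st = 0.30 / 4.343   # 1st-order pump attenuation, dB/km -> 1/km (assumed)
g21  = 0.5            # Raman gain efficiency 1360 nm -> 1450 nm, 1/(W*km) (assumed)

P2, P1 = 0.78, 0.010  # launch powers: 780 mW (2nd-order), 10 mW (1st-order)
dz, L  = 0.01, 70.0   # step and span length, km
zeta   = np.arange(0.0, L, dz)
P1_trace = []

for _ in zeta:
    dP2 = -a2nd * P2 - g21 * P2 * P1      # 2nd-order pump: fiber loss + depletion
    dP1 = -a1st * P1 + g21 * P2 * P1      # 1st-order pump: fiber loss + Raman gain
    P2 += dP2 * dz
    P1 += dP1 * dz
    P1_trace.append(P1)

P1_trace = np.array(P1_trace)
k = P1_trace.argmax()
print(f"1st-order pump peaks ~{zeta[k]:.1f} km from the output, "
      f"reaching ~{P1_trace[k]*1e3:.0f} mW")
```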


Fig. 3.5.25 Distributed Raman amplification for a 70 km SMF using 780 mW 2nd-order backward pump at 1360 nm wavelength and 10 mW 1st-order backward pump at 1450 nm wavelength. (A) 2nd-order (dashed) and 1st-order (solid) pump power evolution as the function of position along the fiber, (B) powers of 4 signal channels along the fiber, and (C) optical spectrum at the fiber output with 0.1 nm resolution bandwidth.

The major reason for this 2 dB OSNR improvement is that the maximum power of the 1st-order pump occurs about 12 km from the fiber output terminal, so the ASE noise generated there is attenuated by approximately 2 dB before reaching the fiber output. One can also apply a 1st-order forward pump together with the 1st-order and 2nd-order backward pumps to further optimize the system performance, as shown in Fig. 3.5.26. In this case, a relatively weak 1st-order forward pump of 100 mW is applied. As a result, the signal power profile along the fiber is quite uniform, with the maximum signal power level less than 0.5 dB higher than that at the input, as shown in Fig. 3.5.26B, and the OSNR reaches approximately 42 dB, as shown in Fig. 3.5.26C, which is very close to the forward-only pumping configuration indicated in Fig. 3.5.21C.

3.5.8.3 RIN transfer from the pump to the optical signal

In a fiber Raman amplifier, the Raman gain induced on the optical signal is proportional to the power of the pump. As a result, power fluctuations of the pump laser cause Raman gain fluctuations in the system and hence intensity noise in the optical signal. This is known as relative intensity noise (RIN) transfer. In addition, the intensity noise of the pump can also introduce phase noise in the optical signal through cross-phase modulation, as briefly discussed in Section 1.4. This is commonly referred to as relative phase noise (RPN). Both the RIN and the RPN imposed on the optical signal by the RIN of the pump laser through the Raman amplification process may introduce significant system performance degradation and need to be accounted for in system design and performance evaluation.


Fig. 3.5.26 Distributed Raman amplification for a 70 km SMF using 670 mW 2nd-order backward pump at 1360 nm wavelength, 10 mW 1st-order backward pump at 1450 nm wavelength, and 100 mW 1st-order forward pump. (A) 2nd-order backward (dashed), 1st-order backward (solid), and 1st-order forward (dash-dotted) pump power evolution as the function of position along the fiber, (B) powers of 4 signal channels along the fiber, and (C) optical spectrum at the fiber output with 0.1 nm resolution bandwidth.

Fig. 3.5.27 Block diagram of a distributed Raman amplification system consisting of a 2nd-order backward pump with power P1 at wavelength λ1, a 1st-order backward pump with power P2 at wavelength λ2, and a 1st-order forward pump with power P3 at wavelength λ3. The signal complex optical field is As at wavelength λs.

Fig. 3.5.27 shows a generic fiber Raman amplification system including a 2nd-order backward pump with power P1 and wavelength λ1, a 1st-order backward pump with power P2 and wavelength λ2, and a 1st-order forward pump with power P3 and wavelength λ3. The optical signal is in the forward direction with a complex field As at wavelength λs. Raman interaction among the three pumps can be modeled through a set of coupled propagation equations

\frac{\partial P_1}{\partial z} = d_{s1}\frac{\partial P_1}{\partial t} - \alpha_1 P_1 - g_{12} P_2 P_1 - g_{13} P_3 P_1   (3.5.50a)

\frac{\partial P_2}{\partial z} = d_{s2}\frac{\partial P_2}{\partial t} - \alpha_2 P_2 + g_{21} P_1 P_2 - g_{23} P_3 P_2   (3.5.50b)

\frac{\partial P_3}{\partial z} = d_{s3}\frac{\partial P_3}{\partial t} - \alpha_3 P_3 + g_{31} P_1 P_3 + g_{32} P_2 P_3   (3.5.50c)

representing the evolution of the 2nd-order backward pump, the 1st-order backward pump, and the 1st-order forward pump, respectively. For the complex signal optical field propagating in the forward direction, the propagation equation is

\frac{\partial A_s}{\partial z} = -\frac{\alpha_s}{2} A_s + j2\gamma_s\left(P_1+P_2+P_3\right)A_s + \frac{g_{s1}}{2} P_1 A_s + \frac{g_{s2}}{2} P_2 A_s + \frac{g_{s3}}{2} P_3 A_s   (3.5.51)

In these equations, the group velocity of the optical signal v_s is used as the reference, so that in Eq. (3.5.50), d_{s1} = 1/v_s + 1/v_1, d_{s2} = 1/v_s + 1/v_2, and d_{s3} = 1/v_s - 1/v_3 represent the relative walk-off between the three pumps and the optical signal. α1, α2, α3, and αs are fiber attenuation parameters at the pump and the signal wavelengths. g_{uv}, with u = 1, 2, 3 and v = 1, 2, 3, represents the Raman gain coefficient at the uth pump wavelength created by the vth pump, and g_{uv} = g_{vu} is always true. g_{su}, with u = 1, 2, 3, is the Raman gain coefficient at the signal wavelength created by the uth pump. γ_s is the Kerr-effect non-linear parameter of the optical fiber at the signal wavelength. For simplicity, the effect of non-linear waveform distortion on the pump channels is neglected. For a reasonable pump laser with power fluctuation much smaller than the average power, the intensity noise can be treated as a small-signal perturbation. For frequency-domain analysis, the power fluctuations of the three pumps at frequency Ω can be written as

P_1(z,t) = \bar{P}_1(z)\left[1 + m_1(z)\,e^{j\Omega t}\right]   (3.5.52a)

P_2(z,t) = \bar{P}_2(z)\left[1 + m_2(z)\,e^{j\Omega t}\right]   (3.5.52b)

P_3(z,t) = \bar{P}_3(z)\left[1 + m_3(z)\,e^{j\Omega t}\right]   (3.5.52c)

where \bar{P}_u(z) with u = 1, 2, 3 is the average power, and m_u(z) is the relative intensity noise amplitude of the uth pump at position z along the fiber. Now we can insert Eq. (3.5.52) back into the propagation Eq. (3.5.50). Since

\frac{\partial P_1}{\partial z} = \frac{\partial \bar{P}_1}{\partial z}\left(1 + m_1 e^{j\Omega t}\right) + \bar{P}_1\frac{\partial m_1}{\partial z}e^{j\Omega t} \quad\text{and}\quad d_{s1}\frac{\partial P_1}{\partial t} = j\Omega d_{s1} m_1 \bar{P}_1 e^{j\Omega t}

Eq. (3.5.50a) becomes

\frac{\partial \bar{P}_1}{\partial z}\left(1 + m_1 e^{j\Omega t}\right) + \bar{P}_1\frac{\partial m_1}{\partial z}e^{j\Omega t} = j\Omega d_{s1} m_1 \bar{P}_1 e^{j\Omega t} - \alpha_1 \bar{P}_1 - g_{12}\bar{P}_1\bar{P}_2\left(1 + m_2 e^{j\Omega t}\right) - g_{13}\bar{P}_1\bar{P}_3\left(1 + m_3 e^{j\Omega t}\right)   (3.5.53)


This equation can be split into a static equation including only the average power terms, and a small-signal equation including only the perturbation terms:

\frac{\partial \bar{P}_1}{\partial z} = -\alpha_1\bar{P}_1 - g_{12}\bar{P}_1\bar{P}_2 - g_{13}\bar{P}_1\bar{P}_3   (3.5.54a)

\frac{\partial m_1}{\partial z} = j\Omega d_{s1} m_1 - g_{12} m_2\bar{P}_2 - g_{13} m_3\bar{P}_3   (3.5.54b)

Similarly, Eq. (3.5.50b) can be split into

\frac{\partial \bar{P}_2}{\partial z} = -\alpha_2\bar{P}_2 + g_{21}\bar{P}_1\bar{P}_2 - g_{23}\bar{P}_2\bar{P}_3   (3.5.55a)

\frac{\partial m_2}{\partial z} = j\Omega d_{s2} m_2 + g_{21} m_1\bar{P}_1 - g_{23} m_3\bar{P}_3   (3.5.55b)

and Eq. (3.5.50c) can be split into

\frac{\partial \bar{P}_3}{\partial z} = -\alpha_3\bar{P}_3 + g_{31}\bar{P}_1\bar{P}_3 + g_{32}\bar{P}_2\bar{P}_3   (3.5.56a)

\frac{\partial m_3}{\partial z} = j\Omega d_{s3} m_3 + g_{31} m_1\bar{P}_1 + g_{32} m_2\bar{P}_2   (3.5.56b)

Similar approaches can also be applied to Eq. (3.5.51) for the Raman amplification-induced perturbation on the complex signal optical field:

\frac{\partial \bar{A}_s}{\partial z} = -\frac{\alpha_s}{2}\bar{A}_s + j2\gamma_s\left(\bar{P}_1+\bar{P}_2+\bar{P}_3\right)\bar{A}_s + \frac{g_{s1}}{2}\bar{P}_1\bar{A}_s + \frac{g_{s2}}{2}\bar{P}_2\bar{A}_s + \frac{g_{s3}}{2}\bar{P}_3\bar{A}_s   (3.5.57a)

\frac{\partial}{\partial z}\delta A_s = j2\gamma_s\left(\bar{P}_1 m_1+\bar{P}_2 m_2+\bar{P}_3 m_3\right)\bar{A}_s + \frac{g_{s1}}{2}\bar{P}_1 m_1\bar{A}_s + \frac{g_{s2}}{2}\bar{P}_2 m_2\bar{A}_s + \frac{g_{s3}}{2}\bar{P}_3 m_3\bar{A}_s   (3.5.57b)

The solution of the small-signal perturbation Eq. (3.5.57b) can be found through integration,

\delta A_s(\Omega, L) = \bar{A}_s(L)\exp\left\{\int_0^L\left[j2\gamma_s\left(\bar{P}_1 m_1+\bar{P}_2 m_2+\bar{P}_3 m_3\right) + \frac{g_{s1}}{2}\bar{P}_1 m_1 + \frac{g_{s2}}{2}\bar{P}_2 m_2 + \frac{g_{s3}}{2}\bar{P}_3 m_3\right]dz\right\}

which yields the expressions of the signal intensity and the signal optical phase, respectively, at the perturbation frequency Ω as

P_s(\Omega, L) = \bar{P}_s(L)\exp\left[\int_0^L\left(g_{s1}\bar{P}_1 m_1 + g_{s2}\bar{P}_2 m_2 + g_{s3}\bar{P}_3 m_3\right)dz\right]   (3.5.58)

\theta(\Omega, L) = \int_0^L 2\gamma_s\left(\bar{P}_1+\bar{P}_2+\bar{P}_3\right)dz + \int_0^L 2\gamma_s\left(\bar{P}_1 m_1+\bar{P}_2 m_2+\bar{P}_3 m_3\right)dz   (3.5.59)

By the definition of RIN, the RIN induced upon the signal channel is


RIN_s = \frac{\left\langle\left|P_s(\Omega,L) - \bar{P}_s(L)\right|^2\right\rangle}{\left\langle P_s(\Omega,L)\right\rangle^2} = \left\langle\left|\exp\left[\int_0^L\left(g_{s1}\bar{P}_1 m_1 + g_{s2}\bar{P}_2 m_2 + g_{s3}\bar{P}_3 m_3\right)dz\right] - 1\right|^2\right\rangle   (3.5.60)

and the RPN created on the signal optical field is

RPN = \frac{\left\langle\left|\delta\theta(\Omega,L)\right|^2\right\rangle}{\bar{\theta}^2(L)} = \frac{\left\langle\left|\int_0^L 2\gamma_s\left(\bar{P}_1 m_1 + \bar{P}_2 m_2 + \bar{P}_3 m_3\right)dz\right|^2\right\rangle}{\left[\int_0^L 2\gamma_s\left(\bar{P}_1 + \bar{P}_2 + \bar{P}_3\right)dz\right]^2}   (3.5.61)

where \bar{\theta}(L) = \int_0^L 2\gamma_s\left(\bar{P}_1+\bar{P}_2+\bar{P}_3\right)dz is the average optical phase change caused by Raman pumping. Considering that the three pump lasers have independent RIN waveforms with random phases, their contributions have to be added in power (instead of field). Thus,

RIN_s = \left\langle\left|\exp\left(\sqrt{\left|\int_0^L g_{s1}\bar{P}_1 m_1\,dz\right|^2 + \left|\int_0^L g_{s2}\bar{P}_2 m_2\,dz\right|^2 + \left|\int_0^L g_{s3}\bar{P}_3 m_3\,dz\right|^2}\right) - 1\right|^2\right\rangle   (3.5.62)

and

RPN = \frac{\left\langle\left|\delta\theta(\Omega,L)\right|^2\right\rangle}{\bar{\theta}^2(L)} = \frac{\left|\int_0^L \bar{P}_1 m_1\,dz\right|^2 + \left|\int_0^L \bar{P}_2 m_2\,dz\right|^2 + \left|\int_0^L \bar{P}_3 m_3\,dz\right|^2}{\left|\int_0^L \bar{P}_1\,dz\right|^2 + \left|\int_0^L \bar{P}_2\,dz\right|^2 + \left|\int_0^L \bar{P}_3\,dz\right|^2}   (3.5.63)

Now the remaining issue is to find the values of the complex perturbation terms m_1(z), m_2(z), and m_3(z). As the evolutions of the steady-state pump powers, \bar{P}_1(z), \bar{P}_2(z), and \bar{P}_3(z), are z-dependent and can be solved numerically, the small-signal perturbations m_1(z), m_2(z), and m_3(z) can be solved based on Eqs. (3.5.54b), (3.5.55b), and (3.5.56b) using a split-step numerical approach. This is to divide the transmission fiber into short sections and find the relation between m(z + Δz) and m(z), where Δz is the section length. Then the perturbation m(L) at the fiber output can be found with the knowledge of m(0) at the fiber input as a boundary condition. For example, the solution of Eq. (3.5.54b) at z + Δz is


m_1(\Omega, z+\Delta z) = e^{j\Omega d_{s1} z}\left\{\int_z^{z+\Delta z}\left[-g_{12} m_2(\Omega,z)\bar{P}_2(z) - g_{13} m_3(\Omega,z)\bar{P}_3(z)\right]e^{-j\Omega d_{s1} z}\,dz + C\right\}
= m_1(\Omega,z)\,e^{j\Omega d_{s1}\Delta z} + \left[-g_{12} m_2(\Omega,z)\bar{P}_2(z) - g_{13} m_3(\Omega,z)\bar{P}_3(z)\right]\frac{e^{j\Omega d_{s1}\Delta z}-1}{j\Omega d_{s1}}   (3.5.64a)

where the integration constant C is determined by the initial conditions at z. Similarly,

m_2(\Omega, z+\Delta z) = m_2(\Omega,z)\,e^{j\Omega d_{s2}\Delta z} + \left[g_{21} m_1(\Omega,z)\bar{P}_1(z) - g_{23} m_3(\Omega,z)\bar{P}_3(z)\right]\frac{e^{j\Omega d_{s2}\Delta z}-1}{j\Omega d_{s2}}   (3.5.64b)

m_3(\Omega, z+\Delta z) = m_3(\Omega,z)\,e^{j\Omega d_{s3}\Delta z} + \left[g_{31} m_1(\Omega,z)\bar{P}_1(z) + g_{32} m_2(\Omega,z)\bar{P}_2(z)\right]\frac{e^{j\Omega d_{s3}\Delta z}-1}{j\Omega d_{s3}}   (3.5.64c)

The values of m_1(z), m_2(z), and m_3(z) can be found at any position z by solving the linear set of Eq. (3.5.64) with the known boundary conditions, including the RIN of the 2nd-order and 1st-order backward pumps at z = L, which are m_1(L) = \sqrt{RIN_1} and m_2(L) = \sqrt{RIN_2}, and the RIN of the 1st-order forward pump at z = 0, which is m_3(0) = \sqrt{RIN_3}. In the simple cases of unidirectional 1st-order pumping, either in the forward or in the backward direction, analytical formulas can be obtained. Let us use 1st-order forward pumping as an example by letting P_1(z) = P_2(z) = 0 and m_1(z) = m_2(z) = 0, with reference to the block diagram shown in Fig. 3.5.27. In this case, the signal power and phase propagation equations (3.5.58) and (3.5.59) can be simplified to

P_s(\Omega, L) = \bar{P}_s(L)\exp\left[m_3(\Omega,0)\,g_{s3}\int_0^L \bar{P}_3(z)\exp(j\Omega d_{s3} z)\,dz\right]   (3.5.65)

\theta(\Omega, L) = 2\gamma_s\int_0^L \bar{P}_3(z)\,dz + 2\gamma_s m_3(\Omega,0)\int_0^L \bar{P}_3(z)\exp(j\Omega d_{s3} z)\,dz   (3.5.66)

where m_3(\Omega, z) = m_3(\Omega, 0)\exp(j\Omega d_{s3} z) has been used. If we further assume that the signal optical power is small enough and neglect pump depletion, \bar{P}_3(z) = \bar{P}_3(0)e^{-\alpha_3 z} can be used. With a small-signal linearization of the exponential term in Eq. (3.5.65), it becomes

P_s(\Omega, L) = \bar{P}_s(0)\,e^{-\alpha_s L}\left[1 + m_3(\Omega,0)\,g_{s3}\bar{P}_3(0)\int_0^L e^{(j\Omega d_{s3}-\alpha_3)z}\,dz\right]

so that

RIN_s(\Omega) = \frac{\left\langle\delta P_s^2(\Omega,L)\right\rangle}{\left\langle P_s(\Omega,L)\right\rangle^2} = RIN_3(\Omega)\,g_{s3}^2\bar{P}_3^2(0)\left|\int_0^L e^{(j\Omega d_{s3}-\alpha_3)z}\,dz\right|^2 = RIN_3(\Omega)\,g_{s3}^2\bar{P}_3^2(0)\,\frac{1 - 2e^{-\alpha_3 L}\cos(\Omega d_{s3}L) + e^{-2\alpha_3 L}}{\alpha_3^2 + \Omega^2 d_{s3}^2}   (3.5.67)

and

RPN_s(\Omega) = \frac{\left\langle\left|\delta\theta(\Omega,L)\right|^2\right\rangle}{\bar{\theta}^2(L)} = \left\langle\left|m_3(\Omega,0)\right|^2\right\rangle\frac{\left|1 - e^{(j\Omega d_{s3}-\alpha_3)L}\right|^2}{\alpha_3^2 + \Omega^2 d_{s3}^2}\cdot\frac{\alpha_3^2}{\left(1 - e^{-\alpha_3 L}\right)^2} = RIN_3(\Omega)\,\frac{\alpha_3^2}{\alpha_3^2 + \Omega^2 d_{s3}^2}\cdot\frac{1 - 2e^{-\alpha_3 L}\cos(\Omega d_{s3}L) + e^{-2\alpha_3 L}}{\left(1 - e^{-\alpha_3 L}\right)^2}   (3.5.68)

Eqs. (3.5.67) and (3.5.68) are also applicable to 1st-order backward pumping by simply changing d_{s3} to 2n/c, with c the speed of light in vacuum and n the effective refractive index of the fiber. This is because, in the case of backward pumping, the pump and the signal propagate in opposite directions, so the walk-off parameter is approximately twice the inverse of the group velocity, that is, 2n/c. Fig. 3.5.28 shows the normalized RIN and RPN power transfer functions, which are defined as H_RIN(Ω) = RIN_s(Ω)/RIN_3(Ω) and H_RPN(Ω) = RPN_s(Ω)/RIN_3(Ω). Parameters used in the calculation include standard SMF with length L = 70 km, attenuation at the 1450 nm pump wavelength α_3 = 0.22 dB/km, Raman gain coefficient for the 1550 nm optical signal g_{s3} = 0.25 W⁻¹ km⁻¹, pump power \bar{P}_3(0) = 500 mW, fiber dispersion slope S_0 = 0.095 ps/(nm²·km), and zero-dispersion wavelength λ_0 = 1280 nm. Based on Eq. (1.4.88) (note that the group delay per unit length is equal to the inverse of the group velocity), the walk-off parameter between the signal and the pump is

d_{s3} = \frac{S_0}{8}\left[\left(\lambda_s - \frac{\lambda_0^2}{\lambda_s}\right)^2 - \left(\lambda_3 - \frac{\lambda_0^2}{\lambda_3}\right)^2\right] \approx 1.67\ \mathrm{ps/m}

Both the RIN and RPN transfer functions clearly exhibit low-pass characteristics because the walk-off between the pump and the signal averages out the impact of the high-frequency components of the pump RIN. To better understand the key parameters determining the RIN transfer functions, we can further simplify Eqs. (3.5.67) and (3.5.68) by considering that e^{-α_3 L} ≪ 1 (in the example shown in Fig. 3.5.28, e^{-α_3 L} = 0.029), which is valid for most practical distributed Raman amplification systems. The RIN transfer functions can then be simplified to


Fig. 3.5.28 (A) RIN transfer functions with forward and backward pumping at power levels of 250 mW (dashed), 500 mW (solid), and 1000 mW (dotted). (B) RPN transfer functions with forward and backward pumping.

H_{RIN}(\Omega) = \frac{RIN_s(\Omega)}{RIN_3(\Omega)} \approx \frac{g_{s3}^2\bar{P}_3^2(0)}{\alpha_3^2 + \Omega^2 d_{s3}^2}   (3.5.69)

and

H_{RPN}(\Omega) \equiv \frac{RPN_s(\Omega)}{RIN_3(\Omega)} \approx \frac{\alpha_3^2}{\alpha_3^2 + \Omega^2 d_{s3}^2}   (3.5.70)

The cut-off frequencies are f_{CF} = α_3/(2π d_{s3}) for forward pumping and f_{CB} = α_3 c/(4πn) for backward pumping. For the parameters used to obtain Fig. 3.5.28, the cut-off frequencies are easily found to be f_{CF} = 4.83 MHz and f_{CB} = 823 Hz. An important take-home message is that the bandwidths of both the RIN and RPN transfer functions with forward pumping are approximately four orders of magnitude broader than those with backward pumping. This is the major reason that systems employing forward Raman pumping are much more susceptible to the RIN of the pump laser. For the RIN transfer functions shown in Fig. 3.5.28A, the low-frequency response near Ω ≈ 0 is proportional to the square of the pump power, as indicated in Eq. (3.5.69): for every 3 dB increase in the pump power, the RIN transfer function increases by 6 dB, as shown in Fig. 3.5.28A. On the other hand, the RPN transfer function shown in Fig. 3.5.28B is independent of the pump power, as indicated in Eq. (3.5.70).
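The simplified transfer functions of Eqs. (3.5.69) and (3.5.70) and the associated cut-off frequencies are straightforward to evaluate; the sketch below uses the parameter values quoted for Fig. 3.5.28, with an assumed effective index for the backward-pumping walk-off.

```python
import numpy as np

# Simplified RIN and RPN transfer functions, Eqs. (3.5.69)/(3.5.70), with the
# Fig. 3.5.28 parameters.  For backward pumping d_s3 is replaced by 2n/c.

alpha3 = 0.22 / 4.343e3          # pump attenuation, 0.22 dB/km -> 1/m
gs3    = 0.25e-3                 # Raman gain coefficient, 1/(W*m)
P3     = 0.5                     # pump power, W
ds_fwd = 1.67e-12                # walk-off for forward pumping, s/m
n_eff  = 1.468                   # assumed effective index of SMF
ds_bwd = 2 * n_eff / 3e8         # walk-off for backward pumping, 2n/c, s/m

def H_RIN_dB(f, ds):             # Eq. (3.5.69), in dB
    w = 2 * np.pi * f
    return 10 * np.log10(gs3**2 * P3**2 / (alpha3**2 + w**2 * ds**2))

def H_RPN_dB(f, ds):             # Eq. (3.5.70), in dB
    w = 2 * np.pi * f
    return 10 * np.log10(alpha3**2 / (alpha3**2 + w**2 * ds**2))

print("f_CF =", alpha3 / (2*np.pi*ds_fwd), "Hz")   # ~4.8 MHz
print("f_CB =", alpha3 / (2*np.pi*ds_bwd), "Hz")   # ~0.8 kHz
f = np.logspace(2, 8, 7)
print(H_RIN_dB(f, ds_fwd))       # low-pass roll-off for forward pumping
```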


But this does NOT mean that the phase noise induced by Raman amplification is independent of the pump power. In fact, in Eq. (3.5.61), the RPN transfer function is defined as the phase variation normalized by the average non-linear phase. In the simplified case where only a single 1st-order pump is used and pump depletion is neglected, the average non-linear phase is proportional to the pump power as

\bar{\theta}^2(L) = \left[2\gamma_s\int_0^L \bar{P}_3(z)\,dz\right]^2 \approx \frac{4\gamma_s^2\bar{P}_3^2(0)}{\alpha_3^2}

so that the mean square of the phase noise induced by Raman amplification, which is most relevant to the system performance, is

\left\langle\delta\theta_s^2(\Omega, L)\right\rangle \approx \frac{4\gamma_s^2\bar{P}_3^2(0)}{\alpha_3^2}\,RPN_s(\Omega)   (3.5.71)

The next question is: what is the impact of a 2nd-order Raman pump on the RIN and RPN transfer functions? This question can be answered with the help of Eqs. (3.5.69) and (3.5.70). For the RIN transfer function, the 2nd-order pump has negligible direct impact on the signal RIN because the Raman gain coefficient g_{s1} between the 2nd-order pump and the signal is very small, as indicated in Fig. 3.5.24. But RIN transfer can happen from the 2nd-order pump with power P1 to the 1st-order pump with power P3, and then to the signal. This two-step RIN transfer from the 2nd-order pump to the signal can be evaluated from Eq. (3.5.69), assuming the 1st-order pump is ideal (without RIN of its own), so that

H_{RIN,2nd}(\Omega) = \frac{RIN_s(\Omega)}{RIN_1(\Omega)} \approx \frac{g_{31}^2\bar{P}_1^2(0)}{\alpha_1^2 + \Omega^2 d_{31}^2}\cdot\frac{g_{s3}^2\bar{P}_3^2(0)}{\alpha_3^2 + \Omega^2 d_{s3}^2}   (3.5.72)

Note that we have neglected the depletion of the 2nd-order pump by the 1st-order pump, which may not be accurate, because the power of the 1st-order pump is usually not small enough to warrant the non-depletion assumption. Nevertheless, Eq. (3.5.72) provides a very simple expression to estimate the RIN transfer characteristic of the 2nd-order pump. For 2nd-order forward pumping, the RIN transfer function from the 2nd-order pump to the signal is approximately the square of the RIN transfer function from the 1st-order pump to the signal, so that the bandwidth is only slightly narrower. However, as the power required for the 2nd-order pump is usually much higher than when using only a 1st-order pump to achieve the same Raman gain, the impact of RIN transfer from the 2nd-order pump can be quite significant. In terms of the RPN transfer function, as the Kerr effect non-linearity is a broadband process, the RIN of the 2nd-order pump can have a direct impact on the phase of the optical signal, and the RPN transfer function can still be described by Eq. (3.5.70), except that the walk-off parameter d_{s3} needs to be replaced by d_{s1}, representing the walk-off between the signal and the 2nd-order pump. For 2nd-order forward pumping, the optical frequency separation between the 2nd-order pump and the signal is approximately


twice that between the 1st-order pump and the signal, which doubles the walk-off parameter. As a result, the bandwidth of the RPN transfer function is reduced to half of that with 1st-order pumping. But, again, since the power required for the 2nd-order pump is much higher than when using only a 1st-order pump, the mean square of the non-linear phase noise generated in the signal can be much higher, as shown in Eq. (3.5.71) by replacing \bar{P}_3^2(0) with \bar{P}_1^2(0), which represents the power of the 2nd-order pump. For distributed Raman amplification in fiber-optic communication systems, the impact of RIN transfer on the system performance is mainly determined by the induced intensity noise and phase noise variances, which are the integrals of RIN_s(Ω) and ⟨δθ_s²(Ω, L)⟩ over their bandwidths. As the bandwidths of the RIN/RPN transfer functions with backward pumping are approximately four orders of magnitude lower than those with forward pumping, as shown in Fig. 3.5.28, the system performance with backward pumping is relatively insensitive to the RIN of the pump laser. For forward pumping, on the other hand, although it has a clear advantage in OSNR compared to backward pumping, the RIN of the pump laser can be an important concern, especially when 2nd-order forward pumping, which requires high pump power, is considered. For that reason, 2nd-order forward pumping is rarely used in practical fiber-optic systems. The best system performance can be obtained by combining EDFA and distributed Raman amplification with both backward and forward pumps (Cai et al., 2015), but at the expense of increased system complexity and additional amplifier cost. The reduced effective noise figure provided by distributed Raman amplification allows the use of high-order modulation formats to increase the signal spectral efficiency as well as to extend the transmission distance (Pelouch, 2016).

3.5.8.4 Characterization of fiber Raman amplifiers

Similar to other optical amplifiers such as the EDFA discussed earlier in this section, the key parameters of a Raman amplifier include optical gain, gain bandwidth, gain flatness within the bandwidth, and OSNR. Unique properties of Raman amplification also require the characterization of RIN transfer from the pump laser to the RIN and phase noise of the optical signal, as well as of double Rayleigh scattering, because no optical isolator is usually used with Raman amplification. Techniques for measuring the optical gain, gain bandwidth, and noise figure of a Raman amplification system are similar to those used for EDFAs, as illustrated in Fig. 3.5.29A. For optical domain measurements, the optical signal source can be either a tunable laser or a broad-spectrum ASE source, and the optical detector can be an optical spectrum analyzer (OSA) or simply a tunable optical filter followed by a photodetector. For a localized Raman amplifier, a reference path, such as that shown in Fig. 3.5.1, can be used to calibrate the net optical gain provided by the amplifier. For a distributed Raman amplifier, on the other hand, the transmission fiber is part of the


Fig. 3.5.29 (A) Block diagram of experimental setup to measure optical gain, gain bandwidth, and OSNR of a Raman amplifier. (B) Illustration of pump lasers with both polarization multiplexing and wavelength division multiplexing. (C) A Lyot de-polarizer consisting of two pieces of highly birefringent fiber with their birefringence axes relatively rotated by 45°. PC, polarization controller; PBC, polarization beam combiner.

amplifier, and the receiver may not be co-located with the optical signal source; thus, a reference path cannot be set up. In this case, for a constant power of the optical signal source, the difference in received signal optical power with and without Raman pumping provides a measure of the on/off Raman gain. Unlike a discrete optical amplifier, the transmission fiber cannot be removed from the system, so the on/off Raman gain is the most relevant measure of the impact of Raman pumping. One important concern in Raman amplification is polarization-dependent gain, as the Raman gain coefficient is polarization selective, meaning that Raman gain exists only for signal light in the same state of polarization (SOP) as the pump. However, as the wavelength difference between the pump and the optical signal is on the order of 100 nm, their SOPs can walk off rapidly due to the high-order PMD of the transmission fiber. Nevertheless, polarization multiplexing of pump lasers through a polarization beam combiner (PBC) is often used in commercial systems with Raman amplification to minimize the polarization dependency of the Raman gain on the optical signal, as shown in Fig. 3.5.29B. While polarization beam combining allows the total pump power to be increased with minimum combining loss, a simpler way to minimize the polarization dependency of the Raman gain is to use a pump de-polarizer. A Lyot de-polarizer (Burns, 1983) can be constructed with two pieces of highly birefringent fiber, such as polarization-maintaining (PM) fiber, with their birefringence axes misaligned by 45°. This is equivalent to creating a large amount of 2nd-order PMD, which spreads different spectral components of the pump laser spectrum over different SOPs. In the measurement system shown in Fig.


Fig. 3.5.30 Example of Raman gain coefficient with the combination of pump lasers of six different wavelengths and power levels.

3.5.29A, if the optical signal source is a tunable laser emitting polarized light, a polarization controller can be used to adjust the signal SOP. On the other hand, if the optical signal source is non-polarized wideband ASE noise, a polarizer followed by a polarization controller placed before the receiver can be used to characterize polarization-dependent gain. Another concern in Raman amplification is the flatness of the Raman gain. As the Raman gain coefficient is highly frequency dependent, as shown in Fig. 3.5.24, the flat portion of the Raman gain spectrum can be relatively narrow, on the order of a few nanometers. In order to increase the bandwidth with flat gain for WDM system applications, multiple pump lasers with different wavelengths can be combined through wavelength division multiplexing. Fig. 3.5.30 shows an example of using pump lasers at six different wavelengths and power levels to achieve a relatively flat Raman gain over 50 nm. In Raman amplification systems, since the pump power is usually much higher than the signal power and the effect of pump depletion is not as strong as in an EDFA, dynamic gain tilt is not an important concern in comparison with an EDFA-based amplification system. If multiple pump lasers with different wavelengths are combined through WDM to produce broad-spectrum Raman gain, cross-gain saturation between different WDM signal channels is also expected to be much weaker than in both EDFAs and SOAs, where a common pool of carrier density is responsible for the optical gain of all wavelengths within the band. A potential disadvantage of distributed Raman amplification is the increased signal power level along the transmission fiber, which makes the optical system more susceptible to Kerr-effect fiber non-linearity and the associated signal waveform distortion. This problem can be especially significant with forward pumping, as shown in Fig. 3.5.21


Fig. 3.5.31 (A) Experimental setup to measure signal power distribution in a distributed Raman amplification system using OTDR. (B) Example of measured signal power distribution along the transmission fiber without Raman pumping (dotted line), with only a high-power 1st-order backward pump (solid line), and with all three Raman pumps shown in (A) turned on (dashed line).

and the related discussion. Profiles of signal optical power distribution along the transmission fiber can be measured with an optical time domain reflectometer (OTDR), as shown in Fig. 3.5.31 (Akasaka et al., 2002). While the operating principles of the OTDR will be discussed in more detail in Chapter 4, here we show its application in characterizing distributed Raman amplification systems. The OTDR emits optical signal pulses into the fiber and measures Rayleigh backscattering from different locations along the fiber. As the intensity of Rayleigh scattering is linearly proportional to the signal optical power at each location, the signal power profile along the fiber can be obtained from time-resolved Rayleigh scattering measurements with knowledge of the group velocity. In the measurement example shown in Fig. 3.5.31 (Akasaka et al., 2002), a 75 km long standard SMF was used in the system. A 1st-order forward pump at 1455 nm, a 1st-order backward pump at 1455 nm, and a 2nd-order backward pump at 1365 nm were used. The dotted line shows the OTDR trace without Raman pumping, which indicates a total attenuation of approximately 23 dB including five splices along the fiber (visible as abrupt power drops in the curve). The solid line shows the OTDR trace when only a high-power backward (BW) pump was applied, which provided 23 dB Raman gain. In this


case, the maximum amount of signal power variation along the fiber is about 13 dB. The dashed line in Fig. 3.5.31 shows the OTDR trace when all three Raman pump lasers were turned on to produce the same 23 dB of Raman gain. In this case, relatively small optical power levels of 150 mW and 10 mW were used for the 1st-order forward and backward pumps, respectively, while the 2nd-order backward pump had a power of 1 W. As a result of the hybrid pumping scheme, the maximum signal power variation along the transmission fiber is reduced to about 7 dB. Note that in this measurement setup, an optical band-pass filter (OBPF) with a bandwidth of a few nanometers has to be used in front of the OTDR. This OBPF is necessary to minimize the impact of the broadband ASE noise generated by Raman amplification, which would otherwise overwhelm the optical receiver inside the OTDR. In addition to the signal power distribution, pump RIN transfer is another important issue in Raman amplifiers, especially with forward pumping, where the bandwidth of RIN transfer is three to four orders of magnitude wider than with backward pumping. The RIN transfer function can be measured in the frequency domain with an RF network analyzer, as shown in Fig. 3.5.32. The RF network analyzer is set in the S21 mode, which sends a frequency-swept electric signal from port 1 to modulate the intensity of the pump laser and measures the response of the amplified optical signal. An optical band-pass filter (BPF) is used to select the signal wavelength at λ_s before detection by a photodiode and an electric preamplifier, and the result is then received by port 2 of the network analyzer. A small portion of the optical power from each pump laser is tapped, directly detected, and measured by the network analyzer to calibrate the modulation response

Fig. 3.5.32 Experimental setup to measure RIN transfer function in Raman amplification systems with backward (A) and forward (B) pumping. BPF, optical band-pass filter.


Fig. 3.5.33 Measured (dots), trial-function fitted (solid lines), and numerically simulated (dashed lines) RIN transfer functions with 1st-order pumping at 1465 nm and 2nd-order pumping at 1375 nm wavelengths for backward pumping (A) and forward pumping (B). The system has 60 km AllWave fiber, and the optical signal wavelength is at λs = 1560 nm (Mermelstein et al., 2003). (Used with permission.)

of the pump lasers. A difficulty in this measurement setup is the direct modulation of the pump lasers. As these high-power lasers are usually not designed and packaged for intensity modulation, a special experimental arrangement may have to be made to provide proper electrical connections that allow the modulating signal to be superimposed on the large injection current of the pump laser. Fig. 3.5.33 shows examples of measured RIN transfer functions (Mermelstein et al., 2003) with backward pumping and forward pumping, respectively; RIN transfer functions for both 1st-order and 2nd-order pumping are shown. In these measurements, the input powers of the 1st-order and 2nd-order pumps were 13.4 dBm and 28.8 dBm, respectively. The low-frequency level of the RIN transfer function for 2nd-order pumping is approximately 15.5 dB higher than that for 1st-order pumping, primarily because of the higher pump power. The solid lines in Fig. 3.5.33 are least-squares fits to the measured transfer functions with

\left|H_{RIN}(\Omega)\right|^2 = \frac{\left|H_{RIN}(0)\right|^2}{\left(1 + f^2/f_c^2\right)^2}

which is a low-pass transfer function similar to those predicted by Eqs. (3.5.69) and (3.5.72), with |H_RIN(0)|² the DC level of the RIN transfer function and f_c the 6 dB bandwidth. As expected, for 1st-order pumping, the bandwidth of the RIN transfer function is 1.59 kHz for backward pumping and 18.5 MHz for forward pumping, respectively. For 2nd-order pumping, the RIN transfer bandwidths are 1.33 kHz and 11.2 MHz for backward and forward pumping, respectively, which are only slightly narrower than for 1st-order pumping.
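A fit of this trial function to measured data can be done with a standard least-squares routine; the sketch below uses synthetic placeholder data rather than the values behind Fig. 3.5.33, and the variable names are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Least-squares fit of a measured RIN transfer function to the low-pass trial
# function |H(0)|^2 / (1 + f^2/fc^2)^2 used for the solid lines in Fig. 3.5.33.

def trial_db(f, H0_db, fc):
    # |H_RIN|^2 in dB as a function of frequency
    return H0_db - 20.0 * np.log10(1.0 + (f / fc) ** 2)

# synthetic example "measurement" for illustration only
fc_true, H0_true = 1.6e3, -10.0
f_meas = np.logspace(1, 5, 60)
H2_meas_db = trial_db(f_meas, H0_true, fc_true) + np.random.normal(0, 0.5, f_meas.size)

popt, _ = curve_fit(trial_db, f_meas, H2_meas_db, p0=(-5.0, 1e3))
print(f"fitted DC level = {popt[0]:.1f} dB, fitted fc = {popt[1]:.0f} Hz")
```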


Fig. 3.5.34 Block diagram of experimental setup to measure RPN based on coherent homodyne I/Q detection, and utilizing a vector RF network analyzer to simultaneously measure the I and Q components of the complex signal optical field.

The dashed lines in Fig. 3.5.33 were obtained through numerical simulation, which shows that the out-of-band roll-off rate for 2nd-order pumping is higher than that for 1st-order pumping; the reason is explained by Eq. (3.5.72). For the measurement of the RPN transfer function, a coherent detection receiver has to be used, and the experimental setup is shown in Fig. 3.5.34. In this RPN measurement setup, a vector network analyzer is used to simultaneously detect the in-phase (I) and quadrature (Q) components of the complex optical field. The optical signal source needs to have a narrow spectral linewidth so that the cross-phase modulation caused by the Raman pumps can be accurately measured. Part of the optical signal from the source laser can be tapped to serve as the local oscillator (LO) for coherent homodyne detection. The intensity modulation response of the pump lasers can also be calibrated with the same coherent detection receiver, with only the pump power spectra detected as P_s(Ω) = I²(Ω) + Q²(Ω). Note that in Raman amplification systems, high-power pump lasers mostly have multiple longitudinal modes, and mode partition noise can be significant. In addition, competition among longitudinal modes is also responsible for random hopping of the emission wavelengths, which is equivalent to a large frequency noise. This frequency noise can be converted into intensity noise through the chromatic dispersion of the transmission fiber, adding to the original intensity noise. The most popular pump lasers used for fiber Raman amplification are fiber Bragg grating (FBG)-stabilized semiconductor lasers, in which the emission wavelength is stabilized by the external feedback from an FBG in the output fiber pigtail. Because of the relatively long external cavity, on the order of a meter, the external cavity mode spacing is quite small, on the order of 100 MHz. In comparison, a simple Fabry-Perot (FP) laser diode without an FBG has a longitudinal mode spacing on the order of 50 GHz, determined by the laser cavity length. As the speed of mode competition is inversely proportional to the mode spacing, the bandwidth of the mode partition noise is much higher for an FBG-stabilized laser diode than for an FP laser diode.


Fig. 3.5.35 (A) Normalized optical power spectral density (PSD) of a pump laser with (red (dark gray in the printed version)) and without (black) FBG. RIN of the pump laser measured after SMF of different lengths with (B) and without (C) FBG.

To illustrate the evolution of pump laser RIN along the fiber, a typical diode laser emitting at 1427 nm is used as an example, which has approximately 24 dBm output power at 800 mA injection current. Without the FBG, the laser has an FP mode spacing of 0.4 nm and a 10-dB spectral width of about 10 nm, as shown in Fig. 3.5.35A. After adding an FBG in the fiber pigtail, the 10-dB spectral width is reduced to approximately 3 nm with an external-cavity mode spacing of 120 MHz, corresponding to a 0.83 m fiber external cavity length. This external-cavity mode spacing cannot be resolved by the OSA used in the measurement, which has a 0.01 nm resolution bandwidth. The laser RIN was measured in the time domain with a DC-coupled high-speed photodetector following an optical attenuator, and the waveforms were recorded by a real-time oscilloscope with a 50 GS/s sampling rate. Fig. 3.5.35B and C show the spectra of the RIN obtained through FFT, for the same laser diode with and without the FBG in the fiber pigtail. The measurements were performed directly at the laser output as well as after standard single-mode fiber (SMF) of different lengths. It is obvious that the level of RIN is significantly enhanced after fiber transmission, primarily due to the conversion from frequency noise to amplitude noise through the chromatic dispersion of the fiber. Even after only 5 km of SMF, the RIN is increased by more than 15 dB in certain frequency regions. For the diode laser without the FBG, the high-frequency content of the RIN beyond 1 GHz is relatively low even after SMF transmission. This is mainly attributed to the low speed of mode partitioning. With the FBG, in contrast, mode partition noise of the external cavity becomes much faster, creating high-frequency intensity and phase noise, and thus the high-frequency content of the RIN is significantly enhanced, especially after transmission through SMF. Therefore, RIN levels measured at the immediate output of a pump laser may not be adequate to evaluate noise transfer in a Raman amplification system.
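For reference, a RIN spectrum of the kind shown in Fig. 3.5.35B and C can be estimated from a recorded photodetector waveform along the lines sketched below; the waveform used here is a synthetic placeholder, and the normalization follows the usual definition RIN(f) = S_δP(f)/⟨P⟩².

```python
import numpy as np
from scipy.signal import welch

# Estimate a RIN spectrum, RIN(f) = S_dP(f) / <P>^2 in dB/Hz, from a sampled
# photodetector waveform, as done for Fig. 3.5.35.  The waveform below is a
# synthetic placeholder (small fluctuations around a DC level).

fs = 50e9                                    # 50 GS/s sampling rate
t  = np.arange(int(1e6)) / fs
p  = 1.0 + 0.002 * np.random.randn(t.size)   # detected power, arbitrary units

f, S = welch(p - p.mean(), fs=fs, nperseg=4096)   # one-sided PSD of the fluctuations
rin_db_hz = 10 * np.log10(S / p.mean()**2)        # RIN in dB/Hz

print(f"RIN near {f[10]/1e6:.0f} MHz: {rin_db_hz[10]:.1f} dB/Hz")
```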


The RIN introduced by the frequency-noise-to-amplitude-noise conversion through fiber dispersion also needs to be considered in a practical optical communication system.

3.6 Characterization of passive optical components

Passive optical components are essential in optical systems for the manipulation and processing of optical signals. In this section, we discuss the basic properties of, and the techniques for characterizing, several often-used passive optical components, such as fiber-optic couplers, optical filters, WDM multiplexers and demultiplexers, and optical isolators and circulators. From an optical measurement point of view, optical components are not only the subjects of measurements but also fundamental building blocks of the measurement setups themselves. A good understanding of optical components is therefore very important for the design of optical measurement setups and the assessment of measurement accuracy.

3.6.1 Fiber-optic couplers

An optical fiber directional coupler is one of the most important inline fiber-optic components, often used to split and combine optical signals. For example, a fiber coupler is a key component of a fiber-based Michelson interferometer; it is also required in coherent detection optical receivers to combine the received optical signal with the local oscillator. A Mach-Zehnder interferometer can be made by simply combining two fiber directional couplers. Since the power-coupling efficiency of a fused fiber directional coupler is generally wavelength dependent, it can also be used to make wavelength-selective devices such as wavelength division multiplexers. On the other hand, if the coupler is used for broadband applications, the wavelength dependency has to be minimized through proper design of the coupler structure. A fiber directional coupler is made by fusing two parallel fibers together. When the cores of the two fibers are brought laterally close enough to each other, their propagation mode fields start to overlap and optical power can be transferred periodically between the two fibers, as illustrated in Fig. 3.6.1. Fused fiber directional couplers are easier to fabricate than many other optical devices, and their fabrication can be automated by online monitoring of the output optical powers. It is important to note that, by modifying the fabrication process, the same


Fig. 3.6.1 Illustration of a fused fiber directional coupler. z is the length of the coupling region.


fabrication equipment can also be used to make other wavelength-selective optical devices, such as wavelength-division multiplexers/demultiplexers and WDM channel interleavers. As shown in Fig. 3.6.1, the power-splitting ratio of the coupler is defined as

\alpha = \frac{P_c}{P_t + P_c}   (3.6.1)

This is the ratio between the output power from the opposite fiber, P_c, and the total output power P_t + P_c. In an ideal fiber coupler, the total output power is equal to the input power P_s; therefore, α = P_c/P_s. In practical fiber couplers, absorption and scattering losses always exist and add up to an excess loss:

\eta = \frac{P_c + P_t}{P_s}   (3.6.2)

This is another important parameter of the fiber coupler, which is a quality measure of the device. In high-quality fiber couplers, the excess loss is generally lower than 0.3 dB. From an application point of view, the insertion loss of a fiber coupler is often used as a system design parameter. The insertion loss is defined as the transmission loss between the input and the designated output:

T_{c,t} = \frac{P_{c,t}}{P_s}   (3.6.3)

The insertion loss is, in fact, affected by both the splitting ratio and the excess loss of the fiber coupler, that is, T_c = P_c/P_s = αη and T_t = P_t/P_s = (1 − α)η. In a 3 dB coupler, although the intrinsic loss due to power splitting is 3 dB, the actual insertion loss is generally in the vicinity of 3.3 dB. In practical fiber couplers, due to back scattering caused by imperfections in the fused region, there might be optical reflections back to the input ports. Directionality is defined as

D_r = \frac{P_{ret}}{P_s}   (3.6.4)

and reflection is defined as

R_{ref} = \frac{P_{ref}}{P_s}   (3.6.5)
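For bookkeeping during measurements, the definitions of Eqs. (3.6.1)–(3.6.5) can be collected into a small helper such as the sketch below; the function name and the example power values are illustrative only.

```python
import math

# Convert a set of measured powers into the coupler parameters of Eqs. (3.6.1)-(3.6.5).
# Power arguments are in mW (any common linear unit works).

def coupler_parameters(Ps, Pt, Pc, Pret, Pref):
    alpha = Pc / (Pt + Pc)                 # splitting ratio, Eq. (3.6.1)
    eta   = (Pc + Pt) / Ps                 # excess loss, Eq. (3.6.2)
    return {
        "splitting_ratio": alpha,
        "excess_loss_dB": -10 * math.log10(eta),
        "insertion_loss_c_dB": -10 * math.log10(Pc / Ps),   # Eq. (3.6.3)
        "insertion_loss_t_dB": -10 * math.log10(Pt / Ps),
        "directionality_dB": 10 * math.log10(Pret / Ps),    # Eq. (3.6.4)
        "reflection_dB": 10 * math.log10(Pref / Ps),        # Eq. (3.6.5)
    }

# example: a nearly ideal 3 dB coupler with -55 dB directionality and reflection
print(coupler_parameters(Ps=1.0, Pt=0.465, Pc=0.465, Pret=3.2e-6, Pref=3.2e-6))
```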

In high-quality fiber couplers, both the directionality and the reflection are typically lower than −55 dB. From a coupler design point of view, the coupling ratio is a periodic function of the coupling region length z, as indicated in Fig. 3.6.1:


\alpha = F^2\sin^2\left[\frac{K(z - z_0)}{F}\right]   (3.6.6)

where F ≤ 1 is the maximum coupling ratio, which depends on the core separation between the two fibers and the uniformity of the core diameter in the coupling region, z_0 is the length of the fused region of the fiber before stretching, and K is the parameter that determines the periodicity of the coupling ratio. An experience-based formula for K is often used for design purposes:

K \approx \frac{21\lambda^{5/2}}{a^{7/2}}   (3.6.7)

where a is the radius of the fiber within the fused coupling region and λ is the wavelength of the optical signal. Since the radius of the fiber is reduced as the fiber is fused and stretched, the parameter K is a function of z. For example, if the radius of the fiber is a_0 before stretching, the relationship between a and z is approximately a \approx \sqrt{a_0^2 z_0/z}. Fig. 3.6.2 shows the calculated splitting ratio α based on Eqs. (3.6.6) and (3.6.7), where z_0 = 6 mm, a_0 = 62.5 μm, λ = 1550 nm, and F = 1 were assumed. Obviously, any splitting ratio can be obtained by gradually stretching the length of the fused fiber section. Therefore, in the fabrication process, in-situ monitoring of the splitting ratio is usually used. This can be accomplished by simply launching an optical signal P_s at the input port while measuring the output powers P_t and P_c during the fusing and stretching process.
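A minimal sketch of this calculation is given below, following Eqs. (3.6.6) and (3.6.7) with the approximate taper model a(z) ≈ √(a₀²z₀/z); the exact curves of Figs. 3.6.2 and 3.6.3 depend on the details of the taper model, so the numbers here should only be expected to reproduce the qualitative behavior.

```python
import numpy as np

# Splitting ratio of a fused fiber coupler from Eqs. (3.6.6) and (3.6.7),
# with the approximate taper model a(z) = sqrt(a0^2 * z0 / z) used in the text.
# All lengths in meters; F = 1 is assumed as in Fig. 3.6.2.

a0, z0, F = 62.5e-6, 6e-3, 1.0

def splitting_ratio(z, wavelength):
    a = np.sqrt(a0**2 * z0 / z)                 # fiber radius in the fused region
    K = 21.0 * wavelength**2.5 / a**3.5         # Eq. (3.6.7), units of 1/m
    return F**2 * np.sin(K * (z - z0) / F)**2   # Eq. (3.6.6)

z = np.linspace(6e-3, 22e-3, 500)
alpha_1550 = splitting_ratio(z, 1550e-9)        # sweep of fused length at 1550 nm (cf. Fig. 3.6.2)

lam = np.linspace(1.2e-6, 1.7e-6, 500)
alpha_vs_lambda = splitting_ratio(12.59e-3, lam)  # wavelength dependence at fixed z (cf. Fig. 3.6.3)

print("alpha at z = 12.59 mm, 1550 nm:", splitting_ratio(12.59e-3, 1550e-9))
print("alpha range over the z sweep:", alpha_1550.min(), alpha_1550.max())
```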


Fig. 3.6.2 Calculated splitting ratio as a function of the fused section length, with z₀ = 6 mm, a₀ = 62.5 μm, and λ = 1550 nm.



Fig. 3.6.3 Calculated splitting ratio as a function of wavelength with z₀ = 6 mm and a₀ = 62.5 μm. The fused section lengths are z = (A) 12.59 mm and (B) 21.5 mm.

Eqs. (3.6.6) and (3.6.7) also indicate that, for a given fused section length z, the splitting ratio is a function of the signal wavelength λ. This means that the splitting ratio of a fiber directional coupler is, in general, wavelength dependent. The design of a wideband fused fiber coupler is therefore challenging. For a commercial 3-dB fiber directional coupler, the variation of the splitting ratio is typically 0.5 dB across the telecommunication C-band. Fig. 3.6.3A shows the calculated splitting ratio of a 3 dB fiber coupler with the same parameters as in Fig. 3.6.2, except that the fused section length was chosen as z = 12.59 mm. In this case, the variation of the splitting ratio within the 1530–1565 nm wavelength band is less than 0.2 dB. On the other hand, the wavelength-dependent power-splitting ratio of a fiber coupler can be utilized to make a WDM multiplexer. Fig. 3.6.3B shows the splitting ratio versus wavelength for a fiber coupler, again with the same parameters as in Fig. 3.6.2, but here the fused section length was chosen as z = 21.5 mm, where P_t/P_s = 100%, as can be seen from Fig. 3.6.2. It is interesting that in this case the coupling ratio at 1320 nm is P_t/P_s = 0%, which means P_c/P_s = 100%. As illustrated in Fig. 3.6.4, this fiber coupler can be used as a multiplexer to



Fig. 3.6.4 A wavelength-dependent fiber coupler used as (A) a MUX and (B) a DEMUX for optical signals at 1320 and 1550 nm wavelengths.

combine optical signals at 1550 and 1320 nm wavelengths, or as a demultiplexer to split optical signals at 1550 and 1320 nm. Compared to using wavelength-independent 2 × 2 couplers, wavelength-division multiplexing and demultiplexing of this kind does not suffer from intrinsic combining and splitting losses. From the measurement point of view, the wavelength dependency of the power-splitting ratio and the excess loss of fiber directional couplers are critical for many applications; therefore, these parameters have to be measured carefully before the couplers are used in systems. The characterization of a fiber directional coupler requires a tunable light source and optical power meters, as shown in Fig. 3.6.5. In this setup, a wideband light source such as a super-luminescent LED or a tungsten light bulb can be used with a tunable optical filter to select the wavelength. A polarizer and a polarization controller select the state of polarization of the optical signal that enters the fiber coupler, which allows the measurement of polarization-dependent characteristics of the coupler. Four optical power meters, PM1, PM2, PM3, and PM4, are used to measure the coupled, the transmitted, the returned, and the reflected optical powers, respectively; therefore, we can determine the splitting ratio, the insertion loss, the directionality, and the reflectivity as defined in Eqs. (3.6.1)–(3.6.5). It is important to note that the measurement setup, including the wideband light source, the polarizer, and the polarization controller, is possibly wavelength dependent, and therefore careful calibration is necessary to guarantee measurement accuracy. Assuming that the measurement is performed in the linear regime, a simple calibration can be done by measuring the wavelength dependency and the polarization dependency of the source power P_s at the calibration interface, which is then used to correct the measured results.

Fig. 3.6.5 Measuring wavelength-dependent characteristics of a fiber directional coupler using a tunable wavelength source and optical power meters (PM).


Fig. 3.6.6 Illustration of a fiber Bragg grating. Λ, grating period; L, grating length.

3.6.2 Fiber Bragg grating filters

A fiber Bragg grating (FBG) is an all-fiber device which can be used to make low-cost, low-loss, and compact optical filters and demultiplexers. In an FBG, a Bragg grating is written into the fiber core to create a periodic refractive index perturbation along the axial direction of the fiber, as illustrated in Fig. 3.6.6. The periodic grating can be made in various shapes, such as sinusoidal, square, or triangular; however, the most important parameters are the grating period Λ, the length of the grating region L, and the strength of the index perturbation δn. Although the details of the grating shape may contribute to higher-order harmonics, the characteristic of the fiber grating is mainly determined by the fundamental periodicity of the grating. Therefore, the index profile along the longitudinal direction z, which is most relevant to the performance of the fiber grating, is

n(z) = n_core + δn{1 + cos[(2π/Λ)z]}    (3.6.8)

where n_core is the refractive index of the fiber core. The frequency selectivity of an FBG originates from the multiple reflections from the index perturbations and their coherent interference. Obviously, the highest reflection of an FBG happens when the signal wavelength matches the spatial period of the grating, λ_Bragg = 2n_core·Λ, which is defined as the Bragg wavelength. The actual transfer function around the Bragg wavelength can be calculated using coupled-mode equations. Assume a forward-propagating wave A(z) = |A(z)|exp(−jΔβz) and a backward-propagating wave B(z) = |B(z)|exp(jΔβz), where Δβ = β − π/(Λn_core) is the wave number detuning around the Bragg wavelength. These two waves couple with each other due to the index perturbation of the grating, and the coupled-wave equations are

dA(z)/dz = −jΔβA(z) − jκB(z)    (3.6.9a)
dB(z)/dz = jΔβB(z) + jκA(z)    (3.6.9b)

where κ is the coupling coefficient between the two waves, which is proportional to the strength of the index perturbation δn of the grating. To solve the coupled-wave Eq. (3.6.9), we assume


A(z) = a1·e^{γz} + a2·e^{−γz}    (3.6.10a)
B(z) = b1·e^{γz} + b2·e^{−γz}    (3.6.10b)

where a1, a2, b1, and b2 are constants. Substituting Eq. (3.6.10) into Eq. (3.6.9), the coupled-wave equations become

(Δβ − γ)a1 = −jκb1    (3.6.11a)
(Δβ + γ)b1 = −jκa1    (3.6.11b)
(Δβ + γ)a2 = −jκb2    (3.6.11c)
(Δβ − γ)b2 = −jκa2    (3.6.11d)

To have non-trivial solutions for a1, a2, b1, and b2, we must have

| Δβ−γ    0      jκ      0   |
|  0     Δβ+γ    0       jκ  |  = 0
|  jκ     0     Δβ+γ     0   |
|  0      jκ     0      Δβ−γ |

This leads to Δβ² − γ² + κ² = 0, and therefore,

γ = √(Δβ² + κ²)    (3.6.12)

Now we define a new parameter:

ρ = j(Δβ + γ)/κ ≡ κ/[j(Δβ − γ)]    (3.6.13)

The relationships between a1, b1, a2, and b2 in Eq. (3.6.11) become b1 = a1/ρ and b2 = a2ρ. If we know the input and the reflected fields A and B, the boundary conditions at z = 0 are A(0) = a1 + a2 and B(0) = a1/ρ + a2ρ. Equivalently, we can rewrite the coefficients a1 and a2 in terms of A(0) and B(0) as

a1 = [ρA(0) − B(0)]/(ρ − 1/ρ)    (3.6.14a)
a2 = [A(0) − ρB(0)]/(1 − ρ²)    (3.6.14b)

Therefore, Eq. (3.6.10) can be written as

A(L) = {ρ[B(0) − ρA(0)]/(1 − ρ²)}·e^{γL} + {[A(0) − ρB(0)]/(1 − ρ²)}·e^{−γL}
B(L) = {[B(0) − ρA(0)]/(1 − ρ²)}·e^{γL} + {ρ[A(0) − ρB(0)]/(1 − ρ²)}·e^{−γL}

where L is the length of the grating area. This is equivalent to a transfer matrix expression

[A(L); B(L)] = [S11, S12; S21, S22]·[A(0); B(0)]    (3.6.15)

where the matrix elements are

S11 = (e^{−γL} − ρ²e^{γL})/(1 − ρ²)    (3.6.16a)
S22 = (e^{γL} − ρ²e^{−γL})/(1 − ρ²)    (3.6.16b)
S12 = −S21 = ρ(e^{γL} − e^{−γL})/(1 − ρ²)    (3.6.16c)

Since there is no backward-propagating optical signal at the fiber grating output, B(L) = 0, the reflection of the fiber grating can be easily found as

R = B(0)/A(0) = −S21/S22 = ρ(e^{2γL} − 1)/(e^{2γL} − ρ²)    (3.6.17)

Fig. 3.6.7 shows the calculated grating reflectivity versus the frequency detune from the Bragg wavelength. Since the reflectivity is a complex function of the frequency detune, Fig. 3.6.7 shows both the power reflectivity |R|² and the phase angle of R. The power reflectivity clearly shows a bandpass characteristic, and the phase shift is quasi-linear near the center of the passband. The dashed line in Fig. 3.6.7 shows the reflectivity of a grating with uniform coupling coefficient κ = 1.5 m⁻¹, and in this case the out-of-band rejection ratio is only approximately 15 dB. This poor out-of-band rejection ratio is mainly caused by the edge effect, because the grating abruptly starts at z = 0 and suddenly stops at z = L. To minimize the edge effect, apodization can be used, in which the coupling coefficient κ is non-uniform and is a function of z. The solid line in Fig. 3.6.7 shows the reflectivity of a fiber grating with κ(z) = κ0·exp{−5(z − L/2)²}, as shown in Fig. 3.6.8A, where κ0 = 1.5 m⁻¹ is the peak coupling coefficient of the grating and L = 10 mm is the grating length. The apodization obviously increases the out-of-band rejection ratio by an additional 10 dB compared to the uniform grating. In general, more sophisticated apodization techniques utilize both a z-dependent coupling coefficient κ(z) and a z-dependent grating period Λ(z), which help to improve the FBG performance, including the out-of-band rejection, the flatness of the passband, and the phase of the transfer function (Mihailov et al., 2000; Erdogan, 1997).
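As a minimal numerical sketch, the power reflectivity of a uniform FBG can be computed from the widely quoted closed form of coupled-mode theory (see, e.g., Erdogan, 1997), which gives the familiar peak value tanh²(κL); this is used here instead of transcribing Eq. (3.6.17) with its particular sign conventions. The effective index, coupling coefficient, and frequency-to-detuning conversion below are assumptions chosen only for illustration.

```python
# Uniform-FBG power reflectivity versus frequency detune (standard coupled-mode closed form).
import numpy as np

c = 3e8                    # speed of light (m/s)
n_eff = 1.45               # assumed effective index
kappa = 150.0              # assumed coupling coefficient (1/m)
L = 10e-3                  # grating length (m)

detune_f = np.linspace(-60e9, 60e9, 2001)        # frequency detune from Bragg (Hz)
delta = 2 * np.pi * n_eff * detune_f / c         # wave-number detuning (1/m), assumed conversion

# complex gamma is valid both inside and outside the stopband; the point |delta| = kappa
# is a removable singularity and is not hit by this grid
gamma = np.sqrt(kappa**2 - delta**2 + 0j)
r = -kappa * np.sinh(gamma * L) / (delta * np.sinh(gamma * L) + 1j * gamma * np.cosh(gamma * L))

R_dB = 10 * np.log10(np.abs(r)**2 + 1e-12)
print("peak reflectivity: %.2f dB (tanh^2(kL) = %.3f)" % (R_dB.max(), np.tanh(kappa * L)**2))
```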


Fig. 3.6.7 Calculated reflectivities of fiber gratings with κ = 1.5 m⁻¹, L = 10 mm, and λ_Bragg = 1557 nm. Dashed lines: uniform grating; solid line: apodized grating. (Upper panel: reflection (dB) versus frequency detune (GHz); lower panel: phase (degree) versus frequency detune (GHz).)

For a non-uniform coupling coefficient κ = κ(z) and grating period Λ(z), the calculation can be performed by dividing the grating region into a large number of short sections, as illustrated in Fig. 3.6.8B, and assuming both κ and Λ are constant within each short section. Therefore, the input/output relation of each section can be described by Eq. (3.6.15). The overall grating transfer function can be found by multiplying the transfer matrices of all short sections:

[A(L); B(L)] = {∏_{m=1}^{N} [S11^(m), S12^(m); S21^(m), S22^(m)]}·[A(0); B(0)]    (3.6.18)

where N is the total number of sections and S_ij^(m) (i = 1, 2; j = 1, 2) are the transfer matrix elements of the mth short section, evaluated with the κ and Λ values of that particular section. (A numerical sketch of this section-wise calculation is given at the end of this subsection.) From an application point of view, the bandpass characteristic of the FBG reflection is often used for optical signal demultiplexing. Since the FBG attenuation away from the Bragg


Fig. 3.6.8 (A) Non-uniform coupling coefficient κ(z) and (B) dividing an FBG into short sections for calculation using transfer matrices.

Fig. 3.6.9 Configuration of a WDM DEMUX based on FBGs.

wavelength is very small, many FBGs, each having a different Bragg wavelength, can be concatenated, as illustrated in Fig. 3.6.9, to make multiwavelength DEMUXes. With special design, the phase shift introduced by an FBG can also be used for chromatic dispersion compensation in optical transmission systems. In recent years, FBGs are often used to make distributed sensors, which utilize the temperature or mechanical sensitivities of FBG transfer functions. Another note is that although an FBG can be made low-cost, an optical circulator has to be used to redirect the reflection from an FBG, which significantly increases the cost. Although a 3-dB fiber directional coupler can be used to replace the circulator, it will introduce a 6-dB intrinsic loss for the roundtrip of the optical signal.
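The section-wise transfer-matrix calculation suggested by Eq. (3.6.18) can be sketched numerically as below. Rather than reusing the sign conventions of Eqs. (3.6.16), each short uniform section is described by the standard coupled-mode transfer matrix, and the sections are cascaded by matrix multiplication. The effective index, peak coupling coefficient, and Gaussian apodization width are assumptions for illustration only.

```python
# Apodized-FBG reflectivity by cascading per-section transfer matrices (Eq. (3.6.18) idea).
import numpy as np

c = 3e8
n_eff = 1.45                        # assumed effective index
L = 10e-3                           # grating length (m)
N = 100                             # number of short sections
kappa0 = 150.0                      # assumed peak coupling coefficient (1/m)
z = (np.arange(N) + 0.5) * L / N    # section centres
kappa_z = kappa0 * np.exp(-5.0 * ((z - L / 2) / (L / 2)) ** 2)   # assumed apodization profile

def reflectivity(detune_f):
    """Power reflectivity at a frequency detune (Hz) from the Bragg frequency."""
    delta = 2 * np.pi * n_eff * detune_f / c        # wave-number detuning (1/m)
    M = np.eye(2, dtype=complex)
    dz = L / N
    for kappa in kappa_z:
        gamma = np.sqrt(kappa**2 - delta**2 + 0j)
        if abs(gamma) < 1e-9:                       # removable singularity at |delta| = kappa
            gamma = 1e-9
        ch, sh = np.cosh(gamma * dz), np.sinh(gamma * dz)
        T = np.array([[ch - 1j * delta / gamma * sh, -1j * kappa / gamma * sh],
                      [1j * kappa / gamma * sh,       ch + 1j * delta / gamma * sh]])
        M = T @ M                                   # cascade the sections
    return abs(-M[1, 0] / M[1, 1]) ** 2             # B(L) = 0 boundary condition

freqs = np.linspace(-60e9, 60e9, 601)
R_dB = [10 * np.log10(reflectivity(f) + 1e-12) for f in freqs]
print("peak reflectivity: %.2f dB" % max(R_dB))
```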

3.6.3 WDM multiplexers and demultiplexers

In WDM optical systems, multiple wavelengths are used to carry wideband optical signals; therefore, precise filtering, multiplexing, and demultiplexing of these optical channels are very important tasks. Like RF filters, the specifications of optical filters include bandwidth, flatness of the passband, stopband rejection ratio, transition slope from passband to stopband, and phase distortion. Traditional multi-layer thin-film optical filters are able to provide excellent wavelength selectivity, especially with advanced thin-film deposition


technology; hundreds of thin-film layers with precisely controlled index profiles can be deposited on a substrate. Multi-channel WDM multiplexers (MUX), demultiplexers (DEMUX), and wavelength add/drop couplers have been built based on thin-film technology. In terms of disadvantages, since thin-film filters are free-space devices, precise collimation of optical beams is required, and the difficulty may become significant when the channel count is high. On the other hand, the arrayed waveguide grating (AWG) is another configuration used to make MUX, DEMUX, and add/drop couplers. An AWG is based on planar lightwave circuit (PLC) technology, in which multi-path interference is utilized through multiple waveguide delay lines. PLC is an integrated optics technology that uses photolithography and etching; very complex optical circuit configurations can be made with sub-micrometer-level precision. WDM MUX and DEMUX with very large channel counts have been demonstrated with AWGs.

3.6.3.1 Thin film-based interference filters

Fig. 3.6.10A shows the simplest thin-film fiber-optic filter, in which the optical signal is collimated and passes through the optical interference film. At the output, another collimator couples the signal light beam into the output fiber. With many layers of thin films deposited on transparent substrates, high-quality edge filters and bandpass filters can be made with very sharp transition edges, as illustrated in Fig. 3.6.10C. If the transmitted and reflected optical signals are separately collected by two output fibers, a simple two-port WDM coupler can be made, as shown in Fig. 3.6.10B. Fig. 3.6.11A shows the configuration of a four-port WDM DEMUX that uses four interference films, each having a different edge frequency of the transmission spectrum. Fig. 3.6.11B shows an alternative arrangement of the thin-film filters to construct an

Fig. 3.6.10 Thin film-based optical filters. T, transmission; R, reflection.


Fig. 3.6.11 Thin film-based WDM DEMUX. (A, B) Optical circuit configurations and (C) transfer function of a four-channel DEMUX.

eight-channel WDM DEMUX in which only edge filter characteristics are required for the thin films. Fig. 3.6.11C is an example of the typical transfer function of a four-port WDM DEMUX, which shows the maximum insertion loss of about 1.5 dB, very flat response within the passband, and more than 40 dB attenuation in the stopband. It is also evident that the insertion loss increases at longer wavelengths because long wavelength channels pass through larger numbers of thin film filters. In addition to the intensity transfer function, the optical phase of a thin-film filter is also a function of the wavelength, which creates chromatic dispersion as illustrated in Fig. 3.6.12 (Zhang et al., 2002). The characterization of an optical filter has to include both intensity and phase transfer functions. Thin-film technology has been used for many years; sophisticated structural design, material selection, and fabrication techniques have been well studied. However, thin film-based WDM MUX and DEMUX are based on discrete thin film units. The required


Fig. 3.6.12 (A) Intensity transfer function and (B) chromatic dispersion of thin film bandpass filters (Zhang et al., 2002). (Used with permission.) (Curves labeled 100 GHz and 200 GHz; insertion loss (dB) and chromatic dispersion (ps/nm) versus wavelength (nm).)

number of cascaded thin-film units is equal to the number of wavelength channels, as shown in Fig. 3.6.11; therefore, the insertion loss increases linearly with the number of ports. In addition, the optical alignment accuracy requirement becomes more stringent when the number of channels is high, and thus the fabrication yield becomes low. For this reason, the channel counts of commercial thin-film filter-based MUX and DEMUX rarely go beyond 16. For applications involving large numbers of channels, such as 64 and 128, AWGs are more appropriate.

3.6.3.2 Arrayed waveguide gratings

The wavelength selectivity of an AWG is based on multipath optical interference. Unlike transmission or reflection gratings or thin-film filters, an AWG is composed of integrated waveguides deposited on a planar substrate, commonly referred to as planar lightwave circuits (PLC). As shown in Fig. 3.6.13, the basic design of an AWG consists of input and output waveguides, two star couplers, and an array of waveguides bridging the two star couplers. Within the array, each waveguide has a slightly different optical length, and therefore interference happens when the signals combine at the output. The wavelength-dependent interference condition also depends on the design of the star couplers. For the second star coupler, as detailed in Fig. 3.6.14, the input and the output

Fig. 3.6.13 Illustration of an arrayed waveguide grating structure.


Fig. 3.6.14 Configuration of the star coupler used in an AWG.

waveguides are positioned at the opposite sides of a Rowland sphere with a radius of Lf/2, where Lf is the focal length of the sphere. In an AWG operation, the optical signal is first distributed into all the arrayed waveguides by the input star coupler, and then at the output star coupler each wavelength component of the optical field emerging from the waveguide array is constructively added up at the entrance of the appropriate output waveguide. The phase condition of this constructive interference is determined by the following equation (Takahashi et al., 1995):

nc·ΔL + ns·d·sinθo = mλ    (3.6.19)

where θo = jΔx/Lf is the diffraction angle in the output star coupler, Δx is the separation between adjacent output waveguides, j indicates the specific output waveguide number, ΔL is the length difference between two adjacent waveguides in the waveguide array, ns and nc are the effective refractive indices in the star coupler and the waveguides, m is the diffraction order of the grating and is an integer, and λ is the wavelength. Obviously, the condition for constructive interference at the center output waveguide at wavelength λ0 is determined by the differential length of the waveguide array:

λ0 = nc·ΔL/m    (3.6.20)

On the other hand, the wavelength separation between output waveguides of an AWG depends on the angular dispersion of the star coupler. This can be easily found from Eq. (3.6.19):

dθ/dλ = m/(ns·d·cosθo)    (3.6.21)

When d is constant, the dispersion is slightly higher for waveguides further away from the center. Based on this expression, the wavelength separation between adjacent output waveguides can be found as

Δλ = (Δx/Lf)·(dθo/dλ)^(−1) = (Δx/Lf)·(ns·d/m)·cosθo    (3.6.22)
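A small numerical illustration of Eqs. (3.6.20) and (3.6.22) is given below; all parameter values are assumptions chosen only to show the orders of magnitude involved, not design values from the book.

```python
# Centre wavelength and channel spacing of an AWG from Eqs. (3.6.20) and (3.6.22).
import numpy as np

n_c = 1.451       # assumed effective index of the arrayed waveguides
n_s = 1.453       # assumed effective index of the star-coupler slab
m = 60            # assumed grating (diffraction) order
dL = m * 1550e-9 / n_c   # waveguide length increment chosen for a 1550 nm centre
d = 15e-6         # assumed pitch of the array waveguides at the star coupler (m)
dx = 25e-6        # assumed output-waveguide separation (m)
L_f = 10e-3       # assumed focal length of the star coupler (m)
theta0 = 0.0      # near the centre output waveguide

lambda0 = n_c * dL / m                                  # Eq. (3.6.20)
dlambda = (dx / L_f) * (n_s * d / m) * np.cos(theta0)   # Eq. (3.6.22)
print(f"centre wavelength = {lambda0*1e9:.2f} nm, channel spacing = {dlambda*1e9:.3f} nm")
```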


Fig. 3.6.15 Calculated transfer function of an AWG with 32 wavelength channels.

Because an AWG is based on multi-beam interference, the spectral resolution is primarily determined by the number of waveguides in the waveguide array between the two star couplers; a larger number of waveguides provides better spectral resolution. As an example, Fig. 3.6.15 shows the calculated insertion loss spectrum of a 32-channel AWG in which 100 grating waveguides are used between the two star couplers. In this example, the passband of each channel has a Gaussian-shaped transfer function. With advanced design and apodization, flat-top transfer functions can also be realized, and this type of AWG is also commercially available. Both thin-film filters and AWGs are often used to make MUX, DEMUX, and wavelength add/drop devices for WDM applications. To characterize these devices, important parameters include the width of the passband, adjacent channel isolation, non-adjacent channel isolation, passband amplitude and phase ripple, return loss, and polarization-dependent loss.

3.6.4 Characterization of optical filter transfer functions

From the measurement point of view, for wavelength-dependent optical devices such as FBGs, optical filters, and wavelength-division MUX and DEMUX, the insertion loss, intensity transfer function, wavelength-dependent phase delay, temperature coefficient, and the sensitivity to mechanical stretch are all important parameters which need to be characterized. Fig. 3.6.16 shows a simple measurement setup which uses a wideband light source and an optical spectrum analyzer. Both the transmitted and the reflected optical spectra are measured to determine the filtering characteristics of an optical device. The source optical spectrum is also measured for calibration purposes. A polarizer and a polarization controller are used to control the state of polarization of the optical signal and therefore to


Fig. 3.6.16 Measurement of wavelength-dependent characteristics of an optical filter using a wideband wavelength source and an OSA.

characterize the polarization dependency of the optical device performance. Although this measurement setup is simple, it only provides the intensity transfer function of the device under test; the differential phase delay information cannot be obtained. In addition, since most grating-based OSAs have a spectral resolution on the order of 0.05 nm, which is about 6.25 GHz in the 1550 nm wavelength window, narrowband optical filters, which require better spectral resolution, cannot be precisely measured.

3.6.4.1 Modulation phase-shift technique

Fig. 3.6.17 shows the experimental setup of the RF phase-shift technique. In this setup, a tunable laser source is used, which provides a continuous wavelength scan over the wavelength region of interest. The signal optical intensity is modulated by a sinusoid waveform at frequency fm through an electro-optic modulator (EOM). The optical signal passing through the device under test is detected by a wideband photodiode and then amplified for signal processing. The received RF waveform can be measured and compared with the original modulating waveform either by using a vector voltmeter or by using an RF network analyzer, as shown in Fig. 3.6.17A and B, respectively. If a scalar lightwave network analyzer (also known as an optical component analyzer), as described in Chapter 2, is available, the measurement can be made easier because the tunable laser,

Fig. 3.6.17 Experimental setup for measuring optical device characteristics with the modulation phase-shift technique using (A) a vector voltmeter and (B) an RF network analyzer (S21).


the EO modulator, and the photodetection unit are already built in and the response is calibrated. Assume the waveform of the received RF signal is

I_RF(t) = I(λl)·cos(2πfm·t + φ(λl))    (3.6.23)

where λl is the wavelength of the tunable laser. The intensity transfer function of the optical device can be obtained directly from I(λl), and the group delay can be calculated from the wavelength-dependent RF phase φ(λl). In fact, when an optical carrier at frequency fl = c/λl is intensity modulated at an RF frequency fm, the separation between the two optical sidebands is 2fm. Assume the relative optical phase shift between these two modulation sidebands is θ(fl + fm) − θ(fl − fm), and assume the group delay does not change significantly within the modulation bandwidth fl ± fm; then the group delay in the vicinity of optical frequency fl can be found as

τg(fl) = dθ/dω ≈ [θ(fl + fm) − θ(fl − fm)]/[2(2πfm)]    (3.6.24)

Then, the relative RF phase of the recovered intensity modulation waveform is proportional to the group delay and the RF frequency:

φ(fl) = 2πfm·τg(fl) = [θ(fl + fm) − θ(fl − fm)]/2    (3.6.25)

Therefore, the wavelength-dependent group delay can be expressed as a function of the relative RF phase shift:

τg(λl) = φ(λl)/(2πfm)    (3.6.26)

The experimental setups shown in Fig. 3.6.17 are able to measure both the intensity transfer function I(λl) and the RF phase delay φ(λl) as functions of the wavelength set by the tunable laser. A smaller wavelength tuning step size ensures better frequency resolution of the measurement, and commercially available tunable lasers are able to provide a 125 MHz tuning step size, which is approximately 1 pm in the 1550 nm wavelength window. In terms of the measurement accuracy of the RF phase shift, since a 2π RF phase shift corresponds to a group delay difference of 1/fm, a high modulation frequency creates a larger relative phase shift for a given group delay difference. Therefore, a high modulation frequency generally helps to ensure phase measurement accuracy. However, since the measured group delay τg(fl) using the modulation phase-shift technique is an averaged value across the bandwidth fl − fm < f < fl + fm, if the group delay varies significantly within this frequency window, the measured results will not be accurate.

Fig. 3.6.18 Illustration of the phase ripple θ(f) (period Fp, magnitude Δθp) and the signal modulation sidebands at fl − fm and fl + fm around the optical carrier at fl.

As illustrated in Fig. 3.6.18, assume that the optical phase has a sinusoidal variation (Niemi et al., 2001)

θ(fl) = Δθp·sin(2πfl/Fp)    (3.6.27)

where Δθp is the magnitude and Fp is the period of the phase ripple. Based on Eq. (3.6.24), the measured group delay would be

τg(fl) = Δθp·{sin[2π(fl + fm)/Fp] − sin[2π(fl − fm)/Fp]}/[2(2πfm)] = Δτp·sinc(2πfm/Fp)    (3.6.28)

where Δτp = (Δθp/Fp)·cos(2πfl/Fp) is the actual ripple magnitude of the group delay. Fig. 3.6.19 shows the sinc function, which represents the ratio between the measured and

Fig. 3.6.19 Measurement error introduced by the modulation phase-shift technique: the ratio τg(fl)/Δτp is a sinc function of 2πfm/Fp, as indicated by Eq. (3.6.28).


the actual group delay, τg(fl)/Δτp. Obviously, the measurement is accurate only when the modulation frequency is much lower than the ripple period, fm ≪ Fp, so that sinc(2πfm/Fp) ≈ 1. When the modulation frequency is high enough, the measured results may strongly depend on the modulation frequency. In general, if the group delay is frequency-dependent, any ripple function can be decomposed into discrete periodic functions using the Fourier transform:

T(s) = F[τg(f)]    (3.6.29)

which represents the magnitude of the group delay ripple as a function of s. Obviously, s is the inverse of the ripple period Fp in optical frequency (Fortenberry et al., 2003). On the other hand, for each Fourier component s of the group-delay ripple, the modulation phase-shift technique introduces a measurement error according to Eq. (3.6.28), which is equivalent to a filtering transfer function

H(s) = sinc(πs/sm)    (3.6.30)

where the parameter sm = 1/(2fm) is determined by the RF modulation frequency fm. Since, for each Fourier component of the group-delay ripple and a known modulation frequency fm, the filtering transfer function H(s) is deterministic, this filtering error can be corrected with standard signal processing by using an inverse Fourier transform to obtain the correct group-delay function:

τg(f) = F⁻¹[T(s)/H(s)]    (3.6.31)

The procedure is: (1) measure τg(f) using the setup shown in Fig. 3.6.17 with a certain modulation frequency fm; (2) take the Fourier transform of τg(f) to find the magnitude of the group delay ripple T(s); and (3) correct the measurement error caused by H(s) using the inverse Fourier transform shown in Eq. (3.6.31). However, one difficulty of this signal processing algorithm is that for a certain ripple period s, if s/sm is close to unity, the value of H(s) is close to zero and Eq. (3.6.31) will have a singularity that makes the signal processing invalid. To solve this problem, two modulation frequencies fm1 and fm2 can be used to measure the same device (Fortenberry et al., 2003). Suppose the group-delay functions measured with these two modulation frequencies are τg1(f) and τg2(f); the following equation can be used for correcting the results:

τg(f) = F⁻¹{H1(s)T1(s)/[H1²(s) + H2²(s)] + H2(s)T2(s)/[H1²(s) + H2²(s)]}    (3.6.32)

where

T1(s) = F[τg1(f)]
T2(s) = F[τg2(f)]
H1(s) = sinc(πs/sm1)
H2(s) = sinc(πs/sm2)

with sm1 = 1/(2fm1) and sm2 = 1/(2fm2). The weighting used in Eq. (3.6.32) for the two measurements was chosen so that the resulting summation yields the original signal while avoiding zeros in the denominator that would cause the expression to diverge. As long as H1(s) and H2(s) do not go to zero at the same time, this correction algorithm is effective.

3.6.4.2 Interferometer technique

The interferometer is another technique to measure the complex transfer function of optical devices. Because of the coherent nature of interferometers, this technique can potentially provide better phase resolution compared to non-coherent techniques. On the other hand, because the signal optical phase is used in the measurement, special attention needs to be paid to maintaining the stability of the system. Fig. 3.6.20 shows a measurement setup in which the device under test (DUT) is placed in one of the two arms of a Mach-Zehnder interferometer. A piezoelectric transducer (PZT) is used in the other (reference) arm to stretch the fiber and thus to control the relative optical phase delay in that arm. A wavelength-swept laser is used as the light source; a fixed-wavelength laser is also used to provide a phase reference to stabilize the measurement system. At the output, the signal and the reference wavelengths are separated by a wavelength DEMUX and detected independently. Assume that both fiber couplers equally split the optical power (3-dB couplers); at the signal wavelength, the photocurrent at the photodiode is

I = I0·{1 + T + 2√T·cos[A·sin(ωm·t) + ϕ]}    (3.6.33)

where T is the optical power attenuation of the DUT, which is generally wavelength-dependent, and I0 is proportional to the signal optical power. Because the length of the reference arm is modulated at a frequency ωm by the PZT stretcher,

Fig. 3.6.20 Measuring optical device phase shift using the interferometer technique. PZT, piezoelectric transducer; PC, polarization controller.


the phase mismatch between the two arms is modulated with A·sin(ωmt), where A is the magnitude of this phase modulation. ϕ is a constant phase mismatch between the two arms, which can be adjusted by a CW voltage on the PZT. The term cos[A·sin(ωmt)] can be expanded in a Bessel series, and the time-varying part of Eq. (3.6.33) may be written as

i(t) = I0·cos(ϕ)·[J0(A) + 2J2(A)cos(2ωmt) + …] − I0·sin(ϕ)·[2J1(A)sin(ωmt) + 2J3(A)sin(3ωmt) + …]    (3.6.34)

At the receiver, if we only select the frequency components at ωm and 2ωm, their amplitudes are i(ωm) = 2I0·sin(ϕ)·J1(A) and i(2ωm) = 2I0·cos(ϕ)·J2(A), respectively. Since the phase modulation index A is a constant, one can precisely determine the constant phase mismatch ϕ through the measured values of i(ωm) and i(2ωm) (Beck and Walmsley, 1990). Since the phase delay of the DUT is wavelength-dependent, the constant phase mismatch between the two arms is

ϕ(λ) = (2π/λ)·[Δl + δl(λ)]    (3.6.35)

where Δl is a fixed optical length mismatch between the two arms and δl is the wavelength-dependent optical path length through the DUT. In practical systems, the optical path length difference Δl is environmentally sensitive and varies with temperature. This may be caused by both thermal expansion and the temperature-dependent refractive index of the silica material. For example, if the thermal sensitivity of the refractive index is dn/dT = 10⁻⁶, then for a one-meter fiber each degree of temperature change will generate a 1 μm optical path length change, which is comparable to the optical wavelength and will change the optical phase mismatch significantly. The purpose of the fixed-wavelength reference laser in Fig. 3.6.20 is to help stabilize the initial phase mismatch through the feedback that controls the PZT. In the reference receiver PD1, we can detect the constant phase ϕ(λ0) through the measurements of i(ωm) and i(2ωm). With the active feedback control, one can minimize the i(ωm) component by adjusting the DC bias voltage on the PZT, thereby ensuring that ϕ(λ0) = mπ, where m is an integer. Then, in the signal channel, the phase shift can also be obtained through the measurements of i(ωm) and i(2ωm). Because of the active feedback, the phase shift is no longer sensitive to environmental variation:

ϕ(λ) = (2π/λ)·[(mπλ0)/(2π) − δl(λ0) + δl(λ)]    (3.6.36)

where the first term in the bracket is a constant and the phase shift changes only with the optical path length of the DUT. The group delay caused by the DUT can then be calculated as

τg(λ) = −[λ²/(2πc)]·dϕ(λ)/dλ = −(λ/c)·d[δl(λ)]/dλ    (3.6.37)
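A short sketch of how the static phase and group delay can be recovered in software is given below: the amplitudes of the ωm and 2ωm components (Eq. (3.6.34)) yield ϕ for a known modulation index A, and the wavelength derivative of ϕ gives the group delay as in Eq. (3.6.37). The numerical values and the sign convention of the recovered phase are assumptions that would be fixed during calibration; they are not taken from the book.

```python
# Harmonic-ratio phase recovery and group delay for the interferometric method.
import numpy as np
from scipy.special import jv   # Bessel functions J_n

A = 1.0                        # phase-modulation index (assumed known from calibration)

def phase_from_harmonics(i_w, i_2w):
    # i_w  ~ 2*I0*sin(phi)*J1(A),  i_2w ~ 2*I0*cos(phi)*J2(A); the common factor 2*I0 cancels
    return np.arctan2(i_w / (2 * jv(1, A)), i_2w / (2 * jv(2, A)))

def group_delay(wavelength_m, phi_rad):
    c = 3e8
    dphi_dlambda = np.gradient(np.unwrap(phi_rad), wavelength_m)
    return -wavelength_m**2 / (2 * np.pi * c) * dphi_dlambda   # Eq. (3.6.37)
```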

Because this technique is based on optical interferometry, very high phase measurement accuracy can be expected. Experimentally, group delay measurement accuracy on the order of 0.1 fs has been reported in a 600–640 nm wavelength window (Beck and Walmsley, 1990). There are several variations in terms of practical implementation. The biggest disadvantage of using a PZT fiber stretcher or other mechanical methods to modulate the optical path length difference is the low modulation speed and possibly the lack of long-term stability. Fig. 3.6.21 shows an alternative implementation of interferometry using an acousto-optic frequency shifter (AOFS). In this implementation, the upper output port of the AOFS collects the 0th-order beam in which the optical frequency is not changed, while the lower output port of the AOFS collects the first-order Bragg diffraction and the optical frequency is shifted by ωm, which is the RF driving frequency of the AOFS. Therefore, the optical signals carried by the lower and the upper arms of the interferometer have slightly different frequencies. Each optical receiver performs a coherent heterodyne detection with the intermediate frequency ωIF equal to the frequency shift ωm of the AOFS. Assuming equal splitting of the optical power by both the AOFS and the fiber directional coupler, at each optical signal wavelength, the photocurrent at the receiver is

I = I0·[1 + T + 2√T·cos(ωm·t + ϕ)]    (3.6.38)

where T is the optical power attenuation of the DUT, which is generally wavelength-dependent, and I0 is proportional to the signal optical power. In the measurement, the phase ϕ = ϕ(λ0) of the intermediate frequency resulting from heterodyne detection at the reference wavelength provides a phase reference. The wavelength-dependent phase ϕ(λ) measured with the tunable laser needs to be corrected from ϕ(λ) to ϕ(λ) − ϕ(λ0) to remove the random phase fluctuations due to environmental and temperature

Fig. 3.6.21 Real-time interferometric measurement of an optical device transfer function using an acousto-optic frequency shifter (AOFS).


Fig. 3.6.22 Examples of the measured phase and intensity transfer functions of (A) a fiber Bragg grating and (B) a thin film filter (Ogawa, 2006).

variations (Ogawa, 2006). To accurately track the phase of the intermediate frequency, the linewidth δν of the laser source must be narrow enough such that δν ≪ ωm. From an instrumentation point of view, the measurement accuracy can be further improved by digitizing the recovered RF signal and utilizing advanced digital signal processing (DSP) techniques for frequency locking and phase tracking. Fig. 3.6.22 shows an example of the measured phase delay and intensity transmission spectrum of a narrowband fiber Bragg grating. This was measured with the interferometer technique using an AOFS, and the modulation frequency on the acousto-optic modulator was ωm = 100 MHz. The linewidth of the laser source was less than 1 MHz.
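A minimal DSP sketch of this referencing step is shown below: both digitized photocurrents carry a beat at the intermediate frequency, and subtracting the phase of the reference channel from that of the signal channel removes the common environmental drift. The sampling rate, IF value, and synthetic waveforms are assumptions for illustration, not the instrument's actual parameters.

```python
# Relative IF phase extraction for the AOFS-based interferometer of Fig. 3.6.21.
import numpy as np
from scipy.signal import hilbert

def dut_phase(i_signal, i_reference):
    """Phase of the signal-channel IF relative to the reference-channel IF (radians)."""
    ph_sig = np.unwrap(np.angle(hilbert(i_signal)))
    ph_ref = np.unwrap(np.angle(hilbert(i_reference)))
    return ph_sig - ph_ref          # common fluctuations (temperature, vibration) cancel

# usage sketch with synthetic data: 100 MHz IF sampled at 1 GS/s (values assumed)
fs, f_if = 1e9, 100e6
t = np.arange(20000) / fs
ref = np.cos(2 * np.pi * f_if * t)
sig = 0.5 * np.cos(2 * np.pi * f_if * t + 0.3)       # 0.3 rad offset stands in for the DUT
print(np.mean(dut_phase(sig, ref)[1000:-1000]))      # approximately 0.3 (edges trimmed)
```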

3.6.5 Optical isolators and circulators

3.6.5.1 Optical isolators

An optical isolator is a device that allows only unidirectional transmission of an optical signal. It is often used in optical systems to avoid unwanted optical reflections. For example, a single-frequency semiconductor laser is very sensitive to external optical feedback. Even a very low level of optical reflection from the external optical circuit, on the order of −50 dB, is sufficient to cause a significant increase in laser phase noise, intensity noise, and wavelength instability. Therefore, an optical isolator is usually required at the output of each laser diode in applications that require low optical noise and stable optical frequency. Another example is in optical amplifiers, where unidirectional optical amplification is required. In this case, the bidirectional optical amplification provided by the optical gain medium would cause self-oscillation if the external optical reflection from, for example, connectors and other optical components were strong enough.


Fig. 3.6.23 Optical configuration of a polarization-sensitive optical isolator.

The traditional optical isolator is based on a Faraday rotator sandwiched between two polarizers, as shown in Fig. 3.6.23. In this configuration, the optical signal coming from the left side passes through the first polarizer, whose optical axis is in the perpendicular direction, which matches the polarization of the input optical signal. A Faraday rotator then rotates the polarization of the optical signal by 45° in the clockwise direction. The optical axis of the second polarizer is oriented at 45° with respect to the first polarizer, which allows the optical signal to pass through with little attenuation. If there is a reflection from the optical circuit on the right side, the reflected optical signal has to pass through the Faraday rotator from right to left. Since the Faraday rotator is a non-reciprocal device, the polarization state of the reflected optical signal is rotated by an additional 45° in the same direction as for the input signal, thus becoming perpendicular to the optical axis of the first polarizer. In this way, the first polarizer effectively blocks the reflected optical signal and assures the unidirectional transmission of the optical isolator. For a Bi-YIG-based Faraday rotator of a certain thickness, the angle of polarization rotation versus the intensity of the applied magnetic field is generally not linear; it saturates at around 1000 G. Therefore, a magnetic field of >1500 G will guarantee a stable rotation angle for a certain Bi-YIG thickness. The value of isolation is, to a large extent, determined by the accuracy of the polarization rotation angle and thus by the thickness of the YIG Faraday rotator. The Faraday rotator is the key component of an optical isolator. In long-wavelength applications in the 1.3 and 1.5 μm windows, bismuth-substituted yttrium iron garnet (Bi-YIG) crystals are often used; they have a high Verdet constant and relatively low attenuation. Packaged optical isolators with 30 dB isolation are commercially available. If a higher isolation level is required, multi-stage isolators can be used to provide optical isolation of >50 dB. Although this isolator configuration is simple, the polarization state of the optical signal has to match the orientation of the polarizer in the isolator; otherwise, significant optical attenuation will occur. This type of isolator can be used in the same package with a semiconductor laser, where the SOP of the optical signal is deterministic; however, its polarization sensitivity is a major concern for many inline fiber-optic applications


Fig. 3.6.24 Configuration of a polarization-independent optical isolator.

for which the SOP of the optical signal may vary. For these applications, polarization-insensitive optical isolators are required. Fig. 3.6.24 shows the configuration of a polarization-independent optical isolator made of two birefringent beam displacers and a Faraday rotator. In the 1550 nm wavelength window, YVO4 crystals are often used to make the birefringent beam displacers because of their high birefringence, low loss, and relatively low cost. The operating principle of this polarization-independent optical isolator can be explained with Fig. 3.6.25. The incoming light beam is first split into vertically and horizontally polarized components (the o beam and e beam) by the first YVO4 beam displacer; they are shown as solid and dashed lines, respectively, in Fig. 3.6.25. These two beams are spatially separated after passing through the YVO4 crystal. The Bi-YIG Faraday rotator rotates the polarization states of both beams by 45° without changing their spatial positions. The second YVO4 crystal has the same thickness as the first one. By arranging the orientation of the birefringence axis of the second YVO4 crystal, the o beam and e beam in the first YVO4 beam displacer become the e beam and o beam, respectively, in the second YVO4 beam displacer. Therefore, the two beams converge at the end of the second YVO4 crystal.

Fig. 3.6.25 Illustration of the operating principle of a polarization-independent optical isolator.


In the backward direction, the light beams pass through the second YVO4 crystal along the same routes as for the forward direction. However, due to its non-reciprocal characteristic, the Faraday rotator rotates the polarization of the backward-propagating light beams by an additional 45° (in the same rotation direction as for the forward-propagating light beams). The total polarization rotation is 90° after a roundtrip through the Faraday rotator. Therefore, the light beam that was ⊥ (∥) polarized in the forward direction becomes ∥ (⊥) polarized in the backward propagation. The spatial separation between these two beams is further increased when they pass through the first YVO4 crystal in the backward direction, and they are not captured by the optical collimator on the input side. Fig. 3.6.25 also illustrates the polarization orientation of each light beam at the various interfaces. In the design of an optical isolator, the thickness of the YVO4 has to be chosen such that the separation of the o and e beams is larger than the beam cross-section size.

3.6.5.2 Optical circulators

An optical circulator is another device that is based on the non-reciprocal polarization rotation of an optical signal by the Faraday effect. A basic optical circulator is a three-terminal device, as illustrated in Fig. 3.6.26, where terminal 1 is the input port and terminal 2 is the output port, while a reflected signal entering terminal 2 is redirected to terminal 3 instead of terminal 1. Optical circulators have many applications in optical communication systems and optical instrumentation for redirecting optical signals. One example is the use with fiber Bragg gratings, as shown in Fig. 3.6.27A. Since the reflection characteristic of a fiber Bragg grating can be used either as a bandpass optical filter or as a dispersion compensator, an optical circulator has to be used to redirect the reflected optical signal to the output. Although a 3-dB fiber directional coupler can also be used to accomplish this job, as

Fig. 3.6.26 Basic function of a three-terminal optical circulator.

Fig. 3.6.27 Redirecting an FBG reflection using (A) a circulator and (B) a 3-dB fiber directional coupler.


Fig. 3.6.28 Configuration of a polarization-insensitive optical circulator.

shown in Fig. 3.6.27B, there will be a 6-dB intrinsic insertion loss for the optical signal going through a roundtrip in the fiber coupler. Fig. 3.6.28 illustrates the configuration of a polarization-independent optical circulator. Like the polarization-independent optical isolator discussed previously, an optical circulator also uses YVO4 birefringent material for the beam displacers and Bi-YIG for the Faraday rotators. However, the configuration of an optical circulator is obviously much more complex than that of an isolator because the backward-propagating light also has to be collected, at port 3. The operating principle of the optical circulator can be explained using Fig. 3.6.29. In the forward-propagation direction, the incoming light beam at port 1 is first split into o and e beams by the first YVO4 beam displacer; these are shown as solid and dashed lines, respectively. The polarization state is also labeled near each light beam. These two beams are separated in the horizontal direction after passing through the first YVO4 displacer (D1), and then they pass through a pair of separate Bi-YIG Faraday rotators. The left Faraday rotator (a1) rotates the o beam by +45° and the right Faraday rotator (b1) rotates the e beam by −45°, without shifting their spatial beam positions. In fact, after passing through the first pair of Faraday rotators, the two beams become co-polarized, and they are both o beams in the second YVO4 displacer (D2). Since these two separate beams now have the same polarization state, they pass through the second displacer D2 without further divergence. At the second set of Faraday rotators, the left beam is rotated by an additional +45° at (a2) and the right beam by an additional −45° at (b2); their polarization states then become orthogonal to each other. The third beam displacer D3 then combines these two separate beams into one at the output, which reconstructs the input optical signal with a 90° polarization rotation. For a reflected optical signal entering port 2, which propagates in the backward direction, the light beams pass through D3, creating a beam separation. However, because of the non-reciprocal characteristic, the Faraday rotator a2 rotates the reflected beam by

Fig. 3.6.29 Illustration of the operating principle of an optical circulator.

+45° and b2 rotates the reflected beam by −45°, both in the same direction as for the forward-propagating light beams. The total polarization rotation is then 90° after a roundtrip through the second pair of Faraday rotators. Therefore, in the second beam displacer D2, the backward-propagating beams are again co-polarized, but their polarization orientations are now both e beams. (Recall that the two beams were both o beams in the forward-propagating direction.) Because of this polarization rotation, the backward-propagating beams do not follow the same routes as the forward-propagating beams in the second beam displacer D2, as shown in Fig. 3.6.29. After passing through the first set of Faraday rotators and the first beam displacer D1, the backward-propagating beams are eventually recombined at port 3 on the input side, which is in a different spatial location than port 1. From a measurement point of view, since both optical isolators and circulators are wideband optical devices, the measurements do not usually require high spectral resolution. However, these devices have very stringent specifications on optical attenuation, isolation, and polarization-dependent loss, so accurate measurements of these parameters are especially important. For an optical isolator, the important parameters include isolation, insertion loss, polarization-dependent loss (PDL), polarization-mode dispersion (PMD), and return loss. The insertion loss is defined as the output power divided by the input power, whereas isolation is defined as the reflected power divided by the input power when the output is connected to a total reflector. Return loss is defined as the reflected power divided by the input power when the output side of the isolator has no reflection; thus, return loss is a measure of the reflection caused by the isolator itself. For a single-stage isolator within


Fig. 3.6.30 Measuring (A) optical isolator insertion loss and (B) isolation.

a 15 nm window around the 1550 nm wavelength, the insertion loss is around 0.5 dB, the isolation should be about 30 dB, and the PDL should be less than 0.05 dB. With a good antireflection coating, the return loss of an isolator is on the order of 60 dB. Fig. 3.6.30 shows a measurement setup for characterizing optical isolator insertion loss and isolation. In this setup, a tunable laser is used as the light source; its output is intensity modulated by an electro-optic modulator (EOM). The purpose of this intensity modulation is to facilitate the use of a lock-in amplifier in the receiver, so the modulation frequency fm only needs to be in the kilohertz range. Some commercially available tunable lasers have a built-in intensity modulation function up to megahertz frequencies; in that case the external EOM is no longer necessary. A polarization controller at the laser output makes it possible to measure the polarization-dependent effects on insertion loss and isolation. Fig. 3.6.30A shows the setup to measure insertion loss, in which the photodiode and the lock-in amplifier are placed at the output of the isolator. The insertion loss is defined as the lowest value of P2/P1 over all input signal states of polarization (SOP) and wavelengths within the specified wavelength window. To measure the isolation, a total reflector is connected to the output side of the isolator and an optical circulator is used to redirect the reflected optical power, as shown in Fig. 3.6.30B. The isolation is then obtained as the highest value of P3/P1 over all signal SOPs and wavelengths. Polarization-dependent and wavelength-dependent parameters can also be evaluated with this setup. The measurement can be simplified by replacing the tunable laser with a wideband light source and replacing the photodiode and lock-in amplifier with an OSA. However, the measurement accuracy would not be as good as with a tunable laser, because the power spectral density of a wideband light source is generally low and the detection sensitivity of an OSA is limited. For an optical circulator, important specifications also include insertion loss, isolation, PDL, and return loss. In addition, since a circulator has more than two terminals,


directionality is also an important measure. For a three-terminal circulator, as illustrated in Fig. 3.6.26, the insertion loss includes the losses from port 1 to port 2 and from port 2 to port 3. Likewise, the isolation includes the isolation from port 2 to port 1 and from port 3 to port 2. The directionality is defined by the loss from port 1 to port 3 when port 2 is terminated without reflection. Fig. 3.6.31 shows the measurement setups used to characterize a three-port circulator. In Fig. 3.6.31A, the insertion losses are measured separately as P2,out/P1,in and P3,out/P2,in. During the measurement, the polarization controller adjusts the SOP of the input optical signal, and the tunable laser adjusts the wavelength to find the worst-case insertion loss values within the specified wavelength window. Fig. 3.6.31B shows how to measure isolation. To measure the isolation between ports 1 and 2, the input signal is connected to port 1 and a total reflector is attached to port 2; the isolation is then I1,2 = Pr/P1,in. Similarly, to measure the isolation between ports 2 and 3, the input signal is connected to port 2 and a total reflector is attached to port 3, and I2,3 = Pr/P2,in. To measure the directionality of an optical circulator from port 1 to port 3, port 2 has to be terminated without reflection. This can easily be done by connecting port 2 to an open-ended angled connector such as an angled physical contact (APC) connector. A typical circulator operating in the 1550 nm wavelength window usually has an insertion loss of about 0.8 dB, which is slightly higher than that of an isolator due to its more complicated optical structure. The isolation is on the order of −40 dB and the directionality is usually better than −50 dB.
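The reduction of the raw sweep data from the setups of Figs. 3.6.30 and 3.6.31 into single worst-case numbers can be sketched as below; the array contents are placeholders standing in for the lock-in readings recorded over SOP settings and wavelengths, not measured data.

```python
# Worst-case insertion loss and isolation from a SOP/wavelength sweep (placeholder data).
import numpy as np

# measured powers indexed as [SOP setting, wavelength point]
P_in   = np.full((8, 50), 1.0e-3)                         # launched power (W), placeholder
P_out  = 0.9e-3 * (1 - 0.01 * np.random.rand(8, 50))      # forward output power (W), placeholder
P_back = 1.0e-6 * (1 + 0.1 * np.random.rand(8, 50))       # reverse leakage power (W), placeholder

insertion_loss_dB = -10 * np.log10(np.min(P_out / P_in))  # lowest P_out/P_in over SOP and wavelength
isolation_dB      = -10 * np.log10(np.max(P_back / P_in)) # highest reverse leakage ratio
print(f"insertion loss {insertion_loss_dB:.2f} dB, isolation {isolation_dB:.1f} dB")
```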

Fig. 3.6.31 Measuring (A) optical circulator insertion loss, (B) isolation, and (C) directionality.


References Agrawal, G.P., Dutta, N.K., 1986. Long Wavelength Semiconductor Lasers. Van Nostrand Reinhold Co. Inc., New York, NY. Akasaka, Y., Huang, R., Harris, D.L., Yam, S., Kazovsky, L., 2002. New pumping scheme on distributed Raman amplifier over novel transmission fiber aimed at 40Gb/s based terrestrial network. In: 2002 28TH European Conference on Optical Communication, Copenhagen, pp. 1–2. Al-Qadi, M., Vedala, G., Hui, R., 2018. Phase noise of diode laser frequency comb and its impact in coherent communication systems. In: Proc. Conf. Lasers Electro-Opt. (Paper JTu2A.35). Al-Qadi, M., O’Sullivan, M., Xie, C., Hui, R., May 2019. Differential phase noise properties in QD-MLL and its performance in coherent transmission systems. In: Conference on Lasers and Electro-Optics (CLEO), San Jose, CA. (Paper SW3O.2). Al-Qadi, M., O’Sullivan, M., Xie, C., Hui, R., 2020. Phase noise measurements and performance of lasers with non-white FM noise for use in digital coherent optical systems. J. Lightwave Technol. 3 (6), 1157–1167. Ataie, V., Temprana, E., Liu, L., Myslivets, E., Kuo, P., Alic, N., Radic, S., 2015. Ultrahigh count coherent WDM channels transmission using optical parametric comb-based frequency synthesizer. J. Lightwave Technol. 33 (3), 694–699. Baney, D.M., 1998. Characterization of erbium-doped fiber amplifiers. In: Derickson, D. (Ed.), Fiber-Optic Test and Measurement. Prentice Hall. Baney, D.M., Dupre, J., 1992. Pulsed-source technique for optical amplifier noise figure measurement. In: European Conference on Optical Communications, ECOC’92, pp. 509–512. Baney, D.M., Sorin, W.V., Newton, S.A., 1994. High-frequency photodiode characterization using a filtered intensity noise technique. IEEE Photon. Technol. Lett. 6, 1258–1260. Beck, M., Walmsley, I.A., 1990. Measurement of group delay with high temporal and spectral resolution. Opt. Lett. 15 (9), 492–494. Bristiel, B., Gallion, P., Jaoue¨n, Y., Pincemin, E., 2004. Intrinsic noise figure derivation for fiber Raman amplifiers from equivalent noise figure measurement. In: IEEE LTIMC 2004—Lightwave Technologies in Instrumentation & Measurement Conference, Palisades, New York, USA, pp. 135–140. Bromage, J., 2004. Raman amplification for fiber communications systems. J. Lightwave Technol. 22 (1), 79–93. Bucalovic, N., et al., 2012. Experimental validation of a simple approximation to determine the linewidth of a laser from its frequency noise spectrum. Appl. Opt. 51, 4582–4588. Burns, W.K., 1983. Degree of polarization in the Lyot depolarizer. J. Lightwave Technol. 1, 475–479. Cai, J., et al., 2015. 49.3 Tb/s transmission over 9100 km using C+ L EDFA and 54 Tb/s transmission over 9150 km using hybrid-Raman EDFA. J. Lightwave Technol. 33 (13), 2724–2734. Devaux, F., Sorel, Y., Kerdiles, J.F., 1993. Simple measurement of fiber dispersion and of chirp parameter of intensity modulated light emitter. J. Lightwave Technol. 11 (12), 1937–1940. Di Domenico, G., Schilt, S., Thomann, P., 2010. Simple approach to the relation between laser frequency noise and laser line shape. Appl. Opt. 49 (25), 4801–4807. Duan, G., Shen, A., Akrout, A., Van Dijk, F., Lelarge, F., Pommereau, F., LeGouezigou, O., Provost, J., Gariah, H., Blache, F., Mallecot, F., Merghem, K., Martinez, A., Ramdane, A., 2009. High performance InP-based quantum dash semiconductor mode-locked lasers for optical communications. Bell Labs Tech. J. 14 (3), 63–84. Eichen, E., Schlafer, J., Rideout, W., McCabe, J., 1990. 
Wide-bandwidth receiver/photodetector frequency response measurements using amplified spontaneous emission from a semiconductor optical amplifier. J. Lightwave Technol. 8, 912–916. Eiselt, N., et al., 2017. Real-time 200 Gb/s (4  56.25 Gb/s) PAM-4 transmission over 80 km SSMF using quantum-dot laser and silicon ring-modulator. In: Proc. Opt. Fiber Commun. Conf. (Paper W4D.3). Ellis, A.D., Gunning, F.C.G., 2005. Spectral density enhancement using coherent WDM. IEEE Photon. Technol. Lett. 17 (2), 504–506. Erdogan, T., 1997. Fiber grating spectra. J. Lightwave Technol. 15 (8), 1277–1294. Finch, A., Zhu, X., Kean, P.N., Sibbett, W., 1990. Noise characterization of mode-locked color-center laser sources. IEEE J. Quantum Electron. 26 (6), 1115–1123.


Fortenberry, R., Sorin, W.V., Hernday, P., 2003. Improvement of group delay measurement accuracy using a two-frequency modulation phase-shift method. IEEE Photon. Technol. Lett. 15 (5), 736–738. Gimlett, J.L., Cheung, N.K., 1989. Effects of phase-to-intensity noise conversion by multiple reflections in gigabit-per-second DFB laser transmission systems. J. Lightwave Technol. 7, 888–895. Goldstein, E.L., da Silva, V., Eskildsen, L., Andrejco, M., Silberberg, Y., 1993. Inhomogeneously broadened fiber-amplifier cascade for wavelength-multiplexed systems. IEEE Photon. Technol. Lett. 5, 543–545. Habruseva, T., O’Donoghue, S., Rebrova, N., Kefelian, F., Hegarty, S.P., Huyet, G., 2009. Optical linewidth of a passively mode-locked semiconductor laser. Opt. Lett. 34 (21), 3307–3309. Hall, J.L., 2006. Nobel lecture: defining and measuring optical frequencies. Rev. Mod. Phys. 78, 1279–1295. H€ansch, T.W., 2006. Nobel lecture: passion for precision. Rev. Mod. Phys. 78, 1297–1309. Helms, J., 1991. Intermodulation and harmonic distortions of laser diodes with optical feedback. J. Lightwave Technol. 9 (11), 1567–1575. Henry, C., 1983. Theory of the phase noise and power spectrum of a single mode injection laser. IEEE J. Quantum Electron. 19 (9), 1391–1397. Henry, C., 1986. Phase noise in semiconductor lasers. J. Lightwave Technol. LT-4 (3), 298–311. Horak, P., Loh, W.H., 2006. On the delayed self-heterodyne interferometric technique for determining the linewidth of fiber lasers. Opt. Express 14, 3923–3928. Huynh, T.N., Nguyen, L., Barry, L.P., 2013. Phase noise characterization of SGDBR lasers using phase modulation detection method with delayed self-heterodyne measurements. J. Lightwave Technol. 31, 1300–1308. ´ ., Nguyen, L., Rusch, L.A., Barry, L.P., 2014. Simple analytical model for lowHuynh, T.N., Du´ill, S.P.O frequency frequency-modulation noise of monolithic tunable lasers. Appl. Opt. 53, 830–835. Jinno, M., Takara, H., Kozicki, B., Tsukishima, Y., Sone, Y., Matsuoka, S., 2009. Spectrum-efficient and scalable elastic optical path network: architecture, benefits, and enabling technologies. IEEE Commun. Mag. 47 (11), 66–73. Kawanishi, S., Takada, A., Salruwatari, M., 1989. Wideband frequency-response measurement of optical receivers using optical heterodyne detection. J. Lightwave Technol. 7, 92–98. Kemal, J.N., et al., 2017a. 32QAM WDM transmission using a quantumdash passively mode-locked laser with resonant feedback. In: Proc. Opt. Fiber Commun. Conf. Exhib., Los Angeles, CA, USA. (Paper TH5C.3). Kemal, J.N., et al., 2017b. WDM transmission using quantum-dash modelocked laser diodes as multiwavelength source and local oscillator. In: Proc. Opt. Fiber Commun. Conf. Exhib., Los Angeles, CA, USA. (Paper Th3F.6). Kikuchi, K., 2012. Characterization of semiconductor-laser phase noise and estimation of bit-error rate performance with low-speed offline digital coherent receivers. Opt. Express 20 (5), 5291–5302. Klee, A., Davila-Rodriguez, J., Williams, C., Delfyett, P.J., 2013a. Characterization of semiconductor-based optical frequency comb sources using generalized multiheterodyne detection. IEEE J. Sel. Top. Quantum Electron. 19 (4) (Paper 1100711). Klee, A., Davila-Rodriguez, J., Williams, C., Delfyett, P.J., 2013b. Generalized spectral magnitude and phase retrieval algorithm for self-referenced multiheterodyne detection. J. Lightwave Technol. 30 (23), 3758–3764. Kruger, U., Kruger, K., 1995. 
Simultaneous measurement of the linewidth, linewidth enhancement factor a, and FM and AM response of a semiconductor laser. J. Lightwave Technol. 13 (4), 592–597. Li, J., Zhang, X., Tian, F., Xi, L., 2011. Theoretical and experimental study on generation of stable and highquality multi-carrier source based on re-circulating frequency shifter used for Tb/s optical transmission. Opt. Express 19 (2), 848–860. Lu, Z.G., Liu, J.R., Raymond, S., Poole, P.J., Barrios, P.J., Poitras, D., 2008. 312-fs pulse generation from a passive C-band InAs/InP quantum dot mode-locked laser. Opt. Express 16 (14), 10835–10840. Maher, R., Thomsen, B., 2011. Dynamic linewidth measurement technique using digital intradyne coherent receivers. Opt. Express 19 (26), B313–B322. Martinelli, C., Lorcy, L., Durecu-Legrand, A., Mongardien, D., Borne, S., 2006. Influence of polarization on pump-signal RIN transfer and cross-phase modulation in copumped Raman amplifiers. J. Lightwave Technol. 24 (9), 3490–3505.

445

446

Fiber optic measurement techniques

Matsko, A.B., Savchenkov, A.A., Liang, W., Ilchenko, V.S., Seidel, D., Maleki, L., 2011. Mode-locked Kerr frequency combs. Opt. Lett. 36, 2845–2847. Mermelstein, M.D., Brar, K., Headley, C., 2003. RIN transfer measurement and modeling in dual-order Raman Fiber amplifiers. J. Lightwave Technol. 21, 1518–1523. Mihailov, S.J., Bilodeau, F., Hill, K.O., Johnson, D.C., Albert, J., Holmes, A.S., 2000. Apodization technique for fiber grating fabrication with a halftone transmission amplitude mask. Appl. Opt. 39, 3670–3677. Moscoso-Ma´rtir, A., et al., 2018. 8-channel WDM silicon photonics transceiver with SOA and semiconductor mode-locked laser. Opt. Express 26 (19), 25446–25459. Nazarathy, M., Sorin, W.V., Baney, D.M., Newton, S., 1989. Spectral analysis of optical mixing measurements. J. Lightwave Technol. 7 (7), 1083–1096. Niemi, T., Uusimaa, M., Ludvigsen, H., 2001. Limitations of phase-shift method in measuring dense group delay ripple of fiber Bragg gratings. IEEE Photon. Technol. Lett. 13 (12), 1334–1336. Ogawa, K., 2006. Characterization of chromatic dispersion of optical filters by high-stability real-time spectral interferometry. Appl. Opt. 45 (26), 6718–6722. Okoshi, T., Kikuchi, K., Nakayama, A., 1980. Novel method for high-resolution measurement of laser output spectrum. Electron. Lett. 16, 630–631. Paschotta, R., 2004. Noise of mode-locked lasers (part II): and other fluctuations. Appl. Phys. B Lasers Opt. 79 (2), 163–1733. Paschotta, R., Schlatter, A., Zeller, S.C., Telle, H.R., Keller, U., 2006. Optical phase noise and carrierenvelope offset noise of mode-locked lasers. Appl. Phys. B Lasers Opt. 82 (2), 265–273. Pelouch, W.S., 2016. Raman amplification: an enabling technology for Long-Haul coherent transmission systems. J. Lightwave Technol. 34 (1), 6–19. Rafailov, E.U., Cataluna, M.A., Sibbett, W., 2007. Mode-locked quantum-dot lasers. Nat. Photonics 1 (7), 395–401. Rosales, R., Merghem, K., Martinez, A., Lelarge, F., Accard, A., Ramdane, A., 2012. Timing jitter from the optical spectrum in semiconductor passively mode locked lasers. Opt. Express 20 (8), 9151–9160. Sato, K., 2003. Optical pulse generation using Fabry-Perot lasers under continuous wave operation. IEEE J. Sel. Top. Quantum Electron. 99 (5), 1288–1296. Saunders, R.A., King, J.P., Hardcastle, O., 1994. Wideband chirp measurement technique for high bit rate sources. Electron. Lett. 30 (16), 1336–1338. Takahashi, H., Oda, K., Toba, H., Inoue, Y., 1995. Transmission characteristics of arrayed waveguide N  N wavelength multiplexer. J. Lightwave Technol. 13, 447–455. Tkach, R.W., Chraplyvy, A.R., 1986. Regimes of feedback effects in 1.5-μm distributed feedback lasers. J. Lightwave Technol. LT-4, 1655–1661. Vedala, G., Al-Qadi, M., O’Sullivan, M., Cartledge, J., Hui, R., 2017. Phase noise characterization of a QD-based diode laser frequency comb. Opt. Express 25 (14), 15890–15904. Vodhanel, R.S., Elrefaie, A.F., Wagner, R.E., Iqbal, M.Z., Gimlett, J.L., Tsuji, S., 1989. Ten-to-twenty gigabit-per-second modulation performance of 1.5-pm distributed feedback lasers for frequency-shiftkeying systems. J. Lightwave Technol. 7 (10), 1454–1460. Vonder Linde, D., 1986. Characterization of the noise in continuously operating mode-locked lasers. Appl. Phys. B Lasers Opt. 39 (4), 201–217. Wang, J., Petermann, K., 1992. Small signal analysis for dispersive optical fiber communication systems. J. Lightwave Technol. 10 (1), 96–100. Xu, L., et al., 2016. 
Experimental verification of relative phase noise in Raman amplified coherent optical communication system. J. Lightwave Technol. 34 (16), 3711–3716. Zhang, K., Wang, J., Schwendeman, E., Dawson-Elli, D., Faber, R., Sharps, R., 2002. Group delay and chromatic dispersion of thin-film-based, narrow bandpass filters used in dense wavelength-divisionmultiplexed systems. Appl. Opt. 41 (16), 3172–3175.

CHAPTER 4

Optical fiber measurement

4.1 Introduction

Optical fiber is an indispensable part of fiber-optic communication systems; it provides a low-loss and wideband transmission medium. The performance of an optical fiber system depends, to a large extent, on the characteristics of the optical fibers. The realization of low-loss optical fibers in the early 1970s made optical fiber communication a viable technology, and the research and development of optical fibers have since become a central focus of the telecommunications industry. In addition to standard multi-mode fiber (MMF) and standard single-mode fiber, many different types of optical fibers have been developed to provide modified chromatic dispersion properties, engineered non-linear properties, and enlarged low-loss windows. Effects such as chromatic dispersion, polarization mode dispersion (PMD), polarization-dependent loss (PDL), and non-linearities, which may be negligible in low-speed optical systems, have become extremely critical in modern optical communications using high-speed time-division multiplexing (TDM) and wavelength-division multiplexing (WDM). The improved fiber parameters and the introduction of new types of optical fibers enabled high-speed and long-distance optical transmission systems as well as various applications of fiber optics such as optical sensing and imaging. Precise measurement and characterization of optical fiber properties are important in the research, development, and fabrication of fibers, as well as in the performance evaluation of optical systems.

This chapter reviews various techniques to characterize the properties of optical fibers, their operating principles, and the comparison among techniques. Section 4.2 briefly reviews various types of optical fibers, including standardized fibers for telecommunications and specialty fibers developed for applications ranging from optical signal processing to optical sensing. Section 4.3 discusses mode-field distributions, the definition of the mode-field diameter (MFD), and techniques to measure near-field and far-field profiles. Section 4.4 introduces fiber attenuation measurement techniques; the primary focus of this section is the operating principle, accuracy considerations, and application of optical time-domain reflectometers (OTDRs). In Section 4.5, we discuss the measurement of fiber dispersion, including modal dispersion in MMFs and chromatic dispersion in single-mode fibers. Both time-domain and frequency-domain measurement techniques are discussed and compared. Section 4.6 reviews a number of techniques for the characterization of PMD in optical fibers, including pulse delay, interferometric methods, Poincare arc length, fixed analyzer, Jones Matrix, and Mueller Matrix methods (MMMs). Understanding these measurement techniques, their pros and cons, and the comparison between them is critical in experimental system design and accuracy assessment. Section 4.7 discusses the measurement of PDL in optical fibers based on the Mueller Matrix technique. Since both the PMD and the PDL in optical systems are randomly varying, the measured results have to be presented in a way that reflects the statistical nature of the process. Section 4.8 discusses implementations of PMD sources and emulators, which are useful instruments in optical transmission equipment testing and qualification. Section 4.9, the last section of this chapter, reviews various fiber non-linear effects and techniques to characterize them.

4.2 Classification of fiber types

Optical fiber is a cylindrical waveguide that supports low-loss propagation of optical signals. The general properties of optical fibers have been discussed in Chapter 1. In recent years, numerous fiber types have been developed and optimized to meet the demands of various applications. Some popular fiber types that are often used in optical communication systems have been standardized by the International Telecommunication Union (ITU-T). The list includes graded-index MMF (G.651), non-dispersion-shifted single-mode fiber (G.652), dispersion-shifted fiber (DSF) (G.653), and non-zero dispersion-shifted fiber (NZDSF) (G.655). In addition to fibers designed for optical transmission, there are also various specialty fibers for optical signal processing, such as dispersion-compensating fibers (DCFs), polarization-maintaining (PM) fibers, photonic crystal fibers (PCFs), and rare-earth-doped active fibers for optical amplification. Unlike transmission fibers, these specialty fibers are less standardized.

4.2.1 Standard optical fibers for transmission

The ITU-T G.651 MMF has a 50-μm core diameter and a 125-μm cladding diameter. The attenuation parameter for this fiber is on the order of 0.8 dB/km at 1310 nm wavelength. Because of its large core size, MMF is relatively easy to handle, with a large misalignment tolerance for optical coupling and connection. However, due to its large modal dispersion, MMF is often used for short-reach and low data-rate optical communication systems. Although this fiber is optimized for use in the 1300-nm band, it can also operate in the 850- and 1550-nm wavelength bands (www.corning.com; www.ofsoptics.com).

The ITU-T G.652 fiber, also known as standard single-mode fiber, is the most commonly deployed fiber in optical communication systems. This fiber has a simple step-index structure with a 9-μm core diameter and a 125-μm cladding diameter. It is single-mode with a zero-dispersion wavelength around λ0 = 1310 nm. The typical chromatic dispersion value at 1550 nm is about 17 ps/nm-km. The attenuation parameter for G.652 fiber is typically 0.5 dB/km at 1310 nm and 0.2 dB/km at 1550 nm. An example of this type of fiber is Corning SMF-28.


Although standard SMF has low loss in the 1550-nm wavelength window, making it suitable for long-distance optical communications, it shows relatively high chromatic dispersion in this window. In high-speed optical transmission systems, chromatic dispersion introduces significant waveform distortion, which may severely degrade system performance. The trend of shifting the transmission wavelength window from 1310 to 1550 nm in the early 1990s initiated the development of DSF. Through proper cross-sectional design, DSF shifts the zero-dispersion wavelength λ0 from 1310 nm to approximately 1550 nm, so that the 1550-nm wavelength window has both the lowest loss and the lowest dispersion. The core diameter of DSF is about 7 μm, slightly smaller than that of standard SMF. DSF is able to significantly extend the dispersion-limited transmission distance if only a single optical channel propagates in the fiber, but it was soon realized that DSF is not suitable for multi-wavelength WDM systems due to the high level of non-linear crosstalk between optical channels through four-wave mixing (FWM) and cross-phase modulation (XPM). For this reason, the deployment of DSF did not last very long.

To reduce chromatic dispersion while maintaining reasonably low non-linear crosstalk in WDM systems, NZDSFs were developed. NZDSF moves the zero-dispersion wavelength outside the 1550-nm window so that the chromatic dispersion for optical signals at 1550 nm is less than that of standard SMF but higher than that of DSF. The basic idea of this approach is to keep a moderate level of chromatic dispersion at 1550 nm, high enough to suppress non-linear crosstalk but low enough to avoid the need for dispersion compensation in the system. There are several types of NZDSF, depending on the selected value of the zero-dispersion wavelength λ0. In addition, since λ0 can be either longer or shorter than 1550 nm, the dispersion at 1550 nm can be either negative (normal) or positive (anomalous). The typical chromatic dispersion for G.655 fiber at 1550 nm is about 4.5 ps/nm-km in magnitude. Although NZDSF usually has a core size smaller than that of standard SMF, which enhances the non-linear effect, some designs, such as Corning LEAF (large effective area fiber), have effective core areas comparable to that of standard SMF, approximately 80 μm².

Fig. 4.2.1 shows typical dispersion versus wavelength characteristics for several major fiber types that have been offered for long-distance links (Demarest et al., 2002). More detailed specifications of these fibers are listed in Table 4.2.1. In optical communication system applications, the debate on which fiber has the best performance can never be settled, because the data rate, optical modulation format, channel spacing, number of WDM channels, and optical power used in each channel may all affect the conclusion. In general, standard SMF has high chromatic dispersion, so the effect of non-linear crosstalk between channels is generally small; however, a large amount of dispersion compensation must be used, introducing excess loss and requiring higher optical gain in the amplifiers. This in turn degrades the optical signal-to-noise ratio (OSNR) at the receiver. On the other hand, low-dispersion fibers may reduce the requirement for dispersion compensation, but at the risk of increased non-linear crosstalk.


(Figure: chromatic dispersion in ps/nm/km versus wavelength from 1520 to 1570 nm for NDSF, TeraLight, LEAF, TW, TW-RS, TW+, TW-, DSF, and LS fibers.)
Fig. 4.2.1 Chromatic dispersions of NDSF and various different non-zero dispersion-shifted fibers (NZDSFs).

Table 4.2.1 Important parameters for standard SMF (NDSF), long-span (LS) fiber, Truewave-REACH, Truewave-RS, large effective area fiber (LEAF), and Teralight. D: chromatic dispersion, S0: dispersion slope, λ0: zero-dispersion wavelength, dMF: mode-field diameter. LEAF fiber dispersion around λR = 1565 nm is calculated from D(λ) = D(1530 nm) + {[D(λR) - D(1530 nm)]/(35 nm)}(λ - 1530 nm).

Fiber type      | ITU standard     | D at 1550 nm (ps/nm/km) | S0 (ps/nm²/km) | λ0 (nm) | dMF (μm)
NDSF (SMF-28)   | NDSF/G.652       | 16.70                   |                |         |
LS              | NZDSF/G.655      | -1.60                   |                |         |
Truewave REACH  | NZDSF/G.655      | 5.5–8.9                 |                |         |
Truewave-RS     | NZDSF/G.655.E    | 2.6–6                   |                |         |
LEAF            | NZDSF/G.655      | 4 (-1)                  |                |         |
TERALIGHT       | NZDSF/G.655      | 8.0                     |                |         |
AllWave         | G.652D, G.657A1  | 16.5                    |                |         |

W_0^2/λ. In general, if the far-field angular radiation pattern E_H(θ) can be measured, then according to the properties of the Hankel transform, the near-field distribution can be obtained by the inverse Hankel transform,

E(r) = \int_0^\infty E_H(q)\, J_0(2\pi q r)\, q\, dq    (4.3.6)

where q = (sin θ)/λ is commonly referred to as the spatial frequency. Similar to the MFD for the near field as defined by Eq. (4.3.2), the MFD D_0 for the far field can also be defined as

2D_0 = 2\left[\frac{2\int_0^\infty \rho^3 E_H^2(\rho)\, d\rho}{\int_0^\infty \rho E_H^2(\rho)\, d\rho}\right]^{1/2}    (4.3.7)

where ρ = R sin θ is the radial distance between the observation point and the center of the observation plane. Both the near-field and the far-field MFDs are determined by the distribution of the optical power density over the fiber cross section; thus, the most important task is the precise measurement of the actual mode-field distribution. There are a number of techniques to measure the mode-field distribution, such as the far-field scanning technique and the near-field scanning technique. Since the relationship between the near-field and far-field diameters is deterministic through Eqs. (4.3.2)–(4.3.7), the measurement of either one is sufficient and the other can be derived from it.
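As a concrete illustration of Eqs. (4.3.6) and (4.3.7), the short Python sketch below computes the far-field MFD from a sampled angular field profile and recovers the corresponding near-field distribution by a numerical inverse Hankel transform. The angular profile, observation distance, and sample grids are placeholder assumptions, not measured data.

```python
import numpy as np
from scipy.special import j0

# Placeholder far-field samples standing in for a measured angular scan E_H(theta).
wavelength = 1.55e-6                          # 1550-nm source (assumed)
theta = np.linspace(0.0, np.radians(30), 600) # radiation angle (rad)
E_H = np.exp(-(theta / np.radians(5.0))**2)   # assumed Gaussian-like angular field profile

# Far-field MFD on an observation plane at distance R, following Eq. (4.3.7) with rho = R*sin(theta).
R = 0.1                                       # 10-cm observation distance (assumed)
rho = R * np.sin(theta)
mfd_far = 2.0 * np.sqrt(2.0 * np.trapz(rho**3 * E_H**2, rho) / np.trapz(rho * E_H**2, rho))

# Near-field distribution from the inverse Hankel transform of Eq. (4.3.6),
# with spatial frequency q = sin(theta)/wavelength.
q = np.sin(theta) / wavelength
r = np.linspace(0.0, 15e-6, 200)              # radial position on the fiber facet (m)
E_near = np.array([np.trapz(E_H * j0(2*np.pi*q*ri) * q, q) for ri in r])
E_near /= E_near.max()                        # normalized near-field profile E(r)

print(f"far-field mode-field diameter on the observation plane: {mfd_far*1e3:.2f} mm")
```

In practice, E_H(θ) would come from the angular scan described in the next subsection, and the near-field MFD would then follow from Eq. (4.3.2) applied to the recovered E(r).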

4.3.2 Far-field measurement techniques

Fig. 4.3.3 shows the block diagram of the far-field measurement setup. The lightwave signal from a laser diode is coupled into the input end of the optical fiber through a focusing lens. A cladding mode filter is usually necessary to remove the optical power carried in the cladding of the fiber and to accelerate the process of reaching the equilibrium mode-field distribution. The aperture of the optical detection head determines the spatial resolution of the measurement; the head can be a pinhole in front of a photodiode or simply an optical fiber acting as the probe. The normalized far-field power distribution P_angle(θ) = u|E_H(θ)|², with E_H(θ) as defined by Eq. (4.3.5), is obtained by scanning the angular position of the detector head, where u is a constant representing the detector efficiency and the aperture size.

Fig. 4.3.3 Far-field measurement setup using angular scanning: the light source launches into the fiber through a cladding mode filter, and a detector head mounted on an angular scanning stage and connected to a power meter is rotated through angle θ at a distance R from the fiber end.

Fig. 4.3.4 shows examples of measured angular distributions of far-field power density for three different types of fibers. To make a precise estimate of the mode diameter, the far-field power distribution has to be measured over a wide angular range, covering at least the entire main lobe. One practical problem is that the absolute power density |Ψ(R, θ)|² of the far field is inversely proportional to the square of the distance R, as indicated by Eq. (4.3.4); thus, the measured signal power levels will be very weak if the detector is placed far from the fiber facet. On the other hand, the definition of "far field" requires the detector to be far away from the fiber facet such that R >> W_0^2/λ.

Fig. 4.3.4 Examples of measured angular distribution of far-field power density (normalized, in dB, versus radiation angle θ in degrees) of conventional single-mode step-index fiber at λ = 1300 nm (solid line), dispersion-shifted fiber at λ = 1300 nm (dot-dashed line), and dispersion-flattened fiber at λ = 1550 nm (dashed line) (Artiglia et al., 1989). (Used with permission.)

Fig. 4.3.5 Far-field measurement setup using a planar detection array: the light source launches into the fiber through a cladding mode filter, and a CCD array placed at a distance z from the fiber facet records the radiation pattern for display.

Meanwhile, acceptable spatial resolution of the measurement requires the use of a small pinhole in front of the detector, which further reduces the optical power reaching the detector. To make things worse, the power level is even weaker when the detector is placed at large angles with respect to the fiber axis. Therefore, very sensitive detectors with extremely low noise levels have to be used in these measurements, and lock-in amplifiers are used in most practical implementations. The detection and amplification circuitry must also have a reasonably large dynamic range to ensure measurement accuracy. As shown in Fig. 4.3.4, the dynamic range has to be higher than 55 dB to clearly show the side lobes of the mode distribution.

Fortunately, the mode-field distribution is stationary and the measurement does not require high-speed detection. Therefore, the far-field detection can also be accomplished using a two-dimensional charge-coupled device (CCD) array, which is basically a video camera. The mode-field distribution at various emission angles can be simultaneously detected by the sensing pixels of the camera and processed in parallel. This avoids the need to mechanically scan the position of the detector head, which could easily introduce uncertainties. As shown in Fig. 4.3.5, since the CCD is a planar array, the far-field power density P_plane(z, ρ) measured on the plane is slightly different from the one measured by angular scanning, and their relationship is

P_{plane}(z, \rho) \propto P_{angle}(\theta)\cos^3\theta    (4.3.8)

where ρ = z tan(θ) is the radial distance from the center of the array plane. The factor cos²θ in Eq. (4.3.8) comes from the fact that R = z/cos θ and the far-field power density is proportional to R^{-2}, as shown in Eq. (4.3.4). The other cos θ factor comes from the tilt of the detector surface normal, which makes an angle θ with respect to the incident light ray. Instrumentation is now commercially available for the measurement of mode-field distributions (www.photon-inc.com). Fig. 4.3.6 shows examples of the measured far-field profiles of two different fibers using the far-field imaging technique. Obviously, the two mode-field profiles are different, resulting from the difference in the index profiles of the two fibers.
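The planar-to-angular correction of Eq. (4.3.8) is straightforward to apply to CCD data. The sketch below uses a synthetic readout as a placeholder; z is an assumed fiber-to-array distance.

```python
import numpy as np

# Sketch of the cos^3(theta) correction of Eq. (4.3.8); the CCD readout is synthetic.
z = 0.05                                    # fiber facet to CCD plane distance (m), assumed
rho = np.linspace(0.0, 0.03, 300)           # radial pixel position on the CCD plane (m)
P_plane = np.exp(-2.0 * (rho / 0.008)**2)   # assumed measured power density on the plane

theta = np.arctan(rho / z)                  # rho = z*tan(theta)
P_angle = P_plane / np.cos(theta)**3        # proportional to the angular distribution P_angle(theta)

P_angle_dB = 10 * np.log10(P_angle / P_angle.max())   # normalized, in dB, as in Fig. 4.3.4
```

The correction matters mainly at large angles; at a radiation angle of 30 degrees the cos³θ factor amounts to about 1.9 dB.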

Fig. 4.3.6 Examples of the measured far-field profiles of two different fibers (Guttman, 2002). (Used with permission.)

4.3.3 Near-field measurement techniques

Fig. 4.3.7 Near-field measurement setup using a transversal translation stage and a lock-in amplifier: the light source (with current control) launches into the fiber through a cladding mode filter, and a probe on a translation stage samples the magnified image; the photodiode (PD) output is detected with a lock-in amplifier.

As signified by its definition, the near field is the radial distribution of the electric field on the fiber end facet. Since the transverse dimension of a fiber facet is very small, it is not

feasible to directly measure the field distribution using a probe. It is usually necessary to magnify the image of the fiber exit facet with a micro-objective lens. A block diagram of a near-field measurement setup is shown in Fig. 4.3.7, where an objective lens magnifies the image of the fiber exit facet and an optical probe mounted on a translation stage scans across the image plane. A photodiode converts the optical signal sampled by the optical probe into a voltage signal. In this measurement, the scanning probe can simply be a piece of single-mode fiber with a core diameter of approximately 9 μm. To make accurate measurements with enough sampling points, the magnification of the optical system has to be large enough, say 100×, with good linearity. A careful calibration has to be made because the absolute value of the image magnification is critical in determining the MFD of the fiber under test. Similar power budget concerns as in the case of far-field measurement also exist in the near-field measurement: for a 100× magnification of the optical system, the power density on the image plane is 10⁴ times (or equivalently 40 dB) lower than that on the fiber exit facet. In addition to having a large dynamic range, the noise level in the detection system has to be extremely low; therefore, a lock-in amplifier is usually required. Another consideration of the near-field measurement setup is the numerical aperture of the magnification system. Due to diffraction, the detailed features of the near-field


distribution on the imaging plane may be smoothed out. Based on the Rayleigh criterion, to be able to resolve two adjacent points separated by d on the object plane, the numerical aperture of the optical system has to be NA_opt ≥ 0.61λ/d. As an example, if the signal wavelength carried in the fiber is 1550 nm, to resolve 10 sampling points along a fiber core diameter of 9 μm (d = 1 μm), the numerical aperture of the optical system has to be NA_opt ≥ 0.95. Again, similar to the far-field measurement setup shown in Fig. 4.3.5, the scanning optical probe, the translation stage, and the photodiode can be replaced by a two-dimensional CCD array. This eliminates the repeatability concern and position uncertainties due to the translation stage, and the measurement is also much faster due to parallel sampling and signal processing. As an example, Fig. 4.3.8 shows the measured near-field image on the fiber facet together with the near-field intensity profiles in the horizontal and vertical directions. In recent years, advances in digital signal processing (DSP) and high-sensitivity CCD technology have greatly simplified the processing and analysis of digital images, allowing real-time observation of the far-field and near-field images (www.photon-inc.com). In principle, the far field and the near field are related, both being determined by the optical field distribution E(r) on the fiber cross section. In practice, near-field measurement requires a large-magnification lens, which may introduce image distortion in the measurement. In comparison, far-field measurement is more straightforward, requiring only a scanning probe.

Fig. 4.3.8 Examples of the measured near-field profile with the numerical values in the horizontal and vertical directions (scale: 1 μm/div) (Guttman, 2002). (Used with permission.)

If the mechanical scanning stage is stable enough, far-field measurement can potentially be more accurate, with better signal-to-noise ratio (SNR) and dynamic range, which can be observed by comparing Fig. 4.3.6 with Fig. 4.3.8 (Guttman, 2002). There are a number of alternative techniques to measure fiber MFD, such as the transverse offset method, the variable aperture method, and the mask method. The details of these methods can be found in (Neumann, 1988).

4.4 Fiber attenuation measurement and OTDR

Optical attenuation in an optical fiber is one of the most important issues affecting all applications that use optical fibers. A number of factors may contribute to fiber attenuation, such as material absorption, optical scattering, micro- or macro-bending, and interface reflection and connection loss. Some of these factors are uniform, whereas others vary along the fiber, especially if different spools of fiber are fusion-spliced together. Characterization of fiber attenuation is fundamental to optical system design, implementation, and performance estimation.

4.4.1 Cutback technique

In the early days, the cutback technique was often used to measure fiber attenuation. As illustrated in Fig. 4.4.1, the cutback technique measures fiber transmission loss at different lengths. Suppose the attenuation coefficient α is uniform; the power distribution along the fiber is P(z) = P_0 e^{-\alpha z}. With the same amount of optical power coupled into the fiber, if the output powers measured at fiber lengths L1, L2, and L3 are P1, P2, and P3, respectively, the fiber attenuation coefficient can be calculated as α = [ln(P2/P1)]/(L1 - L2) or α = [ln(P3/P2)]/(L2 - L3).

For a single-mode fiber, there are only two orthogonal fundamental modes and the differential attenuation between them is generally negligible. For a MMF, on the other hand, there are literally hundreds of propagation modes, and different modes may have different attenuation coefficients. Therefore, the launching condition from the source into the fiber is an important consideration in the loss measurement, especially when comparing results measured by different laboratories. A mode scrambler can be used to stabilize the power distribution over the guided modes and to strip out the cladding modes.

Fig. 4.4.1 Illustration of the cutback technique to measure fiber loss: the same source launches through a mode scrambler into fibers of length L1, L2, and L3, and the output powers P1, P2, and P3 are recorded with a power meter.

Fig. 4.4.2 Technique to selectively measure the attenuation coefficients of the fundamental mode (LP01) and the second-order mode (LP11).

It is well known that a single-mode fiber becomes multi-mode if the signal wavelength is shorter than the cutoff wavelength. Immediately below the cutoff wavelength, the second-order LP11 mode starts to exist in addition to the fundamental mode. The cutback technique can be used to evaluate the single-mode condition in a fiber by measuring the attenuation coefficient of the second-order LP11 mode. Fig. 4.4.2 illustrates a technique for measuring the attenuation coefficients of both the fundamental LP01 mode and the second-order LP11 mode, based on the fact that the bending loss of the LP11 mode is much higher than that of the LP01 mode (Ohashi et al., 1984). The measurement procedure is as follows: First, bend the fiber with a small radius (1–2 cm) near the input end and measure the powers P1 and P2 at fiber lengths of L1 + L2 and L1, respectively. In this case, the bend acts as a mode filter that removes the power carried by the LP11 mode while introducing negligible loss for the fundamental LP01 mode. The attenuation coefficient of the LP01 mode can be obtained as

\alpha_{01} = \frac{1}{L_2}\ln\left(\frac{P_2}{P_1}\right)    (4.4.1)

Next, release the bend and measure the powers of the straight fiber at the same locations. In this case, the measured powers P3 and P4 include both the LP01 mode and the LP11 mode. Then, bend the fiber near the output end so that the measured power P5 includes only the LP01 mode. From these measurements, the attenuation coefficient of the LP11 mode can be found as

\alpha_{11} = \frac{1}{L_2}\ln\left(\frac{P_1 P_4 - P_2 P_5}{P_1 P_3 - P_1 P_5}\right)    (4.4.2)

The cutback measurement technique is simple, direct, and very accurate; however, it has several disadvantages. First, cutting the fiber step by step will certainly make the fiber


less usable. In particular, if the fiber loss is low, a very long fiber has to be used to obtain measurable losses. For example, if the attenuation coefficient of the fiber is α = 0.25 dB/km, the output optical power only increases by 0.025 dB when 100 m of fiber is cut off, and obviously the accuracy of the measurement is then limited by the accuracy and resolution of the power meter. In addition, it would be difficult to use the cutback technique to evaluate fibers after cabling; it is even impossible to measure in-service fibers that are already buried underground. Therefore, non-destructive techniques, such as OTDR, are more practical.
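The cutback arithmetic above, including the bending-assisted separation of the LP01 and LP11 attenuation coefficients in Eqs. (4.4.1) and (4.4.2), reduces to a few lines of code. The helpers below are a minimal sketch; the power readings and lengths are hypothetical.

```python
import math

def alpha_cutback_dB(P_long_mW, P_short_mW, L_long_km, L_short_km):
    """Attenuation (dB/km) from two cutback readings taken with the same launch condition."""
    return 10.0 * math.log10(P_short_mW / P_long_mW) / (L_long_km - L_short_km)

def alpha_lp01_lp11(P1, P2, P3, P4, P5, L2_km):
    """LP01 and LP11 attenuation (1/km) from the five readings of Fig. 4.4.2,
    following Eqs. (4.4.1) and (4.4.2)."""
    a01 = math.log(P2 / P1) / L2_km
    a11 = math.log((P1 * P4 - P2 * P5) / (P1 * P3 - P1 * P5)) / L2_km
    return a01, a11

# Hypothetical readings (mW): bend near input (P1 at L1+L2, P2 at L1), bend released (P3, P4),
# bend near output (P5), with L2 = 1 km between the two measurement lengths.
a01, a11 = alpha_lp01_lp11(P1=0.80, P2=0.81, P3=0.90, P4=0.95, P5=0.80, L2_km=1.0)
print(f"alpha01 = {4.343*a01:.3f} dB/km, alpha11 = {4.343*a11:.2f} dB/km")
```

The factor 4.343 = 10/ln(10) converts the natural attenuation coefficient to dB/km, consistent with the conversion used later in the OTDR equations.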

4.4.2 Optical time-domain reflectometers

Rayleigh backscattering is one of the most important linear effects in a single-mode optical fiber; it sets a fundamental limit on fiber loss and is responsible for the major part of the attenuation in modern optical fibers, in which other losses are already minimized. Meanwhile, Rayleigh backscattering can also be utilized to measure the attenuation distribution along the fiber (Barnoski and Jensen, 1976). In this backscattering measurement technique, a short, high-peak-power optical pulse train is launched into the fiber and the waveform of the backscattered optical signal from the fiber is detected, providing detailed local loss information throughout the fiber. An important advantage of this technique is that it only needs access to a single end of the fiber, because both the source and the detector are collocated, so the method is non-destructive.

Since Rayleigh backscattering is a linear process, at any point along the fiber the magnitude of the backscattered optical power is linearly proportional to the optical power at that location. Due to the effect of fiber loss, both the transmitted and the backscattered powers are gradually attenuated along the fiber. The measurement of the time-dependent waveform of the backscattered power at the fiber input terminal provides information about the loss distribution along the fiber, which can be used to precisely calculate the attenuation coefficient. Fig. 4.4.3 shows the block diagram of an OTDR. A laser diode is driven by an electrical pulse generator that produces a train of short optical pulses. A photodiode is used to detect the backscattered optical power from the fiber through a directional coupler. The detected waveform of the optical signal is amplified, digitized through an ADC, and analyzed by a DSP unit. The timing of the DSP is synchronized with the source optical pulses so that the propagation delay of each backscattered pulse can be precisely calculated.

Fig. 4.4.3 Block diagram of an OTDR. PD: photodetector, ADC: analog-to-digital converter.

In general, the loss in an optical fiber may be caused by absorption, scattering, bending, and connections. Most of these effects can be non-uniform along the fiber, especially if different fiber spools are connected together. Therefore, the fiber attenuation coefficient is a function of position along the fiber. The power distribution along the fiber can be expressed as

P(z) = P(0)\exp\left\{-\int_0^z \left[\alpha_0(x) + \alpha_{SC}(x)\right]dx\right\}    (4.4.3)


where P(0) is the input optical power, α_SC(x) is the attenuation coefficient due to Rayleigh scattering, and α_0(x) accounts for all attenuation effects other than scattering along the fiber. Suppose an optical pulse of width τ is injected into the fiber at time t_0; neglecting the effects of chromatic dispersion and fiber non-linearity, the locations of the pulse leading edge and trailing edge along the fiber at time t are z_le = v_g(t - t_0) and z_tr = v_g(t - t_0 - τ), respectively, where v_g is the group velocity of the optical pulse. Then, consider the Rayleigh backscattering generated by a short fiber section of length dz. According to Eq. (4.4.3), the optical power loss due to Rayleigh scattering within this short fiber section is

dP_{SC}(z) = P(z)\,\alpha_{SC}(z)\,dz    (4.4.4)

Only a small fraction of this scattered energy is coupled into the guided mode that propagates back toward the input side of the fiber. Therefore, the reflected optical power that originates from location z and reaches the input end of the fiber can be calculated as

dP_{BS}(z) = P(0)\,\eta\,\alpha_{SC}(z)\,dz\,\exp\left[-2\int_0^z \alpha(x)dx\right]    (4.4.5)

where α(x) = α_0(x) + α_SC(x) is the composite attenuation coefficient of the fiber, which includes both scattering and other attenuation effects. η = (1 - cos θ_1)/2 is the conversion efficiency from the scattered light to the light captured by the fiber, as explained by Eq. (1.4.59), and θ_1 is the maximum trace angle of the guided mode in the fiber, which is proportional to the numerical aperture, as illustrated in Fig. 1.4.11. More precisely, if the normalized frequency V is within 1.5 < V < 2.4, this conversion efficiency can be estimated as (Nakazawa, 1983)

\eta \approx \frac{1}{4.55}\left(\frac{NA}{n_1}\right)^2    (4.4.6)

where n_1 is the refractive index of the fiber core. As an example, in a silica-based single-mode fiber with NA = 0.11 and n_1 = 1.45, the conversion efficiency is approximately η ≈ 0.13%.

Fig. 4.4.4 Illustration of the locations of the signal optical pulse, the scattering section, and the backscattered pulse in the fiber.

As illustrated in Fig. 4.4.4, at time t_1 the backscattered optical signal that reaches the input terminal of the fiber originates from a short fiber section of length Δz/2 = (z_le - z_tr)/2, and its amplitude is therefore

P_{BS}(z) = P(0)\,\eta\,\alpha_{SC}(z)\,\frac{v_g\tau}{2}\exp\left[-2\int_0^z \alpha(x)dx\right]    (4.4.7)

where Δz = v_gτ has been used. Assuming that both the scattering loss α_SC and the capture coefficient η are constant along the fiber, the attenuation coefficient at location z can be evaluated from the differential return loss from Eq. (4.4.7):

\alpha_{dB}(z) = -\frac{1}{2}\frac{d}{dz}\left[\frac{P_{BS}(z)}{P(0)}\right]_{dB} = -\frac{4.343}{2}\frac{d}{dz}\ln\left[\frac{P_{BS}(z)}{P(0)}\right]    (4.4.8)

where the attenuation coefficient α_dB is in dB per unit length and (P_BS(z)/P(0))_dB is the normalized return loss, also measured in dB. It is worthwhile to note that the slope of the return loss as a function of z alone is sufficient to determine the fiber attenuation coefficient. However, the source power P(0), the pulse duration τ, and the efficiency of


capturing the backscattered power into the guided mode, ηα_SC, are important for determining the actual scattered optical power that reaches the receiver, and thus for evaluating the SNR. Although the Rayleigh backscattering power P_BS is usually measured as a function of time, it is straightforward to convert the time scale into the fiber length scale z based on the well-known group velocity of the optical pulse.

The following is an example of an OTDR measurement. A standard single-mode fiber has the attenuation coefficient α_dB = 0.25 dB/km in the 1550 nm wavelength window. Assume that the attenuation is uniform along the fiber and that a large part of this attenuation is caused by Rayleigh scattering, such that α_SC,dB = 0.2 dB/km and α_0,dB = 0.05 dB/km. We also assume that the efficiency of capturing the backscattered power into the guided mode is η = 1.19 × 10⁻³ and that the signal optical pulse launched into the fiber has a peak power of P(0) = 10 mW. The backscattered optical power that reaches the receiver at the fiber input terminal is

P_{BS}(z) = P(0)\,\eta\,\frac{v_g\tau}{2}\,\frac{\alpha_{SC,dB}(z)}{4.343}\exp\left[-2\int_0^z \alpha(x)dx\right] = 5.59\times 10^{-3}\,\tau\,\exp\left(-2\,\frac{0.25}{4.343}\,z\right)    (4.4.9)

where τ is the pulse width and v_g = 2.04 × 10⁸ m/s has been used, considering that the refractive index of silica is approximately 1.47. Fig. 4.4.5 shows the backscattered optical power that reaches the receiver at the fiber input terminal as a function of the pulse location z along the fiber, or equivalently as a function of time t.

Fig. 4.4.5 The backscattered power P_BS (in dBm) that reaches the receiver as a function of the location of the signal optical pulse along the fiber, for pulse widths of 1 μs and 100 ns. The input pulse peak power is 10 dBm.

Due to the low capture efficiency of the backscattered optical power by the fiber propagating mode, the level of the received signal power is at least 60 dB lower than the


original optical signal peak power when the pulse width is 1 μs. This power level decreases linearly with further reduction of the pulse width. Therefore, in practical implementations there are very stringent requirements on the receiver sensitivity, dynamic range, and linearity.

If two fibers are connected together, either by a connector or by fusion splicing, there will be a loss at the connection point. In addition, because of the discontinuity, possibly caused by an air gap if a connector is used, there is usually a discrete reflection peak in the OTDR trace. The connection loss can be evaluated straightforwardly from OTDR measurements. Assuming that the two fibers are identical, with a uniform attenuation coefficient, the backscattered optical power measured by the OTDR is

P_{BS}(z) = \begin{cases} P(0)\,\eta\,\dfrac{v_g\tau}{2}\,\dfrac{\alpha_{SC,dB}}{4.343}\,e^{-2\alpha z} & (z < L_s) \\[4pt] A_s^2\,P(0)\,\eta\,\dfrac{v_g\tau}{2}\,\dfrac{\alpha_{SC,dB}}{4.343}\,e^{-2\alpha z} & (z > L_s) \end{cases}    (4.4.10)

where L_s is the location and A_s is the fractional loss of the connection. A_s is squared in Eq. (4.4.10) because of the roundtrip pass of the optical signal across the connection point. Therefore, the connection loss can be estimated from the measured OTDR trace simply by comparing the backscattered optical power immediately before and immediately after the connection point, as illustrated in Fig. 4.4.6:

A_s = \left.\sqrt{\frac{P_{BS}(L_s+\delta)}{P_{BS}(L_s-\delta)}}\right|_{\delta\to 0}    (4.4.11)

where δ is a very short fiber length. However, this measurement may not be accurate if the two fibers are not of the same type. In fact, even if they are of the same type, different fiber spools may still have slightly different numerical apertures or different attenuation coefficients.

Fig. 4.4.6 Illustration of an OTDR trace for a fiber with a splice at z = L_s joining sections with numerical apertures NA1 and NA2.

In this more practical case, the backscattered optical powers measured immediately before and after the splicing point are

P_{BS}(L_s-\delta) = P(0)\,\eta_1\,\frac{v_g\tau}{2}\,\frac{\alpha_{SC1,dB}}{4.343}\,e^{-2\alpha_1 L_s}    (4.4.12a)

P_{BS}(L_s+\delta) = A_s^2\,P(0)\,\eta_2\,\frac{v_g\tau}{2}\,\frac{\alpha_{SC2,dB}}{4.343}\,e^{-2\alpha_1 L_s}    (4.4.12b)

where η_1 and η_2 are the conversion efficiencies defined by Eq. (4.4.6) for the first and second fiber sections, and α_SC1 and α_SC2 are the scattering coefficients of the two fiber sections, respectively. Since δ → 0, the fiber attenuation within the second fiber section is negligible; therefore, only α_1, the loss coefficient of the first fiber section, appears in Eq. (4.4.12). Obviously,

A_{s,L} = \left.\sqrt{\frac{P_{BS}(L_s+\delta)}{P_{BS}(L_s-\delta)}}\right|_{\delta\to 0} = A_s\sqrt{\frac{\eta_2\,\alpha_{SC2,dB}}{\eta_1\,\alpha_{SC1,dB}}}    (4.4.13)

Eq. (4.4.13) may not accurately predict the splicing loss; the measurement error is caused by the difference in the scattering and capture efficiencies between the two fiber sections. To overcome this measurement error, a useful technique often used in fiber testing is to perform the OTDR measurement from both ends of the fiber. First, if the OTDR is launched from the left side of the fiber shown in Fig. 4.4.6, the measured OTDR trace in the vicinity of the splicing point, z = L_s, is given by Eq. (4.4.12). As the second step, the OTDR measurement is launched from the right side of the fiber. The measured backscattering values immediately after and before the splicing point are

P'_{BS}(L_s-\delta) = A_s^2\,P(0)\,\eta_1\,\frac{v_g\tau}{2}\,\frac{\alpha_{SC1,dB}}{4.343}\,e^{-2\alpha_2 L'_s}    (4.4.14a)

P'_{BS}(L_s+\delta) = P(0)\,\eta_2\,\frac{v_g\tau}{2}\,\frac{\alpha_{SC2,dB}}{4.343}\,e^{-2\alpha_2 L'_s}    (4.4.14b)

where α_2 is the loss coefficient and L'_s is the length of the second fiber section (from the splicing point to the right end). From this second measurement, one can obtain

A_{s,R} = \left.\sqrt{\frac{P'_{BS}(L_s-\delta)}{P'_{BS}(L_s+\delta)}}\right|_{\delta\to 0} = A_s\sqrt{\frac{\eta_1\,\alpha_{SC1,dB}}{\eta_2\,\alpha_{SC2,dB}}}    (4.4.15)

Because the actual connection loss is reciprocal and should be independent of the direction from which the OTDR signal is launched, the value of A_s in Eqs. (4.4.13) and (4.4.15) should be identical. Combining the results of these two measurements, the correct connection loss can be obtained as

A_s = \sqrt{A_{s,L}\,A_{s,R}}    (4.4.16)

This automatically removes the measurement error introduced by fiber type mismatch (O'Sullivan and Lowe, 1986).

Fig. 4.4.7 Illustration of an OTDR trace (P_BS in dB versus fiber length in km) for a fiber with multiple sections; the numbered events mark connectors and splices.

In general, an optical fiber link may consist of many sections that are concatenated by splicing and connecting. Fig. 4.4.7 illustrates a typical OTDR trace for a fiber composed of multiple sections. In this case, each fiber section may have a different attenuation coefficient, which can be seen from the slope of that section on the OTDR trace. The sharp peaks in the OTDR trace indicate the locations and strengths of reflections from connectors, while splices simply introduce attenuation and their reflection is usually small. In addition, negative attenuations can be found at a number of fiber splices (events 8, 12, 15, and 17 in Table 4.4.1). This is the result of a change in the numerical aperture between fiber sections. Therefore, a more accurate evaluation of splicing loss has to use measurements from both directions, as indicated in Eq. (4.4.16).

For accurate fiber attenuation measurement using an OTDR, the receiver SNR is a very important concern; it limits the total fiber length the OTDR can measure. For example, for a high-quality receiver with a noise-equivalent power of NEP = 0.5 pW/√Hz, if the receiver bandwidth is 1 MHz, the signal optical power that must reach the receiver to achieve SNR = 1 is P_BS = 0.5 nW = -63 dBm. Because the noise is random while the signal is deterministic, the SNR can be improved by averaging. In general, averaging over N pulses increases the SNR by a factor of √N. Therefore, an OTDR measurement usually takes time, from a few seconds to a few minutes, depending on the number of averages required.
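To make the slope and splice-loss arithmetic above concrete, the sketch below builds synthetic return-loss traces for a two-section fiber launched from both ends, recovers the first-section attenuation from the trace slope (Eq. 4.4.8), and averages the two apparent splice losses as in Eq. (4.4.16). All fiber parameters and backscatter levels are illustrative assumptions, not data from the figures.

```python
import numpy as np

L1, L2 = 25.0, 25.0            # section lengths (km), assumed
a1, a2 = 0.25, 0.22            # one-way attenuation of each section (dB/km), assumed
off1, off2 = -50.0, -49.0      # backscatter levels set by eta*alpha_sc of each section (dB), assumed
splice = 0.10                  # true one-way splice loss (dB), assumed
z = np.linspace(0.0, L1 + L2, 5001)

def trace(first_len, a_first, a_second, off_first, off_second):
    """Normalized return loss (dB) versus distance, seen from one end of the spliced fiber."""
    before = z < first_len
    one_way = np.where(before, a_first * z,
                       a_first * first_len + splice + a_second * (z - first_len))
    return np.where(before, off_first, off_second) - 2.0 * one_way

left = trace(L1, a1, a2, off1, off2)    # OTDR launched from the left end
right = trace(L2, a2, a1, off2, off1)   # OTDR launched from the right end

# Section-1 attenuation from the trace slope, Eq. (4.4.8): alpha_dB = -(1/2) d(trace)/dz.
win = (z > 2.0) & (z < L1 - 2.0)
a1_est = -0.5 * np.polyfit(z[win], left[win], 1)[0]

# Apparent splice loss from each launch direction (each biased by the off1/off2 mismatch),
# then the bidirectional average of Eq. (4.4.16), which is a plain average in dB.
As_L = -0.5 * (left[z > L1][0] - left[z < L1][-1])
As_R = -0.5 * (right[z > L2][0] - right[z < L2][-1])
print(f"alpha1 ~ {a1_est:.3f} dB/km, splice loss ~ {0.5*(As_L + As_R):.3f} dB")
```

With the assumed backscatter-level mismatch, one direction reports a "gainer" (negative apparent loss) and the other an exaggerated loss, while their average recovers the true 0.10 dB splice.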

4.4.3 Improvement considerations of OTDR

One possibility for improving the sensitivity of an OTDR is to use coherent detection. Unlike direct detection, in which the SNR is mainly limited by the receiver thermal noise and the dark current noise of the photodiode, the SNR in coherent detection is

Table 4.4.1 Explanation of the OTDR trace in Fig. 4.4.7 (23 events).

Event: 1–23
Distance (km): 0.98226, 7.53062, 13.75157, 16.20721, 20.13623, 26.19347, 30.44991, 32.33257, 37.32570, 38.63537, 44.93817, 51.07727, 57.38007, 63.60102, 68.92157, 75.14252, 81.11791, 98.47108, 110.74927, 119.83513, 122.94560, 129.98510
Attenuation (dB): 0.545, 0.168, 0.219, 0.532, 0.100, 0.131, 0.098, 0.159, 0.037, 0.187, 0.076, 0.181, 0.234, 0.153, 0.066, 0.188, 0.012, 0.407, 0.165, 0.372, 0.692
Reflectance (dB): 41.09, 9.81

90 dB of dynamic range to allow the measurement of long fibers with high losses. Another practical issue is the Fresnel reflection at the fiber input terminal. For a well-cleaved fiber end facet, considering the refractive indices of air (n = 1) and silica (n = 1.47), the Fresnel reflection is approximately 3.6%, or -14.4 dB, which is several orders of magnitude higher than the Rayleigh backscattering from inside the fiber. Therefore, most OTDR designs have to block this front-end reflection using proper time gating. This usually introduces difficulty in the measurement of fiber characteristics near the input end, a region commonly referred to as the dead zone. Nevertheless, the OTDR is by far the most popular instrument for fiber attenuation measurement, fiber length measurement, and troubleshooting in fiber-optic systems.

4.5 Fiber dispersion measurements

The phenomena of dispersion in optical fibers, their creation mechanisms, and their consequences in fiber-optic systems were discussed briefly in Chapter 1. Fiber dispersion can be categorized into intermodal dispersion and chromatic dispersion. Intermodal dispersion is caused by the fact that different propagation modes in a fiber travel at different speeds. Usually, a large number of modes coexist in a MMF; therefore, intermodal dispersion is the major source of dispersion in MMFs. Chromatic dispersion originates from the fact that different frequency components within each propagation mode may travel at slightly different speeds, and therefore it is the dominant dispersion source in single-mode fibers. Both intermodal dispersion and chromatic dispersion may cause optical pulse broadening and waveform distortion in fiber-optic systems.


4.5.1 Intermodal dispersion and its measurement

In a multi-mode optical fiber, the optical signal is carried by many different propagating modes. Each mode has its own spatial field distribution and its own propagation constant in the longitudinal direction. Because different modes have different propagation speeds, they have different arrival times after traveling through the fiber. Fig. 4.5.1 shows a ray trace of propagation modes in a MMF, where each mode travels at a different trace angle. According to Fresnel refraction theory, total internal reflection at the core/cladding interface requires the incidence angle to satisfy θ ≥ θ_c = sin⁻¹(n_2/n_1). Obviously, the slowest propagation mode is the one with incidence angle θ = θ_c at the core/cladding interface, whereas the fastest propagation mode is the one that propagates exactly in the direction of the fiber axis (θ = π/2). Thus, to a first approximation, the maximum intermodal dispersion can be estimated from the propagation delay difference between these two extreme ray traces. The maximum differential delay for a fiber of unit length is

D_{mod} = \frac{n_1}{c}\left(\frac{1}{\sin\theta_c} - 1\right) = \frac{n_1}{c}\,\frac{n_1 - n_2}{n_2}    (4.5.1)

The unit of intermodal dispersion is [ns/km]. Eq. (4.5.1) indicates that intermodal dispersion is proportional to the index difference between the fiber core and cladding. For example, for a MMF with a numerical aperture of NA = 0.2 and n_1 = 1.5, the absolute maximum intermodal dispersion is approximately 45 ns/km. However, in practice, most of the optical signal energy in a MMF is carried by lower-order modes, whose intermodal dispersion is much lower than predicted by Eq. (4.5.1). The actual intermodal dispersion value in a MMF is on the order of 200 ps/km.

Fig. 4.5.1 Geometric representation of propagation modes in a multi-mode fiber.

Generally, with an arbitrary launching condition, the lightwave signal in a MMF excites only a certain number of modes rather than all the guided modes the fiber can support. Therefore, the optical signal energy distribution among the various modes is somewhat arbitrary near the input of the fiber and is quite sensitive to the launching condition. This may cause uncertainties in the intermodal dispersion


measurements. In practice, an optical fiber is not ideally uniform in geometry. Signal optical energy may randomly couple between modes when there are bends, twists, and other intrinsic and extrinsic perturbations. Thus, after a certain transmission distance, energy is eventually coupled into most of the modes the fiber can support, reaching an equilibrium distribution of modes. To speed up this mode redistribution process, a mode scrambler is often used in the measurement setup to make the results more stable and reliable. This can be accomplished by twisting or winding the fiber on an array of mechanical bars, so that the measured intermodal dispersion is less sensitive to other mechanical disturbances.

Intermodal dispersion of a MMF can be measured both in the time domain and in the frequency domain (Hernday, 1998). Since the dispersion value in a MMF is typically >100 ps/km, the measurement accuracy requirement is generally not as stringent as for the chromatic dispersion measurement in single-mode fibers, where the dispersion values are much smaller. However, due to the randomness of mode coupling in a MMF, the measurement only provides a statistical value of intermodal dispersion, and a mode scrambler has to be used to ensure the repeatability of the measurement, especially when the fiber is short.

4.5.1.1 Pulse distortion method

Fig. 4.5.2 shows the block diagram of an intermodal dispersion measurement setup using the pulse distortion method, which is also referred to as the pulse-broadening technique. In this setup, a short optical pulse is generated by a laser diode that is directly modulated by a short electrical pulse train or by active mode locking.

Fig. 4.5.2 Pulse-broadening method to measure intermodal dispersion: (A) measurement setup and (B) input waveform A(t) with width Δτ_i, fiber response H(t), and output waveform B(t) with width Δτ_0.

The optical pulses are injected into the MMF through a mode scrambler to ensure the equilibrium modal distribution in the fiber. A high-speed photodetector is used at the receiver to convert the received optical pulses into the electrical domain, so that they can be amplified and their waveforms displayed on a sampling oscilloscope. Because of the intermodal dispersion of the fiber, the output pulses will be broader than the input pulses, and the amount of pulse broadening is linearly proportional to the intermodal dispersion of the fiber. In this way, the fiber dispersion characteristics can be directly measured in the time domain. Assuming that the optical signal power is low enough that the system operates in the linear regime, the output waveform B(t) is related to the input waveform A(t) by

B(t) = A(t) \otimes H(t)    (4.5.2)

where ⊗ stands for convolution and H(t) is the time-domain transfer function of the fiber. Considering that in a MMF intermodal dispersion is the dominant dispersion effect, H(t) essentially represents the intermodal dispersion of the fiber. To take into account the finite bandwidth of the measurement system itself, a system calibration is needed in which the transmitter is directly connected to the receiver, as illustrated by the calibration path in Fig. 4.5.2. Since this is a linear system in which superposition is valid, the waveform measured in the calibration step can be regarded as the input waveform A(t) in the calculation of the fiber transfer function H(t). If we use Gaussian approximations for the input and output signal waveforms and for the fiber transfer function, the bandwidth of the fiber can be evaluated simply from the input and output pulse widths: if the input pulse width is Δτ_i and the output pulse width is Δτ_0, the response time of the fiber is Δτ_H = \sqrt{Δτ_0^2 - Δτ_i^2}, since Gaussian widths add in quadrature under convolution.

4.5.1.2 Frequency-domain measurement

Although the time-domain pulse-broadening technique is easily understandable, a more straightforward way to find the fiber transfer function is in the frequency domain. By converting the time-domain waveforms A(t) and B(t) into the frequency-domain A(ω) and B(ω) through Fourier transform, the frequency-domain transfer function of the fiber can be directly obtained as

H(\omega) = \frac{B(\omega)}{A(\omega)}    (4.5.3)
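The sketch below illustrates both routes with synthetic waveforms: the quadrature width subtraction in the time domain and the Fourier-domain division of Eq. (4.5.3). The waveform shapes, widths, and sampling interval are assumptions for illustration only.

```python
import numpy as np

dt = 10e-12                                   # 10-ps sampling interval (assumed)
t = np.arange(-50e-9, 50e-9, dt)
tau_in, tau_out = 0.5e-9, 1.2e-9              # RMS widths of calibration and fiber-path pulses (assumed)
A = np.exp(-t**2 / (2 * tau_in**2))           # calibration-path waveform A(t)
B = np.exp(-t**2 / (2 * tau_out**2))          # fiber-path waveform B(t)

# Time domain: with Gaussian approximations the widths add in quadrature under convolution.
tau_fiber = np.sqrt(tau_out**2 - tau_in**2)

# Frequency domain: H(w) = B(w)/A(w), Eq. (4.5.3); guard against division by near-zero |A(w)|.
f = np.fft.rfftfreq(t.size, dt)
Aw, Bw = np.fft.rfft(A), np.fft.rfft(B)
H = Bw / np.where(np.abs(Aw) > 1e-6 * np.abs(Aw).max(), Aw, np.inf)
H_mag = np.abs(H) / np.abs(H[0])

f_3dB = f[np.argmax(H_mag < 0.5)]             # frequency where the fiber response drops to half
print(f"fiber response time ~ {tau_fiber*1e9:.2f} ns, 3-dB bandwidth ~ {f_3dB/1e6:.0f} MHz")
```

Because the detected waveforms here represent optical power, the half-amplitude point of |H| corresponds to the fiber's 3 dB optical bandwidth, consistent with the electrical/optical bandwidth distinction discussed below.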

Fig. 4.5.3 shows the block diagram of intermodal dispersion measurement in the frequency domain using an electrical network analyzer, where the network analyzer is set in the S21 parameter mode. A laser diode is intensity-modulated by the output of the network analyzer, which is a frequency-swept RF source. The DC bias to the laser and the RF modulation are combined through a bias-tee. The optical signal is launched into the fiber under test through a mode scrambler. A wideband photodiode is used to convert the


received optical signal into the electrical domain, and the RF signal is then fed to the receiving port of the network analyzer. Because the effect of fiber dispersion is equivalent to a lowpass filter, the net bandwidth limitation caused by the fiber can be obtained by comparing the transfer functions measured with and without the fiber in the system.

Fig. 4.5.3 Frequency-domain method to measure intermodal dispersion: (A) measurement setup and (B) electrical power transfer function.

The use of an RF network analyzer makes this measurement very simple and accurate. Compared with the time-domain measurement discussed previously, this frequency-domain technique does not require a short-pulse generator or a high-speed oscilloscope, and the fiber transfer function H(ω) is measured directly without additional FFT computation. The measurement procedure can be summarized as follows:
1. Remove the fiber and use the calibration path to measure the frequency response of the measurement system (transmitter/receiver pair and other electronic circuitry), which is recorded as H_cal(ω).
2. Insert the optical fiber and measure the frequency response again, which is recorded as H_sys(ω).
Then, the net frequency response of the fiber is

H_{fib}(\omega) = \frac{H_{sys}(\omega)}{H_{cal}(\omega)}    (4.5.4)
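The two-step calibration of Eq. (4.5.4) is a simple division, or a subtraction when the network-analyzer data are exported in dB. The sketch below uses synthetic S21 curves as placeholders for exported measurements and reads off the -6 dB electrical point, which corresponds to the fiber's 3 dB optical bandwidth as explained next.

```python
import numpy as np

# Synthetic S21 exports (dB) standing in for network-analyzer data; shapes are assumed.
f_GHz = np.linspace(0.01, 15, 1500)
S21_cal_dB = -1.5 - 0.1 * f_GHz                               # transmitter/receiver roll-off (assumed)
S21_sys_dB = S21_cal_dB - 20 * np.log10(1 + (f_GHz / 1.2)**2) # with the fiber inserted (assumed)

H_fib_dB = S21_sys_dB - S21_cal_dB                 # Eq. (4.5.4) expressed in dB
f_3dB_optical = f_GHz[np.argmax(H_fib_dB <= -6.0)] # electrical -6 dB = optical -3 dB
print(f"fiber 3 dB optical bandwidth ~ {f_3dB_optical:.2f} GHz")
```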

It is important to note that in this setup, the S21 parameter of the electrical power transfer function is measured by the RF network analyzer. We know that at the output of the photodiode, the signal electrical power is proportional to the square of the received optical power; therefore, a 3 dB optical bandwidth is equivalent to a 6 dB


bandwidth of the electrical transfer function H_fib(ω). Or simply, a 6 dB change in the measured H_fib(ω) is equivalent to a 3 dB change in the optical power level.

Obviously, both the pulse-broadening method and the frequency-domain method are very effective in characterizing fiber dispersion by measuring the dispersion-limited information bandwidth. In general, the pulse-broadening method is often used to characterize the large-signal response in the time domain, whereas the frequency-domain method is only valid for small-signal characterization. However, in fiber dispersion characterization, especially in the measurement of MMFs, the non-linear effect is largely negligible, and thus the time-domain and frequency-domain methods should result in the same transfer function.

Another important note is that the unit of intermodal dispersion is [ps/km], whereas the unit of chromatic dispersion is [ps/nm/km]. The reason is that pulse broadening caused by intermodal dispersion is independent of the spectral bandwidth of the optical source and is only proportional to the fiber length. Because the value of intermodal dispersion in MMFs is on the order of 100 ps/km, the RF bandwidth requirement on the measurement equipment is not very stringent. Typically, a measurement setup with 15 GHz RF bandwidth is sufficient to characterize a MMF of >2 km in length.
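The worst-case estimate of Eq. (4.5.1) quoted earlier (45 ns/km for NA = 0.2 and n1 = 1.5) can be checked in a couple of lines; a minimal sketch:

```python
import math

# Worst-case intermodal dispersion of Eq. (4.5.1); the cladding index follows from NA = sqrt(n1^2 - n2^2).
c = 3.0e8
n1, NA = 1.5, 0.2
n2 = math.sqrt(n1**2 - NA**2)
D_mod = (n1 / c) * (n1 - n2) / n2                 # seconds per meter of fiber
print(f"maximum intermodal dispersion ~ {D_mod*1e12:.0f} ns/km")   # ~45 ns/km
```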

4.5.2 Chromatic dispersion and its measurement Chromatic dispersion is introduced because different frequency components within an optical signal may travel at different speeds, which is the most important dispersion effect in single-mode fibers. But this does not mean a MMF would not have chromatic dispersion. Since chromatic dispersion is usually much smaller than intermodal dispersion for the majority of MMFs, its effect is often negligible. From a measurement point of view, because of the relatively small chromatic dispersion values, the dispersion-limited information bandwidth is usually very wide; it is rather difficult to directly measure this equivalent bandwidth. For example, for a standard single-mode fiber operating in the 1550 nm wavelength window, the chromatic dispersion is approximately 17 ps/nm/km. It would require a picosecond level of pulse width for the optical source and >100 GHz optical bandwidth in the receiver to directly measure the pulse broadening in a 1 km fiber, which would be extremely challenging. Therefore, alternative techniques have to be used to characterize chromatic dispersion in single-mode fibers. There are a number of different methods for this purpose, but in this section, we will discuss the two most popular techniques: (1) the phase shift method, which operates in the time domain and (2) the AM response method, which operates in the frequency domain. 4.5.2.1 Modulation phase shift method Fig. 4.5.4 shows the measurement setup of the modulation phase shift method to characterize fiber chromatic dispersion. A tunable laser is used as the light source whose

Optical fiber measurement

Tunable laser (λs)

Intensity modulator

Fiber to be tested

PD

Calibration path fm

Phase reference Oscilloscope or phase comparator

Fig. 4.5.4 Experimental setup of the modulation phase shift method to measure fiber chromatic dispersion.

wavelength can be adjusted to cover the wavelength window in which the chromatic dispersion needs to be measured. An electro-optic intensity modulator converts a sinusoid electrical signal at frequency fm, into optical domain. After the modulated optical signal passes through the optical fiber, it is detected by a photodiode, and a sinusoid electrical signal is recovered that is then amplified and sent to an oscilloscope. The waveform of the same sinusoid source at frequency fm is also directly measured by the same oscilloscope for phase comparison. Because of the chromatic dispersion in the fiber, the propagation delay of the optical signal passing through the fiber is different at different wavelengths: τg ¼ τg(λ). To measure this wavelength-dependent propagation delay, the optical source is modulated by a sinusoid wave and the propagation delay can be evaluated by the relative phase retardation of the received RF signal. By varying the wavelength of the tunable laser, the RF phase delay as a function of the source wavelength, φ ¼ φ(λ), can be obtained (Costa et al., 1983; Cohen, 1985). If the modulating frequency of the sinusoid wave is fm, a relative phase delay of Δφ ¼ 360∘ between two wavelength components corresponds to a group delay difference of Δτg ¼ 1/fm between them. The group delay versus optical signal wavelength λ can be expressed as Δτg ðλÞ ¼

φðλÞ  φðλr Þ 1 fm 360∘

(4.5.5)

where φ(λ)  φ(λr) is the RF phase difference measured between wavelength λ and a reference wavelength λr. With the knowledge of group delay as a function of wavelength, φ(λ), the chromatic dispersion coefficient D(λ) can be easily derived as   dφðλÞ 1 d Δτg ðλÞ 1 DðλÞ ¼ ¼ (4.5.6) dλ L 360∘ Lf m dλ where L is the fiber length. Since the value of chromatic dispersion is determined by the derivative of the group delay instead of its absolute value, the reference phase delay φ(λr) and the choice of reference wavelength are not important. In fact, the selection of the RF

481

Fiber optic measurement techniques

modulation frequency has a much bigger impact on the measurement accuracy. In general, a high modulation frequency helps increase phase sensitivity in the measurement. For a certain fiber dispersion, the phase delay per unit wavelength change is linearly proportional to the modulation frequency: dφ(λ)/dλ ¼ 360∘ D(λ)Lfm. However, if the modulation frequency is too high, the measured phase variation may easily exceed 360o within the wavelength of interest; therefore, the measurement system has to track the number of full cycles of 2π phase shift. In addition, the wavelength tuning step of the source needs to be small enough such that there is enough number of phase measurements within each 2π cycle. Therefore, the selection of the modulation frequency depends on the chromatic dispersion value of the fiber under test and the wavelength window the measurement needs to cover. The tunable laser in Fig. 4.5.4 can also be replaced by the combination of a wideband light source, such as superluminescent LED (SLED), and a tunable optical bandpass filter. The SLED can be directly modulated by injection current, and the optical bandpass filter can be placed either immediately after the SLED (before the fiber) or at the receiver side before the photodiode to select the signal wavelength. As an example, a spool of 40 km standard single-mode fiber needs to be characterized in the wavelength window between 1530 and 1560 nm. We know roughly that the zerodispersion wavelength of the fiber is around λ0 ¼ 1310 nm and the dispersion slope is approximately S0 ¼ 0.09 ps/nm2/km. Using the Sellmeier equation shown in Eq. (1.4.84), the wavelength-dependent group delay is expected to be   ðλ S0 L 2 λ40 τg ðλÞ ¼ L DðλÞdλ ¼ λ + 2 + τg ðλr Þ (4.5.7) 8 λ λr Fig. 4.5.5 shows the expected relative group delay Δτg(λ) ¼ τg(λ)  τg(λr) and the number of full 2π cycles of RF phase shifts for the modulation frequencies of 1, 5, and 10 GHz. 20

200

L = 40km

No. of 2π cycles of Δj (λ)

18

Group delay Δτg(λ) (ns)

482

16 14 12 10 8 6 4 2 0 1530

(a)

1535

1540

1545

1550

Wavelength (nm)

1555

180

L = 40km

10GHz

160 140 120 100

5GHz

80 60 40

fm = 1GHz

20 0 1530

1560

(b)

1535

1540

1545

1550

1555

1560

Wavelength (nm)

Fig. 4.5.5 (A) Relative group delay and (B) the number of full 2π cycles of phase shift for the modulation frequencies of 1, 5, and 10 GHz. Fiber length is 40 km.

Optical fiber measurement

To obtain this plot, the reference wavelength was set at 1530 nm. In terms of the measurement setup, the speed requirement of the electronic and opto-electronic devices and instrument is less than 10 GHz, which is usually easy to obtain. The measurement accuracy may be limited by the wavelength accuracy of the tunable laser and the step size of wavelength tuning. It may also be limited by noises in the measurement system when the signal level is low. Since this is a narrowband measurement, an RF bandpass filter at the modulation frequency can be used at the receiver to significantly reduce the noise level and improve the measurement accuracy. 4.5.2.2 Baseband AM response method The modulation phase shift method previously discussed relies on the RF phase comparison between the reference path and the signal passing through the fiber under test. An oscilloscope or a digitizing system may be used to perform waveform comparison. In this section, we discuss the baseband AM response method, which measures chromatic dispersion based on the interference between modulation sidebands (Christensen et al., 1993; Devaux et al., 1993). Because of the chromatic dispersion, different modulation sidebands may experience different phase delays, and interference at the receiver can be used to predict the dispersion value. Fig. 4.5.6 shows the experimental block diagram of the AM response method to measure fiber chromatic dispersion. A tunable laser is used as the optical source whose output is intensity modulated by an external electro-optic modulator. The modulator is driven by the output from an RF network analyzer operating in the S21 mode. The network analyzer provides a frequency-swept RF signal to modulate the optical signal. After passing through the optical fiber under test, the optical signal is detected by a wideband photodiode. After being amplified, the RF signal is fed to the receiving port of the network analyzer. In fact, this measurement technique is similar to that shown in Fig. 3.3.9 in Chapter 3, where the modulation chirp of an optical transmitter was characterized. Through the analysis in Section 3.3, we know that the AM response of the fiber system shown in Fig. 4.5.6 is not only determined by the transmitter modulation chirp; it also depends on the chromatic dispersion of the fiber. According to the AM modulation response given by Eq. (3.3.22), Tunable laser (λs)

Intensity modulator

Fiber under test

PD Amp

(S21) RF network analyzer

Fig. 4.5.6 Experimental block diagram of AM response method to measure fiber chromatic dispersion.

483

484

Fiber optic measurement techniques

 2  πλs Dðλs Þ f 2 L 1 + tan ðαlw Þ I ðf Þ∝ cos c

(4.5.8)

where αlw is the modulator chirp parameter and f is the modulation frequency. An important feature of this AM modulation response is the resonance zeroes at frequencies determined by the following equation: sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi

c 2 1 ðα Þ fk ¼ (4.5.9) 1 + 2k  tan lw π 2Dðλs ÞLλ2s In Chapter 3, the purpose of the experiment was to measure the modulator chirp parameter; we assumed that the fiber dispersion parameter D and the fiber length L were known. In this section, the purpose is to measure fiber chromatic dispersion, so we need to eliminate the impact of modulator chirp of the transmitter. From Eq. (4.5.9), we can find that two consecutive zeroes in the AM response, fk and fk+1, are related only to the accumulated chromatic dispersion: c  Dðλs Þ  L ¼  2 (4.5.10) f k+1  f 2k λ2s This allows the accurate determination of fiber dispersion by measuring the spectrum of AM modulation response at each wavelength of the laser source. By adjusting the wavelength λs of the laser source, the wavelength-dependent chromatic dispersion parameter D(λs) can be obtained. Compared with the modulation phase shift method, this AM response technique directly measures dispersion parameter D(λs) at each source wavelength λs without the need to measure group delay around this wavelength. Since the zeroes in the spectrum of the AM response are sharp enough, as indicated by Fig. 3.3.12, the measurement can be quite accurate. However, one practical concern might be the required RF bandwidth of the electro-optic devices as well as the RF network analyzer. In general, short fibers and low chromatic dispersions in the fiber require wide bandwidth of the measurement system. As an example, Fig. 4.5.7A shows the normalized AM response for 100 km standard single-mode fiber with a 17 ps/nm/km dispersion parameter and 100 km Truewave fiber (TWF) with a 2 ps/nm/km dispersion parameter. The source modulation chirp is assumed to be zero. Fig. 4.5.7B shows the first three frequencies of resonance zero in AM response as the function of the accumulated fiber dispersion D  L. Using a network analyzer with 20 GHz bandwidth, the minimum accumulated dispersion the system can measure is about 500 ps/nm; this is because at least two resonance zeroes f0 and f1 need to be observed within the 20 GHz network analyzer bandwidth. 500 ps/nm corresponds to approximately 29 km of standard single-mode fiber with 17 ps/nm/km dispersion parameter.

Optical fiber measurement

Freq. of zero response (GHz)

0

AM response (dB)

−10 −20 −30 −40 −50

SMF, D = 17ps/nm/km

−60 −70

(a)

0

5

10

TWF, D = 2ps/nm/km 15

20

Modulation frequency (GHz)

25

(b)

20 18 16 14 12 10 8 6 4 2 0

f2 f1 f0

0

500 1000 1500 2000 2500 3000 3500 Accumulated dispersion (ps/nm)

Fig. 4.5.7 (A) Normalized AM responses of SMF with D ¼ 17 ps/nm/km and TWF with D ¼ 2 ps/nm/km. L ¼ 100 km and αlw ¼ 0 are assumed. (B) Frequencies of resonance zeroes in the AM response versus the accumulated chromatic dispersion DL.

From our discussion on chromatic dispersion measurement so far, we have found both modulation phase shift and AM modulation response techniques suitable for measuring fibers with relatively large accumulated dispersion. If a fiber sample is very short, for example, only a few meters, the required RF bandwidth for both of these techniques would have to be extremely wide. In this case, alternative techniques have to be used such as interferometric method. 4.5.2.3 Interferometric method The basic block diagram of the interferometric method for the measurement of chromatic dispersion is shown in Fig. 4.5.8 (Saunders and Gardner, 1984; Shang, 1981). A tungsten lamp is usually used as a wideband light source that can cover a very wide wavelength window from 500 to 1700 nm. The signal wavelength is selected by a tunable optical bandpass filter with the FWHM bandwidth of Δλ. The wavelength selection can also be accomplished with a monochromator before the photodetector. In that case, the bandwidth can be adjusted by changing the resolution of the monochromator. The optical signal is then fed into a Mach-Zehnder interferometer with the fiber under test put in one arm while the length of the other arm is continuously variable. A photodiode at the output of the interferometer detects the optical signal, and the waveform is displayed on an oscilloscope. For a Mach-Zehnder interferometer, as discussed in Section 3.3.3, if the length difference between the two arms is much shorter than the source coherence length, coherent interference happens, and the photocurrent generated in the photodiode is   pffiffiffiffiffiffiffiffiffiffi iðtÞ ¼ R P 1 + P 2 + 2 P 1 P 2 cos ½ΔφðΔlÞ (4.5.11)

485

486

Fiber optic measurement techniques

Δl

Air

Lamp

Tunable filter

Fiber

PD

Oscilloscope

Synchronization

Control

(a) Air

Δl Control

Lamp

Fiber

Synchronization

Monochromator

PD

Oscilloscope

(b) Fig. 4.5.8 Interferometric technique to measure fiber chromatic dispersion: (A) using a tunable filter and (B) using a monochromator.

where Δl is the arm length difference, P1 and P2 are the optical powers in the two arms, R is the photodiode responsivity, and Δφ(Δl) is the relative optical phase difference between the two arms. On the other hand, if the length difference between the two interferometer arms is much longer than the source coherence length, the interference is incoherent and the photocurrent generated in the photodiode is simply proportional to the power addition of the two arms, that is, iðtÞ ¼ RfP 1 + P 2 g

(4.5.12)

For the tunable optical filter with the FWHM bandwidth of Δλ, the coherence length of the selected optical signal is approximately Lc ¼

λ2 πng Δλ

(4.5.13)

where vg ¼ c/ng is the group velocity, ng is the group index of the material, and λ is the central wavelength selected by the optical filter. For example, for a signal optical bandwidth of Δλ ¼ 1 nm in a 1550 nm wavelength region, the coherence length is approximately 0.5 mm. Obviously, when the two interferometer arms have the same group delay, a slight change in the differential length Δl will introduce the maximum variation of photocurrent. Though outside the coherence length, the photocurrent is independent of the differential arm length Δl, as illustrated in Fig. 4.5.9. The FWHM width of the envelope of the coherent interference pattern is approximately

Optical fiber measurement

i(Dl)

Δzr

Dl

0

Fig. 4.5.9 Illustration of the photocurrent versus the arm length difference.

Δzr ¼

2 ln ð2Þ λ2 ng Δλ π

(4.5.14)

A narrower width of Δzr is desired to help accurately determine the differential arm delay of the interferometer. According to the definition, chromatic dispersion is proportional to the derivative of the group delay:   1 d τg ðλÞ DðλÞ ¼ (4.5.15) dλ L where L is the fiber length and τg(λ) is the wavelength-dependent group delay. To measure the chromatic dispersion in a fiber, its wavelength dependency of the group delay is the key parameter that needs to be evaluated. This can be accomplished by sweeping the wavelength of the tunable optical filter and measuring the relative delay as a function of the wavelength. In the interferometer configuration shown in Fig. 4.5.10, the reference arm for which group delay is wavelength independent is in the air, and therefore, the important parameter to evaluate is the differential arm length difference as a function of wavelength Δl(λ), which indicates the chromatic dispersion of the fiber under test. DðλÞ ¼

1 dðΔlðλÞÞ Lc dλ

(4.5.16)

As illustrated in Fig. 4.5.10, when the tunable optical filter sweeps across the wavelength, the peak of the interference envelope will move, providing Δl(λ). In practical l3 l2 l1 DI DI1

DI2

DI3

Fig. 4.5.10 Illustration of measured arm length difference as a function of the filter wavelength.

487

Fiber optic measurement techniques

measurements, the resolution of Δl(λ) measurement is determined by the sharpness of the interference pattern Δzr, as indicated by Eq. (4.5.14), while the minimum wavelength step is limited by the bandwidth of the tunable filter Δλ. Although it is desirable for both of them to be small, Eq. (4.5.14) indicated that their product is only determined by the square of the wavelength. As an example, if the filter bandwidth is 2 nm in a 1550 nm wavelength window, the spatial resolution Δzr is approximately 0.35 mm, which corresponds to a relative time delay of 1.17 ps in air. The light source used in this technique can have a very wide wavelength window in which the measurement can take place. This fiber dispersion characterization technique covers wide bandwidth. In practice, the wavelength-dependent group delay and thus the chromatic dispersion can be measured in the wavelength window between 500 and 1700 nm by fitting the measured dispersion characteristic with the well-known three-term Sellmeier equation for standard singlemode fibers. It allows the precise determination of chromatic dispersion, even for short pieces of fiber samples. However, the major disadvantage of this technique is that the fiber length cannot be too long. The major reason is that the reference arm is in free space so that a long reference arm may introduce mechanical instability and thus excessive coupling loss. Because the reference arm length has to be roughly equal to the length of the fiber sample, the typical fiber length in the interferometric measurement setup should not be longer than one meter. For fibers with very small dispersion parameters or if the dispersion is a strong function of the wavelength that cannot be fitted by the Sellmeier equation, such as dispersion-shifted or dispersion-flattened fibers, it would be quite difficult to ensure the accuracy of the measurement. To overcome the reference arm length limitation, an all-fiber setup can be useful for measuring the characteristics of long fibers. In the all-fiber setup, a foldback interferometer using a 3-dB fiber coupler may be employed with total reflection at the end of the reference fiber arm, as illustrated in Fig. 4.5.11. This all-fiber setup is simple and easy to align; however, we must know precisely the dispersion characteristic of the fiber in the reference arm before the measurement. Another difficulty is that the bandwidth of a 3-dB

Tunable source

PD 3-dB

488

Reference arm Fiber under test

Collimator

Fig. 4.5.11 Dispersion measurement using a foldback all-fiber interferometer.

Optical fiber measurement

fiber coupler is usually narrow compared with a thin film-based free-space beam splitter, and therefore, the measurement may be restricted to a certain wavelength sub-band.

4.6 Polarization mode dispersion (PMD) measurement Unlike chromatic dispersion, which represents wavelength-dependent differential group delay (DGD), PMD is a special type of intermodal dispersion in single-mode optical fibers. It is well known that in a “single-mode” fiber, there are actually two propagation modes co-existing, and they are orthogonally polarized; they are commonly referred to as HEx11 and HEy11. In an ideal cylindrically symmetrical optical fiber, these two modes are degenerate and they have identical propagation constant. While in practical optical fibers, the cylindrical symmetry may not be precise; as a result, these two orthogonal propagation modes may travel in different group velocities due to the birefringence in the fiber.

4.6.1 Representation fiber birefringence and PMD parameter The birefringence in an optical fiber is generally introduced through intrinsic or extrinsic perturbations. Intrinsic perturbation is related to permanent features in the fiber due to errors in the manufacturing process. Manufacturing errors during fiber production may cause non-circular fiber core, which is usually referred to as geometric birefringence, and non-symmetric stress due to the material in the fiber perform, which is known as stress birefringence. On the other hand, extrinsic perturbation is often introduced due to external forces in the cabling and installation process. Both of these two types of perturbations contribute to the effect of birefringence; as a consequence, the two orthogonally polarized propagation modes exhibit different propagation constants (Ulrich et al., 1980). Assume that the propagation constant of the perpendicularly polarized propagation mode is β? ¼ ωn?/c, the propagation constant of the parallel polarized propagation mode is β// ¼ ωn///c where ω is the optical frequency, n? and n// are the effective refractive indices of the two polarization modes, and c is the speed of light. Because of the birefringence in the fiber, n? and n// are not equal; therefore, β? 6¼ β//. Their difference is   ω n==  n? ω Δβ ¼ β==  β? ¼ (4.6.1) ¼ Δneff c c where Δneff ¼ n//  n? is the effective differential refractive index. Typically, Δneff is on the order of 105 to 107 in coiled standard single-mode fibers (Galtarossa et al., 2000). Then, for a fiber of length L, the relative propagation delay between the two orthogonally polarized modes is   n==  n? LΔneff Δτ ¼ (4.6.2) L¼ c c This is commonly referred to as DGD of the fiber.

489

490

Fiber optic measurement techniques

Because of fiber birefringence, the SOP of the optical signal will rotate when propagating in the fiber. For a total phase walk-off ΔβL ¼ 2π between the two orthogonal polarization modes, the SOP of the signal at the fiber output completes a full 2π rotation. According to Eqs. (4.6.1), (4.6.2), this polarization rotation can be accomplished by the change of fiber length L, the differential refractive index Δneff, or the signal optical frequency ω. As a simple example, consider an optical signal with two frequency components ω1 and ω2. If these two frequency components are co-polarized when the optical signal enters an optical fiber, then at the fiber output their polarization states will walk off from each other by an angle: Δω  Δneff L ¼ Δω  Δτ (4.6.3) c where Δω ¼ ω2  ω1. In other words, the fiber DGD value Δτ can be measured through the evaluation of differential polarization walk-off between different frequency components of the optical signal. In fact, this is the basic concept of the frequency-domain PMD measurement technique, discussed later in this section. The concept of birefringence is relatively straightforward in a short fiber where refractive indices are slightly different in the two orthogonal axes of the transversal cross section. However, if the fiber is long enough, the birefringence axes may rotate along the fiber due to banding, twisting, and non-uniformity. Meanwhile, there is also energy coupling between the two orthogonally polarized propagation modes in the fiber. In general, both the birefringence axis rotation and the mode coupling are random and unpredictable, which make PMD a complex problem to understand and to solve (Poole and Nagel, 1997). The following are a few important parameters related to fiber PMD: 1. Principal state of polarization (PSP) indicates two orthogonal polarization states corresponding to the fast and slow axes of the fiber. Under this definition, if the polarization state of the input optical signal is aligned with one of the two PSPs of the fiber, the output optical signal will keep the same SOP. In this case, the PMD has no impact in the optical signal, and the fiber only provides a single propagation delay. It is important to note that PSPs exist not only in “short” fibers but also in “long” fibers. In a long fiber, although the birefringence along the fiber is random and there is energy coupling between the two polarization modes, an equivalent set of PSPs can always be found. Again, PMD has no impact on the optical signal if its polarization state is aligned to one of the two PSPs of the fiber. However, in practical fibers, the orientation of the PSPs usually change with time, especially when the fiber is long. The change of PSP orientation over time is originated from the random changes in temperature and mechanical perturbations along the fiber. Δθ ¼

Optical fiber measurement

PSP1

w1 Δφ

w2 PSP2

Fig. 4.6.1 Definition of PSPs on Poincare sphere representation.

Using Stokes parameter representation on the Poincare sphere, the two PSPs are represented as two vectors, which start from the origin and point to the two opposite extremes on the Poincare sphere as shown Fig. 4.6.1. In a short fiber without energy coupling between the two polarization modes, the PSP of the fiber is fixed and is independent of optical signal frequency. In Fig. 4.6.1, different circles represent different input polarization states of the optical signal. For each input polarization state, the output polarization vector rotates on the Poincare sphere in a regular circle with the change of the optical signal frequency. If the fiber is long enough, the PSP will be largely dependent on the signal optical frequency, and the regular circles shown in Fig. 4.6.1 will no longer exist. 2. DGD indicates the propagation delay difference between the signals carried by the two PSPs. In general, DGD is a random process due to the random nature of the birefringence in the fiber. The probability density function (PDF) of DGD in an optical fiber follows a Maxwellian distribution (Poole and Nagel, 1997): rffiffiffi   2 Δτ2 Δτ2 P ðΔτÞ ¼ (4.6.4) exp  2 π α3 2α where P(Δτ) is the probability that the DGD of the fiber is Δτ and α is a parameter related to the mean DGD. In fact, according to the Maxwellian distribution given in Eq. (4.6.4), the average value of DGD is rffiffiffi 8 (4.6.5) hΔτi ¼ α π This is usuallypreferred to as the mean DGD, while the root mean square (RMS) value ffiffiffi of DGD is α 3. A Maxwellian distribution with α ¼ 0.5 ps is shown in Fig. 4.6.2, which is plotted both in the linear scale and in the logarithm scale. It is important to notice that even though the average DGD is only 0.8 ps in this case, low-probability

491

Fiber optic measurement techniques

100

0.7

Log scale

Linear scale

0.6 0.5

P(Δt)

492

10−5

0.4 0.3 10−10

0.2 0.1 0

0

0.5

1

1.5

Δt(ps)

2

2.5

3

10−15

0

0.5

1

1.5

2

2.5

3

Δt(ps)

Fig. 4.6.2 Probability of fiber DGD at the value of Δτ, which has a Maxwellian distribution with α ¼ 0.5 ps and a mean DGD of 0.8 ps.

events may still happen; for example, the instantaneous DGD may be 2 ps but at a low probability of 105. This is an important consideration in calculating the outage probability of optical communication systems. 3. Mean DGD, which is the average DGD hΔτi as defined in Eq. (4.6.5). The unit of mean DGD is in picoseconds (ps). Mean DGD in a fiber is usually counted as the average value of DGD over certain ranges of wavelength, time, and temperature. For short fibers, mean DGD is proportional to the fiber length L: hΔτi ∝ L, whereas pffiffiffi for longpfibers, the mean DGD is proportional to the square root of fiber length L: ffiffiffi hΔτi∝ L . This is due to the random mode coupling and the rotation of birefringence axes along the fiber. The mean DGD is an accumulated effect over fiber; therefore, random mode coupling makes the average effect of PMD smaller. However, due to the random nature of mode coupling, the instantaneous DGD can still be quite high at the event when the coupling is weak. PMD parameter is defined as the mean DGD over a unit length of fiber. For the reasons we have discussed, for short fibers, the unit of PMD parameter pffiffiffiffiffiffiffi is [ps/km], whereas for long fibers, the unit of PMD parameter becomes [ps= km]. Because of the timevarying nature of DGD, the measurement of PMD is not trivial. In the following sections, we discuss several techniques that are often used to measure PMD in fibers.

4.6.2 Pulse delay method The simplest technique to measure DGD in an optical fiber is the measurement of differential delay of short pulses that are simultaneously carried by the two polarization modes (Poole and Giles, 1988). Fig. 4.6.3A shows a block diagram of the measurement setup. Typically, a mode-locked laser source is used to provide short optical pulses and is injected into the fiber under test through a PC. A high-speed photodiode converts the

Optical fiber measurement

Mode-locked Laser

PD Polarization controller

Oscilloscope

Fiber under test

Trigger

Synchronization

(a) Δτ

Launch in fast PSP

Launch in slow PSP t

(b) Fig. 4.6.3 PMD measurement using differential pulse delay technique: (A) experimental setup and (B) illustration of differential pulse delay.

output optical signal from the fiber into electrical domain and the pulse waveform is displayed by an oscilloscope. The propagation delay of the optical pulse depends on the group velocity of the optical signal that propagates in the fiber. Since the DGD is defined as the group delay difference between the fast axis and the slow axis of the fiber, it can be evaluated by directly measuring the fastest and the slowest arrival times of the optical pulses at the receiver, as shown in Fig. 4.6.3B. Since the optical signal from the source laser is polarized, the search for the fastest and the slowest group velocity in the fiber is accomplished by adjusting the PC before the optical signal is launched into the fiber. The underlying principle of the pulse-broadening technique is straightforward and easy to understand. But the accuracy of this measurement, to a large extent, depends on the temporal width of the optical pulses used. Narrower optical pulses are obviously desired to measure fibers with low levels of DGD. However, chromatic dispersion in the fiber will broaden and distort the optical pulses. For example, to measure a 10 km standard single-mode fiber with 17 ps/nm/km dispersion parameter in a 1550 nm wavelength window, if the input optical pulse width is 10 ps, the optical spectral bandwidth is approximately 0.8 nm. The pulse width at the output of the fiber will be about 136 ps only due to chromatic dispersion. Therefore, it would not be helpful to use very narrow pulses for DGD measurement when the fiber has significant chromatic dispersion. In fact, if the width of the input optical pulse is Δtin, the output pulse width can be calculated by Δt out ¼ Δt in +

Dλ2 L cΔtin

(4.6.6)

493

Fiber optic measurement techniques

150 D = 17ps/nm/km L = 10km

140 130 Output pulse width (ps)

494

120 110 100 90 80 70 10

20

30

40 50 60 70 Input pulse width (ps)

80

90

100

Fig. 4.6.4 Output pulse width versus input pulse width in a 10 km fiber with 17 ps/nm/km chromatic dispersion parameter.

where D is the fiber chromatic dispersion parameter, λ is the operation wavelength, and L is the fiber length. If we assume that D ¼ 17 ps/nm/km and L ¼ 10 km, the relationship between the widths of the output and the input pulses is shown in Fig. 4.6.4. The minimum width of the output pulse is about 75 ps; obviously, it is not feasible to measure DGD values of less than a few picoseconds without proper dispersion compensation. On the other hand, if dispersion compensation is applied, the DGD of the dispersion compensator has to be taken into account and its impact in the overall result of measurement may become complicated. To summarize, the differential pulse delay method is a time-domain technique that directly measures the DGD in a fiber, and the explanation of the results is rather straightforward. By changing the wavelength of the optical source, the wavelength dependency of DGD can be evaluated; thus, the PMD parameter can also be obtained. Although the operating principle of this technique is simple; its accuracy may be limited by the achievable pulse width. This technique is usually used to measure fibers with low chromatic dispersion and relatively high levels of DGD.

4.6.3 The Interferometric method As we just discussed in the last section, the major difficulty of the differential pulse delay method is the requirement of short optical pulses and ultrafast detection. These can be avoided using the optical interferometric technique. The principle of the interferometric

Optical fiber measurement

Halogen lamp PC

Fiber under test

Michelson interferometer Mirror PM coupler E1

Monochromator

l

E2

PD Reference Lock-in amplifier

Signal

Mirror control Computer control and analysis

Fig. 4.6.5 Optical setup for DGD measurement using interferometry method.

method is based on the measurement of the differential delay between the signals carried by two PSPs using the low-coherent interferometer technique (Von der Weid et al., 1987; Gisin et al., 1991). Fig. 4.6.5 shows the schematic diagram of a DGD measurement setup using the interferometry technique. In this system, a wideband lightwave source, such as a halogen lamp, is used. A monochromator selects the wavelength of the optical signal and the spectral bandwidth. Once the spectral bandwidth of the light source is wide enough, it has a very short coherence length, and the optical signals reflected from the two interferometer arms of the Michelson interferometer is coherent only when the two arms have almost the same length. Assume that the spectral bandwidth selected from the light source is Δλ and the center wavelength is λ; its coherence length is approximately 1 λ2 (4.6.7) 2n Δλ where n is the refractive index. In the Michelson interferometer setup, if the arm length difference is shorter than the source coherence length, the mixing between the optical fields coming from the two arms is coherent in the photodetector. On the other hand, if the arm length difference is longer than the coherence length, there is no phase relation between optical fields coming from the two arms, and therefore, the optical powers add up at the photodetector incoherently. To demonstrate the operation principle, we assume that the PM fiber coupler has a 50% splitting ratio and the optical signal amplitudes in the two arms are equal, i.e., j E1 j ¼ j E2 j. Within coherence length, the mixed optical power at the photodiode will oscillate between zero (destructive interference) and 2j E1 j2 (constructive interference), depending on the phase relation between the two arms. Outside the coherence length, the total optical power will simply be a constant jE1 j2, as illustrated in Fig. 4.6.6. When a birefringent optical fiber is inserted between the light source and the interferometer, the optical signal will be partitioned into the fast axis and the slow axis. At the fiber output, the optical field is Δl ¼

495

496

Fiber optic measurement techniques

Popt 2E12

Δl E12

0

l 0

Fig. 4.6.6 Illustration of interference pattern in a low-coherence interferometer. jE1 j is the optical field in each of the two interferometer arms, and l is the arm length difference.

  E ¼ E x ejβx L + E y ejβy L

(4.6.8)

where x and y are the two orthogonal PSPs of the fiber, βx and βy are the propagation constants of the fast and the slow axis, Ex and Ey are the amplitude of the optical signal carried by these two orthogonal modes, and L is the fiber length. Then, after a roundtrip through the Michelson interferometer, the optical signal at the photodiode is   1 E0 ¼ Ex ejβx L + E y ejβy L ejβl + 1 (4.6.9) 2 where l is the differential delay between the two interferometer arms and β ¼ 2πn/λ is the propagation constant in the interferometer arm. We have assumed that the powersplitting ratio of the optical coupler is 50%. Then, the square-law detection of the photodiode generates a photocurrent, which is



2 η I ¼ ηjE 0 j2 ¼ E x ejβx L + ejðβx L+βlÞ + E y ejβy L + ejðβy L+βlÞ (4.6.10) 4 As illustrated in Fig. 4.6.7A, a strong coherent interference peak is generated when l ¼ 0, which is mainly caused by the self-mixing terms of j Ex j2 and jEy j2 in Eq. (4.6.10). In addition, two satellite peaks are generated when the differential arm length l satisfies

βy  βx L βl ¼ 0 (4.6.11) or approximately l ¼ Δτ  c

(4.6.12)

where Δτ is the DGD of the fiber under test. These two coherent interference peaks represent the contribution of the cross-mixing between Ex and Ey terms as given by Eq. (4.6.10), and the width of each interference peak is determined by the spectral width of the source. By measuring the location of these two secondary interference peaks,

Optical fiber measurement

σ

l

−cDt

0

l

cΔτ

(a)

(b)

Fig. 4.6.7 Interference pattern as the function of differential delay l of the interferometer: (A) short fiber and (B) long fiber with random mode coupling.

the fiber DGD can be estimated with Eq. (5.4.12). A good measurement resolution (and thus good accuracy) requires narrow interference peaks, which corresponds to a wide optical bandwidth used. On the other hand, if the fiber is long enough, random-mode coupling exists between the two polarization modes and the PSPs vary along the fiber. In this case, the interference pattern consists of a stable central peak and randomly scattered sidebands due to random-mode coupling. The overall envelope of the energy distribution is generally Gaussian, as shown in Fig. 4.6.7B. Suppose the standard deviation pffiffiffiffiffiffiffiffi of this Gaussian fitting is σ; then, the mean DGDpcan be calculated as Δτ ¼ σ 2=π, whereas the RMS value h i ffiffiffiffiffiffiffiffi 1=2 of DGD is hΔτ2 i ¼ σ 3=4 (Hernday, 1998). In the long fiber situation, due to mode coupling in the fiber, the DGD value is generally a function of signal wavelength. This measurement essentially provides an averaged DGD over the signal spectral bandwidth of Δλ. Referring to Fig. 4.6.5, the wavelength resolution of the DGD measurement can be selected by choosing a proper width of the exit slit of the monochromator.

4.6.4 Poincare arc method As discussed in Section 4.6.1, PMD is originated from the birefringence of the fiber, which can be measured by the frequency dependency of polarization rotation. A more sophisticated representation of the SOP is the use of Stokes parameters, in which ! a polarized optical signal is represented as a vector S on the Poincare sphere,!as discussed in Section 2.6.2. After propagating through a birefringent fiber, the signal S vector will rotate around the PSP on the Poincare sphere when the wavelength (or the frequency) is changed. In a short fiber without mode coupling, the PSP of the fiber is stable and independent of optical signal frequency. In this case, the SOP vector of the optical signal rotates on the Poincare sphere in a regular circle around the PSP when the signal optical frequency is swept as shown in Fig. 4.6.8A. On the other hand, in a long fiber, where random mode coupling is significant, the PSP is no longer stable and is a function of signal optical

497

498

Fiber optic measurement techniques

Ω(w) Ω φ

S(w1) S(w2)

S(w2)

φ

S(w1)

(a)

(b)

Fig. 4.6.8 Illustration of traces of signal polarization vector on Poincare spheres when the optical frequency is varied (A) for a short fiber and (B) for a long fiber.

frequency. In this case, each differential frequency change of the optical signal has its corresponding PSP and the SOP vector wanders on the Poincare sphere irregularly with the change of signal optical frequency, as shown in Fig. 4.6.8B. By definition, the PMD vector Ω originates from the center of the Poincare sphere and points toward the PSP. In either case in Fig. 4.6.8, with the change of optical signal frequency ω, the signal polarization state at! fiber output will change. This is represented by the directional change of the SOP vector S versus ω. Obviously, the amount of this change is directly proportional to fiber birefringence (or PMD). For an infinitesimal change of the signal optical frequency dω, this vectorial relationship is !

! dS ¼Ω S dω

(4.6.13)

For a step change in the optical frequency, Δω ¼ ω2  ω1, it is also convenient to use the scalar relationship (Hernday, 1998): ϕ ¼ Δτ  Δω

(4.6.14) !

where ϕ is the angular change of the polarization vector S in radians on the plan perpendicular to the PSP, as shown in Fig. 4.6.8, and Δτ is the DGD between the two PSP components. Obviously, the simple relation given in Eq. (4.6.14) can be used to measure fiber DGD if a polarimeter is available and the variation of Stokes parameters versus signal frequency can be measured. Fig. 4.6.9 shows the block diagram of the measurement setup using Poincare arc technique, where a tunable laser is used which provides a source whose optical frequency can be swept. A PC is placed before the optical fiber under test to explore various launching SOPs of the optical signal into the fiber. A polarimeter is used to measure the Stokes

Optical fiber measurement

Polarizer Tunable laser Polarization controller

Fiber under test

Polarimeter (S1, S2, S3)

Fig. 4.6.9 Block diagram of DGD measurement using a polarimeter. !

!

parameters corresponding to each frequency of the optical source, S ðωÞ ¼ a x S1 ðωÞ + ! ! ! ! ! a y S2 ðωÞ + a z S3 ðωÞ, where a x, a y, and a z are unit vectors. For a small wavelength increment Δω ¼ ω2  ω1, the angular rotation of the polarization vector on the plan perpendicular to the PSP vector can be evaluated by

!

9 8 ! ! ! < S ðω1 Þ  ΩðωÞ  S ðω2 Þ  ΩðωÞ = ! (4.6.15) ϕ ¼ cos 1 ! ! : ! S ðω1 Þ  ΩðωÞ S ðω2 Þ  ΩðωÞ ; Then, the DGD value at this frequency can be obtained by ϕ (4.6.16) Δω where ω ¼ (ω2 + ω1)/2 is the average frequency. Overall, the Poincare arc measurement technique is easy to understand and the measurement setup is relatively simple. However, it requires a polarimeter, which is a specialized instrument. In addition, the frequency tuning of the tunable laser has to be continuous to provide the accurate trace of the polarization rotation, as illustrated in Fig. 4.6.8B. ΔτðωÞ ¼

4.6.5 Fixed analyzer method Compared with the Poincare arc measurement technique, the fixed analyzer method replaces the polarimeter with a fixed polarizer; thus, it is relatively easy to implement because of the reduced requirement of specialized instrumentation. Fig. 4.6.10 shows the schematic diagrams of two equivalent versions of the fixed analyzer measurement apparatus (Poole and Favin, 1994). Fig. 4.6.10A uses a tunable laser as the optical source, while at the receiver side, a fixed polarizer is followed by an optical power meter. By varying the frequency of the tunable optical source, the signal polarization state at the fiber output changes and the polarizer converts this signal SOP change into an optical power change, which is then detected by the power meter. The system shown in Fig. 4.6.10B uses a wideband source that provides a signal with a broad optical spectrum. Because of the birefringence in the fiber, different signal frequency components will exhibit different polarization states at the output of the fiber, and the polarizer converts this frequency-dependent polarization rotation into a frequency-dependent optical power spectral density, which can be accurately measured by an optical spectrum analyzer. In both implementations, a polarizer immediately after the source is used to make

499

500

Fiber optic measurement techniques

Calibration path Polarizer Tunable laser

(a)

Power meter Polarization controller

Fiber under test

Polarizer

Calibration path Polarizer Wideband source

OSA Polarization controller

Fiber under test

Polarizer

(b) Fig. 4.6.10 Block diagram of PMD measurement using a fixed analyzer method: (A) tunable laser + power meter combination and (B) wideband source + OSA combination.

sure the source has a single and fixed polarization state. The PC enables the change of the polarization state of the optical signal that is injected into the fiber. In the Poincare sphere representation, the power transfer function through a perfect polarizer can be expressed as T ðωÞ ¼

P out 1 ¼ ½1 + ^sðωÞ  p^ P in 2

(4.6.17)

where ^sðωÞ is the unit vector representing the polarization state of the input optical signal into the polarizer and p^ is the unit vector representing the high transmission state of the polarizer. Because of the birefringence in the fiber, the SOP of the optical signal at fiber output will rotate around the PSP on the Poincare sphere when the optical signal frequency is changed. In a long optical fiber, the birefringence axis is randomly oriented along the fiber and the mode coupling may also be significant; therefore, the polarization vector may essentially walk all over the Poincare sphere with the change of the signal optical frequency. As a result, the power transmission efficiency T(ω) through the fixed polarization analyzer at the fiber output can have all possible values between 0 and 1 as illustrated Fig. 4.6.11. If the birefringence orientation along the fiber is truly random, the transfer function T(ω) will have a uniform probability distribution between 0 and 1 with the average value of 0.5. If we define dT ðωÞ (4.6.18) dω as the frequency derivative of the power transfer function of the fixed polarizer, then statistically, the expected value of jT0 j is E{j T0 j, T ¼ hTi}, which is obtained under the condition that the transfer function is at its mean level T(ω) ¼ hT(ω)i. According to the fundamental rules of statistics, the mean-level crossing density, which specifies how often the random variable T(ω) crosses through its average value per unit frequency interval, is T0 ¼

Optical fiber measurement

T(w) 1

1

T(w)

0.5

w

0

(a)

1

PDF

0

(b)

Fig. 4.6.11 (A) Illustration of fixed polarizer transfer function T(ω) with random input polarization state of the signal and (B) probability distribution of T(ω).

γ m ¼ f T ðhT iÞðE fjT 0 j, T ¼ hT ig

(4.6.19)

where fT(hTi) is the PDF of the transfer function T(ω) evaluated at its average level hT(ω)i. In this specific case of uniform distribution, since hT(ω)i ¼ 0.5 and fT(hTi) ¼ fT(0.5) ¼ 1, Eq. (4.6.19) can be simplified into γ m ¼ EfjT 0 j, T ¼ hT ig

(4.6.20)

Use the definition of T(ω) shown in Eq. (4.6.17), its derivative shown in Eq. (4.6.18) can be expressed as   1 d^sðωÞ 0 T ðωÞ ¼  p^ (4.6.21) 2 dω Meanwhile, Eq. (4.6.13) in the last section indicated that d^sðωÞ ¼ Ω  ^sðωÞ dω where Ω is the PMD vector. Then one can find: 1 1 T 0 ðωÞ ¼ ½ðΩ  ^sÞ  p^ ¼ ½ð^s  p^Þ  Ω 2 2

(4.6.22)

(4.6.23)

According to Eq. (4.6.17), at the mean value of T(ω) ¼ 0.5, ^sðωÞ  p^ has to be equal to zero, which is equivalent to ^sðωÞ  p^ ¼ ^1, where ^1 is a unit vector, and therefore,   T 0 ðωÞ ¼ 0:5 ^1  Ω . Then, Eq. (4.6.20) can be expressed as γm ¼

 1 1  ^ 1  Ω ¼ hjΩjihj cos θji 2 2

(4.6.24)

501

502

Fiber optic measurement techniques

In a long fiber with truly random mode coupling, cosθ is uniformly distributed between 1 cos θ 1 so that hj cos θ ji ¼ 0.5. Also, by definition, the magnitude of the PMD vector Ω is equal to the DGD of the fiber, Δτ ¼ j Ωj; therefore, γm ¼

hΔτi 4

(4.6.25)

Recall that the physical meaning of γ m can be explained by how often the transfer function T(ω) crosses through its average value per unit frequency interval. Then, within a frequency interval Δω, if the average number of crossovers is hNmi, we will have γ m ¼ hNmi/Δω, that is, hΔτi ¼ 4

hN m i Δω

(4.6.26)

In this way, by changing the signal frequency and counting the number of transmission crossovers through its average value within a certain frequency interval, the average DGD can be evaluated. Similarly, the measurement can also be performed by counting the average number of extrema (maximum + minimum) within a frequency interval to decide the average DGD value. Without further derivation, the equation to calculate the DGD can be found as (Poole and Favin, 1994) hN e i (4.6.27) Δω where hNei is the number of extrema within a frequency interval Δω. If we choose to use a wavelength interval instead of a frequency interval, Eq. (4.6.27) can be expressed as hΔτi ¼ 0:824π

hΔτi ¼

0:412hN e iλstart λstop   c λstop  λstart

(4.6.28)

where λstart and λstop are the start and stop wavelengths of the measurement. One practical consideration using the fixed polarizer method is how small the frequency step size should be used for the tunable laser if the setup shown in Fig. 4.6.10A is implemented. This question is equivalent to the selection of the spectral resolution for the OSA if the setup shown in Fig. 4.6.10B is used. Assuming that we need at least three data points between adjacent transmission extrema, according to Eq. (4.6.28) the wavelength step size has to be δλ
Lw, where L is the total fiber length, the average power density of the Stokes signal at a certain frequency fs can be found by integrating Eq. (4.9.20) as  ½ exp ðP 0 gs L w Þ  1  ½ exp ðP 0 gs L w Þ + 1 hP s ðf s ÞiL>Lw ¼ P s0 η ð2 + P 0 gs L Þ P 0 gs L w (4.9.21) where Ps0 ≡ σ sp/gs(fs), gs(fs) ≡ gR(fs)/Aeff, and P0 is the peak power of the square pump waveform Pp(0). For long optical pulses, the walk-off length is longer than the length of the fiber, and the average power density of the Stokes signal is  ½ exp ðP 0 gs L w Þ  1 L  ½ exp ðP 0 gs L Þ + 1 hP s ðf s ÞiL 1 GHz) in Fig. 6.7.7 the XPM transfer function, only depends on the Kerr effect non-linearity γ k. Below 1GHz in Fig. 6.7.7B, the transfer function depends on a combination of Kerr effect and electrostriction non-linearities. Using γ ¼ γ k + γ e(Ω) in Eqs. (6.7.21), (6.7.22), the XPM transfer function can be calculated. The proportionality coefficient U in Eq. (6.7.23) can be adjusted once to obtain the best fit between the calculated and the measured XPM transfer functions for both high (Ω > 1 GHz) and low (Ω < 1 GHz) frequency regions as shown in Fig. 6.7.7. Although this optimum fitting only guarantees the correct ratio of γ e(Ω)/γ k, it avoids complications due to a number of calibration uncertainties such as pump power levels, modulator response, and receiver electro-optic conversion efficiency. Fig. 6.7.8 shows the real and the imaginary parts of γ e(Ω) calculated from Eq. (6.7.23) with U ¼ 5  1015, which resulted in the best fit to the measured XPM transfer function as shown in Fig. 6.7.7B, where γ k ¼ 1.1 W1 km1 was assumed. Note that this γk value is dependent on the fiber type.

6.7.2 FWM-induced crosstalk in optical systems FWM is a parametric process that results in the generation of signals at new optical frequencies: f jkl ¼ fj + fk  fl

(6.7.28)

where fj, fk, and fl are the optical frequencies of the contributing signals. There will be system performance degradation if the newly generated FWM frequency component overlaps with a signal channel in a WDM system, and appreciable FWM power is delivered into the receiver. The penalty will be greatest if the frequency difference between the FWM product and the signal, fjkl  fi, lies within the receiver bandwidth. 0.4

0.1

(a)

Real { (Ω)}

0.2 0.1 0 -0.1 -0.2 -0.3

(b)

0

Imaginary { (Ω)}

0.3

-0.1 -0.2 -0.3 -0.4 -0.5

0

0.5

1

Frequency (GHz)

1.5

0

0.5

1

1.5

Frequency (GHz)

Fig. 6.7.8 Calculated real (A) and imaginary (B) parts of γ e(Ω) through best fitting to the measured XPM transfer function. U ¼ 5  1015.

743

744

Fiber optic measurement techniques

Unfortunately, for signals on the regular ITU frequency grid in a WDM system, this overlapping is quite probable, especially at start of life, when the wavelengths of WDM channels are precisely on the ITU grid. Over an optical cable span in which the CD is constant, there is a closed form solution for the FWM product-power-to-signal-power ratio: xs ¼

P jkl ðL s Þ ¼ ηL 2eff χ 2 γ 2 P j ð0ÞP k ð0Þ P l ðL s Þ

(6.7.29)

where Leff ¼ (1  exp(αLs))/α is the non-linear length of the fiber span, Ls is the span length, Pj(0), Pk(0), and Pl(0) are contributing signal optical powers at the fiber input, χ ¼ 1, 2 for non-degenerate and degenerate FWM, respectively, and the efficiency is "  # 4eαL s sin Δkjkl L s =2 α2 η¼ρ 2 (6.7.30) 1+ Δkjkl + α2 ð1  eαLs Þ2 where the detune is

 

2 2πc 2 Δkjkl ¼  2 Dðλm Þ f j  f m  ðf l  f m Þ fm

(6.7.31)

0 < ρ < 1 is the polarization mismatch factor, and λm is the central wavelength, corf +f f +f responding to the frequency of f m ¼ j 2 k ¼ l 2 jkl . In practice, in a multi-span WDM system, the dispersion varies not only from span to span but also between cable segments, typically a few kilometers in length, within each span. The contributions from each segment can be calculated using the analytic solution described earlier, with the additional requirement that the relative phases of the signals and FWM product must be considered in combining each contribution (Inoue and Toba, 1995). The overall FWM contribution is a superposition of all FWM contributions throughout the system: sffiffiffiffiffiffiffiffiffiffiffiffiffi) (

P ðzÞ X X jkl aF ¼ (6.7.32) exp iΔφjkl spans segments P l ðzÞ where Δφjkl is the relative phase of FWM generated at each fiber section. The magnitude of the FWM-to-signal-ratio is quite sensitive, not only to the CDs of the cable segments but also to their distribution, the segment lengths, and the exact optical frequencies of the contributing signals. Because of the random nature of the relative phase in each fiber section, statistical analysis may be necessary for system performance evaluation. Fig. 6.7.9 shows the experimental setup to evaluate the impact of FWM in a multispan WDM optical system. In this measurement, a WDM system uses n optical transmitters, and each transmitter is individually adjusted for the optical power and the SOP

Optical system performance measurements

VOA

Tx.l2

VOA

Tx.ln

VOA

PC delay

BERT

PC Multispan fiber transmission

EDFA OSA

Decision circuit

l1 l2 Amp

Sync. Oscilloscope

Fig. 6.7.9 Measurement of FWM-induced crosstalk in a WDM system.

PD

lm ln

WDM-DEMUX

delay

Tx.l1

WDM-MUX

before being combined in a WDM multiplexer. The PRBS pattern generated by a bit error test set (BERT) is split into n channels to modulate the n optical transmitters. To make sure that the bit sequence of the n channels are not correlated, a different time delay is introduced before modulating each transmitter. The maximum time delay is determined by the pattern length of the PRBS. The combined multi-channel optical signal propagates through a multi-span amplified optical system. The optical spectrum of the composite signal at the system output is measured by an OSA and tells the optical power levels of both the WDM optical signal and the FWM crosstalk. Another part of the optical output is filtered through a WDM demultiplexer, which selects a particular optical channel, and is detected by a wideband photodiode. The signal waveform is displayed on an oscilloscope to measure the time-domain waveform distortion caused by FWM. The signal is also sent for BER testing through a decision circuit in Fig. 6.7.9. For optical spectrum measurement using OSA, if the system has only two wavelength channels, degenerate FWM exists and the new frequency components created on both sides of the original optical signal frequencies can be easily measured. However, for a WDM system with multiple equally spaced wavelength channels, the generated FWM components overlap with the original optical signals, and therefore it is not possible to precisely measure the powers of the FWM crosstalk. For example, in a WDM system with four equally spaced wavelength channels, there are 10 FWM components which overlap with the original signal channels, as illustrated in Fig. 6.7.10A, where f1, f2, f3, and f4 are the frequencies of the signal optical channels and fjkl (j, k, l ¼ 1, 2, 3, 4) are the FWM components created by the interactions among signals at fj, fk, and fl. One way to overcome this overlapping problem is to use unequal channel spacing in the system for the purpose of evaluating the strength of FWM. We can also deliberately remove one of the signal channels and observe the power of the FWM components generated at that wavelength, as illustrated in Fig. 6.7.10B. This obviously underestimates the FWM crosstalk because the FWM contribution that would involve

Fig. 6.7.10 (A) Illustration of FWM components in a four-channel system and (B) evaluating FWM crosstalk in a WDM system with one empty channel slot.

For example, in the four-channel case, if we remove the signal channel at f3, the only FWM components generated at frequency f3 would be f221 and f142, whereas f243 would not exist.

Fig. 6.7.11 shows an example of the measured FWM-to-carrier power ratio defined by Eq. (6.7.29). The system consists of five amplified fiber spans with 80 km of fiber in each span, and the per-channel optical power level at the beginning of each fiber span ranges from 6 to 8 dBm. The fiber used in the system has a zero-dispersion wavelength of λ0 = 1564 nm, whereas the average signal wavelength is approximately λ = 1558 nm.

Fig. 6.7.11 FWM-to-carrier power ratio measured in a five-span optical system with three wavelength channels. The horizontal axis is the frequency separation of the two outer channels.


In this system, three channels were used, with frequencies at f1, f2, and f3, respectively. The horizontal axis in Fig. 6.7.11 is the frequency separation between f1 and f3, and the frequency of the FWM component f132 is 15 GHz away from f2, that is, f2 = (f1 + f3)/2 − 7.5 GHz. Because there is no frequency overlap between f132 and f2, the FWM-to-carrier power ratio P132/P2 can be measured. In practice, however, both the spectral resolution and the dynamic range of a grating-based OSA are insufficient to measure this power ratio accurately, because the FWM power is typically much lower than the carrier. Coherent heterodyne detection can solve this problem by shifting the optical spectrum into the RF domain and using an RF spectrum analyzer.

Fig. 6.7.11 clearly demonstrates that the FWM efficiency is very sensitive to frequency detuning: the efficiency can vary by more than 10 dB for a frequency change on the order of only 5 GHz. This is mainly due to the rapid change in the phase-matching conditions as well as the interference between FWM components created at different locations in the system. The instantaneous FWM efficiency is also sensitive to the relative polarization states of the contributing signals. In this system, the fiber DGD averaged over the wavelength range 1550–1565 nm is approximately τ = 0.056 ps/√km. From this DGD value, we can estimate the rate at which the signals change their relative polarizations (at the system input they are polarization-aligned). For signal launch states that excite both PSPs of the fiber, the projection of the output SOP onto the Poincaré sphere rotates around the PSP vector at a rate of dφ/dω = τ, where τ is the instantaneous DGD and ω is the (radian) optical frequency. In crude terms, we expect the signals to remain substantially aligned within the non-linear interaction length of around 20 km, but there may be a significant loss of alignment between adjacent spans. Over the full link, the outer channels at f1 and f3 can take arbitrary relative orientations. If the transmitters are mutually aligned and the composite signal is aligned with one of the PSPs of the fiber, the output states will be almost aligned, with any residual misalignment arising from second-order PMD (movement of the PSP). There will still be misalignment at intermediate spans, so launching on the principal states will not necessarily give the worst-case FWM, even ignoring any contributions of fiber birefringence to phase matching. However, for the PMD coefficients of this system, such effects are not expected to be large. In contrast, if the fiber PMD coefficient exceeds τ = 0.2 ps/√km, the signal polarization states in successive spans will be largely uncorrelated. The worst-case FWM is likely to occur when the signals are aligned in the lowest-dispersion spans. If these are intermediate rather than outer spans, then the worst-case FWM will not necessarily occur for signals mutually aligned at the transmitter, or for alignment with the principal states at the transmitter or receiver.

Fig. 6.7.12 (A) and (B) WDM optical spectrum with eight equally spaced channels and the measured eye diagram; (C) and (D) WDM optical spectrum with eight unequally spaced channels and the corresponding eye diagram (Forghieri et al., 1995). (Used with permission.)

In practical long-distance optical transmission systems, frequency dithering is often used to reduce the effect of stimulated Brillouin scattering (SBS). The SBS dither effectively increases the transmission efficiency of the fiber by decreasing the power loss due to SBS, while at the same time smoothing out the phase-matching peaks. As a result, Fig. 6.7.11 indicates that SBS dithering only moderately reduces the level of FWM.

Another way to evaluate FWM crosstalk is to measure the closure of the eye diagrams. Fig. 6.7.12 shows an example of measured WDM optical spectra and the corresponding eye diagrams (Forghieri et al., 1995). This figure was obtained in a system with a single-span, 137 km dispersion-shifted fiber carrying eight optical channels at a 10 Gb/s data rate per channel. Since FWM crosstalk is generated by non-linear mixing between optical signals, it behaves more like coherent crosstalk than like noise. In an optical system with equally spaced WDM channels, severe degradation of the eye diagram can be observed, as shown in Fig. 6.7.12A and B, where the per-channel signal optical power at the input of the fiber was 3 dBm. This eye closure penalty can be significantly reduced if the wavelengths of the WDM channels are unequally spaced, as shown in Fig. 6.7.12C and D, even though a higher per-channel signal optical power of 5 dBm was used. This is because the FWM components then do not overlap with the signal channels and there is no coherent crosstalk. To explain the coherent crosstalk due to FWM, consider a single FWM product created at the same wavelength as the signal channel fi. Assuming that the power of the FWM component is pfwm, which coherently interferes with an amplitude-modulated optical signal whose instantaneous power is ps, the worst-case


eye closure occurs when all the contributing signals are at the high level (digital 1). Due to the mixing between the optical signal and the FWM component, the photocurrent at the receiver is proportional to

I_0 \propto \left[\sqrt{p_s} + \sqrt{p_{fwm}} \cos\left(\Delta\varphi + 2\pi (f_i - f_{jkl}) t\right)\right]^2    (6.7.33)

As a result, the normalized signal "1" level becomes

A = \frac{p_s + p_{fwm} - 2\sqrt{p_s p_{fwm}}}{p_s} \approx 1 - 2\sqrt{\frac{p_{fwm}}{p_s}}    (6.7.34)

This is a "bounded" crosstalk because the worst-case closure of the normalized eye diagram is 2√(p_fwm/p_s). In a high-density multi-channel WDM system, more than one FWM product will overlap with each signal channel. In general, these FWM products interfere with the signal at independent beat frequencies and phases. The absolute worst-case eye closure can be found by superposing the absolute values of all contributing FWM products. However, if the number of FWM products is large, the chance of reaching this absolute worst case is very low; the overall crosstalk instead approaches Gaussian statistics as the number of contributors increases (Eiselt et al., 1999).
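As a quick numerical illustration of Eq. (6.7.34) (a sketch, not from the original text), the snippet below converts a measured FWM-to-carrier ratio in dBc into the worst-case eye closure, quoting the penalty with the common convention of −10·log10 of the remaining eye opening.

```python
# Worst-case eye closure from a single coherent FWM product (Eq. 6.7.34):
# the normalized "1" level drops to A ~ 1 - 2*sqrt(p_fwm/p_s).
import math

def eye_closure_penalty_db(fwm_to_carrier_dbc):
    """Worst-case eye-closure penalty (dB) for a given FWM-to-carrier ratio (dBc)."""
    ratio = 10 ** (fwm_to_carrier_dbc / 10)       # p_fwm / p_s (linear)
    closure = 2 * math.sqrt(ratio)                 # fractional closure of the eye
    return -10 * math.log10(1 - closure)           # penalty in dB

for dbc in (-30, -25, -20):
    print(f"{dbc} dBc -> {eye_closure_penalty_db(dbc):.2f} dB worst-case penalty")
```

For the FWM-to-carrier ratios of roughly −20 to −30 dBc shown in Fig. 6.7.11, this bound corresponds to worst-case penalties of about 1 dB down to a few tenths of a dB.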

6.7.3 Create WDM crosstalk channels with spectrally shaped broadband Gaussian noise

In WDM optical systems with relatively low per-channel data rates and large spectral gaps between channels, the signal optical spectrum often consists of discrete spectral lines, as shown in Fig. 6.7.12A. FWM among these spectral lines can be sensitive to the channel wavelength arrangement, as discussed in the previous sub-section. In high baud-rate coherent optical systems using complex high-level modulation such as QAM, together with DSP, the spectrum of the modulated optical signal in each wavelength channel can be well shaped, with sharp edges and a flat top. This allows the WDM channels to be tightly packed with minimum spectral gaps between them so that the optical spectral efficiency can be maximized. As an example, Fig. 6.7.13A shows the optical spectra produced by 7 coherent optical transmitters emitting polarization-multiplexed (PM) optical signals with 64-QAM modulation at a 69.4 Gbaud symbol rate. The channel spacing is 0.6 nm. Considering that 19.4 Gbaud is used for FEC and other overheads in this particular case, the net symbol rate for data transmission is 50 Gbaud, and thus the bit rate of each transmitter is 600 Gb/s. The spectral width of the optical signal from each transmitter is approximately 0.5 nm, and the spectrum is shaped by the DSP, which compensates the frequency-dependent responses of the RF amplifiers and opto-electronic devices in the transmitter.

Fig. 6.7.13 (A) Optical spectra produced from 7 coherent transmitters with 64-QAM modulation at 69.4 Gbaud symbol rate, and (B) composite optical spectrum after multiplexing.

The sharp edges of the spectrum, enabled by high-order digital filters, allow the use of a minimum spectral guard band between channels while still being able to separate them at the receiver. When these channels are combined through a multiplexer, the composite optical spectrum shown in Fig. 6.7.13B is predominantly flat across a 4 nm wavelength window.

For high-speed coherent optical systems with complex optical field modulation, the optical signals have very similar statistics to broadband random Gaussian noise. Thus, for the estimation of non-linear crosstalk among WDM channels, a Gaussian-noise model (Poggiolini, 2012a, b) can be used in which each signal channel is represented by a Gaussian noise of the same spectral shape. Consider three WDM channels with equal spectral density GWDM at frequencies f1, f2, and f1 + f2 − f; the spectral density of the FWM component produced at frequency f can be expressed as (Poggiolini, 2012a, b)


G_{NLI}(f) = \frac{16}{27}\gamma^2 L_{eff}^2 \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} G_{WDM}(f_1)\, G_{WDM}(f_2)\, G_{WDM}(f_1 + f_2 - f)\, \rho(f_1, f_2, f)\, \chi(f_1, f_2, f)\, df_2\, df_1    (6.7.35)

with

\rho(f_1, f_2, f) = \left|\frac{1 - e^{-2\alpha L_s}\, e^{j4\pi^2 \beta_2 L_s (f_1 - f)(f_2 - f)}}{\left[2\alpha - j4\pi^2 \beta_2 (f_1 - f)(f_2 - f)\right] L_{eff}}\right|^2    (6.7.36)

and

\chi(f_1, f_2, f) = \frac{\sin^2\left[2 N_s \pi^2 (f_1 - f)(f_2 - f)\beta_2 L_s\right]}{\sin^2\left[2\pi^2 (f_1 - f)(f_2 - f)\beta_2 L_s\right]}    (6.7.37)

where GWDM(f1), GWDM(f2), and GWDM(f1 + f2 − f) are the spectral densities of the three crosstalk channels, Leff = (1 − exp(−αLs))/α is the effective non-linear length of the fiber span, γ is the non-linear parameter, β2 is the dispersion parameter, α is the loss parameter of the fiber, Ls is the fiber span length, and Ns is the total number of fiber spans. ρ(f1, f2, f) in Eq. (6.7.36) represents the normalized FWM efficiency of each fiber span, and χ(f1, f2, f) in Eq. (6.7.37) accounts for the coherent interference at the receiver of the FWM products produced in different fiber spans. The spectral density GNLI(f) created by FWM can be considered a "non-linear noise," which adds to the amplified spontaneous emission (ASE) linear noise to determine the overall OSNR at the receiver. In a WDM system with a large number of optical channels, each channel can thus be represented by a band-limited Gaussian noise to evaluate its contribution to the non-linear crosstalk on the signal channel; a numerical sketch of Eqs. (6.7.35)–(6.7.37) is given at the end of this sub-section.

In a laboratory setting where multiple optical transmitters may not be available, it is reasonable, for evaluating the impact of non-linear crosstalk in a WDM system, to replace all crosstalk channels with amplified optical noise of the same spectral width and power density as the signal channel, as shown in Fig. 6.7.14, so that only one actual optical transmitter is required. In this experimental setup, an erbium-doped fiber amplifier (EDFA) or a semiconductor optical amplifier (SOA) is used as the broadband incoherent light source, followed by an optical bandpass filter (OBPF) that determines the total bandwidth (8 nm in this example) of the WDM system. A 1 × 2 optical interleaver is then used to shape the continuous optical spectrum into a spectrum similar to that of WDM crosstalk channels, as shown in the inset of Fig. 6.7.14. While an optical interleaver is simple and has low loss, its spectral transfer function is not adjustable and the duty cycle of the selected spectrum is often less than 50%.

Fig. 6.7.14 Block diagram of an experimental setup to create WDM crosstalk channels from broadband optical noise. The inset shows an example of a signal channel at λs = 1549.2 nm and 9 crosstalk channels within an 8 nm optical bandwidth. OBPF: optical bandpass filter, WSS: wavelength-selective switch, VOA: variable optical attenuator.

A wavelength-selective switch (WSS) can also be used in place of the interleaver to provide the flexibility of additional spectral shaping. Another EDFA is then used to boost the optical power of the crosstalk channels before they are combined with the spectrum of the signal channel in an optical add/drop multiplexer (MUX). This add/drop MUX first drops the band around the signal wavelength λs from the crosstalk spectrum and then adds that band from the optical transmitter (Tx). A VOA before the add/drop MUX adjusts the relative power between the signal channel and the crosstalk channels. After the transmission system, consisting of multiple fiber spans and inline amplifiers, the optical receiver selects only the signal channel at λs and rejects all the crosstalk channels. Using spectrally shaped broadband ASE noise to replace a large number of WDM transmitters, with the experimental block diagram shown in Fig. 6.7.14, is a simplified way to test system performance including non-linear crosstalk. It can provide significant savings in the experimental setup without sacrificing the accuracy of the measurements.
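For readers who want to exercise Eqs. (6.7.35)–(6.7.37) numerically, the following is a minimal brute-force sketch assuming a flat (rectangular) WDM spectrum. All parameter values (γ, D, α, span length, channel power, bandwidth, grid size) are illustrative assumptions, α is interpreted as the power attenuation coefficient, and the coarse integration grid is intended only to show the structure of the calculation, not to give publication-grade accuracy.

```python
# Sketch: Gaussian-noise model NLI spectral density, Eqs. (6.7.35)-(6.7.37),
# for a flat WDM spectrum (brute-force 2-D integration; illustrative only).
import numpy as np

gamma = 1.3e-3                           # non-linear parameter [1/(W*m)] (assumed)
D     = 17e-6                            # dispersion [s/m^2] (17 ps/nm/km, assumed)
lam, c = 1550e-9, 3e8
beta2 = -D * lam**2 / (2 * np.pi * c)    # GVD parameter [s^2/m]
alpha = 0.2e-3 * np.log(10) / 10         # power attenuation [1/m] (0.2 dB/km)
Ls, Ns = 80e3, 10                        # span length [m], number of spans
Leff  = (1 - np.exp(-alpha * Ls)) / alpha

Bwdm  = 4e12                             # total (flat) WDM bandwidth [Hz]
Gwdm0 = 1e-3 / 50e9                      # PSD [W/Hz]: 1 mW per 50-GHz slot
G = lambda f: np.where(np.abs(f) <= Bwdm / 2, Gwdm0, 0.0)

def g_nli(f, n=801):
    """NLI PSD [W/Hz] at frequency f, by direct integration of Eq. (6.7.35)."""
    f1 = np.linspace(-Bwdm / 2, Bwdm / 2, n)
    F1, F2 = np.meshgrid(f1, f1)
    d = 4 * np.pi**2 * beta2 * (F1 - f) * (F2 - f)
    # Eq. (6.7.36): per-span FWM efficiency (alpha taken here as power attenuation,
    # so exp(-2*alpha_field*Ls) -> exp(-alpha*Ls) and 2*alpha_field -> alpha)
    rho = np.abs((1 - np.exp(-alpha * Ls) * np.exp(1j * d * Ls))
                 / ((alpha - 1j * d) * Leff)) ** 2
    # Eq. (6.7.37): phased-array factor for Ns coherently added span contributions
    x = 0.5 * d * Ls
    sx = np.sin(x)
    chi = np.where(np.abs(sx) < 1e-9, float(Ns) ** 2,
                   np.sin(Ns * x) ** 2 / np.maximum(sx ** 2, 1e-18))
    integrand = G(F1) * G(F2) * G(F1 + F2 - f) * rho * chi
    df = f1[1] - f1[0]
    return (16 / 27) * gamma**2 * Leff**2 * np.sum(integrand) * df * df

print("G_NLI(0) ~ %.3e W/Hz" % g_nli(0.0))
```

Dividing the resulting G_NLI by the (assumed flat) signal PSD in a reference bandwidth gives the non-linear contribution to the overall OSNR discussed above.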

6.8 Optical performance monitoring based on coherent optical transceivers

In recent years, polarization-multiplexed (PM) coherent transceivers (modems) with DSP capabilities have been widely adopted for high-speed fiber-optic transmission. They are highly flexible in terms of arbitrary waveform generation in the transmitter (Tx) and signal analysis in the receiver (Rx), all in the digital domain (Roberts et al., 2017; Savory, 2010; Cho et al., 2020). These coherent optical transceivers can therefore be used as measurement tools for interrogating optical fiber system parameters and monitoring performance. This section discusses two examples of using coherent optical transceivers: monitoring the OSNR of a fiber system, and measuring the accumulated non-linear phase shift caused by the optical signal in a fiber system.


6.8.1 Estimating system OSNR with a digital coherent transceiver

OSNR is one of the most important parameters of an optical system affecting transmission performance. It is usually monitored by an OSA inserted in the system. Coherent transceivers have also been used to estimate system OSNR including both linear and non-linear noise contributions. One approach is to compare transmitted and recovered data symbols, and the BER, to estimate the OSNR indirectly (Dong et al., 2012; Lin et al., 2018; Faruk et al., 2014). The impact of transceiver-originated noise on OSNR monitoring can also be estimated by neural-network training (Shiner et al., 2020; Caballero et al., 2018). This sub-section presents a technique to measure system OSNR using a commercial PM-coherent transceiver, together with a procedure to measure the noise contributions from the Tx and the Rx independently so that their impacts can be removed from the measured OSNR (Hui et al., 2021; Hui and O'Sullivan, 2021). The optical noise spectrum within the signal bandwidth can also be measured, including the tilt of the noise spectrum.

Fig. 6.8.1 Experimental setup. MZM: Mach-Zehnder modulator, PBC: polarization combiner, PBS: polarization splitter, OBPF: optical bandpass filter, VOA: variable optical attenuator, TIA: transimpedance amplifier.

A commercial PM-coherent optical transceiver is used to demonstrate the measurement technique. Fig. 6.8.1 shows the experimental setup and the general structure of the transceiver. The Tx is equipped with two in-phase (I)/quadrature (Q) modulators to perform independent complex modulation of the x- and y-polarized optical fields, emitted from the same tunable laser diode (LD). The real and imaginary parts of a complex waveform can be designed and applied to each modulator through two DACs and driving amplifiers with >34 GHz analog bandwidth. The Rx detects the signal complex optical field through an optical LO, a 90° optical hybrid, a polarization beam splitter, and four balanced high-speed photodiodes, producing photocurrents proportional to the I and Q components of the x- and y-polarized optical channels. Each photocurrent is amplified by a trans-impedance amplifier (TIA) and digitized by an analog-to-digital converter (ADC). Both the ADCs and DACs have 8-bit resolution in this transceiver. The digitized signal waveforms are then recorded for post-processing. Data input/output and the operating conditions of both the Tx and the Rx are controlled through a firmware interface, which can enable and disable various functional blocks. Optical noise loading is used in the experiment to vary the level of system OSNR: an EDFA acts as a noise source, and a VOA changes the level of added optical noise.

In order to measure the system OSNR accurately with the coherent transceiver, it is very important to remove the noise contributions of both the Tx and the Rx. Based on the transceiver block diagram shown in Fig. 6.8.1, the major noise sources in the Tx include DAC digitizing noise, RF noise of the amplifiers driving the modulators, and ASE noise generated by the EDFA post-amplifier inside the Tx. The dominant noise sources of the Rx include shot noise mainly caused by the LO, amplification noise of the TIAs, and digitizing noise of the ADCs. Identifying the individual noise contributions of the Tx and Rx not only helps characterize transceiver performance but is also necessary to obtain an accurate system OSNR. The purpose of the Tx waveform design and measurement procedure described below is to isolate these various noise contributions.

Time-gated complex data sequences with Gaussian statistics and a 30 GHz bandwidth, shaped by a 10th-order super-Gaussian filter, are loaded into the Tx. Both modulators are biased for maximum carrier suppression, the same as in normal operation in an optical transmission system with high-level QAM modulation. In the time domain, as shown in Fig. 6.8.2A (only the real part is shown), the modulating waveform exists only in time window 1, and there is no modulation in time window 2. Fig. 6.8.2B shows the spectra of the complex waveforms for the x- and y-polarized channels of the Tx. In the experiment, the waveform shown in Fig. 6.8.2A can be selectively loaded into one or both polarization channels of the Tx, and the corresponding waveforms detected by the Rx can be analyzed to extract system parameters. However, a commercial PM coherent Rx is designed to operate with polarization-multiplexed optical signals with equal average power in both polarization channels, and the automatic gain control (AGC) of each TIA attempts to equalize the average signal voltages of all four TIA outputs for optimum ADC performance.
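As a rough illustration of this Tx waveform design (a sketch under assumed sample rate and pattern length, not the actual transceiver firmware), the snippet below generates a complex Gaussian sequence, shapes it with a 10th-order super-Gaussian low-pass filter of roughly 30 GHz total width, and gates it so that modulation exists only in the first half of the pattern (time window 1).

```python
# Sketch: time-gated, band-limited complex Gaussian test waveform
# shaped by a 10th-order super-Gaussian filter (illustrative parameters).
import numpy as np

fs = 90e9                                  # DAC sample rate [Sa/s] (assumed)
N  = 2 ** 16                               # samples per pattern (assumed)
f  = np.fft.fftfreq(N, 1 / fs)

x = (np.random.randn(N) + 1j * np.random.randn(N)) / np.sqrt(2)
H = np.exp(-0.5 * (f / 15e9) ** 20)        # 10th-order super-Gaussian, ~ +/-15 GHz
x = np.fft.ifft(np.fft.fft(x) * H)         # band-limit the Gaussian sequence

gate = np.zeros(N)
gate[: N // 2] = 1.0                       # window 1: modulated, window 2: silent
waveform = x * gate
print("RMS in window 1:", np.sqrt(np.mean(np.abs(waveform[: N // 2]) ** 2)))
```

The silent window is what later allows the Tx and Rx noise variances to be measured without the signal present.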

Fig. 6.8.2 (A) and (B): amplitude and spectrum of x- and y-polarized waveforms loaded into the coherent Tx. (C) and (D): amplitude and spectrum of x- and y-polarized waveforms recovered in the coherent Rx when the Tx is loaded only with the x-polarized channel.

If only one of the two polarization channels is loaded with the modulating waveform in the Tx, so that the optical signal is single-polarized, the gains of the four TIAs in the Rx would vary randomly with the random state of polarization (SOP) of the signal, preventing a linear reconstruction of the complex optical field. With this Rx constraint in mind, the measurement procedure is outlined in the following four steps:
(1) Load the same waveform into both the x- and y-polarization channels of the Tx so that the Rx can reach its nominal operating condition, in which the average photocurrents of the four photodiodes, and thus the gains of the four TIAs, are uniform and optimized to fit the maximum dynamic range of the ADCs. Then disable all AGCs so that the gains of the four TIAs no longer change.
(2) Turn off the RF amplifiers that drive the modulator of the y-polarized channel so that the Tx output contains only the x-polarized channel.
(3) Record the digital waveforms from all four ADCs of the Rx for post-processing. In time window 2, the variance difference between the x- and y-polarized channels represents the Tx noise caused by the RF driving amplifiers and DACs.
(4) Minimize the signal optical power reaching the Rx by maximizing the VOA attenuation (>30 dB), so that the signal-independent Rx noise can be measured; this includes LO-induced shot noise, thermal noise, TIA noise, and ADC digitizing noise.
Due to fiber birefringence, the polarization axes of the Tx and Rx need to be re-aligned to recover the complex optical fields of the x- and y-polarized channels in step (3) mentioned above. This can be accomplished with a Jones matrix operation,

\begin{bmatrix} E_x(f) \\ E_y(f) \end{bmatrix} = \begin{bmatrix} \cos\psi & e^{j\xi(f)}\sin\psi \\ -e^{-j\xi(f)}\sin\psi & \cos\psi \end{bmatrix} \begin{bmatrix} E_{x0}(f) \\ E_{y0}(f) \end{bmatrix}    (6.8.1)

where Ex0(f) and Ey0(f) are the complex fields measured at the four ADCs of the Rx, and Ex(f) and Ey(f) are the recovered complex fields after removing the birefringence of the fiber. ψ represents the polarization angle, and ξ(f) = 2πfδt represents a frequency-dependent differential phase, with δt the differential group delay between the x- and y-polarization components (Lee et al., 2006). As the Tx only emits the x-polarized channel, the matrix parameters ψ and δt can be obtained by minimizing the power of Ey(f) with a numerical search loop.

Fig. 6.8.2C shows the amplitudes of the recovered waveforms Ex and Ey (real parts only) without optical noise loading. In time window 1, the power ratio between the x- and y-polarization components is about 20 dB, whereas in time window 2, the noise measured on the y-polarization is mainly caused by the Rx noise (the y-channel is turned off in the Tx), while the noise in the x-polarization includes both Tx noise and Rx noise. Fig. 6.8.2D shows the power spectra of the recovered Ex(f) and Ey(f), where the polarization extinction ratio can be higher than 25 dB in the low-frequency region but drops below 18 dB at frequencies beyond 15 GHz. This reduced polarization extinction is attributed to the phase mismatch in the I/Q reconstruction, which is usually more noticeable at high frequencies, as well as to the contribution of Rx noise, which is independent of the signal polarization. OSNR measurement based on polarization diversity followed by direct detection can be susceptible to PMD (Sui et al., 2010), but this does not apply to a PM coherent Rx, which detects and reconstructs complex optical fields. In addition, as time-gated waveforms are used here and the noise is measured only in time window 2, the polarization nulling only needs to minimize the Tx noise contribution in time window 2, which is already small.

In order to demonstrate that the proposed technique works over a wide range of OSNR, variable optical noise loading is applied to the fiber system as shown in Fig. 6.8.1, and an OSA is used to monitor the OSNR for comparison with that measured by the digital Rx. The right y-axis of Fig. 6.8.3 shows the relative signal and noise powers, measured from their variances and scaled by the Rx responsivity, as functions of the system OSNR measured by the OSA. In the process of measuring OSNR with the Rx, the signal power was obtained from the variance of the x-polarized waveform during time window 1, and the noise powers were obtained during time window 2 for either the x- or y-polarized component for comparison. The noise power of the x-polarized component includes Tx noise in time window 2 because the amplifiers driving the x-polarization modulator remain powered on even though the modulating signal is zero in that window. On the other hand, the driving amplifiers of the y-polarized channel in the Tx are powered off, so that the Tx noise is minimized.
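The following sketch shows one way to implement the re-alignment just described: it applies the unitary matrix of Eq. (6.8.1) in the frequency domain and scans a grid of (ψ, δt) values for the pair that minimizes the residual power in the recovered y-polarization. The function names, grid ranges, and the simple exhaustive search are assumptions for illustration; a real implementation would likely use a faster optimizer.

```python
# Sketch: realign Tx/Rx polarization axes per Eq. (6.8.1) by searching for the
# (psi, dt) pair that minimizes the power left in the recovered y-polarization.
import numpy as np

def apply_inverse_jones(Ex0_f, Ey0_f, f, psi, dt):
    """Apply the unitary matrix of Eq. (6.8.1) to frequency-domain fields."""
    xi = 2 * np.pi * f * dt
    Ex = np.cos(psi) * Ex0_f + np.exp(1j * xi) * np.sin(psi) * Ey0_f
    Ey = -np.exp(-1j * xi) * np.sin(psi) * Ex0_f + np.cos(psi) * Ey0_f
    return Ex, Ey

def search_psi_dt(Ex0_f, Ey0_f, f,
                  psi_grid=np.linspace(0, np.pi / 2, 91),
                  dt_grid=np.linspace(-20e-12, 20e-12, 81)):
    """Coarse grid search: the Tx sends only the x-polarization, so the best
    (psi, dt) minimizes the residual y-polarization power."""
    best = (None, None, np.inf)
    for psi in psi_grid:
        for dt in dt_grid:
            _, Ey = apply_inverse_jones(Ex0_f, Ey0_f, f, psi, dt)
            p = np.sum(np.abs(Ey) ** 2)
            if p < best[2]:
                best = (psi, dt, p)
    return best[:2]
```

Once ψ and δt are found, the signal and noise variances in the two time windows can be read directly from the recovered Ex and Ey waveforms.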

Fig. 6.8.3 Right y-axis: signal variance (thin solid line), x-polarized noise variance (open circles), y-polarized noise variance (open squares), and Rx noise variance (thick solid line) measured from the digital receiver as the function of OSNR measured by an OSA. Left y-axis: Stars: OSNR obtained from the digital receiver versus OSNR measured by OSA, dash-dotted straight line: linear fit of 1 dB/1 dB slope.

Although a small amount of optical carrier can leak through the y-polarization modulator if the bias control is not exactly at the null point, its impact can be removed by eliminating the very low frequency components of the measured waveform. Accurate OSNR estimation becomes more demanding at very high OSNR levels (>40 dB), where the requirement on the accuracy of the Tx and Rx noise estimation becomes more stringent because the optical noise is very small in comparison. For the results presented in Fig. 6.8.3, each OSNR value is the average of five captured waveforms. The average Tx output and Rx input optical powers are 0.5 dBm and −11 dBm, respectively. Note that the ASE contributed by the EDFA inside the Tx is considered part of the accumulated system optical noise in the OSNR measurement; this is also the case when an OSA is used for the OSNR measurement.

For the OSNR estimation presented above, the spectrum of the optical noise is uniform across the signal spectral window. In a practical optical system, however, the spectrum of the accumulated ASE noise at the receiver may not always be flat, and the measurement of the ASE noise spectral tilt underneath the signal spectrum can be quite challenging. One way to measure this ASE noise spectral tilt is to null the optical signal with a polarizer following a polarization controller (Lee et al., 2006). As the optical signal is polarized (assuming only one polarization channel exists) and the optical noise is unpolarized, the polarizer can completely block the optical signal while reducing the noise PSD by only 3 dB. Here we show that a similar functionality can also be accomplished with a coherent optical receiver.

Fig. 6.8.4 Comparison between normalized optical noise spectra measured with the digital coherent receiver (left y-axes) and optical noise spectra measured by an OSA (right y-axes) for the system with a flat optical noise spectrum (A) and a tilted optical noise spectrum (B). The inset in (B) (blue line) shows the tilted spectrum of (B) after subtracting the flat spectrum shown in (A) to remove the Rx transfer function.


As an example, Fig. 6.8.4 shows the noise spectrum of the y-polarized component during time window 2, measured in step 3, after subtracting the Rx noise spectrum obtained in step 4 of the procedure. As the Tx only sends the x-polarized channel, which is needed to set the average optical signal power so that the EDFAs in the transmission link operate normally, the noise in the y-polarized component is primarily caused by the ASE noise of the system. The blue line in Fig. 6.8.4A is the noise spectrum measured by the coherent Rx. It was obtained with a flat optical noise spectrum loaded into the system, and the OSA trace is shown as the red line in the same figure for comparison. An optical filter is then added in the noise-loading path to create a tilt in the optical noise spectrum, shown in Fig. 6.8.4B as the red line measured with an OSA. The corresponding noise spectrum measured by the coherent Rx, shown as the blue line, exhibits the same tilt within the Rx bandwidth. Note that the OSNR measured with the coherent Rx, shown in Fig. 6.8.3, is a power ratio between the signal and the noise, which is independent of the photodiode responsivity and TIA gain of the Rx; the level of the optical noise PSD itself measured with the coherent Rx, however, depends on the opto-electronic gain of the Rx. Thus, in Fig. 6.8.4 the noise PSDs measured by the Rx are normalized and only indicate the spectral tilt of the ASE noise. Although the detailed spectral shape of the noise PSD measured by the coherent Rx can be affected to some extent by the frequency response and gain equalization of the photodetection and TIA circuits, Fig. 6.8.4 indicates reasonably good agreement between the optical noise spectra measured by the coherent Rx and by an OSA. The noise spectral shape can be made more accurate if the transfer function of the Rx frontend is characterized separately. For example, the noise spectrum measured with flat ASE loading, shown in Fig. 6.8.4A, can be used as a reference for calibrating the Rx response. The blue curve in the inset of Fig. 6.8.4B shows the tilted spectrum of Fig. 6.8.4B after subtracting the Rx response to flat ASE noise shown in Fig. 6.8.4A. Note that because the technique discussed here requires time-gated waveforms specially designed for system interrogation and noise detection, it is best performed prior to actual data transmission, for example to determine the maximum supported data rate based on linear channel quality, and not during data transmission.
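A minimal sketch of the calibration step just described (assuming both spectra are available on the same frequency grid and expressed in dB): subtract the flat-ASE reference from the tilted-ASE measurement to remove the Rx frontend response.

```python
# Sketch: remove the Rx frontend response by referencing a measured noise
# spectrum to the spectrum taken with flat ASE loading (both in dB).
import numpy as np

def ase_tilt_db(psd_tilted_db, psd_flat_db):
    """Tilt spectrum (dB) after subtracting the flat-ASE reference,
    re-centered so that its mean is 0 dB."""
    tilt = np.asarray(psd_tilted_db) - np.asarray(psd_flat_db)
    return tilt - np.mean(tilt)

# Example with synthetic data: a 0.1 dB/GHz tilt on top of the same Rx roll-off
freq_ghz = np.linspace(-30, 30, 121)
rx_response = -0.002 * freq_ghz ** 2            # stand-in for the Rx roll-off
flat   = rx_response
tilted = rx_response + 0.1 * freq_ghz
print(ase_tilt_db(tilted, flat)[[0, -1]])       # ~[-3, +3] dB across the band
```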

6.8.2 Measuring non-linear phase shift in a fiber-optic system with a digital coherent transceiver

Kerr-effect non-linearity is an important process that limits transmission performance in fiber-optic communication systems, especially in WDM optical systems with large channel counts. Coherent optical transmission is based on complex optical field modulation and detection, in which linear impairments caused by CD and PMD can be compensated by means of linear DSP. However, it remains challenging to compensate transmission performance degradation caused by fiber non-linearities, which manifest themselves through SPM, XPM, and FWM. The Kerr-effect non-linearity of the fiber can be represented by the non-linear phase shift created on an optical carrier, which depends on the signal optical intensity and the fiber non-linear refractive index n2. In a fiber system with multiple spans and in-line optical amplifiers, the non-linear phase created in different fiber spans accumulates. It is desirable to know the non-linear phase generated in each amplified

span when evaluating the link performance in an optical network (Poggiolini, 2012a, b). The total accumulated non-linear phase from transmitter to receiver can be used to estimate the transmission performance (Vorbeck and Schneiders, 2004). Various techniques to estimate fiber non-linearities have been demonstrated, as discussed in Chapter 4, but the majority of them require dedicated laboratory instrumentation and special system setups. In this sub-section, we discuss a technique to measure the non-linear phase shift caused by an optical signal based on a coherent transceiver (Hui et al., 2021). The measurement is based on XPM between two orthogonally polarized sub-carrier tones emitted by the same Tx. The measurement can also provide estimates of the number of amplified fiber spans and the CD of each span.

The same PM-coherent optical transceiver (Ciena WaveLogic-Ai) shown in Fig. 6.8.1 is used in this experiment to demonstrate the measurement of non-linear phase, as shown in Fig. 6.8.5. The average signal optical power Pave is controlled by a VOA before launching into the transmission fiber. A pump and a probe waveform can be created by the x- and y-polarization channels of the Tx and sent through the fiber system. The XPM between the pump and the probe can be measured at the receiver to provide spatially resolvable non-linear phase along the fiber. The spatial resolution is determined by the temporal walk-off between the pump and probe waveforms due to their frequency separation and the CD of the fiber. Within the transceiver's approximately 60 GHz double-sided bandwidth in this example, the central frequencies of the probe and the pump sub-carriers were assigned to −22 and 28 GHz, respectively. Thus, the pump-probe frequency separation is 50 GHz (0.4 nm), as shown in Fig. 6.8.6A. These asymmetric sub-carrier frequencies are chosen to minimize potential pump-probe crosstalk due to incomplete sideband rejection of the I/Q modulation. Similar to the OSNR measurement discussed in Section 6.8.1, band-limited Gaussian noise is also added in the low-frequency region to maintain the stability of the Tx operation. While the bandwidth of this Gaussian noise is not particularly important, it has to be on the order of a few GHz to provide a signal for the modulator bias control, which minimizes the optical carrier component through a feedback loop. The pump and probe optical fields are independently generated by the two I/Q modulators, which ensures that their SOPs are mutually orthogonal.

Fig. 6.8.6B shows the time-domain pump and probe waveforms, which are both repetitive with 1.92 μs pattern lengths. Each pattern is divided into five equal-length sections, and each section includes band-limited Gaussian noise and a pump (probe) pulse on the 28 GHz (−22 GHz) sub-carrier. A silent region is placed on each side of the pump (probe) pulse. In the first two sections the pump pulses are short, with 250 ps FWHM width and a 10th-order super-Gaussian shape, while in the last two sections 10 ns wide super-Gaussian pump pulses are used. There is no pump pulse in the middle section, which is used for probe phase calibration. All probe pulses are 50 ns wide, but the amplitude of the first probe pulse is 20% higher than that of the other four to indicate the beginning of each pattern at the receiver.

Fig. 6.8.5 Top: Experimental setup. Bottom: signal optical spectrum.

More details of the pump/probe waveforms are shown in Fig. 6.8.6C and D, corresponding to the short and long pump pulses, respectively. The digitally designed pump/probe complex waveforms are linearly translated to the x- and y-polarized complex optical fields through the DACs and I/Q modulators, and the optical spectrum at the Tx output is shown in Fig. 6.8.5; it includes both the pump and the probe, carried on mutually orthogonal polarization states. After propagating through the fiber system under test, the optical signal is detected by a coherent receiver, and the optical phase change imposed on the probe by the pump pulses through XPM can be evaluated.

6.8.2.1 Measurement using a single transceiver

Although in real applications the Tx and the Rx are not in the same location, the experiment can be performed with the Tx and Rx of the same transceiver, with the fiber spools in the same room. As a single laser diode is shared by the Tx and the LO of the Rx, only homodyne detection is possible in this case. After coherent detection, four digitized data streams representing the in-phase and quadrature components of the x- and y-polarized optical fields are saved into memory and downloaded through a digital interface for off-line processing. As only the probe waveform is needed for non-linear phase detection, and it is relatively narrowband with a central frequency fprob = −22 GHz after homodyne detection, a bandpass digital filter (6 GHz FWHM in this case) around fprob is used for probe selection. This narrowband spectrum is then frequency down-converted to the baseband and down-sampled 10-fold before data acquisition. Neglecting high-order PMD of the fiber, the pump and the probe maintain their mutual orthogonality at the receiver, but the SOP may vary randomly over time due to the random birefringence of the fiber; thus, an inverse Jones matrix, as shown in Eq. (6.8.1), has to be applied in the digital domain to remove the impact of fiber birefringence. Because the probe is assigned only to the x-polarization and the pump only to the y-polarization in the Tx, the matrix parameters can be easily obtained by maximizing |Ex|² and minimizing |Ey|² in Eq. (6.8.1).

Fig. 6.8.6 (A) Spectra of pump (red) and probe (black) channels, (B) waveforms of pump (red) and probe, with more detailed shapes in the regions with short and long pump pulses shown in (C) and (D).

Fig. 6.8.7 (A) and (B): detected probe spectra carried by x- (black) and y- (red) polarization channels after frequency down-conversion, before (A) and after (B) polarization tracking with an inverse Jones matrix. (C) Time-domain probe waveform.

Fig. 6.8.8 Probe non-linear phase caused by 250 ps (A) and 10 ns (B) pump pulses measured at different average signal power levels.

Fig. 6.8.7A and B shows the frequency down-converted spectra around the probe spectral peak before (A) and after (B) polarization selection with the inverse Jones matrix, which provided an approximately 30 dB polarization extinction ratio. As an example, the amplitude of the time-domain probe waveform, averaged over 50 consecutive data frames, is also shown in Fig. 6.8.7C. The phase change during each probe pulse is then evaluated to determine the impact of XPM from the pump. In order to avoid potential dynamic saturation of the photodetectors by the pump pulses, the central wavelength of the BPF in front of the Rx is misaligned by 50 pm (6.25 GHz) to attenuate the pump pulses by >20 dB. In practical applications this can be accomplished by simply shifting the optical frequency of the LO in the Rx closer to that of the probe, so that the pump moves outside the detection bandwidth.

Fig. 6.8.8A and B shows the measured non-linear phase variations of the probe pulses caused by the 250 ps (A) and 10 ns (B) pump pulses, respectively, for different average signal power levels Pave. The measurement was performed on a single fiber span of 101 km of standard single-mode fiber (SSMF). The phase of the center probe pulse, which does not overlap with any pump pulse, is used as the phase reference to minimize the impact of random phase variations due to laser phase noise. The peak non-linear phases versus Pave are shown in Fig. 6.8.9 for the short (open circles) and long (squares) pump pulses (averaged over the flat-top region). The dashed straight line in Fig. 6.8.9 shows the non-linear phase shift calculated from

\varphi_{NL} = \frac{8}{9}\gamma P_{pump}(0) L_{eff}    (6.8.3)

where γ = 1.32 W⁻¹ km⁻¹ is the non-linear parameter, Leff = 21.5 km is the effective non-linear length of the fiber, and the well-known 8/9 factor is due to the orthogonal SOPs of the pump and probe, which reduces the non-linear coefficient according to the Manakov-PMD equation (Marcuse et al., 1997). The calculation also includes a factor of 2.3 between the input pump pulse peak power Ppump(0) and the signal average power Pave, based on the designed waveforms shown in Fig. 6.8.6B.


Fig. 6.8.9 Peak non-linear phase caused by 250 ps (open circles) and 10 ns (squares) pump pulse as the function of the average signal optical power. Dashed straight line: Theoretical prediction.

Fig. 6.8.9 indicates that the measured peak non-linear phase caused by the broad pump pulses accurately predicts φNL. The peak non-linear phase corresponding to a short pump pulse, however, is not accurate enough to determine the fiber non-linearity in the absence of a corrective calibration; instead, the integral of the pulse area shown in Fig. 6.8.8A would have to be used in that case (Shiner et al., 2016). Nevertheless, short pump pulses can be helpful for estimating the relative signal power levels of different fiber spans and the CD of each span in a multi-span system, as will be discussed in the next section.

6.8.2.2 Multi-span measurements with a recirculating loop and a separate coherent receiver

In order to demonstrate the application of the proposed technique in multi-span fiber systems, the 101 km SMF is inserted inside a re-circulating loop, and the equivalent number of amplified fiber spans can be chosen by counting the number of circulations in the loop through synchronized time gating. A more detailed description of the optical recirculating loop is provided in Section 6.10. Because the digital interface of the PM-coherent transceiver is not equipped with synchronized time gating for data acquisition, a separate coherent receiver has to be constructed, in which the data are acquired with a real-time digital oscilloscope. The time gating of the data acquisition is synchronized with the re-circulating loop controller through triggering. Heterodyne detection is used by setting the LO frequency about 10 GHz away from the center of the probe, followed by frequency down-conversion, probe selection by filtering, and phase measurement.

Fig. 6.8.10A shows the non-linear phase caused by XPM from the 10 ns pump pulses for 1–10 fiber spans. Due to fiber dispersion, after each span the starting time of the added non-linear phase waveform is shifted by 664 ps, so that the pulse width is narrowed by the same amount.

Fig. 6.8.10 (A) Measured non-linear phase caused by 10 ns pump pulses for systems with 1–10 spans, (B): measured non-linear phase caused by 250 ps pump pulses for systems with 1, 5, and 9 spans, in which 0.2 rad. is added to separate between these curves for clear display. (C) Measured (squares) and calculated (solid line) peak non-linear phase in (A) versus the number of spans, (D): measured (squares) and calculated (solid line) time walk-off of non-linear phase peaks shown in (B) versus the number of fiber spans, (E) and (F) simulated non-linear phase caused by 10 ns (E), and 250 ps (F) pump pulses.


As long as the pump pulse width is larger than the walk-off caused by the accumulated fiber dispersion, the measured non-linear phase curve shown in Fig. 6.8.10A always has a flat top. Averaging over the flat top gives the peak non-linear phase versus the number of fiber spans, shown in Fig. 6.8.10C, in which the solid line is obtained from

\varphi_{NL} = \frac{8}{9} N \gamma P_{pump}(0) L_{eff}    (6.8.4)

where N is the number of spans and Ppump(0) = 8.3 mW is the pump peak power (corresponding to an average signal power of Pave = 3 mW for this particular waveform) at the input of each fiber span in these measurements. Fig. 6.8.10B shows the measured non-linear phase caused by the 250 ps pump pulses for 1, 5, and 9 fiber spans (0.2 rad is added between adjacent curves for clearer display). With the fiber dispersion parameter at the signal wavelength D = 16.1 ps/nm/km, fiber length L = 101 km, and pump/probe wavelength separation Δλ = 0.4 nm, there should be DΔλL = 664 ps of walk-off between the pump and the probe after each fiber span. Thus, each peak in the non-linear phase curves in Fig. 6.8.10B represents the impact of one amplified fiber span, and the delays of these peaks versus the number of fiber spans are shown in Fig. 6.8.10D, where the solid line is the theoretical prediction. Fig. 6.8.10E and F shows numerical simulations of the XPM imposed on the probe by the 10 ns (E) and 250 ps (F) pump pulses, respectively, based on the split-step Fourier method and the same fiber parameters as used in the experiments. The measured and simulated results indicate that XPM measurements based on broad pump pulses can accurately predict the accumulated non-linear phase along the fiber link, whereas short pump pulses can help identify the number of fiber spans, the dispersion of each span, and the distribution of the non-linear phase among the fiber spans. Note that each curve in Fig. 6.8.10A and B was obtained by averaging over 20 repetitive patterns; increasing the number of averages further improves the measurement accuracy, especially for short pump pulses. A more sophisticated experiment on longitudinal power profile measurement in a fiber system with 19 amplified fiber spans has also been reported, with more detailed analysis (Hui et al., 2002).
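As a quick numerical check of Eq. (6.8.4) and of the expected per-span pump-probe walk-off, the sketch below uses the rounded parameter values quoted in the text; the walk-off computed from the rounded D value comes out slightly below the 664 ps quoted above.

```python
# Sketch: accumulated XPM phase (Eq. 6.8.4) and per-span pump-probe walk-off
# using the parameter values quoted in the text.
gamma   = 1.32e-3        # non-linear parameter [1/(W*m)]
Leff    = 21.5e3         # effective non-linear length per span [m]
P_pump  = 8.3e-3         # pump peak power at each span input [W]
D       = 16.1e-6        # dispersion [s/m^2]  (16.1 ps/nm/km)
dlam    = 0.4e-9         # pump-probe separation [m] (0.4 nm)
L       = 101e3          # span length [m]

for N in (1, 5, 10):
    phi = (8 / 9) * N * gamma * P_pump * Leff            # accumulated NL phase [rad]
    print(f"N = {N:2d} spans: phi_NL ~ {phi:.3f} rad")

walkoff = D * dlam * L                                     # pump-probe walk-off per span
print(f"per-span walk-off ~ {walkoff * 1e12:.0f} ps")
```

This gives roughly 0.21 rad of non-linear phase per span, consistent with the growth of the flat-top phase in Fig. 6.8.10A and C.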

6.9 Optical system performance evaluation based on required OSNR

It is well known that the transmission performance of a digital optical communication system can be evaluated by the BER or the Q-value at the receiver. In system design and characterization, another important measure of system performance is the system margin. For example, a 3 dB Q-margin means that the system can tolerate an additional 3 dB of Q degradation from its current state before it produces unacceptable bit errors. In low data rate or short-distance optical systems without optical amplifiers, BER degradation is usually caused by receiver thermal noise when the optical signal is too low; in this case, the optical power level at the receiver is the most important measure of system performance.

Fig. 6.9.1 Block diagram of R-OSNR measurement using optical noise injection.

On the other hand, in high data rate optical systems involving multiple wavelength channels and multiple inline optical amplifiers, signal waveform distortion and accumulated ASE noise in the system become significant. In amplified optical systems, the signal optical power level at the receiver is no longer the most relevant measure of system performance. One widely adopted system margin measurement is the R-OSNR introduced in Section 5.3.2. In this section, we further discuss the impact on R-OSNR of several specific degradation sources in the system, such as CD, SPM, and the limited bandwidth of an optical filter.

The R-OSNR is defined as the optical SNR required at the receiver to achieve a specified BER. Fig. 6.9.1 shows a schematic diagram of the experimental setup to measure R-OSNR using optical noise injection. The optical signal from the transmitter is delivered to the receiver through the optical fiber transmission system, which may include multiple amplified optical spans. The output of an independent wideband ASE noise source is combined with the optical signal so that the optical SNR at the receiver can be varied by controlling the level of the inserted ASE noise with the VOA. The BER of the optical signal detected by the receiver is measured by the bit error rate test set (BERT). Since the measured BER is a function of the optical SNR at the receiver, the R-OSNR is the level at which the BER equals the specified value. A good transmitter/receiver pair should exhibit a low R-OSNR, whereas a higher R-OSNR implies that the system can tolerate less additional ASE noise and thus has a smaller system margin.
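In practice, the R-OSNR is read off by interpolating the measured BER-versus-OSNR points at the target BER; the sketch below does this with hypothetical data and the 3.8 × 10⁻³ pre-FEC threshold used later in this section.

```python
# Sketch: read off R-OSNR by interpolating measured BER-vs-OSNR points at a
# target BER (hypothetical measurement data).
import numpy as np

osnr_db = np.array([12.0, 13.0, 14.0, 15.0, 16.0])        # set by the noise-loading VOA
ber     = np.array([2e-2, 8e-3, 3e-3, 1e-3, 3e-4])        # measured on the BERT

def required_osnr(osnr_db, ber, target_ber=3.8e-3):
    """Interpolate log10(BER) vs OSNR (dB) and return the OSNR at target_ber."""
    logb = np.log10(ber)
    order = np.argsort(logb)                               # np.interp needs ascending x
    return float(np.interp(np.log10(target_ber), logb[order], osnr_db[order]))

print(f"R-OSNR ~ {required_osnr(osnr_db, ber):.2f} dB at BER = 3.8e-3")
```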

6.9.1 Measurement of R-OSNR due to chromatic dispersion

Before the wide application of coherent detection and digital compensation, CD in a fiber-optic transmission system was one of the biggest sources of performance impairment. It primarily introduces signal waveform distortion and eye-closure penalty, and deteriorates the receiver BER. Dispersion compensation has been widely used in long-distance optical fiber transmission systems to reduce the accumulated CD between the transmitter and the receiver in systems without digital compensation in the transceivers. Since the CD in a fiber-optic system is usually wavelength dependent, different wavelength channels in a WDM system may experience different dispersion. Although dispersion-slope compensation may help equalize the dispersion levels of the various wavelength channels, it would increase the non-linear crosstalk between channels through XPM and FWM because the inter-channel phase walk-off is minimized.

Fig. 6.9.2 System test bed for dispersion window measurement.

From the point of view of optical transmission equipment performance, one of the most important qualification requirements is that a high-quality transmitter/receiver pair should tolerate a wide range of CD in the fiber system. A wide dispersion window can be achieved using advanced optical modulation formats and optical and electronic signal processing. Fig. 6.9.2 shows an optical system test bed used to perform qualification tests of optical transmitters and receivers; the dispersion window and the R-OSNR are two critical parameters for qualifying optical transmission equipment. In Fig. 6.9.2, a WDM system with N wavelength channels is tested. The data rate of each channel is 10 Gb/s, and the number of channels can be selected by activating the optical switch in front of each transmitter. Both pre-compensation and post-compensation are included to mitigate the effect of CD. Each switchable dispersion compensation unit is made of a group of DCFs of different lengths sandwiched between two optical switches. Because of the wideband nature of the DCF, the switchable compensator is able to support WDM operation while avoiding the connection errors of manually reconfigurable fiber arrays. An independent ASE noise source with a variable power level is added to the optical link so that the OSNR at the receiver can be adjusted. A BERT provides the modulating data pattern for the transmitters and performs BER testing on the signals recovered by the optical receivers. In the OSNR measurement, the spectral resolution of the OSA was set to 0.1 nm.

Fig. 6.9.3 shows an example of the measured R-OSNR as a function of the residual CD, which includes the pre-compensation at the transmitter side, the post-compensation at the receiver side, and the inline dispersion compensation in front of each EDFA line amplifier along the transmission system.

Fig. 6.9.3 Example of the measured R-OSNR as a function of residual chromatic dispersion.

The system has eight amplified fiber spans with lengths of 92.2, 92.4, 91.9, 103.4, 104.4, 102.3, 99.3, and 101.3 km, and the CD values of these eight spans are, respectively, 1504, 1500, 1477, 1649, 1697, 1650, 1599, and 627 ps/nm. Fig. 6.9.4 shows four different dispersion maps obtained by choosing different values of the inline dispersion compensators in the system. The accumulated CDs of the transmission system (excluding pre- and post-compensation) are 941, 1465, 1734, and 2189 ps/nm for the four dispersion maps. By fine adjusting the pre- and post-compensation through switch selection, the residual dispersion of the optical link between the transmitter and the receiver can then be varied around the value of each dispersion map. In this particular example, the transmission performance is optimized when the residual dispersion is in the vicinity of 1500 ps/nm, which depends on the transmitter and receiver design as well as on the chirp of the optical signal. If a 1-dB OSNR penalty is allowed, the dispersion window of this system is approximately 1000 ps/nm, as illustrated in Fig. 6.9.3. Note that in coherent systems with DSP capability in the transceivers, the tolerance to system CD is tremendously increased and the requirement for dispersion compensation in the optical domain is practically eliminated. On the contrary, dispersion-compensation modules installed in fiber transmission links can have a significant adverse impact on system performance because of the increased non-linear crosstalk among DWDM channels.
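The 1-dB dispersion window can be extracted from a measured R-OSNR-versus-residual-dispersion curve as sketched below; the sample points are hypothetical and merely mimic the shape of Fig. 6.9.3.

```python
# Sketch: estimate the 1-dB dispersion window from R-OSNR vs residual dispersion
# (hypothetical sample points shaped like Fig. 6.9.3).
import numpy as np

residual_ps_nm = np.array([800, 1000, 1200, 1400, 1600, 1800, 2000, 2200])
r_osnr_db      = np.array([15.0, 13.8, 13.1, 12.8, 13.0, 13.6, 14.4, 15.5])

def dispersion_window(d, r, penalty_db=1.0, n=2001):
    """Width of the residual-dispersion range where R-OSNR stays within
    `penalty_db` of its minimum (linear interpolation on a dense grid)."""
    grid = np.linspace(d.min(), d.max(), n)
    r_i = np.interp(grid, d, r)
    ok = grid[r_i <= r_i.min() + penalty_db]
    return ok.max() - ok.min()

print(f"1-dB dispersion window ~ {dispersion_window(residual_ps_nm, r_osnr_db):.0f} ps/nm")
```

The same procedure can be applied separately to each dispersion map to verify that the window is centered near the optimum residual dispersion.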

6.9.2 Measurement of R-OSNR due to fiber non-linearity

Optical system performance measurements

3600.0 3000.0 2400.0 1800.0 1200.0 600.0 0.0 −600.0 −1200.0 3600.0 3000.0 2400.0 1800.0 1200.0 600.0 0.0 −600.0 −1200.0 3600.0 3000.0 2400.0 1800.0 1200.0 600.0 0.0 −600.0 −1200.0 3600.0 3000.0 2400.0 1800.0 1200.0 600.0 0.0 −600.0 −1200.0

Dispersion map #1

−53

−172

−420

1684

1424

1328

1084

35

Dispersion map #2 2209 1084

Dispersion map #3

1465

3269

3207

3038

2780

1374

1339

1120

815

559

200

−172

−420

1677

1328

941

3001

2939

2769

2512

849

815

595

290

2476

2414

2245

1987

2209 1084

1642

1607

1388

1083

1734

559

200

−172

−420

1677

1328

3725

Dispersion map #4

3207

3038

2780 2209 1084 −420

−172

2097

1677

1328

1083 200

1388

1607

2189

559

Fig. 6.9.4 Four different dispersion maps used to create dispersion window measurement in Fig. 6.9.3.

The R-OSNR shown in Fig. 6.9.3 was measured with only a single wavelength channel; the average optical power at the input of each fiber span was approximately +3 dBm, whereas the optical power at each dispersion compensator was 0 dBm. It is worth mentioning that the optimum residual dispersion value may also depend on the signal optical power level used in each fiber span, because the effect of SPM is equivalent to an additional chirp on the optical signal. When the optical power level is low enough, the system is linear and the effect of CD can be completely compensated, no matter where the dispersion compensator is placed along the system. In a non-linear system, on the other hand, the non-linear phase modulation created by the SPM process is converted into intensity modulation by the CD encountered after the phase modulation is created; therefore, the location of the dispersion compensation module is important.

Fig. 6.9.5 Measured R-OSNR along the system at four different signal optical power levels.

Similar to the case of XPM, which was discussed in Section 6.7, per-span dispersion compensation usually works better than lumped compensation at either the transmitter side or the receiver side. Fig. 6.9.5 shows the measured R-OSNR in a 12-span amplified fiber-optic system with 80 km of standard SMF in each span. The CD is compensated at the end of each fiber span. The experimental setup is similar to that shown in Fig. 6.9.2, but the R-OSNR measurements were conducted at the end of each fiber span. The optical power level, P, at the output of each inline optical amplifier was adjusted to observe the effect of non-linear SPM. Fig. 6.9.5 indicates that when the signal optical power level is less than −4 dBm, the system can be considered linear. A slight R-OSNR degradation can be observed near the end of the system when the signal optical power increases from −4 to −2 dBm, and the R-OSNR degradation becomes significant when the signal optical power increases from −2 to 0 dBm. Fig. 6.9.6 shows the R-OSNR measured at the end of the 12th span as a function of the signal optical power, which indicates that an approximately 1.3 dB R-OSNR penalty was introduced by the non-linear SPM effect when the signal optical power was 0 dBm.
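For a rough feel of why the penalty appears near 0 dBm, the sketch below estimates the accumulated SPM phase N·γ·P·Leff for the 12-span, 80 km/span system; γ and the fiber loss are assumed typical SSMF values that are not given in the text, and this single-channel, lumped-phase estimate ignores dispersion.

```python
# Rough estimate of accumulated SPM phase in a 12-span, 80 km/span system:
# phi_SPM ~ N * gamma * P * Leff (gamma and alpha are assumed typical SSMF values).
import numpy as np

gamma = 1.3e-3                      # non-linear parameter [1/(W*m)], assumed
alpha = 0.2e-3 * np.log(10) / 10    # 0.2 dB/km as power attenuation [1/m], assumed
Ls    = 80e3                        # span length [m]
Leff  = (1 - np.exp(-alpha * Ls)) / alpha
N     = 12                          # number of spans

for p_dbm in (-6, -4, -2, 0):
    P = 1e-3 * 10 ** (p_dbm / 10)   # launch power [W]
    print(f"P = {p_dbm:+d} dBm: phi_SPM ~ {N * gamma * P * Leff:.2f} rad")
```

The accumulated phase grows from a few hundredths of a radian at low launch power to several tenths of a radian at 0 dBm, consistent with the onset of a measurable SPM-induced R-OSNR penalty.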


Fig. 6.9.6 R-OSNR in the 12-span system as a function of the signal optical power (Q = 3.57 dB).


determined by the optical power of the signal channel, and by the non-linear crosstalk, which is determined by the optical powers of the other channels. Fig. 6.9.7 shows the schematic diagram of a WDM system over 20 amplified fiber spans, with 80 km of standard single-mode fiber in each span (Birk et al., 2006). The optical power level of each wavelength channel is equalized by optical power management (OPM) using a wavelength-selective switch (WSS). In this particular system, the 10 Gb/s transmitters are equipped with electronic pre-compensation of the CD; therefore, no optical dispersion compensation is necessary in the system. This WDM system

Fig. 6.9.7 Simplified schematic configuration of a 20-span fiber transmission system employing bidirectional optical amplifiers and optical power management (OPM).


Fig. 6.9.8 Optical spectrum at the receiver measured by an OSA with 0.2 nm resolution bandwidth.

consists of nine groups of optical channels with eight wavelengths in each group. The optical spectrum is shown in Fig. 6.9.8, where the frequency separation between adjacent channels is 50 GHz and the gap between adjacent groups is 100 GHz. The polarization controller in front of each transmitter is used to make sure that the SOPs of the transmitters are aligned so that the non-linear crosstalk in the fiber system reaches the worst case. A BERT supplies 2³¹ − 1 PRBS patterns to modulate each transmitter and detects transmission errors after the receivers. Because there is no optical dispersion compensation in the system, the accumulated CD in the 1600 km transmission system can be as high as 27,200 ps/nm. Crosstalk caused by FWM is expected to be rather weak because of the strong phase mismatch between channels due to rapid walkoff. However, the effect of XPM can still be strong because the efficiency of phase-noise to intensity-noise conversion is high due to the large uncompensated CD. The transmission performance was evaluated by ASE noise loading at the receiver side. The required OSNR levels were measured at which the raw BERs at the receiver were kept at 3.8 × 10⁻³ before FEC. ASE noise-loading measurements were performed in three cases: (a) a single wavelength channel, (b) only one group of eight channels, and (c) all 72 wavelength channels shown in Fig. 6.9.8 turned on. For each measurement, the peak power level of each channel was varied from −3.5 to 2 dBm. Fig. 6.9.9 shows the measured R-OSNR for the 1551.32 nm channel in group 6 and the 1563.45 nm channel in group 9, respectively. Fig. 6.9.9 indicates that the R-OSNR increases with increasing per-channel optical power level, which is mainly caused by the non-linear SPM effect. Meanwhile, as the number of channels increases, the R-OSNR also increases, primarily because of non-linear crosstalk due to XPM.
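As a rough illustration of how such a noise-loading sweep is reduced to a single R-OSNR number, the following Python sketch interpolates a set of measured (OSNR, raw BER) pairs to the OSNR at which the pre-FEC BER equals the 3.8 × 10⁻³ threshold. The sweep values below are hypothetical and only illustrate the interpolation step, not the actual data of Fig. 6.9.9:

import numpy as np

def r_osnr(osnr_db, ber, ber_target=3.8e-3):
    """Interpolate a noise-loading sweep to the OSNR where BER = ber_target."""
    osnr_db = np.asarray(osnr_db, dtype=float)
    log_ber = np.log10(np.asarray(ber, dtype=float))
    order = np.argsort(osnr_db)            # sort by OSNR (BER falls as OSNR rises)
    # np.interp needs an increasing x-axis, so flip to make log10(BER) ascending
    return np.interp(np.log10(ber_target),
                     log_ber[order][::-1], osnr_db[order][::-1])

# Hypothetical sweep: OSNR set by the ASE noise loader and the measured raw BER
osnr_sweep = [11.0, 12.0, 13.0, 14.0, 15.0]      # dB
ber_sweep = [2e-2, 8e-3, 3e-3, 9e-4, 2e-4]       # pre-FEC BER
print("R-OSNR = %.2f dB" % r_osnr(osnr_sweep, ber_sweep))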


Fig. 6.9.9 R-OSNR as a function of the per-channel signal optical power.

From an overall system performance point of view, low R-OSNR levels are obtained at low signal optical powers, as shown in Fig. 6.9.9. However, if the accumulated ASE noise generated by the inline amplifiers in the system is fixed, the OSNR at the receiver (excluding noise loading) is linearly proportional to the signal optical power level. Therefore, low optical power is not necessarily the best choice. Taking into account the actual ASE noise generated in the transmission system and the signal optical power, the OSNR of each received optical channel can be measured. The difference between this actual OSNR and the R-OSNR is defined as the OSNR margin, which signifies the robustness of the system operation. Fig. 6.9.10 shows the measured OSNR margins of group 6 and group 9 in the system with all 72 WDM


Fig. 6.9.10 OSNR margin as a function of the per-channel signal optical power.


channels fully loaded (Birk et al., 2006). The optimum per-channel signal optical power is approximately –2 dBm.
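The OSNR margin defined above is simply the measured received OSNR minus the R-OSNR at the same operating point. A minimal sketch, with hypothetical per-channel powers and OSNR values chosen only for illustration, shows how the margin curve and its optimum power would be extracted:

import numpy as np

# Hypothetical data: received OSNR (set by accumulated amplifier ASE) grows
# ~1 dB per dB of launch power, while R-OSNR grows with nonlinearity.
power_dbm = np.array([-6.0, -5.0, -4.0, -3.0, -2.0, -1.0, 0.0])
osnr_rx_db = np.array([15.5, 16.5, 17.5, 18.5, 19.5, 20.5, 21.5])
r_osnr_db = np.array([12.1, 12.2, 12.4, 12.8, 13.4, 14.4, 16.0])

margin_db = osnr_rx_db - r_osnr_db               # OSNR margin at each launch power
best_power = power_dbm[np.argmax(margin_db)]
print("OSNR margin (dB):", np.round(margin_db, 1))
print("optimum per-channel power ~ %.0f dBm" % best_power)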

6.9.3 Measurement of R-OSNR due to optical filter misalignment
The complex transfer functions of optical devices such as optical filters, add/drop multiplexers, and narrowband dispersion compensators are important parameters that have a significant impact on system performance. Techniques to measure individual optical components were discussed in the previous chapters. However, their impact on performance depends on the system configuration, the modulation format, and the combination with other system building blocks. Therefore, a more relevant way to measure the performance of an optical device and its impact on an optical system is to place the device into the system; the change of R-OSNR introduced by this particular device can then be measured as an indication of the device quality. Fig. 6.9.11 shows the schematic diagram of an experimental setup to characterize the impact of a WSS in an optical transmission system. The noise-loading technique is used to evaluate the R-OSNR degradation due to the optical device under test. In this measurement, a transmitter whose optical wavelength can be continuously tuned is used. The WSS is placed in a recirculating loop, in which 80 km of standard single-mode fiber is used with two optical amplifiers to compensate for the loss, and a dispersion compensator is used to reduce the accumulated dispersion. An optical recirculating loop emulates a multi-span optical transmission system; its detailed operating principle and applications are discussed in the next section. Here we simply accept that each circulation in the loop is equivalent to an amplified optical span. The measurement was performed by sweeping the wavelength of the tunable transmitter across the bandwidth of the optical filter. The R-OSNR at the receiver is measured for each wavelength setting of the transmitter. Fig. 6.9.12 shows an example of the


Fig. 6.9.11 Measurement of optical filter characteristics using noise loading. AOM: acousto-optic modulator, VOA: variable optical attenuator.


Fig. 6.9.12 Measured R-OSNR as a function of the transmitter frequency offset (OSA resolution bandwidth 0.1 nm, Q = 3.57 dB); each curve is labeled by the number of loop circulations.

measured R-OSNR as a function of the transmitter central frequency offset for different numbers of circulations of the optical recirculating loop, as indicated beside the corresponding curves. For small numbers of loop circulations, the WSS optical bandwidth extends to approximately 20 GHz. When the optical signal passes through the WSS more than 10 times in the recirculating loop, the optical bandwidth shrinks to about 10 GHz. Recall that the R-OSNR represents the required OSNR at the receiver to guarantee a certain Q-value, which is Q = 3.57 dB in this case. Therefore, the sharp R-OSNR degradation at the edges of the WSS passband is not likely caused by the reduced power transmission efficiency at these frequencies, nor by the associated ASE noise increase in the system. Rather, it is caused primarily by waveform distortion due to the phase transition and group-velocity distortion near the edges of the passband.

6.10 Fiber-optic recirculating loop
To test optical transmission equipment and devices for long-distance fiber-optic systems, the system test bed may have to include hundreds of kilometers of optical fibers, many inline optical amplifiers, and dispersion compensators. Fiber-optic recirculating loops have been used widely in optical communication laboratories to simplify the system test bed and to enable equipment manufacturers to demonstrate their transmission terminal equipment anywhere without bringing the entire system with them (Bergano and Davidson, 1995). It is a very convenient way to evaluate the transmission performance of both ultra-long submarine and terrestrial fiber-optic systems without actually using many spans of fibers and a large number of optical amplifiers.


Fig. 6.10.1 Configuration of a fiber-optic recirculating loop test set.

6.10.1 Operation principle of a recirculating loop
A fiber-optic recirculating loop uses controlled optical switches to allow the optical signal from a transmitter to pass through an optical system many times to emulate a multi-span optical transmission. A block diagram of an optical recirculating loop is shown in Fig. 6.10.1, where two acousto-optic modulators (AOMs) are used as the optical switches. The major reasons for using AOMs for the optical switching are their superior extinction ratio, typically >50 dB, and their polarization insensitivity. Because the optical signal has to pass through the optical switch many times, a high on/off ratio is essential to minimize artificial multi-path interference caused by the switches. In Fig. 6.10.1, the optical signal from the transmitter passes through the first optical switch, AOM-1, and is split into two parts by a 2 × 2 fiber-optic directional coupler. One output from the coupler is sent into the optical transmission system; the other output is sent to an optical receiver for testing. The timing of the two optical switches is arranged such that when AOM-1 is on, AOM-2 is off; this allows the optical signal to fill up the transmission system without interference. Then AOM-1 is set to off and AOM-2 is set to on so that the optical signal can make roundtrip circulations in the optical transmission system. Obviously, for every roundtrip, the optical signal suffers a power loss due to the fiber coupler. The optical receiver detects the optical signal that has passed through the optical system after the desired number of circulations. The waveform is then measured by a digital oscilloscope and the BER is tested by a BERT. To measure only the


optical signal that has passed through the transmission system for a certain number of circulations, a time-gating pulse enables the BERT to count digital errors only during certain time windows. Similarly, the trigger signal generated by the BERT is enabled only during the same time windows, so that the digital oscilloscope is triggered only during these windows. Outside these time windows, both the BERT and the oscilloscope are disabled. Usually, a commercial BERT has a gating function that allows a pulse train at the TTL voltage level to control the BER testing. However, the trigger generated by the BERT to synchronize the time sweep of the digital oscilloscope is typically at a high frequency, a sub-harmonic of the digital data rate. The trigger enabling is therefore accomplished by a microwave switch that is controlled by a pulse train synchronized with the gating of the BERT. In this measurement, four mutually synchronized switching pulse trains are required: two for the AOMs, one for the BERT enabling, and one for the oscilloscope trigger switch. A multi-channel electrical pulse generator can be used, which allows precise control of the relative delays between the four pulse trains.

6.10.2 Measurement procedure and time control
To help describe the recirculating loop operation, it is useful to define the loading state, the looping state, and the loop time. The loading state is the operation state in which the optical signal is loaded into the transmission system through the signal switch AOM-1 and the fiber coupler. In the loading state, the signal switch AOM-1 is on and the loop switch AOM-2 is off, and the optical signal exactly fills up the entire transmission system. The looping state is the operation state in which the optical signal loaded into the loop circulates within the loop. In this state, the signal switch AOM-1 is off and the loop switch AOM-2 is on. The loop time is defined as the roundtrip time delay for the optical signal to travel through the fiber system, that is, τ = nL/c, where n is the refractive index of the fiber, L is the length of the fiber, and c is the speed of light. The following are the fundamental steps of operating a recirculating loop:
1. As illustrated in Fig. 6.10.2A, a transmission experiment using a recirculating loop starts with the loading state, with the signal switch on and the loop switch off. The two switches are held in this loading condition for at least one loop time to completely fill up the loop with the optical data signal. In the loading state, the control pulse T1 is at the high level to turn on the signal switch, and T2 is at the low level to turn off the loop switch.
2. Once the transmission system in the loop is fully loaded with data, the switches change to the looping state. In the looping state, T1 is at the low level to turn off the signal switch, whereas T2 is at the high level to turn on the loop switch. In this state, the optical signal is allowed to circulate in the loop for a specified number of revolutions. The duration of a looping state can be estimated by ΔT = Nτ, where N is the desired


Fig. 6.10.2 Two states of switch setting: (A) loading state and (B) looping state.

number of revolutions for the signal to travel in the loop and τ is the roundtrip time of each loop. Although a portion of the optical signal is coupled to the receiver through the fiber coupler after each circulation, the eye diagram and the BER are measured only at the end of the looping state, after the signal has circulated in the loop N times. The BERT is gated by a TTL signal T3; this gating function, typically available in most BER test sets, allows the BERT to be activated only when T3 is at the high level.
3. The measurement continues with the AOMs switching alternately between the loading and looping states so that bit errors can be accumulated over time. The BER is then calculated as the number of errors detected during the BERT error-gating periods divided by the total number of bits transmitted during the observation period. Since errors are counted only during the error-gating periods, the effective bit rate of the experiment is diminished by the duty cycle of the error-gating signal; thus, the actual time needed to demonstrate a particular BER using a recirculating loop is much longer than for a conventional system BER measurement.


4. In the eye diagram measurement using a digital oscilloscope, the trigger is usually generated by the BERT at a sub-harmonic of the data rate. A microwave switch controlled by the synchronized pulse train T4, as shown in Fig. 6.10.1, allows the trigger to be delivered to the oscilloscope only during the BERT gating time window. That is, the microwave switch is in the on state only at the end of each looping state, the time window when the eye diagram needs to be measured.
Fig. 6.10.3 shows the timing control diagram of the pulse generator in the recirculating loop experiment. Typically, a BERT test set is gated by positive pulses at the TTL level. The microwave switch embedded in the loop test set is in the off state for positive TTL pulses and in the on state at the zero control-pulse level. It is worth noting that there may be burst errors at the beginning and the end of each loop time, as illustrated in Fig. 6.10.3; these errors are artifacts due to the use of the recirculating loop. The burst errors arise from the rise and fall times of the signal and loop switches. Although the switching time of an AOM is typically less than 1 μs, this time interval corresponds to thousands of corrupted data bits in a multi-gigabit data stream. By choosing a BERT gating time window slightly shorter than a loop time τ, these burst errors are excluded from the BER counting and therefore do not affect the emulation of optical fiber transmission with the recirculating loop. The following example helps us understand how to establish a practical optical system test bed using a recirculating loop.


Fig. 6.10.3 Timing control diagram for recirculating loop experiment. Time unit is normalized by loop time τ.


Assuming that a two-span fiber-optic system with 80 km of optical fiber in each span is used in the loop test, the loop delay time is

\tau = \frac{nL}{c} = \frac{1.5 \times 160 \times 10^{3}}{3 \times 10^{8}} = 800\ \mu s   (6.10.1)

where n = 1.5 is assumed for the refractive index of the fiber. Then the time window of the loading state should be between 800 and 1600 μs (τ < Δt1 < 2τ). One can simply choose a value in the middle: Δt1 = 1200 μs. If one wants to measure the system performance after 10 loop circulations, which is equivalent to a 20-span system with a total transmission distance of 1600 km, the switch repetition period can be chosen as

\Delta t = 800 \times 10 + 1200 = 9200\ \mu s   (6.10.2)

As shown in Fig. 6.10.3, T1 and T2 are complementary (T1 is positive while T2 is negative) and there is no relative time delay between them. The BERT error-gating pulse T3 and the oscilloscope trigger switch control T4 also have the same pulse widths but opposite polarities, with Δt3 = Δt4 < 800 μs. To avoid burst errors caused by the AOM leading and trailing edges, we can choose

\Delta t_{3} = \Delta t_{4} = 600\ \mu s   (6.10.3)

The relative delay between T2 and T3 is (the same delay applies between T1 and T4)

DT_{3} = 9200 - 800 + 100 = 8500\ \mu s   (6.10.4)

In this arrangement, the error-gating pulse train has a duty cycle of

d = \frac{600\ \mu s}{9200\ \mu s} = 0.0652   (6.10.5)

Therefore, the actual BER of the 20-span system is related to the measured BER by

BER_{actual} = \frac{BER_{measured}}{0.0652}   (6.10.6)

This is because the time used to accumulate the errors (during the time window Δt3) is only a small fraction of the pulse repetition period. As a consequence, to observe a reasonable number of errors and accurately evaluate the BER, a much longer time (about 15 times longer) is required for the measurement in this case.
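The timing arithmetic of Eqs. (6.10.1)-(6.10.6) is easy to script when setting up the pulse generator. The short Python sketch below reproduces the numbers of this example; the measured BER value used for the final correction is only a placeholder:

# Recirculating-loop timing per Eqs. (6.10.1)-(6.10.6)
n_fiber = 1.5                 # fiber refractive index
span_km = 80.0                # span length (km)
spans_in_loop = 2             # spans inside the loop
n_circ = 10                   # desired number of circulations
c = 3.0e8                     # speed of light (m/s)

tau_us = n_fiber * spans_in_loop * span_km * 1e3 / c * 1e6   # loop time -> 800 us
dt1_us = 1.5 * tau_us                                        # loading window, tau < dt1 < 2*tau
period_us = n_circ * tau_us + dt1_us                         # switch repetition period -> 9200 us
dt3_us = 600.0                                               # BERT error-gating window (< tau)
dT3_us = period_us - tau_us + 100.0                          # delay between T2 and T3 -> 8500 us
duty = dt3_us / period_us                                    # error-gating duty cycle -> 0.0652

ber_measured = 1e-6                                          # placeholder BERT reading
ber_actual = ber_measured / duty
print("tau = %.0f us, period = %.0f us, DT3 = %.0f us, duty = %.4f" %
      (tau_us, period_us, dT3_us, duty))
print("BER_actual = %.2e" % ber_actual)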

6.10.3 Optical gain adjustment in the loop
In long-distance multi-span optical transmission systems, optical amplifiers are used to overcome the loss of optical fibers and other optical components. In a system test bed using an optical recirculating loop, the optical signal travels through the system in the loop for a number of circulations. Therefore, the optical gain provided by the optical amplifiers in


Fig. 6.10.4 Illustration of loop optical gain adjustment.

the loop has to be adjusted to be equal to the total loss in the system, which also includes the losses introduced by the loop switch and the fiber directional coupler. This ensures that after each circulation in the loop, the optical power stays at the same level. An easy way to adjust the loop gain is to use a low-speed photodetector (O/E converter) in place of the optical receiver shown in Fig. 6.10.1, connected directly to an oscilloscope. A VOA is added in the loop to adjust the loop gain. Since the O/E converter has low speed, the waveform displayed by the oscilloscope is proportional to the average optical power after each loop circulation. Since the trigger to the oscilloscope is synchronized to the loop switch, the measured waveform shows exactly the loop control function and timing. The optical gain of each circulation through the loop can then be easily evaluated from this waveform. As shown in Fig. 6.10.4, if the net gain of the loop is smaller than unity, the photocurrent signal displayed on the oscilloscope decreases with each consecutive loop circulation, as illustrated in Fig. 6.10.4A. On the other hand, if the loop gain is higher than unity, the waveform on the oscilloscope increases with each consecutive circulation, as shown in Fig. 6.10.4B. Fig. 6.10.4C shows the expected waveform when the optical gain is exactly unity; in this case, there is no optical power change from one loop circulation to the next. This optical gain adjustment is important for system stability. It is also necessary for emulating a straight-line optical transmission system, in which the optical power levels in different spans are usually identical. Fig. 6.10.5 shows an example of eye diagrams measured after the signal traveled through different numbers of loop circulations. These eye diagrams were obtained from a 5 Gb/s system using TrueWave fibers with dispersion of about 2.5 ps/nm/km at the 1555 nm optical signal wavelength. Two fiber spans were used in the recirculating loop with two inline


Fig. 6.10.5 Examples of eye diagrams measured after the signal traveled through different numbers of loop circulations (150, 300, 600, 900, and 1500 km); 5 Gb/s system using TrueWave fibers.

fiber amplifiers. The gain of the optical amplifiers was carefully adjusted to compensate for the overall loss. Since the system was not dispersion compensated, visible waveform distortion appears after 900 km of equivalent fiber transmission. Another important note is that a system test bed based on a recirculating loop uses a relatively small number of amplified optical spans in the loop. From a system performance point of view, it may differ from a long-haul straight-line optical system in a few aspects. First, in a real long-haul optical system with a large number of amplified fiber spans, the amplified spontaneous emission noise accumulates toward the end of the system. As a result, the optical amplifier gain saturation caused by ASE noise is stronger near the end of the system than at the beginning. Obviously, this non-uniform amplifier gain saturation along the system does not appear in an optical recirculating loop test bed with a small number of amplified spans in the loop. Second, a recirculating system with M amplified spans in the loop is equivalent to a straight-line long-haul system in which all the system parameters repeat themselves exactly after every M spans. Such artificial periodicity is not likely to occur in a practical system. As a consequence, the statistics of the system performance tested with a recirculating loop may not be the same as in a real straight-line system. For example, FWM in a multi-span optical transmission system is caused by non-linear interaction between different wavelength components. The efficiency of FWM depends both on


the non-linearity of the fiber and on the phase-matching condition between the participating channels. In a single-span optical system with two wavelength channels, the power of the FWM component at the fiber output can be expressed as

P_{FWM} = (\gamma L_{eff})^{2}\, e^{-\alpha L}\, P_{1}^{2}(0) P_{2}(0)\, \frac{\alpha^{2}}{\alpha^{2} + \Delta\beta^{2}} \left(1 + \frac{4 e^{-\alpha L} \sin^{2}(\Delta\beta L/2)}{(1 - e^{-\alpha L})^{2}}\right)   (6.10.7)

where P_1(0) and P_2(0) are the optical power levels of the two optical channels at the input of the fiber, γ is the non-linear parameter of the fiber, L_eff = (1 − e^{−αL})/α is the non-linear length of the fiber, α is the fiber loss parameter, and L is the fiber length. Δβ is the wave-number mismatch between the two wavelength channels, which is

\Delta\beta = \beta_{FWM} + \beta_{sig}^{(2)} - 2\beta_{sig}^{(1)} = \frac{2\pi\lambda^{2}}{c}\,\Delta f^{2} \left(D + \Delta f\, \frac{\lambda^{2}}{c}\, \frac{dD}{d\lambda}\right)   (6.10.8)

where β_sig^(1), β_sig^(2), and β_FWM are the wave numbers of the two participating signal channels and the FWM component, Δf is the frequency difference between the two signal channels, and D is the dispersion parameter of the fiber. For an amplified multi-span optical system, FWM contributions from different fiber spans combine coherently at the end of the system. Assuming that the parameters of each fiber span are identical, which is equivalent to using only one fiber span in the recirculating loop, the FWM power at the output of the system is

P_{FWM\_MS} = P_{FWM}\, \frac{\sin^{2}(N_{A}\,\Delta\beta L/2)}{\sin^{2}(\Delta\beta L/2)}   (6.10.9)

where N_A is the total number of amplified fiber spans or, equivalently, the number of circulations in the single-span recirculating loop. Fig. 6.10.6 shows the block diagram of an experimental setup to measure FWM in a fiber system using a recirculating loop test bed. In this experiment, the OSA is controlled by a pulse train that is synchronized with the switching of the AOMs; the OSA sweeps only during the time window when the enabling pulse is at the high level. Fig. 6.10.7A shows the measured FWM efficiency as a function of the wavelength separation between the two signal channels. This result was obtained when the optical signal circulated in the loop five times, which represents a five-span optical system. The optical power at the output of the EDFA was 18 dBm, and the optical fiber used in the experiment was 57 km of non-zero dispersion-shifted fiber (NZDSF) with a dispersion of D = 2.5 ps/nm/km at the signal wavelength. Because of the precisely periodic structure of the optical system using the recirculating loop, the FWM efficiency versus channel spacing exhibits strong resonances because of perfect phase matching at these resonant channel separations. This resonance effect can be described analytically by Eq. (6.10.9), shown as the dashed line in Fig. 6.10.7A.
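A minimal numerical sketch of Eqs. (6.10.7)-(6.10.9) illustrates the resonance behavior: it evaluates the relative multi-span FWM build-up versus channel spacing for parameters close to those quoted above (57 km NZDSF, D = 2.5 ps/nm/km, five circulations). The 0.2 dB/km loss is an assumed value and the dispersion-slope term of Eq. (6.10.8) is neglected, so this is a relative illustration rather than a calibrated prediction:

import numpy as np

c = 3e8                        # m/s
lam = 1550e-9                  # signal wavelength (m)
D = 2.5e-6                     # 2.5 ps/nm/km expressed in s/m^2
alpha = 0.2 / 4.343 / 1e3      # assumed 0.2 dB/km loss, converted to 1/m
L = 57e3                       # span length (m)
N_A = 5                        # amplified spans (loop circulations)

d_lambda = np.linspace(0.1e-9, 1.0e-9, 2001)       # channel spacing (m)
d_f = c / lam**2 * d_lambda                        # spacing in Hz
d_beta = 2 * np.pi * lam**2 / c * d_f**2 * D       # Eq. (6.10.8), slope neglected

# Single-span efficiency factor of Eq. (6.10.7) (relative units)
eta1 = alpha**2 / (alpha**2 + d_beta**2) * (
    1 + 4 * np.exp(-alpha * L) * np.sin(d_beta * L / 2)**2
        / (1 - np.exp(-alpha * L))**2)

# Coherent multi-span build-up of Eq. (6.10.9); the ratio tends to N_A^2 on resonance
s2 = np.sin(d_beta * L / 2)**2
build_up = np.divide(np.sin(N_A * d_beta * L / 2)**2, s2,
                     out=np.full_like(s2, float(N_A**2)), where=s2 > 1e-12)
eta_ms = eta1 * build_up

for k in range(0, len(d_lambda), 400):
    print("spacing %.2f nm : relative FWM %6.1f dB"
          % (d_lambda[k] * 1e9, 10 * np.log10(eta_ms[k])))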


Fig. 6.10.6 Measurement of FWM in a fiber-optic system using a recirculating loop.

A numerical simulation using the split-step Fourier method also predicted the same phenomenon, shown as the solid line in the same figure. Both agree with the measured results. As an example, Fig. 6.10.7B shows a measured FWM optical spectrum when the channel spacing is at one of the resonance peaks in Fig. 6.10.7A. In this case, the FWM sidebands can even have higher powers than those of the original optical signals, which may be due to the parametric gain introduced by modulation instability. It is worthwhile to note that in practical optical systems with multiple amplified fiber spans, the fiber lengths, the dispersion parameters, and the optical power levels of the different spans are not likely to be identical; the perfect phase matching obtained in a recirculating loop would typically not occur in a practical fiber system. On the positive side, this recirculating loop measurement might be useful for determining the non-linear and dispersion parameters of a particular fiber span by placing it in the loop. For the same reason of perfect periodicity of the CD, the PDL and PMD parameters are also precisely periodic in a recirculating loop. Therefore, the statistical nature of PDL and PMD in practical straight-line fiber systems may not be accurately emulated by a recirculating loop measurement. One way to overcome these problems is to add a polarization scrambler in the fiber loop, as illustrated in Fig. 6.10.8 (Sun et al., 2003; Xu et al., 2004; Yu et al., 2003). In this system, the polarization scrambler is controlled by a pulse train that is synchronized with the loop switch AOM-2. The polarization scrambler adds a random rotation of the polarization for every loop circulation of the optical signal, which makes the system closer to reality. Another note is that each time an optical signal passes through an AOM, its optical frequency is shifted by the carrier frequency of the RF driver, which is in the tens


Fig. 6.10.7 (A) Measured FWM efficiency and (B) an example of FWM spectrum.

of MHz range (typically 50–100 MHz), depending on the specific model of the AOM. This frequency shift has been utilized to make delayed self-heterodyne measurements of the spectral linewidth of laser diodes, as discussed in Section 3.2. In principle, this frequency shift does not affect systems with IMDD in an optical recirculating loop experiment, but it may be a concern for systems with coherent detection, especially homodyne detection, in which the optical frequency of the LO has to match that of the received optical signal. Since the central carrier frequency of the optical signal is a function of the number of loop circulations, the coherent receiver has to adjust accordingly to accommodate the change of the IF. Special AOMs without optical frequency shift are also available for optical recirculating loop applications. By concatenating two AOMs, with one of them shifting the frequency by +f_M and the other


Fig. 6.10.8 Adding a synchronized polarization scrambler (PS) to a recirculating loop.

one by −f_M, the overall frequency shift will be zero. The major drawback is that with two AOMs in tandem, the total loss is doubled. To conclude this section, an optical recirculating loop is a very useful instrument for long-distance optical system performance measurements. It significantly reduces the required number of optical fiber spans and optical amplifiers. Time control of the optical switches and of the enabling pulses for the BERT and the oscilloscope is important for recirculating loop measurements. The optical gain in the loop also has to be adjusted so that the net gain is unity (0 dB) for each circulation. The periodic nature of an optical system test bed based on a recirculating loop may create artificial phase matching and polarization-state matching over the system, which have to be considered in the measurements.

References Agilent Technologies, 2003. Jitter Analysis Techniques for High Data Rates. Application Notes #1432. Agrawal, G.P., 1989. Nonlinear Fiber Optics. Academic Press, San Diego, CA. Agrawal, G.P., 1992. Fiber-Optic Communication Systems, second ed. Wiley, New York. Aoki, Y., Tajima, K., Mito, I., 1988. Input power limits of single-mode optical fibers due to stimulated Brillouin scattering in optical communication systems. J. Lightwave Technol. 6 (5), 710–719. Bergano, N.S., Davidson, C.R., 1995. Circulating loop transmission experiments for the study of long-haul transmission systems using erbium doped fiber amplifiers. J. Lightwave Technol. 13, 879–888. Bergano, N.S., Kerfoot, F.W., Davidsion, C.R., 1993. Margin measurements in optical amplifier system. IEEE Photon. Technol. Lett. 5, 304–306. Birk, M., Zhou, X., Boroditsky, M., Foo, S.H., Bownass, D., Moyer, M., O’Sullivan, M., 2006. WDM technical trial with complete electronic dispersion compensation. In: ECOC 2006. (paper Th2.5.6). Boskovic, A., Chernikov, S.V., Taylor, J.R., Gruner-Nielsen, L., Levring, O.A., 1996. Direct continuouswave measurement of n2 in various types of telecommunication fiber at 1.55 μm. Opt. Lett. 21, 1966–1968. Caballero, F.J.V., et al., 2018. Machine learning based linear and nonlinear noise estimation. IEEE/OSA J. Opt. Commun. Netw. 10 (10), D42–D51.


Chiang, T.-K., Kagi, N., Marhic, M.E., Kazovsky, L., 1996a. Cross-phase modulation in fiber links with multiple optical amplifiers and dispersion compensators. IEEE J. Lightwave Technol. 14 (3), 249–260. Chiang, T.-K., Kagi, N., Marhic, M.E., Kazovsky, L., 1996b. Cross-phase modulation in fiber links with multiple optical amplifiers and dispersion compensators. J. Lightwave Technol. 14, 249–260. Cho, H.J., Varughese, S., Lippiatt, D., Desalvo, R., Tibuleac, S., Ralph, S.E., 2020. Optical performance monitoring using digital coherent receivers and convolutional neural networks. Opt. Express 28, 32087–32104. Chraplyvy, A.R., 1990. Limitations of lightwave communications imposed by optical fiber nonlinearities. IEEE J. Lightwave Technol. 8, 1548–1557. Craig, R.M., Gilbert, S.L., Hale, P.D., 1998. High-resolution, nonmechanical approach to polarizationdependent transmission measurements. J. Lightwave Technol. 16 (7), 1285–1294. Dianov, E.M., Luchnikov, A.V., Pilipetskii, A.N., Starodumov, A.N., 1991. Long-range interaction of soliton pulse trains in a single mode fiber. Sov. Lightwave Commun. 1, 37. Dianov, E.M., Luchnikov, A.V., Pilipetskii, A.N., Prokhorov, A.M., 1992. Long-range interaction of picosecond solitons through excitation of acoustic waves in optical fibers. Appl. Phys. B Lasers Opt. 54, 175–180. Dong, Z., Pak, A., Lau, T., Lu, C., 2012. OSNR monitoring for QPSK and 16-QAM systems in presence of fiber nonlinearities for digital coherent receivers. Opt. Express 20, 19520–19534. Eiselt, M., Shtaif, M., Garrett, L.D., February 1999. Cross-phase modulation distortions in multi-span WDM systems. In: Optical Fiber Communication Conference OFC ‘99, paper ThC5, San Diego, CA. Faruk, M.S., Mori, Y., Kikuchi, K., 2014. In-band estimation of optical signal-to-noise ratio from equalized signals in digital coherent receivers. IEEE Photonics J. 6 (1), 1–9. Forghieri, F., Tkach, R.W., Chraplyvy, A.R., 1995. WDM systems with unequally spaced channels. IEEE J. Light. Technol. 13, 889–897. Fu, B., Hui, R., 2005. Fiber chromatic dispersion and polarization-mode dispersion monitoring using coherent detection. IEEE Photon. Technol. Lett. 17 (7), 1561–1563. Heffner, B., 1992. Deterministic, analytically complete measurement of polarization-dependent transmission through optical devices. IEEE Photon. Technol. Lett. 4, 451–454. Ho, K.-P., 2003. Performance degradation of phase-modulated systems due to nonlinear phase noise. IEEE Photon. Technol. Lett. 15, 1213–1215. Hui, R., O’Sullivan, M., 2021. Estimating nonlinear phase shift in a multi-span fiber-optic link using a coherent transceiver. In: 2021 Optical Fiber Communications Conference and Exhibition (OFC). Hui, R., Demarest, K., Allen, C., 1999a. Cross phase modulation in multi-span WDM optical fiber systems. J. Lightwave Technol. 17 (7), 1018. Hui, R., Vaziri, M., Zhou, J., O’Sullivan, M., 1999b. Separation of noise from distortion for high-speed optical fiber system link budgeting. IEEE Photon. Technol. Lett. 11, 910–912. Hui, R., Zhu, B., Huang, R., Allen, C., Demarest, K., Richards, D., 2002. Subcarrier multiplexing for highspeed optical transmission. J. Lightwave Technol. 20, 417–427. Hui, R., Laperle, C., O’Sullivan, M., 2002. Measurement of total and longitudinal nonlinear phase shift as well as longitudinal dispersion for a fiber-optic link using a digital coherent transceiver. J. Light. Technol. https://doi.org/10.1109/JLT.2022.3198549. Hui, R., Saunders, R., Heffner, B., Richards, D., Fu, B., Adany, P., 2007. 
Nonblocking PMD monitoring in live optical systems. Electron. Lett. 43 (1), 53–54. Hui, R., Laperle, C., Charlton, D., O’Sullivan, M., 2021. Estimating system OSNR with a digital coherent transceiver. IEEE Photon. Technol. Lett. 33 (14), 743–746. Inoue, K., Toba, H., 1995. Fiber four-wave mixing in multi-amplifier systems with non-uniform chromatic dispersion. IEEE J. Light. Technol. 13 (1), 88–93. Ip, E., Pak, A., Lau, T., Barros, D.l.J.F., Kahn, J.M., 2008. Coherent detection in optical fiber systems. Opt. Express 16 (2), 753–791. Ippen, E.P., Stolen, R.H., 1972. Stimulated Brillouin scattering in optical fibers. Appl. Phys. Lett. 21, 539. Jiang, J., Richards, D., Allen, C., Oliva, S., Hui, R., 2008. Non-intrusive polarization dependent loss monitoring in fiber optic transmission systems. Opt. Commun. 281, 4631–4633. Kikuchi, K., 2006. Phase-diversity homodyne detection of multilevel optical modulation with digital carrier phase estimation. IEEE J. Sel. Top. Quantum Electron. 12, 563–570.


Kumar, P.V., Win, M.Z., Lu, H.F., Georghiades, C.N., 2002. Error-control coding techniques. In: Kaminow, I., Li, T. (Eds.), Optical Fiber Telecommunications IVB. Academic Press. Lee, J.H., Choi, H.Y., Shin, S.K., Chung, Y.C., 2006. A review of the polarization-nulling technique for monitoring optical-signal-to-noise ratio in dynamic WDM networks. J. Lightwave Technol. 24 (11), 4162–4171. Lin, X., Dobre, O.A., Ngatched, T.M.N., Eldemerdash, Y.A., Li, C., 2018. Joint modulation classification and OSNR estimation enabled by support vector machine. IEEE Photon. Technol. Lett. 30 (24), 2127–2130. Luo, T., Pan, A., Nezam, S.M.R.M., Yan, L.S., Sahin, A.B., Willner, A.E., 2004. PMD monitoring by tracking the chromatic-dispersion-insensitive RF power of the vestigial sideband. IEEE Photon. Technol. Lett. 16 (9), 2177–2179. Mahmoud, H.A., Arslan, H., 2009. Error vector magnitude to SNR conversion for nondata-aided receivers. IEEE Trans. Wirel. Commun. 8 (5), 2694–2704. Marcuse, D., Chraplyvy, A.R., Tkach, R.W., 1994. Dependence of cross-phase modulation on channel number in fiber WDM systems. IEEE J. Lightwave Technol. 12, 885–890. Marcuse, D., Manyuk, C.R., Wai, P.K.A., 1997. Application of the Manakov-PMD equation to studies of signal propagation in optical fibers with randomly varying birefringence. J. Lightwave Technol. 15 (9), 1735–1746. McNicol, J., O’Sullivan, M., Roberts, K., Comeau, A., McGhan, D., Strawczynski, L., 2005. Electrical domain compensation of optical dispersion. In: Proc. OFC. paper OThJ3. Melloni, A., Martinelli, M., Fellegara, A., 1999. Frequency characterization of the nonlinear refractive index in optical fiber. Fiber Integr. Opt. 18 (1), 1–13. O’Sullivan, M.S., Roberts, K., Bontum, C., 2005. Electronic dispersion compensation techniques for optical communication systems. In: European Conference on Optical Communications (ECOC 2005), Glasgow, Scotland, September 25–29, pp. 189–190. Penninckx, D., Chbat, M., Pierre, L., Thiery, J.P., 1997. The phase-shaped binary transmission (PSBT): a new technique to transmit far beyond the chromatic dispersion limit. IEEE Photon. Technol. Lett. 9, 259–261. Personick, S.D., 1977. Receiver design for optical fiber systems. Proc. IEEE 65, 1670–1678. Poggiolini, P., 2012a. The GN model of non-linear propagation in uncompensated coherent optical systems. J. Lightwave Technol. 30, 3857–3879. Poggiolini, P., 2012b. The GN model of non-linear propagation in uncompensated coherent optical systems. J. Lightwave Technol. 30 (24), 3857–3879. Proakis, J.G., 1968. Probabilities of error for adaptive reception of M-phase signals. IEEE Trans. Commun. Technol. 16, 71–81. Proakis, J.G., 2001. Digital Communications, fourth ed. McGraw-Hill, New York. Roberts, K., Zhuge, Q., Monga, I., Gareau, S., Laperle, C., 2017. Beyond 100 Gb/s: capacity, flexibility, and network optimization. J. Opt. Commun. Netw. 9 (4), C12–C24. Roudas, I., Piech, G.A., Mlejnek, M., Mauro, Y., Chowdhury, D.Q., Vasilyev, M., 2004. Coherent frequency-selective Polarimeter for polarization-mode dispersion monitoring. J. Lightwave Technol. 22 (4), 953–967. Savory, S.J., 2010. Digital coherent optical receivers: algorithms and subsystems. IEEE J. Sel. Top. Quantum Electron. 16 (5), 1164–1179. Shelby, R.M., Levenson, M.D., Bayer, P.W., 1985. Guided acoustic-wave Brillouin scattering. Phys. Rev. B 31 (8), 5244–5252. Shiner, A.D., Borowiec, A., Reimer, M., Charlton, D., O’Sullivan, M., 2016. Nonlinear spatially resolved interferometer for distance resolved power and gain tilt measurement. 
In: ECOC 2016; 42nd European Conference on Optical Communication, pp. 283–285. Shiner, A.D., et al., 2020. Neural network training for OSNR estimation from prototype to product. In: Optical Fiber Communication Conference paper M4E.2. Shtaif, M., Eiselt, M., 1997. Analysis of intensity interference caused by cross-phase modulation in dispersive optical fibers. IEEE Photon. Technol. Lett. 9 (12), 1592–1594.


Sittig, E.K., Coquin, G.A., 1970. Visualization of plane-strain vibration modes of a long cylinder capable of producing sound radiation. J. Acoust. Soc. Am. 48 (5), 1150–1159. Stephens, R., 2004. Analyzing jitter at high data rates. IEEE Commun. Mag. 42 (2), S6–S10. Sui, Q., Lau, A.P.T., Lu, C., 2010. OSNR monitoring in the presence of first-order PMD using polarization diversity and DSP. J. Lightwave Technol. 28 (15), 2105–2114. Sun, Y., Lima, A.O., Zweck, J., Yan, L., Menyuk, C.R., Carter, G.M., 2003. Statistics of the system performance in a scrambled recirculating loop with PDL and PDG. IEEE Photon. Technol. Lett. 15, 1067–1069. Vorbeck, S., Schneiders, M., 2004. Cumulative nonlinear phase shift as engineering rule for performance estimation in 160-Gb/s transmission systems. IEEE Photon. Technol. Lett. 16 (11), 2571–2573. Waklin, S., Conradi, J., 1999. Multilevel signaling for increasing the reach of 10 Gb/s lightwave systems. J. Lightwave Technol. 17, 2235–2248. Walker, D., Sun, H., Laperle, C., Comeau, A., O’Sullivan, M., 2005. 960-km transmission over G.652 fiber at 10 Gb/s with a laser/electro-absorption modulator and no optical dispersion compensation. IEEE Photon. Technol. Lett. 17, 2751–2753. Wang, J., Petermann, K., 1992. Small signal analysis for dispersive optical fiber communication systems. IEEE J. Lightwave Technol. 10 (1), 96–100. Wilson, S.G., 1996. Digital Modulation and Coding. Prentice Hall. Xu, H., Jiao, H., Yan, L., Carter, G.M., 2004. Measurement of distributions of differential group delay in a recirculating loop with and without loop-synchronous scrambling. IEEE Photon. Technol. Lett. 16, 1691–1693. Yu, Q., Yan, L.-S., Lee, S., Xie, Y., Willner, A.E., 2003. Loop-synchronous polarization scrambling technique for simulating polarization effects using recirculating Fiber loops. J. Lightwave Technol. 21, 1593–1600. Yu, Q., Pan, Z., Yan, L.-S., Willner, A.E., 2002. Chromatic dispersion monitoring technique using sideband optical filtering and clock phase-shift detection. J. Lightwave Technol. 20 (12), 2267–2271.


CHAPTER 7

Measurement errors

7.1 Introduction
All measurements are approximations. Knowledge of a measurand is limited by measurement error. Nevertheless, measurements are essential to many enterprises. For example, optical transceivers are designed and sold to meet specified performance levels over their installed lifetime. At manufacture, each transceiver is composed of a set of fairly independent components, individually specified to meet a level of performance that on assembly must guarantee the transceiver performance. A specification requires measurement, and measurements have error. It follows that measurement errors contribute to the observed spreads of performance that determine the success or failure of manufactured transceivers. Measurement error contributes to the transceiver cost if it causes an apparent performance failure at manufacture or if it allows a failing transceiver to be sold. The factory pass criterion is therefore set to a higher measured performance to guard-band the measurement error. In a second example, equipment and propagation models are used to predict the performance of transceivers in field installations. This is less costly than measuring on a case-by-case basis. To trust the predictions, the models are verified by measurement. This involves assigning observed differences to the measurement or to the model. Estimated measurement error is key to this assignment and drives measurement or model improvements. Ultimately, the guaranteed performance must be reduced by irreducible model misrepresentation and measurement error, with a consequential increase in the cost of network deployment. In a third example, a neutrino was initially observed at CERN to have traveled faster than light (Adam et al., 2011–2012) within the estimated measurement error. Unsurprisingly, a detailed examination of the experiment found flaws in the apparatus responsible for a systematic error far outside the estimated confidence limits. In these examples, estimation of measurement errors is essential to the design and use of measurements, and yet it is sometimes regarded as optional in common engineering practice. This chapter presents some tools to make and interpret physical measurements with attention to measurement errors.

7.1.1 Error classification and reporting
The speed of light in a vacuum is defined by standard to be 299,792,458 m/s; 1 m is the distance light travels in a vacuum in 1/299,792,458 s of time, and 1 s is the time elapsed in 9,192,631,770 cycles of radiation from the hyperfine transition, specifically |6 ²S₁/₂, F = 3, m_F = 0⟩ → |6 ²S₁/₂, F = 4, m_F = 0⟩, of unperturbed cesium-133.


Although these length and time units are, by definition, known absolutely, no measurement can be known with absolute accuracy in terms of these units. Consider a wavemeter that measures the vacuum wavelength of light under test by simultaneously counting its AC-coupled detected interference fringes and those of an overlapped and counterpropagating stable reference light in the folded movable arm of a Michelson interferometer (Hall and Lee, 1976). A new fringe occurs for every advance by one wavelength of the optical distance of the interferometer's arm. Over its travel, the number of test fringes, N_test, is counted starting and stopping on the first and final maximum-positive-slope threshold crossings of a preset number of reference fringes, N_ref. The number of crossings equals the number of fringes plus one. The starting reference fringe crossing is located randomly w.r.t. (with respect to) the first test fringe crossing. Test and reference fringe visibility is constant over a travel distance much shorter than the test and reference coherence lengths. The precision of a single test fringe count measurement is ±1 fringe (the counter only reports a whole number). Its value is also inaccurate by one fringe: the test counter systematically reports one less fringe than can fit in the measurement interval because of the offset of the reference and test start crossings. This systematic error is corrected by adding one fringe to the test count. Precision and accuracy can be further improved to a fraction of a fringe by averaging repeated measurements. The precision of a single measurement of N_ref is^a approximately ±(1/2π)√(NSR_ref), as the count contains a whole number of reference fringes up to the precision of two independent crossing detections. A fringe noise-to-signal ratio (NSR) of less than 0.1 is readily achievable, so the test count precision (±0.5 fringe) dominates the uncertainty of the ratio of test/reference fringe counts for a single measurement. A more precise measure is obtained by starting and stopping the counts on coincident crossings of test and reference to ensure a whole number of fringes at both wavelengths (Kahane et al., 1983). With this method, the uncertainty in the test/reference fringe count ratio is

\delta\left(\frac{N_{test}}{N_{ref}}\right) \cong \frac{N_{test}}{N_{ref}} \sqrt{\left(\frac{\sqrt{NSR_{ref}}}{2\pi N_{ref}}\right)^{2} + \left(\frac{\sqrt{NSR_{test}}}{2\pi N_{test}}\right)^{2}}   (7.1.1)

^a NSR_ref is the noise-to-signal ratio of the (AC-coupled) detected fringes. The fringe signal is s(l) = A·sin(2πl), where l is the continuous optical distance traveled by the movable interferometer arm in units of reference fringes. The signal power is A²/2, and an amplitude error in the signal, δ(s), stemming from signal amplitude noise leads to a fringe error at the crossing of δ(l) = δ(s)/(2πA). Thus, the rms fringe error of a crossing detection is √⟨δ(l)²⟩ = (1/2π)√(⟨δ(s)²⟩/A²) = (1/2π)√(NSR/2). The reference interval is determined by two independent crossing measurements; hence the rms fringe error of the interval is ≈(1/2π)√(NSR_ref).


For the case of the method of coincident crossings, consider a measurement in the visible, using a wavemeter with a ~1.5 m movable arm, for which NSR_ref ≅ 0.1, N_ref = 5000331, and N_test = 5005001. The arithmetic of the count ratio allows arbitrary digits of precision, e.g., N_test/N_ref = 1.0009339381733; however, the estimated uncertainty in the measurement, namely 0.000000014 (for NSR_test = 0.1), puts a limit on the number of significant digits. Any reported measurement must be curtailed to its last significant one or two digits. In this case, the reported measured ratio is N_test/N_ref = 1.00093394(1), limited by measurement uncertainty. The estimated error in brackets applies to the last significant digit of the measured value (i.e., 1.0009339381733 rounded to 1.00093394), such that the last significant digit reported is 4 ± 1. If the accuracy of the estimated uncertainty justifies two significant digits, the reported ratio can be N_test/N_ref = 1.000933938(14). In another standard notation, N_test/N_ref = 1.00093394 ± 0.00000001. Both notations achieve the goal of limiting confidence in the reported measurement to the extent that it can be reliably used for other purposes. The estimate of a test vacuum wavelength from the known reference vacuum wavelength and a measured fringe count ratio is

\lambda_{test} = \frac{n_{test}}{n_{ref}}\,\lambda_{ref}\,\frac{N_{test}}{N_{ref}}   (7.1.2)

where λ and n are wavelength and refractive index, respectively. The uncertainty in the measured test wavelength is

\delta(\lambda_{test})^{2} \cong \left(\frac{n_{test}}{n_{ref}}\,\lambda_{ref}\,\frac{N_{test}}{N_{ref}}\right)^{2} \left\{ \left(\frac{\sqrt{NSR_{ref}}}{2\pi N_{ref}}\right)^{2} + \left(\frac{\sqrt{NSR_{test}}}{2\pi N_{test}}\right)^{2} + \frac{\langle\delta(\lambda_{ref})^{2}\rangle}{\lambda_{ref}^{2}} + \frac{\langle\delta(n_{test}/n_{ref})^{2}\rangle}{(n_{test}/n_{ref})^{2}} \right\}   (7.1.3)

In the measurement under consideration, N_test/N_ref ≈ 1.0, n_ref ≈ 1.0, λ_ref ≈ 0.63 μm, and (1/2π)√(NSR_ref)/N_ref ≈ (1/2π)√(NSR_test)/N_test ≈ 10⁻⁸; for a stabilized reference HeNe laser, √⟨δ(λ_ref)²/λ_ref²⟩ ≈ 5 × 10⁻⁹, and the ratio of the indices of air at the test and reference wavelengths can be known to 1 part in 10⁹. Consequently, the error in the test wavelength is ≈1 × 10⁻⁸ μm. The example of a wavemeter measurement also demonstrates the importance of error estimation in the design, interpretation, and reporting of measurements. It showed that higher precision can be achieved by starting and stopping the reference and test counters on coincident crossings.
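The arithmetic of Eqs. (7.1.1)-(7.1.3) for this example can be checked with a few lines of Python. The sketch below evaluates Eq. (7.1.1) with the quoted counts and then combines the contributions of Eq. (7.1.3) using the reference-laser and air-index uncertainties quoted in the text:

import numpy as np

N_ref, N_test = 5000331, 5005001
NSR_ref = NSR_test = 0.1

ratio = N_test / N_ref
# Eq. (7.1.1): uncertainty of the fringe-count ratio (coincident crossings)
d_ratio = ratio * np.sqrt((np.sqrt(NSR_ref) / (2 * np.pi) / N_ref) ** 2 +
                          (np.sqrt(NSR_test) / (2 * np.pi) / N_test) ** 2)
print("N_test/N_ref = %.10f +/- %.1e" % (ratio, d_ratio))

# Eq. (7.1.3): relative variance terms for the test wavelength
rel_var = [(np.sqrt(NSR_ref) / (2 * np.pi) / N_ref) ** 2,
           (np.sqrt(NSR_test) / (2 * np.pi) / N_test) ** 2,
           (5e-9) ** 2,          # stabilized HeNe reference wavelength
           (1e-9) ** 2]          # air-index ratio known to 1 part in 1e9
lam_ref = 0.63                                     # um
lam_test = ratio * lam_ref                         # Eq. (7.1.2) with n_test/n_ref ~ 1
d_lam = lam_test * np.sqrt(sum(rel_var))
print("lambda_test = %.8f um +/- %.1e um" % (lam_test, d_lam))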


In the absence of coincident crossings, it also determined that an unsynchronized test counter reports a systematically lower test count by one fringe. Having accounted for this systematic error, the precision of an unsynchronized test count can be increased by averaging many repeated measurements; in the absence of any other systematics, so too can its accuracy. According to Eq. (7.3.3), the limit of precision or accuracy that might be achieved by averaging depends on the reciprocal of the square root of the number of independent repeated measurements. Eq. (7.1.3) provides a stopping point based on the acceptable uncertainty in the wavemeter measurement objective. Estimates of the standard deviations of the distributions of independent repeated measured values quantify the measurement errors. Error distributions are assumed to be reasonably symmetric about their mean. Precision is a measure of the spread of the random part of a measured value. A sample of repeated measurements is taken from an experiment in a fixed state defined by controlled measurement conditions (e.g., a loss, a launched power, an optical waveform, a data record, etc.). Each independent measurement instance is an element of the sample. A spread of measured values, such as depicted in Fig. 7.1.1 for normally distributed errors, is observed. The spread may originate from such things as noise, instrument resolution, reading error, etc., and error distributions vary depending on the measurement/experiment statistics (Taylor, 1997, pp. 3–10; Pugh and Winslow, 1966, pp. 2–14). Properties of normally distributed errors will be emphasized in this chapter. The mean of the distribution, which depends on the state of the experiment as determined by its state parameters,^b is offset from the true value by a residual systematic error. The wider the spread of the elements of the samples, the greater the difference of sample means required to distinguish between

Fig. 7.1.1 Normal distribution of repeated measured values minus the true value of the measurand. The offset of the distribution mean from zero is caused by a systematic error. The spread of values is measured by the standard deviation of the distribution.

^b Parameters of the experiment that control the value of the measurand.


measurement states. Fig. 7.1.1 shows a systematic error that separates the measured mean from the measurand's true value and limits the measurement accuracy. The spread of the sample elements depicted in Fig. 7.1.1 is the precision of any single element. The spread of the means of repeated samples is reduced by the reciprocal of the square root of the number of elements in a sample. The standard deviation is used to measure the precision. A systematic error is a knowable measurement infidelity. Compared with the duration of the experiment, systematic errors may be time-independent, as for an instrument miscalibration, or time-dependent, as for a temperature dependence in an environment where the temperature fluctuates faster than the duration of a measurement campaign. The dispersion of air depends on temperature, pressure, and humidity.^c In the wavemeter example, with 1.5 and 0.63 μm test and reference wavelengths, respectively, the measured ratio in air is lower than in a vacuum by about 0.4 ppm. This can vary on a timescale of hours by about 3 ppb due to temperature, pressure, and humidity fluctuations. The accuracy of a measurement is its departure from truth. It can be dominated by the measurement precision but is ultimately limited by systematic errors. Lastly, error can be affected by operator bias: a person's belief or motivation can affect a measurement's interpretation and alter its value or meaning.

7.2 Measurement error statistics
We assume that the underlying mean of a sample of repeated measurements has no systematic error and that the measured values are real. V_i[1, N] is a sample of N independent, repeated measurements of a random variable V from a stationary underlying distribution with finite variance. Forbes et al. (2011) provides a useful reference on statistical distributions of finite variance. The standard deviation of the underlying distribution measures reproducibility. This is estimated from the incomplete knowledge contained in a sample. The sample mean is

\bar{V} = \frac{\sum_{i=1}^{N} V_{i}}{N}   (7.2.1)

The mean is an unbiased estimator. It is the best estimate of V̄ from the sample values (Taylor, 1997). The sample variance, σ²_V, is nearly the average of the squares of the differences of the sample values from the mean. A factor of N − 1 rather than N in the denominator

^c For estimates provided here, the index of air is estimated from
$$n(\lambda, rH, P, T) \approx 1 + \left[8342.13 + \frac{2406030}{130-\frac{1}{\lambda^{2}}} + \frac{15997}{38.9-\frac{1}{\lambda^{2}}}\right]\frac{P}{133.31566}\cdot\frac{10^{-8}}{1+0.003671\,T} \;-\; \frac{rH\,P}{133.31536}\left(5.722-\frac{0.0457}{\lambda^{2}}\right)\times 10^{-8}$$
λ is the vacuum wavelength in μm, rH is the relative humidity fraction, P is the air pressure in Pascals, and T is the air temperature in Celsius.

makes the sample variance an unbiased estimate of the true variance, σ². A degree of freedom is lost by the use of the sample rather than the true mean, leaving N − 1 degrees of freedom.

$$\sigma_V^2 = \frac{\sum_{i=1}^{N}\left(V_i-\bar{V}\right)^2}{N-1} \qquad (7.2.2)$$

The sample standard deviation is defined as the square root of the sample variance

$$\sigma_V = \sqrt{\frac{\sum_{i=1}^{N}\left(V_i-\bar{V}\right)^2}{N-1}} \qquad (7.2.3)$$

The true standard deviation, σ, measures the spread of sample values about their mean. As defined, the sample standard deviation, σ_V, underestimates σ by an extent that depends on the underlying distribution and the size of the sample, N. For the case of an underlying normal distribution with standard deviation σ and mean μ, with probability density function

$$G(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} \qquad (7.2.4)$$

an unbiased estimate of the standard deviation is obtained from the sample data (Lehmann and Casella, 1998, p. 92).

$$\hat{\sigma} = \sqrt{\frac{N-1}{2}}\;\frac{\Gamma\!\left(\frac{N-1}{2}\right)}{\Gamma\!\left(\frac{N}{2}\right)}\;\sqrt{\frac{\sum_{i=1}^{N}\left(V_i-\bar{V}\right)^2}{N-1}} = \mathrm{corr}(N)\cdot\sigma_V \qquad (7.2.5)$$

Γ(·) is a Gamma function. Examples of the correction factor, corr(N), are shown versus N in Fig. 7.2.1, for underlying normal, uniform,^d chi of order 3,^e and double exponential^f distributions. The mean of the sampling distribution of σ_V converges to σ with increasing N. The longer the tails of the distribution, the slower the convergence. The uncertainty in the bias caused by not knowing the distribution over the set of distributions explored here is from 10% at N = 2 to 1% at N = 50.

^d The PDF of a zero-mean uniform distribution is U(x) = 1/(2a) for x ∈ [−a, a], and 0 otherwise.
^e The PDF of a chi distribution of order 3 is CHI3(x) = x² e^{−x²/2}/(√2 Γ(3/2)).
^f The PDF of a double exponential distribution is DE(x) = (α/2) e^{−α|x|}.

Fig. 7.2.1 Standard deviation correction factor versus sample size, N. The product of this factor and the sample standard deviation gives an unbiased estimate of standard deviation for underlying normal (red line), uniform (blue line), double exponential (black line), and order 3 chi (open symbols) distributions.
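As an illustration of Eq. (7.2.5), the normal-distribution correction factor corr(N) can be evaluated directly; the minimal Python sketch below tabulates it for a few sample sizes (the choice of sample sizes is illustrative).

```python
# Bias-correction factor corr(N) of Eq. (7.2.5) for an underlying normal
# distribution: an unbiased estimate of sigma is corr(N) * sigma_V.
import math

def corr_normal(N: int) -> float:
    # corr(N) = sqrt((N - 1) / 2) * Gamma((N - 1) / 2) / Gamma(N / 2)
    return math.sqrt((N - 1) / 2) * math.gamma((N - 1) / 2) / math.gamma(N / 2)

if __name__ == "__main__":
    for N in (2, 3, 5, 10, 20, 50):
        print(f"N = {N:2d}, corr(N) = {corr_normal(N):.4f}")
```

For N = 2 the factor is about 1.25 and it approaches 1 as N grows, consistent with the normal-distribution curve of Fig. 7.2.1.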

7.2.1 Effective sample size in the presence of serial correlations

Variance and standard deviation estimates are based on the number of independent instances in the sample. Thus far it has been assumed that each element of a sample expresses an independent instance of random error, such that the number of degrees of freedom in the estimated standard deviation is N − 1. Correlation of errors in the sequence of a sample can affect the number of independent instances in the sample. As a result, Eqs. (7.2.2), (7.2.5), and, presently, (7.3.3), (7.3.4), (7.3.5) can produce underestimates or overestimates, depending on the sign of the correlation coefficient. For example, correlations might occur if the timescale of random processes that contribute to measurement error is longer than the time between repeated measurements. We assume a stationary distribution of errors. The correlation, r, between two sequences p_i[1,N] and q_i[1,N] is

$$r = \frac{\sum_{i=1}^{N}(p_i-\bar{p})(q_i-\bar{q})}{\sqrt{\sum_{i=1}^{N}(p_i-\bar{p})^2}\;\sqrt{\sum_{i=1}^{N}(q_i-\bar{q})^2}} \qquad (7.2.6)$$

When p_i is independent of q_i, the numerator of Eq. (7.2.6) tends to zero for large N: r → 0. When p_i ∝ q_i, r = 1. The correlation between measurements of a single sequence in time is estimated from its autocorrelation. This is obtained from Eq. (7.2.6) by setting q_i = p_{i+l}, where l is the number of lags. The autocorrelation coefficient at the lth lag of the sequence p_i is

$$r_l = \frac{\sum_{i=1}^{N-l}\left(p_i-\bar{p}_{j1}\right)\left(p_{i+l}-\bar{p}_{j2}\right)}{\sqrt{\sum_{i=1}^{N-l}\left(p_i-\bar{p}_{j1}\right)^2}\;\sqrt{\sum_{i=1}^{N-l}\left(p_{i+l}-\bar{p}_{j2}\right)^2}} \;\xrightarrow{\;\text{large } N\;}\; \frac{\sum_{i=1}^{N-l}(p_i-\bar{p})(p_{i+l}-\bar{p})}{\sum_{i=1}^{N}(p_i-\bar{p})^2} \qquad (7.2.7)$$

$$\bar{p}_{j1} = \frac{1}{N-l}\sum_{i=1}^{N-l} p_i\,;\qquad \bar{p}_{j2} = \frac{1}{N-l}\sum_{i=l}^{N} p_i \qquad (7.2.8)$$

The dependence on past measured values usually decays with time and can be strongest for the shortest time between measurements. r_1 is an estimate of the correlation between successive measurements. Under these conditions an estimate of the effective sample size, N_eff, in the presence of a serial correlation is

$$N_{\mathrm{eff}} = N\cdot\frac{1-r_1}{1+r_1} \qquad (7.2.9)$$

Fig. 7.2.2 shows N_eff/N as a function of the lag-1 autocorrelation coefficient.

Fig. 7.2.2 N_eff/N as a function of the lag-1 autocorrelation coefficient.
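The following minimal Python sketch illustrates Eqs. (7.2.7)–(7.2.9) by estimating the lag-1 autocorrelation coefficient of a sequence and the corresponding effective sample size; the AR(1) test sequence and its parameters are illustrative, not taken from the text.

```python
# Effective sample size in the presence of serial correlation, following
# Eqs. (7.2.7)-(7.2.9): estimate the lag-1 autocorrelation coefficient r1
# and reduce N to N_eff = N * (1 - r1) / (1 + r1).
import numpy as np

def lag1_autocorrelation(p: np.ndarray) -> float:
    # Large-N form of Eq. (7.2.7) with l = 1, using the full-sample mean.
    d = p - p.mean()
    return float(np.sum(d[:-1] * d[1:]) / np.sum(d * d))

def effective_sample_size(p: np.ndarray) -> float:
    r1 = lag1_autocorrelation(p)
    return len(p) * (1.0 - r1) / (1.0 + r1)            # Eq. (7.2.9)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # AR(1) sequence with positive serial correlation (rho = 0.6).
    rho, n = 0.6, 2000
    x = np.empty(n)
    x[0] = rng.normal()
    for i in range(1, n):
        x[i] = rho * x[i - 1] + rng.normal()
    print("r1   =", round(lag1_autocorrelation(x), 3))
    print("Neff =", round(effective_sample_size(x), 1), "of N =", n)
```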

7.3 Central limit theorem (CLT)

Let X_i[1,N] be a set of independent identically distributed random variables from a sequence with underlying mean μ and finite variance σ². As N tends to infinity, the distribution of the random variable

$$\bar{X}_N = \frac{1}{N}\sum_{i=1}^{N} X_i \qquad (7.3.1)$$

tends to a distribution with a normal probability density

$$\mathrm{PDF}\left(\bar{X}_N\right) \;\underset{N\to\infty}{\Longrightarrow}\; \frac{1}{\sigma\sqrt{2\pi/N}}\, e^{-\frac{N\left(\bar{X}_N-\mu\right)^2}{2\sigma^2}} \qquad (7.3.2)$$

The distribution of sample means of random variables tends to a normal distribution at large N for any underlying sample distribution of finite variance. Similarly, distributions of sample variances and standard deviations tend to normal distributions at a large number of samples. The estimated standard deviations (standard errors) of the distributions of sample mean, variance, and standard deviation, δ_m, δ_v, δ_s, respectively, are

$$\delta_m = \frac{\sigma_V\cdot \mathrm{corr}(N)}{\sqrt{N}} \qquad (7.3.3)$$

$$\delta_v = \sigma_V^2\cdot\sqrt{\frac{2}{N-1}} \qquad (7.3.4)$$

$$\delta_s = \frac{\sigma_V\cdot \mathrm{corr}(N)}{\sqrt{2(N-1)}} \qquad (7.3.5)$$
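A minimal Python sketch of the standard-error estimates of Eqs. (7.3.3)–(7.3.5), assuming the normal-distribution correction factor corr(N) of Eq. (7.2.5); the simulated sample is illustrative.

```python
# Standard errors of the sample mean, variance, and standard deviation
# per Eqs. (7.3.3)-(7.3.5), with the normal-distribution corr(N) of Eq. (7.2.5).
import math
import numpy as np

def corr_normal(N: int) -> float:
    return math.sqrt((N - 1) / 2) * math.gamma((N - 1) / 2) / math.gamma(N / 2)

def standard_errors(sample: np.ndarray):
    N = len(sample)
    s = sample.std(ddof=1)                                  # Eq. (7.2.3)
    delta_m = s * corr_normal(N) / math.sqrt(N)             # Eq. (7.3.3)
    delta_v = s**2 * math.sqrt(2.0 / (N - 1))               # Eq. (7.3.4)
    delta_s = s * corr_normal(N) / math.sqrt(2 * (N - 1))   # Eq. (7.3.5)
    return delta_m, delta_v, delta_s

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sample = rng.normal(loc=5.0, scale=2.0, size=25)
    dm, dv, ds = standard_errors(sample)
    print(f"mean = {sample.mean():.3f} +/- {dm:.3f}")
    print(f"variance standard error = {dv:.3f}, std-dev standard error = {ds:.3f}")
```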

7.3.1 Approximations related to the central limit theorem

When the elements V_i of V are the sums of N independent, identically distributed random contributions of mean μ and finite variance σ², then, as N increases, the distribution of elements can often be well represented by a normal distribution (Eq. (7.3.2)).

$$\mathrm{PDF}(V_i) \cong \frac{1}{\sigma\sqrt{2\pi N}}\, e^{-\frac{(V_i-N\mu)^2}{2N\sigma^2}} \qquad (7.3.6)$$

For the case of multiplicative contributions, the distribution of the logarithm of the elements of V approaches a normal distribution. The tendency to a normal distribution can be argued even when contributions come from independent distributions with different variances, as long as no one variance dominates. For the sum of N′ contributions from distributions with means μ_{i=[1,N′]} and variances σ²_{i=[1,N′]}, an equivalent to Eq. (7.3.6) is

$$\mathrm{PDF}(V_i) \cong \frac{1}{\sqrt{2\pi\sum_{i=1}^{N'}\sigma_i^2}}\, e^{-\frac{\left(V_i-\sum_{i=1}^{N'}\mu_i\right)^2}{2\sum_{i=1}^{N'}\sigma_i^2}} \qquad (7.3.7)$$

Fig. 7.3.1 Probability of a measurement instance in excess of ± a multiple of standard deviations from the mean of its distribution. Lines are drawn for the sums of independent contributions from 1, 2, 4, and 10 identical uniform distributions. A line for the normal distribution is also shown.

The approximation of Eq. (7.3.7) is shown in Fig. 7.3.1 for the case of the sum of sets of one, two, four, and ten random contributions from identical uniform distributions. Here the logarithm of the residual probability of an observation outside of the interval [−A·σ, A·σ], where σ is the underlying standard deviation of the reproducibility, is plotted vs. A. The thin dashed line is the case of a normal probability distribution.^g For A > 1.5, the normal distribution overestimates the residual probability of observing a measured value beyond ±A·σ from sums of uniform distributions by an extent that decreases with increasing N.
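The comparison of Fig. 7.3.1 can be approximated by a brief Monte Carlo sketch; the threshold A and the number of trials below are illustrative choices.

```python
# Monte Carlo sketch of the comparison in Fig. 7.3.1: the probability that a
# sum of N identical uniform contributions falls more than +/- A sigma from
# its mean, compared with the two-sided normal-distribution tail.
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)
A = 2.5                       # threshold in units of standard deviations
trials = 200_000
for N in (1, 2, 4, 10):
    s = rng.uniform(-1.0, 1.0, size=(trials, N)).sum(axis=1)
    sigma = s.std()
    p_sum = np.mean(np.abs(s) > A * sigma)
    print(f"N = {N:2d}: P(|sum| > {A} sigma) = {p_sum:.4f}")
# Two-sided normal tail for comparison: erfc(A / sqrt(2))
print(f"normal: {erfc(A / sqrt(2)):.4f}")
```

For N = 1 the tail probability beyond 2.5σ is exactly zero (a uniform variable never exceeds about 1.73σ), and it approaches the normal value as N increases, in line with the text.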

7.4 Identifying candidate outliers

On occasion, the value of an element can depart from the rest of the sample by enough to call into question its legitimacy. "Enough" depends on the sizes of the sample and of the element's departure. A large sample is more likely to contain low-probability occurrences than a small sample. A measure of departure is the magnitude of the difference between the element and the estimated sample mean divided by the estimated sample standard deviation. Fig. 7.3.1 shows the probability of exceedance by ± this departure or more for a normal distribution (thin dashed line). Start by assuming a normal underlying distribution. An element is doubtful when its exceedance probability times the sample size is sufficiently small; Peirce's criterion values PC(N, ε), listed in Table 7.4.1, provide the corresponding departure thresholds. Test the departures first against PC(N, 1). If none is found, there are no doubtful elements. If one is found, then test again with PC(N, 2) and continue testing sequentially with PC(N, i = 2, 3, 4, …) until the first test, PC(N, i + 1 = 2 or 3 or 4, …), that does not result in the identification of an additional doubtful element. If more than one doubtful element (e.g., 2 instead of 1) is identified in the PC(N, 1) test, continue the testing at PC(N, 1 + 2).


Table 7.4.1 Peirce's criterion values (Ross, 2003), PC(N, ε), for sample size N and candidate number of doubtful elements, ε. Values are listed column by column; each column runs over consecutive N, beginning at the smallest N for which the criterion is tabulated.

For N = 3–31:
ε = 1 (N = 3–31): 1.196 1.383 1.509 1.61 1.693 1.763 1.824 1.878 1.925 1.969 2.007 2.043 2.076 2.106 2.134 2.161 2.185 2.209 2.23 2.251 2.271 2.29 2.307 2.324 2.341 2.356 2.371 2.385 2.399
ε = 2 (N = 4–31): 1.078 1.2 1.299 1.382 1.453 1.515 1.57 1.619 1.663 1.704 1.741 1.775 1.807 1.836 1.864 1.89 1.914 1.938 1.96 1.981 2 2.019 2.037 2.055 2.071 2.088 2.103 2.118
ε = 3 (N = 6–31): 1.099 1.187 1.261 1.324 1.38 1.43 1.475 1.516 1.554 1.589 1.622 1.652 1.68 1.707 1.732 1.756 1.779 1.8 1.821 1.84 1.859 1.877 1.894 1.911 1.927 1.942
ε = 4 (N = 7–31): 1.022 1.109 1.178 1.237 1.289 1.336 1.379 1.417 1.453 1.486 1.517 1.546 1.573 1.599 1.623 1.646 1.668 1.689 1.709 1.728 1.746 1.764 1.781 1.797 1.812
ε = 5 (N = 9–31): 1.045 1.114 1.172 1.221 1.266 1.307 1.344 1.378 1.409 1.438 1.466 1.492 1.517 1.54 1.563 1.584 1.604 1.624 1.642 1.66 1.677 1.694 1.71
ε = 6 (N = 11–31): 1.059 1.118 1.167 1.21 1.249 1.285 1.318 1.348 1.377 1.404 1.429 1.452 1.475 1.497 1.517 1.537 1.556 1.574 1.591 1.608 1.624
ε = 7 (N = 12–31): 1.009 1.07 1.12 1.164 1.202 1.237 1.268 1.298 1.326 1.352 1.376 1.399 1.421 1.442 1.462 1.481 1.5 1.517 1.534 1.55
ε = 8 (N = 14–31): 1.026 1.078 1.122 1.161 1.195 1.226 1.255 1.282 1.308 1.332 1.354 1.375 1.396 1.415 1.434 1.452 1.469 1.486
ε = 9 (N = 16–31): 1.039 1.084 1.123 1.158 1.19 1.218 1.245 1.27 1.293 1.315 1.336 1.356 1.375 1.393 1.411 1.428

For N = 32–60:
ε = 1: 2.412 2.425 2.438 2.45 2.461 2.472 2.483 2.494 2.504 2.514 2.524 2.533 2.542 2.551 2.56 2.568 2.577 2.585 2.592 2.6 2.608 2.615 2.622 2.629 2.636 2.643 2.65 2.656 2.663
ε = 2: 2.132 2.146 2.159 2.172 2.184 2.196 2.208 2.219 2.23 2.241 2.251 2.261 2.271 2.281 2.29 2.299 2.308 2.317 2.326 2.334 2.342 2.35 2.358 2.365 2.373 2.38 2.387 2.394 2.401
ε = 3: 1.957 1.971 1.985 1.998 2.011 2.024 2.036 2.047 2.059 2.07 2.081 2.092 2.102 2.112 2.122 2.131 2.14 2.149 2.158 2.167 2.175 2.184 2.192 2.2 2.207 2.215 2.223 2.23 2.237
ε = 4: 1.828 1.842 1.856 1.87 1.883 1.896 1.909 1.921 1.932 1.944 1.955 1.966 1.976 1.987 1.997 2.006 2.016 2.026 2.035 2.044 2.052 2.061 2.069 2.077 2.085 2.093 2.101 2.109 2.116
ε = 5: 1.725 1.74 1.754 1.768 1.782 1.795 1.807 1.82 1.832 1.843 1.855 1.866 1.876 1.887 1.897 1.907 1.917 1.927 1.936 1.945 1.954 1.963 1.972 1.98 1.988 1.996 2.004 2.012 2.019
ε = 6: 1.64 1.655 1.669 1.683 1.697 1.711 1.723 1.736 1.748 1.76 1.771 1.783 1.794 1.804 1.815 1.825 1.835 1.844 1.854 1.863 1.872 1.881 1.89 1.898 1.907 1.915 1.923 1.931 1.939
ε = 7: 1.567 1.582 1.597 1.611 1.624 1.638 1.651 1.664 1.676 1.688 1.699 1.711 1.722 1.733 1.743 1.754 1.764 1.773 1.783 1.792 1.802 1.811 1.82 1.828 1.837 1.845 1.853 1.861 1.869
ε = 8: 1.502 1.517 1.532 1.547 1.561 1.574 1.587 1.6 1.613 1.625 1.636 1.648 1.659 1.67 1.681 1.691 1.701 1.711 1.721 1.73 1.74 1.749 1.758 1.767 1.775 1.784 1.792 1.8 1.808
ε = 9: 1.444 1.459 1.475 1.489 1.504 1.517 1.531 1.544 1.556 1.568 1.58 1.592 1.603 1.614 1.625 1.636 1.646 1.656 1.666 1.675 1.685 1.694 1.703 1.711 1.72 1.729 1.737 1.745 1.753

Finally, all doubtful elements have been identified based on the assumption of normally distributed errors. In the event these are removed, sample statistics can be re-evaluated.
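A minimal Python sketch of the sequential test described above. The PC values used here are those read from Table 7.4.1 for N = 10 and ε = 1–3; the data sample is illustrative.

```python
# Sequential identification of candidate doubtful elements using Peirce's
# criterion values PC(N, eps) from Table 7.4.1 (values below are for N = 10).
import numpy as np

PC_N10 = {1: 1.878, 2: 1.570, 3: 1.380}   # PC(10, eps) for eps = 1, 2, 3

def peirce_doubtful_elements(sample, pc_table):
    """Return indices of candidate doubtful elements for this sample."""
    x = np.asarray(sample, dtype=float)
    mean, std = x.mean(), x.std(ddof=1)
    departure = np.abs(x - mean) / std           # departure in units of sigma
    eps = 1
    doubtful = set()
    while eps in pc_table:
        flagged = set(np.flatnonzero(departure > pc_table[eps]))
        if len(flagged) <= len(doubtful):        # no additional doubtful element
            break
        doubtful = flagged
        eps = len(doubtful) + 1                  # e.g., jump to PC(N, 1 + 2) if 2 found
    return sorted(doubtful)

if __name__ == "__main__":
    data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.1, 9.7, 10.0, 9.9, 12.4]  # one suspect value
    print("candidate outliers at indices:", peirce_doubtful_elements(data, PC_N10))
```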

7.5 Error estimates of measurement combinations

Measurements are often combined to estimate a metric. For simplicity, we assume that no systematic errors are present in the constituents. For example, a combination, R = R(o, p, q), is some function of measurements o, p, and q with small rms uncertainties δo, δp, and δq. (The number of possible different constituents is unlimited.) δR, the uncertainty in R, is estimated by Taylor expansion to be at most

$$\delta R \cong \left|\frac{\partial R}{\partial o}\right|\delta o + \left|\frac{\partial R}{\partial p}\right|\delta p + \left|\frac{\partial R}{\partial q}\right|\delta q \qquad (7.5.1)$$

Measurement errors

Let o, p, and q be measurable quantities considered to be random variables with means ō, p̄, and q̄. For small differences, by Taylor expansion in the vicinity of their means,

$$R_i = R(o_i, p_i, q_i) \approx R(\bar{o}, \bar{p}, \bar{q}) + \frac{\partial R}{\partial o}(o_i-\bar{o}) + \frac{\partial R}{\partial p}(p_i-\bar{p}) + \frac{\partial R}{\partial q}(q_i-\bar{q}) \qquad (7.5.2)$$

i ∈ [1, N] is a shared index of o, p, and q. The means of the Taylor terms involving the derivatives are zero by inspection; thus, R̄ = R(ō, p̄, q̄). The standard deviation of R is

$$\sigma_R = \sqrt{\frac{\sum_{i=1}^{N}\left[\frac{\partial R}{\partial o}(o_i-\bar{o}) + \frac{\partial R}{\partial p}(p_i-\bar{p}) + \frac{\partial R}{\partial q}(q_i-\bar{q})\right]^2}{N-1}} \qquad (7.5.3)$$

Expanding,

$$\sigma_R = \sqrt{\left(\frac{\partial R}{\partial o}\right)^2\sigma_o^2 + \left(\frac{\partial R}{\partial p}\right)^2\sigma_p^2 + \left(\frac{\partial R}{\partial q}\right)^2\sigma_q^2 + 2\left[\frac{\partial R}{\partial o}\frac{\partial R}{\partial p}\,\sigma_{o,p} + \frac{\partial R}{\partial q}\frac{\partial R}{\partial p}\,\sigma_{q,p} + \frac{\partial R}{\partial o}\frac{\partial R}{\partial q}\,\sigma_{o,q}\right]} \qquad (7.5.4)$$

The last term in the square root comprises cross-correlations of the o, p, and q measurement sample elements.

$$\sigma_{a,b} = \frac{\sum_{i=1}^{N}(a_i-\bar{a})\left(b_i-\bar{b}\right)}{N-1} \qquad (7.5.5)$$

is the covariance of a and b.

7.5.1 Error estimates for combinations of uncorrelated measurement samples

When measured sample elements are uncorrelated, their covariances are zero and Eq. (7.5.4) can be rewritten as

$$\sigma_R = \sqrt{\left(\frac{\partial R}{\partial o}\right)^2\sigma_o^2 + \left(\frac{\partial R}{\partial p}\right)^2\sigma_p^2 + \left(\frac{\partial R}{\partial q}\right)^2\sigma_q^2} \qquad (7.5.6)$$

Associating the σs of Eq. (7.5.6) with the δs of Eq. (7.5.1), the latter estimate is an upper bound by the Schwartz inequality. Based on Eq. (7.5.6), we write the error estimates of some common measurement combination functions.

$$R(o,p,q) = A\,o + B\,p + C\,q;\qquad \sigma_R = \sqrt{A^2\sigma_o^2 + B^2\sigma_p^2 + C^2\sigma_q^2} \qquad (7.5.7)$$

$$R(o,p,q) = \frac{A}{o} + \frac{B}{p} + \frac{C}{q};\qquad \sigma_R = \sqrt{\frac{A^2}{o^4}\sigma_o^2 + \frac{B^2}{p^4}\sigma_p^2 + \frac{C^2}{q^4}\sigma_q^2} \qquad (7.5.8)$$

$$R(o,p,q) = o\,p\,q;\qquad \sigma_R = (o\,p\,q)\sqrt{\left(\frac{\sigma_o}{o}\right)^2 + \left(\frac{\sigma_p}{p}\right)^2 + \left(\frac{\sigma_q}{q}\right)^2} \qquad (7.5.9)$$

$$R(o,p,q,s,t,u) = \frac{o\,p\,q}{s\,t\,u};\qquad \sigma_R = \frac{o\,p\,q}{s\,t\,u}\sqrt{\left(\frac{\sigma_o}{o}\right)^2 + \left(\frac{\sigma_p}{p}\right)^2 + \left(\frac{\sigma_q}{q}\right)^2 + \left(\frac{\sigma_s}{s}\right)^2 + \left(\frac{\sigma_t}{t}\right)^2 + \left(\frac{\sigma_u}{u}\right)^2} \qquad (7.5.10)$$

$$R(o,p,q) = \log_{10}(o+p+q);\qquad \sigma_R = \frac{1}{\ln(10)}\sqrt{\left(\frac{\sigma_o}{o+p+q}\right)^2 + \left(\frac{\sigma_p}{o+p+q}\right)^2 + \left(\frac{\sigma_q}{o+p+q}\right)^2} \qquad (7.5.11)$$

Thus, for products or quotients, the relative error in R is the root sum of squares of the relative errors of its measured arguments.
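A minimal Python sketch of error propagation for uncorrelated measurements per Eq. (7.5.6), using central finite differences for the partial derivatives and checking the product rule of Eq. (7.5.9); the numerical values are illustrative.

```python
# Propagation of uncorrelated measurement errors per Eq. (7.5.6):
# sigma_R^2 = sum_i (dR/dx_i)^2 * sigma_i^2, with the partial derivatives
# approximated by central finite differences.
import math

def propagate_uncorrelated(func, values, sigmas, rel_step=1e-6):
    var = 0.0
    for i, (v, s) in enumerate(zip(values, sigmas)):
        h = rel_step * (abs(v) if v != 0 else 1.0)
        up = list(values); up[i] = v + h
        dn = list(values); dn[i] = v - h
        dRdx = (func(*up) - func(*dn)) / (2 * h)
        var += (dRdx * s) ** 2
    return math.sqrt(var)

if __name__ == "__main__":
    # Product R = o * p * q, compared with the analytic rule of Eq. (7.5.9):
    o, p, q = 2.0, 5.0, 0.4
    so, sp, sq = 0.02, 0.1, 0.004
    numeric = propagate_uncorrelated(lambda o, p, q: o * p * q, [o, p, q], [so, sp, sq])
    analytic = (o * p * q) * math.sqrt((so / o) ** 2 + (sp / p) ** 2 + (sq / q) ** 2)
    print(f"numeric sigma_R = {numeric:.5f}, Eq. (7.5.9) gives {analytic:.5f}")
```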

7.5.2 The weighted mean

Independent measurements of the same measurand can be combined to improve the precision of the measured value. This is shown in Eq. (7.3.3), where the relative standard error on the mean of a sample, δ_m/V̄, shrinks as 1/√N. In this example, all elements are subject to the same underlying random error distribution (a common standard deviation). What if the measurements in question are sample means, or elements, of different samples of the same measurand obtained under different circumstances resulting in different precisions? Intuitively one might expect that, as long as all the measurements share an underlying mean (i.e., same or no systematic error), some sort of average that assigns more importance to the more certain measurements will have better precision than any of the individuals. Indeed, this is the case. One can demonstrate this by considering two measurements of the same measurand, m1 and m2, with respective standard errors σ1 and σ2 and underlying mean m. The probabilities of each of these observations are obtained from their respective normal distributions, and the probability of the pair is the product of these individual probabilities.

$$P(m_1, m_2) = \frac{e^{-\left[\frac{(m_1-m)^2}{2\sigma_1^2} + \frac{(m_2-m)^2}{2\sigma_2^2}\right]}}{2\pi\,\sigma_1\sigma_2}\cdot\Delta m^2 \qquad (7.5.12)$$

where Δm is a fixed interval centered on m1 or m2 to estimate probabilities at m1 or m2 from the probability density. The most likely value of m is that which maximizes the probability of the observed pair (i.e., minimizes the magnitude of the exponent). Setting the derivative of the exponent with respect to m equal to zero, one obtains

$$m = \frac{m_1\,\sigma_1^{-2} + m_2\,\sigma_2^{-2}}{\sigma_1^{-2} + \sigma_2^{-2}} \;\pm\; \frac{\sigma_1\sigma_2}{\sqrt{\sigma_1^2 + \sigma_2^2}} \qquad (7.5.13)$$

Thus, the most likely value of m is the weighted average of m1 and m2, where the measured values are weighted by the reciprocal of the square of their standard errors (standard deviations). The estimated error of m, ± σ1σ2/√(σ1² + σ2²) from Eq. (7.5.7), is less than the error on m1 or m2 since σ2/√(σ1² + σ2²) < 1 and σ1/√(σ1² + σ2²) < 1.

The solution is readily extended to the case of k measurements

$$m = \frac{\sum_{i=1}^{k} m_i\,\sigma_i^{-2}}{\sum_{i=1}^{k}\sigma_i^{-2}} \;\pm\; \frac{\prod_{i=1}^{k}\sigma_i}{\sqrt{\sum_{i=1}^{k}\left(\prod_{j\neq i}\sigma_j^2\right)}} \qquad (7.5.14)$$
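A minimal Python sketch of the inverse-variance weighted mean of Eq. (7.5.14); the two example measurements are illustrative.

```python
# Inverse-variance weighted mean and its standard error, Eq. (7.5.14):
# measurements are weighted by the reciprocal of the square of their
# standard errors; the combined error is smaller than any individual one.
import numpy as np

def weighted_mean(values, sigmas):
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    m = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    sigma_m = 1.0 / np.sqrt(np.sum(w))   # equivalent to the product form in Eq. (7.5.14)
    return m, sigma_m

if __name__ == "__main__":
    # Two measurements of the same measurand with different precisions:
    m, sm = weighted_mean([10.3, 10.9], [0.2, 0.5])
    print(f"weighted mean = {m:.3f} +/- {sm:.3f}")
```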

7.6 Linear least squares fitting of data

"It has been said that 'all models are wrong but some models are useful.' In other words, any model is at best a useful fiction—there never was, or ever will be, an exactly normal distribution or an exact linear relationship. Nevertheless, enormous progress has been made by entertaining such fictions and using them as approximations." G.E.P. Box, 1997 (Box and Luceño, 1997, p. 6).

Data fitting is an extensive topic with established methods and software tools adapted to particular data characteristics and intended applications. Here a simple case of linear least squares fitting of measurements is examined. Usually a measurand is expected and/or observed to depend in some way on parameters of the measurement state (state parameters).^h For example, a measured current may depend on an applied voltage state parameter, or a measured absorption at some wavelength might depend on the partial pressure states of two different gases. The task is to find a sum of functions of the state parameters that predicts the measurements within estimated measurement error. The task has two aspects:
A) Find a smallest set of independent basis functions (i.e., functions of the state parameters) that can reasonably predict measurements over the examined parameter ranges. The functions may be known (e.g., by a priori modeling) and the fit to measured data may provide insight regarding systematic error in that knowledge or in the measurement. Alternately, the functions may be guessed and the fit may provide clues to a model and/or a means of prediction over the ranges of examined parameters.

^h State parameter values are fixed for a sample.


B) Find the best linear combination of the set of independent functions of the parameters that predicts measurements consistent with estimated measurement error.
Consider a set of measurements V_i[1,N] obtained for a set of independent measurement state parameters {c_{i,u} | u ∈ [1, U]}.^i V_i is posited to be predictable by a sum of independent basis functions f_k({c_{i,u}}), k ∈ [1, K], i.e.,

$$V_i = \sum_{k} \rho_k\, f_k(\{c_{i,u}\}) \qquad (7.6.1)$$

ρ_k are the coefficients of the predictor function that best reproduce the measurements, taking measurement uncertainties into account. Measurement state parameters {c_{i,u}} are treated as having negligible error.^j We write the probability of the measurement uncertainty as P(v) ∝ e^{ln(F(v))}, where F(v) is a probability density function of the measurement error, v, symmetric w.r.t. v = 0. The best choice of ρ_k is that for which the predictor function is closest to notional error-free measurements (truth). Since only the measured values are available, the likelihood of observing the differences between the measurements and the predictor function is maximized by the choice of ρ_k. The predictor function will then lie as close as the measurements can justify to the notional error-free measurement values. Hence

$$\{\rho_k\} = \max_{\rho_k}\left(\Delta V^{N}\prod_{i=1}^{N} e^{\ln\left(F\left(V_i - \sum_{k}\rho_k f_k(\{c_{i,u}\})\right)\right)}\right) \qquad (7.6.2)$$

where ΔV is a fixed incremental range in the vicinity of V_i to convert density to probability. The product in Eq. (7.6.2) is a maximum when the sum of the exponents of its terms is a maximum. Thus

$$\{\rho_k\} = \max_{\rho_k}\left(\sum_{i=1}^{N}\ln\left(F\left(V_i - \sum_{k}\rho_k f_k(\{c_{i,u}\})\right)\right)\right) \qquad (7.6.3)$$

^i For example, V_i might be a voltage instance that depends on a current c_i.
^j See Press et al. (1999, pp. 666–670) and Golub and Van Loan (1980) for methods and analyses that include state parameter errors.


For a normal distribution of errors

$$F\left(V_i - \sum_{k}\rho_k f_k(\{c_{i,u}\})\right) = \frac{1}{\sqrt{2\pi}\,\sigma_{V_i}}\, e^{-\frac{1}{2}\left(\frac{V_i - \sum_{k}\rho_k f_k(\{c_{i,u}\})}{\sigma_{V_i}}\right)^2} \qquad (7.6.4)$$

σ_{V_i} is the standard error of V_i and Eq. (7.6.3) can be rewritten as

$$\{\rho_k\} = \min_{\rho_k}\sum_{i=1}^{N}\left(\frac{V_i - \sum_{k}\rho_k f_k(\{c_{i,u}\})}{\sigma_{V_i}}\right)^2 \qquad (7.6.5)$$

The sum of squares of residuals on the right-hand side of Eq. (7.6.5) is referred to as a cost, loss, or merit function. Constant terms that do not affect the minimization solution of Eq. (7.6.3) have been removed from Eq. (7.6.5). For normally distributed residuals, the sum of terms to be minimized is recognized as a chi-square statistic,

$$\chi^2 = \sum_{i=1}^{N}\left(\frac{V_i - \sum_{k}\rho_k f_k(\{c_{i,u}\})}{\sigma_{V_i}}\right)^2 \qquad (7.6.6)$$

whose probability density function is

$$\mathrm{PDF}\left(\chi^2\right) = \frac{1}{2^{\nu/2}\,\Gamma(\nu/2)}\, e^{-\chi^2/2}\left(\chi^2\right)^{(\nu/2)-1} \qquad (7.6.7)$$

ν is the number of degrees of freedom in the statistic. At most, this is just the number of elements in the sum; however, the fitted coefficients decrease the number of degrees of freedom by interrelating otherwise independent elements of the statistic. Thus ν = N − K, where N is the number of measurements and K is the number of fitting coefficients. Setting the first derivative of χ² with respect to each coefficient equal to zero gives a set of K equations between measurements and coefficients to be solved for the K coefficients:

$$\frac{\partial}{\partial\rho_k}\left[\sum_{i=1}^{N}\left(\frac{V_i-\sum_{j=1}^{K}\rho_j f_j(\{c_{i,u}\})}{\sigma_{V_i}}\right)^{2}\right] = -2\sum_{i=1}^{N}\left(\frac{V_i-\sum_{j=1}^{K}\rho_j f_j(\{c_{i,u}\})}{\sigma_{V_i}}\right)\frac{f_k(\{c_{i,u}\})}{\sigma_{V_i}} = 0$$

In matrix form, [C]·ρ⃗ = B⃗, with elements

$$[C]_{k,j} = \sum_{i=1}^{N}\frac{f_k(\{c_{i,u}\})\,f_j(\{c_{i,u}\})}{\sigma_{V_i}^{2}},\qquad B_k = \sum_{i=1}^{N}\frac{V_i\,f_k(\{c_{i,u}\})}{\sigma_{V_i}^{2}} \qquad (7.6.8)$$

$$\vec{\rho} = [C]^{-1}\,\vec{B} \qquad (7.6.9)$$

Since the state parameters have negligible error, error in the coefficients originates from the V_i's:

$$\Delta\rho_k = \sum_{i=1}^{N}\sum_{j=1}^{K}[C]^{-1}_{k,j}\,\frac{\partial B_j}{\partial V_i}\,\Delta V_i \qquad \text{where}\qquad \frac{\partial B_j}{\partial V_i} = \frac{f_j(\{c_{i,u}\})}{\sigma_{V_i}^{2}} \qquad (7.6.10)$$

Δρ_k and ΔV_i are increments of the coefficients and measured values that are to be replaced by σ_{ρ_k} and σ_{V_i}, respectively. From Eq. (7.6.10), we can write the product of the standard errors of two coefficients, σ_{ρ_m}·σ_{ρ_n}:

$$\sigma_{\rho_m}\sigma_{\rho_n} = \sum_{j=1}^{K}\sum_{l=1}^{K}[C]^{-1}_{m,j}[C]^{-1}_{n,l}\left(\sum_{i=1}^{N}\frac{f_j(\{c_{i,u}\})\,f_l(\{c_{i,u}\})}{\sigma_{V_i}^{2}}\right) = \sum_{j=1}^{K}\sum_{l=1}^{K}[C]^{-1}_{m,j}[C]^{-1}_{n,l}[C]_{j,l} = [C]^{-1}_{m,n} \qquad (7.6.11)$$

We have used the fact that \(\sum_{j=1}^{K}[C]^{-1}_{m,j}[C]_{j,l}\) is the Kronecker delta of (m, l). Thus, the standard error variances of the coefficients are

$$\sigma_{\rho_k}^{2} = [C]^{-1}_{k,k} \qquad (7.6.12)$$

Off-diagonal elements of [C]⁻¹ are recognized as coefficient covariances.
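A minimal Python sketch of weighted linear least squares via the normal equations of Eqs. (7.6.8)–(7.6.12); the basis functions, state parameter values, and noise model in the usage example are illustrative.

```python
# Weighted linear least squares via the normal equations:
# [C] rho = B with C_kj = sum_i f_k f_j / sigma_i^2, B_k = sum_i V_i f_k / sigma_i^2,
# coefficient covariance matrix = [C]^-1 (Eq. (7.6.12)).
import numpy as np

def weighted_lls(basis_funcs, x, V, sigma_V):
    # Design matrix F[i, k] = f_k evaluated at each measurement state x_i.
    F = np.column_stack([f(x) for f in basis_funcs])
    w = 1.0 / np.asarray(sigma_V, dtype=float) ** 2
    C = F.T @ (F * w[:, None])          # matrix [C] of Eq. (7.6.8)
    B = F.T @ (w * np.asarray(V))       # right-hand side vector B
    Cinv = np.linalg.inv(C)
    rho = Cinv @ B                      # Eq. (7.6.9)
    rho_err = np.sqrt(np.diag(Cinv))    # Eq. (7.6.12)
    return rho, rho_err, Cinv

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    x = np.linspace(0.5, 10.0, 30)
    basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2]
    true_rho = np.array([1.0, 2.0, 0.3])
    sigma = 0.5 + 0.05 * x              # heteroscedastic errors (illustrative)
    V = sum(r * f(x) for r, f in zip(true_rho, basis)) + rng.normal(0, sigma)
    rho, err, _ = weighted_lls(basis, x, V, sigma)
    for k, (r, e) in enumerate(zip(rho, err), 1):
        print(f"rho_{k} = {r:.3f} +/- {e:.3f}")
```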


7.6.1 Fitting evaluation based on a chi-square merit function

With the fitted coefficients and correctly chosen basis functions, the expectation of χ² in Eq. (7.6.6) equals the number of degrees of freedom, ν. This can be used to test the fit and the underlying assumptions. We can initially assume that the sum in Eq. (7.6.6) is indeed a chi-square statistic, i.e., that the residuals are normally distributed and divided by their standard errors. The fit is tested with this presumption^k according to its value of reduced chi-square, χ²_r = χ²/ν, which has an expectation of 1. From Eq. (7.6.7), the likelihood, P(χ²_r), of observing a value of the reduced chi-square at or exceeding the measured value is

$$P\left(\chi_r^2\right) = 1 - \frac{\gamma\left(\frac{\nu}{2},\, \frac{\chi_r^2\,\nu}{2}\right)}{\Gamma(\nu/2)} \qquad (7.6.13)$$

γ(·,·) is the lower incomplete gamma function. P(χ²_r), or 1 − P(χ²_r), flags unlikely large or small values of the observed χ²_r, respectively, for scrutiny. For example, we choose to diagnose fit quality for a χ²_r value observed in the highest or lowest 5% of possible values. Consider an example of 10 measurements fitted to the sum of 3 state parameter functions (3 fitting coefficients), ν = 10 − 3 = 7. For this fit, the top and bottom 5% of possible values are χ²_r ≳ 2 and χ²_r ≲ 0.3, respectively. If χ²_r ≳ 2 or χ²_r ≲ 0.3:
1) It may be a valid though unlikely occurrence. If possible, re-measure the data set and fit to the same state parameter functions. Otherwise, look for possible causes outlined below.
2) Standard errors may be underestimated (χ²_r ≳ 2) or overestimated (χ²_r ≲ 0.3).
3) Measurement errors and/or fitted residuals may not be adequately represented by a normal distribution (see next section).
4) Chosen basis functions may be inappropriate to predict the measurements (low fidelity).
5) There may be too many state parameter functions included in the fitting: overfitting (χ²_r ≲ 0.4). This is a condition for which there are enough coefficients to partially fit measurement errors and cause a false minimization of the merit function.
In cases 2) and 3), the best-fit least squares coefficients obtained can still be valid as long as the relative magnitudes of the estimated error standard deviations reflect the actual uncertainties.
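A minimal Python sketch of the exceedance likelihood of Eq. (7.6.13), using the regularized lower incomplete gamma function from SciPy; the ν = 7 example mirrors the 10-measurement, 3-coefficient case above.

```python
# Likelihood of observing a reduced chi-square value at or above the measured
# one, Eq. (7.6.13), via the regularized lower incomplete gamma function.
from scipy.special import gammainc

def reduced_chi2_exceedance(chi2_r: float, nu: int) -> float:
    # P(chi2_r) = 1 - gamma(nu/2, chi2_r * nu / 2) / Gamma(nu/2)
    return 1.0 - gammainc(nu / 2.0, chi2_r * nu / 2.0)

if __name__ == "__main__":
    # Example from the text: for nu = 7, chi2_r of about 2 sits near the top 5%
    # and chi2_r of about 0.3 near the bottom 5%.
    for chi2_r in (0.3, 1.0, 1.55, 2.0):
        print(f"chi2_r = {chi2_r:4.2f}: exceedance = {reduced_chi2_exceedance(chi2_r, 7):.3f}")
```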

7.6.2 A fitting example

To demonstrate, linear least squares fitting is applied to modeled measurements. A measured value, V, depends on a single state parameter, x, where the true functional relationship is

^k It will be shown in Section 7.6.2 that interpretation of the fitted merit function value in terms of exceedance depends on the underlying error distribution.


$$V = 10\sqrt{x} + 8x + 0.1x^2 + 0.0001x^4 + 5.5\sin(x) \qquad (7.6.14)$$

Its basis functions are √x, x, x², x⁴, and sin(x). A measurement campaign comprises 26 measured values V_i[1,26] for different x_i[1,26]. Measurement errors are normally distributed and heteroscedastic (unequal error variances). In particular,

$$\sigma_{V_i}^2 = 0.5 + \left(\frac{V_i}{5}\right)^2 \qquad (7.6.15)$$

A campaign is modeled by adding a random error to each true value, according to its error variance, to simulate measured values. These are then fitted to the true basis functions by the method of linear least squares. Fig. 7.6.1 shows the fitting result for a first modeled measurement campaign instance. True values are shown as a red line, fitted values are shown as a black line, and measured values, bracketed by their standard errors, are blue symbols. The true and fitted coefficients for this instance are listed in Table 7.6.1. Standard errors in the fitted coefficients are tabulated in parentheses. True and fitted coefficients agree within the estimated coefficient standard error. The reduced chi-square for this campaign is 1.55 and there are 26 − 5 = 21 degrees of freedom. From Eq. (7.6.13), the likelihood of observing this value or higher is 5%. Fortunately, this experiment can be repeated. Coefficients and reduced chi-squares are collected from 10,000 independent modeled campaigns. The distributions of each coefficient and of all the resulting reduced chi-squares obtained from the fits are shown in Fig. 7.6.2. The means of the coefficient distributions agree with the true coefficients within the standard error on the mean.

Fig. 7.6.1 Measurements in blue of a functional relationship (red line) between a measurand (ordinate) and a state parameter value (abscissa). The black line is obtained by linear least squares fitting of basis functions to the measurements. Error bars are ±1 standard deviation.
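A minimal Python sketch of one modeled campaign along the lines described above: values of Eq. (7.6.14) are perturbed with the heteroscedastic errors of Eq. (7.6.15) and fitted by whitened (weighted) linear least squares. The state parameter values x and the random seed are illustrative assumptions, not taken from the text.

```python
# One simulated measurement campaign: 26 heteroscedastic measurements of
# Eq. (7.6.14) with variances from Eq. (7.6.15), fitted to the true basis
# functions by whitened linear least squares.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(1.0, 30.0, 26)                      # state parameter values (illustrative)
basis = [np.sqrt(x), x, x**2, x**4, np.sin(x)]
true_rho = np.array([10.0, 8.0, 0.1, 0.0001, 5.5])  # coefficients of Eq. (7.6.14)

V_true = np.column_stack(basis) @ true_rho
sigma = np.sqrt(0.5 + (V_true / 5.0) ** 2)          # Eq. (7.6.15)
V = V_true + rng.normal(0.0, sigma)                 # one simulated campaign

# Dividing each row by sigma_i makes ordinary least squares equivalent to
# minimizing the chi-square merit function of Eq. (7.6.6).
A = np.column_stack(basis) / sigma[:, None]
b = V / sigma
rho, chi2, _, _ = np.linalg.lstsq(A, b, rcond=None)
nu = len(V) - len(rho)
print("fitted coefficients:", np.round(rho, 4))
print("reduced chi-square :", round(float(chi2[0]) / nu, 2))
```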


Table 7.6.1 True coefficients from Eq. (7.6.14) (in the same sequence of their appearance) and fitted coefficients from the campaign of Fig. 7.6.1.

Coefficients
TRUE      FITTED
10        0 (10)
8         12 (7)
0.1       0.1 (4)
0.0001    0.0000 (4)
5.5       4 (6)

A quantile-quantile test of the normality of a coefficient distribution is obtained by plotting observed Z defined by Eq. (7.3.1) vs. Z of a normal distribution. The observed Z is the Z of a normal distribution that has an accumulated probability equal to that of the observed distribution.

$$\frac{\sum_{b=0}^{k} n_b}{\sum_{b} n_b} = 1 - \frac{\mathrm{erfc}\left(\frac{Z_k}{\sqrt{2}}\right)}{2} \qquad \text{(refer to footnote 7)} \qquad (7.6.16)$$

n_b is the number of observations in the bth bin of the observed histogram and Z_k is the Z of a normal distribution that has the same accumulated probability as the histogram up to its kth bin. For an observed normal distribution, the quantile-quantile plot is a straight line with unity slope. Quantile-quantile plots, for all five coefficients, are provided in Fig. 7.6.3A. As expected, distributions of all fitted coefficients are normal. The sensitivity of fitting results to the underlying error distribution is tested with another pair of 10,000 measurement campaigns where measurement errors are either uniformly distributed or distributed according to a double exponential.^l The same standard deviations of Eq. (7.6.15) are used. Fitting results for the five coefficients obtained using the earlier normal and the present uniform as well as double exponential error distributions are listed in Table 7.6.2. In this table, quoted errors of the mean and standard deviation are the standard errors of Eqs. (7.3.3), (7.3.5), respectively.

^l Due to longer probability tails of its PDF, a more robust merit function for the double exponential error distribution is \(\sum_{i=1}^{N}\left|\frac{V_i - \sum_{k}\rho_k f_k(\{c_{i,u}\})}{\sigma_{V_i}}\right|\) (obtained by using the double exponential function for F(·) in Eq. (7.6.4)) (Press et al., 1999, pp. 700–702).
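A minimal Python sketch of the observed-Z construction of Eq. (7.6.16), inverting the cumulative histogram with the complementary error function; the histogram binning and test data are illustrative.

```python
# Observed-Z computation for a quantile-quantile test, following Eq. (7.6.16):
# the observed Z at each histogram bin is the normal-distribution Z whose
# cumulative probability equals the accumulated fraction of observations.
import numpy as np
from scipy.special import erfcinv

def observed_z(samples, bins=50):
    counts, edges = np.histogram(samples, bins=bins)
    frac = np.cumsum(counts) / counts.sum()              # accumulated probability per bin
    frac = np.clip(frac, 1e-12, 1 - 1e-12)               # keep erfcinv finite
    z_obs = np.sqrt(2.0) * erfcinv(2.0 * (1.0 - frac))   # invert Eq. (7.6.16)
    return edges[1:], z_obs

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    data = rng.normal(0.0, 1.0, 20000)
    bin_edges, z = observed_z(data)
    # For normally distributed data the observed Z tracks the bin position in
    # standard deviations, giving a unity-slope quantile-quantile line.
    print(np.round(z[::10], 2))
```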


Fig. 7.6.2 Distributions of linear least squares fitting coefficients and merit function values obtained from 10,000 modeled measurements of Eq. (7.6.14). Means and standard deviations are quoted with standard errors.

Observed coefficient means and standard deviations agree with their references and with estimates using Eq. (7.6.12) to within ±1 standard error, respectively. Quantile-quantile plots in Fig. 7.6.3 for uniform (B) and double exponential (C) error distributions show that the fitted coefficient distributions have very nearly normal statistics over ±2 standard deviations of their mean values (95% of the population). Fig. 7.6.4A compares exceedance probabilities of minimum merit function values from each campaign, normalized


Fig. 7.6.3 Quantile-quantile plots of five coefficient distributions obtained for three sets of 10,000 independent least squares fittings of basis functions in Eq. (7.6.1) to simulated measurements subject to underlying (A) normal, (B) uniform, and (C) double exponential distributions with the same error standard deviations (Eq. (7.6.15)).

to the number of degrees of freedom, with Eq. (7.6.13) for a reduced chi-square distribution. As expected, there is good agreement with the reduced chi-square for normally distributed errors. For uniform and double exponential errors, the reduced chi-square exceedance of Eq. (7.6.13) over- and underestimates, respectively, the observed large-value exceedance. These disparities are not attenuated by the number of degrees of freedom. Rather, they are persistent properties of the underlying error distribution. At a given low large-value exceedance probability, distributions with longer or shorter tails than normal have larger or smaller merit function values than chi-square, respectively. It is shown in Fig. 7.6.4 that double exponential and uniform merit function exceedances evaluated at the reduced chi-square


Table 7.6.2 Fitted coefficient means and standard deviations from 10,000 modeled measurement campaigns for underlying normal, uniform, and double exponential error distributions. Error standard deviations are determined by Eq. (7.6.15). Quoted errors are estimated using Eqs. (7.3.3), (7.3.5). Reference coefficients are those of Eq. (7.6.14) and reference coefficient standard deviations are according to Eq. (7.6.12).

Coefficient | Normal: Mean | Normal: Standard deviation | Uniform: Mean | Uniform: Standard deviation | Exponential: Mean | Exponential: Standard deviation | Reference: Coefficient | Reference: sqrt([C]⁻¹_{i,i})
#1 | 9.9 (1) | 9.81 (7) | 9.9 (1) | 9.67 (7) | 10.0 (1) | 9.74 (7) | 10 | 9.72
#2 | 8.08 (7) | 6.56 (5) | 8.06 (7) | 6.50 (5) | 8.01 (7) | 6.59 (5) | 8 | 6.53
#3 | 0.094 (4) | 0.384 (3) | 0.097 (4) | 0.377 (3) | 0.098 (4) | 0.386 (3) | 0.1 | 0.38
#4 | 0.000108 (4) | 0.000390 (3) | 0.000102 (4) | 0.000385 (3) | 0.000103 (4) | 0.000392 (3) | 0.0001 | 0.000388
#5 | 5.51 (4) | 5.54 (4) | 5.53 (6) | 5.50 (4) | 5.57 (6) | 5.55 (4) | 5.5 | 5.48


Fig. 7.6.4 (A) Exceedance probabilities of reduced (normalized by degrees of freedom) fitting merit function values for normal, uniform, and double exponential underlying error distributions. The reference line for a reduced chi-square is drawn in black. (B) Exceedance percentages versus degrees of freedom of uniform (Unif) and double exponential (DE) reduced merit functions evaluated at reduced chi-square values that give 10% and 5% exceedance.

values of 5% and 10% exceedance give, respectively, 13(2)% and 19(2)% exceedances for the double exponential and 0.4(1)% and 2.1(3)% exceedances for the uniform distribution. These observations inform possibility 3) of Section 7.6.1. Thus, for the symmetric underlying error distributions examined here, with a linear relationship between the measurand and the state parameters, known or correctly chosen basis functions, and representative relative estimates of the measurement error variances, linear least squares fitting is observed to obtain reliable estimates of the most likely values of the fitting coefficients by Eq. (7.6.9). With representative error variance estimates, errors on the coefficients can be reliably estimated with Eq. (7.6.11). Interpretation of the merit function value depends on the underlying error distribution (Fig. 7.6.4A and B).

References

Adam, T., et al., 2011–2012. Measurement of the neutrino velocity with the OPERA detector in the CNGS beam. arXiv:1109.4897, V1–V4.
Box, G.E.P., Luceño, A., 1997. Statistical Control: By Monitoring and Feedback Adjustment. John Wiley and Sons.
Forbes, C., Evans, M., Hastings, N., Peacock, B., 2011. Statistical Distributions, fourth ed. Wiley, New York.
Golub, G.H., Van Loan, C., 1980. An analysis of the total least squares problem. SIAM J. Numer. Anal. 17 (6), 883–893.
Hall, J.L., Lee, S.A., 1976. Interferometric real-time display of CW dye laser wavelength with sub-Doppler accuracy. Appl. Phys. Lett. 29 (6), 367–369.
Kahane, A., O'Sullivan, M.S., Sanford, N.M., Stoicheff, B.P., 1983. Vernier fringe-counting device for laser wavelength measurements. Rev. Sci. Instrum. 54 (9), 1138–1142.
Lehmann, E.L., Casella, G., 1998. Theory of Point Estimation. Springer-Verlag, New York.
Peirce, B., 1852. Criterion for the rejection of doubtful observations. Astron. J. II (45), 161–163.
Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P., 1999. Numerical Recipes in C, the Art of Scientific Computing, second ed. Cambridge University Press.
Pugh, E.M., Winslow, G.H., 1966. The Analysis of Physical Measurements. Addison-Wesley.
Ross, S., 2003. Peirce's criterion for the elimination of suspect experimental data. J. Eng. Technol. 20 (2), 1–12.
Taylor, J., 1997. Error Analysis, second ed. University Science Books, Sausalito, CA.

Index

Note: Page numbers followed by f indicate figures, t indicate tables, and b indicate boxes.

A Acousto-optic frequency modulator (AOFM), 310–311, 311f Active mode locking, 477–478 Aggregate noise power distributions, 659, 660f Airy disk, 149 All-fiber FPI, 163, 163f AllWave fiber attenuation, 450–451, 451t Amplified multi-span WDM optical system, 655, 727–728 Amplified spontaneous emission (ASE) noise, 334, 374, 597–598 in electrical domain, 377–382, 377f filter optical bandwidth, 368 power spectral density, 376, 376f using OSA, 375, 375f using OSA and polarizer, 376–377, 377f Amplifier optical gain, 367–370 Angled physical contact (APC), 443 Angle-polished fiber surface, 60, 60f Antireflection (AR) coating, 163 Arrayed waveguide gratings, 426–428, 426f Autocorrelation coefficient, 799–800, 800f Autocorrelator measurement setup, 247, 247f optical circuit, 245, 245f short optical pulse measurement using, 243–249, 244–245f Automatic gain control (AGC), 107–110, 109f Automatic power control (APC), 107–110, 110f Avalanche effect, 38 Avalanche photodiode (APD), 30, 38–41, 39f advantages, 40–41 biasing circuits, 41–42, 42f charge density distribution, 39, 39f electrical field density profile, 39, 39f frequency response, 40 layer structure, 39, 39f optical receiver, 659–660 as single photon detectors, 41–42

B Backward-propagated Stokes power, 527 Backward pumping, 106–107, 107–108f

Balanced coherent detection, and polarization diversity, 217–219, 218–219f Baseband AM response method, 483–485, 483f, 485f BER vs. decision voltage (BER-V) measurement, 666 Bessel equation, 55–56, 55f, 118 Bessel–Jacob expansion, 567–568 Bias point stabilization of I/Q modulator, 128–132, 129f Bidirectional interferometry based on direct detection, 589, 589f distributed fiber sensor, 590, 591f Bidirectional optical amplifiers, 592 Birefringence in optical fiber, 489–492 Bit error rate (BER), 651 bit decision, binary receiver, 656, 656f decision phase, 675–676 vs. decision threshold measurement, 675–676, 688–690, 689f decision timing, 675–676 digital signal quality, 655 error detection, 672–676, 672f error function, 658 gating time, 655 for high order complex modula, 690–698 optimum decision threshold, 675–676 pattern generator, 669–672, 669–670f percent of unavailability, 668, 668f precise synchronization, 675 probability distribution function, 656–657, 657f Q vs. decision threshold plot, 689–690, 690f receiver Q-value function, 659, 659f synch loss, 675 system Q function, 655–656 testing, 666–676, 667f test set, 668–669, 669f time-domain waveform, 672–673, 673f Bragg wavelength, 24 Brewster angle, 48–49 Brillouin backscattering efficiency, 584 Brillouin gain coefficient, 526


Brillouin optical time-domain analysis (BOTDA), 585–586 Brillouin-optical time-domain reflectometer (BOTDR), 582–588 Brillouin scattering process, 525–530, 582–586

C Calibration process, 699–700, 699f Carrier confinement, semiconductors, 6 double heterostructure, 6, 6f homojunction, 6 photon density, 6 Carrier drift speed, 33–34 Carrier multiplication process, 38, 38f Carrier phase recovery (CPR), 591–592 Carrier-suppressed single-sideband (CS-SSB) modulation, 586 Central limit theorem (CLT), 800–802, 802f Chromatic dispersion, 65, 70 baseband AM response method, 483–485, 483f, 485f interferometric method, 485–489, 486f and measurement, 480–489 modulation phase shift method, 480–483, 481f R-SNR, 768–770, 770f sources of, 69–71 vs. wavelength, 70–71, 71f wavelength-dependent group velocity, 71–72 Classic electromagnetic theory, 54 Clock recovery circuit, 674, 674f Code-division multiple access (CDMA), 2–3 Coherent anti-Stokes Raman scattering (CARS), 615, 622 anti-Stokes signal, 622, 638–640 nonlinear wave mixing, 622–623 Coherent envelope detection and complex optical field detection, 312–318, 313f data acquisition and processing, 317–318 distributed fiber sensor, 590, 591f precise frequency alignment, 317 real-time digital analyzer, 315–317 Coherent heterodyne detection, 259–260, 259f, 361, 361f, 611 Coherent homodyne detection, 624 balanced detection coherent receiver, 220 based on swept frequency laser, 222–226, 222–223f

phase discontinuities during frequency sweeping, 224 phase diversity, 219–222, 220f phase diversity receiver, 220, 220f with phase locked feedback loop, 219–220, 220f phase noise induced signal fading, 220 signal RF power reduction, 222 signal spectral density, 224–225 time-dependent photocurrent waveform, 224 wideband optical signal, 225–226 Coherent optical detection, 211–226 fiber directional coupler in guided wave optics, 213 laser operating in continuous waves, 214 in lightwave systems, 213, 213f operating principle, 212–214 and polarization diversity, 217–219, 218–219f in radio communications, 212, 212f receiver SNR calculation, 215–217 square-law detection, 214 Coherent optical transceivers, 652, 690 Coherent Raman scattering (CRS) spectroscopy, 622–640, 626f, 628–629f, 631–632f, 635f frequency-shifted fundamental soliton, 632–634 Lorentzian spectral line shape, 629–630 measurement techniques, 631 photonic crystal fiber, 631–632 signal reduction, 628, 628f spectral resolution, 625–627, 626f using spectral focusing, 640 Commercial optical systems, 652 Complementary metal oxide semiconductor (CMOS)-compatible processes, 254–255 Complex optical field detection, 312–318 Computerized digital signal processing and display, 226 Constant wave (CW) probe channel, 732–733 Corning Vascade fibers, 451, 451t Critical angle, 46–48 Cross-gain saturation effect, 373 Cross phase modulation (XPM), 80, 449, 614–615 intensity modulation, 732–738, 733f non-linear index measurement, 545–549, 545f, 547–549f normalized transfer functions, 741–742, 741f phase modulation, 729–732 power transfer function, 735


and pump-probe based measurement techniques, 728–743, 731f CRS-based label-free microscopy, 640–646 lock-in amplifier, 645–646 modulation frequency, 644–645, 644f photothermal imaging, 641 photothermal signal, 645–646 Cutback measurement technique, 464–466, 464–465f

D Dark current noise, 36 Data-aided reception, 696–697 Dechirping process, 257–258 Decision-feedback equalizer (DFE), 684 Dense wavelength division multiplexed (DWDM) fiber systems, 614–615 Device under test (DUT), 276–277, 284–285 Differential group delay (DGD), 73, 491–492, 492f Differential phase noise, 320, 337 Differential phase-shift-key (DPSK) receiver, 686 Differential refractive index, 73 Diffraction gratings diffraction angle spreading, 142–143 fundamentals of, 140–144, 140–141f lightwave incident and diffraction, 141, 141f power transfer function, 141–142 signal wavelength, fixed diffraction angle, 143–144 Digital electronic circuits, 226 Digital equalization techniques, 684 Digital optical communication systems, 226 Digital oscilloscope, 226, 228, 228f autotrigger mode, 230 edge triggering, 229, 229f pattern triggering, 229, 229f single sweep or normal sweep, 230 Digital postprocessing, chirp waveforms, 265–266 Digital sampling oscilloscopes, 230–233 data acquisition, 230 equivalent-time sampling, 231 of high-speed optical waveform, 237, 237f high-speed waveform measurement, 699 low nonlinear coefficient in photonic devices and optical fibers, 238 operating principle, 231 random sampling, 232, 232f real-time sampling, 230

time-base calibration, 699–700, 699f ultrafast optical waveforms, 238 waveform reconstructed after random sampling, 232, 232f waveform sampling, 231, 231f Digital signal processing (DSP), 463 Digital sub-carrier multiplexing, 655 Digital transmission system testing, 670 Direct and indirect semiconductors, 5–6 Direct RF detection, 215–216, 217f Discrete fiber-optic sensors, 559–576 Fabry–Perot interferometers, 572–576, 573–575f Faraday rotation of polarization, 561–564, 562–563f fiber Bragg gratings, 568–572, 569–571f fiber-optic sensors, optical path loss, 559–560, 559–560f interferometry, 560–561, 561f Dispersion-compensating fibers (DCF), 448, 451–452, 654–655 Dispersion-compensating transmitter, 654–655, 684, 685f Dispersion measurements, in optical fibers, 475–489, 598 chromatic dispersion, 480–489 foldback all-fiber interferometer, 488–489, 488f intermodal dispersion, 476–480 Distributed Bragg reflector (DBR) semiconductor lasers, 280 Distributed feedback (DFB), 23–24 grating, 25, 25f laser diode, 24–25, 24–26f Distributed fiber Raman amplification with forward pumping, 392 Distributed fiber sensors, 557, 576–594 bidirectional fiber link and coherent detection, 590, 591f Brillouin and Raman OTDR, 582–588 interferometer-based distributed fiber sensors, 589–594 phase-sensitive OTDR, 576–582 unidirectional fiber link and coherent detection, 592, 592f Distributed Raman amplification in fiber systems, 3 Double-differential phase cancelation technique, 582 Double-pass monochromator, 145–146, 146f

821

822

Index

Double-sideband modulation, 122, 124f DQPSK optical signal, 693–694, 694f Dual-comb-based linear absorption spectroscopy, 611–612, 611f, 613f Dynamic gain tilt, 371–374, 372f

E Edge-emitting LEDs, 8, 8f Edge triggering, digital oscilloscope, 229, 229f Electric ADC using optical techniques, 242–243, 243f Electrical domain digital signal processing, 655 Electrical spectrum analyzer (ESA), 370, 370f Electrical-to-optical (E/O) conversion efficiency, 117–118 Electro-absorption (EA) modulation, 3 optical modulators, 132–134, 133f Electromagnetic field theory, 53–57 Electronic dispersion compensation (EDC), 684 Electro-optic I-Q modulator (EOM), 125–128, 126f, 339–357, 442 Electro-optic modulators, ring resonators, 198–204, 199–201f, 204f Electro-optic phase modulator, 114–134 based on MZI configuration, 116, 116f, 120f bias-dependent modulation characteristics, 120 bias point stabilization of I/Q modulator, 128–132 equivalent chirp parameter, 119 field transfer function, 121, 122f frequency doubling and duobinary modulation, 120–121 input-output power relationship, 117 I/Q modulation of complex optical field, 125–128 modulation efficiency, 116 operation principle of, 115–120, 116f optical modulators using electro-absorption effect, 132–134 optical power transfer function, 117 optical single-side modulation, 122–125, 123f power transfer function, 117, 121, 122f transfer function and input/output waveforms, 117–118, 118f zero-chirp modulators, 119 Electrostriction non-linearity, coherent detection, 738–743, 739f

Erbium-doped fiber (EDF), 598, 601–602 Erbium-doped fiber amplifiers (EDFAs), 3, 84, 97–111, 97f absorption and emission cross sections, 98–100, 100f with AGC and APC, 107–110 amplification system, 408 amplified spontaneous emission, 388–389 backward direction, 105 carrier density fluctuation, 388 design considerations, 106–110 dynamic gain tilt, 106 forward pumping and backward pumping, 106–107 gain flattening, 110–111, 111f instantaneous carrier density, 386–387 I/Q modulator, 332–333 limitations, 104 optical gain vs. wavelength, 106 quasi analytical formulation, 104 rate equation for carrier density, 100–106 Rayleigh back scattering, 408–409 slow carrier dynamics, 387 square-wave optical signal, 386–387, 386f time-domain characteristics, 385–388 time-gating technique, 387–388, 387f wavelength-dependent confinement factor, 105 Error classification and reporting, 793–797 Error estimates of measurement combinations, 804–807 of uncorrelated samples, 805–806 weighted mean, 806–807 Error ratio, 668 Error statistics measurement, 797–800 Error vector magnitude (EVM), 695 data-aided reception, 696–697 for high order complex modula, 690–698 External cavity, 23–24 External-cavity lasers (ECLs), 25–28, 26–27f, 320 External cavity semiconductor lasers, 28 External-cavity tunable semiconductor lasers, 735–736 External electro-optic modulator, 114–134 based on MZI configuration, 116, 116f bias point stabilization of I/Q modulator, 128–132 frequency doubling and duobinary modulation, 120–121

Index

I/Q modulation of complex optical field, 125–128 operation principle of, 115–120, 116f optical modulators using electro-absorption effect, 132–134 optical single-side modulation, 122–125 transfer function and input/output waveforms, 117–118, 118f External electro-optic modulators, 1–2 External modulation, 114–115 Extrinsic perturbation, 72–73

F Fabrication process and doping, 112–113 Fabry–Perot interferometer (FPI), 568–576, 569–571f, 573–575f basic optical configurations, 162–163 using concave mirrors, 162–163, 162f using free-space optics, 163 using plano mirrors, 162, 162f calibration procedure, 160–161, 161f cavity material absorption and beam misalignment, 155–156 collinear configuration, 154 configuration and transfer function, 152–158, 152f, 154f contrast, 157–158 fiber-optic applications, 154 finesse, 156–157 free spectral range, 156 half-power bandwidth, 156 notch filter, 157–158b, 157f optical spectrum analyzer, 158–161, 163–164 optical spectrum measurement, 159–160, 159f scanning, 151–164 spectrum folding, 160, 160f transmission vs. the signal wavelength, 155, 155f Fabry–Perot (FP) laser diode, 23, 412 Fabry–Perot resonator, 23 Faraday rotation of optical signal state of polarization (SOP), 561–564, 562–563f Far-field (FF) measurement techniques, 458–461, 459f charge-coupled detection, 461 normalized power distribution, 460–461, 460f using angular scanning, 459–460, 460f using planar detection array, 461, 461f Feed-forward equalizer (FFE), 684 Femtosecond fiber lasers, 596–605, 614–646

Femtosecond optical pulses CRS microscopy, 640–646 CRS spectroscopy, 622–640, 626f, 628–629f, 631–632f, 635f soliton self-frequency shift, 615–618 two-photon fluorescence microscopy, 618–621, 618f, 620–621f Fiber, classification, 448–457 Fiber attenuation measurement cutback technique, 464–466 optical time-domain reflectometers, 466–472, 467–470f, 472f and OTDR, 464–475 Fiber-based Michelson interferometer, 560–561, 561f Fiber-based nonlinear pulse amplification system, 601–602, 602f Fiber-based optical metrology discrete fiber-optic sensors, 559–576 distributed fiber sensors, 576–594 nonlinear spectroscopy and microscopy, femtosecond fiber lasers, 614–646 optical frequency combs and applications, 594–614 and spectroscopy techniques, 557–559, 594–646 Fiber-based Raman amplification, 614–615 Fiber birefringence, 489–492 Fiber Bragg grating (FBG), 419–423, 422f, 568–572, 569–571f Fiber coupler power splitting ratio, 188 Fiber lasers, 596 Fiber nonlinearity, 615 Fiber-optic communication/networking, 557 Fiber-optic couplers, 414–418, 414f, 416–418f Fiber-optic gyroscopes, 564–568, 564f, 566f Fiber-optic recirculating loop, 777–788 loading state, 779–781, 780f looping state, 779–781, 780f loop time, 779–781 measurement procedure and time control, 779–782, 780f operation principle, 778–779, 778f optical gain adjustment, 782–788, 783–784f Fiber-optic sensors, 559–576 on Fabry–Perot interferometers, 572–576, 573–575f on Faraday rotation of polarization, 561–564, 562–563f

823

824

Index

Fiber-optic sensors (Continued) on fiber Bragg gratings, 568–572, 569–571f on interferometry, 560–561, 561f optical path loss, 559–560, 559–560f Fiber-optic transmission systems, 652–676 Fiber Raman amplifiers, 113–114, 115f characterization of, 406–414, 407f on/off Raman gain, 406–407 Fiber system non-linearity, 738–743, 741f Finesse of FPI, 156–157 Fixed analyzer method, 499–503, 500f polarizer transfer function, 500, 501f rule of thumb, 502–503 Fizeau wedge interferometer-based wavelength meter, 186–187, 186f Focusing optics, 149–150, 149–150f aberration induced by lens, 150 beam divergence, 149, 149–150f output focusing lens, 149, 150f output slit or pinhole, 150 photodetection, 150 response calibration, 150 F€ orster resonance energy transfer (FRET), 619 Forward-biased pn junctions, 3 Forward/backward hybrid pumping, 106–107, 107–108f fiber Raman amplification system, 398 Raman amplification induced perturbation, 400 and 2nd-order pumping, 393–397, 393f, 396f small-signal perturbation, 400 unidirectional 1st-order pumping, 402 Fourier-domain OCT (FOCT), 274, 278 dechirping, 279–280 mechanical tunable delay, 282 operation principle, 278, 278f optical configuration, 278, 278f using fixed wideband source and optical spectrometer, 282–283, 282f wavelength-sweeping speed, 282 wavelength-swept laser source, 280 Four-wave mixing (FWM), 80–82, 449, 614–615 non-linear index measurement, 540–545, 542–544f in nonlinear medium, 237–238 in optical fiber, 93 Four-wave mixing (FWM)-induced crosstalk in optical systems, 743–749, 745–746f, 748f Free-space Michelson interferometers, 169–178 Free spectral range (FSR), 156, 188

Frequency chirp, 344–353 associated frequency modulation, 344 dispersion measurement, 345 interferometric measurement, 345 measurement utilizing fiber dispersion, 348–353, 348f, 350–353f modulation spectral measurement, 345–347, 347f in optical transmitters, 345 origins and physical mechanisms, 344 Frequency conjugation effect, 95 Frequency-domain method, intermodal dispersion, 478–480, 479f Frequency doubling and duobinary modulation, 120–121 MZI-based electro-optic modulator, 121 power transfer function, 125 Frequency modulated continuous wave (FMCW) LIDAR carrier-suppressed optical signal, 270 with coherent complex optical field modulation, 263–264, 263f coherent heterodyne detection, 259–260, 259f coherent homodyne detection and optical domain dechirping, 261, 262f dechirped frequencies, 268, 273–274, 273f with direct detection, 257–258, 257f extending chirp bandwidth, 264 FMCW RADRA based on complex optical fields, 262–263, 263f I/Q electro-optic modulator, 264 linearity of chirping, 259 phase discontinuity, 266–267, 266f and pulse compression, 255–274, 258f range and velocity, 268 reference reflector, 267–268 single-tone modulation frequency, 272–273, 272f transmitted optical spectrum, 270–271, 270f velocity detection, 272–273 wideband chirp waveform, 269 Frequency-resolved optical gating (FROG), 248 based on SHG, 248, 248f traces of optical pulses, 249, 249f Frequency stabilization of optical frequency combs, 605–608, 606f Frequency synthesizer, 342–343 Fresnel reflection coefficients, 44–45 Functional optical devices, 3 basic properties, 3 physics background, 3

G Gain recovery process, 90, 91f, 92 Gaussian noise distribution, 659 Geometric birefringence, 72–73 Geometric optics analysis, 50–53, 51–52f Graded-index fiber, 50, 50f Grating-based optical spectrum analyzers, 137–151 amplitude stability, 139 calibration accuracy, 139 diffraction gratings, 140–144, 140–141f dynamic range, 139 frequency sweep rate, 140 maximum allowable signal optical power, 139 OSA configurations, 144–151, 145f polarization dependence, 140 resolution bandwidth, 139 sensitivity, 139 signal optical power, 140 wavelength accuracy, 139 wavelength range, 138 Group delay dispersion, 69 Group velocity, 65–67 Group velocity dispersion, 67–68

H Half-power bandwidth (HPBW), 156 Helmholtz equations for electrical field, 54 for magnetic field, 54 Heterodyne optical detection, 215–216 High-sensitivity CCD technology, 463 High-speed electric ADC using optical techniques, 242–243, 243f High-speed photon counting, 41–42 High-speed real-time digital analyzer, 233–236, 234–235f High-speed sampling of optical signal, 236–243, 236f electric ADC using optical techniques, 242–243, 243f linear optical sampling, 238–240, 239–240f nonlinear optical sampling, 237–238, 237f sampling oscilloscope, single-photon detection, 240–242, 240–242f short optical pulse measurement using autocorrelator, 243–249, 244–245f High-speed silicon technology, 226 Hollow Core Nested Antiresonant Nodeless Fiber (HC-NANF), 455–456, 456f

Homodyne demodulation scheme, 573–574, 574f Homodyne optical detection, 215–216

I In situ monitoring of chromatic dispersion, 710–713, 711–713f of linear propagation impairments, 710–727 PDL monitoring, 723–727, 725f In situ PMD monitoring, 713–723, 714f polarization analysis, 716 signal modulation formats, 718–719 using coherent detection, 715–719, 717f using RF power detection, 714–715, 714f using vestigial sideband optical filtering, 714–715, 714f Integrated external cavity tunable lasers, 28–29 Integrated tunable lasers, 28–30, 29f Intensity-modulated direct detection (IMDD) systems, 652–653 data flow, 653–654, 653f network functionalities, 653–654 signal waveform distortion, 654 Intensity modulation of pump, 340–344, 341–342f, 739–740 Interferometer-based distributed fiber sensors, 589–594, 589f, 591f Interferometric method, 485–489, 486–487f, 494–497, 495–496f Intermodal dispersion fiber core and cladding, 476 frequency-domain measurement, 478–480, 479f and measurement, 476–480 pulse distortion method, 477–478, 477f randomness of mode coupling, 477 ray trace of propagation modes, 476 Inter-modulation distortion (IMD), 343 International Telecommunication Union (ITU-T), 448 cross-phase modulation, 449 dispersion vs. wavelength characteristics, 449, 450f four-wave mixing, 449 ITU-T G.652 fiber, 448 ITU-T G.651 MMF, 448 material quality and fabrication techniques, 450–451 standard single-mode fiber, 448 zero-dispersion wavelength, 449 Intrinsic jitter, 703–705, 705f Intrinsic perturbation, 72–73, 489

Ionization, 38 I/Q modulation bias control, 132 bias point stabilization, 128–132, 129f of complex optical field, 125–128, 126f feedback control system, 132 Iterative Fourier transform, 249

J Jacobi–Anger identity, 125 Jitter detection techniques, 706–710 Jitter generation, 703–705, 705f Jitter measurement on BER-T scan, 708–710, 709f on phase detector, 707–708, 707f on sampling oscilloscope, 706–707, 706f spectrum analyzer, 708 Jitter tolerance, 703–704, 704f Jitter transfer, 705–706, 705f Jones matrix method, 503–510, 503f

K Kerr effect fiber non-linearity, 76–82, 405–406, 408–409, 536–553, 738, 743 using cross-phase modulation, 545–549, 545f, 547–549f using FWM technique, 540–545, 542–544f using modulation instability, 549–553, 551–552f using SPM technique, 537–540, 539f

L Label-free biosensors, high-Q ring resonators, 194–198, 195f, 197f Lambertian, 8 λ-tunable femtosecond pulses excitation CRS microscopy, 640–646 CRS spectroscopy, 622–640, 626f, 628–629f, 631–632f, 635f soliton self-frequency shift, 615–618 two-photon fluorescence microscopy, 618–621, 618f, 620–621f Laser diodes (LDs), 11–23 gain and loss profile, 14, 14f InGaAsP semiconductor laser, 13–14b and LEDs, 3–30 multiple longitudinal modes, 13–14 optical feedback, 12, 12f photon density decay rate, 15

rate equations, 14–15 steady state solutions of rate equations, 15 threshold carrier density, 16–23 threshold condition, 12 Laser linewidth, 305–325 Laser noises, 22–23 Light detection and ranging (LIDAR), 251–274, 251f Light-emitting diodes (LEDs), 8–11, 11b emitting area of edge-emitting diodes, 9 modulation dynamics, 10–11 P–I curve, 9–10, 9f Light guiding and propagation in optical fiber, 42 Lightwave polarization, 205–206, 205f LiNbO3-based Mach–Zehnder modulators, 3 Linear least squares fitting of data, 807–817 based on chi-square merit function, 811 coefficients and reduced chi-squares, 812 error distribution, 813 linear least squares fitting coefficients and merit function values, 813, 814f modeled measurements, 811–812 quantile-quantile test, 813 Linear optical sampling, 238–240, 239–240f Line system, 652 Linewidth enhancement factor, 23 Liquid crystal technology, 110–111 Local oscillator (LO), in photodiode, 312 Lorentzian-equivalent linewidth, 319–325 Low-coherence interferometer, 561 Lowpass filter (LPF), 707–708 Lyot de-polarizer, 407–408

M Mach–Zehnder (MZ) intensity modulator, 302–303 Mach–Zehnder interferometer (MZI), 164–169, 308, 560–561, 561f, 570 distributed fiber sensor, 589, 589f beam splitter, 165 configuration of, 164–165, 165f transfer function, 167–168, 167f transfer matrix of 22 optical coupler, 165–167, 165f used as optical filter, 168–169, 169f Mach–Zehnder modulator (MZM), 684

Macro-bending, 452 Material dispersion, 69–70 Material-induced dispersion, 69 Maximum-likelihood sequence estimation (MLSE), 684 Maxwellian distribution, 491 Measurement errors absolute accuracy, 793–794 AC-coupled detected interference, 794 candidate outliers, 802–804 central limit theorem, 800–802 classification and reporting, 793–797 count ratio, 795 covariances, 805 degrees of freedom, 797–798, 817f distributions, 796 equipment and propagation models, 793 error estimates of measurement combinations, 804–807 estimation of, 793 linear least squares fitting of data, 807–817 mean of sampling distribution, 798–799 precision of, 794, 796–797 probability density function, 798, 809 sample size, serial correlations, 799–800, 800f statistics, 797–800 systematic error, 797 Michelson interferometer, 169–178 birefringence effect, 171 fiber coupler splitting ratio change, 172, 173f free-space optics, 169, 170f measurement and characterization, 174–175, 174–175f operating principle, 171–174 optical configuration, 169, 170f polarization effect, 171–172 power reflectivity and transmissivity, 178, 178–179f power splitting ratio, 172 power transfer function, 173, 173f Sagnac loop mirror, 175–178, 175f, 177–179f transfer function vs. power reflectivity ratio, extinction ratio of, 174, 174f unidirectional application, 172 using Faraday mirrors, 170–171, 170f Micro-bending, 62, 452 Microelectromechanical systems (MEMS), 28–29 Microring resonators, 568 Miniaturized optical spectrometers, 150

M-level quadrature amplitude modulation (M-QAM), 130 Modal dispersion, 71–72 Mode analysis in optical fibers, 53–57 Mode-field distribution, 457–464 far-field measurement, 458–461, 459f near-field measurement, 459, 461–464 of single-mode fiber, 457–458, 458f Mode-locked laser, 594–595, 597f Mode-locked Titanium-doped Sapphire (Ti:Sapphire) lasers, 596 Mode partition noise, 23 Modulation-induced chirp, time-domain measurement of, 353–357, 354–355f Modulation instability, non-linear index measurement, 549–553, 551–552f Modulation phase shift method, 480–483, 481f Mueller matrix method (MMM), 503f, 510–517, 512f, 723–724 Multi-heterodyne technique experimental setup, 332–333, 333f polarization controller, 332–333 reference frequency comb, 338–339 spectral properties, semiconductor laser frequency combs, 325–339, 328f, 330f, 332f tunable external cavity semiconductor laser, 332–333 Multi-modal microscopy, 643–644 Multimode fiber, 57

N Near-field measurement techniques, 459, 461–464, 462–463f Network analyzer, 283–292 RF network analyzer, 283–287, 285f scalar optical network analyzer, 287–288, 287f S-parameters, 283–287, 284f transmission/reflection test set, 284–285, 285f vector optical network analyzer, 288–292, 288f, 292f Noise-equivalent power (NEP), 38 Noise figure amplifier optical gain, 383 definition, 382–383, 382f electrical domain characterization, 384–385, 385f optical domain measurement, 384 signal-spontaneous beat noise electrical power, 383 Noise loading technique, 683

Noise process, 659–660 Noise spectral density, 661, 661–662t Nonlinear fiber amplification system, 601–602, 602f Non-linear index measurement using cross-phase modulation, 545–549, 545f, 547–549f using FWM technique, 540–545, 542–544f using modulation instability, 549–553, 551–552f using SPM technique, 537–540, 539f Non-linearity in optical fiber, 524–553, 525f Kerr effect non-linearity, 536–553 stimulated Brillouin scattering coefficient, 525–530 stimulated Raman scattering coefficient, 531–536 Nonlinear microscopy, femtosecond fiber lasers, 614–646 Nonlinear optical sampling, 237–238, 237f Nonlinear polarization rotation (NPR) of optical fiber, 597 Nonlinear pulse compression, 596 Nonlinear Schrödinger equation, 76–82 Nonlinear spectroscopy, femtosecond fiber lasers, 614–646 Non-Lorentzian phase noise, and Lorentzian-equivalent linewidth, 319–325 Non-return-to-zero (NRZ) modulation format, 673 Non-zero dispersion-shifted fiber (NZDSF), 736–738, 737f Numerical aperture, 57–61, 58f, 63–65, 64–65b

O Optical amplifiers, 1–3, 75–76, 83–114 characterization of, 366–414 3-dB saturation input power, 86 forward pumping, 392 gain medium, 84–86 gain saturation effect, 368 large-signal optical gain spectrum, 369 measurement setup, 367f, 369 noise, 374–375 nonlinear saturation, 85 non-linear phase shift, 392 optical gain, 84–86, 86f point-to-point optical transmission system and, 83, 83f postamplifier, 83 preamplifier, 83 static gain, 369 Optical attenuation, 61–62, 77

Optical autocorrelation using second harmonic generation, 243–249, 244–245f Optical band-pass filter (OBPF), 237–238, 410, 410f, 606–607 Optical circulators, 439–443, 439–441f Optical coherence tomography (OCT), 274–283 Optical communication system, 1 amplified spontaneous noise, 2–3 chromatic dispersion, 2–3 configuration of, 1, 2f fiber nonlinearity, 2–3 polarization-mode dispersion, 2–3 Optical devices characterization of, 297–443 electro-optic modulation response, 339–357 frequency instabilities, 300 intensity noise conversion, 300 normalized intensity noise spectral density, 299–300, 299f optical intensity noise, 300 transmission system, 300 wideband characterization, 357–366 Optical eye distortion mask, 663–666, 663f, 665f Optical fiber attenuation, 61–65, 62f material absorption, 61 scattering loss, 61 Optical fiber measurement fiber attenuation measurement and OTDR, 464–475 fiber dispersion measurements, 475–489 of fiber mode-field distribution, 457–464 of fiber non-linearity, 524–553 fiber types, 448–457 optical signal-to-noise ratio, 449 PMD sources and emulators, 520–524 polarization-dependent loss, determination of, 517–520 polarization mode dispersion, 489–517 Optical fibers, 42–82 attenuation, 61–65, 62f fiber-optic communication systems, 42 graded-index fiber, 50, 50f group velocity and dispersion, 65–74 nonlinear effects, 74–82 propagation modes, 49–61 reflection and refraction, 43–49 step-index fiber, 50, 50f structure of, 42, 42f wave propagation, 50

Optical filter transfer functions, 428–436, 429f interferometer technique, 433–436, 433f, 435f modulation phase-shift technique, 429–433, 429f, 431f phase ripple and signal modulation sidebands, 431–432, 431f Optical field phase shift, incident and reflected beams, 48 Optical frequency combs, 297 and applications, 594–614 broad spectral coverage, 604–605 carrier-envelope-offset, 595 definitions of, 595–596, 595f femtosecond fiber lasers, 596–605 frequency domain, 595, 595f frequency stabilization, 605–608 time domain, 595, 595f Optical frequency comb stabilization, 605–608, 606f Optical gain suppression, 90, 91f Optical intensity modulator, 116 Optical intensity noise, 22 Optical isolators, 367–368, 436–439, 437–438f Optical low-coherence reflectometer (OLCR), 274, 274f interference signal current, 275, 275f range extension, 277 spatial resolution vs. source spectral bandwidth, 276, 276f using balanced detection and phase modulation, 276–277, 277f Optically-pumped high-Q ring resonators, 596 Optical measurement based on coherent optical detection, 211–226 grating-based optical spectrum analyzers, 137–151 LIDAR and OCT, 250–283 Mach–Zehnder interferometer, 164–169 measurement accuracy, 137 mechanisms and instrumentation, 137–292 Michelson interferometer, 169–178 optical network analyzer, 283–292 polarimeter, 204–211 ring resonators and applications, 187–204 scanning FP interferometer, 151–164 waveform measurement, 226–249 wavelength meter, 179–187 Optical modulators, using electro-absorption effect, 132–134, 133f Optical path loss, fiber-optic sensors, 559–560, 559–560f

Optical phase noise, 22–23, 22f Optical polarimeters, 137–138, 204–211 Optical power amplification, 83–114 Optical power emission patterns, 8 Optical power management (OPM), 773–774, 773f Optical power variation, 733–734 Optical receiver characterization and calibration, 357–366 Optical recirculating loop, 684–686, 776, 778, 784, 786–788 Optical ring resonators and applications, 187–204 electro-optic modulators, 198–204, 199–201f, 204f label-free biosensors, 194–198, 195f, 197f Q-factor, 187–191 ring resonator power transfer function, 187–191, 188–189f as tunable optical filters, 191–194, 191f, 193f Optical signal, fiber DGD vs. DGD, 719–723 Optical signal, high-speed sampling of, 236–243, 236f electric ADC using optical techniques, 242–243, 243f linear optical sampling, 238–240, 239–240f nonlinear optical sampling, 237–238, 237f sampling oscilloscope, single-photon detection, 240–242, 240–242f short optical pulse measurement using autocorrelator, 243–249, 244–245f Optical signal-to-noise ratio (OSNR), 84, 449 with digital coherent transceiver, 753–759, 755f, 757–758f margin, 681–688, 683f, 685f multi-span measurements, recirculating loop and coherent receiver, 765–767, 766f non-linear phase shift in fiber-optic system, 759–767, 761f non-linear phase variations of probe pulses, 764–765, 764f optical system performance evaluation, 767–777 Optical single-sideband (OSSB) modulation, 122–125, 123f Optical spectrum analyzer (OSA), 616–617 based on double monochromator, 145–146, 146f configurations, 144–151, 145f focusing optics, 149–150, 149–150f

Optical spectrum analyzer (OSA) (Continued) grating-based, 137 with polarization sensitivity compensation, 146–149, 147–148f using combination of grating and FPI, 163–164, 164f using photodiode array, 150–151, 151f Optical system performance evaluation, OSNR, 767–777 Optical system performance measurements digital fiber-optic systems and sub-systems, 651 digital optical transmission systems, 651 high-speed optical waveforms, 651 receiver sensitivity measurement and, 676–698 Optical time-domain reflectometers (OTDR), 250, 466–472, 467f, 473t backscattered optical signal, 468–469, 468f backscattered power, 469–470, 469f Brillouin-OTDR, 582–588 discrete reflection peak, 470 fiber with multiple sections, 471–472, 472f fiber with splicing, 470, 470f Fresnel reflection, 475 improvement considerations of, 472–475 local oscillator, 580–582 optical pulse train, 577–579 phase-sensitive, 576–582 photodiode responsivity, 474–475 polarization controller, 474 Raman OTDR, 582–588 signal optical pulse, 468–469, 468f signal-to-noise ratio, 577–579 spatial resolution, 475 Optical wavelength conversion using cross-gain saturation, 92, 92f using FWM in SOA, 93–95, 94f Optical wavelength meter, 179–187 based on Fizeau wedge interferometer, 186–187, 186f calibration, wavelength reference laser, 185, 185f configurations of, 179, 180f fast Fourier transformation, 181, 182f operating principle, 180–182 signal coherence length, effect of, 184–185 spectral resolution, 183–184, 184f wavelength coverage, 182–183, 183f Orthogonal frequency-division multiplexing (OFDM), 2–3

P Passive optical components, 414–443 characterization of, 414–443 Fiber Bragg grating filters, 419–423, 422f fiber-optic couplers, 414–418, 414f, 416–418f non-uniform coupling coefficient, 421, 423f Pattern triggering, digital oscilloscope, 229, 229f Peirce’s criterion (PC), 803–804, 804t Phase noise, 22–23, 22f, 305–325 Phase-sensitive optical time-domain reflectometer, 576–582 Phase-shaped binary (PSB) modulation, 655 Phase velocity, and group velocity, 65–67 Photodetectors avalanche photodiode, 30, 38–41 current-voltage relationship, 34–35, 34f electrical characteristics, 34–35 equivalent load resistance, 35 incident optical signal, 37 junction capacitance, 34 noise and SNR, 35–38 noise sources, 35 optical signal, 36 Pn-junction photodiodes, 30–32 preamplifier circuits, 35, 35f quantum efficiency, 35 responsivity vs. bandwidth, 32–34, 32f transimpedance amplifier, 35 Photodetector bandwidth characterization electrical signal waveform, 365 Fabry–Perot filter, 364, 364f frequency-dependent photodiode responsivity, 362–363 RF spectral density, 364–365 source spontaneous-spontaneous beat noise, 362–365, 363f time-domain technique, 366 using short optical pulses, 365–366, 365f Photodetector responsivity frequency domain characterization, 360–362, 360f and linearity measurement, 358–359, 359f Photodiode array optical spectral meter, 150–151, 151f small optical aperture for spectral selection, 150 Photodiode array-based optical spectrometer, 150–151, 151f

Photodiodes, 30 Photon density variation, 23 Photonic bandgap fiber, 454 hollow-core, 454–455, 454f large core area, 454, 454f non-linear PCF, 454–455 Photothermal imaging, 641 Planar lightwave circuits (PLC), 426–427 Plastic optical fiber (POF), 456–457 PN detector, 30 Pn-junction photodiodes, 30–32, 31f Poincaré arc method, 497–499, 498–499f Poincaré sphere, 206–209, 208f Polarization averaging, 147–148, 148f Polarization-dependent loss (PDL), 441–442, 517–520, 517f, 519t Polarization diversity detection, 147, 147f Polarization-maintaining (PM) fibers, 407–408, 452–453, 597f, 598 Polarization mode dispersion (PMD), 72–74, 441–442, 489–517 emulators, 520–524, 523–524f fiber birefringence, 489–492 fixed analyzer method, 499–503, 500f interferometric method, 494–497, 495–496f Jones matrix method, 503–510, 503f Mueller matrix method, 503f, 510–517 parameter, 489–492 Poincaré arc method, 497–499, 498–499f pulse delay method, 492–494, 493f sources and emulators, 520–524, 521–522f Polarization mode dispersion (PMD)-induced eye distortion measurement, 666, 666f Polarization-multiplexed (PM) coherent transceivers, 752–767 Polarization sensitivity compensation, 146–149, 147–148f Power reflectivity, normal incidence, 45–48, 47f Power transmission coefficient, 724 Preamplifier circuits, 35, 35f Precision metrology, 360, 347f, 558–559 Principal state of polarization (PSP), 490, 491f Propagation modes in optical fibers, 49–61 Pseudorandom bit sequences (PRBS) generator, 670, 670f with NRZ modulation, 669–670, 670f Pulse-broadening method, 477–478, 477f

Pulse compression, 257–258 Pulse delay method, 492–494, 493f Pulse distortion method, 477–478, 477f Pulsed LIDAR with direct detection, 253–255, 253–254f Pulse repetition frequency, 615–616 Pump de-polarizer, 407–408 Pump wave, 525–526

Q Q-factor, 187–191 Quadrature amplitude modulated (QAM) optical signals, 591, 591f Quantile-quantile test, 813, 815f Quantum-dot mode-locked lasers (QD-MLLs), 320

R Radio frequency systems, 2–3 Raman amplification with backward pump, 395 with forward pumping, 392 noise characteristics, 388–392, 390–392f in optical fiber, 112–114 Raman gain effect, 113–114, 115f Raman optical time-domain reflectometer (ROTDR), 582–588, 588f Random sampling method, 232, 232f Rayleigh backscattered complex optical field, 582 Rayleigh scattering, 61, 577–582 Receiver sensitivity measurement calculated receiver Q-value, 677, 677f direct detection receivers with and without optical pre-amplifier, 677–678, 677f margin of transmission system, 680–681 optically pre-amplified PIN receiver, 678–679, 678f and OSNR tolerance, 676–698 and power margin, 676–681 receiver sensitivity, optical transmission system, 679, 679f waveform distortion, 676, 678–680 Receiver signal-to-noise ratio (R-SNR) chromatic dispersion, 768–770, 770f of coherent detection, 215–217 fiber non-linearity, 770–776, 772f per-channel signal optical power, 775–776, 775f signal optical power, 772, 773f

Relative intensity noise (RIN) transfer, 22, 298–305, 301f vs. input optical signal power, 302, 303f of pump laser, 393–395 from pump to optical signal, 397–406, 404f Relaxation oscillation process in semiconductor laser, 298–299, 299f Required optical signal-to-noise ratio (R-OSNR), 681–688, 687f optical filter misalignment, 776–777, 776f receiver Q-value, 680f, 682 Responsivity vs. bandwidth, 32–34, 32f Reverse bias, 34–35 RF coherent detection, 215 Ring-based electro-optic modulators, 198–204, 199–201f, 204f Ring cavity laser, 598–599 Ring resonator power transfer function, 187–191, 188–189f

S Sagnac loop mirror, 175–178, 175f, 177–179f Sagnac phase shift, 565–568 Sampled-grating distributed Bragg reflector (SGDBR) lasers, 320 Saturable absorption mirrors (SAM), 597f, 598–599, 599f Scalar optical network analyzer, 287–288, 287f Segmented waveform capture, 699, 699f Self-heterodyne detection, 307–312, 309f Self-homodyne detection, 307–312, 307f Self-phase modulation (SPM), 79, 614–615 non-linear index measurement, 537–540, 539–540f Self-referencing technique, 606–608 Semiconductor laser frequency combs, multi-heterodyne technique, 325–339, 328f, 330f, 332f Semiconductor lasers, 596 carrier confinement, 6 direct and indirect semiconductors, 5–6, 5f external optical feedback, 300 forward-biased pn junctions, 3 laser diodes, 11–23 light-emitting diodes (LEDs), 3–30 linewidth, 297–339 phase noise, 297–339 Pn junction and energy diagram, 4–5, 4f relative intensity noise, 298–305

single-frequency semiconductor lasers, 23–30 spontaneous emission, 7–8 stimulated emission, 7–8 Semiconductor optical amplifier (SOA), 3, 84, 86–97, 280–281 all-optical switch, 96 gain dynamics, 88–97 in Mach–Zehnder configuration, 96, 96f mean-field approximation, 88 optical phase modulation, 95–97 phase modulation, 96 power-induced saturation, 88 saturation photon density, 87 semiconductor optical amplifier, 96–97 steady-state analysis, 87–88 wavelength conversion using FWM, 95, 95f Semiconductor photodetectors, 30 Semiconductors direct, 5–6, 5f indirect, 5–6, 5f Semiconductor saturable absorption mirror (SESAM), 597 Short optical pulse measurement using autocorrelator, 243–249, 244–245f Shot noise, 36 Side-mode suppression ratio (SMR), 18–19 Signal processing techniques, 577–579 Signal-spontaneous emission beat noise, 378–380, 379f Signal-to-noise ratio (SNR), 35–36, 37f, 216–217, 217f Signal waveform distortion, 95 Silicon-based CCDs, 150–151, 571 Silicon-based photodiode, 33–34 Single-frequency laser, 23–24 Single-frequency semiconductor lasers, 1–2, 23–30 DFB laser diode, 24–25, 24–26f external cavity laser diode, 25–28, 26–27f integrated tunable lasers, 28–30, 29f Single-mode fiber, 57 Single-mode optical fiber (SMF), 616 Single-photon detection sensitivity of SPAPD, 255 Single-photon detector Geiger mode, 41–42 operation principle, 41–42, 41f sampling oscilloscope, 240–242, 240–242f Single-sideband modulation, 122, 124f, 125

Small-footprint distributed Bragg reflector (DBR) semiconductor lasers, 280 Small-signal modulation response, 20–22 Snell’s law, 43–44 Soliton self-frequency shift (SSFS), in nonlinear fiber, 615–618 Spatial light modulator (SLM), 110–111 Specialty optical fibers, 451–457 Spectral hole burning, 373–374 Spectral linewidth, 22–23 Spontaneous emission, 7–8, 7f Spontaneous-spontaneous beat noise spectral density, 378, 380–382, 380–381f State of polarization (SOP), 211, 211t, 557, 582 Static gain tilt, 371 Step-index fiber, 50, 50f, 57f Stimulated Brillouin scattering (SBS) coefficient, 75, 525–530, 529–530f, 614–615 Stimulated emission, 7–8, 7f Stimulated Raman gain (SRG), 642–643 Stimulated Raman scattering (SRS) spectroscopy, 76, 531–536, 531f, 533f, 614–615 fiber loss, effect of, 535 and microscopy, 615 power spectral density, 536 square-wave pump, 534–535, 535f Stokes parameters, 206–209, 207f Stokes wave group delay, 632–635, 636–637f Stokes wavelength of Raman scattering, 587–588 Stokes waves, 112–113, 526, 532–533 Stress-applying parts (SAP), 452–453 Stress birefringence, 72–73 Surface-emitting LEDs, 8, 8f System Q estimation, eye diagram parameterization, 663–666, 663f, 665f

T Thermal noise, 35–36 Thin film fiber-optic filter, 424–426, 424–425f Thin-film technology, 425–426 Three-port isolator, 443 Threshold carrier density, 16–23 above threshold regime, 17–18, 17f laser noises, 22–23 oscillation frequency, 21–22 side-mode suppression ratio, 18–19 small-signal modulation response, 20–22

threshold current density, 16–17, 16f turn-on delay, 19–20, 20f Threshold current density, 16–17, 16f Time-correlated single-photon counting (TCSPC), 255 Time-domain characterization, 344 Time jitter eye diagrams with and without noise contributions, 703, 703f generation in transmitters, 703 parameters and definitions, 701–706 system performance degradation, 701–702 tolerance of receivers, 703 transfer of system elements, 703 Truewave fiber (TWF), 484 Tunable optical filters, ring resonators, 191–194, 191f, 193f Turn-on delay, 19–20, 20f Two-photon fluorescence microscopy, λ-switchable femtosecond pulses excitation, 618–621, 618f, 620–621f

U Ultra-high-speed photodiodes, 33–34 Ultra-low-loss fibers, 451, 451t Unbiased variance, 797–798, 799f Unidirectional optical system with coherent detection, distributed fiber sensor, 592, 592f Uniform and double exponential reduced chi-square exceedance, 815–817, 816t

V Variable optical attenuator (VOA), 367–368, 616–617 Vector optical network analyzer, 288–292, 288f, 292f Viterbi-Viterbi 4th power algorithm, 591–592 Voltage-controlled oscillator (VCO), 674, 674f

W Water absorption peaks, 61 Waveform distortion measurements, 698–701 Waveform measurement analog oscilloscope, 227, 227f digital oscilloscope, 227–228, 228f digital sampling oscilloscopes, 230–233 high-speed time-domain signal waveforms, 226

Waveform measurement (Continued) oscilloscope operating principle, 226–230, 226f and synchronization at oscilloscope, 227, 228f Waveguide dispersion, 69 Wavelength conversion, and frequency conjugation, 95 Wavelength-division demultiplexer, 1–2 Wavelength-division multiplexed (WDM) optical systems, 1–2, 2f, 83, 569–570, 661

arrayed waveguide gratings, 426–428, 426f crosstalk channels, 749–752, 750f, 752f demultiplexers, 423–428 multiplexers, 423–428 with spectrally shaped broadband Gaussian noise, 749–752, 750f, 752f thin film fiber-optic filter, 424–426, 424–425f Wavelength selective switch (WSS), 773–774 Wideband optical receivers, 357–366