Digital Analysis of Remotely Sensed Imagery [1 ed.] 0071604650, 9780071604659

"Jay Gao’s book on the analysis of remote sensing imagery is a well-written, easy-to-read, and informative text bes

Table of contents:
Contents......Page 8
Preface......Page 20
Acknowledgments......Page 24
1 Overview......Page 28
1.1 Image Analysis System......Page 29
1.2 Features of Digital Image Analysis......Page 30
1.2.1 Advantages......Page 31
1.2.2 Disadvantages......Page 32
1.3 Components of Image Analysis......Page 33
1.3.4 Accuracy Assessment......Page 35
1.4.1 Pixel......Page 36
1.4.3 Image Reference System......Page 38
1.4.4 Histogram......Page 40
1.4.5 Scatterplot......Page 41
1.5.1 Spatial Resolution......Page 42
1.5.2 Spectral Resolution......Page 44
1.5.3 Radiometric Resolution......Page 45
1.5.4 Temporal Resolution......Page 47
1.6 Organization of the Book......Page 49
2 Overview of Remotely Sensed Data......Page 52
2.1 Meteorological Satellite Data......Page 53
2.2 Oceanographic Satellite Data......Page 55
2.3.1 Landsat Data......Page 57
2.3.2 SPOT Data......Page 62
2.3.3 IRS Data......Page 66
2.3.4 ASTER Data......Page 69
2.3.5 MODIS Data......Page 73
2.3.6 ALOS Data......Page 74
2.4 Very High Spatial Resolution Data......Page 75
2.4.1 IKONOS......Page 77
2.4.2 QuickBird......Page 79
2.4.3 OrbView-3......Page 82
2.4.4 Cartosat......Page 83
2.4.5 WorldView......Page 85
2.4.6 GeoEye-1......Page 86
2.4.7 Other Satellite Programs......Page 87
2.5 Hyperspectral Data......Page 89
2.5.1 Hyperion Satellite Data......Page 90
2.5.2 AVIRIS......Page 91
2.5.3 CASI......Page 92
2.6.2 ERS Data......Page 94
2.6.3 Radarsat Data......Page 96
2.6.4 EnviSat Data......Page 98
2.7 Conversion from Analog Materials......Page 101
2.8 Proper Selection of Data......Page 105
2.8.1 Identification of User Needs......Page 106
2.8.2 Seasonal Factors......Page 107
2.8.4 Mode of Data Delivery......Page 108
References......Page 109
3.1.1 Storage Space Needed......Page 112
3.1.2 Data Storage Forms......Page 113
3.2.1 CDs......Page 115
3.2.3 Memory Sticks......Page 116
3.2.4 Computer Hard Disk......Page 117
3.3 Format of Image Storage......Page 118
3.3.2 GIF......Page 119
3.3.3 JPEG......Page 120
3.3.4 TIFF and GeoTIFF......Page 121
3.4.1 Variable-Length Coding......Page 123
3.4.2 Run-Length Coding......Page 124
3.4.3 LZW Coding......Page 126
3.4.4 Lossy Compression......Page 127
3.4.5 JPEG and JPEG 2000......Page 128
References......Page 131
4 Image Processing Systems......Page 132
4.1.1 Image Analysis Functions......Page 133
4.1.2 Display and Output......Page 134
4.1.4 User Interface......Page 136
4.1.5 Documentation and Evaluation......Page 137
4.2 ERDAS Imagine......Page 138
4.2.2 Data Preparation......Page 139
4.2.4 Image Classification......Page 141
4.2.7 Other Toolboxes......Page 142
4.2.8 Documentation and Evaluation......Page 144
4.3.1 Data Preparation and Display......Page 145
4.3.2 Image Enhancement......Page 146
4.3.3 Image Classification and Feature Extraction......Page 147
4.3.5 Documentation and Evaluation......Page 149
4.4.1 User Interface, Data Input/Output and Preparation......Page 150
4.4.2 Image Display......Page 152
4.4.4 Image Web Server......Page 153
4.4.5 Evaluation......Page 154
4.5.1 Image Input and Display......Page 155
4.5.2 Major Modules......Page 157
4.5.4 Documentation and Evaluation......Page 158
4.6.1 General Overview......Page 160
4.6.3 Documentation and Evaluation......Page 161
4.7 GRASS......Page 163
4.8 Comparison......Page 165
References......Page 168
5 Image Geometric Rectification......Page 170
5.1.1 Errors Associated with the Earth......Page 171
5.1.2 Sensor Distortions......Page 174
5.1.3 Errors Associated with the Platform......Page 176
5.2 Projection and Coordinate Systems......Page 178
5.2.1 UTM Projection......Page 179
5.2.2 NZMG......Page 180
5.3.2 Image Geometric Transformation......Page 183
5.3.3 GCPs in Image Transformation......Page 185
5.3.4 Sources of Ground Control......Page 187
5.4.2 Sensor-Specific Models......Page 188
5.4.4 Projective Transformation......Page 189
5.4.5 Direct Linear Transform Model......Page 190
5.4.7 Rubber-Sheeting Model......Page 191
5.5.1 Transform Equations......Page 192
5.5.2 Minimum Number of GCPs......Page 194
5.5.3 Accuracy of Image Transform......Page 195
5.5.4 Creation of the Output Image......Page 199
5.6 Issues in Image Georeferencing......Page 203
5.6.1 Impact of the Number of GCPs......Page 205
5.6.2 Impact of Image Resolution......Page 207
5.6.3 Impact of GCP Quality......Page 208
5.7 Image Orthorectification......Page 210
5.7.1 Perspective versus Orthographic Projection......Page 211
5.7.2 Methods of Image Orthorectification......Page 212
5.7.3 Procedure of Orthorectification......Page 215
5.8.1 Transformation Equation......Page 219
5.8.2 Comparison with Polynomial Model......Page 222
5.9.1 Image Subsetting......Page 224
5.9.2 Image Mosaicking......Page 226
References......Page 228
6 Image Enhancement......Page 230
6.1.1 Density Slicing......Page 231
6.1.2 Linear Enhancement......Page 232
6.1.4 Look-Up Table......Page 235
6.1.5 Nonlinear Stretching......Page 237
6.1.6 Histogram Equalization......Page 238
6.2 Histogram Matching......Page 243
6.3.2 Kernels and Convolution......Page 246
6.3.3 Image Smoothing......Page 248
6.4 Edge Enhancement and Detection......Page 251
6.4.1 Enhancement through Subtraction......Page 252
6.4.2 Edge-Detection Templates......Page 253
6.5 Multiple-Image Manipulation......Page 254
6.5.1 Band Ratioing......Page 255
6.5.2 Vegetation Index (Components)......Page 256
6.6 Image Transformation......Page 258
6.6.1 PCA......Page 259
6.6.2 Tasseled Cap Transformation......Page 269
6.6.3 HIS Transformation......Page 271
6.7 Image Filtering in Frequency Domain......Page 272
References......Page 274
7 Spectral Image Analysis......Page 276
7.1.2 Image Elements......Page 277
7.1.3 Data versus Information......Page 280
7.1.4 Spectral Class versus Information Class......Page 281
7.1.5 Classification Scheme......Page 282
7.2 Distance in the Spectral Domain......Page 284
7.2.1 Euclidean Spectral Distance......Page 285
7.2.2 Mahalanobis Spectral Distance......Page 286
7.3.1 Moving Cluster Analysis......Page 287
7.3.3 Agglomerative Hierarchical Clustering......Page 291
7.3.4 Histogram-Based Clustering......Page 293
7.4.1 Procedure......Page 294
7.4.2 Selection of Training Samples......Page 297
7.5 Per-Pixel Image Classifiers......Page 298
7.5.1 Parallelepiped Classifier......Page 299
7.5.2 Minimum-Distance-to-Mean Classifier......Page 301
7.5.3 Maximum Likelihood Classifier......Page 303
7.5.4 Which Classifier to Use?......Page 308
7.6 Unsupervised and Supervised Classification......Page 310
7.7 Fuzzy Image Classification......Page 311
7.7.1 Fuzzy Logic......Page 312
7.7.2 Fuzziness in Image Classification......Page 314
7.7.3 Implementation and Accuracy......Page 316
7.8.1 Mathematical Underpinning......Page 318
7.8.2 Factors Affecting Performance......Page 320
7.8.3 Implementation Environments......Page 321
7.8.4 Results Validation......Page 323
7.9 Postclassification Filtering......Page 324
7.10 Presentation of Classification Results......Page 327
References......Page 329
8 Neural Network Image Analysis......Page 332
8.1.2 Artificial Neurons......Page 333
8.2 Neural Network Architecture......Page 334
8.2.1 Feed-Forward Model......Page 336
8.2.2 Backpropagation Networks......Page 338
8.2.3 Self-Organizing Topological Map......Page 340
8.2.4 ART......Page 341
8.2.5 Parallel Consensual Network......Page 343
8.2.7 Structured Neural Network......Page 344
8.2.8 Alternative Models......Page 346
8.3.1 Learning Paradigm......Page 348
8.3.2 Learning Rate......Page 349
8.3.3 Learning Algorithms......Page 350
8.3.4 Transfer Functions......Page 351
8.4 Network Configuration......Page 352
8.4.1 Number of Hidden Layers......Page 353
8.4.2 Number of Hidden Nodes......Page 354
8.5.1 General Procedure......Page 356
8.5.2 Size of Training Samples......Page 357
8.5.4 Ease and Speed of Network Training......Page 358
8.5.5 Issues in Network Training......Page 360
8.6.1 Methods of Data Encoding......Page 361
8.6.2 Incorporation of Ancillary Data......Page 362
8.6.3 Standardization of Input Data......Page 363
8.6.4 Strengths and Weaknesses......Page 364
8.7.1 Case Study......Page 367
8.7.2 A Comparison......Page 369
8.7.3 Critical Evaluation......Page 370
References......Page 374
9.1 Fundamentals of Decision Trees......Page 378
9.2.1 Univariate Decision Trees......Page 380
9.2.2 Multivariate Decision Trees......Page 382
9.2.3 Hybrid Decision Trees......Page 384
9.2.4 Regression Trees......Page 385
9.3.1 Construction Methods......Page 387
9.3.2 Feature Selection......Page 388
9.3.3 An Example......Page 391
9.3.4 Node Splitting Rules......Page 393
9.3.5 Tree Pruning......Page 395
9.3.6 Tree Refinement......Page 397
9.4 Common Trees in Use......Page 398
9.4.1 CART......Page 399
9.4.2 C4.5 and C5.0 Trees......Page 400
9.4.3 M5 Trees......Page 401
9.4.4 QUEST......Page 402
9.5 Decision Tree Classification......Page 403
9.5.1 Accuracy......Page 404
9.5.2 Robustness......Page 406
9.5.3 Strengths......Page 408
9.5.5 Ensemble Classifiers......Page 410
References......Page 413
10 Spatial Image Analysis......Page 416
10.1 Texture and Image Classification......Page 417
10.1.1 Statistical Texture Quantifiers......Page 419
10.1.2 Texture Based on Gray Tone Spatial Matrix......Page 421
10.1.4 Semivariogram-Based Texture Quantification......Page 426
10.1.5 Comparison of Texture Measures......Page 428
10.1.6 Utility of Texture in Image Classification......Page 429
10.2 Contexture and Image Analysis......Page 433
10.3 Image Segmentation......Page 434
10.3.1 Pixel-Based Segmentation......Page 435
10.3.2 Edge-Based Segmentation......Page 436
10.3.3 Region-Based Segmentation......Page 437
10.3.4 Knowledge-Based Image Segmentation......Page 440
10.3.5 Segmentation Based on Multiple Criteria......Page 442
10.3.6 Multiscale Image Segmentation......Page 446
10.4 Fundamentals of Object-Oriented Classification......Page 447
10.4.1 Rationale......Page 448
10.4.2 Process of Object-Oriented Analysis......Page 450
10.4.3 Implementation Environments......Page 451
10.5.1 A Case Study......Page 453
10.5.2 Performance Relative to Per-Pixel Classifiers......Page 456
10.5.3 Strengths......Page 459
10.5.4 Limitations......Page 461
10.5.5 Affecting Factors......Page 463
References......Page 464
11 Intelligent Image Analysis......Page 470
11.1.1 General Features......Page 471
11.1.2 Knowledge Base......Page 472
11.1.3 Expert Systems and Image Analysis......Page 474
11.2.1 Type of Knowledge......Page 476
11.2.2 Spectral Knowledge......Page 478
11.2.3 Spatial Knowledge......Page 479
11.2.4 External Knowledge......Page 480
11.2.5 Quality of Knowledge......Page 482
11.2.6 Knowledge Integration......Page 484
11.3.1 Acquisition via Domain Experts......Page 485
11.3.2 Acquisition through Machine Learning......Page 486
11.3.3 Acquisition through Remote Sensing and GPS......Page 487
11.4.1 Semantic Network......Page 489
11.4.2 Rule-Based Representation......Page 490
11.4.3 Frames......Page 492
11.4.4 Blackboards......Page 494
11.5.1 Mathematical Underpinning......Page 495
11.5.2 Evidential Reasoning and Image Classification......Page 498
11.5.3 Utility......Page 499
11.6 Knowledge-Based Image Analysis......Page 500
11.6.1 Knowledge-Based Image Classification......Page 501
11.6.2 Postclassification Filtering......Page 505
11.6.3 A Case Study......Page 506
11.6.4 Postclassification Spatial Reasoning......Page 512
11.7 Critical Evaluation......Page 514
11.7.1 Relative Performance......Page 515
11.7.2 Effectiveness of Spatial Knowledge......Page 516
11.7.3 Strengths......Page 517
11.7.4 Limitations......Page 518
References......Page 520
12 Classification Accuracy Assessment......Page 524
12.1 Precision versus Accuracy......Page 525
12.2.1 Image Misclassification......Page 527
12.2.2 Boundary Inaccuracy......Page 528
12.2.3 Inaccuracy of Reference Data......Page 529
12.2.4 Characteristics of Classification Inaccuracy......Page 530
12.3 Procedure of Accuracy Assessment......Page 531
12.3.1 Scale and Procedure of Assessment......Page 532
12.3.2 Selection of Evaluation Pixels......Page 533
12.3.3 Number of Evaluation Pixels......Page 534
12.3.4 Collection of Reference Data......Page 536
12.4.1 Aspatial Accuracy......Page 538
12.4.2 Spatial Accuracy......Page 539
12.4.3 Interpretation of Error Matrix......Page 541
12.4.4 Quantitative Assessment of Error Matrix......Page 545
12.4.5 An Example of Accuracy Assessment......Page 547
12.4.6 Comparison of Error Matrices......Page 548
References......Page 551
13 Multitemporal Image Analysis......Page 552
13.1.1 Conceptual Illustration......Page 554
13.1.2 Requirements of Change Analysis......Page 555
13.1.3 Procedure of Change Analysis......Page 556
13.2 Qualitative Change Analysis......Page 557
13.2.1 Visual Overlay......Page 558
13.2.2 Image Compositing......Page 559
13.3 Quantitative Change Analysis......Page 560
13.3.1 Spectral Differencing......Page 561
13.3.2 Spectral Ratioing......Page 562
13.3.3 NDVI-Based Change Analysis......Page 563
13.4 Postclassification Change Analysis......Page 564
13.4.1 Aspatial Change Detection......Page 565
13.4.2 Spatial Change Analysis......Page 567
13.4.3 Raster Implementation......Page 569
13.4.4 Vector Implementation......Page 570
13.4.5 Raster or Vector?......Page 571
13.5.2 PCA......Page 574
13.5.3 Change Vector Analysis......Page 575
13.5.4 Correlation-Based Change Analysis......Page 578
13.5.5 A Comparison......Page 579
13.5.6 Change Analysis from Monotemporal Imagery......Page 580
13.6 Accuracy of Change Analysis......Page 581
13.6.1 Factors Affecting Detection Accuracy......Page 582
13.6.2 Evaluation of Detection Accuracy......Page 587
13.7 Visualization of Detected Change......Page 591
References......Page 592
14 Integrated Image Analysis......Page 594
14.1.1 GIS Database......Page 595
14.1.2 Vector Mode of Representation......Page 596
14.1.3 Raster Mode of Representation......Page 599
14.1.4 Attribute Data......Page 601
14.1.5 Topological Data......Page 602
14.1.6 GIS Functions......Page 604
14.1.7 Database Query......Page 605
14.1.8 GIS Overlay Functions......Page 608
14.1.9 Errors in Overlay Analysis......Page 613
14.1.10 Relevance of GIS to Image Analysis......Page 615
14.2.1 Principles of GPS......Page 616
14.2.2 GPS Accuracy......Page 618
14.2.3 Improvements in GPS Accuracy......Page 620
14.2.4 Relevance of GPS to Image Analysis......Page 623
14.3 Necessity of Integration......Page 625
14.4.1 Linear Integration......Page 627
14.4.2 Interactive Integration......Page 628
14.4.3 Hierarchical Integration......Page 629
14.4.4 Complex Model of Integration......Page 632
14.4.5 Levels of Integration......Page 633
14.5.1 Format Incompatibility......Page 634
14.5.2 Accuracy Incompatibility......Page 635
14.6.1 Image Analysis and GIS......Page 636
14.6.2 Image Analysis and GPS......Page 637
14.7 Applications of Integrated Approach......Page 638
14.7.1 Resources Management and Environmental Monitoring......Page 639
14.7.2 Emergency Response......Page 640
14.7.4 Prospect of Integrated Analysis......Page 641
References......Page 643
A......Page 646
C......Page 648
D......Page 650
E......Page 652
F......Page 653
G......Page 654
H......Page 655
I......Page 656
K......Page 659
M......Page 660
N......Page 662
P......Page 663
R......Page 665
S......Page 667
T......Page 670
W......Page 671
Z......Page 672

Digital Analysis of Remotely Sensed Imagery

About the Author

Jay Gao, Ph.D., attended Wuhan Technical University of Surveying and Mapping in China. He received his Bachelor of Engineering degree in Photogrammetry and Remote Sensing in 1984. He continued his education at the University of Toronto, and obtained his Master of Science degree from the Geography Department in 1988, majoring in remote sensing. Four years later, he earned his Ph.D. from the University of Georgia in the field of remote sensing and geographic information systems. He then joined the Geography Department at the University of Auckland in New Zealand as a lecturer. His teaching interests include remote sensing, digital image processing, geographic information systems, and spatial analysis. Over his academic career Dr. Gao has done extensive research in digital image analysis and its applications to resources management and hazards monitoring. His numerous papers have appeared in a wide range of journals and conference proceedings.

Digital Analysis of Remotely Sensed Imagery

Jay Gao, Ph.D.
School of Geography, Geology and Environmental Science
The University of Auckland
Auckland, New Zealand

New York Chicago San Francisco Lisbon London Madrid Mexico City Milan New Delhi San Juan Seoul Singapore Sydney Toronto

Copyright © 2009 by The McGraw-Hill Companies, Inc. All rights reserved. Except as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written permission of the publisher.

ISBN: 978-0-07-160466-6, MHID: 0-07-160466-9. The material in this eBook also appears in the print version of this title: ISBN: 978-0-07-160465-9, MHID: 0-07-160465-0.

All trademarks are trademarks of their respective owners. Rather than put a trademark symbol after every occurrence of a trademarked name, we use names in an editorial fashion only, and to the benefit of the trademark owner, with no intention of infringement of the trademark. Where such designations appear in this book, they have been printed with initial caps.

McGraw-Hill eBooks are available at special quantity discounts to use as premiums and sales promotions, or for use in corporate training programs. To contact a representative please visit the Contact Us page at www.mhprofessional.com.

Information contained in this work has been obtained by The McGraw-Hill Companies, Inc. (“McGraw-Hill”) from sources believed to be reliable. However, neither McGraw-Hill nor its authors guarantee the accuracy or completeness of any information published herein, and neither McGraw-Hill nor its authors shall be responsible for any errors, omissions, or damages arising out of use of this information. This work is published with the understanding that McGraw-Hill and its authors are supplying information but are not attempting to render engineering or other professional services. If such services are required, the assistance of an appropriate professional should be sought.

TERMS OF USE

This is a copyrighted work and The McGraw-Hill Companies, Inc. (“McGraw-Hill”) and its licensors reserve all rights in and to the work. Use of this work is subject to these terms. Except as permitted under the Copyright Act of 1976 and the right to store and retrieve one copy of the work, you may not decompile, disassemble, reverse engineer, reproduce, modify, create derivative works based upon, transmit, distribute, disseminate, sell, publish or sublicense the work or any part of it without McGraw-Hill’s prior consent. You may use the work for your own noncommercial and personal use; any other use of the work is strictly prohibited. Your right to use the work may be terminated if you fail to comply with these terms.

THE WORK IS PROVIDED “AS IS.” McGRAW-HILL AND ITS LICENSORS MAKE NO GUARANTEES OR WARRANTIES AS TO THE ACCURACY, ADEQUACY OR COMPLETENESS OF OR RESULTS TO BE OBTAINED FROM USING THE WORK, INCLUDING ANY INFORMATION THAT CAN BE ACCESSED THROUGH THE WORK VIA HYPERLINK OR OTHERWISE, AND EXPRESSLY DISCLAIM ANY WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. McGraw-Hill and its licensors do not warrant or guarantee that the functions contained in the work will meet your requirements or that its operation will be uninterrupted or error free. Neither McGraw-Hill nor its licensors shall be liable to you or anyone else for any inaccuracy, error or omission, regardless of cause, in the work or for any damages resulting therefrom. McGraw-Hill has no responsibility for the content of any information accessed through the work. Under no circumstances shall McGraw-Hill and/or its licensors be liable for any indirect, incidental, special, punitive, consequential or similar damages that result from the use of or inability to use the work, even if any of them has been advised of the possibility of such damages. This limitation of liability shall apply to any claim or cause whatsoever whether such claim or cause arises in contract, tort or otherwise.

This book is dedicated to my father, who did not survive long enough to see its publication and who will always be missed!


Contents

Preface  xix
Acknowledgments  xxiii
1  Overview  1
2  Overview of Remotely Sensed Data  25
3  Storage of Remotely Sensed Data  85
4  Image Processing Systems  105
5  Image Geometric Rectification  143
6  Image Enhancement  203
7  Spectral Image Analysis  249
8  Neural Network Image Analysis  305
9  Decision Tree Image Analysis  351
10  Spatial Image Analysis  389
11  Intelligent Image Analysis  443
12  Classification Accuracy Assessment  497
13  Multitemporal Image Analysis  525
14  Integrated Image Analysis  567
Index  619

Preface

Digital image analysis is a field that crosses the boundaries of several disciplines. Digital analysis of remotely sensed data for managing the environment of the Earth and its natural resources, however, differs from medical image processing and image processing in electrical engineering in three ways. First, the object of study is different. Satellite images are snapshots of the Earth’s surface, which lies in a state of constant change, and these changes need to be monitored from multitemporal images to identify longitudinal trends. Second, the data used are captured over a much longer wavelength range, extending into the thermal infrared and microwave spectrum, and are usually recorded in the multispectral domain. Their enhancement and classification require an understanding of the interaction between solar radiation and the Earth’s surface. Finally, the objective of image analysis is different. A very significant component of digital analysis of remotely sensed data is to convert the data into useful information on land cover/land use over the Earth’s surface. The derived information is usually presented in graphic (map) format at a certain scale. In order to make the map conform to certain cartographic standards, image geometry and the accuracy issue must be addressed and featured prominently in digital image analysis.

This book aims to provide exhaustive coverage of the entire process of analyzing remotely sensed data for the purpose of producing an accurate and faithful representation of the Earth’s resources in thematic map format. Recent years have witnessed phenomenal development in sensor technology and the emergence of a wide variety of remote sensing satellites. It is now possible to acquire satellite images with a spatial resolution as fine as a submeter, comparable to that of airborne photographs. The wide and easy availability of remote sensing data from various sensors in digital format creates an ideal opportunity to process and analyze them automatically. Satellite data are routinely analyzed using various image processing systems to fulfill different application needs. The functionality and sophistication of image analysis have evolved considerably over the last two decades, thanks to the incessant advances in computing technology.


Parallel to the advances in sensing technology is progress in pertinent geocomputational fields such as positioning systems and geographic information systems. The geospatial data from these sources not only enrich the sources of data for digital image analysis, but also broaden the avenues to which digitally processed results are exported. Increasingly, the final products of digital image analysis are not an end in themselves, but part of a much larger database. The prerequisite for integrating these data from diverse sources is compatibility in accuracy. This demands that results derived from digital analysis of remotely sensed data be assessed for their thematic accuracy.

Reliable and efficient processing of these data faces challenges that can no longer be met by the traditional, well-established per-pixel classifiers, owing to the increased spatial heterogeneity observable in the imagery. In response to these challenges, efforts have gone into developing new image processing techniques, making use of additional image elements, and incorporating non-remote-sensing data into image analysis in an attempt to improve the accuracy and reliability of the results obtained. In the meantime, image analysis has evolved from one-time mapping to long-term dynamic monitoring via analysis of multitemporal satellite data. A new book is required to introduce the recent, innovative image classification methods designed to overcome the limitations of per-pixel classifiers, and to capture these new trends in image analysis.

Contained in this book is a comprehensive and systematic examination of topics in digital image analysis, ranging from data input to data output and result presentation, organized under a few themes. The first is how to generate geometrically reliable imagery (Chap. 5). The second is how to produce thematically reliable maps (Chaps. 6 to 11). The third centers on the provision of accuracy indicators for the results produced (Chap. 12). The last theme is the integration of digital image analysis with pertinent geospatial techniques such as the global positioning system (GPS) and geographic information system (GIS) (Chaps. 13 and 14).

This book differs from existing books on a similar topic in three areas. First, unlike those books written by engineers for engineering students, this book does not lean heavily toward image processing algorithms. Wherever necessary, the mathematical formulas behind certain processing are provided to ensure a solid theoretical understanding. Nevertheless, the reader is left with the discretion to decide on the level of comprehension. Those who are mathematically challenged may wish to skip the mathematical equations and concentrate instead on the examples provided and on the interpretation of the processed output. In this way the fundamental concepts in image analysis are not lost. Second, the book features prominently the geometric component of digital image analysis, a topic that is treated rather superficially or in a fragmented manner by authors with little background in geography. Finally, this book captures the most recent developments in image analysis comprehensively. Tremendous progress has been made in analyzing remotely sensed data more accurately and with greater ease, and this book is a timely reflection of these recent changes and trends in the field.

This book is best used as a textbook for a course in digital image analysis. The targeted readership is upper-level undergraduate students and lower-level postgraduate students. Ideally, they should already have had a fundamental remote sensing course in their curriculum; otherwise, they will need to spend more time than other students to familiarize themselves with the content of the first few chapters. No assumption is made about their mathematical background, even though an understanding of matrix operations is a great advantage in comprehending certain analyses. This book is also a useful reference for practitioners, resources managers, and consultants engaged in the analysis of geospatial data, especially those who need to derive information about the Earth from airborne or spaceborne remote sensing materials.

Jay Gao, Ph.D.


Acknowledgments

It is very hard to say when the writing of this book started. I initially became interested in digital image analysis when I was doing my master’s thesis research at the University of Waterloo and later at the Canadian Centre for Remote Sensing. At the beginning of my teaching career, Peng Gong generously shared his lecturing material with me; his material on the topic of image segmentation has been revised and incorporated in this book. After I started my teaching career in the School of Geography at the University of Auckland, I further acquired new skills in image analysis and undertook several projects in digital image analysis.

This book could not have been published without the assistance of Somya Rustagi. She put up with my impatience and sluggish response to her queries, and chased me for follow-up matters that I had forgotten. Taisuke Soda at McGraw-Hill deserves special mention for his persistence in seeing this project through. Stephen Smith offered insightful advice on how to format the manuscript.

Many publishers and organizations have generously granted me the use of their copyrighted materials in this book. It is not possible for me to express my gratitude to all of them, as there are too many. However, I would like to mention ESA for Figs. 2.10 and 2.12; DigitalGlobe for Figs. 2.9 and 13.1; Clark Labs at Clark University for Fig. 4.1; ITT Visual for Fig. 4.3; Definiens for Fig. 4.6; ERDAS for Figs. 4.2, 4.4, and 11.10; Visual Learning Systems for Fig. 10.10; ASPRS for Figs. 10.4B and 11.4; Elsevier for Figs. 9.7 and 10.8 and Table 9.3; Taylor and Francis for Figs. 9.9 and 13.9; Wiley for Figs. 5.5, 6.26, and 7.7; Trimble for Fig. 14.19; and Springer-Verlag for Fig. 6.24. Tim Noland drew Fig. 5.20. Igor Dracki offered valuable tips on how to use CorelDRAW competently and on the preparation of graphics.

Last but not least, I would like to thank my parents for their support over the years. Without their generosity I could not have gone to university and received such a good education. I am especially indebted to my father, who was always proud of my achievements and who surely would have been very pleased to see the publication of this book.



Digital Analysis of Remotely Sensed Imagery


CHAPTER 1

Overview

Digital processing of satellite imagery refers to computer-based operations that aim to restore, enhance, and classify remotely sensed data. It may involve a single band or multiple bands in the input, depending on the nature and purpose of the processing. The output image is a single band in most cases. Digital analysis of remotely sensed data has a relatively short history. It did not come into existence until the early 1970s with the launch of the first Earth Resources Technology Satellite (subsequently renamed Landsat), when remote sensing images in the digital format became available for the first time in history. The availability of a huge quantity of data necessitated their timely and efficient processing. In response to this demand, digital analysis of remote sensing data experienced exponential development. In its early years, digital image analysis was very cumbersome to undertake because the computer had limited functions and capability. Undertaking digital image analysis was made more difficult by the minicomputer running the user-unfriendly UNIX operating system. Over the years digital image analysis has become increasingly easy to perform, thanks to advances in computing technology and in image analysis systems. Now more processing functions can be achieved at a much faster pace than ever before. This chapter introduces the main characteristics and components of a digital image processing system. The nature of digital analysis of remote sensing images is summarized comparatively with that of the familiar visual image interpretation. Following this comparison is a comprehensive review of the entire process of digital image analysis from data input to results presentation. Presented next in this chapter is an introduction to the preliminary knowledge of digital image analysis that serves to lay a solid foundation for discussion in the subsequent chapters. Featured prominently in this section are pixels, the building blocks of satellite imagery. Lastly, this chapter introduces the important properties of satellite data, such as their spatial and spectral resolutions, in detail. This chapter ends with an overview of the content of the remaining chapters in this book.


1.1 Image Analysis System

In order to function smoothly, a digital image analysis system must encompass a few essential components in hardware, software, the operating system, and peripheral devices (Fig. 1.1). Featured prominently among various hardware components is the computer, which, among other things, is made up of a central processing unit, a monitor, and a keyboard. As the heart of the system, the central processing unit determines the speed of computation. The keyboard/mouse is the device through which the user interacts with the machine. The monitor fulfils the function of displaying the image processing software and visualizing image data, as well as any intermediate and final results in tabular and graphic forms. The operating system controls the operation of a computer's activities and manages the entry, flow, and display of remote sensing data within the computer. These components are common to all computers. Unique to image analysis is the software that executes computer commands to achieve desired image processing functions. These computer programs are written in a language comprehensible to the computer. During image analysis these commands are issued by the image analyst by clicking on certain buttons or icons of the image processing system. So far, a number of image analysis systems have been developed for processing a wide range of satellite data and for their integrated analysis with non-remote sensing data. An image analysis system is incomplete without peripheral devices that input data into the system and output the results from

[Figure 1.1 depicts the three stages of such a system: data input/import, data analysis and display, and results output/export.]

FIGURE 1.1 Configuration of a typical digital image processing system.

the system. Common input devices include scanners that are able to convert analog images into a digital format quickly and drives that allow data stored in external media to be read into the computer. Standard output devices include printers and plotters. Printers can print results, usually small in size, in black and white or color. A plotter is able to print a large map of classified results. Other peripheral devices include a few ports and drives that can read data stored in special media. Disk drives and special drives for CD read-only memory (CD-ROM) and memory sticks are so universal to all desktop and laptop computers that they can hardly be regarded as peripheral devices any more.

1.2 Features of Digital Image Analysis

Analysis of remotely sensed data in the digital environment differs drastically from the familiar visual interpretation of satellite images. The main features of digital image analysis are summarized in Table 1.1, comparatively with visual interpretation. The most critical difference lies in the use of cues in the input data. In the digital environment only the value of pixels in the input data is taken advantage of. During image classification these pixels are treated mostly in isolation without regard to their spatial relationship. Another distinctive feature of digital analysis is its abstractness. Both the raw data and the final processed results are invisible to the analyst unless they are visualized on the computer monitor. The analyst's prior knowledge or experience plays no role in the decision making behind a classification. The analyst is only able to exert an influence prior to the decision-making process, such as during selection of input fed into the computer. In this way

Features                    | Digital                                            | Visual
Evidence of decision making | Pixel value in multiple bands treated in isolation | All seven elements in one image treated in a spatial context
Process of decision making  | Fast, abstract, invisible                          | Slow, concrete, visible
Role of prior knowledge     | Limited                                            | Critical
Nature of result            | Quantitative and objective                         | Qualitative and subjective
Facilities required         | Complex and expensive                              | Simple and inexpensive

TABLE 1.1 Main Features of Digital Image Analysis in Comparison with Visual Interpretation



the results are much more objective than visual ones that are strongly influenced by the interpreter's knowledge and expertise in the subject area concerned, as well as personal bias. The results, nevertheless, are quantitative and can be exported to other systems for further analysis without much additional work. This ease of portability is achieved at the expense of purchasing and maintaining expensive and sophisticated computer hardware and software.

1.2.1 Advantages

Digital image processing has a number of advantages over the conventional visual interpretation of remote sensing imagery, such as increased efficiency and reliability, and a marked decrease in costs.

Efficiency

Owing to the improvement in computing capability, a huge amount of data can be processed quickly and efficiently. A task that used to take days or even months for a human interpreter to complete can be finished by the machine in a matter of seconds. This process is sped up if the processing is routinely set up. Computer-based processing is even more advantageous than visual interpretation for multiple bands of satellite data. Human interpreters can handle at most three bands simultaneously by examining their color composite. However, there is no limit as to the number of bands that can be processed in image classification. Moreover, the input of many spectral bands will not noticeably slow the processing.

Flexibility

Digital analysis of images offers high flexibility. The same processing can be carried out repeatedly using different parameters to explore the effect of alternative settings. If a classification is not satisfactory, it can be repeated with different algorithms or with updated inputs in a new trial. This process can continue until the results are satisfactory. Such flexibility makes it possible to produce results not only from satellite data that are recorded at one time only, but also from data that are obtained at multiple times or even from different sensors. In this way the advantages of different remote sensing data can be fully exploited. Even non-remote sensing data can be incorporated into the processing to enhance the accuracy of the obtained results.

Reliability

Unlike the human interpreter, the computer's performance in an image analysis is not affected by the working conditions and the duration of analysis. In contrast, the results obtained by a human interpreter are likely to deteriorate owing to mental fatigue after the user has been working for a long time, as the interpretation process is highly demanding mentally. The results are also likely to be different, sometimes even drastically, if obtained by different interpreters,

because of their subjectivity and personal bias. By comparison, the computer can produce the same results with the same input no matter who is performing the analysis. The only exception is the selection of training samples, which could be subjective. However, the extent of such human intervention is considerably reduced in the digital environment.

Portability

As digital data are widely used in the geoinformatics community, the results obtained from digital analysis of remote sensing data are seldom an end product in themselves. Instead, they are likely to become a component in a vast database. Digital analysis means that all processed results are available in the digital format. Digital results can be shared readily with other users who are working on a different, but related, project. These results are fully compatible with other existent data that have been acquired and stored in the digital format already. This has profound repercussions for certain analyses that were not possible to undertake before. For instance, the results of digital analysis can be easily exported to a geographic information system (GIS) for further analysis, such as spatial modeling, land cover change detection, and studying the relationship between land cover change and socioeconomic factors (e.g., population growth).

1.2.2 Disadvantages

Digital image analysis has four major disadvantages, the critical ones being the initial high costs in setting up the system and limited classification accuracy.

High Setup Costs

The most expensive component of digital image analysis is the initial cost associated with setting up the analysis system, such as the purchase of hardware and software. These days the power of computers has advanced drastically, while their prices have tumbled. Desktop computers can now perform jobs that used to require a minicomputer. The same machine can be shared with others for many other purposes in addition to image analysis, such as GIS spatial analysis and modeling. Nevertheless, they depreciate very fast and have a short life cycle. Hardware has to be replaced periodically. Similar to hardware, the initial cost of purchasing software is also high. Unlike hardware, software is never meant to be a one-off cost. Software licenses usually need to be renewed annually. Additional costs may include subscription to an ongoing user support service so that assistance is available whenever the system runs into problems. The third cost is related to the purchase of data. Compared with printed materials, satellite data are much more expensive. Although the price of medium-resolution data has dropped considerably, it is



still expensive to buy the most recent, very high spatial resolution satellite data. High costs are also related to maintenance personnel. A system administrator is needed to update the image processing system periodically and to back up system data and temporary results regularly.

Limited Accuracy

The second major limitation of digital image analysis is the lower-than-expected classification accuracy. Classification accuracy varies with the detail level and the number of ground covers mapped. In general, it hovers around 60 to 80 percent. A higher accuracy is not so easy to achieve because the computer is able to take advantage of only a small portion of the information inherent in the input image, while a large portion of it is disregarded. Understandably, the accuracy is rather limited for ground covers whose spectral response bears a high resemblance to that of other covers.

Complexity

A digital image analysis system is complex in that the user requires special training before being able to use it with confidence. Skillful operation of the system requires many hours of training and practice. As the system becomes increasingly sophisticated, it becomes more difficult to navigate to a specific function or to make full use of the system's capability.

Limited Choices

All image processing systems are tailored for a certain set of routine applications. In practice it may be necessary to undertake special analyses different from what these prescribed functions can offer. Solutions are difficult to find among the functions available in a given package. Although this situation has improved with the availability of a special scripting language in some image analysis systems, it is still not easy to tackle this scripting job if the user does not have a background in computer programming.

1.3 Components of Image Analysis

The process of image analysis starts with the preparation of remotely sensed data in a form readable by a given system, followed by feeding them into the computer to generate the final results in either graphic or numeric form (Fig. 1.2). Additional preliminary steps, such as scanning, may also be required, depending on the format in which the data are stored. There is no common agreement as to what kind of postclassification processing should be contained in the process. In this book, three postclassification processing steps are considered: accuracy assessment, change detection, and integration with non-remote sensing data. The logical sequence of these processing steps is chronologically presented

[Figure 1.2 traces the processing chain from input (satellite data, scanned analog photos/images, and ancillary data such as topographic maps and GPS data) through preprocessing, georeferencing, image enhancement and transformation, feature selection, and image classification (spectral per-pixel, subpixel, fuzzy, spatial, object-oriented, and knowledge-based), followed by accuracy assessment, postclassification processing, change detection, modeling, output of maps, reports, and data, and integration with GIS and GPS.]

FIGURE 1.2 Flowchart of a comprehensive image analysis procedure. Some of the steps in the chart could be absent in certain applications while other steps can be carried out in a sequence different from that shown in the chart. The major blocks in the chart will be covered in separate chapters in this book.

in a flowchart in Fig. 1.2. Not all topics shown in the diagram are equally complex and significant. Some of them need a paragraph to explain while others require a chapter to cover adequately. The important topics are identified below.




1.3.1 Data Preparation

Core to data preparation is image preprocessing. Its objective is to correct geometrically distorted and radiometrically degraded images to create a more faithful representation of the original scene. Preprocessing tasks include image restoration, geometric rectification, radiometric correction, and noise removal or suppression. Some of these tasks may have been performed at a ground-receiving station when the data are initially received from the satellite. More preprocessing specific to the needs of a particular project or a particular geographic area may still be performed by the image analyst.

1.3.2 Image Enhancement

Image enhancement refers to computer operations aimed specifically at increasing the spectral visibility of ground features of interest through manipulation of their pixel values in the original image. On the enhanced image it is very easy to perceive these objects thanks to their enhanced distinctiveness. Image enhancement may serve as a preparatory step for subsequent machine analysis such as for the selection of training samples in supervised classification, or be an end in itself (e.g., for visual interpretation). The quality or appearance of an image can be enhanced via many processing techniques, the most common ones being contrast enhancement, image transformation, and multiple band manipulation.

1.3.3 Image Classification

Image classification is a process during which pixels in an image are categorized into several classes of ground cover based on the application of statistical decision rules in the multispectral domain or logical decision rules in the spatial domain. Image classification in the spectral domain is known as pattern recognition in which the decision rules are based solely on the spectral values of the remote sensing data. In spatial pattern recognition, the decision rules are based on the geometric shape, size, texture, and patterns of pixels or objects derived from them over a prescribed neighborhood. This book is devoted heavily to image classification in the multispectral domain. Use of additional image elements in performing image classification in the spatial domain is covered extensively, as well, together with image classification based on machine learning.

1.3.4 Accuracy Assessment

The products of image classification are land cover maps. Their accuracy needs to be assessed so that the ultimate user is made aware of the potential problems associated with their use. Accuracy assessment is a quality assurance step in which classification results are compared with what is there on the ground at the time of imaging or something that

can be regarded as its acceptable substitute, commonly known as the ground reference. Evaluation of the accuracy of a classification may be undertaken for each of the categories identified and its confusion with other covers, as well as for all the categories. The outcome of accuracy assessment is usually presented in a table that reveals accuracy for each cover category and for all categories as a whole.

1.3.5 Change Detection

Change detection takes remote sensing to the next stage, during which results from respective analysis of remotely sensed data are compared with each other, either spatially or nonspatially. This is commonly known as multitemporal remote sensing that attempts to identify what has changed on the ground. Change may be detected from multitemporal remotely sensed data using different methods, all of which are covered in this book. A number of issues relating to change detection (e.g., operating environment, accuracy, and ease of operation) and their impact on the accuracy of detected results are examined in depth, as well.

1.3.6 Integrated Analysis

In addition to satellite imagery data, non-remote sensing data have been increasingly incorporated into digital image analysis to overcome one of the limitations identified above, namely, to make use of more image elements in the decision making so that classification results can be more accurate. Many kinds of ancillary data, such as topographic, cadastral, and environmental, have found use in image analysis. Different methods have been developed to integrate them with remotely sensed data for a wide range of purposes, such as development of more accurate databases and more efficient means of data acquisition. This book explores the various methods by which different sources of data may be integrated to fulfill specific image analysis objectives.

1.4 Preliminary Knowledge

1.4.1 Pixel

Formed from the combination of picture and element, pixel is the fundamental building block of a digital image. An image is composed of a regularly spaced array of pixels (Fig. 1.3). All pixels have a common shape of square, even though triangle and hexagon are also possible. When a pixel is stored in a computer, it is represented as an integer. In this sense, a pixel does not have any size. Nevertheless, a pixel still has a physical size. Also known as cell size, it refers to the ground area from which the reflected or emitted electromagnetic radiation is integrated and recorded as a single value in the image




FIGURE 1.3 An image is composed of a two-dimensional array of pixel values. Down: row. Across: column.

during sampling of the Earth's surface. Thus, pixel size is synonymous with the ground sampling interval. Theoretically, the pixel size of a satellite image cannot be made finer once the image is scanned, though it is possible to reduce this size to a smaller dimension (e.g., from 10 to 5 m) through resampling during image processing. However, the detail of the image cannot be improved by simply splitting a pixel into fractions. Conversely, through resampling the pixel size of an image can be enlarged by amalgamating spatially adjoining pixels. As more adjoining pixels are merged, the image increasingly loses its detail level. Pixels fall into two broad categories, pure pixels and mixed pixels, in terms of the composition of their corresponding covers on the ground. Pure pixels are defined as those that are scanned over a homogeneous ground cover. These pixels have a pure identity relating to a unique type of ground feature. By comparison, mixed pixels contain the electromagnetic radiation originating from at least two types of cover features on the ground. The formation and quantity of mixed pixels in an image are affected by the following three factors. (1) Spatial resolution or pixel size: Given the same scene on the ground, an image of a coarser spatial resolution contains more mixed pixels. (2) Homogeneity of the scene: A highly heterogeneous scene is conducive to the formation of more mixed pixels (these pixels are usually located at the interface of differing ground covers). (3) Shape and orientation of the different cover parcels in relation to the direction of scanning: Highly irregularly shaped cover parcels tend to have more mixed pixels along their borders.

Since mixed pixels do not have a singular identity, it is impossible to correctly classify them into any one component cover at the pixel level. Their precise labeling has to take place at the subpixel level with a probability attached to each component feature. No matter whether a pixel is pure or mixed, it always has two crucial properties, its value or digital number (DN), and its location in a two-dimensional space.
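The effect of amalgamating adjoining pixels described above can be illustrated with a minimal numpy sketch; this is not taken from the book, and the function name, the block-averaging rule, and the sample DN values are ours:

import numpy as np

def aggregate_pixels(band, factor):
    # Coarsen a single-band image by averaging square blocks of pixels.
    # Merging factor x factor adjoining pixels into one enlarges the cell
    # size (e.g., from 10 m to 20 m when factor = 2) and discards detail.
    rows, cols = band.shape
    rows -= rows % factor          # trim edges that do not fill a block
    cols -= cols % factor
    blocks = band[:rows, :cols].reshape(rows // factor, factor,
                                        cols // factor, factor)
    return blocks.mean(axis=(1, 3))

# Example: a 4 x 4 array of DNs aggregated into a 2 x 2 array
dn = np.array([[245, 269, 305, 305],
               [233, 240, 268, 258],
               [247, 251, 230, 310],
               [274, 260, 234, 259]], dtype=float)
print(aggregate_pixels(dn, 2))

Averaging is only one possible aggregation rule; nearest-neighbor or majority resampling could equally be used, but none of them restores the detail lost in the merged cells.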

1.4.2 Digital Number (DN)

The DN of a pixel in a spectral band represents the amount of radiation received at the sensor, which is determined primarily by the capability of the ground object in reflecting and emitting energy. The amount of energy reaching the sensor is a function of the wavelength of the radiation. Thus, pixel value varies from band to band. The actual DN value of a pixel in an image is affected by many other external factors, such as atmospheric radiation, the sensor's sensitivity, and more importantly, the ground sampling interval of the sensing system. In spite of these external interferences, theoretically, the same target should have the same or similar DN value in the same band, and different targets should have dissimilar DN values in the same band. However, this relationship is not always maintained because of the similar appearance of some ground objects. No matter how many bands the received energy is split into spectrally, it is always recorded as positive integers (Fig. 1.3). The theoretical range of pixel values in an image is determined by the number of bits used to record the energy, or the quantization level. A commonly adopted quantization level is 8 bits, so the number of potential DN values amounts to 2^8 or 256, ranging from 0 to 255. A DN value of 0 implies that no radiative energy is received from the target on the ground. A value of 255 indicates a huge amount of radiation has reached the sensor in space. Because of the atmospheric impact or limitations in the sensing system, not all of the potential levels of DN are fully taken advantage of during data recording, a situation that can be remedied through image enhancement. Recent advances in sensing technology have made it possible to reach a quantization level as high as 11 bits. As illustrated in Fig. 1.3, at a quantization level of 9 bits, pixel values vary from 0 to 2^9 − 1 (511). In the binary system of encoding the amount of received energy, pixel values are not allowed to have any decimal points. They are recorded as 8-bit, unsigned integers. Floating point pixel values are not commonly associated with raw satellite data. With the use of more bits in a computer, it is possible to have floating point data for some processed results (e.g., a ratioed band).
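The relationship between quantization level and DN range can be verified with a few lines of Python (an illustrative snippet, not from the book):

for bits in (8, 9, 10, 11):
    levels = 2 ** bits                      # number of available DN levels
    print(f"{bits}-bit data: {levels} levels, DN range 0 to {levels - 1}")
# 8-bit data: 256 levels, DN range 0 to 255
# 11-bit data: 2048 levels, DN range 0 to 2047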

1.4.3 Image Reference System

There are many coordinate systems in use, such as the latitude-longitude system and the Cartesian coordinate system. The latter is a plane system suitable for representing two-dimensional digital


imagery (Fig. 1.4a). This system consists of two axes: the abscissa, which increases in value eastward, and the ordinate, which increases in value northward. Hence, the space is partitioned into four quadrants. Coordinates in different quadrants have different signs. Only in the first quadrant are both abscissa and ordinate positive. Due to the presence of negative coordinates, this system is not suitable for referencing pixels in an image. In spite of the three-dimensional Earth's surface in reality, its rendition in a digital image has one fewer dimension. This reduction is permissible given that the sensor is usually located hundreds of kilometers above the Earth's surface, which has a negligible relief by comparison. Since the third dimension (height) of ground objects is not a concern in natural resource applications of remote sensing, it is acceptable to approximate this surface as a flat one represented by a two-dimensional array of pixels. Thus, a pair of coordinates in the form of row and column (also known as line and pixel) is required to uniquely locate a pixel in this array. Both have an increment of 1. These coordinates depict the central location of a grid cell. Since an image always starts with the first pixel and then the next sequentially, an image coordinate system differs from the commonly known Cartesian coordinate system. Here, its origin is located in the upper left corner (Fig. 1.4b). Known as line, row increases vertically downward. Column refers to the position of a pixel in a row. It increases across from left to right. The total number of rows and columns of an image defines its physical size. Pixel P in Fig. 1.4b has a coordinate of (3, 10), in which 3


FIGURE 1.4 Comparison of the Cartesian coordinate system (a) with the image coordinate system (b). In the Cartesian coordinate system, the space is divided into four quadrants, so coordinates can be positive or negative, depending on which quadrant a point is located in. In the image coordinate system, all coordinates are positive, as the origin is located in the upper left corner. Both systems require a pair of coordinates to reference a location uniquely.

is known as row or line, and 10 as column, position, or pixel. This convention of representation is not universally adhered to, so it can vary with the image processing system. Of particular note is that the first row and last row are counted in determining the number of pixels/columns of an image. Also, the first row or column can start from 0 as well as from 1. As with all raster data, the coordinates of pixels in an image are not explicitly stored in the computer except for a few strategic ones (i.e., the four corner pixels). Instead, all pixels are recorded sequentially by column first and by row next as a long list. Their geographic location is implicitly defined by their relative position in the list or their distance from the origin (i.e., the first pixel). This relative position can be converted into a pair of absolute coordinates expressed as row and column from this distance as well as the physical dimension (e.g., number of rows by number of columns) of the image. These coordinates may be further converted into the metric expression by multiplying them by the spatial resolution of the image.
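These two conversions can be sketched in a few lines of Python (the function names and the 0-based counting convention are ours; some systems start counting at 1 instead):

def index_to_row_col(index, n_cols):
    # Convert a pixel's position in the sequential list (stored row by row,
    # counted from 0) into image coordinates (row, column).
    return index // n_cols, index % n_cols

def row_col_to_metric(row, col, pixel_size):
    # Convert image coordinates into a metric offset from the origin in the
    # upper left corner by multiplying by the spatial resolution.
    return row * pixel_size, col * pixel_size

# The 68th pixel stored for a 19-column image with 10-m pixels
row, col = index_to_row_col(67, n_cols=19)          # -> (3, 10), pixel P
print(row, col, row_col_to_metric(row, col, 10.0))  # -> 3 10 (30.0, 100.0)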

1.4.4 Histogram

A histogram is a diagram displaying the frequency distribution of pixels in an image with respect to their DNs (Fig. 1.5). It can be presented either graphically or numerically. A graphic histogram contains two axes. The horizontal axis is reserved for the pixel’s DN. It is an integer with an increment of 1 or other larger integers specified by the analyst. Thus, the histogram is not smooth but discrete. The vertical axis represents the frequency, in either relative terms (percentage) or absolute terms (actual number of pixels). A graphic histogram is an effective means of visualizing the quality of a single spectral band directly. For instance, a broad histogram curve signifies a reasonable contrast while its position relative to the horizontal axis is indicative of the overall tone of the band (Fig. 1.5a). A position toward the left suggests that the image tends to have an overall dark tone, a phenomenon equivalent to underexposure in an analog aerial photograph (Fig. 1.5b). On the other hand, a position toward the right shows that the image has a bright tone throughout, with an appearance similar to an overexposed aerial photograph. Unlike a graphic histogram, a numeric histogram displays the exact number of pixels at every given DN level. In order to reduce the number of DN levels, a few DNs may be amalgamated. In this case, the frequency refers to the combined pixels over the indicated range of DNs. Both forms of histogram are essential in contrast manipulation of spectral bands. A preview of a graphic histogram enables the analyst to prescribe the kind of enhancement method most appropriate for the image. A numeric histogram provides important clues in deciding critical thresholds needed in performing certain kinds of image contrast stretching.
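As a rough sketch of how a numeric histogram is tabulated, the following illustrative code (not from the book; the function name, bin width parameter, and the simulated band are ours) counts pixels per DN level:

import numpy as np

def band_histogram(band, bin_width=1, max_dn=255):
    # Count the number of pixels at each DN level; a bin_width larger than 1
    # amalgamates adjacent DN levels, as described above.
    edges = np.arange(0, max_dn + bin_width + 1, bin_width)
    counts, _ = np.histogram(band, bins=edges)
    return edges[:-1], counts

dn_band = np.random.randint(30, 180, size=(100, 100))   # a dark-toned band
levels, counts = band_histogram(dn_band)
print("most frequent DN:", levels[counts.argmax()])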




FIGURE 1.5 Examples of two graphic histograms illustrating different qualities of the spectral bands they correspond to. The first histogram (a) has a larger range, but most pixels have a small DN value, causing the image to have a darkish overall tone. The spike in the histogram represents water pixels. The skinny and narrow histogram (b) shows a limited contrast as not all available DNs are taken advantage of during data recording.

1.4.5 Scatterplot

A scatterplot is an extension of a one-band graphic histogram into a two-band situation. This diagram illustrates the distribution of pixel values in the two spectral band domain (Fig. 1.6). Either band can serve as the horizontal or vertical axis in a scatterplot. The variable in both axes is the pixel DN of the usual range of 0 to 255. What this diagram is able to reveal depends on where the pixels originate from. If they come from the entire image, then a scatterplot is able to reveal whether the content of the two bands is correlated with each other. If all pixels fall into a linear trend neatly, then the content of both bands exhibits a high degree of resemblance, or there is severe data redundancy between them. Since a scatterplot is best at showing the distribution of pixel values over two bands, multiple scatterplots have to be constructed to illustrate the correlation extent between any two spectral bands in case of more than two multispectral bands. If the pixels are selected from a subarea related to specific land covers, the scatterplot can be used to identify whether the covers represented by these pixels are spectrally separable. Such a plot is very useful in revealing the feasibility of mapping these covers prior to the


FIGURE 1.6 A scatterplot of two spectral bands. It illustrates the correlation between the information content of spectral band A versus band B. In the diagram the variable in both axes is DN, which ranges from 0 to 255. Dashed lines are histograms of respective bands.

classification. They can also foretell the accuracy of mapping these covers on the basis of the spectral distance between these pixels and pixels from other covers. If the pixels from one type of land cover feature are distributed in close proximity to those from another type of land cover feature, then there is a low spectral separability between the two concerned land covers in these two spectral bands.
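The degree of redundancy that a scatterplot reveals visually can also be quantified as a correlation coefficient; the sketch below is our own illustration, with simulated arrays standing in for any two co-registered spectral bands:

import numpy as np

def band_correlation(band_a, band_b):
    # Correlation between the DNs of two co-registered bands; a value close
    # to 1 means the pixel pairs fall along a narrow linear trend in the
    # scatterplot, i.e., severe data redundancy.
    return np.corrcoef(band_a.ravel(), band_b.ravel())[0, 1]

band_a = np.random.randint(0, 256, size=(200, 200))
band_b = band_a + np.random.randint(-10, 10, size=(200, 200))
print(f"r = {band_correlation(band_a, band_b):.2f}")   # close to 1 here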

1.5 Properties of Remotely Sensed Data

The property of remotely sensed data most critical to their utility is their resolution. It refers to an imaging system's capability of resolving two adjacent features or phenomena. There are four types of resolution for remote sensing imagery: spatial, spectral, radiometric, and temporal.

1.5.1 Spatial Resolution

Also called ground sampling distance, spatial resolution of imagery refers to its ability to distinguish two spatially adjacent objects on the ground. Spatial resolution is the equivalent of the spatial dimension of scanning on the ground during image acquisition. For raster images, spatial resolution is synonymous with the pixel size of the remotely sensed data. Ground sampling distance is jointly governed by the instantaneous field-of-view (IFOV) (α) of the sensing system and the altitude of the platform (H) that carries the sensor (Fig. 1.7), or

Pixel size = α × H                (1.1)




FIGURE 1.7 Relationship among the spatial resolution of satellite imagery, satellite altitude (H), and IFOV (α) of the scanner.

where α is expressed in radians. According to this equation, at the same altitude a smaller IFOV translates into a smaller pixel size, and vice versa. At the same IFOV, a lower altitude leads to an image of a finer spatial resolution, and vice versa. Spatial resolution denotes the theoretical dimension of ground features that can be identified from a given remote sensing image. The finer the spatial resolution, the more detailed the image is. As the pixel size increases, less detail about the target is preserved in the data (Fig. 1.8). A small cell size is desirable in those local-scale applications that demand great detail about the target. A fine spatial resolution reduces the number of mixed pixels, especially if the landscape is highly fragmented and land cover parcels have an irregular shape. The downside of having a fine spatial resolution is a large image file size. This file size is going to double or triple if two or three spectral bands are needed. As it is a common practice to record satellite data in the multispectral mode, an image file size can reach many megabytes easily. Such a large file is going to slow down all subsequent analyses. It is thus important to select data with a spatial resolution appropriate for the needs of an application. If the digital remote sensing data are obtained through scanning of existing aerial photographs, their spatial resolution is determined by


FIGURE 1.8 Appearance of an image represented at four spatial resolutions of 4 m (a), 8 m (b), 20 m (c), and 40 m (d). As pixel size increases, ground features become less defined. See also color insert.

both the scanning interval and the scale of the photographs used. If an analog satellite image is scanned, then the scanned image’s spatial resolution may not bear any relationship with that of the original digital image. This discrepancy needs to be taken into consideration when data scanned from analog materials are analyzed digitally.
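As a quick numeric check of Eq. (1.1), the AVHRR figures quoted in Chap. 2 (an IFOV of 1.3 milliradians observed from an altitude of 833 km) reproduce the familiar 1.1-km pixel; the variable names below are ours:

ifov_rad = 1.3e-3                      # IFOV (alpha) in radians
altitude_km = 833.0                    # platform altitude (H)
pixel_size_km = ifov_rad * altitude_km
print(f"pixel size at nadir = {pixel_size_km:.2f} km")   # about 1.1 km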

1.5.2 Spectral Resolution

Spectral resolution refers to the ability of a remote sensing system to differentiate the subtle difference in reflectance of the same ground object at different wavelengths. Spectral resolution is determined by the number of spectral bands used to record spectrally split radiative energy received from the target. It is related to the wavelength range of each spectral band, as well as the wavelength range of all bands. It must be noted that not all spectral bands have the same wavelength range (Fig. 1.9). Nor is the wavelength range of all bands continuous. Because of atmospheric scattering and absorption, electromagnetic radiation over some wavelengths cannot be used for spaceborne remote sensing, causing discontinuity in the wavelength of spectral bands. Spectral bands in the visible and near infrared spectrum tend



FIGURE 1.9 Spectral resolution of imagery. It is defined as the width of a spectral band. As illustrated in this figure, band 6 has the coarsest spectral resolution compared with bands 1 and 2. Spectral resolution affects the spectral separability of covers.

to have a narrower wavelength range than those in the middle and far infrared spectrum because reflectance is stronger at these shorter wavelengths. Since the reflectance curves of most ground objects vary with wavelength (Fig. 1.9), in general, the finer the spectral resolution, the more information about the target is captured. This generalization is valid to a certain degree. The issue of data redundancy arises if the spectrum is sliced too thinly into too many spectral bands, as is the case with hyperspectral remote sensing data. Spectral resolution is an important image property to consider in certain applications as it determines the success or failure of computer-assisted per-pixel image classification of satellite imagery data based exclusively on pixel values. The use of more spectral bands in a classification is conducive to the achievement of higher classification accuracy to a certain degree. In general, spaceborne remotely sensed data have a higher spectral resolution than panchromatic aerial photographs that are taken with a frame camera of a single lens. Such data recorded in the multispectral domain represent an effort of increasing spectral resolution to compensate for the inability to use image elements other than pixel values.

1.5.3 Radiometric Resolution

Radiometric resolution refers to the ability of a remote sensing system to distinguish the subtle disparity in the intensity of the radiant energy from a target at the sensor. It is determined by the level of quantizing the electrical signal converted from the radiant energy (Fig. 1.10). Radiometric resolution controls the range of pixel values of an image, and affects its overall contrast. Recently, the common


FIGURE 1.10 The multispectral concept in obtaining remotely sensed data. It is a common practice to obtain multispectral data in spaceborne remote sensing in which the low spatial resolution is compensated for by a finer spectral resolution.

8-bit quantization level has evolved into a level as high as 11 bits, thanks to advances in sensing technology. With the use of more bits in recording remotely sensed data, the radiative energy received at the sensor is sliced into more levels radiometrically (Fig. 1.11), which


FIGURE 1.11 Quantization of the energy reflected from a ground target. The energy is converted into an electrical signal whose intensity is proportional to reflectance. The interval of sampling the signal intensity determines the radiometric resolution of the satellite imagery, or its ability to discriminate subtle variation in reflectance.



makes it possible to differentiate subtle variations in the condition of targets. A fine radiometric resolution is critical in studying targets that have only a subtle variation in their reflectance, such as detection of different kinds of minerals in the soil and varying levels of vegetation stress caused by drought and diseases. Also, remotely sensed data of a fine radiometric resolution are especially critical in quantitative applications in which a ground parameter (e.g., sea surface temperature and concentration level of suspended solids in a water body) is retrieved from pixel values directly. Data of a higher quantization level enable the retrieval to be achieved more accurately, while a coarse radiometric resolution causes the pixels to look similar to one another.
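The coarsening shown in Fig. 1.12 can be mimicked by discarding the least significant bits of each DN; the following sketch is our own illustration (not the book's method) of requantizing an 8-bit band to 3 bits:

import numpy as np

def requantize(band, bits_in=8, bits_out=3):
    # Reduce the radiometric resolution of a band by dropping the least
    # significant bits, e.g., from 256 gray levels (8 bits) to 8 levels
    # (3 bits).
    shift = bits_in - bits_out
    return (band.astype(np.uint16) >> shift) << shift

dn = np.array([12, 37, 65, 130, 200, 255])
print(requantize(dn))        # [  0  32  64 128 192 224]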

1.5.4 Temporal Resolution

Also known as revisit period, temporal resolution refers to the temporal frequency at which the same ground area is sensed consecutively by the same sensing system. Since remote sensing satellites revolve around the Earth 24 hours a day and 365 days a year, the temporal resolution is directly related to the satellite orbital period. A short period means more revolutions per day and is equivalent to a high temporal resolution. Temporal resolution of the same satellite varies with the latitude of the geographic area being sensed. At a higher latitude there is more spatial overlap among images acquired over adjoining orbits. The same ground is sensed more frequently, or at a finer temporal resolution, than at a lower latitude. One method of refining the temporal resolution of satellite data is to tilt the scanning mirror of the sensing system. In this way the same scene is able to be scanned repeatedly at a close temporal interval from the adjoining orbits either to the left or to the right of the current path. Temporal resolution is very significant in applications in which the object or phenomenon of study is temporally dynamic or in a state of constant change, such as weather conditions, floods, and fires. In general, satellite remote sensing data have a higher temporal resolution than airborne remote sensing data. Among the four image resolutions, temporal resolution bears no direct relationship to the other three resolutions. Although spectral and spatial resolutions are independent of each other, both are tied closely to radiometric resolution. In recording images at a finer spectral or spatial resolution, the returned energy emitted from or reflected by the ground is sliced into numerous units either spectrally or spatially. The successful detection of such a tiny quantity of energy over a unit imposes a stringent demand on the radiometric sensitivity of the detectors (Fig. 1.12). Consequently, a fine radiometric resolution is achievable by compromising either spectral or spatial or both resolutions in order to accumulate sufficient energy from the target to be accurately identifiable. Conversely, a low radiometric resolution may be adopted in order to achieve a finer spectral or spatial


FIGURE 1.12 Appearance of the same image represented at three radiometric levels: (a) 8 bits (256 gray levels), (b) 6 bits (64 gray levels), and (c) 3 bits (8 gray levels).



resolution. Since the amount of energy emitted by targets is much smaller than what is reflected, it is more difficult to achieve the same spatial or radiometric resolution for images acquired over the thermal infrared portion of the spectrum than over visible and near infrared wavelengths.

1.6

Organization of the Book This book is divided into 14 chapters. Chapter 2 comprehensively surveys the main characteristics of existent remote sensed data available for digital analysis. Also included in this chapter is how to convert existing analog remote sensing materials into digital format via scanning. Chapter 3 presents various media for storing remote sensing data, and the common image formats for saving remote sensing imagery and processed results. Also covered in this chapter are methods of data compression, both lossy and error free. Contained in Chap. 4 is a critical overview and assessment of main digital image analysis systems, their major features and functions. A few of the lead players are described in great depth, with the strengths and limitations of each system critically assessed. How to prepare remote sensing data for digital analysis geometrically forms the content of Chap. 5. After the fundamentals of image geometric rectification are introduced, several issues related to image rectification are addressed through practical examples. Also featured in this chapter are the most recent developments in image georeferencing, such as image orthorectification and real-time georeferencing. Chapter 6 is devoted to image enhancement methods, ranging from simple contrast manipulation to sophisticated image transformation. Most of the discussion centers around processing in the spectral domain while image enhancement in the spatial domain is covered briefly. Covered in Chaps. 7 to 11 are five unique approaches toward image classification. Chapter 7 on spectral image classification begins with a discussion on the requirements and procedure of image classification. The conventional per-pixel-based parametric and nonparametric methods, namely, unsupervised and supervised methods, are presented next. Three supervised image classification algorithms are introduced and compared with one another in terms of their requirements and performance. This is followed by more advanced classification methods, including subpixel and fuzzy image classification. This chapter ends with a brief discussion on postclassification processing. With the advances in machine learning, new methods have been attempted to perform image classification in the hope of achieving higher accuracies. Two attempts of neural network classification and decision tree classification form the focus of Chaps. 8 and 9, respectively. After the various types of neural network structures are introduced in Chap. 8, the discussion then shifts to network configuration and

Overview training, both being critical issues to the success of neural network image classification. The potential of this endeavor is evaluated toward the end of the chapter. Chapter 9 on decision tree classification begins with an introduction to major decision trees that have found applications in image classification, followed by a discussion on how to construct a tree. The potential of this classification method is assessed toward the end of this chapter. The focus of Chap. 10 is on spatial image classification in which the spatial relationship among pixels is taken advantage of. Two topics, use of texture and objectbased image classification, are featured prominently in this chapter. In addition, image segmentation, which is a vital preparatory step for object-oriented image classification, is also covered extensively. Recently, image classification has evolved to a level where external knowledge has been incorporated into the decision making. How to represent knowledge and incorporate it into image classification forms the content of Chap. 11. After presenting various types of knowledge that have found applications in intelligent image classification, this chapter concentrates on how to acquire knowledge from various sources and represent it. A case study is supplied to illustrate how knowledge can be implemented in knowledge-based image classification and in knowledge-based postclassification processing. The performance of intelligent image classification relative to per-pixel classifiers is assessed in terms of the classification accuracy achievable. The next logical step of processing following image classification is to provide a quality assurance. Assessment of the classification results for their accuracy forms the content of Chap. 12. Addressed in this chapter are sources of classification inaccuracy, procedure of accuracy assessment, and proper reporting of accuracies. Chapter 13 extends digital analysis of remote sensing data to the multitemporal domain, commonly known as change detection. The results derived from respective remote sensing data are compared with each other either spatially or nonspatially. Many issues related to change detection are identified, in conjunction with innovative methods of change detection. Suggestions are made about how to assess and effectively visualize change detection results. The last chapter of this book focuses on integrated image analysis with GIS and global positioning system (GPS). After models of integrating these geoinformatic technologies are presented, this chapter identifies the barriers to full integration and potential areas to which the integrated analysis approach may bring out the most benefits.



CHAPTER 2

Overview of Remotely Sensed Data

In the late 1960s, meteorological satellite data with a coarse spatial resolution from instruments such as the Advanced Very High Resolution Radiometer (AVHRR) from the National Oceanographic and Atmospheric Administration (NOAA) came into existence for the first time in history. These data, initially designed chiefly for the purpose of studying weather conditions, were not accompanied by wide practice of digital image analysis in the remote sensing community, due probably to the fledgling state of computing technology back then. In the early 1970s, the Landsat program was initiated to acquire satellite data for the exclusive purpose of natural resources monitoring and mapping. Since then tremendous progress has been made in remote sensing data acquisition, with tens of satellites launched. The advance in our data acquisition capacity is attributed largely to the progress in rocket technology and sensor design. Consequently, a wide range of satellite data has become available at a drastically reduced price. Over the years the spatial and spectral resolutions of these data have been improved. Satellite data of a finer spatial resolution have opened up new fields of applications that were not possible with data of a poor spatial or spectral resolution before. In addition to multispectral data, it is possible to obtain satellite data in hundreds of spectral bands. These remotely sensed data with improved viewing capabilities and improved resolution have not only opened up new areas of successful applications, but also created specific fields in digital image analysis. In this chapter, these satellite data are comprehensively reviewed in terms of their critical properties and main areas of application. All the satellite data, including meteorological, oceanographic, natural resources, and even radar, will be covered in this overview. Both multispectral and hyperspectral data are included in this review. In addition, this chapter also identifies recently emerged trends in satellite data acquisition, including the acquisition from airborne platforms. This identification is followed by a discussion on how to convert existent



analog materials into the digital format. Finally, this chapter concentrates on the proper selection of remotely sensed data for a given application.

2.1 Meteorological Satellite Data

Among all remote sensing satellites, meteorological satellites have the longest history. Of the existing meteorological satellite data, the most widely known and used are from the AVHRR sensors aboard the NOAA series of satellites, the most recently launched being the 18th. These satellites orbit around the Earth at an altitude of 833 km with an average period of approximately 102 minutes (Table 2.1). Designed primarily for meteorological applications, the NOAA series of satellites are capable of obtaining data of a fine temporal resolution via at least two satellites working in a sun-synchronous orbit. Some missions have a daylight (e.g., 7:30 a.m.) north-to-south equatorial crossing time while other missions have a nighttime (e.g., 2:30 a.m.) equatorial crossing time. As a result, any location on the surface of the Earth can be sensed twice a day, once in the morning and again in the afternoon. The AVHRR sensor captures radiation over the visible light, near infrared (NIR), and thermal infrared (TIR) portions of the spectrum in five spectral bands (Table 2.2). This radiometer has a nominal swath width of 2400 km and an instantaneous field-of-view (IFOV) of 1.3 milliradians at nadir. AVHRR data are available in three forms: high resolution picture transmission (HRPT), global area coverage (GAC), and local area coverage (LAC). Both HRPT and LAC data have a full ground resolution of approximately 1.1 × 1.1 km². It increases to about 5 km at the largest off-nadir viewing angle near the edges of the 3000-km wide imaging swath. GAC data are produced by sampling four out of every five pixels along the scan line and every third scan line of the LAC data. Such processed data have a spatial resolution of 4 × 4 km². AVHRR data are available at two levels. Level 1B data are raw data that have not been radiometrically calibrated, even though radiometric

Satellite Number | Launch Date | Ascending Node | Descending Node
14 | 12/30/94 | 1340 | 0140
15 | 05/13/98 | 0730 | 1930
16 | 09/21/00 | 1400 | 0200
17 | 06/24/02 | 2200 | 1000
18 | 05/20/05 | 1400 | 0200

These satellites had an altitude of 833 km, a period of 102 min, a revisit period of 12 h, and an inclination of 98.9°.

TABLE 2.1 Characteristics of Recent NOAA AVHRR Satellites

Band | Wavelength, µm | Typical Use | Spatial Resolution, km (LAC / GAC)
1 | 0.58–0.68 | Daytime cloud/surface and vegetation mapping | 1.1 / 4
2 | 0.725–1.10 | Surface water delineation, ice, and snow melt | 1.1 / 4
3A | 1.58–1.64 | Snow/ice discrimination | 1.1 / 4
3B | 3.55–3.93 | Night cloud mapping, SST | 1.1 / 4
4 | 10.30–11.30 | Night cloud mapping, SST | 1.1 / 4
5 | 11.50–12.50 | SST (sea surface temperature) | 1.1 / 4

TABLE 2.2 Characteristics of AVHRR Bands and Their Uses

calibration coefficients are appended to the data, together with Earth location data. They are supplied either as a single scene or as a mosaic of multiple scenes. A single scene image has a dimension of 2400 ⫻ 6400 km2. A mosaic consists of multiple images from the same orbit that have been stitched together. Their availability is limited to certain dates only. Georegistered level 1B data have been radiometrically and geometrically corrected in accordance with the parameters specified by the user. They include projection, resampling method, and pixel size. The data are supplied in single scenes only in the binary format of 8 or 10 bits. Because of the broad geographic area that can be covered by one scene and their low cost, AVHRR data have found applications in global and regional monitoring of forests, tundra, and grasslands ecosystems. Other applications include agricultural assessment, land cover mapping, soil moisture analysis at the regional scale, tracking of regional and continental snow cover, and prediction of runoff from snow melting. The thermal bands of AVHRR data are also useful in retrieving various geophysical parameters such as SST (sea surface temperature) and energy budget. Since they have a fairly continuous global coverage since June 1979, AVHRR data are perfectly suited to long-term longitudinal studies. Multiple results can be averaged to show the long-term patterns of global biomass and chlorophyll concentration (Fig. 2.1). Their extremely high temporal resolution makes them perfectly suited to monitor dynamic and ephemeral processes like flooding and fires on a broad scale. In geology, AVHRR images can be used to monitor volcanic eruptions, and study regional drainage and physiographic features.


FIGURE 2.1 Global distribution of vegetation expressed as normalized difference vegetation index (NDVI) and chlorophyll averaged from multitemporal AVHRR data between June and August 1998. (Source: Goddard Space Flight Center.) See also color insert.
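The NDVI mapped in Fig. 2.1 is computed from the red and NIR bands of AVHRR (bands 1 and 2). The following minimal sketch, in Python with NumPy, illustrates the calculation on two small, purely illustrative reflectance arrays; the array values and the small constant added to the denominator are assumptions for demonstration, not part of the text.

    import numpy as np

    # Co-registered AVHRR band 1 (red, 0.58-0.68 um) and band 2 (NIR, 0.725-1.10 um)
    # reflectances; the values below are illustrative only.
    red = np.array([[0.08, 0.10], [0.12, 0.30]])
    nir = np.array([[0.40, 0.38], [0.35, 0.32]])

    ndvi = (nir - red) / (nir + red + 1e-10)   # small constant avoids division by zero
    print(np.round(ndvi, 2))                   # dense vegetation approaches +1

Averaging such NDVI layers over many acquisition dates produces composites like the one shown in Fig. 2.1.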

2.2 Oceanographic Satellite Data

The Sea-viewing Wide Field-of-view Sensor (SeaWiFS) was launched on August 1, 1997, into a 705-km sun-synchronous orbit inclined at 98.2° (Table 2.3). It succeeds the Coastal Zone Color Scanner, which ceased operation in 1986, in a mission to acquire quantitative data on ocean bio-optical and biogeochemical properties at the global scale. The satellite has a period of 98.9 minutes and a return period of only 1 day. The nadir resolution of SeaWiFS imagery is 1.13 km (LAC) and 4.5 km (GAC). All data are quantized to 10 bits. The ground swath varies from 1500 km at a scanning angle of 45° (GAC) to 2800 km at 58.3° (LAC). There are 1285 (LAC) and 248 (GAC) pixels per scan line. The eight spectral bands of SeaWiFS imagery cover the wavelength range of 0.402 to 0.885 µm over the visible and NIR spectrum (Table 2.4). Such a narrow range of spectral sensitivity is justified because ocean color is mostly observable in visible light. These data are processed to three levels.

Height                     705 km
Inclination                98.217°
Period                     98.9 min
Orbit type                 Sun-synchronous
Speed                      6.47 km/s
Repeat cycle               1 day
Spatial resolution (km)    1.13 (LAC), 4.5 (GAC)
Swath width                2,801 km LAC/HRPT (58.3°); 1,502 km GAC (45°)
Quantization               10 bits

Source: Feldman.

TABLE 2.3 Orbit Characteristics of the SeaWiFS Satellite

Level 1A data are raw radiance values; their calibration and navigation information is stored in a separate file in the hierarchical data format (HDF). Level 2 (GAC) data are processed products of 11 geophysical parameters, such as normalized water-leaving radiances at 412, 443, 490, 510, 555, and 670 nm. Other derived products are chlorophyll a concentration, the epsilon of the aerosol correction at 765 and 865 nm, and aerosol optical thickness at 865 nm. Data processed to level 3 include five normalized water-leaving radiances that have been corrected for atmospheric scattering and sun angles

Band   Wavelength, µm (color)        Primary Use
1      0.402–0.422 (violet)          Dissolved organic matter (incl. Gelbstoffe)
2      0.443–0.453 (blue)            Chlorophyll absorption
3      0.480–0.500 (blue-green)      Pigment absorption (Case 2), K(490)
4      0.500–0.520 (blue-green)      Chlorophyll absorption
5      0.545–0.565 (green)           Pigments, optical properties, sediments
6      0.660–0.680 (red)             Atmospheric correction and sediments
7      0.745–0.785 (NIR)             Atmospheric correction, aerosol radiance
8      0.845–0.885 (NIR)             Atmospheric correction, aerosol radiance

TABLE 2.4 Spectral Bands of SeaWiFS Data and Their Major Uses

differing from nadir, and seven geophysical parameters. Free access to these data and the results processed from them is granted to approved users only. SeaWiFS data have a narrow and focused application area, namely the study of ocean color at the global scale, which is critical to estimating the concentration of microscopic marine plants (e.g., phytoplankton) and ocean biogeochemical properties. In conjunction with ancillary data, SeaWiFS data enable retrieval of meaningful biologic parameters such as photosynthesis rates.

2.3 Earth Resources Satellite Data

There are several satellites in this category, all of which share the same characteristics of capturing radiation in the visible and NIR spectrum at a medium spatial resolution with a return period of around 20 days. Introduced in this section are six of the leading satellites/sensors: Landsat, Le Systeme Pour l'Observation de la Terre (SPOT, or Earth Observation System), Indian Remote Sensing (IRS), Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Moderate Resolution Imaging Spectroradiometer (MODIS), and Advanced Land Observing Satellite (ALOS).

2.3.1 Landsat Data

The launch of the first Landsat satellite by the National Aeronautics and Space Administration (NASA) on July 23, 1972, ushered remote sensing into the space era and aroused enormous interest in digital image processing. During the course of the NASA space program, Landsat images have evolved toward a higher spatial resolution and a finer spectral resolution. Although some early satellites in the series are no longer in service, they collected a tremendous quantity of data that are indispensable in long-term monitoring applications. These historic data are also essential in studying changes in land cover and the environment. Initially called the Earth Resources Technology Satellite and later renamed Landsat, the Landsat program represents the first unmanned satellite designed specifically to acquire medium-resolution, multispectral data about the Earth on a systematic and repetitive basis. The satellite has a circular, sun-synchronous orbit inclined at 99° (Table 2.5). At a height of 915 km and an orbital period of 103 minutes, the satellite completes 14 revolutions a day around the globe. Consecutive orbits on the same day are separated by 2760 km at the equator, and the ground track shifts by about 160 km from one day to the next, resulting in a sidelap of only 14 percent at the equator between images recorded on adjacent tracks. Thus, it is impossible to establish three-dimensional (3D) viewing for most of the Earth's surface from Landsat imagery. After 18 days the ground track repeats itself. These parameters were kept the same for the first three satellites to maintain consistency in the data acquired.

Height                                               915 km (880–940)
Inclination                                          99°
Period                                               103 min
Revolutions                                          14 per day
Speed                                                6.47 km/s
Distance between successive tracks at the equator    2,760 km
Distance between orbits                              159.38 km
Repeat cycle                                         18 days
Overlap at the equator                               14%
Time of equatorial crossing                          9:42 a.m.
Total FOV                                            11.56°
Orbit type                                           Circular, sun-synchronous

TABLE 2.5 Orbital Characteristics of Landsats 1, 2, and 3

Aboard Landsats 1 to 3 are two sensors, the Return Beam Vidicon (RBV) and the Multispectral Scanner (MSS). The RBV consists of three television-like cameras. These detectors, with a central perspective projection, were intended to obtain images of high geometric fidelity in three spectral bands for mapping purposes. However, the sensor malfunctioned soon after launch, so only a very limited number of images were obtained during the mission. The MSS operates in four spectral bands spanning 0.5 to 1.1 µm (Table 2.6). Each band is equipped with six detectors, so six lines of imagery are obtained simultaneously during cross-track scanning perpendicular to the direction of satellite motion. During scanning, a swath 185 km wide is covered on the ground as the scanning mirror rotates within a field-of-view (FOV) of 11.56°. At each scanning position, a ground area of 57 × 79 m2 is sensed. One image comprises 2340 scan lines of 3240 pixels each (Fig. 2.2). Data are transmitted electronically to ground receiving stations, where all images are resampled to 79 × 79 m2 before they are released to the general public. Data are recorded in CCT (computer-compatible tape) form and can be downloaded from the U.S. Geological Survey website at http://glovis.usgs.gov/. Launched on July 16, 1982, and March 1, 1984, respectively, Landsat 4 and Landsat 5 retained most of the orbital characteristics of their predecessors (Table 2.7). While the satellite altitude was lowered by about 200 km, the total FOV was increased to 14.92° so that the same 185-km swath width on the ground could be maintained. Associated with the lower altitude is the shorter return period of 16 days.


Sensor                      Spectral Band (µm)                           Spatial Resolution, m    Swath Width, km   Quantization Level, bits
MSS                         4: 0.5–0.6; 5: 0.6–0.7;                      79                       185               7
                            6: 0.7–0.8; 7: 0.8–1.1
TM                          1: 0.45–0.52; 2: 0.52–0.60; 3: 0.63–0.69;    30 (band 6: 120)         185               8
                            4: 0.76–0.90; 5: 1.55–1.75; 7: 2.08–2.35;
                            6: 10.4–12.5
Landsat 7 ETM+ (15/04/99)   PAN: 0.52–0.90; 6: 10.4–12.5 (the            PAN: 15; band 6: 60
                            remaining bands are the same as TM's)        (others as TM)

TABLE 2.6 Characteristics of Landsat MSS and TM Imagery

Landsat 4 and Landsat 5 are considered the second generation in the series in that their images have several improved qualities over imagery from Landsats 1 to 3. Since the RBV sensor was not successful, it was dropped from these two satellites. The MSS sensor had exactly the same properties as before. Added to these satellites was a new sensor called the Thematic Mapper (TM). TM imagery is recorded in seven spectral bands at a spatial resolution of 30 m, except band 6, which has a spatial resolution of 120 m. The wavelength range of these bands and their primary uses are provided in Table 2.8. The newest satellite in the series is Landsat 7, launched on April 15, 1999 (Landsat 6 failed soon after launch). Carried on board is a new sensor called Enhanced TM Plus (ETM+). It has a few improvements over its predecessors, such as a panchromatic band (band 8) at a spatial resolution of 15 m. In addition, the spatial resolution of the TIR band (band 6) was refined from 120 to 60 m. All seven multispectral bands have maintained the same wavelengths (Table 2.8). The ground area covered per scene remains 185 × 185 km2. Landsat 7 ETM+ data are available to the general public at two levels, 0Rp and 1G. Level 0Rp data are raw data that have not been corrected for radiometric and geometric distortions, except that scan lines are reversed and nominally aligned. Level 1G data have been corrected for systematic distortions through radiometric calibration and geometric transformation to a user-specified projection.


Pc_i→j = Pr_i→j − 0.508 Pr_i→j |dm| L_j,2 / A_j,2        (13.9)

where  Pc_i→j = probability of a pixel that changes its identity from class i in the first input map to class j in the second land cover map
       dm = registration error
       L_j,2 and A_j,2 = total boundary length and area of class j in the second land cover map, respectively

According to Eq. (13.9), change detection accuracy can be degraded by misregistration for cover classes whose total boundary length is large relative to their area. The joint application of Eqs. (13.8) and (13.9) can reveal the accuracy of a pixel with a changed identity so long as its accuracy in the source and destination layers is known.
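A small numerical illustration of Eq. (13.9) is sketched below in Python. All input values are assumed for demonstration only; in practice the value of Pr_i→j would come from Eq. (13.8).

    # Illustrative use of Eq. (13.9) with assumed, hypothetical values.
    pr_ij = 0.20    # change probability from Eq. (13.8), assumed
    dm    = 0.5     # registration error (assumed, in pixels)
    l_j2  = 120.0   # total boundary length of class j in the second map (assumed)
    a_j2  = 900.0   # total area of class j in the second map (same unit system)

    pc_ij = pr_ij - 0.508 * pr_ij * abs(dm) * l_j2 / a_j2
    print(round(pc_ij, 4))   # 0.1932

The example shows the behavior described above: the larger the boundary-to-area ratio of the destination class, the more the misregistration term reduces Pc_i→j.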

13.7 Visualization of Detected Change

It is difficult to visualize detected change effectively because both the original and the destination covers at the location of a change have to be represented in one map. A common practice is to use a specially designed color scheme to represent all the possible types of change. This method of visualization is limited in that the change from a source cover to all destination covers is not clearly conveyed in the map. Besides, map readability deteriorates rapidly when there are many different kinds of change, even though readability can be improved by omitting all parcels whose identity has not changed; in this way the reader's attention is focused on the changed areas. Another means of visualizing change is to combine two graphic elements (e.g., color and pattern) in the map, one reserved for the source cover and the other for the destination cover. Since human eyes are more sensitive to change in color than in pattern, it is better to use color for the destination covers if they are considered more important than the source covers. The above visualization methods can exhibit all potential types of change in one map only for results detected from two land cover maps. They are inapplicable to change detected from a series of maps. For instance, it is not possible to visualize the temporal evolution of urban sprawl over a period of time using these methods; in this case only the destination cover is illustrated in the visualization. Such change is best visualized via animation, in which the series of maps is superimposed and the maps are displayed continuously at a short temporal interval as an animation on a

computer screen. Through animating the change maps at different times, the process of the gradually changing phenomenon can be effectively perceived by the viewer.
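One way to implement such an animation is sketched below with matplotlib (one possible tool; the text does not prescribe any particular software). The stack of random maps stands in for a real series of classified land cover maps.

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation

    # A series of categorical land cover maps for successive dates (random stand-ins).
    maps = [np.random.randint(0, 4, (100, 100)) for _ in range(6)]

    fig, ax = plt.subplots()
    im = ax.imshow(maps[0], interpolation="nearest")
    ax.set_title("Date 1")

    def update(i):
        # Replace the displayed map and title for frame i.
        im.set_data(maps[i])
        ax.set_title(f"Date {i + 1}")
        return [im]

    anim = FuncAnimation(fig, update, frames=len(maps), interval=500)  # 0.5 s per map
    plt.show()

Each frame shows only the cover at one date, so the gradual evolution (e.g., urban sprawl) emerges from the sequence rather than from a single cluttered map.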


CHAPTER 14
Integrated Image Analysis

As shown in Chap. 11, nonimage ancillary data have been used increasingly in digital image analysis to overcome the limitations of conventional image classifiers and to improve the accuracy of classification results. In turn, the improved results from digital analysis of remotely sensed data also form an invaluable data source in a geographic information system (GIS) database. With these results the database can be updated more quickly and frequently than is possible otherwise. Thus, there exists a mutually interactive and beneficial relationship between digital image analysis and GIS. In the early 1990s a new geoinformatic technology called the global positioning system (GPS), developed by the U.S. Department of Defense mainly for military uses, started to find a wide range of civilian applications. As an efficient and accurate means of spatial data acquisition, this satellite-based positioning and navigation system is able "to provide a global absolute positioning capability with respect to a consistent terrestrial reference frame" (Bock, 1996). The emergence of GPS technology has not only broadened the approaches by which it can be integrated with image analysis and GIS to achieve more accurate classification results, but has also diversified the fields to which this integrated approach can be applied. Thanks to the integrated use of GPS, digital image analysis, and GIS, more and more problems in resource management and environmental modeling can be tackled with relative ease, while new fields of application have become feasible. In this chapter the fundamentals of GIS and GPS are introduced, with their major functions related to image analysis highlighted. Through this introduction the necessity for the integrated approach of analysis is justified. All possible manners in which the three disciplines have been integrated are summarized and presented graphically in four models.

The applications that demand different manners of integration are systematically and comprehensively reviewed. This chapter ends with a discussion of the prospects of, and the obstacles to, full integration.

14.1 GIS and Image Analysis

There is no universally accepted definition for GIS in the literature. Goodchild (1985) considered GIS "a system that uses a spatial database to provide answers to queries of a geographical nature." Burrough and McDonnell (1998) referred to it as "a powerful set of tools for collecting, storing, retrieving at will, transforming, and displaying spatial data from the real world for a particular set of purposes." Irrespective of its precise definition, GIS plays a critical role in digital image analysis owing to its comprehensive spatial database and powerful spatial analytical functions.

14.1.1 GIS Database

Much like a digital image analysis system, a GIS consists of a number of components, such as software, hardware, a user interface, and peripheral devices for data input, output, and display. Essential to all GIS analyses is a spatial database that contains a large collection of spatially referenced data and their attributes. Acting as a model of reality, the data stored in the database represent a selected set or an approximation of real-world phenomena that are deemed important enough to be represented in digital form. These data originate chiefly from analog maps and aerial photographs, GPS and field data, satellite imagery, and statistical data (Fig. 14.1). Depending upon the nature of the data, they can be entered into the database via a keyboard, a digitizer, a scanner, or direct importing. The keyboard is the proper mode for entering nonspatial data. Both the scanner and the digitizer are suited for entering spatial data


FIGURE 14.1 Sources of GIS data and methods of data input into the GIS database. Note: All spatial data must be projected to a common coordinate system before they can be fully integrated with other data in the database.

(e.g., original photographs and land cover parcels interpreted from them). Direct importing applies to existing data that have already been converted to or saved in digital format. A common method of analog spatial data entry is to scan the material into digital format first and then trace all features, both point and linear, using on-screen digitization. Alternatively, the analog materials may be converted directly into digital form via a digitizer. Afterwards, the acquired digital data are made useful through editing. To be fully integrated with data from other sources, the spatial component of all data in the GIS database has to be transformed into a ground coordinate system common to all data layers already in the database. Nonspatial data, such as statistical data and questionnaire results, are entered into the computer either through direct importing if they are already in digital format, or via the keyboard otherwise. In either case the attribute data must be linked to spatial entities through an internally generated code or identifier before they can be queried, analyzed, and modeled. No matter where the data originate from initially, all captured spatial data must be represented in either vector or raster format when stored in the GIS database. Each format of representation has its own unique strengths and limitations in terms of accuracy and efficiency.
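The transformation to a common ground coordinate system can be sketched as follows with the pyproj library (one possible open-source tool, not prescribed by the text); the choice of EPSG:32760 (UTM zone 60S) as the shared projected system is an arbitrary example.

    from pyproj import Transformer

    # Transform a GPS reading in geographic WGS84 (EPSG:4326) into the projected
    # system shared by the other layers (here, arbitrarily, UTM zone 60S).
    transformer = Transformer.from_crs("EPSG:4326", "EPSG:32760", always_xy=True)

    lon, lat = 174.76, -36.85          # longitude, latitude of one field observation
    x, y = transformer.transform(lon, lat)
    print(round(x, 1), round(y, 1))    # easting and northing in metres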

14.1.2 Vector Mode of Representation

In the vector mode of representation a real-world entity is abstracted as a point, a line, or an area (Fig. 14.2). Exemplified by fire hydrants, hospitals, and transmission towers, point data are zero-dimensional (0D) features in terms of their topological complexity. All point entities are represented by a pair of coordinates.


FIGURE 14.2 Digital representation of objects of varying complexity in vector mode. Notice that the accuracy of representation is determined by the sampling interval as a straight line segment is used to connect any two adjacent nodes.



FIGURE 14.3 Various forms of representing linear features in a GIS database.

They indicate the horizontal location of ground features in a ground coordinate system. Linear features such as river channels, rail tracks, and highways are one-dimensional (1D) objects that require representation by a string of points. All linear features can be represented in one of six line forms: line segment, string, arc, link, directed link, and chain (Fig. 14.3). The end point of each line segment is called a node. The space between any two consecutive points is always linked with a straight line. Thus, line segments are the simplest form of representation among all linear features. They may be used to represent a street in a block or to make up a string. Both string and arc are suitable for representing sinuous features such as river channels. Link and directed link are commonly used to represent the direction of movement associated with a linear feature, such as the traffic flow of a one-way street. Chain is a combination of string with directed link. It is suitable for representing the direction associated with a sinuous feature (e.g., the flow direction of a river channel). Areas, or polygon features, are two-dimensional (2D) objects that are represented in the same manner as linear features, except that the first node and the last node in the string of points are identical (Fig. 14.4). The specific format in which a real-world object is represented in the database is a function of the scale of representation, so the same ground feature may be represented either as a point or as an area. For instance, a 2D object such as a town or city that is normally considered


FIGURE 14.4 Representation of geographic entities of various topological complexities in vector format.

Feature   ID Number   Representation
Point     I           x, y (single pair)
Line      II          String of x, y coordinate pairs
Polygon   III.I       Closed loop of x, y coordinate pairs
          III.II      Closed loop sharing coordinates with others

an area may be represented as a point if the scale of representation is sufficiently small. In order to achieve efficiency and convenience in data management and retrieval, all vector data are organized into layers according to the similarity in their topological complexity, with each layer containing a unique aspect of the complex world (Fig. 14.5), in drastic contrast to topographic maps, which incorporate all represented features in one layer. For instance, all linear features may be separated into one layer whereas all land cover parcels (polygon features) are stored in another layer. All hydrologic features (e.g., coastal line, channel networks, and watershed boundaries) may also be organized into one layer. No matter what type of features a layer contains, it must be compatible with other layers in its spatial accuracy and georeferencing system so that the same object on the ground will have the same coordinates in all layers. This method of organization has a few advantages, such as efficient retrieval of features from the database. Analysis in some applications can be performed very quickly by activating only the concerned data layers while all other irrelevant data layers can be left out. The vector form of representation is precise. All real-world entities can be accurately represented with different combinations of the three fundamental elements: points, lines (and their variants), and


FIGURE 14.5 The data-layer concept in a GIS database, illustrated with example layers of contours, footpaths, schools, cadastral boundaries, and land use, plus a composite (contours excluded). Conceptually related spatial objects are organized into one layer known as a theme or coverage (e.g., a layer may contain only stream segments or may contain streams, lakes, coastline, and swamps).

areas. This data structure is very compact, with little data redundancy. However, this data model is very complex owing to the need to encode and store spatial relationships among geographic entities (see Sec. 14.1.5). Comprehensive encoding of topology makes it efficient to carry out certain applications (e.g., spatial queries). Nevertheless, this overhead must be updated whenever the spatial component is altered. As a consequence of the complex topology and geometric problems, certain GIS analyses require considerable computation, while other analytical operations (e.g., spatial modeling) are almost impossible to implement in vector mode.
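The three vector primitives can be sketched as simple coordinate lists, as below. The structure and values are illustrative only and do not follow any particular vendor format; the area routine merely shows the kind of geometric computation the vector model supports.

    # Illustrative vector representations: a point, a line (string of points),
    # and a polygon (closed loop whose first and last vertices coincide).
    tower = (2668071.0, 6479489.0)                                        # 0D point
    track = [(0.0, 0.0), (3.0, 1.5), (5.0, 1.0), (9.0, 4.0)]              # 1D line
    lake  = [(1.0, 1.0), (4.0, 1.0), (4.0, 3.0), (1.0, 3.0), (1.0, 1.0)]  # 2D polygon

    def polygon_area(vertices):
        """Shoelace formula: planar area enclosed by a closed coordinate loop."""
        area = 0.0
        for (x1, y1), (x2, y2) in zip(vertices[:-1], vertices[1:]):
            area += x1 * y2 - x2 * y1
        return abs(area) / 2.0

    print(polygon_area(lake))   # 6.0 square map units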

14.1.3 Raster Mode of Representation

In the raster mode of representation, the Earth's surface is partitioned into a set of discrete but connected 2D arrays of grid cells (Fig. 14.6). Each cell has a regular shape; the most common shape is the square, although triangles and hexagons are also possible.


FIGURE 14.6 The raster form of feature representation. In this model of representation, all ground objects are presented as cells of a uniform size. Properties of different areal objects are represented as different cell values.

The size of each cell is known as the resolution. All cells have a regular orientation. Each cell can be surrounded by four or eight neighboring cells, depending upon the connectivity number adopted. All cells are referenced to the origin in the upper left corner. Cell coordinates are implicitly coded by their row and column numbers, or by the distance from the origin. Raster data are obtained by sampling the space at a regular interval, both horizontally and vertically. In this view of the world, a point feature is represented as a single cell, a linear feature as a string of cells, and an area as an array of cells (Fig. 14.6). Thus, features of different topological complexities can be stored in the same raster layer. The raster mode of representation treats space as being made up of grid cells of different values rather than of objects; objects exist only when a group of spatially contiguous cells is examined simultaneously. The attribute at each cell is represented as a code that can be nominal, categorical (e.g., 1 for forest and 2 for water), or ratio. Since each cell can have only one code, every aspect of the real world must be represented by a separate raster layer, for instance, one layer for elevation and another layer for land cover. The raster mode of representation has a simple and uniform data structure. It is very popular for representing spatially continuous surfaces. Besides, certain GIS operations, such as overlay, modeling, and simulation, can be implemented efficiently in this data structure.

This model of representation is inherently compatible with remote sensing imagery. Therefore, it is very easy to integrate raster GIS data with remotely sensed data in undertaking sophisticated image analysis and spatial modeling. However, this data mode is limited in that the representation is very crude and highly approximate for point and linear features. The representation is also inaccurate for areal features that do not conform to a regular boundary (Fig. 14.6). Most of all, the accuracy of representation is adversely affected by the cell size. A large cell size may reduce the file size, but can cause a loss of detail. The indiscriminate partition of space into an array of uniformly sized cells is also inefficient: a huge quantity of data must be maintained even if the feature of interest does not vary much spatially or does not occupy much space, resulting in severe data redundancy. Besides, it is impossible to search the data spatially without any links between cells in space. Consequently, topology cannot be built for the data, and some analyses (e.g., network analysis) are impossible to carry out using data represented in this mode.
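A raster layer is simply a 2D array of cell codes, which is what makes cell-by-cell integration with imagery so direct. The NumPy sketch below uses small, purely illustrative arrays to show one categorical land cover layer, one elevation layer on the same grid, and a simple cell-by-cell query of the kind the raster model handles well.

    import numpy as np

    # One raster layer per theme: land cover codes (1 = forest, 2 = water, 3 = pasture)
    # and an elevation layer on the same grid (illustrative values only).
    landcover = np.array([[1, 1, 2],
                          [3, 1, 2],
                          [3, 3, 2]])
    elevation = np.array([[120,  95,  40],
                          [150, 110,  35],
                          [160, 140,  30]])

    # Cell-by-cell query: forest cells above 100 m, a simple raster overlay.
    mask = (landcover == 1) & (elevation > 100)
    print(mask.sum(), "cells meet both conditions")   # 2 cells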

14.1.4 Attribute Data

In addition to spatial data, the GIS database must also encompass attribute data. Attributes depict certain aspects of a spatial entity. These nongeographic properties describe the quality or degree of a selected aspect of spatial features stored in vector format. Each feature may have a number of attributes associated with it. How many attributes, or which attributes, should be retained in the database is governed by the purpose of constructing the database. The manner of storing attribute data varies with the data format. In raster form, attributes are represented as cell values, inseparable from the spatial data themselves; both spatial (e.g., location) and nonspatial data (e.g., thematic value) are stored in the same raster layer. In vector format, however, attribute data are stored separately from spatial data. Attributes associated with a data layer are commonly represented in tabular format. In this attribute table, each row is called a record, corresponding to a geographic entity in the spatial database. A column represents a unique quality, or attribute, of that entity, which can be either quantitative or qualitative (Table 14.1). Both rows and columns can be updated or expanded conveniently. New rows can be added to the table if new entities are created during a spatial operation; obsolete records are removed from the table by deleting the relevant rows. Similarly, new attributes can be added to the table by inserting new columns. Because ground features are organized into layers according to their topological complexity, a separate attribute table must be constructed for each data layer. Attribute tables fall into three categories, corresponding to the topological complexities of spatial entities: a point attribute table is needed for a point layer or coverage, a line table for a layer containing linear features, and a polygon table for a coverage of polygon features. Neither geographic nor attribute data can be of any use if the two are not linked with each other. This linkage is established via an

ID Code   Address                  Organization                      X Coordinate   Y Coordinate
1971      74 Epsom Ave.            Auckland College of Education     2667942        6478421
1972      15 Marama Ave.           Dr. Morris Rooms                  2667970        6478580
1973      16 Park Rd.              Auckland Sexual Health Service    2668054        6480627
1974      95 Mountain Rd.          Cairnhill Health Centre           2668071        6479489
1976      98 Mountain Rd.          St. Joseph’s Hospice              2668142        6479408
1980      475A Manukau Rd.         Epsom Medical Care                2668496        6477045
1984      235 Manukau Rd.          Ranfurly Medical Centre           2668651        6478119
1989      2 Owens Rd.              Auckland Healthcare               2668731        6478789
1990      197 Broadway             Newmarket Medical Centre          2668879        6479859
9322      12 St. Marks Rd.         The Vein Centre                   2669017        6479242
9337      3 St. Georges Bay Rd.    Parnell Medical Centre            2669355        6481055
9403      383 Great North Rd.      Dr. Mackay’s Surgery              2665674        6480254
9455      491A New North Rd.       Consulting Rooms                  2665862        6479590
…         …                        …                                 …              …

TABLE 14.1 An Attribute Table for Location of Medical Facilities in Auckland

internally generated identification number or code that is unique for every spatial entity. Each of the records in the database is assigned a sequential number (Table 14.1, column 1) automatically, corresponding to the same number in the spatial layer. In addition to spatial and attribute data, the GIS database also contains topological data.
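The link between a spatial feature and its attribute record amounts to a join on the shared identifier. The sketch below mirrors the structure of Table 14.1 in plain Python dictionaries; the separation into a "spatial" and an "attribute" structure is illustrative and not tied to any particular GIS product.

    # Spatial layer: feature ID -> coordinates (the spatial component).
    points = {1971: (2667942, 6478421), 1973: (2668054, 6480627)}

    # Attribute table: the same IDs -> nonspatial properties.
    attributes = {
        1971: {"address": "74 Epsom Ave.", "organization": "Auckland College of Education"},
        1973: {"address": "16 Park Rd.",   "organization": "Auckland Sexual Health Service"},
    }

    # The shared ID is what makes a query such as "where is feature 1973?" possible.
    fid = 1973
    print(attributes[fid]["organization"], "at", points[fid])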

14.1.5 Topological Data

Topological data portray the interrelationship between different spatial entities and the relationship of one entity to other subentities in the database. The former is concerned with spatial arrangement and spatial adjacency; the latter depicts compositional relationships. Topological relationships among spatial entities must be explicitly spelled out and stored in the database if it is to be queried spatially with efficiency.


FIGURE 14.7 Topological relationships for three adjacent polygons.

Polygon ID   Line Segments
31           1, 2, 3, 4
32           5, 4, 7, 6
33           8, 5, 9

Link ID   Left Polygon   Right Polygon   From Node   To Node
1         0              31              1           2
2         0              31              2           3
3         0              31              3           4
4         32             31              4           1
5         33             32              6           1
6         0              32              5           6
7         0              32              4           5
8         0              33              7           1
9         0              33              6           7

Node ID   X    Y
1         16   3
2         3    3
3         3    30
4         13   30
5         20   31
6         29   15
7         30   3

The complexity of the topological information that has to be stored for an entity varies with its spatial dimension. Polygon features have the most complex topology; topology for linear and point features, by comparison, is much simpler. As illustrated in Fig. 14.7, polygons 31 and 32 are adjacent as they share one common boundary (first table). Both of them are made up of four line segments (second table), each defined by two nodes. All the nodes are further defined by a pair of coordinates (third table). When encoding the topological relationships it is imperative to conform to an established convention: if the clockwise direction is adopted, it should be adhered to for all polygons in the map to avoid inconsistency and potential confusion. Explicit encoding of the potential relationships (e.g., belonging and neighboring) among spatial entities beforehand is a prerequisite to an efficient search of the database. Higher search efficiency is achieved if more relationships are stored in the database, at the expense of maintaining a larger overhead of topological data. These relationships enable queries to be answered quickly. In addition, they also make certain GIS analysis functions possible. However, not all possible spatial relationships need to be encoded explicitly. For instance, those that can be determined from calculation of node coordinates (e.g., whether two lines intersect each other) during a database query do not need to be encoded explicitly, although this absence of stored relationships slows down the query.
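The arc-node structure of Fig. 14.7 can be sketched as a list of links carrying left- and right-polygon codes, from which polygon adjacency falls out of a simple scan. The values below are taken from the tables above; the function name and structure are illustrative only.

    # Each link: (link_id, left_polygon, right_polygon, from_node, to_node); 0 = outside.
    links = [
        (1, 0, 31, 1, 2), (2, 0, 31, 2, 3), (3, 0, 31, 3, 4),
        (4, 32, 31, 4, 1), (5, 33, 32, 6, 1), (6, 0, 32, 5, 6),
        (7, 0, 32, 4, 5), (8, 0, 33, 7, 1), (9, 0, 33, 6, 7),
    ]

    def neighbours(polygon_id):
        """Polygons sharing at least one link (boundary) with the given polygon."""
        shared = set()
        for _, left, right, _, _ in links:
            if polygon_id in (left, right):
                shared.add(left if right == polygon_id else right)
        shared.discard(0)            # drop the 'outside' polygon
        shared.discard(polygon_id)
        return shared

    print(neighbours(32))   # {31, 33}: polygon 32 is adjacent to 31 and 33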

14.1.6 GIS Functions

A GIS can serve a number of functions, such as data storage, data retrieval, data query, spatial analysis, and results display (Fig. 14.8). Of these functions, data collection, input, and transformation are preparatory steps for the construction of the database. These generic steps are not related directly to any particular GIS application. Once all the data are stored in the database in a proper format, they can be retrieved, queried, and analyzed, and the generated results visualized and printed if necessary. Data retrieval is a process of extracting a subset of the database and visualizing it graphically for effective communication. It takes advantage of the data storage function of a GIS; in this context GIS is treated as a data repository. Since data are organized logically, they can be retrieved quickly and efficiently. Data query is a process of searching the database to identify all the records that meet specified criteria. In this sense it is very similar to data retrieval. However, data query is not synonymous with data retrieval in that it can be performed on new data layers derived from spatial analysis or on nonexistent entities. In this case the attribute table has to be updated as new information is generated following a spatial operation, and it is this newly generated information that can be queried. Spatial analyses that can be applied to the data stored in the GIS database include topographic analysis, overlay


FIGURE 14.8 Functions of a typical GIS system and their relationship in the flow of data analysis.

analysis, network analysis, and geostatistical analysis. Some of these analyses are meaningful only when the data format is appropriate. All data retrieval, database query, and/or spatial analysis may be followed by display and visualization. Of all the aforementioned GIS functions, database query and spatial analysis are so important to GIS integration with digital image analysis that they will be covered in greater depth under separate headings.

14.1.7 Database Query

Database queries can be executed either nonspatially or spatially. An aspatial query is a search of an existing attribute table (e.g., a relational database), similar to searching an Excel spreadsheet file. It involves data retrieval followed by result display. In this kind of query, properties of spatial objects are retrieved and/or displayed without any change to the spatial component of the database; no new spatial entities are created as a result of the operation. Queries are by no means always such a simple operation of data recall. On the contrary, new attributes, such as population density and per capita income, may be created following a query and inserted back into the original attribute table. The query is performed on one class of objects from one attribute table. It is executed by searching the database using a particular attribute value or a combination of several values. Nonspatial queries strongly resemble those using the structured query language (SQL), an industry-standard query language used by commercial

database systems, such as ORACLE, for relational databases. It has three keywords, SELECT, FROM, and WHERE, whose proper usage is illustrated here:

SELECT : an attribute whose values are to be retrieved
FROM   : a relational table containing the data
WHERE  : a boolean expression to identify the records

For instance, the next example illustrates a query of the database Auckland (suburb_name, population). It identifies all suburbs having a population over 50,000:

SELECT population
FROM Auckland
WHERE population >50,000

Similarly, the query of a relational GIS database typically involves three essential ingredients: an attribute table, an attribute, and a selection criterion. A keyword to all queries is SELECT or RESELECT. The query is done in three steps:

1. Selection of the attribute table (e.g., auckland.pat)
2. Selection of the attribute value (e.g., population)
3. Specification of the selection criteria, if any (e.g., >50,000); this can be combined with step 2 in the form of “population >50,000”

Example:

The property prone to landslide must meet the following conditions:
   Rainfall: High; Vegetation cover: Pasture; Elevation: High; Slope gradient: Steep
Result of query: Location number 3

Location   Rainfall   Vegetation Cover   Elevation   Slope Gradient
1          High       Shrub              Moderate    Gentle
2          Low        Forest             Low         Gentle
3          High       Pasture            High        Steep
4          Moderate   Shrub              High        Moderate
5          High       Pasture            Moderate    Steep
6          High       Pasture            High        Moderate

FIGURE 14.9 An example of a data query using multiple criteria. In this query the property prone to landslide (location 3) is identified.
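The multi-criteria selection of Fig. 14.9 can be phrased as a single boolean WHERE clause. The sketch below uses Python's built-in sqlite3 module with the six records of the figure; the table and column names are illustrative choices, not part of any particular GIS.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sites (location INTEGER, rainfall TEXT, vegetation TEXT,"
                 " elevation TEXT, slope TEXT)")
    conn.executemany("INSERT INTO sites VALUES (?, ?, ?, ?, ?)", [
        (1, "High", "Shrub", "Moderate", "Gentle"),
        (2, "Low", "Forest", "Low", "Gentle"),
        (3, "High", "Pasture", "High", "Steep"),
        (4, "Moderate", "Shrub", "High", "Moderate"),
        (5, "High", "Pasture", "Moderate", "Steep"),
        (6, "High", "Pasture", "High", "Moderate"),
    ])

    # Combine the four selection criteria with boolean AND, as in Fig. 14.9.
    rows = conn.execute(
        "SELECT location FROM sites "
        "WHERE rainfall = 'High' AND vegetation = 'Pasture' "
        "AND elevation = 'High' AND slope = 'Steep'"
    ).fetchall()
    print(rows)   # [(3,)]: only location 3 satisfies every condition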


Conditional Query More sophisticated conditional queries can be formulated by combining different attributes, or different values of the same attribute, through boolean logic. For instance, area <50,000 AND area >10,000 would enable all suburbs with an area between 10,000 and 50,000 ha to be selected. All properties prone to landslides are identified in the query example illustrated in Fig. 14.9; in this query only one record (location 3) meets all the selection criteria. Records that meet the selection criteria may be analyzed further to derive such statistical parameters as the sum, average, and standard deviation. Spatial queries involve the use of locational information. A query can be issued for existing features after they are displayed on the computer screen. All three types of spatial objects (point, line, and area) can be queried by clicking on them directly on screen. All attributes associated with the selected entity are then displayed, such as street name, ID number, street address, location, and so on (see Table 14.1). In addition, it is also possible to query multiple objects at a time through multiple selections, or by defining a query area within which all features are selected. The selected features are usually highlighted in a color different from that of the same class of objects to confirm their selection and to show their spatial distribution. All attribute data related to these objects are highlighted in the attribute table if it is already displayed on screen. Query of nonexistent objects is more complex, lengthy, and difficult to implement than query of existing entities. It may have to be preceded by some kind of spatial analysis, during which the geographic area to be queried is created first. If the queried area spreads over a few polygons, the query cannot be resolved with relational algebra; relationships not stored in the database will have to be ascertained first using computational geometry, thus prolonging the query process. There are several types of such queries. The simplest form is the point-to-point query, such as identifying all point features within a given distance of a spot, or identifying the nearest point(s) from a designated point. Typical queries are “Where is the nearest hospital from the accident spot?,” “How many restaurants are located within 500 m from here?,” and “Where is the nearest river from a burning house?” The last query exemplifies a point-to-line query. Region or zonal queries, and path queries, are more complex than the above queries and hence more difficult to undertake. An example of a zonal query is “Whose properties will be affected by a proposed landfill site or a motorway route?” To answer this kind of query, preparatory steps (e.g., buffering) have to be undertaken first to create a new polygon to be used in the query. Path queries are attempts to find the shortest route between two points in a network, such as the best route to the nearest hospital from the spot of a traffic accident. Their successful implementation requires a road network database in vector format.

The queried results can be visualized in map format to show the spatial distribution and pattern of the queried attribute. The number of attributes that can be visualized in one map is normally restricted to one, even though two are possible. The attribute value can be either numeric or categorical. Charts may also be produced for the queried results.

14.1.8 GIS Overlay Functions

Of all GIS analytical functions, overlay analysis is the most important and relevant to digital image analysis. Overlay analysis is defined as placement of a cartographic representation of one theme over that of another. Conceptually, it refers to stacking one map layer or theme over the top of another to generate a third coverage, namely coverage A + coverage B = coverage C (Fig. 14.10). This very intuitive but powerful GIS function is difficult to achieve with analog maps, but is a straightforward process in the digital environment. Overlay analysis can be performed to fulfill many needs, such as showing spatial correlation among different variables, revealing any cause-effect relationships among them, or modifying the database or study area, and detecting changes in land cover. There are different approaches by which the two input layers are stacked. In certain kinds of operations, their sequence in the input affects the overlay outcome. Prior to being overlaid, all input layers must be georeferenced to the same ground coordinate system, even though they may not cover an identical ground area. The topology of the output layer must be rebuilt after the operation in order to reflect the fact that new spatial objects may have been created out of the overlay analysis. Of special notice is that overlay is by no means


FIGURE 14.10 The concept of spatial overlay analysis in GIS. All coverages involved in the analysis, including both the input and output ones, must be georeferenced to the same ground coordinate system.


restricted to only two layers, a common number in practice. If a GIS can manage only one pair of input layers at a time, it is still possible to overlay more than two layers through multiple overlay analyses. For instance, three coverages of soils, crop, and farm practices may be overlaid to predict or to help understand yield potential. Any two of them can be overlaid first before the newly created layer is overlaid with the third one. Not all input layers in an overlay analysis contain the same type of features. In fact, it is quite legitimate for the input layers to contain features of different topological complexities. For instance, one layer can be point-based school or hospital locations while another layer contains the boundaries of suburbs. Combination of layers of different topological complexities fulfills different purposes of overlay analysis, such as identifying point in polygon, line on polygon, and polygon on polygon. In point-in-polygon overlay, one layer contains point data while another contains area (polygon) entities. The area boundary is used to group the points into spatial segments, but no new polygons are created during the overlay. The properties of the point attributes may be further studied by relating them to other statistical data, for example, to identify crime scenes in different suburbs and to explore potential factors contributing to the crimes. In line-on-polygon overlay, one of the input layers contains linear objects and another contains polygons. The sequence of entering the two layers in the analysis critically affects the overlay outcome. If the polygon coverage is the first input layer, then lines no longer exist in the output layer. Instead, they have become the boundaries of newly created polygons (refer to the heading “Identity” later in this section). After the analysis, more but smaller polygons are created through the intersection of polygons with lines. However, lines can also be partitioned into segments by the boundaries of the polygons if the line coverage is the first input layer. A potential application of this kind of analysis is to identify the types of land cover to be crossed by a proposed pylon and the length of the power line in each type of land cover. In polygon-on-polygon overlay, both coverages contain areal objects. Many new but smaller polygons are created after the operation. The topology of the output coverage needs to be rebuilt following the operation. There are different logic options in implementing polygon overlay, such as union, split, intersect, update, and identity. They are discussed in detail next.

Union In union a new polygon coverage is created out of two input ones using the boolean logic OR. The resultant output coverage retains all the features in either of the input layers. In other words, all features and attributes of both coverages are preserved in the output layer (Fig. 14.11). If the two coverages do not cover an identical ground area, the output coverage will always cover a larger area than that



FIGURE 14.11 Graphic illustration of union in overlay analysis. Any area unique to the union layer will be annexed in the output layer while the common area will not be duplicated.

covered in either of the input layers. New polygons are created through intersection of arcs in the input layers. They are not formed until the postoperation stage when the topology of the newly created coverage is constructed. Its attribute table is formed by joining the coverage items of both input layers. Besides, all existing polygons retain their identity in both of the input coverages prior to the operation. The sequence of inputting the two coverages exerts no effect on the output, even though both must be polygon coverages. It is illogical to use point or line coverage inputs in undertaking a union operation. Union differs from map join in that any area common to both layers will not be duplicated in the output layer. This operation is valuable in identifying land cover parcels whose identity has changed in change detection from multitemporal satellite images in which both input layers have exactly the same ground area.

Intersect Underpinned by the boolean logic AND, intersect creates a new coverage out of two input layers. After the two coverages are geometrically intersected through their coordinates, only the area and those features common to both the input and intersect coverages are preserved in the output layer (Fig. 14.12). Thus, the output layer always covers a smaller area than either of the input coverages if they have a different size. The attribute tables from both layers are joined as a single


FIGURE 14.12 Graphic illustration of intersect overlay analysis. The output area is common to both layers, and the areas unique to either layer in the input are clipped off in the output.


one with duplicated records deleted. These features are of the same class as those in the input coverage. The first (input) coverage can be point, line, or area. The intersect (second) coverage must always contain polygon features. It is not permissible to use a point or line coverage as the intersect layer. Similar to union, intersect is also a useful way of identifying land cover changes from multitemporal remotely sensed results in vector mode. In this case both the input and the intersect layers contain polygon-based land cover parcels.

Identity Similar to all overlay analyses, identity requires two layers in the input, an input (first) coverage and an identity (second) coverage. The input layer may contain points, lines, and polygons. However, the identity coverage must be polygon-based. Since most land cover parcels are polygons, the polygon option is the most common in image analysis. With this option, all arcs in the input coverage are intersected with and split by those in the identity layer (Fig. 14.13). New polygons are formed after the topology is rebuilt to update the attribute table. Unlike union, the geographic area covered by the output layer is identical to that of the input layer only, with all entities of the input coverage retained, and the area unique to the identity layer is clipped off. However, among the features in the identity coverage, only those overlapping the spatial extent of the first (input) coverage are preserved in the output coverage. Therefore, it is important to specify the correct sequence of coverages in performing the analysis. This operation is useful in unifying the spatial extent of all data layers related to the same geographic area that may cover a unique area of their own initially.

Erase and Clip The first of the two input layers in erase is regarded as the input layer and the second the erase coverage that defines the region to be erased. Features in the input layer overlapping the erase region (polygons) are removed after this operation (Fig. 14.14). The output coverage contains only those input features outside the erase region. The input


FIGURE 14.13 Graphic illustration of the identity operation in overlay analysis. More polygons in the output layer are created through the intersection with arcs in the identity layer.



FIGURE 14.14 Graphic illustration of erase in overlay analysis. The area enclosed inside a boundary in the erase layer will be removed from the input layer after this operation.

However, the erase coverage must always contain polygon features. Output coverage features are of the same class as the input features, and their topology must be rebuilt after the operation. Erase is useful for stripping off areas of no interest in certain applications, such as removing land areas from an image to be used in water quality analysis. Clip is the opposite of erase: features in the input layer outside the clipping region are removed, and those overlapping the clipping region are retained in the output layer (Fig. 14.15). The input (first) coverage may be a point, line, or polygon coverage; the clip (second) coverage contains a polygon that defines the clipping region. Essentially, a portion of the coverage is cut out with a “cookie cutter.” Since only those input coverage features that fall within the clipping region are preserved in the output, the output layer is always smaller than the input layer in extent, in contrast to erase, where the erased area can lie anywhere inside the input layer. Clipping is a useful way of extracting features, or parts of them, from a large dataset or area. In particular, it is very commonly used to redefine the extent of a remote sensing image; in this case the clipping layer contains the boundary of the study area.
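Both operations can likewise be sketched in GeoPandas, where erase corresponds to a difference overlay and clip to the clip function; the file names below are hypothetical.

    # Minimal sketch: erase (difference) and clip in GeoPandas.
    import geopandas as gpd

    features = gpd.read_file("classified.shp")   # input: point, line, or polygon
    land = gpd.read_file("land_mask.shp")        # erase layer: polygons
    boundary = gpd.read_file("study_area.shp")   # clip layer: study area polygon

    # Erase: keep only the input features outside the erase polygons, e.g.
    # stripping land areas from a layer destined for water quality analysis.
    water_only = gpd.overlay(features, land, how="difference")

    # Clip: the "cookie cutter": keep only the parts of the input that fall
    # inside the clipping polygon; the clip layer's attributes are not joined.
    subset = gpd.clip(features, boundary)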

Split As a feature extraction operation, split is very similar to clip in that the input coverage is divided into a number of output coverages, each covering a subarea of the whole coverage. The input coverage may contain point, line, or polygon features, but the split coverage must always contain polygons.

[Input layer + Clip layer = Output layer]

FIGURE 14.15 Graphic illustration of clip in overlay analysis.


[Input layer + Split layer = Output layer (Zones 1, 2, 3, 4)]

FIGURE 14.16 Graphic illustration of split in overlay analysis. The input layer is partitioned into four subcoverages as there are four polygons in the split layer.

The input coverage features are partitioned along the boundaries of the split polygons. The number of resultant coverages equals the number of polygons in the split coverage (Fig. 14.16). All output coverages have the same feature class as the input coverage, but each is smaller than the input layer in extent. Split produces an effect opposite to that of union and is achievable through a series of clip operations (see the sketch at the end of this section). This analysis is useful for partitioning a huge geographic area into a number of smaller areas so that each can be analyzed separately by different analysts, thereby speeding up data analysis.

Before this section ends, it must be emphasized that GIS overlay analysis sounds sophisticated but is simple to perform. It is, after all, only an analytical tool, no matter how powerful. The driving reason behind these operations lies in the applications, not in the computer procedures themselves. Apart from the specific technicalities of the different overlay operations, the image analyst needs to understand which one best achieves the desired objective of an application. Without such an understanding, overlay is merely an exercise in fancy graphics.
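To close this subsection, the sketch referred to above emulates split as a series of clips in GeoPandas, writing one output file per polygon in the split layer; all names are hypothetical.

    # Minimal sketch: split emulated as repeated clips in GeoPandas.
    import geopandas as gpd

    coverage = gpd.read_file("coverage.shp")     # input layer to be partitioned
    tiles = gpd.read_file("split_zones.shp")     # split layer: one polygon per output

    # One output coverage is produced per polygon in the split layer.
    for idx, zone in tiles.iterrows():
        part = gpd.clip(coverage, zone.geometry)
        part.to_file(f"coverage_zone_{idx}.shp")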

14.1.9 Errors in Overlay Analysis

When multiple layers are input into the computer to perform one of the spatial overlay analyses discussed above, the intersection of all arcs in the respective coverages creates many new polygons. While some of them are genuinely formed by arcs representing the boundaries of different polygons in separate input layers, others are artificially created by arcs that presumably represent the same boundary but whose horizontal positions shift slightly among the different data sources. Characterized by a small size and an elongated (skinny) shape, these spurious polygons are called sliver polygons (Fig. 14.17).

FIGURE 14.17 Formation of spurious polygons in overlay analysis owing to a slight shift in the position of the same boundary in different input layers.

Composed of only two line segments in most cases, these skinny polygons have a large perimeter-to-area ratio. Sliver polygons can form for three reasons:

1. The horizontal position of the object being depicted has genuinely shifted. For instance, a river channel may have changed course between surveys carried out in different seasons or years. If the channel is represented by a single line in each layer, sliver polygons will result.

2. The same boundary is depicted at different scales in different data sources. Different scales mean different levels of generalization for the same boundary. For instance, the same coastline looks slightly different at a large scale than at a small scale.

3. Finally, and most likely, sliver polygons are caused by minor artificial variations in how the boundary is indicated. The same boundary is not identical in all input layers because its nature is ambiguous or fuzzy, or because its representation is not error free. A soil boundary, for example, is rather fuzzy; different pedologists may interpret and draw the same soil boundary differently. Moreover, artificial changes are inevitably introduced into its representation during digitization. Even if the same source is used, the same boundary is unlikely to be captured identically by different operators, or even by the same operator at different times, owing to the use of varying sampling intervals (Fig. 14.18). Consequently, the same linear feature can look slightly different from one layer to the next.

For these reasons, no two boundaries are exactly the same. Boundary inconsistency is the norm in captured digital data, and sliver polygons are inevitable in the overlaid results. The critical issue is how to deal with them. The varying position of the same boundary in different input layers can be resolved through conflation, a procedure for reconciling differences in boundaries by dissolving sliver polygons. Sliver polygons may be removed from the resultant output coverage in two ways. The first is to average the two sets of boundary lines if the reliability of both boundaries is the same or unknown.

FIGURE 14.18 The impact of sampling interval on the appearance of a curved boundary (dashed line) in digital format. Solid line: the captured line consisting of line segments. Notice how the curve is more generalized at a longer interval.


This can be achieved by breaking apart the intersecting boundaries of the sliver polygons and then removing both line segments. The two dangling nodes left behind are then joined with a straight line, so that the newly drawn line falls roughly in the middle of the dissolved sliver polygon. This process is lengthy and tedious, as every spurious polygon has to be identified and eliminated manually. A better and more efficient alternative is to eliminate sliver polygons automatically. A logical expression, used with the DELETE command, defines the characteristics of the polygons to be eliminated. A common elimination criterion is polygon size: all polygons, both genuine and spurious, whose area falls below the specified threshold are eliminated by the operation. It is therefore important to set the threshold carefully so that genuine polygons are not affected. Needless to say, this method is much faster than the manual one. In the output layer, the removal is accomplished by dropping the longest border a sliver polygon shares with its neighbor. A more sensible way is to remove the border segment with the higher positional uncertainty, but this has to be done manually, thus prolonging the process of conflation.
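The automatic elimination described above can be approximated outside a full GIS as well. The following is only a rough sketch in GeoPandas, with a hypothetical file name and area threshold, that merges each below-threshold polygon into the neighbor with which it shares the longest border; a production workflow would also rebuild topology and reconcile attributes.

    # Rough sketch: eliminate sliver polygons below an area threshold by merging
    # each into the neighbour sharing the longest border. Names are hypothetical.
    import geopandas as gpd

    polys = gpd.read_file("overlay_result.shp").reset_index(drop=True)
    THRESHOLD = 25.0   # squared map units; set carefully so genuine polygons survive

    slivers = polys[polys.geometry.area < THRESHOLD]
    for i, sliver in slivers.iterrows():
        neighbours = polys[polys.geometry.touches(sliver.geometry) & (polys.index != i)]
        if neighbours.empty:
            continue
        # Length of the border shared with each neighbour; merge into the longest.
        shared = neighbours.geometry.intersection(sliver.geometry).length
        target = shared.idxmax()
        polys.loc[target, "geometry"] = polys.loc[target, "geometry"].union(sliver.geometry)
        polys = polys.drop(index=i)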

14.1.10 Relevance of GIS to Image Analysis

GIS is closely related to image analysis in at least two areas. First, it provides a framework for preparing the data for analysis and for undertaking change detection. Second, it can supply a huge amount of nonsatellite data for knowledge-based image analysis.

Analytical Framework Although most of the data used in image analysis are obtained from a sensor aboard a satellite or an aircraft, rudimentary GIS analysis is essential for getting the data into the right shape during data preparation. In this instance, the data must be subset to an appropriate size and shape that closely follows the study area, so that time is saved in subsequent analyses. The redefinition of a study area using its boundary file is effectively carried out with the clip function in a GIS overlay, as shown above. In digital image analysis it may also be necessary to undertake a long-term longitudinal study of the geographic problem under investigation, in addition to analyzing remotely sensed data acquired at a single time. This involves spatial comparison of multitemporal results derived from satellite image data of the same geographic area recorded at different times. Such a comparison is essentially a spatial overlay analysis and is best undertaken in a GIS if the results are in vector format (a brief sketch of this workflow follows).
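Put together, the data preparation and multitemporal comparison described here amount to a clip followed by an overlay; a brief GeoPandas sketch of that workflow, with hypothetical file names, is given below.

    # Minimal sketch: subset two vector classification results to the study area,
    # then overlay them for change detection. File names are hypothetical.
    import geopandas as gpd

    boundary = gpd.read_file("study_area.shp")
    result_t1 = gpd.read_file("classified_t1.shp")
    result_t2 = gpd.read_file("classified_t2.shp")

    result_t1 = gpd.clip(result_t1, boundary)
    result_t2 = gpd.clip(result_t2, boundary)
    change = gpd.overlay(result_t1, result_t2, how="intersection")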

Supplier of Ancillary Data As shown in Table 14.2, a vast variety of spatial data are stored in a GIS database, all of which must have been properly edited and georeferenced to a common ground coordinate system.

Data Category: Example Layers
Topographic: Elevation; Gradient; Orientation
Hydrologic: Stream channels; Lakes and reservoirs; Watershed; Coastal line
Environmental: Soil pH; Floodplain; Protected reserves
Natural resources: Vegetation; Land cover; Farmland
Transport: Bus stops; Passenger rail network; Highway

TABLE 14.2 Exemplary Data Stored in a GIS Database

While some of these data are best represented in vector format, others are more suited to raster representation. These data can easily be exported to an image analysis system with a simple change of data format, or used directly without any change if the system is able to read the GIS data. They are potential sources for deriving external knowledge (e.g., residential area must have a slope gradient