CALL FOR PAPERS
IEEE Geoscience and Remote Sensing Magazine

Since 2013, the IEEE Geoscience and Remote Sensing Society (GRSS) has published IEEE Geoscience and Remote Sensing Magazine. The magazine provides a venue for high-quality technical articles that by their very nature do not find a home in journals requiring scientific innovation but that provide relevant information to scientists, engineers, end-users, and students who interact in different ways with the geoscience and remote sensing disciplines. Therefore, the magazine publishes tutorial and review papers, as well as technical papers on geoscience and remote sensing topics, although the last category (technical papers) only in connection with a Special Issue. The magazine also publishes columns and articles on these topics:
• Education in remote sensing
• Space agency activities
• Industrial profiles and activities
• IDEA Committee activities
• GRSS Technical Committee activities
• GRSS Chapter activities
• Software reviews and data set descriptions
• Book reviews
• Conference and workshop reports.

The magazine is published with an appealing layout in a digital format, and its articles are also published in electronic format in the IEEE Xplore online archive. The digital format is different from the electronic format used by the IEEE Xplore website: it allows readers to navigate and explore the technical content of the magazine with a look and feel similar to that of a printed magazine, whereas the electronic version provided on the IEEE Xplore website allows readers to access individual articles as separate PDF files. Both the digital and the electronic magazine content is freely available to all GRSS members.

This call for papers encourages all potential authors to prepare and submit tutorials and technical papers for review to be published in IEEE Geoscience and Remote Sensing Magazine. Contributions on any of the abovementioned topics of the magazine (including columns) are also welcome. Authors interested in proposing special issues as guest editors are encouraged to contact the editor-in-chief directly for information about the proposal submission template, as well as the proposal evaluation and special issue management procedures.

All tutorial, review, and technical papers will undergo blind review by multiple reviewers. The submission and review process is managed at IEEE Manuscript Central, as is already done for the three GRSS journals. Prospective authors are required to submit electronically using the website http://mc.manuscriptcentral.com/grs by selecting the “Geoscience and Remote Sensing Magazine” option from the drop-down list. Instructions for creating new user accounts, if necessary, are available on the login screen. No other means of submission are accepted. Papers should be submitted in single-column, double-spaced format.

For any additional information, contact the editor-in-chief:
Prof. Paolo Gamba
Department of Electrical, Biomedical and Computer Engineering
University of Pavia
Via A. Ferrata, 5
27100 Pavia, Italy
E-Mail: [email protected]

Digital Object Identifier 10.1109/MGRS.2023.3278368
JUNE 2023 VOLUME 11, NUMBER 2 WWW.GRSS-IEEE.ORG
FEATURES
10 Taking Artificial Intelligence Into Space Through Objective Selection of Hyperspectral Earth Observation Applications
by Agata M. Wijata, Michel-François Foulon, Yves Bobichon, Raffaele Vitulli, Marco Celesti, Roberto Camarero, Gianluigi Di Cosimo, Ferran Gascon, Nicolas Longépé, Jens Nieke, Michal Gumiela, and Jakub Nalepa

40 Onboard Information Fusion for Multisatellite Collaborative Observation
by Gui Gao, Libo Yao, Wenfeng Li, Linlin Zhang, and Maolin Zhang

60 AI Security for Geoscience and Remote Sensing
by Yonghao Xu, Tao Bai, Weikang Yu, Shizhen Chang, Peter M. Atkinson, and Pedram Ghamisi

ON THE COVER: Schematic diagram of federated learning (see p. 69). ©SHUTTERSTOCK.COM/VECTORFUSIONART
SCOPE
IEEE Geoscience and Remote Sensing Magazine (GRSM) will inform readers of activities in the IEEE Geoscience and Remote Sensing Society, its technical committees, and chapters. GRSM will also inform and educate readers via technical papers, provide information on international remote sensing activities and new satellite missions, publish contributions on education activities, industrial and university profiles, conference news, book reviews, and a calendar of important events.
COLUMNS & DEPARTMENTS
4 FROM THE EDITOR
7 PRESIDENT’S MESSAGE
86 PERSPECTIVES
89 TECHNICAL COMMITTEES
EDITORIAL BOARD
Editor-in-Chief: Dr. Paolo Gamba, University of Pavia, Department of Electrical, Biomedical, and Computer Engineering, Pavia, Italy, [email protected]

Subit Chakrabarti (Cloud to Street, USA)
Gong Cheng (Northwestern Polytechnical University, P.R. China)
Michael Inggs (University of Cape Town, South Africa)
George Komar (NASA, retired, USA)
Josée Levesque (Defence Research and Development Canada)
Andrea Marinoni (UiT, Arctic University of Norway, Norway)
Mario Parente (University of Massachusetts, USA)
Fabio Pacifici (Maxar, USA)
Nirav N. Patel (Defense Innovation Unit, USA)
Michael Schmitt (Universität der Bundeswehr, Germany)
Vicky Vanthof (University of Waterloo, Canada)
Hanwen Yu (University of Electronic Science and Technology of China, P.R. China)

GRSS OFFICERS
President: Mariko Sofie Burgin, NASA Jet Propulsion Laboratory, USA
Executive Vice President: Saibun Tjuatja, The University of Texas at Arlington, USA
Secretary: Dr. Steven C. Reising, Colorado State University, USA
Chief Financial Officer: Dr. John Kerekes, Rochester Institute of Technology, USA
Vice President of Technical Activities: Dr. Fabio Pacifici, Maxar, USA
Vice President of Meetings and Symposia: Sidharth Misra, NASA Jet Propulsion Laboratory, USA
Vice President of Professional Activities: Dr. Lorenzo Bruzzone, University of Trento, Italy
Vice President of Publications: Alejandro C. Frery, Victoria University of Wellington, New Zealand
Vice President of Information Resources: Keely L. Roth, Salt Lake City, UT, USA

MISSION STATEMENT
The IEEE Geoscience and Remote Sensing Society seeks to advance science and technology in geoscience, remote sensing, and related fields using conferences, education, and other resources.

IEEE PERIODICALS MAGAZINES DEPARTMENT
Journals Production Manager: Sara T. Scudder
Senior Manager, Production: Katie Sullivan
Senior Art Director: Janet Dudar
Associate Art Director: Gail A. Schnitzer
Production Coordinator: Theresa L. Smith
Director, Business Development–Media & Advertising: Mark David, +1 732 465 6473, [email protected], Fax: +1 732 981 1855
Advertising Production Manager: Felicia Spagnoli
Production Director: Peter M. Tuohy
Editorial Services Director: Kevin Lisankie
Senior Director, Publishing Operations: Dawn M. Melley

IEEE Geoscience and Remote Sensing Magazine (ISSN 2473-2397) is published quarterly by The Institute of Electrical and Electronics Engineers, Inc. IEEE Headquarters: 3 Park Ave., 17th Floor, New York, NY 10016-5997, +1 212 419 7900. Responsibility for the contents rests upon the authors and not upon the IEEE, the Society, or its members. IEEE Service Center (for orders, subscriptions, address changes): 445 Hoes Lane, Piscataway, NJ 08854, +1 732 981 0060. Individual copies: IEEE members US$20.00 (first copy only), nonmembers US$110.00 per copy. Subscription rates: included in Society fee for each member of the IEEE Geoscience and Remote Sensing Society. Nonmember subscription prices available on request. Copyright and Reprint Permissions: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy beyond the limits of U.S. Copyright Law for private use of patrons: 1) those post-1977 articles that carry a code at the bottom of the first page, provided the per-copy fee indicated in the code is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923 USA; 2) pre-1978 articles without fee. For all other copying, reprint, or republication information, write to: Copyrights and Permission Department, IEEE Publishing Services, 445 Hoes Lane, Piscataway, NJ 08854 USA. Copyright © 2023 by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved. Application to Mail at Periodicals Postage Prices is pending at New York, New York, and at additional mailing offices. Canadian GST #125634188. Canada Post Corporation (Canadian distribution) publications mail agreement number 40013885. Return undeliverable Canadian addresses to PO Box 122, Niagara Falls, ON L2E 6S8 Canada. Printed in USA. IEEE prohibits discrimination, harassment, and bullying. For more information, visit http://www.ieee.org/web/aboutus/whatis/policies/p9-26.html.
Share Your Preprint Research with the World! TechRxiv is a free preprint server for unpublished research in electrical engineering, computer science, and related technology. Powered by IEEE, TechRxiv provides researchers across a broad range of fields the opportunity to share early results of their work ahead of formal peer review and publication.
BENEFITS:
• Rapidly disseminate your research findings
• Gather feedback from fellow researchers
• Find potential collaborators in the scientific community
• Establish the precedence of a discovery
• Document research results in advance of publication
Upload your unpublished research today!
Follow @TechRxiv_org Learn more techrxiv.org
FROM THE EDITOR BY PAOLO GAMBA
Why Does GRSM Require the Submission of White Papers?
As you have already guessed from the title, and in line with my editorial in the March 2023 issue, I will use my space here to address two different points. First, the reader will find a summary of the contents of this issue, which is useful to those who would like to quickly navigate the issue and read only what they are interested in. The second part of this editorial will be devoted instead to better explaining the concept of a white paper, which is central to the editorial policy of this journal.
GRSM ISSUE CONTENT
This issue of IEEE Geoscience and Remote Sensing Magazine (GRSM) includes the right blend of technical articles and columns with reports and information sharing by some of the committees and directorates of the IEEE Geoscience and Remote Sensing Society (GRSS). Indeed, this issue contains three technical articles, tackling a few of the most interesting issues in Earth observation (EO) data processing. The first article, titled “Taking Artificial Intelligence Into Space Through Objective Selection of Hyperspectral Earth Observation Applications” [A1], starts from a review of the potential EO use cases that may directly benefit from onboard hyperspectral data analysis and introduces a procedure for the objective, quantifiable, and interpretable selection of onboard data analysis applications. Similarly, the article “Onboard Information Fusion for Multisatellite Collaborative Observation: Summary, Challenges, and Perspectives” [A2] summarizes, analyzes, and studies issues involving multisatellite data onboard intelligent fusion processing, showing how strong the need is for a shift from efficient data exploitation on the ground toward effective, although computationally lightweight, onboard data processing. Finally, the article titled “AI Security for Geoscience and Remote Sensing: Challenges and Future Trends” [A3] provides a systematic and comprehensive review of artificial intelligence (AI) security-related research in remote sensing applications.

As for column articles, there is a very interesting “Perspectives” column by Dr. Nirav Patel [A4], “Generative Artificial Intelligence and Remote Sensing: A Perspective on the Past and the Future,” devoted to explaining the background, the current status, and the possible issues related to the use of generative deep learning models with EO data. This thought-provoking article shows the increasing importance that generative deep learning methods are taking on in image processing and the attention they deserve in EO as well as in artificial data generation in general. The potential misuse of these systems, originally designed to provide additional samples to supervised interpretation techniques, opens new research paths and asks new questions about data reliability that have not been considered, so far, by the EO data user community.

The next column reports on the first Summer School of the Image Analysis and Data Fusion (IADF) Technical Committee (TC) [A5]. This TC is well known in the GRSS community and beyond because of the organization of the yearly Data Fusion Contest. A specific organizing committee made it possible to realize the first GRSS IADF School on Computer Vision for Earth Observation, held as an online event from 3 to 7 October 2022. The event was a success, with 85 selected participants. The material distributed in the school is available via the GRSS YouTube channel at https://www.youtube.com/c/IEEEGRSS.

Finally, two columns provide updates about important activities by the Standards TC [A6] and the first ideas from the brand-new Remote Sensing Environment, Analysis, and Climate Technologies (REACT) TC [A7]. Specifically, the first column provides a first outlook on the guidelines for the EO community to create/adapt Analysis Ready Data formats that integrate with various AI workflows.
WHITE PAPER RATIONALE AND TEMPLATE DESCRIPTION
The technical contributions in this issue are the results of a procedure that started with the submission of a white paper by their respective authors. But why does GRSM require the submission of a white paper, and what exactly is a GRSM white paper? The scope of this preliminary step, which may not be clear to all of our readership, is in fact twofold. On the one side, a GRSM white paper is meant to provide the GRSM Editorial Board with enough information to understand whether the proposed full article is in line with the requirements of the journal scope (https://www.grss-ieee.org/wp-content/uploads/2023/03/GRSM_call_for_papers_rev2023.pdf): “GRSM publishes … high-quality technical articles that by their very nature do not find a home in journals requiring scientific innovation but that provide relevant information to scientists, engineers, end-users, and students who interact in different ways with the geoscience and remote sensing disciplines…” On the other side, the submission of a white paper is important to the authors because they will receive preliminary feedback from expert reviewers about their full article, together with comments and suggestions to improve it. Therefore, without the burden of a full review and without the effort required to write a full article, the submission of a white paper ensures that the authors and the Editorial Board shape the full article in such a way that it will be easier to review and, hopefully, accept. In summary, the white paper stage is an opportunity for the authors to get early feedback—for example, whether their article is not a good fit for the magazine. This avoids spending time on an article that can instead be submitted to a different publication with different requirements.

According to this rationale, here are a few suggestions about the format of a GRSM white paper. First of all, its length should be limited; a white paper is expected to be six pages (or fewer) in the IEEE journal page format (A4/letter paper, two columns). The proposed title is an important component to set the stage for the scope of the article. Good titles should not be too long, but they should not be too generic either. The abstract (expected to be no longer than half a column) should state clearly whether your article is a tutorial, a review, or a special issue article, and it should describe the topic of the work and the main novelties and results of the article.
The main body of the white paper (four pages maximum) should be a shorter version of the full article with all the necessary information to understand how the full article will be structured and what the main topics, analyses, and results described in it will be. It can be structured with “Introduction,” “Methodology and Data,” and “Results” if it is a special issue article; with “Introduction,” “Review,” and “Comments” in the case of a review article; and with “Introduction,” “State of the Art,” and “Tutorial” in the case of a tutorial article. This is the place to make the case that an article will be of broad interest to the remote sensing community. It is useful to cite some references in this section. It is also important to highlight differences from other tutorials, overviews, or survey articles on related topics. Here, one should also provide a section and subsection outline of the final article. It is useful to briefly mention the content of each section and to list the (main) articles that you plan to cite in it.

Given the importance of mathematics in remote sensing, it would be unusual to find a feature article without any equations. Due to the tutorial nature of the articles, though, it would not be typical to see the pages filled with equations either. Mathematical equations should be limited in scope and complexity. If one feels that the article needs more equations, references, figures, or tables than the magazine guidelines allow, this may be an indication that it is too technical for a magazine article and is more suitable for publication in a regular journal.
References in the white paper are a subset of the references of the full article. They should be cited and included to guide readers to more information on the topic. For feature articles, the reference list should not include every available source. For instance, in a tutorial article, one should include only the key references that are important to explain a topic, and for an overview, the most influential and well-understood articles.

The final part of a GRSM white paper should be the authors’ list and bios, not longer than half a page. Indeed, this part should be short enough to avoid not being read yet long enough to explain why the authors feel that they are adequate to write the special issue article, review, or tutorial that they are willing to submit to GRSM. This list is used to determine whether the authors have expertise in the area. Good feature articles will be coauthored by different research groups. This choice helps to ensure diverse perspectives on different lines of research, as expected in tutorial and overview articles.

APPENDIX: RELATED ARTICLES
[A1] A. M. Wijata et al., “Taking artificial intelligence into space through objective selection of hyperspectral Earth observation applications,” IEEE Geosci. Remote Sens. Mag., vol. 11, no. 2, pp. 10–39, Jun. 2023, doi: 10.1109/MGRS.2023.3269979.
[A2] G. Gao, L. Yao, W. Li, L. Zhang, and M. Zhang, “Onboard information fusion for multisatellite collaborative observation: Summary, challenges, and perspectives,” IEEE Geosci. Remote Sens. Mag., vol. 11, no. 2, pp. 40–59, Jun. 2023, doi: 10.1109/MGRS.2023.3274301.
[A3] Y. Xu, T. Bai, W. Yu, S. Chang, P. M. Atkinson, and P. Ghamisi, “AI security for geoscience and remote sensing: Challenges and future trends,” IEEE Geosci. Remote Sens. Mag., vol. 11, no. 2, pp. 60–85, Jun. 2023, doi: 10.1109/MGRS.2023.3272825.
[A4] N. Patel, “Generative artificial intelligence and remote sensing: A perspective on the past and the future,” IEEE Geosci. Remote Sens. Mag., vol. 11, no. 2, pp. 86–88, Jun. 2023, doi: 10.1109/MGRS.2023.3275984.
[A5] G. Vivone, D. Lunga, F. Sica, G. Taşkın, U. Verma, and R. Hänsch, “Computer vision for Earth observation—The first IEEE GRSS image analysis and data fusion school,” IEEE Geosci. Remote Sens. Mag., vol. 11, no. 2, pp. 95–100, Jun. 2023, doi: 10.1109/MGRS.2023.3267850.
[A6] D. Lunga, S. Ullo, U. Verma, G. Percivall, F. Pacifici, and R. Hänsch, “Analysis-ready data and FAIR-AI—Standardization of research collaboration and transparency across Earth-observation communities,” IEEE Geosci. Remote Sens. Mag., vol. 11, no. 2, pp. 89–93, Jun. 2023, doi: 10.1109/MGRS.2023.3267904.
[A7] I. Hajnsek et al., “REACT: A new technical committee for Earth observation and sustainable development goals,” IEEE Geosci. Remote Sens. Mag., vol. 11, no. 2, pp. 93–95, Jun. 2023, doi: 10.1109/MGRS.2023.3273083.
PRESIDENT’S MESSAGE BY MARIKO BURGIN
Letter From the President
Hello and nice to see you again! My name is Mariko Burgin, and I am the IEEE Geoscience and Remote Sensing Society (GRSS) President. You can reach me at [email protected] and @GRSS_President on Twitter. I hope you all had a chance to read my 2033 “press release” in the previous column [1] and continue to dream big and tell us how you see the GRSS in 2033 by sharing on social media with #GRSSin10Years. If you haven’t had the chance yet, keep posting or send me an e-mail. I would be happy to hear from you.

In this letter, I’d like to focus on the upcoming International Geoscience and Remote Sensing Symposium (IGARSS) 2023, which will take place in Pasadena, CA, USA, from 16 to 21 July 2023. If you’re not familiar with it, IGARSS is the GRSS’ annual flagship conference, and it is in its 43rd year. IGARSS is an invitation to share knowledge and experiences on recent developments and advancements in geoscience and remote sensing. For 2023, its particular focus is on Earth observation, disaster monitoring, and risk assessment. If you head over to the IGARSS 2023 website (https://2023.ieeeigarss.org/), you might get overwhelmed by the sheer amount of information, but fear not. In this letter, I will try my best to familiarize you with the various activities and give you a few tips along the way.

IEEE GRSS-USC MHI 2023 REMOTE SENSING SUMMER SCHOOL (THURSDAY TO SATURDAY, 13–15 JULY)
It is a longstanding tradition for the GRSS to organize a GRSS school in conjunction with its IGARSS conference. This year, it will be held at the Ming Hsieh Institute (MHI) of the University of Southern California (USC) and consists of three full days of tutorials and lectures on geospatial raster and vector data handling with Python, deep learning for remote sensing data analysis, and radar interferometry and its applications. It also includes three invited lectures on space-based environmental monitoring, small satellite science and applications, and the NASA-ISRO synthetic aperture radar (NISAR) mission. Tip: Use the opportunity to learn from experts in the field before attending IGARSS to present your own research.

TUTORIALS (SUNDAY, 16 JULY)
The Sunday before IGARSS, in this case 16 July, is traditionally the day of the IGARSS tutorials. This year, IGARSS is running 14 tutorials, ranging from machine and deep learning in remote sensing and Earth observation to identifying ethical issues in Earth observation research, from getting started with SAR and TomoSAR to learning about pansharpening and GNSS-R in-the-cloud drone processing for water applications (MAPEO-water), and from predictive modeling of hyperspectral responses of natural materials to a NASA Transform to Open Science (TOPS) Open Science 101 workshop, all led by renowned experts in their respective fields. Our tutorials usually book out fast, so grab your spot while they are still available. Tip: Can’t attend the GRSS school but still want to learn from the best experts in the field? Perhaps a tutorial fits your needs.

IEEE GRSS AdCom OPEN HOUSE (SUNDAY, 16 JULY)
This year, for the first time, the GRSS, the sponsoring Society of IGARSS, is organizing an Administrative Committee (AdCom) Open House on Sunday, 16 July. Over light refreshments, you will have a chance to meet the GRSS leadership: your GRSS President, Executive
Vice President (VP), CFO, Secretary, Past President, VPs and Directors, and all the volunteers working within the various GRSS committees on activities and projects for you. You will also get to meet the IGARSS 2023 Local Organizing Committee. Tip: Land in California and register for IGARSS on Sunday, and then come mingle and let us get to know each other at the Open House.

OPENING NIGHT CELEBRATION (MONDAY, 17 JULY)
On Monday evening, you are invited to Pasadena Central Park for the IGARSS 2023 Opening Night Celebration. Catch up with old friends and make new connections while enjoying music and refreshments. Tip: The Opening Night Celebration is usually a great opportunity to network.

TECHNICAL PROGRAM (MONDAY, 17 JULY TO FRIDAY, 21 JULY)
Throughout the week, we have a mind-boggling variety of technical sessions, grouped into 13 regularly recurring themes and many community-contributed sessions. These are split into oral and poster sessions. Tip: There is no easy way to figure out which technical sessions to attend. I recommend taking a good look at the session descriptions, and in addition to attending those on your specific field of expertise, also mix it up and visit a few new sessions. Who knows? You might discover a new passion, meet future collaborators, or simply learn something new.

EXHIBIT HALL (MONDAY, 17 JULY TO FRIDAY, 21 JULY)
Don’t forget to swing by the IGARSS exhibit hall to meet the IGARSS sponsors and exhibitors. Tip: Looking to find the GRSS leadership? Hang around the GRSS booth. The GRSS booth is ideal if you want to make new connections, network, or provide feedback on IGARSS and the GRSS.

TIE EVENTS (THROUGHOUT THE WEEK)
For several years, IGARSS has organized its Technology, Industry, and Education (TIE) events. This year, you can attend a tutorial on how to query, fetch, analyze, and visualize geospatial data with Python; make new friends at the Women in GRSS luncheon; and participate in a CV/resume writing workshop. Tip: The Women in GRSS luncheon is open to everyone (but is a paid event).

YOUNG PROFESSIONAL EVENTS (THROUGHOUT THE WEEK)
For our students, recent graduates, and young professionals (YPs), IGARSS is organizing a YP mixer where you can join for an evening of interactive and engaging experiences that bring together a diverse group of academics, graduate students, YPs, industry leaders, and entrepreneurs. This event is an ideal way to mix, mingle, network, and make new connections while you challenge yourself in a trivia contest and other surprises. Tip: YP events are excellent opportunities to meet new people (not just YPs) in a casual setting.

STUDENT PAPER COMPETITION (TUESDAY, 18 JULY) AND 3MT (TO BE DETERMINED)
Don’t forget to stop by the Student Paper Competition held on Tuesday, 18 July. You’ll see the final 15-min presentations of the 10 finalists who are competing for the Mikio Takagi Student Prize, which will be presented at the IGARSS Night on Wednesday, 19 July. Also don’t miss the Three-Minute Thesis (3MT) Award, where 10 master’s and doctoral students describe their research in just 3 min to a general audience with only one static slide. The three winners will receive the GRSS Excellence in Technical Communication Student Prize Awards. Tip: Thinking of honing your own elevator pitch for your thesis, project, or startup? Swing by to get some inspiration.

IGARSS NIGHT “SPACE & MAGIC” (WEDNESDAY, 19 JULY)
Did you know that the Space Shuttle Endeavour traveled 122,883,151 mi around Earth and 12 mi through the streets of Los Angeles to its current home in the California Science Center? On Wednesday night, you’ll get the unique opportunity to enjoy locally inspired food and drink while mingling underneath the Space Shuttle Endeavour and being entertained by roaming magicians and jazz music. You’ll also get included museum access where you can learn about local ecosystems and trace the 12-mi journey of Endeavour through Los Angeles. Tip: This year, the IGARSS Night will be a standing dinner, so there will be plenty of opportunities to mix and mingle.

TABLE TENNIS TOURNAMENT (TO BE DETERMINED)
It is a long-standing IGARSS tradition to organize a friendly sports tournament, and this year, it is table tennis. IGARSS will supply the tables, paddles, and the ball; you just have to show up. Tip: Nothing bonds more than a won (or lost) sports game. Apart perhaps from fieldwork.

NASA JET PROPULSION LABORATORY TECHNICAL TOUR (FRIDAY, 21 JULY)
Round out the IGARSS week by securing a spot for a technical tour at NASA’s Jet Propulsion Laboratory (JPL). The tour is already full, but you can put your name down on the waiting list. Tip: As a JPLer myself, I highly recommend the JPL tour. Where else can you visit the center of the universe?

I hope to meet you at IGARSS 2023 in Pasadena!

REFERENCE
[1] M. Burgin, “Letter From the President [President’s Message],” IEEE Geosci. Remote Sens. Mag., vol. 11, no. 1, pp. 6–7, Mar. 2023, doi: 10.1109/MGRS.2023.3243686.
One of the most influential reference resources for engineers around the world. For over 100 years, Proceedings of the IEEE has been the leading journal for engineers looking for in-depth tutorial, survey, and review coverage of the technical developments that shape our world. Offering practical, fully referenced articles, Proceedings of the IEEE serves as a bridge to help readers understand important technologies in the areas of electrical engineering and computer science.
To learn more and start your subscription today, visit
ieee.org/proceedings-subscribe
Taking Artificial Intelligence Into Space Through Objective Selection of Hyperspectral Earth Observation Applications

AGATA M. WIJATA, MICHEL-FRANÇOIS FOULON, YVES BOBICHON, RAFFAELE VITULLI, MARCO CELESTI, ROBERTO CAMARERO, GIANLUIGI DI COSIMO, FERRAN GASCON, NICOLAS LONGÉPÉ, JENS NIEKE, MICHAL GUMIELA, AND JAKUB NALEPA
Digital Object Identifier 10.1109/MGRS.2023.3269979 Date of current version: 30 June 2023
To bring the “brain” close to the “eyes” of satellite missions

Recent advances in remote sensing hyperspectral imaging and artificial intelligence (AI) bring exciting opportunities to various fields of science and industry that can directly benefit from in-orbit data processing. Taking AI into space may accelerate the response to various events, as massively large raw hyperspectral images (HSIs) can be turned into useful information onboard a satellite; hence, the images’ transfer to the ground becomes much faster and offers enormous scalability of AI solutions to areas across the globe. However, there are numerous challenges related to hardware and energy constraints, resource frugality of (deep) machine learning models, availability of ground truth data, and building trust in AI-based solutions. Unbiased, objective, and interpretable selection of an AI application is of paramount importance for emerging missions, as it influences all aspects of satellite design and operation. In this article, we tackle this issue and introduce a quantifiable procedure for objectively assessing potential AI applications considered for onboard deployment. To prove the flexibility of the suggested technique, we utilize the approach to evaluate AI applications for two fundamentally different missions: the Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) [European Union/European Space Agency (ESA)] and the 6U nanosatellite Intuition-1 (KP Labs). We believe that our standardized process may become an important tool for maximizing the outcome of Earth observation (EO) missions through selecting the most relevant onboard AI applications in terms of scientific and industrial outcomes.

INTRODUCTION
Hyperspectral missions have been attracting research and industrial attention due to numerous exciting applications of such imagery that span a multitude of fields, including precision agriculture, surveillance, event detection and tracking, environmental monitoring, and many more [1]. Such imagery captures very detailed information in hundreds of contiguous narrow spectral bands, but its efficient transfer and storage are costly due to its large volume. Additionally, downlinking raw HSIs to the ground for further processing is suboptimal, as, in the majority of cases, only a subset of all available bands conveys important information about remotely sensed objects [2], [3]. Moreover, sending such large-size images is time-consuming (not to mention onerous), thus negatively impacting the response time and mission abilities, especially if undertaking fast and timely actions is of paramount importance, e.g., during natural disasters, and it can induce tremendous data delivery costs. To tackle these issues, bringing the “brain” close to the “eyes” of hyperspectral missions through deploying onboard processing is of interest and has been increasingly researched recently. Onboard algorithms can not only be deployed for data compression, band and data selection, and image quality enhancement, but they can also turn raw pixels into useful information before sending it to Earth, hence converting images into elaborate lightweight actionable items that are easier to transfer. As an example, the ESA’s Φ-Sat-1 mission [4] used onboard deep learning for cloud detection. Although it is the “hello, world” of remote sensing, selecting a specific application, or a set of them, and fitting it into the concept of operations (ConOps) for emerging missions is critically important, and it is necessary for proving the versatility of machine learning-powered payloads. (We may, however,
expect that in the future, satellite designs and ConOps will be derived from user needs and not the opposite.) This was demonstrated in [5], where a machine learning payload for flood monitoring was updated “on the fly” and retrained by exploiting knowledge transferred across two different optical payloads. Such use cases should not only bring real value to the end users, community, and technology, but they also affect satellite design, especially if they are to be solved using recent deep learning advances, which are commonly memory and energy hungry. Unfortunately, there is no standardized, traceable, reproducible, and quantifiable process that can be followed to objectively assess and select the use case(s) for such missions. We address this important research gap.

We have been observing an unprecedented tsunami of AI algorithms for solving EO tasks in a plethora of fields, including precision agriculture, detection of natural disasters, monitoring industrial environmental effects, maritime surveillance, smart compression of HSI, and more [6]. As with any other state-of-the-art processing function, deploying AI onboard satellites is challenging due to the hardware- and energy-constrained environment, acquisition strategies, and data storage [7]. Note that AI is not just deep learning, as people from outside the machine learning field might think. AI was defined by John McCarthy (an emeritus professor at Stanford University) as “the science and engineering of making intelligent machines.” Machine learning, then, is a subfield of AI focused on building computer programs that can improve their operation based on experience and data, and deep learning is one family of techniques toward approaching that. Here, data-driven approaches trained in a supervised way require large amounts of high-quality, heterogeneous, and representative ground truth data to build models. Since the world is not labeled, creating such ground truth datasets is human dependent, time-consuming, and costly, especially if in situ measurements should be performed. Since it is impossible to capture real HSI for emerging missions, as the sensor is not operating in orbit before launch, data-level digital twins have been proliferating to simulate target imagery based on existing HSI by “injecting” atmospheric conditions and noise into the data [8]. This approach is pivotal to quantify the robustness of models against in-orbit acquisition. Also, we need to figure out ways of producing representative enough data so that the performance of a solution can be validated before flight, especially in data-driven approaches. Preflight validation is even more important because “trust” still needs to be built in this field. Unless there are very similar data from another mission (which is unlikely, at least for institutional missions), the best way of producing these data is to use complex instrument/observation simulators fully representative of the expected instrument design. These simulators need, in turn, input data with a higher spatial and spectral resolution (generally from costly aerial campaigns). Therefore, there is a huge effort involved even before the annotation work starts. On the other hand, there exist approaches for synthesizing training data based on limited samples [9] and for training from limited training samples by using transfer learning [10]. Finally, there are strategies, such as semisupervised, self-supervised, and unsupervised learning, which operate over small training samples [11], [12].
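To make the “injection” idea above more tangible, here is a minimal, illustrative sketch of a data-level degradation step: it smooths each spectrum to mimic a coarser instrument response and then adds signal-dependent shot noise plus readout noise. This is an assumption-laden toy, not the simulator of [8]; the `simulate_onboard_acquisition` function, its parameters, and the random stand-in cube are all hypothetical.

```python
import numpy as np

def simulate_onboard_acquisition(cube, photons_per_unit=500.0,
                                 read_noise_std=0.01, spectral_fwhm_bands=3.0):
    """Degrade a calibrated HSI cube (H x W x B, reflectance in [0, 1]) to
    mimic a noisier in-orbit acquisition. Illustrative toy, not a real
    instrument simulator."""
    rng = np.random.default_rng(42)

    # 1) Spectral smoothing: emulate a coarser spectral response by
    #    convolving every pixel's spectrum with a Gaussian kernel.
    sigma = spectral_fwhm_bands / 2.355          # FWHM -> standard deviation
    half = 3 * int(np.ceil(sigma))
    taps = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (taps / sigma) ** 2)
    kernel /= kernel.sum()
    smooth = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 2, cube)

    # 2) Shot (photon) noise: Poisson-distributed, signal dependent.
    noisy = rng.poisson(np.clip(smooth, 0.0, None) * photons_per_unit)
    noisy = noisy / photons_per_unit

    # 3) Additive readout noise, then clip back to a valid reflectance range.
    noisy = noisy + rng.normal(0.0, read_noise_std, size=noisy.shape)
    return np.clip(noisy, 0.0, 1.0)

# Random stand-in for a real 128 x 128 x 150 hyperspectral cube.
cube = np.random.default_rng(0).uniform(0.0, 1.0, (128, 128, 150))
degraded = simulate_onboard_acquisition(cube)
print(degraded.shape, float(np.abs(degraded - cube).mean()))
```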
Although recent advances in data-driven algorithms for EO have changed the way we process remotely sensed data on the ground, designing and validating onboard machine learning models is strongly affected by several factors independent of the EO mission. They relate to the tradeoff among an algorithm’s complexity and hardware constraints, the availability of ground truth datasets, and the characteristics of the considered application. Such issues impact the entire EO mission, due to the technological difficulty of developing onboard AI applications. These constraints must be carefully considered while planning a mission. To the best of our knowledge, there are no quantifiable approaches allowing the assessment of potential applications in an unbiased way.

CONTRIBUTION
In this article, we approach the problem of the quantifiable selection of EO applications to be deployed onboard an AI-powered imaging satellite of interest. We aim at demystifying this procedure and suggest an objective way of evaluating use cases that can indeed benefit from in-orbit processing. We introduce a set of mission-specific and mission-agnostic objectives and constraints that, in our opinion, should be thoroughly analyzed and assessed before reaching a final decision on the use case(s) to be implemented. The flexibility of our evaluation procedure is illustrated based on two example case studies of fundamentally different missions. Overall, our contributions revolve around the following points:
◗ We present a synthetic review of potential EO use cases that may directly benefit from onboard hyperspectral data analysis in an array of fields, ranging from precision agriculture, surveillance, and event and change detection to environmental monitoring (the “HSI Analysis: A Review” section). We discuss why bringing such data analysis (here, in the form of AI) onboard a satellite may become a game changer for those downstream tasks.
◗ We introduce a procedure for the objective, quantifiable, and interpretable selection of onboard data analysis applications that can be utilized for any EO mission (the “Objective and Quantifiable Selection of Onboard AI Applications” section). Although we focus on AI-powered solutions, our approach is generic enough to be seamlessly applied without any particular family of solutions in mind (i.e., the same logic can be followed for other algorithms, as well).
◗ We exploit our evaluation procedure to assess potential AI applications for two EO missions: CHIME and Intuition-1 (the “Case Studies” section). Therefore, we illustrate the flexibility of the suggested approach in real-life missions and show that mission-agnostic objectives and constraints may be evaluated once and conveniently propagate as input to other missions to ease the assessment process. Also, we show that our technique allows practitioners to simulate various mission profiles through the weighting scheme, which affects the impact of specific objectives and constraints on the overall use case score; a minimal sketch of this weighting idea follows the list.
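The following minimal sketch illustrates the weighting idea referenced in the last bullet: candidate use cases are scored against a handful of criteria, and different mission profiles reweight those criteria to produce different rankings. The criteria, scores, weights, and use case names are invented for illustration; the actual objectives and constraints are defined later in the article.

```python
# Minimal sketch of weighted use case scoring; all numbers are invented.
CRITERIA = ["scientific_impact", "data_availability", "model_maturity",
            "onboard_feasibility"]

# Per-use-case criterion scores on a common 0-5 scale (illustrative).
USE_CASES = {
    "flood detection":   [5, 4, 4, 3],
    "methane detection": [5, 2, 3, 2],
    "soil parameters":   [4, 4, 3, 4],
}

# Two hypothetical mission profiles expressed as criterion weights.
PROFILES = {
    "science-driven":  [0.4, 0.2, 0.2, 0.2],
    "compute-limited": [0.2, 0.2, 0.2, 0.4],
}

def rank(profile):
    """Return use cases sorted by their weighted score under a profile."""
    weights = PROFILES[profile]
    scored = {name: sum(w * s for w, s in zip(weights, scores))
              for name, scores in USE_CASES.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

for profile in PROFILES:
    print(profile, rank(profile))
```

Changing only the weights reorders the candidates, which is exactly how different mission profiles can reuse the same (mission-agnostic) criterion scores.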
We brought together a unique team of professionals with different backgrounds, including advanced data analysis, machine learning and AI, space operations, hardware design, remote sensing, and EO to target a real-life challenge of deploying AI in space. We believe that the standardized procedure of selecting appropriate AI applications for emerging EO missions established in this article can be an important step toward more objective and unbiased mission design. Ultimately, we hope that our evaluation will maximize the number of successful EO missions by pruning AI applications that are not mature enough to be implemented and would not bring important commercial and scientific value to the community.

ARTICLE STRUCTURE
This article is structured as follows. The “HSI Analysis: A Review” section presents a concise yet thorough review of applications that may be tackled using hyperspectral remote sensing. In the “Objective and Quantifiable Selection of Onboard AI Applications” section, we introduce our objective and quantifiable procedure for selecting a target AI application for onboard implementation, in which we consider both mission-independent and mission-specific objectives and constraints affecting the prioritization of each application. The evaluation procedure is deployed to analyze two missions (CHIME and Intuition-1) in the “Case Studies” section. The “Conclusions” section provides conclusions. Finally, Table 1 gathers the abbreviations used in this article.
HSI ANALYSIS: A REVIEW
In this section, we provide an overview of potential EO use cases that can be effectively tackled through hyperspectral data analysis and for which such imagery can bring real value. Although we are aware that there are very detailed and thorough review papers on HSI analysis in EO, including those by Li et al. [13], Audebert et al. [1], and Paoletti et al. [6], we believe that our overview serves as important background and concisely consolidates the current state of the art in the context of the deployment of AI techniques onboard the CHIME and Intuition-1 missions.

THE ROAD MAP
We focus on the following fields, which can directly benefit from the advantages of onboard hyperspectral data analysis. They include
◗ agricultural applications (the “Agricultural Applications” section)
◗ monitoring plant diseases and water stress (the “Monitoring Plant Diseases and Water Stress” section)
◗ detection and monitoring of floods (the “Detection and Monitoring of Floods” section)
◗ detection of fire, volcanic eruptions, and ash clouds (the “Detection of Fire, Volcanic Eruptions, and Ash Clouds” section)
◗ detection and monitoring of earthquakes and landslides (the “Detection and Monitoring of Earthquakes and Landslides” section)
◗ monitoring industrially induced pollution (the “Monitoring Industrially Induced Pollution” section), further split into dust events (the “Dust Events” section), mine tailings (the “Mine Tailings” section), acidic discharges (the “Acidic Discharges” section), and hazardous chemical compounds (the “Hazardous Chemical Compounds” section)
◗ detection and monitoring of methane (the “Detection and Monitoring of Methane” section)
◗ water environment analysis (the “Water Environment Analysis” section), including marine litter detection (the “Marine Litter Detection” section), detection of water pollution (the “Detection of Water Pollution” section), detection of harmful algal blooms (HABs) and water quality monitoring (the “Detection of HABs and Water Quality Monitoring” section), and maritime surveillance (the “Maritime Surveillance” section).

AGRICULTURAL APPLICATIONS
The nature of activities in the agricultural sector has changed as a result of broadly understood human activity concerned with the rapidly growing population, environmental pollution, climate change, and depletion of natural resources. The premise of precision agriculture is effective food production with a reduced impact on the environment. To achieve this goal, we need to be able to assess soil quality, irrigation, fertilizer content, and seasonal changes that occur in the ecosystem. Estimating the yields planned for a given region may also convey important information related to the effectiveness of implemented practices [14]. Remote sensing may become a tool enabling the identification of soil and crop parameters, due to its scalability to large areas. Approaches using multispectral images (MSIs) are mainly based on the content of chlorophyll and, on that basis, the estimation of other parameters [15], [16], [17]. Hyperspectral imaging enables capturing more subtle
characteristics of areas, including various abnormalities and plant diseases [18]. It provides a tremendous amount of spectral–spatial information, which may be used to estimate the volume of crops and soil quality [19] as well as to predict the effectiveness of fertilization [20], analyze plant growth [21], and extract vegetation indices [21]. These coefficients can be exploited to estimate and monitor biomass and assess soil composition and moisture [22]. There are lots of in situ campaigns focusing on the analysis of HSIs captured by manned and unmanned airplanes [14], [21]. Such measurements are necessary to develop and verify the approaches that will be deployed onboard an imaging satellite [8]. Such airborne data have been recently used in the HYPERVIEW Challenge organized by KP Labs, the ESA, and QZ Solutions, which was aimed at estimating soil parameters from HSIs. (For more details, see https://platform.ai4eo.eu/seeing-beyond-the-visible. The challenge attracted almost 160 teams from across the globe, and the winning solution will fly on the Intuition-1 satellite mission.)
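As a concrete illustration of the vegetation indices mentioned above, the sketch below derives the classic NDVI and a red-edge chlorophyll proxy from a hyperspectral cube by averaging the bands nearest the relevant wavelengths. The wavelength grid, band-window width, and random cube are placeholders; a real pipeline would use the instrument’s actual band centers.

```python
import numpy as np

def band_at(cube, wavelengths, target_nm, width_nm=10.0):
    """Average the bands whose centers fall within +/- width_nm of target_nm."""
    mask = np.abs(wavelengths - target_nm) <= width_nm
    return cube[..., mask].mean(axis=-1)

def vegetation_indices(cube, wavelengths):
    red = band_at(cube, wavelengths, 670.0)
    red_edge = band_at(cube, wavelengths, 710.0)
    nir = band_at(cube, wavelengths, 860.0)
    eps = 1e-8  # avoid division by zero over dark pixels
    ndvi = (nir - red) / (nir + red + eps)
    ci_red_edge = nir / (red_edge + eps) - 1.0  # chlorophyll-related proxy
    return ndvi, ci_red_edge

# Placeholder cube: 100 x 100 pixels, 150 bands spanning 400-1,000 nm.
wavelengths = np.linspace(400.0, 1000.0, 150)
cube = np.random.default_rng(1).uniform(0.0, 1.0, (100, 100, 150))
ndvi, ci = vegetation_indices(cube, wavelengths)
print(ndvi.shape, float(ndvi.mean()), float(ci.mean()))
```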
TABLE 1. THE ABBREVIATIONS USED IN THIS ARTICLE.

ABBREVIATION   MEANING
AI             Artificial intelligence
AIU            AI unit
AMD            Acid mine drainage
AVIRIS         Airborne Visible/Infrared Imaging Spectrometer
CHIME          Copernicus Hyperspectral Imaging Mission for the Environment
CNN            Convolutional neural network
ConOps         Concept of operations
DPU            Data processing unit
EO             Earth observation
ESA            European Space Agency
GSD            Ground sampling distance
HAB            Harmful algal bloom
HSI            Hyperspectral image
MSI            Multispectral image
NIR            Near infrared
SAR            Synthetic aperture radar
SWIR           Short-wave infrared
TIR            Thermal infrared
TRL            Technology readiness level
UAV            Unmanned aerial vehicle
VAU            Video acquisition unit
VNIR           Visible and NIR

MONITORING PLANT DISEASES AND WATER STRESS
Plant diseases and parasites have a serious impact on the production of cereals and, subsequently, the functioning of the economy and food safety [23]. Currently, the assessment of vegetation conditions is carried out using a range of in-field methods, which lack scalability [24].
Here, remote sensing, which captures detailed plant characteristics by using, e.g., hyperspectral sensors, brings the important practical advantages of being noninvasive and inherently scalable over larger areas. In the case of ground-based methods, a hyperspectral camera is commonly placed at a distance of about 30 cm above the test object, with constant lighting conditions [25]. Since various parts of the available spectrum are important in different use cases, the raw hyperspectral data are often preprocessed with band selection methods [26]. To automate the analysis of highly dimensional hyperspectral data, an array of machine learning methods has been introduced for monitoring plant diseases [18]. Deep learning techniques constitute the current research focus in the field, with deep convolutional neural networks (CNNs) playing the leading role [27], [28], [29]. On the other hand, there are approaches that strongly benefit from handcrafted features and vegetation indices that may be extracted from satellite MSIs [23].

The impact of global climate change is manifested by drought in terrestrial ecosystems. Drought is one of the main factors of environmental stress; therefore, estimating the water content in vegetation is one of the critically important practical challenges faced today [30]. Based on this information, it is possible to monitor the physiology and health of forests and crops [31] by extracting an array of quantifiable drought-related parameters. Such parameters include, but are not limited to, the canopy water content (reflecting water conditions at the leaf level and information related to the structure of the canopy, based on remote sensing data [32]), the leaf equivalent water thickness (estimating the water content per unit of a leaf’s area [33]), and the live fuel moisture content [34]. There are, indeed, in-field methods that may allow us to quantify such parameters, but they are not scalable; hence, they are infeasible for large areas of interest [30].

DETECTION AND MONITORING OF FLOODS
Between 1995 and 2015, flooding affected more than 2.2 billion people, representing 53% of all people affected by weather-related disasters. Detecting and monitoring floods in vulnerable areas is extremely important in emergency activities to maintain the safety of the population and the diversity of the underlying ecosystems [35]. Today, the most important source of information about floods is satellite data; both MSI and synthetic aperture radar (SAR) data are used to detect and determine floods’ extent [35], [36]. A key element in satellite-based flood monitoring is the need for rapid disaster response. Detection and assessment of floods is difficult due to the environmental impact of the water present in affected areas [37]. SAR has great potential to monitor flood situations in near real time, due to its ability to monitor Earth’s surface in all weather conditions [36]. Such data have been successfully exploited in various deep learning-powered solutions toward the detection of floods [36], [38]. Here, temporal information captured within time series data plays a key role in quantifying and understanding the evolution of flood maps. Detection of flooding areas is also performed based on MSIs, commonly with the use of the Normalized Difference Water Index (NDWI) [35]. Interestingly, data-driven solutions may benefit from the public WorldFloods database, which currently contains 422 flood maps [35].
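For reference, the NDWI contrasts green and NIR reflectance, and thresholding it yields a rough water mask. The sketch below is a minimal illustration assuming McFeeters’ formulation and a zero threshold; operational flood mapping tunes the threshold per scene and adds substantial post-processing.

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """McFeeters' NDWI = (green - NIR) / (green + NIR); water pixels
    typically have NDWI above ~0. Inputs are reflectance arrays."""
    eps = 1e-8  # guard against division by zero
    ndwi = (green - nir) / (green + nir + eps)
    return ndwi, ndwi > threshold

# Illustrative stand-ins for coregistered green and NIR reflectance bands.
rng = np.random.default_rng(2)
green = rng.uniform(0.0, 0.4, (256, 256))
nir = rng.uniform(0.0, 0.4, (256, 256))
ndwi, water = ndwi_water_mask(green, nir)
print(f"water fraction: {water.mean():.2%}")
```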
DETECTION OF FIRE, VOLCANIC ERUPTIONS, AND ASH CLOUDS
Fires are an example of a natural threat that destroys natural resources and causes extensive socioeconomic damage. The nature of a fire is primarily influenced by the type, flammability, and amount of fuel present in an area. Climate change and natural conditions, such as topography and wind, are also important factors here. Historically, the estimation of a fire area was carried out using field methods based on GPS data. However, only the perimeter of the area can be determined, and this approach is limited by the difficulty in reaching an area covered by an active fire and the risk of unevenness of burning areas [39]. The nature of a fire also changes over time, due to the fire’s effect of increasing the volume of charred vegetation and changing the temperature and level of humidity. Importantly, the decreasing amount of chlorophyll as a result of combustion causes changes in the characteristics of the spectral signature of acquired MSIs/HSIs [40]. This information allows us to assess whether an area is experiencing an active fire (because there are changes over time), the area is burned (because the chlorophyll content is low), and there is a risk of the active area redeveloping (partial fuel burnout) [41].

The monitoring and prevention of fires in Europe is carried out by the European Forest Fire Information System, which is part of the Copernicus EO program, in which the monitoring process exploits the 13-band MSI data captured by the Sentinel-2 mission. The key source of information here is the red-edge band, which is one of the best descriptors of chlorophyll, whereas the assessment of a fire condition is most often based on vegetation indices [40]. Such analysis can, however, be automated using a variety of machine learning approaches operating on MSIs and HSIs [42], [43]. Accurate detection of active areas of fire, obtained in a safe way, is an important advantage over methods based only on the perimeter of an area, including both active and burned parts [40]. Moreover, the application of remote sensing techniques allows us to monitor fire progress over time [42] and to perform early fire detection [44]. The use of airborne methods in this situation is limited due to smoke and high temperatures, to which drones equipped with sensors are sensitive. The wide scope and scalability of satellite imaging is also important to identify active hot spots. Unmanned aerial vehicles (UAVs) can be additional sources of information, but they are supplemental due to their lack of continuity in time [40].
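Although the text does not name a specific index, one widely used way to quantify the chlorophyll- and moisture-related spectral changes described above is the normalized burn ratio (NBR) and its prefire/postfire difference (dNBR). The sketch below assumes coregistered NIR and SWIR reflectance pairs and an illustrative severity threshold.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized burn ratio: healthy vegetation is high in NIR and
    low in SWIR, so NBR drops sharply after burning."""
    return (nir - swir) / (nir + swir + 1e-8)

def burned_area_mask(pre, post, threshold=0.27):
    """dNBR = NBR(prefire) - NBR(postfire); values above the threshold
    (0.27 is a commonly cited moderate-severity cutoff) flag burned pixels."""
    dnbr = nbr(*pre) - nbr(*post)
    return dnbr, dnbr > threshold

# Illustrative pre- and post-fire (NIR, SWIR) reflectance pairs.
rng = np.random.default_rng(3)
pre = (rng.uniform(0.3, 0.6, (128, 128)), rng.uniform(0.1, 0.3, (128, 128)))
post = (rng.uniform(0.1, 0.4, (128, 128)), rng.uniform(0.2, 0.5, (128, 128)))
dnbr, burned = burned_area_mask(pre, post)
print(f"burned fraction: {burned.mean():.2%}")
```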
Among the threats induced by volcanic ash (a direct result of an eruption), we can point to air and water pollution, climate change [45], and impacts on aviation safety; therefore, monitoring and forecasting the location of volcanic ash clouds is critically important [46]. Additionally, falling dust, containing silicate particles, causes diseases of the respiratory system and contaminates drinking water [45]. Satellite remote sensing techniques are among the most widely used tools for monitoring and locating volcanic ash clouds [46]. Their advantage over terrestrial methods is that there is no need to install and maintain measuring instruments in hard-to-reach, often dangerous locations [45]. An example of an existing solution is HOTVOLC, a Web geographic information system based on data from the Spinning Enhanced Visible and Infrared Imager aboard the Meteosat geostationary satellite. The system exploits the thermal infrared range (8–14 μm) to distinguish silicate particles from those of water and sulfuric acid [45]. Utilizing the thermal infrared (TIR) band also allows us to determine the height of an ash cloud based on temperature, which is critical information for aviation safety [45]. The use of a geostationary satellite enables monitoring volcanic activity and dust clouds 24 h a day [45], but this monitoring can also be performed cyclically using the Moderate Resolution Imaging Spectroradiometer, in which both the visible light bands [47] and TIR [48] are used. There exists a correlation of MSI data with various quantifiable indicators for the assessment of volcanic activity [49]. End-to-end monitoring solutions commonly exploit other data modalities, such as SAR, to detect the deformation of volcanoes that precedes eruptions [50]. Afterward, the detection of volcanic eruptions may be effectively performed using classic [48] and deep [51] machine learning techniques operating on MSI data. An important challenge concerned with this approach is the rapid darkening of lava, even though its temperature is still high, which may result in an incorrect volcano rating status if a satellite image is captured too late. Extending the analysis to capture high-temperature information available from short-wave infrared (SWIR) allows us to classify volcanic activity more precisely [52]. Using spectral information, the presence of sulfur dioxide (contained in volcanic eruptions) can also be identified at an early stage of an eruption [52]. Finally, the detection of volcanic ash clouds by using the 400–600-nm bands from HSI can enable us to determine the activity of a volcano, due to the possibility of separating ash clouds from water clouds [47] (hence, satellite imaging may be useful even in the presence of clouds). Changes in the accuracy of volcanic ash cloud detection are also observed due to changes in meteorological conditions induced by seasonality during the year [53].

DETECTION AND MONITORING OF EARTHQUAKES AND LANDSLIDES
Detecting an area affected by an earthquake allows us to estimate the scale of damage and plan rescue activities [54], and precise information about the effects of an earthquake obtained quickly increases the effectiveness of mitigating their impact [55]. Monitoring also allows for preventive actions [56]. The extent of the damage caused by an earthquake often exceeds the capacity of ground equipment, due to the dangerous and difficult access to some areas and the risk of further negative phenomena related to infrastructure damage and the limited operating time of the devices. The major drawback of using terrestrial tools working at the microscale is the limited area that can be assessed in a given time; therefore, macroscale imaging with satellites can increase the scalability of monitoring solutions.

Most often, the purpose of detecting damage caused by an earthquake is to identify landslides and affected buildings. In the case of landslides, the assessment is carried out using tools that are divided into three categories: 1) analysis of landslide image features by using optical data, including remote airborne and satellite sensing data [54], [57]; 2) detection of surface deformation and deposition due to landslides by using radar data [58]; and 3) fusion of optical and radar data [59], [60]. Detection of damaged buildings can be carried out based on the geometrical features of the five main types of damage: 1) sloped layers, 2) pancake collapses, 3) debris heaps, 4) overturn collapses, and 5) overhanging elements [61]. These elements can be identified using ground methods, but the use of MSIs and HSIs allows for executing this procedure in a much shorter time, given that a sufficiently high spatial resolution is maintained [62]. Damage to buildings, especially in highly urbanized areas, must be mapped in a short time to improve (and even enable) rescue operations. For historical buildings and networks of tunnels, it is pivotal to efficiently map the damage. The main characteristics of urban rubble to be assessed encompass location, volume, weight, and the type of collapsed building materials, including hazardous components (e.g., asbestos), which can have a major impact on further operations [59]. Due to the large spectral and spatial heterogeneity of urbanized areas, classic methods may become insufficient due to changes, e.g., caused by attempts to remove debris. Thus, capturing subtle spectral differences for such areas may play a key role in the urban environment.

Current research concerning the analysis of optical data acquired in areas affected by earthquakes focuses on MSIs. The use of 450–510- (blue), 520–580- (green),
655–690- (red), and 780–920-nm [near-infrared (NIR)] bands was shown to be sufficient both for detecting landslides and for assessing the damage using deep learning [61]. The detection stage is often preceded by image enhancement using, e.g., pan sharpening [54] and by reducing the dimensionality of the data [55]. Nevertheless, most of the methods based on CNNs skip preprocessing [57], [63] and benefit solely from representation learning. Utilizing pretrained networks of various architectures can help us tackle the important issue of limited and imbalanced ground truth optical data that could be used to train such large-capacity learners from scratch [54], [62]. Such images are commonly obtained from public sources and manually labeled. Also, one earthquake may be imaged by different satellites. For example, the images used in [61] and [62] from two satellites (Geoeye-1 and QuickBird) captured the damage caused by the 2010 Haiti earthquake, but the reported methods operated on comparable bands.
MONITORING INDUSTRIALLY INDUCED POLLUTION
Industrially induced pollution is a threat to terrestrial and aquatic ecosystems. The impact and nature of this human–nature interaction can be determined by detecting various phenomena and monitoring them over time. The impact of industrial pollution can be divided into several groups: in the case of air, there are dust events (the "Dust Events" section) [64], whereas in inland areas, we commonly monitor mines (the "Mine Tailings" section) [65] as well as water acidification (the "Acidic Discharges" section) [66]. Hazardous chemicals should also be observed in agricultural areas, due to their interaction with vegetation processes (the "Hazardous Chemical Compounds" section) [67], [68]. In each of these cases, it is advisable to detect a phenomenon and monitor it over time in a way that enables determining its qualitative and quantitative impact on the environment.
DUST EVENTS
Air quality affects the health of humans and animals; therefore, measures are taken to monitor the concentrations of dust particles in the air. In China, it is estimated that air pollution contributes to 1.6 million deaths annually, accounting for about 17% of all deaths in the country [69]. The key air assessment parameter is dust with a diameter below 2.5 μm (PM 2.5) [64], and appropriately responding to the real-time PM 2.5 value can significantly reduce harmful effects on the human respiratory and circulatory systems [70] through, e.g., avoiding overexposure and striving to reduce the level of pollution [71]. Estimating the spatial distribution of the PM 2.5 concentration from satellite data was shown to be feasible using deep learning in both low- and high-pollution areas [64]. Importantly, the level of PM 2.5 depends on seasonality [72]; the process of gathering ground truth data should reflect that to enable training well-generalizing models.
The summer and spring periods are often characterized by low concentration values, while high concentrations are observed during the winter period [70]. Additionally, underestimation of the PM 2.5 value may be caused by local weather conditions, such as snow and rain (and clouds), as well as wind in coastal locations [64]. Also, the level of urbanization of an area translates into the concentration value and may be a source of significantly larger errors when estimating PM 2.5 [64], [73]. The use of satellite images allows for estimating pollutant concentrations, while solutions based on geostationary satellites enable determining the spatial distribution of concentrations every several minutes, thus indicating sudden increases in the PM 2.5 value [64]. Extending optical data with natural and social factors, e.g., traffic information, directly translates into an improvement in the accuracy of concentration mapping in urban conditions and of monitoring over time [71], [73]. The use of remote sensing also enables the estimation of pollutant concentrations in rural areas, where networks of ground sensors are absent or extremely limited [69]. Validation of estimation algorithms can be done based on values obtained by ground sensors [74], an example of which is the Aerosol Robotic Network optical depth network [70], [75]. Data collected in this way allow for effective verification in daily, monthly, and yearly cycles [64], [70].
MINE TAILINGS
Mining activities have a strong environmental impact, both in underground and open-pit mining. A typical example of such an impact is acid mine drainage (AMD) and acid rock drainage [76]. Mine wastes are the main difficulty in the rehabilitation of former mining sites, and they have a negative impact on soil and water ecosystems, due to their toxicity [65]. The challenge is also to accurately determine their influence on the environment, which requires monitoring and systematic data collection over time [77]. The cause of the formation of AMD is sulfide minerals, which are oxidized through the action of atmospheric oxygen and water, resulting in the release of ions such as H⁺, Fe²⁺, and SO₄²⁻ [78]. The product of the reaction is sulfuric acid [79], which lowers the natural pH of Earth's surface [78]. A significant decrease in pH causes further reactions, which result in the release of metals and metalloids, such as Fe, Al, Cu, Pb, Cd, Co, and As, and sulfates from the soil. The released heavy metals penetrate soil, aquifers, and the biosphere, accumulating in vegetation and thus posing a threat to humans and animals and increasing the potential toxicity for agriculture, aquifers, and the biosphere [80]. The mapping of an area corresponding to AMD, as well as other minerals, and the estimation of the pollution level are carried out using in-field, airborne [77], and satellite [65], [81] methods. The use of remote sensing to estimate pollution maps using MSIs [65], [78] and HSIs [77] requires preparing the reference data, the source of
which are ground measurements taking into account, among others, acidity, X-ray fluorescence, and reflection spectrometry [77], [78]. Another method of identifying spectral signatures in remotely sensed images is the use of known spectral signatures [82]. Interestingly, analysis of satellite and airborne data combined with in situ measurements enables the identification of bands containing spectral signatures characteristic of chemical reactions involving Fe³⁺, which result in the formation of sulfuric acid [79]. The suggested wavelength ranges are 560–782 nm [76] and 700–900 nm [65] when processing the data collected from the Sentinel-2 mission [78], which, due to the higher resolution of its visible and NIR (VNIR) bands, is more explored in the area of ferrous minerals than, for example, data from the Landsat program. It was shown that mineral mapping depends on the quality of the image spectra and the season of the year [77], and it is influenced by the spatial resolution of the images [with a 15-m ground sampling distance (GSD) giving sensible results] [82]. It can also exploit machine learning techniques [83].
ACIDIC DISCHARGES
Chemical reactions of sulfide minerals in mine areas, which lower the pH of the soil, affect the quality of nearby water [84]. The ultimate effect is a lowered pH of mine waters, which directly poses a threat to the health of miners [77]. Groundwater and surface water are important resources for the health of humans and animals; therefore, their acidification (affecting surrounding rivers, water reservoirs, and soil) is a serious threat [83], [85]. Mine water purification and reclamation involve an expensive process carried out through the oxidation of sulfide minerals, resulting in minerals that require further disposal [84]. Timely detection of the ecohydrological hazard in a given area, therefore, contributes to environmental safety [79], [85], not only due to the assessment of water quality but also thanks to the assessment of conditions in local mines [86]. It is also possible to forecast the quality of surface water based on the monitoring of mine waste by using HSI [79], [87]. As in the case of soil analysis in former mining areas, the assessment of the impact of mining on water may be carried out by combining data collected through in situ measurements with remotely sensed data [85], [88], [89] and airborne methods [82]. Based on pixel classification methods and those exploiting water rating indicators [90], hydrological maps are created that show the spatial distribution of mineral compounds and allow us to prepare a model for the transport of chemicals [91]. The spectral analysis of water is commonly performed using VNIR (350–1,000 nm), which may mark the occurrence of mine water in rivers [83]. In this context, mapping water quality is difficult due to vegetation in rivers and reservoirs (and due to the varying spatial resolution of HSIs) [66]. The spectral characteristics of acidic water measured with a spectrometer revealed the spectral effect of green vegetation, similar to the effect of water depth and transparency [66].
An acidified environment favors the development of some species of algae, which also causes the water color to change over time [88], [92]. Finally, seasonality is important in terms of water acidification, due to changing weather and plant conditions [89]; hence, acquiring ground truth information is both spatially and temporally challenging [87].
HAZARDOUS CHEMICAL COMPOUNDS
Soil is an open system that interacts with the atmosphere, hydrosphere, and biosphere through the exchange of matter and energy [68]. The structure and quality of soil are strongly related to the mineral composition of the area and to its contamination [93]. Pollutants penetrate vegetation, thus posing a direct threat to crops and, indirectly through accumulation, to humans and animals [68]. The most serious soil contamination is related to heavy metals, and their sources include the progressive urbanization and industrialization of larger areas as well as mining [68]. Therefore, elaborating mineral contamination maps is an essential element of risk level estimation and monitoring to manage food safety [68]. A map of heavy metals can be obtained on the basis of soil and vegetation samples subjected to X-ray fluorescence spectrometry and spectral analysis in the range of visible light (400–700 nm) and NIR (700–2,500 nm) [68]. As in the case of AMD, the analysis of samples and determination of substance concentrations is not sufficient to assess the spatial distribution of heavy metals, but it may be the basis for the preparation of ground truth data exploited by data-driven mapping algorithms [93], which benefit from MSIs [93], [94] and HSIs [95]. Unpolluted and contaminated soils have different spectral properties in the bands corresponding to heavy metals, which may be the basis for the classification of an area as potentially toxic [93]. Due to the different bioavailability and toxicity of chemical substances, mapping the spatial distribution of these substances is more important than determining their exact concentrations [68]. In the case of data for areas contaminated with heavy metals, increasing values of the spectral reflectance in 500–780 nm and 1,200–2,500 nm, and decreasing values in 780–900 nm, are observed [93]. Satellite images captured, e.g., using Hyperion and Landsat-8, as well as those obtained by airborne methods [95], allow for detecting heavy metals by using AI [92], [93]. An important element influencing the detection process is the concentration of metals [94], which is also influenced by environmental factors, such as soil moisture content and surface roughness [68]. High concentrations of heavy metals occur most often in mine and slag areas [93] and in areas rich in organic matter [94]. On the other hand,
concentrations are lower in arable lands. HSIs can help us detect lower concentrations and allow for the monitoring of subtle changes in the environment [67]. Finally, it is possible to optimize HSI data for noise suppression and amplification of the signals of the metal of interest [67].
DETECTION AND MONITORING OF METHANE
Methane is one of the most important greenhouse gases, with a strong climate impact [96], [97]: its global warming potential is 28 times greater than that of carbon dioxide, due to its significant influence on the global radiation balance [98]. This phenomenon causes changes in ecosystems that can be estimated by determining the value of the methane concentration [99]. We have been observing methane from both natural [99] and anthropogenic sources [100], such as industry [101], agriculture [102], and landfills [103]. The detection and monitoring of methane can, thus, form the basis for managing anthropogenic sources to reduce its emission [98]. These tasks are, however, challenging due to the dependence on the shape and size of methane plumes at specific sources, such as oil and gas infrastructure, warehouses, and pipelines [104]. Remote sensing may have the potential to detect methane leaks so that they can be repaired, but for this purpose, high-resolution [105] and cyclic observations are required [106]. Methane absorption is observed in the NIR spectral range (780–3,000 nm) and midinfrared (3,000–5,000 nm), particularly at 1,650, 2,350, 3,400, and 7,700 nm [96]. Mapping the emission of methane plumes is commonly associated with SWIR range analysis (1,600–2,500 nm), due to the two absorption bands present in this range (the weaker at 1,700 nm and the stronger at 2,300 nm) [101]. Although most sensors are not optimized for gas analysis, it is possible to map an observed area while benefiting from known spectral signatures [101]. Here, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)-C and AVIRIS-NG airborne sensors manifested agreement of the determined emission value with in situ measurements [108]. Also, a comparison of the Hyperion and airborne AVIRIS-C cameras shows that both present similar plume morphology [109], which indicates the possibility of using orbital tools to continuously monitor gas infrastructure. A similar comparison of the detection of methane plumes, carried out between simulated data from, e.g., the Hyperspectral Precursor of the Application Mission and CHIME sensors, and data from AVIRIS-NG, suggests the feasibility of detecting point sources by using satellite sensors [105]. This is important due to the possibility of reducing the costs of repeated image acquisition
by using a sensor located on a satellite, which offers enormous scalability [106] together with the ability to track the temporal variability of emissions [108]. Methane detection directly benefits from sensors' spatial resolution, as those with a resolution coarser than 1 km are not sufficient to locate the plumes [105]. Finally, the detection of methane in an oil shale extraction area was verified using a thermal camera [96].
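As a rough illustration of how plume mapping from SWIR cubes is often bootstrapped, the sketch below applies a classic per-pixel matched filter. The methane target signature and the background statistics are assumptions here: in practice, the signature would come from radiative transfer modeling, and none of the cited works is implied to use exactly this formulation.

```python
import numpy as np

def matched_filter_scores(cube: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Per-pixel matched filter, a common baseline for methane plume mapping.

    cube:   (H, W, B) SWIR radiance cube (e.g., the 1,600-2,500-nm range).
    target: (B,) methane absorption signature; in practice, derived from a
            radiative transfer model rather than hardcoded.
    Returns an (H, W) map of scores (higher = more plume-like).
    """
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(np.float64)
    mu = x.mean(axis=0)                   # background mean spectrum
    cov = np.cov(x, rowvar=False)         # background covariance estimate
    cov += 1e-6 * np.eye(b)               # regularize for a stable inversion
    cov_inv_t = np.linalg.solve(cov, target)
    scores = (x - mu) @ cov_inv_t / np.sqrt(target @ cov_inv_t)
    return scores.reshape(h, w)
```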
WATER ENVIRONMENT ANALYSIS
Water is one of the basic elements of all ecosystems; therefore, its quality is an important factor that affects humans, animals, and vegetation. Water pollution may have a different nature and source, including littering (the "Marine Litter Detection" section) and oil leaks and spills (the "Detection of Water Pollution" section). Both are caused by human activity during a relatively short period of time; thus, their detection, evaluation, and monitoring are of utmost importance. Since the ocean makes up 70.8% of Earth's surface, assessing the scale of the phenomenon of moving pollutants in marine waters requires the use of remote sensing methods. The quality of water may also be affected by algal blooms (the "Detection of HABs and Water Quality Monitoring" section), and oceanic ecosystems may be influenced by coastal and inland pollution, which should be monitored as well. Apart from maintaining water quality, ensuring the security of water can be supported with EO applications (the "Maritime Surveillance" section).
MARINE LITTER DETECTION
Floating marine litter, such as damaged fishing nets, plastic bottles and bags, wood, rubber, metals, and even shipwrecks, involves hazardous objects that significantly affect the environment. It is estimated that 5 to 13 million tons of litter ended up in the marine environment in 2010 [110], whereas in 2016, it was 19 to 23 million tons of plastic debris [111]. It is predicted that by 2030, the mass of debris discharged into the marine environment could reach 53 million tons [111]. Such contaminants are characterized by a significant variation in their composition [112], which is a challenge in the process of their detection and tracking [113]. The location of macroplastic, which is often accumulated by processes such as river plumes, windrows, oceanic fronts, and currents, forms the basis for further actions to remove it [114]. Preliminary works suggest that the debris is similar in both size and shape across different plastic islands [113]. Nevertheless, some spectral variation was observed, which was likely caused by differences in the optical properties of the objects, the level of immersion in the water, and the intervening atmosphere. Identification of plastic materials may be associated with the use of unique absorption features, which manifest in the range of 800–1,900 nm for polymers [115]. The spatial resolution of the sensor is of great importance in the case of contamination detection because, with the increase of a pixel's area, the possibility of detecting smaller debris decreases [116]. Data preprocessing in the detection of plastic in the ocean is commonly minimal (without noise reduction and normalization [117]) and very often omits atmospheric correction to eliminate the risk of removing information in wavelengths dominated by water absorption [115]. Atmospheric correction may have a minimal impact on narrow bands of HSIs, which may adversely affect detection and classification based on pollutants' spectral profiles [113], [118]. Therefore, level 1 MSIs and HSIs are often exploited [116], [119], [120], and HSIs may be utilized to distinguish different categories of garbage (e.g., fishing nets and plastics) [113] and seawater [115]. Since supervised methods require ground truth, which is most often obtained with in situ methods in extremely time-consuming processes, unsupervised and semisupervised techniques have been gaining ground in the field [116]. Finally, it is worth mentioning that tracking microplastics at the ocean surface layer requires very detailed radiative transfer analysis and the development of high signal-to-noise sensors, and it constitutes another exciting avenue of emerging research [121].
DETECTION OF WATER POLLUTION
Water pollution poses a serious threat to ecosystems; therefore, monitoring it is an important element of various countermeasures. Due to the nature of pollution, several types of events can be distinguished here. Oil spills are one of the main sources of marine pollution [122], resulting not only in oil stains on the water surface but also in the death of animals and vegetation and damaged beaches, which translates into losses in the economy and tourism [122]. Locating oil spots and monitoring their displacement allows us to track their environmental impact, take preventive actions [123], and ensure justice (and compensation) when a spill source can be identified [124]. Remote sensing can help monitor the displacement of oil spills while ensuring high scalability over large areas. SAR imagery can be exploited here, as oil spots manifest as contrasting elements against surrounding clean water [125], [126]. As a result, SAR images show black spots representing oil spills and brighter areas representing clear water [127]. Such imaging can work effectively regardless of weather conditions, such as the degree of cloudiness and changes in lighting [123]. The research area of detecting oil spills from SAR is very active and spans classic [127], [128] and deep machine learning [122], [123], [125], [126], [129]. The less common processing of MSI data from Landsat-8 [130] can also be precise in locating oil spills. Garbage that flows down rivers to the ocean is a remnant of human activity [113]. Another threat to water quality and ecosystems is the human impact on water management through eutrophication [92] and industrial [131] and mining [65] activities. Litter contaminants are characterized by significant variation in their composition (plastic, wood, metal, and biomass) [112], which is a challenge to detecting
and tracking them [113]. HSI analysis enables capturing the characteristics of such materials; for example, initial simulations confirmed that the absorption characteristics at 1,215 and 1,732 nm have applications in detecting plastic [113]. Also, the use of the SWIR range allows us to eliminate the bathymetric influence [115], which should be considered due to water vapor absorption. Spatial resolution is of great importance in the case of contamination detection because, with the increase of the area per pixel, the possibility of detecting smaller plastic debris decreases [116]. Industrial impacts reduce water quality, especially inland, through heavy metals [88]. Keeping track of these changes and determining their nature is thus essential for human health. The assessment of the state of rivers should form the basis for planning agricultural and industrial activities [83]. The assessment of the mineral composition of water may be based on HSI and MSI analysis through observing spectral signatures [65], [83].
DETECTION OF HABs AND WATER QUALITY MONITORING
HABs are a serious threat both to humans and to other living organisms present in the aquatic environment [132]. HABs cause the extinction of some organisms, due to limited light and the inhibition of photosynthesis [133], and they reduce fishing [132], deteriorate water quality [133], and may be a threat in the case of power plants located near reservoirs, as they may cause blockages of cooling systems, as was the case with the Fangchenggang Nuclear Power Plant, in China [134]. Monitoring such harmful blooms can, therefore, result in socioeconomic benefits [133]. The development of algal blooms is influenced by environmental conditions, such as temperature [133] and water fertility [135], and by man-made infrastructure, e.g., river dams limiting the movement of water [136]. Algae occurrence estimation can be performed using in situ methods [134]. This approach allows us to study conditions by using buoys equipped with a sensor collecting data from aquatic ecosystems (temperature, salinity, dissolved oxygen, chlorophyll a, and the concentration of algae [134]). In situ measurements can also validate estimation based on HSIs [137] and MSIs. Due to the influence of seasonality on the water surface temperature [138], which affects the presence of HABs, the use of UAVs may limit the monitoring of changes over time. Satellite imagery, apart from providing massive scalability (in situ techniques are extremely costly and labor-intensive for periodic measurements), enables generating algal maps that take into account information on spatial distribution and variability over time [132], [139]. Automated machine learning detection methods
commonly exploit chlorophyll concentration estimation, as this parameter can be determined from HSIs [140]. Water quality monitoring is most often carried out by determining parameters such as temperature and acidity as well as chlorophyll a [141]. An increase in temperature promotes the growth of algae, which translates into an increase in the content of the phytoplankton biomass in the water [92]. This is often indicative of an uncontrolled growth of HABs [139]. In recent work, Sahay et al. showed that it is possible to estimate chromophoric dissolved organic matter, being the fraction of dissolved organic matter that absorbs sunlight in the ultraviolet and visible region of electromagnetic radiation, from remote sensing reflectance in coastal waters of India [142]. The authors showed that seasonal and spatial variability in the investigated area allows their algorithm to retrieve the chromophoric dissolved organic matter absorption in coastal areas by using high-resolution ocean color monitors, such as Sentinel-3, but also from HSIs [143]. In [144], Cherukuru et al. focused on estimating the dissolved organic carbon concentration in turbid coastal waters by using optical remote sensing observations. Organic carbon is a major component of dissolved organic matter in the aquatic system, and it plays a critical role in the marine carbon cycle. Overall, data-driven HSI analysis may be used for water environment monitoring and to understand its dynamics, leading to a better understanding of the underlying biogeochemical processes at a larger scale. Ocean monitoring is also tackled within NASA's aquatic Plankton, Aerosol, Cloud, and Ocean Ecosystem mission carrying the Ocean Color Instrument, which will be capable of measuring the color of the ocean, from ultraviolet to SWIR [145]. Recently, Caribbean coasts have experienced atypical arrivals of pelagic Sargassum, with negative consequences both ecologically and economically [146]. Removing Sargassum before its arrival, thanks to early detection, could reduce the damage it causes [147]. It is known that floating mats of vegetation alter the spectral properties of the water surface; hence, deep learning-powered exploitation of HSIs has been investigated for such tasks [148].
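As a toy example of the chlorophyll-driven screening mentioned above, red-edge band ratios such as the normalized difference chlorophyll index (NDCI) are often regressed against in situ chlorophyll-a measurements. The sketch below is illustrative only; the band choices (~665 and ~708 nm) and the regression coefficients are placeholders, not values from the cited studies.

```python
import numpy as np

def ndci(r665: np.ndarray, r708: np.ndarray) -> np.ndarray:
    """NDCI from water-leaving reflectance at ~665 and ~708 nm."""
    denom = np.where(r708 + r665 == 0.0, np.finfo(np.float32).eps, r708 + r665)
    return (r708 - r665) / denom

def chlorophyll_proxy(r665: np.ndarray, r708: np.ndarray,
                      a0: float = 14.0, a1: float = 86.0) -> np.ndarray:
    """Map NDCI to a chlorophyll-a proxy (mg/m^3).

    a0, a1 are hypothetical regression coefficients; real deployments would
    calibrate them against in situ match-ups for the target water body.
    """
    return a0 + a1 * ndci(r665, r708)
```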
MARITIME SURVEILLANCE
Maritime surveillance aims at maintaining the security of waters by monitoring traffic [149] and fishing [150] as well as by eliminating smuggling [151], illegal fishing [152], and pollution [153]. Remote sensing can help to locate ships and verify their positions against automatic identification systems [150], allowing for inferring the legality of a vessel's movement at scale [154]. Ship detection techniques based on signal processing [155] and deep learning [156] often rely on SAR and MSIs, with the former unaffected by clouds and fog [157]. Ships are characterized by various sizes and shapes; thus, an appropriate spatial image resolution [158] is pivotal to detect them [159]. Also, fusing SAR and MSI data enables locating vessels [158], whose positions can be tracked [160]. Object detection based on satellite images is an important element supporting the search for lost ships and planes [161]. This task is important in both civil and military matters for safety reasons as well as for potentially quick assistance in the case of accidents [162]. The level of difficulty of detecting planes and ships depends on the background and their size [159]; hence, the spatial resolution of the imagery is important [158]. Although remotely sensed images allow identifying such objects at a global scale, they are also challenging due to the lack of homogeneity of the background [161]. Illegal fishing is a threat to coastal and marine ecosystems as well as to the economy [152]. Unregulated fish catches reduce the stocks of fisheries, and the lack of reporting makes it impossible to monitor fisheries, which poses a threat to fish species and leads to economic effects [150]. The detection of illegal fishing is commonly built upon the detection of ships and the monitoring of their trajectories by using, e.g., deep learning techniques over MSIs [152]. The Integrated System for the Surveillance of Illegal, Unlicensed, and Unreported Fishing is an example of a working system that exploits SAR (Sentinel-1) and MSI (Sentinel-2) data for this task [150].
OBJECTIVE AND QUANTIFIABLE SELECTION OF ONBOARD AI APPLICATIONS
Implementing AI onboard an EO satellite is not a goal in itself, and it must produce significant benefits when compared to what on-ground data processing may provide. For a space project with cost and planning constraints, the answer is not obvious and must take into account different aspects by considering a wide range of criteria to estimate how well a solution complies with the satellite, system, and mission constraints, as well as what benefits may result. The analysis of pros and cons must demonstrate that having AI onboard an EO satellite is the best option to provide operational added value for the final user in terms of performance, timeliness/latency, autonomy, and extended capacities as well as from the engineering, industrial, operational, commercial, and scientific points of view. Additionally, selecting appropriate onboard applications can impact society at large through, e.g., the implementation of sustainable development goals and climate change adaptation and mitigation. At the end-to-end level, onboard processing may improve overall system reactivity by sending alerts upon the detection of transient and short-duration phenomena, thus providing rapid responses to events that require fast decision making that is incompatible with a standard
on-ground processing chain. From a mission point of view, onboard processing that could select relevant information at the sensor level may offer extended monitoring capacities beyond the initial mission perimeter by providing the final user with necessary and useful information while limiting data storage and transmission to marginal overheads. When considering data-driven AI solutions, additional criteria must be considered in the early engineering phases. They encompass database construction and algorithm design for selecting the target solution, and they apply to analytical approaches, too, even if database needs are often easier to meet there. Given that, for a future EO mission, there are generally no actual data available before a satellite launch, the possibility of simulating representative onboard acquisitions from exogenous data must be investigated carefully, especially for data-driven solutions. For some use cases, generating such data could require in-depth knowledge of the future system, which is not necessarily fully characterized in the early phases, when the design of the algorithms occurs. Consequently, this requires planning an ability to update the onboard algorithm configuration by uploading new parameters once a satellite is in orbit. This level of flexibility may be, in turn, affected by the confidence and trust expected at launch. Further updates could also be necessary during a satellite's lifetime to modify the algorithms according to the actual image quality and to adapt algorithm behavior to another use case. This constraint of updating the onboard algorithm configuration must be considered at the system level as soon as possible in the system development schedule, since it may have a significant impact on the design of some satellite subsystems. Validation of the algorithms' performance generally increases their maturity through mission phases, while the instrument design (and simulator) becomes more mature. The availability of annotated data related to the use case, or the difficulty of generating annotations (either manually or through automatic processing), must be carefully investigated for any emerging supervised model. Indeed, such activities may rapidly become cumbersome, requiring the expertise of specialists in the application domain to obtain relevant annotations. The cost of developing the engineering database necessary to handle any data-driven AI solution must be weighed against a potential non-data-driven, handcrafted algorithm solution that would not require the effort of collecting, annotating, and simulating a huge amount of representative data. In a classic data flow, which is followed while developing an AI-powered solution, the data may be acquired using different sources (e.g., drones, aircraft, and satellites) and are commonly unlabeled; hence, they should undergo manual or semiautomated analysis to generate ground truth information. Such data samples are further bundled into datasets, which should be carefully split into training and test sets, with the former exploited to train machine
learning models (they are commonly synthetically augmented to increase their size and representativeness [9]) and the latter used to quantify their generalization capabilities. Although designing appropriate validation procedures for AI solutions is not the main focus of this article, we want to emphasize that it should be performed with care, as incorrectly determined procedures can easily lead to biased and fundamentally flawed experimental results [163]. In such cases, we would not be able to quantify the expected operational abilities of AI algorithms; hence, we may end up being trapped in the "illusion of progress" reproducibility crisis [164], with overly optimistic hopes for onboard processing.
We can appreciate that there are many objectives and constraints related to system design, hardware, and data availability that can (and should) directly influence the process of selecting the target AI use case for final deployment onboard an EO satellite mission. In the following sections, we present the scales we suggest to quantify the objectives (the "Objectives" section) and constraints (the "Constraints and Feasibility" section) and to ultimately aggregate them into a weighted score assessing each considered onboard AI use case (the "Selecting Use Cases for Onboard AI Deployment" section).
OBJECTIVES
We split the objectives affecting the process of selecting a target AI application into those related to onboard processing, the mission itself, and the interest of the community. The following summarizes the objectives in more detail and presents the suggested scale for each. The scale is binary (values of zero and two) or three point (zero, one, and two); the larger the value, the better:
1) Onboard data analysis:
• Faster response (better reactivity), δ_fr^OBP: This objective relates to accelerating the extraction of actionable items/information from raw data through onboard analysis and to the benefits it could give end users:
– Zero: The faster response (when compared with on-the-ground processing of downlinked raw image data) is not of practical importance.
– Two: The faster response directly contributes to specific actions undertaken on the ground, which would not have been possible without this timely information.
• Multitemporal analysis (compatibility with the revisit time), δ_mta^OBP: This objective relates to multitemporal onboard analysis of a series of images acquired for the same area at consecutive time points and to the benefits it could give end users:
– Zero: Multitemporal analysis would not be beneficial (it would not bring any useful information) or could be beneficial but is not possible to achieve within the mission (e.g., due to the assumed ConOps and missing onboard geolocalization).
– One: Multitemporal analysis may be beneficial and can add new information, and it is possible to achieve within the mission.
– Two: Multitemporal analysis is of critical importance, and it is possible to achieve within the mission.
2) Mission:
• Compatibility with the acquisition strategy, δ_cas^M: This objective indicates whether the use case is in line with the acquisition strategy of the considered mission (the duty cycle of the satellite, the number of ground station locations, the capabilities to store data onboard, and the overall processing time, including data preprocessing and AI inference):
– Zero: The use case is not compatible with the strategy and may decrease the overall mission return (e.g., it requires acquisition retargeting, while the baseline strategy assumes a constant nadir scan).
– One: The use case is not compatible with the baseline strategy. However, modifications to the strategy are possible and do not have an adverse impact on the mission.
– Two: The use case is compatible with the acquisition strategy (e.g., the acquisition duty cycle, coverage, and revisit capabilities).
• Potential of extending the mission perimeter and/or capacity, δ_emp^M: This objective indicates whether the use case has the potential to extend the current mission perimeter and/or capacity, e.g., in multipurpose/reconfigurable missions, and it relates to the cost of the mission's perimeter expansion. As for enhancing the mission capacity, we may be able to obtain a higher scientific/commercial return, for instance, by observing more target sites than what could be achieved without data analysis onboard a satellite:
– Zero: The use case is already within the perimeter foreseen for the mission, and the mission capacity would not be enhanced by onboard processing. Alternatively, from a pure mission perimeter extension point of view, this use case is of poor interest.
– One: The use case is not within the perimeter for which the mission was initially designed, and its implementation may have an extra impact on the system design (e.g., sending an alert upon the detection of illegal ship degassing may need an additional geostationary optical link to act with the necessary reactivity this situation requires). This use case is of great interest to extend the mission perimeter/capacity but may be feasible only with a significant impact at the satellite (e.g., a new optical transmission device onboard) and system (e.g., a geostationary relay) levels.
– Two: The use case is not within the perimeter for which the mission was initially designed, and apart from the new AI function, the implementation of this use case has only a minor impact that can be absorbed by the current satellite and/or system design. Such a use case is therefore of great interest to extend the mission perimeter and/or capacity at a minimal cost.
3) Interest to the community, δ^R: This objective indicates whether the use case is of interest to the community (e.g., the geoscience and remote sensing research community, businesses pursuing future trends based on novelty and impact, and so forth) and whether it is novel, worthy of investigation, and has the potential to be disruptive:
– Zero: The number of existing papers is low and not increasing, or the number of papers is notable but has stabilized. This may be an indicator that the topic did not resonate in the community or has been widely researched, and it is difficult/unnecessary to contribute more.
– One: The number of existing papers is large (dozens) and increasing at a stable pace. This may be an indicator that the topic is worthy of investigation, although it has already been researched in the literature.
– Two: The number of existing papers is small (no more than tens) but increasing fast. This may be an indicator that the topic is worthy of investigation, novel, and disruptive and that it attracts significant research attention very fast.
CONSTRAINTS AND FEASIBILITY
Constraints relate to sensor capabilities and characteristics. Also, we discuss the availability of datasets, focusing on data-driven supervised techniques to achieve robust products. As in the case of objectives, the scale is either binary (zero/two) or three point (zero, one, and two), with larger values corresponding to more preferred use cases:
1) Sensor capabilities:
• Compatibility with the sensor spectral range, ω_cspe^SC: This constraint indicates the feasibility of tackling a use case with image data (with respect to their spectral range) captured using the considered sensor:
– Zero: The sensor is not compatible (it does not capture the spectral range commonly reported in the papers discussing the use case).
– One: The sensor is partly compatible (it captures part of the spectral range commonly reported in the papers discussing the use case).
– Two: The sensor is fully compatible (it captures the spectral range commonly reported in the papers discussing the use case).
• Compatibility with the sensor spectral sampling, ω_css^SC: This constraint indicates the feasibility of tackling a use case with image data (with respect to their spectral sampling) captured using the considered sensor:
– Zero: The commonly reported spectral sampling is narrower than that available in the target sensor; hence, it may not be possible to capture the spectral characteristics of the objects of interest.
– One: The commonly reported spectral sampling is much wider than that available in the target sensor (e.g., MSIs versus HSIs); hence, we may not fully benefit from the sensor's spectral capabilities.
– Two: The commonly reported spectral sampling is compatible with the target sensor.
• Compatibility with the sensor spatial resolution, ω_cspa^SC: This constraint indicates the feasibility of tackling a use case with image data (with respect to their spatial resolution) captured using the considered sensor:
– Zero: The available spatial resolution is not enough to effectively deal with the use case (e.g., to detect objects of the anticipated size, accurately calculate the area of cultivated land, and so forth).
– Two: The available spatial resolution is enough to effectively deal with the use case.
2) Dataset maturity:
• Availability of annotated (ground truth) data, ω_agt^D: This constraint relates to the availability of ground truth datasets that could be used to train and validate supervised models for onboard processing:
– Zero: No ground truth datasets are available.
– One: There exist ground truth datasets (at least one), but they are not fully compatible with the target sensor (compatibility with the sensor could be achieved from such data if there were an instrument simulator).
– Two: There exist ground truth datasets (at least one) that are fully compatible with the target sensor.
• Difficulty of creating new ground truth data, ω_dgt^D: This constraint relates to the process of creating new ground truth datasets that could be used to train and validate supervised learners for onboard processing during the target mission:
– Zero: The localization of the objects of interest is not known, and/or their spectral characteristics are not known in detail, but the current state of the art suggests preliminary wavelengths determined by airborne/laboratory methods and areas in which the phenomena of interest occur, e.g., based on in situ observations. Additional sources of ancillary information, such as analysis of news/social media related to the issue (e.g., environmental organizations in the event of a catastrophe), biogeophysical/chemical models, and other geospatial information, might be pivotal to elaborate new ground truth datasets. Preparing such an image database can be an important contribution to the development of HSI and MSI analysis.
– One: Identification of objects is possible on the basis of spectral signatures characteristic of a specific phenomenon that is expected in a given area (geographic coordinates are known), and ground truth can be generated through in situ methods.
– Two: Identification of objects of interest is possible based on their visibility in red–green–blue (RGB)/panchromatic/selected bands/combinations of bands [objects are visible in the RGB/panchromatic/selected band; thus, manual, semiautomatic, and automatic (by, e.g., automatic colocation with ancillary data) contouring is straightforward].
• Importance of data representativeness/variability, ω_shd^D: This constraint evaluates how representative training data would be of the situation at a global scale, covering spurious cases and extremes, hence ensuring a high level of generalizability. This point focuses on the need for capturing seasonally and/or spatially heterogeneous training data and the importance of such data heterogeneity in building well-generalizing data-driven models:
– Zero: It is critical to capture seasonally/spatially heterogeneous training data to make the resulting machine learning/data analysis models applicable in practice (e.g., calculating soil moisture).
– Two: Capturing seasonally/spatially heterogeneous training data may be beneficial, but it is not of critical importance for this use case (e.g., fire detection).
SELECTING USE CASES FOR ONBOARD AI DEPLOYMENT
In Table 2, we assemble objectives and constraints that contribute to the selection process. There are parameters that are mission independent; therefore, the same values (determined once) can be used for fundamentally different satellite missions, as shown in the "Case Studies" section for the CHIME and Intuition-1 missions. Afterward, they can be updated only when necessary, e.g., if the trends have changed within the "interest to the community" objective. The total score S, which aggregates all objectives and constraints, is their weighted sum:

$$
S = \underbrace{\alpha_{\mathrm{fr}}^{\mathrm{OBP}}\,\delta_{\mathrm{fr}}^{\mathrm{OBP}} + \alpha_{\mathrm{mta}}^{\mathrm{OBP}}\,\delta_{\mathrm{mta}}^{\mathrm{OBP}} + \alpha_{\mathrm{cas}}^{\mathrm{M}}\,\delta_{\mathrm{cas}}^{\mathrm{M}} + \alpha_{\mathrm{emp}}^{\mathrm{M}}\,\delta_{\mathrm{emp}}^{\mathrm{M}} + \alpha^{\mathrm{R}}\,\delta^{\mathrm{R}}}_{\text{objectives}} + \underbrace{\alpha_{\mathrm{cspe}}^{\mathrm{SC}}\,\omega_{\mathrm{cspe}}^{\mathrm{SC}} + \alpha_{\mathrm{css}}^{\mathrm{SC}}\,\omega_{\mathrm{css}}^{\mathrm{SC}} + \alpha_{\mathrm{cspa}}^{\mathrm{SC}}\,\omega_{\mathrm{cspa}}^{\mathrm{SC}} + \alpha_{\mathrm{agt}}^{\mathrm{D}}\,\omega_{\mathrm{agt}}^{\mathrm{D}} + \alpha_{\mathrm{dgt}}^{\mathrm{D}}\,\omega_{\mathrm{dgt}}^{\mathrm{D}} + \alpha_{\mathrm{shd}}^{\mathrm{D}}\,\omega_{\mathrm{shd}}^{\mathrm{D}}}_{\text{constraints}} \tag{1}
$$
Since the scale for each parameter is the same, we do not need to normalize the assigned values, and they can be summed together to elaborate S. The importance of specific parameters may, however, be directly reflected in the weighting factors (the α values). Here, the dominating parameters can be assigned (significantly) larger α's; thus, our procedure allows practitioners to conveniently simulate different mission profiles. This feature of the selection process is discussed in detail for CHIME in the "Selecting AI Applications for CHIME and Intuition-1" section. Similarly, if multiple teams are contributing to the evaluation process (e.g., the data analysis, space operations, and hardware design teams), the agreed parameter values may be determined following numerous approaches, including majority and weighted voting. Finally, the use case with the maximal S (or the N use cases with the largest S scores if more than one AI application can be developed) will be retained for the ultimate deployment onboard the analyzed mission.
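To make the aggregation in (1) concrete, the sketch below scores hypothetical candidate use cases. The use case names, weights, and parameter values are illustrative placeholders, not figures from this article; in practice, each δ and ω would be agreed on by the contributing teams.

```python
# Illustrative sketch of the weighted score in (1); all names, weights, and
# scores below are hypothetical placeholders.
PARAMS = ["fr", "mta", "cas", "emp", "R",
          "cspe", "css", "cspa", "agt", "dgt", "shd"]

def total_score(scores: dict, weights: dict) -> float:
    """Compute S as the weighted sum of objective/constraint values (0, 1, or 2)."""
    return sum(weights[p] * scores[p] for p in PARAMS)

weights = {p: 1.0 for p in PARAMS}   # uniform profile; a mission may, e.g.,
weights["cas"] = 2.0                 # emphasize acquisition-strategy compatibility

use_cases = {                        # hypothetical candidates with example scores
    "flood mapping":     dict(zip(PARAMS, [2, 2, 2, 1, 1, 2, 1, 2, 1, 1, 0])),
    "methane detection": dict(zip(PARAMS, [2, 0, 2, 2, 2, 1, 1, 0, 0, 0, 2])),
}

ranked = sorted(use_cases, key=lambda uc: total_score(use_cases[uc], weights),
                reverse=True)
print([(uc, total_score(use_cases[uc], weights)) for uc in ranked])
```

Reweighting (e.g., doubling the α for acquisition-strategy compatibility, as above) lets practitioners simulate different mission profiles without rescoring the use cases.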
TABLE 2. A SUMMARY OF OBJECTIVES AND CONSTRAINTS USED TO SELECT A TARGET AI APPLICATION FOR ONBOARD DEPLOYMENT.

PARAMETER                                                        SYMBOL       WEIGHT       MISSION
OBJECTIVES
ONBOARD PROCESSING (OBP)
  Faster response (better reactivity)                            δ_fr^OBP     α_fr^OBP     ✗
  Multitemporal analysis (compatibility with the revisit time)   δ_mta^OBP    α_mta^OBP    ✓
MISSION (M)
  Compatibility with the acquisition strategy                    δ_cas^M      α_cas^M      ✓
  Potential of extending the mission perimeter                   δ_emp^M      α_emp^M      ✓
INTEREST TO THE COMMUNITY (R)
  Interest to the community                                      δ^R          α^R          ✗
CONSTRAINTS
SENSOR CAPABILITIES (SC)
  Compatibility with the sensor spectral range                   ω_cspe^SC    α_cspe^SC    ✓
  Compatibility with the sensor spectral sampling                ω_css^SC     α_css^SC     ✓
  Compatibility with the sensor spatial resolution               ω_cspa^SC    α_cspa^SC    ✓
DATASET MATURITY (D)
  Availability of annotated (ground truth) data                  ω_agt^D      α_agt^D      ✗
  Difficulty of creating new ground truth data                   ω_dgt^D      α_dgt^D      ✗
  Importance of data representativeness/variability              ω_shd^D      α_shd^D      ✗

Mission-specific parameters are indicated with a ✓, whereas those that are mission agnostic are marked with an ✗.
QUANTIFYING THE INTEREST OF THE RESEARCH COMMUNITY
We performed a quantitative analysis of the recent body of literature (2012–2022) for each use case to objectively investigate the interest of the community. In Figure 1, we present the number of papers published yearly concerning the detection and monitoring of earthquakes and landslides (for all other use cases, see the supplementary material available at https://www.doi.org/10.1109/MGRS.2023.3269979). For the search process, we utilized keywords and key phrases commonly used in each application. (To perform the quantitative analysis of the state of the art, we used the Dimensions tool available at https://app.dimensions.ai/discover/publication.)

FIGURE 1. The number of recent papers (2012–2022) on the detection and monitoring of earthquakes and landslides, covering the monitoring of earthquakes, estimation of earthquake damages, detection of landslides, estimation of landslide damages, and prediction of landslides. For other applications, see the supplementary material available at https://www.doi.org/10.1109/MGRS.2023.3269979.

We can observe a steady increase in the number of papers published in each specific application related to the analysis of earthquakes and landslides, with the monitoring of earthquakes (rendered in yellow) and the detection of landslides (dark blue) manifesting the fastest growth in 2019–2021. Since the body of knowledge related to those two topics has been expanding significantly, we can infer that they are disruptive areas and that contributing to the state of the art here is of high importance (therefore, the topics were assigned the highest score in the evaluation matrix). On the other hand, as the estimation of damages induced by landslides does not resonate well in the research community (with a small number of papers published yearly, relative to the number of papers in other applications, and without any visible increasing trend), it was scored as zero. The same investigation should be performed for all applications, although we are aware that this analysis could still be considered a bit subjective (e.g., what is a "fast" increase in the number of papers, and when does a "slow" increase accelerate and become "fast"?). We believe that such a quantitative analysis of the existing state of the art is pivotal to make an informed decision about the research area that will be tackled in an upcoming mission.
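One way to reduce this subjectivity is to fix an explicit rule that maps publication counts to the zero/one/two scale of the community-interest objective. The thresholds in the sketch below are hypothetical, chosen only to illustrate how such a rule could be encoded; the article does not prescribe specific cutoffs.

```python
def community_interest_score(yearly_counts: list[int],
                             small_topic: int = 50,
                             fast_growth: float = 0.25) -> int:
    """Map yearly publication counts (oldest first) to the 0/1/2 scale of the
    community-interest objective. Thresholds are illustrative placeholders."""
    recent = yearly_counts[-3:]                 # focus on the last three years
    total = sum(yearly_counts)
    growth = (recent[-1] - recent[0]) / max(recent[0], 1)
    if growth <= 0:                             # stagnating or shrinking topic
        return 0
    if total <= small_topic and growth >= fast_growth:
        return 2                                # small but rapidly growing topic
    return 1                                    # established, steadily growing
```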
TABLE 3. A HIGH-LEVEL COMPARISON OF CHIME AND INTUITION-1.

FEATURE                      CHIME                      INTUITION-1
Satellite type               Copernicus extension       Nanosatellite (6U CubeSat)
Spatial resolution (GSD)     30 m                       25 m
Spectral range               400–2,500 nm               465–940 nm
Spectral sampling interval   ≤10 nm                     3–6 nm
Number of bands              250                        192
Revisit time                 11 days                    10 days
Uplink                       S-band (2 Mb/s)            S-band (256 kb/s)
Downlink                     Ka-band (up to 3.6 Gb/s)   X-band (up to 50 Mb/s)
Altitude                     632 km                     600 km

CASE STUDIES
We present two case studies of CHIME (the "CHIME Case Study" section) and Intuition-1 (the "Intuition-1 Case Study" section), being EO satellites with fundamentally different characteristics (Table 3). For CHIME and Intuition-1, we present their background, objectives and motivation, and constraints, which impact the selection of the AI application in our evaluation matrix. To make the procedure straightforward, we present example questions that may be asked during the design process:
◗ Background: What is the big picture behind the mission? How is the mission different from other missions? Why is it "unique"?
◗ Objectives and motivation: What are the main objectives of the mission, and how do they relate to AI? What do we want to demonstrate? Do we treat AI applications as "technology demonstrators" or as tools for empowering commercial/scientific use cases? Are there mission objectives that should "dominate" the selection? Are there AI applications that should be deployed even though they would not be selected in the process?
◗ Constraints: What are the constraints imposed on the mission and AI applications concerned with, e.g., the imaging sensor, processing resources, and other hardware?
CHIME CASE STUDY
BACKGROUND
CHIME (Figure 2) is part of the second generation of Sentinel satellites that the ESA is developing to expand the Copernicus EO constellation. The space component of the system will be composed of two satellites that are to be launched by the end of the 2020s. In 2018, the CHIME Mission Advisory Group was established at the ESA to provide expert advice during design and development, concerned with the scientific objectives of the mission, data product definitions, instrument design and calibration, data validation, and data exploitation. Following the evaluation of the Phase B2/C/D/E1 proposals in January 2020, Thales Alenia Space (France) was selected as the satellite prime contractor and OHB (Germany) as the instrument prime contractor. Phase B2 started in mid-November 2020. Currently, the ESA is targeting the identification of new use cases that can be relevant for onboard AI applications [165]. At the time of writing this article, the decision to implement a dedicated AI unit (AIU) onboard CHIME is pending. The final decision will result from a global tradeoff that is under investigation at the system and satellite levels, based on an evaluation process as described in this article and also considering programmatic and budget aspects.

FIGURE 2. The CHIME satellite. (Image: ESA.)

MISSION OBJECTIVES
The mission objectives are detailed in the CHIME mission requirement document [166] and can be summarized as "to provide routine hyperspectral observations through the Copernicus Program in support of European Union- and related policies for the management of natural resources, assets and benefits." The observational requirements of CHIME are driven by primary objectives, such as agriculture, soil analysis, food security, and raw materials analysis. In these domains, the system will have the potential to deliver many value-added products for various applications, including sustainable agricultural management, soil characterization, sustainable raw materials development, forestry and agricultural services, urbanization, biodiversity and natural resources management, environmental degradation monitoring, natural disaster responses and hazard prevention, inland and coastal water monitoring, and climate change assessment. To achieve the mission, the CHIME satellites will be equipped with hyperspectral spectrometers allowing them to remotely characterize the matter composing the surface of Earth and atmospheric aerosols.
25
MISSION CONSTRAINTS
The CHIME payload will be equipped with a hyperspectral camera delivering more than 250 spectral bands at a ≤10-nm spectral sampling interval in the visible and infrared spectral range from 400 to 2,500 nm. The instrument field of view covers a swath of 130 km at a 30-m GSD at the CHIME spacecraft altitude. The revisit time will be 11 days when the two satellites are operational [167]. With a continuous data throughput close to 5 Gb/s acquired over land and coastal areas, the CHIME payload will deliver about 1 Tb of data per orbit. Downloads are foreseen within 6–12-min visibility time slots over the Svalbard and Inuvik polar stations through the Ka-band link, which offers up to a 3.6-Gb/s download data rate. The acquired data will be processed on the ground between 5 and 10 h after acquisition. As a complement, onboard data processing might be an asset to reduce and select/prioritize valuable information. This leads to envisioning new strategies for onboard processing, storage, and transmission for CHIME. Besides the common constraints imposed on any hardware that will operate in space (vacuum, temperature, radiation due to the space environment, and vibrations during the launch phase), one of the main challenges is to consider the real-time processing constraints imposed by the data acquisition principle used onboard CHIME alongside the constant monitoring mission requirement. To ensure the continuity of the mission, the design of the overall onboard image processing chain, from the acquisition by the sensor up to the transmission of the image data to the ground, must prevent any risk of a bottleneck that would result in data loss because of memory overflow.
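To make these figures concrete, the following back-of-the-envelope sketch compares what CHIME acquires per orbit with what a single polar-station pass can return. All constants are the numbers quoted above; the per-pass arithmetic and the script itself are ours, purely for illustration.

```python
# A rough sanity check of the CHIME data budget, using only figures quoted
# in the text; this is an illustration, not a mission design document.

DATA_PER_ORBIT_TB = 1.0   # ~1 Tb acquired per orbit (text)
DOWNLINK_GBPS = 3.6       # Ka-band downlink rate (text)
PASS_MINUTES = (6, 12)    # visibility slots over Svalbard/Inuvik (text)

for minutes in PASS_MINUTES:
    downlinked_tb = DOWNLINK_GBPS * minutes * 60 / 1_000  # Gb -> Tb
    margin = downlinked_tb - DATA_PER_ORBIT_TB
    print(f"{minutes:>2}-min pass: {downlinked_tb:.2f} Tb returned, "
          f"margin vs. 1 Tb/orbit: {margin:+.2f} Tb")
```

Even in the best case, a short pass only barely clears one orbit's worth of acquisitions, and not every orbit ends with a full-length pass, so any onboard screening (e.g., discarding cloudy segments) directly relieves both the storage and the link.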
As an example, the CHIME nominal image processing chain will implement, in the data processing unit (DPU), real-time cloud detection [168], feeding a hyperspectral Consultative Committee for Space Data Systems (CCSDS)-123 compressor [169]. This will reduce the amount of data delivered by the sensor while compressing data with tunable losses over cloud-free areas [170]. A future unit implementing hypothetical onboard AI algorithms will have to process data on the fly at the sensor data rate to interface with the continuous data flow from the video acquisition unit (VAU). This imposes new constraints on the architecture of the current onboard data processing chain (i.e., a new interface with the existing VAU and updates in the DPU design) as well as strong constraints on the AIU hardware, such as high-speed input–output memory links, fast data processing cores, and the parallelization of DPUs to allow the AIU to process data at a rate compatible with the sensor throughput while ensuring the constant monitoring required by the CHIME mission.
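For intuition about what "processing at the sensor data rate" means for a push-broom instrument, the sketch below estimates the sustained sample rate from the swath, GSD, and ground-track speed. The swath, GSD, and band count come from the text; the ground speed and quantization are our assumptions, so treat the result as an order-of-magnitude check, not a CHIME specification.

```python
# Order-of-magnitude sizing of the data stream a hypothetical AIU must keep
# up with for a CHIME-like push-broom acquisition. Swath, GSD, and band count
# come from the text; ground speed and bit depth are assumptions.

SWATH_KM = 130.0            # instrument swath (text)
GSD_M = 30.0                # ground sampling distance (text)
BANDS = 250                 # spectral bands (text)
GROUND_SPEED_M_S = 6_900.0  # assumed sub-satellite speed for a ~630-km orbit
BITS_PER_SAMPLE = 12        # assumed detector quantization

pixels_per_line = SWATH_KM * 1_000 / GSD_M    # ~4,333 cross-track pixels
lines_per_second = GROUND_SPEED_M_S / GSD_M   # ~230 acquired lines/s
samples_per_second = pixels_per_line * lines_per_second * BANDS

print(f"{samples_per_second / 1e6:.0f} Msamples/s, "
      f"~{samples_per_second * BITS_PER_SAMPLE / 1e9:.1f} Gb/s raw")
```

Under these assumptions, the raw stream lands in the low-gigabit-per-second range, consistent in magnitude with the ~5-Gb/s continuous throughput quoted earlier; an AIU inserted into this chain must sustain such a rate without creating back pressure on the VAU.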
INTUITION-1 CASE STUDY

BACKGROUND
The purpose of the Intuition-1 space mission (Figure 3) is to observe Earth by using a 6U nanosatellite equipped with a hyperspectral optical instrument and multidimensional data processing capabilities that employ deep CNNs. The mission is designed as a technology demonstrator, allowing for the verification of various in-orbit data processing applications, use cases, and operations concepts. As of the third quarter of 2022, the Intuition-1 payload was qualified for space applications [technology readiness level (TRL) 8], with a launch scheduled for the second quarter of 2023.

FIGURE 3. The Intuition-1 satellite. (Source: KP Labs/AAC Clyde Space.)

MISSION OBJECTIVES
The main objective of the mission is to test a system composed of in-house-developed components—a high-performance computing unit, a hyperspectral optical instrument, and data processing software (preprocessing, segmentation, and classification algorithms)—and to evaluate the applicability of nanosatellites to performing medium-resolution hyperspectral imaging coupled with onboard data processing. An important goal of the mission is to assess the real-life advantages and shortcomings of the onboard data processing chain, taking into account the full mission context, from the technical perspective (data, power, and thermal budgets) to spacecraft operations and scheduling. Intuition-1 will be equipped with a smart compression component that will prioritize the downlink and processing pipelines based on the cloud cover within the area of interest. The mission is aimed to be multipurpose, with in-orbit update capabilities (through uplinking updated AI algorithms) and the targeting of new use cases emerging during the mission; therefore, the satellite can be considered a "flying laboratory." The first use case planned for Intuition-1 is the onboard estimation of soil parameters from acquired HSIs. Finally, hyperspectral data captured by Intuition-1 could be useful in other missions to train machine learning models for in-orbit operations based on real-life data.

MISSION CONSTRAINTS
The optical instrument captures HSIs in a push broom manner, working in the VNIR range (465–940 nm) with up to 192 spectral bands (each of 3–6 nm) and a GSD of 25 m at the reference orbit (600 km). The instrument utilizes a CMOS image sensor with linear variable filters, so different parts of the sensor are sensitive to light of different wavelengths. By moving the instrument in the direction of the filters' gradient, hyperspectral data of static terrain are recorded. Using specialized preprocessing, the coregistration process is performed, so a subpixel-accurate hyperspectral cube can be produced regardless of low-frequency disturbances of the satellite platform's attitude determination and control system. The Leopard DPU is responsible for the acquisition of raw image data from the optical system, storage of the data, running preprocessing algorithms, data compression (CCSDS-123), and AI processing. Other functionalities, such as handling the S-band radio (uplink, 256 kb/s) and the X-band radio (downlink, up to 50 Mb/s), are also covered by the DPU. The Leopard DPU utilizes the Xilinx Vitis AI framework to accelerate CNNs on field-programmable gate array hardware, providing energy-efficient (0.3 tera operations per second per watt) inference and in-flight-reconfigurable deep models.
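The smart compression component described in the "Mission Objectives" section reduces, in essence, to a scheduling policy. The sketch below illustrates one such policy; the class, function, and threshold below are ours, purely for illustration, and do not reflect Intuition-1 flight software.

```python
# Illustrative scene-prioritization policy: discard mostly cloudy captures
# and downlink the clearest scenes first. All names and thresholds are ours.
from dataclasses import dataclass

@dataclass
class Scene:
    scene_id: str
    cloud_fraction: float  # onboard estimate, 0.0 (clear) to 1.0 (overcast)

def prioritize(scenes: list[Scene], max_cloud: float = 0.7) -> list[Scene]:
    usable = [s for s in scenes if s.cloud_fraction <= max_cloud]
    return sorted(usable, key=lambda s: s.cloud_fraction)

queue = prioritize([Scene("A", 0.15), Scene("B", 0.85), Scene("C", 0.40)])
print([s.scene_id for s in queue])  # ['A', 'C'] -- 'B' is held back
```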
SELECTING AI APPLICATIONS FOR CHIME AND INTUITION-1
In Table 4, we list the objectives (the "Objectives" section) and constraints (the "Constraints and Feasibility" section) assessed for both the CHIME and Intuition-1 missions (for the interactive evaluation matrix, see the supplementary material available at https://www.doi.org/10.1109/MGRS.2023.3269979). For CHIME, the values of the mission-specific entries of the evaluation matrix were agreed to by a working group composed of the mission scientist, project manager, satellite manager, mission manager, payload data processing and handling engineers, and AI and data analysis experts. On the other hand, those parameters were quantified by the system engineer for Intuition-1. The mission-independent objectives and constraints were elaborated by the entire working group.
Although some of the parameters, such as "faster response (better reactivity)" and those related to the datasets that could be used to train supervised learners, are straightforward to quantify and directly related to the use case characteristics, the "interest to the community" may look more subjective. The parameters are, however, still quantifiable, as we showed in the "Selecting Use Cases for Onboard AI Deployment" section. The mission-specific parameters are directly related to the mission planning, ConOps, and hyperspectral sensor's capabilities. Therefore, their quantification is inherently objective, as it is based on well-defined assumptions, such as the ConOps document and the technical specification of the camera. As an example, compatibility with the sensor spectral range results from confronting the spectral range of a target sensor (400–2,500 nm and 465–940 nm for CHIME and Intuition-1, respectively) with the spectral range commonly reported in the literature for a use case of interest. Therefore, for, e.g., estimating soil parameters, the corresponding scores for CHIME and Intuition-1 are two (fully compatible) and one (partly compatible, capturing the majority of the spectral range) for this constraint, as the spectral range often reported in the literature for this application is 400–1,610 nm [19]. Additionally, we can observe that the multitemporal analysis (δ_mta^OBP) is zero for all potential applications for the Intuition-1 mission, as this nanosatellite will not be equipped with onboard georeferencing routines; hence, it would not be possible to effectively benefit from multiple images captured for the very same scene at more than one time point. Similarly, since the estimation of soil parameters is already planned for CHIME and Intuition-1, such agricultural applications would not necessarily extend the mission perimeter; therefore, this parameter (δ_emp^M) became zero for both satellites.
To show the flexibility of our evaluation procedure, we present radar plots showing the values of each parameter for the most promising use cases (according to the weighted total scores S) in three scenarios where 1) the objectives and constraints are equally important, 2) the objectives are twice as important as the constraints (thus, the α weighting factors for the objectives are twice as big as the α's assigned to the constraints), and 3) the constraints are twice as important as the objectives (Figure 4). The second scenario may correspond to missions whose aim is to push the current state of the art and be disruptive, even if the risk levels are higher, whereas the third may reflect missions minimizing risks related to constraints while still delivering contributions to the current state of knowledge. We can appreciate that the weighting process affects the ranking of the potential applications for both missions; hence, it can better guide the selection procedure based on the most relevant factors (objectives, constraints, or both).
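As a minimal sketch of the scoring just described: each use case receives 0–2 ratings for the objectives (δ) and constraints (ω), and the total score S aggregates them under the α weights. We assume a plain weighted sum with a single weight per group, which is a simplification of the per-parameter weighting detailed in the supplementary material; the ratings below read the CHIME entries of the soil-parameters row in Table 4.

```python
# Simplified scoring sketch: S as a weighted sum of 0-2 ratings. The authors'
# exact aggregation lives in the supplementary evaluation matrix; this is an
# illustration of the weighting mechanism only.

def total_score(objectives, constraints, alpha_obj=1.0, alpha_con=1.0):
    return alpha_obj * sum(objectives) + alpha_con * sum(constraints)

# CHIME ratings for "Estimation of soil parameters" (see Table 4):
obj = [0, 2, 2, 0, 1]     # δ_fr^OBP, δ_mta^OBP, δ_cas^M, δ_emp^M, δ^R
con = [2, 1, 2, 0, 1, 0]  # ω_cspe^SC, ω_css^SC, ω_cspa^SC, ω_agt^D, ω_dgt^D, ω_shd^D

for profile, (a_obj, a_con) in {"equal weights": (1, 1),
                                "objectives x2": (2, 1),
                                "constraints x2": (1, 2)}.items():
    print(f"{profile:>14}: S = {total_score(obj, con, a_obj, a_con)}")
```

Re-running such a computation over all 33 use cases under the three weight profiles reproduces the kind of re-ranking visualized in Figure 4.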
TABLE 4. AN EVALUATION MATRIX CAPTURING ALL MISSION OBJECTIVES AND CONSTRAINTS FOR CHIME AND INTUITION-1. EACH ENTRY IS A 0–2 RATING; C = CHIME, I-1 = INTUITION-1.

OBJECTIVES (ONBOARD PROCESSING: δ_fr^OBP AND δ_mta^OBP; MISSION: δ_cas^M AND δ_emp^M; COMMUNITY: δ^R)

USE CASE                                     δ_fr^OBP   δ_mta^OBP   δ_cas^M   δ_emp^M   δ^R
                                                         (C/I-1)    (C/I-1)   (C/I-1)
AGRICULTURAL APPLICATIONS
Estimation of soil parameters [19]               0         2/0        2/2       0/0      1
Analysis of fertilization [20]                   0         2/0        2/2       0/0      1
Analysis of the plant growth [21]                0         1/0        2/2       0/0      1
Assessment of hydration [171]                    2         2/0        2/2       0/0      1
MONITORING OF PLANT DISEASES AND WATER STRESS
Monitoring of plant diseases [18]                2         2/0        2/2       0/0      1
Estimation of water content [32]                 2         2/0        2/2       0/0      1
DETECTION AND MONITORING OF FLOODS
Detection of floods [35]                         2         0/0        2/1       1/2      1
Damage estimation in floodplains [172]           0         1/0        2/2       1/2      1
DETECTION OF FIRE, VOLCANIC ERUPTIONS, AND ASH CLOUDS
Early detection of fire [44]                     2         0/0        2/0       1/2      2
Monitoring fire progress [173]                   2         2/0        2/2       1/2      2
Burned area maps [41]                            0         1/0        2/0       0/2      2
Assessment of vegetation destruction [42]        2         0/0        2/1       0/2      2
Greenhouse gas emission [43]                     2         1/0        2/1       0/2      1
Detection of volcanic ash clouds [47]            2         1/0        2/1       1/2      1
Detection of volcanic eruption [52]              2         0/0        2/1       1/2      1
Lava tracking [174]                              2         2/0        2/2       1/2      1
Estimation of fire damage [172]                  0         1/0        2/2       1/2      2
DETECTION AND MONITORING OF EARTHQUAKES AND LANDSLIDES
Monitoring of earthquakes [54]                   2         1/0        2/1       0/2      2
Estimation of earthquake damage [62]             0         1/0        2/2       0/2      1
Detection of landslides [54]                     2         0/0        2/1       0/2      2
Estimation of landslide damage [175]             0         1/0        2/1       0/2      0
Prediction of landslides [176]                   2         2/0        2/0       1/1      1
MONITORING OF INDUSTRIALLY INDUCED POLLUTION
Dust events [177]                                0         2/0        2/1       1/1      1
Mine tailings [65]                               0         2/0        2/2       1/2      1
Acidic discharges [78]                           2         2/0        2/1       2/2      2
Hazardous chemical compounds [94]                2         2/0        2/1       2/2      1
DETECTION AND MONITORING OF METHANE
Detection of methane [178]                       2         2/0        2/1       2/2      1
WATER ENVIRONMENT ANALYSIS
Detection of marine litter [118]                 2         2/0        1/2       2/2      0
Monitoring of algal blooms [92]                  2         0/0        1/2       0/2      1
Coastal/inland water pollution [92]              2         2/0        1/2       0/2      1
Maritime surveillance [150]                      2         0/0        0/0       2/1      1
Maritime support [158]                           2         1/0        0/0       0/1      1
Aircraft crashes [107]                           2         1/0        0/0       0/1      0

CONSTRAINTS (SENSOR CAPABILITIES: ω_cspe^SC, ω_css^SC, AND ω_cspa^SC; DATASET MATURITY: ω_agt^D, ω_dgt^D, AND ω_shd^D)

USE CASE                                     ω_cspe^SC   ω_css^SC   ω_cspa^SC   ω_agt^D   ω_dgt^D   ω_shd^D
                                              (C/I-1)     (C/I-1)    (C/I-1)
AGRICULTURAL APPLICATIONS
Estimation of soil parameters [19]              2/1         1/2        2/2          0         1         0
Analysis of fertilization [20]                  2/2         1/2        2/2          0         1         2
Analysis of the plant growth [21]               2/2         0/2        0/2          0         1         0
Assessment of hydration [171]                   2/2         1/2        0/2          0         1         0
MONITORING OF PLANT DISEASES AND WATER STRESS
Monitoring of plant diseases [18]               2/1         2/2        2/2          0         0         0
Estimation of water content [32]                1/1         1/1        0/2          0         0         0
DETECTION AND MONITORING OF FLOODS
Detection of floods [35]                        2/2         1/2        2/2          1         2         0
Damage estimation in floodplains [172]          2/1         0/1        2/0          1         1         2
DETECTION OF FIRE, VOLCANIC ERUPTIONS, AND ASH CLOUDS
Early detection of fire [44]                    2/0         1/1        2/1          0         1         0
Monitoring fire progress [173]                  2/1         1/2        0/0          0         1         2
Burned area maps [41]                           2/1         1/2        2/0          0         1         2
Assessment of vegetation destruction [42]       2/1         2/2        2/2          0         1         0
Greenhouse gas emission [43]                    2/0         1/2        2/2          0         1         0
Detection of volcanic ash clouds [47]           2/2         1/2        0/2          1         1         2
Detection of volcanic eruption [52]             2/1         1/2        0/2          1         2         2
Lava tracking [174]                             1/1         1/2        2/2          1         2         2
Estimation of fire damage [172]                 2/1         0/2        2/0          1         1         2
DETECTION AND MONITORING OF EARTHQUAKES AND LANDSLIDES
Monitoring of earthquakes [54]                  2/2         1/2        2/0          0         1         0
Estimation of earthquake damage [62]            2/2         1/2        2/0          1         1         2
Detection of landslides [54]                    2/2         0/2        2/0          0         1         0
Estimation of landslide damage [175]            2/1         1/2        2/2          1         1         2
Prediction of landslides [176]                  2/1         1/2        2/2          0         1         0
MONITORING OF INDUSTRIALLY INDUCED POLLUTION
Dust events [177]                               1/1         1/2        0/2          0         1         0
Mine tailings [65]                              2/1         1/2        2/2          0         1         2
Acidic discharges [78]                          2/1         1/2        2/2          0         1         0
Hazardous chemical compounds [94]               2/1         1/2        2/2          0         1         0
DETECTION AND MONITORING OF METHANE
Detection of methane [178]                      2/1         1/2        2/2          1         1         0
WATER ENVIRONMENT ANALYSIS
Detection of marine litter [118]                2/1         1/2        2/2          0         1         2
Monitoring of algal blooms [92]                 2/1         1/2        0/2          0         1         0
Coastal/inland water pollution [92]             2/1         1/2        2/2          0         0         2
Maritime surveillance [150]                     2/1         1/2        0/0          1         0         2
Maritime support [158]                          2/1         1/2        2/0          1         2         2
Aircraft crashes [107]                          2/2         1/2        0/0          1         2         2
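For instance, the ω_cspe^SC ratings above can be derived mechanically by intersecting the sensor's spectral range with the range a use case needs. The sketch below shows one way to do this; the function name and the numeric cutoffs are ours (the partial-compatibility cutoff is chosen so that the soil-parameters example reproduces the 2/1 scores discussed in the text), not the authors' published rule.

```python
# Illustrative derivation of the spectral-compatibility rating ω_cspe^SC.
# Ranges are in nm; the cutoffs are our assumptions, tuned so the soil
# example matches the scores reported in the text.

def spectral_compatibility(sensor, use_case):
    lo, hi = max(sensor[0], use_case[0]), min(sensor[1], use_case[1])
    covered = max(0.0, hi - lo) / (use_case[1] - use_case[0])
    if covered >= 1.0:
        return 2  # fully compatible
    if covered >= 0.3:
        return 1  # partly compatible (assumed cutoff)
    return 0

SOIL = (400, 1610)  # range reported in the literature for soil parameters [19]
print(spectral_compatibility((400, 2500), SOIL))  # CHIME       -> 2
print(spectral_compatibility((465, 940), SOIL))   # Intuition-1 -> 1
```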
FIGURE 4. Our evaluation procedure allows for investigating various mission profiles. We consider example mission profiles of (a) CHIME and (b) Intuition-1 where 1) both the objectives and constraints are equally important, 2) the objectives are twice as important as the constraints, and 3) the constraints are twice as important as the objectives. The most promising use cases are rendered in color (the larger S is, the better), whereas others are in gray (darker shades of gray indicate that more use cases were assigned the same value in the corresponding parameter). For each use case, we report its score S. (The highlighted use cases across the profiles include lava tracking, monitoring fire progress, detection of volcanic eruptions and ash clouds, detection of floods, acidic discharges, detection of methane, and hazardous chemical compounds, with S between roughly 15 and 27 depending on the mission and profile.)
CONCLUSIONS
The latest advances in hyperspectral technology allow us to capture very detailed information about objects, and they bring exciting opportunities to numerous EO downstream applications. Hyperspectral satellite missions have been proliferating recently, as acquiring HSIs in orbit offers enormous scalability of various solutions. However, sending large amounts of raw hyperspectral data for on-the-ground processing is inefficient and even infeasible, depending on the link constraints. Also, if actionable items extracted from raw data are not delivered in time, they may easily become useless in commercial and scientific contexts. Therefore, the recent focus of space agencies, private companies, and research entities aims at taking AI solutions to space to extract and transfer knowledge from raw HSIs acquired onboard imaging satellites.
Although there are hyperspectral imaging missions with the ultimate goal of deploying AI at the edge, selecting a single (or a set of) onboard AI application(s) remains an important challenge, as it affects virtually all components of the developed satellite.
In this article, we tackled this research gap and introduced a fully traceable, objective, quantifiable, and interpretable approach of assessing potential onboard AI applications and selecting those that maximize the overall score aggregating the most important mission objectives and constraints in a simple way. We proved the flexibility of the evaluation process by employing it on two hyperspectral missions: CHIME and Intuition-1. Our technique may be straightforwardly utilized to target two fundamentally different missions, and it allows practitioners to analyze different mission profiles and the importance of assessment factors through the weighting mechanism. On top of that, the procedure can be extended to capture other aspects, such as expected onboard data quality (e.g., geometric, spectral, and radiometric) and other types of payloads beyond optical sensors, which may play a key role in specific EO use cases. Also, it may be interesting to consider selected aspects that are currently treated as being mission agnostic as mission specific. As an example, creating new ground truth data may require planning in situ measurement campaigns to be in line with the ConOps of a mission. The same would apply to the training image data, whose acquisition time and target area characteristics should be close enough to those planned for a mission. Finally, including the TRL of specific onboard AI algorithms (in relation to the available hardware planned for inference) in the evaluation procedure could help make a more informed decision on selecting the actual AI solution (e.g., a deep learning architecture for a given downstream task). We believe that the standardized approach of evaluating onboard AI applications will become an important tool that will be routinely exploited while designing and planning emerging EO missions and that it will help maximize the percentage of successful satellite missions bringing commercial, scientific, industrial, and societal value to the community.

ACKNOWLEDGMENT
This work was partly funded by the ESA via a feasibility study for the CHIME mission and the Intuition-1-focused GENESIS and GENESIS 2 projects supported by the Φ-lab (https://philab.esa.int/). Agata M. Wijata and Jakub Nalepa were supported by a Silesian University of Technology grant for maintaining and developing research potential. This article has supplementary material, provided by the authors, available at https://www.doi.org/10.1109/MGRS.2023.3269979.

AUTHOR INFORMATION
Agata M. Wijata ([email protected]) received her M.Sc. (2015) and Ph.D. (2023) degrees in biomedical engineering at the Silesian University of Technology. Currently, she works as a researcher at the Silesian University of Technology, 44-800 Zabrze, Poland, and as a machine learning specialist at KP Labs, 44-100 Gliwice, Poland, where she has been focusing on hyperspectral image analysis. Her research interests include multi- and hyperspectral image processing, medical image processing, image-guided navigation systems in medicine, artificial neural networks, and artificial intelligence in general. She contributes to the Copernicus Hyperspectral Imaging Mission for the Environment from the artificial intelligence and data processing perspectives. She is a Member of IEEE.
Michel-François Foulon ([email protected]) received his Ph.D. degree (2008) in micro- and nanotechnologies and telecommunications from Université des Sciences et Technologies de Lille, France. In 2007, he joined Thales Alenia Space, 31100 Toulouse, France, where he is currently an imaging chain architect in the observation, exploration, and navigation business line. He has more than 15 years of experience in microwaves, space-to-ground transmission, onboard data processing, and image chain architecture design for Earth observation systems. Since 2021, he has also worked on onboard artificial intelligence solutions in the framework of the Copernicus program.
Yves Bobichon ([email protected]) received his Ph.D. degree (1997) in computer science from University Nice Côte d'Azur, France. He joined Alcatel Space Industries in 1999. He is currently an image processing chain architect in the System and Ground Segment Engineering Department, Thales Alenia Space, 31100 Toulouse, France. He has more than 25 years of experience in satellite image processing, onboard data compression, and image chain architecture design for Earth observation systems. Since 2018, he has also been a researcher at the French Research Technological Institute Saint Exupéry. His research interests include embedded machine and deep learning applications for image processing onboard Earth observation satellites.
Raffaele Vitulli ([email protected]) received his M.Sc. degree in electronic engineering from Politecnico di Bari, Italy, in 1991. He is a staff member of the Onboard Payload Data Processing Section, European Space Agency, 2201 AZ Noordwijk, The Netherlands, where he works on the Consultative Committee for Space Data Systems as a member of the Multispectral/Hyperspectral Data Compression Working Group. He has also been the chair and organizer of the Onboard Payload Data Compression Workshop. He is actively involved in the Copernicus Hyperspectral Imaging Mission for the Environment mission, supervising avionics and onboard data handling.
Marco Celesti ([email protected]) received his M.Sc. (2014) and Ph.D. (2018) degrees in environmental sciences from the University of Milano–Bicocca (UNIMIB), Italy. After that, he worked as a postdoc at UNIMIB, being also involved as scientific project manager in the Horizon 2020 Marie Curie Training on Remote Sensing for Ecosystem Modeling project. He received a two-year fellowship cofunded by the European Space Agency (ESA) Living Planet Fellowship program, working in support of the Earth Explorer 8 Fluorescence Explorer mission. Since 2021, he has worked at the ESA, 2201 AZ Noordwijk, The Netherlands, as a Sentinel optical mission scientist. His research interests include optical remote sensing, retrieval of geophysical parameters, radiative transfer modeling, and terrestrial carbon assimilation. He is currently the mission scientist of the Copernicus Hyperspectral Imaging Mission for the Environment and Copernicus Sentinel-2 next-generation missions.
Roberto Camarero ([email protected]) received his M.Sc. degree in telecommunications, signal processing, and electronic systems engineering from the University of Zaragoza, Spain, in 2005 and advanced M.Sc. degree in aerospace telecommunications and electronics from Institut Supérieur de l'Aéronautique et de l'Espace, Toulouse, France, in 2006. He worked for the National Center for Space Studies (CNES) from 2006 to 2018 in the Onboard Data System Office and has been with the European Space Agency (ESA), 2201 AZ Noordwijk, The Netherlands, since then. He has been the CNES/ESA representative in the Consultative Committee for Space Data Systems Data Compression Working Group for over a decade. He has been a visiting lecturer on image compression at several French engineering schools, and he is a co-organizer of the Onboard Payload Data Compression Workshop. His research interests include onboard image compression and processing for optical remote sensing missions.
Gianluigi Di Cosimo ([email protected]) received his M.Sc. degree in physics in 1991 and Ph.D. degree in 1995 at the Sapienza University of Rome, Italy. After a few years in the space industry, working for one of the major large system integrators in Europe, he joined the European Space Agency (ESA), 2201 AZ Noordwijk, The Netherlands, in 2006. At the ESA, he has been responsible for product assurance management on several projects across different application domains, e.g., telecommunications, navigation, and Earth observation, mainly following spacecraft development, testing, and launch. In 2020, he was appointed satellite engineering and assembly, integration, and verification manager for the Copernicus Hyperspectral Imaging Mission for the Environment.
Ferran Gascon ([email protected]) received his M.Sc. degree in telecommunications engineering from Universitat Politècnica de Catalunya, Barcelona, Spain, in 1998; M.Sc. degree from École Nationale Supérieure des Télécommunications de Bretagne, Brest, France; and Ph.D. degree in remote sensing from Centre d'Études Spatiales de la Biosphère, Toulouse, France, in 2001. He is currently an engineer with the European Space Research Institute, European Space Agency, 00044 Frascati, Italy. He spent several years on the development and operations of the Copernicus Sentinel-2 and Fluorescence Explorer missions as a data quality manager. The scope of his tasks covered all mission aspects related to product/algorithm definition, calibration, and validation. He is currently the Copernicus Sentinel-2 mission manager.
Nicolas Longépé ([email protected]) received his M.Eng. degree in electronics and communication systems and M.Sc. degree in electronics from the National Institute for the Applied Sciences, Rennes, France, in 2005 and his Ph.D. degree in signal processing and telecommunication from the University of Rennes I, Rennes, in 2008. From 2007 to 2010, he was with the Earth Observation Research Center, Japan Aerospace Exploration Agency, Tsukuba, Japan. From 2010 to 2020, he was with the Space Observation Division, Collecte Localization Satellites, Plouzané, France, where he was a research engineer. Since September 2020, he has been an Earth observation data scientist in the Φ-lab Explore Office, European Space Research Institute, European Space Agency, 00044 Frascati, Italy. His research interests include Earth observation remote sensing and digital technologies, such as machine (deep) learning. He has been working on the development of innovative synthetic aperture radar-based applications for environmental and natural resource management (ocean, mangrove, land and forest cover, soil moisture, snow cover, and permafrost) and maritime security (oil spills, sea ice, icebergs, and ship detection/tracking). At the Φ-lab, he is particularly involved in the development of innovative Earth observation missions in which artificial intelligence is directly deployed at the edge (on the spacecraft).
Jens Nieke ([email protected]) received his M.Eng. degree in aero- and astronautical engineering from the Technical University of Berlin, Germany, and the National Institute of Applied Sciences, Lyon, France, and his Ph.D. degree in an advanced satellite mission study for regional coastal zone monitoring at the Technical University of Berlin in 2001. In 1995, he joined the team working on the Moderate Optoelectrical Scanner for the Indian Remote Sensing satellite at the German Aerospace Center, Berlin, which launched a spaceborne imaging spectrometer in 1997. From 2000 to 2003, he was a visiting scientist with the Earth Observation Research Center, Japan Aerospace Exploration Agency, Tsukuba, Japan, where he was involved in the calibration and validation of the Advanced Earth Observing Satellite-II Global Imager mission. From 2004 to 2007, he was with the Remote Sensing Laboratories, University of Zurich, Switzerland, as a senior scientist, lecturer, and project manager of the Airborne Prism Experiment project of the European Space Agency (ESA). Since 2007, he has been with the European Space Research and Technology Center, ESA, 2201 AZ Noordwijk, The Netherlands, where he is a member of the Sentinel-3 mission team.
Michal Gumiela ([email protected]) is a systems engineer with an electrical and software-embedded systems background. He received his B.Sc. degree in electronics and communications engineering at AGH University of Science and Technology, Krakow, Poland, and M.Sc. degree in microsystems and electronic systems at Warsaw University of Technology, Poland. He has worked in the Wireless Sensors Networks research group, AGH University of Science and Technology, and at Astronika. Now with KP Labs, 44-100 Gliwice, Poland, he works as a systems engineer on projects involving onboard processing using artificial intelligence (AI) algorithms for autonomous Earth observation data segmentation and classification. As the head of mission analysis, he prepares operations concepts of AI-capable missions involving heavy data processing and autonomy.
Jakub Nalepa ([email protected]) received his M.Sc. (2011), Ph.D. (2016), and D.Sc. (2021) degrees in computer science from Silesian University of Technology, 44-100 Gliwice, Poland, where he is currently an associate professor. He is also the head of artificial intelligence (AI) at KP Labs, 44-100 Gliwice, Poland, where he shapes the scientific and industrial AI objectives of the company, related to, among others, Earth observation, onboard and on-the-ground satellite data analysis, and anomaly detection from satellite telemetry data. He has been pivotal in designing the onboard deep learning capabilities of Intuition-1 and has contributed to missions, including the Copernicus Hyperspectral Imaging Mission for the Environment and Operations Nanosatellite. His research interests include (deep) machine learning, hyperspectral data analysis, signal processing, remote sensing, and tackling practical challenges that arise in Earth observation to deploy scalable solutions. He was the general chair of the HYPERVIEW Challenge at the 2022 IEEE International Conference on Image Processing, focusing on the estimation of soil parameters from hyperspectral images onboard Intuition-1 to maintain farm sustainability by improving agricultural practices. He is a Senior Member of IEEE.
REFERENCES
[1] N. Audebert, B. Le Saux, and S. Lefevre, "Deep learning for classification of hyperspectral data: A comparative review," IEEE Geosci. Remote Sens. Mag., vol. 7, no. 2, pp. 159–173, Jun. 2019, doi: 10.1109/MGRS.2019.2912563.
[2] W. Sun and Q. Du, "Hyperspectral band selection: A review," IEEE Geosci. Remote Sens. Mag., vol. 7, no. 2, pp. 118–139, Jun. 2019, doi: 10.1109/MGRS.2019.2911100.
[3] P. Ribalta Lorenzo, L. Tulczyjew, M. Marcinkiewicz, and J. Nalepa, "Hyperspectral band selection using attention-based convolutional neural networks," IEEE Access, vol. 8, pp. 42,384–42,403, Mar. 2020, doi: 10.1109/ACCESS.2020.2977454.
[4] G. Giuffrida et al., "The Φ-Sat-1 mission: The first on-board deep neural network demonstrator for satellite earth observation," IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1–14, 2022, doi: 10.1109/TGRS.2021.3125567.
[5] G. Mateo-Garcia et al., "In-orbit demonstration of a re-trainable machine learning payload for processing optical imagery," Scientific Rep., early access, 2022, doi: 10.21203/rs.3.rs-1941984/v1.
[6] M. Paoletti, J. Haut, J. Plaza, and A. Plaza, "Deep learning classifiers for hyperspectral imaging: A review," ISPRS J. Photogrammetry Remote Sens., vol. 158, pp. 279–317, Dec. 2019, doi: 10.1016/j.isprsjprs.2019.09.006.
[7] J. Nalepa et al., "Towards resource-frugal deep convolutional neural networks for hyperspectral image segmentation," Microprocessors Microsystems, vol. 73, Mar. 2020, Art. no. 102994, doi: 10.1016/j.micpro.2020.102994.
[8] J. Nalepa et al., "Towards on-board hyperspectral satellite image segmentation: Understanding robustness of deep learning through simulating acquisition conditions," Remote Sens., vol. 13, no. 8, Apr. 2021, Art. no. 1532, doi: 10.3390/rs13081532.
[9] J. Nalepa, M. Myller, and M. Kawulok, "Training- and test-time data augmentation for hyperspectral image segmentation," IEEE Geosci. Remote Sens. Lett., vol. 17, no. 2, pp. 292–296, Feb. 2020, doi: 10.1109/LGRS.2019.2921011.
[10] J. Nalepa, M. Myller, and M. Kawulok, "Transfer learning for segmenting dimensionally reduced hyperspectral images," IEEE Geosci. Remote Sens. Lett., vol. 17, no. 7, pp. 1228–1232, Jul. 2020, doi: 10.1109/LGRS.2019.2942832.
[11] L. Tulczyjew, M. Kawulok, and J. Nalepa, "Unsupervised feature learning using recurrent neural nets for segmenting hyperspectral images," IEEE Geosci. Remote Sens. Lett., vol. 18, no. 12, pp. 2142–2146, Dec. 2021, doi: 10.1109/LGRS.2020.3013205.
[12] J. Castillo-Navarro, B. Le Saux, A. Boulch, and S. Lefèvre, "Energy-based models in earth observation: From generation to semisupervised learning," IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1–11, 2022, doi: 10.1109/TGRS.2021.3126428.
[13] S. Li, W. Song, L. Fang, Y. Chen, P. Ghamisi, and J. A. Benediktsson, "Deep learning for hyperspectral image classification: An overview," IEEE Trans. Geosci. Remote Sens., vol. 57, no. 9, pp. 6690–6709, Sep. 2019, doi: 10.1109/TGRS.2019.2907932.
[14] J. Yue, C. Zhou, W. Guo, H. Feng, and K. Xu, "Estimation of winter-wheat above-ground biomass using the wavelet analysis of unmanned aerial vehicle-based digital images and hyperspectral crop canopy images," Int. J. Remote Sens., vol. 42, no. 5, pp. 1602–1622, Mar. 2021, doi: 10.1080/01431161.2020.1826057.
[15] X. Jin et al., "Deep neural network algorithm for estimating maize biomass based on simulated Sentinel 2A vegetation indices and leaf area index," Crop J., vol. 8, no. 1, pp. 87–97, Feb. 2020, doi: 10.1016/j.cj.2019.06.005.
[16] B. Lu and Y. He, "Evaluating empirical regression, machine learning, and radiative transfer modelling for estimating vegetation chlorophyll content using bi-seasonal hyperspectral images," Remote Sens., vol. 11, no. 17, Aug. 2019, Art. no. 1979, doi: 10.3390/rs11171979.
[17] X. Wang et al., "Predicting soil organic carbon content in Spain by combining Landsat TM and ALOS PALSAR images," Int. J. Appl. Earth Observ. Geoinformation, vol. 92, Oct. 2020, Art. no. 102182, doi: 10.1016/j.jag.2020.102182.
[18] B. Lu, P. D. Dao, J. Liu, Y. He, and J. Shang, "Recent advances of hyperspectral imaging technology and applications in agriculture," Remote Sens., vol. 12, no. 16, Aug. 2020, Art. no. 2659, doi: 10.3390/rs12162659.
[19] C. Lin, A.-X. Zhu, Z. Wang, X. Wang, and R. Ma, "The refined spatiotemporal representation of soil organic matter based on remote images fusion of Sentinel-2 and Sentinel-3," Int. J. Appl. Earth Observ. Geoinformation, vol. 89, Jul. 2020, Art. no. 102094, doi: 10.1016/j.jag.2020.102094.
[20] N. E. Q. Silvero et al., "Soil variability and quantification based on Sentinel-2 and Landsat-8 bare soil images: A comparison," Remote Sens. Environ., vol. 252, Jan. 2021, Art. no. 112117, doi: 10.1016/j.rse.2020.112117.
[21] Y. Zhang et al., "Estimating the maize biomass by crop height and narrowband vegetation indices derived from UAV-based hyperspectral images," Ecological Indicators, vol. 129, Oct. 2021, Art. no. 107985, doi: 10.1016/j.ecolind.2021.107985.
[22] M. Battude et al., "Estimating maize biomass and yield over large areas using high spatial and temporal resolution Sentinel-2 like remote sensing data," Remote Sens. Environ., vol. 184, pp. 668–681, Oct. 2016, doi: 10.1016/j.rse.2016.07.030.
[23] Q. Zheng et al., "Integrating spectral information and meteorological data to monitor wheat yellow rust at a regional scale: A case study," Remote Sens., vol. 13, no. 2, Jan. 2021, Art. no. 278, doi: 10.3390/rs13020278.
[24] D. Wang et al., "Early detection of tomato spotted wilt virus by hyperspectral imaging and outlier removal auxiliary classifier generative adversarial nets (OR-AC-GAN)," Scientific Rep., vol. 9, no. 1, pp. 1–4, Mar. 2019, doi: 10.1038/s41598-019-40066-y.
[25] N. Gorretta, M. Nouri, A. Herrero, A. Gowen, and J.-M. Roger, "Early detection of the fungal disease 'apple scab' using SWIR hyperspectral imaging," in Proc. 10th Workshop Hyperspectral Imag. Signal Process., Evol. Remote Sens. (WHISPERS), 2019, pp. 1–4, doi: 10.1109/WHISPERS.2019.8921066.
[26] J. Wang et al., "Estimating leaf area index and aboveground biomass of grazing pastures using Sentinel-1, Sentinel-2 and Landsat images," ISPRS J. Photogrammetry Remote Sens., vol. 154, pp. 189–201, Aug. 2019, doi: 10.1016/j.isprsjprs.2019.06.007.
[27] C.-Y. Chiang, C. Barnes, P. Angelov, and R. Jiang, "Deep learning-based automated forest health diagnosis from aerial images," IEEE Access, vol. 8, pp. 144,064–144,076, Jul. 2020, doi: 10.1109/ACCESS.2020.3012417.
[28] L. Feng et al., "Investigation on data fusion of multisource spectral data for rice leaf diseases identification using machine learning methods," Frontiers Plant Sci., vol. 11, Nov. 2020, Art. no. 577063, doi: 10.3389/fpls.2020.577063.
[29] L. Feng, B. Wu, Y. He, and C. Zhang, "Hyperspectral imaging combined with deep transfer learning for rice disease detection," Frontiers Plant Sci., vol. 12, Sep. 2021, Art. no. 693521, doi: 10.3389/fpls.2021.693521.
[30] F. Zhang and G. Zhou, "Estimation of vegetation water content using hyperspectral vegetation indices: A comparison of crop water indicators in response to water stress treatments for summer maize," BMC Ecology, vol. 19, no. 18, pp. 1–12, Dec. 2019, doi: 10.1186/s12898-019-0233-0.
[31] M. Wocher, K. Berger, M. Danner, W. Mauser, and T. Hank, "Physically-based retrieval of canopy equivalent water thickness using hyperspectral data," Remote Sens., vol. 10, no. 12, Nov. 2018, Art. no. 1924, doi: 10.3390/rs10121924.
[32] F. J. García-Haro et al., "A global canopy water content product from AVHRR/Metop," ISPRS J. Photogrammetry Remote Sens., vol. 162, pp. 77–93, Apr. 2020, doi: 10.1016/j.isprsjprs.2020.02.007.
[33] B. Yang, H. Lin, and Y. He, "Data-driven methods for the estimation of leaf water and dry matter content: Performances, potential and limitations," Sensors, vol. 20, no. 18, Sep. 2020, Art. no. 5394, doi: 10.3390/s20185394.
[34] K. Rao, A. P. Williams, J. F. Flefil, and A. G. Konings, "SAR-enhanced mapping of live fuel moisture content," Remote Sens. Environ., vol. 245, Aug. 2020, Art. no. 111797, doi: 10.1016/j.rse.2020.111797.
[35] G. Mateo-Garcia et al., "Towards global flood mapping onboard low cost satellites with machine learning," Scientific Rep., vol. 11, no. 1, pp. 1–2, Mar. 2021, doi: 10.1038/s41598-021-86650-z.
[36] X. Jiang et al., "Rapid and large-scale mapping of flood inundation via integrating spaceborne synthetic aperture radar imagery with unsupervised deep learning," ISPRS J. Photogrammetry Remote Sens., vol. 178, pp. 36–50, Aug. 2021, doi: 10.1016/j.isprsjprs.2021.05.019.
[37] B. Peng et al., "Urban flood mapping with bitemporal multispectral imagery via a self-supervised learning framework," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 2001–2016, 2021, doi: 10.1109/JSTARS.2020.3047677.
[38] Y. Li, S. Martinis, and M. Wieland, "Urban flood mapping with an active self-learning convolutional neural network based on TerraSAR-X intensity and interferometric coherence," ISPRS J. Photogrammetry Remote Sens., vol. 152, pp. 178–191, Jun. 2019, doi: 10.1016/j.isprsjprs.2019.04.014.
[39] B. M. Wotton et al., "Forest fire occurrence and climate change in Canada," Int. J. Wildland Fire, vol. 19, no. 3, pp. 253–271, May 2010, doi: 10.1071/WF09002.
[40] G. Lazzeri, W. Frodella, G. Rossi, and S. Moretti, "Multitemporal mapping of post-fire land cover using multiplatform PRISMA hyperspectral and Sentinel-UAV multispectral data: Insights from case studies in Portugal and Italy," Sensors, vol. 21, no. 12, Jun. 2021, Art. no. 3982, doi: 10.3390/s21123982.
[41] S. Xulu, N. Mbatha, and K. Peerbhay, "Burned area mapping over the southern cape forestry region, South Africa using Sentinel data within GEE Cloud Platform," ISPRS Int. J. Geo-Inf., vol. 10, no. 8, Aug. 2021, Art. no. 511, doi: 10.3390/ijgi10080511.
[42] C. F. Waigl et al., "Fire detection and temperature retrieval using EO-1 Hyperion data over selected Alaskan boreal forest fires," Int. J. Appl. Earth Observ. Geoinformation, vol. 81, pp. 72–84, Sep. 2019, doi: 10.1016/j.jag.2019.03.004.
[43] S. Amici and A. Piscini, "Exploring PRISMA scene for fire detection: Case study of 2019 bushfires in Ben Halls Gap National Park, NSW, Australia," Remote Sens., vol. 13, no. 8, Apr. 2021, Art. no. 1410, doi: 10.3390/rs13081410.
[44] N. T. Toan, P. Thanh Cong, N. Q. Viet Hung, and J. Jo, "A deep learning approach for early wildfire detection from hyperspectral satellite images," in Proc. 7th Int. Conf. Robot Intell. Technol. Appl. (RiTA), 2019, pp. 38–45, doi: 10.1109/RITAPP.2019.8932740.
[45] M. Gouhier, M. Deslandes, Y. Guéhenneux, P. Hereil, P. Cacault, and B. Josse, "Operational response to volcanic ash risks using HOTVOLC satellite-based system and MOCAGE-accident model at the Toulouse VAAC," Atmosphere, vol. 11, no. 8, Aug. 2020, Art. no. 864, doi: 10.3390/atmos11080864.
[46] M. J. Zidikheri, C. Lucas, and R. J. Potts, "Quantitative verification and calibration of volcanic ash ensemble forecasts using satellite data," J. Geophys. Res. Atmos., vol. 123, no. 8, pp. 4135–4156, Apr. 2018, doi: 10.1002/2017JD027740.
[47] L. Arias, J. Cifuentes, M. Marín, F. Castillo, and H. Garcés, "Hyperspectral imaging retrieval using MODIS satellite sensors applied to volcanic ash clouds monitoring," Remote Sens., vol. 11, no. 11, Jun. 2019, Art. no. 1393, doi: 10.3390/rs11111393.
[48] L. Liu, C. Li, Y. Lei, J. Yin, and J. Zhao, "Volcanic ash cloud detection from MODIS image based on CPIWS method," Acta Geophys., vol. 65, no. 1, pp. 151–163, Mar. 2017, doi: 10.1007/s11600-017-0013-1.
[49] A. Hudak et al., "The relationship of multispectral satellite imagery to immediate fire effects," Fire Ecology, vol. 3, pp. 64–90, Jun. 2007, doi: 10.4996/fireecology.0301064.
[50] N. Anantrasirichai, J. Biggs, F. Albino, P. Hill, and D. Bull, "Application of machine learning to classification of volcanic deformation in routinely generated InSAR data," J. Geophys. Res. Solid Earth, vol. 123, no. 8, pp. 6592–6606, Aug. 2018, doi: 10.1029/2018JB015911.
[51] L. Liu and X.-K. Sun, "Volcanic ash cloud diffusion from remote sensing image using LSTM-CA method," IEEE Access, vol. 8, pp. 54,681–54,690, Mar. 2020, doi: 10.1109/ACCESS.2020.2981368.
[52] M. P. Del Rosso, A. Sebastianelli, D. Spiller, P. P. Mathieu, and S. L. Ullo, "On-board volcanic eruption detection through CNNs and satellite multispectral imagery," Remote Sens., vol. 13, no. 17, Sep. 2021, Art. no. 3479, doi: 10.3390/rs13173479.
[53] Y. Kim and S. Hong, "Deep learning-generated nighttime reflectance and daytime radiance of the midwave infrared band of a geostationary satellite," Remote Sens., vol. 11, no. 22, Nov. 2019, Art. no. 2713, doi: 10.3390/rs11222713.
[54] W. Qi, M. Wei, W. Yang, C. Xu, and C. Ma, "Automatic mapping of landslides by the ResU-Net," Remote Sens., vol. 12, no. 15, Aug. 2020, Art. no. 2487, doi: 10.3390/rs12152487.
[55] C. Ye et al., "Landslide detection of hyperspectral remote sensing data based on deep learning with constrains," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 12, no. 12, pp. 5047–5060, Dec. 2019, doi: 10.1109/JSTARS.2019.2951725.
[56] Z. Ma, G. Mei, and F. Piccialli, "Machine learning for landslides prevention: A survey," Neural Comput. Appl., vol. 33, no. 17, pp. 10,881–10,907, Sep. 2021, doi: 10.1007/s00521-020-05529-8.
[57] Y. Li et al., "Accurate prediction of earthquake-induced landslides based on deep learning considering landslide source area," Remote Sens., vol. 13, no. 17, Aug. 2021, Art. no. 3436, doi: 10.3390/rs13173436.
[58] B. Adriano, J. Xia, G. Baier, N. Yokoya, and S. Koshimura, "Multi-source data fusion based on ensemble learning for rapid building damage mapping during the 2018 Sulawesi earthquake and tsunami in Palu, Indonesia," Remote Sens., vol. 11, no. 7, Apr. 2019, Art. no. 886, doi: 10.3390/rs11070886.
[59] M. Pollino et al., "Assessing earthquake-induced urban rubble by means of multiplatform remotely sensed data," ISPRS Int. J. Geo-Inf., vol. 9, no. 4, Apr. 2020, Art. no. 262, doi: 10.3390/ijgi9040262.
[60] M. Hasanlou, R. Shah-Hosseini, S. T. Seydi, S. Karimzadeh, and M. Matsuoka, "Earthquake damage region detection by multitemporal coherence map analysis of radar and multispectral imagery," Remote Sens., vol. 13, no. 6, Mar. 2021, Art. no. 1195, doi: 10.3390/rs13061195.
[61] U. Bhangale, S. Durbha, A. Potnis, and R. Shinde, "Rapid earthquake damage detection using deep learning from VHR remote sensing images," in Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), 2019, pp. 2654–2657, doi: 10.1109/IGARSS.2019.8898147.
[62] M. Ji, L. Liu, and M. Buchroithner, "Identifying collapsed buildings using post-earthquake satellite imagery and convolutional neural networks: A case study of the 2010 Haiti Earthquake," Remote Sens., vol. 10, no. 11, Oct. 2018, Art. no. 1689, doi: 10.3390/rs10111689.
[63] P. Xiong et al., "Towards advancing the earthquake forecasting by machine learning of satellite data," Sci. Total Environ., vol. 771, Jun. 2021, Art. no. 145256, doi: 10.1016/j.scitotenv.2021.145256.
[64] X. Yan, Z. Zang, N. Luo, Y. Jiang, and Z. Li, "New interpretable deep learning model to monitor real-time PM2.5 concentrations from satellite data," Environ. Int., vol. 144, Nov. 2020, Art. no. 106060, doi: 10.1016/j.envint.2020.106060.
[65] H. Soydan, A. Koz, and H. Şebnem Düzgün, "Secondary iron mineral detection via hyperspectral unmixing analysis with Sentinel-2 imagery," Int. J. Appl. Earth Observ. Geoinformation, vol. 101, Sep. 2021, Art. no. 102343, doi: 10.1016/j.jag.2021.102343.
[66] A. Riaza, J. Buzzi, E. García-Meléndez, V. Carrère, A. Sarmiento, and A. Müller, "Monitoring acidic water in a polluted river with hyperspectral remote sensing (HyMap)," Hydrological Sci. J., vol. 60, no. 6, pp. 1064–1077, Jun. 2015, doi: 10.1080/02626667.2014.899704.
[67] F. Wang, J. Gao, and Y. Zha, "Hyperspectral sensing of heavy metals in soil and vegetation: Feasibility and challenges," ISPRS J. Photogrammetry Remote Sens., vol. 136, pp. 73–84, Feb. 2018, doi: 10.1016/j.isprsjprs.2017.12.003.
[68] T. Shi et al., "Proximal and remote sensing techniques for mapping of soil contamination with heavy metals," Appl. Spectrosc. Rev., vol. 53, no. 10, pp. 783–805, Nov. 2018, doi: 10.1080/05704928.2018.1442346.
[69] Q. Li et al., "Estimating the impact of COVID-19 on the PM2.5 levels in China with a satellite-driven machine learning model," Remote Sens., vol. 13, no. 7, Apr. 2021, Art. no. 1351, doi: 10.3390/rs13071351.
[70] A. Basit, B. M. Ghauri, and M. A. Qureshi, "Estimation of ground level PM2.5 by using MODIS satellite data," in Proc. 6th Int. Conf. Aerosp. Sci. Eng. (ICASE), 2019, pp. 1–5, doi: 10.1109/ICASE48783.2019.9059157.
[71] H. Feng, J. Li, H. Feng, E. Ning, and Q. Wang, "A high-resolution index suitable for multi-pollutant monitoring in urban areas," Sci. Total Environ., vol. 772, Jun. 2021, Art. no. 145428, doi: 10.1016/j.scitotenv.2021.145428.
[72] B. Lyu, Y. Zhang, and Y. Hu, "Improving PM2.5 air quality model forecasts in China using a bias-correction framework," Atmosphere, vol. 8, no. 8, Aug. 2017, Art. no. 147, doi: 10.3390/atmos8080147.
[73] H. Shen et al., "Integration of remote sensing and social sensing data in a deep learning framework for hourly urban PM2.5 mapping," Int. J. Environ. Res. Public Health, vol. 16, no. 21, Nov. 2019, Art. no. 4102, doi: 10.3390/ijerph16214102.
[74] Q. Wang, H. Feng, H. Feng, Y. Yu, J. Li, and E. Ning, "The impacts of road traffic on urban air quality in Jinan based GWR and remote sensing," Scientific Rep., vol. 11, Jul. 2021, Art. no. 15512, doi: 10.1038/s41598-021-94159-8.
[75] S. Hou, F. Zhai, and F. Liu, "Inversion of AOD and PM2.5 mass concentration in Taihu Lake area based on MODIS data," IOP Conf. Ser., Mater. Sci. Eng., vol. 569, no. 2, Jul. 2019, Art. no. 022037, doi: 10.1088/1757-899X/569/2/022037.
[76] V. Kopačková, "Mapping acid mine drainage (AMD) and acid sulfate soils using Sentinel-2 data," in Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), 2019, pp. 5682–5685, doi: 10.1109/IGARSS.2019.8900505.
[77] R. Jackisch, S. Lorenz, R. Zimmermann, R. Möckel, and R. Gloaguen, "Drone-borne hyperspectral monitoring of acid mine drainage: An example from the Sokolov Lignite district," Remote Sens., vol. 10, no. 3, Mar. 2018, Art. no. 385, doi: 10.3390/rs10030385.
[78] A. Seifi, M. Hosseinjanizadeh, H. Ranjbar, and M. Honarmand, "Identification of acid mine drainage potential using Sentinel 2a imagery and field data," Mine Water Environ., vol. 38, pp. 707–717, Dec. 2019, doi: 10.1007/s10230-019-00632-2.
[79] Z. Wang, Y. Xu, Z. Zhang, and Y. Zhang, "Review: Acid mine drainage (AMD) in abandoned coal mines of Shanxi, China," Water, vol. 13, no. 1, 2021, Art. no. 8, doi: 10.3390/w13010008.
[80] F. Kruse, J. Boardman, and J. Huntington, "Comparison of airborne hyperspectral data and EO-1 Hyperion for mineral mapping," IEEE Trans. Geosci. Remote Sens., vol. 41, no. 6, pp. 1388–1400, Jun. 2003, doi: 10.1109/TGRS.2003.812908.
[81] Y. Zhong, X. Wang, S. Wang, and L. Zhang, "Advances in spaceborne hyperspectral remote sensing in China," Geo-Spatial Inf. Sci., vol. 24, no. 1, pp. 95–120, Jan. 2021, doi: 10.1080/10095020.2020.1860653.
[82] G. E. Davies and W. M. Calvin, "Mapping acidic mine waste with seasonal airborne hyperspectral imagery at varying spatial scales," Environ. Earth Sci., vol. 76, Jun. 2017, Art. no. 432, doi: 10.1007/s12665-017-6763-x.
[83] H. Flores et al., "UAS-based hyperspectral environmental monitoring of acid mine drainage affected waters," Minerals, vol. 11, no. 2, Feb. 2021, Art. no. 182, doi: 10.3390/min11020182.
[84] B. S. Acharya and G. Kharel, "Acid mine drainage from coal mining in the United States – An overview," J. Hydrol., vol. 588, Sep. 2020, Art. no. 125061, doi: 10.1016/j.jhydrol.2020.125061.
[85] D. D. Gbedzi et al., "Impact of mining on land use land cover change and water quality in the Asutifi North District of Ghana, West Africa," Environ. Challenges, vol. 6, Jan. 2022, Art. no. 100441, doi: 10.1016/j.envc.2022.100441.
[86] W. H. Farrand and S. Bhattacharya, "Tracking acid generating minerals and trace metal spread from mines using hyperspectral data: Case studies from northwest India," Int. J. Remote Sens., vol. 42, no. 8, pp. 2920–2939, Apr. 2021, doi: 10.1080/01431161.2020.1864057.
[87] A. Riaza, J. Buzzi, E. García-Meléndez, V. Carrère, and A. Müller, "Monitoring the extent of contamination from acid mine drainage in the Iberian Pyrite Belt (SW Spain) using hyperspectral imagery," Remote Sens., vol. 3, no. 10, pp. 2166–2186, Oct. 2011, doi: 10.3390/rs3102166.
[88] M. A. Isgró, M. D. Basallote, and L. Barbero, "Unmanned aerial system-based multispectral water quality monitoring in the Iberian Pyrite Belt (SW Spain)," Mine Water Environ., vol. 41, no. 1, pp. 30–41, Mar. 2022, doi: 10.1007/s10230-021-00837-4.
[89] S. V. Pyankov, N. G. Maximovich, E. A. Khayrulina, O. A. Berezina, A. N. Shikhov, and R. K. Abdullin, "Monitoring acid mine drainage's effects on surface water in the Kizel Coal Basin with Sentinel-2 satellite images," Mine Water Environ., vol. 40, no. 3, pp. 606–621, Sep. 2021, doi: 10.1007/s10230-021-00761-7.
[90] S. G. Tesfamichael and A. Ndlovu, "Utility of ASTER and Landsat for quantifying hydrochemical concentrations in abandoned gold mining," Sci. Total Environ., vol. 618, pp. 1560–1571, Mar. 2018, doi: 10.1016/j.scitotenv.2017.09.335.
[91] C. Rossi et al., "Assessment of a conservative mixing model for the evaluation of constituent behavior below river confluences, Elqui River Basin, Chile," River Res. Appl., vol. 37, no. 7, pp. 967–978, Sep. 2021, doi: 10.1002/rra.3823.
[92] D. Gómez et al., "A new approach to monitor water quality in the Menor sea (Spain) using satellite data and machine learning methods," Environ. Pollut., vol. 286, Oct. 2021, Art. no. 117489, doi: 10.1016/j.envpol.2021.117489.
[93] Y. Yang, Q. Cui, P. Jia, J. Liu, and H. Bai, "Estimating the heavy metal concentrations in topsoil in the Daxigou mining area, China, using multispectral satellite imagery," Scientific Rep., vol. 11, no. 1, Jun. 2021, Art. no. 11718, doi: 10.1038/s41598-021-91103-8.
[94] F. Mirzaei et al., "Modeling the distribution of heavy metals in lands irrigated by wastewater using satellite images of Sentinel-2," Egyptian J. Remote Sens. Space Sci., vol. 24, no. 3, pp. 537–546, Dec. 2021, doi: 10.1016/j.ejrs.2021.03.002.
[95] Z. Liu, Y. Lu, Y. Peng, L. Zhao, G. Wang, and Y. Hu, "Estimation of soil heavy metal content using hyperspectral data," Remote Sens., vol. 11, no. 12, Jun. 2019, Art. no. 1464, doi: 10.3390/rs11121464.
[96] C. Xiao, B. Fu, H. Shui, Z. Guo, and J. Zhu, "Detecting the sources of methane emission from oil shale mining and processing using airborne hyperspectral data," Remote Sens., vol. 12, no. 3, Feb. 2020, Art. no. 537, doi: 10.3390/rs12030537.
[97] C. Ong et al., "Imaging spectroscopy for the detection, assessment and monitoring of natural and anthropogenic hazards," Surv. Geophys., vol. 40, pp. 431–470, May 2019, doi: 10.1007/s10712-019-09523-1.
[98] M. D. Foote et al., "Fast and accurate retrieval of methane concentration from imaging spectrometer data using sparsity prior," IEEE Trans. Geosci. Remote Sens., vol. 58, no. 9, pp. 6480–6492, Sep. 2020, doi: 10.1109/TGRS.2020.2976888.
[99] A. Räsänen et al., "Predicting catchment-scale methane fluxes with multi-source remote sensing," Landscape Ecology, vol. 36, pp. 1177–1195, Apr. 2021, doi: 10.1007/s10980-021-01194-x.
[100] H. Boesch et al., "Monitoring greenhouse gases from space," Remote Sens., vol. 13, no. 14, Jul. 2021, Art. no. 2700, doi: 10.3390/rs13142700.
[101] L. Guanter et al., "Mapping methane point emissions with the PRISMA spaceborne imaging spectrometer," Remote Sens. Environ., vol. 265, Nov. 2021, Art. no. 112671, doi: 10.1016/j.rse.2021.112671.
[102] K. Kozicka et al., "Spatial-temporal changes of methane content in the atmosphere for selected countries and regions with high methane emission from rice cultivation," Atmosphere, vol. 12, no. 11, Oct. 2021, Art. no. 1382, doi: 10.3390/atmos12111382.
[103] N. Karimi, K. T. W. Ng, and A. Richter, "Prediction of fugitive landfill gas hotspots using a random forest algorithm and Sentinel-2 data," Sustain. Cities Soc., vol. 73, Oct. 2021, Art. no. 103097, doi: 10.1016/j.scs.2021.103097.
[104] A. K. Ayasse et al., "Methane mapping with future satellite imaging spectrometers," Remote Sens., vol. 11, no. 24, Dec. 2019, Art. no. 3054, doi: 10.3390/rs11243054.
[105] D. H. Cusworth et al., "Detecting high-emitting methane sources in oil/gas fields using satellite observations," Atmos. Chemistry Phys., vol. 18, no. 23, pp. 16,885–16,896, Nov. 2018, doi: 10.5194/acp-18-16885-2018.
[106] J. A. de Gouw et al., "Daily satellite observations of methane from oil and gas production regions in the United States," Scientific Rep., vol. 10, Jan. 2020, Art. no. 1379, doi: 10.1038/s41598-020-57678-4.
[107] Y. Ren, C. Zhu, and S. Xiao, "Deformable faster R-CNN with aggregating multi-layer features for partially occluded object detection in optical remote sensing images," Remote Sens., vol. 10, no. 9, Sep. 2018, Art. no. 1470, doi: 10.3390/rs10091470.
[108] A. K. Thorpe et al., "Methane emissions from underground gas storage in California," Environ. Res. Lett., vol. 15, no. 4, Apr. 2020, Art. no. 045005, doi: 10.1088/1748-9326/ab751d.
[109] D. R. Thompson et al., "Space-based remote imaging spectroscopy of the Aliso Canyon CH4 superemitter," Geophys. Res. Lett., vol. 43, no. 12, pp. 6571–6578, Jun. 2016, doi: 10.1002/2016GL069079.
[110] J. R. Jambeck et al., "Plastic waste inputs from land into the ocean," Science, vol. 347, no. 6223, pp. 768–771, Feb. 2015, doi: 10.1126/science.1260352.
[111] S. B. Borrelle et al., "Predicted growth in plastic waste exceeds efforts to mitigate plastic pollution," Science, vol. 369, no. 6510, pp. 1515–1518, Sep. 2020, doi: 10.1126/science.aba3656.
[112] L. Buhl-Mortensen and P. Buhl-Mortensen, "Marine litter in the Nordic Seas: Distribution composition and abundance," Mar. Pollut. Bull., vol. 125, no. 1, pp. 260–270, Dec. 2017, doi: 10.1016/j.marpolbul.2017.08.048.
[113] S. P. Garaba et al., "Sensing ocean plastics with an airborne hyperspectral shortwave infrared imager," Environ. Sci. Technol., vol. 52, no. 20, pp. 11,699–11,707, Sep. 2018, doi: 10.1021/acs.est.8b02855.
[114] K. Topouzelis, D. Papageorgiou, G. Suaria, and S. Aliani, "Floating marine litter detection algorithms and techniques using optical remote sensing data: A review," Mar. Pollut. Bull., vol. 170, Sep. 2021, Art. no. 112675, doi: 10.1016/j.marpolbul.2021.112675.
[115] A. Hueni and S. Bertschi, “Detection of sub-pixel plastic abundance on water surfaces using airborne imaging spectroscopy,” in Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), 2020, pp. 6325–6328, doi: 10.1109/IGARSS39084.2020.9323556.
[116] B. Basu, S. Sannigrahi, A. Sarkar Basu, and F. Pilla, “Development of novel classification algorithms for detection of floating plastic debris in coastal waterbodies using multispectral Sentinel-2 remote sensing imagery,” Remote Sens., vol. 13, no. 8, Apr. 2021, Art. no. 1598, doi: 10.3390/rs13081598.
[117] A. Jamali and M. Mahdianpari, “A cloud-based framework for large-scale monitoring of ocean plastics using multi-spectral satellite imagery and generative adversarial network,” Water, vol. 13, no. 18, Sep. 2021, Art. no. 2553, doi: 10.3390/w13182553.
[118] M. Kremezi et al., “Pansharpening PRISMA data for marine plastic litter detection using plastic indexes,” IEEE Access, vol. 9, pp. 61,955–61,971, Apr. 2021, doi: 10.1109/ACCESS.2021.3073903.
[119] L. Biermann, D. Clewley, V. Martinez-Vicente, and K. Topouzelis, “Finding plastic patches in coastal waters using optical satellite data,” Scientific Rep., vol. 10, no. 1, pp. 2045–2322, Apr. 2020, doi: 10.1038/s41598-020-62298-z.
[120] J. Mifdal, N. Longépé, and M. Rußwurm, “Towards detecting floating objects on a global scale with learned spatial features using Sentinel 2,” ISPRS Ann. Photogrammetry Remote Sens. Spatial Inf. Sci., vol. V-3-2021, pp. 285–293, Jun. 2021, doi: 10.5194/isprs-annals-V-3-2021-285-2021.
[121] S. P. Garaba and H. M. Dierssen, “An airborne remote sensing case study of synthetic hydrocarbon detection using short wave infrared absorption features identified from marine-harvested macro- and microplastics,” Remote Sens. Environ., vol. 205, pp. 224–235, Feb. 2018, doi: 10.1016/j.rse.2017.11.023.
[122] J. Zhang, H. Feng, Q. Luo, Y. Li, J. Wei, and J. Li, “Oil spill detection in quad-polarimetric SAR images using an advanced convolutional neural network based on SuperPixel model,” Remote Sens., vol. 12, no. 6, Mar. 2020, Art. no. 944, doi: 10.3390/rs12060944.
[123] M. Krestenitis, G. Orfanidis, K. Ioannidis, K. Avgerinakis, S. Vrochidis, and I. Kompatsiaris, “Oil spill identification from satellite images using deep neural networks,” Remote Sens., vol. 11, no. 15, Jul. 2019, Art. no. 1762, doi: 10.3390/rs11151762.
[124] N. Longépé et al., “Polluter identification with spaceborne radar imagery, AIS and forward drift modeling,” Mar. Pollut. Bull., vol. 101, no. 2, pp. 826–833, Dec. 2015, doi: 10.1016/j.marpolbul.2015.08.006.
[125] K. Zeng and Y. Wang, “A deep convolutional neural network for oil spill detection from spaceborne SAR images,” Remote Sens., vol. 12, no. 6, Mar. 2020, Art. no. 1015, doi: 10.3390/rs12061015.
[126] S. K. Chaturvedi, S. Banerjee, and S. Lele, “An assessment of oil spill detection using Sentinel 1 SAR-C images,” J. Ocean Eng. Sci., vol. 5, no. 2, pp. 116–135, Jun. 2020, doi: 10.1016/j.joes.2019.09.004.
[127] S. Tong, X. Liu, Q. Chen, Z. Zhang, and G. Xie, “Multi-feature based ocean oil spill detection for polarimetric SAR data using random forest and the self-similarity parameter,” Remote Sens., vol. 11, no. 4, Feb. 2019, Art. no. 451, doi: 10.3390/rs11040451.
[128] D. Mera, V. Bolon-Canedo, J. Cotos, and A. Alonso-Betanzos, “On the use of feature selection to improve the detection of sea oil spills in SAR images,” Comput. Geosci., vol. 100, pp. 166–178, Mar. 2017, doi: 10.1016/j.cageo.2016.12.013.
[129] S. Temitope Yekeen and A.-L. Balogun, “Advances in remote sensing technology, machine learning and deep learning for marine oil spill detection, prediction and vulnerability assessment,” Remote Sens., vol. 12, no. 20, 2020, Art. no. 3416, doi: 10.3390/rs12203416.
[130] A.-L. Balogun, S. T. Yekeen, B. Pradhan, and O. F. Althuwaynee, “Spatio-temporal analysis of oil spill impact and recovery pattern of coastal vegetation and wetland using multispectral satellite Landsat 8-OLI imagery and machine learning models,” Remote Sens., vol. 12, no. 7, Apr. 2020, Art. no. 1225, doi: 10.3390/rs12071225.
[131] P. Nie, H. Wu, J. Xu, L. Wei, H. Zhu, and L. Ni, “Thermal pollution monitoring of Tianwan nuclear power plant for the past 20 years based on Landsat remote sensed data,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 6146–6155, Jun. 2021, doi: 10.1109/JSTARS.2021.3088529.
[132] J. M. Torres Palenzuela, L. G. Vilas, F. M. Bellas Aláez, and Y. Pazos, “Potential application of the new Sentinel satellites for monitoring of harmful algal blooms in the Galician aquaculture,” Thalassas Int. J. Mar. Sci., vol. 36, pp. 85–93, Apr. 2020, doi: 10.1007/s41208-019-00180-0.
[133] S. Stroming, M. Robertson, B. Mabee, Y. Kuwayama, and B. Schaeffer, “Quantifying the human health benefits of using satellite information to detect cyanobacterial harmful algal blooms and manage recreational advisories in US lakes,” GeoHealth, vol. 4, no. 9, Sep. 2020, Art. no. e2020GH000254, doi: 10.1029/2020GH000254.
[134] Z. Kang et al., “Phaeocystis globosa bloom monitoring: Based on P. globosa induced seawater viscosity modification adjacent to a nuclear power plant in Qinzhou Bay, China,” J. Ocean Univ. China, vol. 19, no. 5, pp. 1207–1220, Oct. 2020, doi: 10.1007/s11802-020-4481-6.
[135] J. Jankowiak, T. Hattenrath-Lehmann, B. J. Kramer, M. Ladds, and C. J. Gobler, “Deciphering the effects of nitrogen, phosphorus, and temperature on cyanobacterial bloom intensification, diversity, and toxicity in Western Lake Erie,” Limnol. Oceanogr., vol. 64, no. 3, pp. 1347–1370, May 2019, doi: 10.1002/lno.11120.
[136] R. Xia et al., “River algal blooms are well predicted by antecedent environmental conditions,” Water Res., vol. 185, Oct. 2020, Art. no. 116221, doi: 10.1016/j.watres.2020.116221.
[137] J. Pyo et al., “A convolutional neural network regression for quantifying cyanobacteria using hyperspectral imagery,” Remote Sens. Environ., vol. 233, Nov. 2019, Art. no. 111350, doi: 10.1016/j.rse.2019.111350.
[138] D. Tang, D. R. Kester, Z. Wang, J. Lian, and H. Kawamura, “AVHRR satellite remote sensing and shipboard measurements of the thermal plume from the Daya Bay nuclear power station, China,” Remote Sens. Environ., vol. 84, no. 4, pp. 506–515, Apr. 2003, doi: 10.1016/S0034-4257(02)00149-9.
[139] C. V. Rodríguez-Benito, G. Navarro, and I. Caballero, “Using Copernicus Sentinel-2 and Sentinel-3 data to monitor harmful algal blooms in Southern Chile during the COVID-19 lockdown,” Mar. Pollut. Bull., vol. 161, Dec. 2020, Art. no. 111722, doi: 10.1016/j.marpolbul.2020.111722.
[140] P. R. Hill, A. Kumar, M. Temimi, and D. R. Bull, “HABNet: Machine learning, remote sensing-based detection of harmful algal blooms,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 13, pp. 3229–3239, Jun. 2020, doi: 10.1109/JSTARS.2020.3001445.
[141] K. Abdelmalik, “Role of statistical remote sensing for inland water quality parameters prediction,” Egyptian J. Remote Sens. Space Sci., vol. 21, no. 2, pp. 193–200, Sep. 2018, doi: 10.1016/j.ejrs.2016.12.002.
[142] A. Sahay et al., “Empirically derived Coloured Dissolved Organic Matter absorption coefficient using in-situ and Sentinel 3/OLCI in coastal waters of India,” Int. J. Remote Sens., vol. 43, no. 4, pp. 1430–1450, Feb. 2022, doi: 10.1080/01431161.2022.2040754.
[143] Y. Q. Tian et al., “Estimating of chromophoric dissolved organic matter (CDOM) with in-situ and satellite hyperspectral remote sensing technology,” in Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), 2012, pp. 2040–2042, doi: 10.1109/IGARSS.2012.6350975.
[144] N. Cherukuru et al., “Estimating dissolved organic carbon concentration in turbid coastal waters using optical remote sensing observations,” Int. J. Appl. Earth Observ. Geoinformation, vol. 52, pp. 149–154, Oct. 2016, doi: 10.1016/j.jag.2016.06.010.
[145] K. Zolfaghari et al., “Impact of spectral resolution on quantifying cyanobacteria in lakes and reservoirs: A machine-learning assessment,” IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1–20, 2022, doi: 10.1109/TGRS.2021.3114635.
[146] J. Arellano-Verdejo, H. E. Lazcano-Hernandez, and N. Cabanillas-Terán, “ERISNet: Deep neural network for Sargassum detection along the coastline of the Mexican Caribbean,” PeerJ, vol. 7, May 2019, Art. no. e6842, doi: 10.7717/peerj.6842.
[147] J. Shin et al., “Sargassum detection using machine learning models: A case study with the first 6 months of GOCI-II imagery,” Remote Sens., vol. 13, no. 23, Jan. 2021, Art. no. 4844, doi: 10.3390/rs13234844.
[148] H. Dierssen, A. Chlus, and B. Russell, “Hyperspectral discrimination of floating mats of seagrass wrack and the macroalgae Sargassum in coastal waters of Greater Florida Bay using airborne remote sensing,” Remote Sens. Environ., vol. 167, pp. 247–258, Sep. 2015, doi: 10.1016/j.rse.2015.01.027.
[149] Z. Zhang, D. Huisingh, and M. Song, “Exploitation of trans-Arctic maritime transportation,” J. Cleaner Prod., vol. 212, pp. 960–973, Mar. 2019, doi: 10.1016/j.jclepro.2018.12.070.
[150] A. A. Kurekin et al., “Operational monitoring of illegal fishing in Ghana through exploitation of satellite earth observation and AIS data,” Remote Sens., vol. 11, no. 3, Feb. 2019, Art. no. 293, doi: 10.3390/rs11030293.
[151] M. Reggiannini et al., “Remote sensing for maritime prompt monitoring,” J. Mar. Sci. Eng., vol. 7, no. 7, Jun. 2019, Art. no. 202, doi: 10.3390/jmse7070202.
[152] J. Alghazo, A. Bashar, G. Latif, and M. Zikria, “Maritime ship detection using convolutional neural networks from satellite images,” in Proc. 10th IEEE Int. Conf. Commun. Syst. Netw. Technol. (CSNT), 2021, pp. 432–437, doi: 10.1109/CSNT51715.2021.9509628.
[153] M. A. El-Alfy, A. F. Hasballah, H. T. Abd El-Hamid, and A. M. El-Zeiny, “Toxicity assessment of heavy metals and organochlorine pesticides in freshwater and marine environments, Rosetta area, Egypt using multiple approaches,” Sustain. Environ. Res., vol. 19, no. 1, Dec. 2019, Art. no. 19, doi: 10.1186/s42834-019-0020-9.
[154] N. Longépé et al., “Completing fishing monitoring with spaceborne Vessel Detection System (VDS) and Automatic Identification System (AIS) to assess illegal fishing in Indonesia,” Mar. Pollut. Bull., vol. 131, pp. 33–39, Jun. 2018, doi: 10.1016/j.marpolbul.2017.10.016.
[155] R. Pelich, N. Longépé, G. Mercier, G. Hajduch, and R. Garello, “AIS-based evaluation of target detectors and SAR sensors characteristics for maritime surveillance,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 8, no. 8, pp. 3892–3901, Aug. 2015, doi: 10.1109/JSTARS.2014.2319195.
[156] Y.-L. Chang, A. Anagaw, L. Chang, Y. C. Wang, C.-Y. Hsiao, and W.-H. Lee, “Ship detection based on YOLOv2 for SAR imagery,” Remote Sens., vol. 11, no. 7, Apr. 2019, Art. no. 786, doi: 10.3390/rs11070786.
[157] G. Melillos et al., “The use of remote sensing for maritime surveillance for security and safety in Cyprus,” in Proc. SPIE Detection Sens. Mines, Explosive Objects, Obscured Targets XXV, 2020, vol. 11418, pp. 141–152, doi: 10.1117/12.2567102.
[158] H. Heiselberg, “Ship-iceberg classification in SAR and multispectral satellite images with neural networks,” Remote Sens., vol. 12, no. 15, Jul. 2020, Art. no. 2353, doi: 10.3390/rs12152353.
[159] Y. Ren, C. Zhu, and S. Xiao, “Small object detection in optical remote sensing images via modified faster R-CNN,” Appl. Sci., vol. 8, no. 5, May 2018, Art. no. 813, doi: 10.3390/app8050813.
[160] M. Reggiannini and L. Bedini, “Multi-sensor satellite data processing for marine traffic understanding,” Electronics, vol. 8, no. 2, Feb. 2019, Art. no. 152, doi: 10.3390/electronics8020152.
[161] A. Rasul, “An investigation into the location of the crashed aircraft through the use of free satellite images,” J. Photogrammetry, Remote Sens. Geoinformation Sci., vol. 87, pp. 119–122, Sep. 2019, doi: 10.1007/s41064-019-00074-z.
[162] L. Shuxin, Z. Zhilong, and L. Biao, “A plane target detection algorithm in remote sensing images based on deep learning network technology,” J. Phys. Conf. Ser., vol. 960, no. 1, Jan. 2018, Art. no. 012025, doi: 10.1088/1742-6596/960/1/012025.
[163] J. Nalepa, M. Myller, and M. Kawulok, “Validating hyperspectral image segmentation,” IEEE Geosci. Remote Sens. Lett., vol. 16, no. 8, pp. 1264–1268, Aug. 2019, doi: 10.1109/LGRS.2019.2895697.
[164] S. Kapoor and A. Narayanan, “Leakage and the reproducibility crisis in ML-based science,” 2022. [Online]. Available: https://arxiv.org/abs/2207.07048
[165] R. Vitulli et al., “CHIME: The first AI-powered ESA operational mission,” in Proc. Small Satell. Syst. Services 4S Symp., 2022, pp. 1–8.
[166] “Copernicus hyperspectral imaging mission for the environment – Mission requirements document,” European Space Agency, Paris, France, ESA-EOPSM-CHIM-MRD-321, issue 3.0, 2021.
[167] M. Rast, J. Nieke, J. Adams, C. Isola, and F. Gascon, “Copernicus hyperspectral imaging mission for the environment (CHIME),” in Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), 2021, pp. 108–111, doi: 10.1109/IGARSS47720.2021.9553319.
[168] D. Lebedeff, M. Foulon, R. Camarero, R. Vitulli, and Y. Bobichon, “On-board cloud detection and selective spatial/spectral compression based on CCSDS 123.0-B-2 for hyperspectral missions,” in Proc. Int. Workshop On-Board Payload Data Compression (OBPDC), 2020, pp. 1–8.
[169] “Recommendation for space data system standards CCSDS 123.0-B-2 – Low-complexity lossless and near-lossless multispectral and hyperspectral image compression – Blue book,” CCSDS Secretariat, National Aeronautics and Space Administration, Washington, DC, USA, 2021. [Online]. Available: https://public.ccsds.org/Pubs/123x0b2c3.pdf
[170] Y. Barrios, P. Rodríguez, A. Sánchez, M. González, L. Berrojo, and R. Sarmiento, “Implementation of cloud detection and processing algorithms for CCSDS-compliant hyperspectral image compression on the CHIME mission,” in Proc. Int. Workshop On-Board Payload Data Compression (OBPDC), 2020, pp. 1–8.
[171] Y. Liu, Y. Yang, W. Jing, and X. Yue, “Comparison of different machine learning approaches for monthly satellite-based soil moisture downscaling over Northeast China,” Remote Sens., vol. 10, no. 1, Dec. 2018, Art. no. 31, doi: 10.3390/rs10010031.
[172] T. Valentijn, J. Margutti, M. van den Homberg, and J. Laaksonen, “Multi-hazard and spatial transferability of a CNN for automated building damage assessment,” Remote Sens., vol. 12, no. 17, Sep. 2020, Art. no. 2839, doi: 10.3390/rs12172839.
[173] Y. Michael et al., “Forecasting fire risk with machine learning and dynamic information derived from satellite vegetation index time-series,” Sci. Total Environ., vol. 764, Apr. 2021, Art. no. 142844, doi: 10.1016/j.scitotenv.2020.142844.
[174] C. Corradino et al., “Mapping recent lava flows at Mount Etna using multispectral Sentinel-2 images and machine learning techniques,” Remote Sens., vol. 11, no. 16, Aug. 2019, Art. no. 1916, doi: 10.3390/rs11161916.
[175] N. Wang et al., “Identification of the debris flow process types within catchments of Beijing mountainous area,” Water, vol. 11, no. 4, Mar. 2019, Art. no. 638, doi: 10.3390/w11040638.
[176] S. Ullo et al., “Landslide geohazard assessment with convolutional neural networks using Sentinel-2 imagery data,” in Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), 2019, pp. 9646–9649, doi: 10.1109/IGARSS.2019.8898632.
[177] J. E. Nichol, M. Bilal, M. A. Ali, and Z. Qiu, “Air pollution scenario over China during COVID-19,” Remote Sens., vol. 12, no. 13, Jun. 2020, Art. no. 2100, doi: 10.3390/rs12132100.
[178] D. J. Varon, D. Jervis, J. McKeever, I. Spence, D. Gains, and D. J. Jacob, “High-frequency monitoring of anomalous methane point sources with multispectral Sentinel-2 satellite observations,” Atmos. Meas. Techn., vol. 14, no. 4, pp. 2771–2785, Apr. 2021, doi: 10.5194/amt-14-2771-2021.
Onboard Information Fusion for Multisatellite Collaborative Observation GUI GAO, LIBO YAO, WENFENG LI, LINLIN ZHANG, AND MAOLIN ZHANG
Summary, challenges, and perspectives
Digital Object Identifier 10.1109/MGRS.2023.3274301
Date of current version: 30 June 2023

Onboard information fusion for multisatellites, which is based on a spatial computing mode, can improve satellite capabilities, such as spatial–temporal coverage, detection accuracy, recognition confidence, positioning precision, and prediction precision, for disaster monitoring, maritime surveillance, and other emergent or continuous persistent observation situations. First, we analyze the necessity of onboard information fusion. Next, recent onboard processing developments are summarized and the existing problems are discussed. Furthermore, the key technologies and concepts of onboard information fusion are summarized in the fields of feature representation, association, feature-level fusion, spatial computing architecture,
and other issues. Finally, future developments of onboard information fusion are investigated and discussed.

INTRODUCTION

Earth observation technology for modern satellites has developed rapidly. Through the innovative design and launch of small low Earth orbit (LEO) satellites, many commercial remote sensing satellite companies, such as Planet Labs and ICEYE, have established remote sensing satellite constellations. The number of these satellites exceeds the number of remote sensing satellites launched in the past. Consequently, the time interval for repeated satellite observations of the same area has been substantially shortened.
Meanwhile, by considerably improving sensor performance and data quality, traditional companies, such as MDA and Airbus, can create single satellites with higher spatial, radiometric, and spectral resolution, a larger observation swath, highly robust maneuvering agility, and numerous working modes, and they can vigorously develop integrated remote sensing satellites. In the future, Earth observation systems of modern satellites will have the capability of single-satellite/multiload collaboration and
multisatellite-networking/multiload collaboration, which can acquire Earth observation data with higher accuracy, richer information dimensions, and higher space–time resolution than current satellite systems. Because information fusion can effectively reduce conflicts between multisource data, make full use of complementary information, and realize comprehensive mutual confirmation and collaborative reasoning [1], [2], [3], [4], [5], [6], [7], [8], multisatellite information fusion processing is one of the major research focuses in the field of satellite Earth observation applications [9].

The traditional mode of satellite Earth observation comprises task planning, data processing, and ground-based production, and the data fusion processing of satellite multiload collaborative observation is also performed on the ground [10], [11]. The product is then sent to the user via a ground network. Multisatellite information fusion has been widely employed in cloud removal [12], land cover change detection [13], [14], land surface deformation surveying [15], [16], [17], [18], disaster monitoring [19], [20], air and water pollution monitoring [21], target searching and surveillance [22], [23], [24], [25], maritime situation awareness [26], [27], and other fields [31]. Environmental monitoring mainly uses imaging satellites, such as optical [11], synthetic aperture radar (SAR) [13], [28], [29], [30], and infrared [14]. In addition to imaging satellites, nonimaging satellites, such as the automatic identification system (AIS) [32], signal intelligence (SIGINT) [33], and electronic intelligence (ELINT) [7], have been applied in target surveillance. However, the traditional mode addresses only conventional, procedural, nonemergency, and low-timeliness Earth observation tasks. Because of the large number of transmission nodes and the long time delay, the traditional mode cannot respond rapidly to emergency tasks, such as disaster rescue, time-critical target monitoring, and other high-timeliness tasks. Furthermore, the transmission mode has a weak ability to quickly guide and fuse real-time information for multisatellite cooperative detection tasks, such as ship monitoring on the open seas and wide-area missile warning [34]. Onboard information fusion of multisatellite data can dramatically improve the timeliness of satellite Earth observations.

Onboard information fusion faces two main challenges: first, the real-time data acquisition rate of satellite Earth observation sensors is increasing, reaching several gigabits per second, and real-time processing is challenging [34]; second, onboard storage and processing hardware is restricted by the satellite load capacity, power consumption, and heat dissipation. Therefore, the onboard processing capacity considerably differs from the ground processing capacity.
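To make the first challenge concrete, the following back-of-envelope sketch estimates the raw data rate of a notional pushbroom hyperspectral imager and compares it with a notional downlink budget. All numbers here are illustrative assumptions, not the parameters of any mission discussed in this article.

```python
# Back-of-envelope raw data rate for a pushbroom hyperspectral imager.
# Every number below is an illustrative assumption.
swath_pixels = 1200        # across-track samples per line
line_rate_hz = 4000        # lines per second
bands = 200                # spectral bands
bits_per_sample = 12

raw_bps = swath_pixels * line_rate_hz * bands * bits_per_sample
print(f"raw sensor rate: {raw_bps / 1e9:.1f} Gb/s")   # ~11.5 Gb/s

downlink_bps = 600e6       # assumed X-band downlink, 600 Mb/s
pass_s = 600               # one 10-min ground-station pass per orbit
orbit_s = 5700             # ~95-min orbit
duty = (downlink_bps * pass_s) / (raw_bps * orbit_s)
print(f"fraction of continuous sensing the link can carry: {duty:.1%}")  # ~0.5%
```

Under these assumptions, the downlink carries well under 1% of what the sensor can produce; this gap is what onboard compression, screening, and fusion must close.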
To achieve onboard autonomous intelligent perception and the integration of multisatellite Earth observation, space researchers worldwide are working on the implementation of key technologies: equipment development for intersatellite high-speed laser communication [35], [36]; onboard mass data storage and high-speed computing hardware [37], [38]; onboard embedded real-time operating systems [39]; onboard raw observation data compression [40], [41], [42], [43], [44]; onboard SAR real-time imaging [45], [46], [47], [48]; onboard multisource heterogeneous data intelligent computing, such as cloud detection [49], [50] and ship detection and recognition [51], [52], [53], [54], [55]; onboard autonomous task planning [56], [57]; and other onboard processing [58], [59], [60], [61], [62], [63], [64], [65], [66], [67]. Onboard test payloads have been flown on many different satellites, such as ZY1E [68], ZY1-02D [69], EDRS-C [70], GaoFen-5 [71], FY-4 [72], HJ-2 A/B [73], HY-1C [74], [75], HY-2B, and MetOp-C [76], for various applications. By analyzing the current developments in satellite data onboard processing, this article summarizes, analyzes, and studies related issues involving multisatellite data onboard intelligent fusion processing.

DEVELOPMENT OF SATELLITE DATA ONBOARD PROCESSING

Globally, space government organizations and commercial companies attach great importance to developing onboard processing systems and equipment, researching core algorithms for onboard intelligent processing, and conducting onboard testing and verification. The United States was the first country to conduct onboard research on the hardware, software, algorithms, and autonomous task planning of multisatellites, and it has performed onboard tests on multiple satellites. This section introduces and analyzes the development history and key technologies of onboard satellite data processing.
RELEVANT RESEARCH PLANS AND ONBOARD PROCESSING SYSTEMS

CURRENT STATUS OF ONBOARD PROCESSING FOR OPTICAL REMOTE SENSING SATELLITES

Currently, optical remote sensing satellites are capable of onboard raw data compression, radiometric correction, geometric correction, cloud detection, target detection and recognition, terrain classification, and change monitoring. Optical remote sensing satellites with onboard processing capabilities are listed in Table 1. For example, the U.S. Earth Observing-1 (EO-1) satellite achieved onboard functions including automatic selection of regions of interest (ROIs), regional change detection, cloud detection, and removal of invalid or unnecessary data in hyperspectral images; hence, the time needed to download data was reduced from a few hours to less than 30 min [77]. Onboard data processing on the U.S. military's optical remote sensing satellites has reached practical application. For example, the Optical Real-Time Adaptive Signature Identification System (ORASIS) [78] carried on the Naval EarthMap Observer (NEMO) satellite provides functions such as automatic data analysis, feature extraction, and data compression for satellite hyperspectral images, and it can send the processing results directly from the satellite to operational users in real time. The MightySat satellite realized onboard ROI identification [79] to verify that space technology supports real-time battlefield applications. An operationally responsive space satellite can analyze image and signal data collected in orbit and quickly provide soldiers with target information, combat equipment, and battlefield damage assessment information in near real time.
TABLE 1. ONBOARD PROCESSING SYSTEMS OF OPTICAL REMOTE SENSING SATELLITES.

| SATELLITE | COUNTRY | ONBOARD PROCESSING FUNCTIONS | LAUNCH TIME | HARDWARE SOLUTIONS |
| --- | --- | --- | --- | --- |
| EO-1 | United States | Change detection and anomaly detection | 2000 | Mongoose V processor |
| BIRD | Germany | Multitype remote sensing image preprocessing, onboard real-time multispectral classification, forest fire detection, etc. | 2001 | TMS320C40 floating-point digital signal processor (DSP), field-programmable gate array (FPGA), and NI1000 neural network coprocessor |
| FedSat | Australia | Multisource data compression, disaster monitoring | 2002 | FPGA |
| NEMO | United States | Adaptive compression of hyperspectral data | 2003 | SHARC 21060L DSP |
| MRO | United States | Multisensor information synthesis and analysis for autonomous mission planning | 2005 | RAD750 |
| X-SAT | Singapore | Automatic invalid data culling | 2006 | Virtex FPGA, StrongARM |
| PROBA-2 | European Space Agency (ESA) | Image analysis and compression, autonomous mission planning | 2009 | LEON2-FT |
| Pleiades-HR | France | Radiometric correction, geometric correction, image compression | 2011 | MVP modular processor with FPGA at its core |
| CubeSat | United States | Large-compression-ratio compression of video data (100×) | 2011 | Virtex 5VQ |
| Solar Orbiter | ESA | Image stabilization, preprocessing, radiative transfer equation inversion | 2017 | LEON3 and FPGA V4 |
| Jilin-1 01/02 spectral satellites | China | Forest fire detection, ship detection | 2019 | Multicore DSP, GPU |
| Φ-Sat-1 | ESA | Cloud detection | 2021 | Movidius MA2450 |
The project also researched how satellites can cooperate to achieve continuous target imaging [80]. The onboard image intelligent processing system of China's Tsinghua-1 microsatellite realized onboard cloud detection and detection of cloud-covered areas, reducing the amount of data transmitted [81]. The Jilin-1 spectral satellite realized automatic onboard detection of forest fires, marine ships, and other targets, and it can send the processing results to ground terminal stations as short messages via the BeiDou navigation satellite system [82]. Germany's Bispectral Infrared Detection (BIRD) small satellite realized onboard processing of visible, midinfrared, and thermal infrared images, including radiometric correction, geometric correction, texture extraction, terrain classification, and subpixel fire point detection [83]. The French Pleiades-HR satellite achieved onboard radiometric correction, geometric correction, image compression, and other functions [84]. Australia's FedSat is equipped with a reconfigurable onboard processing prototype system that uses data generated by onboard optical sensors for natural disaster monitoring [85].
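Cloud screening of the kind just described maps naturally onto simple onboard keep/discard rules. The sketch below is a toy filter in that spirit; the brightness threshold and cloud-fraction limit are assumptions for illustration, not the algorithms actually flown on EO-1 or Tsinghua-1.

```python
import numpy as np

def keep_tile(tile_reflectance: np.ndarray, cloud_thresh=0.4, max_cloud_frac=0.7) -> bool:
    """Crude brightness-threshold cloud screen for an onboard keep/discard decision.

    tile_reflectance: (H, W) top-of-atmosphere reflectance in a visible band.
    Returns False when the tile is mostly cloud and not worth downlinking.
    """
    cloud_frac = float((tile_reflectance > cloud_thresh).mean())
    return cloud_frac <= max_cloud_frac

# Example: a synthetic clear tile is kept; a fully cloudy one is dropped.
rng = np.random.default_rng(0)
clear = rng.uniform(0.05, 0.2, (64, 64))
cloudy = rng.uniform(0.5, 0.9, (64, 64))
print(keep_tile(clear), keep_tile(cloudy))   # True False
```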
CURRENT STATUS OF ONBOARD PROCESSING FOR SAR REMOTE SENSING SATELLITES

Spaceborne SAR satellites have higher data rates, larger processing loads, and more complex imaging algorithms than optical remote sensing satellites. Because of satellite volume, weight, and power consumption constraints, SAR satellite data processing still depends primarily on ground processing systems, and only some onboard data processing has been conducted. As shown in Table 2, several SAR remote sensing satellites have realized onboard compression of raw echo data and are currently testing real-time imaging, target detection, other onboard processing algorithms, and hardware. For example, China's Chaohu-1 satellite conducted onboard real-time imaging and artificial intelligence (AI) real-time processing verification.

CURRENT STATUS OF ONBOARD FUSION SYSTEMS FOR MULTISATELLITES

With the development of onboard storage hardware, embedded real-time operating systems, and data processing algorithms, the onboard data processing capabilities of satellites have considerably improved. Simultaneously, the development of optical communication, relay satellites, and other technologies has enabled high-speed data transmission between satellites. Some researchers have successively proposed the concepts of the space-based Internet [86], [87], [88], [89], spatial information networks [90], [91], [92], [93], and air–space–ground-integrated information communication networks [94], [95]; conducted research on key technologies; and planned and built onboard dynamic real-time distributed networks aimed at integrating space-based communication, navigation, remote sensing, and computing resources. Onboard collaborative intelligent perception and the fusion of multisatellite and multisensor information are key technologies for achieving this goal.
Since 2012, the United States has paid increasing attention to space computing technology and has investigated onboard information fusion and air–space–ground-integrated collaborative networking, covering space cloud computing, space predictive analytics, and space information collaboration. The U.S. Air Force Research Laboratory has tested space-based cloud computing networks deployed in geosynchronous Earth orbit [96] and is now exploring deployment schemes and architectures for heterogeneous-network cloud computing across high Earth orbit (HEO), medium Earth orbit (MEO), and LEO. The Blackjack small-satellite constellation proposed by the U.S. Defense Advanced Research Projects Agency (DARPA) has a high degree of autonomy and flexibility, as shown in Figure 1(a). With onboard autonomous distributed decision processors, payloads can operate autonomously. The satellite data can be processed completely onboard, and the satellites can observe collaboratively and autonomously without the support of a ground station for 30 days to satisfy the requirements of command and control; intelligence, surveillance, and reconnaissance (ISR); tactical operations; and others [97]. The U.S. Space Development Agency (SDA) of the Department of Defense (DOD) is building the National Defense Space Architecture (NDSA), whose transport layer, within its seven-layer architecture, is capable of onboard multisatellite information fusion, as shown in Figure 1(b). After fusion, the information is transmitted to users by microwave or laser communication.
TABLE 2. ONBOARD PROCESSING SYSTEMS OF SAR REMOTE SENSING SATELLITES.

| PROGRAM/ENGINEERING | COUNTRY | ONBOARD PROCESSING FUNCTIONAL DESIGN AND VALIDATION | TIME | HARDWARE SOLUTIONS |
| --- | --- | --- | --- | --- |
| Discoverer II | United States | Ground moving target indicator (GMTI), SAR real-time imaging | 1998 | CIP |
| SBR | United States | Real-time imaging, moving target indication | 2001 | FPGA |
| TechSat21 | United States | Real-time imaging, moving target indication, targeting | 2002 | PowerPC 750 |
| SAR processor | ESA | Real-time imaging | 2004 | System-on-chip |
| ERS satellite | United States | Real-time imaging, change detection | 2004 | FPGA and PowerPC |
| Interferometric SAR | United States | On-satellite SAR interferometric processing | 2009 | FPGA |
| Chaohu-1 | China | SAR data on-orbit imaging and intelligent target detection | 2022 | DSP, FPGA, and GPU |
| Taijing-4 01 | China | SAR data on-orbit imaging | 2022 | DSP and FPGA |
Onboard multisatellite information fusion is regarded as a core capability required for future development. In 2021, the SDA developed a prototype platform for onboard experimental testing and performed onboard verification [98]. The Project for Onboard Autonomy (PROBA) satellite of the European Space Agency (ESA) conducted onboard autonomous observation mission-planning experiments [99]. The Pujiang-1 satellite, developed by China, realized onboard autonomous observation task planning for the collaborative work of multiple payloads and conducted onboard image preprocessing, information extraction, fusion, rapid location of suspected target areas, typical target recognition, and regional change detection for satellite formations [100]. In addition, a remote sensing satellite that carries multiple sensors necessitates onboard information fusion; examples include satellites carrying SAR and AIS payloads, such as Canada's RADARSAT Constellation Mission (RCM), China's Gaofen-3 02 satellite, and Japan's ALOS-4, and satellites carrying optical and AIS payloads, such as China's Hainan-1 01/02 and Wenchang-1 01/02 satellites.

Earth observation satellites have entered a new era with the promotion and application of AI algorithms, represented by deep learning, in the field of remote sensing image analysis and processing. First, Zhou [101] and Oktay and Zhou [102] presented the architecture of intelligent Earth observation systems. Li et al. proposed the implementation of intelligent Earth observation systems [103], the Earth observation brain [104], and space-based information real-time service systems [105]. Many scientific research institutions and companies have conducted in-depth discussions on intelligent remote sensing satellites [106], [107], [108], [109], [110], [111]. Novel satellites, such as software-defined satellites and intelligent remote sensing satellites, have been launched; for example, ESA's Φ-Sat-1 satellite verified AI-based cloud detection, ship detection and classification, forest detection and anomaly monitoring, and other applications [112].
China's software-defined experimental satellite Tianzhi-1 has demonstrated and verified onboard processing technologies, such as commercial processor-based high-performance computing, onboard software reconfiguration, onboard cloud computing, and open-source application software uplink [113], which promotes research on onboard intelligent fusion processing of multisatellite information.

MULTISATELLITE DATA FUSION TECHNOLOGY

Information fusion was first applied to underwater signal processing during the 1970s. In 1988, the U.S. DOD listed information fusion as one of the 20 key technologies for focused R&D and assigned it the highest priority, category A. The U.S. DOD Joint Directors of Laboratories set up an information fusion expert group to organize and guide research. More than 100 information fusion systems have since been used in the United States. Information fusion primarily involves multisensor detection, tracking and state estimation, target recognition, situation awareness, and threat assessment. Its core processing link is multisource data association and fusion processing. This section summarizes the current development status of multisatellite data fusion processing technology based on the characteristics of satellite Earth observation sensors.

CURRENT STATUS OF MULTISATELLITE DATA ASSOCIATION

Target association is the premise and foundation of the subsequent steps of fusion processing; it aims to correlate information on the same target obtained by one sensor at different times or by multiple sensors (illustrated in Figure 2). Traditional target association algorithms include nearest neighbors [114], [115], probabilistic data association (PDA) [116], [117], joint PDA [118], [119], [120], multiple hypothesis tracking [121], [122], interacting multiple models [123], [124], sequential track correlation [125], double-threshold track correlation [126], and particle filter-based [127], probability hypothesis density filter-based [128], [129], and multidimensional assignment-based [130], [131] algorithms.
FIGURE 1. Earth observation systems based on LEO small satellite constellation: (a) DARPA Blackjack constellation; (b) SDA-NDSA.
All the aforementioned algorithms were designed for land-based, air-based, and sea-based radars, electronic reconnaissance, and other nonimaging sensors with a high data acquisition rate, high positioning accuracy, and long observation duration. In these cases, the target is regarded as a point target, and its motion parameters (such as position, velocity, and azimuth), which can be accurately predicted by building a target motion model, are used to achieve association. In addition to the target motion parameters, a sensor can also obtain other target characteristics, such as the radar cross section in radar data and the electromagnetic characteristics of the emitter in ELINT data, among other attributes. Few studies have addressed algorithms that use only target attribute features for multitarget association. Currently, multitarget association algorithms that use target motion parameters, attributes, and other features as association factors are attracting increasing attention. These algorithms are based on statistics, clustering analysis, fuzzy mathematics, grey theory, and evidence theory; they mainly use target motion-state filtering and prediction for association, combined with similarity measurements of target attribute features as auxiliary association factors. For example, amplitude features [132], Doppler frequencies [133], polarization features [134], and high-resolution radar profile features [135] are used as complementary association factors in radar applications. Pulse descriptor words, such as carrier frequency, pulse width, and pulse repetition frequency, are used in target association for ELINT sensors [136]. Most algorithms for multicamera video target tracking scenes are based on the topological relationships between cameras and use the motion parameters, shape, color, texture, and other features of the target to achieve association, which requires the target motion state to be estimated accurately [137]. For heterogeneous sensor data association, such as radar, AIS, ELINT, and infrared images, the target's classification or identification information is used for auxiliary association [138]. Several methods based on shallow artificial neural networks (ANNs) have been proposed in the literature [139], [140], whereas few based on deep ANNs have been proposed [141], [142].
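As a concrete baseline, the sketch below implements the simplest of the classical schemes listed above, gated nearest-neighbor association for 2D point targets. The state dimensions, gate probability, use of SciPy's chi-squared quantile, and all numbers are illustrative assumptions, not a method prescribed by the surveyed papers.

```python
import numpy as np
from scipy.stats import chi2

def nn_associate(preds, covs, meas, gate_p=0.99):
    """Gated nearest-neighbor association between predicted tracks and measurements.

    preds: (T, 2) predicted track positions; covs: (T, 2, 2) innovation covariances;
    meas: (M, 2) measured positions. Returns {track_index: measurement_index}.
    """
    gate = chi2.ppf(gate_p, df=2)            # Mahalanobis gate for a 2D position
    pairs, assigned = {}, set()
    for t, (p, S) in enumerate(zip(preds, covs)):
        Sinv = np.linalg.inv(S)
        d2 = np.einsum("mi,ij,mj->m", meas - p, Sinv, meas - p)
        for m in np.argsort(d2):             # closest measurement first
            if d2[m] <= gate and int(m) not in assigned:
                pairs[t] = int(m)
                assigned.add(int(m))
                break
    return pairs

preds = np.array([[0.0, 0.0], [10.0, 10.0]])
covs = np.tile(np.eye(2), (2, 1, 1))
meas = np.array([[0.3, -0.2], [9.5, 10.4], [50.0, 50.0]])
print(nn_associate(preds, covs, meas))       # {0: 0, 1: 1}; the far clutter point is gated out
```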
Most of the aforementioned target association algorithms are based on spatial–temporal information and attribute feature information. Target association based on spatial–temporal information is realized by establishing strict state and observation equations, and the data acquisition rate must be high. Target association based on attribute feature information can be realized at the feature and decision levels, but feature selection and the design of the similarity measurement function are challenging.

Based on the observation revisit period, space-based Earth observation modes can be classified into two categories, particularly for target surveillance:
◗ Sparse data acquisition over long time intervals: Most remote sensing satellites follow a sun-synchronous orbit. Some images of the same ROI are obtained during a single imaging period. The revisit interval is long, typically several tens of minutes. New agile remote sensing satellites are capable of multiview imaging and can capture multiple images of the same ROI in one orbital period. This type of remote sensing satellite constellation can shorten the observation revisit period.
◗ Dense data acquisition with short observation durations: A geostationary Earth orbit (GEO) staring imaging satellite can gaze at the same ROI for a long duration and obtain image sequences with high temporal resolution, usually seconds or minutes. LEO video imaging satellites, as well as AIS, SIGINT, and ELINT satellites, can stare at one zone for several minutes in a single orbital period.

Space-based target monitoring is realized by multisatellite cooperative observation, and the target information obtained is in a sparse, nonuniform acquisition mode. Under this condition, the target kinematic model cannot be established accurately, and the target motion state estimate is inaccurate because of the long revisit period, the short observation duration in a single visit, and the different data acquisition rates and accuracies of multiple space-based sensors. Therefore, achieving accurate target association using only target motion features is challenging, as the sketch below illustrates.
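A minimal numerical sketch makes the point. Under an assumed 1D constant-velocity model (the parameter values are illustrative, not drawn from the cited literature), the predicted position uncertainty grows from meters to tens of kilometers as the revisit interval stretches from seconds to a typical orbital period, so purely kinematic gates become useless.

```python
import numpy as np

def predicted_position_sigma(dt, sigma_p=10.0, sigma_v=1.0, q=0.1):
    """1D constant-velocity prediction: position uncertainty growth over a revisit
    interval dt (s). sigma_p (m), sigma_v (m/s), and the process-noise intensity q
    are illustrative assumptions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    P = np.diag([sigma_p**2, sigma_v**2])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])  # white-acceleration model
    P_pred = F @ P @ F.T + Q
    return float(np.sqrt(P_pred[0, 0]))

for dt in (10.0, 600.0, 5400.0):   # video staring vs. minutes vs. a ~90-min revisit
    print(f"dt = {dt:6.0f} s -> position sigma ~ {predicted_position_sigma(dt) / 1e3:7.2f} km")
# ~0.02 km at 10 s, ~2.8 km at 10 min, ~73 km at 90 min under these assumptions
```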
FIGURE 2. Multisatellite data association: (a) SAR + AIS; (b) SAR + optical.
Compared with motion-state features, target attribute features are relatively stable: for example, image features obtained by imaging satellites, such as shape, histogram, local invariant features, and transformation features, and electromagnetic features, such as the emitter center frequency, pulse width, and pulse repetition interval. Target association algorithms between imaging satellites can be realized using image-matching methods. The key step of such methods is feature matching, which uses feature similarity measurements of the target images to build the matching relationship. The image features used for feature matching include both single-target and group-target features. Single-target features are primarily used for target association in high-resolution satellite remote sensing images. Lei et al. [143] proposed a target association algorithm based on multiscale autoconvolution features and an association cost matrix for optical satellites. Group-target features are primarily used for target association in medium- or low-resolution satellite remote sensing images. Group targets typically appear in a relatively fixed formation, and the membership and number of targets are typically related to specific operational tasks. Therefore, the group-target positions can be regarded as a point set on the plane, and each target in the group can be regarded as a point in the point set. The multitarget association problem is then transformed into a matching problem between two point sets (see the sketch following this paragraph). Tang and Xu [144] proposed a target association algorithm based on the Kalman filter and Dempster–Shafer (D–S) theory for multiple remote sensing images. Target association based on image matching focuses on target feature design and feature similarity measurement.
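The following is a minimal sketch of this point-set formulation, using the Hungarian algorithm from SciPy as the matcher. The cited works use their own feature-based cost designs; the plain Euclidean cost and the rejection radius here are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_point_sets(a, b, max_dist=5.0):
    """Associate two group-target point sets by minimum total pairwise distance.

    a, b: (N, 2) and (M, 2) target positions from two sensors/passes (assumed
    already co-registered). Pairs farther apart than max_dist are rejected."""
    cost = cdist(a, b)
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

a = np.array([[0, 0], [4, 1], [9, 9]], dtype=float)
b = np.array([[0.5, -0.3], [4.2, 1.1], [40.0, 40.0]], dtype=float)
print(match_point_sets(a, b))   # [(0, 0), (1, 1)]; the outlier pair is rejected
```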
Another type of association method is based on point features and is mainly designed for GEO staring imaging satellites and LEO video satellites; it uses motion-state parameters and image features as association factors. These methods use filtered and predicted target motion-state parameters from image frames for a first-step association, and they use image feature similarity to correct the result in a second-step association. Lei [145] proposed a remote sensing image multitarget tracking algorithm based on ROI feature matching and a remote sensing image multitarget association algorithm based on multifeature fusion matching, which overcome the bias of kinematic feature matching errors through image features and the ambiguity of image matching recognition through motion-state parameters. Yang et al. [146] proposed a satellite video target correlation tracking algorithm based on a motion heat map and a local saliency map. Wu et al. [147] proposed a satellite video target correlation tracking algorithm that combines motion smoothing constraints and a grey similarity likelihood ratio. Liu et al. [148] proposed a multilevel target association method using different features at different levels for the collaborative observation of LEO and GEO satellites.

Target association algorithms between nonimaging and video-imaging satellites, such as AIS, SIGINT, or ELINT, mainly combine position and attribute information based on fuzzy mathematics [149] and D–S theory [150]. Some algorithms use group-formation topological characteristics as association features, which can be considered point-pattern matching problems. For target association between imaging and nonimaging satellites, Zeng [151] proposed several target association algorithms based on formation target structure features, hierarchical matching, formation target attribute features, the attributes and structure of formation targets, and multisource features in multisource sequence data. Lu and colleagues [152], [153] proposed target association algorithms based on point-pair topological features and spectral matching, point-pair topological features and probabilistic relaxation marking, and D–S evidence combination based on topology and attribute features.

CURRENT STATUS OF MULTISATELLITE DATA FUSION

Information fusion algorithms can be classified into three levels based on the information abstraction level: data, features, and decisions. Feature-level fusion can retain most of the information in the original data while also greatly reducing the redundancy of multisource data. Current research mainly focuses on data- and decision-level fusion, whereas research on feature-level fusion is relatively scarce. However, feature-level fusion reduces the dimensions of the original feature space, eliminates the redundancy between feature representation vectors in the original feature space, and maintains the entropy, energy, and correlation of the invariant feature data after dimension compression. The fused features can substantially describe the nature of the target, which is conducive to further target recognition. The following analysis therefore focuses on feature-level fusion algorithms.

Conventional feature-level fusion algorithms include serial and parallel fusion [154]. Serial fusion methods directly concatenate multiple feature vectors into one feature vector and then apply dimensionality reduction to obtain the fused feature vector. Parallel fusion methods combine two feature vectors into a single feature vector using complex variable functions. Although these two methods can retain raw data information to some extent, the dimension and redundancy of the fused feature remain high because the complementarities and redundancies between the original features are not fully utilized. Feature extraction and transformation are also considered feature fusion methods. Feature extraction obtains a fused feature by selecting the most effective feature from multiple original features using serial or parallel strategies. Feature transformation obtains a new fused feature through a linear or nonlinear mapping function of the original features and can still be considered a type of feature extraction method.
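The following is a minimal NumPy sketch of the two conventional schemes just described, with randomly generated stand-in features. The feature dimensions, the PCA-based reduction, and zero-padding the shorter vector in the parallel case are illustrative choices, not prescriptions from [154].

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(500, 32))   # feature set 1 (e.g., shape features)
y = rng.normal(size=(500, 16))   # feature set 2 (e.g., texture features)

# Serial fusion: concatenate, then reduce dimension (PCA via SVD).
z_serial = np.hstack([x, y])
z_centered = z_serial - z_serial.mean(0)
_, _, vt = np.linalg.svd(z_centered, full_matrices=False)
z_serial_reduced = z_centered @ vt[:20].T                  # keep 20 components

# Parallel fusion: combine two equal-length vectors as a complex feature x + iy.
y_pad = np.pad(y, ((0, 0), (0, x.shape[1] - y.shape[1])))  # zero-pad the shorter set
z_parallel = x + 1j * y_pad
print(z_serial_reduced.shape, z_parallel.shape)            # (500, 20) (500, 32)
```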
Some feature-level fusion methods based on multivariate statistical analysis theory have been proposed to solve the problem that serial and parallel fusion methods cannot exploit the interrelationships between multidimensional features. Feature fusion methods based on canonical correlation analysis (CCA), a statistical theory for the correlation analysis of two random vectors, obtain the fused feature by building a correlation criterion function between two feature vectors, calculating the projection vectors, and then extracting the canonical correlative feature as the fused feature for joint dimensionality reduction of a high-dimensional feature space. This kind of algorithm includes neural network CCA [155], kernel CCA [156], locality-preserving CCA [157], sparse CCA [158], discriminant CCA [159], and 2D-CCA [160]. Partial least squares (PLS) integrates the advantages of multiple linear regression, principal component analysis, and CCA and has been applied to feature fusion [161]. Researchers have since proposed a series of improved methods [162], [163], such as conjugate orthogonal PLS analysis, kernel PLS analysis, 2D-PLS, and sparse PLS analysis. Another type of feature-level fusion method obtains a fused feature by projecting multiple original feature spaces onto the same space with common attributes. For example, Long et al. [164] proposed a feature fusion algorithm based on multiview spectral clustering.
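A minimal CCA-based fusion sketch using scikit-learn follows, with synthetic two-view features that share a small latent component. The data, dimensions, and the choice to concatenate the canonical variates are assumptions for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)
shared = rng.normal(size=(500, 5))                    # latent content seen by both views
x = np.hstack([shared, rng.normal(size=(500, 27))])   # view 1: 32-D feature
y = np.hstack([shared @ rng.normal(size=(5, 5)), rng.normal(size=(500, 11))])  # view 2: 16-D

cca = CCA(n_components=5)
u, v = cca.fit_transform(x, y)      # canonical variates of each view
fused = np.hstack([u, v])           # one common fusion strategy: concatenate (or sum) them
print(fused.shape)                  # (500, 10)
```

Variants such as kernel or sparse CCA replace the linear projections but keep this overall recipe.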
The previously discussed feature fusion methods are based on handcrafted features. Feature learning based on deep learning methods can essentially be considered stage-by-stage multiple feature fusion, such as the convolution and fully connected operations of a convolutional neural network (CNN) [165]. The different layer outputs of a deep learning network correspond to different visual features and semantics; for example, the lower layers correspond to brightness, edges, and texture; the middle layers correspond to shape and direction; and the upper layers correspond to category. Feature fusion can then be realized in the different layers. Currently, most deep learning algorithms are designed for single-modality data. By combining deep learning and information fusion, the coupled correlations can be mined, and the redundancy can be reduced as much as possible for features of different dimensions. Therefore, researchers have gradually focused on information fusion-based deep learning. For example, several CNN-based feature fusion structures and strategies have been proposed [166], [167], [168], [169], [170], [171], [172], [173], [174], [175], [176], [177], [178], [179]. Some methods have been proposed for fusing deep-learning features based on CCA, topic models, joint dictionaries, and bags of words [169]. The key steps in feature fusion based on deep learning are the selection of the feature fusion layer and the architecture. According to the fusion layer in the deep learning network, feature fusion methods based on deep learning can be classified into early-, middle-, and late-stage fusion. Early- and middle-stage fusion use the convolution layers, and late-stage fusion uses the output of the convolution layers or the output of the fully connected layers.
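A minimal PyTorch sketch of middle-stage fusion for a notional optical + SAR patch classifier follows. The layer sizes, channel counts, and concatenation-based fusion are illustrative assumptions rather than an architecture from the cited works.

```python
import torch
import torch.nn as nn

class MidFusionNet(nn.Module):
    """Two-branch CNN with middle-stage feature fusion: each modality is convolved
    separately, the feature maps are concatenated, and shared layers finish the job."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.opt_branch = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.sar_branch = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.shared = nn.Sequential(
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, optical, sar):
        f = torch.cat([self.opt_branch(optical), self.sar_branch(sar)], dim=1)
        return self.shared(f)

net = MidFusionNet()
logits = net(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))
print(logits.shape)   # torch.Size([2, 4])
```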
Another approach to feature fusion based on deep learning is multimodality deep learning, which first learns the features of each single-modality dataset individually and then learns the fused feature. Ngiam et al. [170] proposed a cross-modality deep autoencoder model that can better learn shared representations between modalities. Srivastava and Salakhutdinov [171] proposed a multimodal deep Boltzmann machine for multimodality learning. The model works by learning the probability density over the space of multimodal data, and it uses the states of the latent variables as representations of the different modality data. Multimodality data can be organized and represented with a tensor; therefore, tensor-based deep learning methods can be used for multimodality deep learning and can learn deep correlations between data of different modalities. Yu et al. [172] proposed a deep tensor neural network (DTNN) model. The DTNN extends the deep neural network (DNN) by replacing one or more of its layers with a double-projection layer, in which each input vector is projected into two nonlinear subspaces, and a tensor layer, in which the two subspace projections interact and jointly predict the next layer in the deep architecture. Hutchinson et al. [173] proposed a tensor deep stacking network, which consists of multiple stacked blocks, where each block contains a bilinear mapping from two hidden layers to the output layer using a weight tensor to incorporate higher-order statistics of the hidden binary features. Zhang et al. [174] proposed a deep computation model to fully learn the data distribution, which uses a tensor to model the complex correlations of heterogeneous data. The model uses a tensor autoencoder as the basic module of the tensor deep learning model, which adopts the tensor distance as the average sum-of-squares error term of the reconstruction error in the output layer. A high-order back-propagation algorithm is designed for training.

Information fusion methods for space-based observation data are primarily designed for multisource remote sensing image fusion and target recognition. Target-recognition algorithms based on information fusion use fuzzy mathematics and evidential reasoning theory for decision-level fusion.
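For concreteness, one plausible formalization of the double-projection and tensor layers described above is the following. This is a sketch consistent with the description given here; the exact formulation in [172] may differ in details such as the placement of nonlinearities.

```latex
h^{(1)} = \sigma\big(W^{(1)} v\big), \qquad
h^{(2)} = \sigma\big(W^{(2)} v\big), \qquad
y_k = \sigma\Big(\sum_{i,j} \mathcal{T}_{kij}\, h^{(1)}_i h^{(2)}_j\Big),
```

where $v$ is the layer input, $W^{(1)}$ and $W^{(2)}$ project it into two subspaces, $\sigma$ is a sigmoid-type nonlinearity, and the weight tensor $\mathcal{T}$ lets the two projections interact bilinearly to predict the next layer.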
PROBLEMS AND CHALLENGES

The onboard data-processing capability of Earth observation satellites remains weak; the main existing problems are as follows:
◗ Most remote sensing satellites are capable of onboard data compression and preprocessing. Some have realized onboard detection, classification, and recognition of targets of interest, such as ships and airplanes. Currently, remote sensing satellites carry only one type of sensor and either work independently or in loose cooperation. Intersatellite high-speed communication and data interchange are still in the testing stage. Observation planning mainly involves scheduling on the ground, and onboard autonomous intelligent planning for multisatellite cooperation is not yet practical. Therefore, high-level onboard data processing, such as multisatellite information association, fusion, and situational awareness, is seldom studied because multisatellite dynamic autonomous networking for collaborative observation is not yet available onboard.
◗ Traditional remote sensing satellites mainly focus on improving the observation capability of a single satellite or sensor without considering multisensor or multisatellite cooperation or information fusion. Because of the long revisit intervals, fixed crossing times, narrow imaging widths, few remote sensing satellites in the same orbit, and short durations in a single observation period, multisatellite systems cannot respond to emergent requirements for rapid cooperative observation, and the region or target of interest cannot be observed persistently.
◗ Currently, onboard processing algorithms are implemented using traditional theories, and intelligent computing, such as deep learning methods, cannot be directly transferred to onboard data processing. The high-speed, real-time processing capability of onboard processing hardware and software must be further improved according to the capabilities of remote sensing satellites and user requirements, such as raw data compression, correction, target detection and classification, and multisource information association and fusion. Intelligent onboard data processing requires a hybrid computing architecture for different tasks.

REQUIRED FUTURE TECHNIQUE DIRECTIONS

A space information network will integrate remote sensing, communication, computation, and other resources. Multisatellite onboard information fusion is a key space information network technology involving the architecture of satellite networks, intersatellite communication protocols, spatially distributed computing, onboard hardware, and embedded operating systems. With the development of AI technology, intelligent processing theories have gradually been applied to both satellite sensor task planning and data processing to further enhance the efficiency of satellite use. This section analyzes the key technologies in two aspects, multisatellite collaborative observation and information fusion, considering the potential onboard applications of AI.

ONBOARD COOPERATIVE SURVEY TECHNOLOGY OF MULTISATELLITES

Although satellite Earth observations have the advantages of wide coverage and multiple sensor types, they also have the disadvantages of frequent alternation of cooperating satellites, dynamic changes in network topology, short working durations, and long observation intervals, which result in sparse and uneven observations. To improve the efficiency of satellites, it is necessary to plan and design the satellite Earth observation system from the information fusion perspective, switch satellite design from being based on satellite platforms to being based on sensor payloads, and
build an integrated network of intelligent observation and processing that is task-driven and accounts for information perception and fusion.

MULTISENSOR INTEGRATED SATELLITES

Traditional remote sensing satellites are mostly designed around a single sensor. For example, optical remote sensing satellites can obtain panchromatic, multispectral, and other types of images, whereas SAR remote sensing satellites can obtain stripmap, spotlight, polarimetric, and other imaging modes. However, the use of a single sensor has certain limitations. Taking marine target monitoring as an example, optical remote sensing satellites are vulnerable to night, cloud, rain, snow, and other factors, and SAR remote sensing satellites are vulnerable to electromagnetic interference; moreover, the SAR imaging mechanism is unique, and interpretation is difficult. AIS satellites cannot obtain self-reported information from noncooperative targets, and SIGINT or ELINT satellites are vulnerable to electromagnetic silence and false-target deception and have poor positioning accuracy; hence, obtaining the multidimensional target information needed for fine identification is challenging. By carrying multiple types of sensors on a single satellite, as shown in Figure 3 (representative examples include active and passive microwave detection payloads, such as SAR and SIGINT or ELINT, SAR and passive radiation sources, and SAR and global navigation satellite system signals [189], [190]; all-passive detection payloads, such as optical and SIGINT or ELINT and infrared and SIGINT or ELINT; and multimodal imaging sensor payloads, such as SAR and visible or SAR and infrared), a multisensor integrated remote sensing satellite can simultaneously acquire multidimensional information about a target, which is conducive to fusion processing. Through single-satellite onboard fusion processing, the complementary information from multiple types of sensors is fully utilized to reduce the uncertainty and imprecision of single-sensor observations and improve the single-satellite perception capability.

ELASTIC MULTILAYER SATELLITE CONSTELLATION ARCHITECTURE

Several known multisensor satellite constellations are illustrated in Figure 4. Although integrated remote sensing satellites have the advantage of multidimensional observation, their design, manufacture, and maintenance are challenging, and they have a long launch and network replenishment cycle, which is not optimal for large-scale deployment. MEO and HEO remote sensing satellites have wide scanning ranges and long observation durations in a single period. The number of such satellites required to achieve global observation is small; however, their resolution, positioning accuracy, and other performance indicators are low. LEO remote sensing satellites have high data quality, but their scanning range is narrow, and the observation duration is short in a single period. The number of LEO satellites required to achieve global coverage with a constellation is
ELASTIC MULTILAYER SATELLITE CONSTELLATION ARCHITECTURE
Several known multisensor satellite constellations are illustrated in Figure 4. Although integrated remote sensing satellites have the advantage of multidimensional observation, their design, manufacture, and maintenance are challenging, and they have a long launch and network-replenishment cycle, which is not optimal for large-scale deployment. MEO and HEO remote sensing satellites have wide scanning ranges and long observation durations in a single period, and the number of satellites required to achieve global observation is small; however, their resolution, positioning accuracy, and other performance indicators are low. LEO remote sensing satellites have high data quality, but the scanning range is narrow, the observation duration in a single period is short, the number of LEO satellites required to achieve global coverage by a constellation is extremely large, and the lifetime of LEO satellites is much shorter than that of MEO and HEO satellites.
FIGURE 4. Multisensor satellite constellation: (a) Canadian OptiSAR constellation; (b) German Pendulum; (c) French CartWheel.
Considering the characteristics and performance of LEO, MEO, and HEO satellites, as well as the different types of sensors, such as visible, infrared, SAR, SIGINT, and ELINT sensors, the multidimensional data of the region or target of interest can be obtained by deploying small satellites carrying a single sensor of the same or different types in LEO and deploying integrated satellites carrying multiple sensors of different types in MEO and HEO. Information at different levels can then be rapidly extracted by an onboard high-performance edge intelligent computing system and transmitted to other satellites through high-speed intersatellite communication for cooperative observation. In contrast, small-satellite technology can be used to achieve space-based sparse microwave imaging,
distributed aperture imaging, computational imaging, and other new observation mechanisms. For example, a distributed radar small-satellite constellation can combine multiple radar antenna beams into one large beam and obtain a long baseline by maintaining a rigid formation of multiple small satellites in a certain spatial configuration, thereby realizing the function of a virtual large radar satellite. The key technologies for multisatellite cooperative observation include time–space synchronization and heterogeneous data fusion, because the observation intervals are long and the data acquisition rates, accuracies, and representations differ substantially.

SOFTWARE-DEFINED INTELLIGENT OBSERVATION OF REMOTE SENSING SATELLITE CONSTELLATIONS
Currently, Earth observation satellites are designed as constellations, and the number of remote sensing satellites has increased rapidly. Multisatellite collaboration has become the main mode of Earth observation. Meanwhile, the requirements for the diversity and timeliness of Earth observation tasks are increasing, and the demand for multisatellite collaborative onboard autonomous task planning is becoming increasingly urgent. However, traditional satellite schedule planning generates action sequences independently, based on the task list. Most such methods are designed for a single satellite or for several satellites of the same type, and instructions are typically generated on the ground and then uplinked to the satellites. This approach can hardly satisfy the real-time schedule planning of constellations with multiple types of remote sensing satellites or exploit new types of remote sensing satellites, such as operationally responsive satellites, software-defined satellites, agile satellites, and smart satellites.
Through the close combination of software-defined intelligent observation scheduling and onboard information fusion, and considering the situation generated by onboard information fusion together with the satellites' observation capabilities and the observation tasks, the sensor working mode can be selected intelligently, and the optimal action sequence of multiple satellites can be generated autonomously onboard to maximize resource utilization and responsiveness. For emergencies, such as forest fires and search and rescue, it is necessary to build a virtual resource pool of satellite Earth observation resources, coordinate multiple observation tasks with the ground coverage opportunities of multiple satellites, and assign observation tasks to satellite resources with matching capabilities through multisatellite task planning (a minimal sketch follows), to achieve faster, improved, and highly continuous observation of emergencies through multisatellite cooperation.
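As a minimal illustration of capability-matched task assignment from a virtual resource pool, the following Python sketch greedily assigns each observation task to the earliest-available satellite whose sensors match. The satellite names, sensor sets, and durations are invented for illustration; a real planner would also model orbital access windows, slew, energy, and data budgets.

```python
from dataclasses import dataclass, field

@dataclass
class Satellite:
    name: str
    sensors: set
    free_at: float = 0.0          # time the satellite is next available (h)
    tasks: list = field(default_factory=list)

def assign(tasks, satellites):
    """Greedy capability matching: earliest-available qualified satellite,
    shorter tasks scheduled first."""
    plan = {}
    for task_name, needed, duration in sorted(tasks, key=lambda t: t[2]):
        qualified = [s for s in satellites if needed <= s.sensors]
        if not qualified:
            plan[task_name] = None    # no matching capability in the pool
            continue
        sat = min(qualified, key=lambda s: s.free_at)
        plan[task_name] = sat.name
        sat.tasks.append(task_name)
        sat.free_at += duration
    return plan

# Hypothetical pool and emergency tasks (sensor sets and hours are made up).
pool = [Satellite("LEO-SAR-1", {"sar"}),
        Satellite("LEO-OPT-1", {"optical"}),
        Satellite("MEO-INT-1", {"sar", "optical", "elint"})]
tasks = [("fire_mapping", {"optical"}, 0.5),
         ("flood_extent", {"sar"}, 0.8),
         ("ship_search", {"sar", "elint"}, 1.0)]
print(assign(tasks, pool))
```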
ONBOARD FUSION TECHNOLOGY OF MULTISATELLITE INFORMATION
Compared with the dense and uniform observation data acquired by land-, sea-, and space-based radar or ELINT sensors, the data acquired by remote sensing satellites are sparse and nonuniform and have the characteristics of spatial–temporal nonsynchronization, inconsistent data acquisition rates, large differences in data quality, and multidimensional heterogeneous target description features. Traditional information fusion methods are based on strict derivations of mathematical formulas; taking moving-target tracking as an example, the motion state must be predicted accurately, and the data acquisition rate must be sufficiently high. Therefore, traditional information fusion methods cannot be applied directly to multisatellite information fusion processing. Fusion strategies and architectures must instead be studied according to the characteristics of the satellite data and the performance of the onboard processing hardware and software.

ONBOARD HYBRID HETEROGENEOUS INTELLIGENT COMPUTING ARCHITECTURE
Because of the size, weight, and power constraints of a satellite, onboard computing and storage resources are limited, and scalability is poor. Onboard processing is typically implemented on a system-on-chip, and its computing architecture differs considerably from that of a ground system. Onboard hardware now includes field-programmable gate arrays (FPGAs), digital signal processors (DSPs), CPUs, GPUs, and application-specific integrated circuits (ASICs); operating systems include VxWorks and Integrity; and the system architecture and bus standards include SpaceVPX and SpaceWire, respectively. Each resource type offers particular processing advantages, and no single resource can entirely meet onboard processing requirements. Moreover, because standards and specifications are not unified, rapid integration is difficult, and universality and reconfigurability are poor.
The onboard computing resources of multiple satellites constitute a hybrid heterogeneous high-speed dynamic computing architecture, and the hardware platform and software system must be designed accordingly. Hardware resources must be universal, reconfigurable, low power, and high performance. Through computing-resource virtualization and onboard management mechanisms, processing tasks are allocated to different hardware resources as required (see the dispatch sketch below), ensuring that the computing capability of each satellite node is used effectively and realizing dynamic collaborative computing across the various onboard hardware. Software resources should be hierarchical, modular, and open. According to the characteristics of the task, resources such as onboard detection, communication, and computing are recombined through software definition to achieve real-time loading of satellite functions onboard, with dynamic updating and reconstruction of functions to satisfy the requirements of different users and tasks.
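A minimal sketch of such capability-aware dispatch is given below in Python; the resource table, task kinds, and cost numbers are invented for illustration, and an onboard implementation would live inside the virtualization layer and also track queue depth, power, and thermal margins.

```python
# Hypothetical processing-time table (ms) per task kind on each resource
# class; a real system would measure these rather than hard-code them.
COST_MS = {
    "sar_imaging":   {"FPGA": 40, "DSP": 90, "GPU": 55, "CPU": 400},
    "cnn_inference": {"GPU": 12, "FPGA": 30, "CPU": 150},
    "housekeeping":  {"CPU": 5},
}

def dispatch(task_kind, free_resources):
    """Pick the fastest currently free resource class for this task kind."""
    options = COST_MS.get(task_kind, {})
    usable = [(ms, r) for r, ms in options.items() if r in free_resources]
    if not usable:
        return None  # queue the task until a suitable resource frees up
    ms, resource = min(usable)
    return resource, ms

print(dispatch("cnn_inference", {"FPGA", "CPU"}))  # GPU busy -> ('FPGA', 30)
print(dispatch("sar_imaging", {"GPU", "CPU"}))     # -> ('GPU', 55)
```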
ONBOARD INTELLIGENT ASSOCIATION OF MULTISATELLITE INFORMATION
Multisatellite and multipayload cooperative observation data must be associated with, or registered to, the same target or region. The multisatellite data first require spatial–temporal alignment in a unified space–time coordinate system before association. Taking target association as an example, traditional target association algorithms based on motion-status prediction cannot be adapted to satellite observation data because of the long revisit period and the resulting inaccurate motion-status estimation. Traditional target association methods based on feature-similarity measurement are mostly designed for data of the same type and structure, with distance metric functions such as the Euclidean, Minkowski, and Mahalanobis distances. Multisatellite heterogeneous observation data, however, lie in different feature spaces and are multimodal, so these metric functions cannot be used directly for multimodal feature-similarity measurement.
Deep learning methods can establish a mapping relationship between the original data and high-level semantics by building a multilayer learning network that extracts features of the original data at different levels. Multisource observation data for the same target are usually heterogeneous in representation but correlated in semantics. Multisatellite data association therefore requires combining multilevel and multidimensional information, such as space, time, attributes, events, and identity, with intelligent association models specially designed for different cooperative observation scenarios.
Satellite information with different structures, such as remote sensing images and ELINT data, shows high variability in data structure and feature description, and association factors are mainly reflected in the semantic hierarchy and in spatial location relationships. By studying the relationship structures and knowledge maps among multisatellite data, multimodal deep learning networks, spatial–temporal convolution networks, and other models can be designed for association. Deep learning methods can thus learn and extract implicit target association patterns and rules from historically accumulated multisource heterogeneous spatial–temporal big data, particularly at the semantic level; this type of association model exploits the consistency of multisatellite data in high-level semantics and spatial locations.
Different types of isomorphic satellite data, such as optical remote sensing images and SAR remote sensing images, need to be analyzed for implicit similarities between their features. Solutions include domain-adaptive source-invariant feature extraction and generative adversarial network (GAN)-based data translation models. The former exploits the relevance of different types of data in high-level features and identifies associations between different types of data by mapping them into the same feature space (a minimal sketch of this idea follows); the latter uses the data generation capability of GANs to convert one type of data into another, turning the association between different types of data into an association between data of the same type.
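The shared-feature-space idea can be sketched as follows in Python. The two small "encoders" stand in for trained modality-specific networks (here they are just fixed random projections for illustration), and the cosine-similarity threshold is arbitrary; with untrained projections the similarities are meaningless, so the demo disables the threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for trained encoders that map optical and SAR feature vectors
# (of different dimensionalities) into one shared embedding space.
W_OPT = rng.standard_normal((64, 16))   # optical: 64-D -> 16-D
W_SAR = rng.standard_normal((128, 16))  # SAR: 128-D -> 16-D

def embed(x, W):
    z = x @ W
    return z / np.linalg.norm(z)

def associate(opt_feats, sar_feats, threshold=0.8):
    """Match each optical detection to the most similar SAR detection
    in the shared space, if the cosine similarity clears the threshold."""
    matches = []
    for i, xo in enumerate(opt_feats):
        zo = embed(xo, W_OPT)
        sims = [float(zo @ embed(xs, W_SAR)) for xs in sar_feats]
        j = int(np.argmax(sims))
        matches.append((i, j) if sims[j] >= threshold else (i, None))
    return matches

# Toy detections; in practice these come from the two sensors' networks.
opt = [rng.standard_normal(64) for _ in range(3)]
sar = [rng.standard_normal(128) for _ in range(4)]
print(associate(opt, sar, threshold=-1.0))  # threshold disabled for the demo
```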
For the same type of isomorphic satellite data, such as optical remote sensing images with different resolutions, the problems of the complex semantic content of remote sensing images and the insufficient discriminative power of traditional feature representations can be addressed with deep metric learning, attention mechanisms, and multitask learning models, improving the accuracy of information association for data of the same type. Recurrent neural networks, long short-term memory networks, graph neural networks, or transformers can be used for the track-to-track and plot-to-plot association tasks that arise in target tracking.

ONBOARD INTELLIGENT FUSION PROCESSING OF MULTISATELLITE INFORMATION
Multiple satellites can provide multidimensional information on regions or targets of interest from different observation modes, electromagnetic bands, times, and platform heights, and considerable relevant or complementary content is present in the multisource data. Therefore, information fusion technology can be used to mine complementary information, remove redundancy, strengthen cross-validation between sources, and improve the accuracy and reliability of processing results.
Feature-layer fusion can effectively reduce data redundancy while retaining the original information to the maximum extent. Feature fusion considers the correlation and complementarity between different features; therefore, integrating both similar and different features is necessary. Target feature fusion can be divided into state fusion and attribute fusion. State fusion is primarily used for target tracking; intelligent associated tracking of targets from multisource, heterogeneous satellite information can be achieved with deep recurrent convolutional networks and spatial–temporal graph convolutional networks. Attribute fusion is mainly used for target recognition. A feature fusion network based on deep learning can extract features from multisource heterogeneous data, directly convert the data space into a feature space, and conveniently realize intelligent feature fusion at multiple levels, such as low-level spatial features, middle-level semantic features, and high-level abstract concept features.
Decision-level fusion requires obtaining prior knowledge, such as identification uncertainty measurements, fusion strategies, and inference rules, from each source and then combining evidence theory and random-set theory to achieve decision fusion. This knowledge can be acquired through learning with deep networks; the evidence-combination step itself is sketched below.
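To make the evidence-combination step concrete, the following Python sketch applies Dempster's rule to two hypothetical single-source identification reports over a tiny frame of discernment. The mass values are invented; a real system would derive them from each source's learned uncertainty model.

```python
from itertools import product

def dempster(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to contradictory hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

CARGO, TANKER = frozenset({"cargo"}), frozenset({"tanker"})
EITHER = CARGO | TANKER  # ignorance: "some large ship"

# Hypothetical reports: a SAR classifier vs. ELINT emitter identification.
m_sar   = {CARGO: 0.6, TANKER: 0.1, EITHER: 0.3}
m_elint = {CARGO: 0.5, TANKER: 0.2, EITHER: 0.3}

for focal, mass in dempster(m_sar, m_elint).items():
    print(set(focal), round(mass, 3))
```

Here the combined belief in "cargo" (about 0.76) exceeds either source's individual mass, illustrating how corroborating evidence sharpens the decision.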
JOINT SATELLITE–GROUND DATA PROCESSING AND DYNAMIC INTERACTION
Onboard satellite data processing, particularly intelligent processing, requires expert experience and knowledge. Compared with onboard storage and computing capabilities, a ground processing system can make full use of massive historical Earth observation satellite data for learning and has more complete knowledge and model bases to support that learning. Simultaneously, with the gradual transfer of satellite authority to end users, users can directly uplink observation task requests to satellites and receive data products at different levels in real time. However, terminal users pay great attention to high-level situational intelligence; taking moving-target monitoring as an example, it is necessary to assess the threat, intention, and trend of dangerous targets and provide warnings and other information. It is difficult for onboard processing systems to realize such high-level situational awareness alone; therefore, they must interact with ground processing systems to obtain ground knowledge and intelligence support.
A satellite–ground joint intelligent learning mechanism should therefore be established: massive spatial–temporal remote sensing big data are learned on the ground, a lightweight learning network model is generated (see the distillation sketch below), and the model is uplinked to the satellite in real time for software updating and reconstruction, achieving onboard intelligent processing and continuous online learning. For emergencies and abnormal events, their recognition characteristics and behavior rules can be learned on the ground from satellite spatial–temporal remote sensing big data, formed into a knowledge base, and uplinked to the satellite for storage and real-time updating, realizing autonomous onboard perception of emergencies and abnormal events in orbit. Ground processing can also combine the information obtained by space-, sea-, and land-based sensors to carry out collaborative reasoning, overcome the incompleteness and discontinuity of sparse satellite observations, and uplink the generated situational intelligence to the satellite, providing support for onboard high-level situational awareness and collaborative observation task planning.
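A minimal sketch of producing such a lightweight model by knowledge distillation is shown below, assuming PyTorch; the layer sizes, temperature, and random data are placeholders, and the resulting student would still need quantization and radiation-tolerant deployment steps before any real uplink.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Placeholder networks: a ground-trained "teacher" and a small "student"
# sized for onboard inference (dimensions are illustrative only).
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 4))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 4))

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 4.0  # softening temperature

for _ in range(200):                         # stand-in for archive batches
    x = torch.randn(64, 32)                  # placeholder for archived data
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=1)
    log_probs = F.log_softmax(student(x) / T, dim=1)
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean") * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()

n_params = sum(p.numel() for p in student.parameters())
print(f"student parameters to uplink: {n_params}")  # ~0.6K vs. ~9.5K teacher
```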
FUTURE DEVELOPMENT TREND
Networking and intelligence are the development directions of future Earth observation satellite systems. Networking includes the networking of multiple platforms and sensors as well as of observation, communication, and computing space resources. Intelligence includes intelligent cooperative observation and intelligent multisource data fusion. The key future development directions of multisatellite onboard intelligent observation and data fusion include the following:
◗◗ Building elastic and expandable multiorbit, multisensor Earth observation satellite constellation systems. LEO, MEO, and HEO satellites achieve cooperative observation through satellite formation flying, constellation group networking, and other technologies. Real-time dynamic observation of hotspots and time-critical targets can be realized by deploying a small number of high- and medium-orbit satellites; combined with high-density LEO satellite constellation groups, the capability of global near real-time, high spatial–temporal resolution coverage can be greatly improved. Integrated satellites and constellations can cover visible, infrared, SAR, microwave, spectral, SIGINT, ELINT, and other bands in active and passive modes, providing multimodal heterogeneous data for onboard information fusion processing.
◗◗ Establishing an onboard autonomous planning and scheduling mechanism that integrates observation, communication, and computing satellite resources; designing a satellite virtual resource pool under a unified time–space framework; establishing a new mode of task-driven, software-defined intelligent cooperative observation; dynamically allocating cooperative observation tasks and data processing tasks to different satellites in near real time under a dynamic, high-speed reconstruction environment; and improving the utilization efficiency of satellite resources and the collaboration efficiency between multiple types of satellites, providing highly timely data for onboard information fusion processing.
◗◗ Designing an onboard hybrid heterogeneous intelligent computing architecture for multisatellite data fusion processing that combines the performance and characteristic advantages of FPGAs, DSPs, CPUs, GPUs, and other hardware; designing a reconfigurable, scalable, and sustainable onboard intelligent fusion processing model for multisatellite data; establishing an integrated satellite–ground cooperative learning and uplink mechanism; learning knowledge, regularities, and models on the ground from massive satellite observation data and updating the intelligent fusion processing system on the satellite in real time; adapting to the requirements of onboard multitask processing; realizing onboard autonomous awareness of emergencies; and providing users with near real-time multidimensional and multilevel information.
◗◗ Promoting the transfer of satellite task control authority to end users, such that users can directly control satellites in orbit, uplink instructions, and acquire satellite data; substantially shortening the satellite task planning, data processing, and information transmission chain from the satellite to end users; and improving the rapid response capability to hot events.

CONCLUSION
Modern Earth observation satellites are capable of high spatial–temporal-spectral resolution, multiple working modes, high agility, and networked collaboration for Earth observation. The onboard information fusion of multiple satellites can further improve the capability of large-scale observation, accurate interpretation, and rapid response for wide-area surveillance, emergency rescue, and other application scenarios. In this study, the key technologies of onboard collaborative observation and information fusion of multiple satellites are analyzed, and development directions and suggestions for the future of multisatellite onboard information fusion are proposed and discussed.
Onboard information fusion of multisatellites is a complex system engineering process. In addition to the key technologies analyzed in this study, it involves additional aspects, such as satellite platforms, sensors, and communications. Integrating onboard data processing, communication, and observation is a way forward for future progress in Earth observation.

ACKNOWLEDGMENT
This work was supported in part by the National Nature Science Foundation of China under Grant 41822105; in part by the Sichuan Natural Science Foundation Innovation Group under Project 2023NSFSC1974; in part by the Fundamental Research Funds for the Central Universities under Projects 2682020ZT34 and 2682021CX071; in part by the CAST Innovation Foundation; and in part by the State Key Laboratory of Geo-Information Engineering under Projects SKLGIE2020-Z-3-1 and SKLGIE2020-M-4-1. Libo Yao is the corresponding author.

AUTHOR INFORMATION
Gui Gao ([email protected]) received his B.S., M.S., and Ph.D. degrees from the National University of Defense Technology (NUDT), Changsha, China, in 2002, 2003, and 2007, respectively. In 2007, he joined the Faculty of Information Engineering, School of Electronic Science and Engineering, NUDT, as an associate professor. He is currently a professor with the Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu 611756, China. He has authored more than 100 journal and conference papers and written four books and an English chapter. He has received numerous awards, including the Excellent Master Thesis of Hunan Province in 2006, Excellent Doctor Thesis in 2008, and Outstanding Young People in NUDT and Hunan Province of China in 2014 and 2016, as well as a first-class Prize of Science and Technology Progress and a Natural Science in Hunan Province award. He was also selected as a Young Talent of Hunan in 2016 and supported by the Excellent Young People Science Foundation of the National Natural Science Foundation of China. He is the lead guest editor of International Journal of Antenna and Propagation, the guest editor of Remote Sensing, and an associate editor and the lead guest editor of IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, and he is on the editorial board of the Chinese Journal of Radars. He was also the cochair of several conferences in the field of remote sensing. He was the Excellent Reviewer for Journal of Xi'an Jiaotong University in 2013. He is a Member of IEEE, the IEEE Geoscience and Remote Sensing Society, and the Applied Computational Electromagnetics Society; a senior member of the Chinese Institute of Electronics (CIE); and a dominant member of the CIE Young Scientist Forum.
Libo Yao ([email protected]) received his B.S., M.S., and Ph.D. degrees from Shandong University, the People's Liberation Army Information Engineering University of China, and the Naval Aviation University of China in 2003, 2006, and 2019, respectively. He is now an associate professor at the Institute of Information Fusion, Naval Aviation University of China, Yantai 264001, China. His research interests include satellite remote sensing information fusion and onboard processing.
Wenfeng Li ([email protected]) is with the Shanghai Institute of Satellite Engineering, Shanghai 201109, China. His research interests include multiresource remote sensing imagery processing and onboard analysis.
Linlin Zhang ([email protected]) received her B.S. degree from Wuhan University in 2016, and her M.S. degree from Southwest Jiaotong University in 2019. She is now pursuing her Ph.D. degree in the Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu 611756, China. Her research interests include synthetic aperture radar imagery processing and object detection.
Maolin Zhang ([email protected]) is with the Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu 611756, China. His research interests include synthetic aperture radar onboard data processing.

REFERENCES
[1]
J. Llinas and E. Waltz, Multisensor Data Fusion. Norwood, MA, USA: Artech House, 1990. [2] J. Manyika and H. Durrant-Whyte, Data Fusion and Sensor Management: A Decentralized Information-Theoretic Approach. New York, NY, USA: Ellis Horwood, 1994. [3] Y. Bar-Shalom and X. R. Li, Multitarget-Multisensor Tracking: Principles and Techniques. Storrs, CT, USA: YBS Publishing, 1995. [4] I. R. Goodman, R. P. S. Mahler, and H. T. Nguyen, Mathematics of Data Fusion. Norwell, MA, USA: Kluwer, 1997. [5] N. S. V. Rao, D. B. Reister, and J. Barhen, "Information fusion methods based on physical laws," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 1, pp. 66–77, Jan. 2005, doi: 10.1109/TPAMI.2005.12. [6] D. L. Hall and J. Llinas, Handbook of Multisensor Data Fusion. Boca Raton, FL, USA: CRC Press, 2001. [7] Y. He, G. Wang, and X. Guan, Information Fusion Theory with Applications. Beijing, China: Publishing House of Electronics Industry, 2011. [8] Z. Zhao, C. Xiong, and K. Wang, Conceptions, Methods and Applications on Information Fusion. Beijing, China: National Defense Industry Press, 2012. [9] M. D. Mura, S. Prasad, F. Pacifici, P. Gamba, J. Chanussot, and J. A. Benediktsson, "Challenges and opportunities of multimodality and data fusion in remote sensing," Proc. IEEE, vol. 103, no. 9, pp. 1585–1601, Sep. 2015, doi: 10.1109/JPROC.2015.2462751. [10] C. Han, H. Zhu, and Z. Duan, Multisource Information Fusion. Beijing, China: Tsinghua University Press, 2021. [11] L. Wald, "An overview of concepts in fusion of Earth data," in Proc. EARSeL Symp. "Future Trends Remote Sens.," 2009, pp. 385–390.
[12] W. Li, Y. Li, and C. Chan, “Thick cloud removal with optical and SAR imagery via convolutional-mapping-deconvolutional network,” IEEE Trans. Geosci. Remote Sens., vol. 58, no. 4, pp. 2865–2879, Apr. 2020, doi: 10.1109/TGRS.2019.2956959. [13] S. Singh, R. K. Tiwari, V. Sood, H. S. Gusain, and S. Prashar, “Image fusion of Ku-band-based SCATSAT-1 and MODIS data for cloud-free change detection over Western Himalayas,” IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 4302514, doi: 10.1109/TGRS.2021.3123392. [14] Z. Yuan, L. Mou, Z. Xiong, and X. X. Zhu, “Change detection meets visual question answering,” IEEE Trans. Geosci. Remote Sens., vol. 60, Sep. 2022, Art. no. 5630613, doi: 10.1109/ TGRS.2022.3203314. [15] C. Lu, Y. Lin, and R. Y. Chuang, “Pixel offset fusion of SAR and optical images for 3-D coseismic surface deformation,” IEEE Geosci. Remote Sens. Lett., vol. 18, no. 6, pp. 1049–1053, Jun. 2021, doi: 10.1109/LGRS.2020.2991758. [16] H. Yu, N. Cao, Y. Lan, and M. Xing, “Multisystem interferometric data fusion framework: A three-step sensing approach,” IEEE Trans. Geosci. Remote Sens., vol. 59, no. 10, pp. 8501–8509, Oct. 2021, doi: 10.1109/TGRS.2020.3045093. [17] L. Zhou, H. Yu, V. Pascazio, and M. Xing, “PU-GAN: A one-step 2-D InSAR phase unwrapping based on conditional generative adversarial network,” IEEE Trans. Geosci. Remote Sens., vol. 60, Jan. 2022, Art. no. 5221510, doi: 10.1109/TGRS.2022.3145342. [18] L. Zhou, H. Yu, Y. Lan, and M. Xing, “Deep learning-based branch-cut method for InSAR two-dimensional phase unwrapping,” IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 5209615, doi: 10.1109/TGRS.2021.3099997. [19] H. Thirugnanam, S. Uhlemann, R. Reghunadh, M. V. Ramesh, and V. P. Rangan, “Review of landslide monitoring techniques with IoT integration opportunities,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. 5317–5338, Jun. 2022, doi: 10.1109/JSTARS.2022.3183684. [20] N. Jiang, H. Li, C. Li, H. Xiao, and J. Zhou, “A fusion method using terrestrial laser scanning and unmanned aerial vehicle photogrammetry for landslide deformation monitoring under complex terrain conditions,” IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 4707214, doi: 10.1109/ TGRS.2022.3181258. [21] S. K. Ahmad, F. Hossain, H. Eldardiry, and T. M. Pavelsky, “A fusion approach for water area classification using visible, near infrared and synthetic aperture radar for South Asian conditions,” IEEE Trans. Geosci. Remote Sens., vol. 58, no. 4, pp. 2471– 2480, Apr. 2020, doi: 10.1109/TGRS.2019.2950705. [22] J. Park et al., “Illuminating dark fishing fleets in North Korea,” Sci. Adv., vol. 6, Jul. 2020, Art. no. eabb1197, doi: 10.1126/sciadv.abb1197. [23] S. Brusch, S. Lehner, T. Fritz, M. Soccorsi, A. Soloviev, and B. V. Schie, “Ship surveillance with TerraSAR-X,” IEEE Trans. Geosci. Remote Sens., vol. 49, no. 3, pp. 1092–1102, Mar. 2011, doi: 10.1109/TGRS.2010.2071879. [24] T. Liu, Z. Yang, A. Marino, G. Gao, and J. Yang, “Joint polarimetric subspace detector based on modified linear discriminant analysis,” IEEE Trans. Geosci. Remote Sens., vol. 60, Feb. 2022, Art. no. 5223519, doi: 10.1109/TGRS.2022.3148979.
[25] T. Liu, Z. Yang, G. Gao, A. Marino, S. Chen, and J. Yang, "A general framework of polarimetric detectors based on quadratic optimization," IEEE Trans. Geosci. Remote Sens., vol. 60, Oct. 2022, Art. no. 5237418, doi: 10.1109/TGRS.2022.3217336. [26] B. Zhang, Z. Zhu, W. Perrie, J. Tang, and J. A. Zhang, "Estimating tropical cyclone wind structure and intensity from spaceborne radiometer and synthetic aperture radar," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 4043–4050, Mar. 2021, doi: 10.1109/JSTARS.2021.3065866. [27] H. Greidanus and N. Kourti, "Findings of the DECLIMS project-detection and classification of marine traffic from space," in Proc. SEASAR, Adv. SAR Oceanogr. ENVISAT and ERS, 2006, pp. 126–131. [28] Y. Lan, H. Yu, Z. Yuan, and M. Xing, "Comparative study of DEM reconstruction accuracy between single- and multibaseline InSAR phase unwrapping," IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 5220411, doi: 10.1109/TGRS.2022.3140327. [29] L. Zhou, H. Yu, Y. Lan, S. Gong, and M. Xing, "CANet: An unsupervised deep convolutional neural network for efficient cluster-analysis-based multibaseline InSAR phase unwrapping," IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 5212315, doi: 10.1109/TGRS.2021.3110518. [30] H. Yu, T. Yang, L. Zhou, and Y. Wang, "PDNet: A light-weight deep convolutional neural network for InSAR phase denoising," IEEE Trans. Geosci. Remote Sens., vol. 60, Nov. 2022, Art. no. 5239309, doi: 10.1109/TGRS.2022.3224030. [31] A. Allies et al., "Assimilation of multisensor optical and multiorbital SAR satellite data in a simplified agrometeorological model for rapeseed crops monitoring," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. 1123–1137, 2022, doi: 10.1109/JSTARS.2021.3136289. [32] D. A. Kroodsma et al., "Tracking the global footprint of fisheries," Science, vol. 359, no. 6378, pp. 904–908, Feb. 2018, doi: 10.1126/science.aao5646. [33] G. Thomas, "Collaboration in space: The silver bullet for global maritime awareness," Can. Nav. Rev., vol. 8, no. 1, pp. 14–18, 2012. [34] A. D. George and C. M. Wilson, "Onboard processing with hybrid and reconfigurable computing on small satellites," Proc. IEEE, vol. 106, no. 3, pp. 458–470, Mar. 2018, doi: 10.1109/JPROC.2018.2802438. [35] M. Amaud and A. Barumchercyk, "An experimental optical link between an Earth remote sensing satellite, spot-4, and a European data relay satellite," Int. J. Satell. Commun., vol. 6, no. 2, pp. 127–140, Apr./Jun. 1998, doi: 10.1002/sat.4600060208. [36] H. Jiang and S. Tong, The Technologies and Systems of Space Laser Communication. Beijing, China: National Defense Industry Press, 2010. [37] K. Gao, Y. Liu, and G. Ni, "Study on on-board real-time image processing technology of optical remote sensing," Spacecraft Recovery Remote Sens., vol. 29, no. 1, pp. 50–54, 2008. [38] Z. Yue, Z. Qin, and J. Li, "Design of in-orbit processing mechanism for space-earth integrated information network," J. China Acad. Electron. Inf. Technol., vol. 6, no. 4, pp. 580–585, 2020.
[39] X. Luo, "Design of multi-task real-time operating system in on board data handling," Chin. Space Sci. Technol., vol. 3, no. 3, pp. 15–20, 1997. [40] C. Bian, "Research on valid region on-board real-time detection and compression technology applied for optical remote sensing image," Harbin Institute of Technology, Harbin, China, Rep. TD36827148, 2018. [41] C. Liu, Y. Guo, and N. Li, "Composition and compression of satellite multi-channel remote sensing images," Opt. Precis. Eng., vol. 21, no. 2, pp. 445–453, 2013. [42] J. Li, L. Jin, and G. Li, "Lossless compression of hyperspectral image for space-borne application," Spectrosc. Spectral Anal., vol. 32, no. 8, pp. 2264–2269, Aug. 2012. [43] C. Liu, Y. Guo, and N. Li, "Real-time composing and compression of image within satellite multi-channel TDICCD camera," Infrared Laser Eng., vol. 42, no. 8, pp. 2068–2675, Aug. 2013. [44] D. Valsesia and E. Magli, "High-throughput onboard hyperspectral image compression with ground-based CNN reconstruction," IEEE Trans. Geosci. Remote Sens., vol. 57, no. 12, pp. 9544–9553, Dec. 2019, doi: 10.1109/TGRS.2019.2927434. [45] T. Yang, Q. Xu, F. Meng, and S. Zhang, "Distributed real-time image processing of formation flying SAR based on embedded GPUs," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. 6495–6505, Aug. 2022, doi: 10.1109/JSTARS.2022.3197199. [46] D. Mota et al., "Onboard processing of synthetic aperture radar backprojection algorithm in FPGA," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. 3600–3611, Apr. 2022, doi: 10.1109/JSTARS.2022.3169828. [47] J. Qiu et al., "A novel weight generator in real-time processing architecture of DBF-SAR," IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 5204915, doi: 10.1109/TGRS.2021.3067882. [48] Z. Ding, P. Zheng, Y. Wang, T. Zeng, and T. Long, "Blocked azimuth spectrum reconstruction algorithm for onboard real-time dual-channel SAR imaging," IEEE Geosci. Remote Sens. Lett., vol. 19, 2022, Art. no. 4015305, doi: 10.1109/LGRS.2021.3091276. [49] D. Wang, X. Chen, and Z. Li, "On-board cloud detection and avoidance algorithms for optical remote sensing satellite," Syst. Eng. Electron., vol. 3, no. 3, pp. 515–522, 2019. [50] X. Yan, Y. Xia, and J. Zhao, "Efficient implementation method of real-time cloud detection in remote sensing video based on FPGA," Appl. Res. Comput., vol. 6, pp. 1794–1799, 2021. [51] G. Yang et al., "Algorithm/hardware codesign for real-time on-satellite CNN-based ship detection in SAR imagery," IEEE Trans. Geosci. Remote Sens., vol. 60, Mar. 2022, Art. no. 5226018, doi: 10.1109/TGRS.2022.3161499. [52] J. Huang, G. Zhou, and X. Zhou, "A new FPGA architecture of Fast and BRIEF algorithm for on-board corner detection and matching," Sensors, vol. 18, no. 4, pp. 1014–1031, Mar. 2018, doi: 10.3390/s18041014. [53] T. Zhang and Z. Zuo, "Some key problems on space-borne recognition of moving target," Infrared Laser Eng., vol. 30, no. 6, pp. 395–400, 2001. [54] S. Yu, Y. Yu, and X. He, "On-board fast and intelligent perception of ships with the 'Jilin-1' Spectrum 01/02 satellites," IEEE Access, vol. 8, pp. 48,005–48,014, Mar. 2020, doi: 10.1109/ACCESS.2020.2979476.
[55] J. T. Johnson et al., “Real-time detection and filtering of radio frequency interference onboard a spaceborne microwave radiometer: The CubeRRT mission,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 13, pp. 1610–1624, Apr. 2020, doi: 10.1109/JSTARS.2020.2978016. [56] G. Wu, B. Cui, and Q. Shen, “Research on real-time guided multi-satellite imaging mission planning method,” Spacecraft Eng., vol. 28, no. 5, pp. 1–6, Oct. 2019. [57] B. Zhang, J. Luo, and J. Yuan, “On-orbit autonomous operation cooperative control of multi-spacecraft formation,” J. Astronaut., vol. 10, no. 1, pp. 130–136, 2010. [58] Y. Li, X. Yin, W. Zhou, M. Lin, H. Liu, and Y. Li, “Performance simulation of the payload IMR and MICAP onboard the Chinese ocean salinity satellite,” IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 5301916, doi: 10.1109/TGRS.2021.3111026. [59] F. Viel, W. D. Parreira, A. A. Susin, and C. A. Zeferino, “A hardware accelerator for onboard spatial resolution enhancement of hyperspectral images,” IEEE Geosci. Remote Sens. Lett., vol. 18, no. 10, pp. 1796–1800, Oct. 2021, doi: 10.1109/LGRS.2020.3009019. [60] Z. Wang, F. Liu, T. Zeng, and S. He, “A high-frequency motion error compensation algorithm based on multiple errors separation in BiSAR onboard mini-UAVs,” IEEE Trans. Geosci. Remote Sens., vol. 60, Feb. 2022, Art. no. 5223013, doi: 10.1109/ TGRS.2022.3150081. [61] M. Martone, M. Villano, M. Younis, and G. Krieger, “Efficient onboard quantization for multichannel SAR systems,” IEEE Geosci. Remote Sens. Lett., vol. 16, no. 12, pp. 1859–1863, Dec. 2019, doi: 10.1109/LGRS.2019.2913214. [62] H. M. Heyn, M. Blanke, and R. Skjetne, “Ice condition assessment using onboard accelerometers and statistical change detection,” IEEE J. Ocean. Eng., vol. 45, no. 3, pp. 898–914, Jul. 2020, doi: 10.1109/JOE.2019.2899473. [63] Z. Cao, R. Ma, J. Liu, and J. Ding, “Improved radiometric and spatial capabilities of the coastal zone imager onboard Chinese HY-1C satellite for inland lakes,” IEEE Geosci. Remote Sens. Lett., vol. 18, no. 2, pp. 193–197, Feb. 2021, doi: 10.1109/ LGRS.2020.2971629. [64] Z. Li et al., “In-orbit test of the polarized scanning atmospheric corrector (PSAC) onboard Chinese environmental protection and disaster monitoring satellite constellation HJ-2 A/B,” IEEE Trans. Geosci. Remote Sens., vol. 60, May 2022, Art. no. 4108217, doi: 10.1109/TGRS.2022.3176978. [65] C. Fu, Z. Cao, Y. Li, J. Ye, and C. Feng, “Onboard real-time aerial tracking with efficient Siamese anchor proposal network,” IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 5606913, doi: 10.1109/TGRS.2021.3083880. [66] G. Doran, A. Daigavane, and K. L. Wagstaff, “Resource consumption and radiation tolerance assessment for data analysis algorithms onboard spacecraft,” IEEE Trans. Aerosp. Electron. Syst., vol. 58, no. 6, pp. 5180–5189, Dec. 2022, doi: 10.1109/ TAES.2022.3169123. [67] S. J. Lee and M. H. Ahn, “Synergistic benefits of intercomparison between simulated and measured radiances of imagers onboard geostationary satellites,” IEEE Trans. Geosci. Remote Sens., vol. 59, no. 12, pp. 10,725–10,737, Dec. 2021, doi: 10.1109/ TGRS.2021.3054030.
[68] Y. Lin, J. Li, and C. Xiao, “Vicarious radiometric calibration of the AHSI Instrument onboard ZY1E on Dunhuang radiometric calibration site,” IEEE Trans. Geosci. Remote Sens., vol. 60, Jun. 2022, Art. no. 5530713, doi: 10.1109/TGRS.2022.3180120. [69] Y. Liu et al., “A classification-based, semianalytical approach for estimating water clarity from a hyperspectral sensor onboard the ZY1-02D satellite,” IEEE Trans. Geosci. Remote Sens., vol. 60, Mar. 2022, Art. no. 4206714, doi: 10.1109/TGRS.2022.3161651. [70] I. Sandberg et al., “First results and analysis from ESA next generation radiation monitor unit onboard EDRS-C,” IEEE Trans. Nucl. Sci., vol. 69, no. 7, pp. 1549–1556, Jul. 2022, doi: 10.1109/ TNS.2022.3160108. [71] M. Zhao et al., “First year on-orbit calibration of the Chinese environmental trace gas monitoring instrument onboard GaoFen-5,” IEEE Trans. Geosci. Remote Sens., vol. 58, no. 12, pp. 8531–8540, Dec. 2020, doi: 10.1109/TGRS.2020.2988573. [72] Z. Ma, S. Zhu, and J. Yang, “FY4QPE-MSA: An all-day near-real-time quantitative precipitation estimation framework based on multispectral analysis from AGRI onboard Chinese FY-4 series satellites,” IEEE Trans. Geosci. Remote Sens., vol. 60, Mar. 2022, Art. no. 4107215, doi: 10.1109/TGRS.2022.3159036. [73] X. Lei et al., “Geolocation error estimation method for the wide swath polarized scanning atmospheric corrector onboard HJ2 A/B satellites,” IEEE Trans. Geosci. Remote Sens., vol. 60, Jul. 2022, Art. no. 5626609, doi: 10.1109/TGRS.2022.3193095. [74] X. Ye, J. Liu, M. Lin, J. Ding, B. Zou, and Q. Song, “Global ocean chlorophyll-a concentrations derived from COCTS onboard the HY-1C satellite and their preliminary evaluation,” IEEE Trans. Geosci. Remote Sens., vol. 59, no. 12, pp. 9914–9926, Dec. 2021, doi: 10.1109/TGRS.2020.3036963. [75] X. Ye, J. Liu, M. Lin, J. Ding, B. Zou, and Q. Song, “Sea surface temperatures derived from COCTS onboard the HY-1C satellite,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 1038–1047, 2021, doi: 10.1109/JSTARS.2020.3033317. [76] Z. Wang et al., “Validation of new sea surface wind products from scatterometers onboard the HY-2B and MetOp-C satellites,” IEEE Trans. Geosci. Remote Sens., vol. 58, no. 6, pp. 4387– 4394, Jun. 2020, doi: 10.1109/TGRS.2019.2963690. [77] G. Rabideau, D. Tran, and S. Chien, “Mission operations of earth observing-1 with onboard autonomy,” in Proc. 2nd IEEE Int. Conf. Space Mission Challenges Inf. Technol., Pasadena, CA, USA, 2006, pp. 1–7, doi: 10.1109/SMC-IT.2006.48. [78] W. A. Myers, R. D. Smith, and J. L. Stuart, “NEMO satellite sensor imaging payload,” in Proc. Infrared Spaceborne Remote Sens. VI (SPIE), 1998, vol. 3437, no. 11, pp. 29–40, doi: 10.1117/12.331331. [79] S. Yarbrough et al., “MightySat II.1 hyperspectral imager: Summary of on-orbit performance,” in Proc. Int. Symp. Opt. Sci. Technol. (SPIE), Imag. Spectrometry VII, 2002, vol. 4480, doi: 10.1117/12.453339. [80] L. Doggrell, “Operationally responsive space: A vision for the future of military space,” Air Space Power J., vol. 20, no. 2, pp. 42–49, 2006. [81] M. Dai and Z. You, “On-board intelligent image process based on transputer for small satellites,” Spacecraft Eng., vol. 9, no. 3, pp. 12–20, 2000.
[82] S. Yu et al., "The real-time on-orbit detection and recognition technologies for the typical target based on optical remote sensing satellite," in Proc. 4th China High Resolution Earth Observ. Conf., Wuhan, China, 2015, pp. 28–32. [83] B. Zhukov et al., "Spaceborne detection and characterization of fires during the bi-spectral infrared detection (BIRD) experimental small satellite mission (2001-2004)," Remote Sens. Environ., vol. 100, no. 1, pp. 29–51, Jan. 2006, doi: 10.1016/j.rse.2005.09.019. [84] F. D. Lussy et al., "Pleiades HR in flight geometrical calibration: Location and mapping of the focal plane," in Proc. Int. Arch. Photogrammetry, Remote Sens. Spatial Inf. Sci., Aug. 2012, vol. 8, pp. 519–523, doi: 10.5194/isprsarchives-XXXIX-B1-519-2012. [85] B. Fraser, "The FedSat microsatellite mission," Space Sci. Rev., vol. 107, pp. 303–306, Apr. 2003, doi: 10.1023/A:1025508816225. [86] R. Shen, "Some thoughts of Chinese integrated space-ground network system," Eng. Sci., vol. 8, no. 10, pp. 19–30, Oct. 2006. [87] J. Rash, K. Hogie, and R. Casasanta, "Internet technology for future space missions," Comput. Netw., vol. 47, no. 5, pp. 651–659, Apr. 2005, doi: 10.1016/j.comnet.2004.08.003. [88] J. Mukherjee and B. Ramamurthy, "Communication technologies and architectures for space network and interplanetary internet," IEEE Commun. Surveys Tuts., vol. 15, no. 2, pp. 881–897, Second Quarter 2013, doi: 10.1109/SURV.2012.062612.00134. [89] R. C. Sofia et al., "Internet of space: Networking architectures and protocols to support space-based internet service," IEEE Access, vol. 10, pp. 92,706–92,709, Sep. 2022, doi: 10.1109/ACCESS.2022.3202342. [90] D. R. Li, X. Shen, J. Y. Gong, J. Zhang, and J. Lu, "On construction of China's space information network," Geomatics Inf. Sci. Wuhan Univ., vol. 40, no. 6, pp. 711–715, 2016. [91] X. Yang, "Integrated spatial information system based on software-defined satellite," Rev. Electron. Sci. Technol., vol. 4, no. 1, pp. 15–22, 2004. [92] S. Min, "An idea of China's space-based integrated information network," Spacecraft Eng., vol. 22, no. 5, pp. 1–14, 2013. [93] N. Zhang, K. Zhao, and G. Liu, "Thought on constructing the integrated space-terrestrial information network," J. China Acad. Electron. Inf. Technol., vol. 10, no. 3, pp. 223–230, 2015. [94] H. Jiang et al., "Several key problems of space-ground integration information network," Acta Armamentarii, vol. 35, no. 1, pp. 96–100, 2014. [95] C. Sun, "Research status and problems for space-based transmission network and space-ground integrated information network," Radio Eng., vol. 47, no. 1, pp. 1–6, Jan. 2017. [96] S. Shekhar, S. Feiner, and W. G. Aref, "From GPS and virtual globes to spatial computing-2020," Geoinformatica, vol. 19, no. 4, pp. 799–832, Oct. 2015, doi: 10.1007/s10707-015-0235-9. [97] P. Thomas, DARPA Blackjack Demo Program – Pivot to LEO and Tactical Space Architecture, DARPA Tactical Technology Office, Arlington, VA, USA, 2018. [98] National Academies of Sciences, Engineering, and Medicine. National Security Space Defense and Protection: Public Report. Washington, DC, USA: The National Academies Press, 2016. [99] F. C. Teston et al., "The PROBA-1 microsatellite," in Proc. SPIE 49th Annu. Meeting, Opt. Sci. Technol., Imag. Spectrometry X, Oct. 2004, vol. 5546, doi: 10.1117/12.561071.
[100] Z. Chen, "The innovation and practice of PJ-1 satellite," Aerosp. Shanghai, vol. 33, no. 3, pp. 1–10, 2016. [101] G. Zhou, "Architecture of future intelligent Earth observing satellites (FIEOS) in 2010 and beyond," in Proc. SPIE Earth Observ. Syst. VIII, San Diego, CA, USA, 2001, pp. 1021–1028. [102] B. Oktay and G. Zhou, "From global Earth observation system of system to future intelligent earth observing satellite system," in Proc. 3rd Int. Symp. Future Intell. Earth Observ. Satell., Beijing, China, May 2006, pp. 33–38. [103] D. R. Li and X. Shen, "On intelligent Earth observation systems," Sci. Surv. Mapping, vol. 30, no. 4, pp. 9–11, 2005. [104] D. R. Li, M. Wang, X. Shen, and Z. Dong, "From earth observation satellite to Earth observation brain," Geomatics Inf. Sci. Wuhan Univ., vol. 42, no. 2, pp. 143–149, Feb. 2017. [105] D. R. Li, X. Shen, D. Li, and S. Li, "On civil-military integrated space-based real-time information service system," Geomatics Inf. Sci. Wuhan Univ., vol. 42, no. 11, pp. 1501–1505, Nov. 2017. [106] J. Zhang and J. Guo, "Preliminary design on intelligent remote sensing satellite system and analysis on its key technologies," Radio Eng., vol. 46, no. 2, pp. 1–5, 2016. [107] B. Zhang, "Intelligent remote sensing satellite system," J. Remote Sens., vol. 15, no. 3, pp. 423–427, 2011. [108] M. Wang and Q. Wu, "Key problems of remote sensing images intelligent service for constellation," Acta Geodaetica Cartogr. Sin., vol. 51, no. 6, pp. 1008–1016, 2022. [109] F. Yang, S. Liu, J. Zhao, and Q. Zheng, "Technology prospective of intelligent remote sensing satellite," Spacecraft Eng., vol. 26, no. 5, pp. 74–81, 2017. [110] M. Peng, S. Zhang, H. Xu, M. Zhang, Y. Sun, and Y. Cheng, "Communication and remote sensing integrated LEO satellites: Architecture, technologies and experiment," Telecommun. Sci., vol. 38, no. 1, pp. 13–24, 2022. [111] F. Wu, C. Lu, M. Zhu, H. Chen, and Y. Pan, "Towards a new generation of artificial intelligence in China," Nature Mach. Intell., vol. 2, no. 6, pp. 312–316, Jun. 2020, doi: 10.1038/s42256-020-0183-4. [112] M. Bonavita, R. Arcucci, A. Carrassi, P. Dueben, and L. Raynaud, "Machine learning for Earth system observation and prediction," Bull. Amer. Meteorol. Soc., vol. 102, no. 4, pp. 1–13, Apr. 2020, doi: 10.1175/BAMS-D-20-0307.1. [113] Y. Lin, J. Hu, L. Li, F. Wu, and J. Zhao, "Design and implementation of on-orbit valuable image extraction for the TianZhi-1 satellite," in Proc. IEEE 14th Int. Conf. Intell. Syst. Knowl. Eng. (ISKE), 2019, pp. 1076–1080, doi: 10.1109/ISKE47853.2019.9170453. [114] R. A. Singer and R. G. Sea, "New results in optimizing surveillance system tracking and data correlation performance in dense multitarget environments," IEEE Trans. Autom. Control, vol. 18, no. 6, pp. 571–582, Dec. 1973, doi: 10.1109/TAC.1973.1100421. [115] T. L. Song, D. Lee, and J. Ryu, "A probabilistic nearest neighbor filter algorithm for tracking in a clutter environment," Signal Process., vol. 85, no. 10, pp. 2044–2053, Oct. 2005, doi: 10.1016/j.sigpro.2005.01.016. [116] Y. Bar-Shalom and E. Tse, "Tracking in a cluttered environment with probabilistic data association," Automatica, vol. 11, no. 9, pp. 451–460, Sep. 1975, doi: 10.1016/0005-1098(75)90021-7.
[117] D. Musicki, R. Evans, and S. Stankovic, "Integrated probabilistic data association," IEEE Trans. Autom. Control, vol. 39, no. 6, pp. 1237–1241, Jun. 1994, doi: 10.1109/9.293185. [118] T. E. Fortmann, Y. Bar-Shalom, and M. Scheffe, "Sonar tracking of multiple targets using joint probabilistic data association," IEEE J. Ocean. Eng., vol. 8, no. 3, pp. 173–183, Jul. 1983, doi: 10.1109/JOE.1983.1145560. [119] J. A. Roecker and G. L. Phillis, "Suboptimal joint probabilistic data association," IEEE Trans. Aerosp. Electron. Syst., vol. 29, no. 2, pp. 510–517, Apr. 1993, doi: 10.1109/7.210087. [120] J. A. Roecker, "A class of near optimal JPDA algorithms," IEEE Trans. Aerosp. Electron. Syst., vol. 30, no. 2, pp. 504–510, Apr. 1994, doi: 10.1109/7.272272. [121] D. B. Reid, "An algorithm for tracking multiple targets," IEEE Trans. Autom. Control, vol. 24, no. 6, pp. 843–854, Dec. 1979, doi: 10.1109/TAC.1979.1102177. [122] R. Danchick and G. E. Newman, "A fast method for finding the exact N-best hypotheses for multitarget tracking," IEEE Trans. Aerosp. Electron. Syst., vol. 29, no. 2, pp. 555–560, Apr. 1993, doi: 10.1109/7.210093. [123] H. A. P. Blom, "Overlooked potential of systems with Markovian coefficients," in Proc. 25th Conf. Decis. Control, Athena, Greece, Dec. 1986, pp. 1758–1764, doi: 10.1109/CDC.1986.267261. [124] Y. Bar-Shalom, K. C. Chang, and H. A. P. Blom, "Tracking a maneuvering target using input estimation versus the interacting multiple model algorithm," IEEE Trans. Aerosp. Electron. Syst., vol. 25, no. 2, pp. 296–300, Mar. 1989, doi: 10.1109/7.18693. [125] Y. He, D. Lu, and Y. Peng, "Two new track correlation algorithms in a multisensory data fusion system," Acta Electron. Sin., vol. 25, no. 9, pp. 10–14, 1997. [126] Y. He, Y. Peng, and D. Lu, "Binary track correlation algorithms in a distributed multisensor data fusion system," J. Electron., vol. 19, no. 6, pp. 721–728, Nov. 1997. [127] C. Hue, J. P. L. Cadre, and P. Perez, "Tracking multiple objects with particle filtering," IEEE Trans. Aerosp. Electron. Syst., vol. 38, no. 3, pp. 791–812, Jul. 2002, doi: 10.1109/TAES.2002.1039400. [128] K. Punithakumar, T. Kirubarajan, and A. Sinha, "Multiple-model probability hypothesis density filter for tracking maneuvering targets," IEEE Trans. Aerosp. Electron. Syst., vol. 44, no. 1, pp. 87–98, Jan. 2008, doi: 10.1109/TAES.2008.4516991. [129] M. Tobias and A. D. Lanterman, "Probability hypothesis density-based multitarget tracking with bistatic range and Doppler observations," IEE Proc.-Radar, Sonar Navigation, vol. 152, no. 3, pp. 195–205, Jun. 2005, doi: 10.1049/ip-rsn:20045031. [130] S. Deb, K. R. Pattipati, and Y. Bar-Shalom, "A multisensor-multitarget data association algorithm for heterogeneous sensors," IEEE Trans. Aerosp. Electron. Syst., vol. 29, no. 2, pp. 560–568, Apr. 1993, doi: 10.1109/7.210094. [131] K. R. Pattipati and S. Deb, "A new relaxation algorithm and passive sensor data association," IEEE Trans. Autom. Control, vol. 37, no. 2, pp. 198–213, Feb. 1992, doi: 10.1109/9.121621. [132] D. Lerro and Y. Bar-Shalom, "Interacting multiple model tracking with target amplitude feature," IEEE Trans. Aerosp. Electron. Syst., vol. 29, no. 4, pp. 494–508, Apr. 1993, doi: 10.1109/7.210086.
[133] X. J. Jing and Y. G. Chen, "Association algorithm of data fusion using Doppler frequency of targets," Syst. Eng. Electron., vol. 21, no. 7, pp. 66–68, 1999. [134] Z. Xu, Y. Ni, X. Gong, and L. Jin, "Using target's polarization for data association in multiple target tracking," in Proc. IEEE 8th Int. Conf. Signal Process., 2006, doi: 10.1109/ICOSP.2006.346005. [135] L. Wang and J. Li, "Using range profiles for data association in multiple-target tracking," IEEE Trans. Aerosp. Electron. Syst., vol. 32, no. 1, pp. 445–450, Jan. 1996, doi: 10.1109/7.481285. [136] J. G. Wang, J. Q. Luo, and J. M. Lv, "Passive tracking based on data association with information fusion of multi-feature and multi-target," in Proc. Int. Conf. Neural Netw. Signal Process., 2003, pp. 686–689, doi: 10.1109/ICNNSP.2003.1279367. [137] C. Zhao, Q. Pan, and Y. Liang, Video Imagery Moving Targets Analysis. Beijing, China: National Defense Industry Press, 2011. [138] Y. Bar-Shalom, T. Kirubarajan, and C. Gokberk, "Tracking with classification-aided multiframe data association," IEEE Trans. Aerosp. Electron. Syst., vol. 41, no. 3, pp. 868–878, Jul. 2005, doi: 10.1109/TAES.2005.1541436. [139] M. A. Zaveri, S. N. Merchant, and U. B. Desai, "Robust neural-network-based data association and multiple model-based tracking of multiple point targets," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 37, no. 3, pp. 337–351, May 2007, doi: 10.1109/TSMCC.2007.893281. [140] P. H. Chou, Y. N. Chung, and M. R. Yang, "Multiple-target tracking with competitive Hopfield neural network based data association," IEEE Trans. Aerosp. Electron. Syst., vol. 43, no. 3, pp. 1180–1188, Jul. 2007, doi: 10.1109/TAES.2007.4383609. [141] Z. Xiong, Y. Cui, W. Xiong, and X. Gu, "Adaptive association for satellite and radar position data," Syst. Eng. Electron., vol. 43, no. 1, pp. 91–98, 2021. [142] P. Sarlin, D. DeTone, T. Malisiewicz, and A. Rabinovich, "SuperGlue: Learning feature matching with graph neural networks," in Proc. IEEE/CVF Conf. Comput. Vision Pattern Recognit. (CVPR), 2020, pp. 4937–4946. [143] L. Lei, H. Cai, T. Tang, and Y. Su, "A MSA feature-based multiple targets association algorithm in remote sensing images," J. Remote Sens., vol. 12, no. 4, pp. 586–592, 2008. [144] Y. Tang and S. Xu, "A united target data association algorithm based on D-S theory and multiple remote sensing images," J. Univ. Sci. Technol. China, vol. 36, no. 5, pp. 465–471, May 2006. [145] L. Lin, "Research on feature extraction and fusion technology of ship target in multi-source remote sensing images," Ph.D. dissertation, Nat. Defense Univ. Sci. Technol., Changsha, China, 2008. [146] T. Yang et al., "Small moving vehicle detection in a satellite video of an urban area," Sensors, vol. 16, no. 9, pp. 1528–1543, Sep. 2016, doi: 10.3390/s16091528. [147] J. Wu, G. Zhang, T. Wang, and Y. Jiang, "Satellite video point-target tracking in combination with motion smoothness constraint and grayscale feature," Acta Geodaetica Cartogr. Sin., vol. 46, no. 9, pp. 1135–1146, Sep. 2017. [148] Y. Liu, P. Guo, L. Cao, M. Ji, and L. Yao, "Information fusion of GF-1 and GF-4 satellite imagery for ship surveillance," in Proc.
IEEE Int. Geosci. Remote Sens. Symp., 2021, pp. 5044–5047, doi: 10.1109/IGARSS47720.2021.9553591. [149] Y. Cao, "Research on target tracking correlation technology based on multi-source information of satellite reconnaissance," Ph.D. dissertation, Nat. Defense Univ. Sci. Technol., Changsha, China, 2018. [150] W. Li, "Targets association based on electronic reconnaissance data," M.S. thesis, Nat. Defense Univ. Sci. Technol., Changsha, China, 2013. [151] H. Zeng, "Research on ship formation target data association based on spaceborne optical imaging reconnaissance and spaceborne electronic reconnaissance," M.S. thesis, Nat. Defense Univ. Sci. Technol., Changsha, China, 2008. [152] H. Zou, H. Sun, K. Ji, C. Du, and C. Lu, "Multimodal remote sensing data fusion via coherent point set analysis," IEEE Geosci. Remote Sens. Lett., vol. 10, no. 4, pp. 672–676, Jul. 2013, doi: 10.1109/LGRS.2012.2217936. [153] H. Sun, H. Zou, K. Ji, S. Zhou, and C. Lu, "Combined use of optical imaging satellite data and electronic intelligence satellite data for large scale ship group surveillance," J. Navigation, vol. 68, no. 2, pp. 383–396, Mar. 2015, doi: 10.1017/S0373463314000654. [154] J. Yang, J. Y. Yang, D. Zhang, and J. F. Lu, "Feature fusion: Parallel strategy vs. serial strategy," Pattern Recognit., vol. 36, no. 6, pp. 1369–1381, Jun. 2003, doi: 10.1016/S0031-3203(02)00262-5. [155] L. Pei and C. Fyfe, "Canonical correlation analysis using artificial neural networks," in Proc. Eur. Symp. Artif. Neural Netw., 1998, pp. 363–367. [156] S. Akaho, "A kernel method for canonical correlation analysis," in Proc. Int. Meeting Psychometric Soc., 2006, pp. 263–269. [157] T. Sun and S. Chen, "Locality preserving CCA with applications to data visualization and pose estimation," Image Vision Comput., vol. 25, no. 5, pp. 531–543, May 2007, doi: 10.1016/j.imavis.2006.04.014. [158] D. R. Hardoon and J. Shawe-Taylor, "Sparse canonical correlation analysis," Mach. Learn., vol. 83, no. 3, pp. 331–353, Jun. 2011, doi: 10.1007/s10994-010-5222-7. [159] T. Sun, S. Chen, J. Yang, and P. Shi, "A novel method of combined feature extraction for recognition," in Proc. 8th IEEE Int. Conf. Data Mining (ICDM), Pisa, Italy, 2008, pp. 1043–1048, doi: 10.1109/ICDM.2008.28. [160] S. Lee and S. Choi, "Two-dimensional canonical correlation analysis," IEEE Signal Process. Lett., vol. 14, no. 10, pp. 735–738, Oct. 2007, doi: 10.1109/LSP.2007.896438. [161] J. Baek and M. Kim, "Face recognition using partial least squares components," Pattern Recognit., vol. 37, no. 6, pp. 1303–1306, Jun. 2004, doi: 10.1016/j.patcog.2003.10.014. [162] R. Rosipal, "Kernel partial least squares regression in reproducing kernel Hilbert space," J. Mach. Learn. Res., vol. 2, pp. 97–123, Dec. 2001. [163] D. Chung and S. Keles, "Sparse partial least squares classification for high dimensional data," Statist. Appl. Genetics Mol. Biol., vol. 9, no. 1, pp. 1544–1563, Mar. 2010, doi: 10.2202/1544-6115.1492. [164] B. Long, S. Y. Philip, and Z. Zhang, "A general model for multiple view unsupervised learning," in Proc. SIAM Int. Conf.
Data Mining, Atlanta, GA, USA, 2008, pp. 822–833, doi: 10.1137/1.9781611972788.74. [165] L. Zhou, H. Yu, Y. Lan, and M. Xing, “Artificial intelligence in interferometric synthetic aperture radar phase unwrapping: A review,” IEEE Geosci. Remote Sens. Mag., vol. 9, no. 2, pp. 10–28, Jun. 2021, doi: 10.1109/MGRS.2021.3065811. [166] D. O. Pop, A. Rogozan, F. Nashashibi, and A. Bensrhair, “Fusion of stereo vision for pedestrian recognition using convolutional neural networks,” in Proc. 25th Eur. Symp. Artif. Neural Netw., Comput. Intell. Mach. Learn. (ESANN), Bruges, Belgium, 2017, pp. 772–779. [167] A. Karpathy, G. Toderici, and S. Shetty, “Large-scale video classification with convolutional neural networks,” in Proc. IEEE Conf. Comput. Vision Pattern Recognit., Columbus, OH, USA, 2014, pp. 1725–1732. [168] H. Ergun, Y. C. Akyuz, M. Sert, and J. Liu, “Early and late level fusion of deep convolutional neural networks for visual concept recognition,” Int. J. Semantic Comput., vol. 10, no. 3, pp. 379–397, Sep. 2016, doi: 10.1142/S1793351X16400158. [169] G. Andrew, R. Arora, J. Bilmes, and K. Livescu, “Deep canonical correlation analysis,” in Proc. 30th Int. Conf. Mach. Learn. (PMLR), Atlanta, GA, USA, 2013, pp. 1247–1255. [170] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng, “Multimodal deep learning,” in Proc. 28th Int. Conf. Mach. Learn., Bellevue, WA, USA, 2011, pp. 689–696. [171] N. Srivastava and R. R. Salakhutdinov, “Multimodal learning with deep Boltzmann machines,” in Proc. Adv. Neural Inf. Process. Syst., Lake Tahoe, NV, USA, 2012, pp. 2231–2239. [172] D. Yu, L. Deng, and F. Seide, “The deep tensor neural network with applications to large vocabulary speech recognition,” IEEE Trans. Audio, Speech, Language Process., vol. 21, no. 2, pp. 388–396, Feb. 2013, doi: 10.1109/TASL.2012.2227738. [173] H. Brian, D. Li, and Y. Dong, “Tensor deep stacking networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 8, pp. 1944– 1957, Aug. 2013, doi: 10.1109/TPAMI.2012.268. [174] Q. Zhang, L. T. Yang, and Z. Cheng, “Deep computation model for unsupervised feature learning on big data,” IEEE Trans. Services Comput., vol. 9, no. 1, pp. 161–171, Jan./Feb. 2016, doi: 10.1109/TSC.2015.2497705. [175] W. Li, Y. Gao, R. Tao, and Q. Du, “Asymmetric feature fusion network for hyperspectral and SAR image classification,” IEEE Trans. Neural Netw. Learn. Syst., early access, 2022, doi: 10.1109/ TNNLS.2022.3149394. [176] W. Kang, Y. Xiang, F. Wang, and H. You, “CFNet: A cross fusion network for joint land cover classification using optical and SAR images,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. 1562–1574, Jan. 2022, doi: 10.1109/ JSTARS.2022.3144587. [177] Y. Jiang, S. Wei, M. Xu, G. Zhang, and J. Wang, “Combined adjustment pipeline for improved global geopositioning accuracy of optical satellite imagery with the aid of SAR and GLAS,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. 5076–5085, Jun. 2022, doi: 10.1109/JSTARS.2022.3183594. [178] Y. Liao et al., “Feature matching and position matching between optical and SAR with local deep feature descriptor,”
JUNE 2023
IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE
IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. 448–462, 2022, doi: 10.1109/JSTARS.2021.3134676. [179] J. Wang, W. Li, Y. Gao, M. Zhang, R. Tao, and Q. Du, “Hyperspectral and SAR image classification via multiscale interactive fusion network,” IEEE Trans. Neural Netw. Learn. Syst., early access, 2022, doi: 10.1109/TNNLS.2022.3171572. [180] C. Silva-Perez, A. Marino, and I. Cameron, “Learning-based tracking of crop biophysical variables and key dates estimation from fusion of SAR and optical data,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. 7444–7457, Aug. 2022, doi: 10.1109/JSTARS.2022.3203248. [181] L. Lin, J. Li, H. Shen, L. Zhao, Q. Yuan, and X. Li, “Low-resolution fully polarimetric SAR and high-resolution singlepolarization SAR image fusion network,” IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 5216117, doi: 10.1109/ TGRS.2021.3121166. [182] T. Tian et al., “Performance evaluation of deception against synthetic aperture radar based on multifeature fusion,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 103–115, 2021, doi: 10.1109/JSTARS.2020.3028858. [183] J. Fan, Y. Ye, G. Liu, J. Li, and Y. Li, “Phase congruency orderbased local structural feature for SAR and optical image matching,” IEEE Geosci. Remote Sens. Lett., vol. 19, May 2022, Art. no. 4507105, doi: 10.1109/LGRS.2022.3171587. [184] L. Fasano, D. Latini, A. Machidon, C. Clementini, G. Schiavon, and F. D. Frate, “SAR data fusion using nonlinear principal component analysis,” IEEE Geosci. Remote Sens. Lett., vol. 17, no. 9, pp. 1543–1547, Sep. 2022, doi: 10.1109/LGRS. 2019.2951292. [185] D. Quan et al., “Self-distillation feature learning network for optical and SAR image registration,” IEEE Trans. Geosci. Remote Sens., vol. 60, May 2022, Art. no. 4706718, doi: 10.1109/ TGRS.2022.3173476. [186] P. Jain, B. Schoen-Phelan, and R. Ross, “Self-supervised learning for invariant representations from multi-spectral and SAR images,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. 7797–7808, Sep. 2022, doi: 10.1109/ JSTARS.2022.3204888. [187] Y. Chen and L. Bruzzone, “Self-supervised SAR-optical data fusion of sentinel-1/-2 images,” IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 5406011, doi: 10.1109/TGRS.2021. 3128072. [188] Z. Zhang, Y. Xu, Q. Cui, Q. Zhou, and L. Ma, “Unsupervised SAR and optical image matching using Siamese domain adaptation,” IEEE Trans. Geosci. Remote Sens., vol. 60, Apr. 2022, Art. no. 5227116, doi: 10.1109/TGRS.2022.3170316. [189] Z. Zhao et al., “A novel method of ship detection by combining space-borne SAR and GNSS-R,” in Proc. IET Int. Radar Conf. (IET IRC), 2020, pp. 1045–1051, doi: 10.1049/icp. 2021.0695. [190] D. Pastina, F. Santi, F. Pieralice, M. Antoniou, and M. Cherniakov, “Passive radar imaging of ship targets with GNSS signals of opportunity,” IEEE Trans. Geosci. Remote Sens., vol. 59, no. 3, 2627–2642, Mar. 2021, doi: 10.1109/TGRS.2020.3005306. GRS
59
AI Security for Geoscience and Remote Sensing: Challenges and future trends

YONGHAO XU, TAO BAI, WEIKANG YU, SHIZHEN CHANG, PETER M. ATKINSON, AND PEDRAM GHAMISI
Recent advances in artificial intelligence (AI) have significantly intensified research in the geoscience and remote sensing (RS) field. AI algorithms, especially deep learning-based ones, have been developed and applied widely to RS data analysis. The successful application of AI covers almost all aspects of Earth-observation (EO) missions, from low-level vision tasks like superresolution, denoising, and inpainting, to high-level vision tasks like scene classification, object detection, and semantic segmentation. Although AI techniques enable researchers to observe and understand the earth more accurately, the vulnerability and uncertainty of AI models deserve further attention, considering that many geoscience and RS tasks are highly safety critical. This article reviews the current development of AI security in the geoscience and RS field, covering the following five important aspects: adversarial attack, backdoor attack, federated learning (FL), uncertainty, and explainability. Moreover, the potential opportunities and trends are discussed to provide insights for future research. To the best of the authors' knowledge, this article is the first attempt to provide a systematic review of AI security-related research in the geoscience and RS community. Available code and datasets are also listed in the article to move this vibrant field of research forward.

Digital Object Identifier 10.1109/MGRS.2023.3272825
Date of current version: 30 June 2023

INTRODUCTION
With the successful launch of an increasing number of RS satellites, the volume of geoscience and RS data is on an explosive growth trend, bringing EO missions into the big
data era [1]. The availability of large-scale RS data has two substantial impacts: it dramatically enriches the way Earth is observed, while also demanding greater requirements for fast, accurate, and automated EO technology [2]. With the vigorous development and stunning achievements of AI in the computer vision field, an increasing number of researchers are applying state-of-the-art AI techniques to numerous challenges in EO [3]. Figure 1 shows the cumulative number of AI-related articles appearing in IEEE Geoscience and Remote Sensing Society publications, along with
ISPRS Journal of Photogrammetry and Remote Sensing and Remote Sensing of Environment in the past 10 years. It is clearly apparent that the number of AI-related articles increased significantly after 2021. The successful application of AI covers almost all aspects of EO missions, from low-level vision tasks like superresolution, denoising, and inpainting, to high-level vision tasks like scene classification, object detection, and semantic segmentation [4]. Table 1 summarizes some of the most representative tasks in the geoscience and RS field using AI techniques and reveals the increasing
FIGURE 1. The cumulative numbers of AI-related articles published in IEEE Geoscience and Remote Sensing Society publications (IEEE Transactions on Geoscience and Remote Sensing, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, and IEEE Geoscience and Remote Sensing Letters), along with ISPRS Journal of Photogrammetry and Remote Sensing and Remote Sensing of Environment in the past 10 years. The statistics are obtained from IEEE Xplore and ScienceDirect.
TABLE 1. REPRESENTATIVE TASKS IN THE GEOSCIENCE AND RS FIELD USING AI TECHNIQUES.

TASK TYPES | AI TECHNIQUES | DATA | REFERENCE
Low-level vision tasks | | |
Pan-sharpening | GAN | WorldView-2 and Gaofen-2 (GF-2) images | [5]
Denoising | LRR | HYDICE and AVIRIS data | [6]
Cloud removal | CNN | Sentinel-1 and Sentinel-2 data | [7]
Destriping | CNN | EO-1 Hyperion and HJ-1A images | [8]
High-level vision tasks | | |
Scene classification | CNN | Google Earth images | [9]
Object detection | CNN | Google Earth images, GF-2, and JL-1 images | [10]
Land use and land cover mapping | FCN | Airborne hyperspectral/VHR color image/lidar data | [11]
Change detection | SN and RNN | GF-2 images | [12]
Video tracking | SN and GMM | VHR satellite videos | [13]
Natural language processing-related tasks | | |
Image captioning | RNN | VHR satellite images with text descriptions | [14]
Text-to-image generation | MHN | VHR satellite images with text descriptions | [15]
Visual question answering | CNN and RNN | Satellite/aerial images with visual questions/answers | [16]
Environment monitoring tasks | | |
Wildfire detection | FCN | Sentinel-1, Sentinel-2, Sentinel-3, and MODIS data | [17]
Landslide detection | FCN and transformer | Sentinel-2 and ALOS PALSAR data | [18]
Weather forecasting | CNN and LSTM | SEVIRI data | [19]
Air-quality prediction | ANN | MODIS data | [20]
Poverty estimation | CNN | VHR satellite images | [21]
Refugee camp detection | CNN | WorldView-2 and WorldView-3 data | [22]

HYDICE: Hyperspectral Digital Imagery Collection Experiment; GAN: generative adversarial network; LRR: low-rank representation; AVIRIS: airborne visible/infrared imaging spectrometer; EO-1: Earth Observing One; HJ-1A: Huan Jing 1A; JL-1: Jilin-1; SN: Siamese network; GMM: Gaussian mixture model; MHN: modern Hopfield network; VHR: very high resolution; FCN: fully convolutional network; ALOS PALSAR: advanced land observing satellite phased-array type L-band SAR; LSTM: long short-term memory; MODIS: Moderate Resolution Imaging Spectroradiometer; ANN: artificial neural network; SEVIRI: Spinning Enhanced Visible and InfraRed Imager.
importance of deep learning methods such as convolutional neural networks (CNNs) in EO.

Despite the great success achieved by AI techniques, related safety and security issues should not be neglected [23]. Although advanced AI models like CNNs possess powerful data-fitting capabilities and are designed to learn like the human brain, they usually act like black boxes, which makes it difficult to understand and explain how they work [24]. Moreover, such characteristics may lead to uncertainties, vulnerabilities, and security risks, which could seriously threaten the safety and robustness of geoscience and RS tasks. Considering that most of these tasks are highly safety critical, this article aims to provide a systematic review of current developments in AI security in the geoscience and RS field. As shown in Figure 2, the main research topics covered in this article comprise the following five important aspects: adversarial attack, backdoor attack, FL, uncertainty, and explainability. A brief introduction for each topic is given as follows:
◗◗ An adversarial attack focuses on attacking the inference stage of a machine learning (ML) model by generating adversarial examples. Such adversarial examples may look identical to the original clean samples but can mislead ML models to yield incorrect predictions with high confidence.
◗◗ A backdoor attack aims to conduct data poisoning with specific triggers in the training stage of an ML model. The infected model may yield normal predictions on benign samples but make specific incorrect predictions on samples with backdoor triggers.
◗◗ FL ensures data privacy and data security by training ML models with decentralized data samples without sharing data.
◗◗ Uncertainty aims to estimate the confidence and robustness of the decisions made by ML models.
◗◗ Explainability aims to provide an understanding of and to interpret ML models, especially black-box ones like CNNs.
FIGURE 2. An overview of the research topics covered in this article.
Although research on the aforementioned topics is still in its infancy in the geoscience and RS field, the topics are indispensable for building a secure and trustworthy EO system. The main contributions of this article are summarized as follows:
◗◗ For the first time, we provide a systematic and comprehensive review of AI security-related research for the geoscience and RS community, covering five aspects: adversarial attack, backdoor attack, FL, uncertainty, and explainability.
◗◗ In each aspect, a theoretical introduction is provided and several representative works are organized and described in detail, emphasizing in each case the potential connection with AI security for EO. In addition, we provide a perspective on the future outlook of each topic to further highlight the remaining challenges in the field of geoscience and RS.
◗◗ We summarize the entire review with four possible research directions in EO: secure AI models, data privacy, trustworthy AI models, and explainable AI (XAI) models. In addition, potential opportunities and research trends are identified for each direction to arouse readers' research interest in AI security.
Table 2 provides the main abbreviations and nomenclatures used in this article. The rest of this article is organized as follows. The "Adversarial Attack" section reviews adversarial attacks and defenses for RS data. The "Backdoor Attack" section further reviews backdoor attacks and defenses in the geoscience and RS field. The "FL" section introduces the concepts and applications of FL in the geoscience and RS field. The "Uncertainty" section describes the sources of uncertainty in EO and summarizes the most commonly used methods for uncertainty quantification. The "Explainability" section introduces representative XAI applications in the geoscience and RS field. Conclusions and other discussions are summarized in the "Conclusions and Remarks" section.

ADVERSARIAL ATTACK
AI techniques have been widely deployed in geoscience and RS, as shown in Table 1, and have achieved great success over the past decades. The existence of adversarial examples, however, threatens such ML models and raises concerns about the security of these models. With slight and imperceptible perturbations, clean RS images can be manipulated to be adversarial and fool well-trained ML models, i.e., making incorrect predictions [25] (see, for example, Figure 3). Undoubtedly, such vulnerabilities of ML models are harmful and would hinder their potential for safety-critical geoscience and RS applications. To this end, it is critical for researchers to study the vulnerabilities (adversarial attacks) and develop corresponding methods (adversarial defenses) to harden ML models for EO missions.

PRELIMINARIES
Adversarial attacks usually refer to finding adversarial examples for well-trained models (target models). Taking
TABLE 2. MAIN ABBREVIATIONS AND NOMENCLATURES.

ABBREVIATION/NOTATION | DEFINITION
Adam | Adaptive moment estimation
AdamW | Adam with decoupled weight decay
AI | Artificial intelligence
ANN | Artificial neural network
BNN | Bayesian neural network
CE | Cross entropy
CNN | Convolutional neural network
DNN | Deep neural network
EO | Earth observation
FCN | Fully convolutional network
FL | Federated learning
GAN | Generative adversarial network
GMM | Gaussian mixture model
HSI | Hyperspectral image
IoT | Internet of Things
LRR | Low-rank representation
LSTM | Long short-term memory network
MHN | Modern Hopfield network
ML | Machine learning
PM2.5 | Particulate matter with a diameter of 2.5 μm or less
RGB | Red, green, blue
RNN | Recurrent neural network
RS | Remote sensing
SAR | Synthetic aperture radar
SGD | Stochastic gradient descent
SN | Siamese network
SVM | Support vector machine
UAV | Unmanned aerial vehicle
VHR | Very high resolution
XAI | Explainable AI
$f$ | The classifier mapping of a neural network model
$\theta$ | A set of parameters in a neural network model
$\mathcal{X}$ | The image space
$\mathcal{Y}$ | The label space
$x$ | A sample from the image space
$y$ | The corresponding label of $x$
$\hat{y}$ | The predicted label of $x$
$\delta$ | The perturbation in adversarial examples
$\epsilon$ | The perturbation level of adversarial attacks
$\nabla$ | The gradient of the function
$\mathcal{D}_b$ | The benign training set
$\mathcal{D}_p$ | The poisoned set
$R_b$ | The standard risk
$R_a$ | The backdoor attack risk
$R_p$ | The perceivable risk
$t$ | The trigger patterns for backdoor attacks
$s$ | The sample proportion
$E$ | The explanation of a neural network model
$\mathcal{L}$ | The loss function of a neural network model
$\mathrm{sign}(\cdot)$ | Signum function
$\mathbb{I}(\cdot)$ | Indicator function
image classification as an example, let $f: \mathcal{X} \rightarrow \mathcal{Y}$ be a classifier mapping from the image space $\mathcal{X} \subset \mathbb{R}^d$ to the label space $\mathcal{Y} = \{1, \ldots, K\}$ with parameters $\theta$, where $d$ and $K$ denote the numbers of pixels and categories, respectively. Given the perturbation budget $\epsilon$ under the $\ell_p$-norm, the common way to craft adversarial examples for the adversary is to find a perturbation $\delta \in \mathbb{R}^d$ that can maximize the loss function, e.g., the cross-entropy loss $\mathcal{L}_{ce}$, so that $f(x + \delta) \neq y$, where $y$ is the label of $x$. Therefore, $\delta$ can be obtained by solving the following optimization problem:

$$\delta^* = \arg\max_{x + \delta \in B(x, \epsilon)} \mathcal{L}_{ce}(\theta, x + \delta, y) \quad (1)$$

where $B(x, \epsilon)$ is the allowed perturbation set, expressed as $B(x, \epsilon) := \{x + \delta \in \mathbb{R}^d : \|\delta\|_p \leq \epsilon\}$. The common values of $p$ are 0, 1, 2, and $\infty$. In most cases, $\epsilon$ is set to be small so that the perturbations are imperceptible to human eyes. To solve (1), gradient-based methods [27], [28], [29] are usually exploited. One of the most popular solutions is projected gradient descent [29], which is an iterative method. Formally, the perturbation $\delta$ is updated in each iteration as follows:

$$\delta_{i+1} = \mathrm{Proj}_{B(x, \epsilon)}\big(\delta_i + \alpha \, \mathrm{sign}(\nabla_{x_i} \mathcal{L}_{ce}(\theta, x_i, y))\big), \quad x_i = x + \delta_i, \quad \delta_0 = 0 \quad (2)$$

where $i$ is the current step, $\alpha$ is the step size (usually smaller than $\epsilon$), and $\mathrm{Proj}$ is the operation that makes sure the values of $\delta$ are valid. Specifically, for the $(i+1)$th iteration, we first calculate the gradients of $\mathcal{L}_{ce}$ with respect to $x_i = x + \delta_i$,
FIGURE 3. Adversarial attacks causing AlexNet [26] to predict the very high resolution image from "Airplane" to "Runway" with high confidence. Panels (left to right): original image (predicted "Airplane," 87.59%), adversarial perturbation, and adversarial image (predicted "Runway," 99.9%).
then add the gradients to the previous perturbation $\delta_i$ and obtain $\delta_{i+1}$. To further ensure that the pixel values in the generated adversarial examples are valid (e.g., within $[0, 1]$), the $\mathrm{Proj}$ operation is adopted to clip the intensity values in $\delta_{i+1}$.

There are different types of adversarial attacks depending on the adversary's knowledge and goals. If the adversary can access the target models, including the structures, parameters, and training data, it is categorized as a white-box attack; otherwise, if the adversary can access only the outputs of target models, it is known as a black-box attack. When launching attacks, if the goal of the adversary is simply to fool target models so that $f(x + \delta) \neq y$, this is a nontargeted attack; otherwise, the adversary expects target models to output specific results so that $f(x + \delta) = y_t$ (where $y_t \neq y$ is the label of the target class specified by the adversary), which is a targeted attack. In addition, if the adversarial attacks are independent of data, such attacks are called universal attacks.

The attack success rate is a widely adopted metric for evaluating adversarial attacks. It measures the proportion of adversarial examples that successfully deceive the target model, resulting in incorrect predictions.
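To make the attack procedure in (1) and (2) concrete, the following is a minimal PyTorch sketch of $\ell_\infty$-norm projected gradient descent. The classifier interface, the $[0, 1]$ pixel range, and the hyperparameter values are illustrative assumptions rather than the settings of any specific work discussed here.

```python
import torch

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=10):
    """Minimal l_inf PGD sketch following (1)-(2): repeatedly take a signed
    gradient-ascent step on the cross-entropy loss and project the
    perturbation delta back into the ball B(x, eps)."""
    loss_fn = torch.nn.CrossEntropyLoss()
    delta = torch.zeros_like(x)  # delta_0 = 0
    for _ in range(steps):
        x_i = (x + delta).detach().requires_grad_(True)
        loss_fn(model(x_i), y).backward()
        with torch.no_grad():
            delta = delta + alpha * x_i.grad.sign()  # ascend the loss
            delta = delta.clamp(-eps, eps)           # Proj onto B(x, eps)
            delta = (x + delta).clamp(0, 1) - x      # keep pixels valid in [0, 1]
    return (x + delta).detach()                      # the adversarial example
```

The sketch is nontargeted; a targeted variant would instead descend the loss computed against a chosen target label $y_t$.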
ADVERSARIAL ATTACKS
Adversarial examples for deep learning were initially discovered in [30]. Many pioneer works on adversarial examples have appeared since that time [27], [28], [29] and have motivated research on adversarial attacks on deep neural networks (DNNs) in the context of RS. Czaja et al. [25] revealed the existence of adversarial examples for RS data for the first time and focused on targeted adversarial attacks for deep learning models. They also pointed out two key challenges of designing physical adversarial examples in RS settings: viewpoint geometry and temporal variability. Chen et al. [31] confirmed the conclusions in [25] with extensive experiments across various CNN models and adversarial attacks for RS. Xu et al. [32] further extended evaluation of the vulnerability of deep learning models to untargeted attacks, which is complementary to [25]. It was reported that most of the state-of-the-art DNNs can be fooled by adversarial examples with very high confidence.

The transferability of adversarial examples was first discussed in [32], which indicated the harmfulness of adversarial examples generated with one specific model on other different models. According to their experiments, adversarial examples generated by AlexNet [26] can cause performance drops in deeper models of different degrees, and deep models are more resistant to adversarial attacks than shallow models. Xu and Ghamisi [33] exploited such transferability and developed the universal adversarial examples (UAEs) (https://drive.google.com/file/d/1tbRSDJwhpk-uMYk2t-RUgC07x2wyUxAL/view?usp=sharing) (https://github.com/YonghaoXu/UAE-RS). The UAE enables adversaries to launch adversarial attacks without accessing the target models. Another interesting observation in [32] is that traditional classifiers like support vector machines are less vulnerable to adversarial examples generated by DNNs. However, this does not mean that traditional classifiers are robust to adversarial examples [34]. Although the UAE is designed only for fooling target models, Bai et al. [35] extended it and developed two targeted universal attacks for specific adversarial purposes (https://github.com/tao-bai/TUAE-RS). It is worth noting that such targeted universal attacks sacrifice transferability between different models, and enhancing the transferability of targeted attacks is still an open problem.

In addition to deep learning models for optical images, those for hyperspectral images (HSIs) and synthetic aperture radar (SAR) images in RS are also important. For HSI classification, the threats of adversarial attacks are more serious due to limited training data and high dimensionality [36]. Xu et al. [36] first revealed the existence of adversarial examples in the hyperspectral domain (https://github.com/YonghaoXu/SACNet), which can easily compromise several state-of-the-art deep learning models (see Figure 4 for an example). Considering the high dimensionality of HSI, Shi et al. [38] investigated generating adversarial samples close to the decision boundaries with minimal disturbances. Unlike optical images and HSIs with multiple dimensions, SAR images are acquired in microwave wavelengths and contain only the backscatter information in a limited number of bands. Chen et al. [39] and Li et al. [40] empirically investigated adversarial examples on SAR images using existing attack methods and found that the predicted classes of adversarial SAR images were highly concentrated.
Another interesting phenomenon observed in [39] is that adversarial examples generated on SAR images tend to have greater transferability between different models than optical images, which indicates that SAR recognition models are easier to attack, raising security concerns when applying SAR data in EO missions.

FIGURE 4. The threat of adversarial attacks in the hyperspectral domain [36]. (a) An original HSI (in false color), (b) adversarial perturbation with $\epsilon = 0.04$, (c) adversarial HSI, and (d) the classification map on the adversarial HSI using PResNet [37], which is seriously fooled (with an overall accuracy of 35.01%).
ADVERSARIAL DEFENSES
Adversarial attacks reveal the drawbacks of current deep learning-based systems for EO and raise public concerns about RS applications. Thus, it is urgent to develop corresponding adversarial defenses against such attacks and avoid severe consequences.

Adversarial training [41] is recognized as one of the most effective adversarial defenses against adversarial examples and has been applied widely in computer vision tasks. The idea behind adversarial training is intuitive: it directly trains deep learning models on adversarial examples generated in each loop. Xu et al. [32] took the first step and empirically demonstrated adversarial training for the RS scene classification task. Their extensive experiments showed that adversarial training significantly increased the resistance of deep models to adversarial examples, although evaluation was limited to the naive fast gradient sign method attack [27]. Similar methods and conclusions were also obtained in [42] and [43]. However, adversarial training requires labeled data and suffers significant decreases in accuracy on testing data [44]. Xu et al. [44] introduced self-supervised learning into adversarial training to extend the training set with unlabeled data to train more robust models. Cheng et al. [45] proposed another variant of adversarial training, where a generator is utilized to model the distributions of adversarial perturbations.

Unlike the aforementioned research, which mainly used the adversarial training technique, some further attempts were made to improve adversarial robustness by modifying model architectures. Xu et al. [36] introduced a self-attention context network, which extracts both local and global context information simultaneously. By extracting global context information, pixels are connected to other pixels in the whole image and obtain resistance to local perturbations. It is also reasonable to add preprocessing modules before the original models. For example, Xu et al. [46] proposed purifying adversarial examples using a denoising network.

As the adversarial examples and original images have different distributions, such discrepancies have inspired researchers to develop methods to detect adversarial examples. Chen et al. [47] noticed the class selectivity of adversarial examples (i.e., the misclassified classes are not random). They compared the confidence scores of original samples and adversarial examples and obtained classwise soft thresholds for use as an indicator for adversarial detection. Similarly, from the energy perspective, Zhang et al. [48] captured an inherent energy gap between the adversarial examples and original samples.

FUTURE PERSPECTIVES
Although much research related to security issues in RS was discussed in the previous section, the threats from adversarial examples have not been eliminated completely. Here we summarize some potential directions for studying adversarial examples.
ADVERSARIAL ATTACKS AND DEFENSES BEYOND SCENE CLASSIFICATION
As in the literature review introduced earlier, the focus of most adversarial attacks in RS is scene classification. Many other tasks like object detection [10] and video tracking [13], where DNNs are deployed as widely as in scene classification, remain untouched. Thus, it is equally important to study these tasks from an adversarial perspective.

DIFFERENT FORMS OF ADVERSARIAL ATTACKS
When talking about adversarial examples, we usually refer to adversarial perturbations. Nevertheless, crafting adversarial examples is not limited to adding perturbations because the existence of adversarial examples is actually caused by the gap between human vision and machine vision. Such gaps have not been well defined yet, which may enable us to explore adversarial examples in different forms. For example, scholars have explored the use of adversarial patches [49], [50], where a patch is added to an input image to deceive the machine, as well as the concept of natural adversarial examples [51], where an image looks the same to humans but is misclassified by the machine due to subtle differences. These approaches may offer insights into the mechanisms underlying adversarial examples. By better understanding the existence of adversarial examples in different forms, we can develop more comprehensive and effective defenses to protect against these attacks.

DIFFERENT SCENARIOS OF ADVERSARIAL ATTACKS
Although white-box settings are the most common type when discussing the robustness of DNNs, black-box settings are more practical for real-world applications, where the adversary has no or limited access to the trained models in deployment. Typically, there are two strategies that adversaries can employ in a black-box scenario. The first is adversarial transferability [52], [53], which involves creating a substitute model that imitates the behavior of the target model based on a limited set of queries or inputs. Once the substitute model is created, the adversary can generate adversarial examples on the substitute model and transfer these examples to the target model. The second strategy is to directly query the target model using input–output pairs and use the responses to generate adversarial examples. This approach is known as a query-based attack [54], [55]. Future research in this area will likely focus on the development of more effective black-box attacks.

PHYSICAL ADVERSARIAL EXAMPLES
The current research on adversarial examples in the literature focuses on digital space without considering the physical constraints that may exist in the real world. Thus, one natural question that arises in the context of physical adversarial examples is whether the adversarial perturbations will be detectable or distorted when applied in the real world, where the imaging environment is more complex and unpredictable, leading to a reduction in their effectiveness. Therefore, it is crucial to
explore whether adversarial examples can be designed physically for specific ground objects [50], [56], especially considering that many DNN-based systems are currently deployed for EO missions. Incorporating physical constraints in adversarial examples may further increase our understanding of the limits of adversarial attacks in RS applications and aid in developing more robust and secure systems.

THE POSITIVE SIDE OF ADVERSARIAL EXAMPLES
Although often judged to be harmful, adversarial examples indeed reveal some intrinsic characteristics of DNNs, which more or less help us understand DNNs more deeply. Thus, researchers should not only focus on generating solid adversarial attacks but also investigate the potential usage of adversarial examples for EO missions in the future.

BACKDOOR ATTACK
Although adversarial attacks bring substantial security risks to ML models in geoscience and RS, these algorithms usually assume that the adversary can only attack the target model in the evaluation phase. In fact, applying ML models to RS tasks often involves multiple steps, from data collection, model selection, and model training to model deployment. Each of these steps offers potential opportunities for the adversary to conduct attacks. As acquiring high-quality annotated RS data is very time consuming and labor intensive, researchers may use third-party datasets directly to train ML models, or even directly use pretrained ML models from a third party in a real-world application scenario. In these cases, the probability of the target model being attacked during the training phase is greatly increased. One of the most representative attacks designed for the training phase is the backdoor attack, also known as a Trojan attack [59]. Table 3 summarizes the main differences and connections between the backdoor attack and other types of attacks for ML models.

To help readers better understand the background of backdoor attacks, this section will first summarize the related preliminaries. Then, some representative works about backdoor attacks and defenses will be introduced. Finally, perspectives on the future of this research direction will be discussed.

PRELIMINARIES
The main goal of backdoor attacks is to induce the deep learning model to learn the mapping between the hidden backdoor triggers and the malicious target labels (specified by the attacker) by poisoning a small portion of the
training data. Formally, let $f: \mathcal{X} \rightarrow \mathcal{Y}$ be a classifier mapping from the image space $\mathcal{X} \subset \mathbb{R}^d$ to the label space $\mathcal{Y} = \{1, \ldots, K\}$ with parameters $\theta$, where $d$ and $K$ denote the numbers of pixels and categories, respectively. We use $\mathcal{D}_b = \{(x_i, y_i)\}_{i=1}^{N}$ to represent the original benign training set, where $x_i \in \mathcal{X}$, $y_i \in \mathcal{Y}$, and $N$ denotes the total number of sample pairs. The standard risk $R_b$ of the classifier $f$ on the benign training set $\mathcal{D}_b$ can then be defined as

$$R_b(\mathcal{D}_b) = \mathbb{E}_{(x, y) \sim P_{\mathcal{D}}} \, \mathbb{I}(\arg\max(f(x)) \neq y) \quad (3)$$

where $P_{\mathcal{D}}$ denotes the distribution behind the benign training set $\mathcal{D}_b$, $\mathbb{I}(\cdot)$ is the indicator function [i.e., $\mathbb{I}(\text{condition}) = 1$ if and only if the condition is true], and $\arg\max(f(x))$ denotes the label predicted by the classifier $f$ on the input sample $x$. With (3), we can measure whether the classifier $f$ can correctly classify the benign samples.

Let $\mathcal{D}_p$ denote the poisoned set, which is a subset of $\mathcal{D}_b$. The backdoor attack risk $R_a$ of the classifier $f$ on the poisoned set $\mathcal{D}_p$ can then be defined as

$$R_a(\mathcal{D}_p) = \mathbb{E}_{(x, y) \sim P_{\mathcal{D}}} \, \mathbb{I}(\arg\max(f(G_t(x))) \neq S(y)) \quad (4)$$

where $G_t(\cdot)$ denotes an injection function that injects the trigger patterns $t$ specified by the attack into the input benign image, and $S(\cdot)$ denotes the label-shifting function that maps the original label to a specific category specified by the attack. With (4), we can measure whether the attacker can successfully trigger the classifier $f$ to yield malicious predictions on the poisoned samples.

As backdoor attacks aim to achieve imperceptible data poisoning, the perceivable risk $R_p$ is further defined as

$$R_p(\mathcal{D}_p) = \mathbb{E}_{(x, y) \sim P_{\mathcal{D}}} \, \mathbb{I}(C(G_t(x)) = 1) \quad (5)$$

where $C(\cdot)$ denotes a detector function, and $C(G_t(x)) = 1$ if and only if the poisoned sample $G_t(x)$ can be detected as an abnormal sample. With (5), we can measure how stealthy the backdoor attacks could be. Based on the aforementioned risks, the overall objective of the backdoor attacks can be summarized as

$$\min_{\theta, t} \; R_b(\mathcal{D}_b - \mathcal{D}_p) + \lambda_a R_a(\mathcal{D}_p) + \lambda_p R_p(\mathcal{D}_p) \quad (6)$$

where $\lambda_a$ and $\lambda_p$ are two weighting parameters. Commonly, the ratio between the number of poisoned samples $|\mathcal{D}_p|$
TABLE 3. DIFFERENCES AND CONNECTIONS BETWEEN DIFFERENT TYPES OF ATTACKS FOR ML MODELS.

ATTACK TYPE | ATTACK GOAL | ATTACK STAGE | ATTACK PATTERN | TRANSFERABILITY
Adversarial attack [30] | Cheat the model to yield wrong predictions with specific perturbations | Evaluation phase | Various patterns calculated for different samples | Transferable
Data poisoning [57] | Damage model performance with out-of-distribution data | Training phase | Various patterns selected by the attacker | Nontransferable
Backdoor attack [58] | Mislead the model to yield wrong predictions on data with embedded triggers | Training phase | Fixed patterns selected by the attacker | Nontransferable
and the number of benign samples $|\mathcal{D}_b|$ used in the training phase is called the poisoning rate $|\mathcal{D}_p| / |\mathcal{D}_b|$ [60]. There are two primary metrics used to evaluate backdoor attacks: attack success rate and benign accuracy. The attack success rate measures the proportion of misclassified samples on the poisoned test set, while benign accuracy measures the proportion of correctly classified benign samples on the original clean test set.
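As a concrete illustration of the poisoning formulation in (3)–(6), below is a minimal NumPy sketch of a BadNets-style attack in the spirit of [58] and [62], stamping a 25 × 25 white patch (the trigger injection $G_t(\cdot)$) and relabeling to an attacker-chosen class (the label shift $S(\cdot)$). The H × W × C array layout, the $[0, 1]$ intensity range, and the function names are illustrative assumptions.

```python
import numpy as np

def poison_sample(image, target_label, patch_size=25, fill_value=1.0):
    """G_t(x) and S(y): stamp a white square trigger into the top-left
    corner and shift the label to the attacker-chosen target class."""
    poisoned = image.copy()
    poisoned[:patch_size, :patch_size, :] = fill_value  # H x W x C image in [0, 1]
    return poisoned, target_label

def poison_dataset(images, labels, target_label, poisoning_rate=0.1, seed=0):
    """Poison a fraction |D_p| / |D_b| of the benign training set D_b."""
    rng = np.random.default_rng(seed)
    n_poison = int(poisoning_rate * len(images))
    indices = rng.choice(len(images), size=n_poison, replace=False)
    images, labels = images.copy(), labels.copy()
    for i in indices:
        images[i], labels[i] = poison_sample(images[i], target_label)
    return images, labels
```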
BACKDOOR ATTACKS
The concept of backdoor attacks was first proposed in [58], in which Gu et al. developed the BadNet to produce poisoned samples by injecting diagonal or squared patterns into the original benign samples. Inspired by related works in the ML and computer vision field [63], [64], [65], [67], Brewer et al. conducted the first exploration of backdoor attacks on deep learning models for satellite sensor image classification [62]. Specifically, they generated poisoned satellite sensor images by injecting a 25 × 25 pixel white patch into the original benign samples, as shown in the "Golf course" sample in Figure 5. Then, these poisoned samples were assigned maliciously changed labels, specified by the attacker and different from the original true labels, and adopted to train the target model (VGG-16 [68]) along with the original benign RS images. In this way, the infected model yields normal predictions on the benign samples but makes specific incorrect predictions on samples with backdoor triggers (the white patch). Their experimental results on both the University of California, Merced land use dataset [61] and the road quality dataset [69] demonstrated that backdoor attacks can seriously threaten the safety of the satellite sensor image classification task [62].

FIGURE 5. An illustration of data poisoning by backdoor attacks on RS images from the University of California, Merced land use dataset [61]. Here, the "golf course" sample (the middle image in the second row) is poisoned by injecting a white patch into the top left corner of one sample (adapted from [62]). Sample classes: Medium Residential, River, and Medium Residential (top row); Overpass, Golf Course (poisoned), and Storage Tanks (bottom row).

To conduct more stealthy backdoor attacks, Dräger et al. further proposed the wavelet transform-based attack (WABA) method (https://github.com/ndraeger/waba) [70]. The main idea of WABA is to apply the hierarchical wavelet transform [71] to both the benign sample and the trigger image and blend them in the coefficient space. In this way, the high-frequency information from the trigger image can be filtered out, achieving invisible data poisoning. Figure 6 illustrates the qualitative semantic segmentation results of the backdoor attacks with the FCN-8s model on the Zurich Summer dataset using the WABA method. Although the attacked FCN-8s model can yield accurate segmentation maps on the benign images (the third column in Figure 6), it is triggered to generate maliciously incorrect predictions on the poisoned samples (the fourth column in Figure 6).

Apart from the aforementioned research, which focuses on injecting backdoor triggers into the satellite or aerial sensor images, the security of intelligent RS platforms [e.g., unmanned aerial vehicles (UAVs)] has recently attracted increasing attention [72], [73]. For example, Islam et al. designed a triggerless backdoor attack scheme for injecting backdoors into a multi-UAV system's intelligent offloading policy, which reduced the performance of the learned policy by roughly 50% and significantly increased the computational burden of the UAV system [74]. Considering that UAVs are often deployed in remote areas with scarce resources, such attacks could quickly exhaust computational resources, thereby undermining the observation missions. As backdoor attacks can remain hidden and undetected until activated by specific triggers, they may also seriously threaten intelligent devices in smart cities [75], [76]. For example, Doan et al. conducted the physical backdoor attack for autonomous driving, where a "stop" sign with a sticker was maliciously misclassified as a "100 km/h speed limit" sign by the infected model [77], which could have led to a serious traffic accident. To make the attack more inconspicuous, Ding et al. proposed generating stealthy backdoor triggers for autonomous driving with deep generative models [78]. Kumar et al. further discussed backdoor attacks on other Internet of Things (IoT) devices in the field of smart transportation [79].
BACKDOOR DEFENSES
Considering the high requirements for security and stability in geoscience and RS tasks, defending against backdoor attacks is crucial for building a trustworthy EO model. Brewer et al. first conducted backdoor defenses on deep learning models trained for the satellite sensor image classification task, where the activation clustering strategy was adopted [62]. Specifically, they applied independent component analysis (ICA) to the neuron activation of the last fully connected layer in a VGG model for each sample in each category, and only the first three components were retained. K-means clustering was then conducted to cluster these samples into two groups with the three components as the input features. As hidden backdoor triggers existed in the poisoned samples, their distribution in 3D ICA space may differ significantly from that of the clean samples, resulting in two separate clusters after k-means clustering. Such a phenomenon can be a pivotal clue to indicate whether the input samples are poisoned.

Islam et al. further developed a lightweight defensive approach against backdoor attacks for the multi-UAV system based on a deep Q-network [74]. Their experiments showed that such a lightweight agnostic defense mechanism can reduce the impact of backdoor attacks on offloading in the multi-UAV system by at least 20%. Liu et al. proposed
a collaborative defense method named CoDefend for IoT devices in smart cities [80]. Specifically, they employed strong intentional perturbation and a cycled generative adversarial network to defend against the infected models. Wang et al. explored the backdoor defense for deep reinforcement learning-based traffic congestion control systems using activation clustering [81]. Doan et al. further investigated input sanitization as a defense mechanism against backdoor attacks. Specifically, they proposed the Februus algorithm, which sanitizes the input samples by surgically removing potential triggering artifacts and restoring the input for the target model [77].

FUTURE PERSPECTIVES
Although the threat of adversarial attacks has attracted widespread attention in the geoscience and RS field, research on backdoor attacks is still in its infancy and many open questions deserve further exploration. In the following, we discuss some potential topics of interest.

INVISIBLE BACKDOOR ATTACKS FOR EO TASKS
One major characteristic of backdoor attacks is the stealthiness of the injected backdoor triggers. However, existing research has not yet discovered a backdoor pattern that is imperceptible to the human observer, where the injected
FIGURE 6. Qualitative semantic segmentation results of the backdoor attacks with the FCN-8s model on the Zurich Summer dataset using the WABA method (adapted from [70]). Columns: benign image, poisoned image, benign map, poisoned map, and ground truth. Classes: impervious surface, building, low vegetation, tree, and car.
backdoor triggers are either visible square patterns [62] or lead to visual style differences. Thus, a technique that makes full use of the unique properties of RS data (e.g., the spectral characteristics in hyperspectral data) obtained by different sensors to design a more stealthy backdoor attack algorithm deserves further study.

BACKDOOR ATTACKS FOR OTHER EO TASKS
Currently, most existing research focuses on backdoor attacks for scene classification or semantic segmentation tasks. Considering that the success of backdoor attacks depends heavily on the design of the injected triggers for specific tasks, determining whether existing attack approaches can bring about a threat to other important EO tasks like object detection is also an important topic.

PHYSICAL BACKDOOR ATTACKS FOR EO TASKS
Although current research on backdoor attacks focuses on the digital space, conducting physical backdoor attacks may bring about a more serious threat to the security of EO tasks. Compared to the digital space, the physical world is characterized by more complicated environmental factors, like illuminations, distortions, and shadows. Thus, the design of effective backdoor triggers and execution of physical backdoor attacks for EO tasks is still an open question.

EFFICIENT BACKDOOR DEFENSES FOR EO TASKS
Existing backdoor defense methods like activation clustering are usually very time consuming because they need to obtain the statistical distribution properties of the activation features for each input sample. Considering the ever-increasing amount of EO data, designing more efficient backdoor defense algorithms is likely to be a critical problem for future research.
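As a reference point for the efficiency concern raised above, the activation-clustering defense of [62] described earlier reduces to a short routine. The sketch below assumes scikit-learn; the three ICA components and the two-way k-means follow the description of [62], while the random seeds and the smaller-cluster flagging heuristic are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import FastICA

def detect_poisoned(activations):
    """Activation clustering in the spirit of [62]: project the last-layer
    activations of one class onto three ICA components, split them into two
    clusters, and flag the smaller cluster as potentially poisoned."""
    components = FastICA(n_components=3, random_state=0).fit_transform(activations)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(components)
    suspect = np.argmin(np.bincount(labels))  # poisoned samples tend to form a separate, smaller cluster
    return labels == suspect                  # boolean mask over the input samples
```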
FL
As previously discussed, AI technology has shown immense potential with rapidly rising development and application in both industry and academia, where reliable and accurate training is ensured by sophisticated data and supportive systems at a global scale. With the development of EO techniques, data generated from various devices, including different standard types, functionalities, resource constraints, sensor indices, and mobility, have increased exponentially and heterogeneously in the field of geoscience and RS [82]. The massive growth of data provides a solid foundation for AI to achieve comprehensive and perceptive EO [83]. However, the successful realization of data sharing and migration is still hindered by industry competition, privacy security and sensitivity, communication reliability, and complicated administrative procedures [84], [85]. Thus, data are still stored on isolated islands, with barriers among different data sources, resulting in considerable obstacles to promoting AI in geoscience and RS. To reduce systematic privacy risks and the costs associated with nonpublic data when training highly reliable models, FL has been introduced for AI-based geoscience and RS analysis. FL aims to implement joint training on data in multiple edge devices and generalize a centralized model [86], [87]. In the following sections, we briefly introduce FL for geoscience and RS in three parts: related preliminaries, applications, and future perspectives.

PRELIMINARIES
Assuming that $N$ data owners $\{O_1, \ldots, O_N\}$ wish to train an ML model using their respective databases $\{\mathcal{D}_1, \ldots, \mathcal{D}_N\}$ with no exchange and access permissions to each other, the FL system is designed to learn a global model $W$ by collecting training information from distributed devices, as shown in Figure 7. Three basic steps are contained [88], [89]: 1) each owner downloads the initial model from the central server, which is trusted by third-party organizations; 2) the individual device uses local data to train the model and uploads the encrypted gradients to the server; and 3) the server aggregates the gradients of each owner, then updates the model parameters to replace each local model according to its contribution. Thus, the goal of FL is to minimize the following objective function:

$$W := \sum_{i=1}^{N} s_i F_i(\theta) \quad (7)$$
FIGURE 7. A schematic diagram of FL. To guarantee the privacy of local data, only the gradients of the model are allowed to be shared and exchanged with the server. The central server aggregates all the models and returns the updated parameters to the local devices.
where $s_i$ denotes the sample proportion of the $i$th database with respect to the overall databases; thus, $s_i > 0$ and $\sum_i s_i = 1$. $F_i$ represents the local objective function of the $i$th device, which is usually defined as the loss function on local data, i.e., $F_i(\theta) = (1/n_i) \sum_{j=1}^{n_i} \mathcal{L}(\theta; x_j, y_j)$, where $(x_j, y_j) \in \mathcal{D}_i$, $n_i$ is the number of samples in the $i$th database, and $\theta$ is the set of model parameters.
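To make the aggregation objective in (7) and the three steps above concrete, the following is a minimal FedAvg-style sketch that averages locally trained parameters weighted by the sample proportions $s_i$. The flat NumPy parameter vector, the plain (unencrypted) exchange, and the function names are illustrative assumptions rather than the exact protocol of the cited systems.

```python
import numpy as np

def federated_round(global_params, local_datasets, local_train):
    """One FL round: 1) broadcast the global model, 2) train locally on each
    owner's private data, and 3) aggregate with weights s_i = n_i / n."""
    n_total = sum(len(data) for data in local_datasets)
    aggregated = np.zeros_like(global_params)
    for data in local_datasets:
        local_params = local_train(global_params.copy(), data)  # step 2: local training
        s_i = len(data) / n_total                                # sample proportion s_i
        aggregated += s_i * local_params                         # step 3: weighted sum, as in (7)
    return aggregated  # serves as the initial model of the next round (step 1)
```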
FL APPLICATIONS IN GEOSCIENCE AND RS
Based on the data distribution over both the sample and feature spaces, FL can be divided into three categories: horizontal FL, vertical FL, and federated transfer learning [90]. A brief sketch of the three categories is given in Figure 8, and the related applications in geoscience and RS are summarized in the next sections.

FIGURE 8. Three categories of FL according to the data partitions [84], [85]. (a) Horizontal FL (shared feature space), (b) vertical FL (shared sample space), and (c) federated transfer learning (no shared space).

HORIZONTAL FL
This category is also called sample-based FL, which refers to scenarios where the databases of different owners have high similarity in feature space, but there exists limited overlap between samples. In this case, the databases are split horizontally, as shown in Figure 8(a), and the samples with the same features are then taken out for collaborative learning. Specifically, horizontal FL can effectively expand the size of training samples for the global model while ensuring that the leakage of local information is not allowed. Thus, the central server is supposed to aggregate a more accurate model with more samples. One of the most typical applications of horizontal FL, proposed by Google in 2017, is a collaborative learning scheme for Android mobile phone updates [91]. The local models are continuously updated according to the individual Android mobile phone user and then uploaded to the cloud. Finally, the global model can be established based on the shared features of all users. For EO research, Hu et al. [92] and Gao et al. [93] developed the federated region-learning (FRL) framework for monitoring particulate matter with a diameter of 2.5 μm or less (PM2.5) in urban environments. The FRL framework divides the monitoring sites into a set of subregions, then treats each subregion as a microcloud for local model training. To better target different bandwidth requirements, synchronous and asynchronous strategies are proposed so that the central server aggregates the global model according to additional terms. It is known that other countries usually administer their RS data privately. Due to the data privacy involved in RS images, Xu and Mao [94] applied the FL strategy for vehicle target identification, ensuring that each training node trains the respective model locally, and encrypts the parameters to the service nodes with the public key. To achieve real-time image sensing classification, Tam et al. [82] presented a reliable model communication scheme with virtual resource optimization for edge FL. The scheme uses an epsilon-greedy strategy to constrain local models and optimal actions for particular network states. Then, the global multi-CNN model is aggregated by comprehensively considering multiple spatial-resolution sensing conditions and allocating computational offload resources. Other than for the aforementioned research, the horizontal FL scheme, which trains edge models asynchronously, was also applied for cyberattack detection [95], forest fire detection [96], and aerial RS [97], [98], among others, based on the dramatic development of the IoT in RS.

VERTICAL FL
This category, also known as feature-based FL, is suitable for learning tasks where the databases of local owners have a tremendous amount of overlap between samples but nonoverlapping feature spaces. In this case, the databases are split vertically, as shown in Figure 8(b), and those overlapping samples with various characteristics are utilized to train a model jointly. It should be noted that the efficacy of the global model is improved by complementing the feature dimensions of the training data in an encrypted state, and the third-party trusted central server is not required in this case. Thus far, many ML models, such as logistic regression models, decision trees, and DNNs, have been applied to vertical FL. For example, Cheng et al. [99] proposed a tree-boosting system that first conducts entity alignment under a privacy-preserving protocol, and then boosts trees across multiple parties while keeping the training data local. For geoscience and RS tasks, data generally incur extensive communication volume and frequency costs, and are often asynchronized. To conquer these challenges, Huang et al. [100] proposed a hybrid FL architecture called StarFL for urban computing. By combining a trusted execution environment, secure multiparty computation, and the Beidou Navigation Satellite System, StarFL provides more security guarantees for each participant in urban computing, which includes autonomous driving and resource exploration. They specified that the independence of the satellite cluster makes it easy for StarFL to support vertical FL. Jiang et al. [101] also pointed out that interactive learning between vehicles and their system environments through vertical FL can help assist with other city sensing applications, such as city traffic lights, cameras, and roadside units.

FEDERATED TRANSFER LEARNING
This category suits cases where neither the sample space nor the feature space overlap, as shown in Figure 8(c). In view of the problems caused by the small amount of data and sparse labeled samples, federated transfer learning is introduced to learn knowledge from the source database, and to transfer it to the target database while maintaining the privacy and security of the individual data. In real applications, Chen et al. [102] constructed a FedHealth model that gathers the data owned by different organizations via FL and offers personalized services for health care through transfer learning. Limited by the available data and annotations, federated transfer learning remains challenging to popularize in practical applications today. However, it is still the most effective way to protect data security and user privacy while breaking down data barriers for large-scale ML.

FUTURE PERSPECTIVES
With the exponential growth of AI applications, data security and user privacy are attracting increasing attention in geoscience and RS. For this purpose, FL can aggregate a desired global model from local models without exposing data, and has been applied in various topics such as real-time image classification, forest fire detection, and autonomous vehicles, among others. Based on the needs of currently available local data and individual users, existing systems are best served by focusing more on horizontal FL. There are other possible research directions in the future for vertical FL and federated transfer learning. Here we list some examples of potential applications that are helpful for a comprehensive understanding of geoscience and RS through FL.

GENERATING GLOBAL-SCALE GEOGRAPHIC SYSTEMATIC MODELS
The geographic parameters of different countries are similar, but geospatial data often cannot be shared due to national security restrictions and data confidentiality. A horizontal FL system could train local models separately, and then integrate the global-scale geographic parameters on the server according to the contribution of different owners, which could effectively avoid data leaks.
INTERDISCIPLINARY URBAN COMPUTING
As is known, much spatial information about a specific city can be recorded conveniently by RS images. Still, other information, such as the locations of people and vehicles, and the elevation information of land covers, is usually kept private by different industries. Therefore, designing appropriate vertical FL systems will be helpful for increasing urban understanding, such as estimating population distributions and traffic conditions, and computing 3D maps of cities.

OBJECT DETECTION AND RECOGNITION, CROSS-SPATIAL DOMAIN AND SENSOR DOMAIN
The RS data owned by different industries is usually captured by different sensors, and geospatial overlap is rare. Considering that the objects of interest are usually confidential, local data cannot be shared. In this case, a federated transfer learning system can detect objects of interest effectively by integrating local models for cross-domain tasks.

UNCERTAINTY
In the big data era, AI techniques, especially ML algorithms, have been applied widely in geoscience and RS missions. Unfortunately, regardless of their promising results, heterogeneities within the enormous volume of EO data, including noise and unaccounted-for variation, and the stochastic nature of the model's parameters can lead to uncertainty in the algorithms' predictions, which may not only severely threaten the performance of the AI algorithms on uncertain test samples but also reduce the reliability of predictions in high-risk RS applications [103]. Therefore, identifying the occurrence of uncertainty, modeling its propagation and accumulation, and performing uncertainty quantification in the algorithms are all critical to controlling the quality of the outcomes.

PRELIMINARIES
AI techniques for geoscience and RS data analysis aim to map the relationship between properties on the earth's surface and EO data. In practice, the algorithms in these techniques can be defined as a mathematical mapping that transforms the data into information representations. For example, neural networks have become the most popular mapping function that transforms a measurable input set $X$ into a measurable set $Y$ of predictions, as follows:

$f_\theta : X \to Y$  (8)

where $f$ denotes the mapping function, and $\theta$ represents the parameters of the neural network. Typically, as shown in Figure 9, developing an AI algorithm involves data collection, model construction, model training, and model deployment. In the context of supervised learning, a training dataset $D$ is constructed in the data collection step, containing $N$ pairs of input data sample $x$ and labeled target $y$, as follows:

$D = (X, Y) = \{x_i, y_i\}_{i=1}^{N}.$  (9)

Then, the model architecture is designed according to the requirements of the EO mission, and the mapping function as well as its parameters $\theta$ are initialized (i.e., $f_\theta$ is determined).
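To ground these four steps, the following minimal sketch (the names and the linear model are our own illustrative choices, not a specific EO pipeline) walks a tiny model $f_\theta$ through data collection, model construction, training, and deployment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data collection: D = {(x_i, y_i)}_{i=1..N} with a known linear ground truth.
N = 200
X = rng.normal(size=(N, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=N)

# Model construction: f_theta(x) = x . theta, with theta initialized randomly.
theta = rng.normal(size=3)

# Model training: minimize the squared loss with gradient descent.
lr = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ theta - y) / N   # gradient of the mean squared error
    theta -= lr * grad

# Model deployment: forward unseen samples x* through the trained f_theta.
X_test = rng.normal(size=(5, 3))
y_pred = X_test @ theta
```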
Next, the model training process utilizes a loss function to minimize errors and optimize the parameters of the model with the training dataset $D$ (i.e., the parameters $\theta$ are optimized to $\hat{\theta}$). Finally, the samples in the testing dataset $x^* \in X^*$ are forwarded into predictions $y^* \in Y^*$ using the trained model $f_{\hat{\theta}}$ in the model deployment step (i.e., $f_{\hat{\theta}} : X^* \to Y^*$).

The concept of uncertainty refers to a lack of knowledge about specific factors, parameters, or models [104]. Among the aforementioned steps of applying an AI algorithm, uncertainty can occur in the training dataset and testing dataset during data collection and model deployment (data uncertainty), respectively. Meanwhile, uncertainty can also arise in the model's parameters and their optimization during model construction and model training (model uncertainty). In the literature, many studies have been undertaken to determine the sources of uncertainty, while various uncertainty quantification approaches have been developed to estimate the reliability of the model's predictions.

SOURCES OF UNCERTAINTY

DATA UNCERTAINTY
Data uncertainty consists of randomness and bias in the data samples in the training and testing datasets caused by measurement errors, sampling errors, and a lack of knowledge [105]. In particular, data uncertainty can be divided into uncertainty in the raw data and a lack of domain knowledge. Uncertainty in the raw data usually arises in the EO data collection and preprocessing stages, including the RS imaging process and annotations of the earth's surface properties for remote observation. To understand uncertainty in this EO data collection stage, a guide to the expression of uncertainty in measurement was proposed. It defines uncertainty as a parameter associated with the result of a measurement that characterizes the dispersion of the values that could reasonably be attributed to the measurand of the raw EO data (i.e., $X$ and $X^*$) [106]. However, uncertainty in the measurement is inevitable and remains difficult to represent and estimate from the observations [107]. On the contrary, the labeled targets' subset $Y$ of the training dataset can bring uncertainty due to mistakes in the artificial labeling process and discrete annotations of ground surfaces. Specifically, definite boundaries between land cover classes are often nonexistent in the real world, and determining the type of classification scheme characterizing the precise nature of the classes is uncertain [108].
FIGURE 9. The flowchart of an AI algorithm being applied to geoscience and RS data analysis.
Furthermore, the lack of domain knowledge of the model can cause uncertainty concerning the different domain distributions of the observed data in the training dataset $X$ and the testing dataset $X^*$. During the RS imaging process, characteristics of the observed data are related to spatial and temporal conditions, such as illumination, season, and weather. Variations in the imaging situation can lead to heterogeneous data that have different domain distributions (i.e., domain invariance), and the AI algorithms cannot generate correct predictions with the decision boundary trained by data of different distributions (i.e., domain shift) [109]. As a result, model performance can be severely affected due to uncertainty in the inference samples in the model deployment stage. The trained model lacks different domain knowledge, and thus cannot recognize the features from unknown samples excluded from the training dataset with domain invariance. For the unlabeled data distributions that are indistinguishable to the models, applying unsupervised domain adaptation techniques can reduce the data uncertainty effectively. These techniques adjust the model parameters to extend the decision boundary to the unknown geoscience and RS data [110]. However, domain adaptation can only fine-tune the models, and uncertainty cannot be eradicated entirely, thus motivating researchers to perform uncertainty quantification for out-of-distribution model predictions.

MODEL UNCERTAINTY
Model uncertainty refers to the errors and randomness of the model parameters that are initialized in model construction and optimized in model training. In the literature, various model architectures associated with several optimization configurations have been developed for different RS applications. However, determining the optimal model parameters and training settings remains difficult and induces uncertainty in the predictions. For example, a mismatch between model complexity and data volume may cause uncertainty in the form of under- or overfitting [111]. Meanwhile, the heterogeneity of training configurations can control the steps of model fitting directly and affect the final training quality continuously. As a result, the selection of these complex configurations brings uncertainty to the systematic model training process. The configurations used for optimizing an ML model usually involve loss functions and hyperparameters. In particular, the loss functions are designed to measure the distances between model predictions and ground reference data and can be further developed to emphasize different types of errors. For example, the $\ell_1$ and $\ell_2$ loss functions are employed widely in RS image restoration tasks, measuring the absolute and squared differences, respectively, at the pixel level. The optimizers controlled by hyperparameters then optimize the DNNs by minimizing the determined loss functions in every training iteration. In the literature, several optimization algorithms [e.g., stochastic gradient descent (SGD), Adam, and Adam with decoupled weight decay] have been proposed to
accelerate model fitting and improve the inference performance of the model. The difference between these optimizers is entirely captured by the choice of update rule and the applied hyperparameters [122]. For example, a training iteration using the SGD optimizer [123] can be defined as follows:

$\theta_{i+1} = \theta_i - \eta_i \nabla L(\theta_i)$  (10)

where $\theta_i$ represents the model parameters in the $i$th iteration, $L$ denotes the loss function, and $\eta$ is the learning rate.
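A single update of this form can be written directly, as in the following schematic sketch (`grad_loss` stands in for $\nabla L(\theta_i)$ and is our own placeholder, not a specific library call):

```python
import numpy as np

def sgd_step(theta, grad_loss, lr):
    """One SGD iteration: theta_{i+1} = theta_i - eta * grad L(theta_i)."""
    return theta - lr * grad_loss(theta)

# Toy quadratic loss L(theta) = ||theta||^2 / 2, whose gradient is theta itself.
theta = np.array([4.0, -2.0])
for _ in range(100):
    theta = sgd_step(theta, lambda t: t, lr=0.1)  # contracts toward the minimum at 0
```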
Specifically, in a model training iteration, the amplitude of each update step is controlled by the learning rate $\eta$, while the gradient descent of the loss function determines the direction. Concerning the model updates in a whole epoch, the batch size determines the volume of samples to be calculated in the loss function in each training iteration. Due to the heterogeneity of the training data, each sample of the whole training batch may be calculated in a different optimization direction, which combines into an uncertain result in the loss function. As a result, the batch size can manipulate the training stability in that a larger training batch reduces the possibility of opposite optimization steps. In conclusion, the selection of appropriate loss functions and optimization configurations becomes an uncertain issue when training AI algorithms [124], [125].

QUANTIFICATION AND APPLICATIONS IN GEOSCIENCE AND RS
As described in the "Sources of Uncertainty" section, data and model uncertainty caused by various sources is inevitable and still remains after applying practical approaches. Thus, uncertainty quantification can evidence the credibility of the predictions, benefiting the application of homogeneous data with domain variance [126], and decision making in high-risk AI applications in RS [127]. As shown in Table 1, current AI algorithms for RS can be divided into low-level vision tasks (e.g., image restoration and generation) and classification tasks (scene classification and object detection). In low-level RS vision tasks, neural networks are usually constructed end to end, generating predictions
that directly represent the pixelwise spectral information. As a result, uncertainty quantification remains challenging in low-level tasks due to the lack of possible representations in the prediction space [128]. On the contrary, the predictions of classification tasks are usually distributions transformed by a softmax function and refer to the possibilities of the object classes. The Bayesian inference framework provides a practical tool to estimate the uncertainty in neural networks [129]. In Bayesian inference, data uncertainty for the input sample $x^*$ is described as a posterior distribution over class labels $y$ given a set of model parameters $\theta$. In contrast, model uncertainty is formalized as a posterior distribution over $\theta$ given the training data $D$, as follows:
$P(y \mid x^*, D) = \int \underbrace{P(y \mid x^*, \theta)}_{\text{data}} \, \underbrace{p(\theta \mid D)}_{\text{model}} \, d\theta.$  (11)
Several uncertainty quantification approaches have been proposed in the literature to marginalize $\theta$ in (11) and obtain the uncertainty distribution. As shown in Table 4, these schemes can be categorized into deterministic methods and Bayesian inference methods, depending on the subcomponent models' structure and the characteristics of the errors [105]. By opting for different quantification strategies, the uncertainty of the model output $y^*$ can be obtained as $\sigma^*$, as shown in Figure 10.

DETERMINISTIC METHODS
Parameters of the neural network are deterministic and fixed in the inference of the deterministic methods. To obtain an uncertainty distribution with fixed $\theta$, several uncertainty quantification approaches have been proposed that directly predict the parameters of a distribution over the predictions. In classification tasks, the predictions represent class possibilities as the outputs of the softmax function, defined as follows:
$P(y \mid x^*; \hat{\theta}) = \dfrac{e^{z_c(x^*)}}{\sum_{k=1}^{K} e^{z_k(x^*)}}$  (12)

where $z_k(x^*) \in \mathbb{R}$ denotes the $k$th class prediction of the input sample $x^*$.
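In code, the softmax probabilities of (12) and a simple uncertainty score can be computed as follows (a minimal sketch; the entropy score is one common choice of uncertainty measure, not one prescribed by the cited works):

```python
import numpy as np

def softmax(logits):
    """Convert class logits z_k(x*) into probabilities as in (12)."""
    z = logits - logits.max()          # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum()

def predictive_entropy(probs):
    """Entropy of the categorical prediction; higher means more uncertain."""
    return -np.sum(probs * np.log(probs + 1e-12))

logits = np.array([2.0, 0.5, -1.0])    # z_k(x*) for K = 3 classes
probs = softmax(logits)
uncertainty = predictive_entropy(probs)
```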
TABLE 4. AN OVERVIEW OF UNCERTAINTY QUANTIFICATION METHODS.

DETERMINISTIC MODELS: PRIOR NETWORK-BASED METHODS
Description: Uncertainty distributions are calculated from the density of predicted probabilities, represented by prior distributions with tractable properties over the categorical distribution.
Optimization strategy: Kullback–Leibler divergence.
Uncertainty sources: Data.
AI techniques: Deterministic networks.
References: [112], [113], and [114].

DETERMINISTIC MODELS: ENSEMBLE METHODS
Description: Predictions are obtained by averaging over a series of predictions of the ensembles, while the uncertainty is quantified based on their variety.
Optimization strategy: Cross-entropy loss.
Uncertainty sources: Data.
AI techniques: Deterministic networks.
References: [115], [116], and [117].

BAYESIAN INFERENCE: MONTE CARLO METHODS
Description: The uncertainty distribution over predictions is calculated by the Bayes theorem based on the Monte Carlo approximation of the distributions over Bayesian model parameters.
Optimization strategy: Cross-entropy loss and Kullback–Leibler divergence.
Uncertainty sources: Model.
AI techniques: Bayesian networks.
References: [118], [119], and [120].

BAYESIAN INFERENCE: EXTERNAL METHODS
Description: The mean and standard deviation values of the prediction are directly output simultaneously using external modules.
Optimization strategy: Depends on the method.
Uncertainty sources: Model.
AI techniques: Bayesian networks.
References: [121].
However, the classification predictions of the softmax function in neural networks are usually poorly calibrated due to overconfidence of the neural networks [130], while the predictions cannot characterize domain shifts because they discard the original features of the neural network [131]. As a result, the accuracy of uncertainty quantification is affected. To overcome these challenges, several uncertainty quantification approaches introduce prior networks to parameterize the distribution over a simplex. For example, Dirichlet prior networks are adopted widely to quantify the uncertainty from a Dirichlet distribution with tractable analytic properties [112], [113], [114]. The Dirichlet distribution is a prior distribution over categories that represents the density of the predicted probabilities. The Dirichlet distribution-based methods directly analyze the logit magnitude of the neural networks, quantifying the data uncertainty with awareness of domain distributions in Dirichlet distribution representations. For training of the Dirichlet prior networks, model parameters are optimized by minimizing the Kullback–Leibler divergence between the model and the Dirichlet distribution, focusing on the in- and out-of-distribution data, respectively [129].
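As a rough sketch of the Dirichlet idea (following the common convention of treating the exponentiated logits as Dirichlet concentration parameters; the exact formulations differ across [112], [113], [114]):

```python
import numpy as np

def dirichlet_uncertainty(logits):
    """Treat exp(logits) as Dirichlet concentrations alpha_k; a small
    total concentration (precision) alpha_0 signals high data uncertainty."""
    alpha = np.exp(logits)              # concentration parameters alpha_k
    alpha0 = alpha.sum()                # precision of the Dirichlet
    probs = alpha / alpha0              # expected categorical probabilities
    return probs, alpha0                # low alpha0 -> flat, uncertain Dirichlet

probs_in, conf_in = dirichlet_uncertainty(np.array([5.0, 1.0, 0.5]))
probs_out, conf_out = dirichlet_uncertainty(np.array([0.2, 0.1, 0.0]))  # OOD-like input
```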
Beyond the prior network-based approaches, ensemble methods can also approximate uncertainty by averaging over a series of predictions.
FIGURE 10. A visualization of uncertainty quantification methods. (a) Prior network-based methods. (b) Ensemble methods. (c) Monte Carlo methods. (d) External methods. For an input sample $x^*$, the first three methods deliver the prediction $y^*$ and the quantified uncertainty $\sigma^*$ from the average of a series of model outputs (i.e., $\xi^*$ or $y_n^*$) and their standard deviation (S.D.) results, respectively. On the contrary, the external methods directly output the results of prediction and uncertainty quantification. BNN: Bayesian neural network.
In particular, ensemble methods construct a set of deterministic models as ensemble members that each generate a prediction for the input sample. Based on the predictions from multiple decision makers, ensemble methods provide an intuitive way of representing the uncertainty by evaluating the variety among the members' predictions. For example, Feng et al. [115] developed an object-based change detection model using rotation forest and coarse-to-fine uncertainty analysis from multitemporal RS images. The ensemble members segmented multitemporal images into pixelwise classes of changed, unchanged, and uncertain according to a defined uncertainty threshold in a coarse-to-fine manner. Change maps were then generated using the rotation forest, and all the maps were combined into a final change map by majority voting, which quantifies the uncertainty by calculating the variety of decisions from different ensembles. Following a similar idea, Tan et al. [116] proposed an ensemble, object-level change detection model with multiscale uncertainty analysis based on object-based Dempster–Shafer fusion in active learning. Moreover, Schroeder et al. [117] proposed an ensemble model consisting of several artificial neural networks, quantifying uncertainty through the use of computed prediction variance lookup tables.
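The ensemble recipe in Figure 10(b) reduces to a few lines, as in the following schematic sketch (`models` is our own stand-in for any list of independently trained predictors):

```python
import numpy as np

def ensemble_predict(models, x):
    """Average the members' predictions; their spread quantifies uncertainty."""
    preds = np.stack([m(x) for m in models])      # one prediction per member
    return preds.mean(axis=0), preds.std(axis=0)  # y* = mean, sigma* = S.D.
```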
BAYESIAN INFERENCE
Bayesian learning can be used to interpret model parameters and quantify uncertainty based on its ability to combine the scalability, expressiveness, and predictive performance of neural networks. The Bayesian method utilizes Bayesian neural networks (BNNs) to directly infer the probability distribution over the model parameters $\theta$. Given the training dataset $D$, the posterior distribution over the model parameters $P(\theta \mid D)$ can be modeled by assuming a prior distribution over the parameters and applying the Bayes theorem [132]. The prediction distribution of $y^*$ for an input sample $x^*$ can then be obtained as follows:

$P(y^* \mid x^*, D) = \int P(y^* \mid x^*, \theta) \, P(\theta \mid D) \, d\theta.$  (13)
However, this equation is intractable owing to the integration over the posterior distribution of the model parameters $P(\theta \mid D)$; thus, many approximation techniques are typically applied. In the literature, Monte Carlo (MC) approximation has become the most widespread approach for Bayesian methods, following the law of large numbers. MC approximation approximates the expected distribution by the mean of $M$ neural networks, $f_{\theta_1}, f_{\theta_2}, \ldots, f_{\theta_M}$, with determined parameters $\theta_1, \theta_2, \ldots, \theta_M$. Following this idea, MC dropout has been applied widely to sample the parameters of a BNN by randomly dropping some connections of the layers according to a set probability [118], [119], [120]. The uncertainty distribution can then be further calculated by performing variational inference on the neural networks with the sampled parameters [133].
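A minimal MC-dropout sketch with NumPy follows (illustrative only; practical implementations simply keep the dropout layers active at test time and run T stochastic forward passes through the trained network):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_forward(x, W, p=0.5):
    """One stochastic forward pass: randomly drop connections with probability p."""
    mask = rng.random(W.shape) > p         # sample which weights stay active
    return x @ (W * mask) / (1.0 - p)      # rescale to keep the expectation

def mc_dropout_predict(x, W, T=100):
    """Approximate the predictive mean and uncertainty from T sampled networks."""
    samples = np.stack([mc_dropout_forward(x, W) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

W = rng.normal(size=(3, 2))               # a single linear layer as a stand-in
y_mean, y_std = mc_dropout_predict(np.array([1.0, 0.5, -0.2]), W)
```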
Concerning the computational cost of sampling model parameters in MC approximation, external modules are utilized to quantify uncertainty along with the predictions in BNNs. For example, Ma et al. [121] developed a BNN architecture with two endpoints to estimate the yield and the corresponding predictive uncertainty simultaneously in corn yield prediction based on RS data. Specifically, the extracted high-level features from the former part of the BNN are fed into two independent subnetworks to estimate the mean and standard deviation, respectively, of the predicted yield as a Gaussian distribution, which can be regarded intuitively as the quantified uncertainty.
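The two-endpoint design can be sketched as a network with two output heads trained with a Gaussian negative log-likelihood (a simplified sketch of the general idea in [121], not the authors' exact architecture; the head weights below are arbitrary stand-ins):

```python
import numpy as np

def gaussian_nll(y, mu, sigma):
    """Negative log-likelihood of y under N(mu, sigma^2); training this loss
    encourages the sigma head to widen exactly where the mu head is unreliable."""
    return 0.5 * np.log(2 * np.pi * sigma**2) + 0.5 * ((y - mu) / sigma) ** 2

# Schematic: shared features feed two heads predicting mu (yield) and sigma.
features = np.array([0.3, -1.2, 0.7])
w_mu, w_sigma = np.ones(3), np.ones(3) * 0.1      # stand-in head weights
mu = features @ w_mu
sigma = np.log1p(np.exp(features @ w_sigma))      # softplus keeps sigma positive
loss = gaussian_nll(y=1.0, mu=mu, sigma=sigma)
```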
FUTURE PERSPECTIVES
Over the decades, uncertainty analysis has become a critical topic in geoscience and RS data analysis. The literature has seen fruitful research outcomes in uncertainty explanation and quantification. Nevertheless, other open research directions deserve attention in future studies concerning the development trend of AI algorithms. In the following sections, we discuss some potential topics of interest.

BENCHMARK TOOLS FOR UNCERTAINTY QUANTIFICATION
Due to the lack of a universal benchmark protocol, comparisons of uncertainty quantification methods are rarely performed in the literature. The existing evaluation metrics in related studies are usually based on measurable quantities such as calibration, out-of-distribution detection, or entropy metrics [132], [134]. However, the variety of methodology settings makes it challenging to compare the approaches quantitatively using existing comparison metrics. Thus, developing benchmark tools, including a standardized evaluation protocol for uncertainty quantification, is critical for future research.

UNCERTAINTY IN UNSUPERVISED LEARNING
As data annotation is very expensive and time consuming given the large volume of EO data, semi- and unsupervised techniques have been employed widely in AI-based algorithms. However, existing uncertainty quantification methods still focus mainly on supervised learning algorithms due to the requirement for qualification metrics. Therefore, developing uncertainty quantification methods in the absence of available labeled samples is a critical research topic for the future.

UNCERTAINTY ANALYSIS FOR MORE AI ALGORITHMS
Currently, most of the existing uncertainty quantification methods focus on high-level and forecasting tasks in geoscience and RS. Conversely, uncertainty methods for low-level vision tasks, such as cloud removal, are rarely seen in the literature due to the form of their predictions, and thus deserve further study.

QUANTIFYING DATA AND MODEL UNCERTAINTY SIMULTANEOUSLY
Existing uncertainty quantification methods have a very limited scope of application. Deterministic and Bayesian methods can only quantify data and model uncertainty, respectively. Developing strategies to quantify data and model uncertainty simultaneously is necessary to analyze uncertainty comprehensively.

EXPLAINABILITY
AI-based algorithms, especially DNNs, have been applied successfully in various real-world applications as well as in geoscience and RS due to the rise of available large-scale data and hardware improvements. To improve performance and learning efficiency, deeper architectures and more complex parameters have been introduced to DNNs, which make it more difficult to understand and interpret these black-box models [135]. Regardless of the potentially high accuracy of deep learning models, the decisions made by DNNs require knowledge of internal operations that were once overlooked by non-AI experts and end users who were more concerned with results. However, for geoscience and RS tasks, the privacy of data and the high confidentiality of tasks mean that designing trustworthy deep learning models is more aligned with the ethical and judicial
requirements of both designers and users. In response to the demands of ethical, trustworthy, and unbiased AI models, as well as to reduce the impact of adversarial examples in fooling classifier decisions, XAI has been implemented for geoscience and RS tasks to provide transparent explanations of model behaviors and make the models easier for humans to manage. Specifically, explainability is used to provide an understanding of the pathways through which output decisions are made by AI models, based on the parameters or activations of the trained models.

PRELIMINARIES
The topic of XAI has received renewed attention from academia and practitioners. We can see from Figure 11 that the search interest in XAI according to Google Trends has grown rapidly over the past decade, especially in the past five years. The general concept of XAI can be explained as a suite of techniques and algorithms designed to facilitate the trustworthiness and transparency of AI systems. Thus, explanations are used as additional information extracted from the AI model, which provides insightful descriptions of a specific AI decision or of the entire functionality of the AI model [138]. Generally, given an input image $x \in \mathbb{R}^d$, let $f(\theta): x \to y$ be a classifier mapping from the image space to the label space, where $\theta$ represents the parameters of the model in a classification problem. The predicted label $\hat{y}$ for the input image $x$ can then be obtained by $\hat{y} = f(\theta, x)$. Now, the explanation $E: f \times \mathbb{R}^d \to \mathbb{R}^d$ can be generated to describe the feature importance, contribution, or relevance of a particular dimension to the class output [137]. The explanation map can be a pixel map equal in size to the input. For example, the saliency method [139] is estimated by the gradient of the output $\hat{y}$ with respect to the input $x$:

$E_{\text{Saliency}}(\hat{y}, x) = \nabla_x f(\theta, x).$  (14)

FIGURE 11. Google Trends results for research interest in the term XAI. The search interest values represent the relative frequency of searches over time, where "100" means the peak popularity for the term, "50" means that the term is half as popular, and "0" means that there were not enough data.

FIGURE 12. A pseudo ontology of the XAI methods taxonomy (referenced from [136]).
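Returning to the saliency definition in (14), a numerical sketch using finite differences is given below (illustrative only; in practice the gradient is obtained by automatic differentiation rather than this loop, and the quadratic score is a stand-in for a real class score):

```python
import numpy as np

def saliency_map(f, x, eps=1e-4):
    """Approximate E_Saliency = grad_x f(x) for the predicted class score f."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx.flat[i] = eps
        grad.flat[i] = (f(x + dx) - f(x - dx)) / (2 * eps)  # central difference
    return np.abs(grad)  # magnitude indicates each pixel's importance

score = lambda x: (x ** 2).sum()          # stand-in for a class score f(theta, x)
heatmap = saliency_map(score, np.array([0.5, -1.0, 2.0]))
```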
XAI APPLICATIONS IN GEOSCIENCE AND RS
In the quest to make AI algorithms explainable, many explanation methods and strategies have been proposed. Based on previously published surveys, the taxonomy of XAI algorithms can be discussed along the axes of scope and usage, respectively [136], [140], and the critical distinction between XAI algorithms is drawn in Figure 12.
◗◗ Scope: According to the scope of explanations, XAI algorithms can be either global or local. Globally explainable methods provide a comprehensive understanding of the entire model's behavior. Locally explainable methods are designed to justify the individual feature attributions of an instance x from the data population X. Some XAI algorithms can be extended to both. For example, in [141], Ribeiro et al. introduced a local interpretable model-agnostic explanation (LIME) method, which can reliably approximate any black-box classifier locally around the prediction. Specifically, the LIME method gives human-understandable
representations by highlighting attentive contiguous superpixels of the source image with positive weight toward a specific class, as they give intuitive explanations of how the model thinks when classifying the image.
◗◗ Usage: Another way to classify an explainable method is by whether it must be embedded into one specific neural network or can be applied to any AI algorithm as an external explanation. The design of model-specific XAI algorithms depends heavily on the model's intrinsic architecture and will be affected by any changes in the architecture. On the other hand, model-agnostic, posthoc XAI algorithms have aroused research interest as they are not tied to a particular type of model and usually perform well on various neural networks. One natural idea of the model-agnostic method is to visualize representations of the pattern passed through the neural units. For example, in [142], Zhou et al. proposed a class activation mapping (CAM) method that calculates the contribution of each pixel to the predicted result and generates a heatmap for visual interpretation. The proposal of CAM provides great inspiration for giving visualized interpretations for CNN-model families, and a series of XAI algorithms have been developed based on vanilla CAM, such as grad-CAM [143], guided grad-CAM [143], grad-CAM++ [144], and so on; a minimal sketch of vanilla CAM follows this list.
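The bare-bones CAM computation is sketched below (a simplified rendering of the vanilla method in [142]: the class-specific heatmap is a weighted sum of the last convolutional feature maps, with weights taken from the classifier layer; the random arrays are placeholders):

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """CAM: M_c(u, v) = sum_k w_k^c * A_k(u, v), then rescaled to [0, 1]."""
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W) heatmap
    cam = np.maximum(cam, 0)                                 # keep positive evidence
    return cam / (cam.max() + 1e-12)

A = np.random.rand(64, 8, 8)        # K=64 feature maps from the last conv layer
w_c = np.random.rand(64)            # classifier weights for the target class c
heatmap = class_activation_map(A, w_c)
```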
FIGURE 13. The heatmaps of DenseNet with different XAI algorithms for the Water class in the SEN12MS dataset (from [137]). Pixels with a deeper color are more likely to be interpreted as the target class. Sal with SG: Saliency with SmoothGrad.

XAI methods have also been applied in geoscience and RS. In [145], Maddy and Boukabara proposed an AI version of a multi-instrument inversion and data assimilation preprocessing system (MIIDAPS-AI) for infrared and microwave polar and geostationary sounders and imagers. They generated daily MIIDAPS-AI Jacobians to provide reliable explanations for the results. The consistency of the ML-based Jacobians with expectations illustrates that the information leading to temperature retrieval at a particular layer originates from channels that peak at those layers. In [137], Kakogeorgiou and Karantzalos utilized deep learning models for multilabel classification tasks on the benchmark BigEarthNet and SEN12MS datasets. To produce human-interpretable explanations for the models, 10 XAI methods were adopted with regard to their applicability and explainability. Some of the methods can be visualized directly by creating heatmaps for the prediction results, as shown in Figure 13, which demonstrates their capability and provides valuable insights for understanding the behaviors of deep black-box models like DenseNet. In [146], Matin and Pradhan utilized the Shapley additive explanation (SHAP) algorithm [147] to interpret the outputs of multilayer perceptrons and analyzed the impact of each feature descriptor for postearthquake building-damage assessment. Through this study, the explainable model provided further evidence for the model's decisions in classifying collapsed and noncollapsed buildings, thus providing generic databases and reliable AI models to researchers. In [148] and [149], Temenos et al. proposed a fused dataset that combines data from eight European cities and explored its potential relationship with COVID-19 using a tree-based ML algorithm. To give trustworthy explanations, the SHAP and LIME methods were utilized to identify the influence of factors such as temperature, humidity, and O3 on global and local levels. There exist further explanations of AI models that provide a visualization of the learned deep features and interpret how the training procedure works on specific tasks. In [150], Xu et al. proposed a fully convolutional classification strategy for HSIs. By visualizing the responses of different neurons in the network, the activation of neurons corresponding to different categories was explored, as shown in Figure 14. Consistency exists between the highlighted feature maps from different layers and the object detection results. In [151], Onishi and Ise constructed a machine vision system based on CNNs for tree identification and mapping using RS images captured by UAVs. The deep features were visualized by applying the guided grad-CAM method, which indicates that the differences in the edge shapes of foliage and branches play an important role in identifying tree species for CNN models. In [152], Huang et al. proposed a novel network, named encoder-classifier-reconstruction CAM (ECR-CAM), to provide more accurate visual interpretations of the more complicated objects contained in RS images. Specifically, the ECR-CAM method can learn more informative features by attaching a reconstruction subtask to the original classification task. Meanwhile, the extracted features are visualized using the CAM module based on the training of the network. The visualized heatmaps of ResNet-101 and DenseNet-201, with the proposed ECR-CAM method and other XAI methods, are shown in Figure 15. We can observe that ECR-CAM can
more precisely locate target objects and achieve a better evaluation result for capturing multiple objects. In [15], Xu et al. proposed a novel text-to-image modern Hopfield network (Txt2Img-MHN) for RS image generation (https://github.com/YonghaoXu/Txt2Img-MHN). Unlike previous studies that directly learn concrete and diverse text-image features, Txt2Img-MHN aims to learn the most representative prototypes from text-image embeddings with the Hopfield layer, thus generating coarse-to-fine images for different semantics. For an understandable interpretation of the learned prototypes, the top-20 tokens were visualized, which are highly correlated with the basic components
for image generation, such as different colors and texture patterns. Other representative prototype-based XAI algorithms in geoscience and RS include the works in [153], [154], and [155].

FUTURE PERSPECTIVES
The past 10 years have witnessed a rapid rise of AI algorithms in geoscience and RS. Meanwhile, there now exists greater awareness of the need to develop AI models with more explainability and transparency, so as to increase trust in and reliance on the models' predictions. However, the existing XAI research that aims to visualize, query, or interpret the inner functions of the models still needs to be improved due to its tight correlation with the complexity of individual models [156]. In the following sections, we discuss potential perspectives of XAI for EO tasks from three aspects.
FIGURE 14. Visualized feature maps and unsupervised object detection results of the spatial fully convolutional network model (from [150]). (a) The 38th feature map in the first convolutional layer. (b) Detection results for vegetation. (c) The sixth feature map in the sixth convolutional layer. (d) Detection results for metal sheets.
SIMPLIFY THE STRUCTURE OF DNNs
By utilizing appropriate XAI models, the effect of each layer and neuron in the network on the decision can be decomposed and evaluated. As a consequence, training time and parameter consumption can be reduced by pruning the network and preserving the most useful layers and neurons for feature extraction.
FIGURE 15. The heatmaps of ResNet-101 and DenseNet-201 with CAM, grad-CAM++, and ECR-CAM (adapted from [152]). The target objects in rows 1, 2, and 3 are airplanes, cars, and mobile homes, respectively. Pixels in red are more likely to be interpreted as the target objects.
CREATE MORE STABLE AND RELIABLE EXPLANATIONS FOR THE NETWORK
It has been demonstrated in [157] that existing interpretations of the network are vulnerable to small perturbations. The fragility of interpretations sends the message that designing robust XAI methods will have promising applications for adversarial attacks and defenses in EO.

PROVIDE HUMAN-UNDERSTANDABLE EXPLANATIONS IN EO TASKS
Previous studies show that there is still a large gap between the explanation maps learned by XAI methods and human annotations; thus, XAI methods produce semantically misaligned explanations that are difficult to understand directly. This problem sheds light on the importance of deriving interpretations based on the specific EO task and human understanding, increasing the accuracy of explanation maps by introducing more constraints and optimization problems to explanations.

CONCLUSIONS AND REMARKS
Although AI algorithms represented by deep learning theories have achieved great success in many challenging tasks in the geoscience and RS field, the related safety and security issues should not be neglected, especially when addressing safety-critical EO missions. This article provided the first systematic and comprehensive review of recent progress on AI security in the geoscience and RS field, covering five major aspects: adversarial attacks, backdoor attacks, FL, uncertainty, and explainability. Although research on some of these topics is still in its infancy, we believe that all of these topics are indispensable for building a secure and trustworthy EO system, and all five deserve further investigation. In particular, in this section, we summarize four potential research directions and provide some open questions and challenges. This review is intended to inspire readers to conduct more influential and insightful research in related realms.

SECURE AI MODELS IN EO
Currently, the security of AI models has become a concern in geoscience and RS. The literature reviewed in this article also demonstrates that either adversarial or backdoor attacks can seriously threaten deployed AI systems for EO tasks. Nevertheless, despite the great effort made in existing research, most studies focus only on a single attack type. How to develop advanced algorithms to defend AI models against both adversarial and backdoor attacks simultaneously for EO is still an open question. In addition, although most of the relevant research focuses on conducting adversarial (backdoor) attacks and defenses in the digital domain, how effective adversarial (backdoor) attacks and defenses might be carried out in the physical domain, considering the imaging characteristics of different RS sensors, is another meaningful research direction.
DATA PRIVACY IN EO
State-of-the-art AI algorithms, especially deep learning-based ones, are usually data driven, and training these giant models often depends on a large quantity of high-quality labeled data. Thus, data sharing and distributed learning have played an increasingly important role in training large-scale AI models for EO. However, considering the sensitive information commonly found in RS data, such as military targets and other confidential information related to national defense security, the design of advanced FL algorithms that realize the sharing and flow of the information required for training AI models while protecting data privacy in EO is a challenging problem. Additionally, most existing research focuses on horizontal FL, in which it is assumed that distributed databases share high similarity in feature space. Improving FL ability in cross-domain, cross-sensor, or cross-task scenarios for EO is still an open question.

TRUSTWORTHY AI MODELS IN EO
The uncertainty in RS data and models is a major obstacle to building a trustworthy AI system for EO. Such uncertainty exists in the entire lifecycle of EO, from data acquisition, transmission, processing, and interpretation to evaluation, and constantly spreads and accumulates, affecting the accuracy and reliability of the eventual output of the deployed AI model. Currently, most existing research adopts deterministic and Bayesian inference methods to quantify the uncertainty in data and models, which ignores the close relationship between data and models. Thus, finding a method to achieve uncertainty quantification for data and models simultaneously in EO deserves more in-depth study. Furthermore, apart from uncertainty quantification, it is equally crucial to develop advanced algorithms to further decrease uncertainty across the entire lifecycle of EO so that errors and risks become highly controllable, achieving a truly trustworthy AI system for EO.

XAI MODELS IN EO
As an end-to-end, data-driven AI technique, deep learning models usually work like an unexplainable black box. This makes it straightforward to apply deep learning models to many challenging EO missions, like using a point-and-shoot camera. Nevertheless, it also brings potential security risks, including vulnerability to adversarial (backdoor) attacks and model uncertainty. Thus, achieving a balance between tractability, explainability, and accuracy when designing AI models for EO is worthy of further investigation. Finally, considering the important role of expert knowledge in interpreting RS data, finding a way to better embed the human–computer interaction mechanism into the EO system may be a potential research direction for building XAI models in the future.

ACKNOWLEDGMENT
The authors would like to thank the Institute of Advanced Research in Artificial Intelligence for its support. The corresponding author of this article is Shizhen Chang.
AUTHOR INFORMATION
Yonghao Xu ([email protected]) received his B.S. and Ph.D. degrees in photogrammetry and remote sensing from Wuhan University, Wuhan, China, in 2016 and 2021, respectively. He is currently a postdoctoral researcher at the Institute of Advanced Research in Artificial Intelligence, 1030 Vienna, Austria. His research interests include remote sensing, computer vision, and machine learning. He is a Member of IEEE.

Tao Bai ([email protected]) received his B.Eng. degree in engineering from Wuhan University, Wuhan, China, in 2018, and his Ph.D. degree in computer science from Nanyang Technological University, 639798, Singapore, in 2022, where he is currently a research fellow. His research interests mainly focus on adversarial machine learning, generative adversarial networks, remote sensing, and security and privacy.

Weikang Yu ([email protected]) received his B.Sc. degree from Beihang University, Beijing, China, in 2020, and his M.Phil. degree from the Chinese University of Hong Kong, Shenzhen, in 2022. He is currently pursuing his Ph.D. degree with the machine learning group at Helmholtz Institute Freiberg for Resource Technology, Helmholtz–Zentrum Dresden–Rossendorf, 09599 Freiberg, Germany. His research interests include remote sensing image processing and machine learning. He is a Student Member of IEEE.

Shizhen Chang ([email protected]) received her B.S. degree in surveying and mapping engineering and her Ph.D. degree in photogrammetry and remote sensing from Wuhan University, Wuhan, China, in 2016 and 2021, respectively. She is currently a postdoctoral researcher with the Institute of Advanced Research in Artificial Intelligence, 1030 Vienna, Austria. Her research interests include weakly supervised learning, change detection, and machine (deep) learning for remote sensing. She is a Member of IEEE.

Peter M. Atkinson ([email protected]) received his master of business administration degree from the University of Southampton, Southampton, U.K., in 2012 and his Ph.D. degree from the University of Sheffield, Sheffield, U.K., in 1990, both in geography. He was a professor of geography with the University of Southampton, where he is currently a visiting professor. He was the Belle van Zuylen Chair with Utrecht University, Utrecht, The Netherlands. He is currently a distinguished professor of spatial data science with Lancaster University, LA1 4YR Lancaster, U.K. He is also a visiting professor with the Chinese Academy of Sciences, Beijing, China. He has authored or coauthored more than 350 peer-reviewed articles in international scientific journals and approximately 50 refereed book chapters, and has also edited more than 10 journal special issues and eight books. His research interests include remote sensing, geographical information science, and spatial (and space–time) statistics applied to a range of environmental science and socioeconomic problems. He
was the recipient of the Peter Burrough Award of the International Spatial Accuracy Research Association and the NERC CASE Award from the Rothamsted Experimental Station. He is the editor-in-chief of Science of Remote Sensing, a sister journal of Remote Sensing of Environment, and an associate editor of Computers and Geosciences. He sits on various international scientific committees. Pedram Ghamisi ([email protected]) received his Ph.D. degree in electrical and computer engineering from the University of Iceland in 2015. He works as head of the machine learning group at Helmholtz–Zentrum Dresden–Rossendorf, 09599 Freiberg, Germany, and a senior principal investigator and research professor (the leader of artificial intelligence for remote sensing) at the Institute of Advanced Research in Artificial Intelligence, Austria. He is a cofounder of VasoGnosis Inc. with two branches in San Jose and Milwaukee, USA. His research interests include deep learning, with a sharp focus on remote sensing applications. For detailed information, see http://www.ai4rs. com. He is a Senior Member of IEEE. REFERENCES [1] M. Reichstein et al., “Deep learning and process understanding for data-driven Earth system science,” Nature, vol. 566, no. 7743, pp. 195–204, Feb. 2019, doi: 10.1038/s41586-019-0912-1. [2] P. Ghamisi et al., “New frontiers in spectral-spatial hyperspectral image classification: The latest advances based on mathematical morphology, Markov random fields, segmentation, sparse representation, and deep learning,” IEEE Geosci. Remote Sens. Mag., vol. 6, no. 3, pp. 10–43, Sep. 2018, doi: 10.1109/ MGRS.2018.2854840. [3] L. Zhang, L. Zhang, and B. Du, “Deep learning for remote sensing data: A technical tutorial on the state of the art,” IEEE Geosci. Remote Sens. Mag., vol. 4, no. 2, pp. 22–40, Jun. 2016, doi: 10.1109/MGRS.2016.2540798. [4] X. X. Zhu et al., “Deep learning in remote sensing: A comprehensive review and list of resources,” IEEE Geosci. Remote Sens. Mag., vol. 5, no. 4, pp. 8–36, Dec. 2017, doi: 10.1109/ MGRS.2017.2762307. [5] J. Ma, W. Yu, C. Chen, P. Liang, X. Guo, and J. Jiang, “Pan-GAN: An unsupervised pan-sharpening method for remote sensing image fusion,” Inf. Fusion, vol. 62, pp. 110–120, Oct. 2020, doi: 10.1016/j.inffus.2020.04.006. [6] W. He et al., “Non-local meets global: An iterative paradigm for hyperspectral image restoration,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 4, pp. 2089–2107, Apr. 2022, doi: 10.1109/TPAMI.2020.3027563. [7] P. Ebel, Y. Xu, M. Schmitt, and X. X. Zhu, “SEN12MS-CR-TS: A remote-sensing data set for multimodal multitemporal cloud removal,” IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1–14, Jan. 2022, doi: 10.1109/TGRS.2022.3146246. [8] Y. Zhong, W. Li, X. Wang, S. Jin, and L. Zhang, “Satelliteground integrated destriping network: A new perspective for EO-1 hyperion and Chinese hyperspectral satellite datasets,” Remote Sens. Environ., vol. 237, Feb. 2020, Art. no. 111416, doi: 10.1016/j.rse.2019.111416. IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE
[9] G. Cheng, J. Han, and X. Lu, “Remote sensing image scene classification: Benchmark and state of the art,” Proc. IEEE, vol. 105, no. 10, pp. 1865–1883, Mar. 2017, doi: 10.1109/JPROC.2017.2675998. [10] J. Ding et al., “Object detection in aerial images: A large-scale benchmark and challenges,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 11, pp. 7778–7796, Nov. 2022, doi: 10.1109/TPAMI.2021.3117983. [11] Y. Xu et al., “Advanced multi-sensor optical remote sensing for urban land use and land cover classification: Outcome of the 2018 IEEE GRSS data fusion contest,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 12, no. 6, pp. 1709–1724, Jun. 2019, doi: 10.1109/JSTARS.2019.2911113. [12] H. Chen, C. Wu, B. Du, L. Zhang, and L. Wang, “Change detection in multisource VHR images via deep Siamese convolutional multiple-layers recurrent neural network,” IEEE Trans. Geosci. Remote Sens., vol. 58, no. 4, pp. 2848–2864, Apr. 2020, doi: 10.1109/TGRS.2019.2956756. [13] J. Shao, B. Du, C. Wu, M. Gong, and T. Liu, “HRSiam: High-resolution Siamese network, towards space-borne satellite video tracking,” IEEE Trans. Image Process., vol. 30, pp. 3056–3068, Feb. 2021, doi: 10.1109/TIP.2020.3045634. [14] X. Lu, B. Wang, X. Zheng, and X. Li, “Exploring models and data for remote sensing image caption generation,” IEEE Trans. Geosci. Remote Sens., vol. 56, no. 4, pp. 2183–2195, Apr. 2018, doi: 10.1109/TGRS.2017.2776321. [15] Y. Xu, W. Yu, P. Ghamisi, M. Kopp, and S. Hochreiter, “Txt2Img-MHN: Remote sensing image generation from text using modern Hopfield networks,” 2022, arXiv:2208.04441. [16] S. Lobry, D. Marcos, J. Murray, and D. Tuia, “RSVQA: Visual question answering for remote sensing data,” IEEE Trans. Geosci. Remote Sens., vol. 58, no. 12, pp. 8555–8566, Dec. 2020, doi: 10.1109/TGRS.2020.2988782. [17] D. Rashkovetsky, F. Mauracher, M. Langer, and M. Schmitt, “Wildfire detection from multisensor satellite imagery using deep semantic segmentation,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 7001–7016, Jun. 2021, doi: 10.1109/JSTARS.2021.3093625. [18] O. Ghorbanzadeh et al., “The outcome of the 2022 Landslide4Sense competition: Advanced landslide detection from multisource satellite imagery,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. 9927–9942, Nov. 2022, doi: 10.1109/JSTARS.2022.3220845. [19] S. Dewitte, J. P. Cornelis, R. Müller, and A. Munteanu, “Artificial intelligence revolutionises weather forecast, climate monitoring and decadal prediction,” Remote Sens., vol. 13, no. 16, Aug. 2021, Art. no. 3209, doi: 10.3390/rs13163209. [20] X. Feng, T.-M. Fu, H. Cao, H. Tian, Q. Fan, and X. Chen, “Neural network predictions of pollutant emissions from open burning of crop residues: Application to air quality forecasts in southern China,” Atmos. Environ., vol. 204, pp. 22–31, May 2019, doi: 10.1016/j.atmosenv.2019.02.002. [21] N. Jean, M. Burke, M. Xie, W. M. Davis, D. B. Lobell, and S. Ermon, “Combining satellite imagery and machine learning to predict poverty,” Science, vol. 353, no. 6301, pp. 790–794, Aug. 2016, doi: 10.1126/science.aaf7894.
[22] G. W. Gella et al., “Mapping of dwellings in IDP/refugee settlements from very high-resolution satellite imagery using a mask region-based convolutional neural network,” Remote Sens., vol. 14, no. 3, Aug. 2022, Art. no. 689, doi: 10.3390/ rs14030689. [23] L. Zhang and L. Zhang, “Artificial intelligence for remote sensing data analysis: A review of challenges and opportunities,” IEEE Geosci. Remote Sens. Mag., vol. 10, no. 2, pp. 270–294, Jun. 2022, doi: 10.1109/MGRS.2022.3145854. [24] Y. Ge, X. Zhang, P. M. Atkinson, A. Stein, and L. Li, “Geoscienceaware deep learning: A new paradigm for remote sensing,” Sci. Remote Sens., vol. 5, Jun. 2022, Art. no. 100047, doi: 10.1016/j. srs.2022.100047. [25] W. Czaja, N. Fendley, M. Pekala, C. Ratto, and I.-J. Wang, “Adversarial examples in remote sensing,” in Proc. SIGSPATIAL Int. Conf. Adv. Geographic Inf. Syst., 2018, pp. 408–411, doi: 10.1145/3274895.3274904. [26] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM, vol. 60, no. 6, pp. 84–90, May 2017, doi: 10.1145/3065386. [27] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” in Proc. Int. Conf. Learn. Representations, 2015. [28] A. Kurakin, I. J. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” in Proc. Int. Conf. Learn. Representations, 2017. [29] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” in Proc. Int. Conf. Learn. Representations, 2018. [30] C. Szegedy et al., “Intriguing properties of neural networks,” in Proc. Int. Conf. Learn. Representations, 2014. [31] L. Chen, G. Zhu, Q. Li, and H. Li, “Adversarial example in remote sensing image recognition,” 2020, arXiv:1910.13222. [32] Y. Xu, B. Du, and L. Zhang, “Assessing the threat of adversarial examples on deep neural networks for remote sensing scene classification: Attacks and defenses,” IEEE Trans. Geosci. Remote Sens., vol. 59, no. 2, pp. 1604–1617, Feb. 2021, doi: 10.1109/ TGRS.2020.2999962. [33] Y. Xu and P. Ghamisi, “Universal adversarial examples in remote sensing: Methodology and benchmark,” IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1–15, Mar. 2022, doi: 10.1109/ TGRS.2022.3156392. [34] Y. Zhou, M. Kantarcioglu, B. Thuraisingham, and B. Xi, “Adversarial support vector machine learning,” in Proc. ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, 2012, pp. 1059–1067, doi: 10.1145/2339530.2339697. [35] T. Bai, H. Wang, and B. Wen, “Targeted universal adversarial examples for remote sensing,” Remote Sens., vol. 14, no. 22, Nov. 2022, Art. no. 5833, doi: 10.3390/rs14225833. [36] Y. Xu, B. Du, and L. Zhang, “Self-attention context network: Addressing the threat of adversarial attacks for hyperspectral image classification,” IEEE Trans. Image Process., vol. 30, pp. 8671–8685, Oct. 2021, doi: 10.1109/TIP.2021.3118977. [37] M. E. Paoletti, J. M. Haut, R. Fernandez-Beltran, J. Plaza, A. J. Plaza, and F. Pla, “Deep pyramidal residual networks for spectral–spatial hyperspectral image classification,” IEEE Trans.
Geosci. Remote Sens., vol. 57, no. 2, pp. 740–754, Feb. 2019, doi: 10.1109/TGRS.2018.2860125. [38] C. Shi, Y. Dang, L. Fang, Z. Lv, and M. Zhao, “Hyperspectral image classification with adversarial attack,” IEEE Geosci. Remote Sens. Lett., vol. 19, pp. 1–5, 2022, doi: 10.1109/LGRS.2021.3122170. [39] L. Chen, Z. Xu, Q. Li, J. Peng, S. Wang, and H. Li, “An empirical study of adversarial examples on remote sensing image scene classification,” IEEE Trans. Geosci. Remote Sens., vol. 59, no. 9, pp. 7419–7433, Jan. 2021, doi: 10.1109/TGRS.2021.3051641. [40] H. Li et al., “Adversarial examples for CNN-based SAR image classification: An experience study,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 1333–1347, 2021, doi: 10.1109/JSTARS.2020.3038683. [41] T. Bai, J. Luo, J. Zhao, B. Wen, and Q. Wang, “Recent advances in adversarial training for adversarial robustness,” in Proc. Int. Joint Conf. Artif. Intell., 2021, pp. 4312–4321. [42] A. Chan-Hon-Tong, G. Lenczner, and A. Plyer, “Demotivate adversarial defense in remote sensing,” in Proc. IEEE Int. Geosci. Remote Sens. Symp., 2021, pp. 3448–3451, doi: 10.1109/ IGARSS47720.2021.9554767. [43] B. Peng, B. Peng, J. Zhou, J. Xie, and L. Liu, “Scattering model guided adversarial examples for SAR target recognition: Attack and defense,” IEEE Trans. Geosci. Remote Sens., vol. 60, Oct. 2022, Art. no. 5236217, doi: 10.1109/TGRS.2022.3213305. [44] Y. Xu, H. Sun, J. Chen, L. Lei, G. Kuang, and K. Ji, “Robust remote sensing scene classification by adversarial self-supervised learning,” in Proc. IEEE Int. Geosci. Remote Sens. Symp., 2021, pp. 4936–4939, doi: 10.1109/IGARSS47720.2021.9553824. [45] G. Cheng, X. Sun, K. Li, L. Guo, and J. Han, “Perturbation-seeking generative adversarial networks: A defense framework for remote sensing image scene classification,” IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1–11, 2022, doi: 10.1109/TGRS.2021.3081421. [46] Y. Xu, W. Yu, and P. Ghamisi, “Task-guided denoising network for adversarial defense of remote sensing scene classification,” in Proc. Int. Joint Conf. Artif. Intell. Workshop, 2022, pp. 73–78. [47] L. Chen, J. Xiao, P. Zou, and H. Li, “Lie to me: A soft threshold defense method for adversarial examples of remote sensing images,” IEEE Geosci. Remote Sens. Lett., vol. 19, pp. 1–5, 2022, doi: 10.1109/LGRS.2021.3096244. [48] Z. Zhang, X. Gao, S. Liu, B. Peng, and Y. Wang, “Energybased adversarial example detection for SAR images,” Remote Sens., vol. 14, no. 20, Oct. 2022, Art. no. 5168, doi: 10.3390/ rs14205168. [49] Y. Zhang et al., “Adversarial patch attack on multi-scale object detection for UAV remote sensing images,” Remote Sens., vol. 14, no. 21, Oct. 2022, Art. no. 5298, doi: 10.3390/rs14215298. [50] X. Sun, G. Cheng, L. Pei, H. Li, and J. Han, “Threatening patch attacks on object detection in optical remote sensing images,” 2023, arXiv:2302.06060. [51] J.-C. Burnel, K. Fatras, R. Flamary, and N. Courty, “Generating natural adversarial remote sensing images,” IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1–14, 2021, doi: 10.1109/ TGRS.2021.3110601. [52] Y. Dong et al., “Boosting adversarial attacks with momentum,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018, pp. 9185–9193, doi: 10.1109/CVPR.2018.00957.
[53] C. Xie et al., “Improving transferability of adversarial examples with input diversity,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2019, pp. 2730–2739, doi: 10.1109/CVPR.2019.00284. [54] X. Sun, G. Cheng, L. Pei, and J. Han, “Query-efficient decision-based attack via sampling distribution reshaping,” Pattern Recognit., vol. 129, Sep. 2022, Art. no. 108728, doi: 10.1016/j.patcog.2022.108728. [55] X. Sun, G. Cheng, H. Li, L. Pei, and J. Han, “Exploring effective data for surrogate training towards black-box attack,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2022, pp. 15,355–15,364, doi: 10.1109/CVPR52688.2022.01492. [56] B. Deng, D. Zhang, F. Dong, J. Zhang, M. Shafiq, and Z. Gu, “Rust-style patch: A physical and naturalistic camouflage attacks on object detector for remote sensing images,” Remote Sens., vol. 15, no. 4, Feb. 2023, Art. no. 885, doi: 10.3390/rs15040885. [57] H. Xiao, B. Biggio, G. Brown, G. Fumera, C. Eckert, and F. Roli, “Is feature selection secure against training data poisoning?” in Proc. Int. Conf. Mach. Learn., 2015, pp. 1689–1698. [58] T. Gu, B. Dolan-Gavitt, and S. Garg, “BadNets: Identifying vulnerabilities in the machine learning model supply chain,” 2017, arXiv:1708.06733. [59] Y. Liu, Y. Xie, and A. Srivastava, “Neural trojans,” in Proc. IEEE Int. Conf. Comput. Des., 2017, pp. 45–48, doi: 10.1109/ICCD.2017.16. [60] Y. Li, Y. Jiang, Z. Li, and S.-T. Xia, “Backdoor learning: A survey,” IEEE Trans. Neural Netw. Learn. Syst., early access, 2022, doi: 10.1109/TNNLS.2022.3182979. [61] Y. Yang and S. Newsam, “Bag-of-visual-words and spatial extensions for land-use classification,” in Proc. SIGSPATIAL Int. Conf. Adv. Geographic Inf. Syst., 2010, pp. 270–279, doi: 10.1145/1869790.1869829. [62] E. Brewer, J. Lin, and D. Runfola, “Susceptibility & defense of satellite image-trained convolutional networks to backdoor attacks,” Inf. Sci., vol. 603, pp. 244–261, Jul. 2022, doi: 10.1016/j.ins.2022.05.004. [63] X. Chen, C. Liu, B. Li, K. Lu, and D. Song, “Targeted backdoor attacks on deep learning systems using data poisoning,” 2017, arXiv:1712.05526. [64] A. Nguyen and A. Tran, “WaNet–Imperceptible warping-based backdoor attack,” 2021, arXiv:2102.10369. [65] A. Turner, D. Tsipras, and A. Madry, “Label-consistent backdoor attacks,” 2019, arXiv:1912.02771. [66] A. Saha, A. Subramanya, and H. Pirsiavash, “Hidden trigger backdoor attacks,” in Proc. AAAI Conf. Artif. Intell., 2020, vol. 34, no. 7, pp. 11,957–11,965, doi: 10.1609/aaai.v34i07.6871. [67] Y. Li, Y. Li, B. Wu, L. Li, R. He, and S. Lyu, “Invisible backdoor attack with sample-specific triggers,” in Proc. IEEE Int. Conf. Comput. Vis., 2021, pp. 16,463–16,472, doi: 10.1109/ICCV48922.2021.01615. [68] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” 2014, arXiv:1409.1556. [69] E. Brewer, J. Lin, P. Kemper, J. Hennin, and D. Runfola, “Predicting road quality using high resolution satellite imagery: A transfer learning approach,” PLoS One, vol. 16, no. 7, Jul. 2021, Art. no. e0253370, doi: 10.1371/journal.pone.0253370.
JUNE 2023
[70] N. Dräger, Y. Xu, and P. Ghamisi, “Backdoor attacks for remote sensing data with wavelet transform,” 2022, arXiv: 2211.08044. [71] L. Chun-Lin, “A tutorial of the wavelet transform,” Dept. Elect. Eng., Nat. Taiwan Univ., Taipei, Taiwan, 2010. [72] K.-Y. Tsao, T. Girdler, and V. G. Vassilakis, “A survey of cyber security threats and solutions for UAV communications and flying ad-hoc networks,” Ad Hoc Netw., vol. 133, Aug. 2022, Art. no. 102894, doi: 10.1016/j.adhoc.2022.102894. [73] A. Rugo, C. A. Ardagna, and N. E. Ioini, “A security review in the UAVNet era: Threats, countermeasures, and gap analysis,” ACM Comput. Surv., vol. 55, no. 1, pp. 1–35, Jan. 2022, doi: 10.1145/3485272. [74] S. Islam, S. Badsha, I. Khalil, M. Atiquzzaman, and C. Konstantinou, “A triggerless backdoor attack and defense mechanism for intelligent task offloading in multi-UAV systems,” IEEE Internet Things J., vol. 10, no. 7, pp. 5719–5732, Apr. 2022, doi: 10.1109/JIOT.2022.3172936. [75] C. Beretas et al., “Smart cities and smart devices: The back door to privacy and data breaches,” Biomed. J. Sci. Technol. Res., vol. 28, no. 1, pp. 21,221–21,223, Jun. 2020, doi: 10.26717/ BJSTR.2020.28.004588. [76] S. Hashemi and M. Zarei, “Internet of Things backdoors: Resource management issues, security challenges, and detection methods,” Trans. Emerg. Telecommun. Technol., vol. 32, no. 2, Feb. 2021, Art. no. e4142, doi: 10.1002/ett.4142. [77] B. G. Doan, E. Abbasnejad, and D. C. Ranasinghe, “Februus: Input purification defense against trojan attacks on deep neural network systems,” in Proc. Annu. Comput. Secur. Appl. Conf., 2020, pp. 897–912, doi: 10.1145/3427228.3427264. [78] S. Ding, Y. Tian, F. Xu, Q. Li, and S. Zhong, “Poisoning attack on deep generative models in autonomous driving,” in Proc. EAI Secur. Commun., 2019, pp. 299–318. [79] P. Kumar, G. P. Gupta, and R. Tripathi, “TP2SF: A trustworthy privacy-preserving secured framework for sustainable smart cities by leveraging blockchain and machine learning,” J. Syst. Archit., vol. 115, 2021, Art. no. 101954, doi: 10.1016/j.sysarc.2020.101954. [80] Q. Liu et al., “A collaborative deep learning microservice for backdoor defenses in industrial IoT networks,” Ad Hoc Netw., vol. 124, Jan. 2022, Art. no. 102727, doi: 10.1016/j.adhoc.2021.102727. [81] Y. Wang, E. Sarkar, W. Li, M. Maniatakos, and S. E. Jabari, “Stop-and-go: Exploring backdoor attacks on deep reinforcement learning-based traffic congestion control systems,” IEEE Trans. Inf. Forensics Security, vol. 16, pp. 4772–4787, Sep. 2021, doi: 10.1109/TIFS.2021.3114024. [82] P. Tam, S. Math, C. Nam, and S. Kim, “Adaptive resource optimized edge federated learning in real-time image sensing classifications,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 10,929–10,940, Oct. 2021, doi: 10.1109/ JSTARS.2021.3120724. [83] D. Li, M. Wang, Z. Dong, X. Shen, and L. Shi, “Earth observation brain (EOB): An intelligent Earth observation system,” Geo-Spatial Inf. Sci., vol. 20, no. 2, pp. 134–140, Jun. 2017, doi: 10.1080/10095020.2017.1329314. JUNE 2023
IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE
[84] Q. Yang, Y. Liu, T. Chen, and Y. Tong, “Federated machine learning: Concept and applications,” ACM Trans. Intell. Syst. Technol., vol. 10, no. 2, pp. 1–19, Jan. 2019, doi: 10.1145/3298981. [85] C. Zhang, Y. Xie, H. Bai, B. Yu, W. Li, and Y. Gao, “A survey on federated learning,” Knowl.-Based Syst., vol. 216, Mar. 2021, Art. no. 106775, doi: 10.1016/j.knosys.2021.106775. [86] Q. Li et al., “A survey on federated learning systems: Vision, hype and reality for data privacy and protection,” IEEE Trans. Knowl. Data Eng., vol. 35, no. 4, pp. 3347–3366, Apr. 2023, doi: 10.1109/TKDE.2021.3124599. [87] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, “Federated learning: Challenges, methods, and future directions,” IEEE Signal Process. Mag., vol. 37, no. 3, pp. 50–60, May 2020, doi: 10.1109/ MSP.2020.2975749. [88] P. Kairouz et al., “Advances and open problems in federated learning,” Found. Trends Mach. Learn., vol. 14, nos. 1–2, pp. 1–210, Jun. 2021, doi: 10.1561/2200000083. [89] J. Mills, J. Hu, and G. Min, “Multi-task federated learning for personalised deep neural networks in edge computing,” IEEE Trans. Parallel Distrib. Syst., vol. 33, no. 3, pp. 630–641, Mar. 2022, doi: 10.1109/TPDS.2021.3098467. [90] L. Li, Y. Fan, M. Tse, and K.-Y. Lin, “A review of applications in federated learning,” Comput. Ind. Eng., vol. 149, Nov. 2020, Art. no. 106854, doi: 10.1016/j.cie.2020.106854. [91] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. Aguera y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in Proc. Int. Conf. Artif. Intell. Statist., PMLR, 2017, pp. 1273–1282. [92] B. Hu, Y. Gao, L. Liu, and H. Ma, “Federated region-learning: An edge computing based framework for urban environment sensing,” in Proc. IEEE Global Commun. Conf., 2018, pp. 1–7, doi: 10.1109/GLOCOM.2018.8647649. [93] Y. Gao, L. Liu, B. Hu, T. Lei, and H. Ma, “Federated region-learning for environment sensing in edge computing system,” IEEE Trans. Netw. Sci. Eng., vol. 7, no. 4, pp. 2192–2204, Aug. 2020, doi: 10.1109/TNSE.2020.3016035. [94] C. Xu and Y. Mao, “An improved traffic congestion monitoring system based on federated learning,” Information, vol. 11, no. 7, Jul. 2020, Art. no. 365, doi: 10.3390/info11070365. [95] M. Alazab, R. M. Swarna Priya, M. Parimala, P. K. R. Maddikunta, T. R. Gadekallu, and Q.-V. Pham, “Federated learning for cybersecurity: Concepts, challenges, and future directions,” IEEE Trans. Ind. Informat., vol. 18, no. 5, pp. 3501–3509, May 2022, doi: 10.1109/TII.2021.3119038. [96] Z. M. Fadlullah and N. Kato, “On smart IoT remote sensing over integrated terrestrial-aerial-space networks: An asynchronous federated learning approach,” IEEE Netw., vol. 35, no. 5, pp. 129–135, Sep./Oct. 2021, doi: 10.1109/MNET.101.2100125. [97] P. Chhikara, R. Tekchandani, N. Kumar, and S. Tanwar, “Federated learning-based aerial image segmentation for collisionfree movement and landing,” in Proc. ACM MobiCom Workshop Drone Assisted Wireless Commun. 5G Beyond, 2021, pp. 13–18, doi: 10.1145/3477090.3481051. [98] W. Lee, “Federated reinforcement learning-based UAV swarm system for aerial remote sensing,” Wireless Commun. Mobile Comput., early access, Jan. 2022, doi: 10.1155/2022/4327380.
83
[99] K. Cheng et al., “Secureboost: A lossless federated learning framework,” IEEE Intell. Syst., vol. 36, no. 6, pp. 87–98, Nov./ Dec. 2021, doi: 10.1109/MIS.2021.3082561. [100] A. Huang et al., “StarFL: Hybrid federated learning architecture for smart urban computing,” ACM Trans. Intell. Syst. Technol., vol. 12, no. 4, pp. 1–23, Aug. 2021, doi: 10.1145/3467956. [101] J. C. Jiang, B. Kantarci, S. Oktug, and T. Soyata, “Federated learning in smart city sensing: Challenges and opportunities,” Sensors, vol. 20, no. 21, Oct. 2020, Art. no. 6230, doi: 10.3390/s20216230. [102] Y. Chen, X. Qin, J. Wang, C. Yu, and W. Gao, “FedHealth: A federated transfer learning framework for wearable healthcare,” IEEE Intell. Syst., vol. 35, no. 4, pp. 83–93, Jul./Aug. 2020, doi: 10.1109/MIS.2020.2988604. [103] P. Atkinson and G. Foody, Uncertainty in Remote Sensing and GIS: Fundamentals. New York, NY, USA: Wiley, 2002, pp. 1–18. [104] M. Firestone et al., “Guiding principles for Monte Carlo analysis,” U.S. Environ. Protection Agency, Washington, DC, USA, 1997. [Online]. Available: https://www.epa.gov/risk/guidingprinciples-monte-carlo-analysis [105] G. Wang, G. Z. Gertner, S. Fang, and A. B. Anderson, “A methodology for spatial uncertainty analysis of remote sensing and GIS products,” Photogrammetric Eng. Remote Sens., vol. 71, no. 12, pp. 1423–1432, Dec. 2005, doi: 10.14358/PERS.71.12.1423. [106] BIPM, IEC, IFCC, ILAC, ISO, IUPAC, IUPAP, and OIML, Guide to the Expression of Uncertainty in Measurement. Geneva, Switzerland: International Organization for Standardization, 1995. [107] A. Povey and R. Grainger, “Known and unknown unknowns: Uncertainty estimation in satellite remote sensing,” Atmos. Meas. Techn., vol. 8, no. 11, pp. 4699–4718, Nov. 2015, doi: 10.5194/amt-8-4699-2015. [108] A. M. Lechner, W. T. Langford, S. A. Bekessy, and S. D. Jones, “Are landscape ecologists addressing uncertainty in their remote sensing data?” Landscape Ecology, vol. 27, no. 9, pp. 1249– 1261, Sep. 2012, doi: 10.1007/s10980-012-9791-7. [109] D. Tuia, C. Persello, and L. Bruzzone, “Domain adaptation for the classification of remote sensing data: An overview of recent advances,” IEEE Geosci. Remote Sens. Mag., vol. 4, no. 2, pp. 41–57, Jun. 2016, doi: 10.1109/MGRS.2016.2548504. [110] B. Benjdira, Y. Bazi, A. Koubaa, and K. Ouni, “Unsupervised domain adaptation using generative adversarial networks for semantic segmentation of aerial images,” Remote Sens., vol. 11, no. 11, May 2019, Art. no. 1369, doi: 10.3390/rs11111369. [111] H. Jabbar and R. Z. Khan, “Methods to avoid over-fitting and under-fitting in supervised machine learning (Comparative Study),” in Proc. Comput. Sci. Commun. Instrum. Devices, 2015, pp. 163–172, doi: 10.3850/978-981-09-5247-1_017. [112] Z. Yin, M. Amaru, Y. Wang, L. Li, and J. Caers, “Quantifying uncertainty in downscaling of seismic data to high-resolution 3-D lithological models,” IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1–12, Feb. 2022, doi: 10.1109/TGRS.2022.3153934. [113] J. Gawlikowski, S. Saha, A. Kruspe, and X. X. Zhu, “An advanced Dirichlet prior network for out-of-distribution detection in remote sensing,” IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1–19, Jan. 2022, doi: 10.1109/TGRS.2022.3140324. [114] J. Gawlikowski, S. Saha, A. Kruspe, and X. X. Zhu, “Towards out-of-distribution detection for remote sensing,” in Proc.
84
IEEE Int. Geosci. Remote Sens. Symp., 2021, pp. 8676–8679, doi: 10.1109/IGARSS47720.2021.9553266. [115] W. Feng, H. Sui, J. Tu, W. Huang, C. Xu, and K. Sun, “A novel change detection approach for multi-temporal high-resolution remote sensing images based on rotation forest and coarseto-fine uncertainty analyses,” Remote Sens., vol. 10, no. 7, Jun. 2018, Art. no. 1015, doi: 10.3390/rs10071015. [116] K. Tan, Y. Zhang, X. Wang, and Y. Chen, “Object-based change detection using multiple classifiers and multi-scale uncertainty analysis,” Remote Sens., vol. 11, no. 3, Feb. 2019, Art. no. 359, doi: 10.3390/rs11030359. [117] T. Schroeder, M. Schaale, J. Lovell, and D. Blondeau-Patissier, “An ensemble neural network atmospheric correction for Sentinel-3 OLCI over coastal waters providing inherent model uncertainty estimation and sensor noise propagation,” Remote Sens. Environ., vol. 270, Mar. 2022, Art. no. 112848, doi: 10.1016/j.rse.2021.112848. [118] C. Dechesne, P. Lassalle, and S. Lefèvre, “Bayesian U-Net: Estimating uncertainty in semantic segmentation of Earth observation images,” Remote Sens., vol. 13, no. 19, Sep. 2021, Art. no. 3836, doi: 10.3390/rs13193836. [119] M. Werther et al., “A Bayesian approach for remote sensing of chlorophyll-a and associated retrieval uncertainty in oligotrophic and mesotrophic lakes,” Remote Sens. Environ., vol. 283, Dec. 2022, Art. no. 113295, doi: 10.1016/j.rse.2022.113295. [120] B. W. Allred et al., “Improving Landsat predictions of rangeland fractional cover with multitask learning and uncertainty,” Methods Ecology Evol., vol. 12, no. 5, pp. 841–849, Jan. 2021, doi: 10.1111/2041-210X.13564. [121] Y. Ma, Z. Zhang, Y. Kang, and M. Özdog˘an, “Corn yield prediction and uncertainty analysis based on remotely sensed variables using a Bayesian neural network approach,” Remote Sens. Environ., vol. 259, Jun. 2021, Art. no. 112408, doi: 10.1016/j. rse.2021.112408. [122] D. Choi, C. J. Shallue, Z. Nado, J. Lee, C. J. Maddison, and G. E. Dahl, “On empirical comparisons of optimizers for deep learning,” 2019, arXiv:1910.05446. [123] H. Robbins and S. Monro, “A stochastic approximation method,” Ann. Math. Statist., vol. 22, no. 3, pp. 400–407, Sep. 1951, doi: 10.1214/aoms/1177729586. [124] Q. Wang, Y. Ma, K. Zhao, and Y. Tian, “A comprehensive survey of loss functions in machine learning,” Ann. Data Sci., vol. 9, no. 2, pp. 187–212, Apr. 2022, doi: 10.1007/s40745-020-00253-5. [125] F. He, T. Liu, and D. Tao, “Control batch size and learning rate to generalize well: Theoretical and empirical evidence,” in Proc. Neural Inf. Process. Syst., 2019, vol. 32, pp. 1143–1152. [126] C. Persello, “Interactive domain adaptation for the classification of remote sensing images using active learning,” IEEE Geosci. Remote Sens. Lett., vol. 10, no. 4, pp. 736–740, Jul. 2013, doi: 10.1109/LGRS.2012.2220516. [127] H. Feng, Z. Miao, and Q. Hu, “Study on the uncertainty of machine learning model for earthquake-induced landslide susceptibility assessment,” Remote Sens., vol. 14, no. 13, Jun. 2022, Art. no. 2968, doi: 10.3390/rs14132968. [128] R. Szeliski, Bayesian Modeling of Uncertainty in Low-Level Vision, vol. 79. New York, NY, USA: Springer Science & Business Media, 2012. IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE
JUNE 2023
[129] A. Malinin and M. Gales, “Predictive uncertainty estimation via prior networks,” in Proc. Neural Inf. Process. Syst., 2018, pp. 7047–7058. [130] M. Sensoy, L. Kaplan, and M. Kandemir, “Evidential deep learning to quantify classification uncertainty,” in Proc. Neural Inf. Process. Syst., 2018, pp. 1–11. [131] M. Moz˙ejko, M. Susik, and R. Karczewski, “Inhibited softmax for uncertainty estimation in neural networks,” 2018, arXiv:1810.01861. [132] J. Gawlikowski et al., “A survey of uncertainty in deep neural networks,” 2021, arXiv:2107.03342. [133] R. Seoh, “Qualitative analysis of monte Carlo dropout,” 2020, arXiv:2007.01720. [134] N. Hochgeschwender et al., “Evaluating uncertainty estimation methods on 3D semantic segmentation of point clouds,” 2020, arXiv:2007.01787. [135] P. P. Angelov, E. A. Soares, R. Jiang, N. I. Arnold, and P. M. Atkinson, “Explainable artificial intelligence: An analytical review,” Wiley Interdisciplinary Rev. Data Mining Knowl. Discovery, vol. 11, no. 5, Sep./Oct. 2021, Art. no. e1424, doi: 10.1002/widm.1424. [136] A. Adadi and M. Berrada, “Peeking inside the black-box: A survey on explainable artificial intelligence (XAI),” IEEE Access, vol. 6, pp. 52,138–52,160, Sep. 2018, doi: 10.1109/ACCESS.2018.2870052. [137] I. Kakogeorgiou and K. Karantzalos, “Evaluating explainable artificial intelligence methods for multi-label deep learning classification tasks in remote sensing,” Int. J. Appl. Earth Observ. Geoinf., vol. 103, Dec. 2021, Art. no. 102520, doi: 10.1016/j. jag.2021.102520. [138] L. H. Gilpin, D. Bau, B. Z. Yuan, A. Bajwa, M. Specter, and L. Kagal, “Explaining explanations: An overview of interpretability of machine learning,” in Proc. IEEE Int. Conf. Data Sci. Adv. Anal., 2018, pp. 80–89, doi: 10.1109/DSAA.2018.00018. [139] K. Simonyan, A. Vedaldi, and A. Zisserman, “Deep inside convolutional networks: Visualising image classification models and saliency maps,” in Proc. Int. Conf. Learn. Representations Workshops, 2013. [140] A. Das and P. Rad, “Opportunities and challenges in explainable artificial intelligence (XAI): A survey,” 2020, arXiv:2006.11371. [141] M. T. Ribeiro, S. Singh, and C. Guestrin, “Why should I trust you? Explaining the predictions of any classifier,” in Proc. ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, 2016, pp. 1135–1144. [142] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, “Learning deep features for discriminative localization,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 2921– 2929, doi: 10.1109/CVPR.2016.319. [143] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: Visual explanations from deep networks via gradient-based localization,” in Proc. IEEE Int. Conf. Comput. Vis., 2017, pp. 618–626, doi: 10.1109/ICCV.2017.74. [144] A. Chattopadhay, A. Sarkar, P. Howlader, and V. N. Balasubramanian, “Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks,” in Proc. IEEE Winter Conf. Appl. Comput. Vis., 2018, pp. 839–847.
JUNE 2023
IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE
[145] E. S. Maddy and S. A. Boukabara, “MIIDAPS-AI: An explainable machine-learning algorithm for infrared and microwave remote sensing and data assimilation preprocessing - Application to LEO and GEO sensors,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 8566–8576, Aug. 2021, doi: 10.1109/JSTARS.2021.3104389. [146] S. S. Matin and B. Pradhan, “Earthquake-induced buildingdamage mapping using explainable AI (X AI),” Sensors, vol. 21, no. 13, Jun. 2021, Art. no. 4489, doi: 10.3390/s21134489. [147] S. M. Lundberg and S.-I. Lee, “A unified approach to interpreting model predictions,” in Proc. Neural Inf. Process. Syst., 2017, vol. 30, pp. 4768–4777. [148] A. Temenos, I. N. Tzortzis, M. Kaselimi, I. Rallis, A. Doulamis, and N. Doulamis, “Novel insights in spatial epidemiology utilizing explainable AI (XAI) and remote sensing,” Remote Sens., vol. 14, no. 13, Jun. 2022, Art. no. 3074, doi: 10.3390/rs14133074. [149] A. Temenos, M. Kaselimi, I. Tzortzis, I. Rallis, A. Doulamis, and N. Doulamis, “Spatio-temporal interpretation of the COVID-19 risk factors using explainable AI,” in Proc. IEEE Int. Geosci. Remote Sens. Symp., 2022, pp. 7705–7708, doi: 10.1109/ IGARSS46834.2022.9884922. [150] Y. Xu, B. Du, and L. Zhang, “Beyond the patchwise classification: Spectral-spatial fully convolutional networks for hyperspectral image classification,” IEEE Trans. Big Data, vol. 6, no. 3, pp. 492–506, Sep. 2020, doi: 10.1109/TBDATA.2019.2923243. [151] M. Onishi and T. Ise, “Explainable identification and mapping of trees using UAV RGB image and deep learning,” Scientific Rep., vol. 11, no. 1, pp. 1–15, Jan. 2021, doi: 10.1038/s41598020-79653-9. [152] X. Huang, Y. Sun, S. Feng, Y. Ye, and X. Li, “Better visual interpretation for remote sensing scene classification,” IEEE Geosci. Remote Sens. Lett., vol. 19, pp. 1–5, Dec. 2021, doi: 10.1109/ LGRS.2021.3132920. [153] X. Gu, P. P. Angelov, C. Zhang, and P. M. Atkinson, “A semisupervised deep rule-based approach for complex satellite sensor image analysis,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 5, pp. 2281–2292, Dec. 2020, doi: 10.1109/ TPAMI.2020.3048268. [154] X. Gu, C. Zhang, Q. Shen, J. Han, P. P. Angelov, and P. M. Atkinson, “A self-training hierarchical prototype-based ensemble framework for remote sensing scene classification,” Inf. Fusion, vol. 80, pp. 179–204, Apr. 2022, doi: 10.1016/j. inffus.2021.11.014. [155] N. I. Arnold, P. Angelov, and P. M. Atkinson, “An improved explainable point cloud classifier (XPCC),” IEEE Trans. Artif. Intell., vol. 4, no. 1, pp. 71–80, Feb. 2023, doi: 10.1109/TAI.2022.3150647. [156] D. Tuia, R. Roscher, J. D. Wegner, N. Jacobs, X. Zhu, and G. Camps-Valls, “Toward a collective agenda on AI for Earth science data analysis,” IEEE Geosci. Remote Sens. Mag., vol. 9, no. 2, pp. 88–104, Jun. 2021, doi: 10.1109/MGRS.2020.3043504. [157] A. Ghorbani, A. Abid, and J. Zou, “Interpretation of neural networks is fragile,” in Proc. AAAI Conf. Artif. Intell., 2019, vol. 33, no. 1, pp. 3681–3688, doi: 10.1609/aaai.v33i01.33013681. GRS
85
PERSPECTIVES
NIRAV PATEL
Generative Artificial Intelligence and Remote Sensing
A perspective on the past and the future
The first phase of 2023 has been marked by an explosion of interest in generative AI systems, which generate content. This type of machine learning promises to enable the creation of synthetic data and outputs in many different modalities. OpenAI's ChatGPT has certainly taken the world by storm and opened discourse on how the technology should be used. Historically, generative models are certainly not new, dating back to the 1950s, with hidden Markov models and Gaussian mixture models [1], [2], [3]. It is the recent development of deep learning that has made generative models broadly useful. In the early days of deep generative models, N-gram language modeling was utilized to generate sentences in natural language processing (NLP) [4]. This modeling did not scale well to generating long sentences, and hence, recurrent neural networks (RNNs) were introduced to deal with longer dependencies [5]. RNNs were followed by the development of long short-term memory [6] and gated recurrent unit methods, which leveraged gating mechanisms to control memory usage during training [7]. In the computer vision arena (more aligned with remote sensing), traditional image generation algorithms utilized techniques such as texture mapping [8] and texture synthesis [9]. These methods were very limited and could not generate complex and diverse images.
The views expressed in this publication reflect those of the author and do not necessarily reflect the official policy or position of the U.S. Government or the Department of Defense (DoD).
Digital Object Identifier 10.1109/MGRS.2023.3275984 Date of current version: 30 June 2023
The introduction of generative adversarial networks (GANs) [10] and variational autoencoders [11] in the past decade or so has allowed for more control over the image generation process and for the generation of high-resolution images.

Generative models across modalities advanced in step with the introduction of the transformer architecture [12]. Large language models, such as the generative pretrained transformer (GPT), adopt this architecture as the primary building block; it initially had significant utility in the NLP world before later modifications allowed for application to image-based streams of information [13], [14], [15], [16], [17]. Transformers consist of an encoder and a decoder: the encoder takes in an input sequence and generates hidden representations, while the decoder attends over those representations through multihead attention and feedforward NN layers [1]. See Figure 1 for an NLP example of a sentence being translated from English to Japanese.

FIGURE 1. An NLP translation of a sentence from English to Japanese [18].
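To make the encoder–decoder split concrete, here is a minimal sketch in PyTorch (the one code convention used throughout this piece). The vocabulary sizes, dimensions, and random token IDs are illustrative placeholders, not anything taken from the cited works:

```python
import torch
import torch.nn as nn

class TinyTranslator(nn.Module):
    """Toy sequence-to-sequence model: the encoder ingests the source
    sentence; the decoder attends over the encoder's hidden states."""

    def __init__(self, src_vocab=1000, tgt_vocab=1000, d_model=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, d_model)
        self.tgt_emb = nn.Embedding(tgt_vocab, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True)
        self.out = nn.Linear(d_model, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Causal mask: each target position may only attend to earlier ones.
        tgt_mask = self.transformer.generate_square_subsequent_mask(
            tgt_ids.size(1))
        hidden = self.transformer(self.src_emb(src_ids),
                                  self.tgt_emb(tgt_ids),
                                  tgt_mask=tgt_mask)
        return self.out(hidden)  # per-position logits over the target vocab

model = TinyTranslator()
src = torch.randint(0, 1000, (1, 7))  # e.g., "Optimus Prime is a cool robot"
tgt = torch.randint(0, 1000, (1, 5))  # partial translation generated so far
logits = model(src, tgt)              # shape (1, 5, tgt_vocab)
```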
The emergence of these techniques has allowed for the creation of foundation models, which are the technical scaffolding behind generative AI capabilities. Foundation models, such as ChatGPT, learn from unlabeled datasets, which saves a significant amount of time and the expense of manual annotation and human attention. However, there is a reason why the most well-resourced companies in the world have made an attempt at generating these models [19]. First, you need the best computer scientists and engineers to maintain and tweak foundation models, and second, when these models are trained on data from the whole Internet, the computational cost is not insignificant. OpenAI's GPT-3 was trained on roughly 45 TB of text data (equivalent to 1 million feet of bookshelf space), which cost several million dollars (estimated) [19].

With remote sensing applications, anecdotally, I have witnessed the rise of the use of GANs over the past few years. This deep learning technique, as mentioned before, is an NN architecture that conducts the training process as a competition between a generator and a discriminator to produce new data conforming to learned patterns. Since GANs are able to learn from remote sensing data without supervision, some applications that the community has found useful include (but are not limited to) data generation/augmentation, superresolution, panchromatic sharpening, haze removal and restoration, and cloud removal [20], [21], [22], [23]. I strongly believe that the ever-increasing availability of remotely sensed data and the availability of relatively robust computational power in local and distributed (i.e., cloud)-based environments will make GANs only more useful to the remote sensing community in the coming years and may even lead to some bespoke foundation models, especially with the open source remote sensing efforts that Google [25], Microsoft [26], and Amazon [27] are funding.
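The adversarial game itself fits in a few lines. The sketch below is purely illustrative of the two-player training loop: tiny fully connected networks and random tensors stand in for the convolutional networks and real remote sensing patches an actual application would use.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator; a real application would use
# convolutional networks over image patches.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    real = torch.randn(8, 32)     # placeholder for a batch of real samples
    fake = G(torch.randn(8, 16))  # generator maps noise to candidate samples

    # Discriminator learns to score real samples 1 and generated samples 0.
    d_loss = (bce(D(real), torch.ones(8, 1))
              + bce(D(fake.detach()), torch.zeros(8, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator learns to make the discriminator score its output as real.
    g_loss = bce(D(fake), torch.ones(8, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```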
In other remote sensing areas, such as image segmentation, the foundation models are already here (within days of writing this piece). In what could be an example for other remote sensing foundation models, Meta AI released Segment Anything [28], which is a new task, model, and dataset for image segmentation. Meta claims to have "built the largest segmentation dataset to date, with over 1 billion masks on 11M licensed and privacy respected images." Social media has many remote sensing companies, scientists, and enthusiasts alike ingesting satellite imagery into the model and yielding results, with varying utility. Meta's paper provides more technical detail on how the foundation model is architected (Figure 2), but in my opinion, the true uniqueness and value lie in how massive the dataset is and how well labeled it is in comparison to other image segmentation datasets of its kind.

FIGURE 2. Technical detail of the Segment Anything architecture [28].

The authors of Segment Anything admit that their model can "miss fine structures, hallucinates small disconnected components at times, and does not produce boundaries as crisply as more computationally intensive methods." They posit that more dedicated interactive segmentation methods would outperform their model when many more points are provided.
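For orientation, this is roughly what prompting the released model on a satellite image chip looks like with Meta's open source `segment_anything` package. The checkpoint and image file names are placeholders, and the single point prompt marks one pixel of the object of interest:

```python
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Hypothetical paths: a downloaded ViT-B checkpoint and an RGB image chip.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = np.array(Image.open("sentinel2_chip_rgb.png").convert("RGB"))
predictor.set_image(image)  # one-off embedding of the whole chip

# A single foreground point prompt at pixel (x=128, y=96); label 1 = object.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[128, 96]]),
    point_labels=np.array([1]),
    multimask_output=True)    # several candidate masks, each with a score
best_mask = masks[scores.argmax()]  # boolean H x W array
```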
My prediction for the future is that as we see the computer vision world make more investments in foundation models related to image processing, the remote sensing and geosciences world will stand to benefit from large investments by the world's well-resourced tech companies. Advancements in computer vision models, due to the development of foundation models, however, will not always be tailored toward the needs of remote sensing. Closely examining the data being fed into these foundation models, and how exactly data are being labeled within these models, will allow discerning remote sensing practitioners to get the most value out of using such computer vision models. Hence, a major caution to users of foundation models for remote sensing applications is the same caution that applies for applications of foundation models to other types of machine learning applications: the limits of utility for outputs are tied closely to the quantity and quality of the labeled data associated with the model. Even the most sophisticated foundation models cannot escape the maxim of "garbage in, garbage out."

Well-resourced technology companies also have their monetary interests that ultimately influence the foundation models that they create. It is important for remote sensing practitioners to understand this dynamic.
For example, in exchange for providing access to lightweight and easy-to-access interfaces, such as ChatGPT, all the data that are put in by the user can ultimately be utilized by OpenAI for other purposes. While the service does not cost any money for the user, ChatGPT still will gain insight from your inquiry to make itself better. Indeed, nothing truly comes for free, especially with the use of foundation models and the user interfaces associated with them.

Finally, it is worth discussing the nefarious uses to which this technology can be put, especially in the context of remote sensing. Synthetic data generation could be utilized, for example, to create fake satellite images that give an undiscerning user the impression of information and evidence of something that does not exist, and it could hide potential evidence. Consider an example of a country trying to hide changes around an area (land surface changes) to mask human rights violations. Synthetic data could be provided in place of the real satellite image that was supposed to appear in a data feed accessed by the public, giving a false sense of the reality of the situation. It is, thus, extremely important that the uses of synthetic data are also well defined and regulated by the community of remote sensing practitioners. Creating methods to identify synthetic remote sensing data would be the most effective in the near term, in my opinion. I also believe that synthetic data will be extremely useful in combination with real remote sensing data to train remote sensing models that aim at "few-shot" circumstances (i.e., detecting rare objects).

Ultimately, the adoption of an extremely novel and effective technology in its nascent stages within a community requires a focus on the ethical implications of the use of the technology in each circumstance. The same holds true for our field of remote sensing, and I have confidence in our community to set the appropriate guardrails on the limits of use of this technology.

ACKNOWLEDGEMENT
None of this article was written by generative artificial intelligence (AI).
AUTHOR INFORMATION
Nirav Patel ([email protected]) is an affiliate faculty member in the Department of Geography at the University of Florida, Gainesville, FL 32611 USA. He is also a senior scientist in the Office of the Secretary of Defense at the U.S. Defense Innovation Unit, and a program manager for open-source/non-International Traffic in Arms Regulations (ITAR) restricted remote sensing efforts.

REFERENCES
[1] Y. Cao et al., "A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT," 2023, arXiv:2303.04226.
[2] K. Knill and S. Young, "Hidden Markov models in speech and language processing," in Corpus-Based Methods in Language and Speech Processing, S. Young and G. Bloothooft, Eds. Dordrecht, The Netherlands: Springer, 1997, pp. 27–68.
[3] D. A. Reynolds, "Gaussian mixture models," in Encyclopedia of Biometrics, vol. 741, S. Z. Li and A. Jain, Eds. Boston, MA, USA: Springer, 2009, pp. 659–663.
[4] Y. Bengio, R. Ducharme, and P. Vincent, "A neural probabilistic language model," in Proc. Adv. Neural Inf. Process. Syst., 2000, vol. 13, pp. 932–938.
[5] T. Mikolov, S. Kombrink, L. Burget, J. Černocký, and S. Khudanpur, "Extensions of recurrent neural network language model," in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), May 2011, pp. 5528–5531, doi: 10.1109/ICASSP.2011.5947611.
[6] A. Graves, "Long short-term memory," in Supervised Sequence Labelling With Recurrent Neural Networks. Berlin, Germany: Springer-Verlag, 2012, pp. 37–45.
[7] R. Dey and F. M. Salem, "Gate-variants of gated recurrent unit (GRU) neural networks," in Proc. 60th IEEE Int. Midwest Symp. Circuits Syst. (MWSCAS), Aug. 2017, pp. 1597–1600, doi: 10.1109/MWSCAS.2017.8053243.
[8] P. S. Heckbert, "Survey of texture mapping," IEEE Comput. Graph. Appl., vol. 6, no. 11, pp. 56–67, Nov. 1986, doi: 10.1109/MCG.1986.276672.
[9] A. A. Efros and T. K. Leung, "Texture synthesis by non-parametric sampling," in Proc. 7th IEEE Int. Conf. Comput. Vision, Sep. 1999, vol. 2, pp. 1033–1038, doi: 10.1109/ICCV.1999.790383.
[10] I. Goodfellow et al., "Generative adversarial networks," Commun. ACM, vol. 63, no. 11, pp. 139–144, Oct. 2020, doi: 10.1145/3422622.
[11] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," 2013, arXiv:1312.6114.
[12] A. Vaswani et al., "Attention is all you need," in Proc. 30th Adv. Neural Inf. Process. Syst., 2017, pp. 6000–6010.
[13] T. Brown et al., "Language models are few-shot learners," in Proc. Adv. Neural Inf. Process. Syst., 2020, vol. 33, pp. 1877–1901.
[14] A. Ramesh et al., "Zero-shot text-to-image generation," in Proc. Int. Conf. Mach. Learn., PMLR, Jul. 2021, pp. 8821–8831.
[15] M. Lewis et al., "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension," 2019, arXiv:1910.13461.
[16] A. Dosovitskiy et al., "An image is worth 16x16 words: Transformers for image recognition at scale," 2020, arXiv:2010.11929.
[17] Z. Liu et al., "Swin transformer: Hierarchical vision transformer using shifted windows," in Proc. IEEE/CVF Int. Conf. Comput. Vision, 2021, pp. 10,012–10,022.
[18] D. J. Rogel-Salazar, "Transformers models in machine learning: Self-attention to the rescue," Domino. Accessed: Apr. 13, 2023. [Online]. Available: https://www.dominodatalab.com/blog/transformers-self-attention-to-the-rescue
[19] "What is generative AI?" McKinsey & Company. Accessed: Apr. 13, 2023. [Online]. Available: https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai
[20] Y. Weng et al., "Temporal co-attention guided conditional generative adversarial network for optical image synthesis," Remote Sens., vol. 15, no. 7, Mar. 2023, Art. no. 1863, doi: 10.3390/rs15071863.
(continued on p. 100)
TECHNICAL COMMITTEES
DALTON LUNGA, SILVIA ULLO, UJJWAL VERMA, GEORGE PERCIVALL, FABIO PACIFICI, AND RONNY HÄNSCH
Analysis-Ready Data and FAIR-AI—Standardization of Research Collaboration and Transparency Across Earth-Observation Communities
The IEEE Geoscience and Remote Sensing Society (GRSS) Image Analysis and Data Fusion Technical Committee (IADF TC) serves as a global, multidisciplinary network for geospatial image analysis, e.g., machine learning (ML), image and signal processing, and computer vision (CV). The IADF is also responsible for defining the directions of the data fusion contests while paying attention to remote sensing (RS) data's multisensor, multiscale, and multitemporal integration challenges. Among recent activities, the IADF is collaborating with the GRSS TC on Standards for Earth Observation (GSEO) and other groups to promote two complementary initiatives: 1) reducing the overhead cost of preprocessing raw data and 2) improving infrastructure to support the community reuse of Earth-observation (EO) data and artificial intelligence (AI) tools. The EO community has engaged initiative 1) via a series of workshops on analysis-ready data (ARD) [1], which have laid bare that current best practices are provider specific [2]. Engagements in developing initiative 2) are in the early stages; they lack a guiding framework similar to findable, accessible, interoperable, reusable (FAIR) [3], outlining standardized principles for best data stewardship and governance practices. Developing templates and tools for consistently formatting/preprocessing data within a discipline is becoming
Digital Object Identifier 10.1109/MGRS.2023.3267904 Date of current version: 30 June 2023
common in many domains. It is a practice that is helping to increase research transparency and collaboration. Although not broadly adopted in EO, such a practice could enable data and derived AI tools to become easily accessible and reusable. However, the immense diversity of modalities and sensing instruments across EO/RS makes research development and adoption challenging. This article provides a first outlook on guidelines for the EO/RS communities to create/adapt ARD data formats that integrate with various AI workflows. Such best practices have the potential to expand the impacts of image analysis and data fusion with AI and make it simpler for data providers to provide data that are more interoperable and reusable in cross-modal applications.

WHY STANDARD ARD, FAIR DATA, AND AI EO SERVICES?
Poor best practices and the lack of standardized templates can present barriers to advancing scientific research and knowledge generation in EO. For example, synthesizing cross-modal data can be extremely time consuming and carries huge overhead costs when preprocessing raw data. In addition, with AI tools being increasingly pervasive across humanitarian applications, the time needed to generate insights is becoming critical, and reusability is preferred. At the same time, duplication of efforts is costly and hinders fast progress.
Standards for data have been proposed as essential elements to advance EO sciences. The Open Geospatial Consortium's Sensor Observation Service standard defines a web service interface that allows pulling observations, sensor metadata, and representations of observed features [4]. Such accredited standards help outline broad governing protocols but can take longer to build governing processes and consensus. In contrast, grassroots efforts [5], [6] can foster efficient adaptation of best practices to harmonize cross-modal/-sensor data, creating datasheets and model cards for AI tool types, with working groups helping to maintain cross pollination of taxonomies.

ARD
With the increased availability of observations from multiple EO missions, merging these observations allows for better temporal coverage and higher spatial resolutions. Specifically, ARD can be created from observations across modalities, and the ARD could be processed to ensure easy utilization for AI-based EO applications. Standard ARD components include atmospheric compensation, orthorectification, pansharpening, color balancing, bundle block adjustment, and grid alignment; a sketch of the last of these steps follows. In its advancement, future ARD processes could see radiometric and geometric adjustments applied to data across modalities to create "harmonized" data [7], enabling study of the evolution of a given location through time using information from multiple modalities.
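As a concrete illustration of grid alignment, the following sketch, assuming the open source `rasterio` package and placeholder file names, resamples one scene onto the grid (coordinate reference system, transform, and shape) of a reference scene so that the two can be stacked pixel by pixel:

```python
import numpy as np
import rasterio
from rasterio.warp import reproject, Resampling

# Hypothetical inputs: a reference scene whose grid we keep, and a source
# scene (possibly from another sensor) to be aligned to that grid.
with rasterio.open("reference_scene.tif") as ref, \
        rasterio.open("source_scene.tif") as src:
    aligned = np.zeros((ref.height, ref.width), dtype=np.float32)
    reproject(
        source=rasterio.band(src, 1),  # first band of the source scene
        destination=aligned,
        dst_transform=ref.transform,   # snap onto the reference grid
        dst_crs=ref.crs,
        resampling=Resampling.bilinear)

    # Write the co-registered band; both scenes now share one pixel grid.
    profile = ref.profile.copy()
    profile.update(count=1, dtype="float32")
    with rasterio.open("source_aligned.tif", "w", **profile) as dst:
        dst.write(aligned, 1)
```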
Figure 1 shows the envisioned standardization process informed by ARD and FAIR principles [8]. For example, suitable cross-modal data formats could be established to create AI-ready datasets compatible with open source ML frameworks, accelerating the path from image analysis and data fusion research prototyping to production deployment. As FAIR principles continue to pave the way for the state of practice in other scientific domains, the EO community could benefit by following suit and introducing FAIR-EO definitions to guide research transparency and collaboration.

FIGURE 1. ARD-motivated FAIR EO data and FAIR-AI model principles integrated into a common EO process. STAC: SpatioTemporal Asset Catalog; DASE: Data and Algorithm Standard Evaluation; EOD: Earth observation database; DDP: Distributed Data Parallel.

TOWARD CROSS-MODAL ARD AND FAIR DATA PRINCIPLES
The aforementioned shortcomings present an opportunity to collaborate toward a concise and measurable set of cross-modal FAIR ARD and FAIR model principles, ultimately advancing image analysis and data fusion algorithmic impacts at scale. The overarching goal is to harness best-practice ARD developments to minimize user burden by harmonizing heterogeneous imagery data and promoting FAIR principles for both EO data and AI products. A joint IADF-GSEO paper submitted to the 2023 International Geoscience and Remote Sensing Symposium revisits current best practices and outlines guidelines for advancing EO data and derivative AI products for broader community use. The remaining work needs to start by revisiting common ARD essentials and aim to forge their evolution with FAIR principles to support cross-modal ML and CV opportunities emerging as central aspects for solving complex EO challenges. An integrated framework (presented as a general scheme in Figure 1) that combines ARD and FAIR for modernizing the state of practice in AI for EO must be contextualized.
The framework will depend on several building blocks, including software scripts that demonstrate data harmonization, the creation of datasheets for AI-ready datasets, the creation of model cards for FAIR-AI models, measurement/validation metrics, and standardized environments for model deployment.

FAIR-AI MODELS
Recent developments from the ML community [9], [10] could provide initial building blocks to advance metadata standardization for EO applications [11], [12]. The ideas of developing datasheets [9] for data and model cards [10] for models have been introduced as mechanisms to organize the essential facts about datasets and ML models in a structured way. Model cards are short documents accompanying trained ML models that provide a benchmarked evaluation in various conditions. For EO/RS, such conditions could include different cultural and geographic locations, seasonality, sensor resolution, and object feature types relevant to the intended application domain. The RS community could aim to develop model cards to catalog model performance characteristics, intended use cases, potential pitfalls, or other information to help users evaluate suitability or compose detailed queries to match their application contexts. Similarly, each data source should be developed with a datasheet documenting its motivation, composition, collection process, recommended uses, and models generated from the data; a minimal sketch of such a card follows.
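As a rough illustration of what such a model card could carry for an EO model, the sketch below keeps the card as a plain Python dictionary that can be serialized to JSON next to the model weights. Every field value is hypothetical:

```python
import json

# Every field value below is hypothetical; the point is the structure.
model_card = {
    "model": "building-footprint-segmenter-v0",
    "intended_use": "footprint mapping from ~0.5 m optical imagery",
    "training_data": "see datasheet: https://example.org/datasheet.json",
    "evaluation": [  # benchmarked under varied conditions, as argued above
        {"region": "arid, North Africa", "season": "dry", "iou": 0.71},
        {"region": "temperate, Central Europe", "season": "summer", "iou": 0.83},
    ],
    "known_pitfalls": "untested on off-nadir acquisitions and SAR inputs",
}
print(json.dumps(model_card, indent=2))  # serialize next to the weights
```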
STANDARDS FOR EVALUATION
Evaluation metrics provide an effective tool for assessing the performance of AI models. Most of the evaluation metrics for CV-based EO applications are adapted from traditional CV tasks (such as image classification, semantic segmentation, and so on), and these traditional CV metrics were designed for natural images. In addition, different evaluation metrics are sensitive to different types of errors [13], so focusing on only one metric will result in a biased AI model; a short sketch below illustrates the effect. EO applications need community agreed-upon holistic evaluation metrics to develop a path for characterizing research/operational progress, and limits, for AI-ready EO datasets and AI models when deployed in real-world applications.
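As a minimal illustration of how metric choice changes the story, the sketch below scores the same synthetic prediction with overall pixel accuracy and with intersection over union (IoU): on background-dominated scenes, typical of EO imagery, accuracy stays near perfect while IoU exposes a badly missed object. The masks are placeholders:

```python
import numpy as np

truth = np.zeros((100, 100), dtype=bool)
truth[40:60, 40:60] = True   # small object of interest: 4% of the pixels
pred = np.zeros_like(truth)
pred[40:50, 40:50] = True    # only a quarter of the object is detected

accuracy = (pred == truth).mean()
inter = np.logical_and(pred, truth).sum()
union = np.logical_or(pred, truth).sum()
iou = inter / union

print(f"pixel accuracy = {accuracy:.3f}")  # ~0.97, looks excellent
print(f"IoU            = {iou:.3f}")       # 0.25, reveals the miss
```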
WHAT THE GRSS COMMUNITY IS GOING TO DO
The first step is to propose a framework for cross-modal ARD processing and to provide definitions of what FAIR means for EO datasets and AI models. The following briefly summarizes the corresponding details and definitions of FAIR elements for EO:
◗◗ Findable:
•• RS training and validation image metadata should first be standardized through the SpatioTemporal Asset Catalog (STAC) [12] family of specifications to create structured datasheets and easy-to-query formats (see the sketch after this list).
•• Cross-modal datasheets that provide a detailed description of the datasets, including resolution and the number of channels, sensor type, and collection date.
•• Metadata contain STAC-based item identifiers that enable other users to search for data.
•• Datasheets that contain machine-readable keywords, with metadata that are easy for humans and machines to find.
•• Dataset metadata should be written using RS-based attributes similar to Committee on Earth Observation Satellites (CEOS) ARD specifications [14].
•• Develop open-consensus, GRSS-based ARD standards that advance harmonization of ARD datasets by vendors and imagery providers. Coordination can be established to consider CEOS ARD specifications and include broader EO expert contributions, e.g., the GRSS and industry.
◗◗ Accessible:
•• ARD datasets, including EO benchmarks, should be made available and shared through searchable public repositories, with the data retrievable by standardized interfaces [application programming interfaces (APIs)], including access by identifier.
•• Human–computer-interaction-searchable repositories with tools (such as, e.g., an Earth observation database [15]) that search cross-modal datasets should be developed.
•• Using STAC specifications, datasheets should be discoverable by humans and machines.
◗◗ Interoperable:
•• Benchmark EO datasets should be in common formats and standardized through shared ARD best practices.
•• Datasheets and model cards that contain references to training data, hyperparameter settings, validation metrics, and hardware platforms used in experiments.
•• Data fusion experiments should be published in containerized environments to encourage interoperability and reproducibility across computing platforms.
•• Conduct experiments on data fusion using ARD data from multiple sensors accessed using open APIs. The experiments should involve multiple software applications that identify best practices and needed harmonization to lessen analysts' burden in creating fusion workflows.
◗◗ Reusable:
•• Publishing of data, datasheets, model cards, and model weights should be supported by Jupyter notebooks for quick human interaction to understand and test models on new data.
•• Datasheets that include details in machine-readable format on how data were collected.
•• Establish domain-relevant community standards for AI-based data fusion evaluation methods based on the GRSS IADF and data and algorithm standard evaluation results [16].
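Picking up the forward reference in the Findable bullets, here is a sketch using the open source `pystac` package to assemble a STAC item whose identifier, geometry, and properties make a scene findable and queryable. All IDs, coordinates, property values, and the asset URL are placeholders:

```python
from datetime import datetime, timezone
import pystac

# All identifiers, coordinates, property values, and URLs are placeholders.
item = pystac.Item(
    id="example-s2-chip-20230601",  # STAC-based item identifier
    geometry={"type": "Polygon", "coordinates": [[
        [10.0, 45.0], [10.1, 45.0], [10.1, 45.1], [10.0, 45.1], [10.0, 45.0],
    ]]},
    bbox=[10.0, 45.0, 10.1, 45.1],
    datetime=datetime(2023, 6, 1, tzinfo=timezone.utc),
    properties={  # machine-readable datasheet fields
        "platform": "sentinel-2a",
        "gsd": 10,          # ground sample distance in meters
        "band_count": 13,
    })
item.add_asset("image", pystac.Asset(
    href="https://example.org/chips/s2_chip.tif",
    media_type=pystac.MediaType.COG))
print(item.to_dict()["id"])  # the item serializes to plain, queryable JSON
```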
After that, standard ARD components must be highlighted while focusing on emerging needs at the nexus of cross-modal, cross-sensor EO/RS; image analysis; and data fusion technologies. Essential components must be identified to establish standards for systematic advancement and provision of image analysis and data fusion methods in the era of AI and big EO data. The GRSS and, in particular, the IADF and the GSEO will continue to work toward these goals.

However, standards do not exist in isolation. There is an application-related context that needs to be respected, existing work that needs to be incorporated, best practices that should be adapted, and communities that need to validate the proposed principles by adhering to and using them. Thus, we actively reach out and invite other groups working toward similar goals to focus our efforts and collaborate, as a new standard needs to be created by the community, for the community, to be successful.

AUTHOR INFORMATION
Dalton Lunga ([email protected]) is with the Oak Ridge National Laboratory, Oak Ridge, TN 37830 USA, and is the IEEE Geoscience and Remote Sensing Society Image Analysis and Data Fusion Working Group on Machine/Deep Learning for Image Analysis lead. He is a Senior Member of IEEE.
Silvia Ullo ([email protected]) is with the University of Sannio, 82100, Benevento, Italy, and is the IEEE Geoscience and Remote Sensing Society Image Analysis and Data Fusion Working Group on Machine/Deep Learning for Image Analysis co-lead. She is a Senior Member of IEEE.
Ujjwal Verma ([email protected]) is with the Department of Electronics and Communication Engineering, Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal 576104, India, and is the IEEE Geoscience and Remote Sensing Society Image Analysis and Data Fusion Working Group on Machine/Deep Learning for Image Analysis co-lead. He is a Senior Member of IEEE.
George Percivall ([email protected]) is with GeoRoundtable, Annapolis, MD 21114 USA, and is the IEEE Geoscience and Remote Sensing Society Technical Committee on Standards for Earth Observation cochair. He is a Senior Member of IEEE.
Fabio Pacifici is with Maxar Technologies Inc., Westminster, CO 80234 USA, and is the IEEE Geoscience and Remote Sensing Society vice president of technical activities. He is a Senior Member of IEEE.
Ronny Hänsch ([email protected]) is with the German Aerospace Center, 82234 Weßling, Germany, and is the IEEE Geoscience and Remote Sensing Society Image Analysis and Data Fusion chair. He is a Senior Member of IEEE.

REFERENCES
[1] Z. Ignacio, "Analysis ready data workshops," ARD.Zone. Accessed: Dec. 1, 2022. [Online]. Available: https://www.ard.zone
[2] J. L. Dwyer, D. P. Roy, B. Sauer, C. B. Jenkerson, H. K. Zhang, and L. Lymburner, "Analysis ready data: Enabling analysis of the Landsat archive," Remote Sens., vol. 10, no. 9, Aug. 2018, Art. no. 1363, doi: 10.3390/rs10091363. [Online]. Available: https://www.mdpi.com/2072-4292/10/9/1363
[3] M. D. Wilkinson et al., "The FAIR guiding principles for scientific data management and stewardship," Scientific Data, vol. 3, no. 1, Mar. 2016, Art. no. 160018, doi: 10.1038/sdata.2016.18.
[4] "Sensor observation service," Open Geospatial Consortium, Arlington, VA, USA, 2023. [Online]. Available: https://www.ogc.org/standard/sos/
[5] "Standards working groups," Open Geospatial Consortium, Arlington, VA, USA, 2023. [Online]. Available: https://www.ogc.org/about-ogc/committees/swg/
[6] "Data readiness," Earth Science Information Partners, Severna Park, MD, USA, 2023. [Online]. Available: https://wiki.esipfed.org/Data_Readiness
[7] M. Claverie et al., "The harmonized Landsat and Sentinel-2 surface reflectance data set," Remote Sens. Environ., vol. 219, pp. 145–161, Dec. 2018, doi: 10.1016/j.rse.2018.09.002. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0034425718304139
[8] "FAIR for machine learning (FAIR4ML) IG," Research Data Alliance, 2023. [Online]. Available: https://www.rd-alliance.org/groups/fair-machine-learning-fair4ml-ig
[9] T. Gebru et al., "Datasheets for datasets," 2018. [Online]. Available: https://arxiv.org/abs/1803.09010
[10] M. Mitchell et al., "Model cards for model reporting," in Proc. Conf. Fairness, Accountability, Transparency, Jan. 2019, pp. 220–229, doi: 10.1145/3287560.3287596.
[11] D. Lunga and P. Dias, "Advancing data fusion in earth sciences," in Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), 2022, pp. 5077–5080, doi: 10.1109/IGARSS46834.2022.9883176.
[12] J. Rincione and M. Hanson, "CMR SpatioTemporal Asset Catalog (CMR-STAC) documentation," NASA Earth Science, Nat. Aeronaut. Space Admin., Washington, DC, USA, 2021. [Online]. Available: https://wiki.earthdata.nasa.gov/display/ED/CMR+SpatioTemporal+Asset+Catalog+%28CMR-STAC%29+Documentation
[13] B. Cheng, R. Girshick, P. Dollár, A. C. Berg, and A. Kirillov, "Boundary IoU: Improving object-centric image segmentation evaluation," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2021, pp. 15,329–15,337, doi: 10.1109/CVPR46437.2021.01508.
[14] "CEOS analysis ready data," Committee on Earth Observation Satellites, 2022. [Online]. Available: https://ceos.org/ard/
[15] M. Schmitt, P. Ghamisi, N. Yokoya, and R. Hänsch, "EOD: The IEEE GRSS earth observation database," in Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), 2022, pp. 5365–5368, doi: 10.1109/IGARSS46834.2022.9884725.
[16] GRSS IADF Technical Committee, "GRSS data and algorithm standard evaluation," GRSS DASE. Accessed: May 3, 2023. [Online]. Available: http://dase.grss-ieee.org/
IRENA HAJNSEK, SUBIT CHAKRABARTI, ANDREA DONNELLAN, RABIA MUNSAF KHAN, CARLOS LÓPEZ-MARTÍNEZ, RYO NATSUAKI, ANTHONY MILNE, AVIK BHATTACHARYA, PRAVEEN PANKAJAKSHAN, POOJA SHAH, AND MUHAMMAD ADNAN SIDDIQUE
REACT: A New Technical Committee for Earth Observation and Sustainable Development Goals
In November 2022, a new technical committee of the IEEE Geoscience and Remote Sensing Society (GRSS) was formed with the name Remote Sensing Environment, Analysis, and Climate Technologies (REACT). REACT is a venue for all scientists and engineers working on remote sensing and environment-related domains, as well as on the analysis of remote sensing data related to climate change and sustainable development goals (SDGs). The primary aim is to exchange ideas and share knowledge with the goal of advancing science and defining requirements for science-driven mission concepts and data products in the domains of cryosphere, biosphere, hydrosphere, atmosphere, and geosphere.

Remote sensing for Earth observation (EO) represents a key tool for systematic and continuous observation of Earth surface processes and is therefore an indispensable instrument to quantify environmental changes. The changes that can be observed can be due to natural successions, hazard occurrences, or anthropogenic influences. The focus of the GRSS technical committees is on methods and algorithms, satellite systems, and data-driven solutions to estimate information products. With REACT we are going a step further in using the information products derived from remote sensing and making them available to support sustainable management in the different environmental domains. In other words, we will contribute to the understanding of climate change and support the SDGs.

At the moment, a team of five chairs, with expertise including mission design, image and signal processing, algorithm development, and application to different environments, is taking the lead in forming and structuring REACT. Within REACT, four local focus areas have currently been established to open up a venue for local and regional issues related to climate change and to support the SDGs using remote sensing for EO (see Figure 1). The main tasks in each of these focus areas are as follows:
◗◗ building a community through collaborative efforts in a shared region
◗◗ exploring different application domains utilizing a variety of methods/techniques
Digital Object Identifier 10.1109/MGRS.2023.3273083 Date of current version: 30 June 2023
◗◗ meeting SDGs and climate issues using remote sensing
◗◗ finding/dealing with local issues transferable to global issues
◗◗ defining current and future use cases in the local areas
◗◗ achieving diversity through an interdisciplinary and multicultural working environment.
The current topics of the local focus areas are briefly presented in the following sections, each led by a team member.

PACIFIC ISLANDS AND TERRITORIES
Although Pacific Island nations have had the least involvement in causing anthropogenic climate change, they will experience the most extreme consequences. Recently, climate change has compounded an already-vulnerable situation by increasing the frequency and intensity of extreme climatic events that pose a significant threat to the safety of people and communities. The cost of recovery from these events can be significant. Locally retrieved information from remote sensing data will increase awareness and provide a better understanding of environmental processes. The task here is to educate the local inhabitants about the knowledge gained from remote sensing and to bring together experts' knowledge worldwide to support them with quantitative information to inform and guide decisions in promoting sustainable management of both land and ocean environments.

AGRICULTURE AND FOOD SECURITY
Global food security is a part of the objectives of the United Nations' SDGs and can be achieved through sustainable agricultural and regenerative practices, reduced food losses and waste, improved nutrient content, and assured zero hunger. With global warming, escalating conflicts, spiraling climatic crises, and economic downturns in recent years, global agricultural monitoring for sustainable food production and regional food security are critical objectives to address at the moment. One of the essential components of this task is optimizing agricultural input resources,
including water usage, soil nutrients, pests and diseases, and the availability of clean energy and labor. However, the high variability in cropping systems and agroecological zones makes agricultural production extremely diverse. For instance, crop monitoring, forecasting, and mechanization are highly site specific because of variability in crop traits, pathogen pressures, environmental conditions, input availability, and management strategies, making technological generalization very challenging. The volume of EO data used for near-real-time monitoring, along with cloud-based processing and machine learning (ML), has recently enhanced scientific capacity and methods for investigating land and water resource management. Several efforts were made within scientific communities, commercial organizations, and national agencies to develop EO data products and ML methodologies that aid in monitoring biophysical (such as crop condition anomalies and planting acreage) and sociopolitical risk factors related to agricultural production and food security.
FIGURE 1. The current four local focus areas of REACT: Pacific Islands; Agriculture and Food Security in India; Floods and Water Security in Africa; and the Hindu Kush–Karakoram–Himalayas.
As an example, EO-driven capacity building supports the progression of a food-deficit nation in addressing the major difficulties inherent in the food system of India’s small-acre farms. India’s presidency of the Group of 20 has now recognized the need to address the growing challenges of food security by creating resilient and equitable food systems. We will initiate processes to provide analysis-ready EO data and state-of-the-art methodologies to end users in support of the SDGs.
FLOODING IN AFRICA
Floods, which constitute around half of all extreme events, are increasing, exposing a larger population to a higher risk of loss of livelihood and property. Climate change is also increasing the severity of floods, which makes these losses catastrophic and threatens every SDG at local and regional scales. The impact of flooding on the African continent is massive because robust flood defenses and urban drainage systems are lacking in many cities that are built on floodplains, which amplifies the risk further. Remote sensing and EO can help mitigate the loss of lives and livelihood and increase adaptation to floods. Near-real-time maps of inundation allow first responders and disaster managers to prioritize aid to the most affected areas, flood-risk maps allow planners to build flood defenses for the neighborhoods most at risk, and predictive models built using EO data can help aid agencies provide anticipatory financing to vulnerable communities. However, major technical challenges remain in generating actionable insights and inundation maps from remotely sensed imagery, and these can only be solved when remote sensing experts work directly with emergency managers and other end users.
CRYOSPHERE CHANGES IN THE HINDU KUSH, KARAKORAM, AND HIMALAYAS
The Hindu Kush–Karakoram–Himalaya region remains understudied, despite the fact that it forms what we call the “third pole,” holding the largest amount of ice cover outside of the polar regions. Several glaciers are receding rapidly because of global warming, and the entire region is likely to face extreme water stress in the coming decades. Climate change has exacerbated glacial melting, leading to an increase in glacial lake outburst floods. These vulnerable glacial lakes are mostly seasonal, so their precise incidence in time and location may not be known a priori; therefore, remote sensing-based automated detection can directly help scientists, policy makers, and the local communities. The main task is to bring this knowledge to the local people and to exchange expert knowledge for a reliable and sustainable event detection method.
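Both the inundation mapping and the glacial lake detection tasks above often start from a simple water index. The sketch below is a minimal illustration, assuming atmospherically corrected green and near-infrared reflectance; the zero threshold is a common default rather than a REACT specification. It derives a water mask with McFeeters’ normalized difference water index (NDWI) and differences two dates for a first-cut flood extent.

```python
# Minimal sketch of threshold-based water mapping with NDWI,
# (green - NIR) / (green + NIR); the threshold value is an assumption.
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    return (green - nir) / (green + nir + eps)

def water_mask(green: np.ndarray, nir: np.ndarray,
               threshold: float = 0.0) -> np.ndarray:
    """Boolean mask of likely open water; NDWI > 0 is a common starting point."""
    return ndwi(green, nir) > threshold

def flood_extent(pre_mask: np.ndarray, post_mask: np.ndarray) -> np.ndarray:
    """Pixels that are water after the event but not in the reference scene."""
    return post_mask & ~pre_mask

# Toy usage with synthetic reflectance data.
rng = np.random.default_rng(1)
g_pre = rng.uniform(0.1, 0.4, (64, 64))
n_pre = rng.uniform(0.2, 0.5, (64, 64))
g_post, n_post = g_pre + 0.1, n_pre - 0.1  # wetter scene after the event
newly_flooded = flood_extent(water_mask(g_pre, n_pre),
                             water_mask(g_post, n_post))
print(f"{newly_flooded.mean():.1%} of pixels newly inundated")
```

The same masking step, applied to a time series, is one simple way to track the seasonal glacial lakes mentioned above.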
All scientists who are interested in one of the local focus areas and would like to contribute or participate in their activities are invited to join REACT. In addition to the local focus areas, we have many more activities, fully open for others to join and enrich with their ideas and expertise.
◗◗ As a next event, we will hold the “Mini-Project Competition” EO4SDG, where your ideas about how to solve environmental problems related to the SDGs can be submitted as a short three-page proposal. The proposals are evaluated, and the three best-rated ones will have the opportunity to be published in IEEE Geoscience and Remote Sensing Magazine.
◗◗ Further, we offer regular webinars about the different topics covered by the local focus areas.
◗◗ At the IEEE International Geoscience and Remote Sensing Symposium, we hold an annual meeting on one evening to exchange knowledge and collect ideas for new activities. Please watch for the announcement of the REACT Technical Committee meeting.
◗◗ Currently, we are working on a new podcast series related to climate change and the SDGs. The podcast will be launched in September/October 2023.
◗◗ We have a strong connection to the IEEE GRSS Young Professionals and are supported by them in different activities.
All activities are announced through social media and on the GRSS home page, https://www.grss-ieee.org. Please have a look. We look forward to welcoming you at the next event.
GEMINE VIVONE, DALTON LUNGA, FRANCESCOPAOLO SICA, GÜLŞEN TAŞKIN, UJJWAL VERMA, AND RONNY HÄNSCH
Computer Vision for Earth Observation—The First IEEE GRSS Image Analysis and Data Fusion School
The first edition of the IEEE Geoscience and Remote Sensing Society (GRSS) Image Analysis and Data Fusion (IADF) school (see Figure 1) was organized as an online event from 3 to 7 October 2022. It addressed topics related to computer vision (CV) in the context of Earth observation (EO). Nine lessons with both theoretical and hands-on sessions were provided, involving more than 17 lecturers. We received more than 700 registrations from all over the world. The organizing committee selected 85 candidates to join the online class. The remaining applicants were free to attend the live stream on the GRSS YouTube channel (https://www.youtube.com/c/IEEEGRSS).
Digital Object Identifier 10.1109/MGRS.2023.3267850 Date of current version: 30 June 2023
The selection process relied on several objective criteria, including work experience, academic recognition, number of publications, and h-index. Due to the high number of registrations, the prescreening also assessed fundamental skills such as programming expertise and CV background, which are of crucial importance to fruitfully attend such a school. Aspects regarding diversity and inclusion were also taken into account. The selected participants were approximately 50% Ph.D. students and roughly 30% women. The geographical distribution of the selected participants, who came from 33 different countries, is depicted in Figure 2.
GOALS OF THE IADF SCHOOL ON COMPUTER VISION FOR EARTH OBSERVATION
EO data, in particular imagery, were mostly interpreted manually in the early days of remote sensing. With increasing digitization and computational resources, the automatic analysis of these images came into focus. However, most of the approaches stayed very close to the sensor, interpreting the data as a well-calibrated measurement of a physical process. They were mostly based on statistical and/or physical models that describe the functional relation between the measurement and the physical properties of the ground (and atmosphere). The advantage of these models is that their results can be assigned to a clear phenomenological context, and the connection to physical laws is maintained.
FIGURE 1. Logo of the first IEEE GRSS IADF School on Computer Vision for Earth Observation.
FIGURE 2. The geographical distribution of the selected participants by (a) country (United States 14%, India 12%, France 7%, China 6%, Germany 5%, Turkey 5%, Italy 5%, Canada 4%, and other countries 42%) and (b) continent (Asia 37%, Europe 29%, America 25%, Africa 7%, and Australia 2%).
A limitation, however, is that most of these models are based on simplifying assumptions to make their computation tractable. With the success of CV in other areas (mostly the semantic or geometric interpretation of close-range imagery), a different type of model gained importance. CV emphasizes the “image” aspect of the acquired data, i.e., the spectral–spatial relationships among pixels, instead of focusing on the information measured in a single pixel. Its methods are usually data driven, i.e., they apply machine learning-based approaches to model the relationship between input data and target variables. This allows gains in flexibility, generalization, and complexity while potentially sacrificing interpretability and physical plausibility of the results.
Since the beginnings of CV in EO, the community and its methods have shown significant progress. The approaches used are no longer merely adapted versions of methods designed for close-range photographs but are increasingly tailored directly to the specific characteristics of remote sensing data. Sophisticated methods address data particularities, such as high-dimensional hyperspectral images or complex-valued synthetic aperture radar (SAR) data, as well as task-specific characteristics, such as label noise or the general scarcity of annotations. The goal of this first IEEE GRSS IADF School on Computer Vision for Earth Observation (CV4EO) was to provide a general overview of the multitude of different aspects of how CV is used in EO applications, together with deep insights into modern methods to automatically process and analyze remote sensing images.
ORGANIZATION OF THE FIRST IEEE GRSS IADF SCHOOL
ORGANIZING COMMITTEE
The school was organized by the IADF Technical Committee (IADF TC) of the GRSS. The organizing committee consisted of the following members (Figure 3):
◗◗ Gemine Vivone, National Research Council, Italy
◗◗ Ronny Hänsch, German Aerospace Center, Germany
◗◗ Claudio Persello, University of Twente, The Netherlands
◗◗ Dalton Lunga, Oak Ridge National Laboratory, USA
◗◗ Gülşen Taşkın, Istanbul Technical University, Turkey
◗◗ Ujjwal Verma, Manipal Institute of Technology Bengaluru, India
◗◗ Francescopaolo Sica, University of the Bundeswehr Munich, Germany
◗◗ Srija Chakraborty, NASA’s Goddard Space Flight Center, Universities Space Research Association, USA.
PROGRAM
Remote sensing generates vast amounts of image data that can be difficult and time consuming to analyze using conventional image processing techniques.
FIGURE 3. The Organizing Committee of the first IEEE GRSS IADF School on CV4EO.
CV algorithms enable the automatic interpretation of these large data volumes, allowing remote sensing to be used for a wide range of applications, including environmental monitoring, land use/cover mapping, and natural resource management. Thus, the IADF TC aimed to prioritize topics that integrate CV into remote sensing data analysis. The first IADF school focused on applying CV techniques to address modern remote sensing challenges and consisted of a series of lectures discussing current methods for analyzing satellite images.
The covered topics were image fusion, explainable artificial intelligence (AI) for Earth science, big geo-data, multisource image analysis, deep learning for spectral unmixing, SAR image analysis, and learning with zero/few labels. The technical program of the IADF school is depicted in Figure 4.
During the first day of the school, the “Deep/Machine Learning for Spectral Unmixing” lecture covered various topics related to linear hyperspectral unmixing, including geometrical approaches, blind linear unmixing, and sparse unmixing. Additionally, the course delved into the utilization of autoencoders and convolutional networks for unmixing purposes. The lecture was followed by “Change Detection (TorchGeo),” which elaborated on the utilization of TorchGeo with PyTorch for training change detection models using satellite imagery. On the second day of the school, the “Learning with Zero/Few Labels” lecture discussed recent developments in machine learning with limited label data in EO, including semisupervised learning, weakly supervised learning, and self-supervised learning. The subsequent “SAR Processing” lecture covered various topics, including the analysis of SAR images with different polarimetric channels, the geometry of SAR image acquisition, radiometric calibration, and the generation of the SAR backscatter image. On the third day, the “Semantic Segmentation” lecture started with a focus on recent advancements in methods and datasets for the semantic segmentation of remote sensing images and was followed by a practical exercise in which participants trained and tested a model for this task. The school proceeded with the “Big Geo-Data” lecture, which explored the latest developments in machine learning along with practical considerations for effectively deploying them to analyze high-resolution geospatial imagery across a wide range of applications, including ecosystem monitoring, natural hazards, and urban land-cover/land-use patterns. On the fourth day of the school, the “Image Fusion” lecture discussed theoretical and practical elements of developing convolutional neural networks for pansharpening. The “XAI for Earth Science” lecture discussed methods of explainable AI and demonstrated their use in interpreting pretrained models for weather hazard forecasting. The school concluded with a lecture about “PolSAR,” which focused on statistical models for fully polarimetric SAR data that arise in practical applications.
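As a pointer to the kind of material the unmixing lecture covered, the following minimal sketch estimates nonnegative abundances for a single pixel under the standard linear mixing model y = Ea + n. It is an illustration on synthetic data and does not reproduce the lecture material; the endmember library and noise level are assumptions.

```python
# A minimal sketch of linear spectral unmixing with known endmembers:
# observed spectrum y = E @ a + n, abundances a >= 0, estimated via
# nonnegative least squares.
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(E: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Estimate nonnegative abundances a from spectrum y and endmembers E.

    E: (bands, endmembers) library of pure-material spectra.
    y: (bands,) observed pixel spectrum.
    """
    a, _residual = nnls(E, y)
    return a / max(a.sum(), 1e-12)  # optional sum-to-one renormalization

# Toy usage: three synthetic endmembers, one mixed pixel.
rng = np.random.default_rng(2)
E = rng.uniform(0.0, 1.0, (50, 3))           # 50 bands, 3 endmembers
a_true = np.array([0.6, 0.3, 0.1])
y = E @ a_true + rng.normal(0.0, 0.005, 50)  # mixed spectrum plus noise
print("estimated abundances:", unmix_pixel(E, y).round(2))
```

The geometrical, sparse, and autoencoder-based approaches discussed in the lecture generalize this same model to unknown endmembers and whole images.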
3 October
10 a.m.–2 p.m. (UTC +2): Deep/Machine Learning for Spectral Unmixing. Speaker: Dr. Behnood Rasti, Helmholtz-Zentrum Dresden-Rossendorf (Germany).
2 p.m.–6 p.m. (UTC +2): Change Detection (TorchGeo). Speaker: Dr. Caleb Robinson, Microsoft (USA).
4 October
10 a.m.–2 p.m. (UTC +2): Learning With Zero/Few Labels. Speakers: Dr. Sudipan Saha, Dr. Angelica I. Aviles-Rivero, Dr. Lichao Mou, Prof. Carola-Bibiane Schönlieb, and Prof. Xiao Xiang Zhu, Technical University of Munich (Germany), German Aerospace Center (Germany), and University of Cambridge (U.K.).
2 p.m.–6 p.m. (UTC +2): SAR Processing. Speaker: Dr. Shashi Kumar, IIRS, ISRO (India).
5 October
10 a.m.–2 p.m. (UTC +2): Semantic Segmentation. Speaker: Prof. Sylvain Lobry, Université de Paris (France).
2 p.m.–6 p.m. (UTC +2): Big Geo-Data. Speakers: Prof. Saurabh Prasad and Prof. Melba Crawford, University of Houston (USA) and Purdue University (USA).
6 October
10 a.m.–2 p.m. (UTC +2): Image Fusion. Speakers: Prof. Giuseppe Scarpa and Dr. Matteo Ciotola, University of Naples “Federico II” (Italy).
2 p.m.–6 p.m. (UTC +2): XAI for Earth Science. Speaker: Dr. Michele Ronco, University of Valencia (Spain).
7 October
9 a.m.–1 p.m. (UTC +2): PolSAR. Speakers: Prof. Avik Bhattacharya, Prof. Alejandro Frery, and Dr. Dipankar Mandal, Indian Institute of Technology Bombay (India), Victoria University of Wellington (New Zealand), and Kansas State University (USA).
FIGURE 4. The technical program of the first IEEE GRSS IADF School on CV4EO. IIRS, ISRO: Indian Institute of Remote Sensing, Indian Space Research Organisation.
DISTRIBUTED MATERIAL
Through lectures, hands-on exercises, and demonstrations, participants gained a deep understanding of key topics in CV4EO, including image fusion, explainable AI, multisource image analysis, deep learning for spectral unmixing, SAR image analysis, and unsupervised and self-supervised learning. The lectures were recorded and made available online on a daily basis on the GRSS YouTube channel. Links to the daily lectures are provided for reference.
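To give a flavor of one of these topics in code, the sketch below implements change vector analysis (CVA), a classical baseline that thresholds the magnitude of the per-pixel spectral difference between two co-registered acquisitions. It is illustrative only and does not use the TorchGeo API demonstrated in the school; the robust threshold and synthetic scene are assumptions.

```python
# Classical change detection baseline: change vector analysis (CVA)
# on two co-registered, radiometrically comparable multiband images.
import numpy as np

def cva_change_map(t1: np.ndarray, t2: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Binary change map from the per-pixel spectral difference magnitude.

    t1, t2: (bands, H, W) images of the same scene at two dates.
    k: threshold in robust standard deviations above the median magnitude.
    """
    diff = t2.astype(np.float64) - t1.astype(np.float64)
    magnitude = np.sqrt((diff ** 2).sum(axis=0))       # change vector length
    med = np.median(magnitude)
    mad = np.median(np.abs(magnitude - med)) * 1.4826  # robust std estimate
    return magnitude > med + k * mad

# Toy usage: inject a change into a corner of a synthetic scene.
rng = np.random.default_rng(3)
t1 = rng.uniform(0.0, 1.0, (4, 64, 64))
t2 = t1 + rng.normal(0.0, 0.01, t1.shape)
t2[:, :16, :16] += 0.5                                 # simulated change
print(f"{cva_change_map(t1, t2).mean():.1%} of pixels flagged as changed")
```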
FIGURE 5. Speakers of the first IEEE GRSS IADF School on CV4EO.
SPEAKERS
The first edition of the IEEE GRSS IADF school invited a diverse group of experts from four continents. As shown in Figure 5, the list includes the following:
◗◗ Prof. Melba Crawford, professor of civil engineering, Purdue University, USA
◗◗ Prof. Saurabh Prasad, associate professor, the Department of Electrical and Computer Engineering, the University of Houston, USA
◗◗ Dr. Caleb Robinson, data scientist, the Microsoft AI for Good Research Lab, USA
◗◗ Dr. Behnood Rasti, principal research associate, Helmholtz-Zentrum Dresden-Rossendorf, Freiberg, Germany
◗◗ Prof. Giuseppe Scarpa and Dr. Matteo Ciotola, associate professor and Ph.D. fellow, respectively, the University of Naples “Federico II”, Italy
◗◗ Dr. Sudipan Saha, postdoctoral researcher, Technical University of Munich, Germany
◗◗ Dr. Angelica I. Aviles-Rivero, senior research associate, the Department of Applied Mathematics and Theoretical Physics, the University of Cambridge, U.K.
◗◗ Dr. Lichao Mou, head of the Visual Learning and Reasoning Team, Remote Sensing Technology Institute, German Aerospace Center, Weßling, Germany
◗◗ Prof. Carola-Bibiane Schönlieb, professor of applied mathematics, the University of Cambridge, U.K.
◗◗ Prof. Xiao Xiang Zhu, professor for data science in EO, Technical University of Munich, Germany
Join the GRSS IADF TC
You can contact the Image Analysis and Data Fusion Technical Committee (IADF TC) chairs at [email protected]. If you are interested in joining the IADF TC, please complete the form on our website (https://www.grss-ieee.org/technical-committees/image-analysis-and-data-fusion) or send an email to us including your
◗◗ first and last name
◗◗ institution/company
◗◗ country
◗◗ IEEE membership number (if available)
◗◗ email address.
Members receive information regarding research and applications on image analysis and data fusion topics, as well as updates on the annual Data Fusion Contest and on all other activities of the IADF TC. Membership in the IADF TC is free! You can also join the LinkedIn IEEE GRSS data fusion discussion forum, https://www.linkedin.com/groups/3678437/, or join us on Twitter: Grssiadf.
◗◗ Prof. Avik Bhattacharya, professor, the Centre of Studies in Resources Engineering, Indian Institute of Technology Bombay, Mumbai, India
◗◗ Prof. Alejandro Frery, professor of statistics and data science, the Victoria University of Wellington, New Zealand
◗◗ Dr. Dipankar Mandal, postdoctoral fellow, Department of Agronomy, Kansas State University, USA
◗◗ Dr. Shashi Kumar, scientist, the Indian Institute of Remote Sensing, Indian Space Research Organisation, Dehradun, India
◗◗ Prof. Sylvain Lobry, associate professor, the Université Paris Cité, the Laboratoire d’Informatique de Paris Descartes, France
◗◗ Dr. Michele Ronco, postdoctoral researcher, the Image Processing Laboratory, the University of Valencia, Spain.
IEEE GRSS IADF SCHOOL: FIND OUT ABOUT THE NEXT EDITION!
After the successful first edition of the IEEE GRSS IADF school, a second one will be announced soon. It will follow the same theme as the 2022 edition, i.e., CV4EO. It will be an in-person event taking place at the University of Sannio, Benevento, Italy, 13–15 September 2023. We look forward to seeing you in Benevento! Please stay tuned!
CONCLUSION
We would like to thank the GRSS and the IADF for their support, and all the lecturers who gave so freely of their time and expertise. A survey conducted among the participants after the school clearly showed that the event received high attention and provided an exciting experience. All the comments have been collected and will be used to improve the format of the next editions.
AUTHOR INFORMATION
Gemine Vivone ([email protected]) is with the National Research Council - Institute of Methodologies for Environmental Analysis, 85050 Tito Scalo, Italy, and the National Biodiversity Future Center, 90133 Palermo, Italy. He is a Senior Member of IEEE.
Dalton Lunga ([email protected]) is with the Oak Ridge National Laboratory, Oak Ridge, TN 37830 USA. He is a Senior Member of IEEE.
Francescopaolo Sica ([email protected]) is with the Institute of Space Technology & Space Applications, University of the Bundeswehr Munich, 85577 Neubiberg, Germany. He is a Member of IEEE.
Gülşen Taşkın ([email protected]) is with the Institute of Disaster Management, Istanbul Technical University, Istanbul 34469, Turkey. She is a Senior Member of IEEE.
Ujjwal Verma ([email protected]) is with the Department of Electronics and Communication Engineering, Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal 576104, India. He is a Senior Member of IEEE.
Ronny Hänsch ([email protected]) is with the DLR, 82234 Weßling, Germany. He is a Senior Member of IEEE.
Harness the publishing power of IEEE Access®.
IEEE Access is a multidisciplinary open access journal offering high-quality peer review, with an expedited, binary review process of 4 to 6 weeks. As a journal published by IEEE, IEEE Access offers a trusted solution for authors like you to gain maximum exposure for your important research.
Explore the many benefits of IEEE Access:
• Receive high-quality, rigorous peer review in only 4 to 6 weeks
• Reach millions of global users through the IEEE Xplore® digital library by publishing open access
• Submit multidisciplinary articles that may not fit in narrowly focused journals
• Obtain detailed feedback on your research from highly experienced editors
• Establish yourself as an industry pioneer by contributing to trending, interdisciplinary topics in one of the many topical sections IEEE Access hosts
• Present your research to the world quickly since technological advancement is ever-changing
• Take advantage of features such as multimedia integration, usage and citation tracking, and more
• Publish without a page limit for $1,750 per article
Learn more at ieeeaccess.ieee.org
CALL FOR PAPERS IEEE Geoscience and Remote Sensing Magazine
Special issue on “Data Fusion Techniques for Oceanic Target Interpretation”
Guest Editors
Gui Gao, Southwest Jiaotong University, China ([email protected])
Hanwen Yu, University of Electronic Science and Technology of China, China ([email protected])
Maurizio Migliaccio, Università degli Studi di Napoli Parthenope, Italy ([email protected])
Xi Zhang, First Institute of Oceanography, Ministry of Natural Resources, China ([email protected])
Interpreting marine targets using remote sensing can provide critical information for various applications, including environmental monitoring, oceanographic research, navigation, and resource management. With the development of observation systems, the ocean information acquired is multi-source and multi-dimensional. Data fusion, as a general and popular multi-discipline approach, can effectively use the obtained remote sensing data to improve the accuracy and reliability of oceanic target interpretation. This special issue will present an array of tutorial-like overview papers and invites contributions on the latest developments and advances in the field of data fusion techniques for oceanic target interpretation. In agreement with the approach and style of the
Magazine, the contributors to this special issue will pay strong attention to creating a balanced mix between scientific depth and dissemination to a wide audience encompassing remote sensing scientists, practitioners, and students.
The topics of interest include (but are not limited to)
• Multi-source remote sensing applications of human maritime activities, such as fisheries monitoring, maritime emergency rescue, etc.
• Multi-source remote sensing detection and evaluation of marine hazards
• Multi-source remote sensing detection, recognition, and tracking of marine man-made targets
• Detection and sensing of changes in Arctic sea ice using multi-source remote sensing data
• Artificial intelligence for multi-sensor data processing
• Fusion of remote sensing data from sensors at different spatial and temporal resolutions
• Description and analysis of data fusion products, such as databases that can integrate, share, and explore multiple data sources
Format and preliminary schedule. Articles submitted to this special issue of IEEE Geoscience and Remote Sensing Magazine must contain significant relevance to geoscience and remote sensing and should have noteworthy tutorial value. Selection of invited papers will be done on the basis of 4-page white papers, submitted in double-column format. These papers must discuss the foreseen objectives of the paper, the importance of the addressed topic, the impact of the contribution, and the authors’ expertise and past activities on the topic. Contributors selected on the basis of the white papers will be invited to submit full manuscripts. Manuscripts should be submitted online at http://mc.manuscriptcentral.com/grsm using the Manuscript Central interface. Prospective authors should consult the site http://ieeexplore.ieee.org/servlet/opac?punumber=6245518 for guidelines and information on paper submission. Submitted articles should not have been published or be under review elsewhere. All submissions will be peer reviewed according to the IEEE and Geoscience and Remote Sensing Society guidelines.
Important dates:
White paper submission deadline: August 1, 2023
Invitation notification: September 1, 2023
Full paper submission deadline: November 1, 2023
Review notification: March 1, 2024
Revised manuscript due: June 1, 2024
Final acceptance notification: September 1, 2024
Final manuscript due: October 1, 2024
Publication date: January 2025
Digital Object Identifier 10.1109/MGRS.2023.3278369