Volume 18 Number 4 ❏ November 2023 www.ieee-cis.org
Features
14 Surrogate-Assisted Many-Objective Optimization of Building Energy Management
by Qiqi Liu, Felix Lanfermann, Tobias Rodemann, Markus Olhofer, and Yaochu Jin
29 Learning Regularity for Evolutionary Multiobjective Search: A Generative Model-Based Approach
by Shuai Wang, Aimin Zhou, Guixu Zhang, and Faming Fang
43 RoCaSH2: An Effective Route Clustering and Search Heuristic for Large-Scale Multi-Depot Capacitated Arc Routing Problem
by Yuzhou Zhang, Yi Mei, Haiqi Zhang, Qinghua Cai, and Haifeng Wu
Columns
on the cover
©SHUTTERSTOCK/SHABLOVSKYISTOCK
IEEE Computational Intelligence Magazine (ISSN 1556-603X) is published quarterly by The Institute of Electrical and Electronics Engineers, Inc. Headquarters: 3 Park Avenue, 17th Floor, New York, NY 10016-5997, U.S.A. +1 212 419 7900. Responsibility for the contents rests upon the authors and not upon the IEEE, the Society, or its members. The magazine is a membership benefit of the IEEE Computational Intelligence Society, and subscriptions are included in Society fee. Replacement copies for members are available for US$20 (one copy only). Nonmembers can purchase individual copies for US$220.00. Nonmember subscription prices are available on request. Copyright and Reprint Permissions: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy beyond the limits of the U.S. Copyright law for private use of patrons: 1) those post-1977 articles that carry a code at the bottom of the first page, provided the per-copy fee is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01970, U.S.A.; and 2) pre-1978 articles without fee. For other copying, reprint, or republication permission, write to: Copyrights and Permissions Department, IEEE Service Center, 445 Hoes Lane, Piscataway NJ 08854 U.S.A. Copyright © 2023 by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved. Periodicals postage paid at New York, NY and at additional mailing offices. Postmaster: Send address changes to IEEE Computational Intelligence Magazine, IEEE, 445 Hoes Lane, Piscataway, NJ 08854-1331 U.S.A. PRINTED IN U.S.A. Canadian GST #125634188.
Digital Object Identifier 10.1109/MCI.2023.3309171
58 Guest Editorial
AI-Explained (Part I)
by Pau-Choo Chung, Alexander Dockhorn, and Jen-Wei Huang
60 AI-eXplained
Group Formation by Group Joining and Opinion Updates Via Multi-Agent Online Gradient Ascent
by Chuang-Chieh Lin, Chih-Chieh Hung, Chi-Jen Lu, and Po-An Chen
62 MAP-Elites for Genetic Programming-Based Ensemble Learning: An Interactive Approach
by Hengzhe Zhang, Qi Chen, Bing Xue, Wolfgang Banzhaf, and Mengjie Zhang
64 Monte Carlo and Temporal Difference Methods in Reinforcement Learning
by Isaac Han, Seungwon Oh, Hoyoun Jung, Insik Chung, and Kyung-Joong Kim
66 Application Notes
Correspondence-Free Point Cloud Registration Via Feature Interaction and Dual Branch
by Yue Wu, Jiaming Liu, Yongzhe Yuan, Xidao Hu, Xiaolong Fan, Kunkun Tu, Maoguo Gong, Qiguang Miao, and Wenping Ma
Departments
2 Editor's Remarks
An Interaction Is Worth a Thousand Words
by Chuan-Kang Ting
3 President's Message
Tempus Fugit!
by Jim Keller
5 Conference Reports
Conference Report on the Inaugural 2023 IEEE Conference on Artificial Intelligence (IEEE CAI 2023)
by Gary B. Fogel and Piero Bonissone
8 In Memoriam
Obituary for Michio Sugeno
by Kazuo Tanaka
10 Publication Spotlight
CIS Publication Spotlight
by Yongduan Song, Dongrui Wu, Carlos A. Coello Coello, Georgios N. Yannakakis, Huajin Tang, Yiu-Ming Cheung, and Hussein Abbass
80 Conference Calendar
by Marley Vellasco and Liyan Song
CIM Editorial Board

Editor-in-Chief: Chuan-Kang Ting, National Tsing Hua University, Department of Computer Science, No. 101, Section 2, Kuang-Fu Road, Hsinchu 300044, TAIWAN. (Phone) +886-3-5742795, (Email) [email protected]

Founding Editor-in-Chief: Gary G. Yen, Oklahoma State University, USA

Past Editors-in-Chief: Kay Chen Tan, Hong Kong Polytechnic University, HONG KONG; Hisao Ishibuchi, Southern University of Science and Technology, CHINA

Editors-At-Large: Piero P. Bonissone, Piero P. Bonissone Analytics, USA; David B. Fogel, Natural Selection, Inc., USA; Vincenzo Piuri, University of Milan, ITALY; Marios M. Polycarpou, University of Cyprus, CYPRUS; Jacek M. Zurada, University of Louisville, USA

Associate Editors: Jose M. Alonso-Moral, Universidade de Santiago de Compostela, SPAIN; Sansanee Auephanwiriyakul, Chiang Mai University, THAILAND; Ying-ping Chen, National Yang Ming Chiao Tung University, TAIWAN; Keeley Crockett, Manchester Metropolitan University, UK; Liang Feng, Chongqing University, CHINA; Jen-Wei Huang, National Cheng Kung University, TAIWAN; Eyke Hüllermeier, University of Munich, GERMANY; Min Jiang, Xiamen University, CHINA; Sheng Li, University of Virginia, USA; Hongfu Liu, Brandeis University, USA; Zhen Ni, Florida Atlantic University, USA; Nelishia Pillay, University of Pretoria, SOUTH AFRICA; Danil Prokhorov, Toyota R&D, USA; Kai Qin, Swinburne University of Technology, AUSTRALIA; Rong Qu, University of Nottingham, UK; Manuel Roveri, Politecnico di Milano, ITALY; Gonzalo A. Ruz, Universidad Adolfo Ibáñez, CHILE; Ming Shao, University of Massachusetts Dartmouth, USA; Ah-Hwee Tan, Singapore Management University, SINGAPORE; Vincent S. Tseng, National Yang Ming Chiao Tung University, TAIWAN; Handing Wang, Xidian University, CHINA; Dongbin Zhao, Chinese Academy of Sciences, CHINA

IEEE Periodicals/Magazines Department: Journals Production Manager, Eileen McGuinness; Senior Manager, Journals Production, Patrick Kempf; Associate Art Director, Gail A. Schnitzer; Production Coordinator, Theresa L. Smith; Director, Business Development - Media & Advertising, Mark David; Advertising Production Manager, Felicia Spagnoli; Production Director, Peter M. Tuohy; Editorial Services Director, Kevin Lisankie; Senior Director, Publishing Operations, Dawn Melley

IEEE prohibits discrimination, harassment, and bullying. For more information, visit http://www.ieee.org/web/aboutus/whatis/policies/p9-26.html.
Digital Object Identifier 10.1109/MCI.2023.3306138
Chuan-Kang Ting National Tsing Hua University, TAIWAN
Editor's Remarks
An Interaction Is Worth a Thousand Words
One year ago, CIM published the inaugural AI-eXplained (AI-X) immersive article on IEEE Xplore. It is thrilling to witness that our call for this new form of content representation has been met with tremendous passion and a surge of submissions. With great effort, CIM offers a platform for delivering AI/CI concepts, designs, and applications via a vivid and innovative means of knowledge distribution. This success could not have been achieved without the solid support of the CIS and IEEE staff teams, especially CIS President Jim Keller, who has been the bellwether in foreseeing the potential of immersive articles and promoting their publication. At this point of his sign-off as President, my sincere gratitude goes to Jim for his leadership and contribution to the realization of immersive articles. In addition, he has accompanied CIM readers through difficult global times with his witty and insightful President's Messages, and CIS has grown continuously under his guidance. I heartily thank Jim and wish him the very best in his future endeavors.

For this Special Issue on AI-X, we are pleased to have accepted seven articles, which will be published in two parts. Readers can interact with the immersive articles by changing parameters or moving objects to explore the AI/CI techniques in the interactive contents on IEEE Xplore. In Part I, the first article uses the interactive formation of groups to explain the concepts of best-response dynamics and online learning in game theory. In the second article, an interactive approach is presented to demonstrate evolutionary ensemble learning based on genetic programming and MAP-Elites. The third article compares the Monte Carlo and temporal difference methods in reinforcement learning through a game environment.

In the Features, the first article formulates a 10-objective optimization problem for energy management and attempts to solve it with surrogate-assisted multiobjective evolutionary algorithms. The second article proposes leveraging the regularity property through a learning strategy to guide the evolutionary search for performance improvement. In the third article, a new divide-and-conquer strategy is integrated with global optimization to deal with the large-scale multi-depot capacitated arc routing problem. The Columns article presents a deep learning framework with feature interaction and a dual branch for point cloud registration.

We cordially invite readers to enjoy both the physical reading experience of the print copy and the intriguing interactive contents of the digital version. If you have any suggestions or feedback for this magazine, please do not hesitate to contact me at [email protected].

Jim Keller (right) and Chuan-Kang Ting (left) on a cruise boat at IEEE CEC 2023 in Chicago.
Digital Object Identifier 10.1109/MCI.2023.3306147 Date of current version: 17 October 2023
President's Message
Jim Keller University of Missouri, USA
Tempus Fugit!
Hi all,

Wow, how time flies. It seems like only yesterday that I was writing my welcome column as your new President in the first issue of the 2022 CIM. Now it's time to hand over the reins to the 2024-2025 CIS President, Yaochu Jin. Yaochu will do a great job for CIS. It's been a great privilege and honor for me to serve as President, and really, it's been a blast. Both the field of computational intelligence and the IEEE Computational Intelligence Society have experienced huge gains, and I'm happy to have had a front row seat these last two years. Rather than attempt to summarize everything that's been happening, if you are a glutton for punishment, you can read the other seven of my editorials (yeah, right?) to get a sense of the vibrant and growing community we have in CIS.

In signing off as President, I have two parting comments. First, I am immensely grateful to all of the dedicated and talented volunteers and staff who have propelled CIS to ever greater heights. While our exceptional staff is the glue that holds us together, no IEEE Society can function, and flourish, without a large number of committed members who give of their time and talent to tackle the myriad positions across the Society. CIS is blessed in this regard. CIS volunteers and staff form such a great group of people from all around the planet, and I wish to thank each and every one for their service.

My second message is for our student members, YPs, and those who just have not had the time or opportunity to volunteer. Please consider increasing the service component of your career. What is certainly true for me, and for all active volunteers I talk to, is that you get way more out of volunteer positions than the effort you expend. I can't count all the dear friends I've made from around the world as a result of the many roles I've embraced over the years. The close camaraderie with all of the smart people who make up the working CIS has helped my technical career as well. It's clearly worth the effort, particularly to "pay it forward".

I will miss this position, but this is the way it should be. As always, feel free to contact me at [email protected] with your thoughts, suggestions, questions, and innovative ideas, even as I move into the Past President role. I'm happy to hear from you and will pass along your ideas to the new President. Please stay safe and healthy. I hope to see many of you at CIS events over the upcoming years.
CIS Society Officers

President: Jim Keller, University of Missouri, USA
President-Elect: Yaochu Jin, Bielefeld University, GERMANY
Vice President-Conferences: Marley M. B. R. Vellasco, Pontifical Catholic University of Rio de Janeiro, BRAZIL
Vice President-Education: Pau-Choo (Julia) Chung, National Cheng Kung University, TAIWAN
Vice President-Finances: Pablo A. Estevez, University of Chile, CHILE
Vice President-Industrial and Governmental Activities: Piero P. Bonissone, Piero P. Bonissone Analytics, USA
Vice President-Members Activities: Sanaz Mostaghim, Otto von Guericke University of Magdeburg, GERMANY
Vice President-Publications: Kay Chen Tan, Hong Kong Polytechnic University, HONG KONG
Vice President-Technical Activities: Luis Magdalena, Universidad Politécnica de Madrid, SPAIN

Publication Editors
IEEE Transactions on Neural Networks and Learning Systems: Yongduan Song, Chongqing University, CHINA
IEEE Transactions on Fuzzy Systems: Dongrui Wu, Huazhong University of Science and Technology, CHINA
IEEE Transactions on Evolutionary Computation: Carlos A. Coello Coello, CINVESTAV-IPN, MEXICO
IEEE Transactions on Games: Georgios N. Yannakakis, University of Malta, MALTA
IEEE Transactions on Cognitive and Developmental Systems: Huajin Tang, Zhejiang University, CHINA
IEEE Transactions on Emerging Topics in Computational Intelligence: Yiu-ming Cheung, Hong Kong Baptist University, HONG KONG
IEEE Transactions on Artificial Intelligence: Hussein Abbass, University of New South Wales, AUSTRALIA

Administrative Committee
Term ending in 2023: Oscar Cordón, University of Granada, SPAIN; Guilherme DeSouza, University of Missouri, USA; Pauline Haddow, Norwegian University of Science and Technology, NORWAY; Haibo He, University of Rhode Island, USA; Hisao Ishibuchi, Southern University of Science and Technology, CHINA
Term ending in 2024: Sansanee Auephanwiriyakul, Chiang Mai University, THAILAND; Jonathan Garibaldi, University of Nottingham, UK; Janusz Kacprzyk, Polish Academy of Sciences, POLAND; Derong Liu, Guangdong University of Technology, CHINA; Ana Madureira, Polytechnic of Porto, PORTUGAL
Term ending in 2025: Keeley Crockett, Manchester Metropolitan University, UK; Jose Lozano, University of the Basque Country UPV/EHU, SPAIN; Alice E. Smith, Auburn University, USA; Christian Wagner, University of Nottingham, UK; Gary G. Yen, Oklahoma State University, USA
Digital Object Identifier 10.1109/MCI.2023.3306148 Date of current version: 17 October 2023
Digital Object Identifier 10.1109/MCI.2023.3306271
PS: Have to leave you with a picture:
A very small part of the CIS volunteer and staff family after an AdCom meeting before the first IEEE Conference on AI.
Conference Reports
Gary B. Fogel, Natural Selection, Inc., USA
Piero Bonissone, Piero P. Bonissone Analytics, USA
Conference Report on the Inaugural 2023 IEEE Conference on Artificial Intelligence (IEEE CAI 2023)
Digital Object Identifier 10.1109/MCI.2023.3306149 Date of current version: 17 October 2023

The inaugural 2023 IEEE Conference on Artificial Intelligence was held June 5-6 at the Hyatt Regency Santa Clara in California's Silicon Valley. This new event was several years in the making and was co-sponsored by four IEEE societies: the IEEE Computational Intelligence Society (the lead organization for the first two years), the IEEE Computer Society, the IEEE Signal Processing Society, and the IEEE Systems, Man, and Cybernetics Society. Given the co-sponsorship, the event is managed by a steering committee with members from all four societies, with a team from one of the societies serving as lead sponsor and running the event each year.

Our initial ideas for the event began in 2019 and developed through 2020 during the COVID-19 pandemic. After attending an AI Exposition, we determined that a large event would require cooperation across multiple societies and that starting a new conference series during a pandemic would not be wise. Plans were made to launch the series in 2023 while we settled the co-sponsorship arrangements. We were pleased to work with staff from the IEEE Computer Society in the planning and development of the conference itself, with content focused on the six verticals of interest noted previously in IEEE CIM: AI in transportation, energy, healthcare, earth systems, industrial optimization, and the societal implications of these applications. We faced additional headwinds going into 2023 as recession fears loomed across corporate America, leading to budget cuts for conference engagement. The poorly timed bankruptcy of Silicon Valley Bank also did not help matters. However, we remained eager to have a fully in-person post-pandemic conference with as much personal interaction as possible.
We welcomed four keynote speakers to the conference. Sean Lie from Cerebras gave a great lecture about their new chips with 2.6 trillion transistors designed for AI applications. Peter Norvig from Stanford provided insights about how AI is transforming education. Kunle Olukotun from SambaNova discussed the importance of reconfigurable dataflow accelerators for improved machine learning models. Finally, Serafim Batzoglou of Seer Bio spoke about the many ways that large language models are game changers in bioinformatics.
FIGURE 1 Attendees enjoying lunch in the IEEE CAI 2023 exhibit hall.
FIGURE 2 Zita Vale (left) and Marley Vellasco (right) enjoying their time at IEEE CAI 2023.
FIGURE 3 Cecilia Metra (left) presents Kathleen McKeown (right) with her IEEE TFA award.
FIGURE 4 IEEE CAI 2023 keynote presenter Peter Norvig addresses questions from the audience after his lecture.
Along with these keynotes, 70 other invited speakers provided presentations in the form of lectures, panels, or workshops. We were pleased to have the assistance of IEEE Entrepreneurship in the development of their workshop on AI in Entrepreneurship, to help people navigate the waters from initial company concept to funding. We were also fortunate to receive many excellent proposals for panels and workshops on a variety of topics, from IEEE Standards in AI to various societal and ethical considerations. We were also pleased to have Dr. Kathleen McKeown receive her IEEE Innovation in Societal Infrastructure Technical Field Award at IEEE CAI 2023. In addition, we especially thank Sanaz Mostaghim and Keeley Crockett for helping to arrange an IEEE Women in Computational Intelligence reception to encourage women to pursue careers in AI and CI. We were also pleased to have a selection of exhibitors representing components of IEEE as well as companies.

We received a total of 292 poster abstract submissions, of which 154 were accepted for the proceedings (a 53% acceptance rate). The top papers in each vertical were identified, with the top two papers from each of five verticals invited to present their material via oral presentation. From these 10 top papers, three were selected to receive best paper awards. Additionally, another three papers were selected for best poster awards across four poster sessions (thanks to all poster judges, and especially Stephen Smith as poster chair). In total, 364 attendees registered for the event, 62% of whom were non-authors. These attendees had broad geographic reach, although, not unexpectedly for a first conference, 69% of them were from the United States. Seventeen travel grants were awarded, mainly to graduate student members of IEEE CIS or IEEE CS (thanks to Hemant Singh, Sanaz Mostaghim, and Jo-Ellen Snyder for processing these awards). Along with Piero Bonissone and Gary Fogel as General Co-Chairs, a very large organizing committee helped make the conference a success.
FIGURE 5 Piero Bonissone (left) and Gary Fogel (right) are pleased to reach the end of IEEE CAI 2023 as General Co-Chairs.
The committee included Marley Vellasco as Finance Chair; Ivor Tsang, Yew-Soon Ong, and Hussein Abbass as Technical Program Co-Chairs; and Dejan Milojicic as Industry Liaison and IAB Chair. We also appreciated service by Simon See, Tingwen Huang, and Catherine Huang as Panel Session Co-Chairs; Liang Feng and Guang Yang as Submission Co-Chairs; Wei Liu as Proceedings Chair; Keeley Crockett as Inclusion and Diversity Chair; and Daoyi Dong, Choon Ki Ahn, Xiaojie Su, Giancarlo Fortino, Zheng-Hua Tan, and Bihan Wen as Publicity Co-Chairs. For each vertical, we also developed separate committees of 10-15 people to assist in identifying top-quality invited speakers. We also appreciated considerable assistance from staff of the IEEE Computer Society and the IEEE Computational Intelligence Society, as well as from Quantinuum, Apple, Google, Zoox, and GE Aerospace, whose corporate sponsorship helped make the conference possible. If you happened to miss the event, we will soon be posting videos of many of the speakers to the IEEE CIS Resource Center (https://resourcecenter.cis.ieee.org/). And of course, we encourage your attendance at IEEE CAI 2024, which will be held June 25-27 at the Sands Expo and Convention Centre in Singapore!
In Memoriam
Kazuo Tanaka University of Electro-Communications, JAPAN
Obituary for Michio Sugeno
Dr. Michio Sugeno
Digital Object Identifier 10.1109/MCI.2023.3311953 Date of current version: 17 October 2023

The IEEE CIS community lost a giant in the field of fuzzy theory and applications on 9 August, 2023, with the passing of Dr. Michio Sugeno (Professor Emeritus, Tokyo Institute of Technology), aged 83, after a short battle with a brain tumor. Dr. Sugeno, known to all of you, left behind a great number of achievements in fuzzy theory and applications that need no introduction here. His profound contributions, ranging from the fuzzy measure/Sugeno integral to fuzzy control and computing with words, have had and will continue to have a lasting influence on generations of researchers. Among his immense scientific impact, he made many significant contributions in the area of fuzzy control, such as the demonstration of parking control of a model car at the 2nd International Fuzzy System Association World Congress (IFSA 1987), Tokyo, Japan, and the flight control of an unmanned helicopter in the early 1990s (Figure 1), to name a few. These were groundbreaking attempts to pioneer one of the hottest topics of recent years: automated vehicle driving and unmanned aerial vehicle control.

FIGURE 1 Dr. Sugeno's helicopter exhibited at the 1991 International Fuzzy Engineering Symposium (IFES'91), 13–15 November, 1991, Yokohama, Japan. (a) Dr. Sugeno explaining his helicopter to Prof. Zadeh, other professors, and the audience. (b) Dr. Sugeno's helicopter and some members of his laboratory at that time (from left to right: Prof. Toshiaki Murofushi, Prof. Junji Nishino, and Prof. Kazuo Tanaka).

In addition, the development of the Takagi-Sugeno fuzzy model serves as one of the most significant milestones in the history of fuzzy control. The appearance of this modeling framework at once opened the heavy door of theoretical barriers, such as the stability of fuzzy control systems. In fact, with Dr. Sugeno and Prof. Hua O. Wang, Boston University (see Figure 2), I was able to organize a workshop entitled "Design and Analysis of Fuzzy Control Systems: A System-Theoretic Approach" at the 2001 American Control Conference (ACC), Arlington, Virginia, June 24-27, 2001. The ACC is one of the largest and most important conferences on control theory and applications. The organization of this workshop demonstrated the widespread acceptance of the new fuzzy control approach as a nonlinear control method in systems and control theory.

FIGURE 2 Photo taken during the workshop at the 2001 American Control Conference, 24–27 June, 2001, Arlington, VA, USA. (From left to right: Prof. Hua O. Wang, Boston University, Dr. Michio Sugeno, and Prof. Kazuo Tanaka).

Dr. Sugeno worked in industry for three years and then joined the Tokyo Institute of Technology, Tokyo, Japan, first as a Research Associate, rising to Associate Professor and then Professor, over the span from 1965 to 2000. After retiring from the Tokyo Institute of Technology, he was a Laboratory Head with the Brain Science Institute, RIKEN, from 2000 to 2005, and then a Distinguished Visiting Professor with Doshisha University from 2005 to 2010. Finally, he was an Emeritus Researcher with the European Centre for Soft Computing, Spain, from 2010 to 2015. Dr. Sugeno was the President of the Japan Society for Fuzzy Theory and Systems from 1991 to 1993, and also the President of the International Fuzzy Systems Association from 1997 to 1999. He was, together with Zadeh, the first recipient of the IEEE CIS Pioneer Award in Fuzzy Systems in 2000. He was also the recipient of the 2010 IEEE Frank Rosenblatt Award and of the IEEE International Conference on Systems, Man, and Cybernetics 2017 Lotfi A. Zadeh Pioneer Award.

Dr. Sugeno was an intellectual giant with a myriad of theoretical and technical achievements, and he led the fuzzy community as a pioneer throughout his distinguished career. As one of his former Ph.D. students, I was fortunate to learn about his belief and passion for research through his thought-provoking advice and guidance. Outside of research, Dr. Sugeno loved to play contract bridge and drink wine and Japanese sake. Even after I had received my Ph.D. degree, it was a great pleasure for me to have many opportunities to drink with him. I also remember participating in contract bridge camps hosted by him, where we enjoyed playing intellectual card games for more than 12 hours a day! The last time I met him was on 21 December, 2019, at his private drinking party (see Figure 3) in Yokohama, Japan, with some of his former students and an assistant, just before the COVID-19 pandemic. After that, I regret that I never had another opportunity to enjoy drinking with Dr. Sugeno due to the pandemic. His contributions and leadership to the fuzzy community were invaluable, and we will miss a great leader.

FIGURE 3 Dr. Sugeno, his former students, and an assistant, 21 December, 2019, Yokohama, Japan. (From left to right: Prof. Junji Nishino, University of Electro-Communications, Dr. Michio Sugeno, Prof. Yasuo Narukawa, Tamagawa University, Prof. Motoya Machida, Tennessee Tech University, Prof. Toshiaki Murofushi, Tokyo Institute of Technology, Ms. Kuniko Miyakawa, a former assistant, Prof. Kazuo Tanaka, and Prof. Katsushige Fujimoto, Fukushima University).

In Japan, professors are usually addressed with the suffix "-sensei." Finally, I would like to finish this obituary in the Japanese style. May Sugeno-sensei rest in peace.
Publication Spotlight
Yongduan Song, Chongqing University, CHINA
Dongrui Wu, Huazhong University of Science and Technology, CHINA
Carlos A. Coello Coello, CINVESTAV-IPN, MEXICO
Georgios N. Yannakakis, University of Malta, MALTA
Huajin Tang, Zhejiang University, CHINA
Yiu-Ming Cheung, Hong Kong Baptist University, HONG KONG
Hussein Abbass, University of New South Wales, AUSTRALIA
CIS Publication Spotlight
IEEE Transactions on Neural Networks and Learning Systems
Improving the Accuracy of Spiking Neural Networks for Radar Gesture Recognition Through Preprocessing, by A. Safa, F. Corradi, L. Keuninckx, I. Ocket, A. Bourdoux, F. Catthoor, and G. G. E. Gielen, IEEE Transactions on Neural Networks and Learning Systems, Vol. 34, No. 6, Jun. 2023, pp. 2869–2881.

Digital Object Identifier: 10.1109/TNNLS.2021.3109958

"Event-based neural networks are currently being explored as efficient solutions for performing AI tasks at the extreme edge. To fully exploit their potential, event-based neural networks coupled to adequate preprocessing must be investigated. Within this context, we demonstrate a 4-b-weight spiking neural network (SNN) for radar gesture recognition, achieving a state-of-the-art 93% accuracy within only four processing time steps while using only one convolutional layer and two fully connected layers. This solution consumes very little energy and area if implemented in event-based hardware, which makes it suited for embedded extreme-edge applications. In addition, we demonstrate the importance of signal preprocessing for achieving this high recognition accuracy in SNNs compared to deep neural networks (DNNs) with the same network topology and training strategy. We show that efficient preprocessing prior to the neural network is drastically more important for SNNs compared to DNNs. We also demonstrate, for the first time, that the preprocessing parameters can affect SNNs and DNNs in antagonistic ways, prohibiting the generalization of conclusions drawn from DNN design to SNNs. We demonstrate our findings by comparing the gesture recognition accuracy achieved with our SNN to a DNN with the same architecture and similar training. Unlike previously proposed neural networks for radar processing, this work enables ultralow-power radar-based gesture recognition for extreme-edge devices."

Digital Object Identifier 10.1109/MCI.2023.3306163 Date of current version: 17 October 2023
IEEE Transactions on Fuzzy Systems
More Than Accuracy: A Composite Learning Framework for Interval Type-2 Fuzzy Logic Systems, by A. Beke and T. Kumbasar, IEEE Transactions on Fuzzy Systems, Vol. 31, No. 3, Mar. 2023, pp. 734–744.

Digital Object Identifier: 10.1109/TFUZZ.2022.3188920

"In this article, we propose a novel composite learning framework for interval type-2 (IT2) fuzzy logic systems (FLSs) to train regression models with a high accuracy performance and capable of representing uncertainty. In this context, we identify three challenges: first, the uncertainty handling capability; second, the construction of the composite loss; and third, a learning algorithm that overcomes the training complexity while
taking into account the definitions of IT2-FLSs. This article presents a systematic solution to these problems by exploiting the type-reduced set of IT2-FLS via fusing quantile regression and deep learning (DL) with IT2-FLS. The uncertainty processing capability of IT2-FLS depends on the employed center-of-sets calculation methods, while its representation capability is defined via the structure of its antecedent and consequent membership functions. Thus, we present various parametric IT2-FLSs and define the learnable parameters of all IT2-FLSs alongside their constraints to be satisfied during training. To construct the loss function, we define a multiobjective loss and then convert it into a constrained composite loss composed of the log-cosh loss for accuracy purposes and a tilted loss for uncertainty representation, which explicitly uses the type-reduced set. We also present a DL approach to train IT2-FLS via unconstrained optimizers. In this context, we present parameterization tricks for converting the constrained optimization problem of IT2-FLSs into an unconstrained one without violating the definitions of fuzzy sets. Finally, we provide comprehensive comparative results for hyperparameter sensitivity analysis and an inter/intra-model comparison on various benchmark datasets."

Fuzzy Clustering With Knowledge Extraction and Granulation, by X. Hu, Y. Tang, W. Pedrycz, K. Di, J. Jiang, and Y. Jiang, IEEE Transactions on Fuzzy Systems, Vol. 31, No. 4, Apr. 2023, pp. 1098–1112.

Digital Object Identifier: 10.1109/TFUZZ.2022.3195033

"Knowledge-based clustering algorithms can improve traditional clustering models by introducing domain knowledge to identify the underlying data structure. While there have been several approaches to clustering with the guidance of knowledge tidbits, most of them mainly focus on numeric knowledge without considering the uncertain nature of information. To capture the uncertainty of information, pure numeric knowledge tidbits are expanded to knowledge granules in this article. Then, two questions arise: how to obtain granular knowledge and how to use those
knowledge granules in clustering. To this end, a novel knowledge extraction and granulation (KEG) method and a granular knowledge-based fuzzy clustering model are proposed in this study. First, inspired by the concept of natural neighbors, an automatic KEG is developed. In KEG, high-density points are filtered from the dataset and then merged with their natural neighbors to form several dense areas, i.e., granular knowledge. Furthermore, the granular knowledge expressed by interval or triangular numbers is leveraged into the clustering algorithm, which is the framework of fuzzy clustering with granular knowledge. To concretize this model into clustering algorithms, the classical fuzzy C-Means clustering algorithm has been selected to incorporate the granular knowledge produced by KEG. Then, the corresponding fuzzy C-Means clustering with interval knowledge granules (IKG-FCM) and triangular knowledge granules (TKG-FCM) are proposed. Experiments on synthetic and real-world datasets demonstrate that IKG-FCM and TKG-FCM always achieve better clustering performance with less time cost, especially on imbalanced data, compared with state-of-the-art algorithms."

IEEE Transactions on Evolutionary Computation
Explainable Artificial Intelligence by Genetic Programming: A Survey, by Y. Mei, Q. Chen, A. Lensen, B. Xue, and M. Zhang, IEEE Transactions on Evolutionary Computation, Vol. 27, No. 3, Jun. 2023, pp. 621–641.

Digital Object Identifier: 10.1109/TEVC.2022.3225509

"Explainable artificial intelligence (XAI) has received great interest in the recent decade, due to its importance in critical application domains, such as self-driving cars, law, and healthcare. Genetic programming (GP) is a powerful evolutionary algorithm for machine learning. Compared with other standard machine learning models such as neural networks, the models evolved by GP tend to be more interpretable due to their model structure with symbolic components. However, interpretability has not been explicitly considered in GP until recently,
following the surge in the popularity of XAI. This article provides a comprehensive review of the studies on GP that can potentially improve the model interpretability, both explicitly and implicitly, as a byproduct. We group the existing studies related to explainable artificial intelligence by GP into two categories. The first category considers the intrinsic interpretability, aiming to directly evolve more interpretable (and effective) models by GP. The second category focuses on post-hoc interpretability, which uses GP to explain other black-box machine learning models, or explain the models evolved by GP by simpler models such as linear models. This comprehensive survey demonstrates the strong potential of GP for improving the interpretability of machine learning models and balancing the complex tradeoff between model accuracy and interpretability."

IEEE Transactions on Games
Procedural Generation of Narrative Worlds, by J. T. Balint and R. Bidarra, IEEE Transactions on Games, Vol. 15, No. 2, Jun. 2023, pp. 262–272.

Digital Object Identifier: 10.1109/TG.2022.3216582

"A narrative world typically consists of several interrelated locations that, all together, fully support enacting a given story. For this, each location in a narrative world features all the objects as required there by the narrative, as well as a variety of other objects that plausibly describe or decorate the location. Procedural generation of narrative worlds poses many challenges, including that, first, it cannot lean only on domain knowledge (e.g., patterns of objects commonly found in typical locations), and, second, it involves a temporal dimension, which introduces dynamic fluctuations of objects between locations. In this article, we present a novel approach for the procedural generation of narrative worlds, following two stages: first, a narrative world mold is generated (only once) for a given story; second, the narrative world mold is used to create one (or more) possible narrative worlds for that story. For each story, its narrative world mold integrates spatiotemporal descriptions of its locations with
the object semantics and the domain knowledge previously acquired on typical locations. We describe how a narrative world mold can be generated, as well as how it can be fed to existing procedural generation methods, to create a variety of narrative worlds that fit that narrative. We evaluate our own implementation of this approach, performing a number of experiments that illustrate both the expressive power of narrative world molds and their ability to steer the generation of narrative worlds."

IEEE Transactions on Cognitive and Developmental Systems
Learning Abstract Representations Through Lossy Compression of Multimodal Signals, by C. Wilmot, G. Baldassarre, and J. Triesch, IEEE Transactions on Cognitive and Developmental Systems, Vol. 15, No. 2, Jun. 2023, pp. 348–360.

Digital Object Identifier: 10.1109/TCDS.2021.3108478

"A key competence for open-ended learning is the formation of increasingly abstract representations useful for driving complex behavior. Abstract representations ignore specific details and facilitate generalization. Here, we consider the learning of abstract representations in a multimodal setting with two or more input modalities. We treat the problem as a lossy compression problem and show that generic lossy compression of multimodal sensory input naturally extracts abstract representations that tend to strip away modality-specific details and preferentially retain information that is shared across the different modalities. Specifically, we propose an architecture that is able to extract information common to different modalities based on the compression abilities of generic autoencoder neural networks. We test the architecture with two tasks that allow: 1) the precise manipulation of the amount of information contained in and shared across different modalities and 2) testing the method on a simulated robot with visual and proprioceptive inputs. Our results show the validity of the proposed approach and
demonstrate the applicability to embodied agents."

IEEE Transactions on Emerging Topics in Computational Intelligence
Multi-View Adjacency-Constrained Hierarchical Clustering, by J. Yang and C.-T. Lin, IEEE Transactions on Emerging Topics in Computational Intelligence, Vol. 7, No. 4, Aug. 2023, pp. 1126–1138.
Digital Object Identifier: 10.1109/TETCI.2022.3221491

"This paper explores the problem of multi-view clustering, which aims to promote clustering performance with multi-view data. The majority of existing methods have problems with parameter adjustment and high computational complexity. Moreover, in the past, there have been few works based on hierarchical clustering to learn the granular information of multiple views. To overcome these limitations, we propose a simple but efficient framework: Multi-view adjacency-Constrained Hierarchical Clustering (MCHC). Specifically, MCHC mainly consists of three parts: the Fusion Distance matrices with Extreme Weights (FDEW); adjacency-Constrained Nearest Neighbor Clustering (CNNC); and the internal evaluation Index based on Rawls' Max-Min criterion (MMI). FDEW aims to learn a fusion distance matrix set, which not only uses complementary information among multiple views, but also exploits the information from each single view. CNNC is utilized to generate multiple partitions based on FDEW, and MMI is designed for choosing the best one from the multiple partitions. In addition, we propose a parameter-free version of MCHC (MCHC-PF). Without any parameter selection, MCHC-PF can give partitions at different granularity levels with a low time complexity. Comprehensive experiments on eight real-world datasets validate the superiority of the proposed methods compared with 13 current state-of-the-art methods."
IEEE Transactions on Artificial Intelligence
Building Trustworthy AI Solutions: A Case for Practical Solutions for Small Businesses, by K. Crockett, E. Colyer, L. Gerber, and A. Latham, IEEE Transactions on Artificial Intelligence, Vol. 4, No. 4, Aug. 2023, pp. 778–791.

Digital Object Identifier: 10.1109/TAI.2021.3137091

"Building trustworthy artificial intelligence (AI) solutions, whether in academia or industry, must take into consideration a number of dimensions including legal, social, ethical, public opinion, and environmental aspects. A plethora of guidelines, principles, and toolkits have been published globally, but have seen limited grassroots implementation, especially among small- and medium-sized enterprises (SMEs), mainly due to the lack of knowledge, skills, and resources. In this article, we report on qualitative SME consultations over two events to establish their understanding of both data and AI ethical principles and to identify the key barriers SMEs face in their adoption of ethical AI approaches. We then use independent experts to review and code 77 published toolkits designed to build and support ethical and responsible AI practices, based on 33 evaluation criteria. The toolkits were evaluated considering their scope to address the identified SME barriers to adoption, human-centric AI principles, AI life cycle stages, and key themes around responsible AI and practical usability. Toolkits were ranked on the basis of criteria coverage and expert intercoder agreement. Results show that there is not a one-size-fits-all toolkit that addresses all criteria suitable for SMEs. Our findings show few exemplars of practical application, little guidance on how to use/apply the toolkits, and very low uptake by SMEs. Our analysis provides a mechanism for SMEs to select their own toolkits based on their current capacity, resources, and ethical awareness levels, focusing initially at the conceptualization stage of the AI life cycle and then extending throughout."
Harness the publishing power of IEEE Access®.

IEEE Access is a multidisciplinary open access journal offering high-quality peer review, with an expedited, binary review process of 4 to 6 weeks. As a journal published by IEEE, IEEE Access offers a trusted solution for authors like you to gain maximum exposure for your important research.

Explore the many benefits of IEEE Access:
• Receive high-quality, rigorous peer review in only 4 to 6 weeks
• Reach millions of global users through the IEEE Xplore® digital library by publishing open access
• Submit multidisciplinary articles that may not fit in narrowly focused journals
• Obtain detailed feedback on your research from highly experienced editors
• Establish yourself as an industry pioneer by contributing to trending, interdisciplinary topics in one of the many topical sections IEEE Access hosts
• Present your research to the world quickly since technological advancement is ever-changing
• Take advantage of features such as multimedia integration, usage and citation tracking, and more
• Publish without a page limit for $1,750 per article

Learn more at ieeeaccess.ieee.org
Surrogate-Assisted Many-Objective Optimization of Building Energy Management
Abstract—Building energy management usually involves a number of objectives, such as investment costs, thermal comfort, system resilience, battery life, and many others.
However, most existing studies merely consider optimizing fewer than three objectives, since optimization becomes increasingly difficult as the number of objectives increases. In addition, the optimization of building energy management relies heavily on time-consuming energy component simulators, posing great challenges for conventional evolutionary algorithms, which typically require a large number of real function evaluations. To address the above-mentioned issues, this paper formulates a building energy management scenario as a 10-objective optimization problem, aiming to find optimal configurations of power supply components. To solve this expensive many-objective optimization problem, six state-of-the-art multiobjective evolutionary algorithms, five of which are assisted by surrogate models, are compared. The experimental results show that the adaptive reference vector assisted algorithm is
Digital Object Identifier 10.1109/MCI.2023.3304073 Date of current version: 17 October 2023
Corresponding author: Yaochu Jin (email: [email protected]).
Qiqi Liu
Westlake University, CHINA and also Bielefeld University, GERMANY
Felix Lanfermann, Tobias Rodemann, and Markus Olhofer
Honda Research Institute Europe, GERMANY
Yaochu Jin
Bielefeld University, GERMANY
proven to be the most competitive among the six compared algorithms; the five evolutionary algorithms with surrogate assistance always outperform their counterpart without a surrogate, although the kriging-assisted reference vector guided evolutionary algorithm performs only slightly better than the algorithm without surrogate assistance on the 10-objective building energy management problem. By analyzing the non-dominated solutions obtained by the six algorithms, an optimal configuration of power supply components can be obtained within an affordable period of time, providing decision makers with new insights into the building energy management problem.
I. Introduction
The concept of reducing energy consumption to combat global warming has attracted increasing attention. In large building complexes, such as campus and office buildings, substantial energy consumption is an everyday inevitability. Thus, managing the production, storage, and consumption of energy in these complexes is of great importance. Investing in renewable energy, such as photovoltaic (PV) systems, is, in this light, one of many options for improving the cost and emission performance of a facility. For facilities like the Honda Research Institute, an investment in PV or battery systems has to consider a multitude of factors, such as investment costs, greenhouse gas emissions, profitability, grid stability, equipment (battery) lifetime, and system resilience. However, these objectives are usually conflicting. Moreover, with the involvement of different stakeholders, their preferences and trade-offs among the factors should likewise be considered; the obtained solutions may not be applicable for implementation if some important objectives are not considered. As a result, many-objective optimization of building energy management (BEM) is a highly desirable alternative.

A common approach to solving multi-objective problems (with two or three objectives) and many-objective problems (with more than three objectives) is to use multi-objective evolutionary algorithms (MOEAs), such as the non-dominated sorting genetic algorithm II (NSGA-II) [1], the non-dominated sorting genetic algorithm III (NSGA-III) [2], the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [3], and the reference vector guided evolutionary algorithm (RVEA) [4], just to name a few. Many researchers have proposed using canonical MOEAs to simultaneously optimize conflicting objectives within the context of building energy management to identify an optimal configuration of power supply components. For instance, the study in [5] endeavors to strike a balance between energy consumption cost and user satisfaction in home energy management, while the research in [6] proposes a strategy to shift the cooling load from peak to off-peak hours, thereby reducing energy costs to a certain extent without compromising thermal comfort. It should be noted that the majority of
building energy management studies optimize no more than three conflicting objectives, since it is challenging for traditional optimization algorithms to strike a balance among more than three objectives. As pointed out in [7], optimizing dwellings could take up to 12 days if MOEAs are used directly with the objective values from simulators such as EnergyPlus. This approach is not cost-effective for real-world applications, particularly for managing energy demand a day ahead, as discussed in [8]. Also, unpredictable environmental factors, such as weather changes, require BEM systems to react quickly. Thus, it is highly desirable that BEM optimization be computationally efficient.

During the past two decades, surrogate-assisted evolutionary algorithms (SAEAs) have proven highly successful in handling computationally expensive problems whose function evaluations can take from minutes to days. While a few SAEAs have been proposed for expensive multi-objective problems (MOPs) and many-objective problems (MaOPs), it remains challenging to apply them to real-world applications. The first challenge is problem formulation, i.e., whether the formulation is reasonable and whether all essential factors are included for finding a solution suitable for real implementation. Note that an inappropriate formulation could increase the difficulty of handling the problem. As discussed in [9], only with a proper problem formulation could the kriging-assisted reference vector guided evolutionary algorithm (K-RVEA) [10] achieve an optimal solution much better than the baseline on the optimization of an air intake ventilation system. Another challenge is that it can be hard to decide which algorithm should be adopted for a specific problem, especially for black-box problems whose properties are unknown. It is therefore highly desirable to conduct a comparative study of different algorithms when solving real-world applications.

In addition, only a few simulation-based expensive real-world applications with variable simulator runtime or many-objective properties have been reported in the past decade. In particular, a hybrid electric vehicle control problem with seven objectives [11], an automotive engine calibration problem with 10 objectives [12], and the design of radar waveforms with 10 objectives [13] have been proposed, but each function of these three problems is relatively fast to evaluate. The proposed BEM problem in this work consists of 10 objectives, with each function evaluation taking from seconds to minutes. It has the special property that the time for each function evaluation is different, and each function evaluation is expensive, so it is ideal for evaluating the performance of cost-aware Bayesian optimization. It should be noted that the performance of most cost-aware Bayesian optimization approaches, such as [14], [15], [16], is merely tested on the single- or multi-objective optimization of the hyperparameters of neural networks, because of a lack of real-world applications with variable simulator runtime for different function evaluations.
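To make the setting concrete, the minimal sketch below shows how a problem of this size, nine decision variables and 10 objectives, can be wired into an off-the-shelf MOEA. It is an illustration under assumed details, not the toolchain used in this work: the objective functions are cheap placeholders for the expensive simulator calls, and the choice of pymoo's NSGA-II is ours for demonstration only.

```python
# Illustrative sketch only: a 9-variable, 10-objective problem skeleton
# solved with NSGA-II from the pymoo library. The objectives are cheap
# placeholders; in the real BEM problem each evaluation would call a
# building energy simulator and take seconds to minutes.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize


class ToyBEMProblem(ElementwiseProblem):
    def __init__(self):
        # Nine normalized decision variables, ten objectives to minimize.
        super().__init__(n_var=9, n_obj=10, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        # Placeholder objectives: distances to ten different anchor points.
        out["F"] = [float(np.sum((x - i / 10.0) ** 2)) for i in range(10)]


res = minimize(ToyBEMProblem(), NSGA2(pop_size=50), ("n_gen", 20),
               seed=1, verbose=False)
print(res.F.shape)  # rows are non-dominated 10-objective vectors
```

With 10 objectives, Pareto dominance alone loses much of its selection pressure, which is one reason many-objective algorithms replace or augment dominance with reference vectors, as the algorithms compared in this paper do.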
Considering the above-mentioned challenges and requirements, the contributions of this work are summarized as follows.
❏ This work identifies the need for many-objective optimization of the BEM problem and formulates a 10-objective building energy management problem with nine decision variables, tailored to a practical, real-world application on a German building complex. As an extension of our previous work [17], in which four conflicting objectives, i.e., investment cost, yearly energy costs, CO2 emissions, and system resilience, are considered, we present six additional exemplary objectives that a decision-maker may take into account in identifying an optimal configuration of power system components.
❏ This work exemplifies a successful application of five multi-objective evolutionary algorithms with surrogate assistance (called multi-objective SAEAs hereafter for simplicity) and one without, i.e., a traditional MOEA, to the 10-objective BEM problem. By comparing these six algorithms, it is demonstrated that the five adopted multi-objective SAEAs significantly improve performance and efficiency, confirming the benefits of multi-objective SAEAs in handling time-consuming real-world applications.
❏ The proposed real-world many-objective BEM application with different evaluation costs can be used in the future for testing SAEAs or cost-aware Bayesian optimization algorithms.
The rest of this paper is organized as follows. Section II describes the related work on building energy management. Section III presents the formulation of the BEM problem. Section IV studies the performance of the five multi-objective SAEAs and one MOEA on the BEM problem. Section V concludes the paper.
II. Related Work

A. Work on Building Energy Management

The optimization of building energy management using SAEAs can be divided into two main categories, i.e., single-objective and multi-objective optimization. In the first group, the conflicting objectives are transformed into a single-objective optimization problem by weighting the different objectives [6], [8], [18], [19] or by treating some objectives as constraints [20] (see the sketch at the end of this subsection). For instance, in both [8] and [20], optimization is conducted with genetic algorithms based on an artificial neural network. However, in [8], a single objective function is formulated by weighting energy cost and load shifting, while in [20], energy consumption is used as the objective function, with thermal comfort as one of the constraints.

In the second group, the conflicting objectives are handled simultaneously using MOEAs. For instance, in [21], four conflicting objectives, i.e., initial investment cost, running costs,
CO2 emissions, and system resilience, are considered, with the expectation of finding a suitable trade-off in terms of finance, environment, and the system. All function evaluations are conducted using the SimulationX simulator. In [7], since direct optimization with NSGA-II based on the building energy simulation software EnergyPlus is time-consuming, parallel computing is adopted to reduce the computational cost.

Some studies propose training a surrogate model, such as an artificial neural network, to replace real function evaluations from a simulator and thereby reduce the computational cost. For instance, in [22], by training an artificial neural network to replace the real function evaluations, the trade-off between energy demand and thermal comfort can be efficiently achieved using the canonical NSGA-II, multi-objective particle swarm optimization [23], and a multi-objective genetic algorithm [24] as the optimizer, respectively. As in [22], NSGA-II is also used as the optimizer in [25]; however, that study aims at simultaneously meeting economic and environmental targets. While leveraging a surrogate model in lieu of actual function evaluations can enhance efficiency, some SAEAs designed for optimizing the BEM problem overlook the essential aspect of the model management strategy, focusing solely on the initially constructed model. This oversight is evidenced in studies such as Delgarm et al. [22] and Xue et al. [25]. The model management strategy is of great importance for balancing exploitation and exploration during the search, since the initially built surrogate model may not be accurate. Only a few researchers have studied improving the performance of building energy management by considering model management strategies. In [26], [27], all solutions obtained by the multi-objective particle swarm optimizer are evaluated by the EnergyPlus simulator at each round of surrogate update. In [28], the kriging variance is used in the model management strategy to obtain a set of well-diversified and well-converged solutions. Note that the above-mentioned studies only consider balancing two conflicting objectives in building energy management.

In summary, only a few studies have been conducted on building energy management. As far as we know, very few of them have studied a many-objective scenario, and those that have considered at most five objectives. However, many-objective optimization of the BEM problem is essential for solutions to be finally implemented in the real world.
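The following minimal sketch illustrates the weighted-sum transformation used by the first group of studies above; the two objective functions and the weights are hypothetical placeholders of our own, not the formulations of the cited works.

```python
# Hypothetical sketch of weighted-sum scalarization: two conflicting BEM
# objectives are collapsed into one objective for a single-objective
# optimizer. Both objective functions below are invented placeholders.
import numpy as np


def energy_cost(x):
    # Placeholder for a simulator-derived yearly energy cost.
    return float(np.sum(x ** 2))


def thermal_discomfort(x):
    # Placeholder thermal comfort violation measure (lower is better).
    return float(np.sum((x - 0.5) ** 2))


def scalarized(x, w=(0.7, 0.3)):
    # The weights encode a fixed preference and must be chosen before
    # the search starts; each weight setting yields one trade-off point.
    return w[0] * energy_cost(x) + w[1] * thermal_discomfort(x)


print(scalarized(np.full(9, 0.4)))
```

Because the preference is fixed a priori, recovering a different trade-off requires a new run with new weights, whereas the Pareto-based methods of the second group approximate the whole trade-off front in a single run.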
B. Surrogate-Assisted Evolutionary Algorithms

In most evolutionary algorithms, it is assumed that the computation of objective values is relatively cheap and fast, i.e., the objective values can be obtained quickly for any set of decision variables. Nonetheless, in certain applications, each function evaluation can be time-consuming. The direct application of evolutionary algorithms to these computationally expensive problems is therefore not feasible, because performing a substantial number of real function evaluations is impractical. For instance, a single simulation run in computational fluid dynamics [9] or a car crash simulation [29] could
take days to months. Different from traditional evolutionary algorithms, SAEAs build surrogate models to partially replace the real objective function evaluations [30]. In SAEAs, apart from the surrogate construction, the model management strategy, or the acquisition function, plays a pivotal role. This strategy determines the next query point so as to effectively improve the set of optimal solutions by balancing exploration and exploitation. An acquisition function is used to determine the next most suitable solution to be infilled using a real function evaluation. Given that each real function evaluation is notably time-intensive for these expensive problems, evolutionary optimization can instead be applied to the acquisition functions. These computationally economical functions offer a cost-effective alternative, thereby conserving computational resources in SAEAs. The resulting optimal solution is then evaluated using the real objective functions, and the surrogates, such as Gaussian process models (GP, also called kriging) [31], are updated. Representative work on SAEAs for handling expensive MOPs and MaOPs can be mainly categorized into two groups. In the first group, a surrogate is built for each objective to represent the real objective function, as in K-RVEA [10], the kriging-assisted two-archive evolutionary algorithm (KTA2) [32], the Euclidean distance based expected improvement matrix (EIM-EGO) [33], and MOEA/D assisted efficient global optimization (MOEA/D-EGO) [34]. This body of work seldom considers the PF shapes; most recently, a kriging-assisted evolutionary algorithm, termed GP-iGNG [35], was proposed to solve problems with various Pareto front shapes, showing very promising results. Unlike the above-mentioned approaches, two optimization processes are performed in parallel in [36], instead of one, to optimize an amplified upper confidence bound in order to balance exploration and exploitation well. In the second group, instead of directly predicting the objective values, a surrogate is constructed to predict the dominance relationship. The classification-based surrogate-assisted evolutionary algorithm (CSEA) [37], the dominance prediction assisted evolutionary algorithm [38], and the recent relation learning and prediction assisted evolutionary algorithm (REMO) [39] are representative of this work. In this group, a classifier that can distinguish the relative quality of pairwise individuals is usually required, but the main difficulty is a lack of sufficient training data. To overcome this issue, a large amount of training data is constructed by utilizing the pairwise relations between solutions [39], [40].
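To make the acquisition step concrete, the following minimal sketch scores candidate solutions with a lower confidence bound built from a per-objective GP surrogate. It is an illustration only, not the acquisition function of any specific algorithm cited above; the `gp` object (anything exposing scikit-learn's `predict(..., return_std=True)`) and the amplification weight `alpha` are assumptions.

```python
import numpy as np

def lcb(gp, X_cand, alpha=2.0):
    """Lower confidence bound for minimization: mean - alpha * std.

    A low predicted mean rewards exploitation, while a large
    predictive standard deviation rewards exploration.
    """
    mu, std = gp.predict(X_cand, return_std=True)
    return mu - alpha * std

def next_infill_point(gp, X_cand):
    # The cheap acquisition is evaluated on many candidates; only the
    # winner is submitted to the expensive real function evaluation.
    return X_cand[np.argmin(lcb(gp, X_cand))]
```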
III. Problem Statement

The present optimization problem is an extended version of a real-world facility optimization problem, primarily targeting the reduction of energy costs and CO2 emissions. This revised version incorporates a PV system, a battery storage facility, and several pragmatic modifications to the heating system, bearing space constraints and cost factors in mind. The objectives presented in this work are derived from
this analysis and represent different goals and concerns of facility management. The focus of the BEM problem is to pinpoint an optimal assembly of power supply components for an individual building. This building comprises various sections, including office spaces, automotive testing benches, computer infrastructure, and other such facilities. Moreover, the problem includes a sensible combination of different investment options, such as a PV system, a battery storage, and a heat storage, as well as the respective controller settings. The different configurations are described by nine individual decision variables. The building also uses a combined heat and power (CHP) system, which is not optimized but needs to be considered for the calculation of the objectives. In total, 10 objectives, derived from the simulation output, contribute to the investment decision. These objectives are partially related and might not all be relevant in all related applications. However, they span a range of solution features that a decision-maker could potentially be concerned with. Mathematically, a many-objective optimization problem is defined as follows:
\[
\begin{aligned}
\min \quad & f(x) = (f_1(x), f_2(x), \ldots, f_M(x)), \\
\text{subject to} \quad & x = (x_1, x_2, \ldots, x_D),\; x_i \in \mathbb{R},
\end{aligned} \tag{1}
\]
where $x$ is a vector of $D$ decision variables in the decision space $\mathbb{R}^D$, $M$ is the number of objectives, and $f(x) \in \Lambda \subseteq \mathbb{R}^M$ is the objective vector with $M$ objectives, where $\Lambda$ is the objective space. When $M < 4$, (1) is referred to as a multi-objective optimization problem, and when $M \geq 4$, (1) is referred to as a many-objective optimization problem.

A. Description of the Decision Variables
The problem considers nine decision variables in total, which are discussed in the following.
1) Inclination angle of the photovoltaics system a_PV between 0° and 45°.
2) Orientation angle of the photovoltaics system b_PV between 0° and 360°.
3) Installed peak power of the photovoltaics system P_PV between 10 kW and 450 kW.
4) Nominal battery capacity C_B between 5 kWh and 1000 kWh.
5) Maximum battery state of charge b_SOC,max between 0.50 and 0.95. The battery is charged only up to the specified maximum state-of-charge (SOC) to limit battery aging.
6) Minimum battery state of charge b_SOC,min between 0.05 and 0.40. The battery is discharged only down to the specified minimum SOC to limit battery aging. This excludes the emergency power supply.
7) Battery charging threshold P_charge between -500 kW and 149.9 kW. The stationary battery is charged when the current overall power demand is below the specified value.¹
¹ A detailed description of the battery control approach can be obtained from [17].
TABLE I The range of the decision variables. The hyphen symbol indicates that no unit is associated with the corresponding decision variable.

PARAMETER   | MIN VALUE | MAX VALUE | UNIT | DESCRIPTION                    | SYSTEM COMPONENT
a_PV        | 0         | 45        | °    | PV inclination angle           | PV system
b_PV        | 0         | 360       | °    | PV orientation angle           | PV system
P_PV        | 10        | 450       | kW   | PV peak power                  | PV system
C_B         | 5         | 1000      | kWh  | Battery capacity               | Battery storage
b_SOC,max   | 0.50      | 0.95      | -    | Maximum battery SOC            | Battery storage
b_SOC,min   | 0.05      | 0.40      | -    | Minimum battery SOC            | Battery storage
P_charge    | -500      | 149.9     | kW   | Battery charging threshold     | Battery storage
P_discharge | 150       | 700       | kW   | Battery discharging threshold  | Battery storage
V_CHP       | 1         | 5         | m³   | Heat storage volume            | CHP
8) Battery discharging threshold P_discharge between 150 kW and 700 kW. The stationary battery is discharged when the current overall power demand is above the specified value.
9) Heat storage cylinder volume V_CHP between 1 m³ and 5 m³.
A summary of the decision variables and the affected system components is given in Table I. For the optimization framework, all parameters are normalized to the range 0 to 1.
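A minimal sketch of this normalization, using the bounds from Table I; the fixed variable ordering and the NumPy interface are illustrative assumptions rather than the actual optimization framework's API.

```python
import numpy as np

# Bounds of the nine decision variables in the order of Table I
# (angles in degrees, powers in kW, capacity in kWh, SOC limits
# unitless, heat storage volume in m^3).
LOWER = np.array([0.0, 0.0, 10.0, 5.0, 0.50, 0.05, -500.0, 150.0, 1.0])
UPPER = np.array([45.0, 360.0, 450.0, 1000.0, 0.95, 0.40, 149.9, 700.0, 5.0])

def normalize(x):
    """Map a physical decision vector to [0, 1]^9 for the optimizer."""
    return (np.asarray(x) - LOWER) / (UPPER - LOWER)

def denormalize(z):
    """Map a point in [0, 1]^9 back to physical units for the simulator."""
    return LOWER + np.asarray(z) * (UPPER - LOWER)
```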
B. Objective Function Description

A study comparing different many-objective optimization algorithms (without surrogate assistance) on this problem was conducted in [17]. However, this earlier study only focused on five independent objectives (the fifth objective, battery life, is not considered), whereas this work targets the evaluation of 10 objectives in total. A summary of the objectives is given in Table II. As a prerequisite for the optimizer, all objectives need to be minimized. Note that hardware installation and other costs are based on data from 2019 and might have changed. The computation of the objectives is based on the output of the simulation tool (for example, the grid electricity demand for every 15-min interval in the simulation time period).

TABLE II Objective description and range. The hyphen symbol indicates that no unit is associated with the corresponding objective, or that there is no upper or lower bound.

OBJECTIVE        | MIN VALUE | MAX VALUE | UNIT | DESCRIPTION
C_invest         | 0         | -         | Euro | Investment costs
C_annual         | 0         | -         | Euro | Operation costs
G_total          | 0         | -         | t    | CO2 emissions
R'               | -         | 0         | s    | (negative) Resilience
b̄_SOC            | 0         | 1         | -    | Mean battery SOC
E_batt,discharge | 0         | -         | kWh  | Discharged energy
P_peak,supply    | 0         | -         | kW   | Supply power peak
t_m'             | 0         | 1         | -    | Medium SOC share
E_feed           | 0         | -         | kWh  | Feed-in energy
P_peak,feed      | 0         | -         | kW   | Feed-in power peak
Also note that, in contrast to many linearized simulation models that only employ analytical functions approximating systems like the PV system or the battery as in [41], our simulator can model even non-linear effects with high precision (at the cost of higher and variable simulation times).
1) Investment costs in Euro. In this paper, mainly three investment cost components are considered, resulting from purchasing the PV system, the stationary battery, and the combined heat and power plant:

\[
C_{invest} = I_{PV} + I_{Batt} + I_{HeatStor}, \tag{2}
\]
where I_PV, I_Batt, and I_HeatStor refer to the investment costs for the PV system, the stationary battery, and the heat storage tank, respectively. Linearly scaling cost factors are assumed for all three components (1000 Euro/kW of PV peak power, 250 Euro/kWh of battery capacity, 700 Euro/m³ of heat storage).
2) Annual operation costs in Euro per year. Maintaining and operating the whole system involves the grid electricity cost, the gas consumption, the peak electricity load cost, and the CHP maintenance cost. These four costs constitute the annual cost:

\[
C_{annual} = C_{Grid} + C_{Gas} + C_{Peak} + C_{CHP}, \tag{3}
\]
where C_Peak is the additional peak cost for the highest energy demand in a year. The following prices are assumed: CHP maintenance costs of 4.3 Euro per hour of operation, a gas price of 0.025 Euro/kWh of power demand from gas, a (beneficial) feed-in tariff of 0.07 Euro/kWh in case excess energy is fed into the grid, a (beneficial) subsidy of 0.087 Euro/kWh of energy produced by the CHP, a fixed base price of 1000 Euro, and a price of 0.131 Euro/kWh for electricity provided from the grid. An additional 100 Euro/kW is charged for the overall maximum power demand peak.
3) Yearly CO2 emissions in tons. The CO2 emissions are estimated based on the simulated electricity and gas consumption of the grid electricity system, the CHP, and the boilers, respectively:

\[
G_{total} = G_{grid} + G_{gas}, \tag{4}
\]
where G_grid is the amount of CO2 emissions from grid electricity, approximated at 500 g/kWh. The term G_gas refers to the amount of CO2 emissions from operating the CHP and boilers, set at 185 g/kWh.
4) Resilience in seconds. The term resilience refers to the duration the company would be able to operate in case no grid power is available, i.e., when energy is only provided by local production (PV system and CHP) and battery energy storage. This is, for example, relevant in cases of severe malfunctions due to extreme weather conditions or malicious physical or cyber-attacks. The resilience is in our case computed as

\[
R = \min \frac{b_{SOC}\, C_B}{P_{load}}, \tag{5}
\]
where b_SOC is the battery's state-of-charge vector (over all simulation time steps), C_B refers to the battery capacity, and P_load is the grid load vector. Resilience thus refers to the minimum, over all 15-min time periods in the simulation, of the energy in the battery (b_SOC(t) · C_B) divided by the respective power consumption P_load(t). This can be interpreted as the time period for which the company would still be able to operate all electric components at the worst point in time, i.e., at the lowest ratio of battery state of charge to current electric load. Since all objectives need to be minimized, Equation (5) is negated as

\[
R' = -R. \tag{6}
\]

(A computational sketch of this and two related objectives is given at the end of this subsection.)
5) Mean battery state of charge b̄_SOC, between 0 and 1. b̄_SOC is the average state of charge of the stationary battery over the entire simulation. On the one hand, a high mean state of charge enables the battery to discharge large amounts of energy to mitigate high peak costs when the overall power consumption is high. On the other hand, a high battery SOC over long periods of time is undesirable, as it leads to faster battery degradation.
6) Yearly energy discharged from the battery E_batt,discharge in kWh. A second indicator for battery degradation is the amount of energy that is discharged from it. The more energy is discharged (and charged), the faster the degradation.
7) Maximum power peak P_peak,supply in kW. The maximum power demand peak is treated as an individual objective. Aside from contributing to higher annual costs due to peak demand charges for the customer, a high maximum power peak may lead to instability of the grid.
8) Time share t_m, between 0 and 1, of the time in which the battery SOC is between 30% and 70%. As a trade-off between battery degradation and the ability to react to high demand charges, this objective creates an additional incentive to charge the battery with a medium amount of
energy. Since all objectives are minimized, the incentive toward an intermediate charge (SOC) level is formulated as

\[
t_m' = 1 - t_m. \tag{7}
\]
9) Yearly energy E_feed fed into the grid in kWh. Minimizing the amount of energy that is fed into the grid has multiple advantages. It maximizes PV power self-consumption and creates a higher level of independence from the energy supplier. It also reduces CO2 emissions and leads to lower annual costs, since the feed-in tariff is lower than the grid supply rate. The yearly energy E_feed is already included in the annual operation costs objective, but might be of special interest for some decision makers.
10) Maximum feed-in power peak P_peak,feed in kW. Similar to the previous objective, a lower maximum feed-in power peak relates to higher PV power self-consumption and generally more efficient usage of self-produced energy. Furthermore, penalizing the maximum feed-in peak might be a realistic option for the supplier to reduce grid instability and frequency issues in the future, which would create additional costs for the consumer. In the present cost structure, there is no monetary impact of P_peak,feed, but due to its impact on grid stability, it is nevertheless considered as a separate objective.
In the BEM, all decision values have an impact on the internal electricity consumption (a vector over time) of the simulated building, and all objectives (except for C_invest) are affected by this consumption, either directly (like C_annual) or indirectly via the amount of energy that is stored in the stationary battery and when this energy is discharged. A sensitivity analysis between the decision variables and all objectives is carried out, which confirms that nine of the ten objectives are affected by all decision variables; only C_invest is determined by P_PV, C_B, and b_SOC,max. The details of the sensitivity analysis are not provided here due to space limits.
In summary, the simulation setup offers multiple directions to consider when optimizing the configuration. A large PV system enables the building to produce a significant share of the overall used energy, keeping the annual costs, peak power, and emissions low, while at the same time requiring a high initial investment. The orientation and inclination of the PV system can be tuned to define at which points in time (daily and seasonal) the system produces the most power. A large battery is also costly, though advantageous with regard to resilience and the supply power peak. Many of the objectives are very sensitive to the four parameters that define the operation of the battery (b_SOC,max, b_SOC,min, P_charge, P_discharge), which creates a challenging optimization task.
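To make the objective definitions above concrete, the following minimal sketch computes three of them from hypothetical simulator outputs (state-of-charge and consumption vectors on a 15-min grid). The function names, the unit conventions (kWh, kW), and the assumption that the inputs are NumPy arrays are illustrative; the actual simulator interface is not specified here.

```python
import numpy as np

def negative_resilience(b_soc, c_b, p_load):
    """Eqs. (5)-(6): worst-case operating time on stored energy alone.

    b_soc:  battery SOC per 15-min step (values in 0..1)
    c_b:    battery capacity in kWh (scalar)
    p_load: grid load per step in kW; kWh / kW = hours -> seconds
    """
    r = np.min(b_soc * c_b / p_load) * 3600.0
    return -r  # negated, since all objectives are minimized

def medium_soc_share(b_soc):
    """Eq. (7): one minus the share of time with SOC in [0.3, 0.7]."""
    return 1.0 - np.mean((b_soc >= 0.3) & (b_soc <= 0.7))

def co2_emissions_tons(e_grid_kwh, e_gas_kwh):
    """Eq. (4) with 500 g/kWh for grid electricity and 185 g/kWh
    for gas consumed by the CHP and boilers; grams -> tons."""
    return (500.0 * np.sum(e_grid_kwh) + 185.0 * np.sum(e_gas_kwh)) / 1e6
```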
IV. Methods

A. Adopted Algorithms

Five multi-objective SAEAs, i.e., GP-iGNG, RVMM, K-RVEA, KTA2, and REMO, and one MOEA, i.e., RVEA-iGNG, are adopted to optimize the BEM problem.
RVEA-iGNG is used to verify whether SAEAs can converge faster than a traditional MOEA with the help of cheap surrogates and a model management strategy. Before discussing the detailed mechanisms of the six algorithms, this study first illustrates how a multi-objective SAEA differs from an MOEA. In multi-objective SAEAs, the function evaluation is partly conducted on the built cheap surrogate model. This way, the computational cost can be reduced immensely when the real objective function is time-consuming. Note that the optimization process is partly based on the predicted objective values instead of the real objective values, so a reliable surrogate model is highly desired, which usually relies on an efficient surrogate model management strategy. A brief introduction of the six algorithms follows.
1) GP-iGNG [35]. GP-iGNG is able to handle expensive problems with various kinds of Pareto front shapes, making it suitable for real-world applications whose Pareto fronts are not known beforehand.
2) RVMM [36]. An adaptive reference vector based model management strategy is proposed. By optimizing an amplified upper confidence bound acquisition function using two optimization processes on top of two sets of reference vectors, RVMM shows great competitiveness in handling expensive many-objective problems.
3) K-RVEA [10]. The optimization is conducted on the basis of the predicted mean values of GP models, and the selection of newly infilled solutions is based on the angle penalized distance and the uncertainty.
4) KTA2 [32]. Three Gaussian process models, i.e., one global GP model and two GP influential point-insensitive models, are proposed to improve the prediction accuracy. An adaptive acquisition function that can adaptively emphasize convergence, diversity, or uncertainty is proposed in KTA2.
5) REMO [39]. Different from regression-based SAEAs, a neural network based relation model is trained to learn the relationship between pairs of candidate solutions.
6) RVEA-iGNG [42]. RVEA-iGNG is an MOEA that is proposed to handle many-objective problems. It is based on the framework of the reference vector assisted evolutionary algorithm (RVEA), and the reference vectors are adjusted by training an improved growing neural gas network. Given the absence of prior knowledge regarding the shape of the true Pareto front of the BEM problem, RVEA-iGNG proves to be an appropriate choice, due to its competitive performance on benchmark problems characterized by various types of Pareto fronts. It is worth mentioning that other state-of-the-art MOEAs could also be used.
To sum up, the five adopted SAEAs differ in the following aspects.
❏ Model construction. In GP-iGNG, RVMM, and K-RVEA, a traditional GP is adopted as the surrogate model for each objective to replace in part the objective function. KTA2 overcomes the drawbacks of conventional GPs by
excluding influential points from the training set and constructing three separate GP models. In REMO, one relation model is constructed using a neural network to learn the relationship between pairs of solutions.
❏ Model management strategy. The model management strategies in GP-iGNG, KTA2, and K-RVEA differ, although they are all based on the solution set obtained by one optimization process that uses the GP models as the approximated objectives. In RVMM, exploration and exploitation are balanced by selecting solutions from two optimization processes, accounting for convergence and diversity, respectively. In REMO, the model management strategy is based on a specific voting strategy, and solutions with higher scores are evaluated using the real function evaluations.
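The structure shared by these algorithms can be summarized in a schematic loop. The sketch below is deliberately generic and is not the pseudocode of any one of the five algorithms: `sample_initial` (e.g., Latin hypercube sampling) and `optimize_acquisition` (the cheap evolutionary search on the surrogates, i.e., the model management step) are hypothetical placeholders, and the per-objective GP construction mirrors K-RVEA-style algorithms only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def saea(evaluate, sample_initial, optimize_acquisition, budget, n_init):
    """Schematic surrogate-assisted evolutionary optimization loop."""
    X = sample_initial(n_init)
    Y = evaluate(X)                      # expensive real evaluations
    while len(X) < budget:
        # One independent GP surrogate per objective.
        models = [GaussianProcessRegressor(normalize_y=True).fit(X, Y[:, m])
                  for m in range(Y.shape[1])]
        X_new = optimize_acquisition(models)   # model management step
        Y_new = evaluate(X_new)                # infill real evaluations
        X, Y = np.vstack([X, X_new]), np.vstack([Y, Y_new])
    return X, Y
```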
V. Experimental Results

A. Parameter Settings

For the five multi-objective SAEAs and the one MOEA, the maximum number of real function evaluations is set to 1000 and 11200, respectively. The population size for the adopted algorithms is set to 230. The initial number of training data for the five multi-objective SAEAs is set to 11D − 1, where D is the dimension of the decision space. These initial 11D − 1 real function evaluations count toward the maximum of 1000 or 11200 real function evaluations. In this paper, the five adopted multi-objective SAEAs are run for 10 independent runs, and the adopted MOEA, i.e., RVEA-iGNG, is run for six independent runs due to time limitations.

B. Performance Indicator
In this study, the hypervolume (HV) [43] is adopted as the performance indicator, since the true Pareto front (PF) of the BEM problem is not known. The reference point for calculating the HV is obtained from all the non-dominated solutions obtained by the algorithms under comparison:

\[
H(S) = \lambda\left(\bigcup_{p \in S,\; p \preceq r} [p, r]\right), \tag{8}
\]
where $\lambda$ denotes the Lebesgue measure, and $[p, r] = \{q \in \mathbb{R}^d \mid p \preceq q \text{ and } q \preceq r\}$ denotes the box delimited below by $p \in S$ and above by $r$. The HV contribution of a point $p \in \mathbb{R}^d$ to a set $S$ is given by

\[
H(p; S) = H(S) - H(S \setminus \{p\}). \tag{9}
\]
A reference point is required for the HV calculation. As discussed in [44], [45], the non-dominated solutions of the six algorithms are first normalized using the ideal point and the nadir point (the minimum and the maximum value of each objective over all non-dominated solutions). Then, the HV values of all normalized solutions are calculated using the reference point (1.1, 1.1, ..., 1.1).
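For illustration, the sketch below computes Eq. (8) exactly for the 2-objective case, together with the per-point contribution of Eq. (9). The 10-objective HV values reported in this study require specialized algorithms; restricting the example to two objectives and assuming inputs already normalized to [0, 1] are simplifications.

```python
import numpy as np

def hypervolume_2d(F, ref=(1.1, 1.1)):
    """Exact HV of a non-dominated 2-objective set (minimization),
    computed by sweeping the points sorted on the first objective."""
    F = F[np.argsort(F[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in F:
        if f1 < ref[0] and f2 < prev_f2:
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def hv_contribution(F, i, ref=(1.1, 1.1)):
    # Eq. (9): the HV lost when point i is removed from the set.
    return hypervolume_2d(F, ref) - hypervolume_2d(np.delete(F, i, axis=0), ref)
```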
FIGURE 1 A system view of the simulation: The model simulates the building power and heat demand based on time and weather conditions. Energy is provided by the grid connection, a PV system, a combined heat and power plant (CHP), and a stationary battery. The battery’s charging and discharging behavior is controlled depending on a predetermined control strategy and internal reference values, e.g., the overall power consumption level.
C. Employed Simulation System and Optimization Framework
The energy management system is simulated using the commercial tool SimulationX², which is based on the Modelica simulation language³ (Figure 1). The adopted SimulationX simulator can model a complete renewable energy system rather than only some of its components, as shown in Figure 1. The hybrid simulation approach employs sensor readings from a real facility and well-tested simulation modules based on fundamental physical equations. The simulation model was built based on an analysis of the real building and smart meter measurements of real energy consumption over several years. The aggregated simulation output, like the yearly energy consumption, has been validated against real consumption values. More information on the simulation approach can be found in [46]. Building energy consumption profiles and weather patterns are based on recorded energy consumption values of real buildings obtained by smart meter measurements. Modelica uses differential equations (DE) describing the underlying physical processes to simulate technical elements like PV systems or batteries. Different levels of detail can be simulated, including non-linear effects like variable technical efficiencies, which are often not represented in simpler simulation approaches. The drawback of this modeling approach is an increased runtime, which additionally depends on the specific characteristics of the simulated system (due to the way the internal DE solver works). For example, simulating the (otherwise) same system with two different battery sizes might take different times. Certain extreme conditions (especially those that no human engineer has ever considered) might even cause the simulation run to stall completely. It is therefore necessary to stop long-running simulations when the runtime exceeds a certain threshold (as will be discussed in more detail in Section V-E) and to discard such solutions (by setting all objectives to the worst possible level). If this time-out threshold is set properly, the impact on the overall simulation results was found to be low. Generally, a configuration needs to be evaluated based on the simulation of at least a complete year (to cover all seasons). However, this work first conducts the optimization based on a single month in order to have sufficient runs for a fair comparison of the algorithms. The month of August is chosen, as it showed the most similar results compared to a full year in previous studies. Since the overall system is influenced by the seasons, e.g., through sunshine and temperature, future work will study the performance of the algorithms on the optimization of the BEM over a complete year. Regarding the optimization tool, the six adopted algorithms under comparison are all implemented in the PlatEMO framework [47] to optimize the BEM application.

² [Online]. Available: https://www.esi-group.com/products/system-simulation
³ [Online]. Available: https://modelica.org/modelicalanguage.html

FIGURE 2 The mean HV curve over the number of real objective function evaluations obtained by the six adopted algorithms over the evolution process. The upper and lower bars denote the standard deviation over 10 runs.
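The timeout mechanism described above can be sketched as follows; `simulate` stands for a hypothetical picklable wrapper around one SimulationX run returning the 10 objective values, and setting the penalized objectives to infinity is one plausible reading of "the worst possible level".

```python
import multiprocessing as mp
import numpy as np

TIMEOUT_S = 7200              # stop simulations after two hours
WORST = np.full(10, np.inf)   # penalty objective vector for discarded runs

def _worker(simulate, x, queue):
    queue.put(simulate(x))    # executed in a separate process

def evaluate_with_timeout(simulate, x):
    """Evaluate one configuration, discarding stalled simulations."""
    queue = mp.Queue()
    proc = mp.Process(target=_worker, args=(simulate, x, queue))
    proc.start()
    proc.join(TIMEOUT_S)
    if proc.is_alive():       # stalled or numerically unstable run
        proc.terminate()
        proc.join()
        return WORST
    return np.asarray(queue.get())
```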
D. Performance Comparison of the Six Algorithms

To further substantiate the potential of surrogate models and efficacious model management strategies in enhancing algorithmic performance, a comparative analysis is conducted. This analysis examines the performance of the five multi-objective SAEAs, namely GP-iGNG, KTA2, K-RVEA, REMO, and RVMM, juxtaposed with a traditional MOEA, RVEA-iGNG, in the context of the BEM optimization. In Figure 2, the HV values of the solutions obtained by the six algorithms over the 1000 real function evaluations are given. It can be seen that RVMM achieves the best HV value among the five SAEAs, followed by REMO and KTA2. Overall, REMO and RVMM achieve similar performance over the number of real function evaluations. GP-iGNG performs slightly worse than KTA2. RVEA-iGNG, an MOEA without surrogate assistance, performs only marginally worse than the state-of-the-art K-RVEA in dealing with the BEM.
FIGURE 3 The total number of individuals each individual could dominate within the optimal solution set obtained by each of the five SAEAs.
In Figure 3, we calculate the total number of individuals that each individual could dominate within the optimal solution set obtained by each of the five SAEAs. The result is almost consistent with the HV results, showing that the number for the solutions obtained by REMO is the largest, followed by RVMM. To observe the degree of improvement of the five multi-objective SAEAs in handling the BEM compared with RVEA-iGNG, we run RVEA-iGNG on the BEM with a maximum number of 11200 real function evaluations. In Figure 4, it is observed that the performance of RVEA-iGNG increases sharply over the first 4000 real function evaluations and then increases at a slower pace until the budget of 11200 real function evaluations is exhausted. The mean HV values obtained by RVMM, REMO, KTA2, GP-iGNG, and K-RVEA with a maximum of 1000 real function evaluations, and by RVEA-iGNG with a maximum of 11200 real function evaluations, are 0.1928, 0.1919, 0.1712, 0.1623, 0.1249, and 0.1804, respectively. It can be concluded that RVMM and REMO converge almost 10 times faster than RVEA-iGNG: the HV values of the solutions obtained by RVMM and REMO with a maximum of 1000 real function evaluations, i.e., 0.1928 and 0.1919, surpass the HV value of the solutions obtained by RVEA-iGNG with a maximum of 11200 real function evaluations, i.e., 0.1804. It is noteworthy that REMO and RVMM, compared to the other three multi-objective SAEAs, demonstrate significantly superior performance in the context of the BEM optimization. The HV values of the solutions obtained by KTA2 and GP-iGNG with a maximum of 1000 real function evaluations, i.e., 0.1712 and 0.1623, are close to that obtained by RVEA-iGNG with a maximum of 11200 real function evaluations.
FIGURE 4 The HV curve over the number of real objective function evaluations obtained by RVEA-iGNG over six independent runs. Note that abnormal solutions are deleted before calculating the HV values; thus, the number of remaining solutions is less than 11200.
E. Runtime and Timeout Analysis
Conventional MOEAs necessitate a considerable quantity of real function evaluations. Therefore, when each evaluation is time-intensive, the number of real function evaluations that can feasibly be carried out is restricted. In this study, RVEA-iGNG is applied to optimize the BEM with a maximum number of 11200 real function evaluations, and one independent run takes about six days. It takes around 65 hours for the five adopted multi-objective SAEAs with a maximum number of 1000 real function evaluations. During the simulation of the BEM, each evaluation may take from a few seconds to minutes. Thus, in this subsection, the histogram of the evaluation times of the 11200 real function evaluations is shown in Figure 5 to illustrate the distribution of the evaluation time. It is observed that the most frequent evaluation time is in the range of about 35 to 65 seconds. Currently, the optimization is conducted based only on data from the month of August, and the evaluation time ranges from seconds to hours for different sets of decision variables, which is not affordable for traditional MOEAs. During the experiments, it is observed that the time for one function evaluation can be up to hours for some sets of decision variables. Therefore, we first record the simulation time of each function evaluation and then set the maximum time limit to 7200 seconds, i.e., if the simulation time of one solution exceeds 7200 seconds, this solution is abandoned to improve the efficiency of the optimization. It is observed that a number of solutions take more than 7200 seconds, while for the majority of solutions, the evaluation time is in the range of 0 to 70 seconds.
FIGURE 5 Histogram of the runtime.
FIGURE 6 Mean squared prediction error of surrogates for all objectives over 10 independent runs.
FIGURE 7 Probability that a solution’s Pareto dominance rank is wrongly predicted over 10 independent runs.
To analyze in which time range the solutions are most beneficial to the optimization process, solutions whose evaluation time exceeds 70 seconds are first removed, and then the HV contribution of the solutions whose simulation time is below 70 seconds is studied. We divide the evaluation time into six sub-ranges, i.e., 0 to 15 s, 15 to 25 s, 25 to 35 s, 35 to 45 s, 45 to 55 s, and 55 to 65 s, and study the HV contribution of the solutions in each sub-range, as shown in Figure 5. It can be seen that the solutions in the range of 15 to 35 seconds contribute the most to the HV values, while solutions below 15 and above 35 seconds contribute less. It is concluded that a reasonable timeout value should not be set to less than 35 seconds if the optimization process is to be sped up. Note that long simulation times are also often an indicator of numerical instabilities, which makes the removal of these solutions even more advisable.
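This binning analysis can be reproduced with a few lines; the sketch below reuses the `hv_contribution` helper from the bi-objective illustration in Section V-B and assumes per-solution evaluation times are available as an array, both of which are simplifying assumptions.

```python
import numpy as np

def hv_contribution_by_time(F, t_eval, bins=(0, 15, 25, 35, 45, 55, 65)):
    """Total HV contribution (Eq. (9)) of the solutions falling into
    each evaluation-time sub-range; solutions above 70 s are removed."""
    keep = t_eval <= 70.0
    F, t_eval = F[keep], t_eval[keep]
    contrib = np.array([hv_contribution(F, i) for i in range(len(F))])
    return [contrib[(lo <= t_eval) & (t_eval < hi)].sum()
            for lo, hi in zip(bins[:-1], bins[1:])]
```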
F. Analysis of the Surrogate Accuracy
Both the surrogate model and the model management strategy are critical factors that significantly influence the performance of multi-objective SAEAs. Note that in Figure 2, RVMM performs the best in terms of HV value, followed by REMO, while K-RVEA ranks the worst. REMO adopts a neural network to predict the relationship between pairs of solutions instead of the objective values as in a GP. Since it is difficult to compare the accuracy of the neural network and GP surrogates directly, only the accuracy of the GP surrogates in the four GP-assisted evolutionary algorithms is analyzed in this subsection. To better compare the adopted algorithms in solving the BEM, we study the model accuracy of GPs trained on a subset of the solution set obtained by each algorithm under comparison: 80 percent of the solutions are selected as training data and 20 percent as test data. The objective values of the test solutions are first predicted using the constructed GP models and then compared with their real objective values. The mean squared errors between the predicted and real objective values are plotted in Figure 6. Interestingly, it is observed that the prediction error of the solution set obtained by K-RVEA is overall the lowest among the four multi-objective SAEAs under comparison. Specifically, RVMM, GP-iGNG, KTA2, and K-RVEA win on four, one, zero, and five out of 10 independent runs in terms of the prediction error. In addition to the mean squared prediction error, we also calculate the probability that a solution's Pareto dominance rank is wrongly predicted, considering that in multi- or many-objective optimization, the rank of a solution, rather than its objective values, determines its quality. Figure 7 shows that KTA2 and GP-iGNG predict the rank more accurately than K-RVEA and RVMM. Interestingly, RVMM occupies the lowest position in terms of rank prediction. This finding suggests that diminished model accuracy does not invariably equate to a decline in the performance of multi-objective SAEAs. In RVMM, two GPs are constructed in parallel in two optimization processes, and their low mean accuracy may result from the two GPs serving different purposes, i.e., one emphasizing accelerated convergence and the other diversity. Note that in K-RVEA and GP-iGNG, the optimization process is based on the predicted mean values of a constructed GP model, supporting the statement that, apart from the constructed surrogate model, the model management strategy is vital in guiding the search process.
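The 80/20 accuracy test can be sketched as follows; the use of scikit-learn's GaussianProcessRegressor with default settings is an assumption for illustration and does not reproduce the kernels or preprocessing of the compared algorithms.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import train_test_split

def surrogate_mse(X, Y):
    """Per-objective mean squared error of GP surrogates trained on
    80% of a solution set and tested on the remaining 20%."""
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2)
    errors = []
    for m in range(Y.shape[1]):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X_tr, Y_tr[:, m])
        errors.append(np.mean((gp.predict(X_te) - Y_te[:, m]) ** 2))
    return np.array(errors)
```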
G. Property Analyses of the BEM Problem
As previously discussed, certain solutions might be inherently unstable in real-world scenarios, leading to very fine time resolutions in the simulation. Consequently, even after a waiting period of 7200 seconds, the computation of the objective values might remain incomplete.
FIGURE 8 The approximated Pareto front of the second objective (F2), the third objective (F3), and the ninth objective (F9).
Also, apart from the expensive evaluations of this BEM problem, Figure 5 shows that the runtime also differs between solutions. Taking the second, third, and ninth objectives of the non-dominated solutions as an example, as shown in Figure 8, the shape of the obtained Pareto front is further analyzed. As defined in [48], a Pareto front is called regular if an infinite number of vectors with positive directions all intersect with it; otherwise, it is irregular. It is observed that the approximated Pareto front in Figure 8 only covers a small part of the whole objective space. Thus, it is concluded that the approximated Pareto front of this BEM problem is irregular. This irregular Pareto front property could explain the ineffectiveness of K-RVEA in handling the BEM, since K-RVEA adopts a set of predefined, evenly distributed reference vectors covering the whole objective space.
FIGURE 10 Histogram of charging threshold.
H. Analysis of the Solutions

To better visualize the solutions obtained by the six algorithms, both the decision variables and the objective values of the obtained non-dominated solutions are illustrated. For
the PV system, the orientation angle of the photovoltaics system is around 180 degrees in most cases, i.e., the panels face south, as confirmed in Figure 9. The few northward-facing PV systems can be explained by configurations with very small PV peak powers: for those solutions, the orientation and inclination have no impact on the result, as they do not contribute to the energy production. Figure 10 shows that for most solutions, the battery is charged when the surplus power is above 149.9 kW (which is the upper limit of the parameter). Essentially, a higher charging threshold seems to be more reasonable. For the discharging limit in Figure 11, it is observed that there are multiple reasonable values, although many of the solutions are around 300 kW. We now study the correlation between objectives (for better visualization, only pairwise comparisons are considered). In Figure 12, only two dimensions of the 10-dimensional objective values of the non-dominated solutions are plotted.
FIGURE 9 Histogram of b_PV.
FIGURE 11 Histogram of discharging threshold.
FIGURE 12 Visualization of the relation between annual operational costs and the investment costs and the relation between investment costs and CO2 emission using all non-dominated solutions obtained by REMO.
It is observed that there is a (rather weak) negative correlation between the investment costs and the annual operation costs. Similarly, in Figure 12, the investment costs and the yearly CO2 emissions are also negatively correlated, as expected. However, this analysis also shows that the overall savings potential for this specific use case is limited. To better convey the quality of the solution sets obtained by the six algorithms, and to provide domain experts with more informed knowledge of the optimal set and its boundaries for better decision-making, we also list the best-found solution across all runs for each objective, together with two knee solutions obtained by RVMM, in Table III; the corresponding parameters are listed in Table IV. As a reference, the complete distributions of the decision variables and the obtained objectives are shown in Figs. 13 and 14, respectively.
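The pairwise correlations discussed here reduce to one NumPy call; the column ordering of the objective matrix is an assumption for the example.

```python
import numpy as np

def objective_correlations(F):
    """Pearson correlation matrix over the non-dominated set F
    (rows: solutions, columns: the 10 objectives)."""
    return np.corrcoef(F, rowvar=False)

# With C_invest in column 0 and C_annual in column 1, a negative
# C[0, 1] reflects the weak trade-off visible in Figure 12.
# C = objective_correlations(F)
```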
FIGURE 13 Distribution of the decision variables for all Pareto nondominated solutions: The boxes indicate the 25%-75% percentile, and the whiskers mark the minimum and maximum values. The markers indicate the two knee point solutions.
The first solution (ID 1 in Table III) shows the lowest value for the investment costs C_invest. The low investment costs are realized by utilizing a very small PV system (P_PV) and a very small battery capacity (C_B). Solution 2, on the contrary, utilizes the largest possible PV system. In combination with a relatively large volume of the heat storage tank (V_CHP), it plausibly achieves the lowest annual costs (C_annual), illustrating the competing nature of investment and annual costs. These two power sources provide more electricity than necessary for the building itself and feed a substantial amount into the grid (shown by high values for E_feed and P_peak,feed, in comparison to the low value of E_batt,discharge). By producing substantial amounts of energy from the large PV system and the CHP,
TABLE III The best-found solution by all runs of the five algorithms for each objective and the objective values of two additional knee points.

ID | BEST             | C_invest  | C_annual  | G_total | R        | b̄_SOC  | E_batt,discharge | P_peak,supply | t_m    | E_feed   | P_peak,feed
1  | C_invest         | 11600.00  | 335855.99 | 2156.36 | -3.83    | 0.3623 | 1024.46          | 333428.19     | 0.3324 | 0.00     | 0.00
2  | C_annual         | 566695.73 | 318833.10 | 2091.74 | -1004.80 | 0.2000 | 0.01             | 333428.19     | 1.0000 | 76409.66 | 310735.30
3  | G_total          | 566695.73 | 318833.10 | 2091.74 | -1004.80 | 0.2000 | 0.01             | 333428.19     | 1.0000 | 76409.66 | 310735.30
4  | R                | 666050.99 | 322715.73 | 2106.34 | -9086.84 | 0.8828 | 8199.41          | 333429.00     | 0.9896 | 50683.61 | 226282.66
5  | b̄_SOC            | 315240.88 | 326323.42 | 2120.30 | -47.95   | 0.0274 | 328.24           | 333428.21     | 1.0000 | 20415.92 | 107002.88
6  | E_batt,discharge | 11600.01  | 335854.46 | 2156.44 | -10.80   | 0.2000 | 0.00             | 333428.19     | 1.0000 | 0.00     | 0.00
7  | P_peak,supply    | 106201.35 | 336393.05 | 2158.40 | -431.47  | 0.6387 | 111547.83        | 273239.52     | 0.7589 | 0.00     | 0.00
8  | t_m              | 253864.20 | 333533.27 | 2147.19 | -874.72  | 0.5971 | 654.20           | 333428.19     | 0.0002 | 0.00     | 0.00
9  | E_feed           | 566335.20 | 331701.31 | 2140.57 | -6294.54 | 0.9205 | 6344.97          | 333432.17     | 0.9871 | 0.00     | 0.00
10 | P_peak,feed      | 344858.68 | 337084.61 | 2154.64 | -526.49  | 0.1040 | 494.72           | 333428.20     | 1.0000 | 0.00     | 0.00
11 | (knee point 1)   | 460329.69 | 330866.87 | 2137.58 | -7842.71 | 0.7223 | 6150.24          | 333428.35     | 0.9879 | 2843.12  | 26893.59
12 | (knee point 2)   | 375578.36 | 330794.14 | 2137.21 | -8153.59 | 0.7861 | 6956.20          | 333428.79     | 0.9878 | 7533.43  | 46370.87
TABLE IV The best-found solution by all runs of the five algorithms for each decision variable and the decision values of two additional knee points.

ID | BEST OBJECTIVE   | a_PV  | b_PV   | P_PV   | C_B     | b_SOC,max | b_SOC,min | P_charge | P_discharge | V_CHP
1  | C_invest         | 33.63 | 57.25  | 10.00  | 5.00    | 0.5000    | 0.0716    | 149.87   | 278.83      | 4.8401
2  | C_annual         | 45.00 | 185.49 | 450.00 | 465.32  | 0.5200    | 0.3681    | -431.47  | 698.76      | 4.9332
3  | G_total          | 45.00 | 185.49 | 450.00 | 465.32  | 0.5200    | 0.3681    | -431.47  | 698.76      | 4.9332
4  | R                | 34.65 | 216.24 | 415.43 | 1000.00 | 0.8883    | 0.3606    | 45.10    | 629.54      | 4.8264
5  | b̄_SOC            | 18.98 | 199.38 | 270.19 | 178.78  | 0.5039    | 0.0500    | -358.75  | 195.20      | 4.9628
6  | E_batt,discharge | 29.32 | 19.57  | 10.00  | 5.00    | 0.5000    | 0.1242    | 3.61     | 590.26      | 4.9609
7  | P_peak,supply    | 31.57 | 161.47 | 18.45  | 348.58  | 0.8697    | 0.0908    | 149.90   | 268.40      | 4.6988
8  | t_m              | 22.88 | 16.10  | 218.87 | 138.39  | 0.5634    | 0.1672    | 48.97    | 595.48      | 4.2494
9  | E_feed           | 28.69 | 41.50  | 383.68 | 728.03  | 0.9282    | 0.1711    | 32.86    | 392.67      | 4.6852
10 | P_peak,feed      | 22.99 | 351.10 | 226.85 | 469.36  | 0.9472    | 0.1074    | -475.62  | 206.53      | 1.0618
11 | (knee point 1)   | 6.33  | 254.21 | 145.55 | 596.61  | 0.9090    | 0.3607    | -313.61  | 539.80      | 3.1487
12 | (knee point 2)   | 27.06 | 209.21 | 243.38 | 918.93  | 0.5361    | 0.2591    | 60.78    | 396.57      | 4.3863
the same solution is also able to achieve the lowest level of CO2 emissions (G_total, see solution 3). Solution 4 utilizes a large PV system in combination with a large battery. Together with the high mean state of charge (b̄_SOC) of the battery and its overall high charging limit (b_SOC,max), this results in the longest resilience time for cases when no grid power is available. Solution 5 requires only a minimal maximum battery SOC (b_SOC,max) and achieves the lowest mean battery state of charge. Solution 6 utilizes the smallest possible PV system, as well as the smallest possible battery capacity and the lowest maximum charging limit (b_SOC,max), thereby eliminating the utilization of the battery (shown by the lowest value of E_batt,discharge). Solution 7 surprisingly achieves the lowest power peak demand from the grid (P_peak,supply), despite a very small PV
FIGURE 14 Distribution of the objective values for all Pareto nondominated solutions: The boxes indicate the 25%-75% percentile range, and the whiskers mark the minimum and maximum values. The markers indicate the two knee point solutions.
system. The rather small battery (C_B) is efficiently discharged (shown by an above-average value for E_batt,discharge) to compensate for the demand peak. This behavior is achieved with the highest possible battery charging threshold (P_charge) and a low discharging threshold (P_discharge). Overall, this solution seems to utilize the small battery very efficiently for power peak shaving. Interestingly, the solution does not rely on a large PV system to produce additional power when the building demand is high. Hence, an efficiently used battery seems to be more important for power peak shaving than a large PV system, at least in this specific scenario. Solution 8 achieves the largest time share (t_m) in which the battery SOC is between 30% and 70%, due to a combination of a low maximum battery SOC and high charging and discharging thresholds. The battery has a mean state of charge of roughly 60%, and small charging or discharging processes are likely to keep the SOC between the desired values. For solution 9, the lowest possible energy fed into the grid (E_feed) corresponds to a feed-in power peak of zero and a high value of t_m. In spite of the large PV system, no excess energy is produced and fed into the grid. This can be seen as a result of the low orientation angle of the PV system (b_PV): the orientation leads to an overall low energy production, in particular during times of higher energy demand around midday. Solution 10 can be interpreted in a similar way. The high value of t_m and the low value of E_feed are in line with the lowest possible feed-in power peak. The PV system is directed north, thus producing only a small amount of energy. Note that this is a very inefficient solution, indicating that multiple objectives always need to be considered. Several insights into the optimization of the BEM problem can further be gained from the distributions of the decision variables and objectives of the Pareto non-dominated solutions (Figs. 13 and 14). While some decision variables, such as a_PV, P_PV, C_B, b_SOC,max, and b_SOC,min, take a wide range of values, b_PV, P_charge, P_discharge, and V_CHP can only
take a small range of values to achieve good performance in all objectives. As expected, it can, for example, be observed that an orientation around 180° (south) is useful for the PV system, which is plausible for a location in central Europe. It is unexpected, though very valuable, to see that mainly very large values for P_charge seem to have a positive effect on the optimization. This means that the stationary battery will be charged in most configurations, even for a net positive energy consumption, thus prioritizing the benefit that the battery provides over a potentially lower energy consumption. In a similar way, the results show that large values for V_CHP seem to be overall beneficial, highlighting the general advantage of a large heat storage volume. With regard to the objectives, it can be observed that most configurations lead to similarly low values for E_batt,discharge and P_peak,supply, with few exceptions. Decision-makers may not have explicit preferences over the obtained solutions. In this case, it is suggested to choose knee points for implementation, as knee points achieve a well-balanced trade-off between all conflicting objectives. As summarized in [49], there are three commonly used families of knee identification methods. In this work, we propose using the convex knee, based on the convex hull of the individual minima, to select the knee solutions. Since RVMM performs the best among the six algorithms under comparison, we select two of the most representative knee solutions from the solution set obtained by RVMM and include them in Tables III and IV, as well as Figs. 13 and 14, respectively. The knee points demonstrate average performance for six of the objectives. They achieve very strong resilience values (R'), mainly due to the use of above-average battery sizes (C_B) and high discharging thresholds (P_discharge). On the downside, this leads to above-average investment costs (C_invest) and a battery that is almost always charged with a high mean SOC (refer to t_m' and b̄_SOC). Another noteworthy aspect is that, although the decision variables of the two selected knee points take significantly different values, they lead to rather similar objective values. To summarize, the many-objective optimization algorithms can identify a set of optimal solutions. However, the task of selecting one single solution to be finally implemented still rests with the domain expert. Currently, the most important benefit is that a more informed decision can be made given the knowledge of the optimal set and its boundaries. Nevertheless, further research is necessary to support the decision maker. Promising methods are currently being investigated, for example, incorporating human preferences into the decision-making process [41], identifying solutions of interest from the Pareto set [50], directly integrating the user into an interactive multi-objective optimization [51], selecting knee points [49], or dividing the solutions into semantically meaningful concepts and selecting representatives [52], [53].
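One common realization of the convex-hull-of-individual-minima idea ranks solutions by their distance below the hyperplane through the extreme points of the normalized objectives; the sketch below follows that heuristic and is not necessarily the exact procedure of [49].

```python
import numpy as np

def convex_knee_indices(F, k=2):
    """Indices of the k most pronounced convex knees of a
    non-dominated set F (rows: solutions, columns: objectives)."""
    ideal, nadir = F.min(axis=0), F.max(axis=0)
    span = np.where(nadir > ideal, nadir - ideal, 1.0)
    Fn = (F - ideal) / span                 # normalize to [0, 1]
    # Signed distance below the hyperplane sum_i f_i = 1 through the
    # individual minima; larger values mark stronger convex bulges.
    d = (1.0 - Fn.sum(axis=1)) / np.sqrt(F.shape[1])
    return np.argsort(-d)[:k]
```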
VI. Conclusion
In this study, a real-world building energy management problem is formulated and then optimized using a competitive multi-objective evolutionary algorithm and five surrogate-assisted evolutionary algorithms. A thorough analysis of the obtained solutions in terms of both the decision space and the objective space is provided. Furthermore, a suitable timeout value is suggested according to the HV contribution of the solutions in each evaluation-time range. The experimental results demonstrate that multi-objective SAEAs can be applied to identify a set of optimal solutions and are generally very helpful in improving the efficiency of building energy management. The performance of K-RVEA is very close to that of RVEA-iGNG, indicating that better model management strategies need to be developed based on the properties of the BEM to further improve performance and efficiency. In this work, by providing the knowledge of the optimal set and its boundaries, a more informed decision can be made to support decision-makers. However, which solution shall finally be implemented still requires the knowledge and preferences of domain experts. Note that the time of each function evaluation ranges from less than one minute to more than two hours in the BEM. Thus, we believe it will be of great importance to include the cost of each real function evaluation when designing model management strategies, as in [14], to obtain competitive performance at the lowest cost. The proposed BEM problem can thus serve as a test problem for the future design of new surrogate-assisted evolutionary algorithms or cost-aware Bayesian optimization, for which realistic expensive many-objective optimization problems are urgently needed. In our future work, we will design SAEAs with cost-aware model management strategies that select query points at minimum time cost, in light of this property of the BEM application.

Acknowledgment

This work was supported in part by the Honda Research Institute Europe GmbH, Offenbach am Main, Germany, and in part by the National Natural Science Foundation of China under Grant 62302147, China. The work of Yaochu Jin was supported by an Alexander von Humboldt Professorship for Artificial Intelligence endowed by the Federal Ministry of Education and Research, Germany.

References
[1] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Trans. Evol. Comput., vol. 6, no. 2, pp. 182–197, Apr. 2002. [2] K. Deb and H. Jain, “An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints,” IEEE Trans. Evol. Comput., vol. 18, no. 4, pp. 577–601, Aug. 2014. [3] Q. Zhang and H. Li, “MOEA/D: A multiobjective evolutionary algorithm based on decomposition,” IEEE Trans. Evol. Comput., vol. 11, no. 6, pp. 712–731, Dec. 2007. [4] R. Cheng, Y. Jin, M. Olhofer, and B. Sendhoff, “A reference vector guided evolutionary algorithm for many-objective optimization,” IEEE Trans. Evol. Comput., vol. 20, no. 5, pp. 773–791, Oct. 2016.
[5] X. Wang, X. Mao, and H. Khodaei, “A multi-objective home energy management system based on Internet of Things and optimization algorithms,” J. Building Eng., vol. 33, 2021, Art. no. 101603. [6] X. Li and A. Malkawi, “Multi-objective optimization for thermal mass model predictive control in small and medium size commercial buildings under summer weather conditions,” Energy, vol. 112, pp. 1194–1206, 2016. [7] F. Bre and V. D. Fachinotti, “A computational multi-objective optimization method to improve energy efficiency and thermal comfort in dwellings,” Energy Buildings, vol. 154, pp. 283–294, 2017. [8] N. Kampelis, E. Tsekeri, D. Kolokotsa, K. Kalaitzakis, D. Isidori, and C. Cristalli, “Development of demand response energy management optimization at building and district levels using genetic algorithm and artificial neural network modelling power predictions,” Energies, vol. 11, no. 11, 2018, Art. no. 3012. [9] T. Chugh, K. Sindhya, K. Miettinen, Y. Jin, T. Kratky, and P. Makkonen, “Surrogate-assisted evolutionary multiobjective shape optimization of an air intake ventilation system,” in Proc. IEEE Congr. Evol. Comput., 2017, pp. 1541–1548. [10] T. Chugh, Y. Jin, K. Miettinen, J. Hakanen, and K. Sindhya, “A surrogateassisted reference vector guided evolutionary algorithm for computationally expensive many-objective optimization,” IEEE Trans. Evol. Comput., vol. 22, no. 1, pp. 129–142, Feb. 2018. [11] R. Cheng, T. Rodemann, M. Fischer, M. Olhofer, and Y. Jin, “Evolutionary many-objective optimization of hybrid electric vehicle control: From general optimization to preference articulation,” IEEE Trans. Emerg. Topics Comput. Intell., vol. 1, no. 2, pp. 97–111, Apr. 2017. [12] R. J. Lygoe, M. Cary, and P. J. Fleming, “A real-world application of a manyobjective optimisation complexity reduction process,” in Proc. 7th Int. Conf. Evol. Multi-Criterion Optim., 2013, pp. 641–655. [13] E. J. Hughes, “Radar waveform optimisation as a many-objective application benchmark,” in Proc. Int. Conf. Evol. Multi-Criterion Optim., 2007, pp. 700–714. [14] E. H. Lee, V. Perrone, C. Archambeau, and M. Seeger, “Cost-aware Bayesian optimization,” in Proc. ICML Workshop Automat. Mach. Learn., 2020. [15] G. Guinet, V. Perrone, and C. Archambeau, “Pareto-efficient acquisition functions for cost-aware Bayesian optimization,” NeurIPS Meta Learn. Workshop, 2020. [16] M. Abdolshah, A. Shilton, S. Rana, S. Gupta, and S. Venkatesh, “Cost-aware multi-objective Bayesian optimisation,” 2019, arXiv:1909.03600. [17] T. Rodemann, “A comparison of different many-objective optimization algorithms for energy system optimization,” in Applications of Evolutionary Comput., P. Kaufmann and P. Castillo, Eds. Berlin, Germany: Springer, 2019, pp. 1–16. [18] A. Jain, F. Smarra, E. Reticcioli, A. D’Innocenzo, and M. Morari, “NeurOpt: Neural network based optimization for building energy management and climate control,” in Proc. Learn. Dyn. Control, 2020, pp. 445–454. [19] M. H. Khan, A. U. Asar, N. Ullah, F. R. Albogamy, and M. K. Rafique, “Modeling and optimization of smart building energy management system considering both electrical and thermal load,” Energies, vol. 15, no. 2, 2022, Art. no. 574. [20] S. K. Howell, H. Wicaksono, B. Yuce, K. McGlinn, and Y. Rezgui, “User centered neuro-fuzzy energy management through semantic-based optimization,” IEEE Trans. Cybern., vol. 49, no. 9, pp. 3278–3292, Sep. 2019. [21] T. Rodemann, “A many-objective configuration optimization for building energy management,” in Proc. IEEE Congr. Evol. 
Comput., 2018, pp. 1–8. [22] N. Delgarm, B. Sajadi, and S. Delgarm, “Multi-objective optimization of building energy performance and indoor thermal comfort: A new method using artificial bee colony (ABC),” Energy Buildings, vol. 131, pp. 42–53, 2016. [23] S. Mostaghim and J. Teich, “Strategies for finding good local guides in multiobjective particle swarm optimization (MOPSO),” in Proc. IEEE Swarm Intell. Symp., 2003, pp. 26–33. [24] T. Murata and H. Ishibuchi, “MOGA: Multi-objective genetic algorithms,” in Proc. IEEE Int. Conf. Evol. Comput., 1995, pp. 289–294. [25] Q. Xue, Z. Wang, and Q. Chen, “Multi-objective optimization of building design for life cycle cost and CO2 emissions: A case study of a low-energy residential building in a severe cold climate,” in Building Simulation, vol. 15, no. 1. Berlin, Germany: Springer, 2022, pp. 83–98. [26] X.-k. Liang, Y. Zhang, and D.-w. Gong, “Surrogate-assisted multi-objective particle swarm optimization for building energy saving design,” in Proc. 11th Int. Conf. Evol. Multi-Criterion Optim., 2021, pp. 593–604. [27] Z. Yong, Y. Li-Juan, Z. Qian, and S. Xiao-Yan, “Multi-objective optimization of building energy performance using a particle swarm optimizer with less control parameters,” J. Building Eng., vol. 32, 2020, Art. no. 101505. [28] M. Moustapha, A. Galimshina, G. Habert, and B. Sudret, “Multi-objective robust optimization using adaptive surrogate models for problems with mixed continuous-categorical parameters,” Struct. Multidisciplinary Optim., vol. 65, 2022, Art. no. 357. [29] Y. Jin and B. Sendhoff, “A systems approach to evolutionary multiobjective structural optimization and beyond,” IEEE Comput. Intell. Mag., vol. 4, no. 3, pp. 62–76, Aug. 2009. [30] Y. Jin, “Surrogate-assisted evolutionary computation: Recent advances and future challenges,” Swarm Evol. Comput., vol. 1, no. 2, pp. 61–70, 2011.
[31] D. R. Jones, M. Schonlau, and W. J. Welch, “Efficient global optimization of expensive black-box functions,” J. Glob. Optim., vol. 13, no. 4, pp. 455–492, 1998. [32] Z. Song, H. Wang, C. He, and Y. Jin, “A Kriging-assisted two-archive evolutionary algorithm for expensive many-objective optimization,” IEEE Trans. Evol. Comput., vol. 25, no. 6, pp. 1013–1027, Dec. 2021. [33] D. Zhan, Y. Cheng, and J. Liu, “Expected improvement matrix-based infill criteria for expensive multiobjective optimization,” IEEE Trans. Evol. Comput., vol. 21, no. 6, pp. 956–975, Dec. 2017. [34] Q. Zhang, W. Liu, E. Tsang, and B. Virginas, “Expensive multiobjective optimization by MOEA/D with Gaussian process model,” IEEE Trans. Evol. Comput., vol. 14, no. 3, pp. 456–474, Jun. 2010. [35] Q. Liu, Y. Jin, M. Heiderich, and T. Rodemann, “Surrogate-assisted evolutionary optimization of expensive many-objective irregular problems,” Knowl.-Based Syst., vol. 240, 2022, Art. no. 108197. [36] Q. Liu, R. Cheng, Y. Jin, M. Heiderich, and T. Rodemann, “Reference vector-assisted adaptive model management for surrogate-assisted many-objective optimization,” IEEE Trans. Syst., Man, Cybern. Syst., vol. 52, no. 12, pp. 7760–7773, Dec. 2022. [37] L. Pan, C. He, Y. Tian, H. Wang, X. Zhang, and Y. Jin, “A classification-based surrogate-assisted evolutionary algorithm for expensive many-objective optimization,” IEEE Trans. Evol. Comput., vol. 23, no. 1, pp. 74–88, Feb. 2019. [38] Y. Yuan and W. Banzhaf, “Expensive multi-objective evolutionary optimization assisted by dominance prediction,” IEEE Trans. Evol. Comput., vol. 26, no. 1, pp. 159–173, Feb. 2022. [39] H. Hao, A. Zhou, H. Qian, and H. Zhang, “Expensive multiobjective optimization by relation learning and prediction,” IEEE Trans. Evol. Comput., vol. 26, no. 5, pp. 1157–1170, Oct. 2022. [40] H.-G. Huang and Y.-J. Gong, “Contrastive learning: An alternative surrogate for offline data-driven evolutionary computation,” IEEE Trans. Evol. Comput., vol. 27, no. 2, pp. 370–384, Apr. 2023. [41] T. Schmitt, M. Hoffmann, T. Rodemann, and J. Adamy, “Incorporating human preferences in decision making for dynamic multi-objective optimization in model predictive control,” Inventions, vol. 7, no. 3, 2022, Art. no. 46. [Online]. Available: https://www.mdpi.com/2411-5134/7/3/46 [42] Q. Liu, Y. Jin, M. Heiderich, T. Rodemann, and G. Yu, “An adaptive reference vector-guided evolutionary algorithm using growing neural gas for many-objective optimization of irregular problems,” IEEE Trans. Cybern., vol. 52, no. 5, pp. 2698–2711, May 2022. [43] E. Zitzler and L. Thiele, “Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach,” IEEE Trans. Evol. Comput., vol. 3, no. 4, pp. 257–271, Nov. 1999. [44] H. Ishibuchi, R. Imada, Y. Setoguchi, and Y. Nojima, “How to specify a reference point in hypervolume calculation for fair performance comparison,” Evol. Comput., vol. 26, no. 3, pp. 411–440, 2018. [45] Y. Tian, R. Cheng, X. Zhang, F. Cheng, and Y. Jin, “An indicator-based multiobjective evolutionary algorithm with reference point adaptation for better versatility,” IEEE Trans. Evol. Comput., vol. 22, no. 4, pp. 609–622, Aug. 2018. [46] R. Unger, B. Mikoleit, T. Schwan, B. Bäker, C. Kehrer, and T. Rodemann, “Green building-modeling renewable building energy systems with e-mobility using Modelica,” in Proc. Modelica Conf. Modelica Assoc., 2012, pp. 897–906. [47] Y. Tian, R. Cheng, X. Zhang, and Y.
Jin, “PlatEMO: A MATLAB platform for evolutionary multi-objective optimization,” IEEE Comput. Intell. Mag., vol. 12, no. 4, pp. 73–87, Nov. 2017. [48] Y. Hua, Q. Liu, K. Hao, and Y. Jin, “A survey of evolutionary algorithms for multi-objective optimization problems with irregular pareto fronts,” IEEE/CAA J. Automatica Sinica, vol. 8, no. 2, pp. 303–318, Feb. 2021. [49] G. Yu, L. Ma, Y. Jin, W. Du, Q. Liu, and H. Zhang, “A survey on knee-oriented multiobjective evolutionary optimization,” IEEE Trans. Evol. Comput., vol. 26, no. 6, pp. 1452–1472, Dec. 2022. [50] T. Ray, H. K. Singh, K. H. Rahi, T. Rodemann, and M. Olhofer, “Towards identification of solutions of interest for multi-objective problems considering both objective and variable space information,” Appl. Soft Comput., vol. 119, 2022, Art. no. 108505. [Online]. Available: https://www.sciencedirect.com/science/ article/pii/S1568494622000503 [51] P. Aghaei Pour, T. Rodemann, J. Hakanen, and K. Miettinen, “Surrogate assisted interactive multiobjective optimization in energy system design of buildings,” Optim. Eng., vol. 23, pp. 303–327, 2022. [52] F. Lanfermann, S. Schmitt, and S. Menzel, “An effective measure to identify meaningful concepts in engineering design optimization,” in Proc. IEEE Symp. Ser. Comput. Intell., 2020, pp. 934–941. [53] F. Lanfermann and S. Schmitt, “Concept identification for complex engineering datasets,” Adv. Eng. Inform., vol. 53, 2022, Art. no. 101704.
©SHUTTERSTOCK.COM/DMYTRO VIKARCHUK
Learning Regularity for Evolutionary Multiobjective Search: A Generative Model-Based Approach
Abstract—The prior domain knowledge, i.e., the regularity property of continuous multiobjective optimization problems (MOPs), can be learned to guide the search in evolutionary multiobjective optimization. This paper proposes a learning-to-guide strategy (LGS) to assist multiobjective optimization algorithms in dealing with MOPs. The main idea behind LGS is to capture the regularity via learning techniques so as to guide the evolutionary search toward promising offspring solutions. To achieve this, a generative model, the generative topographic mapping (GTM), is adopted to capture the manifold distribution of a population. A set of regular grid points in the latent space is mapped into the decision space, within the learned manifold structure, to guide the search by mating with some parents for offspring generation. Following this idea, three alternative LGS-based generation operators are developed and investigated, which combine local and global information in the offspring generation. To learn the regularity more efficiently within an algorithm, the proposed LGS is embedded in an efficient evolutionary algorithm (called LGSEA). LGSEA includes an incremental training procedure aimed at reducing the computational cost of GTM training by reusing the built GTM model. The developed algorithm is compared with several newly developed and classical learning-based algorithms on a set of benchmark problems. The results demonstrate the advantages of LGSEA over the other approaches, showcasing its potential for solving complex MOPs.
Digital Object Identifier 10.1109/MCI.2023.3304080 Date of current version: 17 October 2023
Corresponding author: Aimin Zhou (e-mail: [email protected]).
Shuai Wang, Aimin Zhou, Guixu Zhang, and Faming Fang
East China Normal University, CHINA
1556-603X © 2023 IEEE
I. Introduction
In numerous real-world scenarios, one frequently needs to handle optimization problems with multiple objectives or criteria; such problems are referred to as multiobjective optimization problems (MOPs) [1]. Since these objectives usually conflict with each other, there exists a set of Pareto optimal solutions that trade the objectives off against one another. This set of trade-off solutions is called the Pareto optimal set (PS) in the decision space and the Pareto front (PF) in the objective space [2]. Multiobjective evolutionary algorithms (MOEAs) can approximate the Pareto optimal solutions of a MOP in a single run thanks to their population-based search scheme, and they have flourished over the last two decades [3]. The existing MOEAs can be roughly classified into three categories: 1) Pareto-dominance-based approaches adopt the Pareto dominance relationships among the solutions in a population to perform the selection operation, including NSGA-II [4], SPEA2 [5], and others [6]; 2) metric-based MOEAs use performance indicators to guide the selection of solutions with both convergence and diversity, where hypervolume-based selection might be the most widely used indicator-based selection since it does not need real PF information [7], [8]; and 3) MOEAs based on decomposition (MOEA/Ds) decompose a MOP into a set of subproblems that are optimized in a collaborative manner [9], [10], [11].

To design an efficient MOEA, prior domain knowledge should be incorporated into the optimization procedure to guide the evolutionary search toward promising offspring. It has been proved that, under mild conditions, the Pareto optimal solutions of a continuous MOP form a piecewise continuous (m−1)-D manifold in the decision and objective space [12], where m is the number of objectives. This manifold characteristic of continuous MOPs is commonly referred to as the regularity property [13], [14] and has been employed in several learning-based MOEAs, in which machine learning techniques are utilized to model or capture the manifold structure of the PS for generating offspring solutions. For example, several regularity model-based evolutionary approaches strive to explicitly approximate the manifold structure using principal component analysis (PCA) or its variants, such as local PCA [15], owing to their latent component extraction capabilities [14], [16], [17], [18]; offspring solutions are then generated by sampling trial points from these learned manifold approximations. In [19], a general framework is proposed that applies manifold learning methods to evolutionary multiobjective optimization, including the principal curve method [20], [21] for approximating the PS manifold and Laplacian eigenmaps [22] for neighborhood relationships in a low-dimensional space. Additionally, several clustering methods (e.g., k-means [23] and SOM clustering [24]) are adopted to partition the population with manifold structure information, providing mating restrictions for MOPs with complicated PF or PS shapes [25], [26], [27]. In [28], a cluster-based immune-inspired algorithm is proposed to deal with multimodal MOPs with the help of manifold learning by PCA. Moreover, some online learning methods, i.e., online agglomerative clustering [29] and the incremental Gaussian mixture model [30], are adopted to reduce the computational overhead of
learning for partitioning the population. Recently, under the manifold assumption, a manifold interpolation approach has been designed to interpolate new solutions along the manifold of the approximated surrogate Pareto-optimal set in data-driven evolutionary optimization [31].

Generative topographic mapping (GTM) [32] is a generative stochastic (or probability distribution) model that can discover such underlying regularity, i.e., the manifold distribution, from data in a high-dimensional space. A significant advantage of GTM is its ability to map regular grid points from the latent space to the observed data space while preserving manifold structures [33]. Therefore, adopting GTM as a tool for learning the manifold distribution from the population and mapping a set of regular grid points to the decision space to guide the evolutionary search is a natural and effective approach.

This paper considers how the regularity property of continuous MOPs can be learned more efficiently to generate high-quality or promising offspring solutions for evolutionary multiobjective optimization. A learning-to-guide search strategy (LGS) is proposed for this purpose, and an LGS-based multiobjective evolutionary algorithm (LGSEA) is developed to validate the effectiveness of LGS. The major contributions of this paper can be summarized as follows:
1) A new evolutionary search strategy, the LGS, is proposed to generate offspring solutions for multiobjective optimization. In LGS, the GTM method is adopted to learn the regularity with a generative (or probability) model that captures the manifold structure of the population, and regular grid points in the latent space are mapped into the decision space using the built GTM model to guide the search for offspring generation.
2) Three alternative offspring generation operators are developed from the above basic idea of LGS, inspired by some existing operators. These operators combine local and global information in the offspring generation by mating the mapped manifold points with some parent individuals in the population.
3) An efficient LGSEA framework is designed to reduce the computational cost of GTM training by reusing the built GTM model instead of re-initializing it at each generation, which embeds the LGS into the evolutionary optimization framework more efficiently.

The rest of this paper is organized as follows: Section II introduces some preliminary studies. Section III provides the details of our proposed method, including the framework of LGSEA and offspring generation with LGS. Algorithm analysis and experimental comparisons are presented in Section IV. Finally, Section V concludes the paper and discusses future work.

II. Preliminary Studies

A. Problem Formulation
This work considers the following box-constrained continuous MOP:

$$\begin{aligned} \text{minimize}\;\; & F(x) = (f_1(x), f_2(x), \ldots, f_m(x))^T \\ \text{subject to}\;\; & x = (x_1, \ldots, x_n) \in \Omega \end{aligned} \tag{1}$$

where x = (x_1, x_2, ..., x_n) is an n-dimensional decision vector, $\Omega = \prod_{i=1}^{n} [\underline{x}_i, \hat{x}_i]$ is the decision (or variable) space, and $\underline{x}_i$ and $\hat{x}_i$ are the lower and upper bounds of the i-th variable. F: Ω → R^m denotes the mapping from the decision space to the objective space, comprising the m objectives f_1(x), f_2(x), ..., f_m(x).

1) Pareto Dominance
Let x^1, x^2 ∈ Ω be decision vectors. x^1 Pareto-dominates x^2, denoted by x^1 ⪯ x^2, if and only if

$$f_i(x^1) \le f_i(x^2)\;\;\forall i = 1, \ldots, m, \quad \text{and} \quad \exists\, i \in \{1, \ldots, m\}: f_i(x^1) < f_i(x^2) \tag{2}$$

where this dominance relation means "better than".

2) Pareto Optimal Solution
A solution x* ∈ Ω is Pareto-optimal if

$$\nexists\, x' \in \Omega : F(x') \preceq F(x^*) \tag{3}$$

i.e., no solution in Ω dominates x*. The PS is the set of all Pareto-optimal solutions in Ω, and the PF is the set of all corresponding objective vectors in R^m.

3) Manifold Property for Pareto Optima
Suppose that the objectives f_i(x), i = 1, ..., m, are continuously differentiable at a decision vector x* ∈ Ω. Then there exists α = (α_1, ..., α_m)^T with ‖α‖_2 = 1 such that

$$\sum_{i=1}^{m} \alpha_i \nabla f_i(x^*) = 0 \tag{4}$$

and the Pareto optimal points x* satisfying (4) are called Karush-Kuhn-Tucker (KKT) points. Since the above condition is necessary but not sufficient, not all KKT points are (local) Pareto optimal points. Under certain smoothness conditions, the PS (PF) of a continuous MOP defines a piecewise (m−1)-D manifold embedded in the n-D decision space (the m-D objective space). Thus, the Pareto optimal solutions of a bi-objective optimization problem define a piecewise continuous curve, while those of a tri-objective optimization problem define a piecewise continuous surface. More details of the theoretical proofs can be found in [12].
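For illustration, the dominance relation in (2) can be checked directly; the following is a minimal Python sketch for the minimization setting of (1), and the function name dominates is ours rather than the paper's.

import numpy as np

def dominates(f1, f2):
    # True if objective vector f1 Pareto-dominates f2 per (2): f1 is no worse
    # in every objective and strictly better in at least one.
    f1, f2 = np.asarray(f1), np.asarray(f2)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

# Example: (1, 2) dominates (2, 2); (1, 3) and (2, 2) are incomparable.
assert dominates([1, 2], [2, 2])
assert not dominates([1, 3], [2, 2]) and not dominates([2, 2], [1, 3])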
B. Modeling Manifold Distribution in MOEAs

When used in this way, the generative model obtained by GTM can, to some degree, be regarded as part of an estimation of distribution algorithm (EDA) that models the manifold distribution of the population for offspring generation. The manifold property has inspired several efforts to model the regularity via the manifold distribution of populations, which fall into the category of multiobjective EDAs (MOEDAs). Our previous work was arguably the first to adopt the regularity property in the design of an offspring generation operator, proposing a regularity model-based multiobjective estimation of distribution algorithm (RM-MEDA) for this purpose [14]. Unlike traditional MOEDAs that explore the variable independencies of the optimal solutions in the search space [26], [34], RM-MEDA explicitly approximates the manifold structure, exploiting the regularity property through its modeling and sampling procedures. Since then, as a conceptual method, RM-MEDA has been improved for better performance. For instance, a redundant-cluster reduction operator is embedded in RM-MEDA to tune the number of partitions in local PCA so as to build more precise models during the evolution [35]. Several studies have also combined RM-MEDA with other generators to improve its local search [36], [37]. In [38], three alternative methods, i.e., the latent principal curve model (LPCM), the univariate factorized Gaussian model (UGM), and the marginalized multivariate Gaussian model (MGM), are proposed to model the regularity and improve the scalability of MOEAs to high-dimensional variables. In [39], the MGM is embedded in the MOEA/D framework by modeling the neighborhood solutions around each subproblem. It has been verified that MOEA/D-MGM performs well on MOPs with complex PS manifolds due to its full consideration of the regularity property.

Moreover, some efforts aim to model the manifolds of the PF in the objective space. In [40], an improved RM-MEDA variant (MMEA) builds probabilistic models to approximate the PF and PS simultaneously, which can be used to deal with multi-modal MOPs. The estimation of the PF via models has received much attention in recent years: for example, a generic front modeling (GFM) method is proposed to estimate the shape of the nondominated front with a generalized simplex model [41], and a set of simplified local models is further combined to approximate PFs with complex geometrical structures [42]. Additionally, in [43], an inverse modeling-based MOEA (IM-MOEA) is proposed to learn the mapping from the PF to the PS with a regression model (i.e., the Gaussian process method), so that new trial solutions are sampled from the objective space into the decision space. Recently, some efforts have adopted generative adversarial networks (GANs) and their generative procedures to model the population and generate offspring solutions for large-scale MOPs [44], [45]. However, GAN-based generation has to be combined with other genetic operators due to the mode collapse encountered in training a GAN model. In summary, although many MOEDAs have been proposed to build probability (or generative) models that explicitly capture the manifold structure for sampling offspring solutions, research on guiding the evolutionary search with these methods is still quite limited, and further investigation is worthwhile.

C. Generative Topographic Mapping
1) The GTM Model
GTM, developed by Bishop et al. in 1998 [32], [33], is a generative model that defines a nonlinear mapping from a low-dimensional latent space to the observed data space:

$$u = m(v; W) = W\,\phi(v) \tag{5}$$

where u ∈ R^D is a point in the data space and v is a regular node with coordinates v = (v_1, ..., v_L) in the latent space. As shown in Figure 1, the mapping v → u with the function m defines a manifold embedded in the data space and is parameterized by the matrix W, while φ(v) is an array of M prefixed basis functions, i.e., radially symmetric Gaussian functions. In this way, each latent space point v induces a Gaussian distribution of u in the data space with inverse variance β:

$$p(u \mid v, W, \beta) = \left(\frac{\beta}{2\pi}\right)^{D/2} \exp\left\{-\frac{\beta}{2}\,\lVert m(v; W) - u\rVert^2\right\} \tag{6}$$

FIGURE 1 The mapping of GTM from the latent space to the data space within a manifold structure.

Given a set of data points U = {u_1, ..., u_N} in the observed data space and a set of regular grid points V = {v_1, ..., v_K} in the latent space, GTM determines the parameter matrix W and the inverse variance β by maximizing the following log-likelihood:

$$\mathcal{L}(W, \beta) = \sum_{n=1}^{N} \ln\left\{\frac{1}{K}\sum_{i=1}^{K} p(u_n \mid v_i, W, \beta)\right\} \tag{7}$$

2) Training the GTM Parameters
As seen from the literature, the maximization of (7) is conducted with the expectation-maximization (EM) algorithm to estimate W and β. Algorithm 1 presents the main EM training steps for GTM. With the training data U and the regular latent points V, the parameters of the GTM model are updated iteratively in two steps, an expectation step (E-step) and a maximization step (M-step). At iteration g:
❏ In the E-step (line 2), the posterior probabilities of each Gaussian component i are calculated with the current W_g and β_g for every data point u_n, which provides the expectations needed for the next step.
❏ In the M-step (line 3), with the calculated posterior probabilities, the new parameters W_{g+1} and β_{g+1} are estimated by maximizing the log-likelihood function (7).
The E-step and M-step are iterated until the maximum number of training iterations G is reached. As suggested in [33], the EM algorithm commonly provides satisfactory model parameters after a few tens of iterations.

Algorithm 1. EM Training Steps for GTM.
Input: the current model parameters W and β; the training data U and latent space points V;
Output: the new model parameters W and β;
1: for g = 1, ..., G do
2:   E-step: Evaluate the posterior probabilities (or responsibilities) with W_g and β_g, i.e., for each Gaussian component i and every data point u_n:
       R_in(W_g, β_g) = p(v_i | u_n, W_g, β_g)
                      = p(u_n | v_i, W_g, β_g) / Σ_{i'=1}^{K} p(u_n | v_{i'}, W_g, β_g)
                      = exp{−(β_g/2) ‖W_g φ(v_i) − u_n‖²} / Σ_{i'=1}^{K} exp{−(β_g/2) ‖W_g φ(v_{i'}) − u_n‖²};
3:   M-step: Re-estimate the new parameters W_{g+1} and β_{g+1} to maximize (7) using the current responsibilities:
       Φᵀ G Φ W_{g+1}ᵀ = Φᵀ R X,
       1/β_{g+1} = (1/(N D)) Σ_{n=1}^{N} Σ_{i=1}^{K} R_in(W_g, β_g) ‖W_{g+1} φ(v_i) − u_n‖²,
     where Φ is the K × M matrix of basis functions with elements Φ_ij = φ_j(v_i), R is the K × N responsibility matrix with elements R_in(W_g, β_g), X is the N × D matrix of the data set U, and G is a K × K diagonal matrix with elements g_ii = Σ_{n=1}^{N} R_in(W_g, β_g);
4:   g = g + 1;
5: end
6: Return the final model parameters W_G and β_G.

3) An Early Work: MEA/GTM
In [46], the authors attempted for the first time to adopt GTM for evolutionary multiobjective optimization and proposed a multiobjective evolutionary algorithm based on GTM (MEA/GTM). In MEA/GTM, the GTM model is adopted as an alternative population modeling method to the local PCA of RM-MEDA, providing a probability distribution model with a nonlinear mapping. As shown in Figure 2(a), offspring solutions y are sampled from the probability model obtained by GTM with some white Gaussian noise:

$$y \sim \mathcal{N}\!\left(m(v; W), \tfrac{1}{\beta} I\right) \tag{8}$$

where I is an n × n identity matrix and 1/β defines the level of the noise vectors.
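To make Algorithm 1 and the sampling in (8) concrete, the following is a minimal NumPy sketch of one EM iteration and of MEA/GTM-style sampling. It is an illustration under assumptions rather than the authors' implementation: the basis matrix Phi is assumed to be precomputed, the weights are stored so that the mapped grid points are Phi @ W, and the regularization often added in GTM training is omitted.

import numpy as np

def gtm_em_step(U, Phi, W, beta):
    # One EM iteration of Algorithm 1 (a sketch).
    # U: (N, D) training data (here, the population in the decision space);
    # Phi: (K, M) fixed basis matrix with Phi[i, j] = phi_j(v_i);
    # W: (M, D) mapping weights, so the mapped grid points are Phi @ W;
    # beta: current inverse noise variance.
    N, D = U.shape
    Y = Phi @ W                                           # (K, D) mapped grid points
    # E-step: responsibilities R[i, n] proportional to exp(-beta/2 * ||y_i - u_n||^2).
    d2 = ((Y[:, None, :] - U[None, :, :]) ** 2).sum(-1)   # (K, N) squared distances
    logR = -0.5 * beta * d2
    logR -= logR.max(axis=0, keepdims=True)               # numerical stabilization
    R = np.exp(logR)
    R /= R.sum(axis=0, keepdims=True)                     # normalize over components
    # M-step: solve (Phi^T G Phi) W_new = Phi^T R U, then re-estimate beta.
    G = np.diag(R.sum(axis=1))
    W_new = np.linalg.solve(Phi.T @ G @ Phi, Phi.T @ R @ U)
    d2_new = (((Phi @ W_new)[:, None, :] - U[None, :, :]) ** 2).sum(-1)
    beta_new = N * D / (R * d2_new).sum()
    return W_new, beta_new

def mea_gtm_sample(Phi, W, beta):
    # MEA/GTM-style offspring sampling per (8): mapped grid points plus
    # white Gaussian noise with variance 1/beta.
    Y = Phi @ W
    return Y + np.random.randn(*Y.shape) / np.sqrt(beta)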
FIGURE 2 (a) The basic idea of offspring generation in MEA/GTM; (b) and (c) offspring solutions generated by RM-MEDA and MEA/GTM with the same parent solutions on GLT2 for one generation in the decision and objective space, respectively.

However, the offspring generation using (8) does not favor any specific search direction, so the noise vectors scatter offspring solutions around the sampled points. To examine this empirically, we generate offspring solutions with RM-MEDA and MEA/GTM from the same parent population for one generation on the GLT2 test instance, and plot the results in the decision and objective space in Figure 2(b) and (c), respectively. As the figures show, many offspring solutions scatter far from the real PS and do not converge to the PF. As mentioned in [26], the offspring operation in MEA/GTM may favor global exploration due to the lack of local information about individuals. Moreover, MEA/GTM reinitializes a GTM model and performs the complete training process at each generation, which entails a significant computational burden. Furthermore, although the Pareto optimal set of a continuous MOP defines a piecewise (m−1)-D manifold in the decision (objective) space, in the early stages the solutions are dispersed throughout the search space and lack regularity. In such a case, a fully trained GTM model might be unnecessary and could even misguide the search process. Given the above considerations, our primary focus is on exploiting GTM to learn the manifold distribution more efficiently within an algorithm framework and thereby guide the search for offspring generation.
III. Proposed Method
The primary motivation of LGSEA is to integrate a GTM-based learning-to-guide search strategy (LGS) into a MOEA, so that it can capture the manifold structure and guide the evolutionary search for offspring generation. Given that the training of GTM has been presented in Section II-C, this section presents the framework of LGSEA, which embeds the GTM learning steps within the evolutionary procedures, followed by the LGS-based generation operators for offspring generation in LGSEA.
A. Framework of LGSEA
FIGURE 3 The framework of LGSEA.

At each iteration, LGSEA maintains a population P and an external archive A, where the size of A is identical to that of P. The GTM method is adopted to capture the manifold distribution of P with a generative model, and it maps a set of regular grid points from the latent space to the decision space to form the archive A, which guides the search of LGSEA for offspring generation. To reduce the computational cost, as shown in Figure 3, LGSEA reuses the GTM model with the previously learned manifold distribution information by conducting the training and evolutionary steps alternately.

The framework of LGSEA is outlined in Algorithm 2. A population P with N solutions is randomly initialized in the decision space Ω, and a GTM model is defined with the initial parameters W_0 and β_0 in line 1. The proposed LGS involves two main procedures in LGSEA:
❏ Learning the Regularity via GTM: To capture the manifold distribution, the learning problem focuses on estimating the parameters W and β of the GTM model. In each iteration, the current population P serves as the training data, and the GTM model parameters are updated through a single or a few iterations of the EM training steps (i.e., G = 1 or a small value within 5 iterations in Algorithm 1) in line 3. A set of trial vectors u, i.e., the external archive A, is obtained by mapping the regular grid points v from the latent space to the decision space with the built GTM model defined by (5) in line 4.
❏ Guiding Evolutionary Search for Offspring Generation: In lines 7-10, for a randomly selected solution x, a mating pool M is established with some neighborhood solutions according to the Euclidean distance in the decision space, and the closest sampled trial vector is selected from the archive A as the reference (or guiding) vector to mate with the parents for offspring generation. The generated offspring solution updates the population in line 11. Most existing environmental selection methods, mentioned in Section I, can be utilized in LGSEA.
B. Offspring Generation With LGS

As discussed in Section II-C3, although the regularity is learned in both MEA/GTM and RM-MEDA, their offspring generation exhibits low search efficiency because the search lacks directions. Therefore, this section introduces LGS, which aims to enhance search efficiency by guiding the search with the learned manifold information, i.e., regular grid points mapped from the latent space to the decision space along the approximated manifold serve as guiding solutions.

Given a solution x randomly selected from the current population P, the generation procedure y = GenSol(x, u, M) in Algorithm 2 is adopted to generate a new trial solution y using the guiding position u and its mating pool M. Inspired by some existing offspring generation operators, i.e., simulated binary crossover (SBX) and differential evolution (DE), three alternative generation operators are obtained by LGS:
❏ Operator 1:

$$y = \begin{cases} 0.5\,((1 + r) \cdot x + (1 - r) \cdot u) & \text{if } rand < 0.5 \\ 0.5\,((1 - r) \cdot x + (1 + r) \cdot u) & \text{otherwise} \end{cases} \tag{9}$$

❏ Operator 2:

$$y = u + r \cdot (x_{r1} - x_{r2}) \tag{10}$$

❏ Operator 3:

$$y = x + r \cdot (u - x) + (1 - r) \cdot (x_{r1} - x_{r2}) \tag{11}$$

where Operator 1 is a simplified and modified version of the SBX operator, while Operator 2 and Operator 3 are variations of the DE/best/1 and DE/current-to-best/1 mutations, respectively. x_{r1} and x_{r2} are two randomly selected solutions from the mating pool. To reduce the influence of parameters on these operators, all parameters in Operators 1, 2, and 3 are set to random values: rand returns a uniformly distributed random value in [0, 1], and r is a vector with random components in [0, 1].
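The three operators are straightforward to state in code. Below is a minimal Python sketch of (9)-(11); the function name lgs_operator and its argument layout are ours for illustration and do not appear in the paper.

import numpy as np

def lgs_operator(op, x, u, xr1, xr2):
    # Sketch of the three LGS generation operators, eqs. (9)-(11).
    # x: parent solution; u: guiding vector mapped from the latent space;
    # xr1, xr2: two solutions randomly drawn from the mating pool.
    # All control parameters are random values, as in the paper.
    r = np.random.rand(x.size)           # component-wise random weights in [0, 1]
    if op == 1:                          # eq. (9): SBX-like blend of x and u
        if np.random.rand() < 0.5:
            return 0.5 * ((1 + r) * x + (1 - r) * u)
        return 0.5 * ((1 - r) * x + (1 + r) * u)
    if op == 2:                          # eq. (10): DE/best/1-like, u as base vector
        return u + r * (xr1 - xr2)
    # eq. (11): DE/current-to-best/1-like, moving x toward the guide u
    return x + r * (u - x) + (1 - r) * (xr1 - xr2)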
Algorithm 2. LGSEA Framework.
Input: the population size N and the maximum number of generations T;
Output: the final population P;
1: Initialize P = {x_1, ..., x_N}, and the GTM model with parameters W_0 and β_0;
2: for t = 1, ..., T do
3:   Train the GTM with the current population P to update the model parameters to W_t and β_t by Algorithm 1;
4:   Set A = {u | u = m(v; W_t)} with the GTM model, where v are regular grid points distributed uniformly on [−1, 1]^{m−1};
5:   Set Q = P;
6:   while A ≠ ∅ do
7:     Randomly select x ∈ Q, and set Q = Q \ {x};
8:     Set the neighborhood mating pool M for x with its K-nearest neighbors;
9:     Set a guiding solution u for x from A, where u = arg min_{u∈A} ‖x − u‖_2;
10:    Generate a new trial solution y = GenSol(x, u, M);
11:    Update the population P with y;
12:    Set A = A \ {u};
13:  end
14: end
15: Return P.
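Putting the pieces together, the following sketch outlines one generation of Algorithm 2, reusing the gtm_em_step and lgs_operator sketches above. The survive callable stands in for any environmental selection (e.g., the S-metric selection) and evaluate for the objective functions; both are placeholders, the latent grid is assumed to have as many points as the population (so that |A| = |P|), and the repair and mutation steps of Algorithm 3 are omitted for brevity.

import numpy as np

def lgsea_generation(P, F, Phi, W, beta, evaluate, survive, K=10):
    # One generation of LGSEA (Algorithm 2, lines 3-13), as a sketch.
    W, beta = gtm_em_step(P, Phi, W, beta)        # incremental training (line 3)
    A = list(Phi @ W)                             # archive of guiding vectors (line 4)
    Q = list(P.copy())                            # Q = P (line 5)
    while A:                                      # lines 6-12
        x = Q.pop(np.random.randint(len(Q)))      # random parent, removed from Q
        # Mating pool M: the K nearest neighbors of x in the decision space.
        order = np.argsort(np.linalg.norm(P - x, axis=1))
        M = P[order[1:K + 1]]
        # Guiding vector: the archive member closest to x, then removed.
        j = int(np.argmin([np.linalg.norm(x - u) for u in A]))
        u = A.pop(j)
        i1, i2 = np.random.choice(len(M), 2, replace=False)
        y = lgs_operator(2, x, u, M[i1], M[i2])   # Operator 2: the best combination found
        P, F = survive(P, F, y, evaluate(y))      # environmental selection (line 11)
    return P, F, W, beta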
As depicted in Figure 4, the offspring generation with LGS is conducted with GTM learning to guide the search of the population, with Operator 2 shown as an example in the figure. The details of the offspring generation procedure are presented in Algorithm 3, which also applies polynomial mutation (PM) after the LGS generation step. To be specific, LGSEA first adopts the developed LGS operator to generate a trial solution y by one of Operators 1-3 in line 2. It then employs the PM operator to mutate the trial solution in line 4, where p_m is the mutation probability and η_m denotes the distribution index of the mutation. To keep solutions feasible, the trial solution is further repaired before and after the PM operation in lines 3 and 5.
Algorithm 3. y = GenSol(x, u, M) for Offspring Generation.
Input: the solution x, the mapped point u, and the mating pool M;
Output: an offspring solution y;
1: Randomly choose some parents from M;
2: Generate a trial solution y by one of Operators 1-3;
3: Repair the solution y for i = 1, ..., n:
     y_i = $\underline{x}_i$ + (1/2) rand() ($\hat{x}_i$ − $\underline{x}_i$)   if y_i < $\underline{x}_i$
     y_i = $\hat{x}_i$ − (1/2) rand() ($\hat{x}_i$ − $\underline{x}_i$)   if y_i > $\hat{x}_i$
     y_i = y_i                                    otherwise
4: Mutate y_i by:
     y_i = y_i + δ_i Δx_i   if rand() < p_m
     y_i = y_i              otherwise
   where i = 1, ..., n, Δx_i = $\hat{x}_i$ − $\underline{x}_i$, and
     δ_i = (2r)^{1/(η_m+1)} − 1         if r < 0.5
     δ_i = 1 − (2 − 2r)^{1/(η_m+1)}     otherwise
5: Repair y again if necessary;
6: Return the offspring solution y.
It should be noted that the learned manifold distribution information provides guidance for the search. In the proposed LGS-based operators, when the trial vectors are sampled from a GTM model built with the global distribution information of the population, global exploration is conducted; when these mapped solutions are mated with neighborhood individuals, local exploitation is expected. In this manner, both local and global information is integrated into the offspring generation of LGSEA.
FIGURE 4 The basic idea of offspring generation with LGS, with Operator 2 shown as an example. Note that there is some error between the manifold embedded in the decision space by the GTM and the real PS.

TABLE I Mean IGD values obtained by LGSEA with different selections and generation operators on the GLT test suite.

C. Remarks

Although both the precursor MEA/GTM and the proposed LGSEA adopt the generative GTM model, they differ significantly in the offspring generation and training steps, which can be summarized as follows.
❏ On Offspring Generation: MEA/GTM samples offspring solutions from the probability model obtained by GTM, while LGSEA maps points from the latent space to the decision space to guide the search. In contrast to MEA/GTM, which generates solutions entirely based on the accuracy of the GTM model, LGSEA somewhat relaxes the demand on GTM model quality, as the mapped points are incorporated in the three alternative operators only as guiding vectors.
❏ On Training the GTM Model: Unlike MEA/GTM, which initializes the GTM model and performs a complete training step at each iteration, the proposed LGSEA reuses the GTM model with an incremental training procedure that conducts the training and evolutionary steps in an alternating manner. Through this approach, the overhead of GTM training is reduced.
It should be noted that various RM-MEDA variants have thoroughly investigated the modeling and sampling procedures of RM-MEDA. For instance, in a recent study [47], several new components, including population partitioning, nondominated solution modeling, and hybrid offspring generation procedures, are designed and embedded in RM-MEDA to improve its performance, specifically using local PCA. However, a research gap remains in the efficient application of the generative GTM model, which had only been implemented in MEA/GTM for addressing MOPs. To address this gap, the proposed LGSEA proves to be more practical than MEA/GTM in terms of guiding the search for offspring generation and incrementally training the GTM model.
IV. Experimental Study

To investigate the performance of the proposed LGSEA in dealing with complicated MOPs, four sets of experiments are conducted in this section.
1) General Performance and Analysis: In the first part, three selection methods covering the main categories of MOEAs, i.e., those of NSGA-II [4], SMS-EMOA [7], and MOEA/D [9], are embedded in LGSEA, and comparisons are conducted among the different selections in LGSEA with Operators 1, 2, and 3. The sensitivity to the parameter K (the size of the mating pool) in LGSEA is investigated, and the performance of LGSEA with parents selected from the neighbors and from the global population is analyzed.
2) Comparisons With the State of the Art: The best combination in LGSEA, i.e., the S-metric selection of SMS-EMOA with Operator 2, is compared with five newly developed and classical learning-based MOEAs on 39 complicated test instances.
3) Scalability Analysis: LGSEA and five representative learning-based MOEAs are examined on the WFG test suite with a large number of decision variables.
4) Performance on Real-World Problems: Since LGSEA shows good performance on benchmark problems with complicated PS and PF shapes, the proposed LGSEA is applied to several real-world problems in this part.

A. Experimental Settings

This paper focuses on MOPs with both complicated PF and PS shapes. In our experimental studies, we use several benchmark test suites, including the GLT [48] test suite with complex PF shapes, the LZ [10] test suite with intricate PS shapes, the IMF [43] test suite with variable linkages, and the tri-objective WFG [49] and the ZDT [50] test suites. For fair comparisons, we adopt the recommended parameter settings of the compared algorithms, which achieved the best performance reported in the literature. All compared algorithms are implemented in PlatEMO [51], and each algorithm is run independently 31 times for each instance in this study.
FIGURE 5 The mean and standard deviations of the IGD values obtained by LGSEA with different settings on GLT1-GLT6 test instances over 30 independent runs.
TABLE II Statistical results (Mean(Std. Dev.)[rank]) obtained by AMEA, IM-MOEA, SMEA, RM-MEDA, MEA/GTM, and LGSEA over 30 independent runs on the GLT, LZ, IMF, WFG, and ZDT test suites in terms of the IGD metrics.
Two popular performance indicators, namely the inverted generational distance (IGD) [14] and the hypervolume (HV) [5], are adopted to evaluate the performance of the compared algorithms. In our experiments, the mean and standard deviation of each metric over the independent runs are recorded in the tables. The mean of the ranks of each algorithm is also indicated in the tables, and the best mean metric value for each instance is marked with a gray background. Wilcoxon's rank sum test at a 5% significance level is performed on the metric values obtained by each pair of algorithms; "+", "−", and "≈" in the tables denote that the performance of the compared algorithm is better than, worse than, and similar to that of LGSEA, respectively.
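For reference, minimal sketches of the two indicators are given below (for minimization; the HV sweep covers the bi-objective case only). They assume that a sampled reference PF and a reference point are available, and neither function is taken from PlatEMO.

import numpy as np

def igd(ref_front, approx_front):
    # Inverted generational distance: the mean Euclidean distance from each
    # reference-PF point to its nearest point in the approximation set.
    d = np.linalg.norm(ref_front[:, None, :] - approx_front[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

def hv_2d(front, ref):
    # Hypervolume of a bi-objective (minimization) front w.r.t. reference
    # point ref: the area dominated by the front and bounded by ref.
    pts = sorted(tuple(p) for p in front if np.all(p < ref))
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                          # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv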
B. Algorithm Analysis

In this part, we investigate three factors that may affect the performance of the proposed LGSEA: the general performance under different selection methods, the size of the mating pool, and the effect of neighborhood mating.
1) General Performance With Representative Selections
To investigate the effectiveness of LGSEA under different selection methods, three representative selection methods are embedded in LGSEA: LGSEA/NS, which utilizes the nondominated sorting-based selection of NSGA-II; LGSEA/SMS, which employs the S-metric selection of SMS-EMOA; and LGSEA/D, which uses the decomposition-based selection of MOEA/D. These LGSEA variants, each paired with the developed operators, are tested on the GLT test suite, where the maximum number of iterations is T = 450 for GLT1-GLT6. The IGD results achieved by the compared algorithms are shown in Table I, where the three original algorithms, i.e., NSGA-II, SMS-EMOA, and MOEA/D, are also executed for comparison with the LGSEA variants. Table I shows that the LGSEA variants with the developed operators outperform the original NSGA-II, SMS-EMOA, and MOEA/D on all GLT test instances. Among the nine LGSEA variations, the combination of the S-metric selection method and Operator 2 achieves the best results. This suggests that the steady-state S-metric selection might be more
suitable for LGSEA. In fact, the proposed LGSEA may fail with NSGA-II and MOEA/D, since Pareto dominance shows slow convergence to the optimal front, while MOEA/D may suffer from its fixed weight vectors on the complicated PFs of the GLT test instances. Therefore, LGSEA with the best combination, i.e., Operator 2 for offspring generation and the S-metric method for selection, is used in the subsequent parts.

2) Components Analysis
This experiment studies the role of the mating pool M in LGSEA, including the size of M and whether M is formed from neighbors or the global population. LGSEA with different settings of K (K ∈ {2, 4, 6, 8, 10}) is tested on the six GLT test instances. The mean values and standard deviations of the IGD metric of the final populations obtained by LGSEA with different mating pool sizes are shown in Figure 5(a). The figure shows that the proposed LGSEA is relatively insensitive to the size of the mating pool, although it performs poorly with a very small mating pool (K = 2). The experimental results indicate that a large neighborhood mating pool is appropriate for LGSEA, so K = 10 is used in the following experiments.
Afterward, we further discuss the effect of neighborhood mating in LGSEA. In several existing learning-based MOEAs, such as SMEA and OCEA, a mating restriction probability (denoted as β) is adopted to balance local and global search with neighbors or random population solutions. The mean and standard deviations of the IGD values obtained by LGSEA with β = 0.2, 0.4, 0.6, 0.8, and 1.0 on GLT1-GLT6 are shown in Figure 5(b). As seen in Figure 5(b), the mean IGD values obtained by LGSEA on these instances are not heavily affected by β. Consequently, it may not be necessary to incorporate this additional parameter in LGSEA.
TABLE III Statistical results (Mean(Std. Dev.)[rank]) obtained by AMEA, IM-MOEA, SMEA, RM-MEDA, MEA/GTM, and LGSEA over 30 independent runs on the GLT, LZ, IMF, WFG, and ZDT test suites in terms of the HV metrics.
FIGURE 6 The representative PFs and the evolution of the IGD metric values obtained by RM-MEDA, MEA/GTM, and LGSEA on the GLT1-GLT6 test instances, associated with the median IGD metric values over 30 runs.
C. Comparison Studies
In this part, LGSEA is compared with five representative learning-based MOEAs that all focus on offspring generation with regularity: AMEA, IM-MOEA, SMEA, RM-MEDA, and MEA/GTM.
1) AMEA [26]: a learning-based MOEA that incorporates both MGM sampling and DE operations;
2) IM-MOEA [43]: an inverse modeling-based MOEA;
3) SMEA [25]: a self-organizing map-based MOEA with mating restrictions;
4) RM-MEDA [14]: a classical MOEA with PS approximation by regularity models;
5) MEA/GTM [46]: a MOEA that also uses the GTM model.
Table II shows the statistical results of the IGD values obtained by AMEA, IM-MOEA, SMEA, RM-MEDA, MEA/GTM, and LGSEA on the test instances (600 iterations for GLT, IMF, WFG, and ZDT, and 2000 iterations for LZ). Considering all test suites, the proposed LGSEA achieves significantly better IGD values than AMEA, IM-MOEA, SMEA, RM-MEDA, and MEA/GTM on 29, 36, 32, 37, and 37 out of the 39 test instances, respectively, according to the Wilcoxon's rank sum test. In terms of the mean rankings, the algorithms ordered from best to worst performance are LGSEA, SMEA, AMEA, IM-MOEA, RM-MEDA, and MEA/GTM. Taking each test suite individually, LGSEA achieves all the best IGD values on GLT1-GLT6. Meanwhile, LGSEA obtains 7, 7, 7, and 4 best mean IGD values on the LZ, IMF, WFG, and ZDT test suites, respectively.
Table III shows the statistical results for the HV values of the compared algorithms. From the table, we observe that LGSEA ranks first, followed by SMEA, IM-MOEA, AMEA, RM-MEDA, and MEA/GTM, in terms of the overall mean rankings. Further, based on Wilcoxon's rank sum test, out of the 39 comparisons LGSEA achieves better mean metric values than AMEA, IM-MOEA, SMEA, RM-MEDA, and MEA/GTM in 31, 37, 29, 28, and 39 cases, worse values in 4, 1, 4, 0, and 0 cases, and similar values in 4, 1, 6, 1, and 0 cases, respectively.
To reveal the search details, Figure 6 illustrates the approximated fronts (AFs) and the evolution of the median IGD values obtained by RM-MEDA, MEA/GTM, and LGSEA on the GLT1-GLT6 instances. The results show that LGSEA successfully converges to the PFs of all the instances. In contrast, RM-MEDA is unable to converge to the PFs of GLT5 and GLT6, and MEA/GTM fails to converge on all the GLT test instances. This indicates that LGSEA converges the fastest among the compared algorithms and maintains the best uniformity during the evolution. The above experimental results indicate that RM-MEDA and MEA/GTM have the poorest performance in the comparisons. Their generation procedures, which involve sampling from the built models with added white Gaussian noise, may not be effective for producing promising new trial solutions. Although IM-MOEA achieves the best results on certain problems (i.e., IMF6, IMF10, and WFG1), it may fail to learn the mapping from the objective space to the decision space on the other test instances.
AMEA and SMEA can generate promising solutions on various MOPs with complex PF and PS shapes, but their mating restriction strategies, based on probability values, might fail to balance local exploitation and global exploration, since their effectiveness is still problem-dependent. In contrast, LGSEA employs GTM to learn the manifold structure and guide the search with the alternative operators, which helps generate promising offspring solutions. The designed incremental training procedure also reduces the overhead of GTM training. In summary, LGSEA outperforms the compared algorithms on the five test suites in terms of both IGD and HV. These results demonstrate that LGSEA is highly competitive with several state-of-the-art learning-based MOEAs.
D. Scalability Studies
To further demonstrate the superiority of LGSEA, its scalability with respect to the number of decision variables is examined on the WFG1-WFG9 test instances with 50, 80, 100, and 200 decision variables (600 iterations for the 50-, 80-, and 100-D instances and 800 iterations for the 200-D instances). The IGD results achieved by the six compared MOEAs are given in Table IV. In these comparisons, LGSEA obtains the highest number of best results (27 out of 36), followed by IM-MOEA, AMEA, and SMEA with 6, 2, and 1 best results, respectively. To visualize the results, Figure 7 plots the mean and standard deviations of the IGD values obtained by LGSEA with different numbers of decision variables. LGSEA produces relatively small IGD values for WFG2-WFG9, and n does not significantly influence the obtained IGD values, while LGSEA still performs less well on WFG1. Considering that the PF of WFG1 is quite complicated, LGSEA may simply require more evolutionary generations there. In a word, the developed LGSEA demonstrates its capability to scale up to large-scale decision variables effectively.
TABLE IV Statistical results (Mean(Std. Dev.)[rank]) obtained by AMEA, IM-MOEA, SMEA, RM-MEDA, MEA/GTM, and LGSEA over 30 independent runs on the WFG test suite with different numbers of decision variables in terms of the IGD metric.
FIGURE 7 IGD results achieved by LGSEA on nine WFG test instances where the number of decision variables varies from 50 to 200.
FIGURE 8 The representative PFs obtained by LGSEA on RE test instances.
E. Performance on Real-World Problems
In the previous experimental parts, LGSEA has been tested on various synthetic MOPs with complex PF and PS shapes. Naturally, extending LGSEA to tackle some real-world problems, such as the industrial design applications in the RE test suite [52], is a logical next step. Therefore, six continuous RE test instances, i.e., RE2-4-1, RE2-2-4, RE3-3-1, RE3-4-2, RE3-4-3, and RE3-5-4, are employed to evaluate LGSEA, running each instance for 200 iterations and plotting the final population in the objective space in Figure 8. The figure reveals that LGSEA successfully approximates the PFs of RE2-4-1, RE2-2-4, and RE3-4-2. Although LGSEA fails to cover the PFs of RE3-3-1, RE3-4-3, and RE3-5-4 uniformly, the population solutions have converged to the optimal solutions and reach the whole PFs. Considering that the PFs of these problems are quite complicated, a larger population size might be appropriate for the RE3-3-1, RE3-4-3, and RE3-5-4 test instances.

V. Conclusion
In this paper, we introduce a learning-to-guide search strategy (LGS) for evolutionary multiobjective optimization, primarily designed to learn the regularity property of continuous MOPs and guide the evolutionary search when dealing with complicated MOPs. The proposed method has several distinctive features. First, a generative model, the GTM method, is adopted to capture the manifold distribution of the population, and a set of regular grid points is mapped from the latent space to the decision space, within a manifold structure, for mating with some individuals for offspring generation. Then, three alternative offspring generation operators are designed around the basic idea of LGS. Finally, a multiobjective evolutionary algorithm based on the proposed LGS (called LGSEA) is developed within an efficient framework that reuses the GTM model with incremental training steps.
Extensive algorithm analyses and comparisons are conducted to test the proposed method. Three representative selection methods from classical MOEAs (i.e., NSGA-II, SMS-EMOA, and MOEA/D) have been embedded in LGSEA, and the sensitivity to some control parameters in LGSEA has been experimentally investigated. The performance of the proposed LGSEA is compared with five learning-based MOEAs, namely AMEA, IM-MOEA, SMEA, RM-MEDA, and MEA/GTM. The superiority of the proposed LGSEA over the other five algorithms is demonstrated, along with its scalability in the number of decision variables and its performance on real-world problems.
In our future work, the ensemble of the proposed three operators with LGS will be further studied for a more efficient search in the decision space. As LGSEA demonstrates good scalability regarding decision variables, we may also extend it to address large-scale MOPs in future research.
Acknowledgment
This work was supported in part by the Science and Technology Commission of Shanghai Municipality under Grant 22511105901,
in part by the Scientific and Technological Innovation 2030 Major Projects under Grant 2018AAA0100902, in part by the Fundamental Research Funds for the Central Universities, in part by the National Key R&D Program of China under Grant 2022ZD0161800, in part by the NSFC-RGC under Grant 61961160734, and in part by the Shanghai Rising-Star Program under Grant 21QA1402500.

References
[1] K. Miettinen, Nonlinear Multiobjective Optimization, vol. 12. Berlin, Germany: Springer, 2012. [2] K. Deb, "Multi-objective optimisation using evolutionary algorithms: An introduction," in Multi-Objective Evolutionary Optimisation for Product Design and Manufacturing. Berlin, Germany: Springer, 2011, pp. 3–34. [3] A. Zhou, B.-Y. Qu, H. Li, S.-Z. Zhao, P. N. Suganthan, and Q. Zhang, "Multiobjective evolutionary algorithms: A survey of the state of the art," Swarm Evol. Comput., vol. 1, no. 1, pp. 32–49, 2011. [4] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Trans. Evol. Comput., vol. 6, no. 2, pp. 182–197, Apr. 2002. [5] E. Zitzler, M. Laumanns, and L. Thiele, "SPEA2: Improving the strength pareto evolutionary algorithm," Comput. Eng. Netw. Lab., ETH Zurich, Zurich, Switzerland, TIK-Report 103, 2001. [6] D. W. Corne, N. R. Jerram, J. D. Knowles, and M. J. Oates, "PESA-II: Region-based selection in evolutionary multiobjective optimization," in Proc. 3rd Annu. Conf. Genet. Evol. Comput., 2001, pp. 283–290. [7] N. Beume, B. Naujoks, and M. Emmerich, "SMS-EMOA: Multiobjective selection based on dominated hypervolume," Eur. J. Oper. Res., vol. 181, no. 3, pp. 1653–1669, 2007. [8] J. Bader and E. Zitzler, "HypE: An algorithm for fast hypervolume-based many-objective optimization," Evol. Comput., vol. 19, no. 1, pp. 45–76, 2011. [9] Q. Zhang and H. Li, "MOEA/D: A multiobjective evolutionary algorithm based on decomposition," IEEE Trans. Evol. Comput., vol. 11, no. 6, pp. 712–731, Dec. 2007. [10] H. Li and Q. Zhang, "Multiobjective optimization problems with complicated pareto sets, MOEA/D and NSGA-II," IEEE Trans. Evol. Comput., vol. 13, no. 2, pp. 284–302, Apr. 2009. [11] L. Chen, K. Deb, and H.-L. Liu, "Explicit control of implicit parallelism in decomposition-based evolutionary many-objective optimization algorithms [research frontier]," IEEE Comput. Intell. Mag., vol. 14, no. 4, pp. 52–64, Nov. 2019. [12] C. Hillermeier, Nonlinear Multiobjective Optimization: A Generalized Homotopy Approach, vol. 135. Berlin, Germany: Springer, 2001. [13] Y. Jin and B. Sendhoff, "Connectedness, regularity and the success of local search in evolutionary multi-objective optimization," in Proc. IEEE Congr. Evol. Comput., 2003, pp. 1910–1917. [14] Q. Zhang, A. Zhou, and Y. Jin, "RM-MEDA: A regularity model-based multiobjective estimation of distribution algorithm," IEEE Trans. Evol. Comput., vol. 12, no. 1, pp. 41–63, Feb. 2008. [15] N. Kambhatla and T. K. Leen, "Dimension reduction by local principal component analysis," Neural Comput., vol. 9, no. 7, pp. 1493–1516, 1997. [16] Y. Jin and B. Sendhoff, "A systems approach to evolutionary multiobjective structural optimization and beyond," IEEE Comput. Intell. Mag., vol. 4, no. 3, pp. 62–76, Aug. 2009. [17] H. Wang, Q. Zhang, L. Jiao, and X. Yao, "Regularity model for noisy multiobjective optimization," IEEE Trans. Cybern., vol. 46, no. 9, pp. 1997–2009, Sep. 2016. [18] S. Wang, B. Li, and A. Zhou, "A regularity augmented evolutionary algorithm with dual-space search for multiobjective optimization," Swarm Evol.
Comput., vol. 78, 2023, Art. no. 101261. [19] K. Li and S. Kwong, “A general framework for evolutionary multiobjective optimization via manifold learning,” Neurocomputing, vol. 146, pp. 65–74, 2014. [20] T. Hastie and W. Stuetzle, “Principal curves,” J. Amer. Stat. Assoc., vol. 84, no. 406, pp. 502–516, 1989. [21] U. Ozertem and D. Erdogmus, “Locally defined principal curves and surfaces,” J. Mach. Learn. Res., vol. 12, pp. 1249–1286, 2011. [22] M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Comput., vol. 15, no. 6, pp. 1373–1396, 2003. [23] A. K. Jain, “Data clustering: 50 years beyond K-means,” Pattern Recognit. Lett., vol. 31, no. 8, pp. 651–666, 2010. [24] T. Kohonen, “The self-organizing map,” Proc. IEEE, vol. 78, no. 9, pp. 1464– 1480, Sep. 1990. [25] H. Zhang, A. Zhou, S. Song, Q. Zhang, X.-Z. Gao, and J. Zhang, “A self-organizing multiobjective evolutionary algorithm,” IEEE Trans. Evol. Comput., vol. 20, no. 5, pp. 792–806, Oct. 2016. [26] J. Sun, H. Zhang, A. Zhou, Q. Zhang, and K. Zhang, “A new learning-based adaptive multi-objective evolutionary algorithm,” Swarm Evol. Comput., vol. 44, pp. 304–319, 2019.
[27] L. Pan, L. Li, R. Cheng, C. He, and K. C. Tan, "Manifold learning-inspired mating restriction for evolutionary multiobjective optimization with complicated pareto sets," IEEE Trans. Cybern., vol. 51, no. 6, pp. 3325–3337, Jun. 2021. [28] W. Zhang, N. Zhang, W. Zhang, G. G. Yen, and G. Li, "A cluster-based immune-inspired algorithm using manifold learning for multimodal multi-objective optimization," Inf. Sci., vol. 581, pp. 304–326, 2021. [29] I. D. Guedalia, M. London, and M. Werman, "An on-line agglomerative clustering method for nonstationary data," Neural Comput., vol. 11, no. 2, pp. 521–540, 1999. [30] S. Calinon and A. Billard, "Incremental learning of gestures by imitation in a humanoid robot," in Proc. 2nd ACM/IEEE Int. Conf. Hum.-Robot Interact., 2007, pp. 255–262. [31] K. Li and R. Chen, "Batched data-driven evolutionary multi-objective optimization based on manifold interpolation," IEEE Trans. Evol. Comput., vol. 27, no. 1, pp. 126–140, Feb. 2023. [32] C. M. Bishop, M. Svensén, and C. K. Williams, "GTM: The generative topographic mapping," Neural Comput., vol. 10, no. 1, pp. 215–234, 1998. [33] C. M. Bishop, M. Svensén, and C. K. Williams, "Developments of the generative topographic mapping," Neurocomputing, vol. 21, no. 1–3, pp. 203–224, 1998. [34] P. Larrañaga and J. A. Lozano, Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation, vol. 2. Berlin, Germany: Springer, 2001. [35] Y. Wang, J. Xiang, and Z. Cai, "A regularity model-based multiobjective estimation of distribution algorithm with reducing redundant cluster operator," Appl. Soft Comput., vol. 12, no. 11, pp. 3526–3538, 2012. [36] Y. Li, X. Xu, P. Li, and L. Jiao, "Improved RM-MEDA with local learning," Soft Comput., vol. 18, no. 7, pp. 1383–1397, 2014. [37] M. Shi, Z. He, Z. Chen, and X. Liu, "A full variate Gaussian model-based RM-MEDA without clustering process," Int. J. Mach. Learn. Cybern., vol. 9, no. 10, pp. 1591–1608, 2018. [38] Y. Jin, A. Zhou, Q. Zhang, B. Sendhoff, and E. Tsang, "Modeling regularity to improve scalability of model-based multiobjective optimization algorithms," in Multiobjective Problem Solving From Nature. Berlin, Germany: Springer, 2008, pp. 331–355. [39] A. Zhou, Q. Zhang, and G. Zhang, "A multiobjective evolutionary algorithm based on decomposition and probability model," in Proc. IEEE Congr. Evol. Comput., 2012, pp. 1–8. [40] A. Zhou, Q. Zhang, and Y. Jin, "Approximating the set of pareto-optimal solutions in both the decision and objective spaces by an estimation of distribution algorithm," IEEE Trans. Evol. Comput., vol. 13, no. 5, pp. 1167–1189, Oct. 2009. [41] Y. Tian, X. Zhang, R. Cheng, C. He, and Y. Jin, "Guiding evolutionary multiobjective optimization with generic front modeling," IEEE Trans. Cybern., vol. 50, no. 3, pp. 1106–1119, Mar. 2020. [42] Y. Tian, L. Si, X. Zhang, K. C. Tan, and Y. Jin, "Local model-based pareto front estimation for multiobjective optimization," IEEE Trans. Syst., Man, Cybern. Syst., vol. 53, no. 1, pp. 623–634, Jan. 2023. [43] R. Cheng, Y. Jin, K. Narukawa, and B. Sendhoff, "A multiobjective evolutionary algorithm using Gaussian process-based inverse modeling," IEEE Trans. Evol. Comput., vol. 19, no. 6, pp. 838–856, Dec. 2015. [44] C. He, S. Huang, R. Cheng, K. C. Tan, and Y. Jin, "Evolutionary multiobjective optimization driven by generative adversarial networks (GANs)," IEEE Trans. Cybern., vol. 51, no. 6, pp. 3129–3142, Jun. 2020. [45] Z. Wang, H. Hong, K. Ye, G.-E. Zhang, M. Jiang, and K. C.
Tan, “Manifold interpolation for large-scale multiobjective optimization via generative adversarial networks,” IEEE Trans. Neural Netw. Learn. Syst., vol. 34, no. 8, pp. 4631–4645, Aug. 2023. [46] A. Zhou, Q. Zhang, Y. Jin, B. Sendhoff, and E. Tsang, “Modelling the population distribution in multi-objective optimization by generative topographic mapping,” in Proc. Int. Conf. Parallel Prob. Solving Nature, 2006, pp. 443–452. [47] W. Zhang, S. Wang, A. Zhou, and H. Zhang, “A practical regularity model based evolutionary algorithm for multiobjective optimization,” Appl. Soft Comput., vol. 129, 2022, Art. no. 109614. [48] F. Gu, H.-L. Liu, and K. C. Tan, “A multiobjective evolutionary algorithm using dynamic weight design method,” Int. J. Innov. Comput., Inf. Control, vol. 8, no. 5(B), pp. 3677–3688, 2012. [49] S. Huband, L. Barone, L. While, and P. Hingston, “A scalable multi-objective test problem toolkit,” in Proc. 3rd Int. Conf. Evol. Multi-Criterion Optim., 2005, pp. 280–295. [50] E. Zitzler, K. Deb, and L. Thiele, “Comparison of multiobjective evolutionary algorithms: Empirical results,” Evol. Comput., vol. 8, no. 2, pp. 173–195, 2000. [51] Y. Tian, R. Cheng, X. Zhang, and Y. Jin, “PlatEMO: A MATLAB platform for evolutionary multi-objective optimization [educational forum],” IEEE Comput. Intell. Mag., vol. 12, no. 4, pp. 73–87, Nov. 2017. [52] R. Tanabe and H. Ishibuchi, “An easy-to-use real-world multi-objective optimization problem suite,” Appl. Soft Comput., vol. 89, 2020, Art. no. 106078.
RoCaSH2: An Effective Route Clustering and Search Heuristic for Large-Scale Multi-Depot Capacitated Arc Routing Problem

Yuzhou Zhang
Nanjing Xiaozhuang University, CHINA
Yi Mei
Victoria University of Wellington, NEW ZEALAND
Haiqi Zhang
Nanjing University of Science and Technology, CHINA
Qinghua Cai and Haifeng Wu Anqing Normal University, CHINA
Digital Object Identifier 10.1109/MCI.2023.3304081 Date of current version: 17 October 2023
1556-603X © 2023 IEEE
Corresponding author: Yi Mei (e-mail: [email protected]).

Abstract—The Multi-Depot Capacitated Arc Routing Problem (MDCARP) is an important combinatorial optimization problem with wide applications in logistics. Large-Scale MDCARP (LSMDCARP) often occurs in the real world, as the problem size (e.g., the number of edges/tasks) is usually very large in practice. It is challenging to solve LSMDCARP due to the large search space and the complex interactions among the depots and tasks. Divide-and-conquer strategies have shown success in solving large-scale problems by decomposing the problem into smaller subproblems to be solved separately. However, it is challenging to find an accurate decomposition for LSMDCARP. To address this
issue and alleviate the negative effect of inaccurate problem decomposition, this article proposes a new divide-and-conquer strategy for solving LSMDCARP, which introduces a new restricted global optimization stage within the typical dynamic decomposition procedure. Based on the new divide-and-conquer strategy, this article develops a problem-specific Task Moving among Sub-problems (TMaS) process for the global optimization stage and incorporates it into the state-of-the-art RoCaSH algorithm for LSMDCARP. The resultant algorithm, namely RoCaSH2, was compared with the state-of-the-art algorithms on a wide range of LSMDCARP instances, and the results showed that RoCaSH2 can achieve significantly better results than the state-of-the-art algorithms within a much shorter time.

I. Introduction
Since it was first presented in 1981 [1], the Capacitated Arc Routing Problem (CARP) [2] has attracted much attention due to its wide range of real-world applications, such as winter gritting [3], snow removal [4], [5], [6], street salting [7], meter reading [8], waste collection [9], and mail delivery [10]. The goal of CARP is to design routes for a fleet of vehicles to complete the service demands distributed on the edges of a given network with minimal total cost, subject to specific constraints. In the past decades, a large number of studies have been conducted on solving CARP, e.g., [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23].

The basic CARP model considers a single depot and requires each route to start and end at that depot. In the real world, on the other hand, there can be multiple depots at which the vehicles are stationed. To capture this realistic factor, this paper focuses on the Multi-Depot CARP (MDCARP) [24].

In the real world, the problem size can be very large. For example, a waste collection application can easily involve thousands of streets whose waste needs to be collected. However, the existing studies for MDCARP, such as [24], [25], [26], [27], [28], [29], focus only on small to medium-size problem instances, which usually contain fewer than 300 required edges. They ignore the scalability issue caused by the huge search space of Large-Scale MDCARP (LSMDCARP), and thus cannot solve LSMDCARP well within an acceptable time [30].

The divide-and-conquer strategy has shown great success in solving large-scale problems, including continuous optimization benchmark problems [31], [32], [33], [34], [35] and complex combinatorial optimization problems [36], [37], [38], [39], [40], [41], [42], [43], [44], [45], [46], [47], [48]. Briefly speaking, by decomposing the original large problem into smaller sub-problems and solving each sub-problem separately, divide-and-conquer methods can greatly reduce the search space, improve the effectiveness and efficiency of the search, and achieve satisfactory solutions to LSMDCARP within a shorter time.

For designing effective divide-and-conquer approaches for large-scale problems, the main challenge is to accurately
identify a decomposition of the problem that minimizes the interactions between different sub-problems. Most existing studies (e.g., [31], [32], [49], [50], [51], [52]) periodically apply a dynamic decomposition procedure during the search process, which typically contains three main stages. First, in the re-decomposition stage, the problem is re-decomposed into a set of smaller sub-problems based on up-to-date information, such as the current best solution. Then, in the local optimization stage, each sub-problem is solved separately and locally; that is, only the solution components corresponding to that sub-problem are changed, while all the other components are kept fixed. Finally, in the combination stage, the best solutions of the sub-problems are combined to form the solution to the original problem. When applied to the CARP domain (e.g., [45], [46], [53]), the problem is re-decomposed each time by re-clustering (part of) the routes of the best-so-far solution, and the subset of tasks in each cluster corresponds to a sub-problem. Then, in the local optimization stage, each sub-problem is solved to obtain a set of new routes serving only the corresponding subset of tasks.

The existing dynamic decomposition approaches have a limitation in that the re-decomposition and local optimization stages heavily affect each other. As a result, a poor decomposition can lead to poor local optimization results, and vice versa. To overcome this limitation, this paper develops a new dynamic decomposition procedure, which incorporates a new global optimization stage in the loop. This way, the re-decomposition is less affected by the local optimization results and tends to be improved more effectively. Note that the global optimization is typically much more time-consuming than the sub-problem local optimization. To address this issue, a novel problem-specific Task Moving among Sub-problems (TMaS) process is designed for LSMDCARP. The TMaS process restricts the search space of the global optimization, so that the search focuses only on the potentially promising regions. In addition, this paper proposes a new initialization scheme for LSMDCARP that provides a better initial solution and decomposition to enhance the subsequent search.

The overall goal of this paper is to develop a new divide-and-conquer approach for LSMDCARP, with the following specific research objectives:
❏ Develop a new dynamic decomposition procedure that contains a new global optimization stage in addition to the existing re-decomposition, local optimization, and combination stages, so that the re-decomposition is less dependent on the local optimization results.
❏ Develop a new TMaS-based global optimization process for LSMDCARP, which can achieve a good balance between solution quality and time complexity.
❏ Design a new initialization scheme to further enhance the subsequent search process.
❏ Combine all the above new schemes with the state-of-the-art RoCaSH algorithm [30] for LSMDCARP to develop a new algorithm named RoCaSH2.
❏ Verify the effectiveness of RoCaSH2 on a wide range of LSMDCARP instances.
The remainder of this paper is organized as follows. First, Section II gives the background of MDCARP. Then, Section III introduces the proposed new schemes and RoCaSH2. Afterwards, the experimental studies and discussions are provided in Section IV. Finally, Section V concludes the paper and puts forward future directions.
II. Background

A. Multi-Depot Capacitated Arc Routing Problem

In accordance with [30], MDCARP can be described as follows. Let G(V, E) be an undirected connected graph, where V and E represent the vertex set and the edge set, respectively. The vertex set V consists of two complementary subsets: the depot vertex set V^(D) ⊆ V and the non-depot vertex set V^(N) ⊆ V, i.e., V = V^(N) ∪ V^(D) and V^(N) ∩ V^(D) = ∅. Each edge e ∈ E has three non-negative attributes: a service cost sc(e), a deadheading cost dc(e), and a service demand dem(e). A required edge (i.e., dem(e) > 0) is also called a task, and all the tasks comprise the task set T ⊆ E. The aim of MDCARP is to find a set of routes for the vehicles, each with limited capacity Q, to serve all the tasks with the least cost, subject to the following constraints:
❏ Each vehicle must start from a depot v_d^(D) ∈ V^(D) and finally return to the same depot;
❏ Each task is served exactly once by a vehicle;
❏ The total demand served by a route cannot exceed the vehicle capacity Q.

MDCARP reduces to the conventional CARP when there is only one depot. In other words, MDCARP can be considered an extension of CARP from a single depot to multiple depots, and CARP can be seen as a special case of MDCARP.

The problem is formulated under the task representation scheme [17], [54], [55], [56], where each task t = (u, v) ∈ T is assigned two IDs x_1 and x_2, indicating the two mutually inverse directions of serving it. Each task ID has a head vertex and a tail vertex, denoted hv(·) and tv(·), respectively. For the two IDs x_1 and x_2 of the task t = (u, v), the following equations hold: hv(x_1) = tv(x_2) = u, tv(x_1) = hv(x_2) = v, x_1 = inv(x_2), x_2 = inv(x_1), dem(x_1) = dem(x_2) = dem(t), sc(x_1) = sc(x_2) = sc(t), and dc(x_1) = dc(x_2) = dc(t). Given a task set T, there are 2|T| task IDs. In addition, a loop is constructed for each depot with a separate ID x_d^(D) (d = 1, ..., |V^(D)|), where hv(x_d^(D)) = tv(x_d^(D)) = v_d^(D), dem(x_d^(D)) = sc(x_d^(D)) = dc(x_d^(D)) = 0, and inv(x_d^(D)) = x_d^(D). Overall, 2|T| + |V^(D)| task IDs can be obtained for an MDCARP instance with the task set T.

A solution S to MDCARP is a set of routes, each starting and ending at one depot of V^(D). By grouping the routes based on their depots, we have S = {R_1, ..., R_{|V^(D)|}}, where the routes R_d = {R_{d,1}, ..., R_{d,|R_d|}} are associated with v_d^(D). The kth route of R_d is a sequence of task IDs, i.e., R_{d,k} = (x_{d,k,1}, x_{d,k,2}, ..., x_{d,k,|R_{d,k}|}), where |R_{d,k}| is the number of task IDs in R_{d,k}. Under the above representation, the mathematical model of MDCARP can be summarized as follows:

$$\min\; tc(S), \tag{1}$$

where

$$tc(S) = \sum_{d=1}^{|V^{(D)}|} cost(R_d), \tag{2}$$

$$cost(R_d) = \sum_{k=1}^{|R_d|} \sum_{i=1}^{|R_{d,k}|-1} \left[ sc(x_{d,k,i}) + \varphi\big(tv(x_{d,k,i}),\, hv(x_{d,k,i+1})\big) \right], \tag{3}$$

subject to

$$x_{d,k,1} = x_d^{(D)}, \quad \forall\, d = 1, \ldots, |V^{(D)}|,\; k = 1, \ldots, |R_d|, \tag{4}$$

$$x_{d,k,|R_{d,k}|} = x_d^{(D)}, \quad \forall\, d = 1, \ldots, |V^{(D)}|,\; k = 1, \ldots, |R_d|, \tag{5}$$

$$\sum_{d=1}^{|V^{(D)}|} \sum_{k=1}^{|R_d|} \big(|R_{d,k}| - 2\big) = |T|, \tag{6}$$

$$x_{d,k,i} \neq x_{d',k',i'}, \quad \forall\, d \neq d' \text{ or } k \neq k' \text{ or } i \neq i', \tag{7}$$

$$x_{d,k,i} \neq inv(x_{d',k',i'}), \quad \forall\, d \neq d' \text{ or } k \neq k' \text{ or } i \neq i', \tag{8}$$

$$\sum_{i=1}^{|R_{d,k}|} dem(x_{d,k,i}) \leq Q, \quad \forall\, d = 1, \ldots, |V^{(D)}|,\; 1 \leq k \leq |R_d|. \tag{9}$$
Here, (1) is the objective, which minimizes the total cost tc(S) calculated by (2). Eq. (3) gives the cost of serving route R_{d,k}, where φ(u, v) denotes the deadheading cost of traversing the shortest path from u to v, which can be pre-calculated by Dijkstra's algorithm [57]. Constraints (4) and (5) ensure that each route starts and ends at the same depot. Constraints (6), (7), and (8) ensure that each task of T is served exactly once. The capacity constraint is guaranteed by (9).
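To make the model concrete, the following C++ sketch evaluates the objective (1)–(3) for a candidate solution. This is our minimal illustration rather than code from the paper: the names TaskID, PhiMatrix, routeCost, and totalCost are assumptions, and phi holds the pre-computed shortest-path deadheading costs between vertices.

```cpp
#include <vector>

// One direction of a task (required edge): head/tail vertices,
// service cost, and demand (names are illustrative).
struct TaskID { int hv, tv; double sc, dem; };

// phi[u][v]: deadheading cost of the shortest path from u to v,
// pre-computed by Dijkstra's algorithm as described in the text.
using PhiMatrix = std::vector<std::vector<double>>;

// Cost of one route R_{d,k}, cf. Eq. (3). The route is assumed to begin
// and end with the zero-cost depot-loop IDs, so the depot legs are
// covered by the same sum.
double routeCost(const std::vector<TaskID>& route, const PhiMatrix& phi) {
    double cost = 0.0;
    for (std::size_t i = 0; i + 1 < route.size(); ++i) {
        cost += route[i].sc;                        // serve the i-th ID
        cost += phi[route[i].tv][route[i + 1].hv];  // deadhead to the next
    }
    return cost;  // the last ID is a depot loop with sc = 0
}

// Total cost tc(S) over all depots, cf. Eqs. (1)-(2).
double totalCost(const std::vector<std::vector<std::vector<TaskID>>>& S,
                 const PhiMatrix& phi) {
    double tc = 0.0;
    for (const auto& depotRoutes : S)          // S_d: routes of depot d
        for (const auto& route : depotRoutes)
            tc += routeCost(route, phi);
    return tc;
}
```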
B. Related Work

Since its introduction by Golden and Wong [1] in 1981, CARP has received much research attention [11], [12], [14], [15], [16], [58], [59], [60]. Early works mainly focused on problem instances of small and medium sizes, e.g., test instances with no more than 190 tasks. In 2008, the larger EGL-G dataset, with up to 375 tasks, was generated by Brandão and Eglese [61]. Subsequently, a series of studies on solving large-scale CARP was conducted, such as [45], [46], [47], [48], [61], [62], [63], [64]. In particular, several novel divide-and-conquer strategies were proposed to address the scalability issue of large-scale CARP [45], [47], [64]. Tang et al. [47] addressed the scalability issue on two much larger datasets, Hefei and Beijing, in which the instance sizes are over 3000.

The divide-and-conquer strategy is very effective for solving large-scale problems. It decomposes a large-scale problem into a number of smaller sub-problems, and then tackles each sub-problem independently or cooperatively. In this light, the search space can be greatly reduced. For designing an effective divide-and-conquer approach, a key issue is to design an effective problem decomposition scheme that minimizes the interdependency between the sub-problems, which usually requires problem domain knowledge. To divide and conquer large-scale CARP, Mei et al. [45] proposed a decomposition scheme called Route Distance Grouping (RDG) and combined it
with MAENS [17]. The resultant algorithm, RDG-MAENS, is based on the cooperative co-evolution mechanism and was shown to be outstanding for large-scale CARP. In [47], Tang et al. proposed an approach to LSCARP called SAHiD, which is based on hierarchical decomposition. In both RDG-MAENS and SAHiD, the whole search process is divided into a number of iterations. At each iteration, the obtained routes are collected together directly [45] or after being randomly split into smaller ones [47]; the collected routes are then grouped for the next iteration. In [64], Zhang et al. found that grouping entire routes without considering their structures may still not achieve a sufficiently accurate decomposition. They proposed a Route Cutting-Off operator (RCO), which can distinguish the links between two tasks and cut off the poor ones.

In contrast with the conventional single-depot CARP, MDCARP has an additional important issue, which is to assign the tasks and routes to the depots. There have been a few studies on solving MDCARP. For example, Zhu et al. [25] proposed a Hybrid Genetic Algorithm (HGA) for MDCARP, in which the population represents a single solution and an individual represents a route. Kansou and Yassine [26] proposed a memetic algorithm for MDCARP, called MDMA, which allocates the tasks to the depots using a nearest-neighbour strategy in initialization; then, during the search process, the individuals undergo selection and crossover, and the newly generated children are improved by local search. Vidal et al. [65] applied a Unified Hybrid Genetic Search (UHGS) to CARP and its variants, e.g., the Mixed Capacitated General Routing Problem (MCGRP), the Periodic CARP (PCARP), the Min-Max k-Vehicles Windy Rural Postman Problem (MMkWRPP), and MDCARP. All three algorithms considered only small or medium-sized MDCARP instances, and did not take large-scale problems into account. In addition, plenty of work has investigated variants of MDCARP. In [27], a genetic local search algorithm was developed for an MDCARP in which the edges are served by a fleet of heterogeneous vehicles. In [29], a new MDCARP model was proposed that considers the maximal service time of the vehicles and the maximal trip length. In [28], an asymmetric MDCARP model was considered, and a Mixed Integer Linear Programming (MILP) model was presented for solving it. However, when the problem size grows, the existing approaches to MDCARP become less effective.

The existing divide-and-conquer approaches for single-depot CARP [45], [46], [47], [66] are not directly applicable to LSMDCARP, since they cannot handle the allocation of the tasks to the depots. To the best of our knowledge, the only existing divide-and-conquer approach specifically developed for LSMDCARP is the Route Clustering and Search Heuristic (RoCaSH) [30]. RoCaSH follows the dynamic decomposition procedure. In each cycle, the problem is first re-decomposed based on the best-so-far solution. Specifically, the whole MDCARP is decomposed into several sub-problems by assigning all the tasks to appropriate depots with a three-criteria clustering scheme; each sub-problem contains a depot and the tasks assigned to it. Then,
each sub-problem is solved separately by local search heuristics, such as the 2-opt local search, the route cutting-off operator [64], and Ulusoy's split operator [11]. Each sub-solution is a subset of routes starting and ending at the same depot; therefore, the sub-solutions can be directly merged to form the solution to the original problem. To verify the effectiveness of the algorithms on LSMDCARP, Zhang et al. [30] extended the EGL-G, Hefei, and Beijing CARP datasets to new LSMDCARP datasets named mdEGL-G, mdHefei, and mdBeijing. The experimental results showed that RoCaSH significantly outperformed the other algorithms for LSMDCARP in terms of both effectiveness and efficiency.

Besides LSCARP (e.g., [45], [46], [53], [64], [66]) and LSMDCARP (e.g., [30]), the divide-and-conquer strategy has also been successfully applied to other large-scale problems, e.g., [31], [32], [38], [50], [51], [52], [67]. These approaches all consist of a number of cycles. In each cycle, the problem is first decomposed into smaller sub-problems. Then, each sub-problem is optimized separately; when solving each sub-problem, the solution components of the other sub-problems are fixed. After all the sub-problems have been optimized, their sub-solutions are combined to form the solution to the original problem.

In summary, due to the complex interactions among the tasks and depots in the objective function and constraints of LSMDCARP, it is very challenging to identify an accurate problem decomposition scheme in the divide-and-conquer strategy. The existing studies follow the dynamic decomposition procedure, which consists of a number of cycles, each containing a re-decomposition stage and a local optimization stage. However, these two stages are strongly entangled with each other; thus, a poor initial decomposition can lead to a poor local optimization result, and vice versa. To address this issue, two new mechanisms are proposed in this paper. First, a new dynamic decomposition procedure is proposed, which contains an additional global optimization stage after the local optimization stage. This way, the re-decomposition is less dependent on the local optimization results and is more likely to be improved. To restrict the huge search space of the global optimization stage, we develop a new TMaS process based on our domain knowledge of LSMDCARP, which restricts the search to a small region of potentially promising solutions. Second, a new initialization scheme is proposed to produce a better initial decomposition.

III. The Proposed RoCaSH2
In this section, the newly proposed dynamic decomposition procedure is first described. This procedure is useful for any large-scale problem where an accurate decomposition is hard to identify. Then, the new RoCaSH2 algorithm, based on the new dynamic decomposition procedure, is presented.

A. The New Dynamic Decomposition Procedure
Figure 1 shows the outline of the newly proposed dynamic decomposition procedure. It starts by initializing the solution and the problem decomposition, e.g., dividing the problem into K sub-problems. In each cycle, it first conducts a global optimization to improve the solution(s); the global optimization allows all the decision variables to be changed. Then, it re-decomposes the problem based on the solution to the original problem updated by the global optimization. Finally, it solves each sub-problem by a local optimization method, such as local search or a genetic algorithm.
FIGURE 1 The newly proposed dynamic decomposition procedure.
The local optimization modifies only the values of the decision variables involved in the sub-problem, keeping all the other decision variables fixed. The main difference between the newly proposed dynamic decomposition procedure and the existing approaches is the additional global optimization stage. Most existing studies directly re-decompose the problem after obtaining the solution to the original problem, without fine-tuning it. On the contrary, the newly proposed procedure further improves the solution to the original problem before the re-decomposition, so that the subsequent re-decomposition becomes less dependent on the previous problem decomposition and local optimization results. This way, the decomposition can be improved more effectively in each cycle.
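The cycle of Figure 1 can be rendered schematically in C++ as follows. This is an illustrative skeleton of ours, not the paper's implementation: Solution and Decomposition are placeholders for problem-specific types, and the stages are supplied as callables.

```cpp
// Schematic rendering of the loop in Figure 1 (illustrative only).
template <typename Solution, typename Decomposition,
          typename GlobalOpt, typename Redecompose, typename LocalOpt>
Solution dynamicDecomposition(Solution s, Decomposition dec, int maxCycles,
                              GlobalOpt globalOpt, Redecompose redecompose,
                              LocalOpt localOpt) {
    for (int cycle = 0; cycle < maxCycles; ++cycle) {
        s = globalOpt(s);      // NEW stage: improve the whole solution first
        dec = redecompose(s);  // re-decompose based on the improved solution
        s = localOpt(s, dec);  // optimize each sub-problem, others kept fixed
    }
    return s;
}
```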
B. The RoCaSH2 Algorithm

Based on the new dynamic decomposition procedure, new divide-and-conquer approaches can be developed for LSMDCARP. The following design issues have been considered:
1) How to define the problem decomposition and the sub-problems for LSMDCARP;
2) How to initialize the solution(s) and problem decomposition to assist the subsequent search;
3) How to conduct the local optimization for each sub-problem;
4) How to conduct the global optimization.

Based on our domain knowledge and preliminary studies, these design issues are addressed as follows:
1) Considering the characteristics of MDCARP, each sub-problem is naturally defined as the subset of routes associated with a single depot. Thus, the problem decomposition naturally generates |V^(D)| sub-problems, where |V^(D)| is the number of depots; the dth sub-problem is associated with v_d^(D) ∈ V^(D).
2) A new initialization scheme is presented, which can obtain a better initial solution to large-scale problems in a short time. The initialization is described in Section III-C.
3) The state-of-the-art RoCaSH [30] algorithm for single-depot CARP is employed for the local optimization of each sub-problem.
4) A new TMaS process is developed for a restricted global optimization. The TMaS process can achieve a good balance between effectiveness and efficiency. It is described in Section III-D.

Taking all the above into account, the new RoCaSH2 algorithm is proposed. Algorithm 1 gives its pseudo code. First, a solution S = (S_1, ..., S_{|V^(D)|}) = Init(T, V^(D), κ) is initialized, where S_d represents the subset of routes associated with the depot v_d^(D). Naturally, each sub-problem is defined as a single-depot CARP serving the tasks in S_d with the depot v_d^(D).

Algorithm 1. The newly proposed RoCaSH2.

1: procedure RoCaSH2(the LSMDCARP instance with task set T and depot set V^(D), number of trials κ, maximum cycles MaxCyc, maximum cycles without improvement MaxNoImp, TMaS parameter N_R)
2:   Initialize S = (S_1, ..., S_{|V^(D)|}) = Init(T, V^(D), κ);
3:   cycle = 0, noimp = 0;
4:   while cycle ≤ MaxCyc and noimp ≤ MaxNoImp do
5:     cycle = cycle + 1;
6:     (S'_1, ..., S'_{|V^(D)|}) = TMaS(S_1, ..., S_{|V^(D)|}, N_R);
7:     (R_1, ..., R_{|V^(D)|}) = Redecompose(S'_1, ..., S'_{|V^(D)|});
8:     for d = 1 → |V^(D)| do
9:       S''_d = LocalOpt(R_d);
10:    end for
11:    if (S''_1, ..., S''_{|V^(D)|}) is better than S then
12:      S = (S''_1, ..., S''_{|V^(D)|}), noimp = 0;
13:    else
14:      noimp = noimp + 1;
15:    end if
16:  end while
17:  return the best feasible solution S;
18: end procedure
During the search, in each cycle, the TMaS-based global optimization is first carried out to further improve S and obtain an improved solution (S'_1, ..., S'_{|V^(D)|}). The details of TMaS are given in Section III-D. Then, the problem is re-decomposed (line 7) based on (S'_1, ..., S'_{|V^(D)|}), and
the new subsets of routes (R_1, ..., R_{|V^(D)|}) are obtained; each new sub-problem is determined by the new subset of routes associated with the corresponding depot. The re-decomposition stage is further described in Section III-E. Finally, local optimization is conducted on each sub-problem (lines 8–10) to obtain an improved solution (S''_1, ..., S''_{|V^(D)|}). The details of LocalOpt are given in Section III-F. The search process terminates when the maximum number of cycles MaxCyc is reached or when there have been MaxNoImp cycles without improvement.

Algorithm 2. The new initialization procedure.
1: procedure Init(the LSMDCARP instance with task set T, depot set V^(D), number of trials κ)
2:   Assign the tasks to the depots by the nearest-neighbour heuristic;
3:   for d = 1 → |V^(D)| do
4:     Set R_d = (x_d^(D), x_d^(D)), T_d = {the tasks allocated to v_d^(D)};
5:     while T_d ≠ ∅ do
6:       Calculate the best insertion cost of each task t ∈ T_d;
7:       Select the task t* with the minimal best insertion cost;
8:       Insert t* at its best position in R_d, remove t* from T_d;
9:     end while
10:    Split R_d using Ulusoy's splitting procedure to obtain S_d;
11:  end for
12:  Let S = (S_1, ..., S_{|V^(D)|});
13:  for k = 1 → κ − 1 do
14:    for d = 1 → |V^(D)| do
15:      Set R_d = (x_d^(D), x_d^(D));
16:    end for
17:    T' = T;
18:    while T' ≠ ∅ do
19:      Select a task t from T' randomly, and remove t from T';
20:      Assign t to the closest depot d*;
21:      Insert t at its best position in R_{d*};
22:    end while
23:    Apply Ulusoy's splitting procedure on each R_d (d = 1, ..., |V^(D)|), and obtain a candidate solution S' = (S'_1, ..., S'_{|V^(D)|});
24:    if S' is better than S then
25:      S = S';
26:    end if
27:  end for
28:  return S;
29: end procedure

C. Initialization
Algorithm 2 shows the pseudo code of the new initialization. First, a solution is generated in a greedy way (lines 2–12). Specifically, the tasks are allocated to the depots using the nearest-neighbour (NN) heuristic, i.e., each task is allocated to its closest depot. Then, for each depot, a giant route is generated by the best-insertion heuristic: starting with a depot loop, at each step the insertion cost of each task at each candidate position of the giant route (i.e., the difference in cost after inserting it) is calculated, and the task with the best insertion cost is inserted at its best position. Finally, the giant route is split into a set of feasible routes by Ulusoy's splitting procedure.
After the first greedy solution is generated, κ − 1 additional solutions are generated in a random way (lines 13–27). The main difference from the greedy generation is that, at each step of task insertion, the next task is selected randomly rather than by minimal insertion cost. Finally, the best of the κ generated initial solutions is returned. The framework of the initialization procedure is shown in Figure 2. The new initialization is inspired by that of MDMA [26], but is modified from a population initialization to a single-solution initialization. By quickly generating a number of different solutions in a relatively greedy way, RoCaSH2 can obtain a good initial solution efficiently.
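The greedy pass of Algorithm 2 (lines 2–12) can be sketched as follows, reusing the TaskID and PhiMatrix types from the earlier cost sketch. This is our illustration under simplifying assumptions: the direction choice for each task and the final Ulusoy split are omitted for brevity.

```cpp
#include <limits>
#include <vector>

// Build a giant route for one depot by repeated best insertion, as in
// Algorithm 2 lines 4-9. pending holds the tasks allocated to this depot
// (one fixed direction per task; trying inv(t) as well is omitted here).
std::vector<TaskID> buildGiantRoute(std::vector<TaskID> pending,
                                    const TaskID& depotLoop,
                                    const PhiMatrix& phi) {
    std::vector<TaskID> route = {depotLoop, depotLoop};  // empty depot loop
    while (!pending.empty()) {
        double bestDelta = std::numeric_limits<double>::max();
        std::size_t bestTask = 0, bestPos = 1;
        for (std::size_t t = 0; t < pending.size(); ++t) {
            for (std::size_t p = 1; p < route.size(); ++p) {
                // Insertion cost: added service + deadheading, minus the
                // deadheading of the link the insertion breaks.
                double delta = pending[t].sc
                             + phi[route[p - 1].tv][pending[t].hv]
                             + phi[pending[t].tv][route[p].hv]
                             - phi[route[p - 1].tv][route[p].hv];
                if (delta < bestDelta) {
                    bestDelta = delta; bestTask = t; bestPos = p;
                }
            }
        }
        route.insert(route.begin() + bestPos, pending[bestTask]);
        pending.erase(pending.begin() + bestTask);
    }
    return route;  // then split into feasible routes by Ulusoy's procedure
}
```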
Algorithm 3. The Task Moving among Sub-problems process.

1: procedure TMaS(a solution (S_1, ..., S_{|V^(D)|}), parameter N_R)
2:   Set the remaining routes R = {R | R ∈ S_1 ∪ ... ∪ S_{|V^(D)|}};
3:   for d = 1 → |V^(D)| do
4:     S'_d = ();
5:   end for
6:   N = {1, ..., |V^(D)|};
7:   while R ≠ ∅ do
8:     Randomly select k uniformly from N, and remove it from N;
9:     Remove all the routes in S_k from R;
10:    while S_k ≠ ∅ do
11:      Randomly select a route R* from S_k;
12:      Remove R* from S_k;
13:      Set C = {R*};
14:      if |R \ S_k| > N_R − 1 then
15:        Select the N_R − 1 closest routes to R* from R \ S_k, using the distance measure (10);
16:        Remove the selected routes from R;
17:        Add the selected routes to C;
18:      else
19:        Add all the routes in R \ S_k to C;
20:        Remove these added routes from R;
21:      end if
22:      Apply restricted local search to C;
23:      Add all the routes in C to S'_k;
24:      if R \ S_k = ∅ then
25:        Add all the routes in S_k to S'_k, i.e., S'_k = S'_k ∪ S_k;
26:        return (S'_1, ..., S'_{|V^(D)|});
27:      end if
28:    end while
29:  end while
30:  return (S'_1, ..., S'_{|V^(D)|});
31: end procedure

D. Task Moving Among Sub-Problems
Algorithm 3 describes the proposed TMaS process. It takes a solution (S_1, ..., S_{|V^(D)|}) and a parameter N_R, and returns a new solution (S'_1, ..., S'_{|V^(D)|}). Initially, all the routes of the new solution are set to empty (line 4). Then, the routes of different depots in (S_1, ..., S_{|V^(D)|}) are re-grouped as follows. In each iteration, a random depot v_k is selected (line 8), and each route in S_k is re-grouped with at most N_R − 1 other routes, which must belong to different depots (lines 10–28). Specifically, for each route R* ∈ S_k, if there are more than N_R − 1 remaining routes from a different depot, the N_R − 1 closest routes to R* are selected from those routes in terms of the route distance measure defined in [45].
FIGURE 2. The framework of the proposed initialization procedure.
Given two routes R_1 and R_2, the distance between them is calculated as

$$D(R_1, R_2) = \frac{\varphi(R_1, R_2)\,\varphi(R_1, R_2)}{\varphi(R_1, R_1)\,\varphi(R_2, R_2)}, \tag{10}$$

where

$$\varphi(R_1, R_2) = \frac{\sum_{x_1 \in R_1} \sum_{x_2 \in R_2} \varphi(x_1, x_2)}{|R_1|\,|R_2|}, \tag{11}$$

$$\varphi(x_1, x_2) = \frac{1}{4}\big(d(hv(x_1), hv(x_2)) + d(hv(x_1), tv(x_2)) + d(tv(x_1), hv(x_2)) + d(tv(x_1), tv(x_2))\big), \tag{12}$$

and d(v_1, v_2) indicates the deadheading cost of the shortest path from vertex v_1 to v_2 in the graph G, which can be pre-calculated by Dijkstra's algorithm.
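The distance measure can be transcribed directly from (10)–(12), as in the following C++ sketch; it reuses the TaskID and PhiMatrix types from the earlier cost sketch, with function names of our own choosing.

```cpp
#include <vector>

// Eq. (12): average deadheading distance over the four end-vertex pairs.
double endAvg(const TaskID& a, const TaskID& b, const PhiMatrix& d) {
    return 0.25 * (d[a.hv][b.hv] + d[a.hv][b.tv] +
                   d[a.tv][b.hv] + d[a.tv][b.tv]);
}

// Eq. (11): average pairwise task distance between two routes.
double avgDist(const std::vector<TaskID>& R1,
               const std::vector<TaskID>& R2, const PhiMatrix& d) {
    double sum = 0.0;
    for (const auto& x1 : R1)
        for (const auto& x2 : R2) sum += endAvg(x1, x2, d);
    return sum / (R1.size() * R2.size());
}

// Eq. (10): inter-route distance normalized by the self-distances.
double routeDistance(const std::vector<TaskID>& R1,
                     const std::vector<TaskID>& R2, const PhiMatrix& d) {
    double between = avgDist(R1, R2, d);
    return (between * between) / (avgDist(R1, R1, d) * avgDist(R2, R2, d));
}
```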
If there are not enough routes satisfying the condition, all the remaining routes from different depots are re-grouped together with R* (line 17). After re-grouping for each route R*, a restricted local search is applied to further improve the new group of routes (line 22). The restricted local search uses the common single-insertion (moving one task to another position) and swap operators. However, the neighbourhood can be huge if all the tasks of all the routes in the group are considered. To restrict the neighbourhood size, only the neighbours involving routes from different depots are considered. For example, a task may only be moved to a route belonging to a different depot, and only two tasks from routes belonging to different depots can be swapped. This way, the search can focus on solutions that tend to be neglected by the local optimization stage, which only considers the tasks within the routes belonging to the same depot. Figure 3 illustrates the process of dealing with a selected route from a sub-solution in TMaS. Finally, if there is no remaining route except the routes in S_k, all the remaining routes in S_k are simply added to S'_k. Overall, the TMaS process restricts the search space of the global optimization in two ways. First, when forming each new group C, it only considers routes belonging to depots different from that of the first route R*. Second, the local search only considers moves between different depots. Both restrictions are complementary to the search space of the local optimization; therefore, the global optimization can focus on the search space that is ignored by the local optimization.
FIGURE 3. Illustration of dealing with a selected route in the proposed TMaS.
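The inter-depot restriction described above can be sketched as a filter over route pairs within a group. This is a schematic fragment of ours, not the paper's code; move evaluation and acceptance are omitted, since they follow the usual CARP single-insertion and swap local search.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// depotOf[i] is the depot of the i-th route in the current group C.
// Only pairs of routes from different depots yield candidate moves,
// which keeps the neighbourhood small and complementary to the
// per-depot local optimization.
std::vector<std::pair<std::size_t, std::size_t>>
restrictedRoutePairs(const std::vector<int>& depotOf) {
    std::vector<std::pair<std::size_t, std::size_t>> pairs;
    for (std::size_t a = 0; a < depotOf.size(); ++a)
        for (std::size_t b = a + 1; b < depotOf.size(); ++b)
            if (depotOf[a] != depotOf[b])  // the inter-depot restriction
                pairs.emplace_back(a, b);  // examine moves only between these
    return pairs;
}
```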
Complexity of TMaS: Assuming there are at most N_R^max routes, the loop between lines 7 and 29 is expected to take N_R^max / N_R iterations (each iteration forms a group of N_R routes). In each iteration, the main complexity comes from the restricted local search (line 22) on the group of routes. If the tasks are roughly uniformly distributed among the routes, the number of tasks involved in the restricted local search is expected to be |T| N_R / N_R^max, so the expected complexity of the restricted local search is O(L (|T| N_R / N_R^max)^2), where L is the maximum number of local search steps to reach a local optimum. Overall, the expected complexity of TMaS is

$$O(\mathrm{TMaS}) = O\!\left(L \left(\frac{|T| N_R}{N_R^{\max}}\right)^2 \cdot \frac{N_R^{\max}}{N_R}\right) = O\!\left(\frac{L\,|T|^2 N_R}{N_R^{\max}}\right).$$

Note that in practice the complexity is usually much smaller, since many of the (|T| N_R / N_R^max)^2 moves are not between different depots and can thus be skipped.
E. Re-Decomposition

After the TMaS-based global optimization, the problem is re-decomposed and a new solution is generated. RoCaSH2 directly adopts the re-decomposition of RoCaSH [30], which is briefly described in Algorithm 4. It takes a solution (S_1, ..., S_{|V^(D)|}) and returns a new solution (S'_1, ..., S'_{|V^(D)|}). First, it cuts each route into multiple sub-routes using the route cutting-off operator [64]. Then, it re-clusters all the sub-routes using the three-criteria route clustering procedure [30]. Finally, it returns the re-clustered sub-routes. Accordingly, the dth sub-problem is to solve the single-depot CARP with the tasks in S'_d and the depot v_d^(D). Due to the space limit, the details of the route cutting-off operator and the three-criteria route clustering procedure are omitted; they can be found in [30], [64].

Algorithm 4. The RoCaSH2 re-decomposition stage.
1: procedure Redecompose(a solution (S_1, ..., S_{|V^(D)|}))
2:   for d = 1 → |V^(D)| do
3:     S'_d = ();
4:     for each route R in S_d do
5:       Cut R using the route cutting-off operator with the two parameters given in Table I [64], and add the obtained sub-routes to the end of S'_d;
6:     end for
7:   end for
8:   Apply the three-criteria route clustering procedure [30] to the routes in S'_1, ..., S'_{|V^(D)|} to re-cluster them;
9:   return (S'_1, ..., S'_{|V^(D)|});
10: end procedure
F. Local Optimization

After the re-decomposition stage, each sub-problem is solved by a local optimization process. In RoCaSH2, the local optimization method used by RoCaSH is directly adopted. Specifically, given a set of routes S_d starting and ending at the same depot v_d^(D), the local optimization first concatenates all the routes to obtain a merged route, in which the depot loops are ignored. Then, it applies local search with the 2-opt operator to the merged route to improve its total cost; note that the capacity constraint is temporarily ignored at this step. Once the total cost can no longer be improved, the updated merged route is split into a set of feasible routes S'_d with respect to the capacity constraint. This is done by Ulusoy's splitting procedure [11], which guarantees the optimal set of feasible routes for the given task order. Finally, the set of routes is further improved by another local search with the single-insertion, double-insertion, and swap operators. This local search uses different operators from the one applied to the merged route, to reduce the chance of getting stuck in a local optimum. The local optimization stage is briefly described in Algorithm 5.

Algorithm 5. The local optimization stage.
1: procedure LocalOpt(a set of routes S_d associated with the depot v_d^(D))
2:   Build a merged route by concatenating all the routes R ∈ S_d in order;
3:   Apply the local search with the 2-opt operator to the merged route until the total cost cannot be improved;
4:   Split the merged route using Ulusoy's splitting procedure [11] to obtain the set of routes S'_d;
5:   Apply another local search with the single-insertion, double-insertion, and swap operators to S'_d until no better feasible solution can be found;
6:   return S'_d;
7: end procedure
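The splitting step in line 4 can be sketched as a dynamic program over split positions, reusing the TaskID and PhiMatrix types from the earlier sketches. This is a minimal illustration under stated assumptions: the giant route excludes depot loops, and the per-task direction choice of the full procedure [11] is omitted.

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// best[i]: minimal cost of serving the first i tasks of the merged route
// with capacity-feasible routes. Overall O(n^2) for an n-task route.
double ulusoySplit(const std::vector<TaskID>& tasks, int depot,
                   double capacity, const PhiMatrix& phi) {
    const std::size_t n = tasks.size();
    const double INF = std::numeric_limits<double>::max();
    std::vector<double> best(n + 1, INF);
    best[0] = 0.0;
    for (std::size_t i = 0; i < n; ++i) {       // a route starting at task i
        if (best[i] == INF) continue;
        double load = 0.0;
        double cost = phi[depot][tasks[i].hv];  // deadhead from the depot
        for (std::size_t j = i; j < n; ++j) {   // extend the route to task j
            load += tasks[j].dem;
            if (load > capacity) break;         // capacity constraint (9)
            cost += tasks[j].sc;
            double closed = cost + phi[tasks[j].tv][depot];  // return leg
            best[j + 1] = std::min(best[j + 1], best[i] + closed);
            if (j + 1 < n) cost += phi[tasks[j].tv][tasks[j + 1].hv];
        }
    }
    return best[n];  // the routes themselves are recoverable by also
}                    // recording the predecessor of each position
```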
G. Main Difference Between RoCaSH and RoCaSH2

In summary, RoCaSH2 differs from RoCaSH [30] in several aspects. First, RoCaSH solves LSMDCARP by decomposing it into several sub-problems and tackling them separately. Since the re-decomposition and local optimization stages are heavily interdependent, a poor decomposition at some cycle is likely to lead to poor local optimization results, and vice versa, which hinders the performance of RoCaSH. To address this issue, RoCaSH2 adds a global optimization stage, i.e., the new TMaS process, after the local optimization of all the sub-problems. Second, the initialization phase in RoCaSH starts with the complex three-criteria clustering scheme to assign the tasks to the depots. To simplify the task clustering, RoCaSH2 adopts the new initialization procedure, which uses the nearest-neighbour heuristic to directly assign the tasks to appropriate depots based on their distances. This efficient initialization leaves RoCaSH2 more time for the subsequent search. Furthermore, the multiple initialization trials can obtain better initial solutions from a wider search space.
IV. Experimental Studies
To verify the effectiveness of the proposed RoCaSH2, it is compared with a number of state-of-the-art algorithms for MDCARP: the Hybrid Genetic Algorithm (HGA) [25], the Multi-Depot Memetic Algorithm (MDMA) [26], the Unified Hybrid Genetic Search (UHGS) [65], and RoCaSH [30].
TABLE I The parameter settings of RoCaSH2.

PARAMETER | DESCRIPTION | VALUE
MaxCyc | Maximum number of cycles | 5000
MaxNoImp | Maximum number of cycles without improvement | 1000
κ | Number of trials in Algorithm 2 | 20
N_R | Number of routes for selection in Algorithm 3 | 4
(·, u) | Two parameters used in Algorithm 4 | (0.1, 0.5)
A. Datasets
Three LSMDCARP datasets¹ are used in our experiments: mdEGL-G, mdHefei, and mdBeijing. They were generated from the single-depot EGL-G, Hefei, and Beijing CARP datasets by adding new depots [30]. The mdEGL-G dataset consists of 10 instances, whose numbers of tasks are between 347 and 375. The mdHefei and mdBeijing datasets are much larger, containing up to thousands of tasks, e.g., 1212 for mdHefei and 3584 for mdBeijing. For each MDCARP instance, the number of depots is set based on the minimal number of required vehicles for that instance, which equals the total demand of all the tasks divided by the capacity. Specifically, the mdEGL-G instances have 8 depots, and the mdHefei and mdBeijing instances have 5 depots. For each instance, the first depot is set to the first vertex in V. The other depots are set to the vertices with the indices ⌊k|V|/(m−1)⌋, where m is the number of depots and k = 1, ..., m−1.
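For concreteness, the depot placement rule can be computed as follows (a small sketch of ours, assuming 1-based vertex indexing). For mdBeijing, with |V| = 2820 and m = 5, it places the extra depots at vertices 705, 1410, 2115, and 2820.

```cpp
#include <vector>

// First depot: the first vertex. Remaining m-1 depots: the vertices
// floor(k * |V| / (m - 1)) for k = 1, ..., m-1, as described in the text.
std::vector<int> depotVertices(int numVertices, int m) {
    std::vector<int> depots = {1};
    for (int k = 1; k <= m - 1; ++k)
        depots.push_back(k * numVertices / (m - 1));  // integer division = floor
    return depots;
}
```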
B. Parameter Settings

Table I shows the parameter settings of RoCaSH2. The maximum number of cycles is set the same as for RoCaSH in [30], and our preliminary studies show that RoCaSH2 converges most of the time. The number of trials κ in the initialization is set to 20, following the suggestion in MDMA [26]. N_R is an important parameter in TMaS that determines the number of routes from different depots to be grouped together: if N_R is too large, the more thorough global optimization takes much more computational resources; if it is too small, the interactions between the sub-problems are not sufficiently considered. Based on previous studies [68], three reasonable N_R values {3, 4, 5} were considered, and the preliminary results showed that N_R = 4 is better than the other two values. Therefore, N_R is set to 4 in the following experiments. RoCaSH2 has two extra parameters used in the route cutting-off operator; as this operator is directly adopted from RoCaSH, they are set to the same values as in [30], i.e., 0.1 and 0.5, respectively.
¹The datasets can be downloaded from https://meiyi1986.github.io/files/data/mdcarp.zip.
The parameters of the compared algorithms follow the settings in [30] for HGA, MDMA, and RoCaSH, and in [65] for UHGS. For the relatively smaller mdEGL-G dataset, HGA stops after 100,000 iterations, while MDMA, RoCaSH, and RoCaSH2 stop after 5000 cycles [30]. For the large mdHefei and mdBeijing datasets, the algorithms become too slow with their original numbers of iterations; thus, their stopping criterion is changed to a maximal runtime. Specifically, for each algorithm, the maximal runtime on the mdHefei and mdBeijing instances is set proportional to the problem size [48], i.e., 0.2|T| seconds. RoCaSH2 was implemented in C++. All the compared algorithms were run on an Intel Core i5-7500 CPU at 3.4 GHz. For each instance, each algorithm was run 30 times independently.

C. Results and Discussions
Tables II–IV show the average performance of the compared algorithms on the three datasets. Note that UHGS can handle at most 6 depots and thus cannot solve the mdEGL-G instances, which have 8 depots; the results of UHGS are therefore listed only for the mdHefei and mdBeijing datasets. In each table, the columns "|V|", "|E|", "|T|", and "Q" represent the number of vertices, the number of edges, the number of tasks, and the vehicle capacity, respectively. For each algorithm, the columns "Average", "Std", and "Time" indicate the average and standard deviation of the total cost of the obtained solutions and the average runtime (in seconds) over the 30 independent runs. For each instance, the minimal average total cost among the compared algorithms is marked in boldface. In addition, the Wilcoxon rank sum test with a significance level of 0.05 is conducted to statistically compare RoCaSH2 with each compared algorithm: for each instance, if RoCaSH2 performed significantly better (worse) than a compared algorithm, the average total cost of that algorithm is marked with "−" ("+"). The bottom of each table provides a summary of the comparison: the "Mean" row gives the mean values of the corresponding columns over all the instances in the dataset, and the "B-C-W" row under each compared algorithm indicates the number of instances on which RoCaSH2 performed significantly better than ("B"), statistically comparable with ("C"), and significantly worse than ("W") the corresponding algorithm.

From Table II, one can see that RoCaSH2 significantly outperformed all the other algorithms on all 10 instances of the mdEGL-G dataset and achieved a much lower mean total cost, i.e., 941589.58 for RoCaSH2 versus 983776.54 for RoCaSH, 1002769.33 for MDMA, and 1158248.91 for HGA. This clearly shows the excellent performance of RoCaSH2 on the mdEGL-G dataset. In terms of runtime, RoCaSH2 was slightly slower than MDMA and RoCaSH.

Figure 4 shows the convergence curves of the compared algorithms on four representative mdEGL-G instances, i.e., mdEGL-G1-E, mdEGL-G1-A, mdEGL-G2-A, and mdEGL-G2-E. The x-axis is cut at the minimal runtime of all the compared algorithms to make a fair comparison.
TABLE II The average performance over 30 independent runs of the compared algorithms on the mdEGL-G dataset. For each instance, the minimal mean total cost is the unmarked (RoCaSH2) entry. Under the Wilcoxon rank sum test with a significance level of 0.05, an algorithm is marked with "−" ("+") if it is significantly worse (better) than RoCaSH2. Each cell gives Average (Std) Time.

NAME | |V| | |E| | |T| | Q | HGA | MDMA | RoCaSH | RoCaSH2
mdEGL-G1-A | 255 | 375 | 347 | 28600 | 1013300.60− (10598.24) 19.22 | 862162.17− (12922.65) 36.60 | 856936.10− (13843.36) 26.21 | 797823.90 (9927.53) 34.47
mdEGL-G1-B | 255 | 375 | 347 | 22800 | 1031053.63− (8599.66) 16.82 | 911432.97− (13429.38) 38.22 | 899692.83− (14904.57) 32.76 | 843978.23 (7857.65) 39.18
mdEGL-G1-C | 255 | 375 | 347 | 19000 | 1110681.67− (10082.94) 22.31 | 973987.10− (15130.05) 41.21 | 940262.13− (10970.78) 40.60 | 901623.03 (8366.75) 44.60
mdEGL-G1-D | 255 | 375 | 347 | 16200 | 1196695.33− (7702.92) 20.52 | 1021681.67− (14620.42) 43.44 | 994569.43− (12872.59) 43.83 | 966984.73 (9138.89) 51.55
mdEGL-G1-E | 255 | 375 | 347 | 14100 | 1251558.87− (6947.65) 20.17 | 1089379.57− (22695.62) 43.25 | 1069231.10− (11383.86) 44.50 | 1034773.93 (10831.06) 55.96
mdEGL-G2-A | 255 | 375 | 375 | 28000 | 1078531.93− (8772.57) 16.74 | 928218.00− (13292.86) 41.94 | 921700.10− (14566.88) 28.65 | 862302.73 (6673.30) 37.63
mdEGL-G2-B | 255 | 375 | 375 | 23100 | 1130622.33− (6482.25) 19.43 | 974024.00− (13643.74) 42.57 | 956795.17− (11448.57) 34.23 | 908601.20 (8862.80) 42.63
mdEGL-G2-C | 255 | 375 | 375 | 19400 | 1172994.30− (10274.47) 21.15 | 1029920.87− (15967.06) 45.67 | 1004728.10− (11516.97) 42.67 | 970896.40 (7093.61) 49.07
mdEGL-G2-D | 255 | 375 | 375 | 16700 | 1250813.77− (9018.68) 20.97 | 1083767.30− (16528.80) 45.69 | 1065950.20− (12999.64) 42.39 | 1027643.33 (10884.34) 54.74
mdEGL-G2-E | 255 | 375 | 375 | 14700 | 1346460.30− (6017.33) 22.70 | 1153119.63− (15421.05) 46.81 | 1127900.27− (13006.33) 48.98 | 1101268.30 (9355.49) 58.57
Mean | | | | | 1158248.91, 20.00 | 1002769.33, 42.54 | 983776.54, 38.48 | 941589.58, 46.84
B-C-W | | | | | 10-0-0 | 10-0-0 | 10-0-0 |
TABLE III The average performance over 30 independent runs of the compared algorithms on the mdHefei dataset. For each instance, the minimal mean total cost is the best entry in its row. Under the Wilcoxon rank sum test with a significance level of 0.05, an algorithm is marked with "−" ("+") if it is significantly worse (better) than RoCaSH2. Each cell gives Average (Std).

NAME | |V| | |E| | |T| | Q | HGA | MDMA | UHGS | RoCaSH | RoCaSH2
mdHefei-1 | 850 | 1212 | 121 | 9000 | 203948.07− (3131.22) | 213924.57− (1430.23) | 199504.43 (7180.13) | 197212.77 (930.41) | 197163.40 (1174.11)
mdHefei-2 | 850 | 1212 | 242 | 9000 | 367128.23− (6932.00) | 361173.17− (3122.22) | 337258.93+ (6447.64) | 343753.17− (3055.02) | 341564.37 (3092.73)
mdHefei-3 | 850 | 1212 | 364 | 9000 | 494900.50− (10172.96) | 476446.03− (3945.95) | 439926.30+ (7981.22) | 453233.90− (2191.37) | 450729.70 (3724.91)
mdHefei-4 | 850 | 1212 | 485 | 9000 | 655931.27− (15893.29) | 600753.80− (6830.45) | 556329.87+ (10827.51) | 581350.40− (3110.64) | 570733.60 (3362.23)
mdHefei-5 | 850 | 1212 | 606 | 9000 | 828839.77− (17028.25) | 725769.93− (7536.15) | 688032.63 (16707.15) | 700613.17− (3992.64) | 689826.23 (4265.28)
mdHefei-6 | 850 | 1212 | 727 | 9000 | 992164.40− (25607.82) | 856229.00− (10351.70) | 806942.93 (15375.79) | 816784.20− (5483.50) | 811526.80 (3872.75)
mdHefei-7 | 850 | 1212 | 848 | 9000 | 1161062.80− (24107.72) | 991237.90− (8885.14) | 968303.53− (21183.46) | 957752.03− (5101.69) | 940167.50 (4506.47)
mdHefei-8 | 850 | 1212 | 970 | 9000 | 1334204.30− (28438.79) | 1111462.90− (10623.02) | 1098585.07− (25668.51) | 1069352.30− (6412.04) | 1048765.50 (7328.23)
mdHefei-9 | 850 | 1212 | 1091 | 9000 | 1499142.77− (29525.45) | 1233504.93− (11079.67) | 1249425.50− (28119.93) | 1191800.50− (7315.59) | 1170442.73 (7926.50)
mdHefei-10 | 850 | 1212 | 1212 | 9000 | 1629957.80− (30778.06) | 1344466.73− (11601.35) | 1356206.93− (31212.83) | 1295919.43− (8238.76) | 1271809.93 (6855.08)
Mean | | | | | 916727.99 | 791496.90 | 770051.61 | 760777.19 | 749272.98
B-C-W | | | | | 10-0-0 | 10-0-0 | 4-3-3 | 9-1-0 |
One can see that RoCaSH2 converges at a faster speed, and the corresponding curves are consistently beneath those of the other three algorithms on all the instances. This shows that RoCaSH2 can still achieve much better solutions than the other algorithms given the same runtime.

Tables III and IV show the results on the large mdHefei and mdBeijing datasets. Note that all the algorithms are given the same runtime of 0.2|T| seconds on these datasets; thus, the "Time" column is omitted. Table III shows that RoCaSH2 achieved significantly better results than HGA, MDMA, and RoCaSH almost all the time. The only exception was mdHefei-1, where there is no statistical significance between RoCaSH and RoCaSH2. For UHGS, the results are better than those of RoCaSH2 on the relatively smaller instances, such as mdHefei-2 to mdHefei-4,
and not statistically different on mdHefei-1, mdHefei-5, and mdHefei-6. However, RoCaSH2 outperformed UHGS on all the larger instances with more than 800 tasks. The overall mean total cost of RoCaSH2 was also much smaller than those of the other algorithms, e.g., 749 K for RoCaSH2 vs. 917 K for HGA, 791 K for MDMA, 770 K for UHGS, and 761 K for RoCaSH. The advantage of RoCaSH2 on the large-scale instances with over 800 tasks is obvious.

Figure 5 shows the convergence curves of the compared algorithms on the mdHefei-1, mdHefei-4, mdHefei-7, and mdHefei-10 instances. These are the smallest, medium, and largest mdHefei instances.
TABLE IV The average performance over 30 independent runs of the compared algorithms on the mdBeijing dataset. For each instance, the minimal mean total cost is the best entry in its row. Under the Wilcoxon rank sum test with a significance level of 0.05, an algorithm is marked with "−" ("+") if it is significantly worse (better) than RoCaSH2. Each cell gives Average (Std).

NAME | |V| | |E| | |T| | Q | HGA | MDMA | UHGS | RoCaSH | RoCaSH2
mdBeijing-1 | 2820 | 3584 | 358 | 25000 | 719234.97− (11886.66) | 686975.27− (2276.83) | 700976.03− (16440.32) | 670299.77− (2498.79) | 668445.23 (2591.43)
mdBeijing-2 | 2820 | 3584 | 717 | 25000 | 1150250.27− (24800.00) | 991639.77− (5983.89) | 1067059.50− (34534.73) | 969526.90− (3987.90) | 967421.00 (2069.03)
mdBeijing-3 | 2820 | 3584 | 1075 | 25000 | 1617555.90− (52678.27) | 1291706.77− (7998.94) | 1428907.13− (51311.48) | 1262995.90− (5536.61) | 1255489.40 (5982.75)
mdBeijing-4 | 2820 | 3584 | 1434 | 25000 | 2094669.57− (57565.46) | 1532919.13− (8790.30) | 1762055.47− (42994.03) | 1501499.47− (6308.62) | 1483625.37 (6238.16)
mdBeijing-5 | 2820 | 3584 | 1792 | 25000 | 2509434.83− (84950.27) | 1765292.37− (12919.29) | 2091122.90− (41928.83) | 1730308.47− (8791.86) | 1713324.33 (8715.03)
mdBeijing-6 | 2820 | 3584 | 2151 | 25000 | 3015946.30− (77773.91) | 2037539.30− (12574.15) | 2463738.90− (69502.68) | 2006983.97− (10079.70) | 1971272.43 (6149.36)
mdBeijing-7 | 2820 | 3584 | 2509 | 25000 | 3478082.53− (84965.25) | 2241324.77− (16682.94) | 2769506.53− (46033.13) | 2198449.20− (12032.60) | 2161050.13 (8318.29)
mdBeijing-8 | 2820 | 3584 | 2868 | 25000 | 3870752.67− (114643.10) | 2411349.87− (15910.58) | 3050471.73− (52036.64) | 2376049.93− (12630.50) | 2332161.03 (11504.30)
mdBeijing-9 | 2820 | 3584 | 3226 | 25000 | 4359410.80− (137537.60) | 2660257.63− (16518.70) | 3360111.40− (77995.75) | 2614046.63− (11341.60) | 2562072.23 (12059.96)
mdBeijing-10 | 2820 | 3584 | 3584 | 25000 | 4769283.93− (144797.70) | 2856373.80− (16984.42) | 3622938.57− (83524.41) | 2800199.50− (12662.80) | 2745051.63 (13751.34)
Mean | | | | | 2758462.18 | 1847537.87 | 2231688.82 | 1813035.97 | 1785991.28
B-C-W | | | | | 10-0-0 | 10-0-0 | 10-0-0 | 10-0-0 |
FIGURE 4. Convergence curves on the mdEGL-G1-A, mdEGL-G1-E, mdEGL-G2-A, and mdEGL-G2-E instances.
FIGURE 5. Convergence curves on the mdHefei-1, mdHefei-4, mdHefei-7, and mdHefei-10 instances.
As HGA performed much worse than the other algorithms (as shown in Table III), and the curves of UHGS start from a much worse point and a later time due to its poor initialization, it is difficult to plot their curves together with the others; therefore, the curves of HGA and UHGS are omitted in the figure. From Figure 5, it can be seen that both RoCaSH and RoCaSH2 significantly outperformed MDMA. This demonstrates the effectiveness of the divide-and-conquer strategy,
which is used by both RoCaSH and RoCaSH2. On mdHefei-1, RoCaSH achieved a much better initial solution than RoCaSH2. However, RoCaSH2 quickly caught up and achieved almost the same final performance as that of RoCaSH. On mdHefei-4, mdHefei-7, and mdHefei-10, all the algorithms
FIGURE 6. Convergence curves on the mdBeijing-1, mdBeijing-4, mdBeijing-7, and mdBeijing-10 instances.
started from similar initial solutions. RoCaSH2 converged much faster than MDMA and RoCaSH, and its convergence curve was always much lower than those of the others.

Table IV clearly shows the advantage of RoCaSH2 over all the other algorithms on the mdBeijing dataset: it significantly outperformed all the other four algorithms on all 10 instances, and its overall mean total cost was much lower than the others'. Figure 6 shows the convergence curves of the compared algorithms on the smallest (mdBeijing-1), largest (mdBeijing-10), and medium (mdBeijing-4 and mdBeijing-7) instances. It can be seen that for the smallest mdBeijing-1 instance, RoCaSH2 was slightly better than RoCaSH, and both performed much better than MDMA. RoCaSH and RoCaSH2 seemed not to have converged within the given time budget (about 70 seconds), so there is still potential for these two algorithms to improve further given more running time; from the trend shown in the figure, RoCaSH2 still tends to perform better than RoCaSH if more time is given. For the larger instances, i.e., mdBeijing-4, mdBeijing-7, and mdBeijing-10, RoCaSH2 achieved much better performance than RoCaSH and MDMA, and its convergence curves are much lower than those of the other two algorithms. This suggests that the initialization of RoCaSH is time-consuming and ineffective on large instances, whereas RoCaSH2 obtained a much better initial point. This partly demonstrates the effectiveness of the new initialization in RoCaSH2.

In summary, the experimental results on the mdEGL-G, mdHefei, and mdBeijing datasets clearly show that RoCaSH2 can achieve much better results than the state-of-the-art HGA, MDMA, UHGS, and RoCaSH for LSMDCARP, especially when the time budget is very tight. In particular, the comparison between RoCaSH2 and RoCaSH demonstrates the effectiveness of the newly developed initialization and the TMaS-based global optimization stage in RoCaSH2.
D. Scalability

To further investigate the scalability of RoCaSH2, Figure 7 plots the relationship between the advantage of RoCaSH2 over the compared algorithms and the problem size. The x-axis is the number of tasks, and the y-axis is the ratio of the average total cost of RoCaSH2 to that of the compared algorithm. Each point represents the ratio between RoCaSH2 and a compared algorithm on one instance: if the ratio is smaller than 1, RoCaSH2 is better than the compared algorithm; otherwise, it is worse. A smaller ratio indicates a larger advantage of RoCaSH2 over the compared algorithm. From Figure 7, one can see that the points are almost always below 1, especially when the number of tasks exceeds 800. This indicates that RoCaSH2 almost always performed better than the compared algorithms, especially when the problem size is large. As the problem size increases, the ratio versus HGA and UHGS decreases rapidly, indicating that the advantage of RoCaSH2 over HGA and UHGS becomes more obvious as the problem size grows. For MDMA and RoCaSH, the ratio is more consistent across different problem sizes, but always below 1, suggesting that RoCaSH2 has a relatively consistent advantage over MDMA and RoCaSH regardless of the problem size.

FIGURE 7. The ratio between the mean total costs of RoCaSH2 and other algorithms versus the number of tasks on six datasets.

E. Component Analysis

RoCaSH2 has two new components. The first is the additional TMaS-based restricted global optimization stage in the dynamic decomposition process. The second is the newly developed initialization scheme. To verify the effectiveness of each component separately, a new version of RoCaSH, called RoCaSH*, is designed. RoCaSH* uses the new initialization like RoCaSH2, but has no global optimization stage. Table V shows the differences among RoCaSH, RoCaSH*, and RoCaSH2.
TABLE V The characteristics of RoCaSH, RoCaSH*, and RoCaSH2.

ALGORITHM | NEW INITIALIZATION? | TMaS GLOBAL OPTIMIZATION?
RoCaSH | |
RoCaSH* | ✓ |
RoCaSH2 | ✓ | ✓
TABLE VI The pairwise B-C-W results using the Wilcoxon rank sum test with a significance level of 0.05 among RoCaSH, RoCaSH*, and RoCaSH2.

B-C-W | RoCaSH | RoCaSH* | RoCaSH2
RoCaSH | — | — | —
RoCaSH* | 23-3-4 | — | —
RoCaSH2 | 29-1-0 | 30-0-0 | —
TABLE VII The B-C-W results using the Wilcoxon rank sum test with a significance level of 0.05 between RoCaSH2 and RoCaSH*.

NAME | B | C | W
mdEGL-G | 10 | 0 | 0
mdHefei | 10 | 0 | 0
mdBeijing | 10 | 0 | 0
Overall | 30 | 0 | 0
This way, the effectiveness of the new initialization can be verified by comparing RoCaSH with RoCaSH*, and the effectiveness of the TMaS-based global optimization can be verified by comparing RoCaSH2 with RoCaSH*. RoCaSH* was run on all the test instances with the same parameter settings as RoCaSH. Then, for each instance, the Wilcoxon rank sum test with a significance level of 0.05 was conducted for each pair of RoCaSH, RoCaSH*, and RoCaSH2.

Table VI shows the B-C-W results between the row algorithm and the column algorithm over all 30 LSMDCARP instances, where only the lower triangle was calculated. For example, 29-1-0 in row 3 and column 1 indicates that RoCaSH2 performed significantly better than RoCaSH on 29 of the 30 instances, and no worse than RoCaSH on any instance. From Table VI, one can see that RoCaSH* significantly outperformed RoCaSH on 23 of the 30 instances, and was worse on only four. This demonstrates the effectiveness of the new initialization scheme proposed in this paper. RoCaSH2 showed the best performance among the three algorithms: it significantly outperformed RoCaSH* (RoCaSH) on 30 (29) of the 30 instances, and never obtained a significantly worse result.

Table VII shows the B-C-W results between RoCaSH2 and RoCaSH* on each dataset separately. It can be seen that RoCaSH2 showed significantly better results than RoCaSH* on all the mdEGL-G, mdHefei, and mdBeijing instances. This particularly verifies the effectiveness of the TMaS-based global optimization in RoCaSH2 on solving large-scale problems.

V. Conclusions and Future Work
This paper aims to address an issue of the existing divide-and-conquer approaches for LSMDCARP: an inaccurate problem decomposition can lead to poor local optimization results on the sub-problems, which in turn deteriorates the subsequent re-decomposition. This has been achieved by proposing a new dynamic decomposition procedure and a new initialization scheme. The new dynamic decomposition procedure introduces a new global optimization stage before the re-decomposition, which reduces the inter-dependency between the problem decomposition and the local optimization results and improves the decomposition more effectively. The new initialization generates a better initial solution and problem decomposition to enhance the subsequent search. Specifically for LSMDCARP, this paper develops a Task Moving among Sub-problems (TMaS)-based restricted global optimization, which achieves a good balance between effectiveness and efficiency. The state-of-the-art RoCaSH [30] algorithm is also adopted for the problem re-decomposition and local optimization. Putting all the above together, a new algorithm named RoCaSH2 is proposed. The experimental studies show that RoCaSH2 can achieve much better results and converge much faster than the current state-of-the-art algorithms on a wide range of LSMDCARP test instances.

There are a few future directions that can be considered. One direction is to conduct more valuable movements of the tasks in promising regions inherited from the sub-problems and to reduce redundant movements, so that the efficiency of TMaS can be further improved. Another direction is to consider more practical and complicated variants of CARP, such as the multi-depot periodic CARP (MDPCARP), which involves both allocating tasks to depots and deciding on which days of the period to serve each task.

Acknowledgment

This work was supported in part by the Anhui Provincial Natural Science Foundation under Grants 1808085MF173 and 1908085MF195, in part by the Natural Science Key Research Project for Higher Education Institutions of Anhui Province under Grant KJ2021A0640, and in part by the High-level Personnel Starting Project of Nanjing Xiaozhuang University under Grant 4172322.

References
[1] B. Golden and R. Wong, “Capacitated arc routing problems,” Networks, vol. 11, no. 3, pp. 305–316, 1981.
[2] M. Dror, Arc Routing: Theory, Solutions and Applications. Boston, MA, USA: Kluwer Academic Publishers, 2000.
[3] H. Handa, L. Chapman, and X. Yao, “Robust route optimization for gritting/salting trucks: A CERCIA experience,” IEEE Comput. Intell. Mag., vol. 1, no. 1, pp. 6–9, Feb. 2006.
[4] M. Polacek, K. Doerner, R. Hartl, and V. Maniezzo, “A variable neighborhood search for the capacitated arc routing problem with intermediate facilities,” J. Heuristics, vol. 14, no. 5, pp. 405–423, 2008.
[5] G. Liu, Y. Ge, T. Z. Qiu, and H. R. Soleymani, “Optimization of snow plowing cost and time in an urban environment: A case study for the city of Edmonton,” Can. J. Civil Eng., vol. 41, no. 7, pp. 667–675, 2014.
[6] J. Campbell and A. Langevin, “Roadway snow and ice control,” in Arc Routing: Theory, Solutions and Applications. Boston, MA, USA: Kluwer, 2000, pp. 389–418.
[7] H. Handa, D. Lin, L. Chapman, and X. Yao, “Robust solution of salting route optimisation using evolutionary algorithms,” in Proc. IEEE Congr. Evol. Comput., 2006, pp. 3098–3105.
[8] R. Eglese, B. Golden, and E. Wasil, “Route optimization for meter reading and salt spreading,” in Arc Routing: Problems, Methods, and Applications, Á. Corberán and G. Laporte, Eds. Philadelphia, PA, USA: SIAM Publications, 2014, pp. 303–320. [9] G. Ghiani, C. Mourão, L. Pinto, and D. Vigo, “Routing in waste collection applications,” in Arc Routing: Problems, Methods, and Applications, Á. Corberán and G. Laporte, Eds. Philadelphia, PA, USA: SIAM Publications, 2014, pp. 351–370. [10] W.-L. Pearn, A. Assad, and B. L. Golden, “Transforming arc routing into node routing problems,” Comput. Operations Res., vol. 14, no. 4, pp. 285–288, 1987. [11] G. Ulusoy, “The fleet size and mix problem for capacitated arc routing,” Eur. J. Oper. Res., vol. 22, no. 3, pp. 329–337, 1985. [12] P. Beullens, L. Muyldermans, D. Cattrysse, and D. Van Oudheusden, “A guided local search heuristic for the capacitated arc routing problem,” Eur. J. Oper. Res., vol. 147, no. 3, pp. 629–643, 2003. [13] F. Chu, N. Labadi, and C. Prins, “The periodic capacitated arc routing problem: Linear programming model, metaheuristic and lower bounds,” J. Syst. Sci. Syst. Eng., vol. 13, no. 4, pp. 423–435, 2004. [14] P. Lacomme, C. Prins, and W. Ramdane-Cherif, “Competitive memetic algorithms for arc routing problems,” Ann. Operations Res., vol. 131, no. 1, pp. 159–185, 2004. [15] A. Hertz and M. Mittaz, “A variable neighborhood descent algorithm for the undirected capacitated arc routing problem,” Transp. Sci., vol. 35, no. 4, pp. 425–434, 2001. [16] H. Longo, M. de Aragão, and E. Uchoa, “Solving capacitated arc routing problems using a transformation to the CVRP,” Comput. Operations Res., vol. 33, no. 6, pp. 1823–1837, 2006. [17] K. Tang, Y. Mei, and X. Yao, “Memetic algorithm with extended neighborhood search for capacitated arc routing problems,” IEEE Trans. Evol. Comput., vol. 13, no. 5, pp. 1151–1166, Oct. 2009. [18] L. Feng, Y. S. Ong, Q. H. Nguyen, and A. H. Tan, “Towards probabilistic memetic algorithm: An initial study on capacitated arc routing problem,” in Proc. IEEE Congr. Evol. Comput., 2010, pp. 18–23. [19] L. Santos, J. Coutinho-Rodrigues, and J. R. Current, “An improved ant colony optimization based algorithm for the capacitated arc routing problem,” Transp. Res. Part B: Methodological, vol. 44, no. 2, pp. 246–266, 2010. [20] Y. Mei, K. Tang, and X. Yao, “Decomposition-based memetic algorithm for multiobjective capacitated arc routing problem,” IEEE Trans. Evol. Comput., vol. 15, no. 2, pp. 151–165, Apr. 2011. [21] C. Bode and S. Irnich, “Cut-first branch-and-price-second for the capacitated arc-routing problem,” Operations Res., vol. 60, no. 5, pp. 1167–1182, 2012. [22] F. L. Usberti, M. F. Paulo, and L. M. F. Andre, “GRASP with evolutionary path-relinking for the capacitated arc routing problem,” Comput. Operations Res., vol. 40, no. 12, pp. 3206–3217, 2013. [23] R. Li, X. Zhao, X. Zuo, J. M. Yuan, and X. Yao, “Memetic algorithm with non-smooth penalty for capacitated arc routing problem,” Knowl.-Based Syst., vol. 220, pp. 1–18, 2021. [24] A. Amberg, W. Domschke, and S. Voß, “Multiple center capacitated arc routing problems: A tabu search algorithm using capacitated trees,” Eur. J. Oper. Res., vol. 124, no. 2, pp. 360–376, 2000. [25] Z. Zhu et al., “A hybrid genetic algorithm for the multiple depot capacitated arc routing problem,” in Proc. IEEE Int. Conf. Automat. Logistics, 2007, pp. 2253–2258. [26] A. Kansou and A. Yassine, “New upper bounds for the multi-depot capacitated arc routing problem,” Int. J.
Metaheuristics, vol. 1, no. 1, pp. 81–95, 2010. [27] T. Liu, Z. Jiang, and G. Na, “A genetic local search algorithm for the multidepot heterogeneous fleet capacitated arc routing problem,” Flexible Serv. Manuf. J., vol. 26, no. 4, pp. 540–564, 2014. [28] D. Krushinsky and T. V. Woensel, “An approach to the asymmetric multi-depot capacitated arc routing problem,” Eur. J. Oper. Res., vol. 244, no. 1, pp. 100–109, 2015. [29] L. Xing, P. Rohlfshagen, Y. Chen, and X. Yao, “An evolutionary approach to the multidepot capacitated arc routing problem,” IEEE Trans. Evol. Comput., vol. 14, no. 3, pp. 356–374, Jun. 2010. [30] Y. Zhang, Y. Mei, S. Huang, X. Zheng, and C. Zhang, “A route clustering and search heuristic for large-scale multi-depot capacitated arc routing problem,” IEEE Trans. Cybern., vol. 52, no. 8, pp. 8286–8299, Aug. 2022. [31] Z. Yang, K. Tang, and X. Yao, “Multilevel cooperative coevolution for large scale optimization,” in Proc. IEEE Congr. Evol. Comput., 2008, pp. 1663–1670. [32] M. N. Omidvar, X. Li, Z. Yang, and X. Yao, “Cooperative co-evolution for large scale optimization through more frequent random grouping,” in Proc. IEEE Congr. Evol. Comput., 2010, pp. 1–8. [33] M. N. Omidvar, M. Yang, Y. Mei, X. Li, and X. Yao, “DG2: A faster and more accurate differential grouping for large-scale black-box optimization,” IEEE Trans. Evol. Comput., vol. 21, no. 6, pp. 929–942, Dec. 2017. [34] Y. Sun, M. Kirley, and S. K. Halgamuge, “A recursive decomposition method for large scale continuous optimization,” IEEE Trans. Evol. Comput., vol. 22, no. 5, pp. 647–661, Oct. 2018. [35] Y. Jia et al., “Distributed cooperative co-evolution with adaptive computing resource allocation for large scale optimization,” IEEE Trans. Evol. Comput., vol. 23, no. 2, pp. 188–202, Apr. 2019. [36] Y. Pan, R. Xia, J. Yin, and N. Liu, “A divide-and-conquer method for scalable robust multitask learning,” IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 12, pp. 3163–3175, Dec. 2015. [37] B. Bhowmick, S. Patra, A. Chatterjee, V. Govindu, and S. Banerjee, “Divide and conquer: A hierarchical approach to large-scale structure-from-motion,” Comput. Vis. Image Understanding, vol. 157, pp. 190–205, 2017.
[38] N. R. Sabar, J. Abawajy, and J. Yearwood, “Heterogeneous cooperative co-evolution memetic differential evolution algorithms for big data optimisation problems,” IEEE Trans. Evol. Comput., vol. 21, no. 2, pp. 315–327, Apr. 2017. [39] A. Ostertag, K. Doerner, R. Hartl, E. Taillard, and P. Waelti, “POPMUSIC for a real-world large-scale vehicle routing problem with time windows,” J. Oper. Res. Soc., vol. 60, no. 7, pp. 934–943, 2009. [40] J. Xiao, T. Zhang, J. Du, and X. Zhang, “An evolutionary multiobjective route grouping-based heuristic algorithm for large-scale capacitated vehicle routing problems,” IEEE Trans. Cybern., vol. 51, no. 8, pp. 4173–4186, Aug. 2021. [41] L. M. Paz, J. D. Tardós, and J. Neira, “Divide and conquer: EKF SLAM in O(n),” IEEE Trans. Robot., vol. 24, no. 5, pp. 1107–1120, Oct. 2008. [42] G. Song, X. Zhou, Y. Wang, and K. Xie, “Influence maximization on large-scale mobile social network: A divide-and-conquer method,” IEEE Trans. Parallel Distrib. Syst., vol. 26, no. 5, pp. 1379–1392, May 2015. [43] B. Wang and Q. Li, “Rolling horizon procedure for large-scale job-shop scheduling problems,” in Proc. IEEE Int. Conf. Automat. Logistics, 2007, pp. 829–834. [44] R. Chandra and M. Zhang, “Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction,” Neurocomputing, vol. 86, pp. 116–123, 2012. [45] Y. Mei, X. Li, and X. Yao, “Cooperative coevolution with route distance grouping for large-scale capacitated arc routing problems,” IEEE Trans. Evol. Comput., vol. 18, no. 3, pp. 435–449, Jun. 2014. [46] R. Shang, K. Dai, L. Jiao, and R. Stolkin, “Improved memetic algorithm based on route distance grouping for multiobjective large scale capacitated arc routing problems,” IEEE Trans. Cybern., vol. 46, no. 4, pp. 1000–1013, Apr. 2016. [47] K. Tang, J. Wang, X. Li, and X. Yao, “A scalable approach to capacitated arc routing problems based on hierarchical decomposition,” IEEE Trans. Cybern., vol. 47, no. 11, pp. 3928–3940, Nov. 2017. [48] S. Wøhlk and G. Laporte, “A fast heuristic for large-scale capacitated arc routing problem,” J. Oper. Res. Soc., vol. 69, no. 12, pp. 1877–1887, 2018. [49] Z. Yang, K. Tang, and X. Yao, “Large scale evolutionary optimization using cooperative coevolution,” Inf. Sci., vol. 178, no. 15, pp. 2985–2999, 2008. [50] X. Li and X. Yao, “Cooperatively coevolving particle swarms for large scale optimization,” IEEE Trans. Evol. Comput., vol. 16, no. 2, pp. 210–224, Apr. 2012. [51] Z. Cao, L. Wang, Y. Shi, X. Hei, and H. Li, “An effective cooperative coevolution framework integrating global and local search for large scale optimization problems,” in Proc. IEEE Congr. Evol. Comput., 2015, pp. 1986–1993. [52] P. Yang, K. Tang, and X. Yao, “High-dimensional black-box optimization via divide and approximate conquer,” 2016, arXiv:1603.03518. [53] Y. Mei, X. Li, and X. Yao, “Variable neighborhood decomposition for large scale capacitated arc routing problem,” in Proc. IEEE Congr. Evol. Comput., 2014, pp. 1313–1320. [54] P. Lacomme, C. Prins, and W. Ramdane-Cherif, “Evolutionary algorithms for periodic arc routing problems,” Eur. J. Oper. Res., vol. 165, no. 2, pp. 535–553, 2005. [55] F. Chu, N. Labadi, and C. Prins, “A scatter search for the periodic capacitated arc routing problem,” Eur. J. Oper. Res., vol. 169, no. 2, pp. 586–605, 2006. [56] Y. Zhang, Y. Mei, K. Tang, and K. Jiang, “Memetic algorithm with route decomposing for periodic capacitated arc routing problem,” Appl. Soft Comput., vol. 52, no. 3, pp. 1130–1142, 2017.
[57] E. Dijkstra, “A note on two problems in connexion with graphs,” Numerische Mathematik, vol. 1, no. 1, pp. 269–271, 1959. [58] J. Belenguer and E. Benavent, “A cutting plane algorithm for the capacitated arc routing problem,” Comput. Operations Res., vol. 30, no. 5, pp. 705–728, 2003. [59] M. Mourão and L. Amado, “Heuristic method for a mixed capacitated arc routing problem: A refuse collection application,” Eur. J. Oper. Res., vol. 160, no. 1, pp. 139–153, 2005. [60] P. Lacomme, C. Prins, and M. Sevaux, “A genetic algorithm for a bi-objective capacitated arc routing problem,” Comput. Operations Res., vol. 33, no. 12, pp. 3473–3493, 2006. [61] J. Brandão and R. Eglese, “A deterministic tabu search algorithm for the capacitated arc routing problem,” Comput. Operations Res., vol. 35, no. 4, pp. 1112–1126, 2008. [62] R. Martinelli, M. Poggi, and A. Subramanian, “Improved bounds for large scale capacitated arc routing problem,” Comput. Operations Res., vol. 40, no. 8, pp. 2145–2160, 2013. [63] R. Shang et al., “Memetic algorithm based on extension step and statistical filtering for large-scale capacitated arc routing problems,” Natural Comput., vol. 17, no. 2, pp. 375–391, 2018. [64] Y. Zhang, Y. Mei, B. Zhang, and K. Jiang, “Divide-and-conquer large scale capacitated arc routing problems with route cutting off decomposition,” Inf. Sci., vol. 553, pp. 208–224, 2021. [65] T. Vidal, “Node, edge, arc routing and turn penalties: Multiple problems-one neighborhood extension,” Operations Res., vol. 65, no. 4, pp. 992–1010, 2017. [66] Y. Mei, X. Li, and X. Yao, “Decomposing large-scale capacitated arc routing problems using a random route grouping method,” in Proc. IEEE Congr. Evol. Comput., 2013, pp. 1013–1020. [67] M. Omidvar, X. Li, and X. Yao, “Cooperative co-evolution with delta grouping for large scale non-separable function optimization,” in Proc. IEEE Congr. Evol. Comput., 2010, pp. 1762–1769. [68] E. Lalla-Ruiz and S. Voß, “A POPMUSIC approach for the multi-depot cumulative capacitated vehicle routing problem,” Optim. Lett., vol. 14, no. 3, pp. 671–691, 2020.
Guest Editorial
Pau-Choo Chung National Cheng Kung University, TAIWAN Alexander Dockhorn Leibniz University, GERMANY Jen-Wei Huang National Cheng Kung University, TAIWAN
AI-Explained (Part I)
As we witness the remarkable progress of artificial intelligence (AI) over the last decade, its potential to address real-world complexities and revolutionize various fields has become evident. From image processing to natural language translation, AI solutions have accomplished significant milestones. However, the growing importance of AI in diverse domains has raised the need to make this complex technology accessible to a broader audience. In this issue, we embark on an educational enterprise, as we endeavor to bridge the knowledge gap between experts and non-experts in the field of AI. Our mission is to present AI concepts and methods in a comprehensible and interactive manner, allowing readers to explore, experiment, and understand the magic behind AI techniques. For this purpose, we are leveraging the recent introduction of immersive articles in IEEE Xplore and the Computational Intelligence Magazine, which allows us not just to present static content, but to let readers interactively engage with the articles themselves. This special issue includes articles that unravel complex AI topics with clarity, precision, and an element of playfulness. Our authors have ingeniously curated immersive articles that invite you to embark on an interactive journey, where complex concepts transform into
intuitive insights. Each extended abstract briefly introduces the discussed content, but don't miss out on the full experience: make sure to read the corresponding immersive article online. Following a rigorous peer review process, three papers have been accepted and selected for Part I of this special issue. The first paper is entitled “Group Formation by Group Joining and Opinion Updates via Multi-Agent Online Gradient Ascent,” authored by Lin et al. This article exemplifies the concepts of best-response dynamics and multi-agent online learning within the context of group formation. The core focus of this paper is to explore how strategic agents converge to stable states by employing decentralized online gradient ascent, both with and without regularization. The immersive article allows interested readers to gain a deeper understanding of pure-strategy Nash equilibria and the dynamics of the system as agents adapt their opinions through online learning. The second paper, entitled “MAP-Elites for Genetic Programming-based Ensemble Learning: An Interactive Approach” by Zhang et al., delves into the emerging research area of evolutionary ensemble learning. The paper addresses the crucial task of designing an effective quality-diversity optimization algorithm to obtain a set of base learners
Digital Object Identifier 10.1109/MCI.2023.3306191 Date of current version: 17 October 2023
Corresponding author: Jen-Wei Huang (e-mail: [email protected]).
that are not only effective but also complementary to each other. The authors propose a novel approach to maintaining such a set of learners within the MAP-Elites framework for evolutionary ensemble learning. Their method leverages cosine similarity-based dimensionality reduction techniques. By using this approach, the authors aim to optimize the ensemble model and ensure its efficacy and complementarity. To address the issue of uneven distribution of individuals in semantic space, the paper introduces a reference point synthesis strategy. This strategy enhances the ensemble model's performance by achieving a more balanced distribution of learners. Experimental results reveal the superiority of the ensemble model induced by the cosine similarity-based dimensionality reduction method. Outperforming seven other dimensionality reduction methods in both interactive examples and large-scale experiments, the proposed approach showcases its effectiveness and potential. The third paper, entitled “Monte Carlo and Temporal Difference Methods in Reinforcement Learning” by Kim et al., takes us on a fascinating journey into the realm of Reinforcement Learning (RL), a powerful subset of machine learning. RL enables intelligent agents to learn and execute desired actions through interactions with their environment. Over the years, RL has achieved remarkable progress,
making significant strides in diverse domains, from mastering complex games like Go and StarCraft to tackling practical challenges like protein-folding. The authors' efforts to provide interactive learning materials demonstrate the commitment of this special issue to making Artificial Intelligence truly engaging and accessible to a diverse audience. The many examples show step by step how the presented algorithms work and compare with each other. Overall, this special issue covers three papers, each of which offers a unique perspective on AI's diverse applications, methods, and intuitive presentation to a general audience. For this, we want to thank the authors and members of IEEE CIS for their pioneering efforts in enabling immersive articles to become part of our community and allowing us to present a series of educational articles that foster interactive exploration. Special thanks go to the technical staff of IEEE Xplore, the CIM Editor-in-Chief, and, last but not least, all reviewers for their kind support during the editing process of this journal. Introducing new technologies also requires overcoming new challenges. This special issue is a testament that the technology is ready to be used by many more authors to come. Therefore, we encourage the AI community to continue on this path of accessible education, making AI knowledge a resource accessible to all. As AI continues to reshape industries and redefine possibilities, a broader understanding of its methodologies becomes ever more critical. We sincerely hope and expect that readers will find this special issue useful.
AI-eXplained
Chuang-Chieh Lin Tamkang University, TAIWAN Chih-Chieh Hung National Chung Hsing University, TAIWAN Chi-Jen Lu Academia Sinica, TAIWAN Po-An Chen National Yang Ming Chiao Tung University, TAIWAN
Group Formation by Group Joining and Opinion Updates Via Multi-Agent Online Gradient Ascent Abstract
This article aims to exemplify best-response dynamics and multi-agent online learning by group formation. This extended abstract provides a summary of the full paper in IEEE Computational Intelligence Magazine in the special issue AI-eXplained (AI-X). The full paper includes interactive components to facilitate interested readers to grasp the idea of pure-strategy Nash equilibria and how the system of strategic agents converges to a stable state by decentralized online gradient ascent with and without regularization.
I. Introduction
GAME theory has been applied in a variety of situations due to its predictability of outcomes in the real world. It can also be used in solving problems, such as saddle-point optimization, which has been used extensively in generative adversarial network models [1]. In general, a game consists of strategic agents, each of which acts rationally to maximize its own reward (or utility) or minimize its cost. A Nash equilibrium is a stable state composed of the strategies of all agents such that none of the agents wants to change its own strategy unilaterally. Therefore, such a stable state is possibly achievable or even predictable. However, how to achieve a Nash equilibrium in a game may not be quite straightforward, especially when agents behave in a “decentralized” way. Indeed, when an agent's reward function depends on the strategies of the other agents, the maximizer of one agent's reward function is not necessarily a maximizer for any other agent, and it may change whenever any other agent changes its strategy. This article examines the group formation of strategic agents to illustrate their strategic behaviors. A strategic agent can either join a group or change its opinion to maximize its reward. The eventual equilibrium of the game hopefully suggests predictable outcomes for the whole society. For the case in which agents apply group-joining strategies, the pure-strategy Nash equilibrium (PNE) is considered as the solution concept, where a pure strategy means a strategy played with a probability of 1. For the case in which agents change their opinions, each agent executes an online gradient ascent algorithm, which guarantees the time-average convergence to a hindsight optimum for a single agent (see [2] for the cost-minimization case), in a decentralized way, and then the possibly convergent state of the system is investigated.
Digital Object Identifier 10.1109/MCI.2023.3304084 Date of current version: 17 October 2023
Corresponding author: Chih-Chieh Hung (e-mail: [email protected]).
II. Group and Opinion Formation
Given a set $V$ of $n$ agents $v_1, v_2, \ldots, v_n$, each agent $v_i$ is represented by a public preference vector $z_i$ and a private preference vector $s_i$, such that the former (i.e., an opinion) corresponds to the preference revealed to all the agents while the latter corresponds to its belief, which is unchangeable. Consider $s_i, z_i \in K$ such that $K := \{x \in [-1,1]^k : \|x\|_2 \le 1\} \subseteq \mathbb{R}^k$ is the feasible set. Each dimension of the domain stands for a certain social issue, where $-1$ maps to far-left politics, while $1$ maps to far-right politics. The bounded 2-norm constraint is in line with the bounded rationality of a person, or the bounded budget for a group. Denote by $z = (z_1, z_2, \ldots, z_n)$ and $s = (s_1, s_2, \ldots, s_n)$ the two profiles that include each agent's opinion and belief, respectively. Each agent is initially regarded as a group. The opinion of a group is the average of the opinions of its members. Similar to the monotone setting in [3], a group wins with higher odds if its opinion brings more utility to all the agents. The reward (i.e., payoff) of an agent is the expected utility that it can get from all the groups. Specifically, assume there are currently $m \le n$ groups $G_1, G_2, \ldots, G_m$, and denote by $|G_j| = n_j$ the number of members in group $G_j$. Let $G = (G_1, G_2, \ldots, G_m)$ denote the profile of the groups. To ease the notation, let $\tau = (z, s, G)$ denote the state of the game. The reward function of agent $i$ is $r_i(\tau) = \sum_{j=1}^{m} p_j(\tau) \langle s_i, g_j \rangle$, where $g_j = \sum_{v_i \in G_j} z_i / n_j$ represents the opinion of group $G_j$ and the winning probability $p_j(\tau)$ of group $G_j$ is
$$p_j(\tau) = \frac{n_j\, e^{\langle g_j, \sum_{v_r \in V} s_r \rangle}}{\sum_{i \in [m],\, n_i > 0} n_i\, e^{\langle g_i, \sum_{v_r \in V} s_r \rangle}},$$
where $[m]$ denotes $\{1, 2, \ldots, m\}$. The following strategic behaviors of an agent in such a game will be considered:
❏ Group Joining: Each agent seeks a specific group that hopefully maximizes its reward and then joins the group.
❏ Opinion Updating without Regularization: Each agent in a certain group tries to maximize its reward by changing its own opinion.
❏ Opinion Updating with Regularization: Each agent in a certain group tries to maximize its reward by changing its own opinion, while the reward includes the regularization $-\|s_i - z_i\|_2^2$, which hopefully constrains an agent's strategic behavior by preventing it from moving too far from its own belief.
III. Group Joining and Pure-Strategy Nash Equilibria
A. 1D Representation
When the opinions and beliefs are assumed to be in $[-1,1] \subseteq \mathbb{R}$, these vectors as well as the dynamics of changes can be illustrated on a real line. For example, consider the five agents $v_1, v_2, v_3, v_4$, and $v_5$ in Figure 1. By keeping the opinions $z_1, z_2, z_3$, and $z_4$ of $v_1, v_2, v_3$, and $v_4$, respectively, fixed and altering the opinion $z_5$ from $-1$ to $1$, the variations in the winning probabilities and rewards of all the agents can be observed.
FIGURE 1 1D Representation: myopic best-responses and a PNE.
B. Group Joining: Myopic Best Responses
Assume that agent $i$ decides to join $G_j$, for which $j = \arg\max_{\ell} p_\ell(\tau) \langle g_\ell, s_i \rangle$. Such a strategy is called a myopic best response. An agent joins a group by considering not only its winning probability but also the utility that the agent can get from the group before joining. The state at the bottom of Figure 1 is an example of a PNE.
IV. Opinion Updates by Online Learning
A. 2D Representation
The opinions and beliefs as well as the dynamics of opinion changes are illustrated in $K := \{x \in [-1,1]^2 : \|x\|_2 \le 1\} \subseteq \mathbb{R}^2$. The 2-norm constraint that $\|z_i\|_2, \|s_i\|_2 \le 1$ correlates the dimensions. A projection of the opinion is required if the constraint is not satisfied.
B. Online Gradient Ascent
Consider the setting in which each agent tries to maximize its own reward by “changing its opinion” without deviating from the group to which it belongs. Each agent executes the online gradient ascent algorithm to iteratively update its opinion so as to maximize its reward. The update is done by adding a certain quantity (tuned by the learning rate $\eta$) in the direction of the gradient. A “projection” $P_K(x)$, which projects $x$ onto the feasible set $K$ by dividing by its 2-norm, is performed if necessary.
Algorithm 1. Multi-Agent Online Gradient Ascent.
Input: feasible set $K$, $T$, learning rate $\eta$.
1: for $t \leftarrow 1$ to $T$ do
2:   for each agent $i$ do
3:     observe reward $r_i(\tau)$, where state $\tau = (z, s, G)$
4:     $z_{i,t+1} \leftarrow P_K(z_{i,t} + \eta \nabla_{z_i} r_i(\tau))$
5:   end for
6: end for
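As a concrete companion to Algorithm 1, the following is a minimal NumPy sketch of the decentralized updates. A central-difference gradient stands in for the analytic gradient of $r_i$, and the group assignment is held fixed during the opinion updates; both are simplifying assumptions for illustration.

```python
import numpy as np

def project_to_K(x):
    """P_K: clip to [-1, 1]^k, then divide by the 2-norm if it exceeds 1."""
    x = np.clip(x, -1.0, 1.0)
    n = np.linalg.norm(x)
    return x / n if n > 1.0 else x

def reward(i, z, s, groups):
    """r_i(tau) = sum_j p_j(tau) <s_i, g_j>, with the softmax-style odds."""
    sum_beliefs = s.sum(axis=0)                        # sum_r s_r
    g = np.array([z[m].mean(axis=0) for m in groups])  # group opinions g_j
    n = np.array([len(m) for m in groups], dtype=float)
    w = n * np.exp(g @ sum_beliefs)                    # n_j * exp(<g_j, sum_r s_r>)
    p = w / w.sum()                                    # winning probabilities p_j
    return float(p @ (g @ s[i]))

def multi_agent_oga(z, s, groups, T=200, eta=0.05, h=1e-4):
    """Each agent nudges its opinion along a central-difference estimate of
    grad_{z_i} r_i, then projects back onto K (cf. Algorithm 1)."""
    z = z.copy()
    for _ in range(T):
        new_z = z.copy()
        for i in range(z.shape[0]):
            grad = np.zeros(z.shape[1])
            for d in range(z.shape[1]):
                zp, zm = z.copy(), z.copy()
                zp[i, d] += h
                zm[i, d] -= h
                grad[d] = (reward(i, zp, s, groups) - reward(i, zm, s, groups)) / (2 * h)
            new_z[i] = project_to_K(z[i] + eta * grad)
        z = new_z
    return z
```

Here `groups` is a list of index lists; all agents update simultaneously from the same state, matching the decentralized play described above.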
C. Online Gradient Ascent With Regularization
The reward function for agent $i$ including the regularizer is defined as $r_i(\tau) = \sum_{j=1}^{m} p_j(\tau) \langle s_i, g_j \rangle - \|z_i - s_i\|_2^2$. Since $-\|z_i - s_i\|_2^2$ is always non-positive, an agent will be constrained to consider “not being too far from its belief.” Our experimental illustrations show that such a regularization helps the game converge to a state where agents' opinions will not be too far from their beliefs (see Figure 2).
FIGURE 2 Opinion updates via the online gradient ascent with regularization.
V. Conclusion
This article presents a preliminary study on the dynamics of group formation. From the illustrations, readers can have a better grasp of a pure-strategy Nash equilibrium in a system of multiple agents and also learn how an online gradient ascent algorithm, as one of the dynamics, can reach a stable state.

Acknowledgment
This work was supported in part by the National Science and Technology Council under Grants NSTC 112-2221-E-032-018-MY3, NSTC 111-2221-E-005-047, and NSTC 111-2410-H-A49-022-MY2. We also thank Victorien Yen for helping implement the JavaScript codes in the immersive article.

References
[1] I. Goodfellow et al., “Generative adversarial networks,” Commun. ACM, vol. 63, pp. 139–144, 2020. [2] F. Orabona, “A modern introduction to online learning,” 2022, arXiv:1912.13213. [3] C.-C. Lin, C.-J. Lu, and P.-A. Chen, “On the efficiency of an election game of two or more parties: How bad can it be?,” 2023, arXiv:2303.14405.
Hengzhe Zhang, Qi Chen, and Bing Xue Victoria University of Wellington, NEW ZEALAND Wolfgang Banzhaf Michigan State University, USA Mengjie Zhang Victoria University of Wellington, NEW ZEALAND
MAP-Elites for Genetic Programming-Based Ensemble Learning: An Interactive Approach Abstract
Evolutionary ensemble learning is an emerging research area, and designing an appropriate quality-diversity optimization algorithm to obtain a set of effective and complementary base learners is important. However, how to maintain such a set of learners remains an open issue. This paper proposes using cosine similarity-based dimensionality reduction methods to maintain a set of effective and complementary base learners within the MAP-Elites framework for evolutionary ensemble learning. Additionally, this paper proposes a reference point synthesis strategy to address the issue of individuals being unevenly distributed in semantic space. The experimental results show that the ensemble model induced by the cosine similarity-based dimensionality reduction method outperforms the models induced by the other seven dimensionality reduction methods in both interactive examples and large-scale experiments. Moreover, reference points are shown to be helpful in improving the algorithm's effectiveness. The main contribution of this paper is to provide an interactive approach to explore the methods and results, which is detailed in the full paper presented in IEEE Xplore.
I. Introduction
Evolutionary ensemble learning has emerged as a significant area in both the evolutionary computation and machine learning domains [1], [2]. One of the key advantages of evolutionary ensemble learning algorithms is their ability to generate a diverse set of high-quality base learners in a single run within the framework of quality-diversity (QD) optimization algorithms. QD optimization algorithms, such as the multidimensional archive of phenotypic elites (MAP-Elites) [3], have shown promising performance in evolutionary ensemble learning. In MAP-Elites, an important question that determines which individuals survive is how to define niches, where the dimensionality reduction method plays a crucial role. This paper investigates the impact of dimensionality reduction methods on MAP-Elites in the context of evolutionary ensemble learning. The main contribution of this paper is to provide an interactive approach to exploring the key aspects of using MAP-Elites for evolutionary ensemble learning, offering an enhanced understanding of MAP-Elites-based evolutionary ensemble learning and its performance on machine learning tasks.
II. Evolutionary Ensemble Learning
Evolutionary ensemble learning uses an evolutionary algorithm to generate base learners for building an ensemble model. It can be genetic programming (GP) [1], learning classifier systems (LCS) [4], or neural networks (NN) [5]. For evolutionary ensemble learning, it is important to evolve a set of high-quality and diverse base learners. For this purpose, a bootstrap sampling strategy [6], a niching strategy [7], and random decision trees [1] can be used.
Digital Object Identifier 10.1109/MCI.2023.3304085 Date of current version: 17 October 2023
Corresponding author: Qi Chen (e-mail: qi.chen@ecs.vuw.ac.nz).
Recently, MAP-Elites, a quality-diversity optimization algorithm, has shown good performance in evolutionary ensemble learning [7]. In MAP-Elites, it is important to define a behavior space into which individuals can be mapped and which characterizes the possible behaviors of high-quality individuals. The existing methods for designing such a space include using domain knowledge [8] and auto-encoders [7]. This paper focuses on studying how to induce a good behavior space in MAP-Elites for obtaining a good ensemble model.
III. Algorithm
A. Algorithm Overview
Using the MAP-Elites framework, the idea of evolutionary ensemble learning is to dynamically maintain a set of GP elites in a discretized behavior space. These GP elites form an ensemble model, where each elite contains m GP trees for feature construction and a machine learning model for prediction. Specifically, the m GP trees represent m features constructed based on the original training data. The machine learning model is then trained on these features. The final prediction of the ensemble model is the average of all machine learning model predictions. In this paper, linear regression is used as the machine learning model due to its simplicity, efficiency, and strong predictive performance.
B. Solution Initialization and Evaluation
In the solution initialization stage, m GP trees in each individual are randomly initialized using the ramped half-and-half method [1], [2]. The solution evaluation stage then transforms the training data into a new feature space using the GP trees.
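The following is a minimal sketch of this evaluation step, representing the m GP trees of one individual as vectorized callables (a hypothetical encoding) and using scikit-learn to obtain the cross-validated predictions that serve as the individual's semantics.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

def evaluate_individual(trees, X, y, cv=5):
    """Evaluate one multi-tree GP individual.

    trees: list of m callables, each mapping the raw data X to one
    constructed feature (hypothetical tree representation).
    Returns the cross-validated predictions ("semantics") and the MSE fitness.
    """
    F = np.column_stack([tree(X) for tree in trees])  # m constructed features
    semantics = cross_val_predict(LinearRegression(), F, y, cv=cv)
    mse = float(np.mean((semantics - y) ** 2))        # lower is better
    return semantics, mse
```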
TABLE I Statistical comparison of test mean squared error for different dimensionality reduction techniques (x/y/z means that the method in a row is significantly better than, similar to, or worse than the method in the column on x/y/z datasets).

| Method | PCA | KPCA(RBF) | KPCA(POLY) | t-SNE | Beta-VAE | Isomap | SpectralEmbedding |
|---|---|---|---|---|---|---|---|
| KPCA(COSINE) | 44/62/0 | 61/44/1 | 73/33/0 | 58/48/0 | 70/36/0 | 63/43/0 | 64/42/0 |
| KPCA(COSINE)+Reference Point | 61/45/0 | 71/34/1 | 74/32/0 | 67/39/0 | 73/33/0 | 70/36/0 | 71/35/0 |
Based on the transformed feature space, a linear model is trained to make predictions. To ensure that the constructed features generalize well to unseen data, the model makes predictions on the training data by using cross-validation. The predictions made by the models are referred to as semantics in GP literature, and the target labels are known as target semantics. The semantics of all GP individuals together form a semantic space, where the objective of GP is to discover an individual that can output target semantics based on given inputs. C. Archive Maintenance
The archive maintenance step is crucial for selecting individuals to form the final ensemble model. In this paper, MAP-Elites is used to select a set of high-quality and complementary solutions, which involves four stages:
❏ Reference Point Synthesis: For a learning dataset $(X, Y)$ of a supervised learning task, where the target label $Y$ is known, the semantics of ideal individuals $F$ can be synthesized by $(1 - \alpha) \cdot F(X) + \alpha \cdot Y$. $\alpha$ is a hyperparameter set to 0.1 and 1.1 in this paper, which corresponds to the ideal points of high-quality individuals and symmetrical high-quality individuals, respectively.
❏ Dimensionality Reduction: The ideal semantic space is high-dimensional, making it challenging to define a niche in such a space. To address this issue, MAP-Elites employs a dimensionality reduction method, transforming the high-dimensional space into a lower-dimensional space that can be discretized more easily. Many methods can be used for dimensionality reduction in MAP-Elites. In this work, kernel principal component analysis (KPCA) with a cosine kernel is adopted because it demonstrated superior performance in the experiments, as discussed in Section IV.
❏ Space Discretization: Based on the reduced space, MAP-Elites discretizes the semantic space into a $k \times k$ grid, where $k$ is a hyperparameter that determines the granularity of MAP-Elites. Each grid cell represents a niche, containing individuals with similar behaviors.
❏ Elites Selection: Finally, the best individual in each grid cell is preserved based on the discrete behavior space.
A minimal sketch of the dimensionality reduction and discretization steps is given below.
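The sketch assumes semantics are stacked row-wise and that fitness is a lower-is-better MSE; the min-max normalization of the reduced space before gridding is an illustrative assumption, not necessarily the paper's choice.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def select_elites(semantics, fitness, k=10):
    """Project semantics to 2D with cosine-kernel KPCA, discretize into a
    k x k grid, and keep the single best individual per grid cell."""
    coords = KernelPCA(n_components=2, kernel="cosine").fit_transform(semantics)
    lo, hi = coords.min(axis=0), coords.max(axis=0)
    cells = np.floor((coords - lo) / (hi - lo + 1e-12) * k)
    cells = cells.clip(0, k - 1).astype(int)          # cell indices in {0..k-1}^2
    archive = {}                                      # (row, col) -> elite index
    for idx, cell in enumerate(map(tuple, cells)):
        if cell not in archive or fitness[idx] < fitness[archive[cell]]:
            archive[cell] = idx
    return sorted(archive.values())                   # surviving elites
```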
D. Solution Selection and Generation
Once a set of diverse and high-quality individuals has been obtained in the external archive $A$, promising individuals are selected from the external archive $A$ using random selection. Based on the selected parent individuals, offspring are generated using random subtree crossover and mutation. For multi-tree GP, crossover and mutation are applied to randomly selected GP trees.
IV. Experimental Results
The experimental results in Table I show the impact of dimensionality reduction methods on the test mean squared errors of ensemble models across 106 datasets. The results indicate that incorporating KPCA (COSINE) as the dimensionality reduction technique within MAP-Elites leads to a significant improvement in the performance of ensemble models. Specifically, KPCA (COSINE) outperforms PCA in MAP-Elites on 44 datasets and does not perform worse on any dataset. Furthermore, when synthesizing reference points, KPCA (COSINE) exhibits even better performance, surpassing PCA on 61 datasets and not being outperformed on any dataset. V. Conclusion
This paper provides an interactive approach to understanding how to use MAP-Elites for evolutionary ensemble learning, as well as eight dimensionality reduction methods for automatically inducing a behavior space based on semantics of GP. The experimental results from interactive examples and large-scale experiments show that the dimensionality
reduction method significantly impacts the predictive performance of the ensemble model within the MAP-Elites framework, with cosine-kernel-based PCA outperforming other methods. While this paper focuses on MAP-Elites, the idea of using cosine similarity for defining a behavior space could potentially be extended to other QD optimization algorithms for evolutionary ensemble learning. In the future, it would be interesting to investigate the impact of different distance metrics in other QD optimization algorithms.

Acknowledgment
This work was supported in part by the Marsden Fund of New Zealand Government under Contracts VUW1913, VUW1914, and VUW2016, in part by the Science for Technological Innovation Challenge (SfTI) fund under Grant E3603/2903, in part by MBIE Data Science SSIF Fund under Contract RTVU1914, in part by Huayin Medical under Grant E3791/4165, and in part by MBIE Endeavor Research Programme under Contracts C11X2001 and UOCX2104.

References
[1] H. Zhang, A. Zhou, and H. Zhang, “An evolutionary forest for regression,” IEEE Trans. Evol. Comput., vol. 26, no. 4, pp. 735–749, Aug. 2022. [2] H. Zhang, A. Zhou, Q. Chen, B. Xue, and M. Zhang, “SR-Forest: A genetic programming based heterogeneous ensemble learning method,” IEEE Trans. Evol. Comput., early access, Feb. 07, 2023, doi: 10.1109/TEVC.2023.3243172. [3] J.-B. Mouret and J. Clune, “Illuminating search spaces by mapping elites,” 2015, arXiv:1504.04909. [4] H. H. Dam, H. A. Abbass, C. Lokan, and X. Yao, “Neural-based learning classifier systems,” IEEE Trans. Knowl. Data Eng., vol. 20, no. 1, pp. 26–39, Jan. 2008. [5] Y. Liu, X. Yao, and T. Higuchi, “Evolutionary ensembles with negative correlation learning,” IEEE Trans. Evol. Comput., vol. 4, no. 4, pp. 380–387, Nov. 2000. [6] M. Virgolin, “Genetic programming is naturally suited to evolve bagging ensembles,” in Proc. Genet. Evol. Comput. Conf., 2021, pp. 830–839. [7] K. Nickerson, A. Kolokolova, and T. Hu, “Creating diverse ensembles for classification with genetic programming and neuro-map-elites,” in Proc. Eur. Conf. Genet. Program., 2022, pp. 212–227. [8] E. Dolson, A. Lalejini, and C. Ofria, “Exploring genetic programming systems with map-elites,” in Genetic Programming Theory and Practice XVI. Cham, Switzerland: Springer, 2019, pp. 1–16.
Isaac Han, Seungwon Oh, Hoyoun Jung, Insik Chung, and Kyung-Joong Kim Gwangju Institute of Science and Technology, SOUTH KOREA
Monte Carlo and Temporal Difference Methods in Reinforcement Learning Abstract
Reinforcement learning (RL) is a subset of machine learning that allows intelligent agents to acquire the ability to execute desired actions through interactions with an environment. Its remarkable progress has achieved significant results in diverse domains, such as Go and StarCraft, and practical challenges like protein-folding. This short paper presents overviews of two common RL approaches: the Monte Carlo and temporal difference methods. To obtain a more comprehensive understanding of these concepts and gain practical experience, readers can access the full article on IEEE Xplore, which includes interactive materials and examples.
I. Introduction
Reinforcement learning (RL) [1] is a type of machine learning in which an agent learns to make decisions by trial and error in a given environment. RL recently showed impressive results in various domains, such as Go [2], StarCraft [3], and protein-folding problems [4]. RL comprises the four main elements of policy, reward, value function, and environment model. The policy determines the agent's behavior. The reward serves as the reinforcement signal. The value function measures the long-term state quality. Lastly, the model predicts the environment's behavior. Figure 1 provides an overview of RL. RL algorithms can either be model-free or model-based. Model-free algorithms update the policy by using value function estimates based on trial-and-error feedback. In contrast, model-based algorithms evaluate the state values by considering possible future states and actions through the environment model. This study introduces two fundamental model-free algorithms, namely, the Monte Carlo (MC) and temporal difference (TD) methods, which are powerful tools for solving various RL problems.
FIGURE 1 Overview of RL.
Digital Object Identifier 10.1109/MCI.2023.3304145 Date of current version: 17 October 2023
Corresponding author: Kyung-Joong Kim (e-mail: [email protected]).
II. Markov Model
The Markov decision process (MDP) provides a valuable framework for addressing sequential decision-making problems. It comprises a set of states, actions, transition probabilities, and rewards that define the problem space. RL algorithms leverage the MDP to determine the optimal policy or evaluate the action value within the MDP constraints. The MDP builds upon the concept of a Markov process, a stochastic process characterized by states and transition probabilities. A. Markov Process
A Markov process is a stochastic process describing how a system changes over time. At each time step, the state may either remain the same or transition into a different state based on the transition probabilities. The state to which the system transitions is determined only by the current state, not by any of the previous states. This property is known as the Markov Property, a defining characteristic of Markov processes.
Definition 1 (Markov Property). $P[s_{t+1} \mid s_t] = P[s_{t+1} \mid s_1, s_2, \ldots, s_t]$
A Markov process comprises two fundamental components: the state space ($S$) that represents all possible states a system can be in and the transition probability $P_{ss'}$ that represents the probability of transitioning from state $s$ to state $s'$ at step $t$. The transition probability is expressed as $P_{ss'} = P(S_{t+1} = s' \mid S_t = s)$.
B. Markov Decision Process
The MDP extends upon a Markov process by incorporating decision-making capabilities. Within the MDP, an agent selects an action at each time step based on its current state and associated reward. The next state is determined by the chosen action and the transition probability. The MDP introduces additional components compared to a Markov process. The action space $A$ includes all possible actions that can be taken given a state $s$. The transition probability $P^a_{ss'}$ models the impact of actions as $P^a_{ss'} = P(S_{t+1} = s' \mid S_t = s, A_t = a)$. The reward function $R$ evaluates the desirability of different actions by providing a scalar value, called the reward $R_t$, at step $t$. Lastly, the discount factor $\gamma$ is a value between 0 and 1 that determines the importance of immediate rewards versus long-term rewards as part of the agent's learning strategy. The agent's goal is to learn a policy that maps states to actions maximizing the expected cumulative reward over time. Maximizing the sum of rewards over time in the MDP requires the evaluation of the value of each state and the determination of the optimal action. This is performed by calculating the expected sum of the discounted future rewards obtained by taking a particular action in each state:
$$G_t = R_{t+1} + \gamma R_{t+2} + \cdots + \gamma^{T-1} R_T$$
The value function outputs the expected return for each state and must be estimated from experience in a model-free environment, which is typically done using either the MC or the TD method:
$$V(s) = \mathbb{E}[G_t \mid S_t = s] = \mathbb{E}\left[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \,\middle|\, S_t = s\right]$$
C. SharkGame Environment
This study presents the MC and TD approaches using SharkGame, a grid-based game, as an example. Figure 2 illustrates the SharkGame environment. The agent, namely, the shark, aims to reach the treasure box at the bottom while avoiding dangerous obstacles, such as bombs and
fishnets. SharkGame is an MDP with 36 states (each grid cell) and four actions (i.e., up, down, left, and right). The transition probability is always 1, making it a deterministic environment. The reward function provides a small negative reward at each step and a large negative reward when the agent encounters obstacles. Moreover, it encourages the agent to find the shortest path while avoiding obstacles.
FIGURE 2 Illustration of the SharkGame environment.
III. Monte Carlo Approach
The MC values are estimated by averaging the sample returns. During each episode, the agent takes actions and receives rewards, and the sequence of states, actions, and rewards is recorded. After each episode, the sample return $G_t$ is computed for each step. The counter $N(s_t)$, which refers to the number of visits to state $s_t$, is then incremented for each visited state in the episode. The sample return $G_t$ is then added to the total return $S(s_t)$. Finally, the state value $V(s_t)$ is obtained by dividing $S(s_t)$ by $N(s_t)$:
$$N(s_t) = N(s_t) + 1$$
$$S(s_t) = S(s_t) + G_t$$
$$V(s_t) = S(s_t) / N(s_t)$$
The MC approach has two limitations. First, it requires complete episodes of interaction with the environment before updating the state values. This process can be time consuming and computationally expensive, particularly in long-episode scenarios. Its application also becomes unfeasible in the context of infinite episodes. Second, the MC method can be susceptible to a high variance in the value estimates, particularly when the episodes are short or few. This variance can impede the agent's ability to learn the true state values and lead to a slower convergence.
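To make the tabulated update concrete, here is a minimal sketch of every-visit MC policy evaluation under a uniform random policy. The `env_step(state, action) -> (next_state, reward, done)` interface and the episode cap are assumptions for illustration, not the paper's implementation.

```python
import random
from collections import defaultdict

def mc_evaluate(env_step, start_state, num_episodes=1000, gamma=0.9, max_steps=50):
    """Every-visit Monte Carlo policy evaluation under a uniform random policy.

    States must be hashable, e.g., (row, col) grid cells as in SharkGame.
    """
    N = defaultdict(int)    # visit counter N(s)
    S = defaultdict(float)  # accumulated return S(s)
    V = defaultdict(float)  # value estimate V(s)
    actions = ["up", "down", "left", "right"]

    for _ in range(num_episodes):
        # MC must roll out a complete episode before any update.
        state, trajectory, done, steps = start_state, [], False, 0
        while not done and steps < max_steps:
            next_state, reward, done = env_step(state, random.choice(actions))
            trajectory.append((state, reward))
            state, steps = next_state, steps + 1

        # Walk backward to accumulate the discounted sample return G_t.
        G = 0.0
        for s_t, r_t1 in reversed(trajectory):
            G = r_t1 + gamma * G
            N[s_t] += 1          # N(s_t) = N(s_t) + 1
            S[s_t] += G          # S(s_t) = S(s_t) + G_t
            V[s_t] = S[s_t] / N[s_t]
    return V
```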
IV. Temporal Difference Learning
The TD approach is a method for estimating the value function in RL. Its main advantage over the MC method is that it updates the value at every transition instead of waiting until the end of an episode. The TD approach updates the value function at each step using the bootstrap method, which updates an estimate using another estimate. More precisely, it updates the current value function $V(s_t)$ with a more accurate estimate, $R_{t+1} + \gamma V(s_{t+1})$, that incorporates the sampled reward received by the agent. This incremental update enables the TD approach to estimate the value function at each step:
$$V(s_t) = \mathbb{E}[R_{t+1} + \gamma V(s_{t+1})]$$
Compared to the MC method, the TD method is more computationally efficient because of its frequent updates. However, it has bias and may not converge to the true value function. It also uses an estimated function $V(s)$ for the update instead of a true value function. Despite its limitations, the TD approach demonstrates practical effectiveness in various real-world applications and has been successfully applied to a wide range of RL problems.
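For contrast, here is a minimal TD(0) sketch under the same assumed interface; the learning rate `alpha` is an extra assumption, since the update above is written as an expectation rather than an incremental rule.

```python
import random
from collections import defaultdict

def td0_evaluate(env_step, start_state, num_episodes=1000, gamma=0.9,
                 alpha=0.1, max_steps=50):
    """TD(0) policy evaluation: update V after every single transition."""
    V = defaultdict(float)
    actions = ["up", "down", "left", "right"]
    for _ in range(num_episodes):
        state, done, steps = start_state, False, 0
        while not done and steps < max_steps:
            next_state, reward, done = env_step(state, random.choice(actions))
            # Bootstrapped target R_{t+1} + gamma * V(s_{t+1}): no need to wait
            # for the episode to finish, unlike the MC update above.
            target = reward + gamma * (0.0 if done else V[next_state])
            V[state] += alpha * (target - V[state])
            state, steps = next_state, steps + 1
    return V
```

Running both routines on the same SharkGame-like grid illustrates the bias-variance contrast discussed next.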
V. Difference Between MC and TD
Both the MC and TD approaches are used in RL to estimate the value function; however, they differ in their update mechanism. The MC method updates values only at the end of each episode, while the TD approach updates values after each time step. The TD method has an inherent capability to swiftly pinpoint meaningful interactions due to its incremental update scheme, which enables the ongoing episode to be directly influenced by the changes resulting from previous interactions. This mechanism contributes to a faster convergence, which enhances the overall efficiency of the TD method compared to the MC approach. Therefore, the MC method can be more computationally expensive and time consuming, while the TD method is more efficient and converges faster. Both methods estimate values in different ways and asymptotically converge to the true values. The bias and the variance of these two algorithms can be considered from a machine learning perspective. MC learning has a low bias but a high variance because it estimates the value function by averaging the returns from multiple episodes. This high variance can make the training unstable. However, the value function is updated only with the sample returns; hence, the estimation introduces no bias. By contrast, TD learning has a low variance but a high bias because it updates the value function at each time step based on the estimated target value. This leads to a lower variance because each update is based on a single time step; however, it can introduce a bias because the value function is updated with estimated target values. In the full article on IEEE Xplore, we delve into a comprehensive discussion of a hybrid approach that combines the strengths of the MC and TD learning methods. This advanced technique aims to optimize and balance the trade-offs inherent in each individual method, thereby enhancing the overall performance.
VI. Conclusion
In conclusion, this paper provides an overview of the two fundamental approaches in RL: MC and TD. The key distinctions between the MC and TD methods are outlined, highlighting the respective strengths and weaknesses of each approach. Further information and examples are available in the complete paper, which can be accessed online at IEEE Xplore.

Acknowledgment
This research was supported by the National Research Foundation of Korea (NRF) funded by the MSIT under Grant 2021R1A4A1030075.

References
[1] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. Cambridge, MA, USA: MIT Press, 2018. [2] D. Silver et al., “Mastering the game of go without human knowledge,” Nature, vol. 550, no. 7676, pp. 354–359, 2017. [3] O. Vinyals et al., “Grandmaster level in StarCraft II using multi-agent reinforcement learning,” Nature, vol. 575, no. 7782, pp. 350–354, 2019. [4] J. Jumper et al., “Highly accurate protein structure prediction with AlphaFold,” Nature, vol. 596, no. 7873, pp. 583–589, 2021.
Application Notes
Yue Wu, Jiaming Liu, Yongzhe Yuan, Xidao Hu, and Xiaolong Fan Xidian University, CHINA
Kunkun Tu Zhejiang University, CHINA
Maoguo Gong, Qiguang Miao, and Wenping Ma Xidian University, CHINA
Correspondence-Free Point Cloud Registration Via Feature Interaction and Dual Branch
Abstract
Point cloud registration, which effectively aligns the source and target point clouds, is generally implemented with geometric metrics or feature metrics. In terms of resistance to noise and outliers, feature-metric registration has less error than the traditional point-to-point corresponding geometric metric, and point cloud reconstruction can generate and reveal more potential information during the recovery process, which can further optimize the registration process. In this paper, CFNet, a correspondence-free point cloud registration framework based on feature metrics and reconstruction metrics, is proposed to learn adaptive representations, with an emphasis on optimizing the network. Considering the correlations among the paired point clouds in the registration, a feature interaction module that can perceive and strengthen the information association between point clouds in multiple stages is proposed. To reflect the fact that rotation and translation are essentially uncorrelated, they are considered different solution spaces, and the interactive features are divided into two parts to produce a dual branch regression. In addition, CFNet, with its comprehensive objectives, estimates the transformation matrix between two input point clouds by minimizing multiple loss metrics. The
extensive experiments conducted on both synthetic and real-world datasets show that our method outperforms the existing registration methods.
Digital Object Identifier 10.1109/MCI.2023.3304144 Date of current version: 17 October 2023
Corresponding author: Maoguo Gong (e-mail: [email protected]).
I. Introduction
With the popularity of LiDAR scanners and depth cameras, many applications, such as autonomous driving [1], rely on scanning devices to obtain 3D data. A point cloud is a simple and effective unstructured data representation of a 3D scene or object that consists of basic point coordinates and additional color or other information. As an important downstream task, point cloud registration attempts to transform point clouds obtained by scanning from different viewpoints into a common coordinate system,
IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE | NOVEMBER 2023
which is significant in robot vision, virtual and augmented reality, etc. [2], [3]. Traditional registration methods minimize the registration error by searching for corresponding point pairs and estimating transformation parameters. These two processes are iteratively performed until the registration error converges to a minimum value. In general, deep learning uses discrete convolution as its fundamental component [4], which facilitates the development of computer vision techniques. Previous deep learning methods mainly use 2D data with regular structures that facilitate efficient computations on modern hardware. However, the convolution operation needs to be revisited when the regularity is removed. In analyzing 3D point clouds with convolutional operators [5], some research has used projection-based [6], [7] or voxelization-based methods [8], [9] to convert irregular point clouds into regular representations. PointNet [10] is the first work designed to directly process point clouds, and it uses a point-based method. Its success in point cloud tasks has demonstrated the potential of using deep learning to solve the point cloud registration problem. Most deep learning-based methods for point cloud registration [11], [12], [13], [14], [15], [16] rely on complete point cloud data and registration labels (i.e., ground-truth transformations). Most of these approaches use the correct correspondences between points as
1556-603X ß 2023 IEEE
unique registration quality indicators and require an incalculable number of transformation iterations for the error to converge. Generally, the number of iterations is determined by the expected loss error or the point-to-point distance, and the resulting error metric is sensitive to noise and outliers. Furthermore, supervised learning requires registration labels corresponding to the given point cloud data, demanding additional computation and time. Few studies have focused on correspondence-free or unsupervised methods that skip the search for point-to-point correspondences and move toward achieving superior registration capabilities through global point cloud representations. Motivation. Point correspondence methods always regard correct point pairs and global distances as core metrics and directly search for point-to-point relationships. In addition, it may not be possible to build a network model that fully understands the input point cloud by using only the feature metric. Therefore, this paper attempts to utilize point clouds in a secondary manner. It is assumed that unique latent information may be contained in point clouds, which can be extracted by an encoder-decoder
architecture. On this basis, our goal is to fully exploit the encoding and decoding information and apply it to point cloud registration. As different point clouds have different information after encoding or decoding, this paper uses these two indicators (i.e., features and reconstructions; see Figure 1) as error indicators for point cloud registration. Solution. In this paper, a correspondence-free point cloud registration method is proposed, which is at the forefront of downstream point cloud tasks. Compared with similar works, our work provides the following two enhancements. 1) Point cloud data: The proposed method considers the impacts of both the original and reconstructed point clouds on registration, while most existing methods are designed for original point clouds only. 2) Networks and metrics: Our metrics are based on the differences between features and between reconstructions, and the proposed network is more adaptive, with learnable parameters, compared to similar methods. After identifying a registration method that uses point cloud features and reconstructed point clouds, this paper sheds light on network optimization.
FIGURE 1. The differences between the registrations corresponding to point clouds (top), feature maps (middle), and reconstructed point clouds (bottom). The top part shows that the original point clouds cannot achieve the best registration due to noise and outliers. The bottom part shows that the reconstructed point clouds enable optimal registration, as they reintegrate the intermediate point cloud features and eliminate the feature differences after reconstruction.
Two summarized open questions and their possible answers are proposed for verification:
Q1) How can the network automatically learn better representations from its input to the transformed abstract high-level feature space?
Q2) How can the network build up the solution space and differentiate among all the attention information from the feature space to the output transformation matrix?
A1) Is it possible to generate key information interactively, rather than simply connecting two point cloud features?
A2) Is it possible to make the network treat rotation and translation as two different solution spaces, enabling it to pay more attention to different attention information?
Therefore, CFNet is proposed to handle the above registration issues; this network contains two main modules for robust and accurate registration. For Q1&A1, a novel encoder is designed to extract interactive features from the input point clouds. For Q2&A2, a dual branch architecture is proposed to separately estimate the rotation and translation components by connecting interactive features in a specific way. In addition, a decoder consisting of fully connected layers is designed to reconstruct the point clouds and estimate the reconstruction error. As shown in Figure 1, CFNet mainly evaluates the differences between the feature maps and those between the reconstructed point clouds. Furthermore, it can estimate the final transformation matrix between two input point clouds by minimizing other relevant differences. Note that CFNet can perform registration by modifying the loss functions to produce an improved unsupervised model. In summary, our main contributions are as follows:
❏ This paper summarizes the existing problems presented by deep learning-based point cloud registration and works on point cloud utilization and network optimization.
❏ This paper develops two strategies for addressing the above problems: a feature interaction module that enables the network to adaptively capture and interact the features between point clouds, as the inputs are paired, and a
NOVEMBER 2023 | IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE
67
dual branch module that enables the network to pay different levels of attention to different outputs, as the rotations and translations of rigid transformations are uncorrelated.
❏ A correspondence-free point cloud registration framework (CFNet) is proposed. The framework primarily relies on point cloud features and reconstructions as error metrics, and extensive experiments comprehensively evaluate its effectiveness.
II. Related Work

Feature interaction. Some previous works have proposed utilizing information interaction during point cloud feature extraction. Point Transformer [17] introduces a transformer that uses a self-attention operator to sense the feature correlations between points within the input point cloud. DCP-v2 [15] enables information interaction between inputs via an attention module. PRNet [18] iteratively applies DCP-v2 and uses key point detection. These methods possess some limitations, such as the fact that their 3D point features are computed only from local geometric information. Later, FINet [19] senses information associations between the input point clouds at multiple stages and constructs global-local feature interactions. Our design, which employs an innovative self-learning parameter, further improves point cloud understanding by making feature interaction easier and more efficient.

Optimization-based registration. Most optimization-based registration methods [20], [21], [22], [23], [24] require a good initial transformation and converge to a local minimum around the initial points. The most influential approach is the iterative closest point (ICP) algorithm [20], which alternately finds the closest points between the source and target point clouds as matches and, using the point-to-point $\ell_2$-norm as its metric, solves in closed form for the rigid transformation with the minimal error between the matched points, repeating these steps until the result converges. ICP-like methods [21], [22], [23] are prone to failure when given noisy point pairs or incorrect initializations due to the potential instability of the point-to-point technique. Interestingly, the classic optimization-based registration methods can be extended to different domains; readers are encouraged to consult and compare the literature in references [25], [26].

Correspondence-based registration. Such methods are mainly suitable for partial-to-partial point cloud registration, where the network estimates only the corresponding region and regresses the transformation parameters. DCP-v2 [15] uses DGCNN [27] to extract features and generates soft matching pairs using a cross-attention module. RPMNet [28] is a partial point cloud registration method that integrates the Sinkhorn algorithm into its network to obtain soft correspondences from local features. IDAM [29] combines geometric and distance features in an iterative process for point matching. Although these methods have achieved significant performance, most of them rely on point-to-point correspondences and are still sensitive to noise and partial point cloud transformations.

Correspondence-free registration. Generally, such methods first extract global features from the source and target point clouds and then minimize the difference between the global features of the two input point clouds to regress the transformation parameters. These methods do not require point-to-point correspondences and are robust to density variations. PointNetLK [11] unfolds the modified Lucas & Kanade (LK) algorithm into a recurrent neural network and integrates it into PointNet. PCRNet [30] compares the features extracted by PointNet from the source and target point clouds to find the transformation that aligns them. DGR [31] proposes a differentiable framework for pairwise registration of real-world 3D scans. SegReg [32] softly segments the paired point clouds into a discrete number of geometric partitions and then achieves registration by iteratively using the IC-LK algorithm to minimize the distance between the feature descriptors of the corresponding partitions. Zhu et al. [33] learn point cloud embeddings in a feature space that preserves the SO(3)-equivariance property and perform correspondence-free registration by combining an equivariant feature learning method with an implicit shape model. In contrast, our proposed method focuses on network optimization, feature generation, and reconstruction during point cloud transformation.

Semi-supervised/unsupervised registration. Additionally, semi-supervised and unsupervised methods [34], [35], [36], [37], [38], [39] have been proposed for point cloud registration, and they have achieved results comparable to those obtained with ground-truth supervision. FMR [39] adopts a semi-supervised point cloud registration method by abandoning the estimation of geometric errors and minimizing the projection errors of the feature metrics. CorrNet3D [38] facilitates the learning of dense correspondences by adjusting 3D shapes through deformed reconstructions. CEMNet [37] models point cloud registration as a Markov decision process to heuristically search for optimal transforms by gradually narrowing the transform region of interest through a trial-and-error approach. RIENet [36] captures the discriminative geometric differences between the source neighborhood and the corresponding pseudo-target neighborhood for robust unsupervised point cloud registration. Inspired by these approaches, our work exploits the properties of point cloud features and reconstructions and incorporates the feature interaction and dual branch strategies into the network training process. In short, CFNet is simple and efficient. It can choose whether ground truths are required as a supervisory condition; it is suitable for both supervised and unsupervised learning scenarios and is comparable to the state-of-the-art methods.
III. Method

In this section, a correspondence-free point cloud registration framework called CFNet is proposed. An introduction to point cloud registration is given in Section III-A. The interpretable registration framework is presented in Section III-B. This is followed by more details of the network framework, including its feature interaction encoder (Section III-C), dual branch regression module (Section III-D), fully connected decoder (Section III-E), and loss functions (Section III-F).

A. Preliminaries: Point Cloud Registration
Point cloud registration is used to find the rotation and translation matrices (rigid-body or Euclidean transformations) between input point clouds and to transform the source point cloud into the same coordinate system as that of the target point cloud. A point cloud is a set containing a large number of points sampled from the surface of a scene or object. The network inputs are two unordered point clouds $P = \{p_i\}_{i=1}^{m}$ and $Q = \{q_i\}_{i=1}^{n}$ with sizes $m$ and $n$, which are called the source point cloud and target point cloud, respectively. In the most ideal case, the two point sets are simply considered the same, i.e., $m = n$. However, anomalies such as partial overlap, noise, and registration errors sometimes occur. Through the rigid transform $T = \{R \in SO(3),\ t \in \mathbb{R}^3\}$, the source point cloud is transformed into $\tilde{P} = \{\tilde{p}_i\}_{i=1}^{m}$, where

$$\tilde{p}_i = R p_i + t. \qquad (1)$$

The registration error between a point in the source point cloud and its corresponding point in the target point cloud is often considered an optimizable metric. The best $R^{*}$ and $t^{*}$ can be described as

$$(R^{*}, t^{*}) = \underset{(R, t)}{\arg\min} \sum_{\tilde{p}_i \in \tilde{P}} D\big(\tilde{p}_i, q_{s(i)}\big), \qquad (2)$$

where $D(\cdot, \cdot)$ calculates the Euclidean spatial distance between two points, $s(i)$ represents the index of the point in $Q$ corresponding to point $p_i \in P$, and Eq. (2) only accumulates the distance from each point in the source point cloud to its corresponding point in the target point cloud. Considering the distances in both directions between source and target, the Chamfer distance is used to measure the deviation between the two point clouds. The error can be described as

$$h(\tilde{P}, Q) = \sum_{\tilde{p}_i \in \tilde{P}} D\big(\tilde{p}_i, q_{s(i)}\big) + \sum_{q_j \in Q} D\big(q_j, \tilde{p}_{r(j)}\big), \qquad (3)$$

where $r(j)$ represents the index of the point in $\tilde{P}$ corresponding to $q_j \in Q$. The choices of $s(i)$ and $r(j)$ directly affect the quality of registration. In ICP-like methods, $q_{s(i)}$ is selected as the closest point to $\tilde{p}_i$. This closest-point correspondence is usually incorrect when there is a large misregistration or low overlap between the two point clouds. Therefore, it is desirable to guide the registration process with registration errors that do not need to consider corresponding points.
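To make the correspondence-free error concrete, the Chamfer distance (here in its normalized form, cf. Eq. (8) in Section III-F) can be written in a few lines of PyTorch. This is a minimal illustrative sketch with our own tensor shapes and names, not the authors' released code:

```python
import torch

def chamfer_distance(src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
    """Normalized symmetric Chamfer distance between two point clouds.

    src: (m, 3) points of the transformed source cloud.
    tgt: (n, 3) points of the target cloud.
    Every point is matched to its nearest neighbor in the other cloud,
    so no explicit correspondences s(i), r(j) are needed.
    """
    d2 = torch.cdist(src, tgt) ** 2        # (m, n) squared distances
    return d2.min(dim=1).values.mean() + d2.min(dim=0).values.mean()

# toy usage with two random clouds
print(chamfer_distance(torch.rand(1024, 3), torch.rand(1024, 3)))
```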
B. CFNet Framework

Our core idea is a registration framework that integrates the advantages of classic nonlinear algorithms and learning techniques. Inspired by previous work [39], CFNet introduces an innovative network architecture and loss functions to further improve registration performance, as shown in Figure 2. First, the encoder extracts the features of the two input point clouds with its feature interaction module. Second, the features are passed to the subsequent processing modules. Multiple submodules are available for handling the following tasks.
FIGURE 2. The proposed framework. CFNet is composed of a feature interaction encoder, a fully connected convolutional decoder, and a dual branch regression layer. First, the global features of the source point cloud $P$ and the target point cloud $Q$ are obtained after max pooling in the feature interaction encoder. Then, the decoder uses the global features as inputs to reconstruct the point clouds $P'$ and $Q'$. During the network training process, the dual branch regression network takes the concatenated global features of the source point cloud and the target point cloud as inputs and outputs a 4-dimensional vector (representing a rotation component) and a 3-dimensional vector (representing a translation component). After performing $n$ iterations, the poses from each iteration are combined to obtain the final transformation $G_{est}$ of the source point cloud.
1) Designing the decoder to train the feature extraction module in an unsupervised manner.
2) Estimating and reconstructing point clouds by learning interactive features.
3) Estimating a transformation by learning the quaternion and significant point coordinate difference.
4) Calculating the feature projection error to measure the difference between the two input features.
5) Calculating the reconstructed point cloud error to measure the difference between the two point clouds.
6) Calculating the transformation error to measure the difference between the estimated transformation and the true transformation.
The proposed CFNet contains three key parts: an encoder, a decoder, and a dual branch regression module. They extract features, reconstruct the point clouds, and output the transformation of the point cloud, respectively; one iteration of this pipeline is sketched below. More details about these components are provided in Sections III-C to III-F.
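As a rough, hypothetical skeleton of one such iteration (the `model` callable and its return convention are placeholders, not the released implementation), the per-step poses are composed into the final estimate, cf. Eq. (7) in Section III-E:

```python
import torch

def register(model, src, tgt, n_iter=2):
    """Iterative registration loop used by CFNet-style methods.

    model(src, tgt) -> (R, t): a one-step pose predictor (stub here).
    """
    R_est, t_est = torch.eye(3), torch.zeros(3)
    for _ in range(n_iter):
        R, t = model(src, tgt)                    # pose of this iteration
        src = src @ R.T + t                       # warp source for the next pass
        R_est, t_est = R @ R_est, R @ t_est + t   # compose G(i) onto G_est
    return R_est, t_est

# stub predictor: identity step, just to show the calling convention
identity = lambda s, q: (torch.eye(3), torch.zeros(3))
print(register(identity, torch.rand(8, 3), torch.rand(8, 3)))
```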
C. Feature Interaction Encoder

To promote the interaction and forward propagation of information between the source and target point clouds, two pointwise feature interaction modules are injected into the encoder. In each iteration, the source point cloud $P$ is first transformed into a transformed point cloud $\tilde{P}$ through the previous transformation. The encoder takes the transformed source point cloud $\tilde{P}$ and the target point cloud $Q$ as inputs and generates the following global features:

$$F_y(x) = \max\!\left(\mathrm{cat}\!\left[\left\{ f_y^{k}(x) \mid k = 1 \ldots K \right\}\right]\right), \quad \mathrm{s.t.}\ y \in \{r, t\},\ x \in \{\tilde{P}, Q\}, \qquad (4)$$

where the subscripts $r$ and $t$ represent the rotation and translation, respectively. The superscript $k$ indexes the point-pair feature $f$ output by the $k$-th convolution block, and there are $K$ blocks in total. $\max(\cdot)$ represents max pooling, which maximally pools the connected features to obtain the global feature $F$. $\mathrm{cat}[\cdot]$ represents concatenation, whose role is to connect pointwise features. Note that $\tilde{P}$ and $Q$ use shared weights during the encoding process. Except for the output layer, batch normalization with ReLU activations is used for all layers.

As shown in Figure 3, the feature interaction encoding module (FI) aims to learn a feature extraction function $F(P, Q)$, which can generate representative feature interaction vectors for a pair of input point clouds. For the FI network, interactive features carrying rotation or translation information are set up to facilitate the association of features. In the meantime, FI separately extracts the features $F(P)$ and $F(Q)$ during the middle stage, and the pointwise features output by each convolution block in the multilayer perceptron (MLP) are extracted. Note that the fused features $F(P, Q)$ are obtained after feature interaction, serving as the common input for the next stage. The whole MLP uses a (64 → 64 → 64 → 128 → 1024) transformation, which introduces a feature interaction module in the second and fourth stages and a transformation module in the middle dimension, greatly facilitating the mutual perception of features.

FIGURE 3. An illustration of the feature interaction encoder. The given point clouds are fed into a network with shared weights for training. Assuming that there is a feature matrix that encapsulates the features with a certain representation, an interaction function (FI: $F(P, Q) = a \cdot F(P) + b \cdot F(Q)$) is introduced on this basis to achieve the effect of mutual learning. The generated results are stored in an interaction matrix to complete the task of feature interaction.
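A minimal sketch of the interaction function from Figure 3, assuming the self-learning parameters $a$ and $b$ are scalar learnable parameters (our reading; the released code may parameterize them differently):

```python
import torch
import torch.nn as nn

class FeatureInteraction(nn.Module):
    """Pointwise feature interaction, FI: F(P, Q) = a*F(P) + b*F(Q)."""

    def __init__(self):
        super().__init__()
        # learnable mixing coefficients (assumed scalar in this sketch)
        self.a = nn.Parameter(torch.tensor(0.5))
        self.b = nn.Parameter(torch.tensor(0.5))

    def forward(self, feat_p, feat_q):
        # feat_p, feat_q: (B, C, N) pointwise feature maps with the same
        # number of points N (an assumption of this sketch)
        return self.a * feat_p + self.b * feat_q

fi = FeatureInteraction()
fp, fq = torch.rand(2, 64, 1024), torch.rand(2, 64, 1024)
print(fi(fp, fq).shape)   # torch.Size([2, 64, 1024])
```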
D. Dual Branch Regression

Since translation belongs to Euclidean space and has little correlation with the quaternion space, it is not suitable for the rotational attention and translational attention features obtained with shared weights during network learning to be handled equally. To solve this problem, a dual branch strategy is proposed to transform the extracted features separately, as illustrated in Figure 4. Specifically, after feature interaction encoding, a dual branch regression network is used to regress the rotation and translation parameters separately. After fusing the unique features $F(\tilde{P})$ and $F(Q)$, fully connected layers are stacked to generate the hybrid features $F_r$ and $F_t$ for rotation and translation in the feature interaction module. The rotation regression branch takes all global features as inputs and generates a 4D vector, which represents the 3D rotation $R$ in the form of a quaternion, $q \in \mathbb{R}^4$. Furthermore, inspired by [19], instead of directly regressing the translation vector $t \in \mathbb{R}^3$, the proposed method generates two 3D vectors with the translation regression branch, representing the coordinates of the two significant points of the source and target point clouds, and then computes the difference between them as $t$. In each iteration, the transformation $\{q, t\}$ is

$$q = f_r\big(\mathrm{cat}[F_r(\tilde{P}), F_r(Q), F_t(\tilde{P}), F_t(Q)]\big), \qquad (5a)$$
$$t = C_Q - C_{\tilde{P}}, \qquad (5b)$$

where

$$C_{\tilde{P}} = f_t\big(\mathrm{cat}[F_r(\tilde{P}), F_t(\tilde{P}), F_t(Q)]\big), \qquad (6a)$$
$$C_Q = f_t\big(\mathrm{cat}[F_r(Q), F_t(Q), F_t(\tilde{P})]\big). \qquad (6b)$$

The functions $f_r$ and $f_t$ represent the rotation and translation networks and perform the transformations (256×4 → 512 → 128 → 64 → 4; $q \in \mathbb{R}^4$) and (256×3 → 512 → 128 → 64 → 3; $t \in \mathbb{R}^3$), respectively. Furthermore, the vectors $C_{\tilde{P}}$ and $C_Q$ represent the salient point coordinates of the transformed source point cloud and the target point cloud, respectively.

FIGURE 4. An illustration of the dual branch regression module. After passing the encoded features $F(P)$ and $F(Q)$ through a decoder consisting of fully connected layers, the newly generated features exist in an embedding space. Then, a dual branch strategy is introduced to feed different combinations of features into the rotation and translation branch networks (i.e., $f_r$ and $f_t$) and finally estimate the quaternions and the significant point coordinate differences. R: rotation; t: translation.
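Eqs. (5)-(6) translate into two small MLP heads. The sketch below assumes 256-dimensional branch features so that the input widths match the 256×4 and 256×3 transformations quoted above; it is our illustrative reconstruction, not the authors' code:

```python
import torch
import torch.nn as nn

def mlp(sizes):
    """Stack of Linear+ReLU layers ending in a plain Linear."""
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

class DualBranch(nn.Module):
    """Separate rotation/translation regression, cf. Eqs. (5)-(6)."""

    def __init__(self, d=256):
        super().__init__()
        self.f_r = mlp([4 * d, 512, 128, 64, 4])   # quaternion head
        self.f_t = mlp([3 * d, 512, 128, 64, 3])   # salient-point head

    def forward(self, Fr_p, Fr_q, Ft_p, Ft_q):     # (B, d) each
        q = self.f_r(torch.cat([Fr_p, Fr_q, Ft_p, Ft_q], dim=1))
        q = q / q.norm(dim=1, keepdim=True)        # normalize to a unit quaternion
        c_p = self.f_t(torch.cat([Fr_p, Ft_p, Ft_q], dim=1))   # Eq. (6a)
        c_q = self.f_t(torch.cat([Fr_q, Ft_q, Ft_p], dim=1))   # Eq. (6b)
        return q, c_q - c_p                        # t = C_Q - C_P~, Eq. (5b)

db = DualBranch()
feats = [torch.rand(2, 256) for _ in range(4)]
q, t = db(*feats)
print(q.shape, t.shape)                            # (2, 4) (2, 3)
```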
E. Fully Connected Decoder

FIGURE 5. The proposed decoder architectures: (a) a fully connected decoder, (b) a deconvolution-based decoder, and (c) a hierarchical fully connected decoder.
In contrast to the function of the encoder, a decoder consisting of fully connected layers is used to reconstruct the input. Overall, the encoder generates different features for the two point clouds $P$ and $Q$, and the decoder restores the different features back to the corresponding point cloud copies. This design guarantees the success of unsupervised learning when training a salient feature extractor for registration problems and provides good initial conditions for point cloud registration. In addition, to verify the effects of the decoder on the encoder's ability to extract features and on the final registration quality, three decoder networks are utilized: a fully connected decoder, a deconvolution-based decoder, and a hierarchical fully connected decoder, as shown in Figure 5.

Specifically, after the encoder module generates the salient features, the decoder module restores the features back to the point clouds. Different from the traditional reconstruction process, the proposed CFNet performs different tasks on the input source point cloud and target point cloud during the recovery process. The reconstructed target point cloud only needs to be restored once, which is helpful for the subsequent transformation estimation process, while the reconstructed source point cloud needs to be restored multiple times by iterating the current source point cloud and the estimated transformation. For each iteration $i$, a transformation $G^{(i)}$ is applied to the source point cloud, and the transformed source point cloud and the target point cloud are used as the new inputs of CFNet. After $n$ iterations, all the poses from each iteration are combined to obtain the overall transformation $G_{est} \in SE(3)$ between the original source point cloud and the target point cloud:

$$G_{est} = G^{(n)} \cdot G^{(n-1)} \cdots G^{(1)}. \qquad (7)$$

Theoretically, the greater the number of iterations $n$, the more accurate the registration is within the error-aware range; $n$ is 8 in PCRNet [30] and 10 in PointNetLK [11], and $n$ is usually set to be greater than 10 in ICP-like methods. Experiments show that the registration results of the proposed method reach the state-of-the-art level with only 2 iterations, which reflects the strong learning ability of CFNet and its reconstruction effectiveness.
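A minimal fully connected decoder in the spirit of Figure 5(a) might look as follows; the hidden widths are our assumption, as the text only specifies that the decoder is a stack of fully connected layers:

```python
import torch
import torch.nn as nn

class FCDecoder(nn.Module):
    """Fully connected decoder: global feature -> reconstructed cloud."""

    def __init__(self, feat_dim=1024, n_points=1024):
        super().__init__()
        self.n_points = n_points
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, n_points * 3),   # flattened xyz coordinates
        )

    def forward(self, f):                    # f: (B, feat_dim)
        return self.net(f).view(-1, self.n_points, 3)

dec = FCDecoder()
print(dec(torch.rand(2, 1024)).shape)        # torch.Size([2, 1024, 3])
```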
F. Loss Function
The key to the network training process is for the encoder to extract attentional features for rotation and translation and for the decoder to use these extracted features to reconstruct the point clouds. The following losses are optionally implemented.

1) Reconstruction Loss
The objective of the reconstruction loss is to calculate the difference between the source point cloud and the reconstructed source point cloud and the difference between the target point cloud and the reconstructed target point cloud. The Chamfer distance

$$d_{CD}(P, Q) = \frac{1}{|P|} \sum_{p \in P} \min_{q \in Q} \|p - q\|_2^2 + \frac{1}{|Q|} \sum_{q \in Q} \min_{p \in P} \|q - p\|_2^2 \qquad (8)$$

is used to evaluate the reconstruction loss function

$$loss_{re} = d_{CD}(\tilde{P}, P') + d_{CD}(Q, Q'), \qquad (9)$$

where $P'$ and $Q'$ represent the reconstructed source point cloud and the reconstructed target point cloud, respectively, and $P$ and $Q$ represent the two initial input point clouds. The first term of Eq. (8) represents the normalized sum of the minimum distances from the points $p \in P$ to $Q$, and the second term represents the normalized sum of the minimum distances from the points $q \in Q$ to $P$. Different from Eq. (3), the two points with the smallest distance in the two point clouds are not necessarily corresponding points. Here, another consideration is whether to use $loss_{re} = d_{CD}(P', Q')$ instead. After multiple experiments, it is found that the former function is better, mainly because both $P'$ and $Q'$ are obtained after reconstruction from their corresponding features, and the feature metric difference is already considered in the feature loss.

2) Feature Loss
The objective of the feature loss is to calculate the difference between the features extracted from the source point cloud and those extracted from the target point cloud. The root mean square error (RMSE) indicator is used to evaluate the feature loss function as follows:

$$loss_{fea} = d_{RMSE}(F_{\tilde{P}}, F_Q) = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( F_{\tilde{P}}^{i} - F_{Q}^{i} \right)^2}, \qquad (10)$$

where $F_{\tilde{P}}$ and $F_Q$ represent the features of the transformed source point cloud and the target point cloud, respectively, and $N$ represents the dimensionality of the features, which is set to 1024 in the encoder.

3) Transformation Loss
The objective of the transformation loss is to calculate the difference between the estimated transformation $\{R, t\}$ and the ground-truth transformation $\{\bar{R}, \bar{t}\}$ from the source point cloud to the target point cloud, with the chordal distance between $R$ and $\bar{R}$ evaluated by the Frobenius norm of the rotation matrices. The angle error can be calculated as $\theta = \arccos\frac{\mathrm{Tr}(\bar{R}^{T} R) - 1}{2}$, and the translation error is calculated as the Euclidean distance between $t$ and $\bar{t}$. To facilitate the calculation of the loss function, the estimated transformation matrix is set as $T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$, and the transformation loss function is

$$loss_{gt} = \left\| T - \bar{T} \right\|_F. \qquad (11)$$

The final loss function of the training process is

$$loss = loss_{st} + loss_{gt} + \lambda_1 \, loss_{re} + \lambda_2 \, loss_{fea}, \qquad (12)$$

where $loss_{st}$ represents the Chamfer distance between the transformed source point cloud and the target point cloud, expressed as $loss_{st} = d_{CD}(\tilde{P}, Q)$; this is a classic distance metric used to measure the effect of registration.

To summarize, our method is end-to-end trainable and is supervised by four losses. The two weight coefficients $\lambda_1$ and $\lambda_2$ are used for $loss_{re}$ and $loss_{fea}$, respectively, while the other weights are set to 1. For unsupervised training, $loss_{gt}$ is not used.
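A hedged sketch of how the loss terms of Eqs. (10)-(12) can be combined in PyTorch, with λ1 = 0.001 and λ2 = 0.01 as selected in the ablation of Section V-D (function names are ours):

```python
import torch

def rmse_feature_loss(f_src, f_tgt):
    """Eq. (10): RMSE between the two global feature vectors."""
    return torch.sqrt(((f_src - f_tgt) ** 2).mean())

def transform_loss(T_est, T_gt):
    """Eq. (11): Frobenius norm between 4x4 pose matrices."""
    return torch.linalg.norm(T_est - T_gt)

def total_loss(loss_st, loss_re, loss_fea, loss_gt=None,
               lam1=0.001, lam2=0.01):
    """Eq. (12); loss_gt is dropped for unsupervised training."""
    loss = loss_st + lam1 * loss_re + lam2 * loss_fea
    return loss if loss_gt is None else loss + loss_gt

# toy usage with scalar stand-ins for the individual terms
print(total_loss(torch.tensor(0.1), torch.tensor(0.2), torch.tensor(0.3),
                 loss_gt=torch.tensor(0.05)))
```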
IV. Experiments

The proposed CFNet is compared with ICP [20], FGR [40], FPFH [41], and the recent learning-based PointNetLK [11], FMR [39], PCRNet [30], PRNet [18], DCP-v2 [15], IDAM-GNN [29], RPMNet [28], and OMNet [42] methods. ICP, FGR, and FPFH are implemented in Intel Open3D [43]. For the other learning-based methods, the results reported in each paper under the same experimental settings are cited. The unit of the rotation error in all experiments is degrees (°), and the unit of the translation error on the real-world datasets is meters (m). Following the metrics provided by [15], [18], the experiments uniformly use the rotation R and translation t between the ground truth and the predicted transformation to compute the root mean square error (RMSE) and the mean absolute error (MAE).

Experimental setup. Transformations with Euler angles in the range [−45°, 45°] and translations in the range [−0.5, 0.5] are randomly selected, and the source point cloud is generated by applying these rigid transformations to the target point cloud. The point cloud generation procedure is the same for training and testing. First, the synthetic ModelNet40 [44] dataset is used to evaluate the effectiveness of CFNet; it contains 12,311 synthetic CAD object models belonging to 40 different categories. Following the experimental settings of PCRNet, the farthest point sampling algorithm is used to sample 1024 points on the surface of each model and obtain a complete point cloud. To evaluate the performance of CFNet on different levels of specific object data, the following four different types of datasets are used to train the network: (1) full models for full categories; (2) partial models for full categories; (3) full models for half of the categories; and (4) full models for a single category.
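The pair-generation protocol can be reproduced along the following lines (a sketch using SciPy; the variable names and the Euler-angle convention are our assumptions):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def make_pair(target, max_deg=45.0, max_t=0.5, rng=np.random):
    """Create a source cloud by rigidly perturbing the target.

    target: (n, 3) array. Euler angles are drawn from [-45, 45] degrees
    and the translation from [-0.5, 0.5], mirroring the setup above.
    """
    angles = rng.uniform(-max_deg, max_deg, size=3)
    R = Rotation.from_euler('zyx', angles, degrees=True).as_matrix()
    t = rng.uniform(-max_t, max_t, size=3)
    return target @ R.T + t, R, t

src, R, t = make_pair(np.random.rand(1024, 3))
print(src.shape)   # (1024, 3)
```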
A. Implementation Details
For our point cloud registration network (CFNet), the Adam [45] optimizer is used for 200 epochs; the learning rate starts from 0.001 and decays by a factor of 0.5 after every 20 epochs. The training batch size is 32, and the test batch size is 16. In addition, the training data in each batch are randomly selected, and the overall loss follows Eq. (12). The project is implemented with PyTorch, and all experiments are performed on an NVIDIA GeForce RTX 3090 GPU and an Intel i9 3.30-GHz CPU. During the training process, CFNet uses two iterations, as the achieved improvement is not significant beyond two iterations. This also highlights the effectiveness of our method, which achieves optimal results in a very small number of iterations. In some experiments, the training data are contaminated by Gaussian noise, which will be discussed in detail in Section IV-E.
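The reported optimization schedule corresponds to a standard PyTorch setup such as the following sketch (the model stand-in and loop body are placeholders):

```python
import torch
from torch import nn, optim

model = nn.Linear(3, 3)   # illustrative stand-in for CFNet
optimizer = optim.Adam(model.parameters(), lr=0.001)
# halve the learning rate every 20 epochs, as reported above
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

for epoch in range(200):
    # ... one pass over the training set (batch size 32) goes here ...
    scheduler.step()
```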
TABLE I Results obtained for the testing point clouds from the full models on ModelNet40 in the full-category setting [44]. Bold indicates the best result and underline indicates the second-best result.

METHOD          |       (a) UNSEEN MODELS        |     (b) UNSEEN CATEGORIES      |      (c) GAUSSIAN NOISE
                | RMSE(R) MAE(R) RMSE(t) MAE(t)  | RMSE(R) MAE(R) RMSE(t) MAE(t)  | RMSE(R) MAE(R) RMSE(t) MAE(t)
ICP [20]        | 12.353  5.617  0.1668  0.0695  | 13.298  6.241  0.1834  0.0789  | 13.828  7.117  0.2902  0.0878
FGR [40]        |  7.381  5.381  0.0117  0.0062  |  7.344  5.363  0.0131  0.0068  |  7.387  5.356  0.0105  0.0058
FPFH [41]       |  3.381  2.961  0.0152  0.0132  |  3.213  2.821  0.0158  0.0134  |  5.067  4.196  0.0215  0.0188
PointNetLK [11] |  9.259  1.253  0.5764  0.4763  | 11.045  2.628  0.5761  0.4764  | 12.177  2.865  0.6673  0.5374
FMR [39]        |  6.852  1.596  0.5851  0.4886  |  7.579  1.803  0.5667  0.4750  |  7.633  2.295  0.5797  0.4844
PCRNet [30]     |  4.220  3.015  0.0458  0.0484  |  4.895  2.885  0.0412  0.0469  |  4.524  3.257  0.0468  0.0505
PRNet [18]      |  3.205  1.415  0.0168  0.0124  |  4.995  2.315  0.0218  0.0171  |  4.328  2.056  0.0170  0.0123
DCP-v2 [15]     |  2.926  1.273  0.0214  0.0123  |  4.258  3.207  0.0236  0.0127  |  3.083  1.277  0.0105  0.0091
IDAM-GNN [29]   |  2.838  0.742  0.0128  0.0034  |  2.462  0.715  0.0251  0.0047  |  3.671  1.026  0.0236  0.0061
RPMNet [28]     |  1.634  0.287  0.0105  0.0025  |  2.484  0.539  0.0169  0.0045  |  2.217  0.570  0.0161  0.0048
OMNet [42]      |  1.133  0.911  0.0097  0.0056  |  2.094  1.193  0.0206  0.0146  |  1.583  1.162  0.0056  0.0024
CFNet(ours)     |  1.106  1.006  0.0036  0.0024  |  1.272  1.024  0.0032  0.0027  |  1.348  1.234  0.0033  0.0028
B. Full Categories for Training & Testing
In the first experiment, all categories of point clouds in ModelNet40 are used.
A total of 9843/2468 samples are used for training and testing, respectively. Table I(a) evaluates the performance of our method and its counterparts in this experiment (ICP and PointNetLK almost fail). Note that the unsupervised registration methods [36], [37] outperform most supervised registration methods in terms of most metrics, and our method can theoretically further stimulate their potential. From the experimental results, CFNet is superior to the other methods under most performance indicators, showing strong performance. To show the effect of CFNet more intuitively, qualitative analyses of the visual registration results produced by our method and some other methods are shown in Figure 6.

FIGURE 6. Registration results (cyan: source point cloud, magenta: target point cloud, and gray: transformed point cloud). The first two are correspondence- or feature descriptor-based methods, and the last three are deep learning-based methods; the target point cloud is the same, but the initial poses of the source point clouds differ to distinguish the registration effects of the different methods.

As shown in Figure 7(a), when our network starts training, the root mean square errors of rotation and translation are approximately 25° and 0.7 m, respectively, and then the errors begin to decrease gradually. When the number of epochs approaches 50, the training results tend to stabilize. During the subsequent training process, although the rotation error exhibits a few fluctuations, it maintains overall outstanding performance.

FIGURE 7. Effect of point cloud registration on the test dataset during training. (a): Unseen models, (b): Unseen categories, (c): Unseen models with noise.

Network load. In addition, to evaluate the complexity of CFNet, it is compared with other network models, as shown in Table II. IDAM with a graph neural network (GNN) requires the fewest parameters. CFNet runs with the lowest FLOPs (floating point operations) cost even though it has a larger number of parameters. On the one hand, more parameters are needed due to our feature interaction and dual branch modules; on the other hand, fewer FLOPs are consumed because our network is MLP-based and does not require computationally intensive operations such as local neighborhood construction. Overall, our method is lightweight and easy to deploy in point cloud applications.
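The #Params column of Table II can be reproduced for any PyTorch model with a one-liner like the sketch below (FLOPs, by contrast, are usually measured with a separate profiling tool):

```python
from torch import nn

def count_params_m(model: nn.Module) -> float:
    """Trainable parameter count in millions, as in the #Params row of Table II."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

print(count_params_m(nn.Linear(1024, 1024)))   # ~1.05
```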
Partial models. To further validate supervised learning, a limited amount of labeled training data is provided on ModelNet40 under different conditions. Specifically, CFNet randomly samples the training data at different rates while ensuring that at least one sample is selected for each category. Next, CFNet is retrained on these limited samples under supervision, and the performance of the resulting models is evaluated on the entire test set. Table III summarizes the results measured at the different data rates (1%, 5%, 10%, 50%, and 80%). PointNetLK and PCRNet are chosen for comparison purposes. CFNet can better facilitate registration even with fewer training samples.

C. 20 Categories For Training & 20 Categories For Testing

To test the versatility of CFNet in scenarios with different numbers of categories, ModelNet40 is evenly divided into training and testing data by category. CFNet is trained on the first 20 categories and then tested on the remaining 20 categories (called unseen categories). ICP, FGR, and FPFH are also tested on the reserved categories. As shown in Table I(b), the test performance of most methods on unknown categories of point clouds decreases, which is a manifestation of their poor generalization to different point cloud categories. In contrast, the performance of FGR increases. One conjecture is that this is because FGR optimizes individual targets to align surfaces and suppresses mismatching, and its joint global registration is unaffected by the feature descriptors of unknown categories of point clouds.

D. Single Category & Different Models

In addition, experiments are conducted on the different models within a single category. All models for each category of ModelNet40 are divided into training and testing data. This experiment aims to test the registration performance of our model on small datasets. Under noise-free and unknown-category conditions, the training method of the network is the same as that in Section IV-B. As a control, PCRNet is also trained on the 40 single-category datasets, and ICP and FGR are tested on the corresponding categories of the testing dataset.
TABLE II Comparison among the numbers of parameters and computations required by different deep learning-based networks.

MODEL       | PointNetLK [11] | FMR [39] | PCRNet [30] | PRNet [18] | DCP-v2 [15] | IDAM-GNN [29] | RPMNet [28] | OMNet [42] | CFNet(ours)
#Params (M) | 0.1517          | 2.6505   | 1.3932      | 5.7064     | 5.5617      | 0.0879        | 0.8996      | 53.6581    | 10.0458
#FLOPs (G)  | 2.6638          | 5.4283   | 4.2174      | 15.0692    | 13.2236     | 1.5669        | 8.3700      | 35.7712    | 0.4886
TABLE III Results obtained for the testing point clouds from the partial models on ModelNet40 under all categories [44].

METHOD          | RATE | RMSE(R) | MAE(R) | RMSE(t) | MAE(t)
PointNetLK [11] | 1%   | 10.652  | 3.152  | 0.3468  | 0.3286
                | 5%   | 12.045  | 3.258  | 0.4256  | 0.3967
                | 10%  | 11.807  | 3.172  | 0.3894  | 0.3964
                | 50%  | 12.181  | 2.657  | 0.3887  | 0.3823
                | 80%  | 10.603  | 2.472  | 0.3885  | 0.3841
PCRNet [30]     | 1%   | 12.356  | 10.617 | 0.0386  | 0.0486
                | 5%   | 10.486  | 8.175  | 0.0274  | 0.0371
                | 10%  | 8.486   | 6.834  | 0.0295  | 0.0191
                | 50%  | 5.616   | 4.592  | 0.0274  | 0.0337
                | 80%  | 4.437   | 3.865  | 0.0527  | 0.0482
CFNet(ours)     | 1%   | 5.613   | 4.373  | 0.0128  | 0.0141
                | 5%   | 3.846   | 3.812  | 0.0102  | 0.0092
                | 10%  | 2.453   | 1.698  | 0.0086  | 0.0074
                | 50%  | 1.501   | 1.076  | 0.0042  | 0.0038
                | 80%  | 1.143   | 0.934  | 0.0036  | 0.0034
FIGURE 8. Single-category registration results. The red horizontal and vertical lines represent the RMSE(t) and RMSE(R) achieved for the full dataset, respectively.
As shown in Figure 8, the training effects of CFNet and PCRNet [30] under a single data category are not as good as those observed under all data categories. Both the rotation and translation components degrade to some extent, and the overall registration effect is reduced, which deserves attention. After repeated experiments, our insight is that this degradation is caused by the small and homogeneous training data, and the Euclidean translation space is more susceptible to it than the quaternion rotation space. This is quite different from our previous speculation that a single category of data might yield an excellent registration effect due to "overfitting" during training. In addition, the optimization-based methods are independent of the single-category training data and calculate the transformation error directly from a handcrafted distance formula or descriptor; thus, the values produced for each category are similar to or even better than those of the full models. In short, our method struggles to estimate the salient points of two point clouds in the translation branch under this condition. Even so, our network is far ahead of most algorithms in terms of the rotation error metrics.
E. Gaussian Noise
To evaluate the robustness of CFNet to noise, experiments with Gaussian noise are performed. In particular, noise is sampled from the Gaussian distribution $\mathcal{N}(0, \sigma^2)$ for each point in the point cloud, with a mean of 0 and a standard deviation ranging from 0 to 0.05 units, and the noise is clipped to the range [−0.05, 0.05]. In the first experiment conducted in this environment, CFNet uses the dataset described in Section IV-B, and the experiment is repeated several times. As shown in Table I(c), the performance of the traditional methods is worse under noisy conditions, and their anti-interference ability is poor. Due to the powerful feature extraction ability and spatial reconstruction network of CFNet, the performance of the proposed learning-driven method is comparable to that achieved in the noise-free situation. Compared with the other methods, our correspondence-free learning method achieves much better performance and robustness. The effect of CFNet with added noise (σ = 0.01) is shown in Figure 7(c). The overall training behavior is roughly similar to that of noise-free training. Even if abnormal phenomena are observed in a small number of epochs, the results stabilize after training for one quarter of the epochs.
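The corruption protocol corresponds to a simple jitter transform; the sketch below is our re-implementation of the description above:

```python
import torch

def add_gaussian_noise(cloud, sigma=0.01, clip=0.05):
    """Per-point Gaussian jitter N(0, sigma^2), clipped to [-clip, clip]."""
    noise = torch.clamp(sigma * torch.randn_like(cloud), -clip, clip)
    return cloud + noise

noisy = add_gaussian_noise(torch.rand(1024, 3), sigma=0.05)
```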
FIGURE 9. Errors of different noise levels.
TABLE IV Results obtained for the testing point clouds from the full models on real-world datasets.

METHOD          |        (a) 3DMATCH¹            |      (b) KITTI(OBJECT)²        |     (c) KITTI(TRACKING)²
                | RMSE(R) MAE(R)  RMSE(t) MAE(t) | RMSE(R) MAE(R)  RMSE(t) MAE(t) | RMSE(R) MAE(R)  RMSE(t) MAE(t)
ICP [20]        | 18.574  11.526  0.4595  0.3800 | 40.108  22.556  0.4595  0.4288 | 42.656  24.220  0.5997  0.4288
FGR [40]        |  7.002   5.025  0.4549  0.3791 | 24.464  19.844  1.6425  1.1142 | 24.714  20.052  1.8568  1.2559
FPFH [41]       |  3.042   2.563  0.1852  0.1564 | 18.425  16.256  1.3428  1.0546 | 18.727  16.132  1.4212  1.2121
PointNetLK [11] | 19.212   4.350  0.4797  0.4027 | 18.869   4.725  3.7511  1.5217 | 19.204   4.642  4.2017  1.6117
PCRNet [30]     |  5.412   4.182  0.0228  0.0301 |  4.081   3.081  0.0262  0.0252 |  4.651   3.515  0.0342  0.0266
CFNet(ours)     |  2.673   2.018  0.0051  0.0064 |  1.746   1.140  0.0259  0.0204 |  2.015   1.389  0.0374  0.0233

¹ The model is used with the full training data of ModelNet40 [44] (generalization).
² The model is used with the full training data of KITTI [46] (retraining).
To further verify the performance of our model under various noise levels, multiple additional deviations σ ∈ [0.02, 0.05] are set. The results are shown in Figure 9, which supports the conclusion that CFNet is insensitive to noise.

F. Extension to Real-World Registration
1) 3DMatch-Indoor Dataset
To evaluate the generalizability of CFNet to new datasets and unseen domains, an evaluation is performed on a real-world scan dataset (i.e., 3DMatch [47]), which collects data from 62 real-world scenes. Each scene contains an average of 27,127 points, with a minimum of 850 points and a maximum of 197,433 points. CFNet randomly samples 2048 points from each scene for evaluation purposes, using the same type of rigid transformation as that employed in the previous evaluations and applying no additional noise. For ICP, FGR, and FPFH, the experiments are redone on the new dataset, but for the other learning-based methods, the models trained on ModelNet40 are evaluated without any retraining or fine-tuning. As shown in Table IV(a), although CFNet is trained on ModelNet40, it still maintains competitive results on 3DMatch, which demonstrates its excellent capacity to generalize to an unseen real-world scan dataset. Among all the comparison methods, only FGR and FPFH can roughly maintain their performance. The uncertainty of random sampling and the low sampling rate may account for the slight observed error fluctuations. Figure 10 shows the registration results obtained by our method on four different test scenes in 3DMatch.
2) KITTI-Outdoor Dataset
To further test the application scope of CFNet, its usage scenario is changed to an outdoor setting. KITTI [46] is currently the world's largest computer vision algorithm evaluation dataset involving autonomous driving scenarios. Its 3D object detection dataset (7,481 training point clouds and 7,518 testing point clouds) and tracking dataset (8,004 training point clouds and 11,095 testing point clouds) are selected. First, the performance of CFNet is tested on KITTI using the model trained on ModelNet40, but the results are not satisfactory. Therefore, CFNet is retrained, and the same is done for PCRNet.
As shown in Table IV(b) and (c), whether the object detection or tracking dataset is used, the CFNet retrained on KITTI produces equally excellent registration results. For all comparison methods except PCRNet, performance degrades further, probably because only 2048 points are randomly selected from the tens of thousands of points in each point cloud. These results further demonstrate the utility of learning-based registration methods. In Figure 11, qualitative examples of the visual registration results obtained in four scenes are shown.
G. Extension to Partial-to-Partial Registration
To further test the robustness of our method, the experimental setup is extended to partial-to-partial point cloud registration. However, as CFNet is based on a correspondence-free paradigm, it cannot find corresponding regions between partial-to-partial point clouds. Therefore, SACFNet [48], an advanced partial-to-partial point cloud registration method, is chosen as a baseline. In short, our network replaces the posterior parts of SACFNet. Note that registration is performed not on the corresponding regions in the coordinate space but rather in the feature space. As a result, our network is embedded into SACFNet with adaptations, but the basic idea is the same as that of CFNet. The results are shown in Table V. Although SACFNet achieves advanced results on ModelNet40, it still obtains an effective boost with the support of our CFNet. This demonstrates the adaptability of our method in a partial-to-partial registration environment. Following the experimental setup of SACFNet, for each point cloud, the revised version of SACFNet samples a subspace in a random direction and shifts it such that approximately 70% of the points are retained; the source point cloud and target point cloud thus have a 70% overlap rate. In Figure 12, some qualitative registration results produced by our method for several different partial-to-partial point clouds are compared with those of SACFNet.

FIGURE 10. Registration results obtained by our method on four different test scenes in 3DMatch [47] (cyan: source, gold: target).

FIGURE 11. Registration results obtained by our method on four different test scenes in KITTI [46] (the left and right parts are from the object detection and tracking data, respectively).

FIGURE 12. Registration results obtained by SACFNet with our method on partial-to-partial point clouds in ModelNet40 [44].

TABLE V Results obtained for the testing point clouds from partial-to-partial models.

METHOD       | RMSE(R) | MAE(R) | RMSE(t) | MAE(t)
SACFNet [48] | 0.047   | 0.010  | 0.0002  | 0.00006
SACFNet-Ours | 0.035   | 0.006  | 0.0002  | 0.00004

V. Ablation Study

Multiple ablation experiments are performed in this section, where alternatives are chosen for each part (the encoder, the decoder, a single branch or dual branch, and the loss functions) to understand the value of our configuration. All experiments are performed in the same environment as that used for the experiments in Section IV-B.

A. Encoder: Use Feature Interaction or Not?

The first experiment explores whether FIEncoder is more valuable for registration than the simple PointNet, owing to its capacity to encode additional interactive features. As a reference, a PointNet-like encoder is designed; it consists of MLPs with sizes of (64, 64, 64, 128, 1024) and a max-pooling function. In fact, PointNet only learns global descriptors for the entire point cloud, while FIEncoder also learns the geometric features of paired point clouds through feature interaction. In Table VI, the model using FIEncoder always performs better than the model using the PointNet encoder.
TABLE VI Ablation study: Use PointNet or FIEncoder? Use FC or Deconv or H_FC?

METRIC  |        PointNet         |        FIEncoder
        | FC      Deconv  H_FC    | FC      Deconv  H_FC
RMSE(R) | 3.2338  2.9889  2.9214  | 1.1066  2.2097  1.1385
MAE(R)  | 1.8462  1.4230  1.3562  | 0.9708  1.5648  1.0062
RMSE(t) | 0.0155  0.0135  0.0134  | 0.0036  0.0033  0.0026
MAE(t)  | 0.0147  0.0102  0.0092  | 0.0024  0.0029  0.0018
B. Decoder: Use FC or Deconv or H_FC?

The second experiment explores which decoder can better decode the input feature information to reconstruct the point cloud (see Figure 5). A fully connected (FC) network is a classic network traditionally used in deep learning. A hierarchical fully connected (H_FC) network is also added to test whether it facilitates feature decoding. In addition, given the "recovery" characteristics of the deconvolution operation (Deconv, also known as transposed convolution), a deconvolution decoder is set up. As illustrated in Table VI, the decoding effects of the models using fully connected or hierarchical fully connected networks are roughly the same, and both perform much better than the model using deconvolution. In the other experiments, a decoder composed of fully connected layers is chosen.

C. Regression: Use SB or DB?

The dual-branch (DB) architecture is an important part of our network. Here, it is replaced with a single-branch (SB) architecture, i.e., the rotation and translation components are estimated by MLPs with sizes of (1024, 512, 128, 64, 64, 7). In the 7-dimensional output, the first four digits represent a quaternion, and the last three digits represent a translation. The rotation and translation feature encoders share weights in the regression network. Our idea is that simply applying the DB architecture does not bring a performance improvement when the input point cloud pairs have no feature interaction. Due to the feature interaction in the encoding phase, the rotation and translation information remain mixed, and they are not subject to additional supervision. Therefore, using only the DB architecture may cause the network to fail to learn. As shown in Table VII, only the DB architecture combined with feature interaction realizes the ability to learn rotation and translation attention features, thereby further enhancing the feature interaction process and yielding improved registration performance.

TABLE VII Ablation study: Use single branch or dual branch?

PointNet | FIEncoder | SB | DB | RMSE(R) | RMSE(t)
✓        |           | ✓  |    | 2.930   | 0.0218
✓        |           |    | ✓  | 2.510   | 0.0119
         | ✓         | ✓  |    | 3.233   | 0.0255
         | ✓         |    | ✓  | 1.106   | 0.0036

D. Different Loss Functions
Multiple loss functions are available for the point cloud registration task. They define the optimization objective from different perspectives, most of them using the Chamfer distance metric or the earth mover's distance metric. In our experiments, five different loss functions are used for verification: the L1 loss function of the distance metric, the L2 loss function of the feature and reconstruction metrics, the L3 loss function of the transformation metric, and the combined loss functions L4 and L5. The first setting searches for the optimal weights of the L2 hyperparameters λ1 and λ2 within the range from 0.001 to 0.01. When λ1 is equal to 0.001 and λ2 is equal to 0.01, the root mean square errors of rotation and translation simultaneously reach their best values. Then, CFNet is used to compare the five loss functions, being trained with L1, L2, L3, and the derived L4 and L5. Table VIII shows the registration results obtained with the different loss functions. L1 is the most direct way to express the registration effect on the point cloud data; when L1 is used alone, it is greatly disturbed by noise (e.g., outliers and overlaps). When the feature metric loss L2 is used alone, the effect is not significant, and the generated point cloud feature information cannot fully represent the point cloud information. When the transformation metric loss L3 is used alone, the effect improves, but the differences between the extracted features and between the reconstructed point clouds are ignored. L5 adds the ground-truth transformation term on top of L4, and CFNet can switch between supervised and unsupervised learning by choosing between these two strategies depending on the conditions.

TABLE VIII Ablation study: Different loss functions.

LOSS                          | RMSE(R) | RMSE(t)
L1: loss_st                   | 3.407   | 0.0304
L2: λ1·loss_re + λ2·loss_fea  | 19.496  | 1.1026
L3: loss_gt                   | 1.569   | 0.0046
L4: L1 + L2                   | 2.308   | 0.0066
L5: L1 + L2 + L3              | 1.106   | 0.0036
VI. Conclusion
This paper proposes a feature- and reconstruction-based metric framework called CFNet, with feature interaction and a dual branch, to solve the point cloud registration problem. Our ideas and experiments comprehensively demonstrate the superiority and interpretability of the proposed method in terms of both accuracy and robustness. Compared with other point cloud registration methods, our CFNet does not need to find point-to-point correspondence relationships and instead targets features and reconstructions. Experiments show that the proposed CFNet achieves better performance than other registration methods, including optimization- and learning-based methods. Extensive ablation studies also verify the effectiveness of each of our components.
Limitations. Because the data come from different sources, models trained by CFNet on synthetic datasets have difficulty fitting real-world outdoor datasets. Moreover, owing to its network design, CFNet itself cannot directly handle partial-to-partial registration. Even so, preliminary solutions are proposed above to compensate for these problems. Our future work aims to leverage autoencoders and spatial reconstruction for more generalizable networks and to design key point-based feature selection strategies to address missing and asymmetric registration cases in point clouds.

Acknowledgment
This work was supported in part by the National Natural Science Foundation of China under Grants 62036006 and 62276200, in part by the Natural Science Basic Research Plan in Shaanxi Province under Grant 2022JM-327, in part by the Natural Science Foundation of Shaanxi Province under Grant 2020JQ-197, in part by the CAAI-Huawei MINDSPORE Academic Open Fund, in part by MindSpore, Compute Architecture for Neural Networks, and in part by the Ascend AI Processor used for this research.

References
[1] Y. Hu et al., "Planning-oriented autonomous driving," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2023, pp. 17853–17862.
[2] A. Spröwitz et al., "Roombots: Reconfigurable robots for adaptive furniture," IEEE Comput. Intell. Mag., vol. 5, no. 3, pp. 20–32, Aug. 2010.
[3] Y. Wu, J. Liu, M. Gong, W. Ma, and Q. Miao, "Centralized motion-aware enhancement for single object tracking on point clouds," in Proc. Int. Conf. Cloud Comput. Intell. Syst., 2022, pp. 186–192.
[4] Y. LeCun et al., "Backpropagation applied to handwritten zip code recognition," Neural Comput., vol. 1, no. 4, pp. 541–551, 1989.
[5] Y. Li et al., "Towards efficient graph convolutional networks for point cloud handling," in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2021, pp. 3752–3762.
[6] X. Chen, H. Ma, J. Wan, B. Li, and T. Xia, "Multi-view 3D object detection network for autonomous driving," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 1907–1915.
[7] Y. Wu et al., "Self-supervised intra-modal and cross-modal contrastive learning for point cloud understanding," IEEE Trans. Multimedia, early access, Jun. 09, 2023, doi: 10.1109/TMM.2023.3284591.
[8] D. Maturana and S. Scherer, "Voxnet: A 3D convolutional neural network for real-time object recognition," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2015, pp. 922–928.
[9] S. Song, F. Yu, A. Zeng, A. X. Chang, M. Savva, and T. Funkhouser, "Semantic scene completion
from a single depth image," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 1746–1754.
[10] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, "Pointnet: Deep learning on point sets for 3D classification and segmentation," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 652–660.
[11] Y. Aoki, H. Goforth, R. A. Srivatsan, and S. Lucey, "Pointnetlk: Robust & efficient point cloud registration using pointnet," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2019, pp. 7163–7172.
[12] Y. Wu, Y. Zhang, X. Fan, M. Gong, Q. Miao, and W. Ma, "Inenet: Inliers estimation network with similarity learning for partial overlapping registration," IEEE Trans. Circuits Syst. Video Technol., vol. 33, no. 3, pp. 1413–1426, Mar. 2023.
[13] A. Kurobe, Y. Sekikawa, K. Ishikawa, and H. Saito, "Corsnet: 3D point cloud registration by deep neural network," IEEE Robot. Automat. Lett., vol. 5, no. 3, pp. 3960–3966, Jul. 2020.
[14] Y. Wu, Q. Yao, X. Fan, M. Gong, W. Ma, and Q. Miao, "Panet: A point-attention based multi-scale feature fusion network for point cloud registration," IEEE Trans. Instrum. Meas., vol. 72, 2023, Art. no. 2512913.
[15] Y. Wang and J. M. Solomon, "Deep closest point: Learning representations for point cloud registration," in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2019, pp. 3523–3532.
[16] Y. Wu et al., "RORNet: Partial-to-partial registration network with reliable overlapping representations," IEEE Trans. Neural Netw. Learn. Syst., early access, Jun. 30, 2023, doi: 10.1109/TNNLS.2023.3286943.
[17] H. Zhao, L. Jiang, J. Jia, P. H. Torr, and V. Koltun, "Point transformer," in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2021, pp. 16259–16268.
[18] Y. Wang and J. M. Solomon, "Prnet: Self-supervised learning for partial-to-partial registration," in Proc. Adv. Neural Inf. Process. Syst., 2019, pp. 8812–8824.
[19] H. Xu, N. Ye, G. Liu, B. Zeng, and S. Liu, "Finet: Dual branches feature interaction for partial-to-partial point cloud registration," in Proc. AAAI Conf. Artif. Intell., 2022, vol. 36, pp. 2848–2856.
[20] P. J. Besl and N. D. McKay, "Method for registration of 3-D shapes," in Proc. Sensor Fusion IV: Control Paradigms Data Struct., 1992, vol. 1611, pp. 586–606.
[21] J. Park, Q. Zhou, and V. Koltun, "Colored point cloud registration revisited," in Proc. IEEE Int. Conf. Comput. Vis., 2017, pp. 143–152.
[22] J. Yang, H. Li, D. Campbell, and Y. Jia, "Go-ICP: A globally optimal solution to 3D ICP point-set registration," IEEE Trans. Pattern Anal. Mach. Intell., vol. 38, no. 11, pp. 2241–2254, Nov. 2016.
[23] H. M. Le, T.-T. Do, T. Hoang, and N.-M. Cheung, "SDRSAC: Semidefinite-based randomized approach for robust point cloud registration without correspondences," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2019, pp. 124–133.
[24] Y. Wu et al., "Evolutionary multiform optimization with two-stage bidirectional knowledge transfer strategy for point cloud registration," IEEE Trans. Evol. Comput., early access, Oct. 19, 2022, doi: 10.1109/TEVC.2022.3215743.
[25] Z. Gojcic, C. Zhou, J. D. Wegner, and A. Wieser, "The perfect match: 3D point cloud matching with smoothed densities," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2019, pp. 5545–5554.
[26] C. Choy, J. Park, and V. Koltun, "Fully convolutional geometric features," in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2019, pp. 8958–8966.
[27] Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon, "Dynamic graph CNN for learning on point clouds," ACM Trans. Graph., vol. 38, no. 5, pp.
1–12, 2019.
[28] Z. J. Yew and G. H. Lee, "RPM-Net: Robust point matching using learned features," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2020, pp. 11824–11833.
[29] J. Li, C. Zhang, Z. Xu, H. Zhou, and C. Zhang, "Iterative distance-aware similarity matrix convolution with mutual-supervised point elimination for
efficient point cloud registration," in Proc. Eur. Conf. Comput. Vis., 2020, pp. 378–394.
[30] V. Sarode et al., "PCRNet: Point cloud registration network using pointnet encoding," 2019, arXiv:1908.07906.
[31] C. Choy, W. Dong, and V. Koltun, "Deep global registration," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2020, pp. 2514–2523.
[32] G. Mei, X. Huang, J. Zhang, and Q. Wu, "Partial point cloud registration via soft segmentation," in Proc. IEEE Int. Conf. Image Process., 2022, pp. 681–685.
[33] M. Zhu, M. Ghaffari, and H. Peng, "Correspondence-free point cloud registration with SO(3)-equivariant implicit shape representations," in Proc. Conf. Robot Learn., 2022, pp. 1412–1422.
[34] Y. Wu, G. Mu, C. Qin, Q. Miao, W. Ma, and X. Zhang, "Semi-supervised hyperspectral image classification via spatial-regulated self-training," Remote Sens., vol. 12, no. 1, 2020, Art. no. 159.
[35] H. Dong, W. Ma, Y. Wu, J. Zhang, and L. Jiao, "Self-supervised representation learning for remote sensing image change detection based on temporal prediction," Remote Sens., vol. 12, no. 11, 2020, Art. no. 1868.
[36] Y. Shen, L. Hui, H. Jiang, J. Xie, and J. Yang, "Reliable inlier evaluation for unsupervised point cloud registration," in Proc. AAAI Conf. Artif. Intell., 2022, vol. 36, pp. 2198–2206.
[37] H. Jiang, Y. Shen, J. Xie, J. Li, J. Qian, and J. Yang, "Sampling network guided cross-entropy method for unsupervised point cloud registration," in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2021, pp. 6128–6137.
[38] Y. Zeng, Y. Qian, Z. Zhu, J. Hou, H. Yuan, and Y. He, "Corrnet3D: Unsupervised end-to-end learning of dense correspondence for 3D point clouds," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2021, pp. 6052–6061.
[39] X. Huang, G. Mei, and J. Zhang, "Feature-metric registration: A fast semi-supervised approach for robust point cloud registration without correspondences," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2020, pp. 11366–11374.
[40] Q. Zhou, J. Park, and V. Koltun, "Fast global registration," in Proc. Eur. Conf. Comput. Vis., 2016, pp. 766–782.
[41] R. B. Rusu, N. Blodow, and M. Beetz, "Fast point feature histograms (FPFH) for 3D registration," in Proc. IEEE Int. Conf. Robot. Automat., 2009, pp. 3212–3217.
[42] H. Xu, S. Liu, G. Wang, G. Liu, and B. Zeng, "Omnet: Learning overlapping mask for partial-to-partial point cloud registration," in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2021, pp. 3132–3141.
[43] Q. Zhou, J. Park, and V. Koltun, "Open3D: A modern library for 3D data processing," 2018, arXiv:1801.09847.
[44] Z. Wu et al., "3D shapenets: A deep representation for volumetric shapes," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2015, pp. 1912–1920.
[45] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," 2014, arXiv:1412.6980.
[46] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," Int. J. Robot. Res., vol. 32, no. 11, pp. 1231–1237, 2013.
[47] A. Zeng, S. Song, M. Nießner, M. Fisher, J. Xiao, and T. Funkhouser, "3Dmatch: Learning local geometric descriptors from RGB-D reconstructions," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 1802–1811.
[48] Y. Wu, X. Hu, Y. Zhang, M. Gong, W. Ma, and Q. Miao, "SACF-net: Skip-attention based correspondence filtering network for point cloud registration," IEEE Trans. Circuits Syst. Video Technol., vol. 33, no. 8, pp. 3585–3595, Aug. 2023.
Conference Calendar
Marley Vellasco, Pontifícia Universidade Católica, BRAZIL
Liyan Song, Southern University of Science and Technology, CHINA
Denotes a CIS-Sponsored Conference
D Denotes a CIS Technically Co-Sponsored Conference

D 5th International Conference on New Trends in Computational Intelligence (NTCI 2023)
November 3-5, 2023
Place: Qingdao, China
General Chairs: Marios M. Polycarpou and Jian Wang
Website: http://ntcien2023.upc.edu.cn/
Submission: August 20, 2023

D The 7th Asian Conference on Artificial Intelligence Technology (ACAIT 2023)
November 11-12, 2023
Place: Jiaxing, China
General Chairs: Qionghai Dai, Cesare Alippi, and Jong-Hwan Kim
Website: https://www.acaitconf.com/
Submission: May 30, 2023
2023 IEEE International Conference on Development and Learning (ICDL 2023)
November 9-11, 2023
Place: Macau, China
General Chair: Zhijun Li
Website: http://www.icdl-2023.org/
Submission: May 1, 2023
2023 IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2023)
December 6-8, 2023
Place: Mexico City, Mexico
General Chair: Wen Yu
Website: https://attend.ieee.org/ssci2023/
Submission: July 1, 2023
Digital Object Identifier 10.1109/MCI.2023.3316419 Date of current version: 17 October 2023
D 14th International Conference on Intelligent Systems: Theories and Applications (SITA 2023)
November 22-23, 2023
Place: Casablanca, Morocco
General Chairs: El Beggar Omar and Kissi Mohamed
Website: https://sitaconference.org/
Submission: July 31, 2023

D 13th International Conference on Pattern Recognition and Applications (ICPRAM 2024)
February 24-26, 2024
Place: Rome, Italy
General Chair: Ana Fred
Website: https://icaart.scitevents.org/Home.aspx
Submission: October 9, 2023

D 16th International Conference on Agents and Artificial Intelligence (ICAART 2024)
February 24-26, 2024
Place: Rome, Italy
General Chair: Jaap van den Herik
Website: https://icaart.scitevents.org/Home.aspx
Submission: October 9, 2023

D 11th International Conference on Signal Processing and Integrated Networks (SPIN 2024)
March 21-22, 2024
Place: Noida-Delhi NCR, India
General Chair: Manoj Kumar Pandey
Website: https://amity.edu/spin2024/
Submission: September 15, 2023
2024 IEEE International Conference on Development and Learning (IEEE ICDL 2024)
May 20-23, 2024
Place: Austin, TX, USA
General Chair: Chen Yu
Website: TBA
Submission: December 15, 2023
2024 IEEE Evolving and Adaptive Intelligent Systems Conference (IEEE EAIS 2024)
May 23-24, 2024
Place: Madrid, Spain
General Chairs: Jose Iglesias and Rashmi Baruah
Website: TBA
Submission: January 22, 2024

D 2024 IEEE Swiss Conference on Data Science (IEEE SDS 2024)
May 30-31, 2024
Place: Zurich, Switzerland
General Chair: Gundula Heinatz Bürki
Website: TBA
Submission: January 12, 2024
2024 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (IEEE CIVEMSA 2024)
June 14-15, 2024
Place: Xi’an, China
General Chairs: Yong Hu, Xiaodong Zhang, and Yi Zhang
Website: TBA
Submission: January 31, 2024
2024 IEEE International Conference on Artificial Intelligence (IEEE CAI 2024)
June 25-27, 2024
Place: Singapore
General Chairs: Ivor Tsang, Yew Soon Ong, and Hussein Abbass
Website: https://ieeecai.org/2024/
Submission: February 15, 2024
2024 IEEE World Congress on Computational Intelligence (IEEE WCCI 2024)
June 30 - July 5, 2024
Place: Yokohama, Japan
General Chairs: Akira Hirose and Hisao Ishibuchi
Website: https://wcci2024.org/
Submission: January 15, 2024
2024 IEEE Conference on Artificial Intelligence
Sands Expo and Convention Centre, Marina Bay Sands, Singapore
25-27 June 2024

CALL FOR PAPERS
PAPERS | WORKSHOPS | PANEL PROPOSALS
IEEE CAI is a new conference and exhibition with an emphasis on the applications of AI and key AI verticals that impact industrial technology and innovation.
IMPORTANT DEADLINES
Workshop & Panels proposal: 20 Nov 2023
Abstract submission: 13 Dec 2023
Paper submission (long & short papers): 20 Dec 2023
Acceptance notifications & reviewers’ comments: 25 Mar 2024
Final reviewed submission: 25 Apr 2024
IEEE CAI seeks original, high-quality submissions describing research and results that contribute to advancements in the following AI applications and verticals:
AI and Education
Involve the creation of adaptive learning systems, personalised content delivery, and administrative automation. It also utilises predictive analytics to monitor student progress and identify learning gaps.

AI in Healthcare and Life Science
Explore the need for improved decision-making to assist medical practitioners, as well as additional medical issues including personnel allocation and scheduling, automated sensing, improved medical devices and manufacturing processes, and supply chain optimisation.
Industrial AI
Enhance the aerospace, transportation, and maritime sectors by optimising system design, autonomous navigation, and logistics management. It emphasises robust cybersecurity, efficient Digital Twins usage, and comprehensive asset health management.
Societal Implications of AI
Explore the impact of artificial intelligence on society, including issues related to ethics, privacy, and equity. It examines how AI influences job markets, decision-making processes, and personal privacy, while also considering the importance of fairness, transparency, and accountability in AI systems.
Resilient and Safe AI
Develop AI systems that are reliable, secure, and able to withstand unexpected situations or cyberattacks. It emphasises the importance of creating AI technologies that function correctly and safely, even under adverse conditions, while also maintaining data privacy and system integrity.
AI & Sustainability
Employ AI to optimise resource usage, reduce waste, and support renewable energy initiatives. The ultimate aim is to leverage AI's problem-solving capabilities to address environmental challenges and contribute to a sustainable future.
Access all submission details: https://ieeecai.org/2024/authors
Papers accepted by IEEE CAI will be submitted to the IEEE Xplore® Digital Library. Selected high-quality papers will be invited for submission to a journal special issue.
COMMITTEE INFO:
General Co-Chairs: Prof Ivor Tsang, Prof Yew Soon Ong, and Prof Hussein Abbass
CALL FOR PAPERS
IMPORTANT DATES
15 November 2023: Special Session & Workshop Proposals Deadline
15 December 2023: Competition & Tutorial Proposals Deadline
15 January 2024: Paper Submission Deadline
15 March 2024: Paper Acceptance Notification
1 May 2024: Final Paper Submission & Early Registration Deadline

30 June - 5 July 2024
IEEE WCCI 2024, Yokohama, Japan
IEEE WCCI is the world’s largest technical event on computational intelligence, featuring the three flagship conferences of the IEEE Computational Intelligence Society (CIS) under one roof: the International Joint Conference on Neural Networks (IJCNN), the IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), and the IEEE Congress on Evolutionary Computation (IEEE CEC). IEEE WCCI 2024 will be held in Yokohama, Japan, a city that fosters academic fusion and multidisciplinary and industrial collaboration. The Yokohama area is home to numerous universities, institutes, and companies in advanced information technology, electronics, robotics, mobility, medicine, and food science. Holding IEEE WCCI 2024 in this area will inspire attendees to imagine next-generation science and technology as a fusion of AI, physiology, and psychology, in cooperation with intelligence-related industries.
IJCNN 2024
The International Joint Conference on Neural Networks (IJCNN) covers a wide range of topics in the field of neural networks, from biological neural networks to artificial neural computation.

IEEE CEC 2024
The IEEE Congress on Evolutionary Computation (IEEE CEC) covers all topics in evolutionary computation from theory to real-world applications.
FUZZ-IEEE 2024
The IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) covers all topics in fuzzy systems from theory to real-world applications.

CALL FOR PAPERS
Papers for IEEE WCCI 2024 should be submitted electronically through the Congress website at wcci2024.org. Submissions will be refereed by experts in the fields and ranked on the criteria of originality, significance, quality, and clarity.

CALL FOR TUTORIALS
IEEE WCCI 2024 will feature pre-Congress tutorials covering fundamental and advanced topics in computational intelligence. A tutorial proposal should include the title, an outline, the expected enrollment, and the presenter/organizer's biography. Inquiries regarding tutorials should be addressed to the Tutorials Chairs.
CALL FOR SPECIAL SESSION PROPOSALS
IEEE WCCI 2024 solicits proposals for special sessions within the technical scope of the three conferences. Special sessions, to be organized by internationally recognized experts, aim to bring together researchers on focused topics. Cross-fertilization of the three technical disciplines and newly emerging research areas are strongly encouraged. Inquiries regarding special sessions and proposals should be addressed to the Special Sessions Chairs.

CALL FOR COMPETITION PROPOSALS
IEEE WCCI 2024 will host competitions to stimulate research in computational intelligence. A competition proposal should include a description of the problem(s) addressed, the evaluation procedures, and a biography of the organizers. Inquiries regarding competitions should be addressed to the Competitions Chair.