Genetic and Evolutionary Computation

Series Editors:
Wolfgang Banzhaf, Michigan State University, East Lansing, MI, USA
Kalyanmoy Deb, Michigan State University, East Lansing, MI, USA
More information about this series at http://www.springer.com/series/7373
Wolfgang Banzhaf • Erik Goodman • Leigh Sheneman • Leonardo Trujillo • Bill Worzel
Editors
Genetic Programming Theory and Practice XVII
Editors

Wolfgang Banzhaf
John R. Koza Chair, Computer Science and Engineering
Michigan State University
East Lansing, MI, USA

Erik Goodman
BEACON Center
Michigan State University
East Lansing, MI, USA

Leigh Sheneman
Department of Computer Science and Engineering
Michigan State University
Okemos, MI, USA

Leonardo Trujillo
Depto. de Ingeniería Eléctrica y Electrónica
Tecnológico Nacional de México/IT de Tijuana
Tijuana, Baja California, Mexico

Bill Worzel
Evolution Enterprises
Ann Arbor, MI, USA
ISSN 1932-0167          ISSN 1932-0175 (electronic)
Genetic and Evolutionary Computation
ISBN 978-3-030-39957-3          ISBN 978-3-030-39958-0 (eBook)
https://doi.org/10.1007/978-3-030-39958-0

© Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
We dedicate this book to the memory of the co-founder of the Workshop series on Genetic Programming—Theory and Practice, Rick Riolo, who passed away on August 25, 2018.
Foreword
It is a genuine pleasure to write this brief foreword to the collected proceedings of GPTP XVII. It was my privilege to act as opening keynote speaker at the gathering, returning after a 16-year break from playing the same role for GPTP I in 2003. In both cases, I was a fascinated outsider learning about a community that seemed at once oddly similar and yet weirdly different from the computational evolutionary biologists who comprise my own academic tribe (specifically those concerned with the origin and early evolution of life). On both occasions, I was struck immediately by the potential for the Genetic Programming Theory and Practice (GPTP) community to answer questions that “my people” struggle to frame. How and why did the computational basis of biology evolve to comprise the particular set of rules and pieces which freshmen biologists now strive to memorize, some four billion years later (4 genetic letters, 20 amino acid building blocks of proteins and their interactions)?

But this year, just as in 2003, careful listening soon brought a far deeper conviction that the questions of evolutionary computing are not and should not be limited to those which happen to interest me, or indeed anyone else. There is something too fresh, vibrant, and exploratory about the border formed by introducing evolutionary principles into programming. The diverse works which follow will grow, within the reader, an inescapable sense that it would be to the detriment of human knowledge and technological progress for anyone to presume, at this early stage, any particular purpose or direction for the field. There’s simply too much exploration to be done first!

This truth only highlights a more urgent and somber note which must rightfully dominate my remaining words. While it would be nice to write here only a tourist’s guide to the series of locations along the border between evolution and computing which populate the following pages, something far more serious dominated the gathering and must be spoken openly. When I, a nosy outsider, asked participants to bring me up to speed on the history of their field “while I was away,” one message united all answers: Deep Learning has emerged to pose a deep and perhaps existential threat to our community. The numerous directions in which this particular form of neural network can find answers are undeniable. Equally undeniable is the attractiveness of a simple, reliable, and user-friendly product developed by the financial might and business acumen of Google. But just as, at least within the USA, the emergence of “big box” stores brought reliability, cost savings, and convenience only at the cost of a conformity that eroded a far richer consumer ecosystem, so it is very clear from the pages that follow that Deep Learning is flattening something far richer.

Both implicitly and explicitly, the pages which follow demonstrate that Deep Learning is not the answer to every problem. From industry to computing theory, genetic programming and genetic algorithms can help where neural networks and other forms of machine learning struggle. A subtler, deeper message to be found between their lines is one familiar throughout research science. Surprisingly often, it turns out that an answer to the question, as originally posed, is downright unhelpful. We needed, instead, to understand why the question was badly framed. That need not be expressed in the past tense. Any history of science suggests that we progress less by obtaining answers than by forming better questions. Douglas Adams satirized this important truth famously within the Hitchhiker’s Guide to the Galaxy when he told the story of an unimaginably advanced civilization which built planet Earth as a supercomputer with which to calculate the answer to life, the universe, and everything. Only when this answer arrived in the form of the number 42 did the civilization reflect that perhaps the question had not been well formed.

The truth behind this humor matters when a core limitation of Deep Learning is its lack of transparency. What just happened? How did it reach that answer? Is that really what we needed to know/solve/achieve? In contrast to the black (“big”) box of Deep Learning, the diverse “Mom-and-Pop” stores of the GPTP community invite such meta-questions. Through them, we have every reason to believe, a deeper kind of learning proceeds. Let us not wait for Deep Learning to produce all of the answers, only to discover that we now need to dust off, resurrect, or reinvent alternative approaches that it drove extinct along the way. It matters, then, that the community of evolutionary computing spreads this message: through its areas of success and the unexpected insights it uncovers. And if you, the reader, are in any way new to the field represented by GPTP, then it matters that you keep reading.

Baltimore, MD, USA
October 2019
Stephen Freeland
Preface
After 16 annual editions of the workshop on Genetic Programming Theory and Practice (GPTP) were held in Ann Arbor, 2019 saw the workshop venture out from that location for the first time. This 17th GPTP workshop was held in East Lansing, Michigan, from May 16 to May 19, 2019, at Michigan State University, one of the first land-grant institutions in the USA. It was organized and supported by the BEACON Center for the Study of Evolution in Action, a Science and Technology Center funded by the NSF since 2010. The collection you hold in your hands contains the written final contributions submitted by the workshop’s participants. Each contribution was drafted, read, and reviewed by other participants prior to the workshop; each was then presented at the workshop and subsequently revised on the basis of feedback received during the event.

GPTP has long held a special place in the genetic programming community as an unusually intimate, interdisciplinary, and constructive meeting. It brings together researchers and practitioners who are eager to engage with one another deeply, in thoughtful, unhurried discussions of the major challenges and opportunities in the field. Despite the change in location, interest was strong this year: with approximately 50 regular attendees, this was one of the largest groups ever to participate in the workshop. It should be kept in mind that participation in this workshop is by invitation only, and every year the editors make an effort to invite a group of participants that is diverse in several ways, including participants from both academia and industry, junior and senior, local, national, and international. Efforts are also made to include participants from “adjacent” fields such as evolutionary biology.

GPTP is a single-track workshop, with a schedule that provides ample time for presentations and for discussions, both in response to specific presentations and on more general topics. Participants are encouraged to contribute observations from their own, unique perspectives and to help one another engage with the presented work. Often, new ideas are developed in these discussions, leading to collaborations after the workshop.
In this year’s edition, the regular talks touched on many of the most important issues and research questions in the field, including: opportune application domains for GP-based methods, game playing and co-evolutionary search, symbolic regression and efficient learning strategies, encodings and representations for GP, schema theorems, and new selection mechanisms.

Aside from the presentations of regular contributions, the workshop featured three keynote presentations that were chosen to broaden the group’s perspective on the theory and practice of genetic programming. This year, the first keynote speaker was Dr. Stephen Freeland, University of Maryland, on “Alphabets, topologies and optimization.” He returned to the workshop after giving a keynote at the first GPTP workshop in 2003, with 16 years of additional research to report on. On the second day, the keynote was presented by Gavin A. Schmidt from the NASA Goddard Institute for Space Studies, on “Some Challenges and Progress in Programming for Climate Science.” The third and final keynote was delivered by Indika Rajapakse, Associate Professor of Computational Medicine and Bioinformatics, Mathematics, and Bioengineering at the University of Michigan in Ann Arbor, on “Cell Reprogramming.” As can be gathered from their titles, none of these talks focused explicitly on genetic programming per se, but each presented fascinating developments that connect to the theory and applications of genetic programming in intriguing and possibly influential ways.

While most readers of this volume will not have had the pleasure of attending the workshop itself, our hope is that they will nonetheless be able to appreciate and engage with the ideas that were presented. We also hope that all readers will gain an understanding of the current state of the field, and that those who seek to do so will be able to use the work presented herein to advance their own work and to make additional contributions to the field in the future.
Acknowledgements  We would like to thank all of the participants for again making GP Theory and Practice a successful workshop in 2019. As is always the case, it produced a lot of interesting and high-energy discussions, as well as speculative thoughts and new ideas for further work. The keynote speakers delivered thought-provoking talks from perspectives not usually directly connected to genetic programming.

We would also like to thank our financial supporters for making the existence of GP Theory and Practice possible for the past 16 years. For 2019, as we moved to another location, we needed additional funds raised from different sponsors. We are grateful to the following sponsors:

• John Koza
• Jason H. Moore
• Babak Hodjat at Sentient
• Mark Kotanchek at Evolved Analytics
• Stuart Card
• The BEACON Center for the Study of Evolution in Action, at MSU

A number of people made key contributions to the organization and assisted our participants during their stay in East Lansing. Foremost among them is Constance James, who made the workshop run smoothly with her diligent efforts behind the scenes before, during, and after the workshop. Special thanks go to Michigan State University, particularly the College of Engineering and its Dean, Professor Leo Kempel, for hosting us in the Engineering Conference Room, as well as to the Springer Nature Publishing Company for producing this book. We are particularly grateful for contractual assistance by Melissa Fearon and Ronan Nugent at Springer. We would also like to express our gratitude to Carl Simon at the Center for the Study of Complex Systems at the University of Michigan for continued support.

East Lansing, MI, USA    Wolfgang Banzhaf
East Lansing, MI, USA    Erik Goodman
Okemos, MI, USA          Leigh Sheneman
Tijuana, Mexico          Leonardo Trujillo
Ann Arbor, MI, USA       Bill Worzel

October 2019
Contents

1  Characterizing the Effects of Random Subsampling on Lexicase Selection .... 1
   Austin J. Ferguson, Jose Guadalupe Hernandez, Daniel Junghans, Alexander Lalejini, Emily Dolson, and Charles Ofria
   1.1  Introduction .... 1
   1.2  Lexicase Selection .... 3
        1.2.1  Applying Subsampling to Lexicase Selection .... 3
   1.3  Methods .... 4
        1.3.1  Evolutionary System .... 4
        1.3.2  Program Synthesis Problems .... 5
        1.3.3  Experimental Design .... 6
        1.3.4  Statistical Analyses .... 10
   1.4  Results and Discussion .... 10
        1.4.1  Subsampling Improves Lexicase Selection’s Problem-Solving Success .... 10
        1.4.2  Deeper Evolutionary Searches Contribute to Subsampling’s Success .... 12
        1.4.3  Subsampling Reduces Computational Effort .... 13
        1.4.4  Subsampling Does Not Systematically Decrease Phenotypic Diversity in Lexicase Selection .... 14
        1.4.5  Cohort Lexicase Enables More Phylogenetic Diversity Than Down-Sampled Lexicase .... 15
        1.4.6  Subsampling Degrades Specialist Maintenance .... 18
   1.5  Conclusion .... 20
   References .... 21

2  It Is Time for New Perspectives on How to Fight Bloat in GP .... 25
   Francisco Fernández de Vega, Gustavo Olague, Francisco Chávez, Daniel Lanza, Wolfgang Banzhaf, and Erik Goodman
   2.1  Introduction .... 25
   2.2  The Bloat Phenomenon .... 26
   2.3  Load-Balancing and Parallel GP .... 27
        2.3.1  Structural Complexity of GP Individuals .... 28
   2.4  Methodology .... 29
        2.4.1  Implementation .... 31
        2.4.2  Experiments .... 32
   2.5  Results .... 33
        2.5.1  Parallel Model .... 33
        2.5.2  Sequential Execution .... 34
   2.6  Conclusions .... 36
   References .... 37

3  Explorations of the Semantic Learning Machine Neuroevolution Algorithm: Dynamic Training Data Use, Ensemble Construction Methods, and Deep Learning Perspectives .... 39
   Ivo Gonçalves, Marta Seca, and Mauro Castelli
   3.1  Introduction .... 39
   3.2  Neuroevolution Overview .... 40
   3.3  Semantic Learning Machine .... 45
        3.3.1  Algorithm .... 45
        3.3.2  Previous Comparisons with Other Neuroevolution Methods .... 46
   3.4  Experimental Methodology .... 47
        3.4.1  Datasets and Parameter Tuning .... 47
        3.4.2  SLM Variants .... 48
        3.4.3  MLP Variants .... 49
   3.5  Results and Analysis .... 50
        3.5.1  SLM .... 50
        3.5.2  MLP .... 52
        3.5.3  Generalization and Ensemble Analysis .... 54
   3.6  Toward the Deep Semantic Learning Machine .... 58
   References .... 59

4  Can Genetic Programming Perform Explainable Machine Learning for Bioinformatics? .... 63
   Ting Hu
   4.1  Introduction .... 63
   4.2  Methods .... 64
        4.2.1  Metabolomics Data for Osteoarthritis .... 64
        4.2.2  Linear Genetic Programming Algorithm .... 65
        4.2.3  Training Using the Full and the Focused Feature Sets .... 67
        4.2.4  Feature Synergy Analysis .... 67
   4.3  Results and Discussion .... 68
        4.3.1  Best Genetic Programs Evolved on the Full Feature Set .... 68
        4.3.2  Identification of Important Features .... 69
        4.3.3  Best Genetic Programs Evolved on the Focused Feature Subset .... 73
   4.4  Conclusion .... 75
   References .... 76

5  Symbolic Regression by Exhaustive Search: Reducing the Search Space Using Syntactical Constraints and Efficient Semantic Structure Deduplication .... 79
   Lukas Kammerer, Gabriel Kronberger, Bogdan Burlacu, Stephan M. Winkler, Michael Kommenda, and Michael Affenzeller
   5.1  Introduction .... 79
        5.1.1  Motivation .... 80
        5.1.2  Prior Work .... 80
        5.1.3  Organization of This Chapter .... 81
   5.2  Definition of the Search Space .... 81
        5.2.1  Grammar for Mathematical Expressions .... 82
        5.2.2  Expression Hashing .... 85
   5.3  Exploring the Search Space .... 87
        5.3.1  Symbolic Regression as Graph Search Problem .... 88
        5.3.2  Guiding the Search .... 89
   5.4  Steering the Search .... 90
        5.4.1  Quality Estimation .... 90
        5.4.2  Priority Calculation .... 91
   5.5  Experiments .... 92
        5.5.1  Results .... 93
   5.6  Discussion .... 95
        5.6.1  Limitations .... 96
   5.7  Outlook .... 96
   References .... 97

6  Temporal Memory Sharing in Visual Reinforcement Learning .... 101
   Stephen Kelly and Wolfgang Banzhaf
   6.1  Introduction .... 101
   6.2  Background .... 102
        6.2.1  Temporal Memory .... 103
        6.2.2  Heterogeneous Policies and Modularity .... 105
   6.3  Evolving Heterogeneous Tangled Program Graphs .... 105
        6.3.1  Programs and Shared Temporal Memory .... 106
        6.3.2  Cooperative Decision-Making with Teams of Programs .... 108
        6.3.3  Compositional Evolution of Tangled Program Graphs .... 109
   6.4  Empirical Study .... 110
        6.4.1  Problem Environments .... 111
        6.4.2  Ball Catching: Training Performance .... 112
        6.4.3  Ball Catching: Solution Analysis .... 114
        6.4.4  Atari Breakout .... 114
   6.5  Conclusions and Future Work .... 116
   References .... 117

7  The Evolution of Representations in Genetic Programming Trees .... 121
   Douglas Kirkpatrick and Arend Hintze
   7.1  Introduction .... 121
   7.2  Material and Methods .... 124
        7.2.1  Representations and the Neuro-Correlate R .... 124
        7.2.2  Smearedness of Representations .... 126
        7.2.3  Active Categorical Perception Task .... 127
        7.2.4  Number Discrimination Task .... 127
        7.2.5  The Perception-Action Loop for Stateful Machines .... 128
        7.2.6  Markov GP Brains Using CGP Nodes .... 129
        7.2.7  Genetic Encoding of GP Brains in a Tree-Like Fashion .... 130
        7.2.8  GP-Forest Brain .... 131
        7.2.9  GP-Vector Brain .... 133
        7.2.10 Evolutionary Process .... 136
        7.2.11 Augmenting with R .... 136
   7.3  Results .... 137
        7.3.1  GP Trees Evolve to Have Representations .... 137
        7.3.2  Does Augmentation Using R Improve the Performance of a GA? .... 138
        7.3.3  Smeared Representations .... 139
   7.4  Discussion .... 141
   7.5  Conclusions .... 141
   References .... 142

8  How Competitive Is Genetic Programming in Business Data Science Applications? .... 145
   Arthur Kordon, Theresa Kotanchek, and Mark Kotanchek
   8.1  Introduction .... 145
   8.2  Business Needs for Data Science .... 146
        8.2.1  Business Forecasting .... 146
        8.2.2  Effective Operation .... 147
        8.2.3  Growth Opportunities .... 148
        8.2.4  Multi-Objective Optimization and Decision Making .... 149
   8.3  Data Science Competitive Landscape .... 149
        8.3.1  Defining Key Competitors for Data Science Applications .... 149
        8.3.2  Comparison on Business Needs Satisfaction .... 150
        8.3.3  How Popular Is GP in the Data Science Community? .... 150
   8.4  Current State-of-the-Art of Genetic Programming as Business Application Method .... 151
        8.4.1  Competitive Advantages of GP .... 151
        8.4.2  Key Weaknesses of GP .... 153
        8.4.3  Successful Genetic Programming Applications .... 154
   8.5  How to Increase Competitive Impact of Genetic Programming in Data Science Applications? .... 157
        8.5.1  Develop a Successful Marketing Strategy .... 157
        8.5.2  Broaden Application Areas .... 160
        8.5.3  Improved Professional Development Tools .... 160
        8.5.4  Increase GP Visibility and Teaching in Data Science Classes .... 160
   8.6  Conclusions .... 161
   References .... 162

9  Using Modularity Metrics as Design Features to Guide Evolution in Genetic Programming .... 165
   Anil Kumar Saini and Lee Spector
   9.1  Introduction .... 165
   9.2  Modularity in Genetic Programming .... 166
   9.3  Modularity Metrics .... 167
        9.3.1  Module .... 168
        9.3.2  Design Principles for Modularity Metrics .... 168
        9.3.3  Reuse and Repetition .... 169
        9.3.4  Reuse and Repetition from Execution Trace .... 169
   9.4  Using Modularity Metrics to Guide Evolution .... 171
        9.4.1  Using Design Features During Parent Selection .... 172
        9.4.2  Using Design Features During Variation .... 172
   9.5  Experiments and Results .... 173
        9.5.1  Extracting Modules from Push Programs .... 173
        9.5.2  Autosimplification .... 175
        9.5.3  Experimental Set-up and Results .... 175
   9.6  Conclusions and Future Work .... 178
   References .... 179

10 Evolutionary Computation and AI Safety .... 181
   Joel Lehman
   10.1 Introduction .... 181
   10.2 Background .... 183
        10.2.1 AI Safety .... 183
        10.2.2 EC and the Real World .... 185
   10.3 EC and Concrete AI Safety Problems .... 187
        10.3.1 Avoiding Negative Side Effects .... 187
        10.3.2 Reward Hacking .... 188
        10.3.3 Scalable Oversight .... 190
        10.3.4 Safe Exploration .... 191
        10.3.5 Robustness to Distributional Drift .... 193
   10.4 Discussion .... 194
   10.5 Conclusion .... 196
   References .... 196

11 Genetic Programming Symbolic Regression: What Is the Prior on the Prediction? .... 201
   Miguel Nicolau and James McDermott
   11.1 Introduction .... 201
   11.2 Motivation .... 203
        11.2.1 Distribution Mismatch, Problem Difficulty, and Performance .... 203
        11.2.2 Algorithm Configuration .... 204
        11.2.3 Understanding the Behaviour of Search Operators .... 205
   11.3 Previous Work on GP Biases .... 205
   11.4 Methodology, Experiments, and Results .... 206
        11.4.1 Reasoning from First Principles .... 206
        11.4.2 Setup .... 207
        11.4.3 Initialisation Prior .... 207
        11.4.4 GPSR Prior .... 209
        11.4.5 Effect of Tree Depth on Initialisation Prior .... 210
        11.4.6 Effect of Problem Dimension on Initialisation Prior .... 211
        11.4.7 Effect of X Range on Initialisation Prior .... 212
        11.4.8 Comparing the y and ŷ Distributions Across Problems .... 213
   11.5 Applications .... 215
        11.5.1 Algorithm Behaviour and Performance .... 215
        11.5.2 Algorithm Configuration .... 216
        11.5.3 Understanding GSGP Mutation .... 217
   11.6 Conclusions .... 219
        11.6.1 Limitations and Future Work .... 220
   References .... 223

12 Hands-on Artificial Evolution Through Brain Programming .... 227
   Gustavo Olague and Mariana Chan-Ley
   12.1 Introduction .... 227
   12.2 Evolution of Visual Attention Programs .... 228
        12.2.1 Evolution of Visual Recognition Programs .... 229
   12.3 Problem Statement .... 231
   12.4 Classification of Digitized Art .... 232
   12.5 Experiments .... 237
        12.5.1 Beyond Random Search in Genetic Programming .... 239
        12.5.2 Ideas for a New Kind of Evolutionary Learning .... 241
        12.5.3 Running the Algorithm with Fewer Images .... 242
        12.5.4 Running the Algorithm with 100 Images .... 244
        12.5.5 Ensemble Techniques and Genetic Programming .... 246
   12.6 Conclusions .... 251
   References .... 251

13 Comparison of Linear Genome Representations for Software Synthesis .... 255
   Edward Pantridge, Thomas Helmuth, and Lee Spector
   13.1 Introduction .... 255
   13.2 Linear Genomes: Plush vs. Plushy .... 256
        13.2.1 Random Genome Generation .... 259
        13.2.2 Genetic Operators .... 260
   13.3 Impact on Search Performance .... 260
        13.3.1 Benchmarks .... 260
        13.3.2 Benchmark Results .... 262
   13.4 Genome and Program Structure .... 262
        13.4.1 Sizes .... 262
        13.4.2 Presence of “Closing” Genes .... 265
   13.5 Other Considerations .... 268
        13.5.1 Hyperparameter Fitting .... 268
        13.5.2 Applicable Search Methods .... 269
        13.5.3 Automatic Simplification .... 270
        13.5.4 Serialization .... 271
        13.5.5 New Epigenetic Markers for Plush .... 271
   13.6 Conclusion .... 272
   References .... 272

14 Enhanced Optimization with Composite Objectives and Novelty Pulsation .... 275
   Hormoz Shahrzad, Babak Hodjat, Camille Dollé, Andrei Denissov, Simon Lau, Donn Goodhew, Justin Dyer, and Risto Miikkulainen
   14.1 Introduction .... 275
   14.2 Background and Related Work .... 276
        14.2.1 Single-Objective Optimization .... 276
        14.2.2 Multi-Objective Optimization .... 277
        14.2.3 Novelty Search .... 278
        14.2.4 Exploration Versus Exploitation .... 278
        14.2.5 Sorting Networks .... 279
        14.2.6 Stock Trading .... 280
   14.3 Methods .... 281
        14.3.1 Representation .... 281
        14.3.2 Single-Objective Approach .... 281
        14.3.3 Multi-Objective Approach .... 282
        14.3.4 Composite Multi-Objective Approach .... 282
        14.3.5 Novelty Selection Method .... 283
        14.3.6 Novelty Pulsation Method .... 285
   14.4 Experiment .... 286
        14.4.1 Experimental Setup .... 286
        14.4.2 Sorting Networks Results .... 287
        14.4.3 Stock Trading Results .... 288
   14.5 Discussion and Future Work .... 289
   14.6 Conclusion .... 290
   References .... 291

15 New Pathways in Coevolutionary Computation .... 295
   Moshe Sipper, Jason H. Moore, and Ryan J. Urbanowicz
   15.1 Coevolutionary Computation .... 295
   15.2 OMNIREP .... 297
   15.3 SAFE .... 299
   15.4 Concluding Remarks .... 303
   References .... 304

16 2019 Evolutionary Algorithms Review .... 307
   Andrew N. Sloss and Steven Gustafson
   16.1 Preface .... 307
   16.2 Introduction .... 310
        16.2.1 Applications .... 312
   16.3 Fundamentals of Digital Evolution .... 313
        16.3.1 Population .... 314
        16.3.2 Population Entities .... 315
        16.3.3 Generation .... 315
        16.3.4 Representation and the Grammar .... 316
        16.3.5 Fitness .... 316
        16.3.6 Selection .... 317
        16.3.7 Multi-Objective .... 317
        16.3.8 Constraints .... 318
        16.3.9 Exploitative-Exploratory Search .... 318
        16.3.10 Execution Environment, Modularity and System Scale .... 318
        16.3.11 Code Bloat and Clean-Up .... 319
        16.3.12 Non-convergence, or Early Local Optima .... 319
        16.3.13 Other Useful Terms .... 320
   16.4 Traditional Techniques .... 321
        16.4.1 Evolutionary Strategy, ES .... 321
        16.4.2 Genetic Algorithms, GA .... 322
        16.4.3 Genetic Programming, GP .... 322
        16.4.4 Genetic Improvement, GI .... 323
        16.4.5 Grammatical Evolution, GE .... 323
        16.4.6 Linear Genetic Programming, LGP .... 324
        16.4.7 Cartesian Genetic Programming, CGP .... 324
        16.4.8 Differential Evolution, DE .... 324
        16.4.9 Gene Expression Programming, GEP .... 325
   16.5 Specialized Techniques and Concepts .... 325
        16.5.1 Auto-Constructive Evolution .... 325
        16.5.2 Neuroevolution, or Deep Neuroevolution .... 326
        16.5.3 Self-Replicating Neural Networks .... 327
        16.5.4 Markov Brains .... 327
        16.5.5 PushGP .... 328
        16.5.6 Simulated Annealing .... 328
        16.5.7 Tangled Program Graph, TPG .... 328
        16.5.8 Tabu Search .... 329
        16.5.9 Animal Inspired Algorithms .... 329
   16.6 Problem-Domain Mapping .... 329
        16.6.1 Specific Problem-Domain Mappings .... 330
        16.6.2 Unusual and Interesting Problem-Domain Mappings .... 333
   16.7 Challenges .... 335
   16.8 Predictions .... 337
   16.9 Final Discussion and Conclusion .... 338
   16.10 Feedback .... 340
   References .... 340

17 Evolving a Dota 2 Hero Bot with a Probabilistic Shared Memory Model .... 345
   Robert J. Smith and Malcolm I. Heywood
   17.1 Introduction .... 345
   17.2 The Dota 2 1-on-1 Mid-lane Task .... 347
   17.3 Related Work .... 348
        17.3.1 Memory in Neural Networks .... 348
        17.3.2 Memory in Genetic Programming .... 350
   17.4 Tangled Program Graphs .... 351
   17.5 Indexed Memory for TPG .... 353
   17.6 Dota 2 Game Engine Interface .... 354
        17.6.1 Developing the Dota 2 Interface .... 354
        17.6.2 Defining State Space .... 356
        17.6.3 Defining the Shadow Fiend Action Space .... 357
        17.6.4 Fitness Function .... 357
   17.7 Results .... 358
        17.7.1 TPG Set Up .... 358
        17.7.2 Training Performance .... 359
        17.7.3 Assessing Champion TPG Agents Post Training .... 361
        17.7.4 Characterization of Memory Behaviour .... 362
   17.8 Conclusion .... 363
   References .... 364

18 Modelling Genetic Programming as a Simple Sampling Algorithm .... 367
   David R. White, Benjamin Fowler, Wolfgang Banzhaf, and Earl T. Barr
   18.1 Introduction .... 367
   18.2 Rationale for Modelling Simple Schemata .... 369
   18.3 Modelling GP .... 371
        18.3.1 Change in Schema Prevalence Due to Selection .... 371
        18.3.2 Change in Schema Prevalence Due to Operators .... 372
   18.4 Empirical Data Supporting the Model .... 373
   18.5 Ways to Improve GP .... 379
   18.6 Related Work .... 380
   18.7 Conclusion .... 380
   References .... 381

19 An Evolutionary System for Better Automatic Software Repair .... 383
   Yuan Yuan and Wolfgang Banzhaf
   19.1 Introduction .... 383
   19.2 Background and Motivation .... 385
        19.2.1 Related Work .... 385
        19.2.2 Motivating Examples .... 386
   19.3 Overview of ARJA-e .... 387
   19.4 Shaping the Search Space .... 387
        19.4.1 Exploiting the Statement-Level Redundancy Assumption .... 387
        19.4.2 Exploiting Repair Templates .... 388
        19.4.3 Initialization of Operation Types .... 390
   19.5 Multi-Objective Evolution of Patches .... 391
        19.5.1 Patch Representation .... 391
        19.5.2 Finer-Grained Fitness Function .... 391
        19.5.3 Genetic Operators .... 392
        19.5.4 Multi-Objective Search .... 393
   19.6 Alleviating Patch Overfitting .... 393
        19.6.1 Overfit Detection .... 393
        19.6.2 Patch Ranking .... 395
   19.7 Experimental Design .... 395
        19.7.1 Research Questions .... 395
        19.7.2 Dataset of Bugs .... 396
        19.7.3 Parameter Setting .... 396
   19.8 Results and Discussions .... 397
        19.8.1 Performance Evaluation (RQ1) .... 397
        19.8.2 Novelty in Generated Repairs (RQ2) .... 398
        19.8.3 Effectiveness of Overfit Detection (RQ3) .... 400
   19.9 Conclusion .... 402
   References .... 403

Index .... 407
Contributors
Michael Affenzeller Heuristic and Evolutionary Algorithms Laboratory (HEAL), University of Applied Sciences Upper Austria, Hagenberg, Austria; Department of Computer Science, Johannes Kepler University, Linz, Austria
Wolfgang Banzhaf Department of Computer Science and Engineering & Beacon Center, Michigan State University, East Lansing, MI, USA
Earl T. Barr CREST, University College London, London, UK
Bogdan Burlacu Heuristic and Evolutionary Algorithms Laboratory (HEAL), University of Applied Sciences Upper Austria, Hagenberg, Austria; Josef Ressel Center for Symbolic Regression, University of Applied Sciences Upper Austria, Hagenberg, Austria
Mauro Castelli NOVA IMS, Universidade Nova de Lisboa, Lisboa, Portugal
Mariana Chan-Ley EvoVisión Laboratory, Ensenada, BC, Mexico
Francisco Chávez University of Extremadura, Badajoz, Spain
Andrei Denissov Sentient Investment Management, San Francisco, CA, USA
Camille Dollé Sentient Investment Management, San Francisco, CA, USA
Emily Dolson Department of Translational Hematology and Oncology Research, Cleveland Clinic, Cleveland, OH, USA
Justin Dyer Sentient Investment Management, San Francisco, CA, USA
Austin J. Ferguson The BEACON Center for the Study of Evolution in Action, Michigan State University, East Lansing, MI, USA
Francisco Fernández de Vega University of Extremadura, Badajoz, Spain
Benjamin Fowler Department of Computer Science, Memorial University of Newfoundland, St. John's, NL, Canada
Ivo Gonçalves INESC Coimbra, DEEC, University of Coimbra, Coimbra, Portugal
Donn Goodhew Sentient Investment Management, San Francisco, CA, USA
Erik Goodman BEACON Center, Michigan State University, East Lansing, MI, USA
Steven Gustafson MAANA Inc., Bellevue, WA, USA
Thomas Helmuth Hamilton College, Clinton, NY, USA
Jose Guadalupe Hernandez The BEACON Center for the Study of Evolution in Action, Michigan State University, East Lansing, MI, USA
Malcolm I. Heywood Faculty of Computer Science, Dalhousie University, Halifax, NS, Canada
Arend Hintze Department of Integrative Biology, Michigan State University, East Lansing, MI, USA; Department of Computer Science and Engineering, BEACON Center for the Study of Evolution in Action, Michigan State University, East Lansing, MI, USA
Babak Hodjat Cognizant Technology Solutions, Dublin, CA, USA
Ting Hu School of Computing, Queen's University, Kingston, ON, Canada; Department of Computer Science, Memorial University, St. John's, NL, Canada
Daniel Junghans The BEACON Center for the Study of Evolution in Action, Michigan State University, East Lansing, MI, USA
Lukas Kammerer Heuristic and Evolutionary Algorithms Laboratory (HEAL), University of Applied Sciences Upper Austria, Hagenberg, Austria; Department of Computer Science, Johannes Kepler University, Linz, Austria; Josef Ressel Center for Symbolic Regression, University of Applied Sciences Upper Austria, Hagenberg, Austria
Stephen Kelly Department of Computer Science and Engineering & Beacon Center, Michigan State University, East Lansing, MI, USA
Douglas Kirkpatrick Department of Computer Science and Engineering, BEACON Center for the Study of Evolution in Action, Michigan State University, East Lansing, MI, USA
Michael Kommenda Heuristic and Evolutionary Algorithms Laboratory (HEAL), University of Applied Sciences Upper Austria, Hagenberg, Austria; Josef Ressel Center for Symbolic Regression, University of Applied Sciences Upper Austria, Hagenberg, Austria
Arthur Kordon Kordon Consulting LLC, Fort Lauderdale, FL, USA
Theresa Kotanchek Evolved Analytics LLC, Midland, MI, USA
Mark Kotanchek Evolved Analytics LLC, Midland, MI, USA
Gabriel Kronberger Heuristic and Evolutionary Algorithms Laboratory (HEAL), University of Applied Sciences Upper Austria, Hagenberg, Austria; Josef Ressel Center for Symbolic Regression, University of Applied Sciences Upper Austria, Hagenberg, Austria
Alexander Lalejini The BEACON Center for the Study of Evolution in Action, Michigan State University, East Lansing, MI, USA
Daniel Lanza University of Extremadura, Badajoz, Spain
Simon Lau Sentient Investment Management, San Francisco, CA, USA
Joel Lehman Uber AI, San Francisco, CA, USA
James McDermott National University of Ireland, Galway, Ireland
Risto Miikkulainen Cognizant Technology Solutions, Dublin, CA, USA; The University of Texas at Austin, Austin, TX, USA
Jason H. Moore Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA
Miguel Nicolau University College Dublin, Quinn School of Business, Belfield, Dublin, Ireland
Charles Ofria The BEACON Center for the Study of Evolution in Action, Michigan State University, East Lansing, MI, USA
Gustavo Olague CICESE, Ensenada, BC, Mexico
Edward Pantridge Swoop, Inc., Cambridge, MA, USA
Anil Kumar Saini College of Information and Computer Sciences, University of Massachusetts, Amherst, MA, USA
Marta Seca NOVA IMS, Universidade Nova de Lisboa, Lisboa, Portugal
Hormoz Shahrzad Cognizant Technology Solutions, Dublin, CA, USA
Moshe Sipper Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA; Department of Computer Science, Ben-Gurion University, Beer Sheva, Israel
Andrew N. Sloss Arm Inc., Bellevue, WA, USA
Robert J. Smith Faculty of Computer Science, Dalhousie University, Halifax, NS, Canada
Lee Spector Department of Computer Science, Amherst College, Amherst, MA, USA; School of Cognitive Science, Hampshire College, Amherst, MA, USA; College of Information and Computer Sciences, University of Massachusetts, Amherst, MA, USA
Ryan J. Urbanowicz Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA
David R. White Department of Physics, University of Sheffield, Sheffield, UK
Stephan M. Winkler Heuristic and Evolutionary Algorithms Laboratory (HEAL), University of Applied Sciences Upper Austria, Hagenberg, Austria; Department of Computer Science, Johannes Kepler University, Linz, Austria
Yuan Yuan Department of Computer Science and Engineering & Beacon Center, Michigan State University, East Lansing, MI, USA
Chapter 1
Characterizing the Effects of Random Subsampling on Lexicase Selection

Austin J. Ferguson, Jose Guadalupe Hernandez, Daniel Junghans, Alexander Lalejini, Emily Dolson, and Charles Ofria
1.1 Introduction

Evolutionary computation is often used to solve complex, multi-faceted problems where the quality of a candidate solution is measured according to its performance on a large set of test cases. For these test-based problems, we must somehow meld performances across many test cases to select individuals to serve as parents for the next generation. In many test-based problems, we cannot exhaustively evaluate a candidate solution over the entire space of possible test cases. As a result, it can be challenging to balance the trade-off between using a large enough test set to thoroughly evaluate candidate solutions while keeping the test set small enough to preserve computational resources and rapidly progress through generations.
Lexicase selection is a relatively new parent-selection algorithm developed for genetic programming (GP) and has been demonstrated as an effective tool for solving difficult test-based problems [11, 12, 27]. Many traditional selection strategies for solving test-based problems score potential solutions by aggregating their fitness across all test cases. The lexicase algorithm, however, chooses each parent for the next generation by sequentially applying test cases in a random order, keeping only the best performers on each test case until the population has been winnowed to a single individual. Because the ordering of test cases changes for
every parent selection event, individuals that perform well on different subsets of test cases are able to co-exist [4, 9].

The drawback of many test-based selection schemes, including lexicase, is that assessing individuals using a large set of test cases can be computationally expensive; this drawback is exacerbated when tests are costly to perform (e.g., robotics simulations). Using a large number of test cases constrains the number of generations we are able to run an evolutionary search. Using too few test cases, however, may fail to accurately represent the problem domain and lead to overfitting. To combat this, many techniques dynamically subsample test cases (from a large pool representative of the problem domain) for candidate solution evaluation and selection (see [14, 20] for recent reviews). Indeed, subsampling has been used to reduce computational effort in GP [2, 7] and to improve the generalizability of evolved programs [8, 20].

In this chapter, we characterize the effects of random subsampling on the lexicase parent-selection algorithm. Previous work has shown that lexicase selection performs well when combined with random subsampling. Moore and Stanton applied random subsampling to lexicase selection in the context of an evolutionary robotics problem because evaluating robot controllers on test cases (simulation environments) was too costly to permit exhaustive assessments [23–25]. In [13], we proposed down-sampled and cohort lexicase selection, two variants of standard lexicase that employ random subsampling to reduce the number of per-generation evaluations required by lexicase selection. We demonstrated that both down-sampled and cohort lexicase could yield higher problem-solving success than standard lexicase on a fixed evaluation budget in the context of program synthesis [13].

Here, we explore why random subsampling can improve lexicase selection's problem-solving success. Additionally, we characterize the effect of subsampling on diversity and specialist maintenance, both of which have been shown to be important factors behind lexicase selection's efficacy [4, 9, 10, 24]. We show that the improvement in problem-solving success gained from subsampling is due to its facilitation of deeper evolutionary searches (i.e., consisting of more generations relative to standard lexicase) given a fixed evaluation budget. Moreover, we show that both down-sampled and cohort lexicase find solutions with less computational effort than standard lexicase. While we predicted that subsampling would degrade diversity, we find no evidence for systematic degradation of phenotypic diversity. However, as the level of subsampling increases, cohort lexicase generates and maintains more phylogenetic diversity than down-sampled lexicase. As expected, we find that random subsampling degrades specialist preservation relative to standard lexicase. Our phenotypic diversity results seem to contradict our specialist preservation findings; this could be because of the particular problems we are using or because of our choice of when to measure phenotypic diversity (at the time a solution was found). Future work will continue investigating how subsampling affects diversity maintenance in an expanded problem domain and with more fine-grained data collection and analysis.
1.2 Lexicase Selection

Spector [27] initially proposed the lexicase parent-selection algorithm for solving modal GP problems where programs may have to output qualitatively different responses to different inputs. To accomplish this, lexicase does not aggregate fitness across test cases like many selection schemes. Instead, for each selection event (where a single parent must be selected), lexicase randomly permutes the test cases in the training set. Each test case is then considered in this permuted order, keeping only those candidate solutions that solve the focal test case (or tie for highest fitness if no candidate solutions solve it). This process continues until either a single candidate solution remains or all test cases have been exhausted. If more than one candidate solution remains, the winner is chosen at random. Each selection event follows this pattern with a different permutation until all parents for the next generation have been selected. Because the order of test cases changes for every parent selection event, individuals that perform well on different subsets of test cases are able to co-exist [4, 9]. This dynamic creates niches in a lexicase population and encourages multiple co-existing solutions that focus on different subsets of test cases. See [12, 27] for a more detailed description of lexicase selection.

Since its conception, lexicase selection has been successfully applied in the field of genetic programming. Such applications include program synthesis [11] and regression [16]. Lexicase selection has also been used in other areas such as evolutionary robotics [23], genetic algorithms [22], and learning classifier systems [1].
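To make the preceding description concrete, here is a minimal sketch of a single lexicase selection event (our illustration, not the authors' implementation; it assumes a hypothetical error(candidate, case) function where lower values are better and 0 means the case is solved):

```python
import random

def lexicase_select(population, test_cases, error):
    """Perform one lexicase selection event and return a single parent.

    error(candidate, case) -> float, where lower is better.
    """
    candidates = list(population)
    cases = list(test_cases)
    random.shuffle(cases)  # a fresh random ordering for every selection event
    for case in cases:
        best = min(error(c, case) for c in candidates)
        # keep only the candidates that tie for the best score on this case
        candidates = [c for c in candidates if error(c, case) == best]
        if len(candidates) == 1:
            return candidates[0]
    return random.choice(candidates)  # remaining ties are broken at random
```

A full generation of parent selection simply repeats this event once per parent slot, reshuffling the test cases each time.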
1.2.1 Applying Subsampling to Lexicase Selection

Several variants of lexicase selection (and lexicase-inspired selection algorithms) exist, such as ε-lexicase, truncated lexicase, batch-tournament, batch-lexicase, down-sampled lexicase, and cohort lexicase [1, 13, 21, 28]. Here, we investigate down-sampled and cohort lexicase, both of which leverage random subsampling to reduce the number of per-generation evaluations required for lexicase selection.
1.2.1.1 Down-Sampled Lexicase
Down-sampled lexicase [13] applies the random subsampling technique [8] to lexicase selection. Each generation, down-sampled lexicase selects a random subset of the test cases in the training set to use for all selection events, guaranteeing that unselected test cases are not evaluated. Here, we use D to represent our 'down-sample factor', where each generation 1/D of the training set is used. For example, a D of 5 implies a 20% subsampling rate (i.e., each generation we use one fifth of the training set to evaluate individuals). Down-sampling divides the number of evaluations performed each generation by D. Given a fixed budget of evaluations,
the computational savings afforded by down-sampling allow us to continue our evolutionary search for more generations (or with a larger population size) than standard lexicase selection.
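Continuing the sketch above, down-sampled lexicase changes only which test cases the selection events see each generation; a hedged illustration (reusing the hypothetical lexicase_select and error from the previous sketch):

```python
def downsampled_lexicase_generation(population, training_set, error, D):
    """Select len(population) parents under down-sampled lexicase.

    A fresh random 1/D of the training set is drawn each generation,
    and every selection event in that generation uses the same subsample.
    """
    subsample = random.sample(training_set, max(1, len(training_set) // D))
    return [lexicase_select(population, subsample, error)
            for _ in range(len(population))]
```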
1.2.1.2 Cohort Lexicase
Cohort lexicase selection [13] makes use of the full set of training cases each generation while also ensuring that each prospective solution is evaluated against only a subset of tests. Every generation, cohort lexicase randomly partitions both the population and the test-case set into K equally-sized sub-groups (cohorts). Each of the K candidate solution cohorts is paired with a test-case cohort, and each candidate solution is evaluated against only the test cases in its cohort. Thus, the number of evaluations performed each generation (relative to standard lexicase selection) is divided by K. Candidate solutions compete only within their cohort, and within-cohort competition is arbitrated by the test cases in the associated cohort. Because cohorts are shuffled each generation, offspring will be assessed with a different subset of test cases than their parents. Note that the down-sampling factor, D, is identical to the number of cohorts, K, in both the total number of evaluations and the number of training cases a candidate solution sees per generation. Thus, K and D provide equivalent subsampling rates for the two selection schemes, and hereafter, we substitute D for K to simplify comparisons between down-sampled and cohort lexicase.
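A corresponding sketch for cohort lexicase (again reusing the hypothetical lexicase_select; the strided slicing is just one convenient way to partition the shuffled lists into K equally-sized cohorts):

```python
def cohort_lexicase_generation(population, training_set, error, K):
    """Select len(population) parents under cohort lexicase.

    The population and the training set are shuffled and split into K
    cohorts; candidates in cohort i compete only on the test cases in
    test cohort i, and each cohort contributes as many parents as it
    has members.
    """
    pop = random.sample(population, len(population))        # shuffled copy
    tests = random.sample(training_set, len(training_set))  # shuffled copy
    parents = []
    for i in range(K):
        pop_cohort = pop[i::K]
        test_cohort = tests[i::K]
        parents.extend(lexicase_select(pop_cohort, test_cohort, error)
                       for _ in pop_cohort)
    return parents
```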
1.3 Methods

We conducted a series of experiments to characterize the effects of applying random subsampling to lexicase selection. In all evolution experiments, we evolved populations of linear genetic programs to solve four program synthesis problems. Using this setup, we replicated previous results [13], tested the effect of the additional generations afforded by subsampling, and investigated how different types of subsampling affect the computational effort expended to solve problems. Additionally, we analyzed how these subsampling techniques affect both population diversity and specialist maintenance.
1.3.1 Evolutionary System

For each of our evolution experiments, we evolved populations of 1000 linear genetic programs on four program synthesis problems (each described in detail in Sect. 1.3.2). Our linear-GP representation used:
• an instruction set that includes arithmetic, memory management, flow-control, and additional problem-specific instructions
• memory accessed with binary tags [18]
• modules referenced via binary tags [17, 29]

A more detailed description of our GP system (including source code) can be found in the supplemental material [5].
We propagated programs asexually, subjecting offspring to mutations. Single-instruction insertions, deletions, and substitutions were applied, each at a per-instruction rate of 0.005. Modules were duplicated and deleted at a per-module rate of 0.05. We also applied 'slip' mutations [19], which have the possibility of duplicating or deleting sequences of instructions, at a per-program rate of 0.05. Program tags were mutated at a per-bit rate of 0.001. The run-termination criteria varied per experiment and are included in each experiment description.
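For reference, the mutation rates above can be gathered into one configuration object; a hypothetical summary (the key names are ours, not taken from the system's source code):

```python
# Mutation configuration for asexual reproduction (rates from the text).
MUTATION_CONFIG = {
    "inst_insertion_rate":     0.005,  # per instruction
    "inst_deletion_rate":      0.005,  # per instruction
    "inst_substitution_rate":  0.005,  # per instruction
    "module_duplication_rate": 0.05,   # per module
    "module_deletion_rate":    0.05,   # per module
    "slip_mutation_rate":      0.05,   # per program (duplicates/deletes runs)
    "tag_bit_flip_rate":       0.001,  # per tag bit
}
```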
1.3.2 Program Synthesis Problems

For all evolution experiments, we evolved programs to solve problems from the general program synthesis benchmark suite [11]. To test our hypotheses, we needed a set of problems known to be challenging but not impossible for GP systems to solve. The general program synthesis benchmark suite comprises introductory-level computer science programming questions, many of which have been solved using lexicase selection [6, 11]. We used the following four program synthesis problems in our experiments: Smallest, Median, For Loop Index, and Grade. A description of each problem is given below.

Smallest: Programs are given four integer inputs (−100 ≤ input_i ≤ 100) and must output the smallest value. We measured program performance on a pass-fail basis. We limited program length to a maximum of 64 instructions and also limited the maximum number of instruction-execution steps to 64.

Median: Programs are given three integer inputs (−100 ≤ input_i ≤ 100) and must output the median value. We measured program performance against test cases on a pass-fail basis. We limited program length to 64 instructions and also limited the maximum number of instruction-execution steps to 64.

For Loop Index: Programs receive three integer inputs start (−500 ≤ start ≤ 500), end (−500 ≤ end ≤ 500, start < end), and step (1 ≤ step ≤ 10). Programs must output the following sequence:

n_0 = start
n_i = n_{i−1} + step
for each n_i < end. We limited program length to a maximum of 128 instructions and also limited the maximum number of instruction-execution steps to 256. Program performance against a test case was measured on a gradient, using the Levenshtein distance between the program's output and the correct output sequence.

Grade: Programs receive five integers in the range [0, 100] as input: A, B, C, D, and score. A, B, C, and D define the minimum score needed to receive that letter grade. These are specified such that A > B > C > D (i.e., they are monotonically decreasing and unique). The program must read in these thresholds and return the appropriate letter grade for the given score, or F if score < D. We limited program length to a maximum of 64 instructions and also limited programs' maximum instruction-execution steps to 64. On each test, we evaluated programs on a pass-fail basis.

For these experiments, the Smallest, Median, and For Loop Index problems have an associated training set of 100 test cases, and a separate validation set of 1000 test cases (withheld during fitness evaluations). We used 200 training cases and 2000 validation cases for the Grade problem. A program had to solve all test cases in both the training and validation sets to be considered a "perfect" solution. All training and validation sets can be found in the supplemental material [5].
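As an illustration of how the For Loop Index problem is scored, a sketch of the expected-output generator for one test case (the gradient itself comes from comparing this sequence to a program's output with Levenshtein distance, using any standard implementation):

```python
def for_loop_index_expected(start, end, step):
    """Expected output: n_0 = start, n_i = n_{i-1} + step, for each n_i < end."""
    values = []
    n = start
    while n < end:
        values.append(n)
        n += step
    return values

# Example: for_loop_index_expected(2, 10, 3) -> [2, 5, 8]
```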
1.3.3 Experimental Design

We conducted five experiments: (1) we replicated a previous experiment [13] to evaluate subsampling's effect on lexicase selection's problem-solving success; (2) we tested whether or not subsampling improves problem-solving success because it facilitates deeper evolutionary searches; (3) we evaluated whether subsampling can reduce the computational effort expended by lexicase selection to solve problems; (4) we tested the effect of random subsampling on lexicase selection, comparing the diversity maintenance of standard, down-sampled, and cohort lexicase; (5) we compared each of standard, down-sampled, and cohort lexicase's capacity to maintain specialist candidate solutions (i.e., programs with low aggregate fitness that solve test cases that the majority of the population fails).
1.3.3.1 Does Subsampling Improve Lexicase Selection's Problem-Solving Success Given a Fixed Computation Budget?
First, we replicated the experiment conducted in Hernandez et al. [13] where both down-sampled and cohort lexicase improved problem-solving success relative to standard lexicase selection. To evaluate whether subsampling improves lexicase’s problem-solving success, we evolved programs using down-sampled, cohort, and standard lexicase selection to solve each of the four program synthesis problems (described in Sect. 1.3.2). While the sets of program synthesis problems are not
identical, the main difference between the two experiments is that our previous work included a test case that was designed to minimize program size of candidate solutions that solved all normal test cases; this minimizing test case was discarded for all experiments in this work. For a control, we also tested reduced lexicase: standard lexicase performed on a statically reduced training set that was randomly sampled at the beginning of the run. Reduced lexicase is similar to down-sampled lexicase, with the exception that test cases remain constant throughout the evolutionary search and are not resampled every generation. All three of these lexicase variants were tested at five subsampling levels: 100% (identical to standard lexicase), 50, 25, 10, and 5% (D = 1, 2, 4, 10, and 20, respectively). For standard lexicase and each variant, we limited each instance to a maximum computation budget of 30,000,000 test case evaluations (evaluating a single program on a single test case counts as one evaluation). Thus, standard lexicase ran for 300 generations, and the subsampled variants ran for 300, 600, 1200, 3000, and 6000 generations, respectively. We compared the problem-solving success (i.e., the number of replicates that produced a perfect solution) of each variant to standard lexicase. For each problem, we ran 50 replicates (each with a unique random seed) of each subsampled configuration, and 250 replicates (each with a unique random seed) of standard lexicase (50 replicates for each subsampling level).
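As a quick check on the budget arithmetic (population size and training-set size taken from the text; the generation counts printed here hold for the 100-test-case problems):

```python
POP_SIZE = 1000        # programs per population
TRAIN_CASES = 100      # training cases (Smallest, Median, For Loop Index)
BUDGET = 30_000_000    # total test case evaluations

for D in (1, 2, 4, 10, 20):  # 100%, 50%, 25%, 10%, and 5% subsampling
    evals_per_gen = POP_SIZE * TRAIN_CASES // D
    print(f"D={D}: {BUDGET // evals_per_gen} generations")
# D=1: 300, D=2: 600, D=4: 1200, D=10: 3000, D=20: 6000
```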
1.3.3.2 Does Subsampling Improve Lexicase Selection's Problem-Solving Success Because it Facilitates Deeper Searches?
Both down-sampled and cohort lexicase perform fewer test case evaluations per generation than standard lexicase, allowing us to run evolutionary searches for more generations given a fixed computation budget (i.e., a fixed number of total test case evaluations). We expected that subsampling improves lexicase's problem-solving success because it enables deeper searches. To test this hypothesis, we repeated the performance experiment (described previously in Sect. 1.3.3.1), except we evolved all populations (regardless of selection scheme and subsampling level) for 300 generations. We compared the number of successful replicates from each of down-sampled, cohort, and standard lexicase. If down-sampled and cohort lexicase lose their performance edge over standard lexicase under this constraint, their advantage must come from the generations beyond the 300-generation limit during which they would otherwise have continued evolving. This finding would suggest that subsampling's improved problem-solving success results from its facilitation of deeper evolutionary searches.
1.3.3.3 Does Random Subsampling Reduce the Computational Effort Required to Solve Problems with Lexicase Selection?
Our previous work [13] shows that subsampling can improve lexicase selection's problem-solving success given a fixed computational budget. Here, we are interested in whether or not subsampling reduces the total computational effort required to find solutions; that is, do down-sampled and cohort lexicase generally find solutions using fewer total evaluations than standard lexicase selection? We evolved programs on the four program synthesis problems described previously (Sect. 1.3.2) using down-sampled, cohort, and standard lexicase (at a 10% subsampling level for down-sampled and cohort lexicase). For each condition, we ran 50 replicate populations. Because we wanted to compare how much computational effort it generally took for a particular selection scheme to solve a problem, we only used data from the first 25 replicates of each condition to solve the problem (i.e., the 25 replicates per condition that used the least computational effort). We also included truncated lexicase [28], another lexicase variant, which reduces the rigidity of lexicase selection by limiting the number of test cases used in a selection event before a candidate solution is selected. Truncated lexicase also has the potential to reduce the computational effort needed to find solutions. For our truncated lexicase condition, we used a truncation level equal to 10% of the training set.
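For comparison, a sketch of the truncated-lexicase idea as we read it from [28] (not the original implementation): proceed exactly as lexicase does, but consider at most t test cases before breaking any remaining ties at random:

```python
def truncated_lexicase_select(population, test_cases, error, t):
    """One truncated lexicase selection event using at most t test cases."""
    candidates = list(population)
    cases = random.sample(test_cases, len(test_cases))  # shuffled copy
    for case in cases[:t]:
        best = min(error(c, case) for c in candidates)
        candidates = [c for c in candidates if error(c, case) == best]
        if len(candidates) == 1:
            break
    return random.choice(candidates)
```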
1.3.3.4 Does Subsampling Degrade Lexicase Selection's Diversity Maintenance?
Part of lexicase selection’s success is known to be the result of its effectiveness at diversity maintenance [4, 9, 24]. Subsampling, however, is likely to degrade diversity maintenance because it both reduces the total number of niches available each generation (i.e., there are fewer possible orderings of test cases) and decreases niche stability from generation to generation (i.e., the set of possible test case permutations changes every generation). Thus, we expected populations evolved using down-sampled and cohort lexicase selection to have lower overall diversity and more frequent selective sweeps (coalescence events) than those evolved with standard lexicase selection. Additionally, cohort lexicase inherently buffers populations against selective sweeps, slowing down the rate at which a lineage can take over a population by limiting competition each generation to within cohorts. As such, we expected cohort lexicase to have fewer selective sweeps (and thus more phylogenetic diversity) than down-sampled lexicase. To test our hypotheses, we replicated the experiment in Sect. 1.3.3.1, running both subsampling lexicase variants (at a range of subsampling levels) and standard lexicase for 30,000,000 total evaluations. In these runs, we collected data on genotypic, phenotypic, and phylogenetic diversity. We measured genotypic and phenotypic diversity with the Shannon diversity index. To assess phylogenetic diversity, we used a suite of phylogenetic diversity metrics (see [3] for a review). After all replicates terminated, we analyzed the results of each of these diversity
measures at the time solutions were found.2 Within each subsampling level, we compared cohort, down-sampled, and standard lexicase selection.
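The Shannon diversity index used here can be computed directly from phenotype (or genotype) counts; a minimal sketch:

```python
from collections import Counter
from math import log

def shannon_diversity(phenotypes):
    """Shannon diversity index over a population's phenotype descriptors.

    phenotypes: iterable of hashable descriptors, e.g. tuples of
    per-test-case performances.
    """
    counts = Counter(phenotypes)
    n = sum(counts.values())
    return -sum((c / n) * log(c / n) for c in counts.values())
```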
1.3.3.5 Does Subsampling Reduce Lexicase Selection's Capacity to Maintain Specialists?
Recent work by Helmuth et al. [10] demonstrates lexicase's tendency to select specialist individuals (i.e., individuals that have a low aggregate fitness but perform well on a subset of tests that the majority of the population fails). Helmuth et al. found that lexicase's ability to select specialists is a major driver behind its problem-solving success. Just as we expected subsampling to degrade lexicase selection's diversity maintenance, we also expected subsampling to inhibit specialist maintenance. Because specialists perform well on few test cases (and potentially poorly on the rest), a specialist's likelihood of being selected by lexicase selection is reduced if any of the test cases it passes are not sampled. Thus, we hypothesized that both down-sampled and cohort lexicase reduce lexicase selection's capacity to maintain specialist individuals. To test our hypothesis, we investigated the extreme case of populations with a single specialist. We generated hypothetical populations, each containing a 'specialist' and many 'generalists'. In each generated population, the specialist individual was able to solve only one focal test case, and none of the generalists were allowed to solve the focal test case. We varied the probability at which generalists could solve each non-focal test case, ranging from 0.1 to 1.0 (where all generalists solved all non-focal test cases). We also varied population size and the total number of test cases. Table 1.1 shows all parameter values used in this experiment. We generated 100 populations for each combination of these parameters.
Table 1.1 Generated population configurations

Parameter                                   Values
Population size                             10, 20, and 100
# test cases                                10, 20
Generalist pass rate on non-focal tests     0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0

We generated 100 populations for all combinations of the parameters given in this table.
2 Choosing when to measure diversity in evolutionary computation is an interesting problem. Diversity maintenance is often viewed as a mechanism to avoid premature convergence on suboptimal solutions. If our goal is to compare how well different selection schemes maintain diversity, when should we measure it? Measuring diversity after a global solution is found is not particularly meaningful, as finding the solution often causes the population to converge, decreasing diversity. We measured diversity at the time the solution is found to mitigate this problem. However, this choice only partially addresses the underlying problem: the process of evolution often involves many selective sweeps and subsequent divergences, and we cannot know where in this cycle our measurements occurred.
For each population, we calculated the probability of each candidate solution being selected at least once to be a parent in the next generation under standard, down-sampled, and cohort lexicase selection. For standard lexicase selection, we calculated exact probabilities: we enumerated all possible orderings of test cases, counting the number of enumerations where each candidate solution is selected. This is intractable for the subsampled lexicase variants, so we took a sampling approach. To approximate the selection probability in the lexicase variants, we randomly subsampled the population according to the selection scheme being tested. After subsampling, down-sampled lexicase is equivalent to standard lexicase with fewer test cases, while cohort lexicase is equivalent to standard lexicase conducted separately on each cohort. Thus, we calculated the selection probabilities for each candidate solution with that particular random subsampling. This process was repeated 100,000 times to approximate the true selection probabilities under down-sampled and cohort lexicase. These calculations allowed us to compare the specialist’s selection probability across configurations.
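A sketch of this sampling approach for the subsampled variants, shown here for down-sampled lexicase and reusing the hypothetical helpers from Sect. 1.2 (an exact per-subsample calculation, as described above, would replace the inner simulation loop):

```python
def approx_specialist_survival(focal, population, tests, error, D,
                               trials=10_000):
    """Estimate P(focal is selected at least once per generation) under
    down-sampled lexicase by repeatedly drawing random subsamples."""
    n_parents = len(population)
    hits = 0
    for _ in range(trials):
        subsample = random.sample(tests, max(1, len(tests) // D))
        if any(lexicase_select(population, subsample, error) is focal
               for _ in range(n_parents)):
            hits += 1
    return hits / trials
```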
1.3.4 Statistical Analyses

All statistics were calculated using the R statistical computing language v3.6.0 [26], and all figures in this work were created using the ggplot2 R package [31]. We compared problem-solving success rates among different independent conditions using Fisher's exact tests, and we corrected for multiple comparisons using the Holm–Bonferroni method where appropriate. For measures of computational effort and diversity, we performed a Kruskal–Wallis test to look for statistically significant differences among independent conditions. For comparisons in which the Kruskal–Wallis test was significant (significance level of 0.05), we performed a post-hoc Mann–Whitney test between relevant conditions (with a Holm–Bonferroni correction for multiple comparisons where appropriate). Statistical analyses for the specialist experiment also used a Kruskal–Wallis test, but swapped the Mann–Whitney test for a Wilcoxon test because the data were paired. Analysis and visualization scripts can all be found in the supplemental material [5].
1.4 Results and Discussion

1.4.1 Subsampling Improves Lexicase Selection's Problem-Solving Success

[Figure 1.1, "Perfect Solutions Found - Constant Evaluations": bar chart with panels for Smallest, Median, For Loop Index, and Grade; y-axis: subsampling level; x-axis: fraction of runs that found perfect solutions. Only the caption is reproduced here.]
Fig. 1.1 Problem-solving success after 30,000,000 evaluations. Bars show the fraction of replicates that found a perfect solution. An asterisk (*) to the left of a bar denotes a significant difference compared to the standard lexicase results (using a Holm–Bonferroni correction for multiple comparisons). Results for standard lexicase (light purple) consist of 250 replicates per problem, while results for reduced lexicase (dark purple), down-sampled lexicase (yellow), and cohort lexicase (orange) consist of 50 replicates for each configuration.

Figure 1.1 shows the fraction of replicates where a perfect solution evolved within 30,000,000 evaluations under each of down-sampled, cohort, reduced, and standard lexicase selection. For each program synthesis problem, we conducted a Fisher's
exact test (0.05 significance level) between the 250 standard lexicase replicates and the 50 subsampled replicates of each experimental condition; we corrected for multiple comparisons using the Holm–Bonferonni method. Our data are largely consistent with previous work [13]. For three of the four problems (Smallest, Median, and Grade), statically reducing the training set beyond a critical threshold significantly decreased problem-solving success. For example, at 5 and 10% subsampling levels, reduced lexicase performs significantly worse than standard lexicase in each of the Smallest, Median, and Grade problems. Reduced lexicase rarely outperformed standard lexicase, only doing so in three cases: Grade at 25 and 50% subsampling, and For Loop Index at 10% subsampling. Statically reducing the size of the training set did not inhibit our capacity to solve the For Loop Index problem; we suspect this is because the training set (100 test cases) is much larger than necessary. The same trend is true for 50- and 25%-reduced lexicase on the Grade problem. Both down-sampled and cohort lexicase performed significantly better than standard lexicase on at least one subsampling level for every problem. Specifically, down-sampled lexicase significantly outperformed standard lexicase on all problems at the 5 and 10% subsampling levels, while cohort lexicase also outperformed standard lexicase at 5 and 10% subsampling on all problems except For Loop Index at the 10% subsampling level. Neither down-sampled nor cohort lexicase performed significantly worse than standard lexicase in any experimental configuration.
These runs achieved better performance at more extreme subsampling levels than those in [13]; this is because we removed all selection pressure to reduce program size. In that previous work, we included a single test case that favored small programs and that only took effect when a program solved all other test cases it was evaluated against. At high subsampling levels (e.g., 5%), it is easy for programs that do not generalize well to prematurely trigger this size-minimization test case, which negatively impacted problem-solving success rates. These results support our previous claim that subsampling can improve lexicase selection's problem-solving success. Although there is evidence that subsampling can improve solution rates, a different approach is needed to tease apart why this difference exists and how down-sampled and cohort lexicase actually differ.
1.4.2 Deeper Evolutionary Searches Contribute to Subsampling's Success

[Figure 1.2, "Perfect Solutions Found - Constant Generations": bar chart with panels for Smallest, Median, For Loop Index, and Grade; y-axis: subsampling level; x-axis: fraction of runs that found perfect solutions. Only the caption is reproduced here.]
Fig. 1.2 Evolutionary results at the end of 300 generations. Bars show the fraction of replicates that found a perfect solution on or before 300 generations. An asterisk (*) to the left of a bar denotes significant difference compared to the standard lexicase results. Results for standard lexicase (light purple) consist of 250 replicates per problem, while results for down-sampled lexicase (yellow) and cohort lexicase (orange) consist of 50 replicates for each experimental configuration.

Figure 1.2 shows the fraction of replicates where a perfect solution evolved after 300 generations under each of down-sampled, cohort, and standard lexicase selection. After 300 generations, conditions with aggressive subsampling (e.g., 5%) have made fewer total evaluations than conditions with milder subsampling (e.g., 50%) or standard lexicase. To be exact, 50, 25, 10, and 5% subsampling complete 15,000,000,
7,500,000, 3,000,000, and 1,500,000 evaluations, respectively. We hypothesized that random subsampling improves lexicase selection because it allows evolutionary searches to run for more generations given a fixed evaluation budget. By terminating all replicates after 300 generations, we expected subsampling to lose its advantage over standard lexicase. Given a fixed number of generations, neither down-sampled nor cohort lexicase significantly outperformed standard lexicase at any subsampling level. In fact, down-sampled and cohort lexicase performed significantly worse than standard lexicase on all problems with 5 and 10% subsampling rates except in three cases: cohort at 10% subsampling on Grade, down-sampled at 10 and 5% subsampling on For Loop Index. As shown in Sect. 1.4.1, when given equivalent computational budgets (i.e., total number of training case evaluations), subsampling significantly improves lexicase’s problem-solving success. However, this experiment shows that when we restrict down-sampled and cohort lexicase to the same number of generations as standard lexicase, they both have significantly diminished success on the same problems. These data support our hypothesis that deeper evolutionary searches contribute to the success of the subsampled variations on lexicase selection.
1.4.3 Subsampling Reduces Computational Effort

[Figure 1.3, "Computational Effort": box plots of the number of evaluations (log scale) for panels Smallest, Median, For Loop Index, and Grade, comparing standard, down-sampled (10%), cohort (10%), and truncated (10%) lexicase. Only the caption is reproduced here.]
Fig. 1.3 The number of evaluations required for each treatment to solve the specified problems. The 25 replicates with the fewest evaluations for each treatment are shown. An asterisk (*) under a box denotes significant difference between that treatment and standard lexicase.

Next, we explored how subsampling affects the amount of computational effort required to solve problems in the context of lexicase selection. For this experiment, we removed all evaluation and generation termination criteria. Figure 1.3 shows the number of test case evaluations in each of the first 25 replicates for each condition in which a solution evolved (i.e., the 25 replicates that required the least computational effort to solve the problem). We performed a Kruskal–Wallis test (significance level 0.05) to look for significant differences among selection schemes for each program synthesis problem. For problems in which the Kruskal–Wallis test was significant,
we performed a post-hoc Mann–Whitney test between standard lexicase and each of the down-sampled, cohort, and truncated lexicase conditions (with a Holm–Bonferroni correction for multiple comparisons). Both down-sampled and cohort lexicase used significantly fewer evaluations than standard lexicase on all four problems. Across all problems, truncated lexicase did not use significantly fewer evaluations than standard lexicase; on the Median problem, truncated lexicase actually used significantly more evaluations than standard lexicase. The data show a clear trend that 10% subsampling, whether via down-sampling or cohorts, can significantly reduce the number of evaluations needed to solve these program synthesis problems. However, truncated lexicase (using 10% of the training cases per selection event) causes either no effect or a significant increase in required evaluations.
1.4.4 Subsampling Does Not Systematically Decrease Phenotypic Diversity in Lexicase Selection

Mutations to the binary tags used by the programs to reference modules and memory are often silent (i.e., the phenotype and fitness remain the same), allowing populations to endure high mutation rates that drive adaptive evolution. As a result, almost all replicates maximize genotypic diversity, rendering comparisons uninformative. Therefore, we examined the phenotypic diversity of lexicase and the two subsampled variants. When evolution produced a candidate solution capable of solving all test cases in the training set, we immediately tested that solution on the cases in the reserved validation set as well. If this candidate solution continued to pass all test cases, we declared it a "perfect solution" and proceeded to measure the phenotypic diversity of the population it arose from. To do so, we tested all programs in the population on all test cases across both the training and validation sets. We designated each candidate solution's performances (in sequence) on all test cases as that solution's phenotype. Figure 1.4 shows the Shannon diversity of these results.

[Figure 1.4, "Phenotypic Diversity": box plots of Shannon diversity for panels Smallest, Median, For Loop Index, and Grade; y-axis: subsampling level; x-axis: Shannon diversity. Only the caption is reproduced here.]
Fig. 1.4 Shannon diversity of candidate solution phenotypes at the first generation a perfect solution was found; individual phenotypes were measured as a program's performance on each test from the training and validation sets. A dagger (†) above a box denotes significant difference with standard lexicase. A double dagger (‡) denotes significant difference between cohort lexicase and down-sampled lexicase at that subsampling level. Results consist of replicates that found a perfect solution out of 250 replicates for standard lexicase on each problem (purple boxes) and 50 replicates for each combination of problem and subsampling level for down-sampled lexicase (yellow boxes) and cohort lexicase (orange boxes).

Minimal evidence was found to support our hypothesis that subsampling results in a reduction of phenotypic diversity. After comparing the phenotypic diversity of both down-sampled and cohort lexicase to the standard algorithm, only 2 of 32 configurations resulted in a significant decrease in phenotypic diversity, both of which were down-sampled configurations. Conversely, cohort lexicase actually had significantly higher phenotypic diversity than standard lexicase in two configurations. Further, cohort lexicase results had a significantly higher phenotypic diversity than down-sampled lexicase in 4 of 16 comparisons. With only two configurations leading to decreased phenotypic diversity, we cannot conclude that there is a systematic decrease in phenotypic diversity due to subsampling for these program synthesis problems. However, these results hint
at a difference between the diversity produced by down-sampled lexicase and by cohort lexicase; we plan to explore this difference in future work.
1.4.5 Cohort Lexicase Enables More Phylogenetic Diversity Than Down-Sampled Lexicase

As with phenotypic diversity, we recorded the phylogenetic diversity metrics at the time point when populations first found a perfect solution. This timing was necessary; the discovery of a perfect solution is likely to produce a selective sweep, radically altering the structure of the phylogeny. An unavoidable side effect is that the measurements are taken after different numbers of generations have elapsed in different replicates. This discrepancy is potentially concerning, as phylogenetic diversity measurements are sensitive to the number of generations represented within the phylogeny. Adding more generations will, in many cases, legitimately increase the diversity of evolutionary history that a population contains. However, the number of generations elapsed can have a disproportionately large effect on a phylogenetic diversity metric, swamping out other effects. In this case, it is these
other effects that we are most interested in, as we have already analyzed the causes and effects of the number of generations a population goes through. Fortunately, our results comparing down-sampled vs. cohort lexicase do not appear to be driven by variation in the number of generations elapsed, as the distribution of generations at which the first perfect solution was found did not vary consistently within any subsampling level. Because this distribution did vary among subsampling levels, we are not attempting to make any strong claims about the relationship between phylogenetic diversity and degree of subsampling. Here we examine only two of the phylogenetic metrics that were calculated; plots, descriptions, and statistics of all recorded metrics can be found in the supplemental material [5].

The most recent common ancestor (MRCA) is the most recently evolved candidate solution from which all extant candidate solutions descend. For this experiment we tracked the MRCA throughout the evolutionary search, and we examined the number of selective sweeps by counting the number of times the MRCA changed (see Fig. 1.5). For all problems tested, cohort lexicase has significantly fewer MRCA changes than down-sampled lexicase at the 5, 10, and 25% subsampling levels. This pattern suggests that cohort lexicase inhibits selective sweeps in a way that down-sampled lexicase does not. A likely mechanism for this behavior is that, by explicitly fragmenting the population into groups, cohort lexicase prevents any single candidate solution from sweeping more than one cohort per generation.

[Figure 1.5, "Most Recent Common Ancestor (MRCA) Changes": box plots (log scale) for panels Smallest, Median, For Loop Index, and Grade; y-axis: subsampling level; x-axis: number of changes. Only the caption is reproduced here.]
Fig. 1.5 Number of times the most recent common ancestor (MRCA) of all extant candidate solutions changed for each evolutionary run. Changes shown on a logarithmic scale. A dagger (†) above a box denotes significant difference between cohort lexicase and down-sampled lexicase at that subsampling level. All results shown are from the replicates that found a perfect solution out of 50 replicates per experimental condition.

Another phylogenetic measure we examined was the phylogenetic divergence (i.e., how distinct the extant taxa are from each other) [3]. Here we quantify phylogenetic divergence via the mean pairwise distance of the extant solutions in the phylogeny. This metric is calculated as the average distance in the phylogenetic tree between each pair of extant candidate solutions (see Fig. 1.6) [30]. Cohort lexicase has significantly higher mean pairwise distance than down-sampled lexicase for all problems at the 5 and 10% subsampling levels. This result indicates that cohort lexicase has significantly higher phylogenetic divergence than down-sampled lexicase, providing further evidence that cohort lexicase is better than down-sampled lexicase at maintaining phylogenetic diversity.

[Figure 1.6, "Mean Pairwise Distance": box plots (log scale) for panels Smallest, Median, For Loop Index, and Grade; y-axis: subsampling level; x-axis: mean distance. Only the caption is reproduced here.]
Fig. 1.6 Mean distance between all pairs of extant taxa in the phylogenetic tree for runs of both subsampled lexicase variants at different subsampling levels. A dagger (†) above a box denotes significant difference between cohort lexicase and down-sampled lexicase at that subsampling level. All results shown consist of the replicates that found a perfect solution out of 50 replicates per experimental condition.

Other phylogenetic diversity metrics were consistent with these results. Because the differing generation counts prevent us from meaningfully comparing phylogenetic diversity across subsampling levels, all we can say conclusively is that subsampling does not appear to decrease phylogenetic diversity. That said, it may well be the case that greater phylogenetic diversity helps produce better candidate solutions. If so, this factor could explain why more generations (as opposed to more evaluation thoroughness) increases the computational efficiency of lexicase selection. A more targeted investigation will be required to determine how important phylogenetic diversity is to the success of lexicase selection variants.
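To illustrate the metric, mean pairwise distance can be computed from a simple parent-pointer phylogeny as in this sketch (our own minimal representation; real phylogeny trackers record much more):

```python
from itertools import combinations

def mean_pairwise_distance(parent_of, extant):
    """Mean phylogenetic distance over all pairs of extant taxa.

    parent_of: dict mapping each taxon to its parent (None at the root).
    """
    def depths_to_ancestors(taxon):
        depths, d = {}, 0
        while taxon is not None:
            depths[taxon] = d
            taxon, d = parent_of[taxon], d + 1
        return depths

    def distance(a, b):
        da, db = depths_to_ancestors(a), depths_to_ancestors(b)
        # path length through the closest shared ancestor
        return min(da[x] + db[x] for x in da if x in db)

    pairs = list(combinations(extant, 2))
    return sum(distance(a, b) for a, b in pairs) / len(pairs)
```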
1.4.6 Subsampling Degrades Specialist Maintenance

[Figure 1.7, "Specialist Preservation Probability": bar chart with panels by population size (20, 100) and subsampling level (10%, 50%); x-axis: non-focal candidate solution pass rate (20%, 50%, 100%); y-axis: specialist survival chance. Only the caption is reproduced here.]
Fig. 1.7 Bars show the median probability that a focal specialist will be selected as a parent in the next generation at least once; data are aggregated over 100 experimental populations. Error bars show the minimum and maximum probabilities across all populations for that configuration. The dashed lines show the expected probability for both subsampled lexicase variants for configurations where population size is 100. An asterisk (*) denotes a significant difference between cohort lexicase and down-sampled lexicase; standard lexicase was always significantly different. All configurations shown are for 20 test cases.

Across experimental conditions, lexicase selection has a significantly higher probability of selecting the specialist than either subsampled variant (see Fig. 1.7). This result supports our hypothesis that subsampling degrades specialist preservation. Interestingly, down-sampled and cohort lexicase behave differently across the conditions. Exploring these differences can help us better understand the mechanisms that cause a lexicase variant to favor specialists. When population size is large, down-sampled and cohort lexicase behave nearly identically. At higher subsampling rates specialists have a higher survival probability in both treatments. At smaller population sizes, higher subsampling rates continue to demonstrate a higher survival probability of specialists in down-sampled lexicase, but not always in cohort lexicase. At the extreme, when population size, subsampling rate, and generalist pass rate are all small, cohort lexicase has a drastically higher probability of specialist survival than down-sampled lexicase. In this case, the specialist benefits from the low generalist pass rate, since many non-specialists will fail to solve many of the test cases. Specifically, if all candidate solutions competing against the specialist fail a given test case, it will be non-discriminatory and effectively ignored. This effect is more pronounced in cohort lexicase, when the specialist is competing only within
its cohort (e.g., a cohort of size 2 for a population size of 20 with 10% subsampling), rather than the full population. At a population size of 100, this benefit is lessened because cohorts still contain a relatively large number of candidate solutions. In the remaining configurations, down-sampled lexicase has a higher probability of specialist survival than cohort lexicase.

To better understand these probabilities, consider a situation with two constraints: (1) the specialist solves only its one assigned test case, and (2) every other candidate solution can solve all test cases but the specialist's (i.e., the generalist pass rate is 1.0). While the situation is improbable, it is the worst-case scenario for selecting the specialist; relaxing either constraint could only increase the chance of selecting the specialist. In this situation, the specialist's odds of selection in a single selection event under lexicase selection are 1/T, where T is the number of test cases; that is, the probability of its focal test case being chosen first. The specialist's probability of selection for the entire next generation can be expressed as Eq. 1.1, where N is the total population size [4] (for further discussion of selection probabilities under full lexicase selection, see [15]).

P_lexicase = 1 − (1 − 1/T)^N    (1.1)

We can modify Eq. 1.1 to accommodate down-sampled lexicase by accounting for two cases. First, the specialist's sole test case can be included in the test cases used for this generation, in which case the specialist has a D/T chance of being selected in a single selection event (recall D is the down-sample factor, which divides the number of training cases such that each organism sees 1/D of the full training set each generation). Otherwise the specialist's test case is not included, and the specialist has no chance of being selected. Thus, we arrive at Eq. 1.2.

P_down-sampled = (1/D) · (1 − (1 − D/T)^N)    (1.2)

Finally, we can also account for cohort lexicase selection. Cohort lexicase also gives the specialist a 1/D chance of being evaluated against its sole test case. The only difference is in the number of selection events; cohort lexicase can be thought of as standard lexicase being conducted on each cohort. Thus, in the case where the specialist is in the same cohort as its test case, it does not have N selection events in which to be selected, but instead N/D. This gives us the final equation, Eq. 1.3.

P_cohort = (1/D) · (1 − (1 − D/T)^(N/D))    (1.3)
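Equations 1.1–1.3 are straightforward to evaluate; a small sketch:

```python
def p_lexicase(N, T):
    """Eq. 1.1: worst-case chance the specialist is picked at least once."""
    return 1 - (1 - 1 / T) ** N

def p_downsampled(N, T, D):
    """Eq. 1.2: the focal case is sampled with probability 1/D."""
    return (1 - (1 - D / T) ** N) / D

def p_cohort(N, T, D):
    """Eq. 1.3: only N/D selection events occur in the specialist's cohort."""
    return (1 - (1 - D / T) ** (N / D)) / D

# Example (N=1000, T=100, D=10): both subsampled variants sit near the
# 1/D = 0.1 ceiling, while standard lexicase is near 1.0.
print(p_lexicase(1000, 100), p_downsampled(1000, 100, 10), p_cohort(1000, 100, 10))
```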
Plotting these equations, we can see both that down-sampled and cohort lexicase approach a maximum specialist survival probability of 1/D, and that down-sampled lexicase approaches that limit at lower population sizes than cohort lexicase (see Fig. 1.8). The plots also show that increasing the number of training cases increases the population size required to reach the 1/D limit. Thus the two subsampled lexicase variants have the same maximum specialist selection probability, but smaller populations will see a lower value for cohort lexicase. These theoretical findings help explain our empirical results. Again, this is the worst-case scenario for the specialist; Figure 1.8 shows only the lower bound on the specialist selection probability. Further work is needed to see how specialist preservation changes under different situations (e.g., more copies of the specialist, less-elite generalists, specialists that solve more than one test case, etc.).

[Figure 1.8, "Worst-case Specialist Preservation": line plots of predicted specialist survival chance vs. population size, in panels for 20, 100, and 250 tests crossed with 10%, 25%, and 50% subsampling. Only the caption is reproduced here.]
Fig. 1.8 Probabilities that the focal specialist will be selected to be a parent in the next generation at least once in the situation where there is one specialist, which solves only one test case, but is also the only candidate solution to solve that specific test case. Meanwhile, all other candidate solutions solve all other test cases. Note the special case of a population size of 10 with 10% subsampling: here, each cohort has one solution, which guarantees selection exactly once with no selective pressure.
1.5 Conclusion

Here, we investigated the effects of random subsampling on lexicase selection. We replicated previous results [13], demonstrating that subsampling improves lexicase's problem-solving success, and we have shown that subsampling's success is a result of it enabling deeper evolutionary searches (i.e., running searches for more
1 Characterizing the Effects of Random Subsampling on Lexicase Selection
21
tions). Moreover, we have shown that subsampling reduces the total computational effort required to evolve solutions in the context of lexicase selection. We expected that applying subsampling to lexicase selection would degrade phenotypic diversity, but have found no evidence of systematic degradation. However, we did find evidence that cohort lexicase is better at generating and preserving phylogenetic diversity than down-sampled lexicase. Finally, we have shown that subsampling does reduce lexicase’s capacity to maintain specialist individuals. Overall, our results highlight the value of random subsampling in lexicase selection, showing that it can improve problem-solving success and save computational effort. However, we also demonstrate that subsampling degrades specialist preservation, and as such, for problems where maintaining specialists is especially important, subsampling might have an overall negative effect on problem-solving success. Future work should explore how subsampling affects both overall population diversity and specialist maintenance at a fine-grained scale and on a wider range of problem types. Acknowledgements This research was supported by the National Science Foundation through the BEACON Center (Coop. Agreement No. DBI-0939454), a Graduate Research Fellowship to AL (Grant No. DGE-1424871), and Grant No. DEB-1655715 to CO. Michigan State University provided computational resources through the Institute for Cyber-Enabled Research.
References
1. Aenugu, S., Spector, L.: Lexicase selection in learning classifier systems. In: Proceedings of the Genetic and Evolutionary Computation Conference - GECCO 2019, pp. 356–364. ACM Press, Prague, Czech Republic (2019)
2. Curry, R., Heywood, M.: Towards efficient training on large datasets for genetic programming. In: A. Tawfik, S. Goodwin (eds.) Conference of the Canadian Society for Computational Studies of Intelligence, pp. 161–174. Springer (2004)
3. Dolson, E., Lalejini, A., Jorgensen, S., Ofria, C.: Quantifying the tape of life: Ancestry-based metrics provide insights and intuition about evolutionary dynamics. In: Artificial Life Conference Proceedings, pp. 75–82. MIT Press (2018)
4. Dolson, E.L., Banzhaf, W., Ofria, C.: Ecological theory provides insights about evolutionary computation. Preprint, PeerJ Preprints (2018). URL https://peerj.com/preprints/27315
5. Ferguson, A.: FergusonAJ/gptp-2019-subsampled-lexicase: GPTP Chapter Companion (2020). https://doi.org/10.5281/zenodo.3679380, https://github.com/FergusonAJ/gptp-2019-subsampled-lexicase
6. Forstenlechner, S., Fagan, D., Nicolau, M., O'Neill, M.: Towards understanding and refining the general program synthesis benchmark suite with genetic programming. In: 2018 IEEE Congress on Evolutionary Computation (CEC), pp. 1–6. IEEE, Rio de Janeiro (2018)
7. Gathercole, C., Ross, P.: Dynamic training subset selection for supervised learning in genetic programming. In: Y. Davidor, H.P. Schwefel, R. Maenner (eds.) Parallel Problem Solving from Nature - PPSN III, vol. 866, pp. 312–321. Springer, Berlin, Heidelberg (1994)
8. Gonçalves, I., Silva, S., Melo, J.B., Carreiras, J.M.: Random sampling technique for overfitting control in genetic programming. In: A. Moraglio, S. Silva, K. Krawiec, P. Machado, C. Cotta (eds.) European Conference on Genetic Programming. Springer (2012)
9. Helmuth, T., McPhee, N.F., Spector, L.: Effects of lexicase and tournament selection on diversity recovery and maintenance. In: Proceedings of the 2016 Genetic and Evolutionary Computation Conference Companion, pp. 983–990. ACM (2016)
10. Helmuth, T., Pantridge, E., Spector, L.: Lexicase selection of specialists. In: Proceedings of the Genetic and Evolutionary Computation Conference - GECCO 2019, pp. 1030–1038. ACM Press, Prague, Czech Republic (2019)
11. Helmuth, T., Spector, L.: General program synthesis benchmark suite. In: Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, pp. 1039–1046. ACM (2015)
12. Helmuth, T., Spector, L., Matheson, J.: Solving uncompromising problems with lexicase selection. IEEE Transactions on Evolutionary Computation 19(5), 630–643 (2015)
13. Hernandez, J.G., Lalejini, A., Dolson, E., Ofria, C.: Random subsampling improves performance in lexicase selection. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO 2019, pp. 2028–2031. ACM, New York, NY, USA (2019)
14. Hmida, H., Hamida, S.B., Borgi, A., Rukoz, M.: Sampling methods in genetic programming learners from large datasets: A comparative study. In: P. Angelov, Y. Manolopoulos, L. Iliadis, A. Roy, M. Vellasco (eds.) Advances in Big Data, vol. 529, pp. 50–60. Springer International Publishing, Cham (2017)
15. La Cava, W., Helmuth, T., Spector, L., Moore, J.H.: A probabilistic and multi-objective analysis of lexicase selection and ε-lexicase selection. Evolutionary Computation 27, 377–402 (2018)
16. La Cava, W., Spector, L., Danai, K.: Epsilon-lexicase selection for regression. In: Proceedings of the Genetic and Evolutionary Computation Conference 2016, GECCO 2016, pp. 741–748. ACM, New York, NY, USA (2016)
17. Lalejini, A., Ofria, C.: Evolving event-driven programs with SignalGP. In: Proceedings of the Genetic and Evolutionary Computation Conference - GECCO 2018, pp. 1135–1142. ACM Press, Kyoto, Japan (2018)
18. Lalejini, A., Ofria, C.: Tag-accessed memory for genetic programming. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion - GECCO 2019, pp. 346–347. ACM Press, Prague, Czech Republic (2019)
19. Lalejini, A., Wiser, M.J., Ofria, C.: Gene duplications drive the evolution of complex traits and regulation. In: Artificial Life Conference Proceedings 14, pp. 257–264. MIT Press (2017)
20. Martinez, Y., Naredo, E., Trujillo, L., Legrand, P., Lopez, U.: A comparison of fitness-case sampling methods for genetic programming. Journal of Experimental & Theoretical Artificial Intelligence 29, 1203–1224 (2017)
21. Melo, V.V., Vargas, D.V., Banzhaf, W.: Batch tournament selection for genetic programming. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion - GECCO 2019, pp. 994–1002. ACM Press, Prague, Czech Republic (2019)
22. Metevier, B., Saini, A.K., Spector, L.: Lexicase selection beyond genetic programming. In: W. Banzhaf, L. Spector, L. Sheneman (eds.) Genetic Programming Theory and Practice XVI, pp. 123–136. Springer International Publishing, Cham (2019)
23. Moore, J.M., Stanton, A.: Lexicase selection outperforms previous strategies for incremental evolution of virtual creature controllers. In: Proceedings of the 14th European Conference on Artificial Life - ECAL 2017, pp. 290–297. MIT Press, Lyon, France (2017)
24. Moore, J.M., Stanton, A.: Tiebreaks and diversity: Isolating effects in lexicase selection. In: The 2018 Conference on Artificial Life, pp. 590–597. MIT Press, Tokyo, Japan (2018)
25. Moore, J.M., Stanton, A.: The limits of lexicase selection in an evolutionary robotics task. In: The 2019 Conference on Artificial Life, pp. 551–558. MIT Press, Newcastle, United Kingdom (2019)
26. R Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria (2019). URL https://www.R-project.org/
27. Spector, L.: Assessment of problem modality by differential performance of lexicase selection in genetic programming: a preliminary report. In: Proceedings of the 14th Annual Conference Companion on Genetic and Evolutionary Computation, pp. 401–408. ACM (2012)
28. Spector, L., La Cava, W., Shanabrook, S., Helmuth, T., Pantridge, E.: Relaxations of lexicase parent selection. In: W. Banzhaf, R.S. Olson, W. Tozier, R. Riolo (eds.) Genetic Programming Theory and Practice XV, pp. 105–120. Springer International Publishing, Cham (2018)
29. Spector, L., Martin, B., Harrington, K., Helmuth, T.: Tag-based modules in genetic programming. In: Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation - GECCO 2011, p. 1419. ACM Press, Dublin, Ireland (2011)
30. Webb, C.O.: Exploring the phylogenetic structure of ecological communities: an example for rain forest trees. The American Naturalist 156(2), 145–155 (2000)
31. Wickham, H.: ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York (2016). URL https://ggplot2.tidyverse.org
Chapter 2
It Is Time for New Perspectives on How to Fight Bloat in GP
Francisco Fernández de Vega, Gustavo Olague, Francisco Chávez, Daniel Lanza, Wolfgang Banzhaf, and Erik Goodman
2.1 Introduction
A well-known phenomenon in GP is the inherent bloating behavior correlated with fitness improvement [4]. Many approaches to fix this problem have been described and applied, although no perfect solution has yet been found. This chapter considers the problem from a new point of view. The main goal is to understand whether new paths are possible, so that in the future new bloat control methods can be produced. Instead of using more traditional size-based approaches (such as penalty functions associated with chromosome size), we are particularly interested in analyzing whether parallel and distributed computing models that reduce computing time may also be useful for reducing bloat. Although island models have been analyzed before in this context [16, 17], these previous approaches relied more on the spatial structure of the models, while we are here more interested in the standard GP algorithm, run in parallel using the standard fitness parallelization approach. The analysis that we present is useful for understanding the relationship between individual size and computing time, and therefore with the bloat phenomenon. Moreover, a new set of bloat-control mechanisms could easily be derived that
F. Fernández de Vega () · F. Chávez · D. Lanza University of Extremadura, Badajoz, Spain e-mail: [email protected]; [email protected] G. Olague CICESE, Ensenada, BC, Mexico e-mail: [email protected] W. Banzhaf · E. Goodman Beacon Center, Michigan State University, East Lansing, MI, USA e-mail: [email protected]; [email protected]
makes use of the parallel architectures that are nowadays present in every computer system. We thus analyze the bloat phenomenon using execution time instead of memory consumption (size). To the best of our knowledge, this is the first time such an approach has been considered and applied. As we describe below, preliminary tests on a well-known benchmark problem show the feasibility of the idea. Thus, the main contribution of this chapter is to provide a new perspective for fighting bloat in GP, and in any evolutionary approach based on variable-size chromosomes; secondly, we describe how this perspective may inspire new bloat-control methods for GP; and finally, one such control method is described and tested with success. Although results are still preliminary, we are confident in the results obtained, which will be confirmed in the future with a series of experiments on a wider set of benchmark problems. The rest of the chapter is organized as follows: In Sect. 2.2 we contextualize the problem; its relationship with parallel architectures and scheduling is described in Sect. 2.3. The methodology applied is presented in Sect. 2.4, while Sect. 2.5 shows the experiments performed and the results obtained. Finally, we draw our conclusions in Sect. 2.6.
2.2 The Bloat Phenomenon
The bloat problem has been addressed frequently in the GP literature since it was first described by J. Koza in [3]. A good review of the topic may be found in [10]. We will refer here to ideas that have been described in the last decade and are more directly related to the approach we follow in this chapter. Among the available techniques to control bloat, particularly relevant is a method called the waiting room, introduced by Luke and Panait [6]. The idea is to add a pre-birth phase for all newly created individuals: children must wait for a period of time proportional to their size before they are allowed to enter the population and compete. Although the authors recognized that the idea was associated with the relationship between individual sizes and evaluation times, they maintained the emphasis on the size-control mechanism and hence did not elaborate on the time concept, nor did they take into account the possibilities offered by available parallel and distributed infrastructures, given their influence on evaluation time when individuals are sent to available processors. Thus, they relied on the total number of nodes an individual features, similarly to all of the other methods, although using a somewhat different approach. Another technique of interest is operator equalization, presented in [1], aimed at controlling the distribution of program sizes at each generation by defining a specific shape for the distribution. Some of the best results were achieved by using a uniform or flat distribution [9], and also by applying speciation, fitness sharing, or elitism; see [11]. But again, difficulties in effectively applying the method include how to
control the shape of the distribution without changing the nature of the search, and how to efficiently account for individuals' sizes and shapes. Although plenty of size-related techniques can be found in the literature on bloat control in GP, an analysis based on computing time has rarely, if ever, been undertaken. Here we present a deeper analysis of computing times that sheds light on the problem, in contrast to the standard approach based on individuals' sizes. Moreover, we want to see whether the relationship between size and evaluation time can be exploited in parallel systems not just to save time, but to address the bloat phenomenon in a more natural way. We must also remember that a series of papers observed that the island model offers some possibilities for fighting bloat [2, 15, 16], an observation later exploited in a proposal by Whigham and Dick [17] that considers the spatial distribution of islands in GP. The connection between the dynamics of some of the parallel models for GP and the bloat phenomenon has been shown to be mainly due to the spatial structure of the model, which relies on islands of individuals. But there is still a second source of possible improvement in parallel EAs, as we described above: the number of computing resources employed to run the algorithm, which has not yet been studied from the point of view of its influence on the algorithm's bloating behavior. Even when the simplest embarrassingly parallel model is used for running a GP experiment, a load-balancing technique must decide how individuals are distributed among the available computing resources, and this may also have an influence on the bloat phenomenon, as we show below.
2.3 Load-Balancing and Parallel GP
Among the available parallel models for EAs and GP, the only one that does not change the algorithm's behavior is the embarrassingly parallel model. All of the algorithm's steps are performed as in the sequential version; the only change is introduced in the most expensive part of the algorithm: the fitness evaluation. Thus, instead of sequentially evaluating every individual in the population, individuals are distributed among the available processors, and fitness values are computed in parallel. This model is frequently used in the parallel computing literature, where it is known as the client/server model. It requires some kind of load-balancing mechanism that reduces latencies and distributes tasks efficiently among computing resources, so that makespan is reduced. Interested readers can find a taxonomy of load-balancing methods in [7], while [19] presents a comparison of different strategies. If we focus on GP, given that a large number of individuals of different sizes must be managed across the available processors, which are typically fewer than the population size, some kind of load-balancing mechanism must be applied, in charge of sending individuals to idle processors; and this mechanism might provide new hidden properties: sometimes, a deeper analysis of the new version of a given algorithm allows us to discover properties that were not
noticed before. We are here interested in both the parallel model itself and the load-balancing technique that can be used, which together form the basis for the new proposal we describe and analyze below. Load-balancing techniques have already been considered as an implicit component of parallel versions of genetic programming. Since the nineties, static load-balancing mechanisms (the ones we consider here) have been applied within parallel versions of GP when facing difficult real-world problems. For instance, [8] describes a parallel version of GP that considers the complexity of individuals as the basis for establishing the load-balancing policy. Nevertheless, few papers since then have studied the importance of load-balancing techniques in GP. We may refer to [14], where several methods were tested. But again, no specific study of their relationship with the bloat phenomenon has yet been described.
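For reference, the embarrassingly parallel model described above amounts to farming fitness evaluations out to a pool of workers while leaving every other step of the algorithm untouched. A minimal sketch (our own illustration, with a placeholder fitness function) might look as follows:

```python
from multiprocessing import Pool

def fitness(individual):
    """Placeholder fitness: in GP this would execute the program on the
    training set, so evaluation time grows with program size, which is
    the property the time-based approach below exploits."""
    return sum(individual)  # hypothetical stand-in for program execution

def evaluate_population(population, workers=8):
    """Client/server fitness evaluation: individuals are distributed
    among idle processors; all other algorithm steps stay sequential."""
    with Pool(workers) as pool:
        return pool.map(fitness, population)
```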
2.3.1 Structural Complexity of GP Individuals
When any load-balancing technique is employed, a prediction of each task's computing time is needed, so that the method can properly decide when to launch the task. An important feature of GP individuals is their structural complexity [12]. Although this value is typically computed from the number of nodes, as in measures of computing effort [13] or lexicographic complexity [5], both are approximate estimates of the quantity actually required: the evaluation time of the individual's fitness function. But we can adopt a different point of view, as we do in the approach presented below: given that the real complexity of an individual is only measured once the individual has been evaluated, we can characterize each individual by that computing time and employ it in future decisions. Of course, that value is not available when the load-balancing mechanism must decide when to launch the evaluation of a new individual; nevertheless, it is available after the individual's evaluation. This could be useful, if not for that individual (which was already sent to be evaluated), then at least for its children, as an approximation of their evaluation times. And this is basically the idea we apply: our approach uses an individual's computing time to decide how to distribute its children among the available computing resources, and ultimately to reduce computing time while simultaneously reducing the bloat phenomenon. We thus need to keep a record of the time that each program spends during evaluation and use that information to create clusters of programs with similar durations, which are then useful for load-balancing: clusters are sent to different processors. Thus, the load-balancing mechanism relies on individuals' computing times, and this allows individuals to return from evaluation in an order that depends on their computing time. We use the computer's clock to measure the runtime of a program; this, of course, is correlated with the size of the computer program and the number of
instruction cycles required to execute it. All clusters are created without regard to the fitness function. We do not directly measure the size of the individuals, nor do we use any information about the complexity of programs other than time. As we show below, the load-balancing technique described above has an impact on the bloat phenomenon without increasing the computational complexity of the algorithm.
2.4 Methodology
As described above, we consider execution time as a measure of an individual's complexity, given that a correlation exists between size and running time; this correlation can be investigated more deeply in future work. The idea can easily be applied when individuals are evaluated: we simply take the elapsed time of an individual's evaluation as the required complexity value. The idea is particularly useful when multicore or manycore computer architectures are employed: ideally, all of the individuals in the population could be evaluated simultaneously, and their evaluation times obtained simultaneously. We simplify the measurements by directly using the elapsed evaluation time as the representation of the individual's complexity. Once the individuals' evaluation times have been obtained, and under the hypothesis that individuals of similar size will produce offspring of similar size, our proposed method groups individuals by computing time, always understanding it as an indirect (and easier to compute) measure of an individual's size. Consider that in a parallel system with as many processors as individuals, individuals of similar size will finish their evaluation at about the same time and will be ready to reproduce together. Therefore, an automatic grouping mechanism arises naturally from these parallel architectures (see Figs. 2.1, 2.2, 2.3, and 2.4). If the number of processors is smaller, then the load-balancing mechanism, which is in charge of distributing tasks among processors, decides which individuals are grouped together into single tasks, and may thus group them according to when their evaluations finish. After grouping, selection and breeding are performed within each group, so only individuals of similar size (time) are allowed to cross over. The load-balancing mechanism then creates tasks of equal cardinality by evenly dividing the whole population. A sketch of this grouping step is given below.
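The grouping step can be sketched as follows (our own illustration; the names are not taken from the authors' implementation): time each evaluation, sort the population by elapsed time, and split it into equal-sized groups that will breed in isolation.

```python
import time

def timed_evaluation(individual, fitness):
    """Return (fitness value, elapsed seconds); the elapsed time serves
    as the individual's complexity surrogate."""
    start = time.perf_counter()
    value = fitness(individual)
    return value, time.perf_counter() - start

def group_by_time(pop_with_times, n_groups):
    """pop_with_times: list of (individual, elapsed_time) pairs.
    Sort by evaluation time and split into n_groups groups of (nearly)
    equal size; the first groups get one extra individual when the
    split is uneven."""
    ordered = [ind for ind, t in sorted(pop_with_times, key=lambda p: p[1])]
    size, extra = divmod(len(ordered), n_groups)
    groups, start = [], 0
    for g in range(n_groups):
        end = start + size + (1 if g < extra else 0)
        groups.append(ordered[start:end])
        start = end
    return groups
```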
Fig. 2.1 First step: individuals are sent to be evaluated
Fig. 2.2 Second step: smaller individuals' fitness available first
Fig. 2.3 Third step: smaller individuals produce children while larger ones are coming back from evaluation
Fig. 2.4 Fourth step: larger individuals produce children
Our hypothesis states that individuals of similar size will produce offspring of similar size. This will not hold when a crossover operation does not divide an individual into two parts of similar size: in that case, crossover will produce small and large offspring whose sizes do not follow our hypothesis. Nevertheless, given that individuals are divided at random crossover points, we expect a central tendency, with offspring sizes falling between those of the parents. This can be considered a weak point of our offspring-size prediction, which can be improved in a future version. Generally speaking, however, we expect offspring to have sizes similar to their parents'.
2.4.1 Implementation
When designing a specific bloat control mechanism for GP that uses the new time-based perspective, we initially chose the parallel version of the algorithm, in particular multi-thread based models, which are available today in some of the most popular EA tools. Thus, all operations performed within each group are done in one thread, and the number of threads created corresponds to the number of groups. Each thread collects its corresponding individuals and performs the selection and breeding steps. In this way, all operations are isolated within each group, which can be formed naturally as individuals return from evaluation, according to evaluation time. After the breeding phase, the mechanism takes advantage of the fact that each group of individuals is contained in a different thread and performs the evaluation of all the corresponding individuals. Each group/thread contains the same number of individuals, all with similar execution times. As a result, the evolutionary process is considerably sped up by parallelizing the evaluation phase. Afterwards, once all individuals of the population have been evaluated and their computing times obtained, they are sent to the thread corresponding to their computing-time (size surrogate) value, so that the next breeding operations can be performed.
2.4.1.1 Software Tool
With the goal of making the bloat method easily usable, it has been implemented on top of a popular existing tool, ECJ [18]. This system is built in modules in order to facilitate the replacement of any part of the evolutionary process. In our case, we replace the module that carries out the breeding phase with the new time-based approach. Our bloat control mechanism slightly modifies the way individuals are bred. In order to apply it, two new operations have been implemented.
• GroupBreeder orchestrates the breeding phase and starts the corresponding threads (a simplified sketch of both operations follows this list).
– As the first step, individuals need to be grouped according to evaluation times. During evaluation, the elapsed time is captured, so each individual already carries it as a new feature. Before grouping, all the individuals of the population are sorted by evaluation time.
– Then, the same number of individuals goes to each group, so they are taken in order and sent to their groups. If the individuals cannot be split evenly into groups, the first groups receive one extra individual.
– Next, one thread is instantiated per group, and a group of individuals is assigned to each of them.
– Threads are started, and the program continues until all threads have finished.
• GroupBreederThread represents the threads that actually perform the selection, breeding, and evaluation of individuals.
– The module that performs selection and breeding is called, specifying the group to which these operations are to be applied. The only change made to the original implementation is to apply these operations only to the individuals that correspond to the specified group (the group that corresponds to the thread).
– Once the selection and breeding phases have finished, evaluation of the new individuals generated by this thread takes place. Note that the evaluation step has not been modified; it therefore goes through all individuals and tries to evaluate them, but it will not actually evaluate any, since they have been previously evaluated and marked as such.
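The two operations can be pictured with the simplified sketch below (ours, in Python rather than ECJ's Java; note that in CPython a process pool would be needed for true parallel speedup, since threads share the interpreter lock):

```python
from threading import Thread

def breed_group(group, select, crossover, mutate, evaluate):
    """What one GroupBreederThread does: selection, breeding, and
    evaluation restricted to a single group of individuals."""
    offspring = []
    while len(offspring) < len(group):
        parent1, parent2 = select(group), select(group)
        child = mutate(crossover(parent1, parent2))
        evaluate(child)  # also records the child's evaluation time
        offspring.append(child)
    return offspring

def group_breeder(groups, select, crossover, mutate, evaluate):
    """What GroupBreeder does: start one thread per group and wait for
    all of them; each group breeds in isolation."""
    results = [None] * len(groups)

    def run(i):
        results[i] = breed_group(groups[i], select, crossover, mutate, evaluate)

    threads = [Thread(target=run, args=(i,)) for i in range(len(groups))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return [child for group_offspring in results for child in group_offspring]
```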
2.4.2 Experiments
All experiments described below were run on an Intel(R) Xeon(R) CPU (E5530) offering 8 cores at 2.4 GHz and 8 GB of memory. The default configuration parameters set in ECJ have been used for the benchmark problem. Generations were set to 50, and 30 runs were launched for statistical purposes. The well-known even parity problem from the GP literature was used with the basic configuration already available in the ECJ toolkit. The only change is the number of bits in the chromosome: 12 bits, so that the problem is difficult enough for long runs. Regarding the load-balancing mechanism (the number of groups to be used), several configurations were employed: 1 group, which corresponds to the standard GP algorithm, as well as 2, 4, 8, 16, 32, 64, and 128 groups.
2.5 Results
Below we present and discuss the results obtained in the experiments. Average fitness and size are plotted in the figures included. The proposed method uses parallel execution: as many threads as possible are launched, so that individuals of different sizes are evaluated on different processors. Yet the ideas extracted can also be adapted to experiments run in a sequential fashion; we include an analysis of results in this context as well.
2.5.1 Parallel Model
In the parity problem, fitness is monotonically affected by the number of threads (groups), as can be observed in Fig. 2.5. Nevertheless, if we take into account the scale employed, the differences are really narrow. If we focus instead on size evolution, a dramatic reduction, down to a third of the size of a standard run (1 group), can be seen in Fig. 2.6.
Fig. 2.5 Best-fitness evolution along generations (averaged over 30 runs) for the parity problem (maximizing fitness)
Fig. 2.6 Size-evolution along generations (averaged over 30 runs) for the parity problem
The slightly affected fitness may be acceptable, taking into account the considerable reduction in size.
2.5.2 Sequential Execution
Although the idea described previously was born from considering how individuals can be run on parallel computing systems, given that individuals naturally group together when a number of them are launched for evaluation on different processors and return simultaneously when their running times are similar, it can be adapted to sequential environments with minimal changes: simply allow individuals to group according to running time. Unfortunately, delays are present in this latter approach, given that individuals can only be grouped once all of them have been evaluated; in the parallel model this is not the case, as individuals can breed with others that finish their evaluation at the same time. In any case, the idea for the sequential version is simply to emulate the parallel version. Therefore, no population structure supports the model, although some resemblance to structured models may be seen in this sequential approach. This distinction is pertinent to properly understanding the new bloat control approach: while the method naturally fits parallel
Fig. 2.7 Best-fitness evolution along generations (averaged over 30 runs) for the parity problem (sequential execution) (maximizing fitness)
computing environments, and can be applied there without any additional effort, other structure-based approaches can only be applied when specific grouping tasks are added to the algorithm. Results from sequentially executed runs are shown for the even parity problem in Figs. 2.7 and 2.8. Similarly to the results from parallel execution, fitness quality is not strongly affected, while individual size is notably reduced. This phenomenon is produced solely by grouping individuals by computing time before carrying out the selection and crossover stages. Summarizing, this preliminary study shows that the proposed time-based method points to new research avenues in which GP size growth can be addressed from a computing-time perspective. Moreover, under this new approach we introduce a first method for controlling individual sizes. Its main idea is to group individuals according to evaluation time, which naturally keeps individuals' sizes under control while fitness quality remains high. The experiments show the interest of the approach in both sequential and parallel models, although further analysis with a larger set of problems is required in future work to confirm the findings presented above. Although we have focused here on the time-size relationship, as well as on the specific method implemented under this perspective, we believe that new
Fig. 2.8 Size-evolution along generations (averaged over 30 runs) for the parity problem (sequential execution)
approaches may be developed in the future considering time-space relationships in variable-size chromosome-based evolutionary techniques. Moreover, we think that specific improvements in the method presented here for GP will be attained when parallel environments are used and a proper load-balancing approach is applied.
2.6 Conclusions
This chapter presents a new approach to evaluating individuals' complexity in variable-size chromosome-based evolutionary algorithms: using computing time instead of individual size. Moreover, given that in parallel/distributed computing environments load-balancing methods may allow individuals to group naturally according to arrival time, this idea can shed light on the bloat phenomenon, providing clues for new control methods. To demonstrate the usefulness of the approach, we present a first method based on an individual's computing time (which is automatically obtained when fitness is computed) as a trait for characterizing and grouping individuals in a natural way, so that they can only breed within their groups. The reason
for this idea is to keep computing time, and thus, indirectly, an individual's size growth, under control. Based on the idea described above, we have run a set of experiments on a well-known benchmark problem, parity, and the results, in both parallel and sequential environments, show that the idea works. A first specific method to prevent bloat has been presented, although other methods relying on load-balancing techniques may be derived.
Acknowledgements We acknowledge support from the Spanish Ministry of Economy and Competitiveness under project TIN2017-85727-C4-{2,4}-P, Regional Government of Extremadura, Department of Commerce and Economy, the European Regional Development Fund, a way to build Europe, under the project IB16035, Junta de Extremadura, project GR15068, and CICESE project 634-128.
References
1. Dignum, S., Poli, R.: Operator equalisation and bloat free GP. In: European Conference on Genetic Programming, LNCS, vol. 4971, pp. 110–121. Springer (2008)
2. Galeano, G., Fernández de Vega, F., Tomassini, M., Vanneschi, L.: Studying the influence of synchronous and asynchronous parallel GP on programs length evolution. In: Congress on Evolutionary Computation, vol. 2, pp. 1727–1732. IEEE (2002)
3. Koza, J.R.: Genetic programming: On the programming of computers by means of natural selection. MIT Press (1992)
4. Langdon, W.B., Poli, R.: Fitness causes bloat. In: Soft Computing in Engineering Design and Manufacturing, pp. 13–22. Springer (1998)
5. Luke, S., Panait, L.: Lexicographic parsimony pressure. In: Conference on Genetic and Evolutionary Computation, pp. 829–836 (2002)
6. Luke, S., Panait, L.: A comparison of bloat control methods for genetic programming. Evolutionary Computation 14(3), 309–344 (2006)
7. Osman, A., Ammar, H.: Dynamic load balancing strategies for parallel computers. In: International Symposium on Parallel and Distributed Computing, vol. 11, pp. 110–120 (2002)
8. Oussaidène, M., Chopard, B., Pictet, O.V., Tomassini, M.: Parallel genetic programming and its application to trading model induction. Parallel Computing 23(8), 1183–1198 (1997)
9. Silva, S.: Reassembling operator equalisation: A secret revealed. ACM SIGEVOlution 5(3), 10–22 (2011)
10. Silva, S., Costa, E.: Dynamic limits for bloat control in genetic programming and a review of past and current bloat theories. Genetic Programming and Evolvable Machines 10(2), 141–179 (2009)
11. Trujillo, L., Muñoz, L., Galván-López, E., Silva, S.: Neat genetic programming: Controlling bloat naturally. Information Sciences (2016)
12. Vanneschi, L., Castelli, M., Silva, S.: Measuring bloat, overfitting and functional complexity in genetic programming. In: Conference on Genetic and Evolutionary Computation, pp. 877–884. ACM (2010)
13. Fernández de Vega, F.: Distributed genetic programming models with application to logic synthesis on FPGAs. Ph.D. thesis, University of Extremadura (2001)
14. Fernández de Vega, F., Abengózar Sánchez, J.G., Cotta, C.: A preliminary analysis and simulation of load balancing techniques applied to parallel genetic programming. In: International Work-Conference on Artificial Neural Networks, LNCS, vol. 6692, pp. 308–315. Springer (2011)
15. Fernández de Vega, F., Galeano, G., Gómez, J.A., Sánchez, J.M.: Efficient use of computational resources in genetic programming: Controlling the bloat phenomenon by means of the island model. In: Conference of the Industrial Electronics Society, vol. 3, pp. 2520–2524. IEEE (2002)
16. Fernández de Vega, F., Gil, G.G., Gómez Pulido, J.A., Guisado, J.L.: Control of bloat in genetic programming by means of the island model. In: Parallel Problem Solving from Nature - PPSN VIII, LNCS, vol. 3242, pp. 263–271. Springer (2004)
17. Whigham, P.A., Dick, G.: Implicitly controlling bloat in genetic programming. IEEE Transactions on Evolutionary Computation 14(2), 173–190 (2010)
18. White, D.R.: Software review: The ECJ toolkit. Genetic Programming and Evolvable Machines 13(1), 65–67 (2012)
19. Zaki, M.J., Li, W., Parthasarathy, S.: Customized dynamic load balancing for a network of workstations. Journal of Parallel and Distributed Computing (1997)
Chapter 3
Explorations of the Semantic Learning Machine Neuroevolution Algorithm: Dynamic Training Data Use, Ensemble Construction Methods, and Deep Learning Perspectives
Ivo Gonçalves, Marta Seca, and Mauro Castelli
3.1 Introduction
The success of artificial intelligence can be partially attributed to Artificial Neural Networks (ANNs), a machine learning approach invented in the late 1950s [74]. Inspired by the anatomy of the human brain, classic ANNs consist of neurons: atomic operators that receive a set of inputs and generate one output, determined by their activation function. To create networks, neurons are connected via synapses, so that the output of one neuron serves as input for another. ANNs have been used for a wide variety of tasks in many different fields [62], showing their suitability for both classification and regression problems. Several properties of ANNs make them a suitable approach for forecasting tasks: (1) ANNs are data-driven, self-adaptive methods, and there are few a priori assumptions about the models for the problems under study [79]. Thus, ANNs are a valuable technique for problems whose solutions require knowledge that is difficult to specify, but for which there are enough data or observations; (2) ANNs are universal function approximators. In particular, it has been proven that ANNs can approximate any continuous function to any desired accuracy [28, 29, 31]. Despite this result, the literature offers no established procedure to determine the correct number of neurons for a given application. Another critical aspect relates to the topology of the network, that is, how the neurons are connected among themselves. For ANNs to perform well at a certain task, it is critical to find suitable connection weights. For this purpose, weights are adjusted in a learning
I. Gonçalves () INESC Coimbra, DEEC, University of Coimbra, Coimbra, Portugal M. Seca · M. Castelli NOVA IMS, Universidade Nova de Lisboa, Lisboa, Portugal e-mail: [email protected]; [email protected]
process based on provided training data. The most prevalent approach is backpropagation [60], where the error between prediction and ground truth is distributed back recursively through adjacent connections. However, backpropagation fails to answer the question of how to define the general topology of neurons and synapses. Devising suitable topologies is crucial, since the topology directly affects the speed and accuracy of the subsequent learning process. Neuroevolution addresses these issues by applying Evolutionary Computation (EC) methods with the goal of evolving ANNs. Within neuroevolution, some approaches are specifically designed to automatically discover suitable combinations of topology and weights. More recently, a neuroevolution algorithm called the Semantic Learning Machine (SLM) was proposed by Gonçalves et al. [22]. The most interesting characteristic of SLM is that it searches over unimodal error landscapes in any supervised learning problem where the error is measured as a distance to the known targets. It has been empirically verified that this characteristic allows SLM to outperform other neuroevolution methods over a considerable set of supervised learning problems [32]. This work continues the investigation of SLM by empirically studying: (1) different methods of dynamic training data use; and (2) different ensemble construction methods. Furthermore, the extension of SLM to Convolutional Neural Networks is also discussed. This chapter is organized as follows: Sect. 3.2 overviews the field of neuroevolution; Sect. 3.3 describes the SLM neuroevolution algorithm and presents its distinctive features; Sect. 3.4 outlines the experimental methodology; Sect. 3.5 reports and discusses the experimental results; and Sect. 3.6 presents some initial results with SLM and Convolutional Neural Networks, and discusses future Deep Learning perspectives.
3.2 Neuroevolution Overview
One of the most challenging problems when using an ANN is the choice of its architecture or topology. In this context, the terms architecture and topology are used as synonyms to indicate specific hyperparameters of the ANN, namely the number of hidden layers, the number of hidden neurons, and how the neurons are connected among themselves. Despite the vast number of applications in which ANNs have been used [1], the literature lacks an established procedure to determine the most suitable topology of a network for a given task. Consequently, a lot of effort is currently put into automating the process of finding good ANN architectures. Solving this requires addressing several topics, such as:
1. how to design the components of the architecture
2. how to put them together
3. how to set the hyperparameters
There are two main approaches to these topics, namely (a) using search methods based on artificial intelligence, and (b) using evolutionary
techniques to generate networks. The first approach uses the gradient descent algorithm to optimize the weights of the network and to dynamically modify the hyperparameters of ANNs, while the second approach is characterized by the use of evolution to optimize the network's topology. Neuroevolution techniques have been successfully applied in different domains [14], and several attempts have been made to use EC techniques for the optimization of ANNs [76]. The existing works can be categorized into three main approaches: (1) using EC to train the ANN; (2) using EC to optimize the architecture of an ANN; (3) using EC to optimize the ANN's topology and to train the ANN. These different approaches are briefly discussed to present the reader with the evolution of this research field over the last three decades. The initial works aimed at using EC techniques to optimize the weights of the connections, with a fixed topology [10, 52, 53, 68, 73]. The main idea of these works was to counteract the limitations of the backpropagation algorithm [7, 30], which, due to the use of gradient descent [71], can get trapped in a local minimum of the error function and is not capable of determining a global minimum if the error function is multimodal and/or non-differentiable [76]. These works replace the backpropagation algorithm with an evolutionary technique for learning the weights of the ANN's connections. The use of EC techniques for optimizing the weights of an ANN is attractive because they can better handle the global search problem in a vast, complex, multimodal, and non-differentiable surface [76]. Additionally, EC does not depend on gradient information of the error function and is thus particularly appealing when this information is difficult to obtain. These two advantages allowed the use of evolution-based methods to train different kinds of ANNs, including recurrent ANNs [26, 42] and higher-order ANNs [12]. Subsequently, a second strand of research investigated the design of ANN architectures, namely the number of neurons, the number of hidden layers, and the connectivity among the neurons. In contrast to the previous studies, where the architecture of an ANN was assumed to be predefined and fixed during the evolution of connection weights, the main aim here is to use EC techniques to evolve the topology of the network. The problem can be formulated as an optimization problem in which the objective is to determine the global optimum in a search space where each point represents an architecture. Given an objective function to be optimized (i.e., a function that quantifies the quality or performance of each architecture), the design of the optimal topology corresponds to determining the highest point on the surface induced by the objective function over the search space of architectures. This surface (or fitness landscape) presents some properties that make EC a suitable approach for finding the sought architecture [52]. The following features were identified and discussed by Miller et al.
[52]: (1) the number of possible neurons and connections is unbounded, so the fitness landscape is infinitely large; (2) since changes in the number of neurons or connections must be discrete, the surface is non-differentiable, making gradient-based approaches impossible to use; (3) the mapping from network design to network performance after learning is indirect, strongly epistatic, and dependent on initial conditions (e.g., initial weights), so the surface is complex and noisy; (4) structurally similar
networks can show very different information processing capabilities, thus making the surface highly deceptive; and (5) structurally dissimilar networks can provide similar performance, so the surface is multimodal. For all these reasons, EC techniques seem to be a natural choice for addressing the problem at hand, and they represent an alternative, with respect to constructive and destructive algorithms, toward the automatic design of architectures. A constructive algorithm [13, 15] is a hill climbing method that starts with a minimal network (an ANN with a minimal number of hidden layers, nodes, and connections) and adds new layers, nodes, and connections during the training phase when deemed necessary, based on some criterion. Destructive algorithms [8, 56, 66], on the other hand, search for the optimal topology starting with a maximal network and subsequently removing unnecessary layers, nodes, and connections during the training phase [76]. While these approaches are simpler to implement than EC-based methods, they are susceptible to becoming trapped at structural local optima and are able to explore only a small fraction of the possible ANN topologies [4]. Several works based on EC for optimizing the topology of an ANN have appeared in the literature. One of the first was proposed by Miller et al. [52], where a method based on a genetic algorithm is used for evolving neural network architectures for specific tasks. Each network architecture is represented as a connection constraint matrix mapped directly into a bit-string genotype. Modified standard genetic operators act on populations of these genotypes to produce network architectures with higher fitness over successive generations. Architecture fitness is assessed by training particular network instantiations and recording their final performance error. Schaffer et al. [63] demonstrated, using a genetic algorithm, that an evolved network architecture generalizes better from a set of examples than a large network using backpropagation learning alone. In the same line of research, Wilson [75] showed that when genetic search is applied, a set of perceptrons can learn more complex tasks than initially apparent. Schiffmann et al. [64] presented a crossover operator for a genetic algorithm specifically created for automatic topology optimization. In contrast to competing approaches, it allows two parent networks with different numbers of units to mate and produce a (valid) child network, which inherits genes from both parents. Similarly, Alba et al. [3] relied on a genetic algorithm to address the connectivity and structure definition problems, in order to accomplish a fully genetic ANN design. Nikolopoulos and Fellrath [57] proposed the use of genetic algorithms and classifier systems to optimize the architecture of an ANN for investment advising. In the methods previously discussed, only the architecture of the ANN is evolved; it is assumed that the activation function of each node is fixed and predefined a priori. Despite the simplicity of this assumption, some studies have demonstrated that the choice of the activation function plays an important role in determining the performance of an ANN [9, 48]. An important attempt to evolve the architecture of an ANN, as well as the activation functions, was proposed by Schoenauer and Ronald [65], where the authors
investigated the tuning of the slopes of the transfer functions of the individual neurons in the ANN. White and Ligomenides [72] adopted a simpler approach to the evolution of both topological structures and node transfer functions: the initial population contained ANNs with 80% of the neurons using the sigmoid function and 20% using a Gaussian function, and the evolutionary process was used to determine the optimal blend of these two functions in an automatic fashion. The idea of evolving the activation functions is still investigated nowadays, given the popularity of deep learning. For instance, the rectified linear activation (ReLU) function [33] has simplified the training of deep neural networks by counteracting the problems related to weight initialization and the vanishing gradient. As summarized by Manessi and Rozza [47], variations of ReLU have been proposed over the years, such as leaky ReLU (LReLU) [46], which addresses dead-neuron issues in ReLU networks, thresholded ReLU [38], which tackles the problem of large negative biases in autoencoders, and parametric ReLU (PReLU) [27], which treats the leakage parameter of LReLU as a per-filter learnable weight. While these works introduced new and useful activation functions, other works used more advanced strategies to learn the most suitable activation function for the particular architecture at hand. Agostinelli et al. [2] designed a novel form of piecewise linear activation function that is learned independently for each neuron using gradient descent. With this adaptive activation function, they were able to improve upon deep neural network architectures composed of static rectified linear units, achieving state-of-the-art performance on CIFAR-10, CIFAR-100, and a benchmark from high-energy physics involving Higgs boson decay modes. Manessi and Rozza [47] introduced two approaches to automatically learn different combinations of base activation functions (such as the identity function, ReLU, and hyperbolic tangent) during the training phase. They presented a thorough comparison of their novel approaches with well-known architectures on three standard datasets, showing substantial improvements in overall performance. Thus, the evolution of the activation functions is nowadays deemed as important as the evolution of the architectures of the ANNs [47]. The evolutionary methods just discussed only evolve the architecture of ANNs, without any connection weights; that is, connection weights have to be learned in a subsequent step. While this approach reduces the complexity of evolving both the topology and the weights, there is a major problem with the evolution of architectures without connection weights, as pointed out by Yao and Liu [77]. In particular, it is possible to identify two critical issues: (1) different random initial weights may produce different training results, so the same genotype may have different fitness due to the different random initial weights used in training; and (2) different training algorithms may produce different training results even from the same set of initial weights. This is especially true for multimodal error functions. Thus, the remaining part of this section recalls contributions where EC-based techniques were used to optimize the weights and the topology of an ANN simultaneously. The idea behind this approach is that each individual in a population is a fully specified ANN with complete weight information.
As a consequence, there is a one-to-one mapping between a genotype and its phenotype, thus allowing the
search process to overcome the issues related to fitness evaluation. Srinivas and Patnaik [68] presented a technique for reducing the search space of the genetic algorithm to improve its performance in searching for the globally optimal set of connection weights. They used the notion of equivalent solutions in the search space, and included in the reduced search space only one solution, called the base solution, from each set of equivalent solutions. Each iteration of the genetic algorithm included an additional step in which solutions are mapped to their respective base solutions. A genetic algorithm based method was also proposed by Bornholdt and Graudenz [5] for evolving a network that represented a model of a brain with sensory and motor neurons. Oliker et al. [58] proposed a distributed genetic algorithm for designing and training neural networks. The method sets the neural network's architecture and weights for a given task, where the network is comprised of binary linear threshold units. White and Ligomenides [72] introduced a new algorithm which uses a genetic algorithm to determine the topology and link weights of a neural network. If the genetic algorithm fails to find a satisfactory solution network, the best network it develops is used to seek a solution via backpropagation. In this way, each algorithm is used to its greatest advantage: the genetic algorithm (with its global search) determines a (sub-optimal) topology and weights to solve the problem, and backpropagation (with its local search) seeks the best solution in the neighborhood of the weight and topology spaces found by the genetic algorithm. Besides genetic algorithms, other EC methods have been used to address the optimization problem at hand. Koza and Rice [39] showed how to use genetic programming to find both the weights and the architecture for a neural network, including the number of layers, the number of processing elements per layer, and the connectivity between processing elements. Jian and Yugeng [34] presented a new design method for the structure and weights of static neural networks based on evolutionary programming; the method is further extended to design recurrent neural networks by introducing delayed links into the networks. Particle Swarm Optimization (PSO) has also been used to evolve both the weights and the topology of networks [17, 37, 78]. In particular, Kiranyaz et al. [37] presented a Multi-Dimensional Particle Swarm Optimization (MD-PSO) technique for the automatic design of Artificial Neural Networks by evolving toward the optimal network configuration (connections, weights, and biases) within the architecture space. Similarly, Garro and Vázquez [17] explored the simultaneous evolution of the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer function of each neuron. The main topic of this contribution was the evaluation of eight different proposed fitness functions used to evaluate the quality of each solution and find the best design. Zhang et al. [78] introduced a new evolutionary system for evolving Feed-Forward ANNs in which evolution is driven by PSO; both the architecture and the weights of the ANNs were adaptively adjusted according to the quality of the network. One of the most popular and broadly used approaches in neuroevolution is the NeuroEvolution of Augmenting Topologies (NEAT) algorithm [69]. Recently, NEAT has evolved
into CoDeepNEAT [51], an algorithm capable of covering more complex areas such as vision, speech, and language. To conclude, among the many existing references reporting on the use of EC to optimize ANNs, the reader is particularly referred to [61, 70, 76] for a comprehensive overview of this research field.
3.3 Semantic Learning Machine

3.3.1 Algorithm

In the proposal of Geometric Semantic Genetic Programming (GSGP), Moraglio et al. [54] showed that any supervised learning problem where the error is measured as a distance to the known targets has a unimodal error landscape. This property can be exploited by constructing specific variation operators, known as geometric semantic operators. In this context, the term semantics refers to the outputs of any supervised learning model (e.g., a neural network) over a set of data instances. In GSGP, geometric semantic operators were defined for several domains: boolean, arithmetic, and conditional rules. GSGP was shown to outperform the traditional syntactic genetic programming approach on several datasets [21, 54]. The reasoning behind these geometric semantic operators can be used to create equivalent operators for other representations or computational models.
With the proposal of the Semantic Learning Machine (SLM) neuroevolution algorithm [22, 24], it became possible to perform semantic search over the space of Neural Networks (NNs). This was achieved by adapting the arithmetic mutation of GSGP to the space of NNs, thereby defining a geometric semantic mutation for NNs. This allows SLM to effectively and efficiently explore the space of NNs by exploiting the underlying unimodal error landscape. Given that these error landscapes are unimodal, no local optima exist. In the case of SLM this means that, with the exception of the global optimum, every point in the search space has at least one neighbor with better fitness, and that neighbor is reachable through the application of the mutation operator. The direct consequence is that a hill climbing strategy can effectively advance the search. SLM is essentially a geometric semantic hill climber for NNs that follows a (1 + λ) strategy. Without local optima, the search can be focused around the current best NN without incurring any particular disadvantage. SLM can be summarized in the following steps:

1. Generate N initial random NNs
2. Choose the best NN (B) from the initial random NNs, according to the selected performance criterion
3. Repeat the following steps until a given stopping criterion is met:
(a) Apply the geometric semantic mutation to the current best (B) N times to generate N new NNs (known as children or neighbors)
(b) Update B as the NN with the best performance according to the selected criterion, considering the current B and the N newly generated NNs
4. Return B as the best performing NN according to the selected performance criterion

The initial random NNs can be generated without any particular restriction. They can have any number of layers and neurons, with any activation functions, and the weights of the connections between neurons can be freely selected. The networks do not have to be fully connected and can be as sparsely connected as desired. The crucial aspect of SLM is the geometric semantic mutation, which takes a parent NN and produces a child NN. This mutation works by adding new hidden neurons while ensuring that the semantics of the parent's hidden neurons are not affected by these new hidden neurons. To ensure this fundamental property, the new hidden neurons do not feed their computations into the parent's existing neurons, with the exception of the output neurons. The weights of the connections from the new hidden neurons in the last hidden layer to the output neurons are defined by the learning step. The learning step can be computed optimally with the Moore–Penrose pseudoinverse (similarly to the case of GSGP [21, 23, 55]), or it can be defined as a parameter to be tuned. Each new hidden neuron can select from which neurons it receives incoming connections. This means that the sparseness level can be easily controlled by defining how many incoming connections each new neuron receives. The weights of each connection can be freely selected, as in the initialization step. As is common in neuroevolution algorithms, SLM does not rely on backpropagation to adjust the weights of the NNs. For further SLM details the reader is referred to Gonçalves et al. [22] and Gonçalves [24].
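To make steps 1-4 concrete, the following is a minimal sketch of the (1 + λ) loop in Python, assuming only NumPy. For brevity, a candidate NN is tracked solely through its semantics (its output vector on the training data), which is all the fitness needs; new_neuron_semantics stands in for the construction of one fresh hidden neuron feeding the output, and a bounded learning step is used. This is an illustration of the strategy described above, not the authors' implementation.

import numpy as np

def rmse(semantics, y):
    return np.sqrt(np.mean((semantics - y) ** 2))

def new_neuron_semantics(X, rng, mncw=0.5, mbw=0.5):
    # One fresh hidden neuron: random incoming weights, random bias, tanh activation.
    w = rng.uniform(-mncw, mncw, size=X.shape[1])
    b = rng.uniform(-mbw, mbw)
    return np.tanh(X @ w + b)

def slm(X, y, n_children=10, iterations=100, mls=1.0, seed=0):
    # (1 + lambda) geometric semantic hill climber over NN semantics.
    rng = np.random.default_rng(seed)
    best = new_neuron_semantics(X, rng)  # semantics of an initial random NN
    for _ in range(iterations):
        # Mutate the current best N times: child = parent + ls * new neuron.
        children = [best + rng.uniform(-mls, mls) * new_neuron_semantics(X, rng)
                    for _ in range(n_children)]
        challenger = min(children, key=lambda s: rmse(s, y))
        if rmse(challenger, y) < rmse(best, y):
            best = challenger  # hill climbing: keep the better of parent and children
    return best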
3.3.2 Previous Comparisons with Other Neuroevolution Methods

Jagusch et al. [32] explored several SLM variants and performed a comparison with other neuroevolution methods as well as other well-established supervised machine learning techniques. Regarding the neuroevolution methods, NEAT and a fixed-topology neuroevolution approach were used as points of comparison. NEAT was the main focus of the comparison given its popularity. The comparisons were performed on a total of nine real-world datasets freely available from the UCI Machine Learning Repository [43]: four binary classification datasets and five regression datasets. The results showed that, in terms of learning the training data, SLM was superior to the other neuroevolution methods in all nine datasets considered (all with statistically significant differences). In this comparison the best
SLM variant was, naturally, always the one that computed the optimal learning step. Focusing on the NEAT comparison and on the generalization performance, SLM was found to be superior to NEAT, with statistically significant differences, in eight of the nine datasets considered; no statistically significant difference was found in the remaining dataset. Furthermore, the SLM variant that computed the optimal learning step and used a semantic stopping criterion [25] (further details in Sect. 3.4.2) also resulted in much smaller neural networks and achieved speedups of several orders of magnitude over NEAT on various datasets.
3.4 Experimental Methodology

3.4.1 Datasets and Parameter Tuning

In the experimental phase, four real-world binary classification datasets are considered: Cancer, Credit, Diabetes, and Sonar. In Credit, the objective is to classify individuals as having either good or bad credit, whereas in Diabetes and Cancer, the goal is to predict whether an individual has diabetes or cancer, respectively. The Sonar task aims at classifying sonar signals as having bounced off either a metal cylinder or a roughly cylindrical rock. All of these datasets are freely available from the UCI Machine Learning Repository [43]. Table 3.1 presents the number of features (input variables), the number of instances (observations), and the percentage of class 1 instances in each of the four datasets under consideration.
Different SLM variants are compared with the Multi-layer Perceptron (MLP) trained with backpropagation. A nested k-fold cross-validation (CV) methodology is followed: a 30-fold outer CV is used to obtain 30 final generalization values (test set values) to assess the statistical significance of the results, and, for each outer training fold, a twofold inner CV is conducted to perform parameter tuning for each algorithm (a sketch of this protocol is given after Table 3.1). Both algorithms are allowed to explore a total of 72 random parameter combinations during parameter tuning. Four SLM and two MLP variants are tested (detailed in Sects. 3.4.2 and 3.4.3). To ensure fairness, SLM tests 18 parameter combinations for each of the four variants considered, while MLP tests 36 parameter combinations for each of the two variants considered.

Table 3.1 Binary classification datasets considered
Dataset    Features  Instances  % of class 1 instances
Cancer     30        569        ≈37%
Credit     24        1000       30%
Diabetes   8         768        ≈35%
Sonar      60        208        ≈47%
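A minimal sketch of the nested CV protocol referenced above, assuming scikit-learn utilities; the 30-fold outer loop, the twofold inner loop, and the 72 random combinations follow the text, while the estimator and the param_sampler callable (which draws one random parameter combination) are placeholders, and validation AUROC is assumed as the tuning criterion.

import numpy as np
from sklearn.base import clone
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def nested_cv(estimator, param_sampler, X, y, n_combinations=72, seed=0):
    rng = np.random.default_rng(seed)
    outer = StratifiedKFold(n_splits=30, shuffle=True, random_state=seed)
    test_aucs = []
    for train_idx, test_idx in outer.split(X, y):
        X_tr, y_tr = X[train_idx], y[train_idx]
        inner = StratifiedKFold(n_splits=2, shuffle=True, random_state=seed)
        best_params, best_auc = None, -np.inf
        for _ in range(n_combinations):  # random parameter search
            params = param_sampler(rng)
            fold_aucs = []
            for i_tr, i_val in inner.split(X_tr, y_tr):
                model = clone(estimator).set_params(**params)
                model.fit(X_tr[i_tr], y_tr[i_tr])
                proba = model.predict_proba(X_tr[i_val])[:, 1]
                fold_aucs.append(roc_auc_score(y_tr[i_val], proba))
            if np.mean(fold_aucs) > best_auc:
                best_auc, best_params = np.mean(fold_aucs), params
        final = clone(estimator).set_params(**best_params).fit(X_tr, y_tr)
        test_aucs.append(roc_auc_score(y[test_idx],
                                       final.predict_proba(X[test_idx])[:, 1]))
    return test_aucs  # 30 generalization values, one per outer fold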
3.4.2 SLM Variants

The base SLM configuration is the following:

• In the initial population each NN is generated with a random number of hidden layers selected between 1 and 5
• In the initial population each NN randomly selects the number of neurons for each hidden layer between 1 and 5
• Each hidden neuron randomly selects its activation function from the following options: Logistic, Relu, and Tanh
• Each hidden neuron randomly selects the weight of each incoming connection from values in the interval [−mncw, mncw], where mncw represents the maximum neuron connection weight parameter (subject to parameter tuning)
• Each hidden neuron randomly selects the weight of its bias from values in the interval [−mbw, mbw], where mbw represents the maximum bias weight parameter (subject to parameter tuning)
• Each time a new NN is created by the mutation operator, the number of new neurons to be added to each layer is randomly selected between 1 and 3

The three main differences between the SLM variants under study are the following: (1) the strategy regarding the learning step; (2) the type of training data use (static or dynamic); (3) the stopping criterion that decides the termination of the training process.
Regarding the learning step, two variants are considered: computing the Optimal Learning Step (OLS) at each application of the mutation operator; and using a Bounded Learning Step (BLS). The BLS variants introduce an additional parameter that defines the maximum learning step (mls) bounding the learning step. At each application of the mutation operator, the effective learning step is randomly selected from values in the interval [−mls, mls].
In terms of dynamic training data use, two approaches are considered: randomly selecting a subset of the training data at each iteration and computing the quality of each solution with this subset; and always using the complete training data but randomly weighting each instance (between 0 and 1) and changing these weights at each iteration. The first approach is referred to as the Random Sampling Technique (RST) following Gonçalves et al. [18–20], while the second approach is referred to as the Random Weighting Technique (RWT). In genetic programming, RST successfully contributed to avoiding overfitting and improving generalization on high-dimensional datasets [19, 20]. Other studies using dynamic training data in genetic programming have followed [16, 49, 50, 67].
Finally, regarding the termination of the training process, the following approaches are considered: termination based on a given number of iterations; and termination based on a semantic stopping criterion. The Semantic Stopping Criteria (SSC) proposed by Gonçalves et al. [25] use information gathered from the semantic neighborhood (the set of new models generated by the mutation) to decide when to stop the search. These are named the Error Deviation Variation
(EDV) criterion and the Training Improvement Effectiveness (TIE) criterion. EDV stops the search when a considerable majority of the neighbors are improving the training performance at the expense of larger error deviations. TIE stops the search when training error improvements become harder to find within the semantic neighborhood. This can signal that the training error improvements are being forced at the expense of the resulting generalization. These SSC can be used to avoid setting a maximum number of iterations and to avoid setting data aside as a validation set to decide when to stop. Taking these different aspects into consideration, the SLM variants are grouped and named as follows:

1. BLS variants: SLM-BLS, SLM-BLS + RST, and SLM-BLS + RWT
2. OLS variants: SLM-OLS, SLM-OLS + RST, and SLM-OLS + RWT
3. BLS + TIE/EDV: SLM-BLS + TIE/EDV
4. OLS + EDV: SLM-OLS + EDV
When SLM-BLS is mentioned by itself it refers to SLM-BLS without using RST and RWT. Similarly, when SLM-OLS is mentioned by itself it refers to SLM-OLS without using RST and RWT. All SLM variants can tune the maximum neuron connection weight (mncw) and the maximum bias weight (mbw) in the range [0.1, 0.5]. The BLS variants and BLS + TIE/EDV can tune the maximum learning step (mls) in the range [0.1, 2], and the number of iterations in the range [1, 100]. The BLS and the OLS variants select with equal probability the use of RST, RWT, or none. BLS + TIE/EDV selects with equal probability the use of EDV or TIE as the semantic stopping criterion. Whenever RST is used, the parameter that defines the ratio of the total training data to be used (the subset ratio) is selected from the range [0.01, 0.99].
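For the OLS variants, the weight on the connection from a new last-hidden-layer neuron to the output need not be tuned: it minimizes the training error in closed form. Below is a sketch for a single output neuron, following the GSGP-style least-squares derivation cited in Sect. 3.3.1 rather than any specific published implementation; np.linalg.pinv covers the general case where the mutation adds several neurons to the last hidden layer.

import numpy as np

def optimal_learning_step(parent_semantics, neuron_semantics, y):
    # Minimizes ||y - (parent + ls * neuron)||^2 over the scalar ls.
    residual = y - parent_semantics
    denom = neuron_semantics @ neuron_semantics
    return (neuron_semantics @ residual) / denom if denom else 0.0

def optimal_learning_steps(parent_semantics, new_neuron_matrix, y):
    # General case: columns of new_neuron_matrix hold the semantics of each
    # new last-layer neuron; the Moore-Penrose pseudoinverse gives one
    # output weight per new neuron.
    return np.linalg.pinv(new_neuron_matrix) @ (y - parent_semantics)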
3.4.3 MLP Variants

Two MLP variants are considered: the most common stochastic gradient descent (SGD) [35, 59] variant, and the Adam SGD variant [36]. For SGD and Adam, the following parameters are tuned:

• The number of iterations in the range [1, 100]
• The batch size between 50 and the maximum number of training instances available
• The activation function to be used in the hidden layers: Logistic, Relu, or Tanh
• The number of hidden layers in the range [1, 5]
• The number of hidden neurons per layer in the range [1, 200]
• The learning rate in the range [0.1, 2]
• The L2 penalty in the range [0.1, 10]
SGD can additionally select the momentum in the range [0.0000001, 1] and decide whether or not to use Nesterov's momentum. Adam can additionally select the beta 1 and beta 2 parameters in the range [0, 1).
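A sketch of drawing one random configuration from these ranges, assuming scikit-learn's MLPClassifier (the chapter does not name the MLP implementation it used); the parameter names map one-to-one onto the list above.

import numpy as np
from sklearn.neural_network import MLPClassifier

def random_mlp(rng, variant="adam", n_train=500):
    params = dict(
        solver=variant,                                  # "adam" or "sgd"
        max_iter=int(rng.integers(1, 101)),              # number of iterations
        batch_size=int(rng.integers(50, n_train + 1)),
        activation=str(rng.choice(["logistic", "relu", "tanh"])),
        hidden_layer_sizes=tuple(int(rng.integers(1, 201))        # 1-200 neurons
                                 for _ in range(rng.integers(1, 6))),  # 1-5 layers
        learning_rate_init=rng.uniform(0.1, 2.0),
        alpha=rng.uniform(0.1, 10.0),                    # L2 penalty
    )
    if variant == "sgd":
        params["momentum"] = rng.uniform(1e-7, 1.0)
        params["nesterovs_momentum"] = bool(rng.integers(0, 2))
    else:
        params["beta_1"] = rng.uniform(0.0, 1.0)         # uniform draws from [0, 1)
        params["beta_2"] = rng.uniform(0.0, 1.0)
    return MLPClassifier(**params)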
3.5 Results and Analysis

This section analyzes the results obtained in the experimental phase. Section 3.5.1 presents the results achieved by the different variants of the SLM algorithm, analyzing the performance obtained on the validation set and discussing some aspects related to the choice of the parameters. Subsequently, Sect. 3.5.2 presents the results produced by the MLP over the same benchmark problems and discusses the main performance differences between MLP and SLM. Section 3.5.3 compares SLM and MLP after their best configurations are found and explores the generalization ability of SLM under different ensemble construction methods.
3.5.1 SLM

This section presents the results obtained by the different groups of SLM variants. The first analysis refers to the validation Area Under the Receiver Operating Characteristic (AUROC) curve values produced by the considered SLM variants; the results are summarized in Table 3.2. For each benchmark problem and for each technique, this table reports the mean and the standard deviation of the validation AUROC. These values were obtained from the nested cross-validation procedure previously described; thus they are the mean and standard deviation values achieved by the best models obtained in the inner cross-validation procedure (which was performed to determine the most suitable values of the hyperparameters). According to the results reported in this table, complemented by the ones in Table 3.3, it is possible to state that the OLS variants are the best performers, outperforming the other variants taken into account. In particular, the OLS variants outperformed the other competitors 23 times on both the Cancer and the Credit datasets, 20 times on the Diabetes dataset, and 26 times on the Sonar dataset.

Table 3.2 Validation AUROC for each SLM variant considered
Dataset    BLS variants     OLS variants     BLS + TIE/EDV    OLS + EDV
Cancer     0.951 ± 0.095    0.937 ± 0.124    0.896 ± 0.185    0.959 ± 0.061
Credit     0.679 ± 0.134    0.733 ± 0.120    0.564 ± 0.166    0.688 ± 0.108
Diabetes   0.680 ± 0.169    0.784 ± 0.110    0.626 ± 0.153    0.738 ± 0.131
Sonar      0.648 ± 0.282    0.816 ± 0.213    0.636 ± 0.277    0.724 ± 0.220
Table 3.3 Best SLM configuration by variant

Dataset    BLS variants  OLS variants  BLS + TIE/EDV  OLS + EDV
Cancer     5             23            0              2
Credit     0             23            0              7
Diabetes   1             20            0              9
Sonar      2             26            1              1
Table 3.4 Number of iterations for each SLM variant considered

Dataset    BLS variants      OLS variants      BLS + TIE/EDV   OLS + EDV
Cancer     79.567 ± 16.332   64.067 ± 23.712   3.067 ± 2.741   1.133 ± 0.434
Credit     68.267 ± 20.793   63.433 ± 26.165   3.967 ± 2.399   2.233 ± 1.406
Diabetes   76.000 ± 18.819   60.467 ± 22.508   5.567 ± 8.336   1.967 ± 1.217
Sonar      75.033 ± 22.172   58.667 ± 24.288   3.533 ± 3.391   3.167 ± 4.900
The second-best performer on the Cancer and the Sonar datasets is the BLS variants group, while OLS + EDV outperforms the other competitors 7 times on the Credit dataset and 9 times on the Diabetes dataset. BLS + TIE/EDV performs poorly in relative terms. A possible explanation is that, for the datasets considered, the maximum number of iterations is not high enough for the semantic stopping criteria to take effect under a bounded learning step. Overall, the OLS families (OLS variants and OLS + EDV) seem to provide higher AUROC values than the BLS families. A global view on the average AUROC values reported in Table 3.2 suggests that: (1) within the BLS groups, BLS + TIE/EDV always achieved a lower average validation AUROC than the BLS variants; (2) within the OLS groups, OLS + EDV is the best performer on the Cancer dataset, while the OLS variants group is the best performer over the remaining benchmarks; (3) the OLS variants represent the most suitable choice for the classification problems at hand.
The subsequent analysis considers the average number of iterations performed by each SLM variant. The results of this analysis are reported in Table 3.4. The BLS variants require a larger number of iterations than the other competitors. This behavior was expected, considering that the BLS variants use neither a semantic stopping criterion nor the optimal learning step. Focusing on the other competitors, the OLS variants (the best performer over these problems) ran for a significantly larger number of iterations than BLS + TIE/EDV. When comparing the OLS variants with OLS + EDV, the number of iterations performed by OLS + EDV is also significantly lower than that of the OLS variants. The use of the optimal learning step allows OLS + EDV to reach satisfactory performance in all of the considered benchmarks and to outperform the OLS variants over the Cancer dataset.
Table 3.5 EDV and TIE use in SLM-BLS

Dataset    EDV  TIE
Cancer     27   3
Credit     26   4
Diabetes   25   5
Sonar      25   5

Table 3.6 RST and RWT use in the BLS and the OLS variants

           BLS variants       OLS variants
Dataset    None  RST  RWT     None  RST  RWT
Cancer     8     9    13      15    6    9
Credit     18    4    8       15    4    11
Diabetes   11    6    13      12    4    14
Sonar      15    4    11      15    6    9
To summarize the results of this analysis, the use of a semantic stopping criterion with either OLS or BLS is effective at reducing the computational effort needed to perform the training process but, depending on the problem at hand, it may not result in the best overall performance.
With respect to the semantic stopping criterion, Table 3.5 compares the usage of EDV and TIE in SLM-BLS. According to these values, it is clear that EDV is more effective than the TIE stopping criterion in the classification problems considered.
An additional analysis performed during the experimental phase aimed at understanding whether the random weighting/sampling techniques are beneficial when coupled with the SLM variants. The results of this analysis are reported in Table 3.6, where the use of RST, RWT, and the whole original set of observations (None) are compared in the context of the BLS and the OLS variants. According to these values, it seems that RWT is more effective than RST in both groups of SLM variants. Comparing the use of RST and RWT with the use of the whole original set of observations, the results of Table 3.6 suggest that random sampling and random weighting techniques can improve the performance of SLM. This shows that the dynamic use of training data can indeed be beneficial within SLM. This is particularly clear when RWT is used in conjunction with the optimal learning step computation.
3.5.2 MLP

This section presents the results obtained by both MLP variants: Adam and SGD. The first part of this discussion considers the performance on the validation set; the corresponding results are reported in Table 3.7. According to these values, Adam is the best performer on the Cancer dataset, while SGD is the best performer on the Credit and Sonar datasets. The two MLP variants produce the same performance on the Diabetes dataset. According to these results it is difficult to draw a general conclusion, and it seems that the choice between Adam and SGD must be evaluated according to the particular problem at hand.
Table 3.7 Validation AUROC for each MLP variant considered

Dataset    Adam             SGD
Cancer     0.542 ± 0.110    0.500 ± 0.000
Credit     0.509 ± 0.031    0.525 ± 0.048
Diabetes   0.498 ± 0.016    0.498 ± 0.011
Sonar      0.496 ± 0.023    0.581 ± 0.109

Table 3.8 Best MLP configuration by variant

Dataset    Adam  SGD
Cancer     28    2
Credit     9     21
Diabetes   24    6
Sonar      7     23

Table 3.9 Number of iterations for each MLP variant considered

Dataset    Adam              SGD
Cancer     56.200 ± 25.183   55.500 ± 26.046
Credit     56.400 ± 24.210   57.167 ± 26.592
Diabetes   42.433 ± 27.320   56.700 ± 30.326
Sonar      50.267 ± 24.237   49.667 ± 29.352
To complement this analysis, Table 3.8 shows the best MLP configurations by variant. According to these values, SGD produces the best performance most of the time on the Credit and Sonar datasets, while Adam returns the best performance on the remaining datasets in the vast majority of the runs considered.
At this stage, it is important to compare the results of Table 3.7 (obtained with MLP) with the ones reported in Table 3.2 (obtained with SLM). According to these results, all the SLM variants are able to outperform the best MLP variant over all the classification problems under exam. This comparison clearly demonstrates the superiority of SLM (with respect to MLP) in creating models characterized by a greater validation AUROC. Overall, the SLM algorithm is a competitive option to consider in these classification problems, given that its performance (independently of the selected variant) is significantly better than that of the best MLP variant.
Another important aspect to analyze in the comparison between MLP and SLM is the number of iterations required to produce the final model. While this analysis was performed for the SLM algorithm (see Table 3.4), Table 3.9 reports the same information for the MLP-based models. In particular, the SLM variants that use a semantic stopping criterion are able to build a classification model in a considerably lower number of iterations than an MLP variant based on the backpropagation algorithm. This smaller number of iterations does not negatively affect the performance of the final models, as these SLM variants are able to outperform MLP over all the considered benchmarks.
To further understand the different MLP variants, Table 3.10 reports the activation function used by each of them. From these values it is interesting to point out that the Adam variant has a clear preference for the Relu function, regardless of the problem under analysis. SGD shows a preference for the Relu function when considering the Cancer and Diabetes datasets, while Tanh is the function used most of the time on the Credit dataset. For the Sonar dataset, the Logistic function was never selected, and Relu and Tanh were selected 14 and 16 times, respectively.

Table 3.10 Activation function use by MLP variant

           Adam                     SGD
Dataset    Logistic  Relu  Tanh     Logistic  Relu  Tanh
Cancer     8         15    7        7         16    7
Credit     9         13    8        9         8     13
Diabetes   4         16    10       7         15    8
Sonar      8         17    5        0         14    16

Fig. 3.1 Boxplots for test set AUROC values of SLM and MLP: cancer, credit, diabetes, and sonar
3.5.3 Generalization and Ensemble Analysis

This section starts by assessing the generalization (i.e., the test set performance over the 30 outer folds) of SLM and MLP considering the best parameter configurations found after parameter tuning. Figure 3.1 presents the boxplots for the AUROC values of both algorithms. On each box, the central mark is the median, the edges of the box are the 25th and 75th percentiles, and the whiskers extend to the most extreme data points that are not considered outliers. These boxplots show that SLM consistently achieves better AUROC values than MLP across all datasets.
A set of statistical tests is performed to assess the statistical significance of these results. Firstly, a Kolmogorov–Smirnov test is applied to assess whether these values come from a normal distribution. The result of this test suggests that the hypothesis of normality can be rejected at a significance level (α) of 0.05, i.e., the data do not come from a normal distribution. Given this outcome, a rank-based statistic is selected for the next step. A Mann–Whitney U-test is performed with the null hypothesis that the samples have equal medians. As in the previous test, a significance level of 0.05 is considered. The outcomes suggest that SLM outperforms MLP in all datasets with statistically significant differences. The p-values for these comparisons can be found in Table 3.11.

Table 3.11 p-values of Mann–Whitney U-tests over test set AUROC values of SLM and MLP

Dataset    p-value
Cancer     3.486 × 10^−11
Credit     1.054 × 10^−8
Diabetes   1.036 × 10^−11
Sonar      9.894 × 10^−5

The final part of the analysis studies the outcomes of different ensemble construction methods when using SLM as a base learner. Bagging [6] and Boosting [11] methods are compared with a simple averaging construction method that trains the base learner N times without changing the training instances provided. This simpler ensemble construction method can be effective if the base learner already has an inherent diversity within its search process. This might be the case for SLM, given that it is a stochastic algorithm. These three ensemble construction methods are used to create ensembles of 30 NNs using SLM as the base learner. In the Boosting case, four variations of AdaBoost.R2 are studied, labeled as follows:

• Boosting-1: weighted median prediction and fixed learning rate of 1
• Boosting-2: weighted median prediction and variable learning rate selected randomly in the interval [0, 1] for each new NN added to the ensemble
• Boosting-3: weighted mean prediction and fixed learning rate of 1
• Boosting-4: weighted mean prediction and variable learning rate selected randomly in the interval [0, 1] for each new NN added to the ensemble

Fig. 3.2 Boxplots for test set AUROC values of each ensemble construction method considered: cancer, credit, diabetes, and sonar

Figure 3.2 presents these boxplots for each ensemble construction method considered: Simple (averaging), Bagging, and the four Boosting variations. These ensemble results show that the simple averaging method performs similarly to Bagging and Boosting. In terms of the median AUROC value, the simple averaging method even achieves the highest value in three of the four datasets: Credit, Diabetes, and Sonar. Overall, these different ensemble construction methods
perform similarly in terms of the distribution of the values. These results suggest that the stochastic nature of SLM allows the simple averaging method to perform well without having to explicitly confer more diversity to the base learner, e.g., by providing different training instances to each ensemble member.
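To make the simple averaging construction and the earlier significance test concrete, here is a short sketch assuming SciPy and a hypothetical train_slm(X, y, seed) that returns a fitted model exposing predict_proba; the Bagging and AdaBoost.R2 variants are omitted.

import numpy as np
from scipy.stats import mannwhitneyu

def simple_average_ensemble(train_slm, X_train, y_train, X_test, n_members=30):
    # Train the stochastic base learner N times on the *unchanged* training
    # data (no resampling, unlike Bagging) and average the member predictions.
    member_preds = [train_slm(X_train, y_train, seed=i).predict_proba(X_test)
                    for i in range(n_members)]
    return np.mean(member_preds, axis=0)

def compare_auroc(aucs_a, aucs_b, alpha=0.05):
    # Rank-based comparison of two samples of 30 test-set AUROC values,
    # as reported in Table 3.11.
    p = mannwhitneyu(aucs_a, aucs_b, alternative="two-sided").pvalue
    return p, p < alpha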
3.6 Toward the Deep Semantic Learning Machine

Recently, SLM was used in conjunction with Convolutional Neural Networks (CNNs) for the first time [40, 41]. These contributions addressed the task of discriminating between benign and malignant prostate cancer lesions given multiparametric magnetic resonance imaging. This image classification task was proposed in the context of the PROSTATEx competition [44]. SLM was tested as a backpropagation replacement for the training of the last fully-connected layers of CNNs. In this approach, the outputs from the convolutional layers of a given CNN are passed as inputs (without pre-training) to SLM. The empirical comparison is performed with XmasNet [45], a state-of-the-art CNN specifically developed to address the PROSTATEx 2017 competition. The results show that SLM achieves higher AUROC values than XmasNet with a statistically significant difference. This performance is achieved without pre-training the underlying CNN and without relying on backpropagation. Furthermore, SLM is also much more computationally efficient in the training phase, achieving an average speed-up of around 14 over training with backpropagation. This is also important, as neuroevolution methods are sometimes perceived as slow. Additionally, it is important to emphasize that SLM was run only on a CPU (whereas XmasNet was trained using a GPU) and without any explicit parallelization. This further reinforces the results obtained, given that each network evaluation could be suitably parallelized, thus achieving a higher speed-up. Furthermore, inside each network evaluation, the new nodes of a given layer can also be evaluated in parallel.
SLM can be further extended to include the convolutional layers within the search process. This would remove the need to provide a fixed CNN topology and would eliminate the burden of assessing several CNN topologies in order to find a suitable one for the task at hand. Adapting SLM to include the convolutional layers within the search would result in a unimodal search over the space of CNNs. Such a development could lead to considerable improvements in the fields of deep learning and computer vision. This is something that is currently under study.

Acknowledgements This work was partially supported by projects UID/MULTI/00308/2019 and by the European Regional Development Fund through the COMPETE 2020 Programme, FCT (Portuguese Foundation for Science and Technology) and the Regional Operational Program of the Center Region (CENTRO2020) within project MAnAGER (POCI-01-0145-FEDER-028040). This work was also partially supported by national funds through FCT (Fundação para a Ciência e a Tecnologia) under project DSAIPA/DS/0022/2018 (GADgET).
References

1. Abiodun, O.I., Jantan, A., Omolara, A.E., Dada, K.V., Mohamed, N.A., Arshad, H.: State-of-the-art in artificial neural network applications: A survey. Heliyon 4(11), e00938 (2018)
2. Agostinelli, F., Hoffman, M., Sadowski, P., Baldi, P.: Learning activation functions to improve deep neural networks. arXiv preprint arXiv:1412.6830 (2014)
3. Alba, E., Aldana, J., Troya, J.M.: Full automatic ANN design: A genetic approach. In: International Workshop on Artificial Neural Networks, pp. 399–404. Springer (1993)
4. Angeline, P.J., Saunders, G.M., Pollack, J.B.: An evolutionary algorithm that constructs recurrent neural networks. IEEE Transactions on Neural Networks 5(1), 54–65 (1994)
5. Bornholdt, S., Graudenz, D.: General asymmetric neural networks and structure design by genetic algorithms. Neural Networks 5(2), 327–334 (1992)
6. Breiman, L.: Bagging predictors. Machine Learning 24(2), 123–140 (1996)
7. Chauvin, Y., Rumelhart, D.E.: Backpropagation: Theory, architectures, and applications. Psychology Press (2013)
8. Cun, Y.L., Denker, J.S., Solla, S.A.: Optimal brain damage. In: Advances in Neural Information Processing Systems 2, pp. 598–605. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (1990)
9. DasGupta, B., Schnitger, G.: Efficient approximation with neural networks: A comparison of gate functions. Pennsylvania State University, Department of Computer Science (1992)
10. Dill, F.A., Deer, B.C.: An exploration of genetic algorithms for the selection of connection weights in dynamical neural networks. In: Proceedings of the IEEE 1991 National Aerospace and Electronics Conference NAECON 1991, vol. 3, pp. 1111–1115 (1991)
11. Drucker, H.: Improving regressors using boosting techniques. In: Proceedings of the Fourteenth International Conference on Machine Learning, ICML '97, pp. 107–115. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (1997)
12. Epitropakis, M.G., Plagianakos, V.P., Vrahatis, M.N.: Evolutionary Algorithm Training of Higher-Order Neural Networks. IGI Global (2009)
13. Fahlman, S.E., Lebiere, C.: The cascade-correlation learning architecture. In: Advances in Neural Information Processing Systems 2, pp. 524–532. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (1990)
14. Floreano, D., Dürr, P., Mattiussi, C.: Neuroevolution: From architectures to learning. Evolutionary Intelligence 1(1), 47–62 (2008)
15. Frean, M.: The upstart algorithm: A method for constructing and training feedforward neural networks. Neural Computation 2(2), 198–209 (1990)
16. Galván-López, E., Vázquez-Mendoza, L., Schoenauer, M., Trujillo, L.: On the use of dynamic GP fitness cases in static and dynamic optimisation problems. In: International Conference on Artificial Evolution (Evolution Artificielle), pp. 72–87. Springer (2017)
17. Garro, B.A., Vázquez, R.A.: Designing artificial neural networks using particle swarm optimization algorithms. Computational Intelligence and Neuroscience 2015, 61 (2015)
18. Gonçalves, I., Silva, S.: Experiments on controlling overfitting in genetic programming. In: Local Proceedings of the 15th Portuguese Conference on Artificial Intelligence, EPIA 2011 (2011)
19. Gonçalves, I., Silva, S., Melo, J.B., Carreiras, J.M.B.: Random sampling technique for overfitting control in genetic programming. In: Genetic Programming, pp. 218–229. Springer (2012)
20. Gonçalves, I., Silva, S.: Balancing learning and overfitting in genetic programming with interleaved sampling of training data. In: Genetic Programming, pp. 73–84. Springer (2013)
21. Gonçalves, I., Silva, S., Fonseca, C.M.: On the generalization ability of geometric semantic genetic programming. In: Genetic Programming, pp. 41–52. Springer (2015)
22. Gonçalves, I., Silva, S., Fonseca, C.M.: Semantic learning machine: A feedforward neural network construction algorithm inspired by geometric semantic genetic programming. In: Progress in Artificial Intelligence, Lecture Notes in Computer Science, vol. 9273, pp. 280–285. Springer (2015)
23. Gonçalves, I., Silva, S., Fonseca, C.M., Castelli, M.: Arbitrarily close alignments in the error space: A geometric semantic genetic programming approach. In: Proceedings of the 2016 on Genetic and Evolutionary Computation Conference Companion, pp. 99–100. ACM (2016)
24. Gonçalves, I.: An exploration of generalization and overfitting in genetic programming: Standard and geometric semantic approaches. Ph.D. thesis, Department of Informatics Engineering, University of Coimbra, Portugal (2017)
25. Gonçalves, I., Silva, S., Fonseca, C.M., Castelli, M.: Unsure when to stop? Ask your semantic neighbors. In: Proceedings of the Genetic and Evolutionary Computation Conference, GECCO '17, pp. 929–936. ACM, New York, NY, USA (2017)
26. Greenwood, G.W.: Training partially recurrent neural networks using evolutionary strategies. IEEE Transactions on Speech and Audio Processing 5(2), 192–194 (1997)
27. He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034 (2015)
28. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
29. Hornik, K.: Approximation capabilities of multilayer feedforward networks. Neural Networks 4(2), 251–257 (1991)
30. Hush, D.R., Horne, B.G.: Progress in supervised neural networks. IEEE Signal Processing Magazine 10(1), 8–39 (1993)
31. Irie, B., Miyake, S.: Capabilities of three-layered perceptrons. In: IEEE International Conference on Neural Networks, vol. 1, p. 218 (1988)
32. Jagusch, J.B., Gonçalves, I., Castelli, M.: Neuroevolution under unimodal error landscapes: An exploration of the semantic learning machine algorithm. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp. 159–160. ACM (2018)
33. Jarrett, K., Kavukcuoglu, K., LeCun, Y., et al.: What is the best multi-stage architecture for object recognition? In: 2009 IEEE 12th International Conference on Computer Vision, pp. 2146–2153. IEEE (2009)
34. Jian, F., Yugeng, X.: Neural network design based on evolutionary programming. Artificial Intelligence in Engineering 11(2), 155–161 (1997)
35. Kiefer, J., Wolfowitz, J.: Stochastic estimation of the maximum of a regression function. Ann. Math. Statist. 23(3), 462–466 (1952)
36. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. CoRR abs/1412.6980 (2014)
37. Kiranyaz, S., Ince, T., Yildirim, A., Gabbouj, M.: Evolutionary artificial neural networks by multi-dimensional particle swarm optimization. Neural Networks 22(10), 1448–1462 (2009)
38. Konda, K., Memisevic, R., Krueger, D.: Zero-bias autoencoders and the benefits of co-adapting features. arXiv preprint arXiv:1402.3337 (2014)
39. Koza, J.R., Rice, J.P.: Genetic generation of both the weights and architecture for a neural network. In: IJCNN-91-Seattle International Joint Conference on Neural Networks, vol. 2, pp. 397–404. IEEE (1991)
40. Lapa, P., Gonçalves, I., Rundo, L., Castelli, M.: Enhancing classification performance of convolutional neural networks for prostate cancer detection on magnetic resonance images: A study with the semantic learning machine. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '19. ACM, New York, NY, USA (2019)
41. Lapa, P., Gonçalves, I., Rundo, L., Castelli, M.: Semantic learning machine improves the CNN-based detection of prostate cancer in non-contrast-enhanced MRI. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '19. ACM, New York, NY, USA (2019)
42. Lei, J., He, G., Jiang, J.P.: The state estimation of the CSTR system based on a recurrent neural network trained by HGAs. In: Proceedings of International Conference on Neural Networks (ICNN'97), vol. 2, pp. 779–782 (1997)
43. Lichman, M.: UCI Machine Learning Repository (2013)
44. Litjens, G., Debats, O., Barentsz, J., Karssemeijer, N., Huisman, H.: "PROSTATEx Challenge data", The Cancer Imaging Archive. https://wiki.cancerimagingarchive.net/display/Public/SPIE-AAPM-NCI+PROSTATEx+Challenges (2017). Online; accessed on January 25, 2019
45. Liu, S., Zheng, H., Feng, Y., Li, W.: Prostate cancer diagnosis using deep learning with 3D multiparametric MRI. In: Medical Imaging 2017: Computer-Aided Diagnosis, Proceedings SPIE, vol. 10134, p. 1013428. International Society for Optics and Photonics (2017)
46. Maas, A.L., Hannun, A.Y., Ng, A.Y.: Rectifier nonlinearities improve neural network acoustic models. In: Proc. ICML, vol. 30, p. 3 (2013)
47. Manessi, F., Rozza, A.: Learning combinations of activation functions. In: 2018 24th International Conference on Pattern Recognition (ICPR), pp. 61–66. IEEE (2018)
48. Mani, G.: Learning by gradient descent in function space. In: 1990 IEEE International Conference on Systems, Man, and Cybernetics Conference Proceedings, pp. 242–247 (1990)
49. Martinez, Y., Trujillo, L., Naredo, E., Legrand, P.: A comparison of fitness-case sampling methods for symbolic regression with genetic programming. In: EVOLVE - A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation V, pp. 201–212. Springer (2014)
50. Martínez, Y., Naredo, E., Trujillo, L., Legrand, P., López, U.: A comparison of fitness-case sampling methods for genetic programming. Journal of Experimental & Theoretical Artificial Intelligence 29(6), 1203–1224 (2017)
51. Miikkulainen, R., Liang, J., Meyerson, E., Rawal, A., Fink, D., Francon, O., Raju, B., Shahrzad, H., Navruzyan, A., Duffy, N., Hodjat, B.: Evolving deep neural networks. In: R. Kozma, C. Alippi, Y. Choe, F.C. Morabito (eds.) Artificial Intelligence in the Age of Neural Networks and Brain Computing. Elsevier, Amsterdam (2018)
52. Miller, G.F., Todd, P.M., Hegde, S.U.: Designing neural networks using genetic algorithms. In: Proceedings of the Third International Conference on Genetic Algorithms, pp. 379–384. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (1989)
53. Montana, D.J., Davis, L.: Training feedforward neural networks using genetic algorithms. In: IJCAI, vol. 89, pp. 762–767 (1989)
54. Moraglio, A., Krawiec, K., Johnson, C.G.: Geometric semantic genetic programming. In: Parallel Problem Solving from Nature - PPSN XII, pp. 21–31. Springer (2012)
55. Moraglio, A., Mambrini, A.: Runtime analysis of mutation-based geometric semantic genetic programming for basis functions regression. In: Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation, pp. 989–996. ACM (2013)
56. Mozer, M.C., Smolensky, P.: Skeletonization: A technique for trimming the fat from a network via relevance assessment. In: Advances in Neural Information Processing Systems 1, pp. 107–115. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (1989)
57. Nikolopoulos, C., Fellrath, P.: A hybrid expert system for investment advising. Expert Systems 11(4), 245–250 (1994)
58. Oliker, S., Furst, M., Maimon, O.: Design architectures and training of neural networks with a distributed genetic algorithm. In: IEEE International Conference on Neural Networks, pp. 199–202. IEEE (1993)
59. Robbins, H., Monro, S.: A stochastic approximation method. Ann. Math. Statist. 22(3), 400–407 (1951)
60. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature 323(6088), 533 (1986)
61. Ding, S., Li, H., Su, C., Yu, J., Jin, F.: Evolutionary artificial neural networks: A review. Artificial Intelligence Review 39, 251–260 (2013)
62. Samarasinghe, S.: Neural networks for applied sciences and engineering: From fundamentals to complex pattern recognition. Auerbach Publications (2016)
63. Schaffer, J.D., Caruana, R.A., Eshelman, L.J.: Using genetic search to exploit the emergent behavior of neural networks. Physica D: Nonlinear Phenomena 42(1–3), 244–248 (1990)
64. Schiffmann, W., Joost, M., Werner, R.: Synthesis and performance analysis of multilayer neural network architectures (1992)
65. Schoenauer, M., Ronald, E.: Genetic extensions of neural net learning: Transfer functions and renormalisation coefficients
66. Sietsma, J., Dow, R.J.: Creating artificial neural networks that generalize. Neural Networks 4(1), 67–79 (1991)
67. Silva, S., Ingalalli, V., Vinga, S., Carreiras, J.M., Melo, J.B., Castelli, M., Vanneschi, L., Gonçalves, I., Caldas, J.: Prediction of forest aboveground biomass: An exercise on avoiding overfitting. In: European Conference on the Applications of Evolutionary Computation, pp. 407–417. Springer (2013)
68. Srinivas, M., Patnaik, L.M.: Learning neural network weights using genetic algorithms - improving performance by search-space reduction. In: [Proceedings] 1991 IEEE International Joint Conference on Neural Networks, vol. 3, pp. 2331–2336 (1991)
69. Stanley, K.O., Miikkulainen, R.: Evolving neural networks through augmenting topologies. Evolutionary Computation 10(2), 99–127 (2002)
70. Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
71. Sutton, R.S.: Two problems with backpropagation and other steepest-descent learning procedures for networks. In: Proceedings of the Eighth Annual Conference of the Cognitive Science Society. Erlbaum, Hillsdale, NJ (1986)
72. White, D., Ligomenides, P.: GANNet: A genetic algorithm for optimizing topology and weights in neural network design. In: International Workshop on Artificial Neural Networks, pp. 322–327. Springer (1993)
73. Whitley, D., Starkweather, T., Bogart, C.: Genetic algorithms and neural networks: Optimizing connections and connectivity. Parallel Computing 14(3), 347–361 (1990)
74. Widrow, B., Lehr, M.A.: 30 years of adaptive neural networks: Perceptron, madaline, and backpropagation. Proceedings of the IEEE 78(9), 1415–1442 (1990)
75. Wilson, S.W.: Perception redux: Emergence of structure. Physica D: Nonlinear Phenomena 42(1–3), 249–256 (1990)
76. Yao, X.: Evolving artificial neural networks. Proceedings of the IEEE 87(9), 1423–1447 (1999)
77. Yao, X., Liu, Y.: A new evolutionary system for evolving artificial neural networks. IEEE Transactions on Neural Networks 8(3), 694–713 (1997)
78. Zhang, C., Shao, H., Li, Y.: Particle swarm optimisation for evolving artificial neural network. In: 2000 IEEE International Conference on Systems, Man, and Cybernetics, vol. 4, pp. 2487–2490. IEEE (2000)
79. Zhang, G., Patuwo, B.E., Hu, M.Y.: Forecasting with artificial neural networks: The state of the art. International Journal of Forecasting 14(1), 35–62 (1998)
Chapter 4
Can Genetic Programming Perform Explainable Machine Learning for Bioinformatics?

Ting Hu
4.1 Introduction

In recent years, with the increasing availability of computational power, machine learning has seen a rapid growth of applications in a variety of research fields and industries [6, 9, 11, 15, 20]. However, questions have been raised with regard to the explainability of machine learning. How trustworthy is the learned predictive model? What is the mechanism of the prediction? Given a specific testing sample, how can we explain the prediction result? Such explainability issues urgently need to be addressed in order to allow wider applications and further developments of machine learning [7, 12, 26].
Some initial attempts have been made to render machine learning more explainable and transparent. For instance, Ribeiro et al. proposed the local interpretable model-agnostic explanations (LIME) algorithm to identify a local linear model that explains the prediction on a specific instance [24]. Such a local linear model is faithfully derived from the globally learned, highly non-linear model. Thus it is able to provide an explainable linear relationship between the predictors and the response while retaining the prediction accuracy. In a bioinformatics application study of machine learning, Yu et al. designed a deep learning algorithm, DCell, a neural network embedded in the hierarchical structure of more than two thousand subsystems comprising a eukaryotic cell [22]. DCell simulated cellular growth and was trained to encode complex genotypes for the prediction of diseases.
T. Hu () School of Computing, Queen’s University, Kingston, ON, Canada Department of Computer Science, Memorial University, St. John’s, NL, Canada e-mail: [email protected] © Springer Nature Switzerland AG 2020 W. Banzhaf et al. (eds.), Genetic Programming Theory and Practice XVII, Genetic and Evolutionary Computation, https://doi.org/10.1007/978-3-030-39958-0_4
The research on explainable machine learning is still in its infancy, since the most popular methods for interpretation and explanation are either problem dependent, i.e., domain knowledge is needed to design the architecture of the predictive model, or question dependent, i.e., they depend on which aspect of the prediction model needs to be explained.
Genetic programming (GP), a powerful automatic learning and evolutionary algorithm, may be a good candidate for addressing the explainability problem. First, the representation of a predictive model can be flexible in GP. Second, feature selection is intrinsic and co-evolved with the predictive models. Third, by allowing multiple objectives, GP may facilitate the evolution of predictive models that are both compact and accurate.
In this study, we explored some initial ideas of using GP for explainable machine learning by designing a linear GP algorithm and feature importance evaluation methods for the bioinformatics problem of predicting disease risk from metabolite abundance levels in blood samples. Among the multiple issues related to explainability, we aimed to address the following questions: Which features are more influential on the prediction of the disease risk? How do these features influence the prediction?
4.2 Methods

4.2.1 Metabolomics Data for Osteoarthritis

In the metabolomics data for osteoarthritis (OA) used in the current study, knee OA patients were selected from the Newfoundland Osteoarthritis Study (NFOAS) initiated in 2011 [17, 27]. The NFOAS aimed at identifying novel genetic, epigenetic, and biochemical markers for OA. It recruited OA patients who underwent a total knee replacement surgery due to primary OA between November 2011 and December 2013 at the St. Clare's Mercy Hospital and Health Science Centre General Hospital in St. John's, the capital city of Newfoundland and Labrador (NL), Canada. Healthy controls were selected from the CODING study (The Complex Diseases in the Newfoundland population: Environment and Genetics), where participants were adult volunteers [10]. Both cases and controls were from the same source population of Newfoundland and Labrador.
Knee OA diagnosis was made based on the American College of Rheumatology clinical criteria for the classification of idiopathic OA of the knee [2] and the judgment of the attending orthopedic surgeons. Controls were individuals without self-reported family-doctor-diagnosed knee OA, based on their medical information collected by a self-administered questionnaire. A total of 153 OA cases and 236 healthy controls were collected.
Blood samples were collected after at least 8 h of fasting, and plasma was separated from blood using the standard protocol. Metabolic profiling was performed on plasma using the Waters XEVO TQ MS system (Waters Limited, Mississauga, Ontario, Canada) coupled with the Biocrates AbsoluteIDQ p180 kit, which measures 186 metabolites including 90 glycerophospholipids, 40 acylcarnitines (1
free carnitine), 21 amino acids, 19 biogenic amines, 15 sphingolipids, and 1 hexose (above 90% is glucose). The details of the 186 metabolites and the metabolic profiling method were described in our previous publication [29]. Over 90% of the metabolites (167/186) were successfully determined in each sample.
Prior to performing the informatics analyses, several preprocessing steps were applied to the dataset. Batch correction was performed by multiplying each metabolite concentration value by the ratio of the overall mean and the batch mean for that metabolite. Then, covariate adjustment was performed to remove the variation due to each individual's age, gender, and body mass index (BMI). The samples were randomly assigned to either a discovery or a replication dataset, such that cases and controls were divided evenly between the two datasets. The purpose of the split was to allow two sets of independent analyses and to ensure the generalization of the results. Finally, each metabolite concentration value was normalized to zero mean and unit variance across the population.
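A sketch of the batch-correction and normalization steps, assuming pandas and a hypothetical batch label column; the covariate-adjustment step (removing variation due to age, gender, and BMI) is only indicated by a comment, since its exact model is not specified in the text.

import pandas as pd

def preprocess(df, batch_col="batch"):
    # df: one row per sample; metabolite concentration columns plus a batch label.
    metabolites = df.columns.drop(batch_col)
    out = df.copy()
    for m in metabolites:
        overall = df[m].mean()
        # Batch correction: multiply each value by overall mean / batch mean.
        out[m] = df.groupby(batch_col)[m].transform(lambda v: v * overall / v.mean())
    # (Covariate adjustment for age, gender, and BMI would go here, e.g.,
    #  keeping the residuals of a per-metabolite linear model.)
    # Normalization: zero mean and unit variance per metabolite.
    out[metabolites] = (out[metabolites] - out[metabolites].mean()) / out[metabolites].std()
    return out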
4.2.2 Linear Genetic Programming Algorithm

In this study, a Linear Genetic Programming (LGP) algorithm [5] was developed to evolve symbolic models that predict the disease risk using the metabolite concentrations in the blood samples. A population of diverse candidate prediction models is generated randomly in the initialization step and evolves to gradually improve prediction accuracy through a number of generations. After evolution halts, the best model of the population in the final generation is the output.
Each candidate prediction model takes the form of a symbolic computer program comprised of a set of sequential instructions. An instruction can be an assignment statement or a conditional statement. The conditional if instructions affect the program flow such that the instruction immediately following the if instruction is not executed if the condition is false. In the case of nested if instructions, each of the successive conditions needs to be true in order for the instruction following the chain of if instructions to be executed.
A register r stores the value of a feature, a calculation variable, or a constant. A feature is a predictor or an attribute used to make a prediction of the outcome; in the context of the current study, features are concentration levels of metabolites in the samples. A calculation variable serves as a temporary buffer that enhances the computation capacity. In an assignment instruction, only registers storing calculation variables can serve as the return on the left side of the assignment symbol "=", but any register can serve as an operand on the right-hand side. This prevents overwriting the feature values. When a prediction model is evaluated on a given sample, the feature registers take the values of the sample, and the set of instructions is executed sequentially. The sigmoid transformation of the final value stored in the designated calculation register r[0] is used to predict the outcome of the sample, i.e., if S(r[0]) is greater than or equal to 0.5, the sample is predicted as diseased (class one); otherwise the sample is predicted as healthy (class zero).
An example of a classification model with eight instructions is given below. Here, the output register r[0] and calculation registers r[4] and r[5] are all initialized with ones. Feature registers r[1-3] take input values from three metabolite concentration levels m[1-3], respectively. For instance, when a sample with m[1-3] values of {0.2, 0.01, 0.085} is input to this classification model, the conditional statement r[1]>r[3] in instruction I1 is true, so in instruction I2, r[0] changes its value to 0.51. The rest of the instructions are executed sequentially, and the final value of r[0] is 1.0039. Its sigmoid transformation S(1.0039) is greater than 0.5, so this sample will be classified by this model as class one, i.e., diseased.

I1: if r[1] > r[3]
I2: then r[0] = r[2] + 0.5
I3: r[4] = r[2] / r[0]
I4: if r[0] > 4
I5: then if r[3] < 10
I6: then r[5] = r[3] - r[4]
I7: r[4] = r[4] * r[1]
I8: r[0] = r[5] + r[4]

At the initial generation, a population of diverse linear genetic programs (classification models) was generated randomly. The fitness of each model was evaluated using the mean classification error (MCE), computed as the average number of incorrectly classified training samples. A set of models was chosen as parents based on their fitness, and variation operators, including mutation and recombination, were applied to them. A mutation alters an element of a randomly picked instruction, i.e., it replaces a return or an operand register by a randomly generated one, or replaces the operator. Recombination swaps segments of instructions between two parent models. Survival selection picks fitter models to form the population for the next generation. Such an evolution process iterates for a certain number of generations, and the model with the lowest MCE at the end is output as the final best model of a run.
This LGP algorithm was implemented using the Julia programming language [4]. The main parameters used in the implementation are shown in Table 4.1. A fivefold cross-validation was used, so that each run of the LGP algorithm produced five best classification models as its output. We first ran the LGP algorithm 200 times using the discovery dataset and collected 1000 evolved best predictive models. We investigated the resulting classification models by calculating various statistics of the fitness (MCE) values, sensitivity, specificity, and area under the curve (AUC), as computed on the testing fold for each run.
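To make the instruction semantics described above concrete, here is a minimal interpreter sketch. The published implementation is in Julia [4]; this Python rendering, and the encoding of an instruction as an ('if', predicate) or ('op', assignment) pair over the register list, are illustrative assumptions. Encoding I1-I8 above this way and running them on registers initialized as in the example reproduces the final value r[0] = 1.0039.

import math

def run_program(instructions, r):
    # A false 'if' skips the rest of its chain of consecutive 'if's plus the
    # single instruction guarded by the chain; a true 'if' falls through.
    i = 0
    while i < len(instructions):
        kind, fn = instructions[i]
        if kind == "if" and not fn(r):
            i += 1
            while i < len(instructions) and instructions[i][0] == "if":
                i += 1  # skip the remaining nested conditions
            i += 1      # skip the guarded instruction itself
        else:
            if kind == "op":
                fn(r)   # assignment instructions mutate the registers
            i += 1
    return 1.0 / (1.0 + math.exp(-r[0]))  # S(r[0]); class one if >= 0.5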
Table 4.1 Parameter configuration of the LGP algorithm

Fitness function          Mean classification error (MCE)
Program initialization    Random
Program length            [1, 500]
Population size           500
Number of parents         500
Parent selection          Tournament with size 16
Survival selection        Truncation
Number of generations     500
Operator set              {+, −, ×, ÷, x^y, if}
Constant set              {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Calculation registers     150
Mutation operator         Effective instructions only
4.2.3 Training Using the Full and the Focused Feature Sets

Although all 167 features (metabolites) in the metabolomics data were provided as input variables to the LGP algorithm, not all of them will be picked and influence the output of a final best genetic program (predictive model). For a linear genetic program, we define its effective features as the ones that effectively influence the value of its output register. Note that this definition excludes features that are present in the program but are structurally or semantically ineffective in modifying the final value of the output register. This is also a powerful property of GP algorithms: feature selection is intrinsic and co-evolved with model accuracy.
Since we had 1000 evolved best predictive models, we counted the occurrence frequency of each metabolite and used it as a quantitative measure of the feature importance of that metabolite. The intuition is that if a metabolite appeared more often as an effective feature in the best predictive models, it was regarded as more important. Therefore, we performed two rounds of training by running the LGP algorithm provided with (1) the full set of 167 features and (2) the top-ranked focused feature subset.
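A sketch of the ranking step, assuming each best program has already been reduced to its set of effective features; counting a metabolite once per model in which it is effective (rather than once per use) is our reading of the text.

from collections import Counter

def feature_importance(effective_feature_sets):
    # effective_feature_sets: one set of metabolite names per best model.
    counts = Counter()
    for features in effective_feature_sets:
        counts.update(features)   # +1 per model containing the feature
    return counts.most_common()   # ranked by occurrence frequency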
4.2.4 Feature Synergy Analysis

In addition to looking at the individual occurrences of single metabolites in the best classification models, the co-occurrence of metabolites in the models was studied by counting the number of times each metabolite pair appeared together in the same model. The top 1% of the resulting metabolite pairs, ranked by decreasing frequency, were used to construct a metabolite synergy network. Network science has seen increasing applications in biomedical research [1, 3, 8, 13, 14, 16, 18], where biological entities are represented as vertices and their relationships are modeled using edges linking pairs of vertices. Network modeling is a powerful tool for studying the interconnections among a large number of biological entities. In this study, vertices represent metabolites, and an edge links two metabolites if their co-occurrence frequency in the set of 1000 best prediction models exceeds the given cutoff. The network was rendered and analyzed using the Cytoscape software [25].

For the second round of more focused analysis, only the subset of metabolites appearing in the metabolite synergy network was used as a restricted feature set in a repeated model learning run, allowing the evolutionary algorithm to use only these more important metabolites to construct the classification models. This analysis was performed on both the discovery and replication datasets, each resulting in another set of 1000 best classification models. The intersection of the top 20 most common metabolites from the discovery and replication runs was reported; such metabolites are regarded as interacting metabolites with high potential associations to osteoarthritis (OA).
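The pair counting and the top-1% cutoff could be sketched as follows, under the same assumed model representation as above; the cutoff fraction is a keyword argument so the same code also serves other thresholds:

function synergy_edges(models::Vector{Vector{String}}; top = 0.01)
    cooc = Dict{Tuple{String,String},Int}()
    for feats in models
        u = sort(unique(feats))
        for i in 1:length(u)-1, j in i+1:length(u)    # all unordered pairs
            pair = (u[i], u[j])
            cooc[pair] = get(cooc, pair, 0) + 1
        end
    end
    ranked = sort(collect(cooc); by = last, rev = true)
    return ranked[1:ceil(Int, top * length(ranked))]  # edges of the synergy network
end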
4.3 Results and Discussion

4.3.1 Best Genetic Programs Evolved on the Full Feature Set

The initial training used the discovery data and the full set of 167 metabolite features. Recall that we used fivefold cross-validation and ran the LGP algorithm 200 times; we therefore collected 1000 best evolved linear genetic programs. Table 4.2 shows the evaluation statistics of the 1000 best programs, including the fitness (MCE), sensitivity, specificity, and AUC. The statistics were computed using the prediction results on the testing samples. The prediction performance of the 1000 evolved best programs varies: the minimal error rate is as low as 0.067, while the best sensitivity, specificity, and AUC are as high as 1.000, 0.933, and 0.947, respectively. For a closer look, Fig. 4.1 shows the distributions of the fitness values and of the number of effective features in the 1000 best evolved genetic programs. The majority of the evolved programs achieved an error rate between 0.3 and 0.5 and used only 20–45 of the total 167 features for the prediction.

Table 4.2 Statistics of the testing results on the full feature set (discovery)
                 MCE      Sensitivity   Specificity   AUC
Mean             0.367    0.684         0.584         0.663
Median           0.367    0.667         0.600         0.667
Min              0.067    0.200         0.200         0.320
Max              0.667    1.000         0.933         0.947
Std dev          0.095    0.146         0.142         0.110
5% confidence    0.181    0.398         0.305         0.447
95% confidence   0.553    0.970         0.862         0.879
Fig. 4.1 The distributions of (a) the fitness and (b) the number of effective features in the best 1000 evolved predictive models
Fig. 4.2 The occurrence frequencies of the top 20 (a) individual and (b) pairs of features/metabolites using the full feature set and the discovery data
4.3.2 Identification of Important Features

Out of the 1000 best evolved programs, we counted the occurrence frequency of each metabolite feature, as well as the co-occurrence frequency of metabolite pairs being effective in the same best program. We ranked the individual metabolite features and the feature pairs by these frequencies to study their importance in making the prediction. Figure 4.2 shows the top 20 individual metabolites and co-occurring metabolite pairs. The top individual metabolites, including taurine, arginine (Arg), tyrosine (Tyr), and ornithine (Orn), also appear most frequently in pairs. Metabolites such as C6, leucine (Leu), and C16:1 appeared in the top 20 pairs without being among the top 20 individual features.

The top 1% of metabolite pairs, out of all (167 choose 2) = 13,861 possible combinations, were then used to construct a metabolite synergy network (Fig. 4.3), in order to show the global connection structure of the metabolite pairs with the strongest synergy. In the visualized graph, vertices are metabolites, and two metabolites are directly connected by an edge if their co-occurrence frequency is among the top 1%. Vertex size and edge weight reflect the occurrence and co-occurrence frequencies. The network includes 70 metabolites as vertices. The two most frequent metabolite features, taurine and arginine, are located at the center of the network with the highest vertex degrees of 44 and 43, respectively. The other metabolites with a vertex degree greater than 10 include tyrosine (a degree of 27), ornithine (19), C18:1 (14), and PC ae C40:6 (12).
Fig. 4.3 The feature synergy network. Each vertex represents a feature/metabolite, and an edge links two features if the pair is among the top 1% most frequent of all possible pairs in the 1000 best evolved predictive models. Vertex size and edge width are proportional to the individual and pairwise occurrence frequencies. The upper right inset shows the distribution of vertex degrees on a log-log scale
Fig. 4.4 The distributions of arginine (Arg) comparing case and control samples in the (a) discovery and (b) replication data
We also examined the distributions of metabolite concentrations in diseased cases versus healthy controls, in order to see whether the LGP algorithm was able to identify metabolites that influence the disease risk through synergistic interaction with other metabolites rather than through individual, separate effects. We investigated two metabolites, arginine (Arg) and C18:1, both of which had a vertex degree greater than 10. Figures 4.4 and 4.5 show the comparison of metabolite concentration distributions in diseased cases (shown in pink) vs. healthy controls (in green). In both the discovery and replication data, the concentration distribution of arginine differs significantly between the two populations.
Fig. 4.5 The distributions of C18:1 comparing case and control samples in the (a) discovery and (b) replication data
This indicates that conventional uni-variable statistical methods could also detect this metabolite feature. However, when we looked at the distribution comparison for metabolite C18:1 (oleic acid) (Fig. 4.5), the distributions overlapped heavily between the two populations in both the discovery and replication data. This indicates that conventional uni-variable methods would likely overlook this metabolite; the LGP algorithm, however, was able to identify oleic acid as an important feature that influences disease risk through its interactions with other metabolites.
4.3.3 Best Genetic Programs Evolved on the Focused Feature Subset

The second round of training repeated the LGP algorithm using only the 70 metabolites included in the top 1% of metabolite pairs, as shown in Fig. 4.3. Tables 4.3 and 4.4 show the prediction measurements of the 1000 best evolved linear genetic programs using the reduced, focused feature subset on both the discovery and replication data. The prediction performance improved significantly compared to the initial training using the full feature set (Table 4.2). This suggests that the feature selection of the LGP algorithm was effective.

We also investigated the feature importance ranking in the secondary training using only the focused feature subset. Figures 4.6 and 4.7 show the top 20 individual and pairs of metabolites, ranked by their occurrence or co-occurrence frequencies in the 1000 best evolved programs, using the discovery and replication data, respectively. Compared to the frequencies reported in Fig. 4.2, the occurrence and co-occurrence frequencies increased by a large margin when a focused feature subset was used. This indicates that the model search was more effective and focused with a reduced, pre-selected feature set. Moreover, using separate discovery and replication data allowed us to validate the identification of the most influential metabolite features. Nine metabolites (arginine, C16, C18:1, isoleucine, nitrotyrosine, ornithine, taurine, threonine, and tyrosine) appeared most frequently in the best models on the discovery dataset in both rounds of analysis and were successfully replicated using the replication dataset, including the four key metabolites previously identified in the network.

Table 4.3 Statistics of the testing results on the focused feature subset (discovery)
                 MCE      Sensitivity   Specificity   AUC
Mean             0.325    0.723         0.628         0.704
Median           0.333    0.733         0.600         0.709
Min              0.100    0.267         0.200         0.362
Max              0.600    1.000         1.000         1.000
Std dev          0.089    0.135         0.141         0.103
5% confidence    0.151    0.459         0.353         0.503
95% confidence   0.498    0.987         0.904         0.906

Table 4.4 Statistics of the testing results on the focused feature subset (replication)

                 MCE      Sensitivity   Specificity   AUC
Mean             0.295    0.733         0.678         0.725
Median           0.300    0.733         0.667         0.739
Min              0.033    0.267         0.200         0.380
Max              0.600    1.000         1.000         1.000
Std dev          0.098    0.139         0.162         0.120
5% confidence    0.102    0.460         0.361         0.491
95% confidence   0.488    1.000         0.995         0.959
Fig. 4.6 The occurrence frequencies of the top 20 (a) individual and (b) pairs of features/metabolites using the focused feature set and the discovery data
Fig. 4.7 The occurrence frequencies of the top 20 (a) individual and (b) pairs of features/metabolites using the focused feature set and the replication data
These results are interesting, as arginine and its pathway-related metabolites, such as ornithine, were identified as being associated with OA in our previous analyses using traditional methods, including pairwise comparison and regression techniques [30]. Similarly, isoleucine was previously identified as an OA-associated metabolite [28]. The current analyses applied a novel analytic method, an evolutionary algorithm, which confirmed our previous findings and also identified additional novel metabolic markers for OA. These include four amino acids and two acylcarnitines, which could have potential utility in the clinical management of OA.
For example, taurine is the most abundant free amino acid in humans and may play an important role in inflammation associated with oxidative stress [23]. It has been reported to be associated with rheumatoid arthritis [19]. Nitrotyrosine is also associated with oxidative damage and has been found to be associated with aging and the development of OA in cartilage samples from both monkeys and humans [21]. The findings of the current study certainly warrant further investigation of the role of these novel metabolic markers in OA.
4.4 Conclusion

In this study, we proposed using an LGP algorithm to evolve predictive models for the metabolomics study of human diseases. We aimed to provide explainable learning results for the application problem by showing which metabolite features were the most influential in the prediction, through either individual effects or synergistic effects in combination with other metabolite features. The results showed that the LGP algorithm was able to evolve highly accurate classification models that took the metabolite concentrations as inputs and predicted the disease risk for unseen testing data. We highlighted that feature selection is intrinsic to the LGP algorithm and is performed automatically through co-evolution with the genetic programs. It was shown that, using the subset of only the most influential metabolite features, the LGP algorithm was able to further improve the prediction accuracy. In addition, we used a network representation to depict the pairwise interaction patterns of the most influential metabolite pairs. Such a metabolite synergy network allowed us to identify highly central metabolites that interact with a large number of other metabolites and collectively contribute to the disease risk prediction.

The predictive model learning and feature importance analysis developed in this study open up further research directions for the field of genetic programming. As artificial intelligence and machine learning revolutionize many research areas and industries, the explainability issue becomes more prominent. Questions like the following are often raised when applying machine learning algorithms, especially to bioinformatics problems: How trustworthy is the learned predictive model? What is the mechanism underlying the highly accurate prediction? Which features play a more important role in the prediction? In this study, we were able to partially answer some of these questions and leave further explorations for future research.

Acknowledgements This research is supported by the Canadian Natural Sciences and Engineering Research Council (NSERC) Discovery grant RGPIN-04699-2016 to Ting Hu.
References

1. Almasi, S.M., Hu, T.: Measuring the importance of vertices in the weighted human disease network. PLoS ONE 14(3), e0205936 (2019)
2. Altman, R., Alarcon, G., Appelrouth, D., Bloch, D., Borenstein, D., Brandt, K., Brown, C., Cooke, T.D., et al.: The American College of Rheumatology criteria for the classification and reporting of osteoarthritis of the hip. Arthritis and Rheumatology 34(5), 505–514 (1991)
3. Barabasi, A.L., Oltvai, Z.N.: Network biology: Understanding the cell's functional organization. Nature Reviews Genetics 5, 101–113 (2004)
4. Bezanson, J., Edelman, A., Karpinski, S., Shah, V.B.: Julia: A fresh approach to numerical computing. CoRR abs/1411.1607 (2014). URL http://arxiv.org/abs/1411.1607
5. Brameier, M.F., Banzhaf, W.: Linear Genetic Programming. Springer (2007)
6. Camacho, D.M., Collins, K.M., Powers, R.K., Costello, J.C., Collins, J.J.: Next-generation machine learning for biological networks. Cell 173(7), 1581–1592 (2018)
7. Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., Elhadad, N.: Intelligible models for healthcare: predicting pneumonia risk and hospital 30-day readmission. In: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1721–1730 (2015)
8. Cho, D.Y., Kim, Y.A., Przytycka, T.M.: Network biology approach to complex diseases. PLoS Computational Biology 8(12), e1002820 (2012)
9. Dorani, F., Hu, T., Woods, M.O., Zhai, G.: Ensemble learning for detecting gene-gene interactions in colorectal cancer. PeerJ 6, e5854 (2018)
10. Fontaine-Bisson, B., Thorburn, J., Gregory, A., Zhang, H., Sun, G.: Melanin-concentrating hormone receptor 1 polymorphisms are associated with components of energy balance in the Complex Diseases in the Newfoundland Population: Environment and Genetics (CODING) study. The American Journal of Clinical Nutrition 99(2), 384–391 (2014)
11. Ghahramani, Z.: Probabilistic machine learning and artificial intelligence. Nature 521, 452–459 (2015)
12. Gilpin, L.H., Bau, D., Yuan, B.Z., Bajwa, A., Specter, M., Kagal, L.: Explaining explanations: an overview of interpretability of machine learning. In: Proceedings of the 5th IEEE International Conference on Data Science and Advanced Analytics (DSAA), pp. 80–89 (2018)
13. Hu, T., Chen, Y., Kiralis, J.W., Moore, J.H.: ViSEN: Methodology and software for visualization of statistical epistasis networks. Genetic Epidemiology 37, 283–285 (2013)
14. Hu, T., Moore, J.H.: Network modeling of statistical epistasis. In: M. Elloumi, A.Y. Zomaya (eds.) Biological Knowledge Discovery Handbook: Preprocessing, Mining, and Postprocessing of Biological Data, chap. 8, pp. 175–190. Wiley (2013)
15. Hu, T., Oksanen, K., Zhang, W., Randell, E., Furey, A., Sun, G., Zhai, G.: An evolutionary learning and network approach to identifying key metabolites for osteoarthritis. PLoS Computational Biology 14(3), e1005986 (2018)
16. Hu, T., Sinnott-Armstrong, N.A., Kiralis, J.W., Andrew, A.S., Karagas, M.R., Moore, J.H.: Characterizing genetic interactions in human disease association studies using statistical epistasis networks. BMC Bioinformatics 12, 364 (2011)
17. Hu, T., Zhang, W., Fan, Z., Sun, G., Likhodi, S., Randell, E., Zhai, G.: Metabolomics differential correlation network analysis of osteoarthritis. Pacific Symposium on Biocomputing 21, 120–131 (2016)
18. Kafaie, S., Chen, Y., Hu, T.: A network approach to prioritizing susceptibility genes for genome-wide association studies. Genetic Epidemiology 43(5), 477–491 (2019)
19. Kontny, E., Wojtecka-Łukasik, E., Rell-Bakalarska, K., Dziewczopolski, W., Maśliński, W., Maśliński, S.: Impaired generation of taurine chloramine by synovial fluid neutrophils of rheumatoid arthritis patients. Amino Acids 23(4), 415–418 (2002)
20. Lee, M., Hu, T.: Computational methods for the discovery of metabolic markers of complex traits. Metabolites 9(4), 66 (2019)
21. Loeser, R.F., Carlson, C.S., Carlo, M.D., Cole, A.: Detection of nitrotyrosine in aging and osteoarthritic cartilage: Correlation of oxidative damage with the presence of interleukin-1β and with chondrocyte resistance to insulin-like growth factor 1. Arthritis and Rheumatology 46(9), 2349–2357 (2002)
22. Ma, J., Yu, M.K., Fong, S., Ono, K., Sage, E., Demchak, B., Sharan, R., Ideker, T.: Using deep learning to model the hierarchical structure and function of a cell. Nature Methods 15(4), 290–298 (2018)
23. Marcinkiewicz, J., Kontny, E.: Taurine and inflammatory diseases. Amino Acids 46(1), 7–20 (2014)
24. Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?": Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016)
25. Shannon, P., Markiel, A., Ozier, O., Baliga, N.S., Wang, J.T., Ramage, D., Amin, N., Schwikowski, B., Ideker, T.: Cytoscape: A software environment for integrated models of biomolecular interaction networks. Genome Research 13, 2498–2504 (2003)
26. Yu, M.K., Ma, J., Fisher, J., Kreisberg, J.F., Raphael, B.J., Ideker, T.: Visible machine learning for biomedicine. Cell 173(7), 1562–1565 (2018)
27. Zhai, G., Aref-Eshghi, E., Rahman, P., Zhang, H., Martin, G., Furey, A., Green, R.C., Sun, G.: Attempt to replicate the published osteoarthritis-associated genetic variants in the Newfoundland & Labrador population. Journal of Orthopedics and Rheumatology 1(3), 5 (2014)
28. Zhai, G., Wang-Sattler, R., Hart, D.J., Arden, N.K., Hakim, A.J., Illig, T., Spector, T.D.: Serum branched-chain amino acid to histidine ratio: a novel metabolomic biomarker of knee osteoarthritis. Annals of the Rheumatic Diseases, 120857 (2010)
29. Zhang, W., Likhodii, S., Aref-Eshghi, E., Zhang, Y., Harper, P.E., Randell, E., Green, R., Martin, G., Furey, A., Sun, G., Rahman, P., Zhai, G.: Relationship between blood plasma and synovial fluid metabolite concentrations in patients with osteoarthritis. The Journal of Rheumatology 42(5), 859–865 (2015)
30. Zhang, W., Sun, G., Likhodii, S., Liu, M., Aref-Eshghi, E., Harper, P.E., Martin, G., Furey, A., Green, R., Randell, E., Rahman, P., Zhai, G.: Metabolomic analysis of human plasma reveals that arginine is depleted in knee osteoarthritis patients. Osteoarthritis and Cartilage 24, 827–834 (2016)
Chapter 5
Symbolic Regression by Exhaustive Search: Reducing the Search Space Using Syntactical Constraints and Efficient Semantic Structure Deduplication Lukas Kammerer, Gabriel Kronberger, Bogdan Burlacu, Stephan M. Winkler, Michael Kommenda, and Michael Affenzeller
5.1 Introduction

Symbolic regression is a task that can be solved with genetic programming (GP), and it is a common example of an application where GP is particularly effective in practice. Symbolic regression is a machine learning task in which we try to find a mathematical model, represented as a closed-form expression, that captures the dependencies between the variables of a dataset. Genetic programming has proven well-suited for this task, especially when there is little knowledge about the data-generating process. Even when we have a good understanding of the underlying process, GP can identify counterintuitive or unexpected solutions.
L. Kammerer (✉)
Heuristic and Evolutionary Algorithms Laboratory (HEAL), University of Applied Sciences Upper Austria, Hagenberg, Austria
Department of Computer Science, Johannes Kepler University, Linz, Austria
Josef Ressel Center for Symbolic Regression, University of Applied Sciences Upper Austria, Hagenberg, Austria
e-mail: [email protected]

G. Kronberger · B. Burlacu · M. Kommenda
Heuristic and Evolutionary Algorithms Laboratory (HEAL), University of Applied Sciences Upper Austria, Hagenberg, Austria
Josef Ressel Center for Symbolic Regression, University of Applied Sciences Upper Austria, Hagenberg, Austria

S. M. Winkler · M. Affenzeller
Heuristic and Evolutionary Algorithms Laboratory (HEAL), University of Applied Sciences Upper Austria, Hagenberg, Austria
Department of Computer Science, Johannes Kepler University, Linz, Austria

© Springer Nature Switzerland AG 2020
W. Banzhaf et al. (eds.), Genetic Programming Theory and Practice XVII, Genetic and Evolutionary Computation, https://doi.org/10.1007/978-3-030-39958-0_5
5.1.1 Motivation

GP has some practical limitations when used for symbolic regression. One limitation is that, as a stochastic process, it might produce highly dissimilar solutions even for the same input data. This can be very helpful for producing new "creative" solutions. However, it is problematic when we try to integrate symbolic regression into carefully engineered solutions (e.g., for the automatic control of production plants). In such situations we would hope that there is an optimal solution and that the solution method guarantees to identify this optimum. Intuitively, if the data changes only slightly, we expect that the optimal regression solution also changes only slightly. If this is the case, we know that the solution method is trustworthy (cf. [15, 31]) and we can rely on the fact that the solutions are optimal, at least with respect to the objective function that we specified. Of course, this is only wishful thinking, for three fundamental reasons: (1) the symbolic regression search space is huge and contains many different expressions which are algebraically equivalent, (2) GP has no guarantee to explore the whole search space with reasonable computational resources, and (3) the "optimal solution" might not be expressible as a closed-form mathematical expression using the given building blocks. Therefore, the goal is to find an approximately optimal solution.
5.1.2 Prior Work

Different methods have been developed with the aim of improving the reliability of symbolic regression. Currently, several off-the-shelf software solutions use enhanced variants of GP and are noteworthy in this context: the DataModeler package1 [16] provides extensive capabilities for symbolic regression on top of Mathematica™; Eureqa™ is a commercial software tool2 for symbolic regression based on the research described in [27–29]; and the open-source framework HeuristicLab3 [36] is a general software environment for heuristic and evolutionary algorithms with extensive functionality for symbolic regression and white-box modeling.

In other prior work, several researchers have presented non-evolutionary solution methods for symbolic regression. Fast function extraction (FFX) [22] is a deterministic method that uses elastic-net regression [39] to produce symbolic regression solutions orders of magnitude faster than GP for many real-world problems.
1 http://www.evolved-analytics.com/. 2 https://www.nutonian.com/products/eureqa/. 3 https://dev.heuristiclab.com.
The work by Korns toward "extremely accurate" symbolic regression [12–14] highlights the issue that baseline GP does not guarantee to find the optimal solution even for rather limited search spaces. It gives a useful systematic definition of increasingly larger symbolic regression search spaces using abstract expression grammars [10] and describes enhancements to GP to improve its reliability. The work by Worm and Chiu on prioritized grammar enumeration [38] is closely related to ours. They use a restricted set of grammar rules for deriving increasingly complex expressions and describe a deterministic search algorithm which enumerates the search space for limited symbolic regression problems.
5.1.3 Organization of This Chapter

Our contribution is conceptually an extension of prioritized grammar enumeration [38], although our implementation of the method deviates significantly. The most relevant extensions are that we cut out large parts of the search space and provide a general framework for integrating heuristics in order to improve search efficiency. Section 5.2 describes how we reduce the size of the search space, which is defined by a context-free grammar:

1. We restrict the structure of solutions to prevent overly complicated solutions.
2. We use grammar restrictions to prevent semantic duplicates, i.e., solutions with different syntax but the same semantics, such as algebraic transformations of one another. With these restrictions, most solutions can be generated in exactly one way.
3. We efficiently identify the remaining duplicates with semantic hashing, so that (nearly) all solutions in the search space are semantically unique.

In Sect. 5.3, we explain the algorithm that iterates over all these semantically unique solutions. The algorithm sequentially generates solutions from the grammar and keeps track of the most accurate one. For very small problems, it is even feasible to iterate over the whole search space [19]. For larger problems, however, our goal is to find accurate and concise solutions early during the search and to stop the algorithm after a reasonable time. The search order is determined by heuristics, which estimate the quality of solutions and prioritize promising ones in the search. A simple heuristic is proposed in Sect. 5.4. Modeling results in Sect. 5.5 show that this first version of our algorithm can already solve several difficult noiseless benchmark problems.
5.2 Definition of the Search Space

The search space of our deterministic symbolic regression algorithm is defined by a context-free grammar. The production rules of the grammar define the mathematical expressions that can be explored by the algorithm. The grammar only specifies the possible model structures; placeholders are used for the numeric coefficients.
These coefficients are optimized separately by a curve-fitting algorithm (e.g., optimizing least squares with a gradient-based optimization algorithm) using the available training data.

In a general grammar for mathematical expressions, as is common in symbolic regression with GP, for example, the same formula can be derived in several forms. These duplicates inflate the search space. To reduce them, our grammar is deliberately restricted with regard to the possible structure of expressions. Remaining duplicates that cannot be prevented by a context-free grammar are eliminated via a hashing algorithm. Using both this grammar and hashing, we can generate a search space containing only semantically unique expressions.
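To fix the terminology of symbols, phrases, and derivations used below, a context-free grammar can be stored directly as a table of production rules. The following toy fragment in Julia is our own illustration and is deliberately much simpler than the actual grammar of Fig. 5.1:

# non-terminals map to lists of alternative right-hand sides
const RULES = Dict(
    :Expr   => [[:Term, :+, :Expr], [:Term]],
    :Term   => [[:Factor, :*, :Term], [:Factor]],
    :Factor => [[:x], [:log, :x], [:exp, :x], [:sin, :x]],
)

# deriving a phrase = replacing its leftmost non-terminal by one rule body
leftmost(phrase) = findfirst(s -> haskey(RULES, s), phrase)

function derive(phrase, rule)
    i = leftmost(phrase)
    return vcat(phrase[1:i-1], rule, phrase[i+1:end])
end

derive([:Expr], RULES[:Expr][1])   # → [:Term, :+, :Expr]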
5.2.1 Grammar for Mathematical Expressions

In this work we consider mathematical expressions as lists of symbols, which we call phrases or sentences. A phrase can contain both terminal and non-terminal symbols; a sentence contains only terminal symbols. Non-terminal symbols can be replaced by other symbols as defined by the grammar's production rules, while terminal symbols represent parts of the final expression, in our case functions or variables. Our grammar is very similar to the one by Kronberger et al. [19]. It produces only rational polynomials which may contain linear and nonlinear terms, as outlined conceptually in Eq. (5.1). The basic building blocks of terms are the linear and nonlinear functions {+, ×, inv, exp, log, sin, square root, cube root}. Recursion in the production rules represents a strategy for generating increasingly complex solutions by repeated nesting of expressions and terms.

    Expr   = c_1 Term_1 + c_2 Term_2 + ... + c_n
    Term   = Factor_0 × Factor_1 × ...                                    (5.1)
    Factor ∈ {variable, log(variable), exp(variable), sin(variable)}

We explicitly disallow nested non-linear functions, as we consider such solutions too complex for real-world applications. Otherwise, we allow as many different structures as possible in order to keep accurate and concise models in the search space. We prevent semantic duplicates by generating just one side of mathematical equality relations in our grammar, e.g. we allow xy + xz but not x(y + z). Since each function has different mathematical identities, many different production rules are necessary to cover all special cases. Because we scale every term, including function arguments, we also end up with many placeholders for coefficients in the structures. All production rules are detailed in Fig. 5.1 and described in the following.

We use a polynomial structure as outlined in Eq. (5.1) to prevent a factored form of solutions. The polynomial structure is enforced with the production rules Expr and Term. We restrict the occurrence of the multiplicative inverse (inv(x) = 1/x), the square root, and the cube root function to prevent a factored form such as 1/(x + y) × 1/(x + z).
Fig. 5.1 Context-free grammar for generating mathematical expressions
This restriction is necessary since we want to allow sums of simple terms as function arguments (see the non-terminal symbol SimpleExpr). Therefore, these three functions can occur at most once per term. This is defined with the symbol OneTimeFactors and one production rule for each combination. The only function in which we do not allow sums as arguments is exponentiation (see ExpFactor), since this form is subsumed by the overall polynomial structure (e.g. we allow e^x e^y but not e^(x+y)). Equation (5.2) shows some example identities; the left-hand forms are in the search space, the equivalent right-hand forms are not:

    c1 xy + c2 xz + c3  ≡  x(c4 y + c5 z) + c6
    c1/(c2 x + c3 xx + c4 xy + c5 y + c6) + c7  ≡  c8/((c9 x + c10)(c11 x + c12 y + c13)) + c14
    c1 exp(c2 x) exp(c3 y) + c4  ≡  c5 exp(c6 x + c7 y) + c8              (5.2)

We only allow (sums of) terms of variables as function arguments, which we express with the production rules SimpleExpr and SimpleTerm. An exception is the multiplicative inverse, in which we want to allow the same structures as in ordinary terms. However, we disallow compound fractions as in Eq. (5.3). Again, we introduce the separate grammar rules InvExpr and InvTerm, which cover the same rules as Term except for the multiplicative inverse. Equation (5.3) contrasts a supported form (left) with a compound fraction that is excluded (right):
    c1/(c2 log(c3 x + c4) + c5) + c6  ≡  c7/(c8/(c9 log(c10 x + c11) + c12) + c13) + c14    (5.3)
In the simplest case, the grammar produces the expression E0 = c0 x + c1, where x is a variable and c0 and c1 are coefficients corresponding to the slope and intercept. This expression is obtained by considering the simplest possible Term, which corresponds to the derivation chain Expr → Term → RecurringFactors → VarFactor → x. Further derivations could lead, for example, to the expression E1 = c0 x + (c1 x + c2), produced by nesting E0 into the first part of the production rule for Expr, where the Term is again substituted with the variable x. However, duplicate derivations can still occur due to algebraic properties like associativity and commutativity. These issues cannot be prevented with a context-free grammar, because a context-free grammar does not consider the surrounding symbols of the derived non-terminal symbol in its production rules. For example, the expression E1 = c0 x + (c1 x + c2) contains two coefficients c0 and c1 for the variable x, which could be folded into a new coefficient cnew = c0 + c1. This type of redundancy becomes even more pronounced when VarFactor has multiple productions (corresponding to multiple input variables), as multiple derivation paths can then produce different expressions which are algebraically equivalent, such as c1 x + c2 y, c3 x + c4 x + c5 y, and c6 y + c7 x for corresponding values of c1 ... c7.
Another example is c1 xy and c2 yx, which are equivalent but both derivable from the grammar. To avoid re-visiting already explored regions of the search space, we implement a caching strategy based on expression hashing for detecting algebraically equivalent expressions. The computed hash values are the same for algebraically equivalent expressions. In the search algorithm, we keep the hash values of all visited expressions and prevent re-evaluation of expressions with identical hash values.
5.2.2 Expression Hashing

We employ the expression hashing of Burlacu et al. [3] to assign hash values to subexpressions within phrases and sentences. Hash values of parent expressions are aggregated in a bottom-up manner from the hash values of their children using any general-purpose hash function. We then simplify such expressions according to arithmetic properties such as commutativity, associativity, and applicable mathematical identities. The resulting canonical minimal form and the associated hash value are then cached in order to prevent duplicated search effort.

Expression hashing builds on the idea of Merkle trees [23]. Figure 5.2 shows how hash values propagate towards the tree root (the topmost symbol of the expression) using a hash function ⊕ to aggregate child and parent hash values. Expression hashing considers an internal node's own symbol as well as associativity and commutativity properties. To account for these properties, each hashing step must be accompanied by a corresponding sorting step, in which child subexpressions are reordered according to their type and hash value. Algorithm 1 ensures that child nodes are sorted and hashed before parent nodes, so that the calculated hash values are consistent towards the root symbol. An expression's hash value is then given by the hash value of its root symbol.
Fig. 5.2 Hash tree example, in which the hash value of each node is calculated from both its own node content and the hash values of its children [3]
Algorithm 1 Expression hashing [3]
Input: An expression E
Output: The corresponding sequence of hash values
 1: hashes ← empty list of hash values
 2: symbols ← list of symbols in E
 3: for all symbols s in symbols do
 4:   H(s) ← an initial hash value
 5:   if s is a terminal function symbol then
 6:     if s is commutative then
 7:       Sort the child nodes of s
 8:     end if
 9:     child hashes ← hash values of s's children
10:     H(s) ← ⊕(child hashes, H(s))
11:   end if
12:   hashes.append(H(s))
13: end for
14: return hashes
After sorting, sub-expressions with the same hash value are considered isomorphic and are simplified according to arithmetic rules. The simplification procedure is illustrated in Fig. 5.3 and consists of the following steps:

1. Fold: Apply associativity to eliminate nested symbols of the same type. For example, the postfix expression a b + c + consists of two nested additions where each addition symbol has arity 2. Folding flattens this expression to the equivalent form a b c +, where the addition symbol has arity 3.
2. Simplify: Apply arithmetic rules and mathematical identities to further simplify the expressions. Since expressions already include placeholders for numerical coefficients, we eliminate redundant subexpressions, such as a a b +, which becomes a b +, or a a +, which becomes a.
3. Repeat steps 1 and 2 until no further simplification is possible.

The nested + and × symbols in Fig. 5.3 are folded in the first step, simplifying the tree structure of the expression. Arithmetic rules are then applied for further simplification. In this example, the product of exponentials exp(c1 × x1) × exp(c2 × x1) ≡ exp((c1 + c2) × x1) is simplified since, from a local optimization perspective, optimizing the coefficients of the expression yields the same or better results with a single coefficient c3 = c1 + c2; thus it makes no sense to keep both original factors. Finally, the sum c4 x1 + c5 x1 is also simplified, since one term in the sum is redundant. After simplification, the hash value of the simplified tree is returned as the hash value of the original expression. Based on this computation, we are able to identify already explored search paths and avoid duplicated effort.
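A compact sketch of this bottom-up hashing in Julia follows; the tree representation and the hash combination are our assumptions, and the actual implementation of [3] differs in detail:

struct Node
    symbol::Symbol          # :+, :*, :log, :x, :c, ...
    children::Vector{Node}
end

const COMMUTATIVE = Set([:+, :*])

function hashexpr(n::Node)::UInt
    isempty(n.children) && return hash(n.symbol)
    child_hashes = [hashexpr(c) for c in n.children]
    n.symbol in COMMUTATIVE && sort!(child_hashes)  # order-insensitive children
    return hash(n.symbol, hash(child_hashes))       # aggregate child and own hash
end

x, y = Node(:x, Node[]), Node(:y, Node[])
hashexpr(Node(:+, [x, y])) == hashexpr(Node(:+, [y, x]))   # true: x+y ≡ y+x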
Fig. 5.3 Simplification to canonical minimal form during hashing. (a) Original expression. (b) Folded expression. (c) Minimal form
5.3 Exploring the Search Space

By limiting the size of expressions, the grammar and the hashing scheme produce a large but finite search space of semantically unique expressions. In an exhaustive search, we iterate over all these expressions and search for the best-fitting one. To do so, we derive sentences along every possible derivation path. An expression is rejected if another expression with the same semantics, according to its hash value, has already been generated during the search. When a new, previously unseen sentence is derived, the placeholders for coefficients are replaced with real values and optimized separately. The best-fitting sentence is stored.

Algorithm 2 outlines how all unique expressions are derived: We store unfinished phrases (expressions with non-terminal symbols) in a data structure such as a stack or queue. We fetch phrases from this data structure one after another, derive new phrases, calculate their hash values, and compare these hash values to previously seen ones. To derive new phrases, we always replace the leftmost non-terminal symbol in the old phrase using the production rules of this non-terminal symbol. If a derived phrase becomes a sentence with only terminal symbols, its coefficients are optimized and its fitness is evaluated. Otherwise, if it still contains derivable non-terminal symbols, it is put back into the data structure.

We restrict the length of a phrase by its number of variable references; e.g., xx and log(x) + x both have two variable references. Phrases that exceed this limit are discarded in the search. Since every non-terminal symbol is eventually derived to at least one variable reference, non-terminal symbols also count as variable references. In our experiments, a limit on this complexity measure has proven to be the most intuitive way to specify an appropriate search space limit. Other measures, e.g. the number of symbols, are harder to estimate, since coefficients, function symbols, and the non-factorized representation of expressions quickly inflate the number of symbols in a phrase.
Algorithm 2 Iterating the search space
Input: Data set ds, max. number of variable references maxVariableRefs
Output: Best fitting expression
 1: openPhrases ← empty data structure
 2: seenHashes ← empty set
 3: Add StartSymbol to openPhrases
 4: bestExpression ← constant symbol
 5: while openPhrases is not empty do
 6:   oldPhrase ← fetch and remove from openPhrases
 7:   nonTerminalSymbol ← leftmost non-terminal symbol in oldPhrase
 8:   for all productions prod of nonTerminalSymbol do
 9:     newPhrase ← apply prod on a copy of oldPhrase
10:     if VariableRefs(newPhrase) ≤ maxVariableRefs then
11:       hash ← Hash(newPhrase)
12:       if seenHashes does not contain hash then
13:         Add hash to seenHashes
14:         if newPhrase is a sentence then
15:           Fit coefficients of newPhrase to ds
16:           Evaluate newPhrase on ds
17:           if newPhrase is better than bestExpression then
18:             bestExpression ← newPhrase
19:           end if
20:         else
21:           Add newPhrase to openPhrases
22:         end if
23:       end if
24:     end if
25:   end for
26: end while
27: return bestExpression
5.3.1 Symbolic Regression as Graph Search Problem

Without considering the semantics of expressions, we would end up exploring a search tree like the one in Fig. 5.4, in which semantically equivalent expressions are derived multiple times (e.g. c1 x + c2 x and c1 x + c2 x + c3 x). Hashing, however, turns the search tree into a directed search graph, in which nodes (derived phrases) are reachable via one or more paths, as shown in Fig. 5.5. Thus, hashing prevents the search from re-entering a graph region that was already visited. From this point of view, Algorithm 2 is very similar to simple graph search algorithms such as depth-first or breadth-first search.
Fig. 5.4 Search tree of expression generation without semantic hashing
5.3.2 Guiding the Search

In Algorithm 2, the order in which expressions are generated is determined by the data structure used: a stack results in a depth-first search, a queue in a breadth-first search. However, as the goal is to find well-fitting expressions quickly and efficiently, we need to guide the traversal of the search graph towards promising phrases. Our general framework for guiding the search is very similar to the idea used in the A* algorithm [5]. We use a priority queue as the data structure and assign a priority value to each phrase, indicating the expected quality of the sentences derivable from that phrase. Phrases with high priority are derived first in order to discover well-fitting sentences, steering the algorithm towards good solutions. As in the A* algorithm, we cannot make a definite statement about a phrase's priority before actually deriving all possible sentences from it. Therefore, we estimate this value with problem-specific heuristics.
Fig. 5.5 Search graph with loops caused by semantic hashing
The calculation of phrase priorities provides a generic point for integrating heuristics, improving search efficiency, and extending the algorithm's capabilities in future work.
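The effect of the data structure on Algorithm 2 can be made explicit with the following sketch, using the Julia package DataStructures.jl; the phrase strings and priority values are purely illustrative:

using DataStructures

open_dfs    = Stack{String}()                    # depth-first enumeration
open_bfs    = Queue{String}()                    # breadth-first enumeration
open_guided = PriorityQueue{String,Float64}()    # heuristic-guided search

# lower priority value = expanded earlier
enqueue!(open_guided, "c1*x + Expr", 0.25)
enqueue!(open_guided, "Expr", 0.90)
dequeue!(open_guided)                            # → "c1*x + Expr"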
5.4 Steering the Search

We introduce a simple heuristic for guiding the search and leave more complex and efficient heuristics for future work. The proposed heuristic makes a pessimistic estimate of the quality of the sentences derivable from a phrase. This is done by evaluating phrases before they are derived into sentences. With the goal of finding short and accurate sentences quickly, the priority value considers both the expected quality and the length of a phrase.
5.4.1 Quality Estimation

Estimating the expected quality of an unfinished phrase is possible due to the polynomial structure of sentences and the derivation of the leftmost non-terminal symbol in every phrase. Since expressions are sums of terms (c1 Term1 + c2 Term2 + ...), repeated expansion of the leftmost non-terminal symbol derives one term after another.
This results in phrases such as the one in Eq. (5.4), in which the first two terms c1 log(c2 x + c3) and c4 xx contain only terminal symbols and the last non-terminal symbol is Expr:

    c1 log(c2 x + c3)  +  c4 xx  +  Expr                                  (5.4)
    (finishedTerm1)       (finishedTerm2)    (treated as a coefficient)
Phrases in which the only non-terminal symbol is Expr are evaluated as if they were full sentences, by treating Expr as a coefficient during the local optimization phase. This gives a pessimistic estimate of the quality of the derivable sentences, since derived sentences with more terms can only have better quality. The quality can only improve with more terms because of the separate coefficient optimization and the one scaling coefficient per term, as shown in Eq. (5.5). If a term is derived which does not improve the quality, the optimization of the coefficients cancels it out by setting the corresponding scaling coefficient to zero (e.g. c5 in Eq. (5.5)):

    finishedTerm1 + finishedTerm2 + c5 Term                               (5.5)

where the new term c5 Term can only improve the quality.
This heuristic works only for phrases in which Expr is the only non-terminal symbol. For phrases with other non-terminal symbols, we reuse the estimated quality of the last evaluated parent phrase. The estimate is updated whenever a new term with only terminal symbols has been derived and again only one Expr remains. For now, we do not have a reliable estimation method for terms that contain non-terminal symbols and leave this topic for future work.
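When the inner coefficients of the finished terms have already been fitted, the pessimistic estimate reduces to a linear least-squares fit of the scaling coefficients, with one extra constant column standing in for the open Expr symbol. A sketch in Julia; the matrix-based interface is our assumption:

using Statistics

# T: n×k matrix of the evaluated finished terms; y: n target values
function estimated_nmse(T::Matrix{Float64}, y::Vector{Float64})
    A = hcat(T, ones(size(T, 1)))   # last column plays the role of Expr
    c = A \ y                       # least-squares scaling coefficients
    r = y .- A * c
    return sum(abs2, r) / (length(y) * var(y))   # NMSE, i.e. 1 − R²
end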
5.4.2 Priority Calculation

To prevent the arbitrary addition of badly fitting terms that are eventually scaled down to zero, our priority measure considers both a phrase's length and its expected accuracy. To balance these two factors, the two measures need to be on the same scale. As the quality measure we use the normalized mean squared error (NMSE), which is in the range [0, 1] for properly scaled solutions. This measure corresponds to 1 − R² (R² being the coefficient of determination). As the length measure we use the number of symbols relative to the maximum sentence length. Since we limit the search space by the maximum number of variable references in a phrase, we cannot calculate the maximum possible length of a phrase exactly. Therefore, we estimate this maximum length with a greedy procedure: Starting with the grammar's start symbol Expr, we iteratively derive a new phrase using the longest production rule. If two production rules have the same length, we take the one with the fewest non-terminal symbols and variable references.
Phrases with lower priority values are expanded first during the search. The priority used for steering the search in Sect. 5.3 is the phrase's NMSE value plus its weighted relative length, as shown in Eq. (5.6). The weight w controls the greediness and allows corrections of over- or underestimations of the maximum length; in practice, however, this value is not critical.

    priority(p) = NMSE(p) + w · len(p) / length_max                       (5.6)
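Equation (5.6) translates directly into code; the weight value shown here is purely illustrative:

priority(nmse, len, len_max; w = 0.05) = nmse + w * len / len_max

priority(0.12, 9, 60)    # → 0.1275; lower values are expanded first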
5.5 Experiments

We run our algorithm on several synthetic benchmark datasets to show that the search space defined by our restricted grammar is powerful enough to solve many problems in feasible time. As benchmark datasets, we use noiseless datasets from physical domains [4] and the Nguyen, Vladislavleva, and Keijzer datasets [37] as defined and implemented in the HeuristicLab framework. In the experiments, the search space was restricted to sentences with at most 20 variable references, and we evaluate at most 200,000 sentences. Coefficients are randomly initialized and then fitted with the iterative, gradient-based Levenberg–Marquardt algorithm [20, 21] with at most 100 iterations. For each model structure, we repeat the coefficient-fitting process ten times with differently initialized values to reduce the chance of ending up in bad local optima.

As a baseline, we also run symbolic regression with GP on the same benchmark problems. Specifically, we execute GP with strict offspring selection (OSGP) [1] and explicit optimization of coefficients [9]. The OSGP settings are listed in Table 5.1.
Table 5.1 OSGP experiment settings

Population size:           500
Max. selection pressure:   300
Max. evaluated solutions:  200,000
Mutation probability:      15%
Selection:                 Gender-specific selection (random and proportional)
Crossover operator:        Subtree swapping
Mutation operator:         Point mutation, tree shaking, changing single symbols, replacing/removing branches
Max. tree size:            30 nodes, depth 50
Function set:              +, −, ×, ÷, exp, log, sin, cos, square, sqrt, cbrt

4 https://dev.heuristiclab.com.
The OSGP experiments were executed with the HeuristicLab software framework4 [36]. Since this comparison focuses only on particular weaknesses and strengths of our proposed algorithm relative to state-of-the-art techniques, we use the same OSGP settings for all experiments and leave out problem-specific hyperparameter tuning.
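A sketch of the coefficient-fitting step with random restarts, using the LsqFit.jl package; the package choice, the example model structure, and the helper names are our assumptions, as the chapter only specifies Levenberg–Marquardt with at most 100 iterations and ten restarts:

using LsqFit

# example structure with coefficient placeholders c = [c1, c2, c3]
model(x, c) = c[1] .* sin.(c[2] .* x) .+ c[3]

function fit_structure(model, x, y; restarts = 10, ncoef = 3)
    best = nothing
    for _ in 1:restarts
        p0  = randn(ncoef)                                # random initialization
        fit = curve_fit(model, x, y, p0; maxIter = 100)   # Levenberg–Marquardt
        if best === nothing || sum(abs2, fit.resid) < sum(abs2, best.resid)
            best = fit                                    # keep the best restart
        end
    end
    return best
end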
5.5.1 Results

Both the exhaustive search and OSGP were repeated ten times on each dataset. All repetitions of the exhaustive search algorithm led to exactly the same results. This underlines the determinism of the proposed method, even though we rely on stochasticity when optimizing coefficients. The OSGP results also do not differ much between repetitions. Tables 5.2, 5.3, 5.4, and 5.5 show the NMSE values achieved by the exhaustive search and the median NMSE values over all OSGP repetitions. NMSE values smaller than 10^−8 are considered exact or good-enough approximations. Whenever the exhaustive search found such a good solution (NMSE < 10^−8), it found it within ten minutes. If no such solution is found, the algorithm runs until it reaches the maximum number of evaluated solutions, which can take days for larger datasets.

The experimental results show that our algorithm struggles with problems containing complex terms, for example the Keijzer datasets 4, 5, and 11 in Table 5.2. This is probably because our heuristic works term-wise: within terms that still contain non-terminal symbols, our algorithm searches completely broadly, without any guidance.
Table 5.2 Median NMSE results for Keijzer instances

                                                           Exhaustive search     OSGP
    Problem                                                Train      Test       Train      Test
 1  [6, 7]   0.3x sin(2πx); x ∈ [−1, 1]                    3e−27      2e−27      1e−30      8e−31
 2  [6]      0.3x sin(2πx); x ∈ [−2, 2]                    5e−22      5e−22      5e−18      4e−18
 3  [6]      0.3x sin(2πx); x ∈ [−3, 3]                    6e−32      3e−31      4e−30      3e−30
 4  [6, 26]  x^3 exp(−x) cos(x) sin(x)(sin(x)^2 cos(x)−1)  1e−04      2e−04      1e−06      1e−06
 5  [6]      (30xz)/((x − 10)y^2)                          3e−08      3e−08      3e−20      3e−20
 6  [6, 32]  Σ_{i=1}^{x} 1/i                               8e−13      6e−09      5e−14      5e−13
 7  [6, 32]  ln(x)                                         2e−31      3e−31      1e−30      2e−30
 8  [6, 32]  √x                                            2e−14      8e−10      5e−21      1e−21
 9  [6, 32]  arcsinh(x), i.e. ln(x + √(x^2 + 1))           5e−14      1e−05      5e−17      6e−16
10  [6, 32]  x^y                                           4e−04      1e−01      6e−32      2e−04
11  [6, 33]  xy + sin((x − 1)(y − 1))                      7e−04      7e−01      2e−22      9e−02
12  [6, 33]  x^4 − x^3 + y^2/2 − y                         5e−32      1e−31      7e−22      8e−18
13  [6, 33]  6 sin(x) cos(y)                               2e−32      2e−31      3e−32      3e−32
14  [6, 33]  8/(2 + x^2 + y^2)                             4e−32      2e−31      1e−17      1e−17
15  [6, 33]  x^3/5 + y^3/2 − y − x                         1e−22      2e−21      2e−11      6e−10
Table 5.3 Median NMSE results for Nguyen instances

                                                 Exhaustive search     OSGP
    Problem                                      Train      Test       Train      Test
 1  [34]  x^3 + x^2 + x                          5e−34      3e−33      8e−30      2e−29
 2  [34]  x^4 + x^3 + x^2 + x                    3e−33      4e−33      5e−30      1e−28
 3  [34]  x^5 + x^4 + x^3 + x^2 + x              1e−33      7e−33      2e−16      2e−15
 4  [34]  x^6 + x^5 + x^4 + x^3 + x^2 + x        6e−12      6e−11      2e−12      3e−08
 5  [34]  sin(x^2) cos(x) − 1                    9e−14      3e−13      3e−18      4e−18
 6  [34]  sin(x) + sin(x + x^2)                  2e−17      2e−12      6e−14      6e−08
 7  [34]  log(x + 1) + log(x^2 + 1)              4e−13      5e−12      5e−13      1e−09
 8  [34]  √x                                     6e−32      2e−31      7e−32      1e−31
 9  [34]  sin(x) + sin(y^2)                      2e−13      2e−12      8e−31      8e−31
10  [34]  2 sin(x) cos(y)                        5e−32      1e−31      1e−28      8e−29
11  [34]  x^y                                    2e−06      1e−02      6e−30      3e−30
12  [34]  x^4 − x^3 + y^2/2 − y                  2e−31      2e−31      7e−18      5e−17
Table 5.4 Median NMSE results for Vladislavleva instances

                                                                  Exhaustive search     OSGP
    Problem                                                       Train      Test       Train      Test
 1  [30]  exp(−(x1 − 1)^2)/(1.2 + (x2 − 2.5)^2)                   3e−03      3e−01      1e−09      9e−07
 2  [26]  exp(−x) x^3 cos(x) sin(x)(cos(x) sin(x)^2 − 1)          3e−04      1e−02      3e−06      2e−03
 3  [35]  f2(x1)(x2 − 5)                                          1e−02      2e−01      3e−05      6e−04
 4  [35]  10/(5 + Σ_{i=1}^{5}(xi − 3)^2)                          1e−01      2e−01      7e−03      1e−02
 5  [35]  30((x1 − 1)(x3 − 1))/(x2^2 (x1 − 10))                   2e−03      9e−03      8e−16      9e−15
 6  [35]  6 sin(x1) cos(x2)                                       8e−32      4e−31      6e−31      3e−19
 7  [35]  (x1 − 3)(x2 − 3) + 2 sin((x1 − 4)(x2 − 4))              1e−30      9e−31      5e−29      4e−29
 8  [35]  ((x1 − 3)^4 + (x2 − 3)^3 − (x2 − 3))/((x2 − 2)^4 + 10)  1e−03      2e−01      5e−05      2e−02
This issue becomes even more pronounced when long and complex function arguments have to be found. It should also be noted that our algorithm only finds non-factorized representations of such arguments, which are even longer and therefore even harder to find in a broad search.

For the Nguyen datasets in Table 5.3 and the Keijzer datasets 12–15 in Table 5.2, our exhaustive search finds exact or good approximations in most cases. Especially on the simpler datasets, the results of our algorithm surpass those of OSGP. This is likely due to the datasets' low number of training instances, which makes it harder for OSGP to find good approximations.

Some problems are not contained in the search space, so we do not find any good solution for them. This is the case for Keijzer 6, 9, and 10 in Table 5.2, for which our grammar does not support the required function symbols. Likewise, all Vladislavleva datasets except 6 and 7 in Table 5.4, as well as the problems "Fluid flow" and "Pagie-1" in Table 5.5, are not in the hypothesis space, as they are too complex.
Table 5.5 Median NMSE results for other instances

                                                                     Exhaustive search     OSGP
Problem                                                              Train      Test       Train      Test
Poly-10 [25]: x1x2 + x3x4 + x5x6 + x1x7x9 + x3x6x10                  2e−32      1e−32      7e−02      1e−01
Pagie-1 (inverse dynamics) [24]: 1/(1 + x^−4) + 1/(1 + y^−4)         1e−03      6e−01      9e−07      5e−05
Aircraft lift coefficient [4]: CLα(α − α0) + CLδe δe SHT/Sref        3e−31      3e−31      2e−17      2e−17
Fluid flow [4]: V∞ r sin(θ)(1 − R^2/r^2) + Γ/(2π) ln(r/R)            3e−04      4e−04      9e−06      2e−05
Rocket fuel flow [4]: (p0 A/√T0) √(γ/R (2/(γ + 1))^((γ+1)/(γ−1)))    3e−31      3e−31      1e−19      1e−19
Another issue is the optimization of coefficients. Although several problems have a simple structure and lie within the search space, we do not find the right coefficients for the arguments of non-linear functions, for example in Nguyen 5–7. The issue here is that we iterate over the correct model structure but determine bad coefficients for it. Since we never look at the same model structure again, we can then only find an approximation. This is a big difference to symbolic regression with genetic programming, where the same structure might be found again in later generations.
5.6 Discussion

Among the nonlinear system identification techniques, symbolic regression is characterized by its ability to identify complex nonlinear relationships in structured numerical data in the form of interpretable models. The combination of the power of nonlinear system identification without a priori assumptions about the model structure with the white-box character of mathematical formulas represents the unique selling point of symbolic regression. If tree-based GP is used as the search method, the ability to interpret the found models is limited by the stochasticity of the GP search: at the end of the modeling phase, several similarly complex models of approximately the same quality may have been produced which have completely different structures and use completely different subsets of features.

These limitations due to ambiguity can be countered with a deterministic approach in which only semantically unique models are used. This approach, however, requires severe restrictions on the search space complexity in order to specify a subspace in which an exhaustive search is feasible. On the other hand, the exhaustive nature of the approach makes it possible to generate extensive model libraries in an offline phase; as soon as a concrete task is given in the online phase, these libraries only need to be navigated in a suitable way.
generating models in such a way that the latter enables a complete search in an incomplete search space while the classical approach performs an incomplete search in a rather complete search space.
5.6.1 Limitations

The approach we have described in this contribution also has several limitations. For the identification of optimal coefficient values we rely on the Levenberg–Marquardt method for least squares, which is a local search routine using gradient information. Therefore, we can only hope, but not guarantee, to find globally optimal coefficient values. Finding bad local optima for coefficients is less of a concern when using GP variants with a similar local improvement scheme, because the evolutionary operations of recombination and mutation implicitly provide many restarts. In the proposed method we visit each structure only once and therefore risk discarding a good solution when we are unlucky and find bad coefficients. So far, we have worked only with noiseless problem instances. In first experiments with noisy problem instances we observed that the algorithm might get stuck trying to improve non-optimal partial solutions due to its greedy nature. Therefore, further investigation is needed before we extend our algorithm to noisy real-world problems. Another limitation is the poor scalability of grammar enumeration when increasing the number of features or the size of the search space. When increasing these parameters we cannot expect to explore a significant part of the complete search space and must increasingly rely on the power of heuristics to home in on relevant subspaces. Currently, we have only integrated a single heuristic, which evaluates terms in partial solutions and prioritizes phrases which include well-fitting terms. However, the algorithm has no way to prioritize incomplete terms and is inefficient when trying to find complex terms.
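To make the role of this local coefficient search concrete, the sketch below fits the coefficients of one fixed model structure with the Levenberg–Marquardt method via SciPy's least_squares. The structure, data, and starting point are illustrative assumptions rather than the implementation used in this chapter.

# Hedged sketch: Levenberg-Marquardt coefficient fitting for one fixed,
# enumerated model structure. Structure, data, and starting values are
# illustrative assumptions, not the chapter's actual implementation.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(42)
X = rng.uniform(-1.0, 1.0, size=(100, 2))        # toy training data
y = 2.5 * np.sin(1.7 * X[:, 0]) + 0.3 * X[:, 1]  # toy target values

def residuals(c, X, y):
    # One hypothetical structure from the grammar: c0*sin(c1*x0) + c2*x1
    return c[0] * np.sin(c[1] * X[:, 0]) + c[2] * X[:, 1] - y

# method="lm" selects Levenberg-Marquardt, a local gradient-based
# routine: the starting point x0 determines which optimum is reached,
# which is exactly the limitation discussed above.
fit = least_squares(residuals, x0=np.ones(3), args=(X, y), method="lm")
print(fit.x)  # fitted coefficients for this one structure

Because each structure is visited only once, a poor result from this single local fit permanently discards the structure; this motivates the restarts and global optimizers discussed in Sect. 5.7.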
5.7 Outlook

Even considering the above-mentioned limitations of the currently implemented algorithm, we still see significant potential in the approach of a more systematic and deterministic search for symbolic regression, and we already have several ideas to improve the algorithm and overcome some of the limitations. The integration of improved heuristics for guided search is our top priority. An advantage of the concept is that it is extremely general and allows experimenting with many different heuristics. Heuristics can be as simple as prioritizing shorter or less complex expressions. More elaborate schemes which guide the search based on prior knowledge about the data-generating process are easy to imagine. Heuristics could incorporate syntactical information (e.g. which variables already occur within the expression) as well as information from partial evaluation
of expressions. We also consider dynamic heuristics which are adjusted while the algorithm is running and learning about the problem domain. Potentially, we could even identify and learn heuristics which are transferable to other problem instances and would improve efficiency in a transfer learning setting. Getting trapped in local optima is less of a concern when we apply global search algorithms for coefficient values such as evolution strategies, differential evolution, or particle swarm optimization (cf. [11]). Another approach would be to reduce the ruggedness of the objective function through regularization of the coefficient optimization step. This could be helpful to reduce the potential of overfitting and getting stuck in sub-optimal subspaces of the search space. Generally, we consider grammar enumeration to be effective only when we limit the search space to relatively short expressions—which is often the case in our industrial applications. Therein lies the main potential compared to the more general approach of genetic programming. In this context we continue to explore potential for segmentation of the search space [19] in combination with grammar enumeration in an offline phase for improving later search runs. Grammar enumeration with deduplication of structures could also be helpful to build large offline libraries of sub-expressions that could be used by GP [2, 8, 17, 18].

Acknowledgements The authors gratefully acknowledge support by the Christian Doppler Research Association and the Federal Ministry for Digital and Economic Affairs within the Josef Ressel Center for Symbolic Regression.
References

1. Affenzeller, M., Winkler, S., Wagner, S., Beham, A.: Genetic Algorithms and Genetic Programming - Modern Concepts and Practical Applications, Numerical Insights, vol. 6. CRC Press, Chapman & Hall (2009)
2. Angeline, P.J., Pollack, J.: Evolutionary module acquisition. In: Proceedings of the Second Annual Conference on Evolutionary Programming, pp. 154–163. La Jolla, CA, USA (1993)
3. Burlacu, B., Kammerer, L., Affenzeller, M., Kronberger, G.: Hash-based tree similarity and simplification in genetic programming for symbolic regression. In: Computer Aided Systems Theory, EUROCAST 2019 (2019)
4. Chen, C., Luo, C., Jiang, Z.: A multilevel block building algorithm for fast modeling generalized separable systems. Expert Systems with Applications 109, 25–34 (2018)
5. Hart, P.E., Nilsson, N.J., Raphael, B.: A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics 4(2), 100–107 (1968)
6. Keijzer, M.: Improving symbolic regression with interval arithmetic and linear scaling. In: Genetic Programming, Proceedings of EuroGP'2003, LNCS, vol. 2610, pp. 70–82. Springer-Verlag, Essex (2003)
7. Keijzer, M., Babovic, V.: Genetic programming, ensemble methods and the bias/variance tradeoff - introductory investigations. In: Genetic Programming, Proceedings of EuroGP'2000, LNCS, vol. 1802, pp. 76–90. Springer-Verlag, Edinburgh (2000)
8. Keijzer, M., Ryan, C., Murphy, G., Cattolico, M.: Undirected training of run transferable libraries. In: Proceedings of the 8th European Conference on Genetic Programming, Lecture Notes in Computer Science, vol. 3447, pp. 361–370. Springer, Lausanne, Switzerland (2005)
9. Kommenda, M., Kronberger, G., Winkler, S., Affenzeller, M., Wagner, S.: Effects of constant optimization by nonlinear least squares minimization in symbolic regression. In: Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation, GECCO '13 Companion, pp. 1121–1128. ACM (2013)
10. Korns, M.F.: Symbolic regression using abstract expression grammars. In: GEC '09: Proceedings of the first ACM/SIGEVO Summit on Genetic and Evolutionary Computation, pp. 859–862. ACM, Shanghai, China (2009)
11. Korns, M.F.: Abstract expression grammar symbolic regression. In: Genetic Programming Theory and Practice VIII, Genetic and Evolutionary Computation, vol. 8, chap. 7, pp. 109–128. Springer, Ann Arbor, USA (2010)
12. Korns, M.F.: Extreme accuracy in symbolic regression. In: Genetic Programming Theory and Practice XI, Genetic and Evolutionary Computation, chap. 1, pp. 1–30. Springer, Ann Arbor, USA (2013)
13. Korns, M.F.: Extremely accurate symbolic regression for large feature problems. In: Genetic Programming Theory and Practice XII, Genetic and Evolutionary Computation, pp. 109–131. Springer, Ann Arbor, USA (2014)
14. Korns, M.F.: Highly accurate symbolic regression with noisy training data. In: Genetic Programming Theory and Practice XIII, Genetic and Evolutionary Computation, pp. 91–115. Springer, Ann Arbor, USA (2015)
15. Kotanchek, M., Smits, G., Vladislavleva, E.: Trustable symbolic regression models: using ensembles, interval arithmetic and pareto fronts to develop robust and trust-aware models. In: Genetic Programming Theory and Practice V, Genetic and Evolutionary Computation, chap. 12, pp. 201–220. Springer, Ann Arbor (2007)
16. Kotanchek, M.E., Vladislavleva, E., Smits, G.: Symbolic Regression Is Not Enough: It Takes a Village to Raise a Model, pp. 187–203. Springer New York, New York, NY (2013)
17. Krawiec, K., Pawlak, T.: Locally geometric semantic crossover. In: GECCO Companion '12: Proceedings of the fourteenth international conference on Genetic and evolutionary computation, pp. 1487–1488. ACM, Philadelphia, Pennsylvania, USA (2012)
18. Krawiec, K., Swan, J., O'Reilly, U.M.: Behavioral program synthesis: Insights and prospects. In: Genetic Programming Theory and Practice XIII, Genetic and Evolutionary Computation, pp. 169–183. Springer, Ann Arbor, USA (2015)
19. Kronberger, G., Kammerer, L., Burlacu, B., Winkler, S.M., Kommenda, M., Affenzeller, M.: Cluster analysis of a symbolic regression search space. In: Genetic Programming Theory and Practice XVI. Springer, Ann Arbor, USA (2018)
20. Levenberg, K.: A method for the solution of certain non-linear problems in least squares. Quarterly of Applied Mathematics 2(2), 164–168 (1944)
21. Marquardt, D.W.: An algorithm for least-squares estimation of nonlinear parameters. Journal of the Society for Industrial and Applied Mathematics 11(2), 431–441 (1963)
22. McConaghy, T.: FFX: Fast, scalable, deterministic symbolic regression technology. In: Genetic Programming Theory and Practice IX, Genetic and Evolutionary Computation, chap. 13, pp. 235–260. Springer, Ann Arbor, USA (2011)
23. Merkle, R.C.: A digital signature based on a conventional encryption function. In: Advances in Cryptology — CRYPTO '87, pp. 369–378. Springer Berlin Heidelberg, Berlin, Heidelberg (1988)
24. Pagie, L., Hogeweg, P.: Evolutionary consequences of coevolving targets. Evolutionary Computation 5(4), 401–418 (1997)
25. Poli, R.: A simple but theoretically-motivated method to control bloat in genetic programming. In: Genetic Programming, Proceedings of EuroGP'2003, LNCS, vol. 2610, pp. 204–217. Springer-Verlag, Essex (2003)
26. Salustowicz, R.P., Schmidhuber, J.: Probabilistic incremental program evolution. Evolutionary Computation 5(2), 123–141 (1997)
27. Schmidt, M., Lipson, H.: Co-evolving fitness predictors for accelerating and reducing evaluations. In: Genetic Programming Theory and Practice IV, Genetic and Evolutionary Computation, vol. 5, pp. 113–130. Springer, Ann Arbor (2006)
28. Schmidt, M., Lipson, H.: Symbolic regression of implicit equations. In: Genetic Programming Theory and Practice VII, Genetic and Evolutionary Computation, chap. 5, pp. 73–85. Springer, Ann Arbor (2009)
29. Schmidt, M., Lipson, H.: Age-fitness pareto optimization. In: Genetic Programming Theory and Practice VIII, Genetic and Evolutionary Computation, vol. 8, chap. 8, pp. 129–146. Springer, Ann Arbor, USA (2010)
30. Smits, G., Kotanchek, M.: Pareto-front exploitation in symbolic regression. In: Genetic Programming Theory and Practice II, chap. 17, pp. 283–299. Springer, Ann Arbor (2004)
31. Stijven, S., Vladislavleva, E., Kordon, A., Kotanchek, M.: Prime-time: Symbolic regression takes its place in industrial analysis. In: Genetic Programming Theory and Practice XIII, Genetic and Evolutionary Computation, pp. 241–260. Springer, Ann Arbor, USA (2015)
32. Streeter, M.J.: Automated discovery of numerical approximation formulae via genetic programming. Master's thesis, Computer Science, Worcester Polytechnic Institute, MA, USA (2001)
33. Topchy, A., Punch, W.F.: Faster genetic programming based on local gradient search of numeric leaf values. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2001), pp. 155–162. Morgan Kaufmann, San Francisco, California, USA (2001)
34. Uy, N.Q., Hoai, N.X., O'Neill, M., McKay, R.I., Galvan-Lopez, E.: Semantically-based crossover in genetic programming: application to real-valued symbolic regression. Genetic Programming and Evolvable Machines 12(2), 91–119 (2011)
35. Vladislavleva, E.J., Smits, G.F., den Hertog, D.: Order of nonlinearity as a complexity measure for models generated by symbolic regression via Pareto genetic programming. IEEE Transactions on Evolutionary Computation 13(2), 333–349 (2009)
36. Wagner, S., Affenzeller, M.: HeuristicLab: A generic and extensible optimization environment. In: Adaptive and Natural Computing Algorithms, pp. 538–541. Springer (2005)
37. White, D.R., McDermott, J., Castelli, M., Manzoni, L., Goldman, B.W., Kronberger, G., Jaśkowski, W., O'Reilly, U.M., Luke, S.: Better GP benchmarks: community survey results and proposals. Genetic Programming and Evolvable Machines 14(1), 3–29 (2013)
38. Worm, T., Chiu, K.: Prioritized grammar enumeration: symbolic regression by dynamic programming. In: GECCO '13: Proceedings of the fifteenth annual conference on Genetic and evolutionary computation, pp. 1021–1028. ACM, Amsterdam, The Netherlands (2013)
39. Zou, H., Hastie, T.: Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67(2), 301–320 (2005)
Chapter 6
Temporal Memory Sharing in Visual Reinforcement Learning
Stephen Kelly and Wolfgang Banzhaf
6.1 Introduction

Reinforcement learning (RL) is an area of machine learning that models the way living organisms adapt through interaction with their environment. RL can be characterized as learning how to map situations to actions in the pursuit of a predefined objective [34]. A solution, or policy, in RL is represented by an agent that learns through episodic interaction with the problem environment. Each episode begins in an initial state defined by the environment. Over a series of discrete timesteps, the agent observes the environment (via sensory inputs), takes an action based on the observation, and receives feedback in the form of a reward signal. The agent's actions potentially change the state of the environment and impact the reward received. The agent's goal is to select actions that maximize the long-term cumulative reward. Most real-world decision-making and prediction problems can be characterized as this type of environmental interaction.

Animal and human intelligence is partially a consequence of the physical richness of our environment, and thus scaling RL to complex, real-world environments is a critical step toward sophisticated artificial intelligence. In real-world applications of RL, the agent is likely to observe the environment through a high-dimensional, visual sensory interface (e.g. a video camera). However, scaling to high-dimensional input presents a significant challenge for machine learning, and RL in particular. As the complexity of the agent's sensory interface increases, there
is a significant increase in the number of environmental observations required for the agent to gain the breadth of experience necessary to build a strong decision-making policy. This is known as the curse of dimensionality (Section 1.4 of [6]). The temporal nature of RL introduces additional challenges. In particular, complete information about the environment is not always available from a single observation (i.e. the environment is partially observable) and delayed rewards are common, so the agent must make thousands of decisions before receiving enough feedback to assess the quality of its behaviour [22]. Finally, real-world environments are dynamic and non-stationary [9, 26]. Agents are therefore required to adapt to changing environments without 'forgetting' useful modes of behaviour that are intermittently important over time.

Video games provide a well-defined test domain for scalable RL. They cover a diverse range of environments that are designed to be challenging for humans, all through a common high-dimensional visual interface, namely the game screen [5]. Furthermore, video games are subject to partial observability and are explicitly non-stationary. For example, many games require the player to predict the trajectory of a moving object. These calculations cannot be made from observing a single screen capture. To make such predictions, players must identify, store, and reuse important parts of past experience. As the player improves, new levels of play are unlocked which may contain completely new visual and physical dynamics. As such, video games represent a rich combination of challenges for RL, where the objective for artificial agents is to play the game with a degree of sophistication comparable to that of a human player [4, 30]. The potential for real-world applications of artificial agents with these capabilities is enormous.
6.2 Background

Tangled Program Graphs (TPG) are a representation for Genetic Programming (GP) with particular emphasis on emergent modularity through compositional evolution: the evolution of hierarchical organisms that combine multiple agents which were previously adapted independently [38] (Fig. 6.1).
Fig. 6.1 A multi-agent organism developed through compositional evolution
This approach leads to automatic division of labour within the organism and, over time, a collective decision-making policy emerges that is greater than the sum of its parts. The system has three critical attributes:
1. Adaptive Complexity. Solutions begin as single-agent organisms and then develop into multi-agent organisms through interaction with their environment. That is, the complexity of a solution is an adapted property.
2. Input Selectivity. Multi-agent organisms are capable of decomposing the input space such that they can ignore sensory inputs that are not important at the current point in time. This is more efficient than assuming that the complete sensory system is necessary for every decision.
3. Modular Task Decomposition. As multi-agent organisms develop they may subsume a variable number of stand-alone agents into a hierarchical decision-making policy. Importantly, hierarchies emerge incrementally over time, slowly growing and breaking apart through interaction with the environment. This property allows a TPG organism to adapt in non-stationary environments and avoid unlearning behaviours that were important in the past but are not currently relevant.
In the Atari video game environment, TPG matches the quality of solutions from a variety of state-of-the-art deep learning methods. More importantly, TPG is less computationally demanding, requiring far fewer calculations per decision than any of the other methods [20]. However, these TPG policies were purely reactive. They represent a direct mapping from observation to action with no mechanism to integrate past experience (prior state observations) into the decision-making process. This might be a limitation in environments with temporal properties. For example, there are Atari games that explicitly involve predicting the trajectory of a moving object (e.g. Pong) for which TPG performed poorly. A temporal memory mechanism would allow agents to make these calculations by integrating past and present environmental observations.
6.2.1 Temporal Memory

Temporal memory in sequential decision-making problems implies that a behavioural agent has the ability to identify, store, and reuse important aspects of past experience when predicting the best action to take in the present. More generally, temporal memory is essential to any time series prediction problem, and has thus been investigated extensively in GP (see Agapitos et al. [2] for a broad review). In particular, an important distinction is made between static memory, in which the same memory variables are accessed regardless of the state of the environment, and dynamic memory, in which different environmental observations trigger access to different variables. Dynamic memory in GP is associated with indexed memory, which requires the function set to include parameterized operations for reading and writing to specific
memory addresses. Teller [35] showed that GP with memory indexing is Turing complete, i.e. theoretically capable of evolving any algorithm. Brave [8] emphasized the utility of dynamic memory access in GP applied to an agent planning problem, while Haynes [15] discusses the value of GP with dynamic memory in sequential decision-making environments that are themselves explicitly dynamic. Koza [23] proposed Automatically Defined Stores, a modular form of indexable memory for GP trees, and demonstrated its utility for solving the cart-centering problem without velocity sensors, a classic control task that requires temporal memory.

Static memory access is naturally supported by the register machine representation in linear genetic programming [7]. For example, a register machine typically consists of a sequence of instructions that read and write from/to memory, e.g. Register[x] = Register[x] + Register[y]. In this case, the values contained in registers x and y may change depending on environmental input, but reference to the specific registers x and y is determined by the instruction's encoding and is not affected by input to the program. If the register content is cleared prior to each program execution, the program is said to be stateless. A simple form of temporal memory can be implemented by not clearing the register content prior to each execution of the program. In the context of sequential decision-making, the program retains/accumulates state information over multiple timesteps, i.e., the program is stateful. Alternatively, register content from timestep t may be fed back into the program's input at time t + 1, enabling temporal memory through recurrent connections, e.g. [10, 16].

Smith and Heywood [32] introduced the first memory model for TPG in particular. Their method involved a single global memory bank. TPG's underlying linear GP representation was augmented with a probabilistic write operation, enabling long and short-term memory stores. They also included a parameterized read operation for indexed (i.e. dynamic) reading of external memory. Furthermore, the external memory bank is ecologically global. That is, sharing is supported among the entire population such that organisms may integrate their own past experience and experience gained by other (independent) organisms. The utility of this memory model was demonstrated for navigation in a partially observable, visual RL environment.

In this work we propose that multi-agent TPG organisms can be extended to support dynamic temporal memory without the addition of specialized read/write operations. This is possible because TPG organisms naturally decompose the task both spatially and temporally. Specifically, each decision requires traversing one path through the graph of agents, in which only agents along the path are 'executed' (i.e. a subset of the agents in the organism). Each agent will have a unique complement of static environmental input and memory references. Since the decision path at time t is entirely dependent on the state of the environment, both state and memory access are naturally dynamic. The intuition behind this work is that dynamic/temporal problem decomposition w.r.t. input and memory access is particularly important in visual RL tasks because: (1) High-dimensional visual input potentially contains a large amount of information that is intermittently relevant to decision making over time. As such,
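The stateless/stateful distinction can be made concrete with a small sketch. The following Python fragment is an illustrative toy rather than code from any of the systems cited above; its instruction encoding (destination register, source register, input index) is a hypothetical simplification of a linear GP register machine.

# Toy linear register machine illustrating stateless vs. stateful
# execution; the instruction encoding is a hypothetical simplification.
class RegisterMachine:
    def __init__(self, instructions, n_registers=8, stateful=False):
        self.instructions = instructions          # e.g. [(0, 1, 3), ...]
        self.registers = [0.0] * n_registers
        self.stateful = stateful

    def execute(self, observation):
        if not self.stateful:
            # Stateless: register content is cleared before every run,
            # so the output depends on the current observation only.
            self.registers = [0.0] * len(self.registers)
        # Stateful: registers persist across timesteps, so information
        # from earlier observations can influence the current output.
        for dst, src, inp in self.instructions:
            self.registers[dst] = self.registers[src] + observation[inp]
        return self.registers[0]

Note that in both variants the register indices dst and src are fixed by the instruction encoding; only the register contents depend on input, which is exactly the static-access property described above.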
it is advantageous if the model can parse out the most salient observational data for the current timestep and ignore the rest; and (2) RL environments with real-world complexity are likely to exhibit partial observability at multiple time scales. For example, predicting the trajectory of a moving object may require an agent to integrate a memory of the object's location at t − 1 with the current observation at time t, or short-term memory. Conversely, there are many tasks that require integration over much longer periods of time, e.g. maze navigation [12]. These two points clearly illustrate the drawback of autoregressive models [2, 30], in which the issue of temporal memory is side-stepped by stacking a fixed number of the most recent environmental observations into a single 'sliding window' world view for the agent. This approach potentially increases the amount of redundant input information and limits temporal state integration to a window size fixed a priori.
6.2.2 Heterogeneous Policies and Modularity

Heterogeneous policies provide a mechanism through which multiple types of active device (entities which accept input, perform some computation, and produce output) may be combined within a stand-alone decision-making agent, e.g. [17, 25]. In this work, we investigate how compositional evolution can be used to adaptively combine general-purpose and task-specific devices. Specifically, GP applied to visual problems can benefit from the inclusion of specialized image-processing operators, e.g. [3, 24, 39]. Rather than augmenting the instruction set of all programs with additional operators, compositional evolution of heterogeneous policies provides an opportunity to integrate task-specific functionality in a modular fashion, where modularity is likely to improve the evolvability of the system [1, 36, 38].
6.3 Evolving Heterogeneous Tangled Program Graphs

The algorithm investigated in this work is an extension of Tangled Program Graphs [20] with additional support for temporal memory and heterogeneous policies. This section details the extended system with respect to three system components, each playing a distinct role within the emergent hierarchical structure of TPG organisms:
• A Program is the simplest active device, capable of processing input data and producing output, but not representing a stand-alone RL agent.
• A Team of Programs is the smallest independent decision-making organism, or agent, capable of observing the environment and taking actions.
• A Policy Graph adaptively combines multiple teams (agents) into a single hierarchical organism through open-ended compositional evolution. In this context, open-ended refers to the fact that hierarchical transitions are not planned a priori. Instead, the hierarchical complexity of each policy graph is a property that emerges through interaction with the problem environment.
6.3.1 Programs and Shared Temporal Memory

In this work, all programs are linear register machines [7]. Two types of program are supported: Action-value programs and Image processors. Action-value programs have a pointer to one action (e.g. a joystick position from the video game domain) and produce one scalar bidding output, which is interpreted as the program's confidence that its action is appropriate given the current state of the environment. As such, the role of an action-value program is to define environmental context for one action, Algorithm 1. In order to support shared temporal memory, action-value programs have two register banks: one stateless bank that is reset prior to each program execution, and a pointer to one stateful bank that is only reset at the start of each episode (i.e. game start). Stateless register banks are private to each program, while stateful banks are stored in a dedicated memory population and may be shared among multiple programs (this relationship is illustrated in the lower-left of Fig. 6.2). Shared memory allows multiple programs to communicate within a single timestep or integrate information across multiple timesteps. In effect, shared memory implies that each program has a parameterized number of context outputs (see Registers_shared in Table 6.2). Image-processing programs have context outputs only. They perform matrix manipulations on the raw pixel input and store the result in shared memory accessible to all other programs, Algorithm 2. In addition to private and shared register memory, image processors have access to a task-specific type of shared memory
Algorithm 1 Example action-value program. Each program contains one private stateless register bank, Rp, and a pointer to one shareable stateful register bank, Rs. Rp is reset prior to each execution, while Rs is reset (by an external process) at the start of each episode. Programs may include two-argument instructions of the form R[i] ← R[j] ◦ R[k] in which ◦ ∈ {+, −, ×, ÷}; single-argument instructions of the form R[i] ← ◦(R[k]) in which ◦ ∈ {cos, ln, exp}; and a conditional statement of the form IF (R[i] < R[k]) THEN R[i] ← −R[i]. The second source variable, R[k], may reference either memory bank or a state variable (pixel), while the target and first source variables (R[i] and R[j]) may reference either the stateless or stateful memory bank only. Action-value programs always return the value stored in Rp[0] at the end of execution

1: Rp ← 0                      # reset private memory bank Rp
2: Rp[0] ← Rs[0] − Input[3]
3: Rs[1] ← Rp[0] ÷ Rs[7]
4: Rs[2] ← Log(Rs[1])
5: if (Rp[0] < Rs[2]) then
6:     Rp[0] ← −Rp[0]
7: end if
8: return Rp[0]
Fig. 6.2 Illustration of the relationship between teams, programs, and shared memory in heterogeneous TPG
Algorithm 2 Example image-processor program. As with action-value programs, image-processor programs contain one private stateless register bank, Rp, and a pointer to one shareable stateful register bank, Rs. In addition, all image processors have access to a global buffer matrix, S, which is reset (by an external process) at the start of each episode. Image-processor programs accept either the raw pixel screen or the shared buffer as input. Depending on the operation, the result is stored in Rp, Rs, or back into the shared buffer S. Unlike the operations available to action-value programs, some image-processing operations are parameterized by values stored in register memory or sampled directly from input. This opens a wide range of possibilities for image-processing instructions, a few of which are illustrated in this algorithm. Table 6.1 provides a complete list of the image-processing operations used in this work

1: Rp ← 0                                   # reset private memory bank Rp
2: S ← AddC(Screen, Rs[0])                  # Add an amount to each image pixel
3: S ← Div(Screen, S)                       # Divide the pixel values of two images
4: S ← Sqrt(S)                              # Take the square root of each pixel
5: Rp[2] ← MaxW(S, Rs[0], Rs[5], Rs[7])     # Store max of parameterized window
6: Rs[2] ← MeanW(S, Rs[3], Rs[1], Rp[2])    # Store mean of parameterized window
in the form of a single, global image buffer matrix with the same dimensionality as the environment's visual interface. The buffer matrix is stateful, reset only at the start of each episode. This allows image processors to accumulate full-screen manipulations over the entire episode. Note that image-processor programs have no bidding output because they do not contribute directly to action selection. Their role is to preprocess input data for action-value programs, and this contribution is communicated to action-value programs through shared register memory (this relationship is illustrated in the lower-left of Fig. 6.2). Memory sharing implies that a much larger proportion of program code is now effective, since an effective instruction is one that affects the final value of the bidding output (Rp[0]) or any of the shared registers, Rs. As a result, sharing memory incurs a significant computational cost relative to programs without shared
Table 6.1 Operations available to image-processor programs

Operation   Parameters                  Description
Add         Image, Image                Add pixel values of two images
Sub         Image, Image                Subtract pixel values of two images
Div         Image, Image                Divide pixel values of two images
Mul         Image, Image                Multiply pixel values of two images
Max2        Image, Image                Pixel-by-pixel max of two images
Min2        Image, Image                Pixel-by-pixel min of two images
AddC        Image, x                    Add integer x to each pixel
SubC        Image, x                    Subtract integer x from each pixel
DivC        Image, x                    Divide each pixel by integer x
MulC        Image, x                    Multiply each pixel by integer x
Sqrt        Image                       Take the square root of each pixel
Ln          Image                       Take the natural log of each pixel
Mean        Image, x                    Uses a sliding window of size x and replaces the centre pixel of the window with the mean of the window
Max         Image, x                    As Mean, but takes the maximum value
Min         Image, x                    As Mean, but takes the minimum value
Med         Image, x                    As Mean, but takes the median value
MeanA       Image, 3 Int (x, y, size)   Returns the mean value of the pixels contained in a window of size size, centred at x, y in the image
StDevA      Image, 3 Int (x, y, size)   Returns standard deviation
MaxA        Image, 3 Int (x, y, size)   Returns maximum value
MinA        Image, 3 Int (x, y, size)   Returns minimum value

These operations were selected based on their previous application in GP applied to image classification tasks [3]. See Algorithm 2 for an example program using a subset of these operations.
memory, since fewer ineffective instructions, or introns, can be removed prior to program execution. In addition, shared memory implies that the order of program execution within a team now potentially impacts bidding outputs. In this work, the order of program execution within a team remains fixed, but future work could investigate mutation operators that modify execution order. Program variation operators are listed in Table 6.2, providing an overview of how evolutionary search is focused on particular aspects of program structure. In short, program length and content, as well as the degree of memory sharing, are all adapted properties.
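The interaction between memory sharing and intron removal can be sketched with the standard backward pass over a linear program (cf. [7]). The encoding below is a hypothetical simplification which assumes every instruction fully overwrites its target register.

# Sketch of effective-code (intron) detection in a linear program.
# Each instruction is a hypothetical (target, src1, src2) triple over
# register indices and is assumed to fully overwrite its target.
def effective_instructions(program, shared_registers):
    # With shared memory, every shared register counts as an output,
    # so the initial live set grows and fewer instructions are introns.
    live = {0} | set(shared_registers)   # Rp[0] plus all of Rs
    effective = []
    for target, src1, src2 in reversed(program):
        if target in live:
            effective.append((target, src1, src2))
            live.discard(target)         # the value is defined here
            live.update((src1, src2))    # its sources become live
    effective.reverse()
    return effective

Running this with an empty shared_registers set versus a full bank illustrates directly why memory sharing leaves fewer removable introns and thus raises execution cost.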
6.3.2 Cooperative Decision-Making with Teams of Programs

Individual action-value programs have only one action pointer, and can therefore never represent a complete solution independently. A team of programs represents a complete solution by grouping together programs that collectively map
Table 6.2 Parameterization of team and program populations

Team population
Parameter            Value    Parameter         Value
Rsize                1000     Rgap              50% of Root Teams
pmd, pma             0.7      ω                 60
pmm                  0.2      pmn, pms          0.1

Program population
Parameter            Value    Parameter         Value
Registers_private    8        maxProgSize       100
Registers_shared     8        patomic           0.99
pdelete, padd        0.5      pmutate, pswap    1.0

For the team population, pmx denotes a mutation operator in which x ∈ {d, a} are the probabilities of deleting or adding a program, respectively, and x ∈ {m, n, s} are the probabilities of creating a new program, changing the program action pointer, and changing the program shared memory pointer, respectively. ω is the max initial team size. For the program population, px denotes a mutation operator in which x ∈ {delete, add, mutate, swap} are the probabilities for deleting, adding, mutating, or reordering instructions within a program. patomic is the probability of a modified action pointer referencing an atomic action
environmental observations (e.g. the game screen) to atomic actions (e.g. joystick positions). This is achieved through a bidding mechanism. In each timestep, every program in the team executes, and the team then takes the action pointed to by the action-value program with the highest output. This process repeats at each timestep from the initial episode condition to the end of an episode. When the episode ends, either due to a GameOver signal from the environment or because an episode time constraint is reached, the team as a whole is assigned a fitness score from the environment (i.e. the final game score). Since decision-making in TPG is a strictly collective process, programs have no individual concept of fitness. Team variation operators may add, remove, or modify programs in the team, with parameters listed in Table 6.2. In this work, the new algorithmic extensions at the team level are twofold: (1) teams are heterogeneous, containing action-value programs and image-processing programs; and (2) all programs have access to shareable stateful memory. Figure 6.2 illustrates these extensions.
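As a minimal sketch of this bidding mechanism, the following Python fragment selects a team's action for one timestep; the attribute names (action_value_programs, execute, action) are hypothetical, not the authors' implementation.

# Hedged sketch of team action selection by bidding.
def team_act(team, observation):
    best_bid = float("-inf")
    best_action = None
    for program in team.action_value_programs:
        bid = program.execute(observation)   # scalar confidence output
        if bid > best_bid:
            best_bid, best_action = bid, program.action
    return best_action                       # action of the highest bidder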
6.3.3 Compositional Evolution of Tangled Program Graphs

This section details how teams and programs are coevolved, paying particular attention to emergent hierarchical transitions. Parameters are listed in Table 6.2. Evolution begins with a population of Rsize teams, each containing at least one program of each type and a maximum of ω programs in total. Programs are created in pairs with one shared memory bank between them (see left-hand side of Fig. 6.2). Program actions are initially limited to task-specific (atomic) actions, Fig. 6.2. Throughout evolution, program variation operators are allowed to introduce actions
that index other teams within the team population. To do so, when a program's action is modified, it may reference either a different atomic action or any team created in a previous generation. Specifically, the action set from which new program actions are sampled will correspond to the set of atomic actions, A, with probability patomic, and will otherwise correspond to the set of teams present from any previous generation. In effect, action pointer mutations are the primary mechanism by which TPG supports compositional evolution, adaptively recombining multiple (previously independent) teams into variably deep/wide directed graph structures, or policy graphs, right-hand side of Fig. 6.2. The hierarchical interdependency between teams is established entirely through interaction with the task environment. Thus, more complex structures can emerge as soon as they perform better than simpler solutions. The utility of compositional evolution is empirically demonstrated in Sect. 6.4.2.

Decision-making in a policy graph begins at the root team (e.g. t3 in Fig. 6.2), where each program in the team will produce one bid relative to the current state observation, s(t). Graph traversal then follows the program with the largest bid, repeating the bidding process for the same state, s(t), at every team along the path until an atomic action is reached. Thus, in sequential decision-making tasks, the policy graph computes one path from root to atomic action at every time step, where only a subset of programs in the graph (i.e. those in teams along the path) require execution. As hierarchical structures emerge, only root teams (i.e. teams that are not referenced as any program's action) are subject to modification by the variation operators. As such, rather than pre-specifying the desired team population size, only the number of root teams to maintain in the population, Rsize, requires prior specification. Evolution is driven by a generational GA such that the worst-performing root teams (50% of the root population, or Rgap) are deleted in each generation and replaced by offspring of the surviving roots. After team deletion, programs that are not part of any team are also deleted. As such, selection is driven by a symbiotic relationship between programs and teams: teams will survive as long as they define a complementary group of programs, while individual programs will survive as long as they collaborate successfully within a team. The process for generating team offspring uniformly samples and clones a root team, then applies mutation-based variation operators to the cloned team, as listed in Table 6.2. Complete details on TPG are available in [20] and [19].
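The per-timestep graph traversal described above can be sketched as follows. The guard against revisiting teams and the assumption that every team retains at least one program with an atomic action are implementation details assumed here to keep the sketch cycle-safe; attribute names are hypothetical.

# Hedged sketch of one decision in a policy graph: starting at the
# root, follow the winning bid until an atomic action is reached.
def policy_graph_act(root_team, observation):
    team, visited = root_team, set()
    while True:
        visited.add(id(team))
        # Every program in the team bids on the same observation s(t).
        ranked = sorted(team.action_value_programs,
                        key=lambda p: p.execute(observation),
                        reverse=True)
        for program in ranked:
            if program.is_atomic:
                return program.action        # leaf reached: act
            if id(program.team_ref) not in visited:
                team = program.team_ref      # follow the bid into a sub-team
                break

Only the teams along the chosen path execute, which is the source of the dynamic input and memory access discussed in Sect. 6.2.1.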
6.4 Empirical Study

The objective of this study is to evaluate heterogeneous TPG with shared temporal memory for object tracking in visual RL. This problem explicitly requires an agent to develop short-term memory capabilities. For an evaluation of TPG in visual RL with longer-term memory requirements see [32].
6.4.1 Problem Environments

In order to compare with previous results, we consider the Atari video game Breakout, for which the initial version of TPG failed to learn a successful policy [20]. Breakout is a vertical tennis-inspired game in which a single ball is released near the top of the screen and descends diagonally in either left or right direction, Fig. 6.3a. The agent observes this environment through direct screen capture, an 84 × 64 pixel matrix¹ in which each pixel has a colour value between 0 and 128. The player controls the horizontal movement of a paddle at the bottom of the screen. Selecting from 4 atomic actions in each timestep, A ∈ {Serve, Left, Right, NoAction}, the goal is to maneuver the paddle such that it makes contact with the falling ball, causing it to ricochet up towards the brick ceiling and clear bricks one at a time. If the paddle misses the falling ball, the player loses a turn. The player has three turns to clear two layers of brick ceiling. At the end of each episode, the game returns a reward signal which increases relative to the number of bricks eliminated. The primary skill in Breakout is simple: the agent must integrate the location of the ball over multiple timesteps in order to predict its trajectory and move the paddle to the correct horizontal position. However, the task is dynamic and non-trivial because, as the game progresses, the ball's speed increases, its angle varies more widely, and the width of the paddle shrinks. Furthermore, sticky actions [27] are utilized such that agents stochastically skip screen frames with probability p = 0.25, with the previous action being repeated on skipped frames. Sticky actions have a dual purpose in the ALE: (1) artificial agents are limited to roughly the same reaction time as a human player; and (2) stochasticity is present throughout the entire episode of gameplay.

The Atari simulator used in this work, the Arcade Learning Environment (ALE) [5], is computationally demanding. As such, we conduct an initial study in a custom environment that models only the ball-tracking task in Breakout. This "ball catching" task is played on a 64 × 32 grid (i.e. representing roughly the bottom 3/4 of the Breakout game screen) in which each tile, or pixel, can be one of two colours represented by the values 0 (no entity present) and 255 (indicating either the ball or paddle is present at this pixel location), Fig. 6.3b. The ball is one pixel large and is stochastically initialized in one of the 64 top-row positions at the start of each episode. The paddle is 3 pixels wide and is initialized in the centre of the bottom row. The ball will either fall straight down (probability = 0.33) or diagonally, moving one pixel down and one pixel to the left or right (chosen with equal probability at time t = 1) in each timestep. If a diagonally-falling ball hits either wall, its horizontal direction is reversed. The agent's objective is to select one of 3 paddle movements in each timestep, A ∈ {Left, Right, NoAction}, such that the paddle makes contact with the falling ball. The paddle moves twice as fast as the ball, i.e. 2
¹ This screen resolution corresponds to 40% of the raw Atari screen resolution. TPG has previously been shown to operate under the full Atari screen resolution [21]. The focus of this study is temporal memory, and the down-sampling is used here to speed up empirical evaluations.
Fig. 6.3 Screenshots of the two video game environments utilized in this work. (a) Breakout. (b) Ball catching
pixels at a time in either direction. An episode ends when the ball reaches the bottom row, at which point the game returns a reward signal of 1.0 if the ball and paddle overlap, and 0 otherwise. As in Breakout, success in this task requires the agent to predict the trajectory of the falling ball and correlate this trajectory with the current position of the paddle in order to select appropriate actions.
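For concreteness, the following is a minimal sketch of the ball-catching dynamics just described; the coordinate conventions and boundary handling are assumptions of the sketch, and the rendering of the 0/255 pixel grid that the agent actually observes is omitted.

# Hedged sketch of the ball-catching task: 64x32 grid, ball direction
# fixed at episode start (straight, left, or right with prob. 1/3),
# paddle of width 3 moving 2 pixels per step, reward on the last row.
import random

class BallCatching:
    WIDTH, HEIGHT = 64, 32
    ACTIONS = ("Left", "Right", "NoAction")

    def __init__(self):
        self.ball_x = random.randrange(self.WIDTH)  # random top-row start
        self.ball_y = 0
        self.ball_dx = random.choice((-1, 0, 1))    # chosen once, at t = 1
        self.paddle_x = self.WIDTH // 2             # paddle centre pixel

    def step(self, action):
        # The paddle moves twice as fast as the ball: 2 pixels per step.
        if action == "Left":
            self.paddle_x = max(1, self.paddle_x - 2)
        elif action == "Right":
            self.paddle_x = min(self.WIDTH - 2, self.paddle_x + 2)
        # The ball falls one row; diagonal motion bounces off the walls.
        self.ball_y += 1
        self.ball_x += self.ball_dx
        if self.ball_x <= 0 or self.ball_x >= self.WIDTH - 1:
            self.ball_x = max(0, min(self.WIDTH - 1, self.ball_x))
            self.ball_dx = -self.ball_dx
        done = self.ball_y >= self.HEIGHT - 1
        # Success: the 3-pixel paddle overlaps the ball on the last row.
        reward = 1.0 if done and abs(self.ball_x - self.paddle_x) <= 1 else 0.0
        return reward, done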
6.4.2 Ball Catching: Training Performance

Four empirical comparisons are considered in the ball-catching environment, with 10 independent runs performed for each experimental case. As discussed in Sect. 6.1, visual RL policies require a breadth of experience interacting with the problem environment before their fitness can be estimated with a sufficient degree of generality. As such, in each generation we evaluate every TPG policy in 40 episodes and let the mean episode score represent its fitness. Curves in Fig. 6.4 represent the fitness of the champion individual over 2000 generations, which is equivalent to roughly 12 h of wall-clock time. Each line is the median over 10 independent runs. Figure 6.4a is the training curve for heterogeneous TPG with the capacity for shared stateful memory as described in Sect. 6.3. For reference, a mean game score of 0.65 indicates that the champion policy successfully maneuvered the paddle to meet the ball 65% of the time over 40 episodes. Figure 6.4b is the training curve for TPG with shared memory but without image-processor programs. While the results are not significantly different from those of case (a), heterogeneous TPG did not hinder progress in any way. Furthermore, the single best policy in either (a) or (b) was heterogeneous and did make use of image-processor programs, indicating that the method has potential. Figure 6.4c is the training curve for heterogeneous TPG without the capacity for shared memory. In this case, each program is initialized with a pointer to one private stateful register bank. Mutation operators are not permitted to modify memory
[Figure 6.4: line plot of Mean Game Score (y-axis, 0.3–0.6) versus Generation (x-axis, 0–2000); legend: (a) Shared Mem, (b) Shared Mem (No Image Proc.), (c) Private Mem, (d) Shared Mem (No Graphs), (e) No Mem]
Fig. 6.4 Fraction of successful outcomes (Mean Game Score) in 40 episodes for the champion individual at each generation. Each line is the median over 10 independent runs for experimental cases (a)–(e). See Sect. 6.4.2 text for comparative details
pointers (pms = 0). The case with memory sharing exhibits significantly better median performance after very few generations (≈100). Figure 6.4d shows the training curve for heterogeneous TPG without the capacity to build policy graphs. In this case, team hierarchies can never emerge (patomic = 1.0). Instead, adaptive complexity is supported by allowing root teams to acquire an unbounded number of programs (ω = ∞). The weak result clearly illustrates the advantage of emergent hierarchical transitions for this task. As discussed in Sect. 6.2.1, one possible explanation for this is the ability of TPG policy graphs to decompose the task spatially by defining an appropriate subset of inputs (pixels) to consider in each timestep, and to decompose the task temporally by identifying, storing, and reusing subsets of past experience through dynamic memory access. Without the ability to build policy graphs, input and memory indexing would be static, i.e. the same set of inputs and memory registers would be accessed in every timestep regardless of environmental state. Figure 6.4e shows the training curve for heterogeneous TPG without the capacity for stateful memory. In this case, both private and shared memory registers are cleared prior to each program execution. This is equivalent to equipping programs with 16 stateless registers. The case with shared temporal memory achieves significantly better policies after generation ≈500, and continues to gradually discover increasingly high-scoring policies up until the runs terminate at generation 2000.
This comparison clearly illustrates the advantage that temporal memory provides for TPG organisms in this domain. Without the ability to integrate observations over multiple timesteps, even the champion policies are only slightly better than random play.
6.4.3 Ball Catching: Solution Analysis

Section 6.4.2 established the effectiveness of heterogeneous TPG with shared temporal memory in a visual object-tracking task (i.e. ball catching). The following analysis confirms that champion policies rely on shared temporal memory to succeed at this task under test conditions. Box plots in Fig. 6.5 summarize the mean game score (over 30 episodes) for the single champion policy from 20 runs.² The distribution labeled 'Shared Mem' indicates that the median success rate for these 20 champions is ≈76%. The box labeled 'Mem (No Sharing)' summarizes scores for the same 20 policies when their ability to share memory is suppressed. The decrease in performance indicates that policies are indeed exploiting shared temporal memory in solving this task. The box labeled 'No Mem' provides test scores for these policies when all memory registers are stateless (i.e. reset prior to each program execution), again confirming that temporal memory plays a crucial role in the behaviour of these policies. Without temporal memory, the champion policies are often no better than a policy that simply selects actions at random, or 'Rand' in Fig. 6.5.
6.4.4 Atari Breakout

In this section, the most promising TPG configuration identified under the ball-catching task, heterogeneous TPG with shared temporal memory, is evaluated in the Atari game Breakout (Sect. 6.4.1). The computational cost of game simulation in the ALE precludes evaluating each individual policy in 40 episodes per generation during training. In Breakout, each policy is evaluated in only 5 episodes per generation. This limits the generality of fitness estimation but is sufficient for a proof-of-concept test of our methodology in a challenging and popular visual RL benchmark. Figure 6.6 provides the training curves for 10 independent Breakout runs. In order to score any points, policies must learn to serve the ball (i.e. select the Serve action) whenever the ball does not appear on screen. This skill appears relatively quickly in most of the runs in Fig. 6.6. Next, static paddle locations (e.g. moving the paddle to the far right after serving the ball and leaving it there) can
² An additional 10 runs were conducted for this analysis relative to the 10 runs summarized in Fig. 6.4a.
[Figure 6.5: box plots of Mean Game Score (y-axis, 0.2–0.8) for the conditions Shared Mem, Mem (No Sharing), No Mem, and Rand]
Fig. 6.5 Fraction of successful outcomes (Mean Game Score) in 30 test episodes for the champion policies from the case of heterogeneous TPG with shared temporal memory, Fig. 6.4a. Box plots summarize the results from 20 independent runs. See Sect. 6.4.3 text for details on each distribution
occasionally lead to ≈11–15 points. In order to score ≈20–50 points, policies must surpass this somewhat degenerate local optimum by discovering a truly responsive strategy in which the paddle and ball make contact several times at multiple horizontal positions. Finally, policies that learn to consistently connect the ball and paddle will create a hole in the brick wall. When this is achieved, the ball can pass through all layers of brick and become trapped in the upper region of the world, where it will bounce around clearing bricks from the top down and accumulating scores above 100. Table 6.3 lists Breakout test scores for several recent visual RL algorithms. Previous methods either employed stateless models that failed to achieve a high score (TPG, CGP, HyperNeat) or side-stepped the requirement for temporal memory by using autoregressive, sliding-window state representations. Heterogeneous TPG with shared memory (HTPG-M) is the highest-scoring algorithm that operates directly from screen capture without an autoregressive state, and roughly matches the test scores from 3 of the 4 deep learning approaches that do rely on autoregressive state.
Fig. 6.6 Fitness (Mean Game Score) for the single champion policy in each of 10 independent Breakout runs. Scores are averaged over 5 episodes. For clarity, line plots show the fitness of the best policy discovered up to each generation

Table 6.3 Comparison of Breakout test scores (mean game score over 30 episodes) for state-of-the-art methods that operate directly from screen capture

Human    Double    Dueling    Prioritized    A3C FF    A3C LSTM    TPG     HTPG-M    HyperNeat    CGP
31.8     368.9     411.6      371.6          551.6     766.8       12.8    374.2     2.8          13.2

Heterogeneous TPG with shared memory (HTPG-M) is the highest-scoring algorithm that operates without an autoregressive, sliding-window state representation. 'Human' is the score achieved by a "professional" video game tester reported in [30]. Scores for comparator algorithms are from the literature: Double [13], Dueling [37], Prioritized [31], A3C FF [28], A3C LSTM [28], TPG [20], HyperNeat [14], CGP [39]
6.5 Conclusions and Future Work

We have proposed a framework for shared temporal memory in TPG which significantly improves the performance of agents in a partially observable, visual RL problem with short-term memory requirements. This study confirms the significance of private temporal memory for individual programs as well as the added benefit of memory sharing among multiple programs in a single organism, or policy graph. No
specialized program instructions are required to support dynamic memory access. The nature of temporal memory access and the degree of memory sharing among programs are both emergent properties of an open-ended evolutionary process. Future work will investigate this framework in environments with long-term partial observability. Multi-task RL is a specific example of this [20], where the agent must build short-term memory mechanisms and integrate experiences from multiple ongoing tasks. Essentially, the goal will be to construct shared temporal memory at multiple time scales, e.g. [18]. This will most likely require a mechanism to trigger the erosion of non-salient memories based on environmental stimulus, or active forgetting [11]. Supporting multiple program representations within a single heterogeneous organism is proposed here as an efficient way to incorporate domain knowledge in TPG. In this study, the inclusion of domain-specific image-processing operators was not crucial to building strong policies, but it did not hinder performance in any way. Given the success of shared memory as a means of communication within TPG organisms, future work will continue to investigate how heterogeneous policies might leverage specialized capabilities from a wider variety of bio-inspired virtual machines. Image-processing devices that model visual attention are of particular interest, e.g. [29, 33].

Acknowledgements Stephen Kelly gratefully acknowledges support from the NSERC Postdoctoral Fellowship program. Computational resources for this research were provided by Michigan State University through the Institute for Cyber-Enabled Research (icer.msu.edu) and Compute Canada (computecanada.ca).
References

1. Simon, H.A.: The architecture of complexity. Proceedings of the American Philosophical Society 106, 467–482 (1962)
2. Agapitos, A., Brabazon, A., O'Neill, M.: Genetic programming with memory for financial trading. In: G. Squillero, P. Burelli (eds.) Applications of Evolutionary Computation, pp. 19–34. Springer International Publishing (2016)
3. Atkins, D., Neshatian, K., Zhang, M.: A domain independent genetic programming approach to automatic feature extraction for image classification. In: 2011 IEEE Congress of Evolutionary Computation (CEC), pp. 238–245 (2011)
4. Beattie, C., Leibo, J.Z., Teplyashin, D., Ward, T., Wainwright, M., Küttler, H., Lefrancq, A., Green, S., Valdés, V., Sadik, A., Schrittwieser, J., Anderson, K., York, S., Cant, M., Cain, A., Bolton, A., Gaffney, S., King, H., Hassabis, D., Legg, S., Petersen, S.: Deepmind lab. arXiv preprint arXiv:1612.03801 (2016)
5. Bellemare, M.G., Naddaf, Y., Veness, J., Bowling, M.: The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research 47, 253–279 (2013)
6. Bishop, C.M.: Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag (2006)
7. Brameier, M., Banzhaf, W.: Linear Genetic Programming, 1st edn. Springer (2007)
8. Brave, S.: The evolution of memory and mental models using genetic programming. In: Proceedings of the 1st Annual Conference on Genetic Programming, pp. 261–266. MIT Press (1996)
9. Choi, S.P.M., Yeung, D.Y., Zhang, N.L.: An environment model for nonstationary reinforcement learning. In: S.A. Solla, T.K. Leen, K. Müller (eds.) Advances in Neural Information Processing Systems 12, pp. 987–993. MIT Press (2000)
10. Conrads, M., Nordin, P., Banzhaf, W.: Speech sound discrimination with genetic programming. In: W. Banzhaf, R. Poli, M. Schoenauer, T.C. Fogarty (eds.) Genetic Programming, pp. 113–129. Springer Berlin Heidelberg (1998)
11. Davis, R.L., Zhong, Y.: The Biology of Forgetting – A Perspective. Neuron 95(3), 490–503 (2017)
12. Greve, R.B., Jacobsen, E.J., Risi, S.: Evolving neural turing machines for reward-based learning. In: Proceedings of the Genetic and Evolutionary Computation Conference 2016, GECCO '16, pp. 117–124. ACM (2016)
13. Hasselt, H.v., Guez, A., Silver, D.: Deep reinforcement learning with double q-learning. In: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, pp. 2094–2100. AAAI Press (2016)
14. Hausknecht, M., Lehman, J., Miikkulainen, R., Stone, P.: A neuroevolution approach to general Atari game playing. IEEE Transactions on Computational Intelligence and AI in Games 6(4), 355–366 (2014)
15. Haynes, T.D., Wainwright, R.L.: A simulation of adaptive agents in a hostile environment. In: Proceedings of the 1995 ACM Symposium on Applied Computing, SAC '95, pp. 318–323. ACM (1995)
16. Hintze, A., Edlund, J.A., Olson, R.S., Knoester, D.B., Schossau, J., Albantakis, L., Tehrani-Saleh, A., Kvam, P.D., Sheneman, L., Goldsby, H., Bohm, C., Adami, C.: Markov brains: A technical introduction. arXiv preprint 1709.05601 (2017)
17. Hintze, A., Schossau, J., Bohm, C.: The evolutionary buffet method. In: W. Banzhaf, L. Spector, L. Sheneman (eds.) Genetic Programming Theory and Practice XVI, Genetic and Evolutionary Computation Series, pp. 17–36. Springer (2018)
18. Jaderberg, M., Czarnecki, W.M., Dunning, I., Marris, L., Lever, G., Castañeda, A.G., Beattie, C., Rabinowitz, N.C., Morcos, A.S., Ruderman, A., Sonnerat, N., Green, T., Deason, L., Leibo, J.Z., Silver, D., Hassabis, D., Kavukcuoglu, K., Graepel, T.: Human-level performance in 3d multiplayer games with population-based reinforcement learning. Science 364(6443), 859–865 (2019)
19. Kelly, S.: Scaling genetic programming to challenging reinforcement tasks through emergent modularity. Ph.D. thesis, Faculty of Computer Science, Dalhousie University (2018)
20. Kelly, S., Heywood, M.I.: Emergent solutions to high-dimensional multitask reinforcement learning. Evolutionary Computation 26(3), 347–380 (2018)
21. Kelly, S., Smith, R.J., Heywood, M.I.: Emergent Policy Discovery for Visual Reinforcement Learning Through Tangled Program Graphs: A Tutorial, pp. 37–57. Springer International Publishing (2019)
22. Kober, J., Peters, J.: Reinforcement learning in robotics: A survey. In: M. Wiering, M. van Otterlo (eds.) Reinforcement Learning, pp. 579–610. Springer (2012)
23. Koza, J.R., Andre, D., Bennett, F.H., Keane, M.A.: Genetic Programming III: Darwinian Invention & Problem Solving, 1st edn. Morgan Kaufmann Publishers Inc. (1999)
24. Krawiec, K., Bhanu, B.: Visual learning by coevolutionary feature synthesis. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 35(3), 409–425 (2005)
25. Lalejini, A., Ofria, C.: What Else Is in an Evolved Name? Exploring Evolvable Specificity with SignalGP. In: W. Banzhaf, L. Spector, L. Sheneman (eds.) Genetic Programming Theory and Practice XVI, pp. 103–121. Springer International Publishing (2019)
26. Lughofer, E., Sayed-Mouchaweh, M.: Adaptive and on-line learning in non-stationary environments. Evolving Systems 6(2), 75–77 (2015)
6 Temporal Memory Sharing in Visual Reinforcement Learning
119
27. Machado, M.C., Bellemare, M.G., Talvitie, E., Veness, J., Hausknecht, M., Bowling, M.: Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. J. Artif. Int. Res. 61(1), 523–562 (2018) 28. Mnih, V., Badia, A.P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D., Kavukcuoglu, K.: Asynchronous methods for deep reinforcement learning. In: M.F. Balcan, K.Q. Weinberger (eds.) Proceedings of The 33rd International Conference on Machine Learning, Proceedings of Machine Learning Research, vol. 48, pp. 1928–1937. PMLR (2016) 29. Mnih, V., Heess, N., Graves, A., Kavukcuoglu, K.: Recurrent models of visual attention. In: Proceedings of the 27th International Conference on Neural Information Processing Systems Volume 2, NIPS’14, pp. 2204–2212. MIT Press (2014) 30. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., Hassabis, D.: Human-level control through deep reinforcement learning. Nature 518(7540), 529–533 (2015) 31. Schaul, T., Quan, J., Antonoglou, I., Silver, D.: Prioritized experience replay. In: International Conference on Learning Representations (2016) 32. Smith, R.J., Heywood, M.I.: A model of external memory for navigation in partially observable visual reinforcement learning tasks. In: L. Sekanina, T. Hu, N. Lourenço, H. Richter, P. GarcíaSánchez (eds.) Genetic Programming, pp. 162–177. Springer International Publishing (2019) 33. Stanley, K.O., Miikkulainen, R.: Evolving a Roving Eye for Go. In: T. Kanade, J. Kittler, J.M. Kleinberg, F. Mattern, J.C. Mitchell, M. Naor, O. Nierstrasz, C. Pandu Rangan, B. Steffen, M. Sudan, D. Terzopoulos, D. Tygar, M.Y. Vardi, G. Weikum, K. Deb (eds.) Genetic and Evolutionary Computation — GECCO 2004, vol. 3103, pp. 1226–1238. Springer Berlin Heidelberg, Berlin, Heidelberg (2004) 34. Sutton, R.R., Barto, A.G.: Reinforcement Learning: An introduction. MIT Press (1998) 35. Teller, A.: Turing completeness in the language of genetic programming with indexed memory. In: Proceedings of the First IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence, vol. 1, pp. 136–141 (1994) 36. Wagner, G.P., Altenberg, L.: Perspective: Complex adaptations and the evolution of evolvability. Evolution 50(3), 967–976 (1996) 37. Wang, Z., Schaul, T., Hessel, M., Van Hasselt, H., Lanctot, M., De Freitas, N.: Dueling network architectures for deep reinforcement learning. In: Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML’16, pp. 1995–2003. JMLR.org (2016) 38. Watson, R.A., Pollack, J.B.: Modular interdependency in complex dynamical systems. Artificial Life 11(4), 445–457 (2005) 39. Wilson, D.G., Cussat-Blanc, S., Luga, H., Miller, J.F.: Evolving simple programs for playing atari games. In: Proceedings of the Genetic and Evolutionary Computation Conference, GECCO ’18, pp. 229–236. ACM (2018)
Chapter 7
The Evolution of Representations in Genetic Programming Trees

Douglas Kirkpatrick and Arend Hintze
7.1 Introduction

One of the most promising approaches in the quest for general purpose artificial intelligence (AI) is neuroevolution [9, 35, 39]. The idea is to use a genetic algorithm to optimize the AI that controls an embodied agent to solve one or many cognitive tasks. Many different AI systems have been proposed for this task, such as Artificial Neural Networks (ANNs) and their recurrent versions (RNNs) [31], Genetic Programs (GPs) [16], Cartesian Genetic Programming (CGP) [22], Neuro Evolution of Augmenting Topologies (NEAT) [36], and Markov Brains (MBs) [11], among many others. One of the most pressing problems is that AIs have difficulty generating internal models of the environment they act in. Humans seem to be particularly good at creating these models for themselves, but we struggle greatly when it comes to imbuing artificial systems with such models, also called representations.1
1 Observe that the term representations in computer science sometimes also refers to the structure of data, or to how an algorithm, for example, is encoded. We mean neither, but instead use the term representation to refer to the information a cognitive system has about its environment, as defined in Marstaller et al. [19]. The term representation, as we use it, is adapted from the fields of psychology and philosophy.
It has even been argued that one should not try to do so and should instead create “intelligence without representation” [6]. However, we found earlier that neuroevolution is very capable of solving this problem of creating structures to store representations. To prove that a system indeed has representations, we previously introduced an information-theoretic measure R that quantifies exactly how much information the internal states of a system store about its environment, independent of the information coming directly from its sensors [19]. Over the course of evolution, Markov Brains stored more and more of the information about the environment necessary to solve complex tasks. They did so by associating specific hidden nodes with different environmental concepts. These sparse and distributed representations are believed to be similar to how humans, for example, store information: not all neurons fire, but certain areas perform computations based on specific firing patterns of other neurons. We also found that RNNs, when evolved to perform a task that requires memory, form less sparse representations and, in contrast to MBs, smear the information across all their hidden states. However, it remains to be shown whether genetic programming (GP) or systems similar to GP (such as CGPs) also evolve to solve tasks that require representations, to what extent they evolve the ability to form representations, and to what degree they smear their representations.

While it is academically interesting to find out how much a system knows about its environment, quantifying representations also has an application for genetic algorithms. Similar to multiple objective optimization [40], which improves at least two different qualities of a system at the same time, one can use not only the performance of an agent but also the degree to which the system has developed representations to define the mean number of offspring in a genetic algorithm (GA), thereby augmenting the GA [32]. While it has been shown how this helps to optimize Markov Brains and RNNs during evolution, again we do not know if the same is true for genetic programming systems. Lastly, representations can be measured not only for the whole system but also for individual sub-components. By testing all subcomponents versus all concepts in the environment, we can pinpoint where these representations are exactly stored in the system. This results in a matrix defining the amount of information each element has about each concept. We found earlier [12] that substrates differ in how distributed or condensed these representations are. Markov Brains create sparsely distributed representations, while RNNs smear the knowledge about the environment across all nodes. In addition, we found for Markov Brains, RNNs, and LSTMs [14] that smearedness and robustness correlate negatively.

Perhaps the difference in representation structure between Markov Brains and RNNs is not surprising when one considers the topological features of these systems. RNNs receive their inputs and sequentially propagate this information through one layer after the other by connecting each node from one layer to all nodes of the next. While this obviously ensures that the information can reach all components, it presumably also prevents representations from becoming
local and sparse. Markov Brains, on the other hand, start with sparse connections and only retain additional ones if there is an evolutionary benefit at the time of addition. This results in sparsely interconnected graphs, which might be the reason why these kinds of networks evolve to have sparse and condensed representations. Genetic programming trees present a great tool to evolve and fit almost arbitrary mathematical functions to data. They can take a lot of different inputs, perform computations on them, and, due to their tree structure, condense the result of all those computations into a single result at the root of the tree. It stands to reason that this topology provides an advantage for the question of condensing representations as well. These trees can be as wide as the input layers of an RNN, but do not have the property of further distributing the information at every step (see Fig. 7.1 for a comparison of topologies and representational smearedness across different kinds of computational substrates). Genetic programming trees, however, have the disadvantage that they do not have an intuitive way to implement recurrence. The problem of forming memory has been addressed before by adding indexed memory and additional computational nodes into the tree that can read from and write to said memory [37]. Similarly, the nodes that perform the read and write operations, or push and pop in the case of a memory stack, can themselves be evolved by genetic
Fig. 7.1 Three different cognitive architectures (RNN, Markov Brain, and GP tree) and how they distribute or condense representations. RNNs have a joint input and recurrence layer, feed into arbitrarily many hidden layers of arbitrary size, and compute outputs which are also recurred back to the input layer. These networks tend to smear representations about different contexts over all recurrent nodes; the red, blue, and green color bars illustrate that. Markov Brains use inputs and hidden states as inputs to the logic gates or to other kinds of computational units. The result of the computations creates recurrent states and outputs. These networks tend to form sparse, condensed representations, illustrated by narrower red, green, and blue bands. GP trees use inputs and potentially recurring information and combine them over several computational nodes into a result. For these structures, it is not clear to which degree they smear or condense representations. It is also not obvious how to recur them, illustrated by the “?”
programming trees. These trees implement commands that can then be included in linear forms of genetic programming [18]. Finally, the genetically evolvable Push programming language [33] provides a more complex solution for memory stored in stacks, specifically for linear genetic programs. While these methods all solve the problem of adding memory to the system, they essentially deviate from the idea of using the property of the tree to condense information. Therefore, we test two alternative ways to create recurrence in genetic programming trees, and we test whether, as a consequence, the tree topology provides any advantage with respect to the ability to evolve sparse, condensed rather than smeared representations.
7.2 Material and Methods

7.2.1 Representations and the Neuro-Correlate R

The neuro-correlate R seeks to quantify the amount of information an agent has about its environment that is not currently provided by its sensors. The information-theoretic Venn diagram (see Fig. 7.2) illustrates the relation between environment or world states (W), memory or brain states (B), and sensor states (S) and defines R to be quantified as:

R = H(W : B|S) = H(W : B) − I(W : B : S) .   (7.1)
In order to actually measure R, one needs to record the states of all three random variables. In the case of Markov Brains, this means the brain and sensor states are just the states of the hidden and input nodes. The world states, however, are not that obvious. In fact, they need to be defined and well-chosen by the experimenter and depend highly on the environment.

Fig. 7.2 Venn diagram of entropies and information for the three random variables W, S, and B, describing the world, sensor, and agent internal (brain) states. The representation R = H(W : B|S) is shaded

Here we use a block catching
task (active categorical perception, see Sect. 7.2.3) and a number comparison task (number discrimination, see Sect. 7.2.4), and thus use the relevant concepts of these environments to define the world states. For the block catching task, we chose the size of the block, the direction of the block falling, and whether the block is to the left or right of the agent to be the world states. This is different from Marstaller et al. [19], where they also measured whether the block is currently above or next to the agent. It was observed that agents typically do not evolve representations about hitting or missing the block, and thus this state was omitted. For the number comparison task, we chose the actual numbers to be the world states. As such, we ask to what degree the agent evolves representations about each number explicitly. In addition, we also defined a state to represent whether the first or second number was larger, to see if the agent is able to represent the difference between the two numbers.

With regard to the actual computation of R, a reduction was found that decreases the number of steps needed to compute R. Where the original Eq. (7.1) expanded in practice to

R = H(W) + H(B) + H(S) − H(W, B, S) − I(S : W) − I(S : B) ,   (7.2)
we found a reduction to only four entropy calculations:

R = H(S, W) + H(S, B) − H(S) − H(W, B, S) .   (7.3)

The proof for this reduction is as follows:

Proof We start with the original formula for R from [19]:

R = HCorr − I(S : W) − I(S : B),

where HCorr is defined as

HCorr = H(W) + H(B) + H(S) − H(W, B, S).

This expands to

R = H(W) + H(B) + H(S) − H(W, B, S) − I(S : W) − I(S : B).

We then use the formula for shared (mutual) information from [19],

I(X : Y) = H(X) + H(Y) − H(X, Y),

to expand

I(S : W) = H(S) + H(W) − H(S, W)
and

I(S : B) = H(S) + H(B) − H(S, B).

Substituting these leads to

R = H(W) + H(B) + H(S) − H(W, B, S) − (H(S) + H(W) − H(S, W)) − (H(S) + H(B) − H(S, B)).

By distributing the negative signs we get

R = H(W) + H(B) + H(S) − H(W, B, S) − H(S) − H(W) + H(S, W) − H(S) − H(B) + H(S, B).

The positive and negative entropies cancel, leaving

R = −H(S) − H(W, B, S) + H(S, W) + H(S, B).

A slight reordering gives the final equation,

R = H(S, B) + H(S, W) − H(S) − H(W, B, S).

As the reduced Eq. (7.3) requires fewer mathematical operations than the original Eq. (7.2), there is less chance for computational error and round-off in the reduced equation, which makes it more accurate in practice than the original. In addition, the reduced equation is more computationally efficient to calculate.
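Since all states involved are discrete, Eq. (7.3) can be computed directly from joint frequency counts. The following is a minimal sketch, assuming the world, sensor, and brain states have been recorded as equal-length sequences of hashable values; the function names are ours, not from the original implementation.

import numpy as np
from collections import Counter

def entropy(samples):
    # Shannon entropy (in bits) of a sequence of discrete states.
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * np.log2(c / n) for c in counts.values())

def neuro_correlate_R(W, S, B):
    # Reduced form of Eq. (7.3): R = H(S,W) + H(S,B) - H(S) - H(W,B,S).
    # W, S, B: one recorded state per time step, aligned across the three lists.
    return (entropy(list(zip(S, W))) + entropy(list(zip(S, B)))
            - entropy(list(S)) - entropy(list(zip(W, B, S))))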
7.2.2 Smearedness of Representations

While the neuro-correlate R calculates the amount of mental representations as a whole, it does little to examine the structure or layout of said representations. Recent work [12] proposes a new measure, referred to as smearedness, to work around this deficiency. Smearedness quantifies the amount of representation that each memory state in the brain has about each individual concept in the environment, and takes a pairwise minimum to see how much the representations are spread across the different nodes. Smearedness is defined in Eq. (7.4), where Mji is the measure of the representation that node i has about concept j, summed over all nodes i and all pairs of concepts j and k:

SN = Σi Σj>k min(Mji, Mki)   (7.4)
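Given the matrix M, Eq. (7.4) is straightforward to compute. A small sketch, under the assumption that M is indexed as M[j][i] (concept j, node i):

import itertools

def smearedness(M):
    # Eq. (7.4): sum, over all nodes, of the pairwise minima of the
    # per-concept representation values, over all concept pairs j > k.
    n_concepts, n_nodes = len(M), len(M[0])
    total = 0.0
    for j, k in itertools.combinations(range(n_concepts), 2):
        for i in range(n_nodes):
            total += min(M[j][i], M[k][i])
    return total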
7.2.3 Active Categorical Perception Task

In order to augment the performance of a genetic algorithm using the neuro-correlate R, the environmental states need to be described well. Since the neuro-correlate R has already been shown to correlate positively with performance on the active categorical perception task [19], and has been shown to boost performance in learning the task when the GA was augmented with R [32], the same task was used here as well to allow for a direct comparison. Augmentation is done by multiplying the attained score for the task by the amount of representations the agent has about the environment (see Sect. 7.2.11, and the sketch at the end of this section). In this task [2, 3], an agent who can move left and right has to either catch or avoid a block that is falling towards it. The agent has 4 sensors, split into two groups of 2 sensors, separated by a 2 or 3 unit space between them. Blocks of various sizes are dropped one at a time, with the agent having 34 time steps to catch or avoid the block. The agents in the original experiments were evolved to catch blocks of width 2 and avoid blocks of width 4. We extended the task to try different combinations of block sizes. The blocks move 1 unit to the right or left on any given time step. On each time step, the agent receives the sensory input and can move 1 unit to the right or to the left. The agent is expected to determine the size of the block and its direction of movement, so that it can decide whether to avoid or catch the specific block being dropped. The agents were tested over both block types and all possible permutations of starting positions relative to the sensors and block movement patterns. Due to the configuration of the sensors and the relative location of the agent to the block, agents can only very rarely decide if a block is large or small without movement, and thus need to navigate to be directly under the block; hence the “active” in active categorical perception. This can only be done if the block was observed for at least two updates in a row, since otherwise the direction of the fall could not be determined. All this information needs to be integrated into a decision about catching or avoiding the block. It is this integration of sensor information over multiple time steps that requires agents to first evolve memory, and with it representations, in order to make proper decisions.
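The augmentation itself is simple. A minimal sketch, taking the description above literally (the exact weighting used in Sect. 7.2.11 may differ from this plain product):

def augmented_fitness(task_score, R):
    # Fitness used for selection: task performance scaled by the amount of
    # representations R the agent has evolved about the environment.
    return task_score * R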
7.2.4 Number Discrimination Task

In addition to the active categorical perception task, recent work [15] has identified a different task in which we can also investigate the role of representations and smearedness in the neuroevolutionary process. This task, referred to as the Number Discrimination Task, has agents receive two values in sequence and requires the agent to identify the larger one. The task is inspired by previous biological [21, 24] and psychological [20] studies. When adapted for use in silico, the agents are presented with every possible pair of numbers between 0 and 5, with the values
being shown as bit strings where the number of ones is the value the agent must comprehend (e.g., 00000 is 0, 01000 is 1, 11000 is 2, etc.). Agents are additionally presented with all possible permutations of each pair (e.g., 10001 is equivalent to 01100, and both are used to represent 2). This task requires agents to store the first number and then compare it to the second. The agents are first shown one value and then given 3 updates to process the information, followed by the second value and an additional 3 updates to process the information, before being required to determine which number is larger. The world states used in calculating R are defined by the individual numbers used and by whether the first or second value presented is larger.
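The stimulus set can be generated exhaustively. A small sketch under the assumptions above (5-bit patterns, values 0–5, all orderings of each pair):

import itertools

def patterns(value, length=5):
    # All bit strings of the given length containing `value` ones,
    # e.g. value 2 -> '11000', '10100', '10010', ...
    out = set()
    for ones in itertools.combinations(range(length), value):
        bits = ['0'] * length
        for p in ones:
            bits[p] = '1'
        out.add(''.join(bits))
    return sorted(out)

# Every ordered pair of distinct values between 0 and 5 (whether equal
# values were also shown is not specified above; we exclude them here):
value_pairs = [(a, b) for a in range(6) for b in range(6) if a != b]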
7.2.5 The Perception-Action Loop for Stateful Machines

Genetic programming has been used before to control robots in virtual [10, 29, 30] as well as in actual environments [26–28]. In order to create a meaningful controller, the evolved machine or program needs to map the sensor inputs it receives to motor outputs which control the actions of the robot (see Fig. 7.3).
Fig. 7.3 Illustration of the perception-action loop. The world is in a particular state and can be perceived by the agent. This perception defines specific input states. These states now allow the brain (computational substrate) to perform a computation, which leads to new output states. These states, in turn, cause the agent to act in the environment and thus can change the world state. In case the agent needs to remember information about the past, it has to have the ability to store these memories somewhere. Thus, any machine that needs to be stateful has to have internal states (here hidden states) that can be set depending on the current input and the current hidden states. This scheme of a brain resembles the structure of a recurrent neural network, but can be generally applied to all sorts of computational machines that have to be stateful and act in an environment

A purely reactive machine would have a direct mapping of inputs to outputs, while a stateful machine would use hidden states (memory) to store information from the sensors for later decisions. Previous approaches used genetic programming methods to evolve computer-like code running on register machines [25]. Other approaches encoded the program controlling the robot as a tree of binary operations executed on only the sensor inputs [17]. While it is these kinds of tree-like encodings that we seek to investigate here, this specific approach did not use hidden states and
thus created purely reactive machines. Adding memory has been done differently before [18, 33, 37]. However, we think that the tree structure is of particular importance. This structure allows the information from all sensor and hidden states to be funneled together into a coherent representation, like all the leaves of a tree feeding into the root (see Fig. 7.1 for an illustration). Although intuition may indicate that this structure is correct, we must empirically test the viability of these structures. This does not imply that other substrates, for example linear programming structures or other memory models, cannot perform such functions. As such, the task here is to use genetic programming to encode tree-like computational structures that can take the inputs of the machine and the hidden states into account to create new outputs, while also setting hidden states in the process. A typical recurrent neural network [31] does this by extending the input and output layers of an artificial neural network and then recurring the additionally computed output states back to the inputs (see Fig. 7.1, RNN). This is exactly how Cartesian genetic programming [23] would be used to create systems that can take advantage of hidden states. Similarly, Markov Brains [11] use the same concept of recurrence to store hidden states (see Fig. 7.1, Markov Brain). In fact, when Markov Brains use the same nodes that CGPs use [13], the two become very similar. This kind of recurrence is very capable of creating systems that are stateful and have representations that are condensed, but the resulting structures are not necessarily tree-like, and rather are more arbitrarily connected networks. Beyond using Markov Brains equipped with the computational nodes found in CGPs, we use two other methods to create genetic programming trees that are stateful. GP-forests use one tree per hidden and output state and can use default values as well as inputs and hidden states from the last update.2 GP-vector-trees are generic trees, but instead of performing computations on single continuous variables, they execute vector operations. The vector supplied contains the hidden states of the last update as well as the current inputs, and part of the vector defines the computed output. We assume that this obvious extension of normal variables to vectors has certainly been done previously, but we cannot find an explicit reference for this approach. Generally speaking, here we introduce one form of recurrence into genetic programming tree structures. The programming language Push [33] should in principle allow for the same structures to emerge. However, these structures would need to arise by chance, in comparison to the models that we introduce here, which achieve the desired structural goals by their definition.

2 Inspired by the multiple trees used to encode memory in Langdon [18].
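All substrates compared here are embedded in the same perception-action loop. A minimal sketch of that loop, assuming a world object with observe/act methods and a brain callable mapping (inputs, hidden) to (outputs, hidden); both interfaces are illustrative, not taken from the original code:

def perception_action_loop(world, brain, hidden, n_steps):
    # One episode: a stateful brain maps the current inputs and hidden
    # states to outputs and new hidden states, as sketched in Fig. 7.3.
    for _ in range(n_steps):
        inputs = world.observe()
        outputs, hidden = brain(inputs, hidden)
        world.act(outputs)
    return hidden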
7.2.6 Markov GP Brains Using CGP Nodes

We have used Markov Brains extensively on similar tasks, and also to introduce and confirm our definitions of representations [19]. In a nutshell, a genome is used to
encode the computational units of a Markov Brain. The genome defines the function and connectivity of these units, which can connect to input, output, and hidden states. The connections allow the computational units to read from these states and compute new outputs and new hidden states. As such, a Markov Brain defines the update function of a state vector from time point t to t + 1. Some values of this vector are inputs, some are outputs, and the rest are hidden states. This is very similar to how an RNN is constructed. The number of hidden states in a Markov Brain or RNN can be chosen arbitrarily by the experimenter. Generally speaking, too many states slow evolution down, while too few states make evolving the task impossible. The number of hidden states used here (8) was found to be relatively optimal for the task at hand [19]. The computational units a Markov Brain uses were originally probabilistic and deterministic logic gates. As such, a Markov Brain defines a hidden Markov model conditional on an input sequence [4], hence the name Markov Brain. For a more detailed description of Markov Brains see Hintze et al. [11]. In this work, we use Markov Brains with CGP gates to approximate a CGP network. This creates a dynamic, GP-like structure that can be contrasted with the more rigid tree structures.
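The state-vector view admits a compact implementation. A sketch, assuming each gate is described by its input indices, output indices, and an update function; this representation is our own, chosen for illustration (real Markov Brains also specify how colliding writes are combined):

def markov_brain_update(state, gates):
    # One brain update from t to t+1. `state` holds inputs, outputs, and
    # hidden values in a single vector; each gate reads some indices of the
    # time-t state and writes its result into the time-t+1 state.
    new_state = [0.0] * len(state)
    for in_idx, out_idx, fn in gates:
        result = fn([state[i] for i in in_idx])
        for o, v in zip(out_idx, result):
            new_state[o] = v
    return new_state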
7.2.7 Genetic Encoding of GP Brains in a Tree-Like Fashion

Genetic programming trees and also Cartesian Genetic Programming define the computations they perform by identifying computational nodes, their connections, and how they receive inputs. When evaluated, they take those inputs, perform all computations defined by their structure, and return a solution. When mutated, the nodes and inputs can change, and rewiring between the components can occur. In addition, when genetic programming trees are recombined, for example, sub-branches of the trees are identified and exchanged between the trees [1]. Since mutations directly identify the components to be changed, this could be called a direct encoding. An alternative to this approach is indirect encoding schemes, which can sometimes provide an advantage during evolutionary adaptation, such as the hyper-geometric encoding for NEAT [7]. Markov Brains, for example, use a type of indirect encoding where a genome specifies genes, and these genes then define the computational components and their connectivity. Mutations happen to the genome and not to the components themselves. The genetic programming substrates used here are defined in a similar genetic encoding scheme. We use the term genetic encoding to imply that it is not entirely direct nor as indirect as other systems might be.
Fig. 7.4 Schematic overview of the genetic encoding for computational nodes. The genome is a sequence of numbers (bytes), and specific subsets of numbers (42, 213) define start codons. The sequence of numbers behind the start codon defines a gene. As such, a gene stores all information necessary to define all properties of a computational node as needed to function within a genetic programming tree. Specifically: the computation the node has to perform (orange), the default for its left input (green), the default for its right input (blue), and the address where it needs to be inserted (purple). Observe that the defaults can be specific values (as seen, for example, for the left input of the node) or the address of a hidden node (as seen, for example, for the right input of the node)

Here, the different kinds of computational structures (GP-Forest and GP-Vector Brains) are defined by such a genetic encoding scheme. Starting from a root node, new nodes have to be added sequentially. Therefore, a linear genome (a vector of bytes) is scanned for start codons (a set of two numbers) which define the start of a gene. The numbers following such a start codon are then used to define the mathematical operation of a node and how this node is inserted into the genetic programming tree (see Fig. 7.4). Observe that the order of genes in the genome
matters: genes found earlier encode nodes that will be inserted into the tree earlier. Mutations affect the genome and can not only shuffle genetic material around but also insert and delete whole parts. As such, genes can disappear or be created de novo. This means that, at the time a node is created by reading the gene defining it, it is not known whether other nodes will be appended to it later. Since each node in our implementations has two possible inputs, each gene encodes default values, or addresses of hidden states or input values. If another node is added later, it replaces the default value with its own output. Since nodes are added sequentially, they always get appended to an already existing node. To know where, each gene also encodes a continuous number [0.0, 1.0] that specifies where in the tree the node has to be added (see Fig. 7.5 for an illustration of that process). This scheme applies to both GP-Forest and GP-Vector. However, what each gene encodes differs with respect to the function of the node and the specific default values or addresses.
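The codon scan of Fig. 7.4 and the address-based insertion of Fig. 7.5 can be sketched together as follows. The range-halving rule follows Fig. 7.5, but the gene length, field order, and byte-to-address scaling are our assumptions for illustration, not the exact encoding:

START_CODON = (42, 213)  # as in Fig. 7.4

def extract_genes(genome, gene_length=4):
    # Scan the byte genome for start codons; the bytes that follow define one
    # node: operation id, left default, right default, insertion address.
    genes = []
    for i in range(len(genome) - gene_length - 1):
        if (genome[i], genome[i + 1]) == START_CODON:
            op, left, right, addr = genome[i + 2:i + 2 + gene_length]
            genes.append((op, left, right, addr / 255.0))
    return genes

class Node:
    def __init__(self, op, left_default, right_default):
        self.op = op
        self.children = [left_default, right_default]  # defaults until replaced

def build_tree(genes):
    # Growth rule of Fig. 7.5: each open input slot owns a subrange of
    # [0.0, 1.0]; a gene's address selects the slot its node replaces, and
    # the new node splits that slot's range between its own two inputs.
    root = Node('root', 0.0, 0.0)        # the empty root defaults to output 0.0
    slots = [(0.0, 1.0, root, 0)]        # (low, high, parent, child index)
    for op, left, right, address in genes:
        node = Node(op, left, right)
        for i, (low, high, parent, c) in enumerate(slots):
            if low <= address <= high:
                parent.children[c] = node
                mid = (low + high) / 2.0
                slots[i:i + 1] = [(low, mid, node, 0), (mid, high, node, 1)]
                break
    return root.children[0]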
7.2.8 GP-Forest Brain

Fig. 7.5 Illustration of tree growth by adding nodes. (1) In the beginning, the tree is defined as an empty root that defaults to an output (here 0.0). (2) Once the genome is parsed and a new gene that encodes a node has been found, the node replaces the root node. The left and right inputs now define two ranges, [0.0, 0.5] and ]0.5, 1.0]. (3) Each new node to be added has a specific address (a value from [0.0, 1.0] as defined by the gene; see the purple component in Fig. 7.4). This number defines which default input to replace. The added node itself subdivides the range it connected to, in order to allow new nodes to again be added. (4) The concept of adding nodes and subdividing the connection ranges continues until the tree is built. In this way, a new node will always have a place in the tree. However, since nodes are added sequentially, deletions of genes/nodes will affect the topology greatly, but will not make it impossible to add nodes

Any kind of computational system controlling an agent that needs to be stateful cannot just map outputs to inputs, but instead has to rely on hidden states, which
are also sometimes called recurrent states. In the case of an RNN, for example, extra outputs that the network produces are just mapped back as additional inputs. Similarly, Markov Brains use inputs, outputs, and hidden states upon which all computations are performed. While the inputs are generated by the environment, the Markov Brain generates the outputs and the new hidden states to complete a perception-action loop. In order to allow for the same computational complexity in a genetic programming tree, we need to somehow make it recurrent. Conventional trees compute a single output based on a selection of inputs. However, this would only allow us to create systems without hidden states. Here we use what we call a genetic programming forest (GP-Forest), where each output, as well as each hidden state, is computed by an individual genetic programming tree (hence the forest
Table 7.1 The possible mathematical operations of each node in a GP-Forest brain

Operation    Return value
NOP          0.0
LEFT         L
RIGHT        R
ADD          L + R
SUB          L − R
MUL          L * R
DIV          L / R, 0.0 if R == 0.0
EQU          1.0 if L == R, else 0.0
NEQU         1.0 if L != R, else 0.0
LOW          1.0 if L < R, else 0.0
HIGH         1.0 if L > R, else 0.0
16 LOGIC     1.0 or 0.0, the 16 two-input logic operations applied to L and R

0 is used as the discriminant for binary classification of the query point X. If the GPSR prior is not symmetric about 0, then this introduces a bias towards predicting the positive (or negative) class, which may not match the true distribution of binary labels. Again, the search algorithm would have to expend search effort to reach a part of the search space with a correct bias. This is separate from the well-known problem of the accuracy metric being misleading on unbalanced data [2]. This observation may have real-world consequences, since with several function sets and in many real-world datasets the initialisation and GPSR priors will be biased positive.
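The size of this classification bias is easy to see by simulation. A toy sketch, using a lognormal sample as a stand-in for a positive-biased GPSR prior (the distribution choice is ours, purely illustrative):

import numpy as np

rng = np.random.default_rng(0)
yhat = rng.lognormal(size=10000)   # stand-in for a positive-biased prior on yhat
frac_positive = np.mean(yhat > 0)  # ~1.0: almost every query is classed positive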
11.5.2 Algorithm Configuration

What function set should we choose for a particular problem? Note that several of the problems we have investigated are among the GP benchmarks commonly used in recent years. Some of them, when proposed as benchmarks, are accompanied by specific function sets. For strict comparison of algorithm performance it may be necessary to use the specified function sets. However, when the perspective is that of maximising real-world performance, it is clear that the choice of function set matters [27, 28]. Our results support this previous work. For example, the good results achieved by certain function sets [27, 28] can now be explained as the result of a good match between the GPSR prior and the true y distribution (both distributions are biased positive, with relatively low variance; the true y has no extreme values and the prior has few |ŷ| > 3).

In practice, in many forms of regression it is common to normalise or standardise data, either X or y or both, before modelling. This is expected to improve the performance of some models, such as kernel regression, and make no difference to others, such as linear regression. For cases where the observed y distribution is itself strangely distributed, e.g. taking on both extreme positive and extreme negative values, authors have investigated suitable transformations for y. For example, Burbidge et al. [3] suggest a sigmoidal transformation, the inverse hyperbolic sine. These techniques amount to controlling distribution mismatch by changing y, as opposed to changing the GPSR prior. They would help to address the issue described by Keijzer (see Sect. 11.2), i.e. a y distribution which is simply shifted far from the GPSR prior. But they may not make the distribution match the shape of the GPSR prior, and in particular the choice between 0–1 normalisation and standardisation amounts to a choice between positive-only and zero-centered distributions. As we have seen, the GPSR priors for several function sets match neither. Also, both normalisation and standardisation will give a y distribution much narrower than the very wide GPSR prior which arises with several function sets. Thus, algorithm configuration to control the GPSR prior may still be worthwhile.

Another way to avoid the problem of distribution mismatch is to use linear scaling in the objective function [12]. It solves the problem without removing the mismatch explicitly, because it means an individual which predicts a good constant is no longer a good individual in early generations: an individual which achieves any improvement in matching the correct “shape” of the true y predictions will do better. However, linear scaling has not been taken up by the majority of GP users, who seem to value the transparency of a simple objective function, with some studies questioning its generalisation ability [5]. For such cases, again, controlling the GPSR prior may be needed. Our results on asymptotes using different function sets may be useful in finding well-behaved, robust models (i.e. models less likely to blow up on unseen data).
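The three y transformations mentioned above can be compared directly. A short sketch on a stand-in skewed target (the data is synthetic, for illustration only):

import numpy as np

rng = np.random.default_rng(1)
y = rng.lognormal(size=200)                    # stand-in skewed target

y_norm = (y - y.min()) / (y.max() - y.min())   # 0-1 normalisation: positive-only
y_std = (y - y.mean()) / y.std()               # standardisation: zero-centred
y_ihs = np.arcsinh(y)                          # inverse hyperbolic sine [3]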
11.5.3 Understanding GSGP Mutation

How does geometric semantic mutation [24] behave? It is defined as m(p) = p + ms · (r1 − r2), where p is the parent program, and r1 and r2 are randomly-generated trees. The method of random generation is not specified [24], but some later work is explicit and uses the GROW operator for this [4]. The effect of the small constant ms is to control the degree of exploration of the operator. The term “geometric” is defined in the original to mean that all offspring are contained in a ball of radius ε centred at the parent. The reason for taking r1 − r2 is not made explicit, but its effect is to centre the distribution of m(p) at p. The initialisation prior (i.e. the prior on r1 and on r2) is not in general symmetric or zero-centred, so taking m(p) = p + ms · r1 would give an asymmetric mutation operator, not centred at p. One possible disadvantage is that the variance of r1 − r2 is even larger than that of a single r1 [36].

The behaviour of the operator remains to be investigated empirically. In Fig. 11.9 we illustrate it. GSGP mutation is not an ε-ball. There is no radius ε within which all outputs are contained, because according to our previous results, the distribution is long-tailed in each dimension. Also, the variance in each dimension can differ, due to differing distributions of X values of training cases. If (e.g.) training case 0 has larger values for independent variables than those of training case 1, then the result will be an ellipse-shaped distribution rather than a spherical one, with a wider range for ŷ0 than for ŷ1. However, the ellipse is not in general axis-aligned, because the output of a program m(p) on training case i is correlated with that on training case j. Thus the GSGP mutation is not distributed as an ε-ball, but as a potentially long-tailed, non-axis-aligned elliptical distribution. In each dimension it is centered at p due to the subtraction. Moraglio and Mambrini [25] propose an alternative formulation to reshape the distribution to be not only centered but isotropic, i.e. equally distributed in all directions from p, which requires taking the Moore–Penrose inverse of a matrix derived from the training data. Another alternative was proposed by McDermott et al. [22]: m(p) = p + ms · r, where r is a random tree and ms is drawn from a Normal centred at 0. This achieves symmetry and somewhat reduces growth in tree size, but still does not constrain the output to be an ε-ball, or isotropic. Some later work, e.g. [38], defines mutation as m(p) = p + σ(ms · (r1 − r2)), where σ is a sigmoid mapping. This constrains the distribution to an ε-ball but is not isotropic.

In addition to the properties of (1) being constrained to an ε-ball, (2) being symmetric per-dimension, and (3) being isotropic, we remark that two stronger conditions on mutation can also be defined: (4) a geometric complete mutation can be defined as one in which all points in an ε-ball are possible results of the mutation (cf. geometric complete crossover [23, p. 304]); an even stronger condition (5) is one in which the result is uniformly distributed on the ε-ball (implying that it is also complete and isotropic). Properties 1–3 can be achieved, but no known GP mutation achieves properties 4 or 5. It is useful for researchers studying GSGP and related ideas to distinguish these five properties.
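The shape of the mutation distribution can be visualised directly by sampling. A toy sketch: heavy-tailed random samples stand in for the semantics of r1 and r2 on two training cases (real GROW-tree semantics would be used in practice; the Cauchy choice is ours, to mimic the long tails):

import numpy as np

rng = np.random.default_rng(0)
p = np.array([2.0, 2.0])                 # parent semantics on two training cases
ms = 0.5                                 # mutation step
r1 = rng.standard_cauchy((1000, 2))      # stand-in for random-tree semantics
r2 = rng.standard_cauchy((1000, 2))
offspring = p + ms * (r1 - r2)           # centred at p, but confined to no ball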
Fig. 11.9 The effect of mutation in GSGP in semantic space. The semantics of the offspring m(p) = p + ms · (r1 − r2) is distributed centred at the parent semantics p. Here p = (2, 2, . . . , 2) for illustration. Pairs (r1, r2) were generated by the GROW algorithm and their semantics illustrated on two training cases at a time. The centering is due to the subtraction r1 − r2, eliminating any offset or skew that may be present in r1 and r2 individually. For small mutation step ms = 1/2, the distribution is tight (left); for ms = 2, the distribution is wider (right); in either case values are not constrained to any radius. For many pairs of training cases, the distribution is wider for one than the other, leading to an elliptical rather than spherical distribution. For some pairs of training cases, there is a correlation between the semantics on those pairs (a positive correlation in rows 1 and 3, but a negative correlation could also occur); for other pairs, little or no correlation (rows 2, 4). For ri of maximum depth 2, many offspring are simple and contain a naked variable x, and linear patterns occur (rows 1–2); for maximum depth 15 this does not occur (rows 3–4)
An elliptical distribution may lead to faster movement through one dimension of semantics than another, which may be helpful if the target’s values in different dimensions differ markedly. On the other hand, the shape of the mutation distribution depends on X, not on y. Further, the long-tailed mutation distribution gives the potential for damaging asymptotes at every mutation. In GSGP, subtrees are amalgamated but never deleted, and so asymptotes, if they appear, tend to remain. It is therefore important for a GSGP implementation to prevent asymptotes, for example by choosing the GP language carefully. Some avenues for future work are opened up by these perspectives.
11.6 Conclusions

In this paper, we have asked the question: what is the implicit prior on ŷ which results from our GP initialisation and search algorithms? We have compared these priors against y distributions for common problems and found mismatches which we think are important, in that they impact on algorithm dynamics and performance. One application of our results is in how to tune algorithm parameters with distribution mismatch in mind. We have tested the impact of some algorithm parameters, including function set and maximum tree depth, on this mismatch, and found that overall the function set 4 (+, −, ∗, aq, sin, tanh) gives a better match with observed y distributions. With the other function sets, a large tree depth gives more extreme values, but with function set 4 this tendency is reversed, and so with this function set we do not need to recommend very tight limits on tree depth. We have also observed the effect of the number of variables and their range. A second application of our results is in understanding the behaviour of operators, e.g. the GSGP mutation operator. We have demonstrated empirically the distribution achieved by this operator.
11.6.1 Limitations and Future Work

As argued by Silva and Costa [33], fitness is the one ingredient which, if removed, would undermine all theories of bloat. Therefore, in our experiments on GP “search” without fitness, we have obtained distributions of GP trees which are not bloated and therefore not representative of distributions in the presence of fitness. As a result, our distributions of ŷ could be inaccurate. However, running without fitness is essential if we are to discuss priors, as opposed to results from particular problems. One possible solution for future work is to use previous work on limiting distributions of GP tree sizes to characterise the eventual shapes of trees, and then to simulate such trees to study their distribution of ŷ values. Poli et al. [30] showed that such a limiting distribution exists for crossover-only GP, the Lagrange distribution of the second kind. Extensions would be required to deal with mutation and to characterise tree shapes. The latter would be possible through studying the balancedness of trees, as done by Keijzer and Foster [13].

A further possible limitation is that in our work on the GPSR prior we have taken the final generation to be representative of the output of the GP run. In reality, the final generation (in the presence of fitness) will likely include some very small trees [30], and these are unlikely to be the best of the run. It is the best of the run which is the true output of the GP run. Therefore, our GPSR priors probably over-represent small individuals. By our reasoning in Sect. 11.4.5, such individuals tend to have fewer extreme values than large individuals. The true GPSR prior, based only on the best of the run, would then have more extreme values than the distributions we have shown. In future work we will investigate whether this issue makes a difference.

Acknowledgement Thanks to Alberto Moraglio and Alexandros Agapitos for discussion.
Appendix A: Table of Distribution Statistics

See Table 11.3.
Table 11.3 Measures of spread and central tendency. For each function set (fsets 01–04), the table reports Min, Max, Mean, Median, SD, Skew, and Kurt of the ŷ distribution under the following settings: gen 0 and gen 50 with depth 2–6, 1 variable, N(0, 1); gen 0 with depth 2 and with depth 15, 1 variable, N(0, 1); gen 0 with depth 2–6, 10 variables, N(0, 1); gen 0 with depth 2–6, 1 variable, N(0, 50); and gen 0 on the Keijzer5, Korns8, Vladislavleva4, and Housing datasets, together with the y distributions of those datasets
References

1. Beadle, L., Johnson, C.G.: Semantic analysis of program initialisation in genetic programming. Genetic Programming and Evolvable Machines 10(3), 307–337 (2009)
2. Bhowan, U., Johnston, M., Zhang, M., Yao, X.: Evolving diverse ensembles using genetic programming for classification with unbalanced data. IEEE Transactions on Evolutionary Computation 17(3), 368–386 (2013)
3. Burbidge, J.B., Magee, L., Robb, A.L.: Alternative transformations to handle extreme values of the dependent variable. Journal of the American Statistical Association 83(401), 123–127 (1988). https://doi.org/10.1080/01621459.1988.10478575
4. Castelli, M., Silva, S., Vanneschi, L.: A C++ framework for geometric semantic genetic programming. Genetic Programming and Evolvable Machines 16(1), 73–81 (2015)
5. Costelloe, D., Ryan, C.: On improving generalisation in genetic programming. In: L. Vanneschi, S. Gustafson, A. Moraglio, I.D. Falco, M. Ebner (eds.) European Conference on Genetic Programming, EuroGP 2009, Tübingen, Germany, April 15–17, 2009, Proceedings, Lecture Notes in Computer Science, vol. 5481, pp. 61–72. Springer (2009)
6. Dignum, S., Poli, R.: Generalisation of the limiting distribution of program sizes in tree-based genetic programming and analysis of its effects on bloat. In: D. Thierens, H.G. Beyer, J. Bongard, J. Branke, J.A. Clark, D. Cliff, C.B. Congdon, K. Deb, B. Doerr, T. Kovacs, S. Kumar, J.F. Miller, J. Moore, F. Neumann, M. Pelikan, R. Poli, K. Sastry, K.O. Stanley, T. Stutzle, R.A. Watson, I. Wegener (eds.) GECCO '07: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, vol. 2, pp. 1588–1595. ACM Press, London (2007)
7. Espejo, P.G., Ventura, S., Herrera, F.: A survey on the application of genetic programming to classification. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 40(2), 121–144 (2010)
8. Fortin, F.A., De Rainville, F.M., Gardner, M.A., Parizeau, M., Gagné, C.: DEAP: Evolutionary algorithms made easy. Journal of Machine Learning Research 13, 2171–2175 (2012)
9. Gelman, A., Jakulin, A., Pittau, M.G., Su, Y.S.: A weakly informative default prior distribution for logistic and other regression models. Ann. Appl. Stat. 2(4), 1360–1383 (2008). https://doi.org/10.1214/08-AOAS191
10. Grinstead, C.M., Snell, J.L.: Introduction to Probability. American Mathematical Soc. (2012)
11. Iba, H., de Garis, H., Sato, T.: Genetic programming using a minimum description length principle. In: K.E. Kinnear, Jr. (ed.) Advances in Genetic Programming, chap. 12, pp. 265–284. MIT Press (1994)
12. Keijzer, M.: Improving symbolic regression with interval arithmetic and linear scaling. In: EuroGP, pp. 70–82. Springer (2003)
13. Keijzer, M., Foster, J.: Crossover bias in genetic programming. In: European Conference on Genetic Programming, pp. 33–44. Springer (2007)
14. Korns, M.F.: Accuracy in symbolic regression. In: R. Riolo, E. Vladislavleva, J.H. Moore (eds.) Genetic Programming Theory and Practice IX, Genetic and Evolutionary Computation, pp. 129–151. Springer, New York (2011)
15. Koza, J.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge, MA (1992)
16. Krogh, A., Hertz, J.A.: A simple weight decay can improve generalization. In: Advances in Neural Information Processing Systems, pp. 950–957 (1992)
17. Langdon, W.B., Poli, R.: Fitness causes bloat. In: P.K. Chawdhry, R. Roy, R.K. Pant (eds.) Soft Computing in Engineering Design and Manufacturing, pp. 13–22. Springer, London (1998). https://doi.org/10.1007/978-1-4471-0427-8_2
18. Lichman, M.: UCI machine learning repository. http://archive.ics.uci.edu/ml (2013)
19. Luke, S., Panait, L.: A comparison of bloat control methods for genetic programming. Evolutionary Computation 14(3), 309–344 (2006)
20. Mauceri, S., Sweeney, J., McDermott, J.: One-class subject authentication using feature extraction by grammatical evolution on accelerometer data. In: Proceedings of META 2018, 7th International Conference on Metaheuristics and Nature Inspired Computing. Marrakesh, Morocco (2018)
21. McDermott, J.: Measuring mutation operators' exploration-exploitation behaviour and long-term biases. In: M. Nicolau, K. Krawiec, M.I. Heywood, M. Castelli, P. García-Sánchez, J.J. Merelo, V.M.R. Santos, K. Sim (eds.) 17th European Conference on Genetic Programming, LNCS, vol. 8599, pp. 100–111. Springer, Granada, Spain (2014)
22. McDermott, J., Agapitos, A., Brabazon, A., O'Neill, M.: Geometric semantic genetic programming for financial data. In: Applications of Evolutionary Computation, pp. 215–226. Springer (2014)
23. Moraglio, A.: Towards a geometric unification of evolutionary algorithms. Ph.D. thesis, University of Essex (2007)
24. Moraglio, A., Krawiec, K., Johnson, C.: Geometric semantic genetic programming. In: Proc. PPSN XII: Parallel Problem Solving from Nature, pp. 21–31. Springer, Taormina, Italy (2012)
25. Moraglio, A., Mambrini, A.: Runtime analysis of mutation-based geometric semantic genetic programming for basis functions regression. In: Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation, pp. 989–996. ACM (2013)
26. Ni, J., Drieberg, R.H., Rockett, P.I.: The use of an analytic quotient operator in genetic programming. IEEE Transactions on Evolutionary Computation 17(1), 146–152 (2013)
27. Nicolau, M., Agapitos, A.: On the effect of function set to the generalisation of symbolic regression models. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp. 272–273. ACM (2018)
28. Nicolau, M., Agapitos, A.: Function sets and their generalisation effect in symbolic regression models (2019). In review
29. Poli, R.: A simple but theoretically-motivated method to control bloat in genetic programming. In: C. Ryan, T. Soule, M. Keijzer, E. Tsang, R. Poli, E. Costa (eds.) Genetic Programming, Proceedings of EuroGP'2003, LNCS, vol. 2610, pp. 204–217. Springer-Verlag, Essex (2003)
30. Poli, R., Langdon, W.B., Dignum, S.: On the limiting distribution of program sizes in tree-based genetic programming. In: European Conference on Genetic Programming, pp. 193–204. Springer (2007)
31. Poli, R., Langdon, W.B., McPhee, N.F.: A Field Guide to Genetic Programming. Published via http://lulu.com and freely available at http://www.gp-field-guide.org.uk (2008)
32. Rosca, J.P., et al.: Analysis of complexity drift in genetic programming. Genetic Programming pp. 286–294 (1997)
33. Silva, S., Costa, E.: Dynamic limits for bloat control in genetic programming and a review of past and current bloat theories. Genetic Programming and Evolvable Machines 10(2), 141–179 (2009)
34. Silva, S., Dignum, S.: Extending operator equalisation: Fitness based self adaptive length distribution for bloat free GP. In: EuroGP, pp. 159–170. Springer (2009)
35. Silva, S., Vanneschi, L.: The importance of being flat—studying the program length distributions of operator equalisation. In: R. Riolo, K. Vladislavleva, J. Moore (eds.) Genetic Programming Theory and Practice IX, pp. 211–233. Springer (2011)
36. Springer, M.D.: The Algebra of Random Variables. Wiley (1979)
37. Stephens, T.: GPLearn (2015). https://github.com/trevorstephens/gplearn, viewed 1 April 2019
38. Vanneschi, L., Silva, S., Castelli, M., Manzoni, L.: Geometric semantic genetic programming for real life applications. In: Genetic Programming Theory and Practice XI, pp. 191–209. Springer (2014)
11 Genetic Programming Symbolic Regression: What Is the Prior on the Prediction?
225
39. Vladislavleva, E.J., Smits, G.F., den Hertog, D.: Order of nonlinearity as a complexity measure for models generated by symbolic regression via Pareto genetic programming. IEEE Transactions on Evolutionary Computation 13(2), 333–349 (2009) 40. Whigham, P.A.: Inductive bias and genetic programming (1995) 41. Whigham, P.A., McKay, R.I.: Genetic approaches to learning recursive relations. In: X. Yao (ed.) Progress in Evolutionary Computation, Lecture Notes in Artificial Intelligence, vol. 956, pp. 17–27. Springer-Verlag (1995)
Chapter 12
Hands-on Artificial Evolution Through Brain Programming Gustavo Olague and Mariana Chan-Ley
12.1 Introduction

Brain programming is a new kind of artificial evolutionary learning based on neuroscience knowledge and the power of genetic programming. In contrast to state-of-the-art deep learning methodologies, the idea is to look for computational programs embedded within artificial models of the brain. To appreciate the importance of the subject, recall that Holland provided the first sketch of the problem in his seminal book [14], where a simple pattern recognizer was used to demonstrate a simple artificial adaptive system. Paradoxically, this research area has not received keen interest from the research community in genetic and evolutionary computation, probably due to the difficulty of approaching real-world applications and the lack of an appealing theory. In evolutionary computer vision, the visual problem is studied under the framework of goal-oriented vision [22]. Brain programming follows the route outlined in [22] to define the necessary steps towards not only specifying how to solve a visual problem but also answering the question: what is the visual task for? Brain programming aims to offer a new theory of visual processing in which the paradigm of evolutionary programming can be used extensively in practical image understanding and pattern recognition endeavors.
G. Olague () CICESE, Ensenada, BC, Mexico e-mail: [email protected] M. Chan-Ley EvoVisión Laboratory, Ensenada, BC, Mexico e-mail: [email protected]; http://evovision.cicese.mx © Springer Nature Switzerland AG 2020 W. Banzhaf et al. (eds.), Genetic Programming Theory and Practice XVII, Genetic and Evolutionary Computation, https://doi.org/10.1007/978-3-030-39958-0_12
12.2 Evolution of Visual Attention Programs

In [9], artificial visual programs were evolved for the problem of visual attention. Automation of cognitive modeling is achieved through a succession of layers that create an artificial dorsal stream. Visual attention tasks are evolved following purposive, goal-oriented behavior. Results obtained on a well-known testbed confirm that the proposal is able to automatically design visual attention programs that outperform previous systems hand-crafted by visual attention experts, while providing readable results through a set of mathematical and computational structures. Figure 12.1 presents a sketch to illustrate the analogy of our artificial approach with the natural system. The methodology was further tested on different tasks to assess its suitability to discover solutions that can be tagged as general answers to the question: what is visual attention for? In [23], the system was tested on the problem of evolving head-tracking routines. The system evolves visual operators to obtain several visual and conspicuity maps that are fused into a saliency map, which is converted to a binary image, thus defining the proto-object. Artificial brains are synthesized using multiple visual operators embedded within an intricate hierarchical procedure consisting of several key processes such as center-surround mechanisms, normalization, and pyramid-scale processes. The proposed strategy robustly manages many difficulties such as occlusion, distraction, and illumination, and the resulting programs are real-time
[Fig. 12.1 depicts the intention "I'm thirsty. I want a Coke!", the fitness function F_α(p, r) = ((1 + α) · p · r)/(α · p + r), the fused map f(CM_Int(·), CM_O(·), CM_C(·)), and the intensity map CM_Int = (I_r + I_g + I_b)/3.]
Fig. 12.1 Conceptual model of the artificial dorsal stream, showing the correspondence between the dorsal stream areas and the stages of the artificial model. The idea is to emulate the transformations that the input image undergoes along the pathway of the natural system
systems that can track a person's head with enough accuracy to control the camera automatically. Extensive experimentation shows that the proposed methodology outperforms several state-of-the-art methods in the challenging problem of head tracking.
We tested the same system in automating the design of video tracking processes for robotic visual systems [24]. The usual practice when learning a task is to learn the programs from a database and then test the system with a different set of images representing the same problem. In that paper, the challenge was to learn to detect a toy dinosaur from a database while testing the evolved programs in a different task involving three distinct robots and visual tracking scenarios. Indeed, the database does not contain information about the visual tracking challenge. When designing an object tracking system, detection of moving objects in each frame and correct association of detections to the same object over time must be addressed for the whole sequence. Visual attention is a skill performed by the brain whose functionality is to perceive salient visual features. The automatic design of the acquisition and integration steps of the natural dorsal stream was engineered to emulate its selectivity and goal-driven behavior, which are useful for tracking objects in video captured with a moving camera. This is a step towards the design of robust and autonomous visual behaviors. The test considers many difficulties due to abrupt object motion, changing appearance patterns of both the object and the scene, non-rigid structures, object-to-object and object-to-scene occlusions, as well as camera motion, models, and parameters. Tracking relies on the quality of the detection process, and automatically designing such a stage could significantly improve tracking methods. Experimental results confirm the validity of the approach, and a comparison with the method of regions with convolutional neural networks (CNN) is provided to illustrate the benefit of the proposed methodology.
The idea of evolving focus of attention, studied in an interactive domotic environment, produces excellent results [6]. The aim was to self-adapt the focus-of-attention algorithm to improve the accuracy of a laser pointer detection system. The proposed technique was compared with previous proposals, and the new method allows more accurate orders to be sent to home devices. The procedure eradicates false offs, thus preventing orders not signaled by users. Moreover, by adding self-adjusting capabilities with a genetic-fuzzy system, the computer vision algorithm focuses its attention on a narrower area of the image. This work illustrates the application of evolved visual attention programs to practical, real-world tasks.
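As a rough illustration of the data flow just described, the fusion of conspicuity maps into a proto-object mask can be sketched as follows. This is only a sketch under assumed simplifications: the operator list and the averaging fusion rule are placeholders, not the authors' evolved programs.

```python
import numpy as np

def proto_object(image, evolved_operators, threshold=0.5):
    """Fuse conspicuity maps produced by evolved operators into a
    saliency map, then binarize it to define the proto-object."""
    # Each evolved operator turns the input image into a conspicuity map.
    conspicuity_maps = [op(image) for op in evolved_operators]
    # Plain averaging stands in for the evolved fusion function.
    saliency = np.mean(conspicuity_maps, axis=0)
    # Normalize to [0, 1] before thresholding into a binary mask.
    span = saliency.max() - saliency.min()
    saliency = (saliency - saliency.min()) / (span + 1e-12)
    return saliency > threshold
```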
12.2.1 Evolution of Visual Recognition Programs

In [12], the human visual system was used as inspiration for solving object detection and classification tasks. Computational models of an artificial visual cortex (AVC) were evolved to solve challenging classification problems of natural
[Fig. 12.2 shows the AVC pipeline: the color image enters V1, where visual maps of color, orientation, shape, and intensity are computed (VM_C = VO_C(I_color), VM_O = VO_O(I_color), VM_S = VO_S(I_color), VM_Int = (I_r + I_g + I_b)/3); center-surround processing in V2 yields the conspicuity maps CM_C, CM_O, CM_S, and CM_Int along the dorsal stream ("Where?/How?"); in V4 and the inferotemporal cortex (IT) of the ventral stream ("What?"), k mental-map operators are applied and summed, MM = Σ_k VO_MMk(CM); the n global maxima of max(MM_C, MM_O, MM_S, MM_Int) form the descriptor vector (a_1, ..., a_n).]
Fig. 12.2 Conceptual model of the artificial visual cortex. The system decomposes the color image into four dimensions (color, orientation, shape, and intensity). Then, a hierarchical structure solves the object classification problem through a function-driven paradigm
images. Figure 12.2 presents a sketch to visualize the general idea of the proposed system for image classification. The idea is to create an image descriptor vector for classification while at the same time finding the object location within the image. That paper presents the paradigm of brain programming from a multi-objective perspective to enhance performance in the object classification task. According to the brain programming paradigm, each operator within the artificial brain does not by itself represent a solution to the visual task. A single operator might not be of interest, since it only holds meaning while working along with other operators within the hierarchical structure. The fitness function is a balance between the artificial dorsal and ventral streams. Each evolved solution is evaluated through two performance measures. The first measures classification performance with a metric called equal error rate, which defines the probability of an algorithm deciding whether two instances correspond to the same class. The second objective, based on the F-measure, calculates the correspondence between the ground truth of the object location in the image and the region selected by the algorithm as the possible position. The methodology was tested on the GRAZ-01 and GRAZ-02 databases. The solutions match, and in some cases outperform, other techniques in the state-of-the-art for classifying such databases.
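The second objective can be made concrete with a small sketch. Assuming binary masks for the ground truth and the selected region, the F-measure of Fig. 12.1, F_α(p, r) = ((1 + α) · p · r)/(α · p + r), can be computed as below; the function name and the mask representation are illustrative, not the authors' code.

```python
import numpy as np

def f_measure(pred_mask, gt_mask, alpha=1.0):
    """F-measure between the region proposed by a program and the
    ground-truth object location (cf. F_alpha in Fig. 12.1)."""
    tp = np.logical_and(pred_mask, gt_mask).sum()  # overlapping pixels
    p = tp / max(pred_mask.sum(), 1)               # precision
    r = tp / max(gt_mask.sum(), 1)                 # recall
    if alpha * p + r == 0:
        return 0.0
    return (1 + alpha) * (p * r) / (alpha * p + r)
```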
The artificial visual cortex is a bio-inspired model that is computationally expensive for object classification tasks. This cost is due in part to the high information content that must be processed when working with large databases, with high-resolution images, or, in the worst case, when operating on video. In [13], the original model was improved through the compute unified device architecture (CUDA) by exploiting the computational capabilities of graphics processing units (GPUs). In this work, the artificial visual cortex was coded in the CUDA framework to achieve real-time functionality. As a result, the proposed system can process images on average up to 90 times faster than the original implementation. Moreover, when the size of the image grows, the proposed AVC-CUDA is faster in comparison with other approaches like CNN-CUDA.
The optimization of the AVC model through genetic and evolutionary programming, tested on challenging object recognition tasks, finds reasonable solutions during the initial stage of the search. In [25], a study shows the frequency of discovering optimal visual operators through a simple random search. It exhibits the richness of the paradigm from two complementary viewpoints: (1) the concept of function composition in combination with a hierarchical structure leads to outstanding object recognition programs, and (2) multiple random runs of the search process can discover optimal functions. Experimental results provide evidence that high recognition rates can be achieved in well-known object categorization problems such as Caltech-5, Caltech-101, GRAZ-01, and GRAZ-02.
This paper is organized as follows. After a brief introduction to the problem, Sect. 12.4 reviews the state-of-the-art in classification of digitized art and explains the reason for selecting this problem to test brain programming. Section 12.5 presents the results that were obtained and a comparison with deep learning. Section 12.5.1 presents an analysis of some runs of the algorithm using an initial population drawn from the best set of solutions discovered in the previous section. Section 12.5.2 outlines our future research in the domains of visual learning and pattern recognition. Finally, the conclusion is drawn in Sect. 12.6.
12.3 Problem Statement

This paper presents results of a system that automatically designs novel computational models of an artificial ventral stream for the problem of recognizing digitized art. Creating an algorithm for authenticating the type of art from high-resolution digital scans of the original works is considered an open research area. Here, brain programming is tested against a challenging database, and a comparison is made with deep learning to understand the benefits and limits of our approach. A proposal for learning from a set of previously discovered solutions is presented, and the results give us some clues for future research. We summarize the concept of object recognition through the following definition.

Definition 12.1 (Object Recognition) Object recognition refers to the task of determining whether an object belongs to one or more classes in a collection of images and video sequences.

Here, we are interested in studying the problem of identifying digitized visual arts. While there is no universally accepted definition of art, the word is commonly used to describe something of beauty or a skill which produces an aesthetic result [1, 2, 21, 28]. We chose a database made of photographs of fine art as part of the general category of visual arts. In object categorization, the problem studied is that of identifying the object content of natural images while generalizing across variations inherent to the object class. In all previous studies of the artificial visual
cortex, each class describes a specific kind of object. For example, in Caltech-101, the categories correspond to particular objects like cars, airplanes, faces, and so on. ImageNet also includes many other objects in the images that are repetitive for some of the classes, and the GRAZ benchmark introduces photographs with several objects within the same image. This paper aims to test brain programming with a different kind of object categorization problem. Here, the class is representative of a given visual art, and therefore, the program needs to identify what is common in the set of images that can help to recognize the type of art. This problem is challenging since the methodology needs to build code that is capable of identifying the form of 2D and 3D visual features that render the information conforming the piece of art, which is in itself hard to define.1
12.4 Classification of Digitized Art

Koza formulates the paradigm of genetic programming as a way to achieve program induction following the Darwinian principle that structure arises from fitness [16]. He uses as a postulate the central idea raised by Arthur Samuel in the 1950s:

How can computers learn to solve problems without being explicitly programmed? In other words, how can computers be made to do what is needed to be done, without being told exactly how to do it? [16, p. 1]
While this is a final goal for the genetic programming community, for practical reasons, especially when dealing with real-world problems, one needs to make some compromises. In previous research devoted to the question of configuring a photogrammetric network, we avoided the idea of determining the fitness function without knowledge of the derivatives [26]. That research set aside one of the seven principles pronounced by Koza, called "correctness," which in the early days of evolutionary computation was said not to be required for solving every problem. In photogrammetry, modeling of data requires that the proposed method not only solve the problem but also obtain the solution with the highest accuracy using a minimal amount of resources.
Nowadays, the problem of image classification is on the verge of a digital revolution produced by what is known as deep learning. This method uses the core technology of neural networks as its central paradigm.2 Since many authors have studied the application of evolutionary computation to neural networks, it is attractive to continue along this line of research. There are advantages:
1 Nowadays, claims that any computational method (artificial intelligence) is capable of solving similar visual problems need to be taken carefully, since the programs need to be explainable from the artistic viewpoint.
2 Note that Koza classifies neural networks as one of the existing methods that do not seek solutions in the form of computer programs.
• There is vast experience on the subject.
• Programming packages are available.
• A community is ready to work on the subject.

Nevertheless, each has a disadvantage:

• Researchers in evolutionary computation mostly study toy problems.
• Even by today's standards, the technology is meager in comparison with the size of the studied problems.
• Not all people interested in the subject have access to the technology necessary to tackle state-of-the-art problems.

To place this in perspective, let us recall three milestones of deep learning. In 2012, researchers from Stanford and Google unveiled to the world the real power of the paradigm, building on previous research into multilayer neural networks [19]. Their study explored unsupervised learning, which does away with the expensive and time-consuming task of manually labeling data before it can be used to train machine learning algorithms. It would accelerate the pace of AI development and open up a new world of possibilities when it came to building machines to do work which until then could only be done by humans. Specifically, they singled out the fact that their system had become highly competent at recognizing pictures of cats. The paper described a model which would enable an artificial network to be built containing around one billion connections, using a dataset of 10 million 200 × 200 pixel images. It also conceded that while this was a significant step towards building an "artificial brain," there was still some way to go, with neurons in a human brain thought to be joined by a network of around 10 trillion connectors.
Also in 2012, a convolutional neural network named AlexNet [18] competed in the ImageNet Large Scale Visual Recognition Challenge, where algorithms compete to show their proficiency in recognizing and describing a library of 1000 object categories. The neural network, with 60 million parameters and 650,000 neurons, won the image recognition contest. AlexNet is a variant of the CNN designs introduced by Yann LeCun [20], who applied the backpropagation algorithm to a variant of Kunihiko Fukushima's [11] original CNN architecture called the "neocognitron."
AlphaGo became the first computer program to defeat a professional human Go player in 2015, the first to defeat a Go world champion in 2016, and is arguably the strongest Go player in history [27]. The neural networks take a description of the Go board as an input and process it through many different network layers containing millions of neuron-like connections. Given the size of the neural networks, it is difficult to foresee an improvement of the network architectures for such kinds of problems through evolutionary computation [15].
The first papers on brain programming, published in 2012 at the EvoStar conference, represent the birth of this new methodology based on genetic programming [5, 8]. The studied problems were toy problems by today's standard. In particular, for the object recognition problem, the databases were Caltech-5 and Caltech-101. A comparison with HMAX (a kind of hierarchical convolutional network) showed that the proposed system was more straightforward and accurate [5]. The general
idea was to apply a function-driven paradigm in contrast to the prevailing data-driven model. The system was further improved to solve image classification problems like those posed in the GRAZ-01 and GRAZ-02 databases, which are part of the European network PASCAL's Visual Object Classes Challenge [12]. These datasets are significantly harder to solve in comparison with Caltech-5 and Caltech-101. Since ImageNet is massive, researchers prefer to use smaller datasets like MNIST [20] and CIFAR [17], whose images are of reduced size. However, these datasets are simpler to solve in comparison with the GRAZ database. In order to advance our study, we move to a different problem that represents a compromise between GRAZ and ImageNet, which we downloaded from the Kaggle website [3]. The dataset consists of about 9000 art images covering the classes Drawings, Engraving, Painting, Iconography, and Sculpture. We add the background class from Caltech-101 as the non-class. This dataset, made of different sources and conceived for classifying different styles of art, includes high-resolution images of different sizes [4, 7, 10]. The five classes were downloaded from Google images, Yandex images, and the virtual Russian museum. Data is separated into training and validation sets. Examples of the main categories are provided in Figs. 12.3, 12.4, 12.5, 12.6, 12.7, and 12.8 for visualization purposes. Also, we include a description of each database.
Fig. 12.3 This figure presents image examples of the class “Drawings” from the Kaggle database. These images include drawings and watercolors. Note the diversity in color, size, perspective, style, and image content. There are 1108 training images and 122 testing images. Brain programming spent 450 h or 18.75 days for one run of the algorithm
Fig. 12.4 This figure shows image examples of the class "Engraving" from the Kaggle database. This class represents a category of fine art or graphic art usually impressed on a flat surface by incising a design onto a hard surface, cutting grooves into it with a burin. The term refers to the arts that rely more on lines or tone than on color. There are 757 training images and 84 testing images. Brain programming spent 544 h or 22.6 days to solve the problem
Fig. 12.5 This figure displays image examples of the class “Painting” from the Kaggle database. Painting is the practice of applying paint, pigment, color, or another medium to a solid surface (support base). It brings elements such as drawings, gesture, composition, narration, or abstraction. Paintings can be naturalistic and representational (as in a still life or landscape), photographic, symbolistic, emotive, or political. There are 2043 training images and 228 testing images. Brain programming finished one run of the algorithm in 597 h or 24.81 days
Fig. 12.6 This figure presents image examples of the class “Iconography” from the Kaggle database. The term refers to the production of religious images, called “icons” in the Byzantine and Orthodox Christian tradition. The dataset contains 2076 training images and 231 testing images. Brain programming finished in 810 h or 33.75 days for this class
Fig. 12.7 This figure includes image examples of the class "Sculpture" from the Kaggle database. Sculpture is a branch of the visual arts that operates in three dimensions. It is one of the plastic arts. Sculptural processes include carving (the removal of material) and modeling (the addition of material, as clay), in stone, metal, ceramics, wood, and other materials; since modernism, there has been almost complete freedom of materials and processes. A wide variety of materials may be worked by removal such as carving, assembled by welding or modeling, or molded or cast. There are 1737 training images and 188 testing images. Brain programming spent 562 h or 23.41 days for the image class
Fig. 12.8 This figure shows image examples of the class "Background" from the Caltech-101 database. It includes a wide variety of images of different types and is considered the non-class for the problem of binary classification. Note that by mistake this class contains images of the five classes studied in this paper, such as sculptures, and others similar to engravings and paintings. There are 468 images split into training and validation sets
12.5 Experiments

Experiments were carried out to provide evidence supporting the claim that efficient and reliable solutions for non-trivial recognition problems are discovered through the technique of brain programming. The implementation is programmed in MATLAB running on a Dell Precision T7500 workstation, Intel Xeon eight-core CPU E5506 at 2.13 GHz, NVIDIA Quadro 4000, with Linux OpenSuse OS. The methodology follows the usual absent/present protocol considering two image sets for learning and testing. Brain programming is a computationally demanding paradigm. Instead of applying hundreds or thousands of individuals to create the population, we use a population size of 30 individuals per function. Also, the programs run for 30 generations instead of the hundreds or thousands of iterations used in most published research. Note that we use atypical genetic programming parameters, and this is due to the analysis of the visual problem. A common practice in solving challenging problems is to increase the size of the population as a way to enrich the initial population.
An important aspect of properly approaching any computational problem with genetic programming is the definition of the function and terminal sets. These
are the building blocks that genetic programming uses to construct the solutions. In our previous research, we defined those sets following our expertise and the literature explaining the inner workings of the visual pathways. A balance should be found to create programs that can solve non-trivial problems within the state-of-the-art in a reasonable amount of time. Our method spends 450 h or 18.75 days for the class Drawings, 544 h or 22.66 days for the class Engraving, 810 h or 33.75 days for the class Iconography, 597 h or 24.87 days for the class Painting, and 562 h or 23.41 days for the class Sculpture. We use ten workstations to speed up the optimization process. Table 12.1 shows a summary of statistical results after running our brain programming strategy 15 times per class. We include results of two deep-learning methods: a CNN trained from scratch and AlexNet.
Brain programming looks for the optimal strategy within the search space of possible programs. When comparing the best individual discovered through genetic programming with solutions from the state-of-the-art, it is necessary to use only the best solution. The average and standard deviation give information about the search process, but if we want to know the accuracy
Table 12.1 This table shows a summary of the results of applying brain programming to the classification of digitized art

Run               Drawings      Engraving     Painting      Iconography   Sculpture
1                 80.15         80.03         90.27         83.52         81.10
2                 84.73         83.46         89.71         89.11         80.65
3                 78.84         83.30         86.61         87.09         84.21
4                 80.65         74.95         92.94         85.60         85.37
5                 83.46         78.88         88.43         87.56         84.74
6                 79.00         79.70         98.24         89.37         84.83
7                 83.84         76.26         95.93         87.17         84.74
8                 74.32         81.99         96.46         86.54         85.64
9                 75.94         76.26         90.43         87.01         89.37
10                82.57         82.16         90.51         83.71         84.92
11                78.88         74.63         90.74         85.83         85.28
12                86.01         79.54         89.47         80.15         86.19
13                79.75         73.48         90.27         84.20         84.83
14                83.33         79.86         92.82         88.72         83.01
15                75.76         81.50         92.50         83.81         84.92
Mean              80.48         79.19         91.40         86.34         84.70
Std. Dev.         3.48          3.19          3.26          2.02          1.93
Min.              74.32         74.14         87.07         83.52         81.10
Max.              86.01         83.46         98.24         89.37         89.37
From scratch CNN  76.18 ± 2.38  79.16 ± 4.88  91.78 ± 2.43  91.49 ± 0.51  81.14 ± 2.51
AlexNet           89.34 ± 0.89  92.50 ± 1.98  96.46 ± 0.36  98.14 ± 0.43  93.81 ± 0.91

Values highlighted with bold font identify the best solutions discovered after applying hands-on artificial evolution. We include results of the deep neural networks for comparison
of a solution, this needs to be computed afterward. The table shows that the best program for the class Painting practically solves the problem, and its accuracy is better than AlexNet's. It misses only about 40 images from the whole dataset. Regarding the class Drawings, our program comes close to AlexNet, while for the rest of the classes, the results are better than those of the baseline convolutional neural network (from scratch CNN).
12.5.1 Beyond Random Search in Genetic Programming

Random principles are arguably overused in evolutionary computation. The idea is to adapt the methodology to avoid the unnecessary application of arbitrary or unplanned solutions within the algorithm and to advance towards a more goal-oriented methodology. A first idea tested here is to use the best solutions discovered during the previous experiments as the initial population for a new set of brain programming experiments; see Fig. 12.9. Table 12.2 provides the preliminary results achieved for the five classes. We observe that the class Drawings improved to the point of matching AlexNet's performance, and the Painting class becomes even better. The proposed strategy also improves all other classes: Engraving, Iconography, and Sculpture. Tables 12.3, 12.4, 12.5, 12.6, and 12.7 provide the best programs for each studied art problem. The simplicity of the solutions in comparison with AlexNet is remarkable. These programs can be read and are amenable to improvement through analysis. Figure 12.10 shows the results of the five experiments, which confirm the benefit of using the best solutions discovered in previous experiments to run new batches of experiments, with excellent results.
Fig. 12.9 We use the best set of solutions discovered in the previous experiments to create a non-random initial population for a new set of brain programming experiments. The idea is to continue the evolution from the best local minimum discovered so far
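A minimal sketch of this seeding step is shown below. The `mutate` argument is a hypothetical stand-in for one of the system's variation operators; the chapter does not specify how any remaining population slots are filled.

```python
import random

def seeded_population(best_solutions, pop_size, mutate):
    """Build a non-random initial population from previously
    discovered solutions, topping up with mutated variants."""
    population = list(best_solutions)[:pop_size]
    # Start the new run near, but not exactly at, the best local
    # minimum discovered so far.
    while len(population) < pop_size:
        population.append(mutate(random.choice(best_solutions)))
    return population
```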
Table 12.2 This table shows a summary of the results after evolving the best set of solutions with brain programming to the classification of digitized art

Run            Drawings      Engraving     Painting      Iconography   Sculpture
1              90.41         84.61         99.02         91.11         89.37
2              89.15         84.78         98.24         90.48         89.73
3              91.32         84.12         98.24         89.85         89.37
4              90.41         84.94         98.32         90.87         89.37
5              90.78         84.45         98.64         91.11         89.74
6              89.15         84.29         98.32         91.18         89.74
7              89.18         83.96         98.32         90.95         89.83
8              89.31         83.96         98.32         90.79         89.37
9              88.78         83.63         98.32         90.72         89.92
10             89.15         86.74         98.40         89.37         89.83
Original       86.01         83.46         98.24         89.37         89.37
Mean           89.76         84.54         98.41         90.64         89.62
Std. Dev.      0.88          0.87          0.24          0.59          0.23
Best solution  91.32         86.74         99.02         91.18         89.92
Improvement    5.31          3.28          0.78          1.73          0.25
AlexNet        89.34 ± 0.89  92.50 ± 1.98  96.46 ± 0.36  98.14 ± 0.43  93.81 ± 0.91

Values highlighted with bold font identify the best solutions discovered after applying hands-on artificial evolution. We include results of the transfer learning methodology AlexNet for comparison
Table 12.3 This table provides the functions of the best program discovered for the problem Drawings vs. Background

EVO_O: kSum(imRound(ip_GaussDy_1(supremum(kSum(supremum(ip_GaussDy_1(supremum(ip_GaussDx_1(C), kSum(supremum(ip_GaussDx_1(C), imRound(ip_GaussDx_1(M))), 0.31))), imRound(ip_GaussDx_1(M))), 0.31), kSum(supremum(ip_GaussDx_1(C), imRound(ip_GaussDx_1(C))), 0.31)))), 0.31)
EVO_C: kSum(kDiv(kSum(imFloor(imCeil(kDiv(kSum(kSum(kSum(kSum(imFloor(imFloor(S)), 0.20), 0.20), 0.20), 0.20), 0.31))), 0.20), 0.31), 0.20)
EVO_S: ip_imdivide(hitmissDmnd(hitmissDsk(hitmissDsk(hitmissDmnd(hitmissDmnd(bottomHat(ip_imdivide(hitmissDmnd(hitmissDsk(hitmissDsk(hitmissDmnd(bottomHat(dilateDsk(G)))))), R))))))), R)
MM1: ip_Sqrt(ip_Sqrt(ip_Sqrt(ip_GaussDx_1(ip_GaussDy_1(imagen)))))
MM2: ip_GaussDy_1(ip_imsubtract(ip_GaussDy_1(ip_imsubtract(ip_GaussDy_1(ip_GaussDy_1(imagen)), ip_Sqr(ip_GaussDy_1(ip_GaussDy_1(imagen))))), ip_GaussDy_1(ip_imsubtract(ip_GaussDy_1(ip_GaussDy_1(imagen)), ip_GaussDy_1(ip_GaussDy_1(imagen))))))

EVO stands for evolutionary visual operator and MM for mental maps. Note that "imagen" in the MM functions refers to the conspicuity maps (CM)
Table 12.4 This table provides the functions of the best program discovered for the problem Engraving vs. Background

EVO_O: ip_Gauss_1(imagen)
EVO_C: ip_imcomplement(ip_Exp(ip_Exp(ip_Exp(ip_Exp(ip_Exp(RedGreenOpon(image)))))))
EVO_S: Y
MM1: ip_Logarithm(ip_Half(ip_Half(ip_GaussDx_1(imagen))))
MM2: ip_GaussDx_1(ip_GaussDx_1(imagen))
MM3: ip_Logarithm(ip_imdivide(ip_GaussDy_1(ip_GaussDy_1(imagen)), ip_imadd(ip_imadd(ip_GaussDy_1(ip_GaussDy_1(imagen)), ip_imadd(ip_GaussDy_1(ip_GaussDy_1(imagen)), ip_imadd(ip_GaussDy_1(ip_GaussDy_1(imagen)), ip_GaussDx_1(ip_GaussDy_1(imagen))))), ip_imadd(ip_GaussDy_1(ip_GaussDy_1(imagen)), ip_GaussDx_1(ip_GaussDy_1(imagen))))))

EVO stands for evolutionary visual operator and MM for mental maps. Note that these functions are meaningless if the hierarchical structure is not applied to recognize the image

Table 12.5 This table provides the functions of the best program discovered for the problem Painting vs. Background

EVO_O: ip_imadd(ip_GaussDx_1(ip_GaussDx_1(H)), ip_imabsadd(ip_GaussDx_1(ip_GaussDx_1(H)), ip_GaussDx_1(ip_GaussDx_1(H))))
EVO_C: RedGreenOpon(image)
EVO_S: hitmissSqr(ip_imadd(erodeDmnd(erodeDmnd(kDiv(K, 0.88))), ip_imadd(ip_imsubtract(erodeDsk(B), erodeDsk(B)), ip_imsubtract(Y, erodeDsk(B)))))
MM1: ip_imabsadd(ip_Sqrt(ip_imabsadd(ip_imabsadd(ip_Sqrt(ip_Logarithm(ip_GaussDx_1(ip_Logarithm(ip_GaussDx_1(ip_GaussDx_1(ip_GaussDy_1(imagen))))))), ip_GaussDx_1(ip_GaussDy_1(imagen))), ip_Logarithm(ip_GaussDx_1(ip_GaussDy_1(imagen))))), ip_Logarithm(ip_imabs(ip_imabs(ip_GaussDx_1(ip_Logarithm(ip_GaussDx_1(ip_Logarithm(ip_GaussDx_1(ip_GaussDx_1(ip_GaussDy_1(imagen)))))))))))

EVO stands for evolutionary visual operator and MM for mental maps
12.5.2 Ideas for a New Kind of Evolutionary Learning

The proposed strategy presents a significant drawback related to its high computational cost. Usually, in machine learning, the goal is to optimize the quality of results through proper management of the set of solutions. In evolutionary computation, this is achieved with a technique that increases diversity by creating sub-populations called "niches", where solutions are only allowed to compete within their subgroups, similar to how species evolve when isolated on islands. Diversity is rewarded through a technique called "fitness sharing", where the difference between members of the population is measured to give an edge in the competition to distinct types of
Table 12.6 This table provides the functions of the best program discovered for the problem Iconography vs. Background

EVO_O: ip_GaussDy_1(kSum(ip_GaussDy_1(ip_GaussDy_1(ip_GaussDy_1(ip_GaussDy_1(ip_Gauss_2(ip_GaussDx_1(ip_GaussDx_1(C))))))), 0.68))
EVO_C: R
EVO_S: hitmissSqr(kDiv(skeletonShp(kDiv(skeletonShp(Y), 0.74)), 0.74))
MM1: ip_Sqrt(ip_imabsadd(ip_imabsadd(ip_imabsadd(ip_imabsadd(ip_Sqrt(ip_imabsadd(ip_imabsadd(ip_imabsadd(ip_imabsadd(ip_imabsadd(ip_GaussDx_1(ip_GaussDy_1(imagen)), ip_Sqrt(ip_imabsadd(ip_GaussDx_1(ip_GaussDx_1(imagen)), ip_GaussDx_1(ip_GaussDy_1(imagen))))), imagen), ip_Sqrt(ip_Sqrt(ip_imabsadd(ip_imabsadd(ip_imabsadd(ip_imabsadd(ip_imabsadd(ip_GaussDx_1(ip_GaussDy_1(imagen)), ip_Sqrt(ip_imabsadd(ip_GaussDx_1(ip_GaussDx_1(imagen)), ip_GaussDx_1(ip_GaussDy_1(imagen))))), imagen), ip_Sqrt(ip_imabsadd(imagen, ip_GaussDx_1(ip_GaussDy_1(imagen))))), imagen), imagen)))), imagen), imagen)), imagen), ip_Sqrt(ip_imabsadd(imagen, ip_GaussDx_1(ip_GaussDy_1(imagen))))), ip_GaussDy_1(ip_GaussDy_1(imagen))), imagen))
MM2: ip_GaussDy_1(ip_GaussDy_1(imagen))

EVO stands for evolutionary visual operator and MM for mental maps
Table 12.7 This table provides the functions of the best program discovered for the problem Sculpture vs. Background

EVO_O: ip_GaussDx_1(kSust(ip_GaussDx_1(kSust(kSust(ip_GaussDx_1(kSust(imFloor(ip_imabs(ip_GaussDx_1(ip_GaussDy_1(imagen)))), 0.46)), 0.46), 0.46)), 0.46))
EVO_C: ip_imcomplement(imFloor(ip_Sqrt(imFloor(M))))
EVO_S: kSum(M, 0.00)
MM1: ip_imabs(ip_GaussDx_1(ip_GaussDy_1(imagen)))

EVO stands for evolutionary visual operator and MM for mental maps. Note that in this program the function for the shape dimension adds zero to the magenta band. In GP it is possible to catch these errors, while in neural networks similar problems are harder to identify
solutions. We present some results here to point out some critical issues for future research regarding computational costs.
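For reference, classic fitness sharing can be sketched as follows. This is the textbook formulation (niche counts with a triangular sharing kernel), not code from the chapter, and `distance` stands for an assumed genotype or phenotype metric.

```python
def shared_fitness(raw_fitness, population, distance, sigma=1.0):
    """Divide each individual's raw fitness by its niche count, so
    individuals in crowded regions of the search space are penalized."""
    shared = []
    for i, ind in enumerate(population):
        # sh(d) = 1 - d/sigma for d < sigma, else 0 (triangular kernel)
        niche_count = sum(max(0.0, 1.0 - distance(ind, other) / sigma)
                          for other in population)
        shared.append(raw_fitness[i] / max(niche_count, 1e-12))
    return shared
```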
12.5.3 Running the Algorithm with Fewer Images

In previous research, we remarked on the facility of brain programming to discover solutions through a simple random search [25]. Table 12.8 shows that it is possible to reduce the computational effort by reducing the size of the image subsets for a classification problem. The computational effort could be reduced from several days
[Fig. 12.10 comprises five plots of fitness versus generations, one per problem: (a) Drawings vs. Background, (b) Engraving vs. Background, (c) Painting vs. Background, (d) Iconography vs. Background, and (e) Sculpture vs. Background; each plot traces the Std. Dev., Mean, Min, and Max of the population over 30 generations.]
Fig. 12.10 These graphics show the evolution of the maxima discovered for each of the art problems. Note that Painting vs. Background requires more time to improve its solutions, with a smooth initial improvement. We observe similar behavior in the Sculpture vs. Background figure. The other graphs present constant progress from the first generations. In general, these figures give evidence that the idea of hands-on evolution works for computationally demanding problems
Table 12.8 Experiments on the computational effort after reducing the number of images of the subsets for the problem Painting vs. Background

No. of images   Time*     Fitness   Since Gen   Until Gen
50              18 h      79%       5           50
50              18 h      85%       24          60
100             22 h      88.5%     22          30
200             4 days    84.5%     19          30

*All computations were made with MATLAB parfor
to only a few hours. Nevertheless, fitness can be compromised if care is not taken to balance the problem representation. Indeed, the number of images representing the problem being solved needs to be chosen carefully. Here, we observe that fitness is reduced by 10%. Also, we noted that perfect solutions can be achieved randomly for a given subset, as already pointed out. This aspect can be misleading since a particular subset with few images could produce a score for a solution that is not general. We select a size of 100 images for further experiments.
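One way to keep a small subset representative is to sample both classes evenly; the helper below is an assumed illustration of this balancing step, not the chapter's sampling code.

```python
import random

def balanced_subset(class_images, nonclass_images, n_images, seed=0):
    """Draw an equal number of class and non-class images so a small
    training subset does not misrepresent the problem."""
    rng = random.Random(seed)
    half = n_images // 2
    return rng.sample(class_images, half) + rng.sample(nonclass_images, half)
```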
12.5.4 Running the Algorithm with 100 Images

In a new set of experiments, we attempt to solve the classification problem of Painting vs. Burrito using ten different subsets of 100 images; see Table 12.9. In the first experiments, we chose the Background class from Caltech-101 as the non-class. That dataset has images that belong to some of the classes studied in the digitized art problem. In order to avoid this issue, we replace the non-class with a set of images taken from the class Burrito of ImageNet for the rest of the experiments; see Fig. 12.11. We run the algorithm ten times, each run with a different subset, and found perfect solutions in four runs, three of whose scores degrade when tested with the complete database. We do the same for deep learning to make a comparison with the CNN from scratch and transfer learning with AlexNet. Note that this latter method gives outstanding results. However, when the best solutions are tested with the Background class, the score degrades significantly. This behavior is one of the main complaints about deep learning. The methodology of convolutional neural networks creates models heavily based on the database. Hence, when there are variations in the information, the model's performance drops. In contrast, brain programming degrades slowly and in some cases maintains the achieved level when the background changes. This behavior is due to the function-driven paradigm applied in our approach, which attempts to discover a program from the landscape of possible computer programs. A simple way of understanding this phenomenon is as follows. In normal deep learning, the designer provides the architecture of the deep neural network, as in the case of AlexNet. Then, the network is optimized through conventional learning using gradient descent and the set of given images. The results in Table 12.9 show a small dispersion in comparison with a badly designed network like the CNN from scratch. In our GP-based approach, this phenomenon of small variance was
Table 12.9 Experiments with subsets of 100 images showing fitness scores for the classification problem of Painting vs. Burrito

Brain programming
Training set   Fitness        Complete database   Painting vs. Background
c1             91.00          81.68               76.24
c2             97.50          95.31               87.32
c3^26          100            97.27               85.49
c4             99.00          98.03               94.85
c5^0           100            66.54               44.10
               97.50          95.91               83.57
c6             100            98.03               85.25
c7             99.00          96.74               83.41
c8^0           100            66.43               42.58
               88.5           82.43               77.41
c9             97.00          94.17               82.62
c10^0          100            66.62               64.83
               87.00          83.19               76.25
Mean           95.65 ± 4.90   92.90 ± 6.92        83.24 ± 5.70

Entries in the first column with a superscript indicate that in this run the program found a perfect solution (score 100%); the superscript gives the generation at which the algorithm discovered the ideal solution. However, these solutions decreased their performance when tested, so we performed another evolution with the same set of images; the second line of such rows shows the result of this evolution

               CNN from scratch                        Transfer learning—AlexNet
Training set   Fitness   Complete   vs. Background     Fitness   Complete   vs. Background
c1             72.00     69.11      54.55              98.5      98.86      80.62
c2             82.50     86.30      73.84              98.00     97.88      76.27
c3             86.00     85.09      70.57              98.5      99.55      81.10
c4             79.00     85.77      75.28              100       99.17      80.78
c5             85.50     87.36      73.76              99.50     98.64      80.06
c6             84.00     80.39      63.64              99.00     99.02      80.54
c7             85.50     86.15      72.41              98.00     98.94      80.54
c8             80.00     83.04      68.10              99.00     99.55      81.26
c9             88.50     90.31      73.21              98.50     98.33      79.74
c10            81.00     76.68      62.28              98.00     99.09      80.86
Mean           82.4 ± 4.7  83.02 ± 6.19  68.76 ± 6.67  98.7 ± 0.67  98.90 ± 0.51  80.84 ± 0.62

The "Complete" and "vs. Background" columns provide results of the best solutions against the complete database and the Caltech-101 background
never reproduced. Here the designer is attempting to find the best model through an evolutionary, genetic programming-based approach, looking not only for the architecture but for the whole computational model. Hence, there are numerous variations among the final solutions discovered by the algorithm across the several runs. Note that in the c3 training set, the best solution was
Fig. 12.11 Image examples of the class "Burrito" from the ImageNet database. These images include photographs of a dish called burrito, Mexican and Tex-Mex cuisine that consists of a flour tortilla with various other ingredients. It is wrapped into a closed-ended cylinder that can be picked up, in contrast to a taco, where the tortilla is simply folded around the fillings. The database includes 1300 images covering many situations as well as persons. We select this dataset as the non-class for the new set of experiments since this class does not overlap with any of the digitized art classes
discovered in 26 iterations of the algorithm. In this case, the algorithm converged to a local minimum that is good enough to describe the classification problem: the score with the complete database is high. This last experiment was repeated using the Iconography vs. Burrito databases; see Table 12.10. We observe the same pattern for deep learning, where minimal dispersion is achieved even for the complete database and Background experiments. Brain programming always had higher variance, independently of the database. Similar results can be observed in Tables 12.11, 12.12, and 12.13.
12.5.5 Ensemble Techniques and Genetic Programming

Ensemble techniques are meta-algorithms that combine several weak classifiers into one predictive model. In general, ensemble techniques use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. The word weak in the jargon of machine learning refers to very poor learners, each performing only just better than
Table 12.10 Experiments with subsets of 100 images showing fitness scores for the classification problem of Iconography vs. Burrito

Brain programming
Training set   Fitness        Complete database   Iconography vs. Background
c1             93.00          64.35               57.04
c2             84.50          73.48               62.79
c3             86.50          77.13               70.42
c4             89.00          84.53               71.13
c5             78.00          70.63               63.81
c6^1           100            26.83               22.97
               83.50          69.88               64.42
c7             81.00          73.92               68.76
c8             82.5           76.98               69.55
c9             80.5           72.05               67.98
c10            83.5           79.60               72.70
Mean           84.20 ± 4.37   74.26 ± 5.64        66.86 ± 4.78

Entries in the first column with a superscript indicate that in this run the program found a perfect solution (score 100%); the superscript gives the generation at which the algorithm discovered the ideal solution. However, these solutions decreased their performance when tested, so we performed another evolution with the same set of images; the second line of such rows shows the result of this evolution

               CNN from scratch                            Transfer learning—AlexNet
Training set   Fitness   Complete   vs. Background         Fitness   Complete   vs. Background
c1             66.50     72.42      58.54                  99.50     99.55      81.27
c2             70.00     65.02      56.81                  99.50     99.40      81.12
c3             73.50     76.76      65.77                  99.00     98.95      80.57
c4             73.00     79.07      65.46                  100       99.63      81.35
c5             55.50     64.50      55.78                  99.00     99.78      81.51
c6             56.00     77.65      76.08                  100       98.88      80.49
c7             73.00     71.97      58.77                  100       99.63      81.27
c8             67.00     74.29      65.93                  99.50     99.78      81.59
c9             74.50     74.44      60.11                  96.50     98.50      81.67
c10            66.50     77.28      72.07                  100       99.50      81.27
Mean           67.55 ± 6.91  73.34 ± 5.06  63.53 ± 6.74    99.3 ± 1.06  99.36 ± 0.43  81.21 ± 0.40

The "Complete" and "vs. Background" columns provide results of the best solutions against the complete database and the Caltech-101 background
chance; by putting them together, it is possible to make an ensemble learner that can perform arbitrarily well. These so-called weak classifiers cannot be taken as such in the case of genetic programming, since the programs learned by this evolutionary algorithm follow a process where the solutions are optimized, and their differences are quite significant from both genotype and phenotype viewpoints. Next, we present some preliminary results of several techniques called mixture-of-experts.
Following the results of the experiments with the databases Painting vs. Burrito, we applied majority voting; see Table 12.9. In this technique, all ensemble members
Table 12.11 Experiments with subsets of 50 images showing fitness scores for the classification problem of Engraving vs. Burrito

Brain programming
Training set   Fitness        Complete database   Engraving vs. Background
c1             96.00          82.89               50.00
c2             87.00          75.37               50.88
c3             85.00          69.17               54.87
c4             87.00          80.97               62.83
c5             83.00          77.29               61.36
c6             79.00          69.91               53.10
c7^0           100            65.04               47.35
               84.00          65.04               69.43
c8^16          100            44.25               44.25
               92.00          74.78               58.23
c9             95.00          80.24               51.33
c10            87.00          76.84               56.34
Mean           87.50 ± 5.38   75.25 ± 5.70        56.87 ± 6.21

Entries in the first column with a superscript indicate that in this run the program found a perfect solution (score 100%); the superscript gives the generation at which the algorithm discovered the ideal solution. However, these solutions decreased their performance when tested, so we performed another evolution with the same set of images; the second line of such rows shows the result of this evolution

               CNN from scratch                            Transfer learning—AlexNet
Training set   Fitness   Complete   vs. Background         Fitness   Complete   vs. Background
c1             95.00     96.30      59.57                  100       99.71      61.54
c2             85.00     95.77      59.25                  99.00     99.71      61.87
c3             90.00     92.06      56.96                  99.00     99.56      61.54
c4             91.00     93.65      57.94                  100       99.71      61.70
c5             98.00     95.50      59.08                  100       99.85      61.87
c6             97.00     90.48      55.97                  100       99.85      61.87
c7             85.00     87.04      53.85                  99.00     99.85      61.70
c8             91.00     91.01      56.30                  99.00     99.41      61.87
c9             76.50     91.80      56.79                  100       99.41      61.37
c10            95.00     87.04      83.85                  100       99.56      61.87
Mean           90.30 ± 6.75  92.06 ± 3.34  56.96 ± 2.06    99.60 ± 0.52  99.66 ± 0.17  61.72 ± 0.18

The "Complete" and "vs. Background" columns provide results of the best solutions against the complete database and the Caltech-101 background
cast a vote about whether or not the image belongs to a class. We take the decision reached by at least (n/2) + 1 programs. As a result of applying this method, we manage to increase the performance of the Painting class to 98.41% on the complete database.
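Majority voting reduces to a few lines; in this sketch, each classifier is assumed to be a callable returning True when it votes for the class.

```python
def majority_vote(classifiers, image):
    """Assign the image to the class when at least (n/2) + 1 of the
    n evolved programs vote for it."""
    votes = sum(1 for classify in classifiers if classify(image))
    return votes >= len(classifiers) // 2 + 1
```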
Table 12.12 Experiments with subsets of 100 images showing fitness scores for the classification problem of Sculpture vs. Burrito

Brain programming
Training set   Fitness        Complete database   Sculpture vs. Background
c1             80.50          71.66               70.12
c2^0           100            35.87               35.87
               83.00          77.48               67.85
c3             81.50          80.74               72.17
c4             80.50          72.52               58.82
c5             90.50          92.47               72.77
c6^1           100            38.61               38.61
               89.00          80.91               77.84
c7             93.00          84.76               68.24
c8             81.00          66.18               58.39
c9             82.00          75.68               66.78
c10            90.50          86.22               68.95
Mean           85.15 ± 4.97   78.86 ± 7.79        68.19 ± 5.96

Entries in the first column with a superscript indicate that in this run the program found a perfect solution (score 100%); the superscript gives the generation at which the algorithm discovered the ideal solution. However, these solutions decreased their performance when tested, so we performed another evolution with the same set of images; the second line of such rows shows the result of this evolution

               CNN from scratch                             Transfer learning—AlexNet
Training set   Fitness   Complete   vs. Background          Fitness   Complete   vs. Background
c1             73.00     74.65      58.86                   97.50     97.98      76.93
c2             64.5      67.05      52.86                   99.50     98.89      78.11
c3             72.50     77.65      61.22                   99.00     99.40      78.56
c4             73.50     78.57      61.94                   100       98.46      77.93
c5             79.50     79.26      62.49                   97.00     98.54      77.66
c6             72.00     81.68      64.40                   98.50     99.23      77.48
c7             94.50     90.55      71.39                   99.00     99.40      78.38
c8             55.50     89.17      70.30                   99.00     99.23      78.38
c9             81.00     87.21      68.76                   100       98.29      78.20
c10            78.00     73.16      57.67                   98.00     99.56      77.20
Mean           74.40 ± 10.29  79.90 ± 7.46  62.99 ± 5.88    98.75 ± 1.00  98.81 ± 0.50  77.88 ± 0.55

The "Complete" and "vs. Background" columns provide results of the best solutions against the complete database and the Caltech-101 background
Another mixture-of-experts technique, called weighted ensemble, was tested with the results of Iconography vs. Burrito shown in Table 12.10. In this method, each classifier has a confidence (percentage) assigned as a weight representing the importance of its decision. Given the top three classifiers, with fitness 84.53%, 79.60%, and 77.13%, we normalize their weights to 35.04%, 32.99%, and 31.99%, respectively. The probabilities of the ensemble for the class and non-class are computed as follows:
Table 12.13 Experiments with subsets of 50 images showing fitness scores for the classification problem of Drawings vs. Burrito

Brain programming
Training set   Fitness        Complete database   Drawings vs. Background
c1             86.00          78.08               70.22
c2^2           100            35.17               29.64
               88.00          78.66               72.33
c3             89.00          81.36               65.90
c4             89.00          72.80               63.61
c5^4           100            35.17               29.64
               92.00          72.92               57.56
c6             96.00          86.87               66.92
c7             95.00          82.42               67.18
c8             89.00          74.09               60.05
c9^0           100            39.27               22.90
               88.00          91.44               70.93
c10            91.00          83.12               67.68
Mean           90.30 ± 3.20   80.17 ± 6.12        66.23 ± 4.70

Entries in the first column with a superscript indicate that in this run the program found a perfect solution (score 100%); the superscript gives the generation at which the algorithm discovered the ideal solution. However, these solutions decreased their performance when tested, so we performed another evolution with the same set of images; the second line of such rows shows the result of this evolution

               CNN from scratch                            Transfer learning—AlexNet
Training set   Fitness   Complete   vs. Background         Fitness   Complete   vs. Background
c1             73.00     85.35      64.50                  97.00     98.48      69.85
c2             78.00     72.68      47.58                  99.00     98.71      69.47
c3             85.00     82.18      61.45                  98.00     97.42      68.58
c4             87.00     82.53      58.27                  99.00     98.59      69.85
c5             75.00     79.84      54.58                  95.00     97.42      70.10
c6             84.00     86.99      60.56                  99.00     99.18      69.97
c7             66.00     77.84      62.34                  100       98.48      69.21
c8             79.00     82.06      63.36                  100       98.94      69.97
c9             83.00     86.05      64.89                  100       98.71      69.85
c10            81.00     85.46      59.03                  98.00     98.71      69.34
Mean           79.10 ± 6.38  82.09 ± 4.39  59.66 ± 5.27    98.50 ± 1.58  98.46 ± 0.59  69.62 ± 0.42

The "Complete" and "vs. Background" columns provide results of the best solutions against the complete database and the Caltech-101 background
C1 = (pn|c1) · weight1 + (pn|c1) · weight2 + (pn|c1) · weight3,    (12.1)
C2 = (pn|c2) · weight1 + (pn|c2) · weight2 + (pn|c2) · weight3,    (12.2)
where C1 is the probability that the image belongs to the class, and C2 that it belongs to the non-class. The value of weightk is the weight obtained for classifier k at the moment of computing its fitness, (pn|c1) is that classifier's trust that image n belongs to class 1, and (pn|c2) is its trust that image n belongs to class 2. With the above values, we can compute the probabilities of the ensemble for the first image:

C1 = (0.7563 · 0.3504) + (0.9887 · 0.3299) + (0.7893 · 0.3199) = 0.7954;
C2 = (0.2437 · 0.3504) + (0.0113 · 0.3299) + (0.2987 · 0.3199) = 0.1748;

If we apply this reasoning, we obtain a general classifier performance of 84.53%. Of course, we give proportional weight to the top three classifiers, but if we change the weights of the ensemble to 40%, 30%, and 30%, the performance of the ensemble scores 85.35% for the complete database.
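The weighted ensemble of Eqs. (12.1) and (12.2) can be sketched as follows. The probability pairs and the weight normalization mirror the example above; the function itself is an assumed illustration rather than the authors' implementation.

```python
def weighted_ensemble(class_probs, fitness_weights):
    """Eqs. (12.1)-(12.2): combine each classifier's trust (p|c1),
    (p|c2) using weights normalized from the classifiers' fitness."""
    total = sum(fitness_weights)
    weights = [w / total for w in fitness_weights]
    c1 = sum(p1 * w for (p1, _p2), w in zip(class_probs, weights))
    c2 = sum(p2 * w for (_p1, p2), w in zip(class_probs, weights))
    return c1, c2

# Example with the three classifiers above (fitness as raw weight):
# weighted_ensemble([(0.7563, 0.2437), (0.9887, 0.0113), (0.7893, 0.2987)],
#                   [84.53, 79.60, 77.13])
```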
12.6 Conclusions

This paper presents a summary of the main works developed within the EvoVisión laboratory on a new strategy called brain programming. It also offers the first results of applying the methodology to the problem of classification of digitized art. We report encouraging results and a first proposal to improve the algorithm. The goal of this research is to outline a methodology that can challenge deep learning in current visual perception tasks. Therefore, it is not feasible to leave to genetic programming alone the charge of discovering the whole computer program. Moreover, neuroscientific models are the base on which brain programming creates visual processing programs. These new ideas represent a source of inspiration for a new kind of evolutionary algorithm that can challenge the state-of-the-art in artificial vision and pattern recognition.

Acknowledgements This research was partially funded by CICESE through the project 634-128, "Programación Cerebral Aplicada al Estudio del Pensamiento y la Visión". The second author graciously acknowledges the scholarship paid by the National Council for Science and Technology of Mexico (CONACyT) under grant 25267-340078.
References

1. Agarwal, S., Karnick, H., Pant, N., Patel, U.: Genre and style based painting classification. In: 2015 IEEE Winter Conference on Applications of Computer Vision, pp. 588–594. IEEE (2015)
2. Arora, R.S., Elgammal, A.: Towards automated classification of fine-art painting style: A comparative study. In: Proceedings of the 21st International Conference on Pattern Recognition (ICPR-2012), pp. 3541–3544. IEEE (2012)
3. Art Images: Drawing/Engraving/Iconography/Painting/Sculpture. https://www.kaggle.com/thedownhill/art-images-drawings-painting-sculpture-engraving
4. Bar, Y., Levy, N., Wolf, L.: Classification of artistic styles using binarized features derived from a deep neural network. In: European Conference on Computer Vision, pp. 71–84. Springer, Cham (2014)
5. Clemente, E., Olague, G., Dozal, L., Mancilla, M.: Object Recognition with an Optimized Ventral Stream Model using Genetic Programming. European Conference on the Applications of Evolutionary Computation, pp. 315–325. EvoApplications (2012)
6. Clemente, E., Chavez de la O, F., Fernández, F., Olague, G.: Self-adjusting Focus of Attention in Combination with a Genetic Fuzzy System for Improving a Laser Environment Control Device System. Applied Soft Computing 32, 250–265 (2015)
7. Condorovici, R.G., Florea, C., Vrânceanu, R., Vertan, C.: Perceptually-inspired artistic genre identification system in digitized painting collections. In: Scandinavian Conference on Image Analysis, pp. 687–696. Springer (2013)
8. Dozal, L., Olague, G., Clemente, E., Sánchez, M.: Evolving Visual Attention Programs through EVO Features. European Conference on the Applications of Evolutionary Computation, pp. 326–335. EvoApplications (2012)
9. Dozal, L., Olague, G., Clemente, E., Hernández, D.E.: Brain Programming for the Evolution of an Artificial Dorsal Stream. Cognitive Computation 6(3), 528–557 (2014)
10. Florea, C., Toca, C., Gieseke, F.: Artistic movement recognition by boosted fusion of color structure and topographic description. In: 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 569–577. IEEE (2017)
11. Fukushima, K.: Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position. Biological Cybernetics 36(4), 193–202 (1980)
12. Hernández, D.E., Clemente, E., Olague, G., Briseño, J.L.: Evolutionary Multi-objective Visual Cortex for Object Classification in Natural Images. Journal of Computational Science 17(part 1), 216–233 (2016)
13. Hernández, D.E., Olague, G., Hernández, B., Clemente, E.: CUDA-based Parallelization of a Bio-inspired Model for Fast Object Classification. Neural Computing and Applications 30(10), 3007–3018 (2018)
14. Holland, J.H.: Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. MIT Press, 211 pages, first appeared in 1975 (1992)
15. Kowaliw, T., McCormack, J., Dorin, A.: Evolutionary automated recognition and characterization of an individual's artistic style. In: IEEE Congress on Evolutionary Computation, pp. 1–8. IEEE (2010)
16. Koza, J.R.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press (1992)
17. Krizhevsky, A.: Learning Multiple Layers of Features from Tiny Images. April 2009
18. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems, NIPS (2012)
19. Le, Q.V., Ranzato, M.A., Monga, R., Devin, M., Chen, K., Corrado, G.S., Dean, J., Ng, A.Y.: Building High-level Features using Large Scale Unsupervised Learning. International Conference on Machine Learning, pp. 507–514. ICML (2012)
20. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based Learning Applied to Document Recognition. Proceedings of the IEEE 86(11), 2278–2324 (1998)
21. Lyu, S., Rockmore, D., Farid, H.: A digital technique for art authentication. Proceedings of the National Academy of Sciences 101(49), 17006–17010 (2004)
22. Olague, G.: Evolutionary Computer Vision: The First Footprints. Springer, Heidelberg (2016)
23. Olague, G., Hernández, D.E., Clemente, E., Chan-Ley, M.: Evolving Head Tracking Routines with Brain Programming. IEEE Access 6, 26254–26270 (2018)
12 Hands-on Artificial Evolution Through Brain Programming
253
24. Olague, G., Hernández, D.E., Llamas, P., Clemente, E., Briseño, J.L.: Brain Programming as a New Strategy to Create Visual Routines for Object Tracking. Multimedia Tools and Applications 78(5), 5881–5918, (2018) 25. Olague, G., Clemente, E., Hernández, D.E., Barrera, A., Chan-Ley, M., Bakshi, S.: Artificial Visual Cortex and Random Search for Object Categorization. IEEE Access (2019) 26. Olague, G.: Automated Photogrammetric Network Design using Genetic Algorithms. Photogrammetric Engineering & Remote Sensing 68(5), 423–431 (2002) Paper awarded the “2003 First Honorable Mention to the Talbert Abrams Award”, by ASPRS. 27. Silver, D., Huang, A., Maddison, C.J., Guez, A., Sifre, L., Van den Driessche, G., Schrittwieser J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman. S., Grewe, D., Nham, J., Kalchbrenner, N., Sustkever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., Hassabis, D.: Mastering the Game of Go with Deep Neural Networks and Tree Search. Nature 529, 484–489 (2016) 28. Zujovic, J.,Gandy, L., Friedman, S., Pardo, B., Pappas, T.: Classifying Paintings by Artistic Genre: An Analysis of Features & Classifiers. In: 2009 IEEE International Workshop on Multimedia Signal Processing, pp. 1–5. IEEE (2009)
Chapter 13
Comparison of Linear Genome Representations for Software Synthesis Edward Pantridge, Thomas Helmuth, and Lee Spector
E. Pantridge, Swoop, Inc., Cambridge, MA, USA, e-mail: [email protected]
T. Helmuth, Hamilton College, Clinton, NY, USA, e-mail: [email protected]
L. Spector, Department of Computer Science, Amherst College; School of Cognitive Science, Hampshire College; College of Information and Computer Sciences, University of Massachusetts, Amherst, MA, USA, e-mail: [email protected]

13.1 Introduction Inductive program synthesis is the field of producing executable programs from a set of input-output examples [7, 14, 16]. General software synthesis refers to the subfield of inductive program synthesis in which the programs produced are expected to be capable of manipulating a variety of data types, control structures, and data structures. The field of genetic programming has produced some of the most capable general software synthesis methods, such as PushGP [5], Grammar Guided Genetic Programming [1], and SignalGP [10]. The experiments and discussion in this paper focus on PushGP. PushGP synthesizes programs in the Push programming language. Push is a stack-based programming language designed for genetic programming, in which arguments for instructions are taken from typed stacks and return values are placed on the stacks [19]. A Push program is a sequence that may contain instructions, literals, and code blocks. A code block is also a sequence that may contain
instructions, literals, and code blocks, allowing for hierarchically nested program structures. When executed by a Push interpreter, the program itself is pushed onto the exec stack, a special stack that keeps track of the executing program. During execution, items from the exec stack are consumed from the program and evaluated sequentially. Literal values are placed onto stacks corresponding to their data types. Instructions are evaluated as functions that pop their arguments from the stacks and push their return values back onto the stacks. When code blocks are interpreted, their contents are unpacked and inserted at the start of the exec stack [19]. Push implementations typically provide instructions and stacks for common data types such as integers, floating point numbers, Boolean values, and strings. It is also possible for users to provide stacks and instructions for any other data types they choose [12, 19]. Since the executing program itself is stored on a stack, instructions can manipulate the executing code itself as it runs; this functionality is used to implement both standard and exotic control flow structures using the exec stack. Like all genetic programming methods, PushGP manipulates "genome" data structures that correspond to executable programs. In its initial design, the PushGP genome structure was also the program structure of nested code blocks. With this representation, it was straightforward to implement tree-based genetic operators like those used in tree-based genetic programming [8], but less straightforward to implement operators that act uniformly on program elements at all levels of nesting [17]. More recent implementations of PushGP use the Plush linear genome representation, which can be translated into the hierarchical code block program structure before execution of the program [6]. This layer of indirection provides flexibility with respect to which genetic operators can be applied to genomes, specifically with uniform mutation and crossover operators, which in turn has produced better search performance [3, 6]. This work compares two linear genome representations, each of which takes a different approach to the problem of specifying a nested structure in a flat, linear form. While both Plush and the new genome structure, Plushy, can represent the same set of Push programs, there are trade-offs between them that may affect evolvability and the dynamics of program size and structure. We begin by detailing the two genome representations, Plush and Plushy. We then present experimental results that allow us to compare their search performance and to examine the effects of each on program structure over evolutionary time. We discuss factors relevant to choosing among the representations in practice, and conclude by recommending the adoption of Plushy genomes in future PushGP work.
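To make the execution model above concrete, the following is a minimal sketch of the exec-stack loop in Python. It is illustrative only: the function and instruction names are hypothetical, it is not the API of Clojush, PyshGP, or any other Push implementation, and only integer and Boolean stacks are handled.

```python
# Minimal sketch of the Push execution loop described above (hypothetical
# names; not the API of any existing Push implementation).
def run_push(program, instructions):
    stacks = {"exec": [], "int": [], "bool": []}
    stacks["exec"].append(program)        # the program itself starts on exec
    while stacks["exec"]:
        item = stacks["exec"].pop()
        if isinstance(item, list):        # code block: unpack onto the exec stack
            stacks["exec"].extend(reversed(item))
        elif isinstance(item, bool):      # checked before int: bool is an int subtype
            stacks["bool"].append(item)
        elif isinstance(item, int):       # literals go to their typed stacks
            stacks["int"].append(item)
        else:                             # instruction: pop args, push results
            instructions[item](stacks)
    return stacks

# Example instruction: int_add pops two integers and pushes their sum.
def int_add(stacks):
    if len(stacks["int"]) >= 2:
        b, a = stacks["int"].pop(), stacks["int"].pop()
        stacks["int"].append(a + b)
```

For example, run_push([1, 2, "int_add"], {"int_add": int_add}) leaves 3 on the integer stack. Because the program being executed sits on the exec stack like any other data, instructions defined this way can rewrite the remaining program, which is how Push implements its control flow structures.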
13.2 Linear Genomes: Plush vs. Plushy Plush [6] and Plushy [13] are two different linear data structures that have been used to represent genomes in PushGP systems. Both genome representations encode the
(5 x int_gt exec_if (x) (x 2 int_mult)) Fig. 13.1 A simple Push program that takes an integer input x and returns x if x > 5, or 2x otherwise. Note that the program contains two code blocks ((x) and (x 2 int_mult)), exemplifying the nested, non-linear structure of Push programs
nested structure of Push programs, and can be translated into executable programs. The process of translating a genome into a program will determine which individual genes should be placed within nested code blocks to produce the structured Push program. Every instruction gene has a defined number of code blocks expected to follow the instruction, which is the same number in both representations. For example, the int_add instruction sums the top two integers on the integer stack, and thus opens no code blocks. The exec_if instruction takes two code blocks from the exec stack as arguments: one for holding the body of the “then” clause and one for holding the body of the “else” clause. If the top value on the Boolean stack is true, then the code block for the “else” clause is ignored. If the top value on the Boolean stack is false, then the code block for the “then” clause is ignored. Figure 13.1 shows a simple Push program that utilizes this exec_if instruction. Just as the exec_if is defined to require two code blocks to follow it, other instructions also require specific numbers of code blocks as arguments. Note that this is a feature of the genome specifications we are discussing, not a requirement of the underlying Push programs. In fact, when using Push programs as genomes, there was nothing guaranteeing the presence of code blocks after instructions that made use of them. Often exec_if and similar instructions would just be followed by single instructions instead of code blocks; this can sometimes be useful, but it is often advantageous to have larger code blocks in these positions. When designing Plush, and afterward when designing Plushy, we chose to force instructions that can use code blocks to be followed by them, to increase the use of code blocks in evolved programs [6]. Given that instruction definitions are used to determine where code blocks are opened, it is left up to the genome representation to determine how to store the information denoting where each code block is closed. The Plush genome representation is a flat sequence of instructions and literals. Each of these tokens is considered a gene of the genome. Each gene also has epigenetic markers that store information that is used when translating the genome into a program—these are “epigenetic” in the sense that they affect translation into Push programs, but do not appear in programs themselves. The distinction between genetic and epigenetic information raises the possibility that the two kinds of information could be varied in different ways or at different times during evolution. While in most prior work with Plush, including the experiments described in this chapter, epigenetic information was only varied during the production of offspring from parents (like genetic information), previous work has used hill-climbing search over variation of epigenetic markers to “learn” during an individual’s lifetime [9].
The two kinds of epigenetic markers that have been used in PushGP systems are "close" and "silent" markers, though others could be created [6]. The "close" marker is an integer denoting how many code blocks should be closed directly following that particular gene. This allows the genome to indicate where code blocks are closed using epigenetic markers attached to specific genes. If there are no code blocks open at that location, the close marker value is ignored; if the number of open code blocks is less than the "close" value, then all open blocks are closed. If some code blocks are left open after the entire program has been translated, it is assumed the code blocks are closed at the end of the program. The "silent" marker is a Boolean flag denoting whether the gene is silenced. If true, the gene is skipped during genome translation. Using these markers, a genome can hold genetic material that does not influence the resulting program and potentially pass it on to its children. Figure 13.2 shows a Plush genome that produced the program from Fig. 13.1 when translated. Due to the separation of genes and their epigenetic markers under this representation, the Plush data structure can be thought of as a tabular structure, since every gene has a value for every epigenetic marker. The Plushy genome representation is also a sequence of instruction and literal genes; however, there are additional genes used solely for translation. Plushy genomes do not use epigenetic markers, but are instead simply flat sequences of genes. The two additional kinds of genes introduced thus far in Plushy genomes are CLOSE genes and SKIP genes [13]. The CLOSE gene denotes the end of a code block. If there are no code blocks open at that location, the CLOSE is a no-op. If some code blocks are left open after the entire program has been translated, translation continues as if additional CLOSE genes are present until all code blocks are closed. The SKIP gene causes genome translation to ignore the subsequent gene. Much like the silent epigenetic markers used in Plush genomes, these SKIP genes can be used to suppress genetic material such that it does not appear in the resulting program, yet potentially can be passed down to children. SKIP genes also cause a following SKIP or CLOSE gene to be ignored.
Gene:    5      x      int_gt  exec_if  x      int_sqr  x      2      int_mult
Closes:  0      0      2       0        1      0        0      0      0
Silent:  false  false  false   false    false  true     false  false  false

Fig. 13.2 One potential Plush genome that produces the program from Fig. 13.1 after translation. The definition of the exec_if instruction specifies the opening of two code blocks; one for the "then" clause and one for the "else" clause. The "close" epigenetic marker on the x gene denotes the end of the "then" clause for the exec_if. There is no gene with a non-zero close marker to denote the end of the "else" clause, and thus it is assumed to be at the end of the sequence. Notice that the int_gt instruction closes 2 code blocks despite no code blocks being opened by the previous genes, and thus these close markers are ignored by translation. The int_sqr instruction is not translated into the program because it has a true silent marker
5 x int_gt CLOSE exec_if x SKIP int_sqrt CLOSE x 2 int_mult Fig. 13.3 One potential Plushy genome that produces the program from Fig. 13.1 after translation. The definition of the exec_if instruction specifies the opening of two code blocks; one for the “then” clause and one for the “else” clause. The end of the “then” clause is denoted by the final CLOSE gene. There is no CLOSE gene to denote the end of the “else” clause, and thus it is assumed to be at the end of the sequence. There is no gene that opens a code block before the first CLOSE and thus it has no effect on translation. The SKIP gene specifies the following gene should not be included in the translation, which explains why int_sqrt does not appear in the translated program
Figure 13.3 shows a Plushy genome that produces the program from Fig. 13.1 when translated.
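The translation rules for Plushy described above fit in a short routine. The following Python is a sketch under the stated rules (the names plushy_to_program and opens are hypothetical, not taken from any Push implementation); it handles CLOSE as a no-op when no block is open, opens sibling blocks for instructions that require several, skips the gene following a SKIP, and closes any remaining blocks implicitly at the end.

```python
def plushy_to_program(genome, opens):
    """Sketch of Plushy-to-program translation. `opens[gene]` is the number
    of code blocks the gene opens (0 for literals and most instructions)."""
    program = []
    # Each stack entry is [block, siblings_left, parent]: the open block,
    # how many sibling blocks remain to open once it closes, and the
    # container that the blocks belong to.
    stack = []
    genes = list(genome)
    while genes:
        gene = genes.pop(0)
        here = stack[-1][0] if stack else program
        if gene == "SKIP":
            if genes:
                genes.pop(0)              # ignore the subsequent gene
        elif gene == "CLOSE":
            if stack:                     # no-op when no code block is open
                _, siblings, parent = stack.pop()
                if siblings > 0:          # open the next sibling block
                    block = []
                    parent.append(block)
                    stack.append([block, siblings - 1, parent])
        else:
            here.append(gene)
            if opens.get(gene, 0) > 0:    # open the first of the required blocks
                block = []
                here.append(block)
                stack.append([block, opens[gene] - 1, here])
    return program                        # open blocks close implicitly here

genome = ["5", "x", "int_gt", "CLOSE", "exec_if", "x", "SKIP",
          "int_sqrt", "CLOSE", "x", "2", "int_mult"]
print(plushy_to_program(genome, {"exec_if": 2}))
# -> ['5', 'x', 'int_gt', 'exec_if', ['x'], ['x', '2', 'int_mult']]
```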
13.2.1 Random Genome Generation Random genomes are used to seed the initial population of genetic programming runs. Each genome type is generated differently to ensure that the logic, structure, and size of the programs in the initial population are diverse. The instructions and literals in random Plush genomes are typically chosen with a uniform distribution. The close epigenetic markers in Plush are initialized using a probability distribution: sampling the distribution gives the value of the "close" marker. In previous PushGP research, a binomial distribution with n = 4 and p = 1/16 has been used. This yields the following probabilities for assigning values for the "close" marker:

Closes  Probability
0       0.772
1       0.206
2       0.021
3       0.001
We do not use "silent" markers in this work. When generating Plushy genomes, a set of Push instructions and literals provided by the user is available. Plushy simply adds additional elements to this set for the CLOSE and SKIP genes. As described above, the definition of each instruction in the set denotes how many code blocks are opened by the instruction, based on the number of arguments it takes from the exec stack. If the set of genes were randomly sampled with uniform probability, the CLOSE gene would occur in genomes at a rate of 1/|S|, where |S| is the number of available genes. This likely provides too few CLOSE genes compared to the number of code blocks opened. Instead, to generate Plushy genomes with a larger proportion of CLOSE genes, we set the probability of sampling a CLOSE gene proportionally to
the sum of all "open" counts across all instructions. For example, if there are 10 instructions that each open 1 code block, the CLOSE gene is given 10 times the probability of being added to the Plushy genome compared to any other instruction. This results in an average of one CLOSE for every code block opened.
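A sketch of this generation scheme is shown below, again with hypothetical names. It weights the CLOSE gene by the total number of code blocks opened across the gene pool, which yields on average one CLOSE per opened block as described above.

```python
import random

def random_plushy(genes, opens, length):
    """Sketch of random Plushy genome generation with the proportional
    CLOSE rate: `opens[g]` is the number of code blocks gene g opens."""
    pool = list(genes) + ["CLOSE"]
    close_weight = sum(opens.get(g, 0) for g in genes)
    weights = [1] * len(genes) + [max(1, close_weight)]  # never weight zero
    return random.choices(pool, weights=weights, k=length)
```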
13.2.2 Genetic Operators When using Plush genomes, the genes and their epigenetic markers are two separate values corresponding to the same location in the genome. Genetic operators can affect either the genes, their epigenetic markers, or both. Uniform crossover and mutation operators that keep genes with their epigenetic markers have typically been used [3, 17]. Additionally, specialized genetic operators that do not affect gene values can be applied to epigenetic markers. For example, a uniform close mutation operator changes the "close" epigenetic markers by incrementing or decrementing them by one [6]. This close-marker mutation operator is applied to each gene in the genome with some configurable probability. Plushy genomes do not contain epigenetic markers. Genetic operators that manipulate the genes in the genome are modifying both logic and structure. The uniform genetic operators that are applied to Plush can be applied to Plushy, with the exception of the epigenetic-marker operations. Genetic operators commonly used in the field of genetic algorithms can also be applied to Plushy genomes. Genetic operators that add random genes to a Plushy genome, such as mutation, utilize the same increased probability of adding a CLOSE gene as seen in random genome generation, discussed in Sect. 13.2.1.
13.3 Impact on Search Performance A genome is the data structure manipulated by genetic operators throughout evolution. Different genome structures yield different landscapes to search over. Some landscapes may be more difficult to search through, and thus search performance could be degraded when using certain genome representations.
13.3.1 Benchmarks To evaluate the impact of Plush vs. Plushy genomes on search performance, we tested each on 25 problems from the general software synthesis benchmark suite [5]. These benchmark problems come from coding assignments traditionally given to human programmers in introductory computer science classes. For our detailed
analysis of the effects of each genome on evolved program structure, we selected a representative subset of 10 problems, which are:

• Compare String Lengths. Given three strings (s1, s2, and s3), return true if length(s1) < length(s2) < length(s3), and false otherwise.
• Double Letters. Given a string, print the string, doubling every letter character, and tripling every exclamation point. All other non-alphabetic and non-exclamation characters should be printed a single time each.
• Last Index of Zero. Given a vector of integers of length ≤ 50, each integer in the range [−50, 50], at least one of which is 0, return the index of the last occurrence of 0 in the vector.
• Mirror Image. Given two lists of integers of the same length ≤ 50, return true if one list is the reverse of the other, and false otherwise.
• Negative to Zero. Given a vector of integers in [−1000, 1000] with length ≤ 50, return the vector where all negative integers have been replaced by 0.
• Replace Space With Newline. Given a string input, print the string, replacing spaces with newlines. Also, the program should return the integer count of the non-whitespace characters.
• String Lengths Backwards. Given a vector of strings with length ≤ 50, where each string has length ≤ 50, print the length of each string in the vector starting with the last and ending with the first.
• Sum of Squares. Given an integer 0 < n ≤ 100, return the sum of squaring each positive integer between 1 and n inclusive.
• Syllables. Given a string (max length 20, containing symbols, spaces, digits, and lowercase letters), count the number of occurrences of vowels in the string and print that number as X in "The number of syllables is X."
• Vector Average. Given a vector of floats with length in [1, 50], with each float in [−1000, 1000], return the average of those floats. Results are rounded to 4 decimal places.

For each benchmark problem, 100 runs were performed with each genome type. All runs were performed with the same configuration of the PushGP system, with the exception of the genome type used. The hyperparameter values used to configure the PushGP system are presented in Fig. 13.4. We use size-neutral
Parameter                  Value
Runs per setting           100
Population size            1000
Max number of generations  300
Genetic operator           UMAD, used to make all children
UMAD addition rate         0.09
Max genome size            varies per problem, but same for Plush and Plushy

Fig. 13.4 The configuration of the Clojush PushGP system for the experimental runs performed for this research. We leave the tuning of these configurations for each genome type to future research
uniform mutation with addition and deletion (UMAD) to make all children for both Plush and Plushy [3]. UMAD, with an addition rate of 0.09, adds a new random instruction before or after each instruction with 0.09 probability; it then deletes each instruction in the program with probability $\frac{1}{1/0.09 + 1} \approx 0.08257$ to remain size-neutral on average. Note that we do not use any crossover here; while crossover may play some role in deciding which representation to use, UMAD by itself has outperformed any crossover technique we have tried, so we used it here. The initial and maximum genome sizes vary per problem, and follow the recommendations from the benchmark suite's technical report [4].
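The UMAD operator itself is simple to express over a flat Plushy genome. The sketch below is an illustrative implementation under the rates just described (the names are hypothetical); random_gene() is assumed to sample from the gene pool with CLOSE weighted as in random genome generation.

```python
import random

def umad(genome, random_gene, add_rate=0.09):
    """Sketch of size-neutral uniform mutation by addition and deletion."""
    delete_rate = 1 / (1 / add_rate + 1)      # ~0.08257 when add_rate = 0.09
    grown = []
    for gene in genome:
        if random.random() < add_rate:        # add a new gene before or after
            pair = [random_gene(), gene] if random.random() < 0.5 else [gene, random_gene()]
            grown.extend(pair)
        else:
            grown.append(gene)
    # Deletion applies to every gene, including the additions, which keeps
    # the expected genome size unchanged: (1 + a)(1 - a/(1 + a)) = 1.
    return [g for g in grown if random.random() >= delete_rate]
```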
13.3.2 Benchmark Results Figure 13.5 shows the number of solutions found by the genetic programming runs using Plush or Plushy genomes for all problems. Only one of the differences in success rate is significant using a chi-squared test at α = 0.05: the Syllables problem. All other problems show no significant difference in the number of successes. This shows that, at least for these program synthesis problems and hyperparameter settings, the choice between Plush and Plushy genomes has little to no effect on performance. Thus the choice to use Plush or Plushy should be based not on their effects on performance, but instead on other considerations such as flexibility with respect to genetic operators and the required amount of hyperparameter tuning.
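For reference, the significance test used above is easy to reproduce from per-problem success counts. The sketch below applies a chi-squared test to two hypothetical counts out of 100 runs each; the numbers are made up for illustration and are not taken from the experiments.

```python
from scipy.stats import chi2_contingency

plush_successes, plushy_successes = 18, 30    # hypothetical counts
table = [[plush_successes, 100 - plush_successes],
         [plushy_successes, 100 - plushy_successes]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(p_value < 0.05)                         # significant at alpha = 0.05?
```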
13.4 Genome and Program Structure Push programs have meaningful structure organized by code blocks, which affect the semantics of programs, particularly with respect to control flow. When evaluating genome representations, we therefore consider program structure in addition to search performance.
13.4.1 Sizes Figure 13.6 shows a comparison of genome lengths produced during PushGP runs for each genome representation for 10 representative problems. Plushy genomes tend to be slightly longer than Plush genomes. This is to be expected because Plushy genomes require explicit CLOSE genes, increasing the size of genomes, whereas Plush stores "close" markers as epigenetic markers that do not contribute to genome size.
[Figure 13.5: bar charts of solution rate (%) for Plush vs. Plushy genomes, one panel per benchmark problem: Checksum, Count Odds, CSL, Digits, Double Letters, Even Squares, For Loop Index, Grade, LIoZ, Median, Mirror Image, Negative To Zero, Number IO, Pig Latin, RSWN, Scrabble Score, SLB, Small Or Large, Smallest, Sum of Squares, Super Anagrams, Syllables, Vector Average, Vectors Summed, X-Word Lines.]
Fig. 13.5 The rate of solutions found using Plush genomes versus Plushy genomes. A genome is a solution if it receives an error of zero on all cases in a previously unseen test set after being simplified by an automatic simplification algorithm [2]. Each genome representation was evaluated across 100 runs for each problem. The difference in solution rates is only significant for one problem, "Syllables", shown with a black outline. On the "Syllables" problem the Plushy genome produces more solutions
[Figure 13.6: line plots of average genome length vs. generation (0–300) for Plush and Plushy, one panel per representative problem.]
Fig. 13.6 Average Plush and Plushy genome lengths for all benchmark problems, averaged across all benchmark runs
The examples from previous sections also illustrate the difference in size. The Plush genome in Fig. 13.2 contains 9 genes. The Plushy genome in Fig. 13.3 contains 12 genes. Both genomes translate into the program in Fig. 13.1, and both genomes only silence/skip one gene. Figure 13.7 shows a comparison of program sizes produced during PushGP runs for both genome representations. Despite producing longer genomes, the Plushy data structure tends to translate into slightly smaller programs. This further confirms that the difference in lengths was due to CLOSE genes, which affect genome lengths but not program sizes. It seems as though the Replace Space With Newline problem is an outlier for both genome and program sizes. Genome and program sizes tend to be similar for Plush and Plushy experiments in the early generations. In later generations, the Plushy genomes and programs far exceed the size of their Plush counterparts. Since a large number of runs had finished by that point (by finding a solution), this can likely be attributed to a small number of outliers drastically changing the average of the remaining runs. Push programs are nested structures of code blocks. It is possible to measure the maximum depth of a program. We refer to the maximum depth of a program as the program's depth. Figure 13.8 shows that Plush genomes and Plushy genomes tend to produce similar program depths. The Sum of Squares problem is a drastic outlier
[Figure 13.7: line plots of average program size vs. generation for Plush and Plushy, one panel per representative problem.]
Fig. 13.7 Average Push program sizes produced using Plush and Plushy genomes for all benchmark problems, averaged across all benchmark runs
here, with programs translated from Plush genomes tending to have roughly twice the depth of the programs translated from Plushy genomes. It is important to note that these genome representations do not have direct effects on program depth, as only the instructions they contain dictate where and how many code blocks are opened. Thus any differences here come about through evolutionary pressures. So, it may be the case that for the Sum of Squares problem the way in which Plush genomes close code blocks made it evolutionarily advantageous to have more nested instructions than with Plushy.
13.4.2 Presence of "Closing" Genes As PushGP searches for solution programs, it manipulates genomes such that the logic and structure of the resulting programs are varied from generation to generation. Figure 13.9 shows the prevalence of "closing" genes in both kinds of genomes as evolution progresses. For Plush genomes, this is the percentage of genes in the genome with a non-zero close epigenetic marker. For Plushy genomes, this is the percentage of CLOSE genes in the genome.
[Figure 13.8: line plots of average program depth vs. generation for Plush and Plushy, one panel per representative problem.]
Fig. 13.8 Average program depths produced using Plush and Plushy genomes for all benchmark problems, averaged across all benchmark runs
The levels of "closing" genes stay relatively stable for both Plush and Plushy throughout evolutionary time, often ending with approximately the same percentage of closing genes as in the initial generation. These flat trends indicate that the levels of closing genes are largely dictated by the percentage of close epigenetic markers/CLOSE genes present in random code created during initialization and mutation, and are not reflective of evolutionary pressures toward higher or lower levels. The percentage of close markers with Plush starts around the same level (around 0.25) for every problem, as would be expected with the hard-coded probabilities of close markers as described earlier. The percentage of CLOSE genes in random Plushy genomes depends on the instruction set, and will therefore be different for these different problems, which use differing instruction sets. This explains the high level of CLOSE genes for the Sum of Squares problem, which uses a higher percentage of exec stack instructions (those responsible for opening code blocks) compared to the other problems here. Despite the adaptive prevalence of CLOSE genes offered by Plushy, as discussed in Sect. 13.2.1, it is interesting to recall the lack of significantly different solution rates reported in Fig. 13.5. This suggests that the performance of evolution is not particularly sensitive to the prevalence of "closing" genes. As demonstrated in Sect. 13.2, when using either Plush or Plushy it is possible to have "closing" genes that have no impact on the structure of the resulting program
[Figure 13.9: line plots of the average percentage of "closing" genes vs. generation for Plush and Plushy, one panel per representative problem.]
Fig. 13.9 The percentage of “closing” genes observed when using Plush and Plushy genomes for all benchmark problems, averaged across all benchmark runs. For Plush genomes, this is the percentage of genes in the genome with a non-zero close epigenetic marker. For Plushy genomes, this is the percentage of CLOSE genes in the genome
because they occur at locations in the genome where no code blocks are open. In order to compare the number of close genes that have an impact on program structure, we must compare the number of code blocks found in programs translated from each genome representation. Figure 13.10 shows the average number of code blocks in translated programs divided by the program size. All problems show that Plush genomes tend to have a slightly higher concentration of code blocks in the translated programs. The range of differences between experiments using Plush genomes and experiments using Plushy genomes is very narrow, suggesting that the genome representation has very little bearing on the concentration of code blocks in a program. The small differences here likely reflect the fact that, at equal genome sizes, Plush genomes will contain more actual instructions than Plushy genomes, for which some genes are CLOSE genes, leading to slightly higher numbers of instructions that open code blocks in Plush genomes.
[Figure 13.10: line plots of the average number of code blocks divided by program size vs. generation for Plush and Plushy, one panel per representative problem.]
Fig. 13.10 The average number of code blocks divided by the average program size observed when using Plush and Plushy genomes for all benchmark problems, averaged across all benchmark runs. The plot shows a clear similarity between genome representations, especially considering the narrow range of the y-axis
13.5 Other Considerations Section 13.3 discussed how Plush and Plushy genomes have nearly identical search performance. The various measurements presented in Sect. 13.4 show that programs produced while using the different genome representations are usually similar. This may seem to indicate the choice between Plush and Plushy genomes is inconsequential, but in practice there are important differences regarding their effects on usability and ease of implementation.
13.5.1 Hyperparameter Fitting Most machine learning systems have a collection of hyperparameters that can be tuned to problem-specific values that improve performance. Typically hyperpa-
rameters for genetic programming systems include the population size, mutation rates, and parent selection methods. Grid search is a common method of tuning hyperparameters by exhaustively evaluating sets of values taken from a grid of hyperparameter values. As mentioned in Sect. 13.2.1, the Plush genome representation requires a probability distribution to generate epigenetic marker values for random code for initialization and mutation. Probability distributions are difficult hyperparameters to tune. All previous research using Plush genomes assumes a binomial distribution of initial values for epigenetic markers, although this has not been shown to be optimal by either theoretical analysis or empirical experimentation, and in fact has never been tuned. We believe the chosen values are relatively robust to moderate change. However, it is possible that better-tuned values may lead to better performance than has been seen previously. Even if the optimal distribution is a binomial distribution in all cases, there are two hyperparameters to tune (n and p) for initial close marker assignment alone. If the optimal type of probability distribution is problem specific, the number of hyperparameters is unknown. This further complicates the tuning of hyperparameters that is required when using the Plush genome representation. Typically, the computational cost of tuning all hyperparameters drastically increases as the number of hyperparameters increases. In contrast, when using Plushy genomes the choice of which instructions can appear in the instruction set determines both structure and logic. No additional hyperparameters are required specifically to initialize the CLOSE genes. Furthermore, when using Plushy genomes the proportional rate of CLOSE genes presented in Sect. 13.2.1 agrees with the intuition on how many CLOSE genes should appear in a random program and is suitable for most cases. Using this method of generating random genomes, there are no hyperparameters to tune when using Plushy genomes. Section 13.2.2 discussed the separate set of genetic operators that can be used to vary the epigenetic markers on genes in Plush genomes. These mutation operations often require their own hyperparameter tuning for values such as mutation rate. When using Plushy genomes, there is no need for genetic operators that vary epigenetic markers, and thus no additional tuning is required. It is possible that future research will produce genetic operators that specifically target the CLOSE and SKIP genes of Plushy genomes. These operators may expand the space of hyperparameters.
13.5.2 Applicable Search Methods The field of inductive program synthesis has a large variety of methods undergoing active research across many problem domains. There is no clear superior family of algorithms that dominates the field. It is in the best interest of the field to compare and evaluate as many systems as possible to gain a better understanding of their behaviors.
Nearly every program synthesis method has a different approach to representing programs. This heterogeneity makes comparisons of different search methods on the same problems difficult [11]. The simplicity of the Plushy genome facilitates such comparisons better than the Plush genome because any search method capable of making changes to a sequence of tokens can be used to search over the space of Plushy genomes. Some examples of algorithms that could be used to search for solution Plushy genomes are:

• Evolutionary algorithms such as genetic algorithms and evolution strategies.
• Traditional local search methods such as simulated annealing and hill climbing.
• "Sequence to Sequence" neural architectures that are commonly used to synthesize sentences of natural language.
• Brute force combinatorics.

In contrast, Plush genomes require that each step in the search account for both gene tokens and their epigenetic markers. It is not immediately clear how a given search procedure should coordinate searching through genome space and epigenetic marker space in tandem, illustrating the complexity added by the epigenetic markers compared to CLOSE genes.
13.5.3 Automatic Simplification Previous research on PushGP has detailed algorithms for automatically simplifying Push programs and Plush genomes [2, 15, 20]. This process uses hill climbing on program or genome size by randomly removing a small set of genes and testing that the program's outputs remain unchanged on the training data. If the outputs change, the removal is reverted, a new random set of genes is removed, and the program is tested again. This process typically reaches a local size optimum within a few thousand iterations [18]. Automatic simplification was originally intended to yield programs that are easier for humans to understand [15, 18, 20]; however, it has also been shown that applying automatic simplification after evolution often produces programs that generalize better to unseen data [2]. In this sense, automatic simplification can be thought of as a regularization step for evolutionary program synthesis methods. The solution rates reported in Fig. 13.5 are for simplified programs on a held-out test set (that is, a test set not used during evolution). In this case, we automatically simplified the Push programs, not the genomes, so there should be no difference in simplification for those results. However, previous work described alternative methods for simplifying Plush genomes directly before translation [2], and we could also imagine automatically simplifying Plushy genomes. When simplifying Plush genomes, we can randomly turn on a small number of "silent" epigenetic markers during a hill-climbing step, effectively removing the genes without losing their information. This allows for backtracking of the hill-climbing by unsilencing those genes at a later time, potentially allowing the process
to escape local optima. This leads to smaller Push programs than non-backtracking approaches, though it only produces negligible gains in generalization [2]. When applying the same automatic simplification process to Plushy genomes, it is possible for a set of CLOSE or SKIP genes to be removed without the modification of any genes that encode instructions. We leave it to future research to perform a rigorous study on the impact this has on generalization or interpretability.
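For concreteness, the basic (non-backtracking) simplification loop can be sketched as follows. The names are hypothetical; errors(genome) is assumed to translate the genome and return its error vector on the training cases.

```python
import random

def simplify(genome, errors, steps=5000, max_removed=3):
    """Sketch of the stochastic hill-climbing simplifier described above:
    remove a few random genes, keep the removal only if training behavior
    is unchanged."""
    baseline = errors(genome)
    for _ in range(steps):
        if not genome:
            break
        k = random.randint(1, min(max_removed, len(genome)))
        keep = sorted(random.sample(range(len(genome)), len(genome) - k))
        candidate = [genome[i] for i in keep]
        if errors(candidate) == baseline:     # behavior preserved: accept
            genome = candidate
    return genome
```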
13.5.4 Serialization The main artifact of inductive program synthesis systems is the solution program found by the search. In PushGP, this artifact is typically either an executable Push program or a genome that can be translated into a Push program. In practice these solutions need to be serialized, stored, and recalled for later use. Serializing Push programs requires the serialization of a nested structure of code blocks, literals, and instructions. One benefit of using a linear genome representation is that the solution’s genome is often easier to serialize and de-serialize than the program. Serializing Plush genomes requires denoting the gene value and the value for all epigenetic markers at every location in the genome. Serializing Plushy genomes only requires serializing the gene values. The simplicity of Plushy cuts down on the size of serialized genomes and improves interpretation and ease of de-serialization.
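For instance, a Plushy genome can be written out with any off-the-shelf serializer for flat sequences, as in the minimal example below (illustrative; a real system would also need type tags for non-string literals).

```python
import json

genome = ["5", "x", "int_gt", "CLOSE", "exec_if", "x", "SKIP",
          "int_sqrt", "CLOSE", "x", "2", "int_mult"]
serialized = json.dumps(genome)               # one flat list of tokens
assert json.loads(serialized) == genome       # round-trips losslessly
```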
13.5.5 New Epigenetic Markers for Plush One of the inspirations for the development of Plush genomes was the ability to introduce new epigenetic markers that attach new kinds of data to the genome. We have discussed two such epigenetic markers that have easy translations to the Plushy representation: "close" and "silent" markers. However, we could imagine (and have experimented with) other epigenetic markers that would be more difficult to add to Plushy genomes. For example, we have experimented with the idea of adding "crossover hot-spots", which are locations in the genome where crossover is more likely to occur than in other locations. This is easy to envision as a new epigenetic marker, whereas it is not obvious how this feature would be added to Plushy genomes. However, we have yet to find a specific use of a new epigenetic marker that actually improves the performance of PushGP in practice. We therefore recommend keeping this ability in mind as a possible advantage of Plush, a context in which the complexity of Plush could add to its utility in comparison to Plushy.
13.6 Conclusion We have compared two genome representations for evolving Push programs, Plush and Plushy. Experiments using the Clojush implementation of PushGP showed that the choice of representation has little effect on the problem-solving power of the genetic programming system, making it impossible to recommend one representation over the other on the basis of problem-solving performance alone. We also explored other qualities of the programs evolved using each representation, and found some minor differences in genome/program sizes, numbers of closing genes, and numbers of code blocks in the translated Push programs. While these differences are interesting and potentially could impact problem-solving performance on other problems, they appear incidental in the problem-solving performance results in this study. We then discussed the qualitative aspects of each representation. Plushy requires fewer hyperparameters to be tuned than Plush, since the number of CLOSE genes to include is determined from the instruction set, whereas Plush requires the use (and potentially the tuning) of hyperparameters that determine the distribution of "close" epigenetic markers in randomly-generated genes. Additionally, the simplicity of Plushy compared to Plush makes it easier to apply non-genetic-programming search methods and to serialize genomes. After comparing the two representations, we recommend using Plushy genomes for the evolution of Push programs in most settings. Since both representations achieve similar problem-solving performance, Plushy's simplicity makes it more versatile and easier to use. Acknowledgements Feedback and discussions that improved this work were provided by other members of the Hampshire College Institute for Computational Intelligence, and by participants in the Genetic Programming Theory and Practice workshop. This material is based upon work supported by the National Science Foundation under Grant No. 1617087. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the National Science Foundation.
References 1. Forstenlechner, S., Fagan, D., Nicolau, M., O’Neill, M.: A grammar design pattern for arbitrary program synthesis problems in genetic programming. In: M. Castelli, J. McDermott, L. Sekanina (eds.) EuroGP 2017: Proceedings of the 20th European Conference on Genetic Programming, LNCS, vol. 10196, pp. 262–277. Springer Verlag, Amsterdam (2017). https:// doi.org/10.1007/978-3-319-55696-3_17 2. Helmuth, T., McPhee, N.F., Pantridge, E., Spector, L.: Improving generalization of evolved programs through automatic simplification. In: Proceedings of the Genetic and Evolutionary Computation Conference, GECCO ’17, pp. 937–944. ACM, Berlin, Germany (2017). https:// doi.org/10.1145/3071178.3071330. http://doi.acm.org/10.1145/3071178.3071330
3. Helmuth, T., McPhee, N.F., Spector, L.: Program synthesis using uniform mutation by addition and deletion. In: Proceedings of the Genetic and Evolutionary Computation Conference, GECCO ’18, pp. 1127–1134. ACM, New York, NY, USA (2018). https://doi.org/10.1145/ 3205455.3205603. http://doi.acm.org/10.1145/3205455.3205603 4. Helmuth, T., Spector, L.: Detailed problem descriptions for general program synthesis benchmark suite. Technical Report UM-CS-2015-006, Computer Science, University of Massachusetts, Amherst (2015). https://web.cs.umass.edu/publication/details.php?id=2387 5. Helmuth, T., Spector, L.: General program synthesis benchmark suite. In: GECCO ’15: Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, pp. 1039–1046. ACM, Madrid, Spain (2015). https://doi.org/10.1145/2739480.2754769. http://doi. acm.org/10.1145/2739480.2754769 6. Helmuth, T., Spector, L., McPhee, N.F., Shanabrook, S.: Linear genomes for structured programs. In: Genetic Programming Theory and Practice XIV. Springer (2017) 7. Kitzelmann, E.: Inductive programming: A survey of program synthesis techniques. In: U. Schmid, E. Kitzelmann, R. Plasmeijer (eds.) Approaches and Applications of Inductive Programming, pp. 50–73. Springer Berlin Heidelberg, Berlin, Heidelberg (2010) 8. Koza, J.R.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge, MA, USA (1992). http://mitpress.mit.edu/books/geneticprogramming 9. La Cava, W., Helmuth, T., Spector, L., Danai, K.: Genetic programming with epigenetic local search. In: GECCO ’15: Proceedings of the 2015 conference on Genetic and Evolutionary Computation Conference, pp. 1055–1062. ACM, Madrid, Spain (2015). https://doi.org/10. 1145/2739480.2754763. http://doi.acm.org/10.1145/2739480.2754763 10. Lalejini, A., Ofria, C.: Evolving event-driven programs with signalgp. CoRR abs/1804.05445 (2018). http://arxiv.org/abs/1804.05445 11. Pantridge, E., Helmuth, T., McPhee, N.F., Spector, L.: On the difficulty of benchmarking inductive program synthesis methods. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO ’17, pp. 1589–1596. ACM, New York, NY, USA (2017). https://doi.org/10.1145/3067695.3082533. http://doi.acm.org/10.1145/3067695. 3082533 12. Pantridge, E., Spector, L.: PyshGP: PushGP in python. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO ’17, pp. 1255–1262. ACM, Berlin, Germany (2017). https://doi.org/10.1145/3067695.3082468. http://doi.acm.org/ 10.1145/3067695.3082468 13. Pantridge, E., Spector, L.: Plushi: An embeddable, language agnostic, push interpreter. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO ’18, pp. 1379–1385. ACM, New York, NY, USA (2018). https://doi.org/10.1145/3205651. 3208296. http://doi.acm.org/10.1145/3205651.3208296 14. Perelman, D., Gulwani, S., Grossman, D., Provost, P.: Test-driven synthesis. ACM SIGPLAN Notices 49(6), 408–418 (2014). https://doi.org/10.1145/2594291.2594297 15. Robinson, A.: Genetic programming: Theory, implementation, and the evolution of unconstrained solutions. Division iii thesis, Hampshire College (2001). http://hampshire.edu/ lspector/robinson-div3.pdf 16. Rosin, C.D.: Stepping stones to inductive synthesis of low-level looping programs. CoRR abs/1811.10665 (2018). http://arxiv.org/abs/1811.10665 17. Spector, L., Helmuth, T.: Uniform linear transformation with repair and alternation in genetic programming. 
In: Genetic Programming Theory and Practice XI, Genetic and Evolutionary Computation, chap. 8, pp. 137–153. Springer, Ann Arbor, USA (2013). https://doi.org/10.1007/ 978-1-4939-0375-7_8. http://link.springer.com/chapter/10.1007%2F978-1-4939-0375-7_8 18. Spector, L., Helmuth, T.: Effective simplification of evolved push programs using a simple, stochastic hill-climber. In: GECCO Comp ’14: Proceedings of the 2014 conference companion on Genetic and evolutionary computation companion, pp. 147–148. ACM, Vancouver, BC, Canada (2014). https://doi.org/10.1145/2598394.2598414. http://doi.acm.org/10.1145/ 2598394.2598414
19. Spector, L., Klein, J., Keijzer, M.: The push3 execution stack and the evolution of control. In: GECCO 2005: Proceedings of the 2005 conference on Genetic and evolutionary computation, vol. 2, pp. 1689–1696. ACM Press, Washington DC, USA (2005). https://doi.org/10.1145/ 1068009.1068292. http://www.cs.bham.ac.uk/~wbl/biblio/gecco2005/docs/p1689.pdf 20. Spector, L., Robinson, A.: Genetic programming and autoconstructive evolution with the push programming language. Genetic Programming and Evolvable Machines 3(1), 7–40 (2002). https://doi.org/10.1023/A:1014538503543. http://hampshire.edu/lspector/pubs/pushgpem-final.pdf
Chapter 14
Enhanced Optimization with Composite Objectives and Novelty Pulsation Hormoz Shahrzad, Babak Hodjat, Camille Dollé, Andrei Denissov, Simon Lau, Donn Goodhew, Justin Dyer, and Risto Miikkulainen
H. Shahrzad · B. Hodjat, Cognizant Technology Solutions, Dublin, CA, USA, e-mail: [email protected]; [email protected]
C. Dollé · A. Denissov · S. Lau · D. Goodhew · J. Dyer, Sentient Investment Management, San Francisco, CA, USA, e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]
R. Miikkulainen, Cognizant Technology Solutions, Dublin, CA, USA, and The University of Texas at Austin, Austin, TX, USA, e-mail: [email protected]; [email protected]

14.1 Introduction Multi-objective optimization is most commonly used for discovering a Pareto front from which solutions that represent useful tradeoffs between objectives can be selected [9, 14–16, 23]. Evolutionary methods are a natural fit for such problems because the Pareto front naturally emerges in the population maintained in these methods. Interestingly, multi-objectivity can also improve evolutionary optimization because it encourages populations with more diversity. Even when the focus of optimization is to find good solutions along a primary performance metric, it is useful to create secondary dimensions that reward solutions that are different in terms of structure, size, cost, consistency, etc. Multi-objective optimization then discovers stepping stones that can be combined to achieve higher fitness along the primary dimension [34]. The stepping stones are useful in particular in problems where the fitness landscape is deceptive, i.e. where the optima are surrounded by inferior solutions [28]. However, not all such diversity is helpful. In particular, candidates that optimize one objective only and ignore the others are less likely to lead to useful tradeoffs,
and they are less likely to escape deception. Prior research demonstrated that it is beneficial to replace the objectives with their linear combinations, thus focusing the search in more useful areas of the search space, and to make up for the lost diversity by including a novelty metric in parent selection [39]. This paper improves upon this approach by introducing the concept of novelty pulsation: the novelty selection is turned on and off periodically, thereby allowing exploration and exploitation to leverage each other repeatedly. This idea is tested in two domains. The first one is the highly deceptive domain of sorting networks [25] used in the original work on composite novelty selection [39]. Such networks consist of comparators that map any set of numbers represented on their input lines to a sorted order on their output lines. These networks have to be correct, i.e. sort all possible cases of input. The goal is to discover networks that are as small as possible, i.e. have as few comparators organized in as few sequential layers as possible. While correctness is the primary objective, it is actually not that difficult to achieve, because it is not deceptive. Minimality, on the other hand, is highly deceptive and makes sorting network design an interesting benchmark problem. The experiments in this paper show that while the original composite novelty selection and its novelty-pulsation-enhanced version both find state-of-the-art networks up to 20 input lines, novelty pulsation finds them significantly faster. It also beat the state of the art for the 20-line network by finding a 91-comparator design, which broke the previous world record of 92 [40]. The second domain is the highly challenging real-world problem of stock trading. The goal is to evolve agents that decide whether to buy, hold, or sell particular stocks over time in order to maximize returns. Compared to the original composite novelty method, novelty pulsation finds solutions that generalize significantly better to unseen data. It therefore forms a promising foundation for solving deceptive real-world problems through multi-objective optimization.
14.2 Background and Related Work Evolutionary methods for optimizing single-objective and multi-objective problems are reviewed, as well as the idea of using novelty to encourage diversity and the concept of exploration versus exploitation in optimization methods. The domains of minimal sorting networks and automated stock trading are introduced and prior work in them reviewed.
14.2.1 Single-Objective Optimization When the optimization problem has a smooth and non-deceptive search space, evolutionary optimization of a single objective is usually convenient and effective. However, we are increasingly faced with problems of more than one objective and
with a rugged and deceptive search space. The first approach often is to combine the objectives into a single composite calculation [14]:

$$\mathrm{Composite}(O_1, O_2, \ldots, O_k) = \sum_{i=1}^{k} \alpha_i O_i^{\beta_i} \qquad (14.1)$$
where the constant hyper-parameters $\alpha_i$ and $\beta_i$ determine the relative importance of each objective in the composition. The composite objective can be parameterized in two ways:

1. By folding the objective space, and thereby causing a multitude of solutions to have the same value. Diversity is lost since solutions with different behavior are considered to be equal.
2. By creating a hierarchy in the objective space, and thereby causing some objectives to have more impact than many of the other objectives combined. The search will thus optimize the most important objectives first, which in deceptive domains might result in inefficient search or premature convergence to local optima.

Both of these problems can be avoided by casting the composition explicitly in terms of multi-objective optimization.
14.2.2 Multi-Objective Optimization Multi-objective optimization methods construct a Pareto set of solutions [16], and therefore eliminate the issues with objective folding and hierarchy noted in Sect. 14.2.1. However, not all diversity in the Pareto space is useful. Candidates that optimize one objective only and ignore the others are less likely to lead to useful tradeoffs, and are less likely to escape deception. One potential solution is reference-point-based multi-objective methods such as NSGA-III [15, 16]. They make it possible to harvest the tradeoffs between many objectives and can therefore be used to select for useful diversity as well, although they are not as clearly suited for escaping deception. Another problem with purely multi-objective search is crowding. In crowding, objectives that are easier to explore end up with disproportionately dense representation on the Pareto front. NSGA-II addresses this problem by using the concept of crowding distance [14], and NSGA-III improves upon it using reference points [15, 16]. These methods, while increasing diversity in the fitness space, do not necessarily result in diversity in the behavior space. An alternative method is to use composite multi-objective axes to focus the search on the area with the most useful tradeoffs [39]. Since the axes are not orthogonal, solutions that optimize only one objective will not be on the Pareto front. The focus effect, i.e. the angle between the objectives, can be tuned by varying the coefficients of the composite.
However, focusing the search in this manner has the inevitable side effect of reducing diversity. Therefore, it is important that the search method makes use of whatever diversity exists in the focused space. One way to achieve this goal is to incorporate a preference for novelty into selection.
14.2.3 Novelty Search Novelty search [31, 33] is an increasingly popular paradigm that overcomes deception by ranking solutions based on how different they are from others. Novelty is computed in the space of behaviors, i.e., vectors containing semantic information about how a solution performs during evaluation. However, with a large space of possible behaviors, novelty search can become increasingly unfocused, spending most of its resources in regions that will never lead to promising solutions. Recently, several approaches have been proposed to combine novelty with a more traditional fitness objective [17, 19, 20, 37, 38] to reorient search towards fitness as it explores the behavior space. These approaches have helped scale novelty search to more complex environments, including an array of control [3, 13, 37] and content generation [27, 29, 30] domains. Many of these approaches combine a fitness objective with a novelty objective in some way, for instance as a weighted sum [11], or as different objectives in a multi-objective search [37]. Another approach is to keep the two kinds of search separate, and make them interact through time. For instance, it is possible to first create a diverse pool of solutions using novelty search, presumably overcoming deception that way, and then find solutions through fitness-based search [26]. A third approach is to run fitness-based search with a large number of objective functions that span the space of solutions, and use novelty search to encourage search to utilize all those functions [13, 36, 38]. A fourth category of approaches is to run novelty search as the primary mechanism, and use fitness to select among the solutions. For instance, it is possible to add local competition through fitness to novelty search [30, 31]. Another version is to accept novel solutions only if they satisfy minimal performance criteria [17, 32]. Some of these approaches have been generalized using the idea of behavior domination to discover stepping stones [34, 35]. In the Composite Novelty method [39], a novelty measure is employed to select which individuals to reproduce and which to discard. In this manner, it is integrated into the genetic algorithm itself, and its role is to make sure the focused space that the composite objectives define is searched thoroughly.
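One common concrete form of the novelty measure, given here only as an illustration (the cited works may define it differently), is the mean distance from a candidate's behavior vector to its k nearest neighbors in the current population and archive:

```python
import numpy as np

def novelty(behavior, others, k=15):
    """Sketch of a k-nearest-neighbor novelty score in behavior space.
    `others` is an array of behavior vectors (population plus archive)."""
    dists = np.linalg.norm(np.asarray(others) - np.asarray(behavior), axis=1)
    return float(np.sort(dists)[:k].mean())
```

Sparse regions of behavior space then receive high scores, so selection favors individuals whose behavior differs from what has been seen before.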
14.2.4 Exploration Versus Exploitation

Every search algorithm needs to both explore the search space and exploit the known good solutions in it. Exploration is the process of visiting entirely new
regions of a search space, whilst exploitation is the process of visiting regions within the neighborhood of previously visited points. In order to be successful, a search algorithm needs to establish a productive synergy between exploration and exploitation [6]. A common problem in evolutionary search is that it gets stuck in local minima, i.e. in unproductive exploitation. A common solution is to kick-start the search process in such cases by temporarily increasing mutation rates. This solution can be utilized more systematically by making such kick-starts periodic, resulting in methods such as delta coding and burst mutation [18, 42]. This paper incorporates the kick-start idea into novelty selection: turning novelty selection on and off periodically allows local search (i.e. exploitation) and novelty search (i.e. exploration) to leverage each other, leading to faster search and better generalization. These effects will be demonstrated in the sorting networks and stock trading domains, respectively.
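To make the kick-start idea concrete, the following minimal Python sketch raises the mutation rate temporarily when the best fitness has stagnated; the constants and the bit-flip mutation operator are illustrative assumptions, not taken from the delta coding or burst mutation papers [18, 42].

```python
import random

# Illustrative constants (assumptions, not from the cited methods).
BASE_RATE, BURST_RATE, STAGNATION_LIMIT = 0.01, 0.20, 10

def next_mutation_rate(stagnant_generations):
    """Raise the mutation rate when search is stuck (exploration burst),
    otherwise keep it low (exploitation)."""
    if stagnant_generations >= STAGNATION_LIMIT:
        return BURST_RATE
    return BASE_RATE

def mutate(genome, rate):
    """Flip each bit independently with probability `rate`."""
    return [1 - g if random.random() < rate else g for g in genome]
```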
14.2.5 Sorting Networks

A sorting network of n inputs is a fixed layout of comparison-exchange operations (comparators) that sorts all inputs of size n (Fig. 14.1). Since the same layout can sort any input, it represents an oblivious or data-independent sorting algorithm, that is, the layout of comparisons does not depend on the input data. The resulting fixed communication pattern makes sorting networks desirable in parallel implementations of sorting, such as those in graphics processing units, multi-processor computers, and switching networks [2, 24, 39]. Beyond validity, the main goal in designing sorting networks is to minimize the number of layers, because it determines how many steps are required in a parallel implementation. A tertiary goal is to minimize the total number of comparators in the networks. Designing such minimal sorting networks is a challenging optimization problem that has been the subject of active research since the 1950s [25]. Although the space of possible
Fig. 14.1 A Four-Input Sorting Network and its representation. This network takes as its input (left) four numbers, and produces output (right) where those numbers are sorted (small to large, top to bottom). Each comparator (connection between the lines) swaps the numbers on its two lines if they are not in order, otherwise it does nothing. This network has three layers and five comparators, and is the minimal four-input sorting network. Minimal networks are generally not known for large input sizes. Their design space is deceptive, which makes network minimization a challenging optimization problem
networks is infinite, it is relatively easy to test whether a particular network is correct: If it sorts all combinations of zeros and ones correctly, it will sort all inputs correctly [25]. Many of the recent advances in sorting network design are due to evolutionary methods [40]. However, it is still a challenging problem, even for the most powerful evolutionary methods, because it is highly deceptive: Improving upon a current design may require temporarily growing the network, or sorting fewer inputs correctly. Sorting networks are therefore a good domain for testing the power of evolutionary algorithms.
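The zero-one test is simple to implement. The sketch below (a straightforward reading of the principle, not code from the paper) represents a network as a list of (first, second) line indexes and checks all 2^n binary inputs; the example network is one minimal 4-input layout consistent with Fig. 14.1.

```python
from itertools import product

def sorts_all_binary_inputs(comparators, n):
    """Zero-one principle [25]: an n-line comparator network sorts every
    input iff it sorts all 2**n vectors of zeros and ones.
    Cost is O(2**n), so this is practical only for small n."""
    for bits in product((0, 1), repeat=n):
        v = list(bits)
        for f, s in comparators:         # f < s are line indexes
            if v[f] > v[s]:
                v[f], v[s] = v[s], v[f]  # comparison-exchange
        if any(v[i] > v[i + 1] for i in range(n - 1)):
            return False
    return True

# One minimal 4-input network (3 layers, 5 comparators), as in Fig. 14.1:
net4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
assert sorts_all_binary_inputs(net4, 4)
```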
14.2.6 Stock Trading

Stock trading is a natural multi-objective domain where return and risk must be balanced [4, 5]. Candidate solutions, i.e. trading agents, can be represented in several ways. Rule-based strategies, sequence modeling with neural networks and LSTMs (Long Short-Term Memory), and symbolic regression using Genetic Programming or Grammatical Evolution are common approaches [1, 12]. Frequency of trade, fundamental versus technical indicators, choice of trading instruments, transaction costs, and vocabulary of order types are crucial design decisions in building such agents. The goal is to extract patterns from historical time-series data on stock prices and utilize those patterns to make optimal trading decisions, i.e. whether to buy, hold, or sell particular stocks (Fig. 14.2) [10, 41]. The main challenge is to trade in a
Fig. 14.2 Stock Trading Agent. The agent observes the time series of stock prices and makes live decisions about whether to buy, hold, or sell a particular stock. The signal is noisy and prone to overfitting; generalization to unseen data is the main challenge in this domain
manner that generalizes to previously unseen situations in live trading. Some general methods like training data interleaving can be used to increase generalization [21], but their effectiveness might not be enough due to the low signal-to-noise ratio, which is the main source of deception in this domain. The data is extremely noisy and prone to overfitting, and methods that discover more robust decisions are needed.
14.3 Methods

In this section, the genetic representation, the single- and multi-objective optimization approaches, the composite objective method, the novelty-based selection method, and the novelty pulsation method are described, using the sorting network domain as an example. These methods were applied to stock trading in an analogous manner.
14.3.1 Representation

In order to apply various evolutionary optimization techniques to the sorting network problem, a general structured representation was developed. A sorting network of n lines can be seen as a sequence of two-leg comparators, where each leg is connected to a different input line and the first leg is connected to a higher line than the second: {(f1, s1), (f2, s2), (f3, s3), . . . , (fc, sc)}. The number of layers can be determined from such a sequence by grouping successive comparators together into a layer until the next comparator adds a second connection to one of the lines in the same layer. With this representation, mutation and crossover operators amount to adding and removing a comparator, swapping two comparators, and crossing over the comparator sequences of two parents at a single point. Domain-specific techniques such as mathematically designing the prefix layers [7, 8] or utilizing certain symmetries [40] were not used.
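The layer-grouping rule translates directly into code. Here is a short sketch of counting layers for a comparator sequence in this representation (our reading of the rule described above, not the paper's implementation):

```python
def count_layers(comparators):
    """Group successive comparators into layers; a new layer starts when
    the next comparator adds a second connection to a line already used
    in the current layer."""
    layers, used = 0, set()
    for f, s in comparators:
        if f in used or s in used:
            layers += 1     # close the current layer, start a new one
            used = set()
        used.update((f, s))
    return layers + (1 if used else 0)

# The minimal 4-input network of Fig. 14.1 has three layers:
assert count_layers([(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]) == 3
```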
14.3.2 Single-Objective Approach

Correctness is part of the definition of a sorting network: Even if a network mishandles only one sample, it will not be useful. The number of layers can be considered the most important size objective because it determines the efficiency of a parallel implementation. A hierarchical composite objective can therefore be defined as:

SingleFitness(m, l, c) = 10,000 m + 100 l + c    (14.2)
where m, l, and c are the number of mistakes (unsorted samples), the number of layers, and the number of comparators, respectively. In the experiments in this paper, the solutions will be limited to fewer than one hundred layers and comparators, and therefore the fitness will be completely hierarchical (i.e. there is no folding).
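Putting Eq. 14.2 together with the zero-one test gives a complete single-objective fitness. In the sketch below, m is simply the count of binary inputs the network fails to sort; taking this as the paper's exact definition of "unsorted samples" is our assumption.

```python
from itertools import product

def mistakes(comparators, n):
    """m in Eq. 14.2: the number of zero-one inputs left unsorted
    (m == 0 means the network is valid)."""
    m = 0
    for bits in product((0, 1), repeat=n):
        v = list(bits)
        for f, s in comparators:
            if v[f] > v[s]:
                v[f], v[s] = v[s], v[f]
        m += any(v[i] > v[i + 1] for i in range(n - 1))
    return m

def single_fitness(m, l, c):
    """Eq. 14.2 (to be minimized): mistakes dominate layers, which
    dominate comparators, as long as l, c < 100."""
    return 10_000 * m + 100 * l + c
```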
14.3.3 Multi-Objective Approach

In the multi-objective approach, the same dimensions, i.e. the number of mistakes, layers, and comparators m, l, c, are used as three separate objectives. They are optimized by the NSGA-II algorithm [14] with the selection percentage set to 10%. Indeed, this approach may discover solutions with just a single layer, or a single comparator, since they qualify for the Pareto front. Therefore, diversity is increased compared to the single-objective method, but this diversity is not necessarily helpful.
14.3.4 Composite Multi-Objective Approach

In order to construct composite axes, each objective is augmented with sensitivity to the other objectives:

Composite1(m, l, c) = 10,000 m + 100 l + c    (14.3)
Composite2(m, l) = α1 m + α2 l    (14.4)
Composite3(m, c) = α3 m + α4 c    (14.5)
The primary composite objective (Eq. 14.3), which will replace the mistake axis, is the same hierarchical fitness used in the single-objective approach. It discourages evolution from constructing correct networks that are extremely large. The second objective (Eq. 14.4), with α2 = 10, primarily encourages evolution to look for solutions with a small number of layers. A much smaller cost of mistakes, with α1 = 1, helps prevent useless single-layer networks from appearing in the Pareto front. Similarly, the third objective (Eq. 14.5), with α3 = 1 and α4 = 10, applies the same principle to the number of comparators. The values for α1, α2, α3, and α4 were found to work well in this application, but the approach was found not to be very sensitive to them: a broad range will work as long as they establish a primacy relationship between the objectives. It might seem that several hyper-parameters are being added which need to be tuned, but they can be estimated in each domain by picking values that push trivial or useless solutions off the Pareto front.
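The three composite axes are a direct transcription of Eqs. 14.3-14.5, shown below with the α values given above; all three are minimized:

```python
ALPHA1, ALPHA2, ALPHA3, ALPHA4 = 1, 10, 1, 10

def composite_objectives(m, l, c):
    """Composite axes of Eqs. 14.3-14.5 for a network with m mistakes,
    l layers, and c comparators (all minimized)."""
    return (
        10_000 * m + 100 * l + c,    # Eq. 14.3: hierarchical primary axis
        ALPHA1 * m + ALPHA2 * l,     # Eq. 14.4: layer-focused axis
        ALPHA3 * m + ALPHA4 * c,     # Eq. 14.5: comparator-focused axis
    )
```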
14.3.5 Novelty Selection Method

In order to measure how novel the solutions are, it is first necessary to characterize their behavior. While such a characterization can be done in many ways, a concise and computationally efficient approach is to count how many swaps took place on each line in sorting all possible zero-one combinations during the validity check. Such a characterization is a vector that has the same size as the problem, making the distance calculations fast. It also represents the true behavior of the network: Even if two networks sort the same input cases correctly, they may do it in different ways, and the characterization is likely to capture that difference. Given this behavior characterization, the novelty of a solution is measured by the sum of pairwise distances of its behavior vector to those of all the other individuals in the selection pool:

NoveltyScore(x_i) = Σ_{j=1}^{n} d(b(x_i), b(x_j)).    (14.6)
The selection method also has a parameter called the selection multiplier (e.g. set to 2 in these experiments), varying between one and the inverse of the elite fraction (e.g. 1/10, i.e. 10%) used in the NSGA-II multi-objective optimization method. The original selection percentage is multiplied by the selection multiplier to form a broader selection pool. That pool is sorted according to novelty, and the top fraction representing the original selection percentage is used for selection. This way, good solutions that are more novel are included in the pool. Figure 14.3 shows an example result of applying Eq. 14.6.

Fig. 14.3 The first phase of novelty selection is to select the solutions (marked green) with the highest Novelty Score (Eq. 14.6)

One potential issue is that a cluster of solutions far from the rest may end up having high novelty scores while only one is good enough to keep. Therefore, after the top fraction is selected, the rest of the sorted solutions are added to the selection pool one by one, replacing the solution with the lowest minimum novelty, defined as
MinimumNovelty(x_i) = min_{1 ≤ j ≤ n, j ≠ i} d(b(x_i), b(x_j)).    (14.7)
Note that this method allows tuning novelty selection continuously between two extremes: By setting the selection multiplier to one, the method reduces to the original multi-objective method (i.e. only the elite fraction ends up in the final elitist pool), and setting it to the inverse of the elite fraction reduces it to pure novelty search (i.e. the whole population, sorted by novelty, is the selection pool). In practice, low and midrange values for the multiplier work well, including the value 2 used in these experiments. Figure 14.4 shows an example result of applying Eq. 14.7. The entire novelty-selection algorithm is summarized in Fig. 14.5. To visualize this process, Fig. 14.6 contrasts the diversity that the multi-objective method (e.g. NSGA-II) creates (left side) with the diversity that novelty search creates (right side). In the objective space (top), novelty looks more focused and less diverse, but in the behavior space (bottom) it is much more diverse. This type of diversity enables the method to escape deception and find novel solutions, such as the state of the art in sorting networks.

Fig. 14.4 Phase two of novelty selection eliminates the closest pairs of the green candidates in order to get better overall coverage (blue candidates). The result is a healthy mixture of high-fitness candidates and novel ones (Eq. 14.7)
1. Using a selection method (e.g. NSGA-II), pick selection multiplier times as many elitist candidates as usual.
2. Sort them in descending order according to their NoveltyScore (Equation 14.6).
3. Move the usual number of elitist candidates from the top of the list to the result set.
4. For all remaining candidates in the sorted list:
   a. add the candidate to the result set;
   b. remove the candidate with the lowest MinimumNovelty (Equation 14.7), using a fitness measure as the tie breaker.
5. Return the resulting set as the elite set.
Fig. 14.5 The novelty selection algorithm
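The two phases of Fig. 14.5 can be sketched compactly. In the following (our NumPy formulation, not code from [39]), `behaviors` is the matrix of behavior vectors for the broadened pool, and the fitness tie-breaking of step 4 is omitted for brevity:

```python
import numpy as np

def novelty_select(behaviors, elite_size):
    """Two-phase novelty selection (Eqs. 14.6-14.7); returns the indexes
    of the chosen elite within the broadened pool."""
    B = np.asarray(behaviors, dtype=float)
    D = np.linalg.norm(B[:, None, :] - B[None, :, :], axis=-1)  # d(b_i, b_j)
    order = list(np.argsort(-D.sum(axis=1)))  # descending NoveltyScore (14.6)
    chosen = order[:elite_size]               # phase one
    for cand in order[elite_size:]:           # phase two
        chosen.append(cand)
        # MinimumNovelty (Eq. 14.7): distance to the nearest other member.
        min_nov = [min(D[i][j] for j in chosen if j != i) for i in chosen]
        chosen.pop(int(np.argmin(min_nov)))   # drop the least novel member
    return chosen
```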
Fig. 14.6 An example demonstrating how novelty selection (right column) creates better coverage of the behavior space (bottom row) than NSGA-II (left column) despite being more focused in the objective space (top row)
14.3.6 Novelty Pulsation Method

Parent selection is a crucial step in an evolutionary algorithm. In almost all such algorithms, whatever method is used remains unchanged during the evolutionary run. However, when a problem is deceptive or prone to over-fitting, changing the selection method periodically may make the algorithm more robust. It can be used to alternate the search between exploration and exploitation, and thus find a proper balance between them. In Composite Novelty Pulsation, novelty selection is switched on and off after a certain number of generations. As in delta coding and burst mutation, once good solutions are found, they are used as a starting point for exploration. Once exploration has generated sufficient diversity, local optimization is performed to find the best possible versions of these diverse points. These two phases leverage each other, which results in faster convergence and more reliable solutions. Composite Novelty Pulsation adds a new hyper-parameter, P, denoting the number of generations before switching novelty selection on and off. Preliminary experiments showed that P = 5 works well in both sorting network and stock
Fig. 14.7 Visualization of how the novelty pulsation process alternates between composite multi-objective selection and novelty selection
trading domains; however, in principle it is possible to tune this parameter to fit the domain. Figure 14.7 shows a schematic of the novelty pulsation process.
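The pulsation itself is a small wrapper around the selection step. The sketch below alternates the two modes every P generations; `nsga2_elite` and `behaviors_of` are assumed helper functions standing in for the machinery of Sects. 14.3.3 and 14.3.5, and which phase a run starts in is our choice, not specified by the paper.

```python
P = 5             # pulsation period found to work well above
MULTIPLIER = 2    # selection multiplier from Sect. 14.3.5

def select_elite(population, generation, elite_size):
    """Composite Novelty Pulsation: novelty selection is switched on and
    off every P generations (exploration vs. exploitation phases).
    nsga2_elite(pop, k) and behaviors_of(pool) are assumed helpers."""
    if (generation // P) % 2 == 0:   # novelty phase (starting phase assumed)
        pool = nsga2_elite(population, elite_size * MULTIPLIER)
        keep = novelty_select(behaviors_of(pool), elite_size)
        return [pool[i] for i in keep]
    return nsga2_elite(population, elite_size)   # pure composite phase
```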
14.4 Experiment

Previous work in the sorting networks domain demonstrated that composite novelty can match the minimal known networks up to 18 input lines with reasonable computational resources [39, 40]. The goal of the sorting network experiments was to achieve the same result faster, i.e. with fewer resources. The experiments were therefore standardized to a single machine (a multi-core desktop). In the stock market trading domain, the experiments compared generalization by measuring the correlation between seen and unseen data.
14.4.1 Experimental Setup

Experiments in a previous paper [39] already demonstrated that the composite novelty method performs statistically significantly better in the sorting network discovery task than the other methods discussed above. Therefore, this paper focuses on comparing the novelty pulsation method to its predecessor, i.e. the composite novelty method. In the sorting networks domain, experiments were run with the following parameters:

• Eleven network sizes, 8 through 18;
• Ten runs for each configuration (220 runs in total);
• 10% parent selection rate;
• Population size of 1000 for composite novelty selection and 100 for novelty pulsation.

These settings were found to be appropriate for each method experimentally. In the trading domain, experiments were run with the following parameters:

• Ten runs on five years of historical data;
• Population size of 500;
• 100 generations;
• 10% parent selection rate;
• Performance of the 10 best individuals from each run compared on the subsequent year of historical data, withheld from all runs.
14.4.2 Sorting Networks Results

Convergence time of the two methods to minimal solutions for different network sizes is shown in Fig. 14.8. Novelty pulsation shows an order of magnitude faster convergence across the board. All runs resulted in state-of-the-art sorting networks.
Fig. 14.8 The average runtime needed to converge to the state of the art on networks with different sizes. Novelty pulsation converges significantly faster at all sizes, demonstrating improved balance of exploration and exploitation
An interesting observation is that sorting networks with an even number of lines take proportionately less time to find the state-of-the-art solution than those with odd numbers of lines. This result is likely due to the symmetrical characteristics of even-numbered problems. Some methods [40] exploit this very symmetry in order to find state-of-the-art solutions and break previous records, but this domain-specific information was not used in the implementation in this paper. The fact that the method achieves state-of-the-art results and even breaks one world record (as described in the Appendix) without exploiting domain-specific characteristics is itself a significant result.
14.4.3 Stock Trading Results

Figures 14.9 and 14.10 illustrate the generalization of the composite novelty selection and novelty pulsation methods, respectively. Points in Fig. 14.10 are noticeably closer to a diagonal line, which means that better training fitness resulted in better testing fitness, i.e. higher correlation and better generalization. Numerically, the seen-to-unseen correlation for the composite novelty method is 0.69, while for
Fig. 14.9 Generalization from seen to unseen data with the Composite Novelty method. The fitness on seen data is on the x-axis and the fitness on unseen data on the y-axis. The correlation is 0.69, which is enough to trade but could be improved. Candidates to the right of the vertical line are profitable on seen data, and candidates above the horizontal line are profitable on unseen data
Fig. 14.10 Generalization from seen to unseen data with the Composite Novelty Pulsation method. The correlation is 0.89, which results in significantly improved profitability in live trading. It is also notable that the majority of profitable genes on training data are also profitable on unseen data
composite novelty pulsation, it is 0.86. The ratio of the number of profitable candidates on unseen data to that on training data is also better, suggesting that underfitting is unlikely. In practice, these differences are significant, translating to much improved profitability in live trading.
14.5 Discussion and Future Work

The results in both the sorting network and stock trading domains support the anticipated advantages of the composite novelty pulsation approach. The secondary objectives diversify the search, composite objectives focus it on the most useful areas, and pulses of novelty selection allow for both accurate optimization and thorough exploration of those areas. These methods are general and robust: they can be readily implemented in standard multi-objective search methods such as NSGA-II and used in combination with many other techniques already developed to improve evolutionary multi-objective optimization. The sorting network experiments were designed to demonstrate the improvement provided by novelty pulsation over the previous state of the art. Indeed, it found
the best known solutions significantly faster. One compelling direction of future work is to use it to optimize sorting networks systematically, with domain-specific techniques integrated into the search, and with significantly more computing power, including distributed evolution [22]. It is likely that given such power, many new minimal networks can be discovered for even larger numbers of input lines. The stock trading experiments were designed to demonstrate that the approach makes a difference in real-world problems. The main challenge in trading is generalization to unseen data, and indeed in this respect novelty pulsation improved generalization significantly. The method can also be applied in many other domains, in particular those that are deceptive and have natural secondary objectives. For instance, various game strategies, from board to video games, can be cast in this form, where winning is accompanied by different dimensions of the score. Solutions for many design problems, such as 3D printed objects, need to satisfy a set of functional requirements, but also maximize strength and minimize material. Effective control of robotic systems needs to accomplish a goal while minimizing energy and wear and tear. Thus, many applications should be amenable to the composite novelty pulsation approach. Another direction is to extend the method further into discovering effective collections of solutions. For instance, ensembling is a good approach for increasing the performance of machine learning systems. Usually the ensemble is formed from solutions with different initialization or training, with no mechanism to ensure that their differences are useful. In composite novelty pulsation, the Pareto front consists of a diverse set of solutions that span the area of useful tradeoffs. Such collections should make for a powerful ensemble, extending the applicability of the approach further.
14.6 Conclusion

The composite novelty pulsation method is a promising extension of the composite novelty approach to deceptive problems. Composite objectives focus the search on the most useful tradeoffs (better exploitation), while novelty selection allows escaping deceptive areas (better exploration). Novelty pulsation balances exploration and exploitation, finding solutions faster and finding solutions that generalize better. These principles were demonstrated in this paper in the highly deceptive problem of minimizing sorting networks and in the highly noisy domain of stock market trading. Composite novelty pulsation is a general method that can be combined with other advances in population-based search, thus increasing the power and applicability of evolutionary multi-objective optimization.
Fig. 14.11 The new 20-line sorting network with 91 comparators, discovered by novelty pulsation
Appendix

Figure 14.11 shows the new world record for the 20-line sorting network, which improved the previous record of 92 comparators, also discovered by evolution [40], down to 91.
One of the nice properties of the Novelty Pulsation method is its ability to converge with a very small pool size (as few as 30 individuals in the case of sorting networks). However, it still took almost two months to break the world record on the 20-line network running on a single machine (Fig. 14.11). Interestingly, even if it took the same number of generations for the other methods to get there with a normal pool size of a thousand, those runs would take almost five years to converge!
References

1. F. Allen, R. Karjalainen. 1999. Using genetic algorithms to find technical trading rules. Journal of Financial Economics 51, 245–271. 2. S. W. A. Baddar. 2009. Finding Better Sorting Networks. PhD thesis, Kent State University. 3. J. A. Bowren, J. K. Pugh, and K. O. Stanley. 2016. Fully Autonomous Real-Time Autoencoder Augmented Hebbian Learning through the Collection of Novel Experiences. In Proceedings of ALIFE. 382–389. 4. A. Brabazon, M. O’Neill. 2006. Biologically Inspired Algorithms for Financial Modelling. Springer. 5. R. Bradley, A. Brabazon, M. O’Neill. 2010. Objective function design in a grammatical evolutionary trading system. In: 2010 IEEE World Congress on Computational Intelligence, pp. 3487–3494. IEEE Press.
6. M. Črepinšek, S. Liu, M. Mernik. 2013. Exploration and Exploitation in Evolutionary Algorithms: A Survey. ACM Computing Surveys 45, Article 35. 7. M. Codish, L. Cruz-Filipe, and P. Schneider-Kamp. 2014. The quest for optimal sorting networks: Efficient generation of two-layer prefixes. In Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), 2014 16th International Symposium on (pp. 359–366). IEEE. 8. M. Codish, L. Cruz-Filipe, T. Ehlers, M. Müller, and P. Schneider-Kamp. 2016. Sorting networks: To the end and back again. Journal of Computer and System Sciences. 9. C. A. C. Coello, G. B. Lamont, and D. A. Van Veldhuizen. 2007. Evolutionary algorithms for solving multi-objective problems. Vol. 5. Springer. 10. I. Contreras, J.I. Hidalgo, L. Nunez-Letamendia, J.M. Velasco. 2017. A meta-grammatical evolutionary process for portfolio selection and trading. Genetic Programming and Evolvable Machines 18(4), 411–431. 11. G. Cuccu and F. Gomez. 2011. When Novelty is Not Enough. In Evostar. 234–243. 12. W. Cui, A. Brabazon, M. O’Neill. 2011. Adaptive trade execution using a grammatical evolution approach. International Journal of Financial Markets and Derivatives 2(1/2), 4–3. 13. A. Cully, J. Clune, D. Tarapore, and J-B. Mouret. 2015. Robots that can adapt like animals. Nature 521, 7553 (2015), 503–507. 14. K. Deb, A. Pratap, S. Agarwal, and T. A. Meyarivan. 2002. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. on Evolutionary Computation 6, 2 (2002), 182–197. 15. K. Deb, and H. Jain. 2014. An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints. In IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, 577–601. 16. K. Deb, K. Sindhya, and J. Hakanen. 2016. Multi-objective optimization. In Decision Sciences: Theory and Practice. 145–184. 17. J. Gomes, P. Mariano, and A. L. Christensen. 2015. Devising effective novelty search algorithms: A comprehensive empirical study. In Proc. of GECCO. 943–950. 18. F. Gomez, and R. Miikkulainen. 1997. Incremental evolution of complex general behavior. Adaptive Behavior 5(3–4), pp.317–342. 19. J. Gomes, P. Urbano, and A. L. Christensen. 2013. Evolution of swarm robotics systems with novelty search. Swarm Intelligence, 7:115–144. 20. F. J. Gomez. 2009. Sustaining diversity using behavioral information distance. In Proc. of GECCO. 113–120. 21. I. Gonçalves, S. Silva. 2013. Balancing Learning and Overfitting in Genetic Programming with Interleaved Sampling of Training Data. In: Krawiec K., Moraglio A., Hu T., Etaner-Uyar A., Hu B. (eds) Genetic Programming. EuroGP 2013. Lecture Notes in Computer Science, vol 7831. Springer, Berlin, Heidelberg. 22. B. Hodjat, H. Shahrzad, and R. Miikkulainen. 2016. Distributed Age-Layered Novelty Search. In Proc. of ALIFE. 131–138. 23. H. Jain, and K. Deb. 2014. An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point Based Nondominated Sorting Approach, Part II: Handling Constraints and Extending to an Adaptive Approach. In IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, 602–622. 24. P. Kipfer, M. Segal, and R. Westermann. 2004. Uberflow: A gpu-based particle engine. In HWWS 2004: Proc. of the ACM SIGGRAPH/EUROGRAPHICS, 115–122. 25. D. E. Knuth. 1998. Art of Computer Programming: Sorting and Searching, volume 3. Addison-Wesley Professional, 2 edition. 26. P. Krcah, and D. Toropila. 2010.
Combination of novelty search and fitness-based search applied to robot body-brain coevolution. In Proc. of 13th Czech-Japan Seminar on Data Analysis and Decision Making in Service Science. 27. J. Lehman, S. Risi, and J. Clune. 2016. Creative Generation of 3D Objects with Deep Learning and Innovation Engines. In Proc. of ICCC. 180–187. 28. J. Lehman, and R. Miikkulainen. 2014. Overcoming deception in evolution of cognitive behaviors. In Proc. of GECCO.
29. J. Lehman and K. O. Stanley. 2012. Beyond open-endedness: Quantifying impressiveness. In Proc. of ALIFE. 75–82. 30. J. Lehman and K. O. Stanley. 2011. Evolving a diversity of virtual creatures through novelty search and local competition. In Proc. of GECCO. 211–218. 31. J. Lehman and K. O. Stanley. 2011. Abandoning objectives: Evolution through the search for novelty alone. Evolutionary Computation 19, 2 (2011), 189–223. 32. J. Lehman and K. O. Stanley. 2010. Efficiently evolving programs through the search for novelty. In Proc. of GECCO. 836–844. 33. J. Lehman and K. O. Stanley. 2008. Exploiting Open-Endedness to Solve Problems Through the Search for Novelty. In Proc. of ALIFE. 329–336. 34. E. Meyerson, and R. Miikkulainen. 2017. Discovering evolutionary stepping stones through behavior domination. In Proc. of GECCO, 139–146. ACM. 35. E. Meyerson, J. Lehman, and R. Miikkulainen. 2016. Learning behavior characterizations for novelty search. In Proc. of GECCO. 149–156. 36. J-B. Mouret and J. Clune. 2015. Illuminating search spaces by mapping elites. CoRR abs/1504.04909 (2015). 37. J-B. Mouret and S. Doncieux. 2012. Encouraging behavioral diversity in evolutionary robotics: An empirical study. Evolutionary Computation 20, 1 (2012), 91–133. 38. J. K. Pugh, L. B. Soros, P. A. Szerlip, and K. O. Stanley. 2015. Confronting the Challenge of Quality Diversity. In Proc. of GECCO. 967–974. 39. H. Shahrzad, D. Fink, and R. Miikkulainen. 2018. Enhanced Optimization with Composite Objectives and Novelty Selection. In Proc. of ALIFE. 616–622. 40. V. K. Valsalam, and R. Miikkulainen. 2013. Using symmetry and evolutionary search to minimize sorting networks. Journal of Machine Learning Research 14(Feb):303–331. 41. H. White. 2000. A reality check for data snooping. Econometrica Sep. 2000; 68(5):1097–126. 42. D. Whitley, K. Mathias, P. Fitzhorn. 1991. Delta coding: An iterative search strategy for genetic algorithms. In ICGA (Vol. 91, pp. 77–84).
Chapter 15
New Pathways in Coevolutionary Computation

Moshe Sipper, Jason H. Moore, and Ryan J. Urbanowicz
15.1 Coevolutionary Computation

In biology, coevolution occurs when two or more species reciprocally affect each other’s evolution. Darwin mentioned evolutionary interactions between flowering plants and insects in Origin of Species. The term coevolution was coined by Paul R. Ehrlich and Peter H. Raven in 1964.1 Coevolutionary algorithms simultaneously evolve two or more populations with coupled fitness [8]. Strongly related to the concept of symbiosis, coevolution can be mutualistic (cooperative), parasitic (competitive), or commensalistic (Fig. 15.1)2: (1) In cooperative coevolution, different species exist in a relationship in which each individual (fitness) benefits from the activity of the other; (2) in competitive coevolution, an organism of one species competes with an organism of a different species; and (3) with commensalism, members of one species gain benefits while those of the other species neither benefit nor are harmed. A cooperative coevolutionary algorithm involves a number of independently evolving species, which come together to obtain problem solutions. The fitness of an individual depends on its ability to collaborate with individuals from other species [2, 8, 9, 15].

1 https://en.wikipedia.org/wiki/Coevolution.
2 https://en.wikipedia.org/wiki/Symbiosis.
M. Sipper
Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA
Department of Computer Science, Ben-Gurion University, Beer Sheva, Israel

J. H. Moore · R. J. Urbanowicz
Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA
e-mail: [email protected]; [email protected]
Fig. 15.1 Coevolution: (a) cooperative: Purple-throated carib feeding from and pollinating a flower (credit: Charles J Sharp, https://commons.wikimedia.org/wiki/File:Purplethroated_carib_hummingbird_feeding.jpg); (b) competitive: predator and prey—a leopard killing a bushbuck (credit: NJR ZA, https://commons.wikimedia.org/wiki/File:Leopard_kill_-_KNP__001.jpg); (c) commensalistic: Phoretic mites attach themselves to a fly for transport (credit: Alvesgaspar, https://en.wikipedia.org/wiki/File:Fly_June_2008-2.jpg)
In a competitive coevolutionary algorithm the fitness of an individual is based on direct competition with individuals of other species, which in turn evolve separately in their own populations. Increased fitness of one of the species implies a reduction in the fitness of the other species [5].

We have recently developed two new coevolutionary algorithms, which will be reviewed herein: OMNIREP and SAFE [10–12]. OMNIREP aims to aid in one of the major tasks faced by an evolutionary computation (EC) practitioner, namely, deciding how to represent individuals in the evolving population. This task is actually composed of two subtasks: defining a data structure that is the representation, and defining the encoding that enables the representation to be interpreted. OMNIREP discovers both a representation and an encoding that solve a particular problem of interest, by employing two coevolving populations. SAFE—Solution And Fitness Evolution—stemmed from our recently highlighting a fundamental problem recognized to confound algorithmic optimization: conflating the objective with the objective function [13]. Even when the former is well defined, the latter may not be obvious. SAFE is a commensalistic coevolutionary algorithm that maintains two coevolving populations: a population of candidate solutions and a population of candidate objective functions. To the best of our knowledge, SAFE is the first coevolutionary algorithm to employ a form of commensalism.

We first turn to OMNIREP (Sect. 15.2), followed by SAFE (Sect. 15.3), and end with concluding remarks (Sect. 15.4). This chapter summarizes our research; for full details please refer to [10–12]. NB: The code for both OMNIREP and SAFE is available at https://github.com/EpistasisLab/.
15.2 OMNIREP

One Representation to rule them all, One Encoding to find them,
One Algorithm to bring them all and in the Fitness bind them.
In the Landscape of Search where the Solutions lie.
One of the basic tasks of the EC practitioner is to decide how to represent individuals in the (evolving) population, i.e., precisely specify the genetic makeup of the artificial entity under consideration. As stated by Eiben and Smith [3]: “Technically, a given representation might be preferable over others if it matches the given problem better, that is, it makes the encoding of candidate solutions easier or more natural.”

One of the EC practitioner’s foremost tasks is thus to identify a representation—a data structure—and its encoding, or interpretation. These can be viewed, in fact, as two distinct tasks, though they are usually dealt with simultaneously. To wit, one might define the representation as a bitstring and in the same breath go on to state the encoding, e.g., “the 120-bit bitstring represents 4 numerical values, each encoded by 30 bits, which are treated as signed floating-point values”.

OMNIREP uses cooperative coevolution with two coevolving populations, one of representations, the other of encodings. The evolution of each population is identical to a single-population evolutionary algorithm—except where fitness is concerned (Fig. 15.2).
Fig. 15.2 Fitness computation in OMNIREP, where two populations coevolve, one comprising representations, the other encodings. Fitness is computed by combining a representation individual (R) with an encoding individual (E)
Selection, crossover, and mutation are performed as in a standard single-population algorithm. To compute fitness the two coevolving populations cooperate. Specifically, to compute the fitness of a single individual in one population, OMNIREP uses representatives from the other population [8]. The representatives are selected via a greedy strategy as the 4 fittest individuals from the previous generation. When evaluating the fitness of a particular representation individual, OMNIREP combines it 4 times with the top 4 encoding individuals, computes 4 fitness values, and uses the average fitness over these 4 evaluations as the final fitness value of the representation individual. In a similar manner OMNIREP uses the average of 4 representatives from the representations population when computing the fitness of an encoding individual. In [10] we applied OMNIREP successfully to four problems:

• Bitstring and bit count. Solve cubic polynomial regression problems, y = ax^3 + bx^2 + cx + d, where the objective was to find the coefficients a, b, c, d for a given dataset of x, y values (independent and dependent variables). An individual in the representations population was a bitstring of length 120. An individual in the encodings population was a list of 4 integer values, each of which specified the number of bits allocated to the respective parameter (a, b, c, d) in the representation individual. (A decode sketch for this scheme is given below, after Fig. 15.3.)

• Floating point and precision. Solve regression problems, y = Σ_{j=0}^{49} a_j x^{e_j}, where a_j ∈ R ∩ [0, 1], x ∈ R ∩ [0, 1], e_j ∈ {0, . . . , 4}, j = 0, . . . , 49. An individual in the representations population was a list of 50 real values ∈ [0, 1] (the coefficients a_j). An individual in the encodings population was a list of 50 integer values, each specifying the precision of the respective coefficient, namely, the number of digits d ∈ {1, . . . , 8} after the decimal point.

• Program and instructions. Find a program that is able to emulate the output of an unknown target program. We considered the evolution of a program composed of 10 lines, each line executing a mathematical, real-valued, univariate function, or instruction. The representation individual was a program comprising 10 lines, each one executing a generic instruction of the form x=fi(x), where fi ∈ {f1,...,f5}. The program had one variable, x, which was set to a specific value v at the outset, i.e., to each (10-line) program, the instruction x=v was added as the first line. v was thus the program’s input. After a program finished execution, its output was taken as the value of x. To run a program one needs to couple it with an encoding individual, which provides the specifics of what each fi performs. Table 15.1 shows an example.

• Image and blocks. Herein, we delved into evolutionary art, wherein artwork is generated through an evolutionary algorithm. Our goal was to evolve images that closely matched a given target image, a “standard of beauty” as it were. The representation individual’s genome was a list of pixel indexes, with each index considered the start of a same-color block of pixels. The encoding individual was a list equal in length to the representation individual, consisting of tuples (b_i, c_i), where b_i was block i’s length, and c_i was block i’s color. If a pixel was uncolored by any block it was assigned a default base color. Sample evolved artwork is shown in Fig. 15.3.
Table 15.1 OMNIREP ‘program and instructions’ experiment: sample representation and encoding individuals, the former being a 10-line program with generic instructions, and the latter being the instruction meanings

Representation    Encoding
x=v               f1: mul10
x=f1(x)           f2: fabs
x=f2(x)           f3: tan
x=f3(x)           f4: mul10
x=f4(x)           f5: minus2
x=f2(x)
x=f2(x)
x=f5(x)
x=f2(x)
x=f1(x)
x=f5(x)
Fig. 15.3 Sample artwork evolved by OMNIREP
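As promised in the ‘bitstring and bit count’ item above, here is a minimal sketch of how a representation individual (a bitstring) might be interpreted through an encoding individual (per-coefficient bit widths). The mapping of each raw integer into a coefficient value is our assumption; the chapter does not specify the decode.

```python
def decode(rep_bits, enc_widths):
    """Split the bitstring into one slice per coefficient (a, b, c, d),
    as dictated by the encoding individual, and map each slice to a
    number (here: normalized to [0, 1], an assumption)."""
    assert sum(enc_widths) <= len(rep_bits)
    values, pos = [], 0
    for w in enc_widths:
        if w == 0:
            values.append(0.0)
            continue
        raw = int("".join(map(str, rep_bits[pos:pos + w])), 2)
        values.append(raw / (2 ** w - 1))
        pos += w
    return values

# A 120-bit representation read as four coefficients of 40/30/30/20 bits:
a, b, c, d = decode([1, 0] * 60, [40, 30, 30, 20])
```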
OMNIREP was able to solve all problems successfully. Moreover, it usually found better encodings (e.g., more compact—using fewer bits or less precision) than fixed-representation schemes, with no degradation in performance. For full details see [10].
15.3 SAFE

We have recently highlighted a fundamental problem recognized to confound algorithmic optimization, namely, conflating the objective with the objective function [13]. Even when the former is well defined, the latter may not be obvious. We presented an approach to automate the means by which a good objective function might be discovered, through the introduction of SAFE—Solution And Fitness Evolution—a commensalistic coevolutionary algorithm that maintains two coevolving populations: a population of candidate solutions and a population of candidate objective functions [11, 12]. Consider a robot navigating a maze, wherein the challenge is to evolve a robotic controller such that the robot, when placed in the start position, is able to make its way to the goal. It seems intuitive that the fitness of a given robotic controller
be defined as a function of the distance from the robot to the objective, as done, e.g., by Lehman and Stanley [7]. However, reaching the objective may be difficult since the robot is faced with a deceptive landscape, where higher fitness (i.e., being reasonably close to the goal) may not imply that the robot is “almost there”. It is quite easy for the robot to attain a fairly good fitness value, yet be stuck behind a wall in a local optimum—quite far from the objective in terms of the path needed to be taken. Indeed, our experiments with such a fitness-based evolutionary algorithm [12] produced the expected failure, demonstrated in Fig. 15.4. One solution to this conflation problem was offered by Lehman and Stanley [7] in the form of novelty search, which ignores the objective and searches for novelty. However, novelty for the sake of novelty alone lacks incentive for solutions that reach and stay at the objective. Perhaps, though, the issue lies with our ignorance of the correct objective function. That is the motivation behind the SAFE algorithm. SAFE is a coevolutionary algorithm that maintains two coevolving populations: a population of candidate solutions and a population of candidate objective functions. The evolution of each population is identical to a standard, single-population evolutionary algorithm—except where fitness computation is concerned, as shown in Fig. 15.5. We applied SAFE to two domains: evolving robot controllers to solve mazes [12] and multiobjective optimization [11]. Applying SAFE within the robotic domain, an individual in the solutions population was a list of 16 real values, representing the robot’s control vector (“brain”). The controller determined the robot’s behavior when wandering the maze, with its phenotype taken to be the final position, or endpoint. The endpoint
Fig. 15.4 In a maze problem a robot begins at the start square and must make its way to the goal square (objective). Shown above are paths (green) of robots evolved by a standard evolutionary algorithm with fitness measured as distance-to-goal, evidencing how conflating the objective with the objective function leads to a non-optimal solution
[Figure 15.5 appears here. Panel (a), “SAFE”: a population of solutions S1…Sn and a population of objective functions O1…Om coevolve; each Si is evaluated against every candidate Oi, the fitness of Si being the largest score over O1…Om, while the fitness of each Oi is its ‘genotypic’ novelty; both populations evolve through elitism, selection, crossover, and mutation. Panel (b), “Standard Evolutionary Algorithm”: a single population of solutions evaluated by one fixed objective function O.]
Fig. 15.5 A single generation of SAFE (a) vs. a single generation of a standard evolutionary algorithm (b). The numbered circles identify sequential steps in the respective algorithms. The objective function can comprise a single or multiple objectives
was used to compute standard distance-to-goal fitness and to compute phenotypic novelty: compare the endpoint to all endpoints of current-generation robots and to all endpoints in an archive of past individuals whose behaviors were highly novel when they emerged. The final novelty score was then the average of the distances to the 15 nearest neighbors. An individual in the objective-functions population was a list of 2 real values [a, b], each ∈ [0, 1]. Every solution individual was scored by every candidate objective-function individual in the current population (Fig. 15.5a). Candidate SAFE objective functions incorporated both ‘distance to goal’ (the evolving a parameter) as well as phenotypic novelty (the evolving b parameter) in order to calculate solution fitness, weighting the two objectives in a simple linear fashion. The best (highest) of these objective-function scores was then assigned to the individual solution as its fitness value. As for the objective-functions population, determining the quality of an evolving objective function posed a challenge. Eventually we turned to a commensalistic coevolutionary strategy, where the objective functions’ fitness did not depend on the population of solutions. Instead, it relied on genotypic novelty, based on the objective-function individual’s two-valued genome, [a, b]. The distance between two objective functions was simply the Euclidean distance of their genomes. Each generation, every candidate objective function was compared to its cohorts in the current population of objective functions and to an archive of past individuals whose
behaviors were highly novel when they emerged. The novelty score was the average of the distances to the 15 nearest neighbors, and was used in computing objective-function fitness.

SAFE performed far better than random search and a standard fitness-based evolutionary algorithm, and compared favorably with novelty search. Figure 15.6 shows sample solutions found by SAFE (contrast this with the standard evolutionary algorithm, which always got stuck in a local minimum, as exemplified in Fig. 15.4). For full details see [12].

Fig. 15.6 Solutions to the maze problems, evolved by SAFE

The second domain we applied SAFE to was multiobjective optimization [11]. A multiobjective optimization problem involves two or more objectives, all of which need to be optimized. Applications of multiobjective optimization abound in numerous domains [16]. With a multiobjective optimization problem there is usually no single best solution, but rather the goal is to identify a set of ‘non-dominated’ solutions that represent optimal tradeoffs between multiple objectives—the Pareto front. Usually, a representative subset will suffice. Specifically, we applied SAFE to the solution of the classical ZDT problems, which epitomize the basic setup of multiobjective optimization [6, 17]. For example, ZDT1 is defined as:

f1(x) = x_1,
g(x) = 1 + (9/(k − 1)) Σ_{i=2}^{k} x_i,
f2(x) = 1 − √(f1/g).
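A direct implementation of ZDT1 as written above follows. Note that the classical ZDT1 definition multiplies the last term by g(x); on the optimal front g = 1, so the two forms coincide.

```python
import math

def zdt1(x):
    """ZDT1 for a solution vector x of k values in [0, 1]."""
    k = len(x)
    f1 = x[0]
    g = 1.0 + 9.0 / (k - 1) * sum(x[1:])
    f2 = 1.0 - math.sqrt(f1 / g)   # classical ZDT1 uses g * (1 - sqrt(f1/g))
    return f1, f2                  # both objectives are minimized

# On the optimal front (x2..xk = 0) g = 1, so f2 = 1 - sqrt(f1):
f1, f2 = zdt1([0.25] + [0.0] * 29)
assert abs(f2 - 0.5) < 1e-12
```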
The two objectives are to minimize both f1(x) and f2(x). The dimensionality of the problem is k = 30, i.e., the solution vector is x = (x_1, . . . , x_30), x_i ∈ [0, 1]. The utility of this suite is that the ground-truth optimal Pareto front can be computed and used to determine and compare multiobjective algorithm performance. SAFE maintained two coevolving populations. An individual in the solutions population was a list of 30 real values. An individual in the objective-functions population was a list of 2 real values [a, b], each in the range [0, 1], defining a candidate set of weights balancing the two objectives of the ZDT functions: a determined f1’s weighting and b determined f2’s weighting. Note that, as opposed to many other multiobjective optimizers, SAFE did not rely on measures of the Pareto front (i.e., a Pareto front was not employed to calculate solution fitness, or as a standard for selecting parent solutions to generate offspring solutions). We tested SAFE on four ZDT problems—ZDT1, ZDT2, ZDT3, ZDT4—recording the evolving Pareto front as evolution progressed. We compared our results with two very recent studies by Cheng et al. [1] and by Han et al. [4], finding that SAFE was able to perform convincingly better on 3 of the 4 problems. For full details see [11].
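To make SAFE's commensalistic arrangement concrete, here is a minimal sketch of the solution-fitness step as described for the maze domain above; `goal_score` and `novelty_score` are assumed helpers (the chapter does not give their exact form), and the multiobjective variant in [11] weights f1 and f2 analogously.

```python
def safe_solution_fitness(solution, objective_functions,
                          goal_score, novelty_score):
    """Each candidate objective function [a, b] linearly weights the two
    objectives; the solution keeps the best (highest) score it receives
    from any objective function in the current population."""
    return max(a * goal_score(solution) + b * novelty_score(solution)
               for a, b in objective_functions)

# Example: two candidate objective functions scoring one controller.
# fit = safe_solution_fitness(ctrl, [(1.0, 0.0), (0.4, 0.6)],
#                             goal_score, novelty_score)
```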
15.4 Concluding Remarks

The experimentation performed to date is perhaps not yet definitive, but we hope to have offered at least a proof-of-concept of our two new coevolutionary algorithms. Both have been shown to be successful in a number of domains. There are several avenues of future research that present themselves, including:

• Study and apply both algorithms to novel domains. We have been looking into applying SAFE to datasets created by the GAMETES system, which models epistasis [14]. We have also created additional art by devising novel encoding-representation couplings for OMNIREP (Fig. 15.7).
• Study the coevolutionary dynamics engendered by OMNIREP and SAFE.
• Cooperative or competitive versions of SAFE (which is currently commensalistic), i.e., finding ways in which the objective-function population depends on the solutions population.
• Examine the incorporation of more sophisticated evolutionary algorithm components (e.g., selection, elitism, genetic operators).

Acknowledgement This work was supported by National Institutes of Health (USA) grants AI116794, LM010098, and LM012601.
Fig. 15.7 Additional artwork created by OMNIREP using novel encoding-representation couplings (involving polygons, and horizontal and vertical blocks). Each row shows a single evolutionary run, from earlier generations (left) to later generations (right)
References

1. Cheng, T., Chen, M., Fleming, P.J., Yang, Z., Gan, S.: A novel hybrid teaching learning based multi-objective particle swarm optimization. Neurocomputing 222, 11–25 (2017) 2. Dick, G., Yao, X.: Model representation and cooperative coevolution for finite-state machine evolution. In: 2014 IEEE Congress on Evolutionary Computation (CEC), pp. 2700–2707. IEEE, Piscataway, NJ (2014) 3. Eiben, A.E., Smith, J.E.: Introduction to Evolutionary Computing. Springer-Verlag, Berlin (2003) 4. Han, F., Sun, Y.W.T., Ling, Q.H.: An improved multiobjective quantum-behaved particle swarm optimization based on double search strategy and circular transposon mechanism. Complexity 2018 (2018)
5. Hillis, W.: Co-evolving parasites improve simulated evolution as an optimization procedure. Physica D: Nonlinear Phenomena 42(1), 228–234 (1990) 6. Huband, S., Hingston, P., Barone, L., While, L.: A review of multiobjective test problems and a scalable test problem toolkit. IEEE Transactions on Evolutionary Computation 10(5), 477–506 (2006) 7. Lehman, J., Stanley, K.O.: Exploiting open-endedness to solve problems through the search for novelty. In: Proceedings of the Eleventh International Conference on Artificial Life (ALIFE). MIT Press, Cambridge, MA (2008) 8. Pena-Reyes, C.A., Sipper, M.: Fuzzy CoCo: A cooperative-coevolutionary approach to fuzzy modeling. IEEE Transactions on Fuzzy Systems 9(5), 727–737 (2001) 9. Potter, M.A., De Jong, K.A.: Cooperative coevolution: An architecture for evolving coadapted subcomponents. Evolutionary Computation 8(1), 1–29 (2000) 10. Sipper, M., Moore, J.H.: OMNIREP: originating meaning by coevolving encodings and representations. Memetic Computing (2019) 11. Sipper, M., Moore, J.H., Urbanowicz, R.J.: Solution and fitness evolution (SAFE): A study of multiobjective problems. In: Proceedings of 2019 IEEE Congress on Evolutionary Computation (CEC). IEEE (2019) 12. Sipper, M., Moore, J.H., Urbanowicz, R.J.: Solution and fitness evolution (SAFE): Coevolving solutions and their objective functions. In: L. Sekanina, T. Hu, N. Lourenço, H. Richter, P. García-Sánchez (eds.) Genetic Programming, pp. 146–161. Springer International Publishing, Cham (2019) 13. Sipper, M., Urbanowicz, R.J., Moore, J.H.: To know the objective is not (necessarily) to know the objective function. BioData Mining 11(1) (2018) 14. Urbanowicz, R.J., Kiralis, J., Sinnott-Armstrong, N.A., Heberling, T., Fisher, J.M., Moore, J.H.: GAMETES: A fast, direct algorithm for generating pure, strict, epistatic models with random architectures. BioData Mining 5(1), 16 (2012) 15. Zaritsky, A., Sipper, M.: Coevolving solutions to the shortest common superstring problem. Biosystems 76(1), 209–216 (2004) 16. Zhou, A., Qu, B.Y., Li, H., Zhao, S.Z., Suganthan, P.N., Zhang, Q.: Multiobjective evolutionary algorithms: A survey of the state of the art. Swarm and Evolutionary Computation 1(1), 32–49 (2011) 17. Zitzler, E., Deb, K., Thiele, L.: Comparison of multiobjective evolutionary algorithms: Empirical results. Evolutionary Computation 8(2), 173–195 (2000)
Chapter 16
2019 Evolutionary Algorithms Review

Andrew N. Sloss and Steven Gustafson
16.1 Preface

When attempting to find a perfect combination of chemicals for a specific problem, a chemist will undertake a set of experiments. They know roughly what needs to be achieved but not necessarily how to achieve it. A chemist will create a number of experiments, each a combination of different chemicals, following some theoretical basis. The experiments are played out and the promising solutions are identified and gathered together. These new chemical combinations are then used as the basis for the next round of experiments. This procedure is repeated until, hopefully, a satisfactory chemical combination is discovered. This discovery method is adopted because the interactions between the various chemicals are too complicated and potentially unknown, which effectively makes the problem-domain too large to explore. An Evolutionary Algorithm (EA) replaces the decision making by the chemist, using evolutionary principles to explore the problem-space. EAs handle situations that are too complex to be solved with current knowledge or capability using a form of synthetic digital evolution. The exciting part is that the solutions themselves can be original, taking advantage of effects or attributes previously unknown to the problem. EAs provide a framework that can be reused across different domains; they are mostly biologically-inspired algorithms that reside as a sub-branch of Artificial Intelligence (AI).
A. N. Sloss
Arm Inc., Bellevue, WA, USA
e-mail: [email protected]; [email protected]

S. Gustafson
MAANA Inc., Bellevue, WA, USA
Using Bertrand Russell’s method of defining philosophy [63], i.e. “as soon as definite knowledge concerning any subject becomes possible, this subject ceases to be called philosophy, and becomes a separate science”, AI research lives in between philosophy and science. Ideas transition from philosophical thought to applied science and truth. Within Computer Science, the AI field resides at the edge of knowledge and as such includes a distinct part which is more philosophical and another which is more rooted in science. In this review we cover one of the science aspects. AI science incorporates many areas of research, e.g. Neural Networks, Bayesian Networks, Evolutionary Algorithms, Correlation, Game Theory, Planning, Vision recognition, Decision making, Natural Language Processing, etc. It is a dynamically changing list as more discoveries are made or developed. Thirdly, Machine Learning is the engineering discipline which applies the science to a real world problem.

One of the overriding motivators driving Machine Learning in recent years has been the desire to replace rigid rule-based systems. A strong candidate has been emerging which is both adaptive and outcome-based. This technology relies on data-directed inputs. Jeff Bezos, CEO of Amazon, succinctly described this concept in a letter to shareholders in 2017 [46]: “Over the past decades computers have broadly automated tasks that programmers could describe with clear rules and algorithms. Modern Machine Learning techniques now allow us to do the same for tasks where describing the precise rules is much harder”. Also Kazuo Yano, Fellow and Corporate Officer of Hitachi Ltd, said in his keynote at the 2018 Genetic and Evolutionary Computation Conference (GECCO) [8] that the demand for more flexibility forces us to transition from traditional rule-oriented systems to future outcome-oriented ones.

The adaptability and transition to outcome-oriented systems means, from an end user perspective, there is more uncertainty surrounding the final result. This is construed as being either real or perceived. Rule-based systems are not impervious but tend to be deterministic and understandable, e.g. the most notable being the area of safety-critical systems. This uncertainty creates the notion of User Control Attributes (UCA). The UCA include the concepts of limiters [74], explainability [4], causality [58], fairness [2, 82] and correction [79]. These attributes have seen a lot of scrutiny in recent years due to high-profile public errors, as detailed in the AI Now 2018 Report [79]. The report goes into the details of fairness and highlights the various procedures, transparency and accountability required for Machine Learning systems to be safely applied in real social environments.

Also worth mentioning is the International Organization for Standardization (ISO), which has formed a study group focusing specifically on Trustworthiness [25]. The study will be investigating methods to improve basic trust in Machine Learning systems by exploring transparency, verify-ability, explainability, and control-ability. The study will also look at mitigation techniques. The goal is to improve overall robustness, resiliency, reliability, accuracy, safety, security and privacy; and by doing so hopefully minimize source biases. For this review we will limit the focus to research, while at the same time being cognizant of the dangers of real world deployment.
Control imposes a different level of thinking, where researchers are not just given a problem to solve but where the solution requires a model justifying the outcome. That model has to provide the answers to the main questions: Does the algorithm stay within limits/restrictions? Is the algorithm explainable? Can the algorithm predict beyond historical data? Does the algorithm avoid system biases, or even side-step replicating human prejudices [21]? And finally, can the algorithm be corrected? These attributes are not mutually exclusive and in fact intertwine with each other. We can see a trend where modern Machine Learning algorithms will be rated not only on the quality of the results but on how well they cope with the user-demanded control attributes. These attributes are to be used as a basis for a new taxonomy.

This is a good point to start discussing the Computer Industry. The industry itself is facing a set of new challenges and simultaneously adding new capabilities, as summarized below:

• Silicon level: Groups are starting to work on the problem of mass silicon production at the 3-nanometer scale and smaller [22]. This involves designing gates and transistors at a scale unimaginable 10 or even 5 years ago, using enhanced lithographic techniques and grappling with quantum tunneling issues. These unprecedented improvements have allowed other areas higher up the software stack to flourish. Unfortunately these advancements are slowing down, as the current techniques hit both physical and economic limitations. This situation is called the End of Moore's Law (EoML) [11, 67].

• System level: A number of levels above the silicon lies the system level, which has also seen some impressive advancements, with the world-wide web, network infrastructure and data centers that connect everyone with everyone. The scale of the system-level advancements has opened up the possibility of mimicking small parts of the human brain. One such project is called SpiNNaker [29]. SpiNNaker was upgraded and switched on in November 2018; it now consists of a million interconnected ARM cores, each executing Spiking Neural Networks (SNN) [49]. Even with all this hardware capability, estimates suggest that it is only equivalent to about 1% of a human brain.

• Software design level: Software design, at the other end of the spectrum, has been constantly pursuing automation, the hope being that automation leads to the ability to handle more complicated problems, which in turn provides more opportunities. New programming languages, new software paradigms and higher level data-driven solutions all contribute to improving software automation.

To handle these new challenges and capabilities requires a continuously changing toolbox of techniques. EAs are one such tool, and like other tools they have intrinsic advantages and disadvantages. Recently EAs have seen a resurgence of enthusiasm, which comes at an interesting time as other branches of Machine Learning become mature and crowded. This maturity forces some researchers to explore combinations of techniques. As can be seen in this review, much of the new focus and vigor is centered upon hybrid solutions; especially important is the area of combining evolutionary techniques with Artificial Neural Networks (ANNs).
16.2 Introduction

EAs are not a new subject. In fact, as we look back at some of the early computing pioneers, we see examples of evolutionary discovery. For example, both Alan Turing [76] and John von Neumann [55] formed ideas around Biological Automation, Biological Mathematics and Machine Learning. These early visionaries focused on the fact that exploitative methods could only go so far to solve difficult problems, and that more exploratory methods were required. The main difference between the two techniques is that exploitative methods focus on direct local knowledge to obtain a solution, whereas exploratory methods take a more stochastic approach (leaping into the unknown).

Figure 16.1 shows an idealized view of the changes to hardware capability and algorithmic efficiency over a time period. The figure shows the relationship between improvements in hardware and the types of problems that can be addressed. Before 2010, the computing industry mostly focused on exploitative problems, "getting things to work efficiently". Today, due to hardware improvements, we can look at exploratory algorithms that "discover". At the extremes of the X-Y axis, top-right and bottom-right, lie the respective future potentials of hardware and software, i.e. the unknown.

Note: the End of Dennard Scaling [24] marks the point in time when transistor shrinkage no longer sustained a constant power density. In other words, static power consumption now dominates the power equation for silicon, forcing the industry to use clever frequency and design duplication techniques to mitigate the problem. Deep Learning [34] represents the software resurgence of Neural Nets, due to hardware improvements and the availability of large training data sets.
Fig. 16.1 Hardware capability and algorithmic efficiency over an idealized time line
The Hardware Capability curve (top-right of the graph) represents future hardware concepts and requires subsequent manufacturing breakthroughs. These future concepts could include alternative computing models, Quantum computing, Neuromorphic architectures, new exotic materials, Asynchronous processing, etc. By contrast, the Algorithmic Efficiency curve (bottom-right) represents future breakthroughs in subjects like Artificial Life and Artificial General Intelligence (AGI); more philosophical goals than either science or engineering. Both require significant advancements beyond what we have today.

With these future developments, the desire is to set a problem-goal and let the "system" find the correct answer. This is extremely simple to state but highly complex to implement, and, more importantly, next to impossible to implement without direct insertion of domain specific knowledge into the algorithm in question. The No Free Lunch theorem [81] states that no algorithm exists which outperforms every other algorithm for every problem. This means that, to be successful, each algorithm requires some form of domain specific knowledge to be efficient. The more domain specific knowledge applied to an algorithm, the greater the likelihood of beating a stochastic algorithm; a stochastic algorithm can search any problem without the requirement of domain knowledge. EAs are directed population-based stochastic search algorithms. As hardware capability increases, more of these types of problems can be handled. It is the constraints of time and efficiency that force domain knowledge to be inserted into an algorithm.

This paper provides an up-to-date review of the various EAs, their respective options and how they may be applied to different problem-domains. EAs are a family of biologically-inspired algorithms that take advantage of synthetic methods, namely management of populations, replication, variability and finally selection, all based upon the fundamental theory of Darwinian evolution [23]. As a general rule the algorithms are often simple at a high level but become increasingly complex as more domain knowledge is put into the system.

Another term frequently used to describe this style of algorithm is metaheuristics. A metaheuristic is a higher-order concept: an algorithm that systematically pursues the identification of the best solution within a problem-space. EAs obviously fall under this class of algorithms, and a lot of the academic literature frequently refers to metaheuristics, the main differentiator of EAs being that they are biologically inspired.

For these algorithms to execute, some form of quantitative goal has to be set. This goal is the metric for success. The success metric can also be used as a method to exit the algorithm, but most often a function of time is used. The algorithm can also be designed to be continuous, i.e. never ending, always evolving. The goal itself can be made up of a single objective or multiple objectives. For multi-objective goals, the search and optimization is towards the Pareto-optimal curve, i.e. attempting to satisfy all the objectives to some degree.
As an important side note, EAs are mostly derivative-free, in that the majority do not require a derivative function to measure change in order to determine the optimal result. Lastly, GECCO 2018 [8], in Kyoto, saw a number of trends, neuroevolution being one of the more notable ones. Neuroevolution is the method of using EAs to configure ANNs (see Sect. 16.5.2). These new trends will be tracked in more detail in future reviews if they remain popular in the evolutionary research community.
16.2.1 Applications

EAs are applied to problems where traditional exploitative or pure stochastic algorithms fail or find it difficult to reach a conclusion. This is normally due to constraints on resources, a high number of dimensions or complex functionality. Solving these problems exactly would require exceeding the available resources. In other words, given infinite resources and infinite compute capability, it could be possible for a traditional exploitative or stochastic algorithm to reach a conclusion.

By contrast, EAs can be thought of as the algorithms-of-last-resort. The problems in question are inherently complex, the size of the problem-domain is extreme, or the mere number of objectives makes the problem impossibly difficult to explore. In these circumstances the solutions are more likely "good enough" solutions rather than solutions with high precision or accuracy (but this does not preclude precision or accuracy being a goal). EAs tend to be poor candidates for simple problems where standard techniques could easily be used instead.

They can be applied to a broad set of problem types. These types range from variable optimization problems to creating new conceptual designs. In both instances, novelty can occur which may exceed human understanding or ability. These problem-domains can be broken down into optimization, new design and improvement.

• Variable optimization consists of searching a variable space for a "good" solution, where the number of variables being searched is large, potentially a magnitude greater than in a traditional programming problem. This is where the goal can be strictly defined.

• New structural design consists of creating a completely new solution, for example a program or a mechanical design. A famous example of a non-anthropomorphized solution is the evolutionarily designed NASA antenna [36]. The antenna design was not necessarily something a human would have created. This is where a particular outcome is desired but it is unknown how that outcome can be constructed.

• Improvement is where a known working solution is placed into the system and EAs explore potentially better versions. This is where a solution already exists and there is a notion that a better solution can be discovered.
Being more specific on the applications side, EAs have been applied to a wide range of problems, from leading-edge research on self-assembling cellular automata [54] to projecting future city landscapes for town planners (see Sect. 16.6.2.3).
16.3 Fundamentals of Digital Evolution

Before diving directly into the fundamentals, we should stress that there are two ways to describe evolution. The first is from a pure biology point of view, dealing with the various interactions of biological systems, and the other is from the Computer Science perspective. In this paper we will keep the focus on Computer Science with a hint of biology. Other texts may approach the explanation from a different perspective.

Evolution is a dynamic mechanism that includes a population of entities (potential solutions), where some form of replication, variation and selection occurs on those entities, as shown in Fig. 16.2. This is a stochastic but guided process where the desire is to move towards a fixed goal.

• Replication: is where new entities are formed, either creating a completely new generation of the population or altering specific individuals within the same population (called steady-state).

• Variation: is what makes the population diverse. There are in effect two forms of variation, namely recombination and mutation. Recombination, more commonly called crossover, creates new entities by combining parts of other entities. By contrast, mutation injects randomness into the population by stochastically changing specific features of the entities.

Fig. 16.2 Idealized Darwinian evolution
Fig. 16.3 Basic digital process
• Selection: is based on Darwin's natural selection, or Survival of the Fittest, where the entities that show the most promise are carried forward, with variation applied to the selection to create the children of the next generation.

As a recap, digital evolution is one in which a population of entities goes through generational changes. Each change starts with a selection from the previous generation. Each entity is evaluated against a known specific goal, i.e. the fitness is established and used as input to the selection algorithm. Once a selection is made, replication occurs with different degrees of variation. The variation is either by some form of recombination from the parent selection and/or some stochastic mutation. Figure 16.3 shows the basic digital process that is adopted by many EAs. The high-level synthetic evolutionary process as shown is a relatively straightforward and simple procedure. Within the process there are many complex nuances and variations, which are influenced by our understanding of evolutionary biology.
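To make the basic loop concrete, the following is a minimal sketch in Python (our own illustration, not taken from any particular framework); the init_population, fitness, select, crossover and mutate callables are placeholders for the problem-specific components discussed in the subsections below.

```python
def evolve(init_population, fitness, select, crossover, mutate,
           generations=100):
    """Generic generational EA loop: evaluate, select, replicate with
    variation, repeat. `select` takes (fitness, entity) pairs and
    returns one parent entity."""
    population = init_population()
    for _ in range(generations):
        scored = [(fitness(entity), entity) for entity in population]
        next_generation = []
        while len(next_generation) < len(population):
            parent_a = select(scored)              # selection (Sect. 16.3.6)
            parent_b = select(scored)
            child = crossover(parent_a, parent_b)  # recombination
            next_generation.append(mutate(child))  # stochastic mutation
        population = next_generation
    return max(population, key=fitness)
```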
16.3.1 Population

A population is a set of solution candidates or entities. Each population is created temporally and has the potential to age with each evolutionary cycle, i.e. generation. The initial population is either seeded randomly or sampled from a set of known solutions. There are two ways for the population to be evolved. The first way consists of evolving the entire population to create the next generation. The second way is to evolve individual entities within the population (called steady-state). Entities can die or thrive between generations.
The size of the population can be fixed or dynamic throughout the evolutionary process. Fixed is when the number of entities is kept constant; dynamic is when the population size changes. If dynamic, a larger initial population may bring some potential advantages. This is especially true when choosing the first strong candidates for further evolution. A population can also be divided into subgroups, called demes. A deme is a biological term that describes a group of evolving organisms with the same taxonomy. Demes can evolve in isolation within the population [13], or they can be made to interact with the other demes. This is often described using an island and canoe metaphor, where the demes are the islands and the interactions occur as canoes move between the islands depositing new entities.
16.3.2 Population Entities

The population entities, more commonly called phenotypes, can be of any type, provided that type allows recombination and/or mutation to be applied. The main styles are strings and programs. In this context "programs" is a loose term. The strings are fixed length and more likely represent variables or parameters. The programs tend to be more complex: effectively any structure, from a traditional computer programming language to a hardware layout, or even a biological description, can be a program. Metaheuristics operate at a high level, making the process generic, so that nearly any idea or concept can be used as the evolved substrate, including concepts like a Deep Learning Neural Network or potentially a causality equation. EAs are neutral on the subject, but when it comes to specific problems, the problems themselves tend to be strictly defined.

There are two important concepts to consider with population entities, namely niching and crowding [66]. These terms are associated with diversity, which is briefly mentioned in Sect. 16.3.1. Niching means individual entities survive generations in distinct areas of the search space. By contrast, crowding means replacing an individual entity with similarly featured individuals.
16.3.3 Generation

A generation is a specific step in an evolutionary population, where individuals in the population undergo change via crossover and/or mutation. For EAs with a fixed run, a restriction is imposed on the number of generations. By contrast, continuous EAs have no upper bound. EAs can vary from small populations with a high number of generations to large populations with a significantly lower number of generations. These trade-offs need to be made in terms of computation time. For example, some scenarios will have expensive evaluation functions (so fewer generations with
larger populations might be preferred). The number of generations required is both algorithm and problem-domain specific. The children in the next generation can also include the parents from the previous generation. This is called elitism, where the strongest entities remain in the population.
16.3.4 Representation and the Grammar

The representation, more commonly called the genotype, is what EAs manipulate. There are different types of representations, which include strings, tree structures, linear structures and directed graphs. Each representation has advantages and disadvantages, and the choice is dependent on the specific problem-domain. The representation determines what actually gets manipulated when recombination occurs; some lend themselves more to fractal/recursive-like procedures, while others are more sequential and linear.

By contrast, the rules are defined by the grammar. An EA's grammar provides the expressive boundaries for the representation. For example, in a mathematical domain, adding a function like sin(x) to the grammar provides extra richness and complexity to the representation. Similarly, for string-based representations, adding or removing a variable from the grammar changes the expressiveness. Other decisions can be made for structural EAs, such as including more constructors (e.g. addition) than destructors (e.g. subtraction), vice versa, or, more commonly, keeping the grammar entirely balanced, i.e. the same number of constructors as destructors. Possible grammars include a subset of Python or C, assembly instructions or even pure LLVM intermediate code [47]. These are all potential outputs of EAs.
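As an illustrative sketch (our own toy grammar, not one drawn from the references), a tree representation for a small mathematical domain can be captured in a few lines of Python; note how enlarging FUNCTIONS or TERMINALS directly changes the expressiveness of what can evolve.

```python
import random

# Toy grammar: a function set (name -> arity) and a terminal set.
FUNCTIONS = {"add": 2, "sub": 2, "mul": 2}
TERMINALS = ["x", 1.0, 2.0]

def random_tree(depth=3):
    """Grow a random expression tree (nested tuples) from the grammar."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    name, arity = random.choice(list(FUNCTIONS.items()))
    return (name,) + tuple(random_tree(depth - 1) for _ in range(arity))

def evaluate(tree, x):
    """Interpret a tree for one input value x."""
    if tree == "x":
        return x
    if not isinstance(tree, tuple):
        return tree                      # numeric constant terminal
    name, *children = tree
    a, b = (evaluate(c, x) for c in children)
    return {"add": a + b, "sub": a - b, "mul": a * b}[name]
```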
16.3.5 Fitness

Fitness is the measure of how close a result is to a desired goal, and a fitness function is the algorithm used to calculate the fitness value. Calculating the fitness value for an entire population is a time-consuming activity. The time taken is related to the complexity of the fitness function and the size of the population being evaluated.

The fitness function is used to select individuals for inclusion in future populations. The function can be constant throughout the evolutionary process, or it can change depending upon the desired goal or situation. It is this ability of the fitness function to change that makes EAs highly adaptive. The fitness can be calculated using different methods, e.g. Area Under the Curve (AUC) from a test set, measurement of a robot responding to a set of trials, etc.; each method is problem dependent. For supervised learning it is calculated as the difference between the desired goal and the actual result obtained from the entity. Conversely, for unsupervised learning there are other methods. Once the fitness
has been determined, the entities can be ranked/sorted by strength. The stronger candidates are more likely to be chosen as parents for the next generation.
16.3.6 Selection

Selection is the method where individuals in the current population are chosen to be the starting parents for the next generation. In digital evolution, parents are not restricted to two; any number can be chosen. A simple method is to use the fitness value to order the population; this method is called ranked selection. A population can be organized from the highest to lowest fitness value. The highest entities are then used as the starting parents for the next generation, and so forth. As mentioned in Sect. 16.3.3, elitism is where the chosen parents remain in the next population rather than being discarded.

Diversity [66] is an important concept when it comes to a healthy population. Healthy populations are important for discovering "good" solutions. In other words, a diverse population has a higher exploratory capability. This tends to be important especially at the start of the search process. Diversity is directly associated with the amount of variation applied to the entities. It can be argued that local selection schemes [13, 66] (steady-state) are naturally more likely to preserve diversity than global selection schemes. Local selection means evolution is potentially occurring at different rates across the population. An example of a local selection scheme is tournament selection. As the name implies, selection requires running a tournament between a randomly chosen set of individuals. The "winner" of each tournament is then selected for further evolution. Note: there are other selection schemes which are not covered here.
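A minimal sketch of tournament selection (a hypothetical helper of our own, not from a named framework) shows how little machinery the scheme needs; the tournament size k tunes the selection pressure.

```python
import random

def tournament_select(population, fitness, k=3):
    """Run a tournament between k randomly chosen entities; fittest wins.

    k = 1 degenerates to random selection; a larger k raises the
    selection pressure and, with it, the risk of losing diversity.
    """
    contestants = random.sample(population, k)
    return max(contestants, key=fitness)
```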
16.3.7 Multi-Objective

As the name implies, multi-objective is the concept of not having a single objective but multiple objectives. These objectives may act against each other in complex ways, i.e. conflict. It is this interaction which makes multi-objective problems so complex. A typical example in the mobile phone industry is finding the optimum position between performance, power consumption (longevity) and cost. These three objectives can be satisfied to different degrees, effectively giving the end consumer a choice of options. Too much performance may sacrifice longevity and cost, low cost may sacrifice performance and longevity, and so on. EAs are extremely good at exploring multi-objective problems, where the fitness is a compromise along the Pareto curve.
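The underlying bookkeeping is Pareto dominance. As a hedged sketch (our own illustration, assuming all objectives are maximized), the non-dominated set of objective vectors can be extracted as follows:

```python
def dominates(a, b):
    """a Pareto-dominates b: no worse everywhere, strictly better somewhere."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(objective_vectors):
    """Keep only the non-dominated vectors, i.e. the Pareto curve."""
    return [a for a in objective_vectors
            if not any(dominates(b, a) for b in objective_vectors if b != a)]

# Hypothetical phone design points: (performance, longevity, -cost).
print(pareto_front([(5, 2, -3), (4, 4, -4), (3, 2, -3), (5, 2, -2)]))
# -> [(4, 4, -4), (5, 2, -2)]
```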
16.3.8 Constraints

Constraints are the physical goals, as compared with the objectives, which are the logical goals. The physical goals represent real-world limitations; they take a theoretical problem and make it realistic. Constraints are the limitations imposed on the entities. A constraint could be code size, energy consumption or execution time. EA researchers have discovered that some of the most interesting and potentially best solutions tend to lie somewhere at the edge of the constraint boundaries.
16.3.9 Exploitative-Exploratory Search

EAs use recombination and mutation for exploitative and exploratory search. The more mutation that occurs, the more exploratory the search; correspondingly, the less mutation, the more exploitative the search. EAs can be at either end of the spectrum, with only recombination (more exploitative) or only mutation (more exploratory). The ratios of recombination and mutation can be fixed or dynamic. By making the ratios dynamic, EAs can adapt to changing circumstances. This shift may occur when the potential "good" solution is perceived to be either near or far. Another way to view this is that mutation is a local search and recombination (or crossover) is a global search. Recombination, despite using only existing genetic material, often takes much larger jumps in the search space than does mutation.
16.3.10 Execution Environment, Modularity and System Scale

EAs can execute as a process within an Operating System, as a self-constructed dynamic program fed into a language interpreter (e.g. exec(open("ea.py").read()) in Python), or within a simulator, where the simulator can be a physics simulator, a biological simulator and/or a processor simulator. EAs are generic, and literally any executing model can be used to explore a desired problem-domain.

To handle larger problems, some form of modularity has to be adopted. There are many schemes, including ones that introduce tree-based processing or Byzantine-style algorithms, i.e. voting systems. Modularity tends to work best when the problem granularity is relatively small and concise, such as a function call. The normal questions asked are (1) whether all the function calls use the same input data, (2) whether all information is shared between the functions, or (3) whether a hierarchy of evolution has to be adopted, i.e. from function call to full solution or from local to global parameter configuration.

EAs can scale from one process to many processes running on several server clusters [37, 38, 48, 56]. These compute clusters are called islands. They operate in a parallel and/or distributed fashion. A parallel system is a set of evolving
islands working together towards a common goal. Genetic material or solutions can be shared. We should highlight that there are a few options when it comes to parallel topologies. By comparison, the distributed approach, which can include parallel islands, is about the physical aspect of running on various hardware systems. With both approaches, scale-out coordination becomes an important issue, i.e. the management of the islands becomes part of the performance equation.

Lastly, it is important to mention co-evolution in the context of scaling. Co-evolution is where two or more evolving populations (effectively species) start interfering/cooperating with each other. This is particularly important when digital evolution is being used to build much larger systems. Both modularity and system scale add an extra layer of complexity, but scale is a necessity for answering more complicated problems.
16.3.11 Code Bloat and Clean-Up

EAs which manipulate structures can quite easily run into the problem of bloat [60]. Bloat is a byproduct of exploring a problem-domain. Historically this was a major issue with the earlier algorithms due to hardware limitations. Today, modern systems have an abundance of compute and storage. This does not mean the limitation has gone away, but it is mitigated to a certain extent. Bloat may even be critically important to the evolutionary process: in nature, the more that is discovered, the more it seems that very little is actually wasted. Non-coding regions are not bloat; they are crucial parts of the process of molecular mechanisms.

There are structures or code sequences that have no value, as in the result is circumvented by other code or structures. These neutral or noneffective structures are called introns. Intron is a biological term referring to the noneffective fragments found in DNA. For software programs, introns are code sequences that are noneffective or neutral; these sequences can be identified and cleaned up, i.e. eliminated. The elimination can occur either during the evolutionary process itself or at a final stage. Note that introns can be critically important to the process, so early removal can be detrimental.
16.3.12 Non-convergence, or Early Local Optima

There has been a lot of research focusing on the problems of non-convergence and early local-optima solutions. Non-convergence means that the evolving entities are not making enough progress towards a solution. An early local optimum means a sub-optimal solution has been found at the beginning of the evolutionary process. This sub-optimal solution causes the algorithm to limit further exploration, reducing the chance of finding a better solution.
Non-convergence is caused by many factors, including the possibility of not having the right data. EAs rely on stochastic processes to move toward a solution, which means that the paths taken are unique, unrepeatable and non-deterministic: unique in the sense that the paths taken are always different, unrepeatable in that randomness is used to determine the next direction, and non-deterministic in that the length of time to solution is variable. Getting to a potential solution may require reruns of the algorithm, each rerun potentially requiring some form of fine adjustment to help narrow in on a good solution. In the end, when everything else fails, more domain specific knowledge may have to be inserted before convergence eventually occurs. It is important to stress that the final solution may very well be deterministic and repeatable; it is the evolutionary process used to create the solution which may not be.

Similar to non-convergence is the problem of reaching a local extremum too early, where the EA then iterates persistently around a point without discovering a better solution. The techniques used for non-convergence tend also to be used to avoid the local-optimum scenario. Identification of a local optimum can be difficult since the global optimum is unknown. After multiple readjustments, hopefully the global optimum can be discovered.
16.3.13 Other Useful Terms

There are other terms which are worthy of a mention and brief description. The first are the Baldwin Effect [52] and Lamarckian Evolution [19], both important concepts for digital evolution. The Baldwin Effect is about how learned behavior affects evolution. This is important for EAs that improve towards a solution under techniques such as elitism (as briefly mentioned in Sect. 16.3.3). Lamarckian Evolution theorizes that children can inherit characteristics from the experiences gained by their parents; again, an important concept with direct implications for digital evolution.

By contrast, the term overfitting [15] describes a situation that should be avoided. It is when data noise containing irrelevant information combines with what is actually being discovered into one result (the configuration parameters). EAs overfit when the discovered parameters satisfy the complete data-set rather than the data for the specific exploration, making the result effectively useless for any future prediction using other input data sources. The potential concern is the increased risk of a false-positive outcome.

Lastly, Genetic Drift [1, 9] is a basic evolutionary mechanism found in nature, where some genotype entries leave more descendants (or parts) than others between generations. These descendants are in the population by random chance; they are in the next generation neither because of a strong attribute nor because of a higher fitness value. In nature this happens to all populations and there is little chance of avoidance. In EAs, genetic drift can result from a combination of factors, primarily related to selection, the fitness function and the representation. It happens through unintentional loss of genotypes: for example, by random chance a good genotype solution never gets selected for reproduction, or, if there is a "lifespan" to a solution, it dies before
it can reproduce. Normally such a genotype only resides in the population for a limited number of generations.
16.4 Traditional Techniques

In this section we briefly cover the traditional and well-known EAs. These tend to be the older and more mature techniques, frequently used by industry and research, and there are numerous support frameworks available to experiment with [26, 53, 68, 77]. Assume for each technique discussed that there are many more variations available. Figure 16.4 shows the relationships between the various traditional techniques. For this review we have decided to focus more on Genetic Programming and its various sub-categories. In future reviews this emphasis will likely change.
16.4.1 Evolutionary Strategy, ES

Evolutionary Strategy (ES) [48, 50, 66] is one of the oldest EAs, developed in the 1960s at the Technical University of Berlin; it usually involves only mutation and selection. Entities are selected using truncation selection: after the entities are evaluated, the entries below the truncation point are systematically removed from the population. The remaining parents are mutated to build up a new population.
Fig. 16.4 Relationships between traditional EA techniques
Modern implementations include Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES) [32] and the alternative Covariance Matrix Self-Adaptation Evolutionary Strategy (CMSA-ES). CMA-ES is thought to be complicated to implement; CMSA-ES is a newer alternative and is believed to be easier to implement [12].

What Problems Do ESs Solve? An ES is used to solve continuous parameter optimization. A parameter is defined by its type and interval range (upper and lower bounds). A continuous parameter can take any value within the interval range. The precision determines the minimum change value.
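A minimal (mu, lambda) ES with mutation-only variation and truncation selection can be sketched as follows (our own illustration with arbitrary parameter values; this is not CMA-ES, which additionally adapts the mutation distribution):

```python
import random

def es_mu_lambda(f, dim, mu=5, lam=20, sigma=0.5, generations=200):
    """(mu, lambda) ES minimizing f: Gaussian mutation plus truncation."""
    parents = [[random.uniform(-5, 5) for _ in range(dim)]
               for _ in range(mu)]
    for _ in range(generations):
        offspring = [[x + random.gauss(0, sigma)
                      for x in random.choice(parents)]
                     for _ in range(lam)]
        offspring.sort(key=f)       # truncation: keep the mu best offspring
        parents = offspring[:mu]
    return min(parents, key=f)

# Example: minimize the sphere function over three continuous parameters.
best = es_mu_lambda(lambda v: sum(x * x for x in v), dim=3)
```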
16.4.2 Genetic Algorithms, GA

The Genetic Algorithm (GA) [31, 48, 66] is the most common and popular among the EAs. A GA applies evolution to fixed-length strings, where the length of the string represents the dimensionality of the problem. These strings represent variables or parameters and are useful when exploring a problem-domain space with a large number of variables, normally beyond human or traditional methods. As well as being popular, the GA is also the most commonly taught algorithm among the various EAs.

Variables or parameters are converted to fixed-length strings; the strings are entities in the population and are evolved using crossover windows and mutation. Crossover is ubiquitous, but explicit crossover windows are not. By comparison to an ES (Sect. 16.4.1), a GA tends to be more generic. It should be noted that a recent trend has emerged, in both GAs and ESs, where crossover is dropped and mutation is the sole mechanism for evolution (this differs from the earlier thinking expressed in the classical literature).

What Problems Do GAs Solve? GAs handle optimization and configuration problems where there are too many variables or parameters for a traditional method to succeed. The variables or parameters may interact, making a potential solution much harder to identify.
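As a hedged sketch of the classic bitstring GA (our own toy example on the OneMax problem, with illustrative parameter values):

```python
import random

def one_point_crossover(a, b):
    """Recombine two fixed-length strings at a random cut point."""
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def point_mutate(bits, rate=0.01):
    """Flip each bit independently with a small probability."""
    return [1 - bit if random.random() < rate else bit for bit in bits]

def ga(fitness, length=32, pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # ranked selection
        elite = pop[:pop_size // 2]               # elitism: parents survive
        pop = elite + [point_mutate(
                           one_point_crossover(random.choice(elite),
                                               random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = ga(fitness=sum)  # OneMax: maximize the number of ones in the string
```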
16.4.3 Genetic Programming, GP

Genetic Programming (GP) [41–43, 45], in contrast to GAs, manipulates structures, in particular executable programs or mathematical equations. Early GP was based on tree representations and used the LISP programming language as the grammar. LISP was chosen for its operator richness and because it was relatively easy to manipulate. More recently, other representations have been introduced and newer languages
such as Python have become popular as the main target. The recombination carries out the global search, whereas the mutation covers the local search. It is common for mutation to be limited to 5–10% of the population [45], though there are always exceptions, especially if the percentages are dynamically altered between generations [66].

What Problems Do GPs Solve? GP applies evolutionary techniques to code or functions. GP handles the manipulation of programs, so that problems that are linear, tree or directed-graph based can be explored. GP can produce original source code and can in fact find new, novel solutions to any structural-style problem. In industry, GP is mostly used to discover best-fit mathematical equations.
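Building on the expression-tree sketch from Sect. 16.3.4 (and assuming its random_tree and evaluate helpers are in scope), the two GP variation operators can be illustrated as follows; these helpers are our own, not taken from any GP framework:

```python
import random

def subtrees(tree, path=()):
    """Enumerate (path, subtree) pairs; a path indexes into nested tuples."""
    yield path, tree
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtrees(child, path + (i,))

def graft(tree, path, new):
    """Return a copy of tree with the subtree at path replaced by new."""
    if not path:
        return new
    i = path[0]
    return tree[:i] + (graft(tree[i], path[1:], new),) + tree[i + 1:]

def crossover(a, b):
    """Subtree crossover (global search): graft a piece of b into a."""
    site, _ = random.choice(list(subtrees(a)))
    _, donor = random.choice(list(subtrees(b)))
    return graft(a, site, donor)

def mutate(tree):
    """Subtree mutation (local search): regrow a site from the grammar."""
    site, _ = random.choice(list(subtrees(tree)))
    return graft(tree, site, random_tree(depth=2))

def fitness(tree, samples):
    """Negated squared error against (x, y) samples; higher is better."""
    return -sum((evaluate(tree, x) - y) ** 2 for x, y in samples)
```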
16.4.4 Genetic Improvement, GI

Genetic Improvement (GI) [59] is a subclass of GP (Sect. 16.4.3) where, instead of a randomly seeded initial population, a working program is inserted as the starting point to spawn the entities of the first population. This is a powerful concept, since it not only searches for a better optimized solution but also has the potential to discover and correct faults in the original "working" code.

What Problems Do GIs Solve? GI solves an interesting problem, where either the working code is potentially un-optimized and a more optimized version is required, or legacy code needs to be brought up to current standards.
16.4.5 Grammatical Evolution, GE

Grammatical Evolution (GE) [26] is a powerful technique. It is yet another subclass of GP (Sect. 16.4.3), but instead of a fixed grammar defining the evolvable solutions, the grammar itself is selectable. A good example of GE is the PonyGE2 [26] tool, written in Python. It takes a standard Backus-Naur Form (BNF) grammar [10] as an input and uses it to evolve solutions. This is a powerful method, especially when dealing with more obscure programming languages. GE can also carry out GI (see Sect. 16.4.4). Note that the PonyGE2 source code is available on GitHub.

What Problems Do GEs Solve? GE solves the problem of evolving multiple programming languages using the same tool. As long as the language has a BNF-style definition, it can be evolved. This makes GE flexible across a number of problem-domains, especially ones which require a specific programming language.
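A hedged sketch of the GE genotype-to-phenotype mapping follows (our own toy grammar and mapper, not the PonyGE2 implementation or its input format): integer codons repeatedly choose which production rule to expand, with the genome re-read ("wrapped") if it runs out.

```python
GRAMMAR = {  # a toy BNF-style grammar
    "<expr>": [["<expr>", "<op>", "<expr>"], ["x"], ["1"]],
    "<op>": [["+"], ["-"], ["*"]],
}

def ge_map(genome, start="<expr>", max_wraps=2):
    """Expand start; each codon (mod rule count) picks a production rule."""
    output, stack, i = [], [start], 0
    while stack and i < len(genome) * (max_wraps + 1):
        symbol = stack.pop(0)
        if symbol in GRAMMAR:
            rules = GRAMMAR[symbol]
            i, choice = i + 1, genome[i % len(genome)] % len(rules)
            stack = list(rules[choice]) + stack
        else:
            output.append(symbol)
    # Symbols left unexpanded on the stack would mark the genome as
    # invalid in a real GE system.
    return " ".join(output)

print(ge_map([6, 1, 0, 2]))  # -> "x + 1"
```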
16.4.6 Linear Genetic Programming, LGP

Linear Genetic Programming (LGP) [13] is a subclass of GP (Sect. 16.4.3) and, as the name implies, uses a linear structure representation. The linear structure has some advantages over the more complicated tree or directed-graph structures. LGP is particularly useful for problems which are more sequential, for example optimizing low-level assembly output. It also makes the problem of manipulating complex structures easier, since it is a linear flow that is being evolved. Constructs like if-style control flow or loops are superimposed onto the linear structure. The linear aspect of this technique introduces an ordering constraint, which potentially has Turing Machine and/or Turing-completeness ramifications.

What Problems Do LGPs Solve? LGP solves problems that are sequential. This is particularly useful for optimizing programs and low-level assembly-style output, or any problem-domain where the problem being explored is about sequential ordering.
16.4.7 Cartesian Genetic Programming, CGP

Rather than linear or tree based, Cartesian Genetic Programming (CGP) [51] is based on Cartesian co-ordinates and directed graphs. One basic characteristic is that the population is small (e.g. a population size around five). The small population goes through a large number of generations. CGP is uniquely qualified to handle specific problems extremely well: where EAs are by default temporal, CGP introduces the concept of spatial awareness into EAs.

What Problems Do CGPs Solve? CGP has been shown to be useful for circuit layout design, since the logic components require some form of spatial awareness. Interestingly, since CGP is spatial, it can also be used to produce artistic designs/patterns. Recent research shows that CGP can achieve competitive results on the Atari benchmark set [80]. CGP can also encode ANNs by adding weights to the links in the graph, allowing it to do neuroevolution (see Sect. 16.5.2).
16.4.8 Differential Evolution, DE

Differential Evolution (DE) [48, 66] is an example of a non-biologically inspired algorithm that nevertheless falls under the metaheuristic category. It is based on iterating a population towards a quality goal. The iteration involves recombination, evaluation and selection, and it avoids the need for gradient descent. A new candidate is built from a weighted difference between two random candidates added to a third candidate,
shifting the population to a higher quality level. Each new population effectively self-organizes.

What Problems Do DEs Solve? DE works best on Boolean, integer and real-valued spaces. DE was developed specifically to find the Chebyshev polynomial coefficients and for the optimization of digital filter coefficients.
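One DE generation can be sketched as follows (a minimal DE/rand/1/bin step of our own, with illustrative F and CR values); note the weighted difference of candidates that gives the technique its name:

```python
import random

def de_step(pop, f, F=0.8, CR=0.9):
    """One DE/rand/1/bin generation, minimizing f; needs len(pop) >= 4."""
    new_pop = []
    for i, target in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        donor = [a[k] + F * (b[k] - c[k]) for k in range(len(target))]
        j_rand = random.randrange(len(target))  # force one donor gene through
        trial = [donor[k] if (k == j_rand or random.random() < CR)
                 else target[k] for k in range(len(target))]
        # Greedy selection: the trial replaces the target only if no worse.
        new_pop.append(trial if f(trial) <= f(target) else target)
    return new_pop
```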
16.4.9 Gene Expression Programming, GEP

Gene Expression Programming (GEP) [27, 28] is a subclass of both GA (Sect. 16.4.2) and GP (Sect. 16.4.3). This method borrows from both techniques, in that it uses fixed-length strings which encode expression trees. The expression trees can be of varied size. Evolution occurs on the simple, linear, fixed-length strings.

What Problems Do GEPs Solve? GEP offers a powerful linear encoding which is guaranteed to produce valid programs, since GEP follows the syntactic rules of the specific programming language being targeted. This makes it easy to implement powerful genetic operators.
16.5 Specialized Techniques and Concepts

In this section we cover some of the more exotic EAs and extended tools. These EAs are new, hybrid, or just miscellaneous concepts. This is not an exhaustive list but a more holistic subset of ideas that do not follow the traditional evolutionary methods. A few of the techniques covered in this section are not technically based on biological or synthetic evolution, but they play an important role in the process or are placed here out of taxonomic convenience. Figure 16.5 shows the relationships between the specialized techniques and concepts. These relationships are more tenuous than those found between the various traditional EA techniques.
16.5.1 Auto-Constructive Evolution

Auto-constructive Evolution [69] is where, instead of having an overarching algorithm orchestrating the artificial evolution process, the entities themselves are given the ability to undergo evolution. This means that children are constructed by their own parents: parents have the ability to produce children without the need for a master synthetic algorithm. This is in contrast to the more traditional EAs, where the artificial replication occurs at a higher level.
Fig. 16.5 Relationships between specialized techniques and concepts
What Problems Does Auto-Constructive Evolution Solve? It provides a method to carry out micro-evolution. This type of evolution is more in line with the goal of building Artificial Life. It also potentially has a scaling advantage, since it requires no centralized coordination.
16.5.2 Neuroevolution, or Deep Neuroevolution

Neuroevolution [7, 61, 62, 73] is classed as a hybrid solution. It has become more popular in recent years and was a noticeably hot topic at GECCO 2018 [8]. Neuroevolution is the concept of using some form of GA to discover the optimal way to set up a Deep Neural Network (DNN); the entities being optimized are artificial neural networks (ANNs). It can also be used in combination with supervised learning and reinforcement learning (RL) techniques. The family of neuroevolution algorithms can be further classified based on how the candidate solutions are encoded and how much they can vary. As mentioned previously, CGP (Sect. 16.4.7) can also be an effective way to evolve ANNs by adding weights to the links.

• Direct encoding: the parameters of every artificial neuron and connection are part of the solution encoding.

• Indirect encoding: the encoding is a "recipe" for generating ANNs. The topology can be either fixed or evolving. Fixed means that only the connection weights are optimized, whereas evolving means both the connection weights and the
topology of the ANN are modified. This latter class of algorithms is commonly called Topology and Weight Evolving Artificial Neural Network algorithms (TWEANNs). Notable examples of TWEANNs are Neuroevolution of Augmenting Topologies (NEAT) [72] and its successor HyperNEAT [71]. The former uses a direct encoding, while the latter uses an indirect encoding called "Compositional Pattern Producing Networks" (CPPN) [70]. Another example of a TWEANN approach using an indirect encoding is "Evolutionary Acquisition of Neural Topologies" (EANT2) [65].

What Problems Does Neuroevolution Solve? Neuroevolution combines ideas from Genetic Algorithms and Artificial Neural Networks. It can evolve both ANN weights and topologies, making it an attractive alternative to hand-crafting ML hyper-parameters. By contrast with backpropagation, neuroevolution is not limited to differentiable domains [73].
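As a hedged sketch of the direct-encoding, fixed-topology end of this family (our own toy example, far simpler than NEAT or HyperNEAT), the flat weight vector of a small network can be evolved with mutation and truncation selection alone; XOR is used here purely as an illustrative task.

```python
import math
import random

def forward(weights, x1, x2):
    """A fixed 2-2-1 network; the 9-element weight vector is the genotype."""
    w = iter(weights)
    h1 = math.tanh(next(w) * x1 + next(w) * x2 + next(w))
    h2 = math.tanh(next(w) * x1 + next(w) * x2 + next(w))
    return math.tanh(next(w) * h1 + next(w) * h2 + next(w))

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def loss(weights):
    return sum((forward(weights, *x) - y) ** 2 for x, y in XOR)

pop = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(30)]
for _ in range(300):
    pop.sort(key=loss)                    # lower loss = fitter
    survivors = pop[:10]                  # truncation selection
    pop = survivors + [[w + random.gauss(0, 0.2) for w in p]
                       for p in survivors for _ in range(2)]
best = min(pop, key=loss)
```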
16.5.3 Self-Replicating Neural Networks

Self-replicating Neural Networks [20] is a relatively new idea where the ANNs themselves re-configure. Currently the idea is to have the network learn by producing its own weights as output, and also to support regeneration. Regeneration is the concept of training an ANN by inserting predictions of its own parameters. This technique is still being researched and is at a very early stage. Vigorous exploration is still required, but this idea has the potential to become more important; expect to see forward progress in this area.

What Problems Do Self-Replicating Neural Networks Solve? Self-replication is still a new idea. Early research shows some promise in the area of continual ANN improvement using natural selection.
16.5.4 Markov Brains

Markov Brains [35] are at an early stage and belong to the same hybrid group as neuroevolution. They are based on ANNs, with some significant differences: normal ANNs are designed with layers built up from nodes with the same functional characteristics, whereas Markov Brains are networks built from nodes with different computational characteristics. The components interact with each other, and can connect with external sensors (physical inputs) and actuators (physical outputs).

What Problems Do Markov Brains Solve? These are still relatively early days for Markov Brains, but they are showing some early promise, especially in unsupervised learning. By being a more flexible substrate than ANNs, they could also lead to
a more general understanding of the role recurrence plays in learning. We look forward to the next papers on this subject.
16.5.5 PushGP

PushGP [68] is a family of programming languages which have been specifically designed for evolution to be applied to, i.e. as an evolution target. It is based on a stack execution model, where each datatype has a separate stack and code itself is treated as a manipulable datatype. It has been subject to continuous research over a number of years, and so there are many iterations and implementations. These variations include ones that allow for auto-constructive evolution (see Sect. 16.5.1).

What Problems Does PushGP Solve? PushGP is designed to be an evolutionary target language rather than forcing a standard programming language to evolve. In other words, instead of using an existing programming language which is not evolution friendly, PushGP goes the other way by making the language itself evolution friendly.
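To illustrate why the stack model is evolution friendly, here is a toy interpreter of our own (a drastically simplified gesture at Push, with invented instruction names rather than the real Push instruction set): any token sequence executes safely, because instructions with insufficient arguments are simply skipped.

```python
def run(program, int_stack=None):
    """Execute a token list against a single integer stack."""
    int_stack = int_stack if int_stack is not None else []
    for token in program:
        if isinstance(token, int):
            int_stack.append(token)              # literals push themselves
        elif token == "INT.+" and len(int_stack) >= 2:
            int_stack.append(int_stack.pop() + int_stack.pop())
        elif token == "INT.*" and len(int_stack) >= 2:
            int_stack.append(int_stack.pop() * int_stack.pop())
        # Unknown or under-supplied instructions are ignored, so arbitrary
        # mutated or recombined programs never raise a syntax error.
    return int_stack

print(run([2, 3, "INT.+", 4, "INT.*"]))  # -> [20]
```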
16.5.6 Simulated Annealing

Simulated Annealing [66] is not an evolutionary algorithm in itself but is frequently used in conjunction with EAs. It is where a problem starts out being explored and, as the search gets nearer to a possible solution, moves towards being exploitative. That is, more of a stochastic approach is adopted at the beginning, and as a good-enough solution draws nearer, a more exploitative approach is adopted (see Sect. 16.3.9).

What Problems Does Simulated Annealing Solve? Simulated Annealing is most useful when the problem-domain requires more of an exploratory approach at the beginning, with a more exploitative approach adopted as the solution comes into view.
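The classic acceptance rule makes the temperature schedule explicit; the sketch below is our own minimal version with arbitrary default parameters:

```python
import math
import random

def simulated_annealing(f, x, neighbor, t0=10.0, cooling=0.99, steps=5000):
    """Minimize f: accept a worse neighbor with probability exp(-delta / T),
    so the search is exploratory at high T and exploitative as T decays."""
    best, t = x, t0
    for _ in range(steps):
        candidate = neighbor(x)
        delta = f(candidate) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        if f(x) < f(best):
            best = x
        t *= cooling
    return best
```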
16.5.7 Tangled Program Graph, TPG

The Tangled Program Graph (TPG) [40] is another relatively new technique: a method of managing programs at scale, where the scale is used to handle more complicated tasks such as game playing. It provides a method to manage the continuous evolution of independent and co-evolved populations. TPG has been shown to have some promising results when compared with equivalent deep learning methods, while requiring significantly less computation.
What Problems Does TPG Solve? TPG is both efficient and proven to handle complex dynamic problems such as the traditional game-playing benchmarks. This is still an early research area and as such should be monitored.
16.5.8 Tabu Search

Tabu search [17] is similar to Simulated Annealing (see Sect. 16.5.6) in that it is often used in conjunction with EAs. Tabu search relaxes the local search method by allowing a not-so-good solution to progress forward over solutions which have already been visited.

What Problems Does Tabu Search Solve? Tabu search solves the problem of getting off a local maximum by placing solutions which have already been visited onto a tabu list. The list is used as a method to avoid going down known, previously explored search paths.
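A minimal sketch of the idea (our own, assuming solution states support equality comparison and a user-supplied neighbors function):

```python
def tabu_search(f, x, neighbors, tabu_size=50, steps=1000):
    """Minimize f: always move to the best non-tabu neighbor, even if it is
    worse than the current point, which allows escape from local optima."""
    best, tabu = x, [x]
    for _ in range(steps):
        candidates = [n for n in neighbors(x) if n not in tabu]
        if not candidates:
            break
        x = min(candidates, key=f)
        tabu.append(x)
        tabu = tabu[-tabu_size:]      # forget the oldest visited solutions
        if f(x) < f(best):
            best = x
    return best
```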
16.5.9 Animal Inspired Algorithms

The Animal Inspired Algorithms [66] are biology-inspired rather than strictly evolutionary algorithms; nevertheless, they deserve a reference within the context of EAs. There is a surprisingly large number of animal inspired algorithms. The more famous are the swarm, ant and frog algorithms, but the actual list is considerably longer. Each animal inspired algorithm provides some unique quality, like flying, jumping or walking. They are important algorithms: a swarm algorithm, for instance, can be used to control a collection of drones, and ant algorithms can be used to explore an unknown terrain in search of useful resources.

What Problems Do Animal Inspired Algorithms Solve? This is a broad group of specialized algorithms which solve very specific problems.
16.6 Problem-Domain Mapping

The most important part of any algorithm is what it can accomplish. In this section we attempt to map specific problem-domains to potential techniques. We must stress that this mapping is not exhaustive and may change dramatically between reviews. It will act as an important baseline for any future reports.
16.6.1 Specific Problem-Domain Mappings

Here we map general problem-domains to the specific technique or techniques. These problem-domains are traditional problems found in industry.
16.6.1.1 Variable and Parameter Optimization
Parameter and variable optimization is the process of finding the best set of parameter values for a problem. Such problems are in themselves complicated, or the number of parameters is extremely large. If neither is true, then a more traditional method of optimization may be a better route to a solution. ES or GA are the normal solutions for this type of problem-domain. It has been shown that a GA can handle up to a million variables, as discussed in the Tutorial on Next Generation Genetic Algorithms at GECCO 2018 [8, 78]. Such a search space is difficult or impossible for traditional methods, and the only remaining true competitor is a pure stochastic method. See sections:

• Section 16.4.1: Evolutionary Strategy, ES
• Section 16.4.2: Genetic Algorithms, GA
16.6.1.2 Symbolic and Polynomial Regression
This is the problem of finding, for a given data-set, the equivalent mathematical equation. This is a particularly popular and important activity in many industries; the requirement is to find a matching equation for the data. EAs are adopted when traditional regression methods fail. The automotive industry is actively involved in this area, since it frequently needs to confirm theoretical equations with practical data. The techniques help find the equation that matches the real data, independent of theory. GP and all its subclasses can handle symbolic and polynomial regression. See sections:

• Section 16.4.3: Genetic Programming, GP
• Section 16.4.6: Linear Genetic Programming, LGP
• Section 16.4.7: Cartesian Genetic Programming, CGP
• Section 16.4.5: Grammatical Evolution, GE
• Section 16.5.5: PushGP
16.6.1.3 Automated Code Production
The goal is to produce new code without human involvement. Once the programming language, representation and goal have all been chosen, EAs can explore the
problem-domain in search of an optimal solution. If a final code solution is found, then it will fall into one of three categories: precise/accurate, "good enough", or a multi-objective compromise (along the Pareto curve). Historically, EAs have mostly targeted programming languages such as LISP and low-level assembly language. Both have strict input-output formats which can simplify mutation and the joining of code segments. This avoids introducing syntax errors due to interface inconsistencies. Today the popular targets are Python (as a replacement for LISP) and Intermediate Representation (as a replacement for assembler instructions). Both offer new opportunities and challenges. GP, LGP, GE, CGP and PushGP are all techniques that produce code. Because the code is automatically generated, it is likely that the end result is difficult for humans to read, or unreadable. See sections:

• Section 16.4.3: Genetic Programming, GP
• Section 16.4.6: Linear Genetic Programming, LGP
• Section 16.4.5: Grammatical Evolution, GE
• Section 16.4.7: Cartesian Genetic Programming, CGP
• Section 16.5.5: PushGP
16.6.1.4 Regular Expression
Automated generation of regular expressions can be achieved using Genetic Programming [30], where EAs are used to explore expression strings. See section:

• Section 16.4.3: Genetic Programming, GP
16.6.1.5 Circuit Design
Circuit design is similar to low-level assembly instructions (see Sect. 16.6.1.3) in that it has been successfully explored using EAs. The rules for circuit design are relatively simple, and so EAs can explore the problem space relatively easily. This area was explored intensely in the 1990s but has the potential to see a revival as system complexity increases. CGP is particularly good at exploring circuit design since it handles spatial problems using Cartesian co-ordinates. See sections:

• Section 16.4.3: Genetic Programming, GP
• Section 16.4.7: Cartesian Genetic Programming, CGP
16.6.1.6 Code Improvement and Optimization
This is an up-and-coming area. The evolutionary process starts from a working code base, as compared with an initial random seed population. The existing working code is optimized towards a set of new objectives, or is transitioned to fit within a new set of constraints. The new objectives could include specific performance features or similar attributes; the constraints could include new code-size restrictions or power consumption limitations. The working code is basically used as a seed for the first initial population. Standard evolutionary operators are then applied to search the problem space for a potentially better solution.

As a subtopic, legacy code improvement is about taking older existing code and finding a better alternative. This better alternative can either be a metric improvement (e.g. faster, smaller code) or higher quality (hidden or existing anomalies are removed). EAs in this problem-domain act as extra engineers on the project, where they may or may not produce a better answer than the original. As with many parts of engineering, this technique relies heavily on the quality of the goal and the associated test suites. GP, GI and GE are all referenced as handling code improvement and optimization. See sections:

• Section 16.4.3: Genetic Programming, GP
• Section 16.4.4: Genetic Improvement, GI
• Section 16.4.5: Grammatical Evolution, GE
16.6.1.7 Simulator Testing
Simulator testing is an indirect byproduct of using EAs. It turns out that EAs can be extremely good at finding simulator inconsistencies, because EAs explore a simulator in a different way than an engineer or scientist would. In fact, there are a number of historical examples where EAs were annoyingly good at discovering faults. All the EAs are capable of pushing the limits of a simulator.
16.6.1.8 Walking Robot
Robots learning to walk has traditionally been a hard problem, and EAs have been involved in learning how to walk for decades. EAs have two attributes which make them particularly useful in handling these types of problems. First, EAs can learn how to improve with each evolutionary cycle (incremental improvements), and second, they can adapt to changes in the environment. For example, if a physical component changes or malfunctions, EAs can adapt to that change and continue walking.
GAs are used extensively in robot walking algorithms, for both soft and hard robots. See section:

• Section 16.4.2: Genetic Algorithms, GA
16.6.1.9 Automated Machine Learning
EAs are starting to be used in the new subject of Automated Machine Learning (more commonly known as AutoML) [61, 62]. AutoML is the automation of applying machine learning to real problem-domains. The goal is to avoid the labor-intensive configuration required to set up a Deep Neural Network (DNN); this method also potentially bypasses the requirement for domain experts. Google has shown that AutoML can successfully improve existing ML systems, and has also shown how Evolutionary AutoML can be used to improve image classifiers that were originally designed by humans. This is becoming an increasingly popular subject, especially as DNNs become larger and more complicated. DNNs are increasingly being adopted to solve interesting, real problems, but the complexity of setup is slowing down their application. The quest today is to apply and configure DNNs to more problems, more quickly and more easily, and EAs are increasingly being used in this area to discover novel solutions which are more efficient. In research, GAs are an alternative way to configure a DNN, and they provide some form of novelty. For example, in a Google Brain paper [62], aging out the weaker entities from the population was introduced to DNN configuration. They found that GAs achieved better results, as compared with other methods, when hardware resources are limited. One of the more famous public tools in EAs to carry this out is TPOT [57], with source code available on GitHub. It is designed to be a Data Science Assistant, written in Python; the goal is to optimize machine learning pipelines using GP (a hedged usage sketch follows the list below). See sections
• Section 16.4.2: Genetic Algorithms, GA
• Section 16.4.3: Genetic Programming, GP
• Section 16.5.2: Neuroevolution, or Deep Neuroevolution
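As a concrete illustration of the TPOT tool mentioned above, the following uses TPOT's documented scikit-learn-style interface; the dataset and parameter choices are ours, picked only for illustration.

```python
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    *load_digits(return_X_y=True), test_size=0.25, random_state=42)

# GP evolves scikit-learn pipelines; generations and population_size
# control the evolutionary search budget.
tpot = TPOTClassifier(generations=5, population_size=50,
                      verbosity=2, random_state=42)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_digits_pipeline.py')  # emit the best pipeline as Python code
```

The exported file contains an ordinary scikit-learn pipeline, which is what makes the tool attractive as an "assistant": the evolved artifact is readable, editable code rather than an opaque model.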
16.6.2 Unusual and Interesting Problem-Domain Mappings
Here we highlight some of the more unusual problem-domain mappings that have appeared in recent articles. It is expected that these mappings will change the most between reviews.
16.6.2.1 Configuring Neuromorphic Computers
Configuration of a neuromorphic computer [18] is a more unusual problem. Current techniques map Convolutional Neural Networks (CNNs) to Spiking Neural Networks (SNNs), thereby avoiding the dynamic nature and complexity of SNNs. A suggestion is to use an EA to perform these mappings: EAs are used to design simple ANNs that configure the platform. This technique allows the full utilization of a complex SNN to create small networks that solve specific problems, and EAs can explore the entire parameter space of the specific hardware.
16.6.2.2 Forecasting Financial Markets
GAs are used by institutional quantitative traders and in other areas of the financial world [44]. Traders use software packages to set parameters that are optimized using both historical data and a GA. Depending upon the problem, the optimization can vary from choosing both which parameters are used and their associated values, to optimizing the values alone. Trading comes with some risk, but identifying the right parameters that relate to major market turns can be critical. A toy sketch of the idea follows.
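The following illustrates the general idea only: a GA tuning the window lengths of a hypothetical moving-average crossover rule against historical prices. The data, fitness and operators are illustrative assumptions, not a description of any commercial trading package.

```python
import random

def backtest(prices, fast, slow):
    # Profit of a naive crossover rule: hold a position while the fast
    # moving average is above the slow one.
    if fast >= slow:
        return float("-inf")
    profit, position = 0.0, 0
    for t in range(slow, len(prices)):
        ma_f = sum(prices[t - fast:t]) / fast
        ma_s = sum(prices[t - slow:t]) / slow
        if position:
            profit += prices[t] - prices[t - 1]
        position = 1 if ma_f > ma_s else 0
    return profit

def evolve(prices, pop_size=30, generations=40):
    # Genome = (fast window, slow window); fitness = backtested profit.
    pop = [(random.randint(2, 20), random.randint(21, 100))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: backtest(prices, *g), reverse=True)
        elite = pop[:pop_size // 2]
        pop = elite + [(max(2, f + random.randint(-2, 2)),
                        max(21, s + random.randint(-5, 5)))
                       for f, s in random.choices(elite, k=pop_size - len(elite))]
    return max(pop, key=lambda g: backtest(prices, *g))

prices = [100 + i * 0.05 + random.gauss(0, 1) for i in range(500)]  # synthetic
print(evolve(prices))
```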
16.6.2.3 Predicting Future City Landscapes
The Spanish Foundation for Science and Technology [5] has used EAs to predict the upward growth of cities. They discovered that increases in building height follow a pattern of development similar to that of some living systems. A GA takes historical and economic data and uses it to predict the skyline of the future, i.e. how the skyscrapers and other buildings will increase in height.
16.6.2.4 Designing an Optimized Floor Plan
EAs have also been used to design internal building floor plans, which can be complex due to building irregularities. A GA has been applied to optimize complex floor plans [75], successfully designing office floor plans that optimized walk times and hallways.
16.6.2.5 Antenna Design
Antennas are complicated and are mostly designed by hand, which is both time-consuming and requires many resources. EAs have been "used to search the design space and automatically find novel antenna designs". In the paper entitled Automated Antenna Design with Evolutionary Algorithms [36], a group of NASA scientists successfully designed an antenna using digital evolution. The
antenna turned out to be efficient for a variety of applications. It was unique in that the final design would not have been created by a human.
16.6.2.6 Defect Identification of Electron Microscopy Images
The US Department of Energy has been using a system called Multinode Evolutionary Neural Networks for Deep Learning (MENNDL) [3] to identify defects in electron microscopy images. MENNDL uses neural networks to find defects in changing data. The system runs on the Oak Ridge Summit supercomputer [6], fully utilizing all the available compute nodes, i.e. 18,000 GPUs on 3,000 nodes. It analyzes millions of networks using a "scalable, parallel, asynchronous genetic algorithm augmented with a support vector machine to automatically find a superior deep learning network topology and hyper-parameter set" [3]. The scale of this system makes this an impressive hybrid implementation.
16.7 Challenges
The challenges below include some personal opinions drawn from the experience we have had navigating the subject of EAs. The points laid out are opinions, so they should be debated and discussed; they are not end points.
• It is our observation that the community is relatively small compared with other Machine Learning communities. The size of the community determines the level of vigor that can be applied to validate a new idea or concept. In other words, there are not enough experts in the subject to vigorously prove or disprove a concept. Many ideas, even extremely clever and good ones, go unverified and unchecked by the community. This is a problem, since good ideas can go missing due to a lack of support.
• EAs have an inherent difficulty proving they are the best solution to a specific "real-world" problem. There are no automatic methods to compare algorithms that are free of bias. In other words, how do we prove, without doubt, that an EA achieves a better time-to-solution than a random search? This has been a consistent issue, since it is extremely difficult to prove that the results from an EA experiment are free of bias. What compounds the difficulty is that the problem-domains targeted are themselves inherently complex; that is why an EA is being used in the first place.
• Recently a lot of work has gone into creating synthetic problem benchmarks, but there is concern that less work has been applied to "real world" problems with "real world" constraints. While benchmarking and consistency are undoubtedly important, especially when comparing techniques, the most important activity is always applying algorithms to real problems.
• The community is an old community within Machine Learning, with ideas dating back to the early 1950s, but many of the original drawbacks of using evolutionary techniques have been removed, since modern hardware is both abundant and high performing. This means experiments which were constrained by the hardware resources of the time are now feasible; population size and the maximum number of generations can be made considerably larger.
• Biology plays an important role, but EAs are not organisms. This makes crossing terms between biology and Computer Science difficult. Computer Scientists will use the biological terms loosely for their own purposes, whereas the real biological meaning is much more complicated.
• EAs have proven good at tackling some hard problems, but they suffer from a difficulty of scale. In complex systems it is important to divide-and-conquer (break the problem down into smaller elements) before attempting to produce a solution. EAs make a lot of sense at the bottom level, but they become more problematic as we scale up the problem. Research on bigger-scale problems is still immature and is also limited, for the most part, by the capabilities of the current hardware available.
• Similar to many other AI disciplines, there is a constant struggle between make versus buy. There is a tendency for researchers to re-invent the technology wheel, in part due to the learning curve required to attain usefulness: the effort to learn a new framework can be as challenging and time consuming as creating a proprietary framework from first principles. This is detrimental to the discipline as a whole, since the re-invention slows down forward progress. The caveat is that, over time, frameworks become overly specific to a given problem domain, so applying the same framework to a different problem can be cumbersome.
• Modularity is a method to handle more complex problems by breaking the problem into more manageable components; this falls under the divide-and-conquer strategy or scientific method. EAs solve problems using a bottom-up design and as such are inherently more difficult to modularize.
• While an evolved EA solution may be deterministic and provable, the process by which the solution was arrived at is non-deterministic: if the algorithm is run again, there is no guarantee that the same result will be found, or that any result will be found. This is the opposite of some other techniques in Machine Learning.
• EA solutions are provable, and proof and explainability are becoming increasingly important. Governments are also stepping in with new regulations, for example the General Data Protection Regulation (GDPR) [4]. GDPR is a European regulation which has direct implications for Machine Learning algorithms in general: Article 22 [2] of the regulation calls for algorithmic fairness and explainability. In other words, explain how the algorithm is correct, fair and unbiased.
• Being an old subject, EAs inherently suffer from the reinvention and rediscovery of already known concepts. Keeping everyone current with what has been published is always a challenge, especially as the amount of scientific information accelerates.
• Even though EAs could potentially be the next big direction for Machine Learning, the generally low funding of the subject may hold back development. This is concerning, since EAs are not as well-known as other Machine Learning techniques. This situation makes obtaining core funding for EA-related research difficult and very much a secondary focus. The outcome is that only a few full-time researchers worldwide focus solely on these techniques. Also, the broad interdisciplinary knowledge base required is difficult to attain.
16.8 Predictions
In this section we look into the future and at what may happen in the next few years with EAs. Again, since these are predictions, they should be treated with further questions.
• It is likely that in the near future we will see more cross-pollination and collaboration between Machine Learning researchers and molecular biologists, neuroscientists and evolutionary biologists. People are held back by their specializations and are generally less reliable outside them. The philosopher Paul Feyerabend argued that the most progress is made on the boundaries between subjects [14]; it is at the boundary of biology and computer science where most advancements are likely to occur.
• Pure research on DNNs will slow down, and the DNN focus will shift to engineering and the application side for the near term. There is a strong likelihood that Machine Learning research will diversify and become more challenging. Research problems will become hybrid, involving the merging of many techniques. A potential end goal for hybrid systems is Artificial General Intelligence (AGI).
• Artificial General Intelligence is a philosophical goal, or a concern, depending upon whom you ask. It will potentially take decades and many stages before it can be reached, if at all. One stage is the much smaller, attainable goal of producing what we call Domain Specific Artificial Life (DSAL): bringing together many disciplines in the desire to create solutions to a specific problem. The term is a fun play on Domain Specific Architectures, as advocated by some of the hardware architecture community, i.e. small artificial lifeforms to solve specific problems.
• New Artificial Neurons (ANs) will be explored and developed. There is potential for the ANs themselves to be re-examined, either by vastly increasing the number of interconnects or by adding interesting attributes, such as co-ordinates, to the model, and then evolving these models as required. Today's ANs have limited interconnects, whereas biological neurons have substantially more. These are ideas that people like Julian Miller have put forward. Whatever future direction is taken, the AN model will most likely change over the next few years; in all likelihood we will end up with many AN models to choose from. This change will occur as our understanding of Neuroscience and biological mechanisms increases.
• As James Shapiro points out, there are many genetic mechanisms that could be incorporated into existing and new EAs [64]. One such concept is Horizontal Gene Transfer (HGT) [39], which may become important to the community as more advanced complex systems are attempted. HGT is the ability of useful genes (or code segments) to quickly transfer across species (islands). There are Computer Science implications for such ideas. As with Richard Feynman's "There's plenty of room at the bottom", referring to molecular chemistry, there is plenty of room in evolutionary biology and its application to EAs.
• EAs can potentially be used to explore causality. Judea Pearl [58] has given the industry a challenge: explain Machine Learning outcomes and identify the causes of those outcomes. Of particular interest are ideas like the counterfactual, where forward predictions about the future can be made by inserting a change or deciding not to insert a change. This involves not just providing data correlation but creating a model of how the data was created.
• Obfuscation may allow EAs to get involved in security, privacy and data cloaking.
• EAs are already being used in the field of Quantum Computing (QC), either to configure systems or to handle the complicated data output, and we can expect more activity in this area. In 2017, the Humies Award [16] went to an Australian team [33] using EAs applied to Quantum Computing.
16.9 Final Discussion and Conclusion
The 2019 Evolutionary Algorithms Review is a baseline for future reports. We are cognizant that there are many areas which were not covered, or were covered only briefly. These areas may become more important as we consider the 2020 review.
Traditional EAs continue to provide useful solutions to hard problems. They are mature algorithms, and they are becoming particularly useful when placed on modern hardware. The newer trends appear to indicate that hybrid solutions may provide the next future capability, i.e. combining evolutionary techniques with other techniques, where EAs are used either to bring knowledge-forwarding or to optimize complex Machine Learning configurations.
We dedicated most of the review to the landscaping of the various techniques. This was planned to act as a baseline for future review development. For problem-domains that affect society, and subsequently industry, we introduced the UCA (User Control Attributes) criteria. There are five defined attributes, i.e. limiters, explainability, causality, fairness and correction. They impose an extra level of required thinking, where the algorithms have to be both community friendly and adhere to new government regulations. The UCA will be used as the basis for a new taxonomy for EAs. Future algorithms will be rated not only on their ability to produce an outcome, but on their ability to satisfy the UCA criteria in some form or other. More generally, this taxonomy could be applied to other forms of Machine Learning.
Current thoughts on applying the UCA to EAs are as follows:
• Limiters is the concept of restricting capability. It is still early days, but as EAs handle more problems that either affect the physical environment (i.e. control actuators) or are involved in some privacy aspect, methods of limiting the capability or cloaking the outcomes become more important. This may require significant external technology and thought.
• Explainability revolves around explaining how an algorithm produces an answer. The output solution from an EA is inherently provable, since it solves a specific problem, but as with all algorithms of this class, explaining how the solution came about is problematic and difficult to repeat.
• Causality is about moving to a higher-order answer which adds the "Why" component to the answer being searched for. The challenge has been set, and causality will become more important as we move forward with implementing real-world EAs, driven by the desire for EAs to do more and for us to understand why a conclusion has been reached.
• Fairness is about producing an answer which is balanced and without human prejudice. This may come down to the actual input data selected and the fitness function being used. Algorithmic fairness [79] has to be capable of detecting biases, contesting a decision and, finally, instigating a potential remedy. A critical method of achieving fairness is making sure that the outcome is vigorously tested. There is potential for a mathematical element of fairness to be incorporated into the evolutionary frameworks, whereas the societal aspects will remain in the human domain for the foreseeable future and may well require careful, deliberate biasing to produce a fair outcome. Two features related to fairness, not covered in this review, are the broader subjects of accountability and transparency. A method of algorithmic fairness can be complicated; this is an area that requires future monitoring. Fairness may also involve studying the initial starting states, checking that biases are not injected right at the start of the process.
• Correction is about correcting a problem once it has been identified. When an error is identified/detected/contested, EAs can be assessed on how easily a remedy can be applied. EAs by definition are adaptive, and so correction can occur as part of a continuous evolutionary process (i.e. via environmental changes) or more manually, through a direct change to the fitness function and/or constraints.
Now that we have described the basics of the UCA, we will attempt in the 2020 review to apply them to the various EAs outlined in this review. This provides the new taxonomy.
There is a long way to go with respect to EAs. We are just forming a common language by which we can communicate with the biologists. This is an exciting time, since discoveries in biochemistry and synthetic biology are occurring at unprecedented rates, and the capabilities of digital hardware are reaching levels that can mimic parts of the biological system. Likewise, biology can also take advantage of some of the newly gained insights from the Computer Science field.
This means that the research runway is long, but at the same time we have to realize that the hardware gap in both "effective" processing and interconnects between biology and digital systems is still vast. We may have the vision to achieve biological equivalence, but the current state of the hardware is both different and non-optimal for many types of problems. This is particularly interesting as we hit the End of Moore's Law and face the possibility of different compute-models being introduced and forced on the industry.
16.10 Feedback
If you have any feedback or suggestions on this review, please send them to [email protected] with the subject line "2019 Review Feedback" and we will consider the enhancements.
Acknowledgements We would like to acknowledge the following people for their encouragement and feedback during the writing of this review, namely Mbou Eyole, Casey Axe, Paul Gleichauf, Gary Carpenter, Andy Loats, Rene De Jong, Charlotte Christopherson, Leonard Mosescu, Vasileios Laganakos, Julian Miller, David Ha, Bill Worzel, William B. Langdon, Daniel Simon, Emre Ozer, Arthur Kordon, Hannah Peeler and Stuart W. Card.
References 1. Genetic Drift. https://evolution.berkeley.edu/evolibrary/article/evo_24. Last accessed February 4 2019 2. Art. 22 GDPR Automated individual decision-making, including profiling. Intersoft Consulting (2018). https://gdpr-info.eu/art-22-gdpr/. Last accessed November 22, 2018 3. Deep learning for electron microscopy. US Department of Energy (2018). https://m.phys.org/ news/2018-12-deep-electron-microscopy.html. Last accessed December 22 2018 4. General Data Protection Regulation GDPR. Intersoft Consulting (2018). https://gdpr-info.eu. Last accessed November 22, 2018 5. A genetic algorithm predicts the vertical growth of cities. Spain Foundation for Science and Technology (2018). https://www.eurekalert.org/pub_releases/2018-05/f-sf-aga052518. php. Last accessed November 17 2018 6. ORNL Launches Summit Supercomputer. US Department of Energy (2018). https://www. ornl.gov/news/ornl-launches-summit-supercomputer. Last accessed April 29 2019 7. Welcoming the Era of Deep Neuroevolution. Uber Engineering (2018). https://eng.uber.com/ deep-neuroevolution/. Last accessed December 28 2018 8. Aguirre, H. (ed.): GECCO ’18: Proceedings of the Genetic and Evolutionary Computation Conference Companion. ACM, New York, NY, USA (2018) 9. Arnold, J.: Genetic Drift (2001). https://www.sciencedirect.com/topics/neuroscience/geneticdrift. Last accessed April 29 2019 10. Backus, J.W.: The syntax and semantics of the proposed international algebraic language of the Zurich ACM-GAMM Conference. In: IFIP Congress (1959) 11. Bailey, B.: The Impact Of Moore’s Law Ending (2018). https://cacm.acm.org/news/232532the-impact-of-moores-law-ending/fulltext. Last accessed April 29 2019
12. Beyer, H.G., Sendhoff, B.: Covariance matrix adaptation revisited—the CMSA evolution strategy (2008). https://www.researchgate.net/publication/220701715_Covariance_Matrix_ Adaptation_Revisited_-_The_CMSA_Evolution_Strategy_-. Last accessed February 6 2019 13. Brameier, M., Banzhaf, W.: Linear Genetic Programming. No. XVI in Genetic and Evolutionary Computation. Springer (2007). URL http://www.springer.com/west/home/default? SGWID=4-40356-22-173660820-0 14. Broad, W.J.: Paul Feyerabend: Science and the Anarchist (2018). https://www.jstor.org/stable/ 1749231?seq=1#page_scan_tab_contents. Last accessed April 4 2019 15. Brownlee, J.: Overfitting and underfitting with machine learning algorithms (2016). https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learningalgorithms/. Last accessed April 29 2019 16. Brownlee, J.: Annual “humies” awards for human-competitive results (2018). http://www. human-competitive.org. Last accessed November 12 2018 17. Brownlee, J.: Tabu search (2018). http://www.cleveralgorithms.com/nature-inspired/ stochastic/tabu_search.html. Last accessed January 10 2019 18. Buckley, S., McCaughan, A., Chiles, J., P. Mirin, R., Woo Nam, S., Shainline, J., Bruer, G., Plank, J., Schuman, C.: Design of superconducting optoelectronic networks for neuromorphic computing. In: 2018 IEEE International Conference on Rebooting Computing (ICRC), pp. 1–7 (2018) 19. Burkhardt, R.W.: Lamarck, evolution, and the inheritance of acquired characters. Genetics 194 4, 793–805 (2013) 20. Chang, O., Lipson, H.: Neural network quine. CoRR abs/1803.05859 (2018). URL http:// arxiv.org/abs/1803.05859 21. Cossins, D.: Discriminating algorithms: 5 times ai showed prejudice (2018). https://www. newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/. Last accessed April 26 2019 22. Dahad, N.: Imec, ASML Team on Post-3nm Lithography (2018). https://www.eetimes.com/ document.asp?doc_id=1333896. Last accessed January 7 2019 23. Darwin, C.: On the Origin of Species by Means of Natural Selection. Murray, London (1859) 24. Dennard, R.H., Gaensslen, F.H., Rideout, V.L., Bassous, E., LeBlanc, A.R.: Design of ionimplanted mosfet’s with very small physical dimensions. IEEE Journal of Solid-State Circuits 9(5), 256–268 (1974) 25. Diab, W.: About JTC 1/SC 42 Artificial intelligence (2018). https://jtc1info.org/jtc1-presscommittee-info-about-jtc-1-sc-42/. Last accessed December 22 2018 26. Fenton, M., McDermott, J., Fagan, D., Forstenlechner, S., Hemberg, E., O’Neill, M.: PonyGE2: Grammatical Evolution in Python. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO ’17, pp. 1194–1201. ACM, Berlin, Germany (2017) 27. Ferreira, C.: Gene expression programming: a new adaptive algorithm for solving problems. Complex Systems 13(2), 87–129 (2001) 28. Ferreira, C.: Gene Expression Programming: Mathematical Modeling by an Artificial Intelligence. Springer (2006) 29. Furber, S.: SpiNNaker (2018). http://apt.cs.manchester.ac.uk/projects/SpiNNaker/project/. Last accessed November 11 2018 30. Gibbs, M.: Genetic programming meets regular expressions (2015). https://www. networkworld.com/article/2955126/software/genetic-programming-meets-regularexpressions.html. Last accessed January 2 2019 31. Goldberg, D.E.: Genetic Algorithms in Search, Optimization and Machine Learning, 1st edn. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA (1989) 32. Hansen, N.: The cma evolution strategy: A tutorial. CoRR abs/1604.00772 (2016). 
URL http://arxiv.org/abs/1604.00772
33. Harper, R., Chapman, R., Ferrie, C., Granade, C., Kueng, R., Naoumenko, D., Flammia, S.T., Peruzzo, A.: Explaining quantum correlations through evolution of causal models. Physical Review A 95 (2016)
34. Hinton, G.E., Osindero, S., Teh, Y.W.: A fast learning algorithm for deep belief nets. Neural Comput. 18(7), 1527–1554 (2006) 35. Hintze, A., Edlund, J.A., Olson, R.S., Knoester, D.B., Schossau, J., Albantakis, L., TehraniSaleh, A., Kvam, P.D., Sheneman, L., Goldsby, H., Bohm, C., Adami, C.: Markov Brains: A Technical Introduction. ArXiv abs/1709.05601 (2017) 36. Hornby, G., Globus, A., Linden, D., Lohn, J.: Automated antenna design with evolutionary algorithms. Collection of Technical Papers - Space 2006 Conference 1 (2006) 37. Ivan: Parallel and distributed genetic algorithms (2018). https://towardsdatascience.com/ parallel-and-distributed-genetic-algorithms-1ed2e76866e3. Last accessed February 7 2019 38. Izzo, D., Ruci´nski, M., Biscani, F.: The generalized island model. In: Parallel Architectures and Bioinspired Algorithms, pp. 151–169. Springer (2012) 39. Jain, R., Rivera, M.C., Lake, J.A.: Horizontal gene transfer among genomes: The complexity hypothesis. Proceedings of the National Academy of Sciences 96(7), 3801–3806 (1999) 40. Kelly, S., Heywood, M.I.: Emergent tangled graph representations for Atari game playing agents. In: M. Castelli, J. McDermott, L. Sekanina (eds.) EuroGP 2017: Proceedings of the 20th European Conference on Genetic Programming, LNCS, vol. 10196, pp. 64–79. Springer Verlag, Amsterdam (2017). https://doi.org/10.1007/978-3-319-55696-3_5. Best paper 41. Koza, J.R.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge, MA, USA (1992) 42. Koza, J.R.: Genetic Programming II: Automatic Discovery of Reusable Programs. MIT Press, Cambridge Massachusetts (1994) 43. Koza, J.R., Andre, D., Bennett III, F.H., Keane, M.: Genetic Programming III: Darwinian Invention and Problem Solving. Morgan Kaufman (1999) 44. Kuepper, J.: Using genetic algorithms to forecast financial market (2018). https://www. investopedia.com/articles/financial-theory/11/using-genetic-algorithms-forecast-financialmarkets.asp. Last accessed November 17 2018 45. Langdon, W.B., McPhee, N.F.: A Field Guide to Genetic Programming. LuLu Selfpublishing (2018) 46. Leswing, K.: Jeff Bezos just perfectly summed up what you need to know about artificial intelligence (2018). https://www.businessinsider.com/jeff-bezos-shareholder-letter-on-aiand-machine-learning-2017-4. Last accessed December 20 2018 47. LLVM.org: The LLVM Compiler Infrastructure Project. https://llvm.org/. Last accessed January 2 2019 48. Luke, S.: Essentials of Metaheuristics, first edn. lulu.com (2009). URL http://cs.gmu.edu/~ sean/book/metaheuristics/. Available at http://cs.gmu.edu/~sean/books/metaheuristics/ 49. Maass, W.: Networks of Spiking Neurons: The Third Generation of Neural Network Models. Neural Networks 10, 1659–1671 (1996) 50. Maheswaranathan, N., Metz, L., Tucker, G., Sohl-Dickstein, J.: Guided evolutionary strategies: escaping the curse of dimensionality in random search. CoRR abs/1806.10230 (2018). URL http://arxiv.org/abs/1806.10230 51. Miller, J.F.: Cartesian genetic programming. In: J.F. Miller (ed.) Cartesian Genetic Programming, Natural Computing Series, chap. 2, pp. 17–34. Springer (2011) 52. Morgan, T.J.H., Griffiths, T.L.: What the Baldwin Effect affects. In: CogSci (2015) 53. Mosescu, L.: Darwin neuroevolution framework (2018). Urlhttps://github.com/tlemo/darwin. Last accessed February 4 2019 54. N. Krasnogor S. Gustafson, D.A.P., Verdegay, J.L.: Systems Self-Assembly, Volume 5: Multidisciplinary Snapshots (Studies in Multidisciplinarity. 
Elsevier Science (2008)
55. Neumann, J.V.: Theory of Self-Reproducing Automata. University of Illinois Press, Champaign, IL, USA (1966)
56. Ochi, L., Vianna, D., Drummond, L., Victor, A.: A parallel evolutionary algorithm for the vehicle routing problem with heterogeneous fleet. Future Generation Computer Systems 14, 285–292 (1998)
57. Olson, R.S., Urbanowicz, R.J., Andrews, P.C., Lavender, N.A., Kidd, L.C., Moore, J.H.: Proceedings of Evo Applications 2016, Porto, Portugal, March 30–April 1, 2016, Part I, chap. Automating Biomedical Data Science Through Tree-Based Pipeline Optimization, pp. 123– 137. Springer (2016) 58. Pearl, J., Mackenzie, D.: The Book of Why: The New Science of Cause and Effect, 1st edn. Basic Books, Inc., New York, NY, USA (2018) 59. Petke, J., Haraldsson, S.O., Harman, M., Langdon, W.B., White, D.R., Woodward, J.R.: Genetic improvement of software: a comprehensive survey. IEEE Transactions on Evolutionary Computation 22(3), 415–432 (2018) 60. Purohit, A., Choudhari, N.S.: Code bloat problem in genetic programming (2013). http://www. ijsrp.org/research-paper-0413/ijsrp-p1612.pdf. Last accessed May 26 2019 61. Real, E.: Using evolutionary AutoML to discover neural network architectures (2018). https://ai.googleblog.com/2018/03/using-evolutionary-automl-to-discover.html. Last accessed December 22 2018 62. Real, E., Aggarwal, A., Huang, Y., V Le, Q.: Regularized evolution for image classifier architecture search (2018). https://arxiv.org/abs/1802.01548. Last accessed December 28 2018 63. Russell, B.: The Value of Philosophy. In: S.M. Cahn (ed.) Exploring Philosophy: An Introductory Anthology. Oxford University Press (2009) 64. Shapiro, J.: A 21st century view of evolution: Genome system architecture, repetitive dna, and natural genetic engineering. Gene 345, 91–100 (2005) 65. Siebel, N.T.: Evolutionary reinforcement learning. http://www.siebel-research.de/ evolutionary_learning/. Last accessed January 2 2019 66. Simon, D.: Evolutionary Optimization Algorithms. Wiley (2013). URL https://books.google. com/books?id=gwUwIEPqk30C 67. Simonite, T.: Moore’s Law Is Dead. Now What? (2016). https://www.technologyreview.com/ s/601441/moores-law-is-dead-now-what/. Last accessed January 7 2019 68. Spector, L.: Push, PushGP and Pushpop (2018). http://faculty.hampshire.edu/lspector/push. html. Last accessed December 28 2018 69. Spector, L., McPhee, N.F., Helmuth, T., Casale, M.M., Oks, J.: Evolution evolves with autoconstruction. In: T. Friedrich, et al. (eds.) GECCO ’16 Companion: Proceedings of the Companion Publication of the 2016 Annual Conference on Genetic and Evolutionary Computation, pp. 1349–1356. ACM, Denver, Colorado, USA (2016). https://doi.org/10.1145/ 2908961.2931727 70. Stanley, K.: Compositional pattern producing networks: A novel abstraction of development. Genetic Programming and Evolvable Machines 8, 131–162 (2007) 71. Stanley, K.O., D’Ambrosio, D.B., Gauci, J.: A hypercube-based encoding for evolving largescale neural networks. Artificial Life 15(2), 185–212 (2009) 72. Stanley, K.O., Miikkulainen, R.: Evolving neural network through augmenting topologies. Evolutionary Computation 10(2), 99–127 (2002) 73. Such, F.P., Madhavan, V., Conti, E., Lehman, J., Stanley, K.O., Clune, J.: Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. ArXiv abs/1712.06567 (2017) 74. Sverdlik, Y.: Google is switching to a self-driving data center management system (2018). https://www.datacenterknowledge.com/google-alphabet/google-switching-selfdriving-data-center-management-system. Last accessed January 8 2019 75. Tokmakova, A.: Optimizing floorplans via experimental algorithms (2018). https://archinect. com/news/article/150108746/optimizing-floorplans-via-experimental-algorithms. Last accessed December 22 2018 76. 
Turing, A.: The chemical basis of morphogenesis. Philosophical Transactions of the Royal Society B 237, 37–72 (1952)
77. Wall, M.: Galib: Matthew's C++ genetic algorithms library (1996). http://lancet.mit.edu/galib2.4/. Last accessed February 4 2019
78. Whitley, D., Chicano, F., Ochoa, G., Sutton, A., Tinós, R.: Next generation genetic algorithms (2018). http://gecco-2018.sigevo.org/index.html/tiki-index.php?page=Tutorials Last March 7 2019 79. Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., West, S.M., Richardson, R., Schultz, J.: AI Now 2018 Report (2018). https://ainowinstitute.org/AI_Now_ 2018_Report.pdf. Last accessed December 20 2018 80. Wilson, D.G., Cussat-Blanc, S., Luga, H., Miller, J.F.: Evolving simple programs for playing Atari games. In: H. Aguirre, et al. (eds.) GECCO ’18: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 229–236. ACM, Kyoto, Japan (2018) 81. Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation 1(1), 67–82 (1997) 82. Zaldivar, A.: Introduction to fairness in machine learning (2018). https://developers. googleblog.com/2018/11/introduction-to-fairness-in-machine.html. Last accessed January 8 2019
Chapter 17
Evolving a Dota 2 Hero Bot with a Probabilistic Shared Memory Model
Robert J. Smith and Malcolm I. Heywood
17.1 Introduction
High-dimensional reinforcement learning implies that feature construction as well as policy discovery be performed simultaneously. Recent advances in 'deep reinforcement learning' approaches have demonstrated that this is possible over a cross-section of task domains defined in terms of computer games, e.g. (Atari) arcade game titles [27]. Previous work has shown that the genetic programming approach of tangled program graphs (TPG) is able to compete with deep reinforcement learning solutions on games of complete information [21]. However, when partial observability plays an increasing role in defining game content (as in first-person shooter games), TPG experiences task-specific limitations [32]. A recent work proposed an approach for introducing indexed memory into the TPG framework and demonstrated its applicability under VizDoom 'deathmatches' [34]. In this work, we assess the significance of the same approach under a completely different high-dimensional reinforcement learning task, that of evolving a hero to play Defence of the Ancients 2 (Dota 2),1 a real-time strategy game that has been targeted by OpenAI for developing hero agents using reinforcement learning.2 The details of the approach adopted by OpenAI are not publicly known; however, it does appear to be based on some form of reinforcement learning.3
1 http://blog.dota2.com/?l=english.
2 https://openai.com/blog/how-to-train-your-openai-five/.
3 https://openai.com/blog/openai-baselines-ppo/.
R. J. Smith · M. I. Heywood () Faculty of Computer Science, Dalhousie University, Halifax, NS, Canada e-mail: [email protected]; [email protected] © Springer Nature Switzerland AG 2020 W. Banzhaf et al. (eds.), Genetic Programming Theory and Practice XVII, Genetic and Evolutionary Computation, https://doi.org/10.1007/978-3-030-39958-0_17
Our interest is not to compete directly with OpenAI,4 but to use the challenging environment posed by Dota 2 to validate the previous approach to addressing partial observability in high-dimensional reinforcement learning tasks. Specifically, Dota 2 state information is limited to line-of-sight, so a hero agent cannot 'see' around objects such as trees or buildings, and descending to the river feature implies that the agent cannot see above the river bank. In short, there are multiple instances of partial observability. The particular approach taken to designing memory that enables the TPG framework to operate under partially observable high-dimensional environments is summarized as follows [34]:
• Memory Synchronization: TPG incrementally stitches together teams of programs into an increasingly interconnected graph of teams-of-programs. Adding stateful register referencing (Sect. 17.3.2) provides each program with the ability to retain register state, potentially addressing partial observability from the perspective of individual programs, but not collectively. Thus, some form of scalable 'global' indexable memory needs to be assumed, but in such a way that all programs are encouraged to act collaboratively/respectfully, whether they are within the same TPG individual or not. We make this observation because we cannot predict which specific programs will ultimately be combined into each TPG agent. Indexed memory therefore also represents a common communication medium that programs post 'messages' to and receive 'messages' from.
• What to Write: The state space of Dota 2 consists of hundreds of attributes when playing a middle lane single opponent configuration; however, this increases to tens of thousands of attributes when playing three lanes with five heroes on each team. In short, even under the minimal setting, a write operation should not write all state information to memory; it will not scale. Instead, we will assume that a program's write operation will use the program's current register state as a surrogate for defining state as 'communicated' through indexed memory. Put another way, we are assuming that each program's register state represents a useful encoding (of state) for communicating to the other programs in the population.
• Long versus Short term memory: It is clear that the human mind supports many forms of memory, and supporting both memory that is accessible to all programs and memory specific to programs represents one component of this. However, explicit support for short and long term memory could also be fruitful; how it is used is down to the evolutionary process to discover. Likewise, how the 'decision' is made regarding a write to/read from long or short term memory should also be an emergent property.
In the following, we develop the topic by first introducing the Dota 2 one-on-one mid-lane task (Sect. 17.2) and survey previous approaches to introducing indexed memory in neural networks and genetic programming (Sect. 17.3).
4 The computational resources used by OpenAI are on the order of 180 years of gameplay per day.
The TPG approach to genetic programming is briefly summarized in Sect. 17.4, and the framework adopted for designing indexed memory to address the above points is detailed in Sect. 17.5. Section 17.6 summarizes all the issues specific to interfacing TPG to the Dota 2 game engine (tools, state-space, actions, fitness function). The details of a benchmarking study appear in Sect. 17.7, and conclusions and future work appear in Sect. 17.8.
17.2 The Dota 2 1-on-1 Mid-lane Task
Game play in Dota 2 revolves around developing strategies for defining the behaviour of a 'hero' character. There are three basic categories of hero (strength, agility, intelligence), resulting in a total of 117 specific heroes.5 We will not be addressing the hero selection issue in our work. Instead, we will assume a specific hero from the agile category—a Shadow Fiend—where the agile category provides heroes with the potential to perform well in a broad range of gaming scenarios [31]. Future research will consider whether we can evolve heroes from the other categories. The Dota 2 environment is defined in terms of a map in 3-D isometric perspective; however, all the agents playing the game are subject to sensor information constrained by partial observability (line-of-sight/fog-of-war). Thus, from an agent's perspective it is not possible to see 'through' a forest, or above a flight of stairs. There are two teams: 'The Radiant' represents our team (base at the bottom LHS, Fig. 17.1) and 'The Dire' represents the opponent team (base at the top RHS, Fig. 17.1). The base of each team periodically issues waves of 'creeps', which are simple bots with default behaviours. The creeps advance from each base down each 'lane', where the number of active lanes is a game parameter (the more active lanes, the more difficult the game). In this work, we are interested in the single mid-lane case; thus creeps will advance from each tower down the leading diagonal (Fig. 17.1). Creeps have predefined behaviours that the hero can potentially manipulate; however, on encountering any form of opponent, they will attack. The hero character for each team attempts to use their powers to develop a strategy that enables them to advance specific aspects of the game to their advantage. Examples might be supporting the creep attack, or collecting powerup and bounty runes from the environment (increasing team wealth, which in turn supports the collection of items that provide additional powers). A game ends when one side either kills the opponent hero twice, or a tower is taken. The goal we set in this work is to determine whether we can evolve behaviour for the Shadow Fiend hero against the built-in Shadow Fiend hero bot under the 'mid-lane' parameterization. We will assume that the opponent hero bot operates
5 https://dota2.gamepedia.com/Heroes.
Fig. 17.1 Initial Dota 2 visual game space. No agents are shown. This view would constitute complete information, which bots do not possess during game play. Each team has a tower (highlighted in red). Should a team lose their tower, the game is over
under the hard, as opposed to easy or medium, level of difficulty. We will then test the resulting evolved Shadow Fiend hero against all three settings for the opponent hero bot. This is not a straightforward goal, as we are potentially subject to several pathologies: (1) fitness disengagement—the opponent already has a challenging strategy, implying that we are not able to establish a useful gradient for directing evolution; (2) a sparse fitness function—many skills need to be attained in order to develop effective hero strategies, including navigation, defending, attacking and collecting bounty. Some of these properties have no direct reward (i.e. navigation) and others are only indirectly quantified in the fitness function (i.e. defending and attacking). In short, establishing an effective strategy necessitates the development of many abilities that are not characterized directly in the fitness function.
17.3 Related Work
17.3.1 Memory in Neural Networks
Letting neurones recycle previous (internal) state information produces a feedback loop, thus supporting a distributed form of memory, or memory through recurrent connectivity. However, there are many potential connectivity patterns that could recirculate state to provide recurrent properties, e.g. Hopfield network formulations for content-addressable memory [2], brain-state-in-a-box [13], or Elman recurrent networks [8]. This diversity of models appears in part due to the impact that different
connectivity schemes have on the properties of the network and the (possible) implications for stability under feedback. One of the most widely employed configurations for recurrent connectivity takes the form of long short-term memory (LSTM) [15].6 Each LSTM uses a combination of gating elements to define a 'perfect integrator', effectively implementing the expression

y(t + 1) = y(t) + g(t_s) · x(t)    (17.1)

where t is a discrete time step, y is the LSTM output, x is the input to the LSTM and g(t_s) is the value of a gating neurone used to modulate when the input x appears, through the independent temporal index t_s (a worked sketch of Eq. (17.1) appears at the end of this subsection). The problem invariably is that although the LSTM model has had many successes (see [11] for a short survey and ongoing developments), the practicalities of getting the LSTM to reliably gate the relevant inputs at the right time have motivated the development of forms of indexed addressable memory for neural networks [9, 10]. In this latter case, the 'Neural Turing Machine' (NTM) represents the most widely assumed starting point [9]. The NTM augments a feedforward neural network with outputs for manipulating a 'bank' of (external) memory using read, write and erase operations. The innovation was to define these operations such that they could be directed using gradient descent. This was achieved by making the operations act on a range of locations relative to the 'position' of the read/write/erase 'head'. Moreover, the range itself could be tuned. Ultimately, explicit mechanisms were designed to encourage read operations that supported associative recall and temporal association, and write operations that mimicked a 'least recently used' policy [10]. Initial formulations of the NTM were demonstrated on tasks such as copying arbitrary-length sequences to memory and then retrieving them, associative recall of specific sequences of bits, or sorting bit sequences into lists [9]. More realistic tasks were then introduced that included answering graph queries (pertinent to natural language processing and route finding) and puzzle solving on a grid world [10]. Combining a deep learning architecture with an NTM provides the opportunity to learn what specific encoded states (from the deep learning architecture) should be retained in memory for later use. However, the optimal relationship between deep learning network, LSTM or NTM necessary for solving specific tasks remains unknown. That said, several recent works demonstrate that visual reinforcement learning tasks (where the sensor input corresponds to video data) describing environments limited to a first person perspective (i.e., partial observability) can only be addressed using a combination of all three properties [17, 39]. Finally, we note that several NTM results for low dimensional tasks have been demonstrated using neuroevolution [12, 26]. This is significant because neuroevolution is able to assume a simpler interface to indexable memory than
6 LSTM is widely used as it addresses one of the potential pathologies of recurrent connectivity under gradient descent, that of vanishing gradients.
the original NTM (with gradient descent). Neuroevolution is also free to evolve the connectivity of the neural network, and thus identifies solutions that are also much simpler than gradient-based NTM methods. To date, we are not aware of neuroevolution with external memory being applied to high-dimensional partially observable tasks (e.g. as encountered by bots defining characters in video games), only reactive configurations [30, 36].
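To make Eq. (17.1) concrete, the following few lines implement the perfect integrator directly, with hand-picked gate values; a real LSTM learns g from data.

```python
def integrate(xs, gates):
    # y(t+1) = y(t) + g(t_s) * x(t): input is accumulated only when gated in.
    y, trace = 0.0, []
    for x, g in zip(xs, gates):
        y = y + g * x
        trace.append(y)
    return trace

print(integrate([1.0, 2.0, 3.0, 4.0], [1, 0, 1, 0]))  # -> [1.0, 1.0, 4.0, 4.0]
```

The gated inputs (the first and third) are retained indefinitely, which is exactly the property that makes reliable gating so critical in practice.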
17.3.2 Memory in Genetic Programming
Memory mechanisms in GP have a rich history. For the purposes of this review, we recognize two forms of memory as employed with GP: scalar and indexed. Scalar memory is synonymous with register references in linear GP. Thus, linear GP might manipulate code defined in terms of a 'register-to-register' operation: R[x] := R[x] op R[y], in which R[·] is a register, x, y ∈ {0, . . . , Rmax − 1} represent indexes to registers and op is a two-argument operator. Several authors pioneered the use of scalar memory in linear GP [16, 28]. This means that although execution through linearly structured programs is typically sequential,7 sequential instructions need not use the result of the preceding instruction. In addition, we can also distinguish between stateful and stateless operation in linear GP. Stateful operation implies that after execution of the program relative to state x(t), the values of the registers are not reset. Instead, execution at state x(t + 1) commences with the register values as left from the previous execution of the program [1, 14], or

R[x]_{t+1} := R[x]_t op R[y]_t    (17.2)

where R[x]_t is a reference to register x content relative to external state t. Note, however, that each reference to R[x] at the same update, t, will result in a 'distributed' updating of (recurrent) state. Finally, we also note that the assignment operator (:=) results in the LHS being over-written by the result of the calculation. Poli et al. make the case for a model of scalar memory in which values incrementally change content, as opposed to out-right replacing current content [29]. Such a framework was applied to the propagation of values between operations in tree structured GP and the action of operators in register based (linear) GP in regression tasks. Indexed memory implies that addresses are specified with an explicit read and write operation (i.e. provision of 'load' and 'store' operations in linear GP). Teller investigated various formal properties that might result from supporting indexed memory [38]. Teller also demonstrated the evolution of indexed memory as a basic data structure [37], a topic extensively investigated by Langdon [23]. Other early developments included the evolution of 'mental models', such as recalling the
7 Conditional instructions could change this [6].
content of discrete worlds defined as 4 × 4 toroidal grids [4, 7]. To do so, a two-phase approach to evolving memory content was assumed [4, 7]. In phase 1, the agent can only write to memory, the assumption being that the agent can navigate the entire world, noting points of interest, without reading from memory. In phase 2, the agent is rewarded for systematically revisiting all the points of interest using only the information from memory. Such a formulation would preclude operation in tasks that require memory in order to support navigation. External memory models have also been proposed for use with simple robot controllers, e.g. [3]. In this case, a performance measure was needed in order to define what memory content was actually saved and the criteria for replacement. Moreover, each write operation wrote the entire state space to memory. Neither would be feasible in the context of the Dota 2 environment.
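The distinction between scalar (register) memory, stateful operation (Eq. (17.2)) and indexed memory with explicit load/store operations can be illustrated with a small interpreter. The instruction format below is an assumption for illustration, not the encoding used by any of the cited systems.

```python
import operator

OPS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}

def run(program, R, M, state):
    # R persists between calls (stateful operation): registers are not reset.
    for instr in program:
        kind = instr[0]
        if kind == "op":        # ("op", "add", x, y): R[x] := R[x] op R[y]
            _, op, x, y = instr
            R[x] = OPS[op](R[x], R[y])
        elif kind == "input":   # ("input", x, i): R[x] := state[i]
            _, x, i = instr
            R[x] = state[i]
        elif kind == "load":    # ("load", x, a): R[x] := M[a]  (indexed read)
            _, x, a = instr
            R[x] = M[a]
        elif kind == "store":   # ("store", x, a): M[a] := R[x] (indexed write)
            _, x, a = instr
            M[a] = R[x]
    return R[0]                 # register 0 taken as the program output

R = [0.0] * 4                   # scalar memory: stateful across calls
M = [0.0] * 8                   # indexed memory: shared, explicit load/store
prog = [("input", 1, 0), ("op", "add", 0, 1), ("store", 0, 3)]
print(run(prog, R, M, state=[2.5]))  # 2.5; a second call accumulates to 5.0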
17.4 Tangled Program Graphs
The tangled program graph (TPG) framework was previously benchmarked on the Arcade Learning Environment (ALE) suite of Atari game playing tasks [25], where results competitive with deep learning solutions were demonstrated [19, 21]. In addition, TPG could evolve solutions to multiple ALE titles simultaneously [20, 21]. It was also demonstrated that solutions could be evolved without first down-sampling the source video in ALE [22], and for sub-tasks described in the VizDoom first person shooter environment [32]. No use is made of task-specific instruction sets; instead, TPG lets each program perform feature construction from individually indexed pixels [22, 32]. The cooperative nature of this decomposition organizes programs into teams, and ultimately teams into graphs of teams (of programs). Solution complexity then reflects that of the underlying task. Moreover, the computational complexity of the resulting solutions is orders of magnitude lower than that associated with deep learning solutions, resulting in TPG solutions executing in real-time on very modest computing platforms [19, 21, 22, 32].
Figure 17.2 provides an example TPG solution for controlling a bot capable of playing 10 tasks taken from the VizDoom first person shooter environment [40]. Each 'node' of the graph represents a team of programs. The outgoing arcs represent programs, with the arrow ending at a specific indexed action, a. Actions can either be atomic (task-specific) actions, a ∈ A, or a pointer to another team, a ∈ P. Each program defines a function indicating the degree of confidence in its corresponding action, i.e. actions are a scalar integer value which is decoded into either an atomic action or a pointer to another team. At initialization, all teams are described in terms of atomic actions. Over time, the action of the variation operators enables surviving teams to index other (surviving) teams, resulting in the emergent discovery of a 'tangled program graph'.
Execution always starts relative to a 'root node', where there is only ever one root node per TPG individual and root nodes are not indexed by other teams. In order to evaluate a node, all the programs at this node are executed on the current state,
Fig. 17.2 Example TPG solution taken from [32]. Nodes are teams (of programs) with the black node denoting the root team where evaluation always commences. Square nodes are atomic actions. All nodes have at least one atomic node to guarantee that looping behaviour does not occur
x(t). The program with the maximum output at this node is said to have 'won' the right to suggest its action. If this action is an atomic action, then it is passed to the game engine as the bot's action in this state, resulting in an update to game state, x(t) → x(t + 1). However, should the action be a pointer to another node, then the process of evaluation repeats at the new node, relative to x(t). Various heuristics ensure that although the graph might be cyclic, any attempt at node revisiting is trapped, and the program with the 'runner up' output is used to identify an unvisited graph arc (a sketch of this traversal appears below). This implies that only a fraction of a TPG graph need be evaluated to determine each action. Moreover, no attempt is made to apply a convolution operator to the high-dimensional video input, again contributing to the very low computational footprint of TPG solutions. A tutorial description of the TPG algorithm is available from prior work [21, 22].
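The sketch below illustrates the traversal just described; Team, Program and the bid() function are illustrative stand-ins rather than the actual TPG implementation.

```python
class Program:
    def __init__(self, action, weight):
        self.action = action       # int (atomic action) or Team (pointer)
        self.weight = weight

    def bid(self, state):
        # Toy bid: real TPG programs execute linear GP code over the state.
        return self.weight * sum(state)

class Team:
    def __init__(self, programs):
        self.programs = programs   # invariant: >= 1 atomic action per team

def evaluate(root, state):
    visited, team = set(), root
    while True:
        visited.add(id(team))
        # All programs at the node bid; the winner suggests its action.
        for prog in sorted(team.programs, key=lambda p: p.bid(state),
                           reverse=True):
            if isinstance(prog.action, int):    # atomic action: return it
                return prog.action
            if id(prog.action) not in visited:  # follow unvisited team pointer
                team = prog.action
                break
        else:
            # The >= 1 atomic action per team guarantee makes this
            # branch unreachable in a well-formed graph.
            raise RuntimeError("malformed TPG graph")

leaf = Team([Program(3, 1.0)])
root = Team([Program(leaf, 2.0), Program(7, 0.5)])
print(evaluate(root, [0.1, 0.2]))  # -> 3 (root defers to the leaf team)
```

Note how a revisited pointer simply falls through to the next-ranked bid, so only a fraction of the graph is ever executed per decision.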
17.5 Indexed Memory for TPG
As noted in the introduction, the principal motivation for augmenting TPG with indexed memory is to provide a mechanism for addressing partial observability [34]. However, to do so we also need to bear in mind that TPG solutions consist of multiple programs (Fig. 17.2), so merely having each program address an independent 'bank' of indexed memory will still result in an inability to communicate 'internal state' across an entire TPG individual. Moreover, as TPG solutions are entirely emergent, we cannot a priori identify which programs will ultimately be cooperating to compose a solution. In order to address this issue we adopt an approach first proposed by Spector and Luke in which there is only ever one instance of external memory [35]. As we are interested in reinforcement learning tasks, we carry the concept to the point where the state of the indexed memory is never reset.8 Each TPG individual inherits the state of indexed memory as left from the previous evaluation of a TPG individual. Moreover, no reset is performed between generations. From this perspective, memory is as much a communal communication medium as it is memory. Any individual that in some way corrupts the communal state of indexed memory penalizes the entire population.
Read and write operations assume different 'views' in order to provide a mechanism that supports different temporal durations (long and short term) in indexed memory. Specifically, a read operation assumes that indexed memory, M, is defined as a set of sequential indexes. Such a process can be implemented as a Register-Memory reference, with the mode bit setting the operand source to reference indexed memory as opposed to a register or a reference to the application state space. Thus, any read to M fetches the content from a single indexed memory location and loads it into a register, R[x]. A write operation needs to define both what to write and where to write. We will assume that the content of the set of registers, R, associated with the program performing the write operation represents a suitable definition for what to write, i.e. each program has its own set of Rmax registers. Writes define where to write as a probabilistic operation, distributing the content of the program's registers, R, across the L columns of external memory, M, as per Algorithm 1. This means that columns in the 'middle' of external memory are updated most frequently (short-term memory), whereas columns towards the beginning or end are updated less frequently (long-term memory).
In summary, the read operation treats indexed memory, M, as a set of consecutively addressed locations. Write operations perceive indexed memory as a matrix of Rmax × L locations in which the mid region, around column L/2, is more likely to be written to, and the regions towards the ends are less likely to be written to. This then supports indexed memory with short and long term temporal properties. Section 17.7.1 discusses particular parameter choices for L and β.
8 Indexed memory is initialized once at generation zero with NULL content.
Algorithm 1 Write function for external memory M. The function is called by a write instruction of the form Write(R), where R is the vector of register content of the program when the write instruction is called. Step 1 identifies the mid point of memory M, effectively dividing memory into upper and lower memory banks. Step 2 sets up the indexing for each bank such that the likelihood of performing a write decreases as a function of the distribution defined in Step 2a. Step 2(a)i defines the inner loop in terms of the number of registers that can source data for a write. Step 2(a)iA tests for a write to the upper memory bank and Step 2(a)iB repeats the process for the lower bank.
Call: Write(R)
1. mid = L/2
2. for (offset := 0 < mid)
   a. p_write = 0.25 − (β × offset)^2
      i. for (j := 0 < Rmax)
         A. IF (rnd[0, 1) ≤ p_write) THEN (M[mid + offset][j] = R[j])
         B. IF (rnd[0, 1) ≤ p_write) THEN (M[mid − offset][j] = R[j])
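A direct Python transcription of Algorithm 1 might look as follows; it is a sketch for illustration, not the authors' code. Note how the quadratic decay of p_write concentrates writes around the middle columns (short-term memory) and makes writes to the extreme columns rare (long-term memory).

```python
import random

def write(M, R, beta):
    # M: L x Rmax external memory (list of column rows); R: program registers.
    L, Rmax = len(M), len(R)
    mid = L // 2
    for offset in range(mid):
        # Write probability peaks at the middle column and decays
        # quadratically towards both ends of memory.
        p_write = 0.25 - (beta * offset) ** 2
        for j in range(Rmax):
            if random.random() <= p_write:   # rnd[0, 1) <= p_write
                M[mid + offset][j] = R[j]    # upper memory bank
            if random.random() <= p_write:
                M[mid - offset][j] = R[j]    # lower memory bank

M = [[0.0] * 8 for _ in range(16)]  # L = 16 columns, Rmax = 8 registers
write(M, R=[1.0] * 8, beta=0.05)
```

For sufficiently large offsets p_write goes negative, so the outermost columns are effectively never written on a single call, which is what gives them their long-term character.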
17.6 Dota 2 Game Engine Interface

17.6.1 Developing the Dota 2 Interface

Figure 17.3 summarizes the interaction between the various components comprising our interface to the Dota 2 game engine. The process for delivering data between the Dota 2 game client and an external learning interface (in this case TPG) is built using a combination of the Dota 2 Bot Scripting API (website) and the Dota 2 Workshop Tools Scripting API (website). As seen in Fig. 17.3, the general flow for evaluating the environment and acting on it follows a fairly standard client-server model. After initial configuration is performed (loading the local development script for handling TPG bots, loading the 'default' bot behaviour for the opponent, and removing other bots from the game to ensure the game is played as a one-on-one scenario), the Dota 2 client sends the game state (as defined in Table 17.1) as a JSON structure to the webserver, which runs as a library parallel to the main TPG logic. The webserver extracts the data and stores it in a list of floating point values, which is fed into the main TPG learning algorithm. Once TPG evaluates the game state, it produces an atomic integer action, which is gathered by the webserver and sent back to the game client.
Fig. 17.3 Interaction between components of the Dota 2 bot interface
Once the game client receives the integer action, it selects the preset (but minimal) behaviour associated with that action. This process then repeats until the match has concluded.

The Dota 2 client, as a standard feature, does not allow a user to have more than one local development script for bots. This became a problem when selecting heroes. Our goal was to have TPG play against the built-in bots; however, we ran into issues where we could not select both our own hero and the enemy's hero without loading a bot script. If we used our local bot script, this would mean TPG would play against itself, which was not a desirable scenario, as our opponent should be of known skill in order to facilitate improvement of the TPG agent. Fortunately, the Steam client has an online repository for Dota 2 bots which are able to run separately from the local script. This allowed us to create a bot which uses all of the default bot logic, except during the hero selection process, where it always selects Shadow Fiend.

By default, the Dota 2 Bot Scripting API does not have a clean method for handling outside interfaces, likely to prevent forms of cheating. The only viable option for us to enable the exchange of information between the client and TPG was to use the Dota 2 Workshop feature of creating custom HTTP POST requests in order to send data to and from the Dota 2 client. The client and server are able to exchange a single set of 320 floating point and integer values within 7 ms, which can become a technical challenge when running the engine at faster speeds in order to accelerate the learning process and decrease overall run time.
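As a rough illustration of the server side of this exchange, consider the sketch below; the web framework (Flask), route name and helper functions are our own assumptions, not the authors' interface.

from flask import Flask, request, jsonify

app = Flask(__name__)

def flatten(state_json):
    # Placeholder: order the JSON fields into the fixed-length vector of
    # ~320 floats characterized in Table 17.1 (here: numeric leaves, in a
    # deterministic order).
    out = []
    def walk(node):
        if isinstance(node, dict):
            for key in sorted(node):
                walk(node[key])
        elif isinstance(node, list):
            for item in node:
                walk(item)
        elif isinstance(node, (int, float)):
            out.append(float(node))
    walk(state_json)
    return out

def tpg_select_action(state):
    # Placeholder for the TPG policy: map the state vector to one of the
    # 30 atomic integer actions of Sect. 17.6.3.
    return 0

@app.route("/gamestate", methods=["POST"])
def game_state():
    state = flatten(request.get_json())                   # from the Lua script
    return jsonify({"action": tpg_select_action(state)})  # back to the client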
Table 17.1 Attributes used to characterize game state

Attribute type | Attribute count | Attributes
Self (opponent) | 30 (26) | Team, level, health (avg, max, regen), mana (avg, max, regen), movement speed (base, current), base damage (avg., variance), attack (damage, range, speed), sec. per attack, attack (hit location, time since last, target of current), vision range, strength, agility, intelligence, gold^a, net worth^a, last hits^a, denies^a, location (x-, y-coordinate), orientation
Match duration | 1 |
Abilities, 6 per self (opponent) | 48 (42) | Level, mana cost, damage, range, num. of souls (Necromastery), cooldown^a, target type, effect flag area
Creeps (up to 10 nearest) | 140 | Team, health, max health, movement speed (base, current), base damage (avg., variance), attack (damage, range, speed), sec. per attack, location (x-, y-coordinate)
Towers, self (opponent) | 6 (6) | Team, health (avg., max), attack (damage, range, speed)
Runes | 8 | Bounty rune location (top, bottom), top regular rune location, top powerup rune (type, status), bottom powerup rune (location, type, status)
Items, self (opponent) | 7 (6) | Inventory (slot 1, ..., slot 6), healing salve flag^a

In the case of attributes estimated for both self and opponent, ^a indicates a self-specific attribute
17.6.2 Defining State Space

A total of eight different categories of feature were assumed from the set of features provided by Valve.9 The Self and Opponent categories summarize the basic statistics for each team. Four of these properties represent internal bot state (current gold, net worth, last hits, denies) and are therefore unknown in the case of the opponent. Match time is merely the duration of the current game. Abilities reflect further state information for each hero, and are repeated for all 6 of the Shadow Fiend hero abilities. Abilities represent skills available to the hero character and change as the character 'levels up' during each game (games start with heroes at the lowest level of ability). Creep state is characterized in terms of a vector of 14 attributes per creep, for the nearest 10 creeps. One way in which the game can be won/lost is when a tower is taken. Thus, there are six attributes defining the state of each of the Radiant and Dire mid-lane towers. Runes represent gems that can be collected in the environment; eight attributes summarize the type, location and state of such items,
9 Up to 20,000 features are available; however, we are only interested in the case of a 1-on-1 single-lane configuration of the game (as opposed to 5 heroes per team over 3 lanes).
and another seven attributes summarize the Items bought during the course of a game by each hero. An attribute value of '−1' is assumed for any attribute that is not measurable due to partial observability, or where there is not enough of the property to measure (e.g. fewer than 10 creeps are observable/alive).
17.6.3 Defining the Shadow Fiend Action Space

We assume a minimalist scenario in which we provide enough functionality for our Shadow Fiend hero to support the following five properties, resulting in a total of 30 actions (enumerated in the sketch after this list).

Movement: at any point in time heroes have the ability to walk in one of eight cardinal directions (north, north-east, east, south-east, south, south-west, west, north-west) or not move, resulting in 9 actions. When issued, the move command is applied to a ground location 100 units from the hero's current location.

Attacking opponents: an 'attack action' declares which opponent to attack, and assumes a default integer value (future work could potentially tune these defaults). In total TPG will distinguish between 12 possible targets: the opponent tower, the opponent hero, or the ten nearest opponent creeps, for a total of 12 actions.

Casting spells: a Shadow Fiend character has a unique set of spell casting abilities.10 The 'Shadowraze' is a ranged spell cast at one of three distances (near, medium, far), whereas a 'Requiem of Souls' is either cast or not. This results in another 4 actions.

Collecting items: a hero can collect 'powerup' and 'bounty' runes while navigating the environment, but to do so it has to explicitly deploy the relevant action. This results in another 4 actions.

Healing: the Shadow Fiend hero can deploy a 'Healing Salve', for which there is a single action.
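Enumerated explicitly, the 30-action space could be written down as follows; the action names and their ordering are our own illustration.

DIRECTIONS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

ACTIONS = (
    ["no_move"] + ["move_" + d for d in DIRECTIONS]        # 9 movement actions
    + ["attack_tower", "attack_hero"]
    + ["attack_creep_%d" % i for i in range(10)]           # 12 attack actions
    + ["raze_near", "raze_medium", "raze_far", "requiem"]  # 4 spell actions
    + ["rune_powerup_top", "rune_powerup_bottom",
       "rune_bounty_top", "rune_bounty_bottom"]            # 4 rune actions
    + ["use_healing_salve"]                                # 1 item action
)
assert len(ACTIONS) == 30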
17.6.4 Fitness Function

A hero has to operate cooperatively with friendly creeps and defensive towers. In the following we explicitly reward our hero for successful 'Last Hitting' and 'Denies', but ignore other tactics such as 'Creep Blocking' and 'Pulling'.11 In addition, we include three factors explicitly scored by the game engine: net worth, kills, and match points.
10 https://dota2.gamepedia.com/Shadow_Fiend. 11 https://dota2.gamepedia.com/Creep_control_techniques.
Last hits (LH): a 'last hit' refers to the killing blow applied to an opponent or neutral creep. If your hero is the last hitter, then a gold bonus is paid to your hero, increasing your ability to purchase items for improving the capabilities of your hero. Each last hit is awarded 10 points in the fitness function.

Denies (D): when creep health decreases below 50% you can get a last hit on your own creeps. This is useful because it 'denies' a last hit to the opponent hero, i.e. you prevent the opponent hero from gaining gold or experience points. Each 'deny' is awarded 15 points in the fitness function, reflecting that denies are significantly less common than last hits.

Net worth (NW): the overall wealth of your team, including all bought items, gold, and any inventory accumulated. We use the net worth as calculated by the game engine.

Kills (K): each time an opponent hero is killed a reward of 150 points is given.

Match (M): a match ends with either a hero dying twice, or the successful destruction of the team's tower (Fig. 17.1). If the agent is the winner, the game engine awards 2000 points.

Not dead (ND): a bonus of 150 points is awarded if the hero did not die during the match.

TPG Shadow Fiend hero fitness, f_i, now has the form:

f_i = 10 × LH + 15 × D + NW + 150 × K + 150 × ND + 2000 × M    (17.3)

where LH, D and K reflect counts of the number of times each property is achieved per match, ND ∈ {0, 1, 2} reflects the number of times the agent is 'not dead' per match, NW assumes the value estimated by the game engine at the end of the match, and M ∈ {0, 1}.
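Equation (17.3) translates directly into code; a sketch:

def fitness(last_hits, denies, net_worth, kills, not_dead, won_match):
    # Eq. (17.3): last_hits, denies and kills are per-match counts;
    # not_dead is in {0, 1, 2}; won_match is 0 or 1; net_worth is the
    # game engine's end-of-match estimate.
    return (10 * last_hits + 15 * denies + net_worth
            + 150 * kills + 150 * not_dead + 2000 * won_match)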
17.7 Results

17.7.1 TPG Set Up

Parameterization assumes the same set up as an earlier experiment in which bot behaviours were evolved for death matches in the VizDoom first person shooter environment [34]. External memory is defined by the number of registers a program indexes (Rmax) and the number of columns, L (Sect. 17.5). Thus, from the perspective of a read operation there are M = Rmax × L indexable locations. All our experiments assume L = 100, therefore the range of indexes supported during a read operation is M = 800. The probability distribution defined in Step 2a of Algorithm 1 is parameterized with β = 0.01. This means that for 'offset' values ≈ 0 the likelihood of a write operation is 25% (or 4 out of Rmax = 8 registers will
Table 17.2 TPG parameterization

Configuration at initialization
  Number of teams (P): 360
  Number of programs: 14,400
  Instruction distribution: Equal
  Initial Prog. per team (ω): 40

Team population
  Gap: 50% of root teams
  Pm: 0.2
  Pd: 0.7
  Pa: 0.7
  Pnm, Patomic: 0.2, 0.5

Program population
  Max. instructions: 64
  Registers per program (Rmax): 8
  Prob. Delete Instr. (Pdel): 0.5
  Prob. Add Instr. (Padd): 0.5
  Prob. Mutate Instr. (Pmut): 1.0
  Prob. Swap Instr. (Pswp): 1.0

Linear GP representation is assumed in which 10 opcodes appear: +, −, ×, ÷, log, exp, Conditional, Write, Read. Two-argument instructions have the form R[x] = R[x] op2 R[y]; single-argument instructions: R[x] = op1(R[y]); conditional operator: IF R[x] op0 R[y] THEN R[x] = −R[x]. These definitions are unchanged relative to earlier work on classification tasks [24]. Read/Write definitions appear in Sect. 17.5
be written to the shortest-term memory locations indexed about column L/2). However, as 'offset' grows such that the indexed column approaches L (and respectively 1), the likelihood of performing a write tends to 1%. Overall, a write operation will result in 17% of M changing value under this parameterization. No claims are made regarding the relative optimality of such a parameter choice. Table 17.2 summarizes the parameters assumed for TPG.
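As a quick sanity check of the 17% figure (our own back-of-envelope calculation, not from the chapter), summing the write probability over all columns gives the expected fraction of M touched by a single write:

L, BETA = 100, 0.01
mid = L // 2
# Each offset addresses one upper-bank and one lower-bank column, each
# written (per register cell) with probability 0.25 - (BETA * offset)^2.
expected = sum(2 * (0.25 - (BETA * off) ** 2) for off in range(mid))
print(expected / L)  # ~0.169, i.e. roughly 17% of M changes per write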
17.7.2 Training Performance

Figure 17.4 summarizes the development of fitness over the 250 generations assumed for training. The top curve corresponds to the overall fitness (Eq. (17.3)), whereas the bottom curves correspond to the contribution to fitness from last hits (Fig. 17.4a) and denies (Fig. 17.4b). Note that the overall fitness is expressed on the left y-axis on a log scale. The curves are certainly noisy, on account of the stochastic properties of the game. Given that all training is performed relative to a built-in Shadow Fiend opponent configured as 'hard', we see it as encouraging that the underlying fitness curve shows a positive trend.
Fig. 17.4 Overall fitness function (left y-axis) with development of (a) last hits and (b) denies (right y-axis). Moving average of respective fitness curves shown in black
17.7.3 Assessing Champion TPG Agents Post Training

Post training, the best TPG agent is identified as the individual with the highest training fitness. Fifty tournaments are then performed with TPG trained without indexed memory (just scalar stateful registers—hereafter 'reactive TPG'), TPG trained with indexed memory (hereafter 'memory TPG') and three instances of the opponent Shadow Fiend bot from the game engine (corresponding to difficulty levels of 'easy', 'medium' and 'hard'). The distinction between these difficulty levels is summarized in Table 17.3. Table 17.4 summarizes the number of times the two forms of TPG were able to win against the different difficulty levels of the opponent Shadow Fiend bot; games are always played until completion. The reactive TPG configuration is not capable of winning games at any level of the opponent. Viewing the resulting game play indicates that the reactive TPG agent is unable to identify the significance of the mid-lane, i.e. where the creeps from its base will march and engage with the opponent creeps. Instead, the reactive TPG hero agent wanders around the world until it either dies (e.g. attacked by creeps specific to forest features) or the opponent team successfully takes the TPG agent's tower (marked in red on Fig. 17.1). Adding indexed memory to TPG clearly results in competitive performance when playing against 'Easy' and 'Medium' difficulty opponent bots; however, we are not able to compete with the opponent Shadow Fiend at the 'Hard' setting. That said, the fitness curve indicates that additional generations might raise the level of play to the point where we do become competitive at this level. Also shown for the memory TPG agent are the experience points awarded by the game engine, which are an indication of the ability to 'level up', i.e. gain new abilities/develop the hero character over the course of the game. This is most prominent in the case of the Medium opponent difficulty, as games are over too quickly for the Easy and
Table 17.3 Difference between different difficulty settings of the built-in Shadow Fiend bot

Difficulty | Reaction time | Last hitting delay | Ability recovery delay
Easy | 0.3 s | Yes | 6 s
Medium | 0.15 s | Yes | 3 s
Hard | 0.075 s | None | None
Table 17.4 Evaluation of champion TPG agent against opponent Shadow Fiend bot

Opponent bot difficulty | Reactive TPG # wins | Memory TPG # wins | XP | Gold
Easy | 3 (6%) | 31 (62%) | 8760 | 3633
Medium | 0 (0%) | 22 (44%) | 10,111 | 3860
Hard | 0 (0%) | 12 (24%) | 6166 | 3170

A total of 50 games are played at each difficulty level. XP is the median 'experience points' awarded to memory TPG play over the 50 games. Median 'gold' reflects the ability of memory TPG to monetize during game play
Hard opponents (i.e. the TPG agent wins (loses) too quickly against the Easy (Hard) opponent). We also note that, from a behavioural perspective, memory TPG appears to have learnt to navigate and explicitly takes itself to the river feature about the mid-lane, where it 'patrols' and engages with the opponents [33].
17.7.4 Characterization of Memory Behaviour

Table 17.5 summarizes the static characteristics of the TPG agent at several levels of granularity. In short, the TPG graph consists of 1369 programs organized into 46 nodes with a median out degree of 19. However, the median number of node visits per decision is 5 (or on average 5 × 19 × 51 = 4845 instructions evaluated per decision). It is also apparent that all programs perform a read against indexed memory, but only 23% of the programs perform a write to indexed memory (at initialization, all programs consist of all types of instruction). In short, specialization has taken place, with only specific programs being allowed to perform write operations. We note that a similar specialization appeared in VizDoom deathmatches with TPG [34].

Table 17.5 Post training static characterization of champion TPG bot properties

Property | Value
Teams | 46
Out-degree (med) | 19
Learners | 1369
Learners/team (mean) | 29.76
Shortest path | Root (1)
Median path | 5
Longest path | 8
Instructions (total) | 69,664
Instructions/learner (mean) | 50.89
Total read instructions | 30,881 (44.33% of total)
Input read instructions | 23,818 (34.19% of total)
Memory read instructions | 7063 (10.13% of total)
Teams with memory reads | 46 (100.0% of total)
Learners with memory reads | 1369 (100.0% of total)
Memory write instructions | 2786 (3.99% of total)
Teams with memory writes | 27 (58.69% of total)
Learners with memory writes | 317 (23.16% of total)

Figure 17.5 attempts to capture some of the dynamic properties of indexed memory accesses during the course of games. Figure 17.5a represents a heat map12 of the memory accesses, with darker coloured cells indicating more frequently accessed memory locations. Short-term memory is associated with the (vertical) middle of the map (indexes around 50), whereas long-term memory locations appear towards the ends (indexes around 1 and 100). Certain locations appear to have a much higher frequency of memory access than others, and a significant degree of sparseness also appears. Counting the frequency of indexed memory accesses is also possible (Fig. 17.5b), where the same bias to a lower number of write operations is reported. Note, however, that no attempt is made to remove hitchhiking programs (those programs that never win the right to suggest an action at a node), so further simplification might be possible.

12 Heat map produced using [5].

Fig. 17.5 Snapshot of indexed memory utilization over a game (a) heat map of memory read/writes (b) number of read/write references during a game. Note the difference in scale between reads (10^6) and writes (10^5)
17.8 Conclusion

TPG has previously been demonstrated as providing solutions to visual reinforcement learning tasks from the ALE suite of titles, to a level competitive with deep learning, but without the computational cost [21]. However, game titles from the ALE suite are, for the most part, games of complete information. Scaling TPG to games with significant amounts of partial observability requires the addition of indexed memory. A design of indexed memory for this purpose was previously proposed and evaluated for the specific case of 'deathmatches' in the VizDoom first person shooter environment [34]. In this work, we demonstrate that the same appears to be true for the case of evolving a 'Shadow Fiend' hero for the 1-on-1 mid-lane configuration of Dota 2. In summary, for the same design of fitness function, atomic actions and state space, TPG with indexed memory is able to reliably reach a level of performance capable of matching the built-in Shadow Fiend hero at the medium setting in 250 generations. Moreover, the fitness curve at this point has not plateaued, implying that further generations would be beneficial. Conversely, TPG without indexed memory is unable to evolve behaviours that even match the opponent bot at the 'easy' difficulty.

In future work, we are interested in investigating whether behaviours can be transferred between different hero characters in order to speed up evolution, as well as addressing the issue of multi-lane and multi-opponent play. Finally, outside of Dota 2, we are interested in knowing whether TPG with indexed memory would also be useful in high-dimensional games of complete information. That is to say, TPG (and GP in general) invariably does not index all of the state space, so from the perspective of what GP 'sees' the state space is actually partial; thus indexed memory might be appropriate for high-dimensional reinforcement learning tasks as a whole. We note that research to this end has begun to appear, suggesting that TPG with an incremental model for constructing indexed memory is indeed beneficial in high-dimensional games of complete information [18].

Acknowledgements We gratefully acknowledge support from the NSERC CRD program (Canada).
References

1. Agapitos, A., Brabazon, A., O'Neill, M.: Genetic programming with memory for financial trading. In: EvoApplications, LNCS, vol. 9597, pp. 19–34 (2016)
2. Aiyer, S.V.B., Niranjan, N., Fallside, F.: A theoretical investigation into the performance of the Hopfield model. IEEE Transactions on Neural Networks 15, 204–215 (1990)
3. Andersson, B., Nordin, P., Nordahl, M.: Reactive and memory-based genetic programming for robot control. In: European Conference on Genetic Programming, LNCS, vol. 1598, pp. 161–172 (1999)
4. Andre, D.: Evolution of mapmaking ability: Strategies for the evolution of learning, planning, and memory using genetic programming. In: IEEE World Congress on Computational Intelligence, pp. 250–255 (1994)
5. Babicki, S., Arndt, D., Marcu, A., Liang, Y., Grant, J.R., Maciejewski, A., Wishart, D.S.: Heatmapper: web-enabled heat mapping for all. Nucleic Acids Research (2016). http://www.heatmapper.ca/
6. Brameier, M., Banzhaf, W.: Linear Genetic Programming. Springer (2007)
7. Brave, S.: The evolution of memory and mental models using genetic programming. In: Proceedings of the Annual Conference on Genetic Programming (1996)
8. Elman, J.L.: Finding structure in time. Cognitive Science 14, 179–211 (1990)
9. Graves, A., Wayne, G., Danihelka, I.: Neural Turing machines. CoRR abs/1410.5401 (2014)
10. Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwinska, A., Colmenarejo, S.G., Grefenstette, E., Ramalho, T., Agapiou, J., Badia, A.P., Hermann, K.M., Zwols, Y., Ostrovski, G., Cain, A., King, H., Summerfield, C., Blunsom, P., Kavukcuoglu, K., Hassabis, D.: Hybrid computing using a neural network with dynamic external memory. Nature 538(7626), 471–476 (2016)
11. Greff, K., Srivastava, R.K., Koutník, J., Steunebrink, B.R., Schmidhuber, J.: LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems 28(10), 2222–2231 (2017)
12. Greve, R.B., Jacobsen, E.J., Risi, S.: Evolving neural Turing machines for reward-based learning. In: ACM Genetic and Evolutionary Computation Conference, pp. 117–124 (2016)
13. Grossberg, S.: Content-addressable memory storage by neural networks: A general model and global Liapunov method. In: E.L. Schwartz (ed.) Computational Neuroscience, pp. 56–65. MIT Press (1990)
14. Haddadi, F., Kayacik, H.G., Zincir-Heywood, A.N., Heywood, M.I.: Malicious automatically generated domain name detection using stateful-SBB. In: EvoApplications, LNCS, vol. 7835, pp. 529–539 (2013)
15. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9(8), 1735–1780 (1997)
16. Huelsbergen, L.: Toward simulated evolution of machine language iteration. In: Proceedings of the Annual Conference on Genetic Programming, pp. 315–320 (1996)
17. Jaderberg, M., Czarnecki, W.M., Dunning, I., Marris, L., Lever, G., Castañeda, A.G., Beattie, C., Rabinowitz, N.C., Morcos, A.S., Ruderman, A., Sonnerat, N., Green, T., Deason, L., Leibo, J.Z., Silver, D., Hassabis, D., Kavukcuoglu, K., Graepel, T.: Human-level performance in 3D multiplayer games with population-based reinforcement learning. Science 364, 859–865 (2019)
18. Kelly, S., Banzhaf, W.: Temporal memory sharing in visual reinforcement learning. In: W. Banzhaf, E. Goodman, L. Sheneman, L. Trujillo, B. Worzel (eds.) Genetic Programming Theory and Practice, vol. XVII. Springer (2020)
19. Kelly, S., Heywood, M.I.: Emergent tangled graph representations for Atari game playing agents. In: European Conference on Genetic Programming, LNCS, vol. 10196, pp. 64–79 (2017)
20. Kelly, S., Heywood, M.I.: Multi-task learning in Atari video games with emergent tangled program graphs. In: ACM Genetic and Evolutionary Computation Conference, pp. 195–202 (2017)
21. Kelly, S., Heywood, M.I.: Emergent solutions to high-dimensional multitask reinforcement learning. Evolutionary Computation 26(3), 347–380 (2018)
22. Kelly, S., Smith, R.J., Heywood, M.I.: Emergent policy discovery for visual reinforcement learning through tangled program graphs: A tutorial. In: W. Banzhaf, L. Spector, L. Sheneman (eds.) Genetic Programming Theory and Practice, vol. XVI, chap. 3, pp. 37–57. Springer (2019)
23. Langdon, W.B.: Genetic Programming and Data Structures. Kluwer Academic (1998)
24. Lichodzijewski, P., Heywood, M.I.: Symbiosis, complexification and simplicity under GP. In: Proceedings of the ACM Genetic and Evolutionary Computation Conference, pp. 853–860 (2010)
25. Machado, M.C., Bellemare, M.G., Talvitie, E., Veness, J., Hausknecht, M., Bowling, M.: Revisiting the arcade learning environment: evaluation protocols and open problems for general agents. Journal of Artificial Intelligence Research 61, 523–562 (2018)
26. Merrild, J., Rasmussen, M.A., Risi, S.: HyperNTM: Evolving scalable neural Turing machines through HyperNEAT. In: EvoApplications, pp. 750–766 (2018)
27. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., Hassabis, D.: Human-level control through deep reinforcement learning. Nature 518(7540), 529–533 (2015)
28. Nordin, P.: A compiling genetic programming system that directly manipulates the machine code. In: K.E. Kinnear (ed.) Advances in Genetic Programming, pp. 311–332. MIT Press (1994)
29. Poli, R., McPhee, N.F., Citi, L., Crane, E.: Memory with memory in genetic programming. Journal of Artificial Evolution and Applications (2009)
30. Salimans, T., Ho, J., Chen, X., Sutskever, I.: Evolution strategies as a scalable alternative to reinforcement learning. CoRR abs/1703.03864 (2016)
31. Sapienza, A., Peng, H., Ferrara, E.: Performance dynamics and success in online games. In: IEEE International Conference on Data Mining Workshops, pp. 902–909 (2017)
32. Smith, R.J., Heywood, M.I.: Scaling tangled program graphs to visual reinforcement learning in ViZDoom. In: European Conference on Genetic Programming, LNCS, vol. 10781, pp. 135–150 (2018)
33. Smith, R.J., Heywood, M.I.: Evolving Dota 2 Shadow Fiend bots using genetic programming with external memory. In: Proceedings of the ACM Genetic and Evolutionary Computation Conference (2019)
34. Smith, R.J., Heywood, M.I.: A model of external memory for navigation in partially observable visual reinforcement learning tasks. In: European Conference on Genetic Programming, LNCS, vol. 11451, pp. 162–177 (2019)
35. Spector, L., Luke, S.: Cultural transmission of information in genetic programming. In: Annual Conference on Genetic Programming, pp. 209–214 (1996)
36. Such, F.P., Madhavan, V., Conti, E., Lehman, J., Stanley, K.O., Clune, J.: Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. CoRR abs/1712.06567 (2018)
37. Teller, A.: The evolution of mental models. In: K.E. Kinnear (ed.) Advances in Genetic Programming, pp. 199–220. MIT Press (1994)
38. Teller, A.: Turing completeness in the language of genetic programming with indexed memory. In: IEEE Congress on Evolutionary Computation, pp. 136–141 (1994)
39. Wayne, G., Hung, C.C., Amos, D., Mirza, M., Ahuja, A., Grabska-Barwińska, A., Rae, J., Mirowski, P., Leibo, J.Z., Santoro, A., Gemici, M., Reynolds, M., Harley, T., Abramson, J., Mohamed, S., Rezende, D., Saxton, D., Cain, A., Hillier, C., Silver, D., Kavukcuoglu, K., Botvinick, M., Hassabis, D., Lillicrap, T.: Unsupervised predictive memory in a goal-directed agent. CoRR abs/1803.10760 (2018)
40. Wydmuch, M., Kempka, M., Jaśkowski, W.: ViZDoom competitions: Playing Doom from pixels. IEEE Transactions on Games, to appear (2019)
Chapter 18
Modelling Genetic Programming as a Simple Sampling Algorithm

David R. White, Benjamin Fowler, Wolfgang Banzhaf, and Earl T. Barr
18.1 Introduction

Previous attempts to characterise Genetic Programming (GP) have focused on complex schemata, that is, 'templates' of full GP trees or subtrees composed of fixed positions that specify the presence of a given function from the function set, and other unspecified 'wildcard' positions that indicate any single function or an arbitrary subtree; a schema represents a set of expression trees that share some syntactic characteristics. Unfortunately, the complicated definitions of such schemata have made the resulting theories difficult to test empirically, and GP schema theory has been criticised for its inability to make significant predictions or inform the development of improved algorithms.

We present a dramatically simplified approach to GP schema theory, which considers only the distribution of functions at a single node in the tree, ignoring second-order effects that describe the context within which the node resides: for example, we do not consider parent-child node relationships. Our schemata are thus
D. R. White Department of Physics, University of Sheffield, Sheffield, UK e-mail: [email protected] B. Fowler Department of Computer Science, Memorial University of Newfoundland, St. John’s, NL, Canada e-mail: [email protected] W. Banzhaf () Department of Computer Science and Engineering, Michigan State University, East Lansing, Okemos, MI, USA e-mail: [email protected] E. T. Barr CREST, University College London, London, UK e-mail: [email protected] © Springer Nature Switzerland AG 2020 W. Banzhaf et al. (eds.), Genetic Programming Theory and Practice XVII, Genetic and Evolutionary Computation, https://doi.org/10.1007/978-3-030-39958-0_18
simple fixed-position ‘point schemata’, as illustrated in Fig. 18.1. We show that the behaviour of GP viewed through the sampling of simple schemata is remarkably consistent: GP converges top-down from the root node, with nodes at each level gradually becoming ‘frozen’ to a given function from the function set, in a recursive fashion; an example of such convergence for the root node on one benchmark is illustrated in Fig. 18.2. We model this behaviour as a competition between simple schemata, where the correlation between rank fitness and schema membership determines the ‘winner’ and drives convergence. We present empirical evidence to support our model.
Fig. 18.1 Examples of Simple Single Point Schemata for a function set where the maximum arity is two. A single labelled node indicates a fixed position; all other nodes are unconstrained—provided that they are not on the path from root to the fixed node, a position may not even exist within trees belonging to the schemata. (a) + at Root, (b) / at Root's second child, (c) Terminal x
Fig. 18.2 Example convergence of the root node across a population for the Keijzer-6 benchmark. The root node distribution converges to the Sqrt function. Other runs exhibit similar behaviour
18.2 Rationale for Modelling Simple Schemata

Traditional GP, often referred to as tree-based GP, is a simple evolutionary algorithm that explores the space of expression trees. Algorithm 1 provides pseudocode for canonical tree-based GP using tournament selection. Two key observations from this pseudocode are apparent: (1) the variation operators applied do not consider the semantics of the parents or their children (see also [7]), and (2) the only point at which fitness is considered by the algorithm is in selection via relative fitness, i.e. ranking. We add a third observation from the wider GP literature: (3) GP's strength lies in its versatility: it may easily be applied to a wide variety of problems and demonstrates good performance across a range of problem domains.

Given these observations, any description of how GP searches the space of expression trees is limited to considering (a) syntactic properties of the population and (b) the rank fitness of individuals. Beyond relative fitness within a population, the semantics of individual trees is irrelevant. The building block hypothesis [8] (BBH) conjectures that GP crossover exchanges small subtrees that make above-average contributions to the fitness of an individual, despite GP's disregard for semantic concerns when selecting the location at which a subtree is inserted. That such subtrees should exist seems intuitively unlikely, but far more concerning is the notion that such subtrees should exist across the wide variety of problems GP has been successfully applied to. We are therefore skeptical of the BBH and regard crossover as a constrained macro-mutation operator; this viewpoint is congruent with rigorous empirical studies that have shown crossover to be beneficial on only a limited subset of simple problems [15].
Algorithm 1 Canonical tree-based GP algorithm

Input: pop_size, gens, pxo, pmut
Output: Final population pop
 1: pop ← init_pop(pop_size)
 2: for 1 to gens do
 3:   for p ∈ pop do
 4:     eval(p)
 5:   end for
 6:   next_gen ← {}
 7:   while size(next_gen) < pop_size do
 8:     op ← select_op(xo, mut, pxo, pmut)
 9:     if op == mut then
10:       ind ← tournament_selection(pop, 1)
11:       next_gen ← next_gen ∪ {mutate(ind)}
12:     end if
13:     if op == xo then
14:       ind, ind2 ← tournament_selection(pop, 2)
15:       next_gen ← next_gen ∪ {xo(ind, ind2)}
16:     end if
17:   end while
18:   pop ← next_gen
19: end for
20: return pop
We therefore must consider only syntactic features when formulating a model of GP's behaviour, and furthermore consider only those features that are represented in a substantial number of individuals within the population, arriving at the following possible properties that may form part of a behavioural model for GP:

• Tree size
• Tree shape
• Function frequency per individual
• Functions at positions in the tree (simple schemata)
• Higher-order schemata
Tree size and shape depend on the level of bloat in a population; by definition, bloat does not contribute to solving a particular problem, so we disregard tree size and shape as explanatory factors. Exploratory data analysis of population function frequency showed some interesting behaviour, but it was too coarse-grained to maintain a strong relationship with semantics. We are therefore left with simple and higher-order schemata. Given the limited success of higher-order schema theory, we restrict the discussion to models based on function positions within individual trees.

Our preliminary investigations into the frequency of different functions at the root node showed promise, and also matched earlier results [12], which considered more complex rooted schemata. We retain a simpler definition of point schemata; our definition utilises the grid system introduced by Poli and McPhee [10], which is illustrated in Fig. 18.3. Using this coordinate system, we can formally define a simple schema:

Fig. 18.3 Grid system for laying out expression trees, based on Poli and McPhee [10]. The example represents the expression tree x + √(xy)
Definition A simple GP schema is a tuple (d, x, f ) where d is the depth within a tree (root is defined as depth 0), x is the ‘x index’ identifying a node at that particular depth as in [10], and f is a function from the function set. Examples of simple schemata are given in Fig. 18.1.
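For concreteness, a schema and its membership test can be sketched as follows; this is our code, and we assume trees are stored as a map from grid coordinates to function labels.

from typing import NamedTuple

class Schema(NamedTuple):
    d: int  # depth within the tree (root = 0)
    x: int  # x index at that depth, per the grid of Poli and McPhee [10]
    f: str  # function from the function set

def member(schema: Schema, tree: dict) -> bool:
    # A missing (d, x) key means the tree's shape has no node at that
    # position, so the tree cannot belong to the schema.
    return tree.get((schema.d, schema.x)) == schema.f

# Example: a tree with + at the root (Fig. 18.1a) belongs to Schema(0, 0, "+")
tree = {(0, 0): "+", (1, 0): "x", (1, 1): "y"}
print(member(Schema(0, 0, "+"), tree))  # True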
18.3 Modelling GP

We are interested in modelling the frequency of simple schemata within the population; this is a heritable trait and can therefore be modelled using Price's covariance and selection theorem [11], which gives the expected change in a trait within a population:

$$\Delta z = \frac{1}{\bar{w}}\,\mathrm{cov}(w_i, z_i) + \frac{1}{\bar{w}}\,E(w_i\,\Delta z_i) \qquad (18.1)$$
Here, z is the prevalence of a trait within the population, Δz is its expected change, w̄ is the average number of children per individual, z_i is the amount of the trait within individual i (in our case, 1 if the individual belongs to a schema, and 0 otherwise), and w_i is the number of children of individual i. As our population size is constant, the average fitness (number of children) of an individual is always one, and thus the 1/w̄ terms may be discarded. In our case, z is the frequency of schema membership, i.e., the proportion of individuals belonging to a given schema.

The expected change in trait z over the population depends on two terms: the change due to selection and the change due to transmission. The change due to selection is determined by the covariance between possession of the trait and the fitness that propagates the trait into the next generation, whilst the change due to transmission models the constructive and destructive effects of genetic operators in GP, which may add or remove an individual's schema membership.
18.3.1 Change in Schema Prevalence Due to Selection

We first consider change due to selection. We model only tournament selection, a common selection method, which uses relative (ranked) fitness to select parents. After fitness evaluation, each individual is assigned a rank r_1 ... r_N, where r_1 is the best individual and r_N is the worst. As given by Blickle and Thiele [2], the probability p_s(r_i) of selecting an individual of rank r_i within a population of size n under a tournament of size k is:

$$p_s(r_i) = \frac{(n - i + 1)^k - (n - i)^k}{n^k} \qquad (18.2)$$
Any selected individual has a single child belonging to the same schema, modulo the opportunity for disruption due to the genetic operators as described below. We define member(h, r_i) to be 1 if the individual ranked r_i is a member of schema h, and 0 otherwise. We therefore rewrite Eq. (18.1), inserting our function and notation for schema membership:

$$\Delta|h| = \mathrm{cov}(p_s(r_i), \mathrm{member}(h, r_i)) + E(w_i\,\Delta z_i) \qquad (18.3)$$
We can precisely calculate this term in practice, although we require full knowledge of schema membership for each fitness rank. It also represents an expectation, and errors may accumulate if predicting membership several generations ahead: indeed, stochastic variation in the early stages of a run may sometimes push the population away from convergence to the most highly fit schemata and towards secondary ones. However, such an equation is still useful if it accurately reflects the mechanism that underlies GP, as we may use it to improve on current algorithms.
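A sketch of computing the selection term of Eq. (18.3) for a rank-ordered population, using Eq. (18.2) for the tournament selection probabilities (our code; the tournament size k is a free parameter):

import numpy as np

def tournament_probs(n, k):
    # Eq. (18.2): probability of selecting the individual of rank i (1 = best)
    # in a single tournament of size k over a population of size n.
    i = np.arange(1, n + 1)
    return ((n - i + 1) ** k - (n - i) ** k) / n ** k

def selection_term(membership, k=4):
    # Selection term of Eq. (18.3): cov(p_s(r_i), member(h, r_i)), where
    # membership[i] = 1 if the individual of rank i+1 belongs to schema h.
    ps = tournament_probs(len(membership), k)
    return np.cov(ps, np.asarray(membership, dtype=float), bias=True)[0, 1]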
18.3.2 Change in Schema Prevalence Due to Operators

To model the potential change due to transmission, we must model crossover and mutation. If no crossover or mutation occurs, then the transmission probability is 1 and the case is closed; this is simple reproduction of the parent, and is omitted from Algorithm 1 as we do not use it.

First, consider disruption. A single-node schema can only be disrupted if the crossover or mutation point selected is above the node. Given a schema (d, x, f), there are only d − 1 such points in the tree, i.e. the potential for disruption occurs with probability ≤ d/size(tree); otherwise disruption will certainly not occur. Assuming the selection of a disruptive node for crossover or mutation, there are three possible outcomes: (1) the node is replaced with another node of the same type, preserving the schema; (2) the node is replaced with another node of a different type; (3) the node is removed from the tree due to a change in tree shape. The only possibility that preserves the schema is situation (1), and for it to occur two conditions are required: first, the inserted subtree must be of a shape such that the node exists in the new tree and, second, the node inserted at position (d, x) must be of type f.

If we assume that the distribution of different functions f out of the function set F is roughly uniform in the population (we can alternatively make a slightly weaker but more complicated assumption with the same conclusion), then the probability of disrupting the schema, given the node exists in the donating tree, is approximately (|F| − 1)/|F|. This probability is therefore close to one, and is actually an optimistic lower bound: leaf nodes are disproportionately likely to be inserted, increasing this probability further. Given that the probability of inserting a suitable subtree that will preserve a schema is already small, particularly as d grows (because inserted subtrees are disproportionately small), we therefore make a simplifying assumption: that the probability of disruption to the schema given an insertion point between the root and our defining node is approximately one, and therefore the probability of disruption is ≈ d/size(tree).

Given growth in tree size we can see why convergence close to the root will occur: the probability of disrupting a schema is low and the selection term in Eq. (18.3) will dominate, particularly as average tree size increases. A similar argument can be used for the constructive case: one of the (d − 1)/size(tree) nodes must be selected for insertion, and in this case some trees may not contain a node at that position, so this is an optimistic upper bound. Given that such an insertion point is selected, we must then select for insertion at that point a subtree deep enough, and with the correct shape, to ensure the node exists within the schema, and the inserted node must be of the correct type. Again, as average tree size increases the probability of construction rapidly diminishes, and we therefore assume the probability of construction approaches zero as d increases. We conjecture that this is the reason GP systems do not exhibit long-term learning.
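These approximations reduce to simple ratios; a sketch (our own restatement of the bounds above):

def p_disrupt(d, tree_size):
    # Assumed disruption probability: a crossover/mutation point above the
    # defining node (approximately d of them) is chosen uniformly, and any
    # such choice is taken to disrupt the schema.
    return min(1.0, d / tree_size)

def p_construct_bound(d, tree_size):
    # Optimistic upper bound on construction: an insertion point above the
    # node must be chosen; shape and node-type constraints only lower this.
    return max(0.0, (d - 1) / tree_size)

# As bloat increases tree_size, both terms shrink, so the selection term
# of Eq. (18.3) dominates, particularly near the root (small d).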
18.4 Empirical Data Supporting the Model

How does GP behave with respect to these schemata? We examined a set of three benchmarks, taken from [14] and given in Table 18.1, and plotted the convergence behaviour of these schemata. Convergence plots for various schemata are shown in Figs. 18.4, 18.5, 18.6, 18.7, 18.8, and 18.9. Similar results were found across the deeper layers of the tree, with a large proportion converging to no function, i.e. a tree shape that does not involve nodes at position (d, x).

There were two main exceptions to this convergence behaviour. Firstly, when the sample size for a particular (d, x) coordinate was small, convergence may not be seen for that position. This was particularly likely to occur deep within the node grid and for problems with large node arities, because a given row d in the tree grid has k^d x indices, where k is the greatest arity of functions in the function set. The more significant exception was the lower levels of the tree in general, where crossover was more likely to impact convergence: crossover was exponentially more likely to impact a schema at a lower position within the tree than at a higher one, and thus there was a counterbalance to the convergence resulting from rank-based selection. However, the average impact of a node deep in the tree in terms of overall semantics was low, so we did not consider it within our modelling.
Table 18.1 Benchmarks used

Benchmark | Target equation | Paper
Keijzer-6 | $\sum_{i=1}^{x} \frac{1}{i}$ | Keijzer [3]
Korns-12 | $2 - 2.1\cos(9.8x)\sin(1.3w)$ | Korns [4]
Vladislavleva-4 | $\frac{10}{5 + \sum_{i=1}^{5}(x_i - 3)^2}$ | Vladislavleva et al. [13]
Fig. 18.4 Node frequency for Keijzer6 d = 1, x = 0
Fig. 18.5 Node frequency for Korns12 d = 0, x = 0
Fig. 18.6 Node frequency for Korns12 d = 4, x = 0
Fig. 18.7 Node frequency for Vladislavleva4 d = 0, x = 0
Fig. 18.8 Node frequency for Vladislavleva4 d = 1, x = 1
Fig. 18.9 Node frequency for Vladislavleva4 d = 3, x = 3
Fig. 18.10 Convergence across many runs for the Keijzer6 root node. Runs can be driven towards a particular node based on the sample achieved. A handful of functions dominate each position, in this case ADD and SQRT
Figure 18.10 shows an example of convergence behaviour across runs, where each line is the 'winning' function from a run. It was often the case that two functions demonstrated strong covariance and, due to the stochastic nature of GP, convergence to any given function was not certain, but biased towards those with high covariance.

Does covariance between schema membership and rank fitness drive convergence, i.e. is our model accurate? We plot graphs showing both convergence and covariance in Figs. 18.11, 18.12, and 18.13. Schema frequency and covariance curves are colour-matched, with the dashed lines indicating the covariance (right-hand axis). Clear spikes in covariance can be seen prior to population convergence. Note that if we were to anticipate convergence, the best predictor would be covariance, a signal which is clear far in advance of the rising edge of the convergence curve; this suggests it may be possible to exploit covariance measures to accelerate convergence and potentially improve the performance of GP. Note also that over time the clear covariance signal disappears, with covariance tending toward 0. Figure 18.13 shows that after a brief spike in covariance and a subsequent spike in node frequency, all frequencies went to 0 in this example, indicating that node (2,2) is unoccupied.
Fig. 18.11 Node frequency (lines, left scale) and covariance (dashed lines, right scale) for Keijzer6 root node
Fig. 18.12 Node frequency (lines, left scale) and covariance (dashed lines, right scale) for Keijzer6 d = 1, x = 0
Fig. 18.13 Node frequency (lines, left scale) and covariance (dashed lines, right scale) for Keijzer6 d = 2, x = 2
18.5 Ways to Improve GP

Based on our model of how GP searches the solution space, we propose a possible improvement to GP: a point schema should be "frozen" if high covariance is found between a given root-node schema and rank fitness. Freezing this node can be achieved by assigning the worst fitness to all individuals in the population that do not belong to that schema, forcing convergence (see the sketch below). After a new generation has been produced and evaluated, we can check to see if any function dominates the schema: it must dominate both in terms of being substantial in number (e.g. >20% of the population size) and exhibit high covariance. Alternatively, we can simply replace nodes at coordinate (d, x) in all trees with the one we want to freeze (which should have the highest covariance with fitness), and then continue the run.

An even more sophisticated alternative is to replace GP entirely, using a sampling algorithm that descends from the root and systematically samples each schema in a 'racing' algorithm, selecting the node in a particular location that exhibits the highest covariance. These proposals require more theoretical and empirical work, and we anticipate a need for careful comparison with existing algorithms and across more benchmark problems, all of which is beyond the scope of this chapter but will be the subject of a future investigation.
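As an illustration of the first proposal, freezing might be implemented as a fitness filter. The sketch below is our own; the population interface (ind.schemata) and the thresholds are illustrative assumptions, not an implementation from this chapter.

from statistics import covariance  # Python 3.10+

def freeze_filter(population, rank_probs, fitnesses, schema, worst_fitness,
                  min_share=0.2, min_cov=0.05):
    # population: rank-ordered individuals, each exposing a set of (d, x, f)
    # schemata; rank_probs: tournament selection probabilities per rank
    # (Eq. 18.2). Thresholds here are placeholders.
    members = [1.0 if schema in ind.schemata else 0.0 for ind in population]
    share = sum(members) / len(members)
    # Freeze only when the schema is both numerous and strongly covariant
    # with rank fitness.
    if share > min_share and covariance(rank_probs, members) > min_cov:
        # Non-members receive the worst fitness, so selection removes them.
        return [f if m else worst_fitness
                for f, m in zip(fitnesses, members)]
    return fitnesses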
18.6 Related Work

There are two areas of work related to our modelling: the development of schema theories for GP, and the empirical examination of the role of crossover. One of the motivating results for this work is the rigorous evaluation of crossover in GP [15], which found crossover was not greatly beneficial to the search. That work was itself based on early controversial and influential work by Luke and Spector [5, 6]. These papers cast doubt on the building block hypothesis and the utility of crossover in reassembling solutions, which was—at the time—the predominant search operator used by GP researchers. Other early work came to the conclusion that both mutation and crossover operators contribute to the success of GP runs and that it is beneficial to apply both operators with at least 5% probability, at least in linear GP [1].

GP schema theory has a long and varied history, with many different approaches proposed by Rosca [12], O'Reilly [9], and Poli and McPhee [10]; the latter included a multi-paper exposition on schema theory for general crossover, and introduced the grid system used in this chapter.
18.7 Conclusion

In this chapter, we have demonstrated how GP can be modelled using a simple single-node schema definition, and that empirical results show convergence throughout the population across benchmarks and experimental runs. In our model, crossover and mutation in GP act simply as "sampling" operators, in agreement with previous empirical experimentation. As the sampling occurs disproportionately towards the bottom of the tree, the driving force in GP is selection, making convergence inevitable. We propose modifying GP based on this model.

This work is incomplete: in particular, we need to examine convergence statistically across every node in a tree's grid; more benchmarks, particularly non-symbolic-expression benchmarks, should be examined; and a full covariance-based algorithm should be implemented—either as a modification to GP, or else as a re-imagined sampling algorithm. Future work includes testing this model in other ways: for example, modifying genetic operators to manipulate the disruption and construction of simple schemata, and confirming that the results agree with prediction.

Acknowledgements WB acknowledges funding from the Koza Endowment provided by MSU.
References

1. Banzhaf, W., Francone, F.D., Nordin, P.: The effect of extensive use of the mutation operator on generalization in genetic programming using sparse data sets. In: H.M. Voigt, W. Ebeling, I. Rechenberg, H.P. Schwefel (eds.) International Conference on Parallel Problem Solving from Nature (PPSN-96), pp. 300–309. Springer (1996)
2. Blickle, T., Thiele, L.: A mathematical analysis of tournament selection. In: L. Eshelman (ed.) International Conference on Genetic Algorithms (ICGA-95), pp. 9–16. Morgan Kaufmann, San Francisco (1995)
3. Keijzer, M.: Improving symbolic regression with interval arithmetic and linear scaling. In: C. Ryan, T. Soule, M. Keijzer, E. Tsang, R. Poli, E. Costa (eds.) European Conference on Genetic Programming, EuroGP 2003, pp. 70–82. Springer, Berlin, Heidelberg (2003)
4. Korns, M.F.: Accuracy in symbolic regression. In: R. Riolo, E. Vladislavleva, J. Moore (eds.) Genetic Programming Theory and Practice IX, pp. 129–151 (2011)
5. Luke, S., Spector, L.: A comparison of crossover and mutation in genetic programming. In: European Conference on Genetic Programming, pp. 240–248. Springer (1997)
6. Luke, S., Spector, L.: A revised comparison of crossover and mutation in genetic programming. In: European Conference on Genetic Programming, pp. 208–213. Springer (1998)
7. Moraglio, A., Krawiec, K., Johnson, C.G.: Geometric semantic genetic programming. In: International Conference on Parallel Problem Solving from Nature, pp. 21–31. Springer (2012)
8. O'Reilly, U.M., Oppacher, F.: Using building block functions to investigate a building block hypothesis for genetic programming. Santa Fe Institute, Santa Fe, NM, Working Paper 94-02 (1994)
9. O'Reilly, U.M., Oppacher, F.: The troubling aspects of a building block hypothesis for genetic programming. In: Foundations of Genetic Algorithms (FOGA-95), vol. 3, pp. 73–88. Elsevier (1995)
10. Poli, R., McPhee, N.F.: General schema theory for genetic programming with subtree-swapping crossover: Part I. Evolutionary Computation 11(1), 53–66 (2003)
11. Price, G.R.: Selection and covariance. Nature 227, 520–521 (1970)
12. Rosca, J.P., Ballard, D.H.: Rooted-tree schemata in genetic programming. In: K.E. Kinnear, W.B. Langdon, L. Spector, P.J. Angeline, U.M. O'Reilly (eds.) Advances in Genetic Programming, Vol 3, pp. 243–271 (1999)
13. Vladislavleva, E.J., Smits, G.F., Den Hertog, D.: Order of nonlinearity as a complexity measure for models generated by symbolic regression via Pareto genetic programming. IEEE Transactions on Evolutionary Computation 13, 333–349 (2008)
14. White, D.R., McDermott, J., Castelli, M., Manzoni, L., Goldman, B.W., Kronberger, G., Jaśkowski, W., O'Reilly, U.M., Luke, S.: Better GP benchmarks: community survey results and proposals. Genetic Programming and Evolvable Machines 14(1), 3–29 (2013)
15. White, D.R., Poulding, S.: A rigorous evaluation of crossover and mutation in genetic programming. In: European Conference on Genetic Programming, pp. 220–231. Springer (2009)
Chapter 19
An Evolutionary System for Better Automatic Software Repair

Yuan Yuan and Wolfgang Banzhaf
19.1 Introduction

Automatic software repair [13, 39, 49] aims to fix bugs in software automatically, generally relying on a specification. When a test suite is considered as the specification, the paradigm is called test-suite based repair [39]. The test suite should contain at least one negative (i.e., initially failing) test that triggers the bug to be fixed and a number of positive (i.e., initially passing) tests that define the expected program behavior. In terms of test-suite based repair, a bug is regarded as fixed or repaired if a created patch makes the entire test suite pass. Such a patch is referred to as a test-adequate patch [33] or a plausible patch [44].

Evolutionary repair approaches [49] are a popular category of techniques for test-suite based repair. These approaches determine a search space potentially containing correct patches, then use evolutionary computation (EC) techniques, particularly genetic programming (GP) [2, 4, 21], to explore that search space. A major characteristic of evolutionary repair approaches is their high potential to fix multi-location bugs, since GP can manipulate multiple likely faulty locations at a time. However, GenProg [12, 25, 27, 51], the most well-known approach of this kind, does not fulfill this potential in multi-location bug fixing according to large-scale empirical studies [33, 44], partly due to the search ability of its underlying GP [42, 44, 57]. To tackle this issue, our previous work introduced ARJA [57], which
Y. Yuan () Department of Computer Science and Engineering & Beacon Center, Michigan State University, East Lansing, MI, USA e-mail: [email protected]; [email protected] W. Banzhaf BEACON Center for the Study of Evolution in Action and Department of Computer Science and Engineering, Michigan State University, East Lansing, MI, USA e-mail: [email protected] © Springer Nature Switzerland AG 2020 W. Banzhaf et al. (eds.), Genetic Programming Theory and Practice XVII, Genetic and Evolutionary Computation, https://doi.org/10.1007/978-3-030-39958-0_19
uses a novel multi-objective GP approach with better search ability to explore the search space. Although ARJA has achieved much improved performance and also demonstrated its strength in multi-location repair, major challenges [26] still remain for evolutionary software repair.

The first challenge is how to construct a reasonable search space that is more likely to contain correct patches. In this respect, GenProg and ARJA exploit the statement-level redundancy assumption [36] (also called the plastic surgery hypothesis [3]). That is, they only conduct statement-level changes and use existing statements in the buggy program for replacement or insertion. The problem here is that fix statements randomly excerpted from somewhere in the current buggy program may have little pertinence to the likely-buggy statement to be manipulated. Due to this problem, GenProg usually generates patches overfitting the test suite, or even fails to fix a bug. To relieve this issue, Kim et al. [20] proposed PAR, which exploits repair templates to produce program variants. Each template specifies one type of program transformation and is derived from common fix patterns (e.g., adding a null-pointer check for an object reference) manually learned from human-written patches. Compared to GenProg, PAR usually works in a more promising search space, since the program transformations performed by PAR are more targeted. Nevertheless, as can be inferred from the results in [57], redundancy-based approaches can indeed fix some bugs that cannot be fixed by typical template-based approaches (e.g., PAR and ELIXIR [46]), which implies that combining the redundancy assumption and repair templates to generate fix statements could further improve repair effectiveness.

The second challenge is how to design a search algorithm that can navigate the search space more effectively. The combination of the statement-level redundancy assumption and repair templates leads to a much larger search space, thereby making this challenge more serious. Recent studies [42, 57] have indicated that, compared to using GenProg's patch representation, using a lower-granularity patch representation that decouples the partial information of an edit can significantly improve the search ability of GP in bug repair. However, such representations are specially designed for statement-level edits and cannot be directly used for template-based edits (which usually occur at the expression level). Besides the patch representation, the fitness function is another important factor that influences the search ability of GP. In existing evolutionary repair approaches, the fitness function is generally defined based on how many test cases a patched program passes. However, this kind of fitness function can only provide a binary signal (i.e., passed or failed) for a test case and cannot measure how close a modified program is to passing a test case. In consequence, there may be a large number of plateaus in the search space [11, 26, 44], thereby trapping GP.

The third challenge is how to alleviate patch overfitting [47]. Evolutionary repair approaches can usually find a number of plausible patches within a computing budget. But most of these patches may be incorrect in general, by just overfitting the given test suite. To pick correct patches more easily, it is necessary to include a post-processing step for these approaches, which can filter out incorrect patches (i.e., overfit detection) or rank the plausible patches found (i.e., patch ranking).
However, almost all existing evolutionary repair systems, including GenProg, PAR, and ARJA, do not implement such a step. In this chapter, we describe ARJA-e, a new evolutionary repair system for Java programs that aims to address the above three challenges. To determine a search space that is more likely to contain correct patches, ARJA-e combines two sources of fix ingredients (i.e., the statement-level redundancy assumption and repair templates) with contextual-analysis-based search space reduction, thereby leveraging their complementary strengths. To encode patches in GP more properly, ARJA-e unifies the edits at different granularities into statement-level edits, and then uses a new lower-granularity patch representation characterized by the decoupling of statements for replacement from statements for insertion. Furthermore, ARJA-e uses a finer-grained fitness function that makes full use of the semantic information contained in the test suite, which is expected to better guide the search of GP. To alleviate patch overfitting, ARJA-e includes a post-processing tool that serves the purposes of overfit detection and patch ranking.
19.2 Background and Motivation

19.2.1 Related Work

Our system belongs to the class of evolutionary repair approaches, which explore a repair search space using evolutionary algorithms. GenProg [25, 27], PAR [20], GenProg with anti-patterns [48] and ARJA [57] all fall into this category; their basic ideas have been described in Sect. 19.1. ARJA-e organically combines the characteristic components of all these approaches, making it distinctly different from any of them. Several approaches employ other kinds of search algorithms instead of EAs to traverse GenProg's search space (e.g., RSRepair [43] uses random search and AE [50] uses an adaptive search strategy). Inspired by the idea of using templates [20], some repair approaches (e.g., SPR [31] and ELIXIR [46]) employ a set of richer templates (or code transformations) that are defined manually. Genesis [30] aims to infer such code transformations automatically from successful patches, and Cardumen [35] mines repair templates from the program under repair. Similar to these approaches, ARJA-e uses templates extended and enhanced from those in PAR. Beyond the current buggy program and its associated test suite, some approaches exploit other information to help the repair process. HDRepair [24] uses mined historical bug fixes to guide its random search, ACS [55] uses the information in Javadoc comments to rank variables, and SearchRepair [19] and ssFix [53] both use existing code from an external code database to find potential repairs. A number of existing approaches infer semantic specifications from the test cases and then use program synthesis to generate a repair that satisfies the inferred specifications; these are usually categorized as semantics-based approaches. SemFix [41] is a pioneer in this category. Other typical approaches of this kind include
DirectFix [37], QLOSE [8], Angelix [38], Nopol [56], JFix [22] and S3 [23]. Recently, machine learning techniques have been used in software repair. Prophet [32] uses a probabilistic model to rank the candidate patches over the search space of SPR. DeepFix [14] uses deep learning to fix common programming errors.
19.2.2 Motivating Examples

In this subsection, we take real bugs as examples to illustrate the key insights motivating the design of ARJA-e. Figure 19.1 shows the human-written patch for bug Math85 from the Defects4J [18] dataset. To correctly fix this bug, only a slight modification is required (i.e., changing >= to >), as shown in Fig. 19.1. However, redundancy-based approaches (e.g., GenProg [25, 27], RSRepair [43] and AE [50]) usually cannot find a correct patch for this bug, since the fix statement used for replacement (i.e., if (fa * fb > 0){...}) or a semantically equivalent one does not happen to appear elsewhere in the buggy program. In contrast, some template-based approaches (e.g., jMutRepair [10, 34] and ELIXIR [46]) are very likely to fix the bug correctly, since changing infix boolean operators is a specified repair action in such approaches. In addition, GenProg can easily overfit the given test suite [44] by deleting the whole buggy if statement if (fa * fb >= 0){...}, leading to a plausible but incorrect patch.

Figure 19.2 shows the human-written patch for bug Math39 from Defects4J. To correctly repair the bug, an if statement with relatively complex control logic should be inserted before the buggy code, as shown in Fig. 19.2. However, for approaches based only on repair templates, this bug is hard to fix correctly, because the fix does not belong to a common fix pattern and is difficult to encode with templates. In contrast, approaches that exploit the redundancy assumption can potentially fix it, because a statement that is semantically equivalent to the one inserted by the human developers happens to exist elsewhere in the buggy program.
1 2 3 4
public static double [] bracket (...) { ... − if ( fa ∗ fb >= 0.0) { + if ( fa ∗ fb > 0.0) { throw new ConvergenceException (...) ; }
... }
Fig. 19.1 The human-written patch for bug Math85 1 2 3 4 5 6
public void integrate(...) throws ... {
    ...
+   if (forward) {
+       if (stepStart + stepSize >= t) {
            stepSize = t - stepStart;
        }
+   } else {
+       if (stepStart + stepSize <= t) {
            stepSize = t - stepStart;
        }
+   }
-   if ((forward && (stepStart + stepSize > t)) ||
-       ((!forward) && (stepStart + stepSize < t))) {
-       stepSize = t - stepStart;
-   }
    ...
}

Fig. 19.2 The human-written patch for bug Math39
From the above examples, it can be seen that redundancy-based and template-based approaches potentially have complementary strengths in bug fixing. We therefore aim to combine the statement-level redundancy assumption and repair templates to generate potential fix ingredients. Such a combination leads to a much larger search space, posing a great challenge to the search algorithm, so we also introduce several strategies to properly reduce the search space, and we enhance the search algorithm with a new lower-granularity patch representation.
19.3 Overview of ARJA-e

The input of ARJA-e is a buggy program associated with a JUnit test suite, and ARJA-e aims to make all the test cases pass by modifying the buggy program. First, we use the fault localization technique Ochiai [5] to select the n likely-buggy statements (LBSs) with the highest suspiciousness. For the j-th LBS, we determine three sets, denoted R_j, I_j and O_j: R_j is the set of statements that can be used to replace the LBS, I_j is the set of statements that can be inserted before the LBS, and O_j is a subset of the three operation types "delete", "replace" and "insert". To favor simpler patches, we use multi-objective GP to explore the determined search space, guided by a finer-grained fitness function. Through evolutionary search, ARJA-e can usually find a number of plausible patches. However, many of these patches may overfit the test suite and thereby be incorrect. To alleviate the patch overfitting issue, we develop a post-processing tool that can identify overfitting patches and rank the plausible patches found by ARJA-e. In the following sections, we detail how to shape the search space (i.e., determine R_j, I_j and O_j; see Sect. 19.4), how to conduct multi-objective search (see Sect. 19.5), and how to alleviate patch overfitting (see Sect. 19.6).
19.4 Shaping the Search Space

19.4.1 Exploiting the Statement-Level Redundancy Assumption

For each selected LBS, we first collect the statements within the package where the LBS resides, and then discard those statements that are not in scope at the location of the LBS or that violate the compiler constraints. For each of the remaining statements
(denoted by s), we further check the program context. Our insight is that if replacing the LBS with s is a promising manipulation, s should generally exhibit a certain similarity to the LBS; and if it is potentially useful to insert s before the LBS, s should generally have a certain relevance to the context surrounding the LBS. In the following, we describe how to quantify this similarity and relevance. Suppose V_s and V_LBS are the sets of variables (including local variables and fields) used by s and by the LBS, respectively. We define the similarity between s and the LBS as the Jaccard similarity coefficient between the sets V_s and V_LBS:

$$\mathrm{sim}(s, \mathrm{LBS}) = \frac{|V_{\mathrm{LBS}} \cap V_s|}{|V_{\mathrm{LBS}} \cup V_s|} \tag{19.1}$$
Note that when collecting the fields used by a statement, we also consider the fields accessed by invoking the methods in the current class. In the method where the LBS resides, suppose V_bef and V_aft are the sets of variables used by the k statements before and after the LBS, respectively, where k is set to 5 by default. We define the relevance of s to the context of the LBS as follows:

$$\mathrm{rel}(s, \mathrm{LBS}) = \frac{1}{2}\left(\frac{|V_s \cap V_{\mathrm{bef}}|}{|V_s|} + \frac{|V_s \cap V_{\mathrm{aft}}|}{|V_s|}\right) \tag{19.2}$$
Eq. (19.2) averages the fractions of the variables in V_s that are covered by V_bef and by V_aft. If |V_LBS ∪ V_s| = 0, sim(s, LBS) is set to 1, and if |V_s| = 0, rel(s, LBS) is set to 0; hence sim(s, LBS) ∈ [0, 1] and rel(s, LBS) ∈ [0, 1]. A statement s is put into R_j (the set of candidate statements for replacement) only when sim(s, LBS) > β_sim, and into I_j (the set of candidate statements for insertion) only when rel(s, LBS) > β_rel, where β_sim and β_rel are predetermined threshold parameters.
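To make the two metrics concrete, the following is a minimal, self-contained Java sketch of Eqs. (19.1) and (19.2), including the special cases for empty sets; the class name, method names and demo variable sets are all hypothetical, not ARJA-e's actual implementation.

import java.util.HashSet;
import java.util.Set;

public class ContextMetrics {

    // Jaccard similarity of Eq. (19.1); returns 1 when both sets are empty.
    static double sim(Set<String> vLbs, Set<String> vS) {
        Set<String> union = new HashSet<>(vLbs);
        union.addAll(vS);
        if (union.isEmpty()) return 1.0;
        Set<String> inter = new HashSet<>(vLbs);
        inter.retainAll(vS);
        return (double) inter.size() / union.size();
    }

    // Contextual relevance of Eq. (19.2); returns 0 when vS is empty.
    static double rel(Set<String> vS, Set<String> vBef, Set<String> vAft) {
        if (vS.isEmpty()) return 0.0;
        return 0.5 * (overlap(vS, vBef) + overlap(vS, vAft));
    }

    // Fraction of the variables in vS that also appear in the context set.
    private static double overlap(Set<String> vS, Set<String> context) {
        Set<String> inter = new HashSet<>(vS);
        inter.retainAll(context);
        return (double) inter.size() / vS.size();
    }

    public static void main(String[] args) {
        Set<String> vLbs = Set.of("fa", "fb");        // variables used by the LBS
        Set<String> vS = Set.of("fa", "fb", "eps");   // variables used by candidate s
        System.out.println(sim(vLbs, vS));            // 2/3
        System.out.println(rel(vS, Set.of("fa"), Set.of("eps", "t"))); // 0.5 * (1/3 + 1/3)
    }
}

A candidate s would then enter R_j only if sim exceeds β_sim, and I_j only if rel exceeds β_rel.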
19.4.2 Exploiting Repair Templates

In ARJA-e, we also use seven repair templates to manipulate the LBS, mainly extended from the templates used in PAR; they are described in Table 19.1. Template ER replaces an abstract syntax tree (AST) node element in a LBS with another compatible one. Table 19.2 lists the elements that can be replaced and shows the alternative replacers for each kind of element. This template generalizes the "Parameter Replacer" and "Method Replacer" templates used in PAR. Several replacement rules are inspired by recent template-based approaches: for example, replacing a primitive type with a widened type follows ELIXIR [46], and replacing x with f(x) follows the transformation schema in REFAZER [45]. Unlike PAR, which applies templates on-the-fly (i.e., during the evolutionary process), ARJA-e executes the seven repair templates in an offline manner.
Table 19.1 The description of repair templates used in this study

| No. | Template name | Description |
|-----|---------------|-------------|
| 1 | Null pointer checker (NPC) | Add an if statement before a LBS to check whether any object reference in this LBS is null |
| 2 | Range checker (RC) | Add an if statement before a LBS to check whether any array or list element access in this LBS exceeds the upper or lower bound |
| 3 | Cast checker (CC) | Add an if statement before a LBS to assure that the variable or expression to be converted in this LBS is an instance of the casting type |
| 4 | Divide-by-zero checker (DC) | Add an if statement before a LBS to check whether any divisor in this LBS is 0 |
| 5 | Method parameter adjuster (MPA) | Add, remove or reorder the method parameters in a LBS if this method has overloaded methods |
| 6 | Boolean expression adder or remover (BEAR) | For a condition branch (e.g., if), add a term to its predicate (with && or ||), or remove a term from its predicate |
| 7 | Element replacer (ER) | Replace an AST node element (e.g., variable or method name) in a LBS with another one of compatible type |

Table 19.2 List of replacement rules for different elements

| Element | Format | Replacer |
|---------|--------|----------|
| Variable | x | (1) A visible field or local variable with compatible type; (2) a compatible method invocation of the form f() or f(x) |
| Field access | e.g., this.a | The same as above |
| Qualified name | a.b | The same as above |
| Method name | f(...) | The name of another visible method with compatible parameter and return types |
| Primitive type | e.g., int | A widened type, e.g., float to double |
| Boolean literal | true or false | The opposite boolean value |
| Number literal | e.g., 1 or 0.5 | Another number literal located in the same method |
| Infix operator | e.g., + or > | A compatible infix operator, e.g., + to -, > to >= |
| Prefix/postfix operator | e.g., ++ | The opposite prefix/postfix operator, e.g., ++ to -- |
| Assignment operator | e.g., += | The opposite assignment operator, e.g., += to -= |
| Conditional expression | a ? b : c | b or c |
Fig. 19.3 Illustration of the offline execution of templates
Specifically, we perform all the possible transformations defined by the templates for each LBS before searching for patches. Each LBS thereby derives a number of new statements, each of which can either replace the LBS or be inserted before it. In this way, various template-based edits (usually at the expression level) are abstracted into two types of statement-level edits (replacement and insertion), and the derived statements for replacement and insertion are added into R_j and I_j, respectively. For the LBS a.callX(), Fig. 19.3 illustrates how ARJA-e exploits the templates. Note that we do not apply the similarity and relevance checks of Sect. 19.4.1 here, since the statements generated by templates are already highly targeted. Moreover, we only apply a template to a single AST node at a time, to avoid combinatorial explosion; for example, we do not simultaneously modify a and callX in a.callX() using the template ER.
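As a toy illustration of the template mechanism, the following sketch instantiates the NPC template for the LBS a.callX(); note that the real system transforms AST nodes rather than strings, so the applyNpc helper below is purely hypothetical.

public class NpcTemplateDemo {

    // Wraps a statement in a null-pointer check for one of its object references.
    // A real implementation would enumerate object references from the AST.
    static String applyNpc(String statement, String objectRef) {
        return "if (" + objectRef + " != null) { " + statement + " }";
    }

    public static void main(String[] args) {
        String lbs = "a.callX();";
        // The derived statement can replace the LBS or be inserted before it.
        System.out.println(applyNpc(lbs, "a"));
        // prints: if (a != null) { a.callX(); }
    }
}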
19.4.3 Initialization of Operation Types

The deletion operation must be handled carefully, because it can easily (1) cause a compiler error in the modified code, or (2) generate overfitting patches [44]. To address the first problem, we use the two rules defined in [57]: if a LBS is a variable declaration statement, or a return/throw statement that is the last statement of a method not declared void, we disable the deletion operation for this LBS. To address the second problem, we use the five anti-delete patterns defined in [48]: if a LBS matches any of these patterns, we disable the deletion operation. For example, according to one of the anti-delete patterns, if a LBS is a control statement (e.g., an if statement or a loop), deleting the LBS is disallowed.
19.5 Multi-Objective Evolution of Patches

19.5.1 Patch Representation

To encode a patch as a genome in GP, we first number the LBSs and the elements in R_j, I_j and O_j, starting from 1, where j ∈ {1, 2, ..., n}; all IDs are fixed throughout the search. A solution (i.e., a patch) to the program repair problem is encoded as x = (b, u, p, q), which contains four parts, each being a vector of size n. In the solution x, b_j ∈ {0, 1} indicates whether the j-th LBS is to be edited; u_j ∈ {1, 2, ..., |O_j|} indicates that the u_j-th operation type in O_j is used for the j-th LBS; p_j ∈ {1, 2, ..., |R_j|} means that if the replace operation is used, the p_j-th statement in R_j is selected to replace the j-th LBS; and q_j ∈ {1, 2, ..., |I_j|} means that if the insert operation is used, the q_j-th statement in I_j is inserted before the j-th LBS. Figure 19.4 illustrates the new lower-granularity patch representation. Suppose the j-th LBS in this figure is a.callX();, then the edit on the j-th LBS is: replace a.callX(); with b.callX();.
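A minimal data-structure sketch of this four-part encoding might look as follows; the class and field names are illustrative stand-ins, not ARJA-e's actual implementation.

import java.util.Arrays;

// A patch genome x = (b, u, p, q); each array has one entry per LBS.
public class PatchGenome {
    boolean[] b; // b[j]: is the j-th LBS edited?
    int[] u;     // u[j]: index of the operation type in O_j (1-based)
    int[] p;     // p[j]: index of the replacement statement in R_j (1-based)
    int[] q;     // q[j]: index of the insertion statement in I_j (1-based)

    PatchGenome(int n) {
        b = new boolean[n];
        u = new int[n];
        p = new int[n];
        q = new int[n];
    }

    public static void main(String[] args) {
        PatchGenome x = new PatchGenome(3);
        x.b[1] = true; // edit the second LBS ...
        x.u[1] = 2;    // ... with the 2nd operation type in O_j (say, "replace")
        x.p[1] = 5;    // ... using the 5th statement in R_j
        System.out.println(Arrays.toString(x.u));
    }
}

Decoding a concrete edit from such a genome is then just a matter of indexing into the fixed, pre-numbered sets R_j, I_j and O_j.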
19.5.2 Finer-Grained Fitness Function

To evaluate the fitness of an individual x, we use a bi-objective function as in the original ARJA [57]. The first objective, f_1(x), is the patch size, exactly as in ARJA. The second objective, f_2(x), is the weighted failure rate. Different from ARJA, we compute f_2(x) through a finer-grained analysis of test execution, in order to provide a smoother gradient for the genetic search to navigate the search space.
Fig. 19.4 Illustration of the new lower-granularity patch representation
Since our repair system targets Java, our implementation is based on the JUnit [7] framework. Specifically, we define a metric to measure the degree of violation of each assertion, which we call the assertion distance. Suppose an assertion e asserts that x and y are equal to within a positive tolerance δ, i.e., assertEquals(x, y, δ); then the assertion distance d(e) is computed as:

$$d(e) = \begin{cases} \nu(|x - y| - \delta), & |x - y| \ge \delta \\ 0, & |x - y| < \delta \end{cases} \tag{19.3}$$

Here, ν(x) is a normalizing function into [0, 1], and we use the one suggested in [1]: ν(x) = x/(x + 1). After executing a program variant x on a test case t, we can compute a metric h(x, t) ∈ [0, 1] that indicates how badly x fails the test case t, using the collected assertion distances:

$$h(x, t) = \frac{\sum_{e \in E(x,t)} d(e)}{|E(x, t)|} \tag{19.4}$$

where E(x, t) is the set of assertions executed by x on t, and d(e) is the assertion distance for assertion e. Based on h(x, t), f_2(x) is formulated as:

$$f_2(x) = \frac{\sum_{t \in T_{pos}} h(x, t)}{|T_{pos}|} + w \times \frac{\sum_{t \in T_{neg}} h(x, t)}{|T_{neg}|} \tag{19.5}$$
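To make the computation concrete, here is a self-contained Java sketch of Eqs. (19.3)-(19.5); it assumes the raw assertion deviations have already been collected from instrumented JUnit runs, which the sketch itself does not perform, and all names are illustrative.

import java.util.List;

public class FinerGrainedFitness {

    // Normalizing function nu(x) = x / (x + 1), as suggested in [1].
    static double nu(double x) {
        return x / (x + 1.0);
    }

    // Assertion distance of Eq. (19.3) for assertEquals(x, y, delta).
    static double assertionDistance(double x, double y, double delta) {
        double diff = Math.abs(x - y);
        return diff >= delta ? nu(diff - delta) : 0.0;
    }

    // h(x, t) of Eq. (19.4): mean assertion distance over the executed assertions.
    static double h(List<Double> assertionDistances) {
        return assertionDistances.stream().mapToDouble(Double::doubleValue)
                                 .average().orElse(0.0);
    }

    // f2(x) of Eq. (19.5): weighted failure rate over positive and negative tests.
    static double f2(List<Double> hPos, List<Double> hNeg, double w) {
        double pos = hPos.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        double neg = hNeg.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        return pos + w * neg;
    }

    public static void main(String[] args) {
        // e.g., an assertion expecting 2.0 but observing 2.5 with delta = 0.1
        double d = assertionDistance(2.5, 2.0, 0.1); // nu(0.4) = 0.4/1.4
        System.out.println(d);
        System.out.println(f2(List.of(0.0, d), List.of(0.3), 0.5));
    }
}

Because h(x, t) varies continuously with how far an assertion is from passing, this objective yields gradient even between program variants that fail exactly the same test cases.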
19.5.3 Genetic Operators

Genetic operators, including crossover and mutation, are used to produce the offspring individuals in GP. Crossover is applied to each part of the patch representation separately, in order to inherit good genetic material from both parents; for all four parts, we employ the half uniform crossover (HUX) operator. Mutation is guided and modifies the information of a single selected LBS. Specifically, we first use roulette wheel selection to choose a LBS, where the j-th LBS is chosen with probability $susp_j / \sum_{j=1}^{n} susp_j$; supposing the j-th LBS is selected, we then apply bit-flip mutation to b_j and uniform mutation to u_j, p_j and q_j, respectively. Figure 19.5 illustrates the crossover and mutation operations, where only a single offspring is shown for brevity.
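A minimal sketch of the guided mutation, reusing the hypothetical PatchGenome fields from the earlier sketch and assuming all sets O_j, R_j and I_j are non-empty:

import java.util.Random;

public class GuidedMutation {
    static final Random RNG = new Random();

    // Roulette wheel selection: pick LBS j with probability susp[j] / sum(susp).
    static int rouletteWheel(double[] susp) {
        double total = 0.0;
        for (double s : susp) total += s;
        double r = RNG.nextDouble() * total;
        double acc = 0.0;
        for (int j = 0; j < susp.length; j++) {
            acc += susp[j];
            if (r <= acc) return j;
        }
        return susp.length - 1;
    }

    // Mutate a single LBS: bit-flip on b[j], uniform resampling of u[j], p[j], q[j].
    static void mutate(PatchGenome x, double[] susp, int[] oSize, int[] rSize, int[] iSize) {
        int j = rouletteWheel(susp);
        x.b[j] = !x.b[j];                   // bit-flip mutation
        x.u[j] = 1 + RNG.nextInt(oSize[j]); // uniform over {1, ..., |O_j|}
        x.p[j] = 1 + RNG.nextInt(rSize[j]); // uniform over {1, ..., |R_j|}
        x.q[j] = 1 + RNG.nextInt(iSize[j]); // uniform over {1, ..., |I_j|}
    }
}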
Fig. 19.5 Illustration of the crossover and mutation
19.5.4 Multi-Objective Search

With the patch representation, fitness function and genetic operators designed above, any multi-objective evolutionary algorithm can serve the purpose of searching for patches; in this work we employ NSGA-II [9] as the multi-objective search framework. To initialize the population, we combine the fault localization result with randomness: for the first part (i.e., b), b_j is initialized to one with probability susp_j × μ, where μ ∈ (0, 1) is a predefined parameter, while the remaining three parts (i.e., u, p, q) are initialized randomly. After population initialization, the search algorithm iterates over generations until the stopping criterion is satisfied.
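The biased initialization can be sketched in the same style, again with the hypothetical PatchGenome and non-empty sets assumed:

import java.util.Random;

public class PopulationInit {
    static final Random RNG = new Random();

    // Initialize one genome: b is biased by suspiciousness, u/p/q are random.
    static PatchGenome init(double[] susp, int[] oSize, int[] rSize, int[] iSize, double mu) {
        int n = susp.length;
        PatchGenome x = new PatchGenome(n);
        for (int j = 0; j < n; j++) {
            x.b[j] = RNG.nextDouble() < susp[j] * mu; // edit bit biased by susp_j * mu
            x.u[j] = 1 + RNG.nextInt(oSize[j]);       // random operation type
            x.p[j] = 1 + RNG.nextInt(rSize[j]);       // random replacement statement
            x.q[j] = 1 + RNG.nextInt(iSize[j]);       // random insertion statement
        }
        return x;
    }
}

This seeding concentrates initial edits on the most suspicious statements while leaving the choice of concrete edits to the search.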
19.6 Alleviating Patch Overfitting

19.6.1 Overfit Detection

For overfit detection, we take a buggy program, a set of positive test cases and a plausible patch as input, and determine whether this plausible patch is an overfitting patch. Our approach is based on the assumption that the buggy program behaves correctly on the test inputs encoded in the positive test cases. Figure 19.6 shows the overall process. First, given a plausible patch and a buggy program, we localize the methods whose statements will be modified by the patch. We then instrument the bytecode of these methods in the buggy program. For each method, the instrumentation is conducted at its entry point and at all its possible exit points. At the entry point, we inject new bytecode to save the input of the method, including all the method parameters and the current object this (i.e., the object whose method is being called), into a file. At each exit point, we inject new bytecode to save the output of the method, including the return value, the current object this and the reference-type method parameters, into another file.
Fig. 19.6 The overview of our overfit detection approach
Note that if a method to be instrumented is a static method, we simply ignore the current object. To save the objects, we leverage the Java serialization technique, which can convert an object's state into a byte stream that can be reverted back into a copy of the object. With the instrumented buggy program, we run the positive test cases so as to capture a number of input-output pairs for the localized methods. Suppose there are K such pairs, denoted by a set PA = {(In_1, Out_1), (In_2, Out_2), ..., (In_K, Out_K)}. According to our assumption, all these input-output pairs reflect correct program behavior. To judge patch correctness, we feed the inputs In_1, In_2, ..., In_K into the corresponding methods in the patched program and check whether the correct outputs are obtained. Specifically, for each input-output pair (In_i, Out_i) ∈ PA collected previously, we deserialize In_i from the file and use the Java reflection technique to invoke the corresponding method in the instrumented patched program with the deserialized input In_i, so that we can collect the method output Out'_i. Lastly, we compare every Out'_i with the corresponding Out_i, and if there is any difference, we identify the plausible patch as an overfitting patch that is incorrect.
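The serialization round-trip at the heart of this step can be sketched as follows; the State class is a hypothetical stand-in for a real captured object, and the actual system injects equivalent logic into bytecode rather than calling it directly.

import java.io.*;

public class SerializationRoundTrip {

    // A stand-in for a captured method input or output.
    static class State implements Serializable {
        int value;
        State(int value) { this.value = value; }
    }

    // Serialize an object to bytes (in the real system: to a file at entry/exit points).
    static byte[] capture(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // Deserialize the captured bytes back into a copy of the object.
    static Object restore(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] outI = capture(new State(42));   // Out_i captured from the buggy program
        State restored = (State) restore(outI); // later: replay inputs, compare outputs
        System.out.println(restored.value);     // 42
    }
}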
19.6.2 Patch Ranking

ARJA-e can sometimes output more than one plausible patch (with the same patch size) for a bug. As a post-processing step, we design a heuristic procedure to rank these patches, based on three metrics defined per patch. The first metric, denoted by Susp, is the sum of the suspiciousness values of the LBSs modified by the patch. The second metric, denoted by Dist, is based on our overfit detection approach. Recall that for overfit detection we only need to know whether there is a difference between Out'_i and Out_i, for i = 1, 2, ..., K. Here we quantify that difference: we deserialize Out'_i and Out_i and recursively extract all primitive and string data contained in the two outputs. Similar to the computation of the assertion distance, we can easily compute the distance between each corresponding pair of primitive/string values and normalize it to [0, 1]. Dist is then the average of these normalized distances over all outputs. Before defining the third metric, we fix a preference relation over the operation types in our system. We prefer operation types that are generally less likely to introduce side effects, giving the relation: NPC/RC/CC/DC ≺ MPA ≺ ER ≺ BEAR ≺ SR/SI ≺ SD. Here SR and SI mean statement replacement and insertion based on the redundancy assumption, respectively, and SD means statement deletion; the others are the template-based operations described in Sect. 19.4.2. We assign a preference score to each operation type: SD is scored 1, SR and SI are scored 2, BEAR is scored 3, and so on. With these scores, the third metric for a patch, denoted by Pref, is defined as the sum of the scores of the operation types contained in the patch. For Susp and Pref, larger is better; for Dist, smaller is better. When comparing two patches in the ranking, Susp, Dist and Pref are considered in sequence until the two patches can be distinguished. If all three metrics fail to distinguish them, the patch found earlier is ranked higher.
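This lexicographic comparison maps naturally onto a chained Comparator; a minimal sketch (Java 16+ for the record syntax; RankedPatch and its fields are hypothetical) might look as follows.

import java.util.Comparator;
import java.util.List;

public class PatchRanking {

    // Hypothetical container for a patch's ranking metrics.
    record RankedPatch(String id, double susp, double dist, int pref, int foundOrder) {}

    // Susp: larger is better; Dist: smaller is better; Pref: larger is better;
    // ties broken by discovery order (earlier is better).
    static final Comparator<RankedPatch> RANKING =
        Comparator.comparingDouble(RankedPatch::susp).reversed()
                  .thenComparingDouble(RankedPatch::dist)
                  .thenComparing(Comparator.comparingInt(RankedPatch::pref).reversed())
                  .thenComparingInt(RankedPatch::foundOrder);

    public static void main(String[] args) {
        List<RankedPatch> patches = List.of(
            new RankedPatch("p1", 0.8, 0.10, 5, 1),
            new RankedPatch("p2", 0.8, 0.05, 4, 2));
        patches.stream().sorted(RANKING)
               .forEach(p -> System.out.println(p.id()));
        // prints p2 then p1: equal Susp, but p2 has smaller Dist
    }
}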
19.7 Experimental Design

19.7.1 Research Questions

We intend to answer the following research questions:

RQ1: How effective is ARJA-e compared to state-of-the-art repair systems on real bugs?
RQ2: Can ARJA-e fix bugs in a novel way compared to a human developer?
RQ3: How good is our overfit detection approach?
19.7.2 Dataset of Bugs

We perform the empirical evaluation on a database of real bugs called Defects4J [18], which has been used extensively for evaluating Java repair systems [6, 33, 46, 53, 55, 57]. We consider four projects in Defects4J, namely Chart, Time, Lang and Math; Table 19.3 shows their descriptive statistics. In total, there are 224 real bugs: 26 from Chart (C1-C26), 27 from Time (T1-T27), 65 from Lang (L1-L65) and 106 from Math (M1-M106).
19.7.3 Parameter Setting

Table 19.4 shows the parameter setting for ARJA-e in the experiments. Note that the crossover and mutation operators presented in Sect. 19.5.3 are always executed, so their probability (i.e., 1) is omitted from the table. Given the stochastic nature of ARJA-e, we execute five random trials in parallel for each bug. Each trial of ARJA-e is terminated when it reaches the maximum number of generations (i.e., 50) or its execution time exceeds 1 h.

Table 19.3 The descriptive statistics of the Defects4J dataset

| Project | ID | Source KLoC | Test KLoC | #Bugs | #JUnit tests |
|---------|----|-------------|-----------|-------|--------------|
| Chart   | C  | 96          | 50        | 26    | 2205         |
| Time    | T  | 28          | 53        | 27    | 4043         |
| Lang    | L  | 22          | 6         | 65    | 2295         |
| Math    | M  | 85          | 19        | 106   | 5246         |
| Total   |    | 231         | 128       | 224   | 13,789       |

Table 19.4 The parameter setting for ARJA-e

| Parameter | Description                       | Value |
|-----------|-----------------------------------|-------|
| N         | Population size                   | 40    |
| γ_min     | Threshold for the suspiciousness  | 0.1   |
| n_max     | Maximum number of LBSs considered | 60    |
| β_sim     | Threshold for similarity          | 0.3   |
| β_rel     | Threshold for relevance           | 0.3   |
| w         | Refer to Sect. 19.5.2             | 0.5   |
19.8 Results and Discussions

19.8.1 Performance Evaluation (RQ1)

To show the superiority of ARJA-e over the state of the art, we compare ARJA-e with 13 existing repair approaches in terms of the number of bugs fixed and correctly fixed. The 13 approaches are jGenProg [33, 34] (an implementation of GenProg for Java), xPAR (a reimplementation of PAR by Le et al. [24]), Nopol [56], HDRepair [24], ACS [55], ssFix [53], JAID [6], ELIXIR [46], ARJA [57], SimFix [17], CAPGEN [52], SOFIX [29] and SKETCHFIX [16], which include almost all published approaches that have been evaluated on Defects4J. Note that we use a strict criterion for judging whether a bug is correctly fixed by ARJA-e: a bug is regarded as correctly fixed only when the plausible patch ranked first (using the procedure in Sect. 19.6.2) is correct. Table 19.5 shows the comparison results, from which we can see that ARJA-e outperforms all the other approaches in terms of the number of bugs fixed and correctly fixed. We further compare ARJA-e with ACS, ssFix and SimFix by analyzing the overlaps among their repair results; these three are selected because they show prominent performance among the 13 compared approaches and because the IDs of their (correctly) fixed bugs are available [17, 53, 55]. Figure 19.7 shows the intersections of fixed bugs (Fig. 19.7a) and correctly fixed bugs (Fig. 19.7b) among ARJA-e, ACS, ssFix and SimFix, using Venn diagrams. From Fig. 19.7a, ARJA-e performs much better than the other three approaches in
Table 19.5 Comparison with existing repair tools in terms of the number of bugs fixed and correctly fixed (Plausible/Correct)

| Project | ARJA-e | jGenProg | xPAR | Nopol | HDRepair^a | ACS   | ssFix |
|---------|--------|----------|------|-------|------------|-------|-------|
| Chart   | 18/7   | 7/0      | -/0  | 6/1   | -/2        | 2/2   | 7/2   |
| Lang    | 28/9   | 0/0      | -/1  | 7/3   | -/7        | 4/3   | 12/5  |
| Math    | 51/21  | 18/5     | -/2  | 21/1  | -/6        | 16/12 | 26/7  |
| Time    | 9/2    | 2/0      | -/0  | 1/0   | -/1        | 1/1   | 4/0   |
| Total   | 106/39 | 27/5     | -/3  | 35/5  | -/16       | 23/18 | 49/14 |

| Project | JAID | ELIXIR | ARJA^b | SimFix | CAPGEN | SOFIX | SKETCHFIX |
|---------|------|--------|--------|--------|--------|-------|-----------|
| Chart   | 4/2  | 7/4    | 9/3    | 8/4    | -/4    | -/5   | 8/6       |
| Lang    | 8/1  | 12/8   | 17/4   | 13/9   | -/5    | -/4   | 4/1       |
| Math    | 8/1  | 19/12  | 29/10  | 26/14  | -/12   | -/13  | 8/7       |
| Time    | 0/0  | 3/2    | 4/1    | 1/1    | -/0    | -/1   | 1/1       |
| Total   | 20/4 | 41/26  | 59/18  | 48/28  | -/21   | -/23  | 21/15     |

The best results are shown in bold. "-" means the data is not available, since it is not reported by the original authors.
^a HDRepair generated correct patches for 16 bugs, but only 10 of them were ranked first.
^b In ARJA, a bug is regarded as being fixed correctly if one of its plausible patches is identified as correct.
Fig. 19.7 Venn diagram of repaired bugs by ARJA-e, ACS, ssFix and SimFix. (a) Test-adequate bug fixing. (b) Correct bug fixing
terms of test-adequate bug fixing, and most of the bugs fixed by ACS, ssFix and SimFix can also be fixed by ARJA-e. From Fig. 19.7b, ARJA-e fixes the highest number of bugs correctly (i.e., 39), of which 20 cannot be fixed correctly by any of the other three approaches, so ARJA-e indeed complements the three approaches very well. It should be noted, however, that the three approaches also show good complementarity to ARJA-e in terms of correct bug fixing: ACS, ssFix and SimFix can correctly fix 11, 9 and 16 bugs, respectively, that cannot be correctly fixed by ARJA-e. This may be because ACS and ssFix differ considerably from ARJA-e in technique: ACS aims at performing precise condition synthesis, while ssFix uses existing code from a code database. It therefore seems possible to further enhance the performance of ARJA-e by borrowing ideas from ACS and ssFix; for example, we could use a method similar to ACS to generate more accurate conditions for instantiating the template BEAR, or reuse existing code outside the buggy program as ssFix does. In summary, ARJA-e outperforms 13 existing repair approaches by a considerable margin. Compared with the best published results, ARJA-e fixes 79.7% more bugs than ARJA (from 59 to 106) and correctly fixes 39.3% more bugs than SimFix (from 28 to 39). Moreover, ARJA-e is an effective approach complementary to the state-of-the-art techniques.
19.8.2 Novelty in Generated Repairs (RQ2)

We found that ARJA-e can fix some bugs in a different way from the human developer. These patches are generally beyond a programmer's expectations, showing surprising novelty [28]. In the following, we present case studies to demonstrate this point. Figure 19.8 shows a correct patch generated by ARJA-e for bug M94, in which the following function is intended to compute the greatest common divisor (GCD) of two integers.
// MathUtils.java
public static int gcd(int u, int v) {
-   if (u * v == 0) {
+   if (u == 0 || v == 0) {   // human-written patch
+   if (sign(u) * v == 0) {   // ARJA-e patch
        return (Math.abs(u) + Math.abs(v));
    }
    ...
}
Fig. 19.8 Human-written patch and correct patch generated by ARJA-e for bug M94

// StrBuilder.java
public int indexOf(String str, int startIndex) {
    ...
-   char[] thisBuf = buffer;
+   char[] thisBuf = toCharArray();
    int len = thisBuf.length - strLen;
-   outer: for (int i = startIndex; i < len; i++) {
+   outer: for (int i = startIndex; ...
    ...
}

-   while (Math.abs(an) > epsilon && n < maxIterations) {
+   while (Math.sqrt(an) > epsilon && n < maxIterations) {
        n = n + 1.0;
        an = an * (x / (a + n));
        sum = sum + an;
    }
Fig. 19.11 Plausible patch generated by ARJA-e for bug M104
Figure 19.11 shows a plausible patch generated by ARJA-e for bug M104. This bug is triggered because the maximum allowed numerical error (MANE) is too large. To fix the bug, the loop should not terminate until Math.abs(an) reaches a smaller value, so the human-written patch changes the initial value of epsilon from 10e-9 to 10e-15 in order to ensure a smaller MANE. The ARJA-e patch shown in Fig. 19.11 achieves similar functionality in a different way, by changing the method invocation abs to sqrt. Although this patch is not semantically equivalent to the human-written patch, it makes the entire test suite pass and is also indicative of the cause of the bug.
19.8.3 Effectiveness of Overfit Detection (RQ3)

In this subsection, we evaluate the overfit detection approach described in Sect. 19.6.1. To demonstrate its effectiveness, we compare it with Xiong et al.'s approach (XA) [54], which is currently the state-of-the-art technique for overfit detection and shares certain similarities with our approach. To ensure a fair comparison, we use the version of XA without test case generation; according to [54], this simplified version already achieves competitive performance compared to the version relying on new test cases.
For the subjects, we consider the first plausible patch found by ARJA-e for each bug (according to RQ1). In addition, we include the patches generated by jGenProg and jKali, which are collected from Martinez et al.'s empirical study [33] on Defects4J. In the end, after ignoring unsupported patches, we obtain a dataset of 122 plausible patches, of which 97 are incorrect and 25 are correct. The correctness of the ARJA-e patches is judged by ourselves, while the correctness of the jGenProg and jKali patches follows Martinez et al.'s analysis [33]. Table 19.6 shows the statistics of this dataset, and Table 19.7 shows the comparison results per tool. From Table 19.7, for the patches of ARJA-e and jGenProg, our approach can filter out more incorrect patches than XA, while for the patches of jKali the two approaches identify the same number of incorrect patches. Moreover, our approach does not filter out any correct patch obtained by jGenProg or jKali, whereas XA filters out one correct patch by jGenProg (for bug M53). Note that it was reported in [54] that XA does not exclude any correct patch by jGenProg; this inconsistency may be due to different computing environments. For the patches of ARJA-e, both approaches exclude correct patches by mistake, but our approach excludes only 3 out of 19 correct patches, whereas XA excludes 7. To further understand the performance difference between our approach and XA, Fig. 19.12 shows the intersection of the incorrect patches identified by the two approaches. It is interesting to see that our approach complements XA very well: it identifies 6 incorrect patches by jGenProg, 4 incorrect patches by jKali and 14 incorrect patches by ARJA-e that cannot be identified by XA. In addition, none of the 8 correct patches excluded by XA is also excluded by our approach.
Table 19.6 Dataset of plausible patches used in RQ3

| Project | ARJA-e Incorrect | ARJA-e Correct | jGenProg Incorrect | jGenProg Correct | jKali Incorrect | jKali Correct | Total Incorrect | Total Correct |
|---------|------------------|----------------|--------------------|------------------|-----------------|---------------|-----------------|---------------|
| Chart   | 9                | 3              | 6                  | 0                | 6               | 0             | 21              | 3             |
| Lang    | 16               | 4              | 0                  | 0                | 0               | 0             | 16              | 4             |
| Math    | 23               | 11             | 13                 | 5                | 13              | 1             | 49              | 17            |
| Time    | 7                | 1              | 2                  | 0                | 2               | 0             | 11              | 1             |
| Total   | 55               | 19             | 21                 | 5                | 21              | 1             | 97              | 25            |

Table 19.7 Comparison between our approach and Xiong et al.'s approach (XA) in overfit detection (the patches are categorized by repair tool)

| Tool     | Incorrect | Correct | Incorrect excluded (ours) | Incorrect excluded (XA) | Correct excluded (ours) | Correct excluded (XA) |
|----------|-----------|---------|---------------------------|-------------------------|-------------------------|-----------------------|
| ARJA-e   | 55        | 19      | 28 (50.91%)               | 27 (49.09%)             | 3 (15.79%)              | 7 (36.84%)            |
| jGenProg | 21        | 5       | 11 (52.38%)               | 8 (38.10%)              | 0 (0.00%)               | 1 (20.00%)            |
| jKali    | 21        | 1       | 9 (42.86%)                | 9 (42.86%)              | 0 (0.00%)               | 0 (0.00%)             |
| Total    | 97        | 25      | 48 (49.48%)               | 44 (45.36%)             | 3 (12.00%)              | 8 (32.00%)            |
Fig. 19.12 Intersection of incorrect patches identified by our approach and Xiong et al.'s approach. (a) jGenProg. (b) jKali. (c) ARJA-e
Given this strong complementarity, it is very promising to try to further improve the accuracy of overfit detection by properly combining the strengths of the two approaches.
19.9 Conclusion

In this chapter, we have proposed a new repair system, called ARJA-e, for better evolutionary software repair. By combining two sources of fix ingredients, ARJA-e can conduct complex statement-level transformations, targeted code changes (e.g., adding a null pointer checker), and code changes at a granularity finer than the statement level, which gives it great potential to fix various kinds of bugs. To reduce the search space and avoid nonsensical patches, ARJA-e uses a strategy based on a lightweight contextual analysis, which filters out unpromising replacement and insertion statements. To harness the repair power of this search space, ARJA-e unifies the edits at different granularities into statement-level edits, so as to encode patches with a lower-granularity patch representation characterized by the decoupling of statements for replacement from statements for insertion. With this new patch representation, ARJA-e employs multi-objective GP to navigate the search space, and to better guide the search it uses a finer-grained fitness function that makes full use of the semantic information provided by the existing test cases. Moreover, ARJA-e includes a post-processing tool for alleviating patch overfitting, which serves the purposes of overfit detection and patch ranking. We have conducted an extensive empirical study on 224 real bugs in Defects4J. The evaluation results show that ARJA-e outperforms 13 existing repair approaches
by a considerable margin in terms of both the number of bugs fixed and the number of bugs correctly fixed. Interestingly, we found that ARJA-e can fix some bugs in a creative way, usually beyond the expectations of human programmers. In addition, we have shown that the proposed overfit detection technique has several advantages over a state-of-the-art approach [54]. In the future, we plan to incorporate additional sources of fix ingredients (e.g., source code repositories [19, 53]) into our repair framework, which may increase the potential for fixing more bugs. Moreover, we would like to investigate new mating and survival selection methods [15, 40, 58] in GP, so as to further improve the evolutionary search algorithm for bug repair.
References

1. Arcuri, A.: It does matter how you normalise the branch distance in search based software testing. In: Proceedings of the Third International Conference on Software Testing, Verification and Validation, pp. 205–214. IEEE (2010)
2. Banzhaf, W., Nordin, P., Keller, R.E., Francone, F.D.: Genetic programming: An introduction, vol. 1. Morgan Kaufmann, San Francisco (1998)
3. Barr, E.T., Brun, Y., Devanbu, P., Harman, M., Sarro, F.: The plastic surgery hypothesis. In: Proceedings of the 22nd International Symposium on Foundations of Software Engineering, pp. 306–317. ACM (2014)
4. Brameier, M.F., Banzhaf, W.: Linear genetic programming. Springer Science & Business Media (2007)
5. Campos, J., Riboira, A., Perez, A., Abreu, R.: GZoltar: An Eclipse plug-in for testing and debugging. In: Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering, pp. 378–381. ACM (2012)
6. Chen, L., Pei, Y., Furia, C.A.: Contract-based program repair without the contracts. In: Proceedings of the 32nd IEEE/ACM International Conference on Automated Software Engineering, pp. 637–647. IEEE (2017)
7. JUnit contributors: A programmer-oriented testing framework for Java (2004). URL https://github.com/junit-team/junit4
8. D'Antoni, L., Samanta, R., Singh, R.: Qlose: Program repair with quantitative objectives. In: Proceedings of the International Conference on Computer Aided Verification, pp. 383–401. Springer (2016)
9. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6(2), 182–197 (2002)
10. Debroy, V., Wong, W.E.: Using mutation to automatically suggest fixes for faulty programs. In: Proceedings of the Third International Conference on Software Testing, Verification and Validation, pp. 65–74. IEEE (2010)
11. Fast, E., Le Goues, C., Forrest, S., Weimer, W.: Designing better fitness functions for automated program repair. In: Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation, pp. 965–972. ACM (2010)
12. Forrest, S., Nguyen, T., Weimer, W., Le Goues, C.: A genetic programming approach to automated software repair. In: Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation, pp. 947–954. ACM (2009)
13. Gazzola, L., Micucci, D., Mariani, L.: Automatic software repair: A survey. IEEE Transactions on Software Engineering 45(1), 34–67 (2019)
14. Gupta, R., Pal, S., Kanade, A., Shevade, S.: DeepFix: Fixing common C language errors by deep learning. In: Proceedings of the 31st AAAI Conference on Artificial Intelligence, pp. 1345–1351 (2017)
15. Helmuth, T., Spector, L., Matheson, J.: Solving uncompromising problems with lexicase selection. IEEE Transactions on Evolutionary Computation 19(5), 630–643 (2015)
16. Hua, J., Zhang, M., Wang, K., Khurshid, S.: Towards practical program repair with on-demand candidate generation. In: Proceedings of the 40th International Conference on Software Engineering, pp. 12–23. ACM (2018)
17. Jiang, J., Xiong, Y., Zhang, H., Gao, Q., Chen, X.: Shaping program repair space with existing patches and similar code. In: Proceedings of the 27th International Symposium on Software Testing and Analysis, pp. 298–309. ACM (2018)
18. Just, R., Jalali, D., Ernst, M.D.: Defects4J: A database of existing faults to enable controlled testing studies for Java programs. In: Proceedings of the 2014 International Symposium on Software Testing and Analysis, pp. 437–440. ACM (2014)
19. Ke, Y., Stolee, K.T., Le Goues, C., Brun, Y.: Repairing programs with semantic code search. In: Proceedings of the 30th IEEE/ACM International Conference on Automated Software Engineering, pp. 295–306. IEEE (2015)
20. Kim, D., Nam, J., Song, J., Kim, S.: Automatic patch generation learned from human-written patches. In: Proceedings of the 35th International Conference on Software Engineering, pp. 802–811. IEEE (2013)
21. Koza, J.R.: Genetic programming: On the programming of computers by means of natural selection, vol. 1. MIT Press (1992)
22. Le, X.B.D., Chu, D.H., Lo, D., Le Goues, C., Visser, W.: JFix: Semantics-based repair of Java programs via symbolic PathFinder. In: Proceedings of the 26th International Symposium on Software Testing and Analysis, pp. 376–379. ACM (2017)
23. Le, X.B.D., Chu, D.H., Lo, D., Le Goues, C., Visser, W.: S3: Syntax- and semantic-guided repair synthesis via programming by examples. In: Proceedings of the 11th Joint Meeting on Foundations of Software Engineering, pp. 593–604. ACM (2017)
24. Le, X.B.D., Lo, D., Le Goues, C.: History driven program repair. In: Proceedings of the 23rd International Conference on Software Analysis, Evolution, and Reengineering, pp. 213–224. IEEE (2016)
25. Le Goues, C., Dewey-Vogt, M., Forrest, S., Weimer, W.: A systematic study of automated program repair: Fixing 55 out of 105 bugs for $8 each. In: Proceedings of the 34th International Conference on Software Engineering, pp. 3–13. IEEE (2012)
26. Le Goues, C., Forrest, S., Weimer, W.: Current challenges in automatic software repair. Software Quality Journal 21(3), 421–443 (2013)
27. Le Goues, C., Nguyen, T., Forrest, S., Weimer, W.: GenProg: A generic method for automatic software repair. IEEE Transactions on Software Engineering 38(1), 54–72 (2012)
28. Lehman, J., Clune, J., Misevic, D., Adami, C., Altenberg, L., Beaulieu, J., Bentley, P.J., Bernard, S., Beslon, G., Bryson, D.M., et al.: The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities. arXiv preprint arXiv:1803.03453 (2018)
29. Liu, X., Zhong, H.: Mining StackOverflow for program repair. In: Proceedings of the 25th International Conference on Software Analysis, Evolution and Reengineering, pp. 118–129. IEEE (2018)
30. Long, F., Amidon, P., Rinard, M.: Automatic inference of code transforms for patch generation. In: Proceedings of the 11th Joint Meeting on Foundations of Software Engineering, pp. 727–739. ACM (2017)
31. Long, F., Rinard, M.: Staged program repair with condition synthesis. In: Proceedings of the 10th Joint Meeting on Foundations of Software Engineering, pp. 166–178. ACM (2015)
32. Long, F., Rinard, M.: Automatic patch generation by learning correct code. ACM SIGPLAN Notices 51(1), 298–312 (2016)
19 An Evolutionary System for Better Automatic Software Repair
405
33. Martinez, M., Durieux, T., Sommerard, R., Xuan, J., Monperrus, M.: Automatic repair of real bugs in Java: A large-scale experiment on the Defects4J dataset. Empirical Software Engineering 22(4), 1936–1964 (2017)
34. Martinez, M., Monperrus, M.: Astor: A program repair library for Java. In: Proceedings of the 25th International Symposium on Software Testing and Analysis, pp. 441–444. ACM (2016)
35. Martinez, M., Monperrus, M.: Ultra-large repair search space with automatically mined templates: The Cardumen mode of Astor. In: International Symposium on Search Based Software Engineering, pp. 65–86. Springer (2018)
36. Martinez, M., Weimer, W., Monperrus, M.: Do the fix ingredients already exist? An empirical inquiry into the redundancy assumptions of program repair approaches. In: Companion Proceedings of the 36th International Conference on Software Engineering, pp. 492–495. ACM (2014)
37. Mechtaev, S., Yi, J., Roychoudhury, A.: DirectFix: Looking for simple program repairs. In: Proceedings of the 37th International Conference on Software Engineering, pp. 448–458. IEEE Press (2015)
38. Mechtaev, S., Yi, J., Roychoudhury, A.: Angelix: Scalable multiline program patch synthesis via symbolic analysis. In: Proceedings of the 38th International Conference on Software Engineering, pp. 691–701. ACM (2016)
39. Monperrus, M.: Automatic software repair: A bibliography. ACM Computing Surveys 51(1), 17 (2018)
40. Mouret, J.B., Clune, J.: Illuminating search spaces by mapping elites. arXiv preprint arXiv:1504.04909 (2015)
41. Nguyen, H.D.T., Qi, D., Roychoudhury, A., Chandra, S.: SemFix: Program repair via semantic analysis. In: Proceedings of the 35th International Conference on Software Engineering, pp. 772–781. IEEE (2013)
42. Oliveira, V.P.L., de Souza, E.F., Le Goues, C., Camilo-Junior, C.G.: Improved representation and genetic operators for linear genetic programming for automated program repair. Empirical Software Engineering 23(5), 2980–3006 (2018)
43. Qi, Y., Mao, X., Lei, Y., Dai, Z., Wang, C.: The strength of random search on automated program repair. In: Proceedings of the 36th International Conference on Software Engineering, pp. 254–265. ACM (2014)
44. Qi, Z., Long, F., Achour, S., Rinard, M.: An analysis of patch plausibility and correctness for generate-and-validate patch generation systems. In: Proceedings of the 2015 International Symposium on Software Testing and Analysis, pp. 24–36. ACM (2015)
45. Rolim, R., Soares, G., D'Antoni, L., Polozov, O., Gulwani, S., Gheyi, R., Suzuki, R., Hartmann, B.: Learning syntactic program transformations from examples. In: Proceedings of the 39th International Conference on Software Engineering, pp. 404–415. IEEE Press (2017)
46. Saha, R.K., Lyu, Y., Yoshida, H., Prasad, M.R.: ELIXIR: Effective object-oriented program repair. In: Proceedings of the 32nd IEEE/ACM International Conference on Automated Software Engineering, pp. 648–659. IEEE (2017)
47. Smith, E.K., Barr, E.T., Le Goues, C., Brun, Y.: Is the cure worse than the disease? Overfitting in automated program repair. In: Proceedings of the 10th Joint Meeting on Foundations of Software Engineering, pp. 532–543. ACM (2015)
48. Tan, S.H., Yoshida, H., Prasad, M.R., Roychoudhury, A.: Anti-patterns in search-based program repair. In: Proceedings of the 24th International Symposium on Foundations of Software Engineering, pp. 727–738. ACM (2016)
49. Weimer, W., Forrest, S., Le Goues, C., Nguyen, T.: Automatic program repair with evolutionary computation. Communications of the ACM 53(5), 109–116 (2010)
50. Weimer, W., Fry, Z.P., Forrest, S.: Leveraging program equivalence for adaptive program repair: Models and first results. In: Proceedings of the 28th International Conference on Automated Software Engineering, pp. 356–366. IEEE (2013)
51. Weimer, W., Nguyen, T., Le Goues, C., Forrest, S.: Automatically finding patches using genetic programming. In: Proceedings of the 31st International Conference on Software Engineering, pp. 364–374. IEEE (2009)
52. Wen, M., Chen, J., Wu, R., Hao, D., Cheung, S.C.: Context-aware patch generation for better automated program repair. In: Proceedings of the 40th International Conference on Software Engineering, pp. 1–11. ACM (2018)
53. Xin, Q., Reiss, S.P.: Leveraging syntax-related code for automated program repair. In: Proceedings of the 32nd IEEE/ACM International Conference on Automated Software Engineering, pp. 660–670. IEEE Press (2017)
54. Xiong, Y., Liu, X., Zeng, M., Zhang, L., Huang, G.: Identifying patch correctness in test-based program repair. In: Proceedings of the 40th International Conference on Software Engineering, pp. 789–799. ACM (2018)
55. Xiong, Y., Wang, J., Yan, R., Zhang, J., Han, S., Huang, G., Zhang, L.: Precise condition synthesis for program repair. In: Proceedings of the 39th International Conference on Software Engineering, pp. 416–426. IEEE Press (2017)
56. Xuan, J., Martinez, M., Demarco, F., Clement, M., Marcote, S.L., Durieux, T., Le Berre, D., Monperrus, M.: Nopol: Automatic repair of conditional statement bugs in Java programs. IEEE Transactions on Software Engineering 43(1), 34–55 (2017)
57. Yuan, Y., Banzhaf, W.: ARJA: Automated repair of Java programs via multi-objective genetic programming. IEEE Transactions on Software Engineering (2018). https://doi.org/10.1109/TSE.2018.2874648
58. Yuan, Y., Xu, H., Wang, B., Yao, X.: A new dominance relation-based evolutionary algorithm for many-objective optimization. IEEE Transactions on Evolutionary Computation 20(1), 16–37 (2016)