Methods in Molecular Biology 2425
Emilio Benfenati Editor
In Silico Methods for Predicting Drug Toxicity Second Edition
METHODS IN MOLECULAR BIOLOGY
Series Editor John M. Walker School of Life and Medical Sciences University of Hertfordshire Hatfield, Hertfordshire, UK
For further volumes: http://www.springer.com/series/7651
For over 35 years, biological scientists have come to rely on the research protocols and methodologies in the critically acclaimed Methods in Molecular Biology series. The series was the first to introduce the step-by-step protocols approach that has become the standard in all biomedical protocol publishing. Each protocol is provided in readily reproducible step-by-step fashion, opening with an introductory overview and a list of the materials and reagents needed to complete the experiment, followed by a detailed procedure supported by a helpful notes section offering tips and tricks of the trade as well as troubleshooting advice. These hallmark features were introduced by series editor Dr. John Walker and constitute the key ingredient in each and every volume of the Methods in Molecular Biology series. Tested and trusted, comprehensive and reliable, all protocols from the series are indexed in PubMed.
In Silico Methods for Predicting Drug Toxicity Second Edition
Edited by
Emilio Benfenati Department of Environmental Health Sciences, Istituto di Ricerche Farmacologiche Mario Negri IRCCS, Milan, Italy
Editor Emilio Benfenati Department of Environmental Health Sciences Istituto di Ricerche Farmacologiche Mario Negri IRCCS Milan, Italy
ISSN 1064-3745 ISSN 1940-6029 (electronic) Methods in Molecular Biology ISBN 978-1-0716-1959-9 ISBN 978-1-0716-1960-5 (eBook) https://doi.org/10.1007/978-1-0716-1960-5 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Science+Business Media, LLC, part of Springer Nature 2016, 2022 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Humana imprint is published by the registered company Springer Science+Business Media, LLC, part of Springer Nature. The registered company address is: 1 New York Plaza, New York, NY 10004, U.S.A.
Preface

This book introduces novelties in the area of in silico models, with particular attention to pharmaceuticals. Indeed, more and more models are available, for more and more endpoints. Some years ago, we edited the first edition of this book. Compared to the first edition, the number of chapters has increased; half of them are totally new, and the others have been revised and updated. We extended the content, adding chapters on some commercial programs which are largely used for pharmaceuticals. This offers the reader the possibility to stay updated on both the methodology and the regulatory indications.

The evolution in the field is not only technological. Attention has moved toward the integrated use of data from multiple sources, such as in silico models and read-across. A recent document from the European Food Safety Authority has provided quite useful indications, which open new avenues to be applied in the pharmaceutical, cosmetics, food, and other areas. Indeed, the interest is in the use of all values obtained from multiple, heterogeneous, powerful, and novel methodologies. For this reason, several completely new chapters have been added to address both the methodologies and their integrated use. Another evolution is the necessary interest in areas which apparently are not related to human toxicity but concern the impact on the environment, which will in turn affect humans. The regulation on pharmaceuticals, and the drug companies as well, are addressing this aspect, and this is also reflected in the book.

A chapter introduces the theory in general, also highlighting new perspectives. Then, there are several sections covering specific areas. One section addresses pharmacokinetic approaches. The increasingly abundant data of a different nature, and how to exploit them, are addressed in another section. There are in silico models for multiple endpoints, described in separate chapters in a specific section. A section covers the platforms offering multiple in silico tools in a structured way. Finally, a section explores the challenges and perspectives from different points of view, also considering different users. The evolving vision, moving toward a more complex and integrated scenario, requires more and more attention to the explanation of the methods and the way to fully exploit them. Many new examples have been introduced, which will guide the reader through the correct procedure to take advantage of the in silico models, a field no longer restricted to a small circle of experts.

Milan, Italy
Emilio Benfenati
Contents

Preface . . . . . v
Contributors . . . . . xi

1  QSAR Methods . . . . . 1
   Giuseppina Gini

PART I  MODELING A PHARMACEUTICAL IN THE BODY

2  PBPK Modeling to Simulate the Fate of Compounds in Living Organisms . . . . . 29
   Frédéric Y. Bois, Cleo Tebby, and Céline Brochot
3  Pharmacokinetic Tools and Applications . . . . . 57
   Judith C. Madden and Courtney V. Thompson
4  In Silico Tools and Software to Predict ADMET of New Drug Candidates . . . . . 85
   Supratik Kar, Kunal Roy, and Jerzy Leszczynski

PART II  EXPLOITING TOXICITY DATA FOR MODELING PURPOSES

5  Development of In Silico Methods for Toxicity Prediction in Collaboration Between Academia and the Pharmaceutical Industry . . . . . 119
   Manuel Pastor, Ferran Sanz, and Frank Bringezu
6  Emerging Bioinformatics Methods and Resources in Drug Toxicology . . . . . 133
   Karine Audouze and Olivier Taboureau
PART III  THE USE OF IN SILICO MODELS FOR THE SPECIFIC ENDPOINTS

7  In Silico Prediction of Chemically Induced Mutagenicity: A Weight of Evidence Approach Integrating Information from QSAR Models and Read-Across Predictions . . . . . 149
   Enrico Mombelli, Giuseppa Raitano, and Emilio Benfenati
8  In Silico Methods for Chromosome Damage . . . . . 185
   Diego Baderna, Ilse Van Overmeire, Giovanna J. Lavado, Domenico Gadaleta, and Birgit Mertens
9  In Silico Methods for Carcinogenicity Assessment . . . . . 201
   Azadi Golbamaki, Emilio Benfenati, and Alessandra Roncaglioni
10 In Silico Models for Developmental Toxicity . . . . . 217
   Marco Marzo, Alessandra Roncaglioni, Sunil Kulkarni, Tara S. Barton-Maclaren, and Emilio Benfenati
11 In Silico Models for Repeated-Dose Toxicity (RDT): Prediction of the No Observed Adverse Effect Level (NOAEL) and Lowest Observed Adverse Effect Level (LOAEL) for Drugs . . . . . 241
   Fabiola Pizzo, Domenico Gadaleta, and Emilio Benfenati
12 In Silico Models for Predicting Acute Systemic Toxicity . . . . . 259
   Ivanka Tsakovska, Antonia Diukendjieva, and Andrew P. Worth
13 In Silico Models for Skin Sensitization and Irritation . . . . . 291
   Gianluca Selvestrel, Federica Robino, and Matteo Zanotti Russo
14 In Silico Models for Hepatotoxicity . . . . . 355
   Claire Ellison, Mark Hewitt, and Katarzyna Przybylak
15 Machine Learning Models for Predicting Liver Toxicity . . . . . 393
   Jie Liu, Wenjing Guo, Sugunadevi Sakkiah, Zuowei Ji, Gokhan Yavas, Wen Zou, Minjun Chen, Weida Tong, Tucker A. Patterson, and Huixiao Hong

PART IV  PLATFORMS FOR EVALUATING PHARMACEUTICALS

16 Implementation of In Silico Toxicology Protocols in Leadscope . . . . . 419
   Kevin Cross, Candice Johnson, and Glenn J. Myatt
17 Use of Lhasa Limited Products for the In Silico Prediction of Drug Toxicity . . . . . 435
   David J. Ponting, Michael J. Burns, Robert S. Foster, Rachel Hemingway, Grace Kocks, Donna S. MacMillan, Andrew L. Shannon-Little, Rachael E. Tennant, Jessica R. Tidmarsh, and David J. Yeo
18 Using VEGAHUB Within a Weight-of-Evidence Strategy . . . . . 479
   Serena Manganelli, Alessio Gamba, Erika Colombo, and Emilio Benfenati
19 MultiCASE Platform for In Silico Toxicology . . . . . 497
   Suman K. Chakravarti and Roustem D. Saiakhov

PART V  THE SCIENTIFIC AND SOCIETY CHALLENGES

20 Adverse Outcome Pathways as Versatile Tools in Liver Toxicity Testing . . . . . 521
   Emma Arnesdotter, Eva Gijbels, Bruna dos Santos Rodrigues, Vânia Vilas-Boas, and Mathieu Vinken
21 Use of In Silico Methods for Regulatory Toxicological Assessment of Pharmaceutical Impurities . . . . . 537
   Simona Kovarich and Claudia Ileana Cappelli
22 Computational Modeling of Mixture Toxicity . . . . . 561
   Mainak Chatterjee and Kunal Roy
23 In Silico Methods for Environmental Risk Assessment: Principles, Tiered Approaches, Applications, and Future Perspectives . . . . . 589
   Maria Chiara Astuto, Matteo R. Di Nicola, José V. Tarazona, A. Rortais, Yann Devos, A. K. Djien Liem, George E. N. Kass, Maria Bastaki, Reinhilde Schoonjans, Angelo Maggiore, Sandrine Charles, Aude Ratier, Christelle Lopes, Ophelia Gestin, Tobin Robinson, Antony Williams, Nynke Kramer, Edoardo Carnesecchi, and Jean-Lou C. M. Dorne
24 Increasing the Value of Data Within a Large Pharmaceutical Company Through In Silico Models . . . . . 637
   Alessandro Brigo, Doha Naga, and Wolfgang Muster

Index . . . . . 675
Contributors

EMMA ARNESDOTTER • Department of Pharmaceutical and Pharmacological Sciences, Entity of In Vitro Toxicology, Vrije Universiteit Brussel, Brussels, Belgium MARIA CHIARA ASTUTO • European Food Safety Authority, Parma, Italy KARINE AUDOUZE • Université de Paris, Inserm UMR S-1124, Paris, France DIEGO BADERNA • Istituto Di Ricerche Farmacologiche Mario Negri IRCCS, Milan, Lombardia, Italy TARA S. BARTON-MACLAREN • Existing Substances Risk Assessment Bureau, Health Canada, Ottawa, ON, Canada MARIA BASTAKI • European Food Safety Authority, Parma, Italy EMILIO BENFENATI • Department of Environmental Health Sciences, Laboratory of Environmental Chemistry and Toxicology, Istituto di Ricerche Farmacologiche Mario Negri IRCCS, Milan, Italy FRÉDÉRIC Y. BOIS • Simcyp Division, Certara UK Limited, Sheffield, UK ALESSANDRO BRIGO • Roche Pharma Research and Early Development, Pharmaceutical Sciences, Roche Innovation Centre Basel, Basel, Switzerland FRANK BRINGEZU • Chemical and Preclinical Safety, Merck Healthcare KGaA, Darmstadt, Germany CÉLINE BROCHOT • INERIS, Unit of Experimental Toxicology and Modelling, Verneuil en Halatte, France MICHAEL J. BURNS • Lhasa Limited, Leeds, UK CLAUDIA ILEANA CAPPELLI • S-IN Soluzioni Informatiche SRL, Vicenza, Italy EDOARDO CARNESECCHI • Institute for Risk Assessment Sciences (IRAS), Utrecht University, Utrecht, The Netherlands SUMAN K. CHAKRAVARTI • MultiCASE Inc, Beachwood, OH, USA SANDRINE CHARLES • University of Lyon, Lyon, France MAINAK CHATTERJEE • Drug Theoretics and Cheminformatics Laboratory, Department of Pharmaceutical Technology, Jadavpur University, Kolkata, India MINJUN CHEN • National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR, USA ERIKA COLOMBO • Istituto di Ricerche Farmacologiche Mario Negri IRCCS, Milan, Italy KEVIN CROSS • Instem, Columbus, OH, USA YANN DEVOS • European Food Safety Authority, Parma, Italy MATTEO R. DI NICOLA • Unit of Dermatology, IRCCS San Raffaele Hospital, Milan, Italy ANTONIA DIUKENDJIEVA • Institute of Biophysics and Biomedical Engineering, Bulgarian Academy of Sciences, Sofia, Bulgaria JEAN-LOU C. M. DORNE • European Food Safety Authority, Parma, Italy BRUNA DOS SANTOS RODRIGUES • Department of Pharmaceutical and Pharmacological Sciences, Entity of In Vitro Toxicology, Vrije Universiteit Brussel, Brussels, Belgium CLAIRE ELLISON • Human and Natural Sciences Directorate, School of Science, Engineering and Environment, University of Salford, Manchester, UK ROBERT S. FOSTER • Lhasa Limited, Leeds, UK DOMENICO GADALETA • Istituto Di Ricerche Farmacologiche Mario Negri IRCCS, Milan, Lombardia, Italy
ALESSIO GAMBA • Istituto di Ricerche Farmacologiche Mario Negri IRCCS, Milan, Italy OPHELIA GESTIN • University of Lyon, Lyon, France EVA GIJBELS • Department of Pharmaceutical and Pharmacological Sciences, Entity of In Vitro Toxicology, Vrije Universiteit Brussel, Brussels, Belgium GIUSEPPINA GINI • DEIB, Politecnico di Milano, Milan, Italy AZADI GOLBAMAKI • Istituto di Ricerche Farmacologiche Mario Negri IRCCS, Milan, Italy WENJING GUO • National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR, USA RACHEL HEMINGWAY • Lhasa Limited, Leeds, UK MARK HEWITT • School of Pharmacy, Faculty of Science and Engineering, University of Wolverhampton, Wolverhampton, UK HUIXIAO HONG • National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR, USA ZUOWEI JI • National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR, USA CANDICE JOHNSON • Instem, Columbus, OH, USA SUPRATIK KAR • Interdisciplinary Center for Nanotoxicity, Department of Chemistry, Physics and Atmospheric Sciences, Jackson State University, Jackson, MS, USA GEORGE E. N. KASS • European Food Safety Authority, Parma, Italy GRACE KOCKS • Lhasa Limited, Leeds, UK SIMONA KOVARICH • S-IN Soluzioni Informatiche SRL, Vicenza, Italy NYNKE KRAMER • Institute for Risk Assessment Sciences (IRAS), Utrecht University, Utrecht, The Netherlands SUNIL KULKARNI • Existing Substances Risk Assessment Bureau, Health Canada, Ottawa, ON, Canada GIOVANNA J. LAVADO • Istituto Di Ricerche Farmacologiche Mario Negri IRCCS, Milan, Lombardia, Italy JERZY LESZCZYNSKI • Interdisciplinary Center for Nanotoxicity, Department of Chemistry, Physics and Atmospheric Sciences, Jackson State University, Jackson, MS, USA A. K. DJIEN LIEM • European Food Safety Authority, Parma, Italy JIE LIU • National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR, USA CHRISTELLE LOPES • University of Lyon, Lyon, France DONNA S. 
MACMILLAN • Lhasa Limited, Leeds, UK JUDITH C. MADDEN • School of Pharmacy and Biomolecular Sciences, Liverpool John Moores University, Liverpool, UK ANGELO MAGGIORE • European Food Safety Authority, Parma, Italy SERENA MANGANELLI • Chemical Food Safety Group, Nestlé Research, Lausanne, Switzerland MARCO MARZO • Department of Environmental Health Sciences, Laboratory of Environmental Chemistry and Toxicology, Istituto di Ricerche Farmacologiche Mario Negri IRCCS, Milan, Italy BIRGIT MERTENS • SD Chemical and Physical Health Risks, Sciensano, Brussels, Belgium; Department of Pharmaceutical Sciences, Universiteit Antwerpen, Wilrijk, Belgium ENRICO MOMBELLI • INERIS-Institut National de l'Environnement Industriel et des Risques, Verneuil-en-Halatte, France WOLFGANG MUSTER • Roche Pharma Research and Early Development, Pharmaceutical Sciences, Roche Innovation Centre Basel, Basel, Switzerland GLENN J. MYATT • Instem, Columbus, OH, USA
DOHA NAGA • Roche Pharma Research and Early Development, Pharmaceutical Sciences, Roche Innovation Centre Basel, Basel, Switzerland; Department of Pharmaceutical Chemistry, Group of Pharmacoinformatics, University of Vienna, Wien, Austria MANUEL PASTOR • Research Programme on Biomedical Informatics (GRIB), Hospital del Mar Medical Research Institute (IMIM), Department of Experimental and Health Sciences, Universitat Pompeu Fabra, Barcelona, Spain TUCKER A. PATTERSON • National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR, USA FABIOLA PIZZO • FEED Unit, European Food Safety Authority, Parma, Italy DAVID J. PONTING • Lhasa Limited, Leeds, UK KATARZYNA PRZYBYLAK • Unilever Safety and Environmental Assurance Centre, Bedfordshire, UK GIUSEPPA RAITANO • Istituto di Ricerche Farmacologiche Mario Negri IRCCS, Laboratory of Environmental Chemistry and Toxicology, Milan, Italy AUDE RATIER • University of Lyon, Lyon, France FEDERICA ROBINO • Angel Consulting s.a.s, Milan, Italy TOBIN ROBINSON • European Food Safety Authority, Parma, Italy ALESSANDRA RONCAGLIONI • Department of Environmental Health Sciences, Laboratory of Environmental Chemistry and Toxicology, Istituto di Ricerche Farmacologiche Mario Negri IRCCS, Milan, Italy A. RORTAIS • European Food Safety Authority, Parma, Italy KUNAL ROY • Drug Theoretics and Cheminformatics Laboratory, Department of Pharmaceutical Technology, Jadavpur University, Kolkata, India MATTEO ZANOTTI RUSSO • Angel Consulting s.a.s, Milan, Italy ROUSTEM D. SAIAKHOV • MultiCASE Inc, Beachwood, OH, USA SUGUNADEVI SAKKIAH • National Center for Toxicological Research, U.S. 
Food and Drug Administration, Jefferson, AR, USA FERRAN SANZ • Research Programme on Biomedical Informatics (GRIB), Hospital del Mar Medical Research Institute (IMIM), Department of Experimental and Health Sciences, Universitat Pompeu Fabra, Barcelona, Spain REINHILDE SCHOONJANS • European Food Safety Authority, Parma, Italy GIANLUCA SELVESTREL • Laboratory of Environmental Chemistry and Toxicology, Environmental Health Department, Istituto di Ricerche Farmacologiche Mario Negri IRCCS, Milan, Italy ANDREW L. SHANNON-LITTLE • Lhasa Limited, Leeds, UK OLIVIER TABOUREAU • Université de Paris, Inserm U1133, CNRS UMR 8251, Paris, France JOSÉ V. TARAZONA • European Food Safety Authority, Parma, Italy CLEO TEBBY • INERIS, Unit of Experimental Toxicology and Modelling, Verneuil en Halatte, France RACHAEL E. TENNANT • Lhasa Limited, Leeds, UK COURTNEY V. THOMPSON • School of Pharmacy and Biomolecular Sciences, Liverpool John Moores University, Liverpool, UK JESSICA R. TIDMARSH • Lhasa Limited, Leeds, UK WEIDA TONG • National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR, USA IVANKA TSAKOVSKA • Institute of Biophysics and Biomedical Engineering, Bulgarian Academy of Sciences, Sofia, Bulgaria ILSE VAN OVERMEIRE • SD Chemical and Physical Health Risks, Sciensano, Brussels, Belgium
VÂNIA VILAS-BOAS • Department of Pharmaceutical and Pharmacological Sciences, Entity of In Vitro Toxicology, Vrije Universiteit Brussel, Brussels, Belgium MATHIEU VINKEN • Department of Pharmaceutical and Pharmacological Sciences, Entity of In Vitro Toxicology, Vrije Universiteit Brussel, Brussels, Belgium ANTONY WILLIAMS • Center for Computational Toxicology and Exposure, Office of Research and Development, U.S. Environmental Protection Agency (U.S. EPA), Research Triangle Park, NC, USA ANDREW P. WORTH • European Commission, Joint Research Centre (JRC), Ispra, Italy GOKHAN YAVAS • National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR, USA DAVID J. YEO • Lhasa Limited, Leeds, UK WEN ZOU • National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR, USA
Chapter 1

QSAR Methods

Giuseppina Gini

Abstract

This chapter introduces the basis of computational chemistry and discusses how computational methods have been extended from modeling physical properties to biological properties, and toxicity in particular. For about three decades, chemical experimentation has increasingly been replaced by modeling and virtual experimentation, using a large core of mathematics, chemistry, physics, and algorithms. Animal and wet experiments, aimed at providing a standardized result about a biological property, can be mimicked by modeling methods, globally called in silico methods, all characterized by deducing properties starting from the chemical structure. Two main streams of such models are available: models that consider the whole molecular structure to predict a value, namely QSAR (quantitative structure–activity relationships), and models that check relevant substructures to predict a class, namely SAR. The term in silico discovery is applied to chemical design, to computational toxicology, and to drug discovery. Virtual experiments confirm hypotheses, provide data for regulation, and help in designing new chemicals.

Key words Computer models, Chemometrics, Toxicity prediction, SAR, QSAR, Model interpretation
1 Starting from Chemistry

"All science is computer science." When a New York Times article in 2007 used this title, the general public became aware that the introduction of computers had changed the way experimental sciences are carried out. Chemistry and physics are the best examples of this new way of doing science. A new discipline, chemoinformatics, has been in existence for the past few decades [1, 2]. Many of the activities performed in chemoinformatics concern information retrieval [3], aimed at searching for new molecules of interest once a single molecule has been identified as relevant. However, chemoinformatics is more than "chemical information"; it requires strong algorithmic development. It is useful to remember that models of atoms were defined by analogy with different systems; Thomson in 1897 modeled the
Emilio Benfenati (ed.), In Silico Methods for Predicting Drug Toxicity, Methods in Molecular Biology, vol. 2425, https://doi.org/10.1007/978-1-0716-1960-5_1, © The Author(s), under exclusive license to Springer Science+Business Media, LLC, part of Springer Nature 2022
atom as a sphere of positive electricity with negative particles; Rutherford in 1909 adapted the solar system model, with a dense positively charged nucleus surrounded by negative electrons. Finally, in the 1920s, the electron cloud model was defined; in this model, an atom consists of a dense nucleus composed of protons and neutrons, surrounded by electrons. A molecule is an electrically neutral group of two or more atoms held together by covalent bonds, sharing electrons. The valence model naturally transforms a molecule into a graph, where the nodes are atoms and the edges are bonds. This graph representation is usually called the 2D chemical structure. Graph theory, whose basic definitions were established back in the eighteenth century, initially evolved through chemistry. Two scientists in particular, Alexander C. Brown and James J. Sylvester, developed the molecular representation as nodes (atoms, indicated by their name) and bonds. The edges are assigned weights according to the bond: single, double, triple, or aromatic, where electrons are delocalized. Today, hydrogen atoms are implicitly represented in the graph, since they are assumed to fill the unused valences [4]. A common representation of the graph is the adjacency matrix, a square matrix with dimension N equal to the number of atoms. Each position (i, j) in the matrix specifies the absence (value 0) or the presence of a bond connecting atoms i and j, filled with 1, 2, or 3 to indicate a single, double, or triple bond, 4 for an amide bond, and 5 for an aromatic bond. The diagonal elements are always zero. An example of a matrix representation is in Fig. 1. The Simplified Molecular Input Line Entry Specification (SMILES) [5] is a very popular string representation of a molecule. This string is formally defined in a context-free language and lists the bonds and atoms encountered in a depth-first visit of the chemical graph, using parentheses to enclose branches. Hydrogens are left out.
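As a minimal sketch in plain Python, the adjacency matrix just described can be built from a bond list; the atom numbering below is an assumption for illustration, matching the five-carbon graph of Fig. 1 (C1-C2-C3-C4 chain with the methyl carbon C5 attached to C2).

```python
# Convention from the text: 0 = no bond, 1/2/3 = single/double/triple bond,
# 4 = amide, 5 = aromatic; the diagonal is always zero.
bonds = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (1, 4, 1)]  # (atom i, atom j, bond order)
n = 5
adj = [[0] * n for _ in range(n)]
for i, j, order in bonds:
    adj[i][j] = order
    adj[j][i] = order  # symmetric: a bond between i and j is also one between j and i

for row in adj:
    print(row)
```

Row 2 of the matrix (atom C2) has three nonzero entries, reflecting its three heavy-atom neighbors.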
Table 1 shows examples of SMILES representations. The SMILES notation lacks a unique representation, since a molecule can be encoded beginning anywhere; in Table 1, four SMILES strings, all correct, represent ethanol. Therefore, a method of encoding a molecule was quickly developed that provides an invariant representation, called canonical SMILES [6]. A more recent development in chemical notations is the InChI (International Chemical Identifier) code, supported by the International Union of Pure and Applied Chemistry (IUPAC), which can uniquely describe a molecule at different levels of detail but is not intended for human readability [7].

Fig. 1 Adjacency matrix of 2-methylbutane pictured after hydrogen elimination; only the five carbon atoms are considered and numbered

Table 1 Examples of molecules with their SMILES code (the Graph column of structure drawings is not reproduced)

  SMILES                    Name           Formula
  CC                        Ethane         CH3CH3
  C=O, O=C                  Formaldehyde   CH2O
  CCO, OCC, C(C)O, C(O)C    Ethanol        CH3CH2OH

What about the real shape of molecules? They are 3D objects, and as such they should be represented. Considering the 2-methylbutane molecule, illustrated as a simple drawing in Fig. 1, its formula, SMILES, and 3D conformation are illustrated in Fig. 2a–c. Defining the 3D shape of a molecule is a basic topic of computational chemistry.
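The depth-first construction of a SMILES string from the chemical graph, with parentheses enclosing branches, can be sketched in a few lines of plain Python; hydrogens are implicit, all bonds here are single, and the atom numbering and neighbor order are assumptions chosen for illustration.

```python
# Toy depth-first SMILES writer: visit the graph from a starting atom,
# wrapping every side branch in parentheses and continuing the main chain last.
def dfs_smiles(graph, symbols, atom, parent=None):
    out = symbols[atom]
    children = [n for n in graph[atom] if n != parent]
    for child in children[:-1]:
        out += "(" + dfs_smiles(graph, symbols, child, atom) + ")"  # side branch
    if children:
        out += dfs_smiles(graph, symbols, children[-1], atom)       # main chain
    return out

# 2-methylbutane: C1-C2-C3-C4 chain, methyl C5 on C2 (numbering assumed)
graph = {0: [1], 1: [0, 4, 2], 2: [1, 3], 3: [2], 4: [1]}
symbols = ["C"] * 5
print(dfs_smiles(graph, symbols, 0))  # -> CC(C)CC
```

Starting the visit at a different atom, or listing neighbors in a different order, yields a different but equally valid SMILES string, which is exactly why canonicalization is needed.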
2 Computational Chemistry

Computational chemistry is a branch of chemistry that uses computers to assist in solving chemical problems, studying the
Fig. 2 The 2-methylbutane molecule: (a) chemical formula, (b) SMILES, (c) 3D conformer (from NIH PubChem)
electronic structure of molecules, and designing new materials and drugs. It uses the results of theoretical chemistry, incorporated into programs, to calculate the structures and properties of molecules. The methods cover both static and dynamic situations, ranging from accurate ab initio methods to less accurate semi-empirical methods. It all happened in the second half of the last century:

1. In the early 1950s, the first semi-empirical atomic orbital calculations were performed.
2. The first ab initio calculations were run in 1956 at MIT.
3. The Nobel Prize in Chemistry was assigned in 1998 to John Pople and Walter Kohn for computational chemistry.
4. The Nobel Prize in Chemistry was assigned in 2013 to Martin Karplus, Michael Levitt, and Arieh Warshel for their development of multiscale models for complex chemical systems.

Computational chemistry is a way to move away from the traditional approach of solving scientific problems using only direct experimentation, but it does not remove experimentation. Experiments produce new data and facts. The role of theory is to situate all the new data within a framework, based on mathematical rules. Computational chemistry uses theories to produce new facts in a manner similar to real experiments. It is now possible to simulate an experiment in the computer before running it. In modeling chemical processes, two variables are important, namely time and temperature. It is necessary to make dynamic simulations and to model the force fields that exist between atoms and explain how bonds break. This task usually requires solving the quantum mechanics equations.
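The dynamic simulation just mentioned boils down to repeatedly evaluating forces and advancing positions and velocities by a small time step. A schematic sketch, using the velocity-Verlet integration scheme on a single particle in an assumed harmonic "bond" potential (toy parameters, not any real force field or chemistry package):

```python
import math

def velocity_verlet(x, v, force, m=1.0, dt=0.01, steps=1000):
    """One-particle toy MD: compute the force, advance the position,
    recompute the force, then advance the velocity, at each time step."""
    a = force(x) / m
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt   # advance position
        a_new = force(x) / m                 # force at the new position
        v = v + 0.5 * (a + a_new) * dt       # advance velocity
        a = a_new
    return x, v

# Harmonic restoring force F = -k*x (assumed spring constant k)
k, m = 1.0, 1.0
steps = int(2 * math.pi / 0.01)  # integrate over one oscillation period
x_final, v_final = velocity_verlet(x=1.0, v=0.0, force=lambda x: -k * x, m=m, dt=0.01, steps=steps)
# after one full period the particle returns close to x = 1 with v close to 0
```

The same loop structure, with far more expensive force evaluations over thousands of atoms, underlies the molecular dynamics technique described below; the scheme also approximately conserves the total energy, which is one reason this family of integrators is used in practice.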
A hierarchy of simulation levels provides different levels of detail. Methods that study the fundamental properties without introducing empirical parameters are the so-called ab initio methods. These computations consider the electronic and structural properties of the molecule at absolute zero temperature. They are computationally expensive, so the size of the molecules is limited to a few hundred atoms. When the ab initio methods cannot be used, it is possible to introduce empirical parameters, giving the so-called molecular dynamics methods [8]. Today, the applications of quantum mechanics to chemistry are widely used. The most notable is the Gaussian software (https://gaussian.com/), developed at Carnegie Mellon University in Pittsburgh. This program gained great popularity after the 1998 Nobel Prize in Chemistry was assigned to Pople, one of its inventors.

2.1 Molecular Simulation
Using computers to calculate the intermolecular forces, it is possible to compute a detailed “history” of the molecules. Analyzing this history, by the methods of statistical mechanics, affords a detailed description of the behavior of matter [8]. Three techniques are available: 1. Molecular Dynamics (MD) simulation. In this technique, the forces between molecules are calculated explicitly, and the motion of the molecules is computed using a numerical integration method. The starting conditions are the positions of the atoms (taken from crystal structure) and their velocities (randomly generated). Following Newton’s equations, from the initial positions, velocities, and forces, it is possible to calculate the positions and velocities of the atoms at a small time interval later. From these new positions, the forces are recalculated and another step in time made. Following an equilibration period of many thousands of time steps, during which the system “settles down” to the desired temperature and pressure, a production period begins where the history of the molecules is stored for later analysis. 2. Monte Carlo (MC) simulation. Monte Carlo simulation resembles the Molecular Dynamics method in that it also generates a history of the molecules in a system, which is subsequently used to calculate the bulk properties of the system by means of statistical mechanics. However, the procedure for moving the atoms employs small random moves used in conjunction with a sampling algorithm to confine the random walk to thermodynamically meaningful configurations. 3. Molecular Mechanics (MM) modeling. MM is a method for predicting the structures of complex molecules, based on the energy minimization of the potential energy function, obtained empirically, by experiment, or by the methods of quantum
chemistry. The energy minimization method uses advanced algorithms to optimize the speed of convergence; its main advantage is its computational cheapness. One of these methods is needed to optimize the 3D structure of a molecule before constructing models that use the 3D shape instead of the graph representation.

2.2 Biological Effects of Chemicals and Toxicology
Since the nineteenth century, the practice of animal experimentation has been established in physiology, microbiology, and surgery. The explosion of molecular biology in the second half of the twentieth century increased the importance of in vivo models [9]. All models have their limitations: their predictions can be poor, and their transferability to the real phenomena they model can be unsatisfactory. Extrapolating data from animal models to the environment or to human health therefore depends on the degree to which the animal model is an appropriate reflection of the condition under investigation. These limitations are, however, an intrinsic part of all modeling approaches. Most of the questions about animal models are more ethical than scientific; in public health, the use of animal models is imposed by regulations, and it is unlikely that any health authority will approve novel drugs without supporting animal data.
3 Bioassays for Toxicity

Toxicity is the degree to which a substance can damage an organism. Toxicity is a property of concern for every chemical substance. Theophrastus Phillipus von Hohenheim (1493–1541), known as Paracelsus, wrote: "All things are poison and nothing is without poison; only the dose makes a thing not a poison." The relationship between the dose and its effects on the exposed organism is of high significance in toxicology. The process of using animal testing to assess the toxicity of chemicals has been defined in the following ways:

- Toxicity can be measured by its effects on the target.
- Because individuals have different levels of response to the same dose of a toxin, a population-level measure of toxicity is often used, which relates the probabilities of an outcome for a given individual in a population. An example is the LD50: the dose that causes the death of 50% of the population.
- Once the dose is determined, "safety factors" are defined. For example, if a dose is safe for a rat, one might assume that a small fraction of that dose would be safe for a human, applying a safety factor of 100, for instance.
This process is based on assumptions that are usually very crude, and it presents many open issues. For instance, it is more difficult to determine the toxicity of chemical mixtures (gasoline, cigarette smoke, waste), since the percentages of the chemicals can vary and the combination of their effects is not exactly a summation. Perhaps the most common continuous measure of biological activity is the log(IC50) (inhibitory concentration), which measures the concentration of a particular compound necessary to induce a 50% inhibition of the biological activity under investigation. Similarly, the median lethal dose, LD50, is the dose required to kill half the members of a tested population after a specified test duration. LD50 is not the lethal dose for all subjects, only for half of them [10]. The dose–response relationship describes the change in effect on an organism caused by differing doses of a chemical after a certain exposure time. A dose–response curve is an x–y graph relating the dose to the response of the organism.

- The measured dose is plotted on the X axis, and the response is plotted on the Y axis. The response may be a physiological or biochemical response.
- LD50 is used in human toxicology; IC50 (inhibition concentration) and its dual EC50 (effect concentration) are used in pharmacology.

Usually, the logarithm of the dose is plotted on the X axis, and in such cases the curve is typically sigmoidal, with the steepest portion in the middle. Figure 3 shows an example of the dose–response curve for LD50. Today, in vitro testing is also available: the scientific analysis of the effects of a chemical on cultured bacteria or mammalian cells. Experiments using in vitro systems are useful in the early phases of medical studies, where the screening of large numbers of potential therapeutic candidates may be necessary, or for fast tests for possible pollutants. In vitro systems are nonphysiological and have important limitations, as their results correlate poorly with in vivo results. However, there are substantial advantages in using in vitro systems to advance mechanistic understanding of toxicant activities, and in the use of human cells to define human-specific toxic effects. Both in vivo and in vitro methods use the chemical substance itself; in the initial steps of designing new drugs or materials, the chemical substance is only designed, not yet available. Methods that need only the chemical structure are therefore needed; they are collectively named in silico.
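The sigmoidal dose–response relationship described above can be sketched numerically. The following minimal model, logistic in log-dose, is purely illustrative; the Hill slope parameter and the example LD50 value are invented, not measured data:

```python
def response(dose, ld50, hill=1.0):
    """Percent of a population responding at a given dose (illustrative Hill model)."""
    return 100.0 / (1.0 + (ld50 / dose) ** hill)

# By construction, at the LD50 exactly half of the population responds,
# and the response grows monotonically with the dose.
print(response(300.0, ld50=300.0))  # 50.0
```

Plotted against log(dose), this function gives the sigmoid of Fig. 3, with its steepest portion around the LD50.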
Fig. 3 A curve for log LD50. (Source: http://www.dropdata.org/RPU/pesticide_activity.htm)
4 In Silico Methods

Animal testing refers to the use of nonhuman animals in experiments. Worldwide, it is estimated that the number of vertebrate animals used annually for experiments is in the order of tens of millions. In toxicity, animal tests are called in vivo models; they give doses for some species and are used to extrapolate data to human health or to the environment. As said above, the extrapolation of data from species to species is not obvious; for instance, the lethal doses for rats and mice are sometimes very different. How to construct a model that relates a chemical structure to an effect was investigated even before computers were available. The term in silico today covers the methods devoted to this end; it refers to the fact that computers, whose hardware is based on silicon, are used. The best-known in silico methods are the (Q)SARs, (quantitative) structure–activity relationship methods, based on the premise that the molecular structure is responsible for all the activities, and aimed at finding the dose or the substructure responsible for the activity [11–13]. QSAR is based on the concept of similarity, as similar molecules are expected to have similar effects, and learns this relationship from many examples. Read across is another in silico method, strictly based on finding the most similar molecule and adapting its experimental value to the molecule under investigation. Figure 4 illustrates the in vivo, in vitro, and in silico methods as a whole.
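Read across hinges on quantifying similarity between molecules. For the binary fingerprints discussed later in this chapter, a common similarity measure is the Tanimoto coefficient; a minimal sketch (the bit vectors are invented for illustration):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity of two binary fingerprints: |A and B| / |A or B| over the set bits."""
    on_a = {i for i, bit in enumerate(fp_a) if bit}
    on_b = {i for i, bit in enumerate(fp_b) if bit}
    shared = len(on_a & on_b)
    return shared / (len(on_a) + len(on_b) - shared)

print(tanimoto([1, 1, 0, 1], [1, 0, 0, 1]))  # 2/3: two shared bits out of three set overall
```

In a read across workflow, the molecule with the highest similarity to the target would be selected and its experimental value adapted.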
4.1 QSAR
Quantitative structure–activity relationship (QSAR) models seek to correlate a particular response variable of interest with molecular descriptors that have been computed or even measured from the
Fig. 4 The methods available for testing molecules. After chemical synthesis, in vivo and in vitro methods produce data to compute the standardized effect. Without chemical synthesis, molecular dynamics studies the 3D structures, (Q)SAR predicts the properties, and read across translates the properties of a molecule to a similar one
molecules themselves. QSAR methods were pioneered by Corwin Hansch [14] in the 1960s, who analyzed congeneric series of compounds and formulated the QSAR equation (Eq. 1):

log 1/C = a·π + b·σ + c·Es + const    (1)

where C = effect concentration, π = octanol–water partition coefficient term, σ = Hammett substituent constant (electronic), and Es = Taft's substituent constant.

Log P, the octanol–water partition coefficient [15, 16], is the ratio of the concentrations of a compound in the two phases of a mixture of two immiscible solvents at equilibrium. It is a measure of the difference in solubility of the compound in these two solvents. Normally one of the solvents is water, while the second is hydrophobic, such as octanol. With a high octanol/water partition coefficient, the chemical substance is hydrophobic and preferentially distributes to hydrophobic compartments such as cell membranes, while hydrophilic substances are found in hydrophilic compartments such as blood serum. LogP values today are often predicted. The definitions of the chemical structure and of the function remain a challenge today, but relating structure to property is widely adopted in drug discovery and in risk assessment.
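Equation 1 is linear in the substituent constants, so a prediction is simply a weighted sum. A sketch with invented coefficients (a, b, c, const) and example substituent values, purely for illustration:

```python
def hansch_log_inv_c(pi, sigma, es, a=1.0, b=0.5, c=0.3, const=2.0):
    """Eq. 1: log(1/C) = a*pi + b*sigma + c*Es + const (all coefficients illustrative)."""
    return a * pi + b * sigma + c * es + const

# A more hydrophobic substituent (larger pi) raises the predicted potency log(1/C).
print(hansch_log_inv_c(pi=1.98, sigma=0.23, es=-1.24))  # 3.723
```

In a real Hansch analysis, the coefficients would be fitted by regression on a congeneric series with measured effect concentrations.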
Sometimes, the QSAR methods take a more specific name, such as QSPR (quantitative structure–property relationship), when used for physicochemical properties such as boiling point, solubility, or logP. They all correlate a dependent variable (the effect, or response) with a set of independent variables (usually calculated properties, or descriptors). They are statistical models and can be applied to predict the responses of unseen data points without using the real chemical, even before the chemical has been synthesized.

4.1.1 Molecular Descriptors
The generation of informative data from molecular structures is of high importance in chemoinformatics, since it is often used in statistical analyses of the molecules. There are many possible approaches to calculating molecular descriptors [17] that represent local or global salient characteristics of the molecule. Different classes of descriptors are:

- Constitutional descriptors, which depend on the number and type of atoms, bonds, and functional groups.
- Geometrical descriptors, which consider molecular surface area and volume, moments of inertia, shadow area projections, and gravitational indices.
- Topological indices, based on the topology of the molecular graph [4]. Only the structural information is used in generating the description. Examples are the Wiener index (the sum of the number of bonds between all nodes in a molecular graph) and the Randic index (a measure of the branching of a molecule).
- Physicochemical descriptors, which estimate the physical properties of molecules. Examples are water solubility and partition coefficients, such as logP.
- Electrostatic descriptors, such as partial atomic charges and others depending on the possibility of forming hydrogen bonds.
- Quantum chemical descriptors, related to the molecular orbitals and their properties.
- Fingerprints. Since subgraph isomorphism (substructure searching in chemoinformatics) in large molecular databases is quite often time consuming, substructure screening was developed as a rapid method of filtering out those molecules that definitely do not contain the substructure of interest. The method used is structure-key fingerprints: the fingerprint is a binary string encoding a molecule, where a 1 or 0 in a position means that the substructure at this position in the dictionary is present or not. The dictionaries are designed according to knowledge of the chemical entities that can be of interest for the task at hand.
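Of the classes above, topological indices are the easiest to compute directly from the molecular graph. A sketch of the Wiener index (the sum of bond-count distances over all atom pairs), evaluated on a hydrogen-suppressed n-butane graph; the adjacency-list encoding is an assumption for illustration:

```python
from collections import deque

def wiener_index(adj):
    """Wiener index: sum of shortest-path bond counts over all atom pairs."""
    total = 0
    for src in adj:
        # breadth-first search gives bond-count distances from src to every atom
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2  # each unordered pair was counted twice

# n-butane as a path graph C0-C1-C2-C3
butane = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(wiener_index(butane))  # 10
```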
Another useful distinction considers the descriptors in a dimension space. 1D descriptors are atom and bond counts and molecular weight. 2D descriptors are derived from molecular topology
and fragment counts. 3D descriptors are calculated by quantum chemistry programs on the optimized 3D structure. Usually, 3D descriptors are not used in tasks that require rapid screening, because their computation can be quite expensive.

4.1.2 Model Construction of QSAR
The first step in model construction is the preparation of a dataset containing the available experimental data on the selected endpoint. This step is very demanding, as few datasets found in the literature are complete and validated. The QSAR literature often presents models developed on the few good-quality datasets available, but in most real applications data should be gathered from the primary sources (laboratory studies) and checked to be coherent, of good quality, and really comparable [1, 18].

The second step is computing and selecting the chemical descriptors. In the early days of QSAR, only a few descriptors, a priori considered crucial, were used. Today, hundreds or thousands of 2D descriptors are easily computed; it is often preferred to compute a large number of them and then select the few that are most relevant, using simple methods such as the build-up method (adding one at a time), the build-down method (removing one at a time), or optimization methods for variable selection.

Whatever algorithm is then chosen to develop predictive models, it is important to take heed of the model quality statistics and to ensure a correct modeling methodology is used, to avoid overfitting or underfitting the training set. Numerous model statistics are available that can indicate whether new data points, for which responses are to be predicted, can be applied to the model [19].

Two types of supervised learning models are of interest: classification and regression. Classification methods assign the target molecule to two or more classes, most frequently either biologically active or inactive. Regression methods use continuous data, such as a measured biological response variable, to correlate molecules with that data, so as to predict a continuous numeric value for new and unseen molecules using the generated model.
Another distinction is between similarity-based methods, which tend to cluster similar molecules, and feature-based methods, which use a set of molecular features (such as the descriptors) to build the classification function. A plethora of algorithms and methods is available for modeling, including statistical regression methods (linear discriminant analysis, partial least squares, and multiple linear regression) and machine learning methods (naive Bayes classifiers, decision trees, recursive partitioning, random forests, artificial neural networks, and support vector machines).
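The build-up selection mentioned above needs a criterion for deciding which descriptor to add first. A simple (and deliberately simplistic) sketch ranks candidate descriptors by their absolute Pearson correlation with the response; the descriptor names and toy values are invented:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rank_descriptors(descriptors, response):
    """Build-up ordering: descriptors sorted by |correlation| with the response."""
    scores = {name: abs(pearson(vals, response)) for name, vals in descriptors.items()}
    return sorted(scores, key=scores.get, reverse=True)

# toy data: logP tracks the response, molecular weight behaves like noise
desc = {"logP": [1.0, 2.0, 3.0, 4.0], "MW": [120.0, 95.0, 130.0, 100.0]}
y = [0.9, 2.1, 2.9, 4.2]
print(rank_descriptors(desc, y))  # ['logP', 'MW']
```

Real variable selection would additionally re-fit the model at each step and check for redundancy among the chosen descriptors.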
In the last two decades, many methods derived from artificial intelligence have become the methods of choice in QSAR building [20], as they are nonparametric and allow finding modeling functions without a priori decisions. Some more details of the most recent methods for model construction are given in Subheading 5.

4.1.3 Model Performance Measures
In many cases, published QSAR models implement the leave-one-out cross-validation procedure and compute the cross-validated determination coefficient R², called q². If y_pi and y_i are the predicted and observed property values, and y_pi^m and y_i^m are respectively the average values of the predicted and observed property values, the determination coefficient is defined as in Eq. 2:

R² = 1 − Σ(y_pi − y_i)² / Σ(y_i − y_i^m)²    (2)

A high value of q² (greater than 0.5) is considered an indicator, or even the ultimate proof, that the model is predictive. A high q² is a necessary condition for a model to have high predictive power; however, it is not a sufficient condition. In general, the procedure used is to divide the available data set into two parts, one used for training and the rest (often 20% of the data) used for testing only. If the q² values on the training and test sets are similar and high, the model is assumed to be predictive. From a statistical point of view, other parameters are more informative, and k-fold cross-validation is more consistent than the statistics on the testing sets [21].

Similar considerations apply to classifiers. Usually, the basic measures are the numbers of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) molecules in the training and test sets. From those numbers, important measures such as sensitivity and specificity are computed (Eq. 3):

Sensitivity = TP / (TP + FN);  Specificity = TN / (TN + FP)    (3)

A graphical view of sensitivity and specificity is the receiver operating characteristic (ROC) curve, while a more informative parameter that does not depend on class imbalance is the Matthews correlation coefficient (MCC) (Eq. 4), which returns a value between −1 and +1; only positive values indicate that the model is better than random guessing:

MCC = (TP·TN − FP·FN) / √((TN + FP)(TN + FN)(TP + FP)(TP + FN))    (4)
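Equations 2–4 translate directly into code. A minimal sketch with toy predictions and confusion-matrix counts (the numbers are invented for illustration):

```python
import math

def q_squared(y_pred, y_obs):
    """Eq. 2: determination coefficient, 1 - SS_res / SS_tot."""
    mean_obs = sum(y_obs) / len(y_obs)
    ss_res = sum((p - o) ** 2 for p, o in zip(y_pred, y_obs))
    ss_tot = sum((o - mean_obs) ** 2 for o in y_obs)
    return 1.0 - ss_res / ss_tot

def classifier_stats(tp, tn, fp, fn):
    """Eqs. 3-4: sensitivity, specificity, and Matthews correlation coefficient."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tn + fp) * (tn + fn) * (tp + fp) * (tp + fn))
    return sens, spec, mcc

print(q_squared([1.1, 1.9, 3.2], [1.0, 2.0, 3.0]))       # 0.97
print(classifier_stats(tp=40, tn=45, fp=5, fn=10))
```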
Regardless of the absolute values, we have to remember that the statistical performance of any model is related to the uncertainty and variability of the original data used to build the model.
4.1.4 Model Interpretation
Model interpretation is important: people aim at understanding models in terms of known basic principles. A low number of descriptors and a clear role for each in a simple linear equation have often been considered necessary to accept a QSAR result. However, if the main aim of QSAR is prediction, the attention should be focused on the quality of the model, not on its interpretation. Moreover, it is dangerous to attempt to interpret statistical models, since correlation does not imply causality.

It is common to differentiate predictive QSARs, focused on prediction accuracy, from descriptive QSARs, focused on interpretability [22]. There is a trade-off between prediction quality and interpretation quality. Interpretable models are generally desired in situations where the model has to provide information about the problem domain and how to navigate through chemistry space, allowing the medicinal chemist to make informed decisions. However, these models tend to suffer in terms of prediction quality as they become more interpretable; the reverse is true for predictive models, whose interpretation suffers as they become more predictive. Models that are highly predictive may use molecular descriptors that are not readily interpretable, as they are selected by algorithms to optimize accuracy, or are expressed as implicit equations, as in the case of neural networks. Highly predictive models can therefore be used as high-throughput models. If interpretability is of concern, other methods are available, such as expert systems or SAR.

This view of the "two QSARs" is somewhat obsolete, as indicated by Polishchuk [22], who discusses how the concept of interpretation of QSAR models has evolved. Interpretation is intended as the retrieval of useful knowledge from a model. QSARs usually adopted the paradigm "model → descriptors → (structure)," that is, strictly based on the interpretability of the descriptors. Instead, the "model → structure" paradigm can be more general and useful.
Fingerprint descriptors are often used, and they are directly related to the presence of substructures. Even when noninterpretable descriptors are used, any model can be made interpretable by estimating the contributions of the molecule's substructures. As a consequence, it is not necessary to use information about the machine learning method or the descriptors employed. Livingstone [13] thus concludes about the interpretability of QSAR models: "The need for interpretability depends on the application, since a validated mathematical model relating a target property to chemical features may, in some cases, be all accurate estimates of the chemicals activity."

Both SAR and QSAR are predictive statistical models, and as such they suffer from the problems of statistical learning theory, the mathematics behind any inductive method that tries to gain knowledge from a set of data. It is well known that it is always possible to find a function that fits the data. However, such a function could be
very bad at predicting new data, in particular if the data are noisy. Among the many functions that can accomplish the task of inducing a model, performance and simplicity characterize the most appreciated models. While performance is measured by accounting for the errors in prediction, simplicity has no unique definition; usually it means using few free parameters or descriptors. The limitations of all inductive methods are mathematically expressed by the "no free lunch" theorems, demonstrated by Wolpert and Macready [23], which state that any two models are equivalent when their performance is averaged across all possible problems. The practical indication from these theorems is that, without assumptions on the phenomenon to study, the best algorithm does not exist. In other words, data cannot replace knowledge; the priors used explain the success in getting the prediction. Priors can derive from available knowledge expressed by the designer and given to the model, as in the case of relevant chemical descriptors, or can be automatically extracted by looking at the structure of the data, as in the case of deep learning, presented in Subheading 5.

4.2 SAR
SAR (structure–activity relationship) methods typically make use of rules created by experts to produce models that relate subgroups of the molecule's atoms to a biological property. The SAR approach thus consists in detecting particular structural fragments of a molecule that are already known to be responsible for the toxic property under investigation. In the mutagenicity/carcinogenicity domain, the key contribution to the definition of such toxicophores comes from Ashby [24], who compiled a list of 19 structural alerts (SAs) for DNA reactivity. Practically, SAs are rules that state the condition of mutagenicity by the presence or absence of particular chemical substructures. SAs are sound hypotheses that derive from chemical and biological properties and have a sort of mechanistic interpretation; however, their presence alone is not a definitive proof of the property under investigation, since in some cases the substituents present are able to change the classification. The search for SAs is very common for properties such as genotoxicity and carcinogenicity, where studies and implementations are available [25]; the extraction of SAs for other properties is seldom done.

A few examples exist of the automatic construction of such SAR systems, which use graph-mining approaches to mine large datasets for frequent substructures. An automatic and freely available method for SA extraction is SARpy (SAR in python), a data miner that automatically generates SAR models by finding the relevant fragments; it extracts a set of rules directly from data expressed in the SMILES notation, without any a priori knowledge [26]. Given a training set of molecular structures with their experimental activity labels, SARpy generates
every substructure in the set and mines correlations between the incidence of a particular molecular substructure and the activity/inactivity of the molecules that contain it. This is done in three steps. First, a recursive algorithm considers every combination of bond breakages and computes every substructure of the molecular input set. Then each substructure is validated as a potential SA on the training set, to assess its predictive power. Finally, the best substructures are kept as rules of the kind "IF the chemical contains <substructure> THEN <activity>." Moreover, the user can ask SARpy to also generate rules related to nontoxic substances and use them to better assign molecules to the nontoxic class. The extracted rule set becomes a SAR model, applied to the target molecule for prediction; the model tags it as toxic when one or more SAs are present, as nontoxic if no SAs are found and rules of nontoxicity are present, and as dubious in the other cases.
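The final rule-application step can be sketched as follows. This is NOT SARpy itself: real structural alerts are matched by subgraph isomorphism on the molecular graph, whereas this toy uses a naive SMILES substring test, and the alert and nontoxicity fragments are merely illustrative:

```python
# Illustrative rule sets (hypothetical, not Ashby's alerts):
TOXIC_ALERTS = ["N(=O)=O", "N=N"]   # e.g. nitro- and azo-like fragments
NONTOXIC_RULES = ["C(=O)O"]         # e.g. a carboxylic-acid-like "safe" fragment

def classify(smiles):
    """Apply toxic alerts first, then nontoxicity rules; otherwise the call is dubious."""
    if any(alert in smiles for alert in TOXIC_ALERTS):
        return "toxic"
    if any(rule in smiles for rule in NONTOXIC_RULES):
        return "nontoxic"
    return "dubious"

print(classify("c1ccccc1N(=O)=O"))  # toxic (matches the nitro-like alert)
```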
5 New Trends in Building QSAR and SAR Models

The initial QSAR paradigm considered only congeneric compounds, under the hypotheses that:

- Compounds in the series are closely related.
- The same mode of action is assumed.
- Basic biological activities are investigated.
- Linear relations are constructed.
Recently there has been an important change. Today, QSAR models are generated using a wide variety of statistical and machine learning methods, datasets containing a large variety of chemicals, and a large choice of molecular descriptors. QSARs may be simple equations or express nonlinear relationships between descriptor values and the property. These changes have been necessary to cope with:

- Heterogeneous compound sets.
- Mixed modes of action.
- Complex biological endpoints.
- Large numbers of properties.
Alongside new techniques for model building, new problems also emerged.

1. As more models are available for the same dataset, they can be used collectively to improve performance; this requires defining the methods and the properties of ensemble systems.

2. With larger datasets available for many properties of pharmaceutical or environmental interest, more rapid methods to deal
with big data, in particular deep learning methods, became of interest in the QSAR community.

3. As a consequence of the large adoption of machine learning and deep learning methods, computing and extracting features has changed, as methods able to automatically compute features show performance similar to, or even better than, methods using precomputed molecular descriptors.

5.1 Integrated and Ensemble Models
The development of computer programs able to contain, in explicit form, the knowledge about some domain was the basis of the development of "expert systems" in the 1970s [11]. Soon, expert systems moved from the initial rule-based representation to modern modeling and interpretation systems. The nascent machine learning community developed a way to make use of data in the absence of knowledge, which led to the development of inductive trees, well exemplified by C4.5 [27] and later by the commercial system CART.

Using different representations to reach a common agreement or a problem solution led to the idea of using computationally different methods on different problem representations, so as to exploit their relative strengths. Examples are the hybrid neural and symbolic learning systems, and the neuro-fuzzy systems that combine connectionist and symbolic features in the form of fuzzy rules. While the neural representation offers the advantages of homogeneity, distribution, and parallelization and is effective with incomplete and noisy data, the symbolic representation brings the advantages of human interpretation and knowledge abstraction [28].

Independently, a similar evolution in the pattern recognition community proposed to combine classifiers. In this area, most of the intuitions started with a seminal work about bagging classifiers [29], which opened the way to ensemble systems. Combining the predictions of a set of classifiers has been shown to be an effective way to create composite classifiers that are more accurate than any of the component classifiers. There are many methods for combining the predictions given by component classifiers, such as voting, combination, ensemble, and mixture of experts [30]. Finally, it is possible to use a sequential approach, training the final classifier on the outputs of the input classifiers used as new features. In QSAR, often only simple combinations, called consensus models, are used.

Exploiting ensembles will offer many more ways of improving and comparing QSAR predictions [31]. Why ensembles work and why they outperform single classifiers can be explained by considering the error of a classifier, usually expressed [32] as in Eq. 5:
Exploiting ensembles will offer many more ways of improving and comparing QSAR predictions [31]. Why ensembles work and why they outperform single classifiers can be explained considering the error in classifiers. Usually the error is expressed [32] as in Eq. 5: Error ¼ noise þ bias2 þ variance
ð5Þ
Fig. 5 The error function for different complexities of the model
where the bias is the expected error of the classifier due to the fact that the classifier is not perfect, the variance is the expected error due to the particular training set used, and the noise is irreducible. Models with too few parameters can perform poorly, but the same applies to models with too many parameters. A model that is too simple, or too inflexible, will have a large bias; a model that has too much flexibility will have a high variance. Usually, the bias is a decreasing function of the complexity of the model, while the variance is an increasing function of the complexity, as illustrated in Fig. 5. The concepts of bias and variance help in understanding and reducing the error: the predictor should be insensitive to the noise in the training data, to reduce variance, but flexible enough to approximate the model function, and so minimize bias. There is a trade-off between the two components of the error, and balancing them is an important part of the error reduction strategy.

Integrating different models and using a stepwise strategy is today a practical way of assessing chemicals. In this case, both global QSAR models and local read across strategies [33] are combined in a weight-of-evidence approach [34].

5.2 Big Data and Deep Learning
Applying artificial intelligence methods to difficult datasets, where complex and nonlinear relationships are expected, was tested at the beginning of this century on carcinogenicity data in the international Predictive Toxicology Challenge. The review of about 100 submitted models highlighted appreciable results in a few models that, for a difficult endpoint such as carcinogenicity,
performed significantly better than random guessing and were judged by experts to have empirically learned a small but significant amount of toxicological knowledge [35].

With the availability of large datasets, in 2014 the Tox21 challenge organizers invited participants to build computational models to predict the toxicity of compounds for 12 toxic effects, using data from high-throughput screening assays. The training set consisted of about 12,000 chemicals; not all the effects were available for each chemical. The winning model [36] adopted fingerprints and deep neural networks (DNNs), combined with other machine learning algorithms.

DNNs are neural networks, so they implicitly define a mapping between an input (the chemical) and an output (the toxicity). They differ from classical neural networks in that they use a large number of neurons, most of them hidden neurons (coding neither the input values nor the output), organized in layers through weighted connections [37]. Weights are learned on the connections to adjust the output of the network iteratively during training. The number of hidden layers determines the complexity of the function that the network can approximate; one hidden layer is enough to approximate any nonlinear function. DNNs can have thousands of neurons and tens of hidden layers, to approximate complex functions and to extract any information from the input. Training such large networks, where hundreds or thousands of parameters must be optimized, is now possible through free packages [38] that automatically try different combinations, and through special hardware such as GPUs, easily available as a service or for installation in personal computers.

DNNs are seldom used in QSAR but are widely applied in chemical computations and de novo design. While the full process of drug design cannot be automated, DNNs have recently been used to shorten to 48 days the process of de novo design of antimicrobials.
The authors report using deep learning classifiers and molecular dynamics simulations to screen, identify, synthesize, and test 20 candidates, of which two with high potential were tested in vitro and in mice [39].

5.3 Modeling Without Descriptors
DNNs can learn the regularities in the data by exploiting any information; collinearity and redundancy in the input data do not create problems (differently from what holds in statistical methods) but improve the learning. This characteristic of DNNs makes them suited to a new way of developing QSARs: the full molecular representation can be given to the input neurons, and features are automatically extracted from the input data, without the need to compute and select chemical descriptors.

Using DNNs as representation extractors was pioneered in image analysis, where the results of a particular kind of DNN, the convolutional neural network (CNN), surpass the results of applying classical
algorithms to statistical features, and even surpass the human ability to categorize images. CNNs store an image in the input neurons and apply the same process to small pieces of it: convolving with a simple window and pooling to reduce the input image. This process continues until the reduced number of remaining neurons encodes the important image features; subsequent layers use them to perform the recognition. In (Q)SAR, this method has been applied to images representing the chemical graph [40, 41].

DNNs have also found applications in text processing, where the problem is to create and exploit the semantic connections among words. Recurrent neural networks (RNNs) have the ability to maintain for a given time the dependences among words. In (Q)SAR, this method can be applied to SMILES; various models developed on SMILES or on modified SMILES are available [42–44].

Finally, DNNs can take as input the standard representation of graphs, i.e., the adjacency matrix, plus basic information about the atoms and the kinds of bonds. The graph convolutional network (GCN) is the architecture of reference; it integrates convolutions and recurrent connections to extract subgraphs from the chemical graph [45].

An important consequence of the feature extraction property of DNNs is that the important features, which are sub-images, sub-strings, or sub-graphs, are readily available and can be compared with existing knowledge. The abovementioned DNN-based QSARs have provided an analysis of their results in terms of the paradigm "model → structure"; this analysis shows good agreement with the available knowledge and opens up fresh avenues to explore whether the newly found features characterize yet undiscovered properties.
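The mapping that such a network defines can be made concrete with a toy fully connected forward pass. The weights below are arbitrary and the network is far too small to be useful; real DNNs learn their weights from data and use many more units and layers:

```python
def relu(x):
    """Standard rectified-linear activation."""
    return max(0.0, x)

def forward(x, layers):
    """Forward pass of a toy fully connected network; each layer is (weights, biases)."""
    for weights, biases in layers:
        x = [relu(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

# two inputs -> hidden layer of two neurons -> single output (illustrative weights)
layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),
    ([[1.0, 1.0]], [0.0]),
]
print(forward([2.0, 1.0], layers))  # [2.5]
```

Training would iteratively adjust the weights so that, over many input/output examples, the network's outputs approach the observed activities.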
6 Moving QSARs from Laboratory to Regulations

Toxicity testing typically involves studying adverse health outcomes in animals administered doses of toxicants, with subsequent extrapolation to expected human responses. The system is expensive, time-consuming, and low-throughput, and often provides results of limited predictive value for human health. Toxicity testing methods are largely the same for industrial chemicals, pesticides, and drugs, and have led to a backlog of tens of thousands of chemicals to which humans are potentially exposed but whose potential toxicity remains largely unknown. Today more than 170 million chemical substances are registered in the CAS registry (www.cas.org), and between 75,000 and 140,000 of them are on the market [46]. Experimental toxicity data are available for only a small fraction of them. According to Johnson et al. [46], aquatic toxicity data are available for 11%, bioconcentration data for 1%, and persistence data for 0.2% of the chemical substances marketed in the European Union (EU) and the United States (USA).
20
Giuseppina Gini
This potential risk has urged national and international organizations to make plans for assessing the toxicity of those chemicals, for both human and environmental safety. In the USA, for instance, the Environmental Protection Agency (EPA) applies the Toxic Substances Control Act (TSCA) and routinely uses predictive QSARs based on existing animal test data to authorize new chemicals. Recently in the USA, a new toxicity testing plan, the "Human Toxome Project," has been launched; it will carry out extensive experimentation using predictive, high-throughput cell-based assays (of human organs) to evaluate perturbations in key pathways of toxicity. There is no consensus on this concept of "toxicity pathway," which in the opinion of many should instead be "disruption of biological pathways." The target of the project is to gain more information directly from human data, so as to later check, with specific experiments, the most important pathways. In the EU, the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) regulation for industrial chemicals has been introduced, together with specific regulations for cosmetics, pesticides, and food additives. REACH accepts, still with restrictions, QSAR models as well as read-across. Different regulations apply for different uses of chemicals or for different compartments, such as air pollutants, industrial products, food, drinking water, cosmetics and detergents, pesticides, and drugs. In all those cases, there is only limited international agreement on the regulations and doses. Europe is proceeding with the concept of "One Substance, One Assessment," in order to harmonize the different regulations. Many open issues still make it difficult to accept QSARs for assessing toxicology. One of the main problems is the difficulty of making a statistical model suitable for indicating causal links.
Even though QSAR interpretation can identify the most important phenomena involved, this interpretation is far from establishing cause–effect relationships. This difficulty is partly overcome when a full path of the foreseen transformations of the chemical in the interacting organism is constructed; recent work on modeling pathways and adverse outcome pathways (AOPs) attempts to build such an explanation. While the acceptance of (Q)SARs is debated on the claim that they cannot substitute for experimental practice, ethical issues, in particular the concern about making experiments on animals, are urging (Q)SAR adoption. These two conflicting trends are briefly described below.

6.1 Statistical Models and Causal Reasoning
QSARs are both statistical models that exploit variable correlations and knowledge-based models that discover how patterns in the input data explain the output pattern [47]. These correlations are at the basis of the explanations provided by QSAR models, which assign importance to functional subgroups and warn about possible holes in the applicability domain.
Learning causal relationships from raw data has been on philosophers' wish lists since the eighteenth century. Hume argued that causality cannot be perceived and that we can only perceive correlation. And indeed the basic biological experiments aim at finding a correlation (positive or negative) between some features and an effect. Humans understand causation to be a relation between events in which the presence of some events causes the presence of others. Human inference of causal relationships relies primarily on universal cues, such as spatiotemporal contingency or reliable covariation between effects and their causes, as well as on domain-specific knowledge. Accordingly, most theories of causation invoke an explicit requirement that a cause precedes its effect in time. Yet temporal information alone cannot distinguish genuine causation from spurious associations. Causation is a transitive, nonreflexive, and antisymmetric relation. That is:
1. If A is a cause of B and B is a cause of C, then A is also a cause of C.
2. An event A cannot cause itself.
3. If A is a cause of B, then B is not a cause of A.
Computational models of causality, which combine networks, Bayesian probability, and Markov assumptions, as in Causal Bayes Networks [48], are still computationally hard and do not yet have applications in real toxicology. A simulation approach is more feasible, as adopted in recent studies on the Adverse Outcome Pathway (AOP), aimed at identifying the workflow from the molecular initiating event to the final observed outcome. It integrates other terms often used in explaining toxicology, such as "mode of action" or "mechanistic interpretation." Unfortunately, there is no unique AOP for a class of chemicals, as different biological pathways are usually observed or supposed.

6.2 Ethical Issues
Toxicity testing typically involves studying adverse health outcomes in animals subjected to high doses of toxicants, with subsequent extrapolation to expected human responses at lower doses. The system is expensive, time-consuming, and low-throughput, and often provides results of limited predictive value for human health. Meanwhile, each year a huge number of new substances are synthesized and possibly sent to the market. Is it really necessary to test all of them on animals? Even more, is it necessary to synthesize them at all, or would it be better to assess their properties in silico before making them, using a proactive strategy?
The Declaration of Bologna, in 1999, invoking the 3Rs (reduce, refine, replace), proposed a manifesto to develop alternative methods that could save millions of animals. In this scenario, however, ethical arguments are also advanced by the authorities that have to protect humans, who regard the use of animals as more ethical than the use of humans. The stakeholders in toxicity assessment are:
- Scientists and producers: they want to model the process and discover properties; in other words, to build knowledge and translate it rapidly into products and drugs.
- Regulators and standardization organizations: they want to be convinced by some general rule (mechanism of action); in other words, to reduce the risk of erroneous evaluations, and to be fast and conservative in decision-making.
- The public, media, and opinion makers: they want to be protected against risk at 100%; part of the population is strongly against the use of animal models.
There are many models, many techniques to build (Q)SAR models, and also many reasons to build a model. The intended use of a model can greatly affect its development. In new chemical and drug design, the target is to identify active compounds, and thus to avoid false positives. Final users and regulators, instead, have the opposite need: they want to reliably flag toxic compounds, so they want to avoid false negatives. Sensitivity and specificity can each, in turn, be the most important parameter. Good, validated in silico models can benefit multiple actors. In any case, the output of the model, despite its good predictive value, is not sufficient; documentation enabling the user to accept or reject the prediction is necessary.

6.3 Towards a Larger Acceptance of QSAR Models
Today, when experts are asked to assess the properties of a chemical substance, they prefer to test it experimentally. If testing is not possible or allowed for all the properties of interest, they prefer to find the results of similar substances and adapt them to the target chemical. This read-across method makes it essential that the similarity measure adopted is well suited to the problem domain; the definition of a proper similarity measure is still challenging [49]. The common perception is that a predictive model cannot match observation. This topic has been central in the philosophy of science; here, however, a pragmatic approach is necessary. In practice, can QSARs be used to improve expert evaluation? In the public sector, asking experts to provide advice for decision-making is common. For instance, the EPA asks panels of experts to assess substances for which data are incomplete or contradictory. The experts have to express their opinions on the probability that a certain outcome will happen. Those judgments
are subjective and are often expressed through words (such as "likely") whose meaning in terms of probability changes from expert to expert. Morgan [50] analyzed the causes of expert bias, while Benfenati et al. [51] provided an example of how far expert answers differ when experts are asked to assess the toxicity of some chemicals. The use of QSARs is valuable here, as it provides an expert-independent prediction, which can be a common basis for expert judgment. Weak points of QSARs are still present and should be considered. The first weakness is that most QSARs are developed on small data sets (hundreds of chemicals), so it is hard to believe that they can correctly cover the whole chemical space; this problem is tackled by checking whether the target molecule is in the applicability domain. Another weakness is that the answer of a QSAR is a label, or a small interval of values, but this is not the probability that the molecule will really be toxic at that level. The sensitivity and specificity values depend on the training and test sets, not on the proportion of positive and negative molecules in the environment. For this reason, other analyses are needed, and they are often carried out considering exposure.
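This last point can be made concrete with Bayes' rule: the same sensitivity and specificity yield very different probabilities that a flagged molecule is truly toxic, depending on how common toxicants are in the screened population. The numbers below are purely illustrative.

```python
# Sketch: positive predictive value (probability that a positive
# prediction is truly toxic) from sensitivity, specificity, and the
# prevalence of toxicants in the screened population (Bayes' rule).
# All numbers are illustrative.

def positive_predictive_value(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same model (80% sensitive, 90% specific) at two prevalences:
ppv_balanced = positive_predictive_value(0.80, 0.90, 0.50)  # ~0.89
ppv_rare = positive_predictive_value(0.80, 0.90, 0.01)      # ~0.07
```

A model validated on a balanced test set can thus look reassuring while most of its positive calls are false when toxicants are rare, which is why exposure-based analyses complement the raw prediction.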
7 Conclusions

Alongside classical methods such as in vivo and in vitro experiments, computational tools are gaining more and more interest in the scientific community. Computational tools provide simulations, offer ways to speed up molecular design, offer predictions, and to some extent explanations of mechanisms. QSARs derive their methods from chemometrics and physical simulations, combining them with statistics to construct predictive systems. QSARs span from highly predictive methods, which usually employ complex, nonlinear correlations, to explanatory QSARs focused on simple interpretability. The use of QSAR models is growing, since they provide the only affordable method to screen large numbers of chemicals.
References

1. Brown N (2009) Chemoinformatics—an introduction for computer scientists. ACM Comput Surv 41(2):8. https://doi.org/10.1145/1459352.1459353
2. Gasteiger J, Engel T (2003) Chemoinformatics: a textbook. Wiley-VCH, Weinheim, Germany. ISBN 978-3-527-30681-7
3. Martin YC, Kofron JL, Traphagen LM (2002) Do structurally similar molecules have similar biological activity? J Med Chem 45(19):4350–4358. https://doi.org/10.1021/jm020155c
4. Balaban AT (1985) Applications of graph theory in chemistry. J Chem Inf Comput Sci 25:334–343. https://doi.org/10.1021/ci00047a033
5. Weininger D (1988) SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules. J Chem Inf Comput Sci 28:31–36. https://doi.org/10.1021/ci00057a005
6. Weininger D, Weininger A, Weininger JL (1989) SMILES. 2. Algorithm for generation of unique SMILES notation. J Chem Inf Comput Sci 29:97–101. https://doi.org/10.1021/ci00062a008
7. Adam D (2002) Chemists synthesize a single naming system. Nature 417:369. https://doi.org/10.1038/417369a
8. Schlick T (2002) Molecular modeling and simulation: an interdisciplinary guide. Springer-Verlag, New York. ISBN 978-1-4419-6351-2
9. Chow PHK, Ng RTH, Ogden BE (2008) Using animal models in biomedical research, 1st edn. World Scientific. https://doi.org/10.1142/6454
10. Balazs T (1970) Measurement of acute toxicity. In: Methods in toxicology. Blackwell Scientific Publications, Oxford and Edinburgh
11. Benfenati E, Gini G (1997) Computational predictive programs (expert systems) in toxicology. Toxicology 119:213–225. https://doi.org/10.1016/s0300-483x(97)03631-7
12. Hartung T (2009) Toxicology for the twenty-first century. Nature 460(9):208–212. https://doi.org/10.1038/460208a
13. Livingstone DJ (2000) The characterization of chemical structures using molecular properties. A survey. J Chem Inf Comput Sci 40:195–209. https://doi.org/10.1021/ci990162i
14. Hansch C, Maloney PP, Fujita T, Muir RM (1962) Correlation of biological activity of phenoxyacetic acids with Hammett substituent constants and partition coefficients. Nature 194:178–180. https://doi.org/10.1038/194178b0
15. Ghose AK, Crippen GM (1986) Atomic physicochemical parameters for three-dimensional structure directed quantitative structure-activity relationships. I. Partition coefficients as a measure of hydrophobicity. J Comput Chem 7:565–577. https://doi.org/10.1021/ci00053a005
16. Kubinyi H (2002) From narcosis to hyperspace: the history of QSAR. Quant Struct Act Relat 21:348–356. https://doi.org/10.1002/1521-3838(200210)21:43.0.CO;2-D
17. Karelson M (2000) Molecular descriptors in QSAR/QSPR. Wiley-VCH, Weinheim, Germany. ISBN 978-0-471-35168-9
18. Gadaleta D, Lombardo A, Toma C, Benfenati E (2018) A new semi-automated workflow for chemical data retrieval and quality checking for modeling applications. J Cheminform 10(60):1–13. https://doi.org/10.1186/s13321-018-0315-6
19. Hastie T, Tibshirani R, Friedman J (2001) The elements of statistical learning: data mining, inference, and prediction. Springer-Verlag, New York, NY. ISBN 978-0-387-84858-7
20. Gini G, Katritzky A (1999) Predictive toxicology of chemicals: experiences and impact of artificial intelligence tools. In: Proc AAAI spring symposium on predictive toxicology, report SS-99-01. AAAI Press, Menlo Park, CA. ISBN 978-1-57735-073-6
21. Héberger K, Rácz A, Bajusz D (2017) Which performance parameters are best suited to assess the predictive ability of models? In: Roy K (ed) Advances in QSAR modeling. Springer International. ISBN 978-3-319-56850-8
22. Polishchuk PG (2017) Interpretation of quantitative structure-activity relationship models: past, present and future. J Chem Inf Model 57(11):2618–2639. https://doi.org/10.1021/acs.jcim.7b00274
23. Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1:67. https://doi.org/10.1109/4235.585893
24. Ashby J (1985) Fundamental structural alerts to potential carcinogenicity or noncarcinogenicity. Environ Mutagen 7:919–921. https://doi.org/10.1002/em.2860070613
25. Benigni R, Bossa C (2008) Structure alerts for carcinogenicity, and the Salmonella assay system: a novel insight through the chemical relational databases technology. Mutat Res 659(3):248–261. https://doi.org/10.1016/j.mrrev.2008.05.003
26. Ferrari T, Cattaneo D, Gini G, Golbamaki N, Manganaro A, Benfenati E (2013) Automatic knowledge extraction from chemical structures: the case of mutagenicity prediction. SAR QSAR Environ Res 24(5):365–383. https://doi.org/10.1080/1062936X.2013.773376
27. Quinlan JR (1993) C4.5: programs for machine learning. Morgan Kaufmann, San Francisco, CA. https://doi.org/10.1007/BF00993309
28. Neagu C-D, Gini G (2003) Neuro-fuzzy knowledge integration applied to toxicity prediction. In: Jain R, Abraham A, Faucher C, Jan van der Zwaag B (eds) Innovations in knowledge engineering. Advanced Knowledge International Pty Ltd, Magill, South Australia, pp 311–342. ISBN 0-9751004-0-8
29. Breiman L (1996) Bagging predictors. Mach Learn 24(2):123–140. https://doi.org/10.1023/A:1018054314350
30. Polikar R (2006) Ensemble based systems in decision making. IEEE Circ Syst Mag 6:21–45. https://doi.org/10.1016/S0893-6080(05)80023-1
31. Gini G, Garg T, Stefanelli M (2009) Ensembling regression models to improve their predictivity: a case study in QSAR (quantitative structure activity relationships) with computational chemometrics. Appl Artif Intell 23:261–281. https://doi.org/10.1080/08839510802700847
32. Friedman J (1997) On bias, variance, 0/1 loss and the curse of dimensionality. Data Mining Knowl Discov 1:55–77. https://doi.org/10.1023/A:1009778005914
33. Gini G, Franchi AM, Manganaro A, Golbamaki A, Benfenati E (2014) ToxRead: a tool to assist in read across and its use to assess mutagenicity of chemicals. SAR QSAR Environ Res 25(12):1–13. https://doi.org/10.1080/1062936X.2014.976267
34. Benfenati E, Chaudhry Q, Gini G, Dorne JL (2019) Integrating in silico models and read-across methods for predicting toxicity of chemicals: a step-wise strategy. Environ Int 131:105060. https://doi.org/10.1016/j.envint.2019.105060
35. Toivonen H, Srinivasan A, King RD, Kramer S, Helma C (2003) Statistical evaluation of the predictive toxicology challenge 2000–2001. Bioinformatics 19(10):1183–1193. https://doi.org/10.1093/bioinformatics/btg130
36. Mayr A, Klambauer G, Unterthiner T, Hochreiter S (2016) DeepTox: toxicity prediction using deep learning. Front Environ Sci 3:80. https://doi.org/10.3389/fenvs.2015.00080
37. LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521:436–444. https://doi.org/10.1038/nature14539
38. Kingma DP, Ba J (2015) Adam: a method for stochastic optimization. In: Proceedings of the 3rd International Conference on Learning Representations (ICLR). arXiv:1412.6980
39. Das P, Sercu T, Wadhawan K, Padhi I, Gehrmann S, Cipcigan F, Chenthamarakshan V, Strobelt H, dos Santos C, Chen P-Y, Yang YY, Tan JPK, Hedrick J, Crain J, Mojsilovic A (2021) Accelerated antimicrobial discovery via deep generative models and molecular dynamics simulations. Nat Biomed Eng 5:613–623. https://doi.org/10.1038/s41551-021-00689-x
40. Goh G, Siegel C, Vishnu A, Hodas NO, Baker N (2017) Chemception: a deep neural network with minimal chemistry knowledge matches the performance of expert-developed QSAR/QSPR models. arXiv:1706.06689
41. Gini G, Zanoli F (2020) Machine learning and deep learning methods in ecotoxicological QSAR modeling. In: Roy K (ed) Ecotoxicological QSARs. Humana Press, Springer, New York, pp 111–149. ISBN 978-1-0716-0150-1
42. Goh G, Hodas N, Siegel C, Vishnu A (2018) SMILES2vec: an interpretable general-purpose deep neural network for predicting chemical properties. arXiv:1712.02034
43. Gini G, Zanoli F, Gamba A, Raitano G, Benfenati E (2019) Could deep learning in neural networks improve the QSAR models? SAR QSAR Environ Res 30(9):617–642. https://doi.org/10.1080/1062936X.2019.1650827
44. Chakravarti SK, Radha Mani AS (2019) Descriptor free QSAR modeling using deep learning with long short-term memory neural networks. Front Artif Intell 2:17. https://doi.org/10.3389/frai.2019.00017
45. Gini G, Hung C, Benfenati E (2021) Big data and deep learning: extracting and revising chemical knowledge from data. In: Basak S, Vracko M (eds) Big data analytics in chemoinformatics and bioinformatics (with applications to computer-aided drug design, cancer biology, emerging pathogens and computational toxicology). Elsevier, Amsterdam
46. Johnson AC, Jin X, Nakada N, Sumpter JP (2020) Learning from the past and considering the future of chemicals in the environment. Science 367:384–387. https://doi.org/10.1126/science.aay6637
47. Gini G (2018) QSAR: what else? In: Nicolotti O (ed) Computational toxicology: methods and protocols, vol 1800. Springer, Clifton, NJ, pp 79–105. ISBN 978-1-4939-7899-1
48. Pearl J (2003) Statistics and causal inference: a review. Test J 12:281–345. https://doi.org/10.1007/BF02595718
49. Gini G (2020) The QSAR similarity principle in the deep learning era: confirmation or revision? Found Chem 22:383–402. https://doi.org/10.1007/s10698-020-09380-6
50. Morgan MG (2014) Use (and abuse) of expert elicitation in support of decision making for public policy. PNAS 111(20):7176–7184. https://doi.org/10.1073/pnas.1319946111
51. Benfenati E, Belli M, Borges T, Casimiro E, Cester J, Fernandez A, Gini G, Honma M, Kinzl M, Knauf R, Manganaro A, Mombelli E, Petoumenou MI, Paparella M, Paris P, Raitano G (2016) Results of a round-robin exercise on read-across. SAR QSAR Environ Res 27(5):371–384. https://doi.org/10.1080/1062936X.2016.1178171
Part I Modeling a Pharmaceutical in the Body
Chapter 2

PBPK Modeling to Simulate the Fate of Compounds in Living Organisms

Frédéric Y. Bois, Cleo Tebby, and Céline Brochot

Abstract

Pharmacokinetics studies the fate of xenobiotics in a living organism. Physiologically based pharmacokinetic (PBPK) models provide realistic descriptions of xenobiotics' absorption, distribution, metabolism, and excretion processes. They model the body as a set of homogeneous compartments representing organs, and their parameters refer to anatomical, physiological, biochemical, and physicochemical entities. They offer a quantitative mechanistic framework to understand and simulate the time-course of the concentration of a substance in various organs and body fluids. These models are well suited for performing the extrapolations inherent to toxicology and pharmacology (e.g., between species or doses) and for integrating data obtained from various sources (e.g., in vitro or in vivo experiments, structure–activity models). In this chapter, we describe the practical development and basic use of a PBPK model, from model building to model simulations, through implementation with an easily accessible free software.

Key words: 1,3-Butadiene, PBPK, Monte Carlo simulations, Numerical integration, R software
1 Introduction

The therapeutic or toxic effects of chemical substances depend not only on interactions with biomolecules at the cellular level but also on the amount of the active substance reaching target cells (i.e., where the effects arise). Conceptually, therefore, two phases can be distinguished in the time course of such effects: the absorption, transport, and elimination of substances into, within, and out of the body, including target tissues (pharmacokinetics), and their action on these targets (pharmacodynamics). Schematically, pharmacokinetics (or toxicokinetics for toxic molecules) can be defined as the action of the body on substances, and pharmacodynamics as the action of substances on the body. Pharmacokinetics and pharmacodynamics first aim at a qualitative understanding of the underlying biology. They also use mathematical models to analyze and extrapolate measurements of various biomarkers of exposure,
Emilio Benfenati (ed.), In Silico Methods for Predicting Drug Toxicity, Methods in Molecular Biology, vol. 2425, https://doi.org/10.1007/978-1-0716-1960-5_2, © The Author(s), under exclusive license to Springer Science+Business Media, LLC, part of Springer Nature 2022
susceptibility, or effect, in order to quantitatively predict effects. This chapter focuses on toxicokinetic models, and in particular on physiologically based pharmacokinetic (PBPK) models. Toxicokinetic models aim to link an external exposure to an internal dosimetry in humans (e.g., concentration in blood, urine, or tissues) by describing the processes of absorption, distribution, metabolism, and excretion (ADME) that a substance undergoes in living organisms. One class of toxicokinetic models, the physiologically based pharmacokinetic (PBPK) models, bases the description of the ADME processes on the physiology and anatomy of individuals and the biochemistry of the compounds. A PBPK model subdivides the body into compartments representing organs connected through a fluid, usually blood. Model parameters correspond to physiological and biochemical entities specific to the body and compounds, such as organ volumes, tissue blood flows, affinities of the compounds for the tissues, or the metabolic clearance. The first works in pharmacokinetic modeling were based on physiological descriptions of the body [1–6]. However, at the time, the corresponding mathematical models were too complex to be solved. Research and applications then focused on simpler one-, two-, or three-compartment models [7], which proved adequate for describing and interpolating concentration–time profiles of many drugs in blood and other biological matrices. However, for substances with complex kinetics, or when interspecies extrapolations were required, simple models were insufficient and research continued on physiological models [8–12]. Over the years, ever-increasing computing capabilities and the advent of statistical approaches applicable to uncertainty and population variability modeling have turned PBPK models into well-developed tools for the safety assessment of chemical substances [13].
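The simplest of those classical compartmental models can be written in closed form: after an intravenous bolus dose D into an apparent volume V, a one-compartment model predicts a mono-exponential decline of the blood concentration. The sketch below uses illustrative parameter values, not data for any specific drug.

```python
# Sketch of the classical one-compartment model: after an IV bolus
# dose D into apparent volume V, concentration follows
# C(t) = (D / V) * exp(-k * t). Parameter values are illustrative.
import math

def one_compartment_iv(dose, volume, k_elim, t):
    """Blood concentration at time t after an IV bolus."""
    return (dose / volume) * math.exp(-k_elim * t)

c0 = one_compartment_iv(100.0, 5.0, 0.1, 0.0)  # initial concentration D/V
t_half = math.log(2) / 0.1                     # elimination half-life
```

PBPK models generalize this idea by replacing the single abstract compartment with physiologically meaningful organs, as described in the following sections.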
A significant advance has been the development of quantitative structure–property models for the chemical-dependent parameters of PBPK models (e.g., tissue affinities) [14, 15]. Those developments are still ongoing and have led to large generic models which can give quick, even if approximate, answers to pharmacokinetic questions solely on the basis of a chemical's formula and limited data [16–18]. The mechanistic basis of PBPK models is particularly well adapted to toxicological risk assessment [19, 20] and to the development of new therapeutic substances in the pharmaceutical industry [21], in particular for dealing with the extrapolations inherent to these domains (in vitro to in vivo, laboratory animals to human populations, various exposure or dosing schemes, etc.). PBPK models can be applied in two different steps of the risk assessment framework. First, these models can be used to better characterize the relationship between the exposure dose and the adverse effects, by modeling the internal exposure in the target tissues (i.e., where the toxic effects arise) [22]. Secondly, PBPK
models can be used in exposure assessment to estimate the external exposure from human biomonitoring data, such as the concentrations of chemicals in blood or urine [23, 24]. These predictions can then be compared to existing exposure guidance or reference values such as tolerable daily intakes [25]. To provide a general overview of the basis and applications of PBPK modeling, the first section of this chapter describes the development of a PBPK model (model formulation and parameter estimation). We then illustrate the different steps with 1,3-butadiene, a volatile organic compound that is carcinogenic to humans (Group 1 in the IARC classification).
2 Development of a PBPK Model

In this section, we present the steps to follow in developing a PBPK model. Recently, international groups have provided guidance on the characterization and application of PBPK models in risk assessment [19, 26]. Their aim is to propose a standardized framework to review and critically evaluate the available toxicological data and to describe thoroughly the development of the model, i.e., its structure, equations, parameter estimation, and model evaluation and validation. The ICRP framework also aimed to harmonize good modeling practices between risk assessors and model developers [27–29].
2.1 Principles and Model Equations
A PBPK model represents the organism of interest—human, rat, mouse, etc.—as a set of compartments, each corresponding to an organ, group of organs, or tissues (e.g., adipose tissue, bone, brain, gut) having similar blood perfusion rates (or permeability) and affinity for the substance of interest. Transport of molecules between those compartments by blood, lymph, or diffusion, and further absorption, distribution, metabolism, or excretion (ADME) processes, are described by mathematical equations (formally, differential equations) whose structure is governed by physiology (e.g., blood exiting the gut flows to the liver) [30, 31]. As such, PBPK modeling is an integrated approach to understanding and predicting the pharmacokinetic behavior of chemical substances in the body. Drug distribution into a tissue can be rate-limited by either perfusion or permeability. Perfusion rate-limited kinetics apply when the tissue membranes present no barrier to diffusion. Blood flow, assuming that the drug is transported mainly by blood, as is often the case, is then the limiting factor for distribution into the various cells of the body. That is usually true for small lipophilic drugs. A simple perfusion-limited PBPK model is depicted in Fig. 1. It includes the liver, well-perfused tissues (lumping brain, kidneys, and other viscera), poorly perfused tissues (muscles and skin), and fat. The organs have been grouped into those
Fig. 1 Schematic representation of a simple, perfusion limited, PBPK model. The model equations are detailed in Subheading 2 of the text
compartments under criteria of blood perfusion rate and lipid content. Under such criteria, the liver should be lumped with the well-perfused tissues, but it is kept separate here because it is the supposed site of metabolism, a target effect site, and a port of entry for oral absorption (assuming that the gut is a passive absorption site which feeds into the liver via the portal vein). Bone can be excluded from the model if the substance of interest does not distribute to it. The substance is brought to each of these compartments via arterial blood. Under perfusion limitation, the instantaneous rate of entry of the quantity of drug into a compartment is simply equal to the blood volumetric flow rate through the organ times the incoming blood concentration. At the organ exit, the substance's venous blood concentration is assumed to be in equilibrium with the compartment concentration, with an equilibrium ratio named the "partition coefficient" or "affinity constant" [30]. In the following, we will denote by Qi the quantity of substance in compartment i, Ci the corresponding concentration, Vi the volume of compartment i, Fi the blood flow to that compartment, and Pi the corresponding tissue over blood partition coefficient. Note that all differentials are written for quantities rather than concentrations, because molecules are what is transported. Arguably, they are proportional to
differentials for concentrations, but only if volumes are constant (and they may not be). For consistency, we strongly suggest you work with quantities. The rate of change of the quantity of substance in the poorly perfused compartment, for example, can therefore be described by the following differential equation:

$$\frac{\partial Q_{pp}}{\partial t} = F_{pp}\left(C_{art} - \frac{Q_{pp}}{P_{pp}V_{pp}}\right) \qquad (1)$$

where Qpp is the quantity of substance at any given time in the poorly perfused compartment, Fpp the blood volumetric flow rate through that group of organs, Cart the substance's arterial blood concentration, Ppp the poorly perfused tissues over blood partition coefficient, and Vpp the volume of the poorly perfused compartment. Since the kinetics of Qpp are governed by a differential equation, it is one of the so-called "state variables" of the model. The tissue over blood partition coefficient Ppp measures the relative affinity of the substance for the tissue compared to blood. It is easy to check that at equilibrium:

$$\frac{\partial Q_{pp}}{\partial t} = 0 \;\Rightarrow\; C_{art} - \frac{Q_{pp}}{P_{pp}V_{pp}} = 0 \;\Rightarrow\; P_{pp} = \frac{C_{pp}}{C_{art}} \qquad (2)$$

if we denote by Cpp the concentration of the substance in the poorly perfused compartment. Similarly, for the well-perfused and fat compartments, we can write the following equations for the two state variables Qwp and Qfat, respectively:

$$\frac{\partial Q_{wp}}{\partial t} = F_{wp}\left(C_{art} - \frac{Q_{wp}}{P_{wp}V_{wp}}\right) \qquad (3)$$

$$\frac{\partial Q_{fat}}{\partial t} = F_{fat}\left(C_{art} - \frac{Q_{fat}}{P_{fat}V_{fat}}\right) \qquad (4)$$

The equation for the last state variable, Qliv (for the liver), is a bit more complex, with a term for metabolic clearance, with first-order rate constant kmet, and a term corresponding to the oral ingestion rate of the compound (quantity absorbed per unit time), Ring, which corresponds to the administration rate if gut absorption is complete, or to a fraction of it otherwise:

$$\frac{\partial Q_{liv}}{\partial t} = F_{liv}\left(C_{art} - \frac{Q_{liv}}{P_{liv}V_{liv}}\right) - k_{met}Q_{liv} + R_{ing} \qquad (5)$$

Obviously, this is a minimal model for metabolism, and much more complex terms may be used for saturable metabolism, binding to plasma proteins and blood cells, multiple enzymes, metabolic interactions, extrahepatic metabolism, etc. If the substance is volatile, and if accumulation in the lung tissue itself is neglected, the arterial blood concentration Cart can be computed as follows,
Frédéric Y. Bois et al.
assuming instantaneous equilibrium between blood and air in the lung:

Cart = (Fpul (1 - rds) Cinh + Ftot Cven) / (Fpul (1 - rds)/Pa + Ftot)   (6)
where Ftot is the total blood flow to the lung, Fpul the pulmonary ventilation rate, rds the fraction of dead space (upper airways' volume unavailable for blood–air exchange) in the lung, and Pa the blood over air partition coefficient. Cinh is the inhaled concentration. Equation 6 can be derived from a simple balance of mass exchanges between blood and air under equilibrium conditions. Cven is the concentration of compound in venous blood and can be obtained as the sum of the compound concentrations in venous blood at the organ exits, weighted by the corresponding blood flows:

Cven = [ Σ x∈{pp,wp,fat,liv} Fx Qx/(Px Vx) ] / (Fpp + Fwp + Ffat + Fliv)   (7)
Finally, the substance's concentration in exhaled air, Cexh, can be obtained under the same equilibrium conditions as for Eq. 6:

Cexh = (1 - rds) Cart/Pa + rds Cinh   (8)
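The mass-balance equations above can be checked numerically. As an illustration (a Python sketch with entirely hypothetical parameter values, not the chapter's R implementation, which comes later), the following integrates Eq. 1 by simple Euler stepping with a constant arterial concentration and verifies the equilibrium relation of Eq. 2:

```python
# Hypothetical illustration of Eqs. 1 and 2 (not the chapter's R model):
# integrate dQ_pp/dt = F_pp * (C_art - Q_pp / (P_pp * V_pp)) by simple
# Euler stepping, holding the arterial concentration C_art constant.
F_pp = 2.0    # blood flow to the compartment (L/min), hypothetical
V_pp = 40.0   # compartment volume (L), hypothetical
P_pp = 0.8    # tissue over blood partition coefficient, hypothetical
C_art = 1.0   # constant arterial blood concentration (mg/L)

Q_pp = 0.0    # quantity in the compartment (mg), starts at zero
dt = 0.1      # time step (min)
for _ in range(20000):   # 2000 min, far beyond the time constant P*V/F
    dQ = F_pp * (C_art - Q_pp / (P_pp * V_pp))   # Eq. 1
    Q_pp += dQ * dt

C_pp = Q_pp / V_pp       # compartment concentration (mg/L)
ratio = C_pp / C_art     # should approach P_pp, as in Eq. 2
print(round(ratio, 3))   # → 0.8
```

At steady state the venous outflow concentration Qpp/(Ppp Vpp) equals Cart, which is exactly condition (2).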
Note that Cart, Cven, and Cexh are not specified by differential equations, but by algebraic equations. Those three variables are not fundamental in our model and could be expressed using only parameters and state variables. They are just (very) convenient "output variables" that we may want to record during simulation and that facilitate model writing. The above model assumes that all the substance present in blood is available for exchange with tissues. This may not be true if a fraction of the substance is bound, for example to proteins, in blood or tissues. In that case, it is often assumed that binding/unbinding is rapid compared to the other processes. The equations are then written in terms of unbound quantities, and the rapid equilibrium assumption is used to keep track of the balance between bound and unbound quantities in each organ or tissue [30]. Diffusion across vascular barriers or cellular membranes can be slower than perfusion. This condition is likely to be met by large polar molecules. In that case, to account for diffusion limitation, a vascular subcompartment is usually added to each organ or tissue of interest. Diffusion between that vascular subcompartment and the rest of the tissue is modeled using Fick's law. A diffusion barrier can also exist between the extracellular and intracellular compartments. Consequently, PBPK models exhibit very different degrees of complexity, depending on the number of compartments used and their possible subdivisions [32].
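To illustrate the diffusion-limitation idea, here is a minimal, purely hypothetical Python sketch: an organ is split into a vascular subcompartment (exchanging with arterial blood by perfusion) and a tissue subcompartment (exchanging with the vascular space through a permeability–surface area product PS, a simple form of Fick's law). All values are invented for illustration:

```python
# Hypothetical diffusion-limited organ: vascular + tissue subcompartments.
F  = 1.0    # blood flow through the vascular space (L/min)
Vv = 0.1    # vascular volume (L)
Vt = 1.0    # tissue volume (L)
PS = 0.05   # permeability-surface area product (L/min)
P  = 5.0    # tissue over blood partition coefficient
C_art = 1.0 # constant arterial concentration (mg/L)

Qv = Qt = 0.0
dt = 0.05
for _ in range(400_000):               # run to (near) steady state
    Cv, Ct = Qv / Vv, Qt / Vt
    flux = PS * (Cv - Ct / P)          # Fick's-law exchange term
    dQv = F * (C_art - Cv) - flux      # perfusion in/out plus exchange
    dQt = flux                         # tissue exchanges only with blood
    Qv += dQv * dt
    Qt += dQt * dt

# At steady state the net flux vanishes, so Ct/Cv approaches P
print(round(Qv / Vv, 3), round((Qt / Vt) / (Qv / Vv), 3))  # → 1.0 5.0
```

The slow exchange (small PS) delays tissue equilibration far beyond what perfusion alone would predict, which is the practical consequence of diffusion limitation.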
2.2 Parameter Estimation
A PBPK model needs a considerable amount of information to be parametrized. At the system level, we find substance-independent anatomical (e.g., organ volumes), physiological (e.g., cardiac output), and some biochemical parameters (e.g., enzyme concentrations). All those are generic, in the sense that they do not depend on the substance(s) of interest, and are relatively well documented in humans and laboratory animals [31, 33–37]. They can be assigned once and for all, at least to a first approximation, for an "average" individual in a given species at a given time. There are also, inevitably, substance-specific parameters, which reflect the specific interactions between the body and the substance of interest. In many cases, values for those parameters are not readily available. However, such parameters often depend, at least in part, on the physicochemical characteristics of the molecule studied (e.g., partition coefficients depend on lipophilicity, passive renal clearance depends on molecular weight). In that case, they can be estimated, for example, by quantitative structure–activity relationships (QSARs) [38–40], also referred to as quantitative structure–property relationships (QSPRs) when "simple" parameter values are predicted. Molecular simulation (quantum chemistry) models can also be used [41, 42], in particular for the difficult problem of estimating metabolic parameters. QSARs are statistical models (often regressions) relating one or more parameters describing chemical structure (predictors) to a quantitative measure of a property or activity (here a parameter value in a PBPK model) [15, 43–46]. However, when predictive structure–property models are not available (as is often the case with metabolism, for example), the parameters have to be measured in vitro and corrected for the toxicokinetics and nonspecific binding of compounds in the test systems [47–50], or estimated from in vivo experiments, and are then much more difficult to obtain.
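As a toy illustration of the QSPR idea (not a validated model; the descriptor–property pairs below are synthetic, invented only to show the mechanics), a PBPK parameter such as a log tissue:blood partition coefficient can be regressed on a molecular descriptor such as log P with ordinary least squares:

```python
# Synthetic QSPR sketch: fit log10(tissue:blood PC) = a * logP + b.
# The (logP, logPC) pairs below are invented for illustration only.
data = [(0.5, 0.05), (1.0, 0.40), (2.0, 1.10), (3.0, 1.80), (4.0, 2.50)]

n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
sxx = sum((x - mx) ** 2 for x, _ in data)
sxy = sum((x - mx) * (y - my) for x, y in data)
a = sxy / sxx        # slope
b = my - a * mx      # intercept

# Predict the partition coefficient of a new compound with logP = 2.5
logPC = a * 2.5 + b
print(round(a, 2), round(b, 2), round(logPC, 2))  # → 0.7 -0.3 1.45
```

Real QSPRs are of course built on curated experimental data, validated on external test sets, and used only within their applicability domain.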
However, using average parameter values does not correctly reflect the range of responses expected in a human population, nor the uncertainty about the values obtained by QSARs, in vitro experiments, or in vivo estimation [51]. Interindividual variability in PK can have direct consequences on efficacy and toxicity, especially for substances with a narrow therapeutic window. Therefore, simulation of interindividual variability should be an integral part of the prediction of PK in humans. The mechanistic framework of PBPK models provides the capacity to predict interindividual variability in PK when the required information is adequately incorporated. To that effect, two modeling strategies have been developed in parallel. The first approach has been mostly used for data-rich substances. It couples a pharmacokinetic model, to describe time-series measurements at the individual level, and a multilevel (random effect) statistical model, to extract a posteriori estimates of variability from a group of subjects [52, 53]. In a Bayesian context, a PBPK model can be used at the individual
level and allows easy inclusion of many subject-specific covariates [54]. The second approach also takes advantage of the predictive capacity of PBPK models but simply assigns a priori distributions for variability and uncertainty to the model parameters (e.g., metabolic parameters, blood flows, organ volumes) and forms distributions of model predictions by Monte Carlo simulations [55].

2.3 Solving the Model Equations
Many software programs can be used to build and simulate a PBPK model. Some are very general simulation platforms—R [56], GNU MCSim [57, 58], Octave [59], Scilab [60], Matlab® [61], Mathematica® [62], to name a few. Those platforms usually propose some PBPK-specific packages or functionalities that ease model development. An alternative is to use specialized software (e.g., PK-Sim® [63], Simcyp® [64], GastroPlus® [65], Merlin-Expo [66, 67], MCRA [68, 69]), which often has an attractive interface. However, in that case, the model equations cannot usually be modified, and only the parameter values can be changed or assigned preset values or distributions.
2.4 Evaluation of the Model
The evaluation (checking) of the model is an integral part of its development, needed to objectively demonstrate the reliability and relevance of the model. Model evaluation is often associated with a defined purpose, such as a measure of internal dosimetry relevant to the mode of action of the substance (e.g., the area under the curve or maximal concentration in the target tissues during critical time windows). The objective here is to establish confidence in the predictive capabilities of the model for a few key variables. A common way to evaluate a model's predictability is to compare its predictions with an independent dataset, i.e., one that has not been used for model development. That is called cross-validation in statistical jargon. For example, the evaluation step could check that the model is able to reproduce the peaks and troughs or steady-state levels of tissue concentrations under repeated exposure scenarios. Model evaluation is not limited to a confrontation between model predictions and data, but also requires checking the plausibility of the model structure, its parametrization, and the mathematical correctness of the equations (e.g., the conservation of mass, organ volumes, and blood flows). Because of their mechanistic description of ADME processes, PBPK model structures and parameter values must be in accordance with biological reality. Parameter values inconsistent with physiological and biological knowledge limit the use of the model for extrapolation to other exposure scenarios and ultimately need to be corrected by the acquisition of new data, for example.
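A simple quantitative criterion often used in such comparisons (the exact fold and the data below are illustrative assumptions, not from this chapter) is that each prediction fall within a given fold of the corresponding observation:

```python
# Check whether each predicted concentration is within a given fold of
# the observation. The pred/obs values are hypothetical, same units.
def within_fold(predicted, observed, fold=2.0):
    return all(1.0 / fold <= p / o <= fold
               for p, o in zip(predicted, observed))

pred = [0.8, 1.5, 3.2, 0.25]
obs  = [1.0, 1.0, 2.0, 0.30]
print(within_fold(pred, obs))            # → True  (all within 2-fold)
print(within_fold(pred, obs, fold=1.2))  # → False (e.g., 0.8 vs 1.0)
```

Such checks complement, but do not replace, visual inspection of the full concentration–time profiles.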
2.5 Model Validation and Validity Domain
Most models are valid only on a defined domain. That is true even for the most fundamental models in physics. The term validation is rarely used in the context of toxicokinetic modeling as it is almost
impossible to validate in all generality a model of the whole body. In practice, it is not done because it is bound to fail. It would require experimental data for all state variables (time evolution of concentration in all compartments) and model parameters under innumerable exposure scenarios. In that context, to be useful, the validation process should first define a validity domain. For example, we should not expect PBPK models to give accurate descriptions of within-organ differences in concentrations (organs are described as homogeneous "boxes"). There is actually an avenue of research in improved organ descriptions. Good accuracy in predictions is achieved on a time scale ranging from hours to years [16], but for inhalation at the lung level in particular, PBPK models are not suitable for time scales shorter than a couple of minutes (the cyclicity of breathing is not described). Metabolism and the description of the distribution of metabolites is a deeper problem, as it branches into the open-ended field of systems biology [70]. In that area, the domain of validity becomes harder to define and is usually much smaller than that of the parent molecule. The model's domain of validity should be documented, to the extent possible, and all the more carefully as we venture into original and exotic applications. Fortunately, the assumptions consciously made during model development usually help in delineating the domain of validity.
3 A PBPK Model for 1,3-Butadiene

In this section, we apply the model development process presented above to the development of a PBPK model for 1,3-butadiene, a volatile organic compound. First, some background information on 1,3-butadiene will be provided, to fulfil some of the requirements of the guidance defined in [19]. Because the aim here is not to run a risk assessment on butadiene, most sections of the guidance will be omitted (e.g., the comparison with the default approaches).
3.1 Setting up Background
An extensive literature exists on the human uses, exposures, toxicokinetics, and mode of action of 1,3-butadiene; see for example [71, 72]. 1,3-Butadiene (CAS No. 106-99-0) is a colorless gas under normal conditions. It is used for the production of synthetic rubber, thermoplastic resins, and other plastics, and is also found in cigarette smoke and combustion engine fumes. It enters the environment from engine exhaust emissions, biomass combustion, and industrial on-site uses. The highest atmospheric concentrations have been measured in cities and close to industrial sources. The general population is exposed to 1,3-butadiene primarily through ambient and indoor air. Tobacco smoke may contribute significant amounts of 1,3-butadiene at the individual level. It is a known carcinogen, acting through its metabolites [71].
1,3-Butadiene metabolism is a complex series of oxidation and reduction steps [71]. Briefly, the first step in the metabolic conversion of butadiene is the cytochrome P450-mediated oxidation to 1,2-epoxy-3-butene (EB). EB may subsequently be exhaled, conjugated with glutathione, further oxidized to 1,2:3,4-diepoxybutane (DEB), or hydrolyzed to 3-butene-1,2-diol (BDD). DEB can then be hydrolyzed to 3,4-epoxy-1,2-butanediol (EBD) or conjugated with glutathione. BDD can be further oxidized to EBD. EBD can be hydrolyzed or conjugated with glutathione. The metabolism of 1,3-butadiene to EB is the rate-limiting step for the formation of all its toxic epoxy metabolites. It makes sense, given the above, to define the cumulated amount of 1,3-butadiene metabolites formed in the body as the measure of its internal dose for cancer risk assessment purposes.

3.2 Model Development and Evaluation

3.2.1 Software Choice
In our butadiene example, we will use the R software and its package deSolve. We will assume that the reader has a minimal working knowledge of R, and has R and deSolve installed. R is freely available for the major operating systems (Unix/Linux, Windows, Mac OS), and deSolve provides excellent functions for integrating differential equations. R is easy to use, but not particularly fast. If you need to run many simulations (say, several thousand or more), you should code your model in the C language, compile it, and have deSolve call your compiled code (see the deSolve manual for that). An even faster alternative (if you need to do Bayesian model calibration, for example) is to use GNU MCSim. You can actually use GNU MCSim to develop C code for deSolve.

3.2.2 Defining the Model Structure and Equations

Our research group has previously developed and published a PBPK model for 1,3-butadiene on the basis of data collected on 133 human volunteers during controlled low-dose exposures. We used it for various studies and as an example of Bayesian PBPK analysis [73–76]. That model (see Fig. 2) is a minimal description of butadiene distribution and metabolism in the human body after inhalation. Three compartments lump together tissues with similar perfusion rates (blood flow per unit of tissue mass): the "well-perfused" compartment groups the liver, brain, lungs, kidneys, and other viscera; the "poorly perfused" compartment lumps muscles and skin; and the third is the "fat" tissues. Butadiene can be metabolized into an epoxide in the liver, kidneys, and lung, which are part of the well-perfused compartment. Our model will therefore include four essential "state" variables, which will each have a governing differential equation: the quantities of butadiene in the fat, in the well-perfused compartment, and in the poorly perfused compartment, and the quantity metabolized. Actually, the latter is a "terminal" state variable, which depends on the others but has no dependents.
We could dispense with it if we did not want to compute and output it. That would save computation time, which
Fig. 2 Representation of the PBPK model used for 1,3-butadiene. The model equations and parameters are detailed in Subheading 3 of the text
grows approximately with the square of the number of state variables of the model. In an R script written for use with deSolve, we first need to define the model state variables and assign them initial values (the values they will take at the start of a simulation; those are called "boundary conditions" in technical jargon). The syntax is quite simple (the full script is given in Appendix 1):

y = c("Q_fat" = 0,  # Quantity of butadiene in fat (mg)
      "Q_wp"  = 0,  # ~ in well-perfused (mg)
      "Q_pp"  = 0,  # ~ in poorly-perfused (mg)
      "Q_met" = 0)  # ~ metabolized (mg)
That requests the creation of y as a vector of four named components, all initialized here to the value zero (i.e., we assume no previous exposure to butadiene, or no significant levels of butadiene left in the body from a previous exposure). The portions of lines starting with the pound sign (#) are simply comments for the reader and are ignored by the software. We have chosen milligrams as the unit for butadiene quantities, and it is useful to indicate it here. In R, indentation and spacing do not matter, and we strive for readability.
We then need to define similarly, as a named vector, the model parameters:

parameters = c(
  "BDM"           = 73,    # Body mass (kg)
  "Height"        = 1.6,   # Body height (m)
  "Age"           = 40,    # in years
  "Sex"           = 1,     # code 1 is male, 2 is female
  "Flow_pul"      = 5,     # Pulmonary ventilation rate (L/min)
  "Pct_Deadspace" = 0.7,   # Fraction of pulmonary deadspace
  "Vent_Perf"     = 1.14,  # Ventilation over perfusion ratio
  "Pct_LBDM_wp"   = 0.2,   # wp tissue as fraction of lean mass
  "Pct_Flow_fat"  = 0.1,   # Fraction of cardiac output to fat
  "Pct_Flow_pp"   = 0.35,  # ~ to pp
  "PC_art"        = 2,     # Blood/air partition coefficient
  "PC_fat"        = 22,    # Fat/blood ~
  "PC_wp"         = 0.8,   # wp/blood ~
  "PC_pp"         = 0.8,   # pp/blood ~
  "Kmetwp"        = 0.25)  # Rate constant for metabolism (1/min)
We will see next how those parameters are used in the model equations, but you can already notice that, except for the partition coefficients and the metabolic rate constant, they are not exactly the parameters used in Eqs. 1 and 3–5. They are in fact scaling coefficients used to model parameter correlations in an actual subject. Before we get to the model core equations, we need to define the value of the concentration of butadiene in inhaled air. This is an "input" to the model, and we will allow it to change with time, so it is a dynamic boundary condition to the model (deSolve uses the term "forcing function"). We use here a convenient feature of R, defining Cinh as an approximating function:

C_inh = approxfun(x = c(0, 100), y = c(10, 0),
                  method = "constant", f = 0, rule = 2)
The instruction above defines a function of time Cinh(t), right continuous (option f = 0) and constant by segments (the option method = "linear" would yield a function linear by segments). At times 0 and 100 (the x values), it takes the y values 10 and then 0, respectively. Before time zero and after time 100, Cinh(t) takes the closest y value defined (option rule = 2). Figure 3 shows the behavior of the function Cinh(t) so defined.
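For readers more familiar with Python than R, the same right-continuous step function can be sketched with the standard bisect module (this is an illustrative translation, not part of the chapter's R script):

```python
from bisect import bisect_right

def make_step_function(xs, ys):
    """Right-continuous step function: value ys[i] for xs[i] <= t < xs[i+1],
    clamped to the nearest defined value outside [xs[0], xs[-1]]
    (mirroring R's approxfun(..., method = "constant", f = 0, rule = 2))."""
    def f(t):
        i = bisect_right(xs, t) - 1
        return ys[max(0, min(i, len(ys) - 1))]
    return f

C_inh = make_step_function([0.0, 100.0], [10.0, 0.0])
print(C_inh(-5), C_inh(50), C_inh(100), C_inh(150))  # → 10.0 10.0 0.0 0.0
```

As in the R version, the exposure is 10 ppm on [0, 100) and 0 afterwards, with values outside the defined range clamped to the nearest endpoint.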
Fig. 3 Plot of the time concentration profile of butadiene inhaled generated by the function Cinh(t) of the example script. Cinh(t) is used as a forcing function for the model simulations
Formally you do not necessarily need such an input function in your model. Cinh could simply be a constant, or no input could be used if you were to model just the elimination of butadiene from the body following exposure. In that case, though, the initial values of the state variables would have to be non-null. Now we need to define a function that will compute the derivatives at the core of the model, as a function of time t (used, for example, when parameters are time-varying, or for computing Cinh(t)), of the current state variable values y, and of the parameters. Here is the (simplified) code of that function, which we called "bd.model" (intermediate calculations have been deleted for clarity; we will see them later):

bd.model = function(t, y, parameters) { # function header
  # function body:
  with (as.list(y), {
    with (as.list(parameters), {

      # ... (part of the code omitted for now)

      # Time derivatives for quantities
      dQ_fat = Flow_fat * (C_art - Cout_fat)
      dQ_wp  = Flow_wp  * (C_art - Cout_wp) - dQmet_wp
      dQ_pp  = Flow_pp  * (C_art - Cout_pp)
      dQ_met = dQmet_wp
      return(list(c(dQ_fat, dQ_wp, dQ_pp, dQ_met),      # derivatives
                  c("C_ven" = C_ven, "C_art" = C_art))) # extra outputs
    }) # end with parameters
  }) # end with y
} # end of function bd.model()
The first two nested "with" blocks (they extend up to the end of the function) are an obscure but useful feature of R. Remember that y and "parameters" are vectors with named components. In R, you would normally refer to their individual components by writing, for example, parameters["PC_fat"] for the fat over blood partition coefficient. That can become clumsy, and the "with" statements allow you to simplify the notation and write simply PC_fat. The most important part of the "bd.model" function is the calculation of the derivatives. As you can see, they are given an arbitrary name and computed similarly to the equations given above (e.g., Eq. 1). Obviously, we need to have defined the temporary variables "Cout_fat", "Cout_wp", and "dQmet_wp"; they are part of the omitted code and we will see them next. Finally, the function needs to return (as a list; that is imposed by deSolve) the derivatives computed and optionally the output variables we might be interested in (in our case, for example, Cven and Cart). The code we omitted for clarity was simply intermediate calculations. First, some obvious conversion factors:

# Define some useful constants
MW_bu = 54.0914     # butadiene molecular weight (g/mol)
ppm_per_mM = 24450  # ppm to mM conversion under normal conditions

# Conversions from/to ppm
ppm_per_mg_per_l = ppm_per_mM / MW_bu
mg_per_l_per_ppm = 1 / ppm_per_mg_per_l
The following instructions scale the compartment volumes to body mass. The equation for the fraction of fat is taken from [77]. That way, the volumes correlate as they should with body mass or lean body mass:

# Calculate fraction of body fat
Pct_BDM_fat = (1.2 * BDM / (Height * Height) - 10.8 * (2 - Sex) +
               0.23 * Age - 5.4) * 0.01

# Actual volumes; 10% of body mass (bones...) receives no butadiene
Eff_V_fat = Pct_BDM_fat * BDM
Eff_V_wp  = Pct_LBDM_wp * BDM * (1 - Pct_BDM_fat)
Eff_V_pp  = 0.9 * BDM - Eff_V_fat - Eff_V_wp
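As a quick sanity check on these scaling equations, the following Python translation (for illustration only, using the default values from the parameter vector above) verifies that the fraction of body fat comes out plausible and that the three effective volumes sum to 90% of body mass by construction:

```python
# Default parameter values from the chapter's parameter vector
BDM, Height, Age, Sex = 73.0, 1.6, 40.0, 1
Pct_LBDM_wp = 0.2

# Fraction of body fat (equation from [77], as used in the chapter)
Pct_BDM_fat = (1.2 * BDM / (Height * Height) - 10.8 * (2 - Sex)
               + 0.23 * Age - 5.4) * 0.01

# Effective volumes; 10% of body mass receives no butadiene
Eff_V_fat = Pct_BDM_fat * BDM
Eff_V_wp = Pct_LBDM_wp * BDM * (1 - Pct_BDM_fat)
Eff_V_pp = 0.9 * BDM - Eff_V_fat - Eff_V_wp

print(round(Pct_BDM_fat, 3))                      # → 0.272
print(round(Eff_V_fat + Eff_V_wp + Eff_V_pp, 1))  # → 65.7 (= 0.9 * 73)
```

For this default subject, about 27% of body mass is fat, a reasonable figure for a 40-year-old male of this height and weight.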
The blood flows are scaled similarly to maintain adequate perfusion per unit mass:

# Calculate alveolar flow from total pulmonary flow
Flow_alv = Flow_pul * (1 - Pct_Deadspace)

# Calculate total blood flow from Flow_alv and the V/P ratio
Flow_tot = Flow_alv / Vent_Perf

# Calculate actual blood flows from total flow and percent flows
Flow_fat = Pct_Flow_fat * Flow_tot
Flow_pp  = Pct_Flow_pp  * Flow_tot
Flow_wp  = Flow_tot * (1 - Pct_Flow_pp - Pct_Flow_fat)
We now have everything needed to compute the concentrations at time t in the various compartments or at their exits:

# Calculate the concentrations
C_fat = Q_fat / Eff_V_fat
C_wp  = Q_wp / Eff_V_wp
C_pp  = Q_pp / Eff_V_pp

# Venous blood concentrations at the organ exits
Cout_fat = C_fat / PC_fat
Cout_wp  = C_wp / PC_wp
Cout_pp  = C_pp / PC_pp
The next two lines are typical computational tricks. The right-hand sides will be used several times in the subsequent calculations; it is faster, and more readable, to define them as temporary variables:

# Sum of Flow * Concentration over all compartments
dQ_ven = Flow_fat * Cout_fat + Flow_wp * Cout_wp + Flow_pp * Cout_pp

# Quantity metabolized in liver (included in well-perfused)
dQmet_wp = Kmetwp * Q_wp

C_inh.current = C_inh(t) # to avoid calling C_inh() twice
The last series of intermediate computations obtain Cart as in Eq. 6, with a unit conversion for Cinh(t), Cven as in Eq. 7 (those two will be defined as outputs in the function's return statement), the alveolar air concentration Calv, and finally the exhaled air concentration Cexh:

# Arterial blood concentration
# Convert input given in ppm to mg/L to match the other units
C_art = (Flow_alv * C_inh.current * mg_per_l_per_ppm + dQ_ven) /
        (Flow_tot + Flow_alv / PC_art)

# Venous blood concentration (mg/L)
C_ven = dQ_ven / Flow_tot

# Alveolar air concentration (mg/L)
C_alv = C_art / PC_art

# Exhaled air concentration (ppm!), as in Eq. 8
C_exh = (1 - Pct_Deadspace) * C_alv * ppm_per_mg_per_l +
        Pct_Deadspace * C_inh.current

Absorption

Lipinski's rule of five: Violation of the rule (>5 H-bond donors; >10 H-bond acceptors; MWt >500 Da) is associated with poor oral absorption

% human intestinal absorption/gastrointestinal absorption: Indicates potential absorption following oral administration

Skin permeability coefficient: Indicates potential uptake following dermal administration
Distribution

PPB, fb, fu: Plasma protein binding (expressed as % binding or as the fraction bound/unbound (fb/fu) in plasma)

Vd: Volume of distribution, a hypothetical volume into which a drug distributes, based upon its concentration in blood (Vd = dose/concentration in plasma); a drug highly distributed to tissues has a high volume of distribution

P-gp substrate or inhibitor: Potential to be a substrate for, or to inhibit, P-glycoprotein, a key efflux transporter. Substrates may show reduced uptake; inhibitors may increase the uptake of coadministered drugs

BBB partitioning: Blood–brain barrier partitioning is the tendency to cross the blood–brain barrier (particularly relevant for drugs having therapeutic activity or undesirable side effects in the central nervous system)

Metabolism

CYP substrate or inhibitor: Potential to be a substrate for, or to inhibit, cytochrome P450 enzymes. Interactions with CYP1A2, CYP3A4, CYP2C9, CYP2C19, and CYP2D6 are the most commonly predicted, as these enzymes are responsible for the metabolism of the majority of drugs. Particularly relevant for predicting drug–drug interactions—inhibitors may increase the levels of coadministered drugs

Metabolites: Potential metabolites formed may possess therapeutic activity or be associated with toxicity

Pharmacokinetic Tools and Applications

Table 1 (continued)

Elimination

Cltot, Clr, Clh: Cltot is the total clearance, i.e., the amount of blood that is effectively cleared of drug per unit time (per unit body weight); total clearance includes contributions from renal excretion (Clr), hepatic clearance (Clh), and metabolism or clearance by other routes (e.g., sweat, secretion into breast milk, exhalation). Note that elimination comprises both excretion of unchanged drug and chemical conversion via metabolism

F: Bioavailability, a composite parameter indicating the amount of drug administered that reaches the systemic circulation unchanged; this is dependent on both uptake and avoidance of first-pass metabolism at the site of administration (skin, lungs, or liver for dermal, inhalational, or orally administered drugs, respectively)

t½: Half-life, the time for the concentration in blood (or tissue) to fall by one half

Level (iii): Concentration–time profile in blood (or tissues/organs)

Cmax: Maximum concentration achieved in blood or tissue/organ

Tmax: Time to reach the maximum concentration in blood or tissue/organ

AUC: Area under the concentration–time curve (indicating overall internal exposure to the drug) in blood or an individual tissue/organ of interest
The properties listed in Table 1 are the most commonly predicted parameters; they do not encompass the full complement of ADME properties relevant to drug design, but will serve as illustrative examples to demonstrate the capabilities of available tools. The reader is referred to more detailed reviews of ADME properties, and of software that can be used for their prediction, available in the literature [7, 10–12].
2 Methods

The focus of this chapter is freely available tools for predicting ADME-relevant properties; many commercial tools are also available but are not considered here. This chapter provides information on selected software, including key functionalities, basic instructions for use, and the presentation/interpretation of results. It is not intended to be a comprehensive review of all ADME-relevant software, but rather an introduction to what is available and how it can be used in practice. Throughout this chapter, atenolol (a common beta-blocker, with the structure shown in Fig. 2) will be used as the example drug. The process of using a range of pharmacokinetic tools to obtain data or make predictions for this drug (i.e., typical inputs and outputs) will be described, enabling the reader to repeat the process for other chemicals of interest.
Judith C. Madden and Courtney V. Thompson
Fig. 2 Atenolol

2.1 Obtaining Existing Data
It is obvious that if reliable data already exist there is no need to use predictive software. In certain cases, although data are not available for the chemical of interest, they may be available for another “similar” chemical, enabling use of this existing knowledge to infer information for the related chemical. Hence, existing databases are a vital resource, the information content for which is truly vast and ever-expanding. In a recent comprehensive review of resources for existing data, over 900 different databases were identified and characterized, of which 38 were dedicated to ADME data [13]. Table 2 shows key resources for obtaining existing data. Representative examples only have been selected for each type of resource. For the majority of these databases, navigating the resource is highly intuitive. All those listed in Table 2 accept the name of the chemical as an input search term, and many can be interrogated using alternative identifiers such as Simplified Molecular Input Line Entry System (SMILES) strings, Chemical Abstracts Service (CAS) Registry numbers, PubMed ID numbers, and/or InChIKeys. The majority of these resources are also capable of searching for “similar” chemicals to a target chemical; based on the similarity of physicochemical properties, chemical structure, or other features. In addition to providing available physicochemical or ADME and toxicity (ADMET) data, a key functionality of databases such as ChemSpider or PubChem is that they can be used to obtain SMILES strings (or other identifiers) for existing chemicals. This is particularly important as many of the resources to predict ADMET properties require a SMILES string for input. For chemicals that are not present within the databases (and therefore do not have a SMILES string associated with them), the SMILES string needs to be created in order to use the software. 
Tutorials on creating SMILES strings for any chemical are provided by Daylight (https://www.daylight.com/dayhtml_tutorials/languages/smiles/index.html). Alternatively, PubChem Sketcher V2.4 (https://pubchem.ncbi.nlm.nih.gov//edit3/index.html, accessed October 2021) can be used to convert manually edited chemical structures into SMILES strings, InChIKeys, or other formats.
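CAS Registry Numbers, another identifier accepted by these databases, carry a built-in consistency check: the final digit is a checksum, the sum of the other digits each multiplied by its position counted from the right, modulo 10. A small Python sketch of this validation (the butadiene CAS number appears earlier in this volume; atenolol's widely listed CAS number is used as a second example):

```python
def valid_cas(cas: str) -> bool:
    """Validate a CAS Registry Number (e.g., 'NNNNNNN-NN-N')
    using its trailing checksum digit."""
    parts = cas.split("-")
    if len(parts) != 3 or not all(p.isdigit() for p in parts):
        return False
    digits = "".join(parts)
    body, check = digits[:-1], int(digits[-1])
    # Each digit is weighted by its position counted from the right
    total = sum(int(d) * i for i, d in enumerate(reversed(body), start=1))
    return total % 10 == check

print(valid_cas("106-99-0"))    # 1,3-butadiene → True
print(valid_cas("29122-68-7"))  # atenolol → True
print(valid_cas("106-99-1"))    # corrupted check digit → False
```

Such a check catches most single-digit typing errors before a database query silently returns the wrong compound.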
Table 2 Key resources for existing data

ChemSpider (http://www.chemspider.com/): More than 103 million chemical records, physicochemical property, and toxicity data

DrugBank (https://go.drugbank.com/): Data for 14,460 drugs/drug candidates; ADME-related data, where available, typically include absorption, volume of distribution, protein binding, metabolism, elimination, half-life, and clearance

EPA CompTox Chemicals Dashboard (https://comptox.epa.gov/dashboard): Data for 883,000 chemicals, including in vitro data to assist in vitro-to-in vivo extrapolation (IVIVE); includes, where available, intrinsic clearance, plasma protein binding, and predicted values for volume of distribution, time to steady state, half-life, and human steady-state plasma concentration following a 1 mg/kg/day dose

OCHEM (https://ochem.eu/home/show.do): Database based on the wiki principle; varying level of detail available for individual chemicals

OECD QSAR Toolbox (https://www.qsartoolbox.org/): More than 2.6 million data points for 90,000 chemicals, including physicochemical property and toxicity data

PBK model database (Alternatives to Laboratory Animals, in press): Database of chemicals for which existing PBK models are available; provides information on the models, such as species, routes of administration, and software used; provides links to the relevant publications

PubChem (https://pubchem.ncbi.nlm.nih.gov/): 110 million chemicals recorded; physicochemical property and toxicity data. ADME data, where available, include absorption, route of elimination, volume of distribution, clearance, metabolites, and half-life; links to relevant studies in the literature

Note: The amount of data available for individual chemicals varies significantly, and many of these resources provide a range of additional types of data.
2.1.1 Case Study 1: Using PubChem to Obtain SMILES Strings and Existing ADME Data
Figure 3 illustrates how to use PubChem to obtain an extensive array of existing data (currently for 110 million compounds from 781 data sources including 32 million literature references). Note that the amount of data and data types available for individual chemicals varies considerably.
Data Availability
Nineteen different types of data are available for atenolol, including structures, physicochemical properties (measured and predicted), and pharmacological and toxicity data. From the "Names and Identifiers" tab the SMILES string can be obtained; this is required as input for the software described in subsequent sections of this chapter. InChIKeys are unique, machine-readable identifiers and are useful for database searching.
64
Judith C. Madden and Courtney V. Thompson
Go to the PubChem homepage (https://pubchem.ncbi.nlm.nih.gov/), enter the name of the chemical (or other identifier) into the search box, and click the search icon. The best matches for the search appear (it is also possible to search for similar structures); click on the name matching your search term and the page for the chemical of interest is displayed. From here it is possible to navigate to the various data types using the links on the right of the page. The "Names and Identifiers" tab links to SMILES strings, InChIKeys, etc.; the "Pharmacology and Biochemistry" tab links to ADME information.

Fig. 3 Using PubChem to obtain data for atenolol
SMILES string for atenolol: CC(C)NCC(COC1=CC=C(C=C1)CC(=O)N)O. InChIKey: METKIMKYRPQLGS-UHFFFAOYSA-N. Under the "Pharmacology and Biochemistry" tab, a range of data is given. The ADME data for atenolol include: gastrointestinal absorption (~50%); ability to cross the blood–brain barrier (slowly and to a small extent); routes of elimination (85% via renal excretion; 10% in feces); volume of distribution (63.8–112.5 L, with differential distribution into a central and two peripheral compartments); and clearance (total clearance 97.3–176.3 mL/min, of which 95–168 mL/min is renal clearance).

2.2 Tools for Predicting Physicochemical Properties
Where existing information is not available, the initial stages of drug development may require screening of vast (virtual) libraries of potential drug candidates. In terms of ADME profiling, this equates to Level (i) described above, i.e., predicting simple physicochemical properties that are associated with ADME characteristics. Qualitative and quantitative relationships between structure and activity/properties (QSARs/QSPRs) have been used for decades to predict ADME and many other properties [6, 9, 10, 14, 15]. One of the simplest and best-known structure–activity relationships is the Lipinski Rule of Five, used to identify candidates with potentially low oral absorption [16]. Several similar rules for a preliminary assessment of "drug-likeness" have also been published [17–20] and incorporated as screening-level tools in predictive ADME software. Generally, these screening tools require values for fundamental physicochemical properties such as the logarithm of aqueous solubility (log Saq), the octanol:water partition coefficient (log P), hydrogen bond donating or accepting ability (Hdon/Hacc), or topological polar surface area (TPSA). A report into the reasons for attrition across four major pharmaceutical companies reconfirmed the importance of selecting candidates with appropriate physicochemical properties [3]. Examples of software for predicting physicochemical properties are given in Table 3.

The use of EPI SUITE™ (Estimation Programs Interface Suite, from the US Environmental Protection Agency) is demonstrated here for the prediction of physicochemical properties and skin permeability, as this has been a standard prediction package for over 20 years [15]. The software is recommended for use as a "screening level tool", with measured values being preferable where available. It is possible to run predictions for an individual chemical or to use the software in batch mode to make predictions for larger datasets.
The models, and the underlying data on which they are based, have been updated periodically since the software was originally released. A detailed history of EPI SUITE™ and a description of how the different predictions are made are given by Card et al. [15]. Note that pharmaceutical companies may focus their effort on a particular region of chemical space. Hence, the use of generic QSAR/QSPR models may be less informative than models developed in-house using relevant company data. It is advisable, therefore, to update or reparameterize these models as new data become available, to ensure that they remain representative of the chemical space of interest. Predictive models are only useful if the chemical falls within the applicability domain of the model, i.e., the region of physicochemical or structural space represented by the training set data. For models developed in-house, chemicals are more likely to be within the applicability domain of the models.

Table 3 Example software to predict physicochemical properties

| Software | Source | Properties predicted (a) |
|---|---|---|
| ACD/Percepta (ACD Labs) | https://www.acdlabs.com/products/percepta/ | Log Saq; Log P; Log D; Hdon/Hacc |
| ADMETlab | http://admet.scbdd.com/calcpre/index/ | Log Saq; Log P; Log D |
| admetSAR | http://lmmd.ecust.edu.cn/admetsar2/ | Log Saq; Log P; Hdon/Hacc |
| ALOGPS 2.1 | http://www.vcclab.org/lab/alogps/ | Log Saq; Log P |
| Corina Symphony | https://www.mn-am.com/ | Log P; Hdon/Hacc; TPSA |
| EPI SUITE™ | https://www.epa.gov/tsca-screening-tools/epi-suitetm-estimation-program-interface | Log Saq; Log P (WSKOW and KOWWIN modules, respectively) |
| Molinspiration | http://www.molinspiration.com/ | Log P; Hdon/Hacc; TPSA (Lipinski Rule Violations) |
| pkCSM | http://biosig.unimelb.edu.au/pkcsm/ | Log Saq; Log P; Hdon/Hacc |
| SwissADME | http://www.swissadme.ch/ | Log Saq; Log P; Hdon/Hacc; TPSA |

(a) NB Many of these packages predict additional properties to those listed here; for definitions of the properties refer to Table 1

2.2.1 Case Study 2: Using EPI SUITE™ to Predict Physicochemical Properties and Kp
EPI SUITE™ can be downloaded for free from: https://www.epa.gov/tsca-screening-tools/download-episuitetm-estimation-program-interface-v411 (accessed October 2021). After downloading and initiating the software, the homepage is displayed, from which a range of predictions can be made as described in Fig. 4. The software calculates log P (i.e., log Kow) using the sum of atomic/fragment contributions; this information is then used to calculate water solubility, using correction factors where necessary. Skin permeability (Kp) is calculated using a QSPR model based on that of Potts and Guy [21].
Installing and initiating the EPI SUITE™ software brings up the homepage, with icons for the sub-programs KOWWIN (predicts log P), WSKOW (predicts water solubility), and DERMWIN (predicts Kp). Click the icon for the property to be predicted, or enter a SMILES string and click "calculate" to predict all properties. Clicking an individual icon displays the input page for the relevant sub-program, e.g., KOWWIN. Enter the SMILES string for the required chemical (atenolol used here) and click "calculate", or select batch mode. The output, including structure and calculation information, is displayed; if an existing value is available in the database, this is also given (predicted and experimental values are shown). A similar input/output interface is used for all sub-programs, including DERMWIN for Kp.

Fig. 4 Using EPI SUITE™ to predict physicochemical properties and Kp
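The Potts and Guy model that DERMWIN draws on is a two-descriptor QSPR; a sketch using the widely cited 1992 coefficients (Kp in cm/s; the coefficients and correction factors actually implemented in EPI SUITE™ may differ):

```python
def log_kp_potts_guy(log_p: float, mw: float) -> float:
    """Potts and Guy (1992) QSPR for skin permeability.
    Returns log10 of Kp in cm/s: log Kp = -6.3 + 0.71*logP - 0.0061*MW."""
    return -6.3 + 0.71 * log_p - 0.0061 * mw

# Atenolol (approximate values: log P ~ 0.16, MW 266.3).
lkp = log_kp_potts_guy(0.16, 266.3)
kp_cm_per_h = 10 ** lkp * 3600  # convert cm/s to cm/h
print(round(lkp, 2))            # log Kp (cm/s), approx -7.81
print(f"{kp_cm_per_h:.2e}")     # Kp ~ 5.6e-05 cm/h: a poor skin permeant
```

The very low predicted Kp is consistent with atenolol's hydrophilicity; highly lipophilic, low-molecular-weight chemicals score much higher on this equation.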
2.3 Tools for Predicting ADME Properties
Level (ii) in characterizing the potential ADME profile of a drug is the prediction of specific ADME properties, such as extent of absorption, volume of distribution, and half-life (as listed and defined in Table 1). An increasing number of tools, both freely available and commercial, have been developed, each predicting a range of ADME and often physicochemical properties. Examples of freely available software/webtools are listed in Table 4, along with the properties that they can predict. The list in Table 4 is not exhaustive, and other examples of such tools are detailed in recent publications [7, 11, 12]. The freely available tools have become increasingly elegant and user-friendly; however, it is not possible to present all of them in detail here. Three individual tools have been selected as examples to demonstrate their use and capabilities; this does not imply endorsement of, or preference for, these tools over others that are available.
2.3.1 ADMETlab
This is a web-based platform, the design, development, and use of which are described in detail by Dong et al. [24], who also provide details of the algorithms used and comparative performance statistics. A total of 288,967 data entries were collated for the database, providing 31 datasets from which to derive the ADMET models. Models were built using random forest, support vector machine, recursive partitioning regression, partial least squares, naïve Bayes, and decision trees, to provide regression-based or classification models for the ADMET endpoints. In addition to the prediction of ADMET properties, the webserver provides tools for evaluating drug-likeness using Lipinski's Rule of Five [16] and similar rule-based systems [17–20], or a random forest model.
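Rule-based drug-likeness screens of the kind mentioned above reduce to simple threshold checks on precomputed descriptors; a minimal sketch of a Lipinski Rule of Five filter (the atenolol descriptor values are approximate literature values, used for illustration):

```python
def lipinski_violations(mw: float, log_p: float, h_don: int, h_acc: int) -> int:
    """Count violations of Lipinski's Rule of Five:
    MW <= 500, log P <= 5, H-bond donors <= 5, H-bond acceptors <= 10.
    More than one violation suggests potentially poor oral absorption."""
    rules = [mw > 500, log_p > 5, h_don > 5, h_acc > 10]
    return sum(rules)

# Atenolol descriptors (approximate literature values).
violations = lipinski_violations(mw=266.3, log_p=0.16, h_don=3, h_acc=4)
print(violations)  # 0 violations, consistent with an orally absorbed drug
```

In practice the descriptor values themselves would come from tools such as those in Table 3, and the same pattern extends to the other published "drug-likeness" rule sets.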
Case Study 3: Using ADMETlab to Predict a Range of ADME Properties
ADMETlab is available from: https://admet.scbdd.com/calcpre/index/ (accessed October 2021). The similarity between a selected molecule and those in the training set can be ascertained by selecting "application domain" under the webserver tab (refer to Fig. 5). The search tab enables searching based on structure, physicochemical properties, or structural similarity using fingerprints. This is a comprehensive and easy-to-navigate tool for the prediction of ADMET-relevant properties, some of the key features of which are presented in Fig. 5, using atenolol as the compound and blood–brain barrier partitioning as one possible endpoint of the many available.
2.3.2 pkCSM
This platform for predicting a range of ADMET properties is freely available on the web. The underpinning methodology and use of this software are described by Pires et al. [25]. The system is based on graph modeling, from which distance-based graph signatures can be derived to represent small molecules and train the models. This machine learning method was used to develop models for 30 ADMET-relevant properties. These include
Table 4 Tools for predicting ADME

Software compared: ADMETlab, admetSAR, pkCSM, SwissADME, vNN-ADMET, and published (Q)SARs (c).

ADME-relevant properties covered (a), each predicted by one or more of these tools: Lipinski Rule Violations; % HIA/GIabs; Kp (b); PPB, fb, fu; Vd; P-gp substrate or inhibitor; BBB partitioning; CYP substrate or inhibitor; Vmax/Km; Cltot, Clr, Clh; F; t½; other ADME properties.

(a) For definitions of ADME properties, refer to Table 1; note that some of the software listed provides predictions for additional properties beyond those listed here
(b) Kp also predicted by EPI SUITE™ (see Subheading 2.2.1)
(c) 89 QSAR models for over 30 ADME-related endpoints were compiled by Patel et al. [22]; benchmark datasets for developing models for these endpoints were curated by Przybylak et al. [23]
14 regression models, providing numerical values, and 16 classification models. The authors report improved statistical performance for several of these models when compared with other predictive methods.
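The distance-based graph signatures underlying this approach summarize a molecule as counts of labeled atom pairs at given topological (bond-count) distances; a simplified illustration on a toy three-atom graph (not the published cutoff-scanning procedure itself):

```python
from collections import deque, Counter

def shortest_path_lengths(adjacency: dict, start: int) -> dict:
    """BFS shortest-path (bond-count) distances from one atom to all others."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adjacency[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

def distance_signature(adjacency: dict, labels: dict) -> Counter:
    """Count (label_a, label_b, distance) triples over all atom pairs."""
    sig = Counter()
    atoms = sorted(adjacency)
    for i, a in enumerate(atoms):
        dist = shortest_path_lengths(adjacency, a)
        for b in atoms[i + 1:]:
            key = (min(labels[a], labels[b]), max(labels[a], labels[b]), dist[b])
            sig[key] += 1
    return sig

# Toy molecular graph: an ethanol-like chain C(0)-C(1)-O(2).
adjacency = {0: [1], 1: [0, 2], 2: [1]}
labels = {0: "C", 1: "C", 2: "O"}
print(dict(distance_signature(adjacency, labels)))
```

Such fixed-length pair-count vectors can then be fed to standard regression or classification learners, which is the general idea behind graph-signature ADMET models.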
Case Study 4: Using pkCSM to Predict ADMET Properties

pkCSM is available from: http://biosig.unimelb.edu.au/pkcsm/prediction (accessed October 2021). The webtool is designed to enable rapid screening of small molecules and, for security, does not retain molecular information. An example of using this tool, and the range of properties it can predict, is presented in Fig. 6 using atenolol.
2.3.3 SwissADME
In addition to the ADME properties listed in Table 4, this freely available webtool also enables prediction of a range of physicochemical properties and drug-likeness, and identifies unfavorable structural features using medicinal chemistry alerts. In order to provide a screening-level tool for determining the likelihood of a drug candidate being taken up via the gastrointestinal tract or partitioning into the brain, Daina and Zoete [26] developed the Brain Or IntestinaL EstimateD permeation method (BOILED-Egg). This work draws on the successful work of Egan et al. [27], who demonstrated that the region of chemical space more favorable for oral absorption could be delineated by the bounds of a plot of two readily available descriptors for lipophilicity and polarity. Furthering this idea, Daina and Zoete [26], using log P and topological polar surface area, devised a plot (visually similar to a boiled egg) wherein the position of a drug candidate within the plot area determines its potential for intestinal (egg-white area) or brain (yolk area) permeation. This simple but effective screening tool is incorporated within SwissADME, along with a "bioavailability radar" for determining drug-likeness. Full details of the development and use of this tool, and the comparative performance of the models, are given in Daina et al. [28].

Fig. 5 Using ADMETlab to evaluate the ADMET profile of a drug candidate. The "ADMET predictor" (under the Webserver tab) lists the available models on the left; click to select the model of interest, or select "Systematic evaluation" to obtain results for all models. Enter a SMILES string for one molecule, or up to 20 in batch mode, and click "submit" to run. Details of the type of model (here SVM for the BBB prediction) and performance statistics are given; toxicity (7 endpoints) is also predicted. A table explaining all results (categorical or numeric), with references for the models, is provided by clicking "here". If "Systematic evaluation" is selected, a summary of results, interpretation, and references is given for all models

Fig. 6 Using pkCSM to predict ADMET properties. The Theory link gives information on how to interpret the results and how many molecules were in the training set. Enter a SMILES string, or up to 100 in batch mode; select individual model types for A, D, M, E, or T, or click ADMET to run all models. Example output is shown for atenolol; 10 toxicity endpoints are also predicted

Case Study 5: Using SwissADME to Predict Physicochemical and ADME Properties

SwissADME is available from: http://www.swissadme.ch/ (accessed October 2021). The webtool uses a mixture of intuitive plots, simple classification schemes, regression analysis (prediction of skin permeability is based on a QSAR model adapted from that of Potts and Guy [21]), and SVM models (such as those identifying potential cytochrome P450 or P-gp substrates and inhibitors). Figure 7 shows the homepage and input for atenolol; the homepage gives links to the relevant references for the models. The example output shows the range of properties predicted, with additional details of the models retrieved by clicking the question marks. Additional examples of software for ADME prediction are given below, although in the interest of space, case studies exemplifying their use have not been included.
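The BOILED-Egg classification reduces to testing where a compound's lipophilicity/TPSA point falls on the plot. The published method uses ellipses fitted in WLOGP/TPSA space; the sketch below substitutes rectangular regions with assumed, illustrative bounds purely to show the mechanics:

```python
def boiled_egg_region(log_p: float, tpsa: float) -> str:
    """Rough, rectangular stand-in for the BOILED-Egg plot regions.
    The published method uses fitted ellipses in WLOGP/TPSA space; the
    numeric bounds below are illustrative assumptions only."""
    if -2.0 <= log_p <= 6.0 and tpsa <= 79.0:
        return "yolk (likely BBB permeant)"
    if -2.5 <= log_p <= 7.0 and tpsa <= 142.0:
        return "white (likely GI absorbed)"
    return "outside (poor passive permeation expected)"

# Atenolol: low lipophilicity, moderate TPSA (~84.6 A^2); lands in the
# white region, consistent with its oral absorption but limited brain
# penetration noted in the PubChem data earlier in this chapter.
print(boiled_egg_region(0.16, 84.6))
```

The point of the exercise is that two cheap descriptors already separate candidates into qualitatively different permeation classes; SwissADME's fitted ellipses refine exactly this idea.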
2.3.4 admetSAR
The admetSAR webserver (freely available from http://lmmd.ecust.edu.cn/admetsar2/, accessed October 2021) predicts approximately 50 physicochemical and ADMET properties. The first version was released in 2012, using over 210,000 experimental data points for over 96,000 chemicals; 27 predictive ADMET models based on fingerprints and SVM methods were developed. In the more recent release, admetSAR version 2.0, the models have been improved and an additional feature to optimize molecules using scaffold hopping (for ADMET properties) has been incorporated [29]. Details of the development of the models, using SVM, RF, kNN, and deep learning techniques, as well as the definition of the applicability domain and the method for scaffold hopping, are described in Yang et al. [29]. For input, the program requires a SMILES string for an individual chemical, or it can analyze up to 20 chemicals in batch mode, with the output available as a downloadable CSV file.
2.3.5 vNN-ADMET
Fig. 7 Using SwissADME to predict physicochemical and ADME properties. The homepage provides links to other webtools available from the Swiss Institute of Bioinformatics (e.g., SwissDock, SwissSimilarity). Enter individual or batches of SMILES strings and click run; example output is shown for atenolol

This webserver, freely available following registration (from https://vnnadmet.bhsai.org, accessed October 2021), uses a variable nearest neighbor (vNN) approach to predict 15 ADMET properties. It also enables users to build classification and regression models from their own data using the vNN approach. Details of the vNN methodology, use of the webserver, and performance statistics are given in Schyman et al. [30].
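The vNN idea can be sketched in a few lines: rather than using a fixed k, all training compounds within a Tanimoto-distance cutoff contribute to the prediction, weighted by their distance. The weighting function, cutoff values, and toy data below are illustrative assumptions, not the published parameterization:

```python
import math

def vnn_predict(query_fp: set, training: list, h: float = 0.3, d_max: float = 0.6):
    """Variable nearest-neighbor prediction: average the activities of all
    training compounds whose Tanimoto distance (1 - similarity) to the
    query is <= d_max, weighted by exp(-(d/h)**2). Returns None when no
    training compound falls inside the cutoff, i.e., the query lies
    outside the applicability domain."""
    num = den = 0.0
    for fp, activity in training:
        shared = len(query_fp & fp)
        sim = shared / (len(query_fp) + len(fp) - shared)
        d = 1.0 - sim
        if d <= d_max:
            w = math.exp(-((d / h) ** 2))
            num += w * activity
            den += w
    return num / den if den else None

# Toy fingerprints (bit-index sets) paired with a binary activity label.
training = [({1, 2, 3, 4}, 1.0), ({1, 2, 3, 5}, 1.0), ({7, 8, 9}, 0.0)]
print(vnn_predict({1, 2, 3, 4}, training))   # close neighbors only -> 1.0
print(vnn_predict({20, 21}, training))       # None: outside the domain
```

A useful property of this scheme, shared with the published method, is that it refuses to predict for queries with no sufficiently similar training compounds instead of extrapolating silently.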
2.3.6 Existing QSAR Models for ADME-Relevant Endpoints

A wide range of QSAR models for multiple ADME-relevant endpoints have been published in the literature, many of which provide extensive datasets for the generation of future models [22, 23]. The endpoints include those indicated in Tables 1 and 4 as well as many others, such as milk:plasma partitioning, placental transfer, Vmax, or Km. One caveat to the use of existing QSAR models is identified in the work of Patel et al. [22], who evaluated more than 80 published ADME models to assess their reliability and reproducibility. Many of the models investigated did not fulfil the OECD principles for the validation of QSARs (five well-established criteria by which the suitability of QSAR models for regulatory purposes can be assessed [31]), and many could not easily be reproduced; users are therefore advised to assess the suitability of such models for their purposes. It should also be noted that reproducing certain models may require some level of expertise, as well as the use of commercial software to generate the descriptors necessary for running the models. Applying such models, although less convenient than using the web-based tools, may provide useful insights, provided that the candidate drug falls within the applicability domain of the model. One problem with many of the existing datasets is their bias towards "successful" drugs; compounds with unfavorable properties tend not to be available for training purposes. To address this issue, an ensemble method for creating balanced datasets has been developed to assist modeling of absorption, bioavailability, and P-gp binding, with models validated using data available in DrugBank [32].

2.4 Prediction of Metabolism
The availability and success of tools to predict passive processes (such as absorption or partitioning) have unsurprisingly been greater than for active processes (such as metabolism or interactions with transporters). Failure to predict such processes leaves a considerable gap in knowledge; for example, actively transported compounds are often outliers in models for absorption or excretion. Predicting metabolism is key: not only does it determine both bioavailability and half-life, but the metabolites themselves may be relevant as active pharmaceuticals in their own right or may present significant toxicity. Some of the software described above is capable of predicting metabolism and transporter interactions; other software for predicting metabolism is briefly described below.
2.4.1 The QSAR Toolbox
The original version of the QSAR Toolbox was released in 2008, and the latest version (4.5) was released in October 2021. It was developed in collaboration with the Organisation for Economic Co-operation and Development (OECD) and the Laboratory of Mathematical Chemistry (LMC, Bulgaria), and is freely available from https://qsartoolbox.org/download/ (accessed October 2021). The Toolbox has functionalities for predicting both skin and liver metabolism and identifies the formation of reactive metabolites. Use of this tool is less intuitive than that of those described above; however, links to multiple tutorials, training manuals, and webinars are available from the download site.

2.4.2 Toxtree
The Toxtree software (freely available from http://toxtree.sourceforge.net/, accessed October 2021) uses a decision tree approach to predict multiple toxic hazards. The SMARTCyp module, a more recent addition to the software, enables prediction of the likely sites of metabolism via the cytochrome P450 pathway and depicts potential metabolites.
2.4.3 CypReact
This software (freely available from https://bitbucket.org/Leon_Ti/cypreact, accessed October 2021) predicts whether or not a query molecule is likely to react with one of the nine most relevant cytochrome P450 enzymes, using models based on a machine learning approach. Development of the methodology, and performance statistics in comparison with other predictive software, are reported in Tian et al. [33].
2.4.4 SyGMa
Systematic Generation of potential Metabolites (SyGMa, freely available from https://github.com/3D-e-Chem/sygma, accessed October 2021) enables metabolites to be predicted from structure using reaction rules for both phase I and phase II metabolism. The rules were derived from the Metabolite Database, and the system uses the approach published by Ridder and Wagener [34].
2.4.5 Xenosite
Xenosite (freely available from https://swami.wustl.edu/xenosite, accessed October 2021) is based on the method presented by Zaretzki et al. [35]. It predicts potential sites of metabolism for cytochrome P450-catalyzed reactions using a machine learning approach based on neural networks.
2.4.6 Alerts for Reactive Metabolites
In addition to the abovementioned software, structural alerts associated with the formation of reactive metabolites can also be used to identify potentially problematic candidate structures. Collations of structural alerts [36], or those forming part of the oCHEM structural-alert screening routine (https://ochem.eu/home/show.do, accessed October 2021), can assist in identifying drug candidates that may be associated with unfavorable metabolic profiles.

One area of increasing importance for predictive ADME tools is in providing the input properties necessary for the development of the more complex and biologically relevant physiologically based pharmacokinetic models [37], as discussed below. Level (iii) demands the most sophisticated and complex models, as these are used to predict the concentration–time profile of a drug in blood or tissues/organs, giving the dose at the target site. It is this internal dose that is most relevant to determining potential effects, both desirable and undesirable.

2.5 Tools for Physiologically Based Pharmacokinetic (PBPK) Modeling

Physiologically based pharmacokinetic (PBPK) models are used for these purposes; however, they are the most resource-intensive of all the models discussed here. A tutorial-level explanation of how PBPK models are built, and the required components, is given by Kuepfer et al. [38]. Briefly, in a PBPK model the body is considered as a set of compartments (organs) linked together by blood flow. The models require information relating to the chemical of interest (physicochemical or ADME-relevant properties), ideally using experimental values where available. Where experimental values are not available, predicted values can be obtained using the tools described in previous sections; however, the use of predicted values can have serious implications for the accuracy of the models. Information relating to the anatomy and physiology of the species of interest is also required and is generally sourced from the literature or from databases integrated with the software. Using a series of differential equations, the time course of the chemical is predicted in blood or other tissues/organs of interest. Altering the physiological parameters to account for different species, life stages, or disease states means that the models can be adapted for use in different scenarios. These models are of particular use in drug development, as they can be used to investigate the potential for drug–drug interactions and to predict the first dose in man. They are also used to convert in vitro effect concentrations to relevant in vivo concentrations, referred to as quantitative in vitro-to-in vivo extrapolation (QIVIVE). Figure 8 shows a typical representation of the key components of a PBPK model.

Fig. 8 A summary of the key elements of a PBPK model: chemical-specific information (physicochemical properties such as log P, log Saq, and PPB, plus other ADME-relevant properties, which can be predicted using the tools described above) is combined with physiological and anatomical information (organ sizes/volumes, blood flows, and the potential for metabolism or excretion; adaptable for species, route, or age, with parameters from databases, the literature, or integrated in the tools) to predict the concentration–time profile for individual organs
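The system of differential equations in a flow-limited PBPK model can be illustrated with a deliberately minimal example: a central blood compartment plus two tissues, integrated with simple Euler steps. All parameter values are illustrative, not taken from any published model:

```python
def simulate_pbpk(dose_umol: float, t_end_h: float = 8.0, dt_h: float = 0.001):
    """Minimal flow-limited PBPK sketch: central blood plus two tissues.
    For each tissue: dC_t/dt = (Q_t/V_t) * (C_blood - C_t/Kp_t)
    Blood balances the outflows to / venous returns from the tissues, and
    clearance acts on the liver venous concentration. Euler integration."""
    V_blood, V_liver, V_muscle = 5.0, 1.7, 35.0      # volumes, L (illustrative)
    Q_liver, Q_muscle = 90.0, 45.0                   # blood flows, L/h
    Kp_liver, Kp_muscle = 2.0, 0.8                   # tissue:blood partition coefficients
    CL_h = 40.0                                      # hepatic clearance, L/h

    c_blood = dose_umol / V_blood                    # IV bolus into blood
    c_liver = c_muscle = 0.0
    t = 0.0
    while t < t_end_h:
        in_liver = Q_liver * (c_blood - c_liver / Kp_liver)
        in_muscle = Q_muscle * (c_blood - c_muscle / Kp_muscle)
        cleared = CL_h * c_liver / Kp_liver          # elimination from the liver
        c_liver += dt_h * (in_liver - cleared) / V_liver
        c_muscle += dt_h * in_muscle / V_muscle
        c_blood += dt_h * (-in_liver - in_muscle) / V_blood
        t += dt_h
    return c_blood, c_liver, c_muscle

c_b, c_l, c_m = simulate_pbpk(dose_umol=100.0)
print(round(c_b, 4), round(c_l, 4), round(c_m, 4))
```

Full PBPK platforms solve the same kind of system with many more compartments, stiff ODE solvers, and physiological parameter databases, but the mass-balance structure is exactly this.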
2.5.1 PK-SIM®
Commercial PBPK models have been available for many years; more recently, there has been an expansion in the number of freely available tools accessible to users. Eissing et al. [39] describe the development and application of the modeling platforms PK-SIM® and MoBi® for PBPK modeling. PK-SIM® is suitable for use by nonexperts; although it is restricted in its functionalities, it is fully compatible with MoBi®, enabling more expert users to customize their models. For example, import into MoBi® allows the addition of subcompartments and visualization of the model structure. MoBi® toolboxes enable the models to be exported into MATLAB or R. The integrated database for PK-SIM® includes anatomical and physiological parameters for humans and common preclinical species (mouse, rat, minipig, dog, and monkey). PBPK model building blocks are grouped into different components: individuals, populations, compounds, formulations, administration protocols, events, and observed data. Different building blocks can be selected, combined, and reused to build models for different purposes (e.g., adapted for different individuals or routes of administration).
Case Study 6: Using PK-SIM® for PBPK Modeling of Atenolol
PK-SIM®, part of the Open Systems Pharmacology Suite, can be freely downloaded from http://www.systems-biology.com/products/pk-sim/ (accessed October 2021), which also provides links to online video tutorials for using the software. Associated documentation is also available from: https://docs.open-systems-pharmacology.org/working-with-pk-sim/pk-sim-documentation (accessed October 2021). A case study was again developed using atenolol, for which an existing PBPK model is available in the literature [40]. The model was developed by inputting relevant information for each of the "building blocks" within the software. For physiological and anatomical parameters, a user may enter their own values, use default values from the software, or a combination of both. Basic information regarding the chemical, such as molecular weight and log P, should be entered, and additional ADME properties can be included where available. Details of the chosen dosing regimen are added as the administration protocol, enabling the concentration–time curve(s) to be generated in blood or tissues. Where available, observed data can be added to assist model development/evaluation. Table 5 shows the parameters entered for the generation of the model for atenolol, and Fig. 9 shows example inputs and outputs for the model.
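Before running the full PBPK model, the Vd and total clearance values retrieved from PubChem in Case Study 1 allow a rough one-compartment cross-check of the 50 mg, 12-minute infusion; a sketch (one-compartment kinetics is a simplification of the multi-compartment disposition noted earlier for atenolol):

```python
import math

def one_compartment_infusion(dose_mg: float, t_inf_h: float, v_l: float,
                             cl_l_per_h: float, t_h: float) -> float:
    """Plasma concentration (mg/L) for a constant-rate IV infusion in a
    one-compartment model: C = (R0/CL)(1 - exp(-k*t)) during the infusion,
    then exponential decline with k = CL/V afterwards."""
    k = cl_l_per_h / v_l
    r0 = dose_mg / t_inf_h                     # infusion rate, mg/h
    if t_h <= t_inf_h:
        return (r0 / cl_l_per_h) * (1.0 - math.exp(-k * t_h))
    c_end = (r0 / cl_l_per_h) * (1.0 - math.exp(-k * t_inf_h))
    return c_end * math.exp(-k * (t_h - t_inf_h))

# Atenolol, 50 mg over 12 min (0.2 h); mid-range PubChem values:
# Vd ~ 88 L, total clearance ~ 137 mL/min (~8.2 L/h).
v, cl = 88.0, 8.2
half_life_h = math.log(2) * v / cl   # ~7.4 h, broadly consistent with reports
c_peak = one_compartment_infusion(50.0, 0.2, v, cl, 0.2)
print(round(half_life_h, 1), round(c_peak, 3))
```

A crude check of this kind (t½ = 0.693 × V/CL) is a useful sanity test on the database values before they are fed into the PBPK building blocks.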
Table 5 Parameters used in generating the PBPK model for atenolol

Data for the individual (a): 70 kg male human

| Organ/tissue | Blood flow (mL/min) | Organ volume (mL) |
|---|---|---|
| Adipose | 260 | 10,000 |
| Bone | 250 | 4579 |
| Brain | 700 | 1450 |
| Heart | 150 | 310 |
| Kidney | 1100 | 280 |
| Liver | 1650 | 1690 |
| Lung | 5233 | 1172 |
| Muscle | 750 | 35,000 |
| Pancreas | 133 | 77 |
| Skin | 300 | 7800 |
| Spleen | 77 | 192 |
| Stomach | 38 | 154 |
| Arterial blood | – | 1698 |
| Venous blood | – | 3396 |
| Small intestine | 600 | |
| Colon | 700 | |

Data for the compound (from Peters 2008)

| Property | Value | Source |
|---|---|---|
| M Wt | 0.266 kDa | – |
| Log P | 0.11 | CLogP, BioByte Corp. |
| Ion type | Base | DrugBank |
| pKa | 9.6 | ACD/pKa, ACD Labs |
| Solubility (at pH 6.5) | 51 mmol/L | DrugBank |
| Fraction unbound (fu) | 0.96 | DrugBank |

Administration protocol

| Dose | 50 mg |
| Route | Intravenous, infusion (12 min) |

(a) Specified values from [40]; all other values are default values from PK-SIM®

Fig. 9 Using PK-SIM® to derive a PBPK model for atenolol following an intravenous infusion. Building blocks for the model are populated in terms of: (A) the individual (here a 70 kg male human); (B) the properties of the chemical (here molecular weight, log P, and fu entered for atenolol); (C) the administration protocol (here an intravenous infusion). Example output: the plasma concentration–time profile (concentration in μmol/L vs. time in h) for atenolol in venous blood. Note that existing data can be used to evaluate the model (blue circles indicate existing data; the red line is the predicted concentration–time curve for atenolol following i.v. infusion). Individual curves can be plotted for different organs of interest

2.5.2 PLETHEM

More recently, a population life course exposure to health effects model (PLETHEM), for PBPK modeling using R, has been developed. This open-source tool is freely available from https://www.scitovation.com/plethem/ (accessed October 2021). A description of the tool, its development, and applications is given in Pendse et al. [41], with source code, a user guide, and tutorials available on the website. PLETHEM provides a modular platform for users to develop, share, and audit PBPK models, with the aim of increasing their acceptance for regulatory use. It includes a chemical database, QSAR models, and the physiological and anatomical parameters necessary for PBPK model development. Mallick et al. [42] provide an example of using this package to determine whether age-dependent differences are apparent following exposure of adults and children to pyrethroids.

2.5.3 QIVIVE Tools
The webtool for quantitative in vitro-to-in vivo extrapolation (QIVIVE tools) developed by Punt et al. [43] is freely available from http://www.qivivetools.wur.nl/ (accessed October 2021). The tool is presented as a workflow, enabling parameterization according to physicochemical properties, metabolism, and uptake, resulting in the predicted concentration–time curve in the organ(s) of interest. Input parameters such as PPB or tissue:plasma partition coefficients can be calculated by the software or input directly if experimental values are available. Punt et al. [43] provide an example of using the software to predict oral equivalent doses from in vitro data for food additives and comparing these to human exposure estimates.
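The core QIVIVE calculation is reverse dosimetry: predict the steady-state plasma concentration produced by a unit oral dose, then scale an in vitro active concentration into an oral equivalent dose. A sketch with hypothetical parameter values (the webtool's own workflow additionally parameterizes metabolism and uptake, which are not shown here):

```python
def css_unit_dose_uM(bw_kg: float, f_oral: float, cl_l_per_h: float,
                     mw_g_per_mol: float) -> float:
    """Steady-state plasma concentration (uM) for a 1 mg/kg/day oral dose:
    Css = (absorbed dose rate) / clearance, converted from mg/L to uM."""
    dose_rate_mg_per_h = 1.0 * bw_kg * f_oral / 24.0
    css_mg_per_l = dose_rate_mg_per_h / cl_l_per_h
    return css_mg_per_l / mw_g_per_mol * 1000.0

def oral_equivalent_dose(ac50_uM: float, css_per_unit_dose_uM: float) -> float:
    """Reverse dosimetry: scale an in vitro active concentration (AC50, uM)
    by the Css (uM) predicted for a 1 mg/kg/day dose, giving the oral
    equivalent dose in mg/kg/day."""
    return ac50_uM / css_per_unit_dose_uM

# Hypothetical compound: 70 kg adult, 50% oral absorption, CL 8 L/h, MW 266.
css = css_unit_dose_uM(bw_kg=70.0, f_oral=0.5, cl_l_per_h=8.0, mw_g_per_mol=266.0)
oed = oral_equivalent_dose(ac50_uM=10.0, css_per_unit_dose_uM=css)
print(round(css, 3), round(oed, 1))  # mg/kg/day needed to reach the in vitro AC50
```

The resulting oral equivalent dose can then be compared with human exposure estimates, which is the comparison Punt et al. [43] illustrate for food additives.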
3 Conclusion

In order to predict drug toxicity (or therapeutic activity), information is required on both the inherent biological activity of the chemical and its concentration–time profile at the target site. To assist prediction of the latter, a range of freely available tools have been presented for predicting ADME-relevant parameters, across three levels of information content and biological relevance. These have been categorized as: Level (i) simple physicochemical properties, Level (ii) ADME parameters, and Level (iii) concentration–time curves in blood and tissues. In making predictions, the user must ascertain the most appropriate model and level of detail necessary for the question at hand, i.e., rapid screening of thousands of chemicals or more detailed characterization of a select few. Whichever model type is selected, there are some overarching considerations. Firstly, does the chemical of interest fit within the applicability domain of the model? This requires knowledge of the training set, which may, or may not, be available. If a model can make a reliable prediction for the chemical of interest, how transparent and readily interpretable is the model? For a simple (Q)SAR, it is usually apparent how a candidate structure could be modified in order to optimize its ADME profile; this is not always the case for more complex modeling methods. As this chapter has demonstrated, there may be several models from which to choose. Comparing results across multiple predictions to establish the most probable has been recommended [12]. Another approach is to investigate the predictions for similar chemicals (where the values are known) and so determine which is the most
appropriate model for the chemical space of interest. One disadvantage of many of the models is that they have been built on public data for "successful" candidates; hence the data sets may be biased in relation to the property space covered. Efforts to develop more balanced datasets for modeling include computational methods as well as data-sharing initiatives across the pharmaceutical industry to include more "failed" drug candidates, i.e., those that may have unfavorable pharmacokinetic profiles [4, 5, 32]. As software has developed, predictions for "passive" processes, such as partitioning across membranes, have become increasingly reliable. However, predicting active processes, such as rates of metabolism, renal clearance, or transport processes (and the potential interaction of chemicals that may compete for, inhibit, or promote these processes), remains a significant challenge. Resolving these issues would be a tremendous advance in our ability to predict pharmacokinetic profiles. Issues regarding sustainability, versioning, and maintaining the web presence of freely available tools can be more problematic than for commercial software. However, the growth in the number of tools available and the expansion of their capabilities are promising for future developments in this area.
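The "similar chemicals" strategy above can be sketched with a Tanimoto comparison over pre-computed fingerprints. The fingerprints here are hypothetical sets of "on" bit indices; in practice they would come from a cheminformatics toolkit:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two fingerprints given as sets of on-bits."""
    if not fp_a and not fp_b:
        return 1.0
    shared = len(fp_a & fp_b)
    return shared / (len(fp_a) + len(fp_b) - shared)

def rank_known_neighbors(query_fp, known_fps):
    """Rank chemicals with known experimental values by similarity to the query."""
    return sorted(known_fps,
                  key=lambda name: tanimoto(query_fp, known_fps[name]),
                  reverse=True)

# Hypothetical fingerprints for three chemicals with measured ADME values
known = {"chem_A": {1, 2, 3, 8}, "chem_B": {2, 3, 4}, "chem_C": {9, 10}}
print(rank_known_neighbors({1, 2, 3}, known))  # ['chem_A', 'chem_B', 'chem_C']
```

Checking how well each candidate model predicts the top-ranked neighbors then indicates which model best covers the chemical space of interest.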
Acknowledgments

The funding of the European Partnership for Alternative Approaches to Animal Testing (EPAA) is gratefully acknowledged.

References

1. Prentis RA, Lis Y, Walker SR (1988) Pharmaceutical innovation by the seven UK-owned pharmaceutical companies (1964–1985). Br J Clin Pharmac 25:387–396
2. Kerns EH, Di L (2008) Drug-like properties: concepts, structure design and methods: from ADME to toxicity optimization. Elsevier, Burlington, USA
3. Waring MJ, Arrowsmith J, Leach AR et al (2015) An analysis of the attrition of drug candidates from four major pharmaceutical companies. Nat Rev Drug Discov 14:475–486
4. Sanz F, Pognan F, Steger-Hartmann T et al (2017) Legacy data sharing to improve drug safety assessment: the eTOX project. Nat Rev Drug Discov 16:811–812
5. Pognan F, Steger-Hartmann T, Diaz C (2021) The eTRANSAFE project on translational safety assessment through integrative knowledge management: achievements and perspectives. Pharmaceuticals 14:237
6. Cherkasov A, Muratov EM, Fourches D et al (2014) QSAR modeling: where have you been? Where are you going to? J Med Chem 57:4977–5010
7. Ferreira LLG, Andricopulo AD (2019) ADMET modeling approaches in drug discovery. Drug Discov Today 24:1157–1165
8. Hemmerich J, Ecker GF (2020) In silico toxicology: from structure–activity relationships towards deep learning and adverse outcome pathways. WIREs Comput Mol Sci 10:e1475
9. Madden JC, Enoch SJ, Paini A et al (2020) A review of in silico tools as alternatives to animal testing: principles, resources and applications. Altern Lab Anim 48:146–172
10. Wang Y, Xing J, Xu Y (2015) In silico ADME/T modelling for rational drug design. Q Rev Biophys 48:488–515
11. Madden JC, Pawar G, Cronin MTD et al (2019) In silico resources to assist in the
development and evaluation of physiologically-based kinetic models. Comp Tox 11:33–49
12. Kar S, Leszczynski J (2020) Open access in silico tools to predict the ADMET profiling of drug candidates. Expert Opin Drug Disc 15(12):1473–1487. https://doi.org/10.1080/17460441.2020.1798926
13. Pawar G, Madden JC, Ebbrell D et al (2019) In silico toxicology data resources to support read-across and (Q)SAR. Front Pharmacol 10:561
14. Mostrag-Szlichtyng A, Worth A (2010) Review of QSAR models and software tools for predicting biokinetic properties. JRC scientific and technical reports EUR 24377 EN—2010. https://doi.org/10.2788/94537
15. Card ML, Gomez-Alvarez V, Lee W-H et al (2017) History of EPI Suite™ and future perspectives on chemical property estimation in US toxic substances control act new chemical risk assessments. Environ Sci Process Impacts 19:203–212
16. Lipinski CA, Lombardo F, Dominy BW et al (1997) Experimental and computational approaches to estimate solubility and permeability in drug discovery and development settings. Adv Drug Deliver Rev 23:3–25
17. Ghose AK, Viswanadhan VN, Wendoloski JJ (1999) A knowledge based approach in designing combinatorial and medicinal chemistry libraries for drug discovery: 1. Qualitative and quantitative characterization of known drug databases. J Comb Chem 1:55–68
18. Oprea TI (2000) Property distribution of drug-related chemical databases. J Comput Aid Mol Des 14:251–264
19. Veber DF, Johnson SR, Cheng H-Y et al (2002) Molecular properties that influence the oral bioavailability of drug candidates. J Med Chem 45:2615–2623
20. Varma MVS, Obach RS, Rotter C et al (2010) Physicochemical space for optimum oral bioavailability: contribution of human intestinal absorption and first-pass elimination. J Med Chem 53:1098–1108
21. Potts RO, Guy RH (1992) Predicting skin permeability. Pharm Res 9:662–669
22.
Patel M, Chilton ML, Sartini A et al (2018) Assessment and reproducibility of quantitative structure–activity relationship models by the nonexpert. J Chem Inf Model 58:673–682
23. Przybylak KR, Madden JC, Covey-Crump E et al (2018) Characterisation of data resources for in silico modelling: benchmark datasets for ADME properties. Expert Opin Drug Met 14:169–181
24. Dong J, Wang N-N, Yao Z-J et al (2018) ADMETlab: a platform for systematic ADMET evaluation based on a comprehensively collected ADMET database. J Cheminform 10:29
25. Pires DEV, Blundell TL, Ascher DB (2015) pkCSM: predicting small-molecule pharmacokinetic and toxicity properties using graph-based signatures. J Med Chem 58:4066–4072
26. Daina A, Zoete V (2016) A BOILED-egg to predict gastrointestinal absorption and brain penetration of small molecules. ChemMedChem 11:1117–1121
27. Egan WJ, Merz KM Jr, Baldwin JJ (2000) Prediction of drug absorption using multivariate statistics. J Med Chem 43:3867–3877
28. Daina A, Michielin O, Zoete V (2017) SwissADME: a free web tool to evaluate pharmacokinetics, drug-likeness and medicinal chemistry friendliness of small molecules. Sci Rep 7:42717
29. Yang H, Lou C, Sun L et al (2019) admetSAR 2.0: web-service for prediction and optimization of chemical ADMET properties. Bioinformatics 35:1067–1069
30. Schyman P, Liu R, Desai V et al (2017) vNN web server for ADMET predictions. Front Pharmacol 8:889
31. Organisation for Economic Co-operation and Development (2007) Guidance document on the validation of (quantitative) structure–activity relationships [(Q)SAR] models, Series on Testing and Assessment No. 69. Paris
32. Yang M, Chen J, Xu L et al (2018) A novel adaptive ensemble classification framework for ADME prediction. RSC Adv 8:11661–11683
33. Tian S, Djoumbou-Feunang Y, Greiner R et al (2018) CypReact: a software tool for in silico reactant prediction for human cytochrome P450 enzymes. J Chem Inf Model 58:1282–1291
34. Ridder L, Wagener M (2008) SyGMa: combining expert knowledge and empirical scoring in the prediction of metabolites. ChemMedChem 3:821–832
35. Zaretzki J, Matlock M, Swamidass SJ (2013) XenoSite: accurately predicting CYP-mediated sites of metabolism with neural networks. J Chem Inf Model 53:3373–3383
36.
Stepan AF, Walker DP, Bauman J et al (2011) Structural alert/reactive metabolite concept as applied in medicinal chemistry to mitigate the risk of idiosyncratic drug toxicity: a perspective based on the critical examination of trends in the top 200 drugs marketed in the United States. Chem Res Toxicol 24:1345–1410
37. Bois FY, Brochot C (2016) Modelling pharmacokinetics. In: Benfenati E (ed) In silico methods for predicting drug toxicity. Humana Press, Springer, New York
38. Kuepfer L, Niederalt C, Wendl T et al (2016) Applied concepts in PBPK modeling: how to build a PBPK/PD model. CPT Pharmacometrics Syst Pharmacol 5:516–531
39. Eissing T, Kuepfer L, Becker C et al (2011) A computational systems biology software platform for multiscale modeling and simulation: integrating whole-body physiology, disease biology, and molecular reaction networks. Front Pharmacol 2:4
40. Peters SA (2008) Evaluation of a generic physiologically based pharmacokinetic model for lineshape analysis. Clin Pharmacokinet 47:261–275
41. Pendse N, Efremenko AY, Hack CE et al (2020) Population life-course exposure to health effects model (PLETHEM): an R package for PBPK modeling. Comput Toxicol 13:100115
42. Mallick P, Song G, Efremenko AY et al (2020) Physiologically based pharmacokinetic modeling in risk assessment: case study with pyrethroids. Toxicol Sci 176:460–469
43. Punt A, Pinckaers N, Peijnenburg A et al (2021) Development of a web-based toolbox to support quantitative in-vitro-to-in-vivo extrapolations (QIVIVE) within nonanimal testing strategies. Chem Res Toxicol 34(2):460–472. https://doi.org/10.1021/acs.chemrestox.0c00307
Chapter 4

In Silico Tools and Software to Predict ADMET of New Drug Candidates

Supratik Kar, Kunal Roy, and Jerzy Leszczynski

Abstract

The application of computational techniques and in silico tools not only promotes the reduction of animal experimentation but also saves time and money, supporting the rational design of drugs and the targeted synthesis of those "hits" that show drug-likeness and possess a suitable absorption, distribution, metabolism, excretion, and toxicity (ADMET) profile. With the globalization of diseases, the emergence of drug resistance over time, and the modification of viruses and microorganisms, computational tools and artificial intelligence are the future of drug design and one of the important areas where the principles of sustainability and green chemistry (GC) fit perfectly. Most new drug entities fail in clinical trials over issues of drug-associated human toxicity. Although the ecotoxicity of new drugs is rarely considered, it is high time that ecotoxicity prediction received equal importance alongside human-associated drug toxicity. Thus, the present book chapter discusses the available in silico tools and software for the fast, preliminary prediction of a series of human-associated toxicities and ecotoxicities of new drug entities, to screen possibly safer drugs before they enter preclinical and clinical trials.

Key words ADME, Endpoint, In silico, Software, Toxicity, Web server
1 Introduction

The key reason for late-stage drug failure is unwanted pharmacokinetic (PK) and pharmacodynamic (PD) properties [1]. From drug design to synthesis, followed by preclinical and clinical trials, there should be a fine balance in a drug candidate's absorption, distribution, metabolism, elimination, and toxicity (ADMET) profile for it to reach the market as a successful candidate. Therefore, early detection of PK/PD properties, considering drug-likeness scores and ADMET profiling, can save a great deal of money, manpower, time, and animal sacrifice [2, 3]. Researchers use terms like PK/PD study, drug-likeness, and ADMET profiling interchangeably, but they are distinctly different from each other, and each one has a vital role to play in the drug discovery process [4]. Interestingly, all these parameters are directly
Emilio Benfenati (ed.), In Silico Methods for Predicting Drug Toxicity, Methods in Molecular Biology, vol. 2425, https://doi.org/10.1007/978-1-0716-1960-5_4, © The Author(s), under exclusive license to Springer Science+Business Media, LLC, part of Springer Nature 2022
Supratik Kar et al.
or indirectly connected with each other. Thus, although the topic of the present chapter is toxicity prediction, ADME properties have a critical role in influencing the toxicity of the studied drug candidates [5]. An additional essential aspect of a probable drug candidate is concern about its ecotoxicity. Due to the U.S. Environmental Protection Agency's (USEPA's) strict environmental risk management policy, ecotoxicity prediction for a drug candidate is becoming as important as its toxicity to humans. Moreover, the USEPA has recommended in silico modeling to study the toxicities of investigated compounds, with the aim of stopping the use of live mammals by 2035 [6]. Thus, in silico models are critical to making this plan successful. Single quantitative structure–activity/toxicity relationship (QSAR/QSTR) models and machine learning models are the most commonly used in silico tools for toxicity prediction of drug candidates. In most cases, however, these are local models, valid for quite a small number and range of molecule types. Additionally, to use these models, users need to understand the underlying algorithm and the model itself. By contrast, in silico tools in the form of standalone software and web servers are user friendly, well documented, and in most cases based on global models. Therefore, given researchers' needs for fast prediction and the saving of animals, in silico tools are the future of drug design and discovery, with special interest in toxicity prediction.
2 How Are In Silico Tools Integrated in Drug Design and Discovery?

Web servers/web platforms/standalone software related to drug discovery problems are integrated collections of multiple in silico models developed with classification- and regression-based QSAR, machine learning, and artificial intelligence (AI) approaches, employing datasets that range from absorption, distribution, metabolism, and toxicity endpoints to drug-likeness [7]. The idea behind these web servers is the fast, confident prediction of the ADMET profile as well as the ecotoxicity of any query drug molecule or chemical. Drug lead molecules may fail through lack of drug-likeness and/or an unfavorable ADMET profile, while their inherent ecotoxicity may directly affect the ecosystem. As per one report, there is a 90% attrition rate among probable drug candidates, and a single new drug can cost a pharmaceutical company around US$2.6 billion [8]. The most common reason for this high attrition rate is unwanted toxicity of drug molecules observed during preclinical and clinical trials. A series of open-access and commercial web servers are available which provide predictions for diverse human toxicity endpoints, ADME profiling, and drug-likeness. Although the number of endpoints is still quite small, many major web servers and software have started to incorporate ecotoxicity prediction
endpoints due to upcoming strict environmental risk policies [9]. Drug-associated toxicity and ecotoxicity endpoints, ADME profiling endpoints, and the algorithms used to develop the in silico models incorporated in multiple web servers and software are reported in Table 1.
3 Tools

3.1 Open Access Tools, Software, and Web Servers for Toxicity Prediction

3.1.1 ADMETlab
Dong et al. [10, 11] prepared a web server platform named ADMETlab, which offers a user-friendly, open-source web interface for the systematic ADMET assessment of chemicals based on a database comprising 288,967 entries. The web server has four modules which help users to compute drug-likeness scores, evaluate predictions for 31 ADMET endpoints, and search by similarity. Recently, the same group updated the web server with an improved version named ADMETlab 2.0, which offers greater capability to help medicinal chemists fast-track the drug research and development process. ADMETlab 2.0 comprises two modules. The first is "ADMET evaluation," which is compiled from a series of prediction models trained with a multi-task graph attention framework. It enables the calculation and prediction of 17 physicochemical properties, 23 ADME endpoints, 13 medicinal chemistry measures, and 27 toxicity endpoints. The predictions are based on a high-quality database of 0.25 M entries. Additionally, it includes 8 toxicophore rules covering 751 substructures. The second module, "ADMET screening," is designed for batch-mode evaluation and the prediction of molecular datasets. This module is appropriate for the assessment of analytically designed or visually screened molecules even before chemical synthesis and biochemical assays, to speed up the drug discovery process.
Input and Output Files
For ADMET evaluation, the user can enter a SMILES string or draw the query molecule using the JSME editor. As a caveat, the developers state that chemicals with more than 128 atoms are not recommended for evaluation. The output can be downloaded in .csv or .pdf format, with the predicted data and a clear explanation of the outcomes. For the ADMET screening module, one can upload a series of molecules as a list of SMILES strings or as .sdf/.txt files; all output can similarly be exported as a .csv file. As an example, we provided the SMILES string of acetaminophen and predicted its ADMET profile. The predicted toxicity outcomes are illustrated in Fig. 1.
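A quick pre-check against the 128-atom recommendation can be performed locally before submission. The rough heavy-atom counter below is an approximation (it is not a full SMILES parser and it ignores hydrogens):

```python
import re

# Bracket atoms ([...]) count as one atom; two-letter organic-subset symbols
# (Cl, Br) must be matched before the one-letter ones.
_ATOM = re.compile(r"\[[^\]]+\]|Cl|Br|[BCNOPSFI]|[bcnops]")

def heavy_atom_count(smiles):
    """Approximate number of heavy atoms in a SMILES string."""
    return len(_ATOM.findall(smiles))

def within_admetlab_limit(smiles, limit=128):
    """Pre-check a molecule against the developers' 128-atom recommendation."""
    return heavy_atom_count(smiles) <= limit

print(heavy_atom_count("CC(=O)Nc1ccc(O)cc1"))  # acetaminophen: 11 heavy atoms
```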
3.1.2 admetSAR
This is a web server for the prediction and optimization of ADMET properties of probable drug candidates. The initial version was also open source, with a search option for text and structure. The associated database offers ADMET-associated property data
Table 1 Drug-associated human toxicity and ecotoxicity endpoints along with commonly used modeling algorithms in in silico tools and software

Drug-associated human toxicity endpoints: Acute oral toxicity (AOT), acute rat toxicity (ART), adverse outcomes (Tox21) pathways and toxicity targets, Ames toxicity (AT), Ames bacterial mutation assays (ABMA), mutagenicity (MT), plasma protein binding (PPB), skin sensitization (SkinSen), cytotoxicity (CytTox), cardiotoxicity (CarTox), hepatotoxicity (HepTox), immunotoxicity (ImmTox), carcinogenicity (CAR), drug–drug interactions (DDI), drug-induced liver injury (DILI), eye irritation (EI), human ether-à-go-go inhibition (hERG), MetaTox, micronuclear toxicity (MT), endocrine disruption (ED), skin permeation (SP), BRENK structural alert (BRENK)

Ecotoxicity endpoints: Biodegradation (BD), crustacean aquatic toxicity (CT), fish aquatic toxicity (FAT), Tetrahymena pyriformis toxicity (TPT), honeybee toxicity (HBT), fathead minnow toxicity (FMT)

Associated ADME properties related to toxicity: Caco-2 cell permeability (CacoP), blood–brain barrier (BBB) permeation, human intestinal absorption (HIA), oral bioavailability (OB), CYP inhibitors and substrates (CYPI/S), P-gp inhibitors and substrates (P-gp+/P-gp−), volume of distribution (VD), clearance (CL) [total, renal, hepatic, biliary], t1/2, blood–placenta barrier (BPB), sites of metabolism (SOM), quinone formation (QF), maximum recommended daily dose (MRDD), breast cancer-resistant protein (BCRP), organic cation transporter 2 (OCT2), organic anion transporter polypeptide 1B1 (OATP1B1)

Modeling algorithms implemented in the tools/software: AdaBoost (AB), convolutional neural networks (CNN), decision tree (DT), extra trees classifier (ETC), flexible discriminant analysis (FlDA), gradient boosting (GB), k-nearest neighbor (kNN), learning base model (LBM), linear discriminant analysis (LDA), maximum likelihood (MLHD), multiple linear regression (MLR), Naïve Bayes (NB), nearest centroid (NC), partial least squares (PLS), principal component analysis (PCA), quantitative estimate of drug-likeness (QED), quantitative structure–activity relationship (QSAR), radial basis function (RBF), random forests (RF), read-across (RA), recursive partitioning regression (RP), rule-based C5.0 algorithm (R C5.0), support vector machine (SVM), SVM implemented via sequential minimal optimization (SMOreg/SMO), variable nearest neighbor (vNN)
Fig. 1 Predicted toxicity values of different endpoints by ADMETlab 2.0
taken from the literature. The previous version of admetSAR covered over 210,000 ADMET-annotated data points for more than 96,000 chemicals, with 45 kinds of ADMET properties [12]. That version had 22 in silico models for the prediction of ADMET properties of new query compounds. Later, the improved version admetSAR 2.0 included 47 models for the prediction of diverse toxicity endpoints (around 50) associated with either drug discovery or environmental risk assessment [13]. The prediction endpoints and major characteristics of admetSAR 2.0 are illustrated in Table 2.

Input and Output Files
The user provides the SMILES string of a query molecule and predicts by choosing admetSAR1 (the older version) or admetSAR 2.0 (the newer version). The output file offers classification- as well as regression-based predictions. The prediction outcomes can be copied and pasted for further evaluation in Excel or text file format. We
Table 2 Studied prediction endpoints and characteristics of admetSAR 2.0

Studied endpoints:
- Absorption: Caco-2 permeability, human intestinal absorption (HIA), human oral bioavailability (HOB)
- Distribution: Plasma protein binding (PPB), P-glycoprotein substrate/inhibitor, blood–brain barrier penetration (BBB), blood–placenta barrier (BPB)
- Metabolism: Cytochrome P450 (CYP450) substrates (CYP2D6, 2C9, 3A4) and inhibitors (CYP1A2, 2D6, 2C9, 2C19, 3A4); pharmacokinetic transporters: BSEPi, BRCPi, OCT2i, OCT1i, MATE1i, OATP1b3i, OATP1b1i, OATP2b1i
- Excretion: Half-life (t1/2), renal clearance
- Toxicity: Organ toxicity [drug-induced liver injury, human ether-à-go-go-related gene (hERG) inhibition, acute toxicity, eye injury and eye corrosion], genomic toxicity [Ames mutagenesis, carcinogenesis, micronucleus assay], ecotoxicity [Tetrahymena pyriformis toxicity, phytoplankton toxicity, honeybee toxicity, avian toxicity, biodegradation]

Major characteristics:
- QSAR-based ADMET property prediction employing 47 regression- and classification-based models
- Predicts toxicity along with ADME properties for new drug candidates
- Predicts toxicity for organic chemicals of environmental concern
- Covers endpoints from species toxicity to biodegradation prediction, which is helpful for environmental hazard and risk assessment
have used the acetaminophen SMILES as input; the outcomes obtained with admetSAR 2.0 are illustrated in Fig. 2.

3.1.3 CypReact
This is another freely accessible in silico tool [14, 15], capable of predicting enzymatic reactions with the cytochrome P450 (CYP450) enzymes, which are largely responsible for the metabolism of drug molecules during the pharmacokinetic process. Although metabolism is distinct from toxicity, toxicity can vary widely depending on the metabolites formed and the onset of metabolism. Therefore, metabolism directly affects the toxicity of the query molecules, and knowing the type and onset of metabolism is very important. CypReact addresses the need for metabolism prediction: whether a specific drug will interact with one or multiple metabolizing CYP450 enzymes, predicting individual enzymatic reactions. CypReact is capable of identifying whether the query compound will react with any one of nine CYP450 enzymes (CYP1A2, CYP2A6, CYP2B6, CYP2C8, CYP2C9, CYP2C19, CYP2D6, CYP2E1, and CYP3A4); it therefore contains nine different subtools, one for each CYP450 enzyme. The prediction model under CypReact has been built and validated employing 1632 molecules and efficiently predicts the
Fig. 2 Schematic workflow representation and obtained ADMET profile of acetaminophen employing ADMETSAR
metabolites of phase I, phase II, and microbial metabolism in humans. The prediction models were built using learning-based approaches such as SVM, ANN, RF, DT, LR, and ensemble models. The prediction outcome indicates whether the query is a reactant or nonreactant for the specific enzyme. The complete algorithm used to develop CypReact is portrayed in Fig. 3.
A standard .sdf file or a SMILES string can be used as input, and the output will be in the form of reactant or nonreactant for any of the nine CYPs mentioned above.
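The general idea of combining the outputs of several learned base models (such as the SVM, ANN, RF, DT, and LR models mentioned above) into a single ensemble verdict can be sketched with a simple majority vote. This is a generic illustration, not CypReact's actual implementation:

```python
from collections import Counter

def majority_vote(base_predictions):
    """Combine binary base-learner labels into one ensemble label."""
    return Counter(base_predictions).most_common(1)[0][0]

# Hypothetical outputs of five base models for one query molecule vs. one CYP
votes = ["reactant", "nonreactant", "reactant", "reactant", "nonreactant"]
print(majority_vote(votes))  # reactant
```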
3.1.4 CypRules
This is another open-source web server, used to predict the inhibition of CYP450 metabolism [16, 17]. It is capable of predicting inhibition for five different CYP450 enzymes: CYP1A2, CYP2C19, CYP2C9, CYP2D6, and CYP3A4. The C5.0
Fig. 3 Schematic workflow representation of CypReact web server
algorithm is used for inhibition prediction for all five enzymes in five different submodules, where the rules are computed from Mold2 2D descriptors. According to the developers, an SDF file containing 6000 chemicals has been tested successfully in a single run.
Each individual module accepts only the .sdf file format. The outcome contains three forms of data: (1) a visualization of the query molecule's structure; (2) the prediction output as an inhibitor or noninhibitor; and (3) the rule-set information underlying the prediction. As input, we tried one molecule to check what type of results to expect; the outcome is reported in Table 3.
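The rule sets returned by CypRules pair a descriptor with a threshold; evaluating such rules can be sketched as below. The descriptor names and threshold values here are hypothetical stand-ins for the Mold2 descriptors and C5.0 cutoffs:

```python
# Each rule: (descriptor name, threshold, label assigned when value <= threshold).
# Hypothetical rules in the spirit of C5.0 output on Mold2 2D descriptors.
RULES = [
    ("structural_information_content_order3", 1.66, "noninhibitor"),
    ("molecular_weight", 250.0, "noninhibitor"),
]

def classify(descriptors, default="inhibitor"):
    """Return (label, fired rule) for the first matching rule, else the default."""
    for name, threshold, label in RULES:
        if descriptors.get(name, float("inf")) <= threshold:
            return label, f"{name} <= {threshold}"
    return default, "no rule fired"

print(classify({"structural_information_content_order3": 1.2}))
```

Reporting the fired rule alongside the label mirrors CypRules' practice of returning the rule-set information with each prediction, which makes the outcome interpretable.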
3.1.5 FAFDrugs4
FAFDrugs4 is another web server to predict ADMET properties, as well as an efficient filtering tool for drug-likeness properties [18, 19]. Along with ADMET filtering, FAFDrugs4 offers removal of salts, counterions, and duplicate chemicals. The most recent version assists in silico and experimental screening, as it helps to select chemicals for in cellulo/in silico/in vitro assays. Another vital inclusion in the present version is a quantitative estimate of drug-likeness (QED) method, termed FAF-QED. The FAF-QED method includes major physicochemical descriptors and a search of 113 structural alerts (ALERTs), followed by computation of toxicophores and the identification of pan-assay interference structures (PAINS). The web server also has the option of optimization of the standardization–neutralization
Table 3 Prediction of 2,3,4,5,6-pentachloronitrobenzene by the CYP1A2 predictor module

Compound: 2,3,4,5,6-pentachloronitrobenzene
Outcome: Noninhibitor (Rule #1: structural information content order-3 index ...)

[...] > 1 are considered significantly differentiated, but such thresholds are arbitrary and highly dependent on the experimental conditions (doses, times, cells, ...).
3.3 RNA Sequencing

Nowadays, RNA sequencing (RNAseq) has become the primary technology used for gene expression profiling. Unlike microarrays, where intensity is measured on microarray chips, the RNA samples are sequenced to obtain hundreds of millions of reads (short sequence fragments). Before analyzing the relative abundance of transcripts/genes (counts) from the raw sequencing data, several processing steps must be performed, such as quality filtering and removal of undesired sequences, alignment of reads to a suitable reference genome, and quantification of the transcripts into counts. It should be noted that alignment is the major bottleneck in RNAseq analysis, as it is computationally expensive. The STAR aligner [25] has grown in popularity as a method that balances accuracy with efficiency, having good performance in systematic evaluations. Often, the expression values are normalized (including gene length and sequencing depth corrections) in the form of RPKM/FPKM (reads/fragments per kilobase per million reads), TPM (transcripts per kilobase million), or CPM (counts per million), and can be transformed to a base-10 logarithm [26]. Once the "count matrix" has been produced after alignment and quantification, differential expression (DE) analysis can be performed. The two most commonly used packages for detecting differentially expressed genes (DEG) are edgeR [27, 28] and DESeq2 [29]. Normalization of the counts for DEG analysis, i.e., Trimmed
Emerging Bioinformatics Methods and Resources in Drug Toxicology
Mean of M-values (TMM) for edgeR and Relative Log Expression (RLE) for DESeq, is commonly applied and shows consistently good performance. Then, to compare the expression levels of genes between two conditions, statistical models are fitted. A negative binomial model may be used but, rather than assuming that the dispersion is the same for all genes (i.e., that all genes have the same mean–variance relationship), other algorithms are usually preferred to estimate the dispersion, such as the quantile-adjusted conditional maximum likelihood (qCML) method implemented in edgeR. After a statistical model is computed, the significance of the expression difference between the two conditions is measured and converted to a p-value for each gene of the matrix. It is usual in biology to use a p-value criterion of 0.05. Adjusted p-values (i.e., corrected for multiple testing) are also applied in many studies, using the Bonferroni correction or the False Discovery Rate (FDR) of Benjamini and Hochberg [30], with the aim of limiting false-positive errors and keeping high confidence in the statistical results. The interpretation and visualization of the DEG results can be performed through volcano plots and heatmaps (with or without gene clustering):

1. In a volcano plot, both the fold change (i.e., log2 fold change) and a measure of the statistical significance (logCPM) are represented for each gene.

2. A heatmap in combination with hierarchical clustering allows exploration of the degree of similarity and the relationships among groups of genes that are differentially expressed in the same manner from one condition to another.

These plots are depicted in Fig. 1. Recently, single-cell RNA sequencing has emerged as a powerful technology for profiling the whole transcriptome of a large number of individual cells.
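The library-size normalization (CPM) and Benjamini–Hochberg FDR correction described above can be sketched in a few lines of plain Python (toy data; a real analysis would rely on edgeR or DESeq2):

```python
import math

def cpm(counts):
    """Counts-per-million for one sample (library-size normalization)."""
    total = sum(counts)
    return [c * 1e6 / total for c in counts]

def log2_cpm(counts, pseudocount=1.0):
    """Log-transformed CPM, with a pseudocount to avoid log(0)."""
    return [math.log2(v + pseudocount) for v in cpm(counts)]

def bh_adjust(pvalues):
    """Benjamini-Hochberg adjusted p-values (FDR) for a list of raw p-values."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for k, i in enumerate(reversed(order)):
        rank = m - k  # 1-based rank of this p-value in ascending order
        running_min = min(running_min, pvalues[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

print(cpm([100, 300, 600]))  # [100000.0, 300000.0, 600000.0]
print(bh_adjust([0.01, 0.04, 0.03, 0.5]))
```

Genes whose adjusted p-value falls below the chosen FDR level (and, optionally, whose |log2 fold change| exceeds a threshold) would then be reported as differentially expressed.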
The bioinformatic protocol for analyzing such data is broadly similar to that for bulk RNA sequencing; however, the large volumes of data generated by these experiments require more specific and suitable bioinformatics methods, such as the M3Drop [31], Seurat [32], or MAST [33] packages available in R. Guidelines and a tutorial on analyzing single-cell RNA sequencing data can be found in [34]. This technology has been applied to screen three cancer cell lines exposed to 188 chemicals, making it possible to measure how drugs shift the relative proportions of some genes in subsets of cells [35].

3.4 Biological Enrichment Analysis
Once a list of differentially expressed genes has been detected, it is common to compare it with databases of well-annotated gene sets such as Gene Ontology (GO) terms or biological pathways. Enrichment analyses against gene/protein networks and the integration of pathway-level analyses have become important tools for the
138
Karine Audouze and Olivier Taboureau
Fig. 1 (a) Representation of a volcano plot. Each point is defined by a log CPM value (counts per million) on the x-axis and a log FC value (fold change) on the y-axis and represents the variability of a gene's expression between two conditions. (b) Representation of a heatmap. The variation of gene expression is represented in rows as a function of the conditions, in columns. The color spectrum usually encodes the z-score (the number of standard deviations a value lies from the mean of all the values for the same gene). In our case, green is below the mean and red above the mean
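The row-wise z-scores described in the caption of Fig. 1b can be computed as follows (a minimal sketch using the sample standard deviation; heatmap functions in R typically perform this row scaling internally):

```python
from statistics import mean, stdev

def row_zscores(expression):
    """Standardize one gene's expression values across samples (one heatmap row)."""
    mu = mean(expression)
    sd = stdev(expression)  # sample standard deviation
    return [(value - mu) / sd for value in expression]
```
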
interpretation of transcriptomics data. A widely used approach is over-representation analysis (ORA), which applies a statistical test (such as the cumulative hypergeometric test) to assess the significance of the overlap between a set of genes of interest and each annotated gene set in the databases [36]. An extension of this approach is Gene Set Enrichment Analysis (GSEA), which directly uses the expression fold changes of all genes rather than a cutoff for selecting genes as in ORA [37]. In toxicogenomics studies, GSEA has made it possible to identify gene sets whose expression levels are closely associated with specific toxicological endpoints, notably liver toxicity [38]. Overall, many computational biological methods have been developed and reported for prioritizing candidate genes and for enrichment analysis [39, 40]. For example, Ingenuity Pathway Analysis (IPA; http://www.ingenuity.com) enrichment statistics and clusterProfiler [41] are tools able to reveal biological pathways and Gene Ontology (GO) terms perturbed in cells and tissues. A representation of the outcomes from clusterProfiler is depicted in Fig. 2. Weighted gene co-expression network analysis (WGCNA) is another method available in R that allows the identification of co-expressed genes from transcriptomics data [42]. One drawback to note is that the soft-power selection algorithm in WGCNA produces asymptotic curves that require choosing an arbitrary level of agreement with a scale-free network topology (e.g., 90%); small changes in this threshold can therefore lead to large changes in the selected power. However, such networks have shown interesting results in
Fig. 2 Over-representation analysis. (a) From a gene list obtained by differential gene expression analysis, functional enrichment identifies the impacted pathways. The gene ratio (x-axis) indicates the number of genes from the list that are involved in a pathway. The larger the sphere, the higher the number of genes involved in the pathway. The color of the sphere reflects the significance (p-value) of the pathway perturbed by the gene list. (b) A gene list obtained from DEG analysis is mapped onto a pathway (here, nonalcoholic fatty liver disease). A color scale distinguishes genes that are downregulated (green), upregulated (red), or not significantly deregulated (grey)
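The hypergeometric statistic that underlies ORA can be written with the Python standard library alone (illustrative; dedicated tools such as clusterProfiler also handle identifier mapping and multiple-testing correction). Here N is the size of the gene universe, K the number of genes annotated to the pathway, n the size of the DEG list, and k the observed overlap:

```python
from math import comb

def ora_pvalue(N, K, n, k):
    """Upper-tail hypergeometric probability of observing an overlap >= k."""
    total = comb(N, n)
    # Sum the probabilities of every overlap at least as extreme as k.
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total
```

A small p-value indicates that the DEG list contains more pathway genes than expected by chance.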
Emerging Bioinformatics Methods and Resources in Drug Toxicology 139
Fig. 3 Representation of the pathways most strongly deregulated by a list of genes from differential gene expression analysis, as a function of three time points and three doses. The gene fold enrichment (y-axis) is a score obtained by dividing the number of deregulated genes by the total number of genes involved in the pathway being analyzed, multiplied by the statistical mean for the same pathway
the repositioning of drugs in cancer [43] and also with toxicogenomics data [44, 45]. Interestingly, WGCNA can also be applied to proteomic and metabolomic data analysis [46]. Finally, it is common to observe an evolution of the gene expression profile as a function of the concentration (dose) and duration of chemical exposure. To interpret such dose–response and time-dependent processes, dedicated tools have been developed. maSigPro [47] and TinderMIX [48] can perform time-series analysis and detect significant temporal changes in gene expression, and their variation, at different concentrations of a compound. This approach was, for example, used to explore mechanisms of chemical-induced hepatic steatosis toxicity (Fig. 3) [49]. BMDExpress is dedicated to identifying subsets of genes that display significant dose–response behavior and to assessing the benchmark dose (BMD) at which the expression of a gene significantly deviates from that observed in control animals [50, 51].

3.5 Network Analysis
Drug action can be explored across multiple scales of complexity, from the molecular and cellular levels to the tissue and organism levels, over time and dose. Network science allows the integration of heterogeneous data sources and the quantification of their interactions [52]. In this framework, the perception that human diseases are
associated with a few dominant factors (reductionist) is replaced by the view of diseases as the outcome of many weak contributors (holistic). Network approaches are therefore sensitive methods that make it possible to consider, for example, entire pathways rather than single genes associated with diverse toxicity endpoints or adverse outcomes. In network science, features such as the nodes, i.e., the entities from which the network is built (chemicals, genes, pathways, clinical outcomes), the edges (which can represent physical interactions, functional interactions, or simply connections between multi-scale data), the degree distribution around a node and the resulting hubs, the degree of clustering, or the betweenness centrality are usually considered to assess the properties of a biological network. Tools such as Cytoscape (www.cytoscape.org), Neo4j (www.neo4j.com), a high-performance NoSQL graph database using Cypher-based query commands, or the igraph and network packages in R are commonly used to analyze biological networks. Several studies have reported new insights on chemical toxicity and adverse drug reactions based on network analysis [53, 54]. With the increasing number of publicly available gene expression datasets, and the promising results shown to date, network biology is expected to be of use both in characterizing the molecular mechanisms of drugs and in the early risk assessment of drug candidates. An example of a network visualization for an adverse outcome pathway (AOP) perturbed by a chemical is presented in Fig. 4.
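Two of the node-level features listed above (degree and local clustering coefficient) can be illustrated on a toy undirected network stored as an adjacency dictionary; real analyses would rely on igraph, Cytoscape, or similar tools:

```python
# Toy network: hub node "A" linked to "B", "C", "D"; "B" and "C" also linked.
toy = {"A": {"B", "C", "D"}, "B": {"A", "C"}, "C": {"A", "B"}, "D": {"A"}}

def degree(adj, node):
    """Number of edges incident to a node."""
    return len(adj[node])

def clustering(adj, node):
    """Fraction of a node's neighbor pairs that are themselves connected."""
    neighbors = sorted(adj[node])
    k = len(neighbors)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if neighbors[j] in adj[neighbors[i]])
    return 2.0 * links / (k * (k - 1))
```

The hub "A" has the highest degree but a low clustering coefficient, since only one of its three neighbor pairs is connected.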
Fig. 4 Network representation of several events from an adverse outcome pathway (AOP) associated with a chemical (acrolein), obtained from the sAOP server [55]
Fig. 5 Links between a chemical (glyphosate) and toxicity events found in abstracts from the PubMed database using the AOP-helpFinder tool. The size of a node reflects the number of abstracts co-mentioning the chemical (glyphosate) and one event. The color represents the type of event according to the AOP-Wiki database

3.6 Text Mining
Artificial intelligence methods such as text mining can also be used to identify and extract specific information from large amounts of text, complementing existing databases. Indeed, a large amount of knowledge is available but spread across the literature. For example, the AOP-helpFinder tool, a hybrid method that combines text mining (i.e., natural language processing) and graph theory (i.e., the Dijkstra algorithm), was specifically developed to identify reliable associations between a chemical of interest and biological events (Fig. 5) [56, 57]. This tool can automatically screen the >30 million abstracts available in the PubMed database. Combined with systems toxicology, text mining supports the development of computational models to predict adverse outcome pathways linked to small molecules [58]. Screening the full text of open-access publications can also be useful, as shown by Jensen et al., who used frequent item-set mining to extract fragmented information from full text [59]. Other tools that use text mining to systematically retrieve information on drug-toxicity associations exist, such as LimTox, a web-based biomedical search tool for hepatobiliary toxicity [60].
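The co-mention counting at the core of such tools can be caricatured in a few lines (a deliberately naive sketch: AOP-helpFinder layers natural language processing and graph-based scoring on top of simple co-occurrence):

```python
def comention_counts(abstracts, chemical, events):
    """Count abstracts that mention both the chemical and each toxicity event."""
    counts = {event: 0 for event in events}
    for text in abstracts:
        lowered = text.lower()
        if chemical.lower() in lowered:
            for event in events:
                if event.lower() in lowered:
                    counts[event] += 1
    return counts
```

In a visualization such as Fig. 5, these counts would drive the node sizes.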
4 Conclusion

Transcriptomics data (and, more generally, omics data) provide both gene- and pathway-level read-outs (e.g., changes in individual gene expression levels or statistical indices for the differential regulation
of sets of genes within a specific biological pathway) that can be used for selecting measurable and relevant biomarkers. For example, Suter et al. have shown that combining transcriptomics with proteomics and metabolomics data improved the ability to propose more specific biomarker candidates for liver hypertrophy, bile duct necrosis and/or cholestasis, and proximal tubule damage [61]. However, the exploitation of the data produced by these new technologies requires sophisticated and dedicated bioinformatics approaches aimed at maximizing the interpretation of these experiments. Many of these approaches are freely available in the R software, which is frequently updated and can be run on a variety of platforms (UNIX, Windows, macOS, etc.). In addition, the integration of heterogeneous data into network biology makes it possible to explore the impact of drug exposure across systems biology and thus to assess drug toxicity. Finally, with the revival of phenotypic screening technologies in drug discovery and image-based high-content screening (HCS), much information related to morphological changes of a cell (or of compartments within a cell) after drug exposure can be observed. The combination of genomic and phenotypic data with new bioinformatics methods such as generative adversarial networks [62] will be the next paradigm in drug toxicology and chemical risk assessment.
Acknowledgments This work was supported by the University of Paris and INSERM.

References

1. Asselah T, Durantel D, Pasmant E et al (2021) COVID-19: discovery, diagnostics and drug development. J Hepatol 74:168–184 2. Paul SM, Mytelka DS, Dunwiddie CT, Persinger CC, Munos BH, Lindborg SR et al (2010) How to improve R&D productivity: the pharmaceutical industry's grand challenge. Nat Rev Drug Discov 9:203–214 3. Mahlich J, Bartol A, Dheban S (2021) Can adaptive clinical trials help to solve the productivity crisis of the pharmaceutical industry? – a scenario analysis. Health Econ Rev 11:4 4. Taboureau O, El M'Selmi W, Audouze K (2020) Integrative systems toxicology to predict human biological systems affected by exposure to environmental chemicals. Toxicol Appl Pharmacol 405:115210 5. Aguayo-Orozco A, Audouze K, Brunak S, Taboureau O (2016) In silico systems
pharmacology to assess drug’s therapeutic and toxic effects. Curr Pharm Des 22:6895–6902 6. Wu Q, Taboureau O, Audouze K (2020) Development of an adverse drug event network to predict drug toxicity. Curr Res Tox 1:48–55 7. Wilson JL, Wong M, Chalke A, Stepanov N, Petkovic D, Altman RB (2019) PathFXweb: a web application for identifying drug safety and efficacy phenotypes. Bioinformatics 35: 4504–4506 8. Yilmaz S, Jonveaux P, Bicep C, Pierron L, Smail-Tabbone M et al (2009) Gene-disease relationship discovery based on model-driven data integration and database view definition. Bioinformatics 25:230–236 9. Stathias V, Koleti A, Vidovic D, Cooper DJ, Jagodnik KM et al (2018) Sustainable data and metadata management at the BD2KLINCS data coordination and integration center. Sci Data 5:180117
10. Igarashi Y, Nakatsu N, Yamashita T, Ono A, Ohno Y, Urushidani T, Yamada H (2015) Open TG-GATEs: a large-scale toxicogenomics database. Nucleic Acids Res 43: D921–D927 11. Ganter B, Tugendreich S, Pearson CI, Ayanoglu E, Baumhueter S, Bostian KA et al (2005) Development of a large-scale chemogenomics database to improve drug candidate selection and to understand mechanisms of chemical toxicity and action. J Biotechnol 119:219–244 12. Darde TA, Gaudriault P, Beranger R, Lancien C, Caillarec-Joly A et al (2018) TOXsIgN: a cross-species repository for toxicogenomic signatures. Bioinformatics 34: 2116–2122 13. Davis AP, Grondin CJ, Johnson RJ, Sciaky D, King BL et al (2017) The comparative Toxicogenomics database: update 2017. Nucleic Acids Res 45:D972–D978 14. Williams AJ, Grulke CM, Edwards J, McEachran AD, Mansouri K et al (2017) The CompTox chemistry dashboard: a community data resource for environmental chemistry. J Cheminform 9:61 15. Bray M-A, Singh S, Han H, Davis CT, Borgeson B et al (2016) Cell painting, a high-content image-based assay for morphological profiling using multiplexed fluorescent dyes. Nat Protocol 11:1757–1774 16. Plunkett LM, Kaplan LM, Becker RA (2015) Challenges in using the ToxRefDB as a resource for toxicity prediction modeling. Regul Toxicol Pharmacol 72:610–614 17. Lea IA, Gong H, Paleja A, Rashid A, Fostel J (2017) CEBS: a comprehensive annotated database of toxicological data. Nucleic Acids Res 45:D964–D971 18. Kuhn M, Campillos M, Letunic I, Jensen LJ, Bork P (2010) A side effect resource to capture phenotypic effects of drugs. Mol Syst Biol 6: 343 19. Xu R, Wang Q (2014) Large-scale combining signals from both biomedical literature and the FDA adverse event reporting system (FAERS) to improve post-marketing drug safety signal detection. BMC Bioinformatics 15:15–17 20. 
Rao M, Van Vleet TR, Ciurlionis R, Buck WR, Mittelstadt SW et al (2019) Comparison of RNA-Seq and microarray gene expression platforms for the toxicogenomic evaluation of liver from short-term rat toxicity studies. Front Genet 9:638 21. Buck WR, Waring JF, Blomme EA (2008) Use of traditional end points and gene dysregulation to understand mechanism of toxicity:
toxicogenomics in mechanistic toxicology. Methods Mol Biol 460:23–44 22. Gentleman RC, Carey VJ, Bates DM, Bolstad B, Dettling M et al (2004) Bioconductor: open software development for computational biology and bioinformatics. Genome Biol 5:R80 23. Shakya K, Ruskin HJ, Kerr G, Crane M, Becker J (2010) Comparison of microarray preprocessing methods. Adv Exp Med Biol 680:139–147 24. Ritchie ME, Phipson B, Wu D, Hu Y, Law CW, Shi W, Smyth GK (2015) Limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res 43:e47 25. Dobin A, Davis CA, Schlesinger F, Drenkow J, Zaleski C et al (2013) STAR: ultrafast universal RNA-seq aligner. Bioinformatics 29:15–21 26. Li B, Ruotti V, Stewart RM, Thomson JA, Dewey CN (2010) RNA-Seq gene expression estimation with read mapping uncertainty. Bioinformatics 26:493–500 27. Robinson M, McCarthy D, Smyth G (2010) edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics 26:139–140 28. Robinson MD, Oshlack A (2010) A scaling normalization method for differential expression analysis of RNA-seq data. Genome Biol 11:R25 29. Love MI, Huber W, Anders S (2014) Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol 15:550 30. Benjamini Y, Hochberg Y (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B 57:289–300 31. Andrews TS, Hemberg M (2019) M3Drop: dropout-based feature selection for scRNASeq. Bioinformatics 35:2865–2867 32. Butler A, Hoffman P, Smibert P, Papalexi E, Satija R (2018) Integrating single-cell transcriptomic data across different conditions, technologies and species. Nat Biotechnol 36:411–420 33. Finak G, McDavid A, Yajima M, Deng J, Gersuk V et al (2015) MAST: a flexible statistical framework for assessing transcriptional changes and characterizing heterogeneity in single-cell RNA sequencing data. Genome Biol 16:278 34.
Andrews TS, Kiselev VY, McCarthy D, Hemberg M (2021) Tutorial: guidelines for the computational analysis of single-cell RNA sequencing data. Nat Protoc 16:1–9
35. Srivatsan SR, McFaline-Figueroa JL, Ramani V, Saunders L, Cao J et al (2020) Massively multiplex chemical transcriptomics at single-cell resolution. Science 367:45–51 36. da Huang W, Sherman BT, Lempicki RA (2009) Bioinformatics enrichment tools: paths toward the comprehensive functional analysis of large gene lists. Nucleic Acids Res 37:1–13 37. Subramanian A, Tamayo P, Mootha VK, Mukherjee S, Ebert BL et al (2005) Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc Natl Acad Sci U S A 102:15545–15550 38. Kiyosawa N, Ando Y, Manabe S, Yamoto T (2009) Toxicogenomics biomarkers for liver toxicity. J Toxicol Pathol 22:35–52 39. Zolotareva O, Kleine M (2019) A survey of prioritization tools for mendelian and complex human diseases. J Integr Bioinform 16:20180068 40. Hernandez-de-Diego R, Tarazona S, Martinez-Mira C, Balzano-Nogueira L, Furio-Tari P et al (2018) PaintOmics 3: a web resource for the pathway analysis and visualization of multi-omics data. Nucleic Acids Res 46:W503–W509 41. Yu G, Wang LG, Han Y, He QY (2012) clusterProfiler: an R package for comparing biological themes among gene clusters. OMICS 16:284–287 42. Zhang B, Horvath S (2005) A general framework for weighted gene co-expression network analysis. Stat Appl Genet Mol Biol 4:17 43. Ivliev AE, Hoen PAC, Borisevich D, Nikolsky Y, Sergeeva G (2016) Drug repositioning through systematic mining of gene coexpression networks in cancer. PLoS One 11:e0165059 44. Sutherland JJ, Webster YW, Willy JA et al (2018) Toxicogenomic module associations with pathogenesis: a network-based approach to understanding drug toxicity. Pharmacogenomics J 18:377–390 45. Copple IM, den Hollander W, Callegaro G et al (2019) Characterisation of the NRF2 transcriptional network and its response to chemical insult in primary human hepatocytes: implications for prediction of drug-induced liver injury.
Arch Toxicol 93:385–399 46. Pei G, Chen L, Zhang W (2017) WGCNA application to proteomic and metabolomic data analysis. Methods Enzymol 585:135–158 47. Nueda MJ, Tarazona S, Conesa A (2014) Next maSigPro: updating maSigPro Bioconductor
package for RNA-seq time series. Bioinformatics 30:2598–2602 48. Serra A, Fratello M, Del Giudice G, Saarimäki LA, Paci M et al (2020) TinderMIX: time-dose integrated modelling of toxicogenomics data. Gigascience 9:giaa055 49. Aguayo-Orozco A, Bois FY, Brunak S, Taboureau O (2018) Analysis of time-series gene expression data to explore mechanisms of chemical-induced hepatic steatosis toxicity. Front Genet 9:396 50. Yang L, Allen BC, Thomas RS (2007) BMDExpress: a software tool for the benchmark dose analyses of genomic data. BMC Genomics 8:387 51. Phillips JR, Svoboda DL, Tandon A, Patel S, Sedykh A et al (2019) BMDExpress 2: enhanced transcriptomic dose-response analysis workflow. Bioinformatics 35:1780–1782 52. Vermeulen R, Schymanski EL, Barabasi AL, Miller GW (2020) The exposome and health: where chemistry meets biology. Science 367:392–396 53. Audouze K, Juncker AS, Roque FJ, Krysiak-Baltyn K, Weinhold N et al (2010) Deciphering diseases and biological targets for environmental chemicals using toxicogenomics networks. PLoS Comput Biol 6:e1000788 54. Dafniet B, Cerisier N, Audouze K, Taboureau O (2020) Drug-target-ADR network and possible implications of structural variants in adverse events. Mol Inform 39:e2000116 55. Aguayo-Orozco A, Audouze K, Siggaard T, Barouki R, Brunak S et al (2019) sAOP: linking chemical stressors to adverse outcomes pathway networks. Bioinformatics 35:5391–5392 56. Carvaillo JC, Barouki R, Coumoul X, Audouze K (2019) Linking bisphenol S to adverse outcome pathways using a combined text mining and systems biology approach. Environ Health Perspect 127:47005 57. Jornod F, Rugard M, Tamisier L, Coumoul X, Andersen HR, Barouki R, Audouze K (2020) AOP4EUpest: mapping of pesticides in adverse outcome pathways using a text mining tool. Bioinformatics 36:4379–4381 58.
Rugard M, Coumoul X, Carvaillo JC, Barouki R, Audouze K (2020) Deciphering adverse outcome pathway network linked to bisphenol F using text mining and systems toxicology approaches. Toxicol Sci 173:32–40 59. Jensen PB, Jensen LL, Brunak S (2012) Mining electronic health records: towards better research applications and clinical care. Nat Rev Genet 13:395–405 60. Canada A, Capella-Gutierrez S, Rabal O, Oyarzabal J, Valencia A, Krallinger M (2017)
LimTox: a web tool for applied text mining of adverse event and toxicity associations of compounds, drugs and genes. Nucleic Acids Res 45:W484–W489 61. Suter L, Schroeder S, Meyer K, Gautier JC, Amberg A et al (2011) EU framework 6 project: predictive toxicology (PredTox) –
overview and outcome. Toxicol Appl Pharmacol 252:73–84 62. Méndez-Lucio O, Baillif B, Clevert DA, Rouquié D, Wichard J (2020) De novo generation of hit-like molecules from gene expression signatures using artificial intelligence. Nat Commun 11:10
Part III The Use of In Silico Models for the Specific Endpoints
Chapter 7

In Silico Prediction of Chemically Induced Mutagenicity: A Weight of Evidence Approach Integrating Information from QSAR Models and Read-Across Predictions

Enrico Mombelli, Giuseppa Raitano, and Emilio Benfenati

Abstract

Information on genotoxicity is an essential element in the framework of several regulations aimed at evaluating chemical toxicity. In this context, QSAR models that predict Ames genotoxicity can conveniently provide relevant information. Indeed, they can be straightforwardly and rapidly used for predicting the presence or absence of genotoxic hazards associated with the interactions of chemicals with DNA. Nevertheless, and despite their ease of use, the main interpretative challenge lies in the critical assessment of the information that these tools provide. This chapter provides guidance on how to use the freely available QSAR and read-across tools provided by VEGA HUB and on how to interpret their predictions according to a weight-of-evidence approach.

Key words Mutagenicity, Ames test, QSAR, Predictive reliability, Structural alerts
1 Introduction

The assessment of information on mutagenicity represents an important component of the evaluation of the toxicological characteristics of chemicals [1]. For instance, in the field of drug discovery, the detection of the mutagenic potential of a chemical can result in the rejection of a promising chemotype owing to the deleterious consequences that the introduction of gene mutations can elicit. In addition, the characterization of genotoxicity is required for the regulatory qualification of impurities in drug substances [2], and it is a mandatory requirement for all the tonnage bands defined by the overarching REACH regulation [3]. Mutagenic effects caused by chemical agents can be detected by the Ames test, devised by Bruce Ames during the 1970s [4]. This test is still commonly used in many toxicological laboratories around the world because of its good interlaboratory reproducibility, its aptitude for testing different agents, its cost-effectiveness,
Emilio Benfenati (ed.), In Silico Methods for Predicting Drug Toxicity, Methods in Molecular Biology, vol. 2425, https://doi.org/10.1007/978-1-0716-1960-5_7, © The Author(s), under exclusive license to Springer Science+Business Media, LLC, part of Springer Nature 2022
and its underlying structure–activity rationale [5]. The remarkable combination of these attributes has brought the Ames test to the forefront of modern toxicology. Indeed, this test is a paradigm for the development of today's in vitro toxicology, and it has been nicknamed "the stethoscope of genetic toxicology for the twenty-first century" [5], given that testing strategies for carcinogenicity rely on the Ames test as an essential first-tier assay [5, 6]. The test is based upon the ability of auxotrophic strains of Salmonella typhimurium and Escherichia coli to recover the capacity to synthesize an essential amino acid (histidine for S. typhimurium and tryptophan for E. coli) as a consequence of the mutagenic effect of the chemicals to which they are exposed. The design of the experimental protocol enables the detection of bacterial colonies that can grow in the absence of the essential amino acid as a result of a back mutation that restores their biosynthetic capabilities. The detection of this back mutation to wild type makes it possible to identify point mutations caused by the substitution, addition, or deletion of one or a few DNA base pairs. At least five bacterial strains should be used when testing a chemical [7], including strains that are sensitive to oxidizing mutagens, cross-linking agents, and hydrazines (E. coli WP2 or S. typhimurium TA102, see Note 1). However, it is important to note that, as stated in the OECD guideline [7], mammalian tests may be more appropriate when evaluating certain classes of drugs. For example, the Ames test is not the most appropriate choice for chemicals displaying a high bactericidal activity, such as certain antibiotics, topoisomerase inhibitors, and some nucleoside analogues. The interlaboratory reproducibility of the Ames test is estimated at 85–90% [8, 9], and these percentages represent the upper limit of the predictive performance that can be expected from QSAR models for the same endpoint.
Indeed, these models are derived from data obtained by means of the same protocol. In other words, these findings mean that 10–15% of the chemicals that were experimentally tested gave different results when analyzed in different laboratories. Therefore, this experimental uncertainty in terms of false-negative or -positive predictions is transposed into the semiempirical QSAR models that cannot be expected to be more reliable than their experimental counterpart. Consequently, a key issue that should be given attention when judging the reliability of a QSAR model predicting Ames genotoxicity is whether or not this model predicts with a reliability that is comparable to the reproducibility of the test. It is worth mentioning that this comparison has to be critically assessed as a function of the number and chemotypes of the chemicals that compose the external test set that was adopted in order to validate the model. For example, if the external test set does not include all the chemotypes that are covered by the training set, the estimated predictive
performance of the model will only be representative of a subset of chemical structures. A final word of caution concerns models whose claimed performance is much higher than that of the experimental test they are meant to replace. This situation can indicate overfitting of the model and a lack of ability to provide reliable predictions for new cases (i.e., molecules that are not included in its training set). The theory of electrophilic reactivity by Miller and Miller [10] adequately describes the molecular mechanisms that control the genotoxicity of chemicals as detected by the Ames test. Indeed, this theory has proved to be in agreement with the observations ever since it was formulated in the late 1970s. According to this theory, the vast majority of known chemical carcinogens are also genotoxic, since they are (or are metabolized to) reactive electrophiles that react with nucleic acids. The (Q)SAR models described in this chapter (see Subheading 2.5) conform to this theory by identifying structural fragments that trigger electrophilic reactions, as formalized by e-state values and fragments (e.g., CAESAR) and by structural alerts (SA) validated by experts (e.g., ISS) or automatically extracted by machine-learning algorithms (e.g., SARpy). Because of the complementary nature of these tools, this chapter illustrates the practical application of models covering the three main categories of in silico tools for the prediction of the mutagenic potential of chemicals: (Q)SAR models based on numerical descriptors (e.g., partition coefficients, topological descriptors, functional group counts), rule-based expert systems based on structural alerts (molecular fragments associated with the occurrence of adverse outcomes), and hybrid models combining these two approaches.
Models based on all these approaches are implemented within the freely available VEGA platform [11]: CAESAR, SARpy, ISS, KNN, and a consensus model integrating the predictions of the previous ones (see Note 2). Details regarding these models are available within the VEGA system. In addition, the QSAR Model Reporting Format (QMRF) of each model is available on the VEGA platform. A brief description of the models is given in the following paragraphs, and more detailed information can be found in the literature cited therein.
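When the external validation results of any of these models are compared with the 85–90% reproducibility ceiling discussed above, the classical Cooper statistics provide convenient summaries; a minimal sketch from a 2 × 2 validation table:

```python
def cooper_statistics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from external validation counts."""
    sensitivity = tp / (tp + fn)   # fraction of mutagens correctly predicted
    specificity = tn / (tn + fp)   # fraction of non-mutagens correctly predicted
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy
```

With, for example, 80 true positives, 20 false negatives, 90 true negatives, and 10 false positives (hypothetical counts), the accuracy is 0.85, i.e., within the experimental reproducibility range.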
2 Materials
2.1 Software Requirement
KNN, ISS, CAESAR, and SARpy models are embedded within the standalone software application VEGA, which allows for secure in-house execution of the four models without the need to send information to any external server [11]. VEGA can also be used for batch processing of multiple chemical structures. The software
application can be freely downloaded from the VEGA website [11], and it can be installed and used on any operating system supporting Java.

2.2 Optional Software
Any software application that allows chemical structures to be drawn and converted into one of the two chemical file formats supported by VEGA: the "Simplified Molecular Input Line Entry Specification" (SMILES) [12] or the "Structure Data Format" (SDF). Several chemical drawing programs can perform this task: VEGA ZZ [13], ACD/ChemSketch [14], MarvinSketch [15], and the OECD QSAR Toolbox [16]. This list is not exhaustive, and these applications are subject to different software licenses and terms and conditions of use.
2.3 VEGA: The Workflow
VEGA has a simple workflow which is schematically depicted in Fig. 1. Basically, a user types or pastes a SMILES string in the blank space at the top of the user interface and then adds it to a working list of molecules to be analyzed. Once a SMILES string is added to the working list, it is possible to highlight it and visually check the two-dimensional structure encoded by the text line. This checkpoint is crucial. Indeed, several structural inaccuracies can take place at this stage and compromise the reliability of the predictions [17]. If needed, users can also input multiple molecules at once (“import File” button at the top right of the user interface). In this case, the file contains a list of SMILES codes saved in “txt” or “smi” format. Thanks to the “Select” button it is then possible to choose the model(s) of interest, to specify the desired output format (PDF or csv files) and to indicate where the prediction reports should be saved (“Export” button). Finally, the selected model(s) can be executed by clicking on the “Predict” button.
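Before pasting a SMILES string, a trivial pre-check for balanced parentheses and brackets can catch some copy-and-paste truncations (a sketch only; it does not detect chemically invalid strings and is no substitute for the visual inspection recommended above):

```python
def smiles_brackets_balanced(smiles):
    """Return True if '()' and '[]' are balanced in a SMILES string."""
    open_counts = {"(": 0, "[": 0}
    closers = {")": "(", "]": "["}
    for char in smiles:
        if char in open_counts:
            open_counts[char] += 1
        elif char in closers:
            open_counts[closers[char]] -= 1
            if open_counts[closers[char]] < 0:  # closing before opening
                return False
    return all(count == 0 for count in open_counts.values())
```

For instance, `smiles_brackets_balanced("CCCCO")` is True for 1-butanol, while a truncated string such as `"CC(C"` fails the check.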
2.4 Applicability Domain
All the models that will be described in the following paragraphs adopt the VEGA definition of applicability domain [18] (see Note 3). According to this definition, the degree of membership of a query chemical to the applicability domain of the model is described by an Applicability Domain Index (ADI) with values that range from 0 (no membership) to 1 (full membership). Chemicals characterized by ADI values that are less than 0.7 or 0.65, depending on the model, are to be regarded as potentially not belonging to the AD. ADI values that are within the range 0.7 (or 0.65)–0.9 represent a critical region since the query chemical could be out of the applicability domain. Finally, ADI values that are greater than or equal to 0.9 indicate chemicals that should be regarded as belonging to the applicability domain of the model. These reference values represent a general guideline, and they should be interpreted in the light of a thorough inspection of the
In Silico Prediction of Chemically Induced Mutagenicity: A Weight of. . .
Fig. 1 Workflow of VEGA. The SMILES string corresponding to 1-butanol was used as input structure
subindexes that compose the ADI: the similarity index, the concordance index, the accuracy index, and the atom-centered fragments index. If, as in the case of the CAESAR model, the chemical structures are characterized by numerical descriptors, the ADI also takes into account a check of the ranges of the descriptor values (see Note 4). These critical factors should always be analyzed when interpreting results, and they will be described in the following paragraphs.

2.4.1 Similarity Index
This index takes into account the degree of similarity between the query chemical and the three (four for the KNN model) most similar chemicals. Values close to one indicate that the chemotype of the query chemical is well represented by the training set of the model (see Note 5). On the other hand, lower values could indicate that the prediction is an extrapolation since the query chemical is located in regions of the chemical space that are scarcely populated. In this case, the prediction cannot be supported by the evaluation of similar chemicals. This does not necessarily mean that the prediction is wrong. It means that the user should gather further elements to support the model results. In particular, additional models should be run to get support.
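The ADI reliability bands introduced above can be sketched as follows. This is a minimal illustration of the guidance given in the text, not VEGA's actual output strings; the `lower_threshold` parameter stands for the model-dependent 0.7 or 0.65 cut-off:

```python
def interpret_adi(adi: float, lower_threshold: float = 0.7) -> str:
    """Map an Applicability Domain Index (0-1) onto the reliability
    bands described in the text. `lower_threshold` is 0.7 or 0.65,
    depending on the model."""
    if adi >= 0.9:
        return "inside the applicability domain"
    if adi >= lower_threshold:
        return "critical region: possibly outside the domain"
    return "potentially outside the applicability domain"

print(interpret_adi(0.978))  # a well-supported prediction
print(interpret_adi(0.5))    # gather further lines of evidence
```

As the text stresses, these bands are a general guideline and should always be read together with the subindexes discussed below.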
2.4.2 Concordance Index
This index provides information on the concordance between the predicted value for the query chemical and the experimental values of the three most similar chemicals. Values that are close to zero may indicate an unreliable prediction and the possible identification of a region in the chemical space whose structure–toxicity behavior is not adequately described by the model. Therefore, a careful inspection of chemicals that are associated with conflicting
Enrico Mombelli et al.
predictions is required. Indeed, there could be one or more structural analogs (StAn) whose experimental value could be at odds with the prediction for the target chemical. For instance, a visual inspection may easily identify the presence of a specific toxic SA within the structure of the StAn. Consequently, two chemicals that are similar from a chemical point of view may differ with respect to the presence/absence of an SA, and this fact can explain differences in their toxicity values. If the user does not recognize an SA, it is possible to run VEGA on the similar chemical with the conflicting value; VEGA will list the SAs, which can then be compared among chemicals.

2.4.3 Accuracy Index
When assessing the reliability of predictions, it is important to understand how well a model predicts the toxicity in the region of the chemical space where the query chemical is located. This index informs on such a local reliability by taking into account the classification accuracy of the three most similar chemicals. Low values for this index should warn about a lack of predictive accuracy. In this case, additional models should be run, to see if they have better accuracy.
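The idea of local accuracy can be illustrated with the following sketch; this is only an illustration of the concept (the actual VEGA index additionally weights the neighbors by their similarity):

```python
def local_accuracy(neighbors):
    """Illustrative local accuracy: the fraction of the most similar
    training chemicals that the model itself classifies correctly.
    `neighbors` is a list of (predicted, experimental) class pairs."""
    correct = sum(pred == exp for pred, exp in neighbors)
    return correct / len(neighbors)

# Three neighbors, two of them predicted correctly:
print(local_accuracy([("mut", "mut"), ("mut", "nonmut"), ("nonmut", "nonmut")]))
```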
2.4.4 Atom-Centered Fragments (ACF) Index
This index takes into account the presence of one or more fragments that are not found in the training set, or that are rare fragments. An index value equal to one implies that all atom-centered fragments of the target chemical were found in the training set. On the other hand, a value lower than 0.7 implies that a substantial number of atom-centered fragments of the target chemical were not found in the chemicals of the training set or are rare fragments of the training set. Also in this case, it is recommended to run additional models, because each model can bring new information as a function of its own training set.
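The notion of fragment coverage behind this index can be sketched as follows. This is an illustration only: VEGA's actual ACF index additionally penalizes fragments that are rare in the training set.

```python
def acf_coverage(query_fragments, training_fragments):
    """Illustrative atom-centered fragment coverage: the fraction of
    the query chemical's fragments that also occur in the training set."""
    found = sum(1 for frag in query_fragments if frag in training_fragments)
    return found / len(query_fragments)

# One of the two query fragments is unknown to the training set:
print(acf_coverage(["frag_a", "frag_b"], {"frag_b", "frag_c"}))
```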
2.4.5 Model Descriptors Range Index
Computed only for the CAESAR model, this index checks whether the descriptors calculated for the predicted chemical are inside the range of the descriptors of the training and test sets. The index has value 1 if all descriptors are inside the range, and 0 if one or more descriptors are outside the allowed range.
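Since this index has a simple all-or-nothing definition, it can be written down directly; the sketch below mirrors the description above (1 if every descriptor falls inside the corresponding training/test range, 0 otherwise):

```python
def descriptor_range_index(values, ranges):
    """Model Descriptors Range Index as described in the text
    (computed for the CAESAR model only). `values` are the descriptors
    calculated for the query chemical; `ranges` are the (min, max)
    bounds observed over the training and test sets."""
    inside = all(lo <= v <= hi for v, (lo, hi) in zip(values, ranges))
    return 1 if inside else 0
```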
2.5 Models Description
The following paragraphs report a concise description of the VEGA models. A description of their statistical validation can be found in the supporting documentation associated with each model.
2.5.1 Mutagenicity (Ames Test) Model (ISS)
The ISS model is based on a series of rules defined by Benigni and Bossa that detect mutagenic chemicals [19]. This rule-based model was originally implemented within the Toxtree application freely distributed by the European Joint Research Center [19].
Toxicity Data Source
Data were extracted from the ISSCAN database [20], and they comprise 670 chemicals, 331 of which are mutagens.
Description of the Model
The ISS model is based on a set of rules for mutagenicity taken from the work by Benigni and Bossa [19] as implemented in the software application Toxtree (http://toxtree.sourceforge.net). Toxtree is a rule-based system that includes alerts for genotoxic carcinogenicity and nongenotoxic carcinogenicity. Genotoxic carcinogenicity alerts can be considered as a valuable tool for the detection of chemicals that yield positive results during an Ames test. Toxtree as implemented within the VEGA platform offers the same compilation of rules as the original version [19]. This model offers a compilation of SAs that refers mainly to knowledge on the mechanism of action for genotoxic carcinogenicity (i.e., they are also pertinent for mutagenic activity in bacteria). The SAs detecting nongenotoxic carcinogens are not to be considered when applying this model since nongenotoxic carcinogens cannot, by definition, be detected by the Ames test.
Interpretation of the Output
The ISS model classifies query chemicals as mutagenic when one or more SAs are detected within their molecular structure, or as nonmutagenic if no SA is identified.
2.5.2 Mutagenicity (Ames Test) Model (CAESAR)
The CAESAR model [21] was developed on the basis of 4204 chemicals (2348 mutagenic and 1856 nonmutagenic) extracted from the Bursi dataset [22]. This initial set was then split into a training set (3367 chemicals, 80% of the entire dataset) and an external test set (837 chemicals, 20% of the entire dataset) [22].
Description of the Model
The algorithm of the model is described in Ferrari and Gini [21]. CAESAR automatically calculates chemical descriptors for the chemicals of interest and contains a subset of Toxtree rules (see previous paragraph) to enhance the sensitivity of the model. The model integrates two complementary predictive approaches in series (statistical and rule-based): a support vector machine (SVM) algorithm coupled to two sets of structural alert rules aimed at reducing the false-negative rate. In order not to inflate the false-positive rate, a chemical that is identified as negative during the first two steps (SVM output and first SA filter) and positive by the second set of rules is flagged as a suspicious mutagen. If the user wants only the results of the statistical model, (s)he can check whether the model identifies any SA and disregard the rule-based contribution.
Interpretation of the Output
CAESAR-VEGA classifies chemicals as mutagenic, nonmutagenic, and suspicious mutagenic. Suspicious chemicals are associated with higher predictive uncertainty.
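One plausible reading of the decision cascade described above can be sketched as follows; the boolean inputs are hypothetical stand-ins for the SVM call and the two structural-alert filters, and the sketch does not reproduce CAESAR's internal implementation:

```python
def caesar_class(svm_positive: bool, first_sa_set_hit: bool,
                 second_sa_set_hit: bool) -> str:
    """Sketch of the CAESAR cascade described in the text: a positive
    SVM call or a hit in the first SA set yields "mutagenic"; a chemical
    negative in those first two steps but matching the second SA set is
    flagged as a suspicious mutagen."""
    if svm_positive or first_sa_set_hit:
        return "mutagenic"
    if second_sa_set_hit:
        return "suspicious mutagenic"
    return "nonmutagenic"
```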
2.5.3 Mutagenicity (Ames Test) Model (SarPy/IRFMN)
The dataset employed for rule extraction was retrieved from the CAESAR model for Ames mutagenicity (see previous paragraph). This model and CAESAR share the same training set.
Description of the Model
SARpy (SAR in python) is a QSAR method that identifies relevant fragments and extracts a set of SAs directly from data without any a priori knowledge [23]. The algorithm generates substructures; relevant SAs are automatically selected on the basis of their predictive performance on the training set. The application of this modeling approach to the CAESAR dataset extended the previous work [23] by extracting two sets of SAs: one for mutagenicity (112 rules) and the other for nonmutagenicity (93 rules) (see Note 6). The SARpy application is available through a graphical interface or through the VEGA HUB platform.
Interpretation of the Output
If the target chemical matches one or more mutagenicity SAs, the prediction will be “mutagen”; if the target chemical matches one or more nonmutagenicity SAs, the prediction will be “nonmutagen.” If no SA is found, the prediction is “possible nonmutagen.” If SAs for both mutagenicity and nonmutagenicity are matched, the substance is labeled as mutagenic.
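The output logic described above reduces to a short precedence rule, sketched here (counts of matched alerts stand in for the actual substructure search):

```python
def sarpy_class(mutagenicity_alerts: int, nonmutagenicity_alerts: int) -> str:
    """Decision logic of the SARpy output as described in the text:
    alerts for mutagenicity take precedence when both kinds of SA
    are matched within the target chemical."""
    if mutagenicity_alerts > 0:
        return "mutagen"
    if nonmutagenicity_alerts > 0:
        return "nonmutagen"
    return "possible nonmutagen"
```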
2.5.4 Mutagenicity (Ames Test) Model (KNN/Read-across)
The dataset of the KNN/read-across model is composed of 5770 chemicals (3254 mutagens and 2516 nonmutagenic chemicals). This dataset has been compiled by the Istituto di Ricerche Farmacologiche Mario Negri, by merging experimental data from a benchmark dataset published by Hansen and colleagues [24] and from a collection of data made available by the Japan Health Ministry within their Ames (Q)SAR project [25].
Description of the Model
The read-across model has been built with the istKNN application (developed by Kode srl, http://chm.kode-solutions.net), and it is based on the similarity index developed inside the VEGA platform; the index takes into account several structural aspects of the chemicals, such as their fingerprint, the number of atoms, of cycles, of heteroatoms, of halogen atoms, and of particular fragments (such as nitro groups). The index value ranges from 1 (maximum similarity) to 0. On the basis of this structural similarity index, the four chemicals belonging to the dataset that are characterized by the highest similarity with respect to the query chemical are considered when computing predictions. On the other hand, chemicals with a similarity value lower than 0.65 are discarded from the list of StAn. If there are no chemicals respecting these conditions, then the model does not provide any prediction. The estimated toxicity class is
calculated as the most frequent class (i.e., a majority vote) among the four chosen StAn, weighted by the similarity value associated with each StAn.

Interpretation of the Output
If three StAn out of four are mutagenic or nonmutagenic, then the query chemical is predicted as mutagenic or nonmutagenic, respectively.
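The similarity-weighted vote described above can be sketched as follows. This is a minimal illustration of the stated rules (neighbors below the 0.65 cut-off discarded, at most four voters, votes weighted by similarity); the exact weighting used by the istKNN application may differ:

```python
def knn_class(neighbors, similarity_cutoff=0.65):
    """Sketch of the KNN/read-across vote: up to four training chemicals
    with similarity >= the cut-off vote on the class, each vote weighted
    by its similarity to the query. `neighbors` is a list of
    (similarity, class) pairs. Returns None when no neighbor passes the
    cut-off, i.e., the model makes no prediction."""
    eligible = sorted((s, label) for s, label in neighbors
                      if s >= similarity_cutoff)[-4:]
    if not eligible:
        return None  # outside the applicability domain
    votes = {}
    for s, label in eligible:
        votes[label] = votes.get(label, 0.0) + s
    return max(votes, key=votes.get)
```

Note how the weighting matters: a single very similar nonmutagenic neighbor can still be outvoted by two moderately similar mutagenic ones.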
2.5.5 Mutagenicity CONSENSUS Model
The consensus model, by integrating the predictions of the four previous models, relies on the data sources that were used to develop the models described in the previous paragraphs.
Description of the Model
The model integrates the predictions of the available VEGA mutagenicity models (CAESAR, SARpy, ISS, and KNN). The underlying consensus algorithm adopts the Applicability Domain assessment associated with the predictions computed by each single model as a weight to bias the final assessment, so that it will be more influenced by the models that compute more reliable predictions. The Applicability Domain assessment of each model is converted to a numerical value in the range 0–1 according to the following scheme:

AD assessment          Value/weight
Experimental value     1.0
High reliability       0.9
Moderate reliability   0.6
Low reliability        0.2
A score is calculated as the sum of the weights for each model that elaborated a prediction. In the framework of this consensus approach, the “suspect mutagenic/nonmutagenic” predictions are considered simply as “mutagenic/nonmutagenic” predictions. The calculated score is normalized over the number of adopted models, so that it spans a theoretical [0, 1] range. Then, the prediction class with the highest score is used as the final consensus prediction. If at least one model has retrieved an experimental value in its training set, only experimental values are considered for the score. In this case, the reported number of models used refers only to the number of models having an experimental value. Therefore, the score of the final prediction can be used as a measure of the reliability of the produced consensus assessment. Indeed, the score would achieve its maximum value (1) only if one or more models retrieved experimental values and these values are in agreement. In all other cases, the score will result in lower values.
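The scoring scheme just described can be sketched as follows. This is an illustration of the stated rules, not VEGA's implementation; "suspect" predictions are assumed to have already been mapped to plain mutagenic/nonmutagenic, as the text specifies:

```python
# Weights assigned to each Applicability Domain assessment (see table above).
AD_WEIGHT = {"experimental": 1.0, "high": 0.9, "moderate": 0.6, "low": 0.2}

def consensus_class(predictions):
    """Sketch of the consensus scoring. `predictions` is a list of
    (class, ad_assessment) pairs, one per model. If any model retrieved
    an experimental value, only those entries are kept. Returns the
    winning class and its score normalized over the models used."""
    experimental = [p for p in predictions if p[1] == "experimental"]
    if experimental:
        predictions = experimental
    scores = {}
    for cls, ad in predictions:
        scores[cls] = scores.get(cls, 0.0) + AD_WEIGHT[ad]
    winner = max(scores, key=scores.get)
    return winner, scores[winner] / len(predictions)
```

For instance, three mutagenic calls (two high, one moderate) against one low-reliability nonmutagenic call give a normalized mutagenic score of (0.9 + 0.9 + 0.6) / 4 = 0.6.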
Interpretation of the Output
The output of the consensus model is a predicted dichotomic activity (mutagenic vs. nonmutagenic) accompanied by the reliability score described above. The assessments provided by the individual models are also reported as accompanying information.
3 Methods

A critical assessment of predictions is the most important aspect related to the interpretation of the output of (Q)SAR models. VEGA facilitates the interpretability of (Q)SAR predictions by breaking down several critical aspects of the applicability domain, as described at Subheading 2.5. Nevertheless, possible misinterpretations can still take place, and the following examples will provide further insights into the analysis of (Q)SAR results. More precisely, the reported examples will follow a weight-of-evidence (WoE) approach aimed at increasing the level of confidence that can be attributed to predictions for mutagenicity obtained as a function of the structure–activity paradigm. The lines of evidence of this WoE strategy are described by Benfenati and coauthors in their review on the integration of predictions provided by QSAR models and read-across approaches [26]. Indeed, an overall assessment of mutagenicity can be achieved by evaluating the three following lines of evidence: results provided by independent QSAR models, a critical assessment of the chemicals that QSAR models (or read-across tools) identified as being similar to the query chemical, and the presence and relevance of the structural alerts embedded within the chemical structures of the query chemical and its StAn. This strategy, applied to the prediction of mutagenicity according to the Ames test and by means of the VEGA in silico tools, is summarized by the following key points and by the general scheme reported in Fig. 2:

1. Assessment of the prediction yielded by the VEGA consensus model.

2. Assessment of the reliability of the predictions yielded by the individual QSAR models composing the consensus model.

3. If chemicals whose similarity to the query chemical is deemed pertinent can be identified among the closest StAn highlighted by the individual QSAR models, then implement a read-across prediction as a function of these chemicals. In the most informative case, the four individual QSAR predictions will also be associated with four read-across predictions.

4. When implementing a read-across prediction, existing information on structural alerts for mutagenicity or nonmutagenicity has to be considered.
Fig. 2 Stepwise strategy for the integration of in silico models and read-across methods for predicting the toxicity of chemicals
5. Assessment of the diagnosis provided by ToxRead on the closest StAn of the query chemical. During this step, special attention should be given to the scrutiny of rules for mutagenicity and nonmutagenicity.

6. If appropriate, a read-across prediction as a function of the mutagenicity potential associated with the StAn proposed by ToxRead that is deemed pertinent has to be carried out.

7. Finally, an overall assessment of mutagenicity can be established by comparing all the information gathered during the preceding points.

The following sections describe three case studies that illustrate the general strategy described in the previous paragraphs. The first two case studies illustrate predictions characterized by an output that is concordant across all VEGA models. On the contrary, the last case study is more challenging, and it will confront the reader with a more complex situation.

3.1 Case Study 1: Valproic Acid (Fig. 3)

3.1.1 Consensus Model
This model indicates a lack of mutagenicity, and the prediction is characterized by a high nonmutagenic score (0.9) highlighting a predictive convergence of the individual QSAR predictions.
Fig. 3 Chemical structure of valproic acid, chemical information, and experimental toxicity [27]
Fig. 4 The three most similar StAn of valproic acid that are shown in the PDF outputs of the CAESAR and SARpy models

3.1.2 CAESAR Model

Output: Prediction is Nonmutagenic, and the Result Appears Reliable
The CAESAR model does not identify any SA linked to mutagenic activity. Similarity values for the six most similar chemicals are very high (ranging from 0.989 to 0.903). Furthermore, experimental and predicted toxicities agree for all the similar molecules that were found in the training set. Indeed, predicted and experimental toxicities systematically designate nonmutagenic chemicals (see Note 7). On the basis of this information, and in particular thanks to a visual inspection of the first three similar chemicals (see Fig. 4), the predicted substance is considered to be inside the applicability domain of the model (ADI = 0.978).
Read-Across Prediction for the CAESAR Model
The closest StAn (2-ethylcaproic acid) of valproic acid identified by the CAESAR model (see Fig. 4) is an isomer of the query chemical. In addition, this closest StAn and the query chemical are both branched carboxylic acids. A one-to-one read-across prediction will therefore provide a result overlapping with the QSAR prediction: a lack of mutagenicity potential. If this logic is extended to include the first four StAn (branched and linear carboxylic acids), the read-across prediction would also indicate a lack of toxicity, given the fact that all the StAn are nonmutagenic.
Structural Alerts
In this case, there is no SA associated with mutagenicity. Indeed, this substance is predicted “nonmutagenic.” Hence, also from this point of view, there is no disagreement between the different lines of evidence.
3.1.3 SARpy Model
The model finds within the structure of the query chemical only SAs for nonmutagenicity (“inactive” rules) (Fig. 5). Also in this case, the query chemical falls into the applicability domain (ADI = 0.978), and the predicted and experimental toxicities for the most similar chemicals are the same. This behavior is not completely surprising since CAESAR and SARpy are based on the same training set. Nevertheless, this result corroborates the prediction computed by CAESAR by assessing toxicities according to a complementary analysis executed by a different algorithm.
Output: Prediction is Nonmutagenic, and the Result Appears Reliable
3.1.4 Read-Across Prediction for the SARpy Model
The same results and arguments described for the CAESAR model can be obtained for the SARpy model.
Structural Alerts
In this case, the model identifies SAs associated with a lack of mutagenicity. Indeed, the aliphatic chain mitigates and even cancels the mutagenic effect, also when a mutagenic SA is present. For instance, aldehydes with longer chains become nonmutagenic, while the first members of the family are mutagenic; the same happens in the case of chloroalkanes.
3.1.5 ISS Model
Similarly to what was described for the CAESAR model, the ISS model does not find any SA for mutagenicity. The most similar chemicals shown in the output are different from those of CAESAR and SARpy, since the corresponding training sets are different. These StAn are characterized by lower similarity values (ranging from 0.823 to 0.773), and this lower degree of similarity is reflected by the ADI (0.906). This degree of overall similarity, combined with a lack of identification of SAs, substantiates the validity of the prediction (see Note 8).
Output: Prediction is Nonmutagenic, and the Result Appears Reliable
Fig. 5 Inactive SAs identified by SARpy within the chemical structure of valproic acid

3.1.6 Read-Across Prediction for the ISS Model
Among the six StAn belonging to the training set of the model, only one chemical (see Fig. 6) includes a carboxylic moiety attached to an aliphatic chain within its structure (11-aminoundecanoic acid, CAS RN 2432-99-7). This chemical is also a primary amine, and this class of chemicals is known for not eliciting genotoxic effects [28]. Moreover, within the VEGA QSAR databases, n-hexadecylamine and n-pentylamine are nonmutagenic. Therefore, a one-to-one read-across prediction as a function of this structural analog would suggest a lack of mutagenicity of the query chemical, but with an increased level of uncertainty with respect to the previous read-across predictions, given the fact that there is a decreased level of similarity between the chemicals. As a general principle, if similarity is higher than 0.9, the similar chemical deserves particular attention. It is also possible that a highly similar chemical overturns the prediction of the QSAR model. With substances with similarity lower than 0.9 this is a less likely scenario.
3.1.7 KNN/Read-Across Model
The prediction is computed as a function of the four most similar StAn included in the training set of the model (see Fig. 7). These chemicals are all linear and saturated carboxylic acids devoid of any mutagenic potential. Their degree of similarity with respect to the query chemical ranges from 0.903 to 0.946. This high level of similarity adds confidence to the predictions, and no other warnings for the applicability domain of the model are flagged by the software.
Output: Prediction is Nonmutagenic, and the Result Appears Reliable
3.1.8 Read-Across Prediction for the KNN/Read-Across Model
Given the fact that the algorithm of this model can be considered as a read-across approach, QSAR prediction and read-across predictions coincide.
3.1.9 Read-Across with ToxRead
ToxRead shows (see Fig. 8) the six most similar chemicals. The chemical characterized by the highest degree of similarity is located exactly above the central circle representing the query chemical and it coincides with the query chemical (i.e., valproic acid). The
Fig. 6 StAn of valproic acid identified by the ISS model
experimental data for valproic acid indicate a lack of mutagenic potential confirming therefore the robustness and exactness of the arguments stated in the previous paragraphs about the lack of toxicity for the query chemical.
Fig. 7 StAn of valproic acid identified by the KNN/read-across model

3.1.10 Overall Evaluation
All the lines of evidence provided by the QSAR models that did not include the query chemical in their training sets indicate a lack of mutagenicity for valproic acid. The correctness of this assessment has been confirmed by the experimental value provided by ToxRead. The presence of SAs related to a lack of effect further increases the confidence in the results.

3.2 Case Study 2: Nifuratel (Fig. 9)

3.2.1 Consensus Model

This model indicates mutagenicity, and the prediction is characterized by a high mutagenic score (0.9) highlighting a predictive convergence of the individual QSAR predictions.

3.2.2 CAESAR Model
Fig. 8 StAn of valproic acid identified by ToxRead
Fig. 9 Chemical structure of nifuratel, chemical information, and experimental activity [29]

Output: Prediction is Mutagenic, and the Result Appears Reliable
The model identifies two fragments related to mutagenic activity included within the Benigni-Bossa rulebase [19]: Nitro aromatic (SA27) and Hydrazine (SA13) (see Fig. 10). In addition to the six most similar molecules found in the training set, the model shows the three most similar chemicals having the same fragments (see Fig. 11).
Fig. 10 Nitro aromatic structural alert no. 27 (Ar = any aromatic/heteroaromatic ring); Hydrazine structural alert no. 13 (R = any atom/group)
Fig. 11 The closest StAn of nifuratel, according to the CAESAR model, that contain within their structure the same structural alerts displayed by the query chemical
The similarity index is high, 0.9. The concordance for similar molecules and the accuracy index are both equal to 1. For these reasons, the predicted substance is considered to be inside the applicability domain (ADI = 0.948).

Read-Across Prediction for the CAESAR Model
A visual inspection of the six most similar chemicals indicates that only the functional organic groups of the most similar chemical (Furazolidone, CAS RN 67-45-8, a mutagenic chemical) overlap to a great extent with the query chemical. The only exception is the presence of a thioether moiety (methylthioethane) within the chemical structure of nifuratel which does not appear in the structure of furazolidone. This moiety is not associated with any SA. It is
Fig. 12 Four alerts for mutagenicity identified by SARpy
therefore unlikely that this moiety could have a major influence on mutagenicity. In addition, as far as the polarity and steric effects of the molecule are considered, this residue is not expected to largely modify the behavior of the substance. Thus, it is reasonable to assess the query chemical as being a mutagen.
3.2.3 SARpy Model

Output: Prediction is Mutagenic, and the Result Appears Reliable
In this case, four structural alerts are identified, all linked to mutagenic activity (see Fig. 12). SARpy also shows the most similar chemicals that are characterized by the presence of the identified fragments. In this case, predictions and experimental values agree for all the StAn. This prediction is characterized by the same ADI (and its subindexes) as the prediction computed by the CAESAR model.
Read-Across Prediction for the SARpy Model
The same results and arguments described for the CAESAR model can be obtained for the SARpy model.
3.2.4 ISS Model
As explained in Subheading 2.6, CAESAR contains a subset of Toxtree rules, and both models identify the same nitro aromatic fragment, which plays a key role in supporting the prediction together with the presence of a hydrazine fragment. The ADI value (0.933) is slightly lower than that observed for CAESAR and SARpy; this is related only to the similarity index (0.871), whereas all the other indices are excellent.
Output: Prediction is Mutagenic, and the Result Appears Reliable
Read-Across Prediction for the ISS Model
The two most similar chemicals present in the training set of the model (nifurimide, CAS RN 21638-36-8, and nifurdazil, CAS RN 5036-03-3) (see Fig. 13) contain a semicarbazone moiety which is not present in the target. In addition, these chemicals are also characterized by the presence of an additional methyl group or an additional primary alcohol. A read-across prediction as a function of these two chemicals would flag the query chemical as being a mutagen. Nevertheless, the highlighted structural differences could impair the pertinence of a read-across prediction. In
Fig. 13 The two closest StAn of nifuratel that contain the same structural alerts displayed by the query chemical
this case, a read-across prediction does not corroborate the QSAR prediction in a robust way, and its substantiating role is rather weak.

3.2.5 KNN/Read-Across Model

Output: Prediction is Mutagenic, and the Result Appears Reliable
The prediction is computed as a function of the four most similar StAn included in the training set of the model (see Fig. 14). Their degree of similarity with respect to the query chemical ranges from 0.854 to 0.922. This high level of similarity adds confidence to the predictions, and no other warnings for the applicability domain of the model are flagged by the software. Among the four most similar StAn identified within the training set of the model, there is the same structural analog identified by the CAESAR model: furazolidone. Therefore, the same arguments described above for the CAESAR model can also be applied to this case and they point towards a mutagenic potential of the query chemical.
Read-Across Prediction for the KNN/Read-Across Model
Given the fact that the algorithm of this model can be considered as a read-across approach, QSAR prediction and read-across predictions coincide.
3.2.6 Read-Across with ToxRead
ToxRead shows (see Fig. 15) the six most similar chemicals. All these chemicals are mutagenic. Even in this case, the most similar chemical is furazolidone. This chemical and the query chemical are both associated with the following rules indicating a mutagenic potential:

1. Rule MNM94 (experimental accuracy = 1, p-value < 10^-6) defined by the SMARTS [O][N+](=O)c1ccco1. This rule and the structural alert SARpy 4 are the same.
Fig. 14 The four closest StAn of nifuratel according to the KNN/read-across model
2. Rule SM94 (experimental accuracy = 0.61, p-value = 0.20) defined by the SMARTS C(OCC)N.

3. Rule SM103 (experimental accuracy = 0.82, p-value < 10^-6) defined by the SMARTS NNCC.

4. Rule MNM70 (experimental accuracy = 0.85, p-value < 10^-6) defined by the SMARTS [O][N+;D3](=O)a. This rule corresponds to SA 27 (nitro aromatic).

5. Rule CRM80 (experimental accuracy = 0.85, p-value = 0.00076) defined by the SMARTS N=Cc1ccc(o1)[N+](=O)[O].

Interestingly, the query chemical is also associated with a rule indicating nonmutagenicity. This rule is named CRM163, and it is
Fig. 15 StAn of nifuratel identified by ToxRead
characterized by an experimental accuracy equal to 0.67 (only on biocide chemicals) and a p-value = 0.0026. According to this information, it appears that a one-to-one read-across implemented by using furazolidone as a structural analog would yield a positive prediction. The mitigating role of the thioether moiety could suggest a reduced mutagenic potential but, from a conservative point of view, a positive prediction is fully justified.

3.2.7 Overall Evaluation
All the lines of evidence provided by the QSAR models and by ToxRead indicate a mutagenic activity of nifuratel. This conclusion is corroborated by several rules for mutagenicity, and the putative mitigating effect of the thioether group cannot, in itself, justify a negative prediction.
Fig. 16 4-Aminosalicylic acid structure, chemical information, and experimental activity [30]

3.3 Case Study 3: 4-Aminosalicylic Acid (Fig. 16)
Unlike the previous examples, in this case, the result is equivocal because the predictions of the models disagree and show very low values of ADI.
3.3.1 Consensus Model
The model indicates mutagenicity, but the prediction is characterized by a low score (0.25), not so different from that of nonmutagenicity (0.225), highlighting a predictive divergence of the individual QSAR predictions and, consequently, a poor ability of the system to discriminate between the two classes of activity. Three out of four models predict the target chemical as mutagenic, but the reliability of their predictions is low, whereas the only negative prediction is associated with a high ADI value.
3.3.2 CAESAR Model
The ADI of the prediction is equal to 0, thus placing 4-aminosalicylic acid outside the applicability domain of the model. This lack of reliability is caused by the low values of the accuracy (0.325) and the concordance (0) indices.
Output: Prediction is Mutagenic, but the Result May Not Be Reliable

3.3.3 Read-Across Prediction for the CAESAR Model
The accuracy of the predictions for similar molecules found in the training set is not adequate, and five out of six similar molecules found in the training set have negative experimental values that disagree with the prediction for the target chemical. Similar chemicals #1 (CAS RN 89-57-6) and #2 (CAS RN 548-93-6) have the same moieties (aromatic amine, hydroxyl group, and carboxylic acid) as the target chemical, but in different positions (see Fig. 17). Despite having a lower similarity value (0.927) than the previous two, chemical #3 is particularly interesting because it has a carboxylic acid moiety in the para position, as in the target chemical. The only experimentally mutagenic similar chemical is chemical #5 (CAS RN 102000-59-9), which has two amines on the same aromatic ring. Thus, the analysis of similar chemicals does not support the prediction of mutagenicity for the target chemical.
Structural Alerts
No SAs linked to mutagenic activity were found by the model.
Enrico Mombelli et al.
Fig. 17 The six most similar chemicals found in the training set of the CAESAR model
In Silico Prediction of Chemically Induced Mutagenicity: A Weight of. . .
Fig. 18 SM104 fragment structure and the most similar chemicals from the model's dataset bearing the same fragment

3.3.4 SARpy Model

Output: The prediction is mutagenic, but the result may not be reliable.

3.3.5 Read-Across Prediction for the SARpy Model
As for the CAESAR model (CAESAR and SARpy share the same training set), the prediction falls outside the applicability domain (ADI = 0) because of the low values of the accuracy (0.675) and concordance (0) indices.
SARpy shows the same most similar chemicals as CAESAR. The accuracy of the prediction for those molecules is slightly better than with CAESAR: SARpy correctly predicts the first two similar chemicals (molecule #1 as nonmutagenic and molecule #2 as possibly nonmutagenic). Regarding the molecular structures of the similar molecules, the same observations made in the CAESAR analysis also apply to SARpy.

Fig. 19 SM189 fragment structure and the most similar chemicals from the model's dataset bearing the same fragment

Structural Alerts
SARpy recognizes two fragments: one active (SM104) and one inactive (SM189). For each fragment, the model shows the three most similar chemicals containing the identified fragment (see Figs. 18 and 19). In the case of the mutagenicity fragment (SM104), two of the three molecules are experimentally negative; one of them is chemical #3, analyzed previously. For molecules CAS RN 62-23-7 and CAS RN 619-73-8, in addition to SM104, SARpy finds other mutagenicity alerts not found in the target (SM19; SM72), characterized by a nitro group (see Fig. 18). Thus, the information regarding the two nitro chemicals is not
useful, and the fact that the other similar chemical (CAS RN 150-13-0) is nonmutagenic further increases the uncertainty associated with this SA for the specific target substance. On the other hand, one (CAS RN 119-36-8) of the three most similar chemicals bearing the same inactive fragment (SM189) as the target chemical is experimentally mutagenic. It lacks the amino group, and an ester replaces the acid. Within its structure, in addition to SM189, SARpy recognizes another inactive fragment not found in the target chemical (SM170). No further evidence from the SA analysis supports the mutagenicity prediction.

3.3.6 ISS Model

Output: The prediction is mutagenic, but the result shows some critical aspects.
Although the ADI value (0.79) is higher than that of the CAESAR and SARpy predictions, the target chemical could still be outside the applicability domain of the model. The reason lies in the low value of the concordance index (0.491), while the other indices are higher (similarity index = 0.892, accuracy index = 1, ACF index = 1). Similar molecules found in the training set have experimental values that disagree with the predicted value.
3.3.7 Read-Across Prediction for the ISS Model
The most similar chemical (CAS RN 118-92-3) is experimentally nonmutagenic and was predicted correctly by ISS as well as by CAESAR and SARpy; it is the only similar chemical in common among those found in the training sets of the models (Fig. 20). Chemicals #2, #3, and #4 are experimentally mutagenic, but the ISS model finds a different SA (SA27, nitro aromatic) in their structures that is not present in the target, so these similar chemicals are not relevant. The ISS model does not recognize SA28 in molecules #1 and #3 because of the restrictions of that rule (see the SAs paragraph below). The last two similar molecules (#5 and #6) are, respectively, nonmutagenic and mutagenic; the first is a benzoic acid, and the second bears two chlorine atoms on the aromatic ring that are not present in the target chemical. As for CAESAR and SARpy, the read-across analysis of the similar chemicals found in the training set of the model does not bring any significant evidence supporting the QSAR prediction.
Structural Alerts
The ISS model predicts 4-aminosalicylic acid as mutagenic because it finds in its structure the following alert: SA28, primary aromatic amine, hydroxylamine, and its derived esters (with restrictions). As specified in Fig. 21, among the exceptions, aromatic amino groups with a carboxylic acid substituent in the ortho position are excluded from this mutagenicity rule. Among the three most similar chemicals of the training set bearing the same fragment (SA28) as the target, only CAS RN 99-57-0 did not appear in the previous discussion. However, it does not bring any further evidence of mutagenicity because it has a nitro group in its structure that is not present in the target chemical.
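The SA28 logic (an alert that fires for a primary aromatic amine unless one of its listed exceptions, such as an ortho carboxylic acid substituent, applies) can be sketched as a rule with exception predicates; the feature labels below are hypothetical stand-ins for the SMARTS patterns a real system would match:

```python
# Toy structural-alert rule with exceptions, mirroring the SA28 logic:
# the alert fires only if no exception condition is met.
# Feature labels are hypothetical; real systems match SMARTS patterns.
def fires_sa28(features):
    has_alert = "primary_aromatic_amine" in features
    exceptions = [
        "ortho_carboxylic_acid",   # excluded per the rule's restrictions
        "ortho_sulfonic_acid",     # hypothetical further exception
    ]
    return has_alert and not any(e in features for e in exceptions)

# 4-aminosalicylic acid: amine with a carboxylic acid, but in the para position,
# so the ortho exception does not apply and the alert fires
print(fires_sa28({"primary_aromatic_amine", "para_carboxylic_acid"}))   # True
print(fires_sa28({"primary_aromatic_amine", "ortho_carboxylic_acid"}))  # False
```

The second call shows how an exception suppresses the alert, which is why the ISS model does not flag some of the ortho-substituted analogues.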
Fig. 20 The six most similar chemicals belonging to the training set of the ISS model
Fig. 21 SA28 fragment definition and the closest StAn of 4-aminosalicylic acid that, according to the ISS model, contain within their structure the same structural alert characterizing the query chemical
3.3.8 KNN/Read-Across Model
Unlike the previous models, KNN predicts the target chemical as nonmutagenic with high reliability (ADI = 0.97), placing the substance within its applicability domain.
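The KNN/read-across scheme (take the most similar training chemicals and let them vote, weighted by similarity) can be sketched as below; the neighbor list is toy data, not the model's actual training set:

```python
# Similarity-weighted k-NN classification, the idea behind a read-across model.
# Neighbors and similarity values are toy data, not the model's training set.
def knn_predict(neighbors, k=4):
    """neighbors: list of (similarity, label), sorted by similarity desc."""
    weights = {}
    for sim, label in neighbors[:k]:
        weights[label] = weights.get(label, 0.0) + sim
    return max(weights, key=weights.get)

neighbors = [(0.975, "nonmutagenic"), (0.95, "nonmutagenic"),
             (0.93, "mutagenic"), (0.914, "nonmutagenic")]
print(knn_predict(neighbors))  # nonmutagenic: the weighted vote is dominated by negatives
```

Because the prediction is literally a function of the nearest experimental neighbors, this kind of model blurs the line between QSAR and read-across, as the text notes below.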
Output: The prediction is nonmutagenic, and the result appears reliable.

3.3.9 Read-Across Prediction for the KNN/Read-Across Model
Five out of the six similar chemicals shown in the report are experimentally nonmutagenic and correctly predicted by the model; indeed, the accuracy and concordance indices are both equal to 1. The degree of similarity with respect to the query chemical ranges from 0.914 to 0.975. In addition to the three negative chemicals of the training sets of the other models previously analyzed, KNN shows molecule #4 (CAS RN 89-86-1) and molecule #5 (CAS RN 619-67-0), which are negative in the Ames test (see Fig. 22). Given that the algorithm of this model can be considered a read-across approach, the QSAR prediction and the read-across prediction coincide.

Fig. 22 StAn #4 and #5, found in the training set of the KNN model

3.3.10 Read-Across with ToxRead
The read-across prediction provided automatically by the software characterizes the chemical as nonmutagenic (score = 0.69). As for the KNN model, five out of the six most similar chemicals shown by ToxRead (see Fig. 23) are nonmutagenic. The most similar chemical is 5-aminosalicylic acid (CAS RN 89-57-6), as for CAESAR, SARpy, and KNN, with a similarity value of 0.975. That molecule, together with 4-aminobenzoic acid (CAS RN 150-13-0), shares with the query chemical most of the identified fragments; these two analogues are therefore more relevant for the assessment of the target chemical than the other similar molecules. 4-Aminobenzoic acid and the target chemical are associated with the following 4 SAs: 1. SA CRM5 for nonmutagenicity (experimental accuracy: 0.68, p-value:
Table 1 RDT databases for modelling

| Name | Developers | No. substances | Exposure route | Exposure duration | Animal model | Available at | Copyright | Drugs |
|------|------------|----------------|----------------|-------------------|--------------|--------------|-----------|-------|
| RepDose | Fraunhofer ITEM | >600 | Oral, inhalation | At least 14 days | Rodents | http://fraunhofer-repdose.de/ | Y | N |
| Munro | Munro et al. (1996) | 613 | Oral | Up to 910 days | Rodents, rabbit | OECD QSAR Toolbox | N | N |
| HESS | NITE (Japan) | 500 | Oral | 28–120 days | Rats | OECD QSAR Toolbox | N | N |
| IRIS | US EPA | >500 | Many | Many | Rodents, non-rodents | http://www.epa.gov/iris/intro.html | N | N |
| COSMOS | COSMOS project | 1660 | Many | Many | Rodents, non-rodents | http://cosmosdb.cosmostox.eu | N | Y |
| ToxRef | US EPA | n/a | Many | Many | Rodents, non-rodents | OECD QSAR Toolbox | N | Y |
| OpenFoodTox | EFSA | n/a | Many | Many | Rodents, non-rodents | https://www.efsa.europa.eu/en/microstrategy/openfoodtox | N | N |
| Global antimicrobial inventory | Yang et al. (2020) | 391 | Oral | 28 days, 90 days or longer | Rodents, non-rodents | Yang et al. (2020) | N | Y |
In Silico Models for Repeated-Dose Toxicity (RDT): Prediction of the No. . . 243
244
Fabiola Pizzo et al.
Fig. 2 Query form of the RepDose database
NOAEL, not only is the final number important but so is other supporting information, such as the route and duration of exposure, the species and strain used, the spacing between doses, and organ-level effects, in order to properly assess the quality and potential use of these data for modeling. The RepDose database, developed by Fraunhofer ITEM as part of a project funded by the European Chemical Industry Council (CEFIC), contains experimental NOAEL and LOAEL values for more than 600 chemicals from oral (gavage, feeding, and drinking water) or inhalation studies in rodents exposed to the substance for at least 14 days. The chemicals in the database have a limited number of functional groups, since complex and multifunctional chemical structures such as pharmaceuticals, inorganic or metal compounds, and mixtures were eliminated (Fig. 2) [20]. A score (A, B, C, D) indicating the data quality is also provided. Details on the animals used (strain, sex, number per dose group) and the exposure (duration and route, post-exposure observation period, and dose groups) are given as well. The database includes toxicological data (effect data cover all target organs with all associated effects and the corresponding LOAEL) and physicochemical data (molecular weight, solubility in water, physical state, boiling point, dissociation constant, octanol-water partition coefficient, and vapor pressure). The RepDose database is available at http://fraunhofer-repdose.de/, and access is free upon registration. A user-friendly query screen (Fig. 2) allows several queries regarding the influence of structural features and physicochemical data on LOAEL, target organs, and effects [20]. Although all the data in the database are displayed, their use is restricted. The publicly available RepDose data are also present in the OECD QSAR Toolbox [21]. Munro et al.
[22] provide NOAEL and LOAEL values for 613 organic compounds from subchronic, chronic, reproductive toxicity, and teratogenicity studies in rodents and rabbits. For each compound, the chemical name, CAS number, structural classification using the decision tree of Cramer et al. [23], species, sex, route of exposure, doses tested, study type, duration, end points, NOAEL, LOAEL, and references are reported. The data come from
Fig. 3 Typical output of a query using the COSMOS database. In the toxicity data section (orange), the exposure duration and the animals used for the in vivo experiment (green) are indicated, and the result of the RDT study is reported at the bottom of the screen (red) as the highest no effect level (HNEL)
four sources: US National Toxicology Program (NTP) technical reports (post-1984), the toxicological monographs prepared by the Joint FAO/WHO Expert Committee on Food Additives (JECFA), the Integrated Risk Information System (IRIS) database, and the Developmental and Reproductive Toxicology (DART) database. The compounds in the Munro database represent a variety of chemicals (e.g., pesticides, food additives, industrial chemicals). To demonstrate that a study is rigorous enough to detect toxic effects, a compound needs both a NOAEL and a LOAEL to be included in the database; however, in some cases the LOAEL is not available because the substances are major food ingredients and showed no toxicity at the highest dose tested in well-conducted studies [22]. The database is downloadable from the OECD QSAR Toolbox; otherwise, the publication provides a paper version. The Hazard Evaluation Support System (HESS) database comprises 500 chemicals for which RDT data were obtained from test
reports of the Japanese Chemical Substances Control Law (CSCL) program by the Ministry of Health, Labour and Welfare, the National Institute of Technology and Evaluation (NITE), and the Ministry of Economy, Trade and Industry (METI), and from reports produced by the US NTP [24]. All these tests were conducted in compliance with GLP principles. The database contains detailed RDT data on subchronic and chronic (28–120 days) oral exposure in rats. The HESS database, freely downloadable from the OECD QSAR Toolbox, provides information on the target compounds such as CAS number, chemical name, SMILES, exposure route and duration of the studies, animals used (strain, sex), toxicological data (organ, tissue, effects, largest and smallest doses used), and NOAEL/LOAEL values. The Integrated Risk Information System (IRIS) is a publicly available repository, developed by the US Environmental Protection Agency (EPA), that contains information on over 500 chemicals. It provides descriptive and quantitative chronic health information on chemicals found in the environment in order to support risk decision-making [25]. Two main categories of effects are present in the IRIS database: noncancer (oral reference doses and inhalation reference concentrations: RfDs and RfCs) and cancer effects. NOAEL and LOAEL are reported with a detailed summary of the studies containing information on the species used, route and duration of exposure, concentrations tested, and target organs. The user can consult the data on the EPA website (http://cfpub.epa.gov/ncea/iris/index.cfm?fuseaction=iris.showSubstanceList); substances are listed in alphabetical order. The COSMOS database [26] contains 12,538 toxicological studies for 1660 compounds. Two datasets are available: US FDA PAFA and oRepeatToxDB. The first contains 12,198 studies across 27 endpoints, including both repeated-dose (in this case the lowest effect level, LEL, is reported) and genetic toxicity data.
oRepeatToxDB, assembled by the COSMOS consortium, contains 340 in vivo repeated-dose toxicity studies from different sources (EC REACH project, US NTP) for 228 chemicals. It reports the observed toxicological effects together with the sites at which each effect occurred. Figure 3 shows the typical output of a COSMOS database query. The user needs to register for a free account. The COSMOS database was built in the context of the EC project SEURAT-1, partly funded by Cosmetics Europe. The Toxicity Reference Database (ToxRefDB), developed by the US EPA [27], comprises thousands of animal toxicity studies (reporting NOAEL and LOAEL) on hundreds of different chemicals. ToxRefDB [28] is freely downloadable from the OECD QSAR Toolbox or can be consulted at the US EPA website (http://actor.epa.gov/toxrefdb/faces/Home.jsp). Although none of these databases contains only NOAEL and LOAEL data for drugs, some of them cover pharmaceuticals.
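As stressed above, study metadata (route, duration, species) determine whether records from these heterogeneous databases can be pooled for modeling; a sketch of such a pre-filtering step (the record fields and dose values are invented, not the schema of any database above):

```python
from dataclasses import dataclass

# Illustrative record structure for a repeated-dose study; field names and
# NOAEL values are made up, not the schema of RepDose or any other database.
@dataclass
class RdtStudy:
    cas: str
    noael_mg_kg: float
    route: str
    duration_days: int
    species: str

def select_for_modelling(studies, route="oral", min_days=28, max_days=90,
                         species="rat"):
    """Keep only studies with a consistent design (route, duration, species)."""
    return [s for s in studies
            if s.route == route and s.species == species
            and min_days <= s.duration_days <= max_days]

studies = [
    RdtStudy("50-00-0", 15.0, "oral", 90, "rat"),
    RdtStudy("71-43-2", 1.0, "inhalation", 28, "rat"),   # wrong route
    RdtStudy("64-17-5", 500.0, "oral", 730, "rat"),      # chronic, too long
]
print([s.cas for s in select_for_modelling(studies)])  # ['50-00-0']
```

Only the subchronic oral rat study survives the filter; mixing the other two designs into one training set would conflate incompatible endpoints.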
Fig. 4 A typical output of the EFSA’s Chemical Hazards Database OpenFoodTox
EFSA's Chemical Hazards Database OpenFoodTox [29], launched by the European Food Safety Authority (EFSA) in 2017, is an open-source toxicological database for chemical risk assessment. It contains (eco)toxicological information from over 1650 EFSA scientific outputs for chemicals entering the food and feed chain. Point of departure (POD) values (e.g., NOAEL/LOAEL) and reference values (e.g., ADI, upper levels, and tolerable daily intake) are given for all the substances evaluated by EFSA (Fig. 4). Options are available to search the data for each substance using chemical descriptors and to generate an individual summary datasheet (PDF/Excel) for different features (chemical characterization, EFSA outputs and background regulations, reference points for human, animal, and environmental health, reference values, and uncertainty factors applied). The database can be consulted online at https://www.efsa.europa.eu/en/microstrategy/openfoodtox. Alternatively, the user can download the full dataset or the individual spreadsheets containing information on substance characterization, EFSA outputs, reference points, reference values, and genotoxicity, in Excel format. As this database was developed mainly to make available summary data on substances assessed by EFSA in the food/feed area, such as additives, pesticides, and contaminants, it does not specifically contain NOAEL and LOAEL data for drugs. Yang et al. [30] recently compiled a new database of antimicrobial chemicals for the Threshold of Toxicological Concern (TTC), namely the Global Antimicrobial (AM) Inventory. The database, comprising 681 chemicals, was compiled by integrating AM data from various regulatory sources. Furthermore, NOEL/LOEL
data were also retrieved for 391 of these chemicals and manually curated from EFSA, ECHA, SCCS, JECFA, US NTP, and US CIR sources. The Global AM Inventory was then merged with three existing TTC datasets (i.e., Munro, COSMOS, and Feigenbaum [31]) to create a consolidated AM TTC dataset.
3 In Silico Models for the Prediction of LOAEL and NOAEL

A limited number of in silico models are available for the prediction of LOAEL and NOAEL. Published models and software are reviewed here.
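The models reviewed below are typically trained on log10-transformed doses and scored by RMSE in log units (the 0.5–0.7 figures quoted for several models); a minimal sketch of that metric, with arbitrary example doses:

```python
import math

# RMSE on log10-transformed NOAEL values, the usual reporting unit for the
# models reviewed in this section. The doses below are arbitrary examples.
def rmse_log10(observed, predicted):
    diffs = [math.log10(o) - math.log10(p) for o, p in zip(observed, predicted)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

obs = [10.0, 100.0, 1.0]   # mg/kg bw/day, illustrative
pred = [20.0, 50.0, 1.0]
print(round(rmse_log10(obs, pred), 3))  # 0.246: average error of about a factor of 2
```

An RMSE of 0.3 log units corresponds to a typical prediction error of roughly a factor of 2 in dose, which helps put the published figures in perspective.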
3.1 Published Models
The models described here were not built primarily to predict NOAEL and LOAEL for pharmaceuticals; the compounds used for modeling came from various industrial and environmental contexts. Their performance is close to acceptable and offers a good starting point for the development of a reliable model that can be used in a multidisciplinary context. Table 2 provides a general overview of the literature-based models. Yang et al. [32], in order to overcome the inherent difficulties in modeling NOAEL/LOAEL, developed a refined chemoinformatic approach using a starting dataset of 1357 chemicals. This approach incorporates toxicity endpoint-specific data to estimate confidence bounds for the NOAEL of a target substance, based on experimental NOAEL values for similar compounds, decreasing the uncertainty of the final predictions. Gadaleta et al. [33] proposed a novel approach to modeling RDT data, based on the assumption that integrating NO(A)EL and LO(A)EL predictions can improve the performance of the single models. In particular, various machine learning and fragment-based models were built for both NO(A)EL and LO(A)EL prediction; comparing the two predictions for a single chemical then allowed the less reliable ones (i.e., those with a larger difference between the NO(A)EL and LO(A)EL estimates) to be identified and excluded. This strategy improved the RMSE in external validation from values above 0.70 to values below 0.60. Pradeep et al. [34] developed QSAR models to predict POD values (PODQSAR) for repeated-dose toxicity using publicly available in vivo toxicity data on 3592 chemicals from the U.S. Environmental Protection Agency's Toxicity Value database (ToxValDB). Among these, the models built with the random forest algorithm performed best, with an RMSE of 0.71 on the external test set. Helma et al.
[35] published a read-across model developed from a training set of 998 LOAEL values for 671 chemical structures, obtained by merging the dataset of Mazzatorta et al. [36] with that of the Swiss Food Safety and Veterinary Office (FSVO)
Table 2 General overview of published models for the prediction of NOAEL and LOAEL

| Reference | Endpoint | Modeling method | Descriptors | Data used | Training set | Test set |
|-----------|----------|-----------------|-------------|-----------|--------------|----------|
| Yang et al. (2021) | NOAEL | Similarity-based | Fingerprints | Chronic, subchronic, acute | 1357 | None (case studies) |
| Gadaleta et al. (2021) | LOAEL/NOAEL | Various ML methods; SAs | Various | Various | 8349 | n/a |
| Pradeep et al. (2020) | POD | Random forest | 10 | Various | n/a | n/a |
| Helma et al. (2018) | LOAEL | Random forest (local models) | Fingerprints | >180 days, oral, rats | 671 | 155 |
| Toropova et al. (2017) | NOAEL | Monte Carlo | 4 or 6 | 90 days, oral, rat | n/a | n/a |
| Veselinovic et al. (2016) | LOAEL | Monte Carlo | 3 | n/a | n/a | n/a |
| Novotarsky et al. (2016) | LEL | Associative neural networks | 39 to 4036 | n/a | 483 | 143 |
| Hisaki et al. (2015) | NOAEL | Artificial neural networks | 17 | 90 days, oral, rats | 421 | n/a |
| Chavan et al. (2015) | LD50/LOAEL | k-NN | Fingerprints | LD50, acute toxicity | 94 (LD50); 70 (LOAEL) | 24 (LD50); 19 (LOAEL) |
| Toropov et al. (2015) | NOAEL | Monte Carlo | 3 | 28 and 90 days, oral, rats | 97 | 16 |
| Gadaleta et al. (2014) | LOAEL | k-NN | Fingerprints + 3 structural keys | 28–90 days, oral, rat | 254 | 174 |
| Toropova et al. (2014) | NOAEL | Monte Carlo | 8 | 28 days, oral, rats | n/a | 52–57 |
| Sakuratani et al. (2013) | LOAEL | Read-across | None | 28–90 days, oral, rats | 500 | None |
| Mazzatorta et al. (2008) | LOAEL | GA-PLS for descriptor selection, LOO-SMLR for model generation | 19 | >180 days, oral, rats | 445 | None |
| Garcia-Domenech et al. (2006) | LOAEL | Furnival-Wilson for descriptor selection, LOO-MLR for model generation | 6 | Chronic, rats | 86 | 16 |
| De Julian-Ortiz et al. (2005) | LOAEL | Stepwise procedure for descriptor selection, LDA for model generation | 15 | Chronic | 234 | None |
| Matthews et al. (2004) | MRTD/NOEL | Structural alerts | None | 3–12 months, oral, human | 1309 | n/a |
database. The model was developed with the modular lazar (lazy structure-activity relationship) framework of Maunz et al. [37], which builds local random forest models from a selection of training chemicals structurally close to the target. Validation on a test set of 155 chemicals returned results comparable to the estimated experimental variability of the endpoint. Toropova et al. [38] developed a new NOEL model based on a larger selection of data (475 compounds) from Fraunhofer's RepDose® database and EFSA's Chemical Hazard Database. Similarly to their previous work [39], three different splitting iterations were performed. The R2 in external validation was in the range 0.66–0.78, improving on the performance of the previous model. The model was again developed on a SMILES-based optimal descriptor with the integration of HARD global attributes. Veselinovic et al. [40] developed QSAR models for LOAEL prediction with the Monte Carlo algorithm on a dataset of 341 organic compounds. The models were built using two approaches: 1. the classical method, based on three sets (training, calibration, and validation), and 2. the balance of correlations, based on four sets (training, invisible training, calibration, and validation). The r2 was up to 0.64 with the first approach, but applying the balance of correlations improved the performance significantly (r2 up to 0.76). Novotarsky et al. [41] developed an associative neural network (ASNN), as part of a ToxCast EPA modeling challenge managed by TopCoder in 2014, for the prediction of the lowest effect level (LEL). The model was based on a dataset of 483 compounds and molecular descriptors from the OCHEM public platform and was validated on a test set of 143 chemicals, resulting in top-performing models with RMSE >1 log(mol/L). Another attempt to model the NOEL was proposed by Hisaki et al.
[42], who developed artificial neural network models based on a training set of 421 chemicals and descriptors derived from molecular orbital calculations. Tenfold cross-validation returned RMSE values of 0.529 log units. Chavan et al. [43] developed a k-NN model based on PaDEL-derived fingerprints to predict LD50-based categories. Predictions made with this model were able to determine the LOEL of external chemicals, with 14 out of 19 chemicals in the test set (74%) predicted within one order of magnitude of their experimental value. Toropov et al. [39] modeled the NOAEL of 113 organic compounds using the Monte Carlo method implemented in the CORAL software and three SMILES-based descriptors. The dataset was split three times, and the average performance on the training set (97 compounds) in terms of R2 and RMSE was, respectively, 0.52 and 0.61. In the test set (16 compounds), the
performance in terms of R2 and RMSE ranged from 0.62 to 0.73 and from 0.44 to 0.52, respectively. Gadaleta et al. [44], using the k-nearest neighbors (k-NN) algorithm, a computational technique based on the concept of similarity, built a model for the prediction of LOAEL. To improve the performance, the basic algorithm was refined with additional conditions, and a target chemical must fulfil all those rules to be considered reliably predicted. The training and test sets comprised, respectively, 254 and 174 organic compounds, and R2 for the two sets ranged from 0.632 to 0.769 and from 0.552 to 0.682, depending on k. Toropova et al. [45] modeled 218 NOAEL data (28 days of oral exposure in rats) using the Monte Carlo method. R2 for the training and test sets ranged from 0.679 to 0.718 and from 0.61 to 0.66, respectively, over the different splits. Sakuratani et al. [46] identified 33 chemical categories related to individual types of toxicity on the basis of mechanistic knowledge, starting from a training set of 500 chemicals with RDT data for oral exposure between 28 and 120 days in rats. Chemicals were assigned to a given category, and the LOAEL was then derived by a data gap-filling read-across on other chemicals in the category. This model does not provide figures for the LOAEL but can be used to identify the target organ most likely to be affected by the target chemical. The category library has been implemented in the Hazard Evaluation Support System (HESS) integrated computational platform. A further model for the prediction of LOAEL was developed by Mazzatorta et al. [36], applying an integrated approach of a genetic algorithm (GA) and partial least squares (PLS).
Selected descriptors (19 from DRAGON) were used to develop a LOAEL predictive model through leave-one-out stepwise multiple linear regression (LOO-SMLR), starting from a set of 445 chronic toxicity data (180 days or more of oral exposure in rats) selected from several sources. The final dataset included pesticides, drugs, and natural products. The model performed as follows: R2 0.570 and RMSE 0.700. No external validation was done, so the model's real predictive power is not known. However, the LOO cross-validation performance was q2 0.500 and RMSE 0.727. García-Domenech et al. [47] applied the same techniques (Furnival-Wilson for descriptor selection and MLR for model building) to the same 86 molecules used by De Julián-Ortiz et al. [48]. The model, based on six descriptors, was validated on 16 external chemicals. Performance in the training set was R2 0.795 and RMSE 0.517; q2 0.719 and RMSE 0.564 in LOO cross-validation; and R2 0.712 and RMSE 0.853 in external validation. De Julián-Ortiz et al. [48] used a dataset of chronic LOAEL data for 234 compounds compiled from different sources (US Environmental Protection Agency, EPA, and National Cancer
Institute/National Toxicology Program, NTP) to build a multilinear regression (MLR) model. They selected 15 topological descriptors by a Furnival-Wilson algorithm from among those in the DESCRI program. MLR and the Furnival-Wilson algorithm were also applied to a smaller but more homogeneous dataset (86 compounds). The results for the first 234 compounds were quite poor (R2 0.524 and RMSE 0.74). However, the performance on the second dataset (86 compounds) was significantly better (R2 0.647 and RMSE 0.66). In both cases, no external validation was done. Matthews et al. [49] used Maximum Recommended Therapeutic Dose (MRTD) data for 1309 pharmaceutical substances for classification modeling. The MRTD (or Maximum Recommended Daily Dose, MRDD) was determined from clinical trials employing an oral route of exposure and daily treatments, usually for 3–12 months. The MRTD is derived from human clinical trials and is an upper dose limit beyond which the efficacy of a drug does not increase and/or adverse effects start to outweigh the beneficial ones [50]. MRTD and NOEL for drugs are directly related in humans [49]. An analysis of the MRTD database indicated that most drugs show no efficacy or adverse effects at a dose approximately ten times lower than the MRTD. Based on this observation, Matthews et al. [49] calculated the NOEL as MRTD/10. Chemicals with a low MRTD/NOEL were considered strongly toxic, whereas those with higher values were labeled as safe, and structural alerts were identified on this basis. The predictive ability of this model was evaluated through leave-more-out external validation (40 compounds were removed from the training data set of 120 selected test chemicals), and the results showed that the model gave good predictions of toxicity for the test chemicals; the positive predictivity and specificity were high, at 92.5% and 95.2%, respectively, whereas the sensitivity was lower (74.0%).

3.2 Software
Three software tools are available for the prediction of LOAEL/NOAEL, one free and two commercial. The free tool is the VEGA platform (https://www.vegahub.eu/download/vega-qsar-download/), implementing several freely accessible QSAR models for the prediction of (eco)toxicological, environmental, and physicochemical properties of chemicals. VEGA implements the NOEL model of Toropov et al. [39] described in Subheading 3.1. In addition, the VEGA implementation includes an automated strategy to evaluate whether a prediction falls inside the applicability domain of the model. This is based on the calculation of the Applicability Domain Index (ADI), a score accounting for various aspects such as the presence of molecules in the training set that are similar to the target, their associated prediction error, and the concordance of their experimental values with the prediction generated for the target.
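The sub-indices listed above can be combined into a single domain score; the sketch below uses a geometric mean as an assumed aggregation (the actual ADI formula is internal to VEGA):

```python
# Illustrative ADI-style aggregation of the sub-indices VEGA reports
# (similarity, accuracy, concordance). The geometric-mean combination is an
# assumption for illustration; the actual VEGA formula is internal to the tool.
def adi(similarity, accuracy, concordance):
    if min(similarity, accuracy, concordance) == 0.0:
        return 0.0  # a zero sub-index places the target outside the domain
    return round((similarity * accuracy * concordance) ** (1 / 3), 2)

print(adi(0.892, 1.0, 0.491))  # 0.76: pulled down by the weak concordance
print(adi(0.95, 0.325, 0.0))   # 0.0: zero concordance voids the score
```

The key design point, which any aggregation should preserve, is that one failing sub-index (e.g., similar chemicals whose experimental values contradict the prediction) should dominate an otherwise high score.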
The first commercial tool is Toxicity Prediction by Komputer Assisted Technology (TOPKAT), developed by Accelrys®. The TOPKAT model aims to predict the chronic oral LOAEL in rats (studies lasting 12 or more months were considered) and has been described in Mumtaz et al. [50]. Starting from a dataset of 234 heterogeneous chemicals, the model was built using stepwise regression analysis with 44 descriptors selected from an initial pool of electronic, topological, and symmetry descriptors and molecular connectivity indices. The performance of the model was tested by comparing the predicted with the experimental LOAEL: about 55% of the compounds were predicted within a factor of 2, and more than 93% within a factor of 5 [50]. Over the years, the TOPKAT model for LOAEL prediction has been refined by including more data in the training set. Using the expanded training set (393 chemicals), models for five chemical classes were developed (acyclics, alicyclics, heteroaromatics, single benzenes, and multiple benzenes). Venkatapathy et al. [51] tested the predictive performance of the five submodules on a large dataset of 656 substances; the R2 ranged between 0.78 (multiple benzenes) and 0.98 (alicyclics). TOPKAT was further validated by Tilaoui et al. [52] using 340 chemicals not included in the TOPKAT training set; TOPKAT correctly predicted (with an error lower than 1 log unit) only 33% of these chemicals [18]. Another software tool for LOAEL prediction has been developed by Molcode Ltd. using RDT data in the rat. Information about this model is available from the QSAR Model Reporting Format (QMRF) document. The model is proprietary, but the training and test sets are available. The model was developed using multilinear regression, and the descriptors were chosen through stepwise selection. There were 76 compounds in the training set, and to validate the model's real ability to predict LOAEL, an external dataset of 18 compounds was used.
In terms of R2, the Molcode model achieved 0.79 for the training set and 0.725 for the test set; a definition of the applicability domain was also provided. Although the software was not built using only pharmaceutical compounds, it can be used to predict the LOAEL of drugs.
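Both evaluation statistics used above, agreement "within a factor of k" (reported for TOPKAT) and the coefficient of determination R2 on a held-out set (reported for the Molcode model), are straightforward to compute. The sketch below uses invented LOAEL values for illustration only:

```python
# Two validation statistics for continuous toxicity predictions:
# (1) the fraction of compounds predicted within a factor of k of the
#     experimental value, and (2) the coefficient of determination R2.
# All numbers below are invented, for illustration only.
def within_factor(predicted, experimental, k):
    hits = sum(1 for p, e in zip(predicted, experimental)
               if e / k <= p <= e * k)
    return hits / len(predicted)

def r_squared(observed, predicted):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

pred_loael = [10.0, 150.0, 3.0, 80.0]   # predicted LOAEL, mg/kg/day
expt_loael = [12.0, 50.0, 2.5, 500.0]   # experimental LOAEL, mg/kg/day
print(within_factor(pred_loael, expt_loael, 2))  # -> 0.5
print(within_factor(pred_loael, expt_loael, 5))  # -> 0.75

obs_log = [1.2, 2.0, 2.9, 4.1, 5.0]     # observed log LOAEL
prd_log = [1.0, 2.2, 3.1, 3.9, 5.2]     # predicted log LOAEL
print(round(r_squared(obs_log, prd_log), 3))     # -> 0.979
```

Dose data are usually log-transformed before computing R2, as LOAEL values span several orders of magnitude.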
4 Uncertainty of LOAEL and NOAEL Data

The development of nonanimal testing for RDT is difficult mainly because of the complex underlying processes, which include effects on different organs and tissues and on different time scales [2]. NOAEL and LOAEL have been criticized as conceptually inappropriate for providing quantitative estimates of toxicity [53]. Besides the fact that many organs and tissues are involved, other aspects make NOAEL and LOAEL data uncertain. NOAEL and LOAEL are not
derived or calculated from a dose-effect curve but can only be identified from the tested doses. This means that they both depend on the study design, particularly the spacing between doses. Consequently, different NOAEL and LOAEL values may be obtained for the same substance using different study designs or different exposure doses. There is a further intrinsic uncertainty in LOAEL experimental data. The "true" LOAEL (the real dose at which the substance starts to generate an effect) may lie anywhere between the NOAEL and the LOAEL. This uncertainty is probably large, but its magnitude cannot be measured. This is another problem of the NOAEL and LOAEL approach, as in risk assessment quantifying the uncertainties involved is crucial for establishing protective human exposure limits [54]. The variability of the responses between animals in the dose groups, the definition of the "adversity" of an effect, and the statistical methods supporting this definition are other aspects that raise the level of uncertainty of NOAEL and LOAEL [55]. The benchmark dose (BMD) approach was introduced more than 30 years ago and has been proposed as a replacement for the NOAEL/LOAEL concepts, to refine the estimation of the point of departure (POD). Being based on a more consistent response level across studies and study types, and taking the entire dose–response curve into consideration, BMD modeling is now considered the preferred approach in the different risk assessment contexts. Additionally, BMD modeling makes more efficient use of animals, giving estimates of greater precision from a given number of animals in a study compared with the NOAEL approach [56].
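The design dependence described above can be made concrete with a small sketch: because NOAEL and LOAEL can only be one of the tested doses, the same underlying adversity threshold yields different values under different dose spacings. The threshold and dose levels below are invented:

```python
# NOAEL/LOAEL are picked from the tested doses, not derived from a
# fitted dose-response curve, so they change with the study design.
def noael_loael(doses, adverse):
    """doses: tested dose levels; adverse(d) -> True if the effect at
    dose d was judged adverse. Returns (NOAEL, LOAEL)."""
    doses = sorted(doses)
    loael = next((d for d in doses if adverse(d)), None)
    noael = max((d for d in doses if loael is None or d < loael),
                default=None)
    return noael, loael

true_threshold = 70.0  # invented dose at which effects actually begin
adverse = lambda d: d >= true_threshold

print(noael_loael([10, 100, 1000], adverse))     # wide spacing:   (10, 100)
print(noael_loael([30, 60, 120, 240], adverse))  # narrow spacing: (60, 120)
```

In both designs the "true" threshold (70) lies between the NOAEL and LOAEL, but the reported bracket is much tighter when the doses are more closely spaced.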
5 Conclusion

The NOAEL and LOAEL of substances are required for human health hazard assessments under different regulatory contexts (pharmaceutical, biocides, REACH, cosmetics) [2]. In vivo RDT studies are very expensive and time-consuming and involve a large number of animals. In vivo RDT has been banned for the safety assessment of cosmetics [1], and the REACH legislation [10] requires the use of as few animals as possible to evaluate the toxicity of substances. There is therefore a pressing need for a valid alternative. However, given the uncertainty of NOAEL and LOAEL values, building in silico models is extremely challenging, because all this uncertainty is implicitly transferred to the data predicted by a model. Moreover, in the QSAR approach there is no solid mechanistic basis to support the statistical association between a set of molecular descriptors and the systemic effects [2]. Despite the limitations of each single alternative approach, the combination and interpretation of data from different alternative techniques,
such as QSAR, physiologically based pharmacokinetic (PBPK) modeling, read-across, the threshold of toxicological concern (TTC), and in vitro methods, may help to obtain more reliable predictions of NOAEL and LOAEL.
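Of the techniques listed, read-across lends itself to a compact sketch: predict the target's value as a similarity-weighted average over its nearest structural analogues. The fingerprint encoding (sets of "on" bits), the Jaccard/Tanimoto similarity index, and all values below are illustrative assumptions, not any specific published implementation:

```python
# Read-across sketch: predict a target chemical's NOAEL as a
# similarity-weighted average of measured values from its most similar
# analogues. Fingerprints and NOAEL values are invented.
def tanimoto(fp_a, fp_b):
    a, b = set(fp_a), set(fp_b)
    return len(a & b) / len(a | b)

def read_across(target_fp, analogues, k=2):
    """analogues: list of (fingerprint, measured log NOAEL)."""
    scored = sorted(((tanimoto(target_fp, fp), val) for fp, val in analogues),
                    reverse=True)[:k]
    total = sum(s for s, _ in scored)
    return sum(s * v for s, v in scored) / total

analogues = [({1, 2, 3, 5}, 2.1),   # (fingerprint on-bits, log NOAEL)
             ({1, 2, 4}, 2.5),
             ({7, 8, 9}, 4.0)]
print(read_across({1, 2, 3}, analogues, k=2))  # dominated by the two closest
```

The dissimilar third analogue (similarity 0) is excluded by the k-nearest cut, so the prediction falls between the values of the two close neighbors.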
Disclaimer

The views expressed in this manuscript are those of the authors only and do not represent the views of the European Food Safety Authority.

References

1. Scientific Committee on Consumer Safety (SCCS) (2012) The SCCS's notes of guidance for the testing of cosmetic ingredients and their safety evaluation. http://ec.europa.eu/health/scientific_committees/consumer_safety/docs/sccs_s_006.pdf. Accessed 08 June 2015
2. Worth A, Barroso J, Bremer S et al (2014) Alternative methods for regulatory toxicology – a state-of-the-art review. JRC science and policy reports. Report EUR 26797 EN
3. Steinmetz KL, Spack EG (2009) The basics of preclinical drug development for neurodegenerative disease indications. BMC Neurol 9(Suppl 1):S2
4. Dorato MA, Engelhardt JA (2005) The no-observed-adverse-effect level in drug safety evaluations: use, issue, and definition(s). Regul Toxicol Pharmacol 42:265–274
5. European Medicines Agency (2010) Guideline on repeated dose toxicity. Committee for Medicinal Products for Human Use. Reference number CPMP/SWP/1042/99 Rev
6. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) (2010) Guidance on nonclinical safety studies for the conduct of human clinical trials and marketing authorization for pharmaceuticals M3(R2). http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2009/09/WC500002941.pdf. Accessed 06 June 2015
7. European Commission (2006) Regulation (EC) No 1907/2006 of the European Parliament and of the Council of 18 December 2006 concerning the registration, evaluation, authorisation and restriction of chemicals (REACH), establishing a European Chemicals Agency, amending Directive 1999/45/EC and repealing Council Regulation (EEC) No 793/93 and Commission Regulation (EC) No 1488/94 as well as Council Directive 76/769/EEC and Commission Directives 91/155/EEC, 93/67/EEC, 93/105/EC and 2000/21/EC. Off J Eur Union L396:1–849
8. European Commission (2009) Regulation (EC) No 1107/2009 of the European Parliament and of the Council of 21 October 2009 concerning the placing of plant protection products on the market and repealing Council Directives 79/117/EEC and 91/414/EEC. Off J Eur Union L309:1–47
9. European Commission (2013) Regulation (EU) No 283/2013 of 1 March 2013 setting out the data requirements for active substances, in accordance with Regulation (EC) No 1107/2009 of the European Parliament and of the Council concerning the placing of plant protection products on the market
10. European Union (2012) Regulation (EU) No 528/2012 of the European Parliament and of the Council of 22 May 2012 concerning the making available on the market and use of biocidal products. Off J Eur Union L167:1–116
11. European Commission (2013) Commission Implementing Regulation (EU) No 503/2013 of 3 April 2013 on applications for authorisation of genetically modified food and feed in accordance with Regulation (EC) No 1829/2003 of the European Parliament and of the Council. OJ L 157, 8.6.2013, pp 1–48
12. European Commission (2008) Commission Regulation (EC) No 429/2008 of 25 April 2008 on detailed rules for the implementation of Regulation (EC) No 1831/2003 of the European Parliament and of the Council as regards the preparation and the presentation of applications and the assessment and the authorisation of feed additives. OJ L 133, pp 1–65
13. European Commission (2017) Commission Implementing Regulation (EU) 2017/2469 of 20 December 2017 laying down administrative and scientific requirements for applications referred to in Article 10 of Regulation (EU) 2015/2283 of the European Parliament and of the Council on novel foods. OJ L 351, pp 64–71
14. European Commission (2011) Commission Regulation (EU) No 234/2011 of 10 March 2011 implementing Regulation (EC) No 1331/2008 of the European Parliament and of the Council establishing a common authorisation procedure for food additives, food enzymes and food flavourings. OJ L 64, pp 15–24
15. European Commission (2009) Regulation (EC) No 1223/2009 of the European Parliament and of the Council of 30 November 2009 on cosmetic products
16. Sand S, Parham F, Portier CJ et al (2017) Comparison of points of departure for health risk assessment based on high-throughput screening data. Environ Health Perspect 125(4):623–633
17. Dearden JC (2003) In silico prediction of drug toxicity. J Comput Aided Mol Des 17:119–127
18. Tsakovska I, Lessigiarska I, Netzeva T, Worth AP (2007) A mini review of mammalian toxicity (Q)SAR models. QSAR Comb Sci 27:41–48
19. Przybylak KR, Madden JC, Cronin MTD et al (2012) Assessing toxicological data quality: basic principles, existing schemes and current limitations. SAR QSAR Environ Res 23:435–459
20. Bitsch A, Jacobi S, Melber C et al (2006) RepDose: a database on repeated dose toxicity studies of commercial chemicals – a multifunctional tool. Regul Toxicol Pharmacol 46:202–210
21. OECD QSAR Toolbox. https://www.oecd.org/chemicalsafety/risk-assessment/oecd-qsar-toolbox.htm
22. Munro IC, Ford RA, Kennepohl E et al (1996) Correlation of structural class with no-observed-effect levels: a proposal for establishing a threshold of concern. Food Chem Toxicol 34:829–867
23. Cramer GM, Ford RA, Hall RL (1978) Estimation of toxic hazard – a decision tree approach (and errata sheet). Food Cosmet Toxicol 16:255–276
24. Hayashi M, Sakuratani Y (2011) Development of an evaluation support system for estimating repeated dose toxicity of chemicals based on chemical structure. In: Wilson AGE (ed) New horizons in predictive toxicology: current status and application. RSC Publishing, Cambridge
25. Persad AS, Cooper GS (2008) Use of epidemiologic data in integrated risk information system (IRIS) assessments. Toxicol Appl Pharmacol 233:137–145
26. Anzali S, Berthold MR, Fioravanzo E et al (2012) Development of computational models for the risk assessment of cosmetic ingredients. IFSCC Mag 15:249–255
27. Martin MT, Judson RS, Reif DM et al (2009) Profiling chemicals based on chronic toxicity results from the U.S. EPA ToxRef database. Environ Health Perspect 117:392–399
28. Watford S, Pham LL, Wignall J et al (2019) ToxRefDB version 2.0: improved utility for predictive and retrospective toxicology analyses. Reprod Toxicol 89:145–158
29. Dorne JL, Richardson J, Livaniou A et al (2021) EFSA's OpenFoodTox: an open source toxicological database on chemicals in food and feed and its future developments. Environ Int 146:106293
30. Yang C, Cheeseman M, Rathman J et al (2020) A new paradigm in threshold of toxicological concern based on chemoinformatics analysis of a highly curated database enriched with antimicrobials. Food Chem Toxicol 143:111561
31. Feigenbaum A, Pinalli R, Giannetto M et al (2015) Reliability of the TTC approach: learning from inclusion of pesticide active substances in the supporting database. Food Chem Toxicol 75:24–38
32. Yang C, Rathman JF, Magdziarz T et al (2021) Do similar structures have similar no observed adverse effect level (NOAEL) values? Exploring chemoinformatics approaches for estimating NOAEL bounds and uncertainties. Chem Res Toxicol 34(2):616–633
33. Gadaleta D, Marzo M, Toropov A et al (2021) Integrated in silico models for the prediction of no-observed-(adverse)-effect levels and lowest-observed-(adverse)-effect levels in rats for sub-chronic repeated-dose toxicity. Chem Res Toxicol 34(2):247–257
34. Pradeep P, Friedman KP, Judson R (2020) Structure-based QSAR models to predict repeat dose toxicity points of departure. Comput Toxicol 16:100139
35. Helma C, Vorgrimmler D, Gebele D et al (2018) Modeling chronic toxicity: a comparison of experimental variability with (Q)SAR/read-across predictions. Front Pharmacol 9:413
36. Mazzatorta P, Estevez MD, Coulet M et al (2008) Modeling oral rat chronic toxicity. J Chem Inf Model 48:1949–1954
37. Maunz A, Gütlein M, Rautenberg M et al (2013) Lazar: a modular predictive toxicology framework. Front Pharmacol 4:38
38. Toropova AP, Toropov AA, Marzo M et al (2017) The application of the new HARD-descriptor available from the CORAL software to building up NOAEL models. Food Chem Toxicol 112:544–550
39. Toropov AA, Toropova AP, Pizzo F et al (2015) CORAL: model for no observed adverse effect level (NOAEL). Mol Divers 19:563–575. https://doi.org/10.1007/s11030-015-9587-1
40. Veselinović JB, Veselinović AM, Toropova AP et al (2016) The Monte Carlo technique as a tool to predict LOAEL. Eur J Med Chem 116:71–75
41. Novotarskyi S, Abdelaziz A, Sushko Y et al (2016) ToxCast EPA in vitro to in vivo challenge: insight into the Rank-I model. Chem Res Toxicol 29(5):768–775
42. Hisaki T, née Kaneko MA, Yamaguchi M et al (2015) Development of QSAR models using artificial neural network analysis for risk assessment of repeated-dose, reproductive, and developmental toxicities of cosmetic ingredients. J Toxicol Sci 40(2):163–180
43. Chavan S, Friedman R, Nicholls IA (2015) Acute toxicity-supported chronic toxicity prediction: a k-nearest neighbor coupled read-across strategy. Int J Mol Sci 16(5):11659–11677
44. Gadaleta D, Pizzo F, Lombardo A et al (2014) A k-NN algorithm for predicting oral sub-chronic toxicity in the rat. ALTEX 31:423–432
45. Toropova AP, Toropov A, Veselinović JB et al (2014) QSAR as a random event. Environ Sci Pollut Res Int 22:8264–8271. https://doi.org/10.1007/s11356-014-3977-2
46. Sakuratani Y, Zhang H, Nishikawa S et al (2013) Hazard evaluation support system (HESS) for predicting repeated dose toxicity using toxicological categories. SAR QSAR Environ Res 24:351–363
47. García-Domenech R, de Julián-Ortiz JV, Besalú E (2006) True prediction of lowest observed adverse effect levels. Mol Divers 10:159–168
48. Julián-Ortiz JV, García-Domenech R, Gálvez J et al (2005) Predictability and prediction of lowest observed adverse effect levels in a structurally heterogeneous set of chemicals. SAR QSAR Environ Res 16:263–272
49. Matthews EJ, Kruhlak NL, Benz RD et al (2004) Assessment of the health effects of chemicals in humans: I. QSAR estimation of the maximum recommended therapeutic dose (MRTD) and no effect level (NOEL) of organic chemicals based on clinical trial data. Curr Drug Discov Technol 1:61–76
50. Mumtaz MM, Knauf LA, Reisman DJ et al (1995) Assessment of effect levels of chemicals from quantitative structure-activity relationship (QSAR) models. I. Chronic lowest-observed-adverse-effect level (LOAEL). Toxicol Lett 79:131–143
51. Venkatapathy R, Moudgal CJ, Bruce RM (2004) Assessment of the oral rat chronic lowest observed adverse effect level model in TOPKAT, a QSAR software package for toxicity prediction. J Chem Inf Comput Sci 44:1623–1629
52. Tilaoui L, Schilter B, Tran LA, Mazzatorta P et al (2006) Integrated computational methods for prediction of the lowest observable adverse effect level of food-borne molecules. QSAR Comb Sci 26:102–108
53. Sand S, Victorin K, Filipsson AF (2008) The current state of knowledge on the use of the benchmark dose concept in risk assessment. J Appl Toxicol 28:405–421
54. Vermeire TG, Baars AJ, Bessems JGM et al (2007) Toxicity testing for human health risk assessment. In: van Leeuwen CJ, Vermeire TG (eds) Risk assessment of chemicals, an introduction, 2nd edn. Springer, Dordrecht
55. Paparella M, Daneshian M, Hornek-Gausterer R et al (2013) Food for thought... Uncertainty of testing methods: what do we (want to) know? ALTEX 30:131–144
56. Haber LT, Dourson ML, Allen BC et al (2018) Benchmark dose (BMD) modeling: current practice, issues, and challenges. Crit Rev Toxicol 48(5):387–415
Chapter 12

In Silico Models for Predicting Acute Systemic Toxicity

Ivanka Tsakovska, Antonia Diukendjieva, and Andrew P. Worth

Abstract

In this chapter, we give a brief overview of the regulatory requirements for acute systemic toxicity information in the European Union, and we review structure-based computational models that are available and potentially useful in the assessment of acute systemic toxicity. Emphasis is placed on quantitative structure–activity relationship (QSAR) models implemented by means of a range of software tools. The most recently published literature models for acute systemic toxicity are also discussed, and perspectives for future developments in this field are offered.

Key words: In silico models, Acute systemic toxicity, QSAR, Regulatory context, Databases, LD50, Organ-specific toxicity

Emilio Benfenati (ed.), In Silico Methods for Predicting Drug Toxicity, Methods in Molecular Biology, vol. 2425, https://doi.org/10.1007/978-1-0716-1960-5_12, © The Author(s), under exclusive license to Springer Science+Business Media, LLC, part of Springer Nature 2022

1 Introduction

Acute systemic toxicity comprises the general adverse effects that occur after single or multiple exposure of an animal (by the oral, dermal, or inhalation route) to a substance within 24 h, with an observation period of at least 14 days. The information obtained from acute systemic toxicity studies is used in the hazard assessment of chemicals for regulatory purposes. Acute mammalian toxicity tests are often the first in vivo toxicity tests to be performed on a chemical. In the past two decades, there have been considerable efforts to replace, reduce, or refine these animal tests by applying alternative approaches, including both in vitro and in silico models. An increasing number of models are available to predict acute mammalian toxicity, partly because a reasonable number of datasets are openly available for modeling. However, the reliability of the in vivo data can be highly variable, and the metadata provided are often insufficient to determine the suitability of the data for modeling purposes. Another challenge is related to the multiple mechanisms leading to this complex effect, which is typically expressed as a single
numerical value (LD50 for oral and dermal toxicity; LC50 for inhalation toxicity). In addition, there are differences between the routes of administration and the species used, which means that the available data are very heterogeneous. For these reasons, the correlation of rodent toxicity with acute toxicity in humans is generally found to be low. However, a recent study of 36 MEIC (see Subheading 5.1 below) chemicals found reasonable correlations with mouse and rat intraperitoneal LD50 values (r2 greater than 0.75) [1]. Target organs, such as the liver, kidneys, heart, lungs, and brain, can be affected by exogenous chemicals to the extent that they cease to function. Thus, the use of quantitative structure–activity relationship (QSAR) models for organ/system-specific toxicity would be extremely helpful when predicting acute toxicity, but only a limited number of QSAR models for specific target organ and tissue effects are available. In this chapter, we give an overview of the regulatory requirements for acute systemic toxicity information in the European Union and the in silico models that have recently been developed to facilitate the implementation of alternatives to animal acute toxicity testing. In particular, we consider the software packages available for the assessment of acute systemic toxicity and organ- and system-specific toxicity, as well as the databases available for obtaining such data. Since comprehensive reviews of literature QSAR studies are available elsewhere [2–5], we focus here on some of the more recently published literature models for acute systemic toxicity. Some of these software and literature models are documented in the JRC's QSAR Model Database [6].
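Correlation figures such as the r2 values quoted above come from the Pearson correlation between paired toxicity values, usually after log transformation. A minimal sketch with invented paired data:

```python
import math

# Pearson correlation sketch for comparing two toxicity scales, e.g.
# log-transformed rodent LD50 versus human acute toxicity estimates.
# The paired values below are invented.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

rodent = [1.0, 2.0, 3.0, 4.0]   # hypothetical log LD50 values
human = [1.2, 1.9, 3.3, 3.8]    # hypothetical paired human values
r = pearson_r(rodent, human)
print(round(r * r, 3))  # r2, the statistic reported in correlation studies
```

Squaring r gives the r2 reported in such comparisons; values above 0.75 indicate that most of the variance in one scale is explained by the other.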
2 Regulatory Context in the European Union

The assessment of acute systemic toxicity is one component in the safety evaluation of substances and represents a standard information requirement within several pieces of EU chemicals legislation, including the Regulation concerning the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) [7], the Biocidal Products Regulation [8], and the Plant Protection Products Regulation [9]. However, in vivo acute systemic toxicity studies are prohibited for cosmetic substances and products [10]. Information on acute toxicity, where available, is also used for classification and labeling purposes [11]. In contrast, in preclinical drug development [12], these studies are no longer required to support first clinical trials in man. The information needed can be obtained from appropriately conducted dose-escalation studies or short-duration dose-ranging studies that define a maximum tolerated dose in the general toxicity test species [13, 14]. Further
Table 1 In vivo methods currently available for acute systemic toxicity

Exposure route | OECD test guideline | EU test method
Oral | TG 420: Fixed dose procedure [16]; TG 423: Acute toxic class method [17]; TG 425: Up-and-down procedure [18] | B.1 bis [19]; B.1 tris [19]
Dermal | TG 402 [20] | B.3 [19]
Inhalation | TG 403 [21]; TG 436: Acute toxic class method [22]; TG 433: Fixed concentration procedure [23] | B.2 [24]; B.52 [24]
information on the regulatory requirements in the EU is given in Prieto et al. [15]. The standardized and internationally accepted in vivo test methods are summarized in Table 1. The endpoint measured in the majority of the standard assays is animal morbidity or death. Evident signs of toxicity (i.e., clear signs of toxicity indicating that exposure to the next highest concentration would cause severe toxicity in most animals within the observation period) are used only in the oral fixed dose procedure (FDP), which causes less suffering and is therefore more humane. The European Chemicals Agency (ECHA) guidance on standard information requirements for acute oral toxicity includes a weight-of-evidence (WoE) adaptation that allows, in certain cases, the use of an oral repeated-dose toxicity study together with other pieces of information, such as nonanimal or non-testing data, instead of in vivo testing. This adaptation primarily applies to low-toxicity substances (i.e., LD50 > 2000 mg/kg) [25].
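The LD50 thresholds used in classification and in the WoE adaptation above (e.g., LD50 > 2000 mg/kg) map directly onto the GHS acute oral toxicity cut-offs, which can be expressed as a small lookup:

```python
# GHS acute oral toxicity category from an LD50 (mg/kg body weight),
# using the standard GHS cut-offs (5, 50, 300, 2000, 5000).
# Substances above 5000 mg/kg are not classified.
GHS_ACUTE_ORAL = [(5, "Category 1"), (50, "Category 2"),
                  (300, "Category 3"), (2000, "Category 4"),
                  (5000, "Category 5")]

def ghs_acute_oral_category(ld50):
    for cutoff, category in GHS_ACUTE_ORAL:
        if ld50 <= cutoff:
            return category
    return "Not classified"

print(ghs_acute_oral_category(250))    # -> Category 3
print(ghs_acute_oral_category(2500))   # -> Category 5 (low toxicity)
print(ghs_acute_oral_category(12000))  # -> Not classified
```

Note that adoption of Category 5 versus "not classified" for the 2000–5000 mg/kg band varies by jurisdiction; the code follows the GHS scheme itself.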
3 Software for Predicting Acute Systemic Toxicity

Several software tools capable of predicting endpoints related to acute systemic toxicity are available, as reviewed previously [26]. An updated list is given in Table 2, and some updates on the programs are described below. Among the commercial software programs covering a broad spectrum of systemic toxicological effects is ACD/Percepta, which is developed and marketed by Advanced Chemistry Development, Inc. (http://www.acdlabs.com/) and is built using experimental data for more than 100,000 compounds extracted from the Registry of Toxic Effects of Chemical Substances (RTECS) and the former European Chemical Substances Information System (ESIS)
Table 2 Software tools for acute systemic toxicity prediction

ACD/Percepta (Advanced Chemistry Development, Inc.) – Commercial
- Modeled endpoint: LD50 values for two rodent species (mice/rats) after different administration routes (oral/intravenous)
- Estimates qualitative OECD hazard categories
- Identifies hazardous substructures potentially responsible for the acute toxicity

ADMET Predictor (Simulations Plus Inc.) – Commercial
- Modeled endpoint: acute rat toxicity (pLD50)
- Training set: 5718 structures
- Data come from two sources, the highly overlapping RTECS data set and the ChemIDplus database

admetSAR (Laboratory of Molecular Modeling and Design, East China University of Science and Technology) – Freely available
- Modeled endpoint: acute rat toxicity (LD50)
- The model is based on 10,207 molecules with LD50 (mg/L)
- Algorithm: graph convolutional neural network

CASE Ultra (MultiCASE Inc.) – Commercial
- Modeled endpoints: rat acute toxicity (LD50) and maximum tolerated dose (rat and mouse)
- Models are based on FDA data
- Statistical models and expert rules applied for the prediction

Leadscope (Leadscope) – Commercial
- Modeled endpoint: GHS categories (1–5 and not classified)
- Dataset: over 15,000 chemicals from a number of sources, including RTECS, ECHA, the EU Joint Research Centre's AcutoxBase, the National Library of Medicine's Hazardous Substances Data Bank, OECD (eChemPortal), PAI (NICEATM), and TEST (NLM ChemIDplus)
- Expert rule-based and statistical methodologies applied in the models' development

GUSAR (geneXplain GmbH) – Commercial
- Modeled endpoint: acute rat/mouse toxicity (LD50) for four types of administration (intravenous, subcutaneous, intraperitoneal, oral)
- Training set: data from the SYMYX MDL toxicity database, RTECS, and ChemIDplus

ProTox-II (Charité University of Medicine, Institute for Physiology, Structural Bioinformatics Group) – Freely available
- Modeled endpoint: acute toxicity in rodents (LD50); GHS category
- Prediction based on fingerprint similarity and the presence of toxic fragments
- Dataset: 38,515 compounds from the in-house developed database SuperToxic

TerraQSAR (TerraBase) – Commercial
- Modeled endpoint: LD50 values (mouse/rat, oral/intravenous)
- Based on a data set of measured values for ~10,000/6000 organic compounds
- Algorithm: proprietary probabilistic neural network
- Data taken both from noncommercial sources, such as the US National Institutes of Health's database, and from commercial sources, such as TerraBase Inc.'s TerraTox-Explorer database

T.E.S.T. (US EPA) – Freely available
- Modeled endpoint: LD50 (oral, rat)
- Consensus of five QSAR models, based on different methodologies
- Based on data for 7420 chemicals from the ChemIDplus database

TIMES (Laboratory of Mathematical Chemistry, University "Prof. Dr. Asen Zlatarov") – Commercial
- Modeled endpoint: LD50 (rat, mouse)
- Training set of 1814 compounds
- Types of toxicity defined and predicted: basal (minimum) toxicity, excess toxicity, invariable toxicity, bioavailability-dependent toxicity
- Models for chemicals with basal toxicity or excess invariable toxicity do not include explanatory variables; linear regression models are developed for chemicals with bioavailability-dependent toxicity

TOPKAT (Biovia Discovery Studio) – Commercial
- Modeled endpoint: rat oral LD50
- Algorithm: partial least squares regression
- Training set: 3597 experimental rat oral LD50 values from the open literature, selected after critical review
- Most of the data extracted from various editions of the RTECS
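T.E.S.T.'s consensus model, listed in Table 2, averages the predictions of its individual QSAR models. A minimal sketch of consensus averaging in log10 units, which is equivalent to a geometric mean in mg/kg; whether a given tool averages in log or linear units is an assumption here, not a documented detail:

```python
import math

# Consensus prediction as the mean of individual model outputs in
# log10 units (the geometric mean of the mg/kg values). Averaging in
# log space is the usual practice for doses spanning orders of
# magnitude. Per-model predictions below are invented.
def consensus_ld50(predictions_mg_per_kg):
    logs = [math.log10(p) for p in predictions_mg_per_kg]
    return 10.0 ** (sum(logs) / len(logs))

models = [100.0, 1000.0, 316.0]  # invented per-model LD50 predictions
print(round(consensus_ld50(models), 1))
```

A simple arithmetic mean of [100, 1000] would give 550 mg/kg, whereas the log-space consensus gives about 316 mg/kg, which is less dominated by the largest prediction.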
databases. The Acute Toxicity Module provides three different estimates of acute toxicity:
- LD50: predictions of LD50 values for rats and mice for various routes of administration. Prior to modeling, the original experimental data were converted to logarithmic form (pLD50) in order to maintain a linear relationship with the descriptors used; the final predictions returned to the user are converted back to LD50 values (mg/kg). The predictive model for pLD50 was derived using the GALAS (Global, Adjusted Locally According to Similarity) modeling methodology.
- Hazards: a knowledge-based expert system that identifies and visualizes hazardous structural fragments potentially responsible for the toxic effect.
- Categories: classification of compounds into one of the five Globally Harmonized System (GHS) categories for acute oral toxicity. The predictions are provided as a list of probabilities that the compound's LD50 (rat, oral route) will exceed the cut-off values separating the different categories. The module also presents the five structures from the training set most similar to the test compound, together with their experimental LD50 values.
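The logarithmic transformation described for the LD50 module can be sketched as follows. The exact convention used by ACD/Percepta is not published here; the code assumes the common definition pLD50 = -log10(LD50 in mol/kg), which requires the compound's molecular weight:

```python
import math

# Assumed convention: pLD50 = -log10(LD50 in mol/kg). Converting mg/kg
# to mol/kg requires the molecular weight (MW, g/mol).
def to_pld50(ld50_mg_per_kg, mw):
    mol_per_kg = ld50_mg_per_kg / 1000.0 / mw
    return -math.log10(mol_per_kg)

def from_pld50(pld50, mw):
    return (10.0 ** -pld50) * mw * 1000.0

ld50 = 192.0   # hypothetical LD50 in mg/kg
mw = 151.16    # e.g., the molecular weight of acetaminophen, g/mol
p = to_pld50(ld50, mw)
print(round(from_pld50(p, mw), 6))  # -> 192.0 (round trip)
```

Working in molar units makes compounds of very different molecular weights comparable on the same log scale, which is why the transformation is applied before regression.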
CASE Ultra (http://www.multicase.com/) is the further development of the MCASE methodology and belongs to the family of fragment-based QSAR expert systems [27]. The CASE Ultra model mainly consists of a set of "positive alerts" (biophores) and "deactivating alerts" (biophobes), i.e., fragments identified as statistically significant for increasing or decreasing the activity. The improvement of CASE Ultra over its predecessor is that the identified alerts are no longer limited to linear paths of limited size or limited branching pattern. In addition, the training sets can be larger than 8000 molecules. The applicability domains of individual toxicity alerts within the models quantitatively define the structural environment necessary for the alerts to be considered valid. The statistically based program TOPKAT uses multiple QSARs on small and homogeneous sets of data. It is now part of the QSAR, ADMET, and Predictive Toxicology module within the Biovia Discovery Studio platform (https://www.3ds.com). The rat oral LD50 module in TOPKAT comprises 19 regression analyses developed using experimental values of approximately 4000 chemicals from RTECS, including pesticides and industrial chemicals. The rat oral LD50 module in MCASE is built using a training set of 3597 experimental rat oral LD50 values from the open literature. Most of the data have been extracted from various editions of RTECS. It should be noted
that RTECS lists the most toxic value when multiple values exist. For this reason, this model will tend to overestimate the toxicity of a query structure (JRC QSAR Model Database, BIOVIA toxicity prediction model, rat oral LD50). Tunkel and coworkers [28] compared the performance of the TOPKAT and MCASE rat LD50 modules against an external test set of 73 organic compounds covering 32 chemical categories retrieved from submissions to the EPA High Production Volume (HPV) Challenge Program (http://www.epa.gov/chemrtk/). The predictive accuracy of each software tool was assessed by applying the EPA's New Chemical classification approach (http://www.epa.gov/oppt/newchems/index.htm), from the low-concern class (>2000 mg/kg) to the high-concern class ( 2000 mg/kg, those that should not be classified as Dangerous Goods (LD50 > 300 mg/kg), and could assist in identifying a starting dose for in vivo acute oral toxicity testing. In view of the significant role of read-across approaches in the regulatory context, a recent study explored the application of the GenRA (Generalized Read-Across) approach to predict rat oral acute LD50 toxicity [60]. GenRA encodes structural and/or bioactivity information in the form of fingerprints and applies a similarity index to derive a weighted average over "structurally similar" analogues [61]. Global performance of the predictions reached an r2 of 0.61; when explored on local neighborhoods (local performance), the performance improved in some cases, with r2 reaching up to 0.9. Overall, the study demonstrates the usefulness of the GenRA approach for predicting acute LD50 toxicity.

5.2 Non-apical Toxicity Prediction as an Alternative to Acute Systemic Toxicity Testing
The feasibility of using in vitro cytotoxicity data for the prediction of in vivo acute toxicity has been investigated in a number of research programs [62–64]. A correlation of over 70% has been established between in vitro basal cytotoxicity and rodent LD50 values [65]. The 3T3 Neutral Red Uptake (3T3 NRU) assay is one of the better-known and standardized in vitro methods for basal
cytotoxicity [66]. The applicability of the 3T3 NRU cytotoxicity assay for the identification of substances with an LD50 > 2000 mg/kg has been evaluated by the EURL ECVAM Scientific Advisory Committee (ESAC). It was recommended, however, that the results should always be used in combination with other information sources. For instance, the assay is recommended as a component of an Integrated Approach to Testing and Assessment (IATA) [67]. A reason for the absence of a clear relationship between basal cytotoxicity and in vivo acute toxicity could be that specific organ toxicity is the most sensitive parameter for acute toxicity. Commonly affected systems and organs include the nervous, cardiovascular, and immune systems, the kidneys, liver, lungs, and blood. The IATA proposed for acute systemic toxicity combines complementary approaches (in vitro, ex vivo, in silico, in chemico) that address functional mechanistic endpoints tied to adverse outcomes of regulatory concern [68]. The application of the 3T3 NRU cytotoxicity assay is proposed as part of the WoE assessment of acute oral toxicity in ECHA's Guidance on Information Requirements and Chemical Safety Assessment [25]. The study of Prieto and coworkers provides a valuable analysis of the mechanisms involved in the manifestation of acute toxicity. It also proposes a battery of tests to evaluate acute oral toxicity, including cytotoxicity (e.g., based on the 3T3 NRU assay) and organ-specific toxicity [66]. Thus, in silico models predicting organ-specific toxicity may be valuable to support the mechanistic assessment of acute systemic toxicity. As summarized in Table 4, there is a limited number of literature models for predicting toxicities at tissue and organ levels.
Various software platforms predict toxicities at the tissue/organ level, including Derek Nexus (Lhasa Limited), ACD/Percepta (Advanced Chemistry Development, Inc.), ADMET Predictor (Simulations Plus Inc.), Leadscope (Leadscope), and Lazar (In Silico Toxicology GmbH) (see also Chapter 17). These are based on expert systems, regression/categorical QSAR models, and read-across. In the case of ligand–protein interactions, molecular modeling approaches are mainly used. Among the commonly used software tools, Derek Nexus provides over 500 structural alerts for a range of organ- and system-specific toxicities, as well as other miscellaneous endpoints. Models for predicting liver toxicity are further covered in Chapter 14 (Ellison et al.). Some of these models are based on the concept of reactivity-based toxicity: the covalent binding of reactive electrophiles to cellular targets (i.e., nucleophilic sites of macromolecules) has the potential to initiate a chain of biological effects (e.g., depletion of glutathione and protein thiols) resulting in specific organ and system toxicities. Among the few comprehensive studies covering a range of organ toxicities and relying on a broad structural space in the training set are the models published by Matthews et al.
Table 4 Overview of published organ- and system-specific toxicity QSAR models

Urinary tract toxicity [69]. Endpoint: six types of urinary tract injury (acute renal disorders, nephropathies, bladder disorders, kidney function tests, blood in urine, urolithiases). Method/software: MC4PC, BioEpisteme, MDL-QSAR, and Leadscope Predictive Data Miner. Statistics: best consensus models: sensitivity 56%, specificity 78%. Training set: 1600; test set: n/r. Class(es) studied: multiple. Note: consensus models based on two programs increased sensitivity to 56%.

Nephrotoxicity [70]. Endpoint: tubular necrosis (TN), interstitial nephritis (IN), and tubulointerstitial nephritis (TIN). Method: SVM. Statistics: best parent-compound-based TIN model: CA = 0.80 and MCC = 0.32; best metabolite-based models: CA = 0.84, 0.85, and 0.83 and MCC = 0.69, 0.69, and 0.62 for the TN, IN, and TIN models, respectively. Training set: 487 parent compounds/624 metabolites; test set: 338 parent compounds/156 metabolites. Significant parameters: topological fingerprints implemented in PaDEL-Descriptor. Class(es) studied: multiple. Note: the metabolite sets consist of the major urinary metabolites of the pharmaceuticals in the parent compound sets.

Nephrotoxicity [71]. Endpoint: nephrotoxicity, kidney necrosis, kidney relative weight gain, nephron injury. Method/software: recursive partitioning algorithm (ChemTree™ software). Statistics: sensitivity and specificity above 90%. Training set: 172–847 depending on the model; test set: 42–154 depending on the model. Significant parameters: two-dimensional structural descriptors. Class(es) studied: multiple. Note: the models are available in the MetaDrug™/ToxHunter™ Systems Pharmacology suite.

Nephrotoxicity, hematotoxicity [72]. Endpoint: renal tubular necrosis, hemolytic anemia. Method: SAR analysis based on structural alerts. Statistics: n/r. Training set: 16; test set: n/r. Class(es) studied: derivatives of 1,2- and 1,4-naphthoquinone. Note: SAR analysis outlining important structural alerts.

Nephrotoxicity [73]. Endpoint: cGST and MGST1 enzyme activity. Method: LR. Statistics: r² (specific activities for MGST1-catalyzed reactions) = 0.943. Training set: 9; test set: n/r. Significant parameters: ELUMO. Class(es) studied: haloalkenes. Note: the relation between the nephrotoxicity of the haloalkenes and ELUMO reflects their propensity for conjugation reactions catalyzed by glutathione transferase enzymes.

Acute and delayed neurotoxicity [74]. Endpoint: inhibitory activity and pair-wise selectivity toward serine esterases, including acetylcholinesterase and neuropathy target esterase. Method: MLR (Hansch analysis), MFTA. Statistics: Hansch analysis best models: r = 0.699–0.993; MFTA best models: r² = 0.57–0.96, q² = 0.47–0.91. Training set: Hansch analysis: 7/9; MFTA: 18–52; test set: n/r. Significant parameters: Hansch analysis: hydrophobicity of substituent R in the general formula (RO)2P(O)X; MFTA: effective charge Q on an atom, effective van der Waals radius re, group lipophilicity Lg. Class(es) studied: organophosphorus compounds.

Acute and delayed neurotoxicity [75]. Endpoint: inhibitory activity and pair-wise selectivity toward serine esterases, including acetylcholinesterase and neuropathy target esterase. Method: FSMLR, BPNN, MFTA, CoMSIA. Statistics: FSMLR models: Q²DCV range 0.180–0.778; BPNN models: Q²DCV range 0.601–0.800; MFTA models: r² range 0.62–0.96; CoMSIA models: r² range 0.81–0.93, q² range 0.62–0.80. Training set: 30–58; test set: n/r. Significant parameters: FSMLR: fragmental descriptors of up to eight non-hydrogen atoms, logP; MFTA: effective charge Q on an atom, effective van der Waals radius re, group lipophilicity Lg. Class(es) studied: O-phosphorylated oximes.

Neurotoxicity [76]. Endpoint: seven endpoints related to neurotoxicity, including effects on vesicular and membrane transporter-mediated uptake of dopamine, glutamate, and gamma-aminobutyric acid. Method: PCA, PLS. Statistics: two significant principal components (t1 and t2) explaining 51% of the variation in the in vitro screening data (t1 = 37%, t2 = 14%); PLS model (response: formation of chemically reactive oxygen-containing species): Q² = 0.63. Training set: 20; test set: n/r. Significant parameters: 67 chemical descriptors, including partial atomic charges and parameters describing size and substitution pattern; significant parameters in the PLS model: molecular size, expressed as the number of substituents and the total surface area. Class(es) studied: non-dioxin-like polychlorinated biphenyls. Note: PCA was used to derive general toxicity profiles from the in vitro screening data.

Neurotoxicity [77]. Endpoint: EC30 (rat), EC30 (mouse). Method: TOPS-MODE approach. Statistics: model (rat): r = 0.902, s = 0.252, F(6,38) = 27.5, rcv = 0.902, RMSECV = 0.273; model (mice): r = 0.901, s = 0.250, F(7,38) = 23.4, rcv = 0.881, RMSECV = 0.264. Training set: 45/46; test set: n/r. Significant parameters: spectral moments, multiplication of moments, an indicator of the hydrogen bond capacity of groups, and experimental values of BP. Class(es) studied: non-congeneric series of solvents. Note: structural fragments responsible for differences in neurotoxicity are analyzed.

Neurotoxicity [78]. Endpoint: acetylcholinesterase inhibition, IC50; in vitro. Method/software: pharmacophore modeling using the Catalyst software. Statistics: r > 0.98. Training set: 8; test set: n/r. Class(es) studied: organophosphorus compounds. Note: the pharmacophore models include at least one hydrogen bond acceptor site and 2–3 hydrophobic sites.

Lung toxicity [79]. Endpoint: IL-8 gene expression; in vitro. Method/software: MLR (ADMEWORKS software). Statistics: statistics of the MLR model not reported; 75% accuracy in external validation (classification: upregulation class/downregulation class). Training set: 54; test set: 8. Significant parameters: WTPT3, MOLC4, V5CH, SYMM2, S3C, CRB_LEADL, and OPERA RULEI. Class(es) studied: multiple. Note: the 54 chemicals are classified into two groups based on IL-8 gene expression, namely an upregulation class and a downregulation class.

Lung toxicity [80]. Endpoint: cellular membrane damage of immortalized human lung epithelial cells (via lactate dehydrogenase release); in vitro. Method: MLR, PCA, LDA classification. Statistics: models for TiO2: r² = 0.70–0.77; models for ZnO: r² = 0.19–0.49. Training set: 24 measurements from five different TiO2 features and 18 measurements from six different ZnO features; test set: n/r. Significant parameters: TiO2: size in ultrapure water, concentration in ultrapure water, and zeta potential in ultrapure water; ZnO: size in ultrapure water, size in CCM, and concentration. Class(es) studied: TiO2 and ZnO nanoparticles. Note: the goodness of the classification was measured by the resubstitution error.

Lung toxicity [81]. Endpoint: activity of phospholipases A2 (PLA2), C (PLC), and D (PLD); in vitro. Method: LR. Statistics: r = 0.97, 0.98, and 0.99 for PLA2, PLC, and PLD, respectively. Training set: 4; test set: n/r. Significant parameters: EHOMO, ELUMO, and the net charge on the H49 atom. Class(es) studied: lipid ozonation products.

Immunotoxicity [82]. Endpoint: each endpoint corresponds to one of 1418 assays, 36 molecular and cellular targets, and 46 standard type measures related to immunotoxicity. Method: multi-target QSAR (mt-QSAR), TOPS-MODE approach. Statistics: accuracy = 91.76%. Training set: 6747; test set: 1156. Class(es) studied: multiple.

Immunotoxicity [83]. Endpoint: binding to the aryl hydrocarbon receptor (AhR). Method: CoMFA. Statistics: best model: r² = 0.858, q² = 0.684; r² (test set) = 0.851. Training set: 209; test set: n/r. Significant parameters: steric and electrostatic fields. Class(es) studied: polychlorinated dibenzodioxins, polychlorinated dibenzofurans, and polychlorinated biphenyls.

Immunotoxicity [84]. Endpoint: log ED50. Method: MLR. Statistics: r = 0.964, rcv = 0.884. Training set: 59; test set: 19. Significant parameters: a parameter of the electrostatic equilibrium on the molecular surface. Class(es) studied: polychlorinated diphenyl ethers.

Myelotoxicity [85]. Endpoint: pIC50 for human CFC-GEMM cells (colony-forming cell granulocyte, erythroid, megakaryocyte, macrophage) and GM-CFC (granulocyte–monocyte colony-forming cell). Method/software: PCA, PLS; Pentacle software used to calculate GRIND toxicophore-based descriptors and to perform PLS-DA. Statistics: best PLS model: r² = 0.79, q² = 0.72, r² (test set) = 0.67, RMSEP = 0.69; PLS-DA: accuracy (test set prediction) = 86%. Training set: 21; test set: 14. Significant parameters: Volsurf descriptors, 2D structural and electrotopological E-state descriptors. Class(es) studied: multiple. Note: own experimental dataset generated.

Cardiotoxicity [86]. Endpoint: hERG channel blocking. Method: a number of machine learning models; the neural network model demonstrated the best performance. Statistics: accuracy of 90.1% (tenfold cross-validation); accuracy of 80.0% on the external set of ten drugs. Training set: 2130 compounds; test set: ten commercial drugs. Significant parameters: Dragon physicochemical descriptors and fingerprints; extended connectivity fingerprints. Class(es) studied: multiple.

Abbreviations: AE, adverse effect; BP, boiling point; BPNN, backpropagation neural network; CA, classification accuracy; cGST, cytosolic glutathione transferases; ClogP, calculated partition coefficient; CoMFA, Comparative Molecular Field Analysis; CoMSIA, Comparative Molecular Similarity Index Analysis; CRB_LEADL, count of rotatable bonds; ELUMO, energy of the lowest unoccupied molecular orbital; FSMLR, Fast Stepwise Multiple Linear Regression; hERG, human Ether-à-go-go-Related Gene; IL-8, interleukin-8; LDA, linear discriminant analysis; LR, linear regression; MCC, Matthews correlation coefficient; MFTA, Molecular Field Topology Analysis; MGST1, microsomal glutathione transferase 1; MLR, multiple linear regression; MOLC4, path-2 molecular connectivity; n/r, not reported; OPERA RULEI, the rule based on Lipinski's rule; PCA, principal component analysis; PLS, partial least squares; q, cross-validation parameter; Q²DCV, determination coefficient for double cross-validation; r, correlation coefficient; RMSECV, root mean square error of cross-validation; RMSEP, root mean square error of prediction; S3C, third-order cluster MC simple; SVM, support vector machine; SYMM2, geometrical symmetry; TOPS-MODE, topological sub-structural molecular design; V5CH, fifth-order chain MC valence; WTPT3, sum of atom indexes for all heteroatoms
Ivanka Tsakovska et al.
[69]. These models were developed for urinary tract toxicities of drugs. For each organ, a number of toxicity endpoints were considered in the QSAR analysis. The investigation utilized four software programs: MC4PC (versions 1.5 and 1.7), BioEpisteme (version 2.0), MDL-QSAR (version 2.2), and Leadscope Predictive Data Miner (LPDM, version 2.4). The four QSAR programs were demonstrated to be complementary, and enhanced performance was obtained by combining predictions from two programs. The best QSAR models exhibited an overall average of 92% coverage, 87% specificity, and 39% sensitivity. These results support the view that a consensus prediction strategy provides a means of optimizing predictive ability. In the work of Myshkin et al. [71], a detailed ontology of toxic pathologies for 19 organs was created from the literature in a consistent way to capture precise organ toxicity associations of drugs, industrial, environmental, and other compounds. Models for nephrotoxicity and for more specific endpoints related to these organ injuries were developed using a recursive partitioning algorithm. The models performed better at predicting distinct organ toxicity subcategories than general organ toxicity, reflecting the well-known tendency of QSAR models to have better predictive performance for more specific endpoints. In a more recent study, Lee et al. [70] presented QSAR models for three common patterns of drug-induced kidney injury, i.e., tubular necrosis, interstitial nephritis, and tubulointerstitial nephritis. Binary classification models of nephrotoxin versus non-nephrotoxin with eight fingerprint descriptors were developed based on data for heterogeneous pharmacological compounds. Two types of data sets were used for construction of the training set, i.e., parent compounds of pharmaceuticals (251 nephrotoxins and 387 non-nephrotoxins) and their major urinary metabolites (307 nephrotoxins and 233 non-nephrotoxins).
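The performance figures quoted for these models (sensitivity, specificity, CA, MCC) all derive from the binary confusion matrix, and the consensus gain obtained by combining two programs amounts to pooling their positive calls. A minimal, self-contained sketch of both ideas follows; the two sets of program predictions are invented for illustration:

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, classification accuracy (CA) and
    Matthews correlation coefficient (MCC) from paired 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    ca = (tp + tn) / len(y_true)
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sens, spec, ca, mcc

# Invented calls from two QSAR programs on ten compounds (1 = toxic)
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
prog_a = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
prog_b = [0, 1, 0, 1, 0, 0, 0, 0, 1, 0]

# Consensus rule: flag a compound if either program flags it
# (raises sensitivity, usually at some cost in specificity)
consensus = [max(a, b) for a, b in zip(prog_a, prog_b)]

sens_a, *_ = binary_metrics(y_true, prog_a)            # 0.50
sens_c, spec_c, ca_c, mcc_c = binary_metrics(y_true, consensus)  # sens 0.75
```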
Thus, the study reflects the fact that the nephrotoxicity of a pharmacological compound is induced by the parent compound as well as by its metabolites. The results of tenfold cross-validation and external validation procedures showed a high accuracy of the models (better than 83% for the external validation sets). For kidney toxicity, local QSARs have been developed for specific chemical classes, such as the haloalkenes. These high-volume chemicals, used in industrial, synthetic, and pharmaceutical applications, are common environmental pollutants. Many haloalkenes are known to be nephrotoxic in rodents after bioactivation via the cysteine conjugate beta-lyase pathway, which is triggered by the formation of hepatic glutathione S-conjugates, a reaction catalyzed by cytosolic and microsomal glutathione transferases [87]. The study by Jolivette and Anders [73] relates the nephrotoxicity of nine haloalkenes to their lowest unoccupied molecular orbital
energies, ELUMO, reflecting their propensity for conjugation reactions catalyzed by glutathione transferase enzymes. Very few QSAR studies of neurotoxicity have been published. An example is the work of Estrada et al. [77]. Their models are based on the TOPS-MODE approach, which provides a means of estimating the contributions of a series of structural fragments to neurotoxicity in rats and mice. Organophosphorus (OP) compounds are well-known neurotoxic agents. These chemicals are potent inhibitors of serine esterases, the most critical of which is the widely distributed nervous system enzyme acetylcholinesterase (AChE). This well-established mechanism of action underlies the usefulness of molecular modeling approaches such as 3D QSAR and pharmacophore modeling for predicting the inhibition potency of OPs. Several published models are based on these approaches [74, 75, 78, 83]. Among the commonly used software tools, Derek Nexus estimates neurotoxicity using a number of structural alerts: gamma-diketone or precursor, acrylamide or glycidamide, nitroimidazole, carbon disulfide or precursor, pyrethroid, 1-methyl-1,2,3,6-tetrahydropyridine, lead or lead compound, and organophosphorus ester. Few studies have been published in relation to other organs/systems. Immunotoxicity can refer to immunosuppression in humans (caused, for example, by benzene and halogenated aromatic hydrocarbons), autoimmune disease (for example, the pesticide dieldrin induces an autoimmune response against red blood cells, resulting in hemolytic anemia), and allergenicity (chemicals that stimulate the immune system can cause allergies or hypersensitivity reactions such as anaphylactic shock). Thus, immunotoxicity refers to a wide variety of biological effects, many of which involve complex biochemical networks. Tenorio-Borroto et al. [82] have trained and validated a multi-target QSAR model for high-throughput screening of drug immunotoxicity using the TOPS-MODE approach. Yuan et al.
[83] have studied the key molecular features of polychlorinated dibenzodioxins, polychlorinated dibenzofurans, and polychlorinated biphenyls that determine binding affinity to the aryl hydrocarbon receptor (AhR), an intracellular receptor that has been correlated with immunotoxicity, thymic atrophy, weight loss, and acute lethality. CoMFA (Comparative Molecular Field Analysis) was applied to generate 3D QSAR models. In a study by Hui-Ying et al. [84], linear relationships between immunotoxicity values (log ED50) and other biological activities of polychlorinated diphenyl ethers and their structural descriptors were established by multiple linear regression. It was shown that the structural descriptors derived from molecular electrostatic potentials, together with the number of substituting chlorine atoms on the two phenyl rings, can be used to express the
quantitative structure–property relationships of polychlorinated diphenyl ethers. The evaluation of hematotoxicity is an important step in early drug design; in particular, hematotoxicity is a common dose-limiting toxicity associated with anticancer drugs. The first attempt to build in silico models to predict the myelosuppressive activity of drugs from their chemical structure was made by Crivori et al. [85]. Two sets of potentially relevant descriptors for modeling myelotoxicity (i.e., 3D Volsurf and 2D structural and electrotopological E-state descriptors) were selected, and PCA (Principal Component Analysis) was carried out on the entire data set (38 drugs). The first two principal components discriminated the most from the least myelotoxic compounds with a total accuracy of 95%. In addition, a highly predictive PLS (Partial Least Squares) model was developed by correlating a selected subset of in vitro hematotoxicity data with Volsurf descriptors. After variable selection, the PLS analysis resulted in a one-latent-variable model with r² of 0.79 and q² of 0.72. In contrast to other organ-specific effects, the in silico modeling of cardiotoxicity has been a rather productive field. This is because drug cardiotoxicity is one of the main reasons for drug-related fatalities and subsequent drug withdrawals. In recent years, the hERG channel has been extensively investigated in the field of cardiotoxicity prediction, as it has been found to play a major role in both cardiac electrophysiology and drug safety. Because hERG assays and QT animal studies are expensive and time-consuming, numerous in silico models have been developed for use in early drug discovery. The earliest attempts to identify whether a molecule is a hERG blocker used a set of simple rules based on structural and functional features, but these rules are not always reliable predictors for identifying hERG blockers.
To give more accurate predictions of hERG blockade, a wide range of QSAR models have been developed based on a variety of statistical techniques and machine learning methods, including multiple linear regression, partial least squares, the k-nearest-neighbor algorithm, linear discriminant analysis, artificial neural networks, support vector machines, self-organizing maps, recursive partitioning, random forests, genetic algorithms, and naïve Bayesian classification. Most of these QSAR models are classifiers, and only a few regression models have been reported. Recently, a large hERG toxicity dataset of 2130 compounds with measured IC50 values (compounds with IC50 < 10 μM classified as toxic and the other compounds classified as nontoxic) was developed by Lee and coworkers [86]. A number of machine learning models (linear regression, ridge regression, logistic regression, naïve Bayes, neural network, and random forest) were developed, with the neural network model demonstrating the best
performance and the potential to support virtual screening of drug candidates that do not cause cardiotoxicity. Pharmacophore modeling has also been employed to develop ligand-based prediction models of hERG channel blockers. Some studies combine pharmacophore modeling with 3D QSAR to develop predictive models of hERG channel blockers [88]. Recently, a cryo-EM structure of the open channel state of hERG was reported [89], opening new possibilities for structure-based models applying molecular docking, molecular dynamics simulations, and free energy calculations to explore hERG–blocker interactions [90]. Reviews by Wang et al., Jing et al., Villoutreix et al., and Garrido et al. [91–94] summarize the advances and challenges in computational studies of hERG channel blockade. It is expected that the development of in silico models for hERG-related cardiotoxicity prediction will remain active in the coming years, with the aim of designing drugs without undesirable side effects.
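Most of the hERG classifiers surveyed above reduce to two steps: thresholding measured IC50 values into blocker/non-blocker labels (10 μM in the Lee et al. dataset [86]) and learning that label from structural fingerprints. A minimal k-nearest-neighbor sketch over bit-vector fingerprints with Tanimoto similarity follows; the fingerprints and IC50 values are invented, and a real workflow would derive the fingerprints with a cheminformatics toolkit:

```python
def label_herg(ic50_uM: float, threshold_uM: float = 10.0) -> int:
    """1 = hERG blocker (toxic) if the measured IC50 is below the threshold."""
    return 1 if ic50_uM < threshold_uM else 0

def tanimoto(a: set, b: set) -> float:
    """Tanimoto similarity of two fingerprints given as sets of on-bits."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def knn_predict(query: set, train, k: int = 3) -> int:
    """Majority label among the k training fingerprints most similar to the query."""
    ranked = sorted(train, key=lambda fp_lbl: tanimoto(query, fp_lbl[0]), reverse=True)
    top = [lbl for _, lbl in ranked[:k]]
    return 1 if sum(top) * 2 > len(top) else 0

# Invented on-bit fingerprints with invented measured IC50 values (in μM)
data = [({1, 2, 3, 5}, 2.0), ({1, 2, 5, 8}, 4.5), ({1, 3, 5, 9}, 7.0),
        ({10, 11, 12}, 80.0), ({10, 12, 14}, 150.0), ({11, 13, 14}, 55.0)]
train = [(fp, label_herg(ic50)) for fp, ic50 in data]

pred = knn_predict({1, 2, 5, 9}, train)  # resembles the blocker cluster -> 1
```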
6 Conclusions

The modeling of acute systemic toxicity has largely focused on QSARs for predicting LD50 values and for categorizing chemicals according to ranges of LD50 values. For these purposes, which are potentially useful in the regulatory assessment of chemicals, the in silico models seem to perform as well as in vitro cytotoxicity methods. The developments in this field can be attributed to the availability of extensive LD50 datasets and a wide range of machine learning techniques. Many of these datasets, and software tools derived from them, are in the public domain. It is important to take into account the variability of the experimental values when assessing the performance of a developed in silico model. An analysis is useful here to define a margin of uncertainty when using LD50 values as dependent variables, in order to assess the performance of the in silico model. Consensus modeling approaches demonstrate the potential to overcome the limitations of individual models and to perform at least as well as the in vivo acute toxicity assay [56]. The emergence of mechanism-based toxicology (e.g., adverse outcome pathways; https://aopwiki.org/oecd_page) is a tremendous opportunity to improve current models with better biological knowledge. Indeed, the time of global (and scientifically dubious) QSARs predicting LD50 from chemical properties across the whole chemical space is probably coming to an end. Future models should target specific toxicity mechanisms on the basis of current biological knowledge. Historically, this was actually done implicitly by focusing model building on very limited chemical classes (supposedly acting via the same mechanism). According to this
approach, global LD50 models would be the sum of a multitude of accurate predictors, each dedicated to a well-defined mechanism of action. In this context, the use of biological (in vitro) descriptors in combination with traditional molecular descriptors provides a promising means of building local QSAARs for mechanistically based chemical classes. In general, the modeling of organ-specific and system-specific effects represents an underdeveloped field, ripe for future research but far from regulatory applications, which typically rely on the assessment of lethality. A notable exception concerns the modeling of receptors and ion channels implicated in specific organ pathologies, such as the hERG channel in relation to cardiotoxicity. The development of models for upstream (molecular and cellular) effects represents a more scientifically meaningful exercise, which also promises to unify the traditional regulatory distinction between acute and repeat-dose toxicity. In some areas, such as immunotoxicity, short-term progress seems unlikely. The complexity of such effects probably means that systems biology approaches will be more appropriate. In general, the development of models for organ-specific and system-specific effects will depend on a new generation of databases, such as the COSMOS database [95], which contain high-quality data structured and annotated according to meaningful chemical and biological ontologies.
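The QSAAR idea outlined above amounts to placing biological (in vitro) responses alongside conventional molecular descriptors in a single regression design matrix. A minimal least-squares sketch follows; all descriptor values, in vitro responses, and log(1/LD50) targets are synthetic and purely illustrative:

```python
import numpy as np

# Synthetic data: each row corresponds to one chemical.
logP = np.array([1.2, 2.5, 0.3, 3.1, 1.8, 2.0])    # chemical descriptor
mw   = np.array([150, 220, 110, 300, 180, 240.0])  # chemical descriptor
ic50 = np.array([1.9, 0.8, 2.6, 0.4, 1.2, 1.0])    # in vitro log IC50 (biological descriptor)
y    = np.array([2.1, 3.0, 1.5, 3.6, 2.5, 2.8])    # log(1/LD50), the modeled endpoint

# QSAAR design matrix: chemical + biological descriptors plus an intercept column
X = np.column_stack([logP, mw, ic50, np.ones_like(logP)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Fitted values and coefficient of determination for the training data
y_hat = X @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
```

The design choice worth noting is that the in vitro response enters the model on the same footing as the chemical descriptors, which is what distinguishes a QSAAR from a conventional QSAR.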
Acknowledgments

I.T. acknowledges the financial support by the Bulgarian Ministry of Education and Science under the National Research Programme "Healthy Foods for a Strong Bio-Economy and Quality of Life" approved by DCM # 577/17.08.2018.

References 1. Dearden JC, Hewitt M (2021) Prediction of human lethal doses and concentrations of MEIC chemicals from rodent LD50 values: an attempt to make some reparation. Altern Lab Anim 49(1–2):10–21. https://doi.org/10.1177/0261192921994754 2. Kleandrova V, Luan F, Speck-Planche A, Cordeiro M (2015) In silico assessment of the acute toxicity of chemicals: recent advances and new model for multitasking prediction of toxic effect. Mini Rev Med Chem 15:677–686. https://doi.org/10.2174/1389557515666150219143604
3. Devillers J, Devillers H (2009) Prediction of acute mammalian toxicity from QSARs and interspecies correlations. SAR QSAR Environ Res 20:467–500. https://doi.org/10.1080/10629360903278651 4. Lapenna S, Fuart-Gatnik M, Worth A (2010) Review of QSAR models and software tools for predicting acute and chronic systemic toxicity. EN. Publications Office of the European Union, Luxembourg. http://publications.jrc.ec.europa.eu/repository/ 5. Tsakovska I, Lessigiarska I, Netzeva T, Worth A (2008) A mini review of mammalian toxicity
(Q)SAR models. QSAR Combinatorial Sci 27:41–48 6. JRC (2020) JRC QSAR model database. European Commission, Joint Research Centre (JRC) [Dataset] PID: http://data.europa.eu/89h/e4ef8d13-d743-4524-a6eb80e18b58cba4 7. European Union (2006) Regulation (EC) No 1907/2006 of the European Parliament and the Council of 18 December 2006 concerning the registration, evaluation, authorisation and restriction of chemicals (REACH), establishing a European Chemicals Agency, amending directive 1999/45/EC and repealing council regulation (EEC) No 793/93 and commission regulation (EC) No 1488/94 as well as council directive 76/769/EEC and commission directives 91/155/EEC, 93/67/EEC, 93/105/EC and 2000/21/EC. Off J Eur Union L396:1–849 8. European Union (2012) Regulation (EU) no 528/2012 of the European Parliament and of the council of 22 May 2012 concerning the making available on the market and use of biocidal products. Off J Eur Union L 167:1–116 9. European Union (2009) Regulation (EC) no 1107/2009 of the European Parliament and of the council of 21 October 2009 concerning the placing of plant protection products on the market and repealing council directives 79/117/EEC and 91/414/EEC. Off J Eur Union L309:1–47 10. European Union (2009) Regulation (EC) no 1223/2009 of the European Parliament and the council of 30 November 2009 on cosmetic products. Off J Eur Union L342:59–209 11. European Union (2008) Regulation (EC) no 1272/2008 of the European Parliament and of the council of 16 December 2008 on classification, labelling and packaging of substances and mixtures, amending and repealing directives 67/548/EEC and 1999/45/EC, and amending regulation (EC) no 1907/2006. Off J Eur Union L353 12. ICH (2009) Requirements for registration of pharmaceuticals for human use. Guidance on nonclinical safety studies for the conduct of human clinical trials and marketing authorization for pharmaceuticals M3(R2).
Recommended for adoption at step 4 of the ICH process on June 11, 2009 13. Robinson S, Delongeas JL, Donald E et al (2008) A European pharmaceutical company initiative challenging the regulatory requirement for acute toxicity studies in pharmaceutical drug development. Regul Toxicol Pharmacol 50:345–352
14. Chapman K, Creton S, Kupferschmidt H et al (2010) The value of acute toxicity studies to support the clinical management of overdose and poisoning: a cross-discipline consensus. Regul Toxicol Pharmacol 58:354–359 15. Prieto P, Burton J, Graepel R et al (2014) EURL ECVAM strategy to replace, reduce and refine the use of animals in the assessment of acute mammalian systemic toxicity. EN. Publications Office of the European Union, Luxembourg. http://publications.jrc.ec.europa.eu/repository/ 16. OECD (2001) Guideline for testing of chemicals, 420, acute oral toxicity – fixed dose method. In: OECD guidelines for the testing of chemicals, section 4. OECD Publishing, Paris 17. OECD (2001) Guideline for testing of chemicals, 423, acute oral toxicity – acute toxic class method. In: OECD guidelines for the testing of chemicals, section 4. OECD Publishing, Paris 18. OECD (2001) Guideline for testing of chemicals, 425, acute oral toxicity – up-and-down procedure. In: OECD guidelines for the testing of chemicals, section 4. OECD Publishing, Paris 19. European Union (2008) Regulation (EC) no 440/2008 of 30 May 2008 laying down test methods pursuant to regulation (EC) no 1907/2006 of the European Parliament and of the council on the registration, evaluation, authorisation and restriction of chemicals (REACH). Off J Eur Union L142 20. OECD (1987) Guideline for testing of chemicals, 402, acute dermal toxicity. In: OECD guidelines for the testing of chemicals, section 4. OECD Publishing, Paris 21. OECD (2009) Guideline for testing of chemicals, 403, acute inhalation toxicity. In: OECD guidelines for the testing of chemicals, section 4. OECD Publishing, Paris 22. OECD (2009) Guideline for testing of chemicals, 436, acute inhalation toxicity – acute toxic class method. In: OECD guidelines for the testing of chemicals, section 4. OECD Publishing, Paris 23.
OECD (2018) Guideline for testing of chemicals, 433: acute inhalation toxicity – fixed concentration procedure, OECD guidelines for the testing of chemicals, section 4. OECD Publishing, Paris 24. European Union (2014) Commission regulation (EU) no 260/2014 of 24 January 2014 amending, for the purpose of its adaptation to technical progress, regulation (EC) no 440/2008 laying down test methods pursuant
to regulation (EC) no 1907/2006 of the European Parliament and of the council on the registration, evaluation, authorisation and restriction of chemicals (REACH). Off J Eur Union L 81:1–253 25. ECHA – European Chemicals Agency (2017) Endpoint specific guidance (version 6). In: Guidance on information requirements and chemical safety assessment, chapter R.7a. European Chemicals Agency, Helsinki 26. Fuart-Gatnik M, Worth A (2010) Review of software tools for toxicity prediction. EN. Publications Office of the European Union. http://publications.jrc.ec.europa.eu/repository/ 27. Chakravarti SK, Saiakhov RD, Klopman G (2012) Optimizing predictive performance of CASE ultra expert system models using the applicability domains of individual toxicity alerts. J Chem Inf Model 52:2609–2618 28. Tunkel J, Mayo K, Austin C et al (2005) Practical considerations on the use of predictive models for regulatory purposes. Environ Sci Technol 39:2188–2199 29. Roberts G, Myatt GJ, Johnson WP et al (2000) LeadScope: software for exploring large sets of screening data. J Chem Inf Comput Sci 40:1302–1314 30. Bercu J, Masuda-Herrera MJ, Trejo-Martin A et al (2021) A cross-industry collaboration to assess if acute oral toxicity (Q)SAR models are fit-for-purpose for GHS classification and labelling. Regul Toxicol Pharmacol 120:104843 31. Yang H, Lou C, Sun L et al (2019) admetSAR 2.0: web-service for prediction and optimization of chemical ADMET properties. Bioinformatics 35:1067–1069 32. Zhu H, Martin TM, Ye L et al (2009) Quantitative structure–activity relationship modeling of rat acute toxicity by oral exposure. Chem Res Toxicol 22:1913–1921 33. Banerjee P, Eckert AO, Schrey AK, Preissner R (2018) ProTox-II: a webserver for the prediction of toxicity of chemicals. Nucleic Acids Res 46:W257–W263 34. Drwal MN, Banerjee P, Dunkel M et al (2014) ProTox: a web server for the in silico prediction of rodent oral toxicity. Nucleic Acids Res 42:53–58 35.
Greene N, Judson PN, Langowski JJ, Marchant CA (1999) Knowledge-based expert systems for toxicity and metabolism prediction:
DEREK, StAR and METEOR. SAR QSAR Environ Res 10:299–314 36. Vedani A, Dobler M, Smieško M (2012) VirtualToxLab – a platform for estimating the toxic potential of drugs, chemicals and natural products. Toxicol Appl Pharmacol 261:142–153 37. Vedani A, Dobler M, Smieško M (2015) OpenVirtualToxLab – a platform for generating and exchanging in silico toxicity data. Toxicol Lett 232:519–532 38. Kinsner-Ovaskainen A, Rzepka R, Rudowski R et al (2009) Acutoxbase, an innovative database for in vitro acute toxicity studies. Toxicol In Vitro 23:476–485 39. Kinsner-Ovaskainen A, Prieto P, Stanzel S, Kopp-Schneider A (2013) Selection of test methods to be included in a testing strategy to predict acute oral toxicity: an approach based on statistical analysis of data collected in phase 1 of the ACuteTox project. Toxicol In Vitro 27:1377–1394 40. Prieto P, Kinsner-Ovaskainen A, Stanzel S et al (2013) The value of selected in vitro and in silico methods to predict acute oral toxicity in a regulatory context: results from the European project ACuteTox. Toxicol In Vitro 27:357–376 41. Prieto P, Kinsner-Ovaskainen A (2015) Short commentary to "Human in vivo database now on ACuteTox home page" [Toxicol In Vitro 27 (2013) 2350–2351]. Toxicol In Vitro 29:415. https://doi.org/10.1016/j.tiv.2014.11.016 42. Hoffmann S, Kinsner-Ovaskainen A, Prieto P et al (2010) Acute oral toxicity: variability, reliability, relevance and interspecies comparison of rodent LD50 data from literature surveyed for the ACuteTox project. Regul Toxicol Pharmacol 58:395–407 43. Fonger GC, Hakkinen P, Jordan S, Publicker S (2014) The National Library of Medicine's (NLM) Hazardous Substances Data Bank (HSDB): background, recent enhancements and future plans. Toxicology 325:209–216 44. ICCVAM (2001) Guidance document on using in vitro data to estimate in vivo starting doses for acute toxicity. NIH publication 01-4500. National Institute of Environmental Health Sciences, Research Triangle Park, North Carolina.
http://ntp.niehs.nih.gov/pubhealth/ evalatm/test-method-evaluatiguidanons/
In Silico Models for Predicting Acute Systemic Toxicity acute-systemic-tox/in-vitro/guidance-docu ment/index.html 45. Kleinstreuer NC, Karmaus A, Mansouri K et al (2018) Predictive models for acute oral systemic toxicity: a workshop to bridge the gap from research to regulation. Comput Toxicol 8:21–24 46. Zhu H, Ye L, Richard A et al (2009) A novel two-step hierarchical quantitative structureactivity relationship modeling work flow for predicting acute toxicity of chemicals in rodents. Environ Health Perspect 117: 1257–1264 47. Norle´n H, Berggren E, Whelan M, Worth A (2012) An investigation into the use of computational and in vitro methods for acute systemic toxicity prediction. EN. Publications Office of the European Union, Luxembourg 48. Diaza RG, Manganelli S, Esposito A et al (2015) Comparison of in silico tools for evaluating rat oral acute toxicity. SAR QSAR Environ Res 26:1–27 49. Nelms MD, Karmaus AL, Patlewicz G (2020) An evaluation of the performance of selected (Q)SARs/expert systems for predicting acute oral toxicity. Comput Toxicol 16:100135 50. Lessigiarska I, Worth AP, Netzeva TI et al (2006) Quantitative structure-activity-activity and quantitative structure-activity investigations of human and rodent toxicity. Chemosphere 65:1878–1887 51. Raevsky OA, Grigor’ev VJ, Modina EA, Worth AP (2010) Prediction of acute toxicity to mice by the arithmetic mean toxicity (AMT) modelling approach. SAR QSAR Environ Res 21: 265–275 52. Chavan S, Nicholls IA, Karlsson BC et al (2014) Towards global QSAR model building for acute toxicity: Munro database case study. Int J Mol Sci 15:18162–18174 53. Lu J, Peng J, Wang J et al (2014) Estimation of acute oral toxicity in rat using local lazy learning. J Chem 6:26 54. Sedykh A, Zhu H, Tang H et al (2011) Use of in vitro HTS-derived concentration-response data as biological descriptors improves the accuracy of QSAR models of in vivo toxicity. Environ Health Perspect 119(3):364–370 55. 
Low Y, Sedykh A, Fourches D et al (2013) Integrative chemical-biological read-across approach for chemical hazard classification. Chem Res Toxicol 26:1199–1208 56. Sullivan K, Allen DG, Clippinger AJ et al (2021) Mind the gaps: prioritizing activities to meet regulatory needs for acute systemic lethality. ALTEX 38(2):327–335. https://doi. org/10.14573/altex.2012121
287
57. Gadaleta D, Vukovic´ K, Toma C et al (2019) SAR and QSAR modeling of a large collection of LD50 rat acute oral toxicity data. J Cheminform 11:58. https://doi.org/10.1186/ s13321-019-0383-2 58. Ballabio D, Grisoni F, Consonni V, Todeschini R (2019) Integrated QSAR models to predict acute oral systemic toxicity. Mol Inform 38: 1800124 59. Graham JC, Rodas M, Hillegass J, Schulze G (2021) The performance, reliability and potential application of in silico models for predicting the acute oral toxicity of pharmaceutical compounds. Regul Toxicol Pharmacol 119:104816 60. Helman G, Shah I, Patlewicz G (2019) Transitioning the generalised read-across approach (GenRA) to quantitative predictions: a case study using acute oral toxicity data. Comput Toxicol 12:100097 61. Patlewicz G, Ball N, Booth ED et al (2013) Use of category approaches, read-across and (Q)SAR: general considerations. Regul Toxicol Pharmacol 67:1–12 62. Ekwall B, Barile FA, Castano A et al (1998) MEIC evaluation of acute systemic toxicity. Altern Lab Anim 26(Suppl. 2):617–658 63. Spielmann H, Genshow E, Liebsch M, Halle W (1999) Determination of the starting dose for acute oral toxicity (LD50) testing in the up and down procedure (UDP) from cytotoxicity data. Altern Lab Anim 27:957–966 64. Halle W (2003) The registry of cytotoxicity: toxicity testing in cell cultures to predict acute toxicity (LD50) and to reduce testing in animals. Altern Lab Anim 31:89–198 65. Clemedson C, Ekwall B (1999) Overview of the final MEIC results: I. The in vitro–in vitro evaluation. Toxicol In Vitro 13:657–663 66. Prieto P, Graepel R, Gerloff K et al (2019) Investigating cell type specific mechanisms contributing to acute oral toxicity. ALTEX 36:39–64 67. ECVAM EURL (2013) EURL ECVAM recommendation on the 3T3 neutral red uptake cytotoxicity assay for acute oral toxicity testing. EN. Publications Office of the European Union, Luxembourg 68. 
Rovida C, Ale´pe´e N, Api AM et al (2015) Integrated testing strategies (ITS) for safety assessment. ALTEX 32:25–40 69. Matthews EJ, Ursem CJ, Kruhlak NL et al (2009) Identification of structure-activity relationships for adverse effects of pharmaceuticals in humans: part B. use of (Q)SAR systems for early detection of drug-induced hepatobiliary and urinary tract toxicities. Regul Toxicol Pharmacol 54:23–42
288
Ivanka Tsakovska et al.
70. Lee S, Kang Y, Park H et al (2013) Human nephrotoxicity prediction models for three types of kidney injury based on data sets of pharmacological compounds and their metabolites. Chem Res Toxicol 26:1652–1659 71. Myshkin E, Brennan R, Khasanova T et al (2012) Prediction of organ toxicity endpoints by QSAR modeling based on precise chemical histopathology annotations. Chem Biol Drug Des 80:406–416 72. Munday R, Smith BL, Munday CM (2007) Structure-activity relationships in the haemolytic activity and nephrotoxicity of derivatives of 1,2- and 1,4-naphthoquinone. J Appl Toxicol 27:262–269 73. Jolivette LJ, Anders MW (2002) Structureactivity relationship for the biotransformation of Haloalkenes by rat liver microsomal glutathione transferase 1. Chem Res Toxicol 15: 1036–1041 74. Makhaeva GF, Radchenko EV, Palyulin VA et al (2013) Organophosphorus compound esterase profiles as predictors of therapeutic and toxic effects. Chem Biol Interact 203: 231–237 75. Makhaeva GF, Radchenko EV, Baskin P II et al (2012) Combined QSAR studies of inhibitor properties of O-phosphorylated oximes toward serine esterases involved in neurotoxicity, drug metabolism and Alzheimer’s disease. SAR QSAR Environ Res 23:627–647 76. Stenberg M, Hamers T, Machala M et al (2011) Multivariate toxicity profiles and QSAR modeling of non-dioxin-like PCBs–an investigation of in vitro screening data from ultra-pure congeners. Chemosphere 85: 1423–1429 77. Estrada E, Molina E, Uriarte E (2001) Quantitative structure-toxicity relationships using TOPS-MODE. 2. Neurotoxicity of a non-congeneric series of solvents. SAR QSAR Environ Res 12:445–459 78. Yazal JE, Rao SN, Mehl A, Slikker W Jr (2001) Prediction of organophosphorus acetylcholinesterase inhibition using three dimensional quantitative structure-activity relationship (3D-QSAR) methods. Toxicol Sci 63:223–232 79. 
Hosoya J, Tamura K, Muraki N et al (2011) A novel approach for a toxicity prediction model of environmental pollutants by using a quantitative structure-activity relationship method based on toxicogenomics. ISRN Toxicol 2011:515724
80. Sayes C, Ivanov I (2010) Comparative study of predictive computational models for nanoparticle-induced cytotoxicity. Risk Anal 30:1723–1734 81. Kafoury RM, Huang MJ (2005) Application of quantitative structure activity relationship (QSAR) models to predict ozone toxicity in the lung. Environ Toxicol 20:441–448 ˜ uelas-Rivas CG, Va´s82. Tenorio-Borroto E, Pen quez-Chagoya´n JC et al (2014) Model for high-throughput screening of drug immunotoxicity–study of the anti-microbial G1 over peritoneal macrophages using flow cytometry. Eur J Med Chem 72:206–220 83. Yuan J, Pu Y, Yin L (2013) Docking-based three-dimensional quantitative structureactivity relationship (3D-QSAR) predicts binding affinities to aryl hydrocarbon receptor for polychlorinated dibenzodioxins, dibenzofurans, and biphenyls. Environ Toxicol Chem 32:1453–1458 84. Hui-Ying X, Jian-Wei Z, Gui-Xiang H, Wei W (2010) QSPR/QSAR models for prediction of the physico-chemical properties and biological activity of polychlorinated diphenyl ethers (PCDEs). Chemosphere 80:665–670 85. Crivori P, Pennella G, Magistrelli M et al (2011) Predicting myelosuppression of drugs from in silico models. J Chem Inf Model 51: 434–445 86. Lee HM, Yu MS, Kazmi OSR et al (2019) Computational determination of hERGrelated cardiotoxicity of drug candidates. BMC Bioinformatics 20(Suppl 10):250 87. Anders MW, Dekant W (1998) Glutathionedependent bioactivation of haloalkenes. Annu Rev Pharmacol Toxicol 38:501–537 88. Chemi G, Gemma S, Campiani G et al (2017) Computational tool for fast in silico evaluation of hERG K+ channel affinity. Front Chem 5:7 89. Wang W, MacKinnon R (2017) Cryo-EM structure of the open human ether-a-go-gorelated K(+) channel hERG. Cell 169:422–430 90. Dickson CJ, Velez-Vega C, Duca JS (2020) Revealing molecular determinants of hERG blocker and activator binding. J Chem Inf Model 60(1):192–203 91. Wang S, Li Y, Xu L et al (2013) Recent developments in computational prediction of HERG blockage. 
Curr Top Med Chem 13: 1317–1326
In Silico Models for Predicting Acute Systemic Toxicity 92. Jing Y, Easter A, Peters D et al (2015) In silico prediction of hERG inhibition. Future Med Chem 7:571–586 93. Villoutreix BO, Taboureau O (2015) Computational investigations of hERG channel blockers: new insights and current predictive models. Adv Drug Deliv Rev 15:00022–00028 94. Garrido A, Lepailleur A, Mignani SM et al (2020) hERG toxicity assessment: useful
289
guidelines for drug design. Eur J Med Chem 195:112290 95. Yang C, Cronin MTD, Arvidson KB et al (2021) COSMOS next generation - A public knowledge base leveraging chemical and biological data to support the regulatory assessment of chemicals. Comput Toxicol 19:100175 https://doi.org/10.1016/j.com tox.2021.100175
Chapter 13

In Silico Models for Skin Sensitization and Irritation

Gianluca Selvestrel, Federica Robino, and Matteo Zanotti Russo

Abstract

The assessment of skin irritation, and in particular of skin sensitization, has evolved considerably in recent years, reaching new levels of quality and innovation. Public and commercial in silico tools have been developed for skin sensitization and irritation, making it possible to simplify the evaluation process and the development of topical products within the framework of computational methods, which now represent an established approach in the field of risk assessment. The possibility of using in silico methods is particularly appealing and advantageous because they deliver results quickly and at low cost. The most widespread and popular topical products are cosmetics. The European Regulation 1223/2009 on cosmetic products represents a paradigm shift for the safety assessment of cosmetics, transitioning from a classical toxicological approach based on animal testing towards a completely novel strategy in which the use of animals for toxicity testing is completely banned. In this context, sustainable alternatives to animal testing need to be developed, especially for skin sensitization and irritation, two critical and fundamental endpoints for the assessment of cosmetics. The Quantitative Risk Assessment (QRA) methodology and risk assessment using New Approach Methodologies (NAM) represent new frontiers to further improve the risk assessment process for these endpoints, in particular for skin sensitization. In this chapter we present an overview of the existing models for skin sensitization and irritation. Some examples are presented to illustrate the tools and platforms used for the evaluation of chemicals.

Key words Skin sensitization, Skin irritation, In silico methods, Risk assessment, Topical products, Cosmetics
1 Introduction

The skin is the largest organ of the human body. The skin is involved in many functions, such as providing a protective barrier from the external environment (e.g., defending against microbial infection, inhibiting the entry of chemicals and toxins, preventing dehydration), regulating body temperature, and producing vitamin D. The skin is also the most exposed organ and it is subject to several physical and environmental stressors [1].
Emilio Benfenati (ed.), In Silico Methods for Predicting Drug Toxicity, Methods in Molecular Biology, vol. 2425, https://doi.org/10.1007/978-1-0716-1960-5_13, © The Author(s), under exclusive license to Springer Science+Business Media, LLC, part of Springer Nature 2022
As such, the skin is susceptible to various disorders and diseases. Topical preparations exist in many forms, such as ointments, gels, creams, lotions, solutions, suspensions, foams, and shampoos. Topical dermatologic pharmaceutical products are used to treat a variety of disorders (e.g., bacterial, fungal/yeast, or viral infections, and inflammatory or pruritic manifestations). In everyday life, however, the first image associated with a topical product is a cream used in a beauty routine, making cosmetics the most important and widespread topical products in the world. Cosmetic products do not have a therapeutic purpose. They are applied to the skin exclusively or mainly to clean it, perfume it, change its appearance, protect it, keep it in good condition, or correct body odors. The large use of cosmetics makes this category of products the one most often involved in adverse reactions related to skin contact (e.g., skin irritation and skin sensitization). Skin irritation is a non-allergic inflammatory reaction of the skin to an external agent. It represents the local toxicity related to a cosmetic product [2]. Even though mechanical, thermal, or climatic cofactors are important, a toxic chemical is the major cause of skin irritation. Skin sensitization is a skin reaction that involves an immunologic mechanism, resulting in consequences such as contact dermatitis (a delayed-type reaction) or contact urticaria (an immediate-type reaction) [3]. The involvement of an allergic reaction brings long-term consequences for the affected people; in fact, contact allergy develops after months to years of exposure and is a chronic/repeat-dose toxicity endpoint. While several in vitro tests are available for skin irritation, the insidious characteristics of skin sensitization make this toxicological endpoint complex to investigate. Skin sensitization has been studied in depth and its Adverse Outcome Pathway (AOP) is well known [4].
Figure 1 shows the scheme of the AOP for skin sensitization. A conclusive outcome on the skin sensitization potential of a chemical should come from a clinical test such as the HRIPT (Human Repeated Insult Patch Test). Nowadays, however, this test is considered controversial because of ethical concerns due to the long-term consequences of an induced allergy. An interesting approach related to skin sensitization is the quantitative risk assessment (QRA), specifically developed for fragrances [5] but potentially applicable to other kinds of chemicals. The aim of this methodology is to establish a concentration at which the induction of sensitization by a fragrance is unlikely to occur [6] (see Note 1). In silico methods are likely to play an important role in understanding the hazard and risk associated with chemicals [7]. Assessing sensitization is a necessary component of classification and labelling, workers' safety and occupational health (where
Fig. 1 AOP scheme for skin sensitization
~20–30% of compounds may be sensitizers), the regulation of cosmetics and other industrial chemicals, as well as product discovery. Therefore, in the complex context of skin sensitization, in silico models could help define a strategy, following the Next Generation Risk Assessment approach [6], to evaluate the sensitization potential of a chemical used in any topical product. Several in silico models and tools have been developed for the evaluation of skin sensitization and irritation. In this chapter we provide an overview of these tools, explaining how to use them and underlining their utility in the daily work of risk assessors. Furthermore, the assignment of reliability and confidence scores that reflect the overall strength of these assessments is discussed.
2 Materials

2.1 VEGA Software
VEGA has a simple workflow, which is schematically illustrated in Fig. 2. Basically, the user can type or paste a SMILES string in the blank space at the top of the user interface and then, by clicking the '+' button, add it to a working list of molecules to be analyzed. Once a SMILES string is added to the working list, it is possible to highlight it and visually check the two-dimensional structure encoded by the text line. This checkpoint is critical: several structural inaccuracies can occur at this stage and compromise the reliability of the predictions.
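Since a mistyped SMILES silently yields a prediction for the wrong structure, a rough pre-check before pasting can catch common copy-paste accidents. The sketch below is a deliberately crude illustration (it is not VEGA's actual validation and not a real SMILES parser; isotope labels such as [13C], for example, would be flagged incorrectly):

```python
def crude_smiles_precheck(smiles):
    """Very crude sanity checks on a SMILES string before submission.

    Only catches unbalanced branch/atom brackets and unpaired
    ring-closure digits, two common copy-paste accidents.
    """
    if smiles != smiles.strip() or " " in smiles.strip():
        return False  # stray whitespace usually means a bad paste
    for open_ch, close_ch in (("(", ")"), ("[", "]")):
        depth = 0
        for ch in smiles:
            if ch == open_ch:
                depth += 1
            elif ch == close_ch:
                depth -= 1
                if depth < 0:
                    return False  # a bracket closes before it opens
        if depth != 0:
            return False  # an opened bracket never closes
    # Ring-closure digits must come in pairs (ignores %nn closures)
    for digit in "0123456789":
        if smiles.count(digit) % 2:
            return False
    return True
```

Passing such a check does not guarantee a valid structure; the 2D depiction shown by VEGA remains the authoritative visual check.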
Fig. 2 Main screen of the VEGA software
If needed, users can also evaluate multiple substances by exploiting the batch mode (clicking the "Import File" button at the top right of the user interface). In this case, the input file contains a list of SMILES strings saved in "txt" or "smi" format. By clicking the "Select" button, it is then possible to choose the model(s) of interest, to specify the desired output format (PDF or CSV files), and to indicate where the prediction reports should be saved ("Export" button). Finally, the selected model(s) can be executed by clicking the "Predict" button. For skin sensitization, VEGA makes two quantitative structure–activity relationship (QSAR) models available: the CAESAR model and the IRFMN/JRC model.

2.2 CAESAR Model (VEGA)
The model provides a qualitative prediction of skin sensitization in the mouse (local lymph node assay, LLNA). It is implemented inside the VEGA platform, accessible at: https://www.vegahub.eu/
2.2.1 Description of the Model
The model consists of a logic procedure called Adaptive Fuzzy Partition (AFP) based on 8 descriptors. The AFP produces two output values, 1 (positive) and 0 (negative), that represent the degree of belonging to the sensitizer and non-sensitizer class, respectively. The input compound is assigned to the class whose degree value is higher than 0.5, unless the difference between the two degrees is lower than the threshold of 0.001; in this case, the belonging to one class or the other is not
sure, thus no prediction is made. Full references and details of the used formulas can be found in [8].

2.2.2 List of Descriptors
The descriptors used by the AFP model are the following:

– nN: Number of nitrogen atoms.
– GNar: Narumi geometric topological index.
– MDDD: Mean Distance Degree Deviation.
– X2v: Valence connectivity index chi-2.
– EEig10r: Eigenvalue 10 from the edge adjacency matrix weighted by resonance integrals.
– GGI8: Topological charge index of order 8.
– nCconj: Number of non-aromatic conjugated C(sp2).
– O-058: Atom-centered fragment =O.
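The class-assignment rule described in the model description above can be sketched as a small function (the function and class labels are illustrative; the 0.5 and 0.001 thresholds are those stated in the text):

```python
def assign_afp_class(pos_degree, neg_degree, tie_threshold=0.001):
    """Turn the two AFP belonging degrees into a class label.

    Returns "Sensitizer", "NON-Sensitizer", or None when the two
    degrees differ by less than the tie threshold, in which case
    no prediction is made.
    """
    if abs(pos_degree - neg_degree) < tie_threshold:
        return None  # ambiguous: no prediction is made
    if pos_degree > 0.5:
        return "Sensitizer"
    if neg_degree > 0.5:
        return "NON-Sensitizer"
    return None  # neither degree exceeds 0.5
```

For example, a compound with belonging degrees of 0.93 (positive) and 0.07 (negative) would be assigned to the sensitizer class, while degrees of 0.5004 and 0.4999 would give no prediction.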
2.2.3 Applicability Domain (AD)
The applicability domain of predictions is assessed using an Applicability Domain Index (ADI) that takes values from 0 (worst case) to 1 (best case). The ADI is calculated by grouping several other indices (see Note 2), each one taking into account a particular issue of the applicability domain. Most of the indices are based on the most similar compounds found in the training and test sets of the model, identified by a similarity index that considers the molecule's fingerprint and structural aspects (counts of atoms, rings, and relevant fragments). The final global index takes all the abovementioned indices into account, in order to give a general global assessment of the applicability domain for the predicted compound. The defined intervals for this model are as follows:

1 ≥ index ≥ 0.8: the predicted substance is inside the Applicability Domain of the model.
0.8 > index ≥ 0.6: the predicted substance could be out of the Applicability Domain of the model.
index < 0.6: the predicted substance is out of the Applicability Domain of the model.

…73% and specificities >94% were obtained in external validations. It was interesting to note that only three endpoints (ALT, AST, and the composite score) had a relatively broad coverage among the 490 drugs in the database. This is in agreement with the fact that ALT and AST are routine, widely used clinical chemistry biomarkers for liver toxicity. The examination of the applicability of these developed models, using three chemical databases (the World Drug Index (WDI), the Prestwick Chemical Library (PCL), and the BioWisdom Liver Intelligence Module), showed low coverage. For example, 80% of the chemicals in the WDI database were outside the applicability domain of the models.
The authors also verified the predictions for compounds from these three external datasets by comparing the model-based classification with reports in the publicly available literature. For many compounds, the predictions could not be verified because of the lack of toxicity reports in the literature. This is a common problem encountered in many hepatotoxicity modeling studies. The lack of data is a limiting factor, as is the questionable quality and relevance of what is available. The model for the composite endpoint was also further validated using five pairs of structurally similar chemicals with opposing liver toxicity effects. The outcome of this external validation was equivocal: two pairs were outside of the model's applicability domain and only one pair was predicted correctly. Building on the similar experiences noted above, this may suggest that in some cases chemical mechanism(s) alone may not account for the toxic potential. It is possible in these cases that the differential toxicity arises from metabolic transformations, complex disease pathways, or other risk factors dependent on genetic polymorphism and/or environmental conditions. This study clearly illustrates that the limitations of in silico methodologies result from their restricted applicability domains as well as from a lack of understanding of the complexities of human risk factors and DILI pathways. Liu et al. [28] utilized the clinical and postmarketing data from the computer-readable Side Effect Resource (SIDER) database [70] and identified 13 types of hepatotoxic side effects (HepSEs) based on MedDRA ontology, including bilirubinemia, cholecystitis, cholelithiasis, cirrhosis, elevated liver function tests, hepatic failure, hepatic necrosis, hepatitis, hepatomegaly, jaundice, liver disease, fatty liver, and liver function test abnormalities [28]. Firstly, these
In Silico Models for Hepatotoxicity
13 side effects were used to discriminate drugs that do and do not cause DILI, using the Liver Toxicity Knowledge Base Benchmark Dataset (LTKB-BD) [71] and the PfizerData [37]. For the LTKB-BD, the classification accuracy was 91%; for the PfizerData the accuracy was significantly lower (74%). In the next step, using the SIDER database, QSAR models for each HepSE were generated using a Bayesian methodology, and these were then combined to form a DILI prediction system (DILIps). Finally, the authors implemented a "rule of three" (RO3) criterion (a chemical being positive in at least three HepSEs) into DILIps, which increased the classification accuracy. The predictive performance of DILIps was examined using three external databases, the LTKB-BD, the PfizerData, and a dataset published by O'Brien et al. [72], and yielded prediction accuracies of 60–70%. Liu et al. also applied the RO3 criterion to drugs in DrugBank to investigate their DILI potential in terms of protein targets and therapeutic categories. Two therapeutic categories showing a higher risk of causing DILI were identified (anti-infectives for systemic use and musculoskeletal system drugs). These findings are consistent with current knowledge that most anti-infective drugs are very often associated with liver injuries. In addition, 134 protein targets related to drugs inducing liver toxicity were identified using pathway analysis and co-occurrence text mining, with most of these targets being associated with multiple HepSEs. This study provides an interesting example of the translation of clinical observations into an in silico tool which can be used to screen and prioritize new drug candidates or chemicals and to avoid those that might cause hepatotoxicity. The SIDER database was used, in addition to data from LiverTox, PharmGKB, and Drugs.com, by Liu et al. [73] to build a tiered system of QSAR models based on human data from clinical trials and observations.
The tiered system comprised three levels of the endpoint, defined by: hepatotoxicity existence, hepatotoxicity severity, and specific adverse hepatic events (AHEs). A random forest (RF) algorithm was applied to 206 2D molecular descriptors calculated in MOE for the collated data, which comprised 1038 compounds with no reported AHEs and 2017 compounds associated with 403 AHE incidences (15,873 compound–AHE pairs in total). The models were built and implemented through a KNIME workflow, which the authors made available as supplementary information. In total, the authors built 2 models for the global endpoint of "hepatotoxicity," 4 models relevant to each level of hepatotoxicity severity, and 22 models for specific AHEs. The authors found data on more than 22 AHEs but were unable to build models for each one due to insufficient data being available (AHEs with fewer than 100 compounds were not modeled). On an external test set, the global models gave an accuracy of ~81%. However, the test set only
Claire Ellison et al.
contained a small number (11) of known hepatotoxins and was used by the authors to demonstrate that the system can offer powerful evidence to confirm a compound's hepatotoxicity post-release and offer a mechanistic rationale. A similar hierarchical tiered approach was implemented by Mulliner et al. [32]. The authors acknowledge that when attempting to build models for specific endpoints and mechanisms of action, a compromise between the complexity of the endpoint and the coverage is required, understanding that it is not possible to build a model for specific mechanisms which are exhibited by only a small number of compounds. To this end, the authors created a three-tier system defined by hepatotoxicity existence, clinical chemistry or morphological findings, and hepatocellular or hepatobiliary injury. The authors then further divided the endpoint according to whether the data were from humans or from preclinical studies. A final split was based on the dose in the preclinical data, where only compounds which exhibited toxicity at …

$$R_{\text{mix}} = 100\left\{1 - \prod_{I=1}^{n}\left[1 - \frac{1}{1 + \left(\frac{1}{\sum_{i=1}^{n} C_i / EC_{50i}}\right)^{P_0}}\right]\right\} \qquad (4)$$

Here, $R_{\text{mix}}$ = the overall combined toxicity of the chemical groups; $EC_{50i}$ = median effective concentration of component i; $C_i$ = concentration of the ith chemical in the mixture; $P_0$ = average power of the individual chemicals within a group. The major limitations of this model are as follows:

(a) This model does not consider interactions.
(b) It is more data demanding than the first-generation models.
(c) It can be used only when accurate modes of toxic action for all the component chemicals are readily available.
Computational Modeling of Mixture Toxicity

Integrated Addition and Interaction Model
To overcome the limitations of the integrated addition model and to incorporate the effect of interactions, Rider and LeBlanc expanded the previous model and proposed the integrated addition and interaction (IAI) model. They used the K function to incorporate the effect of interactions. A chemical interaction can be defined as the ability of one chemical to alter the properties of other chemicals, and it can be described mathematically using the coefficient of the K function [39]. This model also acts as a two-stage prediction model like the previous one. In the first stage, the combined toxicity of the separate groups, formed on the basis of a similar mechanism of action, is calculated by Eq. 5:

$$R = 1 - \prod_{I=1}^{N}\left\{1 - \frac{1}{1 + \left(\frac{1}{\sum_{i=1}^{n} K_{a,i}(C_a)\, C_i / EC_{50i}}\right)^{P_0}}\right\} \qquad (5)$$

Here, $K_{a,i}$ represents a function which describes the extent of one chemical (a) present after interaction between a and i, $EC_{50i}$ is the median effective concentration of component i, and $P_0$ represents the average power of the chemicals within a group.

Limitations of the IAI model:
(a) It needs experiments to calculate the K function.
(b) It is a highly data-demanding process (dose-response curves of the components are needed to determine the interactions).
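Relative to the plain integrated addition calculation, the IAI model only changes the within-group sum: each component's toxic units are weighted by its interaction function K. Real K functions must be fitted to experimental interaction data; the sketch below uses constant K values as a stand-in (K = 1 for every component recovers plain integrated addition), and all names and numbers are illustrative:

```python
def iai_response(groups, power):
    """Mixture response fraction with interaction-weighted toxic units.

    groups: list of groups; each group is a list of (K, C_i, EC50_i)
            triples, where K is a constant stand-in for the interaction
            function of the component.
    power:  average power of the within-group dose-response.
    """
    survival = 1.0
    for group in groups:
        # Interaction-weighted toxic units within the group
        weighted_tu = sum(k * c / ec50 for k, c, ec50 in group)
        if weighted_tu == 0:
            continue
        response = 1.0 / (1.0 + (1.0 / weighted_tu) ** power)
        survival *= 1.0 - response
    return 1.0 - survival  # response fraction across all groups
```

With K > 1 (an interaction that inflates the effective contribution of a component) the predicted response rises above the non-interacting case.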
3 Importance of Computational Toxicology of Chemical Mixtures

Computational toxicology is a developing field of science which combines advances in molecular biology and chemistry with computational science for the prediction of various toxicity endpoints for single chemicals as well as mixtures [40]. There is still a data gap for the toxicity of the single chemicals present in the environment, and the situation for mixture toxicity data is even worse. As discussed earlier, conventional toxicity assessments need animal or in vitro models for both the individual chemicals and the mixtures. Animal experimentation is an ethically complicated and troublesome process which requires a lot of money and manpower. Computational methods are among the promising techniques for toxicity prediction, using multiple chemometric tools and expert systems. These approaches are not exact alternatives to animal testing; rather, they are usually used to complement experimentation by reducing animal testing. These cost-effective alternatives are green and time efficient, and they can improve toxicity predictions and risk assessment.
Mainak Chatterjee and Kunal Roy
Additionally, in the case of risk assessment of mixtures, computational methods can predict the toxicity of any number of combinations in advance. Various regulatory authorities, followed by industry, are now using in silico techniques for chemical risk assessment. For example, the use of QSAR is already accepted for hazard assessment by the European Union's REACH legislation. Various commercial software systems such as the Derek Nexus expert system (DEREK), TOPKAT (Accelrys), and MultiCASE are available for toxicity prediction of single chemicals (see Chapter 17). The Organization for Economic Cooperation and Development (OECD) has developed noncommercial software called the OECD Toolbox for the same purpose [41]. These systems are based on statistical techniques, expert systems, and neural networks that relate various biological and physicochemical endpoints (LC50, EC50, and so on) to molecular structures in the form of descriptors. The major advantages of in silico methods in mixture toxicity prediction are as follows [23]:

(a) They can efficiently predict the hazards posed by single chemicals and multicomponent mixtures in the environment.
(b) The unethical practice of animal sacrifice can be minimized by the application of computational tools in the toxicity studies of chemicals and mixtures.
(c) They can predict the toxicity of combinations whose component chemicals are present at very low concentrations (less than their EC50 or NOEC).
(d) Using chemometric models of existing mixtures, the toxicity of new and unknown combinations against a specific species can be predicted if they fall within the applicability domain of the model.
(e) In silico tools are fast, less costly, and reproducible compared to in vivo or in vitro experiments.
(f) They are a reliable way to generate more toxicity data for multicomponent chemical mixtures, and hence lead to easier risk assessment of mixtures.
4 Computational Modeling Techniques for Toxicity Prediction of Mixtures

Since the introduction of computational techniques into mixture toxicity assessment, various in silico studies have been carried out. These studies are basically computational modeling of mixture toxicity using various in silico tools. Computational modeling studies follow some basic steps for toxicity prediction (Fig. 2), namely (1) data gathering, (2) descriptor generation, (3) modeling, (4) model evaluation, and (5) model interpretation. In the present
Computational Modeling of Mixture Toxicity
Fig. 2 Steps for generating in silico prediction models and the major modeling techniques used for mixture toxicity assessment
chapter, various in silico models for the toxicity assessment of mixtures are discussed in detail.
4.1 Quantitative Structure–Activity Relationship
Quantitative structure–activity relationship (QSAR) is the most commonly used similarity-based in silico approach for the toxicity assessment of mixtures. It is a simple mathematical modeling technique that correlates chemistry with specific molecular properties (biological, physicochemical, or toxicological) using computationally or experimentally derived quantitative parameters termed descriptors [42]. A QSAR model is developed from a dataset of a series of compounds with a specific response. In the absence of experimental data, a compound's characteristics can be predicted by QSAR models, provided an acceptable relationship between structural properties and activity exists. The simplified expression of QSAR is as follows:
Response (activity/toxicity/property) = f (quantitative structural information or properties) = f (Descriptors)
In conventional QSAR models, the descriptors are calculated for a series of individual compounds, whereas for a mixture the descriptors are computed from information on the mixture components. The basic difference between conventional QSAR and mixture QSAR thus lies in the descriptor calculation and result interpretation stages; the remaining stages are similar. Conventional QSARs can be classified in different ways. Based on the
Mainak Chatterjee and Kunal Roy
dimensionality of the molecular descriptors used, QSAR is classified from 0D to 7D. The first three types (0D-, 1D-, and 2D-QSAR) are simple and fast techniques, whereas the rest are computationally more demanding and complex [43]. QSAR can also be classified into regression-based models (multiple linear regression [MLR], partial least squares [PLS], genetic function approximation [GFA], etc.) and classification-based models (linear discriminant analysis [LDA], cluster analysis [CA]). Machine learning algorithms (artificial neural network [ANN], support vector machine [SVM], random forest [RF]) are also important modeling tools [44]. The basic stages of mixture QSAR are discussed in the following sections.
4.1.1 Steps Involved in a QSAR Modeling Study
The basic steps in developing a QSAR model are data collection and pretreatment, computation of molecular descriptors, dataset division, feature (descriptor) selection, model development, model validation, and model interpretation. QSAR modelers use one or more statistical software packages to develop a statistically significant, robust, and reliable relationship between the molecular structural features of compounds or mixtures, expressed as descriptors, and the response (activity, property, or toxicity). In 2004, QSAR experts prescribed guidelines for the development and validation of QSAR models, which remain the most relevant outline for modelers seeking reliable, consistent, and reproducible QSAR models. These guidelines were adopted by the OECD (Organization for Economic Cooperation and Development) and are commonly known as the OECD principles [45]. According to these principles, a QSAR model should have a defined end point (OECD principle 1), an unambiguous algorithm (OECD principle 2), a defined domain of applicability (OECD principle 3), appropriate measures of goodness-of-fit, robustness, and predictivity (OECD principle 4), and a mechanistic interpretation, if possible (OECD principle 5). The various steps of QSAR modeling of mixtures are briefly discussed below in accordance with the OECD principles.
Data Collection and Pretreatment
QSAR is a data-driven process, and hence data collection is the primary and most important step in constructing a model. The quality of the model always depends on the correctness of the data. The dataset must contain a proper end point (LC50, EC50, etc.) against a specific organism. During data collection, modelers should focus on the correctness of the chemical structures and the biological response values. Ideally, the experimental protocol of the toxicity test should be the same for a given set of compounds; end point values obtained under different laboratory conditions should not be combined into a single dataset. The dataset must be analyzed for duplicates and salts, which should be removed. Mixture toxicity data are more complex,
and only very limited such data are available for QSAR studies. The quality of the available data is often questionable, and hence more attention is required to generate sufficient mixture toxicity data [46–48].
Computation of Mixture Descriptors
Descriptor calculation for conventional single-molecule QSAR is a straightforward task. Various software tools, such as Dragon [49], PaDEL-Descriptor [50], SiRMS [51], and alvaDesc [52], are employed to compute molecular descriptors. However, the calculation of mixture descriptors is trickier. The weighted-descriptor approach is most often applied to obtain mixture descriptor values from the descriptors of the component chemicals of a mixture [53]. According to this strategy, the mixture descriptor is obtained by multiplying the descriptor value of each mixture component by the percentage ratio of that component in the mixture and then summing over all components present in the mixture (Eq. 6):

D_mix = Σ (D_i x_i)    (6)

Here, D_mix is the estimated value of the mixture descriptor, x_i is the percentage ratio of a specific component, and D_i is the original descriptor value of that component. The percentage ratios of all components in a mixture should sum to 100%. Apart from weighted descriptors, other approaches reported in the literature include (a) integral additive descriptors (weighted sums of the descriptors of the pure components), (b) integral nonadditive descriptors of mixtures, (c) fragment nonadditive descriptors (taking various structural fragments into consideration), (d) descriptors based on the partition coefficients of mixtures, and miscellaneous mixture descriptors [53].
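The weighted-descriptor calculation of Eq. 6 can be sketched in a few lines of Python. This is a hypothetical illustration with toy descriptor values; `mixture_descriptor` is our own helper, not part of any descriptor software:

```python
# Weighted-descriptor approach (Eq. 6): the mixture descriptor is the sum of
# each component's descriptor value D_i multiplied by that component's
# percentage ratio x_i in the mixture.

def mixture_descriptor(descriptors, ratios):
    """descriptors: per-component descriptor values D_i.
    ratios: percentage ratios x_i (must sum to 100)."""
    if abs(sum(ratios) - 100.0) > 1e-6:
        raise ValueError("component percentages must sum to 100")
    # D_mix = sum(D_i * x_i), with x_i expressed as a fraction
    return sum(d * (x / 100.0) for d, x in zip(descriptors, ratios))

# Toy binary mixture: hypothetical descriptor values 2.0 and 4.0 for the two
# components, mixed at a 25:75 percentage ratio.
d_mix = mixture_descriptor([2.0, 4.0], [25.0, 75.0])
print(d_mix)  # 3.5
```

The same helper extends unchanged to ternary and higher-order mixtures, since the sum simply runs over all components.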
Model Development
QSAR models are developed using various statistical tools and methods to achieve good prediction, reliability, and robustness. For model development and proper validation, the dataset should be divided into training and test sets. The training set compounds or mixtures are used exclusively for model development and internal validation, whereas the test set compounds/mixtures are used for external validation. Selection of significant descriptors is the trickiest step of QSAR modeling. The software tools calculate thousands of descriptors, of which only the most relevant are chosen for modeling. The aim of feature selection is to find the descriptors with the maximum correlation to the modeled activity and to develop a linear or nonlinear relationship between structural features and the response variable. All irrelevant and noisy descriptors are removed in this step without losing relevant information. Stepwise regression, all-possible-subset selection, and genetic algorithms
are the most commonly used feature selection techniques for QSAR modeling of both individual compounds and mixtures [44]. Several techniques are available for model development, such as multiple linear regression, partial least squares regression, principal component regression, and ridge regression. The training set compounds are used for model development. To learn more about these modeling techniques, readers are advised to consult the relevant literature [54].
Model Evaluation
The quality of a prediction model depends on the selection of a proper modeling technique, and the developed QSAR model must be validated properly. Models are usually validated using various internal validation metrics, external validation metrics, Y-randomization analysis, and estimation of the applicability domain of the model. Training set compounds are used for internal validation and test set molecules for external validation. Internal validation checks the goodness-of-fit of the model; metrics such as R2 and Q2LOO measure internal model quality. External validation checks the overall predictivity of the model. The Y-randomization test checks whether the model is the result of chance correlation. For a detailed discussion of validation and the various validation metrics, readers are advised to consult the cited references [54]. QSAR models are easy to interpret, and they can extract the structural features of the components that are meaningful for mixture toxicity. Prediction for a large database is possible by QSAR, and most importantly, the toxicity of probable environmental mixtures can be predicted in advance. However, this approach also has some limitations:
(a) It is a data-driven process; experimental data are needed to develop a QSAR model.
(b) The interaction effects of mixtures are not considered in QSAR.
(c) QSAR is species specific; hence, it cannot be used for extrapolation between species unless biological data are available.
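The internal validation metrics mentioned above can be made concrete with a minimal pure-Python sketch (toy data and a single hypothetical descriptor; real QSAR work uses dedicated statistical software). R2 measures goodness-of-fit on the training data, while Q2 from leave-one-out cross-validation (PRESS-based) measures internal predictivity:

```python
# One-descriptor OLS model with R^2 (goodness-of-fit) and leave-one-out
# Q^2 (cross-validated predictivity). Data values are illustrative only.

def fit_line(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def r2(xs, ys):
    """Goodness-of-fit on the training data."""
    a, b = fit_line(xs, ys)
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

def q2_loo(xs, ys):
    """Leave-one-out cross-validated predictivity (PRESS-based)."""
    my = sum(ys) / len(ys)
    press = 0.0
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        press += (ys[i] - (a + b * xs[i])) ** 2
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - press / ss_tot

# Toy pEC50 values against a single hypothetical mixture descriptor
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9]
print(round(r2(xs, ys), 3), round(q2_loo(xs, ys), 3))
```

Note that Q2LOO is never larger than R2 for an OLS model, reflecting the fact that cross-validated predictivity is a stricter check than goodness-of-fit.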
4.2 Read Across, Chemical Category, and Trend Analysis
Read across (RA) predicts the toxicity of an unknown chemical from chemical analogs with known toxicity in the same chemical category [55–58]. It is accepted under REACH and by the US EPA and is used for filling the data gaps of thousands of compounds [59]. A chemical category can be defined as a class of chemicals that follows a similar pattern of biological action and shows similar toxicity or properties. The chemicals in a specific category, called source chemicals, are used to estimate the toxicity of unknown chemicals (target compounds) of the same category. Trend analysis is a toxicity prediction technique based on analysis of the
Fig. 3 Read across modeling and its different approaches (analog vs. category approach; interpolation vs. extrapolation)
toxicity trend (whether toxicity increases, decreases, or remains constant) of tested chemicals. There are two basic ways to develop a read-across model: the analog approach (AN) and the category approach (CA) [57, 58]. The analog approach is a one-to-one prediction method in which the unknown toxicity of a compound is predicted from a single known analog. This method is less reliable because of its sensitivity to outliers; two different analogs may have different toxicity patterns, so the chance of a false prediction is higher. The category approach gives more confident predictions because many analogs are used to determine a trend within a category and predict the toxicity of the unknown analog. Read across can be used with both qualitative and quantitative data; quantitative read across is the most used [60, 61]. The various read-across approaches are depicted in Fig. 3. Read-across analysis has certain advantages:
(a) It can model both qualitative and quantitative data.
(b) It is a simple and transparent technique.
(c) Implementation and interpretation are easy.
The limitations of the method are as follows:
(a) In practice, RA can be used only for small datasets because the number of analogs for a target chemical is limited.
(b) Inaccurate predictions are possible when analogs have conflicting toxicity profiles.
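A quantitative read-across estimate in the category approach can be sketched as a similarity-weighted average over source analogs. This is a toy illustration with hypothetical data; the similarity measure (inverse descriptor distance) is our own simplification, not a standard read-across metric:

```python
# Toy quantitative read across: the toxicity of a target chemical is
# estimated as the similarity-weighted average of the known toxicities of
# its source analogs in the same chemical category.

def read_across(target_desc, sources):
    """sources: list of (descriptor_value, known_pEC50) for source analogs.
    Similarity is taken, for illustration only, as the inverse of the
    descriptor distance to the target."""
    weights, values = [], []
    for desc, pec50 in sources:
        sim = 1.0 / (1.0 + abs(desc - target_desc))  # crude similarity
        weights.append(sim)
        values.append(pec50)
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Category approach: several analogs inform the prediction; the closest
# analog (descriptor 2.5) gets the largest weight.
sources = [(2.0, 3.5), (2.5, 3.8), (3.0, 4.1)]
print(round(read_across(2.4, sources), 3))
```

With a single entry in `sources`, the same function reduces to the analog (one-to-one) approach, which illustrates why that variant is more sensitive to an outlier analog.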
4.3 Structural Alerts and Rule-Based Techniques
Structural alerts (SA) are chemical structures or fragments associated with toxicity [58, 62]; they are also known as toxicophores or toxic fragments [63]. An SA may comprise a single atom, several connected atoms (a group), or a combination of groups or fragments; a combination of groups generally contributes more toward toxicity than a single SA. The concept of a structural alert is based on a defined rule, that is, "if A is B then T," where A is an SA, B is its binary value (present or absent), and T is the toxicity prediction with a certainty level. For example, "IF (chemical substructure) IS (present) THEN (toxicity IS certain)." Two main types of rule-based models are seen in practice: human-based rules (HBRs) and induction-based rules (IBRs). HBRs depend on human intelligence, whereas IBRs are derived by computer. Human-based rules are more accurate, but their use is limited by the dependence on expert knowledge and by bias; they require detailed literature surveys and continual knowledge updates, which makes them impractical in some cases. By contrast, induction-based rules can be generated efficiently from large datasets using computers [40]. The method is easy to understand, and the results are easily interpretable; it can also be used to guide the transformation of a structure into a less toxic one. Although it is a useful qualitative measure, it has some limitations:
(a) The mechanism behind the toxicity cannot be identified.
(b) Quantitative measurement is not possible.
(c) The list of all possible structural alerts is still incomplete, which may cause many false negative results.
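The "IF (substructure) IS (present) THEN (alert)" rule form can be sketched as a tiny rule base. For illustration the alerts are matched as SMILES substrings; a real system would use proper substructure (e.g., SMARTS-based) matching, and the alert names and fragments below are hypothetical examples, not a validated alert list:

```python
# Toy structural-alert rule base: each rule is "IF fragment IS present
# THEN report the associated alert". Substring matching on SMILES is a
# crude stand-in for real substructure search.

ALERTS = {
    "N(=O)=O": "nitro group - potential mutagenicity alert",
    "C(=O)Cl": "acyl halide - reactivity alert",
}

def screen(smiles):
    """Return the alert messages triggered by a SMILES string."""
    return [msg for frag, msg in ALERTS.items() if frag in smiles]

print(screen("c1ccccc1N(=O)=O"))  # nitrobenzene triggers the nitro alert
print(screen("CCO"))              # ethanol triggers nothing -> []
```

The example also makes the third limitation above concrete: a chemical containing a toxicophore absent from `ALERTS` would silently return an empty list, i.e., a false negative.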
4.4
Expert Systems
Expert systems combine multiple QSAR models and/or QSAR models with various other in silico tools, such as read across, induction-based rules, and structural alerts, in a single software tool or system. They are a more convenient choice for toxicity prediction than conventional QSARs. An expert system requires the structure, end point, and organism information as input; the necessary steps of QSAR modeling (descriptor calculation, model validation, etc.) are performed by the expert system itself. It is thus a fast and easy technique that can be operated by nonexperts as well. Because of their speed and cost-effectiveness, expert systems are widely used for the toxicity prediction of individual chemicals by regulatory agencies, industry, and academia. Figure 4 presents a flowchart of the preparation of a QSAR-based expert system for the toxicity assessment of mixtures. Some of the promising expert systems for toxicity prediction are the OECD Toolbox, Derek Nexus, AMBIT, AIM, Toxtree, DSSTox, Meteor, CASE, HazardExpert, and PASS [64].
Fig. 4 Flowchart of the preparation of QSAR based expert system for mixture toxicity assessment
5 Applications of Computational Modeling for Toxicity Prediction of Mixtures: A Few Case Studies
Scientific communities around the globe are continuously working on mixture risk assessment. Various new ideas and techniques are being used to develop more reliable and precise prediction models and, moreover, to construct reliable guidelines for mixture risk assessment. The popular in silico techniques, QSAR and machine learning approaches, have been used successfully in various computational studies of mixture toxicity prediction. In this chapter, we discuss some of the successful concepts as representative case studies for a better understanding of the applications of computational modeling in real-world scenarios.
5.1 Case Study 1: QSAR Modeling of Multicomponent Mixtures of Antibiotics and Pesticides
Liang et al. [65] developed a QSAR model for prediction of the toxicity (median effective concentration—EC50) of binary and multicomponent mixtures composed of two antibiotics (tetracycline hydrochloride and chloramphenicol) and four pesticides (metribuzin, trichlorfon, dichlorvos, and linuron) against Aliivibrio fischeri. The authors considered the additive and nonadditive (synergism and antagonism) effects of mixtures during model development and calculated the mixture descriptors by following eight different mathematical equations (Eqs. 7–14) to encode the nonlinear additive effects of mixtures.
D_mix,i = Σ(i=1..n) p_i x_i    (7)

D_mix,i = (Σ(i=1..n) p_i x_i)^2    (8)

D_mix,i = √(Σ(i=1..n) p_i x_i^2)    (9)

D_mix,i = Σ(i=1..n) √(p_i x_i)    (10)

D_mix,i = √(Σ(i=1..n) |x_i|)    (11)

D_mix,i = Σ(i=1..n) p_i (x_i)^2    (12)

D_mix,i = Σ(i=1..n) (p_i x_i)^2    (13)

D_mix,i = (Σ(i=1..n) p_i x_i^3)^(1/3)    (14)
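A few of these aggregation schemes can be implemented directly. The sketch below follows our reading of Eqs. 7–10 (toy values; p_i is the concentration ratio and x_i the descriptor value of component i, and the function names are ours, not the authors'):

```python
import math

# Four of the nonlinear mixture-descriptor aggregation schemes used to
# encode nonadditive mixture effects from per-component descriptors.

def d_linear(p, x):            # Eq. 7: sum(p_i * x_i)
    return sum(pi * xi for pi, xi in zip(p, x))

def d_square_of_sum(p, x):     # Eq. 8: (sum(p_i * x_i))^2
    return d_linear(p, x) ** 2

def d_root_of_squares(p, x):   # Eq. 9: sqrt(sum(p_i * x_i^2))
    return math.sqrt(sum(pi * xi ** 2 for pi, xi in zip(p, x)))

def d_sum_of_roots(p, x):      # Eq. 10: sum(sqrt(p_i * x_i))
    return sum(math.sqrt(pi * xi) for pi, xi in zip(p, x))

p = [0.5, 0.5]  # equimolar binary mixture
x = [1.0, 4.0]  # hypothetical descriptor values of the two components
print(d_linear(p, x), d_square_of_sum(p, x), d_root_of_squares(p, x))
```

Applying all the schemes to the same component data yields a family of alternative mixture descriptors, from which the best-correlating ones can then be selected during modeling.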
Here, D_mix,i represents the overall mixture descriptor, i denotes the ith component in the mixture, x_i is the descriptor value of the ith component, and p_i is its concentration ratio. The authors used data for 25 binary, 15 ternary, and 5 quaternary mixtures to construct an effective model for multicomponent mixtures. The study compared conventional toxicity prediction by concentration addition (CA) and independent action (IA) with the QSAR results. Based on the CA and IA predictions, 18 mixtures presented additive effects, whereas 27 mixtures showed nonadditive effects. The authors developed a genetic algorithm-based MLR (multiple linear regression) model with three descriptors. The agreement between predicted and observed toxicity data was expressed by a high coefficient of determination (R2 = 0.9366). Of the 45 mixture predictions, only 12 fell outside the lower and upper bounds of the 95% confidence interval, a better result than the CA and IA models (27 mixtures out of range). The authors measured the root mean square error (RMSE) to evaluate the deviation between observed and predicted results; the value for the QSAR model on the whole dataset (0.1663) was lower than for both the CA (0.2045) and IA (0.2378) predictions. The authors therefore claimed better predictive capability of the QSAR model over CA and IA for the prediction of toxicity
of nonadditive mixtures, for binary as well as multicomponent mixtures.
5.2 Case Study 2: INFCIM Modeling to Predict Mixture Toxicity
Absolute similarity or dissimilarity of the toxicity pattern of all mixture components is rare in real-world scenarios, and consequently the conventional additive models do not always predict the toxicity of mixtures satisfactorily. In 2004, Mwense et al. [66] proposed a new methodology called the integrated fuzzy concentration addition–independent action model (INFCIM), which uses molecular descriptors and fuzzy set theory to characterize the similarity and dissimilarity of mixture components; on that basis, the model integrates the CA and IA models for better prediction of mixture toxicity. The authors tested their model in two cases. In the first case, they took the toxicity data of two mixtures (mixture 1 and mixture 2), containing 18 s-triazines and 16 dissimilarly acting chemical species, respectively, tested on the freshwater alga Scenedesmus vacuolatus; the inhibition of reproduction was tested with both mixtures. In the second case, the toxicity data comprised mixture 3 (10 quinolones) and mixture 4 (16 phenol derivatives), causing long-term inhibition of bioluminescence of the marine bacterium Vibrio fischeri. In the proposed INFCIM approach, molecular descriptors of the individual components are calculated to assess the similarity among the components. The model development and functioning are illustrated in Fig. 5. The low prediction errors between the experimental data and INFCIM for all datasets (less than 10% for mixture 1, less than 16% for mixture 2, and 11% for mixtures 3 and 4) justify the good predictive ability of the method. The authors claimed that the INFCIM approach can be used for toxicity prediction of the same mixture at different compositions, or of different mixtures, as long as the end point remains the same.
5.3 Case Study 3: Prediction of Toxicity of Chemical Mixtures by Employing Machine Learning Approach
Cipullo et al. [67] employed two machine learning models, an artificial neural network (ANN) and random forest (RF), to predict temporal bioavailability, which was then used to predict the toxicity of complex chemical mixtures. The authors developed a two-step process: the temporal bioavailability of heavy metals/metalloids and petroleum hydrocarbons was predicted in the first step, and the soil toxicity was predicted from the bioavailability data in the second (Fig. 6). Six months of mesocosm experimental data were used for the study. Parameters such as amendment (biochar, compost), soil type, incubation time, and initial concentrations of specific contaminants served as input data for the modeling. The ANN was used to predict bioavailability, whereas the RF was employed to identify the important input variables for temporal bioavailability and toxicity prediction. A series of biological indicators, including soil respiration, bacterial counts,
Fig. 5 Schematic representation of INFCIM development and functioning
Fig. 6 Prediction of soil toxicity from predicted bioavailability employing machine learning algorithm
seed germination, earthworm lethality, microbial community fingerprints, and bioluminescent bacteria, were evaluated for the environmental risk assessment. The ML models showed the importance
of multiple features (combined effects) in the change in toxicity; this information may not be captured by classical linear regression models. It was found that the prediction of the earthworm acute toxicity index was mainly affected by phenanthrene, anthracene, acenaphthene, fluorene, pyrene, and the C17–C19 aliphatic fraction of petroleum hydrocarbons. Aliphatic hydrocarbons of medium chain length were identified as important indicators of acute toxicity to soil organisms such as earthworms. The authors claimed that the use of an ML model could improve our understanding of the rate-determining processes affecting the bioavailable fraction of soil contaminants; the potential risk assessment of environmental mixtures thereby becomes easier, and appropriate safety measures can be adopted.
5.4 Case Study 4: Application of QSAR and 2D Structural Descriptors for the Prediction of Aquatic Toxicity of Mixtures
Roy et al. [68] published a mixture QSAR study assessing the aquatic toxicity of two mixture datasets (type 1 and type 2). The authors collected marine toxicity data for 35 pure compounds (series 1 compounds: trimethoprim, 10 sulfonamides, 11 cyanogenics, and 13 aldehydes) and 79 binary mixtures (type 1 mixtures: obtained from various combinations of the series 1 compounds) against the marine bacterium Photobacterium phosphoreum. In addition, they collected freshwater toxicity data (pEC50) for 20 pure compounds (series 2 compounds: 10 urea herbicides and 10 triazines) and 20 binary mixtures (type 2 mixtures: obtained from various combinations of the series 2 compounds) against the freshwater microalga Selenastrum capricornutum. They employed the negative logarithm of the molar median effective concentration (pEC50) as the QSAR end point and various 2D structural descriptors as the independent variables. The authors calculated the mixture descriptors using the weighted-descriptor approach. To compare the structural features important for toxicity in mixtures and pure compounds, two additional QSAR models were developed from the series 1 and series 2 compounds. All models were developed by the partial least squares (PLS) regression method. The authors claimed good predictive ability for the developed models and justified the claim with acceptable validation results (mixture 1: Q2F1 = 0.82; mixture 2: Q2F1 = 0.82). In this study, various structural features associated with mixture and pure chemical toxicity were identified. It was found that structural features significant for pure chemicals do not necessarily have a strong impact on the toxicity of mixtures.
The authors also pointed out that bulkier mixture components can raise mixture toxicity in freshwater conditions, whereas multiple-bonded nitrogen and sp2-hybridized carbon in the mixture components can reduce mixture toxicity in the marine environment. The models can therefore be helpful for the prediction and risk assessment of aquatic toxicity, and the
knowledge of the important structural features can help in designing less toxic chemicals.
5.5 Case Study 5: Estimation of Mixture Toxicity of Heavy Metals
Zou et al. applied QSAR to the risk assessment of heavy metal-containing mixtures [69]. The authors experimentally evaluated the toxicity of the persistent heavy metals Cu2+, Co2+, Fe3+, Zn2+, and Cr3+, and also determined the toxicity of the corresponding binary mixtures against Photobacterium phosphoreum. The authors made two distinct observations. First, the toxicity order of the individual heavy metals against the marine bacterium (P. phosphoreum) was Zn2+ > Cu2+ > Co2+ > Cr3+ > Fe3+: zinc(II) was the most toxic of the five (log EC50 = 4.75), whereas iron(III) showed the least toxicity (log EC50 = 3.64). Second, various interactions (synergism and antagonism) were observed in addition to the additive effects of the binary mixtures of these metals. The authors developed a partial least squares (PLS) regression model for the prediction of the mixture toxicity of heavy metals, employing ion characteristic parameters as the independent variables. Acceptable R2 and Q2 values (R2 = 0.75; Q2 = 0.649) indicate a good correlation between the toxicity end point (log EC50) and the ion characteristic parameters. From the developed model, it was found that the toxicity of heavy metal mixtures is affected by (ΔIP)mix (change in ionization potential), log(KOH)mix (first hydrolysis constant), and logKf,mix (formation constant). Finally, the authors claimed that ion characteristics-based QSAR is an efficient approach for toxicity prediction and proposed it as a supplementary tool for the mixture risk assessment of heavy metals.
5.6 Case Study 6: Application of QSTR Model for Toxicity Assessment of Deep Eutectic Solvents
Halder et al. employed machine learning-based QSTR modeling for the risk assessment of deep eutectic solvents (DESs) against Aliivibrio fischeri [70]. The authors used the toxicity data of 72 DESs and their constituents to identify the structural features associated with DES-mediated toxicity. Mixture descriptors were obtained by the "sum and absolute difference of weighted (by molar fraction) descriptors" strategy, according to which the composition of the mixture components, together with their molar fractions, determines the properties of the mixture. The authors used Eqs. 15 and 16 to calculate the mixture descriptors for the DESs:

D_pmix = i1 D1 + i2 D2    (15)

D_nmix = |i1 D1 − i2 D2|    (16)

Here, D_pmix and D_nmix are the mixture descriptors; D1 and D2 are the descriptor values of mixture components 1 and 2, respectively; and i1 and i2 are the mole fractions of mixture components 1 and 2, respectively.
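Equations 15 and 16 translate directly into code. The sketch below uses toy descriptor values and an illustrative 1:2 binary composition; `des_mixture_descriptors` is our own helper, not from the cited study:

```python
# Sum and absolute-difference weighted-descriptor strategy (Eqs. 15-16)
# for a binary deep eutectic solvent: i1, i2 are the mole fractions and
# d1, d2 the descriptor values of the two components.

def des_mixture_descriptors(i1, d1, i2, d2):
    d_pmix = i1 * d1 + i2 * d2       # Eq. 15: weighted sum
    d_nmix = abs(i1 * d1 - i2 * d2)  # Eq. 16: absolute weighted difference
    return d_pmix, d_nmix

# A 1:2 binary DES -> mole fractions 1/3 and 2/3 (descriptor values are toy)
dp, dn = des_mixture_descriptors(1 / 3, 3.0, 2 / 3, 1.5)
print(round(dp, 3), round(dn, 3))  # 2.0 0.0
```

The difference descriptor D_nmix vanishes when the two weighted contributions coincide, so it carries information complementary to the simple weighted sum.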
The authors employed "points out," "mixtures out," "compounds out," and "everything out" strategies for dataset division and model validation. The nonlinear QSTR models were developed with three machine learning techniques: support vector machine (SVM), random forest (RF), and multilayer-perceptron artificial neural network (MLP-ANN). The authors also developed several linear QSTR models to identify the structural features associated with DES toxicity. They suggested that the number of nitrogen atoms, the topological distance between oxygen atoms, van der Waals surface area-related topological descriptors, the dipole moment, the molar refractivity, and the molecular mass of the DES components may affect the toxicity of DESs.
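Two of these splitting strategies can be sketched with toy binary mixtures labeled by component letters (the helper names and data are ours): "mixtures out" holds out whole mixtures, while "compounds out" removes every mixture containing a held-out component, giving a stricter test of extrapolation to unseen chemicals:

```python
# Toy illustration of "mixtures out" vs. "compounds out" validation splits
# for a mixture dataset: entries sharing a held-out mixture, or any held-out
# component compound, must leave the training set together.

mixtures = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "D"), ("C", "D")]

def mixtures_out(data, held_out):
    """Hold out whole mixtures (all their data points)."""
    train = [m for m in data if m not in held_out]
    test = [m for m in data if m in held_out]
    return train, test

def compounds_out(data, held_out_compounds):
    """Hold out every mixture containing any held-out compound."""
    held = set(held_out_compounds)
    train = [m for m in data if not set(m) & held]
    test = [m for m in data if set(m) & held]
    return train, test

print(mixtures_out(mixtures, [("A", "B")]))
print(compounds_out(mixtures, ["D"]))
```

Holding out compound "D" removes two mixtures at once, which is exactly why "compounds out" estimates predictivity for genuinely new chemicals more honestly than a random ("points out") split.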
6
Future Prospects of Computational Modeling for Mixture Risk Assessment
The toxicity pattern of chemical mixtures is entirely different from that of single chemicals, and interactions play a crucial role in the complex action of mixtures. Although this has long been known, the concept of mixture risk assessment evolved much later than single chemical risk assessment. Over time, the environment has become a chemical soup flooded with multicomponent chemical mixtures, and it is clear that mixture toxicity prediction is now a crucial requirement for environmental safety and human risk assessment. Regulatory authorities are also paying attention to mixture toxicity data generation. Various agencies, such as the National Institute of Environmental Health Sciences (NIEHS), the National Institute for Occupational Safety and Health (NIOSH), and the National Toxicology Program (NTP), have initiated research activities on exposure to mixtures, identification of environmentally relevant mixtures, and development of biomarkers [71]. The scarcity of mixture toxicity data is a serious concern; for better risk assessment of chemical mixtures, new databases must be developed, with data obtained from various species, using different experimental protocols, and for a broad range of environmental mixtures. Generating large amounts of data by experimentation is time-consuming, and animal experimentation may be unethical, so the application of computational tools to fill the existing data gaps is indispensable in the present situation. The use of artificial intelligence can be a good option for mixture risk assessment, and expert systems can be developed based on the various computational approaches discussed. For the preliminary data generation and validation of expert systems, good collaboration between experimental toxicologists and computational modelers is necessary.
Artificial intelligence tools such as expert systems should be broadly applicable and properly validated; otherwise, the predictions may be erroneous. The applicability domain of the developed
models should be checked properly because the toxicity of all compounds or mixtures cannot be predicted by a single model.
7
Conclusion
The existence of environmental chemical mixtures is a well-known fact, and mixture toxicity assessment is therefore a growing research area. Previously, the regulatory agencies paid more attention to single chemical toxicity studies, but the situation is changing with time. Various scientific groups, together with the regulatory agencies, have come up with various ideas for mixture risk assessment; however, a general and comprehensive framework for the toxicity testing of mixtures is still pending. The conventional approaches are costly and time-consuming and rest on several assumptions; thus, they are not always reliable. Toxicologists are working on mixture toxicity assessment, but a huge data gap remains in this domain, and the various interactions among mixture components make the assessment more critical and challenging. Keeping in mind the importance of mixture risk assessment and its associated complexities, we have discussed various conventional and new methods for the toxicity prediction of mixtures in this chapter. We have illustrated the limitations of the conventional approaches and the importance of incorporating computational techniques into mixture risk assessment, and we have demonstrated various computational techniques along with successful applications of in silico tools as case studies. Knowledge of the mixture components, their concentration ratios, and their detailed modes of action is essential for the in silico study and prediction of mixture toxicity. In addition, the use of artificial intelligence is now essential for fast data generation and data gap filling in this particular domain. Finally, collaborative studies between computational modelers and experimental toxicologists can ease toxicity prediction and can help in prescribing comprehensive guidelines for mixture risk assessment.
Acknowledgments

M.C. thanks the All India Council for Technical Education (AICTE), New Delhi, for financial assistance in the form of a National Doctoral Fellowship (NDF). K.R. thanks SERB, New Delhi, for financial assistance under the MATRICS scheme (MTR/2019/000008).
Computational Modeling of Mixture Toxicity
Mainak Chatterjee and Kunal Roy
Chapter 23

In Silico Methods for Environmental Risk Assessment: Principles, Tiered Approaches, Applications, and Future Perspectives

Maria Chiara Astuto, Matteo R. Di Nicola, José V. Tarazona, A. Rortais, Yann Devos, A. K. Djien Liem, George E. N. Kass, Maria Bastaki, Reinhilde Schoonjans, Angelo Maggiore, Sandrine Charles, Aude Ratier, Christelle Lopes, Ophelia Gestin, Tobin Robinson, Antony Williams, Nynke Kramer, Edoardo Carnesecchi, and Jean-Lou C. M. Dorne

Abstract

This chapter aims to introduce the reader to the basic principles of environmental risk assessment of chemicals and highlights the usefulness of tiered approaches within weight-of-evidence approaches in relation to problem formulation, i.e., data availability and time and resource availability. In silico models are then introduced; these include quantitative structure–activity relationship (QSAR) models, which support filling data gaps when no chemical property or ecotoxicological data are available. In addition, biologically based models can be applied in more data-rich situations; these include generic or species-specific models such as toxicokinetic–toxicodynamic models, dynamic energy budget models, physiologically based models, models for ecosystem hazard assessment (i.e., species sensitivity distributions), and ultimately models for landscape assessment (i.e., landscape-based modeling approaches). Throughout this chapter, particular attention is given to practical examples supporting the application of such in silico models in real-world settings. Future perspectives are discussed to address environmental risk assessment in a more holistic manner, particularly for relevant complex questions such as the risk assessment of multiple stressors and the development of harmonized approaches to ultimately quantify the relative contribution and impact of single chemicals, multiple chemicals, and multiple stressors on living organisms.
Key words Environmental risk assessment, Quantitative structure–activity relationship models, Physiologically based models, Dynamic energy budget models, Landscape modeling
Maria Chiara Astuto and Matteo R. Di Nicola contributed equally with all other contributors. Emilio Benfenati (ed.), In Silico Methods for Predicting Drug Toxicity, Methods in Molecular Biology, vol. 2425, https://doi.org/10.1007/978-1-0716-1960-5_23, © The Author(s), under exclusive license to Springer Science+Business Media, LLC, part of Springer Nature 2022
1 Principles of Environmental Risk Assessment: A Snapshot

In the modern world the environment is exposed to a multitude of chemicals of anthropogenic and natural origin from different sources [1]. Such man-made and natural chemicals include regulated products (e.g., pesticides, feed additives, pharmaceuticals) and contaminants that may alter the delicate equilibrium of environmental compartments and actors, in addition to other stressors such as climate change and biological stressors (fungi, viruses, bacteria, etc.) [2]. Environmental risk assessment (ERA) provides methodologies to determine the risk(s) posed by chemicals to the environment by combining information on hazard and exposure. The purpose of ERA is to evaluate the likelihood and extent of harmful effects resulting from exposure to a chemical for different organisms (e.g., animals, microbes, or plants) and at different levels of biological organization (e.g., single cells, individuals, populations, ecosystems, or landscapes). ERA has extensive data requirements and entails multidisciplinary efforts across toxicology, chemistry, biology, physiology, geology, engineering, statistics, and mathematical modeling. Together with relevant stakeholders (e.g., researchers, NGOs, and industry), a wide range of regulatory and advisory bodies worldwide perform ERA, particularly in the context of pre-marketing assessment of regulated substances (e.g., pesticides and feed additives) as well as post-market assessment and monitoring of chemicals (e.g., regulated products, contaminants, or naturally occurring chemicals), while also contributing to research activities.
Examples are the Food and Agriculture Organization (FAO), the World Health Organization (WHO), the Organisation for Economic Co-operation and Development (OECD), the United States Environmental Protection Agency (US-EPA), the Food and Drug Administration (FDA), the European Medicines Agency (EMA), the European Chemicals Agency (ECHA), and the European Food Safety Authority (EFSA). In Europe, ERA plays a key role in achieving the vision and goals of twenty-first-century Europe and the European Green Deal [3, 4]. Environmental protection and sustainability are among the main objectives of this strategy, and ERA helps to inform decision making in this regard. For instance, the role of EFSA is to independently assess and provide scientific advice to risk managers on possible risks that plant protection products, feed additives, and genetically modified organisms may pose to the environment, before their regulatory approval and marketing authorization in the European Union. While carrying out risk assessments, EFSA also contributes to advancing regulatory science and risk assessment. In this context, harmonized ERA schemes have recently been developed and proposed by EFSA with regard to: accounting for biodiversity and ecosystem services to define protection goals [5],
coverage of endangered species as nontarget organisms in single-stressor ERA [6], and assessing temporal and spatial recovery of nontarget organisms from the effects of potential stressors in ERA [7]. In a similar manner to human health risk assessment (HRA), the process of ERA is generally composed of the following steps: (1) problem formulation, (2) hazard and exposure assessment, and (3) risk characterization. The US-EPA defines problem formulation as "a systematic planning step to identify the major factors to be considered in a particular assessment in relation to preliminary hypotheses with regard to hazard assessment (i.e., likelihood and severity of adverse effects which might occur or have occurred) and exposure assessment (i.e., likelihood and significance of exposure)" [8]. Given the diversity of possible assessment endpoints and exposure pathways, it is fundamental that problem formulation is performed at the very beginning of the process, to clearly define the ecological entity to be evaluated and to plan the direction of the assessment. Problem formulation is followed by the first step of the risk analysis proper: hazard and exposure assessment. The hazard assessment is composed of hazard identification and characterization and consists of determining the potential of a stressor to cause adverse effects (e.g., due to its intrinsic toxicity and/or its environmental fate properties) and under what circumstances. The exposure assessment evaluates the potential exposure routes and scenarios, and the extent and frequency of exposure. Using the context developed during the problem formulation and analysis step, the risk characterization results in the formulation of a statement of risk and associated uncertainties [9]. For each of these steps, tiered approaches are applied depending on the assessment question(s), resources, and data available, to produce a fit-for-purpose risk assessment that answers the questions defined in the problem formulation.
In other words, tiering is a process that allows for the planning of a risk assessment. The tier applied depends on the data situation: data-poor scenarios call for simple and conservative approaches when time and resources are limited (e.g., a low tier for an emerging contaminant), whereas in more complex situations, when data become available, more accurate and realistic approaches can be applied (e.g., a high tier for the re-evaluation of a regulated product). Overall, when the risk characterization provides evidence that the protection goals defined in the problem formulation are reached, i.e., sufficient protection for the organism(s), the assessment can be concluded. Alternatively, when the risk characterization indicates that sufficient protection cannot be achieved, higher tiers may be applied or risk management may be considered (e.g., risk mitigation measures). In the context of hazard identification and characterization for a regulated product (e.g., plant protection products, feed additives) or environmental contaminants (e.g., heavy metals, persistent organic pollutants), the
first tier may involve the use of ecotoxicity studies from standardized laboratory experiments with relevant test species to extrapolate reference points/points of departure, such as the no-observed-effect concentration (NOEC), median lethal dose (LD50), or median lethal concentration (LC50) for specific exposure time(s). Most often, the most sensitive (i.e., lowest) biologically relevant value is used to derive a predicted no-effect concentration (PNEC) by applying an uncertainty factor (UF). A standard UF value of 100 is generally selected, but further refinement can be performed depending on data availability. Exposure metrics are then compared to such reference points to determine risk. For higher tiers, results are evaluated at a more complex level of biological organization (e.g., field trials, whole ecosystem) [10–14]. Over the last few years, many research efforts have been brought together to integrate a mechanistic understanding of chemical exposure and toxicity through the use of predictive in silico models as part of batteries of New Approach Methodologies (NAMs). NAMs is a broad term that includes any conceptual framework and modern toxicological method that serves as an alternative (replacement, reduction, or refinement) to animal testing, aimed at moving toward a mechanistic understanding of toxicity while responding to the needs stipulated in the European Chemicals Strategy for Sustainability as part of the European Green Deal [4]. Such approaches include the use of conceptual frameworks such as the mode of action (MoA) and adverse outcome pathway (AOP), as well as in silico and biologically based models including (quantitative) structure–activity relationships ((Q)SARs), read-across methods, high-content screening (HCS), high-content analysis (HCA), physiologically based (PB) models (such as PB-TK and PB-TK-TD models), dynamic energy budget (DEB) models, and OMICS technologies (e.g., transcriptomics, proteomics, and metabolomics) [15].
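The first-tier calculation described above, taking the lowest relevant endpoint, dividing by an uncertainty factor, and comparing exposure against the resulting PNEC, can be sketched as follows. The endpoint values, exposure estimate, and default UF of 100 are illustrative, not from any real data set.

```python
def derive_pnec(endpoints_mg_per_l, uncertainty_factor=100.0):
    """Tier-1 PNEC: most sensitive (lowest) endpoint divided by an
    uncertainty factor (UF); the standard default UF of 100 is assumed."""
    return min(endpoints_mg_per_l.values()) / uncertainty_factor

def risk_quotient(exposure_mg_per_l, pnec_mg_per_l):
    """Risk quotient (RQ) = exposure / PNEC; RQ < 1 indicates the
    exposure metric lies below the predicted no-effect level."""
    return exposure_mg_per_l / pnec_mg_per_l

# Illustrative base set of endpoints (mg/L) for three standard test species
base_set = {"fish_lc50": 4.1, "daphnia_ec50": 1.2, "algae_ec50": 8.5}
pnec = derive_pnec(base_set)     # 1.2 / 100 = 0.012 mg/L
rq = risk_quotient(0.003, pnec)  # 0.25 -> below 1 at this tier
```

If the RQ exceeded 1 at this tier, the assessment would either move to a higher tier (refined data, possibly a smaller UF) or trigger risk management, as described in the preceding paragraphs.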
Such NAMs are particularly relevant to ERA, since in vivo testing of all potential chemicals on all species is not possible and the reduction of animal testing for ethical, economic, and practical reasons is increasingly needed. In ERA, the definition of key processes that are critical for chemical toxicity has made it possible to identify where NAM development is particularly required at the different levels of biological organization defined in the AOP framework (individual, population, ecosystem), ultimately integrated into landscape modeling to provide a systems-based approach. For given individual and multiple chemical exposures, the processes driving exposure and toxicity can be summarized as follows [16]:
- Environmental fate, whereby the "bioavailability" of the chemical in the environment (i.e., speciation, binding, and transport) may modulate the external exposure to the chemical.
- Toxicokinetics (TK, i.e., what the body does to a chemical). Chemicals enter an organism via different routes (e.g., oral uptake, dermal contact, or inhalation); after uptake, events in the body are described by pharmaco- or toxicokinetic (PK, TK) processes divided into absorption, distribution, biotransformation, and excretion, known as ADME processes. Within ADME, metabolism, disposition, and excretion are driven by a range of phase I and phase II enzymes, transporters, and excretion routes, respectively, in key organs (e.g., liver, intestine, kidney, gills).
- Toxicodynamics (TD, i.e., what the chemical does to the body). TD processes reflect the expression of toxicity mediated via interactions between the chemical and its target (e.g., receptors, enzymes, etc.).
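At the simplest level, the TK (ADME) processes in the list above are often approximated by a one-compartment model with first-order uptake and elimination. The sketch below uses invented rate constants purely for illustration.

```python
import math

def one_compartment_tk(c_water, k_in, k_out, t):
    """Internal concentration C_int(t) for a constant external exposure:
        dC_int/dt = k_in * C_water - k_out * C_int
    which, with C_int(0) = 0, integrates to
        C_int(t) = (k_in / k_out) * C_water * (1 - exp(-k_out * t))."""
    return (k_in / k_out) * c_water * (1.0 - math.exp(-k_out * t))

# Illustrative parameters: uptake 0.8 L/kg/d, elimination 0.2 /d,
# constant water concentration 1.0 mg/L
c_day7 = one_compartment_tk(c_water=1.0, k_in=0.8, k_out=0.2, t=7.0)
c_steady = one_compartment_tk(c_water=1.0, k_in=0.8, k_out=0.2, t=1e6)
# The steady state approaches (k_in / k_out) * C_water, i.e., the
# bioconcentration factor times the water concentration
```

TK-TD models such as those introduced below couple this kind of kinetic description to a damage/effect (toxicodynamic) module.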
Understanding such processes has allowed the development of biologically based models, such as TK-TD models, dynamic energy budget (DEB) models, physiologically based kinetic (PB-K) models, and physiologically based kinetic–dynamic models, which can be used to predict adverse effects of chemical exposures at the individual and population levels and allow comparisons between species sensitivities [17, 18]. Their practical applicability and integration in chemical risk assessment for single and multiple chemicals is currently being implemented by regulatory agencies and international organizations worldwide [13, 15, 19, 20]. Overall, this short introduction aims to provide an overview of the principles of ERA for chemicals and to set the scene for this chapter. Subheading 2 highlights the principles of Weight-of-Evidence and tiered approaches in ERA. Subheading 3 provides examples of toxicological databases as sources of ecotoxicological data (Subheading 3.1) and modern in silico tools to predict chemical toxicity in single-species, data-poor situations (Subheading 3.2), such as the Eco-Threshold of Toxicological Concern (Subheading 3.2.1), read-across methodologies (Subheading 3.2.2), and QSAR models in ERA (Subheading 3.3). Subheading 4 focuses on biologically based models, namely toxicokinetic–toxicodynamic models (Subheading 4.1), dynamic energy budget models (Subheading 4.2), and physiologically based kinetic models (Subheading 4.3). Subheading 5 provides an overview of in silico strategies to address chemical toxicity at the whole-ecosystem level, such as species sensitivity distributions. Subheading 6 highlights tools available for landscape modeling in ERA. Finally, Subheading 7 provides future perspectives on moving toward a systems-based approach in environmental risk assessment.
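As a brief preview of the ecosystem-level strategy mentioned above (species sensitivity distributions, Subheading 5): an SSD is commonly fitted as a log-normal distribution over per-species toxicity endpoints, and the hazardous concentration for 5% of species (HC5) is read from its fifth percentile. A minimal sketch, with invented NOEC values:

```python
import math
from statistics import NormalDist, mean, stdev

def hc5_lognormal(toxicity_values):
    """Fit a log-normal SSD (i.e., a normal distribution on log10
    endpoints) and return the HC5, the 5th percentile of the fit."""
    logs = [math.log10(v) for v in toxicity_values]
    dist = NormalDist(mean(logs), stdev(logs))
    return 10 ** dist.inv_cdf(0.05)

# Invented chronic NOECs (mg/L) for eight species
noecs = [0.3, 0.8, 1.1, 2.5, 4.0, 6.3, 9.0, 20.0]
hc5 = hc5_lognormal(noecs)  # concentration expected to protect ~95% of species
```

In regulatory practice the HC5 is usually derived with dedicated software and a further assessment factor, and the goodness of fit of the distribution is checked; the sketch above only shows the core calculation.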
2 Weight-of-Evidence and Tiered Principles in Environmental Risk Assessment

In food and feed safety, many types of risk assessment are described within the specific legislation of regulatory silos (e.g., regulated products) and are dealt with in guidance documents. In the context of EFSA's work, problem formulation is most often outlined in the Terms of Reference (ToR) provided by risk managers from the European Commission. The ToR provides the context of the problem formulation for a specific risk assessment to be carried out, which is often further refined through a dialogue between risk managers and risk assessors to clarify the scope of the requested assessment [21]. The question to be addressed is then described in an "Interpretation of the Terms of Reference" section. In the context of chemical risk assessment, problem formulation generates a conceptual model describing exposure sources and pathways; the species, populations, and life stages exposed; the endpoints to be considered; and their relationships [22]. In the design of such a conceptual model, assessors need to take the regulatory context into account to provide fit-for-purpose advice. Overall, the outcome of the problem formulation provides an analysis plan which describes how to proceed with the risk assessment and which may include aspects such as specification of the study design, methodology, data requirements, and uncertainty analysis. The establishment of the specific question(s) provides the fundamental prerequisite for a Weight-of-Evidence assessment, which can be described as a process in which all of the evidence considered relevant for a risk assessment is evaluated and weighted [23].
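The three-step WoE process described below (assemble, weigh, integrate), together with the reliability/relevance concepts, can be caricatured as a toy aggregation. The line-of-evidence values, 0–1 scores, and the multiplicative weighting scheme are entirely hypothetical; real WoE assessments follow expert-driven guidance rather than a fixed formula.

```python
# Toy Weight-of-Evidence aggregation: each line of evidence carries a
# predicted value plus hypothetical 0-1 scores for reliability and
# relevance, and the integrated estimate is a score-weighted average.

def integrate_evidence(lines):
    """lines: list of (value, reliability, relevance) tuples.
    Weight each line by reliability * relevance, then average."""
    weights = [rel * relv for _, rel, relv in lines]
    total = sum(weights)
    return sum(v * w for (v, _, _), w in zip(lines, weights)) / total

lines_of_evidence = [
    (0.8, 0.9, 1.0),  # e.g., a QSAR prediction judged highly reliable
    (1.4, 0.6, 0.8),  # e.g., read-across from a similar compound
]
estimate = integrate_evidence(lines_of_evidence)
```

The point of the sketch is structural: evidence is first assembled into comparable lines, each line is weighed on explicit criteria, and only then are the lines integrated into a single answer to the assessment question.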
Recently, EFSA published its Guidance on the use of the Weight-of-Evidence (WoE) approach in scientific assessments, which defines a harmonized but flexible framework for considering and documenting the approaches used to weigh the evidence in answering the main question of different types of scientific assessments, including risk assessment. The WoE assessment is presented as a three-step process: (1) assembling the evidence into lines of evidence of similar type, (2) weighing the evidence, and (3) integrating the evidence. Reliability, relevance, and consistency are the three basic concepts to be considered during the WoE process [10]. WoE can be performed at any point of the risk assessment where evidence integration is needed (e.g., hazard assessment, exposure assessment, and/or risk characterization), in both data-poor and data-rich situations. Figure 1 illustrates the WoE framework in the context of risk assessment in both data-rich and data-poor scenarios. Tiering principles allow for the application of simple and conservative approaches (low tier), and of more complex and realistic approaches (higher tier) when needed or when data availability allows. The appropriate application of tiering means
Fig. 1 Illustration of tiers applied in data-rich and data-poor scenarios that may characterize the risk assessment process, i.e., hazard assessment (identification and characterization), exposure assessment, and risk characterization. (1) The problem formulation step anticipates the risk assessment process and ends with the establishment of the specific question(s) that direct the risk assessment; these question(s) represent the fundamental prerequisite for a WoE. (2) WoE can be performed at any point of the risk assessment where evidence integration is needed (e.g., hazard assessment, exposure assessment, and/or risk characterization)
that the higher the tier applied the lower the uncertainty of the risk assessment results and implies that an assessment can be concluded as soon as there is clarity on sufficient protection for the exposed population defined in the problem formulation. Overall, tiers can be qualified either as low, intermediate (data poor scenarios), or high (data rich scenarios) or using numerical attributes (0, 1, 2, 3, etc.) where tier 0 typically describes a data poor situation, requiring conservative assumptions and tier 1, 2 and 3 reflect scenarios where data availability increase and allow for more accurate and realistic assessments. Such higher tiers are associated with a better characterisation of uncertainties and eventually decreasing uncertainty and also, whenever possible, making use of probabilistic approaches. The tier applied is not necessarily symmetrical across the exposure and hazard assessment steps because the availability of exposure and effect data may vary, and because of regulatory requirements under which the assessment is being performed. This needs to be addressed in the uncertainty analysis. Alternatively, progression to risk management or a higher tier when clarity on sufficient protection is lacking is also possible (e.g. risk mitigation measures). Because of the vast variety of problem formulations, approaches and data, the tiers applied are not prescribed nor does the tiering principle imply that assessments necessarily proceed from lower to higher tiers. For example, in many assessments of regulated
596
Maria Chiara Astuto et al.
Table 1 Tabular format for summarizing a Weight of Evidence (WoE) assessment for an emerging contaminant in the context of an environmental risk assessment for a data poor scenario [10]

Assessment question: Hazard identification of an emerging contaminant: prediction of ecotoxicological properties for test species of relevance to environmental risk assessment in a data poor situation

Assemble the evidence
– Select evidence: No toxicity data available; use read-across from already-tested similar compounds and in silico tools (QSAR) to predict toxicity
– Lines of evidence: Identify lines of evidence for potential effect(s) (e.g., fish acute toxicity, Daphnia magna LC50, or bee acute toxicity) from the presence of a structural alert, QSAR models, or read-across from similar compounds

Weigh the evidence
– Methods: Evaluate the reliability, relevance, and consistency of the QSAR models
– Results: Toxicity value for each line of evidence, with associated assessment of reliability (e.g., through the applicability domain of the models used)

Integrate the evidence
– Methods: If the estimates from the different models converge, the level of uncertainty regarding the toxic property(ies) can be evaluated (e.g., through the applicability domain of the models used). If the estimates deviate, further modeling for the toxic property(ies) could be undertaken to evaluate whether the results can be improved
– Results: Integrate the toxicity values obtained by different nontesting methods and related uncertainties to respond to the assessment question
products, the tier(s) applied will be predetermined by the available data, the problem formulation, and/or the regulatory context. In the context of ERA, hazard assessment of a chemical may involve data poor and data rich scenarios. A typical data poor scenario is represented by an emerging contaminant (Table 1). In case of a lack of comprehensive toxicological data, empirical data can be combined with estimates generated by in silico tools. These computational approaches (e.g., QSAR and/or read-across) are often useful for filling data gaps by predicting the toxicity of a substance or identifying similar substances with a known toxicological profile that may share the same toxicological behavior. Preferably, the estimation should be performed using multiple tools providing suites of QSAR models (e.g., the VEGA Hub (https://www.vegahub.eu/) or the US-EPA's Toxicity Estimation Software Tool (TEST)), and the results compared to provide robust support to the assessment conclusion. In simple cases, there is a general consensus between the models, whereas in more complex situations conflicts may arise between models. In this case, the use of sophisticated in silico tools (e.g., the OECD QSAR Toolbox or ToxRead) may help the expert in identifying toxicity alerts in the target compound and/or the presence of the alerts in other similar compounds, as well as providing a measure of the uncertainty
In Silico Methods for Environmental Risk Assessment: Principles, Tiered. . .
associated with them [10]. In most instances, summarized WoE results are also supported by expert judgment, and methods to integrate the results from different in silico models have recently been described [24]. In contrast to emerging contaminants, the ERA of regulated products must fulfil the data requirements listed in the relevant regulations and guidance documents, which need to be duly presented to regulatory agencies for regulatory acceptance. The data requirements include physicochemical and (eco)toxicological studies (e.g., acute, subchronic, chronic, developmental, and reproductive toxicity), and the extent of the information required varies depending on the regulatory framework. Further complexity is added for substances already identified as of concern and/or under reevaluation. This case represents a classical example of a data rich scenario [10]. A practical example of WoE assessment is provided below for the risk assessment of plant protection products for aquatic organisms in edge-of-field surface waters [14]. In this case, the problem formulation defines the need to derive a regulatory acceptable concentration (RAC) in fish for a plant protection product. In this context, the WoE assessment may be needed at any point of the risk assessment, and here particularly for the hazard assessment. As described in the EFSA Guidance on WoE, the evidence collected should be subjected to a three-step process: assembly, weighing, and integration [10]. The first step, assembling the evidence, requires the identification and selection of relevant ecotoxicological hazard evidence to be included in the WoE assessment (e.g., toxicological information submitted by the applicant in the dossier), and proceeds by grouping hazard evidence into lines of evidence (e.g., grouping all toxicological data retrieved per endpoint, such as acute or chronic toxicity values, with the respective standard assessment factors).
It should be noted that data gaps may occur also in the case of regulated products (e.g., for metabolites and/or impurities). In this case, nontesting methods can be used to fill data gaps instead of generating new experimental data, following an approach similar to that described above for data poor situations (Table 1) [14]. The second step requires weighing the evidence obtained through assessments of reliability, relevance, and consistency, selecting a WoE assessment method, and reporting the conclusions in a transparent manner using a summary tabular form. The approach is generally characterized by the selection of the lowest acute or chronic toxicity value and the application of the appropriate safety factor (e.g., for acute toxicity values a standard assessment factor of 100 is applied), refining with expert judgment when appropriate. The third and last step of the WoE process is the integration of the evidence to formulate the results of the WoE assessment, based on the direction defined in the problem formulation. The results should transparently report the assessment process and conclusions reached, including possible uncertainties
and data gaps identified. For example, the use of a plant protection product such as an insecticide reaching edge-of-field surface waters requires an ERA including an assessment of ecotoxicity in aquatic organisms. In this context, the assessor defines a Regulatory Acceptable Concentration (RAC) for acute and chronic scenarios. Once the WoE is completed, the hazard assessment is generally concluded. The results obtained are then used in the risk characterization step, where actual environmental exposure levels are compared with this guidance value.
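The deterministic core of this example can be sketched in a few lines. Note that the toxicity values, assessment factor, and exposure level below are purely illustrative and not taken from any dossier.

```python
# Hedged sketch of the deterministic core of the WoE example above:
# derive a regulatory acceptable concentration (RAC) from the lowest
# toxicity value and a standard assessment factor, then compare it with
# an exposure estimate. All numbers are invented for illustration.

def derive_rac(toxicity_values_mg_l, assessment_factor=100):
    """RAC = lowest endpoint value divided by the assessment factor."""
    return min(toxicity_values_mg_l) / assessment_factor

def risk_ratio(exposure_mg_l, rac_mg_l):
    """Exposure/RAC ratio; a value above 1 indicates potential concern."""
    return exposure_mg_l / rac_mg_l

# Illustrative acute LC50 values (mg/L), e.g., fish, Daphnia, algae
acute_values = [1.2, 0.8, 3.5]
rac = derive_rac(acute_values, assessment_factor=100)  # 0.8/100 = 0.008 mg/L
ratio = risk_ratio(0.002, rac)                         # 0.002/0.008 = 0.25
print(f"RAC = {rac} mg/L, exposure/RAC = {ratio}")
```

In a real assessment, the assessment factor depends on the endpoint type (acute vs. chronic) and the applicable guidance, and the comparison is refined with expert judgment.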
3 Toxicological Databases and In Silico Tools for Individual Species in Data Poor Situations

Toxicological databases support ERA in data rich situations, and in silico tools support ERA in data poor situations. Both are valuable for a range of applications, species, and levels of biological organization (e.g., molecular, cellular, species) and are described below [15, 25]. In data poor situations, in silico tools are available to predict physicochemical properties, environmental fate, TK, and toxicity properties to fill data gaps in a given species. Examples and applications of such databases and tools are provided below and include the Eco-Threshold of Toxicological Concern (EcoTTC), read-across methodologies, and quantitative structure–activity relationship (QSAR) models.

3.1 Examples of Available Toxicological Databases to Access Ecotoxicological Data
A number of toxicological databases providing open access to ecotoxicological data on key terrestrial and aquatic test species, as well as nontarget species, are available worldwide. Some examples, together with links to access them, are provided below.

• The US-EPA CompTox Chemicals Dashboard (https://comptox.epa.gov/dashboard) provides the largest database in the world with regard to experimental and predicted physicochemical properties, fate and transport, hazard (TK and TD), exposure, and in vitro bioactivity data for a wide range of species and over ~900,000 substances [26]. Specifically for the ERA area, the US-EPA has published the ECOTOXicology Knowledgebase (ECOTOX) (https://cfpub.epa.gov/ecotox/), a comprehensive and publicly available knowledgebase providing single-chemical environmental toxicity data on aquatic life, terrestrial plants, and wildlife for thousands of chemicals. The ECOTOX database integrates three previously independent databases, namely AQUIRE, PHYTOTOX, and TERRETOX, which include toxicity data derived from the peer-reviewed literature for aquatic life, terrestrial plants, and terrestrial wildlife, respectively. ECOTOX has been developed and is
In Silico Methods for Environmental Risk Assessment: Principles, Tiered. . .
599
maintained by the Center for Computational Toxicology and Exposure's (CCTE's) Great Lakes Toxicology and Ecology Division (GLTED).

• OpenFoodTox is EFSA's open source hazard database, providing data for over 5000 chemicals (https://zenodo.org/record/1252752#.Xd_5G-hKjD4). These hazard data cover food additives and flavorings, food contact materials, vitamins, minerals, and novel foods, as well as feed additives, contaminants of anthropogenic origin, chemicals resulting from food and/or feed processing, and natural toxins produced in food and feed by plants, fungi, and other microorganisms. For ERA, most of the database provides data on key test species for active substances used in plant protection products in terrestrial and aquatic organisms (e.g., fish, bees, insects, earthworms, algae) [27, 28].
• ECHA (European Chemicals Agency) offers a large database with information submitted by industry in the form of REACH dossiers. ECHA's database (https://echa.europa.eu/information-on-chemicals) provides a unique source of information on the chemicals manufactured and imported in Europe, covering their hazardous properties, classification and labeling, and information on how to use them safely. To cite but a few, it includes data from the following:
  – Cross-regulation activities and data from previous legislations
  – Registration, Evaluation, Authorisation and Restriction of Chemicals Regulation (REACH)
  – Classification, Labelling and Packaging (CLP)
  – Biocidal Products Regulation
  – Prior Informed Consent Regulation
  – Persistent Organic Pollutants Regulation
  – Chemical Agents Directive and Carcinogens and Mutagens Directive (OELs)
• The OECD eChemPortal (https://www.echemportal.org/echemportal/index.action) is the global portal to information on chemical substances, allowing simultaneous searching of reports and datasets by chemical name, number, chemical property, and Globally Harmonized System of Classification and Labelling of Chemicals (GHS) classification. Direct links to collections of chemical hazard and risk information at national, regional, and international levels can also be accessed.

• The OECD QSAR Toolbox (https://www.oecd.org/chemicalsafety/risk-assessment/oecd-qsar-toolbox.htm) is a software application intended to be used by governments, the chemical industry, and other stakeholders in filling gaps in
(eco)toxicity data needed for assessing the hazards of chemicals. It has been developed in close collaboration with the European Chemicals Agency. The Toolbox incorporates information and tools from various sources into a logical workflow; crucial to this workflow is the grouping of chemicals into chemical categories.

3.2 In Silico Tools to Predict Chemical Toxicity for Single Species
In the absence of comprehensive experimental physicochemical and toxicological information (data poor situations), a number of tools are available to predict toxicity. These include the Threshold of Toxicological Concern (TTC), read-across methodologies, and quantitative structure–activity relationship (QSAR) models. In silico models are valuable tools to simulate biological processes for a range of applications, species, and levels of biological organization (e.g., cellular, molecular, species, population, ecosystem, and landscape) [15, 25].
3.2.1 Eco-Threshold of Toxicological Concern
The Threshold of Toxicological Concern (TTC) approach is a well-known decision-making tool that has been used for a number of years for risk assessment purposes, mostly in the area of human health risk assessment; the reader is referred to the opinions of the EFSA Scientific Committee on the TTC approach [29, 30]. The approach is based on a historical toxicological database built on the empirical evidence that, for noncancer effects, there are thresholds of exposure below which toxicity is not expected to occur. The TTC approach has also been used to provide thresholds of exposure for genotoxic carcinogens for which the likelihood of tumor incidence is considered very low. Recently, attempts have been made to develop an EcoTTC and to integrate the human and environmental risk assessment procedures in a "One Health" perspective [31]. The EcoTTC allows for the derivation of environmental threshold concentrations of "no concern." An online tool, EnviroTox, has recently been developed to derive ecotoxicological threshold concentrations, based on the underlying EnviroTox database. This database contains more than 91,000 curated records of acute and chronic aquatic toxicity data, covering more than 3900 unique CAS numbers and 1560 aquatic species. The dataset was developed from 12 original databases, with the largest data source being the ECOTOX Knowledgebase of the US-EPA (https://cfpub.epa.gov/ecotox/) [32, 33]. In addition to the ecotoxicological data, the database also includes other chemical information such as physicochemical properties and MOA classification according to several MOA frameworks, which have been previously compared and discussed [34, 35].
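The EnviroTox derivation itself is more elaborate, but the underlying idea can be sketched as taking a low percentile of a distribution fitted to curated toxicity data and applying an assessment factor. The toxicity values, percentile, and factor below are invented for illustration only and do not reproduce the EnviroTox algorithm.

```python
# Rough, illustrative sketch of an EcoTTC-style derivation (NOT the
# EnviroTox algorithm): fit a log-normal distribution to curated acute
# toxicity values and take a low percentile, divided by an assessment
# factor, as a threshold of no concern. All numbers are invented.
import math
import statistics

def eco_ttc(toxicity_mg_l, percentile=0.05, assessment_factor=10):
    logs = [math.log10(x) for x in toxicity_mg_l]
    mu, sigma = statistics.mean(logs), statistics.stdev(logs)
    # Low percentile of the fitted normal in log10 space (z ~ -1.645 for 5%)
    z = statistics.NormalDist().inv_cdf(percentile)
    hc = 10 ** (mu + z * sigma)  # hazardous concentration, e.g., an HC05
    return hc / assessment_factor

acute_lc50 = [0.5, 1.2, 3.0, 8.5, 22.0, 60.0]  # mg/L, illustrative only
print(f"EcoTTC ~ {eco_ttc(acute_lc50):.4f} mg/L")
```

The real derivation additionally filters data by MOA class, trophic level, and data quality before any statistical extrapolation.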
3.2.2 Read-Across Methods
Read-across is a technique for predicting endpoint information for one substance (the target substance) by using data for the same endpoint from one or more other substances (the source substance(s)). This approach needs to be considered on an "endpoint-by-endpoint"
basis, due to the complexity of the endpoints (e.g., key parameters, biological targets). In the case of a high number of substances, the "category approach" is used [36]. A wide range of in silico tools available for grouping chemicals and applying read-across are compared and discussed in this book, particularly for pharmaceuticals. Being a nonformalized approach, read-across requires considerable expert knowledge and judgment. For example, numerical values indicating the degree of similarity with respect to the target compound are provided by some QSAR tools (e.g., VEGA, TEST, GenRA) and can be used to facilitate the read-across assessment. Statistical QSAR models based on the k-nearest neighbor (kNN) algorithm have also been introduced to promote automation and increase the reproducibility of the read-across process [24, 37]. Comprehensive guidance on grouping and read-across has been published by ECHA [36] and the OECD [38], and in recent reviews [24]. In the ERA area, applications of read-across approaches are still scarce, and more work is required through the use of structured databases for given taxa to pinpoint the reliability of the predictions these techniques can provide. It is worth noting that a 2013 review article explored the use of read-across methods for the ERA of pharmaceuticals [39].

3.3 Quantitative Structure–Activity Relationship Models
When experimental data on chemicals are lacking (i.e., data poor scenarios), quantitative structure–activity relationship (QSAR) models are generally employed to predict the physicochemical (e.g., lipophilicity, flammability), biological (e.g., toxicity), and environmental fate properties of compounds from their chemical structure [40]. The term "quantitative" indicates that the molecular descriptors are quantifiable on a continuous scale, thus allowing for a quantitative relationship with substance toxicity [41]. The predicted toxicity is usually derived from the molecular descriptors of chemicals, which include inherent physicochemical properties such as atomic composition, structure, substructures, hydrophobicity, surface area, charge, and molecular volume [40]. QSAR models can be divided into classification-based models, in which a relationship between the descriptors and the categorical values of the response variable(s) is established, and regression-based models, in which the aim is to find a relationship between the descriptors and the quantitative values of the response variable(s) [42]. In practice, the former allow for the discrimination of chemicals into categories (e.g., toxic vs. nontoxic, active vs. nonactive) according to a threshold set a priori, whereas the latter provide users with quantitative predictions of a given endpoint (e.g., an LD50 value) for the chemical of concern [10, 43]. Similarly, (Q)SARs can also be used directly or stepwise to predict the mode of action (MoA) of chemicals [35, 44–46]. Examples of QSAR modeling platforms include the following.
• The OECD QSAR Toolbox is an open source software application (available at https://www.oecd.org/chemicalsafety/risk-assessment/oecd-qsar-toolbox.htm) intended to be used by governments, the chemical industry, and other stakeholders in filling gaps in (eco)toxicity data needed for assessing the hazards of chemicals. The OECD QSAR Toolbox allows the use of QSAR models and read-across methods using a category approach for the prediction of toxic properties of chemicals, supports testing through intelligent testing strategies when data gaps are identified, and supports sustainable development and green chemistry through toxicity predictions before chemical production. Stakeholders using the Toolbox include governments, the chemical industry, and national and international agencies. The OECD QSAR Toolbox 4.4 includes databases from numerous governmental organizations and scientific advisory bodies [47, 48].
• The VEGA Hub (www.vegahub.eu) provides a platform with a large number of QSAR models and read-across tools predicting a wide range of biological, environmental, and physicochemical properties of chemicals in relation to their structure. Relevant examples for ERA include QSAR models for bees and daphnia (acute toxicity predictions) and fish (e.g., predictions of bioaccumulation factors and acute toxicity). Further details on VEGA Hub applications in human health risk assessment are provided in a number of chapters in this book.
• The OPEn structure–activity/property Relationship App (OPERA) is a suite of open source QSAR models originally developed by the US-EPA and now supported by the National Institute of Environmental Health Sciences (NIEHS) (https://github.com/kmansouri/OPERA). OPERA provides predictions for physicochemical properties, environmental fate parameters, and toxicity endpoints, as well as ecotoxicity parameters including the fish bioconcentration factor, the soil adsorption coefficient, and biodegradability. All OPERA models have been built on curated data and QSAR-ready chemical structures standardized using an open-source workflow. Overall, the datasets used to build OPERA models range from 150 chemicals for biodegradability half-life to 14,050 chemicals for logKow, with an average of 3222 chemicals across all endpoints. The collection of chemical structure information and associated experimental data for QSAR/quantitative structure–property relationship (QSPR) modeling is supported by an increasing number of public databases with large amounts of data, and the suite of models continues to expand over time; it includes a well-validated pKa prediction model (https://jcheminf.biomedcentral.com/articles/10.1186/s13321-019-0384-1). All models are open source and have been applied to hundreds of thousands of chemicals on the US-EPA CompTox Chemicals Dashboard [26].
• The US-EPA's Toxicity Estimation Software Tool (TEST) (https://www.epa.gov/chemical-research/toxicity-estimation-software-tool-test) is available both as a downloadable software package (https://www.epa.gov/chemical-research/toxicity-estimation-software-tool-test#sysrequirements) and via a web interface (for single-chemical predictions) in the CompTox Chemicals Dashboard (https://comptox.epa.gov/dashboard/predictions/index). The prediction models are built using a number of different QSAR methodological approaches (https://www.epa.gov/sites/production/files/2016-05/documents/600r16058.pdf) and provide predictions for physicochemical properties and (eco)toxicological endpoints, including 96-h fathead minnow and 48-h Daphnia magna LC50 values as well as the Tetrahymena pyriformis 50% growth inhibition concentration. TEST has been supported for over a decade and, at the time of writing, new training datasets are being assembled to rebuild the models.
Table 2 provides examples of QSAR models developed from EFSA's OpenFoodTox database. All of these models are available and operational to predict hazard properties in given species from the open source platform VEGA (https://www.vegahub.eu/) [28]. In a regulatory context, particularly under the REACH legislation [58], QSARs are mainly used to predict missing information (e.g., LC50 values) for the nontarget species (fish, bees, earthworms, etc.) relevant to the hazard assessment. Hence, QSARs can provide evidence for read-across strategies or add credence to experimental results of unknown or limited validity (e.g., missing Good Laboratory Practice compliance) in risk assessment steps. Other examples include the development and validation of an innovative in silico tool that integrates MoA information into a QSAR model using open source data (e.g., EFSA OpenFoodTox); the model can be applied to (1) predict acute contact toxicity (LD50) and (2) profile the MoA of pesticides [44]. Overall, since the understanding of single Adverse Outcome Pathways (AOPs) and their networks is unfolding in both HHRA and ERA [59, 60], the integration of mechanistic information on the molecular initiating event (MIE), key events (KE), and key event relationships (KER) is foreseen as a mid-term perspective as part of a quantitative Adverse Outcome Pathway (qAOP) approach [43, 61]. AOPs can provide the mechanistic understanding needed to integrate data from NAMs such as in silico and in vitro methods [43]. Similarly, computational models such as QSARs can be employed to predict key MIEs relevant to AOPs such as hepatic steatosis [62]. However, these have not been developed to date for ERA, and additional in vivo TK data (e.g., bioavailability, half-life) on tested chemicals would help scientists to better capture the internal (effective) dose
Table 2 Examples of open source QSAR models available in the VEGA platform developed using OpenFoodTox and other datasets (modified from Dorne et al. [28])

| Target species | Predicted toxicological endpoint | Model type | Application domain |
| Bobwhite quail (Colinus virginianus) [49] | Acute oral toxicity (LD50) | Classification- and regression-based (CORAL) | Plant protection products |
| Earthworm (Eisenia fetida) [50] | Acute contact toxicity (LC50) | Regression-based | Plant protection products |
| Earthworm (Eisenia fetida) [50, 51] | Acute oral toxicity (LC50) | Classification-based | Plant protection products |
| Earthworm (Eisenia fetida) [52] | Acute oral toxicity (LC50) | Classification-based (CORAL) | Plant protection products |
| Honeybee (Apis mellifera) [53] | Acute contact toxicity (LD50) | Classification-based | Plant protection products |
| Honeybee (Apis mellifera) [44] | Acute contact toxicity (LD50) | Classification- and regression-based | Plant protection products |
| Honeybee (Apis mellifera) [44] | Acute contact toxicity (LD50) and MoA | Regression-based MoA profiler | Plant protection products |
| Honeybee (Apis mellifera) [54] | Acute contact toxicity (LD50); binary mixture synergism/nonsynergism; mixture LD50 as toxic units | Classification- and regression-based (CORAL) | Plant protection products and veterinary drugs |
| Rainbow trout (Oncorhynchus mykiss) [55] | Acute toxicity (LC50) | Regression-based (CORAL) | Plant protection products |
| Rat (Rattus norvegicus) [56] | Subchronic toxicity (NOAEL) | Regression-based (CORAL) | All classes |
| Rat (Rattus norvegicus) [57] | Subchronic toxicity (NOAEL and LOAEL) | Regression-based (CORAL) | All classes |
triggering either the MIE or the KE [61]. TK data can be derived from (high-throughput) in vitro methods as well as in silico ones (e.g., QSAR, PBK models), while integrating different lines of evidence in read-across or grouping approaches, before being assessed in an overall WoE [24, 43]. Such in silico tools will allow risk assessors to further depict mechanistic understanding of chemical toxicity for single chemicals as well as for combined toxicity of
multiple chemicals, including interaction effects (i.e., potentiation, synergism, antagonism) arising from combined exposure [13, 63].
4 Toxicokinetic–Toxicodynamic Models, Dynamic Energy Budget Models, and Physiologically Based Models: An Overview of Available Models

4.1 Toxicokinetic–Toxicodynamic Models
According to the current risk assessment guidelines, a Tier 1 risk assessment requires collecting data from standard tests, either acute or chronic toxicity studies, specifically designed for each species–compound combination. These toxicity tests follow experimental plans designed to allow a precise estimate of LC50 or EC50 values: the range of concentrations is chosen in advance (based on preliminary studies, the scientific literature, or expert knowledge), as are the number of replicates and the experiment duration according to the life cycle of the species under study, unless fixed by the guidance itself. LC50 and EC50 values are used by regulators as toxicity indices to support their daily work when evaluating market authorization dossiers for active substances. Thus, precise estimates of LC50 or EC50 values are crucial for ERA. LC50 and EC50 values are classically estimated by fitting concentration-response (or concentration-effect) models to observed data (e.g., log-logistic, probit, or Weibull models). Several tools already exist to obtain LC50 or EC50 estimates, although some may need to be updated according to the very latest research developments, in particular those regarding the quantification of uncertainties [64]. Even if requested by regulatory bodies, it is well known that concentration-response or concentration-effect models are essentially "black boxes" without any biological consideration behind them, providing time-dependent outputs, especially LC50 or EC50 values. In addition, the physiological mechanisms involved in the responses or effects at the individual level can be neither described nor understood. Consequently, concentration-response or concentration-effect models fail to account for time-variable exposure, preventing their use in the prediction of potential responses or effects for regulators, who often need to investigate environmentally realistic exposure scenarios.
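As an illustration of this classical approach, a two-parameter log-logistic model can be fitted to survival data to estimate an LC50. The data below are simulated, and the grid search stands in for the maximum-likelihood or Bayesian estimation used by real tools.

```python
# Minimal sketch of estimating an LC50 by fitting a two-parameter
# log-logistic concentration-response model, f(c) = 1 / (1 + (c/LC50)^b),
# to observed survival fractions. Data are simulated; the grid-search fit
# is a stand-in for proper maximum-likelihood/Bayesian estimation.

def log_logistic(conc, lc50, slope):
    return 1.0 / (1.0 + (conc / lc50) ** slope)

def fit_lc50(concs, survival, lc50_grid, slope_grid):
    """Least-squares fit over a coarse parameter grid (illustrative only)."""
    best = None
    for lc50 in lc50_grid:
        for b in slope_grid:
            sse = sum((s - log_logistic(c, lc50, b)) ** 2
                      for c, s in zip(concs, survival))
            if best is None or sse < best[0]:
                best = (sse, lc50, b)
    return best[1], best[2]

concs = [0.1, 0.3, 1.0, 3.0, 10.0]         # exposure concentrations (mg/L)
survival = [0.98, 0.90, 0.55, 0.15, 0.02]  # observed surviving fractions
lc50, slope = fit_lc50(concs, survival,
                       lc50_grid=[x / 10 for x in range(5, 30)],
                       slope_grid=[x / 10 for x in range(5, 40)])
print(f"Estimated LC50 ~ {lc50} mg/L, slope ~ {slope}")
```

Such a fit returns only a point estimate at a fixed observation time; the uncertainty quantification and time dependence discussed in the text are exactly what this type of model cannot provide.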
This difficulty can be overcome by the use of toxicokinetic–toxicodynamic (TKTD) models, which can be viewed as "simplified" boxes, i.e., aiming to simplify biological complexity [65] compared with the very refined models used at the molecular level, for example. Moreover, TKTD models are today recognized as fit-for-purpose for the regulatory risk assessment of pesticides on aquatic organisms [66]. TKTD models can potentially help to decipher relationships between life-history traits, and allow the investigation of the effects of toxicants over the entire life cycle, for example with DEBtox toxicity models derived from the Dynamic Energy
Fig. 2 General classification of toxicokinetic–toxicodynamic (TKTD) models
Budget (DEB) theory [67]. TKTD models for survival, known as General Unified Threshold models of Survival (GUTS) [68], can be parameterized even from standard, single-species toxicity tests [69–71]. TKTD models can also provide relevant information at the individual level when extrapolating beyond the boundaries of tested conditions in terms of exposure, or assist experimenters in designing refined experiments, for example in the perspective of Tier-2C ERA [14]. Parameters involved in TKTD models remain species- and compound-specific but, contrary to the abovementioned concentration-response or effect models, can usually be interpreted in a physical or biological manner. In essence, TKTD models are most often considered and used as mechanistic models. Several TKTD models already exist (Fig. 2), some of them already considered ready-to-use for ERA, namely GUTS models in their reduced forms and the TKTD model for Lemna species [66]. The toxicokinetic part of a TKTD model (also called the TK model) aims to relate the exposure concentration, which may be constant or variable over time, to what happens within exposed organisms (denoted by "internal damage" in Fig. 2). In more general terms, the TK part describes how the organism bioaccumulates the chemical substance, accounting for absorption, distribution, metabolization, and elimination (ADME) processes. Mathematically speaking, all TK models are compartment models.
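As a minimal illustration, the simplest such compartment model, with a single uptake rate (ku) and elimination rate (ke), can be simulated with an Euler scheme. All parameter values below are arbitrary and for illustration only.

```python
# Minimal one-compartment TK model: dC/dt = ku * Cw - ke * C, solved with
# a simple Euler scheme for an exposure phase followed by depuration.
# All parameter values are arbitrary, for illustration.

def simulate_tk(cw, ku, ke, t_exposure, t_total, dt=0.01):
    """Internal concentration over time: uptake while exposed, then depuration."""
    c, series = 0.0, []
    for i in range(round(t_total / dt)):
        t = i * dt
        exposure = cw if t < t_exposure else 0.0  # depuration after t_exposure
        c += (ku * exposure - ke * c) * dt
        series.append((t, c))
    return series

# 5 days of exposure followed by 16 days of depuration
series = simulate_tk(cw=0.001, ku=0.5, ke=0.5, t_exposure=5, t_total=21)
peak = max(c for _, c in series)
print(f"Peak internal concentration ~ {peak:.6f} (steady state = ku*cw/ke = 0.001)")
```

The internal concentration rises toward the steady state ku*Cw/ke during exposure and decays exponentially at rate ke during depuration; adding further uptake routes or metabolite compartments adds one such differential equation (and rate constants) per route or metabolite.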
Compartments are usually chosen according to the available data, along a complexity gradient: (1) the simplest TK models, with only one compartment corresponding to the whole organism [72–76]; (2) TK models of intermediate complexity, in which compartments correspond to chosen internal entities (e.g., blood, the digestive system, a set of organs); and (3) the most refined TK models, known as physiologically based TK (PBTK) models, in which compartments are associated with specific target tissues or organs [18, 77]. Nevertheless, even among the simplest TK models (one compartment), there are several versions, which differ according to the biological processes they account for. Basically, if only one uptake route and excretion are considered, the TK model will comprise only two parameters of interest (the well-known uptake (ku) and elimination (ke) rates) (Fig. 3), while considering several exposure routes (e.g., via water, pore water, sediment, and/or food) as well as several elimination processes (e.g., dilution by
Fig. 3 Simulation examples of TK models representing the internal concentration over time for an exposure of 5 days followed by a depuration phase of 16 days (separated by the vertical black dotted line): (a) the simplest one-compartment model with only two parameters; (b) a one-compartment model with exposure via water and food, and biotransformation of the parent compound into two phase I metabolites. Arbitrary exposure concentrations: expwater = expfood = 0.001 (arbitrary unit). Arbitrary parameter values: kee = kuw = kuf = kem1 = kum1 = kum2 = 0.5, kem2 = 0.2 (arbitrary unit). This figure was generated with the prediction tool of MOSAIC_bioacc, freely available from https://scharles-univlyon1.shinyapps.io/bioacc-predict/
growth and metabolization) may lead to much more detailed TK models [76, 78] with more kinetic parameters associated with them (Fig. 3). The toxicodynamic part of a TKTD model (also called the TD model) aims to relate the internal concentration causing damage to responses or effects at the individual level, which can be experimentally observed or measured. In more general terms, the TD part describes the impact of the chemical substance on the organism. Mathematically speaking, TD models differ according to the endpoints under consideration (Fig. 2). There are specific TD parts for plants, for which the growth rate is measured (e.g., the Lemna model [79]), as well as for organisms for which growth and reproduction are measured, for example Daphnia magna [79–84], Venturia canescens [85], or Chlamydomonas reinhardtii [93]. For the survival endpoint, ERA may strongly benefit from the well-established theoretical framework of the GUTS models. These models were first used intensively for research purposes (e.g., [71, 86–91]) before being fully recognized as ready-to-use for ERA [66, 92]. In terms of application to ERA, only reduced GUTS (GUTS-RED) models are considered, as they can be applied to standard survival data collected following regulatory guidelines. In fact, there are two versions of GUTS-
608
Maria Chiara Astuto et al.
RED models: one assuming a stochastic death process, all individuals being identical with the same toxicity threshold from which the compound effect linearly increases (namely the GUTS-REDSD model); the second version assumes an individual tolerance so that all individuals are different with their own toxicity threshold. If this threshold is exceeded, individuals will immediately die. In addition, the individual toxicity thresholds are distributed according to an appropriate probability law (leading to the GUTS-REDIT model). Regarding ERA, both GUTS-RED models may first equivalently be used for calibration before deciding which one can be validated for further use in predictions. Indeed, EFSA [66] proposed an approach to follow when investigating the effect of pesticides on aquatic living organisms. This workflow consists of three steps: 1. Calibration: fit both GUTS-RED models to standard survival data to get parameter estimates with their associated uncertainties; 2. Validation: simulate the number of survivors with both GUTSRED models under pulsed exposure and propagate the previous estimation uncertainties. The simulated numbers are then compared to data newly collected under the same pulsed profiles and quantitative validation criteria are calculated; 3. Prediction: choose one of the previously validated GUTS-RED models, namely the most conservative one in terms of predicting risk for the species–compound combination under consideration. Then, simulate the survival probability under an environmentally realistic exposure profile such as FOCUS (FOrum for the Co-ordination of pesticide fate models and their Use) profiles for example, and get an estimate of the multiplication factor leading to an additional x% of survival at the end of exposure (namely, the x% Lethal Profile). Figure 4 illustrates how results are comparable between GUTS-REDSD and GUTS-RED-IT models on data for Lymnaea stagnalis exposed to cadmium [70]. 
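The GUTS-RED-SD variant described above (scaled damage accrual, a common threshold, a killing rate, and a background hazard) can be sketched with a simple Euler scheme. The pulsed exposure profile and all parameter values below are illustrative assumptions, not the published Lymnaea stagnalis estimates, and the parameter names merely mirror the Fig. 4 notation (kd, hb, bw, zw).

```python
import math

def guts_red_sd_survival(c_profile, dt, kd, hb, bw, zw):
    """GUTS-RED-SD: scaled damage D follows dD/dt = kd*(Cw - D);
    hazard rate h = hb + bw*max(0, D - zw); survival S = exp(-cumulative
    hazard).  c_profile gives the exposure concentration per time step."""
    damage, cum_hazard, survival = 0.0, 0.0, [1.0]
    for cw in c_profile:
        damage += dt * kd * (cw - damage)                    # Euler step for damage
        cum_hazard += dt * (hb + bw * max(0.0, damage - zw))  # accumulate hazard
        survival.append(math.exp(-cum_hazard))
    return survival

# illustrative pulsed exposure: 2 days at 100, 2 days clean, 2 days at 100
dt = 0.01
profile = [100.0] * 200 + [0.0] * 200 + [100.0] * 200
surv = guts_red_sd_survival(profile, dt, kd=0.3, hb=0.001, bw=0.01, zw=50.0)
```

Running the same pulsed profile through both GUTS-RED variants, as in step 2 of the EFSA workflow, is how the validation criteria are computed in practice; production tools such as openGUTS or the "morse" R package handle the calibration and uncertainty propagation.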
In Silico Methods for Environmental Risk Assessment: Principles, Tiered...

The use of some of the TKTD models may be recommended or required by regulatory bodies such as EFSA. In addition, user-friendly and ready-to-use tools are available, without the need for users to invest in the underlying mathematical, statistical, and technical aspects. Nevertheless, there are no convenient tools for plant TKTD and DEBtox models that ecotoxicologists or regulators could easily use without being programmers. Even if recognized as fit-for-purpose for ERA, TKTD models are still mainly used in the academic world and require further development before they can become standard practice in ERA. Regarding GUTS models, however, there are two main tools available for use in a regulatory context, that is, to fulfil the regulatory workflow proposed by EFSA [66] for Tier-2C ERA: openGUTS [94] and MOSAIC [95]. The MOSAIC platform has advantages, offering a whole range of tools as a one-stop shop for ERA [96]: all services for dose-response analyses and the estimation of LCx and ECx (Tier-1 ERA, [97–99]); calculation of bioaccumulation metrics such as BCF and estimation of kinetic parameters of TK models [100, 101]; and a complete GUTS workflow and Species Sensitivity Distribution (SSD) analyses, including potentially censored inputs [99, 102]. Figure 5 provides an overview of the features of MOSAIC. TKTD models can also be linked to higher levels of biological organization, particularly to refine the prediction of population dynamics. Since these tools are not yet fully user-friendly for risk assessors, their regulatory use is still hampered. However, their integration in ERA is recognized as unavoidable to gain a better understanding of the ecological relevance of chemicals, particularly with regard to the target and non-target species to protect [104]. Coupling TKTD and population dynamics models can be performed under different frameworks, such as matrix population models [104] or individual-based models [105].

Fig. 4 Example fit of both GUTS-RED models on data for Lymnaea stagnalis exposed to cadmium. Parameter estimates are the following: (a) GUTS-RED-SD model: kd = 0.224 [0.146; 0.394] (toxicokinetic related rate), hb = 1.29e-03 [5.24e-04; 2.54e-03] (background mortality), zw = 95.6 [73.8; 108.0] (common toxicity threshold), bw = 5.98e-04 [4.08e-04; 8.49e-04] (toxicodynamic related rate); (b) GUTS-RED-IT model: kd = 3.73e-02 [2.39e-02; 5.14e-02] (toxicokinetic related rate), hb = 7.35e-04 [1.77e-04; 1.86e-03] (background mortality), mw = 101.0 [77.5; 120.0] (mean toxicity threshold), b = 3.86 [3.03; 4.85] (toxicity threshold deviation from the mean). Please refer to Baudrot et al. [70] for units. Results were obtained with the R package "morse" [89]
Fig. 5 Overview of all MOSAIC features freely available from https://mosaic.univ-lyon1.fr/: dose-response standard analyses for binary, count and quantitative continuous data; here survival data are analyzed, providing an LC50 estimate and its uncertainty limits (left panel); SSD analysis accounting for interval-censored data, with an HC5 estimate and its uncertainty limits (middle-upper panel); probability distribution of bioaccumulation metrics from the fit of a TK model, providing a BCFk estimate and its uncertainty limits (middle-lower panel); GUTS-RED-SD model fitted to data, with parameter estimates and their uncertainty limits (right panel)

4.2 Dynamic Energy Budget Models
The DEB theory captures the quantitative aspects of metabolism at the individual level for organisms of all species, building on the premise that the mechanisms responsible for the organization of metabolism are not species-specific [106, 107]. Such generalization is supported by the universality of physics and the evolution of species, as well as by the existence of widespread empirical biological patterns among organisms [108]. Five essential criteria for any general model of the metabolism of individuals have been defined:

1. Consistency with broader scientific knowledge, such that the model is based on explicit assumptions consistent with thermodynamics, physics, (geo)chemistry, and evolution.

2. Consistency with empirical data, such that the assumptions are consistent with empirical patterns.

3. A life-cycle approach, such that the assumptions cover the full life cycle of the individual, from initiation of development to death.

4. Occam's razor: the general model should be as simple as possible and its predictions testable in practice. This typically involves constraining the maximum complexity substantially, which can be quantified in terms of the number of variables and parameters.
5. Taxon-specific adaptations: restrictions making the model applicable to particular taxa should be consistent with an explicit evolutionary scenario and explicit enough to allow predictions of whether the model will apply to a given species.

DEB theory is explicitly based on the conservation of mass, isotopes, energy, and time, including the inherent degradation of energy associated with all processes. Historically, DEB theory was first developed in 1979 in an ecotoxicology laboratory in Delft (the Netherlands), where the toxicity of chemicals to daphnids and several fish species was being investigated and, in particular, the need to integrate growth and reproduction across taxa in a consistent quantitative framework was identified. Overall, the DEB framework describes how energy and mass from environmental resources are used over time to fuel an organism's life cycle under physiological conditions, including feeding, maintenance, growth, development, and reproduction. The DEB theory resulted in the so-called κ-rule, which in short assumes that the various energetic processes (e.g., assimilation rate and maintenance) depend either on surface area or on body volume. The model also assumes that assimilated products first enter a reserve pool and are then allocated to (1) maintenance, (2) growth, and (3) reproduction [109]. Overall, a DEB model for an individual organism describes the rates at which the organism assimilates and utilizes energy for maintenance, growth, and reproduction, as a function of the organism's state and its environment [106, 110]. The state of the organism is characterized by key life cycle parameters such as age, size, and amount of reserves, and the environment by, e.g., food density and temperature. DEB models of individual organisms are also the basic building blocks for modeling the dynamics of structured populations [111]. Life cycle parameters for well over 2000 species are available in the Add-my-Pet (AmP) collection (https://www.bio.vu.nl/thb/deb/deblab/add_my_pet/), which contains the following:

1. Reference data on the energetics of animal species, from which the parameters of the standard DEB model, as well as those of its different variants, can be estimated.

2. Matlab code for estimating DEB parameters from these data.

3. DEB parameter estimates.

4. Properties of species that are implied by these parameters.

A number of related DEB models exist, with a range of levels of complexity (parameters and state variables) depending on the questions one is interested in, as well as on data availability and quality. The simplest complete DEB model is the standard DEB model, in which an organism consists of one structure and one reserve and feeds on one food source [107]. The standard DEB model assumes that the shape of an organism from a particular species does not change during growth and that the life cycle is defined by three life stages: embryo, juvenile, and adult. The standard DEB model is described by a number of parameters which represent the energetics of an organism and cannot be observed directly, but can be derived from standard life-cycle data, namely (1) time to hatch, (2) body length at birth, (3) weight at birth, (4) growth rate, (5) physical length at puberty, (6) age at puberty, (7) weight at puberty, (8) time to first reproduction, and (9) final body length [17]. Different biological traits can be included in the standard DEB model as extensions, with conservation of the interpretation of parameters and parameter values. Elaborate descriptions of the background of DEB theory and the fundamental mathematical equations can be found in [107, 109, 112, 114, 115]. Figure 6 illustrates a generic DEB population modeling approach using the common frog as an example (i.e., tadpole and adult). Other DEB models of importance in ERA include the DEB-TOX and DEBkiss models, as well as individual-based DEB models (IBM). DEB-TOX models apply the DEB theory to understand toxic effects and were first developed to integrate different endpoints resulting from ecotoxicological testing. The general approach has been adopted by the OECD in the guidance document "Current approaches in the statistical analysis of ecotoxicity data: a guidance to application" [116]. Key differences exist between classical ERA and DEB-TOX modeling approaches: classical ERA uses summary statistics from laboratory studies in standard test species (e.g., EC50 for growth or reproduction (or reproduction rate), LC50 for survival, and NOEC for growth or reproduction), whereas DEB-TOX models take into account TK and TD processes. In practice, an elimination rate (TK) is derived from the observed time course of toxicity (TD).
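The κ-rule allocation described above can be illustrated with a deliberately simplified energy bookkeeping. The linear mobilization rule and all rate constants below are assumptions for illustration only; the full standard DEB model uses surface-area and volume scaling of the fluxes, which is omitted here.

```python
def deb_step(reserve, structure, repro, dt, p_assim,
             kappa=0.8, mobilization=0.5, maint_coeff=0.1, growth_eff=0.8):
    """One Euler step of simplified standard-DEB bookkeeping:
    assimilated energy enters the reserve; mobilized energy p_c is split
    by the kappa-rule between the soma (somatic maintenance first, the
    remainder to growth) and the reproduction branch."""
    p_c = mobilization * reserve              # energy mobilized from reserve
    reserve += dt * (p_assim - p_c)
    p_soma = kappa * p_c                      # kappa fraction to the soma
    p_maint = maint_coeff * structure         # maintenance has priority
    growth_flux = max(0.0, p_soma - p_maint)
    structure += dt * growth_eff * growth_flux
    repro += dt * (1.0 - kappa) * p_c         # (1 - kappa) to reproduction
    return reserve, structure, repro

# grow an individual at constant food for 100 steps
E, V, R = 1.0, 1.0, 0.0
for _ in range(100):
    E, V, R = deb_step(E, V, R, dt=0.1, p_assim=2.0)
```

Even this toy version reproduces the qualitative DEB behavior: with constant assimilation, the reserve approaches a plateau while structure and the reproduction buffer keep increasing.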
For this purpose, three parameters are needed to describe the whole time course of toxic effects, namely: (1) a time-independent toxicological threshold below which no effects occur irrespective of exposure time, the No Effect Concentration (NEC), usually expressed as an environmental concentration (like the incipient LC50); (2) a TK parameter (ke) describing the equilibration between internal and external concentrations, expressed in day^-1; and (3) a TD parameter describing the toxic potency of the compound, either the killing rate (kr), expressed in (mol/L)^-1 day^-1, for survival, or the tolerance concentration (Ct), expressed in mol/L, for sublethal effects. As a rule of thumb, the higher the killing rate and the lower the tolerance concentration, the more potent (toxic) the compound [17]. These DEB-TOX models have recently been applied to assess the time-dependent toxicity of pesticides and veterinary drugs in honey bees, solitary bees, and bumble bees [117].
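How the three DEB-TOX parameters combine for sublethal effects can be sketched as follows: the scaled internal concentration builds up with rate ke, no stress occurs below the NEC, and above it the stress level grows inversely with the tolerance concentration Ct. All numeric values are assumed for illustration.

```python
import math

def scaled_internal_conc(t, c_water, ke):
    """Scaled internal concentration for a one-compartment TK model with
    elimination rate ke (day^-1), starting from a zero body burden."""
    return c_water * (1.0 - math.exp(-ke * t))

def stress_level(c_internal, nec, c_tolerance):
    """DEBtox-style linear-with-threshold stress: no effect below the NEC,
    then increasing inversely with the tolerance concentration."""
    return max(0.0, c_internal - nec) / c_tolerance

# early in exposure the body burden is still below the NEC: no effect;
# near TK equilibrium the stress level is positive
ci_early = scaled_internal_conc(0.1, c_water=2.0, ke=0.5)
ci_late = scaled_internal_conc(30.0, c_water=2.0, ke=0.5)
s_early = stress_level(ci_early, nec=1.0, c_tolerance=5.0)
s_late = stress_level(ci_late, nec=1.0, c_tolerance=5.0)
```

The last comparison in the test below makes the rule of thumb explicit: for the same internal concentration, a lower tolerance concentration yields a higher stress level, i.e., a more potent compound.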
Fig. 6 Schematic representation of DEB population modeling. The main box represents the standard DEB model (arrows = energy fluxes for the process specified by the arrow; boxes = state variables for the individual); the ellipses are the input data; the black boxes are the different models; and the rounded blue boxes are the model output. Modified from Baas et al. [17]
The DEBkiss model excludes reserve dynamics, so that metabolic memory is not included and predictions for species under food limitation become more difficult. An advantage of DEBkiss modeling lies in the derivation of simpler models in terms of the number of state variables; however, this comes at the cost of biological realism and extrapolation potential, as DEBkiss models are more species- and context-specific than the standard DEB model. A comparison between the standard DEB model and the DEBkiss model is illustrated in Fig. 7. Table 3 provides examples of a range of DEB models applied in ecotoxicology and ERA for different taxa (see Baas et al. for full details [17]).

4.3 Physiologically Based Toxicokinetic and Physiologically Based Toxicokinetic–Toxicodynamic Models
Fig. 7 Schematic representations: (a) standard DEB model; (b) DEBkiss model (energetic costs for growth, maintenance and reproduction are covered directly from food uptake, without first being stored as reserve). Arrows = energy fluxes for the process specified by the arrow; boxes = state variables for the individual. Modified from Baas et al. [17]

The WHO has defined TK models as "mathematical descriptions simulating the relationship between external exposure level and chemical concentration in biological matrices over time." TK models take into account the ADME of the administered chemical and its metabolites [118]. The integration of physiological parameters into TK models results in physiologically based TK (PB-TK) models, defined as "a model that estimates the dose to target tissue by taking into account the rate of absorption into the body, distribution and storage in tissues, metabolism and excretion on the basis of interplay among critical physiological, physicochemical and biochemical determinants"; this is the so-called forward dosimetry approach [118]. In contrast, PB-TK models can also be used to estimate the external dose or exposure concentration needed to achieve given target organ concentrations, measured, for example, using biomarkers in humans: the so-called reverse dosimetry approach [15]. Two key approaches are used for the development of TK models:

1. A "bottom-up" approach, including each organ/tissue of the body as a distinct entity, to integrate the interactions of a chemical/drug with all components of the body, allowing for mechanistic insights into the global behavior of the system and for valid extrapolations.

2. A "top-down" approach, which uses simplified models in which tissues are combined together ("lumped tissues") [15].

Overall, PB-TK models are based on a compartmental approach that separates the organism's body into a series of biologically relevant anatomical compartments of defined volumes. The number of compartments varies from model to model, from one-compartment to multicompartment to physiologically based models, depending on data quality, the tissue compartments of interest (i.e., the site of pharmacological or toxicological activity), the purpose of the model, and the physicochemical properties and behavior of the chemical in the organism (lipophilic/hydrophilic) [15]. Recently, reviews of available PB-TK models in ERA and animal health have been published, highlighting that most models are available for fish species and key farm animal species, whereas generic models were rarely available [18, 119].
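The compartmental approach can be illustrated with a minimal flow-limited two-compartment (blood plus liver) model run in the forward-dosimetry direction. Volumes, flows, partition coefficient, and clearance below are arbitrary placeholders, not parameters of any published PB-TK model.

```python
def pbtk_step(a_blood, a_liver, dt, q_liver, v_blood, v_liver,
              partition_liver, cl_hepatic, dose_rate=0.0):
    """One Euler step of a minimal flow-limited two-compartment PB-TK
    model (blood + liver).  q_liver: liver blood flow; partition_liver:
    liver:blood partition coefficient; cl_hepatic: hepatic clearance
    acting on the concentration leaving the liver."""
    c_blood = a_blood / v_blood
    c_venous = (a_liver / v_liver) / partition_liver  # conc. leaving the liver
    a_blood += dt * (dose_rate + q_liver * (c_venous - c_blood))
    a_liver += dt * (q_liver * (c_blood - c_venous) - cl_hepatic * c_venous)
    return a_blood, a_liver

# constant-rate dosing until steady state (forward dosimetry)
ab, al = 0.0, 0.0
for _ in range(5000):
    ab, al = pbtk_step(ab, al, dt=0.01, q_liver=5.0, v_blood=3.0,
                       v_liver=1.0, partition_liver=4.0,
                       cl_hepatic=2.0, dose_rate=1.0)
c_blood_ss = ab / 3.0
c_liver_ss = al / 1.0
```

Reverse dosimetry simply inverts this use: because the steady-state target-organ concentration scales linearly with the dose rate in this model, the external dose needed to reach a measured internal concentration can be solved for directly.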
Table 3 Examples of DEB model applications for a range of organisms and chemicals (modified from Baas et al. [17])

Phylum | Species | Stressor(s) | DEB model(s)
Annelida | Dendrobaena octaedra | Copper | DEBkiss, DEBTox, DEB
Arthropoda | Chironomus riparius | Methiocarb | DEB matrix population
Arthropoda | Tribolium castaneum | PAH | DEB mixture survival
Arthropoda | Folsomia candida | Copper, cadmium, lead, zinc | DEB mixture survival
Arthropoda | Apis mellifera, Bombus terrestris, Osmia bicornis | Pesticides/metals | DEB survival
Bacteria | Pseudomonas aeruginosa | Cadmium | DEB hazard
Chordata | Danio rerio | Uranium, population growth | DEB, DEB IBM
Chordata | Pimephales promelas | Narcotics | DEB mixture survival
Chordata | Merluccius merluccius | PCB accumulation | DEB
Copepoda | Calanus glacialis, C. finmarchicus | PAHs | DEB
Copepoda | Calanus sinicus | Temperature | DEBkiss
Crustacea | Daphnia magna | Food limitation, pesticide mixtures | DEB mixture survival
Mollusca | Mytilus galloprovincialis | ZnO nanoparticles, produced water | DEB
Mollusca | Mytilus californianus | Produced water | DEB
Mollusca | Crassostrea gigas | Harvesting oysters | DEB IBM
Mollusca | Lymnaea stagnalis | Food limitation | DEB, DEBkiss
Mollusca | Macoma balthica | Population dynamics | Lotka DEB
Nematoda | Acrobeloides nanus | Pentachlorobenzene, carbendazim, cadmium | DEB
Nematoda | Caenorhabditis elegans | Uranium, aldicarb, fluoranthene, cadmium | DEB, DEBkiss mixture
Various | Mussels, oysters, earthworms, water fleas, fishes | Narcotics, pesticides, mercury, copper, chlorophenols, toluene, PAHs, tetradifon, pyridine | DEB, DEB survival
Hence, the generic models were developed, calibrated, and validated according to the WHO guidance document and the recent OECD guidance document on the use of PB-TK models in risk assessment. These models were developed for four key fish test species (i.e., stickleback, rainbow trout, zebrafish, and fathead minnow) and key farm animals (cattle, pig, sheep, and chicken) [18, 120, 121]. All models are currently being incorporated in EFSA TKPlate as an open-source platform for PBK modeling. Figure 8 illustrates the structure of generic PB-TK models applicable to many vertebrate species relevant to ERA (rodents, birds, fish, etc.). Finally, PB-TK models are also increasingly developed from IVIVE or quantitative IVIVE (QIVIVE), as highlighted by the HTTK package of the US-EPA (https://cran.r-project.org/web/packages/httk/index.html). The main difficulty in the ERA area is the lack of in vitro models for species of relevance to ERA, let alone the measurement of free and internal cell concentrations in these in vitro systems, which are influenced by abiotic processes (i.e., chemical stability of the compound over time, adsorption to plastic devices, binding with the medium components) or cellular processes (mechanisms of transport across the membranes, biotransformation, bioaccumulation). Recently, an example of QIVIVE modeling has been illustrated for the chicken Gallus gallus domesticus, using in vitro metabolism data and a generic PB-TK model to predict blood and target organ concentrations (e.g., liver, kidney).

Fig. 8 Generic physiologically based toxicokinetic (PB-TK) models for vertebrate species. *The terms in the gray ellipse refer only to fishes

4.3.1 Physiologically Based Toxicokinetic–Toxicodynamic Models
In the ERA area, a number of PB-TK-TD models have been published. Toxicodynamic (TD) models are defined as "mathematical descriptions simulating the relationship between a biologically effective dose and the occurrence of a tissue response over time" [23]. When the TD for a specific compound is known at the target organ or site (e.g., a cell or enzyme), it can be quantitatively linked to the TK parameters predicted by a PB-TK model. In other words, a PB-TK model is combined with dose-response data to obtain a PB-TK-TD model. Historically, biologically based dose-response (BBDR) models were introduced to bring mechanistic information into dose-response assessment, and today PB-TK-TD models are considered the most comprehensive and phenomenologically based models, and thereby the most comprehensive BBDR models [15, 122]. Such PB-TK-TD models are very useful in that they provide highly refined tools able to reduce uncertainty in higher tier risk assessments of single and multiple chemicals, while providing means to estimate internal concentrations and toxicity of chemicals and their metabolites through the integration of population variability into TK and TD. As described for PB-TK models, PB-TK-TD models represent the body as a set of interconnected compartments described by differential equations capturing the ADME of a specific chemical and/or its metabolites, and then connect the internal dose to the dose-response of the adverse dynamic effect (from in vivo and, more recently, in vitro studies) for the compound and/or its metabolites. Recent examples include a PB-TK-TD model to investigate the interaction between melamine and cyanuric acid in rainbow trout [123]; another is the PB-TK-TD modeling of cadmium and lead exposure in adult zebrafish Danio rerio [124]. So far, no generic PB-TK-TD models are available for species of relevance to the environment, since data requirements are high and most efforts have gone into TK-TD modeling and DEB modeling. It can be foreseen that such an integration of physiological data and dose-response modeling for sublethal effects will support the potential integration of PB-TK models into GUTS models and DEB models, the latter integration having the advantage of providing quantitative information for a range of life cycle stages through the Add-my-Pet database (see Subheading 3.3).
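The coupling of a PB-TK model with dose-response data can be sketched with a generic Hill-type TD component applied to a predicted target-organ concentration. The Emax, EC50, and Hill-coefficient values, and the toy concentration time course, are illustrative assumptions only.

```python
def hill_response(conc, emax=1.0, ec50=5.0, n=2.0):
    """Hill-type concentration-response: the TD part linking a predicted
    target-organ concentration (from the PB-TK part) to an effect
    magnitude between 0 and emax."""
    return emax * conc ** n / (ec50 ** n + conc ** n)

# toy time course of predicted liver concentration feeding the TD model
liver_conc = [0.5 * t for t in range(0, 21)]   # rises from 0 to 10
effect = [hill_response(c) for c in liver_conc]
```

In a full PB-TK-TD model, `liver_conc` would come from the coupled ADME differential equations rather than a fixed list, and the TD parameters would be fitted to in vivo or in vitro dose-response data.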
5 In Silico Models to Predict Chemical Toxicity for Multiple Species and the Ecosystem: Species Sensitivity Distributions

In environmental risk assessment, reference points such as the predicted no-effect concentration (PNEC) are derived by applying a safety factor to reference values (no-observed-effect concentration (NOEC), lethal dose (LD50), etc.), and when such toxicity data are available for several species (i.e., data-rich situations), they can be integrated to derive species sensitivity distributions (SSDs) for multiple species or the ecosystem. SSDs are statistical models that describe the interspecies differences in toxicity for a given compound or mixture through a statistical distribution. SSDs can be used to extrapolate hazardous concentrations for 5% of the species (HC5) and derive a PNEC to protect the whole ecosystem [125]. It should be noted that SSD models do not contain any ecological information (e.g., on species interactions), since each species is tested separately for any compound or mixture. Therefore, the selection of relevant test species and a minimum diversity of taxonomic groups and species are crucial in SSD derivation and will determine the final uncertainty [126, 127]. The ecotoxicity data can be experimentally generated or obtained from available databases, such as OpenFoodTox, the ECHA database, the US-EPA Ecotoxicology Knowledgebase, or the OECD eChemPortal (see Subheading 3). The development of an SSD starts with the collection of toxicity data for different species, followed by interspecies ranking, i.e., from the most sensitive (e.g., lowest LC50) to the least sensitive organism. These values are then plotted with the potentially affected fraction on the Y-axis and the log-scale concentration on the X-axis. Once the data have been fitted, the concentration at which 5% of species may be affected is calculated (i.e., the hazardous concentration for 5% of species, HC5). The derived HC5 is then used to calculate a PNEC, which will account for the sensitivity distribution across multiple species.
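The SSD derivation steps just described (collect and rank toxicity values, fit a distribution, read off the HC5, apply a safety factor) can be sketched with a log-normal fit. The LC50 values and the assessment factor of 5 below are illustrative assumptions, not recommended regulatory values.

```python
import math
from statistics import NormalDist, mean, stdev

def fit_ssd_hc5(toxicity_values, protection=0.05):
    """Fit a log-normal SSD to single-species toxicity values (e.g.,
    LC50s) and return the HC5: the concentration at which only
    `protection` (default 5%) of species are expected to be affected."""
    logs = [math.log10(v) for v in toxicity_values]
    dist = NormalDist(mean(logs), stdev(logs))   # log10-normal fit
    return 10.0 ** dist.inv_cdf(protection)      # back-transform quantile

# illustrative LC50s (mg/L) for eight species, most to least sensitive
lc50s = [0.3, 0.8, 1.5, 2.2, 4.0, 7.5, 12.0, 30.0]
hc5 = fit_ssd_hc5(lc50s)
pnec = hc5 / 5.0   # HC5 divided by an (assumed) assessment factor
```

The same fitted distribution, evaluated in the other direction (its cumulative probability at a log environmental concentration), gives the potentially affected fraction used as a toxic-pressure estimate; dedicated tools such as the MOSAIC SSD module additionally handle censored data and report uncertainty limits.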
Moreover, SSD models can also be used to predict the toxic pressure (i.e., potential to harm) of one or more stressors at a given exposure level, based on predicted or measured environmental concentrations [125]. Figure 9 provides a generic example of empirical and theoretical SSDs generated with the MOSAIC tool (see Subheading 4). Recently, Posthuma et al. [128] developed an approach for SSD derivation based on a collection of chronic and acute ecotoxicity data for 12,386 compounds, to quantify the likelihood that chemical pollution in surface water exceeds critical effect levels, in agreement with the European Water Framework Directive requirements. As a result, the toxic pressure resulting from exposure to mixtures was quantified for over 22,000 water bodies in Europe and 1760 chemicals, demonstrating that mixture exposures are likely to exceed a negligible effect level and to increase species loss. This example clearly demonstrates the value of SSDs for water quality assessment and for the prioritization of risk management measures.
Fig. 9 Generic example of SSD estimation generated with the MOSAIC tool (https://mosaic.univ-lyon1.fr/ssd). Each dot represents a toxicity value for a specific species
Recently, SSD applications in environmental risk assessment have been increasing, and a number of examples are available in the literature, including SSDs for combined exposure to pesticide mixtures [129] as well as for the risk assessment of innovative (bio)technological products such as transgenic crops [130] or engineered nanomaterials (ENM) [131]. For example, Chen et al. [131] describe SSD derivation for metal-based ENMs using a set of metallic ENMs grouped by structural characteristics (e.g., surface coating, size, shape) and experimental conditions, and over 1800 ecotoxicity endpoints from the peer-reviewed literature. Further, accounting for uncertainty in SSD analyses has recently been assessed using 50% effective rates (ER50) by Charles et al. [99]. The analysis describes the impact of including the uncertainty in ER50 values on the derivation of the 5% Hazard Rate (HR5) for nontarget plants. As a first step, information on ecotoxicity endpoints (i.e., survival, emergence, shoot dry weight) for nontarget plants was gathered from seven standard greenhouse studies. A model was then fitted to the experimental toxicity test data for each species using a Bayesian framework to obtain a posterior probability distribution for the ER50, and censoring criteria were established to account for the uncertainty of the ER50. As a result, this exercise confirmed that at least six distinct ER50 values are required as input to the SSD analysis to ensure reliable estimates. In addition, the authors highlighted that propagating the uncertainty from the ER50 estimates and including censored data in SSD analyses often leads to smaller point estimates of the HR5, which provides more conservative estimates for an environmental risk assessment. Finally, the lack of robust and comprehensive ecotoxicity data remains the main limitation for the derivation and use of SSD models. Recently, efforts have been made to explore whether SSD parameters can be estimated from physical and chemical molecular descriptors through a QSAR-based approach [132]. This attempt is based on the assumption that differences in ecotoxicity across chemicals are attributable to their different physicochemical properties, and that these can be quantitatively described by a QSAR model operating at the level of species assemblages. The proposed model shows that the concept of QSAR-based SSDs supports screening-level assessments of the potential ecotoxicity of compounds in data-poor scenarios. Predictive toxicology can also assist in the evaluation of cross-species differences in the impact of contaminants and identify appropriate test species for follow-up (eco)toxicity tests. A combination of in silico tools, such as the US-EPA Sequence Alignment to Predict Across Species Susceptibility tool (SeqAPASS; https://seqapass.epa.gov/seqapass/) and molecular dynamics, can be used to assess the interspecies conservation of specific protein targets and predict the likelihood of chemical susceptibility. For example, Chen et al. [133] described an approach for predicting cross-species differences in binding affinity between per- and polyfluoroalkyl substances (PFAS) and the liver fatty acid-binding protein (LFABP), a key determinant of PFAS bioaccumulation, to evaluate its interspecies conservation.
A similar analysis has been proposed to explore the conservation of endocrine targets across vertebrate species, using SeqAPASS to define the taxonomic domain of applicability of mammalian-based high-throughput screening (HTS) results and to identify suitable organisms for toxicity studies [134]. Such approaches represent a valuable tool to support the screening and prioritization of chemicals for further evaluation, the extrapolation of empirical toxicity data, the selection of suitable species for testing, and/or the evaluation of the interspecies relevance of adverse outcome pathways [135].
6 Landscape Modeling in Environmental Risk Assessment

For decades, modeling and field studies have constituted the standard higher tier options in environmental risk assessments of chemicals. Although both options are offered in the relevant guidance documents, in the regulatory context field studies have usually been preferred. The main reasons were not a lack of scientific interest and development [136], but mostly shortcomings, for regulatory applications, of models largely developed by academic researchers [137], or a lack of practical guidance [138]. The situation has changed, and with current technological development, modeling represents the future of environmental risk assessment [139]. In environmental risk assessment, landscape modeling covers the exposure component, through emission and fate models, and the ecological component of hazard and risk characterization, using population models. In addition, both exposure and hazard components can be integrated in holistic landscape risk assessment models, with the aim of generating risk characterization outcomes informative for environmental impact assessments [140]. The research initiatives in this area are appealing, while the actual implementations in the regulatory context are much more limited, although they are considered promising tools by risk assessment agencies such as EFSA or the US-EPA. Each of the abovementioned categories has different origins, aims, and evolution. Examples of challenges and opportunities for each category are presented below, focusing on their application to regulatory assessments.

6.1 Landscape Exposure Assessment Models
While landscape considerations and georeferenced models have been key in retrospective assessments [141, 142], most regulatory prospective risk assessment scenarios for chemicals have been developed as local generic scenarios. Also, however, for regulatory purposes, regional level scenarios including landscape considerations are essential for a proper assessment of certain uses, and for addressing emissions at river basin or regional levels, and this need triggered specific developments in different regulated areas. Examples cover catchment level models based on Geographical Information System (GIS) for assessing the environmental risk of chemicals in consumer products [143], or the FOCUS Working Group on Landscape and Mitigation Factors in Ecological Risk Assessment for pesticide risk assessment (https://esdac.jrc.ec.europa.eu/pro jects/focus-landscape-mitigation). The availability of environmental data and data management technology is extending this opportunity. Advanced processing tools for satellite and aerial images, georeferenced structured databases, and the associated data management technology have been used for calibrating and updating generic exposure scenarios. Current technology allows for the development of spatial explicit national up to continental level exposure estimations. Although not yet fully implemented in the decision-making process, tools such as the EFSA Data & PERSAM software tool (https://esdac. jrc.ec.europa.eu/content/european-food-safety-authority-efsadata-persam-software-tool) for Predicting Environmental
622
Maria Chiara Astuto et al.
Concentrations in soil based on spatial data sets are already part of standard regulatory tools and could be moved to the next step: landscape-based exposure estimations. There are multiple benefits from mapping exposure predictions at a landscape level to support a wide range of risk management decisions. If properly implemented, these tools can identify and prioritize the conditions and areas with higher exposure and risk, instead of generalizing worst-case conditions. In addition, these tools are essential for linking prospective risk estimations with monitoring studies and retrospective assessments. The benefits are even larger for covering aggregate exposure to the same chemical from different sources and uses. As indicated in the previous examples, this is essential for assessments at both river basin and regional levels, which require the aggregation of different point-source emissions. A particularly interesting example covers assessments for agricultural scenarios (pesticides, fertilizers, feed additives, veterinary pharmaceuticals, and sludge used as soil amendment). Even for local-scale exposure scenarios, the integration of the crop scenario into the landscape characteristics will add realism to the assessment and facilitate the identification of situations with different risk levels and possible mitigation actions. Finally, moving to landscape exposure assessment is an essential requirement for assessing the combined risk from multiple chemicals in the same area, which is a current priority for citizens and regulators and is receiving significant research attention [144].

6.2 Landscape “Ecotoxicodynamic” Modeling Approaches
In a hazard assessment context, regulatory interest in modeling has triggered the development of specific recommendations for developers, such as the EFSA PPR Opinion on Good Modelling Practice [145], the US-EPA modeling framework [146], and Pop-GUIDE [147]. The aim of ecotoxicological assessments is to identify risks at the population level and beyond, and population modeling translates (eco)toxicological observations into population consequences, avoiding the use of surrogates such as reproductive no-effect levels. The possibilities for modeling population dynamics include individual-based models, which are then combined at the population level, and the identification and modeling of population-level parameters, such as survival and reproduction rates. The inclusion of landscape considerations in population modeling simulations [148] adds realism and offers the capacity to integrate ecological, spatial, and managerial variability in the risk assessment process. Elements such as the timing of application relative to the reproductive cycle, the population status and resilience, and the impact of other stressors can be included in the set of simulations, identifying the critical conditions and risk drivers. The options include spatially implicit (SIM; spatial attributes incorporated mathematically) and spatially explicit (SEM; exact location in space) models [149].
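The population-level translation described above can be sketched with a minimal stage-structured projection model in which chemical exposure scales survival and reproduction rates. This is only an illustrative sketch: the two-stage structure, function name, and all parameter values are invented for the example, not taken from any calibrated regulatory model.

```python
import numpy as np

def project_population(n0, survival, fecundity, effect, years):
    """Project a simple two-stage (juvenile/adult) population.

    effect: fractional reduction of juvenile survival and fecundity
    attributed to chemical exposure (0 = no effect, 1 = full suppression).
    All parameter values are illustrative, not calibrated.
    """
    s_j, s_a = survival             # juvenile / adult annual survival
    f = fecundity * (1.0 - effect)  # exposure reduces reproduction...
    s_j *= (1.0 - effect)           # ...and juvenile survival
    # Leslie-type projection matrix for the two stages
    L = np.array([[0.0, f],
                  [s_j, s_a]])
    n = np.asarray(n0, dtype=float)
    trajectory = [n.copy()]
    for _ in range(years):
        n = L @ n
        trajectory.append(n.copy())
    return np.array(trajectory)

# Compare an unexposed and an exposed population over 10 years
control = project_population([50, 50], (0.3, 0.8), 2.0, 0.0, 10)
exposed = project_population([50, 50], (0.3, 0.8), 2.0, 0.3, 10)
print(control[-1].sum(), exposed[-1].sum())
```

Such a projection is the simplest instance of the "population-level parameters" route; individual-based models reach the same kind of output by simulating each organism explicitly.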
In Silico Methods for Environmental Risk Assessment: Principles, Tiered. . .
Population models are also essential for integrating aggregated and combined exposures. Current models focus on the same individual exposed to several chemicals, while at a landscape level there is the additional need to combine the effects of exposures of different individuals at different times on the overall population status, for example, by aggregating the impact on survival and reproductive rates through the seasons, as offered by population models.

6.3 Landscape Risk Characterization Models
The combination of both exposure and population modeling at a landscape level represents the most interesting option, particularly as current technology facilitates the development of spatially explicit approaches. DeAngelis and Yurek offer a general review of SEMs [149], while there are several examples of applications in regulatory risk assessments, such as ALMaSS [150], an SEM individual-based model that incorporates land-use management and offers predictions of the risk related to pesticide use [151]. In a regulatory context, decisions are needed for each regulated substance. Landscape models covering several stressors will not substitute but will complement the assessment of single chemicals. It has been suggested, both for general chemicals [152] and pesticides [139], that the opportunity for these models lies in the integration of single-chemical decisions, offering the possibility to connect prospective and retrospective models and to use monitoring data for the calibration and verification of the predictions. The suggested approach is to update the premarketing risk assessment of each regulated product with the actual decision, and then to integrate the risk in a landscape SEM at the regional, national, and continental level as appropriate. Using the EU as an example, the legislation already contains sufficient elements for SEM approaches. In some cases spatial information is directly available; for example, REACH provides information on manufacturing locations and amounts for each chemical in the EU. In others, e.g., pesticides, biocides, and pharmaceuticals, the authorizations set the maximum use per substance and use, which can be refined with actual needs and market penetration. Once the general model is developed, each new assessment can be integrated, and the overall risk updated.
This approach will answer the currently unsolved question of “Will the new use or substance increase or decrease the overall environmental impact if authorized?” This will open new risk management options for regulators, and also give developers and practitioners tools for assessing the environmental impact of new products and proposed practices.
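The integration workflow suggested above (maintain a landscape risk layer and update it as each new assessment is decided) can be sketched in a few lines. The grid, the function name, and the risk values below are hypothetical illustrations for the bookkeeping idea only; they do not correspond to any regulatory tool or real assessment.

```python
import numpy as np

# Hypothetical landscape grid: each cell holds a cumulative risk
# quotient summed over all currently authorized uses. All values
# are illustrative, not derived from any real assessment.
landscape_risk = np.zeros((4, 4))

def integrate_assessment(landscape, new_risk_layer):
    """Add one substance/use risk layer and report the overall change."""
    before = landscape.sum()
    updated = landscape + new_risk_layer
    return updated, updated.sum() - before

# An existing authorized use concentrated in one corner of the grid
use_a = np.zeros((4, 4)); use_a[:2, :2] = 0.4
landscape_risk, delta_a = integrate_assessment(landscape_risk, use_a)

# A proposed new use: does it increase the overall impact, and where?
use_b = np.zeros((4, 4)); use_b[2:, 2:] = 0.2
landscape_risk, delta_b = integrate_assessment(landscape_risk, use_b)
print(delta_b, landscape_risk.max())
```

The sign of the reported change answers the "increase or decrease" question at the aggregate level, while the per-cell layer shows where the change concentrates, which is the information risk managers would act on.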
7 Future Perspectives

ERA of chemicals involves a vast range of disciplines and research areas, including toxicology, biochemistry, cellular biology, population biology, ecology, chemistry, physiology, statistics, and mathematical and landscape modeling, among others. The goal of ERA is to quantify levels of chemical exposure (exposure assessment) for given species, populations, multiple species, and the ecosystem from relevant sources and routes of exposure (inhalation, oral, and dermal, through air, water/food, and skin); to determine safe levels for such chemicals (hazard identification and characterization); and to quantify the risk associated with such exposures (risk characterization). The diversity of chemicals within the environment can be large and includes pesticides, biocides, food and feed additives, and pharmaceuticals, as well as contaminants of anthropogenic origin (e.g., persistent organic pollutants, heavy metals, perfluoroalkyl substances, brominated flame retardants, dioxins) or natural origin (e.g., marine biotoxins and mycotoxins) [153]. This chapter has provided the reader with an introduction to the principles of ERA of chemicals and the use of in silico tools to predict toxicity in both data-poor and data-rich situations at different levels of biological organization (single and multiple species at the individual and population level, ecosystem, and landscape). Figure 10 provides a generic map of the relationships between the steps in ERA, the level of biological organization, and New Approach Methodologies (NAMs). In an ERA context, exposure to chemicals connects with environmental fate, under which the use patterns of chemicals and their fate properties are considered; these are key characteristics determining environmental occurrence, which in turn provides a basis for exposure assessment for a given individual in a given species (external dose).
From such external doses, toxicokinetic data provide a basis to bridge exposure assessment and hazard identification and characterization. For hazard characterization, standard practice uses in vivo assays to determine safe levels of exposure for a given nontarget species at the individual level. Increasingly, NAMs are playing an important role in supporting a mechanistic understanding of toxicity. The so-called NAM data galaxy provides a means to move toward such mechanistic understanding as well as to bridge data gaps. These NAMs include in silico tools (e.g., QSAR models), in vitro data providing a basis for (quantitative) in vitro-to-in vivo extrapolation ((Q)IVIVE), and OMICs (transcriptomics, metabolomics, proteomics), which can be integrated into (TK)TD models. Moving toward understanding the impact of chemicals at the population and landscape level, a number of tools can be integrated at each ERA step:
Fig. 10 A generic map for the implementation of New Approach Methodologies (NAMs) across levels of biological organization (molecular, individual, population, landscape) in Environmental Risk Assessment (ERA)
- Exposure assessment: landscape characteristics, such as agricultural ecotypes, can be combined with environmental occurrence to support exposure assessment at the landscape level for each ecotype.
- Hazard characterization: (TK)TD models at the individual taxon level and population dynamics models quantitatively support the identification of risk drivers, the characterization of interspecies differences, and the assessment of population effects, while considering ecosystem services. Integrating population effects and recovery for each species using SSDs provides ERA endpoints for the ecosystem, against which chemicals can be assessed on a per-use basis at the landscape scale.
- Risk characterization: integration of both the exposure and hazard dimensions at the landscape level will support the landscape-based risk characterization of chemicals on ecosystem services.
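As a concrete illustration of the SSD step mentioned above, a log-normal species sensitivity distribution can be fitted to per-species toxicity endpoints and used to derive an HC5, the concentration expected to affect 5% of species. This is a minimal sketch with invented endpoint values; regulatory SSD fits additionally report confidence limits and goodness-of-fit checks.

```python
from statistics import NormalDist, mean, stdev
from math import log10

def hc5(toxicity_values):
    """Estimate the HC5 from a log-normal SSD.

    toxicity_values: per-species endpoints (e.g., EC50s in the same
    units). The log10 values are assumed normally distributed; the
    HC5 is the 5th percentile of the fitted distribution.
    """
    logs = [log10(v) for v in toxicity_values]
    dist = NormalDist(mean(logs), stdev(logs))
    return 10 ** dist.inv_cdf(0.05)

# Illustrative endpoints (mg/L) for eight species; values are invented
ec50s = [0.5, 1.2, 2.0, 3.5, 6.0, 9.0, 15.0, 40.0]
print(round(hc5(ec50s), 3))
```

The resulting HC5 is the kind of ecosystem-level ERA endpoint that can then be compared against landscape-level exposure estimates in the risk characterization step.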
Over the last two decades, the definition of a One-Health approach has opened new horizons for risk assessment, moving toward holistic approaches aimed at protecting
humans, animals, and the environment as a whole [154]. A typical example supporting this move toward holistic approaches is the consideration of critical aspects of species sensitivities in relation to chemical toxicity in risk assessment, particularly when considering the balance between farming, safety, and sustainability. A relevant example in this context is the recently observed colony collapse disorder (CCD) in honey bees (Apis mellifera), which highlighted the need to assess the complex interactions between multiple chemical (e.g., pesticides, contaminants) and nutritional (e.g., food availability) stressors, as well as climatic influences and diseases. Further complexity is added by the limited capacity of bees to detoxify chemicals, particularly due to the limited number of xenobiotic-metabolizing enzymes [11, 155]. To facilitate this transition, constant development in ERA is ongoing, particularly through the implementation of approaches enabling a mechanistic understanding of toxicity and the development of systems approaches [6, 15, 18]. Such approaches can be applied to better describe the processes connecting external dose to internal dose and (possible) toxicity at different levels of biological organization (e.g., specific taxa or the whole ecosystem), while investigating taxa-specific traits to provide a rationale for species-specific sensitivities to chemicals and biological hazards present in the environment. In this context, scientists have developed tools based on Next Generation Sequencing (NGS) and OMICs technologies [15]. Examples of omics applications in ERA have been discussed at a recent EFSA Scientific Colloquium on “OMICs in Risk Assessment” [19] and include:

1. Identification of molecular initiating events (MIEs) and key events (KEs) at different levels of biological organization and across taxonomic groups within an AOP framework [156].

2. Use of transcriptomics from available genomes for nonmodel species, allowing the effects of environmental contaminants on species to be tested in a resource-efficient manner; proteomic biomarkers to assess water pollutants in fish [157]; and metabolomics for biomonitoring the health status of environmental compartments [158].

3. Use of genomic technologies, including high-throughput sequencing, for the analysis of environmental DNA samples to monitor the presence of single species and communities in specific habitats, as well as for understanding the functional biodiversity and dynamics of ecosystems [159].

4. Use of omics data to support biological read-across (e.g., the SeqAPASS tool), allowing molecular docking techniques to identify the presence of target receptors and correlate them with species sensitivity to chemicals [135].
As described earlier in this chapter, the MoA and AOP frameworks, combined with TK data, provide the opportunity to describe the molecular, cellular, target organ, and whole-organism footprints of key events leading to adverse effects. In the future, the development of open source databases reporting structured hazard and exposure data, biologically based models such as physiologically based TK and TD models, and dynamic energy budget models will provide a means to investigate species sensitivities at the individual and population level. From a community and ecosystem perspective, tools for landscape modeling can provide ways to integrate the information generated from such systems biology approaches and to assess risks to the environment in a holistic manner. Many recommendations have been described elsewhere for the development of open source databases, tools, and methods to provide a mechanistic understanding of toxicity at the individual, species, population, ecosystem, and landscape levels, and these require interdisciplinary cooperation at national, regional, and international levels. It can be foreseen that this type of cooperation between regulatory bodies, risk managers, and scientists in academia, industry, and nongovernmental bodies will play an increasing role in sharing these data and tools, avoiding the duplication of efforts while supporting the harmonization of methods, tools, and guidance to bring them into practice in the risk assessment community. Finally, education in these areas for the current and the next generation of scientists and human and environmental risk assessors is a prerequisite to their practical implementation.
Acknowledgments

The views represented here are the authors' own and do not represent the views of the European Food Safety Authority or the US Environmental Protection Agency. Matteo R di Nicola and Edoardo Carnesecchi are grateful to the Lush Foundation for supporting this work through the Lush Prize in the area of alternative methods to animal testing.

References

1. Bopp SK, Kienzler A, Richarz A-N, van der Linden SC, Paini A, Parissis N, Worth AP (2019) Regulatory assessment and risk management of chemical mixtures: challenges and ways forward. Crit Rev Toxicol 49:174–189. https://doi.org/10.1080/10408444.2019.1579169
2. EFSA (European Food Safety Authority), Maggiore A, Afonso A, Barrucci A, De Sanctis G (2020) Climate change as a driver of
emerging risks for food and feed safety, plant, animal health and nutritional quality. EFSA Support Pub 17(6):EN-1881. 146 pp. https://doi.org/10.2903/sp.efsa.2020.EN-1881
3. European Commission (2010) In: European Commission (ed) Communication from the Commission, Europe 2020, a strategy for smart, sustainable and inclusive growth
4. European Commission (EC) (2020) Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions: The European Green Deal. COM/2019/640 final. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2019%3A640%3AFIN
5. EFSA Scientific Committee (2016) Guidance to develop specific protection goals options for environmental risk assessment at EFSA, in relation to biodiversity and ecosystem services. EFSA J 14(6):4499, 50 pp. https://doi.org/10.2903/j.efsa.2016.4499
6. EFSA Scientific Committee (2016) Scientific opinion on coverage of endangered species in environmental risk assessments at EFSA. EFSA J 14(2):4312, 124 pp. https://doi.org/10.2903/j.efsa.2016.4312
7. EFSA Scientific Committee (2016) Scientific opinion on recovery in environmental risk assessments at EFSA. EFSA J 14(2):4313, 85 pp. https://doi.org/10.2903/j.efsa.2016.4313
8. US-EPA (US Environmental Protection Agency) (2007) Concepts, methods, and data sources for cumulative health risk assessment of multiple chemicals, exposures and effects: a resource document (final report, 2008). 412 pp. https://cfpub.epa.gov/ncea/risk/recordisplay.cfm?deid=190187
9. Klaassen CD (2013) Casarett and Doull's toxicology: the basic science of poisons. McGraw-Hill, New York
10. EFSA Scientific Committee, Hardy A, Benford D, Halldorsson T, Jeger MJ, Knutsen HK, More S, Naegeli H, Noteborn H, Ockleford C, Ricci A, Rychen G, Schlatter JR, Silano V, Solecki R, Turck D, Benfenati E, Chaudhry QM, Craig P, Frampton G, Greiner M, Hart A, Hogstrand C, Lambré C, Luttik R, Makowski D, Siani A, Wahlström H, Aguilera J, Dorne J-LCM, Fernandez Dumont A, Hempen M, Valtueña Martínez S, Martino L, Smeraldi C, Terron A, Georgiadis N, Younes M (2017) Scientific opinion on the guidance on the use of the weight of evidence approach in scientific assessments. EFSA J 15(8):4971, 69 pp.
https://doi.org/10.2903/j.efsa.2017.4971
11. Ingenbleek L, Lautz LS, Dervilly G, Darney K, Astuto MC, Tarazona J, Liem AKD, Kass GEN, Leblanc JC, Verger P, Le Bizec B, Dorne J-LCM (2021) Risk assessment of chemicals in food and feed: principles, applications and future perspectives.
In: Environmental pollutant exposures and public health. The Royal Society of Chemistry, pp 1–38
12. Meek ME, Boobis AR, Crofton KM, Heinemeyer G, Van Raaij M, Vickers C (2011) Risk assessment of combined exposure to multiple chemicals: a WHO/IPCS framework. Regul Toxicol Pharmacol 60:S1–S14
13. EFSA Scientific Committee, More SJ, Bampidis V, Benford D, Bennekou SH, Bragard C, Halldorsson TI, Hernandez-Jerez AF, Koutsoumanis K, Naegeli H, Schlatter JR, Silano V, Nielsen SS, Schrenk D, Turck D, Younes M, Benfenati E, Castle L, Cedergreen N, Hardy A, Laskowski R, Leblanc JC, Kortenkamp A, Ragas A, Posthuma L, Svendsen C, Solecki R, Testai E, Dujardin B, Kass GEN, Manini P, Jeddi MZ, Dorne J-LCM, Hogstrand C (2019) Guidance on harmonised methodologies for human health, animal health and ecological risk assessment of combined exposure to multiple chemicals. EFSA J 17(3):5634, 77 pp. https://doi.org/10.2903/j.efsa.2019.5634
14. EFSA PPR Panel (EFSA Panel on Plant Protection Products and their Residues) (2013) Guidance on tiered risk assessment for plant protection products for aquatic organisms in edge-of-field surface waters. EFSA J 11(7):3290, 268 pp. https://doi.org/10.2903/j.efsa.2013.3290
15. EFSA (European Food Safety Authority) (2014) Modern methodologies and tools for human hazard assessment of chemicals. EFSA J 12(4):3638, 87 pp. https://doi.org/10.2903/j.efsa.2014.3638
16. Spurgeon DJ, Jones OA, Dorne J-LCM, Svendsen C, Swain S, Stürzenbaum SR (2010) Systems toxicology approaches for understanding the joint effects of environmental chemical mixtures. Sci Total Environ 408:3725–3734. https://doi.org/10.1016/j.scitotenv.2010.02.038
17. Baas J, Augustine S, Marques GM, Dorne J-LCM (2018) Dynamic energy budget models in ecological risk assessment: from principles to applications. Sci Total Environ 628-629:249–260. https://doi.org/10.1016/j.scitotenv.2018.02.058
18.
Grech A, Brochot C, Dorne J-LCM, Quignot N, Bois FY, Beaudouin R (2017) Toxicokinetic models and related tools in environmental risk assessment of chemicals. Sci Total Environ 578:1–15. https://doi.org/10.1016/j.scitotenv.2016.10.146
19. EFSA (European Food Safety Authority), Aguilera J, Aguilera-Gomez M, Barrucci F,
Cocconcelli PS, Davies H, Denslow N, Dorne J-LCM, Grohmann L, Herman L, Hogstrand C, Kass GEN, Kille P, Kleter G, Nogué F, Plant NJ, Ramon M, Schoonjans R, Waigmann E, Wright MC (2018) EFSA Scientific Colloquium 24—'omics in risk assessment: state of the art and next steps. EFSA Support Pub 15(11):EN-1512. 30 pp. https://doi.org/10.2903/sp.efsa.2018.EN-1512
20. OECD (Organisation for Economic Co-operation and Development) (2021) OECD guidance document on the characterisation, validation and reporting of physiologically based kinetic models for regulatory purposes. Series on testing and assessment no. 331
21. EFSA (European Food Safety Authority) (2015) Editorial: increasing robustness, transparency and openness of scientific assessments. EFSA J 13(3):e13031, 3 pp. https://doi.org/10.2903/j.efsa.2015.e13031
22. EFSA (European Food Safety Authority) (2014) Conclusion on the peer review of the pesticide human health risk assessment of the active substance chlorpyrifos. EFSA J 12(4):3640
23. WHO (World Health Organization) (2009) Food safety. Project to update the principles and methods for the assessment of chemicals in food. Principles and methods for the risk assessment of chemicals in food. EHC 240. ISBN 978-92-4-157240-8
24. Benfenati E, Chaudhry Q, Gini G, Dorne J-LCM (2019) Integrating in silico models and read-across methods for predicting toxicity of chemicals: a step-wise strategy. Environ Int 131:105060, ISSN 0160-4120. https://doi.org/10.1016/j.envint.2019.105060
25. ECHA (European Chemicals Agency) (2017) The use of alternatives to testing on animals for the REACH regulation. ECHA article 117(3) of the REACH regulation report no.: ECHA-17-R-02-EN. ECHA, Helsinki, FI
26. Williams AJ, Lambert JC, Thayer K, Dorne J-LCM (2021) Sourcing data on chemical properties and hazard data from the US-EPA CompTox chemicals dashboard: a practical guide for human risk assessment.
Environ Int 154:106566. https://doi.org/10.1016/j.envint.2021.106566
27. Dorne J-LCM, Richardson J, Kass G, Georgiadis N, Monguidi M, Pasinato L, Cappe S, Verhagen H, Robinson T (2017) Editorial: OpenFoodTox: EFSA's open source toxicological database on chemical hazards in food and feed. EFSA J 15:
e15011. https://doi.org/10.2903/j.efsa.2017.e15011
28. Dorne J-LCM, Richardson J, Livaniou A, Carnesecchi E, Ceriani L, Baldin R, Kovarich S, Pavan M, Saouter E, Biganzoli F, Pasinato L, Zare Jeddi M, Robinson TP, Kass GEN, Liem AKD, Toropov AA, Toropova AP, Yang C, Tarkhov A, Georgiadis N, Di Nicola MR, Mostrag A, Verhagen H, Roncaglioni A, Benfenati E, Bassan A (2021) EFSA's OpenFoodTox: an open source toxicological database on chemicals in food and feed and its future developments. Environ Int 146:106293. https://doi.org/10.1016/j.envint.2020.106293
29. Reilly L, Serafimova R, Partosch F, Gundert-Remy U, Cortiñas Abrahantes J, Dorne J-LCM, Kass GEN (2019) Testing the thresholds of toxicological concern values using a new database for food-related substances. Toxicol Lett 314:117–123. https://doi.org/10.1016/j.toxlet.2019.07.019
30. ECHA (European Chemicals Agency) (2008) REACH guidance on information requirements and chemical safety assessment. Chapter R.6: QSARs and grouping of chemicals. http://echa.europa.eu/documents/10162/13632/information_requirements_r6_en.pdf
31. Baderna D, Faoro R, Selvestrel G, Troise A, Luciani D, Andres S, Benfenati E (2021) Defining the human-biota thresholds of toxicological concern for organic chemicals in freshwater: the proposed strategy of the LIFE VERMEER project using VEGA tools. Molecules 26(7):1928. https://doi.org/10.3390/molecules26071928
32. Belanger SE, Sanderson H, Embry MR, Coady K, DeZwart D, Farr BA, Gutsell S, Halder M, Sternberg R, Wilson P (2015) It is time to develop ecological thresholds of toxicological concern to assist environmental hazard assessment. Environ Toxicol Chem 34:2864–2869. https://doi.org/10.1002/etc.3132
33. Connors KA, Beasley A, Barron MG, Belanger SE, Bonnell M, Brill JL, de Zwart D, Kienzler A, Krailler J, Otter R, Phillips JL, Embry MR (2019) Creation of a curated aquatic toxicology database: EnviroTox. Environ Toxicol Chem 38:1062–1073. https://doi.org/10.1002/etc.4382
34.
Kienzler A, Barron MG, Belanger SE, Beasley A, Embry MR (2017) Mode of action (MOA) assignment classifications for ecotoxicology: an evaluation of approaches. Environ Sci Technol 51:10203–10211. https://doi.org/10.1021/acs.est.7b02337
35. Kienzler A, Connors KA, Bonnell M, Barron MG, Beasley A, Inglis CG, Norberg-King TJ, Martin T, Sanderson H, Vallotton N, Wilson P, Embry MR (2019) Mode of action classifications in the EnviroTox database: development and implementation of a consensus MOA classification. Environ Toxicol Chem 38:2294–2304. https://doi.org/10.1002/etc.4531
36. ECHA (European Chemicals Agency) (2008) Guidance on information requirements and chemical safety assessment. Chapter R.6: QSARs and grouping of chemicals
37. Manganaro A, Pizzo F, Lombardo A, Pogliaghi A, Benfenati E (2016) Predicting persistence in the sediment compartment with a new automatic software based on the k-Nearest Neighbor (k-NN) algorithm. Chemosphere 144:1624–1630. https://doi.org/10.1016/j.chemosphere.2015.10.054
38. OECD (Organisation for Economic Co-operation and Development) (2007) Guidance on grouping of chemicals. Environment health and safety publications, series on testing and assessment no. 80, Report no. ENV/JM/MONO(2007)28, JT03232745. OECD, Paris
39. Rand-Weaver M, Margiotta-Casaluci L, Patel A, Panter GH, Owen SF, Sumpter JP (2013) The read-across hypothesis and environmental risk assessment of pharmaceuticals. Environ Sci Technol 47:11384–11395. https://doi.org/10.1021/es402065a
40. OECD (Organisation for Economic Co-operation and Development) (2014) Guidance on grouping of chemicals, second edition. Series on Testing and Assessment, No. 194, ENV/JM/MONO(2014)4, 14-Apr-2014
41. Benfenati E, Como F, Marzo M, Gadaleta D, Toropov A, Toropova A (2017) Developing innovative in silico models with EFSA's OpenFoodTox database. EFSA Support Pub:EN-1206, 19 pp. https://doi.org/10.2903/sp.efsa.2017.EN-1206
42. Ambure P, Halder AK, González Díaz H, Cordeiro MNDS (2019) QSAR-Co: an open source software for developing robust multitasking or multitarget classification-based QSAR models. J Chem Inf Model 59:2538–2544. https://doi.org/10.1021/acs.jcim.9b00295
43.
OECD (Organisation for Economic Co-operation and Development) (2020) Overview of concepts and available guidance related to integrated approaches to testing and assessment (IATA), OECD series on testing and assessment, no 329. Environment,
Health and Safety, Environment Directorate, OECD
44. Carnesecchi E, Toma C, Roncaglioni A, Kramer N, Benfenati E, Dorne J-LCM (2020) Integrating QSAR models predicting acute contact toxicity and mode of action profiling in honey bees (A. mellifera): data curation using open source databases, performance testing and validation. Sci Total Environ 735:139243. https://doi.org/10.1016/j.scitotenv.2020.139243
45. Verhaar HJM, van Leeuwen CJ, Hermens JLM (1992) Classifying environmental pollutants. 1: structure-activity-relationships for prediction of aquatic toxicity. Chemosphere 25:471–491. https://doi.org/10.1016/0045-6535(92)90280-5
46. Russom CL, Bradbury SP, Broderius SJ, Hammermeister DE, Drummond RA (1997) Predicting modes of toxic action from chemical structure: acute toxicity in the fathead minnow (Pimephales promelas). Environ Toxicol Chem. https://doi.org/10.1897/1551-5028(1997)0162.3.co;2
47. Dimitrov SD, Diderich R, Sobanski T, Pavlov TS, Chankov GV, Chapkanov AS, Karakolev YH, Temelkov SG, Vasilev RA, Gerova KD, Kuseva CD, Todorova ND, Mehmed AM, Rasenberg M, Mekenyan OG (2016) QSAR Toolbox—workflow and major functionalities. SAR QSAR Environ Res 27:203–219. https://doi.org/10.1080/1062936X.2015.1136680
48. Kuseva C, Schultz TW, Yordanova D, Tankova K, Kutsarova S, Pavlov T, Chapkanov A, Georgiev M, Gissi A, Sobanski T, Mekenyan OG (2019) The implementation of RAAF in the OECD QSAR Toolbox. Regul Toxicol Pharmacol 105:51–61. https://doi.org/10.1016/j.yrtph.2019.03.018
49. Carnesecchi E, Toropov AA, Toropova AP, Roncaglioni A, Dorne J-LCM, Benfenati E (2020) Development of quantitative structure-activity relationship (QSAR) models for the prediction of acute oral toxicity of plant protection products in the bobwhite quail (Colinus virginianus) using EFSA's OpenFoodTox (in preparation)
50. Ghosh S, Ojha PK, Carnesecchi E, Lombardo A, Roy K, Benfenati E (2020) Exploring QSAR modeling of toxicity of chemicals on earthworm. Ecotoxicol Environ Saf 190:110067.
https://doi.org/10.1016/j.ecoenv.2019.110067
51. Roy J, Kumar Ojha P, Carnesecchi E, Lombardo A, Roy K, Benfenati E (2020)
First report on a classification-based QSAR model for chemical toxicity to earthworm. J Hazard Mater 386:121660. https://doi.org/10.1016/j.jhazmat.2019.121660
52. Carnesecchi E, Toropov AA, Toropova AP, Roncaglioni A, Dorne J-LCM, Benfenati E (2020) Development of quantitative structure-activity relationship (QSAR) models for the prediction of acute oral toxicity of plant protection products in earthworms (Eisenia fetida) using EFSA's OpenFoodTox (in preparation)
53. Como F, Carnesecchi E, Volani S, Dorne J-LCM, Richardson J, Bassan A, Pavan M, Benfenati E (2017) Predicting acute contact toxicity of pesticides in honeybees (Apis mellifera) through a k-nearest neighbor model. Chemosphere 166:438–444. https://doi.org/10.1016/j.chemosphere.2016.09.092
54. Carnesecchi E, Toropov AA, Toropova AP, Kramer N, Svendsen C, Dorne J-LCM, Benfenati E (2020) Predicting acute contact toxicity of organic binary mixtures in honey bees (A. mellifera) through innovative QSAR models. Sci Total Environ 704:135302. https://doi.org/10.1016/j.scitotenv.2019.135302
55. Toropov AA, Toropova AP, Marzo M, Dorne J-LCM, Georgiadis N, Benfenati E (2017) QSAR models for predicting acute toxicity of pesticides in rainbow trout using the CORAL software and EFSA's OpenFoodTox database. Environ Toxicol Pharmacol 53:158–163
56. Toropova AP, Toropov AA, Marzo M, Escher SE, Dorne J-LCM, Georgiadis N, Benfenati E (2018) The application of the new HARD descriptor available from the CORAL software to building up NOAEL models. Food Chem Toxicol 112:544–550. https://doi.org/10.1016/j.fct.2017.03.060
57. Gadaleta D, Marzo M, Toropov A, Toropova A, Lavado GJ, Escher SE, Dorne J-LCM, Benfenati E (2021) Integrated in silico models for the prediction of no-observed-(adverse)-effect levels and lowest-observed-(adverse)-effect levels in rats for sub-chronic repeated-dose toxicity. Chem Res Toxicol 34:247–257.
https://doi.org/10.1021/acs.chemrestox.0c00176
58. Regulation (EC) No 1907/2006 of the European Parliament and of the Council of 18 December 2006 concerning the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH), establishing a European Chemicals Agency, amending Directive 1999/45/EC and repealing Council Regulation (EEC) No 793/93 and Commission Regulation (EC) No 1488/94 as well as Council Directive 76/769/EEC and
Commission Directives 91/155/EEC, 93/67/EEC, 93/105/EC and 2000/21/EC
59. Perkins EJ, Ashauer R, Burgoon L, Conolly R, Landesmann B, Mackay C, Murphy CA, Pollesch N, Wheeler JR, Zupanic A, Scholz S (2019) Building and applying quantitative adverse outcome pathway models for chemical hazard and risk assessment. Environ Toxicol Chem 38:1850–1865. https://doi.org/10.1002/etc.4505
60. LaLone CA, Villeneuve DL, Wu-Smart J, Milsk RY, Sappington K, Garber KV, Housenger J, Ankley GT (2017) Weight of evidence evaluation of a network of adverse outcome pathways linking activation of the nicotinic acetylcholine receptor in honey bees to colony death. Sci Total Environ 584-585:751–775. https://doi.org/10.1016/j.scitotenv.2017.01.113
61. Spinu N, Cronin MTD, Enoch SJ, Madden JC, Worth AP (2020) Quantitative adverse outcome pathway (qAOP) models for toxicity prediction. Arch Toxicol 94:1497–1510. https://doi.org/10.1007/s00204-020-02774-7
62. Cotterill J, Price N, Rorije E, Peijnenburg A (2020) Development of a QSAR model to predict hepatic steatosis using freely available machine learning tools. Food Chem Toxicol 142:111494. https://doi.org/10.1016/j.fct.2020.111494
63. Beronius A, Zilliacus J, Hanberg A, Luijten M, van der Voet H, van Klaveren J (2020) Methodology for health risk assessment of combined exposures to multiple chemicals. Food Chem Toxicol 143:111520. https://doi.org/10.1016/j.fct.2020.111520
64. EFSA (European Food Safety Authority) Scientific Committee, Benford D, Halldorsson T, Jeger MJ, Knutsen HK, More S, Naegeli H, Noteborn H, Ockleford C, Ricci A, Rychen G, Schlatter JR, Silano V, Solecki R, Turck D, Younes M, Craig P, Hart A, Von Goetz N, Koutsoumanis K, Mortensen A, Ossendorp B, Martino L, Merten C, Mosbach-Schulz O, Hardy A (2018) Guidance on uncertainty analysis in scientific assessments. EFSA J 16(1):5123, 39 pp. https://doi.org/10.2903/j.efsa.2018.5123
65.
Jager T (2017) Making sense of chemical sress—application of dynamic energy budget theory in ecotoxicology and stress ecology 66. EFSA PPR Panel (EFSA Panel on Plant Protection Products and their Residues), Ockleford, C, Adriaanse, P, Berny, P,
632
Maria Chiara Astuto et al.
Brock, T, Duquesne, S, Grilli, S, HernandezJerez, AF, Bennekou, SH, Klein, M, Kuhl, T, Laskowski, R, Machera, K, Pelkonen, O, Pieper, S, Smith, RH, Stemmer, M, Sundh, I, Tiktak, A, Topping, CJ, Wolterink, G, Cedergreen, N, Charles, S, Focks, A, Reed, M, Arena, M, Ippolito, A, Byers, H, Teodorovic I (2018) Scientific opinion on the state of the art of Toxicokinetic/ Toxicodynamic (TKTD) effect models for regulatory risk assessment of pesticides for aquatic organisms. EFSA J 16(8):5377, 188 pp. https://doi.org/10.2903/j.efsa. 2018.5377 67. Jager T (2020) Revisiting simplified DEBtox models for analysing ecotoxicity data. Ecol Model 416:108904 68. Jager T, Ashauer R (2018) Modelling survival under chemical stress. A comprehensive guide to the GUTS framework. Leanpup 69. Nyman AM, Schirmer K, Ashauer R (2012) Toxicokinetic-toxicodynamic modelling of survival of Gammarus pulex in multiple pulse exposures to propiconazole: model assumptions, calibration data requirements and predictive power. Ecotoxicology 21:1828–1840. https://doi.org/10.1007/s10646-0120917-0 70. Baudrot V, Preux S, Ducrot V, Pave A, Charles S (2018) New insights to compare and choose TKTD models for survival based on an interlaboratory study for Lymnaea stagnalis exposed to Cd. Environ Sci Technol 52: 1582–1590. https://doi.org/10.1021/acs. est.7b05464 71. Focks A, Belgers D, Boerwinkel MC, Buijse L, Roessink I, Van den Brink PJ (2018) Calibration and validation of toxicokinetictoxicodynamic models for three neonicotinoids and some aquatic macroinvertebrates. Ecotoxicology 27:992–1007. https://doi. org/10.1007/s10646-018-1940-6 72. Landrum PF (1989) Bioavailability and toxicokinetics of polycyclic aromatic hydrocarbons sorbed to sediments for the amphipod Pontoporeia hoyi. Environ Sci Technol 23: 588–595 73. Landrum PF, Lydy MJ, Lee H (1992) Toxicokinetics in aquatic systems: model comparisons and use in hazard assessment. Environ Toxicol Chem 11:1709–1725 74. 
MacKay D, Fraser A (2000) Bioaccumulation of persistent organic chemicals: mechanisms and models. Environ Pollut 110:375–391 75. Luoma SN, Rainbow PS (2005) Why is metal bioaccumulation so variable? Biodynamics as a unifying concept. Environ Sci Technol 39: 1921–1931
76. Ratier A, Lopes C, Labadie P, Budzinski H, Delorme N, Que´au H, Peluhet L, Geffard O, Babut M (2019) A Bayesian framework for estimating parameters of a generic toxicokinetic model for the bioaccumulation of organic chemicals by benthic invertebrates: proof of concept with PCB153 and two freshwater species. Ecotoxicol Environ Saf 180: 33–42. https://doi.org/10.1016/j.ecoenv. 2019.04.080 77. Gestin O, Lacoue-Labarthe T, Coquery M, Delorme N, Garnero L, Dherret L, Ciccia T, Geffard O, Lopes C (2021) One and multicompartments toxico-kinetic modeling to understand metals’ organotropism and fate in Gammarus fossarum. Environ Int. (in proof) 78. Ratier A, Lopes C, Geffard O, Babut M (2021) The added value of Bayesian inference for estimating biotransformation rates of organic contaminants in aquatic invertebrates. Aqua Toxicol 234:105811. https://doi.org/ 10.1016/j.aquatox.2021.105811 79. Schmitt W, Bruns E, Dollinger M, Sowig P (2013) Mechanistic TK/TD-model simulating the effect of growth inhibitors on Lemna populations. Ecol Model 255:1–10 80. Billoir E, Delhaye H, Forfait C, Cle´ment B, Triffault-Bouchet G, Charles S, DelignetteMuller ML (2012) Comparison of bioassays with different exposure time patterns: the added value of dynamic modelling in predictive ecotoxicology. Ecotoxicol Environ Saf 75: 80–86. https://doi.org/10.1016/j.ecoenv. 2011.08.006 81. Billoir E, Delhaye H, Cle´ment B, DelignetteMuller ML, Charles S (2011) Bayesian modelling of daphnid responses to time-varying cadmium exposure in laboratory aquatic microcosms. Ecotoxicol Environ Saf 74: 693–702. https://doi.org/10.1016/j. ecoenv.2010.10.023 82. Billoir E, da Silva Ferra˜o-Filho A, Laure Delignette-Muller M, Charles S (2009) DEBtox theory and matrix population models as helpful tools in understanding the interaction between toxic cyanobacteria and zooplankton. J Theor Biol 258:380–388. https://doi.org/10.1016/j.jtbi.2008. 07.029 83. 
Billoir E, Laure Delignette-Muller M, Pe´ry ARR, Geffard O, Charles S (2008) Statistical cautions when estimating DEBtox parameters. J Theor Biol 254:55–64. https:// doi.org/10.1016/j.jtbi.2008.05.006 84. Billoir E, Delignette-Muller ML, Pe´ry AR, Charles S (2008) A Bayesian approach to analyzing ecotoxicological data. Environ Sci
In Silico Methods for Environmental Risk Assessment: Principles, Tiered. . . Technol 42:8978–8984. https://doi.org/10. 1021/es801418x 85. Llandres AL, Marques GM, Maino JL, Kooijman SALM, Kearney MR, Casas J (2015) A dynamic energy budget for the whole lifecycle of holometabolous insects. Ecol Monogr 85:353–371. https://doi.org/10. 1890/14-0976.1 86. Jager T, Albert C, Preuss T, Ashauer R (2011) General unified threshold model of survival— a toxicokinetic-toxicodynamic framework for ecotoxicology. Environ Sci Technol 45: 2529–2540 87. Ashauer R, Thorbek P, Warinton JS, Wheeler JR, Maund S (2013) A method to predict and understand fish survival under dynamic chemical stress using standard ecotoxicity data. Environ Toxicol Chem 32:954–965. https://doi.org/10.1002/etc.2144 88. Gabsi F, Solga A, Bruns E, Leake C, Preuss TG (2019) Short-term to long-term extrapolation of lethal effects of an herbicide on the marine mysid shrimp Americamysis Bahia by use of the General Unified Threshold Model of Survival (GUTS). Integr Environ Assess Manag 15:29–39. https://doi.org/10. 1002/ieam.4092 89. Baudrot V, Charles S (2021) morse: an R-package in support of environmental risk assessment. J Open Source Software (submitted) https://doi.org/10.1101/2021.04.07. 438826v1 90. Dalhoff K, Hansen AMB, Rasmussen JJ, Focks A, Strobel BW, Cedergreen N (2020) Linking morphology, toxicokinetic, and toxicodynamic traits of aquatic invertebrates to pyrethroid sensitivity. Environ Sci Technol 54:5687–5699. https://doi.org/10.1021/ acs.est.0c00189 91. Bart S, Jager T, Robinson A, Lahive E, Spurgeon DJ, Ashauer R (2021) Predicting mixture effects over time with toxicokinetic–toxicodynamic models (GUTS): assumptions, experimental testing, and predictive power. Environ Sci Technol 55:2430–2439. https://doi.org/10.1021/ acs.est.0c05282 92. 
Brock T, Arena M, Cedergreen N, Charles S, Duquesne S, Ippolito A, Klein M, Reed M, Teodorovic I, van den Brink PJ, Focks A (2021) Application of general unified threshold models of survival models for regulatory aquatic pesticide risk assessment illustrated with an example for the insecticide chlorpyrifos. Integr Environ Assess Manag 17: 243–258. https://doi.org/10.1002/ieam. 4327
633
93. Xie M, Sun Y, Feng J, Gao Y, Zhu L (2019) Predicting the toxic effects of Cu and Cd on Chlamydomonas reinhardtii with a DEBtox model. Aqua Toxicol 210:106–116. https:// doi.org/10.1016/j.aquatox.2019.02.018 94. Jager T (2020) Robust likelihood-based approach for automated optimization and uncertainty analysis of toxicokinetictoxicodynamic models. Integr Environ Assess Manag 17:388–397 95. Charles S, Veber P, Delignette-Muller ML (2018) MOSAIC: a web-interface for statistical analyses in ecotoxicology. Environ Sci Pollut Res 25:11295–11302 96. Charles S, Ratier A, Baudrot V, Multari G, Siberchicot A, Wu D, Lopes C (2021) Taking full advantage of modelling to better assess environmental risk due to xenobiotics. bioRxiv:2021.2003.2024.436474. https://doi. org/10.1101/2021.03.24.436474 97. Forfait-Dubuc C, Charles S, Billoir E, Delignette-Muller ML (2012) Survival data analyses in ecotoxicology: critical effect concentrations, methods and models. What should we use? Ecotoxicology 12:1072–1083 98. Delignette-Muller ML, Lopes C, Veber P, Charles S (2014) Statistical handling of reproduction data for exposure-response modeling. Environ Sci Technol 48:7544–7551 99. Charles S, Wu D, Ducrot V (2021) How to account for the uncertainty from standard toxicity tests in species sensitivity distributions: an example in non-target plants. PLoS One 16:e0245071 100. Ratier A, Lopes C, Multari G, Mazerolles V, Carpentier P, Charles S (2021) New perspectives on the calculation of bioaccumulation metrics for active substances in living organisms. bioRxiv:2020.2007.2007.185835. https://doi.org/10.1101/2020.07.07. 185835 101. Ratier A, Charles S (2021) Accumulationdepuration data collection in support of toxicokinetic modelling. bioRxiv:2021.2004.2015.439942. https://doi. org/10.1101/2021.04.15.439942 102. Kon Kam King G, Veber P, Charles S, Delignette-Muller ML (2014) MOSAIC_SSD: a new web tool for species sensitivity distribution to include censored data by maximum likelihood. 
Environ Toxicol Chem 33:2133–2139 103. Forbes VE, Calow P, Grimm V, Hayashi TI, Jager T, Katholm A, Palmqvist A, Pastorok R, Salvito D, Sibly R, Spromberg J, Stark J, Stillman RA (2011) Adding value to ecological risk assessment with population modeling.
634
Maria Chiara Astuto et al.
Hum Ecol Risk Assess Int J 17:287–299. https://doi.org/10.1080/10807039.2011. 552391 104. Charles S, Billoir E, Lopes C, Chaumot A (2009) Matrix population models as relevant modeling tools in ecotoxicology. Ecotoxicol Model:261–298 105. Martin BT, Zimmer EI, Grimm V, Jager T (2012) Dynamic Energy Budget theory meets individual-based modelling: a generic and accessible implementation. Methods Ecol Evol 3:445–449 106. Kooijman S (2001) Quantitative aspects of metabolic organization: a discussion of concepts. Philos Trans Biol Sci 356:331–349 107. Kooijman B, Kooijman S (2010) Dynamic energy budget theory for metabolic organisation. Cambridge University Press 108. Sousa T, Domingos T, Kooijman S (2008) From empirical patterns to theory: a formal metabolic theory of life. Philos Trans Biol Sci 363:2453–2464 109. van der Meer J (2006) An introduction to Dynamic Energy Budget (DEB) models with special emphasis on parameter estimation. J Sea Res 56:85–102. https://doi.org/10. 1016/j.seares.2006.03.001 110. Nisbet RM, Muller EB, Lika K, Kooijman SALM (2000) From molecules to ecosystems through dynamic energy budget models. J Anim Ecol 69:913–926. https://doi.org/ 10.1111/j.1365-2656.2000.00448.x 111. Metz JAJ, Diekmann O (1986) The dynamics of physiologically structured populations. Springer, Berlin 112. Kooijman S, Sousa T, Pecquerie L, Van der Meer J, Jager T (2008) From food-dependent statistics to metabolic parameters, a practical guide to the use of dynamic energy budget theory. Biol Rev 83:533–552 113. Lika K, Kearney MR, Freitas V, van der Veer HW, van der Meer J, Wijsman JW, Pecquerie L, Kooijman SA (2011) The “covariation method” for estimating the parameters of the standard Dynamic Energy Budget model I: philosophy and approach. J Sea Res 66:270–277 114. Lika K, Kooijman SA (2011) The comparative topology of energy allocation in budget models. J Sea Res 66:381–391 115. 
Sousa T, Domingos T, Poggiale JC, Kooijman SALM (2010) Dynamic energy budget theory restores coherence in biology. Philos Trans Biol Sci 365(1557):3413–3428 116. OECD (2006) Current approaches in the statistical analysis of ecotoxicity data: a guidance to application (annexes to this publication exist as a separate document), OECD Series
on Testing and Assessment, No. 54. OECD Publishing, Paris. https://doi.org/10.1787/ 9789264085275-en 117. Hesketh H, Lahive E, Horton AA, Robinson AG, Svendsen C, Rortais A, Dorne J-LCM, Baas J, Spurgeon DJ, Heard MS (2016) Extending standard testing period in honeybees to predict lifespan impacts of pesticides and heavy metals using dynamic energy budget modelling. Sci Rep 6:37655. https://doi. org/10.1038/srep37655 118. International Programme on Chemical Safety & Inter-Organization Programme for the Sound Management of Chemicals (2010) Characterization and application of physiologically based phamacokinetic models in risk assessment. World Health Organization. https://apps.who.int/iris/handle/10665/ 44495 119. Lautz LS, Oldenkamp R, Dorne J-LCM, Ragas AMJ (2019) Physiologically based kinetic models for farm animals: critical review of published models and future perspectives for their use in chemical risk assessment. Toxicol In Vitro 60:61–70. https:// doi.org/10.1016/j.tiv.2019.05.002 120. Lautz LS, Nebbia C, Hoeks S, Oldenkamp R, Hendriks AJ, Ragas AMJ, Dorne J-LCM (2020) An open source physiologically based kinetic model for the chicken (Gallus gallus domesticus): calibration and validation for the prediction residues in tissues and eggs. Environ Int 136:105488. https://doi.org/10. 1016/j.envint.2020.105488 121. Lautz LS, Dorne J-LCM, Oldenkamp R, Hendriks AJ, Ragas AMJ (2020) Generic physiologically based kinetic modelling for farm animals: part I. Data collection of physiological parameters in swine, cattle and sheep. Toxicol Lett 319:95–101. https://doi.org/ 10.1016/j.toxlet.2019.10.021 122. Lau C, Mole ML, Copeland MF, Rogers JM, Kavlock RJ, Shuey DL, Cameron AM, Ellis DH, Logsdon TR, Merriman J, Setzer RW (2001) Toward a biologically based doseresponse model for developmental toxicity of 5-fluorouracil in the rat: acquisition of experimental data. Toxicol Sci 59:37–48. https:// doi.org/10.1093/toxsci/59.1.37 123. 
Tebby C, Brochot C, Dorne J-LCM, Beaudouin R (2019) Investigating the interaction between melamine and cyanuric acid using a Physiologically-Based Toxicokinetic model in rainbow trout. Toxicol Appl Pharmacol 370: 184–195. https://doi.org/10.1016/j.taap. 2019.03.021. ISSN 0041-008X 124. Zhang C et al (2019) The application of the QuEChERS methodology in the
In Silico Methods for Environmental Risk Assessment: Principles, Tiered. . . determination of antibiotics in food: a review. TrAC Trends Anal Chem 118:517–537 125. Posthuma L, Suter GWI, Traas TP (2002) Species sensitivity distributions in ecotoxicology. CRC, Boca Raton, FL 126. European Chemicals Agency (2008) Guidance on information requirements and chemical safety assessment. European Chemicals Agency, Helsinki 127. Dowse R, Tang D, Palmer CG, Kefford BJ (2013 Jun) Risk assessment using the species sensitivity distribution method: data quality versus data quantity. Environ Toxicol Chem 32(6):1360–1369. https://doi.org/10. 1002/etc.2190 128. Posthuma L, van Gils J, Zijp MC, van de Meent D, de Zwart D (2019) Species sensitivity distributions for use in environmental protection, assessment, and management of aquatic ecosystems for 12 386 chemicals. Environ Toxicol Chem 38:905–917. https://doi.org/10.1002/etc.4373 129. Giddings JM, Wirtz J, Campana D, Dobbs M (2019) Derivation of combined species sensitivity distributions for acute toxicity of pyrethroids to aquatic animals. Ecotoxicology 28: 242–250. https://doi.org/10.1007/ s10646-019-02018-0 130. Boeckman CJ, Layton R (2017) Use of species sensitivity distributions to characterize hazard for insecticidal traits. J Invertebr Pathol 142:68–70. https://doi.org/10. 1016/j.jip.2016.08.006 131. Chen G, Peijnenburg WJGM, Xiao Y, Vijver MG (2018) Developing species sensitivity distributions for metallic nanomaterials considering the characteristics of nanomaterials, experimental conditions, and different types of endpoints. Food Chem Toxicol 112: 563–570. https://doi.org/10.1016/j.fct. 2017.04.003 132. Hoondert RPJ, Oldenkamp R, de Zwart D, van de Meent D, Posthuma L (2019) QSARbased estimation of species sensitivity distribution parameters: an exploratory investigation. Environ Toxicol Chem 38:2764–2770. https://doi.org/10.1002/etc.4601 133. 
Cheng W, Doering JA, LaLone C, Ng C (2021) Integrative computational approaches to inform relative bioaccumulation potential of per- and polyfluoroalkyl substances across species. Toxicol Sci 180:212–223. https:// doi.org/10.1093/toxsci/kfab004 134. LaLone CA, Villeneuve DL, Doering JA, Blackwell BR, Transue TR, Simmons CW, Swintek J, Degitz SJ, Williams AJ, Ankley GT (2018) Evidence for cross species extrapolation of mammalian-based high-throughput screening assay results. Environ Sci
635
Technol 52:13960–13971. https://doi.org/ 10.1021/acs.est.8b04587 135. LaLone CA, Villeneuve DL, Lyons D, Helgen HW, Robinson SL, Swintek JA, Saari TW, Ankley GT (2016) Editor’s highlight: sequence alignment to predict across species susceptibility (SeqAPASS): a web-based tool for addressing the challenges of cross-species extrapolation of chemical toxicity. Toxicol Sci 153:228–245. https://doi.org/10.1093/ toxsci/kfw119 136. Forbes VE, Calow P, Grimm V, Hayashi TI, Jager T, Katholm A et al (2011) Adding value to ecological risk assessment with population modeling. Hum Ecol Risk Assess Int J 17(2): 287–299 137. Schmolke A, Thorbek P, Chapman P, Grimm V (2010) Ecological models and pesticide risk assessment: current modeling practice. Environ Toxicol Chem 29(4):1006–1012 138. Wang M, Grimm V (2010) Population models in pesticide risk assessment: lessons for assessing population-level effects, recovery, and alternative exposure scenarios from modeling a small mammal. Environ Toxicol Chem 29(6):1292–1300 139. EFSA (2018) Scientific risk assessment of pesticides in the European Union (EU): EFSA contribution to on-going reflections by the EC. EFSA Support Pub 15(1):1367E 140. Streissl F, Egsmose M, Tarazona JV (2018) Linking pesticide marketing authorisations with environmental impact assessments through realistic landscape risk assessment paradigms. Ecotoxicology 27(7):980–991 141. Zolezzi M, Cattaneo C, Tarazona JV (2005) Probabilistic ecological risk assessment of 1,2,4-trichlorobenzene at a former industrial contaminated site. Environ Sci Technol 39(9):2920–2926 142. Eccles KM, Pauli BD, Chan HM (2019) The Use of Geographic Information Systems for Spatial Ecological Risk Assessments: An Example from the Athabasca Oil Sands Area in Canada. Environ Toxicol Chem 38(12): 2797–2810 143. Schowanek D, Fox K, Holt M, Schroeder FR, Koch V, Cassani G et al (2001) GREAT-ER: a new tool for management and risk assessment of chemicals in river basins. Contribution to GREAT-ER #10. 
Water Sci Technol 43(2): 179–185 144. Bopp SK, Barouki R, Brack W, Dalla Costa S, Dorne J-LCM, Drakvik PE et al (2018) Current EU research activities on combined exposure to multiple chemicals. Environ Int 120:544–562
636
Maria Chiara Astuto et al.
145. EFSA PPR Panel (2014) Scientific opinion on good modelling practice in the context of mechanistic effect models for risk assessment of plant protection products. EFSA J 12(3): 3589 146. Raimondo S, Etterson M, Pollesch N, Garber K, Kanarek A, Lehmann W et al (2018) A framework for linking population model development with ecological risk assessment objectives. Integr Environ Assess Manag 14(3):369–380 147. Raimondo S, Schmolke A, Pollesch N, Accolla C, Galic N, Moore A et al (2021) Pop-GUIDE: population modeling guidance, use, interpretation, and development for ecological risk assessment. Integr Environ Assess Manag 17(4):767–784 148. Topping CJ, Weyman GS (2018) Rabbit population landscape-scale simulation to investigate the relevance of using rabbits in regulatory environmental risk assessment. Environ Model Assess 23(4):415–457 149. DeAngelis DL, Yurek S (2017) Spatially explicit modeling in ecology: a review. Ecosystems 20(2):284–300 150. Topping CJ, Hansen TS, Jensen TS, Jepsen JU, Nikolajsen F, Odderskær P (2003) ALMaSS, an agent-based model for animals in temperate European landscapes. Ecol Model 167(1):65–82 151. Mayer M, Duan X, Sunde P, Topping CJ (2020) European hares do not avoid newly pesticide-sprayed fields: overspray as unnoticed pathway of pesticide exposure. Sci Total Environ 715:136977 152. Tarazona JV (2013) Use of new scientific developments in regulatory risk assessments: challenges and opportunities. Integr Environ Assess Manag 9(3):e85–e91 153. Dorne J-LCM, Fink-Gremmels J (2013) Human and animal health risk assessments of chemicals in the food chain: comparative aspects and future perspectives. Toxicol Appl Pharmacol 270:187–195. https://doi.org/ 10.1016/j.taap.2012.03.013 154. Calistri P, Iannetti S, Danzetta ML, Narcisi V, Cito F, Di Sabatino D, Bruno R, Sauro F, Atzeni M, Carvelli A, Giovannini A (2013)
The components of ‘One World—One Health’ approach. Transbound Emerg Dis 60:4–13. https://doi.org/10.1111/tbed. 12145 155. Rortais A, Arnold G, Dorne J-LCM, More SJ, Sperandio G, Streissl F, Szentes C, Verdonck F (2017) Risk assessment of pesticides and other stressors in bees: principles, data gaps and perspectives from the European Food Safety Authority. SciTotal Environ 587–588: 524–537. https://doi.org/10.1016/j. scitotenv.2016.09.127 156. Brockmeier EK, Hodges G, Hutchinson TH, Butler E, Hecker M, Tollefsen KE, GarciaReyero N, Kille P, Becker D, Chipman K, Colbourne J, Collette TW, Cossins A, Cronin M, Graystock P, Gutsell S, Knapen D, Katsiadaki I, Lange A, Marshall S, Owen SF, Perkins EJ, Plaistow S, Schroeder A, Taylor D, Viant M, Ankley G, Falciani F (2017) The role of omics in the application of adverse outcome pathways for chemical risk assessment. Toxicol Sci 158: 252–262. https://doi.org/10.1093/toxsci/ kfx097 157. Roland K, Kestemont P, Dieu M, Raes M, Silvestre F (2016) Using a novel “Integrated Biomarker Proteomic” index to assess the effects of freshwater pollutants in European eel peripheral blood mononuclear cells. J Proteome 137:83–96. https://doi.org/10. 1016/j.jprot.2016.01.007 158. Davis JM, Ekman DR, Teng Q, Ankley GT, Berninger JP, Cavallin JE, Jensen KM, Kahl MD, Schroeder AL, Villeneuve DL, Jorgenson ZG, Lee KE, Collette TW (2016) Linking field-based metabolomics and chemical analyses to prioritize contaminants of emerging concern in the Great Lakes basin. Environ Toxicol Chem 35:2493–2502. https://doi. org/10.1002/etc.3409 159. Lodge DM, Turner CR, Jerde CL, Barnes MA, Chadderton L, Egan SP, Feder JL, Mahon AR, Pfrender ME (2012) Conservation in a cup of water: estimating biodiversity and population abundance from environmental DNA. Mol Ecol 21:2555–2558
Chapter 24

Increasing the Value of Data Within a Large Pharmaceutical Company Through In Silico Models

Alessandro Brigo, Doha Naga, and Wolfgang Muster

Abstract

The present contribution describes how in silico models and methods are applied at different stages of the drug discovery process in the pharmaceutical industry. A description of the most relevant computational methods and tools is given, along with an evaluation of their performance in the assessment of potential genotoxic impurities and the prediction of off-target in vitro pharmacology. The challenges of predicting the outcome of highly complex in vivo studies are discussed, followed by considerations on how novel ways to manage, store, exchange, and analyze data may advance knowledge and facilitate modeling efforts. In this context, the current status of broad data sharing initiatives, namely eTOX and eTransafe, will be described along with related projects that could significantly reduce the use of animals in drug discovery in the future.

Key words: Drug discovery, Machine learning, ICH M7, Computational toxicology, Lead optimization, Off-target pharmacology, SEND format, Data sharing
1 Introduction

Computational methods (in silico models) are widely used in the pharmaceutical industry to optimize molecules during early drug development, not only for efficacy but also, in parallel, for their toxicological and drug disposition properties. It is the fine balance of target potency, selectivity, favorable ADME (absorption, distribution, metabolism, excretion), and (pre)clinical safety properties that ultimately leads to the selection and clinical development of a potential new drug [1, 2]. Because a clinical candidate needs rigorous preclinical optimization in many dimensions, the term multidimensional optimization (MDO) is often used to describe the intensive investigations during the first three to four years of drug development, from the identification of the target to the selection of the best drug development compound. The current MDO process combines in silico, in vitro, and in vivo techniques.
Emilio Benfenati (ed.), In Silico Methods for Predicting Drug Toxicity, Methods in Molecular Biology, vol. 2425, https://doi.org/10.1007/978-1-0716-1960-5_24, © The Author(s), under exclusive license to Springer Science+Business Media, LLC, part of Springer Nature 2022
In general, in silico tools have the intrinsic advantages of being fast and of not requiring the physical presence of the test compounds; they can therefore be applied very early in drug development. In theory, in silico models can be developed for any endpoint and organism, but the limited availability of sufficiently large, balanced, high-quality datasets is the main obstacle to reliable predictions. Key requirements for a useful in silico model are an excellent correlation with the in vitro/in vivo data (i.e., high sensitivity as well as high specificity) and ease of use and interpretation. In the past few years, computational toxicology prediction systems have tremendously increased their predictive power for endpoints such as genotoxicity, carcinogenicity, phototoxicity, phospholipidosis, GSH adduct formation, hERG inhibition, and CYP induction, but a major breakthrough is still missing for the more complex toxicological endpoints (e.g., hepatic, renal, and cardiac toxicity), for which sufficiently large datasets are lacking. These are the critical toxicity endpoints that need to be addressed in the coming years to weed out potential safety issues in the clinic. Recent initiatives and consortia (e.g., IMI/eTOX, IMI2/eTransafe, BioCelerate's TDS, ToxCast, and ToxBank) dealing with the sharing of clinical outcomes and preclinical in vivo toxicology studies, and with computational approaches, are gaining enormous traction, with an unprecedented potential to significantly improve these and other endpoint predictions and to fill the data gaps [3–5]. This contribution outlines general considerations on the main expert systems applied in toxicology and ADME for pharmaceuticals (rule-based and statistics-based models) and their application in the early drug development process, as well as their regulatory impact on the assessment of potential impurities arising in the manufacturing process [6]. Recent applications of machine learning methods, together with progress and future perspectives on the main challenge of predicting complex in vivo endpoints, are then summarized and discussed.
2 In Silico Methods for the Prediction of Toxicity

As already described in the introduction of this chapter, the thorough characterization of the safety profile of drug candidates is of great importance to ensure that no harm is posed to healthy volunteers and patients during and after clinical development, throughout the entire compound life cycle. Drug toxicity can manifest itself in a number of ways and may affect one or more target organs or biological processes. In particular, carcinogenicity and liver, renal, cardiovascular, reproductive, and genetic toxicities are among the most significant safety issues that can prevent drug candidates from progressing through clinical development or can cause the withdrawal of already marketed products.
Overall, between 20% and 30% of failures can be attributed to safety reasons [7–9]. Over the past few years, predictive computational approaches have found a significant role within drug discovery in helping scientists rank compound classes and prioritize in vitro and in vivo experiments. A number of factors have contributed to the increased importance of in silico methods in drug discovery: (1) wider availability of high-quality datasets (public domain; focused data sharing initiatives); (2) robust computational models that can provide reliable predictions [10]; (3) pressure to reduce animal testing; (4) the need to bring new drugs to the market faster and cheaper; (5) legislation on the assessment of potential genotoxic impurities; (6) a greater number of commercially available and open source software tools; (7) unprecedented traction in automation, artificial intelligence, and machine learning methods, fostered by the rapid increase in the data storage capacity and processing power of computers. The most widely used computational methods for the prediction of toxicity endpoints can be roughly divided into two main categories, rule-based and statistical-based systems, depending on the type of methods they use to make their classifications. In the same context, machine learning also needs to be mentioned. Technical aspects of the in silico methodologies and tools are not covered here, as other chapters describe the specific methodological areas in greater detail.

2.1 Rule-Based Systems
Computational tools included in this category store and manipulate knowledge to interpret information. They are often referred to as expert systems, which make use of a set of explicit rules (i.e., not implicitly embedded in a code) to make deductions and classifications. Such systems have the advantage that rules can be easily represented and developed by experts in the field of toxicology (or of any discipline the systems are applied to), rather than by IT specialists. In addition, solid expert rules can be derived from limited amounts of data, as long as they are sufficiently representative of specific chemical and biological spaces. Both commercial and open source systems are available within the rule-based methodologies and they include, among others, Derek Nexus [11–15], ToxTree [16–18], CASE Ultra Expert Rules [19], and Leadscope Expert Alert system [20, 21].
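The separation of explicit, expert-authored rules from the inference code can be illustrated with a small, self-contained sketch. This is a hypothetical toy, not the rule base or architecture of Derek Nexus, ToxTree, or any other tool named above; the alerts and the feature representation are invented for illustration:

```python
# Minimal rule-based (expert system) sketch: each rule pairs a
# human-readable name with a predicate over compound properties, so
# domain experts can author rules without touching the engine code.
# The alerts and the "features" representation are illustrative only.

RULES = [
    ("aromatic nitro group", lambda c: "nitro_aromatic" in c["features"]),
    ("primary aromatic amine", lambda c: "aromatic_amine" in c["features"]),
    ("epoxide", lambda c: "epoxide" in c["features"]),
]

def evaluate(compound):
    """Return the list of alerts that fire for one compound."""
    return [name for name, predicate in RULES if predicate(compound)]

compound = {"id": "CMP-001", "features": {"nitro_aromatic", "ketone"}}
alerts = evaluate(compound)
print(alerts)  # ['aromatic nitro group']
```

In a real system the predicates would be substructure (e.g., SMARTS) matches rather than feature-set lookups, but the design point is the same: the rule list, not the engine, carries the toxicological knowledge.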
2.2 Statistical-Based Systems
Quantitative structure–activity relationship (QSAR) models are regression or classification models used in the chemical and biological sciences and other disciplines. Like other regression models, QSAR regression models relate a set of “predictor” variables (X) to the potency of the response variable (Y), while classification QSAR models correlate the predictor variables to a category value of the response variable.
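For the regression case, the idea can be reduced to a tiny worked example: fit activity against a single calculated descriptor by ordinary least squares. All descriptor and activity values below are invented for illustration:

```python
# Univariate linear QSAR sketch: fit activity ~ a * descriptor + b by
# ordinary least squares. The numbers are illustrative, not real data.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx  # (a, b)

# e.g., logP as the single descriptor, pIC50 as the response
logp  = [1.0, 2.0, 3.0, 4.0]
pic50 = [4.1, 5.0, 6.1, 7.0]
a, b = fit_line(logp, pic50)

# Predict a new compound from its descriptor value
predicted = a * 3.5 + b
print(round(a, 2), round(predicted, 2))  # → 0.98 6.53
```

Real QSAR models use many descriptors and more robust estimators, but the structure is the same: descriptors in, an empirically fitted transformation, predicted property out.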
The QSAR approach can be generally described as the application of data analysis methods and statistics to the development of models that can accurately predict biological activities or properties of compounds based on their structures. Any QSAR method can be generally defined as the application of mathematical and statistical methods to the problem of finding empirical relationships (QSAR models) of the form Pi = k0(D1, D2, ..., Dn), where Pi are biological activities (or other properties) of molecules; D1, D2, ..., Dn are calculated (or, sometimes, experimentally measured) structural properties (molecular descriptors) of the compounds; and k0 is some empirically established mathematical transformation applied to the descriptors to calculate the property values for all molecules. The goal of QSAR modeling is to establish a trend in the descriptor values that parallels the trend in biological activity [22]. Both commercial and open source systems are available within the QSAR-based methodologies; they include, among others, Sarah Nexus [23–25], CASE Ultra [19, 26–29], Leadscope Model Applier [30], OECD Toolbox [31–33], Bioclipse [34–39], admetSAR [40], and Prous Institute Symmetry [41, 42].

2.3 Machine Learning
The machine learning approach may be described as the process of enabling computers to make successful predictions using prior experience. In drug discovery, as in other scientific fields, the primary objective is to model the relationship between a set of observable or measurable quantities (the inputs) and another set of variables correlated with them (the outputs). Once such a mathematical model is determined, it is possible to predict the value of the desired variables by measuring the observables. Many real-world phenomena, though, are too complex to model directly as a closed-form input–output relationship. Machine learning provides techniques that can automatically build a computational model of these complex relationships by processing the available data and maximizing a problem-dependent performance criterion. The automatic process of model building is called "training," and the data used for training purposes are called "training data." The trained model can provide new insights into how the input variables are mapped to the output, and it can be used to make predictions for novel input values that were not part of the training data. For some problems, the limitation of the approach is that a large amount of training data is often required to learn an accurate model. The performance of a specific model depends on many factors, such as the amount and quality of the training data, the complexity and form of the relationship between the input and output variables, and computational constraints such as available training time and memory. Depending on the problem, it is often necessary to try different models and algorithms to find the most suitable ones [43].
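The train-then-predict workflow described above can be made concrete with a deliberately tiny example: a 1-nearest-neighbor classifier "trained" on labeled descriptor vectors, then queried with a novel input. The descriptor values and labels are invented, and real pipelines would add feature scaling, cross-validation, and an applicability-domain check:

```python
# Minimal "training data -> model -> prediction" sketch using
# 1-nearest-neighbor. Descriptor vectors and labels are illustrative.

def train(records):
    # For nearest-neighbor methods, "training" is memorizing the data.
    return list(records)

def predict(model, x):
    def dist2(a, b):  # squared Euclidean distance
        return sum((p - q) ** 2 for p, q in zip(a, b))
    features, label = min(model, key=lambda r: dist2(r[0], x))
    return label

training_data = [
    ((0.2, 1.1), "non-toxic"),
    ((0.9, 3.0), "toxic"),
    ((0.3, 1.4), "non-toxic"),
    ((1.1, 2.8), "toxic"),
]
model = train(training_data)
print(predict(model, (1.0, 2.9)))  # → toxic
```

The example also makes the text's caveat visible: with only four memorized training points, predictions far from the training data are meaningless, which is why dataset size and coverage dominate model performance.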
Increasing the Value of Data Within a Large Pharmaceutical Company Through. . .

3 Assessment of Potential Genotoxic Impurities

3.1 ICH M7 Guideline

3.1.1 Background
In 2006 the European Medicines Agency Committee for Medicinal Products for Human Use (CHMP) released a “Guideline on the Limits of Genotoxic Impurities,” which describes an approach for assessing genotoxic impurities of unknown carcinogenic potential based on the TTC concept. In 2007 a Question & Answer document was published on the EMEA website addressing several aspects of the practical implementation of the recommendations contained in the guideline. Genotoxicity is a broad term that typically describes a deleterious action on cellular genetic material. Chemicals may induce DNA damage by interacting with it directly (e.g., alkylating agents) or by acting on non-DNA targets (mitotic spindle poisons, inhibitors of topoisomerase, etc.). For DNA-reactive genotoxins, the mechanism by which they induce genetic damage is assumed to follow a linear no-threshold model; on the other hand, for molecules not interacting directly with DNA, the existence of a threshold concentration required to induce the damage is by and large accepted [44]. Impurities that belong to the second category of substances can be regulated according to the ICH Quality Guideline Q3C [45], which includes Class 2 solvents. The thresholds, or permissible daily exposures (PDE), are calculated from the No Observed Effect Level (NOEL) obtained in the most relevant animal studies, with conservative conversion factors used to extrapolate the animal data to humans. The CHMP guideline suggests that the TTC concept should be applied to those genotoxic impurities that do not have sufficient evidence of a threshold-related mechanism of action. The reference values are taken from Kroes et al. [46], where a TTC of 0.15 μg/day is proposed for impurities presenting a structural alert for genotoxicity, corresponding to a 10⁻⁶ lifetime risk of cancer. In the case of pharmaceuticals, the guideline suggests that a 1 in 100,000 risk be applied, resulting in a TTC of 1.5 μg/day.
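The TTC figures just quoted follow from a linear scaling of acceptable intake with the accepted lifetime cancer risk (0.15 μg/day at 1 in 1,000,000; tenfold more at 1 in 100,000). A minimal sketch of that arithmetic, with a function name of our own invention:

```python
# Linear-risk scaling behind the TTC values cited in the guideline:
# 0.15 µg/day corresponds to a 1-in-1,000,000 lifetime cancer risk,
# so accepting a 1-in-100,000 risk allows a tenfold higher intake.
BASE_INTAKE_UG = 0.15   # µg/day at the base risk level (Kroes et al.)
BASE_RISK = 1e-6        # 1 in 1,000,000 lifetime risk

def ttc_for_risk(accepted_risk):
    """Scale the TTC intake linearly with the accepted lifetime risk."""
    return BASE_INTAKE_UG * (accepted_risk / BASE_RISK)

print(ttc_for_risk(1e-5))  # pharmaceutical setting: 1 in 100,000
```

For a 1 in 100,000 risk this returns 1.5 μg/day, the pharmaceutical TTC cited above.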
For drug substances, the identification thresholds above which impurities are required to be identified lie between 0.05% and 0.1%. ICH Guidelines Q3A(R) [47] and Q3B(R) [48] state that even though the identification of impurities is not necessary at levels lower than or equal to the identification threshold, “analytical procedures should be developed for those potential impurities that are expected to be unusually potent, producing toxic or pharmacological effects at a level not more than the identification threshold.” The guideline recommends carrying out a thorough evaluation of the synthetic route, along with the chemical reactions and conditions, with the aim of identifying reagents, intermediates, starting materials, and readily predicted side products that may be of potential concern. Once all potential impurities are theoretically identified
Alessandro Brigo et al.
and listed, an initial assessment for genotoxicity is carried out by a scientific expert using computer tools such as QSAR and knowledge-based expert systems. A thorough search of the literature and of internal archives (when applicable) also needs to be completed, as a number of intermediates and reagents have often already been tested in genotoxicity or carcinogenicity assays. The potential genotoxic impurities that may be present in an API are then classified into one of the five classes described by Müller et al. in 2006 [49]; the purpose is to identify those impurities that pose a high risk and need to be limited to very low concentrations. In 2006, a task force established under the umbrella of the US Pharmaceutical Research and Manufacturing Association (PhRMA) proposed for the first time the “staged TTC” concept to be applied to pharmaceuticals [49]. The task force was established in response to various clinical holds imposed by the FDA on investigational drugs in clinical trial phases, based on suspicions that they contained genotoxic impurities at levels potentially associated with a risk for the volunteers or patients involved in those trials [50]. The staged approach allows daily intakes of mutagenic impurities higher than the 1.5 μg defined by the lifetime TTC, namely, 10 μg (for a 6- to 12-month duration), 20 μg (3 to 6 months), 40 μg (1 to 3 months), and 120 μg (not more than 1 month), the so-called Less Than Lifetime (LTL) limits. The EMA adopted the staged TTC approach for limits of genotoxic impurities in clinical trials in the 2007 Q&A document [51] but, to be more conservative, reduced the staged TTC limits proposed in the PhRMA paper by a factor of 2. In 2008, the FDA issued a draft “Guidance for Industry on Genotoxic and Carcinogenic Impurities in Drug Substances and Products: Recommended Approaches” [52], which was largely similar to the EU guidance.
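The staged TTC tiers described above, and the EMA's factor-of-2 reduction, amount to a simple duration-based lookup. A hedged sketch (the function and constant names are ours; the values are those given in the text):

```python
# Staged TTC limits proposed by the PhRMA task force [49]; the EMA 2007
# Q&A halved these values. Beyond 12 months the lifetime TTC of
# 1.5 µg/day applies (it is not halved).
PHRMA_STAGED_TTC = [  # (maximum duration in months, daily intake in µg)
    (1, 120.0),
    (3, 40.0),
    (6, 20.0),
    (12, 10.0),
]

def staged_ttc(duration_months, ema=False):
    """Acceptable daily intake (µg/day) of a mutagenic impurity in a trial."""
    for max_months, limit in PHRMA_STAGED_TTC:
        if duration_months <= max_months:
            return limit / 2 if ema else limit
    return 1.5  # lifetime TTC applies beyond 12 months

print(staged_ttc(2))            # PhRMA limit for a 1-3 month trial
print(staged_ttc(2, ema=True))  # EMA limit, halved
```

For a two-month trial this yields 40 μg/day under the PhRMA proposal and 20 μg/day under the EMA scheme, matching the tiers quoted in the text.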
However, this document was never finalized, because in 2009 the topic of genotoxic impurities was adopted by ICH for the development of a new internationally harmonized guideline. Since the topic was considered to include both safety and quality aspects, the projected guideline was assigned to the M (multidisciplinary) series of the ICH process and designated ICH M7, with the title “Assessment and Control of DNA-Reactive (Mutagenic) Impurities in Pharmaceuticals to Limit Potential Carcinogenic Risk” [53]. In February 2013 a draft of the M7 guideline was published in the three ICH regions for public consultation (step 3 of the ICH process). The document was adopted as a step 4 ICH Harmonised Tripartite Guideline in June 2014 and is currently at step 5, having been adopted by CHMP on 25 September 2014 and issued as EMA/CHMP/ICH/83812/2013 [53].
3.1.2 Key Aspects of the ICH M7 Guideline
The ICH M7 guideline combines many of the principles set by the EU and the draft FDA guidelines on genotoxic impurities. Some aspects, though, have been updated, and clear recommendations can be identified. A thorough description of all key aspects of the ICH M7 guideline, which are covered elsewhere [50], is beyond the scope of the present contribution. It is nonetheless worthwhile mentioning a few of the critical aspects that the ICH M7 guideline does enforce:

1. Structure-based assessment of potentially mutagenic impurities has to be carried out using two in silico systems that complement each other: one rule-based and one statistics-based method (see Subheading 2 in this chapter).

2. The impurities classification system proposed by the ICH M7 guideline has been derived from the scheme proposed by Müller et al. in 2006 [49], which identifies five classes of impurities as a function of the data available to characterize their mutagenicity and carcinogenicity potential.

3. ICH M7 replaced the term “genotoxic impurities,” as applied by the EU Guideline on the Limits of Genotoxic Impurities, with the term “DNA-reactive impurities” in order to specify that DNA-reactive compounds (i.e., those that typically bind covalently to DNA, generating adducts which, if unrepaired, can lead to point mutations and/or strand breakage) are those that fall within the scope of the guideline. There is also the assumption that DNA-reactive (Ames-positive) compounds are likely carcinogens with no threshold mechanism.

4. For DNA-reactive (Ames-positive) compounds lacking rodent carcinogenicity data, a generic TTC value is applied as an acceptable intake level that poses a negligible risk of carcinogenicity.

5. If rodent carcinogenicity data are available for a (potentially) mutagenic impurity, the application of the TTC concept is not warranted, and a compound-specific calculation of acceptable levels of impurity intake is recommended, as described in more detail in Note 4 of the guideline [53].

6. Compound-specific calculations of acceptable intakes can be applied case by case to impurities that are chemically similar to a known carcinogen compound class (class-specific acceptable intakes), provided that a rationale for chemical similarity and supporting data can be demonstrated (Note 5) [44, 51].

7. The acceptable intakes derived from compound-specific risk assessments can be adjusted for shorter durations of exposure. The TTC-based acceptable intake of 1.5 μg/day is considered to be protective for a lifetime of daily exposure. To address Less Than Lifetime (LTL) exposures to mutagenic impurities in
Table 1 Acceptable intakes for an individual impurity

  Duration of treatment    ≤1 month   >1–12 months   >1–10 years   >10 years
  Daily intake [μg/day]    120        20             10            1.5
Table 2 Acceptable total daily intakes for multiple impurities

  Duration of treatment    ≤1 month   >1–12 months   >1–10 years   >10 years
  Daily intake [μg/day]    120        60             30            5
pharmaceuticals, a formula is applied in which the acceptable cumulative lifetime dose (1.5 μg/day × 25,550 days = 38.3 mg) is uniformly distributed over the total number of exposure days of the LTL exposure. This allows higher daily intakes of mutagenic impurities than would be the case for lifetime exposure, while still maintaining comparable risk levels for daily and nondaily treatment regimens. Table 1 summarizes the levels for different durations of treatment.

8. As far as multiple impurities are concerned, when there are more than two mutagenic (i.e., Ames-positive) or alerting impurities, total mutagenic impurities should be limited as described in Table 2 for clinical development and marketed products.

3.2 Performance of Commercial Systems on Proprietary Compounds
In silico methods for the prediction of mutagenic activity have been available for many years, and they have been continuously improved in terms of technology and prediction results, thanks also to the greater availability of high-quality data. The specific use of such in silico tools in the pharmaceutical industry, in the context of the evaluation of genotoxic impurities, has been summarized and reviewed by Sutter et al. [54]. The authors, representing a total of 14 pharmaceutical companies, compared the predictive value of the different methodologies analyzed in two surveys conducted in the US and European pharmaceutical industry: most pharmaceutical companies used a rule-based expert system as their primary methodology, yielding negative predictivity values of at least 78% in all participating companies. A further increase (>90%) was often achieved by an additional expert review and/or a second, statistics-based methodology. In the latter case too, an expert review was encouraged, especially when conflicting results were obtained. The conclusion was that a rule-based expert system complemented by either expert knowledge or a second (Q)SAR model is appropriate. Overall, the procedures for structure-based
Table 3 External validation sets

  Data set                     Number of compounds   Positive   Negative
  Roche Ames Standard          1335                  254        1081
  Roche Ames Microsuspension   1785                  190        1595
  LSDB                         4699                  2068       2631
  Hansen [55]                  2647                  1773       874
  Total                        10,466                4285       6181
assessment presented in the article by Sutter et al. [54] were already considered appropriate for regulatory submissions within the scope of ICH M7, which mandates the use of two different methodologies: one expert rule-based and one statistics-based. In order to comply with this guideline requirement, additional commercial in silico tools and novel models have recently been made available to the scientific community. Brigo and Muster [55] evaluated three expert rule-based systems (Derek Nexus v.4.0.5 [14], ToxTree v.2.6.6 [16], Leadscope Expert Alert v.3.2.4-1 [20]) and three statistics-based systems (Sarah v.1.2.0 [23], Leadscope Model Applier v.3.2.4-1 [30], and three models of CASE Ultra v.1.5.1.8 [19]: GT1_7B, SALM2013, and SALM2013PHARMA), both individually and in combination. The evaluation was carried out using a large validation set of Ames mutagenicity data comprising over 10,000 compounds, 30% of which are Roche proprietary data (Table 3). The Roche datasets include the vast majority of compounds (not only impurities) tested in the Ames standard [57] and microsuspension [58] protocols. All programs were applied as commercially available, without internal customization or follow-up expert knowledge. Individual systems showed adequate performance statistics on the public domain datasets (concordance: 74–95%; sensitivity: 58–99%; specificity: 51–96%); however, there was a consistently significant drop in sensitivity on the Roche datasets, down, in one case, to a single digit (concordance: 66–88%; sensitivity: 8–54%; specificity: 69–95%). All systems showed good performance on the public validation sets, partly owing to training set overlap, which reached 91% for Sarah. Expert rule-based tools showed lower specificity on public domain datasets than the statistics-based programs. Statistical tools left a much higher number of compounds (up to 39% in one case) outside their applicability domains, and hence not predicted.
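The performance figures quoted throughout this section (concordance, sensitivity, specificity, negative predictivity) all derive from the four cells of a confusion matrix. A short sketch of the standard definitions; the counts below are illustrative (their marginals match the Roche Ames Standard row of Table 3, but the TP/FN and TN/FP splits are invented):

```python
# Standard binary-classification statistics for an Ames prediction system.
# tp/fn = mutagens predicted positive/negative; tn/fp = non-mutagens
# predicted negative/positive. Counts are illustrative, not Roche results.
def ames_performance(tp, fn, tn, fp):
    """Return the usual validation statistics as percentages."""
    total = tp + fn + tn + fp
    return {
        "concordance": 100 * (tp + tn) / total,          # overall agreement
        "sensitivity": 100 * tp / (tp + fn),             # mutagens flagged
        "specificity": 100 * tn / (tn + fp),             # non-mutagens cleared
        "negative_predictivity": 100 * tn / (tn + fn),   # trust in a negative call
    }

stats = ames_performance(tp=180, fn=74, tn=950, fp=131)
print({k: round(v, 1) for k, v in stats.items()})
```

Negative predictivity is the statistic emphasized by Sutter et al., since in the ICH M7 setting a negative in silico call may replace an Ames test.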
To evaluate the performance of the combined approach recommended by the ICH M7 Guideline, the Roche validation sets have
Fig. 1 Performance of combined systems on the Roche Ames Standard dataset
been submitted to all possible combinations of one expert rule-based and one statistics-based system (Figs. 1 and 2). Compared with the individual performance of the systems on both Roche validation sets, all combinations improve sensitivity to consistently above 50%, up to 71% for the combination “LSMA Alerts + SALM2013.” As expected, specificity is generally lower than with individual systems, but the reduction is limited for the majority of combinations. In order to assess the prediction tools on chemicals that fall within the chemical space of potential genotoxic impurities, four subsets of both Roche validation sets were generated with molecular weights (MW) ≤400, ≤350, ≤300, and ≤250. These subsets cover the chemical space of the large majority of the potential genotoxic impurities tested at Roche over the past decade. All programs were tested against these subsets individually and in combination [55]. With individual systems, sensitivity shows a clear tendency to increase as MW decreases. For example, in the Roche Ames Microsuspension set, sensitivity improves as follows: Derek from 27% to 64%; Sarah from 51% to 76%; ToxTree from 42% to 85%; CASE Ultra SALM2013PHARMA from 45% to 71%. LSMA Alerts and LSMA Stats show an increase in sensitivity to 60% down to MW ≤300, but there is a dip to ~55% for both programs at MW ≤250. In general, sensitivity increases
Fig. 2 Performance of combined systems on the Roche Ames Micro dataset
significantly on the low-MW subsets for almost all programs and models. The only exception is the CASE Ultra SALM2013 model, which keeps the same sensitivity values throughout all subsets (between 29% and 33%) [55]. The evaluation of combined systems on the low-MW Roche subsets shows a significant increase in sensitivity, to over 90% for the MW ≤300 and ≤250 sets with several combinations. The increase in sensitivity is proportional to the decrease in MW; at the same time, there is a considerable decrease in specificity.
Fig. 3 Use of in silico tools and safety screening during the early drug development process
free energy of amphiphilicity (ΔΔG_AM) and logP value [60] of cationic amphiphilic drugs, and can be applied in a high-throughput mode. Further endpoints that can be used for on-the-fly predictions are teratogenicity, GSH adduct formation, irritation, and skin sensitization. At this early stage of development, the potential safety hazards identified by the application of an expert system in combination with a set of statistical models contribute to the overall compound profile, but are not used as a decision pathway (see Fig. 4).

4.2 Lead Identification (LI) Phase: Target Selected (TS) to Lead Series Identified (LSI)
The main goal during lead identification is to identify valid chemical templates, ideally multiple discrete series, for further optimization of efficacy and selectivity on the target. Besides the use of computational chemistry tools to calculate physicochemical properties and to perform virtual screening, structure-based design, and QSAR analysis of both the desired target and off-target activities, chemical structures are continuously analyzed in silico for possible structure-related safety concerns
Fig. 4 Illustration of different evaluation metrics for 5HT3 and Muscarinic M1 receptor models
to identify major issues with the templates. Insights into the toxicological potential of a scaffold or series of structures early in the drug discovery process can help medicinal chemists prioritize particular scaffolds. Components of the early avoidance of chemical-structure safety liabilities include predictions for genotoxicity, carcinogenicity, hERG channel blockade, reactive metabolite formation, phospholipidosis, structural similarity to problematic molecules, CYP induction, GSH adduct formation, and DMPK properties (cell penetration, microsomal stability, CYP3A inhibition). The in silico tools offer good guidance on what additional tests may be necessary or whether further characterization is warranted; however, they also have limitations [61]. DMPK properties play a major role during lead identification. Numerous commercially available tools for the prediction of metabolites exist, such as METEOR [12, 15], MetabolExpert [62], and MetaSite [63]. Most software packages correctly predict metabolites that are detected experimentally. However, a relatively high incidence of false metabolite predictions is common to most such computerized systems. In the hands of drug metabolism experts, these software packages have a certain value for hypothesis generation and for guiding experimental approaches to the identification of drug metabolites. However, the generation of additional new local rules, intended to predict the activity of a single enzyme
(and often only within a chemical series), can significantly improve prediction accuracy. With the fast-paced progression of data digitalization within our industry, and the greater emphasis given to advanced methodologies such as machine learning, additional in silico models keep being added to our toolbox and are constantly updated, leveraging the growing amount of data becoming available and taking advantage of the increasing degree of automation. Nonclinical toxicology has been identified in several cross-company studies as one of the primary reasons behind drug attrition [64–66]. Waring et al. demonstrated in their study that 240 out of 605 compounds were terminated for nonclinical safety reasons [65]. Some of these preclinical safety issues may be attributed to the nonselectivity of compounds: nonselective binding to targets other than the intended therapeutic target may cause a number of undesired adverse events. An in-depth explanation of the relationship between off-target activities and serious adverse events can be found in the work of Bowes and coauthors [67]. Polypharmacology and drug promiscuity are thus among the key challenges facing chemists during the synthesis of novel compounds in the LI/LO phases of drug development [68–70]. As a result, off-target screening has been adopted by most pharmaceutical companies as an early in vitro safety screen, in order to eliminate nonselective compounds at early stages of the drug development process and to minimize adverse events related to the promiscuous nature of chemical structures. Prediction of these off-target activities directly from the structure can be highly advantageous, as early knowledge of off-target activities (prior to the synthesis of the chemical structures) can guide chemists in the drug design process. Several efforts have been made to predict off-target activities using private or public databases [71, 72].
The optimized Roche internal off-target assay panel was published in 2019 [73]. This panel of 50 targets has been routinely used internally to screen ~4000 molecules in the early discovery phase (not all compounds are screened against all targets). A deep learning modeling approach was adopted to predict the off-target activities of this panel, as well as an automated machine learning (autoML) technique. The Tensorflow [74] (version 2.3) and keras [75] (version 2.4) packages were used to construct the deep learning models in the R software (version 3.5.1), while H2O Driverless AI [76] was used as the autoML framework. All the details of this work will be published elsewhere [77]. Here we summarize some of the most important challenges and factors impacting the construction of machine learning models, through a few case studies within the context of off-target pharmacology.
4.2.1 Case Study 1: The Choice of the Performance Metric and Its Critical Role in Overcoming Prediction Bias
Accuracy is one of the most widely used metrics for assessing the performance of a machine learning model. However, because the accuracy measure can be misleading, a trend has been growing toward using other metrics, such as balanced accuracy, the Matthews correlation coefficient (MCC), the area under the precision–recall curve (AUCPR), and other commonly applied performance metrics that are better suited to imbalanced datasets. For instance, only 50 compounds out of 2130 are active (considered positive samples) against the 5HT3 target, which makes the negative subset the majority of this dataset. Since accuracy estimates the overall fraction of correct predictions, a high accuracy value might reflect correct predictions for only the true negative compounds and a complete misprediction of the very few positive compounds, making it unsuitable for imbalanced datasets. Accuracy, and other metrics such as the area under the receiver operating characteristic (ROC) curve, might therefore be misleading. This situation is depicted in Fig. 4, where a large difference is seen between the different evaluation metrics for the 5HT3 target. Both accuracy (=0.98) and AUC (=0.83) overestimate the model performance, while other metrics such as balanced accuracy (0.6), MCC (0.37), and AUCPR (0.3) give a more realistic representation. These metrics eliminate the bias either by penalizing the true negative class (e.g., AUCPR) or by considering all four quadrants of the confusion matrix (balanced accuracy and MCC). On the other hand, only a small difference is seen in the performance metrics of the Muscarinic M1 receptor, which does not have the same imbalance between positive and negative samples (916 positives out of 2533 total compounds screened). It is therefore important to evaluate the nature of the dataset prior to modeling. The choice of an appropriate performance metric is necessary not only for evaluating the model but also for training it.
Optimizing the models toward a better balanced accuracy or another unbiased metric (other than accuracy) could greatly enhance the final performance and its usefulness in the drug discovery process.
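The effect described in this case study is easy to reproduce: on a 5HT3-like dataset (50 actives out of 2130), a trivial model that calls every compound inactive scores ~98% accuracy while balanced accuracy sits at the chance level and MCC is zero. The metric implementations below follow the standard definitions; the data are synthetic.

```python
# Why accuracy misleads on imbalanced data: evaluate a "majority-class"
# model that predicts every compound inactive on a 5HT3-like dataset.
import math

def binary_metrics(y_true, y_pred):
    """Return (accuracy, balanced accuracy, MCC) for 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    bal_acc = (sens + spec) / 2
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, bal_acc, mcc

y_true = [1] * 50 + [0] * 2080   # 50 actives out of 2130, as for 5HT3
y_pred = [0] * 2130              # model predicts every compound inactive
print(binary_metrics(y_true, y_pred))  # high accuracy, chance-level BA, MCC = 0
```

The same calculation run on a balanced dataset (such as the Muscarinic M1 case, 916/2533 positives) shows the metrics agreeing much more closely, which is the contrast drawn in Fig. 4.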
4.2.2 Case Study 2: Efficient Selection of the Machine Learning Method During Model Construction (Deep Learning vs Automated Machine Learning)
Automated hyperparameter optimization is one of the major advantages of autoML over Tensorflow models, which usually require significantly more time and resources. Some autoML tools do not require any programming expertise and are available through a user interface; the modeler is only required to provide the datasets as input, with no further processing needed. An example of such an interface is H2O Driverless AI, which, however, requires a license. Open-source autoML packages are also available, such as AutoGluon-Tabular [78], TPOT [79], AutoKeras [80], and others, which require a minimal level of programming skill. The choice of method is highly dependent on the complexity of the problem and the available resources (Table 5).
Table 5 Highlight of two of the major differences between Tensorflow and autoML

  Factors                       Tensorflow models (neural networks)   autoML (H2O Driverless AI)
  Hyperparameter optimization   Manual                                Automatic
  Skills required               High proficiency in R/Python          Basic knowledge of a programming language, or none
Table 6 Comparison between balanced accuracies of neural networks and autoML models for three targets

  Target                         Neural networks   autoML (H2O Driverless AI)
  Muscarinic M1 receptor         0.82              0.9
  Prostaglandin F receptor       0.4               0.4
  Vasopressin receptor 1 (V1A)   0.5               0.84
A simple neural network model was constructed in this study, and no grid search or hyperparameter optimization was performed. The automated ML models were constructed using a user-friendly autoML interface, H2O Driverless AI. How can we benefit from the difference in performance between the two approaches? As seen in Table 6, the Muscarinic M1 receptor was well predicted by both methods, with a balanced accuracy higher than 80%. This indicates that, in some cases, a simple autoML model is sufficient to address such a problem satisfactorily. Both methods failed to predict the Prostaglandin F receptor, which might indicate that the wrong predictions are attributable to the dataset rather than to the algorithm itself. On the other hand, the autoML model exhibited good performance for the Vasopressin receptor 1 (V1A) (BA = 0.84), while the neural network model performed poorly (BA = 0.5) for this target. Such a discrepancy in performance between the two methods could be interpreted as a failure of the neural network to converge, suggesting that hyperparameter optimization of the network could have improved the prediction. In this case, autoML served as a guide for a more resource-intensive method like deep learning, and could be used as a first choice in some cases to save time and resources.

4.2.3 Case Study 3: The Effect of Data Imbalance on the Model Performance
Data imbalance is considered one of the major challenges in addressing binary problems. A balance between the positive and negative samples plays a critical role during the model training and might have a high impact on the model robustness [81–83].
Fig. 5 Precision–recall curves for the Muscarinic (M1), Adenosine (A3), and 5HT2A receptors. Recall (true positive rate) is plotted on the x axis vs precision on the y axis
For off-target activity predictions, compounds that are capable of inhibiting the off-target and causing an adverse event (the positive samples) are very important to detect. We calculated the hit percent as the ratio between the positive samples and the total number of compounds screened against a target. A low percentage of positive samples might be difficult to predict. As shown in Fig. 5, the area under the precision–recall curve gradually decreases as the hit percent of the targets decreases. Targets with a higher hit percent, such as the Muscarinic receptor (36%), Adenosine receptor (29%), and 5HT2A (21%), demonstrate a good correlation with the AUCPR values, which are 0.9, 0.71, and 0.62, respectively. Overcoming data imbalance and improving model performance for such datasets can be achieved through several techniques, such as setting a weight for each class, applying nested cross-validation, penalizing the models, and several other approaches. These techniques might not be reasonable for extremely imbalanced datasets and are unlikely to improve performance significantly there. However, other, more sophisticated methods might be helpful; they are explained in the next section. Innovative methods are applied to overcome the challenges that machine learning models face, mainly data imbalance:

Oversampling
Data augmentation approaches, commonly used to increase the proportion of a minority class in an imbalanced dataset, have shown success in predictive modeling within several fields, such as image analysis, speech recognition, and biomedical signals [84–86]. Chemical structures can be oversampled through several approaches. The Synthetic Minority Oversampling Technique (SMOTE) [87] is one of the most commonly applied techniques,
where new samples are synthesized by interpolating between existing minority-class samples. Another novel approach is conformational oversampling (COVER), developed by Hemmerich and coworkers [88], which is based on the generation of multiple conformations of molecules in order to oversample or balance datasets in chemical classification problems. Both COVER and SMOTE have shown significant improvements in the context of toxicity prediction (e.g., on Tox21 data [89]) and can be used for off-target modeling [88].

Multitask Learning
Multitask learning, introduced by Rich Caruana, is based on the principle of inductive transfer, whereby information is shared between different tasks during model training [90]. For off-target modeling, a multitask learning approach might be beneficial, especially for targets with few active molecules and imbalanced datasets. This enrichment can be enabled by sharing information between similar targets, for example, targets with similar binding pocket sequences, similar mechanisms of binding, or similar pharmacophores. Choosing which tasks should be learnt simultaneously is an important condition for the success of this approach, and it can be very challenging [91, 92].
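The shared-representation idea behind multitask learning can be sketched in plain NumPy (rather than the Tensorflow/keras setup used in the actual study). The data below are synthetic, not the Roche panel: two related binary "targets" depend on the same hidden descriptor direction, and a single shared hidden layer feeding one logistic output head per target is trained jointly on both tasks.

```python
# Minimal multitask sketch: one shared hidden layer (the inductive-transfer
# part) with a separate logistic output head per task, trained jointly.
# All data are synthetic and for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# 300 "compounds" with 8 descriptors; two related off-targets whose
# activity depends on the same hidden direction w.
X = rng.normal(size=(300, 8))
w = rng.normal(size=8)
y_a = (X @ w > 0).astype(float)                                 # task A
y_b = (X @ w + 0.3 * rng.normal(size=300) > 0).astype(float)    # related task B
Y = np.column_stack([y_a, y_b])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.3, size=(8, 16))   # shared hidden layer
W2 = rng.normal(scale=0.3, size=(16, 2))   # one output head per task
lr = 1.0
for _ in range(1000):
    H = np.tanh(X @ W1)
    P = sigmoid(H @ W2)
    G = (P - Y) / len(X)                   # gradient of mean cross-entropy
    gW1 = X.T @ ((G @ W2.T) * (1.0 - H ** 2))
    gW2 = H.T @ G
    W1 -= lr * gW1
    W2 -= lr * gW2

# Training accuracy across both tasks
acc = float(((sigmoid(np.tanh(X @ W1) @ W2) > 0.5) == Y).mean())
print(round(acc, 2))
```

Because both output heads backpropagate through the same hidden layer, examples from the data-rich task shape the representation used by the data-poor one; that shared representation is the point of the technique, and is what a small single-task model would lack.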
Transfer Learning
Thousands of in vitro assays, including drug–target interaction data, are available in public databases such as ChEMBL [93], ToxCast [94], and BindingDB [95]. Pretraining our neural network models on a large dataset obtained from these databases could inform the networks prior to applying them to smaller or imbalanced datasets. Handling noisy data and variations in assay conditions between different data sources might, however, be troublesome.
Anomaly Detection
Anomaly detection is a technique for detecting rare or abnormal events, that is, outliers [96]. One of its most common applications is fraud detection; however, it can also be used to address biological questions, such as abnormalities in medical imaging [97, 98] and other clinical endpoints [99]. This approach could be attempted in off-target modeling, where the minority positive samples would be treated as outliers. The approaches described above can be used to overcome one of the major challenges presented throughout the case studies: data imbalance. The impact of data imbalance on model performance was highlighted in case study 3, where, with the same model structure, targets with a higher hit percent showed a better balanced accuracy than those with a lower hit percent. Improving the readout for imbalanced datasets might require additional precautions and more extensive training. It is therefore very important to assess the nature of a dataset prior to modeling in order to choose the appropriate machine learning method and avoid unnecessary consumption of resources. As observed in case study 2, it is
sometimes sufficient to use a neural network model with no grid search, or an autoML model, to obtain good predictions (e.g., for the Muscarinic M1 receptor). Some evaluation metrics, such as accuracy and AUC, are not able to reflect the balance between the sensitivity and specificity of the models (case study 1). Therefore, during model training and evaluation, it is more appropriate to use metrics such as MCC, AUCPR, and balanced accuracy to avoid prediction bias. Prediction of toxicity endpoints directly from a chemical structure can be highly advantageous, and one of the obvious next steps is to attempt the novel methods mentioned above to improve the current off-target models. This can aid the early detection of off-target activities, guide chemists during the drug design process, and minimize drug attrition. Experimental follow-up of potential issues is conducted to build and refine safety plans moving forward. Even if in vitro assays clearly disprove identified in silico alerts, further spot-checking of the respective endpoint will be conducted to avoid a structural liability creeping in. The in vitro results always overrule the in silico warnings, provided that the corresponding assays could be conducted under reliable conditions (e.g., solubility, stability). Chemical templates with identified and confirmed intrinsic metabolic and/or safety concerns will be eliminated (see Fig. 4).

4.3 Lead Optimization (LO) Phase: Lead Series Identified (LSI) to Clinical Candidate Selected (CLS)
The task of the LO phase is to take a lead and convert it into a candidate for preclinical evaluation. This phase is intensively accompanied by early safety in vitro screening in various areas: genotoxicity, hERG and other ion channels, cytotoxicity, hepatic toxicity, bone marrow toxicity, transporters, metabolite identification, metabolic stability, CYP induction/inhibition, reactive metabolites, and off-target/secondary pharmacology, with cross-species comparisons where applicable. Further screens might be applied based on the target liabilities or on already identified potential safety issues. If adverse in vitro activities appear, specific structure–activity relationships (SARs), so-called local SARs, will be established to support the discovery projects in optimizing the clinical candidates toward safety/DMPK in parallel to efficacy. At this stage, the first fit-for-purpose in vivo studies are conducted to address early target- or chemistry-related safety concerns. The first general toxicology studies are maximal tolerated dose (MTD) and dose-range finding (DRF) studies, generally performed in rodents and nonrodents. The value of performing exploratory drug safety studies before candidate nomination is to identify unwanted toxicities evident in a study of up to 14 days' duration, as well as any potential toxicities anticipated based on a known cause for concern. In the absence of findings, or in the presence of findings that are judged manageable, these studies provide greater comfort in selecting a molecule for advancement into development with a higher likelihood of success. Additional benefits of these studies are
658
Alessandro Brigo et al.
the identification of target organs to monitor in development and the selection of doses for the GLP toxicology studies. In addition, identification of the toxicity profile of a lead compound can be useful for a backup program, where the goal is often an improved safety margin. In silico safety concerns might be included as part of the PK/PD characterization in in vivo (disease) models, to extract safety-relevant information and to build confidence in safety before expanding into larger regulatory animal studies. During LO, every in silico alert is immediately followed up by the corresponding in vitro screen; where warranted, even in vivo studies might be frontloaded. To avoid late failures of optimized candidates, spot-checking of potential development candidates without alerts is conducted if the resources and throughput of the assay allow. In case of screening alerts, the creation of local SARs can significantly accelerate a project by streamlining the rounds of chemical improvement. Specific, tailor-made local models normally have a significantly higher accuracy if continuously updated with new incoming screening results.
Fig. 6 Downstream activities following in silico alerts in the drug development process. [The figure maps the discovery timeline (TS, TI/TC/TV with HTS/screening, Lead Identification, LSI, Lead Optimization, CLS, Candidate Selection, CCS, Phase 0) to five downstream steps: (1) automatic in silico alerts for all new chemical entities (e.g., mutagenicity, carcinogenicity, teratogenicity, GSH adducts), with safety alerts added to the molecular profile; (2) during the LI phase, endpoints such as genotoxicity/carcinogenicity, phototoxicity/phospholipidosis, reactive metabolites/CYP inhibition/induction, and hERG blockade, with frontloading of the respective in vitro screens to confirm or disprove alerts, and templates with intrinsic alerts dropped; (3, 4) alerting compounds prioritized for in vitro (and possibly in vivo) screening of the LI-phase endpoints, with local SARs established as necessary and mechanistic investigations triggered by structural warnings; (5) a detailed in silico profile and assessment of potential genotoxic impurities starting from the first GLP synthesis.]
Learnings and newly identified alerting substructures should be implemented in general rules and models (customized systems) to continuously improve the
performance of the computational tools used for drug optimization (see Fig. 6).
4.4 Phase 0: After Clinical Candidate Selected (CLS) to Entry into Humans (EiH)
The main use of in silico tools after the final candidate has been selected encompasses the assessment of potentially genotoxic impurities according to the ICH M7 guideline, as described in Chapter 3, as well as read-across and pathway analysis following an unexpected event in preclinical studies. Furthermore, a backup or fast-follower program will trigger dedicated in silico profiling and screening of the new molecules, based on the experience and identified issues of the frontrunner compound. Apart from the use of in silico tools to assess genotoxic impurities, there are no computational assessments that are mandatory requirements from regulatory agencies; however, if in silico models have been applied during drug development and have influenced the testing strategy or triggered additional investigations, the information should be included in regulatory documents and adequately described.
5 Prediction of Complex Endpoints
5.1 Challenges in the Prediction of the Outcome of In Vivo Safety Studies
When the goal is the prediction of the outcome of certain assays, such as the Ames assay [57], whose results can be roughly considered binary (i.e., a "yes" or "no" answer), in silico models have a higher chance of performing well than for more complex assays and studies. The mechanism of action of a molecule leading to a specific readout plays a critical role in the predictive performance of in silico models, and accounting for it is one of the biggest challenges of, for instance, QSAR models. "Do the descriptors have any physicochemical interpretation that is consistent with a known biological mechanism?" [100] is often a very difficult question to answer. In vitro chromosome damage (an assay used to establish the clastogenic potential of test compounds) can also be considered binary (i.e., the test item is "clastogenic" or "not clastogenic"). However, the mechanisms of action that may lead to clastogenicity are manifold and may involve the interaction of the compound with a number of proteins or enzymes, or the disruption of one or more biological pathways that ultimately lead to a clastogenic outcome. This complexity is reflected in the performance of the in silico prediction tool described in Table 4 for the chromosome damage endpoint. Before the update of the model based on extensive internal data and structures, the model sensitivity was in the single digits, showing that it was basically unable to identify any clastogenic compound within the validation set used. The update succeeded in increasing the sensitivity to 65%; nonetheless, we need to bear in mind that, due to the various mechanisms of action that can lead to
clastogenicity, minor structural changes within a chemical class can have a large impact on the mechanism of action (e.g., the interaction with one or more proteins may be hampered, hence changing the final outcome of the assay). Even greater challenges arise in predicting the outcome of single-dose and repeat-dose in vivo toxicity studies. In the pharmaceutical industry, such studies are typically used to identify a maximum tolerated dose (MTD) and a no-observed-adverse-effect level (NOAEL) for a test compound, in addition to identifying a general toxicity profile and the significant target organs that may show toxicity upon exposure to the tested compound. Since animal models are very complex and the number of readouts collected in such studies is extremely wide, the development of in silico models that can reliably predict such outcomes is extremely challenging. For example, a typical repeat-dose study requires a control group plus three dose groups: each animal is carefully examined for clinical observations throughout the in-life part, including body weight and food consumption measurements as well as some behavioral evaluations; clinical pathology values are collected at different time points; urine analysis is performed; macroscopic and microscopic examinations are carried out on a number of selected organs; and toxicokinetic values are calculated from the test item concentrations measured in blood samples collected throughout the study, which may be of different durations, from 5 days to 39 weeks (up to 2 years for the rodent bioassays for the evaluation of carcinogenicity), and in different species. The variations, permanent or transient, of the parameters and values briefly described above may depend on the pharmacological target, on the chemical structure and related physicochemical properties, or on background incidences due to adaptations or other factors, such as major differences in plasma exposures.
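The NOAEL definition used above can be made concrete with a minimal sketch: given per-dose-group flags for treatment-related adverse findings, the NOAEL is the highest tested dose below the lowest dose showing an adverse effect. The dose levels and findings here are invented for illustration, not real study data.

```python
# Hypothetical summarized repeat-dose findings: dose (mg/kg/day) -> whether
# any treatment-related adverse finding was observed at that dose.
findings = {
    0: False,    # control group
    10: False,
    50: False,
    250: True,   # e.g., hepatocellular necrosis observed
}

def noael(findings_by_dose):
    """Highest tested dose (excluding control) with no adverse findings,
    scanning upward and stopping at the first adverse dose."""
    result = None
    for dose in sorted(d for d in findings_by_dose if d > 0):
        if findings_by_dose[dose]:
            break
        result = dose
    return result

print(noael(findings))  # 50
```

In a real study this judgment is far more nuanced (adversity calls, transient vs. permanent effects, sex differences), which is precisely why modeling the endpoint is hard.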
Because of this variability, building an in silico model capable of predicting all these different "degrees of freedom" or "dimensions" is extremely challenging, in particular because the identification of unequivocal mechanisms of action for whatever findings have been identified is not trivial. In addition, the development of robust SARs using the outcome of such studies is difficult because of the limited amount of publicly available data; even within large pharmaceutical companies, the number of chemically similar compounds tested in such long and expensive studies for each investigated pharmacological target is small (fewer than five). This, of course, hampers the possibility of developing even local models, since the number of similar compounds, designed for the same target, undergoing the same type of studies is rather limited. Given the recent developments in artificial intelligence and machine learning methodologies, which could in theory be suited
to address questions and problems of such high multidimensionality, sophisticated in silico models may become available for the prediction of the potential findings identified in, for example, repeat-dose studies. However, all the limitations described above, in particular the limited availability of large standardized datasets along with the difficulty of conducting a proper validation, would make it difficult, within the pharmaceutical industry, to accept the predictions of such models for decision making on compound prioritization or as guidance for chemical optimization.
5.2 Data Collection, Organization, Availability and Interpretation for In Vivo Toxicity Studies
Within the industry, it has recently been recognized that the digitalization and automation of data open the door to opportunities such as the partial replacement of in vitro assays with in silico predictions and the reduction of the number of investigated molecules per project. A first step is the creation of comprehensive (i.e., across disciplines) and integrated FAIR (Findable, Accessible, Interoperable, Reusable) data, seamlessly connected to the labs and external sources generating the data stream [100]. In the context of preclinical safety, the digital consolidation of the outcomes of in vivo toxicity studies within appropriate platforms employing the right technology would allow the full exploitation of the knowledge that such data can provide. Large pharma organizations can typically count on many years of drug discovery and research conducted across several sites on a significant number of therapeutic areas, pharmacological targets, and molecules. This translates into a large amount of complex datasets, stored in different repositories or Laboratory Information Management Systems (LIMS), each designed specifically to accommodate the data type of interest (histopathology, clinical observations, clinical pathology, PK, etc.). The organization of such a wealth of information to generate specific knowledge from the integration of all of these data types has been considered several times in the past by many pharmaceutical companies. However, due to limited resources or inadequate technology, the outcome of such initiatives has often been disappointing. In more recent years, there has been a tremendous focus across industries, not only in pharma, on extracting knowledge and identifying patterns or trends from large amounts of data, be it omics, market research, public preferences on digital movie rentals [101], airplane estimated times of arrival, and others.
Many of these initiatives fall under the term "Big Data," generally reflecting the intention of large organizations to look deeper into their databases and assess whether an improvement in the way such data are organized, stored, made accessible, and mined may provide any advantage for the business, in terms of saving resources or increasing efficiency by surfacing hidden value. Along this line, Roche has been working on a number of "Big Data" projects across several areas of research and IT. One of them
had the goal of integrating all in vivo nonclinical safety data generated by the company over the past 30 years across three research sites, two in the USA and one in Switzerland. The goal was to ensure that all the different data types that are part of in vivo studies (i.e., histopathology, clinical observations, PK, clinical pathology, etc.) were brought together electronically in such a way that they could all be searched and made available at the same time to the user community. The scope of this platform, internally called SDI/RoDEo (i.e., Safety Data Integration/Roche Data Exploration), is to allow scientists to identify specific patterns of findings across species and their historical relevance, correlations between molecular structures and toxicological effects, and, eventually, to use the data to generate more reliable prediction algorithms. The application of a semantic data integration approach [102] for the harmonization of terms, formats, units, and taxonomy allowed the implementation of a nonclinical study warehouse including over 5000 studies of different types, which can be interrogated with very complex queries such as: "Which compounds showed spleen hyperplasia AND liver necrosis AND lung leukocytosis AND an AST increase > 50%?", returning an answer in a matter of seconds. The identification of studies and compounds matching the query above, in the absence of properly designed data integration efforts, would have been extremely labor-intensive and time-consuming, if possible at all. The SDI/RoDEo platform has been interfaced with other, already existing internal databases, such as the chemical structure and in vitro biology data repositories, to further expand the data integration beyond toxicology, allowing users to assess the compound profile in almost its entirety.
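The kind of cross-domain query quoted above becomes trivial once findings and clinical pathology live in one harmonized store. The sketch below illustrates the idea with an in-memory SQLite database; the schema, compound codes, and values are invented for illustration and have nothing to do with the actual SDI/RoDEo implementation.

```python
import sqlite3

# Hypothetical, much-simplified warehouse schema (illustrative only).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE finding (compound TEXT, organ TEXT, observation TEXT)")
con.execute("CREATE TABLE clinpath (compound TEXT, analyte TEXT, pct_change REAL)")
con.executemany("INSERT INTO finding VALUES (?,?,?)", [
    ("RO-A", "Spleen", "Hyperplasia"),
    ("RO-A", "Liver", "Necrosis"),
    ("RO-B", "Liver", "Necrosis"),
])
con.executemany("INSERT INTO clinpath VALUES (?,?,?)", [
    ("RO-A", "AST", 75.0),
    ("RO-B", "AST", 20.0),
])

# "Which compounds showed Spleen Hyperplasia AND Liver Necrosis
#  AND an AST increase > 50%?"
query = """
SELECT f1.compound FROM finding f1
JOIN finding f2 ON f1.compound = f2.compound
JOIN clinpath c ON f1.compound = c.compound
WHERE f1.organ = 'Spleen' AND f1.observation = 'Hyperplasia'
  AND f2.organ = 'Liver'  AND f2.observation = 'Necrosis'
  AND c.analyte = 'AST'   AND c.pct_change > 50
"""
print(con.execute(query).fetchall())  # [('RO-A',)]
```

Without integration, answering this would mean manually cross-referencing separate histopathology and clinical pathology systems study by study.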
In addition, the system enables direct access to real-time data collected during the course of a study and cross-study data comparisons, the latter emulating the digital data analysis approach that FDA has been following since the creation of the SEND format (see Subheading 6.1).
5.3 Possible Models Generation
As far as model development is concerned, the advantage of the platform described in Subheading 5.2 is the high granularity of the available data, down to the single-animal level. One of the challenges in the development of predictive models for complex endpoints, such as hepatotoxicity, is that the modeler is forced to make a generic classification (hepatotoxic vs. nonhepatotoxic), often neglecting safety margins (vs. pharmacological activity) and the doses at which specific toxicity is seen, and ignoring the specific findings and whether they are transient or not. This is because, more often than not, such information is not easily available. All these factors make such classifications relatively inaccurate: for example, Paracetamol (or Acetaminophen), an over-the-counter mild analgesic commonly used to relieve headaches and reduce fever, is commonly classified as hepatotoxic (as its overdose
can cause fatal liver damage [103]). However, at doses up to 4 grams per day in adults, Paracetamol is regarded as safe and can comfortably be used (at properly adjusted lower doses) even in infants. This example shows how critical and challenging a correct classification is: it is correct to classify Paracetamol as hepatotoxic, since an overdose would likely cause fatal liver failure; however, in a drug development setting, what type of decision can be made on a compound predicted to be hepatotoxic by a model based on information gathered, among others, from Paracetamol? Should this molecule be discontinued and any further investigation stopped before knowing what safety margins might exist with regard to its intended therapeutic indication? Disregarding this molecule immediately after a positive prediction bears the risk of losing a potentially valuable compound. Continuing the investigations to further profile the molecule for future clinical development may be the best option to reach a more solid, data-driven decision on its potential to become a drug. The bottom line is that, in this context, the prediction model will have a negligible impact on the decision. In order to strengthen the reliability of in silico models for the prediction of complex endpoints, all information generated by in vivo single- and repeat-dose studies should be made available in a clear and searchable way at the highest possible level of detail. This would allow experts to generate very specific models by making the correct compound classifications for very specific findings via a preliminary and careful data analysis. For example, it would be possible to have models for AST and ALT increases above 50% vs. control groups, or for the prediction of bilirubinemia, moving away from a nonspecific, for example, "hepatotoxicity" classification.
This approach would, in principle, also make the identification of sound mechanisms of action for the specific observed toxicities a bit easier to address.
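Deriving such an endpoint-specific label from per-animal data can be sketched in a few lines. The ALT values, group names, and 50% threshold below are illustrative; a real pipeline would also handle per-sex groups, time points, and statistical significance.

```python
from statistics import mean

# Hypothetical per-animal ALT values (U/L) by dose group (invented numbers).
study = {
    "control": [32, 35, 30, 33],
    "low":     [34, 31, 36, 33],
    "high":    [55, 61, 58, 60],
}

def alt_increase_label(study, threshold_pct=50):
    """Flag each dose group positive if its mean ALT exceeds the control
    mean by more than threshold_pct percent -- a specific, interpretable
    endpoint rather than a generic 'hepatotoxic' classification."""
    ctrl = mean(study["control"])
    return {group: (mean(values) - ctrl) / ctrl * 100 > threshold_pct
            for group, values in study.items() if group != "control"}

print(alt_increase_label(study))
```

A model trained on labels like these targets one well-defined finding at one dose level, rather than collapsing everything into a single compound-level class.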
6 (Near) Future Perspectives
6.1 SEND Model and Data Exchange with FDA
On December 18, 2014, FDA issued the binding guidance titled “Providing Regulatory Submissions In Electronic Format—Standardized Study Data” [104] that requires Investigational New Drug (IND), New Drug Application (NDA), Abbreviated New Drug Application (ANDA) and Biologics License Application (BLA) submissions to be made in a standardized electronic format. The Clinical Data Interchange Standards Consortium (CDISC) Standard for Exchange of Nonclinical Data (SEND) is intended to guide the structure and format of standard nonclinical datasets for interchange between sponsors and Contract Research Organizations (CROs) and for submission to the US FDA.
The current version of the SEND Implementation Guide (SENDIG v.3.1.1) is designed to support cardiovascular and respiratory studies, single-dose general toxicology, repeat-dose general toxicology, and carcinogenicity studies. The guidance requires submission of nonclinical safety studies in SEND format for the study types currently supported. In the near future, the standard will be expanded to include additional study types, such as developmental and reproductive toxicology. The guidance further stipulates that published FDA-specific SEND validation rules will be enforced for all submitted datasets. The agency may refuse to file (for NDAs and BLAs) or receive (for ANDAs) an electronic submission that does not have study data in conformance with the required standards. Supported studies (included in NDA, ANDA, and certain BLA submissions) that started after December 17, 2016, must be submitted in SEND. For IND submissions, supported studies starting after December 17, 2017, must be submitted in SEND. Historically, nonclinical safety data have been provided as tabulated data within PDF study reports. Original electronic data, generated in-house, are normally stored on the originating LIMS systems until archived. In the case of CRO studies, original electronic data are typically not made available unless explicitly requested by the sponsor. The FDA now requires that, in addition to the PDF reports, the original electronic data also be submitted in SEND format. While it is possible to build a SEND dataset manually, the process is labor-intensive, error-prone, and very difficult to validate. Given that the data comprising a study may come from multiple sources, the challenge becomes unworkable. An automated or semiautomated computerized system that can accurately and consistently transform original non-SEND data from multiple sources into the SEND standard, and validate SEND data following the published rules, is required.
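To make the transformation step concrete, here is a minimal sketch mapping hypothetical in-house LIMS body-weight rows onto a subset of SEND Body Weight (BW) domain variables. The LIMS row layout is invented, and only a few BW variables are shown; a production converter would cover all required variables and validate against the published conformance rules.

```python
# Hypothetical in-house LIMS export rows (illustrative format and values).
lims_rows = [
    {"study": "TOX001", "animal": "A001", "day": 1, "weight_g": 245.0},
    {"study": "TOX001", "animal": "A001", "day": 8, "weight_g": 252.5},
]

def to_send_bw(rows):
    """Map LIMS body-weight rows to a subset of SEND BW domain variables."""
    out = []
    for seq, r in enumerate(rows, start=1):
        out.append({
            "STUDYID": r["study"],
            "DOMAIN": "BW",
            "USUBJID": f'{r["study"]}-{r["animal"]}',  # unique subject ID
            "BWSEQ": seq,
            "BWTESTCD": "BW",              # test short code: body weight
            "BWORRES": str(r["weight_g"]), # result as originally collected
            "BWORRESU": "g",               # original result unit
            "BWDY": r["day"],              # study day of measurement
        })
    return out

print(to_send_bw(lims_rows)[0]["USUBJID"])
```

The hard part in practice is not this mapping itself but doing it consistently across every domain and every source system, which is why automation is essential.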
Oversight, tools, and processes for ensuring that source datasets are collected, curated, transformed to SEND, and made available for submission in an effective manner are also required. FDA pharm/tox reviewers have been analyzing the submitted study reports by manually extracting the tabulated data contained in the appendices of the PDF documents and loading them into whichever tools they see fit for visualizing and reviewing them. This first step is labor-intensive and time-consuming. With the guidance for e-submissions, FDA reviewers have the opportunity to receive the study data directly, in the appropriate format, in one single platform. FDA will visualize the electronic data, run their analyses, and draw their conclusions on the studies under review. This approach allows FDA reviewers to save time on data curation and formatting and to free resources for more in-depth scientific analyses, also leveraging the large amount of
information and knowledge that will be continuously captured within the platform over the coming years. Since clinical data are also electronically exchanged via standardized models (www.cdisc.org), it can be expected that one day clinical and nonclinical data will be integrated under one single platform, which would represent a significant milestone in the translational medicine arena.
6.2 Data Sharing Initiatives
Analyzing the reasons for previous failures and exploiting the lessons learned should help improve the efficiency of the clinical development of new drugs and their safety profiles. So far, preclinical study reports have rarely been stored in a format that supports data mining or statistical analysis. Pharmaceutical companies have recognized these hidden treasures in their archives and started internal work to improve the retrieval of their data. It would clearly be of enormous benefit to the whole industry to analyze these data across multiple companies in order to expand the chemical and biological space. However, extracting these data from the raw data and the reports and building such a database requires considerable investment. Recent advances achieved in international initiatives, including IMI's eTOX and eTRANSAFE projects, have shown that the sharing of preclinical and clinical data, both private and public, is achievable through the combination of legal (IP), IT, and honest-broker concepts ([3, 105], see Fig. 7).
6.2.1 The eTOX Project
The eTOX project, an EU-funded consortium, aimed to collect, extract and organize preclinical safety data from pharmaceutical
Fig. 7 The overall setup of the eTOX project
industry legacy study reports and publicly available toxicology data into a searchable database, to facilitate data mining and the development of innovative in silico models and software tools to predict potential safety liabilities of small molecules. The eTOX consortium consisted of 13 pharmaceutical companies, 11 academic institutions, and 6 SMEs working together under the sponsorship of the Innovative Medicines Initiative (IMI) between 2010 and 2016. The participating partners embraced expert knowledge in computational modeling, toxicology, pathology, and database design, liaising within the project in an integrative working environment. After establishing effective intellectual property (IP) protection for data sharing within an "honest broker" approach (see Fig. 7), the project was able to compile a unique, well-curated dataset of more than 8000 study reports, corresponding to ca. 2000 test compounds. The concept of dividing the results from the legacy reports of the pharmaceutical companies into different "confidentiality classes" was fundamental to facilitating data sharing and overcoming IP and legal hurdles. Public data (class 1) are accessible to the public on request; nonconfidential data (class 2) are open to eTOX consortium members; confidential data (class 3) are only accessible within the consortium under an additional secrecy agreement; and private data (class 4) are reserved for the EFPIA data owners, but can be shared for model generation on request. Treatment-related findings have been classified within the database, reflecting the interpreted study outcome of every report. A suite of ontologies, built through OntoBrowser and released by eTOX to the public domain, enables the user to directly compare observed effects or toxicities of chemically similar structures (read-across).
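The four confidentiality classes amount to a simple access rule set, which can be sketched as follows. The role names and function are illustrative inventions to express the rules stated above, not code from the eTOX project.

```python
# Sketch of the eTOX four-class confidentiality scheme as an access rule.
# Roles (illustrative): 'public', 'consortium' (eTOX member),
# 'consortium_secrecy' (member with additional secrecy agreement),
# 'data_owner' (EFPIA owner of the dataset).
RULES = {
    1: {"public", "consortium", "consortium_secrecy", "data_owner"},
    2: {"consortium", "consortium_secrecy", "data_owner"},
    3: {"consortium_secrecy", "data_owner"},
    4: {"data_owner"},  # class 4 is sharable for model generation only on request
}

def can_access(confidentiality_class, requester_role):
    return requester_role in RULES[confidentiality_class]

print(can_access(3, "consortium"))          # False
print(can_access(3, "consortium_secrecy"))  # True
```

Encoding the policy this explicitly is what made automated, auditable sharing across competing companies feasible.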
A new in silico tool, eTOXsys, has been developed with a single user interface, which manages search queries on the high-quality preclinical database and organizes requests to a steadily growing collection of independent prediction models. Aspects of IP rights for data sharing, the definition of ontologies, the design of the database structure, the development of in silico models, data analysis, validation, and sustainability were key aspects of the eTOX project.
6.2.2 The eTRANSAFE Project
Enhancing TRANslational SAFEty Assessment through integrative knowledge management (eTRANSAFE) is a 5-year research project, started in September 2017 and funded by the IMI Joint Undertaking (IMI2) together with the pharmaceutical industry. The eTRANSAFE project aims to develop an integrated data infrastructure and innovative computational methods and tools (the eTRANSAFE ToxHub) to significantly improve the predictivity and reliability of translational safety assessment during the drug development process. Following but extending the objectives of eTOX, the eTRANSAFE project is focused on the challenging field of translational safety evaluation
Fig. 8 The eTRANSAFE project concept
and aims to provide in silico solutions for identifying when, and to what extent, preclinical toxicological observations can be predictive of clinical adverse drug reactions. The project takes the nonclinical safety data and links them to clinical readouts and observed effects in humans. The project objectives include the development of databases containing preclinical and clinical data, computational systems for translational analysis, including tools for data query, analysis, and visualization, as well as computational models to explain and predict drug safety events. The concept, components, and tools of the project are depicted in Fig. 8 [106, 107]. The core aim of the project is to extract unbiased knowledge from the wealth of data accumulated on both sides, clinical and nonclinical, to analyze the true predictive power of animal studies and to enable a general correspondence, or translation, between these two worlds. Previous attempts have suffered from the limited number of preclinical studies available, usually sensitive proprietary data accessible only to their authors. One key activity of the eTRANSAFE project in this respect has been the development of a mapping between preclinical (SEND and legacy studies) and clinical (e.g., MedDRA) ontologies applying the so-called Rosetta Stone strategy. This approach is the first attempt to systematically map semantic concepts between preclinical and clinical data related to toxicity, instead of relying on the subjective personal expertise and practices of experts on both sides. The mapping has been implemented in a computational tool that provides semantic services to the eTRANSAFE system. The open architecture implemented in the eTRANSAFE system is expected to facilitate its maintenance and development, and to flexibly allow expansion with additional data sources, tools, and mapping rules [107].
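At its simplest, such a "Rosetta Stone" mapping is a many-to-many lookup from preclinical finding terms to clinical terms. The term pairs below are simplified illustrations in the spirit of the approach, not entries from the actual eTRANSAFE mapping.

```python
# Illustrative preclinical-to-clinical term mapping (invented pairs; a real
# mapping would use controlled vocabularies on both sides, e.g., MedDRA
# preferred terms on the clinical side).
PRECLIN_TO_CLINICAL = {
    "hepatocellular necrosis": ["Hepatic necrosis", "Hepatotoxicity"],
    "qt prolongation": ["Electrocardiogram QT prolonged"],
    "bone marrow hypocellularity": ["Bone marrow failure", "Pancytopenia"],
}

def translate(preclinical_term):
    """Return candidate clinical terms for a preclinical finding,
    or an empty list if no mapping exists."""
    return PRECLIN_TO_CLINICAL.get(preclinical_term.lower(), [])

print(translate("Hepatocellular necrosis"))
```

Systematizing these links is what allows preclinical findings and clinical adverse drug reactions to be queried and compared computationally rather than by expert recollection.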
Among many different aspects of this project, like the preclinical and clinical data gathering, ontologies, the concept and architecture of the ToxHub, (mechanistic) modeling tools, and
overarching data sharing policies, which are described elsewhere, the importance of text mining approaches needs to be emphasized. Over the last decades, the pharmaceutical industry has generated a vast amount of knowledge on the safety of drugs, but its potential to support decision-making in the drug development process remains nearly unused, as most of this data is only available as unstructured textual documents with a variable degree of digitization. This limitation can be avoided, or at least mitigated, by relying on text mining techniques to automatically extract relevant, structured data from textual documents of interest, thus supporting drug research and development activities. Extracting, normalizing, and harmonizing ontologies of the raw data from the LIMS systems is one challenge; however, exploiting the knowledge contained in the study reports is extremely important, as the treatment-related effects are stored only there. A continued effort is needed to identify, capture, and standardize findings related to drug treatment (i.e., safety findings) by mining legacy preclinical toxicology reports and incoming SEND studies. The ultimate goal is to develop text mining tools to (semi)automatically populate databases. Importantly, these tools would not replace human experts, but support and facilitate the identification of safety findings that can later be validated by experts, thus expediting the overall process [107]. In order to capture the expert conclusions of animal toxicology studies in a consistent, structured, machine-readable format, the concept of the Study Report (SR) domain has been developed as a proposed additional domain for SEND, which can be integrated into database schemes designed to store SEND data. These expert conclusions are not captured in SEND or legacy raw data.
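To illustrate what such a machine-readable expert conclusion might look like, here is a sketch of a hypothetical SR-domain record. The SR domain is only a proposed addition to SEND, so the variable names below are illustrative guesses in SEND naming style, not the actual proposal.

```python
# Hypothetical Study Report (SR) domain record capturing a study director's
# expert conclusion in structured form (variable names are illustrative).
sr_record = {
    "STUDYID": "TOX001",
    "DOMAIN": "SR",
    "SRFINDING": "Hepatocellular necrosis",
    "SRORGAN": "Liver",
    "SRTRTREL": "Y",            # judged treatment-related by the study director
    "SRDOSE": "250 mg/kg/day",  # lowest dose at which the finding was concluded
}

def treatment_related(records):
    """Filter structured expert conclusions to treatment-related findings."""
    return [r for r in records if r["SRTRTREL"] == "Y"]

print([r["SRFINDING"] for r in treatment_related([sr_record])])
```

The point of the structure is that the expert's adversity judgment, absent from the raw measurements, becomes directly queryable alongside the SEND data.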
To achieve the goal that treatment-related findings are always stored with all the detailed raw data of a preclinical animal study, the SR-domain application may also enable study directors to record significant study findings directly within an SR-domain upon the conclusion of a study. Ideally, all electronic exchange files summarizing the outcome of a preclinical safety study should contain an SR-domain [108]. The ultimate deliverable of the eTRANSAFE project will be a new and innovative software platform, the so-called ToxHub, which centralizes access to all data sources (SEND and preclinical legacy studies, clinical data, off-target panels) and dedicated exploitation modules, allowing the highest degree of flexibility and deployment in different ways. This modular architecture imposes the use of an abstraction layer wrapping heterogeneous data sources to allow simultaneous querying, the so-called primitive adapter, which provides a single API for accessing the data and allows extraction of data from different sources in formats suitable for integrated data analysis and visualization.
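The primitive-adapter idea described above can be sketched as a single query API fanning out to heterogeneous sources. The class and source names are illustrative inventions, not the ToxHub interfaces.

```python
# Hedged sketch of a "primitive adapter": one query API wrapping
# heterogeneous data sources (all names and data are illustrative).
class SendStudySource:
    def __init__(self):
        self._rows = [{"compound": "X1", "finding": "Liver necrosis"}]
    def query(self, compound):
        return [r for r in self._rows if r["compound"] == compound]

class ClinicalSource:
    def __init__(self):
        self._rows = [{"compound": "X1", "adr": "Hepatic failure"}]
    def query(self, compound):
        return [r for r in self._rows if r["compound"] == compound]

class PrimitiveAdapter:
    """Single entry point that fans a query out to all registered sources
    and returns the results keyed by source."""
    def __init__(self, sources):
        self.sources = sources
    def query(self, compound):
        return {type(s).__name__: s.query(compound) for s in self.sources}

hub = PrimitiveAdapter([SendStudySource(), ClinicalSource()])
print(hub.query("X1"))
```

New sources can then be plugged in behind the same API without touching the analysis and visualization modules, which is the maintainability argument made for the open architecture.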
6.2.3 The Virtual Control Groups (VCGs) Project
Although decisions may vary from company to company, it became evident that the most sensitive information in datasets or studies is the chemical structure, disclosure of the target or indication, and, in some cases, specific toxicological findings, particularly when these are rare or can be related to the pharmacological mode of action. There is a common understanding that some data can be shared without any restriction because they cannot be linked back to an individual test item or mechanism. This particularly applies to data from control animals. Sharing data from control animals opens the door for completely new analyses and approaches to spare animals in studies. Sharing legacy data from in vivo toxicity studies offers the opportunity to analyze the variability of control groups stratified by strain, age, duration of study, vehicle, and other experimental conditions. Historical animal control group data collected in a repository could be used to construct virtual control groups (VCGs) for toxicity studies. VCGs are an established concept in clinical trials (e.g., background incidence of tumors in the field of carcinogenicity studies), but the idea of replacing living beings with virtual datasets has so far not been introduced into the design of regulatory animal studies. The use of VCGs has the potential to reduce animal use by 25% by replacing control group animals with existing randomized datasets. Prerequisites for such an approach are the availability of large and well-structured control datasets as well as thorough statistical evaluations. The foundation of data sharing has been laid within the eTOX and eTRANSAFE projects. To establish proof of principle, participating companies have started to collect control group data for subacute preclinical studies and are characterizing these data for their variability.
Essentially, all routine endpoints covered in an oral repeated-dose toxicity study are included, namely study definition, animal data, clinical chemistry, hematology, urinalysis, organ weights, gross pathology, and histopathology findings. In a second step, the control group data will be shared among the companies and cross-company variability will be investigated. In a third step, a set of studies will be analyzed to assess whether the use of VCG data would have changed the outcome of the study compared with the real control group [109]. The ultimate goals of the database are to reduce the number of control group animals or to replace the control group entirely with virtual control animals drawn from the database (Fig. 9). A prerequisite for achieving these goals is a thorough understanding of the data variability of the laboratory performing the studies, together with clear rules and defined thresholds of variability that adequately safeguard the sensitivity and specificity of the in vivo studies with regard to whether an observation is treatment-related, that is, compound-related, or not. Furthermore, such a large cross-company dataset can be used to systematically assess endpoint variability in control animals, to explore the role of experimental factors, and to improve the quality and conclusions of preclinical studies. In conclusion, the VCG concept could be one of the first compelling innovations of the cross-company data-sharing activities, and its success would enable a significant reduction of animal use in repeated-dose toxicity studies, thus substantially contributing to the 3R concept.

Alessandro Brigo et al.

Fig. 9 Five steps to replace control animals by VCGs
References

1. Müller L, Breidenbach A, Funk C, Muster W, Paehler A (2008) Strategies for using computational toxicology methods in pharmaceutical R&D. In: Ekins S (ed) Computational toxicology: risk assessment for pharmaceutical and environmental chemicals. Wiley, Hoboken, NJ, pp 545–579
2. Muster W et al (2008) Computational toxicology in drug development. Drug Discov Today 13(7–8):303–310
3. Cases M et al (2014) The eTOX data-sharing project to advance in silico drug-induced toxicity prediction. Int J Mol Sci 15(11):21136–21154
4. Kavlock R (2009) The future of toxicity testing—the NRC vision and the EPA's ToxCast program national center for computational toxicology. Neurotoxicol Teratol 31(4):237–237
5. Kohonen P et al (2013) The ToxBank data warehouse: supporting the replacement of in vivo repeated dose systemic toxicity testing. Mol Inform 32(1):47–63
6. Brigo A, Muster W (2016) The use of in silico models within a large pharmaceutical company. In: Benfenati E (ed) In silico methods for predicting drug toxicity. Methods in molecular biology, vol 1425. Humana Press, New York, NY
7. Arrowsmith J (2011) Trial watch: phase III and submission failures: 2007–2010. Nat Rev Drug Discov 10:87
8. Arrowsmith J, Miller P (2013) Trial watch: phase II and phase III attrition rates 2011–2012. Nat Rev Drug Discov 12:569
9. Arrowsmith J (2011) Trial watch: phase II failures: 2008–2010. Nat Rev Drug Discov 10:328–329
10. Hillebrecht A et al (2011) Comparative evaluation of in silico systems for Ames test mutagenicity prediction: scope and limitations. Chem Res Toxicol 24:843–854
11. Sanderson DM, Earnshaw CG (1991) Computer prediction of possible toxic action from chemical structure; the DEREK system. Hum Exp Toxicol 10:261–273
12. Greene N, Judson PN, Langowski JJ, Marchant CA (1999) Knowledge-based expert systems for toxicity and metabolism prediction: DEREK, StAR and METEOR. SAR QSAR Environ Res 10:299–314
13. Judson PN (2006) Using computer reasoning about qualitative and quantitative information to predict metabolism and toxicity. In: Testa B, Kramer SD, Wunderli-Allespach H, Volkers G (eds) Pharmacokinetic profiling in drug research: biological, physicochemical, and computational strategies. Wiley, New York, pp 183–215
14. Derek Nexus (2015) http://www.lhasalimited.org/products/derek-nexus.htm
15. Lhasa Limited (2015) Derek Nexus: negative predictions for bacterial mutagenicity. https://www.lhasalimited.org/products/derekfeatures-and-benefits.htm
16. ToxTree version 2.6.6 (2015) https://ec.europa.eu/jrc/en/scientific-tool/toxtreetool
17. Pavan M, Worth AP (2008) Publicly-accessible QSAR software tools developed by the Joint Research Centre. SAR QSAR Environ Res 19:785–799
18. Benigni R, Bossa C (2008) Structure alerts for carcinogenicity, and the Salmonella assay system: a novel insight through the chemical relational databases technology. Mutat Res 659:248–261
19. CASE Ultra version 1.5.2.0 (2015) http://www.multicase.com/case-ultra
20. Leadscope Expert Alerts version 3.2.4-1 (2015) https://www.leadscope.com/genetox_expert_alerts/
21. Leadscope® Genetox Expert Alerts White paper (2014) http://www.leadscope.com/white_papers/Leadscope_alerts_white_paper.pdf
22. Tropsha A (2010) Best practices for QSAR model development, validation, and exploitation. Mol Inform 29:476–488
23. Sarah Nexus (2015) http://www.lhasalimited.org/products/sarah-nexus.htm
24. Hanser T, Barber C, Rosser E, Vessey JD, Webb SJ, Werner S (2014) Self organising hypothesis networks: a new approach for representing and structuring SAR knowledge. J Cheminform 6:21
25. Sarah Nexus Methodology (2015) https://www.lhasalimited.org/products/sarah-nexus.htm#Confidence
26. Klopman G (1992) A hierarchical computer automated structure evaluation program. Quant Struct Act Relat 11:176–184
27. Klopman G (1984) Artificial intelligence approach to structure-activity studies. Computer automated structure evaluation of biological activity of organic molecules. J Am Chem Soc 106:7315–7321
28. Chakravarti SK, Saiakhov RD, Klopman G (2012) Optimizing predictive performance of CASE Ultra expert system models using the applicability domains of individual toxicity alerts. J Chem Inf Model 52:2609–2618
29. Shannon CE (1948) A mathematical theory of communication. Bell Syst Tech J 27:379–423
30. Leadscope Model Appliers (2015) http://www.leadscope.com/model_appliers/
31. van Leeuwen K, Schultz TW, Henry T, Diderich B, Veith GD (2009) Using chemical categories to fill data gaps in hazard assessment. SAR QSAR Environ Res 20:207–220
32. The OECD QSAR Toolbox (2015) https://www.oecd.org/chemicalsafety/risk-assessment/oecd-qsar-toolbox.htm
33. OECD (2015) Toolbox guidance document
34. Bioclipse (2015) http://www.bioclipse.net/
35. OpenTox (2015) http://www.opentox.org/
36. Kazius J, McGuire R, Bursi R (2005) Derivation and validation of toxicophores for mutagenicity prediction. J Med Chem 48:312–320
37. Kuhn T, Willighagen EL, Zielesny A, Steinbeck C (2010) CDK-Taverna: an open workflow environment for cheminformatics. BMC Bioinformatics 11:159
38. Steinbeck C, Hoppe C, Kuhn S, Floris M, Guha R, Willighagen EL (2006) Recent developments of the chemistry development kit (CDK)—an open-source java library for chemo- and bioinformatics. Curr Pharm Des 12:2111–2120
39. Ekins S (2014) Progress in computational toxicology. J Pharmacol Toxicol Methods 69:115–140
40. Cheng A, Li W, Zhou Y, Shen J, Wu Z, Liu G et al (2012) admetSAR: a comprehensive source and free tool for assessment of chemical ADMET properties. J Chem Inf Model 52:3099–3105
41. Prous Institute Symmetry (2015) https://prousresearch.com/#portfolio
42. Valencia A, Prous J, Mora O, Sadrieh N, Valerio LG Jr (2013) A novel QSAR model of Salmonella mutagenicity and its application in the safety assessment of drug impurities. Toxicol Appl Pharmacol 273(3):427–434
43. Baştanlar Y, Özuysal M (2014) Introduction to machine learning. In: Yousef M, Allmer J (eds) miRNomics: MicroRNA biology and computational analysis. Methods in molecular biology (methods and protocols), vol 1107. Humana Press, Totowa, NJ
44. Brigo A, Müller L (2011) Development of the threshold of toxicological concern concept and its relationship to duration of exposure. In: Teasdale A (ed) Genotoxic impurities. Wiley, pp 27–63
45. ICH (1997) International conference on harmonisation of technical requirements for registration of pharmaceuticals for human use (ICH). Topic Q3C. Impurities: residual solvents. ICH
46. Kroes R, Renwick AG, Cheeseman M, Kleiner J, Mangelsdorf I, Piersma A, Schilter B, Schlatter J, van Schothorst F, Vos JG, Würtzen G (2004) Structure-based thresholds of toxicological concern (TTC): guidance for application to substances present at low levels in the diet. Food Chem Toxicol 42:65–83
47. ICH (2002) International conference on harmonisation of technical requirements for registration of pharmaceuticals for human use (ICH). Topic Q3A(R). Impurities testing guideline: impurities in new drug substances (revision). ICH
48. ICH (2002) International conference on harmonisation of technical requirements for registration of pharmaceuticals for human use (ICH). Topic Q3B(R). Impurities testing guideline: impurities in new drug products (revision). ICH
49. Müller L, Mauthe RJ, Riley CM, Andino MM, Antonis DD, Beels C, DeGeorge J, De Knaep AG, Ellison D, Fagerland JA, Frank R, Fritschel B, Galloway S, Harpur E, Humfrey CD, Jacks AS, Jagota N, Mackinnon J, Mohan G, Ness DK, O'Donovan MR, Smith MD, Vudathala G, Yotti L (2006) A rationale for determining, testing, and controlling specific impurities in pharmaceuticals that possess potential for genotoxicity. Regul Toxicol Pharmacol 44:198–211
50. Kasper P, Müller L (2015) Genotoxic impurities in pharmaceuticals. In: Graziano MJ, Jacobson-Kram D (eds) Genotoxicity and carcinogenicity testing of pharmaceuticals. Springer
51. EMA (2010) Question & answers on the CHMP guideline on the limits of genotoxic impurities, EMA/CHMP/SWP/431994/2007 Rev. 3. http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2009/09/WC500002907.pdf
52. FDA Draft Guidance (2008) Guidance for industry. Genotoxic and carcinogenic impurities in drug substances and products: recommended approaches. U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research (CDER), Washington, December 2008
53. ICH guideline M7(R1) on assessment and control of DNA reactive (mutagenic) impurities in pharmaceuticals to limit potential carcinogenic risk. Step 5, 2014
54. Sutter A, Amberg A, Boyer S, Brigo A, Contrera JF, Custer LL, Dobo KL, Gervais V, Glowienke S, van Gompel J, Greene N, Muster W, Nicolette J, Reddy MV, Thybaud V, Vock E, White AT, Müller L (2013) Use of in silico systems and expert knowledge for structure-based assessment of potentially mutagenic impurities. Regul Toxicol Pharmacol 67(1):39–52
55. Brigo A, Muster W (2015) Comparative assessment of several in silico systems and models to predict the outcome of the Ames mutagenicity assay. In: Society of Toxicology annual meeting 2015. Society of Toxicology, San Diego, CA
56. Hansen K, Mika S, Schroeter T, Sutter A, ter Laak A, Steger-Hartmann T, Heinrich N, Müller KR (2009) Benchmark data set for in silico prediction of Ames mutagenicity. J Chem Inf Model 49:2077–2081
57. Ames BN, Durston WE, Yamasaki E, Lee FD (1973) Carcinogens are mutagens: a simple test system combining liver homogenate for activation and bacteria for detection. Proc Natl Acad Sci USA 70:2281–2285
58. Escobar PA, Kemper RA, Tarca J, Nicolette J, Kenyon M, Glowienke S, Sawant SG, Christensen J, Johnson TE, McKnight C, Ward G, Galloway SM, Custer L, Gocke E, O'Donovan MR, Braun K, Snyder RD, Mahadevan B (2013) Bacterial mutagenicity screening in the pharmaceutical industry. Mutat Res 752:99–118
59. Thomson Reuters (2015) MetaCore—data-mining and pathway analysis. https://portal.genego.com/
60. Fischer H et al (2001) Prediction of in vitro phospholipidosis of drugs by means of their amphiphilic properties. Rational Approach Drug Design:286–289
61. Kruhlak NL et al (2007) Progress in QSAR toxicity screening of pharmaceutical impurities and other FDA regulated products. Adv Drug Deliv Rev 59(1):43–55
62. CompuDrug (2015) MetabolExpert. http://www.compudrug.com/metabolexpert
63. Molecular Discovery (2015) MetaSite. http://www.moldiscovery.com/software/metasite/
64. Waring MJ, Arrowsmith J, Leach AR, Leeson PD, Mandrell S, Owen RM, Pairaudeau G, Pennie WD, Pickett SD, Wang J, Wallace O, Weir A (2015) An analysis of the attrition of drug candidates from four major pharmaceutical companies. Nat Rev Drug Discov 14:475–486
65. Roberts R, Kavanagh S, Mellor H, Pollard C, Robinson S, Platz S (2014) Reducing attrition in drug development: smart loading pre-clinical safety assessment. Drug Discov Today 19(3):341–347
66. Cook D, Brown D, Alexander R, March R, Morgan P, Satterthwaite G, Pangalos MN (2014) Lessons learned from the fate of AstraZeneca's drug pipeline: a five-dimensional framework. Nat Rev Drug Discov 13:419–431
67. Bowes J, Brown AJ, Hamon J, Jarolimek W, Sridhar A, Waldron G, Whitebread S (2012) Reducing safety-related drug attrition: the use of in vitro pharmacological profiling. Nat Rev Drug Discov 11:909–922
68. Anighoro A, Bajorath J, Rastelli G (2014) Polypharmacology: challenges and opportunities in drug discovery. J Med Chem 57:7874–7887
69. Huggins D, Sherman W, Tidor B (2012) Rational approaches to improving selectivity in drug design. J Med Chem 55:1424–1444
70. Brennan RJ (2017) Target safety assessment: strategies and resources. Methods Mol Biol 1641:213–228
71. Rao MS, Gupta R, Liguori MJ, Hu M, Huang X, Mantena SR, Mittelstadt SW, Blomme EAG, Van Vleet TR (2019) Novel computational approach to predict off-target interactions for small molecules. Front Big Data 2:25
72. Allen T, Wedlake A, Gelžinytė E, Gong C, Goodman J, Gutsell S, Russell P (2020) Neural network activation similarity: a new measure to assist decision making in chemical toxicology. Chem Sci 11:7335–7348
73. Bendels S, Bissantz C, Fasching B, Gerebtzoff G, Guba W, Kansy M, Migeon J, Mohr S, Peters JU, Tillier F, Wyler R, Lerner C, Kramer C, Richter H, Roberts S (2019) Safety screening in early drug discovery: an optimized assay panel. J Pharmacol Toxicol Methods 99:106609
74. Abadi M, Barham P, Chen J, Chen Z, Davis A, Dean J, Devin M, Ghemawat S, Irving G, Isard M, Kudlur M, Levenberg J, Monga R, Moore S, Murray D, Steiner B, Tucker P, Vasudevan V, Warden P, Zhang X (2016) TensorFlow: a system for large-scale machine learning. In: Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation. ISBN 978-1-931971-33-1
75. Ketkar N (2017) Introduction to Keras. pp 95–109
76. Hall P, Kurka M, Bartz A (2018) Using H2O Driverless AI
77. Naga D, Muster W, Musvasva E, Ecker G (2021) Machine learning tools for off-target early safety assessment of small molecules in drug discovery (single task neural networks vs automated machine learning), manuscript submitted for publication
78. Erickson N, Mueller J, Shirkov A, Zhang H, Larroy P, Li M, Smola A (2020) AutoGluon-Tabular: robust and accurate AutoML for structured data. arXiv:2003.06505v1 [stat.ML]
79. Olson R, Moore J (2019) TPOT: a tree-based pipeline optimization tool for automating machine learning, pp 151–160
80. Jin H, Song Q, Hu X (2019) Auto-Keras: an efficient neural architecture search system. In: KDD '19: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp 1946–1956. https://doi.org/10.1145/3292500.3330648
81. Wei Q, Dunbrack RL Jr (2013) The role of balanced training and testing data sets for binary classifiers in bioinformatics. PLoS One 8:e67863
82. Li DC, Liu CW, Hu SC (2010) A learning method for the class imbalance problem with medical data sets. Comput Biol Med 40:509–518
83. Antelo-Collado A, Carrasco-Velar R, García-Pedrajas N, Cerruela-García G (2021) Effective feature selection method for class-imbalance datasets applied to chemical toxicity prediction. J Chem Inf Model 61:76–94
84. Safdar MF, Alkobaisi SS, Zahra FT (2020) A comparative analysis of data augmentation approaches for magnetic resonance imaging (MRI) scan images of brain tumor. Acta Inform Med 28(1):29–36
85. Sakai A, Minoda Y, Morikawa K (2017) Data augmentation methods for machine-learning-based classification of bio-signals. In: The 2017 Biomedical Engineering International Conference (BMEiCON-2017), pp 1–4
86. Yi L, Mak MW (2020) Improving speech emotion recognition with adversarial data augmentation network. IEEE Trans Neural Netw Learn Syst
87. Chawla N, Bowyer K, Hall L, Kegelmeyer W (2002) SMOTE: synthetic minority over-sampling technique. J Artif Intell Res 16:321–357
88. Hemmerich J, Asilar E, Ecker GF (2020) COVER: conformational oversampling as data augmentation for molecules. J Cheminform 12:18
89. Huang R, Xia M (2017) Editorial: Tox21 challenge to build predictive models of nuclear receptor and stress response pathways as mediated by exposure to environmental toxicants and drugs. Front Environ Sci 5:3
90. Caruana R (1997) Multitask learning. Mach Learn 28:41–75
91. Sun X, Panda R, Feris R (2019) AdaShare: learning what to share for efficient deep multi-task learning. arXiv:1911.12423v2 [cs.CV]
92. Kang Z, Grauman K, Sha F (2011) Learning with whom to share in multi-task feature learning. In: Proceedings of the 28th International Conference on Machine Learning, Bellevue, WA, USA, pp 521–528
93. Gaulton A, Hersey A, Nowotka M, Bento A, Chambers J, Mendez D, Mutowo P, Atkinson F, Bellis L, Cibrián-Uhalte E, Davies M, Dedman N, Karlsson A, Magariños M, Overington J, Papadatos G, Smit I, Leach A (2016) The ChEMBL database in 2017. Nucleic Acids Res 45:D945–D954
94. Dix D, Houck K, Martin M, Richard A, Setzer R, Kavlock R (2007) The ToxCast program for prioritizing toxicity testing of environmental chemicals. Toxicol Sci 95:5–12
95. Gilson MK, Liu T, Baitaluk M, Nicola G, Hwang L, Chong J (2016) BindingDB in 2015: a public database for medicinal chemistry, computational chemistry and systems pharmacology. Nucleic Acids Res 44:D1045–D1053
96. Grubbs FE (1969) Procedures for detecting outlying observations in samples. Technometrics 11:1–21
97. Taboada-Crispi A, Sahli H, Orozco Monteagudo M, Hernandez Pacheco D, Falcon A (2009) Anomaly detection in medical image analysis. In: Handbook of research on advanced techniques in diagnostic imaging and biomedical applications, pp 426–446
98. Wang R, Nie K, Wang T, Yang Y, Long B (2020) Deep learning for anomaly detection. pp 894–896
99. Šabić E, Keeley D, Henderson B, Nannemann S (2021) Healthcare and anomaly detection: using machine learning to predict anomalies in heart rate data. AI & Soc 36:149–158
100. Briggs K et al (2021) Guidelines for FAIR sharing of preclinical safety and off-target pharmacology data. ALTEX 38(2):187–197
101. Cherkasov A, Muratov EN, Fourches D, Varnek A, Baskin II, Cronin M, Dearden J, Gramatica P, Martin YC, Todeschini R, Consonni V, Kuz'min VE, Cramer R, Benigni R, Yang C, Rathman J, Terfloth L, Gasteiger J, Richard A, Tropsha A (2014) QSAR modeling: where have you been? Where are you going to? J Med Chem 57:4977–5010
102. Piatetsky-Shapiro G (2012) Big data hype (and reality). https://hbr.org/2012/10/big-data-hype-and-reality
103. PointCross Life Sciences (2015) http://www.pointcrosslifesciences.com/
104. James LP, Mayeux PR, Hinson JA (2003) Acetaminophen-induced hepatotoxicity. Drug Metab Dispos 31(12):1499–1506
105. FDA (2014) http://www.fda.gov/downloads/Drugs/Guidances/UCM292334.pdf
106. Briggs K et al (2012) Inroads to predict in vivo toxicology—an introduction to the eTOX project. Int J Mol Sci 13(3):3820–3846
107. Pognan F et al (2021) The eTRANSAFE project on translational safety assessment through integrative knowledge management: achievements and perspectives. Pharmaceuticals 14:237
108. Drew P, Thomas R, Capella-Gutierrez S (2019) PP07: Consolidating study outcomes in a standardised, SEND-compatible structure. FDA/PHUSE US Computational Science Symposium, June 9–11, 2019, Silver Spring, MD. https://www.lexjansen.com/css-us/2019/PP07.pdf
109. Steger-Hartmann T, Kreuchwig A, Vaas L, Wichard J, Bringezu F, Amberg A, Muster W, Pognan F, Barber C (2020) Introducing the concept of virtual control groups into preclinical toxicology animal testing. ALTEX 37(3):343–349
INDEX

A

Absorption, distribution, metabolism, excretion and toxicity (ADMET) ....... 62, 68–71, 85–113, 179, 262, 264, 266, 275 Absorption, distribution, metabolization and excretion (ADME) ....... 30, 31, 36, 58–63, 65, 68–70, 72–76, 78, 86–88, 90, 105, 107, 109, 110, 113, 199, 554, 593, 606, 611, 617, 637, 638 Accuracies ....... 13, 37, 46, 94, 96, 102, 103, 106, 136, 153, 154, 166, 168–171, 173, 175, 177–179, 205, 223, 224, 226, 227, 231, 238, 265, 271, 273, 274, 278–280, 282, 295, 297, 305, 307, 315, 318, 329, 331, 338–341, 349, 350, 359–362, 364–366, 368, 369, 371, 374–378, 387, 396–399, 401–405, 408, 439, 452, 481, 489, 490, 492, 502, 542, 543, 546, 548, 648, 649, 652–654, 656–658 ACD/Percepta ....... 66, 261, 262, 275, 542 Acetylcholinesterase (AChE) ....... 277, 278, 281, 532 Acute systemic toxicity ....... 259–284 Additive effects ....... 563, 565–567, 577, 578, 582 Adverse outcome (AO) ....... 141, 275, 385, 420, 460, 470, 472, 521–526, 528, 530, 603, 620 Adverse outcome pathway (AOP) ....... 20, 21, 97, 141, 292, 293, 368, 460, 461, 470, 472, 522–532, 543, 592, 626, 627 Ames tests ....... 149–151, 154–158, 178, 202, 203, 436, 439, 442, 445, 447, 483, 503, 546, 547, 549, 551, 645 Aneugenicity ....... 186 Antagonism ....... 563, 565, 567, 577, 582, 605 Applicability domain: applicability domain index (ADI) ....... 152, 153, 160, 161, 166, 167, 171, 173, 175, 177, 181, 188–190, 208–210, 222–224, 227, 229–231, 233, 235, 238, 242, 247, 251, 295–297, 348, 482, 485 Artificial intelligence (AI) ....... 12, 17, 58, 86, 141, 203, 306, 379, 394, 445, 545, 550, 556, 583, 584, 639, 652–654, 660
Artificial neural networks (ANNs) ...................11, 58, 91, 204, 205, 249, 251, 266, 273, 282, 358, 360, 364, 395, 396, 401, 540, 572, 578, 583 Aryl hydrocarbon receptor (AhR)........................ 99, 267, 279, 281, 504, 526–528, 530 Atom centered fragment (ACF)................. 154, 175, 295 AUC............................ 61, 372, 377, 401, 403, 653, 657
B Bayesian models ........................................... 38, 299, 300, 323, 332, 358, 361, 397, 401, 402 Benchmark dose (BMD) ..................................... 140, 255 Bias............................................................ 17, 23, 74, 157, 364, 451, 575, 653, 657 Big data............................................................. 16–18, 660 Bioinformatics ..................... 99, 120, 129, 133–143, 263
C CAESAR models ........................................ 153–156, 160, 161, 165–168, 171–173, 204, 205, 209, 222, 223, 227, 230–235, 238, 294–296, 304, 313–317, 325, 326, 328–331, 335, 336, 338–340, 342–345, 350, 492 Cancer..........................................38, 137, 140, 201, 204, 206, 209, 212, 224, 246, 252, 378, 440, 442, 443, 521, 527, 530, 545, 551, 556, 566, 641 Carcinogenicity .......................................... 14, 17, 88, 99, 107, 109, 110, 150, 155, 201–213, 269, 381, 440–445, 447, 448, 451, 455, 472, 479, 481, 488, 502–504, 514, 540, 543, 545, 550, 553, 556, 557, 638, 642, 643, 649, 651, 660, 664, 669 Carcinogenic Potency Database (CPDB)........................................... 204, 208, 209, 212, 440, 445, 504, 540 CASE Ultra ........................................109, 221, 224–225, 229–231, 234, 235, 237, 262, 264, 299, 301–303, 306–308, 314, 327, 335, 337, 343, 345, 346, 497–502, 504–508, 639, 640, 645–647 Causal reasoning .......................................................20, 21 ChEMBL ............................... 94, 95, 107, 120, 273, 656 Chemoinformatics................................................ 1, 9, 248 Chemotypes................................149, 150, 153, 348, 380 Cholestasis ................ 143, 358, 369, 378, 503, 529, 530
Emilio Benfenati (ed.), In Silico Methods for Predicting Drug Toxicity, Methods in Molecular Biology, vol. 2425, https://doi.org/10.1007/978-1-0716-1960-5, © The Author(s), under exclusive license to Springer Science+Business Media, LLC, part of Springer Nature 2022
IN SILICO METHODS FOR PREDICTING DRUG TOXICITY
Chromosome aberration ....... 186–188, 191, 192, 194, 198, 199, 553 Chromosome damage ....... 185–200, 440, 645, 648, 658 Classification, Labeling and Packaging (CLP) ....... 562, 599 Classification and regression tree (CART) models ....... 16, 73, 205–206, 296, 376, 404, 405 Classification ....... 11, 14, 31, 68–70, 89, 93, 98, 101, 111, 113, 154, 188–191, 205, 206, 208, 210, 221–223, 231, 244, 249, 253, 260, 265, 269, 271–274, 278, 280, 282, 292, 296, 310, 321, 326, 332, 336, 345, 346, 350, 352, 359, 361, 363, 365, 370, 371, 374, 377, 383–385, 397, 401, 403–405, 407, 441–443, 445, 454–457, 463, 464, 479–481, 484, 538, 539, 541, 543, 545–549, 555, 572, 599, 604, 606, 639, 643, 656, 662, 663 Clastogenicity ....... 186, 658 Component based approaches ....... 564–569 Computational chemistry ....... 3–6, 650 Concentration addition (CA) ....... 188, 189, 276, 563, 565–568, 572, 575, 578 Consensus ....... 16, 20, 94, 96, 112, 113, 151, 157–159, 163, 171, 179, 188, 190, 263, 267, 273–276, 280, 298, 311, 314, 322, 326, 327, 336, 343, 345, 346, 351, 352, 364, 369, 375, 388, 396, 397, 404, 485, 486, 488, 492, 543, 596 Convolutional neural network (CNN) ....... 18, 88, 108, 262, 266, 401 Cosmetics ....... 20, 194, 242, 246, 255, 260, 292, 293, 310–314, 321, 328, 332, 338, 347, 351, 352, 372, 437, 460, 461, 533 COSMOS database ....... 245, 246, 268, 269, 284 Counter-propagation artificial neural networks (CPANNs) ....... 364 Cramer classes ....... 482, 539, 540 Cytochrome (CYP) ....... 38, 60, 70, 75, 88, 90, 103, 108, 109, 267, 361, 527, 530, 638, 651, 657 Cytochromes P450 (CYP450) ....... 90, 91, 361 Cytotoxicity ....... 88, 98, 99, 101, 269, 270, 272, 273, 275, 369, 377, 405, 527, 657
D

Danish (Q)SAR database ....... 198, 199, 299, 301, 302, 306, 308 Databases ....... 9, 58, 59, 62, 63, 68, 75, 76, 78, 87, 95, 98, 100, 101, 107–111, 120, 134, 135, 137, 138, 141, 142, 155, 162, 187–190, 192, 199, 204, 205, 223, 224, 226, 230, 242–248, 251, 253, 260–263, 265–271, 273, 274, 284, 297, 298, 301, 303–305, 307, 309–311, 322, 324, 325, 327, 328, 335, 337, 343–346, 369–371, 377, 388, 401, 409, 420, 422, 426, 428, 440, 441, 444, 445, 447, 457, 466, 469, 483, 488, 497, 498, 500, 502–505, 507, 512–515, 522, 543, 555, 574, 583, 593, 598–603, 617, 618, 621, 627, 649, 652, 656, 660, 662, 664, 666–669 Data sharing ....... 81, 120, 186, 436, 437, 444, 445, 448, 472, 474, 638, 639, 664–670 Decision forest ....... 394–400, 403, 405 Deep learning ....... 14, 16–18, 70, 123, 298, 396, 401, 405, 409, 497, 501, 652–654 Deep neural networks (DNNs) ....... 18, 19, 401, 403 Degradation ....... 110, 436, 440–442, 466–471, 498, 515, 529, 538, 551, 611 DEREK ....... 181, 198, 203, 267, 275, 281, 305, 314, 381, 382, 436–441, 443, 445–452, 454, 455, 459–467, 470–472, 570, 575, 639, 645–648 Descriptors: DRAGON ....... 252, 296, 370, 405, 573; electrostatic descriptors ....... 9; fingerprints ....... 9, 13, 249, 274, 280, 298, 361, 362, 372, 377, 397, 406; geometrical descriptors ....... 9; molecular descriptors ....... 8, 9, 13, 15, 16, 122, 123, 125, 126, 188, 226, 251, 255, 273, 284, 296, 300, 302, 303, 306–309, 361, 362, 365, 366, 371, 397, 401, 404, 405, 486, 540, 546, 571–573, 578, 601, 620, 640; PaDEL ....... 122, 251, 364, 373, 374, 405, 573; physico-chemical descriptors ....... 303; quantum chemical descriptors ....... 9; topological indices ....... 9, 300, 307 Developmental toxicity: developmental and reproductive toxicity (DART) ....... 217, 218, 220, 221, 223, 235, 245, 472, 597, 664 Dipole moment ....... 361, 583 DNA ....... 14, 103, 150, 185, 188, 189, 201, 203, 419, 447, 455, 501, 502, 511, 538, 550, 626, 641, 643 Drug induced liver injury (DILI) ....... 88, 355–357, 361–363, 365, 366, 370, 371, 374, 375, 377, 378, 387–389, 393, 394, 396–403, 405, 409, 503 Dynamic energy budget (DEB) ....... 38, 592, 593, 602–617, 627
E eChemPortal ..............................205, 262, 271, 599, 618 Eco-Threshold of Toxicological Concern (EcoTTC) .......................................................... 599
F Fibrosis..........................................................527, 529–531 Food and Drug Administration (FDA) ............. 108, 135, 222, 246, 262, 269, 270, 364, 365, 369, 398, 400, 401, 409, 410, 422, 502–504, 542, 557, 590, 642, 643, 662–664 Forward dosimetry........................................................ 614
G Gene ontology (GO) .................................. 135, 137, 138 Generalised Read-Across (GenRA) ..................... 274, 601 Genetic algorithm (GA) ..................................... 252, 282, 372, 573, 578 Genotoxicity ......................................... 14, 109, 149–151, 185, 186, 190, 194, 197, 202, 203, 247, 310, 326, 336, 437, 445, 448, 479, 481, 503, 538, 543, 545–549, 553, 555, 638, 641, 642, 649, 651, 657 Green chemistry (GC) .................................................. 602
H Hazard Evaluation Support System (HESS).....................................243, 245, 246, 252 Hepatotoxicity................................. 88, 98, 99, 110, 267, 355–389, 393, 394, 400, 404, 406, 407, 444, 527, 530, 553, 662, 663
hERG (human Ether-a-go-go-Related Gene) .............................................. 88, 90, 94–96, 108, 268, 279, 282–284, 473, 498, 503, 638, 645, 648, 651, 657 High-throughput screening (HTS) ..................... 18, 272, 281, 369, 374, 377, 397, 405, 406, 504, 620, 649
I ICH M7 guideline ..................................... 419, 440, 441, 449, 455, 499, 501, 511, 538, 540, 542, 543, 545, 641–645, 658 Impurities .......................................... 119, 149, 186, 419, 436, 437, 440–445, 448, 449, 451, 455–459, 467, 468, 499, 501, 511, 512, 537–557, 597, 638, 639, 641–648, 658 InChIKeys .......................................................... 62, 63, 65 Independent action (IA).....................563, 565–568, 578 Integrated Approach to Testing and Assessment (IATA)....................................................... 275, 320 Integrated Risk Information System (IRIS) ............................................... 243, 245, 246 Interactions...........................................29, 33, 35, 60, 74, 76, 81, 88, 93, 101, 103, 106, 107, 109, 110, 138, 141, 188, 203, 266, 275, 283, 361, 369, 380, 385, 387, 447, 468, 469, 499, 500, 521, 523, 530, 563–565, 567–569, 574, 582–584, 593, 605, 614, 617, 626, 656, 658, 660 Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM)....................................... 270, 271, 298 International Chemical Identifier (InChI) ...................... 2 In vitro toxicity.............................................................. 405 ISS model ...........................................154, 155, 161–163, 167, 175–177, 208, 209, 482
K Key events..................................460, 470, 472, 503, 521, 523, 524, 528, 530, 603, 626, 627 Klimisch score ...................................................... 424, 425 K-nearest neighbor algorithm (kNN) .....................................151, 153, 156–157, 162, 164, 168, 169, 177–179, 485, 488, 492 KNIME................................................371, 373, 384, 385
L Landscape modelling ..........................592, 620–623, 627 Lazar .................................. 203, 206, 208–210, 251, 275 LD50 ............................................... 6–8, 98, 99, 110, 249, 251, 260–275, 283, 284, 507, 508, 592, 601, 603, 604, 618 Lead identification (LI) ....................................... 650–657 Lead optimization (LO) .............................. 98, 105, 248, 473, 652, 657, 658
Leadscope models ...................................... 221, 225–226, 230, 233–235, 237, 304, 308, 542, 543, 640, 645 Leave many out (LMO) ............................................... 369 Linear discriminant analysis (LDA) ................................ 11, 88, 108, 205, 250, 278, 282, 296, 360, 361, 365, 374, 395, 404, 407, 572 Logic and Heuristics Applied to Synthetic Analysis (LHASA)................................ 109, 198, 267, 275, 305, 381, 435–474, 645 LogP ................................................ 9, 66, 109, 226, 277, 296, 303, 310, 377, 464, 483, 488, 540, 602, 650 Lowest observed adverse effect level (LOAEL) ..........................................241–256, 604
M Machine learning (ML) ............................. 11, 13, 15, 16, 18, 68, 75, 86, 98, 108, 111, 120, 123, 221, 225, 248, 249, 275, 279, 282, 323, 360, 362, 365, 374, 394–409, 525, 568, 572, 577, 578, 580–583, 638–640, 652–656, 660 Matthews correlation coefficient (MCC) ............ 12, 127, 128, 221, 276, 365, 399, 401, 653, 657 Maximum Recommended Therapeutic Dose (MRTD).................................................... 250, 253 Metabolism........................................... 30–33, 35, 37, 38, 46, 58, 60, 61, 63, 74–75, 78, 81, 85, 86, 88, 90, 91, 94, 102–107, 109, 110, 179, 299, 305, 306, 355, 356, 358, 385, 436, 464–466, 497, 498, 500, 504, 508, 510, 515, 526, 543, 554, 593, 610, 611, 616, 637, 649, 651 Microarrays ........................................................... 134–136 Micronucleus ............................... 90, 186–192, 194, 198, 199, 202, 503, 645 Mixtures..................................................7, 9, 16, 70, 244, 270, 271, 311, 314, 364, 451, 480, 505, 515, 532, 561–584, 604, 615, 618, 619 Model interpretation.............................13, 127, 570, 572 Mode-of-action .......................................... 15, 21, 36, 37, 199, 224, 266, 305, 439, 472, 482, 521, 522, 527, 543, 554, 566, 592, 601, 669 Molecular dynamics (MD) ................................... 5, 9, 18, 105, 283, 620 Molecular initiating events (MIEs) ...................... 21, 320, 368, 384, 470, 472, 522–526, 528–532, 603, 604, 626 Molecular mechanics (MM) .....................................5, 101 Molecular simulation ............................................. 5, 6, 35 Monte Carlo simulations ...............................5, 36, 45–48 Multicase.............................................203, 221, 224–225, 262, 264, 307, 497, 498, 501–505, 507, 508, 511, 512, 515, 570 Multilinear regression model (MLR)..................... 58, 88, 108, 252, 253, 277–279, 540, 572, 578
Multiple Computer Automated Structure Evaluation (MCASE) ........................225, 264, 265, 272, 498 Multitask learning ......................................................... 656 Mutagenicity............................................... 14, 88, 98, 99, 101, 109, 110, 149–181, 186, 188, 189, 201, 202, 204, 206, 207, 222, 238, 310, 326, 336, 356, 419, 439–443, 445, 447–449, 451, 452, 455, 462, 479, 482, 483, 485, 486, 488–492, 499, 501–503, 511–514, 538, 540, 543, 545–548, 551, 553, 555, 556, 643, 645, 648
N
Naïve Bayesian classification (NBC) ................... 221, 282 NCSTOX .................................................... 304, 314, 325, 326, 335, 336, 344, 345 Network analysis .................................................. 138, 141 Network biology .................................................. 141, 143 Neural networks ....................................... 13, 75, 88, 208, 249, 251, 263, 266, 279, 282, 401, 406, 501, 570, 654, 656, 657 New Approach Methodologies (NAMs).................. 461, 463, 592, 603, 624, 625 Noise ................................................................................ 17 Non-genotoxicity .............................. 155, 187, 192, 193, 196, 201–204, 206–208, 538, 547, 552 Non-Steroidal Anti-Inflammatory Drugs (NSAIDs)........................................................... 369 No observed adverse effect level (NOAEL)................................................. 241–256, 310, 540, 604, 660 Nuclear receptors ................ 99, 267, 356, 526, 529, 530
O Occupational toxicology...................................... 460–464 OECD QSAR Toolbox........................................... 63, 74, 152, 187–192, 194, 197–199, 243–246, 305, 314, 482, 539, 543, 599, 602 Off-target pharmacology ..................................... 652, 657 Oncologic ............................................................. 203, 206 OpenFoodTox................... 243, 247, 599, 603, 604, 618 Organophosphorus compounds .................................. 281 Overfitting .................................... 11, 151, 377, 378, 404 Oversampling .............................................. 363, 655, 656
P
Partial least squares (PLS).......................58, 88, 108, 252, 277, 279, 282, 300, 303, 309, 540, 572, 580, 582 Pharmacokinetics .............................................. 29–31, 35, 57–59, 75, 76, 78, 80, 81, 85, 90, 93, 98, 101, 109, 111, 479 Pharmacophores..............................................93, 99, 278, 281, 283, 303, 357, 380, 387, 656
Physiologically based pharmacokinetic modeling (PBPK)............................................ 30–49, 59, 75, 76, 78–80, 256 Point of departure (POD) ..................242, 247–249, 255
Q QPRF .................................................................... 128, 500 QSAR Model Reporting Format (QMRF).......................................... 126, 151, 254, 299, 301, 302, 304, 306–308, 448, 482 QSAR/property relationship (QSPR) ......................9, 66, 479, 480, 602 QSAR toolbox...........................................................74, 75 Quantitative IVIVE (QIVIVE) .......................76, 78, 616 Quantitative structure activity-activity relationship (QSAAR) .................................................. 272, 273 Quantitative structure-activity relationship (QSAR) ....................................... 8–20, 22, 23, 46, 66, 69, 70, 73, 74, 78, 86, 88, 94, 96, 97, 106, 108, 119, 149–181, 187, 189, 204–206, 222, 223, 225, 226, 235, 248, 251, 255, 260, 263–267, 271–273, 275, 276, 278, 280–283, 294, 298, 299, 301, 303, 304, 306, 307, 309, 314, 316, 320, 322, 324, 325, 327, 328, 335, 337, 343–346, 358, 361–366, 369, 371, 373–375, 378, 388, 397, 405, 409, 483, 488, 500, 501, 505–508, 515, 530, 540, 542, 543, 546, 568, 570–575, 577, 578, 580, 582, 593, 596, 599, 601–604, 620, 624, 639, 640, 642, 650, 658 Quantitative structure-toxicity relationship (QSTR) ............................. 86, 273, 564, 582, 583
R R&D .............................................................................. 119 Random forest (RF)..........................................11, 58, 68, 70, 88, 91, 108, 123, 126, 127, 188, 191, 221, 223, 248, 249, 251, 282, 358, 362, 363, 365, 371, 374, 375, 377, 395, 396, 403–407, 409, 572, 578, 583 Read-across........................................ 8, 9, 17, 20, 22, 88, 149–181, 191, 208, 210, 235, 238, 248, 250, 252, 256, 274, 275, 305, 384, 424–426, 461, 463, 479, 482, 484–486, 488–490, 492, 493, 530, 537, 539, 542–544, 550, 554–557, 574, 575, 592, 593, 596, 599, 601–604, 626, 666 Read-across method (“RAx”) ............................. 466, 543 Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) regulation............................................................. 20 Registry of Toxic Effects of Chemical Substances (RTECS) ......................................... 224, 261–265, 268–270
Regressions ........................................... 11, 35, 58, 68–70, 88, 90, 93, 101, 106, 124, 126, 205, 206, 249, 252, 254, 263–265, 273, 275, 281, 282, 296, 300, 302, 303, 307, 359, 360, 365, 374, 377, 395, 404, 405, 407, 480, 481, 540, 572–574, 578, 580–582, 604, 639 Repeated-dose toxicity (RDT) ....................241–256, 261 Reverse dosimetry ......................................................... 614 Risk assessment, quantitative risk assessment (QRA) ......................................292, 326, 348, 349 RNA-sequencing (RNAseq) ................................ 135–137 Root mean square error (RMSE).......248, 251–253, 578 R software .............................................38, 143, 296, 652
S SARpy .....................................................14, 15, 151, 156, 157, 160–162, 167, 168, 173–175, 178, 179, 181, 187, 191, 192, 203, 205, 210, 212, 304, 314, 326, 336, 345, 350, 351, 384, 480, 481, 483, 485, 488, 492 SDF file ............................................................91, 92, 103, 107, 318 Self-organizing mapping (SOM) ......................... 88, 108, 282, 500 SEND format ....................................................... 662, 664 Sensitivities ................................................. 12, 22, 23, 45, 48, 94, 96, 127, 128, 155, 205, 219, 223–226, 253, 274, 276, 280, 295, 297, 300–302, 304, 307–309, 350, 359, 361, 362, 364, 365, 369, 370, 374, 375, 397–399, 401–405, 542, 543, 575, 593, 609, 617–620, 626, 627, 638, 645–648, 657, 658, 669 Simplified molecular input line entry system (SMILES) ..................................... 2–4, 14, 19, 62, 63, 65, 70, 87, 89–91, 93–95, 98, 99, 102, 103, 107, 111, 125, 152, 153, 189, 194, 205, 206, 209, 212, 225, 227, 231, 233, 246, 293, 294, 297, 311, 313, 318, 320, 322, 328, 338, 350, 406, 480, 483, 485, 486, 499–501, 505, 506, 508 Skin irritation ............................................. 292, 306, 307, 309, 310, 314, 327, 337, 338, 345–348, 460, 461 Skin permeability......................................... 60, 63, 66, 70 Skin sensitization...............................................88, 96, 97, 109, 291–352, 356, 420, 428–432, 448, 460–464, 470, 485, 503, 508, 509, 650 SMiles ARbitrary Target Specification (SMARTS) ..................................93, 94, 168, 169, 178, 205, 297, 361, 384, 483, 489, 490 Specificities ......................... 12, 22, 23, 94, 97, 107, 127, 128, 205, 223–226, 253, 276, 280, 295, 297, 300–302, 304, 307–309, 350, 359, 361, 362, 364, 365, 369, 370, 374, 375, 397–399, 401–405, 542, 543, 638, 645–648, 657, 669
Statistical models ....................................9, 13, 20, 21, 35, 137, 155, 181, 203, 221, 262, 301, 357–361, 368–372, 374–379, 423, 429, 440, 447, 450, 452, 492, 503, 505, 508, 509, 513, 618, 650 Steatosis ............................................. 140, 355, 357, 368, 373, 380, 385, 526–528, 532, 603 Structural alerts (SAs) ................................ 14, 15, 75, 88, 92, 93, 108, 151, 154–156, 158, 160, 161, 166–169, 171, 174, 175, 177–181, 187, 188, 190–192, 194, 197, 202, 204–206, 208–210, 250, 253, 265, 267, 275, 276, 281, 304, 320, 349, 351, 357, 363, 379–388, 436–440, 442, 446, 462, 463, 481, 482, 484, 489–493, 498, 500, 506–509, 515, 542, 547, 575, 596, 641 Structure-activity relationships (SAR) ............... 9, 13–20, 22, 78, 106, 151, 156, 158, 181, 186, 202–204, 206, 213, 226, 227, 265, 276, 300, 302, 303, 305–308, 313, 314, 359, 423, 438, 448, 455, 472, 473, 479, 480, 482, 484, 486, 492, 497–499, 501, 502, 512, 539–543, 545–551, 554–556, 644 Support vector machines (SVMs) .......................... 11, 58, 68, 70, 88, 91, 108, 123, 155, 221, 276, 282, 358, 362, 364, 365, 372, 374, 394–396, 403–407, 409, 572, 583 Synergism ................................................... 563, 565, 567, 577, 582, 604, 605 Synthetic minority over-sampling technique (SMOTE)......................................... 363, 655, 656
T
Tanimoto similarities .......................................... 401, 452, 453, 463, 507 Target assessment (TA)................................................. 649 Target identification (TI) ............................................. 649 TD50 .................................................. 204, 208, 209, 211, 212, 549, 550, 556 Text mining ................................................ 141, 361, 368, 371, 403, 668 Threshold of Toxicological Concern (TTC) ............................................. 223, 247, 248, 256, 310, 442, 538, 539, 599 The TIssue MEtabolism Simulator (TIMES) .....................................1, 32, 39, 40, 43, 46, 113, 134, 136, 188, 251, 253, 263, 266, 272, 306, 374, 623, 660 ToxCast......................................111, 135, 251, 369, 374, 404, 406–408, 638, 656 Toxicants.................................... 7, 19, 21, 222–224, 226, 228, 231, 265, 368, 374, 375, 388, 602, 612 Toxicity Estimation Software Tool (T.E.S.T.) ........................................................... 267
Toxicity Prediction by Komputer Assisted Technology (TOPKAT) .......................... 110, 203, 254, 263–265, 272, 542, 570 Toxicity Reference Database (ToxRefDB) ......... 246, 407 Toxicogenomics ......................................... 134, 135, 138, 140, 374, 388, 406, 407, 528 Toxicophores .............................................. 14, 87, 92, 93, 99, 198, 267, 436, 455, 456, 462, 575 ToxRead............................................. 159, 162, 163, 165, 168–170, 178, 179, 482, 483, 485–489, 491–493, 543, 596 Toxtree.........................................75, 154, 155, 167, 181, 188, 202–204, 206–210, 212, 297–298, 304, 309–310, 314, 318, 320, 326, 332, 333, 335, 336, 342, 343, 345, 351, 352, 481–483, 539, 542, 575, 639, 645–648 Training set................................................. 11, 14, 15, 17, 18, 66, 68, 78, 103, 106, 112, 121, 150, 151, 153–157, 160–163, 165, 167, 168, 171–173, 175, 176, 178, 179, 181, 203, 205, 206, 209, 222–231, 233, 235, 238, 248, 250–252, 254, 262–264, 266, 272, 275, 278, 280, 295, 297, 300–303, 306–308, 313–315, 318, 329, 331, 338–341, 348, 350, 360–362, 364–366, 369, 370, 377, 378, 396, 397, 401, 403, 431, 438, 447, 451–454, 484, 489, 492, 498, 505, 506, 511, 513, 542, 546, 547, 573, 574, 645
U Underfitting .................................................................... 11 US Environmental Protection Agency (EPA) .....................................................20, 22, 63, 206, 243, 246, 251, 252, 263, 265, 273, 369, 406, 482, 521, 522, 562, 566, 574, 627
V Variance...........................................................17, 136, 140 VEGA..................................................151–159, 162, 179, 181, 187–192, 194, 197, 198, 204–206, 208–210, 221–224, 226, 235, 238, 251, 293–297, 304, 311, 313, 314, 317, 318, 325, 326, 335, 336, 341, 344–347, 349–351, 481–483, 485–489, 492, 493, 542, 596, 601–604 VirtualToxLab ............................................................... 267
W Weight of evidence (WoE)......................... 149–181, 384, 447, 542, 543, 596 WEKA ............................................................................ 223 Whole mixture based approach .................................... 564