Advances in Experimental Medicine and Biology 1424
Panagiotis Vlamos Editor
GeNeDis 2022
Computational Biology and Bioinformatics
Advances in Experimental Medicine and Biology
Volume 1424

Series Editors:
Wim E. Crusio, Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, CNRS and University of Bordeaux, Pessac Cedex, France
Haidong Dong, Departments of Urology and Immunology, Mayo Clinic, Rochester, MN, USA
Heinfried H. Radeke, Institute of Pharmacology and Toxicology, Clinic of the Goethe University Frankfurt Main, Frankfurt am Main, Hessen, Germany
Nima Rezaei, Research Center for Immunodeficiencies, Children's Medical Center, Tehran University of Medical Sciences, Tehran, Iran
Ortrud Steinlein, Institute of Human Genetics, LMU University Hospital, Munich, Germany
Junjie Xiao, Cardiac Regeneration and Ageing Lab, Institute of Cardiovascular Sciences, School of Life Science, Shanghai University, Shanghai, China
Advances in Experimental Medicine and Biology provides a platform for scientific contributions in the main disciplines of biomedicine and the life sciences. This series publishes thematic volumes on contemporary research in the areas of microbiology, immunology, neurosciences, biochemistry, biomedical engineering, genetics, physiology, and cancer research. Covering emerging topics and techniques in basic and clinical science, it brings together clinicians and researchers from various fields. Advances in Experimental Medicine and Biology has been publishing exceptional works in the field for over 40 years and is indexed in SCOPUS, Medline (PubMed), EMBASE, BIOSIS, Reaxys, EMBiology, the Chemical Abstracts Service (CAS), and Pathway Studio. 2021 Impact Factor: 3.650 (no longer indexed in SCIE as of 2022)
Editor Panagiotis Vlamos Department of Informatics Ionian University Corfu, Greece
ISSN 0065-2598  ISSN 2214-8019 (electronic)
Advances in Experimental Medicine and Biology
ISBN 978-3-031-31981-5  ISBN 978-3-031-31982-2 (eBook)
https://doi.org/10.1007/978-3-031-31982-2

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023, Corrected Publication 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
To Ada, who gave me the chance to embrace the joys of maturity, the spirit of teenage adventure, and the wonder of childhood, simultaneously.
Acknowledgment
I would like to thank Konstantina Skolariki for her invaluable assistance in editing and compiling the conference proceedings. Her contribution, dedication, and attention to detail have been instrumental in the success of our conference.
Contents
1. RETRACTED CHAPTER: Dynamic Reconfiguration of Dominant Intrinsic Coupling Modes in Elderly at Prodromal Alzheimer's Disease Risk . . . 1
   Themis P. Exarchos, Robert Whelan, and Ioannis Tarnanas

2. A Sensor-Based Platform for Early-Stage Parkinson's Disease Monitoring . . . 23
   Marios G. Krokidis, Themis P. Exarchos, Aristidis G. Vrahatis, Christos Tzouvelekis, Dimitrios Drakoulis, Foteini Papavassileiou, and Panagiotis Vlamos

3. Pressure Prediction on Mechanical Ventilation Control Using Bidirectional Long-Short Term Memory Neural Networks . . . 31
   Gerasimos Grammenos and Themis P. Exarchos

4. Making Pre-screening for Alzheimer's Disease (AD) and Postoperative Delirium Among Post-Acute COVID-19 Syndrome (PACS) a National Priority: The Deep Neuro Study . . . 41
   Ioannis Tarnanas and Magda Tsolaki

5. Graph Theory-Based Approach in Brain Connectivity Modeling and Alzheimer's Disease Detection . . . 49
   Dionysios G. Cheirdaris

6. Developing Theoretical Models of Kinesia Paradoxa Phenomenon in Order to Build Possible Therapeutic Protocols for Parkinson's Disease . . . 59
   Irene Banou

7. Computational Methods for Protein Tertiary Structure Analysis . . . 61
   Antigoni Avramouli

8. Spiking Neural Networks and Mathematical Models . . . 69
   Mirto M. Gasparinatou, Nikolaos Matzakos, and Panagiotis Vlamos

9. On Modelling Electrical Conductivity of the Cerebral White Matter . . . 81
   Emmanouil Perakis

10. Neuroeducation and Mathematics: The Formation of New Educational Practices . . . 91
    Eleni Lekati and Spyridon Doukakis

11. DRDs and Brain-Derived Neurotrophic Factor Share a Common Therapeutic Ground: A Novel Bioinformatic Approach Sheds New Light Toward Pharmacological Treatment of Cognitive and Behavioral Disorders . . . 97
    Louis Papageorgiou, Efstathia Kalospyrou, Eleni Papakonstantinou, Io Diakou, Katerina Pierouli, Konstantina Dragoumani, Flora Bacopoulou, George P. Chrousos, Themis P. Exarchos, Panagiotis Vlamos, Elias Eliopoulos, and Dimitrios Vlachakis

12. Proposal for Investigating Self-Efficacy in Mathematics Using a Portable EEG System . . . 117
    Athina Papadopoulou and Spyridon Doukakis

13. Collaborative Platforms and Matchmaking Algorithms for Research and Education, Establishment, and Optimization of Consortia . . . 125
    Eleni Papakonstantinou, Vasiliki Efthymiou, Konstantina Dragoumani, Maria Christodoulou, and Dimitrios Vlachakis

14. Cognitive Neurorehabilitation in Epilepsy Patients via Virtual Reality Environments: Systematic Review . . . 135
    Theodoros Fasilis, Panayiotis Patrikelis, Lambros Messinis, Vasileios Kimiskidis, Stefanos Korfias, Grigorios Nasios, Athanasia Alexoudi, Anastasia Verentzioti, Efthimios Dardiotis, and Stylianos Gatzonis

15. A Retrospective Analysis to Investigate Contact Sensitization in Greek Population Using Classic and Machine Learning Techniques . . . 145
    Aikaterini Kyritsi, Anna Tagka, Alexandros Stratigos, Maria Pesli, Polyxeni Lagiokapa, and Vangelis Karalis

16. The Prediction of Tumorigenesis Onset Using Parameters from Chaotic Attractor Models . . . 157
    Michael Harney

17. Using Biomarkers for Cognitive Enhancement and Evaluation in Mobile Applications . . . 161
    Panagiota Giannopoulou and Panagiotis Vlamos

18. A Mobile Application for Supporting and Monitoring Elderly Population to Perform the Interventions of the FINGER Study . . . 167
    Maria Chalkioti and Themis P. Exarchos

19. Application of Graphs in a One Health Framework . . . 175
    Ifigeneia Sideri and Nikolaos Matzakos

20. Application of Machine Learning Techniques in the HELIAD Study Data for the Development of Diagnostic Models in MCI and Dementia . . . 187
    George A. Dimakopoulos, Aristidis G. Vrahatis, Themis P. Exarchos, Eva Ntanasi, Mary Yannakoulia, Mary H. Kosmidis, Efthimios Dardiotis, Georgios Hadjigeorgiou, Paraskevi Sakka, Nikolaos Scarmeas, and Panagiotis Vlamos

21. Impact of Cognitive Priming on Alzheimer's Disease . . . 193
    Hamdi Ben Abdessalem and Claude Frasson

22. Signature-Based Computational Drug Repurposing for Amyotrophic Lateral Sclerosis . . . 201
    Thomas Papikinos, Marios G. Krokidis, Aris Vrahatis, Panagiotis Vlamos, and Themis P. Exarchos

23. Integrating Wearable Sensors and Machine Learning for the Detection of Critical Events in Industry Workers . . . 213
    George Mantellos, Themis P. Exarchos, Georgios N. Dimitrakopoulos, Panagiotis Vlamos, Nikolaos Papastamatiou, Konstantinos Karaiskos, Panagiotis Minos, Theofanis Alexandridis, Stelios Axiotopoulos, Dimitrios Tsakiridis, Vasilios Avramoudis, Anastasios Vasiliadis, and Stylianos Stagakis

24. Graph-Based Disease Prediction in Neuroimaging: Investigating the Impact of Feature Selection . . . 223
    Dimitra Kiakou, Adam Adamopoulos, and Nico Scherf

25. Computational Methods for Anticancer Drug Discovery; The MCT4 Paradigm . . . 231
    Eleni Papakonstantinou, Dimitrios Vlachakis, Trias Thireou, Panayiotis G. Vlachoyiannopoulos, and Elias Eliopoulos

26. 3D QSAR based Virtual Screening of Flavonoids as Acetylcholinesterase Inhibitors . . . 233
    Sowmya Andole, Husna Sd, Srija Sudhula, Lavanya Vislavath, Hemanth Kumar Boyina, Kiran Gangarapu, Vasudha Bakshi, and Krishna Prasad Devarakonda

27. A Comparison of the Various Methods for Selecting Features for Single-Cell RNA Sequencing Data in Alzheimer's Disease . . . 241
    Petros Paplomatas, Panagiotis Vlamos, and Aristidis G. Vrahatis

28. An Optimized Cloud Computing Method for Extracting Molecular Descriptors . . . 247
    Christos Didachos, Dionisis Panagiotis Kintos, Manolis Fousteris, Phivos Mylonas, and Andreas Kanavos

29. Prediction of Intracranial Temperature Through Invasive and Noninvasive Measurements on Patients with Severe Traumatic Brain Injury . . . 255
    Eleni Tsimitrea, Dimitra Anagnostopoulou, Maria Chatzi, Evangelos C. Fradelos, Garyfallia Tsimitrea, George Lykas, and Andreas D. Flouris

30. Improving Patient-Centered Dementia Screening for General, Multicultural Population and Persons with Disabilities from Primary Care Professionals with a Web-Based App . . . 265
    Maria Sagiadinou, Panagiotis Vlamos, Themis P. Exarchos, Dimitrios Vlachakis, and Christina Kostopoulou

31. Improved Regularized Multi-class Logistic Regression for Gene Classification with Optimal Kernel PCA and HC Algorithm . . . 273
    Nwayyin Najat Mohammed

32. Mathematical Study of the Perturbation of Magnetic Fields Caused by Erythrocytes . . . 281
    Maria Hadjinicolaou and Eleftherios Protopapas

33. Computational Models for Biomarker Discovery . . . 289
    Konstantina Skolariki, Themis P. Exarchos, and Panagiotis Vlamos

34. Radiomics for Alzheimer's Disease: Fundamental Principles and Clinical Applications . . . 297
    Eleni Georgiadou, Haralabos Bougias, Stephanos Leandrou, and Nikolaos Stogiannos

Retraction Note to: Dynamic Reconfiguration of Dominant Intrinsic Coupling Modes in Elderly at Prodromal Alzheimer's Disease Risk . . . C1
Themis P. Exarchos, Robert Whelan, and Ioannis Tarnanas

Index . . . 313
RETRACTED CHAPTER: Dynamic Reconfiguration of Dominant Intrinsic Coupling Modes in Elderly at Prodromal Alzheimer's Disease Risk

Themis P. Exarchos, Robert Whelan, and Ioannis Tarnanas

Abstract  Large-scale human brain networks interact across both spatial and temporal scales. For electro- and magnetoencephalography (EEG/MEG) in particular, there is much evidence of a synergy of different subnetworks, each oscillating at a dominant frequency within a quasi-stable temporal brain frame. Intrinsic cortical-level integration reflects the reorganization of functional brain networks that supports a compensation mechanism for cognitive decline. Here, a computerized intervention integrating different functions of the medial temporal lobes, namely, object-level and scene-level representations, was conducted. One hundred fifty-eight patients with mild cognitive impairment underwent 90 min of training per day over 10 weeks. An active control (AC) group of 50 subjects was exposed to documentaries, and a passive control group of 55 subjects did not engage in any activity. Following a dynamic functional source connectivity analysis, the dynamic reconfiguration of intra- and cross-frequency coupling mechanisms before and after the intervention was revealed. After the neuropsychological and resting-state electroencephalography evaluation, the ratio of inter- versus intra-frequency coupling modes, as well as the contribution of the β1 frequency, was higher for the target group compared to its pre-intervention period. These frequency-dependent contributions were linked to neuropsychological estimates that improved due to the intervention. Additionally, the time-delays of the cortical interactions improved in {δ, θ, α2, β1} compared to the pre-intervention period. Finally, the dynamic networks of the target group further improved their efficiency over the total cost of the network. This is the first study to reveal a dynamic reconfiguration of intrinsic coupling modes and an improvement of time-delays due to a targeted intervention protocol.

Keywords  Electroencephalography · Cross-frequency coupling · Multiplexity · Intrinsic coupling modes · Dynamic functional connectivity analysis · Time-delays · Intervention · Elderly

The original version of this chapter was retracted: A retraction note to this chapter can be found at https://doi.org/10.1007/978-3-031-31982-2_35

T. P. Exarchos (✉)
Department of Informatics, Ionian University, Corfu, Greece

R. Whelan
Trinity College Institute of Neurosciences, Trinity College, Dublin, Ireland
e-mail: [email protected]

I. Tarnanas
Altoida Inc, Houston, TX, USA
Global Brain Health Institute, Trinity College, Dublin, Ireland
University of California, San Francisco, CA, USA
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023, Corrected Publication 2024
P. Vlamos (ed.), GeNeDis 2022, Advances in Experimental Medicine and Biology 1424, https://doi.org/10.1007/978-3-031-31982-2_1

1.1 Introduction

Intrinsic coupling constitutes a key feature of ongoing brain activity, which exhibits rich spatiotemporal patterning and contains information that influences cognitive processing. Meanwhile, a new paradigm has emerged that considers the brain as inherently active, constantly creating predictions about upcoming stimuli and events [1, 29, 33]. Opposing the classical view, it soon became clear that ongoing activity carries information and is endowed with meaningful spatiotemporal structure, which reflects previous learning and can bias the processing of stimuli [14, 29]. Importantly, these fluctuations of ongoing activity were strongly synchronized across spatially distributed neuronal populations [13, 15, 45], suggesting that the processing of stimuli is biased not just by fluctuations in a local neuronal population but also by the dynamics of coherently active networks. These coupling patterns in ongoing activity involved not only low-frequency fluctuations in the δ-band (1–4 Hz) or below [13, 15], but also faster frequencies in the θ (5–8 Hz), α (9–12 Hz), β (13–30 Hz), and γ range (>30 Hz) [15, 45, 46].

It is well known that brain rhythms are the neural syntax of brain communication [10]. Owing to the constraint of slow axonal conduction velocity, fast oscillations are restricted to a small volume of nervous tissue, while slower brain rhythms extend over a broader volume. When multiple frequencies coexist simultaneously, this anatomical constraint means that the phase of the slower frequency modulates the faster one [11]. Discrete packets of information in, e.g., the γ frequency that have to be sent from one brain area to another are often grouped by slower brain rhythms via the cross-frequency phase-to-amplitude coupling (PAC) mechanism. This packeting of information in the γ oscillation can be likened to TCP/IP packets that each carry a subpart of an email message; at the final destination, the letters contained in the packets are reassembled into "words." Cross-frequency coupling (CFC), and PAC in particular, supports the hierarchical organization of brain rhythms and the fast, accurate, and uninterrupted communication between neuronal populations [54]. It is therefore important to explore both intra- and inter-frequency coupling mechanisms simultaneously under the umbrella of the dominant coupling mode [23, 24, 31]. Especially in the case of dynamic functional connectivity, the way dominant coupling modes fluctuate even during spontaneous activity can be explored [16, 17]. The ratio of inter- versus intra-frequency coupling is believed to be a unique index of brain function that could be sensitive to development, to interventions, and to various brain disorders and diseases.

Brain rhythms are directly linked to a broad repertoire of cognitive functions and dysfunctions covering various Brodmann areas [30, 32]. Recently, Basar et al. [9] proposed the CLAIR model as a comprehensive database that will incorporate the results of all studies linking anatomical brain areas with function/dysfunction and with the type of dominant brain rhythm (δ to γ), locally within brain areas and globally between brain areas, as part of power spectrum or connectivity analysis, respectively. This suggestion is further supported by the incorporation of temporal dynamics and the notion of dominant intrinsic coupling modes [19, 21, 22–26].

Accurate timing is essential for human brain function. Still, current human brain imaging is dominated by methods focusing on the spatial distribution of brain activity, namely functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). Both modalities have been used extensively for over 20 years, mapping brain areas to specific brain functions. By design, however, neither modality can inform us about the dominant brain rhythm, the type of interactions between brain areas (intra- or cross-frequency coupling), or the accurate timing of brain responses and the time-delays between co-functioning areas. To reveal this information from brain activity, the focus needs to shift to magnetoencephalography (MEG) and electroencephalography (EEG) [36]. Especially in terms of the temporal contribution of one area to different processes, MEG has revealed that a single brain area can be involved in brain functions covering different time scales [35].

In the present chapter, the positive effect of a Mnemonic Strategy Training (MST) program, a cognitive stimulation intervention based on the Method of Loci (MoL), is analyzed. This MST was designed to challenge spatial memory and the integration of spatial location with object semantic memory [12, 26], which are supported in particular by the posterior hippocampus, entorhinal cortex, precuneus, and retrosplenial cortex, the regions in which tau and Aβ pathology initially co-occur [37, 41, 44]. MST was delivered by mobile platform technologies, i.e., an iPad tablet, to participants in a specialist clinic setting, e.g., a memory clinic. Spontaneous EEG activity was recorded before and after the intervention period in three groups (target: training; active control: watching documentaries; passive control: no activity) to assess the effectiveness of this protocol [26]. Additionally, the positive outcome of this MST intervention protocol, designed specifically for elderly subjects at risk for dementia, was evaluated under a neuroinformatic approach. First, a dynamic source connectivity analysis was adopted [16, 17, 21, 22–27]. Following a weighting strategy over both intra- and cross-frequency coupling interactions, the dynamic contribution of coupling mechanisms both within the same frequency and between frequencies was revealed. At a second level, improvements in the contribution of specific intra- and/or inter-frequency bands were linked to improvements in neuropsychological estimates in the target group. Complementarily, mean time-delays of brain rhythms were estimated between every pair of EEG sources as an additional potential marker of intervention-related improvement. Finally, the global efficiency of the dynamic network versus its cost was explored. All in all, the adopted methodology could be useful for validating the positive outcome of an intervention across many target groups and strategies.

1.2 Patient Recruitment and Data Availability

For this study, 200 patients were randomly approached from a hospital-based cohort. From this cohort, 42 adults were excluded, and 158 right-handed MCI individuals (96% single-domain; mean age = 69.16, SD = 5.13) (see sup. material ST.1) were deemed eligible to participate in the trial; those with a diagnosis of AD were excluded, according to Dubois's guidelines [28]. Each participant went through a detailed neuropsychological examination 15–20 days prior to the intervention onset. Participants also underwent cerebrospinal fluid (CSF) analysis, including measurement of tau, phospho-tau, and amyloid-β1–42 (Aβ1–42; cutoff < 500 ng/L) (INNOTEST® enzyme-linked immunosorbent assay; Fujirebio Europe, Gent, Belgium). Medial temporal atrophy was assessed visually on brain MRI scans using the standardized Scheltens scale (five categories, range 0–4), with 0 corresponding to no atrophy. The diagnosis of neurocognitive disorder for each patient was made using the IWG-2 criteria for pro-AD [28]. After the examination, the participants were divided into two groups: (a) Mnemonic Strategy Training (MST) and (b) active control (AC). The study protocol was approved by the Bioethics Committee of the Medical School of the Aristotle University of Thessaloniki, as well as the Board of the Greek Association of Alzheimer's Disease and Related Disorders (GAADRD). This project was conducted in accordance with the Helsinki Declaration for Human Rights. The ethics committee of GAADRD approved the study protocol, and each participant received detailed information regarding the study. It was made clear to them that they could terminate the experiment at any time without the need to provide any justification for their decision (no one did).
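The phase-to-amplitude coupling (PAC) mechanism described in the Introduction can be made concrete with a small sketch. This is not the chapter's actual analysis pipeline: the band edges, the mean-vector-length modulation index, and the synthetic signal below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter between lo and hi (Hz)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_modulation_index(x, fs, phase_band=(5, 8), amp_band=(30, 48)):
    """Mean vector length of the amplitude-weighted slow phase.

    Values near 0 mean no coupling; larger values mean the fast rhythm's
    amplitude is locked to the slow rhythm's phase.
    """
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

# Synthetic 10-s signal: gamma bursts locked to the theta phase (coupled)
# versus a constant-amplitude gamma riding on theta (uncoupled).
fs = 500
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
coupled = theta + 0.4 * (1 + theta) * np.sin(2 * np.pi * 40 * t)
uncoupled = theta + 0.4 * np.sin(2 * np.pi * 40 * t)
assert pac_modulation_index(coupled, fs) > pac_modulation_index(uncoupled, fs)
```

In the chapter's framework, an index of this kind, computed per band pair and per sliding window, would be one ingredient in deciding whether a window's dominant coupling mode is intra-frequency or cross-frequency.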
1.2.3
AC TE
TR
Cognitive Battery
RE
1.2.2
The Mini-Mental State Examination (MMSE) was used to assess global cognitive functioning and the Clinical Dementia Rating Scale Sum of Boxes (CDR-SB) score can be used to accurately stage severity of cognitive decline. Short-term memory and working memory were investigated using a digit span forward test. Tests of executive functioning included verbal fluency and category fluency (the Set Test), Stroop, and the TMT B.
Mnemonic Strategy Training
The Mnemonic Strategy Training (MST) program is a novel cognitive stimulation intervention, based on the Method of Loci (MoL) and designed to challenge spatial memory, and the integration of spatial location with object semantic memory [12, 26], which are directed in particular by the posterior hippocampal, entorhinal cortex, precuneus, and retrosplenial cortex, the regions in which both tau and Aβ pathology both initially co-occur [37, 41, 44]. MST was delivered by mobile platform technologies, i.e., iPad table, to participants at a specialist clinic setting, e.g., memory clinic. A demo showing the Mnemonic Strategy Training with the integrated physical demanding component is shown in S.1, while in Supplemental Material, there is a detailed description of the task. The AC intervention was based on the IMPACT study AC paradigm, which is widely used to control for potential confound factors, such as willingness to adopt an active aging profile, computer skills, and social interaction [27]. In this study, the AC group participants were exposed to YouTube documentaries about nature, art, and history with similar training parameters (e.g., computer use, duration, and intensity) as the IMPACT protocol. At the end, they completed questionnaires about the documentaries they just viewed. Therefore, AC may be regarded as a cognitive stimulation protocol that does not involve any Mnemonic Strategy Training. The whole protocol was computerized. Randomization was undertaken in blocks of 10–16,
DC
The prodromal Alzheimer’s dementia classification used in this study was considered to be the period immediately before Alzheimer’s dementia diagnosis characterized by MCI. MCI diagnosis was performed by a dementia expert neurologist, using Petersen’s criteria and meeting the IWG-2 criteria for pro-AD. All MCI participants had a Clinical Dementia Rating score of 0.5 and were at large majority females (75.73%). Individuals with a dementia diagnosis according to the International Classification of Diseases 10th Revision (ICD-10) or suffering notable cognitive impairment, as evidenced by an MMSE score of 23 or less, were excluded from the study. Additional exclusion criteria included contraindications for MRI, focal brain lesions seen on brain imaging or the presence of other severe or unstable medical illness, current psychiatric disorder, and current medical condition that prevented participation in the study tasks, such as a clinical history of stroke associated with permanent disability or sensory impairment and current hazardous or harmful alcohol consumption. All assessments and program interventions were undertaken at the 3rd Neurological Clinic of the Aristotle University of Thessaloniki, Greece, and the Greek Association of Alzheimer’s Disease and Related Disorders (GAADRD) memory clinics. The data used in the preparation of this article were obtained from two independent datasets: GAADRD outpatient memory clinic and Virtual Reality Medical Center, San Diego.
Long-term memory was assessed with the California Verbal Learning Test (CVLT). Impairment was determined as if at least one score per domain was 1.5 SD below group means compared to test-specific normative data. Finally, functional assessment in complex activities of daily living was assessed with the Instrumental Activities of Daily Living Scale (IADL, [3]) and depression with the 30-item Geriatric Depression Scale (GDS).
PT ER
Disease Categorization
HA
1.2.1
T. P. Exarchos et al.
RETRACTED CHAPTER: Dynamic Reconfiguration of Dominant Intrinsic Coupling Modes. . .
PT ER
1.2.4
Demographics and Neuropsychological Measurements
Baseline cognitive test scores and the effects of the intervention in the three different groups are shown in Tables 1.1 and 1.2. On average, study participants had moderate levels of cognitive function at baseline, consistent with their age and education levels. When individual cognitive tests were examined, there were significant main effects ( p < 0.0001) for the California Verbal Learning Test (CVLT); perseveration and intrusion errors, California Verbal Learning Test (CVLT) immediate and delayed recall; TrailMaking Test – part A (TMT-A); Trail-Making Test – part B (TMT-B); and Geriatric Depression Scale (GDS) (see sup. material ST.2). In contrast, the active control (AC) group showed a significant increment ( p = 0.01) in MMSE and GDS scores and significant decrement ( p = 0.001) in CVLT immediate and delayed recall, perseveration errors, and TMT-B time of completion after the intervention. The waitlist control (WC) group did not show any significant difference. Finally, at the end of the M3 neuropsychological performance measurements, the participants of the Mnemonic Strategy Training group were interviewed about the usage of MoL techniques during the CVLT responses. The responses from the majority of the participants (90%) were positive.
AC TE
TR
RE
5
as inhibition of external stimuli or processing speed (e.g., reaction time at interactive events). Participants played different difficulty levels of NSG according to their progress in the game, i.e., in a baseline session (three items hidden) and finally all items at the last training sessions (ten items hidden). A total of 50 sessions were available for the participants. The intervention frequency was up to 4 days per week with 2 h maximum duration per day. Most study participants (92%) completed all 50 treatment sessions and 32 of them dropped out (drop-out rate of 15%). Participants underwent a comprehensive cognitive assessment.
DC
according to a random list of computer-generated numbers, with five to eight individuals allocated to each group. Due to the nature of the intervention, participants were not blinded to group membership; however, research assistants undertaking the follow-up assessments were. The participants at the NSG intervention were given a mini iPad tablet (Apple Inc., San Francisco, USA), where the application was preinstalled, and were trained for 15 min at the usage of the application interface, which was designed to be compatible with MCI. After the training session, the participants were interfacing with the application on a one-to-one basis without any interruption (independently). In more details, there was a list for serial recall, which was constructed from the high-imageability word pool and the low-imageability word pool used by. Each word contained four to six letters in length and was then depicted as a threedimensional computer-generated object (3D object). There were ten total 3D objects, each representing one word. The participant in NSG was then asked to position the three to ten 3D objects in different rooms of a real-world environment (scene encoding phase), i.e., house or outdoors, using an AR-enabled mobile phone or tablet PC device. Once the items are “encoded,” the participant engages in a timed “recall” exercise, while being challenged with attention distractions, i.e., a high-pitch or a low-pitch environmental sound. This condition, reflecting both behavioral (i.e., greater conflict effect) and corresponding neural deficits in executive control (e.g., less activation in the prefrontal and anterior cingulate cortices) varied in intensity between sessions in order to create task variability and enable far transfer. 
In that context, NSG created an ecologically valid interaction, which challenged working and spatial memory as well as different aspects of executive function, such as volition, self-awareness, planning, inhibition of dominant response, and external distraction during response control. According to the literature, such repeated interactions require participants to demonstrate mental flexibility, follow a mental strategy, and monitor their performance by eliciting medium to high cognitive control, such
T. P. Exarchos et al.
Table 1.1 Clinical characteristics of the subjects (means with SDs)

                 Novelty serious game (NSG)   Active control (AC)              Waitlist control (WC)
Population N     53                           50                               55
Age (years)      69.7 ± 5.3                   71.2 ± 3.9                       66.4 ± 6.1
MMSE             24.9 ± 1.4                   25.1 ± 1.3                       24.8 ± 1.5
MoCA             21.4 ± 4.3                   21.5 ± 4.2                       21.2 ± 4.3
No. of males     13/53 (24.53%)               10/50 (20%)                      13/55 (24%)
YOE              7.4 ± 2.9                    7.2 ± 2.7                        7.4 ± 2.7
Intervention     MST                          Watching YouTube documentaries   None
Sessions         Up to 6 h/w                  Up to 5 h/w                      None
Duration         54 ± 6.9 h                   53 ± 4.5 h                       None

MMSE Mini-Mental State Examination, MoCA Montreal Cognitive Assessment, YOE years of education, MST Mnemonic Strategy Training

Table 1.2 Neuropsychological measurements of the three groups before and after intervention
                            NSG (n = 53)                     AC (n = 50)                     WC (n = 55)
                            M0             M03               M0             M03              M0             M03
Age (years)                 69.7 ± 5.3                       71.2 ± 3.9                      66.4 ± 6.1
School level                11.6 ± 1.3                       11.9 ± 1.6                      11.7 ± 1.4
Neuropsychological data
MMSE   Global               24.9 ± 1.4     28.1 ± 1.1***     25.1 ± 1.3     27.1 ± 1.1*      24.8 ± 1.5     24.9 ± 1.2
CDR-SB Global               0.5 ± 0.4      0.5 ± 0.4         0.5 ± 0.4      0.5 ± 0.4        0.5 ± 0.4      0.5 ± 0.4
CVLT   Immediate recall     7.8 ± 1.5      10.8 ± 1.5***     7.8 ± 1.3      9.4 ± 1.9**      7.9 ± 1.4      7.8 ± 1.8
       Delayed recall       6.3 ± 2.0      10.5 ± 1.8***     6.4 ± 1.1      7.5 ± 1.6**      6.5 ± 0.7      6.1 ± 0.6
       Perseveration errors 4.6 ± 1.1      1.1 ± 1.0***      4.5 ± 1.1      3.1 ± 0.9**      4.6 ± 0.9      4.8 ± 1.1
       Intrusion errors     3.7 ± 0.7      0.6 ± 0.4***      3.3 ± 0.5      3.5 ± 0.4        3.4 ± 0.8      3.6 ± 0.3
       Memory decay         2.6 ± 1.2      1.1 ± 1.0***      2.5 ± 1.3      1.9 ± 1.1*       2.0 ± 1.0      2.5 ± 1.1
Executive functions
       TMTA                 85.2 ± 15.7    57.3 ± 11.3***    79.4 ± 17.8    71.5 ± 12.5      84.7 ± 21.5    76.9 ± 13.2
       TMTB                 212.5 ± 34.4   138.7 ± 22.6***   167.9 ± 37.1   141.4 ± 19**     196.1 ± 21.2   194.2 ± 18.9
Attention
       Direct span          5.8 ± 0.6      6.7 ± 0.6***      5.7 ± 0.6      5.9 ± 0.5        5.7 ± 0.6      5.8 ± 0.6
       Reverse span         4.2 ± 0.7      5.1 ± 0.6***      4.3 ± 0.6      4.6 ± 0.6        4.2 ± 0.5      4.1 ± 0.6
IADL   Total score          9.9 ± 2.1      10.2 ± 2.3        9.9 ± 2.2      10.1 ± 2.8       9.8 ± 1.9      9.9 ± 2.2
GDS    Depression           7.2 ± 1.6      2.8 ± 2.2***      7.8 ± 1.2      6.5 ± 1.9*       7.7 ± 1.9      8.2 ± 2.2

*p < 0.01; **p < 0.001; ***p < 0.0001
MMSE Mini-Mental State Examination, TMT Trail-Making Test, CVLT California Verbal Learning Test, Memory decay computed by subtracting the number of words of the delayed recall from the number of words of the fifth learning trial, IADL instrumental activities of daily living, GDS Geriatric Depression Scale
1.3 Interventions

The Mnemonic Strategy Training program is a Method of Loci (MoL) intervention delivered by augmented reality (AR) to users in their natural environments. A demo showing the Mnemonic Strategy Training sequence is shown in S.1, while a detailed description of the task is given below.
RETRACTED CHAPTER: Dynamic Reconfiguration of Dominant Intrinsic Coupling Modes. . .
Fig. 1.1 Real-time screenshots of the computerized MST intervention from a tablet implementation, while the user is searching for the hidden object in real space. The user is asked to hide 3–10 items and locate them in the shortest possible time, while performing a timed swing action with their upper extremity. The figure is split into the following sub-screens: (a) the user is asked to position a 3D teddy augmented reality (AR) item in a physical location and (b) then position the next 3D item, in this case the heart, and double encode it mentally with the spatial location. (c) The user is then asked to recall the word and also locate the 3D items that are hidden with AR in the physical space, and (d) the user is challenged with attention distractions triggered by auditory feedback while attempting to recall the 3D item name and the actual spatial memory path originally used to hide the object
The AC intervention was based on the IMPACT study AC paradigm, which is widely used to control for potential confounding factors, such as willingness to adopt an active aging profile, computer skills, and social interaction. In this study, the AC group participants were exposed to YouTube documentaries about nature, art, and history with similar training parameters (e.g., computer use, duration, and intensity) as the IMPACT protocol. At the end, they completed questionnaires about the documentaries they had just viewed. Therefore, AC may be regarded as a cognitive stimulation protocol that does not involve any Mnemonic Strategy Training (Fig. 1.1). The whole protocol was computerized, with the serial recall phase implemented through the Python experiment-programming library (pyEPL). Randomization was undertaken in blocks of 10–16, according to a random list of computer-generated numbers, with five to eight individuals allocated to each group. Due to the nature of the intervention, participants were not blinded to group membership; however, research assistants undertaking the follow-up assessments were.

1.4 Neuropsychological Performance
In this study, the sample size was calculated a priori in order to achieve a power of 80% on the neuropsychological performance at 3 months, after adjusting for an expected dropout rate of 10–15%. All analyses were performed using intent-to-treat principles, and the power calculations were based on previous studies in 140 patients with MCI [48–51]. Baseline characteristics were compared between groups with the use of χ2 tests for categorical variables and analysis of variance for continuous variables. Linear mixed models for repeated measures were used to study the differences between groups for each of the outcomes. The dependent variable was the outcome measure, while the independent fixed variables were group (passive control, active control, and intervention), baseline score, and the interaction between measurement time points and group. The baseline-adjusted mean difference between groups at each measurement point, with 95% CIs, is presented. Secondary analyses examined the effects of the intervention on individual cognitive tests. Based on prior studies [48–51], an a priori hypothesis was formed that the effects of the intervention would be greatest for measures of executive function, verbal learning, and verbal memory tasks. In the post hoc analysis, Fisher's exact test was used to calculate the proportion of patients in each group who reached a clinically important change (improvement or worsening) in neuropsychological performance from baseline. Statistical analyses were done using SAS 9.2 for Windows and SPSS 20 for Windows. This trial is registered with ClinicalTrials.gov, NCT02417558.

1.5 Methods

1.5.1 EEG Data Acquisition

The EEG data used were recorded with a Nihon Kohden JE-921A equipped with active electrodes attached to a cap fitted on the scalp. The device recorded brain signals through 32 electrodes, 2 reference electrodes attached to the earlobes, and a ground electrode placed at a left anterior position. In addition, both vertical and horizontal electrooculograms (EOG), as well as electrocardiographic (ECG) activity using bipolar electrodes, were recorded. Electrode impedances were kept below 2 kΩ, while the sampling rate was set at 500 Hz. Participants were instructed to sit in a comfortable armed chair, to close their eyes, and to stay calm for 5 min.

1.5.2 EEG Data Source Reconstruction

The neuroimaging data analysis in this work was extended via the investigation of synchronous firing of cortical regions and the dynamic organization of the functional networks, within the concept of phase-amplitude coupling (PAC) interactions [18, 20, 24]. Cortical activity was obtained from 32 scalp EEG signals in each experiment through the high-resolution EEG technique, involving realistic models to characterize the effects of the different electrical conductivities of the head structures and linear inverse solutions. In the present chapter, an average head model from the reconstruction of 152 normal MRI scans was considered (MNI template, http://www.loni.ucla.edu/ICBM/). Scalp, outer skull, inner skull, and cortex structures were extracted through the boundary element method (BEM) [28]. The BEM approximates the different compartments of volume conductor models by closed triangle meshes with a limited number of nodes. In the present study, each structure consisted of 305 nodes, which is enough to model the smooth surfaces of the average head model. Thus, the cortex model consisted of 305 equivalent electrical dipoles representing the cortical sources.
1.5.3 EEG Data Source Connectivity Analysis

A dynamic connectivity analysis based on a sliding window was applied to eight conventionally defined frequency bands: δ (1–4 Hz), θ (4–8 Hz), α1 (8–10 Hz), α2 (10–13 Hz), β1 (13–20 Hz), β2 (20–30 Hz), γ1 (30–45 Hz), and γ2 (55–90 Hz). Band-limited brain activity was derived by applying a third-order Butterworth filter (in zero-phase mode). The brain source network was quantified by employing two types of interactions and adopting properly defined connectivity estimators: (a) intra-frequency phase coupling within each of the eight frequency bands was estimated using the imaginary part of the phase locking value (iPLV); (b) cross-frequency coupling (CFC), namely phase-to-amplitude coupling (PAC), between the 28 possible pairs of frequencies was defined with the PAC estimator [19, 20, 24, 25]. The strength of the connections estimated with the two adopted connectivity estimators (iPLV/PAC) ranged from 0 to 1. The derived quantities are tabulated in a 305 × 305 matrix, called hereafter the "functional connectivity graph" (FCG), in which an entry conveys the strength of iPLV/PAC for each pair of cortical sources. The aforementioned procedure produced 8 + 28 = 36 FCGs for each subject and for each pre-/post-condition. A sliding window of 250 ms, moving every 25 samples, was adopted in order to capture in more detail any possible transition of the dominant intrinsic coupling mode between consecutive windows. The whole approach led to 190 time-varying FCGs for each subject and condition. For each subject and for each connectivity estimator, 4D dynamic functional connectivity graphs were derived, each one with dimensions (modes: 8 + 28) × 190 (temporal segments) × 305 (sensors) × 305 (sensors). Table 1.3 summarizes the derived dynamic graphs and their dimensions for each subject.

Table 1.3 Dimensions and information tabulated in the dynamic functional connectivity graphs

       Within frequencies   Between frequencies   Directed   Dimensions
iPLV   ✓                    –                     –          8 × 190 × 305 × 305
PAC    –                    ✓                     ✓          28 × 190 × 305 × 305

1.5.4 Statistical Filtering: Surrogate EEG Source Connectivity Analysis

To identify significant iPLV/PAC interactions, estimated within frequencies and for every pair of frequencies correspondingly, between all sources and at each successive sliding window, surrogate data were employed [53]. Surrogate data analyses determined the following: (a) whether a given iPLV/PAC value differed from what would be expected by chance alone and (b) whether a given nonzero value indicated coupling that was, at least statistically, non-spurious. Significant iPLV values were determined after calculating iPLV for rs = 10,000 surrogates for each connection, derived by selecting a random time point from the time series of one of the sources and then exchanging the order of the two segments that were created. Similarly, significant PAC values were determined after calculating PAC for rs = 10,000 surrogates for each connection, derived by selecting a random time point from the amplitude time series (high frequency) and then exchanging the two ordered segments. For every time window, source pair, and pair of frequencies, the null hypothesis H0 that the observed PAC value came from the same distribution as the distribution of surrogate PAC values was tested. Ten thousand surrogate time series ϕsLF(t) were generated by cutting at a single random location and exchanging the two resulting time courses [2, 11]. Repeating this procedure produced a set of surrogates with minimal distortion of the original phase dynamics and minimal impact on the non-stationarity of brain activity, as compared to either merely shuffling the time series or cutting and rebuilding the time series at more than one time point. With this approach, the non-stationarity of the brain activity as captured by the source time series is less affected than with circular permutation of the low-frequency phase time series (for PAC), the amplitude series (high frequency, for PAC), or the phase of the time series (for iPLV). This procedure ensures that the observed and surrogate indices share the same statistical properties. For each subject and condition, the surrogate PAC (sPAC) was computed. Then, a one-sided p-value, expressing the likelihood that the observed PAC value could belong to the surrogate distribution, was determined; it corresponded to the proportion of "surrogate" PACs higher than the observed PAC value [53]. PAC values associated with statistically significant p-values were considered unlikely to reflect signals not entailing PAC coupling. Similarly, for each subject and condition, the surrogate iPLV (siPLV) was computed. A one-sided p-value expressing the likelihood that the observed iPLV value could belong to the surrogate distribution was determined and corresponded to the proportion of "surrogate" iPLVs higher than the observed iPLV value [42]. iPLV values associated with statistically significant p-values were considered unlikely to reflect signals not entailing iPLV coupling. After obtaining a p-value per pair of EEG sources, at every temporal segment and for each of the 36 intra- and inter-frequency coupling modes, the p-values were corrected for multiple comparisons (p < 0.001; Bonferroni correction, p′ < p/36). The FDR method [6] was employed to control for multiple comparisons (across all frequencies and possible pairs of frequencies, 36 in total) with the expected proportion of false positives set to q ≤ 0.01. Finally, the PAC mode that characterized a specific pair of frequencies was determined based on the highest statistically significant PAC value from the surrogate analysis. Practically, the statistical surrogate analysis can lead to three conditions: (a) only one frequency or frequency pair met the statistical thresholding criterion; (b) in the case of two frequencies or frequency pairs both exceeding the statistical threshold, the one with the highest iPLV/PAC value was identified as the characteristic iPLV/PAC mode for this pair of sources at that particular time window; and (c) if none of the within-frequency and cross-frequency pairs exceeded the statistical threshold, a value of zero was assigned to this pair of sources, with no identified characteristic coupling mode. The selection of the maximum iPLV/PAC value in condition (b) can be adopted as a solution in the case of more than one surviving frequency and/or frequency pair, since both iPLV and PAC are quantified on the same scale. Finally, for each participant, the resulting time-varying PAC profiles constituted a 4D array of size [36 (frequencies and pairs of frequencies) × 190 (temporal segments) × 305 (sources) × 305 (sources)]. The identity of prominent frequencies or frequency pairs for every pair of sources at each time window was finally stored in a second 4D array of size [36 × 190 × 305 × 305]. In the latter array, significant iPLV/PAC interactions were indicated by a value of 1, with zeros indicating nonsignificant iPLV/PAC interactions.

1.5.5 Data-Driven Topological Filtering

Here, a data-driven topological filtering scheme was adopted by finding the maximum value of the following quality formula:

J_GCE = GE − Cost    (1.1)

where GE refers to the global efficiency of the network, while Cost is the ratio of the total weight of the selected edges in every run of the algorithm divided by the total strength of the original weighted graph with all the connections that survived the statistical thresholding scheme. Following multiple rounds of minimal spanning trees (MSTs), called orthogonal minimal spanning trees (OMSTs), the connections that maximize formula 1.1 were detected. Dimitriadis et al. [21] offer more details regarding this data-driven topological filtering for structural/functional brain networks. The outcome of the topological filtering was two 3D matrices per subject and condition with dimensions 190 (temporal segments) × 305 (sources) × 305 (sources). The first keeps the weights of the surviving functional connections, while the second tabulates an integer that refers to the dominant coupling mode (1 for δ, 2 for θ, . . ., 8 for γ2, 9 for δ-θ, . . ., 36 for γ1-γ2). Figures 1.2 and 1.3 demonstrate the functional connectivity graphs (FCGs) before and after applying the statistical and topological filtering schemes. The example is from an active subject, extracted from the first temporal segment during the pre-scanning condition.
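The two connectivity estimators and the single-cut surrogate scheme described above can be sketched in Python/NumPy. This is an illustrative reconstruction under stated assumptions, not the chapter's actual code: the function names and demo signals are invented, and the PAC variant shown (locking the low-frequency phase to the phase of the high-frequency amplitude envelope) is one common formulation that may differ in detail from the estimator of [19, 20, 24, 25].

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=3):
    # zero-phase third-order Butterworth filtering, as described above
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def iplv(x, y):
    # imaginary part of the phase-locking value; insensitive to zero-lag coupling
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.imag(np.mean(np.exp(1j * dphi))))

def pac_iplv(x_low, x_high):
    # phase-to-amplitude coupling: phase-lock the low-frequency phase to the
    # phase of the high-frequency amplitude envelope (one common PAC variant)
    phase_low = np.angle(hilbert(x_low))
    env = np.abs(hilbert(x_high))
    phase_env = np.angle(hilbert(env - env.mean()))
    return np.abs(np.imag(np.mean(np.exp(1j * (phase_low - phase_env)))))

def single_cut_surrogate(x, rng):
    # surrogate with minimal distortion: cut once at random, swap the segments
    cut = rng.integers(1, len(x) - 1)
    return np.concatenate([x[cut:], x[:cut]])

# toy demo: a 40 Hz carrier whose amplitude follows a (lagged) 6 Hz rhythm
fs = 500
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
low = np.sin(2 * np.pi * 6 * t)
coupled = (1 + 0.8 * np.sin(2 * np.pi * 6 * t - np.pi / 2)) * np.sin(2 * np.pi * 40 * t)
uncoupled = np.sin(2 * np.pi * 40 * t) + 0.1 * rng.standard_normal(t.size)
```

Both estimators return values in [0, 1]; in the chapter's pipeline they are evaluated inside 250 ms sliding windows and thresholded against 10,000 single-cut surrogates per connection.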
Fig. 1.2 Functional connectivity graphs (FCGs) based on intra- and inter-frequency coupling estimates. FCGs on the main diagonal demonstrate the intra-frequency phase-based full-weighted brain networks, while the off-diagonal FCGs illustrate the phase-to-amplitude (PAC) coupling between every possible cross-frequency pair. Each FCG (subplot) is a 2D matrix with dimensions 305 × 305 (sources × sources) and tabulates the strength between every possible pair of EEG source activity

Fig. 1.3 Statistically and topologically filtered (with OMST) functional connectivity graphs (FCGs) based on intra- and inter-frequency coupling estimates. FCGs on the main diagonal demonstrate the intra-frequency phase-based full-weighted brain networks, while the off-diagonal FCGs illustrate the phase-to-amplitude (PAC) coupling between every possible cross-frequency pair. Each FCG (subplot) is a 2D matrix with dimensions 305 × 305 (sources × sources) and tabulates the strength between every possible pair of EEG source activity

1.5.6 Graph Diffusion Distance Metric

To quantify the contribution of both intra- and cross-frequency coupling mechanisms across the experimental time, a proper distance metric, applied between every possible pair of the 36 FCGs at each time stamp of the dynamic functional connectivity graph (DFCG), was adopted: the graph diffusion distance measure based on the graph Laplacian exponential kernel. Initially, the graph Laplacian operator of an FCG was defined as L = D − FCG, where D is the diagonal degree matrix of the FCG. To describe a diffusion process on the graph FCG, a time-varying vector u(t), representing the quantity undergoing diffusion at each time point, was defined. The weights of the FCG express the information flow between vertices such that, for a pair of vertices i and j, the quantity FCGij(ui(t) − uj(t)) represents the flow of information from i to j via the edge that connects them. It is then straightforward that the diffusion process can be written as

u′(t) = −Lu(t)    (1.2)

where L is the graph Laplacian of the FCG. With starting point u(0) at time t = 0, Eq. 1.2 has the analytic solution u(t) = exp(−tL)u(0). Here exp(−tL) is an N × N matrix function of t, known as the Laplacian exponential diffusion kernel. Considering u(0) = ej, where ej is the unit vector with all zeros except in the jth component, running the diffusion process up to a time t gives the diffusion pattern exp(−tL)ej, which is precisely the jth column of exp(−tL). The columns of the Laplacian exponential kernels exp(−tL1) and exp(−tL2) describe the different diffusion patterns centered at each vertex and generated by running the diffusion up to a time t using the two different sets of weighted edges. Computing the sum of squared differences between these patterns, summed over all the vertices, defines the following equation for the graph diffusion distance (dgdd) metric:

dgdd(t) = ‖exp(−tL1) − exp(−tL2)‖²F    (1.3)

Given the spectral decomposition L = VΛV′, the Laplacian exponential can be estimated via

exp(−tL) = V exp(−tΛ)V′    (1.4)

where exp(−tΛ) is diagonal, with ith entry exp(−tλi). dgdd(FCG1, FCG2) was computed by first diagonalizing L1 and L2 and then applying Eqs. (1.3) and (1.4) for the estimation of dgdd(t) for every time t.

1.5.7 Quantifying the Contribution of Each Dominant Intrinsic Coupling Mode (DICM)

To quantify the contribution of each dominant intrinsic coupling mode (DICM) across experimental time to the positive effects of this intervention, the following algorithmic procedure was designed. First, it was assumed that all the EEG sources are simultaneously connected through every possible option of both intra- and inter-frequency coupling. Then, the distance between every possible quasi-static FCG (36 in total) was quantified. This produced a 36 × 36 distance matrix, as illustrated in Fig. 1.4a. Afterward, the rows of this distance matrix were summed, producing the relative weights (RW) (Fig. 1.4b), which were then normalized by their sum (Fig. 1.4c) in order to transform the weights to a percentage of contribution (normalized relative weights (nRW)). The sum of these 36 weights equals 1. Afterward, each of the FCGs demonstrated in Fig. 1.3 was multiplied by the corresponding nRW, and the weighted FCGs were summed across all 36, leading to an integrated FCG (iFCG), illustrated in Fig. 1.4d. Finally, this iFCG was topologically filtered, leading to the final iFCGTF (Fig. 1.4e). The whole approach was repeated independently for each instantaneous FCG, for each subject and condition. From Fig. 1.4c, it can be concluded that the biggest contribution to the iFCG is given by the intra-frequency couplings and the β1-β2 cross-frequency pair. The estimated relative weights for both intra- and inter-frequency coupling mechanisms lead to 36 time series, one for each frequency or cross-frequency pair, independently for each condition and subject. The ratio of the sum of relative weights derived from the cross-frequency pairs
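Equations (1.2)–(1.4) can be sketched compactly in Python/NumPy. The function name, the toy matrices, and the fixed diffusion time t are illustrative assumptions; in the chapter, dgdd(t) is evaluated for every time t rather than at a single value.

```python
import numpy as np

def graph_diffusion_distance(W1, W2, t=1.0):
    """Eq. (1.3): squared Frobenius norm between the Laplacian exponential
    diffusion kernels of two weighted graphs, each kernel computed from the
    spectral decomposition L = V diag(lambda) V' as in Eq. (1.4)."""
    def lap_exp(W):
        L = np.diag(W.sum(axis=1)) - W            # graph Laplacian L = D - W
        lam, V = np.linalg.eigh(L)                # eigendecomposition of L
        return V @ np.diag(np.exp(-t * lam)) @ V.T
    return np.linalg.norm(lap_exp(W1) - lap_exp(W2), "fro") ** 2

# toy demo with two small symmetric FCG-like matrices
rng = np.random.default_rng(0)
A = rng.random((6, 6)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
B = rng.random((6, 6)); B = (B + B.T) / 2; np.fill_diagonal(B, 0)
```

Because the FCGs are symmetric with nonnegative weights, their Laplacians are symmetric positive semidefinite, so `eigh` is the appropriate (and numerically stable) decomposition.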
versus the sum of the relative weights derived from the intra-frequency couplings was then estimated. If the relative weights of the intra-frequency couplings are tabulated in a matrix RWintra [8 × 190] and the relative weights of the cross-frequency couplings are tabulated in RWinter [28 × 190], then their ratio over experimental time can be estimated. Figure 1.5 illustrates the nRW of the eight frequency rhythms, while Fig. 1.6 illustrates the sum of nRW corresponding to each frequency modulator (7 for δ, 6 for θ, etc.). Figure 1.7 demonstrates the group-averaged dynamic evolution for each group and condition. Each time series illustrates the sum of nRW corresponding to each frequency modulator (7 for δ, 6 for θ, etc.). For that reason, the blue time series, which corresponds to the δ frequency, has a higher sum compared to the rest of the frequencies.

Fig. 1.4 An outline of the proposed methodology for constructing an integrated, topologically filtered functional connectivity graph iFCGTF. (a) The distance between every possible quasi-static FCG (36 in total) was quantified with the graph diffusion distance metric (gDDM), giving a 36 × 36 distance matrix. (b) Relative weights (RW) were produced by summing up the rows of the distance matrix in (a). (c) Normalized relative weights (nRW) were derived by normalizing RW with its sum, leading to a percentage of contribution of both intra- and inter-frequency coupling to the integrated FCG (iFCG). (d) The iFCG is produced by multiplying each of the FCGs (intra- and inter-frequency coupling) demonstrated in Fig. 1.3 with the corresponding nRW (c) and finally summing up across all the 36 FCGs. (e) Finally, this iFCG was topologically filtered, leading to the final iFCGTF

Fig. 1.5 Group-averaged dynamic evolution of the normalized relative weights (nRW) for each frequency band

Fig. 1.6 Group-averaged dynamic evolution of the normalized relative weights (nRW) for each frequency band phase modulator
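The Fig. 1.4 pipeline (distance matrix → RW → nRW → iFCG) reduces to a few lines of NumPy. In this sketch the plain Frobenius distance stands in for the graph diffusion distance metric, the array contents are synthetic, and the function name is an assumption.

```python
import numpy as np

def integrate_fcgs(fcgs, dist):
    """(b) sum the rows of the 36 x 36 distance matrix into relative weights,
    (c) normalize them so they sum to 1 (nRW), and (d) combine the 36 quasi-
    static FCGs into one integrated FCG (iFCG) as an nRW-weighted sum."""
    rw = dist.sum(axis=1)                         # relative weights (RW)
    nrw = rw / rw.sum()                           # normalized relative weights
    ifcg = np.tensordot(nrw, fcgs, axes=(0, 0))   # weighted sum over all modes
    return nrw, ifcg

# synthetic stand-in: 36 coupling modes over a 5-source toy network
rng = np.random.default_rng(1)
fcgs = rng.random((36, 5, 5))
dist = np.array([[np.linalg.norm(fi - fj) for fj in fcgs] for fi in fcgs])
nrw, ifcg = integrate_fcgs(fcgs, dist)
```

The resulting iFCG would then be topologically filtered (step (e) of Fig. 1.4) before further analysis.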
Fig. 1.7 Group-averaged dynamic evolution for each group and condition

1.5.8 A Dissimilarity Measure for Dynamical Trajectories Based on the Wald-Wolfowitz (WW) Test

The two-sample, nonparametric WW test was adopted in the present work to assess the degree of similarity between two nRW(t) metric time series, based on intra- and inter-frequency coupling mechanisms and also for desynchronization events. The procedure entailed, first, transforming every pair of nRW(t) time series x(t), t = 1, 2, . . ., T into dynamic trajectories represented by multidimensional vectors Xt = [x(t), x(t + 1), . . ., x(t + de)] and Yt = [y(t), y(t + 1), . . ., y(t + de)], where X and Y correspond to the pre- and post-intervention nRW(t) time series, independently for each subject. These vectors were formed by selecting an appropriate embedding dimension de, which controls the dimensionality of the vectors, and an embedding time-delay dt. By adopting the Ragwitz criterion, the embedding dimension de and the embedding delay dt [42] were optimized, resulting in values ranging from 3 to 6. The two point samples {Xt}t = 1:m and {Yt}t = 1:n were then formed, and wdist = w({Xt},{Yt}) was computed. Next, the minimal spanning tree (MST) graph of the overall sample was constructed (i.e., disregarding the sample identity of each point). In this graph, the points are nodes connected by N − 1 edges (N = n + m), i.e., a path exists within each pair of nodes. The second step of the procedure entails computing the R statistic, which is the total number of consecutive sequences with identical sample identities (i.e., "runs"). Based on the number of edge pairs of the MST sharing a common node and the degrees of the nodes, the mean and variance of R can be calculated [39]. This property of R permits computation of the initial form of the normally distributed WW dissimilarity index (w). The measure used in the classification schemes of the present work was derived from w using the Heaviside step function H(x) as follows: wdist = |w|·H(−w). The higher the value of wdist, the more dissimilar the two point sets are considered to be.
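The runs computation at the core of the multivariate WW statistic can be sketched as follows: build the MST of the pooled embedded points, count the edges joining points from different samples, and obtain the number of runs R. The normalization to the w index via the mean and variance of R [39] (or, alternatively, a permutation scheme) is omitted here, and the Gaussian toy data are purely illustrative.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def ww_runs(X, Y):
    """Number of 'runs' R for the multivariate Wald-Wolfowitz test:
    removing every MST edge that joins points of different samples
    leaves R connected subtrees."""
    Z = np.vstack([X, Y])
    labels = np.array([0] * len(X) + [1] * len(Y))
    mst = minimum_spanning_tree(cdist(Z, Z)).toarray()
    i, j = np.nonzero(mst)                    # the N - 1 MST edges
    cross = int(np.sum(labels[i] != labels[j]))
    return cross + 1                          # each cut splits one subtree in two

# two well-separated samples give few runs (dissimilar distributions);
# samples from the same distribution give many runs
rng = np.random.default_rng(2)
X_far, Y_far = rng.normal(0, 1, (30, 3)), rng.normal(8, 1, (30, 3))
X_same, Y_same = rng.normal(0, 1, (30, 3)), rng.normal(0, 1, (30, 3))
```

A small R relative to its null expectation indicates that the two trajectories occupy different regions of the embedding space, which is what wdist ultimately quantifies.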
The WW test was applied to the time series of nRW of the intra-frequency coupling, the cross-frequency coupling, and their ratio, between pre- and post-condition at a subject level. To assign a p-value to wdist, the 10,000 surrogates that were created for the statistical filtering of each pair of EEG sources were employed. The whole procedure of estimating the iFCGTF was then repeated for each of the 10,000 surrogates, leading to 10,000 time series of nRW(t) for both intra- and inter-frequency coupling and also for the desynchronization events. Using this surrogate distribution, a p-value was assigned to each wdist for each of the 37 time series (8 + 28 nRW(t) plus their ratio) per subject. At a second level, using only the wdist values from those of the 37 time series in which significant improvements were demonstrated in the target group, it was attempted to link wdist with improved neuropsychological indexes. For this purpose, a multiple linear regression analysis was adopted between wdist and the absolute difference of the neuropsychological estimates between pre- and post-condition.

1.5.9 Estimating Time-Delays with Delay Symbolic Transfer Entropy (dSTE)

By adopting a novel estimator called delay symbolic transfer entropy (dSTE) [22], which demonstrated its effectiveness in a mental arithmetic task [25], the time-delay between every pair of EEG sources in every frequency and across all temporal segments was detected. This procedure was followed independently for each subject and condition (pre- and post-intervention time period). The main goal was to detect improvement in time-delays due to the intervention and also to demonstrate, on a logarithmic scale, the mean of the time-delays across EEG sources within each frequency. It was assumed that the hierarchy of time scales should be improved due to the intervention.

1.6 Improvements

1.6.1 Improvement of GE–Cost for the MST Group

Equation 1.1 optimizes the global efficiency of the network versus the overall cost. Here, a comparison was performed, for each subject, of the distribution of J values of the integrated dynamic functional connectivity graphs (IDFCG) between pre- and post-condition. For the statistical tests, the Wilcoxon rank sum test was used.

1.6.2 Improvement of Brain Activity Synchronization Due to MST Intervention Protocol

WW tests were applied to each of the 37 time series (8 intra-frequency, 28 inter-frequency, and 1 for overall brain synchronization) between pre- and post-condition and independently for each group. Significant differences were revealed only for the MST group. The whole analysis clearly demonstrated a positive effect of the MST protocol. To link the positive improvement of brain synchronization with improvements in neuropsychological measurements, a regression analysis followed. Complementary to the aforementioned results, a multiple linear regression analysis was fed with the neuronal oscillations (dependent variable) and the absolute differences of the neuropsychological assessments between pre- and post-condition for the MST group (independent variables). The analysis produced the following multilinear models:

(a) = 0.54·TMTA + 0.21·TMTB − 0.006·TMTA·TMTB, with R² = 0.67 and p < 0.01
(b) = 6.37·MMSE + 1.02·CVLT_D + 1.59·ATT_DSPAN − 3.90·ATT_RSPAN, with R² = 0.74 and p < 0.01

where ATT_DSPAN refers to the attention direct span, ATT_RSPAN to the attention reverse span, and CVLT_D to the delayed recall subcomponent of the CVLT (see Table 1.2).
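As a purely synthetic illustration of the form of model (a), the sketch below simulates data from that multilinear model (main effects of TMTA and TMTB plus their interaction, no intercept) and recovers the coefficients by ordinary least squares; the data, noise level, and variable ranges are invented for the example and do not reproduce the study's results.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 53                                    # NSG group size
tmta = rng.uniform(10, 40, n)             # hypothetical pre/post differences
tmtb = rng.uniform(30, 90, n)
# simulate the dependent variable from model (a)'s coefficients plus noise
y = 0.54 * tmta + 0.21 * tmtb - 0.006 * tmta * tmtb + 0.5 * rng.standard_normal(n)

# design matrix: main effects and interaction term, no intercept as in (a)
X = np.column_stack([tmta, tmtb, tmta * tmtb])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With enough data and modest noise, the least-squares fit recovers coefficients close to the generating values, illustrating how an interaction term such as TMTA·TMTB enters the model.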
RETRACTED CHAPTER: Dynamic Reconfiguration of Dominant Intrinsic Coupling Modes. . .
Fig. 1.8 Group mean and standard deviations of frequency-dependent time-delays. Improvement of mean time-delays for the MST target group in {δ, θ, α2, β1}
17
PT ER
1
compared to the pre-intervention period, (*Wilcoxon rank sum test, p < 0.01, Bonferroni corrected p′ < p/8)
Table 1.4 Mean and standard deviations of (GE–Cost) for each group and conditions MST_POST * *
CONTROL_PRECONTROL_POSTPASSIVE_PREPASSIVE_POST
HA
GE-Cost Cost
MST_PRE * *
Wilcoxon rank sum test, p < 0.000087
1.6.3
DC
*
Improvement of the Time-Delay
Improvement of GE–Cost for MST Group
TR
1.6.4
AC TE
Additionally, the time-delays of the cortical interactions were improved in {δ, θ, α2, β1} compared to the pre-intervention period (*Wilcoxon rank sum test, p < 0.01, Bonferroni corrected p′ < p/8). Figure 1.8 illustrates the group mean time-delays for each brain rhythm.
RE
Table 1.4 illustrates the mean and standard deviations of GE–Cost averaged separately for each group and condition. At first, the average across temporal segments for each subject was calculated. An improved GE–Cost and Cost only for the MST target group were revealed.
1.7 Discussion

In the present chapter, the positive outcome of a targeted intervention in a large group of elderly at risk for dementia was demonstrated. The MST protocol was designed to trigger spatial memory and the integration of spatial location with object semantic memory [12, 26]. To evaluate the positive effect of the MST protocol, 158 patients with mild cognitive impairment who underwent 90 min of training per day over 10 weeks were recruited. An active control group of 50 subjects was exposed to documentaries, and a passive control group of 55 subjects did not engage in any activity. Both the active and the passive group were important for exploring training-induced benefits beyond the neural mechanisms and brain areas tailored to the intervention, and for determining which group showed any real-life improvements. The effectiveness of this protocol was evaluated by recording spontaneous EEG activity before and after the intervention period in the three groups (target: MST training, active control: watching documentaries, passive control: no activity) [26]. A dynamic functional source connectivity analysis was adopted to untangle the dynamic reconfiguration and contribution of intra- and cross-frequency coupling mechanisms before and after the intervention.
T. P. Exarchos et al.
RETRACTED CHAPTER: Dynamic Reconfiguration of Dominant Intrinsic Coupling Modes. . .

1.7.1 A Large Repertoire of Neuroinformatic Tools Underlined the Positive Outcome of the Intervention Protocol

According to our knowledge, this is the first study in the intervention literature that demonstrated the positive outcome of an intervention protocol tailored to a specific group via a large repertoire of network analytics and under the notion of dominant intrinsic coupling modes [31]. Initially, a dynamic integrated functional connectivity graph (DIFCG) was built by linear combination of all the versions of intra- and inter-frequency-oriented functional connectivity graphs (FCGs) [24]. This approach, based on data-driven techniques (surrogate analysis, topological filtering, and the graph diffusion distance metric), revealed the dynamic weighted contribution of each dominant coupling mode across experimental time. The analysis revealed that the β1 frequency and the ratio of inter-frequency versus intra-frequency weights were enhanced due to the intervention in the MST group. These improvements were linked with improvements on basic neuropsychological estimates. In addition, group mean time-delays of the cortical interactions were further improved in {δ, θ, α2, β1} for the MST protocol. Finally, GE–Cost was enhanced in the MST group with a significantly reduced Cost [9]. All in all, the positive outcome of this intervention was linked to the contribution of the dominant intrinsic coupling modes, the time-delays between EEG sources, and the functional rewiring of the network in a more optimal way.

Below, the significant results derived from the whole analysis are summarized:

• The ratio of inter- versus intra-frequency coupling modes and also the contribution of the β1 frequency were higher for the target group compared to its pre-intervention period.
• This ratio was linearly modeled by the improvements in MMSE, the delayed recall subcomponent of CVLT, the attention direct span, and the attention reverse span.
• The contribution of β1 was modeled linearly and nonlinearly by TMTA and TMTB estimates of executive function.
• The time-delays of the cortical interactions were improved in {δ, θ, α2, β1} compared to the pre-intervention period.
• Based on the dynamic integrated functional connectivity graph (DIFCG), GE–Cost and Cost were significantly improved in the target group compared to its pre-intervention period.

1.7.2 Improved Contribution of β1 to Spontaneous Brain Connectivity After the Intervention

Neuroplastic effects of both cognitive and physical training increased the contribution of the β1 frequency to the spontaneous activity after the intervention. Previous studies of intervention protocols based on EEG cortical sources reported improvements in power and functional connectivity in the β band [38, 47]. Güntekin et al. [34] investigated the role of the β band in both healthy and mild cognitive impairment participants. They linked β activity with attentional demands that support basic executive functions in numerous experimental paradigms. Here, the dynamic contribution of the β1 brain rhythm was linked with TMTA and TMTB estimates of executive function.

1.7.3 Improved Contribution of Inter/Intra to Spontaneous Brain Connectivity After the Intervention

In addition to a positive contribution of the β1 frequency to the spontaneous activity after the intervention, an enhanced inter/intra ratio was also observed. The estimator was designed to capture the dynamic reconfiguration of the multiplexity of the brain over space and time. Dominant coupling modes [23, 26, 31] were improved after the intervention in the target group, and this was linked to the neuroplastic effects of the protocol. Furthermore, neuronal oscillations were linearly modeled with the MMSE, the delayed recall subcomponent of CVLT, the attention direct span, and the attention reverse span. The result of this linear model further underlined the value of this new estimator for exploring the multiplexity of the human brain linked with many neuropsychological estimates, such as the MMSE and the attentional resources needed to perform the memory tasks [34].

1.7.4 Improved Time-Delays in Spontaneous Brain Connectivity After the Intervention

The time-delays of the cortical interactions were improved in specific frequency bands {δ, θ, α2, β1} for the target group compared to the pre-intervention period. It is the first time in the literature that time-delays were estimated across all the estimated sources, across time, and across different brain rhythms. Roux et al. [43] estimated conduction delays using transfer entropy between alpha phase and gamma amplitude, revealing a directed information transfer from a thalamic source to the posterior medial parietal cortex. Thatcher et al. [52] reported phase delays of basic brain rhythms ranging from 100 ms for the δ frequency down to a few tens of milliseconds for the γ band [40]. The role of {δ, θ, α, β1} in cognitive processes [54] and of α brain activity in cognitive impairment is well known [8]. The improvement of time-delays can further support the aforementioned findings of improved neuropsychological estimates.

1.7.5 Improved Cost Efficiency of Spontaneous Brain Connectivity After the Intervention

Based on the dynamic integrated functional connectivity graph (DIFCG), GE–Cost and Cost were significantly improved in the target group compared to its pre-intervention period. Specifically, GE–Cost was enhanced after the intervention period, while Cost was diminished. The results further support the significant positive neuroplastic effect of the MST protocol in terms of network analysis. Overall, the brain functionality of the target group was more cost efficient due to both physical and cognitive training [9]. This result complements the improved time-delays in specific brain rhythms and additionally supports the whole methodology based on data-driven techniques.

1.8 Conclusions

Neuroplastic alterations of brain activity in older adults with higher cognitive decline than normal were detected after a period of 10-week intense physical and cognitive training under the MST protocol. In the present study, clear evidence is provided via dynamic source connectivity analysis that even a short targeted program of both physical training and cognitive training tailored to memory can alter spontaneous brain activity in different ways. The analysis untangled the different aspects of brain activity over both intra- and inter-frequency coupling and also the improved time-delays, both of which support the observed enhancement of the cost-efficient dynamic integrated functional connectivity graph for the training group. In addition, neuropsychological estimates were improved, linked to an improved contribution of the β1 brain rhythm and the ratio of inter-/intra-frequency bands. It is important for the
References
1. Arnal, L.H., and Giraud, A.L. (2012). Cortical oscillations and sensory predictions. Trends Cogn. Sci. 16, 390–398. 2. Aru, J., Aru, J., Priesemann, V., Wibral, M., Lana, L., Pipa, G., et al. (2015). Untangling cross-frequency coupling in neuroscience. Curr. Opin. Neurobiol. 31, 51–61. https://doi.org/10.1016/j.conb.2014.08.002 3. Bangen, Katherine J. et al. 2010. “Complex Activities of Daily Living Vary by Mild Cognitive Impairment Subtype.” Journal of the International Neuropsychological Society : JINS 16(4):630–39. 4. Başar E., Schürmann M., Başar-Eroglu C., Karakaş S. Alpha oscillations in brain functioning: an integrative theory. International Journal of Psychophysiology. 1997;26:5–29. 5. Başar E., Başar-Eroglu C., Karakaş S., Schürmann M. Gamma, alpha, delta, and theta oscillations govern cognitive processes. International Journal of Psychophysiology. 2001;39:241–248. 6. Başar E., Güntekin B. A short review of alpha activity in cognitive processes and in cognitive impairment. International Journal of Psychophysiology. 2012;86: 25–38. 7. Başar E., Aysel Düzgün The CLAIR model: Extension of Brodmann areas based on brain oscillations and connectivity. International Journal of Psychophysiology. Volume 103, May 2016, Pages 185–198 8. Benjamini, Y., and Hochberg, Y. (1995). Controlling the false discovery rate a practical and powerful approach to multiple testing. J. R. Stat. Soc. B Stat. Methodol. 57, 289–300. https://doi.org/10.1093/ acprof:oso/9780195301069.001.0001 9. Bullmore E., Sporns, O. (2012). The economy of brain network organization Nature Reviews Neuroscience 13, 336–349 (May 2012) | https://doi.org/10.1038/ nrn3214 10. Buzsáki G, Watson BO. Brain rhythms and neural syntax: implications for efficient coding of cognitive content and neuropsychiatric disease. Dialogues in Clinical Neuroscience. 2012;14(4):345–367. 11. Canolty, R. T., Edwards, E., Dalal, S. S., Soltani, M., Nagarajan, S. S., Kirsch, H. E., et al. (2006). 
High gamma power is phase-locked to theta oscillations in
human neocortex. Science 313, 1626–1628. https:// doi.org/10.1126/science.1128115 12. Chen, L.Y. Chuah, S.K. Sim, M.W. Chee, K.H. Hippocampal region-specific contributions to memory performance in normal elderly. Brain Cogn. 2010 Apr;72(3):400–7. https://doi.org/10.1016/j. bandc.2009.11.007. 13. Contreras, D., and Steriade, M. (1997). Synchronization of low-frequency rhythms in corticothalamic networks. Neuroscience 76, 11–24. 14. Deco, G., and Corbetta, M. (2011). The dynamical balance of the brain at rest. Neuroscientist 17, 107–123. 15. Destexhe, A., Contreras, D., and Steriade, M. (1999). Spatiotemporal analysis of local field potentials and unit discharges in cat cerebral cortex during natural wake and sleep states. J. Neurosci. 19, 4595–4608. 16. Dimitriadis SI, Laskaris NA, Tsirka V, Vourkas M, Micheloyannis S, Fotopoulos S. 2010. Tracking brain dynamics via time-dependent network analysis. J Neurosci Methods 193(1):145–155. 17. Dimitriadis SI, Laskaris, NA, Simos PG, Micheloyannis S, Fletcher JM, Rezaie R, Papanicolaou AC. 2013. Altered temporal correlations in resting-state connectivity fluctuations in children with reading difficulties detected via MEG. NeuroImage 83:307–31. 18. Dimitriadis, S.I., Sun, Yu, Kwok K., Laskaris, N.A., Thakor, N., Bezerianos, A., 2014. Cognitive Workload Assessment Based on the Tensorial Treatment of EEG Estimates of Cross-Frequency Phase Interactions. Annals of Biomedical Engineering October. 19. Dimitriadis SI, Zouridakis G, Rezaie R, BabajaniFeremi A, Papanicolaou AC. 2015a. Functional connectivity changes detected with magnetoencephalography after mild traumatic brain injury. NeuroImage: Clinical 9:519–531. 20. Dimitriadis SI, Sun Y, Kwok K, Laskaris NA, Thakor N, Bezerianos A. 2015b. Cognitive workload assessment based on the tensorial treatment of EEG estimates of cross-frequency phase interactions. Ann Biomed Eng. 43(4):977–89. 21. 
Dimitriadis SI, Laskaris NA, Bitzidou MP, Tarnanas I and Tsolaki MN (2015c) A novel biomarker of amnestic MCI based on dynamic cross-frequency coupling patterns during cognitive brain responses. Front. Neurosci. 9:350. https://doi.org/10.3389/fnins.2015. 00350 22. Dimitriadis S, Sun Y, Laskaris N, Thakor N, Bezerianos A. 2016a. Revealing cross-frequency causal interactions during a mental arithmetic task through symbolic transfer entropy: a novel vectorquantization approach. IEEE Trans Neural Syst Rehabil Eng 24(10):1017–1028. 23. Dimitriadis SI, Laskaris NA, Simos PG, Fletcher JM, Papanicolaou AC. 2016b. Greater repertoire and temporal variability of cross-frequency coupling (CFC)
understanding of the benefits of the intervention protocol in daily activities and the exploration of the compensatory mechanisms generated by MCI patients to adopt similar connectivity analysis. Quantitative and qualitative benefits which encourage further investigations with larger samples were observed.
37. Khan, L. Liu, F.A. Provenzano, D.E. Berman, C.P. Profaci, R. Sloan, U.A. Molecular drivers and cortical spread of lateral entorhinal cortex dysfunction in preclinical Alzheimer’s disease. Nat Neurosci, 17 (2014), pp. 304–311 38. Klados MA, Styliadis C, Frantzidis CA, Paraskevopoulos E, Bamidis PD. Beta-Band Functional Connectivity is Reorganized in Mild Cognitive Impairment after Combined Computerized Physical and Cognitive Training. Frontiers in Neuroscience. 2016;10:55. https://doi.org/10.3389/fnins.2016.00055. 39. Laskaris NA and Ioannides AA. (2001): Exploratory data analysis of evoked response single trials based on minimal spanning tree. Clin. Neurophysiol. 112:698–712 40. Miller, R. Axonal Conduction Time and Human Cerebral Laterality: A Psychological Theory. June 30, 1996 CRC Press Reference – 262 Pages 41. Nestor, T.D. Fryer, M. Ikeda, J.R. Hodges, P.J. Retrosplenial cortex (BA 29/30) hypometabolism in mild cognitive impairment (prodromal Alzheimer’s disease). Eur J Neurosci, 18 (2003), pp. 2663–2667 42. Ragwitz M and Kantz H (2002) Markov models from data by simple nonlinear time series predictors in delay embedding spaces. Phys. Rev. E, 65, 056201. https://doi.org/10.1103/PhysRevE.65.056201 43. Roux F, Wibral M, Singer W, Aru J, Uhlhaas PJ. The Phase of Thalamic Alpha Activity Modulates Cortical Gamma-Band Activity: Evidence from Resting-State MEG Recordings. The Journal of Neuroscience. 2013;33(45):17827-17835. https://doi.org/10.1523/JNEUROSCI.5778-12.2013. 44. Rowe, S. Ng, U. Ackerman, S.J. Gong, K. Pike, G. Savage. Imaging beta amyloid burden in aging and dementia. Neurology, 68 (2007), pp. 1718–1725 45. Steriade, M., Contreras, D., Amzica, F., and Timofeev, I. (1996a). Synchronization of fast (30-40 Hz) spontaneous oscillations in intrathalamic and thalamocortical networks. J. Neurosci. 16, 2788–2808. 46. Steriade, M., Amzica, F., and Contreras, D. (1996b).
Synchronization of fast (30-40 Hz) spontaneous cortical rhythms during brain activation. J. Neurosci. 16, 392–417. 47. Styliadis C., Kartsidis P., Paraskevopoulos E., Ioannides A. A., Bamidis P. D. (2015). Neuroplastic effects of combined computerized physical and cognitive training in elderly individuals at risk for dementia: an eLORETA controlled study on resting states. Neural Plast. 2015:172192. https://doi.org/10.1155/2015/ 172192 48. Tarnanas, I., M. Tsolaki, T. Nef, R. Muri, and U. P. Mosimann. 2014. “Can a Novel Computerized Cognitive Screening Test Provide Additional Information for Early Detection of Alzheimer Disease?” Alzheimer’s & dementia : the journal of the Alzheimer’s Association. 49. Tarnanas I, Laskaris N, Tsolaki M, Nef T, Müri R, Mosimann UP (2015a). On the comparison of a novel
modes in resting-state neuromagnetic recordings among children with reading difficulties. Front Hum Neurosci 10:163. 24. Dimitriadis SI. 2016c. Combining Intra and InterFrequency Dominant Coupling Modes into a single Dynamic Functional Connectivity Graph: Dynome, Dyconnectomics and Oscillopathies. 20th International Conference on Biomagnetism – BIOMAG 2016, At SOUTH KOREA 25. Dimitriadis SI, Sun Y, Thakor NV, and Bezerianos A. 2016d. Causal Interactions between Frontalθ – Parieto-Occipitalα2 Predict Performance on a Mental Arithmetic Task. Front. Hum. Neurosci. 10:454. 26. Dimitriadis SI, Tarnanas I, Wiederholdg M, Wiederholdh B, Tsolaki M, Fleish E. 2016e. Mnemonic strategy training of the elderly at risk for dementia enhances integration of information processing via cross-frequency coupling. Alzheimer’s & Dementia: Translational Research & Clinical Interventions, 2, 241–249. 27. Dimitriadis SI, Sallis C, Tarnanas I and Linden DE (2017). Topological Filtering of Dynamic Functional Brain Networks Unfolds Informative Chronnectomics: A novel data-driven thresholding scheme based on Orthogonal Minimal Spanning Trees (OMSTs). Front. Neuroinform. 11:28. https://doi.org/10.3389/ fninf.2017.00028 28. Dubois B., Feldman H. H., Jacova C., Hampel H., Molinuevo J. L., Blennow K., et al. (2014). Advancing research diagnostic criteria for Alzheimer’s disease: the IWG-2 criteria. Lancet Neurol. 13 614–629. https://doi.org/10.1016/s1474-4422(14)70090-0 29. Engel, A.K., Fries, P., and Singer, W. (2001). Dynamic predictions: oscillations and synchrony in top-down processing. Nat. Rev. Neurosci. 2, 704–716 30. Engel, A.K., and Fries, P. (2010). Beta-band oscillations – signalling the status quo? Curr. Opin. Neurobiol. 20, 156–165 31. Engel AK. , Christian Gerloff, Claus C. Hilgetag, Guido Nolte. 2013. Intrinsic Coupling Modes: Multiscale Interactions in Ongoing Brain Activity. Neuron 80(4):867–886. 32. Fries, P. (2009). 
Neuronal gamma-band synchronization as a fundamental process in cortical computation. Annu. Rev. Neurosci. 32, 209–224. 33. Friston, K. (2005). A theory of cortical responses. Philos. Trans. R. Soc. Lond. B Biol. Sci. 360, 815–836 34. Güntekin B, Emek-Savaş DD, Kurt P, Yener GG, Başar E. Beta oscillatory responses in healthy subjects and subjects with mild cognitive impairment. NeuroImage : Clinical. 2013;3:39–46. https://doi.org/ 10.1016/j.nicl.2013.07.003. 35. Hari R, Parkkonen L, Nangini C. 2010 The brain in time: insights from neuromagnetic recordings. Ann. NY Acad. Sci. 1191, 89–109. 36. Hari R., Parkkonen L. (2015). The brain timewise: how timing shapes and supports brain function. Philos. Trans. R. Soc. Lond. B Biol. Sci. 370:20140170. https://doi.org/10.1098/rstb.2014.0170
Dementia: Diagnosis, Assessment and Disease Monitoring, Volume 1, Issue 4, December 2015, Pages 521–532 52. Thatcher RW, Krause PJ, Hrybyk M. Cortico-cortical associations and EEG coherence: a two-compartmental model. Electroencephalogr Clin Neurophysiol. 1986 Aug;64(2):123–43. 53. Theiler, J., Eubank, S., Longtin, A., Galdrikian, B., and Farmer, J. D. (1992). Testing for nonlinearity in time series: the method of surrogate data. Physica D 58, 77–94. https://doi.org/10.1016/0167-2789(92)90102-S 54. Wickelgren WA. Webs, cell assemblies, and chunking in neural nets: Introduction. Can J Exp Psychol. 1999;53:118–131.
serious game and electroencephalography biomarkers for early dementia screening. Springer Series: Adv Exp Med Biol. 2015;821:63–77. 50. Tarnanas I, Papagiannopoulos S, Kazis D, Wiederhold M, Widerhold B and Tsolaki M (2015b) Reliability of a novel serious game using dual-task gait profiles to early characterize aMCI. Front. Aging Neurosci. 7:50. https://doi.org/10.3389/fnagi.2015. 00050 51. Tarnanas I, Tsolaki A, Wiederhold B, Wiederhold M, Tsolaki M (2015c). 5-year biomarker progression variability for AD dementia prediction: Can a complex iADL marker fill in the gaps? Alzheimer's &
2 A Sensor-Based Platform for Early-Stage Parkinson’s Disease Monitoring

Marios G. Krokidis, Themis P. Exarchos, Aristidis G. Vrahatis, Christos Tzouvelekis, Dimitrios Drakoulis, Foteini Papavassileiou, and Panagiotis Vlamos
Abstract

Biosensing platforms have gained much attention in clinical practice, screening thousands of samples simultaneously for the accurate detection of important markers in various diseases for diagnostic and prognostic purposes. Herein, a framework for the design of an innovative methodological approach, combined with data processing and appropriate software to implement a complete diagnostic system for Parkinson’s disease, is presented. The integrated platform consists of biochemical and peripheral sensor platforms for measuring biological and biometric parameters of examinees, a central collection and management unit along with a server for storing data, and a decision support system for assessing the patient’s state regarding the occurrence of the disease. The suggested perspective is oriented toward data processing and experimental implementation and can provide a powerful holistic evaluation for personalized monitoring of patients or individuals at high risk of manifestation of the disease.

Keywords

Biosensors · Parkinson’s disease · Peripheral sensors · Wearable devices · UPDRS · Decision support system
M. G. Krokidis (✉) · T. P. Exarchos · A. G. Vrahatis · C. Tzouvelekis · P. Vlamos (✉) Bioinformatics and Human Electrophysiology Laboratory, Department of Informatics, Ionian University, Corfu, Greece e-mail: [email protected]; [email protected] D. Drakoulis · F. Papavassileiou Telesto Technologies, Athens, Greece
2.1 Introduction
Parkinson’s disease (PD) is the second most common neurodegenerative disorder after Alzheimer’s disease and affects 1% of the adult population over 60 years of age [1]. PD is characterized by both motor (bradykinesia, postural disturbances, rigidity, tremor) and non-motor features (hyposmia, sleep disturbances, autonomic, neuropsychiatric, and sensory symptoms). As a progressive neurological disorder caused by degeneration of the dopaminergic neurons projecting from the substantia nigra to the striatum, the disease is associated with the loss of the ability to produce and store dopamine [2]. Unfortunately, the clinical diagnosis of PD is often made when the disease has already progressed. To date, there is no cure for the disease, and the drugs given, as well as surgery, only aim to control the symptoms. Current treatment strategies focus on dopamine replacement, aiming to partially or fully correct the motor symptoms caused by dopamine deficiency [3]. The core neuropathology of PD is the deposition of Lewy bodies (abnormal aggregates of a misfolded protein called α-synuclein) resulting in
# The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Vlamos (ed.), GeNeDis 2022, Advances in Experimental Medicine and Biology 1424, https://doi.org/10.1007/978-3-031-31982-2_2
neuronal dysfunction, involving many brain regions and neurotransmitter systems. Therefore, there is a great need to expand the biomarkers available for PD and to accelerate their discovery and validation at different stages of the disease, including clinical signs that determine a patient’s transition from pre-motor to motor symptoms [4]. Many of the non-motor symptoms that may appear years before motor symptoms, such as sleep disturbance, bowel dysfunction, olfactory deficits, and mood disturbances, are not specific to the disease and therefore are not reliable in predicting who will develop motor symptoms. Biomarkers that predict disease progression would be quite valuable in clinical trials of drugs that inhibit disease progression [5]. A wide range of features is available from the initial assessment of newly diagnosed patients in the PPMI (Parkinson’s Progression Markers Initiative) database [6], covering every aspect of disease symptomatology, including evaluations of motor and non-motor symptoms, general and specific neurological clinical examinations, results from imaging studies of parts of the brain, laboratory results of blood, urine, and cerebrospinal fluid (CSF) tests, and genetic analyses including the best-known polymorphisms (SNPs) associated with the disease. The identification of promising biomarkers through the PPMI study helps both in the diagnosis and management of the disease and at the clinical trial stage, where drug candidates are evaluated through the comprehensive study of clinical data, imaging data, and laboratory biological samples of individuals with a specific genetic mutation [7, 8]. A portion of the study covers cognitive tests, and it is important to understand the underlying psychometric properties of the specific procedures included in the PPMI data set to interpret the results accurately.
Among participants with Parkinson’s on the PPMI, there are differences in standardized cognitive scores depending on the normative group used. These differences affect rates of decline across cognitive measures, and the use of internal standards results in lower standardized scores with the exception of memory tests. The selection of appropriate comparison groups is governed by strict criteria, as specific decisions
M. G. Krokidis et al.
can affect both the course of the research and the clinical interpretations of the cognitive data. The Unified Parkinson’s Disease Rating Scale (UPDRS) is currently the most common and reliable rating tool used to monitor disease severity, although it remains a subjective and semiquantitative measure of motor symptoms. Change in the UPDRS scale is often used in clinical trials to assess the effects of drugs on the progression of motor dysfunction and disease severity [9]. Additional biomarkers are needed to help evaluate drugs in clinical trials and monitor disease progression, which ultimately may include a combination of biological, genetic, imaging, and biochemical markers.
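The dependence of standardized cognitive scores on the chosen normative group, noted above for the PPMI cognitive battery, comes down to simple z-scoring: the same raw score yields different standardized values against different normative samples. A minimal sketch, with all numbers invented for illustration:

```python
def standardized_score(raw, norm_mean, norm_sd):
    """z-score of a raw test score against a normative group.
    Which normative sample is chosen shifts the standardized result
    even though the raw score is unchanged."""
    return (raw - norm_mean) / norm_sd

# The same hypothetical raw memory score, standardized two ways
z_population = standardized_score(42, norm_mean=50, norm_sd=10)  # -0.8
z_internal = standardized_score(42, norm_mean=44, norm_sd=5)     # -0.4
```

This is why the selection of comparison groups affects both estimated rates of decline and the clinical interpretation of the cognitive data.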
2.2 The Sensor Perspective
Herein, an innovative methodological approach for the early diagnosis of PD is presented, which allows the simultaneous collection and processing of data from a biosensor and from multiple peripheral sensors. The integrated platform combines different bioelectrochemical biomarker detection devices, sensor biosignal collection and processing electronics, sensor measurement management, and display software as well as a decision support system which gives a holistic evaluation of the measurements, providing a personalized assessment of the disease to patients or people at high risk of its manifestation (Fig. 2.1). The platform is a continuation of a recent first edition [10], and it consists of the individual subsystems: (i) biochemical sensor measuring levels of biological indicators; (ii) unit for receiving, amplifying, and converting the analog signal received from the biochemical sensor into a digital one, storing and sending it to the central management unit; (iii) peripheral, non-biochemical sensors measuring biomarkers; (iv) central computing unit (server) for collecting, managing, and displaying sensor data; and (v) decision support system. The biosensor allows measuring the levels of biological markers (proteins involved in the pathophysiology of the disease) using conductive polymers due to their good mechanical and
Fig. 2.1 Integrated platform for early diagnosis of Parkinson’s disease (second edition, the first was recently presented [10])
electrical properties, ease of handling and patterning, as well as their low cost. Proteins are extensively used as prognostic and diagnostic biomarkers in clinical practice due to their ease of detection, isolation, purification, and quantification in various biological samples. Specific target proteins such as alpha-synuclein, tau protein, lipoproteins E and A, β2 microglobulin, and beta-amyloid 1–42 were used, which, based on the international literature, play an important role as characteristic biological molecules in the pathology of Parkinson’s disease [11, 12]. The formation of α-synuclein aggregates located in the Lewy bodies of patients with PD is the most important phenotypic pattern of the disease, and this specific marker can be detected in the cerebrospinal fluid (CSF) and potentially in the blood, depending on the course of the disease [13]. Also, the measurement of tau protein reflects the destruction and disintegration of neurons, and its phosphorylation can be carried out to varying degrees [14]. Nevertheless, the combination of several rather than individual biomarkers was judged to be the safest strategy for the most
accurate diagnostic approach based on the complexity of the disease progression and the derivation of more meaningful final conclusions according to the recording of clinical symptoms. To measure the response of the biosensor, a current is applied to the ends of the device, and the voltage drop across it is read. The principle of operation of the substrate is based on the view that the interactions of molecules—proteins with the polymer—change the density of the conductivity carriers and, by extension, the overall conductivity of the material. In more detail, the biosensor of the device is an activated polyaniline surface, on suitable gold substrates. The polyaniline is activated in order to accept and interact with the protein molecule under analysis [15]. This specific reaction affects the electrical properties of the polymer and in particular its conductivity in a measurable way. The polymerization of aniline takes place under oxidizing conditions in the presence of the polymeric acid PAAMSA (poly-(2-acrylamido-2-methylpropanesulfonic acid)) as it improves the mechanical properties of polyaniline (PANI), its water
solubility, and its chemical stability, while maintaining good conductivity values. Through the four-contact method, the in situ conductivity is recorded. This particular methodological approach is implemented by measuring the voltage drop (2 contacts/pins) across the substrate, which is subjected to constant current application. The response of the sensor, expressed as the conductivity (σ), is calculated from the measured voltage drop across its terminals (V), the geometric characteristics of the measured device (polyaniline layer thickness (d), length (l), and width (w) of the channel), and the applied current (I) flowing through the device. It is noted that in the initial design [10], the use of a Raspberry Pi unit (Single Board Computer (SBC)) was foreseen for the initial collection of sensor data (biosensor and peripherals) and their forwarding to the central computing unit (server) for storage, display, and further processing. To simplify the layout and avoid further intermediate programming of the Raspberry Pi, it was decided not to use this module in the end, as its operation and use can be completely replaced by the central computing unit (server), as Fig. 2.1 shows.
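The conductivity calculation described above reduces to a one-line formula, σ = (I/V) · l/(w·d). A sketch follows, assuming w denotes the channel width; the symbols d, l, w, V, and I follow the text, while the numeric values are made up for illustration:

```python
def conductivity(v_drop, current, thickness, length, width):
    """In situ conductivity of the polyaniline channel from the
    four-contact measurement: sigma = (I / V) * l / (w * d).
    Geometry in metres, current in amperes, voltage in volts;
    the result is in S/m."""
    conductance = current / v_drop                 # G = I / V
    return conductance * length / (width * thickness)

# Example: 10 uA through a 10 mm x 2 mm channel of a 200 nm film,
# giving a 0.5 V drop between the voltage contacts
sigma = conductivity(v_drop=0.5, current=10e-6,
                     thickness=200e-9, length=10e-3, width=2e-3)
```

Because protein binding changes the carrier density of the activated polyaniline, repeating this calculation before and after exposure to the sample quantifies the sensor response.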
2.3 Sensor Data Acquisition/Processing Unit
A specialized unit has been developed to manage the measurements of the biochemical sensor, based on a 32-bit microcontroller, which, with the help of an adjustable current source (communicating with the microcontroller via the I2C protocol), applies different currents to the biochemical sensor and reads the voltage drop across its terminals each time. The analog measurements obtained from the biochemical sensor are converted via a 24-bit ADC (analog-to-digital converter, communicating with the microcontroller via the SPI protocol) into digital ones, stored locally on the microcontroller, and sent wirelessly via the microcontroller's integrated Wi-Fi module to a specific IP address using the MQTT protocol. The control circuit is based on the ESP32 microcontroller, which controls the
adjustable current source LM334S8, which injects different currents into the biochemical sensor, each time measuring the voltage drop across its terminals. The microcontroller takes continuous measurements within a user-preselected, firmware-defined current range and with a user-defined step, so as to find the point of maximum sensitivity of the biochemical sensor. These measurements are taken both without the biological material in the biochemical sensor, for calibration purposes, and with the biochemical sensor containing the biological material to be measured. In both cases, however, the measurements must be carried out over the same current range and with the same current step, for correct calibration of the biochemical sensor.
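A minimal sketch of the sweep logic described above: program a current, read the voltage drop, step through the range, and locate the point of maximum sensitivity. All names here are hypothetical, hardware access (the I2C current source and SPI ADC) is stood in for by a callback, and currents are kept in integer microamps so the stepping is exact:

```python
def sweep(read_voltage_uV, i_start_uA, i_stop_uA, i_step_uA):
    """Step through the user-defined current range and record the
    voltage drop at each step.  read_voltage_uV stands in for:
    program the adjustable current source (I2C), then read the
    24-bit ADC (SPI)."""
    return [(i, read_voltage_uV(i))
            for i in range(i_start_uA, i_stop_uA + 1, i_step_uA)]

def max_sensitivity_point(points):
    """Current at which the response changes fastest, i.e. the
    largest |dV/dI| between neighbouring steps."""
    best_i, best_slope = None, -1.0
    for (i0, v0), (i1, v1) in zip(points, points[1:]):
        slope = abs(v1 - v0) / (i1 - i0)
        if slope > best_slope:
            best_i, best_slope = i0, slope
    return best_i

# Simulated sensor: effective resistance jumps above 2000 uA
readings = sweep(lambda i: i * (100 if i < 2000 else 500),
                 i_start_uA=1000, i_stop_uA=5000, i_step_uA=1000)
```

Running the same sweep with and without the analyte, over the identical range and step, mirrors the calibration procedure described in the text.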
2.4 Interface with Peripheral Sensors Platform
For the comprehensive monitoring of the health and condition of the examinees, the peripheral, commercially available biomarker-measuring platforms Bitalino [16] and MySignals [17] were used in parallel. The Bitalino platform is a toolbox of sensors (hardware) and related software, specially designed for monitoring the signals produced by the human body. The main sensors offered are electromyogram (EMG), electrocardiogram (ECG), electrodermal activity (EDA), electroencephalogram (EEG), and accelerometer (ACC). The Bitalino sensors are connected by wire to the Bitalino hub via special signal-conditioning boards. The software included with Bitalino is called OpenSignals and allows the reception and visualization of the different signals collected by the various sensors. The Bitalino (r)evolution firmware is open for distribution and programming on a device. To receive the data from Bitalino, a specialized interface was developed for the real-time reception and recording of the measurements and their forwarding, storage, and display on the central computing unit. All the biometric data gathered by MySignals are encrypted and sent to the Libelium Cloud in real time to be visualized on the user's private account. The new biometrical IoT platform
2
A Sensor-Based Platform for Early-Stage Parkinson’s Disease Monitoring
allows developers to easily create new software eHealth applications and medical devices by monitoring more than 20 different body parameters. All data collected can be visualized using different graphical methods: TFT, serial display, or real-time KST software. KST is the fastest tool for viewing and plotting large datasets in real time and has built-in data analysis functionality. It is very user-friendly and contains many powerful built-in features while being extensible with plug-ins. The platform was used to provide a complementary range of biometric sensors as well as accompanying data processing/ visualization software. The main sensors offered are body position detection, body temperature data, pulse and oxygen functions, glucose level measurement, and blood pressure measurement.
2.5
Central Computing Unit and Dashboard
The central computing unit (server) stores, manages, and displays the data collected from the biochemical sensor as well as from the external peripheral vital-sign platforms, which allow the acquisition and processing of the measurements through a relevant interface. Through a specialized application, the measurements of the biochemical sensor and the peripheral biosignal sensors are displayed on a central control screen (dashboard), together with the examinee's details (name, date of birth, contact details, etc.), medical history, any medication being received, and metrics from previous visits. The dashboard, with its centralized view of all the examinee's data, can facilitate the clinician's co-decision process regarding the assessment of Parkinson's disease, the effect of the current treatment, and the examination of other treatment options, including referral to specialist care (if required). To meet the development and operational requirements, the platform should first be able to take a background measurement (without protein), store it, and then take a measurement with protein. Moreover, the platform should be able to check for invalid pool values and to calculate the mean and median of a series of concentration calculations. To strengthen the security of the patients' personal data, the data should be encrypted so that they cannot be used or edited by unauthorized parties. The application also makes it possible to administer the MDS-UPDRS questionnaire (Unified Parkinson's Disease Rating Scale), which offers a general assessment scale for Parkinson's disease, and to save the results [18]. Herein, the platform is partially supported by a decision support system for the early diagnosis of Parkinson's disease, which, by processing the collected data, makes a first assessment of the examinee's condition with respect to the manifestation of the disease. Decision support systems (DSS) play an increasingly important role in medical practice, helping physicians make clinical decisions, and are expected to improve the quality of medical care and diagnosis in general [19]. DSSs are capable of responding to changing real-world conditions: the user can change, delete, and add models and functionalities, so as to adapt the system to new requirements quickly and easily. Through the answers to selected questions of the MDS-UPDRS questionnaire, an assessment of the stage of the disease is obtained using the Hoehn and Yahr Scale, which has been used for decades to measure Parkinson's symptoms and progression [20]. All values of Sections I–III of the MDS-UPDRS support the evaluation of the motor and non-motor symptoms of the disease as well as their impact on the patients' daily life. Section IV of the MDS-UPDRS is not taken into account, as it concerns complications of concomitant treatment, which usually appear much later. The Hoehn and Yahr Scale ranks disease progression with a score of 1–5, primarily assessing balance characteristics.
The dashboard for the collection, display, and management of the measurements from the biosensor and the peripheral sensors (Bitalino and MySignals platforms) is designed to help the clinician make a first assessment of Parkinson's disease, monitor the progress of the disease (if present), examine treatment options, display the results of sensor measurements (with access to the measurement history), carry out tests based on the PPMI methodology, collect related tests, etc.
2.6
Conclusions
The early detection of the onset of neurodegenerative diseases, through an understanding of their pathophysiology, is of crucial importance, as it can give the patient the opportunity for a timely treatment protocol, which can prove valuable in halting the further progression of the disease. Planning and implementing appropriate methodologies for the development of biosensors adapted to the detection and quantification of protein molecules requires a number of prerequisites, such as selectivity for the target analyte, rapid flow of clinical information, and robustness to nonspecific interactions in the clinical samples. Based on the above conditions, the biochemical approach can be chosen according to the principle of high specificity of the biological molecule with respect to the experimental setup, with the simultaneous provision of analytical information (signal) in real time. The proposed peripheral sensor platforms enable real-time visualization of the measurements they receive with the help of graphs. Data can be presented per examinee, with a view of past measurements, medication, and hospital-visit management. The experimental implementation of integrated diagnostic approaches such as the one described in the present framework can be a useful tool for specialists in the fight against PD, as it can support the early diagnosis and treatment of the disease.

Funding This research has been co-financed by the European Union and Greek national funds through the Competitiveness, Entrepreneurship, and Innovation Operational Programme, under the Call "Research–Create–Innovate," project title: "BIODIANEA: Bio-chips for the diagnosis and treatment of neurodegenerative diseases, focusing on Parkinson's disease," project code: T1EDK05029, MIS code: 5030363.
References

1. Kalia LV, Lang AE (2015) Parkinson's disease. Lancet 386:896–912
2. Hirsch L, Jette N, Frolkis A, Steeves T, Pringsheim T (2016) The incidence of Parkinson's disease: a systematic review and meta-analysis. Neuroepidemiology 46:292–300
3. Rizek P, Kumar N, Jog MS (2016) An update on the diagnosis and treatment of Parkinson disease. CMAJ 188:1157–1165
4. Delenclos M, Jones DR, McLean PJ, Uitti RJ (2016) Biomarkers in Parkinson's disease: advances and strategies. Parkinsonism Relat Disord 22:S106–S110
5. He R, Yan X, Guo J, Xu Q, Tang B, Sun Q (2018) Recent advances in biomarkers for Parkinson's disease. Front Aging Neurosci 10:305
6. Marek K, Chowdhury S, Siderowf A, Lasch S, Coffey CS, Caspell-Garcia C, Simuni T, Jennings D, Tanner CM, Trojanowski JQ, Shaw LM (2018) The Parkinson's progression markers initiative (PPMI) – establishing a PD biomarker cohort. Ann Clin Transl Neurol 5:1460–1477
7. Simuni T, Merchant K, Brumm MC, Cho H, Caspell-Garcia C, Coffey CS, Chahine LM, Alcalay RN, Nudelman K, Foroud T, Mollenhauer B (2022) Longitudinal clinical and biomarker characteristics of non-manifesting LRRK2 G2019S carriers in the PPMI cohort. npj Parkinson's Disease 8:1–10
8. McFarthing K, Rafaloff G, Baptista MAS, Wyse RK, Stott SRW (2021) Parkinson's disease drug therapies in the clinical trial pipeline: 2021 update. J Parkinsons Dis 11:891–903
9. Ivey FM, Katzel LI, Sorkin JD, Macko RF, Shulman LM (2012) The Unified Parkinson's Disease Rating Scale as a predictor of peak aerobic capacity and ambulatory function. J Rehabil Res Dev 49:1269
10. Krokidis MG, Dimitrakopoulos GD, Vrahatis AV, Tzouvelekis C, Drakoulis D, Papavassileiou F, Exarchos TP, Vlamos P (2022) A sensor-based perspective in early-stage Parkinson's disease: current state and the need for machine learning processes. Sensors 22:409
11. Herbert MK, Eeftens JM, Aerts MB, Esselink RAJ, Bloem BR, Kuiperij HB et al (2014) CSF levels of DJ-1 and tau distinguish MSA patients from PD patients and controls. Parkinsonism Relat Disord 20:112–115
12. Cova I, Priori A (2018) Diagnostic biomarkers for Parkinson's disease at a glance: where are we? J Neural Transm 125:1417–1432
13. Gallea JI, Celej MS (2014) Structural insights into amyloid oligomers of the Parkinson disease-related protein α-synuclein. J Biol Chem 289:26733–26742
14. Montine TJ, Shi M, Quinn JF, Peskind ER, Craft S, Ginghina C et al (2010) CSF Aβ42 and tau in Parkinson's disease with cognitive impairment. Mov Disord 25:2682–2685
15. Bayer CL, Konuk AA, Peppas NA (2010) Development of a protein sensing device utilizing interactions between polyaniline and a polymer acid dopant. Biomed Microdevices 12:435–442
16. Batista D, Silva H, Fred A (2017) Experimental characterization and analysis of the BITalino platforms against a reference device. In: 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, pp 2418–2421
17. Cha H, Jeon J (2017) OCF healthcare proof of concept (PoC) on Libelium MySignals. In: 2017 European Conference on Electrical Engineering and Computer Science (EECS). IEEE, pp 356–364
18. Goetz CG, Fahn S, Martinez-Martin P, Poewe W, Sampaio C, Stebbins GT, Stern MB et al (2007) Movement Disorder Society-sponsored revision of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS): process, format, and clinimetric testing plan. Mov Disord 22:41–47
19. Raza MA, Chaudry Q, Zaidi SMT, Khan MB (2017) Clinical decision support system for Parkinson's disease and related movement disorders. In: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, pp 1108–1112
20. Hoehn MM, Yahr MD (1998) Parkinsonism: onset, progression, and mortality. Neurology 50:318
3
Pressure Prediction on Mechanical Ventilation Control Using Bidirectional Long-Short Term Memory Neural Networks Gerasimos Grammenos and Themis P. Exarchos
Abstract
Life support systems play a critical role in keeping a patient alive during admission to an intensive care unit (ICU) bed. One of the most widely used life support systems is mechanical ventilation, which helps a patient breathe when spontaneous breathing is inadequate to maintain life. Despite its important role during ICU admission, mechanical ventilation technology has not changed substantially for several years. In this paper, we develop a model using artificial neural networks in an attempt to make ventilators more intelligent and personalized to each patient's needs. We used artificial data to train a deep learning model that predicts the correct pressure to be applied to the patient's lungs at every time point within a breath cycle. Our model was evaluated using cross-validation and achieved a mean absolute error of 0.19 and a mean absolute percentage error of 2%. Keywords
Mechanical ventilation · Deep learning · Life support · LSTM
G. Grammenos (✉) · T. P. Exarchos (✉) Department of Informatics, Ionian University, Corfu, Greece e-mail: [email protected]; [email protected]
3.1
Introduction
Mechanical ventilation (MV) is one of the most widely used respiratory support methods for patients hospitalized in intensive care unit (ICU) beds who suffer from conditions that cause respiratory failure, such as acute respiratory distress syndrome (ARDS) and COVID-19. Despite being used on a global scale by clinics and hospitals, MV technology has not undergone significant improvements for several years. The correct configuration of the MV controller is a difficult challenge, as its adjustment requires taking the patient's clinical picture into account in order to minimize the chances of complications or injury to the patient's lungs [1]. Furthermore, the decision to intubate a patient is difficult for intensivists, since COVID-19 showed that it is not always clear whether intubation will improve a patient's clinical picture [2]. In this light, further research is needed on methods that adjust a ventilator in real time, in an optimal and personalized way, taking into account the clinical picture of each individual patient. Artificial intelligence (AI) can play a key role in designing an algorithm that solves this problem, as well as other challenges in the proper parameter tuning of life support systems. During patient hospitalization, large amounts of data are generated through life support systems. All these data can be processed by various AI
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. P. Vlamos (ed.), GeNeDis 2022, Advances in Experimental Medicine and Biology 1424, https://doi.org/10.1007/978-3-031-31982-2_3
algorithms to determine the optimal configuration parameters for life support systems and adjust them accordingly to keep the patient in the best possible condition [3]. In this paper, a deep learning model is developed that calculates the pressure applied by the ventilator to the patient's lungs, taking into account structural factors, at any time point during a breath cycle. Artificial data from Kaggle's "Google Brain – Ventilator Pressure Prediction" competition were used to train and validate the model. The dataset was created using the People's Ventilator Project (PVP) combined with the "QuickLung" artificial lung. PVP is an open-source ventilator developed at Princeton University during the COVID-19 pandemic for research purposes in the MV field [4]. More details about how the dataset was generated can be found in [5].
3.2
Background Work
In order to study mechanical ventilation, it is necessary to define basic concepts such as dynamic systems and PID controllers, since mechanical ventilation is a dynamic system and a PID controller regulates the pressure applied to the patient's lungs.
3.2.1
Dynamic Systems
As described by Suo et al. (2021), a discrete-time dynamical system is given by the following equation:

x_{t+1} = f_t(x_t, u_t)    (3.1)

where x_t is the system state, u_t is the control input, and f_t is the transition function to the next state [5].
3.2.2
PID Controller
A widespread technique for controlling dynamic systems is the use of linear error-feedback controllers. These methods select the control input as a linear function of the current and past errors against the actual target, as Eq. (3.2) expresses:

u_{t+1} = Σ_{i=0}^{k} a_i ε_{t−i}    (3.2)

where ε_t = x_t − x*_t is the deviation from the target state at time t and k represents the size of the controller's history (steps in the past). The role of the PID controller is to properly regulate the pressure flow from the ventilator to the lungs.
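To make the feedback law above concrete, the following sketch runs a textbook discrete PID controller (proportional, integral, and derivative terms on the error history) against a toy plant in which the valve command integrates into pressure. The gains and the plant model are invented for the illustration and are not the dynamics of any real ventilator.

```python
# Textbook discrete PID step: the state tuple carries the running integral
# of the error and the previous error (for the derivative term).

def pid_step(error, state, kp, ki, kd):
    integral, prev_error = state
    integral += error                       # accumulate past errors
    derivative = error - prev_error         # rate of change of the error
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

target = 20.0      # target pressure (cmH2O), i.e. x*_t
pressure = 5.0     # initial pressure
state = (0.0, 0.0)
for _ in range(300):
    u, state = pid_step(target - pressure, state, kp=2.0, ki=0.1, kd=0.5)
    pressure += 0.1 * u   # toy plant: the valve command integrates into pressure
```

With these (hand-picked, stable) gains the controlled pressure settles onto the constant target; in the chapter's setting the target is instead the time-varying waveform p*_t set by the physician.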
3.2.3
How Mechanical Ventilation Works
In MV, the patient is connected to the ventilator through a tube that enters from the mouth and descends to the larynx or trachea. The ventilator is connected to a pressure machine which applies pressure cyclically to simulate healthy breathing. During inspiration, the applied pressure is increased until it reaches the peak inspiratory pressure (PIP). During expiration, the target is reduced until it reaches the value of positive end-expiratory pressure (PEEP), which is maintained to prevent lung collapse. The PIP and PEEP values, together with the durations of the inspiratory and expiratory phases, define the time-varying pressure waveform determined by the physician. Figure 3.1 illustrates the cycle of breathing through the ventilator. The goal of the ventilator is to regulate the pressure so that it follows the target waveform p*_t. Since ventilation is a dynamic system, the pressure can be defined as

p_{t+1} = f_t(p_t, u_t)    (3.3)

and the cost function, the absolute value of the deviation from the target pressure, can be defined as

c_t(p_t, u_t) = |p_t − p*_t|    (3.4)
3
Pressure Prediction on Mechanical Ventilation Control Using Bidirectional. . .
33
Fig. 3.1 Breathing cycle using a ventilator [5]

Table 3.1 Features before preprocessing

Feature | Description | Range
Id | Unique identifier for each record | 0–6.036 million
Breath_id | Unique identifier for each breath cycle | 0–75,040
Resistance (R) | Lung resistance, in cmH2O/L/s [6] | Discrete values: 5, 20, 50
Compliance (C) | Lung compliance [6] | Discrete values: 10, 20, 50
timestep | Time point since the beginning of the breath cycle | –
u_in | Inhalation valve value | 0–100
u_out | Exhalation valve value; 0 during inhalation (closed valve), 1 during exhalation (open valve) | 0–1
Pressure | Airway pressure, in cmH2O | –

From Eqs. (3.3) and (3.4), it follows that a ventilator controller must be designed which minimizes the total cost c_t within a specific time interval.

3.3

The Dataset

As mentioned above, the dataset from Kaggle's ventilator pressure prediction competition is used in this paper. It should be noted that the dataset does not contain data from real lungs or a real ventilator [5].

3.3.1

Data Format

The dataset consists of approximately 6.036 million records corresponding to 75,540 breath time series. Each time series corresponds to a breath cycle that lasts about 3 s, which translates to 80 timesteps. The main features of each record before preprocessing are listed in Table 3.1. Figures 3.2 and 3.3 illustrate two breath-cycle samples from the dataset.
3.3.2
Data Preprocessing
In order to extract the most useful information from the time series, we computed lag features for various window sizes, cumulative sums, time differences, etc. One-hot encoding was also applied to the discrete features of the dataset. Table 3.2 shows the full list of new features generated by preprocessing.
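The preprocessing steps just described can be sketched with pandas on a made-up toy frame. The derived column names follow Table 3.2, but the data values, window sizes, and exact windowing here are illustrative assumptions, not the authors' pipeline; grouping by `breath_id` keeps every lag and cumulative sum inside its own breath cycle.

```python
# Lag, difference, cumulative-sum, and one-hot features per breath cycle.
import pandas as pd

df = pd.DataFrame({
    "breath_id": [1, 1, 1, 2, 2, 2],
    "time_step": [0.0, 0.03, 0.06, 0.0, 0.03, 0.06],
    "u_in":      [0.0, 5.0, 12.0, 0.0, 3.0, 4.0],
    "u_out":     [0, 0, 0, 0, 0, 1],
    "R":         [20, 20, 20, 50, 50, 50],
    "C":         [50, 50, 50, 10, 10, 10],
})

g = df.groupby("breath_id")
df["u_in_prev"] = g["u_in"].shift(1).fillna(0.0)           # lag-1 value
df["u_in_prev_diff"] = df["u_in"] - df["u_in_prev"]        # first difference
df["u_in_cummulative_sum"] = g["u_in"].cumsum()            # running total
df["time_passed"] = g["time_step"].diff().fillna(0.0)      # Δt between samples
df = pd.get_dummies(df, columns=["R", "C"], prefix=["R", "C"])  # one-hot R, C
```

The same pattern extends to the remaining Table 3.2 features (lag-2, `u_out` lags, the cumulative-sum ratio, and the R–C combination indicators).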
Fig. 3.2 3-second breath cycle with resistance R = 20 and compliance C = 50 (u_in, u_out, and pressure traces)

Fig. 3.3 3-second breath cycle with resistance R = 20 and compliance C = 20 (u_in, u_out, and pressure traces)
3.4
Model
As mentioned in the Introduction, the goal of the model is to predict the correct pressure applied by the ventilator to the patient's lungs, taking into account structural factors, at any time point during a breath cycle. To achieve this, artificial neural networks (ANNs) were chosen for the model, as they can produce highly accurate results from nonlinearly correlated data. In particular, bidirectional long short-term memory (BiLSTM) neural networks were chosen as the core technology of the model due to their ability to process time series data.
3.4.1
LSTM Nodes
What makes LSTM cells stand out is their ability to preserve their state during the classification or regression process. Each LSTM node has an input gate, an output gate, and a forget gate, which allows it to reset its state when unimportant features of the input data are being processed (Fig. 3.4). Its output is used recursively as input to all gates of the node; that is, the output computed at time t is used as input at time t + 1. The hyperbolic tangent function (Eq. 3.5) is applied to the input and output of the node so that
Table 3.2 Features after preprocessing

Feature | Notes
Id | Unique identifier for each record
Breath_id | Unique identifier for each breath cycle
Resistance (R) | Lung resistance [6]
Compliance (C) | Lung compliance [6]
timestep | Time point since the beginning of the breath cycle
u_in | Inhalation valve value (0–100)
u_out | Exhalation valve value (0 during inhalation, 1 during exhalation)
pressure | Airway pressure, in cmH2O
time_passed | timestep_t − timestep_{t−1}
u_in_prev | u_in(t−1)
u_in_prev_diff | u_in(t) − u_in(t−1)
u_in_prev2 | u_in(t−2)
u_in_prev2_diff | u_in(t) − u_in(t−2)
u_in_cummulative_sum | Σ_{i=0}^{t} u_in(i)
Time_step_cummulative_sum | Σ_{i=0}^{t} timestep_i
u_in_cumsum/time_cumsum | Σ_{i=0}^{t} u_in(i) / Σ_{i=0}^{t} timestep_i
u_in_sub_time | (u_in(t) − u_in(t−1)) / (timestep_t − timestep_{t−1})
u_out_prev | u_out(t−1)
u_out_prev_diff | u_out(t) − u_out(t−1)
Prev_pressure | pressure_{t−1}
R_5, R_50, C_20, C_50 | One-hot indicators (0/1) for the discrete R and C values
R_C_20_20, R_C_20_50, R_C_50_10, R_C_50_20, R_C_50_50, R_C_5_10, R_C_5_20, R_C_5_50 | One-hot indicators (0/1) for the R–C combinations
the output values always remain in the interval [−1, 1]. The sigmoid function (Eq. 3.6), with output values between 0 and 1, is applied to the recursively incoming data (forget gate): when σ(x) ≈ 0 the cell resets its state, and when σ(x) ≈ 1 the data are included in the calculation of the new output. The parameters that determine the outputs of these functions are learned during training [7]:

tanh(x) = sinh(x) / cosh(x) = (e^x − e^{−x}) / (e^x + e^{−x})    (3.5)

σ(x) = 1 / (1 + e^{−x})    (3.6)
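The gating just described can be written out directly. The NumPy sketch below implements one LSTM-cell step with the standard gate equations (sigmoid input/forget/output gates, tanh on the candidate input and on the cell state); the weights are random, untrained values used purely for illustration.

```python
# One LSTM-cell step: i/f/o gates use the sigmoid, the candidate input and
# the output path use tanh, so the hidden state stays inside (-1, 1).
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """x: input vector; h_prev, c_prev: previous hidden and cell states.
    W, U, b hold the stacked parameters of the four gates (i, f, o, g)."""
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    z = W @ x + U @ h_prev + b            # all four gate pre-activations
    n = h_prev.size
    i = sigmoid(z[0:n])                    # input gate
    f = sigmoid(z[n:2 * n])                # forget gate (can reset the state)
    o = sigmoid(z[2 * n:3 * n])            # output gate
    g = np.tanh(z[3 * n:4 * n])            # candidate cell input
    c = f * c_prev + i * g                 # new cell state
    h = o * np.tanh(c)                     # new hidden state, in (-1, 1)
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), W, U, b)
```

Feeding `h` and `c` back into the next call realizes the recursive use of the output at time t as input at time t + 1.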
Fig. 3.4 Architecture of LSTM node [7]
3.4.2
Bidirectional LSTM
BiLSTM neural networks do not differ in architecture from the simple LSTM networks described in Sect. 3.4.1. In essence, they are two networks with the same architecture, where during training the first uses information from the past and the second uses information from the future. More about BiLSTMs can be found in [8].
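The bidirectional idea can be sketched in a few lines: the same recurrent step is run over the sequence once left-to-right and once right-to-left, and the two hidden states for each timestep are concatenated. For brevity, a plain tanh RNN cell with random illustrative weights stands in for the LSTM cell here.

```python
# Minimal bidirectional wrapper around a simple recurrent step.
import numpy as np

def rnn_pass(xs, W, U):
    h = np.zeros(U.shape[0])
    out = []
    for x in xs:                                  # sequential scan
        h = np.tanh(W @ x + U @ h)
        out.append(h)
    return out

def bidirectional(xs, W_f, U_f, W_b, U_b):
    fwd = rnn_pass(xs, W_f, U_f)                  # context from the past
    bwd = rnn_pass(xs[::-1], W_b, U_b)[::-1]      # context from the future, re-aligned
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(1)
xs = [rng.normal(size=2) for _ in range(5)]       # toy sequence of 5 timesteps
W_f, U_f = rng.normal(size=(3, 2)), rng.normal(size=(3, 3))
W_b, U_b = rng.normal(size=(3, 2)), rng.normal(size=(3, 3))
outputs = bidirectional(xs, W_f, U_f, W_b, U_b)
```

The concatenation is why each BiLSTM layer in Table 3.3 outputs twice as many filters as a single direction would.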
3.4.3
Model Architecture and Training
Considering the abilities of LSTM and BiLSTM neural networks described in Sects. 3.4.1 and 3.4.2, we propose a model with 6 layers and about 22 million parameters. The first 4 layers are linearly connected BiLSTM layers, configured to maintain information from the t − 1 timestep, and the last two layers consist of simple fully connected nodes. The BiLSTM layers process the time series breath data and pass their state to the fully connected layers, where SELU(x) is applied as the activation function to calculate the output value needed for the ventilator. The complete architecture of the model is presented in Table 3.3 and in Fig. 3.5. In addition, fivefold cross-validation was applied to evaluate the performance of the model on the available training dataset. The mean absolute error was used as the loss during training, and the Adam optimizer was used to update the parameters [9]; the mean absolute percentage error was also monitored. Training was performed with batch size = 80 × 50 and 30 epochs, while the initial learning rate was set to 1e−3 and was halved when the mean absolute error did not improve from epoch to epoch.
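The learning-rate policy just described (halve on stagnation) can be sketched as a few lines of plain Python. This is an illustrative simplification, with a patience of one epoch and the monitored error supplied as a list, not the authors' training code.

```python
# Halve the learning rate whenever the monitored error fails to improve
# on the best value seen so far (simplified reduce-on-plateau policy).

def schedule(errors, lr0=1e-3, factor=0.5):
    """Return the learning rate used at each epoch given the error history."""
    lr, best, rates = lr0, float("inf"), []
    for err in errors:
        rates.append(lr)
        if err < best:
            best = err
        else:
            lr *= factor          # no improvement: halve the learning rate
    return rates

rates = schedule([0.7, 0.5, 0.52, 0.4, 0.41, 0.39])
```

In a framework-based training loop the same behavior is usually delegated to a built-in reduce-on-plateau callback rather than hand-rolled.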
3
Pressure Prediction on Mechanical Ventilation Control Using Bidirectional. . .
37
Table 3.3 Model architecture

A/A | Layer | Output shape (timesteps × filters)
– | Input | 1 × 28
1 | BiLSTM | 1 × 2048
2 | BiLSTM | 1 × 1024
3 | BiLSTM | 1 × 512
4 | BiLSTM | 1 × 256
5 | Fully connected layer | 1 × 512
6 | Fully connected layer | 1 × 256
– | Output | 1 × 1
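As a sanity check on the "22 million parameters" figure, the layer sizes from Table 3.3 can be plugged into the standard LSTM parameter-count formula: four gates, each with input, recurrent, and bias weights, doubled for the two directions. This is a back-of-the-envelope sketch assuming the usual framework parameterization, not an exact account of the authors' implementation.

```python
# Parameter counts derived from the output shapes in Table 3.3
# (each BiLSTM's output width is twice its per-direction unit count).

def bilstm_params(input_dim, units_per_dir):
    # one direction: 4 * ((input + units) * units + units); doubled for BiLSTM
    return 2 * 4 * ((input_dim + units_per_dir) * units_per_dir + units_per_dir)

def dense_params(input_dim, units):
    return input_dim * units + units

total = (
    bilstm_params(28, 1024)      # BiLSTM 1: output 2048
    + bilstm_params(2048, 512)   # BiLSTM 2: output 1024
    + bilstm_params(1024, 256)   # BiLSTM 3: output 512
    + bilstm_params(512, 128)    # BiLSTM 4: output 256
    + dense_params(256, 512)     # fully connected, SELU
    + dense_params(512, 256)     # fully connected, SELU
    + dense_params(256, 1)       # output: predicted pressure
)
```

The total comes to roughly 22.7 million parameters, consistent with the approximately 22 million stated above.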
Table 3.4 Fivefold validation results

Fold | MAE | MAPE (%)
1 | 0.199 | 2.531
2 | 0.1921 | 2.355
3 | 0.189 | 2.834
4 | 0.1895 | 2.35
5 | 0.1933 | 2.625
Average | 0.19258 | 2.365

Fig. 3.5 Model architecture visualized using the TensorFlow plot_model() function (input_12 → bidirectional_46 → bidirectional_47 → bidirectional_48 → bidirectional_49 → dense_30 → dense_31 → dense_32)

MAE = (1/n) Σ_{i=1}^{n} |y_i − x_i|    (3.7)

is the mean absolute error, where y_i is the model prediction and x_i is the correct value, and

MAPE = (100%/n) Σ_{i=1}^{n} |A_i − F_i| / A_i    (3.8)

is the mean absolute percentage error, where A_i is the true value and F_i is the predicted value.

Attention is needed when choosing the batch size. The goal of the model is to calculate the optimal pressure for the ventilator at each time point within a breath cycle (inhalation–exhalation). Each breath cycle lasts approximately 3 s, i.e., 80 timesteps (Sect. 3.3.1). It is therefore important that the chosen batch size is a multiple of 80, so that each batch contains entire breath cycles and the model can be trained toward this particular goal.
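Equations (3.7) and (3.8) translate directly into code; a plain-Python sketch with a small made-up example:

```python
# Mean absolute error (Eq. 3.7) and mean absolute percentage error (Eq. 3.8).

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    # percentage deviation relative to each true value
    return 100.0 / len(y_true) * sum(
        abs((a - f) / a) for a, f in zip(y_true, y_pred))

truth = [10.0, 25.0, 30.0]   # illustrative pressure values
preds = [10.0, 20.0, 30.0]
```

For these toy values, MAE is 5/3 cmH2O and MAPE is 20/3 percent; note that MAPE assumes the true values are nonzero.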
3.5
Results
To evaluate the model's performance on the dataset, fivefold cross-validation was performed (Table 3.4). The values recorded in the table for both MAE and MAPE are those of the 30th epoch. Predictions deviated on average 2.365% from the actual values, which makes the model quite reliable on data that have the same format as the contents of the dataset. Figures 3.6 and 3.7 show the values of the loss and MAPE functions from epoch to epoch at the 5th fold.
Fig. 3.6 Mean absolute error (loss function) at 5th fold
Fig. 3.7 Mean absolute percentage error at 5th fold
Figures 3.8 and 3.9 show two random breathing cycles from the dataset. The orange points on the graphs represent the model predictions, while the blue points represent the actual pressure values corresponding to the input data.
3.6
Conclusions and Future Work
The need for a new way of operating mechanical ventilation is more relevant than ever due to the COVID-19 pandemic. This chapter is an attempt to emphasize this by developing a model that takes into account structural factors
of the lung (resistance, compliance) and time to calculate the optimal pressure to be applied by the ventilator. Although the data used for training originated from an artificial lung and a noncommercial ventilator, the performance of the model is high: it performed almost as well as a normal ventilator, with 2.7% error in its predictions. Despite the optimistic results of this study, further research is needed in the field of mechanical ventilation before a smart system can be deployed under real-world conditions. In the future, other technologies such as computer vision could be exploited to develop pressure-calculation models and algorithms that take
Fig. 3.8 Real pressure points vs predicted ones with 1.65% MAPE

Fig. 3.9 Real pressure points vs predicted ones with 3.19% MAPE
medical imaging into account. In this way the care provided by a ventilator to an intubated patient would be fully adjusted to the clinical image of the patient at the time of intubation.
References

1. 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC): Learning from the Past, Looking to the Future, July 17–21, 2018, Hawaii Convention Center, Honolulu, Hawaii.
2. H. Wunsch, "Mechanical ventilation in COVID-19: interpreting the current epidemiology," American Journal of Respiratory and Critical Care Medicine, vol. 202, no. 1, pp. 1–4, Jul. 2020. https://doi.org/10.1164/rccm.202004-1385ED
3. B. Gholami, W. M. Haddad, and J. M. Bailey, "AI in the ICU: in the intensive care unit, artificial intelligence can keep watch," IEEE Spectrum, vol. 55, pp. 31–35, 2018.
4. J. Lachance et al., "PVP1 – The People's Ventilator Project: a fully open, low-cost, pressure-controlled ventilator." https://doi.org/10.1101/2020.10.02.20206037
5. D. Suo et al., "Machine learning for mechanical ventilation control," 2021.
6. T. Pham, L. J. Brochard, and A. S. Slutsky, "Mechanical ventilation: state of the art," Mayo Clinic Proceedings, vol. 92, no. 9, pp. 1382–1400, Sep. 2017. https://doi.org/10.1016/j.mayocp.2017.05.004
7. K. Greff, R. K. Srivastava, J. Koutnik, B. R. Steunebrink, and J. Schmidhuber, "LSTM: a search space odyssey," IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 10, pp. 2222–2232, Oct. 2017. https://doi.org/10.1109/TNNLS.2016.2582924
8. M. Schuster and K. K. Paliwal, "Bidirectional recurrent neural networks," 1997.
9. D. P. Kingma and J. Ba, "Adam: a method for stochastic optimization," Dec. 2014. http://arxiv.org/abs/1412.6980
4
Making Pre-screening for Alzheimer’s Disease (AD) and Postoperative Delirium Among Post-Acute COVID-19 Syndrome (PACS) a National Priority: The Deep Neuro Study Ioannis Tarnanas and Magda Tsolaki
Abstract
SARS-CoV-2 effects on cognition are a vibrant area of active research. Many researchers suggest that COVID-19 patients with severe symptoms leading to hospitalization sustain significant neurodegenerative injury, such as encephalopathy and poor discharge disposition. However, despite some post-acute COVID-19 syndrome (PACS) case series that have described elevated neurodegenerative biomarkers, no studies have been identified that directly compared levels to I. Tarnanas (✉) Altoida Inc, Washington, DC, USA Global Brain Health Institute, Trinity College Dublin, Dublin, Ireland Atlantic Fellow for Equity in Brain Health, Global Brain Health Institute, University of California San Francisco, San Francisco, CA, USA Latin American Brain Health Institute (BrainLat), Universidad Adolfo Ibáñez, Santiago de Chile, Santiago, Chile e-mail: [email protected] M. Tsolaki (✉) Greek Association of Alzheimer’s Disease and Related Disorders (GAADRD), Thessaloniki, Greece 1st University Department of Neurology UH “AHEPA”, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece Laboratory of Neurodegenerative Diseases, Center for Interdisciplinary Research and Innovation (CIRI – AUTh) Balkan Center, Buildings A & B, Aristotle University of Thessaloniki, Thessaloniki, Greece
those in mild cognitive impairment, non-PACS postoperative delirium patients after major non-emergent surgery, or preclinical Alzheimer’s disease (AD) patients that have clinical evidence of Alzheimer’s without symptoms. According to recent estimates, there may be 416 million people globally on the AD continuum, which include approximately 315 million people with preclinical AD. In light of all the above, a more effective application of digital biomarker and explainable artificial intelligence methodologies that explored amyloid beta, neuronal, axonal, and glial markers in relation to neurological complications in-hospital or later outcomes could significantly assist progress in the field. Easy and scalable subjects’ risk stratification is of utmost importance, yet current international collaboration initiatives are still challenging due to the limited explainability and accuracy to identify individuals at risk or in the earliest stages that might be candidates for future clinical trials. In this open letter, we propose the administration of selected digital biomarkers previously discovered and validated in other EU-funded studies to become a routine assessment for non-PACS preoperative cognitive impairment, PACS neurological complications in-hospital, or later PACS and non-PACS improvement in cognition after surgery. The open letter also includes an economic analysis of the implications for such national-level initiatives. Similar collaboration initiatives
could have existing pre-diagnostic detection and progression prediction solutions pre-screen the stage before and around diagnosis, enabling new disease manifestation mapping and pushing the field into uncharted territory.
Roughly 416 million people globally exist across the AD continuum. Considering both the PACS and the preclinical Alzheimer's disease populations above, we are faced with an unprecedented figure of almost one billion people potentially at risk with respect to their brain health.
Keywords
Digital biomarkers · Artificial intelligence · Genetics · National priority projects
4.1 Introduction
More than 533 million people globally [1] have been affected by coronavirus disease 2019 (COVID-19), and, at the time of this writing, studies of its long-term persistent impacts have been steadily increasing [2]. Several studies have reported that 31–69% [3] of all COVID-19 patients experience post-acute COVID-19 syndrome (PACS), with diverse manifestations comprising a variety of neurological and neuropsychiatric complications, often referred to as "brain fog" [4]. Among those complications are chronic fatigue, anosmia, ageusia, dysautonomia, and a general cognitive deterioration, with or without fluctuations, which can manifest as difficulties with concentration, memory, receptive language, and/or executive function [5–7]. Interestingly, younger people are also at risk for "brain fog," even in the absence of serious disease [8]. To make things worse, a recent global estimation of the total number of people across the Alzheimer's disease continuum predicts that there may be 416 million people globally on the continuum, including approximately 315 million people with preclinical Alzheimer's disease (no obvious signs or symptoms, but laboratory evidence of Alzheimer's), 69 million people with prodromal disease (experiencing mild cognitive impairment), and 32 million people meeting the criteria for a clinical diagnosis of dementia or probable/possible Alzheimer's disease dementia [9]. Taken together, 22% of all persons aged 50 and above, or around 416 million people globally, are on the Alzheimer's disease continuum.
4.2 PACS Long-Term Cognitive Outcomes
Some recent studies have linked the inflammatory response during PACS with tau hyperphosphorylation, leading to blockages of neuropathological pathways typically associated with AD [10]. The clinical phenotype associated with those blockages is a persistent brain fog, often correlated with the severity of PACS symptoms during the acute illness phase [11, 12]. For example, in about 74% of patients hospitalized due to COVID-19, we observe coordination deficits that could be explained by tau pathology in the cerebellum [13]. Additionally, the constellation of long-lasting cognitive symptoms has been linked with leaky ryanodine receptor 2 (RyR2), even following mild illness and across all ages [10]. A recent comprehensive investigation [14] of "brain fog" in patients who died from COVID-19 identified extensive brain degeneration and inflammation, confirming the findings of the previous study in terms of increased tau pathophysiology [10]. Consistent with this evidence, a potential molecular mechanism was proposed to explain long-lasting "brain fog," implicating leaky RyR2 activity, or defective intracellular Ca2+ regulation caused by mitochondrial Ca2+ overload, leading to dysfunction and oxidative stress that contribute to AD-like neuropathology [15, 16]. The link between tau phosphorylation and "brain fog" in these patients occurs independently of age and extends to brain areas such as the cerebellum, which usually does not exhibit tau pathology in typical AD patients. The increased tau signaling, alongside TGF-β and their molecular signatures, is in this context associated with long-lasting cognitive symptoms (>6 months) and needs further exploration. A related question
is whether the neurological symptoms observed after PACS can be progressive (>6 months), possibly increasing the risk of AD or cognitive impairment in the future. In contrast, biochemical features of resilient cognition, including cellular and synaptic networks in AD, have been positively associated with cognition, representing a "signature" of brain resilience [17, 18]. Data from the Mayo Clinic Study of Aging (MCSA) suggest that there are molecular signatures, such as FDG-PET uptake in the bilateral anterior cingulate cortex and anterior temporal pole, that are associated with stable global cognition in individuals aged 80+. This molecular signature, named by the authors the resilience signature, provided significant information about longitudinal global cognitive change trajectories in both the MCSA and ADNI cohorts, even after accounting for amyloid status [19]. Such findings, and some preventive factors, in particular the control of systemic vascular risk, should be taken into consideration when trying to isolate PACS-anticipating risk factors at the time of initial COVID-19 diagnosis. Currently, the four known factors are Epstein-Barr virus viremia, SARS-CoV-2 RNAemia, type 2 diabetes, and specific autoantibodies [20]. For example, dysregulated glucose metabolism at the time of initial COVID-19 diagnosis may induce cognitive disruption in both COVID-19 and AD and could be considered a common diagnostic biomarker. Taken together, the observed biochemical changes associated with long- or short-term "brain fog" should be assessed and regularly monitored. Moreover, studies of the several genetic polymorphisms associated with increased cognitive impairment risk are currently lacking, and a careful investigation within each patient's environment, using easy and scalable screening tools, would assist the personalization of medicine and the rationalization of healthcare resources.
A recent study [21] reported a genetic link between COVID-19 and AD via the oligoadenylate synthetase 1 (OAS1) gene. This gene is expressed in microglia and is involved in
innate immune responses to viral infection, and carries an increased risk for AD. More specifically, a total of 2547 DNA samples were genotyped for this study, of which 1313 were from sporadic Alzheimer's disease patients and 1234 from controls. Through this analysis, the researchers evaluated four variants of the OAS1 gene that result in a reduction of its expression. These four single nucleotide polymorphisms (SNPs) are rs4766676 and rs1131454, which have also been linked to AD, and rs6489867 and rs10735079, which are linked to severe COVID-19 symptoms. According to the results of the study, the SNPs within OAS1 associated with AD and those associated with severe COVID-19 symptoms were in linkage disequilibrium (LD). In the same study, unique genetic expression networks containing genes of interferon response pathways were found in microglial cells and in macrophages treated to mimic COVID-19 effects. More specifically, the gene was found to control the amount of pro-inflammatory proteins released by immune system cells. Moreover, knockdown of OAS1 expression using small interfering RNA (siRNA) in microglial cells unleashed a "cytokine storm": an exaggerated pro-inflammatory response to tissue damage, with an autoimmune state in which the immune system initiates an attack on the body's own tissue. This study is a first indication of a potential genetic link between AD risk and susceptibility to PACS symptoms centered on the OAS1 gene. However, we have only seen the tip of an iceberg that must be explored for a better understanding and treatment of diseases such as AD and PACS, as well as for the development of biomarkers to monitor the progression of this interaction.
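For readers unfamiliar with linkage disequilibrium, the standard r² statistic quantifies how strongly alleles at two loci co-occur on the same haplotype. The sketch below illustrates the formula only; the allele and haplotype frequencies are invented for illustration and do not come from the OAS1 study.

```python
# Illustrative computation of linkage disequilibrium (LD) between two SNPs.
# All frequencies below are hypothetical, chosen only to demonstrate the formulas.

def ld_statistics(p_ab: float, p_a: float, p_b: float) -> tuple[float, float]:
    """Return (D, r^2) given the haplotype frequency p_ab of alleles A and B
    and the marginal allele frequencies p_a and p_b."""
    d = p_ab - p_a * p_b  # D: deviation from independence of the two loci
    r2 = d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))  # squared correlation
    return d, r2

# Hypothetical example: the two alleles co-occur more often than expected by chance
d, r2 = ld_statistics(p_ab=0.30, p_a=0.40, p_b=0.50)
print(f"D = {d:.3f}, r^2 = {r2:.3f}")  # D = 0.100, r^2 = 0.167
```

An r² near 0 indicates the loci segregate independently; values approaching 1 indicate the SNPs are inherited together, which is why LD between the AD-associated and COVID-19-associated OAS1 variants is informative.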
Such studies would enable predictive algorithms for the brain resilience signature and/or for disease-targeting modifiable risk factors, such as vascular health maintenance, and suggest digital biomarker metrics sensitive to the short- and long-term cognitive sequelae that contribute to decreased quality of life and cost to society, especially in diverse populations.
4.3 Postoperative Delirium Long-Term Cognitive Outcomes
Delirium remains a commonly encountered condition in older adults post-hospitalization, specifically postoperative delirium, occurring 24–72 h after surgery [22]. The incidence depends on the type of surgery, reaching 15–25% after elective surgery and around 50% in frail or high-risk elderly patients [23]. Notably, delirium is very common in COVID-19 patients, partly worsened by prolonged mechanical ventilation [24]. In such situations, within a temporary ICU, formal acute brain dysfunction monitoring is nearly impossible, and delirium screening is based on pure clinical observation. Delirium usually follows a pattern of rapid onset, serious symptom fluctuation during daytime hours, powerful disruption of the sleep-wake cycle, and changes in functional cognition and behavior [25]. Patients who develop delirium in the postoperative period have a 10–11% increased risk of death within 3 months, an increased duration of hospitalization, and, more importantly, an increased risk of long-term cognitive impairment [26]. Such radical long-term fluctuations in cognition are more common in frail patients undergoing longer, more invasive procedures, such as cardiac and orthopedic surgery [27]. Risk factors for developing postoperative delirium include patient-specific predisposing factors, such as frailty or low functional cognition in daily activities, underprivileged education, a history of alcohol abuse, and/or any preexisting neurological and cognitive impairments [28]. They can also be surgery-related or medical factors, depending on the timing of appearance during hospitalization [29]. However, given the largely underestimated predementia stages, as explained above, the incidence of postoperative delirium could in the future be much larger than conveyed in the available literature.
Moreover, we hypothesize that, given the weighty neurological and neuropsychiatric consequences of PACS, which are also present in younger age groups, the characteristic cognitive fluctuations of postoperative delirium will be increasingly present in a more heterogeneous and potentially younger population.
4.4 Digital Neuro Signatures of Brain Resilience
With such urgent requirements in mind, two avenues for the investigation of diagnostic and monitoring biomarkers for cognitive function in both PACS and postoperative delirium can be examined: (1) creating a series of active "digital footprints" for functional cognition, including physical, social, systemic vascular risk, systemic inflammation, and metabolic dysfunction determinants, as a digital biomarker platform, and/or (2) identifying a unique complex Digital Neuro Signature (DNS)™ [30] that serves as a proxy for brain resilience. Such screening tools would be significantly important, since acquiring imaging data from PACS or postoperative delirium patients can be challenging, especially for beta-amyloid, tau accumulation, brain glucose use, or even MRI-based blood flow [31]. To satisfy the first alternative, the RADAR-AD study attempted to validate such a platform via a series of passive and active digital biomarkers collected through tablets, smartphones, wearables [32], or other Internet of Things (IoT) sources, and also inferred causality between commercial tools offering active digital biomarkers and various biological variables. Such platforms may contain different classes of digital biomarkers, ranging from diagnostic, prognostic, monitoring, pharmacodynamic, and predictive to safety and susceptibility digital biomarkers, depending on their unique structure [33]. For example, a recent study from the UCSF Osher Center for Integrative Medicine, named TemPredict, utilized the Oura smart ring in such a platform to predict coronavirus symptoms up to 24 h before onset [34]. It should be noted that such platforms are still under validation, focusing on detecting common COVID-19 symptoms such as fatigue and aching muscles, fever, dry cough, and shortness of breath, but
not "brain fog." This is a promising start for a metric that is easy for both patients and researchers to read and interpret, but further independent validation is still needed [35]. To satisfy the second alternative, a unique DNS monitoring biomarker [36] is being evaluated for remote data acquisition (RDA) in the Deep Neuro Study, a Greek national priority project, since August 2022. This active biomarker type could prove useful in clinical practice and in evaluating treatment options, as it is intended to allow the appraisal of subtle neural correlates of COVID-19 cognitive dysfunction. The DNS monitoring biomarker would serially measure those subtle imbalances, so that changes in metabolic and neurotransmission signatures indicate meaningful changes that predict early neurological damage and subsequent cognitive decline. The ability of novel digital biomarkers to measure off-target effects on molecular signatures such as brain metabolism will increasingly come into play following a recent draft guidance document from the FDA providing guidelines to investigators, sponsors, and other stakeholders on the use of digital health technologies (DHTs) for acquiring data remotely in clinical investigations [37]. The goal of the present open letter is to urgently raise awareness about a pressing issue, a possibly increased future risk of cognitive impairment or AD [38], which might prove catastrophic for healthcare systems. Such awareness is particularly important for at-risk populations in middle- and low-income regions, where biomarker studies are missing and the population is predisposed due to contributors [39] such as obesity, low brain resilience signatures, and certain genetic variants. Furthermore, a recent study by Persson et al. [40], which explored healthcare costs attributable to dementia over 17 years, underscores the costs that occur one decade before diagnosis.
Timely diagnosis with DNS monitoring biomarkers could provide the baseline for explaining the exact cost increases both post-screening and 6 years after diagnosis. Previous work showed that cognitive impairment leading to dementia usually develops 18 years prior to diagnosis [41]. We postulate that, due to the
underestimated population of preclinical AD, increased healthcare use could, and should, be anticipated several years before the formal dementia diagnosis.

Acknowledgments The RADAR-AD project has received funding from the Innovative Medicines Initiative 2 Joint Undertaking under grant agreement No 806999. This Joint Undertaking receives support from the European Union's Horizon 2020 research and innovation program and EFPIA and their associated partners.

Disclaimer This communication reflects the views of the authors, and neither the RADAR-AD consortium nor IMI nor the European Union and EFPIA are liable for any use that may be made of the information contained herein.
References

1. World Health Organization. WHO Coronavirus (COVID-19) Dashboard. https://covid19.who.int. Date last accessed: 10 June 2022.
2. Centers for Disease Control and Prevention. Post-COVID conditions. https://www.cdc.gov/coronavirus/2019-ncov/long-term-effects/index.html. 2022.
3. Groff D, et al. Short-term and long-term rates of postacute sequelae of SARS-CoV-2 infection: a systematic review. JAMA Network Open. 2021;4(10):e2128568.
4. Nalbandian A, Sehgal K, Gupta A, et al. Post-acute COVID-19 syndrome. Nat Med. 2021;27:601–15.
5. Miskowiak KW, Johnsen S, Sattler SM, Nielsen S, Kunalan K, Rungby J, Lapperre T, Porsberg CM. Cognitive impairments four months after COVID-19 hospital discharge: pattern, severity and association with illness variables. Eur Neuropsychopharmacol. 2021;46:39–48.
6. Daroische R, Hemminghyth MS, Eilertsen TH, Breitve MH, Chwiszczuk LJ. Cognitive impairment after COVID-19 – a review on objective test data. Front Neurol. 2021;12:699582.
7. Meier IB, Teixeira CVL, Tarnanas I, Mirza F, Rajendran L. Neurological and mental health consequences of COVID-19: potential implications for well-being and labour force. Brain Communications. 2021;3(1).
8. Kanberg N, Simrén J, Éden A, Andersson L-M, Nilsson S, Ashton NJ, Sundvall P-D, Nellgård B, Blennow K, Zetterberg H, Gisslén M. Neurochemical signs of astrocytic and neuronal injury in acute COVID-19 normalizes during long-term follow-up. Alzheimers Dement. 2021;17:e057889.
9. Gustavsson A, Norton N, Fast T, et al. Global estimates on the number of persons across the Alzheimer's disease continuum. Alzheimers Dement. 2022;1–13.
10. Reiken S, Sittenfeld L, Dridi H, Liu Y, Liu X, Marks AR. Alzheimer's-like signaling in brains of COVID-19 patients. Alzheimers Dement. 2022;1–11.
11. Gordon MN, Heneka MT, Le Page LM, et al. Impact of COVID-19 on the onset and progression of Alzheimer's disease and related dementias: a roadmap for future research. Alzheimers Dement. 2021;1–9.
12. Augustin M, Schommers P, Stecher M, Dewald F, Gieselmann L, Gruell H, Horn C, Vanshylla K, Cristanziano VD, Osebold L, et al. Post-COVID syndrome in non-hospitalised patients with COVID-19: a longitudinal prospective cohort study. Lancet Reg Health Eur. 2021;6:100122.
13. Ermis U, Rust MI, Bungenberg J, et al. Neurological symptoms in COVID-19: a cross-sectional monocentric study of hospitalized patients. Neurol Res Pract. 2021;3:17.
14. Yang AC, Kern F, Losada PM, et al. Dysregulation of brain and choroid plexus cell types in severe COVID-19. Nature. 2021;595:565–571.
15. Meinhardt J, Radke J, Dittmayer C, et al. Olfactory transmucosal SARS-CoV-2 invasion as a port of central nervous system entry in individuals with COVID-19. Nat Neurosci. 2021;24:168–175.
16. Datta D, Leslie SN, Wang M, et al. Age-related calcium dysregulation linked with tau pathology and impaired cognition in non-human primates. Alzheimers Dement. 2021;17:920–932.
17. Adams JN, Lockhart SN, Li L, Jagust WJ. Relationships between tau and glucose metabolism reflect Alzheimer's disease pathology in cognitively normal older adults. Cerebral Cortex. 2019;29(5):1997–2009.
18. Arnold SE, Louneva N, Cao K, Wang L-S, Han L-Y, Wolk DA, Negash S, Leurgans SE, Schneider JA, Buchman AS, Wilson RS, Bennett DA. Cellular, synaptic, and biochemical features of resilient cognition in Alzheimer's disease. Neurobiol Aging. 2013;34(1).
19. Arenaza-Urquijo EM, Przybelski SA, Lesnick TL, Graff-Radford J, Machulda MM, Knopman DS, Schwarz CG, Lowe VJ, Mielke MM, Petersen RC, Jack CR, Vemuri P. The metabolic brain signature of cognitive resilience in the 80+: beyond Alzheimer pathologies. Brain. 2019;142(4):1134–1147.
20. Su Y, Yuan D, Chen DG, Ng RH, Wang K, Choi J, Li S, Hong S, Zhang R, Xie J, Kornilov SA, Scherler K, Pavlovitch-Bedzyk AJ, Dong S, Lausted C, Lee I, Fallen S, Dai CL, Baloni P, Smith B, Duvvuri VR, Anderson KG, Li J, Yang F, Duncombe CJ, McCulloch DJ, Rostomily C,
Troisch P, Zhou J, Mackay S, DeGottardi Q, May DH, Taniguchi R, Gittelman RM, Klinger M, Snyder TM, Roper R, Wojciechowska G, Murray K, Edmark R, Evans S, Jones L, Zhou Y, Rowen L, Liu R, Chour W, Algren HA, Berrington WR, Wallick JA, Cochran RA, Micikas ME; ISB-Swedish COVID19 Biobanking Unit, Wrin T, Petropoulos CJ, Cole HR, Fischer TD, Wei W, Hoon DSB, Price ND, Subramanian N, Hill JA, Hadlock J, Magis AT, Ribas A, Lanier LL, Boyd SD, Bluestone JA, Chu H, Hood L, Gottardo R, Greenberg PD, Davis MM, Goldman JD, Heath JR. Multiple early factors anticipate post-acute COVID-19 sequelae. Cell. 2022 Jan 25:S0092-8674(22)00072-1.
21. Huffman JE, Butler-Laporte G, Khan A, et al. Multi-ancestry fine mapping implicates OAS1 splicing in risk of severe COVID-19. Nat Genet. 2022;54:125–127.
22. Austin CA, O'Gorman T, Stern E, et al. Association between postoperative delirium and long-term cognitive function after major nonemergent surgery. JAMA Surg. 2019;154(4):328–334.
23. Melegari G, Gaspari A, Gualdi E, Zoli M, Meletti S, Barbieri A. Delirium in older adults: what a surgeon needs to know. Surgeries. 2022;3(1):28–43.
24. Watne LO, Tonby K, Holten AR, et al. Delirium is common in patients hospitalized with COVID-19. Intern Emerg Med. 2021;16:1997–2000.
25. Kotfis K, Witkiewicz W, Szylińska A, Witkiewicz K, Nalewajska M, Feret W, Wojczyński Ł, Duda Ł, Ely EW. Delirium severely worsens outcome in patients with COVID-19 – a retrospective cohort study from temporary critical care hospitals. J Clin Med. 2021;10(13):2974.
26. Raats JW, van Eijsden WA, Crolla RM, Steyerberg EW, van der Laan L. Risk factors and outcomes for postoperative delirium after major surgery in elderly patients. PLoS One. 2015;10(8):e0136071.
27. Zietemann V, Kopczak A, Müller C, Wollenweber FA, Dichgans M.
Validation of the Telephone Interview of Cognitive Status and Telephone Montreal Cognitive Assessment against detailed cognitive testing and clinical diagnosis of mild cognitive impairment after stroke. Stroke. 2017;48(11):2952–2957.
28. Pérez-Ros P, Martínez-Arnau FM, Baixauli-Alacreu S, Caballero-Pérez M, García-Gollarte JF, Tarazona-Santabalbina F. Delirium predisposing and triggering factors in nursing home residents: a cohort trial-nested case-control study. J Alzheimers Dis. 2019;70:1113–1122.
29. Folbert EC, Hegeman JH, Gierveld R, van Netten JJ, van der Velde D, Ten Duis HJ, Slaets JP. Complications during hospitalization and risk factors in elderly patients with hip fracture following integrated orthogeriatric treatment. Arch Orthop Trauma Surg. 2017;137:507–515.
30. Meier IB, Buegler M, Harms R, et al. Using a Digital Neuro Signature to measure longitudinal individual-level change in Alzheimer's disease: the
Altoida large cohort study. npj Digit Med. 2021;4:101.
31. van den Brink W, Bloem R, Ananth A, Kanagasabapathi T, Amelink A, Bouwman J, Gelinck G, van Veen S, Boorsma A, Wopereis S. Digital resilience biomarkers for personalized health maintenance and disease prevention. Front Digit Health. 2021;2:614670. https://doi.org/10.3389/fdgth.2020.614670
32. Biogen to launch pioneering study to develop digital biomarkers of cognitive health using Apple Watch and iPhone. News release. January 11, 2021. Accessed January 12, 2021. https://investors.biogen.com/news-releases/news-release-details/biogen-launch-pioneering-study-develop-digital-biomarkers
33. Gold M, Amatniek J, Carrillo MC, Cedarbaum JM, Hendrix JA, Miller BB, et al. Digital technologies as biomarkers, clinical outcomes assessment, and recruitment tools in Alzheimer's disease clinical trials. Alzheimers Dement. 2018;4:234–42.
34. Poongodi M, Hamdi M, Malviya M, Sharma A, Dhiman G, Vimal S. Diagnosis and combating COVID-19 using wearable Oura smart ring with deep learning methods [published correction appears in Pers Ubiquitous Comput. 2022;26(1):37]. Pers Ubiquitous Comput. 2022;26(1):25–35. https://doi.org/10.1007/s00779-021-01541-4
35. Babrak LM, Menetski J, Rebhan M, Nisato G, Zinggeler M, Brasier N, Baerenfaller K, Brenzikofer T, Baltzer L, Vogler C, Gschwind L, Schneider C, Streiff F, Groenen PMA, Miho E.
Traditional and digital biomarkers: two worlds apart? Digit Biomark. 2019;3:92–102.
36. Seixas AA, Rajabli F, Pericak-Vance MA, Jean-Louis G, Harms RL, Tarnanas I. Associations of digital neuro-signatures with molecular and neuroimaging measures of brain resilience: the Altoida large cohort study. Front Psychiatry. 2022;13:899080. https://doi.org/10.3389/fpsyt.2022.899080
37. U.S. Food and Drug Administration. Digital health technologies for remote data acquisition in clinical investigations. Draft guidance, 2021. Available at: https://www.fda.gov/media/155022/download [Accessed 30 December 2021].
38. Arnsten AFT, Datta D, Del Tredici K, Braak H. Hypothesis: tau pathology is an initiating factor in sporadic Alzheimer's disease. Alzheimers Dement. 2021;17:115–124.
39. Willette AA, Xu G, Johnson SC, et al. Insulin resistance, brain atrophy, and cognitive performance in late middle-aged adults. Diabetes Care. 2013;36(2):443–449.
40. Persson S, Saha S, Gerdtham U-G, Toresson H, Trépel D, Jarl J. Healthcare costs of dementia diseases before, during and after diagnosis: longitudinal analysis of 17 years of Swedish register data. Alzheimers Dement. 2022;1–10.
41. Rajan KB, Wilson RS, Weuve J, Barnes LL, Evans DA. Cognitive impairment 18 years before clinical diagnosis of Alzheimer's disease dementia. Neurology. 2015;85(10):898–904.
5 Graph Theory-Based Approach in Brain Connectivity Modeling and Alzheimer's Disease Detection

Dionysios G. Cheirdaris
Abstract
There is strong evidence that the pathological findings of Alzheimer's disease (AD), consisting of accumulated amyloid plaques and neurofibrillary tangles, can spread through the brain via the synapses and neural connections of neighboring brain sections. Graph theory is a helpful tool for depicting the complex human brain as a set of regions of interest (ROIs) and the connections among them. Thus, applying graph theory-based models to the study of brain connectivity comes naturally in the study of AD propagation mechanisms. Moreover, graph theory-based computational approaches have lately been applied to boost data-driven analysis, extract model measures and robustness-effectiveness indexes, and provide insights into causal interactions between regions of interest (ROIs), as imposed by the models' architecture.

Keywords
Graph theory · Brain connectivity networks · Alzheimer’s disease · Noninvasive neuroimaging techniques
D. G. Cheirdaris (✉) Bioinformatics and Human Electrophysiology Laboratory, Department of Informatics, Ionian University, Corfu, Greece e-mail: [email protected]
5.1 Introduction
Alzheimer's disease (AD) is one of the most common progressive neurodegenerative dementias, in which the accumulation of neurofibrillary tangles and amyloid plaques (amyloid β-peptide (Aβ)) in the brain tissue, compared to control subjects and/or mild cognitive impairment (MCI) patients, triggers damage to neurons and synapses in the cortex and eventually leads to neural loss [1]. The pathogenic mechanisms of this neurological disorder can be investigated using different imaging techniques: magnetic resonance imaging (MRI) to evaluate structural changes; functional magnetic resonance imaging (fMRI) to gauge the functional patterns of neuronal activity; electro- and magnetoencephalography (EEG/MEG) to study high-resolution temporal brain dynamics; positron emission tomography (PET) to assess functional and metabolic changes through radioactive tracers, in order to delineate healthy brain regions from affected ones; and diffusion tensor imaging (DTI) to elucidate disruptions in white matter fiber tracts due to disease. In many, if not most, studies, these techniques are applied complementarily, and the results are investigated comparatively rather than exclusively. The bulk of the data recorded by all these techniques has been useful, when modeled through graph theory and assessed through commercial or custom computational toolboxes, in mapping brain
connectivity models and monitoring AD onset and progression.
5.2 Graph Theory in Brain Connectivity Modeling
Graph theory is the mathematical framework in which a complicated network can be approximately represented and effectively studied. A variety of biological networks (in the systems biology approach) can be modeled within this framework, which indicates a fundamental similarity among these networks. Graph theory considers the modeled network as a system consisting of a set of elements (nodes) linked by connections or interactions (edges). The human brain is thus an ideal candidate for graph theoretical analysis of the way in which brain regions are functionally and structurally connected (connectomics). This is the core of the Human Connectome Project launched in 2009, which has since enabled the extraction of brain parcellation mappings [2]. In this approach, the brain regions under investigation are considered the nodes and the connections between them the edges. In data-driven analysis of brain neuroimaging data, the nodes can be defined as the brain regions underlying electrodes or by using an anatomical, functional, or histological parcellation scheme. The edges are obtained as measures of association between the brain regions; these measures include the aforementioned statistical dependencies between time series (functional MRI (fMRI)), connection probabilities (diffusion tensor imaging (DTI)), interregional correlations in cortical thickness (magnetic resonance imaging (MRI)), electrophysiological signals (electroencephalography (EEG)/magnetoencephalography (MEG)), and even blood flow (arterial spin labeling (ASL)) [3]. EEG and MEG use scalp surface sensors to access neural activity and, consequently, electrophysiological measures and the temporal dynamics of brain connectivity; however, their ability to measure functional connectivity beyond the brain surface is restricted [4]. Imaging modalities that provide functional connectivity information are applied
to overcome these limitations. The most prominent of these techniques is functional MRI (fMRI). fMRI has been applied as a noninvasive, radiation-free imaging analysis technique to diagnose, predict, and classify different stages of a disease, cross-sectionally or through longitudinal studies [5]. fMRI is equally applied to investigate functional interregional connectivity, evaluate spatiotemporal interregional correlations, and address brain dysfunctions as neural network dysfunctions. Compared to structural MRI, fMRI data are not plainly structural but provide a measure of brain connectivity over time: the data consist of time series through which statistical dependencies or temporal correlations between close or spatially remote neurophysiological events can be inferred [6]. The principle of the technique is that cerebral blood flow and neural activation are coupled, so that blood flow reflects the activity of brain tissue. Thus, the earliest fMRI studies used the blood oxygen level-dependent (BOLD) contrast [3], introduced in the 1990s. To overcome limitations due to low-frequency fluctuations in fMRI signals, resting-state functional MRI (rs-fMRI) has mostly been applied in functional brain connectivity studies over the last couple of decades.
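The statistical dependency between two regions' BOLD time series described above is most often quantified with the Pearson correlation coefficient. A minimal stdlib-only sketch, using synthetic, made-up signals in place of real fMRI data:

```python
import math
import random

def pearson(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
# Synthetic "BOLD" signals: region B partially follows region A, plus noise
region_a = [math.sin(t / 5.0) + random.gauss(0, 0.2) for t in range(200)]
region_b = [0.8 * a + random.gauss(0, 0.2) for a in region_a]

r = pearson(region_a, region_b)
print(f"r = {r:.2f}")  # a high r, since region B is driven by region A
```

In a real analysis, the time series would come from averaging voxel signals within each parcellated region, and one coefficient would be computed for every pair of regions.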
5.3 From fMRI Data to Functional Networks Through Graph Theory
Using graph theory, a functional network can be investigated in four essential steps [5]. First, the nodes and the associations among them are determined: the fMRI signal is spatially segregated into regions of interest. Next, the correlations between pairs of brain regions are computed from their time series and stored in a matrix; these correlations represent the relationship, or connectivity, of those regions. The matrix is then thresholded into an adjacency matrix. In the end, the network features of the graph, including shortest path length, betweenness, clustering, small-worldness, and modularity, which can be measured mathematically, are calculated (Fig. 5.1). Most graph theory-based studies use symmetrical calculations, like
5
Graph Theory-Based Approach in Brain Connectivity Modeling and. . .
Fig. 5.1 Diagram of obtaining functional network graph parameters: from the fMRI time series, ROIs undergo time course extraction, yielding a correlation matrix that is thresholded into an adjacency matrix, from which the graph parameters of the functional network are computed. (Adapted from [5])
correlation and coherence or partial coherence to find the association between nodes. The first step in the network construction process is to calculate the correlation coefficient between each pair of ROIs and set a threshold for that coefficient [7]. The Pearson correlation coefficient, which measures the linear correlation between two normally distributed variables, is usually used for this purpose; the time series obtained from the fMRI data of two ROIs in the network serve as the two variables in the Pearson formula. When setting the threshold, researchers bear in mind that the brain network should be a sparse graph with dense local connectivity, containing many more short connection paths than long ones between any pair of ROIs [8]. Therefore, the network should not contain isolated nodes, its density should be larger than 50%, and the average degree of the network should be larger than 2 ln(N), where N is the total number of network nodes [9].
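The correlation-and-thresholding step just described can be sketched in a few lines of Python. This is a toy illustration: random data stands in for real ROI time courses, and the threshold value is purely illustrative, not a recommended setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for fMRI data: each row is the averaged BOLD time
# course of one ROI (N_ROI regions, T time points).
N_ROI, T = 20, 200
ts = rng.standard_normal((N_ROI, T))

# Pearson correlation between every pair of ROI time series.
corr = np.corrcoef(ts)                       # (N_ROI, N_ROI) matrix

# Threshold the correlations into a binary adjacency matrix.
threshold = 0.05                             # illustrative value, set per study
adj = (np.abs(corr) > threshold).astype(int)
np.fill_diagonal(adj, 0)                     # no self-connections

# Diagnostics suggested by the text: the network should have no isolated
# nodes and an average degree above 2 ln(N).
degree = adj.sum(axis=1)
print("isolated nodes:", int((degree == 0).sum()))
print("average degree:", degree.mean(), "vs 2 ln(N) =", 2 * np.log(N_ROI))
```

In a real study, the threshold (or a target density) is tuned until the diagnostics printed at the end satisfy the constraints quoted from [9].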
5.4 Graph Theory Measures and Network Feature Extraction
Brain network connectivity in fMRI data-driven studies has roughly been divided into model-based and model-free methods (Fig. 5.2). Functional connectivity reveals the statistical patterns of the brain regions, while effective connectivity reveals how the functions of different brain regions affect each other. Model-based methods, including cross-correlation, coherence analysis, and statistical parametric mapping (SPM), require predefined criteria when examining the potential linear links between a “seed” region (arbitrarily selected) and other ROIs. This makes result interpretation straightforward but also limits their application to complex functional architectures [10]. Cross-correlation simply measures the correlation between the BOLD signals of any two brain
D. G. Cheirdaris
Fig. 5.2 Diagram of functional and effective connectivity modeling in fMRI studies (connectomics based on fMRI). Functional connectivity methods are either model-based (cross-correlation analysis (CCA), coherence analysis (CA), statistical parametric mapping (SPM)) or model-free (decomposition-based analysis: principal component analysis (PCA) and independent component analysis variants pICA, sICA, tICA; clustering: hierarchical clustering, fuzzy c-means (FCM), self-organizing maps (SOM); mutual information (MI); graph theory). Effective connectivity methods are likewise model-based (parametric and non-parametric Granger causality (GC), dynamic causal modeling (DCM), structural equation modeling (SEM)) or model-free (Bayesian networks (BN), including Gaussian BN (GBN), discrete dynamic BN (DBN), and Gaussian DBN; Markov models; transfer entropy (TE)). (Adapted from [6])
ROIs. Simple as it may appear, it has high computational complexity when correlations are calculated across various lags of the time series [11], and its results are questionable under variations in the shape of the hemodynamic response function (HRF) and under high brain noise from normal physiological activity (cardiac and respiratory variations). Statistical parametric mapping (SPM) is a statistical technique used to extract regional activation patterns from neuroimaging (mostly fMRI and PET) data, based on the general linear model (GLM) and Gaussian random field (GRF) theory. The idea is to produce a map of the examined brain area, represented as grid points or “voxels,” each of which represents the activity of a specific brain volume in three-dimensional space. The volume of each voxel depends on the imaging modality used; fMRI voxels typically represent cubes with a volume of 27 mm³ (3 × 3 × 3 mm). The GLM is then used to estimate the parameters describing the
spatially continuous data by performing a univariate test statistic on each voxel, while GRF is used to handle the multiple comparisons among continuous data (images) when making statistical inferences over a brain part. Analyses may examine differences over time (correlations between a task variable and brain activity in a certain area) using linear convolution models of how the measured signal is caused by underlying changes in neural activity.
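A minimal sketch of the voxel-wise GLM test that SPM performs at each voxel follows. The boxcar task regressor, noise level, and effect size are invented for illustration, and a real SPM analysis would additionally convolve the regressor with the HRF and apply GRF-based multiple-comparison correction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-voxel GLM: the univariate test SPM runs at every voxel.
T = 120
task = (np.arange(T) // 10) % 2              # boxcar task regressor (on/off blocks)
X = np.column_stack([np.ones(T), task])      # design matrix: intercept + task
beta_true = np.array([100.0, 2.0])           # baseline signal + task effect
y = X @ beta_true + rng.standard_normal(T)   # simulated voxel time series

# Ordinary least squares estimate of the betas.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# t-statistic for the task regressor (contrast c = [0, 1]).
resid = y - X @ beta
dof = T - X.shape[1]
sigma2 = resid @ resid / dof
c = np.array([0.0, 1.0])
t = (c @ beta) / np.sqrt(sigma2 * (c @ np.linalg.inv(X.T @ X) @ c))
print("estimated task effect:", beta[1], " t =", t)
```

Repeating this fit over every voxel and mapping the resulting t-values over the brain volume is, in essence, the statistical parametric map.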
5.5 Graph Theory Metrics
After the construction of the network, one needs to calculate several graph theory measures – metrics that assess the topology of the whole brain network as well as its regions. These metrics include the node degree, characteristic path length, shortest path length, clustering coefficient,
centrality coefficients, modularity, small-worldness coefficient, and a handful of variations, often divided in the literature into segregation and integration metric subgroups [12]. Segregation refers to the degree to which network elements form separate clusters, while integration refers to the capacity of the network to become interconnected and exchange information.
The simplest, yet most fundamental, measure that can be assessed in a graph is the node degree, which represents the number of connections a node has with the rest of the network. The degree distribution in the brain follows an exponentially truncated power law [8], meaning that similarly connected areas tend to communicate with each other. In the case of weighted graph models, the nodal strength is also calculated, as the sum of the weights of all connections linked to a node. A node with a high degree interacts with many other nodes and is consequently an important node in the network [13].
The network’s modular structure, the extent to which a network can be divided into separate communities corresponding to regional proximity or similar functionality, is determined by maximizing the value of modularity, i.e., the ratio of the number of within-group edges to the number of between-group edges [14].
The shortest path length is the shortest distance between two nodes. In a binary graph, distance is measured as the minimum number of edges that need to be crossed to go from one node to the other; in a weighted graph, the length of an edge is a function of its weight. The average of the shortest path lengths between one node and all other nodes is the characteristic path length, which serves as a measure of global connectedness within a graph.
Global efficiency measures the ability of a network to transmit information at the global level and is defined as the average inverse shortest path length, while local efficiency measures the efficient communication of each node with its immediate neighbors [15]. The clustering coefficient (or transitivity) reflects the connectedness among neighbors of a node in a graph, indicating the level of local clustering. For each node, it is calculated as the fraction of the node’s neighbors that are also neighbors of each other.
For the whole network, the clustering coefficients of all nodes can be averaged into the overall clustering coefficient. The small-worldness coefficient is given by the ratio of the mean clustering coefficient to the characteristic path length (each normalized by the corresponding value calculated on random graphs) and describes an optimal network architecture in that it captures the balance between local connectedness and global integration of a network [12]. Compared to a random graph, a small-world network is characterized by similarly short paths but a significantly higher clustering coefficient [14]. In Fig. 5.3, adapted from Farahani et al. 2019, some of the abovementioned metrics are graphically presented, while in Table 5.1 their mathematical formulations and interpretations are summarized. For a more detailed presentation of graph theoretical properties/metrics associated with or resulting from AD studies, including formulae, classification, interpretation, and restrictions, the interested reader is directed to the survey by Tijms et al. [17].
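Several of the metrics described above can be computed directly from a binary adjacency matrix. The sketch below uses plain NumPy and breadth-first search on an invented five-node toy network; dedicated toolboxes such as the Brain Connectivity Toolbox [27] implement the full weighted and normalized versions.

```python
import numpy as np
from itertools import combinations
from collections import deque

def bfs_distances(adj, source):
    """Shortest path lengths (in edges) from source in a binary graph."""
    n = adj.shape[0]
    dist = np.full(n, np.inf)
    dist[source] = 0
    q = deque([source])
    while q:
        u = q.popleft()
        for v in np.nonzero(adj[u])[0]:
            if dist[v] == np.inf:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def clustering_coefficient(adj, node):
    """Fraction of a node's neighbor pairs that are themselves connected."""
    nbrs = np.nonzero(adj[node])[0]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(adj[u, v] for u, v in combinations(nbrs, 2))
    return 2.0 * links / (k * (k - 1))

# Toy binary "brain network" (symmetric adjacency matrix).
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 1, 0],
                [1, 1, 0, 1, 0],
                [0, 1, 1, 0, 1],
                [0, 0, 0, 1, 0]])
n = adj.shape[0]

degree = adj.sum(axis=1)                              # node degrees
dists = np.array([bfs_distances(adj, s) for s in range(n)])
off = ~np.eye(n, dtype=bool)                          # exclude self-distances
char_path_length = dists[off].mean()                  # characteristic path length
global_efficiency = (1.0 / dists[off]).mean()         # average inverse distance
mean_clustering = np.mean([clustering_coefficient(adj, v) for v in range(n)])

print(degree, char_path_length, global_efficiency, mean_clustering)
```

The small-worldness coefficient would follow by recomputing the characteristic path length and mean clustering on degree-matched random graphs and taking the normalized ratio.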
5.6 Evaluating Graph Model Robustness
The construction of the connectivity network is highly stochastic and subject to change. Apart from the extraction of the metrics presented above, and consequently the effectiveness of graphs, researchers have paid attention to the robustness of graph models, that is, the ability of a graph model to preserve its connectivity after the loss of nodes and edges. Graph model robustness is evaluated through specific indexes, such as the connection intensity index, the Randić index, the Kirchhoff index, and the Fiedler value. A thorough presentation and strict mathematical formulation of these indexes is beyond the scope of this work; nevertheless, a brief overview follows. The connection intensity index of a graph is defined as the ratio of its actual (current) number of edges to the maximum number of edges it would have if it were a complete graph [18]. By definition, its values range between 0 and 1, and the greater it is,
Fig. 5.3 A summary of global graph measures: clustering coefficient, shortest path, modularity, regular/small-world/random topologies under increasing rewiring randomness, and assortative (r = 1) versus disassortative networks. (Adapted from [6])
Table 5.1 Graph measures, their mathematical formulation, and their meaning. (Adapted from [16])
the more connected the graph is considered to be. It has constant computational complexity with respect to the number of nodes and edges; therefore, it can be easily calculated in large graphs and easily updated when the number of edges and/or nodes changes. The Randić index is a measure of assortativity (similarity) between connected nodes. In its simplest formulation, it is defined as the sum, over all connected node pairs in the graph, of a product of their degrees, and it thus measures the degree to which nodes with a similar number of edges are connected to one another. A high Randić index indicates that nodes with higher degree (many edges) tend to be connected to other high-degree nodes [19]. Its computational complexity is in the order of the size of the graph, as it increases linearly with it [20]. De Meo et al. [21] provide an extensive presentation of the Randić index for estimating graph robustness, covering its mathematical formulations, experimental applications on datasets, and sampling-based algorithms for its accurate approximation. The Kirchhoff index of a graph is essentially the application of Kirchhoff’s law to an electric circuit derived from the graph, where nodes and edges correspond to junctions and resistors, respectively. The effective resistance measured between two nodes represents the resistance the current encounters in flowing from one to the other. Kirchhoff’s law allows us to compute the effective resistance between each pair of nodes, and the Kirchhoff index is calculated as the sum of all pairwise effective resistances within the network [20]. In a recent comparative study of the abovementioned indexes, concerning functional connectivity network robustness over EEG data from healthy controls and MCI/AD patients [20], a significant difference was shown between healthy controls and AD patients for all indexes, with the Randić index outperforming the others.
The Randić index was also solid in differentiating the stable and progressive MCI groups, suggesting its usefulness in predicting the progression of AD, according to a study of correlation maps of cortical thickness obtained from MRI [19].
The Fiedler value, or algebraic connectivity, is equal to the second-smallest eigenvalue of the Laplacian matrix. The Laplacian matrix combines both degree information and connectivity information in the same matrix. Specifically, the Laplacian matrix is derived by subtracting the adjacency matrix from the degree matrix [19].
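The four indexes can be sketched on an invented toy network as follows. For the Randić index, a generalized form R_alpha = sum over edges of (d_u · d_v)^alpha appears in the literature; alpha = 1 is used here to match the sum-of-degree-products description above, while the classical Randić index takes alpha = -1/2.

```python
import numpy as np

# Toy symmetric binary network (4 nodes; edges 0-1, 0-2, 1-2, 1-3, 2-3).
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
n = adj.shape[0]
degree = adj.sum(axis=1)
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if adj[u, v]]
m = len(edges)

# Connection intensity: actual edges over the complete-graph maximum.
connection_intensity = m / (n * (n - 1) / 2)

# Generalized Randic index with alpha = 1 (sum of degree products per edge).
randic = sum((degree[u] * degree[v]) ** 1.0 for u, v in edges)

# Laplacian matrix: degree matrix minus adjacency matrix.
L = np.diag(degree) - adj
eig = np.sort(np.linalg.eigvalsh(L))

# Fiedler value (algebraic connectivity): second-smallest Laplacian eigenvalue.
fiedler = eig[1]

# Kirchhoff index: n times the sum of reciprocal nonzero Laplacian eigenvalues.
kirchhoff = n * np.sum(1.0 / eig[1:])

print(connection_intensity, randic, fiedler, kirchhoff)
```

The Kirchhoff line uses the standard spectral identity for the sum of pairwise effective resistances, which avoids solving each resistor circuit explicitly.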
5.7 Brain Connectivity and AD Detection
The Alzheimer’s disease detection process through graph theory-based modeling can be roughly divided into three steps: functional brain network construction (the resting-state brain is modeled as a functional brain network to analyze brain regions associated with brain functions [7]), network analysis (feature extraction with respect to graph theory modeling), and network learning. Since AD detection is effectively a classification problem, the last step consists of training classifiers on the graph and topological features obtained from the imaging data and extracted in step two. The network is constructed from images (most often resting-state fMRI data and EEG recordings), and large-scale atlases are used for ROI mapping. The automated anatomical labeling (AAL) atlas is the most popular of the atlases used [22]; AAL divides the brain into 116 regions, 90 in the cerebrum and 26 in the cerebellum. Another thing that relevant studies almost always have in common is the separation of data into healthy control subjects, mild cognitive impairment (MCI) patients, and AD patients, meaning that data is acquired from all three groups and studied comparatively. Studies of structural and functional brain connectomics in AD have illustrated that the brain network configuration in patients with AD is significantly altered compared with healthy controls, with the findings depending on the imaging modality, network size, and population size of the individual studies [16]. A variety of machine learning algorithms have been successfully applied to the imaging data (most usually fMRI), such as support
vector machines (SVM), random forest (RF), Bayesian networks (BN), and neural networks. Bi et al. [7] addressed the brain network classification problem by comparing deep learning methods and an ELM (extreme learning machine)-boosted architecture on direct fMRI data and found that the proposed methods, which learn deep features directly from brain networks, outperform shallow learning methods, with the ELM architecture clearly outperforming the convolutional learning approach in AD detection. Hojjati et al. [23] successfully trained a support vector machine (SVM) to predict MCI-to-AD conversion based on global and local graph measures constructed from rs-fMRI and to accurately (>90%) separate MCI converters (MCI-C) from non-converters (MCI-NC). With a similar experimental approach (preprocessing, brain parcellation, network-based feature extraction/evaluation), Khazaee et al. [14] achieved an impressive 100% accuracy in distinguishing AD patients from healthy controls with the application of an SVM trained on rs-fMRI data. Moreover, using statistical analysis of functional connectivity between the brain regions and based on the highly discriminative features, they identified brain regions with maximally abnormal functionality in AD patients. Jalili [15] used EEG recordings as the primary data over which the brain network was constructed and combined genetic algorithms (GA), binary particle swarm optimization (BPSO), and social impact theory-based optimization (SITO) feature selection methods to extract the optimal set of graph theory features for classification between AD patients and healthy controls. Liu et al. [24] proposed an AD/MCI/healthy-control classification approach based on individual hierarchical networks constructed with 3D texture features of subjects’ brain images (MRI), applying multiple kernels to combine edge as well as node features. Madhushree et al.
[25] recently clustered the vertices (brain regions) into a fixed number of clusters and used binary (fMRI and DTI scan) models for their classification algorithm. It should be noted that, apart from classification and functional brain connectivity, far fewer studies have focused on effective brain
connectivity comparing healthy and AD subjects. Effective connectivity investigates the causal effects of brain regions on each other and can capture the possible shifts in the learning and memory processes in MCI and AD (cognitive loss), an issue that cannot be addressed by functional connectivity. Last but not least, the ε4 allelic variant (APOE4) of the apolipoprotein E (APOE) gene, a major genetic factor in AD onset, has been associated with disturbed functional connectivity in cognitively healthy people and with disrupted graph topologies in AD [26].
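The three-step detection pipeline above ends with a classifier trained on graph features. The sketch below simulates per-subject feature vectors for two groups and uses a nearest-centroid rule as a dependency-free stand-in for the SVM stage used in the cited studies; the group means, noise level, and feature choice are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-subject graph features (e.g., characteristic path length,
# clustering coefficient, global efficiency), one row per subject. The
# AD-like group is simulated with longer paths and weaker clustering.
n_per_group = 30
hc = rng.normal([2.0, 0.45, 0.55], 0.05, size=(n_per_group, 3))  # healthy-like
ad = rng.normal([2.4, 0.35, 0.45], 0.05, size=(n_per_group, 3))  # AD-like
X = np.vstack([hc, ad])
y = np.array([0] * n_per_group + [1] * n_per_group)

# Split subjects into train and test sets.
idx = rng.permutation(len(y))
train, test = idx[:40], idx[40:]

# Nearest-centroid classifier: a minimal stand-in for the SVM stage.
centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[test][:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
accuracy = (pred == y[test]).mean()
print("test accuracy:", accuracy)
```

Real studies replace the synthetic features with metrics computed from each subject's network and the centroid rule with an SVM, typically evaluated with cross-validation rather than a single split.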
5.8 Software and Databases
Most of the relevant studies collect their imaging data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) publicly accessible data repository, as well as from BioFINDER, which provides data from several large, longitudinal, prospective cohorts together with preanalytical protocols. As far as the software needed to access and evaluate the data is concerned, several toolboxes have been developed to study brain connectivity, including the Brain Connectivity Toolbox [27], eConnectome [28], GAT [29], CONN [30], BrainNet Viewer [31], GraphVar [32], and GRETNA [33]. All these toolboxes share some common features, such as a GUI (graphical user interface) and graph analysis, while other features, e.g., data preprocessing and parallel programming support, are not universal across toolboxes. For fMRI preprocessing, the FMRIB Software Library (FSL) and FreeSurfer are widely used. GRETNA and GraphVar were developed to feature the calculation of dynamic functional connectivity measures, which usually requires some serious user programming skills. A freeware package that has lately become very popular in connectome studies is BRAPH (BRain Analysis using graPH theory) [13], owing to its spectrum of features and user-friendly capabilities. It can extract dynamic network topology by longitudinal graph theory analyses, can assess modular structure using different algorithms, allows performing subnetwork analyses within the defined modules, and provides utilities for
multimodal graph theory analysis by integrating information from different neuroimaging modalities. Acknowledgments This work is supported by the project: “NEUROSYSTEM: Decision Support System for the analysis of multilevel data of non-genetic neurodegenerative diseases” co-financed by EU funds (European Regional Development Fund-ERDF), within the Ionian Islands R.O.P. 2014-2020_MIS: 5016116.
Literature
1. Nestor, P., Scheltens, P., & Hodges, J. (2004). Advances in the early detection of Alzheimer’s disease. Nature Medicine, 10, S34–S41.
2. Glasser, M. F., Coalson, T. S., Robinson, E. C., Hacker, C. D., Harwell, J., Yacoub, E., . . . Van Essen, D. C. (2016). A multi-modal parcellation of human cerebral cortex. Nature, 536(7615), 171–178.
3. Detre, J. A., & Wang, J. (2002). Technical aspects and utility of fMRI using BOLD and ASL. Clinical Neurophysiology, 113(5), 621–634.
4. Bischof, G. N., Ewers, M., Franzmeier, N., Grothe, M. J., Hoenig, M., . . . van Eimeren, T. (2019). Connectomics and molecular imaging in neurodegeneration. European Journal of Nuclear Medicine and Molecular Imaging.
5. Forouzannezhad, P., Abbaspour, A., Fang, C., Cabrerizo, M., Loewenstein, D., Duara, R., & Adjouadi, M. (2018). A survey on applications and analysis methods of functional magnetic resonance imaging for Alzheimer’s disease. Journal of Neuroscience Methods.
6. Farahani, F. V., Karwowski, W., & Lighthall, N. R. (2019). Application of graph theory for identifying connectivity patterns in human brain networks: A systematic review. Frontiers in Neuroscience, 13.
7. Bi, X., Zhao, X., Huang, H., Chen, D., & Ma, Y. (2019). Functional brain network classification for Alzheimer’s disease detection with deep features and extreme learning machine. Cognitive Computation.
8. Achard, S., & Bullmore, E. (2007). Efficiency and cost of economical brain functional networks. PLoS Computational Biology, 3(2), e17.
9. Bassett, D. S., & Bullmore, E. (2006). Small-world brain networks. The Neuroscientist, 12(6), 512–523.
10. Farahani, F. V., & Karwowski, W. (2018). Computational methods for analyzing functional and effective brain network connectivity using fMRI. Advances in Intelligent Systems and Computing, 101–112.
11. Cecchi, G. A., Rao, A. R., Centeno, M. V., Baliki, M., Apkarian, A. V., & Chialvo, D. R. (2007). Identifying directed links in large scale functional networks: Application to brain fMRI. BMC Cell Biology, 8.
12. Vecchio, F., Miraglia, F., & Rossini, P. M. (2017). Connectome: Graph theory application in functional brain network architecture. Clinical Neurophysiology Practice, 2, 206–213.
13. Mijalkov, M., Kakaei, E., Pereira, J. B., Westman, E., & Volpe, G. (2017). BRAPH: A graph theory software for the analysis of brain connectivity. PLOS ONE, 12(8).
14. Khazaee, A., Ebrahimzadeh, A., & Babajani-Feremi, A. (2015). Identifying patients with Alzheimer’s disease using resting-state fMRI and graph theory. Clinical Neurophysiology, 126(11), 2132–2141.
15. Jalili, M. (2017). Graph theoretical analysis of Alzheimer’s disease: Discrimination of AD patients from healthy subjects. Information Sciences, 384, 145–156.
16. Xie, T., & He, Y. (2012). Mapping the Alzheimer’s brain with connectomics. Frontiers in Psychiatry, 2.
17. Tijms, B. M., Wink, A. M., de Haan, W., van der Flier, W. M., Stam, C. J., Scheltens, P., & Barkhof, F. (2013). Alzheimer’s disease: Connecting findings from graph theoretical studies of brain networks. Neurobiology of Aging, 34(8), 2023–2036.
18. Bullmore, E., & Sporns, O. (2009). Complex brain networks: Graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10, 186–198.
19. Phillips, D. J., McGlaughlin, A., Ruth, D., Jager, L. R., & Soldan, A. (2015). Graph theoretic analysis of structural connectivity across the spectrum of Alzheimer’s disease: The importance of graph creation methods. NeuroImage: Clinical, 7, 377–390.
20. Dattola, S., Mammone, N., Morabito, F. C., Rosaci, D., Sarné, G. M. L., & La Foresta, F. (2021). Testing graph robustness indexes for EEG analysis in Alzheimer’s disease diagnosis. Electronics, 10, 1440.
21. De Meo, P., Messina, F., Rosaci, D., Sarné, G. M. L., & Vasilakos, A. V. (2017). Estimating graph robustness through the Randić index. IEEE Transactions on Cybernetics, 1–14.
22. Tzourio-Mazoyer, N., Landeau, B., Papathanassiou, D., Crivello, F., Etard, O., Delcroix, N., . . . Joliot, M. (2002). Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. NeuroImage, 15(1), 273–289.
23. Hojjati, S. H., Ebrahimzadeh, A., Khazaee, A., & Babajani-Feremi, A. (2017). Predicting conversion from MCI to AD using resting-state fMRI, graph theoretical approach and SVM. Journal of Neuroscience Methods, 282, 69–80.
24. Liu, J., Wang, J., Hu, B., Wu, F.-X., & Pan, Y. (2017). Alzheimer’s disease classification based on individual hierarchical networks constructed with 3-D texture features. IEEE Transactions on NanoBioscience, 16(6), 428–437.
25. Madhushree, B. A., Gangadhar, N. D., & Prafulla Kumari, K. S. (2020). Modelling and mining brain network data for diagnosis of neurodegenerative diseases. 2020 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT).
26. Li, J., Bian, C., Chen, D., Meng, X., Luo, H., . . . Shen, L. (2020). Effect of APOE ε4 on multimodal brain connectomic traits: A persistent homology study. BMC Bioinformatics, 21(S21).
27. Rubinov, M., & Sporns, O. (2010). Complex network measures of brain connectivity: Uses and interpretations. NeuroImage, 52, 1059–1069.
28. He, B., Dai, Y., Astolfi, L., Babiloni, F., Yuan, H., & Yang, L. (2011). eConnectome: A MATLAB toolbox for mapping and imaging of brain functional connectivity. Journal of Neuroscience Methods, 195, 261–269.
29. Hosseini, S. M. H., Hoeft, F., & Kesler, S. R. (2012). GAT: A graph-theoretical analysis toolbox for analyzing between-group differences in large-scale structural and functional brain networks. PLoS ONE, 7(7).
30. Whitfield-Gabrieli, S., & Nieto-Castanon, A. (2012). Conn: A functional connectivity toolbox for correlated and anticorrelated brain networks. Brain Connectivity, 2(3), 125–141.
31. Xia, M., Wang, J., & He, Y. (2013). BrainNet Viewer: A network visualization tool for human brain connectomics. PLoS ONE, 8(7).
32. Kruschwitz, J. D., List, D., Waller, L., Rubinov, M., & Walter, H. (2015). GraphVar: A user-friendly toolbox for comprehensive graph analyses of functional brain connectivity. Journal of Neuroscience Methods, 245, 107–115.
33. Wang, J., Wang, X., Xia, M., Liao, X., Evans, A., & He, Y. (2015). GRETNA: A graph theoretical network analysis toolbox for imaging connectomics. Frontiers in Human Neuroscience, 9.
6 Developing Theoretical Models of Kinesia Paradoxa Phenomenon in Order to Build Possible Therapeutic Protocols for Parkinson’s Disease
Irene Banou
Abstract
One of the main characteristic symptoms of Parkinson’s disease is gait abnormality: bradykinesia, rigidity, postural instability, and “freezing of gait.” Because of these, falls are listed as the second leading cause of death in Parkinson’s patients. For this reason, the phenomenon of “paradoxical movement” or “Kinesia Paradoxa” (KP), in which a sudden but brief “disappearance” of parkinsonism is observed and the patients’ mobility is normalized (even if temporarily), is of great research interest. Taking as a starting point the three circuits possibly involved in the phenomenon of KP, we attempt a first outline of the phenomenon that could give directions towards the formulation and drafting of some possible theoretical therapeutic protocols or guidelines. It is worth mentioning here that to date there is no confirmed effective treatment for the reversal of Parkinson’s disease, beyond the alleviation of individual symptoms. Keywords
Parkinson disease · Kinesia Paradoxa · Therapeutic protocols for Parkinson’s · PD
I. Banou (✉) Bioinformatics and Human Electrophysiology Laboratory, Department of Informatics, Ionian University, Corfu, Greece # The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Vlamos (ed.), GeNeDis 2022, Advances in Experimental Medicine and Biology 1424, https://doi.org/10.1007/978-3-031-31982-2_6
7 Computational Methods for Protein Tertiary Structure Analysis
Antigoni Avramouli
Abstract
Protein folding accuracy is fundamental to all cells. In spite of this, it is difficult to maintain the fidelity of protein synthesis and folding, because the underlying genetic and biochemical systems are inherently prone to error, which leads to the constant production of a certain amount of misfolded proteins. This problem is further compounded by genetic variation and the effects of environmental stress. To that end, the computational prediction and analysis of tertiary protein structures might be an ideal approach for studying the effects of mutations in macromolecules and their complexes. With the development and accessibility of increasingly powerful computational systems, this type of study will enable a wide variety of opportunities for the creation of better-targeted peptide-based pharmacotherapy and prospects for precision medicine in the future. Keywords
Tertiary structure · Protein misfolding · Misfolding diseases · Pharmacotherapy · Precision medicine
A. Avramouli (✉) Department of Informatics, Ionian University, Corfu, Greece e-mail: [email protected]
7.1 Introduction
Proteins frequently act as mediators between the essential structure and function of cells and, as a result, are responsible for preserving the constancy of all cellular, molecular, and biological processes across all kingdoms of life. Proteins are complicated molecules that display a remarkable degree of diversity with regard to the sequence and spatial configuration of amino acids. This gives proteins the ability to carry out a wide variety of functions that are essential to the continuation of life. It is possible that proteins are the only biological macromolecules that have gone through billions of years of evolution and accumulated a range of functions, some of which remain unknown; this would make proteins the most complex biological molecules ever discovered. Researchers in the field of protein science have come to a consensus on a fundamental principle: the activity of proteins is dependent on their structural conformation. The manner in which a protein sequence folds into a three-dimensional (“3D”) conformation leads to a one-of-a-kind structure that enables the spatial arrangement of chemical groups inside a specific 3D space. Thanks to this precise placement of chemical groups, proteins are able to perform vital structural, regulatory, catalytic, and transport roles across the entire kingdom of life. Anfinsen demonstrated in the past that unfolded proteins can only return to their native
# The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Vlamos (ed.), GeNeDis 2022, Advances in Experimental Medicine and Biology 1424, https://doi.org/10.1007/978-3-031-31982-2_7
conformation if their primary structure, or amino acid sequence, is intact [1]. This demonstration paved the way for computational methods to predict tertiary structures from the primary sequence. The ultimate objective of protein structure prediction is to deduce a structure from its primary sequence with an accuracy comparable to that of X-ray crystallography and nuclear magnetic resonance experiments. To attain the most thermodynamically stable conformation, the protein must fold so as to maximize interactions within and between residues while satisfying all the spatial restrictions between atoms in the peptide bonds. Although there is a general grasp of the intermolecular and intramolecular interactions that govern the protein fold, it is still difficult to determine protein structures from fundamental physicochemical principles. Why is it necessary to predict protein structures? The reason rests in the fact that the structural properties of proteins determine their biological functions, and therefore computational prediction methods are the only viable option in all situations where experimental procedures fail. Figure 7.1 depicts the growth of released structures per year in the Protein Data Bank (PDB). Numerous proteins are either too large for nuclear magnetic resonance (NMR) or lack the ability to
generate diffraction-quality crystals for X-ray crystallography, making computational structure prediction the only viable option. Prediction of a molecule’s 3D structure from its primary structure is a hotly debated topic in structural bioinformatics. Despite the recent development of a huge number of algorithms and computational approaches for protein structure prediction, an all-encompassing solution for the correct prediction of protein folding remains elusive. Numerous structural bioinformatics experts have developed a variety of strategies and algorithms to address this issue, but each solution has both benefits and drawbacks. A global competition has been organized to evaluate the efficacy of various structure prediction tools/software using blind tests on experimentally confirmed protein structures. This competition, established in 1995 as the Critical Assessment of Techniques for Protein Structure Prediction (CASP), serves as a global standard for this intensive computational task [9]. Decades of research have produced different approaches for reliably predicting the 3D structures of proteins, including template-based methods (homology modeling), fold recognition (also known as threading), and de novo (ab initio) methods. Homology modeling methods are also
Fig. 7.1 Growth of released structures per year in PDB. (Source: https://www.rcsb.org/stats/. In 2021 12,592 new structures were released)
referred to as comparative modeling; the prediction of the query structure is based on homologs of experimentally known structures published in the public domain of the PDB. The fold recognition approach is typically employed when a structure with similar folds exists but no close homolog is available for homology modeling. When no structure with a similar folding pattern is available and knowledge-based prediction is not possible, the ab initio method is utilized; as the name suggests, ab initio approaches perform the prediction from scratch, utilizing only the amino acid information. With the emergence of improved algorithms and the availability of experimentally generated 3D structures, numerous software/tools that combine the aforementioned classical prediction methods are regularly being created.
7.2
Homology Modeling Methods
The homology modeling method constructs an atomic-resolution model of the "target" protein from its amino acid sequence and the experimentally determined 3D structure of one or more related homologous proteins [4]. The method identifies comparable, previously solved protein structures and then aligns the target sequence to the template sequences so that the target most closely resembles the solved structures. In this way, the structure of an uncharacterised protein can be predicted by comparing it to various templates. The accuracy of the structural predictions made by homology modeling depends on the similarity between the amino acid sequences of the target protein (the protein being modeled) and the template protein (a protein of known structure). Beyond structure prediction itself, this approach also facilitates the discovery and design of structure-based drugs. The process includes the following steps: template selection; alignment of the protein sequence between the template and the target protein; correction of the alignment and construction of the model backbone; construction and optimization of the side chains; and model optimization, assessment, and verification.
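The first step of this pipeline, template selection, can be sketched in a few lines of Python. The sequences and PDB identifiers below are hypothetical, and the identity measure is a toy proxy; real pipelines search the PDB with dedicated tools such as BLAST rather than this ratio.

```python
from difflib import SequenceMatcher

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Toy sequence-identity proxy based on longest matching blocks."""
    return 100.0 * SequenceMatcher(None, seq_a, seq_b).ratio()

def select_template(target: str, candidates: dict) -> tuple:
    """Return (template id, identity) of the candidate most similar to the target."""
    best_id = max(candidates, key=lambda k: percent_identity(target, candidates[k]))
    return best_id, percent_identity(target, candidates[best_id])

if __name__ == "__main__":
    target = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
    candidates = {  # hypothetical template sequences keyed by made-up PDB ids
        "1ABC": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQA",
        "2XYZ": "GSHMLEDPVDAFFQQLGGGSSGGGSGG",
    }
    print(select_template(target, candidates))
```

In practice the chosen template then anchors the alignment-correction and backbone-building steps listed above.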
7.3
Protein Fold Recognition
In protein fold recognition, previously solved proteins that fold in the same way as the target protein serve as templates [3, 6]. The difference between protein fold recognition and homology modeling is subtle: homology modeling aligns the target sequence to the sequence of a closely related template, whereas fold recognition aligns the target sequence directly to template structures that share the same fold, even without detectable sequence similarity. Protein folding is influenced by a variety of factors, including hydrophobic interactions and hydrogen bonds between amino acids, van der Waals interactions, electrostatic forces, and many others. Although new protein folds are still found every year, roughly 1300 protein folds have been identified so far. The process of protein fold identification can be broken down into four major stages: (1) constructing a library of fold templates from a protein data bank that represents the different template structures; (2) scoring the similarity between the aligned amino acid sequence and each template fold, including an assessment of the template's compatibility; (3) identifying the alignment that maximizes the fit of the target sequence onto the template structure; and (4) analyzing whether the best match is statistically significant.
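Stages (2) and (3) reduce to scoring sequence-structure compatibility. The sketch below is deliberately minimal: the environment classes (B = buried, E = exposed) and the compatibility values are invented for illustration, not taken from any published threading potential.

```python
# Toy threading score: how well does a sequence fit a template's
# environment string (B = buried, E = exposed)?
COMPAT = {
    ("L", "B"): 1.0, ("L", "E"): -0.5,   # hydrophobic Leu prefers burial
    ("K", "B"): -0.8, ("K", "E"): 0.9,   # charged Lys prefers exposure
    ("G", "B"): 0.1, ("G", "E"): 0.2,    # Gly is nearly indifferent
}

def threading_score(sequence: str, environments: str) -> float:
    """Sum per-position compatibilities of a sequence vs. template environments."""
    assert len(sequence) == len(environments)
    return sum(COMPAT.get((aa, env), 0.0) for aa, env in zip(sequence, environments))

def best_template(sequence: str, templates: dict) -> str:
    """Pick the template fold whose environment string fits the sequence best."""
    return max(templates, key=lambda name: threading_score(sequence, templates[name]))
```

A statistical-significance test of the winning score (stage 4) would follow, e.g. by comparison against scores of shuffled sequences.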
7.4
Ab Initio Modeling
Accurately predicting a protein's structure from its amino acid sequence alone is quite difficult. Protein homology modeling and fold recognition determine the structure of proteins based on similarities in their sequences and/or the structural folds they share [8]. However, if there are
A. Avramouli
no homologs in the databases, or existing homologs cannot be located, an alternative technique known as ab initio modeling offers a possible answer. It is used to predict complex protein structures, including tertiary structure. The procedure consumes a significant amount of computing resources in order to predict the conformation of a complex molecule accurately, and it is of considerable assistance in medical research and the development of new drugs. In ab initio modeling, a conformational search is carried out based on a defined energy function; the search produces a variety of candidate conformations (decoy structures), from which the most appropriate conformation is selected. Ab initio modeling therefore depends on three fundamental ingredients: the design of the energy function, the conformational search engine, and the method of model selection. The major purpose of ab initio modeling is to predict tertiary protein structures based on physicochemical factors, after first deducing secondary structures from the primary structure represented by the linear amino acid sequence. When other methods of structure prediction, such as homology modeling or fold recognition, are unsuccessful, ab initio is frequently able to resolve the issue. On the other hand, this modeling has limitations in the investigation of the locations and orientations of amino acid side chains. Another key drawback of the methodology is the large amount of time required to arrive at a workable solution.
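The three ingredients named above (energy function, conformational search engine, model selection) can be illustrated with a toy Metropolis Monte Carlo search over backbone dihedral angles. The energy function below is invented purely for the example; real ab initio force fields are far more elaborate.

```python
import math
import random

def toy_energy(angles):
    """Invented energy: favors angles near -60 degrees (a helix-like bias)."""
    return sum((a + 60.0) ** 2 / 1000.0 for a in angles)

def metropolis_search(n_res=10, steps=2000, temp=1.0, seed=42):
    """Minimal conformational search: perturb one dihedral per step, accept by
    the Metropolis rule, and keep the best conformation seen (model selection)."""
    rng = random.Random(seed)
    angles = [rng.uniform(-180.0, 180.0) for _ in range(n_res)]
    energy = toy_energy(angles)
    best_angles, best_energy = list(angles), energy
    for _ in range(steps):
        trial = list(angles)
        trial[rng.randrange(n_res)] += rng.gauss(0.0, 20.0)
        e_trial = toy_energy(trial)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e_trial < energy or rng.random() < math.exp((energy - e_trial) / temp):
            angles, energy = trial, e_trial
            if energy < best_energy:
                best_angles, best_energy = list(angles), energy
    return best_angles, best_energy
```

The decoy-selection step here is simply "keep the lowest-energy snapshot"; real pipelines cluster decoys before selecting.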
7.5
Similarity Analysis and Clustering
According to the theory of molecular similarity, molecules are categorized based on their biological effects, physical features, and 3D structures. Comparing 3D molecular structures is essential for protein function prediction, computer-aided molecular design, rational drug design, and protein docking. Comparative modeling makes it possible to generate a 3D model of a protein even when that protein's structure is unknown. To investigate the evolutionary connections between protein shapes, the SCOP [10] and CATH [7] databases were established. They serve as a reference for comparing protein structures, and as training sets for machine learning algorithms that classify protein structures and make predictions based on those classifications. Since protein structures have remained relatively conserved throughout evolution, a protein family makes it much simpler to locate related proteins through the structural similarities they share. Three techniques define similarities between 3D structures [11]: superposition of protein structures, in which the alignment between equivalent residues is not given a priori; feature representation of the protein spatial profile in multidimensional vectors; and time series formed from the protein tertiary structure. The degree of structural resemblance can be determined via scaling, rotation, translation, and superposition; following the rigid-body superposition step, different scoring functions describe the positional deviations of equivalent atoms, and many aligners can identify structural similarities on this basis. Superposition-based comparison, which considers p-values and the root-mean-square deviation (RMSD), is effective but demanding in terms of both time and computation. Shape-based methods form a second category: proteins are modeled as 3D feature vectors, and comparing feature vectors is both easier and faster. The search may target global or local similarities. Calculating global characteristics requires translating Euclidean space into a metric space that defines pairwise distances between the 3D objects, while the properties of local surface keypoints are obtained by accumulating pairwise relations between surface points.
The final strategy examines relationships between time series. In this approach the protein chain is represented as a polygonal chain; by turning these chains into feature vectors, the 3D objects can be treated as time series. The 3D polygonal chain generated by the tertiary structure of a protein is then used by geometric similarity measures as a proxy for structural similarity.
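Both the feature-vector and time-series representations reduce structure comparison to distances between fixed-length vectors. A minimal sketch, assuming invented three-component descriptor vectors in place of real structural profiles:

```python
import math

def euclidean(u, v):
    """Distance between two descriptor vectors of equal length."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def nearest_structures(query, library, k=2):
    """Return the k library entries closest to the query in descriptor space."""
    ranked = sorted(library, key=lambda name: euclidean(query, library[name]))
    return ranked[:k]
```

Clustering for a protein family then amounts to grouping structures whose pairwise descriptor distances fall below a chosen threshold.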
7.6
Protein Structure Superimpositions and Deviation
Similar protein structures mostly have related folds and functions even when there is no sequence similarity. In this scenario, contrasting the structures of the protein molecules can reveal shared regions; structure-based multiple sequence alignment approaches were created to capture the influence of similar and dissimilar regions. Structure alignment and structure superposition are two separate but related approaches. Structure superposition compares the 3D features of two or more structures, whereas structural alignment determines whether two sequences of amino acids are equivalent on the basis of their 3D structures. In structure superposition, C-alpha sites are used as anchor points between structures A and B, and a transformation is sought that reduces the distance separating aligned residues as much as possible. The goal is to minimize the RMSD between A and B, where the RMSD measures the deviation between two sets of superposed atomic coordinates [5]. The superposition method applies rigid-body operations such as translation and rotation to one of the structures in order to reduce the RMSD. Lower RMSD values indicate less deviation between the template structure and the model, which means the model has a more native-like fold, and they help identify differences.
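The superposition described above (centering both coordinate sets, finding the optimal rotation, then computing the RMSD) is commonly solved with the Kabsch algorithm. A compact NumPy sketch, with the coordinates understood to be paired C-alpha positions:

```python
import numpy as np

def kabsch_rmsd(A: np.ndarray, B: np.ndarray) -> float:
    """RMSD of B superposed onto A by optimal rigid-body fit (Kabsch).
    A, B: (n, 3) arrays of paired (e.g. C-alpha) coordinates."""
    A = A - A.mean(axis=0)                    # remove translation
    B = B - B.mean(axis=0)
    H = B.T @ A                               # 3x3 covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # optimal rotation
    diff = (B @ R.T) - A
    return float(np.sqrt((diff ** 2).sum() / len(A)))
```

A structure rotated and translated arbitrarily therefore scores an RMSD of essentially zero against the original, which is the sanity check any superposition code should pass.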
7.7
Descriptors for Molecular Similarity Calculations
A great number of descriptors that can be utilized in similarity computations have been established. In most cases, the goal of these descriptors is to give a molecular description that is convertible to an abstract descriptor space through a representation that preserves the relevant information. Descriptors can be broken down into three categories: those derived from the molecular graph, those dependent on the shape (conformation) of the molecule, and those that additionally require calculation of the molecular wave function [2]. There are also descriptors that reflect changes to the molecular structure, such as ionization and pKa, as well as descriptors derived from surrogate experimental data, such as log P. Many "rational" drug designs are based on the idea that chemically related compounds have comparable properties, and analogous substructural elements are connected to similar biological activities. Extending the molecular graph to include molecular properties leads to molecular similarity, a term that appears frequently in the chemical literature, and pharmaceutical design tends to favor such similarity-based approaches. Bioisosterism is an extremely important concept in medicinal chemistry, since it allows similar substructures to be substituted while maintaining the same function. The need for computer-based methods of compound selection and evaluation has increased with technological advances in high-throughput screening and synthesis, and advances in computing power now make it possible to run similarity programs on extremely large molecular datasets.
These new applications seek to find lead compounds that are patentable and better suited, as well as to lower the rate of failed compounds in drug research and development. It is essential to have the ability to predict
appropriate and unsuitable candidate structures swiftly and precisely. However, techniques based on molecular similarity have limitations. Their key benefit, that they require no prior knowledge external to the problem at hand, becomes a limiting factor as more information becomes available; with such information, an alternative method such as ligand-protein docking may become apparent. Molecular similarity is therefore generally used where understanding of the system is lacking. In silico techniques are also considered a means to reduce in vivo animal testing, which faces poor public sentiment. Another factor is legislation, particularly in Europe, which mandates that household and personal care products cannot be tested on animals beyond the year 2009. Industries may use molecular similarity methods when selecting chemicals that possess the required levels of safety. New formulations of existing compounds can have dangerous synergistic effects that have not been evaluated; computer modeling of pharmacodynamic effects is in its infancy, and molecular similarity may be used to evaluate such risk.
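Similarity between two compounds is most often quantified over fingerprint bit vectors with the Tanimoto (Jaccard) coefficient. A minimal sketch; the bit sets below are invented and stand in for real structural-key fingerprints:

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto (Jaccard) coefficient of two sets of 'on' fingerprint bits."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def most_similar(query: set, library: dict) -> str:
    """Name of the library compound with the highest Tanimoto to the query."""
    return max(library, key=lambda name: tanimoto(query, library[name]))
```

Screening a large library then reduces to ranking compounds by this coefficient and keeping those above a similarity cutoff.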
7.8
Conclusion
This chapter provides an overview of the computational methods and software available for protein structure prediction. The fundamental building block of any protein structure is its amino acid sequence. However, knowing the sequence alone is not sufficient. When trying to predict the function and state of a protein from its structure, other factors, such as the composition of the amino acid sequence, the type of folding, the conformational space, free energy, pH, and active binding sites, are also extremely important. The gap between the amino acid sequence and the protein structure can be bridged by 3D structure prediction. Accurate prediction, though it is limited to protein homology, fold recognition, and ab initio modeling, helps many researchers develop an interest in proteomics, as evidenced by the increasing number of servers and groups participating in community-wide prediction quality assessment experiments. Compared to traditional experimental methods, computational methods have a number of advantages: they demonstrate robust, accurate, and consistent performance, and they are useful for large-scale protein fold recognition. In addition, they efficiently deal with the restrictions inherent in experimental techniques, namely that these processes are laborious and expensive. Even though further progress is essential in remote sequence detection and in the precise determination of individual protein structures, our growing capacity to establish the precise structure of proteins will lead to the discovery of novel, rationally designed pharmacological compounds for the treatment of various diseases. Thus, in spite of the increase in experimental data, computational methods for protein structure prediction are anticipated to play a significant role in structural proteomics.
Acknowledgments This work is funded by the project FOLDIT "Research Infrastructure for the study of protein misfolding in neurodegenerative diseases," Operational Programme Competitiveness, Entrepreneurship & Innovation – EPAnEK, NSRF 2014–2020. MIS 5047144.
References
1. Anfinsen CB, Haber E, Sela M, White FH (1961) The kinetics of formation of native ribonuclease during oxidation of the reduced polypeptide chain. Proc Natl Acad Sci U S A 47:1309–1314. https://doi.org/10.1073/pnas.47.9.1309
2. Bender A, Glen RC (2004) Molecular similarity: a key technique in molecular informatics. Org Biomol Chem 2(22):3204–3218. https://doi.org/10.1039/B409813G
3. Bowie JU, Lüthy R, Eisenberg D (1991) A method to identify protein sequences that fold into a known three-dimensional structure. Science 253:164–170. https://doi.org/10.1126/science.1853201
4. Chothia C, Lesk AM (1986) The relation between the divergence of sequence and structure in proteins. EMBO J 5:823–826. https://doi.org/10.1002/j.1460-2075.1986.tb04288.x
5. Gu J, Bourne PE (2009) Structural bioinformatics. Wiley-Blackwell. https://www.wiley.com/en-ie/Structural+Bioinformatics%2C+2nd+Edition-p-9780470181058
6. Jones DT, Taylor WR, Thornton JM (1992) A new approach to protein fold recognition. Nature 358:86–89. https://doi.org/10.1038/358086a0
7. Knudsen M, Wiuf C (2010) The CATH database. Hum Genomics 4(3):207. https://doi.org/10.1186/1479-7364-4-3-207
8. Lee D, Xiong D, Wierbowski S, Li L, Liang S, Yu H (2022) Deep learning methods for 3D structural proteome and interactome modeling. Curr Opin Struct Biol 73:102329. https://doi.org/10.1016/j.sbi.2022.102329
9. Moult J, Fidelis K, Kryshtafovych A et al (2018) Critical assessment of methods of protein structure prediction (CASP)-round XII. Proteins 86:7–15. https://doi.org/10.1002/prot.25415
10. Murzin AG, Brenner SE, Hubbard T, Chothia C (1995) SCOP: a structural classification of proteins database for the investigation of sequences and structures. J Mol Biol 247(4):536–540. https://doi.org/10.1006/jmbi.1995.0159
11. Polychronidou E, Kalamaras I, Agathangelidis A, Sutton LA, Yan XJ, Bikos V, Vardi A, Mochament K, Chiorazzi N, Belessi C, Rosenquist R, Ghia P, Stamatopoulos K, Vlamos P, Chailyan A, Overby N, Marcatili P, Hatzidimitriou A, Tzovaras D (2018) Automated shape-based clustering of 3D immunoglobulin protein structures in chronic lymphocytic leukemia. BMC Bioinformatics 19(Suppl 14):414. https://doi.org/10.1186/s12859-018-2381-1
8
Spiking Neural Networks and Mathematical Models Mirto M. Gasparinatou, Nikolaos Matzakos, and Panagiotis Vlamos
Abstract
Neural networks are applied in various scientific fields such as medicine, engineering, and pharmacology. Investigating the operation of a neural network involves estimating the relationships among single neurons as well as their contributions to the network. Hence, studying a single neuron is an essential step toward solving complex brain problems. Mathematical models that simulate neurons and the way they transmit information have proven to be an indispensable tool for neuroscientists. Constructing appropriate mathematical models to simulate information transmission in a biological neural network is a challenge for researchers: in the real world, neurons that are identical in terms of their electrophysiological characteristics but located in different brain regions do not contribute in the same way to information transmission within a neural network, due to their intrinsic characteristics. This review highlights four mathematical single-compartment models, Hodgkin-Huxley, Izhikevich, Leaky Integrate and Fire, and Morris-Lecar, and compares them in terms of their biological M. M. Gasparinatou (✉) · P. Vlamos Ionian University, Corfu, Greece e-mail: [email protected]; [email protected] N. Matzakos (✉) School of Pedagogical and Technological Education, Marousi, Greece
plausibility, computational complexity, and applications, according to modern literature. Keywords
Neural networks · Mathematical models · Single-compartment model · Hodgkin-Huxley · Izhikevich · Leaky integrate and fire · Morris-Lecar
8.1
Introduction
Spiking neural networks (SNN) appeared a few decades ago and were introduced as the third generation of neural networks [30]. Research in many biological systems has shown that spiking neurons are a dominant component of information processing in the brain [22]. They are important because they describe both the timing of spikes (trains of action potentials between the neurons of the brain) and the variation of the subthreshold membrane potential [13]. The spikes transmitted between neurons in the brain are sparse in time and space while carrying high information content, like those of biological neurons, and thus SNNs have greater biological plausibility. Furthermore, precise spike timing in neurons of the brain is crucial for motor control and behavior [40]. These characteristics of SNNs are useful for scientists because they can construct appropriate brain-based representations and may even develop
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Vlamos (ed.), GeNeDis 2022, Advances in Experimental Medicine and Biology 1424, https://doi.org/10.1007/978-3-031-31982-2_8
M. M. Gasparinatou et al.
methods by combining techniques of deep learning networks and spiking neural networks with low computational cost, which is still a scientific challenge [42]. Many methods of recording spatiotemporal brain data have been developed, such as electroencephalography (EEG), functional magnetic resonance imaging (fMRI), and magnetoencephalography (MEG); spiking neural networks are argued to be relevant for understanding data from these methods, because they are based on the same information processes [26]. Mathematical modeling is a key tool in computational biology because it allows complex phenomena to be represented, analyzed, and predicted [2]. The construction of a spiking neuron mathematical model is a pressing issue for researchers, because inappropriate models of spiking or connectivity can lead to results that do not correspond to the actual function of brain neural networks [22]. However, many scientists argue that in neuroscience there are sometimes models that were constructed on inappropriate assumptions but produced reasonable results [1, 11]. The first mathematical model simulating biological neural networks was the Hodgkin-Huxley model [21], and many researchers working in computational biology and neuroscience still build on it today. Since then, several mathematical models have been developed to simulate the function of biological neurons. In this article, we review some of the most important models that are either currently in use or serve as the basis for a variety of modifications. These models are Hodgkin-Huxley, Izhikevich, Leaky Integrate and Fire, and Morris-Lecar [22].
8.1.1
Hodgkin-Huxley Model
The first and most successful mathematical model of this kind of biological phenomenon was the Hodgkin-Huxley model [21]. In particular, Hodgkin and Huxley constructed a mathematical model that could describe the flow of electric current through the surface membrane of a squid giant nerve fiber. The model was based on four ordinary differential equations with parameters fitted from their
experiments. They discovered that the excitability of the squid giant axon depends on transmembrane fluxes of potassium and sodium ions. The effects of the ion channels on the permeability of the cell membrane result in either the generation of an action potential or nothing. Hodgkin and Huxley observed three ion channel types: one for potassium ions, one for sodium ions, and a leak channel. Thus, they simulated the functions of the neuron with a parallel electrical circuit in which the charge storage capacity of the cell membrane is represented by a capacitor, the resistors represent the ion channels, and the batteries represent the electrochemical potentials that arise from the transmembrane flows of K+ and Na+ ions. The equations for the electrical circuit are as follows:

Cm dV/dt + IC(t) = Iext(t)    (8.1)

IC(t) = INa + IK + IL    (8.2)

where IC(t) is the internal transmembrane current, Iext(t) is the external current, which can arise from intracellular electrodes, INa, IK, and IL are the individual channel currents, Cm is the membrane capacitance, and V is the membrane potential. Each ion channel has a number of gates, and depending on the activation or deactivation of the gates, each channel opens or closes. Each gating variable satisfies a first-order differential equation [12]:

dm/dt = am(V)(1 - m) - bm(V)m    (8.3)

dh/dt = ah(V)(1 - h) - bh(V)h    (8.4)

dn/dt = an(V)(1 - n) - bn(V)n    (8.5)

where m is the Na+ gate activation, h is the Na+ gate inactivation, and n is the K+ gate activation. Thus, according to Hodgkin-Huxley [21], for the potassium channel to be activated, all four elements of its activation gate must be activated; for sodium, all three; while the leakage
channel is always considered open. Hence, the following equations are obtained for the individual channel currents of the neuron [41]:

INa = gNa m^3 h (V - ENa)    (8.6)

IK = gK n^4 (V - EK)    (8.7)

IL = gL (V - EL)    (8.8)

To summarize, according to Eqs. (8.1)-(8.8), the Hodgkin-Huxley model consists of a system of four nonlinear differential equations, with the electrical characteristics of the neuron as parameters:

Cm dV/dt = Iext(t) - gNa m^3 h (V - ENa) - gK n^4 (V - EK) - gL (V - EL)    (8.9)

dm/dt = am(V)(1 - m) - bm(V)m    (8.10)

dh/dt = ah(V)(1 - h) - bh(V)h    (8.11)

dn/dt = an(V)(1 - n) - bn(V)n    (8.12)

where EL, ENa, and EK are the equilibrium potentials of the leak, sodium, and potassium channels, respectively. The equilibrium potential is calculated from the Nernst-type relation E = -(58/z) log([X]1/[X]2); we refer to [10] for more details. Because of its realistic parameters, this mathematical model is not solvable by analytic methods but only numerically. If we regard the model as a four-dimensional dynamical system with state vector in R^4, we can find a numerical solution using methods such as Euler's first-order method [24].

8.1.2
Izhikevich Model

Using the bifurcation theory of dynamical systems, Izhikevich in 2003 proposed a neural model consisting of a two-dimensional system of ordinary differential equations with an auxiliary condition:

dv/dt = 0.04v^2 + 5v + 140 - u + I(t)

du/dt = a(bv - u)

with the auxiliary after-spike reset: if v ≥ 30 mV, then v ← c and u ← u + d    (8.13)
where v is the membrane potential of the neuron, and the variable u, called the recovery variable, represents both the activation of K+ ion currents and the inactivation of Na+ ion currents [22]. In addition, u provides negative feedback to v, and I(t) is the input current of the neuron. When the membrane potential reaches its apex, i.e., +30 mV, the neuron emits a spike, and the membrane potential v and the recovery variable u are then reset according to (8.13). The neuron's resting potential varies between -70 mV and -60 mV and depends on the variable b [23]. We note that in this model, as in real neurons, the firing threshold of the action potential is not constant but depends on the values of the membrane potential before discharge.
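The Izhikevich equations and the reset rule of Eq. (8.13) can be integrated with a simple Euler scheme. The parameters a, b, c, d below are the commonly quoted regular-spiking values from Izhikevich's 2003 paper; the input current and step size are illustrative choices.

```python
def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, t_max=200.0):
    """Euler integration of the Izhikevich model.
    Returns the membrane trace (clipped at +30 mV) and the spike times."""
    v, u = c, b * c                  # start at the reset potential
    trace, spikes = [], []
    t = 0.0
    while t < t_max:
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                # spike: record it and apply Eq. (8.13)
            trace.append(30.0)
            spikes.append(t)
            v, u = c, u + d
        else:
            trace.append(v)
        t += dt
    return trace, spikes
```

With a constant suprathreshold input such as I = 10, the model fires repeatedly, and the recorded spike times directly illustrate the adaptation controlled by a and d.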
8.1.3
Leaky Integrate and Fire Model
The Integrate and Fire neuron model describes a neuron solely through its membrane potential: when the potential reaches a threshold, a spike is generated. The biophysical factors of the neuron affecting the propagation of current through the membrane are omitted [10]. A basic version of the Integrate and Fire model is the Leaky Integrate and Fire model, which accounts for ion diffusion through the membrane that does not reach equilibrium. More specifically, the membrane of the neuron is represented as a capacitor Cm in a parallel circuit with a resistor Rm, and the input current is denoted Ie. The model is described by the following differential equation:
τm dV/dt = -V(t) + Rm Ie
where V(t) is the membrane potential and τm = RmCm is the membrane time constant of the "leaky integrator" [16]. After a spike occurs, the membrane voltage is reset from the threshold Vth to the value Vreset, which is lower than the threshold (Vreset < Vth) [10].
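The LIF equation plus this reset rule is only a few lines of code. The parameter values below are illustrative, not taken from any particular study:

```python
def lif(I_e=2.0, R_m=10.0, C_m=1.0, V_th=15.0, V_reset=0.0, dt=0.1, t_max=100.0):
    """Euler integration of tau_m dV/dt = -V + R_m * I_e with threshold reset.
    Returns the list of spike times."""
    tau_m = R_m * C_m
    V, t, spikes = V_reset, 0.0, []
    while t < t_max:
        V += dt * (-V + R_m * I_e) / tau_m
        if V >= V_th:                # threshold crossed: spike and reset
            spikes.append(t)
            V = V_reset
        t += dt
    return spikes
```

Note the behaviour implied by the equation: if the steady-state value Rm·Ie stays below Vth, the neuron never fires, while a suprathreshold input produces perfectly regular spiking.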
8.1.4
Morris-Lecar
The Morris-Lecar model was created by Catherine Morris and Harold Lecar [34] with the aim of describing the excitability of the barnacle giant muscle fiber using simpler equations than the Hodgkin-Huxley model. It is a two-dimensional model for systems containing two noninactivating voltage-sensitive conductances, the Ca++ and K+ channels. This conductance-based model is equivalent to a simple electrical circuit, and its mathematical representation is achieved through nonlinear differential equations. The equations and the parameters of this model are based on previous experiments and on the authors' own experiments [34]. The equations that describe the oscillations of the membrane potential are similar to those of the Hodgkin-Huxley model:

C dV/dt = -gCa M (V - VCa) - gK N (V - VK) - gL (V - VL) + I

dM/dt = λM(V)[M∞(V) - M]

dN/dt = λN(V)[N∞(V) - N]

where V is the membrane potential, I is the current stimulus applied to the membrane, and the variables M and N correspond to the Hodgkin-Huxley m and n gating variables. For more details, we refer to [34]. Table 8.1 lists some applications of the above models in various fields such as medicine, computer science, and mathematics.
8.2
Discussion
The Hodgkin-Huxley model is the most biologically plausible conductance-based model, because it describes the function of an individual squid giant neuron based on experimental data, and numerous efforts have since been made to construct Hodgkin-Huxley-type models for mammalian neurons. Despite being the most biologically faithful of all mathematical neural models, it is often not preferred by scientists because it is the most computationally expensive [22]. For over 30 years, a plethora of studies have focused on modifying this model in order to achieve biophysically meaningful models with lower computational cost. Table 8.1 shows some recent applications of the above spiking neural models and their modifications. Reference [17] proposed a modified Hodgkin-Huxley model able to separate the dynamics of an individual neuron from the dynamics of the whole network; hence it can describe heterogeneous large-scale networks with biological parameters by incorporating various types of neurons and synapses. In reference [32], the authors present quantitative measurements of corticospinal excitability using a Hodgkin-Huxley neuron model and demonstrate that the proposed model predicts motor threshold values and matches biological data with a maximal error under 8%. An electronic neuron based on the Hodgkin-Huxley model was proposed in [39], and in [33], the authors improved this model, concluding that their silicon-based implementation accurately mimics a biological neuron. In line with the above studies, an emerging variation of the Hodgkin-Huxley model is the memristive Hodgkin-Huxley model [9, 14, 44], in which the sodium and potassium conductances are replaced by flow-controlled memristors. In Ref. [14], the comparison between the modified and the original model reveals that the former produces more action potentials in less time than the latter.
Moreover, the Hodgkin-Huxley model seems to be useful for the investigation of the
Table 8.1 Applications of spiking neural network models and their comparison

Hodgkin-Huxley (HH):
- Long and Fang [29]; application field: mathematics and numerical investigation; utility: construction of spiking neural networks
- Mobille et al. [33]; application field: electronic neurons; utility: educational purposes in physics, electronics, mathematical modeling, and neurophysiology
- Memarian Sorkhabi et al. [32]; application field: transcranial magnetic stimulation (TMS); utility: quantitative measurements of corticospinal excitability, determining the TMS intensity
- Fang et al. [14] (memristive HH model); application field: design and investigation of the memristive HH model; utility: replicating functions of biological neurons, constructing biophysical models
- Giannari and Astolfi [17] (modified HH); application field: design and investigation of a modified HH network model; utility: parameter optimization and control applications
Advantages: biological plausibility; biologically meaningful parameters; mimics the biological neuron; precision relative to biological experimental data. Disadvantages: large variable storage, number of flops, and CPU time.

Izhikevich (IZ):
- He and Zao [20]; application field: mathematics and numerical investigation; utility: investigation of the correlation between oxygen concentration and membrane potential dynamics
- Long and Fang [29]; application field: mathematics and numerical investigation; utility: construction of spiking neural networks
- Pu et al. [36] (modified IZ); application field: design of hardware platforms; utility: digital hardware design
- Sen-Bhattacharya [38]; application field: software platform testing; utility: testing software tools
Model precision to biological experimental data with maximal error 100). Energy minimisation was performed on the remaining compounds using Flare v10 (Cresset Inc., UK) software [4, 7, 19] and exported to Forge.
26  3D-QSAR-Based Virtual Screening of Flavonoids as Acetylcholinesterase Inhibitors
26.2.2 Conformational Hunt, Pharmacophore Generation, Compound Alignment and Field QSAR
After the compounds were imported into Forge v10, IC50 values were converted to the logarithmic scale, pIC50 (pIC50 = -log IC50). The compounds covered a wide range of biological activities, from 6.5 to 10.5 pIC50. The co-crystallised structure of acetylcholinesterase was retrieved from the RCSB PDB (PDB ID: 4PQE) and split into the target receptor and ligand using Forge; the co-crystallised ligand was used as the reference. The XED (eXtended Electron Distribution) force field was used to create the field points for every molecule. Four 3D molecular descriptors were calculated: shape, the hydrophobic field, and the positive and negative electrostatic fields. The field point pattern condenses the hydrophobic, geometric, and electrostatic properties of a substance. The conformational hunt with XED was performed using the 'very slow and accurate' method, with the maximum number of conformers set to 500, the RMSD (root mean square deviation) cutoff set to 0.5 Å, and the gradient cutoff for conformer minimisation set to 0.1 kcal/mol. Each conformer was checked manually, and the conformer that best matched the alignment of the reference molecule was retained for model building. The reference conformer was then used to align all ligands with the 'maximum common substructure' method. All alignments were checked manually, and the best-aligning molecules were chosen for model building. The initial set was divided into a training set (70%) and a test set (30%), and the field QSAR model was built with the maximum number of components fixed at 20, the maximum distance of sample points set to 1.0 Å, and the number of Y-scrambles set to 50.
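The preprocessing described above (log-transforming IC50 to pIC50 and splitting 70/30 into training and test sets) can be sketched in a few lines. The IC50 values below are illustrative, not the chapter's dataset, and IC50 is assumed to be in molar units so that pIC50 = -log10(IC50).

```python
import math
import random

def to_pic50(ic50_molar):
    """Convert an IC50 in molar units to pIC50 = -log10(IC50)."""
    return -math.log10(ic50_molar)

def train_test_split(items, train_frac=0.7, seed=42):
    """Shuffle and split a list into training and test subsets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = int(round(train_frac * len(items)))
    return items[:cut], items[cut:]

# Illustrative IC50 values (molar); e.g. 1e-9 M -> pIC50 = 9.0
ic50s = [3.2e-7, 1.0e-9, 5.0e-8, 2.5e-10, 7.9e-11,
         1.6e-8, 4.0e-9, 6.3e-10, 2.0e-7, 1.3e-9]
pic50s = [to_pic50(x) for x in ic50s]

train, test = train_test_split(pic50s, train_frac=0.7)
print(len(train), len(test))  # 7 3
```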
26.2.3 SAR Activity Atlas Model
The activity atlas model was visualised using a Bayesian approach. This method helps to understand the hydrophobic, electrostatic and shape characteristics that explain the structure-activity relationship of the molecules.
26.2.4 Virtual Screening
In order to identify further lead molecules, around 60,000 molecules were downloaded from Blaze on the basis of a pharmacophore model built on the top five reference molecules from the QSAR model. All 60,000 molecules were uploaded as the prediction set, and prediction-set compounds whose SAR field points did not match were discarded. The top ten ligands that fit the QSAR model with an 'excellent' fit were docked with the AChE protein. Protein preparation: the RCSB PDB database (https://www.rcsb.org/pdb) was used to obtain the three-dimensional crystallographic structure and coordinates of the target protein. Preparation of the protein, which included inserting missing atoms in incomplete residues, deleting alternate conformations, modelling the missing loop regions, protonating titratable residues, and removing heteroatoms and water molecules, was done using Flare v10 (Cresset, UK) software.
26.3 Results and Discussion

26.3.1 Field 3D-QSAR: Field Points and Statistical Analysis
Field points are the descriptors used for building the 3D-QSAR model. Based on field points, the alignment of ligands, the comparison to the reference, and the activity atlas model were carried out. Figure 26.1 shows the molecular description of the aligned training set compounds with their respective molecular field points, highlighting the regions around the aligned molecules with electrostatic and steric field points. Training set compounds were aligned based on the field points generated by the reference molecules. The field QSAR model was built after alignment, with the logarithmic biological activity (pIC50) defined as the dependent variable. The activity interactive graph analysis, which presents the predicted against real or experimental activity
S. Andole et al.
Fig. 26.1 Field points of 3D-QSAR: favourable electrostatic interactions (green), unfavourable electrostatic interactions (red), favourable steric contributions (blue) and unfavourable steric contributions (purple). Image generated using Flare™ from Cresset®
comparison plot with cross-validation data points, was used to demonstrate the robustness of the generated 3D-QSAR model. The derived 3D-QSAR model showed a high activity-descriptor relationship accuracy of 97%, as indicated by the regression coefficient (r2 = 0.97), and a good prediction accuracy of 69%, as indicated by the cross-validation regression coefficient (q2 = 0.69) (Fig. 26.2).
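The two statistics quoted above can be reproduced from predictions in a few lines: r2 is computed from the model's fitted values and q2 from cross-validated (left-out) predictions, both as 1 minus the ratio of the residual to the total sum of squares. The pIC50 values below are illustrative, not the chapter's data.

```python
def r2_score(y_obs, y_pred):
    """1 - RSS/TSS: fraction of variance in y_obs explained by y_pred.

    Gives r2 when y_pred are fitted values, q2 when they are
    cross-validated predictions (RSS is then the PRESS statistic).
    """
    mean = sum(y_obs) / len(y_obs)
    tss = sum((y - mean) ** 2 for y in y_obs)                # total sum of squares
    rss = sum((y - p) ** 2 for y, p in zip(y_obs, y_pred))   # residual sum of squares
    return 1.0 - rss / tss

# Illustrative pIC50 values and model outputs
actual = [6.5, 7.1, 7.8, 8.4, 9.0, 9.6, 10.2]
fitted = [6.6, 7.0, 7.9, 8.3, 9.1, 9.5, 10.1]    # fitted values -> r2
loo_pred = [6.9, 6.7, 8.2, 8.0, 9.4, 9.2, 9.8]   # cross-validated predictions -> q2

r2 = r2_score(actual, fitted)
q2 = r2_score(actual, loo_pred)
print(f"r2 = {r2:.2f}, q2 = {q2:.2f}")  # q2 <= r2, as in Fig. 26.2b
```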
26.3.2 Visualisation of Activity Atlas

The average of actives and the activity cliff summary for the AChE inhibitors were studied to reveal the key features of the compounds modulating inhibitory activity, for further lead optimisation and the design of novel analogues for drug discovery. The region revealing more positive and negative field is described by the activity cliff of electrostatics (Fig. 26.3a), and the region where hydrophobic interactions can be beneficial to biological activity is shown by the activity cliff of hydrophobics (Fig. 26.3b).
26.4 Virtual Screening and Molecular Docking
The pharmacophore model of the reference molecules was used to virtually screen existing databases for more similar compounds using the Blaze server (Cresset, UK). The pharmacophore model with the desired field points was uploaded onto the Blaze server to retrieve such compounds. Around 60,000 compounds were downloaded and uploaded into the prediction set. The compounds which did not fit in the
Fig. 26.2 (a) Activity interactive graph plot between predicted and actual experimental activity, showing the training set (green), test set (blue) and training cross-validation set (black). (b) 3D-QSAR model performance graph plot between r2 and q2. Image generated using Flare™ from Cresset®
Fig. 26.3 Average of cliff summary. (a) Electrostatics: positive (red), negative (cyan). (b) Hydrophobic interactions: favourable (pink), unfavourable (green). (c) Shape: favourable (pink), unfavourable (green). Image generated using Flare™ from Cresset®
QSAR model were discarded. Thirty-six compounds were docked against the crystal structure retrieved from the RCSB PDB (PDB ID: 4PQE), using the existing co-crystallised ligand to define the docking grid. The ten best compounds showing the highest LF-dG scores are listed in Table 26.1, and the molecular interactions of the ligands are given in Table 26.2.
Table 26.1 Docking results for best-fit prediction set ligands

| Compound name | LF-dG | LF-rank score | LF-VS score | Hydrogen bonds (Å) | Hydrophobic interactions (Å) |
| CHEMBL1927517 | -7.498 | -3.342 | -7.945 | ASP72 (2.28), HIS440 (2.08) | ILE444 (5.44), TRP84 (4.93) |
| CHEMBL522717 | -7.615 | -5.154 | -8.301 | PHE331 (2.9) | PHE330 (2.90), HIS440 (4.84), TRP279 (5.44) |
| CHEMBL1645124 | -7.911 | -8.123 | -9.545 | GLU199 (2.0), HIS440 (2.28) | TRP279 (5.5), TYR334 (4.6), TRP279 (4.20) |
| CHEMBL1583618 | -5.463 | -7.997 | -8.796 | HIS440 (2.4) | TRP84 (5.18), PHE330 (5.52) |
| CHEMBL495953 | -4.937 | -8.073 | -8.767 | PHE331 (2.7), PHE330 (2.3), TRP279 (3.0) | TRP279 (5.3), TYR334 (3.8), TRP84 (4.0) |
| CHEMBL1448630 | -8.25 | -6.998 | -8.615 | SER200 (2.931) | TRP84 (4.46), PHE330 (5.02), TYR334 (4.22) |
| CHEMBL1642832 | -8.731 | -7.2 | -9.498 | TYR121 (2.26) | TYR121 (5.79) |
| CHEMBL2295959 | -8.89 | -8.73 | -9.531 | SER286 (2.49), PHE288 (2.55), ILE287 (1.9) | TYR334 (4.0), PHE330 (4.9), LEU28 (4.19), TYR334 (5.00) |
| CHEMBL1718051 | -9.325 | -9.378 | -9.768 | HIS440 (2.17), GLY123 (2.67), HIS440 (2.8), HIS440 (2.39) | TRP84 (4.1), TYR121 (5.30) |

Table 26.2 3D and 2D images of the top 2 potential AChE inhibitors. Image generated using Flare™ from Cresset®
26.5 Conclusion
Using the QSAR model, virtual screening was performed, and the obtained dataset was loaded as the prediction set to fit the developed QSAR model. The compounds which did not fit the model were discarded. The top ten compounds fitting the QSAR model were subjected to molecular docking and MD simulation. CHEMBL1718051 was found to be the lead compound. This study offers an example of a computationally driven workflow for the prioritisation and discovery of probable AChE inhibitors.
References

1. Ahmad, S. S., M. Khalid, M. A. Kamal, and K. Younis. 2021. 'Study of Nutraceuticals and Phytochemicals for the Management of Alzheimer's Disease: A Review', Curr Neuropharmacol, 19: 1884–95.
2. Ahmed, Sagheer, Sidrah Tariq Khan, Muhammad Kazim Zargaham, Arif Ullah Khan, Saeed Khan, Abrar Hussain, Jalal Uddin, Ajmal Khan, and Ahmed Al-Harrasi. 2021. 'Potential therapeutic natural products against Alzheimer's disease with Reference of Acetylcholinesterase', Biomedicine & Pharmacotherapy, 139: 111609.
3. Barreca, Davide, Giuseppe Gattuso, Giuseppina Laganà, Ugo Leuzzi, and Ersilia Bellocco. 2016. 'C- and O-glycosyl flavonoids in Sanguinello and Tarocco blood orange (Citrus sinensis (L.) Osbeck) juice: Identification and influence on antioxidant properties and acetylcholinesterase activity', Food Chemistry, 196: 619–27.
4. Bauer, M. R., and M. D. Mackey. 2019. 'Electrostatic Complementarity as a Fast and Effective Tool to Optimize Binding and Selectivity of Protein-Ligand Complexes', J Med Chem, 62: 3036–50.
5. Berg, L., C. D. Andersson, E. Artursson, A. Hörnberg, A. K. Tunemalm, A. Linusson, and F. Ekström. 2011. 'Targeting acetylcholinesterase: identification of chemical leads by high throughput screening, structure determination and molecular modeling', PLoS One, 6: e26039.
6. Brogi, Simone, Panagiota Papazafiri, Vassilios Roussis, and Andrea Tafi. 2013. '3D-QSAR using pharmacophore-based alignment and virtual screening for discovery of novel MCF-7 cell line inhibitors', European Journal of Medicinal Chemistry, 67: 344–51.
7. Cheeseright, Tim, Mark Mackey, Sally Rose, and Andy Vinter. 2006. 'Molecular Field Extrema as Descriptors of Biological Activity: Definition and Validation', Journal of Chemical Information and Modeling, 46: 665–76.
8. da Silva, Horlando C., Francisco das Chagas L. Pinto, Anderson F. de Sousa, Otília Deusdenia L. Pessoa, Maria Teresa Salles Trevisan, and Gilvandete M. P. Santiago. 2021. 'Chemical constituents and acetylcholinesterase inhibitory activity from the stems of Bauhinia pentandra', Natural Product Research, 35: 5277–81.
9. El Mchichi, L., K. Tabti, R. Kasmi, R. El-Mernissi, A. El Aissouq, F. En-nahli, A. Belhassan, T. Lakhlifi, and M. Bouachrine. 2022. '3D-QSAR study, docking molecular and simulation dynamic on series of benzimidazole derivatives as anti-cancer agents', Journal of the Indian Chemical Society, 99: 100582.
10. Fang, C., and Z. Xiao. 2016. 'Receptor-based 3D-QSAR in Drug Design: Methods and Applications in Kinase Studies', Curr Top Med Chem, 16: 1463–77.
11. Garro Martinez, J. C., E. G. Vega-Hissi, M. F. Andrada, and M. R. Estrada. 2015. 'QSAR and 3D-QSAR studies applied to compounds with anticonvulsant activity', Expert Opin Drug Discov, 10: 37–51.
12. Guo, Haiqiong, Yuxuan Wang, Qingxiu He, Yuping Zhang, Yong Hu, Yuanqiang Wang, and Zhihua Lin. 2019. 'In silico rational design and virtual screening of antioxidant tripeptides based on 3D-QSAR modeling', Journal of Molecular Structure, 1193: 223–30.
13. Halder, N., and G. Lal. 2021. 'Cholinergic System and Its Therapeutic Importance in Inflammation and Autoimmunity', Front Immunol, 12: 660342.
14. Jiang, Y., H. Gao, and G. Turdu. 2017. 'Traditional Chinese medicinal herbs as potential AChE inhibitors for anti-Alzheimer's disease: A review', Bioorg Chem, 75: 50–61.
15. Katalinić, Maja, Gordana Rusak, Jelena Domaćinović Barović, Goran Šinko, Dubravko Jelić, Roberto Antolović, and Zrinka Kovarik. 2010. 'Structural aspects of flavonoids as inhibitors of human butyrylcholinesterase', European Journal of Medicinal Chemistry, 45: 186–92.
16. Pasangulapati, J. P., A. R. Ravula, D. R. Kanala, S. Boyina, K. Gangarapu, and H. K. Boyina. 2020. 'Ocimum Sanctum Linn: A Potential Adjunct Therapy for Hyperhomocysteinemia-Induced Vascular Dementia', Advances in Experimental Medicine and Biology, 1195: C1.
17. Boyina, H. K., S. L. Geethakhrishnan, S. Panuganti, K. Gangarapu, K. P. Devarakonda, V. Bakshi, and S. R. Guggilla. 2020. 'In Silico and In Vivo Studies on Quercetin as Potential Anti-Parkinson Agent', Advances in Experimental Medicine and Biology, 1195: 1–11.
18. Khan, Haroon, Marya, Surriya Amin, Mohammad Amjad Kamal, and Seema Patel. 2018b. 'Flavonoids as acetylcholinesterase inhibitors: Current therapeutic standing and future prospects', Biomedicine & Pharmacotherapy, 101: 860–70.
19. Kuhn, Maximilian, Stuart Firth-Clark, Paolo Tosco, Antonia S. J. S. Mey, Mark Mackey, and Julien Michel. 2020. 'Assessment of Binding Affinity via Alchemical Free-Energy Calculations', Journal of Chemical Information and Modeling, 60: 3120–30.
20. Kuppusamy, Asokkumar, Madeswaran Arumugam, and Sonia George. 2017. 'Combining in silico and in vitro approaches to evaluate the acetylcholinesterase inhibitory profile of some commercially available flavonoids in the management of Alzheimer's disease', International Journal of Biological Macromolecules, 95: 199–203.
21. Li, Mengyue, Xi Gao, Mingxian Lan, Xianbin Liao, Fawu Su, Liming Fan, Yuhan Zhao, Xiaojiang Hao, Guoxing Wu, and Xiao Ding. 2020. 'Inhibitory activities of flavonoids from Eupatorium adenophorum against acetylcholinesterase', Pesticide Biochemistry and Physiology, 170: 104701.
22. Li, Ren-Shi, Xiao-Bing Wang, Xiao-Jun Hu, and Ling-Yi Kong. 2013. 'Design, synthesis and evaluation of flavonoid derivatives as potential multifunctional acetylcholinesterase inhibitors against Alzheimer's disease', Bioorganic & Medicinal Chemistry Letters, 23: 2636–41.
23. Li, Tang, Wan Pang, Jie Wang, Zesheng Zhao, Xiaoli Zhang, and Liping Cheng. 2021. 'Docking-based 3D-QSAR, molecular dynamics simulation studies and virtual screening of novel ONC201 analogues targeting Mitochondrial ClpP', Journal of Molecular Structure, 1245: 131025.
24. Liu, H. R., X. Men, X. H. Gao, L. B. Liu, H. Q. Fan, X. H. Xia, and Q. A. Wang. 2018. 'Discovery of potent and selective acetylcholinesterase (AChE) inhibitors: acacetin 7-O-methyl ether Mannich base derivatives synthesised from easy access natural product naringin', Nat Prod Res, 32: 743–47.
25. Ruddarraju, R. R., G. Kiran, A. C. Murugulla, R. Maroju, D. K. Prasad, B. H. Kumar, V. Bakshi, and N. S. Reddy. 2019. 'Design, synthesis and biological evaluation of theophylline containing variant acetylene derivatives as α-amylase inhibitors', Bioorganic Chemistry, 92: 103120.
26. Luo, Wen, Ying Chen, Ting Wang, Chen Hong, Li-Ping Chang, Cong-Cong Chang, Ya-Cheng Yang, Song-Qiang Xie, and Chao-Jie Wang. 2016. 'Design, synthesis and evaluation of novel 7-aminoalkyl-substituted flavonoid derivatives with improved cholinesterase inhibitory activities', Bioorganic & Medicinal Chemistry, 24: 672–80.
27. Luo, Wen, Ya-Bin Su, Chen Hong, Run-Guo Tian, Lei-Peng Su, Yue-Qiao Wang, Yang Li, Jun-Jie Yue, and Chao-Jie Wang. 2013. 'Design, synthesis and evaluation of novel 4-dimethylamine flavonoid derivatives as potential multi-functional anti-Alzheimer agents', Bioorganic & Medicinal Chemistry, 21: 7275–82.
28. Ma, Ying, Hong-Lian Li, Xiu-Bo Chen, Wen-Yan Jin, Hui Zhou, Ying Ma, and Run-Ling Wang. 2018. '3D QSAR Pharmacophore Based Virtual Screening for Identification of Potential Inhibitors for CDC25B', Computational Biology and Chemistry, 73: 1–12.
29. Meziant, Leila, Mostapha Bachir-bey, Chawki Bensouici, Fairouz Saci, Malika Boutiche, and Hayette Louaileche. 2021. 'Assessment of inhibitory properties of flavonoid-rich fig (Ficus carica L.) peel extracts against tyrosinase, α-glucosidase, urease and cholinesterases enzymes, and relationship with antioxidant activity', European Journal of Integrative Medicine, 43: 101272.
30. Mohan, A., R. Kirubakaran, J. A. Parray, R. Sivakumar, E. Murugesh, and M. Govarthanan. 2020. 'Ligand-based pharmacophore filtering, atom based 3D-QSAR, virtual screening and ADME studies for the discovery of potential ck2 inhibitors', Journal of Molecular Structure, 1205: 127670.
31. Muthukumaran, Panchaksaram, and Muniyan Rajiniraja. 2018. 'MIA-QSAR based model for bioactivity prediction of flavonoid derivatives as acetylcholinesterase inhibitors', Journal of Theoretical Biology, 459: 103–10.
32. Ravula, A. R., S. B. Teegala, S. Kalakotla, J. P. Pasangulapati, V. Perumal, and H. K. Boyina. 2021. 'Fisetin, potential flavonoid with multifarious targets for treating neurological disorders: An updated review', European Journal of Pharmacology, 910: 174492.
33. Raafat, Asmaa, Samar Mowafy, Sahar M. Abouseri, Marwa A. Fouad, and Nahla A. Farag. 2022. 'Lead generation of cysteine based mesenchymal epithelial transition (c-Met) kinase inhibitors: Using structure-based scaffold hopping, 3D-QSAR pharmacophore modeling, virtual screening, molecular docking, and molecular dynamics simulation', Computers in Biology and Medicine, 146: 105526.
34. Sheng, Rong, Xiao Lin, Jing Zhang, Kim Sun Chol, Wenhai Huang, Bo Yang, Qiaojun He, and Yongzhou Hu. 2009. 'Design, synthesis and evaluation of flavonoid derivatives as potent AChE inhibitors', Bioorganic & Medicinal Chemistry, 17: 6692–98.
35. Verma, J., V. M. Khedkar, and E. C. Coutinho. 2010. '3D-QSAR in drug design: a review', Curr Top Med Chem, 10: 95–115.
36. Wang, Yanyu, Yanping Zhao, Chaochun Wei, Nana Tian, and Hong Yan. 2021. '4D-QSAR Molecular Modeling and Analysis of Flavonoid Derivatives as Acetylcholinesterase Inhibitors', Biological and Pharmaceutical Bulletin, 44: 999–1006.
27  A Comparison of the Various Methods for Selecting Features for Single-Cell RNA Sequencing Data in Alzheimer's Disease

Petros Paplomatas, Panagiotis Vlamos, and Aristidis G. Vrahatis
Abstract
The high-throughput sequencing method known as RNA-Seq records the whole transcriptome of individual cells. Single-cell RNA sequencing (scRNA-Seq) is widely utilized in biomedical research and has resulted in the generation of huge quantities and varieties of data. The noise and artifacts present in the raw data require extensive cleaning before the data can be used. In machine learning and pattern recognition applications, feature selection methods reduce computation time while improving predictions and offering a better understanding of the data. The process of discovering biomarkers is analogous to feature selection in machine learning and is especially helpful for medical applications. A feature selection algorithm attempts to cut down the total number of features by eliminating those that are unnecessary or redundant while retaining the most helpful ones. We apply feature selection (FS) algorithms designed for scRNA-Seq to Alzheimer's disease (AD), the most prevalent neurodegenerative disease in the western world, which causes cognitive and behavioral impairment. AD is clinically and pathologically varied, and genetic studies imply a diversity of biological mechanisms and pathways. Over 20 new Alzheimer's disease susceptibility loci have been discovered through linkage, genome-wide association, and next-generation sequencing studies (Tosto G, Reitz C, Mol Cell Probes 30:397–403, 2016). In this study, we focus on the performance of three different marker gene selection methods and compare them using the support vector machine (SVM), the k-nearest neighbors algorithm (k-NN), and linear discriminant analysis (LDA), which are supervised classification algorithms.

Keywords: Feature selection · Big data · Ensemble methods

P. Paplomatas · P. Vlamos · A. G. Vrahatis (✉) Bioinformatics and Human Electrophysiology Lab (BiHELab), Department of Informatics, Ionian University, Corfu, Greece
27.1 Introduction
Alzheimer's disease is the leading cause of dementia. Despite the significant hereditary component of Alzheimer's disease, our understanding of the disease-associated genes, their expression, and disease-related pathways is limited. Determining the relationship between gene dysfunctions and pathogenic systems, such as neuronal transporters, APP processing, calcium homeostasis, and mitochondrial dysfunction, is thus essential [4]. Changes in gene expression and gene regulation may have a significant influence on neurodegeneration; according to recent research, mRNA-transcription factor interactions, non-coding RNAs, alternative splicing, and copy number variations may also contribute to the onset of illness. These data imply that a better knowledge of the role of the transcriptome in Alzheimer's disease might enhance both the diagnosis and treatment of the illness, and single-cell transcriptomics analysis could be crucial to the advancement of AD knowledge [3]. Single-cell transcriptomics is rapidly advancing, and the enormous datasets that are becoming available make it difficult for researchers to analyze high-dimensional data to better comprehend a phenomenon of interest. Feature selection offers an efficient method for solving this problem by removing features that are less informative or redundant. This can cut down computation time, enhance learning accuracy, and make the learning model or the data easier to interpret [1]. These advances in single-cell technologies provide previously unreachable potential for studying complex biological systems at higher resolutions, resulting in increasing interest in feature selection for analyzing such data [13]. Feature selection is an essential component of AI-assisted diagnosis. Numerous approaches for selecting features have been developed thus far; however, the stability of a feature selection approach is very important, and an effective FS method is crucial for detecting the essential features [7]. Feature selection methods have already been utilized for Alzheimer's disease diagnosis [2, 8, 14]. Heterogeneity is of utmost importance when eliminating features with FS approaches in scRNA-Seq, and it is for this reason that we examine methods designed specifically for single-cell RNA sequencing data.

# The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Vlamos (ed.), GeNeDis 2022, Advances in Experimental Medicine and Biology 1424, https://doi.org/10.1007/978-3-031-31982-2_27
Gene expression can vary from cell to cell, even within populations of essentially identical cells, which is an important finding. To understand how a biological system develops, how it is homeostatically maintained, and how it responds to external perturbations, we must first have an in-depth understanding of the cellular heterogeneity inside that system [5]. Commonly, the heterogeneity of single-cell RNA sequencing (scRNA-Seq) datasets is characterized by finding cell clusters in gene expression space, each cluster representing a distinct cell type or cell state. The adoption of feature selection approaches can improve our understanding of the cellular heterogeneity existing in a biological system. This is significant because heterogeneity is a fundamental characteristic of biological systems that influences the development, differentiation, and immune-mediated responses of cells, tissues, organs, and organisms, as well as the onset and progression of disease [6]. Numerous researchers, for instance, have utilized unsupervised clustering to identify previously undiscovered cell groups in a range of datasets. A good feature selection strategy is one that selects cell-type-specific (differentially expressed, DE) genes and rejects all others. In a further crucial stage, the algorithm must select features that optimize the differentiation between the many physiologically diverse cell groups [10]. In scRNA-Seq data analysis, cell clustering is one of the most significant and regularly performed operations, and a crucial stage in it is the selection of the group of genes whose expression patterns will be used for the subsequent clustering. A decent set of features should include those that distinguish between distinct cell types, and the quality of such a set can have a substantial effect on clustering accuracy [11]. Therefore, feature selection algorithms (gene variable techniques) have been developed specifically for scRNA-Seq, which examine the heterogeneity of scRNA-Seq datasets, usually characterized by the identification of cell clusters in gene expression space [10].
The literature on applying gene variable algorithms to Alzheimer's disease is somewhat sparse; thus, in this work, we examine three recent techniques and assess their performance in selecting gene markers for Alzheimer's disease.
27.2 Performances of Feature Selection Methods on scRNA-Seq Datasets

27.2.1 Dataset
The study is conducted utilizing an Alzheimer's disease-related dataset (GSE103334) available from the NCBI (National Center for Biotechnology Information) database. This dataset is used to explore and compare the influence of three marker selection procedures for single-cell transcriptome profiling. Because research on the application of variable gene techniques to Alzheimer's disease is scarce, we use the more recent methods SCMarker, SelfE, and DUBStepR. SCMarker uses information-theoretic concepts to select the best gene subsets for cell-type identification without using any known transcriptome profiles or cell ontologies. The core concept of this strategy is to identify genes that are individually discriminative across underlying cell types, based on a mixture distribution model, and are co- or mutually exclusively expressed with other genes owing to cell-type-specific functional constraints. Although approaches fitting a mixture distribution model to a collection of continuous data points have been widely used in gene expression clustering analyses, it was unclear whether this approach would be advantageous in this setting. Applying SCMarker to diverse datasets in several tissue types, followed by a variety of clustering techniques, yields consistent gains in cell-type classification and biologically meaningful marker selection [12]. SelfE is an innovative l2,0-minimization method that generates an optimal subset of feature vectors while preserving the observable subspace structures in the data. SelfE assumes that gene expression vectors are linearly dependent, i.e., each gene x_i can be written as a linear combination of the others:

x_i = X_î c_i for all i,  (1)

where X_î denotes the expression matrix X without the i-th column. Genes are chosen by finding a subset that can express all the rest as linear combinations of the chosen subset. To locate such a subset, the weight vectors c_i must share a common structure, which becomes clearer once Eq. (1) is expressed in terms of all genes:

X = XC, such that C_ii = 0,  (2)

where X stacks all genes as columns and C stacks the c_i as columns with a zero diagonal; this constraint prevents a gene from expressing itself. To pick a subset of all genes, C must have a row-sparse structure: the genes selected to express the others (columns of X on the right-hand side) correspond to the nonzero rows of C, and since only a small subset of genes is wanted, most rows of C must be zero. This is written as:

arg min_C ||C||_{2,0} such that X = XC.  (3) [9]

DUBStepR (Determining the Underlying Basis using Stepwise Regression) is a feature selection method that combines gene-gene correlations with the density index (DI), a novel measure of feature-space inhomogeneity. Despite picking a very small number of genes, DUBStepR outperforms previous single-cell feature selection approaches in a variety of clustering benchmarks by a significant margin. DUBStepR is scalable to more than one million cells and is easily applicable to various data formats, such as single-cell ATAC-seq, making it well suited to accurately clustering single-cell data [10].
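The row-sparse self-expression idea behind SelfE can be illustrated with a simple greedy stand-in: repeatedly add the gene whose inclusion best reduces the least-squares error of reconstructing all genes from the selected subset, X ≈ X_S C_S. This is not the authors' l2,0 solver (and it does not enforce the zero-diagonal constraint), only a sketch of the objective it optimizes; the data are synthetic.

```python
import numpy as np

def greedy_self_expression(X, k):
    """Greedily pick k columns (genes) of X that best reconstruct all columns.

    A stand-in for SelfE's row-sparse objective min ||C||_{2,0} s.t. X = XC:
    here we minimise ||X - X_S C_S||_F^2 over subsets S of size k.
    """
    n_genes = X.shape[1]
    selected = []
    for _ in range(k):
        best_gene, best_err = None, np.inf
        for j in range(n_genes):
            if j in selected:
                continue
            basis = X[:, selected + [j]]  # candidate gene subset
            # Least-squares coefficients expressing every gene from the subset
            C, *_ = np.linalg.lstsq(basis, X, rcond=None)
            err = np.linalg.norm(X - basis @ C) ** 2
            if err < best_err:
                best_gene, best_err = j, err
        selected.append(best_gene)
    return selected

rng = np.random.default_rng(0)
# 100 cells x 12 genes, where three latent factors generate all genes
factors = rng.normal(size=(100, 3))
mixing = rng.normal(size=(3, 12))
X = factors @ mixing + 0.01 * rng.normal(size=(100, 12))

genes = greedy_self_expression(X, k=3)
print("selected genes:", genes)
```

Because the synthetic data have rank-3 structure, three well-chosen genes suffice to reconstruct all twelve almost exactly, which is precisely the redundancy that row-sparsity exploits.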
27.3 Performance on SVM, k-NN, and LDA Classifiers
The efficacy of the aforementioned feature selection techniques in preserving the most informative genes is evaluated with respect to the optimal classification model. The objective is to determine how effectively each feature selection technique supports prediction of the class attribute, comparing them using measures such as precision, sensitivity, specificity, and the F1 score. The analysis was conducted in RStudio using the SelfE, SCMarker, and DUBStepR packages. The LogNormalize function of the Seurat package is used to log-transform the data before use. For every method except SelfE, the default arguments are used; SelfE additionally allows the user to specify the number of genes to collect. Once the abovementioned packages are executed, the results are as follows:
The SCMarker method yielded 324 genes, the DUBStepR method 44 genes, and the SelfE method 200 genes. The support vector machine (SVM), the k-nearest neighbors algorithm (k-NN), and linear discriminant analysis (LDA) are utilized to compare the three strategies with the original data. In data mining and statistics, SVM is one of the most widely used supervised learning algorithms for classification and regression problems; in machine learning it is used primarily for classification. The k-NN algorithm is a well-known classification technique owing to its ease of implementation and good classification performance. LDA is a linear model often used for classification and dimensionality reduction. In each case study, we first train each classification system on the collection of informative genes selected by each gene selection technique. The results show that the DUBStepR model gives the best results, followed by the SCMarker method; the initial, unfiltered data almost always yield the worst results. The results of the SVM, k-NN, and LDA classifiers are recorded in Tables 27.1, 27.2, and 27.3. The area under the curve (AUC) is employed as a performance metric across all groups (Fig. 27.1). Tenfold cross-validation, repeated ten times, is utilized for all models. The cross-validation (CV) results are depicted in Figs. 27.2 and 27.3 via box plots.
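The metrics reported in Tables 27.1, 27.2, and 27.3 (accuracy, Cohen's kappa, sensitivity, specificity, and F1) can all be derived from a binary confusion matrix. A minimal sketch, using an illustrative confusion matrix rather than the chapter's actual predictions:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, Cohen's kappa, sensitivity, specificity and F1
    from binary confusion-matrix counts."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # true-positive rate (recall)
    specificity = tn / (tn + fp)   # true-negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_yes = ((tp + fp) / total) * ((tp + fn) / total)
    p_no = ((tn + fn) / total) * ((tn + fp) / total)
    p_chance = p_yes + p_no
    kappa = (accuracy - p_chance) / (1 - p_chance)
    return dict(accuracy=accuracy, kappa=kappa, sensitivity=sensitivity,
                specificity=specificity, f1=f1)

# Illustrative counts: 41 TP, 8 FP, 46 TN, 5 FN on 100 held-out cells
m = classification_metrics(tp=41, fp=8, tn=46, fn=5)
print({k: round(v, 2) for k, v in m.items()})
```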
27.4 Discussion
In conclusion, it is crucial to highlight that the approach isolating the smallest number of genes (44 genes; the DUBStepR method) yielded the best results, whereas the initial data, despite containing the largest number of genes, ranked near the bottom. This demonstrates that the information carried by the genes is more important than their quantity. The AUC analysis also
Table 27.1 SVM algorithm metric values (accuracy, kappa, sensitivity, specificity, F1 score) comparing the DUBStepR, SelfE, and SCMarker models and the initial data; the DUBStepR algorithm yields the best results

| Metrics | DUBStepR model | SelfE model | SCMarker model | Initial data |
| Accuracy | 0.87 | 0.76 | 0.80 | 0.55 |
| Kappa | 0.74 | 0.53 | 0.61 | 0.11 |
| Sensitivity | 0.82 | 0.89 | 0.82 | 1.00 |
| Specificity | 0.92 | 0.63 | 0.79 | 0.11 |
| F1 | 0.86 | 0.79 | 0.81 | 0.69 |
Table 27.2 k-NN algorithm metric values (accuracy, kappa, sensitivity, specificity, F1 score) comparing the DUBStepR, SelfE, and SCMarker models and the initial data; the DUBStepR algorithm yields the best results, and the SelfE and SCMarker models perform similarly

| Metrics | DUBStepR model | SelfE model | SCMarker model | Initial data |
| Accuracy | 0.68 | 0.53 | 0.55 | 0.64 |
| Kappa | 0.37 | 0.05 | 0.11 | 0.29 |
| Sensitivity | 0.74 | 0.08 | 0.29 | 0.66 |
| Specificity | 0.63 | 0.97 | 0.82 | 0.63 |
| F1 | 0.70 | 0.14 | 0.39 | 0.65 |
Table 27.3 LDA classifier values for accuracy, kappa, sensitivity, specificity, and F1 score, comparing the DUBStepR, SelfE, and SCMarker models and the initial data; as in Table 27.1, the DUBStepR algorithm yields the best results

Model         Accuracy  Kappa  Sensitivity  Specificity  F1
DUBStepR      0.88      0.76   0.92         0.84         0.89
SelfE         0.54      0.08   0.55         0.53         0.55
SCMarker      0.61      0.21   0.50         0.71         0.56
Initial data  0.60      0.09   0.59         0.53         0.58
shows the best performance for DUBStepR, according to the three plots illustrated in Fig. 27.1. Tables 27.1, 27.2, and 27.3 demonstrate that, with the exception of the DUBStepR method, the relative performance of the other methods varies from classifier to classifier. This shows that the selection of the model must be guided by the available data. The SVM classifier achieves the best results with all methods except the initial data, which suggests
27  A Comparison of the Various Methods for Selecting Features for Single-Cell. . .
Fig. 27.1 Representation of the receiver operating characteristics (ROC) using the area under the curve (AUC), measuring the ability of a classifier to differentiate between classes. (a) depicts the true-positive rate versus the false-positive rate; (b) depicts the typical precision-recall metric; and (c) depicts the calibration curves. As indicated in the legend, the three methods and the initial data are depicted in different colors
Fig. 27.2 Box plot comparing the tenfold accuracy measurements between the initial data and the features selected by the scRNA-seq techniques, evaluated with different classifiers (k-NN, LDA, SVM). The box plots show the median (midline) and interquartile range (25th and 75th percentiles)
Fig. 27.3 Box plot comparing the tenfold kappa measurements between the initial data and the features selected by the scRNA-seq techniques, evaluated with different classifiers (k-NN, LDA, SVM). The box plots show the median (midline) and interquartile range (25th and 75th percentiles)
that SVM may be preferable when only a small number of features is available. In contrast, k-NN appears to work considerably better with the initial data and less well with the scRNA-seq feature selection methods, although the DUBStepR algorithm yields the best results in this case as well. LDA seems to behave in a way that is not directly related to the number of features. It is also worth noting that, in the box plots, k-NN and LDA produce very similar distributions of accuracy when the initial data (brown dots) is plotted, whereas SVM produces a box plot in which most of the data points are distributed remarkably densely. Furthermore, when SCMarker and DUBStepR are plotted for any of the three classifiers (k-NN, LDA, SVM), there is no significant difference in the box plots, only a minor difference in accuracy. Finally, the inverse of the initial-data pattern appears in the SelfE box plot (purple), where the k-NN box plot is remarkably dense, in contrast to the LDA and SVM box plots.
Acknowledgments This research is funded by the European Union and Greece (Partnership Agreement for the Development Framework 2014–2020) under the Regional Operational Program Ionian Islands 2014–2020, project title: “NEUROSYSTEM: Decision Support System for the analysis of multilevel data of non-genetic neurodegenerative diseases” (project number MIS 5016116).
P. Paplomatas et al.

References

1. Abdelaal, T., Michielsen, L., Cats, D., Hoogduin, D., Mei, H., Reinders, M.J.T., Mahfouz, A., 2019. A comparison of automatic cell identification methods for single-cell RNA sequencing data. Genome Biol 20, 194.
2. Alashwal, H., Abdalla, A., Halaby, M.E., Moustafa, A.A., 2020. Feature Selection for the Classification of Alzheimer’s Disease Data, in: Proceedings of the 3rd International Conference on Software Engineering and Information Management, ICSIM ‘20. Association for Computing Machinery, New York, NY, USA, pp. 41–45.
3. Bagyinszky, E., Giau, V.V., An, S.A., 2020. Transcriptomics in Alzheimer’s Disease: Aspects and Challenges. Int J Mol Sci 21, 3517.
4. Cascella, R., Cecchi, C., 2021. Calcium Dyshomeostasis in Alzheimer’s Disease Pathogenesis. Int J Mol Sci 22, 4914.
5. Choi, Y.H., Kim, J.K., 2019. Dissecting Cellular Heterogeneity Using Single-Cell RNA Sequencing. Mol Cells 42, 189–199.
6. Gough, A., Stern, A.M., Maier, J., Lezon, T., Shun, T.-Y., Chennubhotla, C., Schurdak, M.E., Haney, S.A., Taylor, D.L., 2017. Biologically Relevant Heterogeneity: Metrics and Practical Insights. SLAS Discovery 22, 213–237.
7. Gu, F., Ma, S., Wang, X., Zhao, J., Yu, Y., Song, X., 2022. Evaluation of Feature Selection for Alzheimer’s Disease Diagnosis. Frontiers in Aging Neuroscience 14.
8. K.p., M.N., P., T., 2022. Feature selection using efficient fusion of Fisher Score and greedy searching for Alzheimer’s classification. Journal of King Saud University - Computer and Information Sciences 34, 4993–5006.
9. Rai, P., Sengupta, D., Majumdar, A., 2020. SelfE: Gene Selection via Self-Expression for Single-Cell Data. IEEE/ACM Transactions on Computational Biology and Bioinformatics.
10. Ranjan, B., Sun, W., Park, J., Mishra, K., Schmidt, F., Xie, R., Alipour, F., Singhal, V., Joanito, I., Honardoost, M.A., Yong, J.M.Y., Koh, E.T., Leong, K.P., Rayan, N.A., Lim, M.G.L., Prabhakar, S., 2021. DUBStepR is a scalable correlation-based feature selection method for accurately clustering single-cell data. Nat Commun 12, 5849.
11. Su, K., Yu, T., Wu, H., 2021. Accurate feature selection improves single-cell RNA-seq cell clustering. Briefings in Bioinformatics 22, bbab034.
12. Wang, F., Liang, S., Kumar, T., Navin, N., Chen, K., 2019. SCMarker: Ab initio marker selection for single cell transcriptome profiling. PLOS Computational Biology 15, e1007445.
13. Yang, P., Huang, H., Liu, C., 2021. Feature selection revisited in the single-cell era. Genome Biology 22, 321.
14. Zhu, Y., Zhu, X., Kim, M., Shen, D., Wu, G., 2016. Early diagnosis of Alzheimer’s disease by joint feature selection and classification on temporally structured support vector machine. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 264–272. Springer, Cham.
An Optimized Cloud Computing Method for Extracting Molecular Descriptors
28
Christos Didachos, Dionisis Panagiotis Kintos, Manolis Fousteris, Phivos Mylonas, and Andreas Kanavos
Abstract
Extracting molecular descriptors from chemical compounds is an essential preprocessing phase for developing accurate classification models. Supervised machine learning algorithms offer the capability to detect “hidden” patterns that may exist in a large dataset of compounds represented by their molecular descriptors. Assuming that molecules with similar structure tend to share similar physicochemical properties, large chemical libraries can be screened by applying similarity searching techniques in order to detect potential bioactive compounds against a molecular target. However, the process of generating these compound features is time-consuming. Our proposed methodology not only employs cloud computing to accelerate the process of extracting molecular descriptors but also introduces an optimized approach to
C. Didachos Computer Engineering and Informatics Department, University of Patras, Patras, Greece e-mail: [email protected] D. P. Kintos · M. Fousteris Department of Pharmacy, University of Patras, Patras, Greece e-mail: [email protected]; [email protected] P. Mylonas · A. Kanavos (✉) Department of Informatics, Ionian University, Corfu, Greece e-mail: [email protected]; [email protected]
utilize the computational resources in the most efficient way.

Keywords
Molecular descriptors · Computational drug design · Computing performance · Dask · Ligand-based virtual screening · Chemical big data
28.1 Introduction
Machine learning algorithms can play a crucial role in solving problems related to classification [3] and object detection [23]. The capability of these algorithms to detect essential motifs in data and classify them in a meaningful way makes them applicable to different scientific fields, and artificial intelligence and machine learning are closely tied to the demanding process of discovering and developing new drugs [6, 10, 13, 20]. Knowing the structure of a molecular target and of a chemical compound is a prerequisite for studying their potential interactions [7]. To this end, mathematical approaches and computational methods are used to determine quantitative relationships between the structural features of chemical compounds and their biological activities. This approach, known as Quantitative Structure-Activity Relationship (QSAR), can be applied for numerous
# The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Vlamos (ed.), GeNeDis 2022, Advances in Experimental Medicine and Biology 1424, https://doi.org/10.1007/978-3-031-31982-2_28
purposes, such as the prediction of the bioactivity of new compounds [9, 14]. However, it is time-consuming and usually requires high availability of computational resources. Building a QSAR model is frequently based on the use of molecular descriptors. As defined in [19], a molecular descriptor is “the final result of a mathematical procedure which transforms chemical information encoded within a symbolic representation of a molecule into a useful number or the result of some standardized experiment”. The selection of suitable physicochemical properties, as well as of theoretical molecular descriptors, in a QSAR study is of paramount importance for maximizing prediction accuracy [18]. Machine learning algorithms are very efficient and effective at recognizing patterns in a given dataset. As part of a virtual screening study [17, 21], “hidden patterns” in a dataset of compounds, represented by molecular descriptors, can offer valuable information for classifying them into sub-classes based on their estimated binding affinity for a molecular target [11]. Virtual screening studies usually involve large chemical datasets consisting of thousands of compounds; consequently, extracting meaningful descriptors for datasets of this size is not only an extremely time-consuming process but also a key preliminary step of a new drug discovery campaign [8]. In our initial methodology [5], we employed the Dask framework, which is based on Python, for distributed computing, and Amazon Web Services (AWS) as the cloud services provider. Our approach successfully accelerated the process of extracting molecular descriptors, in some cases by a factor of approximately 73. The initial approach showed that a cluster with many nodes performs better for large datasets, whereas for medium-sized datasets a cluster with fewer nodes is more efficient.
However, the exact number of nodes required for different data sizes was not clearly defined. In this study, we aim to show that the execution time of extracting descriptors depends on the size of the compound, which is reflected in the length of the compound's SMILES
representation. A dataset with a predefined SMILES length is used as a template. We first extract the descriptors for different numbers of rows of this template using different numbers of nodes. The time required for cluster formation and descriptor extraction is then calculated for various combinations of data sizes and cluster sizes. These calculations serve as a template for identifying the most performant number of nodes for a given dataset size. It is shown that our new optimized approach maximizes the performance of the initially proposed process.
28.2 Material and Methods
The molecular weight is a one-dimensional descriptor which describes a compound. Similarly, the Balaban index J and Bertz's complexity index are presented; both of these indexes are transformations of the available structural information about a compound. The Balaban index J [2] is a descriptor based on graph theory. The standard distance matrix D of a graph G has entries $(D)_{ij}$ defined as:

$$(D)_{ij} = \begin{cases} \ell_{ij}, & \text{if } i \neq j \\ 0, & \text{if } i = j \end{cases} \qquad (28.1)$$
where $\ell_{ij}$ is the shortest path, i.e., the minimum number of edges between the vertices i and j. Given a four-node cycle C4, as illustrated in Fig. 28.1, the distance matrix D is:

$$D = \begin{pmatrix} 0 & 1 & 2 & 1 \\ 1 & 0 & 1 & 2 \\ 2 & 1 & 0 & 1 \\ 1 & 2 & 1 & 0 \end{pmatrix}$$
The Balaban index J is defined as
Fig. 28.1 Labeled four-membered cycle C4
$$J = \frac{E}{\mu + 1} \sum_{\text{edges}} (d_i d_j)^{-1/2} \qquad (28.2)$$
where E is the number of edges in the graph G, μ is the cyclomatic number of G, and $d_i$ is the distance sum of vertex i (the sum of all entries in the ith row or column of the distance matrix). The cyclomatic number μ of a polycyclic graph G is equal to the minimum number of edges that must be removed from G in order to convert G into the related acyclic graph. As an example, the Balaban index J for the graph illustrated in Fig. 28.1 is equal to 2 [1]. Bertz's complexity index is a topological index which tries to quantify the “complexity” of molecules. It consists of a sum of two terms: the first representing the complexity of the bonding and the second representing the complexity of the distribution of heteroatoms [4, 15]. Bertz's molecular complexity index C(n) is defined as follows:

$$C(n) = 2n \log_2 n - \sum_i n_i \log_2 n_i \qquad (28.3)$$
where n denotes a graph invariant and $n_i$ is the cardinal number of the ith set of equivalent structural elements on which the invariant is defined; the summation runs over all sets of equivalent structural elements. Additionally, molecular fingerprints are an essential category of descriptors. A molecular fingerprint encodes a molecular structure as a vector: a sequence of binary digits representing the structure of the compound.
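As a quick numerical check of the Balaban index definition, the worked C4 example above can be reproduced in a few lines of Python (the distance matrix and edge list are exactly those of the labeled four-membered cycle in Fig. 28.1):

```python
# Distance matrix D of the labeled cycle C4 (Fig. 28.1)
D = [[0, 1, 2, 1],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [1, 2, 1, 0]]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # the four ring bonds

E = len(edges)               # number of edges
mu = E - len(D) + 1          # cyclomatic number: 1 for a single ring
d = [sum(row) for row in D]  # distance sums d_i (row sums of D)

# J = E / (mu + 1) * sum over edges of (d_i * d_j)^(-1/2)
J = E / (mu + 1) * sum((d[i] * d[j]) ** -0.5 for i, j in edges)
print(J)  # → 2.0
```

Each distance sum is 4, so every edge contributes $(4 \cdot 4)^{-1/2} = 0.25$, and $J = (4/2) \cdot 1 = 2$, matching the value quoted in the text.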
28.3 Implementation

28.3.1 Dataset
The dataset used in our study is a pandas dataframe of 80,000 compounds with two columns: the PubChem ID of the compound and its SMILES representation [22]. The average SMILES length is 59.5 characters. Eight different datasets of 20,001 rows each were used to investigate whether there is a relationship between SMILES length and the execution time of extracting descriptors; for that purpose, each dataset consisted of copies of the same compound. The comparison took place between two datasets with SMILES lengths of 25 and 118 characters and two datasets with SMILES lengths of 26 and 112 characters, respectively.
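A toy version of such a dataframe can be built as follows (the compounds and IDs are illustrative, not taken from the study's 80,000-compound set); the SMILES length column is the quantity the timing experiments are keyed on:

```python
import pandas as pd

# Illustrative two-column table: compound identifier + SMILES string
df = pd.DataFrame({
    "pubchem_id": [702, 241, 176, 8471],  # IDs shown for illustration only
    "smiles": ["CCO", "c1ccccc1", "CC(=O)O", "CCN(CC)CC"],
})

# SMILES length drives descriptor-extraction time in this study
df["smiles_len"] = df["smiles"].str.len()
print(df["smiles_len"].median())  # → 7.5
```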
28.3.2 Dask Framework
Dask is a powerful Python framework which offers a pythonic way to execute code in parallel [16]. The code can be executed on multiple CPUs of a single machine or even on multiple nodes of a cloud cluster. Amazon Web Services (AWS) was used as the cloud provider to scale up our computations, together with the Coiled framework, which enables setting up a cloud cluster.
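The core pattern Dask provides here is mapping a per-compound function over a collection and gathering the results; the sketch below shows that same map-and-collect shape using only the standard library as a local stand-in (`dask.bag.from_sequence(smiles_list).map(featurize).compute()` would be the distributed equivalent, and the `featurize` helper is hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def featurize(smiles):
    # Placeholder for a per-compound descriptor computation; any pure
    # function of one compound parallelizes the same way under Dask.
    return {"smiles": smiles, "length": len(smiles)}

smiles_list = ["CCO", "c1ccccc1", "CC(=O)O"]

# Map the function over the collection in parallel and collect the results
with ThreadPoolExecutor(max_workers=4) as pool:
    rows = list(pool.map(featurize, smiles_list))
print(rows[0])  # → {'smiles': 'CCO', 'length': 3}
```

Because each compound is processed independently, the same code scales from local threads to a multi-node cloud cluster without restructuring.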
28.3.3 Methodology
The extraction of molecular descriptors was achieved using the RDKIT framework.1 RDKIT is a Python scientific framework for computational chemistry. The proposed method generated four different categories of descriptors (mostly 1D, 2D, 3D, and Morgan fingerprints) and saved the output in CSV format and as a binary file (.pkl).
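A minimal RDKit sketch of the per-compound extraction step might look as follows (the function and the small descriptor selection are illustrative; the chapter's full pipeline produces much larger 1D/2D/3D descriptor groups):

```python
from rdkit import Chem
from rdkit.Chem import AllChem, Descriptors

def extract_descriptors(smiles):
    """Compute a small illustrative descriptor set for one SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    return {
        "MolWt": Descriptors.MolWt(mol),        # 1D: molecular weight
        "BalabanJ": Descriptors.BalabanJ(mol),  # topological Balaban index
        "BertzCT": Descriptors.BertzCT(mol),    # Bertz complexity index
        # 1024-bit Morgan fingerprint, radius 2, as a plain 0/1 list
        "Morgan": list(AllChem.GetMorganFingerprintAsBitVect(mol, 2,
                                                             nBits=1024)),
    }

print(extract_descriptors("CCO")["MolWt"])  # ethanol, about 46.07
```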
1 https://zenodo.org/record/3732262
In our approach, we calculated the time needed to extract molecular descriptors for a large combination of different cluster sizes and dataset sizes. In more detail, we computed 170 different execution times using datasets of 5,000, 10,000, 15,000, 20,000, 25,000, 30,000, 35,000, 40,000, 45,000, and 50,000 compounds. The cloud cluster “architectures” were set to 25, 35, 45, 55, 65, 75, 85, 95, 105, 115, 125, 135, 145, 155, 165, 175, and 185 nodes. Each of these datasets consisted of copies of the same SMILES with a length of 57 characters, the median SMILES length of the initial dataset. The SMILES length had to be taken into consideration because, as we showed, there is a relationship between SMILES length and the execution time of generating molecular descriptors (more time is required for larger compounds); the SMILES length of a drug is typically between 20 and 90 characters [12]. These 170 measurements were then used as a template: our proposed method uses it to estimate the best number of nodes for a given dataset size. Finally, we compared the execution time of the proposed method (using the number of nodes chosen from the template) against the best performance observed with the initial approach in [5]. The proposed method proved to be the optimized one, as it outperforms the initial approach; based on the template, it offers the capability to estimate the best number of nodes from the size of the dataset and the median SMILES length.
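The template lookup itself is a simple argmin over the recorded timings; the sketch below uses a few entries from Tables 28.1 and 28.2 (the `best_nodes` helper is hypothetical, not code from the chapter):

```python
# Excerpt of the timing template (seconds) from Tables 28.1 and 28.2:
# {dataset size: {node count: measured execution time}}
template = {
    10_000: {125: 152.81, 135: 153.03, 145: 152.20, 155: 161.05},
    20_000: {135: 187.45, 145: 183.19, 155: 201.25},
    50_000: {165: 315.70, 175: 309.42, 185: 307.57},
}

def best_nodes(n_compounds):
    """Pick the node count with the minimum recorded execution time for
    the template size closest to the requested dataset size."""
    size = min(template, key=lambda s: abs(s - n_compounds))
    times = template[size]
    return min(times, key=times.get)

print(best_nodes(20_000))  # → 145
```

With the full 170-entry template, the same lookup reproduces the node counts reported in the results (145 nodes for 10K and 20K rows, 185 nodes for 50K rows).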
28.4 Results
As can be observed in Fig. 28.2, the time needed to extract molecular descriptors is larger for compounds with longer SMILES than for compounds with shorter SMILES; although in some cases the SMILES length is the same, each dataset consists of different chemical compounds. In Tables 28.1 and 28.2, the execution time is displayed for a variety of node counts and dataset sizes. These measurements are used as a template to estimate the number of nodes that is the most performant choice. The template displayed in these tables is consistent with the results of our previous study [5]: when the number of rows in the dataset is relatively small, a cluster with a small number of nodes is the most efficient option, but as the number of compounds increases, using more cloud nodes tends to become the most efficient solution. For a given number of compounds, our approach proposes the number of nodes that extracts the molecular descriptors as fast as possible. In our initial approach [5], for a dataset of 10,000 compounds and a cluster of 180 nodes, the required time to extract molecular descriptors was 198.9 seconds; the proposed method (using the template) selected 145 nodes and extracted the descriptors in 152.2 seconds. Similarly, for a dataset of 20,000 compounds, the initial method used a cluster of 100 nodes with an execution time of 263.6 seconds, whereas the proposed approach used 145 nodes with an execution time of 183.1 seconds. Finally, for a dataset of 50,000 compounds, the initial method used a cluster of 180 nodes with an execution time of 445.3 seconds, while the proposed method used 185 nodes with an execution time of 307 seconds. In all cases, the template enabled the proposed method to choose a number of nodes that extracts the molecular descriptors much faster than the initial approach. All these comparisons are illustrated in Fig. 28.3.
Fig. 28.2 Both datasets consist of 20,001 compounds, with SMILES lengths equal to: (a) first dataset = 25 characters, second dataset = 118 characters; (b) first dataset = 26 characters, second dataset = 112 characters; (c) first dataset = 25 characters, second dataset = 118 characters (although the SMILES lengths are the same as in case (a), each dataset consists of a different compound); (d) first dataset = 26 characters, second dataset = 112 characters (although the SMILES lengths are the same as in case (b), each dataset consists of a different compound)

28.5 Conclusions and Future Work

Our initial approach [5] is highly efficient in extracting RDKIT molecular descriptors, offering researchers the capability to handle an even larger number of compounds in computational chemistry applications [21]. Moreover, the method proposed here extracts molecular descriptors in a more efficient way in terms of the time required.

Regarding future work, the proposed methodology could be enriched with a larger number of templates for different SMILES lengths. This would allow researchers to extract molecular descriptors of a dataset based on its median SMILES length. As the execution time of extracting molecular descriptors varies with SMILES length, a large number of templates would make it possible to extract descriptors using the most performant cloud infrastructure for each SMILES length.
Table 28.1 Execution time (s) for a variety of cloud infrastructures (1/2)

Nodes  5K rows  10K rows  15K rows  20K rows  25K rows  45K rows  50K rows
25     169.55   211.74    371.77    476.84    568.92    1043.00   794.42
35     171.38   212.75    288.31    288.19    397.30    637.90    706.50
45     170.92   211.78    209.89    250.09    382.30    414.51    681.63
55     151.84   199.87    200.64    239.43    296.24    413.23    580.60
65     133.91   187.78    180.44    238.09    251.37    408.91    435.98
75     130.62   165.74    183.49    236.41    250.26    407.40    429.40
85     134.00   155.70    215.73    238.37    254.56    409.10    383.37
95     137.81   158.37    189.36    217.29    255.59    396.36    357.57
105    142.98   155.15    176.36    198.76    253.42    378.58    349.17
115    147.65   155.93    179.09    195.91    253.20    378.69    343.35
125    152.56   152.81    198.74    192.62    242.59    382.69    328.69
135    154.38   153.03    193.77    187.45    236.16    315.47    319.07
145    140.89   152.20    184.07    183.19    228.92    297.45    315.45
155    146.92   161.05    186.25    201.25    229.86    313.12    319.97
165    145.26   170.19    178.08    211.94    222.94    337.20    315.70
175    149.10   170.79    176.77    216.76    214.26    332.46    309.42
185    159.80   170.61    178.14    220.62    212.91    332.04    307.57
Table 28.2 Execution time (s) for a variety of cloud infrastructures (2/2)

Nodes  30K rows  35K rows  40K rows
25     469.30    829.06    607.89
35     467.20    533.00    500.80
45     458.55    361.52    395.54
55     389.11    358.62    389.12
65     362.46    355.71    386.91
75     290.99    314.75    368.41
85     259.93    297.48    307.40
95     256.14    295.27    295.36
105    246.72    269.87    280.30
115    249.75    269.88    279.69
125    249.03    265.58    263.66
135    251.47    265.41    360.07
145    248.68    263.30    251.30
155    253.30    258.86    256.22
165    242.96    239.73    253.36
175    242.88    239.73    253.56
185    245.25    239.94    256.17
Fig. 28.3 Initial vs proposed approach: (a) dataset size = 10K rows, initial approach = 180 nodes, proposed approach = 145 nodes; (b) dataset size = 20K rows, initial approach = 100 nodes, proposed approach = 145 nodes; (c) dataset size = 50K rows, initial approach = 180 nodes, proposed approach = 185 nodes

References

1. Babić D, Klein D, Lukovits I, Nikolić S, Trinajstić N (2002) Resistance-distance matrix: a computational algorithm and its application. International Journal of Quantum Chemistry 90(1):166–176
2. Balaban AT (1982) Highly discriminating distance-based topological index. Chemical Physics Letters 89(5):399–404
3. Bazan JG, Nguyen HS, Nguyen SH, Synak P, Wróblewski J (2000) Rough set algorithms in classification problem. In: Rough Set Methods and Applications, pp 49–88
4. Bertz SH (1981) The first general index of molecular complexity. Journal of the American Chemical Society 103(12):3599–3601
5. Didachos C, Kintos DP, Fousteris M, Gerogiannis VC, Son LH, Kanavos A (2022) A cloud-based distributed computing approach for extracting molecular descriptors. In: 6th International Conference on Algorithms, Computing and Systems (ICACS)
6. Hessler G, Baringhaus KH (2018) Artificial intelligence in drug design. Molecules 23(10):2520
7. Hwang H, Dey F, Petrey D, Honig B (2017) Structure-based prediction of ligand–protein interactions on a genome-wide scale. Proceedings of the National Academy of Sciences 114(52):13685–13690
8. Kombo DC, Tallapragada K, Jain R, Chewning J, Mazurov AA, Speake JD, Hauser TA, Toler S (2013) 3D molecular descriptors important for clinical success. Journal of Chemical Information and Modeling 53(2):327–342
9. Kubinyi H (1997) QSAR and 3D QSAR in drug design part 1: methodology. Drug Discovery Today 2(11):457–467
10. Lavecchia A (2015) Machine-learning approaches in drug discovery: methods and applications. Drug Discovery Today 20(3):318–331
11. Lionta E, Spyrou G, Vassilatis DK, Cournia Z (2014) Structure-based virtual screening for drug discovery: principles, applications and recent advances. Current Topics in Medicinal Chemistry 14(16):1923–1938
12. Liu P, Li H, Li S, Leung KS (2019) Improving prediction of phenotypic drug response on cancer cell lines using deep convolutional network. BMC Bioinformatics 20(1):1–14
13. Mak KK, Pichika MR (2019) Artificial intelligence in drug development: present status and future prospects. Drug Discovery Today 24(3):773–780
14. Mauri A, Consonni V, Todeschini R (2017) Molecular descriptors. In: Handbook of Computational Chemistry, pp 2065–2093
15. Randić M, Plavšić D (2002) On the concept of molecular complexity. Croatica Chemica Acta 75(1):107–116
16. Rocklin M (2015) Dask: parallel computation with blocked algorithms and task scheduling. In: 14th Python in Science Conference, pp 130–136
17. Shoichet BK (2004) Virtual screening of chemical libraries. Nature 432(7019):862–865
18. Stahura FL, Bajorath J (2005) New methodologies for ligand-based virtual screening. Current Pharmaceutical Design 11(9):1189–1202
19. Todeschini R, Consonni V (2010) Molecular descriptors. Recent Advances in QSAR Studies, pp 29–102
20. Vamathevan J, Clark D, Czodrowski P, Dunham I, Ferran E, Lee G, Li B, Madabhushi A, Shah P, Spitzer M, Zhao S (2019) Applications of machine learning in drug discovery and development. Nature Reviews Drug Discovery 18(6):463–477
21. Walters WP, Stahl MT, Murcko MA (1998) Virtual screening - an overview. Drug Discovery Today 3(4):160–178
22. Weininger D (1988) SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules. Journal of Chemical Information and Computer Sciences 28(1):31–36
23. Zou Z, Shi Z, Guo Y, Ye J (2019) Object detection in 20 years: a survey. CoRR abs/1905.05055
Prediction of Intracranial Temperature Through Invasive and Noninvasive Measurements on Patients with Severe Traumatic Brain Injury
29
Eleni Tsimitrea, Dimitra Anagnostopoulou, Maria Chatzi, Evangelos C. Fradelos, Garyfallia Tsimitrea, George Lykas, and Andreas D. Flouris Abstract
Brain temperature (TB) measurements in patients with severe brain damage are important in order to offer optimal treatment. The purpose of this research is the creation of mathematical models for predicting TB based on the temperatures in the bladder (TBL), femoral artery (TFA), ear canal (TEC), and axilla (TA), without the need for placement of an intracranial catheter, thereby contributing significantly to research on the human thermoregulatory system. The research involved 18 patients (13 men and 5 women) with severe brain injury who were hospitalized in the adult intensive care units (ICU) of Larissa's two hospitals. An
E. Tsimitrea · D. Anagnostopoulou · M. Chatzi University General Hospital of Larissa, Larissa, Greece Laboratory of Clinical Nursing, Department of Nursing, University of Thessaly Larissa, Larissa, Greece E. C. Fradelos (✉) Laboratory of Clinical Nursing, Department of Nursing, University of Thessaly Larissa, Larissa, Greece G. Tsimitrea Post–Secondary Computer Science Teacher Volos, Volos, Greece G. Lykas Department of Medicine, University of Thessaly Larissa, Larissa, Greece A. D. Flouris Department of Exercise Science, University of Thessaly, Trikala, Greece
intracranial catheter with a thermistor was used to continuously measure TB and other parameters. TB and, simultaneously, one or more of TBL, TFA, TEC, and TA were recorded every hour. To create the TB prediction models, the data of each measurement were separated into (a) a model sample (80% of measurements) and (b) a validation sample (20% of measurements). Multivariate linear regression analysis demonstrated that it is possible to predict brain temperature (PrTB) from the independent variables (R2 = 0.73 for TBL, 0.80 for TFA, 0.27 for TEC, and 0.17 for TA; p < 0.05). Statistically significant linear associations were found, with no difference in means between TB and PrTB for each prediction model. Also, the 95% limits of agreement and the percent coefficient of variation showed sufficient agreement between TB and PrTB in each prediction model. In conclusion, brain temperature prediction models based on TBL, TFA, TEC, and TA were successful, and their use contributes to the improvement of clinical decision-making.
Neuroparametry · Brain temperature · Intracranial catheter · Temperature prediction model
# The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Vlamos (ed.), GeNeDis 2022, Advances in Experimental Medicine and Biology 1424, https://doi.org/10.1007/978-3-031-31982-2_29
Abbreviations

TB         Brain temperature
TBL        Bladder temperature
TFA        Femoral artery temperature
TEC        Ear canal temperature
TA         Axilla temperature
PrTB(TBL)  Predicted brain temperature based on bladder temperature
PrTB(TFA)  Predicted brain temperature based on femoral artery temperature
PrTB(TEC)  Predicted brain temperature based on ear canal temperature
PrTB(TA)   Predicted brain temperature based on axilla temperature

29.1 Background
Ischemia, hemorrhage, and edema of the brain after severe head injury, stroke, or cardiac arrest cause secondary damage and potentially fatal complications. Therefore, continuous monitoring of the patient's brain function – referred to as neuroparametry – is necessary [22]. Neuroparametry is a modern concept that refers to the implantation of special intracranial catheters in the brain parenchyma of patients with severe brain damage. Using digital technology, some of the brain's specialized function parameters can be measured [26], such as brain temperature (TB), cerebral perfusion pressure (CPP), intracranial pressure (ICP), and tissue oxygenation. Neuroparametry is also a reliable and clinically useful monitoring method, contributing to the prevention and early diagnosis of cerebral complications [19, 28] and to the use of appropriate treatment for fever control and neuroprotection [3]. Neurophysiological assessment of the brain has provided important information about intracranial temperature (TB); in particular, the range of variation of TB during the day remains relatively similar in different parts of the brain [6]. Intracranial temperature measurement is crucial in the ischemic brain, which is very sensitive to temperature changes. Failure to control brain temperature fluctuations, and especially
hyperthermia, in patients with ischemic stroke can cause severe and irreversible damage [6]. In recent studies, the use of therapeutic hypothermia has been shown to reduce intracranial pressure [8] and tissue oxygen consumption and to lead to a reduction of cerebral ischemia [10], improving functional outcomes [9]. On the other hand, Andrews et al. [2] showed that therapeutic hypothermia successfully reduced ICP but led to a higher mortality rate and a worse functional outcome. Apart from the intracranial catheter, no other method currently exists that allows TB to be known with certainty without measuring it directly; it can, however, be calculated indirectly. Studies have shown that TB has a relatively linear relationship with core body temperature (pulmonary artery, esophagus, bladder) and is usually slightly higher than core temperature [5, 17]. Correlations with peripheral body temperatures have also been found [7, 11, 12, 13, 24]; however, the type and degree of these correlations have not been precisely determined. To measure TB indirectly, a mathematical model needs to be created that uses measurements from the core of the body or from noninvasive methods (axillary or tympanic membrane temperature) and calculates TB. The aim of this observational study is the creation of mathematical models for the prediction of TB based on the temperatures in the bladder (TBL), the femoral artery (TFA), the ear canal (TEC), and the axilla (TA). This will contribute significantly both to improving patient care, through a better understanding of changes in thermal homeostasis that will aid clinical decision-making, and to reducing the cost of care.
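The modeling recipe (linear regression fitted on an 80% model sample and checked on the held-out 20% validation sample) can be sketched as follows; the temperature values below are synthetic stand-ins, not the study's patient data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic hourly readings: bladder temperature (TBL) as a predictor of
# brain temperature (TB); offsets and noise level are illustrative only
rng = np.random.default_rng(0)
tbl = rng.uniform(36.0, 39.5, size=200)           # bladder temperature (C)
tb = tbl + 0.3 + rng.normal(0.0, 0.15, size=200)  # brain slightly warmer

X = tbl.reshape(-1, 1)
X_model, X_val, y_model, y_val = train_test_split(
    X, tb, test_size=0.2, random_state=0)  # 80% model / 20% validation

model = LinearRegression().fit(X_model, y_model)
r2 = r2_score(y_val, model.predict(X_val))
print(f"validation R^2 = {r2:.2f}")
```

The same fit-then-validate split, applied per measurement site (TBL, TFA, TEC, TA), yields the per-site R-squared values reported in the abstract.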
29.2

Material and Methods

29.2.1

Study Environment
The study took place in the Intensive Care Medicine Departments of the University Hospital of Larissa and the General Hospital of Larissa between April 2012 and April 2013. All patients
29
Prediction of Intracranial Temperature Through Invasive and Noninvasive. . .
or their next of kin signed an informed consent form before inclusion in the study. The study was approved by the Internal Review Board and Ethics Committee of the University Hospital of Larissa (ID 58614) on 22 November 2011 and by the Internal Review Board and Ethics Committee of the University of Thessaly (UT) (ID 522) on 29 March 2012.
29.2.2
Measuring Instrument
The Glasgow Coma Scale (GCS) was used to assess the severity of the patients' neurological condition. TB was measured using an intracranial catheter with an rCBF-TD Hemedex thermistor produced by Johnson & Johnson, with a Bowman monitor. The catheter was placed intraparenchymally at a depth of 15–25 mm into the white matter. Foley catheters with a Tyco/Kendall Healthcare temperature sensor were used to measure TBL. TA was measured with an EcoTemp II electronic digital thermometer (OMRON, Japan). TEC was measured with an IR 100 digital ear thermometer (Microlife, Switzerland); the temperature was measured three times in the patient's left ear, and the average value was recorded. TFA was measured with the PiCCO system (Pulsion Medical Systems) using the transpulmonary thermodilution method in the patient's left femoral artery.
29.2.3

Sample

The study population consisted of 18 adult neurosurgical patients. Inclusion criteria were the presence of an intracranial catheter with a thermistor for the continuous measurement of intracranial pressure and TB, and an ICU stay of more than 48 h. Exclusion criteria were age less than 18 years and pregnancy. All patients were under sedation during the measurements.

29.2.4

Data Collection Process

The present study did not differentiate the care and treatment received by the patients, but was based on the recording of the aforementioned temperatures, which were routine measurements in the ICU for patients with severe traumatic brain injury. The experimental protocol included the recording of TB, as well as one or more of TBL, TFA, TEC, and TA. Recording was performed every 60 min, for as long as the ICU attending physicians judged it necessary for the intracranial thermistor catheter to remain in place.
29.2.5
Statistics
Initial examination of the distributions of these variables showed that the data were normally distributed; therefore, parametric tests were used. To create the TB prediction models, the data of each measurement were separated into (a) a model sample (80% of the measurements) and (b) a validation sample (20% of the measurements). The analysis that followed comprised the following stages, carried out separately for each of the measurements TBL, TFA, TEC, and TA:

1. Linear regression analysis was performed on the model sample with TB as the dependent variable and each of TBL, TFA, TEC, and TA, successively, as the independent variable.
2. The brain's predicted temperature (PrTB) was calculated in the model sample, based on the prediction model derived from stage 1.
3. A paired t-test for dependent samples was performed between TB and PrTB in the model sample.
4. The brain's predicted temperature was calculated in the validation sample, based on the prediction model derived from stage 1.
5. Linear correlation analysis was performed between TB and PrTB in the validation sample, together with a t-test for dependent samples between TB and PrTB in the validation sample.
6. The 95% limits of agreement were calculated between TB and PrTB in the validation sample.
7. The percentage coefficient of deviation was calculated between TB and PrTB in the validation sample.

The IBM SPSS Statistics program (version 20, IBM Corp., Armonk, NY, USA) was used for the statistical analysis, and p < 0.05 was defined as the level of statistical significance.
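The staged procedure above can be sketched in a few lines of code. The snippet below is an illustrative reconstruction on synthetic data (the coefficients, noise level, and sample size are invented for the example and are not the study's), using ordinary least squares for stage 1 and paired t-tests for stages 3 and 5:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for paired hourly readings (illustrative data):
# bladder temperature (TBL) and brain temperature (TB).
tbl = rng.normal(37.0, 0.8, 500)
tb = 3.0 + 0.93 * tbl + rng.normal(0.0, 0.3, 500)

# 80% model sample / 20% validation sample.
idx = rng.permutation(tb.size)
cut = int(0.8 * tb.size)
model, valid = idx[:cut], idx[cut:]

# Stage 1: linear regression on the model sample (TB as dependent variable).
fit = stats.linregress(tbl[model], tb[model])

# Stages 2 and 4: predicted brain temperature PrTB in both samples.
pr_model = fit.intercept + fit.slope * tbl[model]
pr_valid = fit.intercept + fit.slope * tbl[valid]

# Stages 3 and 5: paired t-tests, plus the validation-sample correlation.
t_model = stats.ttest_rel(tb[model], pr_model)
t_valid = stats.ttest_rel(tb[valid], pr_valid)
r_valid, _p = stats.pearsonr(tb[valid], pr_valid)
```

On the model sample the paired differences are OLS residuals with zero mean by construction, so the stage-3 t-test is expected to be non-significant; the validation sample provides the genuine check.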
E. Tsimitrea et al.
Table 29.1 Linear regression results on the model sample for each measurement

Dependent variable: TB

Independent variable   Linear correlation   R2     Prediction model
TBL                    0.85*                0.73   PrTB(TBL) = 3.058 + TBL ∙ 0.925
TFA                    0.89*                0.80   PrTB(TFA) = 7.989 + TFA ∙ 0.788
TEC                    0.52*                0.27   PrTB(TEC) = 28.231 + TEC ∙ 0.25
TA                     0.42*                0.17   PrTB(TA) = -25.875 + TA ∙ 1.68

*p < 0.001; R2: coefficient of determination
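The fitted equations in Table 29.1 can be applied directly to a site temperature. The following helper is illustrative only (the function name and structure are ours, not part of the study's software), with the coefficients transcribed from the table:

```python
# (intercept, slope) pairs transcribed from Table 29.1 for each measurement site.
MODELS = {
    "TBL": (3.058, 0.925),    # bladder
    "TFA": (7.989, 0.788),    # femoral artery
    "TEC": (28.231, 0.25),    # ear canal
    "TA": (-25.875, 1.68),    # axilla
}

def predict_brain_temp(site: str, temp_c: float) -> float:
    """Predicted intracranial temperature PrTB (°C) from a site temperature (°C)."""
    intercept, slope = MODELS[site]
    return intercept + slope * temp_c
```

For example, a bladder temperature of 37.0 °C yields a predicted TB of about 37.3 °C.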
29.3
Results
Eighteen patients, 13 men (72.2%) and 5 women (27.8%), fulfilled the inclusion criteria and entered the study. The mean (95% CI) age was 52 years (44–59), the length of ICU stay was 31 days (22–54), and the GCS was 8 (6–8). Intracranial catheters were replaced after 12 days (8–13 days). As for the diagnosis, the cause of hospitalization was traumatic brain injury (TBI) in five patients (28%), cerebral hemorrhage in six patients (33%), and subarachnoid hemorrhage (SAH) in seven patients (39%). Mortality was 16.7%. TBL was measured in 15 patients; TFA in 3; TA in 6; and TEC in 3. The study's data consisted of 3545 measurements of TB, 3138 of TBL, 334 of TFA, 112 of TEC, and 162 of TA.
29.3.1
Analysis of TBL
In the model sample, the linear regression analysis (Table 29.1) showed that knowing TBL (independent variable) allows the determination of TB (dependent variable) with accuracy [F(1, 2511) = 6716.19; p < 0.001]. The paired t-test for dependent samples showed no statistically significant difference between TB and the brain's predicted temperature based on the bladder temperature (PrTB(TBL)) in the model sample (p > 0.05), nor between TB and PrTB(TBL) in the validation sample (p > 0.05). The 95% limits of agreement and the percentage coefficient of deviation between TB and
Table 29.2 The 95% limits of agreement and the percentage coefficient of deviation between TB and PrTB in the validation sample for each measurement

Predict through   95% limits of agreement   Percentage coefficient of deviation
PrTB(TBL)         -0.02 ± 1.11              1.51%
PrTB(TFA)          0.01 ± 0.45              0.62%
PrTB(TEC)         -0.09 ± 0.75              1.03%
PrTB(TA)          -0.25 ± 6.24              8.92%
PrTB(TBL) in the validation sample are presented in Table 29.2.
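The agreement statistics reported in Table 29.2 can be computed from paired TB and PrTB series. The sketch below assumes the limits of agreement follow the usual Bland-Altman form (bias ± 1.96 SD of the paired differences) and that the percentage coefficient of deviation is the RMS difference relative to the mean measured TB; the chapter does not give explicit formulas, so both definitions are assumptions:

```python
import numpy as np

def agreement_stats(tb, pr_tb):
    """Bias, half-width of the 95% limits of agreement, and percentage
    coefficient of deviation between measured TB and predicted PrTB."""
    tb = np.asarray(tb, dtype=float)
    diff = tb - np.asarray(pr_tb, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)     # 95% limits: bias ± half_width
    # Assumed definition: RMS of paired differences relative to mean TB, in %.
    pcd = 100.0 * np.sqrt(np.mean(diff ** 2)) / tb.mean()
    return bias, half_width, pcd
```

With this convention, the PrTB(TA) row of Table 29.2 (limits -0.25 ± 6.24, deviation 8.92%) directly reflects the wide scatter of the axillary predictions.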
29.3.2
TFA Analysis
In the model sample, the linear regression analysis (Table 29.1) showed that knowing TFA (independent variable) allows the determination of TB (dependent variable) with accuracy [F(1, 275) = 1090.02; p < 0.001]. The paired t-test for dependent samples showed no statistically significant difference between TB and the brain's predicted temperature based on the femoral artery temperature (PrTB(TFA)) in the model sample (p > 0.05), nor between TB and PrTB(TFA) in the validation sample (p > 0.05). The 95% limits of agreement and the percentage coefficient of deviation between TB and PrTB(TFA) in the validation sample are presented in Table 29.2.
29.3.3
Analysis of TΕC
In the model sample, the linear regression analysis (Table 29.1) showed that knowing TEC
(independent variable) allows for the determination of TB (dependent variable) with accuracy [F(1, 94) = 35.17; p < 0.001]. The paired t-test for dependent samples showed no statistically significant difference between TB and the brain's predicted temperature based on the ear canal temperature (PrTB(TEC)) in the model sample (p > 0.05), nor between TB and PrTB(TEC) in the validation sample (p > 0.05). The 95% limits of agreement and the percentage coefficient of deviation between TB and PrTB(TEC) in the validation sample are shown in Table 29.2.
29.3.4

Analysis of TA

In the model sample, the linear regression analysis (Table 29.1) showed that knowledge of TA (independent variable) allows the determination of TB (dependent variable) with accuracy [F(1, 128) = 27.58; p < 0.001]. The paired t-test for dependent samples showed no statistically significant difference between TB and the brain's predicted temperature based on the axillary temperature (PrTB(TA)) in the model sample (p > 0.05), nor between TB and PrTB(TA) in the validation sample (p > 0.05). The 95% limits of agreement and the percentage coefficient of deviation between TB and PrTB(TA) in the validation sample are presented in Table 29.2.

Fig. 29.1 Scatter plots of TB and PrTB(TBL) in the model (a) and validation (b) samples. The PrTB(TBL) showed a strong linear positive correlation (r = 0.85, p < 0.001) with TB in the model sample, as well as in the validation sample (r = 0.84, p < 0.001)

29.4

Discussion

The present study showed that TBL measurements allow the prediction of brain temperature with great accuracy, without the use of an intracranial catheter. Pearson's correlation coefficient showed a strong positive correlation between the two variables (Table 29.1). From the scatter plot (Fig. 29.1), in the model sample, it can be concluded that the relationship between TB and PrTB(TBL) shows a strong positive linear correlation, and the curve shows a very good degree of fit (Table 29.1). At the same time, the comparison of the mean values of the two dependent samples, both in the model sample and in the validation sample, showed no statistically significant difference between TB and PrTB(TBL). These findings show that the use of TBL makes it possible to create a prediction model of TB with a very small coefficient of deviation. The findings of the present research are in agreement with Mcilvoy's literature review [16], which recorded a number of studies showing that TBL is very close to TB, and they are aligned with those of Akata et al. [1]. The aforementioned also agrees with Lefrant et al. [14], who showed that TBL is a reliable measurement for predicting core temperature. Finally, Stone
Fig. 29.2 Scatter plots of TB and PrTB(TFA) in the model (a) and validation (b) samples. The TB and PrTB(TFA) showed a strong linear positive correlation (r = 0.89, p < 0.001) in the model sample, as well as in the validation sample (r = 0.904, p < 0.001)
et al. [27] explored the possibility of predicting TB from TBL in patients undergoing induced hypothermia therapy. The use of TFA measurements also allows the prediction of TB with great accuracy, without the use of an intracranial catheter. The Pearson correlation coefficient showed a strong positive correlation between the two variables. From the scatter plot (Fig. 29.2), in the model sample, it can be concluded that the relationship between TB and PrTB(TFA) shows a strong positive linear correlation, and the curve shows an excellent degree of fit (Table 29.1). Thus, the use of TFA with the transpulmonary thermodilution method makes it possible to create a prediction model of TB with a very small coefficient of deviation. These findings are in agreement with other studies. Seppelt [23] found that the temperature of the femoral artery is consistently 0.4 °C lower than the intracranial temperature. Krizanac et al. [12] showed that a catheter measuring TFA reflects the core temperature in patients undergoing mild hypothermia after cardiac arrest. The TEC measurements allow the prediction of TB with relative accuracy, without the use of an intracranial catheter. The Pearson correlation coefficient showed a correlation of moderate strength between the two variables (Table 29.1). From the scatter plot (Fig. 29.3), in
the model sample, it can be concluded that the relationship between TB and PrTB(TEC) shows a statistically significant positive correlation, but the curve shows a moderate rather than an excellent fit (Table 29.1), which means that chance is also introduced into the measurements. However, according to similar studies, the TEC measurement has a direct relationship with TB and is the best noninvasive method for predicting TB [15, 18, 21]. Erickson and Kirklin [4] found that ear canal thermometry can be used to predict pulmonary artery blood temperature without the use of a catheter. Finally, between TB and PrTB(TEC) (Table 29.2), the coefficient of deviation was very small compared to the corresponding coefficient of deviation in the research by Stone et al. [25]. These findings are also confirmed by the analysis of the validation sample. Indeed, in the scatter plot (Fig. 29.3), a moderate linear correlation can be seen between TB and PrTB(TEC). At the same time, the comparison of the mean values of the two dependent samples, both in the model sample and in the validation sample, showed no statistically significant difference between TB and PrTB(TEC) (Table 29.1). The TA measurements allow the prediction of TB without the use of an intracranial catheter, but with a 9% tolerance factor, which means a high risk of failure to predict TB. The Pearson correlation
Fig. 29.3 Scatter plots of TB and PrTB(TEC) in the model (a) and validation (b) samples. In the model sample, TB and PrTB(TEC) had a linear positive correlation of moderate strength (r = 0.52, p < 0.001), with a comparable correlation in the validation sample (r = 0.534, p < 0.05)

Fig. 29.4 Scatter plots of TB and PrTB(TA) in the model (a) and validation (b) samples. TB and PrTB(TA) presented a linear positive correlation of moderate strength (r = 0.42, p < 0.001) in the model sample, and the same in the validation sample (r = 0.42, p < 0.05)
coefficient showed a correlation of low to moderate strength between the two variables (Table 29.1). From the scatter plot (Fig. 29.4), in the model sample, it is concluded that the relationship between TB and PrTB(TA) shows a small positive correlation, and the curve shows a low degree of fit (Table 29.1), which means that the TA values are also affected by chance [27]. Summarizing the findings for predicting TB through TA, we can conclude that the use of axillary measurements carries a serious risk of failure of the expected prediction (of either intracranial
temperature or core temperature). Similar findings were reported in other studies (Robinson et al. [20], Nimah et al. [18], Erickson and Kirklin [4], Mariak et al. [15], Lefrant et al. [14]). Some of the weaknesses of the study were the small patient sample, the high cost of the catheters, and the fact that the noninvasive methods used for the prediction of TB are not absolutely accurate. This is because there is no specific protocol providing a standardized way of taking those measurements. Furthermore, measuring temperature with devices based on different technologies has been little explored.
29.5

Conclusions
The importance of the present research study lies in the fact that it became possible to create prediction models for the brain's temperature from TBL, TFA, TEC, and TA, without the use of an intracranial catheter. The prediction of TB from TBL, TFA, TEC, and TA presents a degree of deviation of 1.51%, 0.62%, 1.03%, and 8.92%, respectively. In much of the research to date, the assessment of TB has been done mainly through the temperature of the pulmonary artery, while few studies, and even fewer in Greece, have examined the correlations between TB, TBL, TFA, TEC, and TA. The prediction of TB from minimally invasive or noninvasive methods presents advantages: it puts less strain on the patient, it has a low cost, and it allows quick decision-making by medical staff regarding the subsequent treatment. Future research in the aforementioned directions will certainly be of interest for the treatment of secondary damage due to hyperthermia in neurosurgical patients. Financial Support This specific research was done with the support of the University of Thessaly and the ICU departments of the University Hospital of Larissa and the General Hospital of Larissa in the context of a master's thesis; it was not financed by the above institutions.
References 1. Akata T., Setogushi H., Shirozu K. and Yoshino J. (2007), Reliability of temperatures measured at standard monitoring sites as an index of brain temperature during deep hypothermic cardiopulmonary bypass conducted for thoracic aortic reconstruction. Journal of Thoracic and Cardiovascular Surgery, 133, p. 1559–1565. 2. Andrews P., Sinclair L., Rodríguez A., Harris B., Rhodes J., Watson H., Murray G. (2018), Therapeutic hypothermia to reduce intracranial pressure after traumatic brain injury: the Eurotherm3235 RCT, Health Technol Assess, 22(45), p. 1–134, PMID: 30168413, PMCID: PMC6139479, https://doi.org/10.3310/hta22450
3. Childs C., Wieloch T., Lecky F., Machin G., Harris B. and Stochetti N. (2010), Report of a Consensus Meeting on Human Brain Temperature After Severe Traumatic Brain Injury: Its Measurement and Management During Pyrexia, Frontiers in Neurology, 1(146), p. 1–8. 4. Erickson R. and Kirklin S. (1993), Comparison of Ear-Based, Bladder, Oral and Axillary Methods for Core Temperature Measurement, Critical Care Medicine, 21(10), p. 1528–1534. 5. Fountas K., Kapsalaki E., Feltes C., Smisson H., Johnston K., Grigorian A. and Robinson J. (2003), Disassociation between intracranial and systemic temperatures as an early sign of brain death. J Neurosurg Anesthesiol, 15(2), p. 87–89. 6. Fountas K., Kapsalaki E., Feltes C., Smisson H., Johnston K. and Robinson J. (2004), Intracranial temperature: is it different throughout the brain? Neurocritical Care, 1(2), p. 195–200. 7. Frankenfield D. and Ashcraft C. (2011), Description and Prediction of Resting Metabolic Rate After Stroke and Traumatic Brain Injury, Journal of Nutrition, p. 1–6. 8. Gaohua L. and Kimura H. (2006), A Mathematical Model of Intracranial Pressure Dynamics for Brain Hypothermia Treatment, Journal of Theoretical Biology, 238, p. 882–900. 9. Idris Z., Yee A., Hassan W., Hassan M., Zain K., Manaf A. (2022), A Clinical Test for a Newly Developed Direct Brain Cooling System for the Injured Brain and Pattern of Cortical Brainwaves in Cooling, Noncooling, and Dead Brain, Ther Hypothermia Temp Manag, 12(2), p. 103–114, PMID: 33513054, https://doi.org/10.1089/ther.2020.0033 10. Kalantzis H., Karabinis A. and Papageorgiou D. (2009), Induced Therapeutic Hypothermia in Patients with Trauma Brain Injury Hospitalized in an Intensive Care Unit, Journal of Nursing, 48(2), p. 157–163. 11. Kinberger O., Thell R., Schuh M., Koch J., Sessler D. and Kurz A. (2009), Accuracy and Precision of a Novel Non-Invasive Core Thermometer, British Journal of Anesthesia, 103, p. 226–231. 12.
Krizanac D., Stratil P., Janata A., Sterz F., Laggner A., Haugk M., Testori C., Holzer M. and Berhinger W. (2009), Femoral Artery Temperature for Monitoring Core Temperature During Mild Hypothermia in Patients Resuscitated From Cardiac Arrest, American Heart Association. 13. Kuo J-R, Lo C-J., Wang C-C., Lu C-L., Lin S-C. and Chen C-F. (2011). Measuring brain temperature while maintaining brain normothermia in patients with severe traumatic brain injury. Journal of Clinical Neuroscience, 18, p. 1059–1063. 14. Lefrant J., Muller I., Emmanuel-Coussaye J., Benbabaali M., Lebris C., Zeitoun N., Mari C., Saissi G., Ripart J. and Eledjam J. (2003), Temperature
Measurement in Intensive Care Patients: Comparison of Urinary Bladder, Oesophageal, Rectal, Axillary and Inguinal Methods Versus Pulmonary Artery Core Method, Intensive Care Medicine, 29, p. 414–418. 15. Mariak Z., White D. M., Lyson T. and Lewko J. (2003), Tympanic temperature reflects intracranial temperature changes in humans. Pflügers Archiv – European Journal of Physiology, 446, p. 279–284. 16. Mcilvoy L. (2004), Comparison of brain temperature to core temperature – a review of the literature. Journal of Neuroscience Nursing, 36(1), p. 23–31. 17. Mellergard P. (1994), Monitoring of rectal, epidural, and intraventricular temperature in neurosurgical patients. Acta Neurochir Suppl (Wien), 60, p. 485–487. 18. Nimah M., Bshesh K., Callahan J. and Jacobs B. (2006), Infrared Tympanic Thermometry in Comparison With Other Temperature Measurement Techniques in Febrile Children, Pediatric Critical Care Medicine, 7, p. 48–55. 19. Papanikolaou P., Barkas K., Venetikidis A., Damilakis K., Georgoulis G., Paleologos T., Hatzidakis E. and Kyriakou T. (2011), Multivariate with Intracranial Endoparenchymal Catheters: A Safe Interventional Procedure, Hellenic Neurosurgery, Athens. 20. Robinson J., Charlton J., Seal R., Spady D. and Joffres M. (1998), Oesophageal, Rectal, Axillary, Tympanic and Pulmonary Artery Temperatures During Cardiac Surgery, Journal of Anaesthesia, 45(4), p. 317–323.
21. Rotello L., Crawford L. and Terndrup T. (1996), Comparison of Infrared Ear Thermometer Derived and Equilibrated Rectal Temperatures in Estimating Pulmonary Artery Temperatures, Journal of Critical Care Medicine, 24(9), p. 1501–1506. 22. Sdrolias P. (2003), Neuroparameterization and Neuroprotection in severe acute brain injury. Neurosurgical Advances, 5, p. 1–4. 23. Seppelt I. (2005), Hypothermia Does Not Improve Outcome From Traumatic Brain Injury, Journal of Critical Care and Resuscitation, 7, p. 233–237. 24. Siderouli S., Gogotsi X., Tsirozi M., Constantonis D., Halvatzi S., Bimba D. and Pneumatikos I. (2008), Comparison of Available Methods of Temperature Measurement in Critically Ill Patients, Journal of Nursing, 47(1), p. 96–101. 25. Stone J., Young W., Smith C., Solomon R., Wald A., Ostapkovich N. and Shrebnick D. (1995), Do standard monitoring sites reflect true brain temperature when profound hypothermia is rapidly induced and reversed? Anesthesiology, 82(2), p. 344–351. 26. Stratzalis G. (2005), Craniocerebral Injury. Hospital Chronicles, 67, p. 181–188. 27. Symeonaki M. (2008), Statistical Analysis of Social Data with SPSS 15.0, Sofia Publications. 28. Venetikidis A., Papanikolaou P., Paleologos T., Damilakis K., Hydraios I., Tsanis G., Papageorgiou N., Galanis P. (2009), Applied Medical Research Statistical Methods of Data Analysis, Archives of Greek Medicine, 26(5), p. 699–711.
Improving Patient-Centered Dementia Screening for General, Multicultural Population and Persons with Disabilities from Primary Care Professionals with a Web-Based App
30
Maria Sagiadinou, Panagiotis Vlamos, Themis P. Exarchos, Dimitrios Vlachakis, and Christina Kostopoulou Abstract
Background Primary care serves as the first point of contact for people with dementia and is therefore a promising setting for screening, assessment, and initiation of specific treatment and care. According to the literature, online applications can be effective by addressing different needs, such as screening, health counseling, and improving overall health status. Aim Our goal was to propose a brief, inexpensive, noninvasive strategy for screening for dementia in the general, multicultural population and persons with disabilities, through a web-based app with a tailored multicomponent design. Methods We designed and developed a web-based application, which combines cognitive tests and biomarkers to assist primary care professionals in screening for dementia. We then conducted an implementation study to measure the usability of the app. Two groups of experts participated in the selection of the screening instruments, following the Delphi method. Then, 16 primary care professionals administered the app to their patients (n = 132) and afterwards measured its usability with the System Usability Scale. Outcomes Two cognitive tools were integrated in the app, GPCOG and RUDAS, which are adequate for primary care settings and for screening multicultural and special-needs populations, without educational or language bias. Also, for assessing biomarkers, the CAIDE model was preferred, which resulted in individualized proposals concerning the modifiable risk factors. Usability scored high for the majority of users. Conclusion Utilization of the Dementia app could be incorporated into the routine practices of existing healthcare services and the screening of multiple populations for dementia.

M. Sagiadinou (✉) · P. Vlamos · T. P. Exarchos, Department of Informatics, Ionian University, Corfu, Greece, e-mail: [email protected]; D. Vlachakis, Agricultural University of Athens, Athens, Greece, e-mail: [email protected]; C. Kostopoulou (✉), Department of Psychology, Florina University, Florina, Greece

Keywords
Dementia screening · eHealth apps · Biomarkers · Cognitive tools Dementia has been raised as a priority issue for the World Health Organization, as the impact on patients and their families is enormous. Based on estimates by the International Alzheimer’s Association, 55 million people suffer from dementia
# The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Vlamos (ed.), GeNeDis 2022, Advances in Experimental Medicine and Biology 1424, https://doi.org/10.1007/978-3-031-31982-2_30
worldwide, a number that will double by 2030 and more than triple by 2050, while it is the seventh leading cause of death worldwide and one of the most costly diseases for society [15]. In the last decade, the interest of the scientific community has focused on primary care and the importance of early preventive screening of health issues, before they reach the point of deterioration [27]. Primary care serves as the first point of contact for people with dementia and is therefore a promising setting for screening, assessment, and initiation of specific treatment and dementia care [24]. According to the findings of the International Alzheimer's Association, despite the fact that millions of people suffer from dementia, less than 25% of cases have been diagnosed [15]. Especially in the case of dementia, the majority of patients arrive at a possible diagnosis only after marked symptoms appear or family members raise concerns. It is a fact that dementia is under-recognized in middle-aged and older adults [16]. Primary care clinicians face a variety of barriers related to the diagnosis of dementia, such as the time constraints of patient visits, the complexity of the diagnostic process, a lack of effective treatment, and concern about causing distress to patients and their families [13, 17]. The literature shows that primary care settings often fail to diagnose dementia, as nearly 90% of cases are likely to be missed [7], while a previous study concluded that 66% of patients with dementia are not diagnosed in primary care settings [2]. On the other hand, an exception comes from Irish general practitioners (GPs), who reported that using appropriate tools achieves a diagnosis in 92% of cases [10]. Regarding immigrant dementia screening, recent literature highlights late diagnosis by health professionals, as in many cases symptoms of cognitive disorders are accepted as normal memory changes [21].
Language and cultural barriers also make it complicated for general practitioners to diagnose such cases, as assessing cognitive abilities with the help of tests is an additional challenge [23]. Another important finding of Sagbakken's study is that one possible reason for the underdiagnosis of
M. Sagiadinou et al.
minority patients with dementia is the inadequacy of the diagnostic tools administered, such as the Mini-Mental State Examination (MMSE), which is the most widely used assessment tool worldwide; however, it requires a good knowledge of the language and geography and addresses a population with a good level of education [12]. Over 475,000 cases of dementia are estimated among immigrants living in Europe [5], while a later epidemiological study estimated 686,000 cases of mild cognitive impairment, with an even larger estimate for the extended European Union [6]. As reported by that study [6], in Greece the estimated cases of mild cognitive impairment reached 194,579 people among immigrants aged 60–89 years. Drawing on data from the field of neuroimaging, it is an established scientific fact that before symptoms of dementia appear, there is a long presymptomatic and preclinical period of pathological events in limbic brain regions that primarily affect episodic memory [18]. Cognitive assessment can play an important role in facilitating screening, and even the diagnosis, of dementia [1]. In some cases, cognitive tests can detect cognitive deficits 10 years before a clinical diagnosis of dementia, and recent research suggests that screening may detect deficits even 18 years before a clinical diagnosis of Alzheimer's disease [22]. As the focus worldwide shifts to providing personalized management of dementia and reducing its negative impact, in economic, psychological, and ethical terms, for both patients and caregivers, the need for early detection becomes critical. Early detection will allow clinicians, as well as primary healthcare staff, to offer a broader perspective of pharmacological or non-pharmacological treatments and help organize the next steps [14] and the overall patient environment.
The development of information and communication technology (ICT) has created an opportunity to improve health [11], through various eHealth interventions using platforms such as web applications. Studies show that these online applications can be effective in changing health behaviors and improving overall
health status by addressing different needs, such as screening, promotion, and health counseling [20]. Online screening has the added advantage of being an attractive and effective method for primary prevention and health promotion [9].
30.1
Methodology
Our goal was to propose a brief, inexpensive, noninvasive strategy for screening for dementia in the general, multicultural population and persons with disabilities, through a web-based app with a tailored multicomponent design. Our basic research hypothesis was that if cognitive markers are combined with biomarkers and individual risk factors, then we will have the best results for identifying highly vulnerable dementia patients. Additional research hypotheses were as follows: (a) if dementia screening can be assessed via an online application validly and reliably, then it could be administered by primary healthcare staff at patients' routine wellness visits; (b) if appropriate psychometric tools and biomarker models are utilized, then it is possible to assess different population groups, such as the general, multicultural population and people with disabilities, with the same online application. After a thorough review of the literature, two advisory groups of experts were set up. The first group consisted of clinical and experimental psychologists, and its task was to propose adequate, validated, brief cognitive tools for screening early dementia. The second group consisted of general doctors and neurologists, and its task was to propose those biomarkers that have a high value concerning the risk factors for dementia and would be most appropriate for primary healthcare settings. After knowledge extraction, a prototype was designed and tested before the final application was ready for assessment. Inclusion criteria for test selection were a brief assessment time, good psychometric values, availability of norms, limited educational, linguistic, or socioeconomic bias, multiple language translations, no use license needed, and online availability. It was also of great importance
the need to select tests that would evaluate both episodic memory and executive functioning. The final tools selected were the GPCOG, RUDAS, GDS-4, and the CAIDE model.

General Practitioner Assessment of Cognition: The General Practitioner Assessment of Cognition (GPCOG) evaluates time orientation, word recall, and recall of a recent event and includes a clock-drawing test [3]. The tool has an optional additional six-question set addressed to an informant (GPCOG-I), examining changes noticed recently, but for the sake of time it was not administered in this study. Sensitivity reaches 82% and specificity 70%, and administration takes less than 4 min. The GPCOG is validated in primary care settings, and one of its benefits is that it can be administered by general physicians as well as other staff such as nurses, psychologists, and physiotherapists.

Rowland Universal Dementia Assessment Scale: The Rowland Universal Dementia Assessment Scale (RUDAS) is a validated measure for detecting dementia that is valid across cultures [26]. It is designed as a six-item multicultural cognitive test that measures memory, gnosis, praxis, visuospatial skills, judgment, and language. It takes about 5 min and examines recognition of body parts, visuospatial function, reasoning, and memory. A validation study showed 89% sensitivity and 98% specificity, and the scale is unaffected by gender, education, and first language. RUDAS is one of the few cognitive scales that can facilitate dementia screening for immigrant populations, with the assistance of an interpreter. Moreover, there is a complete version of RUDAS for subjects with severe hearing problems.

Geriatric Depression Scale: The Geriatric Depression Scale (GDS) is a screening tool developed to detect depression in the elderly by distinguishing symptoms of dementia from those of depression [29].
In this study, we used the short-form GDS, which consists of just 4 rather than 30 questions, making the assessment brief and easy while preserving its psychometric values. It is free in multiple versions and many languages, available online at www.Stanford.edu/yesavage/GDS.html.

M. Sagiadinou et al.

The GDS takes about 1–2 min to complete. The GDS short form has been shown to differentiate between depressed and nondepressed elderly primary care patients with a sensitivity of 0.814 and a specificity of 0.754. It is the second most frequently used tool for screening depressive symptoms among the elderly and is recommended by the World Health Organization [28].

The Cardiovascular Risk Factors, Aging and Dementia (CAIDE) Risk Score is the first midlife tool that combines modifiable and nonmodifiable factors and was developed for dementia prediction [19]. It is composed of vascular and sociodemographic risk factors, such as age, education, blood pressure, cholesterol, body mass index (BMI), and physical activity; based on the midlife risk profile, it provides a 20-year dementia risk estimate. The CAIDE model has been tested in multiple settings, such as memory and general clinics, and is associated with vascular brain pathology at autopsy, cognitive impairment, dementia, and neuroimaging measures of gray matter [25]. Body mass index has been shown to be associated with dementia, but research indicates that the relationship differs depending on the age at which BMI is measured [8].

The System Usability Scale (SUS) is the most widespread scale for evaluating the usability of a system [4]. It includes ten questions answered on a five-point Likert scale, from Strongly Disagree to Strongly Agree; five are formulated as positive statements and the remaining five as negative statements. The small number of questions reflects Brooke's [4] aim of constructing a quick and simple questionnaire, as he believed this would make the results more reliable.
30.2 Procedure
Two separate groups of experts were created with the aim of proposing adequate cognitive markers, as well as biomarkers, that would best screen for dementia in primary care settings. For the implementation phase, 16 users/primary healthcare professionals from both the public and private sectors participated, recruited through the Local Medical Association of Corfu and the researcher's personal network. The assessment was carried out as part of the scheduled regular patient visit. Every participant in the study signed a consent form. When the administration phase was completed, an online version of the System Usability Scale questionnaire was sent to the users/primary healthcare professionals along with a thank-you e-mail, asking them to evaluate the usability of the application.
30.3 Results
A web app called Dementia was designed and implemented. Dementia is a web-based application built on MariaDB, PHP, AngularJS, and Material AngularJS. It is a database management application (storage and retrieval of exam and patient data). It provides the ability to insert tests dynamically and can work in multiple languages. It supports multi-level users as well as simultaneous use of the application by multiple users from any browser anywhere in the world. In total, 132 patients participated in the study, 28 of whom were excluded because they scored high on the Geriatric Depression Scale-4 (GDS-4). The mean age was 61.49 years (age range 45–85 years). In terms of multicultural background, 63.4% (N = 66) were from Greece, 22.1% (N = 23) from Albania, 6.7% (N = 7) from Great Britain, 4.8% (N = 5) from Italy, 2% (N = 2) from Denmark, and 1% (N = 1) from Qatar. Moreover, 22.1% (N = 23) of subjects were persons with disabilities, while 38.5% (N = 40) had already been diagnosed with dementia. None of the participants with a diagnosis of dementia scored well in the overall results. According to the χ2 test of independence conducted, there is a statistically significant association between the two variables, total score and dementia diagnosis. According to the descriptive statistics, the Greek participants in the sample present a lower average in the GPCOG and CAIDE test scores, but a higher average in the RUDAS test score, than the participants from other countries.
30 Improving Patient-Centered Dementia Screening for General, Multicultural. . .
According to the results of the t-test, these differences between the multicultural populations are statistically significant (p < 0.05 in all three cases) (Table 30.1). The cross-tabulation of the overall test results reveals a number of differences between participants with and without disabilities. More specifically, 84% of nondisabled participants failed the overall assessment, compared with 16% who passed. In contrast, among participants with disabilities the results are more evenly divided (52.2% success, 47.8% failure). According to the χ2 test of independence carried out, there is a statistically significant association between the two variables, overall result and disability (p = 0.001); therefore, these differences are considered statistically significant (Table 30.2). Concerning the users of the application, 16 professionals from primary care settings
Table 30.1 t-test’s results from multicultural population Levene’s test for equality of variances
GPCOG points
RUDAS points
CAIDE points
Equal variances assumed Equal variances not assumed Equal variances assumed Equal variances not assumed Equal variances assumed Equal variances not assumed
F 3.947
Sig. 0.05
t-test for equality of means
t df -2.177 102
95% confidence interval of the Sig. Mean Std. error difference Upper (2-tailed) difference difference Lower 0.032 -1.05357 0.48396 -2.0135 -0.09364
-2.151 93.037 0.034
2.805
1.085
0.097
0.3
2.182 101
-1.05357 0.48989
-2.0264
-0.08075
0.031
2.77273 1.27054
0.25232
5.29313
2.172 96.898 0.032
2.77273 1.27654
0.23912
5.30633
-3.747 101
0
-3.06326 0.81756
-4.68508 -1.44144
-3.731 97.223 0
-3.06326 0.82093
-4.69252 -1.43399
Table 30.2 Results from persons with disabilities (counts and percentages within disability status)

Total result        Without disability   With disability   Total
  Failed            68 (84.0%)           12 (52.2%)        80 (76.9%)
  Success           13 (16.0%)           11 (47.8%)        24 (23.1%)
Total               81 (100.0%)          23 (100.0%)       104 (100.0%)
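The reported association (p = 0.001) can be reproduced from the counts in Table 30.2 with a χ² test of independence. A verification sketch using SciPy (without Yates' continuity correction; not the authors' code):

```python
from scipy.stats import chi2_contingency

# Counts from Table 30.2: rows = overall result (failed, success),
# columns = disability status (without, with)
table = [[68, 12],
         [13, 11]]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
# chi2 is about 10.2 with 1 degree of freedom; p is about 0.001
```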
270
M. Sagiadinou et al.
Table 30.3 System Usability Scale scores (response percentages refer to agree/strongly agree for the positively worded odd-numbered items and to disagree/strongly disagree for the negatively worded even-numbered items)

1. I think that I would like to use this website frequently: 87.50%
2. I found this website unnecessarily complex: 68.80%
3. I thought this website was easy to use: 50%/50%
4. I think that I would need assistance to be able to use this website: 75.00%
5. I found the various functions in this website were well integrated: 75.00%
6. I thought there was too much inconsistency in this website: 81.30%
7. I would imagine that most people would learn to use this website very quickly: 50%/50%
8. I found this website very cumbersome/awkward to use: 81.30%
9. I felt very confident using this website: 56.3%/31.3%
10. I needed to learn a lot of things before I could get going with this website: 68.8%/25%
participated, half of them from the public sector and half from private practice. The overall SUS score was 95.5, so the app is considered "excellent, easy to use" (Table 30.3).
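For reference, SUS responses are conventionally scored by rescaling each of the ten Likert answers and multiplying the sum by 2.5, which yields a 0–100 value such as the 95.5 reported here. A generic sketch of Brooke's standard scoring rule (the example responses are hypothetical, not the study's data):

```python
def sus_score(responses):
    """Standard SUS scoring for a list of ten 1-5 Likert answers.
    Odd-numbered items are positively worded and contribute (score - 1);
    even-numbered items are negatively worded and contribute (5 - score).
    The sum of contributions is scaled by 2.5 to give a 0-100 score."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical single respondent giving the best possible answers:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```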
30.4 Discussion
According to the results of this survey, the combination of cognitive markers and biomarkers gives the best results for detecting early dementia. None of the patients who already had an official diagnosis of dementia successfully completed the administration of the application. Moreover, it is possible to evaluate the general and multicultural population and persons with disabilities with the same application. A high positive evaluation was also recorded from the users concerning the usability of the system and their intention to continue administering it in primary care settings. The web-based app Dementia is a particularly useful tool because (a) it can reach rural areas, (b) it can be administered by health personnel with minimal training, and (c) it appears suitable for older people with little or no formal education. The geography of Greece and the Mediterranean countries is taken into account: rural areas lie far from the urban fabric of cities, where health structures and primary medical care centers are usually concentrated.
From an ethical point of view, it is vital that predictive tests are governed by ethical principles that serve the needs of both individuals and society. Enhancement through online applications and new technologies, such as the Dementia app, supports the innovative direction of preventive screening and highlights the role of the primary health sector. The limitations of this research relate first to the size of the patient sample, which was limited, explained to some extent by the unfortunate timing of the COVID-19 pandemic. In addition, although each of the individual tools has been evaluated and has good psychometric properties, the application as a whole remains to be evaluated. Furthermore, a combination of quantitative and qualitative study would have enriched the research. For future research, it would be of great importance to provide the information in an easy, accessible way that does not require good knowledge of the language or a high educational level, which could support operation of the application in the direction of self-administration. The proposed protocol, in which a web-based app lays the foundations for personalized, patient-centered medicine, and the holistic approach through the contribution of each scientific field can indeed offer the best possible support to the patient.
References

1. Aretouli, E., Tsilidis, K. K., & Brandt, J. (2013). Four-year outcome of mild cognitive impairment: The contribution of executive dysfunction. Neuropsychology, 27(1), 95–106. https://doi.org/10.1037/a0030481
2. Boustani, M., Peterson, B., Hanson, L., Harris, R., & Lohr, K. N. (2003). Screening for dementia in primary care: A summary of the evidence for the U.S. Preventive Services Task Force. Annals of Internal Medicine, 138(11), 927. https://doi.org/10.7326/0003-4819-138-11-200306030-00015
3. Brodaty, H., Pond, D., Kemp, N., Luscombe, G., Harding, L., Berman, K., et al. (2002). The GPCOG: A new screening test for dementia designed for general practice. Journal of the American Geriatrics Society, 50(3), 530–534.
4. Brooke, J. (1995). System Usability Scale: A quick and dirty usability scale. CRC Press, London. https://doi.org/10.1201/9781498710411
5. Canevelli, M., Lacorte, E., Cova, I., Zaccaria, V., Valletta, M., Raganato, R., Bruno, G., Bargagli, A. M., Pomati, S., Pantoni, L., & Vanacore, N. (2019). Estimating dementia cases amongst migrants living in Europe. European Journal of Neurology, 26(9), 1191–1199. https://doi.org/10.1111/ene.13964
6. Canevelli, M., Zaccaria, V., Lacorte, E., Cova, I., Remoli, G., Bacigalupo, I., Cascini, S., Bargagli, A., Pomati, S., Pantoni, L., & Vanacore, N. (2020). Mild cognitive impairment in the migrant population living in Europe: An epidemiological estimation of the phenomenon. Journal of Alzheimer's Disease, 73(2), 715–721. https://doi.org/10.3233/JAD-191012
7. Carpenter, C., Banerjee, J., Keyes, D., Eagles, D., Schnitker, L., Barbic, D., Fowler, S., & La Mantia, M. A. (2018). Accuracy of dementia screening instruments in emergency medicine: A diagnostic meta-analysis. Academic Emergency Medicine. https://doi.org/10.1111/acem.13573
8. Danat, I., Clifford, A., Partridge, M., Zhou, W., Bakre, A., Chen, A., et al. (2019). Impacts of overweight and obesity in older age on the risk of dementia: A systematic literature review and a meta-analysis. Journal of Alzheimer's Disease, 70(s1), 587–599.
9. Darling-Fisher, C. S., Salerno, J., Dahlem, C. H. Y., & Martyn, K. K. (2014). The Rapid Assessment for Adolescent Preventive Services (RAAPS): Providers' assessment of its usefulness in their clinical practice settings. Journal of Pediatric Health Care, 28(3), 217–226. https://doi.org/10.1016/j.pedhc.2013.03.003
10. Dyer, A., Foley, T., O'Shea, B., & Kennelly, S. P. (2018). Dementia diagnosis and referral in general practice: A representative survey of Irish general practitioners. Irish Medical Journal, 111(4), 735.
11. Elbert, N., van Os-Medendorp, H., van Renselaar, W., Ekeland, A., Hakkaart-van Roijen, L., Raat, H., Nijsten, T., & Pasmans, S. (2014). Effectiveness and cost-effectiveness of eHealth interventions in somatic diseases: A systematic review of systematic reviews. 22(3), 234–241.
12. Folstein, M. F., Folstein, S. E., & McHugh, P. R. (1975). "Mini-mental state": A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12(3), 189–198.
13. Galvin, J., Roe, C., Powlishta, K., Coats, M., Muich, S., Grant, E., Miller, J., Storandt, M., & Morris, J. C. (2005). The AD8: A brief informant interview to detect dementia. Neurology, 65(4), 559–564.
14. Gaugler, J., Roth, D., Haley, W., & Mittelman, M. S. (2011). Modeling trajectories and transitions. Nursing Research, 60(Supplement), S28–S37. https://doi.org/10.1097/NNR.0b013e318216007d
15. Gauthier, S., Rosa-Neto, P., Morais, J., & Webster, C. (2021). World Alzheimer Report 2021: Journey through the diagnosis of dementia (abridged version). Alzheimer's Disease International.
16. Han, J., Bryce, S., Ely, E., Kripalani, S., Morandi, A., Shintani, A., Jackson, J., Storrow, A., Dittus, R., & Schnelle, J. (2011). The effect of cognitive impairment on the accuracy of the presenting complaint and discharge instruction comprehension in older emergency department patients. Annals of Emergency Medicine, 57(6), 662–671.e2. https://doi.org/10.1016/j.annemergmed.2010.12.002
17. Hinton, L., Franz, C. E., Reddy, G., Flores, Y., Kravitz, R. L., & Barker, J. C. (2007). Practice constraints, behavioral problems, and dementia care: Primary care physicians' perspectives. Journal of General Internal Medicine, 22(11), 1487–1492. https://doi.org/10.1007/s11606-007-0317-y
18. Jack, C., Petersen, R., Xu, Y., Waring, S., O'Brien, P., Tangalos, E., Smith, G., Ivnik, R., & Kokmen, E. (1997). Medial temporal atrophy on MRI in normal aging and very mild Alzheimer's disease. Neurology, 49(3), 786–794. https://doi.org/10.1212/WNL.49.3.786
19. Kivipelto, M., Ngandu, T., Laatikainen, T., Winblad, B., Soininen, H., & Tuomilehto, J. (2006). Risk score for the prediction of dementia risk in 20 years among middle aged people: A longitudinal, population-based study. The Lancet Neurology, 5(9), 735–741.
20. Ooi, C. Y., Ng, C. J., Sales, A. E., & Lim, H. M. (2020). Implementation strategies for web-based apps for screening: Scoping review. Journal of Medical Internet Research, 22(7), e15591. https://doi.org/10.2196/15591
21. Prorok, J., Horgan, S., & Seitz, D. P. (2013). Health care experiences of people with dementia and their caregivers: A meta-ethnographic analysis of qualitative studies. Canadian Medical Association Journal, 185(14), E669–E680. https://doi.org/10.1503/cmaj.121795
22. Rajan, K., Wilson, R., Weuve, J., Barnes, L., & Evans, D. (2015). Cognitive impairment 18 years before clinical diagnosis of Alzheimer disease dementia. Neurology, 85(10), 898–904. https://doi.org/10.1212/WNL.0000000000001774
23. Sagbakken, M., Spilker, R., & Nielsen, T. (2018). Dementia and immigrant groups: A qualitative study of challenges related to identifying, assessing, and diagnosing dementia. BMC Health Services Research, 18(1), 910. https://doi.org/10.1186/s12913-018-3720-7
24. Spenceley, S., Sedgwick, N., & Keenan, J. (2015). Dementia care in the context of primary care reform: An integrative review. Aging & Mental Health, 19(2), 107–120. https://doi.org/10.1080/13607863.2014.920301
25. Stephen, R., Ngandu, T., Liu, Y., Peltonen, M., Antikainen, R., Kemppainen, N., et al. (2021). Change in CAIDE dementia risk score and neuroimaging biomarkers during a 2-year multidomain lifestyle randomized controlled trial: Results of a post-hoc subgroup analysis. The Journals of Gerontology: Series A, 76(8), 1407–1420.
26. Storey, J., Rowland, J., Conforti, D., & Dickson, H. (2004). The Rowland Universal Dementia Assessment Scale (RUDAS): A multicultural cognitive assessment scale. International Psychogeriatrics, 16(1), 13–31.
27. Thyrian, J., Hertel, J., Wucherer, D., Eichler, T., Michalowsky, B., Dreier-Wolfgramm, A., Zwingmann, I., Kilimann, I., Teipel, S., & Hoffmann, W. (2017). Effectiveness and safety of dementia care management in primary care. JAMA Psychiatry, 74(10), 996. https://doi.org/10.1001/jamapsychiatry.2017.2124
28. WHO. (2010). Monitoring the building blocks of health systems: A handbook of indicators and their measurement strategies. World Health Organization.
29. Yesavage, J., Brink, T., Rose, T., Lum, O., Huang, V., Adey, M., et al. (1982). Development and validation of a geriatric depression screening scale: A preliminary report. Journal of Psychiatric Research, 17(1), 37–49.
31 Improved Regularized Multi-class Logistic Regression for Gene Classification with Optimal Kernel PCA and HC Algorithm

Nwayyin Najat Mohammed
Abstract
A significant challenge in high-dimensional and big data analysis is the classification and prediction of the variables of interest. Massive genetic datasets are complex, and gene expression datasets are enriched with useful genes associated with specific diseases such as cancer. In this study, we used two gene expression datasets from the Gene Expression Omnibus and preprocessed them before classification. We used optimal kernel principal component analysis, in which the optimal kernel function was chosen, for dataset dimensionality reduction and extraction of the most important features. The gene sets with a high validity index were collected using a combined hierarchical clustering and optimal kernel principal component analysis (KHC-RLR) algorithm. Logistic regression is one of the most common methods for classification and has been shown to be a useful approach for gene expression data analysis. In this study, we used multi-class logistic regression to classify the collected gene sets. We found that ordinary logistic regression caused a major overfitting problem; therefore, we used regularized multi-class logistic regression to classify the gene sets. The proposed KHC-RLR algorithm showed high performance and satisfactory accuracy measures.

N. N. Mohammed (✉), Department of Computer Science, College of Science, University of Sulaimani, Sulaymaniyah, Iraq. e-mail: [email protected]

Keywords
Multi-class logistic regression · Kernel functions · Principal component analysis · Hierarchical clustering · Regularization · Accuracy · Recall · Precision · Gene expression data · Preprocessing · Classification
31.1 Introduction
Gene expression profiling and analysis are usually designed to verify one or more hypotheses that may help in constructing effective diagnostic or prognostic models. Generally, expression data are collected from groups that exhibit certain differences, such as response to specific treatments, disease types, and developmental stages. Microarray technologies can monitor the expression of tens of thousands of genes in parallel and produce vast amounts of important data. These technologies have made huge contributions to experimental molecular biology; however, the management and analysis of microarray data remain a major obstacle to the wider use of these technologies. Raw microarray datasets are transformed into gene expression matrices in which rows represent
# The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Vlamos (ed.), GeNeDis 2022, Advances in Experimental Medicine and Biology 1424, https://doi.org/10.1007/978-3-031-31982-2_31
genes, columns represent various samples such as tissues or experimental conditions, and the number in each cell is the expression level of each gene in a sample. The matrices are analyzed to extract information about the underlying biological processes. Gene expression profiles that indicate the dynamic functioning of each gene can be built by measuring gene transcription levels in an organism under various conditions, at different developmental stages, and in different tissues. Gene expression profiling data contains more informative genes than are routinely extracted using standard approaches. Preprocessing techniques are used to obtain expression values from multiple samples that can be compared. These techniques include (1) scaling, where expression values are scaled so that each sample has an equal value for a statistic such as median; (2) adjusting, where expression values are adjusted so that each sample has the same expression distribution across genes; and (3) normalization, where expression values are normalized across the different groups in a study. The z-score normalization method is one of the common techniques that has been used to preprocess gene expression data [1, 2]. In this study, the robust multi-array average method was used to normalize the gene expression values of two gene expression datasets. Gene expression data are normalized to minimize non-biological effects for reliable comparisons between multiple arrays so that the distribution of probe intensities is the same across whole arrays [3]. Accurate measures of differential expression and powerful strategies are required to identify high-confidence gene sets with biologically relevant changes in transcription levels. Differentially expressed genes are detected using a fold change threshold that can be varied to suit the selected datasets [4]. 
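The preprocessing steps above can be sketched in a few lines of NumPy. This is a minimal illustration with toy data and an assumed threshold, not the study's actual pipeline (the study itself used the robust multi-array average method): z-score normalization scales each gene to mean 0 and variance 1, and a fold-change filter flags genes whose group means differ beyond a threshold on a log2 scale.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy expression matrix: rows = genes, columns = samples (illustrative only)
expr = rng.lognormal(mean=3.0, sigma=0.5, size=(100, 6))

# z-score normalization: each gene (row) scaled to mean 0, variance 1
z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)

# Fold-change filter: compare group means (samples 0-2 vs 3-5) on a log2 scale
log2fc = np.log2(expr[:, :3].mean(axis=1)) - np.log2(expr[:, 3:].mean(axis=1))
threshold = 1.0                               # tunable, e.g. |log2 FC| >= 1
de_genes = np.flatnonzero(np.abs(log2fc) >= threshold)
```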
When many variables are present, a dimension reduction procedure is usually run to reduce the variable scope before subsequent analysis is carried out. Gene expression microarray data generally have many variables with unknown correlation structures. Dimension reduction is conducted to obtain a shortened list of genes
N. N. Mohammed
that still includes all the relevant genes. Principal component analysis (PCA) is a statistical dimension reduction technique that has been applied to large datasets with high dimensionality. The aim of PCA is to reduce the dimensionality of a dataset and create a new set of variables while retaining as much of the original information as possible. PCA orthogonally transforms a set of correlated variables into a set of linearly uncorrelated variables, the principal components [5]. In this study, kernel principal component analysis (KPCA) was conducted to reduce the dimensions of the gene expression datasets. KPCA extends linear PCA by mapping the data into a high-dimensional feature space; ordinary linear PCA is then performed in that feature space. Only the pairwise inner products between feature vectors are required for this computation, not the explicit feature vectors. This makes it possible to apply the kernel trick, in which all inner products are replaced by a kernel function chosen to define the feature space. This extension from linear to kernel PCA has been widely applied in recent years [6]. We chose the polynomial kernel as the optimal kernel function for the KPCA because it performed better than the Gaussian kernel. The gene sets were then formed and collected using the hierarchical clustering (HC) algorithm, and the gene sets with a high validity index were chosen for further analysis. In many fields, there is increasing interest in identifying groupings of the "objects" under study that best represent certain measured similarity relationships. For example, large arrays of data are often collected while strong theoretical structures are lacking; the problem is then to discover whether there is any inherent structure in the data themselves. HC has many applications because it provides a view of data at different levels of abstraction, making it easier to visualize and interactively explore large data collections.
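The KPCA-plus-HC combination described above can be sketched with NumPy and SciPy alone. This is a minimal illustration under stated assumptions (the toy matrix, kernel degree, and cluster count are arbitrary), not the chapter's implementation: compute the polynomial kernel matrix, double-center it, eigendecompose, project onto the leading components, then cluster the projections hierarchically.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def polynomial_kpca(X, degree=3, n_components=2, c=1.0):
    """Kernel PCA with the polynomial kernel k(x, y) = (x.y + c)^degree."""
    K = (X @ X.T + c) ** degree                        # kernel (Gram) matrix
    n = K.shape[0]
    ones = np.ones((n, n)) / n
    Kc = K - ones @ K - K @ ones + ones @ K @ ones     # double-centering
    vals, vecs = np.linalg.eigh(Kc)                    # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]             # sort descending
    vals = np.clip(vals[:n_components], 0.0, None)     # guard tiny negatives
    # Projections of the training points onto the leading components
    return vecs[:, :n_components] * np.sqrt(vals)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))               # stand-in for an expression matrix
Z = polynomial_kpca(X, degree=3, n_components=2)

# Agglomerative (Ward) clustering of the projections into 3 gene sets
labels = fcluster(linkage(Z, method="ward"), t=3, criterion="maxclust")
```

Cluster quality against a reference labeling could then be scored with the adjusted Rand index (e.g., `sklearn.metrics.adjusted_rand_score`), as the chapter does.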
HC was used in this study to gather and form gene sets prior to classification [7]. Logistic regression, an extension of linear regression, is used to model dichotomous variables that usually represent the presence or absence of an event, and it is a principal method for classifying large datasets and for machine learning [8]. Logistic regression models the probability of a discrete outcome given an input variable. The most common logistic regression models have a binary outcome, with two values such as true/false [9]. Multi-class logistic regression (MLR) is an extension of standard logistic regression used when the outcome can take more than two categories [10]. Regularization techniques are used to avoid the overfitting issues with logistic regression that arise mainly when the number of parameters to be estimated is large and the number of available samples is small. Regularization enhances the generalization of predictive models [11, 12]. In this study, we tested two regularization methods, L1 (lasso) and L2, with logistic regression. The performance of the proposed algorithm was evaluated using accuracy, precision, and recall metrics. In this paper, we describe the proposed optimal KPCA combined with HC (KPCA-HC) algorithm, MLR, the regularization methods, the microarray gene expression datasets, and the classification evaluation criteria, and we present the results and discussion.
31.2 Methodology

31.2.1 Gathering Gene Sets Based on Optimal Kernel Principal Component Analysis (KPCA) and Hierarchical Clustering (HC) Algorithms

PCA is commonly used to reduce the dimensionality of data while retaining as much of the variation in a dataset as possible. PCA finds the orthonormal feature space that has maximum variability. KPCA is a nonlinear extension of linear PCA that describes the nonlinear structure of input data and reduces dimensionality more coherently than linear PCA. The nonlinear principal components are calculated using a kernel function to replace the dot product in the high-dimensional feature space. Polynomial and Gaussian kernels are the most common kernel functions used for real data. For the polynomial kernel, the dimensionality of the mapping function grows with the degree d. Integrating PCA with a kernel function yields the KPCA algorithm, which was designed for feature reduction. KPCA performs much better than linear PCA, and the effect of the specific kernel function on the PCA depends on the dataset to which it is applied.

HC is simple to use and reflects the process by which each object is divided step by step using an HC tree. HC can be agglomerative or divisive. Agglomerative HC is built from the bottom up: each data point starts as an individual cluster, and clusters do not overlap. In divisive HC, the data points start in a single cluster that is successively divided into multiple clusters.

In this study, we used a polynomial kernel function with PCA, testing polynomials with different d values. The polynomial KPCA was applied to two gene expression datasets for dimension reduction; then, HC was conducted to cluster and organize the gene expression data. The performance of the optimal KPCA and HC (KHC) algorithm was evaluated using adjusted Rand index (ARI) values, and the gene sets with high ARI values were selected and reorganized for classification [13].

31.2.2 Classification with the Regularized Multi-class Logistic Regression (RMLR) Algorithm

Supervised learning methods are used to build concise models of the distribution of class labels for predictor features. Supervised classification is frequently carried out using intelligent systems, and many techniques have been developed based on artificial intelligence and statistics, including logic-based techniques and Bayesian networks [14].

31.2.2.1 Multi-class Logistic Regression (MLR)
Logistic regression is a machine learning method that is intuitive and easily understood. Logistic regression has a specific expression formula, and the model is relaxed and flexible in its assumptions. There is no requirement that the
independent variables are normally distributed or linearly related, or that equal variance exists within each group. Being free from these assumptions, logistic regression is a tool that can be used in many applications [15]. MLR is a supervised learning algorithm for designing classifiers that can distinguish k classes using L labeled training samples, where feature vectors are given as the input for classification. The MLR algorithm requires a training phase and a testing phase. The L training samples with known class labels are defined as D_L = \{(X_1, Y_1), \ldots, (X_L, Y_L)\}. The posterior class distribution under the general MLR model is computed for a maximum a posteriori (MAP) estimation of the regressors w [16]. The general MLR model is defined as

    P(y_i = k \mid x_i, w) = \frac{\exp\big(W^{(k)} x_i\big)}{\sum_{k=1}^{K} \exp\big(W^{(k)} x_i\big)},    (31.1)

where W^{(k)} is the set of logistic regressors for class k; w is defined as (W^{(1)T}, \ldots, W^{(k-1)T}), where W^{(k)} is generally set to zero because the kth conditional probability is found by subtracting the sum of the estimated regressors of the (k - 1) classes from unity; and x = (x_1, \ldots, x_i) represents the feature vectors selected for training the model.

31.2.2.2 Regularized Logistic Regression (RLR)
Regularization is a standard method used to avoid the overfitting that can occur when a fitted model has many feature variables with relatively large regression coefficients (θ). The aim of regularization is to add to the loss function of the regression model a term that penalizes large coefficients, thus balancing the objective function and yielding a regression coefficient θ for which the new objective function attains its minimum [17].

Lasso Penalty
Lasso (L1 regularization), originally proposed by Tibshirani [18], performs continuous shrinkage and automatic feature selection simultaneously, thereby generating a sparse solution for the learning model. L1 regularization, which we used in this study, has been widely applied in many fields. The L1 regularization term is defined as

    P(w; \lambda_1) = \lambda_1 \sum_{j=1}^{p} |w_j|,    (31.2)

where \lambda_1 is the regularization tuning parameter. L1 regularization can achieve implicit feature selection and performs well in high-dimensional, low-correlation settings. The model (Eq. 31.1) with penalty (Eq. 31.2) has been named L1-regularized logistic regression [18].

L2 Regularization
L2 regularization, also called Tikhonov regularization [19], can be expressed as

    P(w; \lambda_2) = \lambda_2 \sum_{j=1}^{p} w_j^2.    (31.3)

This approach can identify features that are highly correlated, but L2 regularization cannot generate sparse models, and all features are always kept in the final solution. Thus, it does not provide a usefully interpretable model [20] (Fig. 31.1).

31.2.3 Classification Evaluation Criteria

Many performance criteria have been used to evaluate RMLR methods. For example, precision/recall metrics have been used in information retrieval, where high precision and recall values indicate a good classifier. The most widely used metrics are the correct classification rate, the error rate, and the classification accuracy. Accuracy is the proportion of the total number of predictions that are correct and is highly dependent on the dataset distribution [21]. In this study, accuracy (A) was calculated as

    A = \frac{TN + TP}{TN + FN + TP + FP}.    (31.4)
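As an illustration of the L1 and L2 penalties (this is a generic sketch on synthetic data, not the chapter's code or datasets), the following fits penalized multinomial logistic regression and scores it with accuracy; for the multi-class case, Eq. (31.4) generalizes to the fraction of correct predictions:

```python
# Illustrative sketch, not the chapter's pipeline: L1- vs L2-penalized
# multinomial logistic regression (Eqs. 31.1-31.3) on synthetic data,
# scored with accuracy (the multi-class analogue of Eq. 31.4).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=500, n_informative=20,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

results = {}
for penalty, solver in [("l1", "saga"), ("l2", "lbfgs")]:
    clf = LogisticRegression(penalty=penalty, solver=solver, C=0.5,
                             max_iter=5000).fit(X_tr, y_tr)
    acc = float((clf.predict(X_te) == y_te).mean())
    nnz = int(np.count_nonzero(clf.coef_))
    results[penalty] = {"accuracy": acc, "nonzero_coefficients": nnz}
    print(penalty, results[penalty])
```

With the L1 penalty, most of the regressor entries shrink exactly to zero, which is the implicit feature selection described above; the L2 solution keeps every feature in the final model.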
31
Improved Regularized Multi-class Logistic Regression for Gene. . .
Fig. 31.1 Flowchart of the improved regularized multi-class logistic regression for gene classification with the optimal kernel principal component analysis and hierarchical clustering (KHC) algorithm
[Fig. 31.1 depicts the pipeline stages: Datasets → Data pre-processing → Optimal kernel function → Gathering genes by KHC → Detecting best L1 regularization penalty → Gene set classification with L1-regularized logistic regression]
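The stages of the flowchart can be sketched with standard components; the skeleton below is hypothetical (synthetic data, RBF kernel assumed), and the actual optimal-kernel selection and clustering criteria are those of ref. [13], not reproduced here:

```python
# Hypothetical skeleton of the KHC-RLR stages of Fig. 31.1 on synthetic
# data; the real optimal-kernel and clustering criteria are those of
# ref. [13].
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 300))      # samples x genes (synthetic stand-in)
y = rng.integers(0, 2, size=60)     # sample class labels

X = StandardScaler().fit_transform(X)                            # data pre-processing
Z = KernelPCA(n_components=10, kernel="rbf").fit_transform(X)    # kernel step (RBF assumed)
gene_groups = AgglomerativeClustering(n_clusters=5).fit_predict(X.T)  # gather genes by HC

group0 = np.where(gene_groups == 0)[0]                           # one gathered gene set
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(X[:, group0], y)                                         # L1-regularized classification
train_acc = clf.score(X[:, group0], y)
print("genes in set 0:", len(group0), "training accuracy:", train_acc)
```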
31.2.4 Gene Expression Datasets
31.2.4.1 EATM Dataset This dataset (Gene Expression Omnibus accession number GSE84000) contains the metabolic signatures of adipose tissue macrophages (ATMs) in lean and obese conditions. Transcriptome analysis, real-time flux measurements, enzyme-linked immunosorbent assay, and several other approaches were used to determine the metabolic signatures and inflammatory status of the ATMs. The dataset is composed of 8 samples with expression data of 35,557 genes [22]. 31.2.4.2 ATM Dataset This dataset (Gene Expression Omnibus accession number GSE14312) contains expression data of white ATMs that were treated with macrophage-conditioned media; control cells were treated with unconditioned media. The media were conditioned for 4 or 24 h. Agilent arrays comprising 44,000 probes were used to analyze gene expressions, and matrix
metalloproteinases were identified as key genes. The dataset is composed of 36 samples [23].
31.3 Results and Discussion
The proposed KHC-RLR algorithm was applied to the ATM and EATM gene expression datasets. The gene sets with high adjusted Rand index values that were gathered by KPCA-HC were classified, and high accuracy was obtained. Both the L1 and L2 regularization penalties were used with MLR, and L1 regularization gave much better results. For the ATM dataset, the gene set classification accuracy (A) and the improvement in MLR performance obtained with the Lasso regularization penalty are shown in Table 31.1. The results of KHC-RLR for the EATM dataset are shown in Table 31.2. In both cases, the L1 and L2 regularization penalties were applied with MLR to the gathered gene sets.

Table 31.1 RLR and optimal KPCA with HC algorithm accuracy for the ATM dataset

    Dataset   KPCA-HC-RLR   Accuracy
    ATM       L1, RLR       81%
    ATM       L2, RLR       64%

RLR regularized logistic regression, KPCA kernel principal component analysis, HC hierarchical clustering

Table 31.2 RLR and optimal KPCA with HC algorithm accuracy for the EATM dataset

    Dataset   KPCA-HC-RLR   Accuracy
    EATM      L1, RLR       87%
    EATM      L2, RLR       60%

RLR regularized logistic regression, KPCA kernel principal component analysis, HC hierarchical clustering
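The adjusted Rand index used to select the gathered gene sets can be computed directly; a minimal illustration with hypothetical labelings:

```python
# Adjusted Rand index: 1.0 for identical partitions (up to relabeling),
# near zero or negative for chance-level agreement.
from sklearn.metrics import adjusted_rand_score

same = adjusted_rand_score([0, 0, 1, 1], [1, 1, 0, 0])   # identical partition, relabeled
diff = adjusted_rand_score([0, 0, 1, 1], [0, 1, 0, 1])   # maximal disagreement
print(same, diff)
```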
31.4 Conclusions
In this study, we proposed an L1-regularized multi-class logistic regression (RMLR) combined with an optimal kernel principal component analysis and hierarchical clustering (KHC) algorithm for solving the classification problem of microarray gene expression data. We applied the proposed KHC-RMLR algorithm to two gene expression datasets and gathered gene sets into clusters with high adjusted Rand index values. The gene expression datasets were then classified using L1-RMLR. The performance of the proposed KHC-RMLR was much better when applied to the gathered sets of gene expression data, as shown by the high accuracy values obtained. The proposed KHC-RMLR achieved competitive classification accuracies.
References 1. A. Brazma and J. Vilo, “Gene expression data analysis,” FEBS Letters, vol. 480, pp. 17–24, 2000. 2. A. Belorkar and L. Wong, “GFS: fuzzy preprocessing for effective gene expression analysis,” BMC Bioinformatics, vol. 17, pp. 169–184, 2016. 3. C. S. Kim, S. Hwang, and S.-D. Zhang, “Rma with Quantile normalization Mixes Biological Signals
Between Different Sample Groups in Microarray Data Analysis,” in 2014 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2014, pp. 139–143. 4. I. V. Yang, E. Chen, J. P. Hasseman, W. Liang, B. C. Frank, S. Wang, et al., “Within the fold: assessing differential expression measures and reproducibility in microarray assays,” Genome Biology, vol. 3, pp. 1–13, 2002. 5. H. Wang and M. J. van der Laan, “Dimension reduction with gene expression data using targeted variable importance measurement,” BMC Bioinformatics, vol. 12, pp. 1–12, 2011. 6. M. Debruyne and T. Verdonck, “Robust kernel principal component analysis and classification,” Advances in Data Analysis and Classification, vol. 4, pp. 151–167, 2010. 7. M. J. Embrechts, C. J. Gatti, J. Linton, and B. Roysam, “Hierarchical Clustering for Large Data Sets,” in Advances in Intelligent Signal Processing and Data Mining, ed: Springer, 2013, pp. 197–233. 8. S. Domínguez-Rodríguez, M. Serna-Pascual, A. Oletto, S. Barnabas, P. Zuidewind, E. Dobbels, et al., “Machine learning outperformed logistic regression classification even with limit sample size: A model to predict pediatric HIV mortality and clinical progression to AIDS,” PloS One, vol. 17, p. e0276116, 2022. 9. T. Edgar and D. Manz, “Research Methods for Cyber Security,” Syngress, 2017. 10. R. M. de Souza, F. J. A. Cysneiros, D. C. Queiroz, and A. D. A. Roberta, “A Multi-class Logistic Regression Model for Interval Data,” in 2008 IEEE International Conference on Systems, Man and Cybernetics, 2008, pp. 1253–1258. 11. S. Ongkittikul, J. Suwatcharakulthorn, K. Chutisowan, and K. Ratanarangsank, “Covolutional Multinomial Logistic Regression for Face Recognition,” in 2020 8th International Electrical Engineering Congress (iEECON), 2020, pp. 1–4. 12. A. Arafa, M. Radad, M. Badawy, and N. El-Fishawy, “Regularized Logistic Regression Model for Cancer Classification,” in 2021 38th National Radio Science Conference (NRSC), 2021, pp. 251–261. 13. N. N. 
Mohammed and C. J. Mohammed, “Enhanced Determination of Gene Groups Based on Optimal Kernel PCA with Hierarchical Clustering Algorithm,” in 2021 55th Annual Conference on Information Sciences and Systems (CISS), 2021, pp. 1–5. 14. S. B. Kotsiantis, I. D. Zaharakis, and P. E. Pintelas, “Machine learning: a review of classification and combining techniques,” Artificial Intelligence Review, vol. 26, pp. 159–190, 2006. 15. C. Negoiţă and M. Praisler, “Logistic regression classification model identifying drugs of abuse based on their ATR-FTIR spectra: Case study on LASSO and Ridge regularization methods,” in 2019 6th International Symposium on Electrical and Electronics Engineering (ISEEE), 2019, pp. 1–4.
16. O. Behadada, M. Trovati, M. A. Chikh, N. Bessis, and Y. Korkontzelos, “Logistic Regression Multinomial for Arrhythmia Detection,” in 2016 IEEE 1st International Workshops on Foundations and Applications of Self* Systems (FAS* W), 2016, pp. 133–137. 17. L. Li and Z.-P. Liu, “A connected network-regularized logistic regression model for feature selection,” Applied Intelligence, pp. 1–31, 2022. 18. R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society: Series B (Methodological), vol. 58, pp. 267–288, 1996. 19. T. Robert, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society: Series B (Methodological), vol. 58, pp. 267–288, 1996. 20. C. Liu and H. San Wong, “Structured penalized logistic regression for gene selection in gene expression data analysis,” IEEE/ACM Transactions on
Computational Biology and Bioinformatics, vol. 16, pp. 312–321, 2017. 21. A. E. Hoerl and R. W. Kennard, “Ridge regression: Biased estimation for nonorthogonal problems,” Technometrics, vol. 12, pp. 55–67, 1970. 22. L. Boutens, G. J. Hooiveld, S. Dhingra, R. A. Cramer, M. G. Netea, and R. Stienstra, “Unique metabolic activation of adipose tissue macrophages in obesity promotes inflammatory responses,” Diabetologia, vol. 61, pp. 942–953, 2018. 23. A. O’Hara, F.-L. Lim, D. J. Mazzatti, and P. Trayhurn, “Microarray analysis identifies matrix metalloproteinases (MMPs) as key genes whose expression is up-regulated in human adipocytes by macrophage-conditioned medium,” Pflügers Archiv: European Journal of Physiology, vol. 458, pp. 1103–1114, 2009.
32 Mathematical Study of the Perturbation of Magnetic Fields Caused by Erythrocytes

Maria Hadjinicolaou and Eleftherios Protopapas
Abstract
The purpose of this chapter is the mathematical study of the perturbation of a homogeneous static magnetic field caused by the embedding of a red blood cell. Analytical expressions for the magnetic potential and the magnetic strength vector are derived. From the obtained results, it emerges that the magnetic field inside the red blood cell is not uniform and that its magnitude depends on the orientation of the erythrocyte. The expressions for the magnetic field quantities are significant in applications such as magnetic resonance imaging and magnetic resonance spectroscopy.

Keywords
Magnetic potential · Magnetic field strength vector · Red blood cell · Inverted prolate spheroidal coordinates · Non-uniformity
M. Hadjinicolaou Hellenic Open University, School of Science and Technology, Patras, Greece e-mail: [email protected] E. Protopapas (✉) National Technical University of Athens, School of Applied Mathematical and Physical Sciences, Athens, Greece e-mail: [email protected]
32.1 Introduction
Magnetic fields (MFs) are used in medicine for diagnostic and therapeutic purposes through specialized instruments and techniques. A characteristic example is the Magnetic Resonance Imaging (MRI) technique, which creates images of the organs of the body, providing information about their physical and anatomical condition as well as their functionality. It is based on the Nuclear Magnetic Resonance (NMR) phenomenon, while recent advances of the method concern diffusion MRI (dMRI) and functional MRI (fMRI) for studying neuronal activity and blood flow [13], respectively. Additionally, magnetoencephalography (MEG) provides records of the magnetic fields induced by brain activity. Although magnetic fields and electromagnetic fields (EMFs) exist in nature, exposure to them is often considered a cause of health problems such as cancer [16]. Nevertheless, their therapeutic usage is of great value [15]. For example, clinical studies and biological research show acceleration of healing processes, or pain relief, after the application of MF treatment [1]. At the molecular level, new magnetic point-of-care (POC) devices allow rapid diagnostic testing by utilizing the different magnetic susceptibilities of molecules. For example, POC technologies for molecular diagnostics use a drop of blood from a finger prick, gaining quick,
# The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Vlamos (ed.), GeNeDis 2022, Advances in Experimental Medicine and Biology 1424, https://doi.org/10.1007/978-3-031-31982-2_32
accurate, and inexpensive results for different biomarkers [9, 14]. The study of the effects of magnetic fields on the rheological properties of blood is also of great importance, as blood affects the function of many organs in the human body. It has been observed that at low shear rates there is a tendency toward decreased blood viscosity or an increased erythrocyte sedimentation rate (ESR). We recall at this point that the main parameters determining blood viscosity are the haematocrit value, erythrocyte aggregability and deformability, and the plasma viscosity. The effect of the magnetic field has been studied either on whole blood or only on blood plasma, and the results show that the plasma viscosity differs before and after the application of a magnetic field [10]. Of particular interest is the kinematic response of the cells (cell migration) under an applied magnetic field, the so-called magnetophoretic mobility (MM), as "it is directly proportional to the particle field interaction parameter, and inversely proportional to the cell friction coefficient" [17]. As MM is based only on the magnetic susceptibility of the cell, it turns out to be a more reliable method, since the measurements do not depend on the experimental device. Regarding the red blood cell (RBC), due to its haemoglobin, which may be either oxidized or not, it may exhibit different magnetic behaviour, i.e., diamagnetic or paramagnetic, accordingly. In this chapter, we examine this behaviour assuming a mathematical model of a magnetostatic field applied to an RBC of arbitrary orientation. The obtained analytical results depict this divergent behaviour and indicate a kind of non-uniformity of the magnetic field inside the RBC, which contrasts with previous assumptions on the shape of the RBC (spherical or prolate) under which the magnetic field inside the cell had been proved to be homogeneous [5, 6, 11]. In [11], the authors proved the homogeneity of the magnetic field inside the central sphere of concentric spheres, while Kuchel et al. [6] came up with the same result for confocal prolate and oblate spheroids, in full compliance with [5]. Moreover, in [7], the uniformity of the
magnetic field in the interior of two kinds of cylinders is also discussed. The structure of the manuscript is as follows. In Sect. 32.2, some physical prerequisites are presented, while in Sect. 32.3 the problem at hand is mathematically formulated. In Sect. 32.4, the solution of the problem is derived, and in Sect. 32.5, sample magnetic lines are depicted indicatively. Finally, in Sect. 32.6, a discussion of the obtained results and a physical explanation are given.
32.2 Physical Prerequisites
The magnetic field strength, H, establishes the magnetic flux density, B; if μ is the magnetic permeability of the medium, the relation that connects B and H is

    B = \mu H.    (32.1)

According to Maxwell's equations [2], the curl of the magnetic field strength equals zero, \nabla \times H = 0, so there exists a potential ϕ such that H = -\nabla\phi, and therefore B = -\mu\nabla\phi. Another of Maxwell's equations states that \nabla \cdot B = 0, so \nabla \cdot (\mu\nabla\phi) = 0, and when the magnetic permeability is constant, the potential ϕ must satisfy \nabla \cdot \nabla\phi = 0, or equivalently the Laplace equation \Delta\phi = 0.
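The chain from Maxwell's equations to Δϕ = 0 can be checked symbolically; as a small illustration, the uniform-field potential of Eq. (32.3) below is harmonic:

```python
# Symbolic check (illustrative): the uniform-field potential Φ0 of
# Eq. (32.3) satisfies the Laplace equation Δϕ = 0.
import sympy as sp

x, y, z, H0x, H0y, H0z = sp.symbols("x y z H0x H0y H0z")
phi0 = -(H0x * x + H0y * y + H0z * z)
laplacian = sp.diff(phi0, x, 2) + sp.diff(phi0, y, 2) + sp.diff(phi0, z, 2)
print(laplacian)  # 0
```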
32.3 Statement of the Problem
A uniform magnetic field

    H_0 = (H_{0x}, H_{0y}, H_{0z}) = H_0\,(\sin(\beta)\cos(a), \sin(\beta)\sin(a), \cos(\beta))    (32.2)

of strength H0, which forms an angle a with the major axis of a red blood cell (RBC), is imposed (Fig. 32.1), while the corresponding potential [5] is

    \Phi_0(x, y, z) = -(H_{0x}\,x + H_{0y}\,y + H_{0z}\,z).    (32.3)
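Numerically, the imposed field and its potential read as follows (illustrative sample values; 9.4 T and π/4 match values used for the figures later in the chapter):

```python
# Numeric form of Eqs. (32.2)-(32.3) at illustrative sample values.
import numpy as np

H0, a, beta = 9.4, np.pi / 4, np.pi / 2       # strength (T) and orientation angles
H = H0 * np.array([np.sin(beta) * np.cos(a),
                   np.sin(beta) * np.sin(a),
                   np.cos(beta)])             # Eq. (32.2)
r = np.array([1.0, 0.0, 2.0])                 # sample point (x, y, z)
phi0 = -np.dot(H, r)                          # Eq. (32.3)
print(H, phi0)
```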
Fig. 32.1 Denoting the spaces in the meridian plane of the RBC

Taking into account that the RBC's shape is a biconcave disc, an RBC resembles an inverted prolate spheroid, and therefore the inverted prolate spheroidal system of coordinates is the appropriate one for formulating the problem. The inverted prolate spheroid has been used as the RBC's shape for modelling the sedimentation of an RBC [4]. The inverted prolate spheroid has magnetic permeability μ2 and is immersed in an infinite medium with magnetic permeability μ1. If Φi(r), i = 1, 2, and Hi(r), i = 1, 2, are the magnetic potentials and the total magnetic fields in the exterior (i = 1) and the interior (i = 2) of the inverted spheroid, respectively, and V1 and V2 are the corresponding spaces, it holds that

    H_1(r) = -\nabla\Phi_1(r), \quad r \in V_1,    (32.4)

    H_2(r) = -\nabla\Phi_2(r), \quad r \in V_2,    (32.5)

where r is the position vector and r = |r|. For the external field, the condition expressing that the field is uniform of strength H0 at infinite distance (far-field condition) is

    \lim_{r \to +\infty} H_1 = H_0, \qquad \lim_{r \to +\infty} \Phi_1 = \Phi_0.    (32.6)

The problem at hand is completed by the following continuity conditions:

    \Phi_1(r_s) = \Phi_2(r_s)    (32.7)

and

    \big[\mu_1\,(s \cdot \nabla\Phi_1(r))\big]_{r = r_s} = \big[\mu_2\,(s \cdot \nabla\Phi_2(r))\big]_{r = r_s},    (32.8)

where r = r_s expresses the surface, S, of the inverted spheroid. The far-field condition (32.6) is translated into the Cartesian coordinates (x, y, z) as

    \Phi_1(r) = -H_0\,[z\cos(a) + x\sin(a)], \quad r \to +\infty.    (32.9)

The continuity of the magnetic field (32.7) is written as

    \Phi_1(r) = \Phi_2(r), \quad r \in S,    (32.10)

while the continuity of its normal derivatives (32.8) is translated to

    \mu_1\,\frac{\partial\Phi_1(r)}{\partial n} = \mu_2\,\frac{\partial\Phi_2(r)}{\partial n}, \quad r \in S.    (32.11)

Applying the inverted prolate spheroidal coordinates [12] (τ, ζ, ϕ), where τ ∈ (1, +∞), ζ ∈ [-1, 0) ∪ (0, 1], ϕ ∈ [0, 2π], the problem at hand is translated to

    \Delta\Phi_1(\tau, \zeta, \phi) = 0, \quad (\tau, \zeta, \phi) \in V_1,    (32.12)

    \Delta\Phi_2(\tau, \zeta, \phi) = 0, \quad (\tau, \zeta, \phi) \in V_2,    (32.13)

    \Phi_1(\tau, \zeta, \phi) = -\frac{H_0\,\tau\zeta\cos(a)}{c\,(\tau^2 + \zeta^2 - 1)} - \frac{H_0\sqrt{\tau^2 - 1}\,\sqrt{1 - \zeta^2}\,\sin(a)\cos(\phi)}{c\,(\tau^2 + \zeta^2 - 1)}, \quad \text{as } (\tau, \zeta) \to (1, 0),    (32.14)

    \Phi_1(\tau_0, \zeta, \phi) = \Phi_2(\tau_0, \zeta, \phi),    (32.15)

    \mu_1\left.\frac{\partial\Phi_1(r)}{\partial\tau}\right|_{\tau = \tau_0} = \mu_2\left.\frac{\partial\Phi_2(r)}{\partial\tau}\right|_{\tau = \tau_0},    (32.16)

where τ = τ0 is the surface of the inverted prolate spheroid.
The Laplace operator in the inverted prolate spheroidal coordinate system has the form

    \Delta = \frac{c^2(\tau^2 + \zeta^2 - 1)^2}{\tau^2 - \zeta^2}\left[(\tau^2 - 1)\frac{\partial^2}{\partial\tau^2} + \frac{2\tau\zeta^2}{\tau^2 + \zeta^2 - 1}\frac{\partial}{\partial\tau} + (1 - \zeta^2)\frac{\partial^2}{\partial\zeta^2} - \frac{2\tau^2\zeta}{\tau^2 + \zeta^2 - 1}\frac{\partial}{\partial\zeta}\right] + \frac{c^2(\tau^2 + \zeta^2 - 1)^2}{(\tau^2 - 1)(1 - \zeta^2)}\frac{\partial^2}{\partial\phi^2}.    (32.17)

By applying R-separation of variables, we arrive at the R-separable form of the solution of Laplace's equation

    \Delta u(\tau, \zeta, \phi) = 0,    (32.18)

which is

    u(\tau, \zeta, \phi) = \sqrt{\tau^2 + \zeta^2 - 1}\,\sum_{n=0}^{+\infty}\sum_{m=0}^{+\infty}\big[A_n^m P_n^m(\tau) + B_n^m Q_n^m(\tau)\big]\big[C_n^m P_n^m(\zeta) + D_n^m Q_n^m(\zeta)\big]\big[F_n^m\cos(m\phi) + E_n^m\sin(m\phi)\big],    (32.19)

where P_n^m and Q_n^m are Legendre functions of the first and the second kind, respectively, of degree n and order m [8]. The notion of R-separation, and specifically the notion of R-semiseparation [3], appeared when studying the sedimentation of an RBC [4].

32.4 Solution of the Problem

The interior and the exterior of the inverted prolate spheroid contain ζ = 1, so the terms Q_n^m(ζ) cannot enter the expressions for Φ1, Φ2, because the Legendre functions of the second kind are singular at ζ = 1 [8]. Due to the orientation of the imposed magnetic field, the terms including sin(mϕ) do not enter the expressions for Φ1, Φ2 either. Moreover, since P_n^m(τ), n ≥ 2, is unbounded as τ → +∞, we derive that C_n^m = 0, m ≥ 2, and therefore the expressions for Φ1, Φ2 are

    \Phi_1(\tau, \zeta, \phi) = \sqrt{\tau^2 + \zeta^2 - 1}\,\sum_{n=0}^{+\infty}\sum_{m=0}^{+\infty}\big[A_n^m P_n^m(\tau) + B_n^m Q_n^m(\tau)\big]\,P_n^m(\zeta)\cos(m\phi),    (32.20)

    \Phi_2(\tau, \zeta, \phi) = \sqrt{\tau^2 + \zeta^2 - 1}\,\Big\{\sum_{n=0}^{+\infty}\Big(\big[C_n^0 P_n^0(\tau) + D_n^0 Q_n^0(\tau)\big]P_n^0(\zeta) + \big[C_n^1 P_n^1(\tau) + D_n^1 Q_n^1(\tau)\big]P_n^1(\zeta)\cos(\phi)\Big) + \sum_{n=2}^{+\infty}\sum_{m=2}^{+\infty} D_n^m Q_n^m(\tau)\,P_n^m(\zeta)\cos(m\phi)\Big\}.    (32.21)

Applying recurrence relations for the Legendre functions and orthogonality arguments, the unknown constants are calculated, and the potential for the exterior of the inverted prolate spheroid is
    \Phi_1(\tau, \zeta, \phi) = \sqrt{\tau^2 + \zeta^2 - 1}\,\sum_{n=0}^{+\infty}\Big[B_{2n+1}^0\,Q_{2n+1}^0(\tau)\,P_{2n+1}^0(\zeta) + B_{2n+1}^1\,Q_{2n+1}^1(\tau)\,P_{2n+1}^1(\zeta)\cos(\phi)\Big],    (32.22)

where

    B_{2n+1}^0 = \frac{-H_0\cos(a)}{c} \times \frac{(2n+1)^2(4n+5)^2\,w_n + (2n+2)^2(4n+1)^2\,w_{n+1}}{\big(2b_{2n+1}^0 - 1\big)(4n+1)^2(4n+5)^2},    (32.23)

    B_{2n+1}^1 = \frac{-H_0\sin(a)}{c} \times \frac{(4n+5)^2\,w_n + (4n+1)^2\,w_{n+1}}{\big(2b_{2n+1}^1 - 1\big)(4n+1)^2(4n+5)^2}    (32.24)

and

    w_n = \frac{(-1)^n(4n+1)(2n)!}{2^{2n}(n!)^2}, \qquad b_n^m = \frac{2n^2 + 2n - 2m^2 - 1}{(2n-1)(2n+3)}.    (32.25)

The potential for the interior of the inverted prolate spheroid is

    \Phi_2(\tau, \zeta, \phi) = \sqrt{\tau^2 + \zeta^2 - 1}\,f(\tau, \zeta, \phi),    (32.26)

with

    f(\tau, \zeta, \phi) = \sum_{n=0}^{+\infty}\Big\{\big[C_{2n+1}^0 P_{2n+1}^0(\tau) + D_{2n+1}^0 Q_{2n+1}^0(\tau)\big]P_{2n+1}^0(\zeta) + \big[C_{2n+1}^1 P_{2n+1}^1(\tau) + D_{2n+1}^1 Q_{2n+1}^1(\tau)\big]P_{2n+1}^1(\zeta)\cos(\phi)\Big\},    (32.27)

and the coefficients C_{2n+1}^0, C_{2n+1}^1 and D_{2n+1}^0, D_{2n+1}^1 are calculated through the linear systems defined by (32.28), (32.29) and (32.28), (32.30) for m = 0, 1:

    C_{2n+1}^m P_{2n+1}^m(\tau_0) + D_{2n+1}^m Q_{2n+1}^m(\tau_0) = B_{2n+1}^m Q_{2n+1}^m(\tau_0), \quad n \in \mathbb{N},    (32.28)

    \mu_2(\tau_0^2 - 1)\big[C_1^m {P'}_1^m(\tau_0) + D_1^m {Q'}_1^m(\tau_0)\big] = (\mu_1 - \mu_2)\big[\tau_0 B_1^m Q_1^m(\tau_0) + b_1^m B_1^m Q_1^m(\tau_0) + c_3^m B_3^m Q_3^m(\tau_0)\big] + \mu_1 B_1^m(\tau_0^2 - 1)\,{Q'}_1^m(\tau_0),    (32.29)

    \mu_2(\tau_0^2 - 1)\big[C_{2n+1}^m {P'}_{2n+1}^m(\tau_0) + D_{2n+1}^m {Q'}_{2n+1}^m(\tau_0)\big] = (\mu_1 - \mu_2)\big[\tau_0 B_{2n+1}^m Q_{2n+1}^m(\tau_0) + a_{2n-1}^m B_{2n-1}^m Q_{2n-1}^m(\tau_0) + b_{2n+1}^m B_{2n+1}^m Q_{2n+1}^m(\tau_0) + c_{2n+3}^m B_{2n+3}^m Q_{2n+3}^m(\tau_0)\big] + \mu_1 B_{2n+1}^m(\tau_0^2 - 1)\,{Q'}_{2n+1}^m(\tau_0), \quad n \in \mathbb{N}^*,    (32.30)

where

    a_n^m = \frac{(n-m+1)(n-m+2)}{(2n+1)(2n+3)}, \qquad c_n^m = \frac{(n+m)(n+m-1)}{(2n+1)(2n-1)}.    (32.31)

32.5 Magnetic Lines

In this section, several magnetic lines are depicted using the first three terms of the series expansions of the obtained potentials Φ1, Φ2, which according to our study are adequate to represent the magnetic lines and therefore the field characteristics. In Fig. 32.2, we draw magnetic lines with Φ1 = Φ2 = ±1, ±2, ±3, a = 0 rad, c = 1, μ1 = 1, μ2 = 100, τ0 = 1.15, and H0 = 1 T, H0 = 3 T, and H0 = 6 T, respectively. In Fig. 32.3, we draw magnetic lines with Φ1 = Φ2 = ±1, ±2, ±3, a = 0 rad, c = 1, H0 = 9.4 T, μ2 = 100, τ0 = 1.15, and μ1 = 1, μ1 = 80, and μ1 = 120, respectively. In Fig. 32.4, magnetic lines with H0 = 9.4 T, c = 1, μ1 = 1, μ2 = 100, τ0 = 1.15, Φ1 = Φ2 = ±1, ±2, ±3, and a = 0 rad, a = π/4 rad, and a = π/2 rad are depicted, respectively. It is interesting to note in Fig. 32.3 the differences in the resulting magnetic field inside the RBC between the cases where the magnetic permeability of the exterior domain is less than or greater than that of the interior, which express the different magnetic character of the cell (paramagnetic or diamagnetic). In Fig. 32.4, the dependence of the magnetic field in the interior of the RBC on the orientation of the imposed exterior magnetic field is clearly demonstrated.
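As a concrete check on the ingredients of the truncated series, the sketch below evaluates the associated Legendre functions and the coefficients w_n, b_n^m of Eq. (32.25); it uses scipy's conventions and is illustrative, not the authors' code:

```python
# Ingredients of the truncated series: associated Legendre functions P_n^m
# (scipy, Condon-Shortley phase) and the coefficients w_n, b_n^m of Eq. (32.25).
import numpy as np
from math import factorial
from scipy.special import lpmv

zeta = 0.6
assert np.isclose(lpmv(0, 1, zeta), zeta)                   # P_1^0(ζ) = ζ
assert np.isclose(lpmv(0, 2, zeta), (3 * zeta**2 - 1) / 2)  # P_2^0(ζ) = (3ζ² - 1)/2
assert np.isclose(lpmv(1, 1, zeta), -np.sqrt(1 - zeta**2))  # P_1^1(ζ), CS phase

def w(n):  # w_n from Eq. (32.25)
    return (-1) ** n * (4 * n + 1) * factorial(2 * n) / (2 ** (2 * n) * factorial(n) ** 2)

def b(n, m):  # b_n^m from Eq. (32.25)
    return (2 * n**2 + 2 * n - 2 * m**2 - 1) / ((2 * n - 1) * (2 * n + 3))

print([w(n) for n in range(3)], b(1, 0), b(1, 1))  # [1.0, -2.5, 3.375] 0.6 0.2
```

For the first three terms used in the figures, w_0 = 1, w_1 = -2.5, and w_2 = 3.375.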
32.6 Discussion
Given that the rheological properties of blood are affected when a magnetic field is applied, in this chapter we study this phenomenon mathematically and propose a physical explanation of the obtained results.
Fig. 32.2 Magnetic lines with a = 0 rad, c = 1, μ1 = 1, μ2 = 100, τ0 = 1.15, and H0 = 1 T, H0 = 3 T, H0 = 6 T
Fig. 32.3 Magnetic lines with a = 0 rad, c = 1, H0 = 9.4 T, μ2 = 100, τ0 = 1.15, and μ1 = 1, μ1 = 80, μ1 = 120
Fig. 32.4 Magnetic lines with H0 = 9.4 T, c = 1, μ1 = 1, μ2 = 100, and τ0 = 1.15, and a = 0 rad, a = π∕4 rad, a = π∕2 rad
It is known that when a uniform magnetic field H0 is applied to an object of spherical or spheroidal (prolate or oblate) shape, the magnetic field inside the object is homogeneous [6, 7], i.e., the total magnetic field H is constant. In this chapter, having modelled the RBC as an inverted prolate spheroid, we obtain that the magnetic field in its interior, given by H2 = -∇Φ2, is
not homogeneous. This is due to the presence of the term √(τ² + ζ² - 1) in the mathematical expression of the magnetic potential Φ2 (32.26). More precisely, when calculating H2 = -∇Φ2, the nonlinear quantity √(τ² + ζ² - 1) cannot be written in separable form and thus cannot be simplified or eliminated, whereas the series part of Φ2 that multiplies this term is expressed in separable form as combinations of eigenfunctions of τ and of ζ, respectively. This probably reflects the particular geometrical shape of the object (RBC), which is modelled as an inverted prolate spheroid, a non-convex domain. This result may explain the small discrepancies in line shapes observed in NMR spectroscopy [6]. Different values of the magnetic permeability of the interior of the RBC and of the exterior solute in which the RBCs are suspended (Fig. 32.3) are taken into account, corresponding to different biochemical characteristics of the RBCs. In all the depicted magnetic lines, the non-uniformity of the field inside the RBC is verified, as expected. In the case where the magnetic permeability inside is much greater than that outside, as H0 increases (Fig. 32.2) the magnetic lines seem to "gather" along the long axis of the RBC as well as near its surface, parallel to the long axis. When the magnetic permeability in the interior of the RBC increases (Fig. 32.3), the field expands towards the entire surface of the RBC. Moreover, the magnetic field is angularly sensitive, i.e., when the imposed magnetic field changes orientation by an angle a, the interior magnetic field is also affected, as depicted by the magnetic lines in Fig. 32.4. In Figs. 32.2, 32.3, and 32.4, one may notice some discontinuity of the magnetic lines on the surface of the RBC, although continuity has been imposed through the boundary conditions.
This is due to the cut-off made for convenience in preparing the figures, while all the qualitative and quantitative information is given through the series expansions of the potentials Φ1, Φ2. Further investigation is planned for a forthcoming publication.
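The non-separability argument above can be made concrete symbolically; in this illustrative sketch, g and h stand for arbitrary separable factors of the series, not the full expansion:

```python
# Why the interior field is non-uniform: differentiating the factor
# sqrt(τ² + ζ² - 1) of Eq. (32.26) produces a term coupling τ and ζ.
# g, h are placeholder separable factors (illustrative only).
import sympy as sp

tau, zeta = sp.symbols("tau zeta", positive=True)
g = sp.Function("g")(tau)
h = sp.Function("h")(zeta)
R = sp.sqrt(tau**2 + zeta**2 - 1)
d_tau = sp.diff(R * g * h, tau)
print(sp.simplify(d_tau))  # contains tau*g*h/sqrt(tau**2 + zeta**2 - 1)
```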
References 1. Darendeliler M. A., Darendeliler A. and Sinclair P. M. (1997). Effects of static magnetic and pulsed electromagnetic fields on bone healing. Int. J. Adult Orthodon. Orthognath. Surg., 12:43–53. 2. Durrant J. C., Hertzberg P. M. and Kuchel P. W. (2003). Magnetic Susceptibility: Further Insights into Macroscopic and Microscopic Fields and the Sphere of Lorentz. Concepts in Magnetic Resonance Part A. Vol. 18A(1) 72–95. Published online in Wiley InterScience (www.interscience.wiley.com). https:// doi.org/10.1002/cmr.a.10067 3. Hadjinicolaou M. and Protopapas E. (2014). On the R-semiseparation of the Stokes bi-stream operator in the inverted prolate spheroidal geometry. Mathematical Methods in the Applied Sciences. 37, pages 207–211. 4. Hadjinicolaou M., Kamvyssas G. and Protopapas E. (2015). Stokes flow applied to the sedimentation of a red blood cell. Quarterly of Applied Mathematics. Vol. 73, No. 3, pp. 511–523. 5. Kraiger M. and Schnizer B. (2013). Potential and field of a homogeneous magnetic spheroid of arbitrary direction in a homogeneous magnetic field in Cartesian coordinates. The International Journal for Computation and Mathematics in Electrical and Electronic Engineering. Vol. 32, No. 3, pp. 936–960. 6. Kuchel W. P. and Bullian T. B. (1989). Perturbation of Homogeneous Magnetic Fields By Isolated Single and Confocal Spheroids. Implications for NMR Spectroscopy of Cells. NMR in Biomedicine. vol. 2, no. 4, 151–160. 7. Kuchel W. P., Chapman E. B., Bubb A. W., Hansen E. P., Durrant J. C. and Hertzberg P. M. (2003). Magnetic Susceptibility: Solutions, Emulsions, and Cells. Concepts in Magnetic Resonance. Part A, Vol. 18A(1) 56–71. 8. Lebedev N. N. (1972). Special Functions and Their Applications. Dover Publications. 9. Lee H., Shin T., Cheon J. and Weissleder R. (2015). Recent Developments in Magnetic Diagnostic Systems. Chem Rev.. 115(19):10690–724. doi: 10.1021/cr500698d. Epub 2015 Aug 10. PMID: 26258867; PMCID: PMC5791529. 10. 
Marcinkowska-Gapinska A. and Nawrocka-Bogusz H. (2013). Analysis of the magnetic field influence on the rheological properties of healthy persons blood. Biomed Res Int. 2013:490410. 11. Mendz G. L., Bulliman B. T., James N. L. and Kuchel P. W. (1989). Magnetic potential and field gradients of model cell. J. Theor. Biol. 137, 55–69. 12. Moon P., Spencer D. E. (1961). Field Theory Handbook, Springer-Verlag. 13. Pasek J., Pasek T., Sieroń-Stołtny K., Cieślar G. and Sieroń A. (2015). Electromagnetic fields in medicine The state of art. Electromagn Biol Med. 35(2):170–5. https://doi.org/10.3109/15368378.2015.1048549. Epub 2015 Jul 20.
14. Song Y., Huang Y.-Y., Liu X., Zhang X., Ferrari M. and Qin L. (2014). Point-of-care technologies for molecular diagnostics using a drop of blood. Trends Biotechnol. 32(3):132–9. https://doi.org/10.1016/j.tibtech.2014.01.003. 15. Shupak N. (2003). Therapeutic uses of pulsed magnetic-field exposure: a review. Radio Sci. Bull. 307:9–32.
16. Wertheimer N., Savitz D. A. and Leeper E. (1995). Childhood cancer in relation to indicators of magnetic fields from ground current sources. Bioelectromagnetics. 16:86–96. 17. Zborowski M., Ostera R. G., Moore R. L., Milliron S., Chalmers J. J. and Schechter N. A. (2003). Red Blood Cell Magnetophoresis. Biophysical Journal. Volume 84. 2638–2645.
33 Computational Models for Biomarker Discovery

Konstantina Skolariki, Themis P. Exarchos, and Panagiotis Vlamos
Abstract
Alzheimer's disease (AD) is a prevalent and debilitating neurodegenerative disorder characterized by progressive cognitive decline. Early diagnosis and accurate prediction of disease progression are critical for developing effective therapeutic interventions. In recent years, computational models have emerged as powerful tools for biomarker discovery and disease prediction in Alzheimer's and other neurodegenerative diseases. This paper explores the use of computational models, particularly machine learning techniques, in analyzing large volumes of data and identifying patterns related to disease progression. The significance of early diagnosis, the challenges in classifying patients at the mild cognitive impairment (MCI) stage, and the potential of computational models to improve diagnostic accuracy are examined. Furthermore, the importance of incorporating diverse biomarkers, including genetic, molecular, and neuroimaging indicators, to enhance the predictive capabilities of these models is highlighted. The paper also presents case studies on the application of computational models in simulating disease progression,
K. Skolariki (✉) · T. P. Exarchos · P. Vlamos Department of Informatics, Ionian University, Corfu, Greece e-mail: [email protected]
analyzing neurodegenerative cascades, and predicting the future development of Alzheimer's. Overall, computational models for biomarker discovery offer promising opportunities to advance our understanding of Alzheimer's disease, facilitate early diagnosis, and guide the development of targeted therapeutic strategies. Keywords
Computational models · Neurodegenerative diseases · Machine Learning · Alzheimer’s disease
33.1 Introduction
Alzheimer’s disease is an irreversible and progressive neurodegenerative disease and the most common form of dementia. It usually affects the elderly population, but its early onset is still possible [16]. Recent studies suggest that Alzheimer’s is a midlife disease [16]. Regardless of the onset of the disease, it is important to note that it takes years for the symptoms to manifest themselves. In particular, it is believed that Alzheimer’s begins 20 years before the onset of symptoms. The disease is divided into three general stages: preclinical, mild cognitive impairment (MCI), and dementia [11]. Researchers find it difficult to classify patients at the MCI stage. This is partly because
# The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Vlamos (ed.), GeNeDis 2022, Advances in Experimental Medicine and Biology 1424, https://doi.org/10.1007/978-3-031-31982-2_33
although patients with MCI appear to have neurological deficits, their symptoms are not advanced enough to meet the disease’s diagnostic criteria. MCI is also known as the stage between normal cognitive aging and dementia and is often regarded as the prodromal stage of Alzheimer’s [11]. Patients with MCI can either remain stable at this stage or progress to Alzheimer’s; approximately 20–40% of patients with MCI convert [11]. As with any disease, early diagnosis is key. Identifying, as early as possible, the changes in the brain that occur during the conversion from MCI to Alzheimer’s is therefore crucial to developing more effective treatments. Regarding Alzheimer’s disease, Petrella et al. [15] applied a causal model to simulate time-dependent biomarker data. They modeled pathological biomarkers (amyloid-β and tau) as well as biomarkers associated with neuronal loss and cognitive impairment as first-order nonlinear differential equations that capture the neurodegenerative cascades associated with amyloid-β. The computational model of early-onset Alzheimer’s reproduced the initial rise of amyloid-β, followed by biomarkers of tau and neurodegeneration and the onset of cognitive decline. Similarly, the late-onset computational models showed that biomarker levels were proportional to the magnitude of the comorbid pathology. Amyloid-β is considered an important hallmark and pathological feature of Alzheimer’s, and it is a key feature in designing computational models of the disease. Various computational models have been proposed based on the disease’s kinetics, with amyloid plaque formation a key biochemical concept for model design. Anastasio [3] developed a computational model of Alzheimer’s in which the regulatory pathway is described by interrelated equations.
In this model, molecular conditions were characterized by arbitrary integer values in the equations, and a set of rules governed how alterations in one element of the model change the levels of other elements. The model investigated the disruption of Aβ regulation through the
interconnection of various diseases and pathological processes, including cerebrovascular disease, infection, and oxidative stress; in particular, cerebrovascular disease was reported to contribute to the progression of Alzheimer’s. In vivo biomarkers have been used as diagnostic tools for neurodegenerative diseases. Among other things, these biomarkers include structural brain changes, which can be studied through imaging techniques such as magnetic resonance imaging (MRI) and positron-emission tomography (PET), as well as amyloid-β and tau measured in the cerebrospinal fluid. Advances in neuroimaging approaches have allowed researchers to look for patterns of change related to neurodegenerative diseases throughout the brain [19, 20, 23]. Image analysis from either MRI or PET, combined with preexisting in vivo biomarkers, has proved a reliable diagnostic tool. A great advance would be the ability to predict the future development of neurodegenerative diseases based on a combination of biomarkers.
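The biomarker cascade described above (amyloid-β rising first, followed by tau, neurodegeneration, and cognitive decline) can be illustrated with a toy system of first-order nonlinear differential equations. This is a didactic sketch in the spirit of Petrella et al. [15], not their published model; the logistic form and all rate constants are illustrative assumptions.

```python
# Toy biomarker cascade: each downstream biomarker grows logistically,
# driven by the biomarker upstream of it. All rate constants are
# illustrative assumptions, not fitted values.
import numpy as np
from scipy.integrate import solve_ivp

def cascade(t, y, k=(0.25, 0.20, 0.15, 0.10)):
    abeta, tau, neuro, cog = y
    d_abeta = k[0] * abeta * (1 - abeta)   # self-driven logistic growth
    d_tau   = k[1] * abeta * (1 - tau)     # driven by amyloid-beta
    d_neuro = k[2] * tau * (1 - neuro)     # driven by tau
    d_cog   = k[3] * neuro * (1 - cog)     # cognitive decline, driven last
    return [d_abeta, d_tau, d_neuro, d_cog]

# Small amyloid seed roughly two decades before symptoms.
sol = solve_ivp(cascade, (0, 40), [0.05, 0.01, 0.01, 0.01],
                t_eval=np.linspace(0, 40, 200))
abeta, tau, neuro, cog = sol.y
# Index at which each biomarker first crosses half-maximum:
order = [int(np.argmax(b >= 0.5)) for b in (abeta, tau, neuro, cog)]
print(order == sorted(order))
```

The half-maximum crossing times come out in cascade order (amyloid-β, then tau, then neurodegeneration, then cognition), which is exactly the qualitative sequence such models are built to formalize.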
33.2
Computational Models for Neurodegenerative Diseases
In the age of big data and personalized medicine, scientists have turned to alternative methods for analyzing large volumes of data, including machine learning (ML) techniques, which can be broadly divided into supervised and unsupervised learning. To create a decision support system for the collection and analysis of multidimensional data from patients with neurodegenerative diseases, ML techniques can be used to analyze multivariate data and identify emerging patterns. Supervised ML methods have been used to distinguish patients with Alzheimer’s from patients with mild cognitive impairment [2, 5, 6, 10, 22]; this separation can be based on many different biomarkers (e.g., tau, amyloid-β). Accurately predicting the onset of Alzheimer’s disease or related disorders has many important
33
Computational Models for Biomarker Discovery
practical applications. A study by Nori et al. [13] aimed to build a machine learning model for predicting mild cognitive impairment, Alzheimer’s disease, and related disorders from structured administrative and electronic health data. The model’s ability to distinguish cases of dementia suggests that it can be a useful tool for triaging patients for participation in clinical trials and for their general management. The majority of current approaches aim to help patients manage behavioral symptoms and slow the progression of others, such as memory loss and cognitive decline. Given the complexity of the disease, a single drug or therapeutic intervention seems unlikely to cure it. Predicting the exact point at which the disease progresses from the prodromal stage (MCI) to Alzheimer’s would therefore be extremely valuable for identifying new prevention mechanisms. Models based on machine learning (ML) techniques provide a promising opportunity to develop tools that can detect patient progression. ML methods such as decision trees and support vector machines have previously been used to classify patients who converted from MCI to Alzheimer’s versus patients who remained stable at the MCI stage [18]. Computational models can thus contribute to the early diagnosis of diseases and their more efficient treatment; they serve to elucidate the course of the disease and offer the ability to monitor its progression. So far, researchers have created mathematical models for various purposes, such as the further examination and screening of neurodegenerative disorders [1, 14, 17, 21] and the detection of cancer [4, 9], among many other applications. Another model uses the recurrent neural network (RNN) methodology.
RNNs can effectively address the problem of predicting the progression of Alzheimer’s by making full use of the inherent temporal and clinical patterns in patient history, and the approach can be applied to other problems of chronic disease progression
[21]. The proposed model performs better than models based on traditional machine learning methods such as support vector machines, mainly because it can fully capture and utilize patients’ temporal patterns together with their history. Given the complexity and heterogeneity of Alzheimer’s, an advanced prediction model that combines a group of characteristics should provide more meaningful insights for accurate, unbiased diagnostic and predictive approaches. These features will include, but are not limited to, ApoE genotype; plasma and cerebrospinal fluid protein levels (tau, Aβ, NFL); EEG markers; volumetric differences in mapped areas of the hippocampus; MRI (used to analyze specific areas of interest and classify brain areas affected by Alzheimer’s on a voxel scale); and PET [12]. The above are well-established indicators of Alzheimer’s. Therefore, including a wider combination of indicators would increase the accuracy of the model, supporting the general goal of developing therapeutic methods that integrate all types of biomarkers (genetic, molecular, cellular, etc.) in order to prevent the disease. Computational models play a crucial role in advancing our understanding of neurodegenerative diseases: they can simulate disease processes, explore underlying mechanisms, and aid the development of new treatment strategies. Mathematical models use equations to describe the dynamics of disease progression and neuronal degeneration, often incorporating variables such as protein aggregation, synaptic dysfunction, and neuronal death. By adjusting these variables accordingly, researchers can study the impact of different factors on disease progression and identify potential therapeutic targets.
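As a concrete illustration of the supervised approach described above, the sketch below trains a support vector machine to separate MCI patients who convert to Alzheimer's from those who remain stable, using three biomarker-style features. The data, group means, and feature choices are synthetic assumptions for demonstration only, not values from any of the cited studies.

```python
# Illustrative MCI-converter vs. MCI-stable classifier on synthetic
# biomarker features (hippocampal volume, CSF total tau, CSF Abeta-42).
# All distributions are simulated for illustration.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
# Stable MCI: larger hippocampi, lower tau, higher CSF Abeta-42.
stable = np.column_stack([rng.normal(3.4, 0.3, n),    # hippocampal volume (cm^3)
                          rng.normal(260, 50, n),     # CSF total tau (pg/mL)
                          rng.normal(800, 120, n)])   # CSF Abeta-42 (pg/mL)
converters = np.column_stack([rng.normal(2.9, 0.3, n),
                              rng.normal(360, 60, n),
                              rng.normal(550, 120, n)])
X = np.vstack([stable, converters])
y = np.array([0] * n + [1] * n)  # 1 = converted to Alzheimer's

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

Feature scaling before the SVM matters here because the three biomarkers live on very different numeric scales; the pipeline keeps that preprocessing inside each cross-validation fold.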
The development of a mathematical model that incorporates various cell types and pathological features of AD is a valuable approach to understanding the disease and exploring potential therapeutic strategies. Hao and Friedman [28] proposed a model which includes neurons, astrocytes, microglia, peripheral
macrophages, amyloid-β aggregation, and hyperphosphorylated tau proteins, and allows for the investigation of drug effects in AD. According to simulations conducted with this model, combined therapy with a TNF-α inhibitor (a drug that targets inflammation) and an anti-amyloid-β agent could significantly slow the progression of AD. This suggests that targeting both inflammation and amyloid-β pathology simultaneously could have a beneficial effect on the disease course. Further experimental and clinical studies are necessary to validate the predictions made by the model and to determine the actual efficacy and safety of the proposed combined therapy. Developing effective treatments for AD remains a complex and ongoing research challenge, and multiple approaches are being explored in the quest to find a cure or slow disease progression. Network models represent the brain as a complex network of interconnected neurons or brain regions. These models simulate the spread of pathological changes through the network and investigate how disruptions in network connectivity contribute to disease progression. Network models can provide insights into the spatial and temporal patterns of neurodegeneration and help identify critical nodes or pathways that could be targeted for intervention. Aberrant connectivity and network degeneration can be caused by factors such as demyelination, axonal injury, loss of signaling, and retraction of axons and dendrites; these changes in connectivity and network structure can contribute to the progression of neurodegenerative diseases. Another mechanism involves the direct propagation of disease factors along neural connections. This mechanism is associated with the “prion-like” spread of misfolded proteins, which can aggregate and transmit between cells, potentially contributing to disease progression. In this scenario, the network acts as a conduit for the transmission of disease-related pathology.
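This "conduit" picture can be made concrete: propagation along connections is often modeled as diffusion on the connectome graph, which is the essence of the network diffusion model discussed below. The sketch uses a toy five-node chain in place of a real connectome; the adjacency matrix, diffusivity, and seed location are illustrative assumptions.

```python
# Minimal graph-diffusion sketch: pathology x evolves by the graph
# heat equation dx/dt = -beta * L x, solved in closed form as
# x(t) = expm(-beta * L * t) @ x0, with L the graph Laplacian.
import numpy as np
from scipy.linalg import expm

A = np.array([[0, 1, 0, 0, 0],   # adjacency of a toy 5-node chain "connectome"
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A   # graph Laplacian
beta = 0.5                       # diffusivity (illustrative)
x0 = np.array([1.0, 0, 0, 0, 0]) # pathology seeded at node 0

x_t = expm(-beta * L * 5.0) @ x0  # spatial pattern after t = 5
print(np.round(x_t, 3))
```

Because the Laplacian has zero row sums, the total pathology load is conserved: diffusion only redistributes it along connections, spreading from the seed toward connected regions over time.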
The dynamics of neurodegeneration, whether driven primarily by network degeneration or disease propagation, is still a subject of debate among researchers. One specific model, the epidemic spreading model, has been successfully validated using PET
amyloid-β patterns in AD patients. Another example is the network diffusion model (NDM), which mathematically describes protein transmission behavior as a graph heat equation driven by connectivity. These mathematical models provide insights into the spread and dynamics of neurodegenerative diseases at both local/regional and whole-brain network levels, and they contribute to our understanding of the underlying mechanisms involved [27]. Agent-based models simulate the behavior and interactions of individual agents, such as neurons or immune cells, within a virtual environment. These models can capture the heterogeneity and complexity of neurodegenerative diseases by considering various factors, such as genetic variations, environmental influences, and cell-to-cell interactions. Agent-based models are particularly useful for studying disease mechanisms, exploring emergent properties, and testing the effects of different interventions. An agent-based modeling framework in NetLogo 3D was developed to investigate the potential role of a microbe, specifically Chlamydia pneumoniae, in late-onset AD. The objective of this initial model is to simulate the spatial and temporal pathway of bacterial propagation via the olfactory system, which could potentially contribute to the development of AD symptoms. According to the results obtained from the simulations based on the set of biological rules, infection by C. pneumoniae led to the formation of beta-amyloid (Aβ) plaques and neurofibrillary (NF) tangles, the characteristic pathological features of AD. The simulations also demonstrated immune responses triggered by the infection. The model showed that inhaling C. pneumoniae can result in the propagation of infection and a significant buildup of Aβ plaques and NF tangles in the olfactory cortex and hippocampus regions of the brain. The model further indicated that mucosal and neural immunity play a significant role in the considered pathway.
Lower immunities, associated with elderly individuals, resulted in faster and more pronounced formation of Aβ plaques and NF tangles. On the other hand, higher immunities, correlated with younger individuals, showed little to no formation of
these pathological features [26]. Machine learning algorithms can analyze large datasets, including genetic, clinical, and neuroimaging data, to identify patterns and make predictions. These models can aid in the early detection of neurodegenerative diseases, prediction of disease progression, and identification of potential biomarkers. Machine learning techniques, such as deep learning, have shown promise in analyzing brain images and genetic data to improve diagnostic accuracy and assist in treatment decision-making. Deep neural networks (DNNs) have emerged as powerful biomimetic models for cognitive processes and brain information processing. Tuladhar et al. [25] proposed a paradigm for modeling neural diseases using DNNs and demonstrated its application in modeling posterior cortical atrophy (PCA), an atypical form of Alzheimer’s disease affecting the visual cortex. The results revealed a progressive loss of object recognition capability in the injured networks, mirroring the visual agnosia observed in PCA patients. While this study represents an initial step toward modeling neural injury using DNNs, it is important to acknowledge certain limitations and outline future research directions. The integration of clinical data with in silico modeling holds promise for developing patient-specific computational models, which could contribute to the advancement of precision medicine in neurological diseases. As this paradigm extends beyond PCA, future investigations can explore its application in modeling other cognitive domains, such as motor control, auditory cognition, language processing, memory, and decision-making, facilitating the study of various neurological diseases [25]. Pharmacokinetic/pharmacodynamic models are another category of models that can be used in the framework of neurodegeneration.
These models simulate the absorption, distribution, metabolism, and excretion (pharmacokinetics) of drugs in the body, as well as their effects on the disease-related targets (pharmacodynamics). By integrating data on drug properties, patient characteristics, and disease progression, these models can help optimize drug dosing regimens, predict drug efficacy, and evaluate the impact of
different therapeutic interventions. It is important to note that these computational models are often used in conjunction with experimental studies and clinical observations to provide a comprehensive understanding of neurodegenerative diseases. They serve as powerful tools for hypothesis generation, testing, and refinement, ultimately aiding in the development of effective treatments for these challenging conditions. The pathological dysregulation of tau proteostasis and its transneuronal spread are prominent features of Alzheimer’s disease, emphasizing the urgent need for effective therapeutic interventions. Bloomingdale et al. [24] proposed a mechanistic mathematical model to enhance the comprehension of the pharmacokinetics and pharmacodynamics of tau antibodies in both animal models and humans. The primary objective was to facilitate the preclinical development and clinical translation of therapeutic tau antibodies for the treatment of Alzheimer’s disease. A physiologically based pharmacokinetic-pharmacodynamic (PBPK-PD) modeling approach was employed. This framework allowed for the integration of physiological factors and drug properties to simulate the behavior of tau antibodies in the body and their interaction with the target protein. Microdialysis studies in rats and nonhuman primates were conducted to evaluate the pharmacokinetics of a specific tau antibody. Overall, the PBPK-PD modeling approach offered valuable insights into the intricate dynamics between tau antibodies and their targets. This mechanistic understanding supported the optimization of lead candidates and informed predictions of dosing regimens likely to yield clinical efficacy [24].
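A full PBPK-PD model is far beyond a short example, but the core pharmacokinetic idea can be caricatured with a one-compartment model under repeated dosing. The half-life, dose increment, and dosing interval below are generic illustrative values, not parameters from Bloomingdale et al. [24].

```python
# One-compartment caricature of antibody pharmacokinetics with
# repeated bolus dosing: dC/dt = -k_el * C, plus a dose every tau_d
# hours. A didactic stand-in, not the published PBPK-PD model.
import numpy as np

k_el = np.log(2) / (21 * 24)   # elimination half-life ~21 days (typical of IgG)
dose_conc = 10.0               # concentration increment per dose (illustrative)
tau_d = 14 * 24                # dose every 14 days, expressed in hours
t_end, dt = 120 * 24, 1.0      # simulate 120 days at 1-hour steps

C = 0.0
conc = []
for step in range(int(t_end / dt)):
    if step % int(tau_d / dt) == 0:
        C += dose_conc          # bolus dose
    C *= np.exp(-k_el * dt)     # first-order elimination over one step
    conc.append(C)

# When the dosing interval is shorter than the half-life, the drug
# accumulates toward a steady-state peak dose / (1 - exp(-k_el * tau_d)).
css_max = dose_conc / (1 - np.exp(-k_el * tau_d))
print(round(conc[-1], 2), round(css_max, 2))
```

The simulated concentration climbs over the first few doses and then oscillates below the analytic steady-state peak, the kind of accumulation behavior dosing-regimen predictions rest on.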
33.3
Conclusions
In conclusion, computational models have emerged as valuable tools for biomarker discovery and disease prediction in Alzheimer’s disease and other neurodegenerative disorders. These models provide a means to analyze large volumes of data, identify patterns, and gain insights into disease progression. By incorporating diverse
biomarkers, including genetic, molecular, and neuroimaging indicators, computational models offer the potential to improve diagnostic accuracy and enable early intervention strategies. The ability to accurately predict disease onset and progression is crucial for identifying high-risk individuals, triaging patients for clinical trials, and developing personalized treatment approaches. Moreover, computational models contribute to our understanding of the complex mechanisms underlying Alzheimer’s disease, such as the role of amyloid-β and tau proteins, neurodegenerative cascades, and the impact of comorbid pathologies. These models not only facilitate the design and evaluation of therapeutic interventions but also guide the optimization of dosing regimens and the identification of potential drug targets. However, challenges remain, including the need for large and diverse datasets, standardized methodologies, and validation across different populations. Further advancements in computational modeling, integration of multi-omics data, and refinement of machine learning algorithms hold great promise for accelerating biomarker discovery and transforming Alzheimer’s research and clinical practice. By combining computational models with experimental validation, new insights into the underlying mechanisms of neurodegenerative diseases can be unlocked, paving the way for more effective treatments and preventive strategies.

Acknowledgments
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the third Call for HFRI PhD Fellowships (Fellowship Number: 6620).
References

1. Aich, S., et al. (2018). Prediction of neurodegenerative diseases based on gait signals using supervised machine learning techniques. Advanced Science Letters, 24(3), 1974–1978.
2. Aksu, Y., Miller, D., Kesidis, G., Bigler, D., & Yang, Q. (2011). An MRI-derived definition of MCI-to-AD conversion for long-term, automatic prognosis of MCI patients. PLoS ONE, 6(10), e25074.
3. Anastasio, T. (2011). Data-driven modeling of Alzheimer disease pathogenesis. Journal of Theoretical Biology, 290, 60–72.
4. Brady, R., & Enderling, H. (2019). Mathematical models of cancer: When to predict novel therapies, and when not to. Bulletin of Mathematical Biology, 81, 3722–3731.
5. Cuingnet, R., Gerardin, E., Tessieras, J., Auzias, G., Lehéricy, S., Habert, M., Chupin, M., Benali, H., & Colliot, O. (2011). Automatic classification of patients with Alzheimer’s disease from structural MRI: A comparison of ten methods using the ADNI database. NeuroImage, 56(2), 766–781.
6. Falahati, F., Westman, E., & Simmons, A. (2014). Multivariate data analysis and machine learning in Alzheimer’s disease with a focus on structural magnetic resonance imaging. Journal of Alzheimer’s Disease, 41(3), 685–708.
7. Fisun, M., & Horban, H. (2016). Implementation of the information system of the association rules generation from OLAP-cubes in the post-relational DBMS Caché. 2016 XIth International Scientific and Technical Conference Computer Sciences and Information Technologies (CSIT), pp. 40–44. https://doi.org/10.1109/STC-CSIT.2016.7589864
8. Fokas, A. S., et al. (2020). Mathematical models and deep learning for predicting the number of individuals reported to be infected with SARS-CoV-2. Journal of The Royal Society Interface, 17(169), 20200494.
9. Kazem, M. (2017). Predictive models in cancer management: A guide for clinicians. The Surgeon, 15(2), 93–97.
10. Lee, S., Bachman, A., Yu, D., Lim, J., & Ardekani, B. (2016). Predicting progression from mild cognitive impairment to Alzheimer’s disease using longitudinal callosal atrophy. Alzheimer’s & Dementia: Diagnosis, Assessment & Disease Monitoring, 2, 68–74.
11. Grassi, M., Rouleaux, N., Caldirola, D., Loewenstein, D., Schruers, K., Perna, G., & Dumontier, M. (2019). A novel ensemble-based machine learning algorithm to predict the conversion from mild cognitive impairment to Alzheimer’s disease using socio-demographic characteristics, clinical information, and neuropsychological measures. Frontiers in Neurology, 10.
12. Gupta, Y., Lama, R., & Kwon, G. (2019). Prediction and classification of Alzheimer’s disease based on combined features from apolipoprotein-E genotype, cerebrospinal fluid, MR, and FDG-PET imaging biomarkers. Frontiers in Computational Neuroscience, 13.
13. Nori, V. S., et al. (2019). Machine learning models to predict onset of dementia: A label learning approach. Alzheimer’s & Dementia: Translational Research & Clinical Interventions, 5(1), 918–925.
14. Park, J. H., Cho, H. E., Kim, J. H., et al. (2020). Machine learning prediction of incidence of Alzheimer’s disease using large-scale administrative health data. npj Digital Medicine, 3, 46.
15. Petrella, J., Hao, W., Rao, A., & Doraiswamy, P. (2019). Computational causal modeling of the dynamic biomarker cascade in Alzheimer’s disease. Computational and Mathematical Methods in Medicine, 2019, 1–8.
16. Ritchie, K., Ritchie, C., Yaffe, K., Skoog, I., & Scarmeas, N. (2015). Is late-onset Alzheimer’s disease really a disease of midlife? Alzheimer’s & Dementia: Translational Research & Clinical Interventions, 1(2), 122–130.
17. Rohini, M., & Surendran, D. (2019). Classification of neurodegenerative disease stages using ensemble machine learning classifiers. Procedia Computer Science, 165, 66–73.
18. Skolariki, K., Terrera, G. M., & Danso, S. O. (2021). Predictive models for mild cognitive impairment to Alzheimer’s disease conversion. Neural Regeneration Research, 16(9), 1766–1767.
19. Walhovd, K. B., Fjell, A. M., Brewer, J., McEvoy, L. K., Fennema-Notestine, C., Hagler, D. J., Jr., Jennings, R. G., Karow, D., & Dale, A. M. (2010). Combining MR imaging, positron-emission tomography, and CSF biomarkers in the diagnosis and prognosis of Alzheimer disease. AJNR American Journal of Neuroradiology, 31, 347–354.
20. Westman, E., Cavallin, L., Muehlboeck, J. S., Zhang, Y., Mecocci, P., Vellas, B., Tsolaki, M., Kloszewska, I., Soininen, H., Spenger, C., Lovestone, S., Simmons, A., & Wahlund, L. O. (2011). Sensitivity and specificity of medial temporal lobe visual ratings and multivariate regional MRI classification in Alzheimer’s disease. PLoS ONE, 6, e22506.
21. Wang, T., et al. (2018). Predictive modeling of the progression of Alzheimer’s disease with recurrent neural networks. Scientific Reports, 8(1), 9161.
22. Wolz, R., Julkunen, V., Koikkalainen, J., Niskanen, E., Zhang, D., Rueckert, D., Soininen, H., & Lötjönen, J. (2011). Multi-method analysis of MRI images in early diagnostics of Alzheimer’s disease. PLoS ONE, 6(10), e25446.
23. Zhang, D., Wang, Y., Zhou, L., Yuan, H., & Shen, D. (2011). Multimodal classification of Alzheimer’s disease and mild cognitive impairment. NeuroImage, 55, 856–867.
24. Bloomingdale, P., Bumbaca-Yadav, D., Sugam, J., Grauer, S., Smith, B., Antonenko, S., Judo, M., Azadi, G., & Yee, K. L. (2022). PBPK-PD modeling for the preclinical development and clinical translation of tau antibodies for Alzheimer’s disease. Frontiers in Pharmacology, 13, 867457. https://doi.org/10.3389/fphar.2022.867457
25. Tuladhar, A., Moore, J. A., Ismail, Z., & Forkert, N. D. (2021). Modeling neurodegeneration in silico with deep learning. Frontiers in Neuroinformatics, 15. https://doi.org/10.3389/fninf.2021.748370
26. Sundar, S., Battistoni, C., McNulty, R., et al. (2020). An agent-based model to investigate microbial initiation of Alzheimer’s via the olfactory system. Theoretical Biology and Medical Modelling, 17(1), 5. https://doi.org/10.1186/s12976-020-00123-w
27. Sundar, S., Battistoni, C., McNulty, R., et al. (2020). An agent-based model to investigate microbial initiation of Alzheimer’s via the olfactory system. Theoretical Biology and Medical Modelling, 17(1), 5. https://doi.org/10.1186/s12976-020-00123-w
28. Hao, W., & Friedman, A. (2016). Mathematical model on Alzheimer’s disease. BMC Systems Biology, 10, 108. https://doi.org/10.1186/s12918-016-0348-2
Radiomics for Alzheimer’s Disease: Fundamental Principles and Clinical Applications
34
Eleni Georgiadou, Haralabos Bougias, Stephanos Leandrou, and Nikolaos Stogiannos
Abstract
Alzheimer’s disease is a neurodegenerative disease with a huge impact on people’s quality of life, life expectancy, and morbidity. The rising prevalence of the disease, in conjunction with the increasing financial burden on healthcare services, necessitates the development of new technologies for this field. Hence, advanced computational methods have been developed to facilitate early and accurate diagnosis of the disease and improve all health outcomes. Artificial intelligence is now deeply involved in the
fight against this disease, with many clinical applications in the field of medical imaging. Deep learning approaches have been tested for use in this domain, while radiomics, an emerging quantitative method, is already being evaluated for use in various medical imaging modalities. This chapter aims to provide insight into the fundamental principles behind radiomics, discuss the most common techniques alongside their strengths and weaknesses, and suggest ways forward for future research standardization and reproducibility.

Keywords
E. Georgiadou Department of Radiology, Metaxa Anticancer Hospital, Piraeus, Greece H. Bougias Department of Clinical Radiology, University Hospital of Ioannina, Ioannina, Greece S. Leandrou Department of Health Sciences, School of Sciences, European University Cyprus, Engomi, Cyprus e-mail: [email protected] N. Stogiannos (✉) Discipline of Medical Imaging and Radiation Therapy, University College Cork, Cork, Ireland Division of Midwifery & Radiography, City, University of London, London, UK
Alzheimer’s · Dementia · Radiomics · Medical imaging · Artificial intelligence
34.1
Introduction
Alzheimer’s disease (AD) is a neurodegenerative disease named after the renowned German psychiatrist Alois Alzheimer (1864–1915), who was the first to report a case of this disease to the scientific community more than a century ago [72]. AD is the most common type of dementia, a progressive loss of cognitive ability across various domains that greatly impacts the social and occupational function of patients [6, 11].
Medical Imaging Department, Corfu General Hospital, Corfu, Greece e-mail: [email protected] # The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Vlamos (ed.), GeNeDis 2022, Advances in Experimental Medicine and Biology 1424, https://doi.org/10.1007/978-3-031-31982-2_34
Fig. 34.1 The main risk factors associated with AD
34.1.1
Etiology
Extensive research has been conducted over the years to define the etiology of AD. It is now established that AD depends on multiple factors, ranging from increasing age to environmental exposures [11]. Several risk factors have been shown to contribute to AD; the most important of these are depicted in Fig. 34.1. The main hypotheses related to the cause of AD involve aging, head injuries, degeneration of the cholinergic and cortico-cortical pathways, exposure to certain metals, vascular diseases, genetic factors, metabolic disorders, infections, and immune system impairment [5]. Age is the greatest risk factor for developing AD, with late-onset AD (manifesting after 65 years of age) accounting for the vast majority of cases. By contrast, early-onset AD is mainly attributed to familial factors and is associated with genetic mutations in the amyloid-β precursor protein (APP) gene, leading to amyloid accumulation, or with presenilin 1 (PSEN1) and presenilin 2 (PSEN2) mutations,
which also alter the production of APP [66]. Finally, some nongenetic factors, which are related to the individual’s lifestyle and the environment, have been linked to AD development, and these include smoking, alcohol consumption, occupational exposures, body mass index, physical activity, depression, and cognitive ability [30].
34.1.2
Pathophysiology
It is widely accepted that AD has a complex pathophysiology that is not yet completely understood. On a macroscopic level, AD usually presents with atrophy of the cerebral cortex and the hippocampus, while microscopic markers are more difficult to study. Extracellular deposits of Aβ in the brain parenchyma form the so-called amyloid plaques, a distinctive characteristic found in the brains of patients with AD [37]. It should be noted that the formation of amyloid (senile) plaques or the accumulation of hyperphosphorylated tau protein can be observed [21]. According to the amyloid hypothesis, which is the most prevalent
34
Radiomics for Alzheimer’s Disease: Fundamental Principles and Clinical Applications
34.1.4
299
Clinical Manifestations
among all hypotheses about AD pathophysiology, the altered cleavage of the APP causes neurofibrillary tangles and Aβ fragments as a result of suboptimal protein folding. Therefore, these deposits are believed to be the hallmarks required for a diagnosis of AD [51]. In the genetic context, familial AD cases usually occur as a result of genetic mutations in the APP gene, with allele e4 of apolipoprotein E (APoE) being the single most important risk factor for developing AD [21]. Individuals with these mutations have a threefold increase in AD prevalence. Except for APP, PSEN1, PSEN2, and the APOE ɛ4 allele, recent studies corroborate the fact that new loci also contribute to the development of AD, with genome-wide association studies finding over 20 new loci [14].
Some of the symptoms that are associated with AD may be present during the preclinical stage of the disease, during which individuals present with a mild memory loss that does not affect their daily functionality [11]. Later, as the disease progresses, people start to develop more severe memory loss, disorientation, depression, behavioral alterations, and loss of concentration. Severe AD stages include difficulties in speaking, reading, or/and recognizing family and friends, while many patients may be bedridden in the final stages of the disease. In addition, AD-related neuropsychiatric comorbidities are very often, with irritability, anxiety, and apathy being the most common among them [4].
34.1.3 Epidemiology

It is well known that the prevalence and incidence of dementia increase with age. AD is the most common type of dementia, with recent research showing that it accounts for 50–80% of all cases [24, 76], followed by vascular dementia, dementia with Lewy bodies, Parkinson's disease dementia, and frontotemporal dementia. In addition, AD predominantly affects females (7.1%) compared to males (3.3%), and incidence rates are higher among females for all types of dementia [9, 62]. AD is the fifth leading cause of mortality, and it has serious implications for healthcare systems. It is estimated that over 44 million people are affected by AD worldwide, and this number is expected to double by 2050 [22]. In the United States alone, AD-related deaths increased by 145% between 2000 and 2019, and the financial costs for people with dementia in 2022 were estimated to exceed 321 billion dollars. These statistics clearly reflect the great impact of the disease on individuals, societies, and global economies; hence, it is imperative to work in the right direction to facilitate optimal, timely diagnosis and treatment of AD.

34.1.5 Prognosis and Quality of Life
Since AD is a multifactorial disease, prognostic estimates depend on many of its aspects. Life expectancy in AD has been shown to depend strongly on the patient's age: patients diagnosed at a younger age tend to survive longer than those diagnosed at an older age [12]. In addition, patients with severe behavioral and psychiatric manifestations have a lower life expectancy [74]. However, recent research shows that adopting a healthy lifestyle can contribute to a prolonged life expectancy among patients with AD, highlighting the importance of certain modifiable lifestyle factors, including smoking, diet, alcohol consumption, cognitive activities, and physical activity [20]. AD affects patients' quality of life (QoL) in many ways, resulting in numerous functional and psychological deficits. Many patients with AD present with anxiety and symptoms of depression, which can further impair their QoL [45]. Moreover, AD may strain the relationships between patients and their loved ones and lead to a loss of functionality. All of these contribute to a decreased QoL compared to people without
E. Georgiadou et al.
AD. Finally, it is important to note that the QoL of caregivers of patients with AD is also reduced, mainly due to anxiety related to the patient's disease [69]. Therefore, it is vital to improve QoL for both patients with AD and their caregivers.
34.1.6 Diagnosis of the Disease
Clinical criteria for diagnosing mild cognitive impairment (MCI) and all stages of dementia have been proposed by the National Institute on Aging-Alzheimer's Association (NIA-AA) since 2011 [49]. These guidelines added biomarkers, such as medical imaging, to aid in the diagnosis of MCI due to AD and to differentiate between AD and other types of dementia [71]. Many cognitive tests exist to facilitate the diagnosis of AD and other dementias, each with specific strengths and limitations. However, many short cognitive tests suffer from limitations related to patient age or educational background [56]. Some more efficient tests, such as the Montreal Cognitive Assessment (MoCA), are very promising; however, they cannot detect AD in its early stages [18], and it has been suggested that patients evaluated only with psychometric and clinical tests are generally diagnosed in the later, irreversible stages [43]. All of this necessitates the use of medical imaging as an efficient, non-invasive aid to AD diagnosis. Magnetic resonance imaging (MRI) is widely used to facilitate the diagnosis of AD, since the diagnostic criteria call for assessment of changes on structural imaging. MRI in patients with AD demonstrates cerebral atrophy that is initially observed in the medial temporal lobe. Hippocampal volumes in patients with AD are reduced by 26–27% compared to healthy controls [15], and the entorhinal cortex also shows a volume decrease of 38–40%. This powerful imaging tool can differentiate patients with MCI who are likely to progress to AD from those who will not. These advantages have made MRI a valuable clinical tool for diagnosing AD.
34.2 Computational Methods and AD
As previously discussed, diagnosing AD is extremely challenging, and timely diagnosis is even harder, since many symptoms manifest only after the disease has already progressed. Hence, researchers have applied computational methods to facilitate the early detection of AD. A recent study with more than 12,000 participants used explainable machine learning (ML) methods to gain insight into the causes and indicators of the disease [10]. Another study used random forests (RF) to predict neuropathological changes related to AD from structural MRI images, concluding that categorization of AD is feasible with these computational methods and that this will allow for earlier detection of AD-related changes [32]. After applying RF and decision tree (DT) algorithms to MRI images of normal and AD subjects, researchers also demonstrated the usefulness of rule extraction in the evaluation of AD, as well as the positive effects of argumentation-based symbolic reasoning for result interpretation [2]. The strong performance of RF in the detection of AD was also confirmed by another study, in which the RF classifier achieved the highest classification accuracy [33]. Moreover, it has been shown that a hybrid model coupled with feature selection yields greater accuracy and can predict AD in its early stages [8]. In summary, many computational approaches have already been applied to facilitate the early detection of AD; among them, radiomics has seen wide use in recent years and exhibits promising outcomes.
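To make this kind of workflow concrete, the following is a minimal sketch of RF classification with scikit-learn on synthetic stand-in data; the features, labels, and dimensions are purely illustrative and are not taken from the cited studies:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
# Synthetic stand-in for imaging-derived features (e.g., volumes, thicknesses)
X = rng.normal(size=(n, 5))
# Labels loosely depend on the first two features, mimicking a separable signal
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
acc = clf.score(X_test, y_test)  # held-out classification accuracy
print(f"held-out accuracy: {acc:.2f}")
```

In the published studies, the feature matrix would come from MRI-derived measurements and validated labels rather than random draws; the sketch only shows the train/predict/evaluate skeleton such work shares.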
34.3 Radiomics
Medical imaging is employed in a wide variety of clinical settings and includes the use of X-rays, ultrasound (US), computed tomography (CT), MRI, and positron-emission tomography (PET) scans, among others. The combination of
Radiomics for Alzheimer's Disease: Fundamental Principles and Clinical Applications
Fig. 34.2 Example of a radiomic analysis pipeline. The image shows a malignant tumor in the right lung. The tumor is segmented, and appropriate feature extraction methods are then applied
radiomics and artificial intelligence (AI) with conventional diagnostic imaging offers several advantages, making medical imaging a powerful tool for improving diagnostic, prognostic, and predictive accuracy [36]. Traditionally, information from medical images was obtained through visual inspection, thus missing a vast amount of important information. As medical images contain information beyond visual perception, radiomics was introduced as a tool for high-throughput data extraction, able to convert those images into meaningful and mineable data [38]. While the role of medical imaging is swiftly evolving and the volume of data is approaching levels that cannot be handled with traditional approaches, there is an emerging need to introduce AI into radiomic analysis [40]. AI algorithms can handle massive amounts of data, and they are widely used for classification tasks. These algorithms analyze data using pattern recognition to provide predictions. Furthermore, using deep learning (DL), a very promising and rapidly emerging subset of AI, they can automatically create radiomic features without human intervention through the process of segmentation [19]. DL algorithms such as convolutional neural networks (CNNs) are trained to
progressively combine information and automatically discover patterns, starting with simple characteristics before proceeding to more complex representations. Through this process, DL algorithms can reveal unknown relationships within data and produce remarkable advances in medical imaging and precision diagnosis [70]. Radiomic analysis of a medical image is a process consisting of several steps, as shown in Fig. 34.2.
34.4 Image Acquisition
The first step of the radiomic procedure is image acquisition, which is achieved through various medical imaging modalities. Medical images contain large amounts of data, and radiomics relies on these data to reveal possible correlations within them [38, 41]. However, due to the variety of imaging methods, intrinsic variations exist even among data produced within the same modality, caused by differences in image acquisition, protocols, and imaging equipment. Hence, it is essential to develop acquisition and preprocessing standards to ensure the reliability of the outcomes produced by radiomic analysis [25, 42, 67].
34.5 Image Reconstruction and Preprocessing
Medical images are mathematically reconstructed from raw data acquired by imaging modalities. Reconstruction is accomplished through the application of mathematical algorithms, and different reconstruction algorithms affect the displayed image (spatial resolution, shapes), thereby introducing diversity into the data used for radiomic analysis. Thus, it is crucial that image reconstruction processes be clearly reported, as radiomics highly depends on medical image parameters [26, 34]. Currently, several preprocessing techniques are available:

• Resampling
• Normalization
• Motion correction
• Filtering for noise removal
• Filtering for improving image characteristics
• Gray-level quantization
• Inhomogeneity correction
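Two of the steps above, intensity normalization and gray-level quantization, can be sketched in a few lines of numpy; the tiny example image and the choice of four levels are purely illustrative:

```python
import numpy as np

def normalize(img: np.ndarray) -> np.ndarray:
    """Min-max normalization: rescale intensities to [0, 1]."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def quantize(img: np.ndarray, levels: int = 8) -> np.ndarray:
    """Gray-level quantization: map intensities onto levels 0..levels-1."""
    norm = normalize(img)
    return np.minimum((norm * levels).astype(int), levels - 1)

img = np.array([[0, 50, 100], [150, 200, 255]])
q = quantize(img, levels=4)
print(q)  # → [[0 0 1] [2 3 3]]
```

Quantization of this kind is what makes the texture matrices discussed later (GLCM, GLSZM, etc.) tractable, since they are built over a small, fixed set of gray levels.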
34.6 Segmentation
Segmentation is a fundamental step in radiomic analysis, as highly descriptive features are obtained through this process, turning medical images (2D, 3D) into meaningful and mineable data. Segmentation can be defined as the process of finding and labelling relevant regions within a given context; it can be broken down into determining the position of a region in an image (recognition) and precisely marking the boundary of that region (delimitation) [58, 73]. Segmentation of a region of interest (ROI) can be manual, automatic, or semiautomatic.
34.6.1 Manual Segmentation
Manual segmentation is an easy but time-consuming and highly subjective clinical procedure, as ROIs show morphological variations that are often difficult to segment.
34.6.2 Automatic Segmentation
Automatic segmentation refers to a segmentation process that is fully conducted by software and algorithms without human interaction. Such algorithms are trained on large datasets and are widely available for use in medical research (e.g., the 3D Slicer open-source segmentation software). However, these algorithms may have been trained on ground truth that does not reflect real clinical data. As the output of an automated system usually depends on one or more parameters of the segmentation algorithm, there is an emerging consensus that the best way to achieve reliability and reproducibility of the radiomic features is to estimate these parameters with automated algorithms, followed by human audit [28].
34.6.3 Semiautomatic Segmentation
Semiautomatic segmentation is the process in which automatic segmentation is followed by manual editing of the segment's boundaries. It is widely considered the most reliable segmentation technique, as it aims to overcome the drawbacks of automatic segmentation while reducing the user's interaction time [35, 55].
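A minimal illustration of the segmentation idea described above, recognition by intensity thresholding followed by delimitation with a bounding box, on a toy numpy image; real clinical pipelines use far more sophisticated algorithms:

```python
import numpy as np

def threshold_segment(img, thresh):
    """Recognition step: mark voxels above an intensity threshold as ROI."""
    return img > thresh

def roi_bounding_box(mask):
    """Delimitation step: tightest box enclosing the recognized ROI."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return (int(r0), int(r1)), (int(c0), int(c1))

img = np.array([
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 9, 0],
    [0, 0, 0, 0],
])
mask = threshold_segment(img, 5)
print(roi_bounding_box(mask))  # ((1, 2), (1, 2))
```

A semiautomatic workflow would then let the user edit `mask` by hand before features are extracted from it.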
34.7 Feature Extraction
Radiomics is a quantitative approach initially defined as the extraction of high-throughput features from medical images. As radiomic analysis is a data-driven approach, feature extraction is a very important procedure that takes place just after segmentation of the ROI, and it may affect the performance of the model [39]. There are two main categories of radiomic features: manually defined features and mathematically extracted features.
34.7.1 Manually Crafted Features (Semantic Features)
Semantic features are qualitative features assessed by human vision. They cannot be described with mathematical procedures; however, they play a very important role in radiomic analysis [46].
• Deep learning features This group represents features generated by DL algorithms without human intervention. The most common and widely used method of extracting DL features is the CNN [17].
34.7.2 Mathematically Extracted Features (Non-semantic Features)
Mathematically extracted features are also called quantitative features, as they can only be described with mathematical expressions, and they are obtained from the ROIs within a medical image [47, 64]. Mathematically extracted features comprise several groups, each describing a different type of characteristic (Fig. 34.3). The main groups are:

• Shape-based statistics: This group gives a quantitative description of geometrical characteristics.
• First-order statistics: This group describes the distribution of values of individual voxels in the image, regardless of spatial relationships. Histogram-based properties are used to report basic metrics, such as the maximum, median, and minimum values, or entropy [31].
• Second-order statistics: This group describes the statistical relation between pairs of voxels (probability distributions) [52].
• Higher-order statistics: This group includes statistical features calculated on matrices that describe relationships between three or more pixels [7].
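As a hedged numpy sketch of the first-order group, a few histogram-based features computed from an ROI; the feature set and bin count are illustrative choices, not a standard such as pyradiomics:

```python
import numpy as np

def first_order_features(roi: np.ndarray, bins: int = 16) -> dict:
    """Histogram-based first-order statistics of the voxel intensities in an ROI."""
    vals = roi.ravel().astype(float)
    hist, _ = np.histogram(vals, bins=bins)
    p = hist / hist.sum()                           # probability of each intensity bin
    p = p[p > 0]                                    # drop empty bins (log(0) undefined)
    return {
        "min": float(vals.min()),
        "max": float(vals.max()),
        "median": float(np.median(vals)),
        "entropy": float(-(p * np.log2(p)).sum()),  # Shannon entropy of the histogram
    }

rng = np.random.default_rng(1)
roi = rng.normal(loc=100, scale=10, size=(32, 32))  # synthetic stand-in ROI
feats = first_order_features(roi)
print({k: round(v, 2) for k, v in feats.items()})
```

Second- and higher-order features would instead be computed from the texture matrices (GLCM, GLSZM, etc.) named in Fig. 34.3, which require the gray-level quantization step described earlier.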
34.8 Feature Selection
Thousands of features are obtained through feature extraction. However, these features include a great deal of irrelevant or redundant information, which may lead to model overfitting. Therefore, feature selection and dimensionality reduction are crucial steps in radiomic analysis [16, 63]. There are many ways to achieve this using statistical methods and ML. The most widely used methods can be classified into four main categories: filter, wrapper, embedded, and unsupervised methods [13, 60]. Filter methods are also called independent methods, as they evaluate features without involving the model (e.g., ANOVA, Pearson correlation, variance thresholding); they can be further categorized into univariate and multivariate methods. Wrapper methods use a classification algorithm to test which subset of features provides the highest classification performance (forward, backward, and stepwise selection); here, features are first selected, and the subsequent evaluation involves the model. Embedded methods are based on ML and combine aspects of filter and wrapper methods: the feature selection algorithm is integrated within the learning algorithm. Some of the most common techniques include tree algorithms, such as RF and extra-trees algorithms, and regularization methods such as the least absolute shrinkage and selection operator (LASSO) and ridge regression [29, 61]. Unsupervised methods are feature selection methods that use unlabeled features based on
Fig. 34.3 The main statistics categories and the associated characteristics (GLCM gray-level co-occurrence matrix, GLSZM gray-level size zone matrix, NGTDM neighboring gray-tone difference matrix, GLDM gray-level dependence matrix)
their attributes and characteristics. The most widely used unsupervised method is cluster analysis; others include principal component analysis (PCA), isometric mapping, diffusion maps, etc.
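As a concrete example of the filter methods described above, a numpy-only sketch of variance thresholding followed by a correlation filter; the synthetic feature matrix and thresholds are illustrative:

```python
import numpy as np

def variance_filter(X, threshold=0.01):
    """Filter method: keep features whose variance exceeds a threshold."""
    return np.var(X, axis=0) > threshold

def correlation_filter(X, keep, max_corr=0.95):
    """Drop one of each pair of near-duplicate (highly correlated) features."""
    keep = keep.copy()
    idx = np.where(keep)[0]
    corr = np.corrcoef(X[:, idx], rowvar=False)
    for a in range(len(idx)):
        for b in range(a + 1, len(idx)):
            if keep[idx[a]] and keep[idx[b]] and abs(corr[a, b]) > max_corr:
                keep[idx[b]] = False
    return keep

rng = np.random.default_rng(0)
n = 100
f0 = rng.normal(size=n)
X = np.column_stack([
    f0,                              # informative feature
    f0 + 1e-3 * rng.normal(size=n),  # near-duplicate of the first feature
    rng.normal(size=n),              # independent feature
    np.full(n, 3.0),                 # constant (zero-variance) feature
])
keep = correlation_filter(X, variance_filter(X))
print(keep)  # constant and redundant columns are dropped
```

Both filters evaluate features without involving any prediction model, which is exactly what makes them "independent" methods; wrapper and embedded methods would bring a classifier into the loop.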
34.9 Classification Methods

In order to provide ML and DL with the data to be examined, the latter should go through preprocessing steps, as previously mentioned. As a rule, both ML and DL should be provided with features of significant value to avoid overfitting; this results in less training time and more realistic results. After feature extraction, an ML approach will use its classifier algorithms to analyze the data and come to a decision. Within AI and ML, there are two basic approaches: supervised learning and unsupervised learning [54].
Supervised learning uses labeled inputs and outputs to train algorithms to accurately classify data or predict outcomes. There are two main categories of problems: classification and regression problems.

• In classification problems the output typically consists of classes or categories, and an algorithm is used to accurately assign test data to specific categories.
• In regression problems an algorithm is used to identify the relationship between dependent and independent variables.

Unsupervised learning uses ML algorithms to analyze and categorize unlabeled datasets. It is usually employed in three main domains: clustering, association, and dimensionality reduction [1, 59]. The most common classifiers used in ML, and specifically in research related to AD, are:

Support vector machines (SVM): This is one of the most popular classifiers in ML and image classification. It can be used for both classification and regression problems.

DT: This classifier mimics human decision-making, and it is easy to understand and explain. It can handle nonlinear data; however, it may create trees that are not directly related to the problem and may therefore predict less accurately than other classifiers.

RF: Like DT, an RF consists of a large number of trees, each of which makes a prediction; the final outcome is averaged across all trees. This classifier handles large datasets very efficiently; however, it is more complex to evaluate.

K-nearest neighbors (KNN): KNN algorithms work by identifying the K nearest neighbors of a given observation point. They evaluate the proportions of each type of target variable among the K points and then predict the target variable with the highest ratio.

Linear discriminant analysis (LDA): LDA is another very popular classifier in ML, and it
is mainly used to solve classification problems with more than two classes. It can also be used in data preprocessing (essential in radiomics) to reduce the number of features.

Logistic regression (LR): Although this is a simple model that takes much less time to train, it can handle a large number of features. However, it can only be used for binary classification problems.

CNN: CNNs are used in DL, as a CNN works as a network architecture and learns directly from the data; manual feature extraction is therefore not necessary. In medical imaging, CNNs can examine thousands of images and successfully detect pathologies.
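To ground one of the classifiers above, a from-scratch KNN sketch in numpy on toy two-cluster data; the "AD vs. control" framing of the labels is purely illustrative:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distance to each sample
    nearest = np.argsort(dists)[:k]              # indices of the k closest points
    votes = y_train[nearest]
    return np.bincount(votes).argmax()           # most common label wins

# Two toy clusters standing in for "control" (0) vs "AD" (1) feature vectors
X_train = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
                    [2.0, 2.0], [2.1, 1.9], [1.8, 2.2]])
y_train = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X_train, y_train, np.array([0.1, 0.1])))  # 0
print(knn_predict(X_train, y_train, np.array([2.0, 2.1])))  # 1
```

The choice of `k` trades off noise sensitivity (small `k`) against blurring of class boundaries (large `k`), which is why it is usually tuned by cross-validation.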
34.10 Statistical Approaches

34.10.1 Univariate Statistics

Univariate analysis is the simplest form of analysis, where the data consist of only one variable. The objective of univariate analysis is to describe and summarize the data and identify patterns within them. This can be achieved by estimating the central tendency (mean, median, and mode), the range, the maximum and minimum values, and the standard deviation of a variable. The most frequently used visual techniques for univariate analysis are histograms, frequency distribution tables, frequency polygons, and pie charts.
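The univariate summary described above can be sketched directly with numpy and the standard library; the data values are illustrative:

```python
import numpy as np
from statistics import mode

data = np.array([4.0, 8.0, 6.0, 5.0, 3.0, 8.0, 9.0])

summary = {
    "mean": float(np.mean(data)),              # central tendency
    "median": float(np.median(data)),
    "mode": mode(data.tolist()),               # most frequent value
    "range": float(data.max() - data.min()),   # max - min
    "std": float(np.std(data, ddof=1)),        # sample standard deviation
}
print(summary)
```
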
34.10.2 Bivariate Statistics

Bivariate analysis is the method in which two sets of variables are compared, their relationships are studied, and the cause of variation is identified. These variables can be dependent on or independent of each other. In bivariate analysis there is always a Y-value for each X-value. Bivariate analysis is usually conducted using correlation coefficients and regression analysis, as well as scatter plots.
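A minimal numpy sketch of bivariate analysis, a Pearson correlation coefficient plus a least-squares regression line, on synthetic data with a known linear relationship:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=50)                      # independent variable
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=50)   # dependent variable + noise

r = np.corrcoef(x, y)[0, 1]             # Pearson correlation coefficient
slope, intercept = np.polyfit(x, y, 1)  # least-squares regression line

print(f"r = {r:.3f}, y ~ {slope:.2f}x + {intercept:.2f}")
```

With the true slope set to 2.0 and intercept to 1.0, the fitted values should land close to those, and `r` should be close to 1 because the noise is small relative to the signal.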
34.10.3 Multivariate Statistics

Multivariate analysis is a methodology for analyzing the relationships between every variable within one set and every variable of another set. It is mainly used when three or more variables are involved. This kind of analysis is complex and difficult to visualize with a graph, so we often rely on software and specialized techniques to study the relationships between data, find patterns and correlations among several variables, and achieve a deeper, more complex understanding of the data [27].
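As one concrete multivariate technique (PCA, mentioned above among the unsupervised methods), a minimal numpy sketch via SVD on synthetic data driven by a single latent factor:

```python
import numpy as np

def pca(X, n_components=2):
    """Project data onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)                      # center each variable
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]               # directions of maximal variance
    explained = (S**2) / (S**2).sum()            # variance ratio per component
    return Xc @ components.T, explained[:n_components]

rng = np.random.default_rng(3)
latent = rng.normal(size=(200, 1))
# Five observed variables that are all noisy views of one latent factor
X = latent @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(200, 5))

scores, explained = pca(X, n_components=2)
print(f"variance explained by PC1: {explained[0]:.2f}")
```

Because the five observed variables are all noisy copies of one underlying factor, almost all of the variance should collapse onto the first component, which is exactly the dimensionality-reduction behavior radiomic pipelines exploit.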
34.11 The Role of Explainability

ML algorithms often operate as "black boxes" that are difficult to interpret. Taking an input and producing an output without any further knowledge of the inner procedures can raise safety and ethical issues for both clinicians and patients. As AI is a rapidly developing field, there is an emerging need for more explainability. The importance of explainability lies in the fact that AI procedures need to be understood by humans in order to identify and prevent emerging bias and, therefore, to increase trust in a model's decisions (reliability) [75]. Explainable AI (XAI) methods have been developed to shed light on ML procedures and help us understand the relationships between the variables that led to a specific outcome. XAI methods are independent of the ML algorithm used to develop the prediction model, and they are applied to estimate the contribution of each feature to the model's classification task. Some of the most common XAI methods are Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) [48]. SHAP is used to explain individual predictions and show the contribution of each feature: the higher the SHAP value, the higher the impact of that feature value on the prediction for the selected class. SHAP values thus quantify both the magnitude and the direction (positive or negative) of a feature's effect on a prediction [57, 65].
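To illustrate the idea behind SHAP, a from-scratch sketch that computes exact Shapley values for a tiny model by enumerating feature subsets; real SHAP libraries use far more efficient approximations, and the linear model and baseline here are purely illustrative:

```python
import numpy as np
from itertools import combinations
from math import factorial

def exact_shapley(predict, x, baseline):
    """Exact Shapley value per feature: the average marginal contribution of
    switching feature i from its baseline value to its actual value,
    taken over all subsets of the other features."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # weight of a subset of this size in the Shapley formula
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                z = baseline.copy()
                z[list(subset)] = x[list(subset)]
                without_i = predict(z)   # model output without feature i
                z[i] = x[i]
                with_i = predict(z)      # model output once feature i is added
                phi[i] += w * (with_i - without_i)
    return phi

# Illustrative linear model: here phi_i = w_i * (x_i - baseline_i)
weights = np.array([3.0, -1.0, 0.5])
predict = lambda z: float(weights @ z)

x = np.array([1.0, 2.0, 0.0])
baseline = np.array([0.0, 0.0, 0.0])
print(exact_shapley(predict, x, baseline))  # ≈ [3.0, -2.0, 0.0]
```

The enumeration is exponential in the number of features, which is why practical SHAP implementations rely on sampling or model-specific shortcuts; the sketch only shows what those approximations are estimating.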
34.12 Limitations and Future Work

34.12.1 Limitations of Radiomics Applications

The advantages of radiomics have been confirmed by recent research; however, as with any technology, radiomics has its own drawbacks, and the scientific community must overcome several challenges to achieve its safe clinical adoption. As discussed above, the radiomics pipeline is a multifactorial and complex process. End users should ensure that all steps of this procedure meet specific standards in order to eliminate potential errors, improve the reproducibility of the methods [23], and ensure that the whole process is transparent, explainable, and fair [44]. Robust evaluation of these procedures should be implemented in all clinical settings to ensure the efficacy of radiomics, and ongoing monitoring should be in place to ensure safety and clinical effectiveness. The following limitations of applying radiomics in clinical practice have already been identified:

Retrospective design: Many studies already use radiomics for various tasks; however, most have a retrospective design, which limits their level of evidence, since the data were collected retrospectively without standardized procedures [68]. Researchers should opt to conduct prospective studies that employ radiomics from the beginning.

Heterogeneous data: A diverse dataset is needed in radiomic studies, since this ensures the model's robustness, its generalization to unseen data, and the elimination of algorithmic bias [50].

Suboptimal standardization: There is an urgent need to carefully standardize all procedures throughout the radiomics pipeline, including data acquisition, pre- and postprocessing methods, feature extraction, and evaluation with well-established metrics.
Limited amount of data: In radiomic studies, vast numbers of features are often extracted (frequently many more features than available samples), leading to the problem of high dimensionality [36]. Appropriate dimensionality reduction techniques should be applied to overcome this issue.

Interpretability: Another drawback of radiomics is the lack of interpretability of the selected features, or the suboptimal interpretation of the results (e.g., causation vs. correlation) [68].

Interoperability: Interoperability of radiomics can be defined as the ability to operate within a system in a seamless way. Hence, smooth integration within clinical environments is essential, so that the whole pipeline is successfully adopted by all healthcare professionals. Standardization of the whole radiomic procedure is vital, since image acquisition protocols and equipment vary among centers, and these variations may lead to inconsistencies in the data obtained for the same patient. Image acquisition parameters, such as the reconstruction kernels used in CT or the sequence parameters selected in MRI, all have an impact on the data. In addition, time-dependent variables, such as contrast dosage and timing, can affect voxel intensities and may result in different images even from the same scanner.
34.12.2 Future Perspectives

Although research on radiomics is growing rapidly and reveals a very promising field, specific challenges need to be addressed before its clinical implementation. These mainly include overcoming technical and regulatory issues, as previously discussed. Further research with a prospective study design is needed to ensure the effectiveness of radiomics in all clinical settings. The implementation of robust validation protocols will be the key to
reproducibility, transparency, and standardization of the entire radiomic process. DL-based techniques should be employed alongside hand-crafted radiomic features to improve current applications and provide new solutions. In addition, care should be taken in the clinical adoption of validated, proof-of-concept radiomics through the process of knowledge transfer from existing research. This also strengthens the need for standardized quality indicators, such as the Radiomics Quality Score (RQS), for reporting radiomic studies [53]. Regulatory frameworks are currently being developed in the field of AI-driven healthcare to ensure safe and successful clinical adoption; these should promote innovation while also ensuring patient welfare. Personalized, patient-centered care should be at the center of interest, and radiomics will certainly play a major role in it in the future.
34.13 Conclusion

AD is a complex, hard-to-detect neurodegenerative disease, and novel computational techniques need to be applied to improve its detection and deliver a timely diagnosis. Radiomics, a novel and emerging method, aims to extract a large number of meaningful features from medical imaging data to facilitate the diagnosis of AD. When combined with MRI, a powerful imaging tool with confirmed added value in the detection of AD, radiomics can accelerate the process and generate valuable results. Careful attention should be paid to the standardization of radiomic analyses to achieve reproducible, reliable outcomes. Radiomics pipelines should be smoothly integrated within clinical settings to allow safe adoption, build trust, and generate transparent, fair, and explainable results, and careful monitoring of all processes should be in place. Despite the challenges radiomics faces, it is certainly a promising field that needs to be expanded further to benefit all patients with AD.
References 1. Abbasi S, Tavakoli M, Boveiri HR, Shirazi MAM, Khayami R, Khorasani H, Javidan R, Mehdizadeh A. Medical image registration using unsupervised deep neural network: A scoping literature review. Biomed Signal Process Control 2022;73:103444. https://doi.org/10.1016/j.bspc.2021.103444 2. Achilleos KG, Leandrou S, Prentzas N, Kyriacou PA, Kakas AC, Pattichis CS. Extracting Explainable Assessments of Alzheimer’s disease via Machine Learning on brain MRI imaging data. 2020 IEEE 20th International Conference on Bioinformatics and Bioengineering (BIBE) 2020:1036–1041. https://doi. org/10.1109/BIBE50027.2020.00175 3. Alzheimer’s Association. 2022 Alzheimer’s disease facts and figures. Alzheimers Dement 2022;18(4): 700–789. https://doi.org/10.1002/alz.12638 4. Apostolova LG. Alzheimer Disease. Continuum (Minneap Minn) 2016;22(2 Dementia):419–434. https://doi.org/10.1212/CON.0000000000000307 5. Armstrong RA. What causes alzheimer’s disease? Folia neuropathologica 2013;51(3):169–188. https:// doi.org/10.5114/fn.2013.37702 6. Arvanitakis Z, Shah RC, Bennett DA. Diagnosis and Management of Dementia: Review. JAMA 2019;322 (16):1589–1599. https://doi.org/10.1001/jama.2019. 4782 7. Barucci A, Farnesi D, Ratto F, Pelli S, Pini R, Carpi R, Esposito M, Olmastroni M, Romei C, Taliani A, Materassi M. Fractal-Radiomics as Complexity Analysis of CT and MRI Cancer Images. 2018 IEEE Workshop on Complexity in Engineering (COMPENG) 2018:1–5. https://doi.org/10.1109/CompEng.2018. 8536249 8. Battineni G, Chintalapudi N, Amenta F, Traini E. A Comprehensive Machine-Learning Model Applied to Magnetic Resonance Imaging (MRI) to Predict Alzheimer’s Disease (AD) in Older Subjects. J Clin Med 2020;9(7):2146. https://doi.org/10.3390/ jcm9072146 9. Beam CR, Kaneshiro C, Jang JY, Reynolds CA, Pedersen NL, Gatz M. Differences Between Women and Men in Incidence Rates of Dementia and Alzheimer’s Disease. J Alzheimers Dis 2018;64(4): 1077–1083. https://doi.org/10.3233/jad-180141 10. 
Bogdanovic B, Eftimov T, Simjanoska M. In-depth insights into Alzheimer’s disease by using explainable machine learning approach. Sci Rep 2022;12(1):6508. https://doi.org/10.1038/s41598-022-10202-2 11. Breijyeh Z, Karaman R. Comprehensive Review on Alzheimer’s Disease: Causes and Treatment. Molecules 2020;25(24):5789. https://doi.org/10.3390/ molecules25245789 12. Brookmeyer R, Corrada MM, Curriero FC, Kawas C. Survival following a diagnosis of Alzheimer disease. Arch Neurol 2002;59(11):1764–1767. https:// doi.org/10.1001/archneur.59.11.1764
13. Cai J, Luo J, Wang S, Yang S. Feature selection in machine learning: A new perspective. Neurocomputing 2018;300:70–79. https://doi.org/10.1016/j.neucom.2017.11.077 14. Carmona S, Hardy J, Guerreiro R. The genetic landscape of Alzheimer disease. Handb Clin Neurol 2018;148:395–408. https://doi.org/10.1016/b978-0-444-64076-5.00026-0 15. Chandra A, Dervenoulas G, Politis M, Alzheimer’s Disease Neuroimaging Initiative. Magnetic resonance imaging in Alzheimer’s disease and mild cognitive impairment. J Neurol 2019;266(6):1293–1302. https://doi.org/10.1007/s00415-018-9016-3 16. Chandrashekar G, Sahin F. A survey on feature selection methods. Comput Electr Eng 2014;40(1):16–28. https://doi.org/10.1016/j.compeleceng.2013.11.024 17. Dara S, Tumma P. Feature Extraction By Using Deep Learning: A Survey. 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA) 2018:1795–1801. https://doi.org/10.1109/ICECA.2018.8474912 18. De Roeck EE, De Deyn PP, Dierckx E, Engelborghs S. Brief cognitive screening instruments for early detection of Alzheimer’s disease: a systematic review. Alz Res Therapy 2019;21. https://doi.org/10.1186/s13195-019-0474-3 19. Dercle L, McGale J, Sun S, Marabelle A, Yeh R, Deutsch E, Mokrane FZ, Farwell M, Ammari S, Schoder H, Zhao B, Schwartz LH. Artificial intelligence and radiomics: fundamentals, applications, and challenges in immunotherapy. J Immunother Cancer 2022;10:e005292. https://doi.org/10.1136/jitc-2022-005292 20. Dhana K, Franco OH, Ritz EM, Ford CN, Desai P, Krueger KR, Holland TM, Dhana A, Liu X, Aggarwal NT, Evans DA, Rajan KB. Healthy lifestyle and life expectancy with and without Alzheimer’s dementia: population based cohort study. BMJ 2022;377:e068390. https://doi.org/10.1136/bmj-2021-068390 21. Dos Santos Picanco LC, Ozela PF, de Fatima de Brito Brito M, Pinheiro AA, Padilha EC, Braga FS, de Paula da Silva CHT, Dos Santos CBR, Rosa JMC, da Silva Hage-Melim LI.
Alzheimer’s Disease: A Review from the Pathophysiology to Diagnosis, New Perspectives for Pharmacological Treatment. Curr Med Chem 2018;25(26):3141–3159. https://doi.org/10.2174/ 0929867323666161213101126 22. Dumurgier J, Sabia S. Epidemiology of Alzheimer’s disease: latest trends. Rev Prat 2020;70(2):149–151. 23. Galavis PE. Reproducibility and standardization in Radiomics: Are we there yet? AIP Conference Proceedings 2021;2348:20003. https://doi.org/10. 1063/5.0051609 24. Garre-Olmo J. Epidemiology of Alzheimer’s disease and other dementias. Rev Neurol 2018;66(11): 377–386. 25. Gillies RJ, Kinahan PE, Hricak H. Radiomics: Images Are More than Pictures, They Are Data. Radiology
2016;278(2):563–577. https://doi.org/10.1148/radiol.2015151169
26. Gupta AK, Chowdhury V, Khandelwal N, Sharma S, Bhalla AS, Hari S. Diagnostic Radiology: Recent Advances and Applied Physics in Imaging. 2nd ed. New Delhi: Jaypee Brothers Medical Publishers, 2013.
27. Habeck C, Stern Y, Alzheimer's Disease Neuroimaging Initiative. Multivariate data analysis for neuroimaging data: overview and application to Alzheimer's disease. Cell Biochem Biophys 2010;58(2):53–67. https://doi.org/10.1007/s12013-010-9093-0
28. Hesamian MH, Jia W, He X, Kennedy P. Deep Learning Techniques for Medical Image Segmentation: Achievements and Challenges. J Digit Imaging 2019;32(4):582–596. https://doi.org/10.1007/s10278-019-00227-x
29. Huang YQ, Liang CH, He L, Tian J, Liang CS, Chen X, Ma ZL, Liu ZY. Development and Validation of a Radiomics Nomogram for Preoperative Prediction of Lymph Node Metastasis in Colorectal Cancer. J Clin Oncol 2016;34(18):2157–2164. https://doi.org/10.1200/jco.2015.65.9128
30. Jiang T, Yu JT, Tian Y, Tan L. Epidemiology and etiology of Alzheimer's disease: from genetic to non-genetic factors. Curr Alzheimer Res 2013;10(8):852–867. https://doi.org/10.2174/15672050113109990155
31. Kalkan S, Wörgötter F, Krüger N. First-order and second-order statistical analysis of 3D and 2D image structure. Network 2007;18(2):129–160. https://doi.org/10.1080/09548980701580444
32. Kautzky A, Seiger R, Hahn A, Fischer P, Krampla W, Kasper S, Kovacs GG, Lanzenberger R. Prediction of Autopsy Verified Neuropathological Change of Alzheimer's Disease Using Machine Learning and MRI. Front Aging Neurosci 2018;10:406. https://doi.org/10.3389/fnagi.2018.00406
33. Khan A, Zubair S. An Improved Multi-Modal based Machine Learning Approach for the Prognosis of Alzheimer's disease. J King Saud Univ Comput Inf Sci 2022;34(6):2688–2706. https://doi.org/10.1016/j.jksuci.2020.04.004
34. Kim Y, Oh DY, Chang W, Kang E, Ye JC, Lee K, Kim HY, Kim YH, Park JH, Lee YJ, Lee KH. Deep learning-based denoising algorithm in comparison to iterative reconstruction and filtered back projection: a 12-reader phantom study. Eur Radiol 2021;31(11):8755–8764. https://doi.org/10.1007/s00330-021-07810-3
35. Kim YJ, Lee SH, Park CM, Kim KG. Evaluation of Semi-automatic Segmentation Methods for Persistent Ground Glass Nodules on Thin-Section CT Scans. Healthc Inform Res 2016;22(4):305–315. https://doi.org/10.4258/hir.2016.22.4.305
36. Koçak B, Durmaz EŞ, Ateş E, Kılıçkesmez Ö. Radiomics with artificial intelligence: a practical guide for beginners. Diagn Interv Radiol 2019;25(6):485–495. https://doi.org/10.5152/dir.2019.19321
37. Kumar A, Singh A, Ekavali. A review on Alzheimer's disease pathophysiology and its management: an update. Pharmacol Rep 2015;67(2):195–203. https://doi.org/10.1016/j.pharep.2014.09.004
38. Kumar V, Gu Y, Basu S, Berglund A, Eschrich SA, Schabath MB, Forster K, Aerts HJ, Dekker A, Fenstermacher D, Goldgof DB, Hall LO, Lambin P, Balagurunathan Y, Gatenby RA, Gillies RJ. Radiomics: the process and the challenges. Magn Reson Imaging 2012;30(9):1234–1248. https://doi.org/10.1016/j.mri.2012.06.010
39. Laajili R, Said M, Tagina M. Application of radiomics features selection and classification algorithms for medical imaging decision: MRI radiomics breast cancer cases study. Inform Med Unlocked 2021;27:100801. https://doi.org/10.1016/j.imu.2021.100801
40. Lambin P, Leijenaar RTH, Deist TM, Peerlings J, de Jong EEC, van Timmeren J, Sanduleanu S, Larue RTHM, Even AJG, Jochems A, van Wijk Y, Woodruff H, van Soest J, Lustberg T, Roelofs E, van Elmpt W, Dekker A, Mottaghy FM, Wildberger JE, Walsh S. Radiomics: the bridge between medical imaging and personalized medicine. Nat Rev Clin Oncol 2017;14(12):749–762. https://doi.org/10.1038/nrclinonc.2017.141
41. Lambin P, Rios-Velazquez E, Leijenaar R, Carvalho S, van Stiphout RG, Granton P, Zegers CM, Gillies R, Boellard R, Dekker A, Aerts HJ. Radiomics: extracting more information from medical images using advanced feature analysis. Eur J Cancer 2012;48(4):441–446. https://doi.org/10.1016/j.ejca.2011.11.036
42. Larue RT, Defraene G, De Ruysscher D, Lambin P, van Elmpt W. Quantitative radiomics studies for tissue characterization: a review of technology and methodological procedures. Br J Radiol 2017;90(1070):20160665. https://doi.org/10.1259/bjr.20160665
43. Leandrou S, Petroudi S, Kyriacou PA, Reyes-Aldasoro CC, Pattichis CS.
Quantitative MRI Brain Studies in Mild Cognitive Impairment and Alzheimer's Disease: A Methodological Review. IEEE Rev Biomed Eng 2018;11:97–111. https://doi.org/10.1109/rbme.2018.2796598
44. Lekadir K, Osuala R, Gallin C, Lazrak N, Kushibar K, Tsakou G, Ausso S, Alberich LC, Marias K, Tsiknakis M, Colantonio S, Papanikolaou N, Salahuddin Z, Woodruff HC, Lambin P, Martí-Bonmatí L. FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging. 2021. https://doi.org/10.48550/arXiv.2109.09658
45. Lima S, Sevilha S, Graca Pereira M. Quality of life in early-stage Alzheimer's disease: the moderator role of family variables and coping strategies from the patients' perspective. Psychogeriatrics 2020;20(5):557–567. https://doi.org/10.1111/psyg.12544
46. Liu Y, Kim J, Qu F, Liu S, Wang H, Balagurunathan Y, Ye Z, Gillies RJ. CT Features Associated with Epidermal Growth Factor Receptor Mutation Status in Patients with Lung Adenocarcinoma. Radiology 2016;280(1):271–280. https://doi.org/10.1148/radiol.2016151455
47. Liu Z, Wang S, Dong D, Wei J, Fang C, Zhou X, Sun K, Li L, Li B, Wang M, Tian J. The Applications of Radiomics in Precision Diagnosis and Treatment of Oncology: Opportunities and Challenges. Theranostics 2019;9(5):1303–1322. https://doi.org/10.7150/thno.30309
48. Lundberg S, Lee SI. A Unified Approach to Interpreting Model Predictions. 31st Conference on Neural Information Processing Systems 2017. https://doi.org/10.48550/arXiv.1705.07874
49. McKhann GM, Knopman DS, Chertkow H, Hyman BT, Jack CR Jr, Kawas CH, Klunk WE, Koroshetz WJ, Manly JJ, Mayeux R, Mohs RC, Morris JC, Rossor MN, Scheltens P, Carrillo MC, Thies B, Weintraub S, Phelps CH. The diagnosis of dementia due to Alzheimer's disease: recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimers Dement 2011;7(3):263–269. https://doi.org/10.1016/j.jalz.2011.03.005
50. Moskowitz CS, Welch ML, Jacobs MA, Kurland BF, Simpson AL. Radiomic Analysis: Study Design, Statistical Analysis, and Other Bias Mitigation Strategies. Radiology 2022;304(2):265–273. https://doi.org/10.1148/radiol.211597
51. Murphy MP, LeVine H. Alzheimer's Disease and the β-Amyloid Peptide. J Alzheimers Dis 2010;19(1):311–323. https://doi.org/10.3233/JAD-2010-1221
52. Oliva JT, Lee HD, Spolaôr N, Coy CSR, Wu FC. Prototype system for feature extraction, classification and study of medical images. Expert Syst Appl 2016;63:267–283. https://doi.org/10.1016/j.eswa.2016.07.008
53. Park JE, Kim HS, Kim D, Park SY, Kim JY, Cho SJ, Kim JH. A systematic review reporting quality of radiomics research in neuro-oncology: toward clinical utility and quality improvement using high-dimensional imaging features. BMC Cancer 2020;20:29. https://doi.org/10.1186/s12885-019-6504-5
54. Parmar C, Grossmann P, Bussink J, Lambin P, Aerts HJWL. Machine Learning methods for Quantitative Radiomic Biomarkers. Sci Rep 2015;5:13087. https://doi.org/10.1038/srep13087
55. Parmar C, Rios Velazquez E, Leijenaar R, Jermoumi M, Carvalho S, Mak RH, Mitra S, Shankar BU, Kikinis R, Haibe-Kains B, Lambin P, Aerts HJ. Robust Radiomics feature quantification using semiautomatic volumetric segmentation. PLoS One 2014;9(7):e102107. https://doi.org/10.1371/journal.pone.0102107
56. Parra MA. Overcoming barriers in cognitive assessment of Alzheimer's disease. Dement Neuropsychol 2014;8(2):95–98. https://doi.org/10.1590/s1980-57642014dn82000002
E. Georgiadou et al.
57. Pintelas E, Liaskos M, Livieris IE, Kotsiantis S, Pintelas P. Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction. J Imaging 2020;6(6):37. https://doi.org/10.3390/jimaging6060037
58. Ramesh KKD, Kiran Kumar G, Swapna K, Datta D, Rajest SS. A Review of Medical Image Segmentation Algorithms. European Union Digital Library 2021;21(27):e6. https://doi.org/10.4108/eai.12-4-2021.169184
59. Raza K, Singh NK. A Tour of Unsupervised Deep Learning for Medical Image Analysis. Curr Med Imaging 2021;17(9):1059–1077. https://doi.org/10.2174/1573405617666210127154257
60. Remeseiro B, Bolon-Canedo V. A review of feature selection methods in medical applications. Comput Biol Med 2019;112:103375. https://doi.org/10.1016/j.compbiomed.2019.103375
61. Rios Velazquez E, Parmar C, Liu Y, Coroller TP, Cruz G, Stringfield O, Ye Z, Makrigiorgos M, Fennessy F, Mak RH, Gillies R, Quackenbush J, Aerts HJWL. Somatic Mutations Drive Distinct Imaging Phenotypes in Lung Cancer. Cancer Res 2017;77(14):3922–3930. https://doi.org/10.1158/0008-5472.can-17-0122
62. Rosende-Roca M, Abdelnour C, Esteban E, Tartari JP, Alarcon E, Martínez-Atienza J, González-Pérez A, Sáez ME, Lafuente A, Buendía M, Pancho A, Aguilera N, Ibarria M, Diego S, Jofresa S, Hernández I, López R, Gurruchaga MJ, Tárraga L, Valero S, Ruiz A, Marquié M, Boada M. The role of sex and gender in the selection of Alzheimer patients for clinical trial pre-screening. Alzheimers Res Ther 2021;13(1):95. https://doi.org/10.1186/s13195-021-00833-4
63. Saeys Y, Inza I, Larrañaga P. A review of feature selection techniques in bioinformatics. Bioinformatics 2007;23(19):2507–2517. https://doi.org/10.1093/bioinformatics/btm344
64. Scapicchio C, Gabelloni M, Barucci A, Cioni D, Saba L, Neri E. A deep look into radiomics. Radiol Med 2021;126(10):1296–1311. https://doi.org/10.1007/s11547-021-01389-x
65. Severn C, Suresh K, Görg C, Choi YS, Jain R, Ghosh D. A Pipeline for the Implementation and Visualization of Explainable Machine Learning for Medical Imaging Using Radiomics Features. Sensors (Basel) 2022;22(14):5205. https://doi.org/10.3390/s22145205
66. Sheppard O, Coleman M. Alzheimer's Disease: Etiology, Neuropathology and Pathogenesis. In: Huang X (ed). Alzheimer's Disease: Drug Discovery. Brisbane: Exon Publications, 2020. https://doi.org/10.36255/exonpublications.alzheimersdisease.2020.ch1
67. Sullivan DC, Obuchowski NA, Kessler LG, Raunig DL, Gatsonis C, Huang EP, Kondratovich M, McShane LM, Reeves AP, Barboriak DP, Guimaraes AR, Wahl RL; RSNA-QIBA Metrology Working Group. Metrology Standards for Quantitative Imaging Biomarkers. Radiology 2015;277(3):813–825. https://doi.org/10.1148/radiol.2015142202
68. van Timmeren JE, Cester D, Tanadini-Lang S, Alkadhi H, Baessler B. Radiomics in medical imaging—"how-to" guide and critical reflection. Insights Imaging 2020;11:91. https://doi.org/10.1186/s13244-020-00887-2
69. Vellone E, Piras G, Talucci C, Cohen MZ. Quality of life for caregivers of people with Alzheimer's disease. J Adv Nurs 2008;61(2):222–231. https://doi.org/10.1111/j.1365-2648.2007.04494.x
70. Wagner MW, Namdar K, Biswas A, Monah S, Khalvati F, Ertl-Wagner BB. Radiomics, machine learning, and artificial intelligence-what the neuroradiologist needs to know. Neuroradiology 2021;63(12):1957–1967. https://doi.org/10.1007/s00234-021-02813-9
71. Weller J, Budson A. Current understanding of Alzheimer's disease diagnosis and treatment. F1000Res 2018;7:F1000 Faculty Rev-1161.
72. Yang HD, Kim DH, Lee SB, Young LD. History of Alzheimer's Disease. Dement Neurocogn Disord 2016;15(4):115–121. https://doi.org/10.12779/dnd.2016.15.4.115
73. Zanaty EA, Ghoniemy S. Medical Image Segmentation Techniques: An Overview. International Journal of Informatics and Medical Data Processing 2016;1(1):16–37.
74. Zanetti O, Solerte SB, Cantoni F. Life expectancy in Alzheimer's disease (AD). Arch Gerontol Geriatr 2009;49(suppl 1):237–243. https://doi.org/10.1016/j.archger.2009.09.035
75. Zhang X, Chan FTS, Mahadevan S. Explainable machine learning in image classification models: An uncertainty quantification perspective. Knowl Based Syst 2022;243:108418. https://doi.org/10.1016/j.knosys.2022.108418
76. Zhang XX, Tian Y, Wang ZT, Ma YH, Tan L, Yu JT. The Epidemiology of Alzheimer's Disease Modifiable Risk Factors and Prevention. J Prev Alzheimers Dis 2021;8(3):313–321. https://doi.org/10.14283/jpad.2021.15
Retraction Note to: Dynamic Reconfiguration of Dominant Intrinsic Coupling Modes in Elderly at Prodromal Alzheimer’s Disease Risk Themis P. Exarchos, Robert Whelan, and Ioannis Tarnanas
Retraction Note to: Chapter 1 in: P. Vlamos (ed.), GeNeDis 2022, Advances in Experimental Medicine and Biology 1424, https://doi.org/10.1007/978-3-031-31982-2_1
The Editor has retracted this Conference Paper because the authors do not own all of the data reported in it; some of the data derived from the analysis was taken from an unpublished manuscript with a different author list. In addition, Robert Whelan has stated that he was unaware of the submission of this article. All authors agree with this retraction.
The retracted version of this chapter can be found at https://doi.org/10.1007/978-3-031-31982-2_1
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
P. Vlamos (ed.), GeNeDis 2022, Advances in Experimental Medicine and Biology 1424, https://doi.org/10.1007/978-3-031-31982-2_35
Index
A
Accuracy, 41, 56, 62, 63, 73–77, 82, 87, 188–191, 207, 226–229, 236, 242, 244, 245, 248, 258–260, 275–278, 291, 293, 294, 300, 301
Acetylcholinesterase (AChE), 233–239
Alzheimer's disease (AD), 3, 23, 42, 49, 74, 98, 162, 167, 188, 193, 224, 233, 241, 265, 289, 295
Amyotrophic lateral sclerosis (ALS), 201–209
Anticancer drug discovery, 231
Artificial intelligence (AI), 31, 129, 131, 154, 242, 247, 275, 301, 304, 306
Attention deficit hyperactivity disorder (ADHD), 103, 111, 224–229
B
Behavioral disorders, 97–113, 228
Behavior and cognition, 193–198
Big data, 2, 202, 290
Bioinformatics, 62, 97–113, 202
Biomarkers, 24–26, 43–45, 91, 111, 161–164, 188, 267, 268, 270, 282, 289–294
Biosensors, 24–28
Brain, 2, 24, 32, 42, 49, 69, 81, 92, 110, 117, 138, 162, 178, 188, 194, 208, 224, 234, 256, 266, 281, 290, 298
Brain connectivity networks, 53, 55
Brain imaging, 2, 4, 56, 118, 224, 228
Brain temperature, 255–262
C
Chaotic, 75, 76, 157–160
Chemical big data, 248–251
Classification, 4, 15, 34, 53, 55, 56, 64, 100–102, 104, 107, 108, 110, 126, 146, 149–152, 188–190, 224–229, 243, 244, 248, 273–278, 300, 301, 303–306
Cognitive disorders, 98–105, 107, 108, 111, 112, 266
Cognitive enhancement, 161–164
Cognitive impairment, 4, 17–19, 42–45, 49, 103, 162, 163, 167–168, 188, 190, 191, 193, 266, 268, 289–291, 300
Cognitive neurorehabilitation, 135–143
Cognitive priming, 193–198
Cognitive tools, 267
Collaborative platforms, 125–131
Computational drug design, 231
Computational models, 178, 290–293
Computing performance, 247–253
Conductivity, 8, 25, 26, 81–87
Connectivity MAP (CMAP), 204–205, 207, 208
Contact sensitization, 145–155
Cresset, 234–238
Cross-frequency coupling (CFC), 2, 8, 10, 13, 16, 17
D
Dask, 248, 249
Data mining, 101, 102, 180, 244
Decision support system, 24, 27, 290
Deep learning (DL), 32, 56, 70, 77, 202, 223, 229, 293, 301, 303–305
Dementia, 3, 4, 17, 42, 44, 45, 49, 98, 103, 104, 111, 162, 167–169, 172, 187–191, 194, 224, 241, 265–270, 289–291
Dementia screening, 265–270
Diffusion, 10–13, 18, 49, 50, 71, 82–87, 158, 160, 281, 304
Digital biomarkers, 43–45, 91, 163–164
Dimensionality reduction, 224, 226–228, 305
Disease profiling, 108
Drug repurposing (DR), 178, 201–209
Dynamic functional connectivity analysis, 2, 9, 12, 13, 16–19, 56
E
Education, 5, 6, 44, 73, 91–95, 117–119, 122, 125–131, 168, 169, 196, 266–268, 300
e-Health-apps, 27, 266
Elderly, 1–20, 44, 98, 167–172, 188, 267, 268, 292
Electroencephalography (EEG), 3, 8–12, 16–18, 26, 49, 50, 55, 56, 70, 91, 94, 117–122, 194, 291
Electrophysiology, 50, 75, 77, 214
Epilepsy, 103, 104, 135–143, 214
Ethylenediamine dihydrochloride, 146–154
Excipients, 146, 152
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
P. Vlamos (ed.), GeNeDis 2022, Advances in Experimental Medicine and Biology 1424, https://doi.org/10.1007/978-3-031-31982-2
F
Face tracking, 117, 121, 122
Feature selection, 56, 223–229, 242, 276, 303–304
FINGER Study, 167–172
Flavonoids, 233–239
Frontotemporal lobar degeneration (FTLD), 224–228
G
Gene expression data, 274, 275, 277, 278
Gene expression profiling, 273, 274
Genetics, 24, 43, 45, 56, 94, 98–102, 104, 108–112, 154, 168, 209, 291–294, 298, 299
Genomic grammar, 99–100, 102
Genomics, 99, 100, 102, 108, 109, 112, 202, 203
Graph convolutional networks (GCN), 224–229
Graphs, 9–13, 15, 16, 18, 19, 28, 38, 49–57, 65, 99, 108, 110, 119, 121, 128, 139–141, 175–183, 190, 219, 223–229, 235, 237, 248, 249, 306
Graph theory, 49–57, 248
H
HELIAD study, 187–191
Hierarchical clustering, 205, 274, 275, 277
Hodgkin-Huxley, 70–73, 75–77
I
Intervention, 2–8, 15–19, 94, 119, 136–141, 143, 161–164, 167–172, 177, 179, 193, 194, 215, 266, 291, 292, 294, 301, 303
Intracranial catheter, 256–260, 262
Intrinsic coupling modes, 1–20
Inverted prolate spheroidal coordinates, 283–287
Izhikevich (IZ), 70, 71, 73, 74, 76, 77
K
Kernel functions, 274, 275, 277
Kinesia Paradoxa, 59
L
Leaky Integrate and Fire (LIF), 70–72, 74, 76, 77
Lifestyle, 154, 161, 168, 172, 178, 189, 298, 299
Life support, 31, 32
Ligand based virtual screening, 235
Long-short term memory (LSTM), 31–39
M
Machine learning (ML), 55, 64, 146, 154, 155, 178, 187–191, 213–221, 223, 226, 229, 247, 248, 275, 290, 291, 293, 294, 300, 303–306
Magnetic field strength vector, 282
Magnetic potential, 282, 283, 287
Matchmaking algorithms, 125–131
Mathematical models, 69–77, 177, 256, 282, 291
Mathematics, 53, 69–77, 83, 91–95, 117–122, 281–287, 303
Maximal structure generation (MSG) algorithm, 181
Mechanical ventilation, 31–39, 44
Medical imaging, 39, 300, 301, 305, 307
Methyldibromo-glutaronitrile (MDBGN), 146–154
Mobile health application (mHealth apps), 168, 169
Models, 2, 27, 32, 49, 59, 62, 70, 82, 118, 128, 137, 157, 177, 189, 208, 220, 223, 234, 242, 248, 256, 267, 273, 282, 290, 302
Molecular descriptors, 247–253
Molecular docking, 234, 236–239
Monocarboxylate transporter 4 (MCT4), 231–239
Morris-Lecar (ML), 70, 72, 75–77
Multi-class logistic regression (MLR), 273–278
Multiplexity, 19
N
National priority projects, 41–45
Neural networks, 31–39, 50, 56, 69–77, 128, 129, 141, 178, 202, 223
Neurodegenerative diseases, 112, 188, 193, 202, 228, 289–294, 297, 307
Neuroeducation, 91–95, 118–120
Neurons, 2, 16, 19, 25, 49, 69–77, 81–87, 111, 162, 202, 208, 241, 281
Neuroparametry, 256
Neurophysiology, 73, 91, 95, 118, 120, 122, 256
Neuroscience, 70, 91–95, 117–119, 224
Non-invasive neuroimaging techniques, 49–57
Non-uniformity, 282, 287
O
One Health, 175–183
P
Parkinson's disease (PD), 23–28, 59–66, 74, 103, 104, 111, 214
Patch test, 146, 147, 152, 154
Peripheral sensors, 24, 26–27
Personalized medicine, 43, 99, 112, 290
P-graphs, 180–183
Pharmacophore design, 231
Pharmacotherapy, 61–66
Precision, 72, 73, 77, 82, 126, 243, 245, 275, 276, 293, 301
Precision medicine, 61–66
Preprocessing, 33, 35, 56, 188–190, 224–225, 229, 274, 301, 302, 304, 305
Preservatives, 146, 152, 154
Principal components analysis (PCA), 224, 226–228, 273–278, 304
Protein misfolding, 23, 208
R
Radiomics, 297–307
Real-time health data monitoring, 215, 221
Recall, 5–7, 16, 18, 19, 139, 141, 267, 275, 276, 282
Red blood cell (RBC), 282–287
Regularization, 227, 273–278, 303
Relaxing environment, 194
Research, 5, 7, 24, 31, 32, 38, 62, 64–66, 69, 76, 82, 91–95, 98, 99, 111, 118–122, 125–131, 136, 140, 142, 143, 146, 147, 152, 162, 169, 183, 188, 194, 196, 197, 201, 202, 215, 218, 224, 234, 243, 259, 260, 262, 266–268, 270, 281, 290, 292–294, 298, 299, 302, 305–307
S
Self-efficacy, 117–122
Semantic analysis, 100–102, 104, 112
Single-compartment model, 69–77
Single nucleotide polymorphisms (SNPs), 24, 43, 98, 100–102, 104–111, 178
Solution Structure Generation (SSG) algorithm, 182, 183
Stabilization, 158–160
State-space, 157–159
T
Teaching proposals, 91–95
Temperature prediction model, 255–262
Tertiary structure, 61–66
Therapeutic protocols for Parkinson's, 59–66
Thimerosal, 146, 147, 150, 152–154
Three-dimensional quantitative structure-activity relationship (3D-QSAR), 233–239
Time-delays, 2, 3, 16–19
Tumor, 157–160, 204, 301
U
Unified Parkinson's Disease Rating Scale (UPDRS), 24, 27
V
Virtual environments, 136
Virtual reality, 135–143, 163, 194
W
Wearable devices, 163, 215, 216, 220
White matter, 81–87, 257