The Cambridge Handbook of Intelligence and Cognitive Neuroscience [1st ed.]
ISBN 1108480543, 9781108480543, 9781108727723

This handbook introduces the reader to the thought-provoking research on the neural foundations of human intelligence.


English · 515 pages · 2021


Table of Contents:

List of Figures page x
List of Tables xiii
List of Contributors xv
Preface xix

Part I Fundamental Issues 1
1 Defining and Measuring Intelligence: The Psychometrics and Neuroscience of g
   Thomas R. Coyle 3
2 Network Neuroscience Methods for Studying Intelligence
   Kirsten Hilger and Olaf Sporns 26
3 Imaging the Intelligence of Humans
   Kenia Martínez and Roberto Colom 44
4 Research Consortia and Large-Scale Data Repositories for Studying Intelligence
   Budhachandra Khundrakpam, Jean-Baptiste Poline, and Alan C. Evans 70

Part II Theories, Models, and Hypotheses 83
5 Evaluating the Weight of the Evidence: Cognitive Neuroscience Theories of Intelligence
   Matthew J. Euler and Ty L. McKinney 85
6 Human Intelligence and Network Neuroscience
   Aron K. Barbey 102
7 It’s about Time: Towards a Longitudinal Cognitive Neuroscience of Intelligence
   Rogier A. Kievit and Ivan L. Simpson-Kent 123
8 A Lifespan Perspective on the Cognitive Neuroscience of Intelligence
   Joseph P. Hennessee and Denise C. Park 147
9 Predictive Intelligence for Learning and Optimization: Multidisciplinary Perspectives from Social, Cognitive, and Affective Neuroscience
   Christine Ahrends, Peter Vuust, and Morten L. Kringelbach 162

Part III Neuroimaging Methods and Findings 189
10 Diffusion-Weighted Imaging of Intelligence
   Erhan Genç and Christoph Fraenz 191
11 Structural Brain Imaging of Intelligence
   Stefan Drakulich and Sherif Karama 210
12 Functional Brain Imaging of Intelligence
   Ulrike Basten and Christian J. Fiebach 235
13 An Integrated, Dynamic Functional Connectome Underlies Intelligence
   Jessica R. Cohen and Mark D’Esposito 261
14 Biochemical Correlates of Intelligence
   Rex E. Jung and Marwa O. Chohan 282
15 Good Sense and Good Chemistry: Neurochemical Correlates of Cognitive Performance Assessed In Vivo through Magnetic Resonance Spectroscopy
   Naftali Raz and Jeffrey A. Stanley 297

Part IV Predictive Modeling Approaches 325
16 Predicting Individual Differences in Cognitive Ability from Brain Imaging and Genetics
   Kevin M. Anderson and Avram J. Holmes 327
17 Predicting Cognitive-Ability Differences from Genetic and Brain-Imaging Data
   Emily A. Willoughby and James J. Lee 349

Part V Translating Research on the Neuroscience of Intelligence into Action 365
18 Enhancing Cognition
   Michael I. Posner and Mary K. Rothbart 367
19 Patient-Based Approaches to Understanding Intelligence and Problem-Solving
   Shira Cohen-Zimerman, Carola Salvi, and Jordan H. Grafman 382
20 Implications of Biological Research on Intelligence for Education and Public Policy
   Kathryn Asbury and Diana Fields 399
21 Vertical and Horizontal Levels of Analysis in the Study of Human Intelligence
   Robert J. Sternberg 416
22 How Intelligence Research Can Inform Education and Public Policy
   Jonathan Wai and Drew H. Bailey 434
23 The Neural Representation of Concrete and Abstract Concepts
   Robert Vargas and Marcel Adam Just 448

Index 469

The Cambridge Handbook of Intelligence and Cognitive Neuroscience

Can the brain be manipulated to enhance intelligence? The answer depends on neuroscience progress in understanding how intelligence arises from the interplay of gene expression and experience in the developing brain, and how the mature brain processes information to solve complex reasoning problems. The bad news is that the issues are nightmarishly complex. The good news is that there is extraordinary progress from researchers around the world. This book is a comprehensive sampling of recent exciting results, especially from neuroimaging studies. Each chapter has minimal jargon, so an advanced technical background is not required to understand the issues, the data, or the interpretation of results. The prospects for future advances will whet the appetite of young researchers and fuel enthusiasm for researchers already working in these areas. Many intelligence researchers of the past dreamed about a day when neuroscience could be applied to understanding fundamental aspects of intelligence. As this book demonstrates, that day has arrived.

Aron K. Barbey is Professor of Psychology, Neuroscience, and Bioengineering at the University of Illinois at Urbana-Champaign. He directs the Intelligence, Learning, and Plasticity Initiative and the Decision Neuroscience Laboratory at the Beckman Institute for Advanced Science and Technology.

Sherif Karama is a psychiatrist with a PhD in neuroscience. He completed a five-year postdoctoral fellowship in Brain Imaging of Cognitive Ability Differences at the Montreal Neurological Institute. He is an assistant professor in the Department of Psychiatry of McGill University.

Richard J. Haier is Professor Emeritus in the School of Medicine, University of California, Irvine. His PhD in psychology is from Johns Hopkins University; he has been a staff fellow at NIMH and on the faculty of Brown University School of Medicine.

“This exciting book makes an elegant case that human intelligence is not the result of a test. It is the consequence of a brain. Drawing on state-of-the-art imaging methods, the reader is afforded a comprehensive view of the substrates enabling our most valued mental abilities.” Scott T. Grafton, Bedrosian-Coyne Presidential Chair in Neuroscience and Director of the Brain Imaging Center, University of California at Santa Barbara

“Our scientific understanding of human intelligence has advanced greatly over the past decade in terms of the measurement and modeling of intelligence in the human brain. This book provides an excellent analysis of current findings and theories written by top international authors. It should be recommended to students and professionals working in this field.” Sarah E. MacPherson, Senior Lecturer in Human Cognitive Neuroscience, University of Edinburgh

“This handbook focuses on the brain, but also integrates genetics and cognition. Come for a comprehensive brain survey and get the bonus of a panoramic foreshadowing of integrated intelligence research and applications.” Douglas K. Detterman, Louis D. Beaumont University Professor Emeritus of Psychological Sciences, Case Western Reserve University

“This handbook captures the conceptualization and measurement of intelligence, which is one of psychology’s greatest achievements. It shows how the advent of modern imaging techniques and large-scale data sets has added to our knowledge about brain–environmental–ability relationships and highlights the controversy in this rapidly expanding field.” Diane F. Halpern, Professor of Psychology, Emerita, Claremont McKenna College

“This handbook assembles an impressive group of pioneers and outstanding young researchers at the forefront of intelligence neuroscience. The chapters summarize the state of the field today and foreshadow what it might become.” Lars Penke, Professor of Psychology, Georg August University of Göttingen

“This book is a tribute to its topic. It is intelligently assembled, spanning all aspects of intelligence research and its applications. The authors are distinguished experts, masterfully summarizing the latest knowledge about intelligence obtained with cutting-edge methodology. If one wants to learn about intelligence, this is the book to read.” Yulia Kovas, Professor of Genetics and Psychology, Goldsmiths, University of London

The Cambridge Handbook of Intelligence and Cognitive Neuroscience Edited by

Aron K. Barbey University of Illinois at Urbana-Champaign

Sherif Karama McGill University

Richard J. Haier University of California, Irvine

University Printing House, Cambridge CB2 8BS, United Kingdom One Liberty Plaza, 20th Floor, New York, NY 10006, USA 477 Williamstown Road, Port Melbourne, VIC 3207, Australia 314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India 79 Anson Road, #06-04/06, Singapore 079906 Cambridge University Press is part of the University of Cambridge. It furthers the University’s mission by disseminating knowledge in the pursuit of education, learning, and research at the highest international levels of excellence. www.cambridge.org Information on this title: www.cambridge.org/9781108480543 DOI: 10.1017/9781108635462 © Cambridge University Press 2021 This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press. First published 2021 A catalogue record for this publication is available from the British Library. Library of Congress Cataloging-in-Publication Data Names: Barbey, Aron K., editor. | Karama, Sherif, editor. | Haier, Richard J., editor. Title: The Cambridge handbook of intelligence and cognitive neuroscience / edited by Aron K. Barbey, University of Illinois, Urbana-Champaign, Sherif Karama, McGill University, Montréal, Richard J. Haier, University of California, Irvine. Description: 1 Edition. | New York : Cambridge University Press, 2020. | Series: Cambridge handbooks in psychology | Includes bibliographical references and index. Identifiers: LCCN 2020033919 (print) | LCCN 2020033920 (ebook) | ISBN 9781108480543 (hardback) | ISBN 9781108727723 (paperback) | ISBN 9781108635462 (epub) Subjects: LCSH: Intellect, | Cognitive neuroscience. 
Classification: LCC BF431 .C268376 2020 (print) | LCC BF431 (ebook) | DDC 153.9–dc23 LC record available at https://lccn.loc.gov/2020033919 LC ebook record available at https://lccn.loc.gov/2020033920 ISBN 978-1-108-48054-3 Hardback ISBN 978-1-108-72772-3 Paperback Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.


For all those who inspire us – without reference to success or failure – to understand the origins of intelligence and the diversity of talents that make us all equally human. And for Michelle, who inspires me, beyond compare.
   Aron K. Barbey

To my son, Alexandre, who enriches my life. To my father, Adel, who has taught me not to let the dictates of my passions interfere with my assessments of the facts and of the weight of evidence. To the mentors who have shaped my career and approach to science.
   Sherif Karama

Dedicated to all the scientists and students who follow intelligence data wherever they lead, especially into the vast uncharted recesses of the brain.
   Richard J. Haier

Contents

List of Figures page x
List of Tables xiii
List of Contributors xv
Preface xix

Part I Fundamental Issues 1

1 Defining and Measuring Intelligence: The Psychometrics and Neuroscience of g
thomas r. coyle 3

2 Network Neuroscience Methods for Studying Intelligence
kirsten hilger and olaf sporns 26

3 Imaging the Intelligence of Humans
kenia martínez and roberto colom 44

4 Research Consortia and Large-Scale Data Repositories for Studying Intelligence
budhachandra khundrakpam, jean-baptiste poline, and alan c. evans 70

Part II Theories, Models, and Hypotheses 83

5 Evaluating the Weight of the Evidence: Cognitive Neuroscience Theories of Intelligence
matthew j. euler and ty l. mckinney 85

6 Human Intelligence and Network Neuroscience
aron k. barbey 102

7 It’s about Time: Towards a Longitudinal Cognitive Neuroscience of Intelligence
rogier a. kievit and ivan l. simpson-kent 123

8 A Lifespan Perspective on the Cognitive Neuroscience of Intelligence
joseph p. hennessee and denise c. park 147

9 Predictive Intelligence for Learning and Optimization: Multidisciplinary Perspectives from Social, Cognitive, and Affective Neuroscience
christine ahrends, peter vuust, and morten l. kringelbach 162

Part III Neuroimaging Methods and Findings 189

10 Diffusion-Weighted Imaging of Intelligence
erhan genç and christoph fraenz 191

11 Structural Brain Imaging of Intelligence
stefan drakulich and sherif karama 210

12 Functional Brain Imaging of Intelligence
ulrike basten and christian j. fiebach 235

13 An Integrated, Dynamic Functional Connectome Underlies Intelligence
jessica r. cohen and mark d’esposito 261

14 Biochemical Correlates of Intelligence
rex e. jung and marwa o. chohan 282

15 Good Sense and Good Chemistry: Neurochemical Correlates of Cognitive Performance Assessed In Vivo through Magnetic Resonance Spectroscopy
naftali raz and jeffrey a. stanley 297

Part IV Predictive Modeling Approaches 325

16 Predicting Individual Differences in Cognitive Ability from Brain Imaging and Genetics
kevin m. anderson and avram j. holmes 327

17 Predicting Cognitive-Ability Differences from Genetic and Brain-Imaging Data
emily a. willoughby and james j. lee 349

Part V Translating Research on the Neuroscience of Intelligence into Action 365

18 Enhancing Cognition
michael i. posner and mary k. rothbart 367

19 Patient-Based Approaches to Understanding Intelligence and Problem-Solving
shira cohen-zimerman, carola salvi, and jordan h. grafman 382

20 Implications of Biological Research on Intelligence for Education and Public Policy
kathryn asbury and diana fields 399

21 Vertical and Horizontal Levels of Analysis in the Study of Human Intelligence
robert j. sternberg 416

22 How Intelligence Research Can Inform Education and Public Policy
jonathan wai and drew h. bailey 434

23 The Neural Representation of Concrete and Abstract Concepts
robert vargas and marcel adam just 448

Index 469

Figures

2.1 Schematic illustration of structural and functional brain network construction and key network metrics. page 29
2.2 The brain bases of intelligence – from a network neuroscience perspective. 32
3.1 Regions identified by the Parieto-Frontal Integration Theory (P-FIT) as relevant for human intelligence. 46
3.2 Variability in the gray matter correlates of intelligence across the psychometric hierarchy as reported in one study by Román et al. (2014). 48
3.3 Workflow for voxel-based morphometry (VBM) and surface-based morphometry (SBM) analysis. 52
3.4 Top panel: three left hemisphere brain surfaces from different individuals. 53
3.5 (A and B) Distribution and variability of cortical thickness computed through different surface-based protocols: Cortical Pattern Matching (CPM), BrainSuite, and CIVET. 54
3.6 (A) Pearson’s correlations among cortical thickness (CT), cortical surface area (CSA), and cortical gray matter volume (CGMV) obtained from a subsample of 279 healthy children and adolescents of the Pediatric MRI Data Repository created for the National Institute of Mental Health MRI Study of Normal Brain Development (Evans and Brain Development Cooperative Group, 2006). (B) Topography of significant correlations (q < .05, false discovery rate (FDR) corrected) between IQ and cortical thickness (CT), cortical surface area (CSA), and cortical gray matter volume (CGMV). 55
3.7 Summary of basic analytic steps for connectome-based analyses (A). The analytic sequence for computing the structural and functional connectivity matrices (B). 58
3.8 Structural and functional correlates of human intelligence are not identified within the same brain regions: “the dissociation of functional vs. structural brain imaging correlates of intelligence is at odds with the principle assumption of the P-FIT that functional and structural studies on neural correlates of intelligence converge to imply the same set of brain regions” (Basten et al., 2015, p. 21). 59
3.9 Mean (A) and variability (B) of cortical thickness across the cortex in two groups of individuals (Sample A and Sample B) matched for sex, age, and cognitive performance. The regional maps are almost identical. Pearson’s correlations between visuospatial intelligence and cortical thickness differences in these two groups are also shown (C). 61
6.1 Small-world network. 105
6.2 Intrinsic connectivity networks and network flexibility. 107
6.3 Dynamic functional connectivity. 112
7.1 Simplified bivariate latent change score model illustrating the co-development of intelligence scores (top) and brain measures (bottom) across two waves. 125
7.2 An overview of longitudinal studies of brain structure, function, and intelligence. 130
8.1 Lifespan performance measures. 151
8.2 A conceptual model of the scaffolding theory of aging and cognition-revisited (STAC-r). 152
9.1 The pleasure cycle, interactions between experience and predictions, as well as how learning might occur. 164
9.2 The pleasure cycle, on its own, during circadian cycles, and over the lifespan. 166
9.3 Parameter optimization of learning models. 169
9.4 Hierarchical neuronal workspace architectures. 171
9.5 Reward-related signals in the orbitofrontal cortex (OFC) unfold dynamically across space and time. 173
10.1 The top half depicts ellipsoids (left side, A) and tensors (right side, B) that were yielded by means of diffusion-weighted imaging and projected onto a coronal slice of an MNI brain. 194
10.2 White matter fiber tracts whose microstructural properties were found to correlate with interindividual differences in intelligence. 196
12.1 Brain activation associated with the processing of intelligence-related tasks, showing the results of the meta-analysis conducted by Santarnecchi, Emmendorfer, and Pascual-Leone (2017). 238
12.2 Intelligence-related differences in brain activation during cognitive processing. 239
12.3 Brain activation as a function of task difficulty and intelligence. 248
13.1 Brain graph schematic. 265
14.1 Representative spectrum from human voxel obtained from parietal white matter. 284
14.2 Linear relationship between size of study (Y axis) and magnitude of NAA–intelligence, reasoning, general cognitive functioning relationship (X axis), with the overall relationship being inverse (R² = .20). 289
15.1 Examples of a quantified ¹H MRS spectrum and a quantified ³¹P MRS spectrum. 299
16.1 A graphical depiction of the Deep Boltzmann Machine (DBM) developed by Wang et al. (2018) to predict psychiatric case status. 336
19.1 Schematic drawing of brain areas associated with intelligence based on lesion mapping studies. 385
23.1 Conceptual schematic showing differences between GLM activation-based approaches and pattern-oriented MVPA, where the same number of voxels activate (shown as dark voxels) for two concepts but the spatial pattern of the activated voxels differs. 450

Tables

4.1 Details of large-scale datasets and research consortia with concurrent measures of neuroimaging and intelligence (and/or related) scores, and, in some cases, genetic data. page 72
6.1 Summary of cognitive neuroscience theories of human intelligence. 103
7.1 An overview of longitudinal studies of brain structure, function, and intelligence. 127
13.1 Definitions and descriptions of graph theory metrics. 266
14.1 Studies of NAA. 288
21.1 Vertical levels of analysis for the study of human intelligence. 418
21.2 Horizontal levels of analysis for the study of human intelligence. 424
22.1 Reverse and forward causal questions pertaining to intelligence. 438

Contributors

christine ahrends, PhD Candidate, Department of Psychiatry, University of Oxford, UK
kevin m. anderson, PhD Candidate, Department of Psychology and Psychiatry, Yale University, USA
kathryn asbury, PhD, Senior Lecturer, Department of Education, University of York, UK
drew h. bailey, PhD, Associate Professor, School of Education, University of California, Irvine, USA
aron k. barbey, PhD, Professor, Department of Psychology and Beckman Institute, University of Illinois at Urbana-Champaign, USA
ulrike basten, PhD, Postdoctoral Researcher, Bernstein Center for Computational Neuroscience, Goethe University Frankfurt, Germany
marwa o. chohan, PhD Candidate, Albuquerque Academy, New Mexico, USA
jessica r. cohen, PhD, Assistant Professor, Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, USA
shira cohen-zimerman, PhD, Postdoctoral Fellow, Shirley Ryan Ability Lab, USA
roberto colom, PhD, Professor, Department of Biological and Health Psychology, Autonomous University of Madrid, Spain
thomas r. coyle, PhD, Professor, Department of Psychology, University of Texas at San Antonio, USA
mark d’esposito, MD, Professor, Helen Wills Neuroscience Institute, University of California, Berkeley, USA
stefan drakulich, PhD Candidate, Integrated Program in Neuroscience, Montreal Neurological Institute, Canada
matthew j. euler, PhD, Assistant Professor, Department of Psychology, The University of Utah, USA
alan c. evans, PhD, James McGill Professor, Department of Neurology and Neurosurgery, McGill University, Canada
christian j. fiebach, PhD, Professor, Psychology Department, Goethe University Frankfurt, Germany
diana fields, PhD Candidate, Department of Education, University of York, UK
christoph fraenz, PhD, Postdoctoral Researcher, Department of Psychology, Ruhr-University Bochum, Germany
erhan genç, PhD, Principal Investigator, Department of Psychology, Ruhr-University Bochum, Germany
jordan h. grafman, PhD, Professor of Physical Medicine and Rehabilitation, Neurology – Ken and Ruth Davee Department, Northwestern University Feinberg School of Medicine, USA
richard j. haier, PhD, Professor Emeritus, School of Medicine, University of California, Irvine, USA
joseph p. hennessee, PhD, Postdoctoral Researcher, School of Behavior and Brain Sciences, The University of Texas at Dallas, USA
kirsten hilger, PhD, Postdoctoral Research Associate, Faculty of Human Sciences, University of Würzburg, Germany
avram j. holmes, PhD, Assistant Professor, Department of Psychology and Psychiatry, Yale University, USA
rex e. jung, PhD, Assistant Professor, Department of Neurosurgery, University of New Mexico, USA
marcel adam just, PhD, D. O. Hebb Professor of Psychology, Psychology Department, Carnegie Mellon University, USA
sherif karama, MD PhD, Assistant Professor, Department of Psychiatry, McGill University, Canada
budhachandra khundrakpam, PhD, Postdoctoral Fellow, Department of Neurology and Neurosurgery, McGill University, Canada
rogier a. kievit, PhD, Programme Leader, Department of Neuroscience, University of Cambridge, UK
morten l. kringelbach, DPhil, Professor, Department of Psychiatry, Aarhus University, Denmark
james j. lee, PhD, Assistant Professor, Department of Psychology, University of Minnesota, USA
kenia martínez, PhD, Investigator, Biomedical Imaging and Instrumentation Group, Gregorio Marañón General University Hospital, Spain
ty l. mckinney, PhD Candidate, Department of Psychology, University of Utah, USA
denise c. park, PhD, Professor, Center for Vital Longevity, The University of Texas at Dallas, USA
jean-baptiste poline, PhD, Associate Professor, Department of Neurology and Neurosurgery, McGill University, Canada
michael i. posner, PhD, Professor Emeritus, Department of Psychology, University of Oregon, USA
naftali raz, PhD, Professor, Institute of Gerontology, Wayne State University, USA
mary k. rothbart, PhD, Professor Emeritus, Department of Psychology, University of Oregon, USA
carola salvi, PhD, Lecturer, Department of Psychiatry, University of Texas at Austin, USA
ivan l. simpson-kent, PhD Candidate, MRC Cognitive and Brain Sciences Unit, University of Cambridge, UK
olaf sporns, PhD, Distinguished Professor, Department of Psychological and Brain Sciences, Indiana University Bloomington, USA
jeffrey a. stanley, PhD, Professor, Department of Psychiatry, Wayne State University, USA
robert j. sternberg, PhD, Professor of Human Development, College of Human Ecology, Cornell University, USA
robert vargas, PhD Candidate, Psychology Department, Carnegie Mellon University, USA
peter vuust, Professor, Department of Clinical Medicine, Aarhus University, Denmark
jonathan wai, PhD, Assistant Professor, Department of Education Reform, University of Arkansas, USA
emily a. willoughby, PhD Candidate, Department of Psychology, University of Minnesota, USA

Preface

This book introduces one of the greatest and most exciting scientific challenges of our time – explicating the neurobiological foundations of human intelligence. Written for students and for professionals in related fields, The Cambridge Handbook of Intelligence and Cognitive Neuroscience surveys research emerging from the rapidly developing neuroscience literature on human intelligence. Our emphasis is on theoretical innovation and recent advances in the measurement, modeling, and characterization of the neurobiology of intelligence, especially from brain imaging studies. Scientific research on human intelligence is evolving beyond the limitations of psychometric testing approaches toward advanced neuroscience methods. Each chapter, written by experts, explains these developments in clear language. Together the chapters show how scientists are uncovering the rich constellation of brain elements and connections that give rise to the remarkable depth and complexity of human reasoning and personal expression. If you doubt that intelligence can be defined or measured sufficiently for scientific study, you are in for a surprise. Each chapter presents thought-provoking findings and conceptions to whet the appetite of students and researchers. Part I is an introduction to fundamental issues in the characterization and measurement of general intelligence (Coyle, Chapter 1), reviewing emerging methods from network neuroscience (Hilger and Sporns, Chapter 2), presenting a comparative analysis of structural and functional MRI methods (Martínez and Colom, Chapter 3), and surveying multidisciplinary research consortia and large-scale data repositories for the study of general intelligence (Khundrakpam, Poline, and Evans, Chapter 4).
Part II reviews cognitive neuroscience theories of general intelligence, evaluating the weight of the neuroscience evidence (Euler and McKinney, Chapter 5), presenting an emerging approach from network neuroscience (Barbey, Chapter 6), and reviewing neuroscience research that investigates general intelligence within a developmental (Kievit and Simpson-Kent, Chapter 7) and lifespan framework (Hennessee and Park, Chapter 8), and that applies a social, cognitive, and affective neuroscience perspective (Ahrends, Vuust, and Kringelbach, Chapter 9). Due to a production issue, Chapter 23 (Vargas and Just) was omitted from Part II, where it was intended to appear. This chapter now appears as the final chapter.


Part III provides a systematic review of contemporary neuroimaging methods for studying intelligence, including structural and diffusion-weighted MRI techniques (Genç and Fraenz, Chapter 10; Drakulich and Karama, Chapter 11), functional MRI methods (Basten and Fiebach, Chapter 12; Cohen and D’Esposito, Chapter 13), and spectroscopic imaging of metabolic markers of intelligence (Jung and Chohan, Chapter 14; Raz and Stanley, Chapter 15). Part IV reviews predictive modeling approaches to the study of human intelligence, presenting research that enables the prediction of cognitive ability differences from brain imaging and genetics data (Anderson and Holmes, Chapter 16; Willoughby and Lee, Chapter 17). Finally, Part V addresses the need to translate findings from this burgeoning literature into potential action/policy, presenting research on cognitive enhancement (Posner and Rothbart, Chapter 18), clinical translation (Cohen-Zimerman, Salvi, and Grafman, Chapter 19), and education and public policy (Asbury and Fields, Chapter 20; Sternberg, Chapter 21; Wai and Bailey, Chapter 22). Research on cognitive neuroscience offers the profound possibility of enhancing intelligence, perhaps in combination with molecular biology, by manipulating genes and brain systems. Imagine what this might mean for education, life success, and for addressing fundamental social problems. You may even decide to pursue a career dedicated to these prospects. That would be an extra reward for us.

PART I

Fundamental Issues

1 Defining and Measuring Intelligence: The Psychometrics and Neuroscience of g

Thomas R. Coyle

Aims and Organization

The purpose of this chapter is to review key principles and findings of intelligence research, with special attention to psychometrics and neuroscience. Following Jensen (1998), the chapter focuses on intelligence defined as general intelligence (g). g represents variance common to mental tests and arises from ubiquitous positive correlations among tests (scaled so that higher scores indicate better performance). The positive correlations indicate that people who perform well on one test generally perform well on all others. The chapter reviews measures of g (e.g., IQ and reaction times), models of g (e.g., Spearman’s model and the Cattell-Horn-Carroll model), and the invariance of g across test batteries. The chapter relies heavily on articles published in the last few years, seminal research on intelligence (e.g., neural efficiency hypothesis), and meta-analyses of intelligence and g-loaded tests. Effect sizes are reported for individual studies and meta-analyses of the validity of g and its link to the brain. The chapter is divided into five sections. The first section discusses historical definitions of intelligence, concluding with the decision to focus on g. The second section considers vehicles for measuring g (e.g., IQ tests), models for representing g (e.g., Cattell-Horn-Carroll), and the invariance of g. The next two sections discuss the predictive power of g-loaded tests, followed by a discussion of intelligence and the brain. The final section considers outstanding issues for future research. The issues include non-g factors, the development of intelligence, and recent research on genetic contributions to intelligence and the brain (e.g., Lee et al., 2018).

Defining Intelligence

Intelligence can be defined as a general cognitive ability related to solving problems efficiently and effectively. Historically, several definitions of intelligence have been proposed. Alfred Binet, who co-developed the precursor
to modern intelligence tests (i.e., Stanford-Binet Intelligence Scales), defined it as “judgment, otherwise called good sense, practical sense, initiative, the faculty of adapting one’s self to circumstances” (Binet & Simon, 1916/1973, pp. 42–43). David Wechsler, who developed the Wechsler Intelligence Scales, defined it as the “global capacity of the individual to act purposefully, to think rationally and to deal effectively with his environment” (Wechsler, 1944, p. 3). Howard Gardner, a proponent of the theory of multiple intelligences, defined it as “the ability to solve problems, or to create products, that are valued within one or more cultural settings” (Gardner, 1983/2003, p. x). Perhaps the best known contemporary definition of intelligence was reported in the statement “Mainstream Science on Intelligence” (Gottfredson, 1997). The statement was signed by 52 experts on intelligence and first published in the Wall Street Journal. It defines intelligence as: [A] very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings – catching on, making sense of things, or figuring out what to do. (Gottfredson, 1997, p. 13)

Two elements of the definition are noteworthy. The first is that intelligence represents a general ability, which influences performance on all mental tasks (e.g., verbal, math, spatial). The second is that intelligence involves the ability to learn quickly, meaning that intelligence is related to fast and efficient mental processing. Psychometric evidence strongly supports the view that intelligence, measured by cognitive tests, reflects a general ability that permeates all mental tasks, and that it is associated with efficient mental processing, notably on elementary cognitive tasks that measure reaction times (e.g., Jensen, 1998, 2006). Arthur Jensen, a titan in intelligence research, advised against the use of the term “intelligence” because of its vague meaning and questionable scientific utility, noting: “Largely because of its popular and literary usage, the word ‘intelligence’ has come to mean too many different things to many people (including psychologists). It has also become so fraught with value judgments, emotions, and prejudices as to render it useless in scientific discussion” (Jensen, 1998, p. 48). Rather than using the term “intelligence,” Jensen (1998) proposed defining mental ability as g, which represents variance common to mental tasks. g reflects the empirical reality that people who do well in one mental task generally do well on all other mental tasks, a finding supported by positive correlations among cognitive tests. Following Jensen (1998), the current chapter focuses on g and g-loaded measures. It discusses methods for measuring g, models for representing g, the validity of g-loaded tests, and the nexus of relations between g-loaded measures, the brain, and diverse criteria.


Measuring g

Jensen (1998, pp. 308–314) distinguished between constructs, vehicles, and measurements of g. The construct of g represents variance common to diverse mental tests. g is based on the positive manifold, which refers to positive correlations among tests given to representative samples. The positive correlations indicate that people who score high on one test generally score high on all others. g is a source of variance (i.e., individual differences) in test performance and therefore provides no information about an individual’s level of g, which can be measured using vehicles of g. Vehicles of g refer to methods used to elicit an individual’s level of g. Common vehicles of g include IQ tests, academic aptitude tests (SAT, ACT, PSAT), and elementary cognitive tasks (ECTs) that measure reaction times. All mental tests are g loaded to some extent, a finding consistent with Spearman’s (1927, pp. 197–198) principle of the indifference of the indicator, which states that all mental tests are loaded with g, irrespective of their content. A test’s g loading represents its correlation with g. Tests with strong g loadings generally predict school and work criteria well, whereas tests with weak g loadings generally predict such criteria poorly (Jensen, 1998, pp. 270–294). Measurements of g refer to the measurement scale of g-loaded tests (e.g., interval or ratio). IQ scores are based on an interval scale, which permits ranking individuals on a trait (from highest to lowest) and assumes equal intervals between units. IQ tests provide information about an individual’s performance on g-loaded tests (compared to other members in his or her cohort), which can be converted to a percentile. Unfortunately, IQ scores lack an absolute zero point and therefore do not permit proportional comparisons between individuals such as “individual A is twice as smart (in IQ points) as individual B,” which would require a ratio scale of measurement.
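The positive manifold and a test's g loading can be made concrete with a small simulation. The sketch below uses hypothetical loadings and synthetic data, and takes the first principal component of the correlation matrix as a simple stand-in for a formally extracted g factor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical g loadings for five tests (illustrative values, not real data).
loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.4])
n = 1000

# Each examinee's score on each test = shared g component + unique noise,
# scaled so every test has (approximately) unit variance.
g = rng.standard_normal(n)
scores = g[:, None] * loadings + rng.standard_normal((n, 5)) * np.sqrt(1 - loadings**2)

# Positive manifold: every pairwise test correlation is positive.
R = np.corrcoef(scores, rowvar=False)
assert (R[~np.eye(5, dtype=bool)] > 0).all()

# Approximate each test's g loading via the first principal component of R.
eigvals, eigvecs = np.linalg.eigh(R)  # eigenvalues in ascending order
pc1 = np.abs(eigvecs[:, -1]) * np.sqrt(eigvals[-1])
print(np.round(pc1, 2))  # recovers the ordering of the true loadings
```

Tests with larger true loadings come out with larger estimated loadings, mirroring the point that a test's g loading is its correlation with the common factor.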

g-Loaded Tests

This section reviews common tests of g. The tests include IQ tests, aptitude tests (SAT, ACT, PSAT), tests of fluid and crystallized intelligence, elementary cognitive tasks (ECTs), and tests of executive functions.

IQ Tests

IQ tests include the Wechsler Intelligence Scales, which are among the most widely used IQ tests in the world. The Wechsler Scales are age-normed and define the average IQ at any age as 100 (with a standard deviation of 15). The scales yield four ability indexes: verbal comprehension, which measures verbal abilities (e.g., vocabulary knowledge); perceptual reasoning, which measures
non-verbal reasoning (e.g., building a model with blocks); processing speed, which measures psychomotor speed (e.g., completing a coding chart); and working memory, which measures the ability to manipulate information in immediate memory. Working memory is a strong correlate of g (e.g., Gignac & Watkins, 2015; see also, Colom, Rebollo, Palacios, Juan-Espinosa, & Kyllonen, 2004). Working memory can be measured using the Wechsler backward digit-span subtest, which measures the ability to repeat a series of digits in reverse order. The Wechsler Scales yield a strong g factor (Canivez & Watkins, 2010), which measures variance common to mental tests. g largely explains the predictive power of tests, which lose predictive power after (statistically) removing g from tests (Jensen, 1998, pp. 270–294; see also, Coyle, 2018a; Ree, Earles, & Teachout, 1994). The Wechsler vocabulary subtest has one of the strongest correlations with g (compared to other subtests), suggesting that vocabulary knowledge is a good proxy of g. A Wechsler subtest with a relatively weak g loading is coding, which measures the ability to quickly complete a coding chart. Coding partly measures handwriting speed, which involves a motor component that correlates weakly with g (e.g., Coyle, 2013). IQ scores are based on an interval (rather than ratio) scale, which estimates where an individual ranks relative to others in his or her age group. IQ scores can be converted to a percentile rank, which describes the percentage of scores that are equal to or lower than it. (For example, a person who scores at the 95th percentile performs better than 95% of people who take the test.) However, because IQ scores are not based on a ratio scale (and therefore have no real zero point), they cannot describe where a person stands in proportion to another person.
Therefore, IQ scores do not permit statements such as “a person with an IQ of 120 is twice as smart (in IQ points) as a person with an IQ of 60.” For such statements to be meaningful, cognitive performance must be measured on a ratio scale. Reaction times are based on a ratio scale and do permit proportional statements (for similar arguments, see Haier, 2017, pp. 41–42; Jensen, 2006, pp. 56–58).
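Because IQ is age-normed to a mean of 100 and a standard deviation of 15 on an (approximately) normal distribution, the IQ-to-percentile conversion described above is a one-line calculation; a minimal sketch using only the Python standard library:

```python
from statistics import NormalDist

# IQ norming assumed here: mean 100, SD 15 (the Wechsler convention above).
iq_dist = NormalDist(mu=100, sigma=15)

def iq_to_percentile(iq: float) -> float:
    """Percentage of the norming population scoring at or below this IQ."""
    return 100 * iq_dist.cdf(iq)

def percentile_to_iq(p: float) -> float:
    """IQ score at the given percentile rank (0 < p < 100)."""
    return iq_dist.inv_cdf(p / 100)

print(round(iq_to_percentile(130), 1))  # 97.7
print(round(percentile_to_iq(95)))      # 125
```

Note that these are rank statements on an interval scale; nothing in the calculation licenses ratio statements such as "twice as smart."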

Aptitude Tests

Aptitude tests are designed to measure specific abilities (verbal or math) and predict performance in a particular domain (school or work). Aptitude tests include the SAT and ACT, two college admissions tests taken in high school; the PSAT, a college readiness test taken in junior high school; and the Armed Services Vocational Aptitude Battery (ASVAB), a selection test used by the US military. All of these tests produce scores based on an interval scale and provide percentiles to compare an examinee to others in his or her cohort. The SAT, ACT, PSAT, and ASVAB also yield a strong g factor, which accounts for about half of the variance in the tests. All of the tests correlate strongly with IQ tests and g factors based on other tests, suggesting that they are in fact
“intelligence” tests, even though “intelligence” is not mentioned in their names. Finally, all of the tests derive their predictive power for work and school criteria largely (though not exclusively) from g (e.g., Coyle & Pillow, 2008; see also Coyle, Purcell, Snyder, & Kochunov, 2013).

Tests of Fluid and Crystallized Intelligence

Cattell (1963; see also Brown, 2016; Horn & Cattell, 1966) distinguished between fluid intelligence, which measures general reasoning ability on novel problems, and crystallized intelligence, which measures culturally acquired knowledge. A widely used test of fluid intelligence is the Raven’s Progressive Matrices. Each Raven’s item depicts a 3 × 3 grid, with the lower right cell empty and the other cells filled with shapes that form a pattern. Participants must select the shape (from eight options) that completes the pattern. The Raven’s correlates strongly with a g based on diverse tests (g loading [λ] ≈ .70), making it a good measure of g, and it also loads moderately on a visuospatial factor (λ ≈ .30; Gignac, 2015). Crystallized intelligence is often measured using vocabulary and general knowledge tests. Both types of tests measure culturally acquired knowledge and typically have among the highest correlations with a g based on diverse tests (λ ≈ .80; Gignac, 2015). Fluid and crystallized intelligence show different developmental trajectories over the lifespan (20–80 years). Fluid intelligence begins to decline in early adulthood and shows rapid declines in middle and late adulthood. In contrast, crystallized intelligence shows slight gains until later adulthood, with modest declines thereafter (e.g., Tucker-Drob, 2009, p. 1107).

Elementary Cognitive Tasks (ECTs)

ECTs examine relations between g and mental speed using reaction times (RTs) to simple stimuli (e.g., lights or sounds) (for a review see Jensen, 2006, pp. 155–186). ECTs measure two types of RTs: simple RT (SRT), which measures the speed of responding to a single stimulus (with no distractors), and choice RT (CRT), which measures the speed of responding to a target stimulus paired with one or more distractors. In general, RTs increase (become slower) with the number of distractors, which increases the complexity of the ECT. Moreover, RT-IQ relations, and RT relations with other g-loaded measures (e.g., working memory), increase as a function of task complexity. RT-IQ relations are weakest for SRT and stronger for CRT, with RT-IQ relations increasing with the complexity of the ECT (e.g., Jensen, 2006, pp. 164–166). Such a pattern is consistent with the idea that intelligence involves the ability to handle complexity (Gottfredson, 1997). A similar pattern is found when RT is correlated with participants’ age in childhood (up to 20 years) or adulthood (20–80 years). Age correlates more strongly with CRT than with SRT, and
CRT relations with age generally increase with the number of distractors (e.g., Jensen, 2006, pp. 105–117). ECTs can separate the effects of RT, which measures how quickly participants initiate a response to a reaction stimulus (light or sound), from movement time (MT), which measures how quickly participants execute a response after initiating it. RT and MT can be measured with the Jensen box (Jensen, 2006, pp. 27–31). The Jensen box involves a home button surrounded by a semicircle of one-to-eight response buttons, which occasionally light up. The participant begins with a finger on the home button, waits for a response button to light up, and then has to release the home button and press the response button. RT is the interval between the lighting of the response button and the release of the home button. MT is the interval between the release of the home button and the press of the response button. RT generally correlates more strongly with IQ and task complexity than does MT (Jensen, 2006, p. 234). Such results suggest that IQ reflects the ability to evaluate options and initiate a response (i.e., RT) more than the ability to execute a motoric response after deciding to initiate it (cf. Coyle, 2013).
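The RT/MT decomposition on the Jensen box is simple timestamp arithmetic; a minimal sketch (the trial timings below are hypothetical, chosen only for illustration):

```python
from dataclasses import dataclass

@dataclass
class Trial:
    """One Jensen-box trial; all event times are in milliseconds."""
    light_on: float      # a response button lights up
    home_release: float  # finger leaves the home button
    button_press: float  # response button is pressed

    @property
    def rt(self) -> float:
        """Reaction time: light onset to home-button release (decision/initiation)."""
        return self.home_release - self.light_on

    @property
    def mt(self) -> float:
        """Movement time: home-button release to response-button press (execution)."""
        return self.button_press - self.home_release

# Hypothetical trial: light at t=0, home released at 310 ms, button hit at 480 ms.
t = Trial(light_on=0.0, home_release=310.0, button_press=480.0)
print(t.rt, t.mt)  # 310.0 170.0
```

Separating the two intervals is what lets studies correlate the decision component (RT) and the motor component (MT) with IQ independently.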

Executive Functions (EFs) Executive functions are cognitive abilities used to plan, control, and coordinate behavior. EFs include three cognitive abilities: updating, which measures the ability to update information in working memory; shifting, which measures the ability to shift attention to different stimuli or goals; and inhibition, which measures the ability to suppress distractions (Miyake et al., 2000). Of the three EFs, updating and its close analog, working memory, correlate most strongly with g (e.g., Friedman et al., 2006; see also, Benedek, Jauk, Sommer, Arendasy, & Neubauer, 2014). The relation between working memory and g approaches unity in latent variable analysis (e.g., Colom et al., 2004; see also, Gignac & Watkins, 2015), with a mean meta-analytic correlation of .48 among manifest variables (Ackerman, Beier, & Boyle, 2005). The three major EFs (updating, shifting, inhibition) are related to each other, suggesting a general EF factor. Controlling for correlations among the three EFs indicates that updating (an analog of working memory) correlates most strongly with g, whereas shifting and inhibition correlate weakly with g (e.g., Friedman et al., 2006).

Models of Intelligence and g Two prominent models of g are a Spearman model with no group factors, and a hierarchical model with group factors (Jensen, 1998, pp. 73–81). Group factors estimate specific abilities (e.g., verbal, math, spatial), whereas g estimates variance common to all abilities. Group factors (and the tests used to estimate them) almost always correlate positively, reflecting shared variance among the factors. The Spearman model estimates g using manifest variables

Defining and Measuring Intelligence

(e.g., test scores), with no intervening group factors. In contrast, the hierarchical model estimates g based on a pyramidal structure, with g at the apex, group factors (broad and narrow) in the middle, and manifest variables (individual tests) at the base. There are many hierarchical models of g with group factors. One of the most notable is the Cattell-Horn-Carroll (CHC) model (McGrew, 2009). The CHC model describes g as a third-order factor, followed by broad second-order group factors, each loading on g, and narrow first-order group factors, each loading on a broad factor. The broad factors (sample narrow factors in parentheses) include fluid intelligence (induction), crystallized intelligence (general knowledge), quantitative knowledge (math knowledge), processing speed (perceptual speed), and short- and long-term memory (working memory capacity). In practice, intelligence research often targets g and a small number of group factors relevant to a study’s aims. It should be emphasized that all group factors (broad and narrow) are related to g. Therefore, the unique contribution of a group factor (e.g., math ability) to a criterion (e.g., school grades) can be examined only after statistically removing g from the factor, a point revisited in the section on non-g factors.

Invariance of g Using hierarchical models of g, Johnson and colleagues (Johnson, Bouchard, Krueger, McGue, & Gottesman, 2004; Johnson, te Nijenhuis, & Bouchard, 2008) estimated correlations among g factors based on different batteries of cognitive tests. An initial study (Johnson et al., 2004; N = 436 adults) estimated g and diverse group factors using three test batteries: Comprehensive Ability Battery (14 tests estimating five group factors), Hawaii Battery (17 tests estimating five group factors), and Wechsler Adult Intelligence Scale (11 tests estimating three group factors). g factors for each battery were estimated as second-order factors in latent variable analyses. Although the three batteries differed on key dimensions (e.g., number of tests, content of tests, number of group factors), the g factors of the batteries correlated nearly perfectly (r ≈ 1.00). The near-perfect correlations suggest that g is independent of specific tests and that g factors based on diverse test batteries are virtually interchangeable. Johnson et al.’s (2004) results were replicated in a subsequent study (Johnson et al., 2008), cleverly titled “Still just 1 g: Consistent results from five test batteries.” The study involved Dutch seamen (N = 500) who received five test batteries. The batteries estimated g and different group factors (perceptual, spatial, mechanical, dexterity), with few verbally loaded factors. Consistent with Johnson et al.’s (2004) results, the g factors of the different batteries correlated .95 or higher, with one exception. The exception was a test battery composed entirely of matrix-type reasoning tests (Cattell Culture Fair Test), which yielded a g that correlated .77 or higher with the g factors of the other


tests. In Johnson et al.’s (2008) words, the results “provide evidence both for the existence of a general intelligence factor [i.e., g] and for the consistency and accuracy of its measurement” (p. 91). Johnson et al.’s (2004, 2008) results are consistent with Spearman’s (1927) principle of the indifference of the indicator. This principle is based on the idea that all cognitive tests are indicators of g and load on g (to some extent). The g loading of a test represents its correlation with g, which reflects how well it estimates g. The degree to which a test battery estimates g depends on the number and diversity of tests in the battery (e.g., Major, Johnson, & Bouchard, 2011). Larger and more diverse batteries like the ones used by Johnson et al. (2004, 2008) generally yield better estimates of g because such batteries are more likely to identify variance common to tests (i.e., g) and have test-specific variances cancel out. Johnson et al.’s (2004, 2008) research estimated g using samples from WEIRD countries (e.g., United States and Europe). WEIRD stands for Western, Educated, Industrialized, Rich, and Democratic. WEIRD countries have high levels of wealth and education (Henrich, Heine, & Norenzayan, 2010), which contribute to cognitive development, specific abilities (e.g., verbal, math, spatial), and g. Non-WEIRD countries have fewer resources, which may retard cognitive development and yield a poorly defined g factor (which explains limited variance among tests). Warne and Burningham (2019) examined g factors in non-WEIRD countries (e.g., Bangladesh, Papua New Guinea, Sudan). Cognitive test data were obtained for 97 samples from 31 non-WEIRD countries totaling 52,340 individuals. Exploratory factor analyses of the tests estimated g, defined as the first unrotated factor when only one factor was extracted, or a second-order factor when multiple factors were extracted. 
A single g factor was observed in 71 samples (73%), and a second-order g factor was observed in 23 of the remaining 26 samples. The average variance explained by the first unrotated factor was 46%, which is consistent with results from WEIRD countries. In sum, a clearly identified g factor was observed in 94 of 97 non-WEIRD samples, suggesting that g is a universal human trait, found in both WEIRD and non-WEIRD countries.
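The "first unrotated factor" extraction used in these studies can be sketched on a toy correlation matrix. The matrix values below are invented, and the first principal component is used as a simple stand-in for a principal-axis factor:

```python
import numpy as np

# Invented correlation matrix for four cognitive tests (positive manifold).
R = np.array([
    [1.00, 0.50, 0.40, 0.45],
    [0.50, 1.00, 0.35, 0.40],
    [0.40, 0.35, 1.00, 0.50],
    [0.45, 0.40, 0.50, 1.00],
])

# First unrotated factor ~ leading eigenvector scaled by sqrt(eigenvalue).
eigvals, eigvecs = np.linalg.eigh(R)            # ascending eigenvalue order
g_eigval = eigvals[-1]
loadings = eigvecs[:, -1] * np.sqrt(g_eigval)   # each test's "g loading"
if loadings.sum() < 0:                          # fix the arbitrary sign
    loadings = -loadings

# Proportion of total test variance explained by the first factor.
var_explained = g_eigval / R.shape[0]
```

With a positive manifold like this one, every test loads positively on the first factor and the factor explains roughly half the total variance, the same signature reported for the non-WEIRD samples.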

Predictive Power of g and g-Loaded Tests Intelligence tests are useful because they predict diverse criteria in everyday life. The current section reviews research on the predictive power of intelligence at school and work. The review focuses on recent and seminal studies of g-loaded tests. g-loaded tests include IQ tests (Wechsler Scales), college aptitude tests (SAT, ACT, PSAT), military selection tests (Armed Services Vocational Aptitude Battery), and other cognitive tests (e.g., ECTs). In general, any test that involves a mental challenge will be g-loaded (to some extent), with the degree of relatedness between a test and g increasing with task complexity.


Intelligence and School Intelligence tests were developed to predict school performance and so it is no surprise that they predict school grades. Roth et al. (2015) examined the meta-analytic correlation between intelligence tests (verbal and nonverbal) and school grades with 240 samples and 105,185 students. The population correlation was .54 after correcting for artifacts (measurement error and range restriction). Moderator analyses indicated that the test-grade correlations increased from elementary to middle to high school (.45, .54, .58), and were stronger for math/science (.49) than for languages (.44), social sciences (.43), art/music (.31), and sports (.09). Roth et al. (2015) argued that the increases in effect sizes across grade levels could be attributed to increases in the complexity of course material, which would decrease the ability to compensate with practice and increase the contribution of intelligence. Are intelligence-grade correlations attributable to students’ socioeconomic status (SES), which reflects parental wealth, education, and occupational status? The question is important because intelligence tests and college admissions tests (SAT) have been assumed to derive their predictive power from SES. To address this question, Sackett, Kuncel, Arneson, Cooper, and Waters (2009) meta-analyzed SAT-GPA correlations using college GPAs from 41 institutions and correcting for range restriction. (The SAT correlates strongly with a g based on diverse tests [r = .86, corrected for nonlinearity, Frey & Detterman, 2004].) The meta-analytic SAT-GPA correlation was .47, which dropped negligibly to .44 after controlling for SES (Sackett et al., 2009, p. 7). Contrary to the assumption that the SAT derives its predictive power from SES, the results suggest that SES has a negligible impact on SAT-GPA correlations. The predictive power of admissions tests is not limited to undergraduate criteria but also applies to graduate and professional school criteria.
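Two of the statistical corrections mentioned here, correcting for range restriction and partialling out SES, have simple closed forms. A sketch follows; the numeric inputs are illustrative, not Sackett et al.'s data:

```python
import math

def correct_range_restriction(r, u):
    """Thorndike Case II correction for direct range restriction.
    u = SD of the predictor in the population / SD in the restricted sample."""
    return r * u / math.sqrt(1 + r * r * (u * u - 1))

def partial_r(rxy, rxz, ryz):
    """Correlation between x and y after partialling out z (e.g., SES)."""
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz**2) * (1 - ryz**2))

# Illustrative use: a restricted-sample r of .35 with u = 1.5 corrects upward,
# while partialling a modest SES correlate out of r = .47 changes it little.
corrected = correct_range_restriction(0.35, 1.5)
partialled = partial_r(0.47, 0.30, 0.30)
```

This mirrors the pattern Sackett et al. report: the SES-partialled correlation (.44) barely differs from the uncontrolled correlation (.47).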
Kuncel and Hezlett (2007) meta-analyzed correlations involving graduate admissions tests, correcting for range restriction and measurement error. The tests included the Graduate Record Examination (GRE), Law School Admission Test (LSAT), Pharmacy College Admission Test (PCAT), Miller Analogies Test (MAT), Graduate Management Admission Test (GMAT), and Medical College Admission Test (MCAT). The tests robustly predicted first-year graduate GPA (r > .40, all tests), overall graduate GPA (r > .40, all tests), and qualifying exams (r > .39, GRE and MAT). Moreover, the tests also predicted criteria other than grades, including publication citations (r ≈ .23, GRE), faculty evaluations (r > .36, GRE and MAT), and licensing exams (r > .45, MCAT and PCAT). All correlations were positive, indicating that higher test scores were associated with higher achievement. Are test–grade correlations attributable to g? The question is important because g is considered the “active ingredient” of tests, with the predictive power of a test increasing with its g loading. To address this question, Jensen


(1998, p. 280) correlated the g loadings of 11 subtests of the Wechsler Adult Intelligence Scale (WAIS) with their corresponding validity coefficients for college grades. The correlation between the g loadings and validity coefficients was r = .91, suggesting that a test’s predictive power is largely explained by g. Jensen (1998, p. 280) replicated the result in an analysis correlating the g loadings of the WAIS subtests with their validity coefficients for high-school class rank (r = .73). Separately, Thorndike (1984) found that 80–90% of the predictable variance in school grades was accounted for by g, with 10–20% explained by non-g factors measured by IQ and other tests. Together, the results suggest that the predictive power of cognitive tests is largely explained by g, with non-g factors explaining little variance.
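Jensen's analysis, correlating subtests' g loadings with their validity coefficients, is an instance of the method of correlated vectors. A minimal sketch with invented vectors (one entry per subtest of a hypothetical battery):

```python
import numpy as np

# Invented illustration: one g loading and one validity coefficient
# (correlation with grades) per subtest of a hypothetical battery.
g_loadings = np.array([0.83, 0.80, 0.74, 0.70, 0.65, 0.60])
validity   = np.array([0.50, 0.47, 0.43, 0.40, 0.34, 0.30])

# Method of correlated vectors: correlate the two vectors across subtests.
mcv_r = np.corrcoef(g_loadings, validity)[0, 1]
```

A vector correlation near 1, like Jensen's r = .91, indicates that the more g-loaded a subtest is, the better it predicts the criterion.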

Intelligence and Work Intelligence tests and other g-loaded tests also predict work performance (e.g., productivity and supervisor ratings). In a classic meta-analysis, Schmidt and Hunter (1998) examined the predictive power of general cognitive ability tests for overall job performance, as well as the incremental validity of other predictors (beyond general ability). The meta-analytic correlation between general ability and job performance was r = .51. Incremental validity, obtained after accounting for general ability, was negligible for other predictors, including job experience (.03), years of education (.01), conscientiousness (.09), and job knowledge (.07). The negligible effects suggest that the other predictors contributed little to the prediction of job performance beyond general ability. In related research, Schmidt and Hunter (2004) showed that the relationship between general ability and job performance was largely mediated by job knowledge, with general ability leading to increases in job knowledge, which in turn improved job performance. Is the predictive power of cognitive tests attributable to non-g factors? Non-g factors include specific abilities measured by cognitive tests. Specific abilities include math, verbal, and spatial abilities, which might contribute to the validity of tests beyond g. Ree et al. (1994) examined the validity of g (variance common to tests) and specific abilities (s, variance unique to tests) using the Armed Services Vocational Aptitude Battery (ASVAB), which was given to US Air Force recruits (N = 1,036). g was correlated with job performance based on three criteria (hands-on, work sample, walk-through), and s was measured as the incremental validity coefficient (beyond g). The average meta-analytic correlation (across all criteria) between g and job performance was .42, whereas the average incremental validity coefficient for s was .02. Based on these results, Ree et al. 
(1994) concluded that the predictive power of cognitive ability tests was attributable to “not much more than g” and that specific abilities (i.e., s) have negligible validity for job performance. A key question is whether g-loaded tests predict criteria at very high ability levels. The question is important because IQs and g-loaded tests have been


assumed to lose predictive power beyond an ability threshold, defined as IQs above 120 (e.g., Gladwell, 2008, p. 79). Data relevant to the question come from the Study of Mathematically Precocious Youth (SMPY; for a review, see Lubinski, 2016). The SMPY involves a sample of gifted subjects who took the SAT around age 12 years and scored in the top 1%. (The SAT correlates strongly with IQ and other g-loaded tests [e.g., Frey & Detterman, 2004].) The top 1% represents an IQ (M = 100, SD = 15) of around 135 or higher, which is well above 120; therefore, based on the threshold assumption, SAT scores of SMPY subjects should have negligible predictive power. Contrary to the assumption, higher SAT scores predicted higher levels of achievement around age 30 years in SMPY subjects (e.g., higher incomes, more patents, more scientific publications, more novels) (Lubinski, 2016, p. 913). The results indicate that higher levels of g are associated with higher achievement levels, even beyond the putative ability threshold of IQ 120.
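The incremental-validity analyses cited in this section (Schmidt & Hunter, 1998; Ree et al., 1994) rest on hierarchical regression: fit the criterion on general ability alone, add the second predictor, and take the change in R². A simulated sketch in which the second predictor carries no information beyond g (all coefficients invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
g = rng.normal(size=n)                        # general ability
other = 0.6 * g + 0.8 * rng.normal(size=n)    # predictor driven largely by g
perf = 0.5 * g + rng.normal(size=n)           # job-performance criterion

def r_squared(predictors, y):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - np.sum(resid ** 2) / ss_tot

r2_g = r_squared([g], perf)
delta_r2 = r_squared([g, other], perf) - r2_g  # incremental validity of `other`
```

Because `other` adds only noise beyond g in this simulation, its ΔR² is near zero, echoing the small increments (.01 to .09) that Schmidt and Hunter report for predictors beyond general ability.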

Intelligence and the Brain The brain is the seat of intelligence. Early studies found that head size (an indirect measure of brain size) correlates positively with IQ test performance (e.g., r ≈ .20, Deary et al., 2007, p. 520). Contemporary studies of intelligence and the brain use modern neuroimaging technologies. These technologies include magnetic resonance imaging (MRI), which measures brain structure (e.g., brain volume and cortical thickness); functional MRI (fMRI), which measures neural activity at rest or during task performance (typically based on blood flow); and positron emission tomography (PET), which measures chemicals in the brain such as glucose metabolism. The current section selectively reviews research on intelligence and the brain. Two hypotheses are discussed. The first is the “bigger is better hypothesis,” which predicts that intelligence correlates positively with larger quantities of brain tissue (gray matter or white matter), based on various brain measurements (e.g., volume, thickness, density). The second hypothesis is the “efficiency hypothesis,” which predicts that more intelligent people have more efficient brains, based on functional (e.g., cortical glucose metabolism) and structural (e.g., length of white matter tracts) measurements.

Efficiency and Intelligence In a seminal study of the efficiency hypothesis, Haier et al. (1988) had eight healthy males solve problems on the Raven’s Advanced Progressive Matrices (RAPM) while measuring cortical glucose metabolism with PET. The RAPM is a non-verbal reasoning test with a strong g loading (r ≈ .70, Gignac, 2015). Cortical glucose metabolism is a measure of cortical energy consumption. The key result was a negative correlation between absolute glucose metabolic


rate and RAPM performance (whole-slice r = –.75, Haier et al., 1988, p. 208), indicating that glucose metabolism was lower for higher ability subjects. The result suggested that higher ability subjects were able to solve RAPM problems more easily and therefore processed the task more efficiently (and used less energy during problem solving). Haier et al.’s (1988) findings led to subsequent studies of the efficiency hypothesis (for a review, see Neubauer & Fink, 2009; see also, Haier, 2017, pp. 153–155). Consistent with the hypothesis, some studies found negative relations between intelligence and brain activity, measured as glucose metabolism or cerebral blood flow (Neubauer & Fink, 2009, pp. 1007–1008). Support for the hypothesis was strongest in frontal areas and for tasks of low-to-moderate difficulty. In contrast, other studies found mixed or contradictory evidence for the hypothesis (e.g., Neubauer & Fink, 2009, pp. 1010–1012). Basten, Stelzel, and Fiebach (2013) clarified the mixed results by distinguishing between two brain networks: the task-negative network (TNN), where brain activation decreases with task difficulty, and the task-positive network (TPN), where brain activation increases with task difficulty. When solving difficult RAPM problems, higher ability subjects showed lower efficiency (more activation) in the TPN, which is related to attentional control, but higher efficiency (less activation) in the TNN, which is related to mind wandering. According to Basten et al. (2013, pp. 523–524), the results suggest that “while high-intelligent individuals are more efficient in deactivating the TNN during task processing, they put more effort into cognitive control-related activity in the TPN.” More generally, the results suggest that studies of the efficiency hypothesis should consider both brain networks (TNN and TPN) or risk drawing erroneous conclusions about brain activity and intelligence.
The neural efficiency hypothesis has also been examined using graph analysis, which measures efficiency based on brain regions (nodes) and connections between the regions (paths) (Li et al., 2009; Santarnecchi, Galli, Polizzotto, Rossi, & Rossi, 2014; van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009). Graph analysis examines the distance between brain regions (path length), based on structure (e.g., white matter tracts) or function (e.g., temporal activation between regions). Path length is assumed to measure the efficiency of neural communication, with shorter paths being related to quicker (and more efficient) neural processing between brain regions. Using structural MRI of white matter tracts, Li et al. (2009) correlated IQs based on the Wechsler Adult Intelligence Scale (N = 79 healthy subjects) with two measures of efficiency: mean shortest path length between nodes, and global efficiency (inverse of the mean shortest path length between nodes). Consistent with the efficiency hypothesis, IQ correlated negatively with mean shortest path length (r = –.36, weighted) and positively with global efficiency (r = .31) (Li et al., 2009, p. 9). van den Heuvel et al. (2009) found similar results using graph analysis and functional MRI, with measurements based on temporal activation between different brain regions. Together, the results suggest


that higher ability participants have more efficient connections between brain regions, which facilitate communication between regions and improve task performance.
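The graph measures used in these studies have direct definitions: the characteristic path length is the mean shortest path between node pairs, and global efficiency is the mean of the inverse shortest paths. A self-contained sketch using Floyd–Warshall (the toy networks below are invented):

```python
import math

INF = math.inf

def shortest_paths(adj):
    """All-pairs shortest path lengths (Floyd-Warshall).
    adj[i][j] is the connection length, or INF if i and j are unconnected."""
    n = len(adj)
    d = [[0 if i == j else adj[i][j] for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def global_efficiency(adj):
    """Mean inverse shortest path length over all ordered node pairs."""
    d = shortest_paths(adj)
    n = len(d)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    return sum(1.0 / d[i][j] for i, j in pairs) / len(pairs)

# A 3-node chain versus a fully connected triangle: adding the long-range
# edge shortens paths and raises global efficiency.
chain    = [[INF, 1, INF], [1, INF, 1], [INF, 1, INF]]
triangle = [[INF, 1, 1], [1, INF, 1], [1, 1, INF]]
```

Li et al.'s measures map directly onto these quantities: IQ correlated negatively with mean shortest path length and positively with global efficiency (its inverse-path analog).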

Brain Size and Intelligence Intelligence and g-loaded measures are positively associated with various measures of brain size, including brain volume and cortical thickness, based on whole-brain and regional measurements (for a review, see Haier, 2017, pp. 101–186). Early studies found a small but positive relation between intelligence and external head size (head circumference), a crude and indirect indicator of brain size (e.g., r ≈ .20, Deary et al., 2007, p. 520). Contemporary studies have examined intelligence–brain size relations using modern neuroimaging technologies (e.g., structural MRI). Consistent with the earlier studies, MRI studies show positive relations between g-loaded measures (e.g., IQ, Raven’s, Wechsler scales) and total brain volume, indicating that larger brains are associated with higher levels of g (e.g., Gignac & Bates, 2017). The correlations between g-loaded measures and brain volume vary with the quality of the intelligence measurements and with corrections (if any) for range restriction. Pietschnig, Penke, Wicherts, Zeiler, and Voracek (2015) found a meta-analytic correlation of r = .24, with no adjustments for quality or range restriction. Gignac, Vernon, and Wickett (2003) and McDaniel (2005) found meta-analytic correlations between intelligence and brain volume of .43 and .33, respectively, after correcting for range restriction, with McDaniel (2005) analyzing only healthy samples and total brain measurements. Finally, Gignac and Bates (2017) found a correlation between intelligence and brain size of r = .31, after correcting for range restriction, but the effect was moderated by quality. Intelligence measures were classified into three quality categories (fair, good, excellent), based on number of tests (1 to 9+), dimensions of tests (1 to 3+), testing time (3 to 40+ minutes), and correlation with g (< .50 to > .94).
The correlations between brain size and intelligence increased with quality of measure (.23, .32, .39), indicating that quality moderated the effects. A recent preregistered study (Nave, Jung, Linnér, Kable, & Koellinger, 2019) with a large UK sample (N = 13,608) found that total brain volume correlated positively with fluid intelligence (r = .19) and with educational attainment (r = .12), and that the correlations were primarily attributable (fluid intelligence, educational attainment) to gray matter (.13, .06) rather than white matter (.06, .03). Are the correlations between brain size and intelligence attributable to g (variance common to cognitive abilities)? The question is important because meta-analyses of brain size and intelligence often rely on marker tests of g (e.g., Raven’s Matrices) rather than using factor analysis (of multiple tests) to estimate g. (Marker tests of g may be loaded with substantial non-g variance, which may also correlate with brain size.) Using structural MRI, Colom, Jung,


and Haier (2006a; see also, Colom, Jung, & Haier, 2006b) addressed the question by correlating the subtest g loadings of the Wechsler Adult Intelligence Scale (WAIS) with the subtest validity coefficients for cluster size. Cluster size was an aggregate measure of brain volume, based on gray matter and white matter, and was obtained using voxel-based morphometry. g loadings were obtained using hierarchical factor analysis of the WAIS subtests. The correlation between the subtest g loadings and the subtest validity coefficients for cluster size was r = .95 (Colom et al., 2006a, p. 565). The near-perfect correlation suggests that g (rather than non-g factors) explains the effects and that larger brain volumes are associated with higher levels of g. Parallel analyses examined relations between g and cluster size across different brain regions (frontal, temporal, occipital, parietal, insula). Significant relations were concentrated in the frontal region but also included other regions (e.g., temporal, occipital, parietal, insula), suggesting that a distributed brain network mediates g (Colom et al., 2006a, p. 568; Colom et al., 2006b, p. 1361).

Parieto-Frontal Integration Theory (PFIT) The parieto-frontal integration theory (PFIT) integrates brain–intelligence relationships (Haier & Jung, 2007; Jung & Haier, 2007). PFIT emphasizes the role of the frontal and parietal regions. These regions form a key network that contributes to intelligence, with other regions (e.g., sensory and motor) also playing a role. PFIT assumes that sensory information is initially integrated in posterior regions (occipital and temporal), followed by anterior regions (parietal) and associative regions (frontal). The theory also assumes that similar levels of g can emerge by engaging different brain regions. For example, one person may have high levels of verbal ability but low levels of math ability, while another may show the opposite pattern, with both of them showing similar levels of g. Basten, Hilger, and Fiebach (2015) examined PFIT in a meta-analysis of functional (n = 16 studies) and structural (n = 12 studies) brain imaging studies. The meta-analysis examined associations between intelligence and either (a) brain activation during cognitive performance in functional studies, or (b) amount of gray matter based on voxel-based morphometry in structural studies. Intelligence was based on established g-loaded tests such as the Raven’s Advanced Progressive Matrices and the Wechsler Adult Intelligence Scale. All studies involved healthy adults and reported spatial coordinates in standard brain reference space (e.g., Talairach). The meta-analysis identified brain regions with significant clusters of activation or voxels linked to intelligence. The functional results yielded eight significant clusters, located in the lateral and medial frontal regions and the bilateral parietal and temporal regions. 
The structural results yielded 12 significant clusters, located in the lateral and medial frontal, temporal, and occipital regions, and subcortical regions linked to the default mode network (which is related to brain activity at rest). Curiously, the functional and structural results showed limited


overlap, and the structural results showed no parietal effects. Despite these anomalies, both sets of results were broadly consistent with the PFIT, which predicts that intelligence involves a distributed network of brain regions. Additional support for PFIT was obtained by Colom et al. (2009), who found that different measures of intelligence (g, fluid, crystallized) correlated with gray matter volumes in parieto-frontal regions.

Network Neuroscience Theory of Intelligence Network neuroscience theory of intelligence (Barbey, 2018; see Chapter 6) complements and extends PFIT. Whereas PFIT targets a specific brain network (i.e., frontoparietal), network neuroscience theory specifies three distinct networks that operate throughout the brain. The three networks have different connections between nodes (regions) and different relations with specific and general abilities. Regular networks consist of short connections between local nodes, which promote local efficiency and support specific abilities (e.g., test-specific abilities). Random networks consist of random connections between all types of nodes (local or distant), which promote global efficiency and support broad and general abilities (e.g., fluid intelligence). Small-world networks balance the features of regular and random networks. Small-world networks consist of short connections between local nodes and long connections between distant nodes. Such networks exist close to a phase transition between regular and random networks. A transition toward a regular network engages local regions and specific abilities, whereas a transition toward a random network engages distant regions and general abilities. According to network neuroscience theory, the flexible transition between network states is the foundation of g (Barbey, 2018, pp. 15–17), which is associated with all mental abilities.
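The regular/small-world/random continuum described here is commonly modeled by Watts-Strogatz rewiring: start from a ring lattice (a regular network) and rewire each edge with probability p, moving toward a random network as p grows. A minimal sketch (the node counts and parameters are invented for illustration):

```python
import random

def ring_lattice(n, k):
    """Regular network: each node connects to its k nearest neighbours
    on each side of a ring."""
    return {tuple(sorted((i, (i + j) % n)))
            for i in range(n) for j in range(1, k + 1)}

def rewire(edges, n, p, seed=0):
    """Watts-Strogatz-style rewiring: with probability p, replace one
    endpoint of an edge with a randomly chosen node. p = 0 leaves the
    regular lattice intact; p = 1 approaches a random network."""
    rng = random.Random(seed)
    out = set()
    for a, b in sorted(edges):
        if rng.random() < p:
            c = rng.randrange(n)
            while c == a or tuple(sorted((a, c))) in out:
                c = rng.randrange(n)
            out.add(tuple(sorted((a, c))))
        else:
            out.add((a, b))
    return out

lattice = ring_lattice(12, 2)   # regular network: 12 nodes, degree 4
```

In network-neuroscience terms, low p keeps mostly short local connections (local efficiency, specific abilities), while higher p adds long-range shortcuts (global efficiency, general abilities).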

Outstanding Issues and Future Directions This last section considers three areas for future research: non-g factors, development and intelligence, and genes and intelligence.

Non-g Factors g permeates all cognitive abilities, which correlate positively with each other, indicating that people who rank high on one ability generally rank high on all others. Although g has received much attention in intelligence research, non-g factors have received less attention. Non-g factors refer to specific abilities unrelated to g. Such abilities include verbal, math, and spatial abilities, obtained after (statistically) removing g from tests. The removal of g from tests produces non-g residuals, which can be correlated with criteria at work


(e.g., job or income) or school (e.g., college major or grades). Early studies found that non-g factors had negligible predictive power for overall performance at work (e.g., supervisor ratings) and school (e.g., grade point average) (e.g., Brown, Le, & Schmidt, 2006; Ree et al., 1994; see also, Coyle, Elpers, Gonzalez, Freeman, & Baggio, 2018). In contrast, more recent studies have found that non-g factors robustly predict domain-specific criteria at school and work (for a review, see Coyle, 2018a). Coyle (2018a, pp. 2–9; see also, Coyle, 2018b; Coyle & Pillow, 2008; Coyle, Snyder, Richmond, & Little, 2015; Coyle et al., 2013) found that non-g residuals of the SAT and ACT math and verbal subtests differentially predicted school grades and specific abilities (based on other tests), as well as college majors and jobs in two domains: science, technology, engineering, and math (STEM), and the humanities (e.g., history, English, arts, music, philosophy). Math residuals of the SAT and ACT correlated positively with STEM criteria (i.e., STEM grades, abilities, majors, jobs) and negatively with humanities criteria. In contrast, verbal residuals showed the opposite pattern. The contrasting patterns were confirmed by Coyle (2018b), who correlated math and verbal residuals of latent variables, based on several ASVAB tests, with criteria in STEM and the humanities. (Unlike single tests such as the SAT or ACT, latent variables based on multiple tests are more likely to accurately measure a specific ability and reduce measurement error.) Coyle (2018a; see also, Coyle, 2019) interpreted the results in terms of investment theories (e.g., Cattell, 1987, pp. 138–146). Investment theories predict that investment in a specific domain (e.g., math/STEM) boosts the development of similar abilities but retards the development of competing abilities (e.g., verbal/humanities). Math residuals presumably reflect investment in STEM, which boosts math abilities but retards verbal abilities.
Verbal residuals presumably reflect investment in humanities, which boosts verbal abilities but retards math abilities. The different patterns of investment may be related to early preferences, which increase engagement in complementary activities (e.g., math–STEM) and decrease engagement in non-complementary activities (e.g., math–humanities) (Bouchard, 1997; Scarr & McCartney, 1983). Future research should consider the role of non-g factors linking cognitive abilities with brain criteria. Consistent with investment theories, sustained investment in a specific domain (via sustained practice) may boost non-g factors, which in turn may affect brain morphology linked to the domain. In support of this hypothesis, sustained practice in motor skill learning has been linked to changes in brain morphology (e.g., increases in gray matter) in regions related to the practice. Such changes have been observed for golf (Bezzola, Mérillat, Gaser, & Jäncke, 2011), balancing skills (Taubert et al., 2010), and juggling (Gerber et al., 2014) (for a critical review, see Thomas & Baker, 2013). A similar pattern may be observed for sustained practice in learning specific, non-g skills (e.g., math, verbal, spatial), which in turn may affect brain morphology in regions related to the non-g skills.

Defining and Measuring Intelligence

Future research should also consider the impact of non-g factors at different levels of ability using Spearman’s Law of Diminishing Returns (SLODR). SLODR is based on Spearman’s (1927) observation that correlations among mental tests decrease at higher ability levels, presumably because tests become less loaded with g and more loaded with non-g factors. Such a pattern has been confirmed by meta-analysis (e.g., Blum & Holling, 2017). The pattern suggests that non-g factors related to brain criteria (e.g., volume or activity) may be stronger predictors at higher ability levels, which are associated with stronger non-g effects.
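A toy simulation (our own construction, not taken from the studies cited here) makes SLODR concrete: if the g loading of tests is assumed to weaken at higher ability while specific variance stays constant, inter-test correlations drop in the higher-ability half of the sample.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_tests = 20000, 4

g = rng.normal(size=n)
# Toy assumption: g loading is 0.9 for below-average ability, 0.3 above.
loading = np.where(g > 0, 0.3, 0.9)
tests = loading[:, None] * g[:, None] + rng.normal(size=(n, n_tests))

def mean_intercorrelation(scores):
    """Average off-diagonal correlation among a set of test scores."""
    r = np.corrcoef(scores, rowvar=False)
    return r[np.triu_indices_from(r, k=1)].mean()

r_low = mean_intercorrelation(tests[g <= 0])   # lower-ability half
r_high = mean_intercorrelation(tests[g > 0])   # higher-ability half
# r_low exceeds r_high: tests correlate less at higher ability levels,
# mirroring Spearman's (1927) observation.
```

The specific loadings are arbitrary; the qualitative result (weaker inter-test correlations where g carries less of the variance) is what SLODR predicts.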

Development and Intelligence

A variation of SLODR is the age dedifferentiation hypothesis. The hypothesis predicts that the influence of g increases, and the influence of non-g factors decreases, over the lifespan (20–80 years) (e.g., Deary et al., 1996; see also, Tucker-Drob, 2009). The hypothesis is based on the assumption that a general ability factor related to all other abilities (e.g., mental slowing) exerts increasing influence over the lifespan. Although the hypothesis is theoretically plausible, it has received limited support (e.g., Tucker-Drob, 2009). Indeed, contrary to the dedifferentiation hypothesis, Tucker-Drob (2009, p. 1113) found that the proportion of variance in broad abilities explained by g declined over the lifespan (20–90 years). The decline in g was significant for three broad abilities (crystallized, visual-spatial, short-term memory). The decline may reflect ability specialization (via sustained practice), which may increase the influence of non-g factors over the lifespan.

Genetics, Intelligence, and the Brain

The cutting edge of intelligence research considers genetic contributions to intelligence, the brain, and diverse criteria. This issue was examined by Lee et al. (2018), who analyzed polygenic scores related to educational attainment in a sample of 1.1 million people. Polygenic scores are derived from genome-wide association studies and are computed as the sum of alleles (for a trait) weighted by their regression coefficients (betas). Lee et al. (2018) examined polygenic scores related to educational attainment, measured as the number of years of schooling completed by an individual. Educational attainment is strongly g-loaded (e.g., Jensen, 1998, pp. 277–280). Lee et al. (2018) identified 1,271 independent and significant single-nucleotide polymorphisms (SNPs; dubbed “lead SNPs”) for educational attainment. The median effect size for the lead SNPs equated to 1.7 weeks of schooling, or 1.1 and 2.6 weeks of schooling at the 5th and 95th percentiles, respectively. Moreover, consistent with the idea that the brain is the seat of intelligence, genes near the lead SNPs were overexpressed in the central


t. r. coyle

nervous system (relative to genes in a random set of loci), notably in the cerebral cortex and hippocampus. Lee et al. (2018) used genetic information in their discovery sample of 1.1 million people to predict educational attainment in two independent samples: the National Longitudinal Study of Adolescent to Adult Health (Add Health, n = 4,775), a representative sample of American adolescents; and the Health and Retirement Study (HRS, n = 8,609), a representative sample of American adults over age 50 years. The polygenic scores explained 10.6% and 12.7% of the variance in educational attainment in Add Health and HRS, respectively, with a weighted mean of 11%. Other analyses indicated that the polygenic scores explained 9.2% of the variance in overall grade point average (GPA) in Add Health. These percentages approximate the variance in first-year college GPA explained by the SAT (e.g., Coyle, 2015, p. 20). The pattern was not specific to educational attainment. Similar associations with polygenic scores were obtained for related criteria, including cognitive test performance, self-reported math ability, and hardest math class completed (Lee et al., 2018). Future research should use polygenic scores to examine moderators and mediators of relations among g-loaded measures, brain variables, and other criteria. One possibility is to examine the Scarr-Rowe effect, originally described by Scarr-Salapatek (1971; see also, Tucker-Drob & Bates, 2015; Woodley of Menie, Pallesen, & Sarraf, 2018). The Scarr-Rowe effect predicts a gene × environment interaction that reduces the heritability of cognitive ability at low socioeconomic status (SES) levels, perhaps because low-SES environments suppress genetic potential whereas high-SES environments promote it (e.g., Woodley of Menie et al., 2018). Scarr-Rowe effects have been observed for g-loaded measures (IQ) in twin pairs in the US but not in non-US countries (Tucker-Drob & Bates, 2015).
In addition, Scarr-Rowe effects have been obtained using polygenic scores based on diverse cognitive phenotypes (e.g., IQ, educational attainment, neuropsychological tests). Consistent with Scarr-Rowe effects, polygenic scores correlate more strongly with IQ at higher SES levels (Woodley of Menie et al., 2018). Scarr-Rowe effects may be amplified for non-g factors, which may be particularly sensitive to environmental variation in school quality, school quantity (days of schooling), parental income, and learning opportunities. In particular, more beneficial environments may allow low-SES individuals to reach their genetic potential and facilitate the development of specific abilities unrelated to g.
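The polygenic-score computation described above (the sum of trait-associated alleles weighted by their GWAS betas) can be sketched in a few lines; the SNP effect sizes and genotypes below are invented purely for illustration.

```python
import numpy as np

# GWAS summary statistics for four hypothetical SNPs: per-allele effect
# sizes (betas) on years of schooling (values invented for illustration).
betas = np.array([0.02, -0.01, 0.015, 0.005])

# Genotypes: counts of the effect allele (0, 1, or 2) for three
# hypothetical individuals across the four SNPs.
genotypes = np.array([
    [0, 1, 2, 1],
    [2, 0, 1, 0],
    [1, 2, 0, 2],
])

# Polygenic score: for each person, sum over SNPs of allele count x beta.
pgs = genotypes @ betas
```

Real pipelines add steps this sketch omits (SNP clumping, beta shrinkage, ancestry adjustment), but the core score is exactly this weighted sum.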

Acknowledgement

This research was supported by a grant from the National Science Foundation’s Interdisciplinary Behavioral and Social Science Research Competition (IBSS-L 1620457).


References

Ackerman, P. L., Beier, M. E., & Boyle, M. O. (2005). Working memory and intelligence: The same or different constructs? Psychological Bulletin, 131(1), 30–60. Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20. Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27. Basten, U., Stelzel, C., & Fiebach, C. J. (2013). Intelligence is differentially related to neural effort in the task-positive and the task-negative brain network. Intelligence, 41(5), 517–528. Benedek, M., Jauk, E., Sommer, M., Arendasy, M., & Neubauer, A. C. (2014). Intelligence, creativity, and cognitive control: The common and differential involvement of executive functions in intelligence and creativity. Intelligence, 46, 73–83. Bezzola, L., Mérillat, S., Gaser, C., & Jäncke, L. (2011). Training-induced neural plasticity in golf novices. Journal of Neuroscience, 31(35), 12444–12448. Binet, A., & Simon, T. (1916). The development of intelligence in children. Baltimore, MD: Williams & Wilkins (reprinted 1973, New York: Arno Press). Blum, D., & Holling, H. (2017). Spearman’s law of diminishing returns. A meta-analysis. Intelligence, 65, 60–66. Bouchard, T. J. (1997). Experience producing drive theory: How genes drive experience and shape personality. Acta Paediatrica, 86(Suppl. 422), 60–64. Brown, K. G., Le, H., & Schmidt, F. L. (2006). Specific aptitude theory revisited: Is there incremental validity for training performance? International Journal of Selection and Assessment, 14(2), 87–100. Brown, R. E. (2016). Hebb and Cattell: The genesis of the theory of fluid and crystallized intelligence. Frontiers in Human Neuroscience, 10, 1–11. Canivez, G. L., & Watkins, M. W. (2010).
Exploratory and higher-order factor analyses of the Wechsler Adult Intelligence Scale–Fourth Edition (WAIS-IV) adolescent subsample. School Psychology Quarterly, 25(4), 223–235. Cattell, R. B. (1963). Theory of fluid and crystallized intelligence: A critical experiment. Journal of Educational Psychology, 54(1), 1–22. Cattell, R. B. (1987). Intelligence: Its structure, growth and action. New York: North-Holland. Colom, R., Haier, R. J., Head, K., Álvarez-Linera, J., Ángeles Quiroga, M., Chun Shih, P., & Jung, R. E. (2009). Gray matter correlates of fluid, crystallized, and spatial intelligence: Testing the P-FIT model. Intelligence, 37(2), 124–135. Colom, R., Jung, R. E., & Haier, R. J. (2006a). Finding the g-factor in brain structure using the method of correlated vectors. Intelligence, 34(6), 561–570. Colom, R., Jung, R. E., & Haier, R. J. (2006b). Distributed brain sites for the g-factor of intelligence. Neuroimage, 31(3), 1359–1365. Colom, R., Rebollo, I., Palacios, A., Juan-Espinosa, M., & Kyllonen, P. C. (2004). Working memory is (almost) perfectly predicted by g. Intelligence, 32(3), 277–296.


Coyle, T. R. (2013). Effects of processing speed on intelligence may be underestimated: Comment on Demetriou et al. (2013). Intelligence, 41(5), 732–734. Coyle, T. R. (2015). Relations among general intelligence (g), aptitude tests, and GPA: Linear effects dominate. Intelligence, 53, 16–22. Coyle, T. R. (2018a). Non-g factors predict educational and occupational criteria: More than g. Journal of Intelligence, 6(3), 1–15. Coyle, T. R. (2018b). Non-g residuals of group factors predict ability tilt, college majors, and jobs: A non-g nexus. Intelligence, 67, 19–25. Coyle, T. R. (2019). Tech tilt predicts jobs, college majors, and specific abilities: Support for investment theories. Intelligence, 75, 33–40. Coyle, T. R., Elpers, K. E., Gonzalez, M. C., Freeman, J., & Baggio, J. A. (2018). General intelligence (g), ACT scores, and theory of mind: ACT(g) predicts limited variance among theory of mind tests. Intelligence, 71, 85–91. Coyle, T. R., & Pillow, D. R. (2008). SAT and ACT predict college GPA after removing g. Intelligence, 36(6), 719–729. Coyle, T. R., Purcell, J. M., Snyder, A. C., & Kochunov, P. (2013). Non-g residuals of the SAT and ACT predict specific abilities. Intelligence, 41(2), 114–120. Coyle, T. R., Snyder, A. C., Richmond, M. C., & Little, M. (2015). SAT non-g residuals predict course specific GPAs: Support for investment theory. Intelligence, 51, 57–66. Deary, I. J., Egan, V., Gibson, G. J., Brand, C. R., Austin, E., & Kellaghan, T. (1996). Intelligence and the differentiation hypothesis. Intelligence, 23(2), 105–132. Deary, I. J., Ferguson, K. J., Bastin, M. E., Barrow, G. W. S., Reid, L. M., Seckl, J. R., . . . MacLullich, A. M. J. (2007). Skull size and intelligence, and King Robert Bruce’s IQ. Intelligence, 35(6), 519–525. Frey, M. C., & Detterman, D. K. (2004). Scholastic assessment or g? The relationship between the scholastic assessment test and general cognitive ability. Psychological Science, 15(6), 373–378. Friedman, N.
P., Miyake, A., Corley, R. P., Young, S. E., DeFries, J. C., & Hewitt, J. K. (2006). Not all executive functions are related to intelligence. Psychological Science, 17(2), 172–179. Gardner, H. (1983/2003). Frames of mind. The theory of multiple intelligences. New York: Basic Books. Gerber, P., Schlaffke, L., Heba, S., Greenlee, M. W., Schultz, T., & Schmidt-Wilcke, T. (2014). Juggling revisited – A voxel-based morphometry study with expert jugglers. Neuroimage, 95, 320–325. Gignac, G. E. (2015). Raven’s is not a pure measure of general intelligence: Implications for g factor theory and the brief measurement of g. Intelligence, 52, 71–79. Gignac, G. E., & Bates, T. C. (2017). Brain volume and intelligence: The moderating role of intelligence measurement quality. Intelligence, 64, 18–29. Gignac, G., Vernon, P. A., & Wickett, J. C. (2003). Factors influencing the relationship between brain size and intelligence. In H. Nyborg (ed.), The scientific study of general intelligence: Tribute to Arthur R. Jensen (pp. 93–106). New York: Pergamon. Gignac, G. E., & Watkins, M. W. (2015). There may be nothing special about the association between working memory capacity and fluid intelligence. Intelligence, 52, 18–23.

Defining and Measuring Intelligence

Gladwell, M. (2008). Outliers: The story of success. New York: Little, Brown & Co. Gottfredson, L. S. (1997). Mainstream science on intelligence: An editorial with 52 signatories, history, and bibliography. Intelligence, 24(1), 13–23. Haier, R. J. (2017). The neuroscience of intelligence. New York: Cambridge University Press. Haier, R. J., & Jung, R. E. (2007). Beautiful minds (i.e., brains) and the neural basis of intelligence. Behavioral and Brain Sciences, 30(2), 174–178. Haier, R. J., Siegel Jr, B. V., Nuechterlein, K. H., Hazlett, E., Wu, J. C., Paek, J., . . . Buchsbaum, M. S. (1988). Cortical glucose metabolic rate correlates of abstract reasoning and attention studied with positron emission tomography. Intelligence, 12(2), 199–217. Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83. Horn, J. L., & Cattell, R. B. (1966). Refinement and test of the theory of fluid and crystallized general intelligences. Journal of Educational Psychology, 57(5), 253–270. Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger. Jensen, A. R. (2006). Clocking the mind: Mental chronometry and individual differences. Amsterdam, The Netherlands: Elsevier. Johnson, W., Bouchard Jr, T. J., Krueger, R. F., McGue, M., & Gottesman, I. I. (2004). Just one g: Consistent results from three test batteries. Intelligence, 32(1), 95–107. Johnson, W., te Nijenhuis, J., & Bouchard, T. J. Jr. (2008). Still just 1 g: Consistent results from five test batteries. Intelligence, 36(1), 81–95. Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154. Kuncel, N. R., & Hezlett, S. A. (2007). Standardized tests predict graduate students’ success. Science, 315(5815), 1080–1081. Lee, J. J., Wedow, R., Okbay, A., Kong, O., Maghzian, M., Zacher, M., . . . Cesarini, D. (2018).
Gene discovery and polygenic prediction from a 1.1-million-person GWAS of educational attainment. Nature Genetics, 50(8), 1112–1121. Li, Y., Liu, Y., Li, J., Qin, W., Li, K., Yu, C. & Jiang, T. (2009). Brain anatomical network and intelligence. PLoS Computational Biology, 5(5), 1–17. Lubinski, D. (2016). From Terman to today: A century of findings on intellectual precocity. Review of Educational Research, 86(4), 900–944. Major, J. T., Johnson, W., & Bouchard, T. J. (2011). The dependability of the general factor of intelligence: Why small, single-factor models do not adequately represent g. Intelligence, 39(5), 418–433. McDaniel, M. A. (2005). Big-brained people are smarter: A meta-analysis of the relationship between in vivo brain volume and intelligence. Intelligence, 33(4), 337–346. McGrew, K. S. (2009). CHC theory and the human cognitive abilities project: Standing on the shoulders of the giants of psychometric intelligence research. Intelligence, 37(1), 1–10. Miyake, A., Friedman, N. P., Emerson, M. J., Witzki, A. H., Howerter, A., & Wager, T. D. (2000). The unity and diversity of executive functions and their


contributions to complex “frontal lobe” tasks: A latent variable analysis. Cognitive Psychology, 41(1), 49–100. Nave, G., Jung, W. H., Linnér, R. K., Kable, J. W., & Koellinger, P. D. (2019). Are bigger brains smarter? Evidence from a large-scale preregistered study. Psychological Science, 30(1), 43–54. Neubauer, A. C., & Fink, A. (2009). Intelligence and neural efficiency. Neuroscience and Biobehavioral Reviews, 33(7), 1004–1023. Pietschnig, J., Penke, L., Wicherts, J. M., Zeiler, M., & Voracek, M. (2015). Meta-analysis of associations between human brain volume and intelligence differences: How strong are they and what do they mean? Neuroscience and Biobehavioral Reviews, 57, 411–432. Ree, M. J., Earles, J. A., & Teachout, M. S. (1994). Predicting job performance: Not much more than g. Journal of Applied Psychology, 79(4), 518–524. Roth, B., Becker, N., Romeyke, S., Schäfer, S., Domnick, F., & Spinath, F. M. (2015). Intelligence and school grades: A meta-analysis. Intelligence, 53, 118–137. Sackett, P. R., Kuncel, N. R., Arneson, J. J., Cooper, S. R., & Waters, S. D. (2009). Does socioeconomic status explain the relationship between admissions tests and post-secondary academic performance? Psychological Bulletin, 135(1), 1–22. Santarnecchi, E., Galli, G., Polizzotto, N. R., Rossi, A., & Rossi, S. (2014). Efficiency of weak brain connections support general cognitive functioning. Human Brain Mapping, 35(9), 4566–4582. Scarr, S., & McCartney, K. (1983). How people make their own environments: A theory of genotype ➔ environment effects. Child Development, 54(2), 424–435. Scarr-Salapatek, S. (1971). Race, social class, and IQ. Science, 174(4016), 1285–1295. Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262–274. Schmidt, F. L., & Hunter, J. E. (2004).
General mental ability in the world of work: Occupational attainment and job performance. Journal of Personality and Social Psychology, 86(1), 162–173. Spearman, C. (1927). The abilities of man: Their nature and measurement. New York: The Macmillan Company. Taubert, M., Draganski, B., Anwander, A., Müller, K., Horstmann, A., Villringer, A., & Ragert, P. (2010). Dynamic properties of human brain structure: Learning-related changes in cortical areas and associated fiber connections. Journal of Neuroscience, 30(35), 11670–11677. Thomas, C., & Baker, C. I. (2013). Teaching an adult brain new tricks: A critical review of evidence for training-dependent structural plasticity in humans. Neuroimage, 73, 225–236. Thorndike, R. L. (1984). Intelligence as information processing: The mind and the computer. Bloomington, IN: Center on Evaluation, Development, and Research. Tucker-Drob, E. M. (2009). Differentiation of cognitive abilities across the life span. Developmental Psychology, 45(4), 1097–1118. Tucker-Drob, E. M., & Bates, T. C. (2015). Large cross-national differences in gene × socioeconomic status interaction on intelligence. Psychological Science, 27(2), 138–149.


van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. Journal of Neuroscience, 29(23), 7619–7624. Warne, R. T., & Burningham, C. (2019). Spearman’s g found in 31 non-Western nations: Strong evidence that g is a universal phenomenon. Psychological Bulletin, 145(3), 237–272. Wechsler, D. (1944). The measurement of adult intelligence (3rd ed.). Baltimore, MD: Williams & Wilkins. Woodley of Menie, M. A., Pallesen, J., & Sarraf, M. A. (2018). Evidence for the Scarr-Rowe effect on genetic expressivity in a large U.S. sample. Twin Research and Human Genetics, 21(6), 495–501.


2 Network Neuroscience Methods for Studying Intelligence

Kirsten Hilger and Olaf Sporns

Introduction

The human brain is a complex network consisting of numerous functionally specialized brain regions and their inter-regional connections. In recent years, much research has focused on identifying principles of the anatomical and functional organization of brain networks (Bullmore & Sporns, 2009; Sporns, 2014) and their relation to spontaneous (resting-state; Buckner, Krienen, & Yeo, 2013; Fox et al., 2005) or task-related brain activity (Cole, Bassett, Power, Braver, & Petersen, 2014). Numerous studies have identified relationships between variations in network elements or features and individual differences in behavior and cognition. In the context of this monograph, studies of general cognitive ability (often indexed as general intelligence) are of special interest. In this chapter we survey some of the methodological aspects surrounding studies of human brain networks using noninvasive large-scale imaging and electrophysiological techniques and discuss the application of such network approaches in studies of human intelligence.

The Human Brain as a Complex Network

Structural and Functional Connectivity

A fundamental distinction in the study of human brain networks is that between structural and functional networks (Honey et al., 2009; Sporns, 2014). Structural networks represent anatomical connectivity, usually estimated from white-matter tracts that connect gray matter regions of the cerebral cortex and subcortex. The resulting networks appear to be relatively stable across time and define potential pathways for the flow of neural signals and information (Avena-Koenigsberger, Misic, & Sporns, 2018). In contrast, functional networks are usually derived from time series of neural activity and represent patterns of statistical relationships. These patterns fluctuate on fast time scales (on the order of seconds), both spontaneously (Hutchison, Womelsdorf, Gati, Everling, & Menon, 2013) as well as in response to changes


in stimulation or task (Cole et al., 2014; Gonzalez-Castillo et al., 2015). In humans, structural networks are most commonly constructed from diffusion tensor imaging (DTI) and tractography (Craddock et al., 2013; Hagmann et al., 2008), while functional networks are often estimated from functional magnetic resonance imaging (fMRI) obtained in the resting state (Buckner et al., 2013) or during ongoing tasks. An alternative approach to assess structural connectivity involves the construction of networks from across-subject covariance of regionally resolved anatomical measures, e.g., cortical thickness (Lerch et al., 2006). Limitations of structural covariance networks are the uncertain mechanistic basis of the observed covariance patterns in structural anatomy and the need to consider covariance patterns across different subjects. Thus, while this approach may be promising as a proxy of interregional connectivity, it is difficult to implement when studying inter-subject variations (e.g., in the relation of connectivity to intelligence). Alternative strategies for functional connectivity include other recording modalities (e.g., electroencephalography [EEG]/magnetoencephalography [MEG]) and a large variety of time series measures designed to extract statistical similarities or causal dependence (e.g., the Phase Lag Index; Stam, Nolte, & Daffertshofer, 2007). In this chapter, we limit our survey to MRI approaches given their prominent role in the study of human intelligence. The estimation of structural networks involves a process of inference, resulting in a model of likely anatomical tracts and connections that fits a given set of diffusion measurements.
This process is subject to numerous sources of error and statistical bias (Jbabdi, Sotiropoulos, Haber, Van Essen, & Behrens, 2015; Maier-Hein et al., 2017; Thomas et al., 2014) and requires careful control over model assumptions and parameters, as well as principled model selection and evaluation of model fit (Pestilli, Yeatman, Rokem, Kay, & Wandell, 2014). A number of investigations have attempted to validate DTI-derived networks against more direct anatomical techniques such as tract tracing carried out in rodents or non-human primates. Another unresolved issue is the definition of connection weights. Weights are often expressed as numbers or densities of tractography streamlines, or through measures of tract coherence or integrity, such as fractional anisotropy (FA). The consideration of measurement accuracy and sensitivity is important for defining networks that are biologically valid, and both factors impact subsequent estimates of network measures. The estimation of functional networks, usually from fMRI time series, encounters a different set of methodological limitations and biases. As in all fMRI-derived measurements, neural activity is captured only indirectly through the brain’s neurovascular response, whose underlying neural causes cannot be accessed directly. Furthermore, the common practice of computing simple Pearson correlations among time courses as a proxy for “functional connectivity” cannot disclose causal interactions or information flow – instead, such functional networks report mere similarities among temporal response profiles that may or may not be due to direct causal effects. Finally, these correlations


are sensitive to numerous sources of physiological and non-physiological noise, most importantly systematic biases due to small involuntary head motions. However, significant efforts have been made to correct for these unwanted sources of noise in fMRI recordings to improve the mapping of functional networks (Power, Schlaggar, & Petersen, 2015). Resting-state fMRI, despite the absence of an explicit task setting, yields functional networks that are consistent between (Damoiseaux et al., 2006) as well as within subjects (Zuo & Xing, 2014), provided that signals are sampled over sufficient periods of time (Birn et al., 2013; Laumann et al., 2015). The analysis of resting-state functional networks has led to the definition of a set of component functional systems (“resting-state networks”) that engage in coherent signal fluctuations and occupy specific cortical and subcortical territories (Power et al., 2011; Yeo et al., 2011). Meta-analyses have shown that resting-state networks strongly resemble fMRI co-activation patterns observed across large numbers of tasks (Crossley et al., 2013; Smith et al., 2009). When measured during task states, functional connectivity exhibits characteristic task-related modulation (Cole et al., 2014; Gonzalez-Castillo et al., 2015; Telesford et al., 2016). These observations suggest that, at any given time, functional networks represent a conjunction of “intrinsic connectivity” and “task-evoked connectivity” (Cole et al., 2014) and that switching between rest and task involves widespread reorganization of distributed functional connections (Amico, Arenas, & Goñi, 2019).
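The standard correlation-based construction of a functional network can be sketched as follows, with synthetic time series standing in for preprocessed BOLD signals (the region count, coupling structure, and threshold are arbitrary choices for illustration, not a recommended pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)
n_regions, n_timepoints = 6, 200

# Synthetic stand-ins for regional BOLD time series: a shared signal
# couples the first three regions; the rest fluctuate independently.
shared = rng.normal(size=n_timepoints)
ts = rng.normal(size=(n_regions, n_timepoints))
ts[:3] += shared

# Functional connectivity: Pearson correlation for every region pair.
fc = np.corrcoef(ts)
np.fill_diagonal(fc, 0)

# A common (and debated) further step: threshold to a binary adjacency
# matrix that keeps only the strongest edges.
adjacency = (np.abs(fc) > 0.3).astype(int)
```

As the chapter notes, such a matrix captures similarity of temporal profiles, not causal interaction, and in practice would follow motion correction and nuisance regression.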

Networks and Graphs

Both structural and functional networks can be represented as collections of nodes (brain regions) and their interconnecting edges (connections); see Figure 2.1. Such objects are also known as “graphs” and are the subject of graph theory, a branch of mathematics with many applications in modern network science (Barabási, 2016). Graph theory offers numerous quantitative measures that capture various aspects of a network’s local and global organization (Rubinov & Sporns, 2010). Local measures include the number of connections per node (node degree), or a node’s clustering into distinct subgroups. Global measures express characteristics of an entire network, such as its clustering coefficient (the average over the clustering of its nodes) and its path length (the average length of shortest paths linking all pairs of nodes). The conjunction of high clustering and short path length (compared to an appropriately designed random or randomized null model) is the hallmark of the small-world architecture, a universal feature of many natural, social, and information networks (Watts & Strogatz, 1998). Globally, the shorter the path length of a network, the higher its global efficiency (Latora & Marchiori, 2001), a measure expressing a network’s capacity for communication irrespective of the potential communication cost imposed by wiring or time delays. Several important points are worth noting. First, many local and global network metrics are mutually dependent (correlated), and many are powerfully influenced by the


Figure 2.1 Schematic illustration of structural and functional brain network construction and key network metrics. (A) Network construction. First (left), network nodes are defined based on, e.g., an anatomical brain atlas. Second (middle), edges are defined between pairs of nodes by measuring white matter fiber tracts (structural network, e.g., measured with DTI) or by estimating temporal relationships between time series of BOLD signals (functional network, e.g., measured with resting-state fMRI). Third (right), nodes and edges together define a graph (network) whose topological properties can be studied with global (whole-brain) and nodal (region-specific) graph-theoretical measures. (B) Key network metrics. Network efficiency (left) is derived from the lengths of shortest paths between node pairs. In this example, the path between nodes 1 and 2 has a length of three steps. Network modularity (right) partitions the network into communities or modules that are internally densely connected, with sparse connections between them. In this example, the network consists of four modules illustrated in different colors. Individual nodes differ in the way they connect to other nodes within their own module (within-module connectivity) and to nodes in other modules (diversity of between-module connectivity, nodal participation). Here, node 1 has low participation, while node 2 has high participation.


brain’s geometry and spatial embedding (Bullmore & Sporns, 2012; Gollo et al., 2018). These aspects can be addressed through appropriately configured statistical and generative models designed to preserve specific features of the data in order to reveal their contribution to the global network architecture (Betzel & Bassett, 2017a; Rubinov, 2016). Second, while most graph metrics are suitable for structural networks, their application to functional networks (i.e., derived from fMRI time series correlations) can be problematic (Sporns, 2014). This includes frequently used metrics such as path length or clustering. For example, network paths constructed from correlations between time series of neural activation have a much less obvious physical interpretation than, for example, paths that link a series of structural connections. One of the most useful and biologically meaningful approaches to characterize both structural and functional brain networks involves the detection of network communities or modules (Sporns & Betzel, 2016). Most commonly, modules are defined as non-overlapping sets of nodes that are densely interconnected within, but only weakly connected between sets. Computationally, many tools for data-driven detection of modules are available (Fortunato & Hric, 2016). Most widely used are approaches that rely on the maximization of a global modularity metric (Newman & Girvan, 2004). With appropriate modification, this approach can be applied to all classes of networks (binary, weighted, directed, positive, and negative links) that are encountered in structural and functional neuroimaging. Recent advances further allow the detection of modules on multiple spatial scales (Betzel & Bassett, 2017b; Jeub, Sporns, & Fortunato, 2018), in multilayer networks (Vaiana & Muldoon, 2018), and in dynamic connectivity estimated across time (Fukushima et al., 2018).
The definition of modules allows the identification of critical nodes or “hubs” that link modules to each other and hence promote global integration of structural and functional networks. Such nodes straddle the boundaries of modules and thus have uncertain modular affiliation (Rubinov & Sporns, 2011; Shinn et al., 2017) as well as connections spanning multiple modules (high participation). Modules and hubs have emerged as some of the most useful network attributes for studies of individual variability in phenotype and genotype, behavior, and cognition. While network approaches have delivered many new insights into the organization of brain networks, they are also subject to significant challenges. An important issue shared across structural and functional networks involves the definition of network nodes. In practice, many studies are still carried out using nodes defined by anatomical brain atlases. However, such nodes generally do not correspond to anatomical or functional units derived from data-driven parcellation efforts, e.g., those based on boundary detection in resting-state data (Gordon et al., 2016), or on integration of data across multiple modalities (Glasser et al., 2016). The adoption of a particular parcellation strategy propagates into subsequent network analyses and may affect their reliability and robustness (Ryyppö, Glerean, Brattico, Saramäki, & Korhonen, 2018; Zalesky et al., 2010). For example, different parcellations may divide the brain into different numbers of nodes, resulting in networks that differ in size and density. In the future, more
standardized parcellation approaches may help to address this important methodological issue. Other limitations remain, e.g., the uncertain definition of edge weights, variations in pre-processing pipelines (global signal regression, motion correction, thresholding), the lack of data on directionality, and temporal delays in signal propagation. However, it is important to note that many of these limitations reflect constraints on the measurement process itself and can in principle be overcome through refined spatial and temporal resolution, as well as through the inference of causal relations in networks (Bielczyk et al., 2019). The application of network neuroscience tools and methods is facilitated by a number of in-depth scholarly surveys and computational resources. Fornito, Zalesky, and Bullmore (2016) provide a comprehensive introduction to network methods applied to brain data that goes well beyond the scope and level of detail offered in this brief overview. For practical use, several software packages (MATLAB and Python) are available. These combine various structural and functional graph metrics, null and generative models, and visualization tools (e.g., https://sites.google.com/site/bctnet/).
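As one concrete example of such graph metrics, the "high participation" property of hub nodes mentioned above is commonly quantified by the participation coefficient (in the definition of Guimerà and Amaral), P_i = 1 - sum_s (k_is / k_i)^2, where k_is is node i's number of links into module s and k_i its total degree. A minimal pure-Python sketch on a hypothetical toy network:

```python
from collections import defaultdict

def participation_coefficient(edges, partition, node):
    """P_i = 1 - sum_s (k_is / k_i)^2 for an undirected, unweighted network.

    Close to 0: the node's links are confined to one module;
    closer to 1: links are spread evenly across modules (hub-like).
    """
    per_module = defaultdict(int)  # k_is: links from `node` into each module
    degree = 0                     # k_i: total degree of `node`
    for u, v in edges:
        if u == node:
            per_module[partition[v]] += 1
            degree += 1
        elif v == node:
            per_module[partition[u]] += 1
            degree += 1
    return 1.0 - sum((k / degree) ** 2 for k in per_module.values())

# Toy network: two triangle modules connected by one bridge edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
modules = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(round(participation_coefficient(edges, modules, 0), 3))  # 0.0
print(round(participation_coefficient(edges, modules, 2), 3))  # 0.444
```

Node 2, which straddles the module boundary, scores clearly higher than node 0, whose links are purely intra-modular, which is how connector hubs are identified in practice.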

Intelligence and Insights from Network Neuroscience Approaches

Structural Networks

One of the earliest and most popular neurocognitive models of intelligence, the Parieto-Frontal Integration Theory (P-FIT; Jung & Haier, 2007), already suggested that structural connections (white matter fiber tracts) are critical for human intelligence. Since then, many neuroimaging studies have addressed the question of whether and how specific structural brain connections may contribute to differences in intelligence. Most of them support the global finding that higher intelligence is associated with higher levels of brain-wide white matter integrity (as indexed by FA; e.g., Chiang et al., 2009; Navas-Sánchez et al., 2013). Some studies further suggest that this relation may differ between men and women (positive correlation in women, negative correlation in men; Schmithorst, 2009; Tang et al., 2010) and that intelligence-related differences are most prominent in white matter tracts linking frontal to parietal regions (arcuate fasciculus, longitudinal fasciculi; Malpas et al., 2016; Schmithorst, 2009), frontal to occipital regions (fronto-occipital fasciculus; Chiang et al., 2009; Kievit et al., 2012; Kievit, Davis, Griffiths, Correia, & Henson, 2016; Malpas et al., 2016), different frontal regions to each other (uncinate fasciculus; Kievit et al., 2016; Malpas et al., 2016; Yu et al., 2008), and connecting the two hemispheres (corpus callosum; Chiang et al., 2009; Damiani, Pereira, Damiani, & Nascimento, 2017; Dunst, Benedek, Koschutnig, Jauk, & Neubauer, 2014; Kievit et al., 2012; Navas-Sánchez et al., 2013; Tang et al., 2010; Wolf et al., 2014; see Yu et al., 2008, for review). A schematic illustration of white matter tracts consistently associated with intelligence is depicted in Figure 2.2. Chiang et al.


Figure 2.2 The brain bases of intelligence – from a network neuroscience perspective. Schematic illustration of selected structural and functional brain connections associated with intelligence across different studies.


(2009) found that the relation between intelligence and white matter integrity is mediated by common genetic factors and proposed a common physiological mechanism. Beyond microstructural integrity, higher intelligence has also been related to higher membrane density (lower global mean diffusivity; Dunst et al., 2014; Haász et al., 2013; Malpas et al., 2016) and higher myelination of axonal fibers (lower radial diffusivity; Haász et al., 2013; Malpas et al., 2016) in a widely distributed network of white matter tracts. Graph-theoretical network approaches have also been applied to structural connections measured with DTI. While some studies suggest that higher intelligence is linked to a globally more efficient organization of structural brain networks (Kim et al., 2015; Koenis et al., 2015; Li et al., 2009; Ma et al., 2017; Zalesky et al., 2011), others could not replicate this finding and found support only for slightly different metrics (Yeo et al., 2016). Finally, an initial study tested whether intelligence relates to the global level of network segregation (global modularity), but found no support for this hypothesis (Yeo et al., 2016).
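The global efficiency at issue in these studies is the average inverse shortest-path length across all node pairs (Latora & Marchiori, 2001). A minimal sketch for binary, undirected networks, with hypothetical toy adjacency structures:

```python
from collections import deque

def global_efficiency(adj):
    """Average inverse shortest-path length over all ordered node pairs
    (Latora & Marchiori, 2001); higher values mean shorter paths on average.

    adj: dict node -> set of neighbour nodes (undirected, unweighted).
    """
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for source in nodes:
        # breadth-first search from `source` yields hop distances
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for d in dist.values() if d > 0)
    return total / (n * (n - 1))

# A fully connected 4-node network is maximally efficient (E = 1.0);
# a 4-node chain has longer paths and hence lower efficiency.
complete = {i: {j for j in range(4) if j != i} for i in range(4)}
chain = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(global_efficiency(complete))         # 1.0
print(round(global_efficiency(chain), 3))  # 0.722
```

Weighted structural networks require weighted shortest paths instead of breadth-first search, but the averaging logic is the same.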

Functional Networks

Additional insight into the neural bases of intelligence comes from research on functional connectivity. Most of these studies employed fMRI recordings acquired during the resting state, which reflect the brain’s intrinsic functional architecture. Intrinsic connectivity has been shown to relate closely to the underlying anatomical connections (Greicius, Supekar, Menon, & Dougherty, 2009; Hagmann et al., 2008; Honey, Kötter, Breakspear, & Sporns, 2007) and to predict brain activity during cognitive demands (Cole et al., 2014; Tavor et al., 2016). Early studies addressing the relation between intrinsic connectivity and intelligence used seed-based approaches focusing on connections between specific cortical regions. These studies suggested that higher connectivity between regions belonging to the fronto-parietal control network (Dosenbach et al., 2007), together with lower connectivity between these fronto-parietal regions and regions of the default mode network (DMN; Greicius, Krasnow, Reiss, & Menon, 2003; Raichle et al., 2001), is related to higher intelligence (Langeslag et al., 2013; Sherman et al., 2014; Song et al., 2008). This effect is also illustrated schematically in Figure 2.2. Going beyond seed-based analyses, graph-theoretical approaches applied to whole-brain networks have revealed global principles of intrinsic brain network organization. A pioneering study suggested that a globally shorter path length (higher efficiency) is positively correlated with general intelligence (van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009). More recent investigations with samples of up to 1096 people (Kruschwitz, Waller, Daedelow, Walter, & Veer, 2018) could not replicate this intuitively plausible finding (Hilger, Ekman, Fiebach, & Basten, 2017a; Kruschwitz et al., 2018; Pamplona, Santos Neto, Rosset, Rogers, & Salmon, 2015). 
This inconsistency may reflect differences in data acquisition and pre-processing, as well as sample sizes and subject cohorts. In contrast, the nodal efficiency of three specific brain regions that have previously been
associated with processes like salience detection and focused attention (dorsal anterior cingulate cortex, anterior insula, temporo-parietal junction area) appears to be associated with intelligence (Hilger et al., 2017a). A similar picture has recently been reported with respect to the concept of network modularity. Although intelligence is not correlated with the global level of functional network segregation (indexed by global modularity), there are intelligence-related differences in the profiles of within- and between-module connectivity in a set of circumscribed cortical and subcortical brain regions (Hilger, Ekman, Fiebach, & Basten, 2017b). Interestingly, some of these brain regions are also featured in existing neurocognitive models of intelligence based on task activation profiles and morphological characteristics (e.g., the dorsal anterior cingulate cortex; P-FIT, Basten, Hilger, & Fiebach, 2015; Jung & Haier, 2007). In contrast, other regions are not implicated in these models (e.g., the anterior insula) and thus seem to differ solely in their pattern of connectivity, but not in task activation or structural properties (Hilger et al., 2017a; Smith et al., 2015). Further support for the relevance of functional connectivity for intelligence comes from prediction-based approaches (which imply cross-validation; Yarkoni & Westfall, 2017) demonstrating that individual intelligence scores can be significantly predicted from patterns of functional connectivity measured during rest (Dubois et al., 2018; Ferguson, Anderson, & Spreng, 2017; Finn et al., 2015) and during task performance (Greene, Gao, Scheinost, & Constable, 2018). Higher connectivity within the fronto-parietal network, together with lower connectivity within the DMN, was proposed as most critical for this prediction (Finn et al., 2015; see also Figure 2.2). However, the total amount of explained variance seems to be rather small (20% in Dubois et al., 2018; 6% in Ferguson et al., 2017; 25% in Finn et al., 2015; 20% in Greene et al., 2018). 
Focusing in more detail on the comparison between rest and task, Santarnecchi et al. (2016) reported high similarity between a meta-analytically generated intelligence network, i.e., regions active during various intelligence tasks, and the (resting-state) dorsal attention network (Corbetta & Shulman, 2002; Corbetta, Patel, & Shulman, 2008). The lowest similarity was observed with structures of the DMN. Interestingly, it has further been found that higher intelligence is associated with less reconfiguration of functional connections when switching from rest to task (Schultz & Cole, 2016). This finding corresponds well to the observation that higher intelligence is linked to lower DMN de-activation during cognitive tasks (Basten, Stelzel, & Fiebach, 2013). Studies addressing other concepts of connectivity revealed additional insights, observing, for instance, that increased robustness of brain networks to systematic insults is linked to higher intelligence (Santarnecchi, Rossi, & Rossi, 2015). One point that future research has to clarify is whether or not higher intelligence relates to generally higher levels of functional connectivity. Whereas some studies suggest a positive association (Hearne, Mattingley, & Cocchi, 2016; Smith et al., 2015), others observed no such effect (Cole, Yarkoni, Repovs, Anticevic, & Braver, 2012; Hilger et al., 2017a). Instead, the latter studies suggest that associations between connectivity and intelligence are
region-specific and go in both directions (positive and negative), thus canceling out at the global level (Hilger et al., 2017a). Another perspective on the relation between intelligence and functional connectivity was provided by studies using EEG. Here, functional connectivity was primarily measured as coherence between the time series of distant EEG channels (signal space) and assessed during rest. Positive (Anokhin, Lutzenberger, & Birbaumer, 1999; Lee, Wu, Yu, Wu, & Chen, 2012), negative (Cheung, Chan, Han, & Sze, 2014; Jahidin, Taib, Tahir, Megat Ali, & Lias, 2013), and no associations (Smit, Stam, Posthuma, Boomsma, & De Geus, 2008) between intelligence and different operationalizations of intrinsic connectivity (coherence, inter-hemispheric asymmetry of normalized energy spectral density, synchronization likelihood) have been observed. Two graph-theoretical studies reported a positive association between intelligence and the small-worldness of EEG-derived intrinsic brain networks (same sample; Langer, Pedroni, & Jäncke, 2013; Langer et al., 2012). In contrast to fMRI-based evidence suggesting less reconfiguration of connectivity between rest and task in more intelligent subjects (Schultz & Cole, 2016), two EEG studies point to the opposite effect, i.e., more rest–task reconfiguration related to higher intelligence (Neubauer & Fink, 2009; Pahor & Jaušovec, 2014). Specifically, more intelligent subjects demonstrated greater changes in phase-locking values (Neubauer & Fink, 2009) and theta–gamma coupling patterns (Pahor & Jaušovec, 2014). Finally, one study applied graph-theoretical network analyses to MEG data and found that higher intelligence in young children is linked to lower small-worldness of intrinsic brain networks (modeled on the basis of mutual information between the time series of MEG channels; Duan et al., 2014).
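Phase-based EEG connectivity measures such as the phase-locking value can be illustrated compactly. The sketch below assumes that instantaneous phases have already been extracted (in practice via band-pass filtering and a Hilbert transform); all signals are synthetic:

```python
import cmath
import math
import random

def phase_locking_value(phases_a, phases_b):
    """PLV: magnitude of the mean phase difference projected onto the unit
    circle; 1 = perfectly constant phase relation, 0 = no phase relation.

    Inputs are instantaneous phases (radians) from two channels.
    """
    diffs = [cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b)]
    return abs(sum(diffs) / len(diffs))

random.seed(1)
n = 1000
base = [2 * math.pi * random.random() for _ in range(n)]
# channel B tracks A up to a fixed lag plus small jitter
locked = [p + 0.5 + random.gauss(0, 0.2) for p in base]
unrelated = [2 * math.pi * random.random() for _ in range(n)]
print(round(phase_locking_value(base, locked), 2))     # close to 1
print(round(phase_locking_value(base, unrelated), 2))  # close to 0
```

Note that, like coherence, the PLV is sensitive to volume conduction when computed in signal space, which is one of the free parameters (signal vs. source space) that complicates comparisons across the EEG studies discussed above.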

Open Questions and Future Directions

The application of network neuroscience approaches over the last decades has yielded novel insights into the neural bases of human intelligence. An overall higher level of white matter integrity appears to be related to higher intelligence, and intelligence-related differences are most visible in structural connections linking frontal to parietal and frontal to occipital regions. Results from fMRI studies highlight the relevance of region-specific intrinsic connectivity profiles. However, these region-specific intelligence-related differences do not necessarily carry over to the global scale. Connections of attention-related brain regions seem to play a particularly important role, as does the proper segregation between task-positive and task-negative regions. These new insights have also stimulated the formulation of new theoretical models. For example, the recently proposed Network Neuroscience Theory of Intelligence suggests that general intelligence depends on the ability to flexibly transition between “easy-to-reach” and “difficult-to-reach” network states (Barbey, 2018; Girn, Mills, & Christoff, 2019). While
there is initial support for the relevance of dynamic connectivity (time-varying connectivity; Zalesky, Fornito, Cocchi, Gollo, & Breakspear, 2014) for intelligence (higher stability of intrinsic brain networks was associated with higher intelligence; Hilger, Fukushima, Sporns, & Fiebach, 2020), specific hypotheses concerning intelligence and task-induced transitions between specific network states remain to be investigated. Evidence from EEG and MEG studies is quite heterogeneous, potentially because the large number of free parameters (e.g., connectivity measure, signal vs. source space) makes it difficult to compare results from different studies – a problem that could be overcome through the development of common standards. Importantly, the empirical evidence available so far does not allow for any inferences about directionality, i.e., whether region A influences region B or vice versa. Conclusions about directionality can only be derived from connectivity measures that account for the temporal lag between EEG or MEG signals stemming from different regions (e.g., the phase slope index; Ewald, Avarvand, & Nolte, 2013) or from specific approaches developed for fMRI (Bielczyk et al., 2019). These methods have not yet been applied to the study of human intelligence. Furthermore, the reported relationships between intelligence and various aspects of connectivity are only correlative; they do not allow us to infer whether variations in network characteristics causally contribute to variations in intelligence or vice versa. Some pioneering studies address this constraint by employing neuromodulatory interventions and suggest that even intelligence test performance can be experimentally influenced, most likely when baseline performance is low (Neubauer, Wammerl, Benedek, Jauk, & Jausovec, 2017; Santarnecchi et al., 2016). 
These techniques represent promising candidates for future investigations and have, together with new graph-theoretical concepts such as multiscale modularity and analyses of connectivity change over time (dynamic connectivity), great potential to enrich our understanding of the biological bases of human intelligence – from a network neuroscience perspective.
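The dynamic connectivity mentioned above is most simply estimated with sliding-window correlations. A toy sketch with two synthetic regional time series (window and step sizes are arbitrary illustrative choices):

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def sliding_window_fc(x, y, width=20, step=5):
    """Time-resolved functional connectivity: Pearson correlation of two
    regional time series within a window slid along the recording."""
    return [pearson(x[t:t + width], y[t:t + width])
            for t in range(0, len(x) - width + 1, step)]

# Two synthetic "regional" signals, coupled in the first half of the scan
# and anti-coupled in the second half.
x = [math.sin(2 * math.pi * t / 10) for t in range(100)]
y = x[:50] + [-v for v in x[50:]]
fc = sliding_window_fc(x, y)
print(round(fc[0], 2), round(fc[-1], 2))  # 1.0 -1.0
```

Static connectivity would average this coupling away to near zero; the windowed estimate recovers the reconfiguration, which is the intuition behind the dynamic-connectivity findings discussed above (window length and the handling of sampling noise are active methodological questions in their own right).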

Acknowledgment

KH received funding from the German Research Foundation (DFG grants FI 848/6-1 and HI 2185/1-1).

References

Amico, E., Arenas, A., & Goñi, J. (2019). Centralized and distributed cognitive task processing in the human connectome. Network Neuroscience, 3(2), 455–474.
Anokhin, A. P., Lutzenberger, W., & Birbaumer, N. (1999). Spatiotemporal organization of brain dynamics and intelligence: An EEG study in adolescents. International Journal of Psychophysiology, 33(3), 259–273.
Avena-Koenigsberger, A., Misic, B., & Sporns, O. (2018). Communication dynamics in complex brain networks. Nature Reviews Neuroscience, 19(1), 17–33.
Barabási, A. L. (2016). Network science. Cambridge University Press.
Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 1–13.
Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27.
Basten, U., Stelzel, C., & Fiebach, C. J. (2013). Intelligence is differentially related to neural effort in the task-positive and the task-negative brain network. Intelligence, 41(5), 517–528.
Betzel, R. F., & Bassett, D. S. (2017a). Generative models for network neuroscience: Prospects and promise. Journal of the Royal Society Interface, 14(136), 20170623.
Betzel, R. F., & Bassett, D. S. (2017b). Multi-scale brain networks. Neuroimage, 160, 73–83.
Bielczyk, N. Z., Uithol, S., van Mourik, T., Anderson, P., Glennon, J. C., & Buitelaar, J. K. (2019). Disentangling causal webs in the brain using functional magnetic resonance imaging: A review of current approaches. Network Neuroscience, 3(2), 237–273.
Birn, R. M., Molloy, E. K., Patriat, R., Parker, T., Meier, T. B., Kirk, G. R., . . . Prabhakaran, V. (2013). The effect of scan length on the reliability of resting-state fMRI connectivity estimates. Neuroimage, 83, 550–558.
Buckner, R. L., Krienen, F. M., & Yeo, B. T. (2013). Opportunities and limitations of intrinsic functional connectivity MRI. Nature Neuroscience, 16(7), 832–837.
Bullmore, E., & Sporns, O. (2009). Complex brain networks: Graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10(3), 186–198.
Bullmore, E., & Sporns, O. (2012). The economy of brain network organization. Nature Reviews Neuroscience, 13(5), 336–349.
Cheung, M., Chan, A. S., Han, Y. M., & Sze, S. L. (2014). Brain activity during resting state in relation to academic performance. Journal of Psychophysiology, 28(2), 47–53.
Chiang, M.-C., Barysheva, M., Shattuck, D. W., Lee, A. D., Madsen, S. K., Avedissian, C., . . . Thompson, P. M. (2009). Genetics of brain fiber architecture and intellectual performance. Journal of Neuroscience, 29(7), 2212–2224.
Cole, M. W., Bassett, D. S., Power, J. D., Braver, T. S., & Petersen, S. E. (2014). Intrinsic and task-evoked network architectures of the human brain. Neuron, 83(1), 238–251.
Cole, M. W., Yarkoni, T., Repovs, G., Anticevic, A., & Braver, T. S. (2012). Global connectivity of prefrontal cortex predicts cognitive control and intelligence. Journal of Neuroscience, 32(26), 8988–8999.
Corbetta, M., Patel, G., & Shulman, G. L. (2008). The reorienting system of the human brain: From environment to theory of mind. Neuron, 58(3), 306–324.
Corbetta, M., & Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience, 3(3), 201–215.
Craddock, R. C., Jbabdi, S., Yan, C. G., Vogelstein, J. T., Castellanos, F. X., Di Martino, A., . . . Milham, M. P. (2013). Imaging human connectomes at the macroscale. Nature Methods, 10(6), 524–539.
Crossley, N. A., Mechelli, A., Vértes, P. E., Winton-Brown, T. T., Patel, A. X., Ginestet, C. E., . . . Bullmore, E. T. (2013). Cognitive relevance of the community structure of the human brain functional coactivation network. Proceedings of the National Academy of Sciences USA, 110(28), 11583–11588.
Damiani, D., Pereira, L. K., Damiani, D., & Nascimento, A. M. (2017). Intelligence neurocircuitry: Cortical and subcortical structures. Journal of Morphological Sciences, 34(3), 123–129.
Damoiseaux, J. S., Rombouts, S. A. R. B., Barkhof, F., Scheltens, P., Stam, C. J., Smith, S. M., & Beckmann, C. F. (2006). Consistent resting-state networks across healthy subjects. Proceedings of the National Academy of Sciences USA, 103(37), 13848–13853.
Dosenbach, N. U. F., Fair, D. A., Miezin, F. M., Cohen, A. L., Wenger, K. K., Dosenbach, R. A. T., . . . Petersen, S. E. (2007). Distinct brain networks for adaptive and stable task control in humans. Proceedings of the National Academy of Sciences USA, 104(26), 11073–11078.
Duan, F., Watanabe, K., Yoshimura, Y., Kikuchi, M., Minabe, Y., & Aihara, K. (2014). Relationship between brain network pattern and cognitive performance of children revealed by MEG signals during free viewing of video. Brain and Cognition, 86, 10–16.
Dubois, J., Galdi, P., Paul, L. K., & Adolphs, R. (2018). A distributed brain network predicts general intelligence from resting-state human neuroimaging data. Philosophical Transactions of the Royal Society of London B Biological Sciences, 373(1756), 20170284.
Dunst, B., Benedek, M., Koschutnig, K., Jauk, E., & Neubauer, A. C. (2014). Sex differences in the IQ-white matter microstructure relationship: A DTI study. Brain and Cognition, 91, 71–78.
Ewald, A., Avarvand, F. S., & Nolte, G. (2013). Identifying causal networks of neuronal sources from EEG/MEG data with the phase slope index: A simulation study. Biomedizinische Technik, 58(2), 165–178.
Ferguson, M. A., Anderson, J. S., & Spreng, R. N. (2017). Fluid and flexible minds: Intelligence reflects synchrony in the brain’s intrinsic network architecture. Network Neuroscience, 1(2), 192–207.
Finn, E. S., Shen, X., Scheinost, D., Rosenberg, M. D., Huang, J., Chun, M. M., . . . Constable, R. T. (2015). Functional connectome fingerprinting: Identifying individuals using patterns of brain connectivity. Nature Neuroscience, 18(11), 1664–1671.
Fornito, A., Zalesky, A., & Bullmore, E. (2016). Fundamentals of brain network analysis. Cambridge, MA: Academic Press.
Fortunato, S., & Hric, D. (2016). Community detection in networks: A user guide. Physics Reports, 659, 1–44.
Fox, M. D., Snyder, A. Z., Vincent, J. L., Corbetta, M., Van Essen, D. C., & Raichle, M. E. (2005). The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proceedings of the National Academy of Sciences USA, 102(27), 9673–9678.
Fukushima, M., Betzel, R. F., He, Y., de Reus, M. A., van den Heuvel, M. P., Zuo, X. N., & Sporns, O. (2018). Fluctuations between high- and low-modularity topology in time-resolved functional connectivity. NeuroImage, 180(Pt. B), 406–416.
Girn, M., Mills, C., & Christoff, K. (2019). Linking brain network reconfiguration and intelligence: Are we there yet? Trends in Neuroscience and Education, 15, 62–70.
Glasser, M. F., Coalson, T. S., Robinson, E. C., Hacker, C. D., Harwell, J., Yacoub, E., . . . Smith, S. M. (2016). A multi-modal parcellation of human cerebral cortex. Nature, 536(7615), 171–178.
Gollo, L. L., Roberts, J. A., Cropley, V. L., Di Biase, M. A., Pantelis, C., Zalesky, A., & Breakspear, M. (2018). Fragility and volatility of structural hubs in the human connectome. Nature Neuroscience, 21(8), 1107–1116.
Gonzalez-Castillo, J., Hoy, C. W., Handwerker, D. A., Robinson, M. E., Buchanan, L. C., Saad, Z. S., & Bandettini, P. A. (2015). Tracking ongoing cognition in individuals using brief, whole-brain functional connectivity patterns. Proceedings of the National Academy of Sciences USA, 112(28), 8762–8767.
Gordon, E. M., Laumann, T. O., Adeyemo, B., Huckins, J. F., Kelley, W. M., & Petersen, S. E. (2016). Generation and evaluation of a cortical area parcellation from resting-state correlations. Cerebral Cortex, 26(1), 288–303.
Greene, A. S., Gao, S., Scheinost, D., & Constable, R. T. (2018). Task-induced brain state manipulation improves prediction of individual traits. Nature Communications, 9(1), 2807.
Greicius, M. D., Krasnow, B., Reiss, A. L., & Menon, V. (2003). Functional connectivity in the resting brain: A network analysis of the default mode hypothesis. Proceedings of the National Academy of Sciences USA, 100(1), 253–258.
Greicius, M. D., Supekar, K., Menon, V., & Dougherty, R. F. (2009). Resting-state functional connectivity reflects structural connectivity in the default mode network. Cerebral Cortex, 19(1), 72–78.
Haász, J., Westlye, E. T., Fjær, S., Espeseth, T., Lundervold, A., & Lundervold, A. J. (2013). General fluid-type intelligence is related to indices of white matter structure in middle-aged and old adults. NeuroImage, 83, 372–383.
Hagmann, P., Cammoun, L., Gigandet, X., Meuli, R., Honey, C. J., Wedeen, V. J., & Sporns, O. (2008). Mapping the structural core of human cerebral cortex. PLoS Biology, 6(7), e159.
Hearne, L. J., Mattingley, J. B., & Cocchi, L. (2016). Functional brain networks related to individual differences in human intelligence at rest. Scientific Reports, 6, 32328.
Hilger, K., Ekman, M., Fiebach, C. J., & Basten, U. (2017a). Efficient hubs in the intelligent brain: Nodal efficiency of hub regions in the salience network is associated with general intelligence. Intelligence, 60, 10–25.
Hilger, K., Ekman, M., Fiebach, C. J., & Basten, U. (2017b). Intelligence is associated with the modular structure of intrinsic brain networks. Scientific Reports, 7(1), 16088.
Hilger, K., Fukushima, M., Sporns, O., & Fiebach, C. J. (2020). Temporal stability of functional brain modules associated with human intelligence. Human Brain Mapping, 41(2), 362–372.
Honey, C. J., Kötter, R., Breakspear, M., & Sporns, O. (2007). Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proceedings of the National Academy of Sciences USA, 104(24), 10240–10245.
Honey, C. J., Sporns, O., Cammoun, L., Gigandet, X., Thiran, J. P., Meuli, R., & Hagmann, P. (2009). Predicting human resting-state functional connectivity from structural connectivity. Proceedings of the National Academy of Sciences USA, 106(6), 2035–2040.
Hutchison, R. M., Womelsdorf, T., Gati, J. S., Everling, S., & Menon, R. S. (2013). Resting-state networks show dynamic functional connectivity in awake humans and anesthetized macaques. Human Brain Mapping, 34(9), 2154–2177.
Jahidin, A. H., Taib, M. N., Tahir, N. M., Megat Ali, M. S. A., & Lias, S. (2013). Asymmetry pattern of resting EEG for different IQ levels. Procedia – Social and Behavioral Sciences, 97, 246–251.
Jbabdi, S., Sotiropoulos, S. N., Haber, S. N., Van Essen, D. C., & Behrens, T. E. (2015). Measuring macroscopic brain connections in vivo. Nature Neuroscience, 18(11), 1546.
Jeub, L. G., Sporns, O., & Fortunato, S. (2018). Multiresolution consensus clustering in networks. Scientific Reports, 8(1), 3259.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154.
Kievit, R. A., Davis, S. W., Griffiths, J. D., Correia, M. M., & Henson, R. N. A. (2016). A watershed model of individual differences in fluid intelligence. Neuropsychologia, 91, 186–198.
Kievit, R. A., van Rooijen, H., Wicherts, J. M., Waldorp, L. J., Kan, K. J., Scholte, H. S., & Borsboom, D. (2012). Intelligence and the brain: A model-based approach. Cognitive Neuroscience, 3(2), 89–97.
Kim, D.-J., Davis, E. P., Sandman, C. A., Sporns, O., O’Donnell, B. F., Buss, C., & Hetrick, W. P. (2015). Children’s intellectual ability is associated with structural network integrity. NeuroImage, 124(Pt. A), 550–556.
Koenis, M. M. G., Brouwer, R. M., van den Heuvel, M. P., Mandl, R. C. W., van Soelen, I. L. C., Kahn, R. S., . . . Hulshoff Pol, H. E. (2015). Development of the brain’s structural network efficiency in early adolescence: A longitudinal DTI twin study. Human Brain Mapping, 36(12), 4938–4953.
Kruschwitz, J. D., Waller, L., Daedelow, L. S., Walter, H., & Veer, I. M. (2018). General, crystallized and fluid intelligence are not associated with functional global network efficiency: A replication study with the Human Connectome Project 1200 data set. Neuroimage, 171, 323–331.
Langer, N., Pedroni, A., Gianotti, L. R. R., Hänggi, J., Knoch, D., & Jäncke, L. (2012). Functional brain network efficiency predicts intelligence. Human Brain Mapping, 33(6), 1393–1406.
Langer, N., Pedroni, A., & Jäncke, L. (2013). The problem of thresholding in small-world network analysis. PLoS One, 8(1), e53199.
Langeslag, S. J. E., Schmidt, M., Ghassabian, A., Jaddoe, V. W., Hofman, A., van der Lugt, A., . . . White, T. J. H. (2013). Functional connectivity between parietal and frontal brain regions and intelligence in young children: The Generation R study. Human Brain Mapping, 34(12), 3299–3307.
Latora, V., & Marchiori, M. (2001). Efficient behavior of small-world networks. Physical Review Letters, 87(19), 198701.
Laumann, T. O., Gordon, E. M., Adeyemo, B., Snyder, A. Z., Joo, S. J., Chen, M. Y., . . . Schlaggar, B. L. (2015). Functional system and areal organization of a highly sampled individual human brain. Neuron, 87(3), 657–670.
Lee, T. W., Wu, Y. T., Yu, Y. W. Y., Wu, H. C., & Chen, T. J. (2012). A smarter brain is associated with stronger neural interaction in healthy young females: A resting EEG coherence study. Intelligence, 40(1), 38–48.
Lerch, J. P., Worsley, K., Shaw, W. P., Greenstein, D. K., Lenroot, R. K., Giedd, J., & Evans, A. C. (2006). Mapping anatomical correlations across cerebral cortex (MACACC) using cortical thickness from MRI. Neuroimage, 31(3), 993–1003.
Li, Y. H., Liu, Y., Li, J., Qin, W., Li, K. C., Yu, C. S., & Jiang, T. Z. (2009). Brain anatomical network and intelligence. PLoS Computational Biology, 5(5), e1000395.
Ma, J., Kang, H. J., Kim, J. Y., Jeong, H. S., Im, J. J., Namgung, E., . . . Yoon, S. (2017). Network attributes underlying intellectual giftedness in the developing brain. Scientific Reports, 7(1), 11321.
Maier-Hein, K. H., Neher, P. F., Houde, J. C., Côté, M. A., Garyfallidis, E., Zhong, J., . . . Reddick, W. E. (2017). The challenge of mapping the human connectome based on diffusion tractography. Nature Communications, 8(1), 1349.
Malpas, C. B., Genc, S., Saling, M. M., Velakoulis, D., Desmond, P. M., & O’Brien, T. J. (2016). MRI correlates of general intelligence in neurotypical adults. Journal of Clinical Neuroscience, 24, 128–134.
Navas-Sánchez, F. J., Alemán-Gómez, Y., Sánchez-Gonzalez, J., Guzmán-De-Villoria, J. A., Franco, C., Robles, O., . . . Desco, M. (2013). White matter microstructure correlates of mathematical giftedness and intelligence quotient. Human Brain Mapping, 35(6), 2619–2631.
Neubauer, A. C., & Fink, A. (2009). Intelligence and neural efficiency. Neuroscience and Biobehavioral Reviews, 33(7), 1004–1023.
Neubauer, A. C., Wammerl, M., Benedek, M., Jauk, E., & Jausovec, N. (2017). The influence of transcranial alternating current on fluid intelligence: An fMRI study. Personality and Individual Differences, 118, 50–55.
Newman, M. E., & Girvan, M. (2004). Finding and evaluating community structure in networks. Physical Review E, 69(2), 026113.
Pahor, A., & Jaušovec, N. (2014). Theta–gamma cross-frequency coupling relates to the level of human intelligence. Intelligence, 46, 283–290.
Pamplona, G. S. P., Santos Neto, G. S., Rosset, S. R. E., Rogers, B. P., & Salmon, C. E. G. (2015). Analyzing the association between functional connectivity of the brain and intellectual performance. Frontiers in Human Neuroscience, 9, 61.
Pestilli, F., Yeatman, J. D., Rokem, A., Kay, K. N., & Wandell, B. A. (2014). Evaluation and statistical inference for human connectomes. Nature Methods, 11(10), 1058–1063.
Power, J. D., Cohen, A. L., Nelson, S. M., Wig, G. S., Barnes, K. A., Church, J. A., . . . Petersen, S. E. (2011). Functional network organization of the human brain. Neuron, 72(4), 665–678.
Power, J. D., Schlaggar, B. L., & Petersen, S. E. (2015). Recent progress and outstanding issues in motion correction in resting state fMRI. Neuroimage, 105, 536–551.
Raichle, M. E., MacLeod, A. M., Snyder, A. Z., Powers, W. J., Gusnard, D. A., & Shulman, G. L. (2001). A default mode of brain function. Proceedings of the National Academy of Sciences USA, 98(2), 676–682.
Rubinov, M. (2016). Constraints and spandrels of interareal connectomes. Nature Communications, 7(1), 13812.
Rubinov, M., & Sporns, O. (2010). Complex network measures of brain connectivity: Uses and interpretations. Neuroimage, 52(3), 1059–1069.
Rubinov, M., & Sporns, O. (2011). Weight-conserving characterization of complex functional brain networks. Neuroimage, 56(4), 2068–2079.

41

42

k. hilger and o. sporns

Ryyppö, E., Glerean, E., Brattico, E., Saramäki, J., & Korhonen, O. (2018). Regions of interest as nodes of dynamic functional brain networks. Network Neuroscience, 2(4), 513–535. Santarnecchi, E., Muller, T., Rossi, S., Sarkar, A., Polizzotto, N. R., Rossi, A., & Cohen Kadosh, R. (2016). Individual differences and specificity of prefrontal gamma frequency-tACS on fluid intelligence capabilities. Cortex, 75, 33–43. Santarnecchi, E., Rossi, S., & Rossi, A. (2015). The smarter, the stronger: Intelligence level correlates with brain resilience to systematic insults. Cortex, 64, 293–309. Schmithorst, V. J. (2009). Developmental sex differences in the relation of neuroanatomical connectivity to intelligence. Intelligence, 37(2), 164–173. Schultz, X. D. H., & Cole, X. W. (2016). Higher intelligence is associated with less ask-related brain network reconfiguration. Journal of Neuroscience, 36(33), 8551–8561. Sherman, L. E., Rudie, J. D., Pfeifer, J. H., Masten, C. L., McNealy, K., & Dapretto, M. (2014). Development of the default mode and central executive networks across early adolescence: A longitudinal study. Developmental Cognitive Neuroscience, 10, 148–159. Shinn, M., Romero-Garcia, R., Seidlitz, J., Váša, F., Vértes, P. E., & Bullmore, E. (2017). Versatility of nodal affiliation to communities. Scientific Reports, 7(1), 4273. Smit, D. J. A, Stam, C. J., Posthuma, D., Boomsma, D. I., & De Geus, E. J. C. (2008). Heritability of “small-world” networks in the brain: A graph theoretical analysis of resting-state EEG functional connectivity. Human Brain Mapping, 29(12), 1368–1378. Smith, S. M., Fox, P. T., Miller, K. L., Glahn, D. C., Fox, P. M., Mackay, C. E., . . . Beckmann, C. F. (2009). Correspondence of the brain’s functional architecture during activation and rest. Proceedings of the National Academy of Sciences USA, 106(31), 13040–13045. Smith, S. M., Nichols, T. E., Vidaurre, D., Winkler, A. M., Behrens, T. E., Glasser, M. F., . . . Miller, K. L. (2015). 
A positive-negative mode of population covariation links brain connectivity, demographics and behavior. Nature Neuroscience, 18(11), 1565–1567. Song, M., Zhou, Y., Li, J., Liu, Y., Tian, L., Yu, C., & Jiang, T. (2008). Brain spontaneous functional connectivity and intelligence. NeuroImage, 41(3), 1168–1176. Sporns, O. (2014). Contributions and challenges for network models in cognitive neuroscience. Nature Neuroscience, 17(5), 652–660. Sporns, O., & Betzel, R. F. (2016). Modular brain networks. Annual Review of Psychology, 67, 613–640. Stam, C. J., Nolte, G., & Daffertshofer, A. (2007). Phase lag index: Assessment of functional connectivity from multi channel EEG and MEG with diminished bias from common sources. Human Brain Mapping, 28(11), 1178–1193. Tang, C. Y., Eaves, E. L., Ng, J. C., Carpenter, D. M., Mai, X., Schroeder, D. H., . . . Haier, R. J. (2010). Brain networks for working memory and factors of intelligence assessed in males and females with fMRI and DTI. Intelligence, 38(3), 293–303. Tavor, I., Jones, O. P., Mars, R. B., Smith, S. M., Behrens, T. E., & Jbabdi, S. (2016). Task-free MRI predicts individual differences in brain activity during task performance. Science, 352(6282), 216–220.

Network Neuroscience Methods for Studying Intelligence

Telesford, Q. K., Lynall, M. E., Vettel, J., Miller, M. B., Grafton, S. T., & Bassett, D. S. (2016). Detection of functional brain network reconfiguration during taskdriven cognitive states. NeuroImage, 142, 198–210. Thomas, C., Frank, Q. Y., Irfanoglu, M. O., Modi, P., Saleem, K. S., Leopold, D. A., & Pierpaoli, C. (2014). Anatomical accuracy of brain connections derived from diffusion MRI tractography is inherently limited. Proceedings of the National Academy of Sciences USA, 111(46), 16574–16579. Vaiana, M., & Muldoon, S. F. (2018). Multilayer brain networks. Journal of Nonlinear Science, 1–23. van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. Journal of Neuroscience, 29(23), 7619–7624. Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of “small-world” networks. Nature, 393(6684), 440–442. Wolf, D., Fischer, F. U., Fesenbeckh, J., Yakushev, I., Lelieveld, I. M., Scheurich, A., . . . Fellgiebel, A. (2014). Structural integrity of the corpus callosum predicts long-term transfer of fluid intelligence-related training gains in normal aging. Human Brain Mapping, 35(1), 309–318. Yarkoni, T., & Westfall, J. (2017). Choosing prediction over explanation in psychology: Lessons from machine learning. Perspectives on Psychological Science, 12(6), 1100–1122. Yeo, B. T., Krienen, F. M., Sepulcre, J., Sabuncu, M. R., Lashkari, D., Hollinshead, M., . . . Buckner, R. L. (2011). The organization of the human cerebral cortex estimated by intrinsic functional connectivity. Journal of Neurophysiology, 106(3), 1125–1165. Yeo, R. A., Ryman, S. G., van den Heuvel, M. P., de Reus, M. A., Jung, R. E., Pommy, J., . . . Calhoun, V. D. (2016). Graph metrics of structural brain networks in individuals with schizophrenia and healthy controls: Group differences, relationships with intelligence, and genetics. Journal of the International Neuropsychological Society, 22(2), 240–249. Yu, C. 
S., Li, J., Liu, Y., Qin, W., Li, Y. H., Shu, N., . . . Li, K. C. (2008). White matter tract integrity and intelligence in patients with mental retardation and healthy adults. Neuroimage, 40(4), 1533–1541. Zalesky, A., Fornito, A., Cocchi, L., Gollo, L. L., & Breakspear, M. (2014). Timeresolved resting-state brain networks. Proceedings of the National Academy of Sciences, 111(28), 10341–10346. Zalesky, A., Fornito, A., Harding, I. H., Cocchi, L., Yücel, M., Pantelis, C., & Bullmore, E. T. (2010). Whole-brain anatomical networks: does the choice of nodes matter? NeuroImage, 50(3), 970–983. Zalesky, A., Fornito, A., Seal, M. L., Cocchi, L., Westin, C., Bullmore, E. T., . . . Pantelis, C. (2011). Disrupted axonal fiber connectivity in schizophrenia. Biological Psychiatry, 69(1), 80–89. Zuo, X. N., & Xing, X. X. (2014). Test–retest reliabilities of resting-state FMRI measurements in human brain functional connectomics: A systems neuroscience perspective. Neuroscience & Biobehavioral Reviews, 45, 100–118.

43

3 Imaging the Intelligence of Humans
Kenia Martínez and Roberto Colom

Introduction

Most humans can perceive the world, store information in the short and the long term, recover the relevant information when required, comprehend and produce language, orient themselves in known and unknown environments, make calculations of high and low levels of sophistication, and so forth. These cognitive actions must be coordinated and integrated in some way, and "intelligence" is the psychological factor that takes the lead when humans pursue this goal. The manifestation of widespread individual differences in this factor is well documented in everyday life settings and has been addressed by scientific research from at least three complementary models: psychometric models, cognitive/information-processing models, and biological models. Psychometric models of intelligence have identified the dimensions of variation in cognitive ability (Johnson & Bouchard, 2005; Schneider & McGrew, 2018). Information-processing models uncover the basic cognitive processes presumably relevant for the abilities comprising the psychometric models (Chuderski, 2019). Finally, at the biological level, individual differences in brain structure and function (Basten, Hilger, & Fiebach, 2015), along with genetic and non-genetic factors (Plomin, DeFries, Knopik, & Neiderhiser, 2016), are considered with respect to the behavioral variability at both the psychometric and cognitive levels. Most scientists acknowledge that the brain is the organ where the relevant biological processes take place for supporting every expression of intelligent behavior (Haier, 2017; Hunt, 2011). Nevertheless, the brain is also important for further psychological phenomena erroneously believed to be independent of intelligence (Grotzinger, Cheung, Patterson, Harden, & Tucker-Drob, 2019; Hill, Harris, & Deary, 2019).
Thus, for instance, as underscored by Caspi and Moffitt (2018), "all mental disorders are expressed through dysfunction of the same organ (brain), whereas physical diseases such as cirrhosis, emphysema, and diabetes are manifested through dysfunction of different organ systems. Viewed from this perspective, perhaps the search for non-specificity in psychiatry is not unreasonable (. . .) a usual way to think about the meaning of a general factor of psychopathology (p) is, by analogy, in relation to cognitive abilities (. . .) just as there is a general factor of cognitive ability (g), it is possible that there is also a p" (pp. 3–8). Mental disorders and mental
abilities share the same brain and, from this broad perspective, Colom, Chuderski, and Santarnecchi (2016) wrote the following regarding human intelligence: available evidence might depart from the view that there is a place in the brain for intelligence, and even that only places are relevant, as temporal properties of neural processing like synchronization and coordination might also play an important role, as well as network-level dynamics promoting the ability to evolve, robustness, and plasticity (. . .) maybe there is no place in the brain for intelligence because the brain itself is the place. And we only have a single brain. (p. 187, emphasis added)

Neuroimaging can likely help scientists answer the question of why some people are smarter than others by identifying differences at the brain level. However, despite the large number of studies analyzing the links between structural and functional brain properties and high-order cognition, we still lack conclusive answers. Many early studies had small samples and often limited their analyses to a single measure of intelligence, in addition to sometimes having poor quality control of image data (see Drakulich and Karama, Chapter 11). Brain properties, regions, and networks presumably supporting cognitive performance differences may look unstable across research studies (Colom, Karama, Jung, & Haier, 2010; Deary, Penke, & Johnson, 2010; Haier, 2017; Jung & Haier, 2007). Considerable neuroimaging evidence is consistent with the idea that understanding the intelligence of humans from a biological standpoint will not be achieved by focusing on specific regions of the brain, but on brain networks hierarchically organized to collaborate and compete in some way (Colom & Thompson, 2011; Colom, Jung, & Haier, 2006; Colom et al., 2010; Haier, 2017). Why does identifying the brain features supporting the intelligence of humans using neuroimaging approaches seem so difficult?

Imaging the Intelligence of Humans

In 2007, Rex Jung and Richard Haier proposed the Parieto-Frontal Integration Theory of intelligence (P-FIT), based on the qualitative commonalities across 37 structural and functional neuroimaging studies published between 1988 and 2007. The framework identified several areas distributed across the brain as relevant for intelligence, with special emphasis on the dorsolateral prefrontal cortex and the parietal cortex (Figure 3.1). The model summarized (and combined) the exploratory research available at the time, which used different structural and functional neuroimaging approaches for studying the brain support of intelligent behavior. Since then, most neuroimaging studies addressing the topic have evaluated and interpreted their results using the P-FIT as a frame of reference. However, two caveats should be highlighted: (1) the model is very generic and difficult to test against competing models (Dubois, Galdi, Paul, & Adolphs, 2018), and (2) the empirical evidence does not all necessarily converge to the same degree on the brain regions highlighted by the P-FIT model (Colom, Jung, & Haier, 2007; Martínez et al., 2015). There is remarkable variability among the studies summarized by Jung and Haier (2007), and only a small number of identified brain areas approached 50% convergence across published studies, even among those employing the same neuroimaging strategy. Thus, for instance, considering gray matter properties, 32 brain areas were initially nominated, but only BAs 39–40 and 10 showed 50% convergence. Nevertheless, subsequent studies showed that the evidence is roughly consistent with the P-FIT model, mainly because virtually the entire brain would be relevant for supporting intelligent behavior. Also, it should be noted that, because of the heterogeneity of approaches across studies, even the most consistently identified brain regions may show relatively low levels of convergence.

Figure 3.1 Regions identified by the Parieto-Frontal Integration Theory (P-FIT) as relevant for human intelligence. These regions are thought to support different information-processing stages. First, occipital and temporal areas process sensory information (Stage 1): the extrastriate cortex (Brodmann areas (BA) 18 and 19) and the fusiform gyrus (BA 37), involved with recognition, imagery, and elaboration of visual inputs, as well as Wernicke's area (BA 22) for analysis and elaboration of the syntax of auditory information. In the second processing stage (Stage 2), integration and abstraction of the sensory information is carried out by parietal BAs 39 (angular gyrus), 40 (supramarginal gyrus), and 7 (superior parietal lobule). Next, in Stage 3, the parietal areas interact with the frontal regions (BAs 6, 9, 10, 45, 46, and 47), serving problem solving, evaluation, and hypothesis testing. Finally, in Stage 4, the anterior cingulate (BA 32) is implicated in response selection and inhibition of alternative responses, once the best solution is determined in the previous stage.


These are examples of relevant sources of variability across neuroimaging studies of intelligence potentially contributing to this heterogeneity:
1. How intelligence is defined and estimated (IQ vs. g, broad domains such as fluid, crystallized, or visuospatial intelligence, the specific measures administered across studies, and so forth).
2. How MRI images are processed (T1- and T2-weighted, diffusion-weighted, etc.).
3. Which brain feature (structure or function), tissue (white matter, gray matter), or property (gray matter volume, cortical thickness, cortical folding pattern, responsiveness to targeted stimulation, white matter microstructure, connectivity, etc.) is considered.
4. Which humans are analyzed (sex, age, healthy humans, patients with chronic or acute lesions, infants, the elderly, etc.).
The remainder of this chapter is devoted to briefly addressing these key points because they are extremely important for pursuing a proper scientific investigation regarding the potential role of the brain in human intelligence.

What Intelligence?

The evidence suggests that human intelligence can be conceptualized as a high-order integrative mental ability. Measured cognitive performance differences on standardized tests organize people according to their general mental ability (g). Moreover, individual variations in g are related to performance differences in a set of lower-level cognitive processes. Psychometric models have shown that individual differences on intelligence tests can be grouped into a number of broad and narrow cognitive dimensions. These models are built on performance differences assessed by diverse speeded and non-speeded tests across content domains (abstract, verbal, numerical, spatial, and so forth). Individual differences in the measured performance are submitted to exploratory or confirmatory factor analysis for separating the sources of variance contributing to a general factor (g), cognitive abilities (group factors), and cognitive skills (test specificities). It is well known within the psychometric literature that g should be identified by three or more broad cognitive factors, and these latter factors must be identified by three or more tests varying in content and processing requirements (Haier et al., 2009). The outcomes derived from this psychometric framework support the view that intelligence has a hierarchical structure, with fluid-abstract, crystallized-verbal, and visuospatial abilities being the most frequently considered factors of intelligence (Schneider & McGrew, 2018). In samples representative of the general population, g factor scores usually account for no less than 50% of the performance differences assessed by standardized tests, and the obtained g estimates are extremely consistent across intelligence batteries (Johnson, Bouchard, Krueger, McGue, & Gottesman, 2004; Johnson, te Nijenhuis, & Bouchard, 2008).

It is imperative to consider the available psychometric evidence when planning brain imaging studies of human intelligence. Figure 3.2 depicts an

Figure 3.2 Variability in the gray matter correlates of intelligence across the psychometric hierarchy as reported in one study by Román et al. (2014). Results at the test level are widespread across the cortex and show only minor overlaps with their respective factors (Gf, Gc, and Gv). This highlights the distinction among constructs, vehicles, and measurements when trying to improve our understanding of the biological underpinnings of cognitive performance. As noted by Jensen (1998) in his seminal book (The g Factor. The Science of Mental Ability), a given psychological construct (e.g., fluid intelligence) can be represented by distinguishable vehicles (e.g., intelligence tests) yielding different measurements. Changes in the measurements may or may not involve changes in the construct. The former changes involve different sources of variance. Using single or omnibus cognitive measures provides largely different results when their biological bases are systematically inspected. Therefore, fine-grained psychometric approaches for defining the constructs of interest are strongly required (Colom & Thompson, 2011; Colom et al., 2010; Haier et al., 2009). Figure adapted from Román et al. (2014)


example of why this is extremely important. This example, which needs to be replicated in larger samples with more stringent statistical thresholds that account for multiple comparisons, suggests variability in the gray matter correlates of human intelligence differences when considered at different levels of the intelligence hierarchy (Román et al., 2014). Relatedly, the same score obtained for a given intelligence dimension might result from different cognitive profiles, where the specific skills involved contribute to the observed performance to different degrees. Distinguishable engagement of brain regions is expected when different mental processes are involved. The same test scores might be achieved using different cognitive strategies and brain features (Price, 2018). However, brain imaging findings typically reflect group averages and more or less overlapping regions across individuals. Most research studies have considered standardized global IQ indices, single measures, or poorly defined measurement models [see Colom et al. (2009) for a discussion of this issue]. Moreover, many studies have used only one cognitive test/subtest to tap psychometric g, or one or two tests to tap cognitive ability domain factors (Cole, Yarkoni, Repovš, Anticevic, & Braver, 2012; Haier et al., 2009; Karama et al., 2011; Langer et al., 2012). Although Haier et al. (2009) stressed the need to follow required standards in the neuroimaging research of the intelligence of humans, these standards are usually neglected. It is worthwhile to recall them before moving ahead:
1. Use several diverse measures tapping abstract, verbal, numerical, and visuospatial content domains.
2. Use three or more measures to define each cognitive ability (group factor). These abilities should fit the main factors comprised in psychometric models such as the CHC (Schneider & McGrew, 2018) or the g-VPR (Johnson & Bouchard, 2005).
3.
Measures for each cognitive ability should not be based solely on speeded or non-speeded tests; both types should be used. This recommendation is based on the fact that lower-level abilities comprise both level (non-speeded) and speed factors.
4. Use three or more lower-level cognitive abilities to define the higher-order factor representing g. Ideally, measurement models should reveal that nonverbal, abstract, or fluid reasoning is the group factor best predicted by g. Fluid reasoning (drawing inferences, concept formation, classification, generating and testing hypotheses, identifying relations, comprehending implications, problem solving, extrapolating, and transforming information) is the cognitive ability most closely related to g.
5. Find a way to separate the sources of variance contributing to participants' performance on the administered measures. The influence of g is pervasive, but it changes for different lower-order cognitive abilities and individual measures. Participants' scores result from g, cognitive abilities (group factors), cognitive skills (test specificities), and statistical noise. Brain


correlates for a given cognitive ability, like verbal ability or spatial ability, are influenced by all these sources of variance, and they must be distinguished. Neuroimaging studies of intelligence must be carefully designed using what we already know after a century of research on the psychometrics of the intelligence of humans. Otherwise, scientific research efforts will be jeopardized.
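The variance-separation logic behind these standards can be illustrated with a toy simulation; all numbers below are hypothetical. Scores for nine tests are generated from a general factor, three group factors, and test-specific noise, and the first principal component of the resulting correlation matrix recovers a positive loading for every test, the classic positive-manifold signature of g:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # simulated participants (hypothetical data)

# Each test score = general factor (g) + its group factor + test-specific
# noise: the three variance sources distinguished in the standards above.
g = rng.normal(size=n)
group_factors = {d: rng.normal(size=n) for d in ("fluid", "verbal", "spatial")}

scores = []
for factor in group_factors.values():
    for _ in range(3):  # three tests per group factor, as recommended
        scores.append(0.7 * g + 0.4 * factor + 0.5 * rng.normal(size=n))
X = np.column_stack(scores)

# First eigenvector of the correlation matrix as a crude g estimate:
# with a positive manifold, every test loads positively on it.
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)                  # ascending order
pc1 = eigvecs[:, -1] * np.sign(eigvecs[:, -1].sum())  # fix sign convention
print(np.round(pc1, 2), round(eigvals[-1] / R.shape[0], 2))
```

In practice the separation is done with confirmatory factor models rather than this PCA shortcut, which is used here only to make the decomposition concrete.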

What Neuroimaging Approach and Brain Property?

In the last decade, the number of methods for processing structural and functional MRIs has increased, along with the number of potential neuro-markers relevant for the intelligence of humans. Technical refinements address the required improvements in the reliable and exhaustive description of brain structural and functional variations related to intelligence differences. However, there are methodological caveats eroding the goal of achieving sound reproducibility across research studies (Botvinik-Nezer et al., 2020). The most replicated brain correlate of intelligence is global brain size. The meta-analysis by Gignac and Bates (2017) addressed three shortcomings of the previously published meta-analysis by Pietschnig, Penke, Wicherts, Zeiler, and Voracek (2015). Their findings revealed a raw value of r = .29 (N = 1,758). Nevertheless, Gignac and Bates went one step further by classifying the considered studies according to their quality regarding the administered measures of intelligence (fair, good, and excellent). Interestingly, r values increased accordingly, from .23 to .39. Based on these results, the authors confirmed that global brain volume is the largest neurophysiological correlate of the intelligence of humans. There is, however, one more lesson derived from the Gignac and Bates meta-analysis. In their own words, and all else being equal, researchers who administer more comprehensive cognitive ability test batteries require smaller sample sizes to achieve the same level of (statistical) power . . . an investigator who plans to administer 9 cognitive ability tests (40 minutes testing time) would require a sample size of 49 to achieve a power of 0.80, based on an expected correlation of 0.30 . . . it is more efficient to administer a 40-minute comprehensive measure of intelligence across 49 participants in comparison to a relatively brief 20-minute measure across 146 participants. (Gignac & Bates, 2017, p. 27)
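The quantitative point behind the quotation can be sketched with the standard Fisher-z power approximation combined with the classical attenuation formula. The reliabilities and the true correlation below are illustrative assumptions, not Gignac and Bates' own calculations:

```python
import math

def required_n(r, alpha=0.05, power=0.80):
    """Approximate n needed to detect a correlation r (two-sided test),
    via the Fisher z transformation: n = ((z_a + z_b) / z(r))^2 + 3."""
    z_alpha, z_beta = 1.959964, 0.841621  # quantiles for alpha=.05, power=.80
    fisher_z = 0.5 * math.log((1 + r) / (1 - r))
    return math.ceil(((z_alpha + z_beta) / fisher_z) ** 2 + 3)

def attenuated(r_true, rel_x, rel_y):
    """Observed correlation after attenuation by measurement unreliability."""
    return r_true * math.sqrt(rel_x * rel_y)

# A hypothetical true brain-intelligence correlation of .40, measured with
# a brief test (reliability .55) versus a comprehensive battery (.90);
# .95 is an assumed reliability for the brain measure.
r_brief = attenuated(0.40, 0.55, 0.95)
r_battery = attenuated(0.40, 0.90, 0.95)
print(required_n(r_brief), required_n(r_battery))
```

Under these assumed reliabilities, the comprehensive battery needs roughly 40% fewer participants for the same power, which is exactly the trade-off the quotation describes.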

This conclusion cautions against the current widespread (by default) tendency to look with blind admiration at studies considering huge samples while ignoring small-scale research. As recently observed by Thompson et al. (2020) in their review of a decade of research within the ENIGMA consortium: "for effect sizes of d > .6, the reproducibility rate was higher than 90%


even when including the datasets with sample sizes as low as 15, while it was impossible to obtain 70% reproducibility for small effects of d < .2 even with a relatively large minimum sample size threshold of 500." Therefore, bigger samples are not necessarily always better.

Beyond global brain measures, reported relationships between intelligence and regional structural brain features are rather unstable. The cortex varies widely among humans. Two popular approaches for studying these variations in macroscopic cortex anatomy for comparative analyses using high-resolution T1-weighted data are Voxel-Based Morphometry (VBM) and Surface-Based Morphometry (SBM) (Figure 3.3). VBM identifies differences in the local composition of brain tissue across individuals and groups by making voxel-by-voxel comparisons once large-scale differences in gross anatomy and position are discounted after registering the individual structural images to the same standard reference (Ashburner & Friston, 2000; Mechelli, Price, Friston, & Ashburner, 2005). Using SBM methods, on the other hand, surfaces representing the structural boundaries within the brain are created and analyzed on the basis of brain segmentation into white matter, gray matter, and cerebrospinal fluid. Surfaces representing each boundary are often generated by a meshing algorithm that codifies relationships among voxels on the boundary into relationships between polygonal or polyhedral surface elements. For more details on VBM methods, see Drakulich and Karama, Chapter 11.

VBM has been widely used because it requires minimal user intervention and can be completed relatively quickly by following well-documented and publicly available protocols. When considering the inconsistency of results for the neural correlates of cognitive measures, the answer may lie in the details of the various processing techniques.
Regarding VBM, substantial inter-subject variability in cortical anatomy can be problematic for most standard linear and nonlinear volumetric registration algorithms used by different pipelines (Frost & Goebel, 2012). Neglecting this inter-subject macro-anatomical variability may weaken statistical power in group statistics, because different cortical regions are considered to be the same region and hence erroneously compared across subjects (Figure 3.4). The substantial brain variability across humans complicates replicability of findings in independent, albeit comparable, samples. To alleviate this loss of power due to macro-anatomical misregistration of data across subjects, surface-based approaches create geometrical models of the cortex using parametric surfaces and build deformation maps on the geometric models, explicitly associating corresponding cortical regions across individuals (Thompson et al., 2004). Furthermore, SBM allows computing several GM tissue features at the local level. These features include surface complexity, thickness, surface area, or volume. There are several SBM approaches, and each protocol differs in algorithms, parameters, and required user intervention.
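The group-statistics step that these pipelines converge on, a massive univariate test at every voxel or vertex followed by multiple-comparison correction, can be sketched on simulated data. All values below are hypothetical; real analyses rely on packages such as SPM or FSL:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
n_sub, n_vox = 100, 2000

iq = rng.normal(100, 15, size=n_sub)
gm = rng.normal(size=(n_sub, n_vox))        # smoothed GM values (simulated)
gm[:, :50] += 0.05 * (iq[:, None] - 100)    # plant a real effect in 50 voxels

# Voxel-wise Pearson correlation between GM and IQ
iq_z = (iq - iq.mean()) / iq.std()
gm_z = (gm - gm.mean(0)) / gm.std(0)
r = iq_z @ gm_z / n_sub

# Two-sided p-values via the Fisher z normal approximation
z = np.arctanh(r) * sqrt(n_sub - 3)
p = np.array([erfc(abs(v) / sqrt(2)) for v in z])

# Benjamini-Hochberg FDR at q = .05: the largest k with p_(k) <= k*q/m,
# and every voxel ranked at or below k, survives correction
order = np.argsort(p)
passed = p[order] <= 0.05 * np.arange(1, n_vox + 1) / n_vox
k = passed.nonzero()[0].max() + 1 if passed.any() else 0
print(k, "of", n_vox, "voxels survive FDR correction")
```

With the planted effect, roughly the 50 true voxels survive while the null voxels are screened out; shrinking the effect or the sample quickly drives the survivor count to zero, which mirrors the instability discussed in the text.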


Figure 3.3 Workflow for voxel-based morphometry (VBM) and surface-based morphometry (SBM) analysis. The analyses are based on high-resolution structural brain images. VBM and SBM approaches share processing steps, as illustrated in (A). These steps involve linear or nonlinear volumetric alignment of MRI data from all subjects to a standardized anatomical template; correction for intensity inhomogeneities; and classification and segmentation of the image into gray matter, white matter, and cerebrospinal fluid (CSF). In VBM analyses (B), the normalized gray matter segment is smoothed with an isotropic Gaussian kernel, and the smoothed normalized gray matter segments are entered into a statistical model to conduct voxel-wise statistical tests and map significant effects (D, left). To create parametric models of the brain within the SBM framework (C), additional steps are required, including the extraction of cortical surface models from each image data set (surface generation), as well as the nonlinear matching of cortical features on these models to improve the alignment of individual data for group analysis. Finally, morphometric variables (e.g., volume, area, or thickness) obtained at the vertex level should be smoothed before group statistical analysis (D, right).
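The isotropic Gaussian smoothing step in (B) can be sketched as a separable convolution. The FWHM-to-sigma conversion and the normalized kernel are standard; the function itself is only a minimal stand-in for what SPM, FSL, or CIVET implement:

```python
import numpy as np

def smooth_isotropic(vol, fwhm_vox):
    """Isotropic Gaussian smoothing of a volume, applied as successive
    1D convolutions along each axis. fwhm_vox is the kernel's full width
    at half maximum, expressed in voxels."""
    sigma = fwhm_vox / (2 * np.sqrt(2 * np.log(2)))   # FWHM -> sigma
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()                            # preserve total signal
    out = vol.astype(float)
    for axis in range(vol.ndim):                      # separable filter
        out = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, out)
    return out

# A unit impulse spreads into a Gaussian blob; total intensity is preserved.
vol = np.zeros((11, 11, 11))
vol[5, 5, 5] = 1.0
blob = smooth_isotropic(vol, fwhm_vox=2.0)
```

Smoothing trades spatial precision for sensitivity and cross-subject overlap, which is why the chosen FWHM is itself one of the pipeline parameters that can shift results across studies.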


Figure 3.4 Top panel: three left-hemisphere brain surfaces from different individuals. Although all major sulci and gyri are present, their spatial location and geometry vary widely across individuals. For instance, the contours and boundaries of the sulci and gyri highlighted in a 'real' brain (bottom panel, www.studyblue.com/notes/note/n/2-gyri/deck/5877272) are hardly discernible and vary widely across the individual brains in this specific example. In short: individual brains are unique.

Martínez et al. (2015) provide an elegant example of how extreme the influence of imaging protocols on the identified structural brain correlates of human intelligence can be when multiple issues converge (Figure 3.5). In this study, Martínez et al. used MRI data from 3T scanners to compare the outputs of different SBM pipelines on the same subjects. Three different SBM protocols for analyzing structural variations in regional cortical thickness (CT) were used. The distribution and variability of CT and of CT–cognition relationships were systematically compared across pipelines. The findings revealed that, when all issues converge, even using the same SBM approach, the outputs from different processing pipelines can be inconsistent and show what seems like considerable variation in the spatial pattern of observed CT–intelligence relationships. Importantly, when thresholded for multiple comparisons over the whole brain, no association between intelligence and cortical thickness emerged in any of the three pipelines (unpublished finding). This result might stem from (a) low power given the inherently small effect size of the intelligence–cortical thickness associations, (b) relatively old versions of imaging pipelines unable to deal well with 3T MRI data, or (c) a sample that was not large enough to detect the weak signal that may have been there. Available data suggest that, even when all the analyzed gray matter indices quantify the number and density of neuronal bodies and dendritic ramifications


Figure 3.5 Distribution and variability of cortical thickness computed through different surface-based protocols: Cortical Pattern Matching (CPM), BrainSuite, and CIVET. The figure depicts (A) mean values, (B) standard deviation values, and (C) Pearson's correlations between cortical thickness and intelligence variations at the vertex level (Martínez et al., 2015). The results in the figure illustrate how the protocol applied for processing imaging data can influence the identified neural correlates of human intelligence in situations of low statistical power for detecting very small effect sizes. As discussed in the text, the relevance of sample size increases as effect sizes shrink.

supporting information processing capacity, the genetic etiology and cellular mechanisms supporting them can (and must) be distinguished (Panizzon et al., 2009; Winkler et al., 2010). Thus, the number of vertical columns drives the size of the cortical surface area reflecting, to a significant degree, the overall degree of cortical folding, whereas cortical thickness is mainly driven by the number of cells within a vertical column (Rakic, 1988), reflecting the packing


density of neurons, as well as the content of neuropil. Neuroimaging measurements of cortical surface area and cortical thickness have been found to be genetically independent (Chen et al., 2013). Figure 3.6 shows how the GM measurement considered is relevant as a potential source of variability across studies. In the Figure 3.6 example, correlations between cortical surface area (CSA) and cortical gray matter volume (CGMV) across the cortex are stronger

Figure 3.6 (A) Pearson’s correlations among cortical thickness (CT), cortical surface area (CSA), and cortical gray matter volume (CGMV) obtained from a subsample of 279 healthy children and adolescents of the Pediatric MRI Data Repository created for the National Institute of Mental Health MRI Study of Normal Brain Development (Evans & Brain Development Cooperative Group, 2006). (B) Topography of significant correlations (q < .05, false discovery rate (FDR) corrected) between IQ and cortical thickness (CT), cortical surface area (CSA), and cortical gray matter volume (CGMV). Percentages of overlap among maps for the three gray matter measurements are also shown. Note. To obtain cortical thickness measurement, 3D T1-weighted MR images were submitted to the CIVET processing environment (version 1.1.9) developed at the MNI, a fully automated pipeline to extract and co-register the cortical surfaces for each subject. (Ad-Dab’bagh et al., 2006; Kim et al., 2005; MacDonald, Kabani, Avis, & Evans, 2000)
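The FDR threshold mentioned in the caption (q < .05) is typically implemented with the Benjamini–Hochberg step-up procedure. A minimal Python sketch (illustrative only; not the implementation used by CIVET or the cited study):

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean mask marking which p-values survive
    FDR correction at level q.
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                 # indices of p-values, ascending
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds        # step-up comparison: p_(k) <= k*q/m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest k satisfying the bound
        reject[order[: k + 1]] = True     # reject all hypotheses up to rank k
    return reject

# Example: with q = .05, the three smallest p-values survive correction.
mask = fdr_bh([0.01, 0.02, 0.03, 0.5, 0.9])
```

Applied vertex-wise, the same logic controls the expected proportion of false positives among the vertices declared significant, which is far less conservative than Bonferroni-style whole-brain thresholds.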
compared to cortical thickness (CT)–CGMV correlations (Figure 3.6A). Also, associations between CT and CSA are low. Figure 3.6B shows the spatial maps for significant (all positive) correlations with IQ scores: higher IQ scores were associated with greater gray matter measurements in several cortical regions. The highest percentage of significant vertices was found for the IQ–CGMV relationships (45.02%). The patterns of IQ–CT and IQ–CSA correlations were largely different (only 1.79% of significant vertices overlapped). In contrast, 50.79% of IQ–CGMV significant relationships were shared with IQ–CT (20.12%) or IQ–CSA (30.67%) associations. Therefore, findings based on different measures might not be directly comparable. This being said, one must keep in mind that there are different kinds of surfaces that could be used to calculate area: pial surface area, white matter surface area, and mid-surface area (surface placement at the midpoint of cortical thickness). Pial surface area and mid-surface area will correlate with thickness, as they are based, in part, on thickness, whereas white matter surface area will correlate much less (if at all) with thickness. The same goes, more or less, for correlations between CSA and CGMV. CT and CGMV will definitely correlate, as CGMV depends on thickness on top of also being associated with area. In other words, correlations between CSA and other metrics will highly depend on which measure of CSA is used. The example also illustrates the fact that some neuromarkers seem to be more relevant than others for explaining the variability observed at the psychological level. As noted above, larger brains tend to show greater intelligence levels, which may be related to increased numbers of neurons, increased sulcal convolution (surface area), or processing units (number of vertical columns), rather than to cortical thickness (Im et al., 2008; Pakkenberg & Gundersen, 1997). Variability in CSA is greater than variability in CT across individuals and species.
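Because CGMV behaves, to a first approximation, as the product of CT and CSA, it necessarily correlates with both even when CT and CSA are nearly independent. A toy simulation (all numbers arbitrary; it assumes independent CT and CSA with greater relative variability in CSA, as the text notes) makes the point:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # simulated subjects

# Assume CT and CSA are generated independently (positive-valued),
# consistent with their reported genetic independence; CSA gets the
# larger relative variability. Units and parameters are arbitrary.
ct = rng.normal(2.5, 0.2, n)        # cortical thickness, mm
csa = rng.normal(1800, 270, n)      # cortical surface area, cm^2
cgmv = ct * csa                     # volume as thickness x area (approximation)

r_ct_csa = np.corrcoef(ct, csa)[0, 1]      # near zero by construction
r_cgmv_csa = np.corrcoef(cgmv, csa)[0, 1]  # strong: volume inherits area variance
r_cgmv_ct = np.corrcoef(cgmv, ct)[0, 1]    # moderate: volume also inherits thickness
```

In this toy setup the CGMV–CSA correlation comes out clearly stronger than the CGMV–CT correlation, mirroring the pattern described for Figure 3.6A.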
The former index showed dramatic growth over the course of evolution, which may support differences in intelligent behavior across species (Roth & Dicke, 2005). This, in combination with the differences in the surfaces used to estimate surface area, might account for some reports of more prominent findings for CGMV and CSA than for CT when considering their relationships with individual differences in intelligence (Colom et al., 2013; Fjell et al., 2015; Vuoksimaa et al., 2015). Similar arguments might apply to studies conducted under the assumption that high intelligence probably requires undisrupted and reliable information transfer among brain regions along white matter fibers and functional connections. In this regard, diffusion tensor imaging (DTI) and functional MRI have been used to study which properties of interacting brain networks predict individual differences in intelligence. For more on functional brain imaging of intelligence, see Chapter 12 by Basten and Fiebach. These studies revealed significant correlations between water diffusion parameters that quantify white matter integrity (such as fractional anisotropy,
mean diffusivity, radial, and axial diffusivity) and intelligence measured by standardized tests. For more on white matter associations with intelligence, see Chapter 10 by Genç and Fraenz. These associations have been reported at voxel and tract levels. Again, there is a variety of available processing pipelines to obtain the diffusion parameters and to compute the main bundles of white matter fibers by tractography algorithms. As a result, findings are also variable, and the same pattern is observed when functional studies (task-related and resting state) are considered. Typically, connectome-based studies rely on graph theory metrics (see Figure 3.7). For more on network analyses, see Hilger and Sporns, Chapter 2. Usually considered measures include connection strength and degree, global and local efficiency, clustering, and characteristic path length. Where some studies find significant associations between intelligence and brain network efficiency (Dubois et al., 2018; Pineda-Pardo, Martínez, Román, & Colom, 2016; van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009), others do not, even when using the same dataset (e.g., the results obtained by Dubois et al. (2018) stand in marked contrast to those obtained by Kruschwitz, Waller, Daedelow, Walter, and Veer (2018) using Human Connectome Project data). Nevertheless, structural and functional studies support four tentative general conclusions:

1. Intelligent behavior relies on distributed (and interacting) brain networks (Barbey, 2018). Regions involved in these networks overlap with frontal-parietal regions underscored by the P-FIT model (Pineda-Pardo et al., 2016), but also with other regions, such as those involved in the Default Mode Network (DMN) (Dubois et al., 2018).

2. Brains of humans with higher levels of intelligence seem to process information more efficiently (using fewer brain resources when performing demanding cognitive tasks) than brains of humans with lower levels of intelligence (Haier, 2017).
For more on the efficiency hypothesis, see Basten and Fiebach, Chapter 12.

3. It is important to distinguish between “task” and “individual differences” approaches for exploring brain–intelligence relationships. The goal of the task approach is to detect those regions engaged across a group of individuals, ignoring inter-subject differences that might be associated with intelligence differences. The individual differences approach assesses whether differences in brain features are or are not correlated with individual differences in intelligence: “the fact that a brain region is commonly activated during a cognitive challenge does not yet imply that individual differences in this activation are linked to individual differences in cognitive ability” (Basten et al., 2015, p. 11). Using the individual differences approach, Basten et al.’s (2015) meta-analysis found that structural and functional correlates of intelligence are dissociated (Figure 3.8). For more on the individual differences approach, see Chapter 12 by Basten and Fiebach.

Figure 3.7 (A) Summary of basic analytic steps for connectome-based analyses. (B) The analytic sequence for computing the structural and functional connectivity matrices: T1-weighted MRI images are used for cortical and subcortical parcellation, whereas diffusion images and BOLD (Blood Oxygen Level-Dependent) images are processed for computing diffusion tensor tractography and blood flow time courses, respectively. Structural connectivity networks are typically represented as symmetric matrices including normalized weights connecting each pair of nodes in the parcellation scheme. Functional connectivity is commonly calculated as a Pearson correlation between each pair of regions in the adopted parcellation; the correlations are subsequently Fisher-Z transformed. (C) Results from two published studies relating individual differences in intelligence to functional and structural connectivity. (Hearne, Mattingley, & Cocchi, 2016; Ponsoda et al., 2017)
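The functional-connectivity computation described in the caption (Pearson correlations between regional BOLD time courses, then Fisher-Z) can be sketched in a few lines of Python; the time series here are random toy data, and the region count and scan length are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)
n_regions, n_timepoints = 10, 200

# Toy BOLD time courses: one row per region (in practice, the signal
# averaged within each parcel of the adopted parcellation scheme).
ts = rng.standard_normal((n_regions, n_timepoints))

# Pearson correlation between every pair of regional time courses.
fc = np.corrcoef(ts)

# Fisher-Z transform (arctanh) stabilizes the variance of correlations;
# the diagonal (r = 1) is zeroed first to avoid arctanh(1) = inf.
np.fill_diagonal(fc, 0.0)
fc_z = np.arctanh(fc)
```

The resulting symmetric region-by-region matrix is the adjacency matrix on which the graph metrics discussed in the text (strength, degree, efficiency, clustering, path length) are subsequently computed.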

Figure 3.8 Structural and functional correlates of human intelligence are not identified within the same brain regions: “the dissociation of functional vs. structural brain imaging correlates of intelligence is at odds with the principle assumption of the P-FIT that functional and structural studies on neural correlates of intelligence converge to imply the same set of brain regions.” (Basten et al., 2015, p. 21)

In short, means and correlations might provide findings with substantially different theoretical implications.

4. Novel analytic strategies might help to find “new” brain properties or neuromarkers that better predict intelligent behavior (e.g., brain resilience and brain entropy). Thus, for instance, Santarnecchi, Rossi, and Rossi’s (2015) findings suggest that the brains of humans with greater intelligence levels are more resilient to targeted and random attacks. They quantified the robustness of individual networks using graph-based metrics after the systematic loss of the most important nodes within the system, concluding that the higher the intelligence level, the greater the distributed processing capacity. This thought-provoking finding requires independent replication using functional and structural data. Another interesting study suggests that greater brain entropy (measured from resting-state data) is associated with higher intelligence (Saxe, Calderone, & Morales, 2018). Within this framework, entropy is considered “an indicator of the brain’s general readiness to process unpredictable stimuli from the environment, rather than the active use of brain states during a particular task” (p. 13).
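The graph measures and the “targeted attack” logic mentioned in conclusion 4 can be illustrated on a toy network; this sketch uses a random binary graph and binary shortest paths, not the weighted pipelines of the cited studies:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_adjacency(n=40, density=0.15):
    """Symmetric binary adjacency matrix with no self-connections."""
    a = (rng.random((n, n)) < density).astype(float)
    a = np.triu(a, 1)
    return a + a.T

def global_efficiency(adj):
    """Mean inverse shortest-path length (Floyd-Warshall on a binary graph)."""
    n = adj.shape[0]
    d = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(d, 0.0)
    for k in range(n):  # Floyd-Warshall relaxation
        d = np.minimum(d, d[:, k, None] + d[None, k, :])
    inv = 1.0 / d[~np.eye(n, dtype=bool)]  # inverse off-diagonal distances
    return inv.mean()

adj = random_adjacency()
degree = adj.sum(axis=0)            # node degree (strength, for binary graphs)
e_full = global_efficiency(adj)

# Targeted attack: delete the five highest-degree nodes (hubs).
hubs = np.argsort(degree)[-5:]
keep = np.setdiff1d(np.arange(adj.shape[0]), hubs)
e_targeted = global_efficiency(adj[np.ix_(keep, keep)])

# Random attack: delete the same number of randomly chosen nodes.
rand = rng.choice(adj.shape[0], size=5, replace=False)
keep_r = np.setdiff1d(np.arange(adj.shape[0]), rand)
e_random = global_efficiency(adj[np.ix_(keep_r, keep_r)])
```

In resilience analyses of this kind, global efficiency typically falls more after removing hubs than after removing the same number of random nodes; the size of that gap is one way to quantify a network's robustness.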

In summary, scientists are now strongly inclined to apply multimodal approaches that exhaustively characterize individual brains and to submit the data to machine learning algorithms for selecting the pool of neuromarkers that maximizes predictive power.

What Humans?

Heterogeneity in sample characteristics across studies has traditionally been addressed by regressing out the influence of potential confounding variables from the relationships between intelligence and brain features. The most popular confounding variables are sex and age, because the brain changes with age and there are known systematic neuro-morphometric mean sex differences. These differences could add noise to a statistical analysis if not accounted for or, even worse in the case of age, brain differences might be mistaken for substrates of cognitive change when they are in fact age effects that covary with cognitive development but are somewhat independent of it. However, many studies do not look at “age by intelligence” and “sex by intelligence” interactions because of the assumption that similar brain regions and networks support intelligent performance in both sexes and across developmental stages. Yet men and women might have distinguishable neural substrates for intelligence (Chekroud, Ward, Rosenberg, & Holmes, 2016; Haier, Jung, Yeo, Head, & Alkire, 2005; Ingalhalikar et al., 2014; Ritchie et al., 2018), and brain correlates of intelligence might change across the life span (Estrada, Ferrer, Román, Karama, & Colom, 2019; Román et al., 2018; Viviano, Raz, Yuan, & Damoiseaux, 2017; Wendelken et al., 2017). There is still another possibility: there might be distinguishable neural substrates of intelligence for different individuals, regardless of their sex and age. Different brains may achieve closely similar intelligence levels through varied hard-wired and soft-wired routes, and group analyses mainly detect regions that overlap across the sample. Figure 3.9 illustrates the point. Martínez et al.
(2015) matched 100 pairs of independent samples of participants for sex, age, and cognitive performance (fluid intelligence, crystallized intelligence, visuospatial intelligence, working memory capacity, executive updating, controlled attention, and processing speed). Afterwards, the reproducibility of the brain–cognition correlations across these samples was assessed. Figure 3.9 depicts a randomly selected case example of the low convergence observed for the 100 pairs of matched samples. As shown in Figure 3.5, this meager convergence might follow, at least in part, from the fact that the effect size of the association between intelligence and cortical thickness is too small to be detected in a stable fashion with relatively small sample sizes. In a condition of low power for detecting very small effect sizes,

Figure 3.9 Mean (A) and variability (B) of cortical thickness across the cortex in two groups of individuals (Sample A and Sample B) matched for sex, age, and cognitive performance. The regional maps are almost identical. Pearson’s correlations between visuospatial intelligence and cortical thickness differences in these two groups are also shown (C). The maps are remarkably different. This happens even though the distributions of cortical thickness and intelligence scores are identical in both groups. The results might illustrate the fact that not all brains work the same way. (Haier, 2017)

results can be affected by random error/noise. This issue is further compounded by not using whole-brain thresholds to control for multiple comparisons. In such situations, no definitive conclusions can be drawn. Nevertheless, we think Euler’s (2018) evaluation of this study raises a thought-provoking possibility that should stimulate refined research:

the key finding was that although the subsamples were essentially identical in terms of their anatomical distribution of mean cortical thickness and variability, they showed no significant overlap (and even opposite effects) in some of their brain–ability relationships. [These findings] suggest the deeper and more intriguing possibility that cognitive ability might be structured somewhat differently in different individuals . . . imaging approaches that strongly emphasize inter-subject consistency may be looking for convergence that does not ultimately exist. (p. 101)
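The low-power argument can be made concrete with a toy simulation: assume a true brain–ability correlation of r = .10 (a value of the order discussed in the text; everything else here is arbitrary) and repeatedly draw pairs of “matched” samples of 100 participants each:

```python
import numpy as np

rng = np.random.default_rng(7)
true_r, n, n_pairs = 0.10, 100, 2000

def sample_r(true_r, n):
    """Correlation observed in one sample drawn from a bivariate
    normal population with the given true correlation."""
    cov = [[1.0, true_r], [true_r, 1.0]]
    x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return np.corrcoef(x[:, 0], x[:, 1])[0, 1]

# Critical |r| for two-tailed p < .05 with n = 100 (Fisher-z approximation).
r_crit = np.tanh(1.96 / np.sqrt(n - 3))

both_significant = 0
for _ in range(n_pairs):
    r1, r2 = sample_r(true_r, n), sample_r(true_r, n)
    if abs(r1) > r_crit and abs(r2) > r_crit and np.sign(r1) == np.sign(r2):
        both_significant += 1

replication_rate = both_significant / n_pairs
```

In this regime only a small fraction of sample pairs yields a significant, same-sign correlation in both samples, which is one way to read the meager convergence shown in Figure 3.9.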

The finding highlighted by Martínez et al. (2015) is far from surprising. As demonstrated by Gratton et al. (2018), brain network organization arises from stable factors (genetics, structural connections, and long-term histories of coactivation among regions). Ongoing cognition or day-to-day variations are much less relevant: “the large subject-level effects in functional networks highlight the importance of individualized approaches for studying properties of brain organization.” These researchers considered data from 10 individuals scanned across 10 fMRI sessions. Within each session five
runs were completed: rest, visual coherence, semantic, memory, and motor. Variability of functional networks was analyzed, and the key result showed that the clustering of functional networks was explained by participant identity: “individual variability accounts for the majority of variation between functional networks, with substantially smaller effects due to task or session . . . task states modified functional networks, but these modifications largely varied by individual . . . networks formed from co-activations during tasks strongly resemble functional networks from spontaneous firing at rest.” These results emphasize the relevance of approaches aimed at the individual level for researching brain structure and function: “neglect of individual differences in brain networks may cause researchers to miss substantial and relevant portions of variability in the data.” Increasingly sophisticated technical developments invite a change of perspective from the group to the individual level, as underscored by Dubois and Adolphs (2016): “while the importance of a fully personalized investigation of brain function has been recognized for several years, only recent technological advances now make it possible.” Recent neuroscientific intelligence research points in this direction. Consistent with the framework provided by Colom and Román (2018), Daugherty et al.’s (2020) research identified individuals showing low, moderate, and high responses to targeted cognitive interventions: “acute interventions aimed to promote cognitive ability appear to not be ‘one size fits all’ and individuals vary widely in response to the intervention (. . .) the type of multi-modal intervention activity did not differentiate between performance subgroups.” Intrinsic characteristics of humans win the game.

Concluding Remarks

Because many brain regions participate in superficially disparate cognitive functions, a selective correspondence is difficult to establish. Moreover, cognitive profiles are heterogeneous: humans might display the same performance, but the way in which different underlying mental processes are involved may vary between them. We would expect similar heterogeneity in the association between intelligence and brain features. A direct implication of this lack of consistency in brain properties–psychological factors relationships is the difficulty of choosing among competing neuroimaging protocols. Large samples are welcome (e.g., Human Connectome Project, ENIGMA, UK Biobank, and so forth), but this generalized tendency might mask relevant problematic issues regarding the study of intelligence–brain relationships. A large sample is not synonymous with a high-quality study. Pursuing statistical brute force alone might divert our attention
from the main research goal of finding reliable brain correlates of individual differences in human intelligence and leave us with large blind spots. Because of the dynamic nature of the human brain and the complexities of cognition, replication needs carefully matched samples and strictly comparable psychological scores, neuroimaging methods, and brain properties. But even in such instances, replication may not be achieved. Imaging methods for processing structural and functional MR data are being systematically refined to increase biological plausibility. Advances will help to resolve the observed inconsistencies, but for now it is strongly recommended to replicate findings using different protocols on the same dataset, along with the precise clarification of each processing step. Also, if the computed analyses produce null findings, they should be reported. If researchers choose to move on to exploring the data in further ways to obtain significant findings, they must explicitly acknowledge the move (Button et al., 2013). We think one of the most formidable challenges for intelligence research involves rethinking the best way to study complex psychological factors at the biological level. Statistical analyses for identifying distinguishable brain profiles may be very useful for a better understanding of the brain network properties that account for inter-subject variability in cognitive performance. Moving from the group to the individual level may shed new light (Horien, Shen, Scheinost, & Constable, 2019). Finn et al. (2015) demonstrated that brain functional connectivity profiles act as a “fingerprint” that accurately identifies unique individuals. Moreover, these individualized connectivity profiles predicted, to some degree, intelligence scores.
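Finn et al.’s (2015) fingerprinting logic is easy to sketch: correlate each individual’s connectivity profile from one session with every profile from a second session and predict identity as the best match. The data below are simulated (a stable individual signal plus session noise), not the Human Connectome Project data used in the original study:

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_edges = 30, 500

# Each simulated subject has a stable connectivity "fingerprint";
# each scanning session adds independent noise on top of it.
fingerprint = rng.standard_normal((n_subjects, n_edges))
session1 = fingerprint + 0.5 * rng.standard_normal((n_subjects, n_edges))
session2 = fingerprint + 0.5 * rng.standard_normal((n_subjects, n_edges))

# Correlate every session-1 profile with every session-2 profile
# (z-scoring each profile turns the dot product into a Pearson r).
z1 = (session1 - session1.mean(1, keepdims=True)) / session1.std(1, keepdims=True)
z2 = (session2 - session2.mean(1, keepdims=True)) / session2.std(1, keepdims=True)
similarity = z1 @ z2.T / n_edges   # subject-by-subject correlation matrix

# Identification: predict each subject's identity as the best match.
predicted = similarity.argmax(axis=1)
accuracy = (predicted == np.arange(n_subjects)).mean()
```

With a stable individual component this toy identification is near-perfect; in Finn et al.’s real data, accuracy was well above chance, and the same profiles modestly predicted fluid intelligence.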
As we move through the third decade of the twenty-first century, we would do well to change our mindset when addressing the key question of how varied brain features support the core psychological factor we usually name with the word “intelligence.” Because of the integrative nature of this factor, it is safe to predict that general properties of the brain will be of paramount relevance for enhancing our understanding of the role of this organ in intelligent behavior. This is what Valizadeh, Liem, Mérillat, Hänggi, and Jäncke (2018) found after analyzing 191 individuals (from the Longitudinal Healthy Aging Brain database) scanned three times across two years. Using only 11 large brain regions, they were able to identify individual brains with high accuracy: “even the usage of composite anatomical measures representing relatively large brain areas (total brain volume, total brain area, or mean cortical thickness) are so individual that they can be used for individual subject identification.” Because there are no two individuals on Earth with the same genome, there will be no two individuals with the same brain (Sella & Barton, 2019). This fact cannot be ignored from now on and should be properly weighed when seeking reliable answers to the question of why some people are smarter than others.

References

Ad-Dab’bagh, Y., Lyttelton, O., Muehlboeck, J. S., Lepage, C., Einarson, D., Mok, K., . . . Evans, A. C. (2006). The CIVET image-processing environment: A fully automated comprehensive pipeline for anatomical neuroimaging research. Proceedings of the 12th annual meeting of the organization for human brain mapping (Vol. 2266). Florence, Italy.
Ashburner, J., & Friston, K. J. (2000). Voxel-based morphometry – The methods. NeuroImage, 11(6), 805–821.
Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20.
Basten, U., Hilger, K., & Fiebach, C. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27.
Botvinik-Nezer, R., Holzmeister, F., Camerer, C. F., Dreber, A., Huber, J., Johannesson, M., . . . Schonberg, T. (2020). Variability in the analysis of a single neuroimaging dataset by many teams. Nature, 582, 84–88.
Button, K. S., Ioannidis, J. P., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376.
Caspi, A., & Moffitt, T. E. (2018). All for one and one for all: Mental disorders in one dimension. American Journal of Psychiatry, 175(9), 831–844.
Chekroud, A. M., Ward, E. J., Rosenberg, M. D., & Holmes, A. J. (2016). Patterns in the human brain mosaic discriminate males from females. Proceedings of the National Academy of Sciences, 113(14), E1968–E1968.
Chen, C. H., Fiecas, M., Gutierrez, E. D., Panizzon, M. S., Eyler, L. T., Vuoksimaa, E., . . . Kremen, W. S. (2013). Genetic topography of brain morphology. Proceedings of the National Academy of Sciences, 110(42), 17089–17094.
Chuderski, A. (2019). Even a single trivial binding of information is critical for fluid intelligence. Intelligence, 77, 101396.
Cole, M. W., Yarkoni, T., Repovš, G., Anticevic, A., & Braver, T. S. (2012). Global connectivity of prefrontal cortex predicts cognitive control and intelligence. Journal of Neuroscience, 32(26), 8988–8999.
Colom, R., Burgaleta, M., Román, F. J., Karama, S., Álvarez-Linera, J., Abad, F. J., . . . Haier, R. J. (2013). Neuroanatomic overlap between intelligence and cognitive factors: Morphometry methods provide support for the key role of the frontal lobes. NeuroImage, 72, 143–152. doi: 10.1016/j.neuroimage.2013.01.032.
Colom, R., Chuderski, A., & Santarnecchi, E. (2016). Bridge over troubled water: Commenting on Kovacs and Conway’s process overlap theory. Psychological Inquiry, 27(3), 181–189.
Colom, R., Haier, R. J., Head, K., Álvarez-Linera, J., Quiroga, M. Á., Shih, P. C., & Jung, R. E. (2009). Gray matter correlates of fluid, crystallized, and spatial intelligence: Testing the P-FIT model. Intelligence, 37(2), 124–135.
Colom, R., Jung, R. E., & Haier, R. J. (2006). Distributed brain sites for the g-factor of intelligence. NeuroImage, 31(3), 1359–1365.

Colom, R., Jung, R. E., & Haier, R. J. (2007). General intelligence and memory span: Evidence for a common neuroanatomic framework. Cognitive Neuropsychology, 24(8), 867–878.
Colom, R., Karama, S., Jung, R. E., & Haier, R. J. (2010). Human intelligence and brain networks. Dialogues in Clinical Neuroscience, 12(4), 489–501.
Colom, R., & Román, F. (2018). Enhancing intelligence: From the group to the individual. Journal of Intelligence, 6(1), 11.
Colom, R., & Thompson, P. M. (2011). Understanding human intelligence by imaging the brain. In T. Chamorro-Premuzic, S. von Stumm, & A. Furnham (eds.), The Wiley-Blackwell handbook of individual differences (pp. 330–352). Hoboken, NJ: Wiley-Blackwell.
Daugherty, A. M., Sutton, B. P., Hillman, C., Kramer, A. F., Cohen, N. J., & Barbey, A. K. (2020). Individual differences in the neurobiology of fluid intelligence predict responsiveness to training: Evidence from a comprehensive cognitive, mindfulness meditation, and aerobic exercise intervention. Trends in Neuroscience and Education, 18, 100123.
Deary, I. J., Penke, L., & Johnson, W. (2010). The neuroscience of human intelligence differences. Nature Reviews Neuroscience, 11(3), 201–211.
Dubois, J., & Adolphs, R. (2016). Building a science of individual differences from fMRI. Trends in Cognitive Sciences, 20(6), 425–443.
Dubois, J., Galdi, P., Paul, L. K., & Adolphs, R. (2018). A distributed brain network predicts general intelligence from resting-state human neuroimaging data. Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1756), 20170284.
Estrada, E., Ferrer, E., Román, F. J., Karama, S., & Colom, R. (2019). Time-lagged associations between cognitive and cortical development from childhood to early adulthood. Developmental Psychology, 55(6), 1338–1352.
Euler, M. J. (2018). Intelligence and uncertainty: Implications of hierarchical predictive processing for the neuroscience of cognitive ability. Neuroscience & Biobehavioral Reviews, 94, 93–112.
Evans, A. C., & Brain Development Cooperative Group (2006). The NIH MRI study of normal brain development. NeuroImage, 30(1), 184–202.
Finn, E. S., Shen, X., Scheinost, D., Rosenberg, M. D., Huang, J., Chun, M. M., . . . Constable, R. T. (2015). Functional connectome fingerprint: Identifying individuals using patterns of brain connectivity. Nature Neuroscience, 18(11), 1664–1671.
Fjell, A. M., Westlye, L. T., Amlien, I., Tamnes, C. K., Grydeland, H., Engvig, A., . . . Walhovd, K. B. (2015). High-expanding cortical regions in human development and evolution are related to higher intellectual abilities. Cerebral Cortex, 25(1), 26–34.
Frost, M. A., & Goebel, R. (2012). Measuring structural–functional correspondence: Spatial variability of specialised brain regions after macro-anatomical alignment. NeuroImage, 59(2), 1369–1381.
Gignac, G. E., & Bates, T. C. (2017). Brain volume and intelligence: The moderating role of intelligence measurement quality. Intelligence, 64, 18–29. doi: 10.1016/j.intell.2017.06.004.

Gratton, C., Laumann, T. O., Nielsen, A. N., Greene, D. J., Gordon, E. M., Gilmore, A. W., . . . Petersen, S. E. (2018). Functional brain networks are dominated by stable group and individual factors, not cognitive or daily variation. Neuron, 98(2), 439–452.
Grotzinger, A. D., Cheung, A. K., Patterson, M. W., Harden, K. P., & Tucker-Drob, E. M. (2019). Genetic and environmental links between general factors of psychopathology and cognitive ability in early childhood. Clinical Psychological Science, 7(3), 430–444.
Haier, R. J. (2017). The neuroscience of intelligence. Cambridge University Press.
Haier, R. J., Colom, R., Schroeder, D., Condon, C., Tang, C., Eaves, E., & Head, K. (2009). Gray matter and intelligence factors: Is there a neuro-g? Intelligence, 37(2), 136–144.
Haier, R. J., Jung, R. E., Yeo, R. A., Head, K., & Alkire, M. T. (2005). The neuroanatomy of general intelligence: Sex matters. NeuroImage, 25(1), 320–327.
Hearne, L. J., Mattingley, J. B., & Cocchi, L. (2016). Functional brain networks related to individual differences in human intelligence at rest. Scientific Reports, 6, 32328. doi: 10.1038/srep32328.
Hill, W. D., Harris, S. E., & Deary, I. J. (2019). What genome-wide association studies reveal about the association between intelligence and mental health. Current Opinion in Psychology, 27, 25–30. doi: 10.1016/j.copsyc.2018.07.007.
Horien, C., Shen, X., Scheinost, D., & Constable, R. T. (2019). The individual functional connectome is unique and stable over months to years. NeuroImage, 189, 676–687. doi: 10.1016/j.neuroimage.2019.02.002.
Hunt, E. B. (2011). Human intelligence. Cambridge University Press.
Im, K., Lee, J. M., Lyttelton, O., Kim, S. H., Evans, A. C., & Kim, S. I. (2008). Brain size and cortical structure in the adult human brain. Cerebral Cortex, 18(9), 2181–2191. doi: 10.1093/cercor/bhm244.
Ingalhalikar, M., Smith, A., Parker, D., Satterthwaite, T. D., Elliott, M. A., Ruparel, K., . . . Verma, R. (2014). Sex differences in the structural connectome of the human brain. Proceedings of the National Academy of Sciences, 111(2), 823–828.
Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.
Johnson, W., & Bouchard, T. (2005). The structure of human intelligence: It is verbal, perceptual, and image rotation (VPR), not fluid and crystallized. Intelligence, 33, 393–416.
Johnson, W., Bouchard Jr, T. J., Krueger, R. F., McGue, M., & Gottesman, I. I. (2004). Just one g: Consistent results from three batteries. Intelligence, 32(1), 95–107.
Johnson, W., te Nijenhuis, J., & Bouchard, T. (2008). Still just 1 g: Consistent results from five test batteries. Intelligence, 36(1), 81–95.
Jung, R. E., & Haier, R. J. (2007). The parieto-frontal integration theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–187.
Karama, S., Colom, R., Johnson, W., Deary, I. J., Haier, R., Waber, D. P., . . . Brain Development Cooperative Group (2011). Cortical thickness correlates of specific cognitive performance accounted for by the general factor of intelligence in healthy children aged 6 to 18. NeuroImage, 55(4), 1443–1453.
Kim, J. S., Singh, V., Lee, J. K., Lerch, J., Ad-Dab’bagh, Y., MacDonald, D., . . . Evans, A. C. (2005). Automated 3-D extraction and evaluation of the inner and outer cortical surfaces using a Laplacian map and partial volume effect classification. NeuroImage, 27(1), 210–221.
Kruschwitz, J. D., Waller, L., Daedelow, L. S., Walter, H., & Veer, I. M. (2018). General, crystallized and fluid intelligence are not associated with functional global network efficiency: A replication study with the human connectome project 1200 data set. NeuroImage, 171, 323–331. doi: 10.1016/j.neuroimage.2018.01.018.
Langer, N., Pedroni, A., Gianotti, L. R., Hänggi, J., Knoch, D., & Jäncke, L. (2012). Functional brain network efficiency predicts intelligence. Human Brain Mapping, 33(6), 1393–1406.
MacDonald, D., Kabani, N., Avis, D., & Evans, A. C. (2000). Automated 3-D extraction of inner and outer surfaces of cerebral cortex from MRI. NeuroImage, 12(3), 340–356.
Martínez, K., Madsen, S. K., Joshi, A. A., Joshi, S. H., Roman, F. J., Villalon-Reina, J., . . . Colom, R. (2015). Reproducibility of brain–cognition relationships using three cortical surface-based protocols: An exhaustive analysis based on cortical thickness. Human Brain Mapping, 36(8), 3227–3245.
Mechelli, A., Price, C. J., Friston, K. J., & Ashburner, J. (2005). Voxel-based morphometry of the human brain: Methods and applications. Current Medical Imaging Reviews, 1(2), 105–113.
Pakkenberg, B., & Gundersen, H. J. G. (1997). Neocortical neuron number in humans: Effect of sex and age. Journal of Comparative Neurology, 384(2), 312–320.
Panizzon, M. S., Fennema-Notestine, C., Eyler, L. T., Jernigan, T. L., Prom-Wormley, E., Neale, M., . . . Kremen, W. S. (2009). Distinct genetic influences on cortical surface area and cortical thickness. Cerebral Cortex, 19(11), 2728–2735.
Pietschnig, J., Penke, L., Wicherts, J. M., Zeiler, M., & Voracek, M. (2015). Meta-analysis of associations between human brain volume and intelligence differences: How strong are they and what do they mean? Neuroscience & Biobehavioral Reviews, 57, 411–432.
Pineda-Pardo, J. A., Martínez, K., Román, F. J., & Colom, R. (2016). Structural efficiency within a parieto-frontal network and cognitive differences. Intelligence, 54, 105–116. doi: 10.1016/j.intell.2015.12.002.
Plomin, R., DeFries, J. C., Knopik, V. S., & Neiderhiser, J. M. (2016). Top 10 replicated findings from behavioral genetics. Perspectives on Psychological Science, 11(1), 3–23.
Ponsoda, V., Martínez, K., Pineda-Pardo, J. A., Abad, F. J., Olea, J., Román, F. J., . . . Colom, R. (2017). Structural brain connectivity and cognitive ability differences: A multivariate distance matrix regression analysis. Human Brain Mapping, 38(2), 803–816.
Price, C. J. (2018). The evolution of cognitive models: From neuropsychology to neuroimaging and back. Cortex, 107, 37–49.
Rakic, P. (1988). Specification of cerebral cortical areas. Science, 241(4862), 170–176.


k. martínez and r. colom

Ritchie, S. J., Cox, S. R., Shen, X., Lombardo, M. V., Reus, L. M., Alloza, C., . . . Deary, I. J. (2018). Sex differences in the adult human brain: Evidence from 5216 UK Biobank participants. Cerebral Cortex, 28(8), 2959–2975.
Román, F. J., Abad, F. J., Escorial, S., Burgaleta, M., Martínez, K., Álvarez-Linera, J., . . . Colom, R. (2014). Reversed hierarchy in the brain for general and specific cognitive abilities: A morphometric analysis. Human Brain Mapping, 35(8), 3805–3818.
Román, F. J., Morillo, D., Estrada, E., Escorial, S., Karama, S., & Colom, R. (2018). Brain–intelligence relationships across childhood and adolescence: A latent-variable approach. Intelligence, 68, 21–29. doi: 10.1016/j.intell.2018.02.006.
Roth, G., & Dicke, U. (2005). Evolution of the brain and intelligence. Trends in Cognitive Sciences, 9(5), 250–257.
Santarnecchi, E., Rossi, S., & Rossi, A. (2015). The smarter, the stronger: Intelligence level correlates with brain resilience to systematic insults. Cortex, 64, 293–309. doi: 10.1016/j.cortex.2014.11.005.
Saxe, G. N., Calderone, D., & Morales, L. J. (2018). Brain entropy and human intelligence: A resting-state fMRI study. PLoS One, 13(2), e0191582.
Schneider, W. J., & McGrew, K. S. (2018). The Cattell–Horn–Carroll theory of cognitive abilities. In D. P. Flanagan, & E. M. McDonough (eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 73–163). New York: The Guilford Press.
Sella, G., & Barton, N. H. (2019). Thinking about the evolution of complex traits in the era of genome-wide association studies. Annual Review of Genomics and Human Genetics, 20, 461–493.
Thompson, P. M., Hayashi, K. M., Sowell, E. R., Gogtay, N., Giedd, J. N., Rapoport, J. L., . . . Toga, A. W. (2004). Mapping cortical change in Alzheimer's disease, brain development, and schizophrenia. NeuroImage, 23, S2–S18. doi: 10.1016/j.neuroimage.2004.07.071.
Thompson, P. M., Jahanshad, N., Ching, C. R., Salminen, L. E., Thomopoulos, S. I., Bright, J., . . . for the ENIGMA Consortium (2020). ENIGMA and global neuroscience: A decade of large-scale studies of the brain in health and disease across more than 40 countries. Translational Psychiatry, 10(1), 1–28.
Valizadeh, S. A., Liem, F., Mérillat, S., Hänggi, J., & Jäncke, L. (2018). Identification of individual subjects on the basis of their brain anatomical features. Scientific Reports, 8(1), 1–9.
van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. Journal of Neuroscience, 29(23), 7619–7624.
Viviano, R. P., Raz, N., Yuan, P., & Damoiseaux, J. S. (2017). Associations between dynamic functional connectivity and age, metabolic risk, and cognitive performance. Neurobiology of Aging, 59, 135–143. doi: 10.1016/j.neurobiolaging.2017.08.003.
Vuoksimaa, E., Panizzon, M. S., Chen, C. H., Fiecas, M., Eyler, L. T., Fennema-Notestine, C., . . . Kremen, W. S. (2015). The genetic association between neocortical volume and general cognitive ability is driven by global surface area rather than thickness. Cerebral Cortex, 25(8), 2127–2137.


Wendelken, C., Ferrer, E., Ghetti, S., Bailey, S. K., Cutting, L., & Bunge, S. A. (2017). Frontoparietal structural connectivity in childhood predicts development of functional connectivity and reasoning ability: A large-scale longitudinal investigation. Journal of Neuroscience, 37(35), 8549–8558.
Winkler, A. M., Kochunov, P., Blangero, J., Almasy, L., Zilles, K., Fox, P. T., . . . Glahn, D. C. (2010). Cortical thickness or grey matter volume? The importance of selecting the phenotype for imaging genetics studies. NeuroImage, 53(3), 1135–1146.


4 Research Consortia and Large-Scale Data Repositories for Studying Intelligence

Budhachandra Khundrakpam, Jean-Baptiste Poline, and Alan C. Evans

Neuroimaging of Intelligence

The first neuroimaging studies of intelligence were conducted with positron emission tomography (PET) (Haier et al., 1988). PET was expensive and invasive, but access to neuroimaging broadened considerably once magnetic resonance imaging (MRI) became widely available around the year 2000. The advent of advanced MRI methods enabled researchers to investigate localized (region-level) associations between brain measures and measures of intelligence in healthy individuals (Gray & Thompson, 2004; Luders, Narr, Thompson, & Toga, 2009). At the whole-brain level, MRI-based studies have reported a positive association (r = .40 to .51) between some measures of intelligence and brain size (Andreasen et al., 1993; McDaniel, 2005). Several studies at the voxel and regional levels have also demonstrated positive correlations between morphometry and intelligence in brain regions that are especially relevant to higher cognitive functions, including frontal, temporal, and parietal cortices, the hippocampus, and the cerebellum (Andreasen et al., 1993; Burgaleta, Johnson, Waber, Colom, & Karama, 2014; Colom et al., 2009; Karama et al., 2011; Narr et al., 2007; Shaw et al., 2006). More recently, neuroimaging studies have revealed large-scale structural and functional brain networks as potential neural substrates of intelligence (see the review by Jung & Haier, 2007; Barbey et al., 2012; Barbey, Colom, Paul, & Grafman, 2014; Colom, Karama, Jung, & Haier, 2010; Khundrakpam et al., 2017; Li et al., 2009; Sripada, Angstadt, Rutherford, & Taxali, 2019). The use of neuroimaging for studying intelligence has increased tremendously in recent years (detailed in Haier (2017), The Neuroscience of Intelligence). From initial explorations comprising a handful of subjects, recent studies have been conducted with very large sample sizes.
For example, a recent study using the UK Biobank investigated the brain correlates of longitudinal changes in a measure of fluid intelligence across three time points in 185,317 subjects (Kievit, Fuhrmann, Borgeest, Simpson-Kent, & Henson, 2018). With an increasing number of subjects, the statistical power of a study increases;
however, the required number of subjects depends partly on the effect size of the research question. Many of the large-scale data repositories and research consortia have been established partly because of inconsistent findings from studies with small samples (Turner, Paul, Miller, & Barbey, 2018). Another prominent motivation for large-scale data repositories is that studying the many potential factors influencing a trait such as intelligence (for example, genetic effects on brain measures) leads to severe multiple-comparison problems, thus requiring hundreds, and sometimes many more, subjects (Cox, Ritchie, Fawns-Ritchie, Tucker-Drob, & Deary, 2019).
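The dependence of the required sample size on effect size can be made concrete with a textbook power calculation for a Pearson correlation (via the Fisher z approximation). This formula is a standard statistical result, not one taken from the chapter; the effect sizes are chosen to echo the r = .40 brain-size association mentioned above.

```python
from math import atanh, ceil
from statistics import NormalDist

def required_n(r, alpha=0.05, power=0.80):
    """Approximate sample size needed to detect a Pearson correlation r
    in a two-sided test, using the Fisher z transformation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    z_r = atanh(r)                                 # Fisher z of the effect size
    return ceil(((z_alpha + z_beta) / z_r) ** 2 + 3)

# Required N grows rapidly as the expected correlation shrinks.
for r in (0.40, 0.20, 0.10):
    print(f"r = {r}: need about {required_n(r)} subjects")
```

A whole-brain correlation of r = .40 is detectable with fewer than 50 subjects, whereas r = .10 already demands nearly 800, and the individual genetic effects discussed later in the chapter are smaller still, which is precisely why consortium-scale samples are needed.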

Large-Scale Data Repositories and Study of Intelligence

The various datasets that have been used in the study of intelligence can be categorized by the level of planning with which they were acquired. The first category comprises planned datasets such as the NIH MRI Study of Normal Brain Development (NIHPD), Pediatric Imaging, Neurocognition and Genetics (PING), Philadelphia Neurodevelopmental Cohort (PNC), Healthy Brain Network (HBN), Human Connectome Project (HCP), Lothian Birth Cohort, IMAGEN, Adolescent Brain Cognitive Development (ABCD), and UK Biobank datasets (see Table 4.1 for details). These datasets arose from carefully planned studies with standardized protocols, in most cases across several sites. Although rarer, there are also single-site planned studies, examples being the Philadelphia Neurodevelopmental Cohort (PNC) dataset (Satterthwaite et al., 2016) and the IARPA SHARP Program INSIGHT dataset (Daugherty et al., 2020; Zwilling et al., 2019). Such planned datasets have resulted in major advances in understanding the neural correlates of intelligence. One prominent example is the NIHPD dataset, which was, at the time, the most representative imaging sample of typically developing children and adolescents in the US (ages 5–22 years) and spurred a series of findings related to intelligence. For example, using longitudinal MRI scans of 307 subjects (total number of scans = 629), Shaw et al. (2006) reported that children with higher intelligence demonstrated more dynamic cortical trajectories than children with lower intelligence. In another study, using MRI scans from 216 subjects, Karama et al. (2009) reported positive associations between a general cognitive ability factor (an estimate of g) and cortical thickness in several multimodal association areas, with a follow-up study demonstrating that the cortical thickness correlates of cognitive performance on complex tasks were well captured by g (Karama et al., 2011).
Going beyond associations of intelligence and cortical thickness in localized brain regions, Khundrakpam et al. (2017), using 586 longitudinal MRI scans of children, showed distinct anatomical coupling among widely distributed cortical regions, possibly reflecting more efficient organization in children with high verbal intelligence. In terms of functional connectivity, Sripada et al. (2019) utilized data from the


Table 4.1 Details of large-scale datasets and research consortia with concurrent measures of neuroimaging and intelligence (and/or related) scores and, in some cases, genetic data. Although rare, some large-scale datasets were collected from single sites, while the majority were collected from multiple sites. Note, the list is not exhaustive and is mostly concentrated on developmental datasets.

ABCD. Sample size: ~12,000 (at visit 1). Sites: 21. Brain measures: sMRI, fMRI, dMRI. IQ and related tests/measures: Crystallized composite and fluid composite. Genetic data: Yes. Description: Longitudinal study of 9–10 year olds, to be scanned every year for the next 10 years (Casey et al., 2018).

NIHPD. Sample size: ~550 (~3 visits). Sites: 6. Brain measures: sMRI, DTI. IQ and related tests/measures: WASI, WISC-III. Genetic data: No. Description: Longitudinal study of brain development (ages 5–20 years) (Evans & Brain Development Cooperative Group, 2006).

PING. Sample size: ~1,400. Sites: 10. Brain measures: sMRI, fMRI, dMRI. IQ and related tests/measures: 8 NTCB subtests including Oral Reading Recognition test, Picture Vocabulary test. Genetic data: Yes. Description: Cross-sectional study of brain development (ages 3–20 years) (Jernigan et al., 2016).

PNC. Sample size: ~1,400. Sites: 1. Brain measures: sMRI, fMRI, DTI. IQ and related tests/measures: CNB tests for domains including executive control, complex cognition, social cognition. Genetic data: Yes. Description: Cross-sectional study of brain development (ages 8–21 years) (Satterthwaite et al., 2016).

HBN. Sample size: ~660. Sites: 3. Brain measures: sMRI, fMRI, dMRI, EEG. IQ and related tests/measures: WISC-V, WAIS-IV, WASI. Genetic data: Yes. Description: Creation of a biobank of 10,000 participants (ages 5–21 years) (Alexander et al., 2017).

ABIDE. Sample size: ~2,200. Sites: 24. Brain measures: sMRI, fMRI. IQ and related tests/measures: PIQ, VIQ, FSIQ. Genetic data: No. Description: Agglomerated dataset of individuals with ASD and healthy controls (ages 6–74 years) (Di Martino et al., 2014, 2017).

ENIGMA Consortium. Sample size: ~30,000. Sites: 200. Brain measures: sMRI, DTI. IQ and related tests/measures: PIQ, VIQ, FSIQ. Genetic data: Yes. Description: Consortium for large-scale collaborative analyses of neuroimaging and genetic data across the lifespan (Thompson et al., 2017).

UK Biobank. Sample size: ~500,000 (total), 100,000 (+ imaging). Sites: 22. Brain measures: sMRI, fMRI, dMRI. IQ and related tests/measures: 7 cognitive tests including Verbal Numerical Reasoning (fluid intelligence). Genetic data: Yes. Description: Creation of a biobank of ~500,000 participants (ages 40–69 years) (Sudlow et al., 2015).

Lothian Birth Cohort 1936. Sample size: ~1,091. Sites: 1. Brain measures: sMRI, DTI. IQ and related tests/measures: Moray House Test of general cognitive ability. Genetic data: Yes. Description: Follow-up cohort study of participants in both youth (~11 years) and older age (~70 years) (Deary et al., 2007).

IMAGEN Consortium. Sample size: ~2,000. Sites: 4. Brain measures: sMRI, fMRI. IQ and related tests/measures: PIQ, VIQ. Genetic data: Yes. Description: Longitudinal genetic-neuroimaging study of 14-year-old adolescents (follow-up at 16 years) (Schumann et al., 2010).

HCP. Sample size: ~1,200. Sites: 1. Brain measures: sMRI, fMRI, dMRI. IQ and related tests/measures: NTCB tests including Oral Reading Recognition, Picture Vocabulary. Genetic data: Yes. Description: Cross-sectional study of brain connectivity in young adults (ages 22–35 years) (Van Essen et al., 2013).

Abbreviations: ABCD = Adolescent Brain Cognitive Development; NIHPD = NIH MRI Study of Normal Brain Development; PING = Pediatric Imaging, Neurocognition and Genetics; PNC = Philadelphia Neurodevelopmental Cohort; HBN = Healthy Brain Network; ABIDE = Autism Brain Imaging Data Exchange; ENIGMA = Enhancing NeuroImaging Genetics through Meta-Analysis; HCP = Human Connectome Project; sMRI = structural magnetic resonance imaging; fMRI = functional magnetic resonance imaging; dMRI = diffusion magnetic resonance imaging; DTI = diffusion tensor imaging; EEG = electroencephalography; NTCB = NIH Toolbox Cognitive Battery; CNB = Computerized Neurocognitive Battery; PIQ = Performance IQ; VIQ = Verbal IQ; FSIQ = Full-Scale IQ; WISC = Wechsler Intelligence Scale for Children; WASI = Wechsler Abbreviated Scale of Intelligence; WAIS = Wechsler Adult Intelligence Scale.

HCP and ABCD datasets and identified novel mechanisms of general intelligence involving widespread functional brain networks. In particular, they showed that the separation of the fronto-parietal network from the default-mode network is a major locus of individual variability in general intelligence. Several studies have also been conducted using other datasets, including Burgaleta et al. (2014), Cox et al. (2019), Dubois, Galdi, Paul, and Adolphs (2018), Karama et al. (2014), Kievit et al. (2018), Xiao, Stephen, Wilson, Calhoun, and Wang (2019), and Zhao, Klein, Castellanos, and Milham (2019). Finally, the IARPA INSIGHT dataset represents the largest intervention trial conducted to date investigating the efficacy of a comprehensive, 16-week protocol designed to enhance fluid intelligence (n = 400), examining skill-based cognitive training, high-intensity interval fitness training, non-invasive brain stimulation (HD-tDCS), and mindfulness meditation. Recent findings from the INSIGHT project establish the importance of individual differences in structural brain volume for predicting training response and transfer to measures of fluid intelligence (Daugherty et al., 2020), and further demonstrate the potential of multi-modal interventions (incorporating physical fitness and cognitive training) to enhance fluid intelligence and decision-making (Zwilling et al., 2019).

The next category comprises unplanned, agglomerative datasets such as the Autism Brain Imaging Data Exchange (ABIDE) dataset (Di Martino et al., 2014), the ADHD-200 Consortium (ADHD-200 Consortium, 2012), and the 1000 Functional Connectomes Project (Biswal et al., 2010), which pool several datasets with compatible sample and imaging characteristics (see Table 4.1 for details). These datasets have revealed complex interactions between intelligence measures, brain measures, and clinical states.
For example, using resting-state fMRI data from 964 subjects in the ABIDE dataset, Nielsen et al. (2013) built models to classify autism from controls and showed that verbal IQ was significantly related to the classification score. In another study, Bedford et al. (2020) used structural MRI data from 1,327 subjects (including data from the ABIDE dataset) to demonstrate the influence of full-scale IQ (FSIQ) on neuroanatomical heterogeneity in autism. Interestingly, even without the use of neuroimaging, personal characteristic data (PCD) such as age, sex, handedness, and IQ from these large-scale datasets have facilitated the growth of machine learning models for automated diagnosis of autism (Parikh, Li, & He, 2019) and ADHD (Brown et al., 2012). Of particular interest is the study by Parikh et al. (2019), who showed that, in classification models of autism, full-scale IQ, followed by verbal IQ and performance IQ, had better predictive power than age, sex, or handedness.

The last category comprises meta-analytic efforts such as Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) and BrainMap, in which data stay with individual sites and are not collected at a single site. The usual approach is to pool findings from smaller studies in order to help reach
consensus for inconsistent findings. A prominent example is BrainMap's coordinate-based activation likelihood estimation (ALE) meta-analysis of neuroimaging data (Eickhoff et al., 2009), which allows statistical combination of coordinate-based results across studies. Using this meta-analysis approach on 12 structural and 16 functional neuroimaging studies in humans, Basten, Hilger, and Fiebach (2015) performed an empirical test of the Parieto-Frontal Integration Theory of Intelligence (P-FIT) model (Jung & Haier, 2007) and suggested an updated P-FIT model for the neural bases of intelligence, extending earlier models to include the posterior cingulate cortex and subcortical structures. In another study, Santarnecchi, Emmendorfer, and Pascual-Leone (2017) performed a quantitative meta-analysis of 47 fMRI and PET studies on fluid intelligence in humans and demonstrated a network-centered perspective of problem-solving related to fluid intelligence. Another prominent example is the ENIGMA project, in which several datasets are analyzed (using standardized pipelines) at individual sites and then pooled for a meta-analysis of the shared results from each site (Thompson et al., 2014, 2017). By sharing data from 15,782 subjects, researchers of the ENIGMA project identified three genetic loci linked with intracranial volume (ICV). More interestingly, a variant in one of these loci (the HMGA2 gene) was associated with IQ scores (Stein et al., 2012).
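The combination rule at the heart of coordinate-based ALE can be sketched in a few lines: each study's reported peak coordinates are blurred with Gaussian kernels into a modeled-activation (MA) map, and the per-study maps are combined as the probability of a union. The toy one-dimensional version below uses invented coordinates and an arbitrary kernel scale; it illustrates only the combination rule, not the actual BrainMap implementation with its empirically calibrated kernels and permutation-based thresholding.

```python
import numpy as np

def ale_map(studies, grid, sigma=10.0):
    """Toy 1-D ALE map. Each study contributes a modeled-activation (MA)
    map: the voxelwise maximum over Gaussian kernels centered at its foci,
    scaled to 0.5 so MA values behave like sub-unity probabilities (an
    arbitrary choice for this sketch). MA maps are then combined as the
    probability of a union: ALE = 1 - prod(1 - MA)."""
    union_complement = np.ones_like(grid, dtype=float)
    for foci in studies:
        dist = grid[None, :] - np.asarray(foci)[:, None]
        kernels = 0.5 * np.exp(-0.5 * (dist / sigma) ** 2)
        ma = kernels.max(axis=0)        # modeled activation for this study
        union_complement *= 1.0 - ma
    return 1.0 - union_complement

grid = np.linspace(-50, 50, 101)
# Three hypothetical studies; two report nearby peaks around +11/+12.
studies = [np.array([-10.0, 12.0]), np.array([11.0]), np.array([-40.0])]
ale = ale_map(studies, grid)
print("peak of convergence near", grid[ale.argmax()])
```

The ALE value is highest where independent studies report nearby foci, which is exactly the notion of cross-study convergence that the Basten et al. (2015) analysis exploits in three dimensions.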

Opportunities and Challenges

The most significant benefit of these large-scale data repositories and consortia is the ability to study the imaging genetics of intelligence. Imaging genetics has always been difficult with small samples, but the advent of big datasets (such as the UK Biobank) and big consortia (such as ENIGMA) has enabled researchers to overcome these limitations. One of the early applications came from the ENIGMA project, which showed a link between the HMGA2 gene and an IQ measure (Stein et al., 2012). Another example of the utility of large-scale consortia came from a study by Huguet et al. (2018). Although copy number variants (CNVs) are present in ~15% of individuals with neurodevelopmental disorders, individual association studies have been difficult because any given CNV is rare. Huguet et al. used data from 2,090 adolescents from the IMAGEN study and 1,983 children and parents from the Saguenay Youth Study to investigate the effect sizes of recurrent and non-recurrent CNVs on IQ. They observed that, for rare deletions, IQ decreased with the size of the deletion and the number of genes deleted, such that each deleted gene was associated with a decrease in PIQ of 0.67 ± 0.19 (mean ± standard error) points. Genome-wide association studies (GWAS) of intelligence have also become possible with the advent of large-scale datasets and consortia. In one such study, using data from 78,308 individuals, Sniekers et al. (2017) performed a genome-wide meta-analysis of intelligence, identifying 336 associated
single-nucleotide polymorphisms (SNPs) in 18 genomic loci. By including data from the UK Biobank, a follow-up study performed a GWAS meta-analysis on 269,867 individuals and identified 206 genome-wide significant regions (Savage et al., 2018). More interestingly, the GWAS analysis yielded genome-wide polygenic scores (GPS) of IQ that predicted ~4% of the variance in intelligence in independent samples (Savage et al., 2018). Going further, recent studies have explored the link between the GPS of IQ and brain structure. Using the ABCD dataset (N = 11,875), Loughnan et al. (2019) demonstrated a significant association between total cortical surface area and GPS-IQ, with 0.3% of the variance in GPS-IQ explained by total surface area. With increased sample sizes, future studies will likely reveal the neural correlates of GPS-IQ at the regional and network levels, but will require deep phenotyping to enable informed interpretation. For a detailed discussion of this emerging genetics of intelligence, the reader is referred to the review article by Plomin and von Stumm (2018). There have also been major advances in the development of new analysis techniques and methods because of the availability of these large-scale datasets. For example, the ABCD consortium recently organized the ABCD Neurocognitive Prediction Challenge 2019, in which researchers were invited to develop methods for predicting fluid intelligence from T1-weighted MRI data. Out of a total of 8,500 subjects aged 9–10 years, data from 4,100 children were provided for training, and the accuracy of the models was then tested by comparing predicted and observed fluid intelligence scores for the remaining 4,400 children. The winning team developed several regression and deep learning methods to predict fluid intelligence and found kernel ridge regression to be the best model, with a mean-squared error of 69.72 on the validation set and 92.12 on the test set (Mihalik et al., 2019).
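As a rough illustration of the winning approach, the sketch below fits a kernel ridge regression to synthetic data standing in for structural MRI features and fluid-intelligence scores. The feature counts, noise level, and hyperparameters are invented for the example and are not those of Mihalik et al. (2019); the point is only the shape of the pipeline (train/test split, kernelized ridge fit, mean-squared-error evaluation).

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for structural features (e.g., regional volumes)
# and fluid-intelligence scores; the true signal is deliberately weak,
# as in real brain-behavior data.
n_subjects, n_features = 800, 50
X = rng.standard_normal((n_subjects, n_features))
w = rng.standard_normal(n_features)
y = 0.1 * (X @ w) + rng.standard_normal(n_subjects)  # low signal-to-noise

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Kernel ridge regression: ridge-penalized regression in the feature
# space induced by a kernel (here an RBF kernel).
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=1.0 / n_features)
model.fit(X_train, y_train)
mse = mean_squared_error(y_test, model.predict(X_test))
print(f"held-out MSE: {mse:.2f}")
```

With weak signal, the held-out MSE sits close to the raw variance of the outcome, which mirrors why even the winning challenge entries explained only a modest share of fluid-intelligence variance.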
In parallel, these large neuroimaging datasets, along with personal characteristic data (PCD) including IQ, have led to the development of methods for enhanced diagnosis of brain disorders such as autism and ADHD (Ghiassian, Greiner, Jin, & Brown, 2016; Parikh et al., 2019). Interestingly, studies have also reported the critical importance of personal characteristics such as IQ measures (in addition to age, sex, and handedness) for enhanced diagnosis of brain disorders. An elegant example came from the ADHD-200 Global Competition (http://fcon_1000.projects.nitrc.org/indi/adhd200/results.html), in which teams were invited to develop diagnostic classification tools for ADHD based on neuroimaging and PCD data from the ADHD-200 consortium (N = 973; subjects with ADHD and healthy controls). The winning team (Brown et al., 2012) showed that using subjects' PCD (including age, gender, handedness, site, performance IQ, verbal IQ, and full-scale IQ) without neuroimaging data as input features resulted in the diagnostic classifier with the highest accuracy. The study illustrated the critical importance of accounting for variability in personal characteristics (including IQ) in imaging-based diagnostic research.
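The PCD-only result is easy to appreciate with a small sketch: a plain logistic regression on synthetic personal characteristics already classifies above chance without any imaging features. The group difference in IQ built into the data below is invented and exaggerated for illustration; this is a demonstration of the principle, not a re-implementation of Brown et al. (2012).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 600

# Synthetic personal-characteristic data (PCD): age, sex, handedness, FSIQ.
# Cases score ~10 IQ points lower on average; this gap is a hypothetical
# choice for the example, larger than typical reported differences.
diagnosis = rng.integers(0, 2, n)              # 0 = control, 1 = case
age = rng.uniform(7, 21, n)
sex = rng.integers(0, 2, n)
handedness = rng.integers(0, 2, n)
fsiq = rng.normal(100, 15, n) - 10 * diagnosis

X = np.column_stack([age, sex, handedness, fsiq])
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, diagnosis, cv=5).mean()
print(f"cross-validated accuracy from PCD alone: {acc:.2f}")
```

Because only the IQ column carries signal here, the classifier's above-chance accuracy comes entirely from personal characteristics, echoing the competition finding that PCD can rival imaging features.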


These large-scale datasets and consortia come with several challenges. One prominent challenge is the increased variability in MRI data that results from collecting data at multiple sites. The variability may be due to scanner and/or MRI protocol specifications, and such concerns have been raised in studies utilizing agglomerated datasets. For example, studies using the ABIDE dataset have shown that fMRI features predictive of autism have limited generalizability across sites (King et al., 2019; Nielsen et al., 2013). Similar inferences may be drawn for studies of intelligence with large-scale, multi-site datasets. Several research groups are currently working to address this issue. For instance, the ABCD consortium adapted the empirical Bayes approach "ComBat" (Fortin et al., 2018; Johnson, Li, & Rabinovic, 2007) in order to harmonize scanner-related differences (Nielson et al., 2018). Efforts are also being made to incorporate data-driven methods for quantifying dataset bias in agglomerated datasets (Wachinger, Becker, & Rieckmann, 2018). Such studies could also yield guidelines on how to merge data from different sources while limiting the introduction of unwanted variation; note that these variations may also arise from differing population structures. The datasets and consortia cited here are not exhaustive; the reader is referred to recent articles on large-scale datasets and questions of data sharing (Book, Stevens, Assaf, Glahn, & Pearlson, 2016; Craddock et al., 2013; Mennes, Biswal, Castellanos, & Milham, 2013; Poldrack & Gorgolewski, 2014; Turner, 2014).
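The idea behind ComBat-style harmonization can be illustrated with a deliberately simplified location-scale adjustment: estimate each site's additive and multiplicative deviation per feature and standardize it away. Real ComBat additionally shrinks the per-site estimates with empirical Bayes and preserves biological covariates of interest; this sketch, on invented cortical-thickness-like data, does neither.

```python
import numpy as np

def harmonize_location_scale(data, sites):
    """Deliberately simplified, ComBat-like harmonization: per feature,
    remove additive (mean) and multiplicative (variance) site effects by
    mapping each site's distribution onto the pooled mean and spread.
    Real ComBat also applies empirical Bayes shrinkage to the site
    estimates and protects covariates of interest; neither is done here."""
    data = np.asarray(data, dtype=float)
    out = np.empty_like(data)
    grand_mean = data.mean(axis=0)
    grand_std = data.std(axis=0)
    for site in np.unique(sites):
        mask = sites == site
        site_mean = data[mask].mean(axis=0)
        site_std = data[mask].std(axis=0)
        out[mask] = (data[mask] - site_mean) / site_std * grand_std + grand_mean
    return out

rng = np.random.default_rng(1)
sites = np.repeat([0, 1], 100)
thickness = rng.normal(2.5, 0.2, (200, 3))  # 3 synthetic features
thickness[sites == 1] += 0.3                # additive scanner offset at site 1
harmonized = harmonize_location_scale(thickness, sites)
gap = np.abs(harmonized[sites == 0].mean() - harmonized[sites == 1].mean())
print(f"between-site mean gap after harmonization: {gap:.2e}")
```

After adjustment, the artificial 0.3-unit scanner offset disappears; in practice the empirical Bayes machinery of ComBat matters precisely because per-site estimates from small sites are noisy.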

Conclusions

The advent of large-scale open data repositories and research consortia has led to rapid advances in the study of intelligence. The fact that large-scale datasets from research consortia are now available not just to consortium members but to any investigator has completely changed the research landscape. The most prominent advance has been in the genetics of intelligence, particularly genome-wide studies revealing new genetic loci associated with intelligence measures. This, in turn, has led to genome-wide polygenic scores of intelligence with potential implications for society (Plomin & von Stumm, 2018). Additionally, the availability of large-scale datasets has spurred the development of innovative methods, such as combining personal characteristics (e.g., IQ) with neuroimaging data for enhanced diagnosis of brain disorders (Ghiassian et al., 2016; Parikh et al., 2019) and examining sources of inter-individual differences (Daugherty et al., 2020; Hammer et al., 2019; Talukdar, Roman, Operskalski, Zwilling, & Barbey, 2018). While high-quality, large, and deeply phenotyped open data are still scarce, one challenge will be to identify which research questions can best be examined with the available large-scale datasets. This will likely prompt the next generation of researchers to develop more innovative ideas for working with big data in the study of intelligence.


References

Alexander, L. M., Escalera, J., Ai, L., Andreotti, C., Febre, K., Mangone, A., . . . Milham, M. P. (2017). Data descriptor: An open resource for transdiagnostic research in pediatric mental health and learning disorders. Scientific Data, 4, 1–26.
Andreasen, N. C., Flaum, M., Swayze, V., O'Leary, D. S., Alliger, R., Cohen, G., . . . Yuh, W. T. (1993). Intelligence and brain structure in normal individuals. American Journal of Psychiatry, 150(1), 130–134.
Barbey, A. K., Colom, R., Paul, E. J., & Grafman, J. (2014). Architecture of fluid intelligence and working memory revealed by lesion mapping. Brain Structure and Function, 219(2), 485–494. doi: 10.1007/s00429-013-0512-z.
Barbey, A. K., Colom, R., Solomon, J., Krueger, F., Forbes, C., & Grafman, J. (2012). An integrative architecture for general intelligence and executive function revealed by lesion mapping. Brain, 135(Pt 4), 1154–1164. doi: 10.1093/brain/aws021.
Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27.
Bedford, S. A., Park, M. T. M., Devenyi, G. A., Tullo, S., Germann, J., Patel, R., . . . Chakravarty, M. M. (2020). Large-scale analyses of the relationship between sex, age and intelligence quotient heterogeneity and cortical morphometry in autism spectrum disorder. Molecular Psychiatry, 25(3), 614–628.
Biswal, B. B., Mennes, M., Zuo, X. N., Gohel, S., Kelly, C., Smith, S. M., . . . Milham, M. P. (2010). Toward discovery science of human brain function. Proceedings of the National Academy of Sciences USA, 107(10), 4734–4739.
Book, G. A., Stevens, M. C., Assaf, M., Glahn, D. C., & Pearlson, G. D. (2016). Neuroimaging data sharing on the neuroinformatics database platform. Neuroimage, 124(Pt. B), 1089–1092.
Brown, M. R. G., Sidhu, G. S., Greiner, R., Asgarian, N., Bastani, M., Silverstone, P. H., . . . Dursun, S. M. (2012). ADHD-200 global competition: Diagnosing ADHD using personal characteristic data can outperform resting state fMRI measurements. Frontiers in Systems Neuroscience, 6, 1–22.
Burgaleta, M., Johnson, W., Waber, D. P., Colom, R., & Karama, S. (2014). Cognitive ability changes and dynamics of cortical thickness development in healthy children and adolescents. Neuroimage, 84, 810–819.
Casey, B. J., Cannonier, T., Conley, M. I., Cohen, A. O., Barch, D. M., Heitzeg, M. M., . . . Dale, A. M. (2018). The Adolescent Brain Cognitive Development (ABCD) study: Imaging acquisition across 21 sites. Developmental Cognitive Neuroscience, 32, 43–54.
Colom, R., Haier, R. J., Head, K., Álvarez-Linera, J., Quiroga, M. Á., Shih, P. C., & Jung, R. E. (2009). Gray matter correlates of fluid, crystallized, and spatial intelligence: Testing the P-FIT model. Intelligence, 37(2), 124–135.
Colom, R., Karama, S., Jung, R. E., & Haier, R. J. (2010). Human intelligence and brain networks. Dialogues in Clinical Neuroscience, 12(4), 489–501.
Cox, S. R., Ritchie, S. J., Fawns-Ritchie, C., Tucker-Drob, E. M., & Deary, I. J. (2019). Structural brain imaging correlates of general intelligence in UK Biobank. Intelligence, 76, 101376.


Craddock, C., Benhajali, Y., Chu, C., Chouinard, F., Evans, A., Jakab, A., . . . Bellec, P. (2013). The Neuro Bureau Preprocessing Initiative: Open sharing of preprocessed neuroimaging data and derivatives. Frontiers in Neuroinformatics, 7. doi: 10.3389/conf.fninf.2013.09.00041.
Daugherty, A., Sutton, B., Hillman, C. H., Kramer, A., Cohen, N., & Barbey, A. K. (2020). Individual differences in the neurobiology of fluid intelligence predict responsiveness to training: Evidence from a comprehensive cognitive, mindfulness meditation, and aerobic exercise intervention. Trends in Neuroscience and Education, 18, 100123.
Deary, I. J., Gow, A. J., Taylor, M. D., Corley, J., Brett, C., Wilson, V., . . . Starr, J. M. (2007). The Lothian Birth Cohort 1936: A study to examine influences on cognitive ageing from age 11 to age 70 and beyond. BMC Geriatrics, 7, 28.
Di Martino, A., O'Connor, D., Chen, B., Alaerts, K., Anderson, J. S., Assaf, M., . . . Milham, M. P. (2017). Enhancing studies of the connectome in autism using the autism brain imaging data exchange II. Scientific Data, 4, 170010.
Di Martino, A., Yan, C. G., Li, Q., Denio, E., Castellanos, F. X., Alaerts, K., . . . Milham, M. P. (2014). The autism brain imaging data exchange: Towards a large-scale evaluation of the intrinsic brain architecture in autism. Molecular Psychiatry, 19(6), 659–667.
Dubois, J., Galdi, P., Paul, L. K., & Adolphs, R. (2018). A distributed brain network predicts general intelligence from resting-state human neuroimaging data. Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1756), 20170284.
Eickhoff, S. B., Laird, A. R., Grefkes, C., Wang, L. E., Zilles, K., & Fox, P. T. (2009). Coordinate-based activation likelihood estimation meta-analysis of neuroimaging data: A random-effects approach based on empirical estimates of spatial uncertainty. Human Brain Mapping, 30(9), 2907–2926.
Evans, A. C., & Brain Development Cooperative Group. (2006). The NIH MRI study of normal brain development. Neuroimage, 30(1), 184–202.
Fortin, J.-P., Cullen, N., Sheline, Y. I., Taylor, W. D., Aselcioglu, I., Cook, P. A., . . . Shinohara, R. T. (2018). Harmonization of cortical thickness measurements across scanners and sites. Neuroimage, 167, 104–120.
Ghiassian, S., Greiner, R., Jin, P., & Brown, M. R. G. (2016). Using functional or structural magnetic resonance images and personal characteristic data to identify ADHD and autism. PLoS One, 11(12), e0166934.
Gray, J. R., & Thompson, P. M. (2004). Neurobiology of intelligence: Science and ethics. Nature Reviews Neuroscience, 5(6), 471–482.
Haier, R. J. (2017). The neuroscience of intelligence. Cambridge University Press.
Haier, R. J., Siegel, B. V., Nuechterlein, K. H., Hazlett, E., Wu, J. C., Paek, J., . . . Buchsbaum, M. S. (1988). Cortical glucose metabolic rate correlates of abstract reasoning and attention studied with positron emission tomography. Intelligence, 12(2), 199–217.
ADHD-200 Consortium. (2012). The ADHD-200 Consortium: A model to advance the translational potential of neuroimaging in clinical neuroscience. Frontiers in Systems Neuroscience, 6, 1–5.


b. khundrakpam, j.-b. poline, and a. c. evans

Hammer, R., Paul, E. J., Hillman, C. H., Kramer, A. F., Cohen, N. J., & Barbey, A. K. (2019). Individual differences in analogical reasoning revealed by multivariate task-based functional brain imaging. Neuroimage, 184, 993–1004. doi: 10.1016/j.neuroimage.2018.09.011.
Huguet, G., Schramm, C., Douard, E., Jiang, L., Labbe, A., Tihy, F., . . . Jacquemont, S. (2018). Measuring and estimating the effect sizes of copy number variants on general intelligence in community-based samples. JAMA Psychiatry, 75(5), 447–457.
Jernigan, T. L., Brown, T. T., Hagler, D. J., Akshoomoff, N., Bartsch, H., Newman, E., . . . Pediatric Imaging, Neurocognition and Genetics Study. (2016). The Pediatric Imaging, Neurocognition, and Genetics (PING) data repository. Neuroimage, 124(Pt. B), 1149–1154.
Johnson, W. E., Li, C., & Rabinovic, A. (2007). Adjusting batch effects in microarray expression data using empirical Bayes methods. Biostatistics, 8(1), 118–127.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154.
Karama, S., Ad-Dab’bagh, Y., Haier, R. J., Deary, I. J., Lyttelton, O. C., Lepage, C., & Evans, A. C. (2009). Positive association between cognitive ability and cortical thickness in a representative US sample of healthy 6 to 18 year-olds. Intelligence, 37(2), 145–155.
Karama, S., Bastin, M. E., Murray, C., Royle, N. A., Penke, L., Muñoz Maniega, S., . . . Deary, I. J. (2014). Childhood cognitive ability accounts for associations between cognitive ability and brain cortical thickness in old age. Molecular Psychiatry, 19(3), 555–559.
Karama, S., Colom, R., Johnson, W., Deary, I. J., Haier, R., Waber, D. P., . . . Evans, A. C. (2011). Cortical thickness correlates of specific cognitive performance accounted for by the general factor of intelligence in healthy children aged 6 to 18. Neuroimage, 55(4), 1443–1453.
Khundrakpam, B. S., Lewis, J. D., Reid, A., Karama, S., Zhao, L., Chouinard-Decorte, F., . . . Brain Development Cooperative Group. (2017). Imaging structural covariance in the development of intelligence. Neuroimage, 144(Pt. A), 227–240.
Kievit, R. A., Fuhrmann, D., Borgeest, G. S., Simpson-Kent, I. L., & Henson, R. N. A. (2018). The neural determinants of age-related changes in fluid intelligence: A pre-registered, longitudinal analysis in UK Biobank. Wellcome Open Research, 3, 38.
King, J. B., Prigge, M. B. D., King, C. K., Morgan, J., Weathersby, F., Fox, J. C., . . . Anderson, J. S. (2019). Generalizability and reproducibility of functional connectivity in autism. Molecular Autism, 10, 27.
Li, Y., Liu, Y., Li, J., Qin, W., Li, K., Yu, C., & Jiang, T. (2009). Brain anatomical network and intelligence. PLoS Computational Biology, 5(5), e1000395.
Loughnan, R. J., Palmer, C. E., Thompson, W. K., Dale, A. M., Jernigan, T. L., & Fan, C. C. (2019). Polygenic score of intelligence is more predictive of crystallized than fluid performance among children. bioRxiv, 637512. doi: 10.1101/637512.

Research Consortia and Large-Scale Data Repositories

Luders, E., Narr, K. L., Thompson, P. M., & Toga, A. W. (2009). Neuroanatomical correlates of intelligence. Intelligence, 37(2), 156–163.
McDaniel, M. A. (2005). Big-brained people are smarter: A meta-analysis of the relationship between in vivo brain volume and intelligence. Intelligence, 33(4), 337–346.
Mennes, M., Biswal, B. B., Castellanos, F. X., & Milham, M. P. (2013). Making data sharing work: The FCP/INDI experience. Neuroimage, 82, 683–691.
Mihalik, A., Brudfors, M., Robu, M., Ferreira, F. S., Lin, H., Rau, A., . . . Oxtoby, N. P. (2019). ABCD Neurocognitive Prediction Challenge 2019: Predicting individual fluid intelligence scores from structural MRI using probabilistic segmentation and kernel ridge regression. In K. Pohl, W. Thompson, E. Adeli, & M. Linguraru (eds.), Adolescent brain cognitive development neurocognitive prediction. ABCD-NP 2019. Lecture Notes in Computer Science, vol. 11791. Cham: Springer. doi: 10.1007/978-3-030-31901-4_16.
Narr, K. L., Woods, R. P., Thompson, P. M., Szeszko, P., Robinson, D., Dimtcheva, T., . . . Bilder, R. M. (2007). Relationships between IQ and regional cortical gray matter thickness in healthy adults. Cerebral Cortex, 17(9), 2163–2171.
Nielson, D. M., Pereira, F., Zheng, C. Y., Migineishvili, N., Lee, J. A., Thomas, A. G., & Bandettini, P. A. (2018). Detecting and harmonizing scanner differences in the ABCD study – Annual release 1.0. bioRxiv, 309260. doi: 10.1101/309260.
Nielsen, J. A., Zielinski, B. A., Fletcher, P. T., Alexander, A. L., Lange, N., Bigler, E. D., . . . Anderson, J. S. (2013). Multisite functional connectivity MRI classification of autism: ABIDE results. Frontiers in Human Neuroscience, 7, 599.
Parikh, M. N., Li, H., & He, L. (2019). Enhancing diagnosis of autism with optimized machine learning models and personal characteristic data. Frontiers in Computational Neuroscience, 13, 1–5.
Plomin, R., & Von Stumm, S. (2018). The new genetics of intelligence. Nature Reviews Genetics, 19(3), 148–159.
Poldrack, R. A., & Gorgolewski, K. J. (2014). Making big data open: Data sharing in neuroimaging. Nature Neuroscience, 17(11), 1510–1517.
Santarnecchi, E., Emmendorfer, A., & Pascual-Leone, A. (2017). Dissecting the parieto-frontal correlates of fluid intelligence: A comprehensive ALE meta-analysis study. Intelligence, 63, 9–28.
Satterthwaite, T. D., Connolly, J. J., Ruparel, K., Calkins, M. E., Jackson, C., Elliott, M. A., . . . Gur, R. E. (2016). The Philadelphia Neurodevelopmental Cohort: A publicly available resource for the study of normal and abnormal brain development in youth. Neuroimage, 124(Pt. B), 1115–1119.
Savage, J. E., Jansen, P. R., Stringer, S., Watanabe, K., Bryois, J., de Leeuw, C. A., . . . Posthuma, D. (2018). Genome-wide association meta-analysis in 269,867 individuals identifies new genetic and functional links to intelligence. Nature Genetics, 50(7), 912–919.
Schumann, G., Loth, E., Banaschewski, T., Barbot, A., Barker, G., Büchel, C., . . . Struve, M. (2010). The IMAGEN study: Reinforcement-related behaviour in normal brain function and psychopathology. Molecular Psychiatry, 15(12), 1128–1139.


Shaw, P., Greenstein, D., Lerch, J., Clasen, L., Lenroot, R., Gogtay, N., . . . Giedd, J. (2006). Intellectual ability and cortical development in children and adolescents. Nature, 440(7084), 676–679.
Sniekers, S., Stringer, S., Watanabe, K., Jansen, P. R., Coleman, J. R. I., Krapohl, E., . . . Posthuma, D. (2017). Genome-wide association meta-analysis of 78,308 individuals identifies new loci and genes influencing human intelligence. Nature Genetics, 49(7), 1107–1112.
Sripada, C., Angstadt, M., Rutherford, S., & Taxali, A. (2019). Brain network mechanisms of general intelligence. bioRxiv, 657205. doi: 10.1101/657205.
Stein, J. L., Medland, S. E., Vasquez, A. A., Hibar, D. P., Senstad, R. E., Winkler, A. M., . . . Thompson, P. M. (2012). Identification of common variants associated with human hippocampal and intracranial volumes. Nature Genetics, 44(5), 552–561.
Sudlow, C., Gallacher, J., Allen, N., Beral, V., Burton, P., Danesh, J., . . . Collins, R. (2015). UK Biobank: An open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Medicine, 12(3), e1001779.
Talukdar, T., Roman, F. J., Operskalski, J. T., Zwilling, C. E., & Barbey, A. K. (2018). Individual differences in decision making competence revealed by multivariate fMRI. Human Brain Mapping, 39(6), 2664–2672. doi: 10.1002/hbm.24032.
Thompson, P. M., Dennis, E. L., Gutman, B. A., Hibar, D. P., Jahanshad, N., Kelly, S., . . . Ye, J. (2017). ENIGMA and the individual: Predicting factors that affect the brain in 35 countries worldwide. Neuroimage, 145(Pt. B), 389–408.
Thompson, P. M., Stein, J. L., Medland, S. E., Hibar, D. P., Vasquez, A. A., Renteria, M. E., . . . Drevets, W. (2014). The ENIGMA Consortium: Large-scale collaborative analyses of neuroimaging and genetic data. Brain Imaging Behavior, 8(2), 153–182.
Turner, B. O., Paul, E. J., Miller, M. B., & Barbey, A. K. (2018). Small sample sizes reduce the replicability of task-based fMRI studies. Communications Biology, 1, 62. doi: 10.1038/s42003-018-0073-z.
Turner, J. A. (2014). The rise of large-scale imaging studies in psychiatry. Gigascience, 3, 1–8.
Van Essen, D. C., Smith, S. M., Barch, D. M., Behrens, T. E. J., Yacoub, E., Ugurbil, K., & WU-Minn HCP Consortium. (2013). The WU-Minn human connectome project: An overview. Neuroimage, 80, 62–79.
Wachinger, C., Becker, B. G., & Rieckmann, A. (2018). Detect, quantify, and incorporate dataset bias: A neuroimaging analysis on 12,207 individuals. arXiv:1804.10764.
Xiao, L., Stephen, J. M., Wilson, T. W., Calhoun, V. D., & Wang, Y.-P. (2019). A manifold regularized multi-task learning model for IQ prediction from two fMRI paradigms. IEEE Transactions on Biomedical Engineering, 67(3), 796–806.
Zhao, Y., Klein, A., Castellanos, F. X., & Milham, M. P. (2019). Brain age prediction: Cortical and subcortical shape covariation in the developing human brain. Neuroimage, 202, 116149.
Zwilling, C. E., Daugherty, A. M., Hillman, C. H., Kramer, A. F., Cohen, N. J., & Barbey, A. K. (2019). Enhanced decision-making through multimodal training. NPJ Science of Learning, 4, 11. doi: 10.1038/s41539-019-0049-x.

PART II

Theories, Models, and Hypotheses

5 Evaluating the Weight of the Evidence: Cognitive Neuroscience Theories of Intelligence

Matthew J. Euler and Ty L. McKinney

Introduction

The goal of this chapter is to provide an overview and critique of the major theories in the cognitive neuroscience of intelligence. In taking a broad view of this literature, two related themes emerge. First, as might be expected, theoretical developments have generally followed improvements in the methods available to acquire and analyze neural data. In turn, as a result of these developments, along with those in the psychometric and experimental literatures, cognitive neuroscience theories of intelligence have followed a general trajectory that runs from relatively global statements early on to increasingly precise models and claims. As such, following Haier (2016), it is perhaps most instructive to divide the development of these models into early and later phases. The first group of theories consists of a small number of prominent, established models for which there is a large base of support and/or which are connected to large empirical literatures. The earliest of these grew out of electrophysiological (EEG) event-related potential (ERP) studies, and, in line with the capacities of those methods, sought to link intelligence to seemingly universal properties of the brain, like neural speed or variability (Ertl & Schafer, 1969; Hendrickson & Hendrickson, 1980). While various difficulties all but doomed the latter account (Euler, Weisend, Jung, Thoma, & Yeo, 2015; Mackintosh, 2011), neural speed has quietly accrued support and is currently undergoing a resurgence (e.g., Schubert, Hagemann, & Frischkorn, 2017). Nevertheless, and again consistent with the technological theme, interest in these accounts had until recently largely given way to the Neural Efficiency Hypothesis (NEH; Haier et al., 1988). NEH emerged from the first functional neuroimaging studies of intelligence and, following other advances, has undergone its own shift in emphasis, from a focus on neural activation to one on connectivity (Neubauer & Fink, 2009a).
Finally, as structural and functional MRI (fMRI) eventually became widely available, the literature saw the development of the more precise, anatomically-focused models of Parieto-Frontal Integration Theory (P-FIT; Jung & Haier, 2007) and the Multiple Demand system (MD; Duncan, 2010). These latter theories, perhaps along with NEH, still dominate the current literature. The second phase of theorizing essentially incorporates the advances brought about in the first phase, and seeks to revise, deepen, and integrate the previous accounts. These models have largely been developed in the last several years, and most prominently include Process Overlap Theory (POT; Kovacs & Conway, 2016), Network Neuroscience Theory (NNT; Barbey, 2018), Hierarchical Predictive Processing (Euler, 2018), and the Watershed Model of Fluid Intelligence (Kievit et al., 2016). Following from that basic chronology, this chapter aims to provide an overview of the status and issues facing the current major theories in the neuroscience of intelligence, leading up to their culmination and extension in the most recent models. The chapter concludes with a summary of current challenges and proposed solutions for the field as a whole, as well as a preview of how resolving those issues might ultimately inform broader applications.

Established Cognitive Neuroscience Theories of Intelligence

Intelligence and Neural Speed

Although the neural speed view of intelligence is somewhat less prominent among neuroscientists, it is nevertheless important to consider, given its firm basis in one of the most central and best-replicated effects in the field – the moderate inverse relation between overall intelligence and speed of reaction time (Deary, Der, & Ford, 2001; Sheppard & Vernon, 2008). Moreover, while the relationship between ERP amplitudes and intelligence remains somewhat ambiguous, and typically violates neural efficiency, many studies of ERP latencies have shown the expected inverse relationship. As such, the key challenges concerning the theory of neural speed have less to do with establishing the basic effect than with understanding which factors affect its relationship with intelligence and why. A recent study by Schubert et al. (2017) made several important observations in this respect. First, the authors noted that whereas intelligence is conceived of as a stable trait, ERP latencies likely reflect variance due to both situational and stable factors. As such, they combined multiple recording sessions with a psychometric approach, which enabled them to separate out situational variance and to operationalize neural speed as a stable latent trait. Second, they evaluated two competing models – one that contained a single latent speed variable vs. one that distinguished between early (P1, N1) and later ERP components (P2, N2, and P3). Results indicated not only that the second model fit the data better, but also that the speed of later components in particular accounted for nearly 90% of the variance in general intelligence (g). In contrast, the speed of early ERP components only showed a small, positive relationship. Interestingly, while that study highlighted the need to distinguish between early vs. late information processing phases, other studies suggest that neural speed effects also depend on task conditions. For example, recent studies found that the correlation between intelligence and the latency of the P3 ERP depends crucially on task demands, such that it may scale with increasing complexity (Kapanci, Merks, Rammsayer, & Troche, 2019; Troche, Merks, Houlihan, & Rammsayer, 2017). The same case can be made for ERP amplitudes (Euler, McKinney, Schryver, & Okabe, 2017; McKinney & Euler, 2019). While these findings await replication, as a group they exemplify the movement in the field from broad toward narrower claims. That is, rather than relating intelligence to overall neural speed, the findings of Schubert et al. (2017) preferentially implicate the speed of higher-order processing. Likewise, the argument for task-dependence seeks to shift the focus away from global claims about activity–ability relationships, towards the conditions that elicit those effects, and hence onto more discrete neural circuits.
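The logic of that psychometric approach can be illustrated with a toy simulation (all numbers invented; this is not Schubert et al.'s actual model, which used latent variables rather than the simple averaging sketched here): when each recording session mixes a stable speed trait with situational noise, pooling sessions recovers the trait and strengthens its observed correlation with ability.

```python
import random
import statistics

def pearson(x, y):
    """Plain Pearson correlation for two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(1)
n_people, n_sessions = 500, 4

# Hypothetical data: each person has a stable "neural speed" trait;
# each ERP session adds situational noise; ability tracks the trait.
trait = [random.gauss(0, 1) for _ in range(n_people)]
ability = [0.6 * t + random.gauss(0, 0.8) for t in trait]
sessions = [[t + random.gauss(0, 1.5) for t in trait]
            for _ in range(n_sessions)]

# A single session vs. the across-session mean (a crude stand-in for a
# latent trait score): pooling strips situational variance, so the
# observed speed-ability correlation rises toward the trait-level value.
r_single = pearson(sessions[0], ability)
r_pooled = pearson([statistics.mean(v) for v in zip(*sessions)], ability)
print(f"single-session r = {r_single:.2f}, pooled r = {r_pooled:.2f}")
```

The same reasoning explains why single-session ERP latencies can understate the true trait-level association.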

The Neural Efficiency Hypothesis

Like neural speed, the neural efficiency hypothesis is undergoing a shift towards a more precise formulation. Namely, how should “efficiency” be operationalized in a way that allows for its systematic evaluation? A key challenge here concerns the sheer variety of methods used to assess NEH, because the various methodologies measure quite distinct physiological processes. For example, whereas the initial positron emission tomography (PET) studies showed that higher-ability people had lower overall glucose metabolism (Haier et al., 1988), fMRI research suggests a complex picture with various regional effects. Currently, the most definitive single statement on the activation-based account of NEH – the notion that neural activity inversely relates to intelligence – remains Neubauer and Fink’s (2009b) review, where they found that while most of the available literature supported NEH, the relationship seemed primarily to hold in frontal regions, at moderate task difficulties, with task type and sex also moderating the effect. Thus, like neural speed, recent studies on the NEH tend to make more nuanced claims about patterns of activation. For example, activation within the task-positive vs. task-negative networks shows opposite relationships with cognitive ability (Basten, Stelzel, & Fiebach, 2013), and efficiency effects may only emerge when higher and lower ability participants are matched for performance, but not subjective task difficulty (fMRI: Dunst et al., 2014; alpha ERD: Nussbaumer, Grabner, & Stern, 2015). Finally, as noted in the previous section, ERP amplitudes typically correlate positively with intelligence, apparently contradicting activation-based NEH (Euler, 2018). The second major branch of neural efficiency research concerns whether higher-ability individuals are characterized by more efficient patterns of brain connectivity. Many of these studies use graph-theoretical approaches that can formally quantify aspects of network efficiency, such as the average distance between connections,1 and the importance of particular nodes. Here again, recent research has revealed that the relationship between intelligence and brain “efficiency” is apt to be nuanced. For example, while both structural and functional imaging studies initially pointed to a relationship between intelligence and global efficiency metrics (Li et al., 2009; van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009), other studies have cast doubt on this picture (Kruschwitz, Waller, Daedelow, Walter, & Veer, 2018), favoring a role for more specific networks (Hilger, Ekman, Fiebach, & Basten, 2017; Pineda-Pardo, Martínez, Román, & Colom, 2016; Santarnecchi, Emmendorfer, Tadayon, et al., 2017), alternative framings of efficiency (Schultz & Cole, 2016), and moderating factors (Ryman et al., 2016). Indeed, efficiency might even manifest in minute, cellular features, with recent evidence linking intelligence to sparser dendritic arbors, which may promote more efficient signaling in particular networks (Genç et al., 2018). In summary, while neural efficiency remains an appealing concept in intelligence research, both the activation- and connectivity-based formulations require further specification in terms of which specific neural properties constitute “efficiency” (Poldrack, 2015), and precisely how these should relate to intelligence in an a priori way.
Given these various nuances, it appears that while efficiency does characterize many brain–ability relationships, it does not rise to the level of a general functional principle that uniformly applies across networks and situations.
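One common graph-theoretic operationalization of network efficiency is global efficiency, the mean inverse shortest-path length across all node pairs. A minimal standard-library sketch on toy graphs (not brain data) shows how a single long-range connection raises the measure:

```python
from collections import deque

def global_efficiency(adj):
    """Mean inverse shortest-path length over all node pairs:
    higher values indicate a more integrated network."""
    nodes = list(adj)
    total, pairs = 0.0, 0
    for src in nodes:
        # Breadth-first search gives shortest path lengths from src.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for dst in nodes:
            if dst != src:
                pairs += 1
                if dst in dist:        # unreachable pairs contribute zero
                    total += 1.0 / dist[dst]
    return total / pairs

# Toy networks: a 5-node chain vs. the same chain closed by one
# long-range "shortcut" edge (0-4), which raises integration.
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
shortcut = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(global_efficiency(chain), global_efficiency(shortcut))
```

On this example the single extra edge lifts global efficiency from about 0.64 to 0.75, which illustrates why such metrics are sensitive to a few long-range connections rather than to overall wiring volume.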

Fronto-Parietal Models

Parieto-frontal integration theory (P-FIT) is arguably the most prominent and well-established cognitive neuroscience theory of intelligence. In short, P-FIT holds that a core network of largely lateral frontal and parietal areas (along with the anterior cingulate cortex, temporal and occipital association areas, and their connecting white matter tracts; Barbey et al., 2012; Jung & Haier, 2007) forms the basic substrate for individual differences in intelligence. Each set of regions is thought to play a particular role in a four-stage model of complex information processing (beginning with perceptual apprehension, through recollection, reasoning, etc., and ultimately response execution;

Jung & Haier, 2007), that may not be strictly sequential, and could take different forms across individuals (Haier, 2016, p. 94). Crucially, although P-FIT highlights the role of these particular regions in intelligence, it has always emphasized their collective status as an integrated network – a theme which has been heightened in recent years. Whereas P-FIT was proposed on the basis of a single large review of the available neuroimaging literature, the Multiple Demand (MD) theory evolved over time, from an experimental model that initially emphasized the role of the prefrontal cortex in cognitive differences (Owen & Duncan, 2000), to its present formulation as an explicitly fronto-parietal account (Duncan, 2010). The two accounts can also be contrasted in that while P-FIT defines a much more extensive network (involving sensory association areas and a larger area of the prefrontal cortex), the MD system is more circumscribed, being centered around the inferior frontal sulcus, intraparietal sulcus, and the anterior cingulate and pre-supplementary motor area. Further, while P-FIT has typically been discussed in anatomical terms, the consistent theme throughout the evolution of the MD account has been its longstanding functional emphasis on trying to identify brain regions that are commonly activated across diverse types of tasks. Thus, whereas P-FIT facilitated a shift in the literature away from a focus on single regions to a more distributed view of intelligence in the brain, MD theory offers a contrasting account, with a unique functional emphasis and a more limited set of regions. Notwithstanding their differences, both P-FIT and MD theory clearly coalesce around a shared contention that a core set of lateral frontal, lateral parietal, and medial frontal regions are disproportionately related to intelligence.

1 In this context, distance typically refers to the number of intervening connections between two given nodes of interest, as opposed to their physical proximity in the brain.
On that basis, there is considerable support in the literature for that broad claim (including from lesion studies; Barbey et al., 2012; Gläscher et al., 2010), although precise replications tend to be elusive. Two recent meta-analyses brought these issues into sharper focus. The first meta-analysis found a general pattern of support for P-FIT across both structural and functional brain imaging modalities (Basten, Hilger, & Fiebach, 2015). Functional studies showed both positive and negative associations, across regions primarily in the same core set of lateral frontal and parietal and medial frontal areas, with additional clusters in the right temporal lobe and posterior insula. Structural studies, by contrast, showed only positive associations between gray matter volume and intelligence in lateral and medial frontal regions (along with a more distributed set of cortical and subcortical regions). Strikingly though, the structural findings showed no overlapping voxels with the functional results (Basten et al., 2015). The second meta-analysis focused more narrowly on fMRI correlates of fluid intelligence, and likewise revealed a primary pattern of convergence in lateral fronto-parietal regions, albeit with a left-hemisphere predominance, and additional foci in the occipital and insular cortex, as well as the basal ganglia and thalamus (Santarnecchi, Emmendorfer, & Pascual-Leone, 2017).


Overall, while these studies differ somewhat in their methods and focus, they converge on several points. Most notably, the total lack of convergence between structural and functional studies in the first meta-analysis points to ongoing difficulties in relating cognitive differences to discrete neural regions. As those authors explain, this may in part reflect that while P-FIT was originally formulated at the level of Brodmann areas, neuroimaging studies provide much finer resolution, allowing the possibility of no precise overlap between different studies or modalities (Basten et al., 2015). Second, while both meta-analyses largely affirm the importance of fronto-parietal networks to cognitive differences, they also suggest revisions to that basic story, particularly regarding the role of the insula and subcortical areas (Basten et al., 2015; Santarnecchi, Emmendorfer, & Pascual-Leone, 2017). Finally, while P-FIT always emphasized integration throughout that network, recent studies have supported this more directly by explicitly examining neural connectivity (Pineda-Pardo et al., 2016; Vakhtin, Ryman, Flores, & Jung, 2014; and see Network Neuroscience Theory below). In summary then, while studies over the last decade largely support a disproportionate role for fronto-parietal networks in intelligence, exact replications are limited, and many studies implicate broader networks. Thus, the overall progress in the neuroanatomical literature has brought a set of more detailed questions into focus. Namely, how extensive is the core neural network involved in intelligence, how consistent is it across individuals, and what is the precise role of those regions in supporting intelligence? As outlined in the following sections, the newest cognitive neuroscience theories of intelligence each seek to address one or more of these issues.

Recent Theoretical Developments

Process Overlap Theory

While not a cognitive neuroscience theory per se, Process Overlap Theory (POT; Kovacs & Conway, 2016) offers a functional account of why fronto-parietal networks have been so central to neuroscience theories of intelligence, and hence shares features of Multiple Demand theory. Specifically, POT seeks to explain the positive manifold phenomenon of g; that is, it aims to explain how intelligence as a unitary capacity could arise from a diverse set of processes. In motivating their answer, the authors note several key findings: (1) that g is highly correlated with fluid reasoning (Gf) skills in particular, (2) that working memory and other executive skills are crucial to Gf, and (3) that the slowest reaction times within a task (indicative of executive lapses) often strongly predict g. In light of these findings, POT suggests that working memory and executive control processes constrain cognitive performance in a domain-general way. That is, despite the fact that many tasks rely on domain-specific skills, they also all rely on a single set of executive processes. These function as a bottleneck, constraining performance regardless of domain. In turn, given the predominant role of parietal and especially frontal cortices in supporting working memory and executive processes, these regions are strong candidate substrates for the physical constraints (Kovacs & Conway, 2016, p. 169).
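POT's bottleneck claim connects to the worst-performance finding noted above, that the slowest reaction times best predict g. A toy simulation, with all parameters invented purely for illustration, shows why: if low executive control produces occasional lapses, the ability signal concentrates in the slow tail of the RT distribution.

```python
import math
import random
import statistics

def pearson(x, y):
    """Plain Pearson correlation for two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(7)
control, fast_rt, slow_rt = [], [], []
for _ in range(300):                      # 300 simulated participants
    c = random.gauss(0, 1)                # executive-control level
    p_lapse = 1 / (1 + math.exp(2 + c))   # lower control -> more lapses
    rts = []
    for _ in range(100):                  # 100 trials each
        rt = random.gauss(500, 50)        # ordinary trial, in ms
        if random.random() < p_lapse:     # attentional lapse: long tail
            rt += random.expovariate(1 / 400)
        rts.append(rt)
    rts.sort()
    control.append(c)
    fast_rt.append(statistics.mean(rts[:20]))   # fastest 20% of trials
    slow_rt.append(statistics.mean(rts[-20:]))  # slowest 20% of trials

# The slowest quantile, dominated by lapses, tracks executive control
# far more strongly than the fastest quantile does.
print(f"r(slow, control) = {pearson(slow_rt, control):.2f}")
print(f"r(fast, control) = {pearson(fast_rt, control):.2f}")
```

In this toy model the correlation for the slow quantile is strongly negative (more lapses, slower tail) while the fast quantile is only weakly related, mirroring the worst-performance rule.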

Network Neuroscience Theory

As suggested in the section on fronto-parietal models, advances in the field have facilitated a shift away from a focus on discrete, localized functions to an emphasis on integrated networks. In particular, many recent studies have drawn attention to intrinsic connectivity or fMRI “resting state” networks (ICNs), and their potential importance to intelligence. While several of these have affirmed the importance of fronto-parietal networks (Hearne et al., 2016; Santarnecchi, Emmendorfer, Tadayon, et al., 2017), they also raise novel issues. For example, results have prompted calls to more carefully delineate the term “fronto-parietal” as it applies to the dorsal attention network vs. other ICNs involving those regions, such as executive control networks (Santarnecchi, Emmendorfer, Tadayon, et al., 2017). Further, several studies have highlighted the role of broader networks, particularly the default mode and salience networks, albeit with some mixed results (cf. Hearne et al., 2016; Hilger et al., 2017; Santarnecchi, Emmendorfer, Tadayon, et al., 2017). Overall, this pattern reinforces the importance of moderators, and highlights the theme that intelligence is unlikely to be a property of a single brain process, region, or even network. In that vein, Network Neuroscience Theory (NNT; Barbey, 2018) organizes the findings in this emerging frontier, and integrates the connectivity and psychometric literatures to arrive at a network-based account of human intelligence. Following from the hierarchical structure of the construct (i.e., g sits atop broad factors, followed by specific abilities), NNT applies formal graph-theoretical concepts to explain how intelligence as a global capacity arises from the dynamic connectivity patterns that characterize the brain.
In brief, NNT invokes at least four key premises: First, g is conceived of as a global network phenomenon, and as such cannot be understood by analyses of particular cognitive processes or tests. Second, the brain is held to be organized such that it balances modularity – relatively focal, densely inter-connected functional centers – with select long-range connections that allow for more global integration across modules. This is formally known as “small-world” architecture, which characterizes the brain in general, as well as specific ICNs. Third, this small-world architecture underlies distinct broad capacities, such as fluid and crystallized intelligence, albeit as mediated by different networks. Crystallized intelligence, involving the retrieval of semantic and episodic knowledge, relies on easy-to-reach network states that require well-connected functional hubs, especially within the default mode network. Fluid intelligence, on the other hand, seems to be supported by hard-to-reach states and by weak connectivity between the fronto-parietal and cingulo-opercular networks, which permit the flexible development of novel cognitive representations. Finally, g itself does not relate to static regions or even networks, but rather emerges through the dynamic reconfiguration of connectivity patterns among different ICNs (Barbey, 2018). Thus, NNT offers a view of intelligence based on the dynamic functional organization of the brain. On that basis, it may help address inconsistencies in the literature, by promoting a view of intelligence as a dynamic, emergent phenomenon, as opposed to one that “resides” in particular structures. Moreover, as noted later on in the section detailing Broader Applications, the view implied by NNT has important implications for issues beyond intelligence research itself.
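The small-world premise can be made concrete with a toy graph (standard library only; not brain data): a ring lattice is highly clustered, i.e., locally modular, but poorly integrated, and a few long-range shortcuts shorten paths while leaving clustering nearly intact.

```python
from collections import deque

def bfs_dists(adj, src):
    """Shortest path lengths from src via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def char_path_length(adj):
    """Mean shortest-path length over all node pairs (connected graph)."""
    nodes = list(adj)
    dsum = pairs = 0
    for s in nodes:
        d = bfs_dists(adj, s)
        for t in nodes:
            if t != s:
                dsum += d[t]
                pairs += 1
    return dsum / pairs

def avg_clustering(adj):
    """Mean fraction of each node's neighbour pairs that are linked."""
    total = 0.0
    for u, nbrs in adj.items():
        nbrs = list(nbrs)
        k = len(nbrs)
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nbrs[j] in adj[nbrs[i]])
        total += 2 * links / (k * (k - 1)) if k > 1 else 0
    return total / len(adj)

n = 20
# Ring lattice: each node linked to its 2 nearest neighbours per side.
lattice = {i: {(i + d) % n for d in (-2, -1, 1, 2)} for i in range(n)}
small_world = {u: set(vs) for u, vs in lattice.items()}
for u, v in [(0, 10), (5, 15), (3, 13)]:   # a few long-range shortcuts
    small_world[u].add(v)
    small_world[v].add(u)

print(f"lattice:     C = {avg_clustering(lattice):.2f}, "
      f"L = {char_path_length(lattice):.2f}")
print(f"small-world: C = {avg_clustering(small_world):.2f}, "
      f"L = {char_path_length(small_world):.2f}")
```

On this 20-node example the shortcuts cut the characteristic path length while average clustering only falls from 0.50 to 0.44, which is the combination of local modularity and global integration that the small-world concept captures.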

Hierarchical Predictive Processing

Predictive processing theories begin from the ambitious notion that the brain can be best understood as a statistical organ, designed to (1) enact a “model” of the world and (2) allow the organism to act in ways that minimize deviations from that model, in the form of unexpected, maladaptive exchanges with the environment (e.g., avoid being a literal “fish out of water”; Friston, 2010). This view has gained considerable traction within broader cognitive science, to the point of being under serious discussion as a potentially unified theory of the brain and cognition (Clark, 2015). From that jumping-off point, the predictive processing view of intelligence poses two questions: First, if correct, what implications does predictive processing have for intelligence research; and second, as a supposedly unifying account, how could it foster greater integration among the somewhat disparate lines of neuroscientific intelligence research (Euler, 2018)? Three main ideas seem to follow in response to those questions. First, in considering intelligence as a complex, hierarchical construct, predictive processing suggests that, rather than grouping tasks only according to the constructs they measure (fluid reasoning vs. processing speed), it is also useful to recall the fundamental property they share – inducing a form of uncertainty. Although this seems like a truism, it nevertheless may help integrate the current methodological extremes of chronometric ERP studies at one end, and fMRI studies of complex cognition at the other. This is because it provides a systematic way of thinking about the expected size of task–ability correlations, and the likely neural basis of the underlying effects. Specifically, it suggests that most chronometric effects are perhaps rightfully modest, owing to the more limited uncertainty typical of those tasks and the comparatively circumscribed neural networks recruited.
On the other hand, since reasoning and other complex cognitive processes entail much higher-order uncertainty, such tasks should recruit much more
distributed networks, thereby eliciting greater variability in brain functioning and its relation to intelligence. This in turn raises the second key aspect of predictive processing: the idea that neural hierarchies should be important in determining brain–ability effects. For example, if task-related uncertainty drives neural recruitment, then ERP–ability correlations should also scale as a function of the complexity of the tasks (and networks) involved. Likewise, tasks with the greatest uncertainty should require extended and iterative processing across multiple networks, and may be subject to domain-general bottlenecks that are implicated in higher-order cognition (e.g., fronto-parietal networks, as hypothesized by POT). Thus, as an explicitly hierarchical model of brain functioning (Clark, 2015), predictive processing provides a framework for relating cognitive hierarchies to neural ones, and potentially for better integrating the currently disparate subfields of ERP and fMRI research on intelligence. Third, because predictive processing holds that organisms must develop a model of the world, it could provide explicit, testable mechanisms for distinguishing neural activity that reflects more momentary capacities from activity related to prior learning. Thus, it potentially provides a framework for testing developmental questions in the neuroscience of intelligence (Euler, 2018). Finally, it should be noted that the strength of predictive processing may also be its weakness, in that its ambitious scope may fail to heed the lessons of prior efforts to link intelligence to universal principles.

The Watershed Model of Fluid Intelligence

The Watershed Model begins from the observation that, while the heritability of intelligence is well established, it has been difficult to understand the functional mechanisms that link candidate genes to the complex behavioral phenotype known as intelligence. Borrowing from the behavioral genetics literature, Kievit et al.'s (2016) Watershed Model instead treats intelligence as an endophenotype – an intermediate step between genes and phenotype. As an endophenotype, intelligence is not a behavior per se, but a series of indirect dispositions towards intelligent behavior (i.e., high IQ scores), arranged in a hierarchical manner. Thus, while many previous theories have emphasized single neural processes in explaining intelligence (e.g., neural speed, NEH), the Watershed Model tries to integrate the relative influence of lower-level biological processes with more proximal cognitive correlates, while also capturing the variability in how these processes might manifest across people. In their paper, Kievit and colleagues explore the influences that white matter tracts and processing speed could both have on fluid intelligence, as an example of how intelligence might operate as an endophenotype. Like a geographical watershed, the genes that govern brain structure each make
small, independent contributions to the ultimate behavioral phenotype. Hence, they function like smaller "tributaries" that funnel into more proximate behavioral causes (larger "waterways"), which include things like properties of neurons or glia, and broader neuroanatomical features. The authors argue that white matter structure is one such lower-level biological process, which could indirectly support intelligence through higher processing speed. Using sophisticated statistical models, they established that various processing speed measures each made partially independent contributions to fluid intelligence (i.e., multiple realizability) and could not be collapsed into a single factor. Similarly, white matter integrity across various tracts did correlate with fluid intelligence, but only indirectly, through processing speed. Furthermore, white matter integrity also had a higher dimensionality than processing speed (i.e., it comprised a greater number of factors), consistent with the predictions of the Watershed Model. Thus, the results provide evidence that various biological correlates of intelligence should be considered simultaneously within a hierarchy of influence, and that multiple constellations of these correlates could come together to achieve an equivalent level of ability.
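The hierarchical, mediated structure described above can be made concrete with a toy numerical sketch. To be clear, this is not Kievit et al.'s (2016) actual structural equation model: the six "tract" variables, two "speed" variables, unit effect sizes, and sample size are all invented for illustration. The point it demonstrates is that a distal correlate (one tract) relates to fluid intelligence only weakly, and only through the proximal factor (speed) it feeds into.

```python
import random
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

rng = random.Random(42)
n = 20_000

# Six white-matter "tributaries" (hypothetical tract measures).
tracts = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(6)]

# Two processing-speed "waterways", each fed by three tracts plus noise, so
# speed has lower dimensionality (2 factors) than white matter (6 tracts).
speed1 = [tracts[0][i] + tracts[1][i] + tracts[2][i] + rng.gauss(0, 1) for i in range(n)]
speed2 = [tracts[3][i] + tracts[4][i] + tracts[5][i] + rng.gauss(0, 1) for i in range(n)]

# Fluid intelligence sits downstream of speed only (full mediation by construction).
gf = [speed1[i] + speed2[i] + rng.gauss(0, 1) for i in range(n)]

r_tract_gf = pearson_r(tracts[0], gf)       # distal correlate: weak
r_speed_gf = pearson_r(speed1, gf)          # proximal correlate: stronger
r_tract_speed = pearson_r(tracts[0], speed1)

print(f"tract-gf correlation: {r_tract_gf:.2f}")
print(f"speed-gf correlation: {r_speed_gf:.2f}")
# Under full mediation, the distal correlation equals the product of the links:
print(f"product of links:     {r_tract_speed * r_speed_gf:.2f}")
```

Because the simulated influence of tracts on fluid intelligence runs entirely through speed, the tract–gf correlation (about 0.33 here) matches the product of the tract–speed and speed–gf links, mirroring the indirect pathway Kievit and colleagues report.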

Summary of Progress, Current Challenges, and Potential Ways Forward

The Emerging Synthesis

In reviewing the insights gained over the two phases of theory development, arguably five principles have emerged. First, contrary to some early suggestions in this literature, the neural correlates of intelligence are clearly distributed throughout the brain. Second, given the mounting complexity of findings, it seems likely that neither a single set of regions nor a single functional relationship will sufficiently describe "the" brain–ability relationship. Rather, the search for universal principles has given way to an appreciation that many brain–ability effects are apt to be regionally dependent and contingent on various moderators (Haier's first law; Haier, 2016). Fortunately, the field seems to be embracing this complexity and moving toward an emphasis on neural networks. This shift toward networks is the third consensus point, and it facilitates more sophisticated theories in two ways – by helping to unify the activation-based and neuroanatomical approaches, and by drawing attention to the specific functional roles that various networks play in intelligence. Indeed, each of the newer theories reviewed here explicitly highlights the ways in which different neural networks are apt to play unique roles in various sub-factors of intelligence. That is, although P-FIT and some work on NEH clearly acknowledged the importance of networks, it took technological advances to begin to understand those implications in practice. Recently, the second phase of theorizing has brought forward the last two principles, which
concern the role of measurement and hierarchies in neural research on intelligence.2 Because these raise the next set of challenges for the field, we take them in turn here.

Current Challenges and Potential Solutions

The first challenge facing neuroscientific research on intelligence concerns measurement-related questions, and particularly difficulties in reproducing various brain–ability relationships (Basten et al., 2015; Martínez et al., 2015). These reflect both methodological and substantive issues, and, in light of the former, the first steps toward improving reproducibility include general recommendations like increasing sample sizes, distinguishing between within- and between-subjects variance with respect to neural correlates of task performance, reporting results of whole-brain analyses, and explicitly examining moderators (Basten et al., 2015). In addition, because intelligence is a hierarchical construct, it is critical that researchers carefully distinguish between specific tests, broad factors, and intelligence itself when testing brain–ability relationships, lest the results reflect unknown contributions from more specific capacities (Haier et al., 2009). Likewise, in functional studies, brain–ability correlations are likely also contingent on the tasks that are employed to operationalize constructs. Setting aside these methodological concerns, there are also several more conceptual issues facing studies in the area. The first of these is that, since intelligence is a hierarchical construct, various neural hierarchies may be operating to complicate research. For example, it has been shown that, contrary to intuitions, a "reversed hierarchy" may exist in the brain, such that there might actually be fewer reliable neural correlates of intelligence as one moves up the cognitive hierarchy (Román et al., 2014; see also Barbey et al., 2012; Gläscher et al., 2010). This is reminiscent of the concept of "Brunswik symmetry," which holds that correlations are necessarily attenuated between constructs at different hierarchical levels (e.g., between g and a lower-order personality factor; Kretzschmar et al., 2018).
Speculatively, this could operate in the brain in two possible ways. First, for functional studies, brain–ability correlations may inherently scale as a function of the complexity of the networks involved. That is, more demanding tasks may elicit greater variability in brain functioning, thereby producing stronger brain–ability correlations (Euler, 2018). Second, it could also be the case that attempting to relate relatively discrete neural events (ERPs, average BOLD responses) to the aggregate construct of intelligence might systematically underestimate these statistical relationships – an idea that seems to accord with the recent success in relating intelligence to neural activity when both were modeled as latent factors (Schubert et al., 2017). In any case, the best way to evaluate these possibilities is for researchers to begin systematically assessing them by using formal measurement models and testing the generality of effects at different hierarchical levels. A second and related issue is the possibility that intelligence might be instantiated in different ways in different brains (Kievit et al., 2016). That is, one might interpret the evidence for a reversed hierarchy as suggesting either that g relates to a small set of brain regions, or that it relates to more distributed networks in a way that varies across people. Nearly all of the theories reviewed in this chapter, and certainly the most recent theories, entertain some version of this possibility. Here again, though, attending to hierarchies can facilitate testing this idea, in that as one moves down the cognitive hierarchy to more domain-specific tests, and presumably to more discrete neural circuits, between-subject variability should give way to greater consistency in brain structure and function. Going in the other direction, to the extent that higher-order brain–ability effects could not be reliably shown, it would provide evidence that intelligence is in fact multiply realized to some degree. Last, the notion of hierarchies arguably also provides a means to achieve greater integration within the field. Ultimately, cognitive neuroscience theories of intelligence should strive toward a complete account of why apparently quite disparate phenomena all predict cognitive ability, which factors affect the size of those relationships, and how reliable they are across people. In turn, the answers to those questions should greatly enhance our conception of what intelligence actually is.

2 Neural speed, NNT, and predictive processing also raise neural dynamics as an important factor in understanding the neural basis of intelligence. Insofar as fully assessing these effects will require the next phase of conceptual and technological advances (e.g., MEG; Haier, 2016), this likely represents an additional frontier in this area.
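The measurement point above – that relating a single noisy neural event to the aggregate construct of intelligence underestimates the true relationship – is essentially Spearman's attenuation formula at work, and can be sketched numerically. The variables, noise levels, and the 20-indicator composite below are arbitrary choices for illustration, not values from any cited study.

```python
import random
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

rng = random.Random(7)
n = 20_000

# A latent neural property and an ability criterion that shares its variance.
true_neural = [rng.gauss(0, 1) for _ in range(n)]
ability = [t + rng.gauss(0, 1) for t in true_neural]

# A single ERP-like indicator: the latent variable plus heavy measurement noise.
one_trial = [t + rng.gauss(0, 1.5) for t in true_neural]

# A latent-style composite: averaging 20 noisy indicators boosts reliability.
composite = [t + sum(rng.gauss(0, 1.5) for _ in range(20)) / 20 for t in true_neural]

r_single = pearson_r(one_trial, ability)
r_composite = pearson_r(composite, ability)
print(f"single indicator vs. ability: r = {r_single:.2f}")
print(f"20-indicator composite:       r = {r_composite:.2f}")
```

The composite correlates with ability far more strongly than any single indicator does, even though both measure the same latent variable. This is the logic behind the latent-factor successes noted above (Schubert et al., 2017): improving reliability at the neural level strengthens observed brain–ability correlations without any change in the underlying relationship.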

Broader Applications

Although this chapter has largely focused on the basic science of intelligence, recent theoretical developments nevertheless have important implications for broader endeavors. Most clearly, these include applications to neurorehabilitation, to diagnosing and mitigating the effects of neurodegenerative conditions, and, potentially, to ultimately enhancing intelligence itself. The first major application of the ideas discussed here relates to treatment approaches following brain injuries. For example, NNT highlights how greater compartmentalization of domain-specific functions minimizes the consequences of neural injury (Barbey, 2018), while research on fronto-parietal networks, and especially bottleneck accounts, highlights the crucial importance of these systems to domain-general cognition. Given that intelligence, broadly speaking, is a protective factor for many different health conditions, including recovery from brain injury, this underscores the importance of rehabilitative efforts following lesions to these networks. Regained
functionality in these networks will not only directly help patients recover cognitive abilities, but also promote their capacity to adapt to other aspects of their injuries (e.g., motor dysfunction and difficulties with emotion regulation). As such, neuroscientific intelligence research provides clear targets for rehabilitation scientists in their efforts to improve outcomes following acquired brain injury. Next, the cognitive neuroscience of intelligence informs neurological diagnosis and prevention in two important ways. First, if we can better understand the factors that moderate brain–ability relationships, and particularly the role of subjective difficulty as illuminated by NEH (Dunst et al., 2014), this paves the way toward a science of mental exertion. In turn, if intelligence researchers can validate neural markers of mental exertion, especially using low-cost, highly portable methods like EEG, this potentially supports a revolution in neuropsychological assessment and diagnosis. For example, a valid marker of mental exertion would enable clinicians to observe when a patient, based on their IQ score, is exerting greater effort than expected to perform a given task, thereby potentially signaling cerebral compromise. In turn, this would conceivably allow earlier detection of incipient neurodegenerative diseases, because neural functioning is likely impaired in these conditions prior to the onset of behavioral deficits. Given the movement in Alzheimer's research toward earlier detection of at-risk individuals (Fiandaca, Mapstone, Cheema, & Federoff, 2014), understanding these moderators, and especially how exertion affects activity–ability relationships, could considerably improve early diagnosis. The second way that the cognitive neuroscience of intelligence can impact brain health and care is through identifying the neural substrates of cognitive reserve.
In brief, cognitive reserve refers to those factors that provide functional resilience against the deleterious effects of neuropathology, whether due to dementias or acquired brain injuries (Barulli & Stern, 2013). Critically, cognitive reserve is understood to be shaped by an individual’s life experience, and especially education and exposure to intellectual stimulation. Further, reserve is routinely estimated using measures of crystallized intelligence, thereby placing it firmly within the purview of intelligence research. Thus, by refining the concept of crystallized intelligence, as well as its development and neural basis, research in this field could profoundly impact our understanding of cognitive reserve, and ultimately help provide strategies to lessen the effects of acquired brain injuries and neurodegenerative diseases. A final way in which cognitive neuroscience theories of intelligence could have broader impacts is by clarifying the neural basis of intellectual development, and particularly how intelligence might eventually be enhanced via environmental or biological interventions. While those interventions are likely far-off, several recent theories nevertheless provide testable claims that could improve understanding of how intelligence develops. In the first instance, NNT makes specific predictions about how learning, and hence intellectual
development, entails a transition from the ability to engage hard-to-reach network states in initial learning phases (when problems are novel), to consolidating those skills, via transfers to networks governed by easy-to-reach states (Barbey, 2018). Predictive processing complements NNT and provides an additional framework for potentially distinguishing neural activity related to previous learning (i.e., neural “priors”) vs. more novel processing (Euler, 2018). Most importantly, both accounts suggest ways to quantify discrete neural markers of learning and, hence, to make testable predictions about their role in intellectual development. Thus, progress in this area has the potential to help adjudicate questions related to the malleability of intelligence, and, potentially, to inform education and other interventions designed to increase intelligence. Overall, the theories reviewed in this chapter, and especially the emerging lines of research, have considerable potential to increase understanding of this fundamental trait and, ultimately, to enhance human wellbeing.

References

Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20. doi: 10.1016/j.tics.2017.10.001.
Barbey, A. K., Colom, R., Solomon, J., Krueger, F., Forbes, C., & Grafman, J. (2012). An integrative architecture for general intelligence and executive function revealed by lesion mapping. Brain, 135(4), 1154–1164.
Barulli, D., & Stern, Y. (2013). Efficiency, capacity, compensation, maintenance, plasticity: Emerging concepts in cognitive reserve. Trends in Cognitive Sciences, 17(10), 502–509. doi: 10.1016/j.tics.2013.08.012.
Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27. doi: 10.1016/j.intell.2015.04.009.
Basten, U., Stelzel, C., & Fiebach, C. J. (2013). Intelligence is differentially related to neural effort in the task-positive and the task-negative brain network. Intelligence, 41(5), 517–528.
Clark, A. (2015). Surfing uncertainty: Prediction, action, and the embodied mind. Oxford University Press.
Deary, I. J., Der, G., & Ford, G. (2001). Reaction times and intelligence differences: A population-based cohort study. Intelligence, 29(5), 389–399. doi: 10.1016/S0160-2896(01)00062-9.
Duncan, J. (2010). The multiple-demand (MD) system of the primate brain: Mental programs for intelligent behaviour. Trends in Cognitive Sciences, 14(4), 172–179. doi: 10.1016/j.tics.2010.01.004.
Dunst, B., Benedek, M., Jauk, E., Bergner, S., Koschutnig, K., Sommer, M., . . . Neubauer, A. C. (2014). Neural efficiency as a function of task demands. Intelligence, 42, 22–30. doi: 10.1016/j.intell.2013.09.005.
Ertl, J. P., & Schafer, E. W. P. (1969). Brain response correlates of psychometric intelligence. Nature, 223(5204), 421–422. doi: 10.1038/223421a0.
Euler, M. J. (2018). Intelligence and uncertainty: Implications of hierarchical predictive processing for the neuroscience of cognitive ability. Neuroscience & Biobehavioral Reviews, 94, 93–112. doi: 10.1016/j.neubiorev.2018.08.013.
Euler, M. J., McKinney, T. L., Schryver, H. M., & Okabe, H. (2017). ERP correlates of the decision time-IQ relationship: The role of complexity in task- and brain-IQ effects. Intelligence, 65, 1–10. doi: 10.1016/j.intell.2017.08.003.
Euler, M. J., Weisend, M. P., Jung, R. E., Thoma, R. J., & Yeo, R. A. (2015). Reliable activation to novel stimuli predicts higher fluid intelligence. NeuroImage, 114, 311–319. doi: 10.1016/j.neuroimage.2015.03.078.
Fiandaca, M. S., Mapstone, M. E., Cheema, A. K., & Federoff, H. J. (2014). The critical need for defining preclinical biomarkers in Alzheimer's disease. Alzheimer's & Dementia, 10(3), S196–S212. doi: 10.1016/j.jalz.2014.04.015.
Friston, K. J. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138. doi: 10.1038/nrn2787.
Genç, E., Fraenz, C., Schlüter, C., Friedrich, P., Hossiep, R., Voelkle, M. C., . . . Jung, R. E. (2018). Diffusion markers of dendritic density and arborization in gray matter predict differences in intelligence. Nature Communications, 9(1), 1905. doi: 10.1038/s41467-018-04268-8.
Gläscher, J., Rudrauf, D., Colom, R., Paul, L. K., Tranel, D., Damasio, H., & Adolphs, R. (2010). Distributed neural system for general intelligence revealed by lesion mapping. Proceedings of the National Academy of Sciences of the United States of America, 107(10), 4705–4709. doi: 10.1073/pnas.0910397107.
Haier, R. J. (2016). The neuroscience of intelligence. Cambridge University Press.
Haier, R. J., Colom, R., Schroeder, D. H., Condon, C. A., Tang, C., Eaves, E., & Head, K. (2009). Gray matter and intelligence factors: Is there a neuro-g? Intelligence, 37(2), 136–144. doi: 10.1016/j.intell.2008.10.011.
Haier, R. J., Siegel Jr, B. V., Nuechterlein, K. H., Hazlett, E., Wu, J. C., Paek, J., . . . Buchsbaum, M. S. (1988). Cortical glucose metabolic rate correlates of abstract reasoning and attention studied with positron emission tomography. Intelligence, 12(2), 199–217. doi: 10.1016/0160-2896(88)90016-5.
Hearne, L. J., Mattingley, J. B., & Cocchi, L. (2016). Functional brain networks related to individual differences in human intelligence at rest. Scientific Reports, 6, 32328. doi: 10.1038/srep32328.
Hendrickson, D. E., & Hendrickson, A. E. (1980). The biological basis of individual differences in intelligence. Personality and Individual Differences, 1(1), 3–33.
Hilger, K., Ekman, M., Fiebach, C. J., & Basten, U. (2017). Efficient hubs in the intelligent brain: Nodal efficiency of hub regions in the salience network is associated with general intelligence. Intelligence, 60, 10–25. doi: 10.1016/j.intell.2016.11.001.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–187. doi: 10.1017/S0140525X07001185.
Kapanci, T., Merks, S., Rammsayer, T. H., & Troche, S. J. (2019). On the relationship between P3 latency and mental ability as a function of increasing demands
in a selective attention task. Brain Sciences, 9(2), 28. doi: 10.3390/brainsci9020028.
Kievit, R. A., Davis, S. W., Griffiths, J., Correia, M. M., Cam-CAN, & Henson, R. N. (2016). A watershed model of individual differences in fluid intelligence. Neuropsychologia, 91, 186–198. doi: 10.1016/j.neuropsychologia.2016.08.008.
Kovacs, K., & Conway, A. R. A. (2016). Process overlap theory: A unified account of the general factor of intelligence. Psychological Inquiry, 27(3), 151–177. doi: 10.1080/1047840X.2016.1153946.
Kretzschmar, A., Spengler, M., Schubert, A.-L., Steinmayr, R., & Ziegler, M. (2018). The relation of personality and intelligence – What can the Brunswik symmetry principle tell us? Journal of Intelligence, 6(3), 30. doi: 10.3390/jintelligence6030030.
Kruschwitz, J. D., Waller, L., Daedelow, L. S., Walter, H., & Veer, I. M. (2018). General, crystallized and fluid intelligence are not associated with functional global network efficiency: A replication study with the human connectome project 1200 data set. NeuroImage, 171, 323–331. doi: 10.1016/j.neuroimage.2018.01.018.
Li, Y., Liu, Y., Li, J., Qin, W., Li, K., Yu, C., & Jiang, T. (2009). Brain anatomical network and intelligence. PLoS Computational Biology, 5(5), e1000395. doi: 10.1371/journal.pcbi.1000395.
Mackintosh, N. J. (2011). IQ and human intelligence (2nd ed.). Oxford University Press.
Martínez, K., Madsen, S. K., Joshi, A. A., Joshi, S. H., Román, F. J., Villalon-Reina, J., . . . Colom, R. (2015). Reproducibility of brain-cognition relationships using three cortical surface-based protocols: An exhaustive analysis based on cortical thickness. Human Brain Mapping, 36(8), 3227–3245. doi: 10.1002/hbm.22843.
McKinney, T. L., & Euler, M. J. (2019). Neural anticipatory mechanisms predict faster reaction times and higher fluid intelligence. Psychophysiology, 56(10), e13426. doi: 10.1111/psyp.13426.
Neubauer, A. C., & Fink, A. (2009a). Intelligence and neural efficiency: Measures of brain activation versus measures of functional connectivity in the brain. Intelligence, 37(2), 223–229. doi: 10.1016/j.intell.2008.10.008.
Neubauer, A. C., & Fink, A. (2009b). Intelligence and neural efficiency. Neuroscience & Biobehavioral Reviews, 33(7), 1004–1023. doi: 10.1016/j.neubiorev.2009.04.001.
Nussbaumer, D., Grabner, R. H., & Stern, E. (2015). Neural efficiency in working memory tasks: The impact of task demand. Intelligence, 50, 196–208. doi: 10.1016/j.intell.2015.04.004.
Owen, A. M., & Duncan, J. (2000). Common regions of the human frontal lobe recruited by diverse cognitive demands. Trends in Neurosciences, 23(10), 475–483.
Pineda-Pardo, J. A., Martínez, K., Román, F. J., & Colom, R. (2016). Structural efficiency within a parieto-frontal network and cognitive differences. Intelligence, 54, 105–116. doi: 10.1016/j.intell.2015.12.002.
Poldrack, R. A. (2015). Is "efficiency" a useful concept in cognitive neuroscience? Developmental Cognitive Neuroscience, 11, 12–17.
Román, F. J., Abad, F. J., Escorial, S., Burgaleta, M., Martínez, K., Álvarez-Linera, J., . . . Colom, R. (2014). Reversed hierarchy in the brain for general and specific cognitive abilities: A morphometric analysis. Human Brain Mapping, 35(8), 3805–3818. doi: 10.1002/hbm.22438.
Ryman, S. G., Yeo, R. A., Witkiewitz, K., Vakhtin, A. A., van den Heuvel, M., de Reus, M., . . . Jung, R. E. (2016). Fronto-parietal gray matter and white matter efficiency differentially predict intelligence in males and females. Human Brain Mapping, 37(11), 4006–4016. doi: 10.1002/hbm.23291.
Santarnecchi, E., Emmendorfer, A., & Pascual-Leone, A. (2017). Dissecting the parieto-frontal correlates of fluid intelligence: A comprehensive ALE meta-analysis study. Intelligence, 63, 9–28. doi: 10.1016/j.intell.2017.04.008.
Santarnecchi, E., Emmendorfer, A., Tadayon, S., Rossi, S., Rossi, A., & Pascual-Leone, A. (2017). Network connectivity correlates of variability in fluid intelligence performance. Intelligence, 65, 35–47. doi: 10.1016/j.intell.2017.10.002.
Schubert, A.-L., Hagemann, D., & Frischkorn, G. T. (2017). Is general intelligence little more than the speed of higher-order processing? Journal of Experimental Psychology: General, 146(10), 1498–1512. doi: 10.1037/xge0000325.
Schultz, D. H., & Cole, M. W. (2016). Higher intelligence is associated with less task-related brain network reconfiguration. The Journal of Neuroscience, 36(33), 8551–8561. doi: 10.1523/jneurosci.0358-16.2016.
Sheppard, L. D., & Vernon, P. A. (2008). Intelligence and speed of information-processing: A review of 50 years of research. Personality and Individual Differences, 44(3), 535–551. doi: 10.1016/j.paid.2007.09.015.
Troche, S. J., Merks, S., Houlihan, M. E., & Rammsayer, T. H. (2017). On the relation between mental ability and speed of information processing in the Hick task: An analysis of behavioral and electrophysiological speed measures. Personality and Individual Differences, 118, 11–16. doi: 10.1016/j.paid.2017.02.027.
Vakhtin, A. A., Ryman, S. G., Flores, R. A., & Jung, R. E. (2014). Functional brain networks contributing to the Parieto-Frontal Integration Theory of Intelligence. NeuroImage, 103, 349–354. doi: 10.1016/j.neuroimage.2014.09.055.
van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. Journal of Neuroscience, 29(23), 7619–7624. doi: 10.1523/JNEUROSCI.1443-09.2009.


6 Human Intelligence and Network Neuroscience Aron K. Barbey

Introduction

Flexibility is central to human intelligence and is made possible by the brain's remarkable capacity to reconfigure itself – to continually update prior knowledge on the basis of new information and to actively generate internal predictions that guide adaptive behavior and decision making. Rather than lying dormant until stimulated, the brain is conceived of in contemporary research as a dynamic and active inference generator that anticipates incoming sensory inputs, forming hypotheses about the world that can be tested against sensory signals that arrive in the brain (Clark, 2013; Friston, 2010). Plasticity is therefore critical for the emergence of human intelligence, providing a powerful mechanism for updating prior beliefs, generating dynamic predictions about the world, and adapting in response to ongoing changes in the environment (Barbey, 2018). This perspective provides a catalyst for contemporary research on human intelligence, breaking away from the classic view that general intelligence (g) originates from individual differences in a fixed set of cortical regions or a singular brain network (for reviews, see Haier, 2017; Posner & Barbey, 2020). Early studies investigating the neurobiology of g focused on the lateral prefrontal cortex (Barbey, Colom, & Grafman, 2013b; Duncan et al., 2000), motivating an influential theory based on the role of this region in cognitive control functions for intelligent behavior (Duncan & Owen, 2000). The later emergence of network-based theories reflected an effort to examine the neurobiology of intelligence through a wider lens, accounting for individual differences in g on the basis of broadly distributed networks. For example, the Parieto-Frontal Integration Theory (P-FIT) was the first to propose that "a discrete parieto-frontal network underlies intelligence" (Jung & Haier, 2007) and that g reflects the capacity of this network to evaluate and test hypotheses for problem-solving (see also Barbey et al., 2012).
Aron K. Barbey, Decision Neuroscience Laboratory, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, 405 North Mathews Avenue, Urbana, IL 61801, USA. Email: [email protected]; Web: http://decisionneurosciencelab.org/

A central feature
of the P-FIT model is the integration of knowledge between the frontal and parietal cortex, afforded by white-matter fiber tracts that enable efficient communication among regions. Evidence supporting the fronto-parietal network's role in a wide range of problem-solving tasks later motivated the Multiple-Demand (MD) Theory, which proposes that this network underlies attentional control mechanisms for goal-directed problem-solving (Duncan, 2010). Finally, the Process Overlap Theory represents a recent network approach that accounts for individual differences in g by appealing to the spatial overlap among specific brain networks, reflecting the shared cognitive processes underlying g (Kovacs & Conway, 2016). Thus, contemporary theories suggest that individual differences in g originate from functionally localized processes within specific brain regions or networks (Table 6.1; for a comprehensive review of cognitive neuroscience theories of intelligence, see Chapter 5, by Euler and McKinney). Network Neuroscience Theory adopts a new perspective, proposing that g originates from individual differences in the system-wide topology and dynamics of the human brain (Barbey, 2018). According to this approach, the small-world topology of brain networks enables the rapid reconfiguration of their modular community structure, creating globally coordinated mental representations of a desired goal-state and the sequence of operations required to achieve it. This chapter surveys recent evidence within the rapidly developing field of network neuroscience that assesses the nature and mechanisms of general intelligence (Barbey, 2018; Girn, Mills, & Christoff, 2019) (for an

Table 6.1 Summary of cognitive neuroscience theories of human intelligence. The first three columns fall under Functional Localization; the last three under System-Wide Topology and Dynamics.

Theory                        Primary   Primary   Multiple   Small-World   Network       Network
                              Region    Network   Networks   Topology      Flexibility   Dynamics
Lateral PFC Theory            ✔         ✘         ✘          ✘             ✘             ✘
P-FIT Theory*                 ✘         ✔         ✘          ✘             ✘             ✘
MD Theory                     ✘         ✔         ✘          ✘             ✘             ✘
Process Overlap Theory        ✘         ✘         ✔          ✘             ✘             ✘
Network Neuroscience Theory   ✘         ✘         ✔          ✔             ✔             ✔

* The P-FIT theory was the first to propose that "a discrete parieto-frontal network underlies intelligence" (Jung & Haier, 2007).


a. k. barbey

introduction to modern methods in network neuroscience, see Chapter 2, by Hilger and Sporns). We identify directions for future research that aim to resolve prior methodological limitations and further investigate the hypothesis that general intelligence reflects individual differences in network mechanisms for (i) efficient and (ii) flexible information processing.

Network Efficiency

Early research in the neurosciences revealed that the brain is designed for efficiency – to minimize the cost of information processing while maximizing the capacity for growth and adaptation (Bullmore & Sporns, 2012; Ramón y Cajal, Pasik, & Pasik, 1999). Minimization of cost is achieved by dividing the cortex into anatomically localized modules, composed of densely interconnected regions or nodes. The spatial proximity of nodes within each module reduces the average length of axonal projections (conservation of space and material), increasing the signal transmission speed (conservation of time) and promoting local efficiency (Latora & Marchiori, 2001). This compartmentalization of function enhances robustness to brain injury by limiting the likelihood of global system failure (Barbey et al., 2015). Indeed, the capacity of each module to function and modify its operations without adversely affecting other modules enables cognitive flexibility (Barbey, Colom, & Grafman, 2013a) and therefore confers an important adaptive advantage (Bassett & Gazzaniga, 2011; Simon, 1962).

Critically, however, the deployment of modules for coordinated system-wide function requires a network architecture that also enables global information processing. Local efficiency is therefore complemented by global efficiency, which reflects the capacity to integrate information across the network as a whole and represents the efficiency of the system for information transfer between any two nodes. This complementary aim, however, creates a need for long-distance connections that incur a high wiring cost. Thus, an efficient design is achieved by balancing competing constraints on brain organization: the demand to decrease wiring cost through local specialization against the opposing need to increase connection distance to facilitate global, system-wide function.
These competing constraints are captured by formal models of network topology (Deco, Tononi, Boly, & Kringelbach, 2015) (Figure 6.1). Local efficiency is embodied by a regular network or lattice, in which each node is connected to an equal number of its nearest neighbors, supporting direct local communication in the absence of long-range connections. In contrast, global efficiency is exemplified by a random network, in which each node connects on average to any other node, including connections between physically distant regions.

Human Intelligence and Network Neuroscience

Figure 6.1 Small-world network. Human brain networks exhibit a small-world topology that represents a parsimonious balance between a regular brain network, which promotes local efficiency, and a random brain network, which enables global efficiency. Figure modified with permission from Bullmore and Sporns (2012)

Recent discoveries in network neuroscience suggest that the human brain balances these competing constraints by incorporating elements of a regular and random network to create a small-world topology (Bassett & Bullmore, 2006, 2017; Watts & Strogatz, 1998). A small-world network embodies (i) short-distance connections that reduce the wiring cost (high local clustering), along with (ii) long-distance connections that provide direct topological links or short-cuts that promote global information processing (short path length). Together, these features enable high local and global efficiency at relatively low cost, providing a parsimonious architecture for human brain organization (Robinson, Henderson, Matar, Riley, & Gray, 2009; Sporns, Tononi, & Edelman, 2000a, b; van der Maas et al., 2006). Evidence further indicates that efficient network organization is based on routing strategies that combine local and global information about brain network topology in an effort to approximate a small-world architecture (Avena-Koenigsberger et al., 2019). Research in network neuroscience has consistently observed that the topology of human brain networks indeed exemplifies a small-world architecture, which has been demonstrated across multiple neuroimaging modalities, including structural (He, Chen, & Evans, 2007), functional (Achard & Bullmore, 2007; Achard, Salvador, Whitcher, Suckling, & Bullmore, 2006; Eguiluz, Chialvo, Cecchi, Baliki, & Apkarian, 2005), and diffusion tensor magnetic resonance imaging (MRI) (Hagmann et al., 2007). Alterations in the topology of a small-world network have also been linked to multiple disease states (Stam, 2014; Stam, Jones, Nolte, Breakspear, & Scheltens, 2007), stages of lifespan development (Zuo et al., 2017), and pharmacological interventions (Achard & Bullmore, 2007), establishing their importance for understanding human


health, aging, and disease (Bassett & Bullmore, 2009). Emerging neuroscience evidence further indicates that general intelligence is directly linked to characteristics of a small-world topology, demonstrating that individual differences in g are associated with network measures of global efficiency.
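The balance just described between regular, small-world, and random organization can be sketched with standard graph measures. The following is an illustrative sketch rather than an analysis from any study reviewed here; it assumes the third-party networkx library, and the parameters (200 nodes, 6 neighbors per node, 10% rewiring) are arbitrary choices:

```python
import networkx as nx

n, k = 200, 6  # nodes; nearest neighbors per node in the ring lattice
lattice = nx.watts_strogatz_graph(n, k, p=0.0, seed=1)                # regular network
small_world = nx.connected_watts_strogatz_graph(n, k, p=0.1, seed=1)  # ~10% of edges rewired
random_net = nx.connected_watts_strogatz_graph(n, k, p=1.0, seed=1)   # random network

for name, g in [("lattice", lattice), ("small-world", small_world), ("random", random_net)]:
    print(f"{name:12s} clustering = {nx.average_clustering(g):.3f}  "
          f"path length = {nx.average_shortest_path_length(g):.2f}  "
          f"global efficiency = {nx.global_efficiency(g):.3f}")
```

With these settings the small-world graph retains most of the lattice's local clustering (high local efficiency) while its characteristic path length and global efficiency approach those of the random graph, which is the balance Watts and Strogatz (1998) describe.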

Small-World Topology and General Intelligence

The functional topology and community structure of the human brain have been extensively studied through the application of resting-state functional MRI, which examines spontaneous low-frequency fluctuations of the blood-oxygen-level-dependent (BOLD) signal. This method demonstrates coherence in brain activity across spatially distributed regions to reveal a core set of intrinsic connectivity networks (ICNs; Figure 6.2a) (Achard et al., 2006; Biswal, Yetkin, Haughton, & Hyde, 1995; Buckner et al., 2009; Bullmore & Sporns, 2009; Power & Petersen, 2013; Power et al., 2011; Smith et al., 2013; Sporns, Chialvo, Kaiser, & Hilgetag, 2004; van den Heuvel, Mandl, Kahn, & Hulshoff Pol, 2009). Functional brain networks largely converge with the structural organization of networks measured using diffusion tensor MRI (Byrge, Sporns, & Smith, 2014; Hagmann et al., 2007; Park & Friston, 2013), together providing a window into the community structure from which global information processing emerges.

The discovery that global brain network efficiency is associated with general intelligence was established by van den Heuvel, Stam, Kahn, and Hulshoff Pol (2009), who observed that g was positively correlated with global efficiency (as indexed by a globally shorter path length) (for earlier research on brain network efficiency using PET, see Haier et al., 1988). Santarnecchi, Galli, Polizzotto, Rossi, and Rossi (2014) further examined whether this finding reflects individual differences in connectivity strength, investigating the relationship between general intelligence and global network efficiency derived from weakly vs. strongly connected regions. Whereas strong connections provide the basis for densely connected modules, weak links index long-range connections that typically relay information between (rather than within) modules. The authors replicated van den Heuvel, Stam, et al.
(2009) and further demonstrated that weakly connected regions explain more variance in g than strongly connected regions (Santarnecchi et al., 2014), supporting the hypothesis that global efficiency and the formation of weak connections are central to general intelligence.

Further support for the role of global efficiency in general intelligence is provided by EEG studies, which examine functional connectivity as coherence between time series of distant EEG channels measured at rest. For instance, Langer and colleagues provide evidence for a positive association between g and the small-world topology of intrinsic brain networks derived from EEG (Langer, Pedroni, & Jancke, 2013; Langer et al., 2012). Complementary research examining the global connectivity of regions within the prefrontal cortex also supports a positive association with measures


Figure 6.2 Intrinsic connectivity networks and network flexibility. (A) Functional networks drawn from a large-scale meta-analysis of peaks of brain activity for a wide range of cognitive, perceptual, and motor tasks. Upper left figure represents a graph theoretic embedding of the nodes. Similarity between nodes is represented by spatial distance, and nodes are assigned to their corresponding network by color. Middle and right sections present the nodal and voxel-wise network distribution in both hemispheres. Figure modified with permission from Power and Petersen (2013). (B) Left graph illustrates the percent of regions within each intrinsic connectivity network that can transition to many easy-to-reach network states, primarily within the default mode network. Right graph illustrates the percent of regions within each intrinsic connectivity network that can transition to many difficult-to-reach network states, primarily within cognitive control networks. Figure modified with permission from Gu et al. (2015)


of intelligence. For example, Cole, Ito, and Braver (2015) and Cole, Yarkoni, Repovš, Anticevic, and Braver (2012) observed that the global connectivity of the left lateral prefrontal cortex (as measured by the average connectivity of this region with every other region in the brain) demonstrates a positive association with fluid intelligence. Converging evidence is provided by Song et al. (2008), who found that the global connectivity of the bilateral dorsolateral prefrontal cortex was associated with general intelligence.

To integrate the diversity of studies investigating the role of network efficiency in general intelligence – and to account for null findings (Kruschwitz, Waller, Daedelow, Walter, & Veer, 2018) – it will be important to examine differences among studies with respect to resting-state fMRI data acquisition, preprocessing, network analysis, and the study population. A central question concerns whether resting-state fMRI is sufficiently sensitive or whether task-based fMRI methods provide a more powerful lens to examine the role of network efficiency in general intelligence. Indeed, a growing body of evidence suggests that functional brain network organization measured during cognitive tasks is a stronger predictor of intelligence than when measured during resting-state fMRI (Greene, Gao, Scheinost, & Constable, 2018; Xiao, Stephen, Wilson, Calhoun, & Wang, 2019). This literature has primarily employed task-based fMRI paradigms investigating cognitive control, specifically within the domain of working memory (for a review, see Chapter 13, by Cohen and D’Esposito). For example, fMRI studies investigating global brain network organization have revealed that working memory task performance is associated with an increase in network integration and a decrease in network segregation (Cohen & D’Esposito, 2016; see also Gordon, Stollstorff, & Vaidya, 2012; Liang, Zou, He, & Yang, 2016).
Increased integration was found primarily within networks for cognitive control (e.g., the fronto-parietal and cingulo-opercular networks) and for task-relevant sensory processing (e.g., the somatomotor network) (Cohen, Gallen, Jacobs, Lee, & D’Esposito, 2014). Thus, global brain network integration measured by task-based fMRI provides a powerful lens for further characterizing the role of network efficiency in high-level cognitive processes (e.g., cognitive control and working memory).

Increasingly, scientists have proposed that high-level cognitive operations emerge from brain network dynamics (Breakspear, 2017; Cabral, Kringelbach, & Deco, 2017; Deco & Corbetta, 2011; Deco, Jirsa, & McIntosh, 2013), motivating an investigation of their role in general intelligence.
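The integration/segregation contrast measured in these task studies can be approximated with the modularity statistic Q: a highly segregated network decomposes cleanly into modules (high Q), whereas an integrated network does not. Below is an illustrative sketch, not an analysis from any cited study; it assumes the networkx library, and the planted-partition graphs merely stand in for a rest-like (modular) and a task-like (more integrated) connectome:

```python
import networkx as nx
from networkx.algorithms import community

# Synthetic stand-ins: four 25-node modules; the "task" network has
# five times denser between-module connections than the "rest" network.
rest = nx.planted_partition_graph(4, 25, p_in=0.30, p_out=0.02, seed=1)
task = nx.planted_partition_graph(4, 25, p_in=0.30, p_out=0.10, seed=1)

def segregation(g):
    """Modularity Q of a greedy partition: higher Q = more segregated."""
    partition = community.greedy_modularity_communities(g)
    return community.modularity(g, partition)

print(f"rest Q = {segregation(rest):.3f}, task Q = {segregation(task):.3f}")
```

The denser between-module wiring of the task-like graph lowers Q, mirroring the reported shift from segregation toward integration under working memory load.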

Network Flexibility and Dynamics

Recent discoveries in network neuroscience motivate a new perspective about the role of global network dynamics in general intelligence – marking an important point of departure from the standard view that


intelligence originates from individual differences in a fixed set of cortical regions (Duncan et al., 2000) or a singular brain network (Barbey et al., 2012; Duncan, 2010; Jung & Haier, 2007) (Table 6.1). Accumulating evidence instead suggests that network efficiency and dynamics are critical for the diverse range of mental abilities underlying general intelligence (for earlier research on brain network efficiency using PET, see Haier et al., 1988).

Network Dynamics of Crystallized Intelligence

Global information processing is enabled by the hierarchical community structure of the human brain, with modules that are embedded within modules to form complex, interconnected networks (Betzel & Bassett, 2017; Meunier, Lambiotte, & Bullmore, 2010). This infrastructure is supported, in part, by nodes of high connectivity or hubs (Buckner et al., 2009; Hilger, Ekman, Fiebach, & Basten, 2017a, b; Power, Schlaggar, Lessov-Schlaggar, & Petersen, 2013; van den Heuvel & Sporns, 2013). These regions serve distinct roles either as provincial hubs, which primarily connect to nodes within the same module, or as connector hubs, which instead provide a link between distinct modules (Guimera & Nunes Amaral, 2005). Hubs are therefore essential for transferring information within and between ICNs and provide the basis for mutual interactions between cognitive processes (Bertolero, Yeo, & D’Esposito, 2015; van der Maas et al., 2006). Indeed, strongly connected hubs together comprise a rich club network that mediates almost 70% of the shortest paths throughout the brain and is therefore important for global network efficiency (van den Heuvel & Sporns, 2011).

By applying engineering methods to network neuroscience, research from the field of network control theory further elucidates how brain network dynamics are shaped by the topology of strongly connected hubs, examining their capacity to act as drivers (network controllers) that move the system into specific network states (Gu et al., 2015). According to this approach, the hierarchical community structure of the brain may facilitate or constrain the transition from one network state to another, for example, by enabling a direct path that requires minimal transitions (an easy-to-reach network state) or a winding path that requires many transitions (a difficult-to-reach network state).
Thus, by investigating how the brain is organized to form topologically direct or indirect pathways (comprising short- and long-distance connections), powerful inferences about the flexibility and dynamics of ICNs can be drawn. Recent studies applying this approach demonstrate that strongly connected hubs enable a network to function within many easy-to-reach states (Gu et al., 2015), engaging highly accessible representations of prior knowledge and experience that are a hallmark of crystallized intelligence (Carroll, 1993; Cattell, 1971; McGrew & Wendling, 2010). Extensive neuroscience data indicate that the topology of brain networks is shaped by learning and prior


experience – reflecting the formation of new neurons, synapses, connections, and blood supply pathways that promote the accessibility of crystallized knowledge (Bassett et al., 2011; Buchel, Coull, & Friston, 1999; Pascual-Leone, Amedi, Fregni, & Merabet, 2005). The capacity to engage easy-to-reach network states – and therefore to access crystallized knowledge – is exhibited by multiple ICNs, most prominently for the default mode network (Betzel, Gu, Medaglia, Pasqualetti, & Bassett, 2016; Gu et al., 2015) (Figure 6.2b). This network is known to support semantic and episodic memory representations that are central to crystallized intelligence (Christoff, Irving, Fox, Spreng, & Andrews-Hanna, 2016; Kucyi, 2018; St Jacques, Kragel, & Rubin, 2011; Wirth et al., 2011) and to provide a baseline, resting state from which these representations can be readily accessed. Thus, according to this view, crystallized abilities depend on accessing prior knowledge and experience through the engagement of easily reachable network states, supported, for example, by strongly connected hubs within the default mode network (Betzel, Gu et al., 2016; Gu et al., 2015).
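The easy-to-reach-state argument can be made concrete with the quantity Gu et al. (2015) call average controllability, which tends to be large for strongly connected hubs. The sketch below is a simplified, finite-horizon approximation in plain numpy (Gu et al. use the infinite-horizon controllability Gramian); the function name and the toy star network are illustrative assumptions, not material from the chapter:

```python
import numpy as np

def average_controllability(A, horizon=50):
    """Approximate average controllability of each node: the trace of the
    controllability Gramian for single-node input, truncated to a finite
    horizon, i.e. the sum over t of ||A_norm^t e_i||^2."""
    # Scale A so the linear dynamics x(t+1) = A_norm x(t) + B u(t) are stable.
    A_norm = A / (1.0 + np.linalg.svd(A, compute_uv=False)[0])
    n = A.shape[0]
    ac = np.ones(n)               # t = 0 term: ||e_i||^2 = 1
    M = np.eye(n)
    for _ in range(horizon):
        M = A_norm @ M            # M = A_norm^t
        ac += (M ** 2).sum(axis=0)  # column i gives ||A_norm^t e_i||^2
    return ac

# Toy network: node 0 is a hub linked to four peripheral nodes.
A = np.zeros((5, 5))
A[0, 1:] = A[1:, 0] = 1.0
ac = average_controllability(A)
print(ac.round(3))  # the hub (node 0) scores highest
```

The hub's higher score is consistent with the report that average controllability, and hence access to many easy-to-reach states, tracks strongly connected hubs.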

Network Dynamics of Fluid Intelligence

Although the utility of strongly connected hubs is well-recognized, a growing body of evidence suggests that they may not fully capture the higher-order structure of brain network organization and the flexibility of information processing that this global structure is known to afford (Schneidman, Berry, Segev, & Bialek, 2006). Research in network science has long appreciated that global information processing depends on the formation of weak ties, which comprise nodes with a small number of connections (Bassett & Bullmore, 2006, 2017; Granovetter, 1973). By analogy to a social network, a weak tie represents a mutual acquaintance that connects two groups of close friends, providing a weak link between multiple modules. In contrast to the intuition that strong connections are optimal for network function, the introduction of weak ties is known to produce a more globally efficient small-world topology (Gallos, Makse, & Sigman, 2012; Granovetter, 1973).

Research investigating their role in brain network dynamics further indicates that weak connections enable the system to function within many difficult-to-reach states (Gu et al., 2015), reflecting a capacity to adapt to novel situations by engaging mechanisms for flexible, intelligent behavior. Unlike the easily reachable network states underlying crystallized intelligence, difficult-to-reach states rely on connections and pathways that are not well-established from prior experience – instead requiring the adaptive selection and assembly of new representations that introduce high cognitive demands. The capacity to access difficult-to-reach states is exhibited by multiple ICNs, most notably the fronto-parietal and cingulo-opercular networks (Gu et al., 2015) (Figure 6.2b). Together, these networks are known to support cognitive control, enabling the top-down regulation and control of mental operations (engaging the


fronto-parietal network) in response to environmental change and adaptive task goals (maintained by the cingulo-opercular network) (Dosenbach, Fair, Cohen, Schlaggar, & Petersen, 2008). Converging evidence from resting-state fMRI and human lesion studies strongly implicates the fronto-parietal network in cognitive control, demonstrating that this network accounts for individual differences in adaptive reasoning and problem-solving – assessed by fMRI measures of global efficiency (Cole et al., 2012; Santarnecchi et al., 2014; van den Heuvel, Stam, et al., 2009) and structural measures of brain integrity (Barbey, Colom, Paul, & Grafman, 2014; Barbey et al., 2012, 2013a; Glascher et al., 2010). From this perspective, the fronto-parietal network’s role in fluid intelligence reflects a global, system-wide capacity to adapt to novel environments, engaging cognitive control mechanisms that guide the dynamic selection and assembly of mental operations required for goal achievement (Duncan, Chylinski, Mitchell, & Bhandari, 2017).

Thus, rather than attempt to localize individual differences in fluid intelligence to a specific brain network, this framework instead suggests that weak connections within the fronto-parietal and cingulo-opercular networks (Cole et al., 2012; Santarnecchi et al., 2014) drive global network dynamics – flexibly engaging difficult-to-reach states in the service of adaptive behavior and providing a window into the architecture of individual differences in general intelligence at a global level.

Network Dynamics of General Intelligence

Recent discoveries in network neuroscience motivate a new perspective about the role of global network dynamics in general intelligence – breaking away from standard theories that account for individual differences in g on the basis of a single brain region (Duncan et al., 2000), a primary brain network (Barbey et al., 2012; Duncan, 2010; Jung & Haier, 2007), or the overlap among specific networks (Kovacs & Conway, 2016). Accumulating evidence instead suggests that network flexibility and dynamics are critical for the diverse range of mental abilities underlying general intelligence.

According to Network Neuroscience Theory, the capacity of ICNs to transition between network states is supported by their small-world topology, which enables each network to operate in a critical state that is close to a phase transition between a regular and random network (Beggs, 2008; Petermann et al., 2009) (Figure 6.1). The transition toward a regular network configuration is associated with the engagement of specific cognitive abilities, whereas the transition toward a random network configuration is linked to the engagement of broad or general abilities (Figure 6.1). Rather than reflect a uniform topology of dynamic states, emerging evidence suggests that ICNs exhibit different degrees of variability (Betzel, Gu et al., 2016; Mattar, Betzel, & Bassett, 2016) – elucidating the network architecture that supports flexible, time-varying profiles of functional


connectivity. Connections between modules are known to fluctuate more than connections within modules, demonstrating greater dynamic variability for connector hubs relative to provincial hubs (Zalesky, Fornito, Cocchi, Gollo, & Breakspear, 2014; Zhang et al., 2016). Thus, the modular community structure of specific mental abilities provides a stable foundation upon which the more flexible, small-world topology of broad mental abilities is constructed (Hampshire, Highfield, Parkin, & Owen, 2012). The dynamic flexibility of ICNs underlying broad mental abilities (Figure 6.2b) is known to reflect their capacity to access easy- vs. difficult-to-reach states, with greatest dynamic flexibility exhibited by networks that are strongly associated with fluid intelligence, particularly the fronto-parietal network (Figure 6.3) (Braun et al., 2015; Cole et al., 2013; Shine et al., 2016).
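The provincial/connector distinction drawn above is commonly quantified with the participation coefficient of Guimera and Nunes Amaral (2005): P_i = 1 − Σ_s (k_is / k_i)², where k_is is node i's number of links into module s. The following is a minimal numpy sketch on a hand-built two-module toy network; the network and module labels are illustrative, not data from any cited study:

```python
import numpy as np

def participation_coefficient(A, labels):
    """P_i = 1 - sum over modules s of (k_is / k_i)^2, where k_is is the
    number of links node i has into module s (Guimera & Nunes Amaral, 2005)."""
    k = A.sum(axis=1)                         # total degree of each node
    p = np.ones(len(k))
    for s in np.unique(labels):
        k_is = A[:, labels == s].sum(axis=1)  # links into module s
        p -= (k_is / k) ** 2
    return p

# Two 4-node cliques; node 0 also links into the second module,
# making it a connector hub, while node 1 remains provincial.
A = np.zeros((8, 8))
A[:4, :4] = 1.0
A[4:, 4:] = 1.0
np.fill_diagonal(A, 0.0)
A[0, 4] = A[4, 0] = 1.0
A[0, 5] = A[5, 0] = 1.0
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
p = participation_coefficient(A, labels)
print(p.round(3))  # node 0 (connector) scores high; node 1 (provincial) scores 0
```

A node whose links stay within one module gets P = 0; the more evenly a node's links spread across modules, the closer P approaches 1, which is why between-module connections dominate the dynamic variability attributed to connector hubs.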

Figure 6.3 Dynamic functional connectivity. (A) Standard deviation in resting-state BOLD fMRI reveals regions of low (blue), moderate (green), and high (red) variability. (B) Dynamic functional connectivity matrices are derived by windowing time series and estimating the functional connectivity between pairs of regions. Rather than remain static, functional connectivity matrices demonstrate changes over time, revealing dynamic variability in the connectivity profile of specific brain regions. (C) Dynamic functional connectivity matrices can be used to assess the network’s modular structure at each time point, revealing regions of low or high temporal dynamics. Figure modified with permission from Mattar et al. (2016)
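The windowing procedure in panel (B) of Figure 6.3 can be sketched in a few lines of numpy. This is a generic illustration with synthetic data (random series stand in for BOLD time courses; the window and step sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_regions = 300, 10
win, step = 60, 10
ts = rng.standard_normal((n_timepoints, n_regions))  # synthetic "BOLD" series

# Dynamic functional connectivity: one correlation matrix per sliding window.
starts = range(0, n_timepoints - win + 1, step)
dfc = np.stack([np.corrcoef(ts[t:t + win].T) for t in starts])

# Temporal variability of each connection: edge-wise std across windows
# (cf. the low/high-variability regions in Figure 6.3a).
variability = dfc.std(axis=0)
print(dfc.shape, variability.shape)  # (windows, regions, regions), (regions, regions)
```

The stack of windowed matrices is exactly the object on which time-resolved modular structure (Figure 6.3c) is subsequently estimated.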


Functional Brain Network Reconfiguration

A growing body of research examines the dynamic reconfiguration of brain networks in the service of goal-directed, intelligent behavior. Recent findings indicate that the functional reconfiguration of brain networks (i.e., greater network flexibility) is positively associated with learning and performance on tests of executive function. For example, Bassett et al. (2011) found that functional network flexibility (as measured by changes in the modular structure of brain networks) predicted future learning in a simple motor task. Converging evidence is provided by Braun et al. (2015), who examined functional brain network reconfiguration in a continuous recognition memory task (i.e., n-back) and observed that higher cognitive load was associated with greater network reorganization within frontal cortex. In addition, Jia, Hu, and Deshpande (2014) examined functional brain network dynamics in the context of resting-state fMRI, investigating the stability of connections over time. The authors found that performance on tests of executive function was associated with the average stability of connections examined at the whole-brain level, with greater brain network reconfiguration (i.e., lower stability) predicting higher performance. Notably, the highest level of functional brain network reconfiguration was observed within the fronto-parietal network (Jia et al., 2014; see also Hilger, Fukushima, Sporns, & Fiebach, 2020). Taken together, these findings support the role of flexible brain network reconfiguration in goal-directed, intelligent behavior.

Additional evidence to support this conclusion is provided by studies that investigate the efficiency of functional brain network reconfiguration in the context of task performance. For example, Schultz and Cole (2016) examined the similarity between functional connectivity patterns observed at rest vs. during three task conditions (language, working memory, and reasoning).
The authors predicted that greater reconfiguration efficiency (as measured by the similarity between the resting-state and task-based connectomes) would be associated with better performance. Consistent with this prediction, the authors found that individuals with greater reconfiguration efficiency demonstrated better task performance and that this measure was positively associated with general intelligence. This finding emphasizes the importance of reconfiguration efficiency in task performance and supports the role of flexible, dynamic network mechanisms for general intelligence.

Network Neuroscience Theory motivates new predictions about the role of network dynamics in learning, suggesting that the early stages of learning depend on adaptive behavior and the engagement of difficult-to-reach network states, followed by the transfer of skills to easily reachable network states as knowledge and experience are acquired to guide problem-solving. Indeed, recent findings suggest that the development of fluid abilities from childhood to young adulthood is associated with individual differences in the flexible


reconfiguration of brain networks for fluid intelligence (Chai et al., 2017). A recent study by Finc et al. (2020) examined the dynamic reconfiguration of functional brain networks during working memory training, providing evidence that early stages of learning engage cognitive control networks for adaptive behavior, followed by increasing reliance upon the default mode network as knowledge and skills are acquired – supporting the predictions of Network Neuroscience Theory.

A primary direction for future research is to further elucidate how the flexible reconfiguration of brain networks is related to general intelligence, with particular emphasis on mechanisms for cognitive control. Although brain networks underlying cognitive control have been extensively studied, their precise role in specific, broad, and general facets of intelligence remains to be well characterized (Mill, Ito, & Cole, 2017). Future research therefore aims to integrate the wealth of psychological and psychometric evidence on the cognitive processes underlying general intelligence (Carroll, 1993) and cognitive control (Friedman & Miyake, 2017) with research on the network mechanisms underlying these processes (Barbey, Koenigs, & Grafman, 2013; Barbey et al., 2012, 2013b) in an effort to better characterize the cognitive and neurobiological foundations of general intelligence.
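The reconfiguration-efficiency measure of Schultz and Cole (2016) discussed earlier (the similarity between rest and task connectomes) reduces to correlating the unique elements of two connectivity matrices. Below is a hedged numpy sketch with synthetic data; the variable names and the noise model for the "task" series are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def fc_similarity(fc_a, fc_b):
    """Correlate the upper triangles of two functional connectivity
    matrices; higher similarity implies less reconfiguration between
    states (cf. Schultz & Cole, 2016)."""
    iu = np.triu_indices_from(fc_a, k=1)
    return np.corrcoef(fc_a[iu], fc_b[iu])[0, 1]

rng = np.random.default_rng(0)
ts_rest = rng.standard_normal((200, 12))                  # synthetic rest series
ts_task = ts_rest + 0.5 * rng.standard_normal((200, 12))  # "task" = perturbed rest
fc_rest = np.corrcoef(ts_rest.T)
fc_task = np.corrcoef(ts_task.T)
print(f"rest-task connectome similarity r = {fc_similarity(fc_rest, fc_task):.2f}")
```

In the study's logic, individuals whose task connectomes sit closer to their resting baseline (higher r) reconfigure more efficiently and perform better.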

Conclusion

Network Neuroscience Theory raises new possibilities for understanding the nature and mechanisms of human intelligence, suggesting that interdisciplinary research in the emerging field of network neuroscience can advance our understanding of one of the most profound problems of intellectual life: how individual differences in general intelligence – which give rise to the stunning diversity and uniqueness of human identity and personal expression – originate from the network organization of the human brain. The reviewed findings elucidate the global network architecture underlying individual differences in g, drawing upon recent studies investigating the small-world topology and dynamics of human brain networks. Rather than attribute individual differences in general intelligence to a single brain region (Duncan et al., 2000), a primary brain network (Barbey et al., 2012; Duncan, 2010; Jung & Haier, 2007), or the overlap among specific networks (Kovacs & Conway, 2016), the proposed theory instead suggests that general intelligence depends on the dynamic reorganization of ICNs – modifying their topology and community structure in the service of system-wide flexibility and adaptation (Table 6.1).

This framework sets the stage for new approaches to understanding individual differences in general intelligence and motivates important questions for future research, namely:

• What are the neurobiological foundations of individual differences in g? Does the assumption that g originates from a primary brain region


or network remain tenable, or should theories broaden the scope of their analysis to incorporate evidence from network neuroscience on individual differences in the global topology and dynamics of the human brain?

• To what extent do brain network dynamics account for individual differences in specific, broad, and general facets of intelligence, and do mechanisms for cognitive control figure prominently? To gain a better understanding of this issue, a more fundamental characterization of network dynamics will be necessary.

• In what respects are ICNs dynamic, how do strong and weak connections enable specific network transformations, and what mental abilities do network dynamics support?

• How does the structural topology of ICNs shape their functional dynamics and the capacity to flexibly transition between network states? To what extent is our current understanding of network dynamics limited by an inability to measure more precise temporal profiles or to capture higher-order representations of network topology at a global level?

As the significance and scope of these issues would suggest, many fundamental questions about the nature and mechanisms of human intelligence remain to be investigated and provide a catalyst for contemporary research in network neuroscience. By investigating the foundations of general intelligence in global network dynamics, the burgeoning field of network neuroscience will continue to advance our understanding of the cognitive and neural architecture from which the remarkable constellation of individual differences in human intelligence emerges.

Acknowledgments

This work was supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract 2014-13121700004 to the University of Illinois at Urbana-Champaign (PI: Barbey) and the Department of Defense, Defense Advanced Research Projects Agency (DARPA), via Contract 2019HR00111990067 to the University of Illinois at Urbana-Champaign (PI: Barbey). The views and conclusions contained herein are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, DARPA, or the US Government. The US Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Preparation of this chapter was based on and adapted from research investigating the Network Neuroscience Theory of human intelligence (Barbey, 2018).


a. k. barbey

References
Achard, S., & Bullmore, E. (2007). Efficiency and cost of economical brain functional networks. PLoS Computational Biology, 3, e17. Achard, S., Salvador, R., Whitcher, B., Suckling, J., & Bullmore, E. (2006). A resilient, low-frequency, small-world human brain functional network with highly connected association cortical hubs. Journal of Neuroscience, 26(1), 63–72. Avena-Koenigsberger, A., Yan, X., Kolchinsky, A., van den Heuvel, M. P., Hagmann, P., & Sporns, O. (2019). A spectrum of routing strategies for brain networks. PLoS Computational Biology, 15, e1006833. Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20. Barbey, A. K., Belli, T., Logan, A., Rubin, R., Zamroziewicz, M., & Operskalski, T. (2015). Network topology and dynamics in traumatic brain injury. Current Opinion in Behavioral Sciences, 4, 92–102. Barbey, A. K., Colom, R., & Grafman, J. (2013a). Architecture of cognitive flexibility revealed by lesion mapping. Neuroimage, 82, 547–554. Barbey, A. K., Colom, R., & Grafman, J. (2013b). Dorsolateral prefrontal contributions to human intelligence. Neuropsychologia, 51(7), 1361–1369. Barbey, A. K., Colom, R., Paul, E. J., & Grafman, J. (2014). Architecture of fluid intelligence and working memory revealed by lesion mapping. Brain Structure and Function, 219(2), 485–494. Barbey, A. K., Colom, R., Solomon, J., Krueger, F., Forbes, C., & Grafman, J. (2012). An integrative architecture for general intelligence and executive function revealed by lesion mapping. Brain, 135(4), 1154–1164. Barbey, A. K., Koenigs, M., & Grafman, J. (2013c). Dorsolateral prefrontal contributions to human working memory. Cortex, 49(5), 1195–1205. Bassett, D. S., & Bullmore, E. (2006). Small-world brain networks. Neuroscientist, 12(6), 512–523. Bassett, D. S., & Bullmore, E. T. (2009). Human brain networks in health and disease. Current Opinion in Neurology, 22(4), 340–347. Bassett, D. S., & Bullmore, E. T.
(2017). Small-world brain networks revisited. Neuroscientist, 23(5), 499–516. Bassett, D. S., & Gazzaniga, M. S. (2011). Understanding complexity in the human brain. Trends in Cognitive Sciences, 15(5), 200–209. Bassett, D. S., Wymbs, N. F., Porter, M. A., Mucha, P. J., Carlson, J. M., & Grafton, S. T. (2011). Dynamic reconfiguration of human brain networks during learning. Proceedings of the National Academy of Sciences USA, 108(18), 7641–7646. Beggs, J. M. (2008). The criticality hypothesis: How local cortical networks might optimize information processing. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Science, 366(1864), 329–343. Bertolero, M. A., Yeo, B. T., & D’Esposito, M. (2015). The modular and integrative functional architecture of the human brain. Proceedings of the National Academy of Sciences USA, 112(49), E6798–6807. Betzel, R. F., & Bassett, D. S. (2017). Multi-scale brain networks. Neuroimage, 160, 73–83.

Human Intelligence and Network Neuroscience

Betzel, R. F., Gu, S., Medaglia, J. D., Pasqualetti, F., & Bassett, D. S. (2016). Optimally controlling the human connectome: the role of network topology. Science Reports, 6, 30770. Betzel, R. F., Satterthwaite, T. D., Gold, J. I., & Bassett, D. S. (2016). A positive mood, a flexible brain. arXiv preprint. Biswal, B., Yetkin, F. Z., Haughton, V. M., & Hyde, J. S. (1995). Functional connectivity in the motor cortex of resting human brain using echo-planar MRI. Magnetic Resonance Medicine, 34(4), 537–541. Braun, U., Schäfer, A., Walter, H., Erk, S., Romanczuk-Seiferth, N., Haddad, L., . . . Bassett, D. S. (2015). Dynamic reconfiguration of frontal brain networks during executive cognition in humans. Proceedings of the National Academy of Sciences USA, 112(37), 11678–11683. Breakspear, M. (2017). Dynamic models of large-scale brain activity. Nature Neuroscience, 20, 340–352. Buchel, C., Coull, J. T., & Friston, K. J. (1999). The predictive value of changes in effective connectivity for human learning. Science, 283(5407), 1538–1541. Buckner, R. L., Sepulcre, J., Talukdar, T., Krienen, F. M., Liu, H., Hedden, T., . . . Johnson, K. A. (2009). Cortical hubs revealed by intrinsic functional connectivity: Mapping, assessment of stability, and relation to Alzheimer’s disease. Journal of Neuroscience, 29(6), 1860–1873. Bullmore, E., & Sporns, O. (2009). Complex brain networks: Graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10, 186–198. Bullmore, E., & Sporns, O. (2012). The economy of brain network organization. Nature Reviews Neuroscience, 13, 336–349. Byrge, L., Sporns, O., & Smith, L. B. (2014). Developmental process emerges from extended brain-body-behavior networks. Trends in Cognitive Sciences, 18(8), 395–403. Cabral, J., Kringelbach, M. L., & Deco, G. (2017). Functional connectivity dynamically evolves on multiple time-scales over a static structural connectome: Models and mechanisms. Neuroimage, 160, 84–96. Carroll, J. 
B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge University Press. Cattell, R. B. (1971). Abilities: Their structure, growth, and action. Boston: Houghton Mifflin. Chai, L. R., Khambhati, A. N., Ciric, R., Moore, T. M., Gur, R. C., Gur, R. E., . . . Bassett, D. S. (2017). Evolution of brain network dynamics in neurodevelopment. Network Neuroscience, 1(1), 14–30. Christoff, K., Irving, Z. C., Fox, K. C., Spreng, R. N., & Andrews-Hanna, J. R. (2016). Mind-wandering as spontaneous thought: A dynamic framework. Nature Reviews Neuroscience, 17, 718–731. Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204. Cohen, J. R., & D’Esposito, M. (2016). The segregation and integration of distinct brain networks and their relationship to cognition. Journal of Neuroscience, 36, 12083–12094.


Cohen, J. R., Gallen, C. L., Jacobs, E. G., Lee, T. G., & D’Esposito, M. (2014). Quantifying the reconfiguration of intrinsic networks during working memory. PLoS One, 9, e106636. Cole, M. W., Ito, T., & Braver, T. S. (2015). Lateral prefrontal cortex contributes to fluid intelligence through multinetwork connectivity. Brain Connectivity, 5(8), 497–504. Cole, M. W., Reynolds, J. R., Power, J. D., Repovs, G., Anticevic, A., & Braver, T. S. (2013). Multi-task connectivity reveals flexible hubs for adaptive task control. Nature Neuroscience, 16(9), 1348–1355. Cole, M. W., Yarkoni, T., Repovs, G., Anticevic, A., & Braver, T. S. (2012). Global connectivity of prefrontal cortex predicts cognitive control and intelligence. Journal of Neuroscience, 32(26), 8988–8999. Deco, G., & Corbetta, M. (2011). The dynamical balance of the brain at rest. Neuroscientist, 17(1), 107–123. Deco, G., Jirsa, V. K., & McIntosh, A. R. (2013). Resting brains never rest: Computational insights into potential cognitive architectures. Trends in Neurosciences, 36(5), 268–274. Deco, G., Tononi, G., Boly, M., & Kringelbach, M. L. (2015). Rethinking segregation and integration: Contributions of whole-brain modelling. Nature Reviews Neuroscience, 16, 430–439. Dosenbach, N. U., Fair, D. A., Cohen, A. L., Schlaggar, B. L., & Petersen, S. E. (2008). A dual-networks architecture of top-down control. Trends in Cognitive Sciences, 12(3), 99–105. Duncan, J. (2010). The multiple-demand (MD) system of the primate brain: Mental programs for intelligent behaviour. Trends in Cognitive Sciences, 14(4), 172–179. Duncan, J., Chylinski, D., Mitchell, D. J., & Bhandari, A. (2017). Complexity and compositionality in fluid intelligence. Proceedings of the National Academy of Sciences USA, 114(20), 5295–5299. Duncan, J., & Owen, A. M. (2000). Common regions of the human frontal lobe recruited by diverse cognitive demands. Trends in Neurosciences, 23(10), 475–483. Duncan, J., Seitz, R.
J., Kolodny, J., Bor, D., Herzog, H., Ahmed, A., . . . Emslie, H. (2000). A neural basis for general intelligence. Science, 289(5478), 457–460. Eguiluz, V. M., Chialvo, D. R., Cecchi, G. A., Baliki, M., & Apkarian, A. V. (2005). Scale-free brain functional networks. Physical Review Letters, 94, 018102. Finc, K., Bonna, K., He, X., Lydon-Staley, D. M., Kuhn, S., Duch, W., & Bassett, D. S. (2020). Dynamic reconfiguration of functional brain networks during working memory training. Nature Communications, 11, 2435. Friedman, N. P., & Miyake, A. (2017). Unity and diversity of executive functions: Individual differences as a window on cognitive structure. Cortex, 86, 186–204. Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11, 127–138. Gallos, L. K., Makse, H. A., & Sigman, M. (2012). A small world of weak ties provides optimal global integration of self-similar modules in functional brain networks. Proceedings of the National Academy of Sciences USA, 109(8), 2825–2830.


Girn, M., Mills, C., & Christoff, K. (2019). Linking brain network reconfiguration and intelligence: Are we there yet? Trends in Neuroscience and Education, 15, 62–70. Glascher, J., Rudrauf, D., Colom, R., Paul, L. K., Tranel, D., Damasio, H., & Adolphs, R. (2010). Distributed neural system for general intelligence revealed by lesion mapping. Proceedings of the National Academy of Sciences USA, 107(10), 4705–4709. Gordon, E. M., Stollstorff, M., & Vaidya, C. J. (2012). Using spatial multiple regression to identify intrinsic connectivity networks involved in working memory performance. Human Brain Mapping, 33(7), 1536–1552. Granovetter, M. S. (1973). The strength of weak ties. American Journal of Sociology, 78(6), 1360–1380. Greene, A. S., Gao, S., Scheinost, D., & Constable, R. T. (2018). Task-induced brain state manipulation improves prediction of individual traits. Nature Communications, 9, 2807. Gu, S., Pasqualetti, F., Cieslak, M., Telesford, Q. K., Yu, A. B., Kahn, A. E., . . . Bassett, D. S. (2015). Controllability of structural brain networks. Nature Communications, 6, 8414. Guimera, R., & Nunes Amaral, L. A. (2005). Functional cartography of complex metabolic networks. Nature, 433, 895–900. Hagmann, P., Kurant, M., Gigandet, X., Thiran, P., Wedeen, V. J., Meuli, R., & Thiran, J. P. (2007). Mapping human whole-brain structural networks with diffusion MRI. PLoS One, 2, e597. Haier, R. J. (2017). The neuroscience of intelligence. Cambridge University Press. Haier, R. J., Siegel, B. V., Nuechterlein, K. H., Hazlett, E., Wu, J. C., Paek, J., . . . Buchsbaum, M. S. (1988). Cortical glucose metabolic-rate correlates of abstract reasoning and attention studied with positron emission tomography. Intelligence, 12(2), 199–217. Hampshire, A., Highfield, R. R., Parkin, B. L., & Owen, A. M. (2012). Fractionating human intelligence. Neuron, 76(6), 1225–1237. He, Y., Chen, Z. J., & Evans, A. C. (2007).
Small-world anatomical networks in the human brain revealed by cortical thickness from MRI. Cerebral Cortex, 17(10), 2407–2419. Hilger, K., Ekman, M., Fiebach, C. J., & Basten, U. (2017a). Efficient hubs in the intelligent brain: Nodal efficiency of hub regions in the salience network is associated with general intelligence. Intelligence, 60, 10–25. Hilger, K., Ekman, M., Fiebach, C. J., & Basten, U. (2017b). Intelligence is associated with the modular structure of intrinsic brain networks. Science Reports, 7(1), 16088. Hilger, K., Fukushima, M., Sporns, O., & Fiebach, C. J. (2020). Temporal stability of functional brain modules associated with human intelligence. Human Brain Mapping, 41(2), 362–372. Jia, H., Hu, X., & Deshpande, G. (2014). Behavioral relevance of the dynamics of the functional brain connectome. Brain Connectivity, 4(9), 741–759. Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154; discussion 154–187.


Kovacs, K., & Conway, A. R. A. (2016). Process overlap theory: A unified account of the general factor of intelligence. Psychological Inquiry, 27(3), 151–177. Kruschwitz, J., Waller, L., Daedelow, L., Walter, H., & Veer, I. (2018). General, crystallized and fluid intelligence are not associated with functional global network efficiency: A replication study with the human connectome project 1200 data set. Neuroimage, 171, 323–331. Kucyi, A. (2018). Just a thought: How mind-wandering is represented in dynamic brain connectivity. Neuroimage, 180(Pt B), 505–514. Langer, N., Pedroni, A., Gianotti, L. R., Hänggi, J., Knoch, D., & Jäncke, L. (2012). Functional brain network efficiency predicts intelligence. Human Brain Mapping, 33(6), 1393–1406. Langer, N., Pedroni, A., & Jancke, L. (2013). The problem of thresholding in small-world network analysis. PLoS One, 8, e53199. Latora, V., & Marchiori, M. (2001). Efficient behavior of small-world networks. Physical Review Letters, 87(19), 198701. Liang, X., Zou, Q., He, Y., & Yang, Y. (2016). Topologically reorganized connectivity architecture of default-mode, executive-control, and salience networks across working memory task loads. Cerebral Cortex, 26(4), 1501–1511. Mattar, M. G., Betzel, R. F., & Bassett, D. S. (2016). The flexible brain. Brain, 139(8), 2110–2112. McGrew, K. S., & Wendling, B. J. (2010). Cattell-Horn-Carroll cognitive-achievement relations: What we have learned from the past 20 years of research. Psychology in the Schools, 47(7), 651–675. Meunier, D., Lambiotte, R., & Bullmore, E. T. (2010). Modular and hierarchically modular organization of brain networks. Frontiers in Neuroscience, 4, 200. Mill, R. D., Ito, T., & Cole, M. W. (2017). From connectome to cognition: The search for mechanism in human functional brain networks. Neuroimage, 160, 124–139. Park, H. J., & Friston, K. (2013). Structural and functional brain networks: From connections to cognition. Science, 342(6158), 1238411.
Pascual-Leone, A., Amedi, A., Fregni, F., & Merabet, L. B. (2005). The plastic human brain cortex. Annual Review of Neuroscience, 28, 377–401. Petermann, T., Thiagarajan, T. C., Lebedev, M. A., Nicolelis, M. A., Chialvo, D. R., & Plenz, D. (2009). Spontaneous cortical activity in awake monkeys composed of neuronal avalanches. Proceedings of the National Academy of Sciences USA, 106(37), 15921–15926. Posner, M. I., & Barbey, A. K. (2020). General intelligence in the age of neuroimaging. Trends in Neuroscience and Education, 18, 100126. Power, J. D., Cohen, A. L., Nelson, S. M., Wig, G. S., Barnes, K. A., Church, J. A., . . . Petersen, S. E. (2011). Functional network organization of the human brain. Neuron, 72(4), 665–678. Power, J. D., & Petersen, S. E. (2013). Control-related systems in the human brain. Current Opinion in Neurobiology, 23(2), 223–228. Power, J. D., Schlaggar, B. L., Lessov-Schlaggar, C. N., & Petersen, S. E. (2013). Evidence for hubs in human functional brain networks. Neuron, 79(4), 798–813.


Ramón y Cajal, S., Pasik, P., & Pasik, T. (1999). Texture of the nervous system of man and the vertebrates. Wien: Springer. Robinson, P. A., Henderson, J. A., Matar, E., Riley, P., & Gray, R. T. (2009). Dynamical reconnection and stability constraints on cortical network architecture. Physical Review Letters, 103, 108104. Santarnecchi, E., Galli, G., Polizzotto, N. R., Rossi, A., & Rossi, S. (2014). Efficiency of weak brain connections support general cognitive functioning. Human Brain Mapping, 35(9), 4566–4582. Schneidman, E., Berry, M. J., 2nd, Segev, R., & Bialek, W. (2006). Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440, 1007–1012. Schultz, D. H., & Cole, M. W. (2016). Higher intelligence is associated with less task-related brain network reconfiguration. Journal of Neuroscience, 36(33), 8551–8561. Shine, J. M., Bissett, P. G., Bell, P. T., Koyejo, O., Balsters, J. H., Gorgolewski, K. J., . . . Poldrack, R. A. (2016). The dynamics of functional brain networks: Integrated network states during cognitive task performance. Neuron, 92(2), 544–554. Simon, H. (1962). The architecture of complexity. Proceedings of the American Philosophical Society, 106(6), 467–482. Smith, S. M., Beckmann, C. F., Andersson, J., Auerbach, E. J., Bijsterbosch, J., Douaud, G., . . . WU-Minn HCP Consortium (2013). Resting-state fMRI in the Human Connectome Project. Neuroimage, 80, 144–168. Song, M., Zhou, Y., Li, J., Liu, Y., Tian, L., Yu, C., & Jiang, T. (2008). Brain spontaneous functional connectivity and intelligence. Neuroimage, 41(3), 1168–1176. Sporns, O., Chialvo, D. R., Kaiser, M., & Hilgetag, C. C. (2004). Organization, development and function of complex brain networks. Trends in Cognitive Sciences, 8(9), 418–425. Sporns, O., Tononi, G., & Edelman, G. M. (2000a). Connectivity and complexity: The relationship between neuroanatomy and brain dynamics. Neural Networks, 13(8–9), 909–922. Sporns, O., Tononi, G., & Edelman, G. M.
(2000b). Theoretical neuroanatomy: Relating anatomical and functional connectivity in graphs and cortical connection matrices. Cerebral Cortex, 10(2), 127–141. St Jacques, P. L., Kragel, P. A., & Rubin, D. C. (2011). Dynamic neural networks supporting memory retrieval. Neuroimage, 57(2), 608–616. Stam, C. J. (2014). Modern network science of neurological disorders. Nature Reviews Neuroscience, 15, 683–695. Stam, C. J., Jones, B. F., Nolte, G., Breakspear, M., & Scheltens, P. (2007). Small-world networks and functional connectivity in Alzheimer’s disease. Cerebral Cortex, 17(1), 92–99. van den Heuvel, M. P., Mandl, R. C., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Functionally linked resting-state networks reflect the underlying structural connectivity architecture of the human brain. Human Brain Mapping, 30(10), 3127–3141.


van den Heuvel, M. P., & Sporns, O. (2011). Rich-club organization of the human connectome. Journal of Neuroscience, 31(44), 15775–15786. van den Heuvel, M. P., & Sporns, O. (2013). Network hubs in the human brain. Trends in Cognitive Sciences, 17(12), 683–696. van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. Journal of Neuroscience, 29(23), 7619–7624. van der Maas, H. L., Dolan, C. V., Grasman, R. P., Wicherts, J. M., Huizenga, H. M., & Raijmakers, M. E. (2006). A dynamical model of general intelligence: The positive manifold of intelligence by mutualism. Psychological Review, 113(4), 842–861. Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of “small-world” networks. Nature, 393, 440–442. Wirth, M., Jann, K., Dierks, T., Federspiel, A., Wiest, R., & Horn, H. (2011). Semantic memory involvement in the default mode network: A functional neuroimaging study using independent component analysis. Neuroimage, 54(4), 3057–3066. Xiao, L., Stephen, J. M., Wilson, T. W., Calhoun, V. D., & Wang, Y. P. (2019). Alternating diffusion map based fusion of multimodal brain connectivity networks for IQ prediction. IEEE Transactions on Biomedical Engineering, 66(8), 2140–2151. Zalesky, A., Fornito, A., Cocchi, L., Gollo, L. L., & Breakspear, M. (2014). Time-resolved resting-state brain networks. Proceedings of the National Academy of Sciences USA, 111(28), 10341–10346. Zhang, J., Cheng, W., Liu, Z., Zhang, K., Lei, X., Yao, Y., . . . Feng, J. (2016). Neural, electrophysiological and anatomical basis of brain-network variability and its characteristic changes in mental disorders. Brain, 139(8), 2307–2321. Zuo, X. N., He, Y., Betzel, R. F., Colcombe, S., Sporns, O., & Milham, M. P. (2017). Human connectomics across the life span. Trends in Cognitive Sciences, 21(1), 32–45.

7 It’s about Time: Towards a Longitudinal Cognitive Neuroscience of Intelligence
Rogier A. Kievit and Ivan L. Simpson-Kent

Introduction
The search for the biological properties that underlie intelligent behavior has held the scientific imagination at least since the pre-Socratic philosophers. Early hypotheses posited a crucial role for the heart (Aristotle; Gross, 1995), the ventricles (Galen; Rocca, 2009), and the “Heat, Moisture, and Driness” of the brain (Huarte, 1594). The advent of neuroimaging technology such as EEG, MEG, and MRI has provided more suitable tools to scientifically study the relationship between mind and brain. To date, many hundreds of studies have examined the association between brain structure and function on the one hand and individual differences in general cognitive abilities on the other. Both qualitative and quantitative reviews have summarized the cross-sectional associations between intelligence and brain volume (Pietschnig, Penke, Wicherts, Zeiler, & Voracek, 2015), as well as more network- and imaging-specific hypotheses which suggest a key role for the frontoparietal system in supporting individual differences in intelligence (Basten, Hilger, & Fiebach, 2015; Deary, Penke, & Johnson, 2010; Jung & Haier, 2007). These findings are bolstered by converging evidence from lesion studies (Barbey, Colom, Paul, & Grafman, 2014), cognitive abilities in disorders associated with physiological abnormalities (Kail, 1998), and the neural signatures associated with the rapid acquisition of new skills (Bengtsson et al., 2005). These innovations in neuroimaging coincided with the emergence of more dynamic, longitudinal models of the development of intelligence. Where seminal works on intelligence such as Spearman (1904) and Jensen’s (1998) The g Factor: The Science of Mental Ability barely discuss developmental change, new theories have begun to address the role of development to understand intelligence conceptually and empirically.
For instance, theories such as that of Dickens and Flynn (2001) suggest direct, reciprocal interactions between intelligence, genetic predisposition, and the environment over the lifespan. This model, where genetic predispositions lead to people self-stratifying to environments in line with their abilities, leads to amplification of initial differences, thus reconciling previously puzzling facts about heritability and environmental influences. Later, inspired by ecological models of predator–prey relationships,


Van der Maas et al. (2006) proposed the mutualism model, which suggests that general cognitive ability emerges, at least in part, due to positive reciprocal influences between lower cognitive faculties. In other words, a greater ability in one domain, such as vocabulary, may facilitate faster growth in others (memory, reasoning) through a range of mechanisms. Recent empirical studies (Ferrer & McArdle, 2004; Kievit, Hofman, & Nation, 2019; Kievit et al., 2017) in longitudinal samples as well as meta-analytic and narrative reviews (Peng & Kievit, 2020; Peng, Wang, Wang, & Lin, 2019) find support for the mutualism model, suggesting a key role for developmental dynamics in understanding cognitive ability. Converging evidence for the plausibility of such dynamic models comes from atypical populations. For instance, Ferrer, Shaywitz, Holahan, Marchione, and Shaywitz (2010) demonstrated that a subpopulation with dyslexia was characterized not (just) by differences in absolute performance in reading ability, but by an absence of positive, reciprocal effects between IQ and reading compared to typical controls. In other words, the atypical group is best understood as having atypical dynamic processes, which may ultimately manifest as cross-sectional differences. However, there is very little work on the intersection between these two innovative strands of intelligence research: How changes in brain structure and function go hand in hand with changes in cognitive abilities associated with intelligence. This is unfortunate, as to truly understand the nature of the relationship between emerging brains and minds, there is no substitute for longitudinal data (Raz & Lindenberger, 2011), where the same individuals undergo repeated sessions of neuroimaging as well as repeated assessments of higher cognitive abilities closely associated with intelligence.
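The core of the mutualism account can be made concrete with a small simulation. The sketch below is illustrative only (it follows the form of the coupled logistic growth equations in van der Maas et al., 2006, but all parameter values are arbitrary choices, not estimates from any study): four cognitive abilities per simulated person grow logistically and receive a small positive boost from each other's current levels. Although starting levels, growth rates, and capacities are drawn independently, the coupling alone induces positive correlations between abilities, i.e., a positive manifold.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_abilities = 500, 4

# Mutualism matrix M: small positive couplings between distinct abilities
M = np.full((n_abilities, n_abilities), 0.1)
np.fill_diagonal(M, 0.0)

# Person-specific growth rates (a), capacities (K), and starting levels (x),
# all drawn independently, so abilities begin uncorrelated across people
a = rng.uniform(0.5, 1.5, size=(n_people, n_abilities))
K = rng.uniform(5.0, 15.0, size=(n_people, n_abilities))
x = rng.uniform(0.5, 1.5, size=(n_people, n_abilities))

# Euler integration of dx_i/dt = a_i x_i (1 - x_i/K_i) + a_i x_i (sum_j M_ij x_j)/K_i
dt, n_steps = 0.01, 2000
for _ in range(n_steps):
    growth = a * x * (1.0 - x / K)   # logistic growth toward capacity
    boost = a * x * (x @ M.T) / K    # positive influence from the other abilities
    x = x + dt * (growth + boost)

# After development, every pair of abilities is positively correlated
corr = np.corrcoef(x, rowvar=False)
off_diagonal = corr[~np.eye(n_abilities, dtype=bool)]
print(off_diagonal.min() > 0)
```

Setting the coupling matrix to zero in the same simulation yields near-zero correlations, which is precisely the contrast the mutualism account turns on.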

Towards a Dynamic Cognitive Neuroscience of Intelligence
Our goal in this chapter is to examine longitudinal studies that study change in both cognitive ability and brain structure in childhood, adolescence, and early adulthood, when change in both domains is rapid. A similarly exciting question is neurocognitive aging at the other end of the lifespan – however, as that has recently been comprehensively reviewed (Oschwald et al., 2019), this chapter will focus on the period from early childhood to early adulthood. We will focus (with some partial exceptions) on studies that measure both cognitive ability and measures of brain structure at two or more time points. Such studies are sufficiently rare that we can survey them here comprehensively. Although we focus on intelligence, we do not limit ourselves to studies that use IQ scores, but rather studies that measure continuous measures of cognitive ability that are canonically considered closely related to intelligence (e.g., as defined by high standardized factor loadings in a hierarchical factor model). These include measures such as (working) memory, fluid reasoning, vocabulary, and processing speed, all of which


generally show steep developmental increases, and as such are likely most sensitive to contemporaneous brain changes. To understand the unfolding dynamics between cognitive and neural change requires longitudinal data as well as longitudinal methodology. As outlined in Kievit et al. (2018) (see Figure 7.1), we can conceptualize the relationship between brain change and cognitive change in terms of three key parameters which capture causal hypotheses: Brain structure driving cognitive change, cognitive ability leading to brain reorganization, and correlated change. First, current brain structure may be found to govern the rate of change in cognitive performance. This is what we would refer to as “structural scaffolding.” According to this hypothesis, the current state of the brain (most commonly captured by structural measures, but trait-like functional measures may serve a similar conceptual purpose) provides, in some sense, the preconditions that facilitate cognitive growth. Specifically, we would expect that individuals with “better” neural characteristics (e.g., high volume, greater

Figure 7.1 Simplified bivariate latent change score model illustrating the co-development of intelligence scores (top) and brain measures (bottom) across two waves. For more details, see Kievit et al. (2018)


white matter integrity, etc.) would show faster rates of cognitive gain. This effect can be quantified by means of a coupling parameter in a latent change score (LCS) model, a regression that quantifies the association between the current state of brain structure and the rate of change (delta) of the cognitive domain of interest (red, upward arrow in Figure 7.1). Alternatively, current cognitive performance may be associated with the rate of change of structural (or functional) brain metrics. This could be conceptualized as cognitive plasticity, or reorganization. For instance, achieving a greater level of cognitive ability may lead to more rapid reorganization of cortical structure to engrain, or solidify, these newly acquired abilities. Recent mechanistic proposals (Wenger, Brozzoli, Lindenberger, & Lövdén, 2017) have shown how a cascade of glial changes, dendritic branching, and axonal sprouting following rapid skill acquisition leads to volume expansion, followed by a period of renormalization. Such effects can be captured by a coupling parameter in an LCS model (blue, downward arrow in Figure 7.1). Both these parameters can be further expanded such that the recent rate of change in one domain governs future changes in another domain (Estrada, Ferrer, Román, Karama, & Colom, 2019; Grimm, An, McArdle, Zonderman, & Resnick, 2012). Finally, changes in brain structure and cognitive function may be correlated (yellow double-headed arrow in Figure 7.1). Although not all analytical approaches allow for the investigation of all these parameters, and the papers we discuss in Table 7.1 show considerable methodological heterogeneity, we find the analytical framework as sketched in Figure 7.1 a fruitful way to frame how associations over time unfold. We will discuss recent findings, grouped by imaging measure, which bear on these questions, and discuss avenues for future work.
The empirical papers we will discuss in the following section are described in more detail (sample sizes, age range, measures of interest) in Table 7.1, and an at-a-glance overview of the age ranges and number of occasions is provided in Figure 7.2.

Grey Matter
One of the earliest papers, Sowell et al. (2004), focused on mapping cortical changes across two years in 45 children aged 5–11 years. Greater cortical thinning was associated with more rapid gains in vocabulary, especially in the left hemisphere. A considerably larger study by Shaw et al. (2006) grouped 307 children into three “strata” of intelligence (high, medium, and low) and observed a complex pattern: Children with higher cognitive ability showed especially pronounced changes, with early steep increases in volume followed by steeper rates of cortical thinning afterwards. Notably, this process induced different brain–intelligence associations across development, with negative associations between cortical thickness and intelligence at early ages but positive associations later on in development.

Table 7.1 An overview of longitudinal studies of brain structure, function, and intelligence. For each study we show the abbreviated reference and details about cognitive and imaging measures used. If papers were ambiguous (e.g., only report SD instead of range), numbers reflect an informed estimate. Columns: reference; publication year; sample size (per wave); mean interval between waves (years); cognitive test(s); imaging metric; age range (years).

Beckwith and Parmelee (1986); 1986; 53/53/53/53/49; ~1; Gesell Developmental Scale (4, 9, 24 months), Stanford-Binet Intelligence Scale (ages 5–8), WISC (age 8); electroencephalogram (EEG); 0–8
Sowell et al. (2004); 2004; 45/45; ~2; WISC (vocabulary & block design); cortical thickness, brain volume; 5–11
Shaw et al. (2006); 2006; 307/178/92; ~2; WPPSI-III, WISC-III, WAIS-III; cortical thickness; 4–25
Brans et al. (2010); 2010; 242/183; 5; WAIS-III; cortical thickness, brain volume; 20–40
Ramsden et al. (2011); 2011; 33/33; ~3.5; WISC-III (wave 1), WAIS-III (wave 2); functional & structural MRI; 12–20
Tamnes, Walhovd, Grydeland et al. (2013); 2013; 79; —; verbal working memory; grey matter volume; 8–22
Burgaleta et al. (2014); 2014; 188/188; ~2; WASI; cortical thickness, cortical surface area; 6–22
Evans et al. (2015); 2015; 43/43/12/7; ~1; WASI, WIAT-II, digit recall, block recall, count recall, backward digit recall; brain volume, resting-state connectivity (fMRI); 8–14
Koenis et al. (2015); 2015; 162/162; ~3; WISC-III; fractional anisotropy, streamline count; 9–15
Schnack et al. (2015); 2015; 504/504; ~4; WISC-III, short-form WAIS & WAIS-III-NL; cortical thickness, cortical surface area; 9–60
Deoni et al. (2016); 2016; 257/126/39/15/4; ~0.75; Mullen Scales of Early Learning; DWI myelin water fraction (MWF); 0–5
Wendelken et al. (2017); 2017; 523/223; ~1.5; WASI Matrix Reasoning; fractional anisotropy, functional connectivity (fMRI); 6–22
Khundrakpam et al. (2017); 2017; 306/7/7; 2; WASI; cortical thickness; 6–18
Young et al. (2017); 2017; 75/39/18/29; ~5; WPPSI-III; fractional anisotropy; 0–29
Román et al. (2018); 2018; 132/132/132; ~2; WASI; cortical thickness, cortical surface area; 6–21
Tamnes et al. (2018); 2018; 237/224/217; ~2; WISC & WASI; brain volume; 8–29
Ferrer (2018); 2018; 201/121/71; ~1.5; WISC-R, Woodcock-Johnson Test of Achievement (WJ-R); fractional anisotropy, white matter volume; 5–21
Koenis et al. (2018); 2018; 310/255/130; ~3/~5; WISC-III, WAIS-III; fractional anisotropy; 9–23
Jaekel et al. (2019); 2019; 401; ~1/5; K-ABC (ages 6–8) & WAIS (age 26); head circumference; 0–26
Estrada et al. (2019); 2019; 430/430/430; ~2; WASI; cortical thickness and surface area; 6–22
Schmitt et al. (2019); 2019; 813 (up to 8 waves, 1,748 scans in total); ~3; WPPSI-III, WISC-R, WAIS; cortical thickness; 3–34
Borchers et al. (2019); 2019; 37/37; 2; WASI-II, language: CELF-4/CTOPP/GORT-5; fractional anisotropy; 6–8
Ritchie et al. (under review); 2019; 2,091/1,423; ~5; CANTAB, WISC, educational polygenic score; cortical thickness, cortical volume, surface area; 14–19
Hahn et al. (2019); 2019; 36/36; 6; WISC, WAIS; EEG sleep spindles; 8–18
Qi et al. (2019); 2019; 55/52/51; ~1; sentence comprehension test (Test zum Satzverstehen von Kindern, TSVK); cortical thickness; 5–7
Dai et al. (2019); 2019; 210/not known; unclear (likely similar to Deoni et al., 2016); Mullen Scales of Early Learning; DWI myelin water fraction (MWF); 0.1–4
Selmeczy et al. (2020); 2020; 90/83/75; 0.75–3.7; item-context association memory; memory task-based fMRI response; 8–14
Judd et al. (2020); 2020; 551/551; ~5; CANTAB; cortical thickness, surface area; 14–19
Madsen et al. (2020); 2020; 79/85/78/72/67/63/5/51/26; 0.5; stop signal reaction time (SSRT); fractional anisotropy, mean diffusivity, radial diffusivity; 7–19


r. a. kievit and i. l. simpson-kent


Figure 7.2 An overview of longitudinal studies of brain structure, function, and intelligence. For each study, organized chronologically, we show the age range (lines), number of waves (number of dots), and mean age at each wave (location of dots). Lines that extend beyond the graph rightwards have more waves beyond early adulthood. More details per study are shown in Table 7.1.

A more recent follow-up study attempted to tease apart the cognitive specificity of such associations. Ramsden et al. (2011) examined longitudinal changes in verbal and non-verbal intelligence and their associations with grey matter in a relatively small sample (N = 33) of healthy adolescent participants. Correlating change scores across two measurements, three years apart, the authors showed that changes in verbal intelligence (VIQ) co-occurred with changes in grey matter density in a region of the left motor cortex previously linked to the articulation of speech (Ramsden et al., 2011, p. 114). In contrast, changes in non-verbal intelligence were positively correlated with grey matter density in the anterior cerebellum, which has previously been implicated in hand motor movements. Although preliminary, this work suggests potential specificity in neurodevelopmental patterns. Burgaleta, Johnson, Waber, Colom, and Karama (2014) examined changes in intelligence test scores across two waves and observed correlated change between cortical thickness (especially in frontoparietal areas) and intelligence. The pattern of results showed

Towards a Longitudinal Cognitive Neuroscience of Intelligence

that those with greatest gains in FSIQ showed less rapid cortical thinning than those with smaller gains, or decreases, in FSIQ. Similar results were not observed for cortical surface area, suggesting greater sensitivity of thickness to cognitive changes. Schnack et al. (2015) expanded beyond only children to a wider, lifespan sample (9–60 years), although weighted towards children and adolescents. Echoing Shaw et al. (2006), they observed a developmentally heterogeneous set of associations, with thinner cortices being associated with better cognitive performance at age 10 years, and high-IQ children showing more rapid thinning. However, in adulthood this pattern reversed, such that greater cortical thickness in middle age was associated with higher intelligence, possibly due to slower lifespan thinning, further emphasizing the importance of a truly longitudinal, developmental perspective. The majority of work focuses on thickness, volume, and area of cortical regions. In contrast, Tamnes, Bos, van de Kamp, Peters, and Crone (2018) studied longitudinal changes in the hippocampus and its subregions in 237 individuals scanned up to three times. They observed cross-sectional correlations between intelligence and hippocampal subregions, but the only significant longitudinal association was a positive one between the rate of increase in the molecular layer of the hippocampus and cognitive performance. Interestingly, such correlated changes may be specific to certain subtests of intelligence, even those considered quite central to cognitive ability such as working memory: Tamnes, Walhovd, Grydeland et al. (2013) showed that greater volume reductions in the frontal and rostral middle frontal gyri were associated with greater gains in working memory, even after adjusting for IQ. Although most studies focus on properties of brain structure directly (e.g., thickness, volume), an emerging subfield focuses on the covariance between regions instead.
Khundrakpam et al. (2017) used structural covariance between regions to characterize network properties of the developing cortex. In 306 subjects scanned up to three times, they observed greater cortical thickness, higher global efficiency, but lower local efficiency in individuals with higher IQ, especially in frontal and temporal gyri. Although based on longitudinal data, the rate of change itself was not directly used in this study. Where most studies use summary metrics of intelligence or cognitive performance, authors increasingly implement full (longitudinal) latent variable models. The benefits of doing so are many, including increased power, the establishment of measurement invariance, which allows for unbiased interpretation of change over time (Widaman, Ferrer, & Conger, 2010), and greater flexibility regarding missing data. One example of this approach is Román et al. (2018), who studied longitudinal changes in a measurement-invariant g factor alongside changes in cortical thickness and surface area in a sample of 132 children and adolescents (6–21 years) from the NIH Pediatric MRI Data Repository (Evans & Brain Development Cooperative Group, 2006). A general intelligence factor was estimated at three time points with an average interval of two years. Changes in g scores correlated with changes in cortical thickness as well as surface area (r = .3/.37), in all regions (for cortical thickness) but mainly fronto-temporal (for surface area). Moreover, the trajectories of cortical thinning depended on cognitive ability: Significant cortical thinning was apparent at age 10–14 for individuals with lower g scores, whereas for those with higher g scores, cortical thinning only became apparent around age 17. A follow-up study on the same sample by a similar team of authors extended the analysis to what is likely the most advanced psychometric analysis in this field to date (Estrada et al., 2019). Using Latent Change Score models in three waves, the authors were able to tease apart lead-lag relations between intelligence and brain structure. Notably, and unlike any other paper to date, they used the estimates of current rates of change (at wave 2) to predict future rates of change in the other domains (see also Grimm et al., 2012 for a technical overview). They observed a complex but fascinating pattern of results: Although changes in cognitive ability or cortical structure were not predicted by the level in the other domain one time point before, the recent rate of change did predict future changes. In other words, individuals who showed less thinning and less surface loss showed greater gains in general intelligence in the subsequent period. In contrast, individuals who increased more in g during the previous period showed greater subsequent thinning, possibly due to greater reorganization following the cognitive skill gains. For all analyses, surface area showed less pronounced effects, suggesting, as in other studies, that cortical thickness is a more sensitive measure than surface area.
These findings offer intriguing insights into the true intricacies of the unfolding development of cognition and brain structure, and suggest that even higher temporal resolution, as well as greater numbers of measurement occasions, are needed to truly capture these processes. A similarly psychometrically sophisticated study was conducted by Ritchie et al. (under review), who examined a large (N = 2,316) sample of adolescents (14–19 years) from the IMAGEN study, tested and scanned in two waves approximately five years apart. Ritchie et al. examined the relationship between (changes in) a broad general factor of ability (extracted as the first component of a battery of CANTAB tasks) and (changes in) a global (cortical) summary of grey matter, indexed by volume, thickness, and surface area. Using a Latent Change Score modeling strategy, they observed a constellation of interesting patterns. Cross-sectionally, higher cognitive ability was correlated with higher cortical volume and larger surface area, with weaker results for cortical thickness. Those with higher baseline ability tended to show more rapid cortical thinning and volume loss, although this was a relatively small effect. In contrast with some other findings, baseline brain structure was not associated with rates of cognitive change over time. Qi, Schaadt, and Friederici (2019) sought to investigate how cortical alteration during early development contributes to later language acquisition. In 56 children aged five to six years, they measured left and right hemispheric cortical thickness and administered a sentence comprehension test (Test zum Satzverstehen von Kindern, TSVK) at two separate time points (mean interval: ~1 year). Moreover, they acquired TSVK scores a third time about a year later (age seven years) to estimate the effect of brain lateralization on language ability. In addition to evidence of early lateralization, they found that greater (cross-sectional) cortical thinning in the left, compared to the right, inferior frontal gyrus between ages five and six years was associated with greater language ability at age seven years. Lastly, and more relevant to this chapter, in the same subset of children, they found that changes in cortical thickness asymmetry were positively correlated with changes in language performance at age seven years, such that children with a greater increase in lateralization between five and six years improved more on the language test than those with less asymmetry. The majority of studies focused on children, usually from age seven-to-eight years onwards, likely for practical and logistical reasons. However, exceptions exist. Jaekel, Sorg, Baeuml, Bartmann, and Wolke (2019) studied the association between head growth and intelligence in 411 very preterm, preterm, and term-born infants. Doing so, they observed that greater perinatal head size, as well as faster head growth (especially in the first 20 months), were associated with better cognitive performance, together explaining up to 70% of the variance in adult IQ. Other infant studies focusing on white matter will be discussed in the next section.
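Several of the studies above (e.g., Ritchie et al., under review) summarize ability across a task battery as the battery's first principal component. A minimal sketch of that step on synthetic scores (the six-task battery, loadings, and sample size are invented for illustration, not taken from any study):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scores for 200 participants on 6 cognitive tasks,
# each loading on a common general factor plus task-specific noise.
g_true = rng.normal(size=200)
scores = np.column_stack(
    [0.7 * g_true + rng.normal(scale=0.7, size=200) for _ in range(6)]
)

# Standardize each task, then take the first principal component
# as the general-ability summary ("g").
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
g_estimate = z @ vt[0]  # PC1 scores (sign is arbitrary)

# The first component should track the simulated general factor closely.
print(abs(np.corrcoef(g_true, g_estimate)[0, 1]))
```

With six moderately loading indicators, the first component typically correlates around .9 with the simulated factor; a latent variable model would additionally separate measurement error and allow invariance testing across waves.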

White Matter

Much of the initial work on longitudinal studies of intelligence focused on grey matter structure, especially volume and thickness. In recent years, technology to capture and quantify white matter microstructure, especially quantitative models, has made considerable strides. Most of these innovations have focused on diffusion-weighted imaging (DWI), which allows researchers to capture metrics such as fractional anisotropy (FA), mean diffusivity (MD), and myelin water fraction. Although the mapping from such measures to underlying physiology such as axonal width and myelination remains far from perfect (Jones, Knösche, & Turner, 2013; Wandell, 2016), DWI measures have provided a range of new insights into the development of intelligence. Ferrer (2018) examined developmental changes in fluid reasoning in an N = 201, three-wave sample of children, adolescents, and young adults aged 5–21 years (earlier work by the same group relies on smaller numbers from the same sample; Ferrer et al., 2013). Fluid reasoning was assessed using Matrix Reasoning, Block Design, Concept Formation, and Analysis Synthesis. Notably, Ferrer incorporated a latent variable of fluid reasoning as well as establishing measurement invariance across waves. Ferrer observed that greater global white matter volume and greater white matter microstructure (indexed as fractional anisotropy) were associated with more rapid improvements in fluid reasoning. However, white matter was only incorporated at baseline, precluding the examination of cognitive performance driving white matter change. In a small cohort of children scanned on two occasions (N = 37, age range: six-to-eight years, two-year interval between scans), Borchers et al. (2019) examined the influence of white matter microstructure (mean tract FA) at age six years on subsequent reading ability (Oral Reading Index) at age eight years. Pre-literate reading ability was assessed at age six years (considered by the authors as the onset of learning to read), with concurrent estimates of mean tract FA of tracts known to be involved in reading-related abilities. They found that reading ability at age eight was predicted by mean tract FA of the left inferior cerebellar peduncle as well as the left and right superior longitudinal fasciculus, even after controlling for age-six pre-literacy skills and demographic indicators (family history and sex). One of the most recent, and remarkable, projects is the Danish HUBU study. In this study, 95 children, aged 7 to 13 years, were scanned up to 12 times, six months apart, using diffusion-weighted imaging alongside longitudinal assessments of cognitive tasks. Although the rich longitudinal findings of this project are still emerging, a recent paper reports initial findings. Madsen et al. (2020) examined the relationship between (changes in) tract-average fractional anisotropy and (changes in) a stop-signal reaction time task that captures the efficiency of executive inhibition in 88 children measured up to nine times (mean: 6.6).
They observed that children with higher fractional anisotropy in pre-SMA tended to have better baseline SSRT performance, but a shallower slope of improvement, suggesting that children with better white matter microstructure in key regions reach a performance plateau, in terms of speed, more rapidly. Although the sample size of HUBU is moderate, the temporal richness of this study is unique and likely to yield distinctive insights moving forward. Moving beyond summary metrics, Koenis et al. (2015) examined within-subject white matter networks using mean FA as well as streamline count, and computed graph-theory metrics such as global and local efficiency (both network metrics, which quantify the ease with which two nodes in a network can reach each other through edges, or "connections") to characterize the nature of the within-subject networks. They demonstrated that children who made the greatest gains in efficiency (not to be confused with "neural efficiency" during task performance; see Neubauer & Fink, 2009) of their structural network were those who made the greatest gains in intelligence test scores. In contrast, individuals who showed no change or a decrease in network efficiency showed a decrease in intelligence test scores. The strongest nodal associations were present for the orbitofrontal cortex and the anterior cingulum. The associations for streamline-based, rather than FA-based, networks were largely non-significant or marginal, and sometimes inconsistent with FA-based efficiency.
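Global and local efficiency, the graph metrics used by Koenis et al., are standard network measures and can be computed with common tooling. A toy sketch on a synthetic thresholded matrix (the matrix values, threshold, and node count are arbitrary stand-ins for a subject's FA-weighted structural network):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Synthetic symmetric "structural connectivity" matrix standing in
# for one subject's FA-weighted network (purely illustrative values).
n_regions = 20
weights = rng.random((n_regions, n_regions))
weights = (weights + weights.T) / 2
np.fill_diagonal(weights, 0)

# Threshold to a binary graph before computing binary efficiency metrics.
adjacency = (weights > 0.6).astype(int)
G = nx.from_numpy_array(adjacency)

# Global efficiency: average inverse shortest-path length over all
# node pairs (how easily any two regions reach each other).
# Local efficiency: the same quantity averaged over each node's
# neighbourhood subgraph (local "fault tolerance").
print(nx.global_efficiency(G))
print(nx.local_efficiency(G))
```

Both metrics lie in [0, 1]; in a longitudinal design such as Koenis et al.'s, they would be computed per subject per wave, and their change scores related to change in intelligence test scores.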


Two studies examined the role of white matter microstructure in infants. Young et al. (2017) examined white matter trajectories (as quantified by FA, MD, AD, and RD) in 75 very preterm neonates across up to four waves of diffusion-weighted imaging. Doing so, they observed that a slower decrease in mean diffusivity (MD) was associated with lower full-scale IQ scores later in life. Moreover, Deoni et al. (2016) studied the myelination profiles of 257 healthy developing children. A subset (N = 126) was scanned and tested at least twice, with further subsamples being scanned up to five times to derive advanced myelination metrics such as myelin water fraction. In all regions studied, children of above-average cognitive ability showed distinct myelination trajectories: Higher intelligence test scores were associated with a longer initial lag and slower growth period, followed by a longer overall growth phase and faster secondary growth rates, yielding the most pronounced cross-sectional differences at age three years. A follow-up study using the same sample (Dai et al., 2019) examined a different question, namely whether the contemporaneous correlations between myelin water fraction and cognitive ability varied longitudinally, using non-parametric models. Doing so, they observed complex, non-linear patterns of association between MWF and cognitive ability during development, with a peak of association around 1 year of age. Although this approach precludes modeling coupling effects, it illustrates that time-varying associations between brain and behavior are also found in the absence of cohort or selection confounds.
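The DWI metrics recurring throughout this section are simple functions of the diffusion tensor's three eigenvalues. A minimal sketch of the standard definitions (the example eigenvalues are illustrative figures, not measured data):

```python
import numpy as np

def mean_diffusivity(evals):
    """MD: the average of the diffusion tensor's three eigenvalues."""
    return float(np.mean(evals))

def fractional_anisotropy(evals):
    """FA: normalized eigenvalue dispersion, ranging from 0 to 1."""
    evals = np.asarray(evals, dtype=float)
    md = evals.mean()
    numerator = np.sqrt(((evals - md) ** 2).sum())
    denominator = np.sqrt((evals ** 2).sum())
    return float(np.sqrt(1.5) * numerator / denominator)

# Illustrative eigenvalues (order 10^-3 mm^2/s): an elongated tensor,
# as in coherently organized white matter, yields high FA...
print(fractional_anisotropy([1.6, 0.35, 0.35]))   # high anisotropy
# ...while a nearly isotropic tensor yields FA close to zero.
print(fractional_anisotropy([0.8, 0.75, 0.75]))   # near-isotropic
```

A perfectly isotropic tensor (all eigenvalues equal) gives FA = 0; this is why FA is read as an index of microstructural organization while MD captures overall diffusion magnitude.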

The Role of Genetics

As shown in the "White Matter" and "Grey Matter" sections, an emerging body of work suggests key roles for cortical and subcortical maturation in supporting changes in intelligence. However, these observations leave open the question of the etiology of these processes. Are they driven largely by concurrent changes and differences in the environment? Or do genetic differences underlie most or all of the observed processes? To address these questions, Schmitt et al. (2019) used a twin design to examine the associations between (changes in) cortical thickness and baseline intelligence, as well as the extent to which they shared genetic similarity, in 813 typically developing children with up to eight scans. The authors showed that the phenotypic covariance between IQ and cortical thickness, and between IQ and cortical thickness change, was effectively entirely genetic. Koenis et al. (2018) showed (in the same sample as Koenis et al., 2015) age-dependent correlations between brain measures and IQ: Weak or absent correlations early in childhood (r = 0 at age 10 years) became pronounced by middle to late adolescence (age 18 years, r = .23). Notably, the previously steep changes in efficiency between ages 10 and 13 years had leveled off by age 18 years, suggesting a slowing down of cortical development. Finally, a unique twin design allowed the authors to compute genetic correlations: the extent to which brain network changes and intelligence changes may share an underlying genetic origin. They observed a steady increase in this genetic correlation, suggesting that developmental trajectories continue to unfold, an interpretation in line (in spirit) with Dickens-Flynn type models of development, in which genetic predispositions lead to an increasingly close alignment between genotype and environment (Dickens & Flynn, 2001). Brans et al. (2010) investigated changes in cortical thickness in 66 twin pairs (total N = 132) in late adolescence and adulthood (20–40 years) scanned on two occasions. They demonstrated, as did the studies above, that individuals with higher IQs (defined by full WAIS-III) showed greater thickening and less thinning than those with lower IQs. Notably, the genetic correlations were significantly different between those affecting current thickness and those affecting the rate of thinning, suggesting distinct etiological pathways between the state of the brain and its developmental pathway. The genetic analysis showed moderate to strong, regionally specific genetic covariance between the rate of thickness change and the level of intelligence. A recent study in the IMAGEN sample (Judd et al., 2020; N = 551, ages 14 and 19) examined the role of SES and a polygenic score for educational attainment (summarized across thousands of genetic loci) in the development of brain structure and working memory. The core analysis found global effects of SES on the brain, but more specific effects of the polygenic score on individual differences in brain structure. For our purposes, the key finding was the bivariate latent change score model, which showed that individuals with higher baseline working memory ability showed a stronger decrease in global surface area during adolescence.
In contrast, differences in baseline surface area at age 14 years were not associated with differences in the rate of working memory improvement, although the authors note that a potential performance ceiling effect limits strong conclusions. Polygenic scores did not differentially predict the rate of surface area change.

Other Imaging Measures

Although the majority of the work on the development of intelligence relies on structural and functional MRI, some exceptions exist. An extremely early study (Beckwith & Parmelee, 1986) examined sleep-related EEG markers and intelligence in 53 infants. Preterm infants who displayed a particular EEG pattern ("trace alternant") during (transitional) sleep showed better intelligence scores at age 8. Hahn et al. (2019) studied longitudinal changes in sleep spindles based on polysomnography (EEG recording during sleep) in 34 children across a seven-year interval, quantifying slow and fast sleep spindle power. They observed that individuals with higher cognitive ability showed a greater increase in frontal slow spindle activity. Together, these two studies suggest that neural activity during sleep is associated with both the level and the change of intelligence. Selmeczy, Fandakova, Grimm, Bunge, and Ghetti (2020) used fMRI instead of EEG to study the interplay between (changes in) pubertal status, (changes in) fMRI hippocampal activation during an episodic memory task, and memory performance in three waves of 8 to 14 year olds (max N = 90), using a mixed modeling approach. They examined cross-sectional associations between hippocampal activity patterns and memory performance. In addition to observing a U-shaped developmental pattern of hippocampal activity (illustrating the importance of longitudinal studies), they found that greater baseline task-responsivity in the hippocampus was associated with a more rapid increase in memory performance. Although most studies focus on a single neuroimaging metric, more recent work has incorporated both functional and structural connectivity. Evans et al. (2015) examined 79 children (43 with scans) alongside changes in the Wechsler Abbreviated Scale of Intelligence, with a special focus on numerical operations. They showed that greater grey matter volume at baseline, especially in prefrontal regions, was associated with greater gains in numerical operations over time. In contrast, functional connectivity in, and between, the same regions identified in the grey matter volume analysis was not associated with greater gains in numerical abilities. Similarly, Wendelken et al. (2017) measured fluid intelligence as well as functional connectivity (FC) and structural connectivity (SC, defined as mean fractional anisotropy in tracts connecting key regions) and related these to changes in reasoning from childhood to early adulthood (age range: 6–22 years). Interestingly, this study incorporated pooled data from three datasets (some included in other papers reported here) with reasoning, FC, and/or SC data for at least two time points.
The aggregate sample consisted of 523 participants. Cross-sectional analysis revealed differential age relations between FC, SC, and reasoning. Specifically, SC was strongly (and positively) related to reasoning ability in children but not in adolescents and adults. In adolescents and adults, FC was positively associated with reasoning, but this effect was not found in children. Longitudinal analyses revealed that fronto-parietal (RLPFC-IPL) SC at one time point positively predicted later RLPFC-IPL FC, but not vice versa. Moreover, in young participants (children), SC was positively associated with change in reasoning. Together, these findings suggest that brain structure, especially white matter, may be a stronger determinant of longitudinal cognitive change than functional connectivity.

Aging

For a more complete understanding of lifespan trajectories of cognitive abilities and brain structure, we must study, compare, and contrast findings from both ends of the lifespan (Tamnes, Walhovd, Dale, et al., 2013). In other words, it is crucial that we investigate not just how brain measures are associated with changes in intelligence in childhood, adolescence, and early adulthood, but also the mirroring patterns of decline in later life. A recent review (Oschwald et al., 2019) has provided a comprehensive overview of truly longitudinal investigations of age-related decline in cognitive ability and concurrent changes in brain structure. We will here summarize the key findings in the realm of intelligence specifically, but refer the reader to that resource for a more in-depth discussion of the key papers in this field. Oschwald et al. identified 31 papers that had comprehensive assessments of cognitive performance as well as neural measurements on multiple occasions. The emerging findings across measures of grey matter volume, white matter volume, white matter microstructure (e.g., FA/MD), and more global measures of brain structure (e.g., intra-cranial volume, total brain volume, or head size) converged on a series of findings. The most common pattern was that of correlated change, and the overwhelming majority of such findings (exceptions include Bender, Prindle, Brandmaier, & Raz, 2015) were as expected: More rapid grey matter atrophy, white matter volume decline, and/or white matter microstructure loss were associated with more rapid cognitive decline. A second pattern of findings centered on level-change associations between domains: To what extent does cognitive decline depend on the current state of brain anatomy, and vice versa? Although not all studies used methodology that allowed for such conclusions, many observed a pattern consistent with the hypothesis we term structural scaffolding: The current state of the brain more strongly predicts the rate of cognitive decline than vice versa. One of the earliest papers to observe this was McArdle et al.
(2004), who demonstrated that larger ventricle size was associated with a more rapid rate of memory decline, above and beyond contemporaneous measures of age and memory score. Both these findings (correlated change and structural scaffolding) are, in turn, in line with the notion of brain maintenance (Nyberg, Lövdén, Riklund, Lindenberger, & Bäckman, 2012): The main way to maintain current cognitive performance, or decelerate cognitive decline, is to maintain, to the greatest extent possible, the current state of the brain. Future research should work towards the formalized integration of theories about neurocognitive development throughout the lifespan.

Summary

In this chapter, we provide an overview of studies that investigate co-occurring changes in intelligence, brain structure, and brain function from early childhood to early adulthood. From this literature, a few clear conclusions can be drawn. First and foremost, there is a profound sparsity of truly longitudinal work in this field. Regardless of one's precise inclusion criteria, there are currently more studies on fMRI in dogs (Thompkins, Deshpande, Waggoner, & Katz, 2016; and several studies since) than there are longitudinal investigations of changes in intelligence and brain structure in childhood. This is, perhaps, not entirely surprising: Large, longitudinal studies are demanding of time, person power, and resources. Combined with several challenges unique to this work, such as updates to MRI scanners, the studies that do exist are all the more impressive. However, there has been a recent and rapid increase in large longitudinal studies: As Table 7.1 shows, almost half of all studies discussed date from just the last three years, suggesting many more will emerge in the near future. Future work in (very) large samples, such as the Adolescent Brain Cognitive Development (ABCD) study (Volkow et al., 2018), is uniquely positioned to examine the replicability and consistency of the exciting but preliminary findings discussed in this chapter. Taken together, although not plentiful in number, the studies so far allow several conceptual and substantive conclusions.

r. a. kievit and i. l. simpson-kent

Timing Matters

First and foremost, timing matters. All metrics used in the studies described in Table 7.1, from grey matter volume and thickness to structural and functional connectivity, change rapidly, non-linearly, and in complex manners. One key consequence of these changes is that cross-sectional associations between brain measures and intelligence will be heavily dependent on the age and age distribution of the population being studied. As the studies described in Table 7.1 show, measures such as cortical thickness can show positive, no, or negative correlations with intelligence during development, even in the same cohort, in children of different ages. This alone should send a clear message to developmental cognitive neuroscientists: Age, as much as gender, country of origin, and/or SES, affects the nature of the associations we might observe. In studies that incorporate quantitative parameters of change between brain and behavior, two findings emerge across multiple studies. The first is correlated change between brain structure and intelligence: Given the same interval, individuals who demonstrate greater gains in intelligence often show more rapid changes in structural development as well. Multiple, complementary explanations for such observations exist. One is that changes in both domains are governed by some third variable, such as the manifestation of a complex pattern of gene expression at a particular maturational period. A methodological explanation of the same statistical effect is that the temporal resolution of a given study is not suitable to tease apart lead-lag distinctions that may in fact exist – although in reality cognitive ability may precede brain change or vice versa, the actual intervals used in studies (often multiple years) will obscure the fine-grained temporal unfolding. The second pattern, observed across multiple cohorts, is what we referred to as "structural scaffolding." This is the finding that current brain states (as indexed by measures of brain structure) tend to govern (statistically) the rate of change in cognitive performance. This pattern is observed in multiple studies (e.g., Ferrer, 2018; Wendelken et al., 2017; McArdle et al., 2004), and is consistently in the same direction: "Better" brain structure is generally associated with greater gains (in children) or shallower declines (in older individuals) than in individuals with lower scores on brain structural metrics. This pattern is highly intriguing and worth further study, as it may have profound implications for how we think about the relationship between the brain and intelligence. However, this parameter can only be observed by studies that implement methodology (such as the latent change score model) that allows for the quantification of this pattern. This illustrates the importance of suitable quantitative methodology.
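The "structural scaffolding" parameter can be illustrated with a toy simulation. The sketch below is not a latent change score model as used in the cited studies (those are fit in a structural equation framework and separate measurement error via latent variables); it simply simulates a baseline brain measure that governs the rate of cognitive change and then recovers that coupling by regressing observed change on the baseline. All variable names and numerical values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Hypothetical baseline brain structure (e.g., a z-scored cortical thickness index)
brain_t1 = rng.normal(0.0, 1.0, n)

# "Structural scaffolding": cognitive change between waves depends on
# baseline brain structure via a coupling parameter gamma, plus noise.
gamma = 0.5
cognition_t1 = 0.3 * brain_t1 + rng.normal(0.0, 1.0, n)
cognition_t2 = cognition_t1 + gamma * brain_t1 + rng.normal(0.0, 1.0, n)

# Recover the coupling by regressing observed change on baseline brain structure.
slope, intercept = np.polyfit(brain_t1, cognition_t2 - cognition_t1, 1)
print(round(slope, 2))  # close to the simulated coupling of 0.5
```

The key design point is that the coupling runs from brain state to *rate of change* in cognition, not merely from brain state to cognitive level, which is why cross-sectional data cannot identify it.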

Methods Matter

It is impossible to provide a precise quantitative meta-analysis of the findings in this field, as the analytical choices govern which parameters of interest are estimated and reported. Different models may provide somewhat, or even very, different conclusions when applied to different, or even the same, datasets (Oschwald et al., 2019). There is no single solution to this challenge: Particular methods have strengths and weaknesses, making them more or less suitable for the particular question being studied. At the same time, it seems clear that often (but not always) the choice of methodology is governed as much by the conventions of the (sub)field and software limitations as by considered analytical choices. A broader awareness of the range of suitable methods would likely improve this state of affairs. A recent special issue (Pfeifer, Allen, Byrne, & Mills, 2018) brings together a wide range of innovations, perspectives, and methodological challenges, providing an excellent starting point for researchers looking to expand their horizons. Another line of improvement would be the strengthening of the explanatory theoretical frameworks used to conceptualize the development of intelligence. Studies are now of sufficient complexity and richness that we can, and should, move beyond the reporting of simple bivariate associations and develop testable theoretical frameworks that bring together the disparate yet fascinating findings from cognitive development, genetics, grey and white matter, and brain function. Most crucially, such theories should be guided by the promise of translation into tractable quantitative models, so that others may build on, refine, or replace them moving forward. The longitudinal neuroscience of intelligence in childhood and adolescence is only in its infancy, yet many exciting discoveries have already been made.
Many more await a field willing and able to work collaboratively towards a cumulative developmental cognitive neuroscience of intelligence across the entire lifespan.

Towards a Longitudinal Cognitive Neuroscience of Intelligence

References

Barbey, A. K., Colom, R., Paul, E. J., & Grafman, J. (2014). Architecture of fluid intelligence and working memory revealed by lesion mapping. Brain Structure & Function, 219(2), 485–494. doi: 10.1007/s00429-013-0512-z.
Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27. doi: 10.1016/j.intell.2015.04.009.
Beckwith, L., & Parmelee, A. H. (1986). EEG patterns of preterm infants, home environment, and later IQ. Child Development, 57(3), 777–789. doi: 10.2307/1130354.
Bender, A. R., Prindle, J. J., Brandmaier, A. M., & Raz, N. (2015). White matter and memory in healthy adults: Coupled changes over two years. NeuroImage, 131, 193–204. doi: 10.1016/j.neuroimage.2015.10.085.
Bengtsson, S. L., Nagy, Z., Skare, S., Forsman, L., Forssberg, H., & Ullén, F. (2005). Extensive piano practicing has regionally specific effects on white matter development. Nature Neuroscience, 8(9), 1148–1150. doi: 10.1038/nn1516.
Borchers, L. R., Bruckert, L., Dodson, C. K., Travis, K. E., Marchman, V. A., Ben-Shachar, M., & Feldman, H. M. (2019). Microstructural properties of white matter pathways in relation to subsequent reading abilities in children: A longitudinal analysis. Brain Structure and Function, 224(2), 891–905.
Brans, R. G. H., Kahn, R. S., Schnack, H. G., van Baal, G. C. M., Posthuma, D., van Haren, N. E. M., . . . Pol, H. E. H. (2010). Brain plasticity and intellectual ability are influenced by shared genes. Journal of Neuroscience, 30(16), 5519–5524. doi: 10.1523/JNEUROSCI.5841-09.2010.
Burgaleta, M., Johnson, W., Waber, D. P., Colom, R., & Karama, S. (2014). Cognitive ability changes and dynamics of cortical thickness development in healthy children and adolescents. NeuroImage, 84, 810–819. doi: 10.1016/j.neuroimage.2013.09.038.
Dai, X., Hadjipantelis, P., Wang, J. L., Deoni, S. C., & Müller, H. G. (2019). Longitudinal associations between white matter maturation and cognitive development across early childhood. Human Brain Mapping, 40(14), 4130–4145.
Deary, I. J., Penke, L., & Johnson, W. (2010). The neuroscience of human intelligence differences. Nature Reviews Neuroscience, 11(3), 201–211. doi: 10.1038/nrn2793.
Deoni, S. C. L., O'Muircheartaigh, J., Elison, J. T., Walker, L., Doernberg, E., Waskiewicz, N., . . . Jumbe, N. L. (2016). White matter maturation profiles through early childhood predict general cognitive ability. Brain Structure & Function, 221, 1189–1203. doi: 10.1007/s00429-014-0947-x.
Dickens, W. T., & Flynn, J. R. (2001). Heritability estimates versus large environmental effects: The IQ paradox resolved. Psychological Review, 108(2), 346–369. doi: 10.1037//0033-295X.
Estrada, E., Ferrer, E., Román, F. J., Karama, S., & Colom, R. (2019). Time-lagged associations between cognitive and cortical development from childhood to early adulthood. Developmental Psychology, 55(6), 1338–1352. doi: 10.1037/dev0000716.

Evans, A. C., & Brain Development Cooperative Group. (2006). The NIH MRI study of normal brain development. NeuroImage, 30(1), 184–202.
Evans, T. M., Kochalka, J., Ngoon, T. J., Wu, S. S., Qin, S., Battista, C., & Menon, V. (2015). Brain structural integrity and intrinsic functional connectivity forecast 6 year longitudinal growth in children's numerical abilities. Journal of Neuroscience, 35(33), 11743–11750. doi: 10.1523/JNEUROSCI.0216-15.2015.
Ferrer, E. (2018). Discrete- and semi-continuous time latent change score models of fluid reasoning development from childhood to adolescence. In S. M. Boker, K. J. Grimm, & E. Ferrer (eds.), Longitudinal multivariate psychology (pp. 38–60). New York: Routledge.
Ferrer, E., & McArdle, J. J. (2004). An experimental analysis of dynamic hypotheses about cognitive abilities and achievement from childhood to early adulthood. Developmental Psychology, 40(6), 935–952.
Ferrer, E., Shaywitz, B. A., Holahan, J. M., Marchione, K., & Shaywitz, S. E. (2010). Uncoupling of reading and IQ over time: Empirical evidence for a definition of dyslexia. Psychological Science, 21(1), 93–101. doi: 10.1177/0956797609354084.
Ferrer, E., Whitaker, K. J., Steele, J. S., Green, C. T., Wendelken, C., & Bunge, S. A. (2013). White matter maturation supports the development of reasoning ability through its influence on processing speed. Developmental Science, 16(6), 941–951. doi: 10.1111/desc.12088.
Grimm, K. J., An, Y., McArdle, J. J., Zonderman, A. B., & Resnick, S. M. (2012). Recent changes leading to subsequent changes: Extensions of multivariate latent difference score models. Structural Equation Modeling: A Multidisciplinary Journal, 19(2), 268–292. doi: 10.1080/10705511.2012.659627.
Gross, C. (1995). Aristotle on the brain. The Neuroscientist, 1(4), 245–250. doi: 10.1177/107385849500100408.
Hahn, M., Joechner, A., Roell, J., Schabus, M., Heib, D. P., Gruber, G., . . . Hoedlmoser, K. (2019). Developmental changes of sleep spindles and their impact on sleep-dependent memory consolidation and general cognitive abilities: A longitudinal approach. Developmental Science, 22(1), e12706. doi: 10.1111/desc.12706.
Huarte, J. (1594). Examen de ingenios [The examination of men's wits]. Trans. M. Camillo Camilli and R. C. Esquire. London: Adam Islip, for C. Hunt of Excester.
Jaekel, J., Sorg, C., Baeuml, J., Bartmann, P., & Wolke, D. (2019). Head growth and intelligence from birth to adulthood in very preterm and term born individuals. Journal of the International Neuropsychological Society, 25(1), 48–56. doi: 10.1017/S135561771800084X.
Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.
Jones, D. K., Knösche, T. R., & Turner, R. (2013). White matter integrity, fiber count, and other fallacies: The do's and don'ts of diffusion MRI. NeuroImage, 73, 239–254. doi: 10.1016/j.neuroimage.2012.06.081.
Judd, N., Sauce, B., Wiedenhoeft, J., Tromp, J., Chaarani, B., Schliep, A., . . . Becker, A. (2020). Cognitive and brain development is independently influenced by socioeconomic status and polygenic scores for educational attainment. Proceedings of the National Academy of Sciences, 117(22), 12411–12418.


Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135. doi: 10.1017/S0140525X07001185.
Kail, R. V. (1998). Speed of information processing in patients with multiple sclerosis. Journal of Clinical and Experimental Neuropsychology, 20(1), 98–106. doi: 10.1076/jcen.20.1.98.1483.
Khundrakpam, B. S., Lewis, J. D., Reid, A., Karama, S., Zhao, L., Chouinard-Decorte, F., & Evans, A. C. (2017). Imaging structural covariance in the development of intelligence. NeuroImage, 144, 227–240. doi: 10.1016/j.neuroimage.2016.08.041.
Kievit, R. A., Brandmaier, A. M., Ziegler, G., Van Harmelen, A. L., de Mooij, S. M., Moutoussis, M., . . . Lindenberger, U. (2018). Developmental cognitive neuroscience using latent change score models: A tutorial and applications. Developmental Cognitive Neuroscience, 33, 99–117.
Kievit, R. A., Hofman, A. D., & Nation, K. (2019). Mutualistic coupling between vocabulary and reasoning in young children: A replication and extension of the study by Kievit et al. (2017). Psychological Science, 30(8), 1245–1252. doi: 10.1177/0956797619841265.
Kievit, R. A., Lindenberger, U., Goodyer, I. M., Jones, P. B., Fonagy, P., Bullmore, E. T., . . . Dolan, R. J. (2017). Mutualistic coupling between vocabulary and reasoning supports cognitive development during late adolescence and early adulthood. Psychological Science, 28(10), 1419–1431.
Koenis, M. M. G., Brouwer, R. M., Swagerman, S. C., van Soelen, I. L. C., Boomsma, D. I., & Pol, H. E. H. (2018). Association between structural brain network efficiency and intelligence increases during adolescence. Human Brain Mapping, 39(2), 822–836. doi: 10.1002/hbm.23885.
Koenis, M. M. G., Brouwer, R. M., van den Heuvel, M. P., Mandl, R. C. W., van Soelen, I. L. C., Kahn, R. S., . . . Pol, H. E. H. (2015). Development of the brain's structural network efficiency in early adolescence: A longitudinal DTI twin study. Human Brain Mapping, 36(12), 4938–4953. doi: 10.1002/hbm.22988.
Madsen, K. S., Johansen, L. B., Thompson, W. K., Siebner, H. R., Jernigan, T. L., & Baare, W. F. (2020). Maturational trajectories of white matter microstructure underlying the right presupplementary motor area reflect individual improvements in motor response cancellation in children and adolescents. NeuroImage, 220, 117105.
McArdle, J. J., Hamagami, F., Jones, K., Jolesz, F., Kikinis, R., Spiro, A., & Albert, M. S. (2004). Structural modeling of dynamic changes in memory and brain structure using longitudinal data from the normative aging study. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, 59(6), P294–P304. doi: 10.1093/GERONB/59.6.P294.
Neubauer, A. C., & Fink, A. (2009). Intelligence and neural efficiency: Measures of brain activation versus measures of functional connectivity in the brain. Intelligence, 37(2), 223–229. doi: 10.1016/j.intell.2008.10.008.
Nyberg, L., Lövdén, M., Riklund, K., Lindenberger, U., & Bäckman, L. (2012). Memory aging and brain maintenance. Trends in Cognitive Sciences, 16(5), 292–305. doi: 10.1016/j.tics.2012.04.005.


Oschwald, J., Guye, S., Liem, F., Rast, P., Willis, S., Röcke, C., . . . Mérillat, S. (2019). Brain structure and cognitive ability in healthy aging: A review on longitudinal correlated change. Reviews in the Neurosciences, 31(1), 1–57. doi: 10.1515/revneuro-2018-0096.
Peng, P., & Kievit, R. A. (2020). The development of academic achievement and cognitive abilities: A bidirectional perspective. Child Development Perspectives, 14(1), 15–20. doi: 10.31219/osf.io/9u86q.
Peng, P., Wang, T., Wang, C., & Lin, X. (2019). A meta-analysis on the relation between fluid intelligence and reading/mathematics: Effects of tasks, age, and social economics status. Psychological Bulletin, 145(2), 189–236. doi: 10.1037/bul0000182.
Pfeifer, J. H., Allen, N. B., Byrne, M. L., & Mills, K. L. (2018). Modeling developmental change: Contemporary approaches to key methodological challenges in developmental neuroimaging. Developmental Cognitive Neuroscience, 33, 1–4. doi: 10.1016/j.dcn.2018.10.001.
Pietschnig, J., Penke, L., Wicherts, J. M., Zeiler, M., & Voracek, M. (2015). Meta-analysis of associations between human brain volume and intelligence differences: How strong are they and what do they mean? Neuroscience & Biobehavioral Reviews, 57, 411–432. doi: 10.1016/j.neubiorev.2015.09.017.
Qi, T., Schaadt, G., & Friederici, A. D. (2019). Cortical thickness lateralization and its relation to language abilities in children. Developmental Cognitive Neuroscience, 39, 100704.
Ramsden, S., Richardson, F. M., Josse, G., Thomas, M. S. C., Ellis, C., Shakeshaft, C., . . . Price, C. J. (2011). Verbal and non-verbal intelligence changes in the teenage brain. Nature, 479(7371), 113–116. doi: 10.1038/nature10514.
Raz, N., & Lindenberger, U. (2011). Only time will tell: Cross-sectional studies offer no solution to the age–brain–cognition triangle: Comment on Salthouse (2011). Psychological Bulletin, 137(5), 790–795. doi: 10.1037/a0024503.
Ritchie, S. J., Quinlan, E. B., Banaschewski, T., Bokde, A. L., Desrivieres, S., Flor, H., . . . Ittermann, B. (under review). Neuroimaging and genetic correlates of cognitive ability and cognitive development in adolescence. PsyArXiv, https://psyarxiv.com/8pwd6/
Rocca, J. (2009). Galen and the ventricular system. Journal of the History of the Neurosciences, 6(3), 227–239.
Román, F. J., Morillo, D., Estrada, E., Escorial, S., Karama, S., & Colom, R. (2018). Brain-intelligence relationships across childhood and adolescence: A latent-variable approach. Intelligence, 68, 21–29. doi: 10.1016/j.intell.2018.02.006.
Schmitt, J. E., Raznahan, A., Clasen, L. S., Wallace, G. L., Pritikin, J. N., Lee, N. R., . . . Neale, M. C. (2019). The dynamic associations between cortical thickness and general intelligence are genetically mediated. Cerebral Cortex, 29(11). doi: 10.1093/cercor/bhz007.
Schnack, H. G., van Haren, N. E. M., Brouwer, R. M., Evans, A., Durston, S., Boomsma, D. I., . . . Hulshoff Pol, H. E. (2015). Changes in thickness and
surface area of the human cortex and their relationship with intelligence. Cerebral Cortex, 25(6), 1608–1617. doi: 10.1093/cercor/bht357.
Selmeczy, D., Fandakova, Y., Grimm, K. J., Bunge, S. A., & Ghetti, S. (2019). Longitudinal trajectories of hippocampal and prefrontal contributions to episodic retrieval: Effects of age and puberty. Developmental Cognitive Neuroscience, 36, 100599.
Shaw, P., Greenstein, D., Lerch, J., Clasen, L., Lenroot, R., Gogtay, N., . . . Giedd, J. (2006). Intellectual ability and cortical development in children and adolescents. Nature, 440(7084), 676–679. doi: 10.1038/nature04513.
Sowell, E. R., Thompson, P. M., Leonard, C. M., Welcome, S. E., Kan, E., & Toga, A. W. (2004). Longitudinal mapping of cortical thickness and brain growth in normal children. Journal of Neuroscience, 24(38), 8223–8231. doi: 10.1523/JNEUROSCI.1798-04.2004.
Spearman, C. (1904). "General intelligence," objectively determined and measured. The American Journal of Psychology, 15(2), 201–292. doi: 10.2307/1412107.
Tamnes, C. K., Bos, M. G. N., van de Kamp, F. C., Peters, S., & Crone, E. A. (2018). Longitudinal development of hippocampal subregions from childhood to adulthood. Developmental Cognitive Neuroscience, 30, 212–222. doi: 10.1016/j.dcn.2018.03.009.
Tamnes, C. K., Walhovd, K. B., Dale, A. M., Østby, Y., Grydeland, H., Richardson, G., . . . Fjell, A. M. (2013). Brain development and aging: Overlapping and unique patterns of change. NeuroImage, 68, 63–74. doi: 10.1016/j.neuroimage.2012.11.039.
Tamnes, C. K., Walhovd, K. B., Grydeland, H., Holland, D., Østby, Y., Dale, A. M., & Fjell, A. M. (2013). Longitudinal working memory development is related to structural maturation of frontal and parietal cortices. Journal of Cognitive Neuroscience, 25(10), 1611–1623. doi: 10.1162/jocn_a_00434.
Thompkins, A. M., Deshpande, G., Waggoner, P., & Katz, J. S. (2016). Functional magnetic resonance imaging of the domestic dog: Research, methodology, and conceptual issues. Comparative Cognition & Behavior Reviews, 11, 63–82. doi: 10.3819/ccbr.2016.110004.
Van Der Maas, H. L., Dolan, C. V., Grasman, R. P., Wicherts, J. M., Huizenga, H. M., & Raijmakers, M. E. (2006). A dynamical model of general intelligence: The positive manifold of intelligence by mutualism. Psychological Review, 113(4), 842.
Volkow, N. D., Koob, G. F., Croyle, R. T., Bianchi, D. W., Gordon, J. A., Koroshetz, W. J., . . . Weiss, S. R. B. (2018). The conception of the ABCD study: From substance use to a broad NIH collaboration. Developmental Cognitive Neuroscience, 32, 4–7. doi: 10.1016/j.dcn.2017.10.002.
Wandell, B. A. (2016). Clarifying human white matter. Annual Review of Neuroscience, 39(1), 103–128.
Wendelken, C., Ferrer, E., Ghetti, S., Bailey, S. K., Cutting, L., & Bunge, S. A. (2017). Frontoparietal structural connectivity in childhood predicts development of functional connectivity and reasoning ability: A large-scale longitudinal investigation. The Journal of Neuroscience, 37(35), 8549–8558. doi: 10.1523/JNEUROSCI.3726-16.2017.


Wenger, E., Brozzoli, C., Lindenberger, U., & Lövdén, M. (2017). Expansion and renormalization of human brain structure during skill acquisition. Trends in Cognitive Sciences, 21(12), 930–939. doi: 10.1016/j.tics.2017.09.008.
Widaman, K. F., Ferrer, E., & Conger, R. D. (2010). Factorial invariance within longitudinal structural equation models: Measuring the same construct across time. Child Development Perspectives, 4(1), 10–18. doi: 10.1111/j.1750-8606.2009.00110.x.
Young, J. M., Morgan, B. R., Whyte, H. E. A., Lee, W., Smith, M. L., Raybaud, C., . . . Taylor, M. J. (2017). Longitudinal study of white matter development and outcomes in children born very preterm. Cerebral Cortex, 27(8), 4094–4105. doi: 10.1093/cercor/bhw221.

8 A Lifespan Perspective on the Cognitive Neuroscience of Intelligence

Joseph P. Hennessee and Denise C. Park

Human intelligence is a multifaceted construct that has been defined in many different ways. In the present chapter, we consider intelligence to be a stable behavioral index that provides important predictive value for many complex and adaptive behaviors in life that require problem-solving and the integration of multiple cognitive operations. The index is typically developed from measures of basic core cognitive abilities that include fluid reasoning, processing speed, working memory, episodic memory, and verbal ability. Whether intelligence changes with advancing age turns out to be a question that is surprisingly difficult to answer. On one hand, both cross-sectional and longitudinal studies provide evidence that adult intelligence is characterized by predictable normative developmental change with increasing age. This profile shows decline across the adult lifespan on the core cognitive measures that comprise intelligence (Park et al., 2002; Salthouse, 2016). On the other hand, although it is certain that a very long life guarantees some decline in core cognitive measures associated with intelligence, the age at which an individual begins to show decline, and how fast they decline, is quite variable (e.g., Salthouse, 2016). Adding to the puzzle of how intelligence is affected by aging, there is also astonishing evidence that intelligence within a given person is highly stable across the individual's lifespan. The Lothian Birth Cohort study reported that roughly 45% of the variance in intelligence at age 90 was accounted for by that individual's level of intelligence at age 11 (Deary, Pattie, & Starr, 2013). The Vietnam Era Twin Study of Aging recently observed a correlation of similar magnitude between intelligence scores taken at age 20 and those taken at age 62 (Kremen et al., 2019).
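As a quick aid to interpreting such figures, "variance accounted for" is the squared correlation coefficient, so a 45% shared-variance figure implies a test-retest correlation of roughly 0.67. A minimal arithmetic check (only the 0.45 is taken from the text; nothing else is from the cited studies):

```python
import math

# 45% of variance accounted for corresponds to r = sqrt(0.45),
# since shared variance equals the squared correlation coefficient.
shared_variance = 0.45
r = math.sqrt(shared_variance)
print(round(r, 2))  # 0.67
```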
In the present chapter, we use a cognitive aging framework to connect age-related differences in brain structure and function to the measures that comprise intelligence. We then characterize the brain mechanisms that underlie the classic profile of age-related change in human intellectual function. We focus on the importance of the distinction between fluid and crystallized intelligence for understanding aging, and discuss methodological issues that limit a full understanding of the lifespan trajectory of intelligence. Then, we turn our attention to individual differences. Finally, we consider whether "brain training" and other experiences can induce reliable improvements in
brain structures and functions underlying intelligence, and whether they open a door to maintaining or even improving intellectual ability with age.

Intelligence and the Cognitive Neuroscience of Aging

There is a wealth of evidence suggesting that change in the core measures of cognition is a universal aspect of human aging. These measures, on average, follow a rather prescribed trend, with most aspects of performance peaking in early adulthood (~20–30) and declining in later life (~50+) (Anstey, Sargent-Cox, Garde, Cherbuin, & Butterworth, 2014; Salthouse, 2016). The core functions most impacted by aging include the speed at which we process incoming information, episodic retrieval, working memory, and fluid reasoning, which describes our ability to solve novel problems. However, the above conclusions are all based on group means. Recent work examining individual differences in trajectories of aging highlights that, particularly in old age, cognitive function is highly variable, and that some individuals, labeled "super agers," maintain strong cognition into their later years (for a review, see Nyberg & Pudas, 2019). Although individuals vary considerably in the age at which decline occurs, as well as in the rate of decline, some decrease in these functions is a universal signature of aging, as the dynamic interaction among biological systems in an ever-changing environment yields inevitable change.

Patterns of Brain Aging

Healthy cognitive function relies on effective neural structure and function; thus, it is unsurprising that age-related cognitive declines coincide with profound changes in the brain (e.g., Hedden et al., 2016; MacPherson et al., 2017). Neuroimaging research has consistently shown that some brain structures shrink with age, with the most affected structures being the prefrontal cortex (PFC) and the medial temporal lobe (Pacheco, Goh, Kraut, Ferrucci, & Resnick, 2015; Raz, Ghisletta, Rodrigue, Kennedy, & Lindenberger, 2010; Storsve et al., 2014). There is a wealth of data showing that volumetric declines are related to selective decreases in the component abilities comprising intelligence. For example, individual differences in hippocampal volume are predictive of memory function at every age (e.g., Harrison, Maass, Baker, & Jagust, 2018), and differences in the thickness of lateral PFC and the parietal cortex are associated with working memory capacity (Østby, Tamnes, Fjell, & Walhovd, 2011), processing speed (MacPherson et al., 2017), and fluid reasoning (Yuan, Voelkle, & Raz, 2018). In each of these cases, greater volume or thickness is associated with higher intelligence. There are also changes in the integrity of the brain's white matter with age, evidenced by decreasing density and increasing porosity of the white matter as measured by diffusion tensor imaging (DTI).

This results in a "disconnection syndrome" (O'Sullivan et al., 2001), which slows or even limits transmission of neural signals from the white matter to the cerebral cortex. Functional magnetic resonance imaging (fMRI) data suggest that brain activity also changes markedly with age. Three commonly observed patterns of activation are hallmarks of the aging brain. First, numerous studies have consistently shown that older adults exhibit heightened activation in both the right and left dorsolateral PFC on cognitive tasks where young adults are primarily left-lateralized. There is considerable evidence that this increased functional activity in fronto-parietal regions is used to meet cognitive task demands in a compensatory fashion (Batista et al., 2019; Huang, Polk, Goh, & Park, 2012; Park & Reuter-Lorenz, 2009; Reuter-Lorenz & Cappell, 2008; Rieck, Rodrigue, Boylan, & Kennedy, 2017; Scheller et al., 2018). Other activation patterns characteristic of older adults are less adaptive and are evidence of the degradation of healthy functional brain activity. For example, cognitive control tasks such as working memory or encoding tasks typically require activation of the fronto-parietal network and suppression of activity in the default network (brain regions that are associated with relaxation and daydreaming). Older adults consistently show an inability to suppress brain activity in the regions that comprise the default network and have more difficulty directing activation of the cognitive control network (e.g., Turner & Spreng, 2015). Another major difference in activation patterns between young and old is in the display of specific neural signatures to categories such as faces and places.
Older adults show a less selective or "dedifferentiated" response in the ventral-visual cortex in category-specific regions, such as the fusiform gyrus, which activates in a highly selective manner to faces in young adults, but less so in older adults (Bernard & Seidler, 2012; Carp, Park, Hebrank, Park, & Polk, 2011; Park et al., 2004; Voss et al., 2008). Similarly, there are pronounced age differences in how segregated specialized brain networks are from one another. Young adults show a high level of specificity and modularity in the activation of functional brain networks, with limited connectivity between networks. In contrast, with age, connections within networks become sparser and connectivity between networks increases, which results in a more generalized neural signature across a range of tasks and is associated with poorer memory (Chan, Park, Savalia, Petersen, & Wig, 2014). Taken together, changes in neural structure and function with aging are consistent with the changing cognitive landscape that comes with age.

Aging and Theories of Human Intelligence

Theories of what intelligence is and how it is best measured underwent numerous changes throughout the twentieth century. Pioneering work by
Spearman (1904) demonstrated that, although cognitive tests are designed to tap into seemingly distinct cognitive processes, these measures tend to share a surprisingly large amount of variance. More recent measures suggest that in a typical cognitive battery with at least 10 tests, around 40–50% of the variance generally overlaps (Carretta & Ree, 1995; Deary, Penke, & Johnson, 2010). Spearman considered this shared variance a proxy for general intelligence, labeled simply as g, that is flexibly recruited across a wide range of situations and tasks. Developments in modern computing and factor analysis drove further investigation of the structure of intelligence in the second half of the twentieth century. Perhaps most notably, Cattell discovered that two independent factors described intelligence better than a single g factor: fluid intelligence and crystallized intelligence. The Gf-Gc theory of fluid and crystallized intelligence (Cattell, 1941) distinguishes one's ability to quickly and efficiently manipulate information to solve abstract, novel problems (fluid intelligence) from crystallized intelligence, a measure of one's accumulated knowledge and expertise. Fluid intelligence is quite similar to the construct of g and is traditionally derived from tasks that assess reasoning skills, with core cognitive measures such as processing speed and working memory underlying fluid intelligence, as they are fundamental to problem-solving (e.g., Kim & Park, 2018). In contrast, high crystallized intelligence is considered to be a result of enriching life experiences such as level of education, job complexity, and social and intellectual engagement throughout life. Crystallized intelligence is most commonly assessed using measures of vocabulary, with the assumption that vocabulary provides a good overall estimate of knowledge that may have been derived from enriching life experiences.
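The shared-variance observation behind Spearman's g can be made concrete with a small simulation. The sketch below generates a hypothetical 10-test battery in which each test loads on a single general factor, then checks how much of the battery's total variance the first principal component captures; the loading, sample size, and variable names are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 2000, 10

# Each test loads 0.65 on a single general factor g; the remainder is
# test-specific variance (an illustrative one-factor model, not real data).
g = rng.normal(0.0, 1.0, n_people)
loading = 0.65
scores = np.outer(g, np.full(n_tests, loading))
scores += rng.normal(0.0, np.sqrt(1 - loading**2), (n_people, n_tests))

# Share of total variance captured by the first principal component of the
# battery's correlation matrix (np.linalg.eigvalsh sorts eigenvalues ascending).
eigvals = np.linalg.eigvalsh(np.corrcoef(scores.T))
pc1_share = eigvals[-1] / eigvals.sum()
print(round(pc1_share, 2))  # typically in the 0.45-0.50 range here
```

Raising or lowering the common loading moves the first component's share up or down, which is one simple way to see why battery composition affects estimates of g's strength.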
Importantly, Cattell and Horn noted that deficits in fluid intelligence were widely observed with increased age, but crystallized intelligence was preserved with aging (Horn & Cattell, 1967), and might even show improvement with age, as shown in Figure 8.1 (Park et al., 2002). This pattern of findings is supported by longitudinal data, but with the caveat that after around age 80, crystallized intelligence also declines (Salthouse, 2014a). There is some evidence that a late life decrease in crystallized intelligence may be evidence of latent pathology. Because crystallized intelligence is preserved well into late adulthood, it can be used to estimate overall intelligence at younger ages and, in fact, a large disparity between fluid and crystallized abilities in older adults has been related to higher levels of amyloid plaque (a protein associated with Alzheimer’s disease) deposited on the brain (McDonough et al., 2016). The distinction between fluid and crystallized intelligence has played an important role in evolving theories of neurocognitive aging. Most neuroimaging work on aging has focused on the basic component tasks that comprise fluid intelligence (i.e., reasoning, processing speed, and working memory). These studies suggest that fluid intelligence is supported by a predominantly frontoparietal network across tasks, as described in the Parieto-Frontal Integration

Lifespan Perspective on Cognitive Neuroscience

Figure 8.1 Lifespan performance measures. Cross-sectional cognitive performance for each construct (Z-scores) examined at each age decade from 20 to 80. Adapted from D. C. Park and G. N. Bischof (2013), The aging mind: Neuroplasticity in response to cognitive training. Dialogues in Clinical Neuroscience, 15(1), p. 111. Copyright 2013 by LLS and licensed under CC BY-NC-ND 3.0

Theory (P-FIT; for a review, see Jung & Haier, 2007). Congruent with P-FIT, there is a wealth of lesion data showing that damage to the fronto-parietal network results in deficits on fluid tasks (Barbey, Colom, Paul, & Grafman, 2014; Roca et al., 2010). The neural underpinnings of crystallized intelligence are more poorly understood, largely because of the difficulty of measuring the type and complexity of human experiences that occur across a lifetime and contribute to crystallized intelligence. There is some evidence that better grey matter structure – measured using grey matter thickness, volume, and surface area – in the inferior and middle frontal gyri is particularly important for crystallized intelligence (Colom et al., 2013), as the inferior frontal gyrus (especially Broca’s area) plays a critical role in both verbal comprehension (Gläscher et al., 2009) and semantic retrieval (Binder, Desai, Graves, & Conant, 2009). Much more work is needed in this area.

Maintaining Intellectual Function with Declining Brain Integrity

Just as there is considerable age-related variation in core behavioral measures of intelligence, there are also substantial individual differences in the

j. p. hennessee and d. c. park

Figure 8.2 A conceptual model of the scaffolding theory of aging and cognition-revisited (STAC-r). Adapted from P. A. Reuter-Lorenz and D. C. Park (2014), How does it STAC up? Revisiting the scaffolding theory of aging and cognition. Neuropsychology Review, 24(3), p. 360. Copyright 2014 by The Authors

amount of brain degradation that older adults exhibit. There is pervasive and puzzling evidence from neuroimaging research on aging and cognition that the magnitude of brain degradation observed does not always result in degraded cognition. For example, older adults with significantly degraded fronto-parietal structure may nevertheless perform very well on tasks requiring fronto-parietal resources. The Scaffolding Theory of Aging and Cognition (STAC; Park & Reuter-Lorenz, 2009; revised as STAC-r, Reuter-Lorenz & Park, 2014) provides a theoretical account of how individuals can “outperform” the apparent capacity of their brains, as shown in Figure 8.2. The model proposes that biological aging combined with life experiences that deplete or enrich the brain predict brain structure and function, which in turn predict both the absolute level as well as the rate of decline in cognitive function. Importantly, increased activation of brain resources (mainly from fronto-parietal activity) may provide some compensation that offsets the effects of brain degradation and maintains cognitive performance. Thus, good cognitive performance is predicted to occur in adults who maintain youthful brains and have not manifested brain degradation, as well as in older adults who compensate for degradation by increased brain activity. Another way to account for cognitive performance that appears to exceed observable brain integrity is to conceptualize the existence of an additional pool of resources, typically referred to as a “reserve,” that can be drawn upon to
maintain intellectual function as the brain degrades. There are multiple versions of theories of reserve (e.g., Satz, 1993; Stern, 2002; Stern, Arenaza-Urquijo, et al., 2018), but all are highly intertwined with the notion that there are factors that make one resilient to the structural and functional brain insults that occur with age, thus delaying intellectual decline. The original theory of brain reserve was used to describe why patients with similar levels of Alzheimer’s disease pathology (e.g., amyloid burden) or similar magnitudes of brain injury due to stroke can have vastly different cognitive outcomes. According to this theory, there may be a threshold amount of neural resources (i.e., number of neurons) needed to perform a given cognitive function, so those with greater neurological capital can afford to sustain considerable brain degradation and still maintain performance. It has also been suggested that enriching and novel activities that involve cognitive challenge can enhance reserve, which has often been assessed using proxy measures such as educational attainment or occupational complexity. Stern, Gazes, Razlighi, Steffener, and Habeck (2018) attempted in a lifespan fMRI study (ages 20–80, N = 255) to identify a task-invariant cognitive reserve network. More specifically, they examined neural regions that shared activation across 12 cognitive tasks, and whose activation was correlated with National Adult Reading Test IQ, their proxy for cognitive reserve. The resulting cognitive reserve network included large portions of the fronto-parietal network, along with motor and visual areas. There is convincing evidence that well-educated individuals and those high in crystallized intelligence during late adulthood show some resilience to cognitive decline, but further work is needed to determine the locus of reserve.
Moreover, it is not at all clear how cognitive reserve differs conceptually from high measures of fluid and crystallized intelligence (Park, 2019).

Methodological Challenges Associated with Understanding Intelligence Across the Lifespan

It is important to recognize that research on lifespan changes in intelligence is inherently flawed, and that there is an uncertainty factor associated with almost any conclusion about intelligence and adult aging. Most research on lifespan aging examines cross-sectional differences between younger college students and older adults (usually age 60 and older), but interpretation of findings is confounded by cohort effects that result from the different lifetime environments experienced when people of different ages are tested during the same period of time. For example, if younger adults show better eye–hand coordination than older adults, is it due to age differences or to the fact that young adults spent many more hours playing video games in their adolescence than older adults did? Adding to the issue, Flynn (1984, 1987) has shown that, across the past century, scores on intelligence tests have been rising rapidly. In the United States, this gain is estimated to be three IQ points per decade (Flynn,
1984), a finding further supported by a meta-analysis of 271 international datasets from 1909–2013 (Pietschnig & Voracek, 2015). Factors driving the Flynn effect likely include improvements in education, increased use of technology, and reductions in family size (for a review, see Pietschnig & Voracek, 2015). We note that the Flynn effect for crystallized intelligence is much smaller than the effect for fluid intelligence (Flynn, 1987; Pietschnig & Voracek, 2015); this difference likely contributes to the different lifespan trajectories of these constructs when assessed in cross-sectional research. In sum, cross-sectional research makes it difficult to determine how much change in intelligence across the lifespan is truly due to aging as opposed to cohort differences. It would seem that the study of longitudinal changes within individuals over time would greatly diminish the influence of cohort effects. Indeed, it does, but unfortunately new limitations surface. Diminishing sample sizes over time are compounded by non-random participant attrition and practice effects. A dropout rate of 25–40% is common in a basic 3–4-year longitudinal study (Salthouse, 2014b). Because participants who drop out often differ substantially from those who remain, longitudinal studies tend to present an unduly optimistic picture of changes in cognition with age, as it is common to find that poorer-performing subjects are most likely to drop out. One solution is to model the impact of participant attrition on key findings (e.g., Hu & Sale, 2003). Longitudinal studies of intelligence are further confounded by practice effects, such that performance often improves over time.
In a meta-analysis of 50 studies on practice effects (Hausknecht, Halpert, Di Paolo, & Moriarty Gerrard, 2007), participants were estimated to perform approximately 0.25 standard deviations better the second time they completed a cognitive test. Practice effects vary considerably across different cognitive tasks (e.g., verbal ability; Hausknecht et al., 2007), and extended experience produces changes in task-related brain activation and in the organization of functional networks, which are honed with experience (for a review, see Buschkuehl, Jaeggi, & Jonides, 2012). Of particular concern are findings suggesting that practice effects are more prevalent in the young: adults below age 40 show improved performance at a second test, whereas adults over 60 often fail to show improvement. Thus, longitudinal declines in later-life intelligence may be underestimated due to practice. Some strategies for minimizing, though not eliminating, these problems include: (1) running large cross-sectional studies that include equal representation of all ages, allowing for a more finely graded analysis of cohort effects than a young–old comparison; (2) conducting a 10- or 20-year longitudinal study of a specific lifespan phase that would be feasible within a single investigator’s career, showing reproducibility of results across multiple samples and studies conducted both cross-sectionally and longitudinally; and (3) in longitudinal studies, testing subjects twice at the beginning of the study over a period of
weeks or a few months, to assess and control for the magnitude of the practice effects in the young compared to old.
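The selective-attrition problem described above can be illustrated with a toy two-wave simulation (the sample size, effect sizes, and dropout rule are all hypothetical): when lower-scoring participants both decline more and drop out more often, the change estimated from completers understates the true decline.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical two-wave study: decline (in SD units) is larger for
# lower-scoring participants at wave 1.
wave1 = rng.standard_normal(n)
decline = 0.5 - 0.2 * wave1 + 0.3 * rng.standard_normal(n)
wave2 = wave1 - decline

# Non-random attrition: lower wave-1 scorers are more likely to drop out.
# (Dropout here is ~50% for clarity; the chapter cites 25-40% as typical.)
p_return = 1.0 / (1.0 + np.exp(-wave1))
returned = rng.random(n) < p_return

true_change = (wave2 - wave1).mean()                # everyone: about -0.5
observed_change = (wave2 - wave1)[returned].mean()  # completers: smaller apparent decline
print(f"true: {true_change:.2f}, observed in completers: {observed_change:.2f}")
```

The gap between the two estimates is the optimistic bias that attrition models (e.g., joint models of outcome and dropout) attempt to correct.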

Practical Implications: Can We Modify Intelligence and Combat Age-Related Decline?

At least as far back as Francis Galton (1869), many have believed that intelligence is strictly inherited and unmodifiable, though there is much evidence to the contrary. Estimates of the heritability of intelligence are around 30% in childhood but climb to as much as 70–80% by old age (Deary, Penke, & Johnson, 2010), based on evidence including the longitudinal Lothian Birth Cohort of 1921 (Deary, Pattie, & Starr, 2013). These data suggest that one’s relative place “in line” with respect to peers is largely constant – that is, a high performer in childhood will tend to maintain high intelligence in old age. The Lothian Birth Cohort study also reported considerable decline in intelligence from age 79 to 90, consistent with cross-sectional examinations of crystallized intelligence. Overall, these important results provide convincing evidence that the impact of experience is relatively small with age, suggesting that it may be difficult to maintain or enhance cognition in late adulthood through experiences. Nevertheless, the central goal of applied aging research has been to determine whether we can slow or even reverse age-related intellectual decline, either through lifestyle choices (e.g., exercise and mental engagement) or scientifically designed cognitive training programs. Initially, research on the efficacy of cognitive training produced mixed results, as improvements observed on trained tasks rarely transferred to related tasks (Redick et al., 2013; Simons et al., 2016); however, recent research looks somewhat more promising (Au et al., 2015). The N-back working memory (WM) task has been one of the most frequently used training tasks. In N-back training, participants are presented with a series of digits or letters and are asked to report the item that was presented N positions before.
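For readers unfamiliar with the task, here is a minimal sketch of N-back target detection and scoring (the function names and scoring scheme are our own illustration, not a standard implementation):

```python
def nback_targets(stream, n):
    """Indices where the current item matches the one presented n back."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

def score(stream, n, flagged):
    """Score a participant's flagged positions against the true targets:
    returns (hits, false_alarms)."""
    targets = set(nback_targets(stream, n))
    hits = len(targets & flagged)
    false_alarms = len(flagged - targets)
    return hits, false_alarms

stream = list("ABABCABCC")
print(nback_targets(stream, 2))  # → [2, 3]
print(score(stream, 2, {2, 7}))  # → (1, 1)
```

Raising n increases the number of items that must be held and continuously updated in working memory, which is why training programs adapt n to the participant's performance.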
Training programs usually require participants to do this task for several hours daily for a period of 2–4 weeks, with a cognitive battery completed at the beginning and end of training. With practice, participants typically become able to access more stimuli from working memory. A focus has been placed on WM training largely because the ability to store, access, and update information in WM has been hypothesized to be fundamental to many intellectual activities (Martínez et al., 2011). In a meta-analysis of 20 studies of young to middle-aged adults, Au et al. (2015) determined that WM training was significantly associated with improvements in fluid intelligence equivalent to a gain of 3–4 IQ points. Comparable improvements were seen whether training was done at home or in a lab, which is practically important for older populations with limited mobility and those living far from research centers. In a recent meta-analysis of
251 older adult studies (Basak, Qin, & O’Connell, 2020), cognitive training was found to benefit older adults as well, particularly on tasks similar to the trained cognitive ability (“near transfer”), although benefits on unrelated tasks (“far transfer”) are much less common. Moreover, improvements for healthy older adults and those diagnosed with mild cognitive impairment were comparable in size, suggesting that training can still improve function in this group that is at risk for developing dementia. Additionally, programs that target multiple aspects of cognition appear to be most effective, as they have been shown to produce both near and far transfer effects, as well as measurable improvement in everyday function. Training may thus support strong intellectual ability in later life, as well as improve older adults’ ability to engage independently in everyday activities, such as balancing a checkbook. Research has also focused on how intellectual gains from cognitive training are manifested in the brain. After WM training, reduced activation on the N-back task is observed in regions such as the prefrontal and parietal cortex (Buschkuehl et al., 2012), which are involved in goal-directed attention and fluid intelligence (Corbetta & Shulman, 2002; Jung & Haier, 2007). These activation reductions can last at least five weeks after WM training (Miró-Padilla et al., 2019), and Buschkuehl et al. (2012) proposed that these changes reflect increased neural efficiency, as participants are able to achieve even greater performance with fewer neural resources. Although the function of decreased activation is often ambiguous, this pattern has previously been linked to increased efficiency in the prefrontal cortex, as lower activation with good performance represents a sparing of resources that may be deflected to other tasks (Reuter-Lorenz & Cappell, 2008).
Furthermore, in a more naturalistic training regime, older adults who learned photography and quilting skills for three months showed enhanced ability to modulate activity in the fronto-parietal cortex to meet task demands (McDonough, Haber, Bischof, & Park, 2015). These training-related changes in neural activity likely stem from changes to underlying neurochemistry, including altered dopamine release (e.g., Bäckman et al., 2011), though more research is needed to understand this mechanism.

Conclusions and Future Directions

In this chapter, we have explored how human intelligence demonstrates both a degree of stability and remarkable change across the lifespan. This is powerfully illustrated by the common decline of fluid aspects of intelligence in old age, despite preservation of verbal ability and expertise. Important foci for future work include understanding the interrelationships between network degradation with age and measures of fluid and crystallized abilities, learning more about the neural underpinnings of crystallized intelligence, and conducting high quality, short-term longitudinal studies that thoroughly assess brain/behavioral trajectories of different age groups. A greater
integration of theories of intelligence with theories of cognitive aging would benefit both fields. Perhaps most importantly, a large brain/behavior study of the entire lifespan that included both children and middle-aged adults would go a very long way in providing a more complete characterization of developmental trajectories of intelligence. Cognitive training and environmental enrichment have emerged as potential supports for maintaining intelligence. Longitudinal research must examine how cognitive training impacts intelligence across longer periods to determine whether it can help delay age-related cognitive decline. If exposure to training or enriched environments can slow or even simply delay age-related decline, it may prove to be an invaluable way to ensure that people are not only living longer lives, but also have more quality years ahead of them.

References

Anstey, K. J., Sargent-Cox, K., Garde, E., Cherbuin, N., & Butterworth, P. (2014). Cognitive development over 8 years in midlife and its association with cardiovascular risk factors. Neuropsychology, 28(4), 653–665.
Au, J., Sheehan, E., Tsai, N., Duncan, G. J., Buschkuehl, M., & Jaeggi, S. M. (2015). Improving fluid intelligence with training on working memory: A meta-analysis. Psychonomic Bulletin & Review, 22(2), 366–377.
Bäckman, L., Nyberg, L., Soveri, A., Johansson, J., Andersson, M., Dahlin, E., . . . Rinne, J. O. (2011). Effects of working-memory training on striatal dopamine release. Science, 333(6043), 718.
Barbey, A. K., Colom, R., Paul, E. J., & Grafman, J. (2014). Architecture of fluid intelligence and working memory revealed by lesion mapping. Brain Structure and Function, 219(2), 485–494.
Basak, C., Qin, S., & O’Connell, M. A. (2020). Differential effects of cognitive training modules in healthy aging and mild cognitive impairment: A comprehensive meta-analysis of randomized controlled trials. Psychology and Aging, 35(2), 220–249.
Batista, A. X., Bazán, P. R., Conforto, A. B., Martins, M. da G. M., Hoshino, M., Simon, S. S., . . . Miotto, E. C. (2019). Resting state functional connectivity and neural correlates of face-name encoding in patients with ischemic vascular lesions with and without the involvement of the left inferior frontal gyrus. Cortex, 113, 15–28.
Bernard, J. A., & Seidler, R. D. (2012). Evidence for motor cortex dedifferentiation in older adults. Neurobiology of Aging, 33(9), 1890–1899.
Binder, J. R., Desai, R. H., Graves, W. W., & Conant, L. L. (2009). Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex, 19(12), 2767–2796.
Buschkuehl, M., Jaeggi, S. M., & Jonides, J. (2012). Neuronal effects following working memory training. Developmental Cognitive Neuroscience, 2(Supp 1), S167–S179.


Carp, J., Park, J., Hebrank, A., Park, D. C., & Polk, T. A. (2011). Age-related neural dedifferentiation in the motor system. PLoS One, 6(12), e29411.
Carretta, T. R., & Ree, M. J. (1995). Near identity of cognitive structure in sex and ethnic groups. Personality and Individual Differences, 19(2), 149–155.
Cattell, R. B. (1941). Some theoretical issues in adult intelligence testing. Psychological Bulletin, 38, 592.
Chan, M. Y., Park, D. C., Savalia, N. K., Petersen, S. E., & Wig, G. S. (2014). Decreased segregation of brain systems across the healthy adult lifespan. Proceedings of the National Academy of Sciences, 111(46), E4997–E5006.
Colom, R., Burgaleta, M., Román, F. J., Karama, S., Álvarez-Linera, J., Abad, F. J., . . . Haier, R. J. (2013). Neuroanatomic overlap between intelligence and cognitive factors: Morphometry methods provide support for the key role of the frontal lobes. NeuroImage, 72, 143–152.
Corbetta, M., & Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience, 3(3), 201–215.
Deary, I. J., Pattie, A., & Starr, J. M. (2013). The stability of intelligence from age 11 to age 90 years: The Lothian Birth Cohort of 1921. Psychological Science, 24(12), 2361–2368.
Deary, I. J., Penke, L., & Johnson, W. (2010). The neuroscience of human intelligence differences. Nature Reviews Neuroscience, 11(3), 201–211.
Flynn, J. R. (1984). The mean IQ of Americans: Massive gains 1932 to 1978. Psychological Bulletin, 95(1), 29–51.
Flynn, J. R. (1987). Massive IQ gains in 14 nations: What IQ tests really measure. Psychological Bulletin, 101(2), 171–191.
Galton, F. (1869). Hereditary genius: An inquiry into its laws and consequences. London: Macmillan and Co.
Gläscher, J., Tranel, D., Paul, L. K., Rudrauf, D., Rorden, C., Hornaday, A., . . . Adolphs, R. (2009). Lesion mapping of cognitive abilities linked to intelligence. Neuron, 61(5), 681–691.
Harrison, T. M., Maass, A., Baker, S. L., & Jagust, W. J. (2018). Brain morphology, cognition, and β-amyloid in older adults with superior memory performance. Neurobiology of Aging, 67, 162–170.
Hausknecht, J. P., Halpert, J. A., Di Paolo, N. T., & Moriarty Gerrard, M. O. (2007). Retesting in selection: A meta-analysis of coaching and practice effects for tests of cognitive ability. Journal of Applied Psychology, 92(2), 373–385.
Hedden, T., Schultz, A. P., Rieckmann, A., Mormino, E. C., Johnson, K. A., Sperling, R. A., & Buckner, R. L. (2016). Multiple brain markers are linked to age-related variation in cognition. Cerebral Cortex, 26(4), 1388–1400.
Horn, J., & Cattell, R. B. (1967). Age differences in fluid and crystallized intelligence. Acta Psychologica, 26, 107–129.
Hu, C., & Sale, M. E. (2003). A joint model for nonlinear longitudinal data with informative dropout. Journal of Pharmacokinetics and Pharmacodynamics, 30(1), 83–103.
Huang, C.-M., Polk, T. A., Goh, J. O., & Park, D. C. (2012). Both left and right posterior parietal activations contribute to compensatory processes in normal aging. Neuropsychologia, 50(1), 55–66.


Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154.
Kim, S.-J., & Park, E. H. (2018). Relationship of working memory, processing speed, and fluid reasoning in psychiatric patients. Psychiatry Investigation, 15(12), 1154–1161.
Kremen, W. S., Beck, A., Elman, J. A., Gustavson, D. E., Reynolds, C. A., Tu, X. M., . . . Franz, C. E. (2019). Influence of young adult cognitive ability and additional education on later-life cognition. Proceedings of the National Academy of Sciences USA, 116(6), 2021–2026.
MacPherson, S. E., Cox, S. R., Dickie, D. A., Karama, S., Starr, J. M., Evans, A. C., . . . Deary, I. J. (2017). Processing speed and the relationship between Trail Making Test-B performance, cortical thinning and white matter microstructure in older adults. Cortex, 95, 92–103.
Martínez, K., Burgaleta, M., Román, F. J., Escorial, S., Shih, P. C., Quiroga, M. Á., & Colom, R. (2011). Can fluid intelligence be reduced to “simple” short-term storage? Intelligence, 39(6), 473–480.
McDonough, I. M., Bischof, G. N., Kennedy, K. M., Rodrigue, K. M., Farrell, M. E., & Park, D. C. (2016). Discrepancies between fluid and crystallized ability in healthy adults: A behavioral marker of preclinical Alzheimer’s disease. Neurobiology of Aging, 46, 68–75.
McDonough, I. M., Haber, S., Bischof, G. N., & Park, D. C. (2015). The Synapse Project: Engagement in mentally challenging activities enhances neural efficiency. Restorative Neurology and Neuroscience, 33(6), 865–882.
Miró-Padilla, A., Bueichekú, E., Ventura-Campos, N., Flores-Compañ, M.-J., Parcet, M. A., & Ávila, C. (2019). Long-term brain effects of N-back training: An fMRI study. Brain Imaging and Behavior, 13(4), 1115–1127.
Nyberg, L., & Pudas, S. (2019). Successful memory aging. Annual Review of Psychology, 70(1), 219–243.
Østby, Y., Tamnes, C. K., Fjell, A. M., & Walhovd, K. B. (2011). Morphometry and connectivity of the fronto-parietal verbal working memory network in development. Neuropsychologia, 49(14), 3854–3862.
O’Sullivan, M., Jones, D. K., Summers, P. E., Morris, R. G., Williams, S. C. R., & Markus, H. S. (2001). Evidence for cortical “disconnection” as a mechanism of age-related cognitive decline. Neurology, 57(4), 632–638.
Pacheco, J., Goh, J. O., Kraut, M. A., Ferrucci, L., & Resnick, S. M. (2015). Greater cortical thinning in normal older adults predicts later cognitive impairment. Neurobiology of Aging, 36(2), 903–908.
Park, D. C. (2019). Cognitive ability in old age is predetermined by age 20. Proceedings of the National Academy of Sciences USA, 116(6), 1832–1833.
Park, D. C., & Bischof, G. N. (2013). The aging mind: Neuroplasticity in response to cognitive training. Dialogues in Clinical Neuroscience, 15(1), 109–119.
Park, D. C., Lautenschlager, G., Hedden, T., Davidson, N. S., Smith, A. D., & Smith, P. K. (2002). Models of visuospatial and verbal memory across the adult life span. Psychology and Aging, 17(2), 299–320.


Park, D. C., Polk, T. A., Park, P. R., Minear, M., Savage, A., & Smith, M. R. (2004). Aging reduces neural specialization in ventral visual cortex. Proceedings of the National Academy of Sciences USA, 101(35), 13091–13095.
Park, D. C., & Reuter-Lorenz, P. (2009). The adaptive brain: Aging and neurocognitive scaffolding. Annual Review of Psychology, 60(1), 173–196.
Pietschnig, J., & Voracek, M. (2015). One century of global IQ gains: A formal meta-analysis of the Flynn Effect (1909–2013). Perspectives on Psychological Science, 10(3), 282–306.
Raz, N., Ghisletta, P., Rodrigue, K. M., Kennedy, K. M., & Lindenberger, U. (2010). Trajectories of brain aging in middle-aged and older adults: Regional and individual differences. NeuroImage, 51(2), 501–511.
Redick, T. S., Shipstead, Z., Harrison, T. L., Hicks, K. L., Fried, D. E., Hambrick, D. Z., . . . Engle, R. W. (2013). No evidence of intelligence improvement after working memory training: A randomized, placebo-controlled study. Journal of Experimental Psychology: General, 142(2), 359–379.
Reuter-Lorenz, P. A., & Cappell, K. A. (2008). Neurocognitive aging and the compensation hypothesis. Current Directions in Psychological Science, 17(3), 177–182.
Reuter-Lorenz, P. A., & Park, D. C. (2014). How does it STAC up? Revisiting the scaffolding theory of aging and cognition. Neuropsychology Review, 24(3), 355–370.
Rieck, J. R., Rodrigue, K. M., Boylan, M. A., & Kennedy, K. M. (2017). Age-related reduction of BOLD modulation to cognitive difficulty predicts poorer task accuracy and poorer fluid reasoning ability. NeuroImage, 147, 262–271.
Roca, M., Parr, A., Thompson, R., Woolgar, A., Torralva, T., Antoun, N., . . . Duncan, J. (2010). Executive function and fluid intelligence after frontal lobe lesions. Brain, 133(1), 234–247.
Salthouse, T. A. (2014a). Correlates of cognitive change. Journal of Experimental Psychology: General, 143(3), 1026–1048.
Salthouse, T. A. (2014b). Selectivity of attrition in longitudinal studies of cognitive functioning. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences, 69(4), 567–574.
Salthouse, T. A. (2016). Continuity of cognitive change across adulthood. Psychonomic Bulletin & Review, 23(3), 932–939.
Satz, P. (1993). Brain reserve capacity on symptom onset after brain injury: A formulation and review of evidence for threshold theory. Neuropsychology, 7(3), 273.
Scheller, E., Schumacher, L. V., Peter, J., Lahr, J., Wehrle, J., Kaller, C. P., . . . Klöppel, S. (2018). Brain aging and APOE ε4 interact to reveal potential neuronal compensation in healthy older adults. Frontiers in Aging Neuroscience, 10, 1–11.
Simons, D. J., Boot, W. R., Charness, N., Gathercole, S. E., Chabris, C. F., Hambrick, D. Z., & Stine-Morrow, E. A. L. (2016). Do “brain-training” programs work? Psychological Science in the Public Interest, 17(3), 103–186.
Spearman, C. (1904). “General Intelligence,” objectively determined and measured. The American Journal of Psychology, 15(2), 201–292.


Stern, Y. (2002). What is cognitive reserve? Theory and research application of the reserve concept. Journal of the International Neuropsychological Society, 8(3), 448–460.
Stern, Y., Arenaza-Urquijo, E. M., Bartrés-Faz, D., Belleville, S., Cantilon, M., Chetelat, G., . . . Vuoksimaa, E. (2018). Whitepaper: Defining and investigating cognitive reserve, brain reserve, and brain maintenance. Alzheimer’s & Dementia, 16(9), 1305–1311.
Stern, Y., Gazes, Y., Razlighi, Q., Steffener, J., & Habeck, C. (2018). A task-invariant cognitive reserve network. NeuroImage, 178, 36–45.
Storsve, A. B., Fjell, A. M., Tamnes, C. K., Westlye, L. T., Overbye, K., Aasland, H. W., & Walhovd, K. B. (2014). Differential longitudinal changes in cortical thickness, surface area and volume across the adult life span: Regions of accelerating and decelerating change. Journal of Neuroscience, 34(25), 8488–8498.
Turner, G. R., & Spreng, R. N. (2015). Prefrontal engagement and reduced default network suppression co-occur and are dynamically coupled in older adults: The default–executive coupling hypothesis of aging. Journal of Cognitive Neuroscience, 27(12), 2462–2476.
Voss, M. W., Erickson, K. I., Chaddock, L., Prakash, R. S., Colcombe, S. J., Morris, K. S., . . . Kramer, A. F. (2008). Dedifferentiation in the visual cortex: An fMRI investigation of individual differences in older adults. Brain Research, 1244, 121–131.
Yuan, P., Voelkle, M. C., & Raz, N. (2018). Fluid intelligence and gross structural properties of the cerebral cortex in middle-aged and older adults: A multioccasion longitudinal study. NeuroImage, 172, 21–30.


9 Predictive Intelligence for Learning and Optimization

Multidisciplinary Perspectives from Social, Cognitive, and Affective Neuroscience

Christine Ahrends, Peter Vuust, and Morten L. Kringelbach

Many different definitions of intelligence exist but, in the end, they all converge on the brain. In this chapter, we explore the implications of the simple idea that, ultimately, intelligence must help optimize the survival of the individual and of the species. Central to this evolutionary argument, intelligence must offer superior abilities to learn and flexibly adapt to new challenges in the environment. To enhance the possibility of survival, the brain must thus learn to make accurate predictions that optimize the amount of time and energy spent on choosing appropriate actions in a given situation. Such predictive models have a number of parameters, like speed, complexity, and flexibility, that ensure the correct balance and usefulness to solve a given problem (Deary, Penke, & Johnson, 2010; Friedman et al., 2008; Fuster, 2005; Houde, 2010; Johnson-Laird, 2001; Kringelbach & Rolls, 2004; Roth & Dicke, 2005). These parameters come from a variety of cognitive, affective, and social factors, but a main requirement is one of motivation to initiate and sustain the learning process. Finally, one thing is to survive, another is to flourish, and so we discuss whether the intelligent brain is also optimal in terms of wellbeing, given that spending too much time predicting something that may never come to pass could be counterproductive to flourishing. Thus, in this perspective, intelligence can be thought of as the process of balancing and optimizing the parameters that allow animals to survive as individuals and as a species, while still maintaining the motivation to do so. Improving the predictive, intelligent brain is a lifelong process where there are important shifts throughout the lifespan in how different aspects and parameters are prioritized.
In this chapter, we first investigate the fundamental requirements to survive in an intelligent manner. A central idea is that of the predictive brain which has gained traction over the last few decades. Ever more precise models have described model parameters that enable motivated learning to solve complex problems. The brain turns out to have a specific hierarchical architecture that forms the base for these learning processes. Yet, it has also become clear
that we need to further our understanding of the communication in the human brain, and in particular the hierarchical, yet massively parallel processing that allows for intelligent predictions in this architecture. To this end, we show how these can be integrated in models such as the Global Workspace and how several concepts from the study of dynamical systems such as metastability and criticality have proven useful for describing the human brain. We then explore the support from emerging evidence in social, cognitive, and affective neuroscience on describing optimal and suboptimal states of the intelligent brain. Finally, we turn to the role of intelligence not just for surviving but for thriving.

Brain Requirements for Intelligent Survival

Learning is the process that enables intelligent survival and the optimization of exploitation vs. exploration in the human brain. Crucially, learning can only occur given the motivation and expectation of receiving a reward or pleasure (Berridge & Robinson, 2003; Kringelbach & Berridge, 2017). This can be conceptualized as a pleasure cycle with multiple distinct phases: an appetitive/“wanting” phase followed by a consummatory/“liking” phase, and finally a satiety phase (see Figure 9.1a), each of which has behavioral manifestations (hence the quotation marks around “wanting” and “liking”) resulting from the underlying brain networks and mechanisms. Predictions are maximally instigated during the appetitive phase but are also present during the consummatory and satiety phases. During the latter, the outcome of the cycle is evaluated to learn from the experience; although, importantly, learning can occur in any phase of the cycle (Kringelbach & Rolls, 2004). Learning thus occurs continuously as pleasure cycles unfold seamlessly within much longer circadian cycles and over the lifespan (see Figure 9.2; for more information see Berridge and Kringelbach (2008)).
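The exploitation-vs.-exploration balance mentioned above can be made concrete with a toy ε-greedy rule, a standard device from reinforcement learning rather than a model from the sources cited here: with probability ε the agent explores a random option; otherwise it exploits the option with the highest estimated reward. The option values and ε below are purely illustrative assumptions.

```python
import random

def epsilon_greedy(values, epsilon, rng=random):
    """Explore a random option with probability epsilon; otherwise exploit."""
    if rng.random() < epsilon:
        return rng.randrange(len(values))  # explore: try any option
    return max(range(len(values)), key=values.__getitem__)  # exploit: best-known option

# Illustrative: two options with already-learned reward estimates.
random.seed(0)
values = [0.2, 0.8]
choices = [epsilon_greedy(values, epsilon=0.1) for _ in range(1000)]
print(sum(c == 1 for c in choices) / len(choices))  # mostly the better option
```

With a small ε, the agent mostly exploits the better-known option while still occasionally sampling the alternative – the same trade-off that, on this chapter's account, the pleasure cycle motivates.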

Learning Models in the Human Brain

The idea that the brain predicts from, and updates, a model through these pleasure cycles is essential to explaining learning and decision-making. Over the past 20 years, neuroscientific research has developed several theories of a predictive brain (Clark, 2013; Friston & Kiebel, 2009; Johnson-Laird, 2001; Schacter, Addis, & Buckner, 2007). This principle describes how the mind is constantly making predictions about its environment from a mental model that it has built from previous experience, as illustrated in Figure 9.1b. The expectations generated from this model can then either be met or violated, the latter resulting in a prediction error. This error signal is used to update the model and improve it for future predictions – driven by the brain’s intrinsic drive to reduce the model’s free energy (or minimize surprise) (Friston, 2010).


Figure 9.1 The pleasure cycle, interactions between experience and predictions, and how learning might occur. (A) The pleasure cycle illustrates how humans go through an appetitive, a consummatory, and a satiety phase when processing a reward. This can be any number of different rewards, including food or abstract monetary reward. (B) Zooming in on this process shows how, at any point in the pleasure cycle, the brain is taking into


Initially, research on perception demonstrated the involvement of feedforward and feedback loops to test predictions and update a mental model of the environment (for the visual domain, see Bar et al. (2006); Kanai, Komura, Shipp, and Friston (2015); and Rao and Ballard (1999); for the auditory domain, see Garrido, Kilner, Stephan, and Friston (2009); Näätänen, Gaillard, and Mäntysalo (1978); and Näätänen, Paavilainen, Rinne, and Alho (2007)). It has since been shown that many higher-level processes, like the understanding and enjoyment of music, rely on the same principle of generating and testing predictions from a model (Huron, 2006, 2016; Koelsch, Vuust, & Friston, 2019; Pearce & Wiggins, 2012; Rohrmeier & Koelsch, 2012; Vuust & Frith, 2008). The way in which the model is updated based on the expectation of future reward has been explained, for example, using reinforcement learning models (Niv & Schoenbaum, 2008; Schultz, 2015; Schultz, Dayan, & Montague, 1997; Vuust & Kringelbach, 2010). In these theories, any possible action in a given situation is associated with an expected value. In every trial (or every instance where the action is an available option), this associated value is updated based on experience: The resulting error signal is weighted by a learning rate, which determines how much influence a given error term has on the model update (Glascher & O’Doherty, 2010; Niv & Schoenbaum, 2008; O’Doherty, 2004). Extensive evidence from human and animal neuroimaging studies has made a strong case for the involvement of dopaminergic signaling, centered in humans on the ventral putamen and orbitofrontal cortex, in mediating this process (O’Doherty, 2004; Schultz, 2015; Schultz & Dickinson, 2000; Schultz et al., 1997). A schematic of a simple reinforcement learning cycle can be found in Figure 9.1c.
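The value-update rule described above can be sketched as a simple delta rule of the Rescorla-Wagner/temporal-difference family. The reward of 1.0 and the learning rate of 0.3 are illustrative assumptions, not parameters taken from the cited studies.

```python
def update_value(expected, reward, learning_rate):
    """Delta rule: move the expected value toward the received reward."""
    prediction_error = reward - expected        # the "surprise" signal
    return expected + learning_rate * prediction_error

# Illustrative: an initially naive value estimate converges toward a
# constant reward of 1.0; the learning rate sets how fast it does so.
value = 0.0
for trial in range(20):
    value = update_value(value, reward=1.0, learning_rate=0.3)
print(round(value, 3))  # approaches 1.0
```

A higher learning rate makes each error term count for more, so the estimate adapts faster but also tracks noise more readily – the flexibility trade-off discussed later in this chapter.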

Figure 9.1 (cont.) account past experiences of relevant outcomes to form predictions about the future. (C) Within the brain, several stages are likely traversed to update the predictive model in a massively parallel manner, where learning biases the update. One iteration of this cycle can be thought of as making a decision on how to act in a certain situation. First, the tree of possible options is searched by simulating, or predicting, the consequences of each option. The probability that each of these sequences of actions leads to a reward is then evaluated. Suboptimal options (those unlikely to lead to a reward) are discarded, and only the most promising path is translated into an action. The outcome following this action is then evaluated based on the actual reward and compared to the expected reward associated with that action. Based on this comparison, the probabilities of the decision tree are updated: They are increased if the reward was larger than expected, making the option more likely to be chosen in a future similar situation, or decreased if the reward either did not occur or was lower than expected, making the option less likely to be chosen. This process is iterated many times to create an accurate predictive model.


Figure 9.2 The pleasure cycle on its own, during circadian cycles, and over the lifespan. (A) The repeating pleasure cycle consists of different phases: “wanting” (motivation), “liking” (consummation), and satiety. Learning, and the updating of prediction errors, occurs throughout the cycle but most strongly after consummation. (B) These cycles are constantly iterated throughout the circadian cycle, both during the awake phase and during sleep. (C) Over the lifespan (shown on a logarithmic scale), these pleasure cycles help enable an individual’s wellbeing. Evidence suggests that overall wellbeing is U-shaped over the lifespan (Stone et al., 2010).


In everyday life, pleasure cycles are continuously and seamlessly occurring throughout the circadian cycle, both when awake and when asleep, in order to help improve the predictive model (see Figure 9.2b). Problems with this pleasure cycle manifest as anhedonia, the lack of pleasure, which is a key characteristic of neuropsychiatric disorders. As such, the dynamics of these pleasure cycles interact closely with our experience of wellbeing, or eudaimonia, a life well-lived. Interestingly, subjective ratings of wellbeing have been shown to follow a U-shape over the lifespan, with a significant dip around 50 years of age (Figure 9.2c) (Stone, Schwartz, Broderick, & Deaton, 2010).

Optimization Principles of Learning Models

Learning models of intelligence have made their way into computer models, where they have been particularly successful for the development of algorithms to solve complex problems. For instance, a major challenge of machine learning, the game of Go, was recently mastered by a learning algorithm that relies on reinforcement learning (Silver et al., 2016, 2017). Using deep neural networks trained both in a supervised way on human games and through reinforcement learning on simulations of self-play, this algorithm outperforms human experts in the game of Go (Silver et al., 2016). Notably, the performance could be improved even further by discarding data from human games and training purely through reinforcement learning on self-play (Silver et al., 2017). In the case of computer systems, reinforcement learning implies that the algorithm simulates a large number of games and “rewards” itself every time the sequence of actions leads to a win, and “punishes” itself every time it leads to a loss. It uses this information to compute the probability of leading to a win at every step of the sequence. At every iteration, these probabilities are updated by weighting information from the simulations, making the model more flexible. When generating new sequences based on these probabilities, the algorithm needs to predict the outcome several steps ahead in the sequence and reiterate this at every step. How far the algorithm “looks into the future” determines the depth of its search space and is a major limiting factor in its performance. As a theory, these models can be generalized from computer science to human and animal decision-making in that they describe “how systems of any sort can choose their actions to maximize rewards or minimize punishments” (Dayan & Balleine, 2002: 258). They therefore offer a useful framework to understand general optimization principles (i.e., learning) in the brain.
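The self-play scheme can be sketched as a Monte-Carlo-style update in which every state visited during a simulated playout has its estimated win probability nudged toward that playout's outcome. The state names, the neutral starting estimate of 0.5, and the step size are illustrative assumptions, not details of the actual Go system.

```python
def update_playout(win_prob, visited_states, outcome, step=0.1):
    """Nudge each visited state's win probability toward the outcome (1 = win, 0 = loss)."""
    for state in visited_states:
        p = win_prob.get(state, 0.5)            # unseen states start undecided
        win_prob[state] = p + step * (outcome - p)

# Illustrative: two playouts pass through state "a"; one wins, one loses.
probs = {}
update_playout(probs, ["a", "b"], outcome=1)   # winning playout
update_playout(probs, ["a", "c"], outcome=0)   # losing playout
print({s: round(p, 3) for s, p in sorted(probs.items())})
```

States visited only on winning playouts drift above 0.5, states visited only on losing ones drift below it, and states visited on both stay near the middle – exactly the information the algorithm uses to prune unpromising branches.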
In the context of intelligence, the way in which the model used to make decisions and solve problems is updated can increase the chances of survival. This optimization depends on three main parameters that can be seen as analogous to the criteria put forward for intelligence: (1) the time until a decision is made (cf. the speed of problem solving (Deary et al., 2010; Jung & Haier, 2007; Roth & Dicke, 2005)), (2) the depth of the search space (cf. the complexity of problems that can be solved (Conway, Kane, & Engle, 2003; Duncan, 2013; Fuster, 2005; Johnson-Laird, 2001)), and (3) the model flexibility (cf. the ability to take new evidence into account (Fuster, 2005; Roth & Dicke, 2005)). Additionally, humans need to consider social aspects when trying to increase their chances of survival; that is, a fourth parameter, group size, may change depending on the focus of survival. Intelligent behavior can be thought of as optimizing these four parameters – a task that persists over the lifespan (see Figure 9.3). Each optimization parameter will change throughout the different stages of development based on the individual’s priorities (e.g., a shift from individual to family values during early adulthood). The probability of survival depends on the balance of these interdependent parameters. For instance, increasing the depth of the search space negatively impacts the speed at which a decision can be made. Putative relationships over the lifespan are illustrated in Figure 9.3 (center panel).
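The tension between the first two parameters – decision time versus search depth – follows from simple counting: in an exhaustive lookahead, the number of positions to simulate grows exponentially with depth. The branching factor of 3 below is an arbitrary illustration.

```python
def positions_simulated(branching, depth):
    """Positions visited by an exhaustive lookahead of the given depth."""
    return sum(branching ** d for d in range(1, depth + 1))

# Illustrative: with 3 options per step, each extra step of lookahead
# multiplies the work, so a deeper search space directly costs decision time.
for depth in (2, 4, 6):
    print(depth, positions_simulated(3, depth))
```

Doubling the depth roughly squares the work, which is why deeper search and faster decisions cannot both be maximized at once.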

Architecture and Communication Principles in the Human Brain

To enable complex processes such as learning, the whole brain needs to be organized in an appropriate structural and functional architecture that provides the necessary scaffold. Within this framework, the involved specialized regions need to communicate efficiently to implement the feedforward and feedback loops necessary for model updating. Recently, this network perspective has been shown to provide meaningful insights into the study of general intelligence, or the g-factor, in the brain (Barbey, 2018). We here argue that brain architecture and dynamics can help us understand intelligence in a wider sense, considering the interplay between the cognitive, affective, and social functions of the brain. Using graph theory, it has been found that, in the healthy brain, the structural connections of brain networks are self-organized in a so-called “small-world” manner, i.e., they display dense local connectivity together with a few long-range connections (Stam & van Straaten, 2012; van den Heuvel & Hulshoff Pol, 2010). There is growing evidence that a higher degree of small-worldness, reflecting the efficiency of local information processing, together with shorter path lengths for long-range connections, i.e., increased global communication efficiency, is correlated with cognitive performance (Beaty, Benedek, Kaufman, & Silvia, 2015; Dimitriadis et al., 2013; Pamplona, Santos Neto, Rosset, Rogers, & Salmon, 2015; Santarnecchi, Galli, Polizzotto, Rossi, & Rossi, 2014; Song et al., 2008; Stern, 2009; van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009). Another perspective on the hierarchical architecture in which modular brain areas work together to solve problems is described by the Global Neuronal Workspace Theory (Baars, 1988; Baars & Franklin, 2007; Dehaene, Changeux, Naccache, Sackur, & Sergent, 2006; Dehaene, Kerszberg, & Changeux, 1998;

Figure 9.3 Parameter optimization of learning models. Schematic of how the four optimization parameters of the predictive model change across the different stages of development. In general, this optimization aims toward maximizing the depth of the search space (simulating further into the future) and the group size, while minimizing the model flexibility (as a function of the model improving) and the time until decision. These general patterns are affected by developmental changes, such as during adolescence and early adulthood. Each of these parameters affects the probability of survival: In theory, a deeper search space and a faster time until decision improve chances of survival, but they

Dehaene & Naccache, 2001). This view posits that effortful cognitive processing is achieved through the dynamic interaction of a single global workspace (i.e., a group of highly interconnected neurons or regions) with several specialized modular subsystems, such as perceptual, memory, or attentional processors (see Figure 9.4). This is crucial for the study of intelligence, particularly as it can explain the interplay between parallel mental operations and the splitting of complex problems into several “chunks” (Zylberberg, Dehaene, Roelfsema, & Sigman, 2011). There are qualitative differences in the direction of interaction between the hierarchical levels: The top-down direction allows for conscious control of subconscious processes (signals directed from the global workspace to select submodule input), while the bottom-up direction is a subconscious stream that constantly and unselectively competes for access to the global workspace (Dehaene & Naccache, 2001). During effortful, conscious processes, the global workspace ignites the necessary specialized processors and suppresses others (Dehaene et al., 1998). Parallel subconscious processes compete for access to the (conscious) global workspace, where their information can be integrated and eventually considered for action selection (Baars & Franklin, 2007; Baars, Franklin, & Ramsoy, 2013; Mesulam, 1998). An example of these interactions can be found in the effortful guiding of attention to auditory or visual stimuli given specific task instructions: The global workspace can select the relevant module (auditory or visual perceptive inputs, respectively), which then becomes conscious, while the other module does not enter the workspace and its information is thus “ignored” (Dehaene & Naccache, 2001).
This type of architecture is essential for learning processes such as those described under “Learning Models in the Human Brain.” The global workspace can be thought of as keeping track of the model by integrating incoming information from the perceptual subsystems with information stored in memory systems, updating the probabilities associated with outcomes via evaluative subsystems, simulating paths, and finally passing the generated information on to action subsystems (see Figure 9.5). The speed, capacity, and flexibility with which the global workspace operates can be thought of as determining the efficiency of problem-solving – a potential proxy for intelligence as a learning model.

Figure 9.3 (cont.) are positively correlated, making the optimization problem more complex (center panel). The model flexibility parameter is optimal at an intermediate level, where the model is neither so rigid that it cannot take new evidence into account and adapt to new challenges, nor so plastic that it would constantly be renewed. The group size parameter defines the focus of survival: the individual, the small group (e.g., family), or the species.


Figure 9.4 Hierarchical neuronal workspace architectures. Intelligence is perhaps best understood in a hierarchically organized network akin to the Global Neuronal


Oscillator Models

A remaining question in these theories is, however, how, within these brain architectures, different nodes or networks interact to not only enable but also optimize complex learning processes. Important insights into the communication between brain areas come from computational models. Here, the rhythmic firing of neurons and neuronal fields can be modeled as coupled oscillators (Cabral et al., 2014; Cabral, Hugues, Sporns, & Deco, 2011; Deco, Jirsa, McIntosh, Sporns, & Kotter, 2009; Werner, 2007). A simple analogy of coupled oscillators is a row of metronomes connected by standing on the same moving platform. Even if started at random time points and frequencies, the metronomes will spontaneously synchronize, resulting in phase coherence or phase-locking. This self-entrainment of oscillators was described by Kuramoto (1975). On the neural mechanistic level, the Communication through Coherence theory has proposed that neuronal information transmission depends on phase coherence (Fries, 2005). This theory describes how the synchronization between two oscillators (in this case, two local fields) opens temporal communication windows in which input and output can be exchanged at the same time. This suggests that, for information to be processed optimally, the involved brain areas need to be temporally synchronized. An apparent contradiction to this view is that weaker coupling between long-range connections also supports cognitive performance (Santarnecchi et al., 2014). Synchronization (i.e., stronger coupling between nodes) therefore has great potential to describe communication, but it cannot conclusively explain the kind of spatiotemporal flexibility necessary to update a model in the Global Workspace framework (Deco & Kringelbach, 2016). In fact, over-synchronization between certain areas has been suggested to hinder

Figure 9.4 (cont.) Workspace. (A) In the human brain, information is integrated in a hierarchical fashion (shown here as concentric rings). Sensory information is progressively processed and integrated; shown here is how visual (green) and auditory (blue) information is integrated in heteromodal regions (burgundy) (Mesulam, 1998). (B) This idea was further developed in the Global Workspace (Dehaene et al., 1998), based on Baars’ ideas of a cognitive workspace (Baars, 1988), where information is integrated in a hierarchical fashion from perceptual systems based on memory, attention, and evaluative systems. In the Global Workspace, metastable brain regions dynamically assemble to optimally integrate information in order to intelligently shape behavior, ensuring survival of the individual and of the species, and potentially maintaining the motivation throughout life to thrive (from Dehaene & Changeux, 2011). (C) It has since been shown that these spatial gradients are topologically organized in the human brain (Margulies et al., 2016).

Figure 9.5 Reward-related signals in the orbitofrontal cortex (OFC) unfold dynamically across space and time. (A) Representative schematic of hierarchical cortical processing demonstrating higher-order limbic cortical regions (e.g., OFC) sending prediction signals to and receiving prediction error signals from multimodal, exteroceptive, and interoceptive systems (A1, primary auditory cortex; G1, primary gustatory cortex; I1, primary interoceptive cortex; O1, primary olfactory cortex; S1, primary somatosensory cortex; V1, primary visual cortex). Each ring represents a different type of cortex, from less (interior circles) to greater (exterior circles) laminar

cognitive performance and to be related to pathologies (Anokhin, Muller, Lindenberger, Heath, & Myers, 2006; Singer, 2001; Voytek & Knight, 2015). Coupling strength might therefore be a major factor enabling efficient workspace configuration. Using non-linear models, it has been shown that a system consisting of strongly coupled oscillators self-entrains, i.e., it quickly converges to a stable state in which all oscillators are synchronized – in the above example, metronomes that are strongly connected via a moving platform synchronize with each other, and it is difficult to disturb this pattern of synchronization (Deco & Jirsa, 2012; Honey, Kotter, Breakspear, & Sporns, 2007). This poses a problem for a dynamic system: The brain becomes a “prisoner to itself” (Tognoli & Kelso, 2014), i.e., it gets stuck in the stable state, eliminating the possibility of exploring any other state (Deco, Kringelbach, Jirsa, & Ritter, 2017). This can be thought of as a strong, simple attractor manifold: With strong coupling, no matter at which frequencies the oscillators – or metronomes in the above example – start, they will always converge to the same state of total synchrony and remain in that single equilibrium state (stability) (Deco & Jirsa, 2012; Deco, Jirsa, & McIntosh, 2011; Rolls, 2010; Tognoli & Kelso, 2014). Stability is equivalent to a perfectly orderly organization of the system. The opposite case, a system consisting of uncoupled oscillators (or metronomes with no connecting matter between them), does not synchronize; every oscillator stays independent (Deco & Jirsa, 2012; Deco et al., 2017; Honey et al., 2007; Tognoli & Kelso, 2014). This would mean no possible communication between the nodes or between the subsystems and the global workspace (Fries, 2005).
Assuming that brain activity within a node is not a perfect oscillator but includes a certain amount of noise, or random activity, each oscillator would change its activity in a random manner but never converge with the others (Deco & Jirsa, 2012; Deco et al., 2009, 2014). From the dynamical systems perspective, this can be thought of as chaos (Friston, 1997; Tognoli & Kelso, 2014).

Figure 9.5 (cont.) differentiation. Adapted from Mesulam (1998) and Chanes and Barrett (2016). (B) Conceptual representation of reward space for a task with distinct phases of cue (prediction, red), anticipation (uncertainty, blue), and outcome (prediction error, green), with darker colors signaling more reward-related activity. (C) Changes in the level of activity in the OFC during phases of prediction, uncertainty, and prediction error, based on neural evidence in Li et al. (2016). (D) Changes in network dynamics as a function of activity in the OFC. Hypothetical illustration of the OFC directing functional network configurations as a key part of the global workspace across multiple brain regions over time. From Kringelbach and Rapuano (2016).


Computational models have shown that a system in which oscillators have a medium coupling strength results in the most efficient exploration of different states (Deco & Kringelbach, 2016; Deco et al., 2017; Honey et al., 2007). The typical behavior of this type of system is characterized by transient phases of synchronization and de-synchronization, i.e., locking and escape, or integration and segregation (Deco & Kringelbach, 2016; Tognoli & Kelso, 2014). This behavior, called metastability, has been shown to exhibit the highest degree of dynamic complexity, even in the absence of external noise – and thus exists at the border between order and chaos (Friston, 1997). In this complex attractor manifold, configurations exist to which the system will be transiently attracted (Deco & Jirsa, 2012; Rolls, 2010). In light of the Global Workspace theory, different perceptual, attentional, memory, evaluative, or motor networks are subliminally available as ghost attractors in the state space and can easily be stabilized when necessary (Deco & Jirsa, 2012). In a metastable system, at the moment when a fixed state loses its stability (bifurcation), the system is characterized by criticality (Deco & Jirsa, 2012; Deco et al., 2011; Friston, 1997; Tognoli & Kelso, 2014). In this critical zone, information processing and flexibility in the brain are optimal (Deco et al., 2017). It has been shown that the healthy waking brain self-organizes into a state close to criticality (Singer, 2001; Werner, 2007). However, a system at criticality is also particularly susceptible to perturbation (or noise) – an important consideration in the search for the intelligent brain (Tognoli & Kelso, 2014; Werner, 2007). These models of metastability and criticality have been applied to empirical data on different time-scales and shown to match them closely (Hu, Huang, Jiang, & Yu, 2019; Kringelbach, McIntosh, Ritter, Jirsa, & Deco, 2015).
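The effect of coupling strength can be illustrated with a minimal Kuramoto simulation in its mean-field form: each oscillator is pulled toward the population's mean phase in proportion to a global coupling K, and the order parameter R (0 = incoherent, 1 = fully synchronized) summarizes the system's state. All parameter values below (number of oscillators, frequency spread, step size) are arbitrary illustrations, not fits to brain data.

```python
import cmath
import math
import random

def kuramoto_order(coupling, n=50, steps=2000, dt=0.05, seed=1):
    """Simulate n mean-field Kuramoto oscillators; return final coherence R in [0, 1]."""
    rng = random.Random(seed)
    freqs = [rng.gauss(1.0, 0.1) for _ in range(n)]           # natural frequencies
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        mean_field = sum(cmath.exp(1j * p) for p in phases) / n
        r, psi = abs(mean_field), cmath.phase(mean_field)
        # dtheta_i/dt = omega_i + K * R * sin(psi - theta_i)  (Euler step)
        phases = [p + dt * (w + coupling * r * math.sin(psi - p))
                  for p, w in zip(phases, freqs)]
    return abs(sum(cmath.exp(1j * p) for p in phases) / n)

# Uncoupled oscillators stay incoherent (low R); strong coupling entrains
# them into a single synchronized state (R near 1) - the "prisoner" regime.
print(round(kuramoto_order(0.0), 2), round(kuramoto_order(2.0), 2))
```

At intermediate coupling, with a little noise added, the system instead hovers between these extremes, transiently locking and escaping – the metastable regime described above.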
They can explain single-unit behavior (Friston, 1997; Rolls, 2010), phase relations in EEG and MEG recordings of the brain (Cabral et al., 2014), and spontaneous functional connectivity in resting-state fMRI (Cabral et al., 2011; Deco et al., 2009, 2017). The available repertoire of brain states can be thought of as an attractor manifold in which certain functional networks constitute stronger (i.e., more stable, like the Default Mode Network [DMN]) (Anticevic et al., 2012; Deco et al., 2014, 2017; Ghosh, Rho, McIntosh, Kotter, & Jirsa, 2008) or weaker attractors, to which the system transiently converges (Deco et al., 2011). The trajectory by which the brain moves through this state space and explores the different network configurations is represented by transitions between functional networks in fMRI.

Optimal and Suboptimal States of the Intelligent Brain

In this chapter, we have considered intelligence in an evolutionary context as the promotion of survival of the individual and of the species. Going beyond survival, we have suggested that intelligence should be aimed at wellbeing to sustain the motivation necessary for learning. We have shown how intelligence can be further conceptualized as an optimization problem of a learning model over the parameters of speed, depth of the search space, and flexibility. We have described a hierarchical brain architecture akin to the Global Workspace Theory that is necessary for this process. We have also shown that, in order to enable an optimal regime for such a framework to work efficiently, the concepts of metastability and criticality are needed to ensure optimal communication across brain networks. We have developed this conceptualization in contrast to current concepts of intelligence in the fields of social, cognitive, and affective neuroscience, which have posed several challenges that have been difficult to resolve in a unified theory. One attempt has been to introduce independent subdomains of intelligence, such as social and emotional intelligence, and numerous subareas of cognition (Gardner, 1984; Thorndike, 1920). While these offer great specificity to explain observed phenomena, they strip the term “intelligence” of its ambition of universality and create a gap between the different fields. This has further affected the areas of neurology and neuropsychiatry, where symptom classification and mechanistic explanations for some disorders are scattered into a diffuse array of social, emotional, or cognitive deficits and corresponding neural processes that lack a holistic understanding of the diseases. Understanding the intelligent brain in the way we suggest here could provide a new theoretical framework to re-evaluate traditional questions in the study of intelligence.
For instance, in cognitive neuroscience, the general intelligence factor (g) has been reliably shown to be highly correlated with most cognitive subdomains, such as working memory, and their functional relationship is a question that has received great attention (Colom, Rebollo, Palacios, Juan-Espinosa, & Kyllonen, 2004; Conway et al., 2003; Kane & Engle, 2002). The Global Workspace conceptualization allows for illustrating the relationship between working memory and general intelligence: Working memory can be understood as the limited capacity of the global workspace, while general intelligence is conceptualized as the interplay between several model parameters, including the capacity (or depth) of the workspace. Despite promising past attempts, the field of affective neuroscience is still struggling to explain cognitive deficits in mood disorders, such as depression. Depressed patients have been shown to have lower IQ scores than matched healthy controls, as well as diffuse cognitive deficits (Landro, Stiles, & Sletvold, 2001; Ravnkilde et al., 2002; Sackeim et al., 1992; Veiel, 1997; Zakzanis, Leach, & Kaplan, 1998). While causal explanations in both directions have been suggested (lower IQ as a risk factor for depression, or depressive symptoms hindering cognitive performance), the relationship remains difficult to interpret (Koenen et al., 2009; Liang et al., 2018). As in cognitive neuroscience, the dissociation of state and trait effects is unclear in affective disorders (Hansenne & Bianchi, 2009). Researchers and clinicians therefore often default to describing specific cognitive impairments, resulting in a disjointed list of cognitive and emotional symptoms. The reinforcement learning model can help to illustrate the dependency of learning processes on affective factors like motivation and reward evaluation. In this light, cognitive deficits in depression, and the general susceptibility of intelligence tests to mood, can more easily be understood within the unified hierarchical framework described in this chapter. Namely, we have shown how learning happens as a function of expectation, consumption, and satiation of pleasure, providing motivation and reward. With the absence of pleasure, or anhedonia, being a core symptom of depression, a main factor that enables learning is lacking in this disorder. Looking forward, the application of the concepts of metastability and criticality as principles of an optimally working brain is an emerging area in social, cognitive, and affective neuroscience that promises important causal-mechanistic insights into several central research questions within these fields.

Emerging Evidence from Social Neuroscience

The recent rise of oscillator models for simulating social interactions has already provided an interesting new perspective on the role of coherence in social behavior. For instance, studies on musical synchronization between dyads have demonstrated that a model of two coupled oscillators can simulate synchronized tapping (Konvalinka, Vuust, Roepstorff, & Frith, 2009) and that a complex pattern of phase-locking, both within and between two brains, is crucial for interpersonal synchronization (Heggli, Cabral, Konvalinka, Vuust, & Kringelbach, 2019; Sanger, Muller, & Lindenberger, 2012). These models also have the potential to explain and dissociate conflicts within individuals (e.g., between auditory and motor regions of the brain) and between individuals in interpersonal interactions, by modeling several oscillators within each brain and simulating different coupling strengths within a unit and across units (Heggli et al., 2019). In this way, oscillator models have proven to be useful tools for modeling social interaction on several different levels. Taking models of social interaction one step further, it has been shown that larger group dynamics rely on metastability: In order for a system of interacting units to converge to a consensus (or make a decision), the system needs to be close to criticality (De Vincenzo, Giannoccaro, Carbone, & Grigolini, 2017; Grigolini et al., 2015; Turalska, Geneston, West, Allegrini, & Grigolini, 2012). A metastable state of transient phases of consensus and de-stabilization can, moreover, be meaningfully applied to describe real-life social and political phenomena (Turalska, West, & Grigolini, 2013). This could be an important perspective for understanding social intelligence as a flexible, adaptive behavior (Parkinson & Wheatley, 2015).
A major goal of this line of research within social neuroscience is to model social groups wherein each brain consists of many nodes, which could provide an important step towards understanding the mechanisms underlying dynamics and conflicts between personal and
collective behavior. At the moment, however, these ambitions are still limited by computational power.

Emerging Evidence from Cognitive Neuroscience

The importance of inter-area synchronization for cognitive performance has been shown in large-scale resting-state neuroimaging studies (Ferguson, Anderson, & Spreng, 2017). Moreover, it has recently been found that increasing synchrony between individual brain network nodes can attenuate age-related working-memory deficits (Reinhart & Nguyen, 2019). Between-network coupling has also been evaluated in resting-state functional connectivity studies. A recent large-sample study (N = 3,950 subjects from the UK Biobank) found that, besides functional connectivity within a network, the specific coupling pattern between several canonical resting-state networks (the Default Mode Network [DMN], the frontoparietal network, and the cingulo-opercular network) can explain a large amount of variance in cognitive performance (Shen et al., 2018). Furthermore, the roles of efficient de-activation of networks and of flexible switching between networks for cognition are gaining recognition (Anticevic et al., 2012; Leech & Sharp, 2014).

Recent studies have used dynamic approaches to functional connectivity to describe the spatio-temporal dynamics of the brain during rest. For instance, Cabral et al. (2017) used Leading Eigenvector Dynamics Analysis (LEiDA) and found that cognitive performance in older adults can be explained by distinct patterns of flexible switching between brain states. This dynamical perspective also has the potential to contribute to the discussion of the “milk-and-jug” problem (Dennis et al., 2009). This debate revolves around the relationship between theoretical concepts of intelligence (the “jug,” or capacity) and actual performance (the “milk,” or content, which is limited by the capacity of the jug).
Viewing the brain as a dynamical system could yield theoretical insights into this relationship by dissociating cognitive traits (like g), understood as the repertoire of available brain states, from cognitive states (performance in a given situation), understood as the exploration of that repertoire. Despite emerging evidence on graph-theoretical network configurations during effortful cognitive processing (Kitzbichler, Henson, Smith, Nathan, & Bullmore, 2011), a detailed description of the global workspace remains a major challenge; meeting it promises to link specific state-transition patterns with intelligence.
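A heavily simplified, hypothetical sketch of the LEiDA idea may help: at a single time point, one builds a phase-coherence matrix across regions and takes its leading eigenvector, whose sign pattern defines the momentary "state". In the actual method these vectors are computed from BOLD phases over time and clustered into a repertoire of recurring states; everything below (three regions, the toy phases, the pure-Python power iteration) is invented for illustration.

```python
import math

# LEiDA-flavoured sketch, heavily simplified: at one time point, build the
# phase-coherence matrix C[i][j] = cos(theta_i - theta_j) across regions
# and extract its leading eigenvector by power iteration. In LEiDA proper,
# the sign pattern of this vector defines the momentary brain state, and
# recurring patterns over time are clustered into a repertoire of states.

def leading_eigenvector(mat, iters=200):
    """Power iteration on a symmetric positive semi-definite matrix."""
    n = len(mat)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Three toy "regions": two nearly in phase with each other, one in anti-phase.
phases = [0.0, 0.1, math.pi]
coherence = [[math.cos(a - b) for b in phases] for a in phases]
v1 = leading_eigenvector(coherence)
signs = [x > 0 for x in v1]
print(signs)  # the anti-phase region carries the opposite sign
```

The sign split recovers the two-versus-one community structure of the phases, which is the sense in which the leading eigenvector summarizes the momentary coupling pattern as a discrete state.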

Emerging Evidence from Affective Neuroscience

Using the concept of metastability, it has been proposed that certain neurological and neuropsychiatric diseases, such as Parkinson’s disease, depression, anxiety, and obsessive-compulsive disorder, can be explained by overcoupling,
and others, such as age-related cognitive decline, autism spectrum disorder, and schizophrenia, can be explained by undercoupling (Voytek & Knight, 2015). Behaviorally, this could manifest as symptoms like rumination in depression, understood as “getting stuck” in a strong attractor network, or disorganization in schizophrenia, understood as a super-critical, chaotic regime or a surplus of noise. Empirical findings support this view: resting-state functional connectivity studies in patients with major depressive disorder have shown increased connectivity in short-range connections, as well as disruption and dysfunction of the central executive network, the DMN, and the salience network (Fingelkurts et al., 2007; Menon, 2011). It has also been suggested that a dysfunction in state transitions, i.e., a lack of optimal metastability, can explain anhedonia in depressed patients (Kringelbach & Berridge, 2017). Additional evidence for the importance of coherence and metastability in psychiatric disorders comes from the study of schizophrenia. A recent study found that the functional cohesiveness, integration, and metastability of several resting-state networks could not only distinguish between patients and healthy controls, but also explain the severity of specific symptoms, such as anxious/depressive or disorganization symptoms (Lee, Doucet, Leibu, & Frangou, 2018). Using a modeling approach similar to the ones described above, Rolls, Loh, Deco, and Winterer (2008) had previously shown that a schizophrenia-like brain state can be produced by introducing a larger amount of noise and variability into the system. Taking these ideas even further, such principles are now being used to assess the potential of new therapies for affective disorders like depression.
The entropic brain hypothesis suggests that psychedelic therapy may benefit depression by moving the brain closer to criticality and thereby enhancing exploration of the state space (Carhart-Harris, 2018; Carhart-Harris et al., 2014). The potential effects of these novel treatment options can be simulated using computational models like the ones described under “Oscillator Models” (Deco & Kringelbach, 2014; Kringelbach & Berridge, 2017).
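The over-/under-coupling intuition can be sketched with a standard Kuramoto model (illustrative parameters only, not a whole-brain model): the order parameter R(t) measures momentary synchrony, and sweeping the global coupling moves the system from an under-coupled, incoherent regime toward an over-coupled, rigidly synchronized one, with metastable waxing and waning of synchrony in between.

```python
import math
import random

# Toy Kuramoto coupling sweep (illustrative parameters, not a whole-brain
# model): N phase oscillators with global coupling k. The order parameter
# R(t) in [0, 1] measures momentary synchrony. Very low k leaves the system
# incoherent ("under-coupled"); very high k pins it in near-rigid synchrony
# ("over-coupled", loosely analogous to getting stuck in an attractor).

def kuramoto_r_series(k, n=50, dt=0.01, steps=4000, seed=1):
    """Simulate n globally coupled phase oscillators; return R(t) samples."""
    rng = random.Random(seed)
    omegas = [rng.gauss(0.0, 1.0) for _ in range(n)]
    thetas = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    rs = []
    for _ in range(steps):
        c = sum(math.cos(t) for t in thetas) / n
        s = sum(math.sin(t) for t in thetas) / n
        r, psi = math.hypot(c, s), math.atan2(s, c)
        # mean-field form of dtheta_i/dt = omega_i + k*R*sin(psi - theta_i)
        thetas = [t + (w + k * r * math.sin(psi - t)) * dt
                  for t, w in zip(thetas, omegas)]
        rs.append(r)
    return rs[steps // 2:]  # discard the initial transient

def mean(xs):
    return sum(xs) / len(xs)

r_under = mean(kuramoto_r_series(k=0.2))  # under-coupled: low synchrony
r_over = mean(kuramoto_r_series(k=5.0))   # over-coupled: near-rigid synchrony
print(r_under < r_over)
```

Intermediate coupling values near the critical point are where R(t) fluctuates most, which is the regime the metastability account associates with a healthy, flexible brain; the variability of R(t) over time is the usual simple proxy for metastability in such models.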

Conclusion

The quest for the optimal state of the intelligent brain has resulted in several promising theories with great relevance to the fields of social, cognitive, and affective neuroscience. We suggest that learning models and the optimization of their model parameters across the lifespan could be a useful tool for understanding intelligence from a multidisciplinary point of view. Namely, we described how cognitive learning relies on affective processes like motivation and evaluation of reward. The balance between individual and group intelligence, i.e., ensuring survival of the individual and of the species, depends on the prioritization of the different model parameters in the optimization. To understand the architecture that allows solving
reinforcement learning problems, we have argued for the necessity of a hierarchical system such as the Global Workspace, and demonstrated how it can be applied to research problems related to intelligence. Finally, we focused on flexible brain communication within this structure as the basis of efficient processing. We suggest that future research can shed new light on the physics of optimal and suboptimal states of the brain using the concept of metastability. We propose that the optimal brain is a brain at criticality that flexibly switches between a large repertoire of attractor states. Taking the case of depression as an example, the concepts delineated under “Oscillator Models” have the potential to explain not only certain (e.g., affective) symptoms, but also the diffuse effects of the disease, including cognitive symptoms. It has been suggested that the depressed brain is sub-critical, which could affect many areas of brain function and behavior (Carhart-Harris, 2018; Kringelbach & Berridge, 2017). We have argued that whole-brain computational modeling can help establish a causal understanding of the brain in health and disease by simulating the effects of different parameters, such as system-wide or node-specific coupling strength, noise, or transmission delay, on whole-brain dynamics (Deco & Kringelbach, 2014). Interpreting the behavior of these models can contribute to major discussions in the different fields of neuroscience. For instance, simulations have shown that removing the anterior insula and cingulate – areas that are functionally affected in, e.g., schizophrenia, bipolar disorder, depression, and anxiety – from a metastable brain model results in a strong reduction of the available state repertoire (Deco & Kringelbach, 2016). Speculatively, considering the system’s susceptibility to perturbation and closeness to instability at criticality could provide a theoretical account for the popularly discussed link between genius and insanity.
The available models also open avenues for discovery and for theoretical assessment of novel treatment options, such as the psychedelic therapy approach for depression mentioned above. Criticality of the brain as the optimal state of switching between attractor networks could be a concept that re-unifies social, cognitive, and emotional intelligence. While it offers a universal theory for a range of different neuronal and behavioral questions, the established computational models can help describe whole-brain dynamics in a continuum of healthy, diseased, suboptimal, and optimal brain states in great detail. We advocate the view that social, cognitive, and affective intelligence are in fact all inherently linked. While the human brain’s successful application of purely predictive intelligence has clearly helped survival, it is not necessarily conducive to a flourishing life if social and affective aspects are neglected. Spending too much time predicting something that may never come to pass is not necessarily helpful for enjoying the “here and now” that is a major part of our wellbeing. As such, intelligence could be said to be a double-edged sword that might help us survive
and give us more time, but we should not forget to enjoy this extra time and perhaps even flourish with our friends and family. Nobel Prize-winning novelist John Steinbeck and marine biologist Ed Ricketts wrote presciently of the “tragic miracle of consciousness,” of how we are paradoxically bound by our “physical memories to a past of struggle and survival” and limited in our “futures by the uneasiness of thought and consciousness” (Steinbeck & Ricketts, 1941). Focusing too narrowly on prediction, without taking affective aspects such as motivation into account, can easily create this paradox. This is exactly why, in our proposed definition of intelligence, we have chosen to focus both on prediction and on maintaining the motivational factors that allow us to flourish. Surviving is one thing, but it is just as important to thrive.

References

Anokhin, A. P., Müller, V., Lindenberger, U., Heath, A. C., & Myers, E. (2006). Genetic influences on dynamic complexity of brain oscillations. Neuroscience Letters, 397(1–2), 93–98.
Anticevic, A., Cole, M. W., Murray, J. D., Corlett, P. R., Wang, X. J., & Krystal, J. H. (2012). The role of default network deactivation in cognition and disease. Trends in Cognitive Sciences, 16, 584–592.
Baars, B. J. (1988). A cognitive theory of consciousness. Cambridge University Press.
Baars, B. J., & Franklin, S. (2007). An architectural model of conscious and unconscious brain functions: Global Workspace Theory and IDA. Neural Networks, 20(9), 955–961.
Baars, B. J., Franklin, S., & Ramsøy, T. Z. (2013). Global workspace dynamics: Cortical “binding and propagation” enables conscious contents. Frontiers in Psychology, 4, 200.
Bar, M., Kassam, K. S., Ghuman, A. S., Boshyan, J., Schmid, A. M., Dale, A. M., . . . Halgren, E. (2006). Top-down facilitation of visual recognition. Proceedings of the National Academy of Sciences USA, 103(2), 449–454.
Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20.
Beaty, R. E., Benedek, M., Kaufman, S. B., & Silvia, P. J. (2015). Default and executive network coupling supports creative idea production. Scientific Reports, 5, 10964.
Berridge, K. C., & Kringelbach, M. L. (2008). Affective neuroscience of pleasure: Reward in humans and animals. Psychopharmacology, 199(3), 457–480.
Berridge, K. C., & Robinson, T. E. (2003). Parsing reward. Trends in Neurosciences, 26(9), 507–513.
Cabral, J., Hugues, E., Sporns, O., & Deco, G. (2011). Role of local network oscillations in resting-state functional connectivity. Neuroimage, 57(1), 130–139.
Cabral, J., Luckhoo, H., Woolrich, M., Joensson, M., Mohseni, H., Baker, A., . . . Deco, G. (2014). Exploring mechanisms of spontaneous functional connectivity in MEG: How delayed network interactions lead to structured amplitude envelopes of band-pass filtered oscillations. Neuroimage, 90, 423–435.
Cabral, J., Vidaurre, D., Marques, P., Magalhães, R., Silva Moreira, P., Miguel Soares, J., . . . Kringelbach, M. L. (2017). Cognitive performance in healthy older adults relates to spontaneous switching between states of functional connectivity during rest. Scientific Reports, 7, 5135.
Carhart-Harris, R. L. (2018). The entropic brain – Revisited. Neuropharmacology, 142, 167–178.
Carhart-Harris, R. L., Leech, R., Hellyer, P., Shanahan, M., Feilding, A., Tagliazucchi, E., . . . Nutt, D. (2014). The entropic brain: A theory of conscious states informed by neuroimaging research with psychedelic drugs. Frontiers in Human Neuroscience, 8, 20.
Chanes, L., & Barrett, L. F. (2016). Redefining the role of limbic areas in cortical processing. Trends in Cognitive Sciences, 20(2), 96–106.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204.
Colom, R., Rebollo, I., Palacios, A., Juan-Espinosa, M., & Kyllonen, P. C. (2004). Working memory is (almost) perfectly predicted by g. Intelligence, 32(3), 277–296.
Conway, A. R. A., Kane, M. J., & Engle, R. W. (2003). Working memory capacity and its relation to general intelligence. Trends in Cognitive Sciences, 7(12), 547–552.
Dayan, P., & Balleine, B. W. (2002). Reward, motivation, and reinforcement learning. Neuron, 36(2), 285–298.
De Vincenzo, I., Giannoccaro, I., Carbone, G., & Grigolini, P. (2017). Criticality triggers the emergence of collective intelligence in groups. Physical Review E, 96(2–1), 022309.
Deary, I. J., Penke, L., & Johnson, W. (2010). The neuroscience of human intelligence differences. Nature Reviews Neuroscience, 11(3), 201–211.
Deco, G., & Jirsa, V. K. (2012). Ongoing cortical activity at rest: Criticality, multistability, and ghost attractors. Journal of Neuroscience, 32(10), 3366–3375.
Deco, G., Jirsa, V. K., & McIntosh, A. R. (2011). Emerging concepts for the dynamical organization of resting-state activity in the brain. Nature Reviews Neuroscience, 12(1), 43–56.
Deco, G., Jirsa, V. K., McIntosh, A. R., Sporns, O., & Kötter, R. (2009). Key role of coupling, delay, and noise in resting brain fluctuations. Proceedings of the National Academy of Sciences USA, 106(25), 10302–10307.
Deco, G., & Kringelbach, M. L. (2014). Great expectations: Using whole-brain computational connectomics for understanding neuropsychiatric disorders. Neuron, 84(5), 892–905.
Deco, G., & Kringelbach, M. L. (2016). Metastability and coherence: Extending the communication through coherence hypothesis using a whole-brain computational perspective. Trends in Neurosciences, 39(3), 125–135.
Deco, G., Kringelbach, M. L., Jirsa, V. K., & Ritter, P. (2017). The dynamics of resting fluctuations in the brain: Metastability and its dynamical cortical core. Scientific Reports, 7(1), 3095.
Deco, G., Ponce-Alvarez, A., Hagmann, P., Romani, G. L., Mantini, D., & Corbetta, M. (2014). How local excitation-inhibition ratio impacts the whole brain dynamics. Journal of Neuroscience, 34(23), 7886–7898.
Dehaene, S., & Changeux, J.-P. (2011). Experimental and theoretical approaches to conscious processing. Neuron, 70(2), 200–227.
Dehaene, S., Changeux, J.-P., Naccache, L., Sackur, J., & Sergent, C. (2006). Conscious, preconscious, and subliminal processing: A testable taxonomy. Trends in Cognitive Sciences, 10(5), 204–211.
Dehaene, S., Kerszberg, M., & Changeux, J.-P. (1998). A neuronal model of a global workspace in effortful cognitive tasks. Proceedings of the National Academy of Sciences, 95(24), 14529.
Dehaene, S., & Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition, 79(1), 1–37.
Dennis, M., Francis, D. J., Cirino, P. T., Schachar, R., Barnes, M. A., & Fletcher, J. M. (2009). Why IQ is not a covariate in cognitive studies of neurodevelopmental disorders. Journal of the International Neuropsychological Society, 15(3), 331–343.
Dimitriadis, S. I., Laskaris, N. A., Simos, P. G., Micheloyannis, S., Fletcher, J. M., Rezaie, R., & Papanicolaou, A. C. (2013). Altered temporal correlations in resting-state connectivity fluctuations in children with reading difficulties detected via MEG. Neuroimage, 83, 307–317.
Duncan, J. (2013). The structure of cognition: Attentional episodes in mind and brain. Neuron, 80(1), 35–50.
Ferguson, M. A., Anderson, J. S., & Spreng, R. N. (2017). Fluid and flexible minds: Intelligence reflects synchrony in the brain’s intrinsic network architecture. Network Neuroscience, 1(2), 192–207.
Fingelkurts, A. A., Fingelkurts, A. A., Rytsälä, H., Suominen, K., Isometsä, E., & Kähkönen, S. (2007). Impaired functional connectivity at EEG alpha and theta frequency bands in major depression. Human Brain Mapping, 28(3), 247–261.
Friedman, N. P., Miyake, A., Young, S. E., DeFries, J. C., Corley, R. P., & Hewitt, J. K. (2008). Individual differences in executive functions are almost entirely genetic in origin. Journal of Experimental Psychology: General, 137(2), 201–225.
Fries, P. (2005). A mechanism for cognitive dynamics: Neuronal communication through neuronal coherence. Trends in Cognitive Sciences, 9(10), 474–480.
Friston, K. (1997). Transients, metastability, and neuronal dynamics. Neuroimage, 5(2), 164–171.
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138.
Friston, K., & Kiebel, S. (2009). Predictive coding under the free-energy principle. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 364(1521), 1211–1221.
Fuster, J. M. (2005). Cortex and mind: Unifying cognition. Oxford University Press.
Gardner, H. (1984). Frames of mind: The theory of multiple intelligences. London: Heinemann.
Garrido, M. I., Kilner, J. M., Stephan, K. E., & Friston, K. J. (2009). The mismatch negativity: A review of underlying mechanisms. Clinical Neurophysiology, 120(3), 453–463.
Ghosh, A., Rho, Y., McIntosh, A. R., Kötter, R., & Jirsa, V. K. (2008). Noise during rest enables the exploration of the brain’s dynamic repertoire. PLoS Computational Biology, 4(10), e1000196.
Gläscher, J. P., & O’Doherty, J. P. (2010). Model-based approaches to neuroimaging: Combining reinforcement learning theory with fMRI data. Wiley Interdisciplinary Reviews: Cognitive Science, 1(4), 501–510.
Grigolini, P., Piccinini, N., Svenkeson, A., Pramukkul, P., Lambert, D., & West, B. J. (2015). From neural and social cooperation to the global emergence of cognition. Frontiers in Bioengineering and Biotechnology, 3, 78.
Hansenne, M., & Bianchi, J. (2009). Emotional intelligence and personality in major depression: Trait versus state effects. Psychiatry Research, 166(1), 63–68.
Heggli, O. A., Cabral, J., Konvalinka, I., Vuust, P., & Kringelbach, M. L. (2019). A Kuramoto model of self-other integration across interpersonal synchronization strategies. PLoS Computational Biology, 15(10), e1007422. doi: 10.1371/journal.pcbi.1007422.
Honey, C. J., Kötter, R., Breakspear, M., & Sporns, O. (2007). Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proceedings of the National Academy of Sciences USA, 104(24), 10240–10245.
Houdé, O. (2010). Beyond IQ comparisons: Intra-individual training differences. Nature Reviews Neuroscience, 11(5), 370.
Hu, G., Huang, X., Jiang, T., & Yu, S. (2019). Multi-scale expressions of one optimal state regulated by dopamine in the prefrontal cortex. Frontiers in Physiology, 10, 113.
Huron, D. (2006). Sweet anticipation: Music and the psychology of expectation. Cambridge, MA: MIT Press.
Huron, D. (2016). Voice leading: The science behind a musical art. Cambridge, MA: MIT Press.
Johnson-Laird, P. N. (2001). Mental models and deduction. Trends in Cognitive Sciences, 5(10), 434–442.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154; discussion 154–187.
Kanai, R., Komura, Y., Shipp, S., & Friston, K. (2015). Cerebral hierarchies: Predictive processing, precision and the pulvinar. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 370(1668), 20140169.
Kane, M. J., & Engle, R. W. (2002). The role of prefrontal cortex in working-memory capacity, executive attention, and general fluid intelligence: An individual-differences perspective. Psychonomic Bulletin & Review, 9(4), 637–671.
Kitzbichler, M. G., Henson, R. N. A., Smith, M. L., Nathan, P. J., & Bullmore, E. T. (2011). Cognitive effort drives workspace configuration of human brain functional networks. The Journal of Neuroscience, 31(22), 8259.
Koelsch, S., Vuust, P., & Friston, K. (2019). Predictive processes and the peculiar case of music. Trends in Cognitive Sciences, 23(1), 63–77.
Koenen, K. C., Moffitt, T. E., Roberts, A. L., Martin, L. T., Kubzansky, L., Harrington, H., . . . Caspi, A. (2009). Childhood IQ and adult mental disorders: A test of the cognitive reserve hypothesis. American Journal of Psychiatry, 166(1), 50–57.
Konvalinka, I., Vuust, P., Roepstorff, A., & Frith, C. (2009). A coupled oscillator model of interactive tapping. Proceedings of the 7th Triennial Conference of the European Society for the Cognitive Sciences of Music (ESCOM 2009), University of Jyväskylä, Jyväskylä, Finland, pp. 242–245.
Kringelbach, M. L., & Berridge, K. C. (2017). The affective core of emotion: Linking pleasure, subjective well-being, and optimal metastability in the brain. Emotion Review, 9(3), 191–199.
Kringelbach, M. L., McIntosh, A. R., Ritter, P., Jirsa, V. K., & Deco, G. (2015). The rediscovery of slowness: Exploring the timing of cognition. Trends in Cognitive Sciences, 19(10), 616–628.
Kringelbach, M. L., & Rapuano, K. M. (2016). Time in the orbitofrontal cortex. Brain, 139(4), 1010–1013.
Kringelbach, M. L., & Rolls, E. T. (2004). The functional neuroanatomy of the human orbitofrontal cortex: Evidence from neuroimaging and neuropsychology. Progress in Neurobiology, 72(5), 341–372.
Kuramoto, Y. (1975). Self-entrainment of a population of coupled non-linear oscillators. In H. Araki (ed.), International symposium on mathematical problems in theoretical physics. Lecture Notes in Physics, vol. 39 (pp. 420–422). Berlin, Heidelberg: Springer. doi: 10.1007/BFb0013365.
Landrø, N. I., Stiles, T. C., & Sletvold, H. (2001). Neuropsychological function in nonpsychotic unipolar major depression. Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 14(4), 233–240.
Lee, W. H., Doucet, G. E., Leibu, E., & Frangou, S. (2018). Resting-state network connectivity and metastability predict clinical symptoms in schizophrenia. Schizophrenia Research, 201, 208–216.
Leech, R., & Sharp, D. J. (2014). The role of the posterior cingulate cortex in cognition and disease. Brain, 137(Pt. 1), 12–32.
Li, Y., Vanni-Mercier, G., Isnard, J., Mauguière, F., & Dreher, J.-C. (2016). The neural dynamics of reward value and risk coding in the human orbitofrontal cortex. Brain, 139(4), 1295–1309. doi: 10.1093/brain/awv409.
Liang, S., Brown, M. R. G., Deng, W., Wang, Q., Ma, X., Li, M., . . . Li, T. (2018). Convergence and divergence of neurocognitive patterns in schizophrenia and depression. Schizophrenia Research, 192, 327–334.
Margulies, D. S., Ghosh, S. S., Goulas, A., Falkiewicz, M., Huntenburg, J. M., Langs, G., . . . Smallwood, J. (2016). Situating the default-mode network along a principal gradient of macroscale cortical organization. Proceedings of the National Academy of Sciences USA, 113(44), 12574–12579.
Menon, V. (2011). Large-scale brain networks and psychopathology: A unifying triple network model. Trends in Cognitive Sciences, 15(10), 483–506.
Mesulam, M. M. (1998). From sensation to cognition. Brain: A Journal of Neurology, 121(6), 1013–1052.
Näätänen, R., Gaillard, A. W. K., & Mäntysalo, S. (1978). Early selective-attention effect on evoked potential reinterpreted. Acta Psychologica, 42(4), 313–329.
Näätänen, R., Paavilainen, P., Rinne, T., & Alho, K. (2007). The mismatch negativity (MMN) in basic research of central auditory processing: A review. Clinical Neurophysiology, 118(12), 2544–2590.
Niv, Y., & Schoenbaum, G. (2008). Dialogues on prediction errors. Trends in Cognitive Sciences, 12(7), 265–272.
O’Doherty, J. P. (2004). Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14(6), 769–776.
Pamplona, G. S., Santos Neto, G. S., Rosset, S. R., Rogers, B. P., & Salmon, C. E. (2015). Analyzing the association between functional connectivity of the brain and intellectual performance. Frontiers in Human Neuroscience, 9, 61.
Parkinson, C., & Wheatley, T. (2015). The repurposed social brain. Trends in Cognitive Sciences, 19(3), 133–141.
Pearce, M. T., & Wiggins, G. A. (2012). Auditory expectation: The information dynamics of music perception and cognition. Topics in Cognitive Science, 4(4), 625–652.
Rao, R. P., & Ballard, D. H. (1999). Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1), 79–87.
Ravnkilde, B., Videbech, P., Clemmensen, K., Egander, A., Rasmussen, N. A., & Rosenberg, R. (2002). Cognitive deficits in major depression. Scandinavian Journal of Psychology, 43(3), 239–251.
Reinhart, R. M. G., & Nguyen, J. A. (2019). Working memory revived in older adults by synchronizing rhythmic brain circuits. Nature Neuroscience, 22(5), 820–827.
Rohrmeier, M. A., & Koelsch, S. (2012). Predictive information processing in music cognition: A critical review. International Journal of Psychophysiology, 83(2), 164–175.
Rolls, E. T. (2010). Attractor networks. Wiley Interdisciplinary Reviews: Cognitive Science, 1(1), 119–134.
Rolls, E. T., Loh, M., Deco, G., & Winterer, G. (2008). Computational models of schizophrenia and dopamine modulation in the prefrontal cortex. Nature Reviews Neuroscience, 9(9), 696.
Roth, G., & Dicke, U. (2005). Evolution of the brain and intelligence. Trends in Cognitive Sciences, 9(5), 250–257.
Sackeim, H. A., Freeman, J., McElhiney, M., Coleman, E., Prudic, J., & Devanand, D. P. (1992). Effects of major depression on estimates of intelligence. Journal of Clinical and Experimental Neuropsychology, 14(2), 268–288.
Sänger, J., Müller, V., & Lindenberger, U. (2012). Intra- and interbrain synchronization and network properties when playing guitar in duets. Frontiers in Human Neuroscience, 6, 312.
Santarnecchi, E., Galli, G., Polizzotto, N. R., Rossi, A., & Rossi, S. (2014). Efficiency of weak brain connections support general cognitive functioning. Human Brain Mapping, 35(9), 4566–4582.
Schacter, D. L., Addis, D. R., & Buckner, R. L. (2007). Remembering the past to imagine the future: The prospective brain. Nature Reviews Neuroscience, 8(9), 657–661.
Schultz, W. (2015). Neuronal reward and decision signals: From theories to data. Physiological Reviews, 95(3), 853–951.
Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275(5306), 1593–1599.
Schultz, W., & Dickinson, A. (2000). Neuronal coding of prediction errors. Annual Review of Neuroscience, 23(1), 473–500.
Shen, X., Cox, S. R., Adams, M. J., Howard, D. M., Lawrie, S. M., Ritchie, S. J., . . . Whalley, H. C. (2018). Resting-state connectivity and its association with cognitive performance, educational attainment, and household income in the UK Biobank. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 3(10), 878–886.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., . . . Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484.
Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., . . . Hassabis, D. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354–359.
Singer, W. (2001). Consciousness and the binding problem. Annals of the New York Academy of Sciences, 929, 123–146.
Song, M., Zhou, Y., Li, J., Liu, Y., Tian, L., Yu, C., & Jiang, T. (2008). Brain spontaneous functional connectivity and intelligence. Neuroimage, 41(3), 1168–1176.
Stam, C. J., & van Straaten, E. C. (2012). The organization of physiological brain networks. Clinical Neurophysiology, 123(6), 1067–1087.
Steinbeck, J., & Ricketts, E. F. (1941). The log from the Sea of Cortez. London: Penguin.
Stern, Y. (2009). Cognitive reserve. Neuropsychologia, 47(10), 2015–2028.
Stone, A. A., Schwartz, J. E., Broderick, J. E., & Deaton, A. (2010). A snapshot of the age distribution of psychological well-being in the United States. Proceedings of the National Academy of Sciences USA, 107(22), 9985–9990.
Thorndike, E. L. (1920). Intelligence and its uses. Harper’s Magazine, 140, 227–235.
Tognoli, E., & Kelso, J. A. (2014). The metastable brain. Neuron, 81(1), 35–48.
Turalska, M., Geneston, E., West, B. J., Allegrini, P., & Grigolini, P. (2012). Cooperation-induced topological complexity: A promising road to fault tolerance and Hebbian learning. Frontiers in Physiology, 3, 52.
Turalska, M., West, B. J., & Grigolini, P. (2013). Role of committed minorities in times of crisis. Scientific Reports, 3, 1371.
van den Heuvel, M. P., & Hulshoff Pol, H. E. (2010). Exploring the brain network: A review on resting-state fMRI functional connectivity. European Neuropsychopharmacology, 20(8), 519–534.
van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. Journal of Neuroscience, 29(23), 7619–7624.
Veiel, H. O. (1997). A preliminary profile of neuropsychological deficits associated with major depression. Journal of Clinical and Experimental Neuropsychology, 19(4), 587–603.
Voytek, B., & Knight, R. T. (2015). Dynamic network communication as a unifying neural basis for cognition, development, aging, and disease. Biological Psychiatry, 77(12), 1089–1097.
Vuust, P., & Frith, C. D. (2008). Anticipation is the key to understanding music and the effects of music on emotion. Behavioral and Brain Sciences, 31(5), 599–600.
Vuust, P., & Kringelbach, M. L. (2010). The pleasure of making sense of music. Interdisciplinary Science Reviews, 35(2), 166–182.
Werner, G. (2007). Metastability, criticality and phase transitions in brain and its models. Biosystems, 90(2), 496–508.
Zakzanis, K. K., Leach, L., & Kaplan, E. (1998). On the nature and pattern of neurocognitive function in major depressive disorder. Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 11(3), 111–119.
Zylberberg, A., Dehaene, S., Roelfsema, P. R., & Sigman, M. (2011). The human Turing machine: A neural framework for mental programs. Trends in Cognitive Sciences, 15(7), 293–300.

PART III

Neuroimaging Methods and Findings

10 Diffusion-Weighted Imaging of Intelligence

Erhan Genç and Christoph Fraenz

Since the dawn of intelligence research, it has been of considerable interest to establish a link between intellectual ability and the various properties of the brain. In the second half of the nineteenth century, scientists such as Broca and Galton were among the first to utilize craniometry in order to investigate relationships between different measures of head size and intellectual ability (Deary, Penke, & Johnson, 2010; Galton, 1888). However, since craniometry can at best provide a very coarse estimate of actual brain morphometry, and adequate methods for intelligence testing were not established at that time, these efforts were not particularly successful in producing insightful evidence. About 100 years later, technical developments in neuroscientific research, such as the introduction of magnetic resonance imaging (MRI), enabled scientists to assess a wide variety of the brain’s structural properties in vivo and relate them to cognitive capacity. One of the most prominent and stable findings from this line of research is that bigger brains tend to perform better at intelligence-related tasks. Meta-analyses comprising several thousand individuals have reported correlation coefficients in the range of .24–.33 for the association between overall brain volume and intelligence (McDaniel, 2005; Pietschnig, Penke, Wicherts, Zeiler, & Voracek, 2015). A common biological explanation for this association is that individuals with more cortical volume are likely to possess more neurons (Pakkenberg & Gundersen, 1997) and thus more computational power to engage in problem-solving and logical reasoning. The studies mentioned so far were mainly concerned with relationships between intelligence and different macrostructural properties of gray matter, leaving a large gap of knowledge at the cellular level.
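To put the reported effect sizes in perspective: a correlation of r = .30 corresponds to only about 9% shared variance between brain volume and test scores. The toy computation below uses simulated data with a built-in correlation of .3 (not real measurements) to make that arithmetic concrete.

```python
import math
import random

# Illustrative only: a correlation of r ~ .30, like the brain-volume/
# intelligence association above, corresponds to roughly r^2 ~ 9% shared
# variance. The data are simulated with a built-in correlation of .3;
# they are not real measurements.

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

rng = random.Random(0)
rho = 0.3
volume = [rng.gauss(0.0, 1.0) for _ in range(20000)]
score = [rho * v + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
         for v in volume]
r = pearson_r(volume, score)
print(round(r, 2), round(r * r, 2))  # close to 0.3 and 0.09
```

The squared correlation is the standard "variance explained" reading of such meta-analytic coefficients: stable and replicable, but leaving roughly 90% of the individual differences unaccounted for.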
A recent working hypothesis endorses the idea that interindividual differences in intelligence are manifested not only in the amount of brain tissue, e.g., cortical thickness or surface area, but also in its wiring properties, which comprise markers like circuit complexity or dendritic arborization (Neubauer & Fink, 2009). Until recently, little was known about the relationship between cortical microstructure and intelligence. The introduction of novel in vivo diffusion-based MRI techniques such as neurite orientation dispersion and density imaging (NODDI) (Zhang, Schneider, Wheeler-Kingshott, & Alexander, 2012) opened up new opportunities in this regard. The first study utilizing NODDI in order to shed light on possible microstructural correlates affecting intelligence was conducted by Genc et al. (2018). The authors analyzed data from two large independent samples that together comprised well over 700 individuals. Surprisingly, they found dendritic density and complexity, averaged across the whole cortex, to be negatively associated with matrix reasoning test scores. These results indicate that fluid intelligence is likely to benefit from cortical mantles with sparsely organized dendritic arbors. This kind of architecture might increase information processing speed and network efficiency within the cortex. Therefore, it could serve as a potential neuroanatomical foundation underlying the neural efficiency hypothesis of intelligence (Neubauer & Fink, 2009). More specific analyses on the level of single brain regions confirmed the pattern of results observed for the overall cortex. Statistically significant correlations were negative and almost exclusively localized within brain regions overlapping with areas from the P-FIT network. However, it has to be noted that a few cortical areas, some of them located in the middle temporal gyrus, also showed positive associations between dendritic complexity and intelligence but failed to reach statistical significance due to strict correction for multiple comparisons. Interestingly, a recent study by Goriounova et al. (2018) also observed positive associations between dendritic complexity and intelligence in the middle temporal gyrus. Here, the authors examined a sample of epilepsy patients who underwent neurosurgical treatment, which allowed them to directly extract non-pathological brain tissue from living human brains for histological and electrophysiological investigation. In combination with psychometric intelligence testing prior to surgery, the authors were able to show that dendritic size and complexity of pyramidal neurons in the middle temporal gyrus are positively associated with intelligence.
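The region-wise analysis logic described above — correlating a microstructural measure with test scores in every cortical area and then applying a strict correction for multiple comparisons — can be sketched as follows. All data and the planted effect are synthetic, and a normal approximation to the correlation p-value is used to avoid extra dependencies; this is an illustration of the statistical procedure, not of the published pipeline.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
n_subjects, n_regions = 500, 180

# Synthetic data: one microstructural value per region plus an IQ-like score.
# Region 42 is given a built-in (negative) association for illustration.
density = rng.standard_normal((n_subjects, n_regions))
iq = 100 + 15 * rng.standard_normal(n_subjects)
density[:, 42] -= 0.04 * (iq - 100)

def pearson_r(x, y):
    x, y = x - x.mean(), y - y.mean()
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

def r_to_p(r, n):
    # Two-sided p-value from Fisher's z with a normal approximation.
    z = abs(np.arctanh(r)) * sqrt(n - 3)
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))

rs = np.array([pearson_r(density[:, j], iq) for j in range(n_regions)])
ps = np.array([r_to_p(r, n_subjects) for r in rs])

# Bonferroni: only regions with p < alpha / n_regions survive correction.
survivors = np.where(ps < 0.05 / n_regions)[0]
print(survivors, np.round(rs[survivors], 2))
```

With this strict threshold, weak but genuine regional effects (like the positive middle temporal associations mentioned above) can easily fail to reach significance.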
A computational model incorporating structural and electrophysiological data indicated that larger dendrites generate faster action potentials, which in turn allow synaptic inputs to be processed with higher temporal precision and thus improve the efficiency of information transfer (Goriounova & Mansvelder, 2019). Combined evidence from Genc et al. (2018) and Goriounova et al. (2018) suggests that high intelligence relies on efficient information processing that can be achieved by a proper differentiation between signal and noise. This might be realized by a dendritic circuitry that increases the signal within certain brain regions like the middle temporal gyrus (positive associations between dendritic complexity and intelligence) and decreases the noise within fronto-parietal regions (negative associations between dendritic complexity and intelligence). It is important to note that interindividual differences in intelligence are not only related to neuroanatomical properties exhibited by gray matter. Features like white matter volume also show relevant associations with intelligence (Narr et al., 2007). White matter is mainly composed of myelinated axons transferring information from one brain region to another, making it crucial for human cognition and behavior (Filley, 2012). Consequently, information transfer within the P-FIT network is also accomplished through white matter fiber tracts (Jung & Haier, 2007). Brain regions constituting the P-FIT network are distributed across both cerebral hemispheres. This emphasizes the relevance of functional interaction between regions and across both hemispheres for cognitive performance in tasks demanding high intellectual ability. Given that the corpus callosum represents the most important connection between both hemispheres, it has long been suggested that the layout of the corpus callosum is likely to influence interhemispheric connectivity and thus intellectual performance (Hulshoff-Pol et al., 2006). One of the first studies investigating this relationship in a sample of healthy adults and epilepsy patients found a moderate positive association between midsagittal corpus callosum area and intelligence (Atkinson, Abou-Khalil, Charles, & Welch, 1996). In another study, Luders et al. (2007) quantified callosal morphology in healthy adults by measuring callosal thickness at 100 equidistant points across the whole structure. The authors observed significant positive correlations between various IQ measures and callosal thickness, which were exhibited in the posterior half of the corpus callosum. In a study comprising monozygotic and dizygotic twins, Hulshoff-Pol et al. (2006) were able to show that the associations between callosal morphology and intelligence are influenced by a common genetic factor. The authors employed a cross-trait/cross-twin approach, which is capable of quantifying the general extent to which structure–function relationships are driven either genetically or by environmental factors. Respective analyses do not operate on a molecular level and are not designed to identify specific genes contributing to the associations of interest. However, it is reasonable to assume that genetic factors underlying complex characteristics such as general intelligence are constituted by a multitude of genes rather than a single gene alone.
The aforementioned studies investigated the relationship between intelligence differences and white matter macrostructure. In order to quantify white matter in terms of its microstructure, a more elaborate approach combining standard MRI and diffusion-weighted imaging (DWI) can be used. DWI is based on the diffusion process of water molecules within biological tissue like the human brain (Le Bihan, 2003). Within fluid-filled spaces, such as ventricles, diffusion is nearly unbounded and thus non-directional (isotropic). In contrast, diffusion within white matter is more directional (anisotropic). Here, the membrane around nerve fibers constitutes a natural border which forces water molecules to move along the direction of axons, creating an anisotropic diffusion pattern. For each voxel of a DWI data set, water diffusion is represented as an ellipsoid shaped by the three orthogonally arranged diffusion vectors v1, v2, and v3, whereas the eigenvalues λ1, λ2, and λ3 describe the degree of diffusion along each vector (Figure 10.1a).

Figure 10.1 The top half depicts ellipsoids (left side, A) and tensors (right side, B) that were obtained by means of diffusion-weighted imaging and projected onto a coronal slice of an MNI brain. Within each voxel, the main diffusion direction of water molecules is represented by the orientation of ellipsoids or tensors and also visualized in the form of RGB color coding (left–right axis = red, anterior–posterior axis = green, superior–inferior axis = blue). The bottom half shows enlarged images of the corpus callosum that provide a more detailed visualization of ellipsoids and tensors.

There are various metrics to describe certain aspects of tissue-related water diffusion. The axial diffusivity (AD) corresponds to λ1 and represents water diffusion along the principal direction of axons within a voxel. The radial diffusivity (RD) is the mean value of λ2 and λ3 and represents water diffusion perpendicular to the principal direction of axons. The mean diffusivity (MD) is the average water diffusion across all eigenvalues λ1, λ2, and λ3. It is lower in structurally organized and higher in directionally disorganized tissue segments. The fractional anisotropy (FA) is a non-linear combination of λ1, λ2, and λ3 (Basser & Pierpaoli, 1996). It can take any value between 0 and 1, with 0 implying no directionality of water diffusion, e.g., in ventricles, and 1 representing the most extreme form of directed water diffusion, i.e., water moving along the principal direction exclusively with no diffusion perpendicular to that axis. FA can be considered the most commonly used measure to describe white matter microstructure in terms of “microstructural integrity.” Several morphological factors such as axon diameter, fiber density, myelin concentration, and the distribution of fiber orientation can influence the aforementioned microstructural measures (Beaulieu, 2002; Le Bihan, 2003). Myelin concentration was found to exert a comparably small effect on the magnitude of FA values (Beaulieu, 2002), but more recent studies show contrasting results (Ocklenburg et al., 2018; Sampaio-Baptista et al., 2013). In a special application of DWI, known as diffusion tensor imaging (DTI), the three vectors and eigenvalues are quantified voxel by voxel and summarized in the form of so-called tensors (Figure 10.1b). Thus, a tensor contains information about the directional motion distribution of water
molecules within a given voxel and allows for conclusions to be drawn about the orientation of adjacent nerve fibers (Mori, 2007). The trajectories of fiber bundles can be virtually reconstructed by means of mathematical fiber tracking algorithms, which provides the opportunity to estimate structural connectivity between different brain regions (Mori, 2007). While there are different models for the purpose of reconstructing fiber bundles or streamlines, e.g., the q-ball model (Tuch, 2004) or the ball-and-stick model (Behrens et al., 2003), virtual fiber tractography can be categorized as either probabilistic or deterministic (Campbell & Pike, 2014). The key difference between the two methods is that probabilistic tractography computes the likelihood with which individual voxels constitute the connection between a given seed and target region (Behrens, Berg, Jbabdi, Rushworth, & Woolrich, 2007; Morris, Embleton, & Parker, 2008), whereas deterministic tractography provides only one definitive solution in this regard (Mori, 2007). Despite their differences, both types of tractography are capable of reconstructing the human brain’s connectome, which is constituted by white matter fiber bundles (Catani & Thiebaut de Schotten, 2008). Respective fiber bundles can be quantified by different markers such as microstructural integrity, which provides the opportunity to associate them with interindividual differences in cognitive performance. This procedure is called quantitative tractography. An illustration of major white matter pathways that were found to exhibit associations with interindividual intelligence differences is presented in Figure 10.2. One way of performing quantitative tractography is to follow a hypothesis-driven approach and segregate a fiber bundle into specific regions of interest (ROIs) from which certain features such as FA can be extracted. Another way is to investigate white matter properties averaged across a whole fiber bundle. Deary et al. (2006) employed an ROI-based method in a sample of older adults from the Lothian Birth Cohort (LBC). They found general intelligence to be positively associated with FA exhibited by the centrum semiovale. This region is a junction area in the center of the brain and consists of cortical projection fibers (corona radiata or corticospinal tract) as well as association fibers (superior longitudinal fasciculus). In another study comprising a sample of young adults, Tang et al. (2010) followed a similar approach and defined multiple ROIs within various intra- and interhemispheric fiber bundles. For the whole group, general intelligence was not related to FA in any of the tracts. This lack of replication might be due to the fact that Deary et al. (2006) examined a substantially older sample than Tang et al. (2010). Further, Deary et al. (2006) observed a positive association only in the centrum semiovale, a structure that was not examined by Tang et al. (2010). However, when conducting the same analysis separately for males and females, Tang et al. (2010) found bilateral ROIs located in anterior callosal fibers (genu) to exhibit positive associations in females and negative associations in males.
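The FA, MD, AD, and RD values analyzed in such ROI studies follow directly from the three tensor eigenvalues defined earlier in this chapter. A minimal NumPy sketch of those standard formulas, with purely illustrative eigenvalues (in mm²/s):

```python
import numpy as np

def tensor_metrics(evals):
    """AD, RD, MD, and FA from the three diffusion-tensor eigenvalues."""
    l1, l2, l3 = np.sort(evals)[::-1]     # sort so that lambda1 >= lambda2 >= lambda3
    ad = l1                               # axial diffusivity: diffusion along the axon
    rd = (l2 + l3) / 2.0                  # radial diffusivity: perpendicular diffusion
    md = (l1 + l2 + l3) / 3.0             # mean diffusivity: average over all directions
    fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return ad, rd, md, fa

# Isotropic diffusion (e.g., ventricles): FA is 0.
print(tensor_metrics([1.0, 1.0, 1.0]))
# Strongly anisotropic diffusion (coherent white matter): FA approaches 1.
print(tensor_metrics([1.7e-3, 0.3e-3, 0.2e-3]))
```

The FA expression is the usual normalized variance of the eigenvalues around MD, which is why it is bounded between 0 and 1 regardless of the absolute diffusivities.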
Figure 10.2 White matter fiber tracts whose microstructural properties were found to correlate with interindividual differences in intelligence. Respective tracts were reconstructed by means of deterministic constrained spherical deconvolution fiber tractography. Each panel shows a single fiber bundle overlaid onto axial and sagittal brain slices from an MNI template (viewed from a posterior and left-hand side perspective). The number next to each fiber bundle depicts how many times the respective tract has been reported to exhibit significant associations between its microstructural properties and intelligence. Numbers are placed in circles that are color coded (0 = red, 15 = yellow). The first row depicts projection tracts that connect the thalamus to the frontal cortex (anterior thalamic radiation) and to the visual cortex (posterior thalamic radiation). The second row depicts projection tracts that connect the cortex to the brain stem (corona radiata) and the medial temporal lobe to the hypothalamic nuclei (fornix). The third row depicts different segments of the corpus callosum which is the largest commissural tract connecting both hemispheres. Respective panels show interhemispheric tracts connecting the prefrontal cortices (genu), the frontal cortices and temporal cortices (midbody), as well as the parietal cortices and visual cortices (splenium). The remaining rows depict fiber bundles that run within a hemisphere connecting distal cortical areas. 
Respective panels show tracts that connect medial frontal, parietal, occipital, temporal, and cingulate cortices (cingulum); tracts that connect the occipital and temporal cortex (inferior longitudinal fasciculus); tracts that connect the orbital and lateral frontal cortices to the occipital cortex (inferior fronto-occipital fasciculus); tracts that connect the orbitofrontal cortex to the anterior and medial temporal cortices (uncinate fasciculus), and tracts that connect the perisylvian frontal, parietal, and temporal cortices (superior longitudinal fasciculus and arcuate fasciculus). (Catani & Thiebaut de Schotten, 2008)
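Deterministic tractography of the kind used to reconstruct the tracts in Figure 10.2 essentially integrates a streamline step by step along each voxel's principal eigenvector, terminating when anisotropy falls below a threshold or the streamline leaves the volume. A heavily simplified two-dimensional sketch on a synthetic vector field; the step size and FA threshold are illustrative choices, not values from any published pipeline:

```python
import numpy as np

def track_streamline(seed, principal_dir, fa_map, step=0.5, fa_thresh=0.2,
                     max_steps=1000):
    """Trace a streamline by repeatedly stepping along the principal
    eigenvector of the nearest voxel (simple Euler integration)."""
    pos = np.asarray(seed, dtype=float)
    points = [pos.copy()]
    prev = None
    for _ in range(max_steps):
        i, j = np.round(pos).astype(int)
        if not (0 <= i < fa_map.shape[0] and 0 <= j < fa_map.shape[1]):
            break                              # left the volume
        if fa_map[i, j] < fa_thresh:
            break                              # stop in near-isotropic tissue
        d = principal_dir[i, j].copy()
        if prev is not None and d @ prev < 0:  # keep a consistent orientation
            d = -d
        pos = pos + step * d
        points.append(pos.copy())
        prev = d
    return np.array(points)

# Synthetic "tract": a coherent horizontal fiber bundle in rows 4-5 of a
# 10 x 20 grid; all principal directions point along the column axis.
fa = np.zeros((10, 20))
fa[4:6, :] = 0.8
dirs = np.zeros((10, 20, 2))
dirs[..., 1] = 1.0
sl = track_streamline(seed=(4.0, 0.0), principal_dir=dirs, fa_map=fa)
print(len(sl), sl[-1])
```

Real implementations add interpolation of the tensor field, curvature limits, and many seeds per voxel, but the stopping logic (anisotropy threshold, volume bounds) is the same in spirit.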

There are several studies that followed a tract-based approach in order to investigate the relationships between white matter properties and intelligence in older adults from the LBC. One of the first studies analyzing a pilot sample from the LBC was conducted by Penke et al. (2010). They reconstructed eight major fiber tracts using probabilistic tractography, namely the cingulum (cingulate gyrus part), uncinate fasciculus, and arcuate fasciculus in both hemispheres as well as the genu and splenium of the corpus callosum. The authors found that processing speed was significantly associated with the average FA values of all eight major tracts. Interestingly, they were also able to extract a general factor of white matter FA by means of principal component analysis (PCA). This factor explained about 40% of the variance shown by the eight tracts and was significantly associated with processing speed but not with general intelligence. After the whole data set from the LBC had been made publicly available, Penke et al. (2012) conducted a follow-up study in order to further investigate the relationships between general intelligence, processing speed, and several white matter properties, namely FA, longitudinal relaxation time, and magnetization transfer ratio (MTR). The latter is assumed to be a marker of myelin concentration (Wolff & Balaban, 1989) under certain conditions (MacKay & Laule, 2016). They employed probabilistic fiber tracking in order to reconstruct 12 major fiber tracts, namely the cingulum (cingulate gyrus part), uncinate fasciculus, arcuate fasciculus, inferior longitudinal fasciculus, and anterior thalamic radiation in both hemispheres as well as the genu and splenium of the corpus callosum. Since all white matter properties (FA, longitudinal relaxation time, and MTR) were highly correlated across the 12 fiber bundles, the authors decided to extract three general factors, each representing one of the three biomarkers. These general factors were significantly correlated with general intelligence. However, respective associations were completely mediated by processing speed, which indicates that processing speed serves as a mediator between white matter properties and general intelligence in older adults (Penke et al., 2012). The same group of researchers investigated the relationships between general intelligence and the aforementioned white matter properties exhibited by each of the 12 individual tracts (Booth et al., 2013). Irrespective of processing speed, they found positive associations between general intelligence and average FA values extracted from the bilateral uncinate fasciculus, inferior longitudinal fasciculus, anterior thalamic radiation, and cingulum. Cremers et al. (2016) conducted a similar quantitative tractography study using a very large sample of older adults. The authors extracted average FA values from 14 different white matter tracts that were reconstructed by means of probabilistic tractography. Further, they measured general cognitive performance by means of a test battery assessing memory, executive functions, word fluency, and motor speed.
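The mediation pattern reported by Penke et al. (2012) — white matter properties relating to intelligence only via processing speed — can be illustrated with three ordinary least-squares regressions. The data and effect sizes below are entirely synthetic and constructed so that the effect is fully mediated; the point is the decomposition of the total effect, not the published estimates:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 600

# Synthetic, fully mediated chain: FA -> processing speed -> intelligence.
fa = rng.standard_normal(n)
speed = 0.5 * fa + rng.standard_normal(n)   # a-path
iq = 0.6 * speed + rng.standard_normal(n)   # b-path; no direct FA -> iq effect

def ols(y, *predictors):
    """OLS slopes with an intercept; returns one coefficient per predictor."""
    X = np.column_stack((np.ones(n),) + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

c = ols(iq, fa)[0]               # total effect of FA on intelligence
a = ols(speed, fa)[0]            # FA -> speed
b, c_prime = ols(iq, speed, fa)  # speed -> iq, and the direct FA -> iq effect

# In OLS the total effect decomposes exactly: c = c' + a*b.
print(f"total c={c:.3f}  direct c'={c_prime:.3f}  indirect a*b={a * b:.3f}")
```

When the direct effect c' is indistinguishable from zero while the indirect path a·b carries the whole association, the relationship is said to be completely mediated.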
General cognitive performance was positively associated with average FA values exhibited within the anterior and posterior thalamic radiation, the superior and inferior longitudinal fasciculus, the inferior fronto-occipital fasciculus, the uncinate fasciculus, as well as the genu and the splenium of the corpus callosum. Importantly, when controlling for the relationship between cognitive ability and FA averaged across the overall white matter, only the associations observed for the posterior thalamic radiation, inferior longitudinal fasciculus, and inferior fronto-occipital fasciculus remained statistically significant. Evidence from the aforementioned studies is based on older adults, but there are also some quantitative tractography studies in younger samples. Based on previous findings indicating that general intelligence is positively associated with corpus callosum size and thickness, Kontis et al. (2009) asked whether microstructural abnormalities in the corpus callosum of preterm-born adults may lead to a reduction of general intelligence. In a sample of young adults who were born preterm or at term, the authors observed a positive association between general intelligence and average FA exhibited by the corpus callosum. Specifically, preterm-born adults showed reduced FA values and thus lower general intelligence, whereas both measures were increased in adults born at term. In another sample comprising young adults, Yu et al. (2008) performed deterministic tractography in order to relate general intelligence to average FA values within seven white matter tracts. They observed positive associations for the genu and midbody of the corpus callosum, the uncinate fasciculus, the posterior thalamic radiation, and the corticospinal tract. Using a hypothesis-driven approach in a very large sample of adults, Kievit et al. (2014) were able to confirm previously reported results for the genu of the corpus callosum. They found this fiber bundle’s average FA to be positively correlated with fluid intelligence. Urger et al. (2015) observed positive associations between general intelligence and average FA exhibited by the arcuate fasciculus in a sample of adolescents. In a comparable sample, Ferrer et al. (2013) found fluid intelligence and processing speed to be positively correlated with average FA values extracted from the superior longitudinal fasciculus, the whole corpus callosum, and the corticospinal tract. Comparable to the findings reported by Penke et al. (2012) for older adults from the LBC, the correlations between average FA values and fluid intelligence were completely mediated by processing speed in the sample of adolescents as well. This indicates that processing speed can be regarded as a mediator in the relationship between white matter properties and intelligence, not only for older adults but also for adolescents (Ferrer et al., 2013). Muetzel et al. (2015) were the first to employ quantitative tractography in a large sample of children. They utilized probabilistic fiber tractography in order to reconstruct seven major white matter pathways, namely the genu and splenium of the corpus callosum, the cingulum, the uncinate fasciculus, the superior longitudinal fasciculus, the inferior longitudinal fasciculus, and the corticospinal tract. In accordance with previous findings (Penke et al., 2010), the average FA values of all tracts were highly correlated with each other.
As a consequence, the authors extracted a general factor of white matter FA. This factor was positively correlated with general intelligence. In order to determine whether all or only some of the tracts contributed to this association, univariate regression analyses were used. After applying correction for multiple comparisons, average FA exhibited by the right uncinate fasciculus remained as the only significant predictor of general intelligence (Muetzel et al., 2015). In summary, it is apparent that interindividual differences in intelligence are associated with microstructural properties of many major pathways across different age ranges. It can also be noted that most of the respective fiber tracts represent critical links between brain regions constituting the P-FIT network. However, most of the aforementioned studies investigated white matter properties of a priori selected tracts averaged across whole fiber bundles. In comparison to voxel-wise analyses, this approach is very likely to overlook relevant relationships exhibited by fiber bundles that are not included in the set of white matter tracts being analyzed. With diffusion-weighted data, there are two approaches towards computing voxel-wise statistics. The first method is very similar to voxel-based morphometry (VBM). Here, individual FA maps are transformed to a common stereotactic space, smoothed, and subjected to voxel-wise statistical comparisons covering the whole white matter compartment. Schmithorst, Wilke, Dardzinski, and Holland (2005) and Schmithorst (2009) were among the first to use this technique in samples of children and adolescents. Both studies observed positive correlations between general intelligence and FA in voxels located in the bilateral arcuate fasciculus, corona radiata, superior longitudinal fasciculus, and the splenium of the corpus callosum. Allin et al. (2011) employed the same technique in a sample of young adults born preterm or at term. Similar to a previous study by Kontis et al. (2009), the authors investigated whether microstructural abnormalities exhibited by white matter voxels of preterm-born adults are linked to a reduction in general intelligence. Their results show that a decrease in FA in voxels belonging to the corona radiata or the genu of the corpus callosum was associated with a reduction in general intelligence in preterm-born adults. In order to investigate the extent to which the association between general intelligence and white matter properties is driven by genetic or environmental factors, Chiang et al. (2009) employed VBM in a sample comprising monozygotic and dizygotic twins. The authors followed a cross-trait/cross-twin approach and computed respective correlations for all voxels within white matter. Results showed positive associations between general intelligence and FA values exhibited by voxels located in the cingulum, the posterior corpus callosum, the posterior thalamic radiation, the corona radiata, and the superior longitudinal fasciculus. As with Hulshoff-Pol et al. (2006), respective associations were mainly mediated by a common genetic factor.
Again, it is reasonable to assume that this common genetic factor was constituted by a multitude of genes rather than a single gene alone. Although the application of VBM in white matter has its advantages compared to quantitative tractography, this method also suffers from various shortcomings like partial volume effects, spatial smoothing, and the use of arbitrary thresholds (see Chapter 11 for a discussion of some of these issues). One technique that is able to overcome these limitations and combine the strengths of both VBM and quantitative tractography is called Tract-Based Spatial Statistics (TBSS) and was introduced by Smith et al. (2006). Similar to VBM, TBSS also transforms individual FA maps to a common stereotactic space. Subsequently, a mean FA map is created by averaging the spatially normalized images from all individuals. This FA map is thinned out to create a white matter “skeleton” that only includes those voxels at the center of fiber tracts. Finally, the “skeletonized” FA maps are subjected to voxel-wise statistical comparisons. Wang et al. (2012) were among the first to employ this technique in a small sample of adolescents. They found that general intelligence was positively correlated with FA values extracted from voxels located in the anterior inferior fronto-occipital fasciculus. A similar study, examining mathematically gifted adolescents and a control sample, revealed that FA values extracted from voxels predominantly located in the genu, midbody, and splenium of the corpus callosum were positively associated with general intelligence. Voxels belonging to the fornix and the anterior limb of the left internal capsule also exhibited this association. Nusbaum et al. (2017) conducted a study in which they compared children with very high and normal levels of general intelligence. They observed higher FA values in gifted children predominantly located in voxels corresponding to the genu, midbody, and splenium of the corpus callosum, the corona radiata, the internal and external capsules, the uncinate fasciculus, the fornix, and the cingulum. In order to investigate how maturation of white matter microstructure affects intellectual ability, Tamnes et al. (2010) conducted a TBSS study in a large cross-sectional sample of children and young adults. The authors did not report any associations between FA and general intelligence but focused on measures of verbal and non-verbal abilities. For both abilities they observed positive correlations with FA values exhibited by voxels located in the superior longitudinal fasciculus. Furthermore, verbal abilities were significantly associated with FA in voxels from the anterior thalamic radiation and the cingulum, whereas voxels belonging to the genu of the corpus callosum exhibited significant correlations between FA and non-verbal abilities. As of today, there are two studies that report TBSS findings in adults (Dunst, Benedek, Koschutnig, Jauk, & Neubauer, 2014; Malpas et al., 2016). Interestingly, both studies report substantially different results although they examined equally sized samples with comparable age ranges, used very similar test batteries in order to assess general intelligence, and performed the same statistical analyses including age and sex as covariates. Dunst et al.
(2014) observed no significant association between general intelligence and FA in any of the white matter voxels covered by the TBSS skeleton. However, when performing the same analysis separately for both sexes, females still showed no effects, but males exhibited positive correlations in voxels belonging to the genu of the corpus callosum. In contrast, Malpas et al. (2016) demonstrated a strikingly different pattern of results. In the overall sample, they found general intelligence to be positively associated with FA in a widespread network of white matter voxels (about 30% of the skeleton). Respective voxels were located in the anterior thalamic radiation, the superior longitudinal fasciculus, the inferior fronto-occipital fasciculus, and the uncinate fasciculus. It is unclear why the results observed by Malpas et al. (2016) failed to replicate those reported by Dunst et al. (2014), even though both studies employed highly comparable methods and investigated seemingly large samples. Unfortunately, lack of replication is a common problem in neuroscientific intelligence research and other disciplines as well. An adequate way of dealing with such issues is to recruit even larger samples in collaborative efforts between multiple research sites. Sharing data and reporting only those findings which can be observed consistently across different samples represents a suitable approach towards producing reliable scientific evidence.
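At their core, VBM-style and TBSS analyses alike reduce, at every voxel of the white matter compartment or skeleton, to correlating FA with an ability score across subjects and thresholding the resulting statistical map. A vectorized sketch on synthetic data; the planted cluster, effect size, and cutoff are all invented, and real pipelines use permutation-based cluster correction rather than a raw correlation threshold:

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_voxels = 200, 5000

# Synthetic skeleton: FA per voxel per subject, plus a standardized ability score.
fa = 0.5 + 0.05 * rng.standard_normal((n_subjects, n_voxels))
score = rng.standard_normal(n_subjects)
fa[:, 100:120] += 0.03 * score[:, None]   # plant a true cluster in voxels 100-119

# Voxel-wise Pearson correlation, computed for all voxels at once.
fa_c = fa - fa.mean(axis=0)
s_c = score - score.mean()
r = (s_c @ fa_c) / (np.sqrt((fa_c ** 2).sum(axis=0)) * np.sqrt(s_c @ s_c))

# Threshold the statistical map (illustrative cutoff only).
sig = np.where(np.abs(r) > 0.3)[0]
print(sig.size, sig[:10])
```

The sketch also makes the replication problem tangible: with thousands of voxels, the chosen threshold and sample size determine whether a map looks empty (as in Dunst et al., 2014) or widespread (as in Malpas et al., 2016).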
Finally, there are two studies reporting TBSS results in individuals of older age. Haász et al. (2013) demonstrated that a general factor of fluid intelligence, which was extracted from a test battery measuring matrix reasoning, processing speed, and memory, was positively correlated with FA in a widespread network of white matter voxels (about 30% of the skeleton). The strongest effects were exhibited by voxels located in the inferior fronto-occipital fasciculus, the inferior longitudinal fasciculus, the superior longitudinal fasciculus, the arcuate fasciculus, the uncinate fasciculus, the anterior thalamic radiation, and the genu of the corpus callosum. Interestingly, when conducting separate analyses for the three components underlying the general factor of fluid intelligence, processing speed was the only component replicating the results observed for higher-order fluid intelligence. Kuznetsova et al. (2016) examined a sample from the LBC comprising adults of older age. They found information processing to be positively associated with FA in voxels from all major white matter fiber pathways. In summary, it can be said that results from quantitative tractography and voxel-based analyses both demonstrate that the microstructural integrity of several different fiber bundles is associated with interindividual differences in cognitive performance. It is conceivable that the integrity of respective fiber bundles is key to efficient communication between regions predominantly located in areas constituting the P-FIT network and thus fosters higher intelligence. Moreover, information exchange beneficial for intelligence is not restricted to the interaction between single brain regions but also involves communication across whole brain networks (Barbey, 2018). In addition to the microstructural integrity of specific tracts, recent empirical studies show that general intelligence is also reflected in the efficiency with which a white matter network is organized.
The efficiency or quality of brain networks can be quantified by analyzing DWI data with methods borrowed from graph theory. A widely used metric for describing the quality of information exchange within a brain network is global efficiency, which quantifies the degree of efficient communication between all regions of the network. The extent to which a specific brain region contributes to efficient information exchange across the whole network is referred to as nodal efficiency. The first study to introduce this graph analytical approach to intelligence research was conducted by Li et al. (2009), who observed that higher general intelligence was associated with higher global efficiency in a sample of young adults. Li et al. (2009) also quantified the nodal efficiency of individual brain areas and investigated its relationship with general intelligence. In accordance with the P-FIT, they demonstrated that the nodal efficiency exhibited by brain regions located in the parietal, temporal, occipital, and frontal lobes, as well as the cingulate cortex and three subcortical structures, was related to interindividual differences in general intelligence. Another study that employed the graph analytical approach in a sample of young females found that higher global efficiency was associated with higher fluid intelligence and working memory capacity but not processing speed (Pineda-Pardo, Martínez, Román, & Colom, 2016). Genc et al. (2019) used DWI and graph theory to investigate the association between global efficiency and general knowledge, which is considered a vital marker of crystallized intelligence. In a large sample of over 300 young males and females, they found general knowledge to be positively related to global efficiency and observed this association to be driven by positive correlations between general knowledge and the nodal efficiency values of brain regions from the P-FIT network. Ryman et al. (2016) investigated how intellectual ability is related to morphometric properties and network characteristics in a large sample of young adults. They found that global efficiency and total gray matter volume were both associated with general intelligence differences in females, whereas the overall volume of parieto-frontal brain regions served as the only significant predictor of intelligence in males. Interestingly, Ma et al. (2017) confirmed the importance of integrated information processing for general intelligence in a sample of gifted young adults. This study also demonstrated that segregated information processing, quantified by another graph analytical metric known as the clustering coefficient, is advantageous for cognitive performance in very high intelligence ranges. Fischer, Wolf, Scheurich, and Fellgiebel (2014) showed that the positive relationship between general intelligence and global efficiency also applies to samples of older adults. Studies investigating the associations between global efficiency and other cognitive abilities in older adults indicate that global efficiency is also positively correlated with executive functions, visuospatial reasoning, verbal abilities, and processing speed (Wen et al., 2011; Wiseman et al., 2018).
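For an unweighted network, these two metrics have compact definitions: nodal efficiency of a region is the mean inverse shortest path length from that region to every other region, and global efficiency is the average of nodal efficiency over all regions. A minimal sketch in plain Python; the four-node toy network is purely illustrative, not connectome data:

```python
from collections import deque

def shortest_path_lengths(adj, src):
    """Breadth-first search: hop counts from src in an unweighted graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def nodal_efficiency(adj, node):
    """Mean inverse shortest path length from `node` to all other nodes."""
    dist = shortest_path_lengths(adj, node)
    others = [n for n in adj if n != node]
    return sum(1.0 / dist[n] for n in others if n in dist) / len(others)

def global_efficiency(adj):
    """Average nodal efficiency over all nodes (mean inverse distance over node pairs)."""
    return sum(nodal_efficiency(adj, n) for n in adj) / len(adj)

# Toy "network": four regions connected in a ring (0-1-2-3-0).
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(round(global_efficiency(ring), 3))   # -> 0.833 (two neighbors at distance 1, one at 2)
```

Adding a direct 0–2 shortcut would raise global efficiency, since fewer pairs of regions would need multi-step detours; this is the intuition behind the finding that better-integrated networks support more efficient information exchange.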
However, these studies found that global efficiency was not related to memory function. Two recent studies investigated the relationship between intellectual ability and global efficiency in samples of younger age. In a sample of preadolescent children, Kim et al. (2016) showed that global efficiency was positively correlated with fluid intelligence as well as with four narrow abilities constituting fluid intelligence. Furthermore, they demonstrated that this relationship could predominantly be attributed to the nodal efficiency of specific areas belonging to the P-FIT network. Following a cross-trait/cross-twin approach, Koenis et al. (2018) conducted a well-conceived longitudinal study that examined the relationship between general intelligence and global efficiency in an adolescent cohort. They demonstrated that global efficiency increased in a non-linear fashion from early adolescence to early adulthood. Furthermore, results indicated that the association between general intelligence and global/nodal efficiency also increased during adolescence: no significant associations were observed in early adolescence, whereas positive associations were present in early adulthood. Importantly, global efficiency was found to show significant heritability throughout adolescence. Moreover, the extent to which genetic factors contributed to the correlation between general intelligence and global/nodal efficiency was shown to increase with age. In early adulthood, genetic factors were able to explain up to 87% of the observed correlation between global efficiency and general intelligence (Koenis et al., 2018). Based on these findings, it is reasonable to assume that the association between global efficiency and general intelligence is strongly genetically mediated in older adults as well. Nevertheless, future studies examining monozygotic and dizygotic twins are needed to substantiate this assumption.

Given the myriad of studies presented in this chapter, one might wonder what essential findings can be extracted from research on the relationships between neuroanatomical correlates and general intelligence. To summarize, one of the first hypotheses in neuroscientific intelligence research, namely that bigger brains contribute to higher intellectual ability, has been confirmed time and again by modern in vivo imaging methods. Special attention has been paid to the brain’s gray matter. Over the last decades, this research agenda has successfully related general intelligence to various properties of gray matter, such as cortical thickness, surface area, and dendritic organization. The growing body of evidence created by these efforts led to the proposal of the Parieto-Frontal Integration Theory. The centerpiece of the P-FIT is a brain network comprising multiple cortical and subcortical structures that have been identified as relevant correlates of intelligence. Importantly, the model also ascribes importance to the white matter fiber tracts connecting the respective brain regions. Accordingly, macrostructural properties of the corpus callosum and other major white matter fiber bundles, e.g., volume, thickness, or surface area, have been related to interindividual intelligence differences in the past.
The same applies to different measures of microstructural integrity, such as fractional anisotropy, which can be obtained via DWI. A recent addition to the ever-growing arsenal of methods employed in neuroscientific intelligence research is graph theory. This approach allows for the quantification of brain network organization. Its metrics, such as global or nodal efficiency, have consistently been found to correlate with interindividual differences in intelligence. It is important to note that many of the associations between neuroanatomical properties and general intelligence are likely to change dynamically across the lifespan. As research has shown, intellectual ability might be negatively associated with a certain brain property in children yet exhibit a positive correlation in adults. Finally, compelling evidence from a multitude of studies examining monozygotic and dizygotic twin samples points towards a major role of genetics in mediating the relationships between general intelligence and brain structure. Neuroscientific research on intelligence has come a long way since its early beginnings. Scientists like Broca and Galton, who were highly restricted in their methodological arsenal, would be amazed by the possibilities with which the neural foundations of intellectual ability can be investigated today. Likewise, one has every reason to be excited about the technological advancements the future will bring. The combination of psychometric intelligence testing and neuroimaging represents a relatively young discipline, and it is fair to say that the best is yet to come.

References

Allin, M. P. G., Kontis, D., Walshe, M., Wyatt, J., Barker, G. J., Kanaan, R. A. A., . . . Nosarti, C. (2011). White matter and cognition in adults who were born preterm. PLoS One, 6(10), e24525.
Atkinson, D. S., Abou-Khalil, B., Charles, P. D., & Welch, L. (1996). Midsagittal corpus callosum area, intelligence and language in epilepsy. Journal of Neuroimaging, 6(4), 235–239.
Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20.
Basser, P. J., & Pierpaoli, C. (1996). Microstructural and physiological features of tissues elucidated by quantitative-diffusion-tensor MRI. Journal of Magnetic Resonance, Series B, 111(3), 209–219.
Beaulieu, C. (2002). The basis of anisotropic water diffusion in the nervous system – A technical review. NMR in Biomedicine, 15(7–8), 435–455.
Behrens, T. E., Berg, H. J., Jbabdi, S., Rushworth, M. F. S., & Woolrich, M. W. (2007). Probabilistic diffusion tractography with multiple fibre orientations: What can we gain? NeuroImage, 34(1), 144–155.
Behrens, T. E., Woolrich, M. W., Jenkinson, M., Johansen-Berg, H., Nunes, R. G., Clare, S., . . . Smith, S. M. (2003). Characterization and propagation of uncertainty in diffusion-weighted MR imaging. Magnetic Resonance in Medicine, 50(5), 1077–1088.
Booth, T., Bastin, M. E., Penke, L., Maniega, S. M., Murray, C., Royle, N. A., . . . Hernández, M. (2013). Brain white matter tract integrity and cognitive abilities in community-dwelling older people: The Lothian Birth Cohort, 1936. Neuropsychology, 27(5), 595–607.
Campbell, J. S. W., & Pike, G. B. (2014). Potential and limitations of diffusion MRI tractography for the study of language. Brain and Language, 131, 65–73.
Catani, M., & Thiebaut de Schotten, M. (2008). A diffusion tensor imaging tractography atlas for virtual in vivo dissections. Cortex, 44(8), 1105–1132.
Chiang, M. C., Barysheva, M., Shattuck, D. W., Lee, A. D., Madsen, S. K., Avedissian, C., . . . Thompson, P. M. (2009). Genetics of brain fiber architecture and intellectual performance. Journal of Neuroscience, 29(7), 2212–2224.
Cremers, L. G. M., de Groot, M., Hofman, A., Krestin, G. P., van der Lugt, A., Niessen, W. J., . . . Ikram, M. A. (2016). Altered tract-specific white matter microstructure is related to poorer cognitive performance: The Rotterdam Study. Neurobiology of Aging, 39, 108–117.
Deary, I. J., Bastin, M. E., Pattie, A., Clayden, J. D., Whalley, L. J., Starr, J. M., & Wardlaw, J. M. (2006). White matter integrity and cognition in childhood and old age. Neurology, 66(4), 505–512.
Deary, I. J., Penke, L., & Johnson, W. (2010). The neuroscience of human intelligence differences. Nature Reviews Neuroscience, 11(3), 201–211.
Dunst, B., Benedek, M., Koschutnig, K., Jauk, E., & Neubauer, A. C. (2014). Sex differences in the IQ-white matter microstructure relationship: A DTI study. Brain and Cognition, 91, 71–78.
Ferrer, E., Whitaker, K. J., Steele, J. S., Green, C. T., Wendelken, C., & Bunge, S. A. (2013). White matter maturation supports the development of reasoning ability through its influence on processing speed. Developmental Science, 16(6), 941–951.
Filley, C. (2012). The behavioral neurology of white matter. New York: Oxford University Press.
Fischer, F. U., Wolf, D., Scheurich, A., & Fellgiebel, A. (2014). Association of structural global brain network properties with intelligence in normal aging. PLoS One, 9(1), e86258.
Galton, F. (1888). Head growth in students at the University of Cambridge. Nature, 38(996), 14–15.
Genc, E., Fraenz, C., Schlüter, C., Friedrich, P., Hossiep, R., Voelkle, M. C., . . . Jung, R. E. (2018). Diffusion markers of dendritic density and arborization in gray matter predict differences in intelligence. Nature Communications, 9(1), 1905.
Genc, E., Fraenz, C., Schlüter, C., Friedrich, P., Voelkle, M. C., Hossiep, R., & Güntürkün, O. (2019). The neural architecture of general knowledge. European Journal of Personality, 33(5), 589–605.
Goriounova, N. A., Heyer, D. B., Wilbers, R., Verhoog, M. B., Giugliano, M., Verbist, C., . . . Verberne, M. (2018). Large and fast human pyramidal neurons associate with intelligence. eLife, 7(1), e41714.
Goriounova, N. A., & Mansvelder, H. D. (2019). Genes, cells and brain areas of intelligence. Frontiers in Human Neuroscience, 13, 14.
Haász, J., Westlye, E. T., Fjær, S., Espeseth, T., Lundervold, A., & Lundervold, A. J. (2013). General fluid-type intelligence is related to indices of white matter structure in middle-aged and old adults. NeuroImage, 83, 372–383.
Hulshoff-Pol, H. E., Schnack, H. G., Posthuma, D., Mandl, R. C. W., Baare, W. F., van Oel, C., . . . Kahn, R. S. (2006). Genetic contributions to human brain morphology and intelligence. Journal of Neuroscience, 26(40), 10235–10242.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154.
Kievit, R. A., Davis, S. W., Mitchell, D. J., Taylor, J. R., Duncan, J., Tyler, L. K., . . . Cusack, R. (2014). Distinct aspects of frontal lobe structure mediate age-related differences in fluid intelligence and multitasking. Nature Communications, 5, 5658.
Kim, D. J., Davis, E. P., Sandman, C. A., Sporns, O., O’Donnell, B. F., Buss, C., & Hetrick, W. P. (2016). Children’s intellectual ability is associated with structural network integrity. NeuroImage, 124, 550–556.
Koenis, M. M. G., Brouwer, R. M., Swagerman, S. C., van Soelen, I. L. C., Boomsma, D. I., & Hulshoff Pol, H. E. (2018). Association between structural brain network efficiency and intelligence increases during adolescence. Human Brain Mapping, 39(2), 822–836.
Kontis, D., Catani, M., Cuddy, M., Walshe, M., Nosarti, C., Jones, D., . . . Allin, M. (2009). Diffusion tensor MRI of the corpus callosum and cognitive function in adults born preterm. Neuroreport, 20(4), 424–428.
Kuznetsova, K. A., Maniega, S. M., Ritchie, S. J., Cox, S. R., Storkey, A. J., Starr, J. M., . . . Bastin, M. E. (2016). Brain white matter structure and information processing speed in healthy older age. Brain Structure and Function, 221(6), 3223–3235.
Le Bihan, D. (2003). Looking into the functional architecture of the brain with diffusion MRI. Nature Reviews Neuroscience, 4(6), 469–480.
Li, Y., Liu, Y., Li, J., Qin, W., Li, K., Yu, C., & Jiang, T. (2009). Brain anatomical network and intelligence. PLoS Computational Biology, 5(5), e1000395.
Luders, E., Narr, K. L., Bilder, R. M., Thompson, P. M., Szeszko, P. R., Hamilton, L., & Toga, A. W. (2007). Positive correlations between corpus callosum thickness and intelligence. NeuroImage, 37(4), 1457–1464.
Ma, J., Kang, H. J., Kim, J. Y., Jeong, H. S., Im, J. J., Namgung, E., . . . Oh, J. K. (2017). Network attributes underlying intellectual giftedness in the developing brain. Scientific Reports, 7(1), 11321.
MacKay, A. L., & Laule, C. (2016). Magnetic resonance of myelin water: An in vivo marker for myelin. Brain Plasticity, 2(1), 71–91.
Malpas, C. B., Genc, S., Saling, M. M., Velakoulis, D., Desmond, P. M., & O’Brien, T. J. (2016). MRI correlates of general intelligence in neurotypical adults. Journal of Clinical Neuroscience, 24, 128–134.
McDaniel, M. A. (2005). Big-brained people are smarter: A meta-analysis of the relationship between in vivo brain volume and intelligence. Intelligence, 33(4), 337–346.
Mori, S. (2007). Introduction to diffusion tensor imaging. Oxford: Elsevier.
Morris, D. M., Embleton, K. V., & Parker, G. J. M. (2008). Probabilistic fibre tracking: Differentiation of connections from chance events. NeuroImage, 42(4), 1329–1339.
Muetzel, R. L., Mous, S. E., van der Ende, J., Blanken, L. M. E., van der Lugt, A., Jaddoe, V. W. V., . . . White, T. (2015). White matter integrity and cognitive performance in school-age children: A population-based neuroimaging study. NeuroImage, 119, 119–128.
Narr, K. L., Woods, R. P., Thompson, P. M., Szeszko, P., Robinson, D., Dimtcheva, T., . . . Bilder, R. M. (2007). Relationships between IQ and regional cortical gray matter thickness in healthy adults. Cerebral Cortex, 17(9), 2163–2171.
Neubauer, A., & Fink, A. (2009). Intelligence and neural efficiency. Neuroscience and Biobehavioral Reviews, 33(7), 1004–1023.
Nusbaum, F., Hannoun, S., Kocevar, G., Stamile, C., Fourneret, P., Revol, O., & Sappey-Marinier, D. (2017). Hemispheric differences in white matter microstructure between two profiles of children with high intelligence quotient vs. controls: A tract-based spatial statistics study. Frontiers in Neuroscience, 11, 173.
Ocklenburg, S., Anderson, C., Gerding, W. M., Fraenz, C., Schluter, C., Friedrich, P., . . . Genc, E. (2018). Myelin water fraction imaging reveals hemispheric asymmetries in human white matter that are associated with genetic variation in PLP1. Molecular Neurobiology, 56(6), 3999–4012.
Pakkenberg, B., & Gundersen, H. J. G. (1997). Neocortical neuron number in humans: Effect of sex and age. Journal of Comparative Neurology, 384(2), 312–320.
Penke, L., Maniega, S. M., Bastin, M. E., Hernandez, M. C. V., Murray, C., Royle, N. A., . . . Deary, I. J. (2012). Brain white matter tract integrity as a neural foundation for general intelligence. Molecular Psychiatry, 17(10), 1026–1030.
Penke, L., Maniega, S. M., Murray, C., Gow, A. J., Hernandez, M. C., Clayden, J. D., . . . Deary, I. J. (2010). A general factor of brain white matter integrity predicts information processing speed in healthy older people. Journal of Neuroscience, 30(22), 7569–7574.
Pietschnig, J., Penke, L., Wicherts, J. M., Zeiler, M., & Voracek, M. (2015). Meta-analysis of associations between human brain volume and intelligence differences: How strong are they and what do they mean? Neuroscience and Biobehavioral Reviews, 57, 411–432.
Pineda-Pardo, J. A., Martínez, K., Román, F. J., & Colom, R. (2016). Structural efficiency within a parieto-frontal network and cognitive differences. Intelligence, 54, 105–116.
Ryman, S. G., Yeo, R. A., Witkiewitz, K., Vakhtin, A. A., van den Heuvel, M., de Reus, M., . . . Jung, R. E. (2016). Fronto-parietal gray matter and white matter efficiency differentially predict intelligence in males and females. Human Brain Mapping, 37(11), 4006–4016.
Sampaio-Baptista, C., Khrapitchev, A. A., Foxley, S., Schlagheck, T., Scholz, J., Jbabdi, S., . . . Thomas, N. (2013). Motor skill learning induces changes in white matter microstructure and myelination. Journal of Neuroscience, 33(50), 19499–19503.
Schmithorst, V. J. (2009). Developmental sex differences in the relation of neuroanatomical connectivity to intelligence. Intelligence, 37(2), 164–173.
Schmithorst, V. J., Wilke, M., Dardzinski, B. J., & Holland, S. K. (2005). Cognitive functions correlate with white matter architecture in a normal pediatric population: A diffusion tensor MRI study. Human Brain Mapping, 26(2), 139–147.
Smith, S. M., Jenkinson, M., Johansen-Berg, H., Rueckert, D., Nichols, T. E., Mackay, C. E., . . . Matthews, P. M. (2006). Tract-based spatial statistics: Voxelwise analysis of multi-subject diffusion data. NeuroImage, 31(4), 1487–1505.
Tamnes, C. K., Østby, Y., Walhovd, K. B., Westlye, L. T., Due-Tønnessen, P., & Fjell, A. M. (2010). Intellectual abilities and white matter microstructure in development: A diffusion tensor imaging study. Human Brain Mapping, 31(10), 1609–1625.
Tang, C. Y., Eaves, E. L., Ng, J. C., Carpenter, D. M., Mai, X., Schroeder, D. H., . . . Haier, R. J. (2010). Brain networks for working memory and factors of intelligence assessed in males and females with fMRI and DTI. Intelligence, 38(3), 293–303.
Tuch, D. S. (2004). Q-ball imaging. Magnetic Resonance in Medicine, 52(6), 1358–1372.
Urger, S. E., De Bellis, M. D., Hooper, S. R., Woolley, D. P., Chen, S. D., & Provenzale, J. (2015). The superior longitudinal fasciculus in typically developing children and adolescents: Diffusion tensor imaging and neuropsychological correlates. Journal of Child Neurology, 30(1), 9–20.
Wang, Y., Adamson, C., Yuan, W., Altaye, M., Rajagopal, A., Byars, A. W., & Holland, S. K. (2012). Sex differences in white matter development during adolescence: A DTI study. Brain Research, 1478, 1–15.
Wen, W., Zhu, W., He, Y., Kochan, N. A., Reppermund, S., Slavin, M. J., . . . Sachdev, P. (2011). Discrete neuroanatomical networks are associated with specific cognitive abilities in old age. Journal of Neuroscience, 31(4), 1204–1212.
Wiseman, S. J., Booth, T., Ritchie, S. J., Cox, S. R., Muñoz Maniega, S., Valdés Hernández, M., . . . Deary, I. J. (2018). Cognitive abilities, brain white matter hyperintensity volume, and structural network connectivity in older age. Human Brain Mapping, 39(2), 622–632.
Wolff, S. D., & Balaban, R. S. (1989). Magnetization transfer contrast (MTC) and tissue water proton relaxation in vivo. Magnetic Resonance in Medicine, 10(1), 135–144.
Yu, C., Li, J., Liu, Y., Qin, W., Li, Y., Shu, N., . . . Li, K. (2008). White matter tract integrity and intelligence in patients with mental retardation and healthy adults. NeuroImage, 40(4), 1533–1541.
Zhang, H., Schneider, T., Wheeler-Kingshott, C. A., & Alexander, D. C. (2012). NODDI: Practical in vivo neurite orientation dispersion and density imaging of the human brain. NeuroImage, 61(4), 1000–1016.


11 Structural Brain Imaging of Intelligence
Stefan Drakulich and Sherif Karama

Overview

The brain’s remarkable inter-individual structural variability provides a wealth of information that is readily accessible via structural magnetic resonance imaging (sMRI). sMRI enables various structural properties of the brain to be captured at a macroscale level – one that is quickly moving towards submillimeter resolution (Budde, Shajan, Scheffler, & Pohmann, 2014; Stucht et al., 2015). This constitutes a remarkable leap forward from historically crude measures, such as head circumference, aimed at understanding the neurobiology of intelligence differences. The work presented here, and that which continues today, is distinguished by years of incremental validation. sMRI-based global (e.g., total brain volume), regional (e.g., subcortical structural volumes), and local (e.g., local cortical thickness) brain measurements have all been examined for associations with cognitive ability at different time points throughout the lifespan (Luders, Narr, Thompson, & Toga, 2009). The growing body of research in this field suggests that many aspects of our cognitive abilities are, to various degrees, associated with aspects of neuroanatomy (Luders et al., 2009). However, likely due to limited statistical power and methodological idiosyncrasies, there are still contradictions in the field, and unambiguous conclusions regarding associations between intelligence and brain structure cannot yet be drawn. This chapter summarizes what we know to date on the topic and seeks to provide suggestions (implicit and explicit) regarding the implementation of future structural neuroimaging studies of intelligence. We avoid advocating for any particular pipeline or software, owing to the inherent subjectivity of such choices and to the rapid development and refinement of the various approaches.

Total Brain Volume

The relationship between brain volume and intelligence has been debated since at least the 1930s (McDaniel, 2005). More recently, and using various brain imaging methods, the evidence points to quite a robust but rather modest association between brain volume and intelligence (McDaniel, 2005). Indeed, most sMRI-based studies report correlations ranging from .3 to .4 between brain volume and measures of general intelligence (McDaniel, 2005; Wickett, Vernon, & Lee, 2000). A meta-analysis of over 8,000 individuals across 88 studies using various brain-imaging methods found significant positive associations between brain volume and IQ (r = .24), and found this association to generalize over age, sex, and cognitive subdomains (Pietschnig, Penke, Wicherts, Zeiler, & Voracek, 2015). This may seem surprising given some historical findings suggesting no association between intellectual attainment and brain weight (which can be viewed as a proxy for size/volume). For instance, Einstein’s brain was 1,230 grams at autopsy, which was more or less average for his age, while Anatole France, a Nobel laureate in literature, had a brain that weighed only 1,100 grams at autopsy (DeFelipe, 2011). However, there is no contradiction here, as a correlation of .4, the maximum value reported above, means that, at best, brain size accounts for only 16% (.4 squared) of the variance in general intelligence. In other words, at least 84% of intelligence differences are likely due to other factors (Andreasen et al., 1993; Rushton & Ankney, 2009). The corollary is that some extremely bright individuals may have relatively small brains, and vice versa. The respective contributions of genes, environment, and their interactions to the relationship between brain volume and intelligence are not yet well elucidated. An early study on the topic, which requires replication, reported no brain volume association with intelligence within families (Schoenemann, Budinger, Sarich, & Wang, 2000).
Acknowledging that brain volume can change across the lifespan, another study reported an association between brain volume change and IQ, and showed that part of this association was of genetic origin (Brouwer et al., 2014). Finally, according to Pietschnig et al. (2015), the strength of the association between brain volume and intelligence might have been inflated by publication bias and may be closer to a correlation of .2 than .4.

Voxel-based Morphometry Methods

sMRI analyses have initially, and still predominantly, been performed using voxel-based morphometry (VBM), a method that aims to link a trait of interest to local amounts of gray matter, white matter, and/or cerebrospinal fluid (Ashburner & Friston, 2000). Among the advantages of VBM are its ease of implementation and its relatively low computational cost.

A VBM analysis begins with a high-resolution MRI brain scan. This raw MRI then undergoes a series of linear and nonlinear transformations (standardization/normalization) so that each individual brain matches a predetermined template brain (registration template). The registration template is a “reference brain,” typically an average of many high-quality magnetic resonance images (Evans, Janke, Collins, & Baillet, 2012). The three-dimensional coordinate system on which the average/registration template is set is referred to as “stereotaxic space,” whereas the coordinate system of the raw brain image, prior to its transformation, is referred to as “native space.” Note that the normalization step does not lead to a perfect match between individual brains and the template, aiming instead only to correct for global brain-shape and size differences, because a perfect match would obfuscate any attempt to find VBM differences between subjects or groups of subjects when making measurements in stereotaxic space. Normalization aims to have a given set of three-dimensional coordinates represent the same brain region across subjects, thereby facilitating inter-subject/inter-group comparisons (Collins, Neelin, Peters, & Evans, 1994; Riahi et al., 1998; Worsley et al., 1996).

After the brain normalization procedure, each brain image passes through an algorithm that performs tissue classification of the voxels (voxels are somewhat analogous to three-dimensional pixels). This parcellates the brain into its various tissue components, including gray matter, white matter, and cerebrospinal fluid (CSF) (Cocosco, Zijdenbos, & Evans, 2003; Zijdenbos, Lerch, Bedell, & Evans, 2005). This step is usually followed by three-dimensional smoothing (blurring) of the data to improve the signal-to-noise ratio (Good et al., 2001). Smoothing also renders the data more normally distributed, thus increasing the validity of parametric statistical tests, and reduces the effects of individual variation in sulcal/gyral anatomy that remain after standardization. The degree of smoothing is determined by the size of the so-called smoothing kernel.
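The kernel size is conventionally specified as a full width at half maximum (FWHM) in millimeters, which relates to the standard deviation of the Gaussian by sigma = FWHM / (2 * sqrt(2 * ln 2)) ≈ FWHM / 2.355. A sketch with SciPy; the 8 mm kernel, 2 mm voxel size, and random "volume" are illustrative choices only:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_volume(volume, fwhm_mm, voxel_size_mm):
    """Gaussian-smooth a 3D volume with a kernel specified as FWHM in millimeters."""
    # Convert FWHM to the Gaussian standard deviation, then to voxel units per axis.
    sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # ~ fwhm_mm / 2.355
    sigma_vox = [sigma_mm / size for size in voxel_size_mm]
    return gaussian_filter(volume, sigma=sigma_vox)

# Illustrative: an 8 mm FWHM kernel applied to noise sampled on a 2 mm isotropic grid.
rng = np.random.default_rng(0)
volume = rng.standard_normal((40, 48, 40))
smoothed = smooth_volume(volume, fwhm_mm=8.0, voxel_size_mm=(2.0, 2.0, 2.0))
print(smoothed.std() < volume.std())   # True: blurring pools neighboring voxels
```

Passing a per-axis sigma sequence lets the same function handle anisotropic voxels, where a fixed FWHM in millimeters corresponds to different widths in voxel units along each axis.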
While it is sometimes believed that there is a standard size for the smoothing kernel, the matched filter theorem (Turin, 1960) dictates otherwise. The theorem stipulates that the optimal size of the smoothing kernel depends on the size of the “real” underlying signal one wishes to detect; a kernel that is smaller or larger than the true signal will decrease sensitivity to detect that signal. In other words, if the size of a brain area linked to some trait or phenotype (e.g., intelligence) is 20 mm in diameter, then the optimal kernel size to detect that association is 20 mm in diameter.

There are two main types of VBM: unmodulated and modulated (aka optimized). Both aim to quantify, at each voxel, the degree of association between the amount of a tissue of interest (e.g., gray matter) and a phenotype of interest (e.g., intelligence). Unmodulated VBM uses data directly in stereotaxic space, and so disregards the fact that regions have been morphed (i.e., stretched or compressed) to fit the registration template. This leads to inferences about associations between a given trait of interest and the relative amount of a tissue of interest at each voxel. This relative amount is often referred to as tissue density, but the term is not meant to denote density in the sense of “concentration” of tissue at the voxel level. In other words, all else being equal, a voxel considered as having greater gray matter density contains a greater amount of gray matter rather than gray matter that is more concentrated in the sense of more compact. Modulated (aka optimized) VBM, on the other hand, adjusts the amount of a tissue of interest for the degree of morphing it has undergone. For instance, if a given voxel has been expanded to make a brain fit the registration template, the amount of tissue contained within that voxel is reduced to compensate for the expansion. This leads to inferences about associations between a given trait of interest and the absolute amount/volume of a tissue of interest in native space.

After processing, associations between a trait of interest and gray/white/CSF voxel values across the brain are typically analyzed using linear regression models. As thousands of voxels are examined independently for putative associations, appropriate corrections for multiple comparisons are required. For a figure summarizing the various VBM steps, see Figure 3.3 by Martínez and Colom.
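This final statistical step can be sketched on simulated data: a massively univariate test at each voxel followed by a correction across voxels. The Benjamini–Hochberg false discovery rate procedure used below is one common correction choice, not necessarily what any particular study applied, and the sample sizes and effect magnitudes are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects, n_voxels = 80, 5000

iq = rng.normal(100.0, 15.0, n_subjects)            # trait of interest
gm = rng.standard_normal((n_subjects, n_voxels))    # voxelwise gray matter values
gm[:, :50] += 0.05 * (iq[:, None] - 100.0)          # plant a true effect in 50 voxels

# Massively univariate testing: one correlation (simple regression) per voxel.
p_values = np.array([stats.pearsonr(iq, gm[:, v])[1] for v in range(n_voxels)])

def fdr_bh(p, q=0.05):
    """Benjamini-Hochberg: reject the k smallest p-values, where k is the largest
    index with p_(k) <= (k / m) * q."""
    m = len(p)
    order = np.argsort(p)
    below = np.nonzero(p[order] <= q * np.arange(1, m + 1) / m)[0]
    reject = np.zeros(m, dtype=bool)
    if below.size:
        reject[order[:below[-1] + 1]] = True
    return reject

significant = fdr_bh(p_values)
print(int(significant.sum()))   # close to the 50 voxels carrying a simulated effect
```

Without the correction, an uncorrected threshold of p < .05 would flag roughly 5% of the ~5,000 null voxels (~250 false positives), which illustrates why uncorrected voxelwise maps are uninterpretable.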

Voxel-based Morphometry Findings

Most VBM studies on brain correlates of intelligence have focused on the cortex and produced predominantly positive associations in multiple brain areas. This should not be interpreted as meaning that other brain structures are unimportant for intelligence differences; rather, it likely reflects the fact that the large inter-individual variability of the human cortex, together with the known involvement of the cortex in higher-order cognitive processes, has made it a good candidate for research on the neurobiology of intelligence (Eickhoff, Constable, & Yeo, 2018; Kennedy et al., 1998; Regis et al., 2005). The main findings from VBM methods are outlined in the Modulated VBM Cortical Findings, Unmodulated VBM Cortical Findings, and Modulated VBM White Matter Findings sections of this chapter. As a reminder, modulated VBM is used to estimate local gray matter volume, whereas unmodulated VBM is used to estimate the relative local amount of a tissue of interest, referred to as “density.” VBM-based cortical and subcortical gray matter as well as white matter findings are discussed in this section. Briefly, gray matter is distinguished by the presence of neuronal cell bodies, which are found in the cerebral cortex and within subcortical structures/nuclei. White matter is distinguished by a predominance of myelinated axons and essentially represents most of the brain’s wiring, with myelin serving as an insulating sheath that covers axons and allows for more efficient and rapid signal conduction.

Important caveats: as suggested in the introduction of this chapter, the brain imaging literature of intelligence research is plagued with contradictory findings. Perhaps the most important reason for these contradictions is the use of low-powered studies with lenient thresholds for controlling for multiple comparisons. Unfortunately, using lenient thresholds in such a context, where the typical effect size of associations between brain metrics and measures of intelligence is rather modest, is not an adequate strategy for compensating for low power. Rather, it frequently leads to false positive findings that tend to be more reflective of noise than of real signals (Thompson, 2020). Another likely reason for the non-convergence of findings is poor measurement of intelligence. In some cases, a single subtest is used. Sometimes, only a few items from a given subtest are administered within a few minutes and taken as a proxy for a good estimate of general intelligence. For more on this, see Chapter 3, by Martínez and Colom. A further possible source of non-reproducibility is range restriction in samples of convenience. University students, who are frequently solicited in research, tend to have a higher mean IQ than the general population, and the range of cognitive ability in these samples is usually smaller than that found in the general population. This is not trivial and can, at times, distort findings. Indeed, in cases of cognitive ability range restriction, the general factor of intelligence (g) will account for less of the variance in intelligence differences to the benefit of stronger effects of specific abilities (Haier, Karama, Colom, Jung, & Johnson, 2014). Finally, one needs to be aware that scanner-specific effects, which can occur even between two identical scanners of the same brand using the exact same scanning sequence, can have an impact on findings (Stonnington et al., 2008). One must therefore take care to account for this as much as possible when analyzing and interpreting results produced from different scanners. A common procedure is to calibrate the scanners by scanning the same object and/or individual on all scanners.
Another helpful, albeit imperfect, procedure is to covary for scanner in the analysis. While both procedures help, residual scanner effects may still persist. Despite these caveats, when taken as a whole, the literature seems to be converging towards certain general findings on associations between intelligence and measures of brain structure. We attempt to provide the salient points here.
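For illustration, the covariate approach can be sketched with simulated data: a scanner indicator (dummy) variable is entered into an ordinary least squares model so that the IQ effect is estimated within scanner. This is only a sketch; all variable names, numbers, and the simulated confound are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
iq = rng.normal(100, 15, n)
# Hypothetical confound: higher-IQ participants were scanned more often at site 1
scanner = (iq + rng.normal(0, 10, n) > 100).astype(float)
# Simulated gray matter volume (cm^3): a modest IQ effect plus a scanner offset
gm_volume = 600 + 0.5 * iq + 25 * scanner + rng.normal(0, 10, n)

def ols(X, y):
    """Ordinary least squares; returns the coefficient vector."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Model 1: scanner ignored; the IQ slope absorbs the site confound
b_naive = ols(np.column_stack([np.ones(n), iq]), gm_volume)
# Model 2: scanner entered as a dummy covariate; IQ slope estimated within site
b_adj = ols(np.column_stack([np.ones(n), iq, scanner]), gm_volume)

print(b_naive[1], b_adj[1])  # IQ coefficients without and with the scanner covariate
```

In this simulation the adjusted model recovers a slope near the simulated IQ effect, whereas the naive model overestimates it; as noted above, even the adjusted model cannot remove all residual scanner effects.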

Modulated (aka Optimized) VBM Cortical Findings

Modulated VBM has a history of relatively robust and distributed associations between gray matter volumes and indices of cognitive ability. Cortical gray matter volume associations with cognitive ability are most consistently reported in the frontal and parietal lobes. This is in keeping with the Parietal-Frontal Integration Theory of Intelligence (P-FIT) proposed by Jung and Haier (2007) on the basis of a review of 37 neuroimaging studies of intelligence. It is important to clarify that the P-FIT is not based on a meta-analysis but on a thorough review of brain imaging findings.

Structural Brain Imaging of Intelligence

A year prior to publishing the P-FIT review, Colom, Haier, and Jung reported that the cognitive tests with the highest g-loadings tended to exhibit more widespread positive gray matter volume associations than the less g-loaded cognitive tests. In keeping with the P-FIT, these areas involved frontal and parietal areas but also temporal and occipital regions (Colom, Jung, & Haier, 2006). More specifically, positive gray matter volume associations with the general intelligence factor (g) were reported in multiple histologically distinguishable Brodmann areas (BA) in frontal (BA 10, 46, and 9), parietal (BA 43 and 3), temporal (BA 21, 37, 22, and 42), and occipital regions (BA 19) (Haier, Jung, Yeo, Head, & Alkire, 2004; Jung & Haier, 2007). A subsequent study conducted by the same team in a sample of 100 healthy adult male and female university students also found several positive associations between regional gray matter volume and g in frontal (BA 5, 6, 8, 9, 10, 11, 45, 46, and 47), parietal (BA 3, 7, 20, 21, 22, 36, 39, and 42), and occipital regions (BA 18 and 19); no negative associations were found (Colom et al., 2009). They summarized their findings by stating that their study corroborates the P-FIT model given the distribution and arrangement of volumetric correlates of intelligence factors (Colom et al., 2009; Jung & Haier, 2007). In keeping with this, findings from an independent meta-analysis of mostly modulated VBM studies generally supported the P-FIT (Jung & Haier, 2007) but also reported some cortical gray matter associations that extended beyond those of the P-FIT (Basten, Hilger, & Fiebach, 2015) by adding the precuneus, the presupplementary motor area, and the lateral middle temporal gyrus.

Unmodulated VBM Cortical Findings

Studies opting for the non-optimized (i.e., unmodulated) VBM protocol to relate cognitive ability to gray matter "density" are rather scarce. Modroño et al. (2019) examined the development of transitive reasoning in parietal regions and found that improved performance on the reasoning task was negatively associated with gray matter density in the same region in adolescents – but not in the adult group. In a small sample of gifted mathematicians with age- and sex-matched controls, Aydin et al. (2007) found greater gray matter density in the bilateral inferior parietal lobules and left inferior frontal gyrus for the group of mathematicians. A modestly sized study performed on young adolescents by Frangou, Chitins, and Williams (2004) found a significant positive association between IQ and gray matter density in the orbitofrontal cortex and cingulate gyrus.

The unmodulated VBM protocol has also been used to estimate subcortical gray matter density associations with intelligence. Frangou et al. (2004) found positive associations between IQ and gray matter density in the thalamus and a negative correlation with IQ in the caudate nucleus. They also found positive associations between cerebellar gray matter density and IQ, partially echoing previous work that used manual segmentation and reported,
among other regions, an association between IQ and total cerebellar volume (Andreasen et al., 1993). Associations between intelligence and subcortical nuclei have been reproduced using volume estimates based on various methods and are discussed in the Other Noteworthy Findings Using Various Methods section of this chapter.

Modulated VBM White Matter Findings

Total white matter volume increases during childhood and adolescence, likely reflecting increased connectivity between disparate brain regions and functional connectivity pathways (Lenroot et al., 2007). These changes are thought to contribute in part to the development and refinement of cognitive ability (Paus et al., 1999). The increase in total white matter volume has been reported to slow in late adolescence and early adulthood, with volume then decreasing in late adulthood (Westlye et al., 2009).

Several independent studies have found significant positive associations between total or regional white matter and intelligence using various methods to estimate white matter volume (Andreasen et al., 1993; Gur et al., 1999; Narr et al., 2007). These findings have been corroborated in at least two modulated VBM studies examining white matter correlates of intelligence (Haier et al., 2004; Haier, Jung, Yeo, Head, & Alkire, 2005). In the first study, Haier et al. (2004) reported positive associations between white matter volume and intelligence in two independent samples of adults of different mean ages (27 and 59 years). Although overlap between the two samples in the regions associated with intelligence was weak, both samples exhibited distributed local white matter volume associations with intelligence. The reason for this weak overlap is not entirely clear but is likely due to lack of power and small sample sizes (23 and 24). Interestingly, positive associations between regional white matter volumes and IQ coincided, in large part, with corresponding regions in gray matter, a structural covariance suggestive of functional connectivity (Haier et al., 2004). For support for this hypothesis, see Chapter 10, by Genç and Fraenz. In a second study from the same group, sex differences in associations between intelligence and white and gray matter volume were assessed.
They had several notable findings: men had roughly 6.5 times the number of gray matter (GM) voxels identified as related to intellectual functioning than did women, whereas women had roughly 9 times more white matter (WM) voxels than men; additionally, men had no WM voxels identified in the frontal lobes, while women had 86% of their identified WM voxels in the frontal lobes (Haier et al., 2005). The authors interpreted their findings as suggesting that men and women achieve similar levels of cognitive functioning using different regions. Although this might be the case, formally testing for statistically different gender effects in associations between intelligence and white (or gray) matter volume was not done (likely due to low power). Yet, this is required to formally draw a conclusion of
gender differences. Nonetheless, the findings pointed Haier et al. (2005) towards a very interesting (and likely correct) hypothesis “that different types of brain designs may manifest equivalent intellectual performances” (p. 320).

Surface-based Morphometry Methods

Surface-based morphometry (SBM) seeks to produce brain measures in a different manner than VBM but makes use of segmentation and normalization methods similar to those used in VBM. In SBM, brain surfaces (i.e., sheath renderings of certain brain structures) are produced. Typically, cortical white and gray matter surfaces are generated, but surfaces of subcortical nuclei can also be produced. The exact method used to produce these surfaces goes beyond the scope of this chapter and can vary between pipelines (e.g., CIVET, FreeSurfer) and even between the various versions/updates of the same pipeline. For more details on some surface-based morphometry methods, see www.bic.mni.mcgill.ca/ServicesSoftware/CIVET and https://surfer.nmr.mgh.harvard.edu/.

One of the motivations for developing SBM was to overcome an important limitation of VBM: its inability to distinguish between area- and thickness-driven cortical volume differences (Sanabria-Diaz et al., 2010; Winkler et al., 2010). Indeed, by producing cortical white and gray matter surfaces, one can readily estimate the distance between these surfaces across the cortex and hence produce estimates of cortical thickness at multiple points on the cortical mantle. Local surface area can also be assessed across the cortex using SBM. Surface area has been used to estimate cortical area for the whole cortex or within specific regions of interest and has also been used to calculate areas of subcortical regions (Winkler et al., 2012). One has a choice of which surface to sample from when calculating a cortical areal quantity in SBM. The white surface (the interface between gray and white matter) is sometimes used to calculate surface area. The benefits of using the white surface are that it matches directly to a morphological feature and is insensitive to variations in cortical thickness.
The other possible choice is to sample from the middle surface, which runs at mid-distance between the white and pial surfaces; although the middle surface does not inherently match any specific cortical layer, it represents gyri and sulci in a relatively unbiased fashion (Van Essen, 2005). Finally, local cortical volume can also be estimated with SBM by considering both cortical area and thickness. It is noteworthy that SBM metrics are typically produced in native space. As with VBM, associations between a trait of interest and SBM metrics are frequently analyzed by implementing linear regression models independently at thousands of points and correcting for these multiple comparisons. For a figure summarizing the various SBM steps, see Figure 3.3.
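The mass-univariate strategy just described can be sketched as follows, with simulated data standing in for vertex-wise cortical thickness. The normal approximation for p-values and the Benjamini–Hochberg false discovery rate procedure are illustrative choices only; published studies often use other corrections (e.g., random field theory), and all numbers here are hypothetical.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_vert = 120, 1000
iq = rng.normal(100, 15, n_subj)

# Simulated thickness (mm): only the first 50 vertices carry a true IQ effect
thick = rng.normal(2.5, 0.2, (n_subj, n_vert))
thick[:, :50] += 0.004 * (iq[:, None] - 100)

# One linear regression per vertex, done in a single vectorized solve
X = np.column_stack([np.ones(n_subj), iq])
beta, *_ = np.linalg.lstsq(X, thick, rcond=None)
resid = thick - X @ beta
sigma2 = (resid ** 2).sum(axis=0) / (n_subj - 2)
se = np.sqrt(sigma2 / ((iq - iq.mean()) ** 2).sum())
t = beta[1] / se

# Two-sided p-values (normal approximation), then Benjamini-Hochberg FDR
p = np.array([math.erfc(abs(tv) / math.sqrt(2)) for tv in t])
order = np.argsort(p)
bh_line = 0.05 * np.arange(1, n_vert + 1) / n_vert
passed = p[order] <= bh_line
k = passed.nonzero()[0].max() + 1 if passed.any() else 0
sig = np.zeros(n_vert, dtype=bool)
sig[order[:k]] = True
print(sig.sum(), sig[:50].sum())  # surviving vertices; how many are true effects
```

With simulated effects of this modest size, only a fraction of the true vertices survive correction, which illustrates the power concerns discussed earlier in this chapter.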


Surface-based Morphometry Findings

The caveats about VBM findings also apply to SBM findings. Regarding sample size, in the experience of the authors of this chapter, a sample of at least 200 subjects is required for a certain degree of stability in cortical thickness findings. However, because SBM pipelines tend to be exquisitely sensitive to small movement artefacts, we must add that differences between studies in the quality control of SBM protocol outputs very likely also account for some differences in findings. For more on this specific issue, see the Cortical Thickness Findings section of this chapter.

Another likely source of contradiction in the literature is the use of multiple statistical models where only the models yielding "significant" findings are reported in a study. On top of the obvious issue of cherry-picking, such a strategy sometimes further leads to reporting results from only a very complex regression model with multiple interaction terms, where the main effect of interest no longer reflects what the authors originally intended to examine. For instance, in a hypothetical situation where one would want to examine the association between IQ and height while correcting for age (i.e., height ~ Intercept + IQ + Age), the estimated main effect of IQ would reflect the estimated change in height for each point of IQ gain/loss. As soon as one introduces an "Age by IQ" interaction, the main effect no longer holds the same meaning. Rather, it provides the change in height linked to each point of IQ gain/loss when Age equals 0 (and not across all ages). For more on this, we strongly suggest Aiken and West's (1991) book on interpreting interactions in multiple regression.

Another important source of differences in SBM reports (and VBM reports) is whether a study has controlled for total brain volume.
The issue of whether one should or should not control for total brain volume is not entirely clear, and the following thought experiment with cortical thickness is useful for understanding the issue at hand. Imagine the following absurd hypothetical situation: (1) every human being has exactly the same brain shape and size except for the thickness of the prefrontal cortex; and (2) there is a positive association between the thickness of the prefrontal cortex and intelligence. If this were the case, a study examining associations between cortical thickness and intelligence should find positive associations only in the prefrontal cortex. However, if one were to control for total brain volume, this association would disappear because the only source of variance in volume is the thickness of the prefrontal cortex. Not knowing that the rest of the brain is identical between humans, one would then conclude that the association is not really linked exclusively to the prefrontal cortex but stems from a global factor that affects the whole brain. However, based on the two premises, we know this not to be the case.

Now, putting the thought experiment aside, we know that brain volume is, to some degree, related to thickness and to any other measure of size in the brain. What is not entirely clear is whether brain volume stems from
the sum of local effects or from global factors that simultaneously affect multiple brain regions. If brain volume differences stem mainly from small local effects, then controlling for total brain volume (which is dependent on these local differences) would likely eradicate real local findings and should not be done. On the other hand, if there are global factors that affect thickness across the cortex, then controlling for total brain volume is the strategy that should be used. As the truth likely lies somewhere between these two extremes, the need for controlling for total brain volume is unclear, and presenting both results (with and without control for total brain volume) should perhaps be encouraged. Whatever the case may be, controlling for total brain volume will, in most cases, affect associations between neural morphometrics and IQ (or any other behavior of interest) and should be taken into account when comparing findings.
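Returning to the interaction-term caveat raised earlier in this section, a small simulation makes it concrete: once an Age × IQ interaction is entered, the "main effect" of IQ becomes its effect at Age = 0 unless age is centered first. The data and coefficients below are entirely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
iq = rng.normal(100, 15, n)
age = rng.uniform(20, 60, n)
# Hypothetical data: the IQ effect on the outcome grows with age
y = 100 + (0.1 + 0.01 * age) * iq + rng.normal(0, 5, n)

def ols(X, y):
    """Ordinary least squares; returns the coefficient vector."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

ones = np.ones(n)
b_main = ols(np.column_stack([ones, iq, age]), y)               # no interaction
b_int = ols(np.column_stack([ones, iq, age, iq * age]), y)      # raw interaction
age_c = age - age.mean()
b_cent = ols(np.column_stack([ones, iq, age_c, iq * age_c]), y) # centered age

# b_main[1]: IQ effect averaged over the sample
# b_int[1]:  IQ effect at Age = 0, an extrapolation outside the data
# b_cent[1]: IQ effect at the mean age, comparable to b_main[1]
print(b_main[1], b_int[1], b_cent[1])
```

The raw-interaction coefficient is much smaller than the other two, not because the IQ effect vanished but because its meaning changed; centering restores an interpretable main effect.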

Cortical Thickness Findings

Cortical thickness has been a primary metric of interest in structural imaging over the last decade or so, primarily owing to the fact that it can be mapped across the cortical sheet at relatively high resolution and in an automated or semi-automated manner (Kabani, Le Goualher, MacDonald, & Evans, 2001; Kim et al., 2005; Lerch & Evans, 2005; MacDonald, Kabani, Avis, & Evans, 2000). While some studies had initially suggested curvilinear relationships between cortical thickness and development, more recent studies using stringent quality control procedures have shown that cortical thickness, as derived from MRI, tends to thin progressively after early childhood (Ducharme et al., 2016; Raznahan et al., 2011). This early thinning has been proposed to reflect gradual synaptic pruning of inefficient connections or other developmentally driven changes in neuronal size, glial cell density, and vasculature (Bourgeois, Goldman-Rakic, & Rakic, 1994; Huttenlocher, 1990; Zatorre, Fields, & Johansen-Berg, 2012). Alternatively, apparent MRI-based cortical thinning may simply reflect gradual invasion of the lower cortical layers by white-matter fibers, which leads to the erroneous classification of lower cortical layers as white matter (Aleman-Gomez et al., 2013; Sowell et al., 2004). In such a scenario, cortical thickness could theoretically remain identical during development but be artefactually perceived as thinner on MRI. Likely, both artefactual and real thinning contribute to observed MRI-based thinning during development. Whether there is a causal link between early cortical and cognitive development is not entirely clear, as the co-occurrence of both phenomena could lead to spurious correlations. As for the elderly, MRI-based thinning probably mostly reflects genuine loss of tissue.

One of the most cited studies on the relationship between cortical thickness and intelligence is the seminal work by Shaw et al. (2006).
They reported that high-IQ individuals have a thinner cortex in young childhood, followed by a rapid increase in cortical thickness that results in a positive association
between cortical thickness and IQ by adolescence. This has been interpreted to indicate that high-IQ individuals have greater plasticity than lower-IQ individuals. While compelling, this finding still needs to be reproduced. Indeed, a study examining a large set of individuals of a similar age range to that of the Shaw et al. paper found that cortical thickness associations remained positive in both young childhood and adolescence (Karama et al., 2009). One possible explanation for this discrepancy is that different automated CIVET pipeline versions were used, with the pipeline version used by Karama et al. being the more recent of the two. Another plausible explanation is that Karama et al. applied a stringent quality control procedure (Ducharme et al., 2016) that had not yet been developed when Shaw et al. (2006) published their work. This quality control procedure tended to detect, and remove from analysis, subjects who had moved in the scanner. Indeed, in the presence of motion, the algorithm is likely to place the cortical gray–white matter boundary away from the white surface, in effect underestimating cortical thickness (Ducharme et al., 2016; Reuter et al., 2015). It is possible that the youngest children with higher IQs tended to move more in the scanner and that this movement led to artefactually thinner cortical thickness estimates. As these children aged and stopped moving as much, their cortex appeared to thicken. More work remains to be done to elucidate the source of these discrepancies.

Generally, associations between cortical thickness and intelligence are often reported to be both positive and distributed and seem to hold across the lifespan (Bajaj et al., 2018; Bedford et al., 2020; Bjuland, Løhaugen, Martinussen, & Skranes, 2013; Choi et al., 2008; Karama et al., 2009, 2011, 2014; Menary et al., 2013; Narr et al., 2007; Schmitt, Raznahan et al., 2019).
However, the proportion of variance in cognitive ability that is accounted for by local cortical thickness rarely exceeds 15%. In keeping with this, it is worth noting that another, relatively large study of elderly individuals (N = 672) found that total brain volume (TBV) accounted for more of the variance in intelligence than cortical thickness did (Ritchie et al., 2015). While results tend to be compatible with the P-FIT, they appear to extend beyond P-FIT areas, including medial brain regions such as the precuneus. This is in keeping with Basten et al.'s (2015) meta-analysis showing that the volume of some cortical regions not included in the P-FIT also seems associated with intelligence.

A few studies have examined, longitudinally, associations between change in cortical thickness and changes in cognitive ability. Using data from the NIH Study of Normal Brain Development (Evans & Brain Development Cooperative Group, 2006), in which children were scanned and cognitively tested 2 years apart, Burgaleta, Johnson, Waber, Colom, and Karama (2014) examined how changes in cortical thickness were associated with changes in IQ. Individuals who showed no significant change in IQ exhibited the standard pattern of cortical thinning over the 2-year period examined. In contrast, children whose IQ increased did not exhibit thinning of their cortex, whereas children whose IQ decreased
exhibited the steepest thinning of their cortex. Data to elucidate the cause of these IQ changes were not available. Nonetheless, this study suggests that the significant changes in IQ classically reported for about 5% of children in test–retest situations (Breslau et al., 2001; Moffitt, Caspi, Harkness, & Silva, 1993) are not merely artifacts of potential differences in testing conditions but reflect, at least in part, genuine changes in cognitive ability. Given that change scores over two timepoints were used, and that such scores are susceptible to regression-to-the-mean effects and spurious findings, the finding required replication. Using a latent variable approach rather than a simple IQ measure, Román et al. (2018) confirmed the Burgaleta, Johnson, et al. (2014) findings in a subsample of the NIH Study of Normal Brain Development that had brain and IQ measurements at three timepoints. Another study, also conducted on a subsample from the NIH Study of Normal Brain Development, pushed the analysis further and showed that cognitive changes at any given timepoint predicted cortical thickness changes at a later point in time, and vice versa (Estrada, Ferrer, Román, Karama, & Colom, 2019). Another group confirmed, on another large dataset, the observed associations between changes in cognitive ability and changes in cortical thickness (Schmitt, Raznahan et al., 2019). In an elegant analysis, they further showed that these dynamic associations were mainly genetically mediated (Schmitt, Neale et al., 2019).

Overall, while results from multiple groups tend to suggest positive distributed associations between cortical thickness and intelligence from childhood to old age, some studies found almost no associations and even some negative associations (Tadayon, Pascual-Leone, & Santarnecchi, 2019; Tamnes et al., 2011). In some cases, potential explanations for this discrepancy are difficult to identify.
In other cases, small sample sizes and low thresholds may account for the different findings (Colom et al., 2013; Escorial et al., 2015). In at least one study, only a complex regression model with multiple two-way interactions was used, making the main effect of IQ difficult to interpret (Goh et al., 2011).

Cortical Surface Area and Volume Findings

Although there has been less work on cortical surface area (CSA) than on thickness, CSA has also been a neuroimaging measure of interest for the study of cognitive ability. Interest in CSA is bolstered by the fact that it has increased greatly over the course of evolution and is believed to possibly account for intelligence differences between species (Roth & Dicke, 2005). Whether it is a main driver of intelligence differences within species is still not entirely clear. Whereas cortical thickness is thought to reflect the number of neurons in a given cortical column alongside glia and dendritic arborization, CSA is thought to be related to the number and spacing of mini-columnar units of cells in the cerebral cortex (Chance, Casanova, Switala, & Crow,
2008; Chklovskii, Mel, & Svoboda, 2004; la Fougere et al., 2011; Lyttelton et al., 2009; Rakic, 1988; Sur & Rubenstein, 2005; Thompson et al., 2007). Additionally, CSA and cortical thickness have been reported to be at least partially independent, both globally and regionally, and genetically uncorrelated in adults (Panizzon et al., 2009; Winkler et al., 2010). However, recent extensive work does not support this statement for children, in whom a substantial genetic correlation (rG = .63) was shown between measures of surface area and cortical thickness; this was interpreted by the authors as "suggestive of overlapping genetic influences between these phenotypes early in life" (Schmitt, Neale et al., 2019, p. 3028). It is important to keep in mind that the level of independence between cortical thickness and surface area will depend, of course, on which exact surface is used to measure CSA (white, mid, or pial surface). This may account for the apparently contradictory findings between these studies. For more discussion of this issue, see Chapter 3, by Martínez and Colom. For a review of some of the intricacies of CSA measurement, see Winkler et al. (2012). Like cortical thickness, CSA has its own developmental trajectory and has been compared with cortical thickness extensively (Hogstrom, Westlye, Walhovd, & Fjell, 2013; Lemaitre et al., 2012; Storsve et al., 2014).

Most, if not all, published studies on cortical surface area associations with measures of intelligence have reported positive associations (Colom et al., 2013; Fjell et al., 2015; Schmitt, Neale et al., 2019; Vuoksimaa et al., 2015). Vuoksimaa et al. (2015) reported, in a large sample of middle-aged men from the Vietnam Era Twin Study of Aging, a positive association between total cortical surface area and intelligence that was of greater magnitude than the correlation between cortical thickness and intelligence. Fjell et al.
(2015) reported positive associations between local cortical surface area and intelligence in a large sample of 8- to 89-year-old subjects (mean age 45.9 years). Associations were distributed across the brain and included, among others, clear frontal and cingulate regional associations (Fjell et al., 2015). Schmitt, Neale et al. (2019) also reported associations between cortical surface area and intelligence in a large sample of children, adolescents, and young adults (mean age 12.72 years). This association, which was genetically mediated, was somewhat distributed, including the precuneus, the anterior cingulate, as well as frontal and temporal regions. However, the strongest association was in the perisylvian area, a region known for its importance as a receptive language center. Finally, examining a much smaller sample than the above studies, Colom et al. (2013) reported very local associations between surface area and intelligence in frontal regions.

Few reports have been published on associations between SBM-based cortical volume and intelligence, but the few that have report positive associations with measures of intelligence (Bajaj et al., 2018; Vuoksimaa et al., 2015). Vuoksimaa et al. (2015) reported a positive association between total cortical volume and a measure of general cognitive ability in a sample of 515
middle-aged twins but no association with thickness. In contrast, in a very small sample of 56 healthy adults, Bajaj et al. (2018) reported associations between measures of cognitive ability and cortical thickness that were of greater magnitude and more widely distributed than those with cortical volume.

Indices of Cortical Complexity

Another gross anatomical measure of interest, also derived from surface-based morphometry, is the degree of cortical convolution. The evolution of cortical convolution has served to increase the surface area of the cortex and, possibly by extension, cognitive ability (Rilling & Insel, 1999; Zilles, Armstrong, Schleicher, & Kretschmann, 1988). The gyrification index (GI) is a measure of cortical convolution calculated by taking the ratio of total sulcal surface area to total exposed cortical surface; this measure is limited by the reliability of surface classification, especially at sulcal boundaries (Kim et al., 2005; MacDonald et al., 2000). Positive associations have been found between GI and cognitive ability; one study found these associations to persist across age and sex, with the strongest associations in frontoparietal regions (Gregory et al., 2016). Another study found that increased gyrification was positively related to TBV and cognitive function, but not to cortical thickness (Gautam, Anstey, Wen, Sachdev, & Cherbuin, 2015). Much like other brain measures, the degree of cortical convolution can only provide so much insight into the underlying biology of intelligence. However, examining gyrification is justified by research indicating that the regional degree and patterning of cortical convolution are likely associated with underlying neuronal circuitry and/or regional interconnectivity (Rakic, 1988; Richman, Stewart, Hutchinson, & Caviness, 1975). Increased gyrification of the parietofrontal region has been positively associated with working memory, even after controlling for cortical surface area (Green et al., 2018).
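As a minimal sketch, the GI computation reduces to a ratio of surface areas; exact formulations vary between pipelines, and the surface areas used below are hypothetical round numbers chosen to yield a GI in the typical human range.

```python
def gyrification_index(total_surface_area: float, exposed_surface_area: float) -> float:
    """GI as the ratio of the full cortical surface (buried sulcal surface
    included) to the outer exposed surface; values near 1 mean a smooth cortex."""
    if exposed_surface_area <= 0:
        raise ValueError("exposed surface area must be positive")
    return total_surface_area / exposed_surface_area

# Hypothetical areas in cm^2: a human cortex buries much of its surface
# in sulci, giving GI values typically in the 2-3 range
print(gyrification_index(2400.0, 960.0))  # -> 2.5
```

Because both areas depend on surface classification, errors at sulcal boundaries propagate directly into the ratio, which is the reliability limitation noted above.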

Other Noteworthy Findings Using Various Methods

Corpus Callosum

The corpus callosum (CC) is a densely packed bundle of white-matter fibers serving as a central commissure thought to contribute to the integration of cognition across the hemispheres (Schulte & Muller-Oehring, 2010). Luders et al. (2007) reported positive associations between CC thickness and cognitive ability in adult men and women. However, two studies conducted on children and adolescents reported mainly negative correlations between total CC midsagittal area and IQ (Ganjavi et al., 2011; Luders et al., 2011). Both studies reported that males mainly drove the negative association, with one of the studies formally showing a significant gender difference in the
association between corpus callosum size and IQ (Ganjavi et al., 2011). The latter studies used typical age-standardized deviation IQ scores to evaluate the association with the corpus callosum. However, a large, well-conducted multi-site study found no associations between measures of regional CC thickness and intelligence using raw cognitive scores after adjusting for the participants' age and intracranial volume (Westerhausen et al., 2018). That said, figures using typical deviation IQ scores were also provided, and no negative associations were noted in children or adolescents. Potential reasons for the discrepancy require further exploration and include important differences between the studies in regression analysis procedures as well as differences in samples. Whatever the case may be, this study casts doubt on reported associations between corpus callosum size and cognitive ability after controlling for age and TBV or intracranial volume.

Cerebellum The cerebellum’s role in higher-order cognitive and emotional processes, and not just motor functions, has become increasingly apparent (Riva & Giorgi, 2000; Schmahmann, 2004). Several positive associations between cerebellar volume and IQ have been found and, although they are relatively weak, they suggest a role for the cerebellum in cognitive ability (Flashman, Andreasen, Flaum, & Swayze, 1997; Frangou et al., 2004; Paradiso, Andreasen, O’Leary, Arndt, & Robinson, 1997).

Subcortical Nuclei Positive associations between IQ and subcortical gray-matter volume, “density,” and shape have also been found using various methods. Subcortical structures that have piqued interest include the thalamus and basal ganglia. Thalamus volume has been positively associated with IQ in a sample of 122 healthy children and adolescents (Xie, Chen, & De Bellis, 2012). Using unmodulated VBM, and hence looking at “density,” one study reported a negative association between IQ and caudate density but a positive association with thalamus density in a sample of 40 children and young adults (mean age ~15 years) (Frangou et al., 2004). Another study, manually counting the number of voxels within structures in native space, reported a positive association between caudate volume and IQ in a sample of 64 female and 21 male children and adolescents (mean age ~10.6 years) (Reiss, Abrams, Singer, Ross, & Denckla, 1996). A more recent study, conducted on a subsample of 303 children and adolescents from the NIH Study of Normal Brain Development (mean age 11.4 years) and using automated image intensity features with subsequent volume estimations, reported positive associations between IQ and the volume of the striatum, a subcortical nucleus comprising both the caudate and putamen (MacDonald, Ganjavi, Collins, Evans, & Karama, 2014). In keeping with the involvement of the basal ganglia in intelligence, Burgaleta, MacDonald, et al. (2014) administered nine cognitive tests to 104 healthy adults (mean age 19.8 years). They then estimated fluid, crystallized, and spatial intelligence via confirmatory factor analysis and regressed these latent scores, vertex-wise, against subcortical shape while controlling for age, sex, and total brain volume. Fluid and spatial ability (but not crystallized ability) were positively associated with the shape of the basal ganglia, the strongest effect being a positive association between rostral putamen enlargement and cognitive performance (Burgaleta, MacDonald, et al., 2014). As only a few studies have examined associations between subcortical nuclei and IQ in large samples, further work is required before definitive statements can be made.

Structural Covariance-Based Network Neuroscience The covariance of various brain structure volumes is amenable to graph-theoretic analysis, thus entering the domain of network neuroscience. Briefly, this involves computing interregional or vertex-wise correlations of a structural measure across individuals. Various graph-theoretic measures can then be derived from the assembled networks, yielding several proxy measures of network organization and efficiency. With surface-based morphometric data, clustering approaches have been applied to the structural covariance of cortical thickness, including longitudinal scan data, to identify functional modules (Chen, He, Rosa-Neto, Germann, & Evans, 2008; He, Chen, & Evans, 2007; Khundrakpam et al., 2013; Lerch et al., 2006; Lo, He, & Lin, 2011). Other graph-theoretic analyses of human brain structural metrics have been applied to cortical surface area (Bassett et al., 2008; Li et al., 2017). However, to date, few studies have examined relationships between structural covariance and intelligence. Nonetheless, related work on intelligence has been conducted using functional covariance-based and white-matter connectivity-based networks. For instance, Dubois, Galdi, Paul, and Adolphs (2018) used a cross-validation predictive framework and were able to predict about 20% of the variance in intelligence from resting-state connectivity matrices. For more on network neuroscience in general, see Chapter 6, by Barbey, as well as Chapter 2, by Hilger and Sporns. For more on functional imaging of intelligence differences, see Chapter 12, by Basten and Fiebach. For more on white-matter networks, see Chapter 10, by Genç and Fraenz.
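As a toy sketch of this pipeline (simulated cortical thickness values; the two-module structure, the |r| > 0.5 threshold, and the region count are arbitrary illustrative choices, not from any cited study), one can build a structural covariance matrix across subjects, binarize it, and read off a simple graph measure such as node degree:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_regions = 200, 6

# Simulated cortical thickness: two latent factors create two
# "modules" of regions that covary across subjects.
factors = rng.normal(size=(n_subjects, 2))
loadings = np.array([[1.0, 0.0], [0.9, 0.1], [0.8, 0.0],
                     [0.0, 1.0], [0.1, 0.9], [0.0, 0.8]])
thickness = factors @ loadings.T + 0.5 * rng.normal(size=(n_subjects, n_regions))

# Structural covariance network: interregional correlation of
# thickness across subjects, thresholded into a binary graph.
cov_net = np.corrcoef(thickness.T)        # n_regions x n_regions
np.fill_diagonal(cov_net, 0.0)
adjacency = (np.abs(cov_net) > 0.5).astype(int)

# A simple graph-theoretic summary: node degree per region.
degree = adjacency.sum(axis=0)
print("degree per region:", degree)
```

Real analyses compute correlations over thousands of vertices, apply sparsity thresholds rather than a fixed cutoff, and derive richer metrics such as clustering coefficient, characteristic path length, and global efficiency.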

Multi-metric Approaches This chapter focused, one metric at a time, on detailing structural associations with intelligence. It appears most likely that no single structural metric will account for intelligence differences on its own, and approaches combining multiple metrics and modalities are much more promising. For more on this, we suggest Paul et al. (2016), Ritchie et al. (2015), and Watson et al. (2016).

Conclusion Various avenues exist for structural MRI-based studies of intelligence. It is essential to restate the importance of methodologically appropriate preprocessing, analyses, and study design, including a sufficiently large and appropriate sample (Button et al., 2013). The devil is in the details here. Structural MRI methods are advancing constantly, and with the rising availability of larger datasets acquired at higher resolutions, the methods can once again rise to the occasion. Moreover, methodological refinement yields more readily interpretable results, further strengthening the power of neuroimaging to probe the underlying biology and pathology. Taken as a whole, the findings converge strongly on the view that there are significant associations between brain structure and intelligence. However, given the many contradictions in the field, few definitive statements can be made. In keeping with this, Richard Haier's three laws are apropos: (1) No story about the brain is simple; (2) No one study is definitive; and (3) It takes many years to sort out conflicting and inconsistent findings and establish a compelling weight of evidence (Haier, 2016).

References Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and interpreting interactions. Thousand Oaks, CA: Sage Publications, Inc. Aleman-Gomez, Y., Janssen, J., Schnack, H., Balaban, E., Pina-Camacho, L., AlfaroAlmagro, F., . . . Desco, M. (2013). The human cerebral cortex flattens during adolescence. Journal of Neuroscience, 33(38), 15004–15010. Andreasen, N. C., Flaum, M., Swayze, V., 2nd, O’Leary, D. S., Alliger, R., Cohen, G., . . . Yuh, W. T. (1993). Intelligence and brain structure in normal individuals. American Journal of Psychiatry, 150(1), 130–134. Ashburner, J., & Friston, K. J. (2000). Voxel-based morphometry – The methods. Neuroimage, 11(6 Pt 1), 805–821. Aydin, K., Ucar, A., Oguz, K. K., Okur, O. O., Agayev, A., Unal, Z., Yilmaz, S., and Ozturk, C. (2007). Increased gray matter density in the parietal cortex of mathematicians: A voxel-based morphometry study. AJNR American Journal of Neuroradiology, 28(10), 1859–1864. Bajaj, S., Raikes, A., Smith, R., Dailey, N. S., Alkozei, A., Vanuk, J. R., & Killgore, W. D. S. (2018). The relationship between general intelligence and cortical structure in healthy individuals. Neuroscience, 388, 36–44. Bassett, D. S., Bullmore, E., Verchinski, B. A., Mattay, V. S., Weinberger, D. R., & Meyer-Lindenberg, A. (2008). Hierarchical organization of human cortical
networks in health and schizophrenia. Journal of Neuroscience, 28(37), 9239–9248. Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27. Bedford, S. A., Park, M. T. M., Devenyi, G. A., Tullo, S., Germann, J., Patel, R., . . . Consortium, Mrc Aims (2020). Large-scale analyses of the relationship between sex, age and intelligence quotient heterogeneity and cortical morphometry in autism spectrum disorder. Molecular Psychiatry, 25(3), 614–628. Bjuland, K. J., Løhaugen, G. C., Martinussen, M., & Skranes, J. (2013). Cortical thickness and cognition in very-low-birth-weight late teenagers. Early Human Development, 89(6), 371–380. Bourgeois, J. P., Goldman-Rakic, P. S., & Rakic, P. (1994). Synaptogenesis in the prefrontal cortex of rhesus monkeys. Cerebral Cortex, 4(1), 78–96. Breslau, N., Chilcoat, H. D., Susser, E. S., Matte, T., Liang, K.-Y., & Peterson, E. L. (2001). Stability and change in children’s intelligence quotient scores: A comparison of two socioeconomically disparate communities. American Journal of Epidemiology, 154(8), 711–717. Brouwer, R. M., Hedman, A. M., van Haren, N. E. M., Schnack, H. G., Brans, R. G. H., Smit, D. J. A., . . . Hulshoff Pol, H. E. (2014). Heritability of brain volume change and its relation to intelligence. Neuroimage, 100, 676–683. Budde, J., Shajan, G., Scheffler, K., & Pohmann, R. (2014). Ultra-high resolution imaging of the human brain using acquisition-weighted imaging at 9.4T. Neuroimage, 86, 592–598. Burgaleta, M., Johnson, W., Waber, D. P., Colom, R., & Karama, S. (2014). Cognitive ability changes and dynamics of cortical thickness development in healthy children and adolescents. Neuroimage, 84, 810–819. Burgaleta, M., MacDonald, P. A., Martínez, K., Román, F. J., Álvarez-Linera, J., Ramos González, A., . . . Colom, R. (2014). 
Subcortical regional morphology correlates with fluid and spatial intelligence. Human Brain Mapping, 35(5), 1957–1968. Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376. Chance, S. A., Casanova, M. F., Switala, A. E., & Crow, T. J. (2008). Auditory cortex asymmetry, altered minicolumn spacing and absence of ageing effects in schizophrenia. Brain, 131(Pt 12), 3178–3192. Chen, Z. J., He, Y., Rosa-Neto, P., Germann, J., & Evans, A. C. (2008). Revealing modular architecture of human brain structural networks by using cortical thickness from MRI. Cerebral Cortex, 18(10), 2374–2381. Chklovskii, D. B., Mel, B. W., & Svoboda, K. (2004). Cortical rewiring and information storage. Nature, 431(7010), 782–788. Choi, Y. Y., Shamosh, N. A., Cho, S. H., DeYoung, C. G., Lee, M. J., Lee, J. M., . . . Lee, K. H. (2008). Multiple bases of human intelligence revealed by cortical thickness and neural activation. Journal of Neuroscience, 28(41), 10323–10329.
Cocosco, C. A., Zijdenbos, A. P., & Evans, A. C. (2003). A fully automatic and robust brain MRI tissue classification method. Medical Image Analysis, 7(4), 513–527. Collins, D. L., Neelin, P., Peters, T. M., & Evans, A. C. (1994). Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space. Journal of Computer Assisted Tomography, 18(2), 192–205. Colom, R., Burgaleta, M., Román, F. J., Karama, S., Alvarez-Linera, J., Abad, F. J., . . . Haier, R. J. (2013). Neuroanatomic overlap between intelligence and cognitive factors: Morphometry methods provide support for the key role of the frontal lobes. Neuroimage, 72, 143–152. Colom, R., Haier, R. J., Head, K., Álvarez-Linera, J., Quiroga, M. Á., Shih, P. C., & Jung, R. E. (2009). Gray matter correlates of fluid, crystallized, and spatial intelligence: Testing the P-FIT model. Intelligence, 37(2), 124–135. Colom, R., Jung, R. E., & Haier, R. J. (2006). Distributed brain sites for the g-factor of intelligence. Neuroimage, 31(3), 1359–1365. DeFelipe, J. (2011). The evolution of the brain, the human nature of cortical circuits, and intellectual creativity. Frontiers in Neuroanatomy, 5, 29. Dubois, J., Galdi, P., Paul, L. K., & Adolphs, R. (2018). A distributed brain network predicts general intelligence from resting-state human neuroimaging data. Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1756), 20170284. Ducharme, S., Albaugh, M. D., Nguyen, T. V., Hudziak, J. J., Mateos-Perez, J. M., Labbe, A., . . . Brain Development Cooperative Group (2016). Trajectories of cortical thickness maturation in normal brain development – The importance of quality control procedures. Neuroimage, 125, 267–279. Eickhoff, S. B., Constable, R. T., & Yeo, B. T. T. (2018). Topographic organization of the cerebral cortex and brain cartography. Neuroimage, 170, 332–347. Escorial, S., Román, F. J., Martínez, K., Burgaleta, M., Karama, S., & Colom, R. (2015). 
Sex differences in neocortical structure and cognitive performance: A surface-based morphometry study. Neuroimage, 104, 355–365. Estrada, E., Ferrer, E., Román, F. J., Karama, S., & Colom, R. (2019). Time-lagged associations between cognitive and cortical development from childhood to early adulthood. Developmental Psychology, 55(6), 1338–1352. Evans, A. C., & Brain Development Cooperative Group (2006). The NIH MRI study of normal brain development. Neuroimage, 30(1), 184–202. Evans, A. C., Janke, A. L., Collins, D. L., & Baillet, S. (2012). Brain templates and atlases. Neuroimage, 62(2), 911–922. Fjell, A. M., Westlye, L. T., Amlien, I., Tamnes, C. K., Grydeland, H., Engvig, A., . . . Walhovd, K. B. (2015). High-expanding cortical regions in human development and evolution are related to higher intellectual abilities. Cerebral Cortex, 25(1), 26–34. Flashman, L. A., Andreasen, N. C., Flaum, M., & Swayze, V. W. (1997). Intelligence and regional brain volumes in normal controls. Intelligence, 25(3), 149–160. Frangou, S., Chitins, X., & Williams, S. C. (2004). Mapping IQ and gray matter density in healthy young people. Neuroimage, 23(3), 800–805. Ganjavi, H., Lewis, J. D., Bellec, P., MacDonald, P. A., Waber, D. P., Evans, A. C., . . . Brain Development Cooperative Group (2011). Negative associations between
corpus callosum midsagittal area and IQ in a representative sample of healthy children and adolescents. PLoS One, 6(5), e19698. Gautam, P., Anstey, K. J., Wen, W., Sachdev, P. S., & Cherbuin, N. (2015). Cortical gyrification and its relationships with cortical volume, cortical thickness, and cognitive performance in healthy mid-life adults. Behavioural Brain Research, 287, 331–339. Goh, S., Bansal, R., Xu, D., Hao, X., Liu, J., & Peterson, B. S. (2011). Neuroanatomical correlates of intellectual ability across the life span. Developmental Cognitive Neuroscience, 1(3), 305–312. Good, C. D., Johnsrude, I. S., Ashburner, J., Henson, R. N., Friston, K. J., & Frackowiak, R. S. (2001). A voxel-based morphometric study of ageing in 465 normal adult human brains. Neuroimage, 14(1 Pt 1), 21–36. Green, S., Blackmon, K., Thesen, T., DuBois, J., Wang, X., Halgren, E., & Devinsky, O. (2018). Parieto-frontal gyrification and working memory in healthy adults. Brain Imaging Behavior, 12(2), 303–308. Gregory, M. D., Kippenhan, J. S., Dickinson, D., Carrasco, J., Mattay, V. S., Weinberger, D. R., & Berman, K. F. (2016). Regional variations in brain gyrification are associated with general cognitive ability in humans. Current Biology, 26(10), 1301–1305. Gur, R. C., Turetsky, B. I., Matsui, M., Yan, M., Bilker, W., Hughett, P., & Gur, R. E. (1999). Sex differences in brain gray and white matter in healthy young adults: Correlations with cognitive performance. Journal of Neuroscience, 19(10), 4065–4072. Haier, R. J. (2016). The neuroscience of intelligence. Cambridge University Press. Haier, R. J., Jung, R. E., Yeo, R. A., Head, K., & Alkire, M. T. (2004). Structural brain variation and general intelligence. Neuroimage, 23(1), 425–433. Haier, R. J., Jung, R. E., Yeo, R. A., Head, K., & Alkire, M. T. (2005). The neuroanatomy of general intelligence: Sex matters. Neuroimage, 25(1), 320–327. Haier, R. J., Karama, S., Colom, R., Jung, R., & Johnson, W. (2014). Yes, but flaws remain. 
Intelligence, 46, 341–344. He, Y., Chen, Z. J., & Evans, A. C. (2007). Small-world anatomical networks in the human brain revealed by cortical thickness from MRI. Cerebral Cortex, 17(10), 2407–2419. Hogstrom, L. J., Westlye, L. T., Walhovd, K. B., & Fjell, A. M. (2013). The structure of the cerebral cortex across adult life: Age-related patterns of surface area, thickness, and gyrification. Cerebral Cortex, 23(11), 2521–2530. Huttenlocher, P. R. (1990). Morphometric study of human cerebral cortex development. Neuropsychologia, 28(6), 517–527. Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154. Kabani, N., Le Goualher, G., MacDonald, D., & Evans, A. C. (2001). Measurement of cortical thickness using an automated 3-D algorithm: A validation study. Neuroimage, 13(2), 375–380. Karama, S., Ad-Dab’bagh, Y., Haier, R. J., Deary, I. J., Lyttelton, O. C., Lepage, C., . . . Brain Development Cooperative Group (2009). Positive
association between cognitive ability and cortical thickness in a representative US sample of healthy 6 to 18 year-olds. Intelligence, 37(2), 145–155. Karama, S., Bastin, M. E., Murray, C., Royle, N. A., Penke, L., Muñoz Maniega, S., . . . Deary, I. J. (2014). Childhood cognitive ability accounts for associations between cognitive ability and brain cortical thickness in old age. Molecular Psychiatry, 19(5), 555–559. Karama, S., Colom, R., Johnson, W., Deary, I. J., Haier, R., Waber, D. P., . . . Brain Development Cooperative Group (2011). Cortical thickness correlates of specific cognitive performance accounted for by the general factor of intelligence in healthy children aged 6 to 18. Neuroimage, 55(4), 1443–1453. Kennedy, D. N., Lange, N., Makris, N., Bates, J., Meyer, J., & Caviness, V. S., Jr. (1998). Gyri of the human neocortex: An MRI-based analysis of volume and variance. Cerebral Cortex, 8(4), 372–384. Khundrakpam, B. S., Reid, A., Brauer, J., Carbonell, F., Lewis, J., Ameis, S., . . . Brain Development Cooperative Group (2013). Developmental changes in organization of structural brain networks. Cerebral Cortex, 23(9), 2072–2085. Kim, J. S., Singh, V., Lee, J. K., Lerch, J., Ad-Dab’bagh, Y., MacDonald, D., . . . Evans, A. C. (2005). Automated 3-D extraction and evaluation of the inner and outer cortical surfaces using a Laplacian map and partial volume effect classification. Neuroimage, 27(1), 210–221. la Fougere, C., Grant, S., Kostikov, A., Schirrmacher, R., Gravel, P., Schipper, H. M., . . . Thiel, A. (2011). Where in-vivo imaging meets cytoarchitectonics: The relationship between cortical thickness and neuronal density measured with high-resolution [18F]flumazenil-PET. Neuroimage, 56(3), 951–960. Lemaitre, H., Goldman, A. L., Sambataro, F., Verchinski, B. A., Meyer-Lindenberg, A., Weinberger, D. R., & Mattay, V. S. (2012). Normal age-related brain morphometric changes: Nonuniformity across cortical thickness, surface area and gray matter volume? 
Neurobiology of Aging, 33(3), 617.e1–617.e9. Lenroot, R. K., Gogtay, N., Greenstein, D. K., Wells, E. M., Wallace, G. L., Clasen, L. S., . . . Giedd, J. N. (2007). Sexual dimorphism of brain developmental trajectories during childhood and adolescence. Neuroimage, 36(4), 1065–1073. Lerch, J. P., & Evans, A. C. (2005). Cortical thickness analysis examined through power analysis and a population simulation. Neuroimage, 24(1), 163–173. Lerch, J. P., Worsley, K., Shaw, W. P., Greenstein, D. K., Lenroot, R. K., Giedd, J., & Evans, A. C. (2006). Mapping anatomical correlations across cerebral cortex (MACACC) using cortical thickness from MRI. Neuroimage, 31(3), 993–1003. Li, W., Yang, C., Shi, F., Wu, S., Wang, Q., Nie, Y., & Zhang, X. (2017). Construction of individual morphological brain networks with multiple morphometric features. Frontiers in Neuroanatomy, 11, 34. Lo, C. Y., He, Y., & Lin, C. P. (2011). Graph theoretical analysis of human brain structural networks. Reviews in the Neurosciences, 22(5), 551–563. Luders, E., Narr, K. L., Bilder, R. M., Thompson, P. M., Szeszko, P. R., Hamilton, L., & Toga, A. W. (2007). Positive correlations between corpus callosum thickness and intelligence. Neuroimage, 37(4), 1457–1464.
Luders, E., Narr, K. L., Thompson, P. M., & Toga, A. W. (2009). Neuroanatomical correlates of intelligence. Intelligence, 37(2), 156–163. Luders, E., Thompson, P. M., Narr, K. L., Zamanyan, A., Chou, Y. Y., Gutman, B., . . . Toga, A. W. (2011). The link between callosal thickness and intelligence in healthy children and adolescents. Neuroimage, 54(3), 1823–1830. Lyttelton, O. C., Karama, S., Ad-Dab’bagh, Y., Zatorre, R. J., Carbonell, F., Worsley, K., & Evans, A. C. (2009). Positional and surface area asymmetry of the human cerebral cortex. Neuroimage, 46(4), 895–903. MacDonald, D., Kabani, N., Avis, D., & Evans, A. C. (2000). Automated 3-D extraction of inner and outer surfaces of cerebral cortex from MRI. Neuroimage, 12(3), 340–356. MacDonald, P. A., Ganjavi, H., Collins, D. L., Evans, A. C., & Karama, S. (2014). Investigating the relation between striatal volume and IQ. Brain Imaging and Behavior, 8(1), 52–59. McDaniel, M. A. (2005). Big-brained people are smarter: A meta-analysis of the relationship between in vivo brain volume and intelligence. Intelligence, 33(4), 337–346. Menary, K., Collins, P. F., Porter, J. N., Muetzel, R., Olson, E. A., Kumar, V., . . . Luciana, M. (2013). Associations between cortical thickness and general intelligence in children, adolescents and young adults. Intelligence, 41(5), 597–606. Modroño, C., Navarrete, G., Nicolle, A., González-Mora, J. L., Smith, K. W., Marling, M., & Goel, V. (2019). Developmental grey matter changes in superior parietal cortex accompany improved transitive reasoning. Thinking & Reasoning, 25(2), 151–170. Moffitt, T. E., Caspi, A., Harkness, A. R., & Silva, P. A. (1993). The natural history of change in intellectual performance: Who changes? How much? Is it meaningful? Journal of Child Psychology and Psychiatry, 34(4), 455–506. Narr, K. L., Woods, R. P., Thompson, P. M., Szeszko, P., Robinson, D., Dimtcheva, T., . . . Bilder, R. M. (2007). 
Relationships between IQ and regional cortical gray matter thickness in healthy adults. Cerebral Cortex, 17(9), 2163–2171. Panizzon, M. S., Fennema-Notestine, C., Eyler, L. T., Jernigan, T. L., Prom-Wormley, E., Neale, M., . . . Kremen, W. S. (2009). Distinct genetic influences on cortical surface area and cortical thickness. Cerebral Cortex, 19(11), 2728–2735. Paradiso, S., Andreasen, N. C., O’Leary, D. S., Arndt, S., & Robinson, R. G. (1997). Cerebellar size and cognition: Correlations with IQ, verbal memory and motor dexterity. Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 10(1), 1–8. Paul, E. J., Larsen, R. J., Nikolaidis, A., Ward, N., Hillman, C. H., Cohen, N. J., . . . Barbey, A. K. (2016). Dissociable brain biomarkers of fluid intelligence. Neuroimage, 137, 201–211. Paus, T., Zijdenbos, A., Worsley, K., Collins, D. L., Blumenthal, J., Giedd, J. N., . . . Evans, A. C. (1999). Structural maturation of neural pathways in children and adolescents: In vivo study. Science, 283(5409), 1908–1911. Pietschnig, J., Penke, L., Wicherts, J. M., Zeiler, M., & Voracek, M. (2015). Meta-analysis of associations between human brain volume and intelligence
differences: How strong are they and what do they mean? Neuroscience & Biobehavioral Reviews, 57, 411–432. Rakic, P. (1988). Specification of cerebral cortical areas. Science, 241(4862), 170–176. Raznahan, A., Shaw, P., Lalonde, F., Stockman, M., Wallace, G. L., Greenstein, D., . . . Giedd, J. N. (2011). How does your cortex grow? The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 31(19), 7174–7177. Regis, J., Mangin, J. F., Ochiai, T., Frouin, V., Riviere, D., Cachia, A., . . . Samson, Y. (2005). “Sulcal root” generic model: A hypothesis to overcome the variability of the human cortex folding patterns. Neurologia Medico-Chirurgica (Tokyo), 45(1), 1–17. Reiss, A. L., Abrams, M. T., Singer, H. S., Ross, J. L., & Denckla, M. B. (1996). Brain development, gender and IQ in children. A volumetric imaging study. Brain, 119(Pt 5), 1763–1774. Reuter, M., Tisdall, M. D., Qureshi, A., Buckner, R. L., van der Kouwe, A. J. W., & Fischl, B. (2015). Head motion during MRI acquisition reduces gray matter volume and thickness estimates. Neuroimage, 107, 107–115. Riahi, F., Zijdenbos, A., Narayanan, S., Arnold, D., Francis, G., Antel, J., & Evans, A. C. (1998). Improved correlation between scores on the expanded disability status scale and cerebral lesion load in relapsing-remitting multiple sclerosis. Results of the application of new imaging methods. Brain, 121(Pt 7), 1305–1312. Richman, D. P., Stewart, R. M., Hutchinson, J. W., & Caviness, V. S., Jr. (1975). Mechanical model of brain convolutional development. Science, 189(4196), 18–21. Rilling, J. K., & Insel, T. R. (1999). The primate neocortex in comparative perspective using magnetic resonance imaging. Journal of Human Evolution, 37(2), 191–223. Ritchie, S. J., Booth, T., Valdes Hernandez, M. D., Corley, J., Maniega, S. M., Gow, A. J., . . . Deary, I. J. (2015). Beyond a bigger brain: Multivariable structural brain imaging and intelligence. Intelligence, 51, 47–56. Riva, D., & Giorgi, C. 
(2000). The cerebellum contributes to higher functions during development: Evidence from a series of children surgically treated for posterior fossa tumours. Brain, 123(5), 1051–1061. Román, F. J., Morillo, D., Estrada, E., Escorial, S., Karama, S., & Colom, R. (2018). Brain-intelligence relationships across childhood and adolescence: A latent-variable approach. Intelligence, 68, 21–29. Roth, G., & Dicke, U. (2005). Evolution of the brain and intelligence. Trends in Cognitive Sciences, 9(5), 250–257. Rushton, J. P., & Ankney, C. D. (2009). Whole brain size and general mental ability: A review. International Journal of Neuroscience, 119(5), 691–731. Sanabria-Diaz, G., Melie-Garcia, L., Iturria-Medina, Y., Aleman-Gomez, Y., Hernandez-Gonzalez, G., Valdes-Urrutia, L., . . . Valdes-Sosa, P. (2010). Surface area and cortical thickness descriptors reveal different attributes of the structural human brain networks. Neuroimage, 50(4), 1497–1510. Schmahmann, J. D. (2004). Disorders of the cerebellum: Ataxia, dysmetria of thought, and the cerebellar cognitive affective syndrome. The Journal of Neuropsychiatry and Clinical Neurosciences, 16(3), 367–378.
Schmitt, J. E., Neale, M. C., Clasen, L. S., Liu, S., Seidlitz, J., Pritikin, J. N., . . . Raznahan, A. (2019). A comprehensive quantitative genetic analysis of cerebral surface area in youth. Journal of Neuroscience, 39(16), 3028–3040. Schmitt, J. E., Raznahan, A., Clasen, L. S., Wallace, G. L., Pritikin, J. N., Lee, N. R., . . . Neale, M. C. (2019). The dynamic associations between cortical thickness and general intelligence are genetically mediated. Cerebral Cortex, 29(11), 4743–4752. Schoenemann, P. T., Budinger, T. F., Sarich, V. M., & Wang, W. S. Y. (2000). Brain size does not predict general cognitive ability within families. Proceedings of the National Academy of Sciences, 97(9), 4932–4937. Schulte, T., & Muller-Oehring, E. M. (2010). Contribution of callosal connections to the interhemispheric integration of visuomotor and cognitive processes. Neuropsychology Review, 20(2), 174–190. Shaw, P., Greenstein, D., Lerch, J., Clasen, L., Lenroot, R., Gogtay, N., . . . Giedd, J. (2006). Intellectual ability and cortical development in children and adolescents. Nature, 440(7084), 676–679. Sowell, E. R., Thompson, P. M., Leonard, C. M., Welcome, S. E., Kan, E., & Toga, A. W. (2004). Longitudinal mapping of cortical thickness and brain growth in normal children. The Journal of Neuroscience, 24(38), 8223. Stonnington, C. M., Tan, G., Klöppel, S., Chu, C., Draganski, B., Jack, C. R., Jr., . . . Frackowiak, R. S. (2008). Interpreting scan data acquired from multiple scanners: a study with Alzheimer’s disease. Neuroimage, 39(3), 1180–1185. Storsve, A. B., Fjell, A. M., Tamnes, C. K., Westlye, L. T., Overbye, K., Aasland, H. W., & Walhovd, K. B. (2014). Differential longitudinal changes in cortical thickness, surface area and volume across the adult life span: Regions of accelerating and decelerating change. Journal of Neuroscience, 34(25), 8488–8498. Stucht, D., Danishad, K. A., Schulze, P., Godenschweger, F., Zaitsev, M., & Speck, O. (2015). 
Highest resolution in vivo human brain MRI using prospective motion correction. PLoS One, 10(7), e0133921. Sur, M., & Rubenstein, J. L. (2005). Patterning and plasticity of the cerebral cortex. Science, 310(5749), 805–810. Tadayon, E., Pascual-Leone, A., & Santarnecchi, E. (2019). Differential contribution of cortical thickness, surface area, and gyrification to fluid and crystallized intelligence. Cerebral Cortex, 30(1). Tamnes, C. K., Fjell, A. M., Østby, Y., Westlye, L. T., Due-Tønnessen, P., Bjørnerud, A., & Walhovd, K. B. (2011). The brain dynamics of intellectual development: Waxing and waning white and gray matter. Neuropsychologia, 49(13), 3605–3611. Thompson, P. M., Hayashi, K. M., Dutton, R. A., Chiang, M.-C., Leow, A. D., Sowell, E. R., . . . Toga, A. W. (2007). Tracking Alzheimer’s disease. Annals of the New York Academy of Science, 1097, 183–214. Thompson, P. (2020). ENIGMA and global neuroscience: A decade of large-scale studies of the brain in health and disease across more than 40 countries. Biological Psychiatry, 87(9, Suppl), S56. Turin, G. (1960). An introduction to matched filters. IRE Transactions on Information Theory, 6(3), 311–329.
Van Essen, D. C. (2005). A population-average, landmark- and surface-based (PALS) atlas of human cerebral cortex. Neuroimage, 28(3), 635–662. Vuoksimaa, E., Panizzon, M. S., Chen, C.-H., Fiecas, M., Eyler, L. T., FennemaNotestine, C., . . . Kremen, W. S. (2015). The genetic association between neocortical volume and general cognitive ability is driven by global surface area rather than thickness. Cerebral Cortex, 25(8), 2127–2137. Watson, P. D., Paul, E. J., Cooke, G. E., Ward, N., Monti, J. M., Horecka, K. M., . . . Barbey, A. K. (2016). Underlying sources of cognitive-anatomical variation in multi-modal neuroimaging and cognitive testing. Neuroimage, 129, 439–449. Westerhausen, R., Friesen, C. M., Rohani, D. A., Krogsrud, S. K., Tamnes, C. K., Skranes, J. S., . . . Walhovd, K. B. (2018). The corpus callosum as anatomical marker of intelligence? A critical examination in a large-scale developmental study. Brain Structure and Function, 223(1), 285–296. Westlye, L. T., Walhovd, K. B., Dale, A. M., Bjørnerud, A., Due-Tønnessen, P., Engvig, A., . . . Fjell, A. M. (2009). Life-span changes of the human brain white matter: Diffusion tensor imaging (DTI) and volumetry. Cerebral Cortex, 20(9), 2055–2068. Wickett, J. C., Vernon, P. A., & Lee, D. H. (2000). Relationships between factors of intelligence and brain volume. Personality and Individual Differences, 29(6), 1095–1122. Winkler, A. M., Kochunov, P., Blangero, J., Almasy, L., Zilles, K., Fox, P. T., . . . Glahn, D. C. (2010). Cortical thickness or grey matter volume? The importance of selecting the phenotype for imaging genetics studies. Neuroimage, 53(3), 1135–1146. Winkler, A. M., Sabuncu, M. R., Yeo, B. T., Fischl, B., Greve, D. N., Kochunov, P., . . . Glahn, D. C. (2012). Measuring and comparing brain cortical surface area and other areal quantities. Neuroimage, 61(4), 1428–1443. Worsley, K. J., Marrett, S., Neelin, P., Vandal, A. C., Friston, K. J., & Evans, A. C. (1996). 
A unified statistical approach for determining significant signals in images of cerebral activation. Human Brain Mapping, 4(1), 58–73. Xie, Y., Chen, Y. A., & De Bellis, M. D. (2012). The relationship of age, gender, and IQ with the brainstem and thalamus in healthy children and adolescents: A magnetic resonance imaging volumetric study. Journal of Child Neurology, 27(3), 325–331. Zatorre, R. J., Fields, R. D., & Johansen-Berg, H. (2012). Plasticity in gray and white: Neuroimaging changes in brain structure during learning. Nature Neuroscience, 15(4), 528–536. Zijdenbos, A. P., Lerch, J. P., Bedell, B. J., & Evans, A. C. (2005). Brain imaging in drug R&D. Biomarkers 10(Suppl 1), S58–S68. Zilles, K., Armstrong, E., Schleicher, A., & Kretschmann, H. J. (1988). The human pattern of gyrification in the cerebral cortex. Anatomy and Embryology (Berlin), 179(2), 173–179.

12 Functional Brain Imaging of Intelligence Ulrike Basten and Christian J. Fiebach Functional brain imaging studies of intelligence have tackled the following questions: What happens in our brains when we solve tasks from an intelligence test? And are there differences between people? Do people with higher scores on an intelligence test show different patterns of brain activation while working on cognitive tasks than people with lower scores? Answering these questions can contribute to improving our understanding of the biological bases of intelligence. To investigate these questions, researchers have used different methods for quantifying patterns of brain activation changes and their association with cognitive processing – including electroencephalography (EEG), positron emission tomography (PET), functional magnetic resonance imaging (fMRI), and functional near-infrared spectroscopy (fNIRS). The results of this research allow us to delineate those parts of the brain that are important for intelligence – either in the sense that they are activated when people solve tasks commonly used to test intelligence or in the sense that functional differences in these regions are associated with individual differences in intelligence. From the fact that some of our abilities – like our abilities to see, hear, feel, and move – can quite specifically be traced back to the contributions of distinct brain regions (namely the visual, auditory, somatosensory, and motor cortex) – one might derive the expectation that there must be another part of the brain responsible for higher cognitive functioning and intelligence. But, as the following review will show, there is no single “seat” of intelligence in our brain. Instead, intelligence is associated with a distributed set of brain regions. 
To study the neural basis of human intelligence, functional neuroimaging studies have used two different approaches, which we have previously described as the task approach and the individual differences approach (see Basten, Hilger, & Fiebach, 2015). The task approach seeks to identify brain regions activated when people work on intelligence-related tasks like those used in psychometric tests of intelligence. These tasks may actually be taken from established intelligence tests or be designed to closely resemble such tasks. Typically, studies following the task approach report the mean task-induced brain activation for a whole group of study participants, ignoring individual differences in brain activation and intelligence. The individual differences approach, on the other hand, explores which regions of the brain show differences in activation between persons with different degrees of general cognitive ability as assessed
with an intelligence test. It identifies brain regions in which activation strength covaries with intelligence. In this chapter, we present central findings from both approaches with a focus on recent evidence and the current state of knowledge. We also discuss factors that may moderate the association between intelligence and brain activation as studied in the individual differences approach. Finally, we compare the results of these two approaches, critically reflect on the insights that have so far been gained with functional neuroimaging, and outline important topics for future research. Most neuroimaging studies focused their investigation on a general factor of intelligence g (sensu Spearman, 1904) or fluid intelligence gf (sensu Cattell, 1963) and used, as their measure of intelligence, established tests of reasoning (e.g., matrix reasoning tests like Raven’s Progressive Matrices, RPM) or sum scores from tests with many different cognitive tasks (i.e., full-scale intelligence tests like the Wechsler Adult Intelligence Scales, WAIS). Where we use the term intelligence without further specification in this chapter, we also refer to this broad conceptualization of fluid general intelligence. Positron emission tomography (PET) was one of the first methods capable of in vivo localization of brain activation during the performance of cognitive tasks. It visualizes changes in regional cerebral blood flow as a result of localized neuronal activity, by means of a weak radioactive tracer injected into the bloodstream. Functional magnetic resonance imaging (fMRI) and functional near-infrared spectroscopy (fNIRS) measure a very similar biological signal, i.e., the blood oxygenation level dependent (BOLD) contrast that results from magnetization changes of the blood supplied to the brain following localized neuronal activity. However, fMRI allows for a better spatial localization of brain activation differences than fNIRS.
The electroencephalogram (EEG) provides a much better temporal resolution, which makes it attractive for cognitive studies despite its lack of high spatial resolution. Most EEG work in the field of intelligence research has measured the event-related desynchronization (ERD) of brain activity, usually in the EEG alpha frequency band (approximately 8–13 Hz), which is typically observed when people concentrate on solving a cognitively demanding task (e.g., Neuper, Grabner, Fink, & Neubauer, 2005; Pfurtscheller & Aranibar, 1977).
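To make the ERD measure concrete: it is simply the percentage decrease in band power from a pre-stimulus reference interval to the task interval. A minimal sketch with placeholder power values (illustrative numbers, not real EEG data):

```python
def erd_percent(reference_power, activation_power):
    """Event-related desynchronization (ERD), expressed as the
    percentage decrease in band power from a pre-stimulus reference
    interval to the activation (task) interval. Positive values
    indicate desynchronization, negative values synchronization (ERS)."""
    return (reference_power - activation_power) / reference_power * 100.0

# Placeholder alpha-band (8-13 Hz) power values, not real recordings:
reference = 40.0   # mean band power before the task (arbitrary units)
activation = 25.0  # mean band power while solving the task
print(erd_percent(reference, activation))  # 37.5 -> 37.5% alpha ERD
```

Greater alpha ERD is typically read as a sign of stronger cortical activation in the EEG studies reviewed here.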

Brain Regions Involved in Processing Intelligence-related Tasks: Can We Track Down Intelligence in the Brain?

Many early studies using functional imaging to identify brain regions relevant for intelligence used the task approach, with the aim of identifying brain regions activated while study participants were solving cognitive tasks like those used in intelligence tests (e.g., Ghatan et al., 1995; Goel, Gold, Kapur, & Houle, 1998; Haier et al., 1988; Prabhakaran, Rypma, & Gabrieli,
2001; Prabhakaran, Smith, Desmond, Glover, & Gabrieli, 1997). In these studies, researchers observed increased activation in brain regions known to be activated also during other cognitive demands (such as attention or working memory), including the lateral and medial frontal cortex as well as the parietal and insular cortex. To isolate a neural correlate of general cognitive ability in the sense of Spearman’s general intelligence factor g, which is assumed to be involved in all cognitive tasks independent of task-specific factors, Duncan et al. (2000) used PET to measure brain activation while participants performed three different tasks strongly depending on the g factor – a spatial, a verbal, and a perceptuo-motor task. Overlapping activation was found in three prefrontal brain regions, i.e., dorsolateral prefrontal cortex (DLPFC), ventrolateral prefrontal cortex (VLPFC; extending along the frontal operculum to the anterior insula), and the dorsal anterior cingulate cortex (ACC). Based on this PET study as well as on brain lesion studies also pointing to a pivotal role of the prefrontal cortex (PFC) for higher-order cognitive functioning (e.g., Duncan, Burgess, & Emslie, 1995), Duncan argued that functions of the PFC may be particularly central to general intelligence (Duncan, 1995, 2005; Duncan, Emslie, Williams, Johnson, & Freer, 1996). Yet, with evidence from other functional neuroimaging studies on intelligence, it became apparent that, while the PFC is indeed important for higher cognitive functions, intelligence-related tasks often also activate parts of the parietal cortex as well as sensory cortices in the occipital and temporal lobes (e.g., Esposito, Kirkby, Van Horn, Ellmore, & Berman, 1999; Ghatan et al., 1995; Goel & Dolan, 2001; Knauff, Mulack, Kassubek, Salih, & Greenlee, 2002). 
A systematic review of the brain imaging literature available in 2007 led to the formulation of the parieto-frontal integration theory of intelligence (P-FIT; Jung & Haier, 2007). This influential model conceptualizes intelligence as the product of the interaction among a set of distributed brain regions, primarily comprising parts of the frontal and parietal cortices. Duncan (2010) subsumes roughly the same prefrontal and parietal regions under the term multiple-demand (MD) system, which he conceptualizes as a system of general-purpose brain regions recruited by a variety of cognitive demands and which is proposed to interact flexibly with more specialized perceptual and cognitive systems (for support from lesion studies, see Barbey et al., 2012; Barbey, Colom, & Grafman, 2013; Barbey, Colom, Paul, & Grafman, 2014; Gläscher et al., 2010; Woolgar et al., 2010; Woolgar, Duncan, Manes, & Fedorenko, 2018). Furthermore, the P-FIT and MD regions largely resemble what in other, not intelligence-related contexts is also referred to as the (attention and) working memory system (Cabeza & Nyberg, 2000), the cognitive control network (Cole & Schneider, 2007), or – most generally – the task-positive network (Fox et al., 2005). Intelligence-related cognitive tasks thus activate a relatively broad and rather unspecific brain network involved in the processing of a number of higher cognitive challenges, ranging from working memory maintenance and manipulation (e.g., see figure 1A in Basten, Stelzel, & Fiebach, 2012) to inhibitory control, as reflected for example in the Stroop task (e.g., see figure 2 in Basten, Stelzel, & Fiebach, 2011); see also Niendam et al. (2012) for meta-analytic evidence on common activation patterns across cognitive control demands.

Figure 12.1 Brain activation associated with the processing of intelligence-related tasks, showing the results of the meta-analysis conducted by Santarnecchi, Emmendorfer, and Pascual-Leone (2017). The brain regions marked with color were consistently activated across studies while study participants were solving tasks as they are used in common tests of intelligence. Produced using the brain map resulting from the ALE meta-analysis conducted by Santarnecchi, Emmendorfer, and Pascual-Leone (2017) and made available at www.tmslab.com/santalab.php. Figure available at: https://github.com/fiebachlab/figures under a CC-BY license

Figure 12.2 Intelligence-related differences in brain activation during cognitive processing. (A) Results of the meta-analysis conducted by Basten et al. (2015). The brain regions marked with color consistently showed intelligence-related differences in brain activation across studies. Blue–green: negative associations; red–yellow: positive associations. IFG, inferior frontal gyrus; IFJ, inferior frontal junction; IPL, inferior parietal lobule; IPS, intraparietal sulcus; MFG, middle frontal gyrus; MTG, middle temporal gyrus; SFS, superior frontal sulcus. Reproduced and adapted from another illustration of the same results published in Basten et al. (2015). (B) Graphic summary of where original studies found negative (–) or positive (+) associations between intelligence and brain activation. ACC, anterior cingulate cortex; PCC, posterior cingulate cortex; PFC, prefrontal cortex; Precun, precuneus; (pre)SMA, pre-supplementary motor area. Figure available at: https://github.com/fiebachlab/figures/ under a CC-BY license

The current state of findings from the task approach in the functional neuroimaging of intelligence is comprehensively summarized by a recent meta-analysis of 35 fMRI and PET studies (Santarnecchi, Emmendorfer, & Pascual-Leone, 2017; see Figure 12.1). This meta-analysis quantitatively establishes the previously proposed convergence across studies for the frontal cortex (where 74% of the brain sites consistently activated across studies were located) and the parietal lobes (13%). To a much lesser extent, convergence across activation studies was also observed in occipital regions (3%). Notably, this meta-analysis extends previous reviews – like the one resulting in the P-FIT model (Jung & Haier, 2007) – by also linking the insula and subcortical structures like the thalamus and basal ganglia (globus pallidus, putamen) to
the processing of intelligence-related tasks. While activation was in principle bilateral, left hemisphere activation was more dominant (63% as compared to 37% of brain sites activated across studies). This left-dominance was mainly due to a left-lateralization of inferior and middle frontal activation. A subset of regions from this distributed network, i.e., the left inferior frontal lobe and the left frontal eye fields, the bilateral anterior cingulate cortex, and the bilateral temporo-occipital cortex, was more strongly activated the more difficult the tasks were. Santarnecchi, Emmendorfer, and Pascual-Leone (2017) further investigated whether there were dissociable correlates for different component processes of intelligent performance or for different task materials. Clear differences were observed for the component processes “rule inference” and “rule application”: While inferring rules (sub-analysis of eight studies) recruited left prefrontal and bilateral parietal regions, applying known rules (sub-analysis of six studies) relied on activity in subcortical structures (thalamus and caudate nuclei) as well as right prefrontal and temporal cortices. Furthermore, verbal tasks (sub-analysis of 22 studies) were associated with more left-lateralized activation involving inferior frontal and anterior cingulate areas, whereas visuospatial tasks (14 studies) were characterized by stronger activation of the bilateral frontal eye fields. To objectify the functional interpretation of their meta-analysis, Santarnecchi, Emmendorfer, and Pascual-Leone (2017) compared the resultant meta-maps to maps of well-established functional brain networks.
Such networks are identifiable in the patterns of intrinsic connectivity of the brain measured with fMRI in a so-called resting state (during which participants are not engaged in a particular cognitive task) and have been associated with specific sensory and cognitive functions, such as attention, executive control, language, sensorimotor, visual, or auditory processing (e.g., Dosenbach et al., 2007; Yeo et al., 2011). Santarnecchi and colleagues found that multiple functional networks were involved, but primarily those associated with attention, salience, and cognitive control. Specifically, the highest overlap (27%) was observed between the results of the meta-analysis and the dorsal and ventral attention networks (Corbetta, Patel, & Shulman, 2008), followed by the anterior salience network (9%; also known as the cingulo-opercular network; Dosenbach et al., 2007), and the left-hemispheric executive control network (7%; also known as the fronto-parietal control network; Dosenbach et al., 2007). Notably, brain activation elicited by more difficult tasks showed relatively more overlap with the left executive control network and the language network. Summarizing the results from functional neuroimaging studies using the task approach to investigate the brain basis of intelligence, we can conclude that the processing of tasks commonly used in intelligence tests is associated with the activation of prefrontal and parietal brain regions that are generally involved in solving cognitive challenges. While in other research contexts the same brain regions are referred to as the task-positive, the attention and working memory, or the cognitive control network, intelligence researchers
often refer to those networks as the “P-FIT areas” or the “multiple demand (MD) system.” The evidence available at present does not allow us to decide whether this pattern of brain activation reflects the involvement of a unitary superordinate control system (as suggested for instance by Niendam et al., 2012), or whether it is the result of the co-activation of a diverse set of cognitive component processes that all contribute to solving intelligence-related tasks. Importantly, the task approach ignores individual differences in brain activation. If, however, we want to understand how differences in brain functions may potentially explain individual differences in intelligence, we have to turn to studies using the individual differences approach.

Individual Differences in Brain Activation Associated with Intelligence: Do More Intelligent People Have More Efficient Brains?

The first study on intelligence-related differences in brain activation during a cognitive challenge in healthy participants was conducted in the 1980s by Haier et al. (1988). In a sample of eight participants, these authors used PET to measure changes in the brain’s energy consumption (glucose metabolic rate, GMR) while participants completed matrix reasoning tasks from an established test of intelligence. The authors of this pioneering study had expected that participants with better performance in the reasoning task would show stronger brain activation, i.e., would recruit task-relevant brain regions to a greater degree. To their surprise, they observed the opposite: The brains of participants who solved more items correctly consumed less energy during task processing. In the same year, two other studies were published that reported concordant results (Berent et al., 1988; Parks et al., 1988). Haier et al. (1988) coined the term “efficiency” (also “brain efficiency” or “neural efficiency”) for the observed pattern of an inverse relationship between intelligence and the strength of brain activation during a cognitive challenge. In a nutshell, the resulting neural efficiency hypothesis of intelligence states that more intelligent people can achieve the same level of performance with smaller increases in brain activation than less intelligent people. Haier, Siegel, Tang, Abel, and Buchsbaum (1992) summarized: “Intelligence is not a function of how hard the brain works but rather how efficiently it works” (p. 415 f.). It has been criticized that the term “neural efficiency” simply re-describes the observed pattern of less activation for a defined level of performance without explaining it (Poldrack, 2015).
The potential reasons for activation differences are manifold, including the possibility that the same neural computations are indeed performed more efficiently, i.e., with lower metabolic expenditure. However, activation differences can also result from qualitative differences in neural computations and/or cognitive processes. In the latter case, differences in activation would be attributable to “people doing different things” – and not
to “people doing the same thing with different efficiency.” The neuroimaging studies we describe in this chapter can detect differences in activation strength – however, they do not provide an explanation for the observed differences in terms of neural metabolism. Such explanations are the subject of interpretation and theoretical models. Higher neural efficiency during cognitive task performance could result from a more selective use of task-relevant neural networks or neurons (Haier et al., 1988, 1992), possibly due to anatomically sparser neural architectures following more extensive neural pruning in the course of brain development (for recent evidence on lower dendritic density and arborization in people with higher intelligence scores, see Genç et al., 2018) or faster information processing due to better myelination (Miller, 1994; for evidence on higher white matter integrity in more intelligent individuals, see Kievit et al., 2016; Penke et al., 2012; and Chapter 10, by Genç and Fraenz). Furthermore, activation efficiency may also be due to a more efficient organization of intrinsic functional networks in terms of generally shorter paths from one point in the brain to any other, as studied with brain networks modeled as graphs (van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009; but see Hilger, Ekman, Fiebach, & Basten, 2017a; Kruschwitz, Waller, Daedelow, Walter, & Veer, 2018; see also Barbey, 2018). The assumption that more intelligent people have more efficient brains represents a compelling idea with high face validity. But is there consistent evidence in support of the neural efficiency hypothesis? Since it was proposed about three decades ago, the neural efficiency hypothesis of intelligence has been studied repeatedly with different methods, including PET, EEG, fMRI, and fNIRS.
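The shorter-path notion of network efficiency mentioned above has a standard graph-theoretic formalization, global efficiency: the mean of the inverse shortest-path lengths over all pairs of nodes. The following self-contained sketch computes it for toy unweighted graphs (illustrative only, not derived from real connectome data):

```python
from collections import deque

def global_efficiency(adjacency):
    """Mean inverse shortest-path length over all ordered node pairs.
    `adjacency` maps each node to its set of neighbors (unweighted)."""
    nodes = list(adjacency)
    n = len(nodes)
    total = 0.0
    for source in nodes:
        # Breadth-first search for shortest path lengths from `source`.
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != source)
    return total / (n * (n - 1))

# Toy example: a chain A-B-C has lower efficiency than a triangle.
chain = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
triangle = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}
print(global_efficiency(chain))     # 5/6, i.e. about 0.833
print(global_efficiency(triangle))  # 1.0
```

In this metric the fully connected triangle (efficiency 1.0) outperforms the chain (about 0.83), mirroring the intuition that shorter paths between regions mean a more efficiently organized network.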
All these studies have in common that they relate (by means of correlation or group comparisons) individual differences in performance on psychometric tests of intelligence (e.g., WAIS, RPM) to differences in some indicator of brain activation elicited during a cognitive challenge (such as working memory or reasoning tasks). Roughly summarized, earlier PET and EEG studies often reported negative correlations between intelligence and brain activation, which explains the early popularity of the neural efficiency hypothesis of intelligence (e.g., Haier et al., 1988, 1992; Jaušovec, 2000; Neubauer, Fink, & Schrausser, 2002; Neubauer, Freudenthaler, & Pfurtscheller, 1995; Parks et al., 1988). However, it soon became clear that a substantial number of studies also found contradictory evidence: Further EEG evidence produced mixed findings (for review, see Neubauer & Fink, 2009), and more recent fMRI studies often reported positive correlations between task-elicited brain activity and intelligence, thus contradicting the idea of higher neural efficiency in more intelligent people (e.g., Basten, Stelzel, & Fiebach, 2013; Burgess, Gray, Conway, & Braver, 2011; Choi et al., 2008; DeYoung, Shamosh, Green, Braver, & Gray, 2009; Ebisch et al., 2012; Geake & Hansen, 2005; Lee et al., 2006; O’Boyle et al., 2005). In 2015, we conducted a quantitative meta-analysis of the available evidence on intelligence-related differences in brain activation during a
cognitive challenge. We included only studies that used the individual differences approach and that reported their results in standard brain reference space (Basten et al., 2015). These criteria were met by 16 studies (from 14 independent samples), comprising one PET study and 15 fMRI studies and a total of 464 participants, in sum reporting 151 foci for which intelligence-related differences in brain activation had been observed. Our meta-analysis provided only limited support for the neural efficiency hypothesis of intelligence. As Figure 12.2a illustrates, we identified two brain regions for which evidence across studies suggested weaker activation in more intelligent participants, i.e., the right inferior frontal junction area (IFJ, located at the junction of the inferior frontal sulcus and the inferior precentral sulcus, cf. Derrfuss, Vogt, Fiebach, von Cramon, & Tittgemeyer, 2012) and the right posterior insula. On the other hand, six brain regions were identified in which brain activation increases during solving of cognitive tasks were consistently greater for more intelligent people, i.e., the left inferior frontal junction area (IFJ), the right inferior frontal sulcus (IFS) extending into the inferior and middle frontal gyrus (IFG/MFG), the right superior frontal sulcus (SFS), the left inferior parietal lobe (IPL) and intraparietal sulcus (IPS), and the right posterior middle temporal gyrus (MTG). Figure 12.2b schematically summarizes the direction of effects reported in the original studies that were included in the meta-analysis. When comparing this visual summary to the results of the meta-analysis, it becomes clear that the lack of meta-effects in midline structures of the brain is attributable to mixed evidence across studies for anterior and posterior cingulate cortex, (pre)SMA, and the precuneus. 
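At its core, the individual differences approach described in this section relates, across participants, an intelligence score to a per-person activation estimate for a given region, typically via a Pearson correlation. A minimal sketch with simulated data (all numbers are illustrative; they do not come from any of the studies cited here):

```python
import random
import statistics

def pearson_r(x, y):
    """Pearson correlation between two equally long sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(0)
n = 40  # simulated participants
iq = [random.gauss(100, 15) for _ in range(n)]
# Simulated activation estimates (betas) for one region, positively
# coupled to IQ plus noise -- the pattern reported for, e.g., parietal
# regions in the meta-analysis discussed above.
beta = [0.05 * q + random.gauss(0, 0.5) for q in iq]
print(round(pearson_r(iq, beta), 2))  # positive r: more activation in more intelligent people
```

A negative r in a task-positive region would instead count as evidence for neural efficiency; the sign alone, however, says nothing about why activation differs.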
For the lateral prefrontal cortex, on the other hand, the mixed evidence observed across studies (i.e., positive and negative correlations with intelligence) resolved into meta-analytic convergence of positive and negative effects in spatially dissociated parts of this region. Taken together, empirical evidence concerning the localization of correlations between brain activation and intelligence stems primarily from fMRI studies, and at present these studies provide more evidence for positive associations between intelligence and brain activity than for negative associations (which would be in accordance with the neural efficiency hypothesis). Of the two brain regions showing individual differences in activation in accordance with the neural efficiency hypothesis, the right IFJ is located within a brain region that has also been associated with intelligence in task approach studies (see above section Brain Regions Involved in Processing Intelligence-related Tasks: Can We Track Down Intelligence in the Brain?). The insular cortex, however, has not only been implicated in cognitive processing but is more generally understood as an interface between the autonomic system, emotion, cognition, and action (e.g., Chang, Yarkoni, Khaw, & Sanfey, 2013). Overall, we have to acknowledge mixed evidence in a dual sense: First, there is mixed evidence across studies for some brain regions (e.g., dorsolateral PFC) concerning whether intelligence is associated with higher or
lower activity during task performance. Second, there is also mixed evidence regarding neural efficiency when comparing different regions of the brain (e.g., insula and parietal cortex), meaning that functional imaging studies do not generally support the conclusion of an “overall” more efficient brain. Very much in line with our conclusions concerning the fMRI evidence, Neubauer and Fink (2009) summarized the available EEG evidence on the neural efficiency hypothesis as being mixed. These authors state that a large body of early evidence in favor of the neural efficiency hypothesis was followed by a significant number of later studies providing only partial support or even contradictory evidence. Taking together all evidence from functional activation studies (EEG, PET, and fMRI), we must conclude that, with regard to the level of brain activation elicited by cognitive demands, the brains of more intelligent people are not generally more efficient. In their review, Neubauer and Fink (2009) discuss a number of factors potentially moderating the association between intelligence and brain activation – which could explain why some studies found negative associations supporting the neural efficiency hypothesis, while others found positive associations contradicting it, and yet others found no association at all. The suggested moderators are introduced and discussed in the next section; they may be understood as a refinement of the neural efficiency hypothesis in the sense that they define under which conditions to expect higher efficiency in more intelligent people. In a nutshell, Neubauer and Fink (2009) define these conditions as follows: “Neural efficiency might arise as a phenomenon when individuals are confronted with tasks of (subjectively) low to moderate task difficulty and it is mostly observable for frontal brain areas.” (p. 1018). 
When we compare the meta-analytic findings for the individual differences approach (Basten et al., 2015) to those for the task approach (Santarnecchi, Emmendorfer, & Pascual-Leone, 2017) and to the P-FIT model (which was also largely based on task approach studies: 7 of 10 PET studies and 10 of 17 fMRI studies; Jung & Haier, 2007), there are two striking differences in findings: First, in contrast to the task approach, the individual differences approach provides less evidence for a role of temporal and occipital cortices in intelligence. While these regions are without doubt involved in the perceptual stages of cognitive tasks, their activation does not vary with intelligence and can thus be excluded as a neural phenomenon explaining individual differences in intelligence. The frontal and parietal cortex, on the other hand, show activation across people (in the task approach studies) along with intelligence-related differences (in the individual differences approach) that qualify them as candidates for brain regions in which functional differences may contribute to differences in intelligence. Second, the individual differences approach provides less evidence for a left-lateralization of brain systems involved in intelligence than the task approach. In our meta-analysis, we observed four clusters of intelligence-related activation differences in each
hemisphere. One obvious explanation is that the relative left-dominance of task activation studies reflects a common characteristic of the majority of studies (like a strong dependency on verbal processes for solving the tasks) that, however, does not covary with individual differences in intelligence.

It Depends: Factors Moderating the Association between Intelligence and Brain Activation

In the face of the mixed findings concerning the neural efficiency hypothesis, researchers have begun to ask under which specific conditions or circumstances we can expect to observe neural efficiency – and under which we cannot. A set of potential moderator variables has been discussed, including (a) the sex of the participants, (b) the task content, (c) the brain areas under study, (d) the state of learning and training, and (e) the difficulty of the task (for an extensive review and excellent discussion of these moderators, see Neubauer & Fink, 2009). Empirical evidence supporting the neural efficiency hypothesis has more often been reported for men than for women (see Neubauer & Fink, 2009). This finding may be closely associated with an interaction between sex and task content: Neubauer et al. (2002), for instance, found brain activation patterns supporting the neural efficiency hypothesis during performance of visuo-spatial tasks only for men and during verbal tasks only for women (see also Neubauer, Grabner, Fink, & Neuper, 2005). Thus, it seems that the phenomenon of neural efficiency is more likely to be observed in the cognitive domain in which the respective sex typically shows slight performance advantages (Miller & Halpern, 2014). To explain this moderating effect, researchers have speculated that some intelligence-relevant processes may be differently implemented in the brains of men and women. Neubauer and Fink (2009) also came to the conclusion that frontal brain areas are more likely to show activation patterns in line with the neural efficiency hypothesis than other brain areas (like the parietal cortex).
A representative example of this comes from a study by Jaušovec and Jaušovec (2004), who observed that less intelligent participants showed greater activation over frontal brain areas (as inferred from event-related desynchronization in the upper alpha band of the EEG signal) during completion of a figural learning task, whereas more intelligent participants had stronger activation over parieto-occipital brain areas. It seems that under certain circumstances more intelligent people can solve a task without much frontal involvement, instead relying on parietal activity. This may reflect a relative shift from controlled to automatized processing, possibly due to more effective and better-trained cognitive routines. The assumption of neural efficiency being more likely in frontal than in parietal cortices was partly supported by
the findings of our meta-analysis some years later (Basten et al., 2015), which was based on a larger set of fMRI studies. This meta-analysis suggested a tendency for more intelligent people to show stronger parietal activation, while evidence was mixed for the frontal cortex. We thus conclude that there is still relatively more evidence in support of the neural efficiency hypothesis of intelligence for frontal than for parietal cortices – even though absolute evidence within the prefrontal cortex remains ambiguous. For fMRI studies, as we have pointed out previously (Basten et al., 2013), it is crucially important to consider in which functional network of the brain an association between intelligence and brain activation is observed. As discussed, a set of prefrontal and parietal brain regions shows increases in activation when task demands increase. These regions have been described as the task-positive network (TPN). A second network, the task-negative or default mode network (TNN or DMN), comprising ventromedial PFC, posterior cingulate cortex, superior frontal gyrus, and the region of the temporo-parietal junction, shows the opposite pattern of de-activation with increasing task demands (relative to task-free states; Fox et al., 2005). This task-related reduction of non-task-related brain activity is often interpreted as reflecting the suppression of task-unrelated cognitive processes to concentrate on the task at hand.
For the interpretation of activation differences in terms of efficiency it obviously makes a difference whether intelligence-related activation differences are observed in task-positive or task-negative regions: While a positive correlation between intelligence and fMRI BOLD signal changes may be interpreted as reflecting more activation in more intelligent individuals when it involves task-positive brain regions, the same correlation must be interpreted in exactly the opposite way when localized in task-negative regions, i.e., as reflecting less de-activation in more intelligent subjects, likely due to less rather than more cognitive effort exerted during task processing (McKiernan, Kaufman, Kucera-Thompson, & Binder, 2003). The neglect of this important general distinction between task-activated and de-activated brain regions in fMRI studies may have led to incorrect interpretations of activation–intelligence associations in some of the earlier studies. Initial evidence suggests that individual differences in the de-activation of task-negative networks are indeed a reliable correlate of intelligence in functional brain imaging (Basten et al., 2013; Hammer et al., 2019; Lipp et al., 2012). A closely linked important factor seems to be the effectiveness of the interplay between the task-positive and task-negative networks, which is reflected in a negative coupling of the two. Here, higher intelligence was reported to be characterized by a stronger negative coupling between the TPN and the TNN under task-free conditions (Santarnecchi, Emmendorfer, Tadayon, et al., 2017; see also Chapter 6, by Barbey). It remains to be explored whether these findings can be replicated for brains engaged in cognitive processing. The relationship between intelligence and brain activation further seems to change with learning and practice on specific tasks. Studies in which
participants received short-term training (a single occasion to several weeks) on a cognitive task (e.g., a visuo-spatial task like the computer game Tetris, or a complex reasoning task) reported stronger pre- to post-training decreases in task-associated brain activation for more intelligent participants (e.g., glucose metabolic rate – Haier et al., 1992; event-related EEG desynchronization – Neubauer, Grabner, Freudenthaler, Beckmann, & Guthke, 2004). In part, such training-related changes in brain activity may also be due to changes in the use of cognitive strategies (Toffanin, Johnson, de Jong, & Martens, 2007) – which could of course vary depending on intelligence. Other studies have investigated the roles that long-term training and the acquisition of expertise over years play for neural efficiency. Two studies, in taxi drivers (Grabner, Stern, & Neubauer, 2003) and chess players (Grabner, Neubauer, & Stern, 2006), suggest that above and beyond individual differences in general cognitive ability, higher expertise makes an efficient use of the brain more likely. This is often reflected in decreased frontal involvement along with an increased reliance on posterior/parietal brain systems (e.g., Grabner et al., 2006). Combined, the available evidence suggests that in the short term (days to weeks), more intelligent people profit more from practice in terms of gains in neural efficiency. Long-term training (over several years), however, can even out intelligence-related differences in neural efficiency, as the acquisition of expertise in a specific task through extensive practice can lead to task-specific efficiency independent of general cognitive ability. Future research will have to specify in more detail the conditions under which these conclusions hold.
The effects of training and practice on the development of neural efficiency suggest that intelligent people may be thought of as “experts in thinking” who are more likely to manage cognitive challenges with neural efficiency due to habitual practice in cognitive activity. Intelligent people may have had dispositional advantages for developing an efficient use of their brains in the first place. In addition, constantly challenging their brains with cognitively demanding mental activity may further promote the development of neural efficiency in general, as well as the potential to more quickly develop task-specific efficiency when faced with new challenges. In other words, as a result of previous learning and daily routine in dealing with cognitive challenges, more intelligent people may have acquired skills and strategies that are cognitively effective and neurally efficient. From this perspective, neural efficiency may refer not so much to an overall reduced activity of the brain as to a tendency to solve a cognitive task with less control-related prefrontal involvement, or to the ability to quickly redistribute activity from the frontal cortex to the smaller set of brain regions that are essentially necessary for task processing. Finally, the difficulty of a task also seems to play an important role in understanding whether and under which conditions more intelligent people show more or less brain activation. Neubauer and Fink (2009), in their review, suggested that individual differences in neural efficiency are most likely to be
observed in tasks of low-to-medium but not high complexity, implying that the individual strength of brain activation during cognitive processing is interactively determined by the difficulty of the task and the intelligence of the individual.

Figure 12.3 Brain activation as a function of task difficulty and intelligence. Adapted from figure 2 in Neubauer and Fink (2009). Figure available at: https://github.com/fiebachlab/figures/ under a CC-BY license

As illustrated in Figure 12.3, this model assumes that less intelligent people need to exert more brain activation for successful performance even in relatively easy tasks, consistent with the neural efficiency hypothesis. At some point, no further resources can be recruited to meet increasing task demands, so that brain activation reaches a plateau where it cannot be increased further – or may even drop if a task is too difficult and the participant “gives up.” More intelligent people reach this point later, i.e., at higher levels of objective task difficulty, so that for more difficult tasks they will show greater activation than less intelligent people (Figure 12.3). This interaction between task difficulty and intelligence is equivalent to a moderation of the association between intelligence and brain activation by task difficulty. This model has often been used for post-hoc interpretations of associations found between intelligence and brain activation. Especially when observing a positive association that is not compatible with the neural efficiency hypothesis, i.e., stronger activation in more intelligent study participants, researchers tend to speculate that the positive association arose because the task used in their study was particularly difficult (e.g., Gray, Chabris, & Braver, 2003). Uncertainty about the level of difficulty addressed by a specific investigation may remain even when more than one level of task difficulty was studied, as was the case in one of our own studies for a working memory task with three levels of difficulty (Basten et al., 2013). In conclusion, there are a number of plausible moderators of the relationship between individual differences in intelligence and brain activation.
A better understanding of these moderators may help in clarifying why evidence has so far not conclusively supported or falsified the still popular neural efficiency hypothesis.
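The plateau-and-give-up logic of this moderation account can be illustrated with a toy simulation. Everything below is a sketch: the function, its shape parameters, and the "ability" values are invented for illustration and are not fitted to Neubauer and Fink's (2009) data.

```python
def activation(difficulty, ability, ceiling=1.0, giveup_margin=0.5):
    """Toy model of brain activation as a function of task difficulty
    and ability (all parameters invented for illustration)."""
    demand = difficulty / ability            # subjective task demand
    if demand <= 1.0:                        # demands can still be met:
        return ceiling * demand              #   activation tracks demand
    if demand <= 1.0 + giveup_margin:        # resource limit reached:
        return ceiling                       #   activation plateaus
    return 0.3 * ceiling                     # too hard: participant "gives up"

low, high = 1.0, 2.0                         # ability of two individuals

# Easy task: the less able individual shows MORE activation.
print(activation(0.8, low), activation(0.8, high))   # 0.8 0.4
# Hard task: the less able individual has disengaged; the sign reverses.
print(activation(2.5, low), activation(2.5, high))   # 0.3 1.0
```

At easy levels the less able individual works harder, consistent with the neural efficiency hypothesis; at hard levels the intelligence–activation association reverses, which is exactly the moderation by task difficulty described above.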


Limitations of Available Studies: Why There May Still Be More to Learn about Brain Function and Intelligence

What we know today about the association between intelligence and individual differences in patterns of brain activation during cognitive tasks is based on a body of studies that partly suffer from methodological limitations: many studies were based on small samples, some restricted their analyses to predefined brain regions (i.e., “regions of interest”), and most did not systematically investigate the roles of potential moderators. With respect to sample sizes, we observe wide variation across studies, ranging from only eight subjects in the very first study on intelligence-related brain activation differences (Haier et al., 1988) to 1,235 participants in a recent study (Takeuchi et al., 2018). The majority of studies, however, were based on data from no more than 20 to 40 participants. For example, in our meta-analysis (Basten et al., 2015), sample sizes varied between 12 and 104 participants, with a mean of 33.21 and a standard deviation of 24.41. Half of the 14 samples included in the meta-analysis comprised fewer than 25 participants. Such small sample sizes result in a lack of statistical power, which means that, by design, studies have a low probability of detecting the effects they study, even when a true effect exists (see, e.g., Yarkoni, 2009, for a discussion in the context of individual differences in fMRI research). On the one hand, a lack of statistical power may have led to overestimation of effect sizes or even false positives (for details, see Button et al., 2013; Cremers, Wager, & Yarkoni, 2017; Turner, Paul, Miller, & Barbey, 2018; Yarkoni, 2009). This problem, however, should effectively be taken care of in meta-analyses, because random false positives have little chance of being confirmed by evidence from other studies.
On the other hand, low-powered studies will miss true effects when these are not very strong (type II errors), a problem that meta-analyses cannot remedy: effects missed by the original studies cannot influence the results of a meta-analysis. If the neurofunctional differences underlying individual differences in intelligence are in fact rather weak and widely distributed throughout the brain, the partial inconsistency of the highly localized findings reported so far may well be due to most studies being seriously underpowered (Cremers et al., 2017). This may even imply that the search for moderators (see above) is not the adequate response to the mixed findings and may in part be a futile and misleading endeavor. The likelihood that part of the neural underpinnings of individual differences in intelligence have so far gone unnoticed is further increased by the fact that some of the original studies did not search the whole brain for intelligence-related differences in activation but restricted their analyses to pre-defined regions of interest, for example to brain regions activated across all participants for the task under study (e.g., Lee et al., 2006). With such an
approach, studies will miss effects in brain regions that may not show activation across participants exactly because of high inter-individual variation in activity or in brain regions commonly de-activated during task processing (e.g., Basten et al., 2013; Lipp et al., 2012). Finally, if the association between intelligence and brain activation is indeed moderated by factors like sex, task content, or task difficulty, the search for a general pattern across these factors will lead to only the most general mechanisms being identified and other, specific aspects (e.g., different functional implementations of intelligence in men and women) being missed. In a nutshell, these limitations make our current state of knowledge – as derived from meta-analyses – a rather conservative estimate of the neurofunctional basis of intelligence in the brain: We can be quite sure that the regions we identified in meta-analyses as replicating across studies are indeed relevant for intelligence. There may, however, be further brain regions where differences in function also contribute to individual differences in intelligence but which have been missed by neuroimaging studies so far. Given the low power of many studies available so far, and the resulting overestimation of localized effects (Cremers et al., 2017), future research may show that the neural systems associated with individual differences in intelligence are in fact much more widely distributed across the brain than currently assumed.
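The consequences of small samples for detecting correlational effects can be made concrete with the standard Fisher-z power approximation for a single correlation coefficient. The sketch below is purely illustrative; the effect sizes plugged in are round numbers, not estimates taken from the studies discussed.

```python
import math
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate sample size needed to detect a true correlation r
    with a two-sided test, via the Fisher z transformation:
    n = ((z_alpha + z_beta) / atanh(r))^2 + 3."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    z_r = math.atanh(r)                             # Fisher z of the effect
    return math.ceil(((z_alpha + z_beta) / z_r) ** 2 + 3)

for r in (0.2, 0.3, 0.5):
    print(f"true r = {r}: need n = {n_for_correlation(r)}")
```

With typical samples of 20 to 40 participants, only true correlations around r = .5 and larger are detectable with adequate power; the weak, distributed effects discussed above would require samples in the hundreds.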

Trends and Perspectives for the Functional Imaging of Intelligence: What Researchers Are Working On Now and Should Tackle in the Future

Most importantly, of course, researchers should overcome the limitations of previous studies. One important need for future research is the study of bigger samples (e.g., Turner et al., 2018; Yarkoni, 2009). There has been a substantial increase in sample sizes over the years that gives reason for an optimistic outlook. Researchers are increasingly using larger samples (e.g., Takeuchi et al., 2018), including open-access data from large-scale collaborative projects like the Human Connectome Project (HCP; Van Essen et al., 2013), the Functional Connectomes Project (Mennes, Biswal, Castellanos, & Milham, 2013), or the UK Biobank (Miller et al., 2016; Sudlow et al., 2015). The first functional MRI study in a much larger sample than previously available, i.e., with 1,235 subjects, suggests that effects are in fact smaller than previously assumed (Takeuchi et al., 2018) – which is in line with what one would theoretically expect for effect size estimates from larger as compared to smaller samples (e.g., Cremers et al., 2017; Ioannidis, 2008; Yarkoni, 2009). This study identified only two brain regions in which intelligence showed robust positive correlations with activation elicited by a challenging working memory task (i.e., the 2-back task as compared to a no-memory, 0-back control condition). These were not located in the lateral
prefrontal or parietal cortex, but in the right hippocampus and the pre-supplementary motor area. Only when brain activation elicited by the 2-back task was evaluated without correcting for the no-memory control condition were significant positive correlations also found in the dorsomedial PFC and the precuneus, and a significant negative correlation in the right intraparietal sulcus. These first task-based fMRI results from a large-scale study are not easy to integrate with previous results – including those of the meta-analyses. At the same time, they await replication in further large-scale datasets. For now, we must concede that the models developed so far may have to be called into question again. More studies using bigger samples – ideally combined with integration across studies in regular meta-analyses (Yarkoni, Poldrack, Van Essen, & Wager, 2010) – will provide more reliable results and may well lead to a more refined understanding of the neural underpinnings of intelligence in the coming years. A further important trend that will aid in clarifying the role of functional brain differences for intelligence is the use of cross-validated prediction approaches, which allow for quantifying the generalizability of findings from one sample to others. Localization and explanation of the mechanisms by which individual differences in brain activation contribute to cognitive ability are not of primary interest here. Instead, predictive approaches test to what extent knowing patterns of brain activation during a cognitive challenge allows predicting individual intelligence test scores. Can we calculate the score that a person would obtain in a traditional paper-and-pencil intelligence test from brain activation we have measured for this person, given a statistical model established on the basis of independent data?
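The logic of such a cross-validated prediction analysis can be sketched in a few lines. The data below are synthetic (many weak, widely distributed "brain" features contributing to a noisy score, with training and test sample sizes loosely mimicking the HCP split), and plain ridge regression stands in for whatever predictive model a given study actually used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for real data: n participants, p "brain activation"
# features, and a true weight vector that is weak and widely distributed.
n_train, n_test, p = 900, 100, 200
w_true = rng.normal(0, 0.05, p)                        # many small contributions
X = rng.normal(size=(n_train + n_test, p))
y = X @ w_true + rng.normal(0, 1.0, n_train + n_test)  # noisy "test scores"

X_tr, y_tr = X[:n_train], y[:n_train]
X_te, y_te = X[n_train:], y[n_train:]

# Ridge regression fitted on the training sample only (closed form):
# w = (X'X + lambda * I)^(-1) X'y
lam = 10.0
w_hat = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(p), X_tr.T @ y_tr)

# Evaluate on held-out participants: the correlation between predicted
# and observed scores is the quantity reported by predictive studies.
y_pred = X_te @ w_hat
r = np.corrcoef(y_pred, y_te)[0, 1]
print(f"out-of-sample r = {r:.2f}")
```

Because the model is evaluated on participants it never saw during fitting, a high out-of-sample correlation cannot be explained by overfitting to the training sample.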
If this were possible, it would provide the most stringent evidence for a reliable association between intelligence and brain activation, as a successful prediction ensures that the underlying model relating brain activation and intelligence is not merely capturing random characteristics of the specific sample at hand (a common problem of simple association studies, known as overfitting) but rather features of brain function that are predictive of intelligence across different independent samples (Yarkoni & Westfall, 2017). Using a cross-validated predictive approach, Sripada, Angstadt, and Rutherford (2018) recently reported that they were indeed able to predict intelligence from brain activation data. They used data from 944 participants of the Human Connectome Project (i.e., from the same dataset also used by Takeuchi et al., 2018) to build and train a prediction model, which was then applied to the data of an independent test sample of another 100 participants to predict individual intelligence scores and thereby test the model. The brain activation data had been acquired with fMRI during the processing of seven different cognitive tasks. Sripada et al. (2018) report a high correlation (r = .68) between the individual intelligence scores predicted by their model and those resulting from behavioral testing. They further note that tasks tapping executive functions and particularly demanding cognitive tasks (i.e., tasks
showing relatively stronger activation of the TPN and deactivation of the TNN) were especially effective in this prediction. Whether intelligence scores can also be predicted with low absolute error is an important open question for future studies. A major advantage of adopting a predictive approach in the functional imaging of intelligence lies in the fact that one does not have to decide about the statistical significance of single associations between activation in specific brain regions and intelligence. The models take all available data into account and use whatever helps the prediction. This will be particularly suitable if intelligence is related to rather weak differences in activation distributed across many parts of the brain. The study of Sripada et al. (2018) suggests that activation differences in the frontoparietal network that previous studies have associated with intelligence are indeed most predictive of intelligence. However, these authors also reported that activation differences in many other parts of the brain further contributed to the prediction, which supports the proposal that the neural underpinnings of intelligence may best be thought of as a distributed set of regions that are each rather weakly associated with intelligence – just as the simulation study by Cremers et al. (2017) concluded for many aspects of personality and brain function. This would also reconcile the report of a successful prediction of intelligence by Sripada et al. (2018) with the report of rather small effect sizes in specific brain regions in the high-powered study by Takeuchi et al. (2018). Finally, an important open question is how the functional brain correlates of intelligence described in this chapter relate to other characteristics of the brain that also vary with intelligence, such as morphological differences (see Chapters 10 and 11) or differences in connectivity and network topology (see Chapters 2 and 6).
Up to now, these different aspects have been studied separately from each other. Theoretically, however, we have to expect that they are not independent. Morphological differences, e.g., in the structure of dendrites (Genç et al., 2018) or the gyrification of the brain (Gregory et al., 2016), might be the basis for differences in the organization of functional brain networks (e.g., Hilger et al., 2017a; Hilger, Ekman, Fiebach, & Basten, 2017b), which will in turn affect the energy consumption and activation associated with cognitive processing as measured by functional neuroimaging (for studies on metabolic and biochemical correlates of intelligence using magnetic resonance spectroscopy, see, e.g., Paul et al., 2016). However, these assumptions remain speculative until research directly relates the different brain correlates of intelligence to each other (cf. Poldrack, 2015). Notably, regarding individual differences in brain activation, we may have to consider not only the degree of activation as a potential underpinning of differences in intelligence, but also other aspects, such as the variability of brain responses to different kinds of stimuli (e.g., Euler, Weisend, Jung, Thoma, & Yeo, 2015).


In sum, there is considerable evidence to suggest that intelligence-related cognitive tasks activate a distributed set of brain regions predominantly located in the prefrontal and parietal cortex, and to a lesser extent also in the temporal and occipital cortex as well as subcortical structures such as the thalamus and the putamen. Furthermore, individual differences in intelligence are associated with differences in task-elicited brain activation. Uncertainty regarding the exact localizations and directions of associations between brain activation and individual differences in intelligence is most likely attributable to the low statistical power of many studies and the existence of several potential moderating factors, including sex, task content, and task difficulty. Future research should (a) strive to use sufficiently large samples to achieve the statistical power necessary to detect effect sizes of interest, (b) further clarify the effects (including interactions) of the postulated moderating variables, and (c) ensure generalizability of results by adopting predictive analysis approaches. A good understanding of the functional brain mechanisms underlying intelligence may ultimately serve the development of interventions designed to enhance cognitive performance (e.g., Daugherty et al., 2018, 2020; for a discussion see Haier, 2016), but may, for example, also prove valuable for informing clinical work such as the rehabilitation of neurological patients with acquired cognitive deficits.

References

Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20.
Barbey, A. K., Colom, R., & Grafman, J. (2013). Dorsolateral prefrontal contributions to human intelligence. Neuropsychologia, 51(7), 1361–1369.
Barbey, A. K., Colom, R., Paul, E. J., & Grafman, J. (2014). Architecture of fluid intelligence and working memory revealed by lesion mapping. Brain Structure and Function, 219, 485–494.
Barbey, A. K., Colom, R., Solomon, J., Krueger, F., Forbes, C., & Grafman, J. (2012). An integrative architecture for general intelligence and executive function revealed by lesion mapping. Brain, 135(4), 1154–1164. doi: 10.1093/brain/aws021.
Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27. doi: 10.1016/j.intell.2015.04.009.
Basten, U., Stelzel, C., & Fiebach, C. J. (2011). Trait anxiety modulates the neural efficiency of inhibitory control. Journal of Cognitive Neuroscience, 23(10), 3132–3145. doi: 10.1162/jocn_a_00003.
Basten, U., Stelzel, C., & Fiebach, C. J. (2012). Trait anxiety and the neural efficiency of manipulation in working memory. Cognitive, Affective, & Behavioral Neuroscience, 12(3), 571–588. doi: 10.3758/s13415-012-0100-3.
Basten, U., Stelzel, C., & Fiebach, C. J. (2013). Intelligence is differentially related to neural effort in the task-positive and the task-negative brain network. Intelligence, 41(5), 517–528. doi: 10.1016/j.intell.2013.07.006.
Berent, S., Giordani, B., Lehtinen, S., Markel, D., Penney, J. B., Buchtel, H. A., . . . Young, A. B. (1988). Positron emission tomographic scan investigations of Huntington’s disease: Cerebral metabolic correlates of cognitive function. Annals of Neurology, 23(6), 541–546. doi: 10.1002/ana.410230603.
Burgess, G. C., Gray, J. R., Conway, A. R. A., & Braver, T. S. (2011). Neural mechanisms of interference control underlie the relationship between fluid intelligence and working memory span. Journal of Experimental Psychology: General, 140(4), 674–692. doi: 10.1037/a0024695.
Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376. doi: 10.1038/nrn3475.
Cabeza, R., & Nyberg, L. (2000). Imaging cognition II: An empirical review of 275 PET and fMRI studies. Journal of Cognitive Neuroscience, 12(1), 1–47. doi: 10.1162/08989290051137585.
Cattell, R. B. (1963). Theory of fluid and crystallized intelligence: A critical experiment. Journal of Educational Psychology, 54(1), 1–22. doi: 10.1037/h0046743.
Chang, L. J., Yarkoni, T., Khaw, M. W., & Sanfey, A. G. (2013). Decoding the role of the insula in human cognition: Functional parcellation and large-scale reverse inference. Cerebral Cortex, 23(3), 739–749. doi: 10.1093/cercor/bhs065.
Choi, Y. Y., Shamosh, N. A., Cho, S. H., DeYoung, C. G., Lee, M. J., Lee, J.-M., . . . Lee, K. H. (2008). Multiple bases of human intelligence revealed by cortical thickness and neural activation. Journal of Neuroscience, 28(41), 10323–10329. doi: 10.1523/JNEUROSCI.3259-08.2008.
Cole, M. W., & Schneider, W. (2007). The cognitive control network: Integrated cortical regions with dissociable functions. NeuroImage, 37(1), 343–360. doi: 10.1016/j.neuroimage.2007.03.071.
Corbetta, M., Patel, G., & Shulman, G. L. (2008). The reorienting system of the human brain: From environment to theory of mind. Neuron, 58(3), 306–324. doi: 10.1016/j.neuron.2008.04.017.
Cremers, H. R., Wager, T. D., & Yarkoni, T. (2017). The relation between statistical power and inference in fMRI. PLoS One, 12(11), e0184923. doi: 10.1371/journal.pone.0184923.
Daugherty, A. M., Sutton, B. P., Hillman, C. H., Kramer, A. F., Cohen, N. J., & Barbey, A. K. (2020). Individual differences in the neurobiology of fluid intelligence predict responsiveness to training: Evidence from a comprehensive cognitive, mindfulness meditation, and aerobic fitness intervention. Trends in Neuroscience and Education, 18, 100123. doi: 10.1016/j.tine.2019.100123.
Daugherty, A. M., Zwilling, C., Paul, E. J., Sherepa, N., Allen, C., Kramer, A. F., . . . Barbey, A. K. (2018). Multi-modal fitness and cognitive training to enhance fluid intelligence. Intelligence, 66, 32–43.
Derrfuss, J., Vogt, V. L., Fiebach, C. J., von Cramon, D. Y., & Tittgemeyer, M. (2012). Functional organization of the left inferior precentral sulcus:
Dissociating the inferior frontal eye field and the inferior frontal junction. NeuroImage, 59(4), 3829–3837. doi: 10.1016/j.neuroimage.2011.11.051.
DeYoung, C. G., Shamosh, N. A., Green, A. E., Braver, T. S., & Gray, J. R. (2009). Intellect as distinct from openness: Differences revealed by fMRI of working memory. Journal of Personality and Social Psychology, 97(5), 883–892. doi: 10.1037/a0016615.
Dosenbach, N. U. F., Fair, D. A., Miezin, F. M., Cohen, A. L., Wenger, K. K., Dosenbach, R. A. T., . . . Petersen, S. E. (2007). Distinct brain networks for adaptive and stable task control in humans. Proceedings of the National Academy of Sciences, 104(26), 11073–11078. doi: 10.1073/pnas.0704320104.
Duncan, J. (1995). Attention, intelligence, and the frontal lobes. In M. S. Gazzaniga (ed.), The cognitive neurosciences (pp. 721–733). Cambridge, MA: The MIT Press.
Duncan, J., Seitz, R. J., Kolodny, J., Bor, D., Herzog, H., Ahmed, A., . . . Emslie, H. (2000). A neural basis for general intelligence. Science, 289(5478), 457–460. doi: 10.1126/science.289.5478.457.
Duncan, J. (2005). Frontal lobe function and general intelligence: Why it matters. Cortex, 41(2), 215–217. doi: 10.1016/S0010-9452(08)70896-7.
Duncan, J. (2010). The multiple-demand (MD) system of the primate brain: Mental programs for intelligent behaviour. Trends in Cognitive Sciences, 14(4), 172–179. doi: 10.1016/j.tics.2010.01.004.
Duncan, J., Burgess, P., & Emslie, H. (1995). Fluid intelligence after frontal lobe lesions. Neuropsychologia, 33(3), 261–268. doi: 10.1016/0028-3932(94)00124-8.
Duncan, J., Emslie, H., Williams, P., Johnson, R., & Freer, C. (1996). Intelligence and the frontal lobe: The organization of goal-directed behavior. Cognitive Psychology, 30(3), 257–303. doi: 10.1006/cogp.1996.0008.
Ebisch, S. J., Perrucci, M. G., Mercuri, P., Romanelli, R., Mantini, D., Romani, G. L., . . . Saggino, A. (2012). Common and unique neuro-functional basis of induction, visualization, and spatial relationships as cognitive components of fluid intelligence. NeuroImage, 62(1), 331–342. doi: 10.1016/j.neuroimage.2012.04.053.
Esposito, G., Kirkby, B. S., Van Horn, J. D., Ellmore, T. M., & Berman, K. F. (1999). Context-dependent, neural system-specific neurophysiological concomitants of ageing: Mapping PET correlates during cognitive activation. Brain: A Journal of Neurology, 122(Pt 5), 963–979. doi: 10.1093/brain/122.5.963.
Euler, M. J., Weisend, M. P., Jung, R. E., Thoma, R. J., & Yeo, R. A. (2015). Reliable activation to novel stimuli predicts higher fluid intelligence. NeuroImage, 114, 311–319. doi: 10.1016/j.neuroimage.2015.03.078.
Fox, M. D., Snyder, A. Z., Vincent, J. L., Corbetta, M., Van Essen, D. C., & Raichle, M. E. (2005). The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proceedings of the National Academy of Sciences of the United States of America, 102(27), 9673. doi: 10.1073/pnas.0504136102.
Geake, J. G., & Hansen, P. C. (2005). Neural correlates of intelligence as revealed by fMRI of fluid analogies. NeuroImage, 26(2), 555–564. doi: 10.1016/j.neuroimage.2005.01.035.
Genç, E., Fraenz, C., Schlüter, C., Friedrich, P., Hossiep, R., Voelkle, M. C., . . . Jung, R. E. (2018). Diffusion markers of dendritic density and arborization in gray matter predict differences in intelligence. Nature Communications, 9(1), 1905. doi: 10.1038/s41467-018-04268-8.
Ghatan, P. H., Hsieh, J. C., Wirsén-Meurling, A., Wredling, R., Eriksson, L., Stone-Elander, S., . . . Ingvar, M. (1995). Brain activation induced by the perceptual maze test: A PET study of cognitive performance. NeuroImage, 2(2), 112–124.
Gläscher, J., Rudrauf, D., Colom, R., Paul, L. K., Tranel, D., Damasio, H., & Adolphs, R. (2010). Distributed neural system for general intelligence revealed by lesion mapping. Proceedings of the National Academy of Sciences, 107(10), 4705–4709. doi: 10.1073/pnas.0910397107.
Goel, V., & Dolan, R. J. (2001). Functional neuroanatomy of three-term relational reasoning. Neuropsychologia, 39(9), 901–909.
Goel, V., Gold, B., Kapur, S., & Houle, S. (1998). Neuroanatomical correlates of human reasoning. Journal of Cognitive Neuroscience, 10(3), 293–302. doi: 10.1162/089892998562744.
Grabner, R. H., Neubauer, A. C., & Stern, E. (2006). Superior performance and neural efficiency: The impact of intelligence and expertise. Brain Research Bulletin, 69(4), 422–439. doi: 10.1016/j.brainresbull.2006.02.009.
Grabner, R. H., Stern, E., & Neubauer, A. C. (2003). When intelligence loses its impact: Neural efficiency during reasoning in a familiar area. International Journal of Psychophysiology, 49(2), 89–98. doi: 10.1016/S0167-8760(03)00095-3.
Gray, J. R., Chabris, C. F., & Braver, T. S. (2003). Neural mechanisms of general fluid intelligence. Nature Neuroscience, 6(3), 316–322. doi: 10.1038/nn1014.
Gregory, M. D., Kippenhan, J. S., Dickinson, D., Carrasco, J., Mattay, V. S., Weinberger, D. R., & Berman, K. F. (2016). Regional variations in brain gyrification are associated with general cognitive ability in humans. Current Biology, 26(10), 1301–1305. doi: 10.1016/j.cub.2016.03.021.
Haier, R. (2016). The neuroscience of intelligence (Cambridge fundamentals of neuroscience in psychology). Cambridge University Press. doi: 10.1017/9781316105771.
Haier, R. J., Siegel, B. V., Nuechterlein, K. H., Hazlett, E., Wu, J. C., Paek, J., . . . Buchsbaum, M. S. (1988). Cortical glucose metabolic rate correlates of abstract reasoning and attention studied with positron emission tomography. Intelligence, 12(2), 199–217. doi: 10.1016/0160-2896(88)90016-5.
Haier, R. J., Siegel, B., Tang, C., Abel, L., & Buchsbaum, M. S. (1992). Intelligence and changes in regional cerebral glucose metabolic rate following learning. Intelligence, 16(3–4), 415–426. doi: 10.1016/0160-2896(92)90018-M.
Hammer, R., Paul, E. J., Hillman, C. H., Kramer, A. F., Cohen, N. J., & Barbey, A. K. (2019). Individual differences in analogical reasoning revealed by multivariate task-based functional brain imaging. NeuroImage, 184, 993–1004.
Hilger, K., Ekman, M., Fiebach, C. J., & Basten, U. (2017a). Efficient hubs in the intelligent brain: Nodal efficiency of hub regions in the salience network is associated with general intelligence. Intelligence, 60, 10–25. doi: 10.1016/j.intell.2016.11.001.
Hilger, K., Ekman, M., Fiebach, C. J., & Basten, U. (2017b). Intelligence is associated with the modular structure of intrinsic brain networks. Scientific Reports, 7(1), 16088. doi: 10.1038/s41598-017-15795-7.
Ioannidis, J. P. A. (2008). Why most discovered true associations are inflated. Epidemiology, 19(5), 640–648. doi: 10.1097/EDE.0b013e31818131e7.
Jaušovec, N. (2000). Differences in cognitive processes between gifted, intelligent, creative, and average individuals while solving complex problems: An EEG study. Intelligence, 28(3), 213–237. doi: 10.1016/S0160-2896(00)00037-4.
Jaušovec, N., & Jaušovec, K. (2004). Differences in induced brain activity during the performance of learning and working-memory tasks related to intelligence. Brain and Cognition, 54(1), 65–74. doi: 10.1016/S0278-2626(03)00263-X.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154. doi: 10.1017/S0140525X07001185.
Kievit, R. A., Davis, S. W., Griffiths, J., Correia, M. M., Cam-CAN, & Henson, R. N. (2016). A watershed model of individual differences in fluid intelligence. Neuropsychologia, 91, 186–198. doi: 10.1016/j.neuropsychologia.2016.08.008.
Knauff, M., Mulack, T., Kassubek, J., Salih, H. R., & Greenlee, M. W. (2002). Spatial imagery in deductive reasoning: A functional MRI study. Cognitive Brain Research, 13(2), 203–212.
Kruschwitz, J. D., Waller, L., Daedelow, L. S., Walter, H., & Veer, I. M. (2018). General, crystallized and fluid intelligence are not associated with functional global network efficiency: A replication study with the Human Connectome Project 1200 data set. NeuroImage, 171, 323–331. doi: 10.1016/j.neuroimage.2018.01.018.
Lee, K. H., Choi, Y. Y., Gray, J. R., Cho, S. H., Chae, J.-H., Lee, S., & Kim, K. (2006). Neural correlates of superior intelligence: Stronger recruitment of posterior parietal cortex. NeuroImage, 29(2), 578–586. doi: 10.1016/j.neuroimage.2005.07.036.
Lipp, I., Benedek, M., Fink, A., Koschutnig, K., Reishofer, G., Bergner, S., . . . Neubauer, A. C. (2012). Investigating neural efficiency in the visuo-spatial domain: An fMRI study. PLoS One, 7(12), e51316. doi: 10.1371/journal.pone.0051316.
McKiernan, K. A., Kaufman, J. N., Kucera-Thompson, J., & Binder, J. R. (2003). A parametric manipulation of factors affecting task-induced deactivation in functional neuroimaging. Journal of Cognitive Neuroscience, 15(3), 394–408. doi: 10.1162/089892903321593117.
Mennes, M., Biswal, B. B., Castellanos, F. X., & Milham, M. P. (2013). Making data sharing work: The FCP/INDI experience. NeuroImage, 82, 683–691. doi: 10.1016/j.neuroimage.2012.10.064.
Miller, D. I., & Halpern, D. F. (2014). The new science of cognitive sex differences. Trends in Cognitive Sciences, 18(1), 37–45. doi: 10.1016/j.tics.2013.10.011.
Miller, E. M. (1994). Intelligence and brain myelination: A hypothesis. Personality and Individual Differences, 17(6), 803–832. doi: 10.1016/0191-8869(94)90049-3.
Miller, K. L., Alfaro-Almagro, F., Bangerter, N. K., Thomas, D. L., Yacoub, E., Xu, J., . . . Smith, S. M. (2016). Multimodal population brain imaging in the

257

258

u. basten and c. j. fiebach

UK Biobank prospective epidemiological study. Nature Neuroscience, 19(11), 1523–1536. doi: 10.1038/nn.4393. Neubauer, A. C., & Fink, A. (2009). Intelligence and neural efficiency. Neuroscience & Biobehavioral Reviews, 33(7), 1004–1023. doi: 10.1016/j.neubiorev.2009. 04.001. Neubauer, A. C., Fink, A., & Schrausser, D. G. (2002). Intelligence and neural efficiency: The influence of task content and sex on the brain–IQ relationship. Intelligence, 30(6), 515–536. doi: 10.1016/S0160–2896(02)00091-0. Neubauer, A. C., Freudenthaler, H. H., & Pfurtscheller, G. (1995). Intelligence and spatiotemporal patterns of event-related desynchronization (ERD). Intelligence, 20(3), 249–266. doi: 10.1016/0160-2896(95)90010-1. Neubauer, A. C., Grabner, R. H., Fink, A., & Neuper, C. (2005). Intelligence and neural efficiency: Further evidence of the influence of task content and sex on the brain–IQ relationship. Cognitive Brain Research, 25(1), 217–225. doi: 10.1016/j.cogbrainres.2005.05.011. Neubauer, A. C., Grabner, R. H., Freudenthaler, H. H., Beckmann, J. F., & Guthke, J. (2004). Intelligence and individual differences in becoming neurally efficient. Acta Psychologica, 116(1), 55–74. doi: 10.1016/j.actpsy.2003.11.005. Neuper, C., Grabner, R. H., Fink, A., & Neubauer, A. C. (2005). Long-term stability and consistency of EEG event-related (de-)synchronization across different cognitive tasks. Clinical Neurophysiology, 116(7), 1681–1694. doi: 10.1016/j. clinph.2005.03.013. Niendam, T. A., Laird, A. R., Ray, K. L., Dean, Y. M., Glahn, D. C., & Carter, C. S. (2012). Meta-analytic evidence for a superordinate cognitive control network subserving diverse executive functions. Cognitive, Affective, & Behavioral Neuroscience, 12(2), 241–268. doi: 10.3758/s13415–011-0083-5. O’Boyle, M. W., Cunnington, R., Silk, T. J., Vaughan, D., Jackson, G., Syngeniotis, A., & Egan, G. F. (2005). Mathematically gifted male adolescents activate a unique brain network during mental rotation. 
Cognitive Brain Research, 25(2), 583–587. doi: 10.1016/j.cogbrainres.2005.08.004. Parks, R. W., Loewenstein, D. A., Dodrill, K. L., Barker, W. W., Yoshii, F., Chang, J. Y., . . . Duara, R. (1988). Cerebral metabolic effects of a verbal fluency test: A PET scan study. Journal of Clinical and Experimental Neuropsychology, 10(5), 565–575. doi: 10.1080/01688638808402795. Paul, E. J., Larsen, R. J., Nikolaidis, A., Ward, N., Hillman, C. H., Cohen, N. J., . . . Barbey, A. K. (2016). Dissociable brain biomarkers of fluid intelligence. Neuroimage, 137, 201–211. Penke, L., Maniega, S. M., Bastin, M. E., Valdés Hernández, M. C., Murray, C., Royle, N. A., . . . Deary, I. J. (2012). Brain white matter tract integrity as a neural foundation for general intelligence. Molecular Psychiatry, 17(10), 1026–1030. doi: 10.1038/mp.2012.66. Pfurtscheller, G., & Aranibar, A. (1977). Event-related cortical desynchronization detected by power measurements of scalp EEG. Electroencephalography and Clinical Neurophysiology, 42(6), 817–826. doi: 10.1016/0013-4694(77) 90235-8. Poldrack, R.A. (2015). Is “efficiency” a useful concept in cognitive neuroscience? Developments in Cognitive Neuroscience, 11, 12–17.

Functional Brain Imaging of Intelligence

Prabhakaran, V., Rypma, B., & Gabrieli, J. D. E. (2001). Neural substrates of mathematical reasoning: A functional magnetic resonance imaging study of neocortical activation during performance of the necessary arithmetic operations test. Neuropsychology, 15(1), 115–127. doi: 10.1037/0894-4105.15.1.115. Prabhakaran, V., Smith, J. A. L., Desmond, J. E., Glover, G. H., & Gabrieli, J. D. E. (1997). Neural substrates of fluid reasoning: An fMRI study of neocortical activation during performance of the Raven’s progressive matrices test. Cognitive Psychology, 33(1), 43–63. doi: 10.1006/cogp.1997.0659. Santarnecchi, E., Emmendorfer, A., & Pascual-Leone, A. (2017). Dissecting the parieto-frontal correlates of fluid intelligence: A comprehensive ALE metaanalysis study. Intelligence, 63, 9–28. doi: 10.1016/j.intell.2017.04.008. Santarnecchi, E., Emmendorfer, A., Tadayon, S., Rossi, S., Rossi, A., & Pascual-Leone, A. (2017). Network connectivity correlates of variability in fluid intelligence performance. Intelligence, 65, 35–47. doi: 10.1016/j.intell.2017.10.002. Spearman, C. (1904). “General intelligence,” objectively determined and measured. The American Journal of Psychology, 15(2), 201–293. doi: 10.2307/1412107. Sripada, C., Angstadt, M., & Rutherford, S. (2018). Towards a “treadmill test” for cognition: Reliable prediction of intelligence from whole-brain task activation patterns. BioRxiv, 412056. doi: 10.1101/412056. Sudlow, C., Gallacher, J., Allen, N., Beral, V., Burton, P., Danesh, J., . . . Collins, R. (2015). UK Biobank: An open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Medicine, 12(3), e1001779. doi: 10.1371/journal.pmed.1001779. Takeuchi, H., Taki, Y., Nouchi, R., Yokoyama, R., Kotozaki, Y., Nakagawa, S., . . . Kawashima, R. (2018). General intelligence is associated with working memory-related brain activity: New evidence from a large sample study. Brain Structure and Function, 223(9), 4243–4258. 
doi: 10.1007/s00429–0181747-5. Toffanin, P., Johnson, A., de Jong, R., & Martens, S. (2007). Rethinking neural efficiency: Effects of controlling for strategy use. Behavioral Neuroscience, 121(5), 854–870. doi: 10.1037/0735-7044.121.5.854. Turner, B. O., Paul, E. J., Miller, M. B., & Barbey, A. K. (2018). Small sample sizes reduce the replicability of task-based fMRI studies. Communications Biology, 1, 62. doi: 10.1038/s42003-018-0073-z. van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. Journal of Neuroscience, 29(23), 7619–7624. doi: 10.1523/JNEUROSCI.1443-09.2009. Van Essen, D. C., Smith, S. M., Barch, D. M., Behrens, T. E. J., Yacoub, E., & Ugurbil, K. (2013). The WU-Minn Human Connectome Project: An overview. NeuroImage, 80(15), 62–79. doi: 10.1016/j.neuroimage.2013.05.041. Woolgar, A., Duncan, J., Manes, F., & Fedorenko, E. (2018). Fluid intelligence is supported by the multiple-demand system not the language system. Nature Human Behaviour, 2(3), 200–204. doi: 10.1038/s41562–017-0282-3. Woolgar, A., Parr, A., Cusack, R., Thompson, R., Nimmo-Smith, I., Torralva, T., . . . Duncan, J. (2010). Fluid intelligence loss linked to restricted regions of damage within frontal and parietal cortex. Proceedings of the National Academy of Sciences, 107(33), 14899–14902. doi: 10.1073/pnas.1007928107.

259

260

u. basten and c. j. fiebach

Yarkoni, T. (2009). Big correlations in little studies: Inflated fMRI correlations reflect low statistical power – Commentary on Vul et al. (2009). Perspectives on Psychological Science, 4(3), 294–298. doi: 10.1111/j.1745-6924.2009.01127.x. Yarkoni, T., Poldrack, R. A., Van Essen, D. C., & Wager, T. D. (2010). Cognitive neuroscience 2.0: Building a cumulative science of human brain function. Trends in Cognitive Sciences, 14(11), 489–496. doi: 10.1016/j.tics.2010.08.004. Yarkoni, T., & Westfall, J. (2017). Choosing prediction over explanation in psychology: Lessons from machine learning. Perspectives on Psychological Science, 12(6), 1100–1122. doi: 10.1177/1745691617693393. Yeo, B. T., Krienen, F. M., Sepulcre, J., Sabuncu, M. R., Lashkari, D., Hollinshead, M., . . . Buckner, R. L. (2011). The organization of the human cerebral cortex estimated by intrinsic functional connectivity. Journal of Neurophysiology, 106(3), 1125–1165. doi: 10.1152/jn.00338.2011.

13 An Integrated, Dynamic Functional Connectome Underlies Intelligence

Jessica R. Cohen and Mark D’Esposito

Intelligence is an elusive concept. For well over a century, what exactly intelligence is and how best to measure it have been debated (see Sternberg & Kaufman, 2011). One predominant factorization separates intelligence into fluid and crystallized components, with fluid intelligence measuring one’s reasoning and problem-solving ability, and crystallized intelligence measuring lifetime knowledge (Cattell, 1971). Influential theories of intelligence, particularly fluid intelligence, have proposed that aspects of cognitive control, most notably working memory, are the drivers of intelligent behavior (Conway, Getz, Macnamara, & Engel de Abreu, 2011; Conway, Kane, & Engle, 2003; Kane & Engle, 2002; Kovacs & Conway, 2016). More specifically, it is thought that the control aspect of working memory, the central executive proposed by Baddeley and Hitch (1974), is the basis for the types of cognitive processes tapped by intelligence assessments (Conway et al., 2003; Kane & Engle, 2002). It has further been proposed that the control process underlying intelligence may not be a single process, but instead a cluster of domain-general control processes, including attentional control, interference resolution, updating of relevant information, and others (Conway et al., 2011; Kovacs & Conway, 2016). Here, we focus on what we have learned about how intelligence emerges from brain function, taking the perspective that cognitive control ability and intelligence are supported by similar brain mechanisms, namely integration, efficiency, and plasticity. These mechanisms are best investigated using brain network methodology.
From a network neuroscience perspective, integration refers to interactions across distinct brain networks; efficiency refers to the speed at which information can be transferred across the brain; and plasticity refers to the ability of brain networks to reconfigure, or rearrange, into an organization that is optimal for the current context. Therefore, we review relevant literature relating brain network function to both intelligence and cognitive control, as well as literature relating intelligence to cognitive control. Given the strong link between fluid intelligence, in particular, and cognitive control, we focus mainly on literature probing fluid intelligence in this chapter.


Correspondence between Neural Models of Intelligence and Neural Models of Cognitive Control

Neural models of intelligence propose that a distributed network of brain regions underlies intelligence, and that the efficiency with which information can be transferred across this network, as well as its ability to adapt to changing environmental demands, determines the level of intelligence of an individual (Barbey, 2018; Euler, 2018; Garlick, 2002; Haier et al., 1988; Jung & Haier, 2007; Mercado, 2008). As an example, the parieto-frontal integration theory (P-FIT) model of intelligence proposes that parietal and prefrontal brain regions work together as a network, connected by white matter tracts, to produce intelligent behavior (Jung & Haier, 2007). Later work additionally proposed that the insula and subcortical regions, as well as white matter tracts connecting frontal cortices to the rest of the brain, are also related to intelligence (Basten, Hilger, & Fiebach, 2015; Gläscher et al., 2010). While these particular theories are focused on the relevance of specific brain regions and connections, a notable conclusion that emerges from this literature is that widespread brain regions distributed across the entire brain, with a variety of functional roles, work in concert to produce intelligent behavior. Other neural models of intelligence focus on the mechanisms through which intelligent behavior can emerge, without proposing that certain brain regions or connections are more important than others. For example, it has been proposed that whole-brain neural efficiency (i.e., speed of information transfer) is a key feature of intelligence, as is cortical plasticity, the brain’s ability to adjust patterns of communication based on current demands (Garlick, 2002; Mercado, 2008). These ideas have recently been reframed in terms of network neuroscience (Barbey, 2018).
The large-scale network organization of the brain, as well as its ability to dynamically reconfigure based on current demands, is purported to be more important for intelligence than the specific brain regions or networks that are engaged at any given moment. It has further been proposed, however, that the reconfiguration and interaction of fronto-parietal networks in particular with downstream brain networks may drive fluid intelligence (Barbey, 2018). Finally, it has also been proposed that the ability of the brain to efficiently adapt its function through prediction error learning and the reduction of uncertainty (“predictive processing”) may underlie intelligence (Euler, 2018). Supporting our proposition that cognitive control and intelligence rely on highly overlapping neural mechanisms, models of the neural basis of cognitive control are strikingly similar to models of the neural basis of intelligence. In fact, they incorporate the same criteria: integration across distinct brain regions (or networks) to increase efficiency, as well as the ability to dynamically reconfigure communication patterns in response to current cognitive demands. Early research focusing on brain network organization underlying cognitive control focused on interactions across specific networks thought to
be related to dissociable cognitive control processes. In this literature, brain networks are often divided into “processors”, or groups of brain regions whose role is specialized for a particular operation (i.e., sensory input or motor output), and “controllers”, or groups of brain regions whose role is to integrate across multiple processors to affect their operations (Dehaene, Kerszberg, & Changeux, 1998; Power & Petersen, 2013). Network neuroscience has made it clear that controllers, which were originally thought to lie predominantly in the prefrontal cortex (Duncan, 2001; Miller & Cohen, 2001), are in fact distributed throughout the brain (Betzel, Gu, Medaglia, Pasqualetti, & Bassett, 2016; Gratton, Sun, & Petersen, 2018). Two candidates for controller networks that are critical for distinct aspects of cognitive control are the fronto-parietal (FP) and cingulo-opercular (CO) networks (Dosenbach, Fair, Cohen, Schlaggar, & Petersen, 2008; Power & Petersen, 2013). The FP network is engaged during tasks that require updating of information to be manipulated. The CO network is engaged during tasks that require the maintenance of task goals, error monitoring, and attention to salient stimuli. Some regions of these networks, such as the anterior cingulate cortex and the insula, are important aspects of neural models of intelligence (Basten et al., 2015; Jung & Haier, 2007). Brain regions critical for both cognitive control and intelligence have been directly compared in patients with brain lesions using voxel-based lesion-symptom mapping. Regions of the FP network, as well as white matter tracts connecting these regions, were found to be critical for performance on both tasks of intelligence and tasks probing cognitive control (Barbey et al., 2012). 
Additionally, the multiple-demand (MD) system (Duncan, 2010), which comprises regions from both the FP and CO networks and has high overlap with the regions emphasized in the P-FIT model of intelligence, has been explicitly linked to intelligence. A key aspect of theories regarding specific networks that underlie cognitive control is that, for successful cognitive control to be exerted, the networks must interact with each other as well as with task-specific networks, such as sensory networks, motor networks, or networks that underlie task-relevant cognitive processes (language, memory, etc.). More broadly, the theory that integration across distinct brain networks is critical for complex cognition has been asserted (Mesulam, 1990; Sporns, 2013) and is supported by empirical literature (for a review, see Shine & Poldrack, 2018). In light of these theories, this chapter provides a focused review of literature probing how functional connectivity and brain network organization underlie cognitive control and intelligence, to shed light on how intelligence may emerge from large-scale brain organization and dynamics. We begin by discussing functional connectivity and graph theory methods. Next, we review literature directly relating brain network organization to measures of intelligence, followed by literature relating brain network organization to cognitive control. We then discuss translational applications, and end by suggesting promising future directions this field could take to elucidate the brain network mechanisms underlying
cognitive control and intelligence. We draw on important theories and empirical evidence regarding the brain network basis of intelligence and of cognitive control to assert that both cognitive control and intelligence emerge from similar brain mechanisms, namely integration across distinct brain networks to increase efficiency, and dynamic reconfiguration in the service of current goals.

The Brain as a Network

It is well-known that individual brain regions do not act in isolation but are instead embedded within a larger system (see also Chapter 6, by Barbey). One approach toward studying the brain as a network is the use of mathematical tools, such as graph theory, which have a long history of being used to describe interactions across elements of large-scale, interconnected systems, such as social networks or airline route maps (Guimerà, Mossa, Turtschi, & Amaral, 2005; Newman & Girvan, 2004; see Chapter 2, by Hilger and Sporns, for a more detailed discussion of methods for studying networks). There is a growing body of literature that implements graph theory to describe the brain as a large-scale network, also referred to as a connectome, in which individual brain regions are graph nodes, and connections across brain regions are graph edges (Bullmore & Sporns, 2009; Sporns, 2010). Brain graph nodes can be defined using structural boundaries (i.e., individual gyri or subcortical structures), functional boundaries (i.e., voxels within dorsolateral prefrontal cortex that are engaged during working memory), or in a voxel-wise manner (i.e., each voxel is considered to be a graph node). Brain graph edges can be defined using physical structures (i.e., white matter tracts measured with diffusion MRI [dMRI]) or functional connections (i.e., coherent fluctuations in blood oxygen level-dependent [BOLD] signal measured with functional MRI [fMRI]). Structural brain graphs are often defined by counting the number of white matter tracts between each pair of brain regions, with a greater number of tracts reflecting stronger structural connectivity. Measures of white matter integrity, such as fractional anisotropy, can also be used to quantify structural connectivity strength.
Functional brain graphs, on the other hand, are often defined by quantifying the strength of the correlation between temporal fluctuations in BOLD magnitude across two brain regions, with stronger correlations indicating greater functional connectivity. Other functional measures, such as coherence or covariance measures, as well as directed connectivity measures using techniques such as dynamic causal modeling or structural equation modeling, are alternate methods for estimating functional edge strength. Once nodes are defined and edges between all pairs of nodes estimated, the overall topological organization of


Figure 13.1 Brain graph schematic. Gray ovals are networks of the graph, circles are nodes, dashed lines are within-network edges, and solid lines are between-network edges. Figure adapted from Cohen and D’Esposito (2016). See also Table 13.1.

a brain graph can be quantified using whole-brain summary measures without losing information about individual nodes or individual edges. Figure 13.1 and Table 13.1 describe a subset of whole-brain summary measures and nodal measures that are commonly used in network neuroscience.

Table 13.1 Definitions and descriptions of graph theory metrics.

Degree: The number of edges of a node. Indicates how highly connected a node is. (Depiction: the orange node has a degree of 6.)

System Segregation: Relative strength of within-network connectivity as compared to between-network connectivity. Indicates segregation into distinct networks with stronger within-network and weaker between-network connectivity. (Depiction: edges within gray ovals vs. edges spanning different ovals.)

Modularity: Degree to which within-network edges are stronger than expected at random. Indicates segregation into distinct networks with strong within-network connectivity. (Depiction: gray ovals.)

Within-module Degree: Number of within-network connections of a node relative to the average number of connections. Identifies nodes important for within-network communication (provincial hub nodes). (Depiction: orange node.)

Participation Coefficient: Number of inter-network connections relative to all connections of a node. Identifies nodes important for across-network integration (connector hub nodes). (Depiction: green node.)

Path Length: Shortest distance (number of edges) between a pair of nodes. Reflects the efficiency of information transfer between two nodes. (Depiction: blue edges.)

Clustering Coefficient: Probability that two nodes connected to a third are also connected to each other. Describes the interconnectedness (clique-ishness) of groups of neighboring nodes. (Depiction: red nodes/edges.)

Global Efficiency: Inverse of the average shortest path length across the entire system. Reflects efficient global, integrative communication. (Depiction: blue edges.)

Local Efficiency: Inverse of the average shortest path length across a system consisting only of a node’s immediate neighbors. Reflects efficient local, within-network communication. (Depiction: red nodes/edges.)

Using a network neuroscience approach, the brain has been found to have characteristics of both small-world and modular networks. A small-world network is one in which groups of tightly interconnected regions have sparse, long-range connections across them, and is defined as a combination of high clustering coefficient and relatively short path length (Bassett & Bullmore, 2006). Small-world networks allow for both specialized information processing (within tightly connected clusters of regions) and integrated, distributed information processing (long-range connections that cut across clusters) (Bassett & Bullmore, 2006). They further minimize wiring costs (only a small subset of connections are long, metabolically costly connections), and thus have an efficient network structure (Bullmore & Sporns, 2012). Modular networks are small-world networks in which the tightly-interconnected regions are organized into communities (also referred to as individual networks or modules). In addition to the aforementioned features of small-world networks, modular networks are resilient to brain damage (Alstott, Breakspear, Hagmann, Cammoun, & Sporns, 2009; Gratton, Nomura, Perez, & D’Esposito, 2012) and are highly adaptable (both on short timescales, such as when engaged in a cognitive task, and on long timescales, such as across evolution) (Meunier, Lambiotte, & Bullmore, 2010). A small-world,
modular brain organization has long been proposed to be critical for complex cognition (Dehaene et al., 1998), including intelligence (Jung & Haier, 2007; Mercado, 2008). Finally, brain networks can be characterized at different timescales. Static connectivity, or functional connectivity assessed across several minutes, is useful for understanding dominant patterns of brain network organization during a specific cognitive context (i.e., during a resting state scan or during a working memory task). It has been proposed that static connectivity, in particular during a resting state, may reflect a trait-level marker of one’s brain network organization (Finn et al., 2015; Gratton, Laumann et al., 2018). Dynamic, or time-varying, functional connectivity is assessed over shorter timescales and may reflect rapid, momentary changes in cognitive demands or in internal state (Cohen, 2018; Kucyi, Tambini, Sadaghiani, Keilholz, & Cohen, 2018). While some time-varying functional connectivity methods may be particularly sensitive to artifact, model-based approaches have been shown to more accurately reflect true underlying changes in patterns of functional connectivity (see Cohen, 2018 for a discussion of methodological challenges related to time-varying functional connectivity). Dynamic changes in network organization are a key part of neural models of intelligent behavior (Barbey, 2018; Garlick, 2002; Mercado, 2008), thus both static and dynamic functional connectivity will be discussed here.
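The pipeline described in this section (define nodes, estimate correlation-based edges, then summarize the graph's topology) can be sketched with simulated data. This is a minimal illustration, not any published analysis; it assumes numpy and networkx are available, the "BOLD" time series are synthetic, and the 80th-percentile threshold is an arbitrary choice:

```python
# Minimal sketch: build a functional brain graph from simulated BOLD time
# series, then compute several of the summary metrics listed in Table 13.1.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

rng = np.random.default_rng(0)
n_regions, n_timepoints = 16, 200

# Simulated signals: two "modules" of 8 regions, each sharing a common signal.
ts = rng.standard_normal((n_regions, n_timepoints))
ts[:8] += rng.standard_normal(n_timepoints)   # module 1 shared signal
ts[8:] += rng.standard_normal(n_timepoints)   # module 2 shared signal

# Edges: pairwise Pearson correlations, thresholded to keep the strongest ~20%.
fc = np.corrcoef(ts)
np.fill_diagonal(fc, 0.0)
threshold = np.percentile(np.abs(fc), 80)
adjacency = (np.abs(fc) >= threshold).astype(int)
G = nx.from_numpy_array(adjacency)

mean_degree = np.mean([d for _, d in G.degree()])   # Degree
clustering = nx.average_clustering(G)               # Clustering Coefficient
glob_eff = nx.global_efficiency(G)                  # Global Efficiency
loc_eff = nx.local_efficiency(G)                    # Local Efficiency
communities = greedy_modularity_communities(G)      # data-driven modules
q = modularity(G, communities)                      # Modularity

print(f"mean degree {mean_degree:.1f}, clustering {clustering:.2f}")
print(f"global eff. {glob_eff:.2f}, local eff. {loc_eff:.2f}, Q {q:.2f}")
```

On real data, the nodes would come from a parcellation and the time series from preprocessed fMRI; the threshold choice (and whether to binarize at all) substantially affects the resulting metrics.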

Intelligence and the Functional Connectome

Some extant literature has related brain network organization to intelligence as measured with normed and psychometrically validated tests, such as estimated full-scale IQ (FSIQ) or IQ subscores using the Wechsler Adult Intelligence Scale (Wechsler, 2008) or the Wechsler Abbreviated Scale of Intelligence (Wechsler, 2011), or measures focused on fluid intelligence such as the Perceptual Reasoning Index of the Wechsler scales, Raven’s Progressive Matrices (Raven, 2000), or the Cattell Culture Fair Test (Cattell & Horn, 1978). This literature consistently finds that the degree of network integration, or interactions across widely distributed brain regions, underlies intelligence. This is observed regardless of data modality (structural white matter networks, resting state functional networks, or task-based functional networks). For example, studies of structural brain network organization characterized using dMRI have found that a greater number of white matter tracts, resulting in greater network integration, is related to higher FSIQ (Bohlken et al., 2016), as is the existence of tracts with properties that result in more efficient information flow (i.e., higher fractional anisotropy or reduced radial diffusivity or mean diffusivity) (Malpas et al., 2016). Additionally, global efficiency of white matter tracts has been found to be related to FSIQ, as well as performance IQ and verbal IQ subscales (Li et al., 2009).


Studies of brain network organization measured using resting state fMRI (rs-fMRI) have generally found that greater integration (stronger functional connectivity distributed across the brain and lower path length) is related to FSIQ (Langer et al., 2012; Malpas et al., 2016; Song et al., 2008; van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009; Wang, Song, Jiang, Zhang, & Yu, 2011). Notably, greater global and local integration (i.e., both across and within distinct brain networks) in frontal, parietal, and temporal regions may be particularly important for intelligence (Cole, Ito, & Braver, 2015; Cole, Yarkoni, Repovš, Anticevic, & Braver, 2012; Hearne, Mattingley, & Cocchi, 2016; Hilger, Ekman, Fiebach, & Basten, 2017a, 2017b; Malpas et al., 2016; Santarnecchi et al., 2017; Song et al., 2008; van den Heuvel et al., 2009; Wang et al., 2011). Using machine learning approaches, patterns of whole-brain rs-fMRI functional connectivity have been found to successfully predict intelligence scores on fluid intelligence measures (Dubois, Galdi, Paul, & Adolphs, 2018; Finn et al., 2015; Greene, Gao, Scheinost, & Constable, 2018). While some studies have found that connections within and across networks encompassing frontal, parietal, and temporal regions can more successfully predict intelligence scores than whole-brain connectivity matrices (Finn et al., 2015), other studies have found that focusing on specific networks or pairs of networks reduces success as compared to utilizing whole-brain connectivity patterns (Dubois et al., 2018). A few studies have assessed functional brain network organization during cognitive tasks as compared to during rs-fMRI and related that to fluid intelligence. In a study directly quantifying how network reconfiguration between rs-fMRI and cognitive tasks relates to fluid intelligence, it was observed that a more similar network organization between rest and the tasks was indicative of higher intelligence (Schultz & Cole, 2016). 
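Schematically, these prediction studies vectorize each subject's connectivity matrix and learn a mapping to behavioral scores that is evaluated out of sample. A deliberately simplified sketch with simulated data (ridge regression via scikit-learn; the published pipelines differ in feature selection, model, and validation scheme):

```python
# Sketch: cross-validated prediction of a behavioral score from vectorized
# functional connectivity "edges". All data are simulated.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_subjects, n_regions = 150, 10
n_edges = n_regions * (n_regions - 1) // 2    # upper-triangle edge count (45)

# Each row: one subject's vectorized connectivity matrix (simulated here).
X = rng.standard_normal((n_subjects, n_edges))

# Simulated "intelligence" score driven by a small set of informative edges.
weights = np.zeros(n_edges)
weights[:10] = 1.0
y = X @ weights + rng.standard_normal(n_subjects)

# Out-of-sample fit via 5-fold cross-validation.
r2 = cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {r2.mean():.2f}")
```

With real connectomes, X would hold the upper triangle of each subject's correlation matrix (e.g., via numpy.triu_indices), and nested cross-validation would be needed to tune the regularization without leakage.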
This held for both cognitive control tasks (i.e., working memory or relational reasoning) and other cognitive tasks (i.e., language comprehension). Critically, when comparing the ability to predict intelligence scores using either rs-fMRI or task-based functional connectivity patterns, it has been found that functional brain network organization measured during working memory tasks is able to predict intelligence scores more successfully than during rs-fMRI (Greene et al., 2018; Xiao, Stephen, Wilson, Calhoun, & Wang, 2018). Together, this line of research indicates that it is critical to investigate task-based network organization, and particularly that of cognitive control tasks, to understand how brain function underlies intelligence.

Cognitive Control and the Functional Connectome

One of the most well-studied forms of cognitive control from a network perspective is working memory. Similar to studies relating network organization to intelligence, we have found that, when engaged in a working
memory task, global measures of network integration increase, while global measures of network segregation decrease (Cohen & D’Esposito, 2016). This increased integration was observed between networks important for cognitive control (FP, CO) and task-relevant sensory processes (the somatomotor network) in particular (Cohen, Gallen, Jacobs, Lee, & D’Esposito, 2014). Importantly, individuals with higher network integration during the performance of tasks probing working memory performed better on the tasks (Cohen & D’Esposito, 2016; Cohen et al., 2014). These findings have held both in young adults (Cohen & D’Esposito, 2016; Cohen et al., 2014) and in older adults (Gallen, Turner, Adnan, & D’Esposito, 2016). Increased integration between cognitive control networks during working memory has been observed by others as well (Gordon, Stollstorff, & Vaidya, 2012; Liang, Zou, He, & Yang, 2016), with a parametric relationship between increased integration across networks and increasing working memory load (Finc et al., 2017; Kitzbichler, Henson, Smith, Nathan, & Bullmore, 2011; Vatansever, Menon, Manktelow, Sahakian, & Stamatakis, 2015; Zippo, Della Rosa, Castiglioni, & Biella, 2018). In addition, the strength and density of functional connectivity in frontal, parietal, and occipital regions related to cognitive control and task-relevant sensory processes have also been shown to increase with increasing working memory load (Liu et al., 2017). Importantly, this increased integration has been related to improved working memory performance (Bassett et al., 2009; Finc et al., 2017; Stanley, Dagenbach, Lyday, Burdette, & Laurienti, 2014; Vatansever et al., 2015).
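The integration and segregation measures in this literature are often simple contrasts of within- versus between-network connectivity. A minimal sketch of one common operationalization, a system segregation index, using a simulated connectivity matrix and hypothetical network labels (not the exact formula of any single cited study):

```python
# Sketch: system segregation = (within - between) / within, computed from a
# toy region-by-region connectivity matrix with known network labels.
import numpy as np

rng = np.random.default_rng(3)
n_regions = 12
labels = np.array([0] * 4 + [1] * 4 + [2] * 4)   # three hypothetical networks

# Toy connectivity: weak baseline edges plus stronger within-network edges.
fc = rng.normal(0.1, 0.05, size=(n_regions, n_regions))
for net in np.unique(labels):
    idx = np.where(labels == net)[0]
    fc[np.ix_(idx, idx)] += 0.4
fc = (fc + fc.T) / 2
np.fill_diagonal(fc, 0.0)

same = labels[:, None] == labels[None, :]
off_diag = ~np.eye(n_regions, dtype=bool)
within = fc[same & off_diag].mean()              # mean within-network edge
between = fc[~same].mean()                       # mean between-network edge
segregation = (within - between) / within        # higher = more segregated
print(f"within {within:.2f}, between {between:.2f}, segregation {segregation:.2f}")
```

Comparing this index between task and rest conditions is one way to quantify the task-related shift toward integration described above.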
Time-varying functional connectivity analyses during working memory tasks have indicated that levels of network integration and segregation fluctuate even within trials (Zippo et al., 2018), that more time is spent overall in an integrated state during a working memory task than during rest or non-cognitive control tasks (Shine et al., 2016), and that more stable functional connectivity during working memory is related to better performance (Chen, Chang, Greicius, & Glover, 2015). Studies that have probed other aspects of cognitive control, such as task-switching (Chen, Cai, Ryali, Supekar, & Menon, 2016; Cole, Laurent, & Stocco, 2013; Yin, Wang, Pan, Liu, & Chen, 2015), interference resolution (Elton & Gao, 2014, 2015; Hutchison & Morton, 2015), and sustained attention (Ekman, Derrfuss, Tittgemeyer, & Fiebach, 2012; Godwin, Barry, & Marois, 2015; Spadone et al., 2015), have reported similar findings with regard to network integration and time-varying functional connectivity.

Together, the findings from studies that examine functional brain network organization during the performance of cognitive control tasks are strikingly similar to those relating functional brain network organization to standard measures of intelligence (FSIQ or fluid intelligence). Specifically, both point to increased network integration, particularly involving the FP and CO networks and their connections to task-relevant sensory and motor networks. Moreover, a greater degree of network reconfiguration during rest, in combination with more stable network organization during task, is indicative of increased cognitive control ability. This supports our claim that cognitive control and intelligence rely on similar neural mechanisms, namely network integration and reconfiguration (i.e., plasticity) (Barbey, 2018; Dehaene et al., 1998; Garlick, 2002; Jung & Haier, 2007; Mercado, 2008; Power & Petersen, 2013). We propose that, because the brain is organized in a modular structure, integration is crucial for efficient communication across distinct modules, each of which implements a specific function (e.g., sensation, motor execution, attention, memory). The capacity for plasticity allows for rapid changes in the patterns of communication within and across modules based on current demands. Together, these features of brain organization allow for all aspects of goal-directed higher-order cognition, including cognitive control and intelligence.
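The time-varying functional connectivity analyses cited in this section are often based on sliding-window correlations. The sketch below, on simulated time series, illustrates the basic computation; the window length, step size, and the edge-variability index are illustrative choices of ours, not those of any cited study.

```python
# Illustrative sliding-window functional connectivity on simulated data:
# correlate regional time series within short windows and track how
# connectivity fluctuates over time.
import numpy as np

rng = np.random.default_rng(42)
n_timepoints, n_regions, window = 300, 10, 30

# Simulated BOLD-like signals with a shared component (induces correlation).
shared = rng.normal(size=(n_timepoints, 1))
ts = 0.5 * shared + rng.normal(size=(n_timepoints, n_regions))

def sliding_window_fc(ts, window, step=5):
    """Return one region-by-region correlation matrix per window position."""
    mats = []
    for start in range(0, ts.shape[0] - window + 1, step):
        mats.append(np.corrcoef(ts[start:start + window].T))
    return np.stack(mats)

fc_t = sliding_window_fc(ts, window)
# Variability of each edge over time: one simple index of network "flexibility".
edge_std = fc_t.std(axis=0)
print(f"{fc_t.shape[0]} windows; mean edge std = {edge_std.mean():.2f}")
```

Lower edge variability during task (relative to rest) would correspond to the "more stable connectivity during successful task performance" pattern reported by Chen et al. (2015).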

Translational Applications

Literature probing the translational relevance of the relationship between network organization/plasticity and intelligence is scant; we therefore highlight findings from the cognitive control literature. Impairment in cognitive control is hypothesized to be a transdiagnostic marker for brain disorders (Goschke, 2014; McTeague, Goodkind, & Etkin, 2016; Snyder, Miyake, & Hankin, 2015). Notably, transdiagnostic alterations in brain structure and function, particularly as related to regions of the CO and FP networks, are related to these impairments in cognitive control (Goodkind et al., 2015; McTeague et al., 2017). These findings are not specific to adults; in a large sample of children and adolescents with symptoms from multiple disorders, CO dysfunction during a working memory task was related to a general psychopathology trait (Shanmugan et al., 2016). From a brain network perspective, consistent, reliable disruptions to patterns of brain network organization and dynamics have also been observed transdiagnostically (Calhoun, Miller, Pearlson, & Adalı, 2014; Cao, Wang, & He, 2015; Fornito, Zalesky, & Breakspear, 2015; Xia & He, 2011). Better characterizing network topology and dynamics will allow predictions to be made about brain dysfunction and about the cognitive processes and behavioral symptoms that arise from such dysfunction (Deco & Kringelbach, 2014; Fornito et al., 2015). As an example, we have shown that damage to nodes that integrate across distinct brain networks causes more extensive brain network disruption than damage to nodes whose connections are contained within specific networks (Gratton et al., 2012). By leveraging information about dysfunctional resting state network organization and the roles of specific nodes in psychiatric and neurologic disorders, it is possible to identify promising neural sites to target with brain stimulation as a means of treatment (Dubin, 2017; Hart, Ypma, Romero-Garcia, Price, & Suckling, 2016).
For example, it has been demonstrated that by combining information about canonical resting state network organization with knowledge of individual differences in the nuances of connectivity patterns, it is possible to predict which target stimulation sites are more likely to be effective at treating brain disorders (Fox et al., 2014; Opitz, Fox, Craddock, Colcombe, & Milham, 2016). Future research should combine modeling of network mechanisms with empirical research relating these changes in connectivity to both disease-specific and transdiagnostic improvements in cognition and symptom profiles. Finally, we have recently proposed that the modularity of intrinsic functional network organization, or the degree to which the brain segregates into distinct networks, may be a biomarker indicating potential for cognitive plasticity (Gallen & D’Esposito, 2019). Thus, individuals who have cognitive control difficulties due to normal aging or to neurological or psychiatric disorders, and who have particularly modular brain organization, may be good candidates for cognitive control training. This is thought to be because higher modularity, which leads to greater information encapsulation, allows for more efficient processing. As previously stated, these features of brain function have been proposed to be related to higher intelligence as well (Garlick, 2002; Mercado, 2008).

Promising Next Steps: Uncovering Mechanisms Underlying the Emergence of Cognitive Control and Intelligence from the Functional Connectome

While we have learned which brain networks are involved in cognitive control, we do not yet understand the particular roles of these networks in different aspects of cognition, or the specific mechanisms through which they act (Mill, Ito, & Cole, 2017). There is a rich behavioral literature distinguishing among components of cognitive control (Friedman & Miyake, 2017) and components of intelligence (Conway et al., 2003), as well as describing how they relate to each other (Chen et al., 2019; Friedman et al., 2006). An important next step will be to integrate these cognitive theories with empirical data investigating network mechanisms (Barbey, 2018; Girn, Mills, & Christoff, 2019; Kovacs & Conway, 2016; Mill et al., 2017; see also Chapter 6, by Barbey, and Chapter 2, by Hilger and Sporns). As an example, it has recently been proposed that cognition in general emerges from interactions between stable components of network organization (mainly unimodal sensory and motor networks) and flexible components of network organization (higher-order networks associated with cognitive control, such as the FP and CO networks) (Mill et al., 2017). Future research could test this hypothesis and specify predictions for distinct aspects of cognitive control. For example, based on the hypothesized roles of the FP and CO networks (trialwise updating and task set maintenance, respectively; Dosenbach et al., 2008; Power & Petersen, 2013), perhaps the CO network drives the maintenance component of working memory, while FP network interactions drive the updating component. Characterizing network organization and dynamics during a working memory task that can separate these two components would address this prediction.

A more direct assessment of how network flexibility relates to cognitive control can be achieved using time-varying functional connectivity measures (Cohen, 2018; Gonzalez-Castillo & Bandettini, 2018; Shine & Poldrack, 2018). Using these tools, it has been found that brain network dynamics become more stable when participants are successfully focused on a challenging cognitive task (Chen et al., 2015; Elton & Gao, 2015; Hutchison & Morton, 2015), that changes in brain network organization related to attention predict whether or not a subject will detect a stimulus (Ekman et al., 2012; Sadaghiani, Poline, Kleinschmidt, & D’Esposito, 2015; Thompson et al., 2013; Wang, Ong, Patanaik, Zhou, & Chee, 2016), and that more variable network organization during rest, especially as related to the salience network, which overlaps substantially with the CO network, is related to greater cognitive flexibility (Chen et al., 2016). We may be able to better understand how these dynamics relate to specific cognitive demands by focusing on the different time courses of stability vs. flexibility across different networks. For example, perhaps more stable CO network connectivity, in combination with more dynamic FP network connectivity, is optimal for cognitive control. Implementing these methods will allow us to better characterize how the dynamics of brain network organization underlie successful cognitive control, both across distinct brain networks and across timescales.

Much of the evidence we have reviewed thus far has been correlational. Causal manipulation of brain function with transcranial magnetic stimulation (TMS) can be used to probe neural mechanisms in humans.
For example, we demonstrated that disruption of function in key nodes of the FP or CO networks with TMS induced widespread changes in functional connectivity distributed across the entire brain, while TMS to the primary somatosensory cortex did not (Gratton, Lee, Nomura, & D’Esposito, 2013). These findings add to correlational studies showing that cognitive control networks are more highly integrated than primary sensory networks (Cole, Pathak, & Schneider, 2010; van den Heuvel & Sporns, 2011), and that the dynamics of FP nodes (Cole et al., 2013; Yin et al., 2015) and of CO nodes (Chen et al., 2016) are critical for cognitive control. Future research could further probe the impact of causal manipulation of functional brain network organization on changes in behavioral performance during tasks assessing various aspects of intelligence.

Finally, computational models based on theories of how brain network function underlies specific aspects of cognitive control will allow us to move beyond descriptive measures and explore the mechanisms that cause cognitive control and intelligence to emerge from brain network organization. Early models of cognitive control that focused primarily on the functions of individual regions, particularly of the prefrontal cortex, uncovered important information about various aspects of cognitive control (for a review, see O’Reilly, Herd, & Pauli, 2010). Network-based computational models thus far have focused mainly on overall brain function: how underlying brain structure constrains brain function (Deco, Jirsa, McIntosh, Sporns, & Kötter, 2009; Gu et al., 2015; Honey, Kötter, Breakspear, & Sporns, 2007), how information flows through the system (Cole, Ito, Bassett, & Schultz, 2016; Mitra, Snyder, Blazey, & Raichle, 2015), and how network dynamics emerge (Breakspear, 2017; Cabral, Kringelbach, & Deco, 2017; Deco & Corbetta, 2011; Deco, Jirsa, & McIntosh, 2013). These models can successfully predict arousal state (Deco, Tononi, Boly, & Kringelbach, 2015) and inform our understanding of neuropsychiatric disorders (Deco & Kringelbach, 2014), but they have yet to be used to model specific aspects of cognition, such as cognitive control or intelligence. A promising model based on network control theory has identified brain regions that, based on their structural connectivity, have key roles in directing the brain into different “states”, or whole-brain patterns of functional connectivity (Gu et al., 2015). Different brain states are thought to underlie different cognitive and affective states (Cohen, 2018), and a particularly integrated state is thought to underlie cognitive control and intelligence (Barbey, 2018; Dehaene et al., 1998; see also Chapter 6, by Barbey). Regions within the FP and CO networks were found to be most able to initiate transitions across the brain states that are thought to underlie different facets of higher-order cognition (Gu et al., 2015). Further, the ability of the parietal cortex to initiate state transitions has been related to intelligence (Kenett et al., 2018). It has been proposed that fluid intelligence may emerge from a brain network structure that is optimally organized to reconfigure into a variety of states, including states that are effortful and difficult to reach (Barbey, 2018; see also Chapter 6, by Barbey).
Thus, intelligence may be an emergent property of large-scale brain network topology and dynamics. This hypothesis is compelling, but further research is needed to address it directly (Girn et al., 2019).
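The network control theory measure at the heart of this line of work can be illustrated with a simplified computation of average controllability: the trace of the controllability Gramian of a stabilized linear system defined on the structural connectome (in the spirit of Gu et al., 2015, but not their code). The sketch below uses a random symmetric matrix as a stand-in connectome; the normalization and node ranking are a simplified reading of the published approach.

```python
# Simplified average controllability on a stand-in "structural connectome":
# for discrete linear dynamics x(t+1) = A x(t) + B u(t), a node's average
# controllability is the trace of the controllability Gramian when only
# that node receives input.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(7)
n = 20
A = rng.random((n, n))
A = (A + A.T) / 2                      # symmetric, weighted "connectome"
np.fill_diagonal(A, 0)
# Normalize so the dynamics are stable (spectral radius < 1).
A = A / (1.0 + np.max(np.abs(np.linalg.eigvalsh(A))))

def average_controllability(A, node):
    """Trace of the controllability Gramian when only `node` is driven."""
    B = np.zeros((A.shape[0], 1))
    B[node] = 1.0
    # Gramian W solves the discrete Lyapunov equation A W A^T - W + B B^T = 0.
    W = solve_discrete_lyapunov(A, B @ B.T)
    return float(np.trace(W))

ac = np.array([average_controllability(A, i) for i in range(n)])
print("node with highest average controllability:", int(ac.argmax()))
```

In empirical applications, `A` is a measured structural connectivity matrix, and nodes with high average controllability are candidates for driving the brain into easy-to-reach states; related metrics quantify the ability to reach effortful, difficult states.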

Conclusion

We can learn more about how intelligence arises from brain function by considering studies of cognitive control and brain network organization. This literature has found that greater integration within and across brain networks, in combination with task-relevant dynamic reconfiguration, underlies successful cognitive control across a variety of domains (e.g., working memory updating, task-switching, interference resolution, controlled attention). These cognitive processes are thought to be critical for intelligence, a theory that is supported by the finding that a similar pattern of integrated brain organization and dynamics is crucial for both successful cognitive control and higher intelligence. These observations hold important implications for understanding the cognitive deficits observed across a wide range of neurological and psychiatric disorders, as well as for targeting promising methods of treatment for these disorders. To continue to make progress in understanding how cognitive control and intelligence emerge from brain function, future work should focus on how cognition arises from dynamic interactions across brain regions.

References

Alstott, J., Breakspear, M., Hagmann, P., Cammoun, L., & Sporns, O. (2009). Modeling the impact of lesions in the human brain. PLoS Computational Biology, 5(6), e1000408.
Baddeley, A. D., & Hitch, G. (1974). Working memory. In G. H. Bower (ed.), Psychology of learning and motivation, 8th ed. (pp. 47–89). New York: Academic Press.
Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20.
Barbey, A. K., Colom, R., Solomon, J., Krueger, F., Forbes, C., & Grafman, J. (2012). An integrative architecture for general intelligence and executive function revealed by lesion mapping. Brain, 135(Pt 4), 1154–1164.
Bassett, D. S., & Bullmore, E. (2006). Small-world brain networks. The Neuroscientist, 12(6), 512–523.
Bassett, D. S., Bullmore, E. T., Meyer-Lindenberg, A., Apud, J. A., Weinberger, D. R., & Coppola, R. (2009). Cognitive fitness of cost-efficient brain functional networks. Proceedings of the National Academy of Sciences of the United States of America, 106(28), 11747–11752.
Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27.
Betzel, R. F., Gu, S., Medaglia, J. D., Pasqualetti, F., & Bassett, D. S. (2016). Optimally controlling the human connectome: The role of network topology. Scientific Reports, 6, 30770.
Bohlken, M. M., Brouwer, R. M., Mandl, R. C. W., Hedman, A. M., van den Heuvel, M. P., van Haren, N. E. M., . . . Hulshoff Pol, H. E. (2016). Topology of genetic associations between regional gray matter volume and intellectual ability: Evidence for a high capacity network. Neuroimage, 124(Pt A), 1044–1053.
Breakspear, M. (2017). Dynamic models of large-scale brain activity. Nature Neuroscience, 20(3), 340–352.
Bullmore, E., & Sporns, O. (2009). Complex brain networks: Graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10(3), 186–198.
Bullmore, E., & Sporns, O. (2012). The economy of brain network organization. Nature Reviews Neuroscience, 13(5), 336–349.
Cabral, J., Kringelbach, M. L., & Deco, G. (2017). Functional connectivity dynamically evolves on multiple time-scales over a static structural connectome: Models and mechanisms. Neuroimage, 160, 84–96.
Calhoun, V. D., Miller, R., Pearlson, G., & Adalı, T. (2014). The chronnectome: Time-varying connectivity networks as the next frontier in fMRI data discovery. Neuron, 84(2), 262–274.
Cao, M., Wang, Z., & He, Y. (2015). Connectomics in psychiatric research: Advances and applications. Neuropsychiatric Disease and Treatment, 11, 2801–2810.
Cattell, R. B. (1971). Abilities: Their structure, growth and action. Boston, MA: Houghton Mifflin.
Cattell, R. B., & Horn, J. D. (1978). A check on the theory of fluid and crystallized intelligence with description of new subtest designs. Journal of Educational Measurement, 15(3), 139–164.
Chen, T., Cai, W., Ryali, S., Supekar, K., & Menon, V. (2016). Distinct global brain dynamics and spatiotemporal organization of the salience network. PLoS Biology, 14(6), e1002469.
Chen, J. E., Chang, C., Greicius, M. D., & Glover, G. H. (2015). Introducing co-activation pattern metrics to quantify spontaneous brain network dynamics. Neuroimage, 111, 476–488.
Chen, Y., Spagna, A., Wu, T., Kim, T. H., Wu, Q., Chen, C., . . . Fan, J. (2019). Testing a cognitive control model of human intelligence. Scientific Reports, 9, 2898.
Cohen, J. R. (2018). The behavioral and cognitive relevance of time-varying, dynamic changes in functional connectivity. Neuroimage, 180(Pt B), 515–525.
Cohen, J. R., & D’Esposito, M. (2016). The segregation and integration of distinct brain networks and their relationship to cognition. The Journal of Neuroscience, 36(48), 12083–12094.
Cohen, J. R., Gallen, C. L., Jacobs, E. G., Lee, T. G., & D’Esposito, M. (2014). Quantifying the reconfiguration of intrinsic networks during working memory. PLoS One, 9(9), e106636.
Cole, M. W., Ito, T., Bassett, D. S., & Schultz, D. H. (2016). Activity flow over resting-state networks shapes cognitive task activations. Nature Neuroscience, 19(12), 1718–1726.
Cole, M. W., Ito, T., & Braver, T. S. (2015). Lateral prefrontal cortex contributes to fluid intelligence through multinetwork connectivity. Brain Connectivity, 5(8), 497–504.
Cole, M. W., Laurent, P., & Stocco, A. (2013). Rapid instructed task learning: A new window into the human brain’s unique capacity for flexible cognitive control. Cognitive, Affective & Behavioral Neuroscience, 13(1), 1–22.
Cole, M. W., Pathak, S., & Schneider, W. (2010). Identifying the brain’s most globally connected regions. Neuroimage, 49(4), 3132–3148.
Cole, M. W., Yarkoni, T., Repovš, G., Anticevic, A., & Braver, T. S. (2012). Global connectivity of prefrontal cortex predicts cognitive control and intelligence. The Journal of Neuroscience, 32(26), 8988–8999.
Conway, A. R. A., Getz, S. J., Macnamara, B., & Engel de Abreu, P. M. J. (2011). Working memory and intelligence. In R. J. Sternberg, & S. B. Kaufman (eds.), The Cambridge handbook of intelligence (pp. 394–418). New York: Cambridge University Press.
Conway, A. R. A., Kane, M. J., & Engle, R. W. (2003). Working memory capacity and its relation to general intelligence. Trends in Cognitive Sciences, 7(12), 547–552.


Deco, G., & Corbetta, M. (2011). The dynamical balance of the brain at rest. The Neuroscientist, 17(1), 107–123.
Deco, G., Jirsa, V. K., & McIntosh, A. R. (2013). Resting brains never rest: Computational insights into potential cognitive architectures. Trends in Neurosciences, 36(5), 268–274.
Deco, G., Jirsa, V., McIntosh, A. R., Sporns, O., & Kötter, R. (2009). Key role of coupling, delay, and noise in resting brain fluctuations. Proceedings of the National Academy of Sciences of the United States of America, 106(25), 10302–10307.
Deco, G., & Kringelbach, M. L. (2014). Great expectations: Using whole-brain computational connectomics for understanding neuropsychiatric disorders. Neuron, 84(5), 892–905.
Deco, G., Tononi, G., Boly, M., & Kringelbach, M. L. (2015). Rethinking segregation and integration: Contributions of whole-brain modelling. Nature Reviews Neuroscience, 16(7), 430–439.
Dehaene, S., Kerszberg, M., & Changeux, J. P. (1998). A neuronal model of a global workspace in effortful cognitive tasks. Proceedings of the National Academy of Sciences of the United States of America, 95(24), 14529–14534.
Dosenbach, N. U. F., Fair, D. A., Cohen, A. L., Schlaggar, B. L., & Petersen, S. E. (2008). A dual-networks architecture of top-down control. Trends in Cognitive Sciences, 12(3), 99–105.
Dubin, M. (2017). Imaging TMS: Antidepressant mechanisms and treatment optimization. International Review of Psychiatry, 29(2), 89–97.
Dubois, J., Galdi, P., Paul, L. K., & Adolphs, R. (2018). A distributed brain network predicts general intelligence from resting-state human neuroimaging data. Philosophical Transactions of the Royal Society B: Biological Sciences, 373, 20170284.
Duncan, J. (2001). An adaptive coding model of neural function in prefrontal cortex. Nature Reviews Neuroscience, 2(11), 820–829.
Duncan, J. (2010). The multiple-demand (MD) system of the primate brain: Mental programs for intelligent behaviour. Trends in Cognitive Sciences, 14(4), 172–179.
Ekman, M., Derrfuss, J., Tittgemeyer, M., & Fiebach, C. J. (2012). Predicting errors from reconfiguration patterns in human brain networks. Proceedings of the National Academy of Sciences of the United States of America, 109(41), 16714–16719.
Elton, A., & Gao, W. (2014). Divergent task-dependent functional connectivity of executive control and salience networks. Cortex, 51, 56–66.
Elton, A., & Gao, W. (2015). Task-related modulation of functional connectivity variability and its behavioral correlations. Human Brain Mapping, 36(8), 3260–3272.
Euler, M. J. (2018). Intelligence and uncertainty: Implications of hierarchical predictive processing for the neuroscience of cognitive ability. Neuroscience and Biobehavioral Reviews, 94, 93–112.
Finc, K., Bonna, K., Lewandowska, M., Wolak, T., Nikadon, J., Dreszer, J., . . . Kühn, S. (2017). Transition of the functional brain network related to increasing cognitive demands. Human Brain Mapping, 38(7), 3659–3674.
Finn, E. S., Shen, X., Scheinost, D., Rosenberg, M. D., Huang, J., Chun, M. M., . . . Constable, R. T. (2015). Functional connectome fingerprinting: Identifying individuals using patterns of brain connectivity. Nature Neuroscience, 18(11), 1664–1671.
Fornito, A., Zalesky, A., & Breakspear, M. (2015). The connectomics of brain disorders. Nature Reviews Neuroscience, 16(3), 159–172.
Fox, M. D., Buckner, R. L., Liu, H., Chakravarty, M. M., Lozano, A. M., & Pascual-Leone, A. (2014). Resting-state networks link invasive and noninvasive brain stimulation across diverse psychiatric and neurological diseases. Proceedings of the National Academy of Sciences of the United States of America, 111(41), E4367–E4375.
Friedman, N. P., & Miyake, A. (2017). Unity and diversity of executive functions: Individual differences as a window on cognitive structure. Cortex, 86, 186–204.
Friedman, N. P., Miyake, A., Corley, R. P., Young, S. E., Defries, J. C., & Hewitt, J. K. (2006). Not all executive functions are related to intelligence. Psychological Science, 17(2), 172–179.
Gallen, C. L., & D’Esposito, M. (2019). Modular brain network organization: A biomarker of cognitive plasticity. Trends in Cognitive Sciences, 23(4), 293–304.
Gallen, C. L., Turner, G. R., Adnan, A., & D’Esposito, M. (2016). Reconfiguration of brain network architecture to support executive control in aging. Neurobiology of Aging, 44, 42–52.
Garlick, D. (2002). Understanding the nature of the general factor of intelligence: The role of individual differences in neural plasticity as an explanatory mechanism. Psychological Review, 109(1), 116–136.
Girn, M., Mills, C., & Christoff, K. (2019). Linking brain network reconfiguration and intelligence: Are we there yet? Trends in Neuroscience and Education, 15, 62–70.
Gläscher, J., Rudrauf, D., Colom, R., Paul, L. K., Tranel, D., Damasio, H., & Adolphs, R. (2010). Distributed neural system for general intelligence revealed by lesion mapping. Proceedings of the National Academy of Sciences of the United States of America, 107(10), 4705–4709.
Godwin, D., Barry, R. L., & Marois, R. (2015). Breakdown of the brain’s functional network modularity with awareness. Proceedings of the National Academy of Sciences of the United States of America, 112(12), 3799–3804.
Gonzalez-Castillo, J., & Bandettini, P. A. (2018). Task-based dynamic functional connectivity: Recent findings and open questions. Neuroimage, 180(Pt B), 526–533.
Goodkind, M., Eickhoff, S. B., Oathes, D. J., Jiang, Y., Chang, A., Jones-Hagata, L. B., . . . Etkin, A. (2015). Identification of a common neurobiological substrate for mental illness. JAMA Psychiatry, 72(4), 305–315.
Gordon, E. M., Stollstorff, M., & Vaidya, C. J. (2012). Using spatial multiple regression to identify intrinsic connectivity networks involved in working memory performance. Human Brain Mapping, 33(7), 1536–1552.
Goschke, T. (2014). Dysfunctions of decision-making and cognitive control as transdiagnostic mechanisms of mental disorders: Advances, gaps, and needs in current research. International Journal of Methods in Psychiatric Research, 23(Suppl 1), 41–57.
Gratton, C., Laumann, T. O., Nielsen, A. N., Greene, D. J., Gordon, E. M., Gilmore, A. W., . . . Petersen, S. E. (2018). Functional brain networks are dominated by stable group and individual factors, not cognitive or daily variation. Neuron, 98(2), 439–452.e5.
Gratton, C., Lee, T. G., Nomura, E. M., & D’Esposito, M. (2013). The effect of theta-burst TMS on cognitive control networks measured with resting state fMRI. Frontiers in Systems Neuroscience, 7, 124.
Gratton, C., Nomura, E. M., Perez, F., & D’Esposito, M. (2012). Focal brain lesions to critical locations cause widespread disruption of the modular organization of the brain. Journal of Cognitive Neuroscience, 24(6), 1275–1285.
Gratton, C., Sun, H., & Petersen, S. E. (2018). Control networks and hubs. Psychophysiology, 55(3), e13032.
Greene, A. S., Gao, S., Scheinost, D., & Constable, R. T. (2018). Task-induced brain state manipulation improves prediction of individual traits. Nature Communications, 9(1), 2807.
Gu, S., Pasqualetti, F., Cieslak, M., Telesford, Q. K., Yu, A. B., Kahn, A. E., . . . Bassett, D. S. (2015). Controllability of structural brain networks. Nature Communications, 6, 8414.
Guimerà, R., Mossa, S., Turtschi, A., & Amaral, L. A. N. (2005). The worldwide air transportation network: Anomalous centrality, community structure, and cities’ global roles. Proceedings of the National Academy of Sciences of the United States of America, 102(22), 7794–7799.
Haier, R. J., Siegel, B. V., Nuechterlein, K. H., Hazlett, E., Wu, J. C., Paek, J., . . . Buchsbaum, M. S. (1988). Cortical glucose metabolic rate correlates of abstract reasoning and attention studied with positron emission tomography. Intelligence, 12, 199–217.
Hart, M. G., Ypma, R. J. F., Romero-Garcia, R., Price, S. J., & Suckling, J. (2016). Graph theory analysis of complex brain networks: New concepts in brain mapping applied to neurosurgery. Journal of Neurosurgery, 124(6), 1665–1678.
Hearne, L. J., Mattingley, J. B., & Cocchi, L. (2016). Functional brain networks related to individual differences in human intelligence at rest. Scientific Reports, 6, 32328.
Hilger, K., Ekman, M., Fiebach, C. J., & Basten, U. (2017a). Efficient hubs in the intelligent brain: Nodal efficiency of hub regions in the salience network is associated with general intelligence. Intelligence, 60, 10–25.
Hilger, K., Ekman, M., Fiebach, C. J., & Basten, U. (2017b). Intelligence is associated with the modular structure of intrinsic brain networks. Scientific Reports, 7(1), 16088.
Honey, C. J., Kötter, R., Breakspear, M., & Sporns, O. (2007). Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proceedings of the National Academy of Sciences of the United States of America, 104(24), 10240–10245.
Hutchison, R. M., & Morton, J. B. (2015). Tracking the brain’s functional coupling dynamics over development. The Journal of Neuroscience, 35(17), 6849–6859.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154, discussion 154–187.
Kane, M. J., & Engle, R. W. (2002). The role of prefrontal cortex in working-memory capacity, executive attention, and general fluid intelligence: An individual-differences perspective. Psychonomic Bulletin & Review, 9(4), 637–671.
Kenett, Y. N., Medaglia, J. D., Beaty, R. E., Chen, Q., Betzel, R. F., Thompson-Schill, S. L., & Qiu, J. (2018). Driving the brain towards creativity and intelligence: A network control theory analysis. Neuropsychologia, 118(Pt A), 79–90.
Kitzbichler, M. G., Henson, R. N. A., Smith, M. L., Nathan, P. J., & Bullmore, E. T. (2011). Cognitive effort drives workspace configuration of human brain functional networks. The Journal of Neuroscience, 31(22), 8259–8270.
Kovacs, K., & Conway, A. R. A. (2016). Process overlap theory: A unified account of the general factor of intelligence. Psychological Inquiry, 27(3), 151–177.
Kucyi, A., Tambini, A., Sadaghiani, S., Keilholz, S., & Cohen, J. R. (2018). Spontaneous cognitive processes and the behavioral validation of time-varying brain connectivity. Network Neuroscience, 2(4), 397–417.
Langer, N., Pedroni, A., Gianotti, L. R. R., Hänggi, J., Knoch, D., & Jäncke, L. (2012). Functional brain network efficiency predicts intelligence. Human Brain Mapping, 33(6), 1393–1406.
Li, Y., Liu, Y., Li, J., Qin, W., Li, K., Yu, C., & Jiang, T. (2009). Brain anatomical network and intelligence. PLoS Computational Biology, 5(5), e1000395.
Liang, X., Zou, Q., He, Y., & Yang, Y. (2016). Topologically reorganized connectivity architecture of default-mode, executive-control, and salience networks across working memory task loads. Cerebral Cortex, 26(4), 1501–1511.
Liu, H., Yu, H., Li, Y., Qin, W., Xu, L., Yu, C., & Liang, M. (2017). An energy-efficient intrinsic functional organization of human working memory: A resting-state functional connectivity study. Behavioural Brain Research, 316, 66–73.
Malpas, C. B., Genc, S., Saling, M. M., Velakoulis, D., Desmond, P. M., & O’Brien, T. J. (2016). MRI correlates of general intelligence in neurotypical adults. Journal of Clinical Neuroscience, 24, 128–134.
McTeague, L. M., Goodkind, M. S., & Etkin, A. (2016). Transdiagnostic impairment of cognitive control in mental illness. Journal of Psychiatric Research, 83, 37–46.
McTeague, L. M., Huemer, J., Carreon, D. M., Jiang, Y., Eickhoff, S. B., & Etkin, A. (2017). Identification of common neural circuit disruptions in cognitive control across psychiatric disorders. American Journal of Psychiatry, 174(7), 676–685.
Mercado, E., III. (2008). Neural and cognitive plasticity: From maps to minds. Psychological Bulletin, 134(1), 109–137.
Mesulam, M.-M. (1990). Large-scale neurocognitive networks and distributed processing for attention, language, and memory. Annals of Neurology, 28(5), 597–613.
Meunier, D., Lambiotte, R., & Bullmore, E. T. (2010). Modular and hierarchically modular organization of brain networks. Frontiers in Neuroscience, 4, 200.


Mill, R. D., Ito, T., & Cole, M. W. (2017). From connectome to cognition: The search for mechanism in human functional brain networks. Neuroimage, 160, 124–139.
Miller, E. K., & Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24, 167–202.
Mitra, A., Snyder, A. Z., Blazey, T., & Raichle, M. E. (2015). Lag threads organize the brain's intrinsic activity. Proceedings of the National Academy of Sciences of the United States of America, 112(17), E2235–E2244.
Newman, M. E. J., & Girvan, M. (2004). Finding and evaluating community structure in networks. Physical Review E, 69(2), 026113.
O'Reilly, R. C., Herd, S. A., & Pauli, W. M. (2010). Computational models of cognitive control. Current Opinion in Neurobiology, 20(2), 257–261.
Opitz, A., Fox, M. D., Craddock, R. C., Colcombe, S., & Milham, M. P. (2016). An integrated framework for targeting functional networks via transcranial magnetic stimulation. Neuroimage, 127, 86–96.
Power, J. D., & Petersen, S. E. (2013). Control-related systems in the human brain. Current Opinion in Neurobiology, 23(2), 223–228.
Raven, J. (2000). The Raven's progressive matrices: Change and stability over culture and time. Cognitive Psychology, 41(1), 1–48.
Sadaghiani, S., Poline, J. B., Kleinschmidt, A., & D'Esposito, M. (2015). Ongoing dynamics in large-scale functional connectivity predict perception. Proceedings of the National Academy of Sciences of the United States of America, 112(27), 8463–8468.
Santarnecchi, E., Emmendorfer, A., Tadayon, S., Rossi, S., Rossi, A., Pascual-Leone, A., & Honeywell SHARP Team Authors. (2017). Network connectivity correlates of variability in fluid intelligence performance. Intelligence, 65, 35–47.
Schultz, D. H., & Cole, M. W. (2016). Higher intelligence is associated with less task-related brain network reconfiguration. The Journal of Neuroscience, 36(33), 8551–8561.
Shanmugan, S., Wolf, D. H., Calkins, M. E., Moore, T. M., Ruparel, K., Hopson, R. D., . . . Satterthwaite, T. D. (2016). Common and dissociable mechanisms of executive system dysfunction across psychiatric disorders in youth. American Journal of Psychiatry, 173(5), 517–526.
Shine, J. M., Bissett, P. G., Bell, P. T., Koyejo, O., Balsters, J. H., Gorgolewski, K. J., . . . Poldrack, R. A. (2016). The dynamics of functional brain networks: Integrated network states during cognitive task performance. Neuron, 92(2), 544–554.
Shine, J. M., & Poldrack, R. A. (2018). Principles of dynamic network reconfiguration across diverse brain states. Neuroimage, 180(Pt B), 396–405.
Snyder, H. R., Miyake, A., & Hankin, B. L. (2015). Advancing understanding of executive function impairments and psychopathology: Bridging the gap between clinical and cognitive approaches. Frontiers in Psychology, 6, 328.
Song, M., Zhou, Y., Li, J., Liu, Y., Tian, L., Yu, C., & Jiang, T. (2008). Brain spontaneous functional connectivity and intelligence. Neuroimage, 41(3), 1168–1176.
Spadone, S., Della Penna, S., Sestieri, C., Betti, V., Tosoni, A., Perrucci, M. G., . . . Corbetta, M. (2015). Dynamic reorganization of human resting-state networks during visuospatial attention. Proceedings of the National Academy of Sciences of the United States of America, 112(26), 8112–8117.
Sporns, O. (2010). Networks of the brain. Cambridge, MA: MIT Press.
Sporns, O. (2013). Network attributes for segregation and integration in the human brain. Current Opinion in Neurobiology, 23(2), 162–171.
Stanley, M. L., Dagenbach, D., Lyday, R. G., Burdette, J. H., & Laurienti, P. J. (2014). Changes in global and regional modularity associated with increasing working memory load. Frontiers in Human Neuroscience, 8, 954.
Sternberg, R. J., & Kaufman, S. B. (eds.) (2011). The Cambridge handbook of intelligence. New York: Cambridge University Press.
Thompson, G. J., Magnuson, M. E., Merritt, M. D., Schwarb, H., Pan, W.-J., McKinley, A., . . . Keilholz, S. D. (2013). Short-time windows of correlation between large-scale functional brain networks predict vigilance intraindividually and interindividually. Human Brain Mapping, 34(12), 3280–3298.
van den Heuvel, M. P., & Sporns, O. (2011). Rich-club organization of the human connectome. The Journal of Neuroscience, 31(44), 15775–15786.
van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. The Journal of Neuroscience, 29(23), 7619–7624.
Vatansever, D., Menon, D. K., Manktelow, A. E., Sahakian, B. J., & Stamatakis, E. A. (2015). Default mode dynamics for global functional integration. The Journal of Neuroscience, 35(46), 15254–15262.
Wang, C., Ong, J. L., Patanaik, A., Zhou, J., & Chee, M. W. L. (2016). Spontaneous eyelid closures link vigilance fluctuation with fMRI dynamic connectivity states. Proceedings of the National Academy of Sciences of the United States of America, 113(34), 9653–9658.
Wang, L., Song, M., Jiang, T., Zhang, Y., & Yu, C. (2011). Regional homogeneity of the resting-state brain activity correlates with individual intelligence. Neuroscience Letters, 488(3), 275–278.
Wechsler, D. (2008). Wechsler Adult Intelligence Scale – Fourth edition (WAIS-IV). San Antonio, TX: Pearson.
Wechsler, D. (2011). Wechsler Abbreviated Scale of Intelligence – Second edition (WASI-II). San Antonio, TX: Pearson.
Xia, M., & He, Y. (2011). Magnetic resonance imaging and graph theoretical analysis of complex brain networks in neuropsychiatric disorders. Brain Connectivity, 1(5), 349–365.
Xiao, L., Stephen, J. M., Wilson, T. W., Calhoun, V. D., & Wang, Y. (2019). Alternating diffusion map based fusion of multimodal brain connectivity networks for IQ prediction. IEEE Transactions on Biomedical Engineering, 68(8), 2140–2151.
Yin, S., Wang, T., Pan, W., Liu, Y., & Chen, A. (2015). Task-switching cost and intrinsic functional connectivity in the human brain: Toward understanding individual differences in cognitive flexibility. PLoS One, 10(12), e0145826.
Zippo, A. G., Della Rosa, P. A., Castiglioni, I., & Biella, G. E. M. (2018). Alternating dynamics of segregation and integration in human EEG functional networks during working-memory task. Neuroscience, 371, 191–206.


14 Biochemical Correlates of Intelligence

Rex E. Jung and Marwa O. Chohan

The search for physiological correlates of intelligence, prior to the 1990s, largely revolved around well-established correlates found across species, particularly nerve conduction velocity and overall brain size. Human studies arose naturally from the psychometric literature noting that individuals with higher IQ had both faster reaction times and less variability in their responses (Jensen, 1982). These reaction time studies implied that there was something about intelligence beyond acquisition of knowledge, learning, and skill development, which: (1) could be measured with a high degree of accuracy, (2) could be obtained with minimal bias, (3) had a developmental trajectory from childhood through the teen years, and (4) (presumably) had something to do with neuronal structure and/or functional capacity. However, the tools of the neuroscientist were rather few, and the world of functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and the human connectome was only a distant dream. Several MRI studies of brain size emerged in the 1990s, however, with the first showing an r = .51 in 40 college students selected for high vs. low SAT scores (Willerman, Schultz, Rutledge, & Bigler, 1991), followed by a second in 67 normal subjects showing r = .38 between total brain volume and Full Scale IQ (FSIQ) (Andreasen et al., 1993). Interestingly, the pattern of correlation held for gray matter volumes (r = .35), but not for white matter volume (r = .14), in this latter sample. Similar magnitudes of positive correlations were reported subsequently by others, ranging from r = .35 to r = .69 (Gur et al., 1999; Harvey, Persaud, Ron, Baker, & Murray, 1994; Reiss, Abrams, Singer, Ross, & Denckla, 1996; Wickett, Vernon, & Lee, 1994).
In a recent meta-analysis comprising 88 studies and more than 8,000 individuals, the relationship between brain volume and IQ was found to be r = .24, generalizing across age, IQ domain (full scale, verbal, performance), and sex (Pietschnig, Penke, Wicherts, Zeiler, & Voracek, 2015). These authors conclude that: "brain size, likely a proxy for neuron number, is one of many neuronal factors associated with individual differences in intelligence, alongside parieto-frontal neuronal networks, neuronal efficiency, white matter integrity, cortical gyrification, overall developmental stability, and probably others" (p. 429). It is to one of these "other factors," often lost in conversations regarding brain correlates of intellectual functioning, that we now turn our attention, moving from the macroscopic to the microscopic.


Magnetic Resonance Spectroscopy (MRS)

Spectroscopy was introduced by Sir Isaac Newton (in 1666) when he separated a "spectrum" of colors by shining white light through a glass prism, a process mediated by the electrons of the refractive material. Magnetic resonance spectroscopic techniques, however, are concerned with the degree to which electromagnetic radiation is absorbed and emitted by nuclei – the functional display of which is called a spectrum (Gadian, 1995). Thus, the chemical composition of a given sample (living or inorganic) can be determined through a non-invasive technique exploiting electromagnetic and quantum theory. Specifically, atomic nuclei with non-zero angular momentum, or "spin" (e.g., hydrogen, with spin quantum number I = ½), produce a magnetic field that can be manipulated with radio frequency pulses and subsequently recorded via an MRI machine. The resonance frequency of a nucleus in an MRS experiment is proportional to the strength of the magnetic field it experiences. This field is composed of the large "static" field of the MR spectrometer or MRI (B0), and the much smaller field produced by the circulating electrons of the atomic molecule (Be). The resultant spectrum can be explained by the electron density of a given chemical sample, summarized by the expression ν (frequency) = γ(B0 + Be). These frequencies are known as "chemical shifts" when they are referenced to a "zero" point (i.e., tetramethylsilane), and expressed in terms of parts per million (ppm) (Gadian, 1995). The intensity of a given chemical signal is the area under the peak produced at a given frequency, and is proportional to the number of nuclei that contribute to a given signal (Figure 14.1). Spectra from living systems reveal narrow linewidths for metabolites with high molecular mobility (such as N-acetylaspartate) and broad linewidths for macromolecules such as proteins, DNA, and membrane lipids.
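As a concrete illustration of the chemical-shift arithmetic (our own sketch, not from the chapter: the 3 T field strength is an illustrative assumption, while the NAA and Choline shift values come from Figure 14.1), a shift expressed in ppm can be converted to an absolute frequency offset at a given scanner field:

```python
# Sketch of the chemical-shift arithmetic: a shift in ppm is a fraction of the
# Larmor frequency, so its absolute size in Hz grows with the static field B0.
GAMMA_MHZ_PER_T = 42.577  # proton (1H) gyromagnetic ratio over 2*pi, in MHz/T

def larmor_frequency_hz(b0_tesla):
    """Resonance frequency of 1H at static field B0 (the nu = gamma*B relation)."""
    return GAMMA_MHZ_PER_T * 1e6 * b0_tesla

def shift_to_offset_hz(shift_ppm, b0_tesla):
    """Convert a chemical shift in parts per million to an offset in Hz."""
    return shift_ppm * 1e-6 * larmor_frequency_hz(b0_tesla)

b0 = 3.0  # an assumed 3 T scanner
naa_hz = shift_to_offset_hz(2.02, b0)  # NAA peak position, per Figure 14.1
cho_hz = shift_to_offset_hz(3.20, b0)  # Choline peak position
print(f"1H Larmor frequency at {b0} T: {larmor_frequency_hz(b0) / 1e6:.1f} MHz")
print(f"NAA-Choline separation: {cho_hz - naa_hz:.0f} Hz")
```

The same 1.18 ppm separation between NAA and Choline thus corresponds to a larger Hz separation at higher field, which is one reason higher-field magnets yield better-resolved spectra.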
Additionally, such factors as chemical exchange, magnetic field inhomogeneities, and paramagnetic interactions can influence linewidths and lineshapes. The overall sensitivity of MRS to subtle variations in the concentration of specific metabolites also depends on these factors, since they bear on the signal-to-noise ratio (S/N) of the MR data. Indeed, S/N can be influenced by such diverse factors as (1) the voxel size of the region of interest, (2) the time (or number) of acquisitions, (3) regional magnetic inhomogeneities, and (4) regional movement/tissue interface artifacts (Gadian, 1995). MRS has long been utilized within the physical and biological sciences, initially in physics and chemistry (see the Nobel Prize awarded to Edward Purcell and Felix Bloch in 1952), and later applied to the biomolecular structures of proteins, nucleic acids, and carbohydrates (Gadian, 1995). MRS became increasingly useful in human medicine, as it allows for a bioassay of the chemical composition of tissue, non-invasively, by use of an MRI machine, which can excite hydrogen atoms (i.e., protons) interacting with a wide range of other chemicals. One of the first studies to utilize


Figure 14.1 Representative spectrum from human voxel obtained from parietal white matter. The tallest peak (right) is N-acetylaspartate (2.02 ppm), with other major peaks (right to left) being Creatine (3.03 ppm) and Choline (3.2 ppm). A second Creatine peak is present at 3.93 ppm.

1H-MRS in the brain compared relaxation rates of water in normal and brain tumor samples, showing the potential for this technique both to distinguish normal from neoplastic tissue and to diagnose different types of brain tumors in vivo (Parrish, Kurland, Janese, & Bakay, 1974). Currently, Proton Magnetic Resonance Spectroscopy (1H-MRS) is a major technique used in humans to diagnose disease entities ranging from cancer (Kumar, Sharma, & Jagannathan, 2012) and Alzheimer's disease (Graff-Radford & Kantarci, 2013) to traumatic brain injury (Friedman, Brooks, Jung, Hart, & Yeo, 1998) and even rare neurological syndromes in systemic lupus erythematosus (Jung et al., 2001), to name a few. We will describe its use in studies of human intelligence.

Key Neurochemical Variables Assessed by MRS

N-acetylaspartate (NAA) is located almost entirely in neurons and is the strongest peak in the proton MR brain spectrum in adults (Moffett, Ross, Arun, Madhavarao, & Namboodiri, 2007). Although the exact mechanism by which NAA is related to neuronal functioning and hence cognition is unknown, it has been demonstrated that NAA contributes to lipid synthesis


for myelination during development in rats (Taylor et al., 2002). NAA is a metabolic precursor of N-acetyl-aspartyl-glutamate, a neuromodulator (Blakely & Coyle, 1988), which may protect neurons from osmotic stress (Taylor et al., 2002), and may be a marker of neuronal oxidative phosphorylation and mitochondrial health (Bates et al., 1996). The Creatine (Cre) peak represents the sum of intracellular creatine and phosphocreatine, reflecting tissue energetics. The Choline (Cho) peak reflects the sum of all MRS-visible choline moieties, and can be elevated in stroke, multiple sclerosis, and traumatic brain injury (among other diseases), due to membrane breakdown, inflammation, and/or demyelination (Brooks, Friedman, & Gasparovic, 2001). Given that MRS assays of neurometabolites are often conducted in white matter regions, it is of interest to what extent higher levels of NAA may reflect some aspects of axonal functioning. For example, increased levels of NAA may confer more rapid neural transmission through its demonstrated role as an acetyl group donor for myelination (D'Adamo & Yatsu, 1966). Indeed, two known predictors of neural transmission speed are myelin thickness and axonal diameter (Aboitiz, Scheibel, Fisher, & Zaidel, 1992). A recent study in rats found that NAA was more concentrated in myelin as compared to neuronal dendrites, nerve terminals, and cell bodies, favoring a role in myelin synthesis for this metabolite (Nordengen, Heuser, Rinholm, Matalon, & Gundersen, 2015). This research also supports the notion that the NAA signal represents intact myelin/oligodendrocytes, as opposed to purely viable neurons/axons (Schuff et al., 2001; Soher, van Zijl, Duyn, & Barker, 1996; Tedeschi et al., 1995). Regardless of the precise role of this neurometabolite, it appears that NAA is an important marker of cognitive disability across numerous clinical groups (Ross & Sachdev, 2004).

Early MRS Studies of Cognition

Two studies led to our early interest in applying spectroscopic measures, particularly NAA, to studies of normal human intelligence. The first was a study using 1H-MRS in 28 patients diagnosed with mental retardation (MR, IQ range 20–79), compared to 25 age-matched healthy children ranging in age from 2 to 13 years old. Spectra were obtained from a voxel placed within the right parietal lobe, largely consisting of white matter. While the NAA/Cho ratio was observed to increase with advancing age in both groups (p = .01), this ratio was consistently lower in the MR group (p = .0029). Importantly, while these authors used ratios (as opposed to absolute concentrations), the relationships between metabolites and MR appeared to be driven by NAA as opposed to either choline or creatine resonances. The authors interpreted these differences to reflect "maldevelopment of the apical dendrites and abnormalities of synapse formation and spine morphology" reflective of low neuron activity in MR (Hashimoto et al., 1995). This study was the first to establish NAA as a valid marker of extreme intellectual differences, with mechanistic


relationships likely to be present at the level of neuronal axons within large white matter populations. The second study was conducted with 42 boys, and involved obtaining phosphorus (31P) spectroscopy of pH from a region comprising the top of each subject's head (i.e., bilateral frontoparietal region) (Rae et al., 1996). These were normal control boys (age range 6–13), who were administered the Wechsler Intelligence Scale for Children – III, and their scores were in the average range (Mean = 102.3 ± 19.6). These authors found a positive relationship between brain pH and FSIQ (r = .52), with a stronger relationship for crystallized intelligence (r = .60) than for fluid intelligence (r = .44). They noted that previous studies had shown relationships between IQ and averaged evoked potentials (Callaway, 1973; Ellis, 1969; Ertl & Schafer, 1969), and that increased cellular pH is associated with increased amplitude of nerve action potentials (Lehmann, 1937) and decreased conduction times (Ellis, 1969). Thus, spectroscopic measures of pH were here first shown to be associated with likely efficiency of nerve conduction capabilities as manifested by increased performance on standardized IQ tests in a normal cohort (Rae et al., 1996). Could it be that spectroscopic measures were predictive of intelligence in normal healthy adult subjects as well? Surprisingly, no one had looked at these relationships, in spite of vigorous research regarding the association of metabolites, including NAA, with neuropsychological functioning in disease states including: abstinent alcoholics (Martin et al., 1995), HIV/AIDS (López-Villegas, Lenkinski, & Frank, 1997), adrenoleukodystrophy (Rajanayagam et al., 1997), and traumatic brain injury (Friedman et al., 1998).
Given previous differentiation between high and low IQ subjects with respect to measures of NAA (Hashimoto et al., 1995), the establishment of a strong, linear relationship between spectroscopic measures and IQ in a normal cohort (Rae et al. 1996), and several studies showing decrements of NAA to be correlated with neuropsychological functioning across a wide range of disease entities, it seemed entirely plausible to hypothesize a relationship between IQ, the most sensitive and reliable measure of human cognitive functioning, and NAA, a measure sensitive to neuronal integrity. But where to put the voxel of interest?

Our MRS Studies of Intelligence

One of us (RJ) became interested in MRS during graduate school, looking at studies of traumatic brain injury (Brooks et al., 2001) and systemic lupus erythematosus (Sibbitt Jr, Sibbitt, & Brooks, 1999). Spectroscopic voxels were placed within the occipito-parietal white matter, because we could get very high quality spectra from these locations without significant artifacts, as was a problem with voxels within the frontal lobes (due to air/tissue


interface associated with the nasal conchae, and away from dental work, which, at that time, was associated with metallic artifacts creeping into the images from fillings, implants, and the like). Our spectra were beautiful, with sharp, thin peaks, although having little to do with most aspects of higher cognitive functioning associated with the massive frontal lobes of the human brain. Indeed, nascent neuroimaging research at the time, regarding neural correlates of higher cognitive functioning (e.g., abstract reasoning, working memory, language, attention), clearly pointed to frontal lobe involvement (Cabeza & Nyberg, 2000; Posner & Raichle, 1998). RJ's dissertation project established the first MRS study regarding the biochemical correlates of intelligence in a normal adult cohort (Jung et al., 1999). We studied 27 participants (17 female, 10 male), placed spectroscopic voxels within bilateral frontal lobe regions and one "control" voxel within the left occipito-parietal lobe, and hypothesized that frontal NAA would be significantly associated with IQ, while the control region would be unassociated (or only weakly associated) with IQ. We administered the Wechsler Adult Intelligence Scale – III (Mean = 111 ± 11.4; range = 91–135) to all participants, who were screened to exclude any neurological or psychiatric disease or disorder. Absolute quantification of NAA, Cre, and Cho was obtained, as a separate water scan was acquired independently as a reference, allowing for "absolute" quantification of these metabolites at a millimolar level. We found a significant, moderate correlation between NAA (and somewhat lower for Cho) and IQ across the sample (r = .52). The only problem was that this was found in our "control" voxel within the occipito-parietal region; indeed, there was no association between frontal NAA, Cho, or Cre and IQ whatsoever! We reported the significant results in the Proceedings of the Royal Society of London.
With regard to the voxel location, we noted that:

    The main association pathways sampled by our experimental paradigm included axonal fibers from the posterior aspects of the superior and inferior longitudinal, occipitofrontal and arcuate fasciculi, as well as the splenium of the corpus callosum. As this voxel location sampled numerous association pathways connecting many brain regions, metabolic concentrations in this voxel may widely influence cognitive processing. (p. 1378)

Several studies of NAA have emerged in normal subjects, showing rather consistent, low, positive correlations between this metabolite and various measures of cognitive functioning, most particularly intelligence (Table 14.1).

Limitations of MRS/Intelligence Studies so Far

Several comments can be made with regard to the variability of these R2 values across studies. First, there is a general trend towards smaller studies having higher NAA-IQ relationships than larger N studies (Figure 14.2), a characteristic well established within the brain–behavior literature (Button et al.,


Table 14.1 Studies of NAA.

Author                    N     IQ r2      Location          Method          Age            Gender
Jung et al., 1999         26    .27 (1)    L Parietal        STEAM           22 (4.6)       17F/10M
Ferguson et al., 2002     88    .04 (2)    L Parietal        PRESS NAA/Cr    65–70          –
Pfleiderer et al., 2004   62    .31 (1)    L DLPFC, L ACC    STEAM           38.5 (15.4)    22F/40M
Giménez et al., 2004      21    .0003 (1)  L Temporal        PRESS NAA/Cho   14.05 (2.46)   11F/10M
Jung et al., 2005         27    .26 (1)    L Parietal        PRESS           24.8 (5.9)     10F/17M
Charlton et al., 2007     106   .03 (3)    B Centrum S.      PRESS CSI       50–89          51F/55M
Jung et al., 2009         63    .12 (1)    R Posterior       PRESS CSI       23.7 (4.2)     29F/34M
Aydin et al., 2012        30    .32 (1)    CC Posterior      STEAM           15.1 (.75)     30M
Patel & Talcott, 2014     40    .02 (1)    L Frontal         STEAM NAA/Cr    21.1 (3.5)     29F/11M
Paul et al., 2016         211   .02 (4)    Post. Cingulate   PRESS CSI       24.6 (18–44)   90F/121M
Nikolaidis et al., 2017   71    .11 (5)    Multiple L F/P    –               21.15 (2.56)   47F/24M
Average                         .14

(1) Wechsler Intelligence Scales; (2) Raven's Progressive Matrices Test; (3) Wechsler Abbreviated Scale of Intelligence (Matrix Reasoning, Block Design); (4) G Fluid: BOMAT, Number Series, Letter Sets; (5) G Fluid: Matrix Reasoning, Shipley Abstraction, Letter Sets, Spatial Relations Task, Paper Folding Task, Form Boards Task.

2013), and associated with a move toward N > 100 in the neurosciences (Dubois & Adolphs, 2016). Second, there has been an increase in sophistication of measurement of spectra, from a single voxel (Jung et al., 1999), to multiple voxels within frontal and posterior brain regions (Jung, Gasparovic, Chavez, Caprihan, et al., 2009), to very elegant Chemical Shift Imaging (CSI) studies assessing multiple brain regions, with both metabolic and volume comparisons by region being made against multiple behavioral measures, including measures of reasoning (Nikolaidis et al., 2017). Measures of absolute quantitation clearly show stronger relationships to measures of IQ and reasoning (average R2 = .16) than do measures of NAA as a ratio to other metabolites (e.g., Cre or Cho), with average R2 = .02. This is likely due to both increased variance associated with denominator metabolites, as well as the lack of tissue correction (gray, white, CSF) associated with quantification of


Figure 14.2 Linear relationship between size of study (Y axis) and magnitude of NAA–intelligence, reasoning, general cognitive functioning relationship (X axis), with the overall relationship being inverse (R2 = .20).

NAA in millimolar units to an underlying water scan. Third, a few studies have "de-standardized" the IQ measure in ways that make interpretation of the findings difficult and sometimes impossible. The most baffling of these was from Charlton et al. (2007), who converted two non-verbal subtests from the WASI (Block Design and Matrix Reasoning) to an "optimal range: 25–29," then controlled their regression analysis for numerous factors, including estimated intelligence from the National Adult Reading Test, thus ensuring that NAA-IQ relationships would be nil. We are not aware of any other neuroimaging study of brain–IQ relationships that would use premorbid intelligence as a covariate. The results of such an analysis are considered to be self-evident: if you control for IQ (i.e., premorbid IQ, which correlates r = .72–.81 with current IQ in healthy adults; Lezak, Howieson, & Loring, 2004), the relationship between any variable and IQ will approach zero. Finally, we have picked the highest R2 values for each study, although it should be noted that many of the studies show other regions or region-by-sex relationships that are much lower, inverse, or negligible. The focus of this chapter is to determine whether some consensus can be found, and whether any recommendations can be made towards better spectroscopic–IQ studies undertaken in the future.


Recommendations for Future MRS/Intelligence Studies

One major recommendation to be made when studying the construct of intelligence, reasoning, or general cognitive functioning is to choose a measure with the highest reliability and validity. The construct of intelligence has been studied for over 100 years, and a dedicated journal (Intelligence) has chronicled research regarding this construct for some 40-plus years. There are many ways to measure intelligence, including the Shipley, Raven's, BOMAT, and Wechsler Scales; however, none possess higher reliability and validity than the Wechsler Scales with respect to measuring this important human construct (Anon, 2002). Split-half reliability ranges from .97 (16–17 years old) to .98 (80–84 years old), with an average reliability of .98 for the Full Scale Intelligence Quotient (FSIQ). Test–retest reliability (across 2–12 week intervals) ranges from .95 (16–29 years old) to .96 (75–89 years old). Inter-rater reliabilities across verbal subtests requiring some subjective scoring range from .91 (Comprehension) to .95 (Vocabulary). Both convergent validity (e.g., Standard Progressive Matrices, Stanford-Binet) and discriminative validity (e.g., attention, memory, language) have been demonstrated for the WAIS FSIQ. The Raven's Matrices test is a good substitute for a general measure of intelligence, with moderate correlation with the WAIS FSIQ (r = .64 for Standard Progressive Matrices), although for normal samples both Standard and Advanced Matrices must be used to provide adequate variance of the measure (i.e., high ceiling, low floor). The BOMAT provides a high ceiling, but should never be mixed with other measures possessing unknown reliability/validity. Mixing together various measures of "reasoning" into an average score comprising "General Fluid Ability" creates a new measure of unknown reliability or validity, with unknown relationships to well-established measures of intelligence.
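The practical meaning of reliability coefficients like those cited above can be illustrated with the standard error of measurement, SEM = SD × sqrt(1 − r). The IQ standard deviation of 15 is the conventional Wechsler scaling, and the .80 comparison value is our illustrative assumption for a less reliable ad hoc composite:

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: expected spread of observed scores
    around a person's true score, given the test's reliability coefficient."""
    return sd * math.sqrt(1.0 - reliability)

# Wechsler FSIQ: SD = 15, split-half reliability ~.98 (as cited in the text)
print(f"SEM at r = .98: {sem(15, 0.98):.1f} IQ points")
# A hypothetical ad hoc reasoning composite with reliability .80
print(f"SEM at r = .80: {sem(15, 0.80):.1f} IQ points")
```

A measure with reliability .98 localizes an individual's score within roughly ±2 IQ points, whereas a .80-reliable composite spreads observed scores over three times that range, attenuating any brain-behavior correlation computed from it.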
So, this would be general advice for all readers of this handbook: use well-validated measures of intelligence with high reliability in order to reduce the likelihood of behavioral measurement error. A second major recommendation to spectroscopic researchers (and to neuroimaging researchers more broadly) is to interrogate regions of the brain that have been demonstrated to have high relationships with intelligence in prior research. What might these brain regions be? One of the most firmly established and well-replicated findings in all of intelligence research is the modest correlation between overall brain size and IQ of around r = .24 (Pietschnig et al., 2015). Thus, any large or gross measure of brain structure or function is likely to produce measures comparable to the magnitude found in overall cortical volume (e.g., R2 = .06). Neurosurgeons have long known (and demonstrated through their surgical prowess) that the left hemisphere is more “eloquent” than the non-dominant right when it comes to higher cognitive functioning, although this dogma has recently been challenged (Vilasboas, Herbet, & Duffau, 2017). Certain subcortical structures have been demonstrated to be critical to higher cognitive functioning, and the volume of one in


particular – the caudate nucleus – has been linked to IQ (Grazioplene et al., 2015). Neither the mesial temporal lobe (including hippocampus), nor the occipital lobe (other than BA 19), nor the sensory/motor cortex has ever consistently been demonstrated to be related to intelligence (Basten, Hilger, & Fiebach, 2015). This leaves the frontal and parietal lobes, the focus of a major theory of intelligence relating parietal and frontal lobe integrity to the structure and function of intelligence in the human brain (Jung & Haier, 2007). The vast majority of research in both structural and functional domains has supported the importance of parietal and frontal regions, particularly regions overlapping the Task Positive Network (Glasser et al., 2016), as well as white matter tracts including the corpus callosum and other central tracts, both connecting the two hemispheres (Kocevar et al., 2019; Navas-Sánchez et al., 2014; Nusbaum et al., 2017) and connecting the frontal to more posterior lobes, such as the inferior frontal occipital fasciculus (IFOF) (Haász et al., 2013). Thus, there are areas of theoretical and empirical interest in which voxel placement is more or less likely to yield results. We have noted that MRS techniques are more amenable to studying white matter volumes, given that voxels must be placed away from the skull (to avoid lipid contamination from the skull and scalp) and from air/tissue interfaces. These voxels are also generally placed to avoid overlap with the ventricles, to avoid contributions from samples containing minimal to no spectra. This leaves deep white matter and subcortical gray matter structures to interrogate with either single voxel or CSI. CSI, almost invariably, is placed just dorsal to the lateral ventricles, leaving little room for choice, but providing maximal brain coverage of both gray and white volumes without contamination from either scalp/skull or ventricles.
Placement of single voxels should be carried out with some knowledge of the underlying brain anatomy being interrogated, with particular white matter tracts (e.g., the inferior frontal occipital fasciculus, IFOF; the arcuate fasciculus, AF) of special interest given both theoretical and meta-analytic research regarding white matter contributions to intelligence (Basten et al., 2015; Jung & Haier, 2007). Given the various techniques and findings reviewed here, our final recommendations for spectroscopic inquiries of intelligence can be summarized as follows.

First, use a reliable and valid measure of intelligence – Wechsler is best (the WASI subtests of Vocabulary and Matrix Reasoning yield FSIQ and can be administered in 20 minutes). Ad hoc and/or home-grown measures of fluid intelligence, reasoning, or the like do not allow comparison across studies. Moreover, their reliability, validity, normative age range (6–89 for the WASI), and updates for so-called Flynn effects (Flynn, 1987) are highly likely to fall short of the gold standard established with the Wechsler scales.

Second, spectroscopic voxels must be placed so as to yield high-quality spectra; however, consideration of the underlying anatomy is vital to interpretation of findings. Nikolaidis et al. (2017) provide the most elegant example for future researchers to follow, with CSI overlaid on particular cortical regions, which are then combined statistically using principal component analysis. Posterior brain regions are particularly fruitful with regard to NAA–IQ relationships, with parietal-frontal white and gray matter voxels being most likely to produce moderate positive associations across studies.

Third, absolute quantification of metabolite concentration is critical. Convolving metabolites through the use of ratios (e.g., NAA/Cre), and ignoring and/or minimizing the effects of tissue concentration and/or water content within voxels, does not move the science forward.

Fourth, sample sizes should be N > 100, with roughly equal sampling of males and females, to provide sufficient power to detect reliable NAA–IQ relationships, as well as to determine whether any significant sex differences exist within the sample.
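The sample-size recommendation can be sanity-checked with a quick power calculation (our own sketch, not taken from the sources above): under the Fisher z approximation, the power to detect a true correlation of r = .30 – roughly the magnitude of the moderate NAA–IQ associations discussed here – with N = 100 at two-tailed α = .05 is about .86, whereas N = 40 leaves the study underpowered.

```python
import math

def correlation_power(r, n):
    """Approximate power to detect a population correlation r with n subjects
    (two-tailed alpha = .05), via the Fisher z normal approximation."""
    z_r = math.atanh(r)              # Fisher z transform of the true correlation
    se = 1.0 / math.sqrt(n - 3)      # standard error of z
    z_crit = 1.96                    # two-tailed critical value at alpha = .05
    # power = P(observed z exceeds the critical threshold under H1)
    return 0.5 * (1.0 + math.erf((z_r / se - z_crit) / math.sqrt(2.0)))

print(round(correlation_power(0.30, 100), 2))  # ~0.86: adequately powered
print(round(correlation_power(0.30, 40), 2))   # ~0.47: roughly a coin flip
```

This is the same arithmetic underlying the "power failure" argument of Button et al. (2013): halving the sample does far more than halve the chance of detecting a moderate effect.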

References

Aboitiz, F., Scheibel, A. B., Fisher, R. S., & Zaidel, E. (1992). Fiber composition of the human corpus callosum. Brain Research, 598(1–2), 143–153.
Andreasen, N. C., Flaum, M., Swayze, V. D., O'Leary, D. S., Alliger, R., Cohen, G., . . . Yuh, W. T. (1993). Intelligence and brain structure in normal individuals. American Journal of Psychiatry, 150(1), 130–134.
Anon. (2002). WAIS-III WMS-III technical manual. New York: The Psychological Corporation.
Aydin, K., Uysal, S., Yakut, A., Emiroglu, B., & Yilmaz, F. (2012). N-Acetylaspartate concentration in corpus callosum is positively correlated with intelligence in adolescents. NeuroImage, 59(2), 1058–1064.
Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51(1), 10–27.
Bates, T. E., Strangward, M., Keelan, J., Davey, G. P., Munro, P. M. G., & Clark, J. B. (1996). Inhibition of N-acetylaspartate production: Implications for 1H MRS studies in vivo. Neuroreport, 7(8), 1397–1400.
Blakely, R. D., & Coyle, J. T. (1988). The neurobiology of N-acetylaspartylglutamate. International Review of Neurobiology, 30, 39–100.
Brooks, W. M., Friedman, S. D., & Gasparovic, C. (2001). Magnetic resonance spectroscopy in traumatic brain injury. Journal of Head Trauma Rehabilitation, 16(2), 149–164.
Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365–376.
Cabeza, R., & Nyberg, L. (2000). Imaging cognition II: An empirical review of 275 PET and fMRI studies. Journal of Cognitive Neuroscience, 12(1), 1–47.


Callaway, E. (1973). Correlations between averaged evoked potentials and measures of intelligence: An overview. Archives of General Psychiatry, 29(4), 553–558.
Charlton, R. A., McIntyre, D. J. O., Howe, F. A., Morris, R. G., & Markus, H. S. (2007). The relationship between white matter brain metabolites and cognition in normal aging: The GENIE study. Brain Research, 1164, 108–116.
D'Adamo, A. F., & Yatsu, F. M. (1966). Acetate metabolism in the nervous system: N-acetyl-L-aspartic acid and the biosynthesis of brain lipids. Journal of Neurochemistry, 13(10), 961–965.
Dubois, J., & Adolphs, R. (2016). Building a science of individual differences from fMRI. Trends in Cognitive Sciences, 20(6), 425–443.
Ellis, F. R. (1969). Some effects of PCO2 and pH on nerve tissue. British Journal of Pharmacology, 35(1), 197–201.
Ertl, J. P., & Schafer, E. W. P. (1969). Brain response correlates of psychometric intelligence. Nature, 223, 421–422.
Ferguson, K. J., MacLullich, A. M. J., Marshall, I., Deary, I. J., Starr, J. M., Seckl, J. R., & Wardlaw, J. M. (2002). Magnetic resonance spectroscopy and cognitive function in healthy elderly men. Brain, 125(Pt. 12), 2743–2749.
Flynn, J. R. (1987). Massive IQ gains in 14 nations: What IQ tests really measure. Psychological Bulletin, 101(2), 171–191.
Friedman, S. D., Brooks, W. M., Jung, R. E., Hart, B. L., & Yeo, R. A. (1998). Proton MR spectroscopic findings correspond to neuropsychological function in traumatic brain injury. American Journal of Neuroradiology, 19(10), 1879–1885.
Gadian, D. G. (1995). NMR and its applications to living systems. Oxford: Oxford University Press.
Giménez, M., Junqué, C., Narberhaus, A., Caldú, X., Segarra, D., Vendrell, P., . . . Mercader, J. M. (2004). Medial temporal MR spectroscopy is related to memory performance in normal adolescent subjects. Neuroreport, 15(4), 703–707.
Glasser, M. F., Coalson, T. S., Robinson, E. C., Hacker, C. D., Harwell, J., Yacoub, E., . . . Van Essen, D. C. (2016). A multi-modal parcellation of human cerebral cortex. Nature, 536, 171–178.
Graff-Radford, J., & Kantarci, K. (2013). Magnetic resonance spectroscopy in Alzheimer's disease. Neuropsychiatric Disease and Treatment, 9, 687–696.
Grazioplene, R. G., Ryman, S. G., Gray, J. R., Rustichini, A., Jung, R. E., & DeYoung, C. G. (2015). Subcortical intelligence: Caudate volume predicts IQ in healthy adults. Human Brain Mapping, 36(4), 1407–1416.
Gur, R. C., Turetsky, B. I., Matsui, M., Yan, M., Bilker, W., Hughett, P., & Gur, R. E. (1999). Sex differences in brain gray and white matter in healthy young adults: Correlations with cognitive performance. Journal of Neuroscience, 19(10), 4065–4072.
Haász, J., Westlye, E. T., Fjær, S., Espeseth, T., Lundervold, A., & Lundervold, A. J. (2013). General fluid-type intelligence is related to indices of white matter structure in middle-aged and old adults. NeuroImage, 83, 372–383.

r. e. jung and m. o. chohan

Harvey, I., Persaud, R., Ron, M. A., Baker, G., & Murray, R. M. (1994). Volumetric MRI measurements in bipolars compared with schizophrenics and healthy controls. Psychological Medicine, 24(3), 689–699.
Hashimoto, T., Tayama, M., Miyazaki, M., Yoneda, Y., Yoshimoto, T., Harada, M., . . . Kuroda, Y. (1995). Reduced N-acetylaspartate in the brain observed on in vivo proton magnetic resonance spectroscopy in patients with mental retardation. Pediatric Neurology, 13(3), 205–208.
Jensen, A. R. (1982). Reaction time and psychometric g. In H. J. Eysenck (ed.), A model for intelligence (pp. 93–132). Berlin: Springer-Verlag.
Jung, R. E., Brooks, W. M., Yeo, R. A., Chiulli, S. J., Weers, D. C., & Sibbitt Jr., W. L. (1999). Biochemical markers of intelligence: A proton MR spectroscopy study of normal human brain. Proceedings of the Royal Society B: Biological Sciences, 266(1426), 1375–1379.
Jung, R. E., Gasparovic, C., Chavez, R. S., Caprihan, A., Barrow, R., & Yeo, R. A. (2009). Imaging intelligence with proton magnetic resonance spectroscopy. Intelligence, 37(2), 192–198.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154.
Jung, R. E., Haier, R. J., Yeo, R. A., Rowland, L. M., Petropoulos, H., Levine, A. S., . . . Brooks, W. M. (2005). Sex differences in N-acetylaspartate correlates of general intelligence: An 1H-MRS study of normal human brain. NeuroImage, 26(3), 965–972.
Jung, R. E., Yeo, R. A., Sibbitt Jr., W. L., Ford, C. C., Hart, B. L., & Brooks, W. M. (2001). Gerstmann syndrome in systemic lupus erythematosus: Neuropsychological, neuroimaging and spectroscopic findings. Neurocase, 7(6), 515–521.
Kocevar, G., Suprano, I., Stamile, C., Hannoun, S., Fourneret, P., Revol, O., . . . Sappey-Marinier, D. (2019). Brain structural connectivity correlates with fluid intelligence in children: A DTI graph analysis. Intelligence, 72, 67–75.
Kumar, V., Sharma, U., & Jagannathan, N. R. (2012). In vivo magnetic resonance spectroscopy of cancer. Biomedical Spectroscopy and Imaging, 1(1), 89–100.
Lehmann, J. E. (1937). The effect of changes in pH on the action of mammalian A nerve fibres. American Journal of Physiology, 118(3), 600–612.
Lezak, M. D., Howieson, D. B., & Loring, D. W. (2004). Neuropsychological assessment, 4th ed. New York: Oxford University Press.
López-Villegas, D., Lenkinski, R. E., & Frank, I. (1997). Biochemical changes in the frontal lobe of HIV-infected individuals detected by magnetic resonance spectroscopy. Proceedings of the National Academy of Sciences of the United States of America, 94(18), 9854–9859.
Martin, P. R., Gibbs, S. J., Nimmerrichter, A. A., Riddle, W. R., Welch, L. W., & Willcott, M. R. (1995). Brain proton magnetic resonance spectroscopy studies in recently abstinent alcoholics. Alcoholism: Clinical and Experimental Research, 19(4), 1078–1082.
Moffett, J. R., Ross, B. D., Arun, P., Madhavarao, C. N., & Namboodiri, A. M. (2007). N-Acetylaspartate in the CNS: From neurodiagnostics to neurobiology. Progress in Neurobiology, 81(2), 89–131.


Navas-Sánchez, F. J., Alemán-Gómez, Y., Sánchez-Gonzalez, J., Guzmán-DeVilloria, J. A., Franco, C., Robles, O., . . . Desco, M. (2014). White matter microstructure correlates of mathematical giftedness and intelligence quotient. Human Brain Mapping, 35(6), 2619–2631.
Nikolaidis, A., Baniqued, P. L., Kranz, M. B., Scavuzzo, C. J., Barbey, A. K., Kramer, A. F., & Larsen, R. J. (2017). Multivariate associations of fluid intelligence and NAA. Cerebral Cortex, 27(4), 2607–2616.
Nordengen, K., Heuser, C., Rinholm, J. E., Matalon, R., & Gundersen, V. (2015). Localisation of N-acetylaspartate in oligodendrocytes/myelin. Brain Structure and Function, 220(2), 899–917.
Nusbaum, F., Hannoun, S., Kocevar, G., Stamile, C., Fourneret, P., Revol, O., & Sappey-Marinier, D. (2017). Hemispheric differences in white matter microstructure between two profiles of children with high intelligence quotient vs. controls: A tract-based spatial statistics study. Frontiers in Neuroscience, 11, 173. doi: 10.3389/fnins.2017.00173.
Parrish, R. G., Kurland, R. J., Janese, W. W., & Bakay, L. (1974). Proton relaxation rates of water in brain and brain tumors. Science, 183(4123), 438–439.
Patel, T., & Talcott, J. B. (2014). Moderate relationships between NAA and cognitive ability in healthy adults: Implications for cognitive spectroscopy. Frontiers in Human Neuroscience, 8, 39. doi: 10.3389/fnhum.2014.00039.
Paul, E. J., Larsen, R. J., Nikolaidis, A., Ward, N., Hillman, C. H., Cohen, N. J., . . . Barbey, A. K. (2016). Dissociable brain biomarkers of fluid intelligence. NeuroImage, 137, 201–211.
Pfleiderer, B., Ohrmann, P., Suslow, T., Wolgast, M., Gerlach, A. L., Heindel, W., & Michael, N. (2004). N-Acetylaspartate levels of left frontal cortex are associated with verbal intelligence in women but not in men: A proton magnetic resonance spectroscopy study. Neuroscience, 123(4), 1053–1058.
Pietschnig, J., Penke, L., Wicherts, J. M., Zeiler, M., & Voracek, M. (2015). Meta-analysis of associations between human brain volume and intelligence differences: How strong are they and what do they mean? Neuroscience and Biobehavioral Reviews, 57, 411–432.
Posner, M. I., & Raichle, M. E. (1998). The neuroimaging of human brain function. Proceedings of the National Academy of Sciences of the United States of America, 95(3), 763–764.
Rae, C., Scott, R. B., Thompson, C. H., Kemp, G. J., Dumughn, I., Styles, P., . . . Radda, G. K. (1996). Is pH a biochemical marker of IQ? Proceedings of the Royal Society B: Biological Sciences, 263(1373), 1061–1064.
Rajanayagam, V., Balthazor, M., Shapiro, E. G., Krivit, W., Lockman, L., & Stillman, A. E. (1997). Proton MR spectroscopy and neuropsychological testing in adrenoleukodystrophy. American Journal of Neuroradiology, 18(10), 1909–1914.
Reiss, A. L., Abrams, M. T., Singer, H. S., Ross, J. L., & Denckla, M. B. (1996). Brain development, gender and IQ in children: A volumetric imaging study. Brain, 119(Pt. 5), 1763–1774. doi: 10.1093/brain/119.5.1763.
Ross, A. J., & Sachdev, P. S. (2004). Magnetic resonance spectroscopy in cognitive research. Brain Research Reviews, 44(2–3), 83–102.


Schuff, N., Ezekiel, F., Gamst, A. C., Amend, D. L., Capizzano, A. A., Maudsley, A. A., & Weiner, M. W. (2001). Region and tissue differences of metabolites in normally aged brain using multislice 1H magnetic resonance spectroscopic imaging. Magnetic Resonance in Medicine, 45(5), 899–907.
Sibbitt Jr., W. L., Sibbitt, R. R., & Brooks, W. M. (1999). Neuroimaging in neuropsychiatric systemic lupus erythematosus. Arthritis & Rheumatism, 42(10), 2026–2038.
Soher, B. J., van Zijl, P. C., Duyn, J. H., & Barker, P. B. (1996). Quantitative proton MR spectroscopic imaging of the human brain. Magnetic Resonance in Medicine, 35(3), 356–363.
Taylor, D. L., Davies, S. E. C., Obrenovitch, T. P., Doheny, M. H., Patsalos, P. N., Clark, J. B., & Symon, L. (2002). Investigation into the role of N-acetylaspartate in cerebral osmoregulation. Journal of Neurochemistry, 65(1), 275–281.
Tedeschi, G., Bertolino, A., Righini, A., Campbell, G., Raman, R., Duyn, J. H., . . . Di Chiro, G. (1995). Brain regional distribution pattern of metabolite signal intensities in young adults by proton magnetic resonance spectroscopic imaging. Neurology, 45(7), 1384–1391.
Vilasboas, T., Herbet, G., & Duffau, H. (2017). Challenging the myth of right nondominant hemisphere: Lessons from corticosubcortical stimulation mapping in awake surgery and surgical implications. World Neurosurgery, 103, 449–456.
Wickett, J. C., Vernon, P. A., & Lee, D. H. (1994). In vivo brain size, head perimeter, and intelligence in a sample of healthy adult females. Personality and Individual Differences, 16(6), 831–838.
Willerman, L., Schultz, R., Rutledge, J. N., & Bigler, E. D. (1991). In vivo brain size and intelligence. Intelligence, 15(2), 223–228.

15 Good Sense and Good Chemistry: Neurochemical Correlates of Cognitive Performance Assessed In Vivo through Magnetic Resonance Spectroscopy

Naftali Raz and Jeffrey A. Stanley

Intelligence is an extensively researched and psychometrically robust construct whose biological validity nonetheless remains insufficiently elucidated. The extant theorizing about the neural mechanisms of intelligence links better reasoning abilities to the efficiency of information processing by the brain as a system (Neubauer & Fink, 2009), to the structural and functional integrity of the network connecting critically important brain hubs (a parietal-frontal integration, or P-FIT, theory; Jung & Haier, 2007), and to the properties of specific brain regions, such as the prefrontal cortices (Duncan, Emslie, Williams, Johnson, & Freer, 1996). Gathering data to test these theories is a complicated enterprise that involves interrogating the brain from multiple perspectives. Despite recent promising work on multimodal imaging (Sui, Huster, Yu, Segall, & Calhoun, 2014), it is still unrealistic to assess all relevant aspects of the brain at once; investigators are thus compelled to evaluate specific salient features of the brain's structure and function. In this chapter, we review the application of magnetic resonance spectroscopy (MRS) to investigating the neurochemistry of the energy metabolism and neurotransmission underlying cognitive operations, including the complex reasoning abilities that we take to be components or expressions of intelligence. Specifically, we focus on two important sets of characteristics that underpin information processing and transfer within the brain as a system: brain energy metabolism and neurotransmission – the main consumer of the brain's energetic resources. We restrict our discussion to specific methods of assessing the brain's neurochemical and metabolic properties in vivo: ¹H and ³¹P MRS (Stanley, 2002; Stanley, Pettegrew, & Keshavan, 2000; Stanley & Raz, 2018).
After highlighting the key aspects of cerebral energy metabolism and neurotransmission, we describe the physical foundations of MRS, its capabilities in estimating the brain's metabolites, the spatial and temporal resolution constraints on MRS-generated estimates, the distinct advantages and disadvantages of ¹H and ³¹P MRS, and the cognitive correlates of MRS-derived indices described in the extant literature. Finally, we present a road map for maximizing the advantages and overcoming the limitations of MRS in future studies of the energetic and neurotransmission mechanisms that may underlie implementation of simple and complex cognitive abilities.

Magnetic Resonance Spectroscopy: A Brief Introduction

The Fundamentals: Physical and Chemical Foundations of MRS

Although MRS and magnetic resonance imaging (MRI) are both based on the same phenomenon of nuclear magnetic resonance (NMR), the two differ in important ways. MRI focuses on capturing the signal of a single chemical species, water, by targeting the two ¹H nuclei of its molecule. Therefore, the data collected in all MRI studies, regardless of the specific technique (structural, diffusion, or susceptibility-weighted), are based on a strong signal from brain water, whose concentration ranges between 70% and 95% of the 55.5 mol/L of pure water. In contrast, MRS focuses on acquiring the signal of multiple chemical species simultaneously. It can target, for example, any molecule containing ¹H nuclei, such as glutamate or γ-aminobutyric acid (GABA), or any phosphorus-containing (³¹P) molecule, such as phosphocreatine (PCr) or adenosine triphosphate (ATP). The chemical shift interaction is the primary mechanism that enables MRS to discern multiple chemical species. It is based on the principle that the "resonant" conditions, or precessional frequencies, of the targeted nuclei are directly proportional to the static magnetic field strength, B0, via a nucleus-specific gyromagnetic ratio constant. In biological systems, molecules are typically composed of CH, CH2, CH3, NH3, and PO3 groups, to name a few, which are referred to as "spin groups." The magnetic fields at each of the spin groups of a molecule differ slightly from one another owing to the physical interaction of local magnetic fields. Because the local magnetic field differs, so does the resonant frequency, or "chemical shift," of each spin group. The NMR signal from each group of the targeted nuclei is acquired in the time domain and converted, through the Fourier transform, to a frequency-domain representation.
The latter allows chemical shift frequencies to be displayed as an MRS spectrum, which consists of a series of uniquely positioned spectral peaks (or chemical shifts) expressed in parts per million (ppm), as illustrated in Figure 15.1.

Figure 15.1 Examples of a quantified ¹H MRS spectrum and a quantified ³¹P MRS spectrum. (A) An example of a quantified ¹H MRS spectrum derived with the LCModel approach (Provencher, 1993). The MRS data were acquired from the dorsal anterior cingulate cortex at 3 Tesla. The modeled spectrum (red line) is superimposed on the acquired spectrum (black line). The residual and the modeled spectrum of glutamate are shown below. On the right is the single-voxel location, marked by a box superimposed on the MRI images. Abbreviations: NAA, N-acetylaspartate; PCr+Cr, phosphocreatine plus creatine; GPC+PC, glycerophosphocholine plus phosphocholine; Glu, glutamate. (B) An example of a quantified ³¹P MRS spectrum from a left anterior white matter voxel extracted from a 3D CSI acquisition with ¹H decoupling at 3 Tesla. The modeled spectrum (red line) is superimposed on the acquired spectrum (black line), with the residual and the individual modeled spectra shown below. On the right is the voxel selection box superimposed on the 3D CSI grid and the MRI images. Abbreviations: PCr, phosphocreatine; Pi, inorganic orthophosphate; GPC, glycerophosphocholine; PC, phosphocholine; PE, phosphoethanolamine; GPE, glycerophosphoethanolamine; DN, dinucleotides; ATP, adenosine triphosphate.

In addition to the chemical shift interactions, which depend on the B0 field strength, there is a further through-bond interaction between adjacent spin groups within the same molecule, referred to as the scalar J-coupling interaction. That interaction can split a chemical shift from a singlet into multiple subpeaks, or multiplets – e.g., doublets, triplets, or quartets. The peak separation of multiplets, which is field-independent, hinges on the J-coupling strength and is expressed in Hertz. Thus, the combination of unique chemical shift and J-coupling interactions of spin groups within and between different neurochemicals enables identification of the neurochemical composition at a given brain location in vivo (Govindaraju, Young, & Maudsley, 2000). Greater detail on the basic principles and applications relevant to MRS is available elsewhere (Fukushima & Roeder, 1981; McRobbie, Moore, Graves, & Prince, 2006).

Lastly, the signal intensity of a spectral peak (or, more precisely, the area under the peak, equivalent to the signal amplitude at time zero in the time domain) is proportional to the concentration of the molecule associated with that chemical shift. This relationship implies that the signal amplitude of each peak is directly related to the number of ¹H or ³¹P nuclei of that spin group, associated with the targeted chemical compound, within a sampled voxel. Therefore, MRS possesses the unique ability to identify the neurochemical composition, and to quantify the absolute in vivo concentration, of multiple neurochemical compounds in a localized volume of interest. The MRS outcome can be presented in the form of a spectrum, as described above, or, like an MRI, as a set of images in which the signal intensity at each pixel represents the concentration of a specific neurochemical such as glutamate or GABA. The latter representation is referred to as spectroscopic imaging, or MRSI.

Despite its advantage in assessing the composition of various neurochemical compounds, rather than just the MR signal of water molecules as with MRI, MRS has some limitations. Because the signal emanating from water, the most abundant source of ¹H nuclei in the brain, is stronger by several orders of magnitude than that of the other chemical components of brain tissue, MRI has substantially greater temporal and spatial resolution than MRS. For example, the brain concentrations of glutamate and GABA are ~8–12 mmol and 1.5–3 mmol, respectively, whereas the concentration of water is greater by a factor of approximately 10⁴. The consequence of the weaker MRS signal is that the voxel size must be larger to achieve an adequate signal-to-noise ratio (S/N) for reliable quantification. Thus, on a 3 Tesla system, a typical voxel size for ¹H MRS is 1–27 cm³, compared to 0.5–8 mm³ for routine MRI. In the case of ³¹P MRS, the inherent sensitivity is approximately 1/15th that of the ¹H nucleus; the spatial resolution of ³¹P MRS is therefore even poorer than that of ¹H MRS, with typical voxels in excess of 20 cm³ at 3 Tesla.

Conducting MRS at higher B0 field strengths has many advantages. First, the S/N ratio scales approximately with the B0 field strength, so increasing the latter brings significant gains in the spatial and temporal resolution of MRS. The enhanced S/N at higher field strengths, such as 7 Tesla, can boost the spatial resolution by at least a factor of two and reduce the acquisition to under a minute, thus bringing the temporal resolution in line with typical cognitive process durations in task-based fMRI paradigms. Second, higher field strengths also increase the chemical shift dispersion, which leads to greater separation of the chemical shifts within and between chemical species (Ugurbil et al., 2003) and hence greatly improves the differentiation of coupled spin systems, such as glutamate and glutamine (Tkac et al., 2001). In all, conducting MRS experiments at higher B0 fields improves the accuracy and precision of quantification (Pradhan et al., 2015; Yang, Hu, Kou, & Yang, 2008), minimizes the partial volume effects that impede precise voxel placement in functionally relevant brain areas, and boosts the temporal resolution required for capturing neurochemical modulations on the time scale of epochs often used in task-based fMRI paradigms (Stanley & Raz, 2018). With respect to hardware, collecting ¹H MRS data requires no additional hardware beyond specialized acquisition sequences suited for MRS, which makes it a popular choice in research facilities where only clinical scanners are available.
In the case of ³¹P MRS, additional hardware – a multi-nuclei capability package and a specialized transmit–receive radio frequency (RF) coil – is required, but both are readily available from major manufacturers and third-party vendors. Acquisition schemes for localizing the MRS signal fall into two main categories: single-voxel and multi-voxel MRS. Multi-voxel acquisition, also known as chemical shift imaging (CSI), has the significant advantage of characterizing the neurochemistry of multiple brain areas or voxels, in a single cross-sectional slice or in multiple slices, within a single measurement. Its spatial resolution is typically much better than that of single-voxel MRS, because CSI methods are more efficient with signal averaging per unit of time. However, its spectral quality tends to be poorer than in single-voxel MRS. For example, if the goal of a study is to measure glutamate with the greatest precision, a single-voxel ¹H MRS approach is preferred. In the past three decades, the stimulated echo acquisition mode (STEAM) and point-resolved spectroscopy (PRESS) sequences have been the two most commonly used approaches for ¹H MRS in both modes, single- and multi-voxel. More recent innovative approaches include Localization by Adiabatic SElective Refocusing (LASER) (Garwood & DelaBarre, 2001), semi-LASER (Scheenen, Klomp, Wijnen, & Heerschap, 2008), and SPin ECho, full Intensity Acquired Localized (SPECIAL) (Mlynárik, Gambarota, Frenkel, & Gruetter, 2006). Adiabatic pulses are highly effective for outer volume suppression (OVS), which is a key component of the acquisition sequence (Tkác, Starcuk, Choi, & Gruetter, 1999). Choices of localization for in vivo ³¹P MRS are limited by the relatively short spin–spin (T2) relaxation of ³¹P metabolites; common methods include image-selected in vivo spectroscopy (ISIS), applied as a single- or multiple-voxel technique, and CSI. An example of ¹H MRS tailored to evaluate glutamate is presented in Figure 15.1A.
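The quantitative relationships running through this section – resonant frequency proportional to B0 via the nucleus-specific gyromagnetic ratio, Fourier transformation of the time-domain signal into a spectrum, and chemical shift dispersion growing with field strength – can be made concrete with a small numerical sketch. The gyromagnetic ratios and the NAA/creatine chemical shifts below are standard values; the acquisition parameters, amplitudes, and decay constants are invented purely for illustration.

```python
import numpy as np

# Nucleus-specific gyromagnetic ratios (standard values, MHz per Tesla):
GAMMA = {"1H": 42.577, "31P": 17.235}
B0 = 3.0                                   # field strength in Tesla
f0 = GAMMA["1H"] * B0                      # 1H Larmor frequency: ~127.7 MHz at 3 T

# Simulate a time-domain FID with two damped resonances mimicking
# NAA (2.01 ppm) and creatine (3.03 ppm); 1 ppm = f0 Hz of offset at 3 T.
fs, n = 2000.0, 4096                       # sampling rate (Hz) and number of points
t = np.arange(n) / fs
fid = np.zeros(n, dtype=complex)
for ppm, amp in [(2.01, 2.0), (3.03, 1.0)]:    # amplitudes ~ relative concentration
    fid += amp * np.exp(2j * np.pi * (ppm * f0) * t) * np.exp(-t / 0.08)

# Fourier transform: FID -> frequency-domain spectrum, re-axised in ppm
spectrum = np.abs(np.fft.fft(fid))
ppm_axis = np.fft.fftfreq(n, d=1.0 / fs) / f0

peak_ppm = ppm_axis[np.argmax(spectrum)]       # tallest peak: the NAA-like singlet
print(round(peak_ppm, 2))                      # ~2.01 ppm

# Chemical shift dispersion grows with B0: a fixed 0.1 ppm separation
# (roughly the glutamate/glutamine overlap problem) in Hz at 3 T vs. 7 T:
for b0 in (3.0, 7.0):
    print(b0, "T:", round(0.1 * GAMMA["1H"] * b0, 1), "Hz")
```

Note how the 0.1 ppm separation more than doubles in Hertz from 3 T to 7 T while the J-coupling splittings, being field-independent, stay fixed; this is why coupled spin systems become easier to resolve at higher fields.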

Brain Energy Metabolism The brain’s share of the body weight is only 2%, yet it consumes about 20% of the total energy generated by the body’s cellular machinery (Attwell & Laughlin, 2001). Most of that energy is invested in neurotransmission that underpins the brain’s core function: information processing (Howarth, Gleeson, & Attwell, 2012; Sokoloff, 1991, 1993). The bulk of the brain’s energy is generated by the mitochondria, which produce daily about 6 kg of the main energy substrate, ATP, via oxidative phosphorylation (OXPHOS) through the ATPase pathway: inorganic orthophosphate (Pi) + adenosine diphosphate (ADP) ! ATP. In addition, the high-energy phosphate store, PCr, can be converted into ATP through creatine kinase (CK) reaction: PCr + ADP ! ATP + Pi. The latter is considerably more efficient compared to the mitochondrial ATPase-mediated route (Andres, Ducray, Schlattner, Wallimann, & Widmer, 2008; Schlattner, Tokarska-Schlattner, & Wallimann, 2006; Wallimann, Wyss, Brdiczka, Nicolay, & Eppenberger, 1992). The CK is also involved in shuttling PCr out of mitochondria to sites utilizing ATP. The physiological ATP production and utilization via the ATPase and CK pathways can be estimated using ³¹P MRS combined with magnetization transfer (MT), which allows quantifying the exchange between PCr and the saturated signal of Pi or γ-ATP (Du, Cooper, Lukas, Cohen, & Ongur, 2013; Shoubridge, Briggs, & Radda, 1982; Zhu et al., 2012). As noted, the state of brain energy metabolites and changes therein can be estimated in vivo using ³¹P MRS by quantifying PCr, ATP, and Pi (Goldstein et al., 2009; Kemp, 2000). The brain tissue concentration of ATP is approximately 3 μmole/g, which is well buffered by PCr with a tissue concentration of 5 μmole/g (McIlwain & Bachelard, 1985). 
On a ³¹P MRS spectrum, ATP is represented by three chemical shifts, one per phosphate spin group, in which the β- and α-ADP peaks reside on the shoulders of the γ- and α-ATP peaks, respectively (Figure 15.1B); the β-ATP resonance is therefore the preferred chemical shift for quantifying ATP. However, basal PCr levels in vivo may reflect not only energy consumption but also high-energy storage capacity. That is, lower PCr levels will be observed under either increased utilization of energy or reduced PCr storage, so it is impossible to identify the specific mechanism driving a lower PCr level. Methods such as ³¹P MRS with MT are better suited for assessing mechanisms directly related to CK utilization or ATP production. Lastly, one must also be mindful that PCr can be measured with ¹H MRS, where it is commonly referred to in the literature as "creatine" or Cr – a misleading label. Because the PCr and Cr spectral peaks are indistinguishable on a ¹H MRS spectrum, and PCr and Cr are both reactants in the CK reaction, a shift in CK utilization would not produce a net change in the combined PCr+Cr measurement by ¹H MRS. Thus, ¹H MRS is poorly suited to assessing differences in the utilization of energy metabolism (Du et al., 2013; Shoubridge et al., 1982; Zhu et al., 2012).
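The invariance of the combined PCr+Cr signal follows directly from the stoichiometry of the CK reaction, as a toy bookkeeping sketch shows (the pool sizes and fluxes are arbitrary illustrative numbers):

```python
# The creatine kinase reaction, PCr + ADP <-> Cr + ATP, only converts one
# creatine pool into the other, so the sum PCr + Cr -- which is all that a
# 1H MRS spectrum can resolve -- never changes.
pcr, cr = 5.0, 4.0                    # arbitrary starting pools (umol/g)
total = pcr + cr
for ck_flux in (0.5, -1.2, 0.3):      # net conversions in the PCr -> Cr direction
    pcr -= ck_flux                    # PCr consumed (or restored) by CK
    cr += ck_flux                     # Cr produced (or consumed) in lockstep
    assert abs((pcr + cr) - total) < 1e-12
print(pcr + cr)                       # 9.0 -- the 1H MRS "Cr" peak is unmoved
```

This is exactly why a ¹H MRS "creatine" measurement is blind to CK flux, whereas ³¹P MRS, which resolves PCr separately, is not.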

Markers of Neuropil Expansion and Contraction Derived from 31P MRS In addition to assessing energy-related metabolites, ³¹P MRS provides an opportunity to access the molecular markers of cortical neuropil expansion and contraction, which may be particularly important in evaluating changes in brain–cognition relationships during development, aging, and treatment. These changes can be inferred from measuring precursors of membrane phospholipids (MPLs) – phosphocholine (PC) and phoshoethanolamine (PE) – as well as breakdown products of MPLs glycerophosphocholine (GPC) and glycerophosphoethanolamine (GPE) (Pettegrew, Klunk, Panchalingam, McClure, & Stanley, 2000). In the brain tissue, MPLs form membrane bilayers that physically separate the intracellular components from the extracellular environment in neurons, astrocytes, oligodendrocytes, and microglia, as well as different organelles within cells (Stanley & Pettegrew, 2001). Early in postnatal brain development, in vivo human and ex vivo rat brain 31P MRS studies have consistently shown high MPL precursor levels of mainly PE, and low levels of MPL breakdown products, GPC and GPE (Pettegrew, Panchalingam, Withers, McKeag, & Strychor, 1990; Pettegrew et al., 2000; Stanley & Pettegrew, 2001). This reflects the high demand of active MPL synthesis for the development of cell membrane structures required in the proliferation of dendritic and synaptic connections (i.e., neuropil expansion). The expansion of neuropil is followed by decreases in precursor levels and increases in breakdown products coinciding with maturation (i.e., pruning or neuropil contraction) (Goldstein et al., 2009; Stanley et al., 2008). In the context of rapid proliferating tissue, elevated levels of MPL precursor levels, specifically PC, have been reported at the time and site of neuritic sprouting in the hippocampus following unilateral lesions of the entorhinal cortex in rats (Geddes, Panchalingam, Keller, & Pettegrew, 1997). 
Collectively, this supports the quantification of MPL precursor levels, PE and PC, with ³¹P MRS as a sensitive measure of active MPL synthesis in the neuropil. ¹H MRS can also capture MPL metabolites by quantifying the trimethylamine chemical shift at approximately 3.2 ppm (Govindaraju et al., 2000), with a peak representing GPC plus PC, which are indistinguishable. In the literature, the trimethylamine ¹H peak, GPC+PC, is typically referred to as the "choline" peak or the "choline-containing" peak, which can be misleading


n. raz and j. a. stanley

because the contribution of free choline is below the detection limit (Miller, 1991). Also, the interpretation of the GPC+PC peak with ¹H MRS is ambiguous because it cannot distinguish precursors from breakdown products of MPLs (Stanley & Pettegrew, 2001; Stanley et al., 2000).

Specificity of Markers Derived from ¹H MRS: NAA and Myo-Inositol

On a typical ¹H MRS spectrum of the brain, the prominent chemical shift at 2.01 ppm is attributed to the CH3 spin group of N-acetyl-aspartate (NAA) (Govindaraju et al., 2000), which, next to glutamate, is the second most abundant free amino acid in the brain, with a concentration of approximately 10 mM (Tallan, 1957). NAA is synthesized in the mitochondria of neurons from acetyl-CoA and aspartate with the help of the membrane-bound enzyme L-aspartate N-acetyltransferase, and catabolized by the principal enzyme, N-acetyl-L-aspartate aminohydrolase II (aspartoacylase), whose activity is highest in oligodendrocytes (Baslow, 2003). Based on monoclonal antibody studies, NAA is localized to neurons, with greater staining in the perikarya, dendrites, and axons (Simmons, Frondoza, & Coyle, 1991). Cell culture studies have shown that NAA is also localized in neurons, immature oligodendrocytes, and O-2A progenitor cells (Urenjak, Williams, Gadian, & Noble, 1993). Thus, historically, NAA was viewed exclusively as a marker of mature neurons (De Stefano et al., 1998). However, several more recent investigations have revealed that NAA is present in mature oligodendrocytes (Bhakoo & Pearce, 2000), and there is evidence of inter-compartmental cycling of NAA between neurons and oligodendrocytes (Baslow, 2000, 2003; Bhakoo & Pearce, 2000; Chakraborty, Mekala, Yahya, Wu, & Ledeen, 2001). More precisely, therefore, NAA should be viewed as a marker of functioning neuroaxonal tissue, including functional aspects of the formation and/or maintenance of myelin (Chakraborty et al., 2001). Thus, NAA is a rather nonspecific biomarker of neuronal and axonal viability, maturity, or maintenance. Early in postnatal brain development, levels of NAA (or NAA/PCr+Cr ratios) are low but increase progressively and plateau as the brain reaches maturation (van der Knaap et al., 1992).
Of all brain compartments, cortical grey matter, the cerebellum, and the thalamus show the greatest NAA elevations during development (Pouwels et al., 1999), which indicates that NAA does not merely reflect the number of neurons but is a marker of functioning neurons or, in the context of development, a marker of cortical expansion (Pouwels et al., 1999). As with glutamate, ¹H MRS acquired using a short echo time (TE) scheme can reliably measure myo-inositol, which has a prominent chemical shift at 3.56 ppm (Govindaraju et al., 2000). Myo-inositol, generally viewed as a cerebral osmolyte, is an intermediate of several important pathways involving inositol-polyphosphate messengers, inositol-1-phosphate,

Neurochemical Correlates of Intelligence

phosphatidyl inositol, glucose-6-phosphate, and glucuronic acid (Ross & Bluml, 2001). More importantly, myo-inositol is almost exclusively localized in astrocytes (Coupland et al., 2005; Kim, McGrath, & Silverstone, 2005) and, therefore, viewed as a marker of glia (Ross & Bluml, 2001).

Neurotransmission and Information Processing in the Brain

The neural mechanisms of intelligence, and of information processing in general, are unquestionably complex and not fully understood. Blood oxygen level dependent (BOLD) fMRI studies, which can assess task-related functional activity and the correlation, or functional connectivity, between brain areas, have contributed significantly to the fundamental understanding of how circuits and networks are engaged in intelligence and information processing (e.g., van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009). BOLD fMRI provides good temporal and spatial resolution that compare favorably with MRS. The BOLD signal, however, is a remote indicator of neural activity, and its surrogate nature does not allow direct probing of neuronal processes. Moreover, BOLD fMRI is influenced by major determinants of vascular tone, such as dopamine (Lauritzen, Mathiesen, Schaefer, & Thomsen, 2012), that depend on age (Bäckman, Lindenberger, Li, & Nyberg, 2010) and genetic makeup (Damoiseaux, Viviano, Yuan, & Raz, 2016). Because of these limitations, BOLD MRI, whether task-dependent or resting-state, is an inadequate tool for elucidating the functioning of the highly integrated glutamatergic and GABAergic neuronal ensembles within local and long-range circuits that drive the dynamic shifts in the excitatory/inhibitory (E/I) balance necessary for synaptic plasticity and reorganization of neuronal processes (Isaacson & Scanziani, 2011; Lauritzen et al., 2012; Maffei, 2017; Tatti, Haley, Swanson, Tselha, & Maffei, 2017). Quantifying the net shift in the E/I dynamics of microcircuits driven by directed cognitive engagement, via in vivo measurements of glutamate and GABA, is paramount for understanding the brain underpinnings of cognition (Stanley & Raz, 2018).
After all, excitatory glutamatergic neurons comprise about 80% of cortical processing units, with the remaining 20% being GABAergic inhibitory neurons (Somogyi, Tamás, Lujan, & Buhl, 1998). Coherent cerebral networks depend on a delicate and ever-shifting balance of glutamate and GABA neurotransmission (Isaacson & Scanziani, 2011; Lauritzen et al., 2012; Maffei, 2017; Tatti et al., 2017), and a tool that allows tracking fluctuations of these neurotransmitters in vivo is necessary for making progress in understanding the brain mechanisms of intelligent activity. For example, it is unclear what underlies the significant mismatch between stimulus-driven non-oxidative glucose utilization and oxygen consumption reported by Dienel (2012) and by Mergenthaler, Lindauer, Dienel, and Meisel (2013). This fundamental phenomenon of brain information-processing circuitry cannot be assessed with BOLD MRI. Recent proton functional MRS


(¹H fMRS) studies suggest that this mismatch is a transient phenomenon needed for the transition between metabolic states in glutamatergic neurons during neurotransmission (Stanley & Raz, 2018). The task-related shifts in the dynamics of neuronal activity may be closely associated with synaptic plasticity (McEwen & Morrison, 2013), and ¹H fMRS may be a highly promising tool for studying the brain's efficiency in handling neurochemical and microstructural changes induced by cognitive activity. Fulfillment of these promises, however, depends on resolving several key methodological issues.

Neurotransmitters: Glutamate, GABA, and Their Interaction

As noted, ¹H MRS, with its ability to measure local levels of glutamate and GABA in vivo, is well suited for investigating the conceptual framework that emphasizes the temporal dynamics of the E/I equilibrium in cortical and subcortical circuits (Stanley & Raz, 2018). Unfortunately, the dynamic aspect of glutamate and GABA activity is absent from the majority of the ¹H MRS literature, with measurements primarily reflecting static neurotransmitter levels under quasi-rest conditions. Typically, ¹H MRS is acquired without any specific instructions or behavioral constraints aside from asking participants to relax and keep the head still during acquisition. Thus, the measured neurotransmitter levels are not truly static but rather reflect an integrated level over a time window spanning several minutes. Such a coarse data structure limits the interpretation of findings with respect to neural correlates of neurotransmission and synaptic plasticity. The need to capture the temporal dynamics of glutamate and GABA in vivo is being met by the emerging paradigm of ¹H fMRS. This "new" ¹H MRS promises exciting contributions to the understanding of neural mechanisms relevant to cognitive neuroscience and psychiatry research (Stanley & Raz, 2018). The brain concentration of glutamate (which contributes eight protons in total) is similar to that of NAA (nine protons). However, the reliability of quantifying these two compounds differs greatly (Bartha, Drost, Menon, & Williamson, 2000; de Graaf & Bovee, 1990; Provencher, 1993), owing to differences in the chemical shift patterns of their peaks. The CH3 group of NAA gives rise to an uncoupled singlet with a relatively high S/N at 2.01 ppm. In contrast, the α- and β-CH2 groups of glutamate (two protons each) are strongly coupled at low field strengths, leading to complex multiplets with poorer S/N and, hence, less reliable measurements.
The quantification of glutamate is further hampered by overlap with other metabolites of similar chemical shift, such as glutamine, GABA, and signals from macromolecules. The crucial conditions for reliably quantifying glutamate are a relatively short echo time at acquisition along with appropriate a priori knowledge in the spectral fitting. As for GABA, establishing reliable measurements is challenging due to its weaker ¹H MRS signal and its complex multiplet peaks within the


2.5–2.0 ppm spectral region and a triplet at approximately the PCr+Cr chemical shift. Consequently, in vivo GABA quantification is typically attained using a spectral-editing sequence or two-dimensional J-resolved spectroscopy, which can isolate particular chemical shifts of GABA (Harris, Saleh, & Edden, 2017; Keltner, Wald, Frederick, & Renshaw, 1997; Rothman, Behar, Hyder, & Shulman, 1993; Wilman & Allen, 1995).
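The S/N penalty that J-coupling imposes can be illustrated with a back-of-envelope sketch. This is an idealization, not a spectral simulation (strongly coupled spin systems deviate from these first-order ratios), but it shows why splitting a fixed signal area across multiplet lines lowers the height of each line relative to an uncoupled singlet such as NAA's CH3 peak.

```python
# Back-of-envelope illustration, not a spectral simulation: equal total
# signal area is divided among a multiplet's lines (first-order intensity
# ratios follow Pascal's triangle), so the tallest line of a coupled
# resonance carries only a fraction of the area that an uncoupled
# singlet concentrates in a single line.

def tallest_line_fraction(pattern):
    """Fraction of the total multiplet area carried by its tallest line."""
    return max(pattern) / sum(pattern)

frac_singlet = tallest_line_fraction([1])           # uncoupled: 1.0
frac_triplet = tallest_line_fraction([1, 2, 1])     # 0.5
frac_quartet = tallest_line_fraction([1, 3, 3, 1])  # 0.375
```

With peak height roughly tracking the per-line area, a coupled resonance starts at half (or less) of the singlet's amplitude before any overlap with glutamine, GABA, or macromolecules is even considered.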

Investigating Relationships between Brain Neurochemistry and Cognition

As a preamble to this brief survey of the extant literature on MRS correlates of cognition, it is important to note that, unlike the MRI-based body of work, it includes very few studies of intelligence as a latent construct, g, or of its subspecies, crystallized (gc) and fluid (gf) intelligence. To date, the exploration of in vivo neurochemistry and brain energy metabolism has been almost entirely devoted to examining individual indicators: measures of the properties of sensory and motor systems, which are necessary for processing data about the environment and for expressing putative cognitive transformations in intelligent action, or scores on specific tasks of memory and executive functions.

Cognitive Performance and Energy Metabolism Indicators Estimated from ³¹P MRS

Extant ³¹P MRS studies of cognitive processes are rare and heterogeneous in their selection of participants, instruments, and cognitive indicators, and thus are difficult to summarize. One study examined a sizable sample of healthy and carefully screened children and adolescents (ages 6–17), who underwent both ¹H and ³¹P MRS on a 1.5 Tesla system and completed an extensive battery of cognitive tasks (Goldstein et al., 2009). The study reported multiple findings, but after applying a Bonferroni correction for multiple comparisons, which set the effective Type I error at .004, two cognitive scores showed significant associations with PCr measured by ³¹P MRS: a verbal intelligence ("Language") composite score and a memory composite that included five verbal and nonverbal memory tests. An executive functions composite that included Similarities, Matrix Reasoning, and perseverative errors on the Wisconsin Card Sorting Test showed no significant ¹H or ³¹P MRS correlates even before p-value adjustment. Because PCr and the cognitive indicators correlated positively with age, it was imperative to examine the associations between them with age controlled for. Unfortunately, the discrepancy in N among the reported statistical tests precluded computation of partial correlations, and it is unclear whether either of the two significant correlations would survive age correction. Higher whole-brain gray matter PCr has also been linked to better performance on an age-sensitive response inhibition task in healthy


older adults (Harper, Joe, Jensen, Ravichandran, & Forester, 2016). Studying brain energy metabolites in diseases in which cognitive impairment is the primary symptom, such as Alzheimer's disease (AD), can provide some clues to the role of brain energetics in cognition. In comparing AD patients to matched healthy controls, an increase in PCr as measured by ³¹P MRS has been reported in regions that are important for executing the componential processes of intelligence, i.e., the hippocampi, but not in the anterior cingulate cortex (Rijpma, van der Graaf, Meulenbroek, Olde Rikkert, & Heerschap, 2018). While PCr/Pi ratios and pH were also increased in AD, no changes were found for the precursors or breakdown products of MPLs. In sum, in vivo assessment of PCr may be a useful way of tapping into the energetic correlates of intelligence and its components, and further exploration of this imaging modality is warranted, despite the meager current evidence. The ability of ³¹P MRS to quantify the MPL precursors PE and PC has rarely been used in cognitive neuroscience. To date, two relevant studies have been conducted: one reported associations of these indirect quantifiers of neuropil with cognitive performance in healthy children (Goldstein et al., 2009), and the other demonstrated early alterations preceding cognitive decline in older adults (Pettegrew et al., 2000).
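The two statistical steps discussed above, the Bonferroni adjustment and the age-partialled correlation that could not be computed, can be sketched in a few lines. The correlation values here are hypothetical, not taken from Goldstein et al. (2009).

```python
import math

# Illustrative arithmetic only; the correlation values are hypothetical.

# Bonferroni correction: with k comparisons, each test is evaluated at
# alpha / k. An effective Type I error of ~.004 is what .05 / 12 yields,
# consistent with roughly a dozen comparisons.
alpha, k = 0.05, 12
alpha_per_test = alpha / k  # ~0.0042

# First-order partial correlation r_xy.z: the x-y correlation with the
# shared influence of z (here, age) removed.
def partial_corr(r_xy, r_xz, r_yz):
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# A zero-order PCr-cognition correlation of .40 shrinks to .20 when both
# variables correlate .50 with age.
r_partial = partial_corr(r_xy=0.40, r_xz=0.50, r_yz=0.50)
```

The attenuation in this toy example shows why age correction could plausibly eliminate the reported PCr–cognition associations in a developmental sample.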

¹H MRS and Studies of Cognition

N-acetyl Aspartate (NAA)

Being the most prominent peak in a ¹H MRS spectrum, clearly detectable even at 1.5 Tesla, NAA is by far the most widely studied MRS-derived index and the most frequently reported correlate of cognitive performance. That said, the extant literature on cognitive correlates of basal NAA levels is relatively sparse. Moreover, these studies use diverse MRS methods and vary substantially in their selection of brain regions and cognitive performance measures. The latter range from global and relatively coarse indices to indicators of specific cognitive operations that contribute to intelligence but pertain only to limited aspects of that construct. In a small sample of healthy children studied on a 1.5 Tesla system, prefrontal PCr+Cr and NAA levels correlated with working memory performance, although correction for multiple comparisons would have rendered the associations non-significant (Yeo, Hill, Campbell, Vigil, & Brooks, 2000). No correlations with IQ were found in that study. A single-voxel ¹H MRS study with voxel placement in the occipitoparietal and frontal cortices revealed modest correlations between NAA and IQ, with the authors noting some undue influence of extreme NAA values (Patel, Blyth, Griffiths, Kelly, & Talcott, 2014). In young adults, a large slab of tissue above the ventricles yielded significant but weak associations between NAA and IQ, both in the right hemisphere, with lower right anterior gray matter NAA predicting higher VIQ and higher posterior NAA


linked to higher PIQ (Jung et al., 2009). Higher hippocampal NAA levels were weakly associated with better performance on a global cognitive measure (Kroll et al., 2018). In children, faster cross-modal word matching was related to higher NAA (Del Tufo et al., 2018). No associations between NAA and cognition were observed in another study of healthy children (Goldstein et al., 2009). Considering the interpretation of NAA levels as indicators of cell viability, one would expect more studies exploring the mediating or moderating influence of NAA on the relationship of cognitive performance with structural and functional properties of the brain assessed by other MRI modalities. Surprisingly, investigations of that type are common in various pathological conditions but rare in healthy adults. In one such study, NAA levels were positively associated with a non-specific diffusion-tensor-imaging indicator of white matter organization and integrity (fractional anisotropy, FA) in selected commissural tracts (Wijtenburg et al., 2013). Because of the ill-defined nature of NAA, its use as an indicator of brain function is limited. Targeting specific neurotransmitters in evaluating the relationship between the brain and cognition seems more promising. Such studies, however, are quite challenging because neurotransmitter signals in ¹H MRS are significantly weaker than that of NAA. Technical limitations inherent to generating ¹H MRS spectra on a typical MRI scanner further limit the simultaneous assessment of multiple neurotransmitters in multiple brain regions.

Specific Neurotransmitters: GABA

The main inhibitory neurotransmitter of the brain, GABA, has been the focus of several investigations of brain–cognition associations. During task performance, and presumably elevated neural activity, the regional concentration of GABA inferred from ¹H MRS drops, while the glutamate concentration rises (Duncan, Wiebking, & Northoff, 2014). The cellular mechanisms reflected in these ¹H MRS-derived measures remain to be elucidated, but the evidence seems to favor an energetic explanation, involving changes in the traffic of reactive oxygen species (ROS) and endogenous antioxidants, over a neurotransmitter-release account (Lin, Stephenson, Xin, Napolitano, & Morris, 2012). In another small sample, GABA+ levels correlated with expression of the microglia-associated protein TSPO in the mPFC, but no links between GABA+ levels and cognitive performance were revealed (Da Silva et al., 2019). In right-handed young and older adults, poorer performance proficiency on a visuomotor bimanual coordination task was associated with higher GABA+ levels in the left sensorimotor but not the bilateral occipital cortex (Chalavi et al., 2018). In children, faster cross-modal word matching was related to low basal levels of GABA (Del Tufo et al., 2018). In general, these limited findings suggest that better cognitive performance may be associated with (temporarily) reduced levels of


GABA, which may indicate the importance of suppressing inhibitory activity in the service of cognitive effort and efficiency.

Specific Neurotransmitters: Glutamate

¹H MRS has been used very infrequently in animal models of cognition. In a study of middle-aged marmosets (Callithrix jacchus) that were scanned within 3 months of a serial reversal learning task (Lacreuse, Moore, LaClair, Payne, & King, 2018), a higher prefrontal cortex Glx (defined as glutamate + glutamine) level was associated with faster acquisition of the reversals, but only in males, not in females. In younger adults suffering from asthma, as well as in healthy controls, poorer cognitive function assessed by the Montreal Cognitive Assessment (MoCA) was associated with reduced resting glutamate levels (Kroll et al., 2018).

Combined GABA and Glu Studies

In the default network, PCr+Cr-referenced glutamate levels in the posterior medial cortex and associated white matter correlated positively with functional connectivity, whereas GABA, also referenced to PCr+Cr, correlated negatively (Kapogiannis, Reiter, Willette, & Mattson, 2013).

Functional MRS (fMRS)

Sensory-Motor Tasks

Flashing checkerboard stimuli induce modest but consistent stimulus-related increases in steady-state glutamate levels (Bednařík et al., 2015; Lin et al., 2012; Mangia et al., 2007; Schaller, Mekle, Xin, Kunz, & Gruetter, 2013), with the magnitude of the increase depending on task duration and cognitive processing demands. Notably, novel stimuli, even when viewed passively, induce a significant elevation in glutamate, while frequently repeated, familiar pictures do not (Apsvalka, Gadie, Clemence, & Mullins, 2015). In a combined ¹H fMRS and fMRI study of healthy young adults on a 7 Tesla system, BOLD and glutamate changes in response to short (64 s) repeated flickering checkerboard stimulation showed a moderate correlation, which strengthened once the initial block, with a counterphase glutamate–BOLD time series, was eliminated (Ip et al., 2017). Of note, the rise in the BOLD fMRI signal during visual stimulation was mirrored by a concomitant elevation in glutamate, while none of these changes and associations were noted at rest (i.e., the control comparison condition). A motor task such as periodic finger tapping induces a modest glutamate increase in the sensorimotor cortical regions, and these increases are co-localized with BOLD activation (Schaller, Xin, O'Brien, Magill, & Gruetter, 2014). These findings, albeit limited in scope, link an increase in sensory and motor processing to a temporary elevation in glutamate levels. Thus, proportionate glutamate recruitment


appears to underlie basic information processing that underpins complex cognitive activity and may impose an important constraint on success in accomplishing multifarious tasks that are commonly used for gauging intelligence. If further replicated and tied to cognitive performance, this task-dependent glutamate surge may reflect the efficiency of information gathering via what Galton (1883) called “the avenues of senses,” and contribute significantly to our understanding of individual differences in intellectual prowess.
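The block-wise BOLD–glutamate correlation analysis described above can be sketched with synthetic data. None of the numbers below come from Ip et al. (2017); the block pattern, effect sizes, and noise levels are assumptions for illustration only.

```python
import numpy as np

# Synthetic sketch of a block-wise BOLD-glutamate correlation analysis.
# The block pattern, effect sizes, and noise levels are invented.
rng = np.random.default_rng(0)

n_blocks = 10
stim = np.tile([0.0, 1.0], n_blocks // 2)          # alternating off/on blocks
bold = 1.5 * stim + rng.normal(0, 0.3, n_blocks)   # block-averaged BOLD signal
glu = 0.03 * stim + rng.normal(0, 0.01, n_blocks)  # ~3% task glutamate rise

# Correlate the two block time series across all blocks...
r_all = np.corrcoef(bold, glu)[0, 1]

# ...and again after discarding an atypical initial block, analogous to
# removing the counterphase glutamate-BOLD block reported by Ip et al.
r_trimmed = np.corrcoef(bold[1:], glu[1:])[0, 1]
```

Because both synthetic series are driven by the same stimulation pattern, the correlation is positive; with real data, the size of that correlation is the empirical question.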

Cognitive Tasks

The ¹H fMRS literature on higher-level cognitive activity is, alas, even sparser than the body of research summarized in the previous section. In one study, Glx levels in the mPFC were compared between resting-state and mental imagery task conditions, with the auditory cortex used as a control region (Huang et al., 2015). The block design of the fMRS implemented in that study falls short of a true event-related approach that would allow tracking within-task changes in glutamate level. Moreover, combining glutamate + glutamine (i.e., Glx) as the key outcome measurement hampers the interpretation of the results. Nonetheless, this investigation is a step forward in comparison to ¹H MRS studies of glutamate (or Glx) assessed without behavioral constraints during acquisition. In addition, fMRI data were collected on the same day, although not within the same session. Mental imagery, as investigated in this study, is heavily dependent on working memory and thus is an important component of many reasoning tasks (primarily, but not exclusively, nonverbal ones) that fall under the rubric of fluid intelligence (Kyllonen & Christal, 1990). The data collection during imagined swimming in experienced practitioners of that sport had an important advantage: a lack of the individual differences and learning-related changes that are inevitable in all laboratory cognitive tasks performed in the scanner. The disadvantage, however, was the propensity of that task to engage regions within the default-mode network, which is activated at "rest," i.e., during unstructured mental activity that is likely to include mental imagery (Mazoyer et al., 2001).
Notably, unlike resting BOLD studies, for which extensive investigations of temporal fluctuations in the brain's hubs and networks constitute a voluminous literature, little is known about ¹H MRS changes during rest over comparable time windows and, to the best of our knowledge, nothing is known about the synchronization and desynchronization patterns of glutamate and GABA in the healthy brain. What physiological mechanisms underlie the observed task-related increase in Glx? The possibilities are twofold. The task-related demands may be of energetic, metabolic origin: they may increase oxidative metabolism and, accordingly, glutamate–glutamine cycling, to make a higher concentration of glutamate available at the synaptic cleft. Alternatively, the task-related glutamate increase may be of neuronal origin and related to an increase in synaptic glutamate release. It is unclear, however, how "cycling" can be inferred if only the combined glutamate + glutamine signal is observed.


Thielen et al. (2018) used ¹H MRS and fMRI (psychophysiological interaction analysis) to investigate the hypothesized contribution of the mPFC to performance on a face–name association task in young adults. They measured, in a single mPFC voxel, both GABA and Glx levels, referenced to NAA (i.e., as ratios), before and after volunteers memorized face–name associations. Although this study's block design cannot capture the dynamics of the encoding–retrieval cycle, its results are nonetheless intriguing. Higher scores on an out-of-scanner memory test were associated with elevated mPFC Glx/NAA ratios but were unrelated to GABA/NAA ratios. In the fMRI study using the subsequent-memory paradigm, carried out between the two ¹H MRS acquisitions, a positive correlation between the Glx/NAA increase and mPFC connectivity to the thalamus and hippocampus was observed. This correlation was noted only for associations subsequently recognized with high confidence, and not for those recognized with low confidence or forgotten altogether. Mediation analyses showed that the relationship between the Glx/NAA change and memory performance (the difference in recall of high- vs. low-confidence items) was mediated by functional connectivity between the mPFC and the hippocampus, with the magnitude of connectivity correlated with memory scores. The role of mPFC–thalamus connectivity in a similar mediation pattern could not be established with a conventional level of confidence, however, although the effect was in the same direction (Thielen et al., 2018). In a recent task-based ¹H fMRS study with a single voxel placed in the left dlPFC, a significant 2.7% increase in glutamate was observed during a standard 2-back working memory task, compared to continuous visual crosshair fixation, in healthy young adults (Woodcock, Anand, Khatib, Diwadkar, & Stanley, 2018). Notably, the glutamate increase was more pronounced during the initial moments of task performance in each task block.
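The product-of-coefficients logic behind a mediation analysis of this kind can be summarized in a few lines. All path coefficients below are hypothetical, chosen only to show how an indirect effect is computed, and do not reproduce Thielen et al.'s estimates.

```python
# Product-of-coefficients sketch of the mediation logic. All path
# coefficients are hypothetical, for illustration only.

a = 0.50        # X -> M: Glx/NAA change predicts mPFC-hippocampus connectivity
b = 0.40        # M -> Y: connectivity predicts memory, controlling for X
c_prime = 0.10  # X -> Y: direct effect of Glx/NAA change, controlling for M

indirect = a * b            # mediated (indirect) effect
total = c_prime + indirect  # total effect of X on Y

# The larger this share, the stronger the case that connectivity
# mediates the Glx-memory association.
mediated_share = indirect / total
```

In practice the significance of the indirect path a·b is assessed with bootstrap confidence intervals rather than read off point estimates like these.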
In another study from the same group, healthy adults performing an associative learning task with object–location pairs displayed unique temporal dynamics of glutamate modulation in the right hippocampus (Stanley et al., 2017). Notably, the differences in the time course of glutamate modulation were associated with learning proficiency: faster learners demonstrated up to an 11% increase in glutamate during the early trials, whereas a significant but smaller and later increase of 8% was observed in slower learners. Taylor et al. (2015) investigated glutamate modulation during a classic Stroop task, which includes a mixture of congruent and incongruent conditions as well as trials with words only (no color) and colors only (no words), and which is frequently used to assess executive functions. In this study, conducted on a 7 Tesla system, the authors investigated glutamate level changes in the dorsal anterior cingulate gyrus (ACG) of healthy adults. They found that, compared to the rest condition, ACG glutamate increased by 2.6% during Stroop task performance. However, differences in dorsal ACG glutamate modulation between trial conditions within the Stroop task were not reported.
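Percent changes like those cited above are simple ratios of task to baseline concentration estimates. The concentration values in the sketch below are illustrative numbers in arbitrary institutional units, not data from any of the studies discussed.

```python
# How task-related percent changes (e.g., 2.7% in dlPFC during 2-back;
# ~11% vs. 8% in fast vs. slow learners) are computed from fMRS
# concentration estimates. Values are illustrative, not study data.

def percent_change(task, baseline):
    return 100.0 * (task - baseline) / baseline

glu_rest, glu_task = 10.0, 10.27
pc_2back = percent_change(glu_task, glu_rest)   # a 2.7% increase

pc_fast = percent_change(11.1, 10.0)            # 11% early-trial increase
pc_slow = percent_change(10.8, 10.0)            # 8% later, smaller increase
```

Because these changes are small relative to measurement noise, reporting the variability of the underlying estimates, as noted below for the visuospatial studies, matters as much as the percent change itself.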


Visuospatial Cognition

Glutamate modulation during tasks involving the visuospatial attention and memory system was recently investigated using ¹H fMRS at 3 Tesla. In healthy individuals, a non-significant modulation of glutamate was observed in the parietal-occipital cortex during a visuospatial attention task compared to the control condition (Lindner, Bell, Iqbal, Mullins, & Christakou, 2017). In another study, no significant task-related glutamate modulation was observed in the parietal-posterior cingulate cortex of healthy adults, patients with Alzheimer's disease (AD), and individuals with amnestic mild cognitive impairment who performed a face–name associative memory task compared to the rest control condition (Jahng et al., 2016). In both studies, details on the variability of the glutamate measurements were omitted and, therefore, it remains unclear whether the methods afforded detection of a task-related change in glutamate on the order of 10% or less.

³¹P fMRS

Several attempts to capture the brain's energetic response to sensory stimulation via ³¹P MRS have been made in the past two decades, with mixed results. Although some studies found no changes in energy-related metabolites (Barreto, Costa, Landim, Castellano, & Salmon, 2014; Chen, Zhu, Adriany, & Ugurbil, 1997; van de Bank, Maas, Bains, Heerschap, & Scheenen, 2018), most published reports identified significant decreases in PCr (Barreto et al., 2014; Kato, Murashita, Shioiri, Hamakawa, & Inubushi, 1996; Murashita, Kato, Shioiri, Inubushi, & Kato, 1999; Rango, Castelli, & Scarlato, 1997; Sappey-Marinier et al., 1992) that appear, at least in some samples, to be age-dependent (Murashita et al., 1999). Some findings hint at a dynamic nature of the PCr changes (Rango, Bonifati, & Bresolin, 2006; Yuksel et al., 2015) that could have been obscured by integration over a wide time window and across a very large voxel. With respect to other metabolites detectable by ³¹P MRS, two studies reported an elevation of inorganic phosphate (Pi) levels during visual stimulation (Barreto et al., 2014; Mochel et al., 2012), while others reported null results (van de Bank et al., 2018).

Summary, Conclusions, and Future Directions

This survey of the extant literature on the brain underpinnings of intelligence revealed by ¹H and ³¹P MRS underscores the discrepancy between the great potential of a technique that opens a window onto in vivo assessment of brain chemistry, and findings that reveal small effects and show little consistency among studies. It seems that the curse of the "10%-of-the-variance barrier" hovers over this field of inquiry just as it does over the large body of studies that have attempted to relate intelligence to multiple indices of information processing: correlations between any measure of intelligence and any measure of a physiological or neural property of the brain tend to congregate


around the .20–.40 values (Hunt, 1980). Nonetheless, we espouse an optimistic view of future research that can realize the promise of MRS. Here we outline the steps that, in our opinion, can improve the understanding of the neurochemical and energetic mechanisms of human reasoning and of individual differences therein.

1. Thus far, MRS studies of the roles of neurotransmitters and energy metabolites in intelligence have been conducted almost exclusively on 1.5 Tesla and 3 Tesla systems. With the increasing availability of stronger (e.g., 7 Tesla) devices, greater temporal and spatial resolution can be attained. Typical 1.5 Tesla (and even some 3 Tesla) MRS studies face significant challenges in resolving glutamate and glutamine in the ¹H MRS spectrum. Because the latter is both a precursor and a breakdown product of the former, using Glx as an outcome variable hampers a mechanistic understanding of glutamate's role in cognition, as it confounds differences and changes in glutamate release with variations in turnover and synthesis.

2. By their very nature, neurochemical changes in the brain's neurons are fleeting. Measuring "stationary" levels of glutamate and GABA, integrated over a wide time window, may not be useful in investigations that target normal brain–cognition relationships. Instead, comparing neurotransmitter or energy-metabolite levels between task-active conditions reflecting defined cognitive processes and non-task-active conditions over shorter time windows is crucial for advancing the field (Stanley & Raz, 2018). In advancing that goal, high-field systems are of critical importance, as they allow collecting refined spectra with much improved temporal resolution. Significant progress in studying glutamate modulation within encoding-retrieval cycles of memory tasks, obtained on clinical 3 Tesla systems, gives a taste of what can be achieved on high-field devices (Stanley et al., 2017).

3. The brain is a structurally and functionally heterogeneous organ. Therefore, the ability to collect data from multiple regions is critical for understanding the mechanisms of intelligent behavior and for bringing MRS studies into agreement with the decades of localization findings generated by lesion, structural MRI, and PET studies. Here again, an increase in magnetic field strength is necessary: although reasonably small voxels can be targeted with 3 Tesla systems, collecting data from multiple locations simultaneously is unrealistic under the constraints of human subjects' tolerance of lengthy in-scanner procedures.

4. The same logic applies to targeting more than one chemical compound, such as glutamate and GABA, with comparable precision, especially in ¹H fMRS studies. At the currently standard 3 Tesla field strength, this is an
Neurochemical Correlates of Intelligence

unrealistic aim. The increasing availability of 7 Tesla systems is expected to give a significant boost to such double-target studies and to allow evaluation of the complex dynamics of excitation and inhibition in a living human performing cognitive tasks (An, Araneta, Johnson, & Shen, 2018).

5. On the cognitive side, a more systematic approach is in order. Intelligence is a complex construct defined by multiple indicators, some of which are more strongly associated with it than others. Moving from reliance on arbitrary indices of memory, executive functions, and speed of processing – all of which are related to general intelligence – to deliberately selected tasks that produce indicators with the highest g-loadings will advance understanding of the associations between brain function and intellectual performance.

6. The hope is that, with advances in instrumentation and data processing, it will become possible to gauge neurochemical and metabolic changes in the course of performing multiple g-related tasks in the same individuals.

7. Understanding of the role played by modulation of the brain's energy substrates and key neurotransmitters in cognitive processes can be advanced by examining their change, within multiple time windows, over the course of life-span development. Multi-occasion longitudinal studies can also clarify the role of rapid neurotransmitter fluctuations vs. long-term plastic changes of neuropil in supporting the development, maintenance, and decline of cognitive abilities.

8. Success in interpreting the results of noninvasive MRS studies in humans hinges on validating the indices produced by these techniques in relevant animal models. With respect to higher cognitive functions, and the gathering of sensory information in support of them, it is imperative to develop neuroimaging experimental paradigms for primates. To date, only one primate study of this kind has been reported (Lacreuse et al., 2018), but the model employed in that investigation, the common marmoset, appears very promising for future developments, especially if the MRS experiments are accompanied by more invasive and precise probes of neurotransmitter changes within cognitive processing cycles.

In summary, further advancement in understanding the neural underpinnings of intelligence hinges on several developments. In no particular order, these are: improvement of instruments, with an emphasis on higher magnetic field strengths; greater temporal and spatial resolution of fMRS; expansion of animal modeling studies harmonized with investigations in humans; integration of MRS findings with structural and BOLD imaging; and simultaneous within-person assessment of multiple indicators of intelligence as a construct. With such capacious room to grow, both ¹H and ³¹P MRS should be able to deliver on their promise of revealing important neurochemical and metabolic underpinnings of individual differences in intelligence.
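The idea of deliberately selecting indicators by their g-loading (point 5 above) can be made concrete with a small simulation. The sketch below estimates each task's loading on the first principal component of the inter-task correlation matrix – a common, if simplified, stand-in for a g-loading. The task weights, sample size, and noise level are invented purely for illustration and do not come from any study discussed in this chapter.

```python
import numpy as np

def g_loadings(scores):
    """Approximate g-loadings as loadings on the first principal
    component of the inter-task correlation matrix."""
    r = np.corrcoef(scores, rowvar=False)        # task-by-task correlations
    eigvals, eigvecs = np.linalg.eigh(r)         # eigenvalues in ascending order
    pc1 = eigvecs[:, -1] * np.sqrt(eigvals[-1])  # loadings on the largest component
    return pc1 if pc1.sum() >= 0 else -pc1       # fix the arbitrary sign

# Simulate 2000 examinees on four tasks that tap a common factor to
# different degrees; a higher weight means a higher expected g-loading.
rng = np.random.default_rng(0)
g = rng.standard_normal(2000)
weights = np.array([0.9, 0.7, 0.5, 0.3])         # hypothetical g saturations
scores = g[:, None] * weights + rng.standard_normal((2000, 4)) * 0.5
loadings = g_loadings(scores)
```

On data like these, the estimated loadings recover the ordering of the simulated g saturations, so the first task would be the preferred indicator under the selection strategy described above.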


Acknowledgment

This work was supported by National Institutes of Health grants R01-AG011230 (NR) and R21-AG059160 (JAS and NR).

References

An, L., Araneta, M. F., Johnson, C., & Shen, J. (2018). Simultaneous measurement of glutamate, glutamine, GABA, and glutathione by spectral editing without subtraction. Magnetic Resonance in Medicine, 80(5), 1776–1717. doi: 10.1002/mrm.27172. Andres, R. H., Ducray, A. D., Schlattner, U., Wallimann, T., & Widmer, H. R. (2008). Functions and effects of creatine in the central nervous system. Brain Research Bulletin, 76(4), 329–343. doi: 10.1016/j.brainresbull.2008.02.035. Apsvalka, D., Gadie, A., Clemence, M., & Mullins, P. G. (2015). Event-related dynamics of glutamate and BOLD effects measured using functional magnetic resonance spectroscopy (fMRS) at 3T in a repetition suppression paradigm. Neuroimage, 118, 292–300. Attwell, D., & Laughlin, S. B. (2001). An energy budget for signaling in the grey matter of the brain. Journal of Cerebral Blood Flow and Metabolism, 21(10), 1133–1145. doi: 10.1097/00004647-200110000-00001. Bäckman, L., Lindenberger, U., Li, S. C., & Nyberg, L. (2010). Linking cognitive aging to alterations in dopamine neurotransmitter functioning: Recent data and future avenues. Neuroscience and Biobehavioral Reviews, 34(5), 670–677. doi: 10.1016/j.neubiorev.2009.12.008. Barreto, F. R., Costa, T. B., Landim, R. C., Castellano, G., & Salmon, C. E. (2014). (31)P-MRS using visual stimulation protocols with different durations in healthy young adult subjects. Neurochemical Research, 39(12), 2343–2350. doi: 10.1007/s11064-014-1433-9. Bartha, R., Drost, D. J., Menon, R. S., & Williamson, P. C. (2000). Comparison of the quantification precision of human short echo time (1)H spectroscopy at 1.5 and 4.0 Tesla. Magnetic Resonance in Medicine, 44(2), 185–192. Baslow, M. H. (2000). Functions of N-acetyl-L-aspartate and N-acetyl-L-aspartylglutamate in the vertebrate brain: Role in glial cell-specific signaling. Journal of Neurochemistry, 75(2), 453–459. Baslow, M. H. 
(2003). Brain N-acetylaspartate as a molecular water pump and its role in the etiology of Canavan disease: A mechanistic explanation. Journal of Molecular Neuroscience, 21(3), 185–190. Bednarik, P., Tkac, I., Giove, F., DiNuzzo, M., Deelchand, D. K., Emir, U. E., . . . Mangia, S. (2015). Neurochemical and BOLD responses during neuronal activation measured in the human visual cortex at 7 T. Journal of Cerebral. Blood Flow and Metabolism, 35(4), 601–610. Bhakoo, K., & Pearce, D. (2000). In vitro expression of N-acetyl aspartate by oligodendrocytes: Implications for proton magnetic resonance spectroscopy signal in vivo. Journal of Neurochemistry, 74(1), 254–262.


Chakraborty, G., Mekala, P., Yahya, D., Wu, G., & Ledeen, R. W. (2001). Intraneuronal N-acetylaspartate supplies acetyl groups for myelin lipid synthesis: Evidence for myelin-associated aspartoacylase. Journal of Neurochemistry, 78(4), 736–745. Chalavi, S., Pauwels, L., Heise, K.-F., Zivari Adab, H., Maes, C., Puts, N. A. J., . . . Swinnen, S. P. (2018). The neurochemical basis of the contextual interference effect. Neurobiology of Aging, 66, 85–96. doi: 10.1016/j. neurobiolaging.2018.02.014. Chen, W., Zhu, X. H., Adriany, G., & Ugurbil, K. (1997). Increase of creatine kinase activity in the visual cortex of human brain during visual stimulation: A 31P magnetization transfer study. Magnetic Resonance in Medicine, 38(4), 551–557. Coupland, N. J., Ogilvie, C. J., Hegadoren, K. M., Seres, P., Hanstock, C. C., & Allen, P. S. (2005). Decreased prefrontal myo-inositol in major depressive disorder. Biological Psychiatry, 57(12), 1526–1534. Da Silva, T., Hafizi, S., Rusjan, P. M., Houle, S., Wilson, A. A., Price, I., . . . Mizrahi, R. (2019). GABA levels and TSPO expression in people at clinical high risk for psychosis and healthy volunteers: A PET-MRS study. Journal of Psychiatry and Neuroscience, 44(2), 111–119. doi: 10.1503/ jpn.170201. Damoiseaux, J. S., Viviano, R. P., Yuan, P., & Raz, N. (2016). Differential effect of age on posterior and anterior hippocampal functional connectivity. NeuroImage, 133, 468–476. doi: 10.1016/j.neuroimage.2016.03.047. de Graaf, A. A., & Bovee, W. M. (1990). Improved quantification of in vivo 1H NMR spectra by optimization of signal acquisition and processing and by incorporation of prior knowledge into the spectral fitting. Magnetic Resonance in Medicine, 15(2), 305–319. De Stefano, N., Matthews, P. M., Fu, L., Narayanan, S., Stanley, J., Francis, G. S., . . . Arnold, D. L. (1998). Axonal damage correlates with disability in patients with relapsing-remitting multiple sclerosis. 
Results of a longitudinal magnetic resonance spectroscopy study. Brain, 121(Pt 8), 1469–1477. Del Tufo, S. N., Frost, S. J., Hoeft, F., Cutting, L. E., Molfese, P. J., Mason, G. F., . . . Pugh, K. R. (2018). Neurochemistry predicts convergence of written and spoken language: A proton magnetic resonance spectroscopy study of crossmodal language integration. Frontiers in Psychology, 9, 1507. doi: 10.3389/ fpsyg.2018.01507. eCollection 2018. Dienel, G. A. (2012). Fueling and imaging brain activation. ASN Neuro, 4(5), 267–321. doi: 10.1042/AN20120021. Du, F., Cooper, A., Lukas, S. E., Cohen, B. M., & Ongur, D. (2013). Creatine kinase and ATP synthase reaction rates in human frontal lobe measured by (31)P magnetization transfer spectroscopy at 4T. Magnetic Resonance Imaging, 31(1), 102–108. doi: 10.1016/j.mri.2012.06.018 Duncan, J., Emslie, H., Williams, P., Johnson, R., & Freer, C. (1996). Intelligence and the frontal lobe: The organization of goal-directed behavior. Cognitive Psychology, 30(3), 257–303. Duncan, N. W., Wiebking, C., & Northoff, G. (2014). Associations of regional GABA and glutamate with intrinsic and extrinsic neural activity in


humans – A review of multimodal imaging studies. Neuroscience and Biobehavioral Reviews, 47, 36–52. doi: 10.1016/j.neubiorev.2014.07.016. Fukushima, E., & Roeder, S. B. W. (1981). Experimental pulse NMR: A nuts and bolts approach. Reading, MA: Addison-Wesley. Galton, F. (1883). Inquiries into human faculty. London: Macmillan. Garwood, M., & DelaBarre, L. (2001). The return of the frequency sweep: Designing adiabatic pulses for contemporary NMR. Journal of Magnetic Resonance, 153(2), 155–177. doi: 10.1006/jmre.2001.2340. Geddes, J. W., Panchalingam, K., Keller, J. N., & Pettegrew, J. W. (1997). Elevated phosphocholine and phosphatidylcholine following rat entorhinal cortex lesions. Neurobiology of Aging, 18(3), 305–308. Goldstein, G., Panchalingam, K., McClure, R. J., Stanley, J. A.., Calhoun, V. D., Pearlson, G. D., & Pettegrew, J. W. (2009). Molecular neurodevelopment: An in vivo 31 P- 1 H MRSI study. Journal of the International Neuropsychological Society, 15(5), 671–683. Govindaraju, V., Young, K., & Maudsley, A. A. (2000). Proton NMR chemical shifts and coupling constants for brain metabolites. NMR in Biomedicine, 13(3), 129–153. Harper, D. G., Joe, E. B., Jensen, J. E., Ravichandran, C., & Forester, B. P. (2016). Brain levels of high-energy phosphate metabolites and executive function in geriatric depression. International Journal of Geriatric Psychiatry, 31(11), 1241–1249. doi: 10.1002/gps.4439. Harris, A. D., Saleh, M. G., & Edden, R. A. E. (2017). Edited 1H magnetic resonance spectroscopy in vivo: Methods and metabolites. Magnetic Resonance in Medicine, 77(4), 1377–1389. doi: 10.1002/mrm.26619. Howarth, C., Gleeson, P., & Attwell, D. (2012). Updated energy budgets for neural computation in the neocortex and cerebellum. Journal of Cerebral Blood Flow and Metabolism: Official Journal of the International Society of Cerebral Blood Flow and Metabolism, 32(7), 1222–1232. doi: 10.1038/ jcbfm.2012.35. Huang, Z., Davis, H. H., Yue, Q., Wiebking, C., Duncan, N. 
W., Zhang, J., . . . Northoff, G. (2015). Increase in glutamate/glutamine concentration in the medial prefrontal cortex during mental imagery: A combined functional MRS and fMRI study. Human Brain Mapping, 36(8), 3204–3212. doi: 10.1002/hbm.22841 Hunt, E. (1980). Intelligence as an information-processing concept. British Journal of Psychology, 71(4), 449–474. Ip, B., Berrington, A., Hess, A. T., Parker, A. J., Emir, U. E., & Bridge, H. (2017). Combined fMRI-MRS acquires simultaneous glutamate and BOLD-fMRI signals in the human brain. NeuroImage, 155, 113–119. Isaacson, J. S., & Scanziani, M. (2011). How inhibition shapes cortical activity. Neuron, 72(2), 231–243. doi: 10.1016/j.neuron.2011.09.027. Jahng, G. H., Oh, J., Lee, D. W., Kim, H. G., Rhee, H. Y., Shin, W., . . . Ryu, C. W. (2016). Glutamine and glutamate complex, as measured by functional magnetic resonance spectroscopy, alters during face-name association task in patients with mild cognitive impairment and Alzheimer’s disease. Journal of Alzheimers Disease, 53(2), 745. doi: 10.3233/JAD-169004.


Jung, R. E., Gasparovic, C., Chavez, R. S., Caprihan, A., Barrow, R., & Yeo, R. A. (2009). Imaging intelligence with proton magnetic resonance spectroscopy. Intelligence, 37(2), 192–198. doi: 10.1016/j.intell.2008.10.009. Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Science, 30(2), 135–154; discussion 154–187. Kapogiannis, D., Reiter, D. A., Willette, A. A., & Mattson, M. P. (2013). Posteromedial cortex glutamate and GABA predict intrinsic functional connectivity of the default mode network. Neuroimage, 64, 112–119. doi: 10.1016/j.neuroimage.2012.09.029. Kato, T., Murashita, J., Shioiri, T., Hamakawa, H., & Inubushi, T. (1996) Effect of photic stimulation on energy metabolism in the human brain measured by 31P-MR spectroscopy. Journal of Neuropsychiatry and Clinical Neuroscience, 8(4), 417–422. Keltner, J. R., Wald, L. L., Frederick, B. D., & Renshaw, P. F. (1997): In vivo detection of GABA in human brain using a localized double-quantum filter technique. Magnetic Resonance in Medicine, 37(3), 366–371. Kemp, G. J. (2000). Non-invasive methods for studying brain energy metabolism: What they show and what it means. Developmental Neuroscience, 22(5–6), 418–428. doi: 10.1159/000017471. Kim, H., McGrath, B. M., & Silverstone, P. H. (2005). A review of the possible relevance of inositol and the phosphatidylinositol second messenger system (PI-cycle) to psychiatric disorders – Focus on magnetic resonance spectroscopy (MRS) studies. Human Psychopharmacology, 20(5), 309–326. Kroll, J. L., Steele, A. M., Pinkham, A. E., Choi, C., Khan, D. A., Patel, S. V., . . . Ritz, T. (2018). Hippocampal metabolites in asthma and their implications for cognitive function. Neuroimage Clinical, 19, 213–221. doi: 10.1016/j. nicl.2018.04.012. eCollection 2018. Kyllonen, P. C., & Christal, R. E. (1990). Reasoning ability is (little more than) working-memory capacity?! 
Intelligence, 14(4), 389–433. Lacreuse, A., Moore, C. M., LaClair, M., Payne, L., & King, J. A. (2018). Glutamine/ glutamate (Glx) concentration in prefrontal cortex predicts reversal learning performance in the marmoset. Behavioral Brain Research, 346, 11–15. doi: 10.1016/j.bbr.2018.01.025. Lauritzen, M., Mathiesen, C., Schaefer, K., & Thomsen, K. J. (2012). Neuronal inhibition and excitation, and the dichotomic control of brain hemodynamic and oxygen responses. NeuroImage, 62(2), 1040–1050. doi: 10.1016/j. neuroimage.2012.01.040 Lin, Y., Stephenson, M. C., Xin, L., Napolitano, A., & Morris, P. G. (2012). Investigating the metabolic changes due to visual stimulation using functional proton magnetic resonance spectroscopy at 7 T. Journal of Cerebral Blood Flow and Metabolism, 32(8), 1484–1495. doi: 10.1038/ jcbfm.2012.33. Lindner, M., Bell, T., Iqbal, S., Mullins, P. G., & Christakou, A. (2017). In vivo functional neurochemistry of human cortical cholinergic function during visuospatial attention. PLoS One, 12(2), e0171338. doi: 10.1371/journal. pone.0171338.


Maffei, A. (2017). Fifty shades of inhibition. Current Opinion in Neurobiology, 43, 43–47. doi: 10.1016/j.conb.2016.12.003 Mangia, S., Tkac, I., Gruetter, R., Van de Moortele, P. F., Maraviglia, B., & Ugurbil, K. (2007) Sustained neuronal activation raises oxidative metabolism to a new steady-state level: Evidence from 1H NMR spectroscopy in the human visual cortex. Journal of Cerebral Blood Flow and Metabolism, 27(5), 1055–1063. doi: 10.1038/sj.jcbfm .96004-01. Mazoyer, B., Zago, L., Mellet, E., Bricogne, S., Etard, O., Houdé, O., . . . TzourioMazoyer, N. (2001). Cortical networks for working memory and executive functions sustain the conscious resting state in man. Brain Research Bulletin, 54(3), 287–298. McEwen, B. S., & Morrison, J. H. (2013). The brain on stress: Vulnerability and plasticity of the prefrontal cortex over the life course. Neuron, 79(1), 16–29. doi: 10.1016/j.neuron.2013.06.028. McIlwain, H., & Bachelard, H. S. (1985). Biochemistry and the central nervous system, vol. 5. Edinburgh: Churchill Livingstone. McRobbie, D., Moore, E., Graves, M., & Prince, M. (2006). MRI from picture to proton. Cambridge University Press. doi: 10.1017/CBO9780511545405. Mergenthaler, P., Lindauer, U., Dienel, G. A., & Meisel, A. (2013). Sugar for the brain: The role of glucose in physiological and pathological brain function. Trends in Neurosciences, 36(10), 587–597. doi: 10.1016/j.tins.2013.07.001. Miller, B. L. (1991). A review of chemical issues in 1H NMR spectroscopy: N-AcetylL-aspartate, creatine and choline. NMR in Biomedicine, 4(2), 47–52. Mlynárik, V., Gambarota, G., Frenkel, H., & Gruetter, R. (2006). Localized shortecho-time proton MR spectroscopy with full signal-intensity acquisition. Magnetic Resonance in Medicine, 56(5), 965–970. doi: 10.1002/mrm.21043. Mochel, F., N’Guyen, T. M., Deelchand, D., Rinaldi, D., Valabregue, R., Wary, C., . . . Henry, P. G. (2012) Abnormal response to cortical activation in early stages of Huntington disease. 
Movement Disorders, 27(7), 907–910. doi: 10.1002/mds.25009. Murashita, J., Kato, T., Shioiri, T., Inubushi, T., & Kato, N. (1999). Age dependent alteration of metabolic response to photic stimulation in the human brain measured by 31P MR-spectroscopy. Brain Research, 818(1), 72–76. Neubauer, A. C., & Fink, A. (2009). Intelligence and neural efficiency. Neuroscience and Biobehavioral Reviews, 33(7), 1004–1023. doi: 10.1016/j.neubiorev.2009.04.001. Patel, T., Blyth, J. C., Griffiths, G., Kelly, D., & Talcott, J. B. (2014). Moderate relationships between NAA and cognitive ability in healthy adults: Implications for cognitive spectroscopy. Frontiers in Human Neuroscience, 14(8), 39. doi: 10.3389/fnhum.2014.00039. eCollection 2014. Pettegrew, J. W., Klunk, W. E., Panchalingam, K., McClure, R. J., & Stanley, J. A. (2000). Molecular insights into neurodevelopmental and neurodegenerative diseases. Brain Research Bulletin, 53(4), 455–469. doi: S0361-9230(00)00376-2 [pii]. Pettegrew, J. W., Panchalingam, K., Withers, G., McKeag, D., & Strychor, S. (1990). Changes in brain energy and phospholipid metabolism during development and aging in the Fischer 344 rat. Journal of Neuropathology and Experimental Neurology, 49(3), 237–249.


Pouwels, P. J., Brockmann, K., Kruse, B., Wilken, B., Wick, M., Hanefeld, F., & Frahm, J. (1999). Regional age dependence of human brain metabolites from infancy to adulthood as detected by quantitative localized proton MRS. Pediatric Research, 46(4), 474–485. Pradhan, S., Bonekamp, S., Gillen, J. S., Rowland, L. M., Wijtenburg, S. A., Edden, R. A. E., & Barker, P. B. (2015). Comparison of single voxel brain MRS AT 3T and 7T using 32-channel head coils. Magnetic Resonance Imaging, 33(8), 1013–1018. doi: 10.1016/j.mri.2015.06.003. Provencher, S. W. (1993). Estimation of metabolite concentrations from localized in vivo proton NMR spectra. Magnetic Resonance in Medicine, 30(6), 672–679. Rango, M., Bonifati, C., & Bresolin, N. (2006). Parkinson’s disease and brain mitochondrial dysfunction: A functional phosphorus magnetic resonance spectroscopy study. Journal of Cerebral Blood Flow and Metabolism, 26(2), 283–290. doi: 10.1038/sj.jcbfm.96001.-92. Rango, M., Castelli, A., & Scarlato, G. (1997). Energetics of 3.5 s neural activation in humans: A 31P MR spectroscopy study. Magnetic Resonance in Medicine, 38(6), 878–883. Rijpma, A., van der Graaf, M., Meulenbroek, O., Olde Rikkert, M. G. M., & Heerschap, A. (2018). Altered brain high-energy phosphate metabolism in mild Alzheimer’s disease: A 3-dimensional ³¹P MR spectroscopic imaging study. Neuroimage: Clinical, 18, 254–261. doi: 10.1016/j.nicl.2018.01.031. eCollection 2018. Ross, B., & Bluml, S. (2001). Magnetic resonance spectroscopy of the human brain. The Anatomical Record, 265(2), 54–84. Rothman, D. L., Petroff, O. A., Behar, K. L., & Mattson, R. H. (1993). Localized 1H NMR measurements of gamma-aminobutyric acid in human brain in vivo. Proceedings of the National Academy of Sciences USA, 90(12), 5662–5666. Sappey-Marinier, D., Calabrese, G., Fein, G., Hugg, J. W., Biggins, C., & Weiner, M. W. (1992). 
Effect of photic stimulation on human visual cortex lactate and phosphates using 1H and 31P magnetic resonance spectroscopy. Journal of Cerebral Blood Flow and Metabolism, 12(4), 584–592. doi: 10.1038/jcbfm .1992.82. Schaller, B., Mekle, R., Xin, L., Kunz, N., & Gruetter, R. (2013). Net increase of lactate and glutamate concentration in activated human visual cortex detected with magnetic resonance spectroscopy at 7 tesla. Journal of Neuroscience Research, 91(8), 1076–1083. doi: 10.1002/jnr.23194. Schaller, B., Xin, L., O’Brien, K., Magill, A. W., & Gruetter, R. (2014). Are glutamate and lactate increases ubiquitous to physiological activation? A (1)H functional MR spectroscopy study during motor activation in human brain at 7Tesla. NeuroImage, 93(Pt 1), 138–145. doi: 10.1016/j.neuroimage.2014.02.016. Scheenen, T. W. J., Klomp, D. W. J., Wijnen, J. P., & Heerschap, A. (2008). Short echo time 1H-MRSI of the human brain at 3T with minimal chemical shift displacement errors using adiabatic refocusing pulses. Magnetic Resonance in Medicine, 59(1), 1–6. doi: 10.1002/mrm.21302. Schlattner, U., Tokarska-Schlattner, M., & Wallimann, T. (2006). Mitochondrial creatine kinase in human health and disease. Biophysica Biochimica Acta - Molecular Basis of Disease, 1762(2), 164–180. doi: 10.1016/j.bbadis.2005.09.004.


Shoubridge, E. A., Briggs, R. W., & Radda, G. K. (1982). 31p NMR saturation transfer measurements of the steady state rates of creatine kinase and ATP synthetase in the rat brain. FEBS Letters, 140(2), 289–292. doi: 10.1016/00145793(82)80916-2. Simmons, M. L., Frondoza, C. G., & Coyle, J. T. (1991). Immunocytochemical localization of N-acetyl-aspartate with monoclonal antibodies. Neuroscience, 45(1), 37–45. doi: 10.1016/0306-4522(91)90101-s. Sokoloff, L. (1991). Measurement of local cerebral glucose utilization and its relation to local functional activity in the brain. Advances in Experimental Medicine and Biology, 291, 21–42. doi: 10.1007/978-1-4684-5931-5994. Sokoloff, L. (1993). Function-related changes in energy metabolism in the nervous system: Localization and mechanisms. Keio Journal of Medicine, 42(3), 95-103. Somogyi, P., Tamás, G., Lujan, R., & Buhl, E. H. (1998). Salient features of synaptic organisation in the cerebral cortex. Brain Research Brain Research Reviews, 26(2–3), 113–135. Stagg, C. J. (2014). Magnetic resonance spectroscopy as a tool to study the role of GABA in motor-cortical plasticity. Neuroimage, 86, 19–27. Stanley, J. A. (2002). In vivo magnetic resonance spectroscopy and its application to neuropsychiatric disorders. Canadian Journal of Psychiatry, 47(4), 315–326. Stanley, J., Burgess, A., Khatib, D., Ramaseshan, K., Arshad, M., Wu, H., & Diwadkar, V. (2017). Functional dynamics of hippocampal glutamate during associative learning assessed with in vivo 1H functional magnetic resonance spectroscopy. NeuroImage, 153, 189–197. doi: 10.1016/j.neuroimage.2017.03.051. Stanley, J. A., Kipp, H., Greisenegger, E., MacMaster, F. P., Panchalingam, K., Keshavan, M. S., . . . Pettegrew, J. W. (2008). Evidence of developmental alterations in cortical and subcortical regions of children with attentiondeficit/hyperactivity disorder: A multivoxel in vivo phosphorus 31 spectroscopy study. Archives of General Psychiatry, 65(12), 1419–1428. 
doi: 65/12/ 1419 [pii]10.1001/archgenpsychiatry.2008.503. Stanley, J. A., & Pettegrew, J. W. (2001). A post-processing method to segregate and quantify the broad components underlying the phosphodiester spectral region of in vivo 31P brain spectra. Magnetic Resonance in Medicine, 45(3), 390–396. Stanley, J. A., Pettegrew, J. W., & Keshavan, M. S. (2000). Magnetic resonance spectroscopy in schizophrenia: Methodological issues and findings – Part I. Biological Psychiatry, 48(5), 357–368. doi: S0006-3223(00)00949-5 [pii]. Stanley, J. A., & Raz, N. (2018). Functional magnetic resonance spectroscopy: The “new” MRS for cognitive neuroscience and psychiatry research. Frontiers in Psychiatry – Neuroimaging and Stimulation, 9, 76. doi: 10.3389/ fpsyt.2018.00076. Sui, J., Huster, R., Yu, Q., Segall, J. M., & Calhoun, V. D. (2014). Function-structure associations of the brain: Evidence from multimodal connectivity and covariance studies. Neuroimage, 102(Pt 1), 11–23. doi: 10.1016/j. neuroimage.2013.09.044. Tallan, H. (1957). Studies on the distribution of N-acetyl-L-aspartic acid in brain. Journal of Biological Chemistry, 224(1), 41–45.


Tatti, R., Haley, M. S., Swanson, O. K., Tselha, T., & Maffei, A. (2017). Neurophysiology and regulation of the balance between excitation and inhibition in neocortical circuits. Biological Psychiatry, 81(10), 821–831. doi:10.1016/j.biopsych.2016.09.017. Taylor, R., Schaefer, B., Densmore, M., Neufeld, R. W. J., Rajakumar, N., Williamson, P. C., & Théberge, J. (2015). Increased glutamate levels observed upon functional activation in the anterior cingulate cortex using the Stroop Task and functional spectroscopy. Neuroreport, 26(3), 107–112. doi: 10.1097/ WNR.0000000000000309. Thielen, J. W., Hong, D., Rohani Rankouhi, S., Wiltfang, J., Fernández, G., Norris, D. G., & Tendolkar, I. (2018). The increase in medial prefrontal glutamate/ glutamine concentration during memory encoding is associated with better memory performance and stronger functional connectivity in the human medial prefrontal-thalamus-hippocampus network. Human Brain Mapping, 39(6), 2381–2390. doi: 10.1002/hbm.24008. Tkac, I., Andersen, P., Adriany, G., Merkle, H., Ugurbil, K., & Gruetter, R. (2001). In vivo 1H NMR spectroscopy of the human brain at 7 T. Magnetic Resonance in Medicine, 46(3), 451–456. Tkác, I., Starcuk, Z., Choi, I. Y., & Gruetter, R. (1999). In vivo 1H NMR spectroscopy of rat brain at 1 ms echo time. Magnetic Resonance in Medicine, 41(4), 649–656. U gurbil, K., Adriany, G., Andersen, P., Chen, W., Garwood, M., Gruetter, R., . . . Zhu, X. H. (2003). Ultrahigh field magnetic resonance imaging and spectroscopy. Magnetic Resonance Imaging, 21(10), 1263–1281. Urenjak, J., Williams, S. R., Gadian, D. G., & Noble, M. (1993). Proton nuclear magnetic resonance spectroscopy unambiguously identifies different neural cell types. Journal of Neuroscience, 13(3), 981–989. van de Bank, B. L., Maas, M. C., Bains, L. J., Heerschap, A., & Scheenen, T. W. J. (2018). Is visual activation associated with changes in cerebral high-energy phosphate levels? Brain Structure and Function, 223, 2721–2731. 
van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. Journal of Neuroscience, 29(23), 7619–7624. doi: 10.1523/JNEUROSCI.1443-09.2009. van der Knaap, M. S., van der Grond, J., Luyten, P. R., den Hollander, J. A., Nauta, J. J., & Valk, J. (1992). 1H and 31P magnetic resonance spectroscopy of the brain in degenerative cerebral disorders. Annals of Neurology, 31(2), 202–211. Wallimann, T., Wyss, M., Brdiczka, D., Nicolay, K., & Eppenberger, H. M. (1992). Intracellular compartmentation, structure and function of creatine kinase isoenzymes in tissues with high and fluctuating energy demands: The “phosphocreatine circuit” for cellular energy homeostasis. Biochemical Journal, 281(Pt 1), 21–40. doi: 10.1042/bj2810021. Wijtenburg, S. A., McGuire, S. A., Rowland, L. M., Sherman, P. M., Lancaster, J. L., Tate, D. F., . . . Kochunov, P. (2013). Relationship between fractional anisotropy of cerebral white matter and metabolite concentrations measured using (1)H magnetic resonance spectroscopy in healthy adults. Neuroimage, 66, 161–168. doi: 10.1016/j.neuroimage.2012.10.014.


Wilman, A. H., & Allen, P. S. (1995). Yield enhancement of a double-quantum filter sequence designed for the edited detection of GABA. Journal of Magnetic Resonance B, 109(2), 169–174. Woodcock, E. A., Anand, C., Khatib, D., Diwadkar, V. A., & Stanley, J. A. (2018). Working memory modulates glutamate levels in the dorsolateral prefrontal cortex during (1)H fMRS. Frontiers in Psychiatry, 9, 66. Epub 2018/03/22. doi: 10.3389/fpsyt.2018.00066. Yang, S., Hu, J., Kou, Z., & Yang, Y. (2008). Spectral simplification for resolved glutamate and glutamine measurement using a standard STEAM sequence with optimized timing parameters at 3, 4, 4.7, 7, and 9.4T. Magnetic Resonance in Medicine, 59(2), 236–244. doi: 10.1002/mrm.21463. Yeo, R. A., Hill, D., Campbell, R., Vigil, J., & Brooks, W. M. (2000). Developmental instability and working memory ability in children: A magnetic resonance spectroscopy investigation. Developmental Neuropsychology, 17(2), 143–159. Yuksel, C., Du, F., Ravichandran, C., Goldbach, J. R., Thida, T., Lin, P., . . . Cohen, B. M. (2015). Abnormal high-energy phosphate molecule metabolism during regional brain activation in patients with bipolar disorder. Molecular Psychiatry, 20(9), 1079–1084. doi: 10.1038/mp.2015.13. Zhu, X.-H., Qiao, H., Du, F., Xiong, Q., Liu, X., Zhang, X., . . . Chen, W. (2012). Quantitative imaging of energy expenditure in human brain. NeuroImage, 60(4), 2107–2117. doi: 10.1016/j.neuroimage.2012.02.013.

PART IV

Predictive Modeling Approaches

16 Predicting Individual Differences in Cognitive Ability from Brain Imaging and Genetics

Kevin M. Anderson and Avram J. Holmes

Introduction

The study of intelligence, or general cognitive ability, is one of the earliest avenues of modern psychological enquiry (Spearman, 1904). A consistent goal of this field is the development of cognitive measures that predict real-world outcomes, ranging from academic performance, health (Calvin et al., 2017), and psychopathology (Woodberry, Giuliano, & Seidman, 2008), to mortality and morbidity rates (Batty, Deary, & Gottfredson, 2007). Despite evidence linking intelligence with a host of important life outcomes, we remain far from a mechanistic understanding of how neurobiological processes contribute to individual differences in general cognitive ability. Excitingly for researchers, advances in predictive statistical modeling, the emergence of well-powered imaging and genetic datasets, and a cultural shift toward open-access data may allow for behavioral prediction at the level of a single individual (Miller et al., 2016; Poldrack & Gorgolewski, 2014). There is growing interest in generalizable brain- and genetic-based predictive models of intelligence (Finn et al., 2015; Lee et al., 2018; see also Chapter 17, by Willoughby and Lee). In the short term, statistical models predicting cognitive ability may yield insight into underlying neurobiology and, in the long term, may inform empirically driven and individualized medical and educational interventions. Here, we take a critical look at how recent advances in genetic and brain imaging methods can be used for the prediction of individual differences in cognitive ability. In doing so, we cover prior work defining cognitive ability and mapping biological correlates of intelligence. We discuss the importance of prioritizing statistical models that are both predictive and interpretable before highlighting recent progress and future directions in the genetic and neuroimaging prediction of cognitive ability. We conclude with a brief discussion of the ethical implications and limitations associated with the creation of predictive models.


What Are Cognitive Abilities?

Traditionally in neuroscience, broad cognitive ability is quantified using one or more standardized tests, including the Wechsler Adult Intelligence Scale (WAIS), Raven's Progressive Matrices, or related measures of fluid intelligence or reasoning capacity (Deary, Penke, & Johnson, 2010). Through these, researchers hope to estimate an individual's learning ability, comprehension, and capacity for reasoning and abstraction. This broad description of cognitive ability is sometimes called g, standing for general intelligence, which reflects the observation that individuals who do well on one test also tend to do well on others (Haier, 2017). Measures of general cognitive ability have been criticized as being overly reified – that is, as translating an abstraction or hypothetical construct into a discrete biological entity (Nisbett et al., 2012). Although there is consensus on the stability and predictive utility of psychometrically defined general cognitive ability, this abstract factor has been subject to debate regarding its interpretation (Gray & Thompson, 2004) and construct validity – the degree to which a test unambiguously reflects what it aims to measure. Estimates of general cognitive ability show reliable associations with important real-world outcomes, but the biological mechanisms underlying intelligence remain a topic of focused study (Genç et al., 2018; Goriounova & Mansvelder, 2019). No one measure of cognitive ability is perfect, free from cultural bias, or immune to misuse (Sternberg, 2004). Indeed, perhaps no other subject in psychology has provoked more debate and controversy than the study of human intelligence and the associated concept of a unitary factor that broadly supports behavior and cognition.
For instance, researchers have highlighted complementary abilities and behaviors not explicitly assessed through standard batteries (e.g., emotional intelligence; Salovey & Mayer, 1990), although there is evidence for shared variance across emotional and cognitive domains (Barbey, Colom, & Grafman, 2014). Because space does not permit a detailed discussion of this literature, general cognitive ability will be used to illustrate the potential of genetic and brain imaging methods for predicting behavior and individual differences in cognition. Readers should note that these approaches can be leveraged to generate predictions for a range of cognitive abilities and other complex behaviors. Critically, analyses linking the genome and brain biology to cognitive ability should not be taken to imply biological determinism or essentialism. The expression of the genome and the development of the brain are shaped both by stochastic processes and by complex, bidirectional interactions with the environment (Dor & Cedar, 2018). For instance, even a "simple" genetic measure like heritability – which is the amount of variance in a trait explained by structural genetics – can vary across developmental stages and environments (Kendler & Baker, 2007; Visscher, Hill, & Wray, 2008).
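To make the positive-manifold observation behind g concrete, the following sketch simulates correlated test scores from a latent general factor and recovers it as the first principal component. All sample sizes, loadings, and noise levels are invented for illustration; real psychometric work uses factor-analytic models rather than this bare PCA.

```python
# Illustrative sketch (not from the chapter): test scores are simulated as
# a shared latent factor g plus test-specific noise, so scores on all tests
# intercorrelate. The first principal component then recovers g.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 6

g = rng.normal(size=n_people)                   # latent general ability
loadings = rng.uniform(0.5, 0.8, size=n_tests)  # each test's g-loading
scores = np.outer(g, loadings) + 0.6 * rng.normal(size=(n_people, n_tests))

# First principal component of the standardized score matrix.
z = (scores - scores.mean(0)) / scores.std(0)
_, s, vt = np.linalg.svd(z, full_matrices=False)
pc1 = z @ vt[0]
var_explained = s[0] ** 2 / np.sum(s ** 2)

print(f"PC1 explains {var_explained:.0%} of test variance")
print(f"|corr(PC1, latent g)| = {abs(np.corrcoef(pc1, g)[0, 1]):.2f}")
```

Because every test loads on the same latent factor, a single component captures much of the shared variance, mirroring the empirical pattern that motivates g.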


Explanation Versus Prediction

We endorse the terminology of Gabrieli, Ghosh, and Whitfield-Gabrieli (2015), who differentiate between in-sample correlation and out-of-sample prediction across three types of models. First, a study is often said to "predict" an outcome by showing within-sample correlations. For instance, individual differences in intelligence may correlate with aspects of concurrently measured brain anatomy (e.g., cortical thickness). This type of correlation is critical for theory-building and nominating potentially important variables. Second, a study may demonstrate longitudinal correlation within a given sample, for instance to establish that a brain measure at time 1 correlates with subsequent behavior at time 2. While this approach meets the temporal requirement of predictive forecasting, it does not necessarily satisfy the criteria of generalizability that differentiates the third class of prediction. That is, a crucial test of a predictive model is whether it explains behavioral variance in an unseen or out-of-sample set of individuals (Gabrieli et al., 2015). This last class of prediction aims for external validity, and is arguably the most important for translating genetic or cognitive neuroscientific data into insights suitable for clinical or public health applications. However, we propose an addendum to the three-part conceptualization of Gabrieli et al. (2015), and argue for the prioritization of statistical models that are both predictive and interpretable. The decision to apply predictive and/or inferential methods is one routinely faced by researchers, although the distinction between traditional statistical approaches and machine learning is in many ways arbitrary (Bzdok, Altman, & Krzywinski, 2018; Bzdok & Ioannidis, 2019). We emphasize that these broad categories of predictive and correlational models are mutually informative, and the selection of one method over another is usually based on the inferential vs.
predictive goals of the researcher (Bzdok & Ioannidis, 2019; Yarkoni & Westfall, 2017). In practice, model transparency and biological interpretability are often sacrificed for predictive performance, due in large part to the multivariate nature of the data (Bzdok & Ioannidis, 2019). An average SNP array, for instance, provides information on approximately 500,000–800,000 genomic variants, which can be expanded to upwards of 70 million genomic features using modern SNP imputation techniques. With brain imaging data, even a conservative parcellation consisting of 200 areas would produce 19,900 unique functional relationships. Accordingly, many successful predictive models of behavior employ data dimensionality reduction, feature selection, machine learning (e.g., random forests, support vector machines), or specialized forms of regression (e.g., partial least squares, canonical correlation, elastic net) to reduce the number of comparisons and stabilize signal estimates (for a review, see Bzdok & Yeo, 2017). These approaches capture complex multivariate interactions among predictors, although often at the expense of mechanism
or interpretation. At the end of this chapter, we will review the promise of interpretable forms of machine- and deep-learning techniques. Predictive models of cognitive ability are perhaps most important for the study of human development (Rosenberg, Casey, & Holmes, 2018). Core psychological functions emerge through neurodevelopmental processes and concurrent molecular and genetic cascades that are influenced by the environment (e.g., resource availability, early life stress). Correspondingly, the heritability of general cognitive ability increases across childhood, adolescence, and into adulthood, due in part to amplification processes, sometimes called genotype-environment covariance (Briley & Tucker-Drob, 2013; Haworth et al., 2019). That is, initially small heritable differences in cognitive ability may lead to self-amplifying environmental selection (e.g., parent or teacher investment, self-sought intellectual challenges; Tucker-Drob & Harden, 2012). Although genes are far from deterministic or immutable (Kendler, Turkheimer, Ohlsson, Sundquist, & Sundquist, 2015), genetic variation is fixed at conception and may one day be a useful guide for early interventions or to improve educational outcomes, particularly during developmental periods when behavioral measurement is difficult. Overall prediction of intelligence may even prove useful for individualized psychiatric medicine (Lam et al., 2017), given that general cognitive ability is genetically associated with schizophrenia, bipolar disorder, and other forms of mental illness (Bulik-Sullivan et al., 2015; Hagenaars et al., 2016; Hill, Davies, Liewald, McIntosh, & Deary, 2016). However, biological predictive tools are far from mature and remain subject to serious ethical and technical challenges, which are addressed at the end of this chapter.
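The distinction between in-sample fit and out-of-sample prediction drawn above can be sketched in a few lines. The data below are simulated, and the ridge penalty and sample sizes are arbitrary choices; the point is only that a model's fit on the individuals it was trained on typically exceeds its accuracy on unseen individuals.

```python
# Hedged sketch of in-sample vs. out-of-sample prediction (Gabrieli et al.,
# 2015 distinction). Simulated "brain features" are weakly related to a
# behavioral score; a ridge regression is fit on a training set and then
# evaluated on held-out people. All values here are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n, p = 400, 120                                      # people, brain features
true_w = rng.normal(size=p) * (rng.random(p) < 0.1)  # sparse true effects
X = rng.normal(size=(n, p))
y = X @ true_w + rng.normal(size=n) * 2.0

train, test = np.arange(0, 200), np.arange(200, n)

lam = 10.0                                           # ridge penalty
A = X[train].T @ X[train] + lam * np.eye(p)
w = np.linalg.solve(A, X[train].T @ y[train])        # closed-form ridge fit

r_in = np.corrcoef(X[train] @ w, y[train])[0, 1]     # in-sample correlation
r_out = np.corrcoef(X[test] @ w, y[test])[0, 1]      # out-of-sample prediction

print(f"in-sample r = {r_in:.2f}, out-of-sample r = {r_out:.2f}")
```

The gap between the two correlations is the overfitting that within-sample "prediction" conceals, which is why external validation on unseen individuals is the crucial test.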

Neuroimaging Prediction of Cognitive Ability

Identifying the neural correlates of generalized cognitive ability is of great importance, since molecular and biologically mechanistic explanations of intelligence remain largely theoretical (Barbey, 2018; Deary et al., 2010; Jung & Haier, 2007). To date, most research in this area prioritizes inferential hypothesis testing, for instance, to identify features of brain biology associated with individual differences in cognitive ability (Cole et al., 2013; Smith et al., 2015). These studies generally implicate the heteromodal association cortex as important for cognitive ability (Cole, Yarkoni, Repovs, Anticevic, & Braver, 2012; Jung & Haier, 2007). However, across populations, intellectual ability shows generalized and diffuse correlations with brain size (Deary et al., 2010), white matter tracts (Penke et al., 2012), regional brain anatomy (Luders, Narr, Thompson, & Toga, 2009; Tadayon, Pascual-Leone, & Santarnecchi, 2019), brain connectivity (Barbey, 2018; Smith et al., 2015; Song et al., 2008), and functional dynamics (Liégeois et al., 2019; Shine et al., 2019). Taken together, these data suggest that general cognitive ability
correlates widely with diverse anatomical and functional brain features measured across multiple modalities. Investigators have increasingly adopted predictive modeling approaches to simultaneously maximize variance explained and contend with highly multivariate imaging feature sets (e.g., functional connections) and behaviors (e.g., fluid intelligence). In an example from a series of landmark studies, Rosenberg et al. (2015) trained a statistical model to predict attention based on the correlation of blood oxygenation-level dependent (BOLD) time courses (i.e., functional connections). The statistical model predicted attention in the initial group of participants and was also externally predictive of attentional deficits in an out-of-sample cohort of individuals with ADHD. Further work in this domain has revealed that models built on task-based fMRI data yield more accurate predictions of fluid intelligence (20%) than resting-state-only models (< 6%), indicating that individual differences in functional neurocircuitry are accentuated by task-based perturbations of the system (Greene, Gao, Scheinost, & Constable, 2018). The accuracy of brain-based predictive models will continue to increase as methods are refined (Scheinost et al., 2019) and sample sizes increase (Miller et al., 2016). However, a consensus is emerging that no single imaging feature can explain a large proportion of the variance in any complex behavioral or cognitive trait (Cremers, Wager, & Yarkoni, 2017; Smith & Nichols, 2018), motivating the continued use of multivariate predictive models. At the same time, establishing the biological mechanisms that underlie predictive models will be critical, particularly for disambiguating true signal from artifactual confounds (e.g., head motion; Siegel et al., 2017). Given the cost and expertise required to obtain structural and functional brain imaging data – and the potential for disparate sampling across populations – their use as predictive tools must be justified.
That is, why predict cognitive ability with brain data when it can be measured directly from behavioral assessments? Here, we provide a partial list of potential uses for brain-based predictive models:

1. Neurobiological Inference: Predictive models may identify brain imaging features (e.g., connectivity patterns, cortical thickness) that are most tied to variance in a cognitive process.
2. Outcome Prediction: A brain-based model of cognitive ability may yield unique predictions for personalized health, education, and psychiatric illness. Imaging-based models must demonstrate generalizability, and may benefit from benchmarking against behavior and focusing on developmental periods or populations where psychological assessment is difficult (Woo, Chang, Lindquist, & Wager, 2017).
3. Define Predictive Boundaries: Researchers may survey where predictions work and fail to reveal areas of shared and unique variance – for instance, to reveal trajectories of model accuracy across development (Rosenberg et al., 2018).
4. Multivariate Integration: Machine-learning methods are particularly suited for dealing with high-dimensional and disparate types of data.
5. Measurement Inference: A validated predictive model could be applied to imaging data to impute a trait or variable that was not originally measured.
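A minimal sketch in the spirit of the connectome-based approach described above (e.g., Rosenberg et al., 2015): correlate each functional connection with behavior in a training sample, keep suprathreshold edges, summarize each person by a single network score, and fit a line that is then applied to held-out individuals. The data, threshold, and sizes below are hypothetical simplifications of the published pipeline.

```python
# Toy connectome-based prediction: edge selection in a training set,
# a one-dimensional summary score, and evaluation on unseen subjects.
import numpy as np

rng = np.random.default_rng(2)
n_sub, n_edges = 300, 500                  # subjects, connectivity edges
edges = rng.normal(size=(n_sub, n_edges))
behavior = edges[:, :20].sum(1) * 0.3 + rng.normal(size=n_sub)

train, test = np.arange(200), np.arange(200, n_sub)

# Edge selection: correlate each edge with behavior in the training set.
e = edges[train] - edges[train].mean(0)
b = behavior[train] - behavior[train].mean()
r = (e * b[:, None]).sum(0) / (np.sqrt((e**2).sum(0)) * np.sqrt((b**2).sum()))
selected = np.abs(r) > 0.15                # arbitrary selection threshold

# Single summary feature: sum of selected edges, then a 1-D linear fit.
s_train = edges[train][:, selected].sum(1)
slope, intercept = np.polyfit(s_train, behavior[train], 1)

s_test = edges[test][:, selected].sum(1)
pred = slope * s_test + intercept
print(f"held-out r = {np.corrcoef(pred, behavior[test])[0, 1]:.2f}")
```

Because edge selection and model fitting use only the training subjects, the held-out correlation is a genuine out-of-sample estimate rather than an in-sample fit.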

Genetic Prediction of Cognitive Ability

Quantitative family and twin studies establish a genetic component to general intelligence (30–50%; Deary, Johnson, & Houlihan, 2009), educational attainment (40%; Branigan, McCallum, & Freese, 2013), and working memory (15–59%; Karlsgodt et al., 2010). Briefly, twin designs separate genetic and environmental variance by comparing monozygotic twins, who have nearly identical genomes, to dizygotic twins, who on average have 50% of their genomes in common (Boomsma, Busjahn, & Peltonen, 2002). Evidence for genetic effects emerges if a trait is more correlated among monozygotic than dizygotic twins (Boomsma et al., 2002). Heritability provides an invaluable "upper bound" estimate of the total variance that can be explained by genetics, but heritability does not imply immutability. Rather, it reflects a point estimate for a given sample within a set environment (Neisser et al., 1996). Although indispensable, heritability estimates do not provide predictive inferences at the level of a specific individual. How can general cognitive ability be predicted by individual differences in the nearly 3 billion base pairs that comprise the human genome? The most reliable and widely used method for linking genotypes to phenotypes is the Genome-Wide Association Study (GWAS). GWAS examines genomic locations that differ between individuals – termed single-nucleotide polymorphisms (SNPs) – and tests whether a phenotype is correlated with certain combinations of SNPs. By conducting linear or logistic regressions across millions of SNPs, GWAS identifies genomic locations that are associated with binary traits (e.g., disease status) or continuous measures, like height or cognitive ability. A virtue of the GWAS approach is that it allows for the identification of genetic predictors of complex traits in a data-driven manner, which contrasts with targeted investigations of candidate genes.
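The per-SNP regression at the core of a GWAS can be sketched as follows. Genotypes, effect sizes, and sample sizes are simulated toys, and real analyses additionally adjust for covariates such as ancestry principal components; the sketch only illustrates the one-SNP-at-a-time association test.

```python
# Toy GWAS: regress a continuous phenotype on each SNP's allele dosage
# (0/1/2), one SNP at a time, and rank SNPs by their test statistic.
import numpy as np

rng = np.random.default_rng(3)
n, n_snps = 2000, 300
maf = rng.uniform(0.1, 0.5, size=n_snps)            # minor allele frequencies
geno = rng.binomial(2, maf, size=(n, n_snps)).astype(float)

causal = np.zeros(n_snps)
causal[:5] = 0.2                                    # five small true effects
pheno = geno @ causal + rng.normal(size=n)

# Simple regression per SNP: beta = cov(g, y) / var(g); z = beta / se.
g = geno - geno.mean(0)
y = pheno - pheno.mean()
beta = (g * y[:, None]).sum(0) / (g**2).sum(0)
resid_var = ((y[:, None] - g * beta)**2).sum(0) / (n - 2)
se = np.sqrt(resid_var / (g**2).sum(0))
z = beta / se

print("top SNPs by |z|:", np.argsort(-np.abs(z))[:5])
```

Even with five genuinely causal SNPs, each explains well under 1% of phenotypic variance here, echoing why complex-trait GWAS require very large samples.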
Early candidate approaches focused on specific genes with hypothesized relevance to cognitive ability, often informed by animal models or genes related to specific neurotransmitter systems. However, most reported associations between individual genes and cognitive ability have been found to either not replicate or reflect overestimates of the true effect size (Chabris et al., 2012). Advances in statistical genetics reveal that the genetic architecture of most complex (i.e., non-Mendelian) traits is extremely polygenic and determined by variation that is distributed across the entire genome. It is rare to find highly penetrant individual genes or SNPs that explain more than 0.1–0.2% of the variance in a
trait, requiring researchers to adopt polygenic predictive approaches (Barton, Etheridge, & Véber, 2017; Boyle, Li, & Pritchard, 2017; Wray, Wijmenga, Sullivan, Yang, & Visscher, 2018) in extremely large samples. The most common form of genetic prediction utilizes polygenic scores (or polygenic risk scores), which aggregate genetic associations for a particular trait across many genetic variants and their associated weights, determined from a GWAS (Torkamani, Wineinger, & Topol, 2018). Polygenic scores are easy to understand since they are built from the sum or average of many thousands or millions of linear predictors, and they hold eventual promise for shaping early health interventions (Torkamani et al., 2018). Well-powered GWAS have recently been conducted for neurocognitive measures of intelligence (N = 269,867; Savage et al., 2018), mathematical ability (N = 564,698; Lee et al., 2018), and cognitive ability (N = 300,486; Davies et al., 2018; N = 107,207; Lam et al., 2017). These studies reveal SNPs that are associated with neurocognitive measures; however, the variance explained by any individual variant is exceedingly small, and current polygenic scores explain about 3–6% of the variance in cognitive ability in independent samples (Hill et al., 2019; Savage et al., 2018). Future polygenic scores based on increasingly large GWAS samples will likely explain more of the heritable variance in cognitive ability, but the endeavor has been complicated by the cost of collecting standardized cognitive batteries on hundreds of thousands of individuals. A major turning point in the genomic study of intelligence occurred when researchers focused on the measure of years of education (Plomin & von Stumm, 2018). Because this demographic variable is so commonly collected by large-scale genetic consortia, investigators were able to achieve dramatic increases in sample size and power (Okbay et al., 2016; Rietveld et al., 2013).
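Mechanically, a polygenic score is just the weighted sum described above: each person's allele counts multiplied by GWAS-derived effect sizes and summed across variants. The weights and genotypes below are invented, and real pipelines also prune or clump SNPs for linkage disequilibrium and threshold on p-values.

```python
# Toy polygenic score: score_i = sum_j (dosage_ij * gwas_beta_j).
import numpy as np

rng = np.random.default_rng(4)
n_people, n_snps = 5, 1000

gwas_beta = rng.normal(scale=0.01, size=n_snps)      # per-SNP GWAS weights
dosages = rng.binomial(2, 0.3, size=(n_people, n_snps)).astype(float)

polygenic_score = dosages @ gwas_beta                # one score per person
print(np.round(polygenic_score, 3))
```

The score collapses millions of tiny effects into a single linear predictor per person, which is what makes it simple to compute but blind to which specific variants drive any individual's value.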
The largest GWAS of educational attainment included approximately 1.1 million individuals and explained 7–10% of the variance in cognitive measures from an independent test cohort (Lee et al., 2018), which may be further improved by leveraging cross-trait pleiotropy and correlation (Allegrini et al., 2019; Krapohl et al., 2018). Although the downstream effect of a particular variant is not always directly inferable from its genomic location (Tam et al., 2019), biological relevance of GWAS-nominated SNPs can be approximated using gene-set and cell-type enrichment methods. For instance, genetic associations with educational attainment are greater in coding regions of genes expressed in brain tissue, neurons (Watanabe, Umicevic Mirkov, de Leeuw, van den Heuvel, & Posthuma, 2019), and gene sets tied to myelination and neurogenesis (Hill et al., 2019). Well-powered GWAS of educational attainment, which is highly genetically correlated with cognitive ability, provide a route for reliable cross-modal integration of genetic and neuroimaging measures. For instance, polygenic scores of cognitive ability can be correlated with imaging features, such as brain activation in a working memory task (Heck et al., 2014) or brain size (Elliott et al., 2018).


Joint Heritability of Brain and Cognitive Ability

The influence of genetics on cognitive ability is likely mediated by structural and functional features of the brain. Twin studies have shown that both cognitive abilities as well as brain structure and function are heritable (Gray & Thompson, 2004). That is, about 50% of variation in cognitive ability is attributed to genetic factors (Deary et al., 2009), and MRI-based measures of brain anatomy are also heritable, including total brain volume, cortical thickness (Ge et al., 2019), and the size and shape of subcortical volumes (Hibar et al., 2017; Roshchupkin et al., 2016). These findings demonstrate that individual differences in brain and behavior are shaped by genetics, but they do not indicate whether cognitive ability and brain phenotypes are influenced by the same underlying features of the genome, nor do they reveal the relevant biological pathways that contribute to the observed heritability. Using a method called genetic correlation, researchers are able to quantify whether the same genetic factors influence both general cognitive ability and neural phenotypes (Neale & Maes, 1992). Recent imaging genetic analyses in 7,818 older adult (45–79 years) white British participants demonstrated moderate levels of genetic correlation (.10 < r < .30) between cognitive ability and cortical thickness in the somato/motor cortex and anterior temporal cortex (Ge et al., 2019). Convergent evidence indicates shared genetic relationships between cognitive ability and cortical surface area (Vuoksimaa et al., 2015), and implicates biological pathways tied to cell growth as a driver of shared genetic variance between brain morphology and intelligence (Jansen et al., 2019).
However, genetic correlations are subject to genetic confounding: for instance, the genetic relationship between one trait (e.g., cholesterol) and another (e.g., heart disease) could be mediated by pleiotropic effects of a third variable (e.g., triglycerides; Bulik-Sullivan et al., 2015). Twin-based designs reveal a similar shared genetic basis of cognitive ability with brain morphology (Hulshoff Pol et al., 2006; Pennington et al., 2000; Posthuma et al., 2002; Thompson et al., 2001) and white matter structure (Chiang et al., 2011; Penke et al., 2012), although more research is needed to test for shared genetic relationships with functional connectivity.
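The MZ/DZ comparison underlying these twin designs is often summarized by Falconer's formula, h2 ~ 2(rMZ - rDZ), since monozygotic twins share roughly 100% and dizygotic twins roughly 50% of segregating genetic variation. The twin correlations below are illustrative placeholders, not estimates from the studies cited in this chapter.

```python
# Falconer's formula and the classical ACE decomposition, with made-up
# twin correlations for illustration only.
r_mz, r_dz = 0.75, 0.50        # hypothetical MZ and DZ twin correlations
h2 = 2 * (r_mz - r_dz)         # additive genetic component (heritability)
c2 = r_mz - h2                 # shared-environment component
e2 = 1 - r_mz                  # nonshared environment + measurement error
print(f"h2 = {h2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")
# → h2 = 0.50, c2 = 0.25, e2 = 0.25
```

Note that the resulting h2 is the sample- and environment-specific point estimate discussed earlier, not a fixed property of the trait.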

Integrative Imaging-Genetic Approaches

Investigators have identified replicable genetic and neuroimaging correlates of cognitive ability, but combining these levels of analysis to establish associated molecular mechanisms remains an outstanding challenge. If a researcher's sole priority is to maximize predictive accuracy, then biological mechanism is largely irrelevant so long as the model is generalizable and performs well. With enough data, many of the current approaches may independently reach the "upper bound" of predictive accuracy. For instance,
polygenic scores derived from GWAS of height now predict nearly all of the SNP-based heritable variance in the trait (r2 = .40; Lello et al., 2018), and polygenic scores of educational attainment already explain about 7–10% of the variance in cognitive ability (Lee et al., 2018). With regard to brain imaging data, about 6–20% of the variance in general intelligence can be predicted from resting-state functional connectivity (Dubois, Galdi, Paul, & Adolphs, 2018; He et al., 2020). Integrating genomic and neural data into generative predictive models of behavior is a herculean task, in part because the two data types are separated in scale by orders of magnitude. A single base pair is measured in picometers (1/1,000,000,000,000 m) while a high-quality MRI scan provides information at millimeter resolution (1/1,000 m). In between these two levels is an interdependent hierarchy of gene transcription, genomic regulation, protein synthesis, cellular-molecular processes, and complex patterns of brain cytoarchitecture and connectivity. The majority of this rich functional genomic data can only be measured in post-mortem brain tissue and is largely inaccessible to human neuroimaging approaches. How, then, can information about the brain's molecular pathways, gene coregulation, and cell architecture be incorporated into existing imaging-genetic frameworks, and would this multi-scale approach provide for more accurate, mechanistically informative, models of complex phenotypes like cognitive ability? The daunting task of linking these data may yield, slightly, to the flood of openly available functional genomic data and the recent development of interpretable forms of machine learning (Eraslan, Avsec, Gagneur, & Theis, 2019). In a landmark series of publications, which serves as an example of work in this domain, the PsychENCODE consortium characterized the functional genomic landscape of the human brain with unprecedented precision and scale.
This collaborative endeavor provides data on gene expression, brain-active transcriptional enhancers, chromatin accessibility, methylation, and gene regulatory networks in single-cell and bulk tissue data from nearly 2,000 individuals (Wang et al., 2018). Sometimes called the functional genome (e.g., gene expression, methylation, chromatin folding, cell-specific interactions), these features refer to molecular processes and interactions that encompass the activity of the genome (e.g., expression), as opposed to its structure (e.g., SNP, copy number variant). A large fraction of the PsychENCODE data were obtained from individuals with schizophrenia, bipolar disorder, and autism spectrum disorder, allowing investigators to build functional genomic models to predict psychiatric disease. Wang et al. (2018) trained a generative form of a shallow deep learning model, called a Deep Boltzmann Machine (DBM; Salakhutdinov & Hinton, 2009). Deep learning is a subset of machine learning techniques that allows information to be structured into a hierarchy, and for progressively more complex and combinatorial features of the data to be extracted across levels


Figure 16.1 A graphical depiction of the Deep Boltzmann Machine (DBM) developed by Wang et al. (2018) to predict psychiatric case status. Functional genomic information (e.g., gene expression, gene enhancer activity, and co-expression networks) is embedded in the structure of the model. For instance, empirically mapped quantitative trait loci (QTL; green-dashed lines) reflect relationships between individual SNPs and downstream layers (e.g., gene expression). Lateral connections (purple solid lines) reflect gene-regulatory mechanisms and interactions (e.g., enhancers, transcription factors) also embedded in the DBM. Learned higher-order features of the model may reflect integrative and biologically plausible pathways (e.g., glutamatergic synapses) that can be deconstructed using feature interpretation techniques. The learned cross-level structure embedded in the model can be adapted to include brain imaging features or predict cognitive phenotypes in datasets where functional genomic "Imputed Layers" are not observed (e.g., UK Biobank).

(Figure 16.1). Generative deep learning techniques are distinguishable from discriminative models. While discriminative techniques aim to maximize prediction accuracy for categorical (e.g., case vs. control) or continuous values (e.g., gene expression), generative models are trained to capture a realistic joint distribution of observed and higher-order latent predictive features to produce a full model of the trait (Libbrecht & Noble, 2015). Critically, deep-learning approaches are flexible and allow for realistic biological structure to be embedded within the architecture of such generative models (Gazestani & Lewis, 2019). Typically in population genetics, a model is trained to predict a phenotype (e.g., educational attainment) directly from structural genetic variations (e.g., SNPs). However, by incorporating intermediate functional genomic measures into the model framework, Wang et al. (2018) demonstrated a 6-fold improvement in the prediction of psychiatric disease relative to a genotype-only model. A feature of this approach is the flexible inclusion of domain knowledge. For instance, PsychENCODE data were
analyzed to find associations between SNPs and gene expression – known as quantitative trait loci (QTL) – to constrain potential links among levels of the DBM (Figure 16.1; green lines). Imposing biological reality onto the structure of the model reduces the untenably large number of possible connections between layers (e.g., all SNPs to all genes), and facilitates interpretation of higher-order latent features of the model. For instance, Wang et al. found that biological pathways tied to synaptic activity and immune function were most predictive of schizophrenia status, providing targets for follow-up research and experimental validation. Although speculative, deep models may also provide a means for individualized inference. For traditional methods that collapse genome-wide polygenic load into a single predicted value (e.g., schizophrenia risk), two theoretical individuals could have an identical polygenic score but totally non-overlapping profiles of risk alleles (Wray et al., 2018). Structured deep models, however, would still yield a predicted outcome (e.g., schizophrenia risk) while also mapping features that are most relevant in one individual (e.g., synaptic genes) vs. another (e.g., immune genes). However, it is unlikely that an investigator will possess functional genomic, imaging, and behavioral data on the same individuals. How, then, can rich data, like that described by Wang et al. (2018), be integrated into predictive models to reveal biological insights? One possibility comes from transfer learning (Pan & Yang, 2010), which incorporates knowledge from an already trained model as the starting point for a new one. This approach is particularly useful when training data is expensive or difficult to obtain and has been successfully applied in biology and medicine.
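A stripped-down sketch of the transfer-learning idea: reuse a frozen feature extractor (here a fixed random projection standing in for layers pretrained on a source task) and train only a small new model on top for the target task. Everything below is simulated; no real pretrained network or dataset is involved.

```python
# Toy "freeze the base, train a new head" transfer setup.
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for layers learned on a large source task; weights stay frozen.
W_pretrained = rng.normal(size=(50, 16)) / np.sqrt(50)

def extract_features(x):
    return np.tanh(x @ W_pretrained)     # frozen nonlinear representation

# Small target dataset (labels assumed expensive to collect).
X = rng.normal(size=(80, 50))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train only a new linear "head" on the frozen features (least squares).
F = extract_features(X)
head, *_ = np.linalg.lstsq(F, y - y.mean(), rcond=None)

pred = (F @ head + y.mean() > 0.5).astype(float)
print(f"training accuracy = {(pred == y).mean():.2f}")
```

Because only the 16 head weights are fit, the small target sample suffices, which is the practical appeal of transfer learning when labeled data are scarce.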
For instance, a Google deep model (i.e., a convolutional neural network) trained to classify images into categories (e.g., building, cat, dog) has been adapted, or transferred, to detect skin cancer from clinical images using a much smaller training set (Esteva et al., 2017). A hypothetical investigator studying the genetic basis of cognitive ability will likely possess data on structural nucleotide polymorphisms, possibly some brain phenotypes, and one or several measures of behavior or cognition. Rather than calculating a polygenic score or directly running a linear regression between millions of SNPs and a brain or cognitive phenotype, this researcher could leverage already trained deep neural networks (e.g., Wang et al., 2018) or embed empirically defined biological information (e.g., QTLs, gene-coexpression networks) into the structure of an integrative predictive model. Although imaging-genetic deep models have not yet been used to predict cognitive ability, incorporating biological information into generative machine-learning models has been shown to increase both predictive performance and interpretability across disciplines and applications (Eraslan et al., 2019; Libbrecht & Noble, 2015), particularly when training data is limited. For instance, deep neural networks have revealed regulatory motifs of non-coding segments of DNA (Zou et al., 2019). In biology, similar techniques that incorporate gene pathway information were best able to
predict drug sensitivity (Costello et al., 2014), mechanisms of bactericidal antibiotics (Yang et al., 2019), and nominate drug targets for immune-related disorders (Fange et al., 2019). In one noteworthy example, Ma et al. (2018) modeled the architecture of a deep network to reflect the biological hierarchy and interconnected processes of a eukaryotic cell. That is, genes were clustered into expert-defined groupings that reflect biological processes, cell features, or functions (e.g., DNA repair, cell-wall structure). Extensive prior knowledge was then used to define the hierarchical structure between groupings, and cell growth was predicted from the genotypes of millions of training cell observations. The biological realism of the model allowed for highly predictive features to be linked to specific pathways or processes, and even permitted in silico simulations of the effects of gene deletions. In such simulations, inputs to the model (e.g., SNPs, genes) can be selectively removed and their downstream effects on learned higher-order features can be estimated. For instance, deletion of two growth genes by Ma et al. (2018) led to predicted disruption in a select biological process (e.g., DNA repair) that was later validated through experimental gene knockouts. Such "white-box" or "visible" forms of deep learning address the common criticism that multivariate predictive models are "black boxes" that fail to explain underlying mechanisms (Ching et al., 2018). Unconstrained deep models learn relationships between an input and output, but the internal structure of the trained model is not obligated to be understandable by humans or reflect biological reality. We have highlighted examples of biologically inspired machine learning models to show how interpretability can be aided by incorporation of empirical and mechanistic relationships (e.g., gene networks).
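The "visible" modeling strategy of Ma et al. (2018) can be caricatured in miniature: inputs are first aggregated into predefined biological groupings so that every learned weight attaches to an interpretable unit, and in silico deletions propagate through that structure. The pathway assignments, data, and linear aggregation below are invented simplifications of the published deep model.

```python
# Toy "visible" model: genes -> predefined pathway scores -> outcome.
import numpy as np

rng = np.random.default_rng(6)
n, n_genes = 500, 40
pathways = {"dna_repair": range(0, 10),
            "cell_wall": range(10, 25),
            "metabolism": range(25, 40)}

genes = rng.normal(size=(n, n_genes))
growth = genes[:, :10].mean(1) * 2.0 + rng.normal(size=n) * 0.5

def pathway_scores(g):
    return np.column_stack([g[:, list(idx)].mean(1)
                            for idx in pathways.values()])

P = pathway_scores(genes)
w, *_ = np.linalg.lstsq(P, growth, rcond=None)     # one weight per pathway
for name, wi in zip(pathways, w):
    print(f"{name:>10}: weight {wi:+.2f}")

# In silico "deletion": zero out one gene and measure the predicted change.
def deletion_effect(gene_idx):
    ko = genes.copy()
    ko[:, gene_idx] = 0.0
    return np.abs(pathway_scores(ko) @ w - P @ w).mean()

print(f"|effect| of deleting a dna_repair gene: {deletion_effect(0):.3f}")
print(f"|effect| of deleting a metabolism gene: {deletion_effect(30):.3f}")
```

Because each weight belongs to a named pathway, the model's predictions can be read off mechanistically, and simulated deletions in the causal pathway produce larger predicted effects than deletions elsewhere.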
Rapid progress in these transparent forms of machine learning opens the door to understanding how predictions are produced and could provide a means to integrate explanation and prediction in previously unimagined ways (Camacho, Collins, Powers, Costello, & Collins, 2018; Yang et al., 2019; Yu et al., 2018). If such methods are to be applied to the study of human behavior, researchers must carefully consider which neuroimaging features best represent the fundamental "units" of psychological processes, and how to structure brain hierarchies (e.g., region to network) and cross-modal relationships (e.g., brain structure to function; Poldrack & Yarkoni, 2016).

Ethics and Limitations

Genetic and brain-based prediction of complex traits may one day allow for the development of preventative interventions and clinical biomarkers to assess risk and improve cognitive outcomes (Ashley, 2016; Gabrieli et al., 2015). However, the potential for misuse or misapplication of biological predictors of cognitive ability raises moral, societal, and practical

Predicting Cognitive Ability: Brain Imaging and Genetics

concerns that must be directly addressed by both scientists and policy makers. Currently, GWAS samples almost exclusively include white European populations, in large part due to data availability (Popejoy & Fullerton, 2016). This systemic bias is problematic given evidence that polygenic scores from GWAS derived in one ancestral population give inaccurate predictions when applied to other groups (Martin et al., 2017; Wojcik et al., 2019). Even the genotyping arrays used to measure individual allelic frequencies may be most sensitive to variation occurring in Europeans, leading to measurement biases in other populations (Kim, Patel, Teng, Berens, & Lachance, 2018). This confound compounds existing issues associated with the development and application of standardized assessment batteries for cognitive ability. Left unaddressed, the convergence of these points may exacerbate societal health and educational inequalities (Martin et al., 2019). For instance, application of current polygenic scores as clinical tools would be more accurate, and thus of greater utility, in individuals of European descent, thereby perpetuating existing imbalances in the provision of healthcare.

Dissemination of imaging-genetic predictions and related information also poses a major challenge, especially given the existence of direct-to-consumer genetic testing services. Misinterpretation of prediction accuracies may lead to mistaken reductive biological determinism, the wholesale discounting of social and environmental factors, and unwarranted harmful stigmatization (Palk, Dalvie, de Vries, Martin, & Stein, 2019). Emerging evidence indicates that simply learning about genetic risk may even lead to self-fulfilling behavioral predispositions, suggesting that access to genetic knowledge may alter individual outcomes in unintended ways (Turnwald et al., 2019).
Further, individuals tend to overweight neuroscientific explanations, even when they may be completely erroneous (Weisberg, Keil, Goodstein, Rawson, & Gray, 2008). This could cause harm by de-emphasizing behavioral or environmental factors that are more amenable to intervention (e.g., reducing chronic environmental stress, dietary supplementation, early attachment to parents and caregivers, access to educational resources). Moreover, the presence of a "genetic" signal, either in the form of GWAS or heritability estimates, must be interpreted with caution given the potential for gene–environment correlations (Haworth et al., 2019), incomplete correction for genetic population stratification, and genetic confounding due to intergenerational transfer of risk (e.g., maternal smoking and prenatal development; Leppert et al., 2019).

Conclusion

Investigators are increasingly able to predict individual differences in cognitive abilities using neuroimaging and genetic data. Brain-based predictive models of intelligence have leveraged large-scale and open-access datasets (e.g., UK Biobank, Human Connectome Project) to train multivariate statistical
models based on brain anatomy and function. Genomic predictions most commonly use polygenic scores, derived from GWAS of hundreds of thousands of individuals, to predict individual differences in cognition. In both domains, biological interpretation remains a challenge, and there is a pressing need for integrative predictive methods that combine genomic, neuroimaging, and behavioral observations. Recent advances in interpretable "white-box" forms of deep learning are a promising approach for cross-modal data integration and flexible incorporation of prior biological knowledge. In the short term, genomic and brain-based predictive models promise to yield deep insights into the neurobiology of behavior. In the long term, some hope to use biology to guide early preventative interventions or inform individualized precision medicine. However, this prospect raises serious ethical, societal, and pragmatic concerns that must be addressed by the scientific community and the larger public as these methods continue to develop.

References

Allegrini, A. G., Selzam, S., Rimfeld, K., von Stumm, S., Pingault, J. B., & Plomin, R. (2019). Genomic prediction of cognitive traits in childhood and adolescence. Molecular Psychiatry, 24(6), 819–827. doi: 10.1038/s41380-019-0394-4.
Ashley, E. A. (2016). Towards precision medicine. Nature Reviews Genetics, 17(9), 507–522. doi: 10.1038/nrg.2016.86.
Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20. doi: 10.1016/j.tics.2017.10.001.
Barbey, A. K., Colom, R., & Grafman, J. (2014). Distributed neural system for emotional intelligence revealed by lesion mapping. Social Cognitive and Affective Neuroscience, 9(3), 265–272. doi: 10.1093/scan/nss124.
Barton, N. H., Etheridge, A. M., & Véber, A. (2017). The infinitesimal model: Definition, derivation, and implications. Theoretical Population Biology, 118, 50–73. doi: 10.1016/j.tpb.2017.06.001.
Batty, G. D., Deary, I. J., & Gottfredson, L. S. (2007). Premorbid (early life) IQ and later mortality risk: Systematic review. Annals of Epidemiology, 17(4), 278–288. doi: 10.1016/j.annepidem.2006.07.010.
Boomsma, D., Busjahn, A., & Peltonen, L. (2002). Classical twin studies and beyond. Nature Reviews Genetics, 3(11), 872–882. doi: 10.1038/nrg932.
Boyle, E. A., Li, Y. I., & Pritchard, J. K. (2017). An expanded view of complex traits: From polygenic to omnigenic. Cell, 169(7), 1177–1186. doi: 10.1016/j.cell.2017.05.038.
Branigan, A. R., McCallum, K. J., & Freese, J. (2013). Variation in the heritability of educational attainment: An international meta-analysis. Social Forces, 92(1), 109–140. doi: 10.1093/sf/sot076.
Briley, D. A., & Tucker-Drob, E. M. (2013). Explaining the increasing heritability of cognitive ability across development: A meta-analysis of longitudinal twin and adoption studies. Psychological Science, 24(9), 1704–1713. doi: 10.1177/0956797613478618.
Bulik-Sullivan, B., Finucane, H. K., Anttila, V., Gusev, A., Day, F. R., Loh, P.-R., . . . Neale, B. M. (2015). An atlas of genetic correlations across human diseases and traits. Nature Genetics, 47(11), 1236–1241. doi: 10.1038/ng.3406.
Bzdok, D., Altman, N., & Krzywinski, M. (2018). Statistics versus machine learning. Nature Methods, 15(4), 233–234. doi: 10.1038/nmeth.4642.
Bzdok, D., & Ioannidis, J. P. A. (2019). Exploration, inference, and prediction in neuroscience and biomedicine. Trends in Neurosciences, 42(4), 251–262. doi: 10.1016/j.tins.2019.02.001.
Bzdok, D., & Yeo, B. T. T. (2017). Inference in the age of big data: Future perspectives on neuroscience. NeuroImage, 155, 549–564. doi: 10.1016/j.neuroimage.2017.04.061.
Calvin, C. M., Batty, G. D., Der, G., Brett, C. E., Taylor, A., Pattie, A., . . . Deary, I. J. (2017). Childhood intelligence in relation to major causes of death in 68 year follow-up: Prospective population study. British Medical Journal, 357(j2708), 1–14. doi: 10.1136/bmj.j2708.
Camacho, D. M., Collins, K. M., Powers, R. K., Costello, J. C., & Collins, J. J. (2018). Next-generation machine learning for biological networks. Cell, 173(7), 1581–1592. doi: 10.1016/j.cell.2018.05.015.
Chabris, C. F., Hebert, B. M., Benjamin, D. J., Beauchamp, J., Cesarini, D., van der Loos, M., . . . Laibson, D. (2012). Most reported genetic associations with general intelligence are probably false positives. Psychological Science, 23(11), 1314–1323. doi: 10.1177/0956797611435528.
Chiang, M.-C., McMahon, K. L., de Zubicaray, G. I., Martin, N. G., Hickie, I., Toga, A. W., . . . Thompson, P. M. (2011). Genetics of white matter development: A DTI study of 705 twins and their siblings aged 12 to 29. NeuroImage, 54(3), 2308–2317. doi: 10.1016/j.neuroimage.2010.10.015.
Ching, T., Himmelstein, D. S., Beaulieu-Jones, B. K., Kalinin, A. A., Do, B. T., Way, G. P., . . . Greene, C. S. (2018). Opportunities and obstacles for deep learning in biology and medicine. Journal of The Royal Society Interface, 15(141), 20170387. doi: 10.1098/rsif.2017.0387.
Cole, M. W., Reynolds, J. R., Power, J. D., Repovs, G., Anticevic, A., & Braver, T. S. (2013). Multi-task connectivity reveals flexible hubs for adaptive task control. Nature Neuroscience, 16(9), 1348–1355. doi: 10.1038/nn.3470.
Cole, M. W., Yarkoni, T., Repovs, G., Anticevic, A., & Braver, T. S. (2012). Global connectivity of prefrontal cortex predicts cognitive control and intelligence. Journal of Neuroscience, 32(26), 8988–8999. doi: 10.1523/JNEUROSCI.0536-12.2012.
Costello, J. C., Georgii, E., Gönen, M., Menden, M. P., Wang, N. J., Bansal, M., . . . Stolovitzky, G. (2014). A community effort to assess and improve drug sensitivity prediction algorithms. Nature Biotechnology, 32(12), 1202–1212. doi: 10.1038/nbt.2877.
Cremers, H. R., Wager, T. D., & Yarkoni, T. (2017). The relation between statistical power and inference in fMRI. PLoS One, 12(11), e0184923. doi: 10.1371/journal.pone.0184923.
Davies, G., Lam, M., Harris, S. E., Trampush, J. W., Luciano, M., Hill, W. D., . . . Deary, I. J. (2018). Study of 300,486 individuals identifies 148 independent
genetic loci influencing general cognitive function. Nature Communications, 9(1), 2098. doi: 10.1038/s41467-018-04362-x.
Deary, I. J., Johnson, W., & Houlihan, L. M. (2009). Genetic foundations of human intelligence. Human Genetics, 126(1), 215–232. doi: 10.1007/s00439-009-0655-4.
Deary, I. J., Penke, L., & Johnson, W. (2010). The neuroscience of human intelligence differences. Nature Reviews Neuroscience, 11(3), 201–211. doi: 10.1038/nrn2793.
Dor, Y., & Cedar, H. (2018). Principles of DNA methylation and their implications for biology and medicine. The Lancet, 392(10149), 777–786. doi: 10.1016/S0140-6736(18)31268-6.
Dubois, J., Galdi, P., Paul, L. K., & Adolphs, R. (2018). A distributed brain network predicts general intelligence from resting-state human neuroimaging data. Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1756), 20170284.
Elliott, M. L., Belsky, D. W., Anderson, K., Corcoran, D. L., Ge, T., Knodt, A., . . . Hariri, A. R. (2018). A polygenic score for higher educational attainment is associated with larger brains. Cerebral Cortex, 491(8), 56–59. doi: 10.1093/cercor/bhy219.
Eraslan, G., Avsec, Ž., Gagneur, J., & Theis, F. J. (2019). Deep learning: New computational modelling techniques for genomics. Nature Reviews Genetics, 20(7), 389–403. doi: 10.1038/s41576-019-0122-6.
Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118. doi: 10.1038/nature21056.
Fang, H., Knezevic, B., Burnham, K. L., Osgood, J., Sanniti, A., Lledó Lara, A., . . . Knight, J. C. (2019). A genetics-led approach defines the drug target landscape of 30 immune-related traits. Nature Genetics, 51(7), 1082–1091. doi: 10.1038/s41588-019-0456-1.
Finn, E. S., Shen, X., Scheinost, D., Rosenberg, M. D., Huang, J., Chun, M. M., . . . Constable, R. T. (2015). Functional connectome fingerprinting: Identifying individuals using patterns of brain connectivity. Nature Neuroscience, 18(11), 1664–1671. doi: 10.1038/nn.4135.
Gabrieli, J. D. E., Ghosh, S. S., & Whitfield-Gabrieli, S. (2015). Prediction as a humanitarian and pragmatic contribution from human cognitive neuroscience. Neuron, 85(1), 11–26. doi: 10.1016/j.neuron.2014.10.047.
Gazestani, V. H., & Lewis, N. E. (2019). From genotype to phenotype: Augmenting deep learning with networks and systems biology. Current Opinion in Systems Biology, 15, 68–73. doi: 10.1016/j.coisb.2019.04.001.
Ge, T., Chen, C.-Y., Doyle, A. E., Vettermann, R., Tuominen, L. J., Holt, D. J., . . . Smoller, J. W. (2019). The shared genetic basis of educational attainment and cerebral cortical morphology. Cerebral Cortex, 29(8), 3471–3481. doi: 10.1093/cercor/bhy216.
Genç, E., Fraenz, C., Schlüter, C., Friedrich, P., Hossiep, R., Voelkle, M. C., . . . Jung, R. E. (2018). Diffusion markers of dendritic density and arborization in gray matter predict differences in intelligence. Nature Communications, 9(1), 1905. doi: 10.1038/s41467-018-04268-8.
Goriounova, N. A., & Mansvelder, H. D. (2019). Genes, cells and brain areas of intelligence. Frontiers in Human Neuroscience, 13, 44. doi: 10.3389/fnhum.2019.00044.
Gray, J. R., & Thompson, P. M. (2004). Neurobiology of intelligence: Science and ethics. Nature Reviews Neuroscience, 5(6), 471–482. doi: 10.1038/nrn1405.
Greene, A. S., Gao, S., Scheinost, D., & Constable, R. T. (2018). Task-induced brain state manipulation improves prediction of individual traits. Nature Communications, 9(1), 2807. doi: 10.1038/s41467-018-04920-3.
Hagenaars, S. P., Harris, S. E., Davies, G., Hill, W. D., Liewald, D. C. M., Ritchie, S. J., . . . Deary, I. J. (2016). Shared genetic aetiology between cognitive functions and physical and mental health in UK Biobank (N=112 151) and 24 GWAS consortia. Molecular Psychiatry, 21(11), 1624–1632. doi: 10.1038/mp.2015.225.
Haier, R. J. (2017). The neuroscience of intelligence. New York: Cambridge University Press.
Haworth, S., Mitchell, R., Corbin, L., Wade, K. H., Dudding, T., Budu-Aggrey, A. J., . . . Timpson, N. (2019). Apparent latent structure within the UK Biobank sample has implications for epidemiological analysis. Nature Communications, 10(1), 333. doi: 10.1038/s41467-018-08219-1.
He, T., Kong, R., Holmes, A., Nguyen, M., Sabuncu, M., Eickhoff, S. B., . . . Yeo, B. T. T. (2020). Deep neural networks and kernel regression achieve comparable accuracies for functional connectivity prediction of behavior and demographics. NeuroImage, 206, 116276. doi: 10.1016/j.neuroimage.2019.116276.
Heck, A., Fastenrath, M., Ackermann, S., Auschra, B., Bickel, H., Coynel, D., . . . Papassotiropoulos, A. (2014). Converging genetic and functional brain imaging evidence links neuronal excitability to working memory, psychiatric disease, and brain activity. Neuron, 81(5), 1203–1213. doi: 10.1016/j.neuron.2014.01.010.
Hibar, D. P., Adams, H. H. H., Chauhan, G., Hofer, E., Rentería, M. E., Adams, H. H. H., . . . Ikram, M. A. (2017). Novel genetic loci associated with hippocampal volume. Nature Communications, 8(13624), 1–12. doi: 10.1038/ncomms13624.
Hill, D., Davies, G., Liewald, D. C., McIntosh, A. M., & Deary, I. J. (2016). Age-dependent pleiotropy between general cognitive function and major psychiatric disorders. Biological Psychiatry, 80(4), 266–273. doi: 10.1016/j.biopsych.2015.08.033.
Hill, W. D., Marioni, R. E., Maghzian, O., Ritchie, S. J., Hagenaars, S. P., McIntosh, A. M., . . . Deary, I. J. (2019). A combined analysis of genetically correlated traits identifies 187 loci and a role for neurogenesis and myelination in intelligence. Molecular Psychiatry, 24(2), 169–181. doi: 10.1038/s41380-017-0001-5.
Hulshoff Pol, H. E., Schnack, H. G., Posthuma, D., Mandl, R. C. W., Baare, W. F., van Oel, C., . . . Kahn, R. S. (2006). Genetic contributions to human brain morphology and intelligence. Journal of Neuroscience, 26(40), 10235–10242. doi: 10.1523/JNEUROSCI.1312-06.2006.
Jansen, P. R., Nagel, M., Watanabe, K., Wei, Y., Savage, J. E., de Leeuw, C. A., . . . Posthuma, D. (2019). GWAS of brain volume on 54,407 individuals and
cross-trait analysis with intelligence identifies shared genomic loci and genes. bioRxiv, 1–34. doi: 10.1101/613489.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154. doi: 10.1017/S0140525X07001185.
Karlsgodt, K. H., Kochunov, P., Winkler, A. M., Laird, A. R., Almasy, L., Duggirala, R., . . . Glahn, D. C. (2010). A multimodal assessment of the genetic control over working memory. Journal of Neuroscience, 30(24), 8197–8202. doi: 10.1523/JNEUROSCI.0359-10.2010.
Kendler, K. S., & Baker, J. H. (2007). Genetic influences on measures of the environment: A systematic review. Psychological Medicine, 37(5), 615. doi: 10.1017/S0033291706009524.
Kendler, K. S., Turkheimer, E., Ohlsson, H., Sundquist, J., & Sundquist, K. (2015). Family environment and the malleability of cognitive ability: A Swedish national home-reared and adopted-away cosibling control study. Proceedings of the National Academy of Sciences, 112(15), 4612–4617. doi: 10.1073/pnas.1417106112.
Kim, M. S., Patel, K. P., Teng, A. K., Berens, A. J., & Lachance, J. (2018). Genetic disease risks can be misestimated across global populations. Genome Biology, 19(1), 179. doi: 10.1186/s13059-018-1561-7.
Krapohl, E., Patel, H., Newhouse, S., Curtis, C. J., von Stumm, S., Dale, P. S., . . . Plomin, R. (2018). Multi-polygenic score approach to trait prediction. Molecular Psychiatry, 23(5), 1368–1374. doi: 10.1038/mp.2017.163.
Lam, M., Trampush, J. W., Yu, J., Knowles, E., Davies, G., Liewald, D. C., . . . Lencz, T. (2017). Large-scale cognitive GWAS meta-analysis reveals tissue-specific neural expression and potential nootropic drug targets. Cell Reports, 21(9), 2597–2613. doi: 10.1016/j.celrep.2017.11.028.
Lee, J., Wedow, R., Okbay, A., Kong, E., Maghzian, O., Zacher, M., . . . Cesarini, D. (2018). Gene discovery and polygenic prediction from a genome-wide association study of educational attainment in 1.1 million individuals. Nature Genetics, 50(8), 1112–1121. doi: 10.1038/s41588-018-0147-3.
Lello, L., Avery, S. G., Tellier, L., Vazquez, A. I., de los Campos, G., & Hsu, S. D. H. (2018). Accurate genomic prediction of human height. Genetics, 210(2), 477–497. doi: 10.1534/genetics.118.301267.
Leppert, B., Havdahl, A., Riglin, L., Jones, H. J., Zheng, J., Davey Smith, G., . . . Stergiakouli, E. (2019). Association of maternal neurodevelopmental risk alleles with early-life exposures. JAMA Psychiatry, 76(8), 834. doi: 10.1001/jamapsychiatry.2019.0774.
Libbrecht, M. W., & Noble, W. S. (2015). Machine learning applications in genetics and genomics. Nature Reviews Genetics, 16(6), 321–332. doi: 10.1038/nrg3920.
Liégeois, R., Li, J., Kong, R., Orban, C., Van De Ville, D., Ge, T., . . . Yeo, B. T. T. (2019). Resting brain dynamics at different timescales capture distinct aspects of human behavior. Nature Communications, 10(1), 2317. doi: 10.1038/s41467-019-10317-7.
Luders, E., Narr, K. L., Thompson, P. M., & Toga, A. W. (2009). Neuroanatomical correlates of intelligence. Intelligence, 37(2), 156–163. doi: 10.1016/j.intell.2008.07.002.
Ma, J., Yu, M. K., Fong, S., Ono, K., Sage, E., Demchak, B., . . . Ideker, T. (2018). Using deep learning to model the hierarchical structure and function of a cell. Nature Methods, 15(4), 290–298. doi: 10.1038/nmeth.4627.
Martin, A. R., Gignoux, C. R., Walters, R. K., Wojcik, G. L., Neale, B. M., Gravel, S., . . . Kenny, E. E. (2017). Human demographic history impacts genetic risk prediction across diverse populations. The American Journal of Human Genetics, 100(4), 635–649. doi: 10.1016/j.ajhg.2017.03.004.
Martin, A. R., Kanai, M., Kamatani, Y., Okada, Y., Neale, B. M., & Daly, M. J. (2019). Clinical use of current polygenic risk scores may exacerbate health disparities. Nature Genetics, 51(4), 584–591. doi: 10.1038/s41588-019-0379-x.
Miller, K. L., Alfaro-Almagro, F., Bangerter, N. K., Thomas, D. L., Yacoub, E., Xu, J., . . . Smith, S. M. (2016). Multimodal population brain imaging in the UK Biobank prospective epidemiological study. Nature Neuroscience, 19(11), 1523–1536. doi: 10.1038/nn.4393.
Neale, M. C., & Maes, H. H. M. (1992). Methodology for genetic studies of twins and families. Dordrecht, The Netherlands: Kluwer Academic Publishers B.V.
Neisser, U., Boodoo, G., Bouchard, T. J., Jr., Boykin, A. W., Brody, N., Ceci, S. J., . . . Urbina, S. (1996). Intelligence: Knowns and unknowns. American Psychologist, 51(2), 77–101. doi: 10.1037/0003-066X.51.2.77.
Nisbett, R. E., Aronson, J., Blair, C., Dickens, W., Flynn, J., Halpern, D. F., & Turkheimer, E. (2012). Intelligence: New findings and theoretical developments. American Psychologist, 67(2), 130–159. doi: 10.1037/a0026699.
Okbay, A., Beauchamp, J. P., Fontana, M. A., Lee, J. J., Pers, T. H., Rietveld, C. A., . . . Benjamin, D. J. (2016). Genome-wide association study identifies 74 loci associated with educational attainment. Nature, 533(7604), 539–542. doi: 10.1038/nature17671.
Palk, A. C., Dalvie, S., de Vries, J., Martin, A. R., & Stein, D. J. (2019). Potential use of clinical polygenic risk scores in psychiatry – Ethical implications and communicating high polygenic risk. Philosophy, Ethics, and Humanities in Medicine, 14(1), 4. doi: 10.1186/s13010-019-0073-8.
Pan, S., & Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1–15.
Penke, L., Maniega, S. M., Bastin, M. E., Valdés Hernández, M. C., Murray, C., Royle, N. A., . . . Deary, I. J. (2012). Brain white matter tract integrity as a neural foundation for general intelligence. Molecular Psychiatry, 17(10), 1026–1030. doi: 10.1038/mp.2012.66.
Pennington, B. F., Filipek, P. A., Lefly, D., Chhabildas, N., Kennedy, D. N., Simon, J. H., . . . DeFries, J. C. (2000). A twin MRI study of size variations in the human brain. Journal of Cognitive Neuroscience, 12(1), 223–232. doi: 10.1162/089892900561850.
Plomin, R., & von Stumm, S. (2018). The new genetics of intelligence. Nature Reviews Genetics, 19(3), 148–159. doi: 10.1038/nrg.2017.104.
Poldrack, R. A., & Gorgolewski, K. J. (2014). Making big data open: Data sharing in neuroimaging. Nature Neuroscience, 17(11), 1510–1517. doi: 10.1038/nn.3818.
Poldrack, R. A., & Yarkoni, T. (2016). From brain maps to cognitive ontologies: Informatics and the search for mental structure. Annual Review of Psychology, 67(1), 587–612. doi: 10.1146/annurev-psych-122414-033729.
Popejoy, A. B., & Fullerton, S. M. (2016). Genomics is failing on diversity. Nature, 538(7624), 161–164. doi: 10.1038/538161a.
Posthuma, D., De Geus, E. J. C., Baaré, W. F. C., Pol, H. E. H., Kahn, R. S., & Boomsma, D. I. (2002). The association between brain volume and intelligence is of genetic origin. Nature Neuroscience, 5(2), 83–84. doi: 10.1038/nn0202-83.
Rietveld, C. A., Medland, S. E., Derringer, J., Yang, J., Esko, T., Martin, N. W., . . . Koellinger, P. D. (2013). GWAS of 126,559 individuals identifies genetic variants associated with educational attainment. Science, 340(6139), 1467–1471. doi: 10.1126/science.1235488.
Rosenberg, M. D., Casey, B. J., & Holmes, A. J. (2018). Prediction complements explanation in understanding the developing brain. Nature Communications, 9(1), 589. doi: 10.1038/s41467-018-02887-9.
Rosenberg, M. D., Finn, E. S., Scheinost, D., Papademetris, X., Shen, X., Constable, R. T., & Chun, M. M. (2015). A neuromarker of sustained attention from whole-brain functional connectivity. Nature Neuroscience, 19(1), 165–171. doi: 10.1038/nn.4179.
Roshchupkin, G. V., Gutman, B. A., Vernooij, M. W., Jahanshad, N., Martin, N. G., Hofman, A., . . . Adams, H. H. H. (2016). Heritability of the shape of subcortical brain structures in the general population. Nature Communications, 7(1), 13738. doi: 10.1038/ncomms13738.
Salakhutdinov, R., & Hinton, G. (2009). Deep Boltzmann machines. Proceedings of the 12th International Conference on Artificial Intelligence and Statistics, 5, 448–455. www.utstat.toronto.edu/~rsalakhu/papers/dbm.pdf
Salovey, P., & Mayer, J. D. (1990). Emotional intelligence. Imagination, Cognition and Personality, 9(3), 185–211. doi: 10.2190/DUGG-P24E-52WK-6CDG.
Savage, J. E., Jansen, P. R., Stringer, S., Watanabe, K., Bryois, J., de Leeuw, C. A., . . . Posthuma, D. (2018). Genome-wide association meta-analysis in 269,867 individuals identifies new genetic and functional links to intelligence. Nature Genetics, 50(7), 912–919. doi: 10.1038/s41588-018-0152-6.
Scheinost, D., Noble, S., Horien, C., Greene, A. S., Lake, E. M. R., Salehi, M., . . . Constable, R. T. (2019). Ten simple rules for predictive modeling of individual differences in neuroimaging. NeuroImage, 193, 35–45. doi: 10.1016/j.neuroimage.2019.02.057.
Shine, J. M., Breakspear, M., Bell, P. T., Ehgoetz Martens, K. A., Shine, R., Koyejo, O., . . . Poldrack, R. A. (2019). Human cognition involves the dynamic integration of neural activity and neuromodulatory systems. Nature Neuroscience, 22(2), 289–296. doi: 10.1038/s41593-018-0312-0.
Siegel, J. S., Mitra, A., Laumann, T. O., Seitzman, B. A., Raichle, M., Corbetta, M., & Snyder, A. Z. (2017). Data quality influences observed links between functional connectivity and behavior. Cerebral Cortex, 27(9), 4492–4502. doi: 10.1093/cercor/bhw253.
Smith, S. M., & Nichols, T. E. (2018). Statistical challenges in "big data" human neuroimaging. Neuron, 97(2), 263–268. doi: 10.1016/j.neuron.2017.12.018.
Smith, S. M., Nichols, T. E., Vidaurre, D., Winkler, A. M., Behrens, T. E. J., Glasser, M. F., . . . Miller, K. L. (2015). A positive-negative mode of population
covariation links brain connectivity, demographics and behavior. Nature Neuroscience, 18(11), 1565–1567. doi: 10.1038/nn.4125.
Song, M., Zhou, Y., Li, J., Liu, Y., Tian, L., Yu, C., & Jiang, T. (2008). Brain spontaneous functional connectivity and intelligence. NeuroImage, 41(3), 1168–1176. doi: 10.1016/j.neuroimage.2008.02.036.
Spearman, C. (1904). "General intelligence," objectively determined and measured. The American Journal of Psychology, 15(2), 201–292.
Sternberg, R. J. (2004). Culture and intelligence. American Psychologist, 59(5), 325–338. doi: 10.1037/0003-066X.59.5.325.
Tadayon, E., Pascual-Leone, A., & Santarnecchi, E. (2019). Differential contribution of cortical thickness, surface area, and gyrification to fluid and crystallized intelligence. Cerebral Cortex, 30, 215–225. doi: 10.1093/cercor/bhz082.
Tam, V., Patel, N., Turcotte, M., Bossé, Y., Paré, G., & Meyre, D. (2019). Benefits and limitations of genome-wide association studies. Nature Reviews Genetics, 20(8), 467–484. doi: 10.1038/s41576-019-0127-1.
Thompson, P. M., Cannon, T. D., Narr, K. L., van Erp, T., Poutanen, V.-P., Huttunen, M., . . . Toga, A. W. (2001). Genetic influences on brain structure. Nature Neuroscience, 4(12), 1253–1258. doi: 10.1038/nn758.
Torkamani, A., Wineinger, N. E., & Topol, E. J. (2018). The personal and clinical utility of polygenic risk scores. Nature Reviews Genetics, 19(9), 581–590. doi: 10.1038/s41576-018-0018-x.
Tucker-Drob, E. M., & Harden, K. P. (2012). Early childhood cognitive development and parental cognitive stimulation: Evidence for reciprocal gene-environment transactions. Developmental Science, 15(2), 250–259. doi: 10.1111/j.1467-7687.2011.01121.x.
Turnwald, B. P., Goyer, J. P., Boles, D. Z., Silder, A., Delp, S. L., & Crum, A. J. (2019). Learning one's genetic risk changes physiology independent of actual genetic risk. Nature Human Behaviour, 3(1), 48–56. doi: 10.1038/s41562-018-0483-4.
Visscher, P. M., Hill, W. G., & Wray, N. R. (2008). Heritability in the genomics era – Concepts and misconceptions. Nature Reviews Genetics, 9(4), 255–266. doi: 10.1038/nrg2322.
Vuoksimaa, E., Panizzon, M. S., Chen, C.-H., Fiecas, M., Eyler, L. T., Fennema-Notestine, C., . . . Kremen, W. S. (2015). The genetic association between neocortical volume and general cognitive ability is driven by global surface area rather than thickness. Cerebral Cortex, 25(8), 2127–2137. doi: 10.1093/cercor/bhu018.
Wang, D., Liu, S., Warrell, J., Won, H., Shi, X., Navarro, F. C. P., . . . Gerstein, M. B. (2018). Comprehensive functional genomic resource and integrative model for the human brain. Science, 362(6420), eaat8464. doi: 10.1126/science.aat8464.
Watanabe, K., Umicevic Mirkov, M., de Leeuw, C. A., van den Heuvel, M. P., & Posthuma, D. (2019). Genetic mapping of cell type specificity for complex traits. Nature Communications, 10(1), 3222. doi: 10.1038/s41467-019-11181-1.
Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E., & Gray, J. R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20(3), 470–477. doi: 10.1162/jocn.2008.20040.
Wojcik, G. L., Graff, M., Nishimura, K. K., Tao, R., Haessler, J., Gignoux, C. R., . . . Carlson, C. S. (2019). Genetic analyses of diverse populations improves discovery for complex traits. Nature, 570(7762), 514–518. doi: 10.1038/s41586-019-1310-4.
Woo, C.-W., Chang, L. J., Lindquist, M. A., & Wager, T. D. (2017). Building better biomarkers: Brain models in translational neuroimaging. Nature Neuroscience, 20(3), 365–377. doi: 10.1038/nn.4478.
Woodberry, K. A., Giuliano, A. J., & Seidman, L. J. (2008). Premorbid IQ in schizophrenia: A meta-analytic review. American Journal of Psychiatry, 165(5), 579–587. doi: 10.1176/appi.ajp.2008.07081242.
Wray, N. R., Wijmenga, C., Sullivan, P. F., Yang, J., & Visscher, P. M. (2018). Common disease is more complex than implied by the core gene omnigenic model. Cell, 173(7), 1573–1580. doi: 10.1016/j.cell.2018.05.051.
Yang, J. H., Wright, S. N., Hamblin, M., McCloskey, D., Alcantar, M. A., Schrübbers, L., . . . Collins, J. J. (2019). A white-box machine learning approach for revealing antibiotic mechanisms of action. Cell, 177(6), 1649–1661.e9. doi: 10.1016/j.cell.2019.04.016.
Yarkoni, T., & Westfall, J. (2017). Choosing prediction over explanation in psychology: Lessons from machine learning. Perspectives on Psychological Science, 12(6), 1100–1122. doi: 10.1177/1745691617693393.
Yu, M. K., Ma, J., Fisher, J., Kreisberg, J. F., Raphael, B. J., & Ideker, T. (2018). Visible machine learning for biomedicine. Cell, 173(7), 1562–1565. doi: 10.1016/j.cell.2018.05.056.
Zou, J., Huss, M., Abid, A., Mohammadi, P., Torkamani, A., & Telenti, A. (2019). A primer on deep learning in genomics. Nature Genetics, 51(1), 12–18. doi: 10.1038/s41588-018-0295-5.

17 Predicting Cognitive-Ability Differences from Genetic and Brain-Imaging Data

Emily A. Willoughby and James J. Lee

Introduction

Statistical genetics and brain imaging are together at the technological forefront of research into human intelligence. While these approaches have historically had little practical overlap, they are united both conceptually and in several broad methodological challenges. In concept, both areas attempt to explain complex human behavior by understanding its biological origins, and in doing so have faced the problems that arise from this complexity. The prospect of finding large-effect predictors, for example, has shaped both histories: statistical genetics, with its study of candidate genes once thought to have outsized influence on the development of many traits, and neuroscience, with its search for localized brain properties underlying complex behaviors. Both areas have since had to adjust their scope and methodology to address the problem of making valid and meaningful predictions from a large number of predictors with small effects. A key insight is that samples far larger than those originally employed may be necessary for these predictions to be accurate and useful.

This chapter is about modern efforts to predict human intelligence and related outcomes from their biological antecedents. We will briefly review the discoveries and challenges shaping each discipline today, along with some of the further insights enabled by recent increases in predictive power. If readers feel skepticism about the value of predictions based on a large number of features, each accounting for a small or even minuscule fraction of the variance, an analogy to natural selection may begin to overcome it. Genome-wide association studies (GWAS) have been so productive largely because of the enormous samples that have been deployed to reliably find genetic variants of significant effect. In doing so, scientists are using more or less the same information available to nature when organisms undergo natural selection: . . .
whether a given allele is correlated with the phenotype (which, in this case, is fitness) to the resolution afforded by size of the population (Fisher, 1941). Nevertheless, looking around at the exquisite adaptedness of living
things, we can be confident that Nature correctly picks out alleles for their causal effects on fitness often enough (Lee, 2012; Lee & Chow, 2013). If we live in a world that is simple enough for natural selection to be robust, then perhaps it is not surprising that we can make progress by duplicating Nature's strategy of using large sample sizes to detect small DNA–trait correlations. (Lee & McGue, 2016, p. 30)

Predictions from Genetic Data

The study of quantitative genetics has its roots in the ancient arts of agriculture and domestication. Today, breeders are highly effective in augmenting artificial selection for valuable traits in animals and crops with modern genetic techniques (Spindel & McCouch, 2016; Wray, Kemper, Hayes, Goddard, & Visscher, 2019). The success of both traditional breeding and its formal offshoot, quantitative genetics, has come in spite of polygenicity: the phenotypes of interest are affected by many genetic variants, each of extremely small effect. Indeed, the field of livestock genetics has given rise to the concept of the polygenic score (PGS), which has crossed over to become a useful tool in human genetics.

The Polygenic Score

Something recognizably akin to the modern PGS was proposed nearly 30 years ago under the term "marker-assisted selection" (MAS; Lande & Thompson, 1990). According to this proposal, breeders can estimate the regression coefficients of polymorphic markers correlated with the causal variants affecting a trait of interest, calculate scores for individuals with genotyping data, and thereby increase the efficiency of artificial selection. A substantial development of this fundamental insight was put forth by Meuwissen, Hayes, and Goddard (2001), who among other things proposed the use of Bayesian priors to overcome the problem of the number of markers exceeding the sample size. This approach was later proposed for use in human genetics to identify individuals at high risk for disease (Wray, Goddard, & Visscher, 2007).

To our knowledge, the first empirical use of a PGS came in an early GWAS of schizophrenia (Purcell et al., 2009). This was a landmark study in a number of ways. First, Purcell et al. were the first to use the term "polygenic score" for a phenotypic prediction based on a linear combination of single-nucleotide polymorphism (SNP) genotypes, the weights in this combination being the regression coefficients estimated in a GWAS of the trait. Second, the higher mean PGS in schizophrenic cases, amounting to 3% of the variance (Nagelkerke's pseudo R2 from logistic regression), provided proof of the principle that GWAS can lead to predictive power even if relatively few

Genetic and Neural Predictors of Intelligence

SNP associations have been identified at a stringent threshold of statistical significance. These authors also demonstrated that the variance explained in their validation sample was actually reduced if the terms in the PGS were restricted to only significant SNPs. In other words, a better PGS resulted when the effects of all SNPs were included – no p-value was too high. To many observers this was conclusive evidence that schizophrenia is a highly polygenic trait, with SNPs that affect it scattered throughout the genome.

Years of education was the first cognitive phenotype to be studied successfully in a GWAS. Today, we are able to predict upwards of 10% of the variance in various cognitive traits using PGS derived from a GWAS of 1.1 million individuals (Lee et al., 2018). This predictive power is the result of both the sample size and methodological developments that have improved the construction of PGS.

Methods of Construction

In a GWAS, a set of genetic markers is genotyped or imputed in a training sample, and estimates are obtained of each marker's association with the trait of interest. These weights are then used to construct a PGS for each individual in a replication sample that is independent from the initial training sample (Dudbridge, 2013). The estimated PGS, Ŝ, of an individual is equal to the weighted sum of the individual's marker genotypes, X_j, at m SNPs, such that

\hat{S} = \sum_{j=1}^{m} X_j \hat{\beta}_j .   (17.1)

Different methodologies for construction arise chiefly from two sources: how to generate the weights of the SNPs, β̂_j, and how to determine which m SNPs should be included.
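In code, Eq. (17.1) is simply a genotype matrix multiplied by a vector of GWAS weights. The sketch below uses simulated genotypes and weights; all values are illustrative stand-ins, not real summary statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 500                         # individuals, SNPs

# Genotypes coded as minor-allele counts (0, 1, or 2).
X = rng.integers(0, 3, size=(n, m)).astype(float)

# Per-SNP weights; in practice these are the regression coefficients
# estimated in a GWAS conducted on an independent training sample.
beta_hat = rng.normal(0.0, 0.01, size=m)

# Eq. (17.1): each individual's PGS is the weighted sum of genotypes.
pgs = X @ beta_hat                      # one score per individual
```

The different construction methods discussed below change only how `beta_hat` is produced and which of its entries are kept.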

Naïve Methods

In the most straightforward "naïve" method, weights are set equal to the coefficient estimates from univariate regressions of the phenotype on each variant j (Eq. 17.1). The m SNPs may be selected via an algorithm that uses pruning to approximate independence of all markers used in the score. The motivation behind pruning is that genetic variants are often correlated with variants that are nearby in the genome, a phenomenon called linkage disequilibrium (LD). Failing to account for this non-random association – which arises chiefly from the shared evolutionary history of nearby variants that seldom experience recombination – might reduce the accuracy of the PGS if causal variants are in more or less LD with their neighbors than null variants. We can further restrict the included SNPs by omitting those which fail to meet a certain significance threshold for association with the phenotype. A more
stringent threshold might be thought to boost the signal-to-noise ratio by only including variants for which evidence of true association with the phenotype is particularly high, but it is not always clear how to optimize this threshold. This method and its variants are generally referred to as “pruning + thresholding.”
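A minimal sketch of the pruning + thresholding idea follows; the greedy significance ordering and the cutoff values are illustrative choices, not a prescribed algorithm:

```python
import numpy as np

def prune_and_threshold(p_values, ld_r2, p_max=1e-5, r2_max=0.1):
    """Visit SNPs from most to least significant; keep a SNP only if it
    passes the p-value threshold and is not in high LD (r^2) with any
    SNP already kept."""
    order = np.argsort(p_values)
    kept = []
    for j in order:
        if p_values[j] > p_max:
            break                    # remaining SNPs all fail the threshold
        if all(ld_r2[j, k] < r2_max for k in kept):
            kept.append(int(j))
    return kept

# Three SNPs: the first two are significant but in high mutual LD,
# the third is independent but not significant.
p = np.array([1e-8, 1e-7, 0.5])
r2 = np.array([[1.0, 0.9, 0.0],
               [0.9, 1.0, 0.0],
               [0.0, 0.0, 1.0]])
kept = prune_and_threshold(p, r2)    # only SNP 0 survives
```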

LDpred

LDpred estimation is a Bayesian approach that attempts to explicitly model and account for genetic architecture and LD. Other methods of PGS construction that prune markers based on LD may go too far and discard useful information. LDpred estimation explicitly assumes a prior for the distribution of effect sizes and sets the weight for each variant equal to the mean of its posterior distribution after accounting for LD. The theory underlying LDpred is derived assuming the covariance matrix of the genotype data in the training sample is known. In practice, this matrix is not known, so we replace the training-sample covariance matrix with an approximation that is estimated using observed LD patterns in a reference sample of conventionally unrelated individuals with the same ancestral background as the training and validation samples. Theory and simulations show that LDpred outperforms pruning + thresholding, especially at large sample sizes. For example, the prediction R2 increased from 20.1% to 25.3% in a large schizophrenia dataset. Unlike its simpler predecessors, an LDpred estimate converges to the full heritable variance explained by the SNPs as the sample size increases, a desirable property of a PGS (Vilhjálmsson et al., 2015).
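The infinitesimal special case of LDpred has a closed form: the posterior-mean weights are obtained by jointly shrinking the marginal GWAS estimates through the LD matrix (Vilhjálmsson et al., 2015). The sketch below implements only this special case with illustrative inputs; the full LDpred model assumes a point-normal prior and requires Gibbs sampling.

```python
import numpy as np

def ldpred_inf(beta_marginal, ld, n, h2):
    """Posterior-mean SNP weights under an infinitesimal prior:
    solve (D + (m / (n * h2)) * I) w = beta_marginal, where D is the
    LD correlation matrix estimated from a reference panel."""
    m = len(beta_marginal)
    return np.linalg.solve(ld + (m / (n * h2)) * np.eye(m), beta_marginal)

# With no LD (D = I) every marginal estimate is simply shrunk toward
# zero by the same factor, 1 / (1 + m / (n * h2)).
beta = np.array([0.10, -0.05])
w = ldpred_inf(beta, np.eye(2), n=1000, h2=0.5)
```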

Penalized Regression

The prior distribution of effect sizes in LDpred and the related methods of Meuwissen et al. (2001) can be regarded as supplying the additional information needed to estimate m partial regression coefficients in a training sample consisting of fewer than m individuals. An alternative approach toward this end is to "penalize" the regression model for having large coefficients and thereby shrink them conservatively (although such methods often also have a Bayesian interpretation). Ridge regression shrinks the prediction with a term penalizing the sum of the squared coefficients (de Vlaming & Groenen, 2015); LASSO does the same with the sum of the absolute coefficients (Vattikuti, Lee, Chang, Hsu, & Chow, 2014). It is possible to apply these methods to univariate summary statistics, employing a reference panel to estimate the covariances between SNPs as in LDpred (Mak, Porsch, Choi, Zhou, & Sham, 2017). Some preliminary results obtained with LASSO in particular – including results with respect to IQ and educational achievement – suggest that this form of penalization can lead to a prediction R2 at least as large as that of LDpred (Allegrini et al., 2019; Lello et al., 2018).

If the fraction of common SNPs that must be given a nonzero weight is relatively small, say less than 1%, then on theoretical grounds we might expect LASSO to converge most efficiently on the full heritable variance. This is an issue worth following as research unfolds.
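The LASSO's behavior on a sparse architecture can be sketched with a toy proximal-gradient (ISTA) solver; this stand-alone implementation and the simulated genotypes are illustrative, not the pipeline of the studies cited above:

```python
import numpy as np

def lasso_ista(X, y, alpha, n_iter=500):
    """Minimize (1/2n)||y - Xb||^2 + alpha * ||b||_1 by proximal
    gradient descent (ISTA); the soft-threshold step zeroes out small
    coefficients, which is what produces a sparse score."""
    n, m = X.shape
    step = n / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant
    b = np.zeros(m)
    for _ in range(n_iter):
        z = b - step * (X.T @ (X @ b - y)) / n  # gradient step
        b = np.sign(z) * np.maximum(np.abs(z) - step * alpha, 0.0)
    return b

rng = np.random.default_rng(1)
n, m = 200, 50
X = rng.integers(0, 3, size=(n, m)).astype(float)
X -= X.mean(axis=0)                             # center genotypes
true_b = np.zeros(m)
true_b[:3] = 0.5                                # only 3 causal SNPs
y = X @ true_b + rng.normal(0, 1, size=n)
y -= y.mean()

b_hat = lasso_ista(X, y, alpha=0.05)
```

The L1 penalty sets most of the null coefficients exactly to zero while retaining the causal ones, matching the sparse-architecture assumption described above.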

Empirical Applications

One of the outcomes that has been studied most successfully with PGS is educational attainment (measured in total years of education, EduYears). The first GWAS of educational attainment by the Social Science Genetic Association Consortium (SSGAC) found three SNPs reaching genome-wide statistical significance in a sample of approximately 100,000 individuals and produced a PGS that accounted for 2% of the variance in educational attainment in independent samples (Rietveld et al., 2013). Three years later, a GWAS with nearly 300,000 participants (EA2) found 74 loci (Okbay et al., 2016), and in the most recent SSGAC study of educational attainment (EA3), a GWAS of 1.1 million individuals identified 1,271 independent lead SNPs (Lee et al., 2018). A PGS constructed from this large sample is now able to predict 11–13% of the variance in educational attainment and 7–10% of the variance in IQ, providing further evidence that educational attainment is a viable proxy for cognitive ability in genetic research.

But a PGS can do more than simply explain a portion of the variance in educational attainment, IQ, or other cognitive phenotypes. Even though these scores have not yet amassed the sample sizes necessary for explaining all heritable variance, they are reliable enough for many substantive research purposes, including the study of the effects that parents can have on the outcomes of their children, and of how far a person rises through the ranks of society.

Genetic Nurture

PGS have been used to predict outcomes in offspring in a manner consistent with a causal role of the environment fostered by the parents. The first study to demonstrate this sort of "genetic nurture" – or "passive gene-environment correlation," in the terminology of Plomin, DeFries, and Loehlin (1977) – found that only 70% of the correlation between the EduYears PGS and educational attainment is due to the effect of the offspring's own scores on their own attainments. The remainder is due to the parent PGS acting as a confounder, affecting both the PGS and the educational attainment of the offspring. This was inferred from a significant effect of the non-transmitted portion of the parent PGS on offspring attainment (Kong et al., 2018). A number of subsequent studies have replicated this finding (Bates et al., 2019; Belsky et al., 2018; Liu, 2018).

What is the heritable parental characteristic affecting offspring EduYears? Bates et al. (2019) found that parent socioeconomic status (SES) completely mediates the effect of the non-transmitted parent PGS. In a recent study at the
University of Minnesota, we have examined this issue as well and also found that parent SES appears to be a complete mediator (Willoughby, McGue, Iacono, Rustichini, & Lee, 2019). Parent IQ and EduYears by themselves (rather than as part of the SES composite) also either substantially or completely attenuate the effect of the parent PGS.

Social Mobility

PGS can also help to guide our understanding of the role of genetics in societal success, especially since the prediction R2 of the EA3 PGS exceeds that of parent income (Lee et al., 2018). Because children inherit both genes and socially transmitted advantages from their parents, it is plausible that an observed association between genes and social outcomes could be spurious. In a sample of over 20,000 participants in five longitudinal studies across the United States, Britain, and New Zealand, Belsky et al. (2018) found that individuals with higher EduYears PGS tended to accumulate more wealth, more education, and greater success in their careers. Furthermore, the authors employed a within-family design to ask the key questions: Do offspring with higher PGS than their parents tend to climb the social ladder beyond their parents' achievements? And, in families with multiple offspring, does the sibling with the higher PGS tend to achieve more than his or her siblings? The answer to both questions, it turns out, is yes: whether relative to the parents or to the other siblings, the individual with the higher PGS tends to be more upwardly mobile in career status, education, and wealth. These findings contradict the notion that GWAS results are nothing more than correlates of privilege, and affirm that at least some portion of this genetic endowment is likely to be causal.

Assortative Mating

Assortative mating refers to any kind of mating preference leading to a correlation between the trait values of mothers and fathers. The psychological basis of a given preference might be somewhat obscure. Do short people tend to have short spouses because they prefer a mate of similar stature? Or does everyone prefer taller mates, and short people simply have to settle for what remains after the taller have chosen? Regardless of the answer, Fisher (1918) noted that any such preference can have profound consequences for the genetic composition of the population, increasing the magnitudes of both the correlations between relatives and the additive genetic variance.

In a very clever study, Yengo et al. (2018) detected the signature of such assortative mating with respect to EduYears by finding a significant correlation between the PGS calculated over only the odd chromosomes and that calculated over only the even chromosomes. In the absence of assortative
mating, this correlation is expected to be zero, essentially because of Mendelian independent assortment. But assortative mating leads the contributions of the different chromosomes to the PGS of a given gamete to be positively correlated: knowing that the parent who contributed one set of chromosomes was phenotypically above average means that the parent contributing the complementary set was likely also above average.
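The logic of the odd/even-chromosome test can be sketched on simulated data. Under random mating, simulated here as independent genotypes, the two half-scores are uncorrelated; assortative mating would push their correlation above zero:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 2000, 220
chrom = rng.integers(1, 23, size=m)             # chromosome label per SNP
X = rng.binomial(2, 0.5, size=(n, m)).astype(float)
beta = rng.normal(0.0, 0.05, size=m)

odd = (chrom % 2 == 1)
pgs_odd = X[:, odd] @ beta[odd]                 # half-score, odd chromosomes
pgs_even = X[:, ~odd] @ beta[~odd]              # half-score, even chromosomes

# Independent assortment of chromosomes under random mating implies a
# near-zero correlation here; a reliably positive value is the
# signature that Yengo et al. (2018) detected.
r = np.corrcoef(pgs_odd, pgs_even)[0, 1]
```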

Evolution

Although intelligence has presumably increased fitness throughout millions of years of our recent evolution, it has been less clear how natural selection is operating on intelligence today. Recent applications of PGS have helped shed light on this question by using them to investigate signs of recent selection. One approach is to test for an association between the PGS for a cognitive phenotype and measures of Darwinian fitness, such as total number of children or lifetime reproductive success (LRS) relative to others of the same age and gender. This approach offers a powerful advantage over methods that lack GWAS data, in that a direct measure of the focal trait's additive genetic value (i.e., the PGS) enables the use of Robertson's (1966) Secondary Theorem of Natural Selection to calculate the amount of evolutionary change.

Using this method, Beauchamp (2016) found a negative association between the EduYears PGS and LRS in a sample of approximately 20,000 Americans, implying that natural selection is slowly favoring lower educational attainment, at a rate amounting to –1.5 months of education per generation. Kong et al. (2018) presented corroborating evidence from a study of approximately 100,000 Icelanders, which found the EduYears PGS to be associated with delayed reproduction and fewer children overall. From this, they extrapolated that the mean EduYears PGS is declining at approximately 0.01 standard units per decade. In other words, evolution does seem to be currently operating on human intelligence, but in the opposite direction from that which prevailed in the deep evolutionary past.
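Robertson's theorem states that the per-generation change in a trait's mean additive genetic value equals the covariance between that value and relative fitness. A simulation sketch follows; the fitness model and the -0.05 effect size are made-up numbers for illustration, not estimates from the studies above:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
pgs = rng.normal(0.0, 1.0, size=n)              # standardized PGS

# Hypothetical fertility declining slightly with the PGS.
n_children = rng.poisson(2.0 * np.exp(-0.05 * pgs))
rel_fitness = n_children / n_children.mean()

# Secondary Theorem of Natural Selection: the expected change in the
# mean additive genetic value per generation is its covariance with
# relative fitness -- negative here, so the mean PGS declines.
delta_per_generation = np.cov(pgs, rel_fitness)[0, 1]
```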

Predictions from Brain-Imaging Data

What types of predictions made possible by brain imaging have led to the largest amounts of explained variance in individual differences in human intelligence? In describing the current state of predictive utility in brain imaging and intelligence research, we have deliberately chosen to focus only on the predictions, rather than the mechanisms that may underlie them. In doing so, we will also leave out many studies of historical interest. As neuroimaging studies continue to adopt larger samples, out-of-sample replication, more explicit best practices, and methodological uniformity – removing researcher degrees of freedom – the predictions made by these techniques will continue to improve.

Imaging Techniques

Since their application to the study of human intelligence beginning in the late 1980s, neuroimaging techniques have provided several noninvasive methods of studying the biological basis of cognition in humans. These techniques have historically been limited to relatively small samples, largely because they are time-consuming and expensive to conduct. There have been recent encouraging signs, however, that these limitations are starting to be overcome (e.g., Elliott et al., 2018).

Positron Emission Tomography

Positron emission tomography (PET) scanning works by tracking the location of a radioactive tracer compound in a person's body. This tracer may be an isotope like fluorine-18, chemically bound to a biologically active molecule that is designed to be taken up by a specific organ or region of the body. The isotope continuously emits positrons, each of which annihilates with the first electron it encounters to produce a pair of gamma photons. By detecting these gamma rays, a PET scanner tracks the production of positrons at relatively high resolution. For studies of the brain, the fluorine-18 is attached to an analog of glucose, producing a molecule called fluorodeoxyglucose (FDG). Since neurons need glucose to fire, the amount of FDG deposited in a brain region varies depending on how rapidly glucose is being used in that area. This enables the scanner to see when and where parts of the brain are working especially hard in response to a cognitive task or stimulus.

Magnetic Resonance Imaging

Magnetic resonance imaging (MRI) uses a powerful magnetic field to align the protons in water molecules. When pulsed with radio waves, these protons are rapidly knocked out of and then relax back into alignment, emitting radio frequencies of their own as they change energy levels. When the process is applied continuously, the pattern of emitted radio waves can be detected and converted into a three-dimensional representation of the density of water molecules in the tissue. This technique allows for the production of images of living tissue at exquisite spatial resolution. Structural MRI is useful for accurately capturing brain structure and volume, but functional magnetic resonance imaging (fMRI) is needed for tracking function and temporal changes, which it does by detecting rapid changes in blood oxygen levels. The utility of fMRI for the study of intelligence, like that of PET scanning, relies on the principle that a working brain uses resources; in the case of fMRI, the resource is oxygen rather than glucose. The location, quantity, and rate of use convey information about when and where the brain is working hard.

Empirical Findings

Brain Size and Structure

While older studies of IQ and head circumference or post-mortem brain weight have drawn criticism, a recent meta-analysis of the relationship between IQ and MRI-measured brain volume has produced an estimate of r ≈ .24 (Pietschnig, Penke, Wicherts, Zeiler, & Voracek, 2015), although the low reliability with which g is measured in many of the contributing studies may mean that this should be treated as a lower bound (Gignac & Bates, 2017). Nevertheless, this association remains robust across age, IQ domain, and sex, though its size may continue to be debated. Similar findings were recently reported by Cox, Ritchie, Fawns-Ritchie, Tucker-Drob, and Deary (2019), who investigated the association in a sample of several thousand individuals from the UK Biobank. Corrected for sex and age, the association between total brain volume and g was found to be r = .275 (95% CI = .252–.299).

By analogy to polygenic scoring in GWAS, one might expect that brain-based prediction of intelligence should proceed by analysis of the brain into distinct features and then data-driven learning of each feature's predictive weight. The simplest possible such analysis may be the factorization of brain volume into surface area and thickness, and indeed several recent studies have attempted to determine the contribution of one or both factors to intelligence (Schmitt et al., 2019; Schnack et al., 2015; Vuoksimaa et al., 2015; Walhovd et al., 2016). These studies differ in some of their findings, perhaps as a result of developmental complexity, but do agree that surface area is consistently correlated with intelligence. A greater predictive power of surface area is not inconsistent with Lee, McGue, Iacono, Michael, and Chabris (2019), who found, roughly speaking, that any SNP affecting intracranial volume is very likely to go on to affect IQ, EduYears, and so on. This is because surface area accounts for far more of the total variance in volume than thickness does.

The P-FIT Model

Brain efficiency has endured as a property associated with intelligence. PET scanning, for example, has shown that protracted practice at the video game Tetris led to decreased, rather than increased, activation (i.e., glucose use). Furthermore, the rate of decrease in activity as a function of practice appeared to be related to IQ scores, suggesting that smarter people are able to consolidate learning at a quicker rate, freeing up other resources – doing "more with less." Perhaps more remarkably, it appeared that smarter brains were particularly efficient in certain regions and pathways (Haier, 2011).

In the early 2000s, in an era of small samples and inconsistent methodology, researchers began noticing that, despite these hurdles, the body of research on brain imaging and intelligence had produced substantial overlap in the
associations between intelligence and certain brain regions. These areas appeared to be distributed throughout the brain – which is perhaps to be expected from the results of Lee et al. (2019) – but were most prominently represented in the parietal and frontal areas, and in the connections among them. This led to the development of the parieto-frontal integration theory (P-FIT) model of intelligence (Jung & Haier, 2007).

Since its inception, the P-FIT model has enjoyed much success and validation in modern neuroimaging research. For example, although cortical thickness as a whole may not be strongly associated with IQ, cortical thickness in certain regions along the P-FIT tracts has been shown to contain some signal (Karama et al., 2009). The P-FIT model has also helped to clarify the role of brain efficiency in intelligence, as studies have suggested that IQ is associated negatively with the total length of the network paths connecting functional areas (Li et al., 2009; van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009) and with the consumption of both glucose and oxygen in critical regions (Neubauer & Fink, 2009).

The Connectome

Largely informed by and consistent with the P-FIT model, as well as the composite and polygenic nature of psychometric g and its correlates, the search for neurobiological substrates of intelligence has turned to more complex distributed models. The Human Connectome Project (HCP), which has produced maps of complete structural and functional neural connections – "connectomes" – both within and across individuals, has opened the doors to more precise studies of connectivity in large samples. Dubois, Galdi, Paul, and Adolphs (2018), for example, have recently shown in an HCP sample of approximately 800 individuals that fMRI measures of activity in resting-state connectivity matrices are able to predict 20% of the variance in general intelligence. Remarkably, this prediction holds after controlling for brain volume, which has a substantial association with IQ on its own, as well as for sex, age, and in-scanner motion. Over the connectome, connectivity among four resting-state networks emerged as carrying the most information about g: frontoparietal, cingulo-opercular, default mode, and visual – in good agreement with the P-FIT. It is of interest that this study employed feature selection and weighting based on a combination of p-value thresholding and the elastic net (which is, in turn, a combination of LASSO and ridge regression) – techniques that we mentioned in our discussion of genetic prediction. This suggests a convergence of methodology in these respective fields as they continue to develop.

Fornito, Arnatkevičiūtė, and Fulcher (2019) have reviewed attempts to bridge the gap between connectome and transcriptome – the latter being a brain-wide gene expression atlas that documents the transcriptional activity of thousands of genes across many anatomical locations of the brain. It turns out that the spatial patterning of gene expression is closely linked to neuronal
connectivity. The transcriptional landscape of the brain is dominated by broad spatial gradients, which represent variations in inter-regional connectivity, regional cellular architecture, and microcircuitry. While this family of methods seeking relationships between gene expression and brain connectivity has not, to our knowledge, been applied to human intelligence, many studies of intelligence have used data sources of one type or the other. The possibility of using both types for the purpose of prediction therefore seems promising.

Informed by such studies of gene expression in the brain, "network neuroscience theory" has been put forth as a potential source of insight into the neurobiology of g (Barbey, 2018). According to this approach, g originates from individual differences in the system-wide topology and dynamics of the human brain, and individuals may vary in the ability to dynamically reorganize "small-world" brain network topology in the service of system-wide, flexible adaptation. The capacity to flexibly transition between appropriate network states therefore provides the foundation for individual differences in g. Since networks are hierarchically organized at multiple scales in the brain, this framework may point toward new approaches to predicting individual differences in g, making use of observations ranging from the level of the synapse to neural circuitry and systems.
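The flavor of these connectome-based predictions can be sketched with simulated data: many weak, distributed features, a penalized fit on a training set, and out-of-sample R2 on held-out individuals. The sketch uses plain ridge regression rather than the thresholding-plus-elastic-net pipeline of Dubois et al. (2018), and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 800, 300                 # individuals, connectivity-edge features
X = rng.normal(size=(n, p))
w_true = rng.normal(0.0, 0.1, size=p)           # many small effects
g = X @ w_true + rng.normal(0.0, 1.0, size=n)   # synthetic "g" scores

train, test = np.arange(400), np.arange(400, 800)

# Ridge fit on the training half: solve (X'X + lam*I) w = X'y.
lam = 10.0
w_hat = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(p),
                        X[train].T @ g[train])

# Out-of-sample prediction R^2 on the held-out half.
resid = g[test] - X[test] @ w_hat
r2 = 1.0 - resid @ resid / np.sum((g[test] - g[test].mean()) ** 2)
```

Evaluating R2 only on held-out individuals is the point of the exercise: with hundreds of features, in-sample fit badly overstates predictive power.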

Looking Forward

The future of predicting individual differences in cognitive ability from their biological substrate – at both the genetic and the neuronal level – is bright, and new developments and insights are occurring weekly. For example, GWAS have been conducted not only of IQ and EduYears but also of self-rated math ability and highest math class ever taken (Lee et al., 2018), and these may point toward the prediction of more specific outcomes (Park, Lubinski, & Benbow, 2007).

There is another dimension to genetic and neurobiological approaches to the prediction of cognitive ability, and it is on that final note that we stress both hope and caution. The process of natural selection that gave rise to a brain capable of studying itself in these ways has also now led it to the cusp of profoundly altering its own future. The predictive power of PGS may eventually cross a threshold enabling an acceleration of human evolution that has not been possible until now. As embryo selection and genetic engineering become more feasible and affordable, they have the potential to significantly influence national competitiveness, human capital, and global economic and scientific progress (Shulman & Bostrom, 2014). They also have the potential to increase class divides and perhaps to change what it means to be human altogether. It is difficult to anticipate whether our long-term interest will be served by the uncontrolled use of genetic engineering. How to use this technology wisely is a matter that will affect the world our posterity will inhabit for generations to come.

References

Allegrini, A. G., Selzam, S., Rimfeld, K., von Stumm, S., Pingault, J.-B., & Plomin, R. (2019). Genomic prediction of cognitive traits in childhood and adolescence. Molecular Psychiatry, 24(6), 819–827. doi: 10.1038/s41380-019-0394-4.

Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 1–20. doi: 10.1016/j.tics.2017.10.001.

Bates, T. C., Maher, B. S., Colodro-Conde, L., Medland, S. E., McAloney, K., Wright, M. J., . . . Gillespie, N. A. (2019). Social competence in parents increases children's educational attainment: Replicable genetically-mediated effects of parenting revealed by non-transmitted DNA. Twin Research and Human Genetics, 22(1), 1–3. doi: 10.1017/thg.2018.75.

Beauchamp, J. P. (2016). Genetic evidence for natural selection in humans in the contemporary United States. Proceedings of the National Academy of Sciences, 113(28), 7774–7779. doi: 10.1073/pnas.1600398113.

Belsky, D. W., Domingue, B. W., Wedow, R., Arseneault, L., Boardman, J. D., Caspi, A., . . . Harris, K. M. (2018). Genetic analysis of social-class mobility in five longitudinal studies. Proceedings of the National Academy of Sciences, 115(31), E7275–E7284. doi: 10.1073/pnas.1801238115.

Cox, S. R., Ritchie, S. J., Fawns-Ritchie, C., Tucker-Drob, E. M., & Deary, I. J. (2019). Structural brain imaging correlates of general intelligence in UK Biobank. Intelligence, 76, 1–13. doi: 10.1016/j.intell.2019.101376.

de Vlaming, R., & Groenen, P. J. F. (2015). The current and future use of ridge regression for prediction in quantitative genetics. BioMed Research International, 2015, 143712. doi: 10.1155/2015/143712.

Dubois, J., Galdi, P., Paul, L. K., & Adolphs, R. (2018). A distributed brain network predicts general intelligence from resting-state human neuroimaging data. Philosophical Transactions of the Royal Society B, 373(1756), 20170284. doi: 10.1098/rstb.2017.0284.

Dudbridge, F. (2013). Power and predictive accuracy of polygenic risk scores. PLoS Genetics, 9(3), e1003348. doi: 10.1371/journal.pgen.1003348.

Elliott, L. T., Sharp, K., Alfaro-Almagro, F., Shi, S., Miller, K. L., Douaud, G., . . . Smith, S. M. (2018). Genome-wide association studies of brain imaging phenotypes in UK Biobank. Nature, 562(7726), 210–216. doi: 10.1038/s41586-018-0571-7.

Fisher, R. A. (1918). The correlation between relatives on the supposition of Mendelian inheritance. Transactions of the Royal Society of Edinburgh, 52(2), 399–433. doi: 10.1017/S0080456800012163.

Fisher, R. A. (1941). Average excess and average effect of a gene substitution. Annals of Eugenics, 11(1), 53–63. doi: 10.1111/j.1469-1809.1941.tb02272.x.

Fornito, A., Arnatkevičiūtė, A., & Fulcher, B. D. (2019). Bridging the gap between connectome and transcriptome. Trends in Cognitive Sciences, 23(1), 34–50. doi: 10.1016/j.tics.2018.10.005.

Gignac, G. E., & Bates, T. C. (2017). Brain volume and intelligence: The moderating role of intelligence measurement quality. Intelligence, 64(May), 18–29. doi: 10.1016/j.intell.2017.06.004.
Haier, R. J. (2011). Biological basis of intelligence. In R. J. Sternberg & S. B. Kaufman (eds.), The Cambridge handbook of intelligence (pp. 351–368). Cambridge University Press. doi: 10.1017/CBO9780511977244.019.

Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154. doi: 10.1017/S0140525X07001185.

Karama, S., Ad-Dab'bagh, Y., Haier, R. J., Deary, I. J., Lyttelton, O. C., Lepage, C., . . . Brain Development Cooperative Group. (2009). Positive association between cognitive ability and cortical thickness in a representative US sample of healthy 6 to 18 year-olds. Intelligence, 37(2), 145–155. doi: 10.1016/j.intell.2008.09.006.

Kong, A., Thorleifsson, G., Frigge, M. L., Vilhjálmsson, B. J., Young, A. I., Thorgeirsson, T. E., . . . Stefansson, K. (2018). The nature of nurture: Effects of parental genotypes. Science, 359(6374), 424–428. doi: 10.1126/science.aan6877.

Lande, R., & Thompson, R. (1990). Efficiency of marker-assisted selection in the improvement of quantitative traits. Genetics, 124(3), 743–756.

Lee, J. J. (2012). Correlation and causation in the study of personality (with discussion). European Journal of Personality, 26(4), 372–412. doi: 10.1002/per.1863.

Lee, J. J., & Chow, C. C. (2013). The causal meaning of Fisher's average effect. Genetics Research, 95(2–3), 89–109. doi: 10.1017/S0016672313000074.

Lee, J. J., & McGue, M. (2016). Why behavioral genetics matters: A comment on Plomin (2016). Perspectives on Psychological Science, 11(1), 29–30. doi: 10.1177/1745691615611932.

Lee, J. J., McGue, M., Iacono, W. G., Michael, A. M., & Chabris, C. F. (2019). The causal influence of brain size on human intelligence: Evidence from within-family phenotypic associations and GWAS modeling. Intelligence, 75, 48–58. doi: 10.1016/j.intell.2019.01.011.

Lee, J. J., Wedow, R., Okbay, A., Kong, E., Maghzian, O., Zacher, M., . . . Cesarini, D. (2018). Gene discovery and polygenic prediction from a genome-wide association study of educational attainment in 1.1 million individuals. Nature Genetics, 50(8), 1112–1121. doi: 10.1038/s41588-018-0147-3.

Lello, L., Avery, S. G., Tellier, L., Vazquez, A. I., de los Campos, G., & Hsu, S. D. H. (2018). Accurate genomic prediction of human height. Genetics, 210(2), 477–497. doi: 10.1534/genetics.118.301267.

Li, Y., Liu, Y., Li, J., Qin, W., Li, K., Yu, C., & Jiang, T. (2009). Brain anatomical network and intelligence. PLoS Computational Biology, 5(5), e1000395. doi: 10.1371/journal.pcbi.1000395.

Liu, H. (2018). Social and genetic pathways in multigenerational transmission of educational attainment. American Sociological Review, 83(2), 278–304. doi: 10.1177/0003122418759651.

Mak, T. S. H., Porsch, R. M., Choi, S. W., Zhou, X., & Sham, P. C. (2017). Polygenic scores via penalized regression on summary statistics. Genetic Epidemiology, 41(6), 469–480. doi: 10.1002/gepi.22050.

Meuwissen, T. H. E., Hayes, B. J., & Goddard, M. E. (2001). Prediction of total genetic value using genome-wide dense marker maps. Genetics, 157(4), 1819–1829.

361

362

e. a. willoughby and j. j. lee

Neubauer, A. C., & Fink, A. (2009). Intelligence and neural efficiency. Neuroscience and Biobehavioral Reviews, 33(7), 1004–1023. doi: 10.1016/j.neubiorev. 2009.04.001. Okbay, A., Beauchamp, J. P., Fontana, M. A., Lee, J. J., Pers, T. H., Rietveld, C. A., . . . Benjamin, D. J. (2016). Genome-wide association study identifies 74 loci associated with educational attainment. Nature, 533(7604), 539–542. doi: 10.1038/ nature17671. arXiv: NIHMS150003. Park, G., Lubinski, D., & Benbow, C. P. (2007). Contrasting intellectual patterns predict creativity in the arts and sciences: Tracking intellectually precocious youth over 25 years. Psychological Science, 18(11), 948–952. doi: 10.1111/ j.1467-9280.2007.02007.x. Pietschnig, J., Penke, L., Wicherts, J. M., Zeiler, M., & Voracek, M. (2015). Meta-analysis of associations between human brain volume and intelligence differences: How strong are they and what do they mean? Neuroscience and Biobehavioral Reviews, 57, 411–432. doi: 10.1016/j.neubiorev.2015.09.017. Plomin, R., DeFries, J. C., & Loehlin, J. C. (1977). Genotype-environment interaction and correlation in the analysis of human behavior. Psychological Bulletin, 84(2), 309–322. doi: 10.1037/0033-2909.84.2.309. Purcell, S. M., Pato, M. T., Williams, N. M., Scolnick, E. M., Van Beck, M., O’Donovan, M. C., . . . Holmans, P. A. (2009). Common polygenic variation contributes to risk of schizophrenia and bipolar disorder. Nature, 460(August), 748–752. doi: 10.1038/nature08185. Rietveld, C. A., Medland, S. E., Derringer, J., Yang, J., Esko, T., Martin, N. W., . . . Koellinger, P. D. (2013). GWAS of 126,559 individuals identifies genetic variants associated with educational attainment. Science, 25(4), 57–82. doi: 10.1257/jep.25.4.57. Robertson, A. (1966). A mathematical model of the culling process in dairy cattle. Animal Production, 8(1), 95–108. doi: 10.1017/S0003356100037752. Schmitt, J. E., Neale, M. C., Clasen, L. S., Liu, S., Seidlitz, J., Pritikin, J. N., . . . Raznahan, A. 
(2019). A comprehensive quantitative genetic analysis of cerebral surface area in youth. Journal of Neuroscience, 13(16), 3028–3040. doi: 10.1523/JNEUROSCI.2248-18.2019. Schnack, H. G., Van Haren, N. E. M., Brouwer, R. M., Evans, A., Durston, S., Boomsma, D. I., . . . Hulshoff Pol, H. E. (2015). Changes in thickness and surface area of the human cortex and their relationship with intelligence. Cerebral Cortex, 25(6), 1608–1617. doi: 10.1093/cercor/bht357. Shulman, C., & Bostrom, N. (2014). Embryo selection for cognitive enhancement: Curiosity or game-changer? Global Policy, 5(1), 85–92. doi: 10.1111/17585899.12123. Spindel, J. E., & McCouch, S. R. (2016). When more is better: How data sharing would accelerate genomic selection of crop plants. New Phytologist, 212(4), 814–826. doi: 10.1111/nph.14174. van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. Journal of Neuroscience, 29(23), 7619–7624. doi: 10.1523/jneurosci.144309.2009.

Genetic and Neural Predictors of Intelligence

Vattikuti, S., Lee, J. J., Chang, C. C., Hsu, S. D. H., & Chow, C. C. (2014). Applying compressed sensing to genome-wide association studies. GigaScience, 3(1), 10. doi: 10.1186/2047-217X-3-10. Vilhjálmsson, B. J., Yang, J., Finucane, H. K., Gusev, A., Lindström, S., Ripke, S., . . . Price, A. L. (2015). Modeling linkage disequilibrium increases accuracy of polygenic risk scores. American Journal of Human Genetics, 97(4), 576–592. doi: 10.1016/j.ajhg.2015.09.001. Vuoksimaa, E., Panizzon, M. S., Chen, C.-H., Fiecas, M., Eyler, L. T., FennemaNotestine, C., . . . Kremen, W. S. (2015). The genetic association between neocortical volume and general cognitive ability is driven by global surface area rather than thickness. Cerebral Cortex, 25(8), 2127–2137. doi: 10.1093/ cercor/bhu018. Walhovd, K. B., Krogsrud, S. K., Amlien, I. K., Bartsch, H., Bjørnerud, A., Due-Tønnessen, P., . . . Fjell, A. M. (2016). Neurodevelopmental origins of lifespan changes in brain and cognition. Proceedings of the National Academy of Sciences, 113(33), 9357–9362. doi: 10.1073/pnas.1524259113. Willoughby, E. A., McGue, M., Iacono, W. G., Rustichini, A., & Lee, J. J. (2019). The role of parental genotype in predicting offspring years of education: Evidence for genetic nurture. Molecular Psychiatry. Online first. doi: 10.1038/s41380019-0494-1. Wray, N. R., Goddard, M. E., & Visscher, P. M. (2007). Prediction of individual genetic risk to disease from genome-wide association studies. Genome Research, 17(10), 1520–1528. doi: 10.1101/gr.6665407. Wray, N. R., Kemper, K. E., Hayes, B. J., Goddard, M. E., & Visscher, P. M. (2019). Complex trait prediction from genome data: Contrasting EBV in livestock to PRS in humans. Genetics, 211(4), 1131–1141. doi: 10.1534/ genetics.119.301859. Yengo, L., Robinson, M. R., Keller, M. C., Kemper, K. E., Yang, Y., Trzaskowski, M., . . . Visscher, P. M. (2018). Imprint of assortative mating on the human genome. Nature Human Behaviour, 2(12), 948–954. 
doi: 10.1038/s41562-018-0476-3.

363

PART V

Translating Research on the Neuroscience of Intelligence into Action

18 Enhancing Cognition

Michael I. Posner and Mary K. Rothbart

Introduction

It is useful to consider three very general approaches to enhancing cognitive functions such as attention, memory, or problem solving (Tang & Posner, 2014). One is training a specific brain network by practice on a task that uses that network (Network Training); attention and working memory have been two of the most widely used targets of network training. Another approach involves changing brain state through physical exercise, meditation, drugs, or playing video games (Brain State). A third approach involves the use of external electrical or magnetic stimulation to activate or inhibit brain pathways (Brain Stimulation). Recently, studies have examined these methods in combination (Daugherty et al., 2018; Ward et al., 2017). In this chapter we review examples of each approach designed to improve cognition, related criticisms, and opportunities for further research and application.

Network Training

There is widespread agreement that practice on specific cognitive tasks improves performance. Many such tasks improve reaction time (RT) or accuracy according to a power function (Anderson, Fincham, & Douglass, 1999; Fitts & Posner, 1967) or an exponential function (Heathcote, Brown, & Mewhort, 2000) that relates performance to the number of training trials. In addition, there have been efforts to improve more general cognition through training on specific tasks. This work began with studies in neuropsychology designed to improve cognition following stroke or brain injury. Because many people suffering from stroke have attentional deficits, one effort to remediate deficits has been attention training (Sohlberg & Mateer, 2001), which has been reported to help patients maintain focus and avoid distraction. However, there are clear limitations in the ability of network training to generalize to other cognitive tasks. Attention should not be thought of as a single unified function. Instead it includes largely separate networks involved in alerting, orienting, and executive control through resolving conflict (Petersen & Posner, 2012; Posner & Petersen, 1990). In one study (Rinne et al., 2013), 110 stroke patients and 62 control participants were given the Attention Network Test (ANT;
Fan, McCandliss, Sommer, Raz, & Posner, 2002). Analysis of the brain scans and ANT scores revealed three separate groups of patients: (i) patients with thalamic damage showing deficits in alerting, (ii) patients with parietal damage showing deficits in orienting, and (iii) patients with damage to white matter tracts related to the anterior cingulate showing deficits in cognitive control (Rinne et al., 2013). The classification into three categories fits with many findings from imaging research summarized in Fan, McCandliss, Fossella, Flombaum, and Posner (2005). Studies have also used training techniques to improve attention in the patient groups discussed. Patients with thalamic damage and participants who are sleep deprived have trouble attaining a sufficient level of alertness to perform well in cognitive tasks. The use of auditory warning signals has been shown to provide a temporary improvement in performance, and practice in using warning signals can improve both alerting and orienting functions (Thimm, Fink, Kust, Karbe, & Sturm, 2006). Patients with right parietal damage have problems orienting to signals that go directly to the damaged hemisphere. Training that involved self-instruction to orient to the left during visual search improved performance on targets on the left of the display (Van Kessel, Geurts, Brouwer, & Fasotti, 2013). These methods, while somewhat effective, are highly specific to a given function and thus have rather limited generalizability. One problem with many studies using network training with patient populations is the lack of appropriate controls. Patients generally appreciate learning about their deficits and recognizing that deficits are due to their brain injury and are not a general failing of the patient. In one study a group given attention process therapy was compared with a control group; the controls instead learned neuropsychology relevant to their deficit.
While the group taking attention therapy improved in attention and the control group did not, the controls were much more satisfied with the training, largely because they appreciated learning the cause of their disorder (Sohlberg, McLaughlin, Pavese, Heidrich, & Posner, 2000). The effort to enhance cognition in children with Attention Deficit Hyperactivity Disorder (ADHD) has used computerized training of working memory, which led to a popular commercial program called CogMed (www.additudemag.com/treatment/cogmed/; Klingberg, Forssberg, & Westerberg, 2002). This program improved cognitive performance in children with ADHD and stimulated efforts to improve cognition in typically developing children and adults. A striking result with working memory training in adults raised hopes of improving general cognition: working memory training improved overall fluid intelligence by a significant amount (Jaeggi, Buschkuehl, Jonides, & Perrig, 2008). Since general intelligence involves many forms of cognition, and since working memory is involved in nearly all cognitive tasks, this finding held promise for a generalized approach to cognitive enhancement. However, there has been considerable controversy
about the success of working memory training to transfer to remote aspects of cognition (Harrison et al., 2013; Melby-Lervag, Redick, & Hulme, 2016; Redick, 2019). It seems likely that no single task can improve all of cognition, but perhaps there is more promise in training a number of tasks thought to cover fundamental aspects of cognition, even though this approach remains controversial (for a fuller discussion of future directions see Redick, 2019 and the section on multimodal interventions).
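The power-function practice effects described at the start of this section can be illustrated with a short simulation. This is a hypothetical sketch: the parameter values are invented for illustration and are not drawn from the studies cited.

```python
import math

# Hypothetical illustration of a power-law practice function:
# RT = a * trial**(-b). On log-log axes this is a straight line,
# so a simple linear fit to log(RT) vs. log(trial) recovers the
# learning-rate exponent b. Parameter values are made up.
a, b = 800.0, 0.3          # initial RT (ms) and learning-rate exponent
trials = range(1, 101)
rts = [a * t ** (-b) for t in trials]

# least-squares slope of log(RT) against log(trial)
xs = [math.log(t) for t in trials]
ys = [math.log(rt) for rt in rts]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

print(round(-slope, 3))    # recovered exponent: 0.3
```

With noisy data the same log-log fit distinguishes a power function from an exponential one, for which log(RT) is instead linear in the raw trial number.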

Development

Similar efforts to improve attention through training have most often been restricted to particular parts of the developmental process. For example, training in effortful control via computerized exercises to improve the ability to resolve conflict has been applied mainly to young children. High levels of self-control in childhood, as measured by observational ratings from parents and teachers, have been associated with many positive outcomes when these same children reach adulthood (Moffitt et al., 2011). Thus, improving self-control in early childhood, even by a small amount, might lead to more success in early schooling and increasing effects in later life. Some of these efforts to train attention in children have used tightly controlled computerized exercises (Rueda, Checa, & Combita, 2012; Rueda, Rothbart, McCandliss, Saccamanno, & Posner, 2005), which have been shown to improve the brain network related to self-regulation. Other studies have used a less controlled curriculum that emphasizes classroom-based training and that allows children to work together to improve attention (Diamond & Lee, 2011). These methods found improvement in performance on cognitive tasks and/or in the brain network related to attention. Attention training methods have reported success in enhancing effortful control, as measured, for example, by the ability to resolve conflict, and have shown generalization to tests of delayed reward and IQ (Rueda et al., 2012). It is not clear how well these efforts do in improving later schooling and adult outcomes. Studies of children in Head Start that combined attention training with parent training enhanced cognitive performance in these children (Neville et al., 2013). A meta-analysis of such attention training studies revealed a moderately positive effect size (Peng & Miller, 2016).
Some support for the continuing influence of training working memory, attention, or inhibitory control in preschool children comes from studies using age-matched controls undergoing normal schooling. Training in either inhibitory control or working memory allowed the preschool group to catch up with a group that had a full year of normal learning within the school system (Zhang et al., 2019). However, evidence that early training provides lasting benefits into adulthood is still lacking.

There have also been efforts to study the elderly to see whether it is possible to enhance their cognitive skills by training. While memory and speed training improved their performance, only high-level reasoning training generalized strongly to daily life performance (Willis et al., 2006). There is controversy surrounding network training in healthy young adults, and most studies of individual network training have failed to find convincing evidence of transfer beyond the trained tasks and those similar to them (Simons et al., 2016). Children and the elderly may show greater evidence of transfer because they are not at peak performance. However, the data are rather sparse, the training methods differ, and it is not clear whether the young maintain attention improvements into adulthood or whether the trained elderly succeed in delaying cognitive decline. Network neuroscience provides a theoretical framework that helps to explain the training results (Barbey, 2018). It uses graph theory to argue that training on individual cognitive tasks remains contained within specific modules and lacks the global features needed to produce far transfer. To some extent, the state training methods described in the section on training brain states may provide these missing global features.

Individual Differences

Another reason for the controversy about the effectiveness of network training methods may lie in individual variability in the degree of improvement in tasks. In many studies improvement appears greater in those who show the poorest baseline performance (Rueda et al., 2005). In part this could be regression toward the mean, given the large error variance involved, but it could explain why studies of patient populations, or of very young and very old participants, show stronger evidence of improvement. Some studies have predicted the degree to which training methods will improve behavior, independently of initial performance level, from the overall modularity present in the brain as measured with resting-state MRI (Gallen & D'Esposito, 2019). The more strongly the brain's modules are connected internally in the pre-training resting state, the greater the overall improvement, both in studies of brain injury patients and in healthy adults undergoing cognitive training. To some extent this effect may oppose the greater gain by those lower at baseline. One may ask why modularity might be related to degree of gain from an intervention. The answer may lie in the relation of present modularity to prior successful learning: people whose experience has led to brain changes in the past may be most likely to improve in the future. Another possibility lies in the degree to which genetic variation may produce improved connectivity within brain modules. Genes that influence the ease of myelination, such as polymorphisms in the MTHFR gene (Voelker, Rothbart, & Posner, 2016),
may support greater modularity through improved connectivity between active neurons. An understanding of individual differences may improve our ability to determine at whom various training methods are best targeted. Another reason for inconsistency may be differences within a single individual over time. For example, time of day is well known to influence both alertness and mood, and these in turn might cause differences in performance between experiments conducted in different seasons, weather conditions, or times of day.
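The modularity measure used in the resting-state work discussed above can be sketched on a toy graph. This is a hypothetical illustration of Newman's modularity Q; the six-node network and its two-module partition are invented for the example, not taken from any brain dataset.

```python
# Toy sketch of Newman's modularity Q, the graph-theoretic index of
# how strongly a network divides into modules. The six-node "network"
# and its two-module partition are invented for illustration.
edges = [(0, 1), (0, 2), (1, 2),   # module A: nodes 0-2
         (3, 4), (3, 5), (4, 5),   # module B: nodes 3-5
         (2, 3)]                   # single between-module connection
community = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}

m = len(edges)                     # total number of edges
degree = {node: 0 for node in community}
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] over same-module node pairs
q = 0.0
for i in community:
    for j in community:
        if community[i] != community[j]:
            continue
        a_ij = 1.0 if (i, j) in edges or (j, i) in edges else 0.0
        q += a_ij - degree[i] * degree[j] / (2 * m)
q /= 2 * m

print(round(q, 3))                 # 0.357: well above 0, i.e. modular
```

Denser within-module wiring (relative to what the node degrees alone would predict) raises Q, which is the sense in which the studies above speak of modules being "strongly connected internally" at baseline.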

Training Brain States

Methods such as physical exercise (Hillman, Erickson, & Kramer, 2008), psychoactive drugs (Adam, 2018), meditation (Tang & Posner, 2009), exposure to nature (Berman, Jonides, & Kaplan, 2008), and playing video games (Green & Bavelier, 2008) have been used to influence brain state in ways that might improve multiple cognitive networks. While it is not possible to review all of the studies using these methods, we point out some common characteristics of their role in improving cognition.

Physical Exercise

Physical exercise is defined as planned, structured, repetitive exercise with a goal of improving or maintaining one or more components of physical fitness (Mandolesi et al., 2018). Most studies are of aerobic exercise, which has had the most consistent overall training effect on cognition of any of the methods so far developed (Hillman et al., 2008). In addition to its effects on physical factors such as respiration and heart rate, aerobic exercise has been reported to improve attention, cognitive control, memory, and thinking. It apparently does so through changes in both white and grey matter, and at the molecular level it also affects gene expression (Mandolesi et al., 2018). The effects of physical exercise can be found both acutely and with longer-term programs, and they occur in much the same way over the life span. In common with other methods of training, there is considerable individual variability, predicted in part by baseline modularity measures (Gallen & D'Esposito, 2019).

Meditation

In terms of physical activity, meditation is at the opposite pole from physical exercise. However, studies have shown similarly widespread effects of mindfulness meditation on a range of cognitive tasks including attention, memory, and creative problem solving. Meditation has also been shown to reduce stress, as measured by salivary cortisol, and to improve immunoreactivity (Tang et al., 2007). Some training studies using a control of relaxation training
have shown rapid changes in mood and attention after only five days (Tang et al., 2007), with dose-dependent changes over a month of training in stress reduction and immunoreactivity (Tang, Holzel, & Posner, 2015). Reports of both grey and white matter change in areas of the brain related to attention occur over two to four weeks of training (Tang et al., 2015). One possible mechanism for some of these changes is the widely reported increase in frontal theta that follows meditation training (Posner, Tang, & Lynch, 2014). These changes may alter both synaptic efficiency and myelination (Piscopo, Weible, Rothbart, Posner, & Niell, 2018).

Video Games

Training in action video games has been shown to improve orienting network performance during visual search and similar tasks (Green & Bavelier, 2003). In one extensive training study with video games and controls, the authors found transfer to untrained paradigms with some forms of video games (Baniqued et al., 2014). Overall, evidence from several studies indicated a large and consistent effect of video games on aspects of orienting, particularly in blocking irrelevant information, and a much smaller effect on task switching and the executive network. Green, Sugarman, Medford, Klobusicky, and Bavelier (2012) summarize these effects as follows: ". . . to the extent that action video games can indeed reduce task-switch cost, the effect seems less robust than the previously seen effects of action video games on various aspects of visual attention and low-level vision" (p. 992).

Not all media and multimedia activities are equivalent to action video games in the skills they develop. Green et al. (2012) noted that action video games are those with high-velocity target movement, many objects that are only transiently visible, and spatial and temporal uncertainty. These features are obviously related to orienting of attention. According to one theory, action video games also teach participants how to learn a wide range of new things ("learning to learn") more effectively than non-players (Bavelier, Green, Pouget, & Schrater, 2012).

Other State Methods

Brain state can also be altered by soothing experiences such as exposure to nature (Berman et al., 2008), or by taking various drugs that alter alertness (Farah, 2015; Husain & Mehta, 2011). A summary of drug research suggests that the effects of drugs on cognition are small and variable (Husain & Mehta, 2011). Farah (2015) argues that the effects of stimulant drugs on performance may occur for only a short time, due to altered mood or temporary motivation. Increases in motivation to perform tasks may be part of the reason that
exposure to nature improves cognitive performance, but studies indicate that exposure to nature also has a more direct effect on cognitive control (Bourrier, Berman, & Enns, 2018).

Brain Stimulation

Like many of the techniques discussed in this chapter, the use of brain stimulation to enhance cognition arose first in the treatment of disorders that influence cognitive activity. With patients it is easier to justify invasive methods such as the deep brain stimulation used to treat Parkinson's patients and those suffering from forms of depression that are difficult to treat. Because deep brain stimulation for these disorders targets specific brain areas that differ depending on the disorder, the technique is similar in some ways to network-specific training. For example, deep brain stimulation of the medial temporal lobe has been shown to activate specific memories, suggesting that the technique might enhance cognition by improving memory. The use of deep brain stimulation also drew upon animal studies supporting the idea that stimulating a brain network increases synaptic efficacy through mechanisms such as long-term potentiation (Lynch, 1998). More recently, mostly non-invasive stimulation using scalp electrodes or transcranial magnetic stimulation (TMS) has been used with non-patient human populations. The most common types of stimulation are direct current or TMS, mainly over the frontal cortex, or low-frequency alternating current, often in the theta range, targeted to specific areas of the brain. Reviews of these and other methods can be found in Cohen-Kadosh (2014), Santarnecchi et al. (2015), and Sasaki et al. (2016). In this chapter we examine only the methods most frequently used in normal adults.

Direct Current Stimulation

Studies using transcranial direct current stimulation (tDCS) have usually examined the enhancement of memory, language, or attention, and sometimes other cognitive processes, most frequently in college students. According to one review (Santarnecchi et al., 2015), 60% of the studies have involved memory, 19% language, and about 10% attention or cognitive control. The largest number of studies (56%) have placed electrodes above the frontal cortex. Most studies involve stimulation in a single session, and about half have reported significant changes in RT or accuracy due to stimulation. However, there is considerable variability in whether performance is enhanced by increasing or decreasing frontal activity. This uncertainty about which direction of stimulation improves performance, together with a meta-analytic finding of no overall effect of tDCS on cognitive performance in single-session studies (Horvath, Forte, & Carter, 2015), supports skepticism about whether tDCS
can systematically influence cognitive processing. Multisession studies have been somewhat more successful. However, there are studies showing effects of even a single session (Reinhart & Woodman, 2015; Reinhart, Zhu, Park, & Woodman, 2015). For example, in one study (Reinhart & Woodman, 2015), 20 minutes of direct current stimulation increased visual acuity. Subjects' event-related potentials showed amplitude changes that paralleled the behavioral effects. Finally, subjects with the worst visual acuity showed the largest improvements following stimulation of the visual cortex.

Theta Stimulation

The widespread use of theta stimulation to improve cognitive performance was based on three ideas. First, it has been argued that theta rhythm plays a special role in inducing long-term potentiation (LTP), one of the ways of enhancing synaptic connections (Lynch, 1998). Second, synchronizing theta stimulation with incoming sensory stimuli has been proposed as one mechanism by which the hippocampus may enhance the encoding of incoming sensory information and thus enhance memory (Vinogradova, Kitchigina, Kudina, & Zenchenko, 1999). Third, theta rhythm has been shown to activate dormant oligodendrocytes and foster changes in white matter (Piscopo et al., 2018). In one important study (Reinhart, 2017), components of executive function were changed by directly manipulating neural activity using high-definition transcranial alternating current stimulation (HD-tACS). Twenty minutes of in-phase stimulation over the medial frontal cortex and right lateral prefrontal cortex synchronized theta (~6 Hz) rhythms between these regions and rapidly improved behavioral control. In contrast, anti-phase stimulation in the same individuals desynchronized theta phase coupling and impaired adaptive behavior. These findings suggest it is possible to intervene in the neural integration between frontal cortical structures that govern complex human behavior. The development of such drug-free interventions for addressing disorders of cortical connectivity is of considerable clinical importance. There is also evidence that memory can be enhanced by theta stimulation. In one such study (Roberts, Clarke, Addante, & Ranganath, 2018), participants were presented with a set of words to be classified as animate or inanimate and another set they classified as manufactured or natural.
After this stimulus presentation they were stimulated with auditory theta at 5.5 Hz for 36 minutes (experimental condition) or white noise (control) and then given a memory test in which they were to recognize the word and the classification task they had been given. Those exposed to theta stimulation did better than controls in remembering which classification had been involved but did not differ from controls on recognition of whether the item was old or new. A study using mice found that 20 days of theta stimulation of the anterior cingulate (ACC) by implanted lasers led to improved connectivity through
changes in white matter (Piscopo et al., 2018). Since the ACC is an important part of the executive attention network (Petersen & Posner, 2012), it is possible that increased connectivity would allow improved performance in tasks requiring attention. This study illustrates that one mechanism involved in long-term theta stimulation is the activation of previously dormant oligodendrocytes, producing better connectivity between brain areas (Piscopo et al., 2018). This mechanism could not account for the short-term effects found in the memory studies, which probably involve a change in synaptic plasticity (Albensi, Oliver, Toupin, & Odero, 2007; Lynch, 1998). It is also possible that effects found with DC stimulation or with other frequencies involve similar mechanisms by stimulating intrinsic brain theta activity. There are many issues yet to be resolved in the study of brain stimulation. Both the ethics and safety of this form of stimulation remain important issues, and the effectiveness and exact mechanisms of such stimulation, as well as which individuals might benefit most from these methods, need further study.
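The in-phase versus anti-phase manipulation described above can be illustrated numerically. This is a hypothetical sketch using idealized 6 Hz sinusoids, not a model of the HD-tACS montage itself: the circular mean of the instantaneous phase difference between two "regions" recovers a 0 rad offset for in-phase stimulation and a pi rad offset for anti-phase stimulation.

```python
import cmath
import math

# Two idealized 6 Hz "theta" signals. In-phase stimulation corresponds
# to a 0 rad phase offset between regions, anti-phase to a pi rad
# offset. All numbers here are illustrative.
fs, f = 1000, 6.0                       # sample rate (Hz), theta frequency
times = [t / fs for t in range(fs)]     # one second of samples

def mean_phase_diff(offset):
    """Circular mean of the phase difference between two sinusoids."""
    vec = sum(cmath.exp(1j * ((2 * math.pi * f * t + offset)
                              - 2 * math.pi * f * t))
              for t in times) / len(times)
    return cmath.phase(vec)

in_phase = mean_phase_diff(0.0)         # ~0 rad
anti_phase = mean_phase_diff(math.pi)   # ~pi rad
print(round(in_phase, 3), round(anti_phase / math.pi, 3))
```

With real EEG the phases would come from a band-pass filter and Hilbert transform rather than a known formula, but the same circular averaging underlies the phase-coupling measures in this literature.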

Multimodal Interventions

Recent reports have used combinations of the network, state, and stimulation methods described in the preceding sections. One study (Daugherty et al., 2018) compared physical training alone, physical training combined with a video game, physical training accompanied by meditation training, and an active control task involving change detection. The combined methods produced generalization to spatial tests and to fluid intelligence; in this study, physical training alone did not produce significant generalization to cognitive tasks. Another large study used five groups of young adults, average age 24 years (Ward et al., 2017). The groups received network training or combinations of network, state, and stimulation training. The results showed that a combination of all three methods, compared with network training alone, produced improvements in learning a wide range of cognitive tasks, including tests of executive function, working memory, planning, and problem solving. One goal of interventions is to influence the quality of decisions in daily life. As a step toward this goal, one study (Zwilling et al., 2018) compared groups given 16 weeks of physical training, physical training plus network training, and physical training plus mindfulness training against an active control involving change detection and visual search. Decision making was measured by the Decision Making Competence Test, which taps value assessment, belief assessment, and information integration. All of the interventions improved decision making more than the control, while the authors found that physical exercise had the largest overall influence on decision making and that mindfulness and cognitive training each showed specific effects.

Future Directions

Much work remains to be done to establish the various methods for improving cognitive function. It is clear that practicing a cognitive task can improve performance on that task and transfer to very similar conditions. The prospect of far transfer appears to be better if the training task is adaptive, so that trainees continue to work hard to achieve successful performance. The literature suggests that individual differences may be crucial in determining who improves. Currently there is much doubt that any single cognitive task will be adequate to transfer widely or to produce improvement across a broad range of people. The use of combinations of network, state, and stimulation intervention modalities seems to provide a wider effect, but more research is needed to determine the optimal choices. Multimodal interventions have been shown to improve tests of decision making, but further studies are needed to examine their effectiveness in schooling and in the decisions of everyday life. Despite evidence that multimodal training may prove more effective, it is also important to use individual methods to better understand their mechanisms. Physical training, meditation, and video games seem to have a wider impact than practicing specific cognitive tasks. In the case of meditation, frontal theta seems to be one mechanism of this effect. Future studies in rodents may tell us whether changes in white matter result from a faster change in synapse formation. We need research to determine whether there is a common mechanism (e.g., brain rhythms) behind the various forms of brain state learning. Research should determine whether there is a connection between the change in white matter found with low-level stimulation in rodents and changes occurring in the human brain following long-term theta stimulation.
It is also important to know whether the changes that occur with tDCS, whose mechanism is unknown, are related to those that occur in white matter following low-frequency stimulation. Further, we need to determine whether there is continuity between short-term theta effects on memory and long-term white matter change. Perhaps a fast-occurring synaptic change will prove to be a necessary condition for the longer-term white matter change. Studies in rodents suggest that the degree of white matter change is limited. In mice, the change in g ratio following a month of stimulation was about 5% (Piscopo et al., 2018), and MRI meditation studies in humans have shown only about a 10% change in fractional anisotropy (Tang et al., 2010). Moreover, training improves the Attention Network Test by approximately 10% of the reaction-time improvement found during development between age 7 and adulthood (Voelker et al., 2016). Thus 10% may serve as an upper bound on the change we might expect from training. It is unclear whether a change of this size will be sufficient to improve disorders involving white matter, or whether it will benefit skilled performance in those without such a disorder.

Enhancing Cognition

Some published data favor the idea that addiction disorders might benefit from improved white matter (Tang, Tang, & Posner, 2013). Smokers showed reduced connectivity between the ACC and striatum compared to nonsmokers. Since all participants were recruited for stress reduction, they did not necessarily intend to quit smoking. Nonetheless, two weeks of meditation training improved connectivity and led to highly significant reductions in smoking. Although further tests are needed, it is tempting to conclude that restoration of white matter efficiency led to reduced smoking. If the white matter changes found in rodents also occur in humans, clinical studies of various white matter disorders will be needed to determine whether changes of this size actually remediate symptoms. Our chapter suggests some reason for optimism that carefully chosen task or state training, perhaps in combination, may be helpful for at least some people. Moreover, safe brain stimulation, either alone or in conjunction with training, may also prove useful. It is probably best to regard these methods as a toolkit for aiding those in need of improved cognitive performance. As we learn more about how individual differences determine who learns and by what method, the success of these methods should improve.

Acknowledgment Support for this research was provided by Office of Naval Research Grants N00014–19-1-2015 and N00014–15-1-2148 to the University of Oregon.

References

Adam, D. (ed.) (2018). The genius within: Smart pills, brain hacks and adventures in intelligence. London: Picador.
Albensi, B. C., Oliver, D. R., Toupin, J., & Odero, G. (2007). Electrical stimulation protocols for hippocampal synaptic plasticity and neuronal hyper-excitability: Are they effective or relevant? Experimental Neurology, 204, 1–13.
Anderson, J. R., Fincham, J. M., & Douglass, S. (1999). Practice and retention: A unifying analysis. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25(5), 1120–1136.
Baniqued, P., Kranz, M. B., Voss, M. W., Lee, H., Cosman, J. D., Severson, J., & Kramer, A. F. (2014). Cognitive training with casual video games: Points to consider. Frontiers in Psychology, 4, 1010.
Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20.
Bavelier, D., Green, C. S., Pouget, A., & Schrater, P. (2012). Brain plasticity through the life span: Learning to learn and action video games. Annual Review of Neuroscience, 35, 391–416.


Berman, M. G., Jonides, J., & Kaplan, S. (2008). The cognitive benefits of interacting with nature. Psychological Science, 19(12), 1207–1212. doi: 10.1111/j.1467-9280.2008.02225.x.
Bourrier, S. C., Berman, M. G., & Enns, J. T. (2018). Cognitive strategies and natural environments interact in influencing executive function. Frontiers in Psychology, 9, 1248. doi: 10.3389/fpsyg.2018.01248.
Cohen-Kadosh, R. (ed.) (2014). The stimulated brain. Amsterdam: Elsevier.
Daugherty, A. M., Zwilling, C., Paul, E. J., Sherepa, N., Allen, C., Kramer, A. F., . . . Barbey, A. K. (2018). Multi-modal fitness and cognitive training to enhance fluid intelligence. Intelligence, 66, 32–43. doi: 10.1016/j.intell.2017.11.001.
Diamond, A., & Lee, K. (2011). Interventions shown to aid executive function development in children 4–12 years old. Science, 333(6054), 959–964.
Fan, J., McCandliss, B. D., Fossella, J., Flombaum, J. I., & Posner, M. I. (2005). The activation of attentional networks. Neuroimage, 26(2), 471–479.
Fan, J., McCandliss, B. D., Sommer, T., Raz, A., & Posner, M. I. (2002). Testing the efficiency and independence of attentional networks. Journal of Cognitive Neuroscience, 14(3), 340–347.
Farah, M. J. (2015). The unknowns of cognitive enhancement. Science, 350(6259), 379–380.
Fitts, P. M., & Posner, M. I. (1967). Human performance. Belmont, CA: Wadsworth.
Gallen, C. L., & D’Esposito, M. (2019). Brain modularity: A biomarker of intervention-related plasticity. Trends in Cognitive Sciences, 23(4), 293–304.
Green, C. S., & Bavelier, D. (2003). Action video games modify visual selective attention. Nature, 423(6939), 534–537.
Green, C. S., & Bavelier, D. (2008). Exercising your brain: A review of human brain plasticity and training-induced learning. Psychology and Aging, 23(4), 692–701.
Green, C. S., Sugarman, M. A., Medford, K., Klobusicky, E., & Bavelier, D. (2012). The effect of action video game experience on task-switching. Computers in Human Behavior, 28(3), 984–994.
Harrison, T. L., Shipstead, Z., Hicks, K. L., Hambrick, D. Z., Redick, T. S., & Engle, R. W. (2013). Working memory training may increase working memory capacity but not fluid intelligence. Psychological Science, 24(12), 2409–2419.
Heathcote, A., Brown, S., & Mewhort, D. J. K. (2000). The power law repealed: The case for an exponential law of practice. Psychonomic Bulletin & Review, 7(2), 185–207.
Hillman, C. H., Erickson, K. I., & Kramer, A. F. (2008). Be smart, exercise your heart: Exercise effects on brain and cognition. Nature Reviews Neuroscience, 9(1), 58–65.
Horvath, J. C., Forte, J. D., & Carter, O. (2015). Quantitative review finds no evidence of cognitive effects in healthy populations from single-session transcranial direct current stimulation (tDCS). Brain Stimulation, 8(3), 535–550.
Husain, M., & Mehta, M. A. (2011). Cognitive enhancement by drugs in health and disease. Trends in Cognitive Sciences, 15(1), 28–36.


Jaeggi, S. M., Buschkuehl, M., Jonides, J., & Perrig, W. J. (2008). Improving fluid intelligence with training on working memory. Proceedings of the National Academy of Sciences USA, 105(19), 6829–6833.
Klingberg, T., Forssberg, H., & Westerberg, H. (2002). Training of working memory in children with ADHD. Journal of Clinical and Experimental Neuropsychology, 24(6), 781–791.
Lynch, G. (1998). Memory and the brain: Unexpected chemistries and a new pharmacology. Neurobiology of Learning and Memory, 70(1–2), 82–100.
Mandolesi, L., Polverino, A., Montuori, S., Foti, F., Giampaolo, F., Sorrentino, P., & Sorrentino, G. (2018). Effects of physical exercise on cognitive functioning and wellbeing: Biological and psychological benefits. Frontiers in Psychology, 9, 509.
Melby-Lervag, M., Redick, T. S., & Hulme, C. (2016). Working memory training does not improve performance on measures of intelligence or other measures of “far transfer”: Evidence from a meta-analytic review. Perspectives on Psychological Science, 11(4), 512–534.
Moffitt, T. E., Arseneault, L., Belsky, D., Dickson, N., Hancox, R. J., Harrington, H. L., . . . Caspi, A. (2011). A gradient of childhood self-control predicts health, wealth and public safety. Proceedings of the National Academy of Sciences USA, 108(7), 2693–2698.
Neville, H. J., Stevens, C., Pakulak, E., Bell, T. A., Fanning, J., Klein, S., & Isbell, E. (2013). Family-based training program improves brain function, cognition, and behavior in lower socioeconomic status preschoolers. Proceedings of the National Academy of Sciences USA, 110(29), 12138–12143.
Peng, P., & Miller, A. C. (2016). Does attention training work? A selective meta-analysis to explore the effects of attention training and moderators. Learning and Individual Differences, 45, 77–87.
Petersen, S. E., & Posner, M. I. (2012). The attention system of the human brain: 20 years after. Annual Review of Neuroscience, 35, 71–89.
Piscopo, D., Weible, A., Rothbart, M. K., Posner, M. I., & Niell, C. M. (2018). Changes in white matter in mice resulting from low frequency brain stimulation. Proceedings of the National Academy of Sciences USA, 115(27), 6639–6646. doi: 10.1073/pnas.1802160115.
Posner, M. I., & Petersen, S. E. (1990). The attention system of the human brain. Annual Review of Neuroscience, 13, 25–42.
Posner, M. I., Tang, Y. Y., & Lynch, G. (2014). Mechanisms of white matter change induced by meditation. Frontiers in Psychology, 5, 1220. doi: 10.3389/fpsyg.2014.01220.
Redick, T. S. (2019). The hype cycle in working memory training. Current Directions in Psychological Science, 28(5), 1–7.
Reinhart, R. M. G. (2017). Disruption and rescue of interareal theta phase coupling and adaptive behavior. Proceedings of the National Academy of Sciences USA, 114(43), 11542–11547.
Reinhart, R. M. G., & Woodman, G. F. (2015). Enhancing long-term memory with stimulation tunes visual attention in one trial. Proceedings of the National Academy of Sciences USA, 112(2), 625–630.
Reinhart, R. M. G., Zhu, J., Park, S., & Woodman, G. F. (2015). Synchronizing theta oscillations with direct-current stimulation strengthens adaptive control in the human brain. Proceedings of the National Academy of Sciences USA, 112(30), 9448–9453.
Rinne, P., Hassan, M., Goniotakis, D., Chohan, K., Sharma, P., Langdon, D., . . . Bentley, P. (2013). Triple dissociation of attention networks in stroke according to lesion location. Neurology, 81(9), 812–820.
Roberts, B. M., Clarke, A., Addante, R. J., & Ranganath, C. (2018). Entrainment enhances theta oscillations and improves episodic memory. Cognitive Neuroscience, 9(3–4), 181–193.
Rueda, M. R., Checa, P., & Combita, L. M. (2012). Enhanced efficiency of the executive attention network after training in preschool children: Immediate and after two month effects. Developmental Cognitive Neuroscience, 2(Suppl. 1), S192–S204.
Rueda, M. R., Rothbart, M. K., McCandliss, B., Saccomanno, L., & Posner, M. I. (2005). Training, maturation and genetic influences on the development of executive attention. Proceedings of the National Academy of Sciences USA, 102(41), 14931–14936.
Santarnecchi, E., Brem, A.-K., Levenbaum, E., Thompson, T., Kadosh, R. C., & Pascual-Leone, A. (2015). Enhancing cognition using transcranial electrical stimulation. Current Opinion in Behavioral Sciences, 4, 171–178.
Sasaki, S. R., Tsuiki, S., Miyaguchi, S., Kojima, S., Masaki, M., Otsuru, N., & Onishi, H. (2016). Comparison of three non-invasive transcranial electrical stimulation methods for increasing cortical excitability. Frontiers in Human Neuroscience, 10, 668. doi: 10.3389/fnhum.2016.00668.
Simons, D. J., Boot, W. R., Charness, N., Gathercole, S. E., Chabris, C. F., Hambrick, D. Z., & Stine-Morrow, E. A. L. (2016). Do “brain-training” programs work? Psychological Science in the Public Interest, 17(3), 103–186.
Sohlberg, M. M., & Mateer, C. A. (2001). Cognitive rehabilitation: An integrative neuropsychological approach. New York: Guilford.
Sohlberg, M. M., McLaughlin, K. A., Pavese, A., Heidrich, A., & Posner, M. I. (2000). Evaluation of attention process therapy training in persons with acquired brain injury. Journal of Clinical and Experimental Neuropsychology, 22(5), 656–676.
Tang, Y. Y., Holzel, B. K., & Posner, M. I. (2015). The neuroscience of mindfulness meditation. Nature Reviews Neuroscience, 16(5), 213–225.
Tang, Y., Lu, Q., Geng, X., Stein, E. A., Yang, Y., & Posner, M. I. (2010). Short-term mental training induces white-matter changes in the anterior cingulate. Proceedings of the National Academy of Sciences USA, 107(35), 15649–15652.
Tang, Y. Y., Ma, Y., Wang, J., Fan, Y., Feng, S., Lu, Q., . . . Posner, M. I. (2007). Short-term meditation training improves attention and self-regulation. Proceedings of the National Academy of Sciences USA, 104(43), 17152–17156.
Tang, Y. Y., & Posner, M. I. (2009). Attention training and attention state training. Trends in Cognitive Sciences, 13(5), 222–227.
Tang, Y. Y., & Posner, M. I. (2014). Training brain networks and states. Trends in Cognitive Sciences, 18(7), 345–350. doi: 10.1016/j.tics.2014.04.002.
Tang, Y. Y., Tang, R., & Posner, M. I. (2013). Brief meditation training induces smoking reduction. Proceedings of the National Academy of Sciences USA, 110(34), 13971–13975.


Thimm, M., Fink, G. R., Kust, J., Karbe, H., & Sturm, W. (2006). Impact of alertness training on spatial neglect: A behavioural and fMRI study. Neuropsychologia, 44(7), 1230–1246.
Van Kessel, M. E., Geurts, A. C. H., Brouwer, W. H., & Fasotti, L. (2013). Visual scanning training for neglect after stroke with and without a computerized lane tracking dual task. Frontiers in Human Neuroscience, 7, 358.
Vinogradova, O. S., Kitchigina, V. F., Kudina, T. A., & Zenchenko, K. I. (1999). Spontaneous activity and sensory responses of hippocampal neurons during persistent theta-rhythm evoked by median raphe nucleus blockade in rabbit. Neuroscience, 94(3), 745–753. doi: 10.1016/S0306-4522(99)00253-5.
Voelker, P., Rothbart, M. K., & Posner, M. I. (2016). A polymorphism related to methylation influences attention during performance of speeded skills. AIMS Neuroscience, 3(1), 40–55.
Ward, N., Paul, E., Watson, P., Cook, G. E., Hillman, C. H., Cohen, N. J., . . . Barbey, A. K. (2017). Enhanced learning through multimodal training: Evidence from a comprehensive cognitive, physical fitness, and neuroscience intervention. Scientific Reports, 7, 5808. doi: 10.1038/s41598-017-06237-5.
Willis, S. L., Tennstedt, S. L., Marsiske, M., Ball, K., Elias, J., Koepke, K. M., . . . Wright, E. (2006). Long-term effects of cognitive training on everyday functional outcomes in older adults. Journal of the American Medical Association, 296(23), 2805–2814.
Zhang, Q., Wang, C. P., Zhao, Q. W., Yang, L., Buschkuehl, M., & Jaeggi, S. M. (2019). The malleability of executive function in early childhood: Effects of schooling and targeted training. Developmental Science, 22(2), e12748.
Zwilling, C. E., Daugherty, A. M., Hillman, C. H., Kramer, A. F., Cohen, N. J., & Barbey, A. K. (2018). Enhanced decision-making through multimodal training. NPJ Science of Learning, 4(1). doi: 10.1038/s41539-019-0049.


19 Patient-Based Approaches to Understanding Intelligence and Problem-Solving Shira Cohen-Zimerman, Carola Salvi, and Jordan H. Grafman

Introduction One of the major achievements of psychology in the twentieth century is the establishment and implementation of standard measures of human intelligence. The use of these measures yielded a large body of research as well as many controversies and criticisms (for review, see Nisbett et al., 2012). The relatively recent development of structural and functional brain imaging techniques led to attempts to identify the neural correlates of intelligence, as well as to the use of intelligence testing as a diagnostic and prognostic factor within clinical populations. Different syndromes are characterized by unique patterns of performance on standard intelligence tests, with specific profiles reported for patients with developmental disorders (e.g., Down syndrome, autism spectrum disorder), neurological and neurodegenerative disorders (e.g., epilepsy, multiple sclerosis, Parkinson’s disease, Alzheimer’s disease), and acute neurological trauma (e.g., traumatic brain injury and stroke; see Hamburg et al., 2019; Wechsler, 2008a). In this chapter, we will highlight the clinical implications of studying intelligence in adults with traumatic brain injury (TBI), which has been a central research interest for our group over the years. When appropriate, we will also provide additional examples referring to other clinical populations. For the purpose of this chapter, intelligence will be defined as a general cognitive ability shared across different domains of cognitive functioning. Two higher-level cognitive domains considered critical components of adaptive behavior are problem-solving and planning abilities, and their neural basis will also be described. Our main goal in this chapter is to highlight the translational aspects of the neuroscientific study of intelligence and problem-solving by focusing on research with TBI patients. We will start by reviewing common tools used to assess premorbid intelligence in clinical populations.
We then review the major lessons learned about human intelligence and problem-solving from lesion studies, and discuss protective factors that help preserve intelligence following


trauma or disease. Later we present demographic, neurological, and genetic predictors of post-injury intelligence. We end by discussing whether intelligence scores can predict post-injury vocational and social outcomes. Overall, this chapter suggests that studying clinical populations is necessary for our understanding of intelligence, and that such research not only benefits the patients, their caregivers, and the clinicians working with them, but also advances our theoretical understanding of the concept of intelligence.

Assessment of General Intelligence in Clinical Populations Initial attempts to assess intelligence date to before the establishment of psychology as a research field in the nineteenth century (Gottfredson & Saklofske, 2009). While dozens of intelligence tests are available today, we will focus on those most commonly used to assess patients, in both research and clinical settings. The gold standard for assessing intelligence in healthy and clinical populations is the Wechsler Adult Intelligence Scale (WAIS) (Wechsler, 2008b). This scale includes subtests that assess intellectual functioning in specific cognitive areas. The subtests can be combined to yield a verbal IQ (VIQ) score, a performance IQ (PIQ) score, and a full-scale IQ (FSIQ) score. While in healthy populations the input from both scales can be equally informative, for individuals with motor and communicative impairments it is advisable to use non-verbal assessments with minimal motor requirements in order to avoid underestimating their true cognitive abilities.
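As a rough numerical illustration of how subtest scores are combined into a composite on the IQ metric (mean 100, SD 15), the sketch below standardizes a sum of scaled scores against normative values. The function name and the normative mean and SD here are invented for illustration; actual WAIS composites are derived from proprietary, age-stratified lookup tables.

```python
def iq_from_scaled_sum(scaled_sum, norm_mean, norm_sd):
    """Map a sum of subtest scaled scores onto the IQ metric (mean 100,
    SD 15) by standardizing against the normative mean/SD of that sum.
    Hypothetical sketch; real WAIS scoring uses published tables."""
    z = (scaled_sum - norm_mean) / norm_sd  # standardize within the norm group
    return round(100 + 15 * z)

# A sum one normative SD above the mean lands at IQ 115 (hypothetical norms):
print(iq_from_scaled_sum(60, norm_mean=50, norm_sd=10))  # → 115
```

The same standardization logic underlies the VIQ, PIQ, and FSIQ composites; only the set of contributing subtests and the normative parameters differ.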

Estimating Premorbid Intelligence The estimation of premorbid (i.e., pre-injury or pre-illness) intelligence is crucial for determining a change in function in individuals with acquired intellectual deficits. Without such an estimate it would be impossible to discern whether deficits in test performance represent a decline from previous ability or an overall low, yet stable, level of performance. Since the vast majority of patients have no formal assessment of cognitive abilities from before their injury or disease, indirect measures of premorbid functioning are often used. These measures fall into three main groups: (1) collecting demographic information, such as education and occupation, to predict premorbid intelligence (Barona, Reynolds, & Chastain, 1984); (2) administering specific subtests from the Wechsler Intelligence Scale that are considered stable and resistant to injury, including the vocabulary, information, and picture completion tasks (Wechsler, 1958). In addition, reading tests are often used to assess premorbid ability, since reading is an overlearned skill that is often preserved relative to cognitive changes from brain injury and disease. For example, several studies have shown that performances on the National Adult


Reading Test (NART) (Blair & Spreen, 1989) and the Wechsler Test of Adult Reading (WTAR) (Wechsler, 2001) are relatively resistant to various neuropsychiatric disorders and provide reliable and precise estimates of FSIQ (Bright & van der Linde, 2020; Crawford, Deary, Starr, & Whalley, 2001). (3) The last group of premorbid tests contains tools that combine demographic measures with current cognitive abilities (e.g., Schoenberg, Lange, Brickell, & Saklofske, 2007). Some data suggest that this latter approach correlates most highly with full-scale IQ in both healthy and clinical samples (Axelrod, Vanderploeg, & Schinka, 1999). A unique population for which a direct measure of premorbid intelligence is often available is military personnel. Many studies conducted on veterans compare post-injury cognitive performance to a pre-injury general cognitive ability assessment that is routinely administered upon enlistment, namely the Armed Forces Qualification Test (AFQT). The AFQT is a paper-and-pencil test designed by the Department of Defense, given in all branches of the military, that assesses vocabulary, visual-spatial organization, arithmetic, and functional associations. AFQT scores are consistently reported to be highly correlated with standardized tests of intelligence such as the WAIS (Cohen-Zimerman, Kachian, Krueger, Gordon, & Grafman, 2019; Grafman et al., 1988; Orme, Brehm, & Ree, 2001), and therefore provide a valid direct measure of premorbid intelligence. Our group in particular has studied Vietnam combat veterans as part of the Vietnam Head Injury Study (VHIS) and used AFQT scores to assess intelligence.
We will review findings from the VHIS in the following sections; it is therefore worth noting here that the VHIS is a prospective study in which Vietnam combat veterans with and without penetrating traumatic brain injuries (pTBI) were followed up 15, 35, and 45 years post-injury (for a description of this study, see Raymont, Salazar, Krueger, & Grafman, 2011).
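The demographic approach to premorbid estimation described above amounts to a regression equation mapping background variables onto a predicted FSIQ. The minimal sketch below illustrates the idea only; the function name and every coefficient are hypothetical placeholders, not the published Barona et al. (1984) weights.

```python
def estimate_premorbid_fsiq(education_years, age, occupation_level):
    """Toy demographic regression predicting premorbid FSIQ.

    occupation_level: hypothetical ordinal code (e.g., 0 = unskilled
    through 5 = professional). All coefficients are invented for
    illustration and do not reproduce any published equation.
    """
    intercept = 70.0
    return (intercept
            + 2.0 * education_years    # education is the strongest predictor
            + 0.05 * age               # small age adjustment
            + 3.0 * occupation_level)  # occupational attainment

# e.g., 16 years of education, age 40, occupation level 3:
print(estimate_premorbid_fsiq(16, 40, 3))  # → 113.0
```

In practice such equations are fitted on a normative standardization sample and validated against concurrent IQ scores; the key clinical point is that the estimate requires no cognitive testing of the (already injured) patient.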

Lessons from Brain Lesion Studies Cognitive neuroscience has made remarkable progress in linking human intelligence with underlying neuronal networks, and in identifying how traumatic brain injury may impact functional brain networks (Barbey et al., 2015; Hillary & Grafman, 2017). An overview of different brain imaging methods and their contribution to understanding how intelligence is represented in the brain can be found in Part III of this book. While imaging studies can provide meaningful insight into the neuroscience of intelligence, they are correlational by nature and cannot determine whether a specific brain area is necessary for achieving a high score on an intelligence test. In contrast, human brain lesion studies can establish a causal link between a specific brain area and cognitive ability. In this section, we summarize the findings from studying patients with brain lesions. We first discuss lesion studies


focused on general intelligence, and then review studies focused on problem-solving, planning, and reasoning.

General Intelligence Studies on patients with focal brain injuries have allowed for methodical investigation regarding brain areas critical for human intelligence. One theoretical perspective argues that the frontal lobes are the neuronal basis of general intelligence (e.g., Duncan et al., 2000), with both brain imaging and lesion studies supporting this approach. For example, Barbey, Colom, and Grafman (2013) showed that pTBI survivors participating in the VHIS study with lesions in the dorsolateral prefrontal cortex (dlPFC, see Figure 19.1) had lower intelligence scores compared to patients with brain damage in areas other than the dlPFC and to healthy controls. Given that the dlPFC is related to executive control functions, the authors argued that such functions are critical for general cognitive ability. Consistent with these findings, a recent study (Arbula et al., 2020) tested patients with brain tumors on multiple cognitive tests. Latent variable modeling, together with lesion-symptom methods, revealed that lesions in the left lateral prefrontal areas resulted in the most profound cognitive impairment, regardless of tumor type. A second influential theoretical framework argues that a parietal-frontal network and the white matter tracts that connect these areas can best account for individual differences in intelligence (Jung & Haier, 2007; Vakhtin, Ryman, Flores, & Jung, 2014). Several brain lesion studies support this framework. For example, Gläscher et al. (2010) tested the neural underpinning of intelligence in patients with focal brain damage using voxel-based

Figure 19.1 Schematic drawing of brain areas associated with intelligence based on lesion mapping studies. This brain image is adapted from work by Patrick J. Lynch, medical illustrator; C. Carl Jaffe, MD, cardiologist. https:// creativecommons.org/licenses/by/2.5/


lesion-symptom mapping (VLSM). The authors reported that patients with damage to a network including regions in the frontal and parietal cortex, as well as white matter association tracts and the frontopolar cortex, had lower intelligence scores. Similar results were obtained in another lesion mapping study (Barbey et al., 2012), which investigated the neural substrates of intelligence (measured by the WAIS) and executive function (measured by the Delis–Kaplan Executive Function System). This study performed VLSM analysis on 182 Vietnam veterans with focal brain damage, and showed that individuals with lower intelligence scores shared brain damage in a distributed network in the frontal and parietal cortex, as well as associated white matter tracts, mostly in the left hemisphere. Lastly, Barbey, Colom, Paul, and Grafman (2014) investigated the neural architecture of intelligence (measured by the WAIS) and its relation to working memory (measured by an N-back task). The results from this study highlighted the role of specific white matter tracts, which connect frontal and parietal cortices, in providing the neuronal basis for the tested intellectual abilities.
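The VLSM logic used in these studies can be sketched as a mass-univariate comparison: at every voxel, patients whose lesions include that voxel are compared with patients whose lesions spare it on the behavioral score. The minimal version below (a sketch with invented data shapes; published analyses add lesion-volume covariates and permutation-based correction for the many voxel-wise tests) returns a map of Welch t statistics.

```python
import numpy as np

def vlsm_t_map(lesion_masks, scores, min_n=2):
    """Mass-univariate VLSM sketch.

    lesion_masks: (n_patients, n_voxels) boolean array, True where a
        patient's lesion covers the voxel.
    scores: (n_patients,) behavioral scores (e.g., FSIQ).
    Returns a (n_voxels,) array of Welch t values; NaN where either
    group has fewer than min_n patients. Negative t means the lesioned
    group scores lower than the spared group.
    """
    lesion_masks = np.asarray(lesion_masks, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    n_voxels = lesion_masks.shape[1]
    t_map = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        hit = scores[lesion_masks[:, v]]     # lesion covers voxel v
        miss = scores[~lesion_masks[:, v]]   # lesion spares voxel v
        if len(hit) < min_n or len(miss) < min_n:
            continue
        se = np.sqrt(hit.var(ddof=1) / len(hit) + miss.var(ddof=1) / len(miss))
        if se == 0:
            continue
        t_map[v] = (hit.mean() - miss.mean()) / se
    return t_map

# Six hypothetical patients, two voxels: voxel 0 is lesioned in the three
# low-scoring patients, voxel 1 in only one patient (too few to test).
t = vlsm_t_map(
    [[1, 1], [1, 0], [1, 0], [0, 0], [0, 0], [0, 0]],
    [80, 85, 82, 110, 112, 115],
)
print(t)  # strongly negative t at voxel 0, NaN at voxel 1
```

Voxels surviving correction are then rendered as a statistical map, which is how networks such as the frontoparietal regions described above are identified.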

Problem-Solving, Planning, and Reasoning While the commonly used Wechsler Adult Intelligence Scale includes subtests evaluating distinct cognitive abilities such as vocabulary, spatial skills, and simple associations, Simon (1973) defined intelligence as the ability to produce the single best (or correct) answer to a clearly defined question, such as a proof of a theorem. Solving a problem is considered an essential component of intelligence and reasoning. Luria (1966) defined intelligence as a goal-directed cognitive activity that arises in situations for which no response is immediately apparent or available. In other words, problem-solving implies achieving a specific goal when the solution steps, and the particular order in which they must be performed, are unknown (Unterrainer & Owen, 2006). It has long been argued that patients with lesions in the prefrontal cortex have difficulty with problem-solving in real-world, ill-structured situations, particularly problems involving planning and look-ahead components. Harlow (1868) was the first to demonstrate that "planning skill" was impaired in a patient with frontal-lobe damage. Bianchi (1922) showed that monkeys with frontal lesions were impaired in their ability to "coordinate the different elements of a complex activity." Thus, the role of the frontal cortex in planning and problem-solving can be conceived of "as a general system for sequencing or guiding behavior towards the attainment of an immediate or distant goal" (Jouandet & Gazzaniga, 1979), or as pivotal for the "planning of future actions" (for a review, see Shallice, 1988). Goel, Grafman, Tajik, Gana, and Danto's (1997) results indicate that patients with frontal lobe lesions are impaired in real-world planning problems. Specifically, patients were given information regarding their household income and expenditures and were asked to balance their current budget and to plan for future


expenditures. The findings showed that patients with frontal lesions were impaired in several cognitive domains usually required to solve financial problems, including organizing and structuring the problem-solving process, dealing with ill-structured situations that do not have clearly right or wrong answers, and projecting potential financial issues into the distant future. Research on the neural correlates of problem-solving over the last 20 years has revealed that there are at least two ways people generate ideas to solve a problem: either with a sudden insight (also known as the Aha! experience) or via continuous step-by-step reasoning (for a review, see Kounios & Beeman, 2014). While solving a problem with insight has the advantage of being more accurate, and the study of its neural components is proliferating (e.g., Aziz-Zadeh, Kaplan, & Iacoboni, 2009; Danek & Salvi, 2018; Jung-Beeman et al., 2004; Laukkonen, Webb, Salvi, Tangen, & Schooler, 2020; Palmiero et al., 2019; Salvi, Bricolo, Franconeri, Kounios, & Beeman, 2015; Salvi, Bricolo, Kounios, Bowden, & Beeman, 2016; Salvi, Beeman, Bikson, McKinley, & Grafman, 2020; Salvi, Simoncini, Grafman, & Beeman, 2020; Santarnecchi et al., 2019; Sprugnoli et al., 2017; Tik et al., 2018), only a few studies have investigated this phenomenon in clinical populations. Reverberi, Toraldo, D'Agostini, and Skrap (2005) tested the role of the dlPFC in problem-solving by administering insight problems to 35 patients with a single focal brain lesion. The researchers aimed to test the claim that the dlPFC should be crucial for non-routine tasks (as in the case of creative or insightful solutions). Surprisingly, the patients outperformed healthy participants: 82% of lateral frontal patients solved the most difficult insight problems, compared to only 43% of healthy control participants on the same kind of problems.
The results of this experiment suggested that the dlPFC plays an important role in inhibition during non-routine implicit problem-solving, such that damage to this region can lead to higher performance. Another study (N = 144) investigated the neural bases of social problem-solving (measured by the Everyday Problem-Solving Inventory) and examined the degree to which individual differences in performance are predicted by several psychological variables, including general intelligence (measured by the WAIS). VLSM analysis indicated that social problem-solving and general intelligence are supported by a shared network of frontal, temporal, and parietal regions, including white matter association tracts that bind these areas into an integrated network (Barbey, Colom, Paul, Chau, et al., 2014). Additionally, lesion mapping results on cognitive flexibility (as a general adaptive aspect of intellectual function) revealed that, despite its distributed nature, its neural substrates are remarkably circumscribed to white matter sectors of the left hemisphere, including the superior longitudinal/arcuate fasciculus that links frontal, temporal, and parietal cortices (Barbey et al., 2013).


Shuren and Grafman (2002) described the neural basis and importance of being able to reason for adaptive thinking and goal attainment. Even in the case when logical reasoning problems substitute meaningful content for abstract rules, patients with frontal lobe lesions fail to benefit compared to controls (Goel, Makale, & Grafman, 2004). Reasoning often requires solving problems with unspecified aspects, and results from another study by Goel and colleagues indicated that pTBI patients with right prefrontal cortex lesions had particular difficulty with these kinds of problems. The findings suggested that the right prefrontal cortex has a role in modulating a premature overinterpretation of the parameters of a problem (see also Goel, Stollstorff, Nakic, Knutson, & Grafman, 2009). Lesions to the left prefrontal cortex may impair retrieval of prior held beliefs leaving those patients to become more skeptical reasoners (Goel, Marling, Raymont, Krueger, & Grafman, 2019). Transitive inference reasoning problems often require a person to compare a number of relational pairs to understand the overall group hierarchy. This type of strategy can be used to solve cognitive problems but also to ascertain a social group hierarchy after observing social relations between pairs of people. In this case, based on a study with pTBI patients (Waechter, Goel, Raymont, Kruger, & Grafman, 2013), parietal lobe lesions impaired transitive inferential reasoning. Reasoning can also be domainspecific, and a study examining reasoning about emotional content indicated that lesions in the polar and orbitofrontal lesions led to impaired performance (Eimontaite et al., 2018; Goel et al., 2017). Moreover, general intelligence has been found to be a predictor of emotional intelligence co-localizing to a shared network of frontal, temporal, and parietal brain regions (Barbey, Colom, & Grafman, 2014). 
This set of studies with pTBI patients emphasizes the relevance of the frontal lobes for most reasoning processes. In many of these studies, the participating pTBI patients performed within normal limits on standard tests of intelligence. This dissociation between performance on reasoning tasks and standard intelligence tests highlights the relevance of testing patients’ higher-level reasoning skills when concerned about their performance in situations requiring adaptive and logical thinking.

Section Summary

Brain lesion mapping studies on general intelligence and problem-solving support an integrative framework for understanding these abilities in health and disease. Given that general intelligence is associated with verbal, visuospatial, and executive processes, it is not surprising that it is represented in the brain by a distributed underlying network. The data suggest that lesions in the left hemisphere, including frontal regions (specifically the dlPFC) as well as the parietal cortex and associated white matter tracts, are likely to result in


lower intelligence scores and poorer problem-solving abilities. These findings indicate the complexity of the factors underlying intellectual abilities. They suggest that the neural mediators of intelligence reflect a modular functional organization, with different brain regions serving different cognitive abilities required for intellectual performance. Defining these regions is not only theoretically relevant but carries translational value as well, since it may change the way healthcare practitioners set goals and anticipate the course of recovery in patients with brain damage in these areas.

Predictors of General Intelligence following TBI

Several studies have examined the expected trajectory of recovery of intelligence scores after brain injury. While each patient has unique injury characteristics and recovery patterns, several factors can predict future changes in intelligence scores following trauma at the group level. We next review three factors that may influence the trajectory of cognitive recovery following TBI: (1) pre-injury intelligence, (2) lesion severity and laterality, and (3) genetic predisposition.

Pre-injury Intelligence as a Predictor of Recovery

Over the course of the VHIS, several studies attempted to account for variance in post-injury general cognitive ability. The earliest VHIS study that focused on predictors of post-injury intelligence tested 263 pTBI patients as well as 64 healthy controls during phase 2 of the study (P2), conducted 15 years post-injury (Grafman et al., 1988). Pre-injury intelligence was measured using the AFQT percentile score upon enlistment, and post-injury intelligence was measured at P2 using the same test. This study showed that the best predictor of post-injury intelligence scores was the pre-injury intelligence score. While brain volume loss was also a significant predictor of post-injury intelligence, it was not as strong a predictor as pre-injury performance. This sample of veterans was assessed with the AFQT again at phase 3 of the study (P3), 35 years post-injury. A study that focused on the changes in intelligence scores between pre-injury, P2, and P3 (Raymont et al., 2008) found that the pTBI group had lower intelligence scores than healthy Vietnam Veteran controls at both 15 and 35 years post-injury. Moreover, this study revealed that the decline in intelligence between P2 and P3 was more rapid among the pTBI patients compared to the control group. As in the previous phase, pre-injury intelligence was the most consistent predictor of cognitive outcome and decline following brain injury. Based on the greater decline in AFQT percentile scores over the time between


15 and 35 years post-injury in the pTBI group, the authors concluded that exacerbated general cognitive decline may occur following brain injury, reflecting the combination of brain volume loss due to the injury and the routine neuronal loss that accompanies aging. Most recently, a new study was published in this series, adding AFQT data collected at phase 4 (P4) of the VHIS, 42 years post-injury (Cohen-Zimerman, Salvi, Krueger, Gordon, & Grafman, 2018). This study included 120 participants with pTBI and 33 matched Vietnam Veteran controls, and the findings were consistent with the two previous studies: pre-injury intelligence was a significant predictor of post-injury intelligence scores 42 years after the injury, though no exacerbated decline over the last decade was observed in the pTBI group.

These findings from the VHIS are consistent with other similar studies. For example, Kesler, Adams, Blasey, and Bigler (2003) tested 25 patients with closed TBI and compared pre-injury scores (assessed by school-based cognitive assessment) with post-injury WAIS scores. The sample was divided into high and low IQ groups based on pre-injury standardized test scores, and the results showed that a lower pre-injury IQ predicted a greater decline in IQ from pre- to post-injury.

Overall, there is a strong empirical base to conclude that while head trauma does lead to lower intelligence scores, premorbid IQ is a key determinant in predicting recovery of intellectual functions. These results demonstrate the predictive power of cognitive reserve upon recovery from brain injury. Cognitive reserve (Stern, 2009) is defined as the ability to cope with brain damage due to enhanced pre-existing cognitive functions or to the development of new compensatory processes, with intelligence playing a key role in that adaptation.
A recent review on the effect of cognitive reserve on cognitive recovery in patients with TBI or stroke (Nunnari, Bramanti, & Marino, 2014) concluded that low premorbid IQ and low education are the best predictors of cognitive impairment after TBI.

A Note on Socioeconomic Status and Post-Injury Intelligence

A recent paper (Cohen-Zimerman et al., 2019) showed that socioeconomic status (SES) in childhood can also predict intelligence scores following brain injury. In this study, information regarding childhood SES (parental occupation and education) was collected from 186 patients with pTBI and 54 healthy controls from the VHIS, along with intelligence scores over 40 years post-injury. The results showed that childhood SES accounted for a significant proportion of the variance in intelligence scores both pre- and post-injury for all participants. However, childhood SES was not associated with the rate of change in intelligence scores across time. The authors suggested that higher SES serves as another factor contributing to cognitive reserve, which facilitates cognitive recovery following TBI.

Patient-Based Approaches to Understanding Intelligence

Lesion Severity and Laterality as Predictors of Recovery

A recent meta-analytic study (Königs, Engenhorst, & Oosterlaan, 2016) showed that while patients with mild TBI showed no meaningful intelligence impairments in the acute phase of recovery, patients with moderate TBI exhibited medium-sized FSIQ and PIQ impairments and small VIQ impairments, and patients with severe TBI exhibited large FSIQ, PIQ, and VIQ impairments that improved only minimally over time. Several studies (Grafman et al., 1988; Raymont et al., 2008) tested whether lesion laterality (left vs. right hemisphere damage) affects intelligence scores post-injury and reported that lesion laterality did not predict post-injury intelligence scores. However, one study (Raymont et al., 2008) reported that left hemisphere volume loss predicted the change in intelligence between 15 and 35 years post-injury.

Genetic Predisposition as a Predictor of Recovery

In recent years, there has been increasing evidence linking specific genotypes to cognitive recovery following TBI. Several VHIS studies tested whether genetic polymorphisms can predict post-injury intelligence scores. For example, Raymont et al. (2008) found that the GRIN glutamate receptor gene (rs968301) is a significant predictor of intelligence scores 35 years post-injury. A more recent study (Rostami et al., 2011) focused on the Brain-Derived Neurotrophic Factor (BDNF) gene and its role in the recovery of general intelligence following TBI (see also Krueger et al., 2011). This study examined seven BDNF single-nucleotide polymorphisms (SNPs) and their association with post-injury intelligence scores. Two BDNF SNPs, rs7124442 and rs1519480, were significantly associated with post-injury recovery of general intelligence, indicating lesion-induced plasticity. After controlling for pre-injury intelligence and the percentage of brain volume loss, the genotypes accounted for 5% of the variance in post-injury AFQT scores. These data indicate that genetic variation in BDNF plays a significant role in cognitive recovery following pTBI. Another study (Barbey, Colom, Paul, Forbes et al., 2014) focused on the Val66Met polymorphism of the BDNF gene as a potential predictor of post-injury intelligence. Consistent with the previous studies, the results showed that while the different genotypes did not affect pre-injury intelligence, they predicted general intelligence post-TBI.

Can General Intelligence Predict Outcomes in Clinical Populations?

One of the major challenges of head injury rehabilitation is to improve the low rates of post-injury employment and community reintegration. Given that intelligence is predictive of scholastic achievement (Colom &


Flores-Mendoza, 2007) and job performance (Ree & Earles, 1992) in the healthy population, several studies tested its role in predicting occupational and social outcomes following head trauma. For example, in an early VHIS study (Schwab, Grafman, Salazar, & Kraft, 1993), work status was recorded 15 years post-injury. Pre- and post-injury intelligence, as measured by the AFQT, were highly correlated with each other, and both predicted work status at follow-up; post-injury intelligence, however, predicted work status better than pre-injury intelligence. Another study (Ip, Dornan, & Schentag, 1995) followed a sample of 45 participants with TBI after their discharge to identify predictive factors for return to work or school. Data were collected on five sets of variables, including sociodemographics, chronicity, indices of severity, physical impairment, and cognitive functioning. The results indicated that the PIQ score of the WAIS-R was the most significant predictor of return to work or school. O’Connell (2000) followed 43 TBI survivors who completed a program designed to help them return to competitive employment and showed that individuals with TBI who scored higher on the PIQ and a measure of verbal memory were more likely to return to work one year after the program. Importantly, not only does intelligence play a significant role in predicting and facilitating return to work (RTW) in TBI survivors (Mani, Cater, & Hudlikar, 2017), it also serves as a better predictor of RTW than other clinical measures such as physical disability, injury severity, the duration of post-traumatic amnesia, or the level of consciousness at admission (Benedictus, Spikman, & van der Naalt, 2010).

There is less consensus in the literature regarding the association between intelligence and social abilities following TBI.
Several studies tested whether general cognitive abilities can predict social-cognitive outcomes (e.g., Theory of Mind, empathy, emotion recognition) following head trauma, and found no association between the two (Spikman, Timmerman, Milders, Veenstra, & van der Naalt, 2011). Yet a recent study examined the association between premorbid IQ – measured with the National Adult Reading Test – and social participation – measured with a self-report scale – following TBI (Wardlaw, Hicks, Sherer, & Ponsford, 2018). This study found that higher premorbid IQ, among other factors, significantly predicted increased social participation following TBI. Overall, while there is strong evidence to support the role of intelligence scores in predicting occupational outcomes following TBI, more research is needed to better understand which aspects of post-injury social functioning can be predicted by IQ.

Conclusions and Future Directions

While cognitive neuroscience investigations of intelligence are often rooted in basic science, they carry far-reaching potential translational value.


The patient-based studies reviewed in this chapter demonstrate dissociations between subtypes of intellectual impairments as well as between intelligence test scores and performance on other cognitive and social tests. Such dissociations reveal the complex infrastructure of human intelligence and suggest that it cannot be easily localized to a single brain region. This highlights the need to integrate findings from human lesion studies with research on how functional brain networks are organized (see Chapter 6, by Barbey) and altered by brain injury. One particularly promising approach is lesion network mapping (Boes et al., 2015), which allows for a thorough examination of which lesions are associated with IQ scores and how they map onto a connected brain network, while considering how these networks are changed following TBI. Another promising approach involves applying machine learning techniques to large brain imaging data sets from clinical samples. This method can provide new insights into the connectivity and hierarchy of key brain areas related to human intelligence.

Overall, this chapter demonstrates that significant progress in understanding the neural bases of intelligence has been achieved through studying clinical populations. In addition, it hints at strategies for the evaluation and enhancement of intelligence in brain-injured patients. In particular, we anticipate that findings from lesion-based studies of human intelligence will be routinely used to facilitate personalized rehabilitation interventions with the therapeutic goal of enhancing cognitive recovery. This could be achieved by identifying key neuroanatomical regions to use as targets for non-invasive brain stimulation techniques (e.g., tDCS or TMS), based on specific characteristics of one’s intelligence-related brain network, as well as demographics and personal history.

Acknowledgments

Drs. Cohen-Zimerman and Grafman were supported by the Therapeutic Cognitive Neuroscience Fund (PI: Dr. Barry Gordon), the Smart Family Foundation of New York (J. Grafman), and by NIH T32 funding (C. Salvi).

References

Arbula, S., Ambrosini, E., Della Puppa, A., De Pellegrin, S., Anglani, M., Denaro, L., . . . Vallesi, A. (2020). Focal left prefrontal lesions and cognitive impairment: A multivariate lesion-symptom mapping approach. Neuropsychologia, 136, 107253. doi: 10.1016/j.neuropsychologia.2019.107253.
Axelrod, B. N., Vanderploeg, R. D., & Schinka, J. A. (1999). Comparing methods for estimating premorbid intellectual functioning. Archives of Clinical Neuropsychology, 14(4), 341–346. doi: 10.1016/S0887-6177(98)00028-6.


Aziz-Zadeh, L., Kaplan, J. T., & Iacoboni, M. (2009). “Aha!”: The neural correlates of verbal insight solutions. Human Brain Mapping, 30(3), 908–916. doi: 10.1002/hbm.20554.
Barbey, A. K., Belli, A., Logan, A., Rubin, R., Zamroziewicz, M., & Operskalski, J. T. (2015). Network topology and dynamics in traumatic brain injury. Current Opinion in Behavioral Sciences, 4, 92–102.
Barbey, A. K., Colom, R., & Grafman, J. (2013). Dorsolateral prefrontal contributions to human intelligence. Neuropsychologia, 51(7), 1361–1369. doi: 10.1016/j.neuropsychologia.2012.05.017.
Barbey, A. K., Colom, R., & Grafman, J. (2014). Distributed neural system for emotional intelligence revealed by lesion mapping. Social Cognitive and Affective Neuroscience, 9(3), 265–272. doi: 10.1093/scan/nss124.
Barbey, A. K., Colom, R., Paul, E. J., Chau, A., Solomon, J., & Grafman, J. H. (2014). Lesion mapping of social problem solving. Brain: A Journal of Neurology, 137(10), 2823–2833. doi: 10.1093/brain/awu207.
Barbey, A. K., Colom, R., Paul, E., Forbes, C., Krueger, F., Goldman, D., & Grafman, J. (2014). Preservation of general intelligence following traumatic brain injury: Contributions of the Met66 brain-derived neurotrophic factor. PLoS One, 9(2), e88733.
Barbey, A. K., Colom, R., Paul, E. J., & Grafman, J. (2014). Architecture of fluid intelligence and working memory revealed by lesion mapping. Brain Structure & Function, 219(2), 485–494. doi: 10.1007/s00429-013-0512-z.
Barbey, A. K., Colom, R., Solomon, J., Krueger, F., Forbes, C., & Grafman, J. (2012). An integrative architecture for general intelligence and executive function revealed by lesion mapping. Brain, 135(4), 1154–1164. doi: 10.1093/brain/aws021.
Barona, A., Reynolds, C. R., & Chastain, R. (1984). A demographically based index of premorbid intelligence for the WAIS-R. Journal of Consulting and Clinical Psychology, 52(5), 885.
Benedictus, M. R., Spikman, J. M., & van der Naalt, J. (2010). Cognitive and behavioral impairment in traumatic brain injury related to outcome and return to work. Archives of Physical Medicine and Rehabilitation, 91(9), 1436–1441.
Bianchi, L. (1922). The mechanism of the brain and the function of the frontal lobes. Edinburgh: Livingstone. doi: 10.1192/bjp.68.283.402.
Blair, J. R., & Spreen, O. (1989). Predicting premorbid IQ: A revision of the National Adult Reading Test. The Clinical Neuropsychologist, 3(2), 129–136.
Boes, A. D., Prasad, S., Liu, H., Liu, Q., Pascual-Leone, A., Caviness Jr., V. S., & Fox, M. D. (2015). Network localization of neurological symptoms from focal brain lesions. Brain, 138(10), 3061–3075.
Bright, P., & van der Linde, I. (2020). Comparison of methods for estimating premorbid intelligence. Neuropsychological Rehabilitation, 30(1), 1–14. doi: 10.1080/09602011.2018.1445650.
Cohen-Zimerman, S., Kachian, Z. R., Krueger, F., Gordon, B., & Grafman, J. (2019). Childhood socioeconomic status predicts cognitive outcomes across adulthood following traumatic brain injury. Neuropsychologia, 124, 1–8. doi: 10.1016/j.neuropsychologia.2019.01.001.


Cohen-Zimerman, S., Salvi, C., Krueger, F., Gordon, B., & Grafman, J. (2018). Intelligence across the seventh decade in patients with brain injuries acquired in young adulthood. Trends in Neuroscience and Education, 13, 1–7. doi: 10.1016/j.tine.2018.08.001.
Colom, R., & Flores-Mendoza, C. E. (2007). Intelligence predicts scholastic achievement irrespective of SES factors: Evidence from Brazil. Intelligence, 35(3), 243–251. doi: 10.1016/j.intell.2006.07.008.
Crawford, J. R., Deary, I. J., Starr, J., & Whalley, L. J. (2001). The NART as an index of prior intellectual functioning: A retrospective validity study covering a 66-year interval. Psychological Medicine, 31(3), 451–458. doi: 10.1017/S0033291701003634.
Danek, A., & Salvi, C. (2018). Moment of truth: Why Aha! experiences are correct. Journal of Creative Behavior. doi: 10.1002/jocb.380.
Duncan, J., Seitz, R. J., Kolodny, J., Bor, D., Herzog, H., Ahmed, A., . . . Emslie, H. (2000). A neural basis for general intelligence. Science, 289(5478), 457–460. doi: 10.1126/science.289.5478.457.
Eimontaite, I., Goel, V., Raymont, V., Krueger, F., Schindler, I., & Grafman, J. (2018). Differential roles of polar orbital prefrontal cortex and parietal lobes in logical reasoning with neutral and negative emotional content. Neuropsychologia, 119, 320–329.
Gläscher, J., Rudrauf, D., Colom, R., Paul, L. K., Tranel, D., Damasio, H., & Adolphs, R. (2010). Distributed neural system for general intelligence revealed by lesion mapping. Proceedings of the National Academy of Sciences, 107(10), 4705–4709. Retrieved from www.pnas.org/content/107/10/4705.abstract
Goel, V., Grafman, J., Tajik, J., Gana, S., & Danto, D. (1997). A study of the performance of patients with frontal lobe lesions in a financial planning task. Brain, 120(10), 1805–1822. doi: 10.1093/brain/120.10.1805.
Goel, V., Lam, E., Smith, K. W., Goel, A., Raymont, V., Krueger, F., & Grafman, J. (2017). Lesions to polar/orbital prefrontal cortex selectively impair reasoning about emotional material. Neuropsychologia, 99, 236–245.
Goel, V., Makale, M., & Grafman, J. (2004). The hippocampal system mediates logical reasoning about familiar spatial environments. Journal of Cognitive Neuroscience, 16(4), 654–664.
Goel, V., Marling, M., Raymont, V., Krueger, F., & Grafman, J. (2019). Patients with lesions to left prefrontal cortex (BA 9 and BA 10) have less entrenched beliefs and are more skeptical reasoners. Journal of Cognitive Neuroscience, 31(11), 1674–1688.
Goel, V., Stollstorff, M., Nakic, M., Knutson, K., & Grafman, J. (2009). A role for right ventrolateral prefrontal cortex in reasoning about indeterminate relations. Neuropsychologia, 47(13), 2790–2797.
Gottfredson, L., & Saklofske, D. H. (2009). Intelligence: Foundations and issues in assessment. Canadian Psychology/Psychologie Canadienne, 50(3), 183.
Grafman, J., Jonas, B. S., Martin, A., Salazar, A. M., Weingartner, H., Ludlow, C., . . . Vance, S. C. (1988). Intellectual function following penetrating head-injury in Vietnam veterans. Brain, 111(1), 169–184.
Hamburg, S., Lowe, B., Startin, C. M., Padilla, C., Coppus, A., Silverman, W., . . . Strydom, A. (2019). Assessing general cognitive and adaptive abilities in adults with Down syndrome: A systematic review. Journal of Neurodevelopmental Disorders, 11(1), 20. doi: 10.1186/s11689-019-9279-8.
Harlow, J. M. (1868). Recovery from the passage of an iron bar through the head. Publications of the Massachusetts Medical Society, 2, 327–347. doi: 10.1177/0957154X9300401407.
Hillary, F. G., & Grafman, J. H. (2017). Injured brains and adaptive networks: The benefits and costs of hyperconnectivity. Trends in Cognitive Sciences, 21(5), 385–401. doi: 10.1016/j.tics.2017.03.003.
Ip, R. Y., Dornan, J., & Schentag, C. (1995). Traumatic brain injury: Factors predicting return to work or school. Brain Injury, 9(5), 517–532. doi: 10.3109/02699059509008211.
Jouandet, M., & Gazzaniga, M. S. (1979). The frontal lobes. In M. S. Gazzaniga (ed.), Neuropsychology. Handbook of behavioral neurobiology, vol. 2 (pp. 25–59). Boston, MA: Springer. doi: 10.1007/978-1-4613-3944-1_2.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154.
Jung-Beeman, M., Bowden, E. M., Haberman, J., Frymiare, J. L., Arambel-Liu, S., Greenblatt, R., . . . Kounios, J. (2004). Neural activity when people solve verbal problems with insight. PLoS Biology, 2(4), 500–510. doi: 10.1371/journal.pbio.0020097.
Kesler, S. R., Adams, H. F., Blasey, C. M., & Bigler, E. D. (2003). Premorbid intellectual functioning, education, and brain size in traumatic brain injury: An investigation of the cognitive reserve hypothesis. Applied Neuropsychology, 10(3), 153–162.
Königs, M., Engenhorst, P. J., & Oosterlaan, J. (2016). Intelligence after traumatic brain injury: Meta-analysis of outcomes and prognosis. European Journal of Neurology, 23(1), 21–29. doi: 10.1111/ene.12719.
Kounios, J., & Beeman, M. (2014). The cognitive neuroscience of insight. Annual Review of Psychology, 65(1), 71–93. doi: 10.1146/annurev-psych-010213-115154.
Krueger, F., Pardini, M., Huey, E. D., Raymont, V., Solomon, J., Lipsky, R. H., . . . Grafman, J. (2011). The role of the Met66 brain-derived neurotrophic factor allele in the recovery of executive functioning after combat-related traumatic brain injury. The Journal of Neuroscience, 31(2), 598–606.
Laukkonen, R., Webb, M., Salvi, C., Tange, J., & Schooler, J. (2020). Eureka heuristics: How feelings of insight signal the quality of a new idea. PsyArXiv. doi: 10.31234/osf.io/ez3tn.
Luria, A. (1966). Higher cortical functions in man. Boston, MA: Springer. doi: 10.1007/978-1-4684-7741-2.
Mani, K., Cater, B., & Hudlikar, A. (2017). Cognition and return to work after mild/moderate traumatic brain injury: A systematic review. Work, 58(1), 51–62.
Nisbett, R. E., Aronson, J., Blair, C., Dickens, W., Flynn, J., Halpern, D. F., & Turkheimer, E. (2012). Intelligence: New findings and theoretical developments. American Psychologist, 67(2), 130.
Nunnari, D., Bramanti, P., & Marino, S. (2014). Cognitive reserve in stroke and traumatic brain injury patients. Neurological Sciences, 35(10), 1513–1518. doi: 10.1007/s10072-014-1897-z.


O’Connell, M. J. (2000). Prediction of return to work following traumatic brain injury: Intellectual, memory, and demographic variables. Rehabilitation Psychology, 45(2), 212–217. doi: 10.1037/0090-5550.45.2.212.
Orme, D. R., Brehm, W., & Ree, M. J. (2001). Armed Forces Qualification Test as a measure of premorbid intelligence. Military Psychology, 13(4), 187–197. doi: 10.1207/S15327876MP1304_1.
Palmiero, M., Piccardi, L., Nori, R., Palermo, L., Salvi, C., & Guariglia, C. (2019). Creativity: Education and rehabilitation. Frontiers in Psychology, 10, 1500. doi: 10.3389/fpsyg.2019.01500.
Raymont, V., Greathouse, A., Reding, K., Lipsky, R., Salazar, A., & Grafman, J. (2008). Demographic, structural and genetic predictors of late cognitive decline after penetrating head injury. Brain, 131(2), 543–558. doi: 10.1093/brain/awm300.
Raymont, V., Salazar, A. M., Krueger, F., & Grafman, J. (2011). Studying injured minds – the Vietnam head injury study and 40 years of brain injury research. Frontiers in Neurology, 2, 15. doi: 10.3389/fneur.2011.00015.
Ree, M. J., & Earles, J. A. (1992). Intelligence is the best predictor of job performance. Current Directions in Psychological Science, 1(3), 86–89.
Reverberi, C., Toraldo, A., D’Agostini, S., & Skrap, M. (2005). Better without (lateral) frontal cortex? Insight problems solved by frontal patients. Brain, 128(12), 2882–2890. doi: 10.1093/brain/awh577.
Rostami, E., Krueger, F., Zoubak, S., Dal Monte, O., Raymont, V., Pardini, M., . . . Grafman, J. (2011). BDNF polymorphism predicts general intelligence after penetrating traumatic brain injury. PLoS One, 6(11), e27389. doi: 10.1371/journal.pone.0027389.
Salvi, C., Beeman, M., Bikson, M., McKinley, R., & Grafman, J. (2020). TDCS to the right anterior temporal lobe facilitates insight problem-solving. Scientific Reports, 10, 946. doi: 10.1038/s41598-020-57724-1.
Salvi, C., Bricolo, E., Franconeri, S., Kounios, J., & Beeman, M. (2015). Sudden insight is associated with shutting down visual inputs. Psychonomic Bulletin & Review, 22(6), 1814–1819. doi: 10.3758/s13423-015-0845-0.
Salvi, C., Bricolo, E., Kounios, J., Bowden, E. M., & Beeman, M. (2016). Insight solutions are correct more often than analytic solutions. Thinking & Reasoning, 22(4), 1–18. doi: 10.1080/13546783.2016.1141798.
Salvi, C., Simoncini, C., Grafman, J., & Beeman, M. (2020). Oculometric signature of switch into awareness? Pupil size predicts sudden insight whereas microsaccades predict problem-solving via analysis. NeuroImage, 217, 116933. doi: 10.1016/j.neuroimage.2020.116933.
Santarnecchi, E., Sprugnoli, G., Bricolo, E., Constantini, G., Liew, S. L., Musaeus, C. S., . . . Rossi, S. (2019). Gamma tACS over the temporal lobe increases the occurrence of Eureka! moments. Scientific Reports, 9, 5778. doi: 10.1038/s41598-019-42192-z.
Schoenberg, M. R., Lange, R. T., Brickell, T. A., & Saklofske, D. H. (2007). Estimating premorbid general cognitive functioning for children and adolescents using the American Wechsler Intelligence Scale for Children – Fourth edition: Demographic and current performance approaches. Journal of Child Neurology, 22(4), 379–388. doi: 10.1177/0883073807301925.


Schwab, K., Grafman, J., Salazar, A. M., & Kraft, J. (1993). Residual impairments and work status 15 years after penetrating head injury. Neurology, 43(1 Part 1), 95. doi: 10.1212/WNL.43.1_Part_1.95.
Shallice, T. (1988). From neuropsychology to mental structure. Cambridge University Press. doi: 10.1017/CBO9780511526817.
Shuren, J. E., & Grafman, J. (2002). The neurology of reasoning. Archives of Neurology, 59(6), 916–919.
Simon, H. A. (1973). The structure of ill structured problems. Artificial Intelligence, 4(3–4), 181–201. doi: 10.1016/0004-3702(73)90011-8.
Spikman, J. M., Timmerman, M. E., Milders, M. V., Veenstra, W. S., & van der Naalt, J. (2011). Social cognition impairments in relation to general cognitive deficits, injury severity, and prefrontal lesions in traumatic brain injury patients. Journal of Neurotrauma, 29(1), 101–111. doi: 10.1089/neu.2011.2084.
Sprugnoli, G., Rossi, S., Emmerdorfer, A., Rossi, A., Liew, S., Tatti, E., . . . Santarnecchi, E. (2017). Neural correlates of Eureka moment. Intelligence, 62, 99–118. doi: 10.1016/j.intell.2017.03.004.
Stern, Y. (2009). Cognitive reserve. Neuropsychologia, 47(10), 2015–2028. doi: 10.1016/j.neuropsychologia.2009.03.004.
Tik, M., Sladky, S., Luft, C., Willinger, D., Hoffmann, A., Banissy, M., . . . Windischberger, C. (2018). Ultra-high-field fMRI insights on insight: Neural correlates of the Aha!-moment. Human Brain Mapping, 39, 3241–3252.
Unterrainer, J. M., & Owen, A. M. (2006). Planning and problem solving: From neuropsychology to functional neuroimaging. Journal of Physiology-Paris, 99(4), 308–317. doi: 10.1016/j.jphysparis.2006.03.014.
Vakhtin, A. A., Ryman, S. G., Flores, R. A., & Jung, R. E. (2014). Functional brain networks contributing to the Parieto-Frontal Integration Theory of Intelligence. NeuroImage, 103, 349–354. doi: 10.1016/j.neuroimage.2014.09.055.
Waechter, R. L., Goel, V., Raymont, V., Kruger, F., & Grafman, J. (2013). Transitive inference reasoning is impaired by focal lesions in parietal cortex rather than rostrolateral prefrontal cortex. Neuropsychologia, 51(3), 464–471.
Wardlaw, C., Hicks, A. J., Sherer, M., & Ponsford, J. L. (2018). Psychological resilience is associated with participation outcomes following mild to severe traumatic brain injury. Frontiers in Neurology, 9, 563. doi: 10.3389/fneur.2018.00563.
Wechsler, D. (1958). The measurement and appraisal of adult intelligence (4th ed.). Philadelphia, PA: Williams & Wilkins Co. doi: 10.1037/11167-000.
Wechsler, D. (2001). Wechsler Test of Adult Reading: WTAR. San Antonio, TX: Psychological Corporation.
Wechsler, D. (2008a). WAIS-IV technical and interpretive manual. San Antonio, TX: Pearson.
Wechsler, D. (2008b). Wechsler adult intelligence scale – Fourth edition (WAIS-IV). San Antonio, TX: NCS Pearson.

20 Implications of Biological Research on Intelligence for Education and Public Policy

Kathryn Asbury and Diana Fields

Introduction

In his 2011 book, Incognito, Stanford neuroscientist David Eagleman asked us to:

Imagine for a moment that we are nothing but the product of billions of years of molecules coming together and ratcheting up through natural selection, that we are composed only of highways of fluids and chemicals sliding along roadways within billions of dancing cells, that trillions of synaptic conversations hum in parallel, that this vast egglike fabric of micron-thin circuitry runs algorithms undreamt of in modern science, and that these neural programs give rise to our decision making, loves, desires, fears, and aspirations. (Eagleman, 2011, p. 223)

Eagleman makes a case for the inherent loveliness of this materialist daydream. He says:

To me, that understanding would be a numinous experience, better than anything ever proposed in anyone’s holy text. (p. 224)

There is no doubt that finding we are little more than our biological history would have major policy implications, including for education, as has often been the case for holy texts. Our priority would be to design schools that could serve as studios to accommodate our “dancing cells,” creating challenges for education policy that differ markedly from those that currently occupy us. Our educational focus would be on nurturing individual capacities rather than improving mean achievement or narrowing gaps on broad attainment measures such as GPA. Equally, there can be no doubt that Eagleman’s “numinous experience” represents a dysfunctional, dystopian, inhuman, and possibly inhumane vision for some. When attempting to bridge the divide between science and policy it is important to remember that both perspectives matter; that both need to be understood and addressed.

Proponents of cognitive neuroscience and behavioral genetics have argued that their findings have relevance for public policy and, specifically, for education policy (e.g., Asbury & Plomin, 2013; Blakemore, 2018). However, although such arguments focus on biological markers which offer probabilistic predictions about human behavior, they should not be viewed as materialist because the vital role of social forces is clearly acknowledged in both fields. Proponents of intelligence research and testing have done the same thing, over a much longer period of time (e.g., Gottfredson, 1997; Pearson, 1914) and, in all three cases, the arguments put forward have proved divisive. Making judgments about "what we need" on the basis of "what we are" rather than "what we have" is highly controversial and fiercely contested territory. Proponents of biologically deterministic arguments ("what we are") are open to using biological predictors of school success – be they genetic, hormonal, or neurological – viewing them as effective (and perhaps unbiased) ways to select and track pupils more efficiently. By contrast, proponents of environmentally deterministic arguments ("what we have") reject such approaches as intrinsically unjust and are more likely to advocate for strategies that compensate for social and economic injustice, such as means-tested scholarships, contextual admissions, and affirmative action.

In this chapter we argue that understanding "what we need" does not benefit from deterministic or adversarial approaches and that a greater level of nuance is required. We represent both sides of these arguments and try to identify common ground and ways forward by looking at findings from, and implications of, intelligence research, cognitive neuroscience, and behavioral genetics.
Our discussion is based on our understanding that, although school has been found to make us (a bit) cleverer (Ritchie & Tucker-Drob, 2018), boosting intelligence is not, and should not be, the primary purpose of education. The bread and butter of education has always been supporting pupils in working towards the highest level of attainment they are capable of. Understanding and acknowledging the role of intelligence is central to achieving this.

Intelligence and Public Policy

In our view, attainment is the key that unlocks opportunities. If you want to be a doctor, teacher, lawyer, pharmacist, or engineer you must be accepted into University and be successful in attaining the appropriate degree. If you want to be a hairdresser, mechanic, chef, or nursery worker you need to attain the relevant professional qualifications and have a sufficient level of prior attainment to be accepted onto a course that provides them. No employer is going to waive their requirement for an academic, technical, or professional qualification on the basis of an intelligence test score. Therefore,
the most basic purpose of education has to be supporting pupils in acquiring – to the best of their ability – the skills and knowledge that support attainment, giving them access to the bits of paper that their educational and occupational future depends on. Understanding the etiology of attainment is therefore crucial to educational policy and practice. The importance of attainment is one major reason why intelligence matters in the context of public policy. Intelligence correlates with, and predicts, individual differences in capacity to attain those all-important bits of paper (e.g., Mackintosh & Mackintosh, 2011; Rimfeld, Kovas, Dale, & Plomin, 2016; Roth et al., 2015). For example, measures of intelligence at age 11 have been found to have good predictive validity for attainment at age 16 in a national sample of Scottish students (Deary, Strand, Smith, & Fernandes, 2007). Intelligence has also been shown, using a cross-sectional design with twin data, to be the heaviest phenotypic (and in fact genetic) hitter – compared to other correlates such as self-efficacy, health, and behavior problems – in explaining individual differences in attainment at age 16 (Krapohl et al., 2014). Cognitive ability is clearly a relevant factor, and the relationship between intelligence and attainment is an important one to understand, along with any moderators of that relationship (Johnson, Brett, & Deary, 2010). Whether operationalized as g or IQ, intelligence is reliable and stable over time, making it useful data for understanding attainment trajectories. Intelligence is also relevant to attainment in that controlling for its effects can help us to understand the role of non-cognitive traits and mechanisms such as self-regulation in explaining individual differences in attainment (Malanchini, Engelhardt, Grotzinger, Harden, & Tucker-Drob, 2019; Tucker-Drob & Harden, 2017). For example, Malanchini et al. 
(2019) found that executive function predicted reading and mathematics attainment, even after statistically correcting for the effects of intelligence, suggesting an opportunity for self-regulation-focused intervention. Alongside being predictive of attainment, measures of intelligence correlate with important outcomes that schools may, with appropriate support, be uniquely well positioned to help with. For example, IQ scores predict physical and mental illnesses and longevity (e.g. Calvin et al., 2010, 2017; Gale, Batty, Tynelius, Deary, & Rasmussen, 2010). Knowing this is relevant to the design and delivery of public health campaigns, many of which are communicated to children and their parents through schools. The convergence of findings showing that measures of intelligence are associated with the educational, occupational, and health and wellbeing outcomes that people care most about show clearly its relevance to social policy in general, and education policy in particular. Beyond the predictive validity of intelligence, and the ways in which it might be used, it is important to note that intelligence is already considered, and indeed screened for, in some aspects of education policy and educational practice. Measures of intelligence are used for selection, ability grouping, the
identification of additional needs, and the evaluation of interventions. We will consider the pros and cons of each of these uses of intelligence measures, and intelligence research, beginning with selection.

Selective educational institutions are found around the world and, in most cases, selection is based on assessments of intelligence such as Cognitive Ability Tests (CATs). In this sense, intelligence is viewed as cognitive "potential," or capacity for attainment. This seems reasonable given what we know about the predictive validity of intelligence for attainment. In the US this extends to University entrance and is relevant to a recent college admissions scandal in which wealthy parents essentially bought higher Scholastic Assessment Test (SAT) scores for their offspring in a bid to get them into elite colleges. One of the reasons parents felt the need to cheat in this way – beyond buying their children even more advantage than they already had – is that tests like the SAT reflect cognitive ability pretty accurately and, in this sense, they are not easy to "game" (Frey & Detterman, 2004). Intelligence tests can therefore be seen as representing a fairer basis for college admissions (cheating aside) than attainment which, through factors such as coursework or access to private tutors, can be much more easily "gamed" (Wai, Brown, & Chabris, 2018). Other factors such as sporting or musical attainment, or work experience, are likely to reflect systematic inequality at least as much as naturally occurring individual differences. Not every musical child has access to lessons or an instrument. This is precisely why colleges use SAT scores.
However, it is important to note that Higher Education in many countries around the world does not rely on cognitive ability testing and a case can be made that it is not necessary to rely on SATs in more equitable societies, especially those with a national curriculum in which the majority of young people have access to a roughly equivalent education. In such societies, attainment is a more useful metric as it more accurately reflects the complex interaction of intelligence, personality, and experience that will matter to university success (Smith-Woolley, Ayorech, Dale, von Stumm, & Plomin, 2018). However, where equality of educational opportunity is compromised it seems clear that intelligence scores are less biased than measures of attainment, and are less susceptible to factors such as private tutoring. IQ testing is also used to admit pupils into selective schools. In some cases, a selective admissions policy simply reflects the school’s desire to work only with able students, an approach that is common among private schools. However, in others, such as the UK grammar school system, the approach initially reflected the loftier post-war aim of levelling the playing field by ensuring that the brightest pupils had access to the “best” education, regardless of their social and economic circumstances. The logic was reasonable, and the motivation laudable, but UK grammar schools have now been largely abandoned. UK grammar schools did not in fact increase social mobility – their intended purpose – and, instead, exacerbated social inequalities (Boliver & Swift, 2011; Gorard & Siddiqui, 2018; Parker, Jerrim, Schoon, & Marsh, 2016). It turned
out the approach was too simplistic and too deterministic in its aims and understandings to succeed. The assumption is often made that the schools that achieve the best results are the "best" schools and that others should therefore strive to emulate them. However, given the correlation between intelligence (the basis for selection) and attainment, it is far more likely that their position in the league tables is a result of pupil characteristics than superior teaching (Detterman, 2016; Smith-Woolley, Pingault et al., 2018). Indeed, it has been shown convincingly that the vast majority of variance in students' educational outcomes is explained by individual student characteristics, primarily intelligence, and that school and teacher effects are comparatively small, in the region of 10% of the total variance (Detterman, 2016). Understanding the child is key to understanding their attainment. As Detterman argues: "as long as educational research fails to focus on students' characteristics we will never understand education or be able to improve it" (p. 1). Not acknowledging the primacy of student characteristics in explaining attainment variance leads to educational policies that make life unfairly difficult for inclusive schools in disadvantaged neighborhoods, and therefore for the pupils attending them. In a society where individuals are equally and effectively nurtured, intelligence testing should perhaps not be needed as a basis for selection. It should be reflected in prior attainment, along with other salient characteristics such as self-regulation and conscientiousness. However, the relative lack of bias in IQ-based entry criteria makes it important to consider when thinking about social justice in education.

Intelligence testing is also used for ability grouping within schools.
The evidence suggests this is harmful when it involves a blunt approach such as streaming, wherein a child is allocated to a top, middle, or bottom stream for all school subjects (Francis et al., 2017). A more nuanced approach, in which it is possible to be in different ability groups for, say, French and Physics, and in which there is the possibility of being moved up or down, reduces the harm to some extent. However, while studies of ability grouping suggest it confers a small advantage on the most able, it has been found to have a negative impact on everybody else. The majority of children and young people benefit most from being taught in mixed-ability classes (Steenbergen-Hu, Makel, & Olszewski-Kubilius, 2016). One way of accommodating this is to set different work for different groups based on ability within a class, an approach that is commonplace in elementary schools but much less so in secondary education. Steenbergen-Hu et al.'s second-order meta-analysis found that within-class ability grouping was beneficial to students, as was cross-grade subject grouping and special grouping for the gifted. However, this large, systematic, and rigorous study found no benefits of between-class grouping, and this did not differ for high, medium, or low-ability students. Overall, the evidence suggests that intelligence testing for ability grouping is not a good thing, particularly between classes, because ability grouping itself appears to do more harm (to the
majority) than good (Schofield, 2010). That said, mixed-ability teaching is often unpopular with schools and parents, particularly middle-class parents, rendering this a policy problem (Taylor et al., 2017).

Intelligence testing can be seen as making a more straightforwardly positive contribution in relation to the identification of additional needs. This is relevant in a variety of circumstances but is often not used as well as it could be. When Binet gave the world its first intelligence test, his aim was partly to distinguish behavior problems from learning problems (Nicolas, Andrieu, Croizet, Sanitioso, & Burman, 2013). If intelligence testing can identify an able child whose behavior masks their cognitive ability it can help to target appropriate support to that child, potentially enhancing both attainment and behavior. Intelligence testing is also used as a diagnostic tool for disorders including developmental delay, and has value in identifying children who will benefit from additional help and support, or perhaps a different pace of learning (faster or slower), and giving them access to that support. There is also an argument for using intelligence testing to identify "cognitive disadvantage" in the way that we already use means-testing to identify social and economic disadvantage. Children with low cognitive abilities, but no diagnosis of a learning disability, could be identified in this way (e.g., Holmes, Bryant, Gathercole, & CALM Team, 2019). Clearly, this is only worth doing if the result of such testing is that their disadvantage is compensated for in some way. We would argue that there may be positive outcomes to testing for, and compensating for, cognitive disadvantage through additional resourcing and support from the earliest days of education. Further research on the impact of such an approach is needed (Schneider & Kaufman, 2017).
Expensive policy interventions such as Head Start in the US and Sure Start in the UK are routinely evaluated for evidence of effectiveness. In both cases a major chosen indicator of effectiveness has been IQ gains, and studies have tended to find short-term cognitive gains (Puma et al., 2010; Shager et al., 2013) that peter out over time (Puma et al., 2012). This has led in the UK, during a period in which the politics of austerity have dominated, to drastic cuts in the Sure Start budget and the closure of many Sure Start Children’s Centers. By focusing on the effects on IQ to a greater extent than effects on parenting or behavior we run the risk of throwing the baby out with the bathwater. Intelligence is important – and predictive of attainment – but it is not the only important, or indeed the only predictive, factor. The evidence suggests that while Head Start and Sure Start do not boost intelligence in any sustainable way, they have other benefits that justify their cost (Ludwig & Phillips, 2007). In some of these examples the benefits of intelligence testing outweigh the harms, for example in removing bias in admissions to Higher Education. But in others, testing appears to cause more problems than it solves, including being used as a basis for between-class ability grouping and a justification for cuts in early intervention support. However, this mixed profile does not mean
that intelligence research and intelligence testing have nothing to offer to policy-makers. On the contrary, we argue that intelligence research can inform education policy in highly useful ways. Much of the discussion in this area has focused on either the identification of high intelligence or the improvement of intelligence, and there has been much less focus on the implications of low intelligence. If we could gain a better understanding of what life is really like for individuals with no diagnosed learning disabilities but an IQ at the lower end of the normal range (e.g., 70–85), a very under-researched area, it is difficult to imagine that the policy implications of this would not be significant. Indeed, such insight could inform education, health, welfare, and employment policies and practices, to name just a few (Gottfredson, 1997). This seems like a particularly important avenue for further research. In an increasingly data-driven world, intelligence data is some of the best (most valid and reliable) data that we have and, although it is not always used in optimal ways, as previously outlined, it would be foolish to ignore the evidence of its reliability and usefulness. However, we go a step further when we claim that biological markers for intelligence are relevant policy considerations and, therefore, it is important that such a step is justified and understood before being taken.

Cognitive Neuroscience and Public Policy

Some commentators have argued that cognitive neuroscience is, at best, a distraction from the business of understanding and supporting learners. For instance, Bowers (2016a) argued that neuroscience has achieved nothing tangible in terms of motivating new and effective teaching methods, and that its claims of usefulness to education have been vastly oversold. He argued that behavioral measures tell us far more about children's cognitive capacities than brain measures can, and that the case for policy change or for incorporating educational neuroscience into teacher training programs should be dismissed as premature (e.g., Frith et al., 2011). Bowers argued that neuroscience's contributions to education to date have been trivial (telling us things we already knew), misleading, or unwarranted. More specifically, he claimed that neuroscience is unhelpful in telling us whether education should target impaired skills (correction) or non-impaired skills (enhancement). For Bowers, the only outcomes of interest are changes in behavior, not changes in brains: "Neuroscientists cannot help educators but educators can help neuroscientists" (Bowers, 2016a, p. 1). He concluded that we should focus our attention on what psychology, not neuroscience, can contribute to education because psychology is the study of behavior and, in school, "behavior is the only relevant metric" (p. 8). Not surprisingly, this argument generated responses from the cognitive neuroscience community. Some retorted that cognitive neuroscience never
promised a direct leap from "brain scan to classroom plan" (Howard-Jones et al., 2016, p. 621). Others conceded some points, agreeing that educational neuroscience has been oversold – "irrational exuberance" (Gabrieli, 2016, p. 613) – and that the primary outcomes of education are behavioral, while also making counter-arguments. Gabrieli (2016) claimed that novel insights from neuroscience support arguments for individualized education, an argument also made by behavioral geneticists (e.g., Asbury, 2015). Gabrieli argued that: "Better teaching and learning would occur if important student characteristics could be identified at the outset so that curriculum was individualised rather than being offered uniformly on a trial and error basis" (p. 615). He cited Supekar et al. (2013) as an instance in which neuroscience allowed researchers to predict which learners were more likely to benefit from intervention before the intervention even took place. Bowers remains unconvinced and is not the only skeptic (Bowers, 2016b). Wastell and White (2012), in a paper called "Blinded by neuroscience," discussed the influence of cognitive neuroscience on social policy, specifically early intervention policies. They argued that neuroscience has been used problematically to give such policies "the lustre of science" (p. 397) and that too much has been taken on trust:

The co-option of neuroscience has medicalized policy discourse, silencing vital moral debate and pushing practice in the direction of standardized, targeted interventions rather than simpler forms of family and community support, which can yield more sustainable results. (p. 397)

Much neuroscience-informed policy has relied on an understanding of the first three years of life as a “critical period” (Bruer, 1999), and Wastell and White (2012) argue this has led to an unhelpful shift of focus from simply helping to a policy of “screen and intervene” (Rose, 2010, p. 410). They conclude that cognitive neuroscience is simply not policy-ready, and that early-intervention and education policies that depend on evidence from cognitive neuroscience run the risk of doing more harm than good. In contrast to this rather bleak outlook on the cognitive neuroscience–public policy interface, others have made a cautious case for acknowledging challenges and avoiding hype but, at the same time, building bridges (Ansari, Coch, & de Smedt, 2011; Gabrieli, 2016). They argue that we need to focus on getting an infrastructure in place for neuroscience and education to benefit from one another when that is appropriate and beneficial. Cognitive neuroscience has made rapid progress since its inception in the 1990s, and there is no sign that this rate of progress is slowing down. One key focus has involved addressing whether theories in cognitive psychology, of vital importance to education, are biologically plausible. These commentators argue that we need to put mechanisms, such as communication channels between scientists and educators, in place for any benefits from this body of research to be used wisely and well. Much cognitive neuroscience has focused on aspects of attainment such as reading (e.g., Pugh et al., 1996) or mathematics (e.g., Dehaene, Piazza,
Pinel, & Cohen, 2003; Supekar et al., 2013) rather than intelligence per se. However, we know from neuroimaging work that intelligence is correlated with structural and functional brain parameters (e.g. Ansari et al., 2011; Haier, 2016; Penke et al., 2012). An understanding of individual differences in the biological markers and mechanisms associated with intelligence may help us to design interventions that can enhance or support classroom learning and student attainment. However, Ansari et al. (2011) argue that, without a concerted effort to build bridges between cognitive neuroscientists and educators, neuroscience will amount to little more than a passing educational fad. They encourage the cognitive neuroscience and education communities to be realistic about what neuroscience can offer to educational policies and practices and not to expect a straightforward translation from laboratory findings to classroom practice. The process is invariably much messier than that. Instead, Ansari et al. (2011) argue that we should focus on building a joint, non-hierarchical community in which common questions and a common language can develop. It is interesting to note that there is considerable openness to this, from both sides, and indeed evidence of attempts to support it such as the Wellcome Trust’s I’m a Scientist, Get Me Out of Here initiative in the UK in which teachers and pupils are offered the opportunity to ask questions and engage in discussion with learning scientists via an online platform (https://imascientist.org.uk/). In an analysis of how neuroscience is used in policy, parenting, and everyday life, Broer, Cunningham-Burley, Deary, and Pickersgill (2016) found that neuroscience tends to receive a positive reception in society. 
It is interesting to note that viewing the brain as an influence on behavior is seen as helpful, whereas viewing DNA as an influence on behavior is viewed by a larger section of society as somehow nefarious, even though genetic effects are expressed in the brain. It is interesting too that intelligence itself, although built on much more substantial foundations than educational neuroscience, is less easily accepted as useful by some sections of society. It is possible that cognitive neuroscience will, in time, make valuable contributions to both education and social policy, but we are not there yet (Sigman, Peña, Goldin, & Ribeiro, 2014). The best evidence we currently have of the relevance of cognitive neuroscience to education relates to the brain and learning benefits of sleep (Hagenauer, Perryman, Lee, & Carskadon, 2009; Owens, Belon, & Moss, 2010), breakfast (Grantham-McGregor, 2005), and exercise (Erickson et al., 2011). Policies and practices that support good nutrition and healthy moderation in sleep and exercise are a good thing from many perspectives, and harmful from none. It is positive that the neuroscientific evidence available so far supports them but we would be safe to support these strategies in any case – the risks or barriers to ensuring all students have adequate breakfast, sleep, and exercise are logistical rather than existential. However, selecting students for an intervention on the basis of a brain scan is not currently supported by the evidence – and no neuroscientist to our
knowledge claims that it is. We believe that cognitive neuroscience will increasingly add value to both education and policy but that, as Broer et al. (2016) have argued, it will be just one tool in the toolbox.

DNA and Public Policy

The same can be said for genetics – it has the potential to be a useful tool in education's policy and practice toolbox but must be used in combination with other tools. While it is obvious that the brain must have something to do with learning, an observation ingrained in popular culture and language with terms like "brainy," "brainbox," and "brainless" used widely to denote intelligence or its perceived absence, this is not true for DNA. We don't have Baby Mozart products to nurture musical alleles in infants, or Gurgle with Galton programs to enhance communication in babies with a predisposition for high or low levels of speech and language. We're being facetious here, but this is a real difference, which direct-to-consumer genetic testing might change – for better or worse (probably worse, as such an approach would inevitably be too simplistic, depending on a fallacious belief in genetic determinism). It tells us something interesting about how we view biological markers for intelligence, namely that the general public view the brain as malleable but fear that genes are destiny. We are more deterministic about one biological correlate of intelligence (genes) than we are about the other (the brain). It is not clear why this should be the case – it is unlikely that public understanding of cognitive neuroscience is more sophisticated than public understanding of behavioral genetics – but there is greater resistance to genetic explanations of intelligence and attainment than to brain-based predictors, most likely for historical reasons related to the highly deterministic eugenics movement that gained ground in the first half of the twentieth century. That said, it is clear that genetic differences between people are correlated with individual differences in intelligence (Plomin & Deary, 2015).
The heritability of intelligence starts quite low in infancy but increases through the school years to over 60%, a pattern that has been observed in large twin samples in several countries (Haworth et al., 2010). As children grow and have more opportunities to choose their own experiences (genotype–environment correlation), genetic differences explain an increasing proportion of the variance in cognitive ability. This means that policies that affect children raised in the same family in the same way (shared environmental factors) are unlikely to have a meaningful impact on individual differences in cognitive ability after the preschool years. Even if they do have an impact on IQ in the preschool years this is likely to fade out over time (Puma et al., 2012). Knowing about the etiology of intelligence, and how it changes over the course of development, can enhance our prediction about the types of policies
and practices that have the best chance of success. The evidence suggests that policies that support genotype–environment correlations are likely to be most useful. For instance, by ensuring that all pupils have an equal canteen of opportunities to choose from we could potentially support optimal development (Asbury & Plomin, 2013). Although individual differences would remain, we would know that they were not caused by systematic inequality (Sokolowski & Ansari, 2018). Behavioral genetic research makes a strong case for enhanced choice and individualization (e.g., Asbury & Plomin, 2013; Sokolowski & Ansari, 2018), based on genetically influenced individual differences in intelligence, personality, motivation, and interests, but this case can also be made without recourse to genetics, given the problems caused by widespread deterministic misconceptions (e.g., Crosswaite & Asbury, 2016). That said, in spite of the protestations that often go alongside discussions of genetics and intelligence, several studies show that the public, especially teachers, accept that there are moderate to substantial genetic influences on individual differences in intelligence, even when their knowledge of the science in this area is low (Crosswaite & Asbury, 2019; Walker & Plomin, 2005). We support Sokolowski and Ansari’s (2018) case that educational policies and educational interventions should focus on achieving equity rather than equality. They argue that equality in education involves providing all students with equal resources and learning opportunities with a view to achieving equal outcomes, but that this is intrinsically unfair because some pupils (partly for biological, including genetic, reasons) need more resources than others. Equity is a fairer goal because its aim is to distribute resources in a way that will eliminate systematic inequality of outcome, e.g., disadvantage associated with family income or school catchment area. 
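The twin-based heritability figures mentioned above rest on a simple comparison of identical (MZ) and fraternal (DZ) twin correlations. As a rough illustration only – the chapter does not present this calculation, and the correlation values below are invented, chosen merely to echo a heritability above 60% – Falconer's classic formulas decompose trait variance from those two correlations:

```python
def falconer_ace(r_mz, r_dz):
    """Decompose trait variance into additive-genetic (A), shared-
    environment (C), and non-shared-environment (E) components from
    MZ and DZ twin correlations, using Falconer's formulas."""
    a2 = 2 * (r_mz - r_dz)   # heritability estimate
    c2 = 2 * r_dz - r_mz     # shared environment
    e2 = 1 - r_mz            # non-shared environment (plus measurement error)
    return a2, c2, e2

# Hypothetical correlations for an adolescent sample (not real data)
a2, c2, e2 = falconer_ace(r_mz=0.82, r_dz=0.50)
```

Because MZ twins share all their segregating DNA while DZ twins share half on average, doubling the gap between the two correlations gives a quick heritability estimate; modern twin studies fit fuller structural equation models, but the underlying logic is the same.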
Beyond that, genetics (and also cognitive neuroscience) can help us to understand why some pupils find it easier to learn and achieve than others, and policies that support individualization in education may potentially help teachers work better with diverse individuals. We have known that intelligence is heritable for many decades now, but a highly significant recent development is that we now have a polygenic score, called EA3, which explains a small but meaningful chunk of the variance in intelligence (Allegrini et al., 2019; Lee et al., 2018). A polygenic score is a collection of genetic markers (over 1000 in the case of EA3) which have all been found to be correlated with a particular outcome. Although the number of genetic variants represented in EA3 is vast, clearly indicating the complexity involved in understanding pathways from genotype to phenotype, it is important to note that the same can be said for complex social predictors such as socio-economic status (SES). This has important implications for policy-makers and for schools and teachers, and raises pressing questions. A polygenic score that explains up to 15% of the variance in intelligence is roughly as powerful as an environmental predictor of outcomes such as socio-economic status (von Stumm et al., 2020). If we take seriously the argument presented earlier,

409


k. asbury and d. fields

that “cognitive disadvantage” may have value as a trigger for the provision of additional resources and support, then a DNA-based predictor would potentially help us to identify those at risk of cognitive disadvantage from birth and to intervene early. The main advantage of such an approach would be that early intervention could be focused on those likely to be in greatest need of it. However, it is also clear that such an approach may come with considerable risks related to false positives (highly likely, as is also the case with socio-economic risk indicators), stigma (including self-stigma), expectancy effects, and active discrimination. Research is needed to establish whether (and at what level of validity and predictiveness) the benefits of such an approach would outweigh the costs. That said, it is clear that findings from behavioral genetic research, alongside those from cognitive neuroscience and intelligence research, are highly relevant to education and wider aspects of social policy. In an era in which there is clamoring for more evidence-informed education and policy it is wrong to ignore these large and robust bodies of evidence.
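To make the mechanics behind polygenic scores such as EA3 concrete, a score of this kind is, arithmetically, a weighted sum of an individual’s effect-allele counts, with each weight being the effect size estimated for that variant in a genome-wide association study. The following is a deliberately toy sketch: the variant names, weights, and dosages are invented for illustration and are not taken from EA3 or any real study.

```python
# Toy sketch of polygenic-score arithmetic. Real scores such as EA3 sum over
# thousands of variants with GWAS-estimated weights; the three variants,
# weights, and dosages below are entirely made up for illustration.

def polygenic_score(dosages, weights):
    """Weighted sum of effect-allele counts (each dosage is 0, 1, or 2)."""
    return sum(dosages[variant] * weights[variant] for variant in weights)

# Hypothetical per-variant effect sizes (regression weights from a GWAS):
weights = {"rs_A": 0.02, "rs_B": -0.01, "rs_C": 0.015}

# One hypothetical individual's effect-allele counts at those variants:
dosages = {"rs_A": 2, "rs_B": 1, "rs_C": 0}

score = polygenic_score(dosages, weights)  # 2*0.02 + 1*(-0.01) + 0*0.015 = 0.03
```

In practice such raw scores are standardized against a reference sample before individuals are compared; even then, as noted above, a score of this kind explains only a modest share of the variance in the outcome it predicts.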

Major Challenges and Ways Ahead

We argue that future priorities for research in this area are:

1. A detailed empirical description of life with an IQ at the low end of the normal range (see Gottfredson, 1997).
2. Interdisciplinary studies of the risks and benefits of taking “cognitive disadvantage” into account in the planning and resourcing of education and aspects of social welfare.
3. Interdisciplinary studies of the risks and benefits of using necessarily imprecise biological risk markers to predict, and meet, educational needs.
4. Increased focus on understanding the mechanisms via which genetic and neural factors exert their influence on intelligence, and via which intelligence influences other outcomes.

It is possible that making predictions about educational needs on the basis of intelligence testing in combination with DNA screening or brain scanning will never be a cost-effective basis for policy. Even if that is not the case, however, it is possible that identifying a biological basis for disadvantage could generate stigma and discrimination that nullifies any benefits gained. It is important to conduct research to establish whether this is the case, and whether anything can be done to minimize risk, allowing neuroscience and genetics to make a positive contribution to social policies and educational practices either now or in the future. Studios to accommodate our “dancing cells” will never be enough because education is a more active, and interactive, process than that. However, a focus on nurturing individual natures through increased choice and personalization is, we argue, something to be welcomed.

Implications of Biological Research on Intelligence

References

Allegrini, A. G., Selzam, S., Rimfeld, K., von Stumm, S., Pingault, J. B., & Plomin, R. (2019). Genomic prediction of cognitive traits in childhood and adolescence. Molecular Psychiatry, 24(6), 819–827.
Ansari, D., Coch, D., & De Smedt, B. (2011). Connecting education and cognitive neuroscience: Where will the journey take us? Educational Philosophy and Theory, 43(1), 37–42.
Asbury, K. (2015). Can genetics research benefit educational interventions for all? Hastings Center Report, 45(S1), S39–S42.
Asbury, K., & Plomin, R. (2013). G is for genes: The impact of genetics on education and achievement, Vol. 24. Chichester, UK: John Wiley & Sons.
Blakemore, S. J. (2018). Inventing ourselves: The secret life of the teenage brain. London: Penguin Random House.
Boliver, V., & Swift, A. (2011). Do comprehensive schools reduce social mobility? The British Journal of Sociology, 62(1), 89–110.
Bowers, J. S. (2016a). The practical and principled problems with educational neuroscience. Psychological Review, 123(5), 600–612.
Bowers, J. S. (2016b). Psychology, not educational neuroscience, is the way forward for improving educational outcomes for all children: Reply to Gabrieli (2016) and Howard-Jones et al. (2016). Psychological Review, 123(5), 628–635.
Broer, T., Cunningham-Burley, S., Deary, I., & Pickersgill, M. (2016). Neuroscience, policy and family life. Edinburgh: University of Edinburgh.
Bruer, J. T. (1999). The myth of the first three years: A new understanding of early brain development and lifelong learning. New York: Free Press.
Calvin, C. M., Batty, G. D., Der, G., Brett, C. E., Taylor, A., Pattie, A., . . . & Deary, I. J. (2017). Childhood intelligence in relation to major causes of death in 68 year follow-up: Prospective population study. British Medical Journal, 357(8112), j2708.
Calvin, C. M., Deary, I. J., Fenton, C., Roberts, B. A., Der, G., Leckenby, N., & Batty, G. D. (2010). Intelligence in youth and all-cause-mortality: Systematic review with meta-analysis. International Journal of Epidemiology, 40(3), 626–644.
Crosswaite, M., & Asbury, K. (2016). “Mr Cummings clearly does not understand the science of genetics and should maybe go back to school on the subject”: An exploratory content analysis of the online comments beneath a controversial news story. Life Sciences, Society and Policy, 12(1), 11.
Crosswaite, M., & Asbury, K. (2019). Teacher beliefs about the aetiology of individual differences in cognitive ability, and the relevance of behavioural genetics to education. British Journal of Educational Psychology, 89(1), 95–110.
Deary, I. J., Strand, S., Smith, P., & Fernandes, C. (2007). Intelligence and educational achievement. Intelligence, 35(1), 13–21.
Dehaene, S., Piazza, M., Pinel, P., & Cohen, L. (2003). Three parietal circuits for number processing. Cognitive Neuropsychology, 20(3–6), 487–506.
Detterman, D. K. (2016). Education and intelligence: Pity the poor teacher because student characteristics are more significant than teachers or schools. The Spanish Journal of Psychology, 19(e93), 1–11.
Eagleman, D. (2011). Incognito: The hidden life of the brain. New York: Pantheon Books.
Erickson, K. I., Voss, M. W., Prakash, R. S., Basak, C., Szabo, A., Chaddock, L., . . . Wojcicki, T. R. (2011). Exercise training increases size of hippocampus and improves memory. Proceedings of the National Academy of Sciences, 108(7), 3017–3022.
Francis, B., Connolly, P., Archer, L., Hodgen, J., Mazenod, A., Pepper, D., . . . Travers, M. C. (2017). Attainment grouping as self-fulfilling prophesy? A mixed methods exploration of self confidence and set level among year 7 students. International Journal of Educational Research, 86, 96–108.
Frey, M. C., & Detterman, D. K. (2004). Scholastic assessment or g? The relationship between the scholastic assessment test and general cognitive ability. Psychological Science, 15(6), 373–378.
Frith, U., Bishop, D., Blakemore, C., Blakemore, S. J., Butterworth, B., & Goswami, U. (2011). Neuroscience: Implications for education and lifelong learning. The Royal Society.
Gabrieli, J. D. (2016). The promise of educational neuroscience: Comment on Bowers (2016). Psychological Review, 123(5), 613–619.
Gale, C. R., Batty, G. D., Tynelius, P., Deary, I. J., & Rasmussen, F. (2010). Intelligence in early adulthood and subsequent hospitalisation and admission rates for the whole range of mental disorders: Longitudinal study of 1,049,663 men. Epidemiology, 21(1), 70–77.
Gorard, S., & Siddiqui, N. (2018). Grammar schools in England: A new analysis of social segregation and academic outcomes. British Journal of Sociology of Education, 39(7), 909–924.
Gottfredson, L. S. (1997). Why g matters: The complexity of everyday life. Intelligence, 24(1), 79–132.
Grantham-McGregor, S. (2005). Can the provision of breakfast benefit school performance? Food and Nutrition Bulletin, 26(2 suppl 2), S144–S158.
Hagenauer, M. H., Perryman, J. I., Lee, T. M., & Carskadon, M. A. (2009). Adolescent changes in the homeostatic and circadian regulation of sleep. Developmental Neuroscience, 31(4), 276–284.
Haier, R. J. (2016). The neuroscience of intelligence. Cambridge University Press.
Haworth, C. M., Wright, M. J., Luciano, M., Martin, N. G., de Geus, E. J., van Beijsterveldt, C. E., . . . Kovas, Y. (2010). The heritability of general cognitive ability increases linearly from childhood to young adulthood. Molecular Psychiatry, 15(11), 1112–1120.
Holmes, J., Bryant, A., Gathercole, S. E., & CALM Team. (2019). Protocol for a transdiagnostic study of children with problems of attention, learning and memory (CALM). BMC Pediatrics, 19(1), 10.
Howard-Jones, P. A., Varma, S., Ansari, D., Butterworth, B., De Smedt, B., Goswami, U., . . . & Thomas, M. S. (2016). The principles and practices of educational neuroscience: Comment on Bowers (2016). Psychological Review, 123(5), 620–627.
Johnson, W., Brett, C. E., & Deary, I. J. (2010). The pivotal role of education in the association between ability and social class attainment: A look across three generations. Intelligence, 38(1), 55–65.


Krapohl, E., Rimfeld, K., Shakeshaft, N. G., Trzaskowski, M., McMillan, A., Pingault, J. B., . . . & Plomin, R. (2014). The high heritability of educational achievement reflects many genetically influenced traits, not just intelligence. Proceedings of the National Academy of Sciences, 111(42), 15273–15278.
Lee, J. J., Wedow, R., Okbay, A., Kong, E., Maghzian, O., Zacher, M., . . . & Fontana, M. A. (2018). Gene discovery and polygenic prediction from a genome-wide association study of educational attainment in 1.1 million individuals. Nature Genetics, 50(8), 1112–1121.
Ludwig, J., & Phillips, D. A. (2007). The benefits and costs of Head Start (No. w12973). Cambridge, MA: National Bureau of Economic Research.
Mackintosh, N. J. (2011). IQ and human intelligence. Oxford University Press.
Malanchini, M., Engelhardt, L. E., Grotzinger, A. D., Harden, K. P., & Tucker-Drob, E. M. (2019). “Same but different”: Associations between multiple aspects of self-regulation, cognition, and academic abilities. Journal of Personality and Social Psychology, 117(6), 1164–1188.
Nicholas, S., Andrieu, B., Croizet, J. C., Sanitioso, R. B., & Burman, J. T. (2013). Sick? Or slow? On the origins of intelligence as a psychological object. Intelligence, 41(5), 699–711.
Owens, J. A., Belon, K., & Moss, P. (2010). Impact of delaying school start time on adolescent sleep, mood, and behavior. Archives of Pediatrics & Adolescent Medicine, 164(7), 608–614.
Parker, P. D., Jerrim, J., Schoon, I., & Marsh, H. W. (2016). A multination study of socioeconomic inequality in expectations for progression to higher education: The role of between-school tracking and ability stratification. American Educational Research Journal, 53(1), 6–32.
Pearson, K. (1914). On the handicapping of the first-born, Vol. 10. Cambridge University Press.
Penke, L., Maniega, S. M., Bastin, M. E., Hernández, M. V., Murray, C., Royle, N. A., . . . Deary, I. J. (2012). Brain white matter tract integrity as a neural foundation for general intelligence. Molecular Psychiatry, 17(10), 1026–1030.
Plomin, R., & Deary, I. J. (2015). Genetics and intelligence differences: Five special findings. Molecular Psychiatry, 20(1), 98–108.
Pugh, K. R., Shaywitz, B. A., Shaywitz, S. E., Constable, R. T., Skudlarski, P., Fulbright, R. K., . . . Gore, J. C. (1996). Cerebral organization of component processes in reading. Brain, 119(4), 1221–1238.
Puma, M., Bell, S., Cook, R., Heid, C., Broene, P., Jenkins, F., . . . Downer, J. (2012). Third grade follow-up to the Head Start impact study: Final report. OPRE Report 2012-45b. Washington, DC: Administration for Children & Families.
Puma, M., Bell, S., Cook, R., Heid, C., Shapiro, G., Broene, P., . . . Ciarico, J. (2010). Head Start impact study: Final report. Washington, DC: Administration for Children & Families.
Rimfeld, K., Kovas, Y., Dale, P. S., & Plomin, R. (2016). True grit and genetics: Predicting academic achievement from personality. Journal of Personality and Social Psychology, 111(5), 780–789.
Ritchie, S. J., & Tucker-Drob, E. M. (2018). How much does education improve intelligence? A meta-analysis. Psychological Science, 29(8), 1358–1369.




Rose, N. (2010). “Screen and intervene”: Governing risky brains. History of the Human Sciences, 23(1), 79–105.
Roth, B., Becker, N., Romeyke, S., Schäfer, S., Domnick, F., & Spinath, F. M. (2015). Intelligence and school grades: A meta-analysis. Intelligence, 53, 118–137.
Schneider, W. J., & Kaufman, A. S. (2017). Let’s not do away with comprehensive cognitive assessments just yet. Archives of Clinical Neuropsychology, 32(1), 8–20.
Schofield, J. W. (2010). International evidence on ability grouping with curriculum differentiation and the achievement gap in secondary schools. Teachers College Record, 112(5), 1492–1528.
Shager, H. M., Schindler, H. S., Magnuson, K. A., Duncan, G. J., Yoshikawa, H., & Hart, C. M. (2013). Can research design explain variation in Head Start research results? A meta-analysis of cognitive and achievement outcomes. Educational Evaluation and Policy Analysis, 35(1), 76–95.
Sigman, M., Peña, M., Goldin, A. P., & Ribeiro, S. (2014). Neuroscience and education: Prime time to build the bridge. Nature Neuroscience, 17(4), 497–502.
Smith-Woolley, E., Ayorech, Z., Dale, P. S., von Stumm, S., & Plomin, R. (2018). The genetics of university success. Scientific Reports, 8(1), 14579.
Smith-Woolley, E., Pingault, J. B., Selzam, S., Rimfeld, K., Krapohl, E., von Stumm, S., . . . Kovas, Y. (2018). Differences in exam performance between pupils attending selective and non-selective schools mirror the genetic differences between them. NPJ Science of Learning, 3(1), 3.
Sokolowski, H. M., & Ansari, D. (2018). Understanding the effects of education through the lens of biology. NPJ Science of Learning, 3(1), 17.
Steenbergen-Hu, S., Makel, M. C., & Olszewski-Kubilius, P. (2016). What one hundred years of research says about the effects of ability grouping and acceleration on K–12 students’ academic achievement: Findings of two second-order meta-analyses. Review of Educational Research, 86(4), 849–899.
Supekar, K., Swigart, A. G., Tenison, C., Jolles, D. D., Rosenberg-Lee, M., Fuchs, L., & Menon, V. (2013). Neural predictors of individual differences in response to math tutoring in primary-grade school children. Proceedings of the National Academy of Sciences, 110(20), 8230–8235.
Taylor, B., Francis, B., Archer, L., Hodgen, J., Pepper, D., Tereshchenko, A., & Travers, M. C. (2017). Factors deterring schools from mixed attainment teaching practice. Pedagogy, Culture & Society, 25(3), 327–345.
Tucker-Drob, E. M., & Harden, K. P. (2017). A behavioral genetic perspective on noncognitive factors and academic achievement. Genetics, Ethics and Education, 134–158.
von Stumm, S., Smith-Woolley, E., Ayorech, Z., McMillan, A., Rimfeld, K., Dale, P., & Plomin, R. (2020). Predicting educational achievement from genomic measures and socioeconomic status. Developmental Science, 23(3), e12925.
Wai, J., Brown, M., & Chabris, C. (2018). Using standardized test scores to include general cognitive ability in education research and policy. Journal of Intelligence, 6(3), 37.


Walker, S. O., & Plomin, R. (2005). The nature–nurture question: Teachers’ perceptions of how genes and the environment influence educationally relevant behaviour. Educational Psychology, 25(5), 509–516.
Wastell, D., & White, S. (2012). Blinded by neuroscience: Social policy, the family and the infant brain. Families, Relationships and Societies, 1(3), 397–414.


21 Vertical and Horizontal Levels of Analysis in the Study of Human Intelligence

Robert J. Sternberg

Neuroscientific theories of intelligence view intelligence as localized in the brain (e.g., Barbey, 2018; Duncan et al., 2000; Haier, 2016). It would be hard to disagree with the idea that intelligence is somehow localized in the brain. Aside from reflexes, intelligent human behavior emanates from the brain, which in turn is affected by a variety of bodily systems. But is intelligence 100% biological, as Haier (2016) at least claims, or is there some benefit in viewing intelligence through a larger lens? Is the argument over whether intelligence is 100% biological even worth having? In some sense, as a reviewer of this article pointed out, intelligence must be 100% biological in that the effects of environment, including culture, are filtered through brain mechanisms. But in that sense there is no viable alternative scientific hypothesis that would make the claim scientifically engaging – of course human behavior emanates, at some level, from the action of the brain. (Aristotle believed the heart to be the organ of intelligence – see Sternberg (1990) – but views such as his are no longer scientifically viable.) The question is whether that intelligent behavior and the thinking behind it – what emanates from the brain – is culturally and otherwise contextually mediated or not, and cultural psychologists, at least, argue that it always is.

Early psychometricians spent a great deal of time arguing over which of their factorial theories was correct (see Mackintosh, 2011; Walrath, Willis, Dumont, & Kaufman, 2020, for reviews of these theories). As it later turned out, this was a waste of time, energy, and journal space. The various theories were, by and large, hierarchical factor models of intelligence, as shown by Carroll (1993).
The various theories, such as Spearman’s (1927) and Thurstone’s (1938) theories, were compatible with each other if the theories were viewed hierarchically rather than as competing at the same level. Not all theories fit neatly into Carroll’s hierarchical model. For example, Guilford’s (1967, 1982) model contained abilities not included in Carroll’s model. Guilford’s model was seriously flawed (Horn & Knapp, 1973). But this was a structural problem: The cube structure (containing, at different points in time, different numbers of abilities but ending at 180 in Guilford’s later career) just did not work. Some of the abilities may have been reasonable, but not in the context of the cube structure in which they were presented.

Before proceeding, it is worth noting two things. First, although there are many different definitions of intelligence, intelligence by almost any definition involves the ability to adapt to the environment (see, e.g., Gottfredson, 1997; “Intelligence and its measurement,” 1921; Sternberg & Detterman, 1986). Evolutionarily, certainly adaptation to the environment is what matters, as individuals (or, to be more precise, genes) or as species, in that those that do not adapt disappear. In a sense, disappearance of genes may be the ultimate exemplar of lack of adaptation. Second, it may be worthwhile to make a distinction between intelligence and manifestations of intelligence. Intelligence is a hypothetical construct. It cannot be measured directly. Presumably it is the result of some complex combination of genes acting in concert with environmental forces, but no one knows exactly what genes, how those genes combine, and how environmental forces affect their expression, through epigenetics or whatever. Intelligence can be assessed and studied only through its manifestations. Items on any kind of test of intelligence, at best, assess intelligence only very indirectly. Even biologically based measures (various EEG or ERP measures, brain volume, brain volume normalized on the basis of other factors, various fMRI-based indices) are measuring manifestations, in this case biological manifestations of intelligence. They are not measuring the thing in itself, which would be hard as no one knows exactly what “it” is. All measures assess only manifestations of intelligence. And whereas intelligence itself may be an unknown largely biological phenomenon, it is the manifestations of intelligence that we see in everyday life.
And, I will argue in this chapter, however constant intelligence may be, the manifestations of intelligence can vary across time and place and can be much broader than whatever is assessed by intelligence tests. I propose in this chapter that the same argument that applied to different (and largely legitimate) psychometric models of intelligence applies as well more broadly – that there are different approaches to the study of intelligence that all have something to offer and that focusing only on one level inevitably presents an incomplete analysis of human intelligence. On this view, arguing over whether intelligence is or is not 100% biological is as futile a waste of time as was the argument between Spearman and Thurstone over whether intelligence comprises an encompassing general factor or, instead, a set of seven primary mental abilities. Even Spearman and Thurstone eventually recognized that the theories were, at some level, compatible. My own attempt to show that information-processing analysis is somehow more basic than factor analysis (Sternberg, 1977) was also a waste of time. The factor method and the information-processing method simply looked at different sources of variation – person variation, in the case of the factor method, and stimulus variation, in the case of the information-processing method. We do not need senseless arguments. There are enough real ones to be had! We would do better to learn from the history of the field of intelligence rather than to repeat it (see Sternberg, 2019a).

In the remainder of this chapter, I will discuss levels of analysis in the study of human intelligence. I will discuss two kinds of levels: vertical and horizontal. Vertical refers to the level of specificity at which intelligence is studied. Horizontal refers to the range of knowledge and skills that are encompassed within the conception of intelligence. I should say that both types of levels are controversial within the field (see chapters in Sternberg, 2020). The goal in thinking of levels of analysis is to integrate, in a sense, the study of intelligence through diverse metaphors of mind (Sternberg, 1985, 1990).

Vertical Levels

The analysis of vertical levels does not question a biological substrate of intelligence. Rather, it questions whether the exact biological structures and functions are the same at different levels of analysis. Table 21.1 summarizes the vertical levels considered here. Levels can be divided into spatial and temporal characteristics of intelligence. Each is considered in turn. Spatial characteristics refer to locating intelligence in places – whether cultural, psychometric (factors of the mind), or biological (parts of the brain), in one sense or another. Temporal characteristics refer to locating intelligence over time – whether through historical epochs over long periods of time, mental processes over short periods of time, or neuronal transmission over even shorter periods of time.

Table 21.1 Vertical levels of analysis for the study of human intelligence.

Level of Analysis            Spatial                        Temporal
Contextual                   Cultural/Subcultural           Epochal
Molar: Structural/Process    Psychometric                   Information Processing
Molecular: Biological        Neuroscientific: Anatomical    Neuroscientific: Physiological

Cultural Issues in Defining Intelligence

A number of researchers, most of them cultural psychologists, have argued that the nature of intelligence varies in distinctive ways across cultures (e.g., Berry, 1974; Laboratory of Comparative Human Cognition, 1982; Serpell, 2000; Sternberg, 2004a, 2004b; Yang & Sternberg, 1997). Certainly, conceptions of intelligence differ from one culture to another (see Sternberg, 2004a). An example of how manifestations of intelligence can differ from one culture to another emerges out of research we did studying rural children in Kenya (Sternberg et al., 2001). In the villages where these children live, parasitic illnesses are endemic. The parasites are of various kinds: malaria, schistosomiasis, hookworm, whipworm, and so forth. At any given time, a large percentage of children are suffering from parasitic illnesses, often multiple such illnesses at once. Under these circumstances, children need to learn, if possible, what natural herbal medicines are available to treat the parasitic illness or other illnesses from which they suffer. The symptoms of the diseases are serious – lassitude, stomachache, headache, bloody diarrhea, the inability to control defecation, nausea, vomiting, and weight loss, among others. Malaria also brings with it high fevers. In some cases, the diseases are fatal. Although there are effective anti-parasitic medicines available for many of these conditions, they are not readily available to many of the children and adults in sub-Saharan Africa. Moreover, the tradition is to use natural herbs for treatment purposes.

My colleagues and I developed a test that measured procedural knowledge of the use of natural herbal medicines against parasitic illnesses (Sternberg et al., 2001). The test presented short items describing a child suffering from a particular set of symptoms, and the child had to specify how to use natural herbal medicines in response to the symptoms. Needless to say, most Western children would get answers right only on a chance basis: Because they do not live in an environment with these illnesses, they do not know how to treat them. But the same might be said of the academic knowledge of the Kenyan children: Much of the knowledge and many of the skills tested in standardized tests of intelligence are not part of the normal environment in which they are socialized.
We found that the correlations between scores on our test and standardized tests of intelligence were negative – the better the children performed on the IQ tests, the worse they performed on the test of what might be called “adaptive intelligence.” We hypothesized, following the investigations of the anthropologists on our team, that the reason was that in the society in which they lived, perceived smartness resulted in students receiving not more, but rather less formal schooling. The students perceived as more intelligent left school to take apprenticeships, which produced income. Students perceived as less intelligent stayed in school and gained academic skills, but such skills were seen as not relevant to life in the communities in which they lived. If schooling is seen as largely a waste of time, intelligent children may leave school to find paths that enable them to live better. To make clear what we found: Manifestations of intelligence, but not necessarily intelligence itself, differ from one culture to another. The tests that measure one thing in one culture may measure something quite different in another culture (Sternberg, 2004a). Similarly, we found in studies of Russia shortly after its break from the Soviet Union that when changes in societal norms resulted in formal schooling being seen as leading only to a somewhat dismal future, the more intelligent children often left school to go into business, where there was money to be made (Grigorenko & Sternberg, 2001). These findings show the extent to which correlational patterns in psychometric and other data are human-made rather than being simply a function of nature. The amount of Western schooling is substantially correlated with level of IQ and related measures (Ceci, 2009). In general, more schooling is partially causal of higher levels of scores on academic tests, including intelligence tests. But if a society does not particularly value Western schooling, then those with the highest levels of adaptive intelligence will not necessarily be those who do the best on intelligence or academically-oriented tests.

These findings are not limited to children in rural Kenya. In a study of Yup’ik Eskimo children (Grigorenko et al., 2004), my colleagues and I found that if one measured adaptive intelligence through knowledge of indigenous skills Eskimo children need to survive – hunting, gathering, ice fishing, and so forth – the Eskimo children outperformed children in small cities in Alaska. But if the children were all given standard intelligence tests, the city children outperformed the Eskimo children. Each set of children developed skills they needed to adapt to and even thrive in their own environments. The standard intelligence tests were perhaps reasonable measures of intelligence, somewhat narrowly defined, for the city children; but they did not well reflect adaptation in the kinds of lives that the Yup’ik Eskimo children lived. When children were evaluated by teachers, the city children fared much better. But they fared better in terms of the kinds of tasks teachers valued. Few if any of the teachers would have been able to navigate long distances on a dogsled in the frozen tundra in the winter without obvious landmarks; indeed, they might have died trying. The Eskimo children were able to do so.
Again, intelligence needs to be assessed in terms of the adaptive demands of the cultural context. Are the same parts of the brain being recruited? Perhaps in part. But the life of the Eskimos probably had rather different emphases from the lives of the city children. Theories such as those of Barbey (2018) or Haier (2016) might well apply – but the tests used to assess the validity of these theories would have to be quite different. For example, a spatial test might apply to both Western middle-class children and Yup’ik Eskimo children, but the test for the latter might measure spatial navigation in large open terrains rather than, say, mental rotation or paper-folding. Put another way (Sternberg, 2004a), intelligence in itself might not be such a different thing from one culture to another, but its manifestations seem to be disparate across cultures and hence its measurement almost certainly needs to be. When the Eskimo children were taught geometry in ways that emphasized their own contextual setting (Sternberg, Lipka, Newman, Wildfeuer, & Grigorenko, 2007), in particular, using fish racks, they performed better than when they were taught by textbooks emphasizing more g-based skills. That is, not only testing, but also teaching should reflect children’s cultural backgrounds.


Epochal Issues in Defining Intelligence

When researchers talk about intelligence, they typically talk about it as though it is the same thing across not only place, but also time. Yet, it is not at all clear that this is the case. Thurstone’s (1938) theory of intelligence was operationalized in part by the SRA Primary Mental Abilities tests. In the 1950s and 1960s, one of Thurstone’s factors, Number, was measured by arithmetic computation (addition) problems. The test was largely a speed test, measuring how quickly one could compute sums of fairly simple addition problems. At the time, such a test seemed to make some sense. Achievement tests for children in math at the time generally contained three parts: computation, concepts, and problem-solving – so computation was one third of what mattered for assessing math achievement. Children still need to learn to compute. But can there be any serious doubt that arithmetic computation skills are less important today than they were in the mid-twentieth century? With the advent of electronic calculators and then mainframe and later personal computers, how much computation do people really need to do to adapt to their environments? Skills that once were important to intelligence are now less important.

Computation is not the only skill whose importance has decreased. One of the tests on the Wechsler Intelligence Scales is General Information. Of course, general information is important now, as it always has been. But it is less clear today just how important it is, for four reasons. First, most of the information we need can be gotten quickly through search engines such as Google or Bing. There is simply less need to store lots of information than there once was. Second, the sheer amount of information in the world is expanding so quickly that it is simply unrealistic to expect anyone to keep up with it. We are lucky to have search engines.
Third, as we come to realize how global our world is and how much of our lives we may have to spend interacting with people from other cultures, just what constitutes “general” information? Is the general information one needs in the United States the same as what one needs in China or Tanzania? In the United States, perhaps knowing the names or accomplishments of some major US presidents is important. Should Russian people have this knowledge? And is the Russian position of “president” equivalent to the US position? If not, what position is important? Prime minister? King or queen (in the past)? Fourth, much supposedly general information becomes dated very quickly. Any information about technology – which is certainly important to everyday adaptation in today’s world – becomes irrelevant quickly. (I know how and like to use CDs in my car. I now cannot find a new car with a CD player. And if one goes back not so many years, the issue was not CDs but tape.) How much of what you learned in your introductory course in any topic – I learned an awful lot of “general information” about Freud and Skinner – is relevant today? General information just isn’t what it used to be, figuratively or literally!

Whereas some knowledge and skills become less important with time, other knowledge and skills become more important. With nine-year-old triplets, I use search engines almost every day to retrieve “general information” they want that I do not have in my head. Search engines function to provide information. But there are three skills that are truly important in today’s world, and none of them is possession of general information.

The first skill is being able to find (retrieve) the information one needs. Sometimes the route to finding the general information is transparent (e.g., “Who was the first US president?”); but other times it is not (“What was the meaning of Lyndon Johnson’s concept of the ‘Great Society’?”). If one cannot retrieve the information one needs, it is hard today to be “intelligent,” no matter how extensive one’s (soon to be outdated) knowledge base is.

The second skill is being able to evaluate critically the information one retrieves. So much of the information on the Internet is dubious or outright false. In the past, in the age when most information was stored in books and periodicals, editing helped to ensure that there was at least some vetting of published information. Today, there is no vetting for many Internet sources. So if one cannot think critically about the information, the information may be of little use. It may well be false. And of course, this applies not only to the Internet, but to all sources of information, right up to the level of top national leaders.

The third skill is being able to apply the information one gets. Schools, colleges, and universities are great sources of information. But how well can students actually use the information in their lives? And if they cannot use the information, of what good is it? Of course, the ability to apply information has always been important.
But today people seem unusually bad at applying lessons, or at least the lessons of history. How many citizens and leaders today have learned how quickly and easily toxic leaders can destroy their societies (Sternberg, 2019b)? It was not so long ago that toxic leaders were responsible for millions of deaths and almost destroyed the world order. Such leaders are doing it again. And World War II was not all that long ago. The generation of World War II and the post-war generation are almost gone; and people of today seem not to have learned a whole lot, if anything, from the catastrophes of the twentieth century. Are they “intelligent” if they do well on IQ tests but allow the world to descend into chaos, or if they actually hasten its descent (as with current US policies that actively promote the burning of coal and other fossil fuels)?

Greenfield (2020) has argued that, with historical time, cultures evolve so that what is important for intelligence changes with the cultures. Greenfield’s point is that, over time, there is a “shift from considering intelligence to be practical (Sternberg & Grigorenko, 2000) and contextualized to abstract and decontextualized.” In Greenfield’s view, the shift is from what she refers to as
“Gemeinschaft” intelligence to “Gesellschaft” intelligence. “Gemeinschaft denotes a small-scale social entity with all social relations based on close personal and lifelong ties – for example, extended family relations in a rural village; in contrast, Gesellschaft denotes a large-scale social entity with many relationships that are impersonal and transitory – for example, store clerks in an urban city” (p. 920). Greenfield suggests that the skills needed for the two kinds of intelligence are simply different. Her theory is consistent with the results cited earlier in the chapter, and with many other results as well (see, e.g., Cole, Gay, Glick, & Sharp, 1971; Glick, 1974).

One might ask why anyone should care about the skills relevant to Gemeinschaft cultures. After all, aren’t they on the way out? That’s not, I believe, the point. The point, rather, is that the skills we value, as expressed through current cultures – Gesellschaft skills – may be as irrelevant in the world of the future as the Gemeinschaft skills are to many of today’s modernized cultures. We really don’t know what will be “intelligent” in the future. Who a century ago would have imagined today’s technology, except perhaps science-fiction writers? And who would have imagined that the world might be descending back into pre–World War II tribalism at an astonishing rate?

The general point is that manifestations of intelligence change over time. The parts of the brain that are relevant may also change. How many of the readers of this chapter think they would be first-rate hunter-gatherers? How would you all have adapted in prehistoric times? Or in medieval times? Even today, skills that one would hope had become irrelevant are relevant and indeed key for subpopulations. If you are a journalist or any opponent of a government in power, how good are your escape skills when hostile (usually governmental) forces come looking for you? How adept are you when you get mobbed on the Internet?
People always have been mobbed, but there is a difference between the skills required to deal with Internet mobbing and those required to deal with, say, crowds of people looking for you because you are Jewish, Hutu, or whatever. Parents used to worry about how intelligently to prevent their children from starting to smoke or drink. How well can you handle a child’s drug addiction, when his or her life may depend on the intelligence of your response? What is adaptively important changes over time, and so must what is considered to be intelligent.

Modes of Studying the Vertical Levels of Analysis

Changes in intelligence over time and place can be studied in a variety of ways. I have summarized them as contextual, molar, and molecular. The contextual approach looks at the changes in the environmental context over time and place that result in changing manifestations of intelligence. The molar approach looks at changes in psychometric factors (structural analysis) or cognitive operations (process analysis) that take place over time and place. And the molecular approach looks at biological differences over time and place.

Some neuroscientists might believe that the biological level is somehow basic and that the other levels, at best, are subsidiary and, at worst, are epiphenomenal. I would argue instead that the optimal level at which to study intelligence depends on the questions one wishes to address. If one is concerned about how the brain produces the thoughts and actions that we label as “intelligent,” then certainly the biological level is the preferred level at which to study intelligence. If instead we want to know what an individual is thinking from the time he or she sets eyes on a problem until the time the individual is done solving the problem, the cognitive level is the preferred one. If instead we want to understand what kinds of structural abilities are tapped by a problem, the psychometric approach is preferred. Or if instead we want to know what forces lead individuals to act in one way or another in the belief that they are acting intelligently (and that includes solving problems on an intelligence test but also trying to persuade someone to hire the individual), the contextual approach is most useful.

Some might believe that, when all is said and done, the biological level is still the most basic one. And perhaps it is in a reductionist sense. But what is basic really depends on the particular set of problems the investigator wishes to solve.

Horizontal Levels

Horizontal levels refer to just how broad a sampling of thinking or behavior one wishes to include within one’s definition of intelligence. Table 21.2 sets out different scopes for consideration. The traditional view defines intelligence primarily in terms of cognitive abilities. These would include abilities such as fluid intelligence, or the ability to cope with novel problems quickly, flexibly, and accurately; and crystallized intelligence, or one’s store of accumulated knowledge. Abilities included within cognitive intelligence would be ones such as those specified by Thurstone (1938): (inductive) reasoning, spatial visualization, quantitative skills, verbal comprehension, verbal fluency, memory, and perceptual speed. These are the abilities typically measured, one way or another, by conventional tests of intelligence. Most intelligence researchers are satisfied that conventional measures of intelligence tell them everything they want to know – at least about intelligence. An alternative point of view, however, is that cognitive intelligence is only part of the picture of intelligence – that if intelligence is about adaptation, then the cognitive view of intelligence is too narrow.

Table 21.2 Horizontal levels of analysis for the study of human intelligence.

Intelligence in the Cognitive World: Cognitive Intelligence
Intelligence in the Social World: Social/Emotional/Practical Intelligence
Intelligence Transforming the World: Creative Intelligence
Intelligence Improving the World: Wisdom

Social, Practical, and Emotional Intelligence

In everyday life, there is another class of knowledge and skills that is very important for adaptation, namely, the knowledge and skills involved in one’s interactions with the social world. I believe it is an uncontentious statement that some people may have high levels of general intelligence and yet be socially awkward or even seriously ineffective. Although I do not have space in the context of this chapter to review the literature, there is good evidence that correlations of measures of social, practical, and emotional intelligence – three of the extensions of intelligence considered here – with cognitive measures are low enough to justify consideration of some kind of social component of intelligence as distinct from the cognitive (Hedlund, 2020; Kihlstrom & Cantor, 2020; Rivers, Handley-Miner, Mayer, & Caruso, 2020; Sternberg et al., 2000; Sternberg & Smith, 1985). Thus, g is important but, with regard to intelligence, it is not all that matters (Sternberg & Grigorenko, 2002).

On the one hand, it is easy to dismiss social aspects of intelligence. On the other hand, when people fail to adapt in their lives with their family, their employers, their coworkers, and the various kinds of bureaucracies with which they have to deal, social aspects of intelligence are arguably at least as important as, or perhaps more important than, cognitive ones. A lot of high-IQ people have been tripped up by their inability, say, to succeed in job interviews or to relate to the people on whose success their careers depend. Is this intelligence? If someone is ruining his or her life through failure to control his or her emotions, failure to relate successfully to his or her family, or failure to thrive in or even keep a job, then that would seem to be important to intelligence as adaptation.

Creative Intelligence

The world is changing at an astonishing rate. Many of us, needing to make a phone call in an airport, used to look for one of the many banks of pay phones that lined the walls. Those banks of phones are gone, replaced by cell phones. Similarly, cell phones can save the day when our car conks out while we are driving in the middle of nowhere and no one else is in sight to rescue us. Many of us remember writing our high school or college papers on typewriters and, when we made too many mistakes on a page, ripping out the page and replacing it with another blank sheet. Personal computers have made typewriters obsolete. And, of course, most of us remember when politicians, especially senators, showed some reasonable level of decorum and willingness to work with those with whom they disagreed.

Given how quickly the world is changing, creative intelligence – the use of intelligence to generate novel and useful ideas or products – is at a premium, perhaps like never before (Sternberg, 2018a). It is through creative intelligence that we transform the world rather than just tinkering with it at the edges. Today we need to tell our students that the job they ultimately will take may not yet exist. We need to tell them that many jobs, perhaps even some higher-level ones, may be automated. Or that if their future job now exists, the job requirements of today may look little like the job requirements of tomorrow. We also have to remind students that anything they write anywhere online is never truly lost – that the indiscretions they commit online today may be available for anyone to see tomorrow or next month or in 10 years.

Given these facts, it is a puzzle to me why creative thinking is given such short shrift in modern thinking about intelligence. Certainly, creativity involves much more than any kind of intelligence; it also draws on personality and motivational factors (Kaufman & Sternberg, 2019). Creativity, I have argued, is in large part an attitude toward life – one of willingness to defy the crowd, one’s own past thinking, and the unconscious presuppositions of the Zeitgeist of the society in which we live (Sternberg, 2018a). But divergent intellectual processing is an important part of creativity and, in many industries, is at least as important as or more important than the kinds of convergent intellectual processing measured by intelligence tests.
Can we really afford to socialize and educate children to be convergent, “multiple-choice” thinkers in a world where most of the challenges are not of solving well-structured problems with unique solutions, but rather of figuring out what the problems are, how to define them, and how to find the best of what often are multiple nonoptimal solutions? Most start-up businesses fail, not because the entrepreneurs have failed to think through the current problems they face, but rather because they have failed creatively to anticipate the problems they would face in the future. Various means exist to measure creativity in general and creative intelligence in particular (Kaufman & Sternberg, 2019; Sternberg, 2017). None of them is perfect, but it is unclear that we have perfect measures of anything. In the long run, it may be better not just to measure skills that are important and easy to measure, but also to measure, however imperfectly, skills that are even more important for adaptation to the world.

Wisdom

Wisdom once was seen as the domain of philosophers, but today it has become an active area of psychological research (see Sternberg & Glueck, 2019). It is through wisdom that we make the world a better place. Although there are many different definitions of wisdom (described in Sternberg & Glueck, 2019), the definition I use is one whereby people are wise to the extent that they use their knowledge and skills to achieve a common good, by balancing their own, others’, and larger interests over the long term as well as the short term, through the infusion of positive ethical values.

Read the leading stories of today’s (or any day’s) news: Is the problem that our leaders face that they lack IQ points or sufficient general intelligence, or is it that they are unwise? Many leaders, not just in the US but throughout the world, went to good and even top colleges and universities. They excelled in the academic aspects of intelligence. And they are far more “intelligent,” in a traditional sense, than the leaders of a century ago. We know from the Flynn effect (Flynn, 2020) that IQs rose two standard deviations (roughly 30 points) during the twentieth century. Most of this increase was in fluid intelligence (gf), the kind that is tantamount to general intelligence (g). We do not know for sure the cause of the Flynn effect. For that matter, we do not know for sure what different intelligence tests measure for different populations (Sternberg, 2004b). What we do know is that whatever it is that IQ tests measure, it increased!

How has that worked out for the world? Climate change is sinking islands, destroying coastlines, wreaking havoc with hurricanes and other severe storms, and helping to create forest fires on an unprecedented scale. Although worldwide poverty has decreased (Pinker, 2018), increasing income disparities are creating millions of disgruntled and disillusioned citizens who then vote for populists who promise simple solutions to complex problems but who, once in power, serve themselves rather than their constituencies. The world also faces problems of terrorism, arms races, pollution, antibiotic resistance of bacteria, and many more.
Of what use has been a 30-point increase in IQ in solving any of these problems? As with creativity, there are multiple measures of wisdom that could be used to assess it in children or adults (see Sternberg & Glueck, 2019). As with the measures of creativity, none of them resembles a conventional IQ test.

Synthesis of Intelligence, Creativity, and Wisdom

The obvious question that arises from these various horizontal extensions of intelligence is what, if anything, they add to general intelligence, or g. There have been various studies, as noted above, that find incremental validity for these extensional constructs over g-based measures. For example, in a study conducted across the country with students of a wide range of ethnic groups and ability levels, Sternberg and the Rainbow Project Collaborators (2006) found that adding creative and practical intelligence to a measure that is largely g-based (see Sackett, Shewach, & Dahlke, 2020, for a discussion of how college-admissions type tests are proxies for measurement of g) doubled prediction of first-year college grades. (See Sternberg (2010) for discussion of measures of broader aspects of intelligence.) Sternberg et al. (2001) found negative correlations between IQ and practical-intelligence measures in Kenya. Investigations of emotional intelligence have also found incremental prediction of a wide variety of criteria over g-based measures (Rivers et al., 2020).

The question is what these findings mean for understanding psychometric g. These patterns of correlation suggest that the correlations between various measures of intelligence and other constructs may, in part, be culturally mediated. They are partly determined by, for example, how much a society values the IQ-based skills imparted by Western education. Almost all contemporary investigators, at least those of whom I am aware, accept the existence and psychological meaningfulness of psychometric g (Sternberg & Grigorenko, 2002). Moreover, recent data suggest that the construct can be found across a wide range of cultural settings (Warne & Burningham, 2019). Even Howard Gardner (2017), who originally devised his theory of multiple intelligences in part as an alternative to g theory (Gardner, 1983), recognizes that at least three of his multiple intelligences (linguistic, logical-mathematical, spatial) can fit within a general-intelligence framework, although he views them as independent (contrary to a large amount of data suggesting that they are correlated, e.g., Carroll, 1993).

Unlike Gardner, I do not view the data as contraindicating the importance of g or its cultural relevance (Sternberg, 2004a). But, as even Warne and Burningham (2019) note for their multicultural study, it is important that measures be culturally appropriate (see also Sternberg, 2004a, 2004b). And it is important to recognize that there is more to intelligence than g, even for g theorists such as Carroll. It is an open question how much the measures of other constructs add to the predictive validity of g across a range of criteria. I, at least, do not see interest in g and in more diverse aspects of intelligence as mutually exclusive.
Rather, the task is to figure out how they are related to each other internally and in terms of predictive value.

A further issue is the measurement of aspects of intelligence horizontally beyond g. There are a number of measures of all these constructs – emotional intelligence (see Rivers et al., 2020); social intelligence (see Kihlstrom & Cantor, 2020); practical intelligence (see Hedlund, 2020); multiple intelligences (Krechevsky, 1998); creativity (Sternberg, 2017); and wisdom (Kunzmann, 2019). But none of these measures, to my knowledge, has reached the level of reliability, construct validity, or utility that measures of g have reached. Why? For one thing, there is less agreement among theorists of these diverse constructs regarding how to measure them than there is among theorists of g regarding how to measure g. Moreover, there may not be, any time soon, such a high level of agreement, as definitions of these constructs are diverse (as were, in earlier times, definitions of intelligence). And the constructs are much harder to measure than is the analytical aspect of intelligence, so it may be quite a while before any of the tests of these expanded constructs reach a level at which they are readily usable for mass testing. A further issue is that they often must be subjectively scored and often take a long time to administer. That said, proponents of these broader tests argue that, in the end, measurement of crucial constructs should not be neglected in research simply because the theories and tests are in need of further development. There are very few constructs in psychology that have reached the level of development of g. Arguably, there are none at all!

Integration

Research on different horizontal and vertical approaches, unfortunately, has been rather narrowly focused. We have yet to see a model that fully integrates what we know biologically, psychometrically, cognitively, and culturally about intelligence. Yet a truly comprehensive and ultimately construct-valid theory of intelligence will have to find a way to achieve such integration. Certainly there have been attempts at least at partial integrations. For example, Hebb’s (1962/2002) work on the organization of behavior made a serious step toward integration of biological and psychometric approaches. Luria (1976a, 1976b) attempted to integrate biological, cognitive, and cultural approaches to intelligence. And recent biological work (see, e.g., Barbey, 2018; Haier, 2016, 2020) seems to have great potential to connect with measurement issues. But we all need to go further, with the ambition of Luria, to find ways to be more comprehensively integrative of the different approaches that, ultimately, may yield a unified theory of intelligence.

Conclusion

There is more to intelligence than g and its subfactors. The measurement of creativity and the measurement of wisdom do not lend themselves to multiple-choice answers or even to uniquely correct answers. But given the state of the world today, does it make any kind of sense horizontally to restrict our definition of intelligence to a rather narrow set of cognitive skills? Does it make sense to limit ourselves vertically to one level of analysis? Narrow cognitive skills have proven insufficient to solve any serious world problem and, one could argue, have exacerbated the problems as people come to think they are too intelligent to do foolish things (Sternberg, 2005, 2018b). It is at least worth considering whether researchers in the field of intelligence, regardless of their particular theoretical or methodological inclinations, should not take a broader and deeper view of intelligence, one that recognizes the need to study intelligence at both multiple vertical and multiple horizontal levels.

What are the policy implications of these broader views of human intelligence? First, as noted earlier, the focus of societies on narrow views of education based on g-based curricula has brought us many great technological developments, such as computers, cell phones, and sleek automobiles, but also has brought us (concomitantly, not necessarily causally) out-of-control climate change, pollution, warfare, income disparities, and an increasing number of autocratic governments around the world. If we do not teach children to be more creative in coming up with novel and useful solutions to challenging world problems, and if we do not teach them to have the wisdom to want to tackle these problems in the first place, we will end up with another generation of unimaginative and in some cases retrogressive leaders whose solutions to complex problems represent, at best, the simplistic, unsuccessful, and often destructive populist solutions of the past.

Second, we need to provide incentives to society – parents, educators, and especially testing companies – to be more innovative than they have been in the past. The standardized tests used to measure academic skills are very similar to those that were used in the early twentieth century. Imagine if medical testing proceeded at the same snail’s pace of innovation that has characterized educational testing. A lot more people would be dying than are dying today, precisely because medical researchers have been innovative in ways that psychometric researchers may not have been, perhaps for lack of incentives fundamentally to change anything. If there has been any innovation, it is in biological approaches to intelligence that may, at some point, yield neuropsychological measures of intelligence that bypass the need for traditional psychometric tests.

Third, science and psychology especially need to get away from false dichotomies that view g-based and broader-based approaches as somehow competitive with each other. In general, the reaction of g theorists to broader theories of intelligence has been dismissive, and the reaction of broader theorists to g theory has been to discount it too much, as Gardner (1983) did in his original book on multiple intelligences.
Science and society both need to go beyond manufactured conflicts and instead encourage scientists with differing points of view to collaborate, even if the collaborations are sometimes fraught and adversarial, so that the results of the research can achieve a common good for science and society.

References

Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20.
Berry, J. W. (1974). Radical cultural relativism and the concept of intelligence. In J. W. Berry & P. R. Dasen (eds.), Culture and cognition: Readings in cross-cultural psychology (pp. 225–229). London: Methuen.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge University Press.
Ceci, S. J. (2009). On intelligence . . . A bioecological treatise on intellectual development. Cambridge, MA: Harvard University Press.
Cole, M., Gay, J., Glick, J. A., & Sharp, D. W. (1971). The cultural context of thinking and learning. New York: Basic Books.
Duncan, J., Seitz, R. J., Kolodny, J., Bor, D., Herzog, H., Ahmed, A., . . . Emslie, H. (2000). A neural basis for general intelligence. Science, 289(5478), 457–460.
Flynn, J. R. (2020). Secular changes in intelligence: The “Flynn effect.” In R. J. Sternberg (ed.), Cambridge handbook of intelligence (2nd ed.) (pp. 940–963). New York: Cambridge University Press.
Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books.
Gardner, H. (2017). Taking a multiple intelligences (MI) perspective. Behavioral and Brain Sciences, doi: 10.1017/S0140525X16001631.
Glick, J. (1974). Cognitive development in cross-cultural perspective. In E. M. Hetherington (ed.), Review of child development research, 4 (pp. 891–1008). Chicago: SRCD.
Gottfredson, L. S. (1997). Mainstream science on intelligence: An editorial with 52 signatories, history, and bibliography. Intelligence, 24(1), 13–23.
Greenfield, P. M. (2020). Historical evolution of intelligence. In R. J. Sternberg (ed.), Cambridge handbook of intelligence (2nd ed.) (pp. 916–933). New York: Cambridge University Press.
Grigorenko, E. L., Meier, E., Lipka, J., Mohatt, G., Yanez, E., & Sternberg, R. J. (2004). Academic and practical intelligence: A case study of the Yup’ik in Alaska. Learning and Individual Differences, 14(4), 183–207.
Grigorenko, E. L., & Sternberg, R. J. (2001). Analytical, creative, and practical intelligence as predictors of self-reported adaptive functioning: A case study in Russia. Intelligence, 29(1), 57–73.
Guilford, J. P. (1967). The nature of human intelligence. New York: McGraw-Hill.
Guilford, J. P. (1982). Cognitive psychology’s ambiguities: Some suggested remedies. Psychological Review, 89(1), 48–59.
Haier, R. J. (2016). The neuroscience of intelligence. New York: Cambridge University Press.
Haier, R. J. (2020). The biological basis of intelligence. In R. J. Sternberg (ed.), Cambridge handbook of intelligence (2nd ed.) (pp. 451–468). New York: Cambridge University Press.
Hebb, D. O. (1962/2002). The organization of behavior. New York: Psychology Press.
Hedlund, J. (2020). Practical intelligence. In R. J. Sternberg (ed.), Cambridge handbook of intelligence (2nd ed.) (pp. 736–755). New York: Cambridge University Press.
Horn, J. L., & Knapp, J. R. (1973). On the subjective character of the empirical base of Guilford’s Structure-of-Intellect model. Psychological Bulletin, 80(1), 33–43.
Intelligence and its measurement: A symposium (1921). Journal of Educational Psychology, 12, 123–147, 195–216, 271–275.
Kaufman, J. C., & Sternberg, R. J. (eds.) (2019). Cambridge handbook of creativity (2nd ed.). New York: Cambridge University Press.
Kihlstrom, J. F., & Cantor, N. (2020). Social intelligence. In R. J. Sternberg (ed.), Cambridge handbook of intelligence (2nd ed.) (pp. 756–779). New York: Cambridge University Press.
Krechevsky, M. (1998). Project Spectrum: Preschool assessment handbook. New York: Teachers College Press.
Kunzmann, U. (2019). Performance-based measures of wisdom: State of the art and future directions. In R. J. Sternberg & J. Glueck (eds.), Cambridge handbook of wisdom (pp. 277–296). New York: Cambridge University Press.
Laboratory of Comparative Human Cognition (1982). Culture and intelligence. In R. J. Sternberg (ed.), Handbook of human intelligence (pp. 642–719). New York: Cambridge University Press.
Luria, A. R. (1976a). Cognitive development: Its cultural and social foundations. Cambridge, MA: Harvard University Press.
Luria, A. R. (1976b). The working brain: An introduction to neuropsychology. New York: Basic Books.
Mackintosh, N. J. (2011). History of theories and measurement of intelligence. In R. J. Sternberg & S. B. Kaufman (eds.), Cambridge handbook of intelligence (pp. 1–19). New York: Cambridge University Press.
Pinker, S. (2018). Enlightenment now: The case for reason, science, humanism, and progress. New York: Viking.
Rivers, S. E., Handley-Miner, I. J., Mayer, J. D., & Caruso, D. R. (2020). Emotional intelligence. In R. J. Sternberg (ed.), Cambridge handbook of intelligence (2nd ed.) (pp. 709–735). New York: Cambridge University Press.
Sackett, P. R., Shewach, O. R., & Dahlke, J. A. (2020). The predictive value of general intelligence. In R. J. Sternberg (ed.), Human intelligence: An introduction (pp. 381–414). New York: Cambridge University Press.
Serpell, R. (2000). Intelligence and culture. In R. J. Sternberg (ed.), Handbook of intelligence (pp. 549–577). New York: Cambridge University Press.
Spearman, C. (1927). The abilities of man. New York: Macmillan.
Sternberg, R. J. (1977). Intelligence, information processing, and analogical reasoning: The componential analysis of human abilities. Hillsdale, NJ: Erlbaum.
Sternberg, R. J. (1985). Human intelligence: The model is the message. Science, 230(4730), 1111–1118.
Sternberg, R. J. (1990). Metaphors of mind: Conceptions of the nature of intelligence. New York: Cambridge University Press.
Sternberg, R. J. (2004a). Culture and intelligence. American Psychologist, 59(5), 325–338.
Sternberg, R. J. (ed.) (2004b). International handbook of intelligence. New York: Cambridge University Press.
Sternberg, R. J. (2005). Foolishness. In R. J. Sternberg & J. Jordan (eds.), Handbook of wisdom: Psychological perspectives (pp. 331–352). New York: Cambridge University Press.
Sternberg, R. J. (2010). College admissions for the 21st century. Cambridge, MA: Harvard University Press.
Sternberg, R. J. (2017). Measuring creativity: A 40+ year retrospective. Journal of Creative Behavior, 53(4), 600–604. doi: 10.1002/jocb.218.
Sternberg, R. J. (2018a). A triangular theory of creativity. Psychology of Aesthetics, Creativity, and the Arts, 12(1), 50–67.
Sternberg, R. J. (2018b). Wisdom, foolishness, and toxicity in human development. Research in Human Development, 15(3–4), 200–210. doi: 10.1080/15427609.2018.1491216.

Vertical and Horizontal Levels of Analysis

Sternberg, R. J. (2019a). Intelligence. In R. J. Sternberg & W. Pickren (eds.), Handbook of the intellectual history of psychology: How psychological ideas have evolved from past to present (pp. 267–286). New York: Cambridge University Press. Sternberg, R. J. (2019b). Wisdom, foolishness, and toxicity: How does one know which is which? In M. Mumford & C. A. Higgs (eds.), Leader thinking skills (pp. 362–381). New York: Taylor & Francis. Sternberg, R. J. (ed.) (2020). Cambridge handbook of intelligence (2nd ed.). New York: Cambridge University Press. Sternberg, R. J., & Detterman, D. K. (eds.) (1986). What is intelligence? Norwood, NJ: Ablex Publishing Corporation. Sternberg, R. J., Forsythe, G. B., Hedlund, J., Horvath, J., Snook, S., Williams, W. M., . . . Grigorenko, E. L. (2000). Practical intelligence in everyday life. New York: Cambridge University Press. Sternberg, R. J., & Glueck, J. (eds.) (2019). Cambridge handbook of wisdom. New York: Cambridge University Press. Sternberg, R. J., & Grigorenko, E. L. (2000). Practical intelligence and its development. In R. Bar-On & J. D. A. Parker (eds.), The handbook of emotional intelligence: Theory, development, assessment, and application at home, school, and in the workplace (pp. 215–243). San Francisco: Jossey-Bass. Sternberg, R. J., & Grigorenko E. L. (eds.) (2002). The general factor of intelligence: How general is it? Mahwah, NJ: Lawrence Erlbaum Associates. Sternberg, R. J., Lipka, J., Newman, T., Wildfeuer, S., & Grigorenko, E. L. (2007). Triarchically-based instruction and assessment of sixth-grade mathematics in a Yup’ik cultural setting in Alaska. International Journal of Giftedness and Creativity, 21(2), 6–19. Sternberg, R. J., Nokes, K., Geissler, P. W., Prince, R., Okatcha, F., Bundy, D. A., & Grigorenko, E. L. (2001). The relationship between academic and practical intelligence: A case study in Kenya. Intelligence, 29(5), 401–418. Sternberg, R. J., & Smith, C. (1985). 
Social intelligence and decoding skills in nonverbal communication. Social Cognition, 3(2), 168–192. Sternberg, R. J., & The Rainbow Project Collaborators (2006). The Rainbow Project: Enhancing the SAT through assessments of analytical, practical and creative skills. Intelligence, 34(4), 321–350. Thurstone, L. L. (1938). Primary mental abilities. Chicago: University of Chicago Press. Walrath, R., Willis, J. O., Dumont, R., & Kaufman, A. S. (2020). Factor-analytic models of intelligence. In R. J. Sternberg (ed.), Cambridge handbook of intelligence (2nd ed.) (pp. 75–98). New York: Cambridge University Press. Warne, R. T., & Burningham, C. (2019). Spearman’s g found in 21 non-Western nations: Strong evidence that g is a universal phenomenon. Psychological Bulletin, 145(3), 237–272. doi: 10.1037/bul000184. Yang, S., & Sternberg, R. J. (1997). Taiwanese Chinese people’s conceptions of intelligence. Intelligence, 25(1), 21–36.

433

22 How Intelligence Research Can Inform Education and Public Policy

Jonathan Wai and Drew H. Bailey

Introduction

This chapter explores how intelligence research – both current evidence and potential findings that remain undiscovered – might inform education and public policy. We first address why the study of human intelligence is an exciting area of basic research and how knowledge about intelligence might be applied to education and public policy. We then review selected areas of intelligence-related research with potential implications for policy, including research on intelligence test scores as predictors of school performance and later success, and research on the most promising policy-relevant variables for improving intelligence. Finally, we conclude with a discussion of how influencing policy requires researchers to (1) learn from policy researchers and practitioners, whose expertise we argue complements the strengths of intelligence researchers, and (2) communicate findings effectively to those audiences. We write this chapter primarily from the perspective of researchers engaged in intelligence research that might be applied to education and policy discussions.

Intelligence Research and Policy Debates

The policy relevance of intelligence research is affected by a conundrum. Intelligence research has yielded some of the most consistent empirical regularities in the social sciences, many of which pertain to socially important inputs and outcomes. Yet (1) the theoretical status of intelligence is hotly debated in the social sciences and even among intelligence researchers, and (2) the translation of these findings into policy is hotly contested. Point 2 is likely not fully resolvable, because it reflects individuals' diverse values about the means and ends of social policy. However, points 1 and 2 are also somewhat intertwined. We briefly review some core findings from research on intelligence, including some findings discussed in this volume. We aim to explain current policy applications based on the weight of the evidence to date, as well as potential policy applications based on what might become possible as the weight of evidence surrounding aspects of intelligence research accumulates in the future. In each case, we discuss the relevance of the findings to the conundrum described above. We conclude that better integrating intelligence research with causally informative research designs will better illuminate the links between intelligence and policy. This might involve collaborations between intelligence researchers, who have expertise in theories of the structure and consequences of cognitive ability, and policy researchers, who are interested in estimating the causal effects of policies that might alter the inputs (e.g., parental income) and socially important outcomes (e.g., occupational success) with various theorized connections to cognitive abilities.

Cognitive Test Scores Statistically Predict Socially Important Outcomes

A large body of literature has established that general intelligence (psychometric g) is an important statistical predictor of school performance (e.g., Gottfredson, 2004) and later success, including academic performance, career potential, creativity, and even job performance (e.g., Kuncel, Hezlett, & Ones, 2004; Schmidt & Hunter, 2004). Cognitive tests have been widely used for selection purposes. In 1904, Alfred Binet designed a test to help identify students with learning disabilities, which would eventually develop into the well-known Stanford-Binet IQ test. This test has been used across the range of ability, for example to select "gifted" students into Lewis Terman's now famous longitudinal study (Terman, 1925). During World War I, the task of screening large numbers of recruits fell to the Army Alpha and Beta tests, which formed the beginnings of early versions of the SAT; the SAT, along with the American College Test (ACT), has been used widely for decades in the United States for college admissions. Standardized intelligence tests have also been widely used in occupational selection due to their predictive power for job performance (e.g., Schmidt & Hunter, 2004), and versions of ability or achievement tests continue to prove useful both in identifying K-12 students with disabilities and in identifying students with learning advantages. For example, the SAT and ACT, which are designed for typical college-bound 17-year-olds, are used in "talent searches" aimed at identifying highly gifted students at age 12 (Lubinski & Benbow, 2000). The use of cognitive tests for selection is controversial within the United States (Sackett, Borneman, & Connelly, 2008).
Perhaps the most common argument against the use of standardized testing for the purpose of college admissions is that standardized test scores are correlated with socioeconomic status (SES); thus, selecting on test scores may lead to greater persistence of
social class across generations. We see this argument as primarily pertaining to how research on cognitive ability is translated into policy, which depends on individuals' values and thus cannot be resolved by evidence alone. However, some research is relevant to this issue: for example, the SAT does not substantially under-predict performance in college for students from minority groups under-represented in higher education (Sackett et al., 2008) or for low-SES students (Sackett et al., 2012), suggesting that the mechanisms through which these groups receive lower grades in college are not disproportionately reflected in this standardized test. Reasonable people may disagree about the use of standardized tests for college admissions or gifted and talented placement on the basis of tradeoffs between efficiency, fairness, and social cohesion. Again, we see this as primarily a disagreement about values. However, even someone with a given set of values might consider a wide range of policy choices. For example, although opposition to standardized testing is associated more with socially progressive values, universal screening programs, wherein all members of a district or state are required to complete a cognitive test, have sometimes been found to increase the selection of disadvantaged groups into desirable educational settings. Michigan's mandatory ACT policy and Maine's mandatory SAT policy both had a small impact on 4-year college enrollment, with a larger effect for poor students in Michigan (Hyman, 2017) and for rural (but not poor) students in Maine (Hurwitz, Smith, Niu, & Howell, 2015).
A universal cognitive testing program in a large school district in Florida, coupled with a pre-existing policy that allowed students from poor and under-represented minority groups to qualify for a gifted program based on a lower test score cutoff, substantially raised the number of poor and minority students in gifted education (Card & Giuliano, 2016). Thus, valid cognitive tests, deployed with careful consideration of how they are used, might help meet the policy goals of organizations with a wide range of values.

Cognitive Test Scores as Tools for Program Evaluation

Reverse Causal Questions and Forward Causal Inference

One threat to productive exchange between intelligence and policy researchers pertains to the different kinds of questions these groups typically ask. A key distinction is the one between what Gelman and Imbens (2013) call reverse causal questions and forward causal inference: reverse causal questions ask about the unknown causes of known effects, whereas forward causal inference estimates the unknown effects of known causes.

Historically, intelligence researchers have studied reverse causal questions, such as "Why do children who enter school with low academic achievement often struggle throughout schooling?" To address this question, they have traditionally decomposed achievement variance into the school, class, and child levels. For example, Detterman (2016, p. 9) concluded that schools and teachers account for less than 10% of the total variance in academic achievement and that student characteristics account for 90%. This observation has been supported by many studies and reviews and has been known at least since the 1960s. In fact, in the few studies that estimate the variance in academic achievement attributable to teachers without confounding teachers with schools, the teacher share is probably only 1–8%.
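As an illustrative sketch (our own toy example with hypothetical variance components, not an analysis from the studies cited above), a hierarchical simulation shows what this kind of reverse causal decomposition looks like in practice:

```python
import numpy as np

rng = np.random.default_rng(0)
n_schools, n_classes, n_students = 100, 4, 25  # classes per school, students per class

# Hypothetical variance components echoing the text: schools and classes
# (teachers) together contribute ~10% of variance, students ~90%.
school_fx  = rng.normal(0, np.sqrt(0.05), n_schools)
class_fx   = rng.normal(0, np.sqrt(0.05), (n_schools, n_classes))
student_fx = rng.normal(0, np.sqrt(0.90), (n_schools, n_classes, n_students))

scores = school_fx[:, None, None] + class_fx[:, :, None] + student_fx

# Naive decomposition from observed group means (slightly inflates the
# school and class shares because student noise leaks into group means).
total_var   = scores.var()
school_var  = scores.mean(axis=(1, 2)).var()          # between schools
class_var   = scores.mean(axis=2).var() - school_var  # between classes, within schools
student_var = total_var - school_var - class_var      # residual: between students

for name, v in [("school", school_var), ("class", class_var), ("student", student_var)]:
    print(f"{name:>7}: {v / total_var:.1%} of variance")
```

Even with the naive estimator's upward bias at the school and class levels, the student-level share dominates, mirroring the pattern Detterman (2016) describes.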

In contrast, policy researchers have focused on the corresponding, more directly policy-relevant forward causal questions, such as "What is the causal effect of having an effective teacher for one year on students' academic outcomes?" To answer this question, policy researchers might compare children randomly or quasi-randomly assigned to different teachers. Sometimes, these effects persist for many years, influencing socially relevant outcomes such as earnings (Chetty, Friedman, & Rockoff, 2014). Is it possible that measured environmental factors might explain small proportions of the variance in student outcomes, but that these effects could also be policy relevant (Bailey, Duncan, Cunha, Foorman, & Yeager, 2020)? Turkheimer (1991) addressed the apparent paradox between twin studies, which find low contributions of the shared environment to IQ scores, and adoption studies, which sometimes find very large effects of adoption on IQ in childhood and moderate effects in adulthood. Twin studies can be seen as addressing the reverse causal question, "Why are twins similar to each other?," whereas adoption studies address the forward causal question, "What is the impact of moving from one kind of rearing environment to another on a set of outcomes?" The contribution of the environment, relative to everything else, can be small, while the causal effect of a large change to the environment can still be meaningful – these findings appear to be wholly reconcilable (Bailey et al., 2020). We argue that this point is critical for intelligence researchers interested in entering policy discussions. Below, we will argue that research on intelligence is policy relevant in a variety of ways.
However, the effect sizes of interest in policy research are not the amount of variance explained (in or by cognitive test scores), but point estimates that can be used to quantify the effect of an intervention relative to some counterfactual, for example in cost-benefit calculations. Table 22.1 presents these and some additional examples of the distinction between these types of questions and their corresponding methods, illustrating that although the goals that have historically guided these kinds of research differ, they can be unified in ways that can be meaningfully conveyed to policy researchers. The distinction between reverse causal questions and forward causal inferences may partially account for (and/or reflect) differences between differential


Table 22.1 Reverse and forward causal questions pertaining to intelligence.

Reverse causal question: Why do children who enter school with low academic achievement often struggle throughout schooling?
Implied analyses: Decompose variance in achievement into school, class, and child levels.
Findings: Child-level variance predominates.
Corresponding forward causal question: What is the return to a boost in early achievement many years later?
Implied design: Compare children randomly or quasi-randomly assigned to receive more or different early educational experiences.
Findings: Early childhood education appears to have lasting effects on adult outcomes, in some cases at levels exceeding program costs; extra schooling appears to have modest but lasting effects on intelligence.

Reverse causal question: What are the effects of genes vs. environments on children's educational attainment?
Implied analyses: Decompose variance into heritability, shared environment, and unique environment.
Findings: Heritability is high; shared environmental variance is low.
Corresponding forward causal question: What is the return to a change in the environment on children's educational attainment?
Implied design: Compare children randomly or quasi-randomly assigned to receive more or different life experiences.
Findings: Varied: large effects from adoption, some effects from early childhood education, some ineffective interventions.

Reverse causal question: Why is there a positive manifold?
Implied analyses: Build models that can account for the positive manifold.
Findings: Reflective factor models with g on top, sampling models, mutualism models.
Corresponding forward causal question: What is the effect of changes in one cognitive domain on performance in another cognitive domain?
Implied design: Compare children randomly or quasi-randomly assigned to receive training in one cognitive domain, measuring other abilities.
Findings: Near transfer, which may be consistent with some models but inconsistent with others.

psychologists and experimentalists (including some policy researchers) in the optimism they express about various interventions. For example, Moreau, Macnamara, and Hambrick (2018) discuss several proposed inputs to skilled performance that have likely been overhyped by at least some researchers and the media. They cite, for instance, claims implying that deliberate practice is nearly sufficient to account for individual differences in expertise, a claim that does not appear to be supported by the best available evidence (Macnamara, Hambrick, & Oswald, 2014). However, Moreau et al. also openly acknowledge that "There is no question that deliberate practice can lead to major improvements in performance within an individual" (p. 30). The most policy-relevant comparison that should drive interest in deliberate practice as an intervention is not "How much variance does deliberate practice explain in skilled performance relative to everything else that varies?" (although intelligence researchers will find such questions theoretically important), but "Does manipulating deliberate practice raise socially desirable outcomes relative to not manipulating it, and are the costs of doing so outweighed by the benefits?" Similarly, Moreau et al. argue that interventions designed to boost children's levels of growth mindset produce small and often null effects. From the perspective of a differential psychologist asking "What cognitive inputs explain differences in children's levels of academic achievement?," these findings suggest that growth mindset is a tiny part of the answer. However, to a policy researcher asking "How can we raise children's academic performance at scale in ways that could plausibly be justified by their costs?," the key comparison is between a growth mindset intervention and some counterfactual in which that intervention is not funded and children do not receive it.
We are not arguing here that the benefits of such interventions predictably outweigh the costs (which Moreau et al. argue plausibly include the unintended consequence of diverting attention from more promising, more intensive interventions); however, we are arguing that intelligence researchers interested in policy should be able to think about effects relative to counterfactuals, in addition to those expressed as a proportion of variance. Further, although it may be less relevant to immediate policy decisions, we suspect that policy researchers with an understanding of the decomposition of variance may benefit in several ways as well. We will discuss some examples here.
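One such example: the reconciliation of small variance shares with meaningful causal effects can be made concrete in a toy simulation. The numbers below are our own illustrative choices, not empirical estimates; the point is only that a factor explaining a sliver of population variance can still carry a policy-relevant effect when shifted substantially:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical population model: an IQ-like score with SD 15, where the
# shared environment contributes only ~5% of variance, genes ~70%, and
# everything else ~25%.
genes  = rng.normal(0, 15 * np.sqrt(0.70), n)
shared = rng.normal(0, 15 * np.sqrt(0.05), n)
other  = rng.normal(0, 15 * np.sqrt(0.25), n)
iq = 100 + genes + shared + other

# Reverse causal framing: variance explained by the shared environment.
var_share = shared.var() / iq.var()   # a "small" share, ~5%

# Forward causal framing: effect of moving a child from a 10th- to a
# 90th-percentile environment (as adoption might), in IQ-scale points.
effect = np.quantile(shared, 0.9) - np.quantile(shared, 0.1)

print(f"variance explained by shared environment: {var_share:.1%}")
print(f"effect of a 10th->90th percentile environmental change: {effect:.1f} points")
```

Under these assumed parameters, the shared environment explains about 5% of variance, yet the simulated environmental shift moves scores by roughly half a standard deviation, which is the flavor of Turkheimer's (1991) reconciliation.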

Cognitive Test Scores as Predictors

Intelligence test scores may be useful for evaluating evidence in favor of particular policies. We discuss two uses here: (1) the use of intelligence research to help identify the causal effects of policy-related variables, and (2) the use of cognitive test scores as outcome measures in the evaluation of programs or policies.

Intelligence research has produced a useful set of regularities that can help researchers identify and interpret the effects of policy-related variables. One regularity is that, because naturally occurring individual differences in cognitive abilities appear to partially reflect a common causal pathway or set of pathways (e.g., Tucker-Drob, 2013), non-experimental research designs that do not adequately control for individual differences in general cognitive ability are likely to yield biased estimates of the effectiveness of hypothetical interventions on children's cognitive outcomes. For example, there is a robust correlation between children's early math achievement and their much later academic achievement (Duncan et al., 2007). However, it does not necessarily follow that boosting children's early math achievement will substantially improve their math and reading achievement several years later; indeed, non-experimental estimates of the returns to boosts in early academic achievement appear sometimes to dramatically over-estimate the longer-term effects of these boosts (Bailey, Duncan, Watts, Clements, & Sarama, 2018; Bailey, Fuchs, Gilbert, Geary, & Fuchs, 2020). This is consistent with the argument that there exists a confound or set of confounds, sometimes called general cognitive ability (but plausibly including unmeasured contextual factors as well), which cannot be perfectly captured with measured covariates alone (Schmidt, 2017). Statistical methods that attempt to account for these factors may produce less biased estimates of the effects of policy-relevant variables on outcomes of interest. This problem is substantially mitigated when policy-relevant variables are randomly or quasi-randomly assigned.
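The confounding argument can be sketched in a small simulation, with coefficients of our own choosing rather than estimates from the studies cited above. When an unmeasured common cause drives both early and later achievement, a naive regression badly over-estimates the causal return to an early boost:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical data-generating process: an unmeasured general ability g
# drives both early math and later achievement; the true causal effect
# of early math on later achievement is 0.1.
g = rng.normal(0, 1, n)
early = 0.7 * g + rng.normal(0, np.sqrt(0.51), n)   # var(early) ~ 1
later = 0.1 * early + 0.7 * g + rng.normal(0, 1, n)

def slope(x, y):
    """OLS slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

naive = slope(early, later)            # ignores g: inflated to ~0.59
# Adjusting for g by partialling it out of both variables (Frisch-Waugh):
r_early = early - slope(g, early) * g
r_later = later - slope(g, later) * g
adjusted = slope(r_early, r_later)     # recovers ~0.1, the true effect

print(f"naive estimate:    {naive:.2f}")
print(f"adjusted estimate: {adjusted:.2f}")
```

If g were only imperfectly measured, the adjusted estimate would land somewhere between the two, which is the worry about relying on measured covariates alone (Schmidt, 2017) and the reason random assignment is so valuable here.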
Although measuring intelligence in strong experimental or quasi-experimental designs may be useful for ruling out the possibility of unhappy randomization, identifying treatment effect heterogeneity, or for reducing error in estimated effects (Wai, Brown, & Chabris, 2018), randomization distributes cognitive ability approximately equally across groups on average, thereby addressing the most worrisome problem with making inferences about research designs that do not include pre-treatment measures of general cognitive ability. Whether it makes sense to include a cognitive battery as a pretest in a randomized controlled trial likely depends substantially on the outcome of interest (is it highly associated with cognitive ability?) and the resources available to the researchers (do they have the time and money to collect data on a full intelligence battery? If not, are administrative data available on achievement test scores and grades?), and the treatment of interest (is it cognitive in nature? If so, perhaps it will predictably provide a greater benefit to individuals with higher or lower levels of cognitive functioning). We note that, given the well documented correlations between cognitive test scores and educational settings and outcomes, educational research might be one of the most important domains in which these conditions would apply.

Cognitive Test Scores as Outcomes

Research on intelligence has yielded several insights into best practices for using cognitive tests as measures of an intervention's success. For example, to the extent that an intervention is narrowly tailored to the cognitive test, the observed effect is likely to provide an overly optimistic estimate of the benefit of the intervention for cognitive functioning (e.g., Shipstead, Redick, & Engle, 2012). Further, researchers should carefully consider the meaning of a test score impact in their evaluations: an intervention-induced gain in IQ scores does not necessarily imply that individuals have been changed in a way that mirrors the structure of differences between individuals (Protzko, 2017). If the goal of some policy or intervention is to raise children's or adults' skills for a specific set of purposes (e.g., the labor market, or a particular vocation), then other assessments may be more appropriate (Ackerman, 2017).

Intelligence Research Might Tell Us How to Improve Intelligence

Haier (2017, p. xiv) argued that "The ultimate purpose of all intelligence research is to enhance intelligence. Finding ways to maximize a person's use of their intelligence is one goal of education. It is not yet clear from the weight of evidence how neuroscience can help teachers or parents do this." Although we have argued here that intelligence research can serve purposes other than enhancing intelligence, improving human abilities is a longstanding goal of many psychologists, policy researchers, and practitioners. In this section we briefly review research on improving intelligence, along with potential policy applications. The extent to which intelligence can be changed is highly contested within the field of intelligence research (Haier, 2017). There is widespread agreement that performance on cognitive tests can be influenced by a variety of factors (e.g., coaching, education, health problems, motivation, cognitive aging), and further agreement that some of these changes do not constitute changes in intelligence (e.g., test coaching does not). Some researchers argue that environmental factors that lead to broad performance changes on cognitive tests, such as adoption, schooling, and the Flynn Effect, reflect changes in intelligence (for contrasting perspectives, compare Jensen, 1998, to Ackerman, 2017, and Protzko, 2016). Others argue that some or all of these changes do not affect general intelligence, or that it is simply unclear to date whether they do, because they reflect changes that are different in kind from naturally observed variation in cognitive ability (e.g., Haier, 2014). Here, we avoid the debate about the ontology of g and its relation to intelligence, taking the view that meaningful changes that manifest across domains in cognitive
skill test scores can be produced by environmental influences, and we discuss some of this literature in this chapter. Perhaps the most convincing evidence of the causal effect of measured environments on cognitive test scores comes from adoption studies (e.g., van IJzendoorn, Juffer, & Poelhuis, 2005). Social class and its environmental correlates appear to affect cognitive and other life outcomes: children adopted into more advantaged families show larger cognitive advantages and higher educational attainment than individuals adopted into less advantaged families (Kendler, Turkheimer, Ohlsson, Sundquist, & Sundquist, 2015; Sacerdote, 2007). Intelligence research also provides evidence that the effects of genes on cognition are substantially mediated through the environment: genetic contributions to tests of cognitive ability grow dramatically from early to middle childhood (Tucker-Drob & Briley, 2014), and, within the United States, genes have a stronger influence on cognitive ability among high-SES children (Tucker-Drob & Bates, 2016). The policy implications of these findings are not straightforward and depend both on the identification of efficient policy levers and on one's relative concern with efficiency, equality of outcome, and equality of opportunity. However, the current literature suggests that favorable environments can facilitate cognitive development by allowing children to reach the high end of their genetic potential. One policy-relevant lever for which there is good evidence of persistent effects on cognitive ability is additional years of schooling (Ceci, 1991; Ritchie & Tucker-Drob, 2018; for a somewhat contrasting perspective, see Haier, 2017). Importantly, this literature includes studies of policy changes whereby children received a required extra year of schooling at the end of their academic training.
Cognitive impacts from a year of preschool are sometimes substantial at the end of treatment but, in the limited set of studies that have examined them, often fade out by adulthood (Li et al., 2017; Protzko, 2015). Of course, the best target for additional educational investments depends on several factors outside the area of intelligence research, such as current levels of investment in education at different grade levels, the cost of education, benefits of education other than intelligence (e.g., Elango, García, Heckman, & Hojman, 2016; Kraft, 2019), and possible complementarities among educational investments across grade levels (Johnson & Jackson, 2019).

Engaging in Policy Requires More Than Just Conducting Excellent Research

In this section we answer some core questions that basic science researchers focused on specific topics in the neuroscience of intelligence might have about education and public policy. Throughout this process of
engagement, with both other researchers outside our field and the public more broadly, we emphasize the responsible communication of scientific findings (Lewis & Wai, in press).

Why Care About the Practical or Policy Implications of Your Work? What Should I Consider to Help My Work Matter?

If one purpose of intelligence research is to generate methods for increasing intelligence, as Haier (2017) argues, then intelligence researchers should work closely with policy and education researchers. Additionally, useful research may be driven by the practical policy problems that need to be solved. In other words, important and impactful research may come not just from basic research being communicated to policymakers and the public, but also from research questions driven by practical, on-the-ground problems in education or policy that could greatly benefit from evidence-based solutions. We argue that researchers should care about policy both because policy is a major mechanism through which our research can generate real-world benefits, and because policy research can inform theories of intelligence.

What Are Public Policy and Education Policy?

Academics often view themselves as engaged in public policy or education policy if they publish articles in education or another area that they believe has policy implications. However, the domain of policy is much broader than this, and includes researchers and practitioners in a variety of roles: funders, politicians, school districts, and public policy departments, among others. Affecting public policy and education may require familiarity with a very different world, with different terminology, interests, culture, and value systems. In our view, psychologists may benefit from learning more about the science of policy. Considerations such as implementation feasibility and fidelity, cost-benefit ratios, sustainability, and unintended consequences are complex topics that have commanded substantial research interest in the social sciences, but in which many intelligence researchers have little training.

What Kinds of Roadblocks Might I Encounter When Dealing with Public Policy and Education Policy Makers with Regard to Intelligence?

Intelligence research, without question, has had its share of challenges (see Haier, 2017, for a review of controversies in the history of intelligence research). Not only have researchers outside the field of intelligence been wary of research on intelligence, but the general public and the media have also tended to ignore the empirical evidence base that has accumulated across many decades. In other words, the "weight of evidence" that supports the
field of intelligence remains largely unaccounted for, or omitted, in education and policy research more broadly (Wai et al., 2018). Although we have argued in this chapter that some of this is attributable to differences in the research interests and goals of intelligence and policy researchers, better communication might be the fastest route to more productive discussion between the two fields (Wai, 2020). In communicating with education researchers and policymakers about intelligence research, we argue that psychologists should present their findings responsibly and sensitively, with an eye towards explaining how taking evidence – all the evidence – into account when attempting to help children is a strategy that can lead to real and lasting solutions benefiting children and societies. First, we must not assume that researchers outside our subfield are aware of our work; second, we must learn to communicate clearly, in plain language, the core findings of our field and why they matter to education and public policy. One solution towards this end may be to encourage public engagement in graduate training and in the academic community more broadly (e.g., Pinker, 2015; Wai & Miller, 2015). Another is to engage with policy researchers and policymakers, and to consider both reverse causal questions and forward causal inference.

Conclusions We have argued that intelligence research can inform some, but not fully solve, the problems faced by policy researchers. Further, we have argued that for intelligence research to inform policy discussions, intelligence researchers must be able to engage with the kinds of questions that policy researchers are primarily interested in (i.e., questions about forward causal inference). With this in mind, and with an understanding of policy structures and goals, intelligence researchers should be able to contribute more to discussions about policy than they currently do.

References
Ackerman, P. L. (2017). Adult intelligence: The construct and the criterion problem. Perspectives on Psychological Science, 12(6), 987–998. doi: 10.1177/1745691617703437.
Bailey, D. H., Duncan, G. J., Cunha, F., Foorman, B. R., & Yeager, D. S. (2020). Persistence and fade-out of educational-intervention effects: Mechanisms and potential solutions. Psychological Science in the Public Interest, 21(2), 55–97.
Bailey, D. H., Duncan, G. J., Watts, T., Clements, D. H., & Sarama, J. (2018). Risky business: Correlation and causation in longitudinal studies of skill development. American Psychologist, 73(1), 81–94.
Bailey, D. H., Fuchs, L. S., Gilbert, J. K., Geary, D. C., & Fuchs, D. (2020). Prevention: Necessary but insufficient? A two-year follow-up of effective first-grade mathematics intervention. Child Development, 91(2), 382–400.
Card, D., & Giuliano, L. (2016). Universal screening increases the representation of low-income and minority students in gifted education. Proceedings of the National Academy of Sciences, 113(48), 13678–13683. doi: 10.1073/pnas.1605043113.
Ceci, S. J. (1991). How much does schooling influence general intelligence and its cognitive components? A reassessment of the evidence. Developmental Psychology, 27(5), 703–722. doi: 10.1037/0012-1649.27.5.703.
Chetty, R., Friedman, J. N., & Rockoff, J. E. (2014). Measuring the impacts of teachers II: Teacher value-added and student outcomes in adulthood. American Economic Review, 104(9), 2633–2679. doi: 10.1257/aer.104.9.2633.
Detterman, D. K. (2016). Pity the poor teacher because student characteristics are more significant than teachers or schools. Spanish Journal of Psychology, 19, E93. doi: 10.1017/sjp.2016.88.
Duncan, G. J., Dowsett, C. J., Claessens, A., Magnuson, K., Huston, A. C., Klebanov, P., . . . Japel, C. (2007). School readiness and later achievement. Developmental Psychology, 43(6), 1428–1446.
Elango, S., García, J. L., Heckman, J. J., & Hojman, A. (2015). Early childhood education (No. w21766). Cambridge, MA: National Bureau of Economic Research.
Gelman, A., & Imbens, G. (2013). Why ask why? Forward causal inference and reverse causal questions. NBER Working Paper 19614. www.nber.org/papers/w19614.pdf
Gottfredson, L. S. (2004). Schools and the g factor. The Wilson Quarterly, 28(3), 35–45.
Haier, R. J. (2014). Increased intelligence is a myth (so far). Frontiers in Systems Neuroscience, 8, 34. doi: 10.3389/fnsys.2014.00034.
Haier, R. J. (2017). The neuroscience of intelligence. Cambridge University Press.
Hurwitz, M., Smith, J., Niu, S., & Howell, J. (2015). The Maine question: How is 4-year college enrollment affected by mandatory college entrance exams? Educational Evaluation and Policy Analysis, 37(1), 138–159. doi: 10.3102/0162373714521866.
Hyman, J. (2017). ACT for all: The effect of mandatory college entrance exams on postsecondary attainment and choice. Education Finance and Policy, 12(3), 281–311. doi: 10.1162/EDFP_a_00206.
Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.
Johnson, R. C., & Jackson, C. K. (2019). Reducing inequality through dynamic complementarity: Evidence from Head Start and public school spending. American Economic Journal: Economic Policy, 11(4), 310–349.
Kendler, K. S., Turkheimer, E., Ohlsson, H., Sundquist, J., & Sundquist, K. (2015). Family environment and the malleability of cognitive ability: A Swedish national home-reared and adopted-away cosibling control study. Proceedings of the National Academy of Sciences, 112(15), 4612–4617. doi: 10.1073/pnas.1417106112.
Kraft, M. A. (2019). Teacher effects on complex cognitive skills and social-emotional competencies. Journal of Human Resources, 54(1), 1–36.
Kuncel, N. R., Hezlett, S. A., & Ones, D. S. (2004). Academic performance, career potential, creativity, and job performance: Can one construct predict them all? Journal of Personality and Social Psychology, 86(1), 148–161. doi: 10.1037/0022-3514.86.1.148.
Lewis, N. A., Jr., & Wai, J. (in press). Communicating what we know, and what isn't so: Science communication in psychology. Perspectives on Psychological Science. https://psyarxiv.com/cfmzk
Li, W., Leak, J., Duncan, G. J., Magnuson, K., Schindler, H., & Yoshikawa, H. (2017). Is timing everything? How early childhood education program impacts vary by starting age, program duration and time since the end of the program. Working Paper, National Forum on Early Childhood Policy and Programs, Meta-analytic Database Project. Center on the Developing Child, Harvard University.
Lubinski, D., & Benbow, C. P. (2000). States of excellence. American Psychologist, 55(1), 137–150. doi: 10.1037/0003-066X.55.1.137.
Macnamara, B. N., Hambrick, D. Z., & Oswald, F. L. (2014). Deliberate practice and performance in music, games, sports, education, and professions: A meta-analysis. Psychological Science, 25(8), 1608–1618. doi: 10.1177/0956797614535810.
Moreau, D., Macnamara, B. N., & Hambrick, D. Z. (2018). Overstating the role of environmental factors in success: A cautionary note. Current Directions in Psychological Science, 28(1), 28–33. doi: 10.1177/0963721418797300.
Pinker, S. A. (2015). The sense of style: The thinking person's guide to writing in the 21st century. New York: Penguin Books.
Protzko, J. (2015). The environment in raising early intelligence: A meta-analysis of the fadeout effect. Intelligence, 53, 202–210.
Protzko, J. (2016). Does the raising IQ-raising g distinction explain the fadeout effect? Intelligence, 56, 65–71.
Protzko, J. (2017). Effects of cognitive training on the structure of intelligence. Psychonomic Bulletin & Review, 24(4), 1022–1031. doi: 10.3758/s13423-016-1196-1.
Ritchie, S. J., & Tucker-Drob, E. M. (2018). How much does education improve intelligence? A meta-analysis. Psychological Science, 29(8), 1358–1369. doi: 10.1177/0956797618774253.
Sacerdote, B. (2007). How large are the effects from changes in family environment? A study of Korean American adoptees. The Quarterly Journal of Economics, 122(1), 119–157. doi: 10.1162/qjec.122.1.119.
Sackett, P. R., Borneman, M. J., & Connelly, B. S. (2008). High-stakes testing in higher education and employment. American Psychologist, 63(4), 215–227. doi: 10.1037/0003-066X.63.4.215.
Sackett, P. R., Kuncel, N. R., Beatty, A. S., Rigdon, J. L., Shen, W., & Kiger, T. B. (2012). The role of socioeconomic status in SAT-grade relationships and in college admissions decisions. Psychological Science, 23(9), 1000–1007. doi: 10.1177/0956797612438732.
Schmidt, F. L. (2017). Beyond questionable research methods: The role of omitted relevant research in the credibility of research. Archives of Scientific Psychology, 5(1), 32–41. doi: 10.1037/arc0000033.
Schmidt, F. L., & Hunter, J. E. (2004). General mental ability in the world of work: Occupational attainment and job performance. Journal of Personality and Social Psychology, 86(1), 162–173. doi: 10.1037/0022-3514.86.1.162.
Shipstead, Z., Redick, T. S., & Engle, R. W. (2012). Is working memory training effective? Psychological Bulletin, 138(4), 628–654.
Terman, L. M. (1925). Genetic studies of genius: Volume 1. Mental and physical traits of a thousand gifted children. Palo Alto, CA: Stanford University Press.
Tucker-Drob, E. M. (2013). How many pathways underlie socioeconomic differences in the development of cognition and achievement? Learning and Individual Differences, 25, 12–20. doi: 10.1016/j.lindif.2013.01.015.
Tucker-Drob, E. M., & Bates, T. C. (2016). Large cross-national differences in gene × socioeconomic status interaction on intelligence. Psychological Science, 27(2), 138–149. doi: 10.1177/0956797615612727.
Tucker-Drob, E. M., & Briley, D. A. (2014). Continuity of genetic and environmental influences on cognition across the life span: A meta-analysis of longitudinal twin and adoption studies. Psychological Bulletin, 140(4), 949. doi: 10.1037/a0035893.
Turkheimer, E. (1991). Individual and group differences in adoption studies of IQ. Psychological Bulletin, 110(3), 392–405.
van Ijzendoorn, M. H., Juffer, F., & Poelhuis, C. W. (2005). Adoption and cognitive development: A meta-analytic comparison of adopted and nonadopted children's IQ and school performance. Psychological Bulletin, 131(2), 301–316. doi: 10.1037/0033-2909.131.2.301.
Wai, J. (2020). Communicating intelligence research. Journal of Intelligence, 8(4), 40. https://doi.org/10.3390/jintelligence8040040
Wai, J., Brown, M. I., & Chabris, C. F. (2018). Using standardized test scores to include general cognitive ability in education research and policy. Journal of Intelligence, 6(3), 37. doi: 10.3390/jintelligence6030037.
Wai, J., & Miller, D. I. (2015). Here's why academics should write for the public. The Conversation. https://theconversation.com/heres-why-academics-should-write-for-the-public-50874


23 The Neural Representation of Concrete and Abstract Concepts
Robert Vargas and Marcel Adam Just

Introduction Although the study of concept knowledge has long been of interest in psychology and philosophy, it is only in the past two decades that it has been possible to characterize the neural implementation of concept knowledge. With the use of neuroimaging technology, it has become possible to ask previously unanswerable questions about the representation of concepts, such as the semantic composition of a concept in its brain representation. In particular, it has become possible to uncover some of the fundamental dimensions of representation that characterize several important domains of concepts. Much of the recent research has been done with fMRI to predict and localize various concept representations and discover the semantic properties that underlie them. Commonly used experimental designs in this research area present single words or pictures of objects, measure the resulting activation pattern in multiple brain locations, and develop a mapping between the topographically distributed activation pattern and the semantic representation of the concept. The primary research topics concerning concept representations pertain to three issues: The composition of concept representations; the neurally defined underlying semantic dimensions; and the relation between neuroimaging findings and cognitive and psycholinguistic findings. It is these types of relationships between cortical function and meaning representation that allow us to understand more about both the way knowledge is organized in the human brain and the functional role that various brain systems play in representing the knowledge. Concepts are often qualitatively different from one another with regard to their perceptual grounding. As a result, one area of research has largely focused on the neural representations of concrete object concepts. 
However, as imaging technology and analytic techniques continue to improve, the neural representations of seemingly ethereal, abstract concepts such as ethics and truth have recently become a topic of increasing interest. In addition to the interest in such highly abstract concepts, recent research has also investigated hybrids between concrete and abstract concepts, such as emotions, physics concepts, and social concepts. These hybrid concepts are not directly perceptually grounded, but they can nevertheless be experienced. This chapter provides an overview of contemporary neuroimaging research examining the neural instantiation of concrete concepts, abstract concepts, and concepts that fall somewhere in between, which we call hybrid concepts.

(Due to a production issue, this chapter appears as the final chapter; it was intended to be in Part II.)

Contemporary Approaches to Analyzing Concept Representations
Univariate-Based Analyses The initial approach in task-related fMRI was to measure the difference in activation for a class of stimuli (such as a semantic category, like houses) relative to a "rest" condition. At each three-dimensional volume element in the brain (a voxel), a general linear model (GLM) is fit to relate the occurrence of the stimuli to the increase in activation relative to the rest condition. The result is a beta weight whose magnitude reflects the degree of condition-relevant activation in each voxel. This approach proves useful for investigating the involvement of cortical regions whose activation systematically increases or decreases relative to rest for a specific mental activity. However, with this voxel-wise univariate approach, complex relations between the activation in different brain regions within a network are often not apparent (Kriegeskorte, Goebel, & Bandettini, 2006; Mur, Bandettini, & Kriegeskorte, 2009). Moreover, treating each voxel independently of the others misses the fact that the activation pattern corresponding to a concept consists of a set of co-activating voxels that may or may not be proximal to each other. Nevertheless, the univariate approach was successful in identifying which brain regions were activated in response to a given class of concepts.
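To make the voxel-wise GLM logic concrete, here is a minimal sketch in Python with simulated data (ours, not from any study cited here); the boxcar design, noise levels, and variable names are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated design: 1 while the stimulus class (e.g., "houses") is on, 0 at rest.
n_scans, n_voxels = 200, 50
stimulus = (np.arange(n_scans) % 20) < 10            # boxcar regressor
X = np.column_stack([stimulus.astype(float),         # condition regressor
                     np.ones(n_scans)])              # baseline (rest) intercept

# Simulated voxel time series: only the first 10 voxels respond to the stimulus.
betas_true = np.zeros(n_voxels)
betas_true[:10] = 2.0
Y = stimulus[:, None] * betas_true[None, :] + rng.normal(0, 1, (n_scans, n_voxels))

# Fit the GLM independently at every voxel (ordinary least squares);
# each column of beta_hat holds that voxel's regression weights.
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
condition_betas = beta_hat[0]                        # one condition beta per voxel
```

In practice, fMRI analyses also convolve the design with a hemodynamic response function and include nuisance regressors; this sketch omits those steps to show only the per-voxel beta-weight idea.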

Multivariate Pattern Analysis (MVPA) The advent of higher-resolution imaging analyses aided in shifting the research focus from identifying the cortical regions involved in the representation of concepts to focusing on the coordinated activation across a network of brain regions or subregions (Haxby et al., 2001; Haynes & Rees, 2006). Instead of assessing the activation evoked by a class of concepts in terms of individual voxels in various brain regions considered independently of each other, multivariate analyses treated the activating voxels in conjunction with each other, as multiple dependent variables. Multivariate pattern analysis (MVPA) is graphically illustrated in Figure 23.1. MVPA refers to a family of analyses designed to take into account the multivariate relationships among the voxels that represent various concepts. Some of the most common analyses for investigating concept representations include: 1) Representational Similarity Analysis (RSA), which enables comparison of the multivariate activation patterns of different concepts; 2) Factor Analysis or Principal Components Analysis (PCA), which enables discovery of the lower-dimensional structure of distributed patterns of activation; 3) Predictive Modeling, which enables assessment of various postulated interpretations of underlying semantic structures by predicting activation patterns of concepts; and 4) Encoding Models, which enable quantitative assessment of various organizational structures hypothesized to drive the activation. These techniques tend to answer somewhat different questions.

Figure 23.1 Conceptual schematic showing differences between GLM activation-based approaches and pattern-oriented MVPA, where the same number of voxels activate (shown as dark voxels) for two concepts but the spatial pattern of the activated voxels differs.

Representational Similarity Analysis (RSA) RSA is often used to measure the similarity (or dissimilarity) of representational structures of various individual concepts or categories of concepts. The representation of a concept or a category of concepts can be defined as the evoked activation levels of some set of voxels. These activation patterns can be computed with respect to all of the voxels in the whole cortex but are often restricted to the voxels in semantically relevant regions. The most common technique is to redefine the representation of a concept from being an activation pattern to a similarity pattern with respect to the other concepts in the set (Kriegeskorte, Mur, & Bandettini, 2008a). For example, the neural representation of a concept like robin can be thought of in terms of its similarities to a set of other birds. This approach makes it possible to compare various brain subsystems in terms of the types of information they represent, and thus to characterize the processing characteristics of each subsystem. For example, RSA has been used to demonstrate the similarities in the visuospatial subsystems of humans and monkeys in the representations of visually depicted
objects (Kriegeskorte et al., 2008b). The strength of this approach is its higher level of abstraction of the neural representation of concepts, representing them in terms of their relations (similarities) to other concepts. The cost of this approach is its limited focus on the representation of the properties of individual concepts.
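A minimal sketch of the RSA logic, using a toy set of invented activation patterns (the concepts, voxel values, and ROI are hypothetical, not data from the studies cited):

```python
import numpy as np

# Toy activation patterns: one row per concept, one column per voxel in some
# semantically relevant ROI. All values are invented for illustration.
concepts = ["robin", "sparrow", "hammer", "pliers"]
patterns = np.array([
    [1.0, 0.9, 0.1, 0.0, 0.2],   # robin
    [0.9, 1.0, 0.2, 0.1, 0.1],   # sparrow (bird, similar to robin)
    [0.1, 0.0, 1.0, 0.8, 0.9],   # hammer
    [0.0, 0.2, 0.9, 1.0, 0.8],   # pliers (tool, similar to hammer)
])

def rdm(activations):
    """Representational dissimilarity matrix: 1 - Pearson r between rows."""
    return 1.0 - np.corrcoef(activations)

d = rdm(patterns)
# In this toy RDM, the two birds are closer to each other than either is to a
# tool, so the entry for (robin, sparrow) is smaller than for (robin, hammer).
```

Real analyses use many more voxels and typically compare whole RDMs across regions (or species) by rank-correlating their off-diagonal entries.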

Extracting Dimensions of Semantics (Factor Analysis / PCA) Factor analysis and PCA are used to extract neurally meaningful dimensions from high-dimensional activation patterns. "Neurally meaningful" refers to a subset of concepts systematically evoking activation from a subset of relevant voxels. For example, concrete objects that entail interaction with parts of the human body (such as hand tools) evoke activation in motor and pre-motor areas, such that a neural dimension of body–object interaction emerges (Just, Cherkassky, Aryal, & Mitchell, 2010). This approach focuses on dimensions that are shared by some concepts and de-emphasizes the differences among the concepts that share the dimension. The regions corresponding to the dimension can be localized to particular brain areas (by noting the factor loadings of various clusters of voxels). After the dimension reduction procedure finds a dimension and the items and voxels associated with it, the dimension requires interpretation. The source of the interpretation often comes from past knowledge of the functional roles of the regions involved and the nature of the items strongly associated with the dimension. For example, if hand tools obtain the highest factor scores on some factor, then that factor might plausibly be interpreted as a body–object interaction factor. (The items' factor scores indicate the strength of the association between the items and the factor.) One approach to assessing an interpretation of a dimension (such as a body–object interaction dimension in this example) is to first obtain ratings of the salience of the postulated dimension, say body–object interaction, to each of the items from an independent group of participants. For example, the raters may be asked to rate the degree to which a concept, such as pliers, is related to the hypothesized dimension body–object interaction (Just et al., 2010).
Then the correlation between the behavioral ratings and the activation-derived factor scores of the items provides a measure of how well the interpretation of the dimension fits the activation data. This technique has been used to extract and interpret semantically meaningful dimensions underlying the representations of both concrete nouns and abstract concepts (Just et al., 2010; Vargas & Just, 2019).
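The ratings-validation step described above can be sketched with simulated data; the latent "body–object interaction" dimension, the voxel loadings, and the noise levels below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: 20 concepts x 100 voxels; one latent "body-object interaction"
# dimension drives a subset of (hypothetical motor-area) voxels.
n_concepts, n_voxels = 20, 100
latent = rng.normal(size=n_concepts)               # dimension salience per concept
loadings = np.zeros(n_voxels)
loadings[:30] = rng.uniform(0.5, 1.0, 30)          # "motor" voxels load on the factor
acts = np.outer(latent, loadings) + rng.normal(0, 0.3, (n_concepts, n_voxels))

# Dimension reduction via PCA (SVD of the mean-centered activation matrix).
centered = acts - acts.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
factor_scores = U[:, 0] * S[0]                     # concept scores on dimension 1

# Hypothetical independent behavioral ratings of the same dimension.
ratings = latent + rng.normal(0, 0.2, n_concepts)

# |r| should be high if the interpretation fits (the sign of a PC is arbitrary).
r = np.corrcoef(factor_scores, ratings)[0, 1]
```
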

Predictive Modeling The goal of a predictive modeling procedure is to assess whether the activation pattern of a concept that was left out of the modeling can be predicted with reasonable accuracy, given some theoretical basis. The prediction process starts by generating a hypothesis about the underlying factor or dimension (which is based on how the items are ordered by their factor scores and on the locations of the voxels with high factor loadings). Then a linear regression model is used to define the mapping between the salience ratings of all but one item and the activation levels evoked by those items in factor-related locations (voxel clusters with high factor loadings in factor analyses that excluded the participant in question). The mapping is defined for all of the underlying factors. Then the activation prediction for the left-out item is generated by applying the mappings for all of the factors to the ratings of the left-out item. This process is repeated, each time leaving out a different item, generating an activation prediction for all of the items. Activation predictions for each concept can be made within each participant and then averaged over participants. The accuracy of the predictions provides converging evidence for the interpretation of the neurosemantic factors. Unlike correlations between behavioral ratings and factor scores for items, this approach develops a mapping that is generative or predictive, applying to items uninvolved in the modeling.
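A toy version of this leave-one-out procedure, with invented ratings on three hypothetical dimensions and a simulated linear mapping to activation (not the actual data or code of the studies cited):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: 15 concepts rated on 3 postulated semantic dimensions (think
# eating, manipulation, shelter), and 40 factor-related voxels whose
# activation is linear in those ratings plus noise.
n_items = 15
ratings = rng.normal(size=(n_items, 3))
mapping_true = rng.normal(size=(3, 40))
acts = ratings @ mapping_true + rng.normal(0, 0.2, (n_items, 40))

correct = 0
for left_out in range(n_items):
    train = [i for i in range(n_items) if i != left_out]
    # Learn the ratings -> activation mapping from the other 14 concepts.
    W, *_ = np.linalg.lstsq(ratings[train], acts[train], rcond=None)
    pred = ratings[left_out] @ W
    # Score: is the prediction more correlated with the left-out concept's
    # observed activation than with any other concept's?
    sims = [np.corrcoef(pred, acts[j])[0, 1] for j in range(n_items)]
    if int(np.argmax(sims)) == left_out:
        correct += 1

accuracy = correct / n_items
```

With a strong linear relation between ratings and activation, the left-out concept is identified well above chance (1/15 here); weak or misinterpreted dimensions drive the accuracy toward chance.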

Hypothesis-Driven Encoding Modeling Encoding models provide another, more general way to test whether a hypothesized semantic organization structure is capable of explaining the activation data for some set of concepts. A first step in the modeling is the specification of a theoretically plausible feature set that is hypothesized to account for the relationship between a stimulus set and the corresponding evoked activation patterns (Naselaris, Kay, Nishimoto, & Gallant, 2011). For example, the co-occurrence of noun concepts with verbs in a large text corpus may account for the relationship between individual concepts and the activation patterns for those concepts, say in a regression model. The resulting beta weights from the regression model quantify the degree to which each feature determines the relationship between the stimuli and neural activity (Mitchell et al., 2008). The ability of this mapping to generalize to novel concepts, either in activation space or in feature space, provides a quantitative assessment of the plausibility of the hypothesized relation. This approach is especially useful for representations that are less clearly mapped in the brain, such as abstract concepts, enabling an evaluation of the neural plausibility of theories of abstract concept representation (Wang et al., 2018). More recently, encoding models have been used with semantic vectors, a feature structure constructed by extracting information from the co-occurrence of words in a large text corpus, to serve as a basis for predictions of large-scale sets of concept representations (Pereira et al., 2018). Encoding models have also been used to measure the ability of theoretically derived semantic feature structures to explain neural activation data for sentences (Yang, Wang, Bailer, Cherkassky, & Just, 2017). Encoding models are a
flexible tool that allows for the quantitative evaluation of the ability of theoretically motivated feature structures to account for brain activation patterns.
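A minimal simulated sketch of an encoding model of this kind; the feature vectors stand in for, say, corpus co-occurrence features, and all sizes, names, and noise levels are assumptions of ours:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy encoding model: each concept is described by a hypothesized feature
# vector; a linear map from features to voxels predicts its activation.
n_train, n_test, n_feat, n_vox = 40, 10, 8, 60
features = rng.normal(size=(n_train + n_test, n_feat))
encoding_true = rng.normal(size=(n_feat, n_vox))
acts = features @ encoding_true + rng.normal(0, 0.5, (n_train + n_test, n_vox))

# Fit per-voxel regression weights ("beta weights") on the training concepts.
B, *_ = np.linalg.lstsq(features[:n_train], acts[:n_train], rcond=None)

# Evaluate generalization: correlation between predicted and observed
# activation patterns for held-out (novel) concepts.
pred = features[n_train:] @ B
r_per_concept = [np.corrcoef(pred[i], acts[n_train + i])[0, 1]
                 for i in range(n_test)]
mean_r = float(np.mean(r_per_concept))
```

If the hypothesized feature set captures the organization driving the activation, the held-out correlations are high; an implausible feature set yields correlations near zero, which is the sense in which the model "tests" a theory.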

Neurosemantic Structure of Concrete Object Representations Object concepts are the most perceptually driven of concept representations. Consequently, the neural representation of object concepts is fairly well understood, because the neural organization of low-level perceptual information is well understood (Grill-Spector & Malach, 2004; Martin, 2007). Haxby et al. (2001) showed that pictures of different objects could be related to each other based on their pattern of activation in the visuospatial pathway, specifically in the fusiform face area (FFA) and parahippocampal place area (PPA). Patterns of activation in these regions were distinguishable in terms of the object categories being represented (i.e., faces vs. houses). It seems clear that a substantial part of concrete object representations consists of the representation of their perceptual properties. Moreover, it has been possible to determine the sequence in which various types of perceptual information become activated as the thought of a concrete object emerges. Recent MEG research has shown that the temporal trajectory of the neural activation for object representations starts with low-level visual properties such as image complexity, which begins to be activated about 75 ms after stimulus onset in early bilateral occipital cortex. Later, at 80–120 ms, information concerning more complex, categorically defined shapes (e.g., has eyes, has four legs) begins to be activated along the left ventral temporal cortex and anterior temporal regions (Clarke, Taylor, Devereux, Randall, & Tyler, 2013). This early-onset representation suggests that coarse categorical distinctions between objects are rapidly represented along a left-hemispheric feed-forward neural pipeline. After this initial representation is generated, more complex semantic features take form through recurrent activation and the integration of more distributed cortical systems at 200–300 ms.
This temporal trajectory from simple to complex information suggests a cumulative pipeline designed to construct meaning from distributed semantic features. Beyond the understanding that concrete object representations are based in large part on the objects' perceptual properties, several interesting questions remain, such as how the differing perceptual properties of an object are integrated in the object representation, and what semantic properties underlie the organization of the representations of differing objects.

Hub-and-Spoke Model of Feature Integration in Concept Representations Any individual concept representation is thought to be composed of a network of semantic features (Collins & Loftus, 1975). Connections to more similar
(closer) semantic representations are more likely and easier to come to mind than more distal ones. The anterior temporal lobe (ATL), sometimes referred to as the convergence zone or hub, has been credited with incorporating the individual semantic features of concepts (the spokes, in this analogy) into an integrated representation of the concept (Meyer & Damasio, 2009). Recent fMRI research suggests that this integration of semantic features in the brain is localized to the ATL. One study showed that combining color-related activation coded in the right V4 region of the occipital cortex and shape-related activation coded in the lateral occipital cortex (LOC) allowed visual objects to be distinguished in the ATL (Coutanche & Thompson-Schill, 2015). Although the ATL has also been shown to activate for abstract concepts (Hoffman, 2016), a study similar to Coutanche and Thompson-Schill (2015) has yet to be conducted showing that individual abstract concepts can be decoded from the ATL based on their composite semantic features. In sum, the ATL is thought to act as a cognitive mechanism that integrates the perceptual and verbal (i.e., concrete and abstract) information comprising the representation of a concept (Lambon Ralph, 2014).

Semantic Dimensions of Concrete Concepts Contemporary research into concrete object concepts has progressed beyond the focus on perceptual aspects of concept representations and begun to examine higher-level semantic properties of concrete concept representations. This approach generally utilizes dimension reduction techniques such as factor analysis, first on an individual participant level then at the group level, to investigate semantic dimensions that are present in the neural representations across individuals (Just, Cherkassky, Buchweitz, Keller, & Mitchell, 2014). This dimension reduction approach applied to a set of activation patterns has the advantage of discovering neurally driven dimensions of meaning rather than imposing a previously hypothesized semantic organization. Just et al. (2010) utilized this approach to uncover three semantic dimensions underlying the representation of 60 words referring to concrete nouns (e.g., hammer, apple). Specifically, they found that these 60 concrete concepts could be characterized by the way they relate to eating, manipulation (or body– object interaction), and shelter (or enclosure). Moreover, each of these dimensions was associated with a small set of cortical regions. The shelter dimension was associated with activation in regions of bilateral parahippocampal place area, bilateral precuneus, and left inferior frontal gyrus. The manipulation dimension was associated with activation in regions of left supramarginal gyrus and left pre- and post-central gyrus (the participants were right-handed). The eating dimension was associated with activation in regions of the left inferior and middle frontal gyrus and left inferior temporal gyrus. These results indicate the beginnings of a biologically plausible basis set for concrete nouns and highlight semantic properties beyond a visuospatial domain.


Other research has sought to discover semantic dimensions of non-word or picture concept representations using a different approach. Principal Components Analysis (PCA) was applied to the activation evoked by 1800 object and action concepts shown in short movie clips (Nishimoto et al., 2011). This approach subdivided the brain by relating the similarities of the activation patterns among the concepts to their co-occurrence statistics in a large text corpus. The technique was also applied to the activation patterns evoked by natural continuous speech (Huth, De Heer, Griffiths, Theunissen, & Gallant, 2016). Both the video-clip and the natural-speech studies related neural activation similarities to corpus co-occurrence information to locate semantically consistent regions within the cerebral cortex based on domain-specific information. This parcellation approach associated the activation of various regions and semantic categories with individual concepts. The 12 interpretable semantic categories from the PCA were: mental (e.g., asleep); emotional (e.g., despised); social (e.g., child); communal (e.g., schools); professional (e.g., meetings); violent (e.g., lethal); temporal (e.g., minute); abstract (e.g., natural); locational (e.g., stadium); numeric (e.g., four); tactile (e.g., fingers); and visual (e.g., yellow). Aside from the format of stimulus presentation, the notable distinction between the dimension reduction approaches of Just et al. (2010) and Huth et al. (2016) is that Huth et al. generated semantic dimensions from the mapping between activation and co-occurrence, whereas Just et al. generated dimensions from the activation patterns themselves. The exploration of the underlying dimensions of concrete concepts helps provide a basis for the semantic organization of perceptible concepts beyond basic visuospatial properties.

Neurosemantic Signatures of Abstract Concepts The representations of abstract concepts, such as ethics and law, are neurally and qualitatively distinct from those of concrete concepts. Abstract concepts, by definition, have no direct link to perception, with the exception of some form of symbolic representation (e.g., Lady Justice holding a scale to represent the concept of law or justice). The conventional view of abstractness portrays it as an absence of a perceptual basis, that is, the opposite of concreteness (Barsalou, 1999; 2003; Brysbaert, Warriner, & Kuperman, 2014; Wang, Conder, Blitzer, & Shinkareva, 2010). Although it is easy to define abstract concepts as those lacking concreteness, this definition does not describe the psychological or neurocognitive properties and mechanisms of abstract concepts. Concrete and abstract concepts generally evoke different activation patterns, as a meta-analysis showed (Wang et al., 2010). This meta-analysis indicated that the two types of concepts differ in their activation in areas related to verbal processing, particularly the left inferior frontal gyrus (LIFG).
r. vargas and m. a. just

Abstract concepts elicited greater activation than concrete concepts in such verbal processing areas. By contrast, concrete concepts elicited greater activation than abstract concepts in areas related to visuospatial processing (precuneus, posterior cingulate, and fusiform gyrus). This meta-analysis was limited to univariate comparisons of categories of concepts and did not have access to the activation patterns evoked by individual concepts. Such univariate contrasts can overlook nuanced distinctions in representational structure and critical relationships across neural states and regions (Mur et al., 2009). Through the use of MVPA techniques, more recent studies have begun to examine the underlying semantic structure of sets of abstract concepts. The next section focuses on various imaging studies examining the neural activation patterns associated with abstract concepts and explores the possible semantic structures that are specific to abstract concepts.

Neurosemantic Dimensions of Abstract Meaning As in the case of concrete concepts, the semantic dimensions underlying abstract concept categories can be identified from their activation patterns. One of the first attempts to decode abstract semantic information from neural activation was conducted by Anderson, Kiela, Clark, and Poesio (2017). A set of individual concepts belonging to various taxonomic categories (tools, locations, social roles, events, communications, and attributes) were decoded from their activation patterns. Whether a concept belonged to one of two abstract semantic categories (i.e., Law or Music) was also decoded from the activation patterns of individual concepts. Although these abstract semantic categories could be decoded from their activation patterns, the localization of this dissociation remains unclear. Neurally-based semantic dimensions underlying abstract concepts differ from the dimensions underlying concrete concepts. Vargas and Just (2019) investigated the fMRI activation patterns of 28 abstract concepts (e.g., ethics, truth, spirituality), focusing on individual concept representations and the relationships among their activation profiles. Factor analyses of the activation patterns evoked by the stimulus set revealed three underlying semantic dimensions. These dimensions corresponded to (1) the degree to which a concept was Verbally Represented, (2) whether a concept was External (or Internal) to the individual, and (3) whether the concept contained Social Content. The Verbal Representation dimension was present across all participants and was the most salient of the semantic dimensions. Concepts with large positive factor scores on this factor included compliment, faith, and ethics, while concepts with large negative scores included gravity, force, and acceleration. The former three concepts seem far less perceptual than the latter three.
For the Externality factor, a concept that is external is one that requires the representation of the world
outside oneself and the relative non-involvement of one's own state. An internal concept is one that involves the representation of the self. At one extreme of the dimension lie concepts that are external to the self (e.g., causality, sacrilege, and deity). At the other extreme lie concepts that are internal to the participant (e.g., spirituality and sadness). The last semantic dimension was interpreted to correspond to Social Content. The concepts at one extreme of this dimension included pride, gossip, and equality, while the concepts at the other extreme included heat, necessity, and multiplication. Together these semantic dimensions underlie the neural representations of the 28 abstract concepts. One surprising finding was that the regions associated with the Verbal Representation dimension were the same as those found in the meta-analysis conducted by Wang et al. (2010) that contrasted the activation between concrete and abstract concepts. Activation in the LIFG (a region clearly involved in verbal processing) was evoked by concepts such as faith and truth, while the left supramarginal gyrus (LSMG) and left lateral occipital complex (LOC), both of which are involved in different aspects of visuospatial processing, were associated with concepts such as gravity and heat. Moreover, the factor scores for the Verbal Representation factor suggested that the abstractness of the neural patterning in these regions for an individual concept is represented as a point on a continuum between language systems and perceptual processing systems. This interpretation corresponds to the intuition that abstractness is not a binary construct but rather a gradient reflecting the degree to which a concept is translated into a verbal encoding. This conclusion is somewhat surprising given that the 28 concepts are all qualitatively abstract, in that they have no direct perceptual referent.
The amount of activation in LIFG evoked by a given abstract concept corresponds to its Verbal Representation factor score. These results raise an interesting theoretical and psychological question regarding the role of neural language systems, particularly LIFG, in the verbal representation of abstract concepts. That is, what does it mean, neurally and psychologically, for an abstract concept to be verbally represented?

Abstract Concepts as Verbal Representations What does it mean for an abstract concept to be represented in regions involved in verbal processing and to evoke activation in the LIFG? When the LIFG is temporarily disrupted through repetitive transcranial magnetic stimulation (TMS), healthy participants respond about 150 ms more slowly when comprehending abstract concepts (e.g., chance) (Hoffman, Jefferies, & Lambon Ralph, 2010). The same TMS procedure had no effect on the time needed to respond to concrete concepts (e.g., apple). However, these differences in the impact of TMS were nullified when the abstract concepts were presented within a context (e.g.,
“You don’t stand a chance”). These results suggest that the abstractness of a concept depends on whether it requires integration of meaning across multiple contexts (Crutch & Warrington, 2005; 2010; Hoffman, 2016; Hayes & Kraemer, 2017), and that the LIFG is involved in this context-dependent integration of meaning. Given that LIFG appears to be involved in contextualizing the meaning of abstract concepts (Hoffman et al., 2010), and that the magnitude of activation in LIFG is directly proportional to the degree to which a concept is verbally represented (Vargas & Just, 2019), these results together suggest that activation in LIFG reflects the amount of mental activity required to contextualize the meaning of a lexical concept. LIFG shows greater activation for sentence-level representations than for word-level concepts (Xu, Kemeny, Park, Frattali, & Braun, 2005). It may be that the central cognitive mechanism underlying the neural activation in LIFG is the integration of meaning across multiple representations to form a new representation that is a product of its components. That is, the components of meaning of apple require less computation (in LIFG) to generate a composite representation than do those of chance. Also, providing a context for chance, as in “You don’t stand a chance”, reduces the cognitive workload by providing a more explicit version of its meaning. A similar mechanism can account for the greater activation in LIFG for sentences than for individual words, because constructing a sentence-level representation requires combining the meanings of individual concept representations in a mutually context-constraining way. As previously discussed, another region involved in the integration of meaning for concepts is the anterior temporal lobe (ATL).
ATL has been implicated in the integration of semantic features to form a composite representation of object concepts (Coutanche & Thompson-Schill, 2015). However, unlike LIFG, ATL does not appear to differentiate among abstract concepts that vary in the degree to which they are verbally represented (as indexed by their factor scores; Vargas & Just, 2019). In sum, integrating abstract concept representations with other concepts in a sentence seems to require additional computation. However, it is unclear whether these integrative computations operate on episodic contexts (as suggested by the results of Hoffman et al., 2010), on specific concept representations, or on some more general amodal representational format.

Hybrid Concepts: Neither Completely Concrete nor Completely Abstract Hybrid concepts are concepts that can be experienced directly but require additional processing beyond the five basic perceptual faculties to be
evoked. These concepts do not fit neatly within the dichotomy of concrete vs. abstract. For example, the concept envy cannot be tasted, seen, heard, smelled, or touched, but it inarguably can be experienced as an internal event with perceptual repercussions (e.g., feeling lethargic, crying). We propose that envy and other concepts referring to psychological states are hybrid. The embodied cognition view (Barsalou, 1999) expands the definition of perception beyond the five basic perceptual faculties to include the experiences of proprioception and emotion. Hybrid concepts fall outside the strict realm of the sensory-perceptual but within the realm of psychological experience as described by embodiment theory (e.g., proprioception and emotion). Emotion, physics, and social concepts are not usually defined exclusively with respect to their concreteness/abstractness, but they serve as excellent exemplars of hybrid concepts in that they can be perceptually experienced without directly evoking any of the five senses. Moreover, the neurosemantic underpinnings of hybrid concepts are not yet well understood. The following three sections describe research investigating the neurosemantic organization of hybrid concepts, specifically emotions, physics concepts, and social concepts.

Neurosemantic Dimensions of Meaning Underlying Emotion Concepts Meta-analyses of activation contrasts investigating emotion concepts reveal six functional networks (Kober et al., 2008), including limbic regions (i.e., amygdala, hypothalamus, and thalamus), areas related to top-down executive control (i.e., dorsolateral prefrontal cortex), areas related to the processing of autobiographical information (i.e., posterior cingulate cortex) (Klasen, Kenworthy, Mathiak, Kircher, & Mathiak, 2011), visual association regions, and subregions within the motor cortex (Phan, Wager, Taylor, & Liberzon, 2002). These networks suggest that emotion representations partially involve cognitive functions related to more complex perceptual functioning (i.e., motion and visual association). Furthermore, the involvement of regions related to top-down executive functioning and to the processing of autobiographical information suggests that emotion concepts recruit cognitive faculties not only for basic perceptual representations but also for higher-order cognitive functions. Although these findings identify regions involved in emotion representation and processing, they do not provide insight into how different emotions are neurally distinguished. Recent MVPA analyses examining the neural representations of emotion concepts have provided insight into the way the representations of different emotions are neurally organized. Kassam, Markey, Cherkassky, Loewenstein, and Just (2013) examined the evoked neural activation patterns of 18 emotion concepts such as happiness, pride, envy, and sadness. The participants in this study did not merely think about the meaning of a presented emotion word; they tried to evoke the emotion in themselves at that moment. Factor analyses of
the activation profiles for the 18 emotion concepts, followed by a predictive model to validate the interpretations, revealed three underlying dimensions of meaning. These semantic dimensions organize the emotion concepts according to the valence of the emotion (positive or negative), its degree of arousal (fury vs. annoyance), and its degree of social involvement (i.e., whether another person is included in the representation, as is the case for envy but not necessarily for sadness). Each of these dimensions of representation was found to correspond to activation distributed across several cortical regions. The brain locations associated with the valence of an emotion concept included the right medial prefrontal cortex, left hippocampus, right putamen, and the cerebellum. The brain locations associated with the arousal dimension included the right caudate and left anterior cingulum. Finally, brain locations associated with sociality included the bilateral cingulum and somatosensory regions. Both univariate and multivariate approaches provided neural evidence for the involvement of perceptual and higher-cognitive faculties. The multivariate analyses provided additional insight into the dimensions along which the individual emotion concepts are differentiated from each other. Thus, even though emotions are very different from object concepts, the principles underlying their neural representations are rather similar to those of other types of concepts.

Neurosemantic Dimensions of Meaning Underlying Physics Concepts Research investigating the neural representation of physics concepts suggests that their neural organization somewhat reflects the physical world they refer to, such as the movements or interactions of objects. Mason and Just (2016) investigated the neural activation patterns of 30 elementary physics concepts (e.g., acceleration, centripetal force, diffraction, light, refraction). Factor analyses of the activation patterns evoked by the 30 concepts revealed four underlying semantic dimensions. These dimensions were periodicity (typified by words such as wavelength, radio waves, frequency), causal-motion/visualization (e.g., centripetal force, torque, displacement), energy flow (electric field, light, direct current, sound waves, and heat transfer), and algebraic/equation representation (velocity, acceleration, and heat transfer), the last of which comprises concepts associated with familiar equations. The regions associated with each semantic dimension provide insight into the underlying cognitive role of the region. The periodicity dimension was associated with dorsal premotor cortex, somatosensory cortex, bilateral parietal regions, and the left intraparietal sulcus; these regions have been shown to activate for rhythmic finger tapping (Chen, Zatorre, & Penhune, 2006). The causal-motion/visualization dimension was associated with the left intraparietal sulcus, left middle frontal gyrus, parahippocampus, and occipital-temporal-parietal junction; these regions have been shown to be involved in attributing causality to the interactions between objects and data (Fugelsang & Dunbar,
2005; Fugelsang, Roser, Corballis, Gazzaniga, & Dunbar, 2005). The algebraic/equation representation dimension was associated with the precuneus, left intraparietal sulcus, left inferior frontal gyrus, and occipital lobe; these regions have been implicated in the executive processing and integration of visuospatial and linguistic information in calculation (Benn, Zheng, Wilkinson, Siegal, & Varley, 2012) and in more general arithmetic processing. The regions associated with energy flow were middle temporal and inferior frontal regions, which in the context of physics concepts are thought to represent the visual information associated with abstract concepts (Mason & Just, 2016). Together, these results suggest that the neural representations of physics concepts, many of them developed only a few hundred years ago, draw on the human brain's ancient ability to perceive and represent physical objects and events.

Neurosemantic Dimensions of Meaning Underlying Social Concepts Research comparing the neural representation of social concepts between healthy controls and individuals with high-functioning autism has revealed three semantic dimensions involved in the neural representations of social interactions (Just et al., 2014). Participants in this study thought about the representations of eight verbs describing social interactions (compliment, insult, adore, hate, hug, kick, encourage, and humiliate), considered from the perspective of either the agent or the recipient of the action. Factor analyses of the neural activation profiles for these 16 concept–role combinations revealed semantic dimensions associated with self-related cognition (hate in the agent role and humiliate in the recipient role), social valence (adore and compliment), and accessibility/familiarity, relating to the ease or difficulty of semantic access. The self dimension was associated with activation in the posterior cingulate, an area commonly implicated in the processing of autobiographical information. The social valence factor included the caudate and putamen for both controls and individuals with autism. The accessibility/familiarity factor included regions that are part of the default mode network, particularly the middle cingulate, right angular gyrus, and right superior medial frontal gyrus. Because this study compared young adult healthy controls with participants with high-functioning ASD, it provided an important glimpse into how a psychiatric or neurological condition can systematically alter the way a certain class of concepts is thought about. fMRI neuroimaging allows the precise measurement of how a given concept is neurally represented and can specify precisely how a condition like ASD alters that representation.
A particularly interesting finding was that the members of the two participant groups could be very accurately distinguished by their neural representations of these social interaction concepts. More specifically, the ASD group lacked a self dimension, showing little activation in the regions associated with the self dimension in the healthy control group. The findings suggest that when the ASD participants thought about a concept
like hug, it involved very little thought of themselves. By contrast, the control group thought about themselves when thinking about what hug means. Thus, the assessment of neural representations of various classes of concepts has the potential to identify the presence and the nature of concept alterations in psychiatric or neurological conditions. The neurosemantic architecture of hybrid concepts (as exemplified by emotion, physics, and social concepts) suggests that these concepts relate us to the external world (e.g., the causal-motion/visualization dimension for physics concepts or the self/other distinction for social concepts). Moreover, the magnitudes of perceptual experience are also captured by the neural representations (e.g., the degree of arousal for emotion concepts). Taken together, these results suggest that hybrid concepts are composed, in part, of perceptual states that translate our perceptual world into various mental states.
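The group-level finding above — that control and ASD participants can be told apart by their neural representations — amounts to a classification problem over per-participant features such as factor scores. The sketch below uses fabricated numbers (group sizes, score distributions) purely to show the shape of such an analysis, not the study's actual classifier or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Fabricated stand-in: each participant gets factor scores on the
# "self" dimension for 16 concept-role items. Controls (label 0) load
# on the dimension; the ASD group (label 1) shows little loading.
n_per_group = 17
controls = rng.normal(loc=1.0, scale=0.5, size=(n_per_group, 16))
asd = rng.normal(loc=0.0, scale=0.5, size=(n_per_group, 16))
X = np.vstack([controls, asd])
y = np.array([0] * n_per_group + [1] * n_per_group)

# Cross-validated classification of group membership from the scores
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(round(acc, 2))
```

With a group difference built in by construction, the cross-validated accuracy is well above the .5 chance level, which is the logic behind using concept representations as a diagnostic marker.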

Commonality of Individual Concrete and Abstract Concepts across People The commonality across participants of the neurally-defined dimensions underlying various semantic domains foreshadows one of the most interesting findings concerning the neural representations of individual concepts: the neural representations of all the concepts studied so far are rather similar across people. This section focuses on the commonality of individual concept representations across individuals. The general approach to quantitatively evaluating this commonality is to train a machine learning classifier on the labeled activation data of all but one participant for a given set of concepts, and then to classify or make predictions concerning the concept representations of the left-out participant. In a cross-validation protocol, this process is repeated with a different person left out on each iteration, and the accuracies of the predictions are then averaged across iterations. This averaged accuracy measures the commonality of a set of concept representations. This approach has shown that there is considerable commonality of the neural representations of concepts across healthy participants, and the commonality was present for concrete, abstract, and hybrid concepts. Decoding accuracies across participants were high and approximately equivalent across concept types (mean rank accuracy = .72 for concrete concepts (Just et al., 2010); .74 for abstract concepts (Vargas & Just, 2019); .71 for physics concepts (Mason & Just, 2016); .70 for emotion concepts (Kassam et al., 2013); and .77 for social concepts (Just et al., 2014)).
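The leave-one-participant-out protocol just described can be made concrete with a simple nearest-prototype decoder and the rank-accuracy measure. Everything below is synthetic (participant count, voxel count, noise level); it illustrates the protocol, not any particular study's classifier.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: 10 participants x 28 concepts x 200 voxel features,
# built from a shared concept structure plus participant-specific noise.
n_subj, n_conc, n_vox = 10, 28, 200
shared = rng.normal(size=(n_conc, n_vox))
data = shared[None] + 0.8 * rng.normal(size=(n_subj, n_conc, n_vox))

def rank_accuracy(prototypes, test):
    """Mean normalized rank of the correct concept: 1 = perfect, .5 = chance."""
    ranks = []
    for i, pattern in enumerate(test):
        # correlate the held-out pattern with every training prototype
        r = [np.corrcoef(pattern, proto)[0, 1] for proto in prototypes]
        order = np.argsort(r)[::-1]                # best match first
        rank = np.where(order == i)[0][0]          # 0 = correct match ranked first
        ranks.append(1 - rank / (len(prototypes) - 1))
    return float(np.mean(ranks))

# Leave-one-participant-out cross-validation
accs = []
for s in range(n_subj):
    prototypes = data[np.arange(n_subj) != s].mean(axis=0)  # from the others
    accs.append(rank_accuracy(prototypes, data[s]))
print(round(np.mean(accs), 2))
```

Because the synthetic participants share a common structure, the mean rank accuracy lands well above the .5 chance level, mirroring the logic (though not the exact numbers) of the studies cited above.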
Although a large proportion of the concepts in a brain reading study are accurately predicted across participants, there are always a few items at the negative tail of the accuracy distribution, and it would be interesting to know if the items with lower across-participant commonalities had some
distinguishing properties. In a study of sentence decoding across three languages (Portuguese, Mandarin, and English), Yang et al. (2017) found lower across-language, across-participant decoding accuracies for concepts that are more abstract and related to social and mental activities (e.g., happy, negotiation, artist). They attributed the lower cross-language commonality of such items to abstract and socially related concept domains being more culturally determined. For the set of 28 abstract concepts in the Vargas and Just (2019) study, the concepts that were more prototypically abstract (e.g., sacrilege and contract) were somewhat less accurately predicted across participants than concepts that tend to be more hybrid (e.g., force and pride). However, there were exceptions to this trend. For example, concepts such as anger and gossip were less well predicted than others across participants (although still with above-chance accuracy), and these concepts tended to be highly instantiable. By contrast, concepts such as necessity and causality, which are highly verbally represented, were more accurately predicted across participants.

Relating Neuroimaging Findings and Corpus Co-occurrence Measures One particular class of encoding models, as was previously discussed, attempts to relate neural representations to some well-defined feature set. Defining the meaning of a concept in a computationally tractable way has long been a challenge, and it is relevant here because such a definition can be systematically related to the neural representation of the concept. One of the early answers to this challenge suggested that concepts can be characterized in terms of the concepts with which they co-occur in some large text corpus (Landauer & Dumais, 1997). A low-dimensional factorization (about 300 dimensions) of a large co-occurrence matrix yields a semantic vector representation of the words in the corpus (Pennington, Socher, & Manning, 2014; Deerwester, Dumais, Furnas, Landauer, & Harshman, 1990); the method of deriving this lower-dimensional feature space varies with the specific approach. The utility of semantic vector representations comes from their convenience in natural language processing applications. But can the semantic vector representation of a concept like apple be informative about the neural representation of apple? Semantic vector representations can be used as the predictive basis of an encoding model: a mapping from semantic vectors to brain activation data is learned, and this learned mapping can then be used to generate predicted brain images for concepts for which no data have been collected (Mitchell et al., 2008). This approach provides the basis for generating a set of concept representations which can then
be explored for its semantic properties (Pereira et al., 2018). Moreover, it enables the study of many more concept representations than can easily be acquired in time- and cost-limited fMRI studies. However, it is unclear whether encoding models based on semantic vector representations illuminate the difference between concrete and abstract concept representations. Co-occurrence structures have also been used to evaluate the neural instantiation of associative theories of abstract concept representations. Wang et al. (2018) used RSA to compare the organizational structures of 360 abstract concept representations, examining the representational structure of fMRI activation patterns across the whole brain alongside concept co-occurrence properties in a large corpus. The goal was to show that each of these viable organizational principles is instantiated uniquely within the brain. Co-occurrence properties represent the theoretical view that abstract concepts are represented in terms of their associations with other concepts. The results showed that the relationship between co-occurrence representations and brain activity for the 360 abstract concepts was largely left-lateralized and seemed to uniquely activate areas traditionally associated with language processing, such as left lateral temporal, inferior parietal, and inferior frontal regions.
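An encoding model of the kind introduced by Mitchell et al. (2008) can be sketched as a regularized regression from semantic vectors to activation images, evaluated on held-out concepts. The vectors and images below are random stand-ins, and the dimensionalities are kept small for illustration (real semantic spaces are around 300-dimensional).

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)

# Random stand-ins: 50-dim semantic vectors for 60 concepts and their
# synthetic 400-voxel activation images, linked by a hidden linear map.
n_conc, n_dim, n_vox = 60, 50, 400
vectors = rng.normal(size=(n_conc, n_dim))
true_map = rng.normal(size=(n_dim, n_vox)) / np.sqrt(n_dim)
images = vectors @ true_map + 0.1 * rng.normal(size=(n_conc, n_vox))

# Learn the vector -> image mapping on 58 concepts, then generate
# predicted images for the two held-out concepts.
model = Ridge(alpha=10.0).fit(vectors[:-2], images[:-2])
predicted = model.predict(vectors[-2:])
print(predicted.shape)  # (2, 400)
```

Evaluation typically asks whether a predicted image is closer (e.g., by correlation) to the held-out concept's observed image than to a distractor's, which is the cross-validated test of whether corpus-derived meaning carries over to neural representation.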

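The RSA comparison described above — correlating a neural dissimilarity structure with a corpus-derived one — follows a standard recipe that can be sketched as below. The co-occurrence vectors and activation patterns are random stand-ins with a built-in shared structure; the concept count and noise level are arbitrary, not values from Wang et al. (2018).

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)

# Stand-ins for 30 abstract concepts: corpus co-occurrence vectors and
# voxel activation patterns that share structure by construction.
n_conc = 30
cooc = rng.normal(size=(n_conc, 100))
brain = cooc @ rng.normal(size=(100, 250)) + 2.0 * rng.normal(size=(n_conc, 250))

# Representational dissimilarity matrices (condensed form), compared
# with a rank correlation, as in standard RSA.
rdm_model = pdist(cooc, metric="correlation")
rdm_brain = pdist(brain, metric="correlation")
rho, _ = spearmanr(rdm_model, rdm_brain)
print(round(rho, 2))
```

In a whole-brain analysis this comparison is repeated within a searchlight or parcel, which is how the left-lateralized language regions emerge as the locus of the co-occurrence structure.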
Conclusion The understanding of how concepts are represented in the human brain has advanced significantly, driven by innovations in imaging technology and multivariate machine learning techniques. One new insight concerns how human- and self-centric information is included in neural concept representations. No dictionary definition specifies how a hammer is to be wielded, and yet that knowledge is an important part of how hammer is neurally represented. Thus, part of the neural representation of a physical object specifies how our bodies interact with the object (Hauk & Pulvermüller, 2004; Just et al., 2010). Part of the neural representation of gossip specifies a social interaction. The concept of spirituality evokes self-reflection. The insight, then, is that many neural representations of concepts contain human-centric information in addition to semantic information. A second insight concerns the dependence of abstract concepts on the verbal representations of other concepts. Representing the meaning of abstract concepts may require a greater integration of meaning across multiple other concept representations than is the case for concrete concepts. Abstract concepts evoke activation in cortical regions associated with language processing, particularly LIFG, which may reflect the neurocomputational demand of this increased integration of meaning. A third insight is that the semantic components of a neural representation of a concept consist of the representations within various neural subsystems, such
as the motor system, the social processing system, and the visual system. These neural subsystems constitute the neural indexing or organizational system. A fourth insight concerns the remarkable degree of commonality of neural representations across people and languages. Although concept representations phenomenologically seem very individualized, the neural representations indicate very substantial commonality, while still leaving room for some individuality. The commonality probably arises from the commonality of human brain structures and their capabilities, and from commonalities in our environment. We all have a motor system for controlling our hands, and all apples have a similar shape, so our neural representations of holding an apple are similar. A fifth insight is that the principles regarding the neural representations of physical objects extend without much modification to more abstract and hybrid concepts. Although it is easy to see why the concept of apple is similarly neurally represented in all of us, it is more surprising that an emotion like anger evokes a very similar activation pattern in all of us. Moreover, even abstract concepts like ethics have a systematic neural representation that is similar across people. Although there is much more to human thought than the representation of concepts, these representations constitute an important set of building blocks from which thoughts are constructed. The neuroimaging of these concept representations reveals several of their important properties as well as hints as to how they might combine to form more complex thoughts.

Acknowledgments This research was supported by the Office of Naval Research Grant N00014–16-1-2694.

References
Anderson, A. J., Kiela, D., Clark, S., & Poesio, M. (2017). Visually grounded and textual semantic models differentially decode brain activity associated with concrete and abstract nouns. Transactions of the Association for Computational Linguistics, 5, 17–30.
Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22(4), 577–660.
Barsalou, L. W. (2003). Abstraction in perceptual symbol systems. Philosophical Transactions of the Royal Society B: Biological Sciences, 358(1435), 1177–1187. https://doi.org/10.1098/rstb.2003.1319
Benn, Y., Zheng, Y., Wilkinson, I. D., Siegal, M., & Varley, R. (2012). Language in calculation: A core mechanism? Neuropsychologia, 50(1), 1–10. https://doi.org/10.1016/j.neuropsychologia.2011.09.045
Brysbaert, M., Warriner, A. B., & Kuperman, V. (2014). Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods, 46(3), 904–911. https://doi.org/10.3758/s13428-013-0403-5
Chen, J. L., Zatorre, R. J., & Penhune, V. B. (2006). Interactions between auditory and dorsal premotor cortex during synchronization to musical rhythms. NeuroImage, 32(4), 1771–1781. https://doi.org/10.1016/j.neuroimage.2006.04.207
Clarke, A., Taylor, K. I., Devereux, B., Randall, B., & Tyler, L. K. (2013). From perception to conception: How meaningful objects are processed over time. Cerebral Cortex, 23(1), 187–197. https://doi.org/10.1093/cercor/bhs002
Collins, A. M., & Loftus, E. F. (1975). A spreading-activation theory of semantic processing. Psychological Review, 82(6), 407–428. https://doi.org/10.1037/0033-295X.82.6.407
Coutanche, M. N., & Thompson-Schill, S. L. (2015). Creating concepts from converging features in human cortex. Cerebral Cortex, 25(9), 2584–2593. https://doi.org/10.1093/cercor/bhu057
Crutch, S. J., & Warrington, E. K. (2005). Abstract and concrete concepts have structurally different representational frameworks. Brain, 128(3), 615–627. https://doi.org/10.1093/brain/awh349
Crutch, S. J., & Warrington, E. K. (2010). The differential dependence of abstract and concrete words upon associative and similarity-based information: Complementary semantic interference and facilitation effects. Cognitive Neuropsychology, 27(1), 46–71. https://doi.org/10.1080/02643294.2010.491359
Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., & Harshman, R. (1990). Indexing by latent semantic analysis. Journal of the Association for Information Science and Technology, 41(6), 391–407.
Fugelsang, J. A., & Dunbar, K. N. (2005). Brain-based mechanisms underlying complex causal thinking. Neuropsychologia, 43(8), 1204–1213. https://doi.org/10.1016/j.neuropsychologia.2004.10.012
Fugelsang, J. A., Roser, M. E., Corballis, P. M., Gazzaniga, M. S., & Dunbar, K. N. (2005). Brain mechanisms underlying perceptual causality. Cognitive Brain Research, 24(1), 41–47. https://doi.org/10.1016/j.cogbrainres.2004.12.001
Grill-Spector, K., & Malach, R. (2004). The human visual cortex. Annual Review of Neuroscience, 27(1), 649–677. https://doi.org/10.1146/annurev.neuro.27.070203.144220
Hauk, O., & Pulvermüller, F. (2004). Neurophysiological distinction of action words in the fronto-central cortex. Human Brain Mapping, 21(3), 191–201. https://doi.org/10.1002/hbm.10157
Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., & Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293(5539), 2425–2430. https://doi.org/10.1126/science.1063736
Hayes, J. C., & Kraemer, D. J. M. (2017). Grounded understanding of abstract concepts: The case of STEM learning. Cognitive Research: Principles and Implications, 2(1), 7. https://doi.org/10.1186/s41235-016-0046-z
Haynes, J. D., & Rees, G. (2006). Decoding mental states from brain activity in humans. Nature Reviews Neuroscience, 7(7), 523–534. https://doi.org/10.1038/nrn1931

The Neural Representation of Concrete and Abstract Concepts

Hoffman, P. (2016). The meaning of “life” and other abstract words: Insights from neuropsychology. Journal of Neuropsychology, 10(2), 317–343. https://doi.org/10.1111/jnp.12065
Hoffman, P., Jefferies, E., & Lambon Ralph, M. A. (2010). Ventrolateral prefrontal cortex plays an executive regulation role in comprehension of abstract words: Convergent neuropsychological and repetitive TMS evidence. Journal of Neuroscience, 30(46), 15450–15456. https://doi.org/10.1523/JNEUROSCI.3783-10.2010
Huth, A. G., De Heer, W. A., Griffiths, T. L., Theunissen, F. E., & Gallant, J. L. (2016). Natural speech reveals the semantic maps that tile human cerebral cortex. Nature, 532(7600), 453–458. https://doi.org/10.1038/nature17637
Just, M. A., Cherkassky, V. L., Aryal, S., & Mitchell, T. M. (2010). A neurosemantic theory of concrete noun representation based on the underlying brain codes. PLoS ONE, 5(1), e8622. https://doi.org/10.1371/journal.pone.0008622
Just, M. A., Cherkassky, V. L., Buchweitz, A., Keller, T. A., & Mitchell, T. M. (2014). Identifying autism from neural representations of social interactions: Neurocognitive markers of autism. PLoS ONE, 9(12), e113879. https://doi.org/10.1371/journal.pone.0113879
Kassam, K. S., Markey, A. R., Cherkassky, V. L., Loewenstein, G., & Just, M. A. (2013). Identifying emotions on the basis of neural activation. PLoS ONE, 8(6), e66032. https://doi.org/10.1371/journal.pone.0066032
Klasen, M., Kenworthy, C. A., Mathiak, K. A., Kircher, T. T. J., & Mathiak, K. (2011). Supramodal representation of emotions. Journal of Neuroscience, 31(38), 13635–13643. https://doi.org/10.1523/JNEUROSCI.2833-11.2011
Kober, H., Barrett, L. F., Joseph, J., Bliss-Moreau, E., Lindquist, K., & Wager, T. D. (2008). Functional grouping and cortical–subcortical interactions in emotion. NeuroImage, 42(2), 998–1031. https://doi.org/10.1016/j.neuroimage.2008.03.059
Kriegeskorte, N., Goebel, R., & Bandettini, P. (2006). Information-based functional brain mapping. Proceedings of the National Academy of Sciences of the United States of America, 103(10), 3863–3868. https://doi.org/10.1073/pnas.0600244103
Kriegeskorte, N., Mur, M., & Bandettini, P. (2008a). Representational similarity analysis – Connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2(11), 1–28. https://doi.org/10.3389/neuro.06.004.2008
Kriegeskorte, N., Mur, M., Ruff, D. A., Kiani, R., Bodurka, J., Esteky, H., . . . Bandettini, P. A. (2008b). Matching categorical object representations in inferior temporal cortex of man and monkey. Neuron, 60(6), 1126–1141. https://doi.org/10.1016/j.neuron.2008.10.043
Lambon Ralph, M. A. (2014). Neurocognitive insights on conceptual knowledge and its breakdown. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1634), 20120392. https://doi.org/10.1098/rstb.2012.0392
Landauer, T. K., & Dumais, S. T. (1997). A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2), 211–240.
Martin, A. (2007). The representation of object concepts in the brain. Annual Review of Psychology, 58, 25–45.


Mason, R. A., & Just, M. A. (2016). Neural representations of physics concepts. Psychological Science, 27(6), 904–913. https://doi.org/10.1177/0956797616641941
Meyer, K., & Damasio, A. (2009). Convergence and divergence in a neural architecture for recognition and memory. Trends in Neurosciences, 32(7), 376–382. https://doi.org/10.1016/j.tins.2009.04.002
Mitchell, T. M., Shinkareva, S. V., Carlson, A., Chang, K. M., Malave, V. L., Mason, R. A., & Just, M. A. (2008). Predicting human brain activity associated with the meanings of nouns. Science, 320(5880), 1191–1195. https://doi.org/10.1126/science.1152876
Mur, M., Bandettini, P. A., & Kriegeskorte, N. (2009). Revealing representational content with pattern-information fMRI – An introductory guide. Social Cognitive and Affective Neuroscience, 4(1), 101–109. https://doi.org/10.1093/scan/nsn044
Naselaris, T., Kay, K. N., Nishimoto, S., & Gallant, J. L. (2011). Encoding and decoding in fMRI. NeuroImage, 56(2), 400–410.
Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21(19), 1641–1646. https://doi.org/10.1016/j.cub.2011.08.031
Pennington, J., Socher, R., & Manning, C. D. (2014). GloVe: Global Vectors for Word Representation. Proceedings of the Conference on Empirical Methods in Natural Language Processing, 1532–1543.
Pereira, F., Lou, B., Pritchett, B., Ritter, S., Gershman, S. J., Kanwisher, N., . . . Fedorenko, E. (2018). Toward a universal decoder of linguistic meaning from brain activation. Nature Communications, 9(1), 963. https://doi.org/10.1038/s41467-018-03068-4
Phan, K. L., Wager, T., Taylor, S. F., & Liberzon, I. (2002). Functional neuroanatomy of emotion: A meta-analysis of emotion activation studies in PET and fMRI. NeuroImage, 16(2), 331–348. https://doi.org/10.1006/nimg.2002.1087
Vargas, R., & Just, M. A. (2019). Neural representations of abstract concepts: Identifying underlying neurosemantic dimensions. Cerebral Cortex, 30(4), 2157–2166. https://doi.org/10.1093/cercor/bhz229
Wang, J., Conder, J. A., Blitzer, D. N., & Shinkareva, S. V. (2010). Neural representation of abstract and concrete concepts: A meta-analysis of neuroimaging studies. Human Brain Mapping, 31(10), 1459–1468. https://doi.org/10.1002/hbm.20950
Wang, X., Wu, W., Ling, Z., Xu, Y., Fang, Y., Wang, X., . . . Bi, Y. (2018). Organizational principles of abstract words in the human brain. Cerebral Cortex, 28(12), 4305–4318. https://doi.org/10.1093/cercor/bhx283
Xu, J., Kemeny, S., Park, G., Frattali, C., & Braun, A. (2005). Language in context: Emergent features of word, sentence, and narrative comprehension. NeuroImage, 25(3), 1002–1015. https://doi.org/10.1016/j.neuroimage.2004.12.013
Yang, Y., Wang, J., Bailer, C., Cherkassky, V., & Just, M. A. (2017). Commonality of neural representations of sentences across languages: Predicting brain activation during Portuguese sentence comprehension using an English-based model of brain function. NeuroImage, 146, 658–666. https://doi.org/10.1016/j.neuroimage.2016.10.029

Index

ability grouping, in education policy, 403–404
abstract concepts, neurosemantic signatures of, 455–458
  dimensions of meaning, 456–457
  ethics, 455
  law, 455
  LIFG activation, 455–458
  for verbal representations, 457–458
academic aptitude tests
  American College Testing, 5–7
  Armed Services Vocational Aptitude Battery, 6, 12
  predictive power of
    for academic and school performance, 11–12
    in school admissions tests, 11
  PSAT, 6
  scholastic aptitude test, 5–7
ACT. See American College Testing
ADHD. See Attention-Deficit Hyperactivity Disorder
affective intelligence, 180–181
affective neuroscience, 178–179
age, intelligence and
  cognitive neuroscience and, 148–149
    core functions in, 148
    patterns of aging, 148–149
    super agers, 148
  with declining brain integrity, 151–153
    brain reserve theory and, 152–153
    modification strategies for, 155–156
    Scaffolding Theory of Aging and Cognition, 152
  functional magnetic resonance imaging of, 149
  future directions of, 156–157
  human intelligence theories and, 149–151
    for crystallized intelligence, 150–151
    for fluid intelligence, 150–151
  imaging of intelligence by, 60
  lifespan performance measures, 151
  in longitudinal studies, lifespan trajectories and, 137–138
    brain maintenance and, 138
    level-change associations, 138
    structural scaffolding, 138
  methodological challenges with, across lifespan, 153–155
    cohort effects, 153–154
    Flynn effect, 154
  modification strategies, implications of, 155–156
    working memory tasks, 155–156
age differentiation hypothesis, 19
American College Testing (ACT), 5–7
analysis, of human intelligence studies
  horizontal levels of, 424–429
    for creative intelligence, 425–429
    for emotional intelligence, 425
    integration with vertical level analysis, 429
    for practical intelligence, 425
    for social intelligence, 425
    for wisdom, 426–429
  overview of, 429–430
  synthesis of creativity, intelligence, and wisdom in, 427–429
    g and, 428–429
  theoretical approach to, 416–418
  vertical levels of, 418–424
    cultural issues, 418–420
    epochal issues, 421–423
    integration with horizontal level analysis, 429
    modes of, 423–424
anhedonia, 166–167, 176–177
anterior salience network, 240
Armed Services Vocational Aptitude Battery (ASVAB), 6, 12
assessment of general intelligence, 383–384
  from brain lesion studies, 384–389
    for mapping, 386–388
    for planning, 386–388
    for problem solving, 386–388
    voxel-based lesion-system mapping, 385–386
assessment of general intelligence (cont.)
  future research on, 392–393
  for premorbid intelligence, 383–384
  after traumatic brain injuries, recovery from, 389–391
    genetic predisposition as factors for, 391
    lesion severity and, 391
    outcome predictions, 391–392
    pre-injury intelligence as predictor of, 389–391
    socioeconomic status as factor in, 390
  with Wechsler Adult Intelligence Scale, 383
assortative mating, through PGS, 354–355
ASVAB. See Armed Services Vocational Aptitude Battery
attention
  dorsal and ventral attention networks, 240
  network training and, 367–369
  sustained, in neural models of cognitive control, 269
Attention-Deficit Hyperactivity Disorder (ADHD), 368–369
attractor manifold, in oscillator models, 172–175
ball-and-stick model, with DWI, 195
Binet, Alfred
  on intelligence, definition of, 3–4
  Stanford-Binet Intelligence Scales, 3–4
biochemical correlates, of intelligence
  Full Scale IQ, 282
  historical development of, 282
  through magnetic resonance spectroscopy. See magnetic resonance spectroscopy
biological research, on intelligence, for education and public policy
  ability grouping in, 403–404
  attainment of intelligence, 400–405
  cognitive neuroscience and, 405–408
  future directions in, 410
  genetic research, 408–410
  Head Start program, 404
  intelligence testing, 402–405
  IQ scores and testing, 401–403
  supportive arguments for, 400
  Sure Start program, 404
blood oxygenation level dependent (BOLD) contrast, 236, 305–306
the brain. See also brain states; brain stimulation; neuroscience of brain networks
  brain size, 50–51
  as complex network, 26–31
    diffusion tensor imaging of, 27
    functional connectivity in, 26–29, 33–35
    functional magnetic resonance imaging of, 27–31
    graphs for, 28–31
    modules in, 26–30
    as process of inference, 27
    structural connectivity in, 26–29, 31–33
  cortical complexity, indices of, 223
  cortical structure, variability in, 51, 53, 54–55, 59–62
  cortical surface area and volume, 221–223
  cortical thickness, 219–221
  dorsal anterior cingulate cortex, 237
  dorsolateral prefrontal cortex, 237
  flexibility of, 102
  functional correlates, 59
  functional topology of, 106–108
  grey matter. See grey matter
  intelligence and, 13–17
    biological processes of, 44–45
    efficiency hypothesis for, 13–15
    network neuroscience theory of, 17
    Parieto-Frontal Integration Theory of, 16–17, 31–33
    size and volume of brain, 15–16
  language network, 240
  learning and, requirements for, 163
    pleasure cycle, 164–167, 176–177
  left executive control network, 240
  left-hemispheric executive control network, 240
  in longitudinal studies, 125–126, 130
  magnetic resonance spectroscopy, 283–292
  in Multiple Demand system theory, 102–103
  as network, 264–267. See also specific networks
    graph theory, 264–266
    neuroscience approach to, 265–267
    timescales for, 267
  Parieto-Frontal Integration Theory of intelligence and, 16–17, 31–33, 46, 48–57, 102–103
  prefrontal cortex, 237
  Process Overlap Theory and, 102–103
  resting state of, 240
  size of. See craniometry
  structure and volume of. See craniometry; grey matter; white matter
  surface-based morphometry methods, 51, 53–55
  task-positive network, 237
  total volume, sMRI for, 210–211
  ventrolateral prefrontal cortex, 237
  voxel-based morphometry methods, 51–52
  white matter. See white matter
  white matter structure, in Watershed model of Fluid Intelligence, 94
brain energy metabolism, 302–303
brain lesion studies, 384–389
  mapping and, 386–388
    voxel-based lesion-system, 385–386
  planning and, 386–388
  problem solving and, 386–388
brain reserve theory, 152–153
brain states, training of, 371–373
  with meditation, 371–372
  with physical exercise, 371
  with video games, 372
brain stimulation, enhancement of cognition through, 373–375
  multimodal interventions, 375
  theta stimulation, 374–375
  transcortical direct current stimulation, 373–374
  transcortical magnetic stimulation, 373
Cattell Culture Fair Test, 267
Cattell-Horn-Carroll (CHC) model, 9
cerebellum, sMRI of, 224
CHC model. See Cattell-Horn-Carroll model
chemical shift imaging (CSI), 301
children, in longitudinal studies
  grey matter development and, 126–131
  language acquisition in, 129–133
  white matter development and, 133–134
cingulo-opercular (CO) network, 263, 271–272
cognition studies, with MRS, 285–286
cognitive ability
  enhancement of. See enhancement of cognition
  predictive intelligence and, 328, 330
    genetic prediction of, 332–333
    joint hereditability of, 334
    neuroimaging of, 330–332
  real-world outcomes of, 328
  testing for, 328
cognitive control. See neural models of cognitive control
cognitive control network, 237, 240–241
cognitive intelligence, 180–181
cognitive neuroscience, 178
  in biological research on intelligence, 405–408
cognitive neuroscience theories of intelligence, 85–94, 103. See also Parieto-Frontal Integration Theory of intelligence
  age and, 148–149
    core functions in, 148
    for crystallized intelligence, 150–151
    for fluid intelligence, 150–151
    patterns of aging, 148–149
    super agers, 148
  aging and, 149–151
  applications of
    in biological interventions, 97–98
    for brain health, 96–98
    in environmental interventions, 97–98
    for neurological diagnosis, 97
  conceptual challenges with, 95–96
  event-related potential studies and, 85–86
    for Neural Efficiency Hypothesis, 87–88
    neural speed and, 86–87
  Hierarchical Predictive Processing, 86, 92–93
    brain-ability effects in, 93
    neural dynamics of, 95
    scope of, 92–93
  longitudinal studies of. See longitudinal studies
  methodological approach to, 85–86
  Multiple Demand system theory, 85–86
    Parieto-Frontal Integration Theory of intelligence and, 89
  Network Neuroscience Theory. See Network Neuroscience Theory
  Neural Efficiency Hypothesis, 85–88
    activation-based formulations, 88
    connectivity-based formulations, 88
    distance in, 88
    ERP amplitudes, 87–88
    in higher-ability individuals, 88
    individual differences approach and, 241–246
  neural speed in, 86–87
    ERP amplitudes, 86–87
    for higher-order processing, 87
    neural dynamics of, 95
  Process Overlap Theory, 86, 90–91
  research challenges with, 95–96
  synthesis of theories, 94–95
  Watershed model of Fluid Intelligence, 86, 93–94
    intelligence as endophenotype, 93–94
    white matter structure, 94
cognitive tasks, 311–312
cognitive test scores. See intelligence research
communication principles, in learning models, 168–170
concepts. See abstract concepts; hybrid concepts
concrete object representations, 453–455
  hub-and-spoke model of feature integration, 453–454
  Principal Components Analysis, 455
  semantic dimensions of, 454–455
connectome-based studies, 57–58
  for predictive intelligence, 358–359
corpus callosum, sMRI of, 223–224
corpus co-occurrence measures, 463–464
  semantic vector representations, 463–464
cortical complexity, indices of, 223
cortical surface area and volume, 221–223
cortical thickness, 219–221
covariance-based network neuroscience, sMRI in, 225
craniometry, intelligence and, 191–205
  historical development of, 191
  IQ measure and, 193
  white matter volume, 192–198
    with diffusion-weighted imaging. See diffusion-weighted imaging
    Parieto-Frontal Integration Theory, 192–193, 202–204
creative intelligence, 425–429
crystallized intelligence, 7
  aging and, 150–151
  neuroscience of brain networks and, 109–110
CSI. See chemical shift imaging
declining brain integrity, 151–153
  brain reserve theory and, 152–153
  modification strategies for, 155–156
  Scaffolding Theory of Aging and Cognition, 152
Deep Boltzmann Machine, 335–336
deep learning, 335–337
  visible forms of, 338
default mode network, 246
Default Mode Network, in brain imaging, 48–57
diffusion tensor imaging (DTI), 27
  diffusion-weighted imaging and, 194–195
diffusion-weighted imaging (DWI), 133
  diffusion metrics, types of, 193–194
  diffusion tensor imaging and, 194–195
  of white matter, 193–205
    ball-and-stick model, 195
    fiber tracking, 195–198
    principal component analysis, 197–198
    q-ball model, 195
    quantitative tractography, 195, 198–199
    Tract-Based Spatial Statistics, 200–202
    virtual fiber tractography, 195–198
    voxel-based morphometry methods, 200–201
dorsal and ventral attention networks, 240
dorsal anterior cingulate cortex, 237
dorsolateral prefrontal cortex, 237
DTI. See diffusion tensor imaging
DWI. See diffusion-weighted imaging
ECTs. See elementary cognitive tasks
education policy. See biological research; intelligence research
EEG. See electroencephalogram
efficiency hypothesis, 13–15
  Raven’s Advanced Progressive Matrices and, 13–14
  task-negative network and, 14
  task-positive network and, 14
EFs. See executive functions

electroencephalogram (EEG), 236
elementary cognitive tasks (ECTs)
  for g measurement, 5, 7–8
  reaction time for, 7–8
embodied cognition, 459
emotional intelligence, 425
enhancement of cognition
  with Attention-Deficit Hyperactivity Disorder, 368–369
  through brain stimulation, 373–375
    multimodal interventions, 375
    theta stimulation, 374–375
    transcortical direct current stimulation, 373–374
    transcortical magnetic stimulation, 373
  future research on, 376–377
  through network training, 367–371
    attention and, 367–369
    development of, 369–370
    individual differences as factor in, 370–371
  through training of brain states, 371–373
    with meditation, 371–372
    with physical exercise, 371
    with video games, 372
Enhancing Neuro Imaging Genetics through Meta-Analysis (ENIGMA) studies, 74–77
ERD. See event-related desynchronization
ERP studies. See event-related potential studies
ethics, of predictive intelligence models, 338–339
event-related desynchronization (ERD), 236
event-related potential (ERP) studies
  cognitive neuroscience theories of intelligence and, 85–86
    for Neural Efficiency Hypothesis, 87–88
    neural speed and, 86–87
executive functions (EFs), 8
feedback loops, 165
feedforward loops, 165
fiber tracking, for white matter, 195–198
fluid intelligence, 7
  aging and, 150–151
  functional brain imaging for, 236
  neuroscience of brain networks and, 110–111
  Watershed model of Fluid Intelligence, 86, 93–94
    intelligence as endophenotype, 93–94
    white matter structure, 94
Flynn effect, 154
fMRI. See functional magnetic resonance imaging
fMRS. See functional magnetic resonance spectroscopy

fNIRS. See functional near infrared spectroscopy
fronto-parietal (FP) controller network, 263, 271–272
Full Scale IQ (FSIQ), 282
functional brain imaging, of intelligence. See also functional magnetic resonance imaging
  for anterior salience network, 240
  blood oxygenation level dependent contrast, 236
  of brain regions, for intelligence-related tasks, 236–241
    default mode network, 246
    dorsal anterior cingulate cortex, 237
    dorsolateral prefrontal cortex, 237
    for general cognitive ability, 237
    prefrontal cortex, 237
    task-negative network, 246
    task-positive network, 246
    ventrolateral prefrontal cortex, 237
  for cognitive control network, 237, 240–241
  for dorsal and ventral attention networks, 240
  electroencephalogram, 236
  event-related desynchronization, 236
  for fluid intelligence, 236
  functional near infrared spectroscopy, 236
  future research on, trends and perspectives for, 250–253
  for g, 236
  individual differences approach, 235–236, 239, 241–245
    for brain efficiency, 241–242
    neural efficiency hypothesis, 241–246
  for language network, 240
  for left executive control network, 240
  for left-hemispheric executive control network, 240
  moderating factors for, 245–248
    learning and practice studies, 246–247
    task difficulty, 247–248
  predictive approach, 251–252
  studies on, limitations of, 249–250
    Functional Connectomes Project, 250
    Human Connectome Project, 250
    UK Biobank, 250
  task approach, 235–241
    multiple demand system and, 237–238
    Parieto-Frontal Integration Theory and, 237–240
    study findings, 238–240
  for task-positive network, 237
  theoretical approaches to, 235–236. See also task approach; individual differences approach
  for working memory, 237

functional connectivity, in brain networks, 26–29, 33–35
functional connectomes, in neural models of cognitive control, 267–270
  machine learning approaches, 268
  mechanisms in, 271–273
  working memory and, 268–270
Functional Connectomes Project, 250
functional magnetic resonance imaging (fMRI), 13, 27–31
  of aging, intelligence influenced by, 149
  in longitudinal studies, 129–137
  of neuroscience of brain networks
    for network dynamics, 111
    for network efficiency, 108
  of resting state, for brain, 240
functional magnetic resonance spectroscopy (fMRS), 310–313
  for cognitive tasks, 311–312
  for sensory motor tasks, 310–311
  for visuospatial cognition, 313
functional near infrared spectroscopy (fNIRS), 236
g (general intelligence). See also assessment of general intelligence
  definition of, 3–4
    ability in, 4
    for Binet, 3–4
    for Gardner, 4
    mental processing speed as element of, 4
    vagueness in, 4
    for Wechsler, 4
  functional brain imaging for, 236
  measurement of. See measurement of g
  methodological approach to, 3–18
  in Network Neuroscience Theory, 103–104
  neurobiology of, 102–103
  neuroscience of brain networks and
    dynamics of, 111–112
    efficiency of brain networks and, 106–108
    neurobiological individual differences in, 115
  predictive power of, 10–13
    for academic and school performance, 11–12
    in school admissions tests, 11
    socioeconomic status as factor in, 11
    for work performance, 12–13
  in synthesis of creativity, intelligence, and wisdom, 428–429
GABA, neurotransmission of, 306–307, 309–310
Galton, Francis, 155
Gardner, Howard, 4, 428
Gemeinschaft intelligence, 422–423
gender, imaging of intelligence and, 60
Index general intelligence. See g genetic nurture, through PGS, 353–354 genetics in biological research on intelligence, 408–410 intelligence and, 19–20 in large-scale data repositories, 75–77 in longitudinal studies, 135–136 predictive intelligence and, 330, 350. See also polygenic score for cognitive ability, 332–333 Genome-Wide Association Studies, 332–333, 339 integrative imaging approaches, 334–338 joint hereditability of cognitive ability, 334 traumatic brain injury recovery and, 391 Genome-Wide Association Studies (GWAS), 75–76, 349–350 polygenic score and. See polygenic score predictive intelligence and, 332–333, 339 Gesellschaft intelligence, 422–423 g-loaded tests, 5–8 academic aptitude tests, 5–7 predictive power of, 11 for crystallized intelligence, 7 elementary cognitive tasks, 5, 7–8 for executive functions, 8 for fluid intelligence, 7 IQ tests, 5–6 predictive power of, 10–13 for academic and school performance, 11–12 in school admissions tests, 11 socioeconomic status as factor in, 11 for work performance, 12–13 reaction times for, 7–8 Wechsler Intelligence Scales, 4–6, 9 Global Neuronal Workspace Theory, 168–170, 176 glutamate, neurotransmission of, 306–307, 310 graph theory, of brain networks, 264–266 grey matter, intelligence and imaging of, 48 in longitudinal studies, 126–133 change in g scores, 132 in children, 126–131 by cortical region, 131 functional connectivity in, 137 GWAS. See Genome-Wide Association Studies Hawaii Battery test, 9 Head Start program, 404 hierarchical models, with group factors, 9–10 Hierarchical Neuronal Workspace Architectures, 171–172

Hierarchical Predictive Processing theory, 86, 92–93
  brain-ability effects in, 93
  neural dynamics of, 95
  scope of, 92–93
H-MRS. See proton magnetic resonance spectroscopy
hub-and-spoke model of feature integration, 453–454
Human Connectome Project, 250, 358
hybrid concepts, neural representation of, 458–462
  embodied cognition, 459
  emotional concepts, 459–460
  physics concepts, 460–461
  social concepts, 461–462
hypothesis-driven encoding modeling, 452–453
ICNs. See intrinsic connectivity networks
imaging, of intelligence, 45–47. See also diffusion-weighted imaging; functional magnetic resonance imaging; magnetic resonance imaging
  approach strategies for, 50–60
  in connectome-based studies, 57–58
  correlates in, 50–60
    brain size, 50–51
    cortical structure, variability in, 51, 53, 54–55, 59–62
    functional, 59
    surface-based morphometry methods, 51, 53–55
    voxel-based morphometry methods, 51–52
  Default Mode Network, 48–57
  diffusion tensor imaging, 27
  of grey matter, variability in, 48
  heterogeneity in human sample characteristics, 60–62
    age factors, 60
    cortical differences, variability in, 60–62
    gender factors, 60
  historical development of, craniometry in, 191
  for information processing, 48–57
  in longitudinal studies, 123–124, 136–137
    functional magnetic resonance imaging, 129–137
    magnetic resonance imaging, 136–137
  neurite orientation dispersion and density imaging, 191–192
  overview of, 62–63
  Parieto-Frontal Integration Theory of intelligence and, 45–47
    brain regions in, 46, 48–57
  positron emission tomography, 13, 70–71
  for types of intelligence, 47–50
    psychometric models of, 47–50
    variability across imaging studies, 47
  of white matter. See diffusion-weighted imaging
individual differences approach, in functional brain imaging, 235–236, 239, 241–245
  for brain efficiency, 241–242
  neural efficiency hypothesis, 241–246
infants, in longitudinal studies, 135
inference, in network neuroscience methods, 27
information-processing models of intelligence, 44
  imaging for, 48–57
  magnetic resonance spectroscopy for, 305–307
intelligence. See also g; neural models of intelligence; predictive intelligence; specific imaging modalities
  affective, 180–181
  brain and, 13–17
    biological processes of, 44–45
    efficiency hypothesis for, 13–15
    network neuroscience theory of intelligence, 17
    Parieto-Frontal Integration Theory and, 16–17, 31–33
    size and volume of, 15–16
  cognitive, 180–181
  craniometry and. See craniometry
  creative, 425–429
  crystallized, 7
    aging and, 150–151
    neuroscience of brain networks and, 109–110
  emotional, 425
  fluid, 7
    aging and, 150–151
    neuroscience of brain networks and, 110–111
  future research on, 17–20
    age differentiation hypothesis, 19
    genetic information as factor in, 19–20
    non-g factors, 17–19
    Spearman’s Law of Diminishing Returns, 19
  Gemeinschaft, 422–423
  Gesellschaft, 422–423
  information-processing models, 44, 48–57
  lifespan performance measures for, 151
  magnetic resonance imaging for, 13
  optimal states of, 175–180
    affective neuroscience and, emerging evidence of, 178–179
    cognitive neuroscience and, emerging evidence of, 178
    Global Neuronal Workspace Theory, 176
    Leading Eigenvector Dynamics Analysis, 178
    social neuroscience and, emerging evidence of, 177–178
  plasticity of, 102
  positron emission tomography for, 13
  practical, 425
  psychometric models, 44
  Scarr-Rowe effects and, 20
  social, 180–181, 425
  suboptimal states of, 175–180
intelligence models, 8–9
  Cattell-Horn-Carroll model, 9
  hierarchical models with group factors, 9–10
  Spearman model without group factors, 8–10
intelligence quotient (IQ) tests, 5–6
  in biological research on intelligence, 401–403
  craniometry and, 193
  Full Scale IQ, 282
intelligence research, education and public policy influenced by
  applications of, 442–444
  cognitive test scores in, use of, 435–441
    forward causal inferences, 436–438
    historical development of, 435
    as outcomes, 441
    as predictors, 439–440
    reverse causal questions, 436–438
    for selection purposes, 435
  improvement strategies for intelligence through, 441–442
  policy definitions, 443
  public debates over, 434–436
  roadblocks in, 443–444
  twin studies in, 437
intelligence studies. See also large-scale data repositories
  with magnetic resonance spectroscopy, 286–292
    limitations of, 287–289
    recommendations for future applications, 290–292
interference resolution, in neural models of cognitive control, 269
intrinsic connectivity networks (ICNs)
  in Network Neuroscience Theory, 91–92
  neuroscience of brain networks and, 107, 109–110, 115
    capacity of ICNs, 111–112
    dynamic functional connectivity and, 112
    flexibility of, 111–112
invariance, in g measurement, 9–10
  in WEIRD countries, 10
IQ tests. See intelligence quotient tests
Jensen, Arthur, 4
language acquisition, in children, 129–133
language network, 240
large-scale data repositories, for intelligence studies, 71–77
  challenges with, 75–77
  Enhancing Neuro Imaging Genetics through Meta-Analysis, 74–77
  Genome-Wide Association Studies, 75–76
  imaging genetics in, 75–77
  meta-analyses, 74
  Parieto-Frontal Integration Theory of intelligence and, 75
  planned datasets, 71–74
    research consortia with, 72–73
  unplanned datasets, 74
LASER. See Localization by Adiabatic SElective Refocusing
LDpred estimation, 352
Leading Eigenvector Dynamics Analysis (LEiDA), 178
learning. See also predictive intelligence
  brain requirements for, 163
    pleasure cycle, 164–167, 176–177
  deep, 335–337
  machine, 167–168
    in functional connectomes, 268
  predictive intelligence and, 163–170
    feedback loops, 165
    feedforward loops, 165
    reinforcement of, 165
learning models. See also predictive intelligence
  architecture principles in, 168–170
    Global Neuronal Workspace Theory, 168–170, 176
    Hierarchical Neuronal Workspace Architectures, 171–172
  communication principles in, 168–170
  flexibility of, 167–168
  for machine learning, 167–168
  optimization principles of, 163–170
    parameters of, 169–170
    reward-related signals and, 173–174
  oscillator models
    attractor manifold, 172–175
    metastability of, 175
    social neuroscience and, emerging evidence from, 177–178
  reinforcement, 176–177
left executive control network, 240
left inferior frontal gyrus (LIFG) activation, 455–458
left-hemispheric executive control network, 240
LEiDA. See Leading Eigenvector Dynamics Analysis
lifespan performance measures. See also age
  for intelligence, 151

LIFG activation. See left inferior frontal gyrus activation Localization by Adiabatic SElective Refocusing (LASER), 301–302 longitudinal studies, of cognitive neuroscience of intelligence aging in, lifespan trajectories and, 137–138 brain maintenance and, 138 level-change associations, 138 structural scaffolding, 138 brain structure in, 125–126, 130 in children grey matter development and, 126–131 language acquisition in, 129–133 white matter development and, 133–134 dynamics of, 124–126 coupling parameters, 126 genetics in, 135–136 grey matter in, 126–133 change in g scores, 132 in children, 126–131 by cortical region, 131 functional connectivity in, 137 in infants, 135 measures of cognitive ability, 124–125 methodology of, 140 mutualism model, 123–124 neuroimaging technology in, 123–124, 136–137 functional magnetic resonance imaging, 129–137 magnetic resonance imaging, 136–137 overview of, 124–139 socioeconomic status factors in, 136 timing of, 139–140 white matter in, 133–135 in children, 133–134 diffusion weighted imaging of, 133 in infants, 135 machine learning, 167–168 in functional connectomes, 268 magnetic resonance imaging (MRI). See also functional magnetic resonance imaging; structural magnetic resonance imaging for intelligence, 13, 70–71 in longitudinal studies, 136–137 magnetic resonance spectroscopy and, 298 for predictive intelligence, 356 magnetic resonance spectroscopy (MRS), 283–292 blood oxygenation level dependent contrast and, 305–306 for brain energy metabolism, 302–303 chemical foundations of, 298–306 chemical shift imaging and, 301

  functional magnetic resonance spectroscopy and, 310–313
    for cognitive tasks, 311–312
    for sensory motor tasks, 310–311
    for visuospatial cognition, 313
  future applications of, 313–315
  hardware for, 301
  historical applications of, 283–284
  for information processing, 305–307
    in vivo, 306–307
  intelligence studies and, 286–292
    limitations of, 287–289
    recommendations for future applications, 290–292
  Localization by Adiabatic SElective Refocusing, 301–302
  magnetic resonance imaging and, 298
  N-acetylaspartate, 284–286, 288–289, 304–305, 308–310
  neurochemical variable assessment by, 284–286
    advantages of, 301
    for early cognition studies, 285–286
    limitations of, 300–301
    of N-acetylaspartate, 284–286, 288–289
  for neurochemistry and cognition, relationship between, 307–315
    energy metabolism indicators, 307–308
  for neurotransmission, 305–307
    of GABA, 306–307, 309–310
    of glutamate, 306–307, 310
  nuclear magnetic resonance and, 298–300
  outer volume suppression, 301–302
  ³¹P MRS studies, 307–308
  physical foundations of, 298–306
  point-resolved spectroscopy, 301–302
  process of, 283
  proton magnetic resonance spectroscopy, 284, 298–299
    cognition studies, 308–310
    contraction derived from, 303–304
    for Myo-Inositol markers, 304–305
    for N-acetylaspartate markers, 304–305
    for neuropil expansion markers, 303–304
  SPin ECho, full Intensity Acquired Localized, 301–302
  stimulated echo acquisition mode and, 301–302
  theoretical approach to, 297–298
  of white matter, 284
mapping
  brain lesion studies and, 386–388
    voxel-based lesion-system mapping, 385–386
  in predictive modeling, 452
MD system theory. See Multiple Demand system theory
meaning representation, 448
  in abstract concepts, 456–457

measurement of g, 5–10. See also g-loaded tests
  at individual level, 5
  intelligence models, 8–9
    Cattell-Horn-Carroll model, 9
    hierarchical models with group factors, 9–10
    Spearman model without group factors, 8–10
  invariance in, 9–10
    in WEIRD countries, 10
  positive manifold in, 5
  principle of the indifference of the indicator in, 5
  scope of, 5
  variance in, 5
measurement of intelligence. See measurement of g; specific tests
meditation, enhancement of cognition through, 371–372
mental processing speed, as intelligence, 4
metastability
  affective neuroscience and, 178–179
  of oscillator models, 175
modulated voxel-based morphometry, 212–217
modules, in brain networks, 26–30
morphometry methods. See surface-based morphometry methods; voxel-based morphometry methods
MRI. See magnetic resonance imaging
MRS. See magnetic resonance spectroscopy
Multiple Demand (MD) system theory, 85–86
  brain networks and, 102–103
  Parieto-Frontal Integration Theory of intelligence and, 89
  in task approach, to functional brain imaging, 237–238
multiple intelligences, 4, 428
multivariate pattern analysis (MVPA), 449–453
  factor analysis, 451
  GLM activation-based approaches compared to, 450
  hypothesis-driven encoding modeling, 452–453
  predictive modeling, 451–452
  Principal Components Analysis, 451
  representational similarity analysis, 450–451
mutualism model, 123–124
MVPA. See multivariate pattern analysis
Myo-Inositol markers, 304–305
N-acetylaspartate (NAA), 284–286, 288–289, 304–305, 308–310
naïve methods, 351–352
NEH. See Neural Efficiency Hypothesis


network neuroscience methods. See also the brain
  for brain as complex network
    functional connectivity in, 26–29, 33–35
    graphs for, 28–31
    as process of inference, 27
    structural connectivity in, 26–29, 31–33
  future directions for, 35–36
  neurocognitive intelligence models, 31–35
    brain bases of intelligence, 32
    functional networks, 33–35
    Parieto-Frontal Integration Theory, 16–17, 31–33
    structural networks, 31–33
  theory of intelligence, 17, 35–36
    types of networks in, 17
Network Neuroscience Theory (NNT), 17, 35–36, 86, 91–92
  g in, 103–104
  intrinsic connectivity networks, 91–92
  neural dynamics of, 95
  predictions of, 113–114
  predictive intelligence and, 359
  premises of, 91–92
  small-world architecture in, 91–92
network training, through enhancement of cognition, 367–371
  attention and, 367–369
  development of, 369–370
  individual differences as factor in, 370–371
Neural Efficiency Hypothesis (NEH), 85–88
  activation-based formulations, 88
  connectivity-based formulations, 88
  distance in, 88
  ERP amplitudes, 87–88
  in higher-ability individuals, 88
  individual differences approach and, 241–246
neural models of cognitive control
  cingulo-opercular network, 263, 271–272
  components of, 271
  fronto-parietal controller network, 263, 271–272
  functional connectomes, 267–270
    machine learning approaches, 268
    mechanisms in, 271–273
    working memory and, 268–270
      impairment in, 270
      interference resolution in, 269
  network-based computational models, 262–273
  neural models of intelligence compared to, 262–264
  sustained attention in, 269
  task-switching in, 269
  translational applications, 270–271

neural models of intelligence. See also Parieto-Frontal Integration Theory of intelligence
  components of, 271
  neural models of cognitive control compared to, 262–264
neural representation of concepts
  for abstract concepts, neurosemantic signatures of, 455–458
    dimensions of meaning, 456–457
    ethics, 455
    law, 455
    LIFG activation, 455–458
    for verbal representations, 457–458
  academic research on, 448–449
  commonality across participants, 462–463
  concrete object representations, structure of, 453–455
    hub-and-spoke model of feature integration, 453–454
    Principal Components Analysis, 455
    semantic dimensions of, 454–455
  contemporary approaches to, 449–453
  corpus co-occurrence measures, 463–464
    semantic vector representations, 463–464
  cortical function, 448
  for hybrid concepts, 458–462
    embodied cognition, 459
    emotional concepts, 459–460
    physics concepts, 460–461
    social concepts, 461–462
  meaning representation, 448
    in abstract concepts, 456–457
  multivariate pattern analysis, 449–453
    factor analysis, 451
    GLM activation-based approaches compared to, 450
    hypothesis-driven encoding modeling, 452–453
    predictive modeling, 451–452
    Principal Components Analysis, 451
    representational similarity analysis, 450–451
  neuroimaging findings, 463–464
  new insights on, 464–465
  perceptual grounding, of concepts, 448–449
  univariate-based analyses, 449
neural speed, 86–87
  ERP amplitudes, 86–87
  for higher-order processing, 87
  neural dynamics of, 95
neurocognitive intelligence models, 31–35
  brain bases of intelligence, 32
  functional networks, 33–35
  Parieto-Frontal Integration Theory, 16–17, 31–33
  structural networks, 31–33

neurite orientation dispersion and density imaging (NODDI), 191–192
neuropil expansion markers, 303–304
neuroscience of brain networks. See also network neuroscience methods; Network Neuroscience Theory
  dynamics of
    of crystallized intelligence, 109–110
    of fluid intelligence, 110–111
    functional magnetic resonance imaging of, 111
    of g, 111–112
  efficiency of networks, 104–108
    through flexibility, 107
    functional magnetic resonance imaging of, 108
    g and, 106–108
    in small-world networks, 104–108
  flexibility in, 108–114
    efficiency through, 107
    functional reconfiguration of brain networks, 113–114
  g and
    dynamics of, 111–112
    efficiency of brain networks and, 106–108
    neurobiological individual differences in, 115
  intrinsic connectivity networks, 107, 109–110, 115
    capacity of, 111–112
    dynamic functional connectivity and, 112
    flexibility of, 111–112
Newton, Isaac, 283
NMR. See nuclear magnetic resonance
NNT. See Network Neuroscience Theory
NODDI. See neurite orientation dispersion and density imaging
non-g factors, in intelligence, 17–19
nuclear magnetic resonance (NMR), 298–300
optimal states, of intelligence, 175–180
  affective neuroscience and, emerging evidence of, 178–179
  cognitive neuroscience and, emerging evidence of, 178
  Global Neuronal Workspace Theory, 176
  Leading Eigenvector Dynamics Analysis, 178
  social neuroscience and, emerging evidence of, 177–178
optimization principles, in learning models, 163–170
  parameters of, 169–170
  reward-related signals and, 173–174
oscillator models, for learning
  attractor manifold, 172–175
  metastability of, 175

  social neuroscience and, emerging evidence from, 177–178
outer volume suppression (OVS), 301–302
Parieto-Frontal Integration Theory (P-FIT) of intelligence, 88–90
  brain and, 16–17, 31–33
    brain networks, 102–103
    imaging of brain regions, 46, 48–57
  functional studies on, 90
  imaging of intelligence and, 45–47
    of brain regions, 46, 48–57
  in large-scale data repositories, 75
  Multiple Demand theory, 89
  network neuroscience theory of intelligence and, 17, 31–33
  for predictive intelligence, 357–358
  scope of, 88–89
  structural studies on, 90
  in task approach, to functional brain imaging, 237–240
PCA. See Principal Components Analysis
penalized regression, 352–353
PET. See positron emission tomography
P-FIT of intelligence. See Parieto-Frontal Integration Theory of intelligence
PGS. See polygenic score
physical exercise, enhancement of cognition through, 371
planned datasets, in large-scale data repositories, 71–74
planning, brain lesion studies and, 386–388
plasticity, of intelligence, 102
pleasure cycle, 164–166
  anhedonia, 166–167, 176–177
point-resolved spectroscopy (PRESS), 301–302
polygenic score (PGS), predictive intelligence and, 350–355
  empirical applications of, 353–355
    assortative mating, 354–355
    for evolution, 355
    genetic nurture, 353–354
    social mobility, 354
  methods of construction, 351–353
    LDpred estimation, 352
    naïve methods, 351–352
    penalized regression, 352–353
positive manifold, in measurement of g, 5
positron emission tomography (PET), 13
  for predictive intelligence, 356
POT. See Process Overlap Theory
practical intelligence, 425
predictive intelligence, models for. See also learning
  biological interpretability of, 329–330
  brain-based, uses for, 331–332


predictive intelligence, models for (cont.)
  from brain-imaging data, 355–356
    with magnetic resonance imaging, 356
    with positron emission tomography, 356
  cognitive ability and, 328, 330
    genetic prediction of, 332–333
    joint heritability of, 334
    neuroimaging of, 330–332
    real-world outcomes of, 328
    testing for, 328
  Deep Boltzmann Machine, 335–336
  deep learning, 335–337
    visible forms of, 338
  empirical findings for, 357–359
    brain size and structure, 357
    for connectomes, 358–359
    Network Neuroscience Theory and, 359
    for Parieto-Frontal Integration Theory model, 357–358
  ethics of, 338–339
  explanation of intelligence compared to, 329–330
  external validity for, 329
  future research on, 359
  genetics and, 330, 350. See also polygenic score
    for cognitive ability, 332–333
    Genome-Wide Association Studies. See Genome-Wide Association Studies
  integrative imaging approaches, 334–338
  joint heritability of cognitive ability, 334
  learning models, 163–170
    feedback loops, 165
    feedforward loops, 165
    reinforcement, 165
  limitations of, 338–339
  theoretical approach to, 162–163
  transparency in, 329–330
  twin-based studies for, 334
predictive modeling
  mapping in, 452
  multivariate pattern analysis and, 451–452
predictive power, of g, 10–13
  for academic and school performance, 11–12
    in school admissions tests, 11
    socioeconomic status as factor in, 11
  for work performance, 12–13
prefrontal cortex, 237
premorbid intelligence, 383–384
PRESS. See point-resolved spectroscopy
Principal Components Analysis (PCA)
  for concrete object representations, 197–198
  in multivariate pattern analysis, 451
  for white matter, 197–198
principle of the indifference of the indicator, 5
problem solving, brain lesion studies and, 386–388

Process Overlap Theory (POT), 86, 90–91
  brain networks and, 102–103
proton magnetic resonance spectroscopy (¹H-MRS), 284, 298–299
  cognition studies, 308–310
  contraction derived from, 303–304
  for Myo-Inositol markers, 304–305
  for N-acetylaspartate markers, 304–305
  for neuropil expansion markers, 303–304
PSAT, 6
psychometric models, of intelligence, 44, 47–50
public policy. See biological research; intelligence research
q-ball model, with DWI, 195
quantitative tractography, for white matter, 195, 198–199
random networks, in neuroscience theory of intelligence, 17
Raven's Advanced Progressive Matrices (RAPM), 13–14, 267
reaction times (RT), for g-loaded tests, 7–8
regular networks, in neuroscience theory of intelligence, 17
reinforcement learning models, 176–177
representational similarity analysis (RSA), 450–451
representations. See neural representation of concepts
reward-related signals, in learning models, 173–174
Ricketts, Ed, 180–181
RSA. See representational similarity analysis
RT. See reaction times
SAT. See scholastic aptitude test
SBM methods. See surface-based morphometry methods
Scaffolding Theory of Aging and Cognition (STAC), 152
Scarr-Rowe effects, 20
scholastic aptitude test (SAT), 5–7
semantic vector representations, 463–464
sensory motor tasks, 310–311
SES. See socioeconomic status
SLODR. See Spearman's Law of Diminishing Returns
small world networks, in neuroscience theory of intelligence, 17
small-world architecture, in NNT, 91–92
small-world networks, in neuroscience of brain networks, 104–108
sMRI. See structural magnetic resonance imaging
social intelligence, 180–181, 425
social neuroscience, 177–178

socioeconomic status (SES)
  g and, predictive power of, 11
  in longitudinal studies, 136
  recovery from traumatic brain injury and, 390
Spearman model, without group factors, 8–10
Spearman's Law of Diminishing Returns (SLODR), 19
SPin ECho, full Intensity Acquired Localized (SPECIAL), 301–302
SRA Primary Mental Abilities tests, 421
STAC. See Scaffolding Theory of Aging and Cognition
Stanford-Binet Intelligence Scales, 3–4
STEAM. See stimulated echo acquisition mode
Steinbeck, John, 180–181
stimulated echo acquisition mode (STEAM), 301–302
structural connectivity, in brain networks, 26–29, 31–33
structural magnetic resonance imaging (sMRI)
  of cerebellum, 224
  of corpus callosum, 223–224
  in covariance-based network neuroscience, 225
  multi-metric approaches to, 225–226
  purpose of, 210
  of subcortical nuclei, 224–225
  with surface-based morphometry, 217–223
    for cortical complexity, indices of, 223
    for cortical surface area and volume, 221–223
    for cortical thickness, 219–221
    goals and purpose of, 217
    study findings, 218–223
  for total brain volume, 210–211
  with voxel-based morphometry, 211–217
    modulated, 212–217
    process of, 211–212
    study findings, 213–217
    unmodulated, 212–213, 215–216
    for white matter, 216–217
structural neuroimaging, of intelligence differences. See structural magnetic resonance imaging
structural scaffolding, 138
subcortical nuclei, sMRI of, 224–225
suboptimal states, of intelligence, 175–180
super agers, 148
Sure Start program, 404
surface-based morphometry (SBM) methods, 51, 53–55
  with sMRI, 217–223
    for cortical complexity, indices of, 223
    for cortical surface area and volume, 221–223
    for cortical thickness, 219–221

    goals and purpose of, 217
    study findings, 218–223
  sMRI with, 217–223
sustained attention, in neural models of cognitive control, 269
task approach, to functional brain imaging, 235–241
  multiple demand system and, 237–238
  Parieto-Frontal Integration Theory and, 237–240
  study findings, 238–240
task-negative network (TNN), 14, 246
task-positive network (TPN), 14, 237, 246
task-switching, in neural models of cognitive control, 269
TBI. See traumatic brain injuries
TBSS. See Tract-Based Spatial Statistics
tDCS. See transcranial direct current stimulation
theta stimulation, 374–375
TMS. See transcranial magnetic stimulation
TNN. See task-negative network
TPN. See task-positive network
Tract-Based Spatial Statistics (TBSS), 200–202
transcranial direct current stimulation (tDCS), 373–374
transcranial magnetic stimulation (TMS), 272, 373
traumatic brain injuries (TBI), recovery from, assessment of general intelligence, 389–391
  genetic predisposition as factors for, 391
  lesion severity and, 391
  outcome predictions, 391–392
  pre-injury intelligence as predictor of, 389–391
  socioeconomic status as factor in, 390
twin-based studies
  in intelligence research, 437
  for predictive intelligence, 334
UK Biobank, 250
univariate-based analyses, for neural representation of concepts, 449
unmodulated voxel-based morphometry, 212–213, 215–216
unplanned datasets, in large-scale data repositories, 74
variances, in measurement of g, 5
VBLSM. See voxel-based lesion-system mapping
VBM methods. See voxel-based morphometry methods
ventrolateral prefrontal cortex, 237


verbal representations, neurosemantic signatures of, 457–458
video games, enhancement of cognition through, 372
virtual fiber tractography, for white matter, 195–198
visuospatial cognition, 313
voxel-based lesion-system mapping (VBLSM), 385–386
voxel-based morphometry (VBM) methods, 51–52
  with sMRI, 211–217
    modulated VBM, 212–217
    process of, 211–212
    study findings, 213–217
    unmodulated, 212–213, 215–216
    for white matter, 216–217
  for white matter, 200–201
Watershed model of Fluid Intelligence, 86, 93–94
  intelligence as endophenotype, 93–94
  white matter structure, 94
Wechsler, David, 4, 267
Wechsler Intelligence Scales, 4–6
  Abbreviated Scale of Intelligence, 267

  for adults, 9, 267
  assessment of general intelligence through, 383
Western, Educated, Industrialized, Rich and Democratic (WEIRD) countries, 10
white matter
  craniometry for, 192–198
  with diffusion-weighted imaging. See diffusion-weighted imaging
    Parieto-Frontal Integration Theory and, 192–193, 202–204
  diffusion-weighted imaging, 193–205
  in longitudinal studies, 133–135
    children and, 133–134
    diffusion weighted imaging of, 133
    in infants, 135
  magnetic resonance spectroscopy of, 284
  sMRI for, with VBM, 216–217
  in Watershed model of Fluid Intelligence, 94
wisdom, 426–429
work performance, predictive power of g for, 12–13
working memory tasks, 155–156
  control processes of, 261
  functional brain imaging of, 237
  functional connectomes and, 268–270