Cinzia Daraio, Wolfgang Glänzel (Editors)

Evaluative Informetrics: The Art of Metrics-Based Research Assessment
Festschrift in Honour of Henk F. Moed
Editors

Cinzia Daraio, Dipartimento di Ingegneria Informatica, Automatica e Gestionale (DIAG), Sapienza University, Rome, Italy
Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM) and Dept MSI, KU Leuven, Leuven, Belgium
ISBN 978-3-030-47664-9    ISBN 978-3-030-47665-6 (eBook)
https://doi.org/10.1007/978-3-030-47665-6
© Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Henk's full first name is Hendrik; his middle initial is F. In official documents, including those related to the Doctorate honoris causa, his name is therefore Hendrik F. Moed. Friends, family and colleagues, however, use the first name Henk, a familiar short form of Hendrik. The name Henk is also used in Henk's scientific publications.
Contents

Tracing the Art of Metrics-Based Research Assessment Through Henk Moed's Work
    Cinzia Daraio and Wolfgang Glänzel ... 1

Selected Essays

Selected essays of Henk F. Moed
    Cinzia Daraio and Wolfgang Glänzel ... 15

Contributed Chapters

Citation Profiles and Research Dynamics
    Robert Braam ... 71

Characteristics of Publication Delays Over the Period 2000–2016
    Marc Luwel, Nees Jan van Eck, and Thed van Leeuwen ... 89

When the Data Don't Mean What They Say: Japan's Comparative Underperformance in Citation Impact
    David A. Pendlebury ... 115

Origin and Impact: A Study of the Intellectual Transfer of Professor Henk F. Moed's Works by Using Reference Publication Year Spectroscopy (RPYS)
    Yong Zhao, Jiayan Han, Jian Du, and Yishan Wu ... 145

Delineating Organizations at CWTS—A Story of Many Pathways
    Clara Calero-Medina, Ed Noyons, Martijn Visser, and Renger De Bruin ... 163

Research Trends—Practical Bibliometrics and a Growing Publication
    Gali Halevi ... 179

The Evidence Base of International Clinical Practice Guidelines on Prostate Cancer: A Global Framework for Clinical Research Evaluation
    Elena Pallari and Grant Lewison ... 193

The Differing Meanings of Indicators Under Different Policy Contexts. The Case of Internationalisation
    Nicolas Robinson-Garcia and Ismael Ràfols ... 213

De Profundis: A Decade of Bibliometric Services Under Scrutiny
    Juan Gorraiz, Martin Wieland, Ursula Ulrych, and Christian Gumpenberger ... 233

A Comparison of the Citing, Publishing, and Tweeting Activity of Scholars on Web of Science
    Rodrigo Costas and Márcia R. Ferreira ... 261

Library Catalog Analysis and Library Holdings Counts: Origins, Methodological Issues and Application to the Field of Informetrics
    Daniel Torres-Salinas and Wenceslao Arroyo-Machado ... 287

Cross-National Comparison of Open Access Models: A Cost/Benefit Analysis
    Félix Moya-Anegón, Vicente P. Guerrero-Bote, and Estefanía Herrán-Páez ... 309

The Altmetrics of Henk Moed's Publications
    Judit Bar-Ilan (Deceased) and Gali Halevi ... 327

Doctorate Honoris Causa

Conferral of the Doctorate Honoris Causa in Industrial and Management Engineering to Hendrik F. Moed—Address of Eugenio Gaudio
    Eugenio Gaudio ... 343

Conferral of the Doctorate honoris causa in Industrial and Management Engineering to Hendrik F. Moed—Address of Massimo Tronci
    Massimo Tronci ... 345

The Application Context of Research Assessment Methodologies
    Henk F. Moed ... 347

Personal Notes

Under Bibliometrics
    Carmen López-Illescas ... 363

My Long Time Acquaintance with Henk Moed
    Bluma C. Peritz ... 367
Tracing the Art of Metrics-Based Research Assessment Through Henk Moed's Work

Cinzia Daraio and Wolfgang Glänzel
The title of the editorial introduction summarises the main objective of this book. During the ISSI2019 Conference, held at the Sapienza University of Rome on 5 September 2019, we organised a Special Plenary Session in honour of our colleague and friend Prof. Henk F. Moed to celebrate his retirement. Before this special session, a formal ceremony for the conferral of the Doctorate Honoris Causa in Industrial and Management Engineering on "Research Assessment Methodologies" was held in the historical Academic Senate House of the Sapienza University of Rome. We organised this session because we have had the good fortune to accompany stages of Henk Moed's career as his colleagues and collaborators, co-authors and friends; younger colleagues enjoyed the opportunity to learn and benefit from the comprehensive knowledge that he has shared with the scholarly community. We embraced the opportunity to commemorate this special occasion and, at the same time, to honour one of the most prominent scholars in the field of scientometrics by editing this book.

The book consists of four parts. The first part presents selected papers by Henk Moed, the second part contains contributed research papers, the third part refers to the ceremony for the conferral of the Doctorate Honoris Causa in Research Assessment Methodologies to Henk Moed, and the fourth part includes personal notes.

The first part reports a collection of the most important publications by Henk F. Moed. This selection presents Henk as a scholar with a broad spectrum of activities and a multifaceted research profile. Because of his rich contribution to the advancement of research assessment methodologies and their application, investigating the development of his career is, to a considerable extent, also a survey of our research field.
Table 1  A selection of the most important publications by Henk F. Moed

Bibliometric databases
Exploring the use of existing, primarily bibliographic databases for bibliometric purposes has been the most important subject of the first half of Henk Moed's career, although he has made several database-oriented studies also in the second half. It was a topic of great general interest in the field. This topic involves the following sub-topics: the creation of bibliometric databases; combining databases; comparing databases; and the assessment and enhancement of their data quality.
1. Moed, H. F. (1988). The Use of Online Databases for Bibliometric Analysis. In: Informetrics 87/88. L. Egghe and R. Rousseau (eds.), Elsevier Science Publishers, Amsterdam, ISBN 0-444-70425-6, 15–28
2. Moed, H. F., Vriens, M. (1989). Possible Inaccuracies Occurring in Citation Analysis. Journal of Information Science, 15, 2, 95–107. Sage Journals
3. Moed, H. F. (2005). Accuracy of citation counts. In: H. F. Moed, Citation Analysis in Research Evaluation. Springer, Dordrecht (Netherlands). ISBN 1-4020-3713-9, 173–179
4. López-Illescas, C., De Moya-Anegón, F., Moed, H. F. (2008). Coverage and citation impact of oncological journals in the Web of Science and Scopus. Journal of Informetrics, 2, 304–316. Elsevier
5. Moed, H. F., Bar-Ilan, J., Halevi, G. (2016). A new methodology for comparing Google Scholar and Scopus. Journal of Informetrics, 10, 533–551. Elsevier

Journal citation measures
Journal impact factors and related citation measures are even today probably the most frequently used bibliometric indicators. The articles relate to a critique of existing indicators, proposals for new indicators, and a more reflexive paper addressing criteria for evaluating indicators on the basis of their statistical soundness, theoretical validity, and practical usefulness. Also, one paper examines the effect of Open Access upon citation impact.
1. Moed, H. F., van Leeuwen, Th. N. (1995). Improving the accuracy of Institute for Scientific Information's journal impact factors. Journal of the American Society for Information Science, 46, 461–467. Wiley publisher
2. Moed, H. F., van Leeuwen, Th. N., Reedijk, J. (1999). Towards appropriate indicators of journal impact. Scientometrics, 46, 575–589. Springer
3. Moed, H. F. (2007). The effect of "Open Access" upon citation impact: An analysis of ArXiv's Condensed Matter Section. Journal of the American Society for Information Science and Technology, 58, 2047–2054. Wiley publisher
4. Moed, H. F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4, 265–277. Elsevier
5. Moed, H. F. (2016). Comprehensive indicator comparisons intelligible to non-experts: the case of two SNIP versions. Scientometrics, 106 (1), 51–65. Springer

Indicators of research performance in science, social science and humanities
The development of appropriate quantitative research assessment methodologies in the various domains of science and scholarship and at various organizational levels has been Henk Moed's core activity during the first two decades. Bibliometric indicators were applied to research groups, departments, institutions, and countries.
1. Moed, H. F., Burger, W. J. M., Frankfort, J. G., van Raan, A. F. J. (1985). The Use of Bibliometric Data for the Measurement of University Research Performance. Research Policy, 14, 131–149. Elsevier
2. Moed, H. F., de Bruin, R. E., van Leeuwen, Th. N. (1995). New bibliometric tools for the assessment of national research performance: database description, overview of indicators and first applications. Scientometrics, 33, 381–422. Springer
3. Moed, H. F., Hesselink, F. Th. (1996). The publication output and impact of academic chemistry research in the Netherlands during the 1980's: bibliometric analyses and policy implications. Research Policy, 25, 819–836. Elsevier
4. Van den Berghe, H., Houben, J. A., de Bruin, R. E., Moed, H. F., Kint, A., Luwel, M., Spruyt, E. H. J. (1998). Bibliometric indicators of university research performance in Flanders. Journal of the American Society for Information Science, 49, 59–67. Wiley publisher
5. Moed, H. F. (2002). Measuring China's research performance using the Science Citation Index. Scientometrics, 53, 281–296. Springer
6. Moed, H. F., Nederhof, A. J., Luwel, M. (2002). Towards performance in the humanities. Library Trends, 50, 498–520. JHU Press

Theoretical understanding and proper use of bibliometric indicators
This topic comprises articles by Henk Moed discussing and proposing theories about what citations and other bibliometric indicators measure. Moreover, it includes reflexive articles addressing the issue of what are appropriate ways to use these indicators in research assessment processes.
1. Moed, H. F. (2000). Bibliometric indicators reflect publication and management strategies. Scientometrics, 47, 323–346. Springer
2. Moed, H. F., Garfield, E. (2004). In basic science the percentage of 'authoritative' references decreases as bibliographies become shorter. Scientometrics, 60, 295–303. Springer
3. Moed, H. F. (2005). Towards a theory of citations: Some building blocks. In: H. F. Moed, Citation Analysis in Research Evaluation. Springer, Dordrecht (Netherlands). ISBN 1-4020-3713-9, 209–220
4. Moed, H. F. (2008). UK Research Assessment Exercises: Informed Judgments on Research Quality or Quantity? Scientometrics, 74, 141–149. Springer
5. Moed, H. F., Halevi, G. (2015). Multidimensional Assessment of Scholarly Research Impact. Journal of the American Society for Information Science and Technology, 66, 1988–2002. Wiley publisher

Usage-based metrics and altmetrics
Nowadays, publication- and citation-based indicators of research performance are often described as 'classical', and new, alternative types of indicators are being proposed and explored. Two articles by Henk Moed listed below relate to 'usage' indicators, based on the number of times full-text articles are downloaded from publishers' publication archives. A third article discusses the potential of so-called altmetrics, especially those that reflect the use of social media.
1. Moed, H. F. (2005). Statistical relationships between downloads and citations at the level of individual documents within a single journal. Journal of the American Society for Information Science and Technology, 56, 1088–1097. Wiley publisher
2. Moed, H. F. (2016). Altmetrics as traces of the computerization of the research process. In: C. R. Sugimoto (Ed.), Theories of Informetrics and Scholarly Communication (A Festschrift in honour of Blaise Cronin). Walter de Gruyter, Berlin–Boston. ISBN 978-3-11-029803-1, 360–371
3. Moed, H. F., Halevi, G. (2016). On full text download and citation distributions in scientific-scholarly journals. Journal of the American Society for Information Science and Technology, 67, 412–431. Preprint version available at https://arxiv.org/ftp/arxiv/papers/1510/1510.05129.pdf. Wiley publisher

International collaboration and migration
Scientific collaboration and migration are important phenomena that can be properly studied with bibliometric-informetric methods. Three contributions by Moed are listed below, two on collaboration and one on migration.
1. Moed, H. F. (2005). Does international scientific collaboration pay? In: H. F. Moed, Citation Analysis in Research Evaluation. Springer, Dordrecht (Netherlands). ISBN 1-4020-3713-9, 285–290
2. Moed, H. F. (2016). Iran's scientific dominance and the emergence of South-East Asian countries as scientific collaborators in the Persian Gulf Region. Scientometrics, 108, 305–314. Preprint version available at http://arxiv.org/ftp/arxiv/papers/1602/1602.04701.pdf. Springer
3. Moed, H. F., Halevi, G. (2014). A bibliometric approach to tracking international scientific migration. Scientometrics, 101, 1987–2001. Springer

The future of bibliometrics and informetrics
The articles by Moed in this section provide a perspective on the future, both in the development of informetric indicators and in their application in research assessment processes. His monograph Applied Evaluative Informetrics contains several chapters on these topics; therefore, the executive summary of that book is also listed below.
1. Moed, H. F. (2007). The Future of Research Evaluation Rests with an Intelligent Combination of Advanced Metrics and Transparent Peer Review. Science and Public Policy, 34, 575–584. Oxford University Press
2. Moed, H. F. (2016). Toward new indicators of a journal's manuscript peer review process. Frontiers in Research Metrics and Analytics, 1, art. no. 5. Available at: http://journal.frontiersin.org/article/10.3389/frma.2016.00005/full
3. Moed, H. F. (2017). A critical comparative analysis of five world university rankings. Scientometrics, 110, 967–990. Springer
4. Moed, H. F. (2017). Executive Summary. In: H. F. Moed, Applied Evaluative Informetrics. Springer, ISBN 978-3-319-60521-0 (hard cover); 978-3-319-60522-7 (E-Book), https://doi.org/10.1007/978-3-319-60522-7
We grouped his publications into seven topics at the intersection of bibliometrics, scientometrics, informetrics and research evaluation. The main topics covered are: 'Bibliometric databases', 'Journal citation measures', 'Indicators of research performance in science, social science and humanities', 'Theoretical understanding and proper use of bibliometric indicators', 'Usage-based metrics and altmetrics', 'International collaboration and migration', and 'The future of bibliometrics and informetrics' (see Table 1).

The second part collects 13 original research papers by experts in the field who have worked and collaborated with Henk F. Moed over the last three decades. We organised these contributions, reported in detail in Table 2, into the following three topics:

– Advancement of bibliometric methodology
– Evaluative informetrics and research assessment
– New horizons in informetric studies.
Table 2  Chapters in Part II

Advancement of bibliometric methodology
  Braam R.: Citation profiles and research dynamics
  Luwel M., van Eck N. J., and van Leeuwen T.: Characteristics of publication delays over the period 2000–2016
  Pendlebury D. A.: When the data do not mean what they say: Japan's comparative underperformance in citation impact
  Zhao Y., Han J., Du J. and Wu Y.: Origin and Impact: A Study of the Intellectual Transfer of Professor Henk F. Moed's Works by Using Reference Publication Year Spectroscopy (RPYS)

Evaluative informetrics and research assessment
  Calero-Medina C., Noyons Ed, Visser M. and de Bruin R.: Delineating Organizations at CWTS—A story of many pathways
  Halevi G.: Research Trends—Practical Bibliometrics and a Growing Publication
  Pallari E. and Lewison G.: The evidence base of international clinical practice guidelines on prostate cancer: a global framework for clinical research evaluation
  Robinson-Garcia N. and Ràfols I.: The differing meanings of indicators under different policy contexts. The case of internationalisation
  Gorraiz J., Wieland M., Ulrych U. and Gumpenberger C.: De profundis: a decade of bibliometric services under scrutiny

New horizons in informetric studies
  Costas R. and Ferreira M. R.: A Comparison of the Citing, Publishing, and Tweeting Activity of Scholars on Web of Science
  Torres-Salinas D. and Arroyo-Machado W.: Library Catalog Analysis and Library Holdings Counts: origins, methodological issues and application to the field of Informetrics
  De-Moya-Anegón F., Guerrero-Bote V. P. and Herrán-Páez E.: Cross-national comparison of Open Access models: A cost/benefit analysis
  Bar-Ilan J. and Halevi G.: The Altmetrics of Henk Moed's publications
The following gives a content-related summary of the above 13 chapters, most of which are very closely related to Henk Moed's ideas, proceeding from, reinforcing or generalising his findings with new examples or contexts, others using his work as the subject of new bibliometric studies.
Advancement of bibliometric methodology

The chapter by Braam (2020) analyses the citation profiles of individual researchers, as reflected by Google Scholar, in the light of their dynamics. The author distinguished different types of profiles according to the authors' productivity and prestige. The comparison with expected patterns based on bibliometric theories of publication and citation processes resulted in the identification of three characteristic elements in terms of communication and the reception by the community.

Luwel et al. (2020) study the characteristics of publication delays in the era of electronic scholarly communication over roughly the last two decades. The study is based on Elsevier publications and is conducted at three levels: the subject level, the journal level and the publishing model. Although the publication process has been substantially accelerated, peer review still requires a considerable amount of time and proved to be the most time-consuming element in the process.

Pendlebury (2020) examines an interesting phenomenon: Japan's comparative underperformance in citation impact. The analysis is methodically based on several aspects that are usually considered influencing factors of citation impact, including publication language, number of co-authors, international collaboration, mobility, research focus and diversity. The author identified the national orientation of publication venues, with an effect of cumulative disadvantage, as one possible determinant resulting in a structural citation-impact deficit. He also argues in favour of a careful interpretation of national citation indicators to avoid misconstruction of their meaning.

The chapter by Zhao, Han, Du, and Wu (2020) focuses on the intellectual transfer of Henk Moed's ideas. In particular, the authors propose the (co-)citation analysis of both the documents cited by Henk Moed and the literature citing his most influential papers. In order to implement this idea, the authors adopt a method previously proposed by Marx et al. (2014), called Reference Publication Year Spectroscopy (RPYS). By doing so, they characterise Henk Moed as one of the influential contemporary scientists in the field of bibliometrics and informetrics and also provide new methodological insights by connecting bibliometrics with research in the history of science.
Evaluative informetrics and research assessment

The chapter by Calero-Medina et al. (2020) tackles an extremely important task in evaluative bibliometrics: the identification and harmonisation of entities. They describe the time-consuming and complex process of identifying and harmonising organisation names, which includes the careful cleaning of the author affiliations of publications. This work proved to be an indispensable prerequisite for reliable meso-level research evaluation and university rankings.
Halevi (2020) reflects on Henk Moed's work as the editor-in-chief of Research Trends, Elsevier's online publication aiming to provide straightforward insights into scientific trends based on bibliometric research. Under Henk Moed's management, Research Trends evolved from a kind of newsletter into a full-featured scientific publication providing a large spectrum of articles on a variety of topics and disciplines.

Pallari and Lewison (2020) investigate the evidence base of international clinical practice guidelines on prostate cancer. The guidelines are designed to ensure that medical diagnosis and treatment are based on the best available evidence. The authors analyse the guidelines' cited references in journals processed in the Web of Science as their evidence base. They found, among other things, that most guidelines over-cite research from their own country, and that countries differ in the topicality of the cited references. The authors conclude that the citations in the guidelines provide an alternative source of information for the evaluation of clinical research.

In their chapter, Robinson-Garcia and Ràfols (2020) focus on the use of indicators in research evaluation regarding internationalisation policies. In particular, they analyse three examples of indicators in this context. The first example is related to international collaboration and investigates whether a larger share of internationally co-authored publications exhibits higher citation impact and thereby benefits national science systems. The second concerns publication language, particularly the promotion of English as the dominant language of science. The last example shows the effect of policy contexts in shaping the use and application of bibliometric indicators, sometimes in a partial way that does not properly reflect the phenomenon under study.

Gorraiz et al. (2020) present and discuss the lessons learned after having provided bibliometric services at the University of Vienna for more than a decade. By comparing their experience and insights with current evaluative practices and with the statements of declarations and manifestos, they arrive at new recommendations, including on the question of the degree to which alternative metrics have the potential to be used in research assessment. The authors also plead for going beyond evaluative tasks: bibliometric services should encourage researchers to improve their publication strategies and to enhance their visibility within and beyond their research communities.
New horizons in informetric studies

Proceeding from Henk Moed's statement that web-based indicators "do not have function merely in the evaluation of research performance of individuals and groups, but also in the research process", Costas and Ferreira (2020) set out to go beyond the evaluative perspective of altmetrics to a more contextualised one, in which they conducted a comparative analysis of researchers' citing, publishing, and tweeting activities. They found at the individual researcher level that Twitter-based indicators are empirically different from production-based and citation-based bibliometric
indicators. The authors consider their results a step towards a conceptual shift to a more dynamic perspective that focuses on the social media activities of researchers, and propose future research directions based on their findings.

A completely different approach is proposed by Torres-Salinas and Arroyo-Machado (2020) in their chapter. Library Catalog Analysis, designed as the application of bibliometric techniques to published book titles in online library catalogues, can be used to analyse the impact and dissemination of academic books in different ways. The aim of the chapter is to conduct an in-depth analysis of the major scientific contributions to this topic. Beyond discussing the original purposes of library holdings analysis and the principal sources of information, the authors study the correlation between library holdings and altmetric indicators, and the use of WorldCat Identities to identify the principal authors and works in the field of informetrics.

A cost/benefit analysis of Open Access publishing in a cross-national comparison of OA models is presented by de-Moya-Anegón et al. (2020). The transition from traditional publishing towards OA is dealt with internationally in different ways. In particular, the four OA models (platinum, gold, green and hybrid) are compared in terms of scientific impact and costs. The authors found, and discuss, different country models, with different costs and different results.

Halevi and Bar-Ilan (2020) have chosen Henk Moed's work as the subject of their study. His work, embracing collaboration with over 60 authors from 30 countries across all continents and published in more than 30 different journals, has attracted thousands of citations. Hitherto, relatively little has been known about the altmetric impact of his work in terms of usage, readership, and social media attention. The results obtained from the main altmetric indicators shed light on how his publications are viewed, read, shared and tweeted about within the scholarly community and beyond.

Part III concerns the conferral of the Doctorate Honoris Causa to Henk Moed and includes the opening address of the Rector of Sapienza University of Rome (Gaudio, 2020), the address by the Coordinator of the Doctoral Program in Industrial and Management Engineering (Tronci, 2020) and Henk Moed's Lectio Magistralis (Moed, 2020). In his Lectio Magistralis on "The Application Context of Research Assessment Methodologies", Moed sheds new light on the complex and controversial role and use of bibliometric and informetric indicators in the assessment of research performance. He highlights the fundamental importance of the application context of these indicators, thereby informing and further developing the search for best practices in research assessment.

Part IV includes a personal note (Lopez-Illescas, 2020) and concludes the book with an interview of Bluma Peritz conducted by Cinzia Daraio during the ISSI2019 Conference in Rome (Peritz, 2020).

We would like to express our gratitude to all the authors of the chapters of Part II of the book for their valuable contributions. We warmly thank the publishers of the journals and books reported in Part I of the book who kindly allowed us to reproduce the abstracts and the executive summary of the selected works of Henk Moed. Finally, our deepest thanks are due to Diletta Abbonato for her precious support in the finalisation
of the book and to Guido Zosimo-Landolfo from Springer Nature for his kind support in the development of this project.
References

Bar-Ilan, J., & Halevi, G. (2020). The altmetrics of Henk Moed's publications. In this volume.
Braam, R. (2020). Citation profiles and research dynamics. In this volume.
Calero-Medina, C., Noyons, E., Visser, M., & de Bruin, R. (2020). Delineating organizations at CWTS—A story of many pathways. In this volume.
Costas, R., & Ferreira, M. R. (2020). A comparison of the citing, publishing, and tweeting activity of scholars on Web of Science. In this volume.
De-Moya-Anegón, F., Guerrero-Bote, V. P., & Herrán-Páez, E. (2020). Cross-national comparison of Open Access models: A cost/benefit analysis. In this volume.
Gaudio, E. (2020). Conferral of the Doctorate honoris causa in Industrial and Management Engineering to Hendrik F. Moed: Opening address. In this volume.
Gorraiz, J., Wieland, M., Ulrych, U., & Gumpenberger, C. (2020). De profundis: A decade of bibliometric services under scrutiny. In this volume.
Halevi, G. (2020). Research Trends—Practical bibliometrics and a growing publication. In this volume.
Lopez-Illescas, C. (2020). Under bibliometrics. In this volume.
Luwel, M., van Eck, N. J., & van Leeuwen, T. (2020). Characteristics of publication delays over the period 2000–2016. In this volume.
Marx, W., Bornmann, L., Barth, A., & Leydesdorff, L. (2014). Detecting the historical roots of research fields by reference publication year spectroscopy (RPYS). Journal of the Association for Information Science and Technology, 65(4), 751–764.
Moed, H. F. (2020). The application context of research assessment methodologies. Lectio Magistralis, Doctorate Honoris Causa, Sapienza University of Rome, 5 September 2019. In this volume.
Pallari, E., & Lewison, G. (2020). The evidence base of international clinical practice guidelines on prostate cancer: A global framework for clinical research evaluation. In this volume.
Pendlebury, D. A. (2020). When the data don't mean what they say: Japan's comparative underperformance in citation impact. In this volume.
Peritz, B. (2020). My long time acquaintance with Henk Moed. In this volume.
Robinson-Garcia, N., & Ràfols, I. (2020). The differing meanings of indicators under different policy contexts. The case of internationalisation. In this volume.
Torres-Salinas, D., & Arroyo-Machado, W. (2020). Library catalog analysis and library holdings counts: Origins, methodological issues and application to the field of informetrics. In this volume.
Tronci, M. (2020). Conferral of the Doctorate honoris causa in Industrial and Management Engineering to Hendrik F. Moed: Address by the Coordinator of the Doctoral Program in Industrial and Management Engineering. In this volume.
Zhao, Y., Han, J., Du, J., & Wu, Y. (2020). Origin and impact: A study of the intellectual transfer of Professor Henk F. Moed's works by using Reference Publication Year Spectroscopy (RPYS). In this volume.
Selected Essays
Selected essays of Henk F. Moed

Cinzia Daraio and Wolfgang Glänzel
Introduction

This part presents a collection of the most important publications by Henk F. Moed. This collection characterises the author as a researcher with a broad spectrum of activities and a multifaceted research profile. As Henk Moed has contributed to the advancement of the field in many topics, an overview of the development of his career is, to a considerable extent, also a survey of the research field. We grouped his publications into seven topics at the intersection of bibliometrics, informetrics, science studies and research assessment. The main topics are the following:

1. Bibliometric databases
2. Journal citation measures
3. Indicators of research performance in science, social science and humanities
4. Theoretical understanding and proper use of bibliometric indicators
5. Usage-based metrics and altmetrics
6. International collaboration and migration
7. The future of bibliometrics and informetrics
The authors would like to thank Sarah Heeffer (ECOOM, KU Leuven, Belgium) for her kind assistance in proofreading the chapter.
Bibliometric databases

Exploring the use of existing, primarily bibliographic databases for bibliometric purposes has been the most important subject of Henk Moed's work during the first half of his career, although he has made several database-oriented studies also in the second half. It was a topic of great general interest in the field. This topic involves the following sub-topics: the creation of bibliometric databases; combining databases; comparing databases; and the assessment and enhancement of their data quality.

Moed, H.F. (1988). The Use of Online Databases for Bibliometric Analysis. In: Informetrics 87/88. L. Egghe1 and R. Rousseau2 (eds.), Elsevier Science Publishers, Amsterdam, (ISBN 0-444-70425-6), 1988, 15–28.
1 Univ Hasselt, Belgium
2 Univ Antwerp, Belgium

Abstract
Databases containing bibliometric information on published scientific literature play an important role in the field of quantitative studies of science and in the development and application of Science and Technology indicators. For these purposes, perhaps the most important and probably the most frequently used database is the Science Citation Index, produced by the Institute for Scientific Information. SCISEARCH, the on-line version of the Science Citation Index (SCI), is included in several host computers. However, other databases are used as well, such as Physics Abstracts or Chemical Abstracts. In this contribution, potentialities and limitations of several online databases as sources of bibliometric data in a number of host computers will be discussed. The discussion will focus on the on-line version of the Science Citation Index, and on citation analysis. It will be argued that for several specific bibliometric applications, on-line databases and software implemented in the host computer do not provide appropriate facilities. In fact, for these specific applications, one should first download the primary data from the host into a local computer (PC, mainframe). Next, dedicated software should be developed on a local level, in order to perform the bibliometric analyses properly. This will be illustrated by presenting a number of applications, related to citation analysis ('impact measurement') and co-citation analysis ('mapping fields of science').

Moed, H.F., Vriens, M.1 (1989). Possible Inaccuracies Occurring in Citation Analysis. Journal of Information Science, 15, 2, 95–107. Sage Journals
1 University of Wisconsin, La Crosse, United States

Abstract
Citation analysis of scientific articles constitutes an important tool in quantitative studies of science and technology. Moreover, citation indexes are used frequently in searches for relevant scientific documents. In this article we focus on the issue of reliability of citation analysis. How accurate are citation counts to individual scientific articles? What pitfalls might occur in the process of data collection? To what extent do 'random' or 'systematic' errors affect the results of the citation analysis? We present a detailed analysis of discrepancies between target articles and cited references with respect to author names, publication year, volume number, and starting page number. Our data consist of some 4500 target articles published in five scientific journals, and 25000 citations to these articles. Both target and citation data were obtained from the Science Citation Index, produced by the Institute for Scientific Information. It appears that in many cases a specific error in a citation to a particular target article occurs in more than one citing publication. We present evidence that authors, in compiling reference lists, may copy references from reference lists in other articles, and that this may be one of the mechanisms underlying this phenomenon of 'multiple' variations/errors.

Accuracy of citation counts. In: Citation Analysis in Research Evaluation, Dordrecht (Netherlands), 2005.
From: Moed, H.F. (2005). Citation Analysis in Research Evaluation. Dordrecht (Netherlands): Springer. ISBN 1-4020-3713-9, 346, Chapter 13, Pages 173–180.
Introduction and Research Questions

Many bibliometric indicators are based on the number of times particular articles are cited in the journals processed for the various ISI Citation Indexes, the Science Citation Index (SCI) being the most prominent. Thus, citation links constitute crucial elements both in scientific literature retrieval and in assessment of research performance or journal impact (Garfield, 1979). The reliability of citation-based indicators strongly depends on the accuracy with which citation links are identified. It is therefore essential to users of citation-based indicators to have detailed insights into the types of problems that emerge and the degree of accuracy that can be achieved in establishing these links. This chapter aims at providing such insights. It builds upon the terminology described in Chap. 6.

The ISI citation indexes, including the SCI and the Web of Science, contain for all documents published in approximately 7,500 journals, full bibliographic data, including their title, all contributing authors and their institutional affiliations, journal title, issue, volume, starting and ending page number. The cited references from source articles are also extracted. These are the publications included in the reference lists at the bottom of a paper. From a cited reference, ISI includes five data fields: the first author, source (e.g., journal, or book) title, publication year, volume number and starting page number.

Generally, the representation of a target document subjected to citation analysis may differ from that regarded as a cited reference. For instance, an author citing a particular target article may indicate an erroneous starting page number or may have misspelled the cited author's name in his or her reference list. The neutral term 'discrepancy' is used to indicate such differences or variations between a target article intentionally cited in a reference and the cited reference itself. A basic problem in
any citation analysis holds: how does one properly match a particular set of target articles to the file of cited references, in order to establish accurate citation links between these targets and the source articles citing them, and how should one deal with discrepancies? This chapter examines the case in which the set of target articles is a set as large as the total collection of source articles processed by ISI during a twenty-year period. In other words, it deals with citation links between ISI source articles, described in Sect. 6.3. The questions addressed in this chapter are: What types of discrepancies between cited references and target articles occur? How frequently do these occur? And what are the consequences of omitting discrepant references in the calculation of citation statistics?
Data and Methods

The Centre for Science and Technology Studies (CWTS) at Leiden University has created a large database of all documents processed during the period 1980–2004 for the CD-ROM version of the SCI and a number of related Citation Indexes on CD-ROM. The database is bibliometric, as it is primarily designed to conduct quantitative, statistical analysis and mapping, and was used in a large series of scientific and commissioned projects conducted during the past 10 years (van Raan, 1996; van Raan, 2004a). The analyses presented below relate to as many as 22 million cited references extracted from all source articles processed in 1999, matched to about 18 million target articles, being the total collection of ISI source articles published during the period 1980–1999.

The methodology applied in this chapter builds upon work described in an earlier paper by Moed and Vriens (1989), and in a paper by Luwel (1999). It focuses on cases showing discrepancies in one data field only. Cited references and target articles were matched in a process involving five match keys, each one based on four out of the five data fields available. In a first round, a match key was applied consisting of the first six characters of the author's family name, his or her first initial, the year of publication, volume number and starting page number. This key can be assumed to be a sufficiently unique characterization of a journal article and will be denoted as 'simple' match key. For reasons of simplicity, cited references matched in this round will be denoted as 'correct'. In a second round, additional match keys were applied, including the journal title, but leaving out the author name, publication year, volume number and starting page number, respectively. Thus, discrepancies in the data field omitted could be analyzed. Cited references matched in this second round will be denoted as 'discrepant'. Discrepancies were reconstructed by finding a 'plausible' explanation for them. Therefore, a classification was designed of 32 types of discrepancies. Discrepancies for which, in the current stage of the work, no plausible explanation could be given, were assigned to a rest category.
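To make the two-round match-key procedure concrete, the following sketch illustrates how such keys could be built and applied. It is not the original CWTS implementation; the record field names and the upper-casing used as normalisation are assumptions made purely for illustration.

```python
# Illustrative sketch of the two-round match-key procedure described above.
# NOT the original CWTS implementation: field names ("family_name", "first_initial",
# "year", "volume", "start_page", "journal") and the normalisation are assumptions.

def simple_key(rec):
    """Round 1 key: first six characters of the family name, first initial,
    publication year, volume number and starting page number."""
    return (
        rec["family_name"][:6].upper(),
        rec["first_initial"].upper(),
        rec["year"],
        rec["volume"],
        rec["start_page"],
    )

def relaxed_keys(rec):
    """Round 2 keys: four keys, each including the journal title but leaving out
    the author name, publication year, volume number or starting page number."""
    base = {
        "author": (rec["family_name"][:6].upper(), rec["first_initial"].upper()),
        "year": rec["year"],
        "volume": rec["volume"],
        "page": rec["start_page"],
    }
    keys = []
    for omitted in ("author", "year", "volume", "page"):
        fields = tuple(v for k, v in sorted(base.items()) if k != omitted)
        keys.append((omitted, (rec["journal"].upper(),) + fields))
    return keys

def match(cited_refs, targets):
    """Return ('correct', 'discrepant') matches of cited references to targets."""
    by_simple = {simple_key(t): t for t in targets}
    by_relaxed = {}
    for t in targets:
        for omitted, key in relaxed_keys(t):
            by_relaxed[(omitted, key)] = t

    correct, discrepant = [], []
    for ref in cited_refs:
        target = by_simple.get(simple_key(ref))
        if target is not None:
            correct.append((ref, target))
            continue
        for omitted, key in relaxed_keys(ref):
            target = by_relaxed.get((omitted, key))
            if target is not None:
                # The omitted field is the one in which the discrepancy occurs.
                discrepant.append((ref, target, omitted))
                break
    return correct, discrepant
```

With records shaped like this, the ratio reported in Table 13.1 below would simply be len(discrepant) / len(correct).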
Results

Table 13.1 presents the number of matches obtained in applying the various match keys. In the second round, 989,709 discrepant cited references were matched. This number equals 7.7% of the total number of 'correct' references matched in the first round, applying the simple match key. The 32 types of discrepancies were grouped into 11 main types, presented in Table 13.2.

Many of the discrepancies showing small variations in a data field can be attributed to inaccurate referencing by the citing authors. However, a substantial part of small variations in author names is not due to inattention or sloppiness, but rather to difficulties in identifying the family name and first names of authors from foreign countries or cultures (Borgman and Siegfried, 1992). A typical example is when Western scientists unfamiliar with Chinese names cite a Chinese author. Moreover, transliteration, i.e. the spelling of author names from one language with characters from the alphabet of another, may easily lead to mismatches. Chapter 3 further discusses problems with author names.

Table 13.1. Matches and discrepancies

Round   Data field in which discrepancy occurred   No. refs matched   Ratio discrepant/correct refs (%)
1       No discrepancy ('correct' reference)        12,887,206
2       Volume number                                  207,043        1.6
2       Author                                         272,009        2.1
2       Publication year                                95,190        0.7
2       Starting page number                           415,467        3.2
2       Total 2nd round                                989,709        7.7

Note: Number of ISI source/target articles (1980–1999): approximately 18.4 million. The figure for starting page number includes an estimated 20% of cases in which the cited page number originally contained a character (e.g., p. L115) but was missing in the file used in this analysis.
Table 13.2. Main types of discrepancies

Main type of discrepancy                                                N        %
Page number in cited ref missing                                        165,793  16.7
Small variations in author names                                        159,503  16.1
Small variations in page numbers                                        117,683  11.9
Small variations in volume numbers                                       95,336   9.6
Small variations in publication years                                     62,837   6.3
Cited page number lies between starting and end page of target           58,853   5.9
Issue number cited rather than volume number                              41,369   4.2
Citations to papers by 'consortia'                                        36,196   3.7
Volume number missing in cited ref (but not in target)                    20,323   2.1
Secondary author cited rather than first author                           19,281   1.9
Author name in target or cited reference missing                          14,754   1.5
Total number of discrepancies explained                                  791,928  80.0

All other discrepancies in author names                                   42,275   4.3
All other discrepancies in page numbers                                   73,138   7.4
All other discrepancies in volume numbers                                 50,015   5.1
All other discrepancies in publication years                              32,353   3.3
Total number of discrepancies not (yet) explained                        197,781  20.0

Total number of discrepancies analysed                                   989,709  100.0
Table 13.2 shows that in the current stage of the work about 80% of the discrepancies could be explained and matched with a very high probability to the intended target. For the remaining 20% of discrepant references no plausible explanation of the discrepancy could yet be given. It is expected that there is a certain percentage of these that was erroneously matched to a target, particularly when they contain discrepancies in two or more datafields. Several types of discrepancies are caused mainly by editorial characteristics of the journals cited, by referencing conventions in particular fields of scholarship, or by data capturing and formatting procedures at ISI, or by a combination of these three factors. This can be illustrated with the following examples.

– When scholars in the field of law cite a paper, they often include in their reference the page number containing the statement(s) they are referring to. Thus, the cited page number is often not the starting page number, but rather a number between starting and end page. There is a striking similarity among reference lists among US law journals in this respect, all showing around 50% of mismatches. Indicating a page number 'in between' also occurs, though less frequently, in references to reviews or data compilations in the natural and life sciences.

– Several journals have dual-volume numbering systems, or publish 'combined' (particularly proceedings) volumes. ISI data capturing procedures do not allow for ranges of numbers in the (source) volume number field, and therefore in a
sense has to choose from several possibilities. Citing authors may make different choices, however, so that volume numbers in cited reference and target article may differ. A similar problem arises with journals of which it is apparently unclear whether the serial numbering system relates to volumes or to issues.

– Journals may publish their articles in a printed and an electronic version, and article identifiers in these versions may differ from one another. Starting and end page numbers may differ, or the electronic version may apply article serial numbers rather than page numbers. Although ISI puts an enormous effort into dealing with such differences, these may hinder proper matching of cited references and target articles, and are expected to become more onerous in the future.

– Particularly in the medical sciences, more and more papers are published presenting outcomes of a joint study conducted by a consortium, task force, survey committee or clinical trial group. Such papers normally do have authors, and ISI includes the first author on the paper in the first author field. However, scientists citing such papers indicate in their reference list mostly the name of the consortium rather than that of the first author. As a result, names in the author fields of target and cited reference do not match. The journal Nature is not the only journal suffering from this type of discrepancy (Anonymous, Nature, 2002).

It is essential to make clear that, due to their systematic nature, the discrepancies between targets and cited references are skewly distributed among target articles. Table 13.3 shows parameters of the distribution of discrepant citations among target articles. Most informative is an analysis by journal, examining the effect of including discrepancies upon its impact factor, and one by country of origin of the target articles receiving discrepant citations (Table 13.4). The journal most affected by ignoring discrepant citations is Clinical Orthopedics and Related Research. The serial numbers attached to this journal are captured by ISI as issue numbers, whereas virtually all cited references to the journal's papers include these numbers in the volume number field. Focusing on the bigger non-Western countries, the (former) USSR shows the highest ratio of discrepant/correct citations (21%), followed by China (13%). Among the larger Western countries, Spain and Italy rank top with 7.9 and 7.0%, respectively. USA and Australia show the lowest percentages, 5.7 and 5.3, respectively.

Table 13.3. Distribution of discrepant citations among cited target articles

No. citations   Cumm. cited articles (%)   Cumm. discrepant citations (%)
1               78.7                       51.9
2               91.3                       68.5
3               95.2                       75.9
10              99.4                       91.1
15              99.7                       93.7
444             100.0                      100.0
Table 13.3 demonstrates how the 989,709 references showing a discrepancy are distributed among target articles intentionally cited: 652,419 targets were affected; 78.7% of these received only one discrepant citation, accounting for 51.9% of all cited references showing a discrepancy. About 5% of the targets received at least 4 discrepant citations that account for about 24% of all discrepant citations. About 4,000 targets (0.6%) received more than 10 discrepant citations, accounting for 8.9% of all discrepant citations. The maximum number of discrepant citations to the same target is 444. This is a 'Consortium' paper published by the Diabetes Control Complication Trial (first author Shamoon, H), in New Engl. J. Med, 329 (14), 977–986, (1993).

Table 13.4. Percentile values of the distribution of the ratio discrepant/correct citations among target journals and countries

Percentile   Ratio discrepant/correct citations (%)
             Journals   Countries
P10          2.5        5.4
P25          3.4        6.3
P50          4.9        7.8
P75          7.2        9.0
P90          11.6       11.9
P95          18.3       14.2
P99          108.9      41.6
For 2,547 journals (second column) and 99 countries (third column) receiving in 1999 more than 100 'correct' citations to articles published in 1997 and 1998, the ratio was calculated on the number of discrepant and correct citations, expressed as a percentage. The distribution of ratio scores among journals and countries was characterised by their percentile values. The 50th percentile (P50, i.e. the median) is 4.9 for journals and 7.8 for countries. For 127 journals (5%) the ratio discrepant/correct citations exceeds 18.3% (P95), and for 5 countries this ratio exceeds 14.2. For one country it is 41.6%: Vietnam.

Cited References

Borgman, C.L., and Siegfried, S.L. (1992). Getty's Synoname and its cousins: A survey of applications of personal name-matching algorithms. Journal of the American Society for Information Science, 43, 459–476.
Garfield, E. (1979). Citation Indexing. Its theory and application in science, technology and humanities. New York: Wiley.
Lok, C.K.W., Chan, M.T.V., and Martinson, I.M. (2001). Risk factors for citation errors in peer-reviewed nursing journals. Journal of Advanced Nursing, 34, 223–229.
Luwel, M. (1999). Is the Science Citation Index US-biased? Scientometrics, 46, 549–562.
Moed, H. F., Vriens, M. (1989). Possible inaccuracies occurring in citation analysis. Journal of Information Science, 15, 95–107.
Van Raan, A.F.J. (1996). Advanced bibliometric methods as quantitative core of peer review based evaluation and foresight exercises. Scientometrics, 36, 397–420.
Van Raan, A.F.J. (2004a). Measuring Science. In: Moed, H.F., Glänzel, W., and Schmoch, U. (2004) (eds.). Handbook of quantitative science and technology research. The use of publication and patent statistics in studies of S&T systems. Dordrecht (the Netherlands): Kluwer Academic Publishers, 19–50.

López-Illescas, C.1, De Moya-Anegón, F.2, Moed, H.F. (2008). Coverage and citation impact of oncological journals in the Web of Science and Scopus. Journal of Informetrics, 2, 304–316. Elsevier
1 SCImago Research Group, University of Extremadura, Badajoz, Spain
2 SCImago Research Group, Spanish National Research Council, Madrid, Spain

Abstract
This paper reviews a number of studies comparing Thomson Scientific's Web of Science (WoS) and Elsevier's Scopus. It collates their journal coverage in an important medical subfield: oncology. It is found that all WoS-covered oncological journals (n = 126) are indexed in Scopus, but that Scopus covers many more journals (an additional n = 106). However, the latter group tends to have much lower impact factors than WoS-covered journals. Among the top 25% of sources with the highest impact factors in Scopus, 94% is indexed in the WoS, and for the bottom 25% only 6%. In short, in oncology the WoS is a genuine subset of Scopus and tends to cover the best journals from it in terms of citation impact per paper. Although Scopus covers 90% more oncological journals compared to WoS, the average Scopus-based impact factor for journals indexed by both databases is only 2.6% higher than that based on WoS data. Results reflect fundamental differences in coverage policies: the WoS based on Eugene Garfield's concepts of covering a selective set of most frequently used (cited) journals; Scopus with broad coverage, more similar to large disciplinary literature databases. The paper also found that 'classical', WoS-based impact factors strongly correlate with a new, Scopus-based metric, SCImago Journal Rank (SJR), one of a series of new indicators founded on earlier work by Pinski and Narin [Pinski, G., & Narin, F. (1976). Citation influence for journal aggregates of scientific publications: Theory, with application to the literature of physics. Information Processing and Management, 12, 297–312] that weight citations according to the prestige of the citing journal (Spearman's rho = 0.93). Four lines of future research are proposed.

Moed, H.F., Bar-Ilan, J.1, Halevi, G.2 (2016). A new methodology for comparing Google Scholar and Scopus. Journal of Informetrics, 10, 533–551. Elsevier
1 Bar Ilan University, Ramat Gan, Israel
2 Mount Sinai School of Medicine, NY, USA

Abstract
A new methodology is proposed for comparing Google Scholar (GS) with other citation indexes. It focuses on the coverage and citation impact of sources, indexing
speed, and data quality, including the effect of duplicate citation counts. The method compares GS with Elsevier’s Scopus, and is applied to a limited set of articles published in 12 journals from six subject fields, so that its findings cannot be generalized to all journals or fields. The study is exploratory, and hypothesis generating rather than hypothesis-testing. It confirms findings on source coverage and citation impact obtained in earlier studies. The ratio of GS over Scopus citation varies across subject fields between 1.0 and 4.0, while Open Access journals in the sample show higher ratios than their non-OA counterparts. The linear correlation between GS and Scopus citation counts at the article level is high: Pearson’s R is in the range of 0.8–0.9. A median Scopus indexing delay of two months compared to GS is largely though not exclusively due to missing cited references in articles in press in Scopus. The effect of double citation counts in GS due to multiple citations with identical or substantially similar meta-data occurs in less than 2% of cases. Pros and cons of article-based and what is termed as concept-based citation indexes are discussed.
Journal citation measures
Journal impact factors and related citation measures are even today probably the most frequently used bibliometric indicators. The articles relate to a critique of existing indicators, proposals for new indicators, and a more reflexive paper addressing criteria for evaluating indicators on the basis of their statistical soundness, theoretical validity, and practical usefulness. Also, one paper examines the effect of Open Access upon citation impact. Moed, H.F., van Leeuwen, Th.N.1 (1995). Improving the accuracy of Institute for Scientific Information’s journal impact factors. Journal of the American Society for Information Science (JASIS) 46, 461–467. Wiley publisher 1 Leiden University, Leiden, Netherlands Abstract The Institute for Scientific Information (ISI) publishes annually listings of impact factors of scientific journals, based upon data extracted from the Science Citation Index (SCI). The impact factor of a journal is defined as the average number of citations given in a specific year to documents published in that journal in the two preceding years, divided by the number of “citable” documents published in that journal in those 2 years. This article presents evidence that for a considerable number of journals the values of the impact factors published in ISI’s Journal Citation Reports (JCR) are inaccurate, particularly for several journals having a high impact factor. The inaccuracies are due to an inappropriate definition of citable documents. Document types not defined by ISI as citable (particularly letters and editorials) are actually cited and do contribute to the citation counts of a journal. We present empirical data in order to assess the degree of inaccuracy due to this phenomenon. For several journals
the results are striking. We propose to calculate, for a journal, impact factors per type of document rather than one single impact factor as given currently in the JCR. Moed, H.F., van Leeuwen, Th.N.1, Reedijk, J.2 (1999). Towards appropriate indicators of journal impact. Scientometrics 46, 575–589. Springer 1, 2 Leiden University, Leiden, Netherlands Abstract This paper reviews a range of studies conducted by the authors on indicators reflecting scholarly journal impact. A critical examination of the journal impact data in the Journal Citation Reports (JCR), published by the Institute for Scientific Information (ISI), has shown that the JCR impact factor is inaccurate and biased towards journals revealing a rapid maturing or decline in impact. In addition, it was found that the JCR cited half-life is an inappropriate measure of decline of journal impact. More appropriate impact measures of scholarly journals are proposed. A new classification system is explored, describing both maturing and decline of journal impact as measured through citations. Suggestions for future research are made, analyzing in more detail the distribution of citations among papers in a journal. Moed, H.F. (2007). The effect of “Open Access” upon citation impact: An analysis of ArXiv’s Condensed Matter Section. Journal of the American Society for Information Science and Technology 58, 2047–2054. Wiley publisher Abstract This article statistically analyses how the citation impact of articles deposited in the Condensed Matter section of the preprint server ArXiv (hosted by Cornell University), and subsequently published in a scientific journal, compares to that of articles in the same journal that were not deposited in that archive. Its principal aim is to further illustrate and roughly estimate the effect of two factors, ‘early view’ and ‘quality bias’, upon differences in citation impact between these two sets of papers, using citation data from Thomson Scientific’s Web of Science. It presents estimates for a number of journals in the field of condensed matter physics. In order to discriminate between an ‘open access’ effect and an early view effect, longitudinal citation data was analyzed covering a time period as long as 7 years. Quality bias was measured by calculating ArXiv citation impact differentials at the level of individual authors publishing in a journal, taking into account co-authorship. The analysis provided evidence of a strong quality bias and early view effect. Correcting for these effects, there is, in a sample of 6 condensed matter physics journals studied in detail, no sign of a general ‘open access advantage’ of papers deposited in ArXiv. The study does provide evidence that ArXiv accelerates citation, due to the fact that ArXiv makes papers available earlier rather than that it makes papers freely available. Moed, H.F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics 4, 265–277. Elsevier
Abstract This paper explores a new indicator of journal citation impact, denoted as source normalized impact per paper (SNIP). It measures a journal’s contextual citation impact, taking into account characteristics of its properly defined subject field, especially the frequency at which authors cite other papers in their reference lists, the rapidity of maturing of citation impact, and the extent to which a database used for the assessment covers the field’s literature. It further develops Eugene Garfield’s notions of a field’s ‘citation potential’ defined as the average length of references lists in a field and determining the probability of being cited, and the need in fair performance assessments to correct for differences between subject fields. A journal’s subject field is defined as the set of papers citing that journal. SNIP is defined as the ratio of the journal’s citation count per paper and the citation potential in its subject field. It aims to allow direct comparison of sources in different subject fields. Citation potential is shown to vary not only between journal subject categories—groupings of journals sharing a research field—or disciplines (e.g., journals in mathematics, engineering and social sciences tend to have lower values than titles in life sciences), but also between journals within the same subject category. For instance, basic journals tend to show higher citation potentials than applied or clinical journals, and journals covering emerging topics higher than periodicals in classical subjects or more general journals. SNIP corrects for such differences. Its strengths and limitations are critically discussed, and suggestions are made for further research. All empirical results are derived from Elsevier’s Scopus. Moed, H.F. (2016). Comprehensive indicator comparisons intelligible to nonexperts: the case of two SNIP versions. Scientometrics, 106 (1), 51-65. Springer Abstract A framework is proposed for comparing different types of bibliometric indicators, introducing the notion of an Indicator Comparison Report. It provides a comprehensive overview of the main differences and similarities of indicators. The comparison shows both the strong points and the limitations of each of the indicators at stake, rather than over-promoting one indicator and ignoring the benefits of alternative constructs. It focuses on base notions, assumptions, and application contexts, which makes it more intelligible to non-experts. As an illustration, a comparison report is presented for the original and the modified SNIP (Source Normalized Impact per Paper) indicator of journal citation impact.
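The SNIP definition summarized above reduces, in its simplest reading, to a ratio of a journal’s citations per paper to the citation potential of its citing field. The sketch below renders only that simplified ratio and ignores the refinements of the published indicator (database coverage, document types, relative normalization); all figures and function names are illustrative.

    # Schematic SNIP-like calculation: raw impact per paper divided by the
    # citation potential of the journal's subject field, where the field is
    # operationalised (as in the paper) as the set of papers citing the journal.

    def citations_per_paper(citations_received, papers_published):
        """Raw impact per paper of the journal."""
        return citations_received / papers_published

    def citation_potential(reference_list_lengths):
        """Mean length of the reference lists of the papers citing the journal."""
        return sum(reference_list_lengths) / len(reference_list_lengths)

    def snip_like(citations_received, papers_published, reference_list_lengths):
        rip = citations_per_paper(citations_received, papers_published)
        return rip / citation_potential(reference_list_lengths)

    # Toy example: a journal cited 450 times on 150 papers; the same raw impact
    # scores twice as high in a citing field with half as long reference lists.
    print(round(snip_like(450, 150, [30] * 100), 3))   # ~0.1
    print(round(snip_like(450, 150, [15] * 100), 3))   # ~0.2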
Indicators of research performance in science, social science and humanities
The development of appropriate quantitative research assessment methodologies in the various domains of science and scholarship and at various organizational levels has been my core activity during the first two decades. Bibliometric indicators were applied to research groups, departments, institutions, and countries. Moed, H.F., Burger, W.J.M.1, Frankfort, J.G.2, van Raan, A.F.J.3 (1985). The Use of Bibliometric Data for the Measurement of University Research Performance. Research Policy 14, 131–149. Elsevier 1, 2, 3
Leiden University, Leiden, Netherlands
Abstract In this paper we present the results of a study on the potentialities of “bibliometric” (publication and citation) data as tools for university research policy. In this study bibliometric indicators were calculated for all research groups in the Faculty of Medicine and the Faculty of Mathematics and Natural Sciences at the University of Leiden. Bibliometric results were discussed with a number of researchers from the two faculties involved. Our main conclusion is that the use of bibliometric data for evaluation purposes carries a number of problems, both with respect to data collection and handling, and with respect to the interpretation of bibliometric results. However, most of these problems can be overcome. When used properly, bibliometric indicators can provide a “monitoring device” for university research-management and science policy. They enable research policy-makers to ask relevant questions of researchers on their scientific performance, in order to find explanations of the bibliometric results in terms of factors relevant to policy. Moed, H.F., de Bruin, R.E.1, van Leeuwen, Th.N.2 (1995). New bibliometric tools for the assessment of national research performance: database description, overview of indicators and first applications. Scientometrics 33, 381–422. Springer 1, 2 Leiden University, Leiden, Netherlands Abstract This paper gives an outline of a new bibliometric database based upon all articles published by authors from the Netherlands and processed during the time period 1980–1993 by the Institute for Scientific Information (ISI) for the Science Citation Index (SCI), Social Science Citation Index (SSCI) and Arts & Humanities Citation Index (A&HCI). The paper describes various types of information added to the database: data on articles citing the Dutch publications; detailed citation data on ISI journals and subfields; and a classification system of the main publishing organizations appearing in the addresses. Moreover, an overview is given of the types of bibliometric indicators that were constructed. Their relationship to indicators developed
by other researchers in the field is discussed. Finally, two applications are given in order to illustrate the potentials of the database and of the bibliometric indicators derived from it. The first represents a synthesis of ‘classical’ macro indicator studies on the one hand, and bibliometric analyses of research groups or institutes on the other. The second application gives for the first time a detailed analysis of a country’s publication output per institutional sector. Moed, H.F., Hesselink, F.Th.1 (1996). The publication output and impact of academic chemistry research in the Netherlands during the 1980s: bibliometric analyses and policy implications. Research Policy 25, 819–836. Elsevier 1 Stichting SON/NWO, Den Haag, The Netherlands Abstract The primary aim of this paper is to assess the contribution to the international literature of Spanish scientific production in the research stream of innovation and technology management. For this purpose 72 articles published in the last decade in the most prestigious international journals in this research stream have been evaluated. From this analysis we have concluded that there has been a positive evolution from 1995 to the present time, as much from a qualitative as from a quantitative point of view. Likewise, we have found that research in this research stream is concentrated fundamentally on a reduced group of universities. Nevertheless, these do not focus exclusively on one or a few research subjects, but on a wide range thereof. Van den Berghe, H.1, Houben, J.A.2, de Bruin, R.E.3, Moed, H.F., Kint, A.4, Luwel, M.5, Spruyt, E.H.J.6 (1998). Bibliometric indicators of university research performance in Flanders. Journal of the American Society for Information Science (JASIS) 49, 59–67. Wiley publisher 1, 2 KU Leuven, Leuven, Belgium 3 Leiden University, Leiden, Netherlands 4, 5 University of Ghent, Ghent, Belgium 6 University of Antwerp, Antwerpen, Belgium Abstract During the past few years, bibliometric studies were conducted on research performance at three Flemish universities: The University of Ghent, the Catholic University of Leuven, and the University of Antwerp. Longitudinal analyses of research input, publication output, and impact covering a time span of 12 years were made of hundreds of research departments. This article outlines the general methodology used during these studies and presents the main outcomes with respect to the faculties of medicine, science, and pharmaceutical science at the three universities involved. It focuses on the reactions of the researchers working in these faculties and of the university evaluation authorities on the studies. Moed, H.F. (2002). Measuring China’s research performance using the Science Citation Index. Scientometrics 53, 281–296. Springer
Abstract This contribution focuses on the application of bibliometric techniques to research activities in China, based on data extracted from the Science Citation Index (SCI) and related Citation Indexes, produced by the Institute for Scientific Information (ISI). The main conclusion is that bibliometric analyses based on the ISI databases in principle provide useful and valid indicators of the international position of Chinese research activities, provided that these analyses deal properly with the relatively large number of national Chinese journals covered by the ISI indexes. It is argued that it is important to distinguish between a national and an international point of view. In order to assess the Chinese research activities from a national perspective, it is appropriate to use scientific literature databases with a good coverage of Chinese periodicals, such as the Chinese Science Citation Database (CSCD), produced at the Chinese Academy of Sciences. Assessment of the position of Chinese research from an international perspective should be based on the ISI databases, but it is suggested to exclude national Chinese journals from this analysis. In addition it is proposed to compute an indicator of international publication activity, defined as the percentage of articles in journals processed for the ISI indexes, with the national Chinese journals being removed, relative to the total number of articles published either in national Chinese or in other journals, regardless of whether these journals are processed for the ISI indexes or not. This indicator can only be calculated by properly combining CSCD and ISI indexes. Moed, H.F., Nederhof, A.J.1, Luwel, M.2 (2002). Towards research performance in the humanities. Library Trends (Special Issue on Current Theory in Library and Information Science) 50, 498–520. JHU Press 1 Leiden University, Leiden, Netherlands 2 University of Ghent, Ghent, Belgium Abstract This paper describes a general methodology for developing bibliometric performance indicators. Such a description provides a framework or paradigm for application-oriented research in the field of evaluative quantitative science and technology studies, particularly in the humanities and social sciences. It is based on our study of scholarly output in the field of Law at the four major universities in Flanders, the Dutch-speaking part of Belgium. The study illustrates that bibliometrics is much more than conducting citation analyses based on the indexes produced by the Institute for Scientific Information (ISI), since citation data do not play a role in the study. Interaction with scholars in the fields under consideration and openness in the presentation of the quantitative outcomes are the basic features of the methodology. Bibliometrics should be used as an instrument to create a mirror. While not a direct reflection, this study provides a thorough analysis of how scholars in the humanities and social sciences structure their activities and their research output. This structure can be examined empirically from the point of view of its consistency and the degree of consensus among scholars. Relevant issues can be raised that are worth considering in more detail in follow-up studies, and conclusions from our empirical materials
may illuminate such issues. We argue that the principal aim of the development and application of bibliometric indicators is to stimulate a debate among scholars in the field under investigation on the nature of scholarly quality, its principal dimensions, and operationalizations.
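Returning to the indicator of international publication activity proposed in the China study above, the underlying arithmetic is a simple share: articles in ISI-processed journals, after removal of the national Chinese journals, divided by all articles published in national Chinese or other journals. A minimal sketch with invented figures:

    # Hypothetical counts for one institution in one year (all figures invented).
    articles_in_isi_journals = 800           # articles in ISI-processed journals
    articles_in_national_chinese_isi = 300   # of which in national Chinese journals covered by ISI
    articles_total = 2000                    # all articles, whether ISI-covered or not

    international_activity = (
        (articles_in_isi_journals - articles_in_national_chinese_isi) / articles_total
    )
    print(f"International publication activity: {international_activity:.0%}")  # 25%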
Theoretical understanding and proper use of bibliometric indicators
This topic comprises articles discussing and proposing theories about what citations and other bibliometric indicators measure. Moreover, it includes reflexive articles addressing the issue as to what are appropriate ways to use these indicators in research assessment processes. Moed, H.F. (2000). Bibliometric indicators reflect publication and management strategies. Scientometrics 47, 323–346. Springer Abstract In a bibliometric study of nine research departments in the field of biotechnology and molecular biology, indicators of research capacity, output and productivity were calculated, taking into account the researchers’ participation in scientific collaboration as expressed in co-publications. In a quantitative approach, rankings of departments based on a number of different research performance indicators were compared with one another. The results were discussed with members from all nine departments involved. Two publication strategies were identified, denoted as a quantity of publication and a quality of publication strategy, and two strategies with respect to scientific collaboration were outlined, one focusing on multi-lateral and a second on bi-lateral collaborations. Our findings suggest that rankings of departments may be influenced by specific publication and management strategies, which in turn may depend upon the phase of development of the departments or their personnel structure. As a consequence, differences in rankings cannot be interpreted merely in terms of quality or significance of research. It is suggested that the problem of assigning papers resulting from multi-lateral collaboration to the contributing research groups has not yet been solved properly, and that more research is needed into the influence of a department’s state of development and personnel structure upon the values of bibliometric indicators. A possible implication at the science policy level is that different requirements should hold for departments of different age or personnel structure. Moed, H.F., Garfield, E.1 (2004). In basic science the percentage of ‘authoritative’ references decreases as bibliographies become shorter. Scientometrics 60, 295–303. Springer 1 Institute for Scientific Information, Philadelphia, USA
Abstract The empirical question addressed in this contribution is: How does the relative frequency at which authors in a research field cite ‘authoritative’ documents in the reference lists in their papers vary with the number of references such papers contain? ‘Authoritative’ documents are defined as those that are among the ten percent most frequently cited items in a research field. It is assumed that authors who write papers with relatively short reference lists are more selective in what they cite than authors who compile long reference lists. Thus, by comparing in a research field the fraction of references of a particular type in short reference lists to that in longer lists, one can obtain an indication of the importance of that type. Our analysis suggests that in basic science fields such as physics or molecular biology the percentage of ‘authoritative’ references decreases as bibliographies become shorter. In other words, when basic scientists are selective in referencing behavior, references to ‘authoritative’ documents are dropped more readily than other types. The implications of this empirical finding for the debate on normative versus constructive citation theories are discussed. Moed, H.F. (2005). Towards a theory of citations: Some building blocks. In: Citation Analysis in Research Evaluation. Dordrecht (Netherlands): Springer. ISBN 1-4020-3713-9, Chap. 16, 209–220.
Introduction
It is essential that methodologies and indicators applied in policy studies of scholarly activity and performance are properly tested and theoretically founded. Obviously, analysts of scholarly performance should not employ methodological practices that they would condemn as inadequate in the work of those scholars under evaluation (e.g., Hull, 1998). In Sect. 15.2 it was argued that quantitative science and technology studies is a multi-disciplinary field, and that even within a discipline fundamentally distinct paradigms were developed. If quantitative science studies is a multi-disciplinary research field, the quest for a comprehensive theory of citation can be conceived as the fairly difficult task of transforming a multi-disciplinary activity into an interdisciplinary one. Participants in the ‘theory of citation’ debate do not always properly recognize this fundamental problem. The existence of distinct paradigms within a single discipline makes this even more difficult. The development of science indicators in a scholarly multi-disciplinary context does not necessarily result in a broad consensus among its practitioners upon what such indicators reflect and how they are properly used in a policy context. Generally, the social sciences often embrace schools of thought, each with its own fundamental assumptions and principles. This is particularly true in the sociology of science. This condition has important consequences for the debate on ‘citation theories’ aimed at providing a framework for interpretation of citation-based indicators. Not
infrequently, the quest for a citation theory seems to assume that it would be feasible to develop one ‘single’—comprehensive or ‘grand’—theoretical framework shared by all practitioners, thus at the same time settling all disputes among the various schools. But it is invalid to assume that a theoretical foundation is sound only when there is a strict consensus among practitioners involved and that, whenever various, competing theoretical positions exist, it follows that there is no theoretical foundation at all. Wouters’ proposal of a reflexive indicator theory is fruitful, as it does not assume the primacy of any existing citation theory, but rather creates a theoretical openness by proposing to further develop a framework in which each approach eventually finds its proper place. Below a number of observations and comments follow that can be conceived as contributions to Wouters’ project of a reflexive indicator theory. Although they aim at contributing to a deeper understanding of referencing practices and what citations measure, they do not claim to develop a full, encompassing theory. They focus upon the validity of citation analysis in research evaluation, i.e. the extent to which citation counts indicate aspects such as ‘importance’ or ‘influence’ of scientific achievements. As a background, Sect. 16.2 presents basic quantitative characteristics of reference lists in research articles. It is argued that reference lists have a limited length and that authors have to be selective in including cited documents. It is shown that reference lists are unique in the sense that very few papers have identical lists, but that at the same time they contain more commonly used cited references. Hence, there is a large variability in citation counts among individual papers, and the distribution of citations amongst papers in any field is skewed. The crucial issue at stake is which factors account for this skewness, and how these are related to research performance. Section 16.3 introduces a distinction between a ‘citation analytical’ and a ‘citationist’, and between a constructive and a constructivist viewpoint of what citations measure. It is argued that both a citation analytical and a constructive viewpoint are valuable approaches. However, a citationist and constructivist viewpoint represent extreme positions that tend to have a negative influence upon the quest for a scholarly foundation of the use of citation analysis in research evaluation. Section 16.4 presents a critical discussion of the views of the various scholars outlined in Chap. 15. It is concluded that citation analysis applied in an evaluative context does not aim at capturing motives of individuals, but rather their consequences at an aggregate level. It embodies a fundamental shift in perspective from that of the psychology of individual citers towards what scientists jointly express sociologically in their referencing behavior about the structures and performances of scholarly activity. On the other hand, it is emphasized that using large data samples does not necessarily rule out all sorts of biases. Section 16.5 broadens the viewpoint often adopted in library and information science of research articles as separate ‘entities’, by incorporating relevant notions from a sociological perspective. It conceives papers as elements from coherent publication ensembles of research groups carrying out a research programme.
It is hypothesized that citing authors acknowledging a research group’s work do not distribute their citations evenly among all papers emerging from its programme, but rather cite
particular papers that have become symbols or ‘flags’ of such a programme. This tendency accounts for a part of the skewness observed in citation distributions of individual papers. But on the whole, some groups or programmes are more frequently cited than others. In order to further develop a theoretical perspective upon reference behavior—and also the hypothesis of the existence of ‘flag’ papers mentioned above—a crucial challenge is to account for the increasing importance of reference lists as content descriptors in the scholarly information system, and for the increasing role of citations in research evaluation practices. Section 16.6 proposes conceiving a reference list as a distinct part of a research paper with proper functions related to the use of references bibliographically in citation indexing and bibliometrically in research evaluation in the broadest sense. It is hypothesized that citing authors tend to ensure that important research groups and their programmes are represented in the reference lists of their papers. Including works in a reference list can still be interpreted in terms of cognitive influence, but its expression in the citing text may be vague or implicit. Chapter 9 underlined differences in referencing practices of authors from science fields (including the natural and life sciences) on the one hand, and those from the social sciences, and particularly the humanities, on the other. The reflections presented in this chapter primarily relate to science, or, more generally, to subfields with a fairly quantitative substantive content and strongly developed international social and communication networks. The extent to which the various observations made below are also valid for other domains of scholarship is an issue that needs further study.
Reference lists are selective and contain both unique and more commonly used cited references
Reference lists have limited length. The average length of reference lists varies among disciplines and types of source paper, but it is plausible to assume that authors must be selective when they compile their reference lists. A reference list should not be viewed as a complete list of influences exercised upon the work described in the citing paper; this notion can also be found in the work of Small (1987), Zuckerman (1987) and van Raan (1998). Several journals actually specify an indicative or a maximum number of references. A cited work may generate influence through other papers citing that work. Authors may therefore refer to some of the papers citing that particular work rather than explicitly referring to the work itself. Thus, intermediary publications may serve as “cognitive conduits” (Zuckerman, 1987). Other works may be generally conceived as so crucial and firmly incorporated into the current state of a field that authors do not feel the need to cite them explicitly. This phenomenon is termed by Zuckerman “obliteration by incorporation”. A reference list is generally unique, in the sense that hardly any papers with references have identical lists. From an analysis of source papers included in the 2001 SCI, it emerges that almost 91% contain at least one reference that is cited in the particular source paper only. Evidently, this percentage increases with increasing length of reference lists. In fact, for source papers with 20 references, being the mode of the distribution of number of references among source papers, 94% of source
papers contain at least one unique reference, and for papers with 40 references this rises to 96%. The ‘particularistic’ aspect of referencing highlighted by Cronin is thus clearly reflected in citing authors’ reference lists. The unique references relate to sources that, in the year that they are cited, do not have a citation impact upon other papers, but that may nevertheless constitute an important basis of the work described in the citing paper. A reference list thus contains a certain fraction of unique references, but at the same time there is also a considerable amount of similarity among reference lists. A reference list normally contains a portion of references to documents that are cited in other reference lists as well. This is precisely the profile that one would expect to find in papers making original contributions to a common cause, the advancement of scientific knowledge. In the total collection of 2001 SCI source papers, the 10% most frequently cited papers account for 33% of all citations. The latter percentage varies across research disciplines and is 26% in engineering and 39% in physics and astronomy (Moed and Garfield, 2004). It was found that 93% of all source papers in a year contain at least one reference to a document included among the ten percent most frequently cited items in that year. For source papers with 20 and 40 references, this percentage is 98.4 and 99.7%, respectively. Hence, there is a large variability in citation counts among individual papers, and the distribution of citations amongst papers in any field is skewed. Which factors account for this skewness, and how are these related to research performance?
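The skewness figures quoted above (the 10% most frequently cited papers receiving between 26% and 39% of all citations, depending on the discipline) correspond to a simple top-decile share. The sketch below computes that share for an arbitrary list of citation counts; the counts shown are invented and do not reproduce the SCI results.

    # Share of all citations received by the 10% most frequently cited papers.
    def top_decile_share(citation_counts):
        ranked = sorted(citation_counts, reverse=True)
        top_n = max(1, len(ranked) // 10)        # size of the top decile
        return sum(ranked[:top_n]) / sum(ranked)

    counts = [120, 60, 30, 15, 10, 8, 5, 3, 2, 2, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
    print(f"{top_decile_share(counts):.0%} of citations go to the top 10% of papers")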
Extreme Positions Are Not Useful in the Debate on Citation Theories
It is useful to make a distinction between a social constructive and a constructivist view on referencing behavior, and between a citation analytical and a citationist viewpoint. These distinctions are crucial in any attempt to relate the various existing indicator theories with one another. A social constructive view of referencing behavior analyses the social conditions and interactions involved in the publication process. It does not negate that a cited paper has a reality of its own, or an identity that also exists outside the world of the citing author, but its primary interest lies in analyzing how it may be influenced by the social environment in which it is produced. A constructivist view denies such a proper identity and claims that a cited paper is merely what the citing author makes of it. In other words, it assumes that a constructive approach is the only valid one. Inasmuch as many authors cite the same paper, the citations are merely an aggregate of a wide variety of individual motives and special circumstances. There is essentially no aspect that the citations have in common, because the motives and circumstances producing them were different, and therefore there is no rationale for counting them, or for attempting to understand what properties of the cited paper are reflected in the counts.
A second relevant distinction is that of a citation-analytical and a citationist approach. The first assumes that—under certain conditions—citation analysis may provide valid indications of the significance of a document. Such indications may be denoted as objective in the sense that they reflect properties of the cited document itself, and that they are replicable, and based on the practices and perceptions of large numbers of (citing) scientists rather than on those of a single individual scientist. A citationist view holds that citations are the only valid measures of research quality, and that it is merely their quantitative character and the magnitude of the data files from which they are drawn that makes them objective, even to the extent that no further theoretical foundation is needed to justify their application. According to this view, it would be extremely difficult if not impossible to provide such a foundation, as any potential empirical evidence would tend to be ‘subjective’ and can therefore hardly have implications for the status of the objective tool. Perhaps the most extreme position is expressed in the circular argument that ‘citations measure quality because quality is what citations measure’. The author of this book does acknowledge the potential usefulness of citation-based indicators and of the social constructive approach, but he rejects both a constructivist and a citationist viewpoint. Although none of the authors discussed in Chap. 15 adopts such an extreme, constructivist or citationist viewpoint in the debate on what citations measure, positions of scholars sometimes tend to be criticized as if they are extreme in the sense outlined above, and this tendency may hamper theoretical progress. The extreme theoretical positions have their correlates in the ways citation analysis can be applied in research evaluation and policy. A citationist view would justify if not stimulate a rigid, formulaic use of bibliometric indicators as if these are the only valid measures of research performance, whereas a constructivist view would reject them by qualifying them as totally irrelevant constructs.
Comments on the Views of Scholars Discussed in Chap. 15
The micro-sociological school analyses, from the point of view of an individual author, how particular motives or circumstances influence or regulate the selection of cited references. However, it often seems to disregard what the citing authors’ selection of references expresses as regards the way they conceive the outside world, particularly the research front at which they operate. Scientists do not merely cite papers because the cited contents fit into the logical structure of an argument, but also because the cited paper or its authors have, in their perception, earned a certain status during the past and can substantiate or add credibility to statements or claims made in a paper. A cited paper can be a strong weapon in persuading colleagues only if it has a certain significance. The relevance of taking into account what citing authors express as regards the ‘outside world’ can be further underlined by confronting Wouters’ claim that “the citation is the product of the indexer” with the notion of concept symbols developed by Small. The latter focused on what is common in reference practices. He combined
reference and citation analysis rather than separating them. Although a highly cited reference is embedded in a number of different citing texts, these texts have some elements in common. They use the reference in a similar way. The reference has an ‘identity’ of its own, and is not merely a construct of the citing authors, even if it appears to be a split identity, in the sense that different networks of researchers may establish distinct symbolic applications of a particular cited work (Cozzens, 1982). In this sense, the citation is not merely a product of the citation indexer as Wouters seems to suggest, but also of the scientist. Conformity in reference patterns provides a basis for aggregating articles containing the same reference, and hence for counting—or more generally analyzing—citations to a particular document. The distinction made by Zuckerman between motives and consequences of referencing behavior is particularly useful in this context. The author of this book agrees with Zuckerman’s reply to Gilbert that, even if a citing author intends to persuade, the reference may express intellectual influence. Authoritative papers tend to be authoritative because of their influence upon practitioners in a field, reflected in their high citation rates. Cozzens suggested that the reward, rhetoric and communication system each contribute a certain portion of the variance in citation counts, and that, in order to use citations as measures of reward, these portions should be separated from one another. But although some rhetoric or communication factors can thus be accounted for—for instance in so-called ‘normalized’ citation indicators discussed in Chaps. 4 and 5 of this book—it is questionable whether the reward and the rhetoric system can be fully separated, since citations reflect both aspects at the same time. It is a matter of distinct theoretical perspectives, each with its own validity, rather than a matter of separate factors in a variance analysis. Leydesdorff and Amsterdamska (1990) made a similar argument, by underlining the “inherently multidimensional character of citation”. Cronin argued that one should concentrate on the ‘personal’, ‘motivational’ content of citations, and on micro-sociological conditions of their creation and application. Leydesdorff and Amsterdamska (1990) rightly argued that analyzing scientists’ motives for citing through interviews and questionnaires on the one hand and studying the role of the cited reference in the argumentation structure in the citing text on the other, represent two analytically distinct levels of analysis. Their empirical research revealed that motives or perceptions of citing authors do not directly correspond to the rhetorical function of cited documents in the citing text. This outcome underlines once more the relevance of the distinction between citing authors’ motives and their consequences referred to above. White’s idea of co-citation maps as aerial views measuring a historical consensus as to important authors and works is based on the notion of references as “acknowledgements”, and thus adopts the ‘normative’ view on referencing behavior. He assumed that in the analysis of large data files, individual “vagaries” in referencing behavior cancel out. However, enlargement of data samples tends to neutralize random errors, but not necessarily systematic errors or biases.
Following White’s metaphor of the aerial view, one may ask whether the methodology generating a proper aerial view of a town also provides sufficiently detailed and valid information to describe and ‘evaluate’ an individual living in the town. This issue is particularly relevant in the use of citation indicators in research evaluation of individual entities such as
authors, research groups or institutions. Regarding Van Raan’s “thermodynamic” model describing large ensembles of citers analogously to ensembles of molecules, it must be noted that according to the thermodynamic model, molecules obey the laws of mechanics. One may therefore ask what the ‘general laws’ are that underlie the referencing behavior of authors. Van Raan apparently assumes that references essentially reflect influences of the cited works upon the citing paper, regardless of whether the referenced works are “modal” or not. The author of this book agrees with Zuckerman that, on the one hand, the presence of error does not preclude the possibility of precise measurement and that the net effect of certain sorts of error can be measured, but that on the other hand the crucial issue is whether errors are randomly distributed among all subgroups of scientists or whether they systematically affect certain subgroups (Zuckerman, 1987, p. 331). Thus, it cannot a priori be assumed that any deviations from the norm cancel out when data samples are sufficiently large. Martin and Irvine clearly expressed this insight in their methodological work. Their method of multiple converging partial indicators involves a quest for biases in any of the indicators used, but they also noted that convergence itself does not guarantee that the outcome is free of bias (Martin and Irvine, 1983, p. 87). To the extent that the micro-sociological approach adopts a constructivist viewpoint, the author of this book agrees with Wouters’ claim that in the quest for an encompassing citation theory, the micro-sociological studies of reference behavior are a “dead end”. However, he would not agree with the claim that studies constructing the reference merely contribute to a reference theory and not to citation theory. Reference and citation theories, although analytically distinct, should not be separated from one another. A satisfactory theory of citation should be grounded in a notion of what scientists tend to express in their referencing practices. In the next two sections, two notions are described that could be conceived as building blocks in such a theory.
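The point that large samples neutralize random error but not systematic bias can be made concrete with a small, generic simulation (not drawn from any of the works discussed): averaging many noisy observations shrinks the random component, while a constant bias survives in full.

    # Averaging removes random noise but leaves a systematic bias intact.
    import random

    random.seed(1)
    true_value = 10.0
    bias = 2.0           # systematic error affecting every observation

    def mean_of_noisy_sample(n):
        obs = [true_value + bias + random.gauss(0, 5) for _ in range(n)]
        return sum(obs) / n

    for n in (10, 1000, 100000):
        print(n, round(mean_of_noisy_sample(n), 2))
    # The estimates converge on 12.0 (true value + bias), not on 10.0.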
Research Articles Are Elements of Publication Ensembles of Research Groups Carrying Out a Research Programme
One source of variation or skewness in the distribution of citations among cited papers emerges from the notion that research articles should be conceived as elements of a publication ensemble of a collection of scientists who are working in a particular institutional environment—a research group—and who pursue a scientific or technological goal or mission—a research programme. An academic research group normally consists of research students working on their PhD theses, supervised by senior scientists or by post-doctoral students. Normally there is one group leader. Research groups may have a more permanent character and consist of scientists working together for a period of years. But they may also be formed on a temporary basis to carry out a specific task or project and be dissolved when their mission is
accomplished. An individual scientist may even participate in more than one research group at the same time. The term ‘research programme’ has a heavy burden philosophically but is used here as a term from daily scientific practice. In operational terms its core is contained in the few slides a group leader would show in a presentation introducing the work of his or her group. It includes a mission statement, the principal lines of research, the main achievements, the names of the principal investigators, and the main funding sources. To the outside world of colleagues in the field, the programme and the group are closely connected. A programme may be symbolized by the names of the principal investigators, and vice versa. Both the programme and the group thus have a cognitive and a social interpretation. A research group produces results, published in scientific papers. It is hypothesized that a group’s papers can be subdivided roughly into two types, denoted as ‘bricks’ and ‘flags’. Bricks contain elementary, or more-or-less ‘normal’ contributions, and can be distinguished from flag papers, presenting either overviews of the research programme carried out by the group—mostly in review papers—or the few research articles describing the very significant progress made by members of the group. Both types of papers are essential elements of the output of the group’s programme. There are no flags without bricks, and in principle no bricks without flags either. Review articles may be born, so to say, as flag papers. Other articles, however, may present outcomes that appear to be so significant that they become flags of the programme from which they emerged. Flag articles are symbols for a range of studies conducted in the framework of a research programme. In other words, authors who need to refer to a research programme, its general principles and main outcomes, tend to cite that programme’s flag papers. By citing a flag paper, they implicitly cite many, if not all, related brick papers. Considering highly cited articles as flags or symbols of the research programmes of research groups, rather than as ‘concept symbols’ as suggested by Small, may account for the phenomenon of ‘split citation identity’ observed by Cozzens (1982). A research programme may embody several concepts, and authors referring to it do not necessarily use one and the same concept. In addition, papers may start as significant brick papers, initially cited because of particular results, and transform in a later phase, when their high significance is generally acknowledged, into flag papers. During its lifetime a paper may therefore represent different concept symbols. From the point of view of citation impact, the relationship between flag articles and brick papers is complex. On the one hand, flag papers in a sense lure citations away from brick articles. The principle of cumulative advantage is at stake here: the more a paper is cited, the more colleagues tend to see it as a flag paper, and the more citations it subsequently attracts. On the other hand, however, flag papers increase the visibility of the programme as a whole, and hence of the brick papers without which they would never have become flags at all. The citation distribution of a research group’s articles is thus essentially skewed. Disregarding the effect of age, a typical distribution of citations among a group’s articles reveals a limited number of highly cited papers, and a much larger share of uncited or moderately cited papers. This pattern can be found both
for leading groups making key contributions to their field and for less prominent groups. The existence of flag papers, however, is not the only factor accounting for the observed skewness in citation distributions. Leading groups tend to have higher citation rates to their flag papers and relatively lower shares of uncited brick papers than less prominent ones.
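The cumulative-advantage mechanism invoked above can be illustrated with a minimal preferential-attachment simulation; this is only a generic rendering of the principle, not a model proposed in the chapter. Each new citation is assigned to an existing paper with probability proportional to the citations it has already received plus a small constant, and the resulting distribution is strongly skewed.

    # Minimal cumulative-advantage (preferential attachment) simulation.
    import random

    random.seed(42)
    n_papers, n_citations = 100, 2000
    counts = [0] * n_papers

    for _ in range(n_citations):
        # Weight each paper by its current citations plus 1, so uncited papers
        # can still be picked; highly cited papers attract further citations.
        weights = [c + 1 for c in counts]
        chosen = random.choices(range(n_papers), weights=weights, k=1)[0]
        counts[chosen] += 1

    counts.sort(reverse=True)
    top10 = sum(counts[:10]) / sum(counts)
    print(f"Top 10% of papers collect {top10:.0%} of the citations")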
A Reference List Constitutes a Distinct Part of a Paper with Proper Functions
Reference lists in a sense have a ‘life of their own’: they can be viewed, evaluated, and analyzed to some extent separately from the text in which they were made. This does not mean that references have no function in the rhetorical structure of a scientific paper. References are attached to specific points in the text. Thus, a rhetorical viewpoint on references is appropriate and fruitful. But references are also elements of a reference list, which can be conceived as a distinct part of a text with proper functions. One may distinguish between two functions. The first relates to the use of references as document content descriptors. For instance, in order to obtain an impression of its contents, potential readers of a full paper tend to browse not only through its title and abstract, but also through its list of references. A second function relates to the increasing awareness of citing authors that references, when converted into citations, may play a role in the broad domain of research evaluation. This domain not only comprises the process of peer review of submitted manuscripts, but also the use of citation-based indicators in research performance assessment. In practice it is difficult, if not impossible, to distinguish this function from that related to the role of references as content descriptors. Both functions influence a paper’s reference list and both are reinforced by the increasing use of citation indexes, particularly those produced by the Institute for Scientific Information, for bibliographic and bibliometric purposes. From this perspective a reference list marks a paper’s ‘socio-cognitive location’, reflected in the special mix between unique and common references. In this way citing authors tend to ensure that important works, scientists or groups are represented in their reference lists. Including works in a reference list can still be interpreted in terms of cognitive influence, but its expression in the citing text may be vague or implicit. This hypothesis explains why in citation context analyses relatively large proportions of references were qualified as ‘perfunctory’ (Moravcsik and Murugesan, 1975; Hooten, 1991), ‘providing a background’ (e.g., Oppenheim and Renn, 1978), or ‘setting the stage’ (Peritz, 1983; Cano, 1989). From the perspective of rhetorical analysis of citing texts one may conclude that such references have little information utility to the authors of citing papers. But from the perspective of the use of citation analysis in research evaluation, it is not the information utility within the citing text
that is of primary relevance, but rather the extent to which works or groups are cited in references ‘setting the stage’. In any field there are leading groups active at the forefront of scientific development. Their leading position is both cognitively and socially anchored. Cognitively, their important contributions tend to be highlighted in a state-of-the-art of a field. But to the extent that the science system functions well in stimulating and warranting scientific quality, leading groups, and particularly their senior researchers, tend at the same time to acquire powerful social positions, as institute directors, journal editors, conference organizers, peer committee members or government advisers. Since leading groups tend to be represented more frequently in scientific articles’ reference lists than less prominent groups, their publication ensembles, and particularly their flag papers, tend to be more frequently cited. Thus, citations can be interpreted as manifestations of intellectual influence, even though such influence may not directly be traced from the citing texts. They can be viewed as instances of citing authors’ socio-cognitive location that reflect their awareness of what are the important groups or programmes that must be included in their reference lists.
Cited References
Cano, V. (1989). Citation behavior: classification, utility and location. Journal of the American Society for Information Science, 40, 248–290.
Cozzens, S.E. (1982). Split citation identity: A case-study in economics. Journal of the American Society for Information Science, 33, 233–236.
Hooten, P.A. (1991). Frequency and functional use of cited documents in information science. Journal of the American Society for Information Science, 42, 397–404.
Hull, D.L. (1998). Studying the study of science scientifically. Perspectives on Science, 6, 209–231.
Leydesdorff, L., and Amsterdamska, O. (1990). Dimensions of citation analysis. Science, Technology and Human Values, 15, 305–335.
Martin, B.R., and Irvine, J. (1983). Assessing basic research: some partial indicators of scientific progress in radio astronomy. Research Policy, 12, 61–90.
Oppenheim, C., and Renn, S.P. (1978). Highly cited old papers and the reasons why they continue to be cited. Journal of the American Society for Information Science, 29, 227–231.
Peritz, B.C. (1983). A classification of citation roles for the social sciences and related fields. Scientometrics, 5, 303–312.
Small, H. (1987). The significance of bibliographic references. Scientometrics, 12, 339–342.
Van Raan, A.F.J. (1998). In matters of quantitative studies of science the fault of theorists is offering too little and asking too much. Scientometrics, 43, 129–139.
Zuckerman, H. (1987). Citation analysis and the complex problem of intellectual influence. Scientometrics, 12, 329–338.
Moed, H.F. (2008). UK Research Assessment Exercises: Informed Judgments on Research Quality or Quantity? Scientometrics 74, 141–149. Springer
Abstract A longitudinal analysis of UK science covering almost 20 years revealed in the years prior to a Research Assessment Exercise (RAE 1992, 1996 and 2001) three distinct bibliometric patterns that can be interpreted in terms of scientists’ responses to the principal evaluation criteria applied in a RAE. When in the RAE 1992 total publication counts were requested, UK scientists substantially increased their article production. When a shift in evaluation criteria in the RAE 1996 was announced from ‘quantity’ to ‘quality’, UK authors gradually increased their number of papers in journals with a relatively high citation impact. And during 1997–2000, institutions raised their number of active research staff by stimulating their staff members to collaborate more intensively, or at least to co-author more intensively, although their joint paper productivity did not increase. This finding suggests that, along the way towards the RAE 2001, evaluated units in a sense shifted back from ‘quality’ to ‘quantity’. The analysis also observed a slight upward trend in overall UK citation impact, corroborating conclusions from an earlier study. The implications of the findings for the use of citation analysis in the RAE are briefly discussed. Moed, H.F., Halevi, G.1 (2015). Multidimensional Assessment of Scholarly Research Impact. Journal of the American Society for Information Science and Technology, 66, 1988–2002. Wiley publisher 1 Elsevier, New York, USA Abstract This article introduces the Multidimensional Research Assessment Matrix of scientific output. Its base notion holds that the choice of metrics to be applied in a research assessment process depends upon the unit of assessment, the research dimension to be assessed, and the purposes and policy context of the assessment. An indicator may be highly useful within one assessment process, but less so in another. For instance, publication counts are useful tools to help discriminate between those staff members who are research active, and those who are not, but are of little value if active scientists are to be compared with one another according to their research performance. This paper gives a systematic account of the potential usefulness and limitations of a set of 10 important metrics including altmetrics, applied at the level of individual articles, individual researchers, research groups and institutions. It presents a typology of research impact dimensions, and indicates which metrics are the most appropriate to measure each dimension. It introduces the concept of a “meta-analysis” of the units under assessment in which metrics are not used as tools to evaluate individual units, but to reach policy inferences regarding the objectives and general set-up of an assessment process.
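One way to read the matrix metaphor is as a lookup from a unit of assessment and an impact dimension to the metrics considered appropriate for that combination. The sketch below is purely illustrative; the dimensions and metric names are placeholders rather than the paper’s actual typology.

    # Illustrative lookup structure for a multidimensional assessment matrix:
    # (unit of assessment, impact dimension) -> candidate metrics.
    ASSESSMENT_MATRIX = {
        ("individual researcher", "research activity"): ["publication counts"],
        ("individual researcher", "scientific impact"): ["citations per paper"],
        ("research group", "scientific impact"): ["field-normalized citation rate"],
        ("institution", "societal impact"): ["altmetrics", "media mentions"],
    }

    def candidate_metrics(unit, dimension):
        """Return the metrics suggested for a unit/dimension pair, if any."""
        return ASSESSMENT_MATRIX.get((unit, dimension), [])

    print(candidate_metrics("individual researcher", "research activity"))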
Usage-based metrics and altmetrics
Nowadays publication- and citation-based indicators of research performance are often denoted as ‘classical’, and new, alternative types of indicators are being proposed and explored. Two articles listed below relate to ‘usage’ indicators, based on the number of times full-text articles are downloaded from publishers’ publication archives. A third article discusses the potential of so-called altmetrics, especially those that reflect the use of social media.
Moed, H.F. (2005). Statistical relationships between downloads and citations at the level of individual documents within a single journal. Journal of the American Society for Information Science and Technology 56, 1088–1097. Wiley publisher Abstract Statistical relationships between downloads from ScienceDirect of documents in Elsevier’s electronic journal Tetrahedron Letters and citations to these documents recorded in journals processed by the Institute for Scientific Information/Thomson Scientific for the Science Citation Index (SCI) are examined. A synchronous approach revealed that downloads and citations show different patterns of obsolescence of the used materials. The former can be adequately described by a model consisting of the sum of two negative exponential functions, representing an ephemeral and a residual factor, whereas the decline phase of the latter conforms to a simple exponential function with a decay constant statistically similar to that of the downloads residual factor. A diachronous approach showed that, as a cohort of documents grows older, its download distribution becomes more and more skewed, and more statistically similar to its citation distribution. A method is proposed to estimate the effect of citations upon downloads using obsolescence patterns. It was found that during the first 3 months after an article is cited, its number of downloads increased by 25% compared to what one would expect this number to be if the article had not been cited. Moreover, more downloads of citing documents led to more downloads of the cited article through the citation. An analysis of 1,190 papers in the journal during a time interval of 2 years after publication date revealed that there is about one citation for every 100 downloads. A Spearman rank correlation coefficient of 0.22 was found between the number of times an article was downloaded and its citation rate recorded in the SCI. When initial downloads—defined as downloads made during the first 3 months after publication—were discarded, the correlation rose to 0.35. However, both outcomes measure the joint effect of downloads upon citation and that of citation upon downloads. Correlating initial downloads to later citation counts, the correlation coefficient drops to 0.11. Findings suggest that initial downloads and citations relate to distinct phases in the process of collecting and processing relevant scientific information that eventually leads to the publication of a journal article. Moed, H.F. (2016). Altmetrics as traces of the computerization of the research process. In: C.R. Sugimoto (ed.), Theories of Informetrics and Scholarly
Moed, H.F. (2016). Altmetrics as traces of the computerization of the research process. In: C.R. Sugimoto (ed.), Theories of Informetrics and Scholarly Communication (A Festschrift in honor of Blaise Cronin). ISBN 978-3-11-029803-1, 360–371. Berlin/Boston: Walter de Gruyter.

Abstract I propose a broad, multi-dimensional conception of altmetrics, namely as traces of the computerization of the research process. Computerization should be conceived in its broadest sense, including all recent developments in ICT and software taking place in society as a whole. I distinguish four aspects of the research process: the collection of research data and development of research methods; scientific information processing; communication and organization; and, last but not least, research assessment. I will argue that in each aspect computerization plays a key role, and that metrics are being developed to describe this process. I propose to label the total collection of such metrics as Altmetrics. I seek to provide a theoretical foundation of altmetrics, based on notions developed by Michael Nielsen in his monograph Reinventing Discovery: The New Era of Networked Science. Altmetrics can be conceived as tools for the practical realization of the ethos of science and scholarship in a computerized or digital age.
Introduction

In the Altmetrics Manifesto published on the Web in October 2010, the concept of "altmetrics" is introduced as follows:

In growing numbers, scholars are moving their everyday work to the web. Online reference managers Zotero and Mendeley each claim to store over 40 million articles (making them substantially larger than PubMed); as many as a third of scholars are on Twitter, and a growing number tend scholarly blogs. These new forms reflect and transmit scholarly impact: that dog-eared (but uncited) article that used to live on a shelf now lives in Mendeley, CiteULike, or Zotero, where we can see and count it. That hallway conversation about a recent finding has moved to blogs and social networks; now, we can listen in. The local genomics dataset has moved to an online repository; now, we can track it. This diverse group of activities forms a composite trace of impact far richer than any available before. We call the elements of this trace altmetrics (Priem et al., 2010).

Online reference managers, social networking tools, scholarly blogs, and online repositories are highlighted as technological inventions, and their use by the scientific community or even the wider public leaves traces of impact of scientific activity. A leading commercial provider of such data, Altmetric.com, distinguishes four types of altmetric data sources (Altmetric.com, 2014):
• Social media such as Twitter and Facebook, covering social activity;
• Reference managers or reader libraries such as Mendeley or ResearchGate, covering scholarly activity;
• Various forms of scholarly blogs, reflecting scholarly commentary;
• Mass media coverage, for instance, daily newspapers or news broadcasting services, informing the general public.

I distinguish three drivers of the development of the field of altmetrics. Firstly, in the policy or political domain, there is an increasing awareness of the multidimensionality of research performance, and an increasing emphasis on societal merit, an overview of which can be found in Moed and Halevi (2015a). A typical example of this awareness is the ACUMEN project (Academic Careers Understood through Measurement and Norms) funded by the European Commission, aimed at "studying and proposing alternative and broader ways of measuring the productivity and performance of individual researchers" (Bar-Ilan, 2014). The reader is referred to Bar-Ilan (2014) for an overview of this project and the role of altmetrics therein. In the domain of technology, a second driver is the development of information and communication technologies (ICTs), especially websites and software designed to support and foster social interaction. The technological inventions mentioned in the Altmetrics Manifesto are typical examples of this development. It seems appropriate to link the Altmetrics Manifesto to the notion of a "computerization movement". Elliott and Kraemer (2009) define a computerization movement as "… a type of movement that focuses on computer-based systems as the core technologies which their advocates claim will be instruments to bring about a new social order. These advocates of computerization movements spread their message through public discourse in various segments of society such as vendors, media, academics, visionaries, and professional societies" (p. 3). A further positioning of the altmetrics ideas as a computerization movement falls outside the scope of this chapter, even though there is a vast amount of literature on computerization movements, of which Elliott and Kraemer give an overview. I am inclined to conceive the Altmetrics Manifesto as a proclamation of a computerization movement, but a very special one, appealing to basic ideals of science and scholarship. What is important in this chapter is to characterize the type of ideals that inspires the altmetrics movement. I believe they can best be associated with a third driver, primarily emerging from the scientific community itself, namely the Open Science movement. Open Science is conceived as: The movement to make scientific research, data and dissemination accessible to all levels of an inquiring society, amateur or professional. It encompasses practices such as publishing open research, campaigning for open access, encouraging scientists to practice open notebook science, and generally making it easier to publish and communicate scientific knowledge. ("Open Science", n.d.). The increasing importance of altmetrics is also reflected in the foundation of the NISO Altmetrics Standards Project. The National Information Standards Organization (NISO) is a United States nonprofit organization that develops, maintains and publishes technical standards related to publishing, bibliographic and library applications. Funded by the Alfred P. Sloan Foundation, NISO established a project to identify standards and/or best practices related to altmetrics, as an important step towards the development and adoption of new assessment metrics. The NISO Project Group published a White Paper in June 2014 (NISO, 2014). In the NISO Project
mentioned above, but also in altmetrics sessions of scientific conferences, altmetrics is increasingly linked to, and often limited to, social media references and to research performance assessment. Empirical studies of altmetrics have focused nearly exclusively on these as well. In Sect. 2 I will propose a much broader, multidimensional conception of altmetrics, namely as traces of the computerization of the research process. "Computerization" should be conceived in its broadest sense, including all recent developments in ICT and software taking place in society as a whole. I distinguish four aspects of the research process: the collection of research data and development of research methods; scientific information processing; communication and organization; and, last but not least, research assessment. I will argue that in each aspect computerization plays a key role, and that metrics are being developed to describe this process. I propose to label the total collection of such metrics as "Altmetrics". In Sect. 3 I seek to provide a theoretical foundation of altmetrics, based on notions developed by Michael Nielsen in his monograph Reinventing Discovery: The New Era of Networked Science (Nielsen, 2011). To the extent that altmetrics are used as research assessment tools, Sect. 4 underlines a series of basic theoretical distinctions, which are not only valid in the case of "classical" metrics such as those based on citation analysis, but also, and perhaps even more so, in the case of new metrics such as those based on social media references or electronic document usage patterns. These are as follows: the distinction between scientific-scholarly and societal impact; scientific opinion and scientific fact; peer-reviewed versus non-peer-reviewed manuscripts; immediate and delayed response or impact; intended and unintended consequences of particular behaviors; and, lastly, a distinction between the various domains of science and scholarship, for instance, between the natural, technical, formal, biological and medical sciences, the social sciences and the humanities. I conclude that altmetrics can provide tools not only to reflect this process passively, but, even more so, to design, monitor, improve, and actively facilitate it. From this perspective, altmetrics can be conceived as tools for the practical realization of the ethos of science and scholarship in a computerized or digital age.
The Computerization of the Research Process

I distinguish four aspects of the research process. In this section I briefly explain these aspects by giving typical outcomes of metrics-based studies of them. The purpose of these examples is to illustrate an aspect rather than give a detailed account of it. Firstly, at the level of everyday research practice, there is the collection of research data and the development of research methods. A "classical" citation analysis in Scopus of articles published during 2002–2012 and cited up until March 2014 generated, per discipline, a list of the most frequently cited articles. A subject classification of journals into 26 research disciplines was used. It was found that in many disciplines computing-related articles are the most heavily cited (Halevi, 2014).
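As a minimal sketch of the kind of per-discipline ranking described above, the following hypothetical example selects the most frequently cited record per discipline. The data frame, field names and logic are illustrative assumptions, not Scopus fields or the study's actual procedure; the citation counts are taken from Table 1 below.

```python
# Hypothetical illustration: select the most frequently cited article per discipline.
# Records and field names are invented for the example, not taken from Scopus.
import pandas as pd

records = pd.DataFrame([
    {"discipline": "Chemistry", "title": "UCSF Chimera", "citations": 5325},
    {"discipline": "Chemistry", "title": "Some other paper", "citations": 1200},
    {"discipline": "Energy", "title": "Geant4 developments and applications", "citations": 1335},
    {"discipline": "Materials Science", "title": "The SIESTA method", "citations": 4404},
])

# For each discipline, keep the row with the highest citation count.
top_per_discipline = records.loc[records.groupby("discipline")["citations"].idxmax()]
print(top_per_discipline.sort_values("citations", ascending=False))
```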
Table 1 Computer science-related top cited articles in Scopus

# Cites | Discipline | Article Title
17,171 | Agr & Biol Sci, Mol Biol; Medicine | MEGA4: Molecular Evolutionary Genetics Analysis (MEGA) software version 4.0 (2007)
4,335 | Social Sciences; Business, Management | User acceptance of information technology: Toward a unified view (2003)
5,325 | Chemistry | UCSF Chimera - A visualization system for exploratory research and analysis (2004)
15,191 | Computer Sci; Eng | Distinctive image features from scale-invariant keypoints (2004)
1,335 | Energy | Geant4 developments and applications (2006) [software for simulating the passage of particles through matter]
7,784 | Engineering; Math | A fast and elitist multi-objective genetic algorithm: NSGA-II (2002)
4,026 | Environm Sci | GENALEX 6: Genetic analysis in Excel. Population genetic software… (2006)
4,404 | Materials Science | The SIESTA method for ab initio order-N materials simulation (2002)
10,921 | Physics & Astron | Coot: Model-building tools for molecular graphics (2004)
Table 1 presents nine such articles. The term "computing-related" is used in a broad sense. Most articles describe software packages for data analysis, digital imaging, and simulation techniques. Interestingly, the most frequently cited article in the social sciences is about user acceptance of information technology.

The second aspect relates to scientific information processing. There is a long history of research in the field of information science on information-seeking behavior; since this behavior occurs increasingly online, a digital trace of it can be identified. A topic of rapidly increasing importance is the study of the searching, browsing and reading behavior of researchers, based on an analysis of the electronic log files recording the usage of publication archives such as Elsevier's ScienceDirect or an Open Access archive such as arxiv.org. Comparison of citation counts and full-text downloads of research articles may provide more insight into both citation practices and usage behavior (Kurtz et al., 2005; Kurtz & Bollen, 2010; Gorraiz, Gumpenberger, & Schlögl, 2013; Guerrero-Bote & Moya-Anegón, 2014). Table 2 summarizes the main sources of differences between these two types of counts (Moed & Halevi, 2015b). Usage and citation leaks, bulk downloading, differences between reader and author populations in a subject field, the type of document or its content, differences in obsolescence patterns between downloads and citations, and different functions of reading and citing in the research process all provide possible explanations of differences between download and citation distributions.
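By way of illustration of such a download/citation comparison, the sketch below computes a Spearman rank correlation between per-article download and citation counts. The numbers are invented for the example and are not taken from the studies cited above.

```python
# Hypothetical illustration of correlating per-article downloads and citations.
# The counts are invented; real studies use recorded log files and citation indexes.
from scipy.stats import spearmanr

# (downloads, citations) per article over a fixed time window.
articles = [
    (1200, 14), (300, 2), (950, 9), (80, 0), (2100, 31),
    (460, 3), (700, 5), (150, 1), (1800, 22), (520, 6),
]
downloads = [d for d, _ in articles]
citations = [c for _, c in articles]

rho, p_value = spearmanr(downloads, citations)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.3f})")
# Note the difference in scale: downloads here are roughly two orders of
# magnitude higher than citations, as reported for ScienceDirect journals.
```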
Table 2 Ten important factors differentiating between downloads and citations
1. Usage leak: not all downloads may be recorded
2. Citation leak: not all citations may be recorded
3. Downloading the full text of a document does not mean that it is read
4. The user (reader) and the author (citer) population may not coincide
5. The distribution of download counts is less skewed than that of citation counts, and depends differently upon the type of document
6. Downloads and citations show different obsolescence functions
7. Downloads and citations measure distinct concepts
8. Downloads and citations may influence one another in multiple ways
9. Download counts are more sensitive to manipulation
10. Citations are public, usage is private
Communication and organization constitute a third group of aspects. These two elements are distinct, from an altmetric point of view, to the extent that the first takes place via blogs, Twitter and similar social media, whereas the second occurs, for instance, in scholarly tools such as Mendeley or Zotero. In this paper, the two aspects will be discussed jointly. The analysis of the use of online tools such as social media, reference managers and scientific blogs perhaps constitutes the core of studies of the computerization in this domain. Many altmetric studies cover this aspect. In a recent special altmetrics issue of the journal Research Trends, Thelwall gives an historical overview of the study of social web services using altmetrics, focusing on Mendeley and Twitter (Thelwall, 2014). He underlines the need to further validate altmetrics by investigating the degree to which they correlate with or predict citation counts and other traditional measures. In the same issue, Shema presents an additional, state-of-the-art altmetric data source: scholarly blogs (Shema, 2014). The studies focusing on this aspect aim to deepen our understanding of the ways in which researchers communicate and organize themselves, and of how the new technologies not only influence communication and organization but could also improve these processes.

The use of altmetrics, or metrics in general, in research assessment is a fourth aspect of the computerization of the research process. Mentions of authors and their publications in social media like Twitter, in scholarly blogs and in reference managers form the basis of the exploration of new impact measures. In his historical overview, Thelwall concludes that "altmetrics [also] have the potential to be used for impact indicators for individual researchers based upon their web presences, although this information should not be used as a primary source of impact information since the extent to which academics possess or exploit social web profiles is variable" and that, "more widely, however, altmetrics should not be used to help evaluate academics for anything important because of the ease with which they can be manipulated" (Thelwall, 2014). Moed and Halevi (2015a) underline that indicators that are appropriate in one context may be invalid or useless in another. The decision as to which indicators should be used in a particular assessment depends upon (a) what units have to be assessed; (b) which aspect of research performance is being assessed; and (c) what constitutes the overall objective of the assessment. The authors introduce the notion of a "meta-analysis" of the units under assessment, in which metrics are not used
as tools to evaluate individual units, but rather to reach policy inferences regarding the objectives and general set-up of an assessment process. For instance, publication counts and average journal impact factors of a group's publications are hardly useful in a relative assessment of research-active groups with a strong participation in international networks, but they may be very useful in a context in which there is solid evidence that a substantial number of groups is hardly research-active or publishes mainly in national journals (Moed & Halevi, 2015a).
A Theoretical Foundation: Michael Nielsen's "Reinventing Discovery"

Fully capturing the notion of the ethos of science and scholarship and tracing back its history requires a full essay, the presentation of which reaches far beyond the scope of the current chapter and also exceeds the competency of its author. Perhaps it is appropriate to refer to Francis Bacon and his proposal "for an universal reform of knowledge into scientific methodology and the improvement of mankind's state using the scientific method" ("Francis Bacon", n.d.). It must be noted that Bacon is generally conceived of as the founder of the positive, empirical sciences. But the ethos I seek to capture does not merely relate to this type of science, but to science and scholarship in general, including for instance hermeneutic scholarship. In any case, Bacon's proposal develops two base notions, namely that science can be used to improve the state of mankind, and that it is governed by a strict scientific-scholarly methodology. Both dimensions, the practical and the theoretical-methodological, are essential in his idea. A key issue nowadays is how the ethos of science and scholarship, admittedly outlined so vaguely above, must be realized in the modern, computerized, or digital age. The state of development of information and communication technologies (ICTs) creates enormous possibilities for the organization of the research process, as well as for society as a whole. I believe that it is against this background that the emergence and potential of altmetrics should be considered. Michael Nielsen's (2011) monograph presents a systematic, creative exploration of the actual and potential value of the new ICT for the organization of the research process. The aim of the remaining part of this section is to summarize some of the main features of his thinking. I believe it provides an adequate framework in which altmetrics can be positioned and further developed, without claiming that alternative frameworks are of no value. In building up his ideas, Nielsen borrows concepts from several disciplines, and uses them as building blocks or models. A central thesis is that online tools can and should be used in science to amplify collective intelligence. Collective intelligence results from an appropriate organization of collaborative projects. In order to further explain this, he uses the concept of 'diversity', borrowed perhaps from biology or its subbranch ecology, but in the sense of cognitive diversity, as he states: "To amplify cognitive intelligence, we should scale
up collaborations, increasing cognitive diversity and the range of available expertise as much as possible" (Nielsen, 2011, p. 32). As each participant can give only a limited amount of attention in a collaboration, there are inherent limits to the size of the contributions that participants can make. At this point the genuine challenge of the new online tools comes into the picture: they should create an "architecture of attention", in my view one of the most intriguing notions in Nielsen's work, "that directs each participant's attention where it is best suited, i.e., where they have maximal competitive advantage" (Nielsen, 2011, p. 32). In the ideal case, scientific collaboration will achieve what he terms "designed serendipity", so that a problem posed by someone who cannot solve it finds its way to someone with the right micro-expertise. Using a concept stemming from statistical physics, namely critical mass, he further explains that "conversational critical mass is achieved, and the collaboration becomes self-stimulating, with new ideas constantly being explored" (Nielsen, 2011, p. 33). One of the ways to optimize the collaboration is by modularizing it. Here Nielsen adopts open source software development as a model. Actually, he speaks of open source collaboration, in which participants work in a modular way, make small contributions, and can easily reuse earlier work. And, last but not least, this type of collaboration uses signaling mechanisms (e.g., scores, or metrics) to help people decide where to direct attention. He also uses the concept of a "data web", defined as "a linked web of data that connects all parts of knowledge" and "an online network intended to be read by machines". He underlines that data-driven intelligence is controlled by human intelligence and amplifies collective intelligence. Nielsen highlights the potential of the new online tools to stimulate interaction and even collaboration between professional researchers and the wider public, and the role this public can play, for instance, in data collection processes using crowdsourcing techniques.

My proposal is to use Michael Nielsen's set of creative ideas as a framework in which altmetrics can be positioned. Their role would not merely be that of rather passive descriptors; they would serve actively, or even proactively, as tools to establish and optimize Nielsen's "architecture of attention", a configuration that combines the efforts of researchers and technicians on the one hand, and the wider public and the policy domain on the other. I will further discuss this issue in Sect. 5. In the next section I will highlight a series of distinctions that are crucial when discussing the potential and limits of altmetrics in the assessment of research performance.
Useful Distinctions To further explore the potential and limitations of altmetrics, I believe it is useful to highlight a series of distinctions that are often made in the context of the use of “classical” metrics and publishing, but that are in my view most relevant in connection with altmetrics as well.
First of all, a most relevant distinction is that between scientific-scholarly and societal merit and impact. These two aspects do not coincide. In Sect. 3, speaking of the ethos of science, two dimensions were highlighted, a practical and a theoretical-methodological one: science potentially improves the state of mankind, and is governed by strict scientific-scholarly methodology. I defend the position that these methodological rules are essential to the scientific method. These rules are constitutive for science and scholarship and discriminate between what is a justified scientific-scholarly knowledge claim and what is not. Societal merit of scientific-scholarly research is in my view a legitimate and valuable aspect, not only in connection with the motives and strivings of individual researchers, but also in relation to funding and assessment criteria. But it cannot be assessed in a politically neutral manner. To be successful, the project proposed by Bacon and so many others requires a certain distance and independence from the political domain, and most of all, a strong, continuous defense of proper methodological rules when making knowledge claims and examining their validity.

A next distinction is perhaps even more difficult to make, namely between scientific opinion and scientific fact or result. In journal publishing, many journals distinguish between research articles on the one hand, and opinion pieces, discussion papers, or editorials on the other. At least in the empirical sciences, the first type ideally reports on the outcomes of empirical research conducted along valid methodological lines, and discusses their theoretical implications. The second type is more informal, normally not peer-reviewed, and speculative. The two types have, from an epistemological point of view, a different status. I believe it is crucial to keep this in mind when exploring the role of altmetric data sources containing scholarly commentaries, such as scientific-scholarly blogs. At this point, it is also important to distinguish between speculations or opinion pieces related to scientific-scholarly issues, and those primarily connected with political issues. I believe that it is in the interest of the ethos of science to be especially alert to a practice in which researchers make political statements using their authority as scientific-scholarly experts. Such practices should be rigorously unmasked whenever they are detected.

Intended versus unintended consequences of particular behavior is a next distinction. During the past ten years or so, the general debate on the application of "classical" metrics based on publications and citations, especially their large-scale use in national research assessment exercises, strongly focused on the effects that the actual use of such metrics has upon researchers, and on the degree of manipulability of the metrics. These were among the main topics of the discussions on the organization of national research assessment exercises in the UK and in Australia. The least that can be said is that this debate is equally relevant as regards the use of altmetrics based on social media. But, as indicated in Sect. 2, Thelwall warns that the problem of manipulability is much larger in the case of altmetrics than it is in the application of citation indices (Thelwall, 2014). Finally, it is also crucial to distinguish the various domains of science and scholarship, for instance, the natural, technical, formal, biological, medical and social sciences, and the humanities.
Although such subject classifications suffer from a certain degree of arbitrariness, it is important to realize that
the research process, including communication practices, reference practices, and the orientation towards social media, may differ significantly from one discipline to another. In this context one of the limitations of the model Michael Nielsen proposes in his monograph Reinventing Discovery should be highlighted: the use of open source software development as a model of collaboration may fit the domain of the formal sciences rather well, but may be less appropriate in many subject fields in the humanities and social sciences. In other passages in his monograph he shows awareness that this organizational model may not be appropriate in all domains of science and scholarship.
Concluding Remarks

What then are the main conclusions of this chapter? I propose a broad conception of altmetrics. Altmetrics is more than the measurement of social media attention to scientific-scholarly artifacts; it should be conceived as metrics of the computerization of the research process in general. I propose the set of ideas developed by Michael Nielsen as a framework within which altmetrics can be positioned and further explored. His work represents a thorough, systematic account of the potential of online tools in the research process, and, in this way, articulates the practical realization of the ethos of science and scholarship in the computerized or digital age. He shows how the new online tools support open science, a notion that is in my view one of the pillars, perhaps even the most important one, of the Altmetrics Manifesto.

Many proponents of altmetrics may, either as a first impression or after reflection, not be so happy with my proposal. After all, the demarcation between altmetrics and "classical" metrics is rather vague. Citation indexes are also the product of ICT development, albeit from an earlier phase than the current one. Moreover, citation indices are even used to illustrate the computerization of the research process. Therefore, in a sense, classical metrics are altmetrics as well. Both classical metrics and altmetrics are subject to the same danger, namely, that their utility is limited to a few very specific cases, and both types of metrics have in principle the same potential. In the same way that classical citation metrics are often uniquely linked to the use of journal impact factors for assessing individual researchers (although so many other citation-based metrics and methodologies have been developed, applied at different aggregation levels and with different purposes), altmetrics perhaps runs the danger of being too closely linked with the notion of assessing individuals by counting mentions in Twitter and related social media, a practice that may provide a richer impression of impact than citation counts do, but that clearly has its limitations as well (e.g., Cronin, 2014). Altmetrics and science metrics, or indicators in general, are much more than that. Apart from the fact that much more sophisticated indicators are available than journal impact factors or Twitter counts, these indicators do not have a function merely in the evaluation of the research performance of individuals and groups, but also in the study
of the research process. In this way, in terms of a distinction developed in Geisler (2000), these indicators are used as process indicators rather than outcome measures. Also, like science metrics in general, altmetrics does not merely provide reflections of the computerization of the research process, but can, in fact, develop into a set of tools to further shape, facilitate, design, and conduct this process.

Cited References

Altmetric.com (2014). www.altmetric.com.
Bar-Ilan, J. (2014). Evaluating the individual researcher – adding an altmetric perspective. Research Trends, issue 37 (Special issue on altmetrics, June). http://www.researchtrends.com/issue-37-June-2014/evaluating-the-individual-researcher/
Cronin, B. (2014). Meta Life. Journal of the American Society for Information Science and Technology, 65(3), 431–432.
Elliott, M.S. & Kraemer, K.L. (2009). Computerization Movements and the Diffusion of Technological Innovations. In M. Elliott & K. Kraemer (Eds.), Computerization Movements and Technology Diffusion: From Mainframes to Ubiquitous Computing (pp. 3–41). Medford, New Jersey: Information Today, Inc.
Francis Bacon. (n.d.) In Wikipedia. Retrieved August 25, 2014 from http://en.wikipedia.org/wiki/Francis_Bacon
Geisler, E. (2000). The Metrics of Science and Technology. Westport, CT, USA: Greenwood Publishing Group.
Gorraiz, J., Gumpenberger, C., & Schlögl, C. (2013). Differences and similarities in usage versus citation behaviours observed for five subject areas. In Proceedings of the 14th ISSI Conference, Vol. 1, 519–535. http://www.issi2013.org/Images/ISSI_Proceedings_Volume_I.pdf
Guerrero-Bote, V.P., & Moya-Anegón, F. (2014). Relationship between Downloads and Citations at Journal and Paper Levels, and the Influence of Language. Scientometrics (in press).
Halevi, G. (2014). 10 years of research impact: top cited papers in Scopus 2001–2011. Research Trends, issue 38 (Sep 2014). http://www.researchtrends.com/, to be published.
Kurtz, M.J., Eichhorn, G., Accomazzi, A., Grant, C., Demleitner, M., Murray, S.S., Martimbeau, N., & Elwell, B. (2005). The bibliometric properties of article readership information. Journal of the American Society for Information Science and Technology, 56, 111–128.
Kurtz, M.J., & Bollen, J. (2010). Usage Bibliometrics. Annual Review of Information Science and Technology, 44, 3–64.
Moed, H.F. & Halevi, G. (2015a). The Multidimensional Assessment of Scholarly Research Impact. Journal of the Association for Information Science and Technology, to be published.
Moed, H.F. & Halevi, G. (2015b). On full text download and citation distributions in scientific-scholarly journals. Journal of the Association for Information Science and Technology, to be published.
Nielsen, M. (2011). Reinventing Discovery: The New Era of Networked Science. Princeton University Press.
NISO (National Information Standards Organization) (2014). NISO Altmetrics Standards Project White Paper. Available at http://www.niso.org/apps/group_public/download.php/13295/niso_altmetrics_white_paper_draft_v4.pdf
Open Science. (n.d.) In Wikipedia. Retrieved August 22, 2014 from http://en.wikipedia.org/wiki/Open_science
Priem, J., Taraborelli, D., Groth, P. & Neylon, C. (2010). Altmetrics: A Manifesto. Available at: http://altmetrics.org/manifesto/
Shema, H., Bar-Ilan, J. & Thelwall, M. (2014). Scholarly blogs are a promising altmetric source. Research Trends, issue 37 (Special issue on altmetrics, June). http://www.researchtrends.com/issue-37-June-2014/scholarly-blogs-are-a-promising-altmetric-source/
Sugimoto, C. (2014). Private communication.
Thelwall, M. (2014). A brief history of altmetrics. Research Trends, issue 37 (Special issue on altmetrics, June). Available at http://www.researchtrends.com/issue-37-June-2014/a-brief-history-of-altmetrics/

Moed, H.F., Halevi, G.1 (2016). On full text download and citation distributions in scientific-scholarly journals. Journal of the American Society for Information Science and Technology, 67, 412–431. Author copy available at https://arxiv.org/pdf/1510.05129.pdf. Wiley publisher
1 Elsevier, New York, USA

Abstract A statistical analysis of full text downloads of articles in Elsevier's ScienceDirect covering all disciplines reveals large differences in download frequencies, their skewness, and their correlation with Scopus-based citation counts, between disciplines, journals, and document types. Download counts tend to be two orders of magnitude higher and less skewedly distributed than citations. A mathematical model based on the sum of two exponentials does not adequately capture monthly download counts. The degree of correlation at the article level within a journal is similar to that at the journal level in the discipline covered by that journal, suggesting that the differences between journals are to a large extent discipline-specific. Despite the fact that download and citation counts per article correlate positively in all journals studied, there may be little overlap between the set of articles at the top of the citation distribution and the set of most frequently downloaded articles. Usage and citation leaks, bulk downloading, differences between reader and author populations in a subject field, the type of document or its content, differences in obsolescence patterns between downloads and citations, and different functions of reading and citing in the research process all provide possible explanations of differences between download and citation distributions.
International collaboration and migration
Scientific collaboration and migration are important phenomena that can be properly studied with bibliometric-informetric methods. Three articles are listed below: two on collaboration and one on migration.
Moed, H.F. (2005). Does international scientific collaboration pay? In: Citation Analysis in Research Evaluation. Dordrecht (Netherlands): Springer. ISBN 1-4020-3713-9, 285–290, Chapter 23.
Introduction

The benefits of international scientific collaboration are heavily debated among scientists and science policy makers and constitute an important research topic in the field of quantitative science and technology studies. Funding agencies such as the European Commission stimulate collaboration within the European Union by applying it as a funding criterion. A bibliometric analysis of papers included in the Science Citation Index and related Citation Indexes published by the Institute for Scientific Information (ISI, currently Thomson Scientific) revealed that the share of internationally co-authored (IC) papers increased steadily during the past few decades, and reached a level of 16% at the end of the 1990s. It varied among research fields and was highest in mathematics, geosciences and physics & astronomy (above 20%), and lowest in clinical medicine (about 10%). Papers can be categorised according to the number of countries involved in the collaboration. About 85% of IC papers have authors from two countries and reflect bi-lateral international collaboration (BIC). The remaining 15% reflect multi-lateral international collaboration (MIC) involving authors from 3 or more countries. Various bibliometric studies reported that for specific scientific fields and countries internationally co-authored papers tend to have higher citation rates than those published by authors from a single country. But these studies were rightly cautious in generalising their outcomes and interpreting them in terms of causality (e.g., Narin et al., 1991; Glänzel, 2001; for a review the reader is referred to Glänzel and Schubert, 2004). This chapter further examines how the citation impact of internationally co-authored papers relates to that of other papers. It aims at providing a global, comprehensive analysis, focusing on papers covering the natural and life sciences and resulting from bi-lateral international collaboration.

Data, methods and results

A citation analysis compared the citation rate of BIC and MIC papers to that of 'purely domestic' papers, i.e., papers published by authors from a single country and hence not resulting from international collaboration (NIC). Publications analysed
were published during 1996–2000 and citations were counted according to a fixed citation window of 4 years, i.e., during the first four years after the publication date, including the publication year. In order to avoid possible biases due to the fact that multi-authored papers may receive more author self-citations than single-author papers do, citations in which the citing and cited articles have at least one author in common were excluded from the counts. In all science fields, the citation rate of BIC papers exceeds that of NIC articles, while the average citation impact of MIC papers exceeds that of BIC papers. Table 23.1 shows that the mean citation impact of BIC papers divided by that of NIC papers is 1.24. It is lowest in chemistry (1.08) and highest in clinical medicine (1.62). For all science fields aggregated, the mean citation impact ratio of MIC compared to NIC articles is 1.64. For articles with authors from at least 10 different countries (MIC 10+) this ratio is 3.23. Thus, in all fields internationally co-authored papers have on average higher citation rates than papers with authors from a single country.
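The counting rules described above (a fixed four-year citation window that includes the publication year, and the exclusion of author self-citations) can be sketched as follows. The record structure and names below are hypothetical, introduced only for illustration; they are not the study's actual data format.

```python
# Hypothetical sketch of the counting rules described above: a fixed 4-year
# citation window (publication year included) and exclusion of author self-citations.
def counted_citations(cited_paper, citing_papers, window=4):
    """Count citations received within `window` years of publication,
    excluding citations that share at least one author with the cited paper."""
    start = cited_paper["year"]
    end = start + window - 1          # the window includes the publication year
    cited_authors = set(cited_paper["authors"])
    count = 0
    for citing in citing_papers:
        if not (start <= citing["year"] <= end):
            continue                  # outside the fixed citation window
        if cited_authors & set(citing["authors"]):
            continue                  # author self-citation: an author in common
        count += 1
    return count

# Example: a paper from 1997 cited by three later papers.
paper = {"year": 1997, "authors": ["Smith", "Tanaka"]}
citing = [
    {"year": 1998, "authors": ["Lee"]},          # counted
    {"year": 1999, "authors": ["Smith", "Wu"]},  # excluded: self-citation
    {"year": 2002, "authors": ["Garcia"]},       # excluded: outside window
]
print(counted_citations(paper, citing))  # -> 1
```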
But from Table 23.1 it does not follow that international collaboration is a principal factor responsible for this pattern. Countries performing well at the international research front can be expected both to generate more citation impact and to collaborate more intensively than less well-performing countries, and may hence be over-represented in the set of internationally co-authored papers. A more detailed analysis focused on bi-lateral international collaboration and assumed that a country's performance can be validly measured by the citation impact of its purely domestic (NIC) articles. Table 23.2 presents, for the 20 major countries in terms of their number of purely domestic papers, the distribution of NIC and BIC papers in science fields as a function of the citation impact of the publishing country. This group of countries contains both scientifically established and emerging countries, and accounts for almost 90% of the global NIC publication output. It was hypothesised that the order of the countries in a BIC pair is significant. The first country in ISI's corporate address field is normally that of the first or reprint author. Since first or reprint authorship in many fields tends to be attributed to an author (or his or her research group) who made the largest contribution to the work described in the paper, it can be assumed that the first country tends to play a more important role in the collaboration than the second.
Data are extracted from the Science Citation Index (SCI) and related Citation Indexes published by the Institute for Scientific Information (ISI, currently Thomson ISI). Publications analysed were published during 1996–2000 and citations were counted according to a fixed citation window of 4 years. Author self-citations were not included. Results relate to bi-lateral international collaboration (BIC) among the 20 countries with the highest number of papers with domestic authors only (NIC). The total numbers of NIC and BIC papers are about 2,400,000 and 290,000, respectively. Countries were categorised according to whether they belong to the upper or to the lower half of a ranking of countries by descending average citation impact of their NIC papers. For instance, citation impact class High–Low indicates papers resulting from bi-lateral collaborations in which the first country is in the top 50% and the second country in the bottom 50% of the ranking. This categorisation into high- and low-impact countries was made by research field, and thus took into account differences in citation practices among research fields. In each research field, the average citation impact of BIC papers published by any pair of countries was evaluated by comparing it to that of NIC papers from those countries in the following two ways; a schematic sketch of both categorizations is given after the list.

1. A first approach categorised each pair according to whether the BIC citation rate is lower than the lowest NIC rate among the two contributing countries, lies between the lowest and highest NIC rate, or is higher than the highest NIC rate. In the first case, none of the two countries profits from the collaboration; in the second case the one with the lowest NIC rate profits, whereas the one with the highest does not; finally, in the third case both countries raise their citation impact compared to that of their purely domestic papers.
2. A second approach determined whether the rate for BIC papers is below or above the mean citation rate of NIC papers from the two countries involved in the collaboration. If one conceives this mean as an expected value for the citation impact of a pair's BIC papers, one can evaluate the extent to which the collaboration has produced additional value in terms of a citation impact increase compared to this a priori expectation.
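A minimal sketch of these two categorizations, applied to a single pair of collaborating countries, is given below; the citation rates in the example are invented for illustration and do not come from the study.

```python
# Hypothetical sketch of the two categorizations described above, for one
# bi-lateral collaboration pair; the citation rates are invented.
def categorize_pair(bic_rate, nic_rate_first, nic_rate_second):
    nic_min, nic_max = sorted((nic_rate_first, nic_rate_second))
    nic_mean = (nic_min + nic_max) / 2

    # Approach 1: position of the BIC rate relative to the two NIC rates.
    if bic_rate < nic_min:
        approach_1 = "neither country profits"
    elif bic_rate <= nic_max:
        approach_1 = "only the lower-impact country profits"
    else:
        approach_1 = "both countries profit"

    # Approach 2: BIC rate compared with the mean of the two NIC rates.
    approach_2 = "above expectation" if bic_rate > nic_mean else "not above expectation"
    return approach_1, approach_2

# Example: BIC papers cited 6.0 times on average; domestic (NIC) rates of 4.5 and 7.0.
print(categorize_pair(6.0, 4.5, 7.0))
# -> ('only the lower-impact country profits', 'above expectation')
```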
Table 23.3 analyses a total of 3,523 pairs of collaborating countries involving 20 countries in 10 science fields. For instance, the papers in chemistry co-published between the USA and the UK with first or reprint authors from the USA constitute one pair of collaborating countries, or one 'case'. Such a pair is denoted below as a bi-lateral collaboration pair. Two categorisations were made of pairs according to how the average citation impact of a country pair's BIC papers, denoted as BIC in the table's heading, relates to that of the NIC papers published by its constituents. The lower NIC citation rate in a pair is denoted as NIC min and the higher as NIC max; NIC mean is defined as (NIC min + NIC max)/2. Data are extracted from the Science Citation Index (SCI) and related Citation Indexes published by the Institute for Scientific Information (ISI, currently Thomson ISI). Publications analysed were published during 1996–2000 and citations were counted according to a fixed citation window of 4 years. Author self-citations were not included. The outcomes are presented in Table 23.3. From Tables 23.2 and 23.3 the following conclusions may be drawn.
– Countries with a high citation impact of their NIC papers are indeed overrepresented in the set of papers emerging from bi-lateral international collaboration. They contributed to 93% of all BIC papers, whereas their share of purely domestic papers was 67%. Some 48% of BIC papers resulted from collaboration between two countries that both had a high citation impact, whereas only 7% came from pairs in which both had a low citation impact of their NIC papers.
– In 44% of bi-lateral collaboration pairs, both participating countries increased their citation impact relative to that of their purely domestic papers. In 35% of the cases only the country with the lowest citation impact of domestic papers profited, whereas in 22% of the cases neither country raised its impact.
– Considering the mean citation impact of the domestic papers of both contributing countries as a norm, it follows that 60% of collaboration pairs generated a citation impact above this norm. When two countries with a low citation impact of their NIC papers collaborated, the latter percentage was 50, whereas for collaboration between two high-impact countries it was 80%. An additional analysis by discipline, not presented in Tables 23.2 and 23.3, found that this percentage was highest
in biological sciences and clinical medicine (over 90%), and lowest in chemistry and applied physics & chemistry (around 70%).
– When a high and a low citation impact country collaborated, their order was indeed significant. When the former came first, and hence delivered the primary author or leading research group, 67% of collaboration pairs produced BIC papers with an average citation impact above the mean citation impact of NIC papers from the two countries. But when a low-impact country came first, this percentage dropped to 43. An additional analysis found that this decline was much smaller in engineering and mathematics, which may reflect substantial differences among disciplines, both in author sequence conventions and in the nature of bi-lateral collaboration.
– From the perspective of high-impact countries, when they collaborated with low-impact nations, the citation impact of their BIC papers was lower than that of their NIC papers in 57% of collaboration pairs when they were first, and in 76% of cases when they were second. Expanding the set of countries by considering the most productive 40 countries in terms of number of NIC papers, and categorising these into four citation impact classes, the results were qualitatively similar to, but in most cases more pronounced than, those obtained for the 20-country set and two impact classes. For instance, 83% of BIC papers had at least one top-impact country, whereas only 20% had at least one country with the lowest citation impact. Still, for the 40-country set, bi-lateral collaboration among the top 25% of countries in terms of NIC citation impact accounted for 30% of all global BIC papers, and in 68% of collaboration pairs these papers generated a citation impact above that of each contributor's NIC impact. When these countries collaborated with the bottom 25% of countries in terms of NIC citation impact, their BIC impact was lower than that of their NIC papers in 76% of cases when they were first, and in 92% when they were second.
Conclusions

King (2004) and authors of many other studies properly emphasised that "there is a stark disparity between the first and second divisions in the scientific impact of nations". This notion appears to be crucial in any study of scientific impact and international collaboration. The bibliometric analysis of bi-lateral international collaboration presented above shows that when scientifically advanced countries collaborate in a particular research field, they tend, in about 7 out of 10 cases, to profit from the collaboration, in the sense that they raise their citation impact compared to that of their purely domestic publication output. But when countries from the first division contribute in bi-lateral international collaboration to the development of scientifically less advanced countries, and thus to the advancement of science over a somewhat longer term than the time horizon normally adopted in research evaluation, this
activity may negatively affect their short-term citation rates, particularly when their role is secondary. Research evaluators should conceive short-term citation impact at the research front and longer-term development of scientifically less advanced countries as distinct aspects in their own right, and citation analysts should develop special indicators enabling them to carry out this task.

Cited References

Glänzel, W. (2001). National characteristics in international scientific co-authorship. Scientometrics, 51, 69–115.
Glänzel, W., and Schubert, A. (2004). Analysing scientific networks through co-authorship. In: Moed, H.F., Glänzel, W., and Schmoch, U. (2004) (eds.). Handbook of quantitative science and technology research. The use of publication and patent statistics in studies of S&T systems. Dordrecht (the Netherlands): Kluwer Academic Publishers, 257–276.
King, D.A. (2004). The scientific impact of nations. Nature, 430, 311–316.
Narin, F. (1994). Patent bibliometrics. Scientometrics, 30, 147–155.

Moed, H.F. (2016). Iran's scientific dominance and the emergence of South-East Asian countries as scientific collaborators in the Persian Gulf Region. Scientometrics 108, 305–314. Springer

Abstract A longitudinal bibliometric analysis of publications indexed in Thomson Reuters' InCites and Elsevier's Scopus, and published from Persian Gulf States and neighbouring Middle East countries, shows clear effects of major political events during the past 35 years. Predictions made in 2006 by the US diplomat Richard N. Haass on political changes in the Middle East have come true in the Gulf States' national scientific research systems, to the extent that Iran has become in 2015 by far the leading country in the Persian Gulf, and South-East Asian countries including China, Malaysia and South Korea have become major scientific collaborators, displacing the USA and other large Western countries. But collaboration patterns among Persian Gulf States show no apparent relationship with differences in Islam denominations.

Moed, H.F., Halevi, G.1 (2014). A bibliometric approach to tracking international scientific migration. Scientometrics 101, 1987–2001. Springer
1 Elsevier, New York, USA

Abstract A bibliometric approach is explored to tracking international scientific migration, based on an analysis of the affiliation countries of authors publishing in peer-reviewed journals indexed in Scopus. The paper introduces a model that relates base concepts in the study of migration to bibliometric constructs, and discusses the potentialities and limitations of a bibliometric approach, both with respect to data accuracy and interpretation. Synchronous and asynchronous analyses are presented for 10 rapidly growing countries and 7 scientifically established countries. Rough
error rates of the proposed indicators are estimated. It is concluded that the bibliometric approach is promising, provided that its outcomes are interpreted with care, based on insight into the limits and potentialities of the approach, and combined with complementary data obtained, for instance, from researchers' curricula vitae, or from survey- or questionnaire-based data. Error rates for units of assessment with indicator values based on sufficiently large numbers are estimated to be well below 10%, but can be expected to vary substantially among countries of origin, especially between Asian countries and Western countries.
The future of bibliometrics and informetrics
The articles in this section provide a perspective on the future, both in the development of informetric indicators and in their application in research assessment processes. My monograph Applied Evaluative Informetrics contains several chapters on these topics; therefore, the executive summary of this book is also listed below.
Moed, H.F. (2007). The Future of Research Evaluation Rests with an Intelligent Combination of Advanced Metrics and Transparent Peer Review. Science and Public Policy 34, 575–584. Oxford University Press

Abstract The paper discusses the strengths and limitations of 'metrics' and peer review in large-scale evaluations of scholarly research performance. A real challenge is to combine the two methodologies in such a way that the strength of the first compensates for the limitations of the second, and vice versa. It underlines the need to systematically take into account the unintended effects of the use of metrics. It proposes a set of general criteria for the proper use of bibliometric indicators within peer-review processes, and applies these to a particular case: the UK Research Assessment Exercise (RAE).

Moed, H.F. (2016). Toward new indicators of a journal's manuscript peer review process. Frontiers in Research Metrics and Analytics, 1, art. no. 5, https://doi.org/10.3389/frma.2016.00005.

Abstract Journal impact factor is among the most frequently used bibliometric indicators in scientific-scholarly journal and research assessment. This paper addresses the question as to why this indicator has become so attractive and pervasive. It defends the position that the most effective way to reduce the role of citation-based journal metrics in journal and research assessment is developing indicators of the quality of journals' manuscript peer review process, based on an analysis of this process itself, as reflected in the written communication between authors, referees, and journal editors in electronic submission systems. This approach combines computational linguistic
tools from the domain of "digital humanities" with "classical humanistic" text analysis and a profound knowledge of the manuscript peer review and publication process.

Moed, H.F. (2017). A critical comparative analysis of five world university rankings. Scientometrics, 110, 967–990. Springer

Abstract To provide users with insight into the value and limits of world university rankings, a comparative analysis is conducted of 5 ranking systems: ARWU, Leiden, THE, QS and U-Multirank. It links these systems with one another at the level of individual institutions, and analyses the overlap in institutional coverage, geographical coverage, how indicators are calculated from raw data, the skewness of indicator distributions, and statistical correlations between indicators. Four secondary analyses are presented investigating national academic systems and selected pairs of indicators. It is argued that current systems are still one-dimensional in the sense that they provide finalized, seemingly unrelated indicator values rather than offer a dataset and tools to observe patterns in multi-faceted data. By systematically comparing different systems, more insight is provided into how their institutional coverage, rating methods, the selection of indicators and their normalizations influence the ranking positions of given institutions.

Moed, H.F. (2017). Executive Summary. In: Applied Evaluative Informetrics. Springer, ISBN 978-3-319-60521-0 (hard cover); 978-3-319-60522-7 (e-book), https://doi.org/10.1007/978-3-319-60522-7.

Executive Summary

This book presents an introduction to the field of applied evaluative informetrics. Its main topic is the application of informetric indicators in the assessment of research performance. It gives an overview of the field's history and recent achievements, and of its potential and limits. It also discusses the way forward, proposes informetric options for future research assessment processes, and suggests new lines for indicator development. It is written for interested scholars from all domains of science and scholarship, especially those subjected to quantitative research assessment, research students at advanced master and PhD level, and researchers in informetrics and research assessment, and for research managers, science policy officials, research funders, and other users of informetric tools. The use of the term informetrics reflects that the book does not only deal with bibliometric indicators based on publication and citation counts, but also with altmetrics, webometrics, and usage-based metrics derived from a variety of data sources, and does not only consider research output and impact, but also research input and process. Research performance is conceived as a multi-dimensional concept. Key distinctions are made between publications and other forms of output, and between
scientific-scholarly and societal impact. The pros and cons of 28 often-used indicators are discussed. An analytical distinction is made between four domains of intellectual activity in an assessment process, comprising the following activities.
• Policy and management: the formulation of a policy issue and assessment objectives; making decisions on the assessment's organizational aspects and budget. Its main outcome is a policy decision based on the outcomes from the evaluation domain.
• Evaluation: the specification of an evaluative framework, i.e., a set of evaluation criteria, in agreement with the policy issue and assessment objectives. The main outcome is a judgment on the basis of the evaluative framework and the empirical evidence collected.
• Analytics: collecting, analyzing and reporting empirical knowledge on the subjects of assessment; the specification of an assessment model or strategy, and the operationalization of the criteria in the evaluative framework. Its main outcome is an analytical report as input for the evaluative domain.
• Data collection: the collection of relevant data for analytical purposes, as specified in an analytical model. Data can be either quantitative or qualitative. Its main outcome is a dataset for the calculation of all indicators specified in the analytical model.
Three basic assumptions of this book are the following.
• Informetric analysis is positioned in the analytics domain. A basic notion holds that from what is, one cannot infer what ought to be. Evaluation criteria and policy objectives are not informetrically demonstrable values. Of course, empirical informetric research may study quality perceptions, user satisfaction, the acceptability of policy objectives, or the effects of particular policies, but it cannot provide a foundation for the validity of quality criteria or the appropriateness of policy objectives. Informetricians should maintain in their informetric work a neutral position towards these values.
• If the tendency to replace reality with symbols and to conceive these symbols as an even higher form of reality are typical characteristics of magical thinking, jointly with the belief that one can change reality by acting upon the symbol, one could rightly argue that an un-reflected, unconditional belief in indicators shows rather strong similarities with magical thinking.
• The future of research assessment lies in the intelligent combination of indicators and peer review. Since their emergence, and in reaction to a perceived lack of transparency in peer review processes, bibliometric indicators have been used to break open peer review processes, and to stimulate peers to make the foundation and justification of their judgments more explicit. The notion of informetric indicators as a support tool in peer review processes, rather than as a replacement of such processes, still has great potential.
Five strong points of the use of informetric indicators in research assessment are highlighted: it provides tools to demonstrate performance and to shape one's communication strategies; it offers standardized approaches and independent yardsticks; it delivers comprehensive insights that reach beyond the perceptions of individual participants; and it provides tools for enlightening policy assumptions. But severe criticisms have been raised against these indicators as well. Indicators may be imperfect and biased; they may suggest a façade of exactness; most studies adopt a limited time horizon; indicators can be manipulated and may have constitutive effects; measuring societal impact is problematic; and when they are applied, an evaluative framework and assessment model are often lacking.

The following views are expressed, partly supportive, and partly as a counter-critique towards these criticisms.

• Calculating indicators at the level of an individual and claiming they measure by themselves the individual's performance suggests a façade of exactness that cannot be justified. A valid and fair assessment of individual research performance can be conducted properly only on the basis of sufficient background knowledge of the particular role individuals played in the research presented in their publications, and by taking into account also other types of information on their performance.
• The notion of making a contribution to scientific-scholarly progress does have a basis in reality, which can best be illustrated by referring to a historical viewpoint. History will show which contributions to scholarly knowledge are valuable and sustainable. In this sense, informetric indicators do not measure contribution to scientific-scholarly progress, but rather indicate attention, visibility or short-term impact.
• Societal value cannot be assessed in a politically neutral manner. The foundation of the criteria for assessing societal value is not a matter in which scientific experts as such have a preferred status, but should eventually take place in the policy domain. One possible option is moving away from the objective to evaluate an activity's societal value, towards measuring in a neutral manner researchers' orientation towards any articulated, lawful need in society.
• Studies on changes in editorial and author practices under the influence of assessment exercises are most relevant and illuminative. But the issue at stake is not whether scholars' practices change under the influence of the use of informetric indicators, but rather whether or not the application of such measures enhances research performance. Although this is in some cases difficult to assess without extra study, other cases clearly show traces of mere indicator manipulation with no positive effect on performance at all.
• A typical example of a constitutive effect is that research quality is more and more conceived as what citations measure. More empirical research on the size of constitutive effects is needed. If there is a genuine constitutive effect of informetric indicators in quality assessment, one should not direct the critique of current assessment practices merely towards informetric indicators as such, but rather towards any claim for an absolute status of a particular way to assess research quality. Research quality is not what citations measure, but at the same time peers may assess it wrongly.
• If the role of informetric indicators has become too dominant, it does not follow that the notion of intelligently combining peer judgments and indicators is fundamentally flawed and that indicators should be banned from the assessment arena. But it does show that the combination of the two methodologies has to be organized in a more balanced manner.
• In the proper use of informetric tools, an evaluative framework and an assessment model are indispensable. To the extent that in a practical application an evaluative framework is absent or implicit, there is a vacuum that may easily be filled either with ad hoc arguments of evaluators and policy makers, or with un-reflected assumptions underlying informetric tools. Perhaps the role of such ad hoc arguments and assumptions has nowadays become too dominant. It can be reduced only if evaluative frameworks become stronger and more actively determine which tools are to be used, and how.

The following alternative approaches to the assessment of academic research are proposed.

• A key assumption in the assessment of academic research has been that it is not the potential influence or importance of research, but the actual influence or impact that is of primary interest to policy makers and evaluators. But an academic assessment policy is conceivable that rejects this assumption. It embodies a shift in focus from the measurement of performance itself to the assessment of preconditions for performance.
• Rather than using citations as an indicator of research importance or quality, they could provide a tool in the assessment of communication effectiveness and express the extent to which researchers bring their work to the attention of a broad, potentially interested audience. This extent can in principle be measured with informetric tools. It discourages the use of citation data as a principal indicator of importance.
• The functions of publications and other forms of scientific-scholarly output, as well as their target audiences, should be taken into account more explicitly than they have been in the past. Scientific-scholarly journals could be systematically categorized according to their function and target audience, and separate indicators could be calculated for each category. More sophisticated indicators of the internationality of communication sources can be calculated than the journal impact factor and its variants.
• One possible approach to the use of informetric indicators in research assessment is a systematic exploration of indicators as tools to set minimum performance standards. Using baseline indicators, researchers will most probably change their research practices as they are stimulated to meet the standards, but if the standards are appropriate and fair, this behavior will actually increase their performance and that of their institutions.
• At the upper part of the quality distribution, it is perhaps feasible to distinguish entities which are 'hors catégorie', or 'at Nobel Prize level'. Assessment processes focusing on the very top of the quality distributions could further operationalize the criteria for this qualification.
• Realistically speaking, rankings of world universities are here to stay. Academic institutions could, individually or collectively, seek to influence the various systems by formally sending their creators a request to consider the implementation of a series of new features: more advanced analytical tools; more insight into how methodological decisions influence rankings; and more information in the system about additional, relevant factors, such as teaching course language.
• In response to major criticisms of current national research assessment exercises and performance-based funding formulas, an alternative model would require less effort, be more transparent, stimulate new research lines and reduce to some extent the Matthew Effect. The basic unit of assessment in such a model is the emerging research group rather than the individual researcher. Institutions submit emerging groups and their research programs, which are assessed in a combined peer-review-based and informetric approach, applying minimum performance criteria. A funding formula is partly based on an institution's number of acknowledged emerging groups.

The practical realization of these proposals requires a large amount of informetric research and development. They constitute important elements of a wider R&D program of applied evaluative informetrics. The further exploration of measures of communication effectiveness, minimum performance standards, new functionalities in research information systems, and tools to facilitate alternative funding formulas should be conducted in close collaboration between informetricians and external stakeholders, each with their own domain of expertise and responsibilities. These activities tend to have an applied character and often a short-term perspective. Strategic, longer-term research projects with a great potential for research assessment are proposed as well. They put a greater emphasis on the use of techniques from computer science and newly available information and communication technologies, and on theoretical models for the interpretation of indicators.

• It is proposed to develop new indicators of the manuscript peer review process. Although this process is considered important by publishers, editors and researchers, it is still strikingly opaque. Applying classical humanistic and computational linguistic tools to peer review reports, an understanding may be obtained, for each discipline, of what is considered a reasonable quality threshold for publication, how it differs among journals and disciplines, and what distinguishes an acceptable paper from one that is rejected. Eventually, it could lead to better indicators of journal quality.
• To solve a series of challenges related to the management of informetric data and the standardization of informetric methods and concepts, it is proposed to develop an Ontology-Based Data Management (OBDM) system for research assessment. The key idea of OBDM is to create a three-level architecture, constituted by the ontology, a conceptual, formal description of the domain of interest; the data sources; and the mapping between these two domains. Users can access the data by using the elements of the ontology. A strict separation exists between the conceptual and the logical-physical level.
• The creation of an informetric self-assessment tool at the level of individual authors or small research groups is proposed. A challenge is to create an online application based on key notions expressed decades ago by Eugene Garfield about author benchmarking, and by Robert K. Merton about the formation of a reference group. It enables authors to check the indicator data calculated about themselves, decompose the indicators' values, learn more about informetric indicators, and defend themselves against inaccurate calculation or invalid interpretation of indicators.
• As an illustration of the importance of theoretical models for the interpretation of informetric indicators, a model of a country's scientific development is presented. Categorizing national research systems in terms of the phase of their scientific development is a meaningful alternative to the presentation of rankings of entities based on a single indicator. In addition, it contributes to the solution of ambiguity problems in the interpretation of indicators.

It is proposed to dedicate more attention in doctoral programs to the ins and outs, potential and limits of the various assessment methodologies. Research assessment is an activity one can learn.
Discussion and conclusions
The analysis presented in this chapter relates to discrepancies in a single datafield. Discrepancies in more than one datafield, containing, for instance, errors in both the author name and the volume number, were not examined. As such discrepancies cannot be assumed to be independent from one another (Lok et al., 2001), their probabilities cannot be calculated by simply multiplying those related to discrepancies in a single datafield. This issue requires a more detailed examination in future studies. The outcomes presented in this chapter provide a lower-boundary estimate of the overall number of discrepancies between cited references and target articles.

Another issue to be studied in more detail regards the consequences of electronic publishing, particularly the existence of different versions of publications and different numbering systems. Simkin and Roychowdhury (2002) posted in the e-print archive arXiv two versions of a paper entitled 'Read before you cite', of which the first version received considerable attention from scientific journals and the non-scholarly press. From a limited number of case studies on "citation errors", applying a mathematical model that is in itself interesting, they concluded that citing authors copy a large percentage of references from other papers. However, apart from the fact that when an author copies a reference from another paper it does not follow that he or she did not read the cited paper, their analysis provided no empirical evidence that when two or more citing papers contain the same discrepant reference, their citing authors actually copied it from one
another. For a case study illustrating a methodology to collect this type of evidence, the reader is referred to Moed and Vriens (1989). Focusing on the quantitative implications for bibliometric research performance assessment, it can be concluded that, due to the skewed distribution of discrepant citations among target articles, citation statistics at the level of individuals, research departments or scientific journals may be highly inaccurate when cited references are not properly matched to target articles. The data collection procedures underlying citation-based indicators must be sound and accurate. Consequently, advanced citation data handling procedures must take into account inaccurate, sloppy referencing, editorial characteristics of scientific journals, referencing conventions in scholarly subfields, language problems, author identification problems, unfamiliarity with foreign author names, and data capturing procedures.
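To make the matching problem concrete, the sketch below gives a minimal, hypothetical illustration of a lenient reference-to-target matching rule that tolerates a discrepancy in a single datafield. The datafields, field names and example records are assumptions made for illustration; they do not reproduce the matching procedure used in the study summarized above.

```python
# Illustrative sketch of lenient reference-to-target matching: a cited
# reference is accepted as a match if at most one datafield disagrees.
FIELDS = ("first_author", "year", "volume", "start_page")

def matches(cited_ref: dict, target: dict, max_discrepancies: int = 1) -> bool:
    """Return True if the cited reference differs from the target article
    in no more than max_discrepancies datafields."""
    discrepancies = sum(
        1 for field in FIELDS
        if str(cited_ref.get(field, "")).strip().lower()
        != str(target.get(field, "")).strip().lower()
    )
    return discrepancies <= max_discrepancies

# Hypothetical example: the volume number is wrong but the other fields agree.
target = {"first_author": "Moed HF", "year": 1989, "volume": 15, "start_page": 473}
cited = {"first_author": "Moed HF", "year": 1989, "volume": 51, "start_page": 473}
print(matches(cited, target))  # True: one discrepant datafield is tolerated
```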
Contributed Chapters
Citation Profiles and Research Dynamics

Robert Braam
Introduction
Citation indexing, developed in its computerized version by Garfield, started as an information retrieval tool, but was soon also used in analyzing the conceptual structure of science and in the measurement of research performance (Chen et al., 2002; Moed, 2016). Citation analysis became organized in the field of 'scientometrics' during the nineteen-nineties (Lucio-Arias and Leydesdorff, 2009). Citations also came to be used in analyzing the strategic positioning of scientists in so-called actor-networks (Callon, 1986). Most recently, citation analysis has been proposed as a strategic tool in scientific career planning (Xiao et al., 2016). The basic notion behind all these applications of citation analysis is the way scientists work together in so-called 'research fronts', where scientists build on the work of other, earlier scientists to create new science. This notion, introduced by Price (1965) in his article 'Networks of scientific papers', holds that scientists work upon the most recent earlier contributions, as a way of growing, more than on older work (which has already been built upon earlier). This way of working in science would be visible in citation patterns as an 'immediacy effect', a high percentage of references in articles to the most recent literature (ibid.). The immediacy effect can be measured by Price's index, the percentage of 0–4 year old cited publications in the yearly references of a given field. Moed (1989) found differences between research groups ranging from 25% up to 70% of their reference lists citing the most recent work. When the immediacy effect amounts to 100%, the group focus reaches a maximum: all references in their publications at the research front are citations of other work published a few years earlier. Recent citation modeling studies on predicting citation scores confirm the immediacy thesis, now formulated as the combined effect of triggering and aging (Xiao
et al., 2016). Besides, in these models a term is added to predict 'second acts', the 'awakening' of forgotten or 'sleeping' papers (ibid.). The awakening of such papers is thought to be triggered by newly found relevance, particularly in novel or other research areas, leading to 'delayed' citation bursts (ibid.).
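Price's index, as used throughout this chapter, is simply the share of cited references that are at most four years older than the citing publication. A minimal sketch is given below; the function name and the example reference years are illustrative assumptions, not data from the chapter.

```python
# Price's index: percentage of cited references that are 0-4 years old
# relative to the publication year of the citing paper.
def prices_index(citing_year, reference_years):
    recent = sum(1 for y in reference_years if 0 <= citing_year - y <= 4)
    return 100.0 * recent / len(reference_years)

# Hypothetical reference list of a 2018 paper.
refs = [2017, 2016, 2016, 2015, 2014, 2010, 2005]
print(f"Price's index: {prices_index(2018, refs):.0f}%")  # 5 of 7 -> 71%
```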
Citation Profiles and Immediacy
How would the immediacy effect, defined by Price as a group phenomenon, translate to citation patterns of individual publications and to author citation histories? In a lively growing research front, the immediacy effect implies that received citations peak in the years shortly after publication, with a 'delay' depending on publication practice. After this early peak, a publication's received citations gradually fade out, as the work is taken up in the fabric, or 'factory', of science. The early peak may be higher or lower depending on the size of the particular research area, and more or less compact depending on the growth pace, as indicated by Price's index. Also, the initial citation delay will be shorter or longer, depending on the speed of the publication practice in the particular area. In general, the expected form of the citation curve for a publication in an active research front would be skewed, e.g., as depicted in Fig. 1.

Now, if an author contributes a publication a year to the research front, for say 15 years, a citation profile arises as the sum of citations to publications in consecutive years, in case all publications are received according to the above basic delay-immediacy-aging citation pattern (see Fig. 2). Deviations from the model pattern point to changes in the author's contributions and/or to changes in the research front. Incidental better or more poorly received papers will lead to fluctuations around the highest flat level line (m). Changes in the research front size will change the height of the flat level line. If contributions have wider relevance, the level may also rise without research front growth, through outside citations. Differences between author profiles may derive from all such factors.

Fig. 1 Expected citation history of a research front publication (m citations; immediacy = 80%)
[Figure 1 plots citations as a percentage of m (vertical axis, 0–50%) against years 1–9 after publication (horizontal axis).]
[Figure 2 plots citations as a percentage of the maximum (m), at immediacy = 80%, against years 1–24 (vertical axis 0–125%).]
Fig. 2 Model citation profile of an author contributing a publication a year to the research front
$$C_{\text{author}}(t) = \sum_{n=1}^{N} CP_n(t)$$

where
$C_{\text{author}}(t)$ = citations to the author in a given year $t$;
$CP_n(t)$ = citations received in year $t$ by publication $n$;
$P_n$ = the $n$-th publication of the author, with $N$ publications in total;
$m$ = total of $CP_n(t)$ over all years.
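The model behind Fig. 2 and the formula above can be sketched in a few lines of code. The per-publication curve below is an invented illustration of a delay-immediacy-aging pattern (about 80% of a paper's citations arriving within its first five years, matching the immediacy used in the figures); it is not fitted to any data. Summing shifted copies of this curve for an author who publishes one paper a year reproduces the rising, flat and declining phases of the model profile.

```python
# Sketch of the delay-immediacy-aging model behind Figs. 1 and 2: a single
# publication receives citations according to CURVE (fractions of its total m),
# and the author profile is C_author(t) = sum over publications n of CP_n(t).

# Assumed per-publication curve over the nine years after publication
# (fractions summing to 1.0; about 80% of citations fall in years 1-5).
CURVE = [0.05, 0.15, 0.25, 0.20, 0.15, 0.08, 0.06, 0.04, 0.02]

def author_profile(publication_years, horizon):
    """Citations per year (as fractions of m) for an author, obtained by
    summing shifted copies of CURVE, one per publication."""
    profile = {t: 0.0 for t in range(horizon)}
    for p in publication_years:
        for offset, fraction in enumerate(CURVE):
            t = p + offset
            if t < horizon:
                profile[t] += fraction
    return profile

# An author contributing one publication a year for 15 years (years 0-14),
# observed over 24 years: the profile rises, flattens around 1.0 (i.e. m
# citations per year), and declines after the contributions stop.
profile = author_profile(range(15), horizon=24)
for year, c in profile.items():
    print(f"year {year:2d}: {'#' * round(20 * c)}  {c:.2f}")
```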
Data and Sources
Citation profiles were taken from Google Scholar (GS) in the fall of 2018 for scientometrics scholars for whom a GS citation profile was available. They comprise the 'first generation': scholars who were already active long before the publication of the first thesis at CWTS (Moed, 1989); the 'second generation': later colleagues of Henk Moed, at CWTS or elsewhere, and their co-authors, as listed alongside the citation profiles of the selected authors in Google Scholar; and, finally, GS citation data for some 'third generation' colleagues. Google Scholar citation data have improved through retrograde updating (Winter et al., 2013).
Results and Discussion
As shown in Fig. 3, none of the selected citation profiles shows a steep or gradual decline in citations; even the profiles of colleagues who passed away long ago do not decline (Price, R.I.P. 1983; Moravcsik, R.I.P. 1989; Griffith, R.I.P. 1999). Lasting high citation records after the productive period cannot be attributed to immediacy, as immediacy leads to a citation decline after the publication is 'swallowed and digested' by the growing research front. Other sources of citation need to be taken into account. The patterns until the year 2000 are in accordance with the middle flat phase of the model. However, a steep rise occurs around the year 2000, and citations keep growing steadily for a decade, after which a new stable flat period follows.
Fig. 3 First generation scientometrics author citation profiles
The general rise in citation scores may reflect a general growth of science and citation, and/or increased activity in the area of scientometrics. Another result is the differing maximum height of the profiles. None of the other selected first generation scientometricians rises above Price or Garfield, the one establishing the 'science of science', the other establishing citation indexing and computerized citation analysis. As shown in Fig. 4, the same holds for the 'second generation' scientometricians included here, except for one author (Glänzel, with a citation profile level in between Price and Garfield). What explains the continuing 'dominance' of Price and Garfield? Why aren't they overtaken by scientometricians of the first or the second generation? Price's own profile deviates from the immediacy-based model citation profile (Fig. 2), as no contributions have been made to the front since the author's passing away. What other factors are at play? Thinking of possible other factors, three come to mind to explain his prolonged citedness. First, the prolonged dominance of Price may be attributed to scientometricians continuing to pay their respects, through citations, to the founding father of their field. Secondly, it may result from scientometricians' continued focus on the basic concepts of the paradigm set by Price, in the absence of a major breakthrough; e.g., actor-network theory (Callon, 1986) did not fundamentally change scientometrics. Thirdly, it may partly be explained by a larger audience for the wider and more basic ideas studied by Price, as compared to the focus of later colleagues on more specific scientometrics themes, who therefore receive citations from a smaller audience. The same points may hold for the dominance of Garfield:
Fig. 4 Second generation scientometrics author citation profiles
researchers paying respect to the father of computerized citation indexing; citation analysis as a basic tool in information retrieval and scientometrics, and in the broader science of science field; and upcoming areas such as visualization, data mining and artificial intelligence.
Wider Audiences
In order to check the idea of wider audiences, we looked more closely at the citation profiles of those scientometrics authors that, given their numbers of received citations, have, so to speak, 'grown out of the shadow of Price and Garfield'. For example, we took the record of Glänzel, from the 'second generation', and the records of some 'third generation' scientometricians at the Leiden group, as a change in the profile of this institute had been forecast in an earlier analysis (Braam and van de Besselaar, 2010). In the citation record of Glänzel, we find, besides many citations from within the scientometrics area, also references to his work by authors in the field of data mining that are not directly related to scientometrics. This is also the case for citations to the 'third generation' Leiden scientometricians Van Eck and Waltman, who, besides inside citations, also receive citations from areas outside the scientometrics field, e.g. to their technical work on developing visualization software (VOSviewer). These findings contrast with references to the work of 'third generation' colleagues Klavans and Boyack, who both remain in the 'citation shadow' of Price and Garfield, i.e. receive fewer citations, probably because their work focuses on the application of data mining and visualization within the field of scientometrics.
Table 1 Author citation profile data

Author | All citations | Highest two | Percentage
Garfield (R.I.P. 2017) | 30941 | 5869 | 19
Price (R.I.P. 1983) | 18858 | 8992 | 48
Glänzel | 17287 | 1299 | 8
Van Raan | 13257 | 1429 | 11
Moed | 12318 | 2200 | 18
Griffith (R.I.P. 1999) | 8551 | 1205 (2788) | 14 (31)
White | 7489 | 2716 | 36
McCain | 6079 | 2424 | 40
Moravcsik (R.I.P. 1989) | 4113 | 806 (1169) | 20 (28)
Small | 10570 | 3912 | 37
Braun | 9218 | 1111 | 12
Braam | 1134 | 651 | 57
Leydesdorff | 44651 | 0 (13188) | 0 (30)
Tijssen | 5873 | 804 | 14
Van Leeuwen | 6662 | 921 | 14
Rousseau | 12751 | 2304 | 18
Hicks | 6511 | 1336 | 20

(R.I.P. …) = passed away; (..) = other field included; source: GS, Feb/March 2019
Another striking finding is in the record of Leydesdorff, whose received citations rise sky-high, to over 4,000 a year since 2013, which, on closer inspection, can be traced back for a very large part to references to the Triple Helix publications, written together with Etzkowitz, on government-industry-science relations and the global knowledge economy (Etzkowitz, more focused on this topic, receives about 4,000 citations a year). Thus, Leydesdorff is to be seen more as a 'part-time second generation scientometrician', as many of his received citations are to publications in this other, much wider, field of research. The variety of topics studied by scholars in scientometrics, such as Leydesdorff and Glänzel, has been shown in detail in a recent study (Zhang et al., 2018). Citations to my own, much scarcer work resemble the 'second generation' scientometricians' pattern, though at a much lower level, due to my involvement in 'mapping of science' and 'research dynamics' studies, less popular sub-areas of scientometrics compared to performance and impact studies, and thus smaller citation audiences.

As shown in Table 1, author citation data differ in skewness, measured as the percentage of citations obtained by the two most cited publications connected to each author, ranging
from 8% (Glänzel) and 11% (Van Raan) to 48% (Price) and 57% (Braam), with Leydesdorff at the extreme of 0% if citations outside scientometrics are left out (but 30% including his other research areas). Thus, authors differ: some have a more regular distribution of citations over their work, while others have highly cited publications as well as (some or many) lowly cited items; Garfield, for example, has many lowly cited items as well. A minimal computational sketch of this top-two share is given after the findings below.

Findings on citation histories
– Citations to scientometrics authors have generally risen considerably since the millennium.
– The immediacy-based profile does not fit the post-productive profiles of (passed away) authors.
– The founding fathers of scientometrics remain highly cited, compared to the selected first-, second- and third-generation scientometricians.
– Some very highly cited author profiles obtain large shares of citations from other areas.
– Authors differ considerably in the percentage of citations obtained by their two most cited publications.
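The top-two share used in Table 1 and in the findings above can be computed in a few lines. The citation counts in the sketch below are invented for illustration and do not correspond to any author in the table.

```python
# Share of an author's citations accounted for by the two most cited items,
# as used in the last column of Table 1 (illustrative counts, not real data).
def top_two_share(citations_per_publication):
    total = sum(citations_per_publication)
    top_two = sum(sorted(citations_per_publication, reverse=True)[:2])
    return 100.0 * top_two / total

even_author = [40, 38, 35, 33, 30, 28, 25, 22]    # regular distribution
skewed_author = [400, 250, 20, 15, 10, 8, 5, 3]   # a few highly cited items

print(f"even profile:   {top_two_share(even_author):.0f}% from the top two")
print(f"skewed profile: {top_two_share(skewed_author):.0f}% from the top two")
```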
Towards a Theory of Citation Dynamics
Understanding these findings requires a more encompassing theory of citation than immediacy. A variety of articulated viewpoints for a theory of citation has been summed up by Moed (2005, updated 2017) in his book on citation analysis: references as symbols or as means of persuasion; micro versus macro sociological analysis; citer motivations versus impact, etc. According to Moed, drawing these various views together in an encompassing framework might not be wise, as all these viewpoints hold relevant aspects of a theory in their own right. However, he stresses the need to keep citation and referencing together, and explores, starting from the perspective of a research group working on a research program, the distinction between references of the group to two kinds of papers in the group's 'publication ensemble': 'bricks', normal contributions to the program, and 'flags', more significant contributions, either overviews or reports of significant progress (Moed, 2005, p. 216).

Following Moed, the research process seems a good starting point. We here further develop the citation theory related to the process of knowledge growth, centered on Price's idea of the research front, looking upon referencing as a cultural by-product of scholarly communication. The basic elements for such a theory have all been introduced but, as yet, not brought together in a single framework, in line with Moed's warning, but possibly also due to the focus of much scientometrics work on the field's overall earning model: measuring research impact.
The elements we use here are:
• the scholarly communication practice as it developed, using publications and references;
• Price's idea of the research front, and of immediacy: growth by building on the most recent work;
• Small's idea of the highly cited item as a concept symbol, related to Kuhn's paradigm concept;
• Bonaccorsi's idea of search regimes, of more focused or more widening (re)search activity;
• Callon's proposed idea of strategic positioning and translation of researchers in actor networks;
• Moed's distinction between citations to two kinds of contributions, bricks and flags, retaining the relationship between referencing and citation, and combining a micro- and a macro-perspective;
• Brooks's distinction between social citer motives and informative citer motives.
Scholarly Communication
References, and thereby citations, are contingent upon actual science communication practices. The publication of scholarly work in connection with the works of others developed within the communication system of science: the seventeenth century saw the introduction of the refereed scientific journals of the Royal Societies in Paris and London in 1665 (Zuckerman and Merton, 1971). Decades earlier, books appeared containing both the views of the author and invited comments of other scholars, such as Descartes' 'Meditations on First Philosophy', including sets of invited objections from his friends and, via Marin Mersenne, from distinguished scholars, with replies by the author (Cottingham et al., 1984). The practice of invited comments and replies, as in Descartes' book, is still found today in journal form, for example in Scott Atran and Ara Norenzayan (2004), 'Religion's evolutionary landscape: Counter-intuition, commitment, compassion, communion', in Behavioral and Brain Sciences. The practice of the first scientific journals has grown into the prevailing system of refereed scholarly journals as we know it today, including computerized literature storage and retrieval facilities. Since Garfield's introduction of computerized citation indexing (Garfield, 1965), publication and citation counts have also become a part of this system. So much so, that nowadays artificial intelligence studies appear that predict citation scores, directed at scientific career planning (Xiao et al., 2016). It might, however, have been otherwise.
Citation Profiles and Research Dynamics Table 2 Citer motivations
79
Social motives
Informative motives
• Social alignment • Mercantile alignment
• Argumentation • Data
Factors
Factors
• ‘scholarly consensus’ and • ‘operational information’ ‘positive credit’ + ‘recency’ and ‘reader alert’ + • ‘negative credit’ and ‘recency’ ‘persuasiveness’
Citer Motivations
In order to further examine the above findings, we now look at research results on citer motivations. Citer motivations have been found to be complex, comprising both social and informative motives (Brooks, 1986). Factor analysis of authors' views on citer motivations showed 'scholarly consensus' and 'positive credit' forming one factor, next to 'operational information' and 'reader alert' forming another factor, whereby 'recency' loads on both of these two factors, and a third factor is formed by 'negative credit' and 'persuasiveness' (Braam and Bruil, 1991). A recent study lists four main categories of citer motivations: (1) argumentation, (2) social alignment, (3) mercantile alignment, and (4) data (Erikson and Erlandson, 2014). Combining these results offers us a scheme as given in Table 2. Table 2 uses Brooks's distinction between social and informative motives as a framework for the other results found on citer motivations. Social and informative motives can be seen as related to the dual positioning of researchers at the research front: being accepted as valid researchers by their colleagues and having their work accepted as a valid contribution to scientific research issues. In situations where social acceptance is not guaranteed, social motives will prevail, whereas in situations where social acceptance is superfluous, informative motives will prevail, as well as in situations where there is not so much social coherence in the first place, such as in novel areas.
Relevancy Versus Impact
From the perspective of the cited authors, citations are seen as a reaction to the impact made by their publication on the citing researchers. However, references, seen from the perspective of citing authors, are tokens of relevancy to the work of the citing author. One could therefore ask whether the relevancy of a publication is a quality belonging to the paper itself or is rather to be seen as in the eye of the beholder. Perceived relevancy, explaining citation from the point of view of the citing author, may be immediate and in the original area of publication, or at a later stage and in a novel or other area of research. Perceived relevancy thus includes citations outside
the research front. From the perspective of the citing author, the two main motives for citing are social and informative.
Research Fronts
Price's vision of the research front, as a focused growing tip of science, links to broader theories of the dynamics of science, in particular to puzzle-solving activities in periods of normal science as stated by Kuhn's idea of scientific progress. The basic notions and methods are enshrined in a stable paradigm of the research area. As noted by Small (1979), the paradigm is reflected in highly cited documents as concept symbols. The highly cited document, though underlying the focused research depicted by Price, is not captured by immediacy. As shown in field studies, highly cited documents cluster in relation to current focused research activities, though not all researchers contributing to the research front pay tribute to them (Braam et al., 1991). Unstable periods do occur in the history of science (Kuhn, 1970), and these more or less revolutionary changes in research derive from non-focused research activities (Bonaccorsi, 2008). They stem from exploring new questions, or old questions in a new way. Such explorative aspects of research are also not captured by the notion of immediacy at focused research fronts. But the widening of views and the exploration of new questions, as well as the reopening of old questions, are important aspects of the growth of science. From the perspective of the citing authors, then, citation has three main functions:
1. Connecting: relating ongoing research to contributions to current issues in a research front;
2. Ritual anchoring: pointing to basic problems, ideas, or methods that are worked upon in the research area; and
3. Explorative: searching and capturing earlier research work, or work in other areas, to use in one's own research.
Novel citations indicate divergence in research front networks (Lucio-Arias and Leydesdorff, 2009). Diversity in citation networks seems vital, from an ecological point of view, to the waxing and waning of research fronts (Stewart et al., 2017). The punctuated infusion of novel citations into existing networks may represent new contributions to the research line of leading 'parent papers' (top cited papers in a research area) or point to an emerging interdisciplinary field related to the leading 'parent' paper's conceptual concerns (ibid.). Table 3 gives the three functions.
Three Functions of Citation

1. Connecting to current research
Focused research requires connecting one's work to the most recent predecessors in the special area of a group of researchers. It seems logical thereby to give data and operational information, argumentation, possibly aligned with positive and/or negative credits, and to alert the reader to what seems most promising, etc., seen from the perspective of the citing author.
Table 3 Citation functions

Three functions of citation | From the perspective of citing researchers
1. Connecting | Connect to work on current issues in an active research front (Price, 1965)
2. Ritual anchoring | Point to basic problems, ideas, or methods in the research area (Small, 1979)
3. Explorative searching | Search and capture earlier work or research work in other areas (Bonaccorsi, 2010)
2. Ritual anchoring
High citation records that persist post-productively, e.g. after leaving the field, after retirement or post mortem, cannot result from Price's immediacy effect. These citations follow 'ritual' referencing, such as to concept symbols of the field's paradigm (Small, 1979). The ritual citation of older work strengthens the paradigm and shows the citing authors' awareness of the field's past icons.

3. Searching the archive
At other times and places, however, researchers may carry out more extensive literature searches for relevant work, be it of older age or from other areas, as sources to be used for their own purposes. Citing authors select publications appearing in retrieval results as sources for their own usage, and these may come from contributions directed at research fronts in research areas other than their own. Broad searches for relevant papers are nowadays even easier than before, because of free-of-charge facilities such as Google Scholar. As a result of these free facilities, one is less bound by information from peers in one's own research area. Papers can be more easily inspected for relevancy to new research lines or novel research areas. Referring to a particular paper may thus not be the result of impact upon the citing author, but of the citing author's own focus in selecting documents for a purpose of his or her own. Therefore, post-immediately received citations may also remain high or even increase because of novel audiences from other areas.
Positioning and Rewards
Selecting references and citing other research work is a scholarly activity with strategic aspects in the networks where researchers have to position themselves and their ideas in relation to others, thereby working on and creating new social ties (Callon et al., 1986).
[Figure 5 plots the three citation modes (current, ritual, explorative) on a 0–125 scale against the degree of research focus: tight, medium, loose.]
Fig. 5 Theoretical relation of citation modes to research focus
This is important not only in the networking process of knowledge production, but also because publication output and, increasingly, received citations are used in the measurement of research performance, in decisions on research funding, in employment decisions, and, in line with all this, in scientific career planning. The three functions of citation specified above thus point to three different types of research for which a publication may be seen as relevant in the eyes of current researchers, in relation to their own positioning strategies. And here lies the solution to the prevailing predominance of the likes of Price and Garfield, and to the prolonged citation of no-longer productive scientists: ritual referencing to them provides proof of an author's knowledge and recognition of the field's paradigm and thereby legitimizes their offering of a novel contribution. The acceptance of offered contributions by the field's journals and conferences is an acknowledgement of membership and signifies a first scientific reward. A second reward may be the citation of these contributions by references made in consecutive other contributions to the field, and as these come in three categories, one's contributions may likewise be taken up in one of these three: (1) a regular work feeding the research front; (2) a basic method or concept, ritually cited to strengthen paradigm confidence; or (3) a contribution of ideas, concepts or methods in other, or novel, areas of research. The type of contributions and references, we here theoretically suggest, depends upon the tightness of the research front, as given in Fig. 5.
Citation Modes and Research Fronts
As research fronts may wax and wane (Bonaccorsi, 2008), so will the types of references made by their researchers. In tight research fronts, social motivation will be less required: everyone knows everyone, and all effort can be put into working out the clear research program, building on the most recent contributions. If the front is less tight and the program less clear, researchers feel the need to refer to basic ideas and methods, and to prestigious colleagues, ritually strengthening coherence and leading to highly cited papers. If a clear front is absent, there will be no need to refer ritually to basic notions nor to prestigious colleagues, as there is ample room for novel explorations. Thus, we predict here that highly cited items will be absent in super-tight research fronts and in loose areas. In Moed's terms, 'bricks' prevail in tight fronts, 'flags' in medium fronts, and, we suggest, 'stepping stones' for those searching in wider fields or loose areas. As, at very tightly focused fronts, one does not need to express ritual anchors, such fronts will not be found in maps based on highly (co-)cited publications, nor will loose areas, because of the lack of such anchors. This explains the low coverage of such maps found in earlier studies, besides less attentive citation (Braam et al., 1991). Finally, artificial-intelligence-based predictions of future citations, using data on highly prestigious authors, may work well in medium-tight research fronts, but fail in both very tight and loose areas.

Whether a researcher's work falls more in one or another of these three categories will be visible in the distribution of received citations over contributions in the citation history. Regular contributions to a tight research front, i.e. with high immediacy, would be reflected in a profile as in Fig. 2 above, with a small variance over the author's contributions. Ritual and explorative referencing of one's work, on the other hand, would lead to a more fluctuating profile, with larger variance and a skewed or multi-peaked distribution of citations over contributions. This provides a topic for further investigation. As these differences depend on the functional relevance perceived by citing authors, the cited contributing authors should therefore not be held responsible for the reference type nor for the quantity of their received citations. One does, however, hope for acknowledgement of one's serious attempts to contribute to science, and one feels comforted by the long-lasting efforts made to ensure reliable and valid counting of citations to one's work by scientists such as Henk Moed.
Conclusion
Inspection of GS citation profiles shows increased interest in scientometrics and informetrics research since the millennium. It is also clear that this research area has not yet outgrown its founding authors, whose work conceptually and practically remains an inspirational anchor point. The fact that the work of the founding fathers
remains highly cited does partly reflect relevance to audiences outside scientometrics. Only a few 'scientometricians' receive more citations, due to involvement in other, much broader research areas with larger audiences, such as the 'global knowledge economy'. A closer look at citation, in relation to knowledge growth, shows that citation scores can be taken to result from the combined effect of three factors: connecting to the research front, ritual referencing to basic ideas, and exploring the archive. Depending on the tightness of the research front, citations will go more to the first, with high immediacy; to the second, with highly cited items; or to the third of these three, with miscellaneous items from the archive of science. Citation forecasting based on highly prestigious authors will be biased towards the second of these factors.
Appendix to Citation Profiles and Research Dynamics: Immediacy and Citation Functions, an Example
In this appendix, we inspect an example article for the relation between immediacy and the three formulated theoretical functions of citation: (1) connecting to a research front; (2) ritual citation of basic paradigm concepts and/or its prestigious authors; and (3) explorative citation of earlier or remote work to gain novel research ideas. Immediacy is defined as the percentage of citations to the most recent earlier work (0–4 years old) that researchers build upon in their current research.

Example: high immediacy and front connection. As an example we analyse an article selected via GS from http://gut.bmj.com/ on 27 May 2019. With high immediacy, the research is expected to be highly focused and to contain mainly 'bricks' as references. The article is published in Endoscopy news, open access: Ebigbo A, et al. Gut 2018;0:1–3. https://doi.org/10.1136/gutjnl-2018-317573. Below we discuss its references in context.

[Extract of the example article not reproduced here.]
Immediacy
With eight out of the nine cited references being at most three years old at the time of publication, the immediacy, 89%, is very high. Only one cited reference is not immediate, dating from 2010.

Citation functions
The first three references, the first being the only 'older' one, are cited together to legitimize the research effort: "The incidence of BE (Barrett's oesophagus) and EAC (early oesophageal adenocarcinoma) in the West is rising significantly, and because of its close association with the metabolic syndrome this trend is expected to continue. 1–3". The first, and only non-immediate, cited reference, from 2010, which had received 135 citations by the end of 2018 (GS), reviews the evidence of rising incidence from clinical trials, meta-analyses, and large cohort and case-control studies, points to the importance of early detection of oesophageal cancer, outlines strategies for prevention, and describes features of oesophageal cancer to assist generalists in diagnosis. This reference is thus not to basic theoretical concepts, but underpins, together with references 2 and 3, the societal relevance of the study by Ebigbo et al.,
relating to the grants received from the Alexander von Humboldt Foundation and the Deutsche Forschungsgemeinschaft. Pointing to the study's societal relevance, the prospect of improving medical practice, is a ritual citation function, as it does not directly bear on the content of the study, but affirms and reassures its wider goals. References 4–7, given together, point to earlier use of the technique the group works on improving through computer-aided learning next to handcrafted learning: "Reports of CAD (computer-aided diagnosis) in BE analysis have used mainly handcrafted features based on texture and colour". Reference 8 then specifies databases used to improve the aided-learning technique used by the authors: "… to train and test a CAD system on the basis of a deep convolutional neural net (CNN) with a residual net (ResNet) architecture." Finally, reference 9 points to their recent earlier study that is now further worked upon: "In this manuscript, we extend on our prior study on CNN in BE analysis." Their research goal, improving computer-aided diagnosis, in this case "endoscopic assessment of BE", and its wider relevance, "enhancing patient management", are directed at medical practice, not medical theory. Thus, this article with high immediacy has six out of nine references that are built upon to improve cancer diagnosis techniques, thus 'bricks', and three out of nine legitimizing the effort by pointing to scientific evidence of the rising incidence of BE and EAC, a ritual function.

Discussion and Conclusion
In this example article, an immediacy of 89% goes together with 67% bricks and 33% ritual citations. The ritual function here relates both to theoretical elements of the paradigm (evidence of rising incidence) and to its societal relevance (improving medical practice). As the ritual function is also performed by two of the most recent references, it follows that 100% immediacy would not exclude ritual citations. Whether, even at the fastest growing research fronts, paying ritual dues to scientific or societal relevance, though less efficient, is an inevitable requirement, is a question for further examination.

Reference
Ebigbo, A., et al. (2018). Computer-aided diagnosis using deep learning in the evaluation of early oesophageal adenocarcinoma. Gut 2018;0:1–3. https://doi.org/10.1136/gutjnl-2018-317573
References

Perceived as Bricks (14 = 56%)

Braam, R. R., & Bruil, J. (1991). Reviewing and referencing. In R. R. Braam, Mapping of science: Foci of intellectual interest in scientific literature (pp. 207–242). The Netherlands: Leiden University Press.
Chen, C., McCain, K., White, H., & Xia, L. (2002). Mapping scientometrics (1981–2001). In: Proceedings of the American Society for Information Science and Technology, ASIST Annual Meeting 2002, 39, 24–34. Wiley Online Library.
Erikson, M. G., & Erlandson, P. (2014). A taxonomy of motives to cite. Social Studies of Science, 44(4), 625–637. Sage.
Harzing, A., & van der Wal, R. (2008). Google Scholar as a new source for citation analysis. Ethics in Science and Environmental Politics, 8, 61–73. Inter-Research Science Publisher.
Klavans, R., & Boyack, K. W. (2011). Scientific superstars and their effect on the evolution of science. In: Proceedings of the Science and Technology Indicators Conference 2011. ENID.
Moed, H. F. (2017). From Derek Price's networks of scientific papers to advanced science mapping. In: Applied Evaluative Informetrics, chapter 13, 177–191. Springer.
Moed, H. F. (2016). Eugene Garfield's influences upon the future of evaluative bibliometrics. Frontiers in Research Metrics and Analytics, 3:5. Frontiers.
Lucio-Arias, D., & Leydesdorff, L. (2009). An indicator of research front activity: Measuring intellectual organization as uncertainty reduction in document sets. Journal of the American Society for Information Science, 60(12), 2488–2498. Wiley Online Library.
Nisonger, T. E. (2004). Citation autobiography: An investigation of ISI database coverage in determining author citedness. College & Research Libraries, 65, 152–162. Association of College & Research Libraries.
Xiao, S., Yan, J., Li, C., Jin, B., Wang, X., Yang, X., Chu, S. M., & Zha, H. (2016). On modeling and predicting individual paper citation count over time. In: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16), New York.
Small, H. (2018). Citation indexing revisited: Garfield's early vision and its implications for the future. Frontiers in Research Metrics and Analytics, 3:8. Frontiers.
Stewart, B. W., Rivas, A., & Vuong, L. T. (2017). Structure in scientific networks: Towards predictions of research dynamism. arXiv:1708.03850v1, 13 Aug 2017.
De Winter, J. C. F., Zadpoor, A. A., & Dodou, D. (2013). The expansion of Google Scholar versus Web of Science: A longitudinal study. Scientometrics, 98(2), 1547–1565. https://doi.org/10.1007/s11192-013-1089-2. Springer.
Zhang, Y., Wang, X., Zhang, G., & Lu, J. (2018). Predicting the dynamics of scientific activities: A diffusion-based network analytic methodology. In: Proceedings of the 81st Annual Meeting of the Association for Information Science and Technology, Vancouver, Canada.
Perceived as Flags (9 = 36%)

Bonaccorsi, A. (2008). Search regimes and the industrial dynamics of science. Minerva, 46(3), 285–315. Springer.
Braam, R. R., Moed, H. F., & van Raan, A. (1991). Mapping of science by combined co-citation and word analysis I: Structural aspects. Journal of the American Society for Information Science (JASIS), 42(4), 233–251. Wiley Online Library.
Brooks, T. A. (1986). Evidence of complex citer motivations. Journal of the American Society for Information Science, 37, 34–36. Wiley Online Library.
Callon, M. (1986). The sociology of an actor-network: The case of the electric vehicle. In M. Callon, J. Law, & A. Rip (Eds.), Mapping the dynamics of science and technology (pp. 19–34). London, United Kingdom: MacMillan Press.
Kuhn, T. (1970). The structure of scientific revolutions (second, enlarged edition). Chicago, U.S.A.: Chicago University Press.
Moed, H. F. (1989). Bibliometric measurement of research performance and Price's theory of differences among the sciences. Scientometrics, 15(5–6), 473–483. Springer.
De Solla Price, D. J. (1965). Networks of scientific papers. Science, 149, 510–515. AAAS.
Small, H. (1978). Cited documents as concept symbols. Social Studies of Science, 8(3), 327–340.
Zuckerman, H., & Merton, R. (1971). Patterns of evaluation in science: Institutionalization, structure and functions of the referee system. Minerva, 9(1), 66–100. Reprinted in: Robert Merton, The Sociology of Science, chapter 21, pp. 460–496. Chicago/London: University of Chicago Press.
Perceived as Stepping Stones (2 = 8%)

Atran, S., & Norenzayan, A. (2004). Religion's evolutionary landscape: Counter-intuition, commitment, compassion, communion. Behavioral and Brain Sciences, 27(6), 713–770. Cambridge University Press.
Cottingham, J., Stoothoff, R., & Murdoch, D. (1984). The philosophical writings of Descartes (Vol. II). Cambridge/New York: Cambridge University Press.
Characteristics of Publication Delays Over the Period 2000–2016

Marc Luwel, Nees Jan van Eck, and Thed van Leeuwen
Introduction
By publishing the results of their work in the open literature, researchers acquire intellectual ownership by the principle of scientific priority (Merton, 1957). It makes these results available for scrutiny by the scientific community, which can build on them to advance scientific knowledge. Not all scientific disciplines use the same medium to communicate research outcomes. In the natural and life sciences, scholarly journals are used most. In addition, proceedings of international conferences play an important role in engineering and the applied sciences in informing the community about results. In the humanities, books play an important role in communicating scholarly work (Hicks, 2004; van Leeuwen, van Wijk, & Wouters, 2016). To some extent, this is also true of sub-disciplines in the social sciences.

Scholarly journals apply the peer review process to determine whether a manuscript is suitable for publication or not: competent experts working in the same field(s) covered by the manuscript provide a substantiated opinion and make suggestions when revisions of the original manuscript are deemed necessary. With the substantial scientific advances and technological innovations after the Second World War, and English becoming the default language for communicating scientific work in those disciplines where peer-reviewed journals are the dominant medium, the international diffusion of research results became stronger (van Leeuwen, Moed, Tijssen, Visser, & van Raan, 2001; Montgomery, 2013; van Raan, van Leeuwen, & Visser, 2011). With the accelerated growth of knowledge production, science became structured into more disciplines, and with the growing number of publications, more journals emerged, often along disciplinary lines. In the middle of this decade, about 2.5 million articles were published annually in about
28,000 active, peer-reviewed English-language journals. The number of journals and articles increased at a rate of about 3% per annum (Ware & Mabe, 2015). While scientific journals were mostly published by learned societies earlier (Kaufman, 1998), after World War II commercial publishers became more active in this dynamic landscape (Larivière, Haustein, & Mongeon, 2015). Driven by the rapid development of information and communication technologies, and starting in the last decade of the 20th century, the dissemination of journals changed radically to dual publishing, with electronic publishing complementing the paper version. In 1994, fewer than 75 peer-reviewed journals were available online. In 1998, 30% of the journals processed for the Science Citation Index had an online version, and in 2002 this crossed 75%. For the Social Science Citation Index, more than 60% were online, and for the Arts and Humanities Citation Index, 34% (van Orsdel & Born, 2002).

Parallel with the switch to e-publishing, another paradigm shift occurred. Although the idea of providing free-of-charge access to scholarly literature, primarily to peer-reviewed journals, is considerably older, the term 'open access' (OA) was coined in the beginning of this century with three important declarations: the Budapest Open Access Initiative in February 2002, the Bethesda Statement on Open Access Publishing in June 2003, and the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities in October 2003 (Suber, 2012). Although the definition is somewhat fluid, the goal is to make peer-reviewed articles accessible to read and free for reuse by removing subscription fees and, as much as possible, copyright and licensing restrictions as well. Articles are designated as 'gold OA' when they are directly, openly available on the OA journal's website. However, there are other types of OA that are more restricted. A detailed discussion is beyond the scope of this paper. Since 2000, the number of gold OA publications has increased significantly (Piwowar et al., 2018), either because new journals applying the gold OA model were started (such as the PLOS journals and the Frontiers journal family) or because existing journals changed from the toll model to OA.

Publishing in peer-reviewed journals can be considered a chronological process, schematically consisting of a number of steps, starting with carrying out the research that produces the results reported on in the manuscript. Next, the authors select a journal they consider appropriate to make their work known to the scientific community and submit the manuscript. Its reception is the first formal milestone in the publication process. If the manuscript is not desk-rejected immediately, in a next step one or more versions are reviewed by the referees, followed by the rejection or acceptance (under conditions) of the final version of the manuscript. The interval between the submission and the acceptance of a manuscript is called the editorial delay (Garg, 2016). After acceptance, the manuscript is technically prepared for publication and the electronic version is posted online, followed by its publication in a print version of the journal. Although some journals no longer have a printed version, most are still organized in volumes and issues; each issue has a cover date.
The interval between the manuscript's acceptance and its online posting is sometimes called the technical delay for online posting, and the interval between acceptance and publication in an issue the technical delay for publishing the issue.
There are many reasons for publication delays and for differences in their length across scientific disciplines, journals and publishers (Abt, 1992; Luwel & Moed, 1998; Amat, 2008). Firstly, there are differences in the organization and quality of the peer-review process. Before sending submitted manuscripts to reviewers, the editor-in-chief or the editorial board of some journals screens them and rapidly decides to desk-reject those that are unsuitable for publication in the journal. For a manuscript going through the full review process, the initial version is usually not accepted as submitted, and the reviewers may suggest minor or major revisions. Depending on the manuscript's quality and the authors' response to these suggestions, the review process may consist of more than one cycle. Finally, the manuscript is accepted in its initial or revised form or rejected by the editorial board. A third possibility is its withdrawal by the authors, because they disagree with the reviewers' opinion or consider the whole process too time consuming. The authors can decide to re-submit the rejected or withdrawn manuscript, in its original or a modified form, to another journal; even then, the reviewers' comments can be useful in enhancing the manuscript's quality.

As discussed by Bjork and Solomon (2013), publication delays are mostly viewed from a journal-centric perspective and not from a manuscript-centric perspective. In the latter, the delays are even longer, due to the sequential submission of the same or fairly similar manuscripts to two or more journals before they are eventually accepted for publication. In this case, a preliminary screening by the journal is valuable, as it allows the authors to decide more rapidly on another journal to which to submit the manuscript (Azar, 2004).

After acceptance, the final version of the manuscript is made publication-ready (copy-editing and typesetting); in some cases, the authors have the opportunity to check the publication proof, errors are corrected, and small modifications suggested by the authors are processed by the editors and the publication staff. Generally, the authors are given strict deadlines for sending their remarks on the publication proof. For print-only journals, the paper's finalized version is put in the waiting queue; its publication in an issue depends on the backlog and the journal's editorial policy. As most journals now have an online version too, publishers also make the electronic version of papers available as 'in-press' or 'online-first', before a paper is formally put in an issue with a volume number, issue number, and page numbers. Each issue normally has a cover date that is equal to or later than the article's online publication date. For most electronic OA journals, a different procedure is applied: no issues are used and the paper is published immediately when it is ready. Making accepted manuscripts electronically available under different 'in-press' labels before assigning volume, issue, and page numbers, together with the increasing use of preprint servers, blurs the notion of 'being published'. There is sometimes not even consensus on the exact date a journal article became available online (Haustein, Bowman, & Costas, 2015).
Moreover, the early release of e-versions results in an increase in citations to papers that have not yet been formally published in an issue, and speeding up the read-cite-read cycle has a direct impact on bibliometric indicators such as impact factors
(Egghe & Rousseau, 2000; Yu, Guo, & Li, 2006; Moed, 2007; Shi, Rousseau, Yang, & Li, 2006; Echeverria, Stuart, & Cordon-Garcia, 2017; Dong, Loh, & Mondry, 2006; Gonzalez-Betancor & Dorta-Gonzalez, 2019; Tort, Targino, & Amaral, 2012; Lozano, Larivière, & Gingras, 2012; Heneberg, 2013; Alves-Silva et al., 2016).

The impact on publication delays of the introduction of the dual print format of journals and the publication of electronic journals, followed by the rise of OA, has been studied by a number of authors. Bjork and Solomon (2013) provided an overview of 14 articles published on this subject up to 2009. Using stratified samples, these authors also analyzed the publication delays of 2700 articles published in 135 journals indexed in Scopus. As in previous studies, they found large differences in publication delays, not only between disciplines, but also between journals in the same discipline. Chen, Chen, and Jhanji (2013) analyzed the publication delays of 51 ophthalmology journals by randomly choosing 12 papers in each journal. They found differences in publication delays between journals and no correlation between delays and the impact factor. Garg (2016) studied 1223 articles in 13 journals published by India's CSIR-NISCAIR and also found that publication delays vary from one discipline to another and among journals. Each discipline, and even each journal, seems to have its own publication culture, and its elements and their interactions are not yet fully understood.

For a set of articles on food research, Yegros and Amat (2009) found that the editorial delay is influenced by the authors' experience and not by the countries mentioned in the address byline. For Nature, Science and Physical Review Letters, Lin, Hou, and Wu (2016) found that papers with shorter editorial delays have a higher probability of becoming highly cited. For Nature, Science and Cell, Shen et al. (2015) also found a tendency, although statistically weak, towards an inverse relation between editorial delay and the number of citations a paper receives. Fiala, Havrilova, Dostal, and Paralic (2016) studied, for 3 journals and a total of 1541 articles, the influence of editorial board membership on the editorial delay, and found a significant reduction in the interval between submission and acceptance for publications co-signed by board members for only one journal. For 261 papers published in 29 ecology journals, Alves-Silva et al. (2016) found no relation between publication delays and either the length of the papers or the number of papers published per year and per journal. Tosi (2009) discussed some of the reasons for the long editorial delay in the management discipline and formulated suggestions to resolve them. Lotriet (2012) identified a number of sources of delay in the editorial process of the Australian Medical Journal, an OA journal.

All these studies stress the effect of publication delays on scholarly work and the necessity to better understand the impact of recent changes in the publication process on these delays. Earlier studies are based on rather small sets of publications and journals, often limited to one or a few disciplines and publication years. The time-consuming and tedious collection of information on the dates related to the publication process explains these limitations to a large extent.
This paper is based on a large set of peer-reviewed journals published between 2000 and 2016 by Elsevier, one of the world's leading publishing houses. For the first time, a very large set of papers from nearly all scientific disciplines is used to study the main characteristics of the scholarly publication process and how they have changed over the years. In the first section of this paper, the construction of the dataset and the methodology are presented. The next section gives an overview of the results, and in the concluding section their contribution to a better understanding of the publication process, as well as their limitations, are discussed, together with possible topics for follow-up research.
Data and Methodology

Our analysis of publication delays is based on a dataset obtained by the Centre for Science and Technology Studies (CWTS) of Leiden University that contains the full text of more than five million works published in Elsevier journals (Boyack, van Eck, Colavizza, & Waltman, 2018). Elsevier's ScienceDirect Article Retrieval API was used to download the full text of the publications in XML format. Only publications for which Leiden University had a subscription could be accessed, and the downloading was limited to those publications for which a full text was available. The dataset considered in this paper contains only publications in English that are labelled by Elsevier as 'full-length article', 'short communication' or 'review article' and that were published between 2000 and 2016. The XML full text of each of the remaining 4,582,044 publications was parsed and, when available, the following five dates were extracted:

1. received date (the date on which the original manuscript was submitted to the journal),
2. revised date (when the revised manuscript was submitted to the journal),
3. accepted date (when the manuscript was accepted for publication),
4. online date (when the manuscript was posted online), and
5. cover date (when the manuscript was published in an issue).

This publication scheme coincides with the one used by Dong et al. (2006) and Amat (2008). However, in calculating publication delays some caution is required, as the cover date does not always coincide with the issue's actual publication date; it can lag behind or precede it by several weeks, even a few months. Unfortunately, this information is not publicly available.

Publications in the dataset were matched with those in the in-house version of the Web of Science (WoS) available at CWTS. The CWTS version of the WoS database contains the Science Citation Index Expanded, the Social Science Citation Index, and the Arts and Humanities Citation Index. The matching of publications was carried out in a two-step process. It was first based on the digital object identifier (DOI). If no DOI-based match could be made, publications were matched on a combination of the last name and the first initial of the first author, the publication year, the volume number, and the first page number.
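A minimal sketch of this two-step matching, assuming two pandas DataFrames with illustrative column names (not the actual CWTS implementation), could look as follows:

```python
import pandas as pd

def match_publications(elsevier: pd.DataFrame, wos: pd.DataFrame) -> pd.DataFrame:
    """Two-step matching: first on DOI, then on first-author surname and initial,
    publication year, volume number, and first page number."""
    # Step 1: DOI-based matching (DOIs normalized to lower case, missing DOIs dropped).
    e = elsevier.assign(doi=elsevier["doi"].str.lower())
    w = wos.assign(doi=wos["doi"].str.lower()).dropna(subset=["doi"])
    by_doi = e.dropna(subset=["doi"]).merge(w[["doi", "wos_id"]], on="doi", how="inner")

    # Step 2: metadata-based matching for publications without a DOI match.
    unmatched = e[~e["pub_id"].isin(by_doi["pub_id"])]
    keys = ["first_author_surname", "first_author_initial",
            "pub_year", "volume", "first_page"]
    by_meta = unmatched.merge(w[keys + ["wos_id"]], on=keys, how="inner")

    return pd.concat([by_doi, by_meta], ignore_index=True)
```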
Publications for which a match was obtained with the WoS database were also linked to the five broad fields of science distinguished in the 2019 version of the CWTS Leiden Ranking (https://www.leidenranking.com): 1. Biomedical and Health Sciences (B&HS), 2. Life and Earth Sciences (L&ES), 3. Mathematical and Computer Science (M&CS), 4. Physical Sciences and Engineering (PS&E), and 5. Social Sciences and Humanities (SS&H). These five broad fields of science, further called disciplines, are defined on the basis of a large number of micro fields that are identified using an algorithmic procedure based on citation relations between publications (Waltman & van Eck, 2012; see https://www.leidenranking.com/information/fields). For our analysis of publication delays, the relevant aggregation level is the journal. Each journal was therefore assigned to one of the five disciplines: a journal was assigned to a discipline if the majority of its publications in the CWTS Leiden Ranking belonged to that discipline. Some journals, including the multidisciplinary journal Heliyon, could not be assigned to a discipline because they were not covered in the WoS database. Consequently, about 2.5% of the 4.6 million publications in our analysis could not be assigned to a discipline; these were disregarded in the analysis of the publication delays at the disciplinary level.

In order to determine whether OA publishing has an effect on the length of the whole publishing process, we also determined the business model of each journal in our dataset. Since OA publishing strongly focuses on the digital world, we might expect it to have different pathways from submission to publication. A common source to support analyses of OA is the Directory of Open Access Journals (https://doaj.org), also known as the DOAJ list. This list contains journals that are fully gold OA, which means that the journals on the DOAJ list publish manuscripts after the author(s) pay the article processing charges (APCs), which leads to immediate accessibility of the published material. We selected all the journals from the DOAJ list for which the publisher is 'Elsevier', with one exception (a journal on the list under the name 'Elsevier España S.L.U.'). The selected journals were subsequently linked to the journals in our dataset, based on ISSN and eISSN. In this way, we were able to classify 363 journals in our dataset as gold OA. Next to gold OA, one can distinguish green OA, hybrid OA and bronze OA (Piwowar et al., 2018). However, these other types of OA publishing have a more fragmented character, and comparing them to the traditional way of publishing is therefore more difficult.

All collected and processed data were stored in a structured table in an SQL Server database. This table contains, for each publication in our dataset, the information needed for our study: journal, discipline, publication year, publication dates (received, revised, accepted, online, cover), and business model (toll or OA). As shown in earlier work, the distribution of publication delays is mostly skewed and contains outliers (Chen et al., 2013; Alves-Silva et al., 2016). Therefore, both mean and median values are used in the descriptive statistics and the trend analysis in the results section. As a measure of the statistical dispersion of the distributions, the interquartile range (IQR), equal to the difference between the 75th and the 25th percentile, is used. Whenever necessary to get a better understanding of a variable, the outliers are also investigated.
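The two journal-level classifications described above can be sketched as follows; this is a simplified illustration under assumed column names, not the actual CWTS code. A journal inherits the discipline of the majority of its WoS-matched publications, and it is flagged as gold OA when its ISSN or eISSN occurs among the DOAJ journals listed with Elsevier as publisher.

```python
import pandas as pd

def assign_journal_disciplines(pubs: pd.DataFrame) -> pd.Series:
    """Majority vote: each journal gets the discipline to which most of its
    WoS-matched publications belong (journals without matches remain unassigned)."""
    matched = pubs.dropna(subset=["discipline"])
    counts = matched.groupby(["journal_id", "discipline"]).size()
    majority = counts.groupby(level="journal_id").idxmax()  # (journal_id, discipline) tuples
    return majority.map(lambda pair: pair[1])

def flag_gold_oa(journals: pd.DataFrame, doaj: pd.DataFrame) -> pd.Series:
    """Gold OA if the journal's ISSN or eISSN appears among the DOAJ journals
    whose publisher is Elsevier."""
    elsevier_doaj = doaj[doaj["publisher"].str.contains("Elsevier", na=False)]
    doaj_issns = set(elsevier_doaj["issn"]).union(set(elsevier_doaj["eissn"]))
    return journals["issn"].isin(doaj_issns) | journals["eissn"].isin(doaj_issns)
```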
Results

Publication delays are analyzed from three perspectives: the scientific discipline, the journal, and the journal's availability under the OA or the toll access publishing model.
Differences in Publication Delays Among Disciplines

The dates needed to study publication delays, mentioned in the previous section, are not always available. For the period 2000–2016, Table 1 gives an overview of the number of publications and of those for which the received date, the accepted date, the online publication date, and the cover date of the print version are available, as well as those for which these four dates are available simultaneously and in chronological order. All publications have an online and a cover date, but only 80% have a received or an accepted date. For 74% of the total number of publications, the four dates are available. For some publications, the order of the four dates is not respected; for example, the online publication date precedes the accepted date. Such cases are nonsensical. A detailed analysis shows that most of them occur in the early 2000s, when electronic publishing was introduced; some occasional misprints also occur. These cases represent less than 1% of the total dataset and are not taken into account in the analysis. The rest of the analysis is therefore based on the 3,375,429 publications in our dataset with all four dates in chronological order.

As could be expected, the number of publications is unevenly distributed over the five disciplines. In Table 1, we see that PS&E and B&HS are the largest disciplines, with 38% and 35% of the publications respectively, while M&CS and SS&H are the smallest with around 5% each. About 16% of the publications are assigned to L&ES.
Table 1  The number of publications in each of the five disciplines: the total number, the number with a received date, with an accepted date, with an online date, with a cover date, with all four dates, and with all four dates in chronological order. These numbers are also given for the total dataset. Due to the classification criteria, not all publications are assigned to a discipline.

Discipline | Total | With a received date | With an accepted date | With an online date | With a cover date | With all four dates | With all four dates in chronological order
B&HS | 1,736,596 | 1,266,195 | 1,348,647 | 1,736,596 | 1,736,596 | 1,189,642 | 1,179,138
L&ES | 615,174 | 543,911 | 553,393 | 615,174 | 615,174 | 534,438 | 530,300
M&CS | 275,419 | 221,167 | 193,456 | 275,419 | 275,419 | 191,939 | 191,355
PS&E | 1,601,844 | 1,369,235 | 1,299,146 | 1,601,844 | 1,601,844 | 1,279,936 | 1,272,944
SS&H | 224,774 | 150,876 | 145,207 | 224,774 | 224,774 | 135,938 | 134,212
Total dataset | 4,582,044 | 3,627,657 | 3,615,539 | 4,582,044 | 4,582,044 | 3,407,039 | 3,375,429
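The selection used in the remainder of the analysis, keeping only the publications with all four dates present and in chronological order (the last column of Table 1) and deriving the three delays from them, could be sketched as follows. This is an illustration under assumed column names (with the dates parsed as datetime columns) and an assumed ordering condition, not the authors' implementation.

```python
import pandas as pd

DATE_COLS = ["received", "accepted", "online", "cover"]

def select_and_compute_delays(pubs: pd.DataFrame) -> pd.DataFrame:
    """Keep publications with the four dates present and in chronological order,
    then add the editorial delay and the two technical delays (in days)."""
    df = pubs.dropna(subset=DATE_COLS).copy()
    # Assumed ordering: received <= accepted <= online <= cover; the exact
    # condition applied in the study (e.g. for online versus cover) may differ.
    in_order = (
        (df["received"] <= df["accepted"])
        & (df["accepted"] <= df["online"])
        & (df["online"] <= df["cover"])
    )
    df = df[in_order]
    df["editorial_delay"] = (df["accepted"] - df["received"]).dt.days
    df["online_delay"] = (df["online"] - df["accepted"]).dt.days
    df["issue_delay"] = (df["cover"] - df["accepted"]).dt.days
    return df
```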
It is to be remembered that not all journals, and therefore not all publications, are assigned to a discipline. This explains why the sum of the number of publications over all disciplines is smaller than the total number of publications in Table 1.

In our analysis, the manuscript's revision date is discarded, because it does not contain much relevant information and its inclusion would have substantially reduced the sample, as shown in Table 2. In the early 2000s, the revised date was given for only around 60% of the publications for which the four dates were available; this percentage increased to 75% in 2008 and to 86% in 2016. The difference between the revised date and the accepted date remained stable over the period 2000–2016: the median value is around one week, with an IQR decreasing somewhat from 28 to 20 days.
Table 2  For the period 2000–2016, the annual number of publications with the four dates (received, accepted, online, cover) and the number that also have a revised date. For the latter subset, the average (Avg.) and the median (Med.) number of days between the revised and the accepted date and the interquartile range (IQR) are given. The table also contains the values of these statistical parameters for the total period.

Year | With all four dates | With all four dates and a revised date | Avg. days revised-accepted | Med. | IQR
2000 | 86,870 | 48,910 (56%) | 27 | 9 | 28
2001 | 93,469 | 55,969 (60%) | 27 | 9 | 27
2002 | 101,278 | 61,254 (60%) | 27 | 10 | 29
2003 | 117,311 | 72,635 (62%) | 27 | 10 | 28
2004 | 134,319 | 86,460 (64%) | 26 | 9 | 27
2005 | 143,385 | 93,135 (65%) | 25 | 8 | 25
2006 | 166,119 | 115,852 (70%) | 21 | 7 | 20
2007 | 177,439 | 128,658 (73%) | 20 | 7 | 20
2008 | 195,639 | 146,379 (75%) | 20 | 7 | 20
2009 | 209,152 | 159,578 (76%) | 19 | 7 | 19
2010 | 215,164 | 164,494 (76%) | 18 | 6 | 19
2011 | 234,063 | 180,340 (77%) | 18 | 6 | 19
2012 | 243,958 | 187,137 (77%) | 18 | 7 | 20
2013 | 276,954 | 212,860 (77%) | 18 | 7 | 20
2014 | 294,106 | 229,462 (78%) | 18 | 7 | 20
2015 | 325,301 | 265,969 (82%) | 17 | 7 | 20
2016 | 392,512 | 336,832 (86%) | 18 | 7 | 21
All years | 3,407,039 | 2,545,924 (75%) | 20 | 7 | 21
Figure 1 shows, for the five disciplines, the ratio of the number of publications with the four dates available in chronological order to the total number of publications. Starting at the beginning of the century at 39% for SS&H and 71% for L&ES, this ratio increases steadily. At the end of the period it reaches its highest value, 95%, for L&ES and PS&E, followed by SS&H and M&CS. With an already relatively high value of 56% in 2000, B&HS shows the smallest increase in this ratio, 24 percentage points.

Fig. 1  For each of the five disciplines and for all the publications in the dataset, the evolution of the percentage of publications with all four dates (received, accepted, online, and cover) in chronological order relative to the total number of publications

In the next step, the descriptive statistics of the editorial delay (the period between the submission/received date of the manuscript and the accepted date of its final version; Amat, 2008), the technical delay for online publishing (the period between the accepted date and the date the paper appears on the journal's website), and the technical delay for publishing the issue (the period between the accepted date and the cover date of the issue containing the paper) are calculated. Figure 2 gives, for the total dataset and for each discipline, the evolution of the average and the median values of these three types of delays. As the distributions of the different variables are skewed, especially for the technical delay for online publishing and at the beginning of the period 2000–2016, the evolution of the median values is discussed.

For each discipline, the technical delay for online publishing decreases strongly over the period, to around 7–9 days for B&HS, L&ES, and PS&E and to about two weeks for the other two disciplines. Publications in M&CS and in SS&H show the strongest decrease in the technical delay for online publishing: at the beginning of the 2000s, this delay was around 60%–100% longer than for the other three disciplines. Over the period 2000–2016, the technical delay for publishing the issue also decreases. Again, the reduction is most pronounced for SS&H and M&CS, at about 60%, while for L&ES this delay was nearly halved. For the other two disciplines, the technical delay for publishing a paper in an issue was already relatively short at the beginning of the period, and its reduction of about 30% is less spectacular.
Fig. 2  For the period 2000–2016 and for the publications with the four dates in chronological order, the average and the median number of days between the received and accepted date (Avg./Med. received-accepted), the accepted and online date (Avg./Med. accepted-online), and the accepted and cover date (Avg./Med. accepted-cover). These delays are given for (a) all the publications in the dataset and for (b)–(f) each of the five disciplines
For L&ES, the median value of the editorial delay decreases by 28% over the period 2000–2016. For B&HS, M&CS, and SS&H, a slight reduction in this delay is observed, and for PS&E this delay remains nearly constant. However, at the end of the period, the median value of the editorial delay differs considerably among the disciplines: for B&HS and PS&E it is around 100 days, and somewhat longer for L&ES with 128 days. For M&CS and SS&H, however, the median value of this delay is between 7 and 8 months, more than twice as long as that for B&HS and PS&E. As already indicated, the underlying distributions are highly skewed and contain outliers. This is illustrated in Table 3, where the descriptive statistics of the distributions are given for the years 2000, 2008, and 2016 and for each discipline.
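The per-discipline, per-year statistics behind Fig. 2, and the descriptive statistics reported in Table 3, could be computed along the following lines; this is illustrative pandas code building on the delay columns sketched earlier, not the authors' implementation.

```python
import pandas as pd

def iqr(series: pd.Series) -> float:
    """Interquartile range: the difference between the 75th and 25th percentiles."""
    return series.quantile(0.75) - series.quantile(0.25)

def delay_statistics(df: pd.DataFrame) -> pd.DataFrame:
    """Mean, median, and IQR of the three delays per discipline and publication year."""
    delays = ["editorial_delay", "online_delay", "issue_delay"]
    return df.groupby(["discipline", "pub_year"])[delays].agg(["mean", "median", iqr])
```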
Table 3  Descriptive statistics of the publication delays for each discipline and for the total dataset: the number of publications (No. of pub.), the average (Avg.), the median (Med.), the interquartile range (IQR), the minimum (Min.), and the maximum (Max.) of the number of days between the received and accepted date (Days received-accepted), the accepted and online date (Days accepted-online), and the accepted and cover date (Days accepted-cover). The statistics are presented for the publication years (a) 2016, (b) 2008, and (c) 2000
The number of publications in Table 3 increased strongly between 2000 and 2016, even by a factor of 9 for M&CS and of 10 for SS&H; for the three other disciplines, the increase is between 3.5 and 5. Two trends explain this observation: the increase in the number of Elsevier publications and the progressive addition of the received and accepted dates to the metadata of the publications (only publications with all four dates are taken into account). For the three delays, the IQR is smaller in 2016 than in 2008 and 2000, indicating that the middle 50% of the distribution has become less dispersed. Although the maximum values of the editorial and the two technical delays also decrease for all disciplines, outlier values between 2 and 9 years are still found even in 2016. Inspection of the printed (hard copy) version of the 15 papers with the largest delays reveals that these delays are most probably not caused by misprints.
Differences in Publication Delays Between Journals Within a Discipline

In the previous section, the publication delays were analyzed at the level of scientific disciplines. Publications appear in journals, which are assigned to disciplines. Table 4 shows the number of journals included in our analysis for each discipline and for the publication years 2000, 2008, and 2016. To keep the subsequent statistical analysis meaningful, journals with fewer than 10 papers (with all four dates in chronological order) in a publication year were ignored for that year. Although the share of journals not taken into account because of this criterion seems rather large (15% in 2000 and around 10% in 2008 and 2016), the associated number of papers not taken into account is very small (0.5% in 2000 and 0.2% in 2008 and 2016). In Table 4, we see that there is a large spread between the disciplines: for the publication year 2016, 601 journals are assigned to B&HS, the largest discipline, and 129 journals to M&CS, the discipline with the lowest number of journals. It should be remembered that not all journals are assigned to a discipline; this explains why the sum of the number of journals over all disciplines is smaller than the total number of journals in Table 4.
Table 4  For the publication years 2000, 2008, and 2016, the number of journals assigned to each of the five disciplines and in the total dataset. Only journals with at least 10 publications in a publication year are taken into account. Due to the classification criteria, not all journals are assigned to a discipline.

Discipline | 2000 | 2008 | 2016
B&HS | 237 (36%) | 410 (39%) | 601 (32%)
L&ES | 122 (19%) | 181 (17%) | 237 (13%)
M&CS | 42 (6%) | 89 (8%) | 129 (7%)
PS&E | 186 (29%) | 236 (22%) | 317 (17%)
SS&H | 55 (8%) | 111 (10%) | 217 (12%)
Total dataset | 651 (100%) | 1058 (100%) | 1881 (100%)
Table 5  For the publication year 2016 and for each of the five disciplines, the number of journals and the descriptive statistics of the number of publications in these journals: the average (Avg.), the median (Med.), the interquartile range (IQR), the minimum (Min.), and the maximum (Max.). Only journals with at least 10 publications in 2016 are taken into account.

Discipline | No. of journals | No. of pub. per journal: Avg. | Med. | IQR | Min. | Max.
B&HS | 601 | 201 | 144 | 187 | 10 | 2748
L&ES | 237 | 248 | 165 | 224 | 10 | 2646
M&CS | 129 | 195 | 120 | 210 | 22 | 1951
PS&E | 317 | 447 | 298 | 402 | 10 | 3307
SS&H | 217 | 112 | 76 | 83 | 10 | 2227
All disciplines | 1881 | 206 | 111 | 205 | 10 | 3307
In Table 5, the descriptive statistics of the number of papers per journal are also presented for the publication year 2016. Again, these distributions are skewed. There are also large differences among disciplines: the median journal size is a factor of 4 larger for PS&E than for SS&H. For the three other disciplines, the median is comparable, with values between 120 and 165, and the IQR is around 200 publications. The number of publications in the largest journal also differs among the disciplines: from 3307 publications in the Journal of Alloys and Compounds, assigned to PS&E, to 1951 publications in Neurocomputing, assigned to M&CS.

Within a discipline, there are also differences in the publication delays among journals. As mentioned in the introduction, Alves-Silva et al. (2016) studied, for a small sample of publications in ecology journals, the relationship between the number of publications in a journal and its publication delays. Figure 3 shows, for the publication year 2016 and the five disciplines, the median value of the publication delays versus the number of publications per journal: no relationship emerges. This is confirmed by the Spearman rank correlation between the number of publications per journal and the median value of the publication delays. In Table 6, we can see that for the five disciplines the values of the Spearman rank correlation are often negative and never larger than 0.31 in absolute value (except for PS&E in 2008, where the correlation between journal size and the median value of the period between the received and accepted date was −0.37, and −0.42 for the median value of the period between the accepted and cover date). Another indicator of a journal's size is its annual number of volumes and issues. For a journal with two or more volumes in a publication year, the total number of distinct volume-issue pairs was counted; for a journal with different volumes in a publication year but no division into issues, the number of volumes was used. The absolute value of the Spearman rank correlation between this indicator and the different delays is never larger than 0.3, indicating the absence of, or at most a very weak, relationship between the size of a journal and the publication delays.

Fig. 3  For the publication year 2016 and the disciplines PS&E and SS&H, the scatterplots display the relationship between the number of publications per journal (x-axis) and the median value of the number of days between the received and accepted date (Med. received-accepted), the accepted and online date (Med. accepted-online), and the accepted and cover date (Med. accepted-cover) (y-axis)
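The journal-level correlations reported in Table 6 could be computed roughly as follows; this is an illustrative sketch using scipy with assumed column names, in which journals with fewer than 10 publications in the year are excluded, as in the analysis above.

```python
import pandas as pd
from scipy.stats import spearmanr

def journal_size_vs_delay(df: pd.DataFrame, delay: str = "editorial_delay") -> dict:
    """Spearman rank correlation, per discipline, between journal size (number of
    publications in the year) and the journal's median delay."""
    per_journal = (
        df.groupby(["discipline", "journal_id"])
          .agg(n_pub=("journal_id", "size"), med_delay=(delay, "median"))
          .reset_index()
    )
    per_journal = per_journal[per_journal["n_pub"] >= 10]
    correlations = {}
    for discipline, group in per_journal.groupby("discipline"):
        rho, _ = spearmanr(group["n_pub"], group["med_delay"])
        correlations[discipline] = rho
    return correlations
```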
As we have seen in Fig. 2, at the publication level the median value of the technical delay for online publishing decreased spectacularly during the period 2000–2016, while the technical delay for publishing the issue was reduced by a third. These trends could also be expected to be present at the journal level. However, it is not easy to make them visible, as the yearly coverage of journals in the dataset changes for a variety of reasons: new journals are launched and other journals are discontinued, continued under a different name, or ignored in one or more years due to the criteria used in the analysis. To get an insight into the evolution of the delays for journals, the improvement between 2000 and 2016 in the editorial delay, the technical delay for online posting, and the technical delay for publishing the issue was calculated. As can be seen in Fig. 4, the technical delay for online posting decreased for all journals, and the technical delay for publishing the issue decreased substantially for the majority of journals. The reduction in the editorial delay shows a mixed picture: we observe a decreased editorial delay for about half the journals, but an increased editorial delay for the other half. Table 7 lists the top 10 journals with the largest reduction in the editorial delay and in the technical delay for publishing the issue. Although SS&H is one of the disciplines with the smallest number of journals, it is the best represented discipline for both types of delays.
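A sketch of the journal-level improvement calculation underlying Fig. 4 and Table 7, again under illustrative column names: the median delay per journal is computed for 2000 and for 2016, and the difference is taken for journals present in both years.

```python
import pandas as pd

def journal_improvement(df: pd.DataFrame, delay: str = "editorial_delay") -> pd.Series:
    """Change in the median delay per journal between 2000 and 2016
    (negative values indicate a shorter delay in 2016)."""
    medians = (
        df[df["pub_year"].isin([2000, 2016])]
          .groupby(["journal_id", "pub_year"])[delay]
          .median()
          .unstack("pub_year")
          .dropna()  # keep only journals with publications in both years
    )
    return medians[2016] - medians[2000]
```

Sorting the resulting series would give top-10 lists of the kind shown in Table 7.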
Open Access Journals Versus Subscription Journals

As indicated in the introduction, from the beginning of this century OA has become an alternative to the toll access model for publishing in peer-reviewed journals. Some 90% of all gold OA journals owned by Elsevier were started after 2009, with some older journals going back to 1985 (these journals have probably changed from toll access to OA).
Table 6  For each of the five disciplines and for the total dataset, the Spearman rank correlation between the number of publications in each journal and the median of the number of days between the received and accepted date (Median days received-accepted), the accepted and cover date (Median days accepted-cover), and the accepted and online date (Median days accepted-online). The correlation coefficients and the number of journals are presented for the publication years (a) 2016, (b) 2008, and (c) 2000.

(a) 2016
Discipline | No. of journals | Median days received-accepted | Median days accepted-cover | Median days accepted-online
B&HS | 601 | −0.11 | −0.26 | −0.01
L&ES | 237 | −0.16 | −0.16 | 0.12
M&CS | 129 | −0.05 | 0.03 | 0.19
PS&E | 317 | −0.22 | −0.31 | 0.16
SS&H | 217 | −0.20 | −0.28 | −0.01
Total dataset | 1881 | −0.08 | −0.31 | 0.11

(b) 2008
Discipline | No. of journals | Median days received-accepted | Median days accepted-cover | Median days accepted-online
B&HS | 410 | −0.11 | −0.14 | −0.01
L&ES | 181 | −0.13 | −0.24 | −0.05
M&CS | 89 | 0.04 | −0.03 | 0.17
PS&E | 236 | −0.37 | −0.42 | −0.20
SS&H | 111 | −0.08 | −0.10 | 0.15
Total dataset | 1058 | −0.23 | −0.25 | −0.07

(c) 2000
Discipline | No. of journals | Median days received-accepted | Median days accepted-cover | Median days accepted-online
B&HS | 237 | −0.05 | −0.07 | −0.08
L&ES | 122 | −0.11 | −0.14 | 0.12
M&CS | 42 | 0.08 | 0.03 | 0.01
PS&E | 186 | −0.26 | −0.21 | −0.14
SS&H | 55 | 0.06 | 0.04 | 0.23
Total dataset | 651 | −0.27 | −0.19 | −0.15
However, gold OA journals still represent only a fraction of the total number of Elsevier journals: 14% in 2016 (Table 8). As indicated above, only journals with at least 10 publications with the four dates in chronological order in 2016 are taken into account. For the publication year 2016, the number of publications in OA and in toll access journals, as well as the statistics on their publication delays, are given in Table 9. Only 4.8% of the publications appeared in OA journals. It is remarkable that, for the OA journals, the median value and the IQR of the delay between the accepted and the online publication date are about two weeks longer than for the toll access journals.
Fig. 4  Distribution of the reduction in the number of days between (a) the received and accepted date (Received-accepted), (b) the accepted and online date (Accepted-online), and (c) the accepted and cover date (Accepted-cover) over all journals with publications in 2000 and 2016
By contrast, the median values of the delay between the received and accepted date and of the delay between the accepted and cover date are, respectively, 18 and 17 days shorter for the OA journals. The number of publications in OA journals is large enough to carry out a meaningful statistical analysis only for the disciplines B&HS and PS&E (Table 10). The first observation is the relatively small difference in the median value of the delay between the received and the accepted date: for B&HS it is 9 days longer for the OA journals, and for PS&E it is 3 days shorter. For both disciplines, the median value of the accepted-to-cover delay is more than 3 weeks shorter for the OA journals. Surprisingly, for B&HS the median value of the delay between the accepted and online date is 3 weeks longer for OA journals, whereas for PS&E the difference between the medians is only 4 days. Finally, for both disciplines, the statistical dispersion (IQR) of the three intervals is much larger for the OA journals than for the toll access journals.
Table 7  Top 10 journals with at least 10 publications in 2000 and 2016 that show the largest reduction in the median value of the number of days (difference between 2016 and 2000) between (a) the received and accepted date and (b) the accepted and cover date. The journal title, the ISSN, the journal's assigned discipline, and the number of publications in 2000 and 2016 are also presented
Table 8  For each of the five disciplines and for the total dataset, the number of journals published in 2016 under the open access and the toll access model. Only journals with at least 10 publications in 2016 are taken into account.

Discipline | Open access | Toll access
B&HS | 45 | 556
L&ES | 13 | 224
M&CS | 2 | 127
PS&E | 21 | 296
SS&H | 6 | 211
Total dataset | 271 | 1610
Discussion and Conclusions

As indicated in the introduction, researchers from many disciplines often complain that the publication process takes too much time and that the interval from the submission of a manuscript until it is publicly available is too long. The publication process is complex, and publication delays and their impact on knowledge dissemination and on the priority reward system have long been discussed in the scholarly literature. The first paper with the words 'publication delay' in its title published in a journal included in the WoS database dates from 1961 (Wilcken, 1961). With the emergence of electronic publishing, the open access model, and the use of social media, the number of studies on this subject has increased considerably over the last 20 years. As discussed in the introduction, due to the tedious and often manual work of collecting data on the publication process, the common characteristic of these studies is their limitation to relatively small samples of papers. These small datasets make it difficult to analyze differences in the publication process between and within disciplines, and even more complicated to identify trends as well as the impact of the above-mentioned changes on the diffusion of the results of scholarly work.

Today, information processing tools make it possible to extract the relevant information on the publication process from the full text of papers published in peer-reviewed journals. In this paper, using the papers published in Elsevier journals between 2000 and 2016, we have attempted to provide foundational knowledge about and fresh insights into the publication process. The first observation is the increase over this period in the number of publications and journals whose metadata contain the relevant dates of a paper's publication trajectory: the submission/receipt date, the accepted date, the online publication date, and the cover date of the journal issue in which the paper is finally published.
Table 9  For the publication year 2016 and for the journals published under the open access and the toll access model, as well as for all journals: the average (Avg.), the median (Med.), the interquartile range (IQR), the minimum (Min.), and the maximum (Max.) of the number of days between the received and accepted date, the accepted and online date, and the accepted and cover date. Only journals with at least 10 publications in 2016 are taken into account.

Journal type | No. pub. | Received-accepted (Avg./Med./IQR/Min./Max.) | Accepted-online (Avg./Med./IQR/Min./Max.) | Accepted-cover (Avg./Med./IQR/Min./Max.)
Open access | 18,882 | 126 / 99 / 103 / 0 / 1544 | 42 / 23 / 36 / 0 / 875 | 154 / 74 / 118 / 0 / 2045
Toll access | 370,178 | 149 / 117 / 115 / 0 / 3217 | 17 / 8 / 16 / 0 / 1267 | 103 / 91 / 70 / 0 / 1868
All journal types | 389,060 | 148 / 116 / 115 / 0 / 3217 | 18 / 8 / 17 / 0 / 1267 | 105 / 91 / 71 / 0 / 2045
Table 10  For the publication year 2016 and for the journals assigned to (a) B&HS and (b) PS&E, the values of the same statistical parameters as in Table 9 are presented
In this analysis, the revision date was not used. The number of publications mentioning a revision date in their metadata increased over the years, but at the end of the period this information was still not available for 15% of the publications. Moreover, the median value of 7 days between the revision date and the accepted date suggests that only the date of the last revision is given, which makes this piece of information of limited value. Further, no information is available on the number of revisions and their corresponding dates. The metadata therefore do not allow us to analyze this important aspect of the publication process or to identify potential causes of long revision delays. To study the revision process in follow-up research, additional information will probably have to be collected from editors and publishers.

During 2000–2016, the online publication of papers before their publication in an issue was gradually introduced in all disciplines. In 2016, the median value of the time between the accepted and the online date was between one and two weeks; compared to 2008, this delay was reduced by a factor of almost three. However, within each discipline there are outliers, and the manual inspection of a number of these cases indicates that large delays are usually not due to misprints but to a genuinely long publication process. At the same time, the technical delay for publishing in an issue also decreased; the most striking reduction occurred for SS&H and M&CS. In 2016, for all disciplines, the median number of days between the accepted date and the cover date is around 90.

Based on these results, it can be concluded that over the last two decades Elsevier and the editorial boards of its journals managed to speed up the technical aspects of the publication process. Even though the number of publications that needed to be processed increased substantially, the period between the acceptance and the actual publication of a paper on the journal's website or in an issue decreased. Further, no correlation has been found between the two technical delays and the size of a journal (expressed as the number of publications or issues per annum).

Although the editorial delay for papers in M&CS and SS&H has been considerably reduced, it remains twice as long as both the median value for all disciplines and the technical delay for publishing the issue. At the beginning of the 2000s, the median editorial delay for the other two disciplines (B&HS and PS&E) was about 100 days, already the lowest, and it was not further reduced over 2000–2016. This value seems to be a lower limit and is, again, not correlated with the journal's number of publications or issues per annum. This part of the publication process is the most labor intensive and depends both on the quality of the manuscripts and the number of revisions deemed necessary, and on the workload of the reviewers and their response time. Again, for individual papers the delay can be much longer. Papers with very short editorial delays have also been retained in the dataset; these short delays could point to solicited papers accepted almost immediately, without the full review process (Shen et al., 2015).

The overall conclusion is that for Elsevier journals publication delays have been reduced. That observation is all the more remarkable when taking into account the steady increase in the number of papers published and in the number of new journals launched each year. However, our observation does not contradict complaints that
in some research domains the delays have not been reduced and may even have increased. In this study, papers and journals are classified into five broad disciplines; complementary analyses of the characteristics of the publication process of sub-disciplines, using carefully selected sets of journals, undoubtedly still have to be made.

The impact of the OA model could be studied on a relatively large number of papers, mostly from B&HS and PS&E. As might be expected, for both disciplines the technical delay between acceptance and publication in an issue is shorter for papers in OA journals than for those in toll access journals. Perhaps more striking is the longer technical delay between acceptance and online publishing for papers in OA journals. However, these results have to be interpreted with some caution, as the dataset is limited to Elsevier journals and this publisher was not at the forefront of the OA movement. The number of OA journals compared to toll access journals is rather small and cannot be representative of the total population of OA journals. To get a proper insight into the influence and effects of OA publishing, a similar analysis should be conducted on the journal portfolios of a number of gold OA publishing houses, over a number of years. Only then can we assess to what extent Elsevier really operates differently, and get a better impression of the effect of running two publishing models next to each other, in particular by comparing the various phases that constitute the scholarly publishing process. This is certainly a subject of great importance for further research, in which other forms of open access publishing must also be included.

The analysis of the characteristics of the publication process of other commercial publishers and of learned societies is necessary to verify whether the conclusions of this study, which is limited to Elsevier journals, can be generalized. The concept of publication date can also be further fine-tuned, by making a distinction between an issue's cover date and the date it is made electronically accessible.
References

Abt, H. A. (1992). Publication practices in various sciences. Scientometrics, 24(3), 441–447.
Alves-Silva, E., Porto, A. C. F., Firmino, C., Silva, H. V., Becker, I., Resende, L., et al. (2016). Are the impact factor and other variables related to publishing time in ecology journals? Scientometrics, 108(3), 1445–1453.
Amat, C. B. (2008). Editorial and publication delay of papers submitted to 14 selected food research journals. Influence of online posting. Scientometrics, 74(3), 379–389.
Azar, O. H. (2004). Rejections and the importance of first response times. International Journal of Social Economics, 31(3), 259–274.
Bjork, B. C., & Solomon, D. (2013). The publishing delay in scholarly peer-reviewed journals. Journal of Informetrics, 7(4), 914–923.
Boyack, K. W., van Eck, N. J., Colavizza, G., & Waltman, L. (2018). Characterizing in-text citations in scientific articles: A large-scale analysis. Journal of Informetrics, 12(1), 59–73.
Chen, H., Chen, C. H., & Jhanji, V. (2013). Publication times, impact factors, and advance online publication in ophthalmology journals. Ophthalmology, 120(8), 1697–1701.
Dong, P., Loh, M., & Mondry, A. (2006). Publication lag in biomedical journals varies due to the periodical's publishing model. Scientometrics, 69(2), 271–286.
Echeverria, M., Stuart, D., & Cordon-Garcia, J. A. (2017). The influence of online posting dates on the bibliometric indicators of scientific articles. Revista Española de Documentación Científica, 40(3), e183.
Egghe, L., & Rousseau, R. (2000). The influence of publication delays on the observed aging distribution of scientific literature. Journal of the American Society for Information Science, 51(2), 158–165.
Fiala, D., Havrilova, C., Dostal, M., & Paralic, J. (2016). Editorial board membership, time to accept, and the effect on the citation counts of journal articles. Publications, 4(3), 21.
Garg, K. C. (2016). Publication delays in periodicals published by CSIR-NISCAIR. Current Science, 111(12), 1924–1928.
Gonzalez-Betancor, S. M., & Dorta-Gonzalez, P. (2019). Publication modalities 'article in press' and 'open access' in relation to journal average citation. Scientometrics, 120(3), 1209–1223.
Haustein, S., Bowman, T. D., & Costas, R. (2015). When is an article actually published? An analysis of online availability, publication, and indexation dates. In Proceedings of the 15th International Conference of the International Society for Scientometrics and Informetrics. Retrieved from https://arxiv.org/abs/1505.00796.
Heneberg, P. (2013). Effects of the print publication lag in dual format journals on scientometric indicators. PLoS ONE, 8(4), e59877.
Hicks, D. M. (2004). The four literatures of social science. In Handbook of quantitative science and technology research (pp. 473–496). Springer.
Kaufman, P. (1998). Structure and crisis: Markets and market segmentation in scholarly publishing. In B. L. Hawkins & P. Battin (Eds.), The mirage of continuity: Reconfiguring academic information resources for the 21st century (pp. 178–192). Washington, DC: CLIR and AAU.
Larivière, V., Haustein, S., & Mongeon, P. (2015). The oligopoly of academic publishers in the digital era. PLoS ONE, 10(6), e0127502.
Lin, Z. Q., Hou, S. C., & Wu, J. S. (2016). The correlation between editorial delay and the ratio of highly cited papers in Nature, Science and Physical Review Letters. Scientometrics, 107(3), 1457–1464.
Lotriet, C. J. (2012). Reviewing the review process: Identifying sources of delay. Australian Medical Journal, 5(1), 26–29.
Lozano, G. A., Larivière, V., & Gingras, Y. (2012). The weakening relationship between the impact factor and the papers' citations in the digital age. Journal of the American Society for Information Science and Technology, 63(11), 2140–2145.
Luwel, M., & Moed, H. F. (1998). Publication delays in the science field and their relationship to the ageing of scientific literature. Scientometrics, 41(1–2), 29–40.
Merton, R. K. (1957). Priorities in scientific discovery: A chapter in the sociology of science. American Sociological Review, 22(6), 635–659.
Moed, H. F. (2007). The effect of "open access" on citation impact: An analysis of the ArXiv's condensed matter section. Journal of the American Society for Information Science and Technology, 58(13), 2047–2054.
Montgomery, S. L. (2013). Does science need a global language? English and the future of research. University of Chicago Press.
Piwowar, H., Priem, J., Larivière, V., Alperin, J. P., Matthias, L., Norlander, B., et al. (2018). The state of OA: A large-scale analysis of the prevalence and impact of open access articles. PeerJ, 6, e4375.
Shen, S., Rousseau, R., Wang, D. B., Zhu, D. H., Liu, H. Y., & Liu, R. L. (2015). Editorial delay and its relation to subsequent citations: The journals Nature, Science and Cell. Scientometrics, 105(3), 1867–1873.
Shi, D., Rousseau, R., Yang, L., & Li, J. (2006). A journal's impact factor is influenced by changes in the publication delays of the citing journals. Journal of the American Society for Information Science and Technology, 68(3), 780–789.
Suber, P. (2012). Open access. Cambridge, MA: MIT Press Essential Knowledge series.
Tort, A. B. L., Targino, Z. H., & Amaral, O. B. (2012). Rising publication delays inflate journal impact factors. PLoS ONE, 7(12), e53374.
Tosi, H. (2009). It's about time!!!! What to do about long delays in the review process. Journal of Management Inquiry, 18(2), 175–178.
van Leeuwen, T. N., van Wijk, E., & Wouters, P. F. (2016). Bibliometric analysis of output and impact based on CRIS data: A case study on the registered output of a Dutch university. Scientometrics, 106(1), 1–16.
van Leeuwen, T. N., Moed, H. F., Tijssen, R. J. W., Visser, M. S., & van Raan, A. F. J. (2001). Language biases in the coverage of the Science Citation Index and its consequences for international comparisons of national research performance. Scientometrics, 51(1), 335–346.
van Orsdel, L., & Born, K. (2002). Doing the digital flip. Library Journal, 127(7), 51–56.
van Raan, A. F. J., van Leeuwen, T. N., & Visser, M. S. (2011). Non-English papers decrease rankings. Nature, 469(1), 34.
Waltman, L., & van Eck, N. J. (2012). A new methodology for construction of a publication-level classification system of science. Journal of the American Society for Information Science and Technology, 63(12), 2378–2392.
Ware, M., & Mabe, M. (2015). The STM report: An overview of scientific and scholarly publishing. The Hague, Netherlands: International Association of Scientific, Technical and Medical Publishers. Retrieved from https://www.stm-assoc.org/2015_02_20_STM_Report_2015.pdf.
Wilcken, D. (1961). Delay in publication. Lancet, 1(7180), 1286.
Yegros, A. Y., & Amat, C. B. (2009). Editorial delay of food research papers is influenced by authors' experience but not by country of origin of the manuscripts. Scientometrics, 81(2), 367–380.
Yu, G., Guo, R., & Li, Y. J. (2006). The influence of publication delays on three ISI indicators. Scientometrics, 69(3), 511–527.
When the Data Don’t Mean What They Say: Japan’s Comparative Underperformance in Citation Impact David A. Pendlebury
Introduction

Over the past five decades, national science indicators based on bibliometric data have become increasingly accepted and are now institutionalized within the arena of government science policymaking and funding. The US National Science Foundation's first Science Indicators report, issued in 1973 and based on data from the Institute for Scientific Information's Science Citation Index, was followed by similar reports from other countries, especially from the 1990s onward, describing the status of their national research systems (Grupp & Mogee, 2004; Narin, Hamilton, & Olivastro, 2000). In 2004, the then Chief Scientific Advisor to the UK government, Sir David A. King, in an article in Nature featuring national publication and citation indicators, stated: "The ability to judge a nation's standing is vital for the governments, businesses and trusts that must decide scientific priorities and funding" (King, 2004). It may be noted that summaries of national capacity and achievement in scientific and scholarly research are very difficult to obtain through traditional peer review, but the top-down view that the literature provides meets this demand. Members of the scientometric community frequently urge caution in the application of citation data for the evaluation of individuals, institutions, and journals, but have generally endorsed the use of national indicators since, at this level of aggregation, artifacts of various kinds, including random errors, are characterized as background noise incidental to a strong and reliable signal of research performance as conventionally understood in Mertonian terms. In other words, as the size of the data analyzed increases, so does the validity of the indicators derived from them, or at least that is what is assumed.
D. A. Pendlebury, Institute for Scientific Information, Clarivate Analytics, Philadelphia, PA, USA. E-mail: [email protected]
The research of Henk Moed has consistently questioned conventions and assumptions. He has repeatedly investigated and exposed inaccuracies and biases in measuring and characterizing research performance. He has done so with respect to database coverage and data quality issues, delineation of fields and normalization schemes, the effects of publishing in languages other than English, national patterns of publishing and citing, the construction and meaning of particular indicators such as the impact factor, university rankings, usage and social media indicators, and research assessment schemes that are too often ill-conceived and one-dimensional. His contributions include the foregoing and much more, but a consistent pattern in his research is a detailed and properly skeptical investigation of data and sources, careful consideration of the potential and limitation of the data as valid indicators, and an insistence on their proper application, combined with qualitative input, in evaluative and policy contexts. Henk is the epitome of a scrupulous scholar and as such has earned the respect and esteem of many colleagues.

This essay aspires to the spirit and scope of Henk's research program and is offered in admiration and in appreciation for his guidance and friendship. It asks a simple question: Why has the citation impact of Japan been so low compared to that of other advanced scientific nations?
The Citation Impact of Japan

Figure 1 presents normalized citation impact data over nearly four decades for 12 nations including Japan. The data for this chart were extracted from Clarivate Analytics' InCites platform using the Web of Science Core Collection for publication years 1981–2018. Only papers classified as articles were surveyed. Thus, all fields are represented, and owing to field differences in terms of citation potential the indicator chosen is category normalized citation impact (CNCI). The data are presented in overlapping five-year windows of papers published and cited in the same window.

Two observations concerning Japan can be offered. First, according to the CNCI indicator, Japan consistently scored below world average (1.00). Second, both Mainland China and South Korea have now surpassed Japan in CNCI: in the final window 2014–2018, China scored 0.96, South Korea 0.94, whereas the reading for Japan was 0.91.

Fig. 1 Category normalized citation impact (CNCI) for 12 nations, 1981–2018, in five-year windows of papers published and cited, restricted to Web of Science Core Collection journals and documents coded as articles. Source Clarivate Analytics, InCites

The citation gap between Japan and other scientifically mature nations is significant. In rough terms, Japan's citation impact is one-third less than that of the United States. For the entire period 1981–2018, the nations that have long had advanced scientific and scholarly research systems exhibit CNCI scores of, in rank order: The Netherlands 1.45, USA 1.39, Sweden 1.33, United Kingdom 1.31, Australia 1.27, and Germany 1.11. Japan, at least since the 1980s, has had a research enterprise comparable to those of North America and Europe, yet its CNCI score for the entire period is 0.86.

The performance of Japan differs by field, of course, but rarely has the nation exceeded the world average, or at least not by much. In terms of the 22 broad fields
used in Clarivate Analytics Essential Science Indicators, for the last four decades, only in space science did Japan earn an above world average CNCI (1.10); in the last two decades, only in immunology (1.07), molecular biology and genetics (1.05), and space science (1.19); and in the last decade, in geosciences (1.02), immunology (1.08), molecular biology and genetics (1.12), physics (1.05), and space science (1.24). Apart from immunology, these fields often produce highly cited papers through large multi-country collaborations that strongly affect national citation indicators (Aksnes, & Sivertsen, 2004); such collaborations have grown considerably in the last two decades. Other bibliometric indicators paint a similar picture of citation underperformance for Japan, whether percentage of papers in the top 1%, in the top 10%, or mean percentiles. These are highly correlated with CNCI: for all fields in InCites and over the last decade (2009–2018), the scores for Japan were 0.92 for top 1% papers, 8.34 for top 10% papers, and a mean (inverted) percentile of 54.5 as compared to 41.2, 43.2, 45.3, 46.0, 46.3, and 46.9 for, respectively, The Netherlands, Sweden, Australia, United Kingdom, USA, and Germany. China and South Korea exceeded Japan by this measure, too, with mean percentile scores of 50.6 for China and 53.8 for South Korea.
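For readers less familiar with the CNCI and percentile indicators cited above, the following is a minimal sketch of the usual logic, not Clarivate's implementation: each paper's citation count is divided by the expected (average) citation count of papers of the same field, publication year, and document type, and a country's CNCI is the mean of these ratios, with 1.00 marking the world average. All field names and counts below are invented for illustration.

```python
from collections import defaultdict
from statistics import mean

# Illustrative records: (field, year, doc_type, citations); all values are invented.
world_papers = [
    ("immunology", 2016, "article", 12), ("immunology", 2016, "article", 4),
    ("immunology", 2016, "article", 8),  ("physics", 2016, "article", 2),
    ("physics", 2016, "article", 6),     ("physics", 2016, "article", 1),
]
japan_papers = [("immunology", 2016, "article", 9), ("physics", 2016, "article", 2)]

# Expected citations per field / year / document type, computed over the whole database.
baseline = defaultdict(list)
for field, year, doc_type, cites in world_papers:
    baseline[(field, year, doc_type)].append(cites)
expected = {key: mean(vals) for key, vals in baseline.items()}

# CNCI of a paper = observed citations / expected citations for its class;
# CNCI of a country = mean of its papers' CNCI values (1.00 = world average).
cnci_values = [cites / expected[(f, y, d)] for f, y, d, cites in japan_papers]
print(round(mean(cnci_values), 2))  # prints 0.9 on this toy data
```

Percentile indicators follow the same normalization idea but rank each paper against its field-year-document-type peers rather than dividing by the mean.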
Japan’s comparative and continuing underperformance in citation impact appears whatever data source and standard methodology is used. Whole or fractional counting, for example, changes the profile for Japan very little (Aksnes, Schneider, Gunnarsson, 2012; Huang, Lin, Chen, 2011; Larsen, Maye, von Ins, 2008; Waltman, & van Eck, 2015): Japan’s low citation impact in comparison to other nations remains. Over the years, it has been the same reading, whether consulting the Science and Engineering Indicators reports of the US National Science Foundation or reports of other nations, including those of the Japanese government, such as NISTEP’s Science and Technology Indicators. Other citation databases, such as Elsevier’s Scopus, yield similar results (Elsevier, 2016). In a recent report by Clarivate Analytics on the research performance of the G20 nations, Japan’s scientific standing was summarized as follows: “Citation impact is relatively low for a well-established research economy with a high level of GERD/GDP (3.2%). The impact profile [in the report] shows that performance is lifted above the G20 average through international collaboration, although it is relatively low, at 30%, for a total output that has remained very flat over the decade” (Adams, Rogers, Szomszor, 2019b). Another bibliometric study of Japan’s research from a decade ago by the Clarivate team stated: “Japan has a well-established research enterprise, world-class universities and government laboratories, and has produced a number of Nobel Prize winners. Yet its relative impact, across all fields taken together, remains below world average. While its neighbouring nations’ citation impact is on the rise, Japan’s numbers have lagged” (Adams, King, Miyairi, Pendlebury, 2010). As seen, the increase in impact for other Asian nations continues. Analysts at Clarivate Analytics have not been alone in commenting on Japan’s curiously low citation impact, and a few researchers have offered ideas for why this may be so. Narin and Frame pointed to language as a reason for relative under-citation (Narin, & Frame, 1988); King suggested Japan’s scientific isolation as the reason (King, 2004); Hayashi and Tomizawa pointed to publication in low-impact Japanese journals with few foreign contributions (Hayashi, & Tomizawa, 2006); Bornmann and Leydesdorff wondered about high or low citation impact being determined by a nation’s specific research portfolio which would be advantaged or disadvantaged in a field normalization scheme (Bornmann, & Leydesdorff, 2013); Bornmann, Wagner, and Leydesdorff mentioned Japan as “among the least internationalized nations in percentage terms” and also the importance of publishing in Japanese-language journals for Japanese researchers (Bornmann, Wagner, Leydesdorff, 2018a); and, Wagner and colleagues suggested that lack of mobility and openness may be “dragging on Japan’s performance” and called this a deficit in “brain circulation” (Wagner, & Jonkers, 2017; Wagner, Whetsell, Baas, Jonkers, 2018). It should be noted that almost none of these explanations interpret lower citation impact for Japan as an indicator of inferior research with respect to other nations. Instead, technical issues of analysis, such as normalization, and features of Japanese papers, such as language or venue of publication, are raised. Only lack of “brain circulation” related to mobility hints at the idea of less innovative research. 
But even that is not the only interpretation, since mobility typically leads to coauthorship, and internationally collaborative papers are on average more cited than
domestic papers—whether because of the research itself or because more authors facilitate more visibility for the paper resulting in more citations is always difficult to determine.

Beyond the world of scientometricians, national science indicators—as well as university rankings—are usually interpreted as direct measures of research quality and standing by some in government agencies, by the press typically, and through the press by the public. A persistent narrative in the national and foreign press is of Japanese science in decline, overshadowed and overwhelmed by China's rise (Sawa, 2019; Suda, 2019). A recent special issue of Nature Index on Japanese research noted "the alarming decline in Japan's contribution to global science" (Armitage, 2019) and the nation's "bid to halt the slide in its international research performance" (McNeill, 2019). Evidence for these characterizations is frequently national and university indicators of publication output, world share, and citation impact. In other words, and simply put, the popular interpretation of national science indicators is that the data mean what they say: Japanese researchers have published, and continue to publish, papers that are less influential and therefore less important in the global scientific landscape than those from peer nations. But do the data mean what they say in this sense? If they do not, an inaccurate interpretation may have significant negative consequences for the Japanese research community, especially "if the results are used … by policy decision-makers" (Abramo, & D'Angelo, 2007). The public, too, may be misled, and domestic support for science undermined.

The following examines factors influencing citations (FICs) that are separate from citations that accrue to a paper because of its content. They are extrinsic aspects of publication, not intrinsic qualities, yet more and more scientometricians are recognizing their importance and trying to determine how these FICs—and there are many—contribute to citation rates.
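As a toy illustration of the kind of correlational analysis that underlies FIC studies of this sort, the sketch below generates an invented paper-level dataset and reports rank correlations between three candidate FICs and citation counts. The data-generating model, coefficients, and variable names are all made up; published studies typically use regression models on real bibliometric data rather than this simplified setup.

```python
import random
from scipy.stats import spearmanr

random.seed(1)
# Synthetic paper-level data: every FIC value and citation count is invented.
papers = []
for _ in range(500):
    n_authors = random.randint(1, 15)
    intl = random.random() < 0.3            # international collaboration flag
    jif = random.uniform(0.5, 15.0)         # journal impact factor of the venue
    # Toy citation model: venue and collaboration lift citations, plus noise.
    cites = max(0, int(1.5 * jif + 3 * intl + 0.4 * n_authors + random.gauss(0, 4)))
    papers.append((n_authors, int(intl), jif, cites))

citation_counts = [p[3] for p in papers]
for i, name in enumerate(["authors", "intl_collab", "jif"]):
    rho, _ = spearmanr([p[i] for p in papers], citation_counts)
    print(f"{name:12s} rho = {rho:.2f}")
```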
Some Factors That May Affect Japan's Citation Impact

FICs have been a focus of much research over the last decade (Bornmann, Schier, Marx, Daniel, 2012; Bornmann, & Leydesdorff, 2015; Bornmann, 2019; Bornmann, Haunschild, Mutz, 2019; Didegah, & Thelwall, 2013a, 2013b; Onodera, & Yoshikane, 2015; Tahamtan, Afshar, Ahamdzadeh, 2016; Tahamtan, & Bornmann, 2018). Many of these studies have identified the journal impact factor as highly correlated to a paper's citation count. This is not surprising: highly cited papers are found in high impact factor journals and high impact factor journals publish many highly cited papers. Whatever the causal relationship between citations and journal impact factors (Traag, 2019), the importance of publication venue for citation impact is widely appreciated and will be the focus of discussion below.

Before that, however, we may ask if other FICs may be especially prominent in the context of Japanese research publication. Our hypothesis is that Japanese research is not substantially weaker than that of North American and European researchers (or
as much as the indicators portray), so a search for extrinsic factors that may suppress citation rates for Japanese papers is warranted. The FICs briefly considered here are language of publication, average number of authors, international collaboration, researcher mobility, self-citation, and national research focus and diversity.

Language. That publications in languages other than English are on average less cited than English-language papers was recognized from the beginnings of scientometric studies. Moed and colleagues demonstrated how national science indicators for France and Germany were misinterpreted by May (1997) because he did not appreciate how French- and German-language papers dampened the citation impact of these two countries. When Moed and colleagues examined English-only papers, the citation impact for France and Germany relative to other nations dramatically increased (van Leeuwen, Moed, Tijssen, Visser, van Raan, 2000, 2001). With respect to Japan and ISI data, Garfield demonstrated that English-language papers by Japanese authors were cited three times as much as Japanese-language papers by Japanese authors (Garfield, 1987). Certainly, Japanese-language papers would reduce citation impact for the nation if substantial in number. But over the past four decades the percentage of Japanese-language items in ISI's databases has been modest and is today almost insignificant. Considering only scientific papers coded as articles, the percentage of Japanese-language papers in the Web of Science has steadily decreased from about 8% at the beginning of the 1980s to under 1% today, and it has been under 3% from 2000 on. Moed and colleagues (van Leeuwen et al., 2001) showed that restricting the analysis to English-language papers by Japanese researchers changed the nation's citation impact indicators very little, and their result is nearly identical to that shown in Fig. 1. Thus, publication in the Japanese language is not adequate to explain lower citation impact for Japan relative to other nations.

Number of authors. Many studies have demonstrated that papers with more authors tend to receive more citations (Abramo, & D'Angelo, 2015; Adams, Pendlebury, Potter, Szomszor, 2019c; Bornmann, 2017; Leimu, & Koricheva, 2005; Larivière, Gingras, Sugimoto, Tsou, 2015; Thelwall, & Sud, 2016) but not all show a strong correlation (Bornmann, 2019; Didegah, & Thelwall, 2013b; Onodera, & Yoshikane, 2015). Despite these mixed results, it seemed worthwhile to examine the average number of authors on Japanese publications. Thelwall and Maflahi reported that Japan exhibits the highest average number of authors per paper (4.2), two-thirds more than the UK (2.5), for the period 2008–2012 (Thelwall, & Maflahi, 2020). Clarivate's own unpublished analysis for the period 2009–2018 produced roughly similar results, with Brazil and China also high in median and modal number of authors (Potter, 2020). Thelwall and Maflahi observed that whatever the field in which Japan is engaged, its researchers are more likely than those from other nations to collaborate. The authors attributed this to a collectivist cultural dimension, and in this, they noted, China is not far behind Japan. Japan's advantage in number of authors per paper does not, however, translate into significantly increased citation impact, or at least not enough to boost Japan to a higher standing relative to peer nations.

International collaboration.
Internationally coauthored papers consistently achieve more citations than domestically coauthored papers or single-authored papers (Adams, 2013; Adams, & Gurney, 2018; Adams et al., 2019c; Glänzel, 2001; Glänzel,
& Schubert, 2001; Guerrero-Bote, Olmeda-Gomez, de Moya-Anegón, 2013; Katz, & Hicks, 1997; Moed 2005, pp. 285–290; Narin, Stevens, Whitlow, 1991; Puuska, Muhonen, Leino, 2014; van Raan, 1998). Japan exhibits a low rate of international collaboration, as it has for many years. According to data from InCites for the years 2009–2018 and restricted to articles only, Japan’s percentage of internationally coauthored papers was 30.0%. Turkey, Iran, India, China, and South Korea exhibited even lower rates, at 21.2%, 23.1%, 23.5%, 24.8%, and 29.3%, respectively. On the other hand, Sweden, The Netherlands, the UK, Germany, Australia, the USA, and even Russia and Brazil, collaborated across borders more than Japan did (in order, 61.5, 59.0, 53.1, 53.1, 51.6, 35.1, 34.0, and 32.2%). But, owing to history, geography, size, and research standing the alignment for these nations between rates of international collaboration and citation impact is not uniform. For example, India and China each collaborated internationally on about a quarter of their papers, but CNCI for India was 0.79 over the decade while China’s was 1.04. The USA and Russia each collaborated internationally on just over a third of their output, but the US CNCI was 1.33 while Russia scored just 0.62. Japan with 30.0% international collaboration and South Korea with 29.2%, on the other hand, registered very similar CNCI scores at 0.89 and 0.91, respectively. While Japan certainly does not benefit in citation impact relative to other nations from its comparatively low level of international collaboration—in fact, this may well be a major influence on its below world-average citation impact—other FICs are likely at play as well. As Japan’s rate of international collaboration increased over the years, citation impact did not. Given the figures presented above and a frequent misalignment between international collaboration and CNCI, I doubt this is a complete explanation for Japan’s underperformance in citation impact. Researcher mobility. Moed and colleagues have played a leading role in studying researcher mobility using bibliometric data (Halevi, Moed, Bar-Ilan, 2016; Moed, Aisati, Plume, 2013; Moed, & Halevi, 2014). Mobility is, of course, associated with international collaboration, which is related to higher-than-average citation impact for papers that are the product of such collaboration, as noted above. One study reported that “mobile scholars have about 40% higher citation rates, on average, than non-mobile ones” (Sugimoto, Robinson-Garcia, Murray, Yegros-Yegros, Costas, Larivière, 2017). There is a significant number of Japanese researchers who can be characterized as mobile in several ways (Robinson-Garcia, Sugimoto, Murray, Yegros-Yegros, Larivière, Costas, 2019), enough to rank the nation in the top 10 for 2008–2015. The number as a proportion to population, however, is only half as much as that of Spain and Italy and about one-fifth that of Australia. Wagner, Jonkers, and colleagues have combined international collaboration and researcher mobility data to fashion an index of national openness (Wagner, & Jonkers, 2017; Wagner et al., 2018), which they found to be strongly correlated with citation impact among advanced scientific nations. Nations found low in openness were Russia, Turkey, Poland, China, South Korea, and Japan, among others. 
Noting that Japan’s research output and citation impact “have remained flat since 2000,” the authors suggested that a “lack of international engagement may be dragging on Japan’s performance” (Wagner et al., 2018). Lack of mobility for younger Japanese researchers
and a strictly hierarchical, seniority-based (kōza) system within universities, including inbreeding, are frequently mentioned: "The Japanese scientific community is rather homogeneous in terms of nationality and somewhat detached from the rest of the world" (Shibayama, & Baba, 2015). Inbreeding (Horta, Sato, Yonezawa, 2011; Horta, 2013; Morichika, & Shibayama, 2015) and obstacles to mobility are central themes of an ethnographic study of Japanese bioscientists (Coleman, 1999), and although important reforms have been advanced the cultural context of researchers in Japan changes only slowly. To the degree there is friction in exchange and association with the global research community, whether through collaboration or mobility, citation impact for Japan is certainly reduced, but determining precisely how much is difficult to specify.

National self-citation. National rates of self-citation have been studied less often than author- or journal-self-citation, but the phenomenon has received attention during the past decade (Bakare, & Lewison, 2017; Bornmann, Adams, Leydesdorff, 2018b; Baccini, De Nicolao, Petrovich, 2019; Jaffe, 2011; Khelfaoui, Larrègue, Larivière, Gingras, 2020; Ladle, Todd, Malhado, 2012; Larivière, Gong, Sugimoto, 2018; Minasny, Hartemink, McBratney, 2010; Shehatta, & Al-Rubaish, 2019; Tang, Shapira, Youtie, 2015). Typically, researchers cite papers from their own country disproportionately to national output. If, compared to others, a nation is a relative under- or over-self-citer, national science indicators might be biased negatively or positively, respectively. According to data presented in most of the studies mentioned above, Japan appears to be neither a very low nor very high self-citing nation. It is about average or slightly above average in national self-citation, which may be explained in part by a relatively low level of international collaboration. National self-citation rates have also been deployed to measure the concepts of insularity (Ladle et al., 2012) and inwardness (Baccini et al., 2019). In the latter study, Japan was shown as more inward for the period 2000–2009 than several European nations; however, by 2016 Italy and Germany surpassed Japan in this regard. In sum, Japan's citation gap cannot be explained by a low rate of national self-citation since it is average or slightly above average.

National research portfolio. As mentioned above, Bornmann and Leydesdorff (2013) noted that a nation's research portfolio, including mix and volume of publications in different fields, may affect indicators when normalization procedures are inadequate to enable like-for-like comparisons (on normalization, see Waltman, & van Eck, 2019). Normalization may fail, for example, if a nation were to focus its publication in subfields that tend to have lower rates of citation than the fields to which the subfields belong; also, a research culture more oriented to applied than basic research may yield lower overall citations than a diversified output of basic and applied research. An aspect of the latter would include disproportionate generation of papers directed at practitioners such as physicians or engineers, who consume the content of such papers but relatively rarely publish themselves or cite these works. Such "dishomogeneity of scientific specialization among nations" can lead to distortions at the aggregate level (Abramo, & D'Angelo, 2007).
An older study, in fact, found that among large nations Japan, with Italy, exhibited a high level of specialization (Pianta, & Archibugi, 1991). A forthcoming study found that Japan
continues to have the least diverse publication portfolio of the G7 nations (Adams, Rogers, Smart, Szomszor, 2020). Moed and colleagues have also studied concentration and disciplinary specialization and their effect on indicators at the level of national university systems (López-Illescas, de Moya-Anegón, Moed, 2011; Moed, de Moya-Anegón, López-Illescas, Visser, 2011). In one of these papers (Moed et al., 2011), Japan does not appear to be an outlier. An analysis of these dimensions of output and impact for Japan and other nations is beyond the scope of the present contribution (and a complex task in several respects, as Moed states). In general, while Japan tilts toward physical and engineering sciences in its output (Adams, 1998; Adams, Rogers, Szomszor, 2019b), it is doubtful that the nation is excessively penalized in terms of normalized citation impact on account of the nature of its national research portfolio, albeit more specialized than most.
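The national self-citation rate discussed above has a simple operational definition: the share of citations received by a country's papers that come from papers with at least one author address in the same country. The sketch below computes it for a handful of invented citation links; a real analysis would draw the links from a full citation index and might use fractional counting of addresses.

```python
# Illustrative citation links: (citing_countries, cited_countries); data invented.
citations = [
    ({"JP"}, {"JP"}),          # a Japanese paper citing a Japanese paper
    ({"JP", "US"}, {"JP"}),    # a Japan-US paper citing a Japanese paper
    ({"US"}, {"JP"}),
    ({"DE"}, {"JP"}),
    ({"JP"}, {"JP", "KR"}),
]

def national_self_citation_rate(links, country):
    """Share of citations to `country` papers that come from `country` papers."""
    received = [(citing, cited) for citing, cited in links if country in cited]
    if not received:
        return 0.0
    self_cites = sum(1 for citing, _ in received if country in citing)
    return self_cites / len(received)

print(round(national_self_citation_rate(citations, "JP"), 2))  # 0.6 on this toy data
```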
Publication Venues, Visibility, and Citation Opportunity

In describing citation impact, Dag Aksnes has emphasized both quality dynamics and visibility dynamics, which, while separate as intrinsic and extrinsic aspects of publication, are intertwined (Aksnes, 2003). An example has been discussed: internationally collaborative papers may be more cited than domestic-only papers because they derive from researchers who are more talented than others and able to form partnerships with peers abroad (a kind of self-selection process), or who assemble key colleagues and sufficient funds to tackle frontier problems that produce more important results, or both; on the other hand, the work of such groups is communicated more widely through personal networks covering a wider geography. And both dynamics are present if the research of a productive collaboration is important enough to be published in a top journal that is highly regarded by peers and read by many, resulting in more citations on average. "Generally, articles published in journals with high impact factors are likely to obtain higher visibility than articles published in less cited journals," Aksnes observed. "Although the visibility is higher in high impact journals, this cannot per se explain the high citation counts—the quality dynamics are also important" (Aksnes, 2003).

Therefore, by examining publication venues of Japanese papers, we may learn something about the nation's citation impact profile. In this analysis, papers are sorted by journal impact factor quartiles, determined within the field categories employed in Clarivate's Journal Citation Reports. In Figs. 2 and 3, the percentages of papers published in quartile 1 and in quartile 3 impact factor journals are shown for 12 nations including Japan for the period 1999–2018. Similar data for quartile 2 and quartile 4 are not presented since they do not reveal much of interest in comparing Japan with the other nations: in quartile 2, most nations bunch around 25% (only Russia is different at 14–18%) and in quartile 4 differences may be ascribed in many cases to publication in non-English language journals.

Fig. 2 Percentage of papers for 12 nations, 1999–2018, published in journals ranking in the first quartile by impact factor and presented in overlapping five-year windows. Analysis is restricted to items coded as articles. Source Clarivate Analytics, InCites, Journal Citation Reports

The order of nations by percentage of papers published in quartile 1 impact factor journals is expected: developed nations rank above developing (and recovering in
the case of Russia) nations, with Japan and South Korea, and now China, occupying a middle ground. From 1999–2003 to 2014–2018, nine of the 12 nations increased their percentage of papers published in quartile 1 impact factor journals, none more so than China, from 28.2% to 42.5%. Brazil lost ground marginally, from 33.2% to 32.4%, the US more substantially, from 56.0% to 52.9%, and Japan the most, from 42.8% to 38.5%. In the last five-year window, China and South Korea published a higher percentage of papers in quartile 1 titles than did Japan, and this picture resembles that for the CNCI indicator for these three nations. In quartile 3 impact factor journals (in which a higher percentage is negative), only the US and Japan ended the period worse than at the beginning: the US increased from 12.8% to 13.7% and Japan from 18.0% to 21.3%, the highest share of the 12 nations in this quartile. To understand more about the character of publication venues used by Japanese researchers, Japan-addressed papers for the period 2014–2018, SCI-E and articles only, were collected from the Web of Science. A ranked list was created of the journals in which Japan published 0.1% or more of its output, to reveal the nation’s focus and concentration of publication. In all, 144 journals were identified including 125,800 papers, or 33.5% of Japan’s output of 374,964 items. Table 1 lists 58 of the 144 titles,
including number of Japanese papers, percentage of Japanese output, percentage of the journal content with a Japanese author address, the journal impact factor quartile for 2018, and the Japanese academic society associated with the title. The journals are ranked by percentage of the journal content with Japanese authorship. The 58 titles were selected from the collection of 144 journals because of a high percentage of content by Japanese authors. In fact, in 52 of the 58 titles more than half of the papers are by Japanese researchers. This is about 10 times higher than Japan's representation in the Web of Science for 2014–2018, SCI-E and articles only (5.5%). The second distinguishing feature of each journal listed is an association with a Japanese scientific, engineering, or medical society. Not all these journals have a Japanese publisher (some are Elsevier or Springer journals, for example), but all are formal outlets for Japanese academic societies. A third observation is the low impact factor of these titles: only three are in quartile 1 and 12 in quartile 2, whereas 23 are in quartile 3 and 20 in quartile 4. In other words, three-fourths of these heavily Japanese, society-associated journals have lower or low impact factors.

The skew is in the opposite direction for the other 86 of 144 titles. By journal impact factor quartile, 39 rank in quartile 1, 30 in quartile 2, 13 in quartile 3, and four in quartile 4. In other words, 80% of this group occupies the top two impact factor quartiles.

Fig. 3 Percentage of papers for 12 nations, 1999–2018, published in journals ranking in the third quartile by impact factor and presented in overlapping five-year windows. Analysis is restricted to items coded as articles. Source Clarivate Analytics, InCites, Journal Citation Reports
Table 1 58 of 144 journals that published 0.1% or greater of Japan's output, 2014–2018, SCI-E and articles only, ranked by percentage of journal content by a Japanese author. The titles were selected for their high percentage of Japanese papers and their association with Japanese scientific, engineering, and medical societies.

Journal title | Number papers | % Japan output | % Papers Japan in journal | JCR quartile | Japanese academic society
Journal of the japan institute of metals | 400 | 0.107 | 100.0 | 4 | Japan Institute of Metals
Electronics and communications in Japan | 488 | 0.130 | 99.6 | 4 | Institute of Electrical Engineers of Japan
Bunseki Kagaku | 443 | 0.118 | 99.6 | 4 | Japan Society for Analytical Chemistry
Electrical engineering in Japan | 527 | 0.141 | 99.2 | 4 | Institute of Electrical Engineers of Japan
Tetsu to hagane journal of the iron and steel institute of Japan | 543 | 0.145 | 97.0 | 4 | Iron and Steel Institute of Japan
Genes to cells | 388 | 0.103 | 93.9 | 3 | Molecular Biology Society of Japan
Internal medicine | 2331 | 0.622 | 91.7 | 3 | Japanese Society of Internal Medicine
Publications of the astronomical society of Japan | 582 | 0.155 | 88.3 | 2 | Astronomical Society of Japan
Journal of the physical society of Japan | 1815 | 0.484 | 85.4 | 3 | Physical Society of Japan
Bulletin of the chemical society of Japan | 659 | 0.176 | 85.4 | 2 | Chemical Society of Japan
Journal of photopolymer science and technology | 528 | 0.141 | 84.5 | 4 | (Japan) Society of Photopolymer Science and Technology
Surgery today | 811 | 0.216 | 82.3 | 2 | Japan Surgical Society
Progress of theoretical and experimental physics | 692 | 0.185 | 81.7 | 2 | Physical Society of Japan
Journal of infection and chemotherapy | 652 | 0.174 | 81.7 | 4 | Japanese Society of Chemotherapy and Japanese Association for Infectious Diseases
IEICE transactions on electronics | 649 | 0.173 | 80.4 | 4 | (Japan) Institute of Electronics, Information and Communication Engineers
Journal of gastroenterology | 402 | 0.107 | 79.4 | 1 | Japanese Society of Gastroenterology
Bioscience biotechnology and biochemistry | 1145 | 0.305 | 78.1 | 3 | Japan Society for Bioscience, Biotechnology, and Agrochemistry
International journal of clinical oncology | 528 | 0.141 | 77.5 | 3 | Japan Society of Clinical Oncology
Modern rheumatology | 597 | 0.159 | 77.2 | 4 | Japan College of Rheumatology
Japanese journal of clinical oncology | 592 | 0.158 | 77.2 | 3 | ex National Cancer Centre, Japan; now Oxford
Chemistry letters | 1889 | 0.504 | 76.4 | 3 | Chemical Society of Japan
Japanese journal of applied physics | 4863 | 1.297 | 75.5 | 3 | Japan Society of Applied Physics
Journal of veterinary medical science | 1123 | 0.299 | 75.3 | 3 | Japanese Society for Veterinary Sciences
Clinical and experimental nephrology | 424 | 0.113 | 73.7 | 3 | Japanese Society of Nephrology
Circulation journal | 945 | 0.252 | 73.5 | 2 | Japanese Circulation Society
Hepatology research | 581 | 0.155 | 73.5 | 3 | Japan Society of Hepatology
Endocrine journal | 390 | 0.104 | 71.6 | 4 | Japan Endocrine Society
Journal of the ceramic society of Japan | 657 | 0.175 | 71.5 | 2 | Ceramic Society of Japan
Journal of orthopaedic science | 589 | 0.157 | 70.8 | 3 | Japanese Orthopaedic Association
Heart and vessels | 589 | 0.157 | 69.5 | 3 | Japan Research Promotion Society for Cardiovascular Diseases
Journal of cardiology | 512 | 0.137 | 68.2 | 3 | Japanese College of Cardiology
Nippon Suisan Gakkaishi | 596 | 0.159 | 67.5 | 4 | Japanese Society of Fisheries Science
Cancer science | 798 | 0.213 | 67.1 | 2 | Japanese Cancer Association
Biological pharmaceutical bulletin | 893 | 0.238 | 66.2 | 4 | Pharmaceutical Society of Japan
Journal of nuclear science and technology | 501 | 0.134 | 65.3 | 3 | Atomic Energy Society of Japan
Materials transactions | 946 | 0.252 | 64.5 | 3 | Japan Institute of Metals and Materials
Journal of pharmacological sciences | 387 | 0.103 | 63.8 | 3 | Japanese Pharmacological Society
Earth planets and space | 569 | 0.152 | 62.9 | 2 | Seismological Society of Japan, Society of Geomagnetism and Earth, Planetary and Space Sciences, Volcanological Society of Japan, Geodetic Society of Japan, and the Japanese Society for Planetary Sciences
Pediatrics international | 702 | 0.187 | 62.3 | 4 | Japan Pediatrics Society
Journal of bioscience and bioengineering | 690 | 0.184 | 61.6 | 3 | Society for Biotechnology, Japan
Chemical and pharmaceutical bulletin | 521 | 0.139 | 61.5 | 3 | Pharmaceutical Society of Japan
Journal of chemical engineering of Japan | 390 | 0.104 | 60.9 | 4 | Society of Chemical Engineers, Japan
International journal of hematology | 526 | 0.140 | 60.6 | 3 | Japanese Society of Hematology
International heart journal | 417 | 0.111 | 60.3 | 3 | (Japan) International Heart Journal Association
Brain and development | 410 | 0.109 | 59.7 | 3 | Japanese Society of Child Neurology
IEICE transactions on communications | 766 | 0.204 | 58.5 | 4 | (Japan) Institute of Electronics, Information and Communication Engineers
IEICE transactions on fundamentals of electronics communications and computer sciences | 1046 | 0.279 | 58.4 | 4 | (Japan) Institute of Electronics, Information and Communication Engineers
Analytical sciences | 562 | 0.150 | 58.2 | 3 | Japan Society for Analytical Chemistry
Applied physics express | 1102 | 0.294 | 57.5 | 2 | Japan Society of Applied Physics
Heterocycles | 511 | 0.136 | 51.6 | 4 | Japan Institute of Heterocyclic Chemistry
Journal of dermatology | 547 | 0.146 | 50.9 | 1 | Japanese Dermatological Association
IEICE transactions on information and systems | 932 | 0.249 | 50.5 | 4 | (Japan) Institute of Electronics, Information and Communication Engineers
Animal science journal | 440 | 0.117 | 48.6 | 2 | Japanese Society of Animal Science
Geriatrics gerontology international | 440 | 0.117 | 44.7 | 2 | Japan Geriatrics Society
Plant and cell physiology | 405 | 0.108 | 44.1 | 1 | Japanese Society of Plant Physiologists
ISIJ international | 679 | 0.181 | 42.2 | 2 | Iron and Steel Institute of Japan
Journal of obstetrics and gynaecology research | 543 | 0.145 | 39.2 | 4 | Japan Society of Obstetrics and Gynecology
Journal of stroke cerebrovascular diseases | 619 | 0.165 | 25.4 | 4 | Japan Stroke Society

Source Clarivate Analytics, Web of Science
In these journals, Japanese papers are generally represented at one to two times the nation's contribution to papers in the Web of Science. By focusing on the journals in which Japan concentrates its publication, we can see two populations—a group with high journal impact factors that is international in orientation and a second group with low journal impact factors that is more nationally oriented and associated with Japanese academic societies. The 58 journals with a national orientation published 44,272 Japanese papers, or just 11.8% of Japan's output for the period, but only one-third of Japan's output was surveyed. In a forthcoming publication by Clarivate analysts, the percentage of Japan's papers published in journals with 50% or more Japanese-authored papers is about 20% (Potter, 2020). This figure is consistent with earlier studies that found Japanese researchers publishing about one in five of their papers in nationally oriented journals (Hayashi, & Tomizawa, 2006; Sun, Kakinuma, Negishi, Nisizawa, 2008). Except for Brazil and Russia, the comparator nations do not exhibit the same degree of publishing in nationally oriented journals as does Japan. In a separate analysis of journals accounting for a more concentrated top 20% of output of each
country, 2014–2018, SCI-E and articles only, Russia and Brazil published more than half of their papers in nationally oriented journals (one-third of Brazil's output was in the Portuguese language). Japan's contribution to nationally oriented journals in this analysis was 30% (15 of 50 titles). South Korea, India, and Australia chose nationally oriented publication venues at rates of 19%, 13%, and 11%, respectively. In terms of journal impact factor quartiles, 60% of Brazil's output and 82% of Russia's appeared in quartile 3 and 4 titles. By contrast, more than 60% of the output of Australia, Germany, The Netherlands, Sweden, the UK, and the US appeared in internationally oriented journals ranking in quartile 1. It is noteworthy that for this recent five-year period, China was close behind with 57% in quartile 1 impact factor titles. Thus, as has been recognized, publications that have a national rather than an international orientation typically have a smaller audience, reduced visibility, and are less cited. The relationship between national orientation and low impact factors is not uniform, as Moed and colleagues have demonstrated in a forthcoming study (Moed, de Moya Anegón, Guerrero-Bote, López-Illescas, 2020; see also Moed, 2017 p. 145), but for journals with more than 50% of their papers by researchers of a single nation, the impact factor declines in a linear fashion with a higher percentage of national authorship.

Japanese academic society journals indexed in the Web of Science, whether published in English or in Japanese, received careful study more than a decade ago (Sun et al., 2008). The authors wrote that "Japanese academic societies face a crisis and need to make their journals more visible internationally." A bibliometric study on efforts to restructure the Japanese national research system noted that English-language journals indexed in the Web of Science and published by Japanese academic societies accounted for 18.2% of Japan's output, and that in some of these titles "almost all authors were Japanese, and citations by foreign researchers were rare" (Hayashi, & Tomizawa, 2006). Japanese researchers, like those in other nations, generally prefer to publish in internationally oriented journals (Haiqi, & Yamazaki, 1998), but a significant percentage of their papers finds a home in journals with a more national orientation, including journals of Japanese academic societies. A survey conducted two decades ago found that about 40% of Japanese researchers submitted papers only to Japanese publications (Saegusa, 1999), and while there has certainly been change since, this reveals something of the character of scientific publication for the nation. A more recent study of Japanese biologists revealed that those with foreign experience return to Japan and put more emphasis on publishing in journals with high impact factors than they previously did (Shibayama, & Baba, 2015). Yet another cultural dimension influencing publication venue for Japanese scientists may be the kōza, or chair, system of universities in which a professor may also serve as editor-in-chief of a Japanese academic society title. Japanese papers may be preferentially directed to such journals through social networks, such as by former students, for reasons of support and even obligation. The concept of 'inbred' papers in the Japanese context may be worth investigation.

In summary, there are many and complex aspects to Japan's nationally oriented journals, including historical, social, and cultural origins and purposes.
The Japanese research community retains a strong tradition of publishing in these titles. As noted, many of these journals have little
content by foreign authors. Papers published in these nationally oriented journals are less visible and less cited than Japanese papers published in internationally oriented journals among titles indexed in the Web of Science (Negishi, Sun, Shigi, 2004). Moed has long studied nationally oriented and internationally oriented journals, how to define them, their relative rates of citation, as well as their influence on the measurement of research activity and performance (Moed, 2002, 2005 pp. 131–135; Moed, 2017 pp. 144–145; Moed et al., 2020; López-Illescas, de Moya Anegón, Moed, 2009). Other scholars have also explored nationally and internationally oriented journals indexed in the Web of Science (Waltman, & van Eck, 2013a, 2013b; Zitt, & Bassecoulard, 1998; Zitt, Perrot, Barré, 1998; Zitt, & Bassecoulard 1999; Zitt, Ramanana-Rahary, Bassecoulard, 2003). Michel Zitt and colleagues described how “correcting glasses” are required to resolve the presence of a sub-population of journals in the Web of Science that “does not reflect the same international openness” as other journals (Zitt et al. 2003). This sub-population of nationally oriented journals creates distortions in indicators that result in an “over-estimation of share and under-estimation of citation impact” for specific countries. For example, the authors noted the distinctly different publication and citation profile of Russia that focuses its publication in nationally oriented journals, as well as the effect of languages other than English in penalizing the citation impact of a nation with many nationally oriented non-English language journals (van Leeuwen et al., 2000, 2001; also see, van Raan, van Leeuwen, Visser, 2011). They found Japan published less than expected in internationally oriented journals (Zitt & Bassecoulard, 1999). Zitt’s suggestion to address the problem: Cut off the tail containing low-impact journals, many nationally oriented, for a more reliable benchmarking of nations. “In the case of Russia…, when truncating the tail, the publication share is cut by half at least, whereas the relative impact increases by a factor of 2 or more,” he observed (Zitt, 2015). Two decades ago, Moed explored the use of Web of Science data to measure the research performance of China (Moed, 2002). He focused on the low rate of citation to nationally oriented Chinese journals within Web of Science coverage, defined as Chinese-language publications or journals with more than 50% of their content by authors at Chinese institutions. “An assessment of Chinese research on the basis of the total collection of journals covered by the ISI indexes results in a rather diffuse picture, in which both the national and the international viewpoint are merged,” Moed wrote. “Consequently, such an analysis does not provide an appropriate picture of Chinese research performance from an international perspective, nor from a national point of view.” Moed recommended that the group of nationally and the group of internationally oriented journals in which Chinese researchers published be evaluated separately, and that assessment of Chinese research from a perspective of international influence should exclude nationally oriented Chinese journals. A decade ago, Moed and colleagues studied national performance in oncology, based on Web of Science data and the larger journal coverage in this field found in Scopus, which includes more nationally oriented journals than does Web of Science. 
The authors found that including more nationally oriented journals from Scopus in their analysis resulted in larger publication figures for countries than obtained
from Web of Science but lower citation rates. This was another demonstration of the sensitivity of national indicators to the population of journals surveyed, as well as the varying publication outlets and profiles of different countries (López-Illescas et al., 2009). Moed and colleagues advised that "in an assessment of research activity from an international perspective it seems appropriate to exclude nationally oriented journals."

In a forthcoming paper Moed and colleagues have deepened our understanding of nationally and internationally oriented journals based on Scopus data. The authors introduced a method of measuring national orientation of journals not only by the percentage of authors from a single nation publishing in a title but also by the percentage of authors from a single nation citing the journal. They found that journal impact factor and national or international orientation are not uniformly aligned (the relationship takes the shape of an inverted U), and that English-language publication (or citing) does not guarantee internationality for a journal. In their study, Japan, India, and South Korea are closely related by output and percentage of papers in journals with greater than 80% national orientation (slightly below 20%). In addition, they demonstrated that journals by Japanese publishers covered in Scopus have over time shown some broadening in their citation impact. If anything, these latest results suggest that characterizing journals with respect to national or international orientation is even more complex than formerly appreciated: each journal presents different attributes in varying degrees along a spectrum from national to international, making binary labelling challenging. This also implies that different nations publishing across a range of titles have a distinct publication profile, even a publication fingerprint. The subtle ways in which a specific national publishing profile affects national science indicators deserve much more study. With this more nuanced understanding, the solution to remove journals (or individual papers) from indicator datasets, I believe, warrants reconsideration.

Waltman and van Eck have described how they distinguish so-called core and non-core publications and include only core publications in the CWTS Leiden Ranking of some 1,000 universities worldwide (Waltman, & van Eck, 2013a, 2013b; Waltman, 2016). Core items must be in English and published in a core journal. They state:

In the Leiden Ranking, a journal is considered a core journal if it meets the following conditions: The journal has an international scope, as reflected by the countries in which researchers publishing in the journal and citing to the journal are located. The journal has a sufficiently large number of references to other core journals, indicating that the journal is situated in a field that is suitable for citation analysis. … In the calculation of the Leiden Ranking indicators, only core publications are taken into account. Excluding non-core publications ensures that the Leiden Ranking is based on a relatively homogeneous set of publications, namely publications in international scientific journals in fields that are suitable for citation analysis. The use of such a relatively homogeneous set of publications enhances the international comparability of universities (CWTS Leiden Ranking 2019)
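To make the 50%-share notion of a nationally oriented journal discussed in this section concrete, here is a minimal sketch of that rule applied to publishing countries only; the Leiden core-journal test quoted above additionally considers the countries of citing authors and the density of references to other core journals, which are omitted here. The journal names and paper counts are invented for illustration.

```python
from collections import Counter

def national_share(author_countries_per_paper):
    """Return (top_country, share): share of papers with an author from the most frequent country."""
    counts = Counter()
    for countries in author_countries_per_paper:
        for c in set(countries):
            counts[c] += 1
    top, n = counts.most_common(1)[0]
    return top, n / len(author_countries_per_paper)

# Invented journals: each is a list of per-paper author-country sets.
journals = {
    "Journal A": [{"JP"}] * 80 + [{"JP", "US"}] * 10 + [{"CN"}] * 10,
    "Journal B": [{"US"}] * 30 + [{"GB"}] * 25 + [{"DE"}] * 25 + [{"JP"}] * 20,
}

for name, papers in journals.items():
    country, share = national_share(papers)
    label = "nationally oriented" if share > 0.5 else "internationally oriented"
    print(f"{name}: top country {country} ({share:.0%}) -> {label}")
```

On this toy data, Journal A (90% Japanese-authored content) is flagged as nationally oriented and Journal B is not; a national indicator could then be computed with and without the flagged titles to show the sensitivity discussed here.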
The methodology described has undoubtedly improved the validity of indicators on university research performance. It is likely, however, that more investigation both of the character of specific journals and of specific university publishing profiles would refine the Leiden Ranking indicators and reveal groups of similar
institutions, enabling comparisons on a more equal basis. Japanese Imperial universities, such as Tokyo, Kyoto, and Osaka, are a case in point. In the Leiden Ranking, these institutions, long recognized as among the leading research institutions worldwide, seem particularly disadvantaged in terms of citation impact. As discussed, Japanese researchers publish their papers differently than researchers in, say, the United Kingdom or Sweden. Clarivate analysts reviewed the core and non-core journals of the CWTS Leiden Ranking and found many seemingly nationally oriented journals of Japan (and China) included as core journals. This is not a criticism but a statement about the difficulty of characterizing the nature of a journal, to say nothing of the nature of a university's publication profile. Thus, the problem in the construction of reliable national or university indicators is not only what journals are included or excluded, but also how the publication profiles of entities are handled, since those profiles create different levels of citation opportunity. Current approaches to normalized citation impact do not, I believe, fully address this aspect of different patterns of publication for specific nations or institutions.
Discussion

National science indicators for Japan present us with a puzzle. How can it be that an advanced nation, a member of the G7, with high investment in R&D, a total of 18 Nobel Prize recipients since 2000, and an outstanding educational and university system looks more like a developing country than a developed one by these measures? The citation gap between Japan and its G7 partners is enormous and unchanging over decades. Japan's underperformance in citation impact compared to peers seems unlikely to reflect a less competitive or inferior research system to the degree represented. Among FICs briefly reviewed here that likely contribute to lower citation impact for Japan are modest levels of international collaboration and comparatively low levels of mobility. To these may be added Japan's substantial volume of publication in nationally oriented journals that have limited visibility and reduced citation opportunity.

Eugene Garfield decided that journal coverage for the Science Citation Index would be selective rather than encyclopedic. He sought to index what he termed internationally influential journals and expanded coverage using journal citing and cited data from the initial corpus (Garfield, 1979). His approach to journal selection easily gave the impression that the Science Citation Index focused on a group of elite journals of relatively uniform character. He certainly knew that the database included nationally oriented journals that were on average less cited than high-impact internationally oriented journals and that papers in non-English language journals received fewer citations than those in English-language titles. Both subjects were frequently featured in his weekly essays in Current Contents. When Science Citation Index data were used in the creation of national science indicators, however, the variegated nature of SCI-indexed journals was obscured.
As described above, Moed and others demonstrated that the journals of what is now the Web of Science are not homogeneous, and that the distinction between nationally and internationally oriented journals is important in constructing national indicators, more so for some countries than for others. In Japan's case, researchers frequently publish in nationally oriented, academic society journals. Glänzel and colleagues long ago showed that Japan frequently published in low-impact journals and received lower than average citations (Glänzel, Schubert, Braun, 2002). More recent studies describe Japan's continuing use of low-impact and nationally oriented publication venues (Smith, Weinberger, Bruna, Allesina, 2014; Zhou, & Pan, 2015). The extent to which Japanese researchers publish in nationally oriented journals is perhaps not recognized since most of these are English-language titles and often foreign-published. Such a publication profile, determined by the collective decisions of thousands of researchers about where to send their papers, expresses history and ultimately social and cultural preferences. These choices also determine visibility for a nation's research output and citation opportunities.

Nations may have similar or very different publication profiles. For example, Japan, South Korea, India, and China are similar in publishing about 20% of their papers in nationally oriented journals and 80% in internationally oriented journals in the Web of Science, where the definition is whether a journal exhibits less than or more than a 50% share of papers by a single nation. In terms of citation impact, such as CNCI, Japan and South Korea are closely matched in the category of national journals and Japan, Brazil, and India in the category of international journals (Adams et al., 2019b; Potter, 2020). As mentioned, Russia offers a completely different publication profile, and in terms of use of non-English titles, Brazil stands apart. National science indicators do not reveal such similarities or differences in publication profiles. CNCI and other approaches to citation normalization using journal-to-field schemes fail to address the volume of low- or high-impact titles within a category or field for a nation (Waltman, & van Eck, 2013a, 2013b). A nation that disproportionately publishes in lower impact, nationally oriented titles in a category is penalized by normalization procedures. Despite recommendations from Moed, Zitt, Waltman, van Eck and others, nationally oriented journals are not usually excluded. Consequently, standard national science indicators valorize North American and European cultures of publication, in which there is extensive use of high-impact, internationally oriented journals. By favoring this type of publishing culture, the indicators institutionalize it as the standard for research performance. Nations with a different publication profile, such as Japan, South Korea, India, Brazil, and Russia, are then disadvantaged and their research performance misrepresented. It should be recognized, as well, that a cumulative disadvantage process is at work for nations that publish substantially in low visibility, nationally oriented journals (Bonitz, Bruckner, Scharnhorst, 1997).

Conceding that comparisons of nations with dissimilar publication profiles may be invalid, one might focus on the performance of the nation against itself over time. A nation's publication profile, however, may change significantly, as it has for China.
Since the 1990s, Chinese researchers have targeted and favored international journals over national journals. The nation’s rise in citation impact is, in part, a
consequence of this change in publication strategy. Moreover, a change in publication strategy by one nation has knock-on effects for others as baseline measures are modified. Again, China is an example of how one nation’s behavior has influenced and reshaped the entire science system and the indicators used for monitoring the system (Stahlschmidt, & Hinze, 2018). What can be done to address structural biases of standard national science indicators? An obvious alternative for normalization would be a journal-level scheme, which might be usefully employed for journals with a national orientation. However, normalization of citation impact at the journal level frequently yields high scores in low-impact venues and low scores in high-impact venues that when aggregated may be misleading in another way. Any refinement of field normalization would depend on subjective judgments about further transformations and offer less transparency. New indicators are continually proposed but the majority are duplicative to other indicators or provide only incremental improvement. I doubt that national science indicators can be made more accurate because of the multiple, complex, and probably immeasurable ways FICs affect those indicators. A nation’s publication profile has been emphasized here and is only one but a dominant FIC. A suggestion: Provide supplementary data to show how indicators could be biased against specific nations with distinctively different research practices and publishing profiles. For example, relative rates of international collaboration for a nation should be presented in tandem with CNCI or percentage of papers in the top 10%. Revealing the character and scope of a nation’s publishing profile, including its use of nationally or internationally orientated journals and the language of its publications, would help put citation impact indicators in context. A recently issued report by Clarivate on the performance of G20 nations features multiple indicators including citation impact distributions, called impact profiles (Adams, Gurney, Marshall, 2007) and research footprints (radar charts) showing relative output and impact by fields for each country (Adams et al., 2019b). At Clarivate, we have come to speak more and more of “profiles, not metrics” (Adams, McVeigh, Pendlebury, Szomszor, 2019a). An appeal for more data to accompany standard national science indicators is merely recognition of the multidimensional nature of research and a need for multiple indicators to provide a fuller picture of activity and performance, as Moed has advocated (Moed, & Halevi, 2015). The foregoing does not argue that Japan is the scientific superior or even equal of its G7 partners. The nation’s scientific and scholarly research system faces short-term challenges and long-standing structural problems. The need for more international collaboration, greater mobility and more diverse career paths for younger researchers, and increased funding of the higher education sector are acknowledged domestically. Compared to the research performance of other nations, Japan may well and in many areas be a relative laggard. However, if Japan does underperform according to standard national science indicators it likely overperforms with respect to what the indicators are supposed to represent. In this way, the data don’t mean what they say. Thus, it is doubtful that Japan’s research can or should be described as “below world average.”
Conclusion

Henk Moed takes a special interest in the use and misuse of bibliometric indicators in evaluation and policy contexts (Moed, 2005, 2017; Moed & Halevi, 2015). He believes that scientometricians have a responsibility to state clearly, to those outside academia who draw upon these data, the potential and the limits of the indicators they create. For this reason he recently established the journal Scholarly Assessment Reports, "in order to establish optimal conditions for an informed, responsible, effective and fair use of such methodologies and their metrics in actual scholarly assessment practices" (www.scholarlyassessmentreports.org/). Henk has published frequently on aspects of national science indicators, including technical issues and their interpretation. Plainly, national science indicators have a direct influence on policy decisions within governments, but they also have a wider impact on the public, who view these summaries of a nation's research performance as both valid and authoritative. If the press reports these indicators as simple narratives of winners and losers, as it often does, that is even more reason for scientometricians to offer guidance and context.

This essay has attempted to highlight how national science indicators, as currently constructed and presented, may mislead in the case of nations with research practices and publication profiles that are distinctly different from those of North America and Europe. In the spirit of Henk Moed's research, past and continuing, it has attempted to identify deficiencies in what we do and in how we describe our work to outsiders, and especially to point a way forward to possible improvements in the creation, application, and interpretation of national science indicators.

Acknowledgments I would like to thank Jonathan Adams and Martin Szomszor for discussions and Ross Potter for discussions and for the provision of data used here. All are members of the Institute for Scientific Information, Clarivate Analytics, London, UK. In addition, I thank Satoko Ando and Fumitaka Yanagisawa, both of Clarivate Analytics, Tokyo, Japan, for information and discussions. The ideas presented are the responsibility of the author alone.
References Abramo, G., & D’Angelo, C. A. (2007). Measuring science: Irresistible temptations, easy shortcuts and dangerous consequences. Current Science, 93, 762–766. Abramo, G., & D’Angelo, C. A. (2015). The relationship between the number of authors of a publication, its citations and the impact factor of the publishing journal: Evidence from Italy. Journal of Informetrics, 9, 746–761. https://doi.org/10.1016/j.joi.2015.07.003. Adams, J. (1998). Benchmarking international research. Nature, 396, 615–618. https://doi.org/10. 1038/25219. Adams, J. (2013). The fourth age of research. Nature, 497, 557–559. https://doi.org/10.1038/ 497557a. Adams, J., & Gurney, K. A. (2018). Bilateral and multilateral coauthorship and citation impact: Patterns in UK and US international collaboration. Frontiers in Research Metrics and Analytics 3, article number 12. https://doi.org/10.3389/frma.2018.00012.
Adams, J., Gurney, K. A., & Marshall, S. (2007). Profiling citation impact: A new methodology. Scientometrics, 72, 325–344. https://doi.org/10.1007/s11192-007-1696-x. Adams, J., King, C., Miyairi, N., & Pendlebury, D. (2010). Global research report: Japan. Philadelphia, PA: Thomson Reuters. Adams, J., McVeigh, M., Pendlebury, D., & Szomszor, M. (2019a). Profiles, not metrics. Philadelphia, PA: Clarivate Analytics. Adams, J., Rogers, G., & Szomszor, M. (2019b). The annual G20 scorecard—Research performance 2019. Philadelphia, PA: Clarivate Analytics. Adams, J., Pendlebury, D., Potter, R., & Szomszor, M. (2019c). Global research report: Multiauthorship and research analytics. Philadelphia, PA: Clarivate Analytics. Adams, J., Rogers, G., Smart, W., & Szomszor, M. (2020). Longitudinal variation in national research publication portfolios: Steps required to index balance and evenness. Quantitative Science Studies (forthcoming). Aksnes, D. W. (2003). Characteristics of highly cited papers. Research Evaluation. 12 159–170. https://doi.org/10.3152/147154403781776645. Aksnes, D. W., & Sivertsen, G. (2004). The effect of highly cited papers on national citation indicators. Scientometrics, 59, 213–224. https://doi.org/10.1023/B:SCIE.0000018529.58334.eb. Aksnes, D. W., Schneider, J. W., & Gunnarsson, M. (2012). Ranking national research systems by citation indicators. A comparative analysis using whole and fractionalised counting methods. Journal of Informetrics, 6, 36–43. https://doi.org/10.1016/j.joi.2011.08.002. Armitage, C. (2019). Unfinished business. Nature, 567, S7. https://doi.org/10.1038/d41586-01900829-z. Baccini, A., De Nicolao, G., & Petrovich, E. (2019). Citation gaming induced by bibliometric evaluation: A country-level comparative analysis. PLoS ONE 14, article number e0221212. https:// doi.org/10.1371/journal.pone.0221212. Bakare, V., & Lewison, G. (2017). Country over-citation ratios. Scientometrics, 113, 1199–1207. https://doi.org/10.1007/s11192-017-2490-z. Bonitz, M., Bruckner, E., & Scharnhorst, A. (1997). Characteristics and impact of the Matthew Effect for countries. Scientometrics, 40, 407–422. https://doi.org/10.1007/BF02459289. Bornmann, L. (2017). Is collaboration among scientists related to the citation impact of papers because their quality increases with collaboration? An analysis based on data from F1000Prime and normalized citation scores. Journal of the Association for Information Science and Technology, 68, 1036–1047. https://doi.org/10.1002/asi.23728. Bornmann, L. (2019). Does the normalized citation impact of universities profit from certain properties of their published documents—such as the number of authors and the impact factor of publishing journals? A multilevel modeling approach. Journal of Informetrics, 13, 170–184. https://doi.org/10.1016/j.joi.2018.12.007. Bornmann, L., & Leydesdorff, L. (2013). Macro-indicators of citation impacts of six prolific countries: InCites data and the statistical significance of trends. PLoS ONE 8, article number e56768 https://doi.org/10.1371/journal.pone.0056768. Bornmann, L., & Leydesdorff, L. (2015). Does quality and content matter for citedness? A comparison with para-textual factors over time. Journal of Informetrics, 9, 419–429. https://doi.org/ 10.1016/j.joi.2015.03.001. Bornmann, L., Haunschild R., & Mutz, R. (2019). Should citations be field-normalized in evaluative bibliometrics? An empirical analysis based on propensity score matching. (forthcoming). 
Preprint available on https://arxiv.org/abs/1910.11706 Bornmann, L., Schier, H., Marx, W., & Daniel, H.-D. (2012). What factors determine citation counts of publications in chemistry besides their quality? Journal of Informetrics, 6, 11–18. https://doi. org/10.1016/j.joi.2011.08.004. Bornmann, L., Wagner, C., Leydesdorff, L. (2018a). The geography of references in elite articles: Which countries contribute to the archives of knowledge? PLoS ONE 13, article number e0194805. https://doi.org/10.1371/journal.pone.0194805
Bornmann, L., Adams, J., Leydesdorff, L. (2018b). The negative effects of citing with a national orientation in terms of recognition: National and international citations in natural-sciences papers from Germany, the Netherlands, and the UK. Journal of Informetrics 12, 931–949. https://doi. org/10.1016/j.joi.2018.07.009. Coleman, S. (1999). Japanese science: From the inside. Abingdon, UK, and New York, NY: Routledge. ISBN-13: 978-0415201698. Didegah, F., & Thelwall, M. (2013a). Which factors help authors produce the highest impact research? Collaboration, journal and document properties. Journal of Informetrics, 7, 861–873. https://doi.org/10.1016/j.joi.2013.08.006. Didegah, F., & Thelwall, M. (2013b). Determinants of research citation impact in nanoscience and nanotechnology. Journal of the American Society for Information Science and Technology, 64, 1055–1064. https://doi.org/10.1002/asi.22806. Elsevier. (2016). International comparative performance of the UK research base 2016. https:// www.elsevier.com/research-intelligence?a=507321. Garfield, E. (1979). Citation indexing—Its theory and application in science, technology, and humanities. New York, NY: Wiley. ISBN-13: 978-0471025597. Garfield, E. (1987). Is Japanese science a juggernaut? Current Contents, 46, November 16, 3–9. Reprinted in: Eugene Garfield, Peer Review, Refereeing, Fraud, and Other Essays [Essays of an Information Scientist: 1987]. Philadelphia, PA: ISI Press, 342–348. http://www.garfield.library. upenn.edu/essays/v10p342y1987.pdf Glänzel, W. (2001). National characteristics in international scientific co-authorship relations. Scientometrics, 51, 69–115. https://doi.org/10.1023/A:1010512628145. Glänzel, W., & Schubert, A. (2001). Double effort = double impact? A critical view at international coauthorship in chemistry. Scientometrics, 50, 199–214. https://doi.org/10.1023/A: 1010561321723. Glänzel, W., Schubert, A., & Braun, T. (2002). A relational charting approach to the world of basic research in twelve science fields at the end of the second millennium. Scientometrics, 55, 335–348. https://doi.org/10.1023/A:1020406627944. Grupp, H., & Mogee, M. E. (2004). Indicators for national science and technology policy. In: H. F. Moed, W. Glänzel, & U. Schmoch (Eds.) Handbook of quantitative science and technology research: The use of publication and patent statistics in studies of S&T systems (pp. 75–94). Dordrecht, Netherlands: Kluwer Academic Publishers. ISBN-13: 978-1402027024. Guerrero-Bote, V. P., Olmeda-Gomez, C., & de Moya-Anegón, F. (2013). Quantifying the benefits of international scientific collaboration. Journal of the American Society for Information Science and Technology, 64, 392–404. https://doi.org/10.1002/asi.22754. Haiqi, Z., & Yamazaki, S. (1998). Citation indicators of Japanese journals. Journal of the American Society for Information Science, 49, 375–379. https://doi.org/10.1002/(SICI)10974571(19980401)49:4%3c375:AID-ASI7%3e3.0.CO;2-X. Halevi, G., Moed, H. F., & Bar-Ilan, J. (2016). Researchers’ mobility, productivity and impact: Case of top producing authors in seven disciplines. Publishing Research Quarterly, 32, 22–37. https:// doi.org/10.1007/s12109-015-9437-0. Hayashi, T., & Tomizawa, H. (2006). Restructuring the Japanese national research system and its effect on performance. Scientometrics, 68, 241–264. https://doi.org/10.1007/s11192-006-0163-4. Horta, H. (2013). 
Deepening our understanding of academic inbreeding effects on research information exchange and scientific output: New insights for academic based research. Higher Education, 65, 487–510. https://doi.org/10.1007/s10734-012-9559-7. Horta, H., Sato, M., & Yonezawa, A. (2011). Academic inbreeding: Exploring its characteristics and rationale in Japanese universities using a qualitative perspective. Asia Pacific Education Review, 12, 35–44. https://doi.org/10.1007/s12564-010-9126-9. Huang, M. H., Lin, C. S., & Chen, D.-Z. (2011). Counting methods, country rank changes, and counting inflation in the assessment of national research productivity and impact. Journal of the American Society for Information Science and Technology, 62, 2427–2436. https://doi.org/10. 1002/asi.21625.
Jaffe, K. (2011). Do countries with lower self-citation rates produce higher impact papers? Or, does humility pay? Interciencia, 36, 694–698. Katz, J. S., & Hicks, D. (1997). How much is a collaboration worth? A calibrated bibliometric model. Scientometrics, 40, 541–554. https://doi.org/10.1007/BF02459299. Khelfaoui, M., Larrègue, J., Larivière, V., & Gingras, Y. (2020). Measuring national self-referencing patterns of major science producers. Scientometrics, (forthcoming). King, D. A. (2004). The scientific impact of nations. Nature, 430, 311–316. https://doi.org/10.1038/ 430311a. Ladle, R. J., Todd, P. A., & Malhado, A. C. M. (2012). Assessing insularity in global science. Scientometrics, 93, 745–750. https://doi.org/10.1007/s11192-012-0703-z. Larivière, V., Gingras, Y., Sugimoto, C. R., & Tsou, A. (2015). Team size matters: Collaboration and scientific impact since 1900. Journal of the Association for Information Science and Technology, 66, 1323–1332. https://doi.org/10.1002/asi.23266. Larivière, V., Gong, K., & Sugimoto, C. R. (2018). Citations strength begins at home. Nature, 564, S70–S71. https://doi.org/10.1038/d41586-018-07695-1. Larsen, P. O., Maye, I., & von Ins, M. (2008). Scientific output and impact: Relative positions of China, Europe, India, Japan, and the USA. COLLNET Journal of Scientometrics and Information Management, 2, 1–10. https://doi.org/10.1080/09737766.2008.10700848. Leimu, R., & Koricheva, J. (2005). What determines the citation frequency of ecological papers? Trends in Ecology and Evolution, 20, 28–32. https://doi.org/10.1016/j.tree.2004.10.010. López-Illescas, C., de Moya Anegón, & Moed, H. F. (2009). Comparing bibliometric countryby-country rankings derived from the Web of Science and Scopus: The effect of poor cited journals in oncology. Journal of Information Science 35, 244–256. https://doi.org/10.1177/ 016555150809860. López-Illescas, C., de Moya-Anegón, F., & Moed, H. F. (2011). A ranking of universities should account for differences in their disciplinary specialization. Scientometrics, 88, 563–574. https:// doi.org/10.1007/s11192-011-0398-6. May, R. M. (1997). The scientific wealth of nations. Science, 275, 793–796. https://doi.org/10.1126/ science.275.5301.793. McNeill, D. (2019). Reaching out: Japan seeks to boost its scientific research performance by transforming its insular universities to better accommodate international collaboration. Nature, 567, S9–S11. https://doi.org/10.1038/d41586-019-00830-6. Minasny, B., Hartemink, A. E., & McBratney, A. (2010). Individual, country, and journal selfcitation in soil science. Geoderma, 155, 434–438. https://doi.org/10.1016/j.geoderma.2009. 12.003. Moed, H. F. (2002). Measuring China’s research performance using the Science Citation Index. Scientometrics, 53, 281–296. https://doi.org/10.1023/A:1014812810602. Moed, H. F. (2005). Citation analysis in research evaluation. Dordrecht, The Netherlands: Springer. ISBN-13: 978-1402037139. Moed, H. F. (2017). Applied evaluative informetrics. Dordrecht, The Netherlands: Springer. ISBN13: 978-3319605210. Moed, H. F., & Halevi, G. (2014). A bibliometric approach to tracking international scientific migration. Scientometrics, 101, 1987–2001. https://doi.org/10.1007/s11192-014-1307-6. Moed, H. F., & Halevi, G. (2015). Multidimensional assessment of scholarly research impact. Journal of the Association for Information Science and Technology, 66, 1988–2002. https://doi. org/10.1002/asi.23314. Moed, H. F., de Moya-Anegón, F., López-Illescas, C., & Visser, M. (2011). 
Is concentration of university research associated with better research performance? Journal of Informetrics, 5, 649– 658. https://doi.org/10.1016/j.joi.2011.06.003. Moed, H. F., Aisati, M., & Plume, A. (2013). Studying scientific migration in Scopus. Scientometrics, 94, 929–942. https://doi.org/10.1007/s11192-012-0783-9. Moed, H. F., de Moya Anegón, F., Guerrero-Bote, V., & López-Illescas, C. (2020). Are nationally oriented journals indexed in Scopus becoming more international? The effect of publication
language and access modality. Journal of Informetrics (forthcoming).. https://doi.org/10.1016/j. joi.2020.101011. Morichika, N., & Shibayama, S. (2015). Impact on scientific productivity: A case study of a Japanese university department. Research Evaluation, 24, 146–157. https://doi.org/10.1093/ reseval/rvv002. Narin, F., & Frame, J. D. (1988). The growth of Japanese science and technology. Science, 245, 600–605. https://doi.org/10.1126/science.245.4918.600. Narin, F., Stevens, K., & Whitlow, E. S. (1991). Scientific co-operation in Europe and the citation of multinationally authored papers. Scientometrics, 21, 313–323. https://doi.org/10.1007/ BF02093973. Narin, F., Hamilton, K. S., & Olivastro, D. (2000). The development of science indicators in the United States. In: B. Cronin & H. B. Atkins (Eds.). The web of knowledge: A festschrift in honor of Eugene Garfield (pp. 337–360). Medford, NJ: Information Today, Inc. ISBN-13: 9781573870993. Negishi, M., Sun, Y., & Shigi, K. (2004). Citation database for Japanese papers: A new bibliometric tool for Japanese academic society. Scientometrics, 60, 333–351. https://doi.org/10.1023/B:SCIE. 0000034378.38698.b2. Onodera, N., & Yoshikane, F. (2015). Factors affecting citation rates of research articles. Journal of the Association for Information Science and Technology, 66, 739–764. https://doi.org/10.1002/ asi.23209. Pianta, M., & Archibugi, D. (1991). Specialization and size of scientific activities: A bibliometric analysis of advanced countries. Scientometrics, 22, 341–358. https://doi.org/10.1007/ BF02019767. Potter, R. (2020). Personal communication. Puuska, H.-M., Muhonen, R., & Leino, Y. (2014). International and domestic co-publishing and their citation impact in different disciplines. Scientometrics, 98, 823–839. https://doi.org/10.1007/ s11192-013-1181-7. Robinson-Garcia, N., Sugimoto, C. R., Murray, D., Yegros-Yegros, A., Larivière, & Costas, R. (2019). The many faces of mobility: Using bibliometric data to measure the movement of scientists. Journal of Informetrics 13, 50–63. https://doi.org/10.1016/j.joi.2018.11.002 Saegusa, A. (1999). Survey finds deep insularity among Japanese scientists. Nature, 401, 314. https://doi.org/10.1038/43740. Sawa, T. (2019). The global decline of Japanese universities. Japan Times. https://www.japantimes. co.jp/opinion/2019/01/18/commentary/japan-commentary/global-decline-japanese-universities/ #.Xkc5NGhKjIU Shehatta, I., & Al-Rubaish, A. M. (2019). Impact of country self-citations on bibliometric indicators and ranking of most productive countries. Scientometrics, 120, 775–791. https://doi.org/10.1007/ s11192-019-03139-3. Shibayama, S., & Baba, Y. (2015). Impact-oriented science policies and scientific publication practices: The case of life sciences in Japan. Research Policy, 44, 936–950. https://doi.org/10.1016/ j.respol.2015.01.012. Smith, M. J., Weinberger, C., Bruna, E. M., & Allesina, S. (2014). The scientific impact of nations: Journal placement and citation performance. PLoS ONE 9, article number e109195. https://doi. org/10.1371/journal.pone.0109195 Stahlschmidt, S., & Hinze, S. (2018). The dynamically changing publication universe as a reference point in national impact evaluation: A counterfactual case study on the Chinese publication growth. Frontiers in Research Metrics and Analytics, 3, article number 30. https://doi.org/10. 3389/frma.2018.00030 Suda, M. (2019). China rises to world no. 2 in science research while Japan declines: Survey. The Mainichi. 
https://mainichi.jp/english/articles/20190506/p2a/00m/0na/002000c Sugimoto, C. R., Robinson-Garcia, N., Murray, D. S., Yegros-Yegros, A., Costas, R., & Larivière, V. (2017). Scientists have most impact when they’re free to move. Nature, 550, 29–31. https:// doi.org/10.1038/550029a.
Sun, Y., Kakinuma, S., Negishi, M., & Nisizawa, M. (2008). Internationalizing academic research activities in Japan. COLLNET Journal of Scientometrics and Information Management, 2, 11–19. https://doi.org/10.1177/1028315315574102. Tahamtan, I., & Bornmann, L. (2018). Core elements in the process of citing publications: Conceptual overview of the literature. Journal of Informetrics, 12, 203–216. https://doi.org/10.1016/ j.joi.2018.01.002. Tahamtan, I., Afshar, A. S., & Ahamdzadeh, K. (2016). Factors affecting number of citations: A comprehensive review of the literature. Scientometrics, 107, 1195–1225. https://doi.org/10.1007/ s11192-016-1889-2. Tang, L., Shapira, P., & Youtie, J. (2015). Is there a clubbing effect underlying Chinese research citation increases? Journal of the Association for Information Science and Technology, 66, 1923– 1932. https://doi.org/10.1002/asi.23302. Thelwall, M., & Maflahi, N. (2020). Academic collaboration rates and citation associations vary substantially between countries and fields. Journal of the Association for Information Science and Technology, (forthcoming). Preprint available on https://arxiv.org/abs/1910.00789 Thelwall, M., & Sud, P. (2016). National, disciplinary and temporal variations in the extent to which articles with more authors have more impact: Evidence from a geometric field normalized citation indicator. Journal of Informetrics, 10, 48–61. https://doi.org/10.1016/j.joi.2015.11.007. Traag, V. A. (2019). Inferring the causal effect of journals on citations. (forthcoming). Preprint available on https://www.arxiv.org/pdf/1912.08648.pdf van Leeuwen, T. N., Moed, H. F., Tijssen, R. J. W., Visser, M. S., & van Raan, A. F. J. (2000). First evidence of serious language-bias in the use of citation analysis for the evaluation of national science systems. Research Evaluation, 9, 155–156. https://doi.org/10.3152/147154400781777359. van Leeuwen, T. N., Moed, H. F., Tijssen, R. J. W., Visser, M. S., & van Raan, A. F. J. (2001). Language bias in the coverage of the Science Citation Index and its consequences for international comparisons of national research performance. Scientometrics, 51, 335–346. https://doi.org/10. 1023/A:1010549719484. van Raan, A. F. J. (1998). The influence of international collaboration on the impact of research results: Some simple mathematical considerations concerning the role of self-citations. Scientometrics, 42, 423–428. https://doi.org/10.1007/BF02458380. van Raan, A. F. J., van Leeuwen, T. N., & Visser, M. S. (2011). Severe language effect in university rankings: Particularly Germany and France are wronged in citation-based rankings. Scientometrics, 88, 495–498. https://doi.org/10.1007/s11192-011-0382-1. Wagner, C. S., & Jonkers, K. (2017). Open countries have strong science. Nature 550, 32–33. https:// doi.org/10.1038/550032a. Wagner, C. S., Whetsell, T., Baas, J., & Jonkers, K. (2018). Openness and impact of leading scientific countries. Frontiers in Research Metrics and Analytics, 3, article number 10. https://doi.org/10. 3389/frma.2018.00010 Waltman, L. (2016). A review of the literature on citation impact indicators. Journal of Informetrics, 10, 365–391. https://doi.org/10.1016/j.joi.2016.02.007. Waltman, L., & van Eck, N. J. (2013a). Source normalized indicators of citation impact: An overview of different approaches and an empirical comparison. Scientometrics, 96, 699–716. https://doi. org/10.1007/s11192-012-0913-4. Waltman, L., & van Eck, N. J. (2013b). 
A systematic empirical comparison of different approaches for normalizing citation impact indicators. Journal of Informetrics, 7, 833–849. https://doi.org/ 10.1016/j.joi.2013.08.002. Waltman, L., & van Eck, N. J. (2015). Field-normalized citation impact indicators and the choice of an appropriate counting method. Journal of Informetrics, 9, 872–894. https://doi.org/10.1016/ j.joi.2015.08.001. Waltman, L., & van Eck, N. J. (2019). Field normalization of scientometric indicators. In: W. Glänzel, H. F. Moed, U. Schmoch, & M. Thelwall (Eds.). Springer handbook of science and technology indicators (pp. 281–300). Cham, Switzerland: Springer. ISBN-13: 978-3030025106.
Zhou, P., & Pan, Y. (2015). A comparative analysis of publication portfolios of selected economies. Scientometrics, 105, 825–842. https://doi.org/10.1007/s11192-015-1707-2. Zitt, M. (2015). The excesses of research evaluation: The proper use of bibliometrics. Journal of the Association of Information Science and Technology, 66, 2171–2176. https://doi.org/10.1002/ asi.23519. Zitt, M., & Bassecoulard, E. (1998). Internationalization of scientific journals: How international are the international journals? Scientometrics, 41, 255–271. https://doi.org/10.1007/BF02457982. Zitt, M., & Bassecoulard, E. (1999). Internationalization of communication: A view on the evolution of scientific journals. Scientometrics, 46, 669–685. https://doi.org/10.1007/BF02459619. Zitt, M., Perrot, F., & Barré, R. (1998). The transition from ‘national’ to ‘transnational’ model and related measures of countries’ performance. Journal of the American Society for Information Science, 49, 30–42. https://doi.org/10.1002/(SICI)1097-4571(1998)49:1%3c30:AID-ASI5%3e3.0. CO;2-3. Zitt, M., Ramanana-Rahary, S., & Bassecoulard, E. (2003). Correcting glasses help fair comparisons in international science landscape: Country indicators as a function of ISI database delineation. Scientometrics, 56, 259–282. https://doi.org/10.1023/A:1021923329277.
Origin and Impact: A Study of the Intellectual Transfer of Professor Henk F. Moed's Works by Using Reference Publication Year Spectroscopy (RPYS)

Yong Zhao, Jiayan Han, Jian Du, and Yishan Wu
Introduction

The way humans pass down their skills and knowledge to the next generations, and the process by which scientific knowledge grows, have long been important research topics in the philosophy, sociology, and history of science. There are different theories and views among scientists on how new scientific theories are predicated upon old ones and how scientific knowledge grows, such as Francis Bacon's cumulative view of scientific progress, Karl Popper's falsificationist methodology, and Thomas Kuhn's concept of the paradigm shift. Similarly, scientists have always been interested in the relationship between a researcher's work and the existing intellectual foundation laid by the past scientific community. When dealing with questions of scientific development, researchers in the philosophy, sociology, and history of science usually focus on historical events, while the earliest scientometricians focused their research on citations in scientific papers, with an eye on the relationship between the growth of scientific knowledge and the scientific literature (Price, 1965). Since 1963, when the Science Citation Index (SCI) was created by Eugene Garfield, citation analysis as a research method has played a significant role in measuring scientific progress, detecting characteristics of disciplinary structure, and assessing the impact of research. Meanwhile, a new and interesting
research field in the history of science, called the quantitative study of the history of science, was created. In the article Why Do We Need Algorithmic Historiography?, published in 2003, Garfield, Pudovkin, and Istomin (2003) proposed creating historiographs of scholarly topics in a given field based on citation relationships, using a new program called HistCite. HistCite presents a genealogical profile of the evolution of concepts in a given field and helps users identify milestone events (papers) and core people (authors) in that field. In 2014, Marx, Bornmann, Barth, and Leydesdorff introduced a citation-based method, Reference Publication Year Spectroscopy (RPYS), that can identify the most frequently cited publications in a field, about a topic, or by a researcher. RPYS is especially suitable for studying the historical roots of fields, topics, or researchers. In recent years RPYS has been applied to examine the historical roots of research areas such as iMetrics (Leydesdorff, Bornmann, Marx, & Milojevic, 2013), the Higgs boson (Barth, Marx, Bornmann, & Mutz, 2014), the legend of Darwin's finches (Marx, Bornmann, Barth, & Leydesdorff, 2014), the global positioning system (Comins & Hussey, 2015), citation analysis (Hou, 2017), climate change (Marx, Haunschild, Thor, & Bornmann, 2017), academic efficiency (Rhaiem & Bornmann, 2018) and the visual analog scale (Yeung & Wong, 2019), as well as the historical roots of individual scientists' works (Bornmann, Haunschild, & Leydesdorff, 2018; Zhao & Wu, 2017).

In this study, we trace the intellectual transfer paths of Prof. Moed's works by using RPYS from two perspectives. The first is the historical roots of his works: we trace the origin of Moed's academic ideas and examine the important references he cited that made significant contributions to his work. The second is his academic contribution: we study Moed's impact on bibliometrics and informetrics and identify his most influential works.
Dataset

The study is based on all papers published by Moed during 1985–2018. We used his name to retrieve his publications from the Web of Science Core Collection (WoSCC) on January 2, 2019. Our first dataset consists of the 163 papers he published, of which 110 have cited references. In addition, 3,090 papers citing Moed's 163 papers were retrieved from WoSCC. After removing self-citations, 3,013 papers remain in the second dataset.
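As a minimal sketch of this dataset construction, the snippet below splits retrieved records into the two datasets described above. The record format (dictionaries with keys "authors" and "cited_refs") and the crude author-name test are assumptions made for illustration; this is not the authors' actual retrieval or cleaning pipeline.

```python
# Minimal sketch of the two-dataset split; record layout is hypothetical.
from typing import Dict, List, Tuple

def build_datasets(moed_papers: List[Dict], citing_papers: List[Dict]) -> Tuple[List[Dict], List[Dict]]:
    # Dataset 1: Moed's own papers that carry at least one cited reference.
    dataset1 = [p for p in moed_papers if p.get("cited_refs")]

    # Dataset 2: papers citing Moed, with self-citations removed
    # (here simply: any citing paper that lists Moed among its authors).
    def is_self_citation(paper: Dict) -> bool:
        return any("moed" in author.lower() for author in paper.get("authors", []))

    dataset2 = [p for p in citing_papers if not is_self_citation(p)]
    return dataset1, dataset2

# Toy example
d1, d2 = build_datasets(
    [{"authors": ["Moed, H.F."], "cited_refs": ["Garfield 1979"]}],
    [{"authors": ["Moed, H.F.", "Halevi, G."]}, {"authors": ["Leydesdorff, L."]}],
)
print(len(d1), len(d2))  # 1 1
```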
Methods

Reference Publication Year Spectroscopy (RPYS)

Reference Publication Year Spectroscopy (RPYS) is based on analysing how frequently references are cited in the publications of a specific research field, broken down by the publication years of those cited references. For any given reference publication year n, it visualizes the number of cited references as a deviation from the 5-year median of the surrounding years (n − 2, n − 1, n, n + 1, n + 2). Peaks in this distribution correspond to years containing a larger number of cited references within these discrete bins of time. Thor, Marx, Leydesdorff, and Bornmann (2016) developed the CRExplorer software, a program specifically designed for RPYS. The removing, clustering, and merging functionalities of CRExplorer were used to clean the cited reference (CR) dataset in our study, especially to handle variants of the same CR.

To help users identify the CRs with the greatest impact in a given paper set, CRExplorer provides six indicators, including the number of times a cited reference has been cited (N_CR), the proportion of a CR's citations among all CRs in the same RPY (PERC_YR), and the proportion of a CR's citations among all CRs over all RPYs (PERC_ALL). Several CRs are cited in publications with different publication years, which is captured by the indicator N_PYEARS. The next indicator is the percentage of years in which the CR was cited at least once, relative to the total number of citing years (PERC_PYEARS). N_TOP50, N_TOP25, and N_TOP10 respectively mark those CRs belonging to the 50%, 25%, or 10% most frequently cited publications over many citing publication years.

Previous studies have shown that only the papers following the trajectory of a "sticky knowledge claim" can be expected to have a sustained impact (Baumgartner & Leydesdorff, 2014), and that influential publications embodying intellectual transfer are usually characterized by a large number of citations and a longer citation period (Shang, Feng, & Sun, 2016). Therefore, our study applies RPYS with three indicators: N_CR, N_PYEARS, and N_TOP10. We rank the values of the three indicators in descending order and define those papers whose three indicators all rank in the top 1% as influential publications of intellectual transfer. Using this method, we identify the historical roots of Moed's works from a dataset containing all papers authored by Moed and all references of those papers (Footnote 1). A second dataset covers all publications citing Moed's papers and their references; this dataset is used to identify his contributions to bibliometrics and informetrics.

Footnote 1: In research practice, there is often a synergetic effect between the insight a scientist gains from other scholars' work ("standing on the shoulders of giants", as Newton said) and his or her own inspiration. Therefore, we did not exclude Moed's self-citations here, because some firm beliefs or assumptions of a scientist are often part of the historical roots of his or her intellectual ideas. Bornmann et al. (2018) likewise included self-citations of E. Garfield when examining Garfield's works with RPYS.
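To make the spectroscopy and the indicator-based selection more tangible, here is a compact sketch of the underlying computation: counts of cited references per reference publication year and their deviation from the 5-year median. It illustrates the principle only and is not the CRExplorer implementation.

```python
# Illustrative RPYS sketch: count cited references per reference publication
# year (RPY) and express each year's count as its deviation from the median
# of the surrounding five years, so that peak years stand out.
from collections import Counter
from statistics import median

def rpys_spectrum(reference_years):
    counts = Counter(reference_years)                      # N_CR per RPY
    spectrum = {}
    for year in range(min(counts), max(counts) + 1):
        window = [counts.get(year + d, 0) for d in (-2, -1, 0, 1, 2)]
        spectrum[year] = counts.get(year, 0) - median(window)
    return counts, spectrum

# Toy example: reference publication years harvested from a set of papers.
counts, spectrum = rpys_spectrum(
    [1926, 1979, 1979, 1979, 1983, 1983, 1989, 1989, 1996, 1996, 2005]
)
peaks = sorted(year for year, deviation in spectrum.items() if deviation > 0)
print(peaks)  # years whose citation counts rise above the local 5-year median
```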
Co-citation Analysis of Influential Publications

To measure the semantic similarity of the influential publications identified by the RPYS analysis of the two datasets, we detected the co-citation relationships among these publications and grouped similar publications into a number of clusters using the VOSviewer software.
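A small sketch of the co-citation counting that underlies such a clustering is given below. The input format is assumed for illustration; VOSviewer itself reads its own network or corpus files.

```python
# Sketch of co-citation counting: two influential publications are co-cited
# whenever they appear together in the reference list of the same citing paper.
from collections import Counter
from itertools import combinations

def cocitation_counts(reference_lists, influential):
    pairs = Counter()
    for refs in reference_lists:
        present = sorted(set(refs) & influential)
        for a, b in combinations(present, 2):
            pairs[(a, b)] += 1
    return pairs

influential = {"Garfield 1979", "Martin & Irvine 1983", "Moed et al. 1985"}
citing_papers = [
    ["Garfield 1979", "Martin & Irvine 1983", "Some other ref"],
    ["Garfield 1979", "Moed et al. 1985"],
    ["Garfield 1979", "Martin & Irvine 1983", "Moed et al. 1985"],
]
print(cocitation_counts(citing_papers, influential))
# Pairs with high co-citation counts end up in the same cluster of the map.
```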
Results

Prof. Henk F. Moed and His Papers

Prof. Henk F. Moed was the winner of the 1999 Derek John de Solla Price Award, the premier international award of excellence in scientometrics. From 1986 onward, Moed was a senior staff member of the Centre for Science and Technology Studies (CWTS) in the Department of Social Sciences at Leiden University. He was awarded his Ph.D. degree by Leiden University in 1989 and was professor of Research and Evaluation Methodology. He was Elsevier's Senior Scientific Advisor from March 2010 to November 2014, and from September 2014 to December 2015 he was a visiting professor at the Department of Computer, Automatic and Management Engineering Antonio Ruberti (DIAG) of Sapienza University of Rome.

From 1985 to 2018, Moed published 163 papers in 23 journals, seven conference proceedings and one book. In terms of article types, there are 105 articles (64.4%), 29 book chapter articles (17.8%), and 25 proceedings papers (15.3%). Table 1 shows the document sources in which Moed published at least five papers. The journals shown cover 58.9% of his publication output.

Table 1 Document sources with at least 5 papers authored by Prof. Moed
Publication source | Type | Papers | Percentage
Scientometrics | Journal | 47 | 28.8%
Citation analysis in research evaluation | Book | 29 | 17.8%
JASIST | Journal | 12 | 7.4%
Journal of Informetrics | Journal | 9 | 5.5%
Allergy | Journal | 8 | 4.9%
Nature | Journal | 7 | 4.3%
Research evaluation | Journal | 7 | 4.3%
Research policy | Journal | 6 | 3.7%

Fig. 1 Number of Prof. Moed's publications and citations (x-axis: publication year, 1985–2018; y-axes: number of publications and number of citations)

In 2005, Springer published his monograph Citation Analysis in Research Evaluation, which gives a systematic introduction to the nature of citation analysis, the construction of citation indicators, as well as both the advantages and disadvantages
of citation analysis in research evaluation. Twenty-nine individual chapters of this book were indexed in WoSCC as if they were independent publications. The book discussed the theoretical positions of several scholars (i.e., E. Garfield, H. Small, R.K. Merton, and H. Zuckerman) who had contributed to a more profound understanding of citation-based indicators; for that purpose, Moed quoted and briefly discussed key passages from their works.

Moed published 99 papers (60.7%) as first or corresponding author, and on 60 of these (36.8%) he was the sole author. He co-authored publications with 87 scholars from 16 countries. Four scholars co-authored more than ten papers with Moed: A.F.J. van Raan (18 papers), Thed N. van Leeuwen (15 papers), Marc Luwel (13 papers), and Martijn S. Visser (11 papers).

Figure 1 shows Moed's annual publications and citations. Over the 34 years from 1985 to 2018, he published 4.8 papers on average each year. A publication peak is observed in 2005, when he published his monograph Citation Analysis in Research Evaluation, whose 29 chapters count as 29 articles according to the indexing criteria of WoSCC. His papers received 4,860 citations, i.e., 142.9 citations on average each year. We observe a marked growth in yearly citation frequency in both 2005 and 2009, with a peak of 438 citations in 2013.
Historical Roots of Prof. Moed's Works

Figure 2 illustrates, using RPYS, the references cited in Moed's papers, which were published between 1926 and 2018. It shows that the earliest paper Moed cited is Lotka's article on the frequency distribution of scientific productivity, published in the Journal of the Washington
Academy of Sciences in 1926; the law that article formulated, Lotka's Law, is one of the three well-known laws of bibliometrics, together with Bradford's Law and Price's Law. Five peaks in the cited references of Moed's papers are clearly visible, in 1979, 1983, 1989, 1996 and 2005.

Fig. 2 RPYS of Prof. Moed's papers (x-axis: cited reference year; y-axis: cited references; series: number of cited references (NCR) and deviation from the 5-year median (Median-5))

Table 2 shows the 12 most important publications that Prof. Moed cited. These publications appear in the reference lists of 67 of Moed's papers, accounting for 60.9% of his total papers. Furthermore, 43 of Moed's papers cited at least two of the important publications shown in Table 2. The important publications also include four of his own journal articles and one of his own books, which reflect his main academic ideas. We also note that these most important publications were published from the end of the 1970s to the mid-1990s, a period corresponding to Moed's early academic career.

Figure 3 shows the co-citation network and the cluster analysis of the 12 important publications in Table 2. At first glance, it is easy to see that Garfield's book published in 1979 (No. 2 CR in Table 2) occupies a central position in the co-citation network. That book describes the nature and history of the development of citation indexing, the design and production of a citation index, and the applications of citation indexing for bibliographic searching as well as for understanding science, scientists, and scientific journals. It is the most frequently cited reference (28 times) and the one cited over the longest period (15 years) in Moed's papers. The important publications can be divided into three clusters: (1) the use of bibliometric indicators for scientific assessment; (2) measures of journal citation impact; and (3) the development of appropriate quantitative research assessment methodologies for various domains of science and scholarship as well as for various organizational levels.
Table 2 Important publications Prof. Moed cited
No. | CR | RPY | N_CR | N_PYEARS | N_TOP10
1 | Garfield, E. (1972). Citation analysis as a tool in journal evaluation: journals can be ranked by frequency and impact of citations for science policy studies. Science, 178(4060), 471–479 | 1972 | 10 | 8 | 8
2 | Garfield, E. (1979). Citation Indexing. Its theory and application in science, technology and humanities. New York: Wiley | 1979 | 28 | 15 | 13
3 | Martin, B.R., & Irvine, J. (1983). Assessing basic research: some partial indicators of scientific progress in radio astronomy. Research Policy, 12(2), 61–90 | 1983 | 14 | 10 | 9
4 | Moed, H.F., Burger, W.J.M., Frankfort, J.G., & Van Raan, A.F.J. (1985). The use of bibliometric data for the measurement of university research performance. Research Policy, 14(3), 131–149 | 1985 | 13 | 11 | 9
5 | Schubert, A., Glänzel, W., & Braun, T. (1989). Scientometric data files. A comprehensive set of indicators on 2649 journals and 96 countries in all major science fields and subfields 1981–1985. Scientometrics, 16(1), 3–478 | 1989 | 9 | 7 | 7
6 | Moed, H.F., Bruin, R.E.D., & Van Leeuwen, T.N. (1995). New bibliometric tools for the assessment of national research performance: database description, overview of indicators and first applications. Scientometrics, 33(3), 381–422 | 1995 | 20 | 12 | 10
7 | Moed, H.F., & Van Leeuwen, T.N. (1995). Improving the accuracy of Institute for Scientific Information's journal impact factors. Journal of the American Society for Information Science, 46(6), 461–467 | 1995 | 13 | 7 | 5
8 | Moed, H.F., & Van Leeuwen, T.N. (1996). Impact factors can mislead. Nature, 381(6579), 186–186 | 1996 | 15 | 9 | 7
9 | Garfield, E. (1996). How can impact factors be improved? British Medical Journal, 313(7054), 411–413 | 1996 | 10 | 7 | 7
10 | Van Raan, A.F.J. (1996). Advanced bibliometric methods as quantitative core of peer review based evaluation and foresight exercises. Scientometrics, 36(3), 397–420 | 1996 | 11 | 8 | 6
11 | Van Raan, A.F.J. (2004). Measuring science. In H.F. Moed, W. Glänzel, & U. Schmoch (Eds.), Handbook of quantitative science and technology research. The use of publication and patent statistics in studies of S&T systems (pp. 19–50). Dordrecht, the Netherlands: Kluwer Academic Publishers | 2004 | 9 | 7 | 7
12 | Moed, H.F. (2005). Citation analysis in research evaluation. Dordrecht, Netherlands: Springer | 2005 | 19 | 11 | 9
Note: The publications shown in bold type in the printed table (Nos. 4, 6, 7, 8, and 12) are Prof. Moed's own works
Fig. 3 Co-citation network of important papers cited by Moed. Note Normalization method: Fractionalization; Co-citation frequency ≥3
The first cluster in Fig. 3 includes three papers published in the 1980s (No. 3, No. 4, and No. 5 CRs in Table 2). During his early career, Moed explored the use of bibliometric indicators for the measurement of research groups, departments, institutions, and countries. He accepted the view of Martin and Irvine (1983) that impact and scientific quality were not identical concepts, and further developed the
notion of the multi-dimensional research assessment matrix, which built upon the notion of the multi-dimensionality of research expressed in the above-mentioned article by Ben Martin. He also shared the view of Schubert, Glänzel, and Braun (1989) that each scientific field has its own citing practices.

The second cluster of CRs, related to measures of journal citation impact, is the largest group in Fig. 3. This group includes Garfield's three publications on journal citation measures (No. 1, No. 2, and No. 9 CRs in Table 2). In the mid-1990s, Moed's research focused on the potential and drawbacks of journal impact factors (No. 7 and No. 8 CRs). He combined Garfield's notion of a field's "citation potential" with the idea, proposed by Small and Sweeney, of correcting for differences in citation potential across subject fields, and developed a novel indicator: SNIP (Source Normalized Impact per Paper). SNIP measures a journal's citation rate per cited reference in the documents of the journal's subject field, and gives an approximate answer to the question of what the value of a given journal's impact factor would be if one corrected for the fact that in some fields authors habitually cite, on average, more papers than in other fields. In his monograph published by Springer in 2005 (No. 12 CR), Moed argued that citation counts can be conceived as manifestations of intellectual influence, but that the concepts of citation impact and intellectual influence do not coincide. Outcomes of citation analysis must be valued within a qualitative, evaluative framework that takes into account the substantive contents of the works under evaluation, and the conditions for proper use of bibliometric indicators at the level of individual scholars, research groups or university departments tend to be more readily satisfied in a peer review context than in a policy context.

The third cluster in Fig. 3 includes two papers authored by Moed's colleague A.F.J. van Raan, professor of Quantitative Science Studies at the Centre for Science and Technology Studies (CWTS), Leiden University, whose research focuses on the application of bibliometric indicators in research evaluation. Van Raan presented an overview of the potential and limitations of bibliometric methods for assessing strengths and weaknesses in research performance and for monitoring scientific developments (No. 10 CR), and gave a thorough survey of the main methodologies applied in the measurement of scientific activity (No. 11 CR). Moed derived significant inspiration from these works. In addition, Moed gave an outline of a new bibliometric database and the types of bibliometric indicators, and demonstrated their application potential for research performance assessment at the institutional and country levels (No. 6 CR). He suggested that for more technical details on the use of citation analysis in research evaluation the reader should refer to Moed et al. (1995) (No. 6 CR), and for more examples to Van Raan (2004) (No. 11 CR).
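As a schematic recap of the SNIP idea discussed above, and only as a rough sketch (see Moed, 2010, for the precise operationalization of the citation window and of database citation potential), the indicator divides a journal's raw citations per paper by the relative citation potential of its subject field:

```latex
% Schematic sketch of SNIP; RIP and RDCP follow the verbal description above,
% not the full operational definition.
\[
  \mathrm{SNIP} = \frac{\mathrm{RIP}}{\mathrm{RDCP}}, \qquad
  \mathrm{RIP} = \frac{\text{citations received by the journal's papers}}{\text{number of papers in the journal}}, \qquad
  \mathrm{RDCP} = \frac{\text{mean number of cited references per paper in the journal's subject field}}{\text{median of this quantity over all journals in the database}}
\]
```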
Academic Contributions of Moed's Works

Based on the RPYS analysis of the 3,013 publications citing Moed's papers, his 28 influential papers were identified (Table 3). These papers were published between 1985 and 2010.
Table 3 Prof. Moed's influential papers
No. | CR | RPY | N_CR | N_PYEARS | N_TOP10
1 | Moed, H.F., Burger, W.J.M., Frankfort, J.G., & Van Raan, A.F.J. (1985). The use of bibliometric data for the measurement of university research performance. Research Policy, 14(3), 131–149 | 1985 | 237 | 33 | 32
2 | Moed, H.F. (1985). The application of bibliometric indicators: important field- and time-dependent factors to be considered. Scientometrics, 8(3-4), 177–203 | 1985 | 70 | 30 | 30
3 | Moed, H.F., & Vriens, M. (1989). Possible inaccuracies occurring in citation analysis. Journal of Information Science, 15(2), 95–107 | 1989 | 48 | 22 | 22
4 | Vianen, B.G.V., Moed, H.F., & Van Raan, A.F.J. (1990). An exploration of the science base of recent technology. Research Policy, 19(1), 61–81 | 1990 | 52 | 23 | 23
5 | Braam, R.R., Moed, H.F., & Van Raan, A.F.J. (1991). Mapping of science by combined co-citation and word analysis. 1. Structural aspects. Journal of the American Society for Information Science, 42(4), 233–251 | 1991 | 188 | 26 | 26
6 | Braam, R.R., Moed, H.F., & Van Raan, A.F.J. (1991). Mapping of science by combined co-citation and word analysis. 2. Dynamic aspects. Journal of the American Society for Information Science, 42(4), 252–266 | 1991 | 116 | 25 | 25
7 | Moed, H.F., Bruin, R.E.D., Nederhof, A.J., & Tijssen, R.J.W. (1991). International scientific co-operation and awareness within the European community: problems and perspectives. Scientometrics, 21(3), 291–311 | 1991 | 43 | 23 | 23
8 | Moed, H.F., Bruin, R.E.D., & Van Leeuwen, T.N. (1995). New bibliometric tools for the assessment of national research performance: database description, overview of indicators and first applications. Scientometrics, 33(3), 381–422 | 1995 | 264 | 23 | 23
9 | Moed, H.F., & Van Leeuwen, T.N. (1995). Improving the accuracy of Institute for Scientific Information's journal impact factors. Journal of the American Society for Information Science, 46(6), 461–467 | 1995 | 138 | 24 | 24
10 | Moed, H.F., & Van Leeuwen, T.N. (1996). Impact factors can mislead. Nature, 381(6579), 186–186 | 1996 | 126 | 23 | 23
11 | Moed, H.F., Van Leeuwen, T.N., & Reedijk, J. (1996). A critical analysis of the journal impact factors of Angewandte Chemie and the Journal of the American Chemical Society - Inaccuracies in published impact factors based on overall citations only. Scientometrics, 37(1), 105–116 | 1996 | 54 | 20 | 20
12 | Moed, H.F., Van Leeuwen, T.N., & Reedijk, J. (1998). A new classification system to describe the ageing of scientific journals and their impact factors. Journal of Documentation, 54(4), 387–419 | 1998 | 52 | 18 | 18
13 | Noyons, E.C.M., Moed, H.F., & Luwel, M. (1999). Combining mapping and citation analysis for evaluative bibliometric purposes: a bibliometric study. Journal of the Association for Information Science & Technology, 50(2), 115–131 | 1999 | 90 | 20 | 20
14 | Moed, H.F., Van Leeuwen, T.N., & Reedijk, J. (1999). Towards appropriate indicators of journal impact. Scientometrics, 46(3), 575–589 | 1999 | 57 | 17 | 16
15 | Moed, H.F. (2000). Bibliometric indicators reflect publication and management strategies. Scientometrics, 47(2), 323–346 | 2000 | 54 | 18 | 18
16 | Van Leeuwen, T.N., Moed, H.F., Tijssen, R.J.W., Visser, M.S., & Van Raan, A.F.J. (2001). Language biases in the coverage of the Science Citation Index and its consequences for international comparisons of national research performance. Scientometrics, 51(1), 335–346 | 2001 | 184 | 17 | 17
17 | Glänzel, W., & Moed, H.F. (2002). Journal impact measures in bibliometric research. Scientometrics, 53(2), 171–193 | 2002 | 240 | 17 | 17
18 | Moed, H.F. (2002). The impact-factors debate: the ISI's uses and limits. Nature, 415(6873), 731–732 | 2002 | 147 | 17 | 17
19 | Moed, H.F. (2002). Measuring China's research performance using the Science Citation Index. Scientometrics, 53(3), 281–296 | 2002 | 86 | 16 | 16
20 | Van Leeuwen, T.N., & Moed, H.F. (2002). Development and application of journal impact measures in the Dutch science system. Scientometrics, 53(2), 249–266 | 2002 | 46 | 15 | 14
21 | Moed, H.F., Luwel, M., & Nederhof, A.J. (2002). Towards research performance in the humanities. Library Trends, 50(3), 498–520 | 2002 | 41 | 15 | 14
22 | Van Leeuwen, T.N., Visser, M.S., Moed, H.F., Nederhof, T.J., & Van Raan, A.F.J. (2003). The holy grail of science policy: exploring and combining bibliometric tools in search of scientific excellence. Scientometrics, 57(2), 257–280 | 2003 | 89 | 15 | 15
23 | Moed, H.F. (2005). Citation analysis in research evaluation. Dordrecht, Netherlands: Springer | 2005 | 176 | 13 | 13
24 | Moed, H.F. (2005). Statistical relationships between downloads and citations at the level of individual documents within a single journal. Journal of the American Society for Information Science & Technology, 56(10), 1088–1097 | 2005 | 65 | 14 | 14
25 | Moed, H.F. (2007). The effect of "Open Access" on citation impact: An analysis of ArXiv's condensed matter section. Journal of the American Society for Information Science & Technology, 58(13), 2047–2054 | 2007 | 77 | 11 | 11
26 | Moed, H.F. (2008). UK research assessment exercises: Informed judgments on research quality or quantity? Scientometrics, 74(1), 153–161 | 2008 | 73 | 11 | 11
27 | Moed, H.F. (2009). New developments in the use of citation analysis in research evaluation. Archivum Immunologiae et Therapiae Experimentalis, 57(1), 13–18 | 2009 | 117 | 11 | 11
28 | Moed, H.F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4(3), 265–277 | 2010 | 239 | 10 | 10
Note: The publications shown in bold type in the printed table are important works that Moed himself also cited
The total citations to these 28 papers reached 3,169. In other words, the 28 influential papers represent merely 17.2% of Moed's total papers but account for 65.2% of the total citations he accrued. In addition, the five publications of his own that he often cited are all influential papers cited by other scholars. We can also observe that No. 28 CR is one of his two highly cited papers indexed in WoSCC; his other highly cited paper, published in 2017 (Suitability of Google Scholar as a source of scientific information and as a source of data for scientific evaluation: Review of the literature), was not ranked among the 28 influential papers, probably because of its limited number of citing years (N_PYEARS = 2).

Fig. 4 Co-citation network of Prof. Moed's influential papers. Note Normalization method: Fractionalization; Co-citation frequency ≥3

Figure 4 shows that Moed's significant contributions to bibliometrics and informetrics centre on the following five themes: (1) indicators of research performance;
(2) journal citation measures; (3) theoretical understanding and proper use of bibliometric indicators in research evaluation; (4) monitoring scientific development; and (5) alternative types of indicators of research performance.

The first group in Fig. 4 includes six papers (No. 1, No. 7, No. 8, No. 16, No. 21, and No. 23 CRs in Table 3) related to bibliometric indicators for research assessment at the discipline, institution, and country levels. Moed proposed the idea of weighting publications according to type and publication channel, which addresses research performance monitoring in the social sciences and the humanities using citation analysis (No. 21 CR). He also found that language biases play an important role in the comparison and evaluation of national science systems (No. 16 CR). The potentialities and limitations of the use of bibliometric indicators in research evaluation were well illustrated in his book (No. 23 CR). Some of his articles in this field were frequently cited by such distinguished scientometricians as Loet Leydesdorff (32 papers), Thed N. van Leeuwen (31 papers) and A.F.J. van Raan (28 papers). Furthermore, he considered scientific collaboration and migration to be important phenomena that can be properly studied with bibliometric and informetric methods; his paper on international scientific co-operation (No. 7 CR) is frequently cited in the field of international research collaboration.

The second group in Fig. 4 relates to journal citation measures. Ten influential papers authored by Moed fall into this group (No. 2, No. 9, No. 10, No. 11, No. 12, No. 14, No. 17, No. 18, No. 20, and No. 28 CRs in Table 3). Journal impact factors and related citation measures are designed to assess the significance and performance of scientific journals. Since the 1990s, Moed has discussed the potentialities and limitations of the journal impact factor. His works in collaboration with Wolfgang Glänzel, Thed N. van Leeuwen and Jan Reedijk revealed methodological flaws of the journal impact factor and provided a new classification system to describe the ageing of scientific journals and their impact factors (No. 9, No. 10, No. 11, No. 12, No. 14, No. 17 and No. 20 CRs). He also developed a new indicator of journal impact (No. 28 CR), the Source Normalized Impact per Paper (SNIP). Unlike the journal impact factor, SNIP corrects for differences in citation practices between scientific fields, thereby allowing more accurate between-field comparisons of citation impact. His works on improving the accuracy of journal impact factors also caught the attention of many scientometricians, such as Loet Leydesdorff (34 papers), Lutz Bornmann (27 papers), and Giovanni Abramo (26 papers).

The third group in Fig. 4 is adjacent to the first group and contains six influential papers related to the theoretical understanding and proper use of bibliometric indicators in research evaluation (No. 3, No. 15, No. 19, No. 22, No. 26, and No. 27 in Table 3). Moed was among the first to point out possible inaccuracies in citation analysis, including how the very process of gathering and processing so much information can lead to errors (No. 3 CR). He argued that general differences in rankings of publications and citations cannot be interpreted only in terms of the quality or significance of research (No. 15 CR), and found that bibliometric assessments had been shown to contribute to both positive and negative cultural changes in the publishing activities of individuals (No. 26 CR).
Furthermore, he was alert to China’s advance in certain
indicators such as publications (No. 19 CR). He recommended the combined use of several indicators that give information on different aspects of scientific output (No. 22 CR). Some of his reflexive views on appropriate ways to use bibliometric indicators in research evaluation were quoted and discussed by other scholars, such as Rodrigo Costas (11 papers), Loet Leydesdorff (10 papers), and Maria Bordons (10 papers).

The fourth group in Fig. 4 includes four papers (No. 4, No. 5, No. 6, and No. 13 CRs in Table 3) first-authored by Moed's colleagues. As early as 1990, Moed and his coauthors conducted patent citation analyses to explore the contributions of science to the development of technology, and revealed the importance of foundational scientific knowledge in patent citations in the fields of chemistry and electronics (No. 4 CR). They also discussed the bibliometric mapping methodology, which is essential for monitoring scientific developments (No. 5 and No. 6 CRs), and further integrated science mapping and research performance analysis to improve the quality of existing evaluative indicators (No. 13 CR). The above-mentioned papers are frequently cited by Wolfgang Glänzel (19 papers), Chaomei Chen (10 papers), and Enrique Herrera-Viedma (10 papers), three scholars who have all contributed much to bibliometric mapping techniques and their applications.

The last (but not least) group in Fig. 4 concerns alternative types of indicators of research performance, comprising two papers (No. 24 and No. 25 CRs). Moed explored usage-based indicators, such as the number of times a full-text article is downloaded (No. 24 CR). He also examined the effect of Open Access (OA) on citation impact and confirmed the citation advantage of OA articles (No. 25 CR), an approach that extends naturally from citations to article views and social media attention. Many scholars in the fields of webometrics, altmetrics, and big data research, such as Mike Thelwall (11 papers), Juan Gorraiz (9 papers), and Kayvan Kousha (8 papers), cited the above-mentioned papers.
Discussion

Before we turn to our conclusions, let us first critically discuss the usefulness of RPYS as a method for tracing the intellectual transfer paths of scientists and identifying the influential publications showcasing their academic origin and impact. The strategy of RPYS strongly depends on the size of the publication and reference dataset to be analysed and on the specific focus of the analysis, from early to more recent works (Thor et al., 2016). Our study shows that RPYS is suitable for identifying the external characteristics of the intellectual transfer path of scientists by calculating some bibliometric measures, such as citation numbers or citation duration. However, it is just a first step: if we want to know which knowledge entity (e.g. concept, idea, method, or tool) is transferred and how a knowledge entity is cited, the characteristics of the citation content in the full-text articles should be examined. Therefore, we propose to combine RPYS and content-based citation analysis (Ding
et al., 2014; Zhang, Ding, & Milojevic, 2013) in the study of the intellectual transfer paths of scientists. The two methods are complementary. We also observe that some papers published in recent years fail to be identified as Moed's influential works because of their short citation window (low N_PYEARS). For example, in recent years, focusing on the future of bibliometrics and informetrics, Moed published several articles on the development of informetric indicators and their application in research assessment processes. In 2017, his monograph Applied Evaluative Informetrics was published by Springer; it is one of the few valuable textbooks in the field of evaluative informetrics. Thus, this study could be improved by combining co-citation analysis, content-based citation analysis and RPYS into an integrated analytical method that gives more insight into Prof. Moed's academic contributions.
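To make the basic mechanics of RPYS concrete, the following is a minimal sketch, not the CRExplorer implementation described by Thor et al. (2016): cited references are counted per reference publication year, and each year's count is compared with the median of its five-year neighbourhood, so that peak years pointing to frequently cited early works stand out. The function name, the deviation rule and the toy data are illustrative assumptions.

```python
from collections import Counter
from statistics import median

def rpys_spectrum(cited_reference_years):
    """Basic Reference Publication Year Spectroscopy.

    cited_reference_years: iterable with the publication year of every cited
    reference harvested from the analysed paper set.
    Returns (year, n_references, deviation_from_5yr_median) tuples; pronounced
    positive deviations mark candidate 'peak' years.
    """
    counts = Counter(cited_reference_years)
    spectrum = []
    for year in sorted(counts):
        # five-year window centred on the current reference publication year
        window = [counts.get(y, 0) for y in range(year - 2, year + 3)]
        spectrum.append((year, counts[year], counts[year] - median(window)))
    return spectrum

# toy usage with invented reference years
for year, n, dev in rpys_spectrum([1963, 1963, 1972, 1972, 1972, 1980, 1981, 1995]):
    print(year, n, dev)
```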
Conclusion

The intellectual transfer of Prof. Henk F. Moed's works was traced using RPYS. We identified the 12 important publications Moed most often cited and his 28 influential papers that other scholars had most frequently cited. The findings from our research show that, without a doubt, Moed has been one of the most influential contemporary scientists in the field of bibliometrics and informetrics. Inspired by Garfield's notion of a field's "citation potential", he proposed SNIP to improve the accuracy of journal impact factors. He also studied in depth the empirical and theoretical aspects of citation analysis and its use in research evaluation. His research bridges the gap between the complex methodology and the prerequisites of research evaluation, serving as a basis for members of the research community and for persons involved in research evaluation and research policy. Furthermore, he dealt not only with traditional bibliometric indicators based on publication and citation counts, but also with emerging altmetrics and usage-based metrics. He also offers a perspective on the future development of informetric indicators and their application in research assessment processes. This study also provides some methodological insights. Citation analysis plays an important role in studying scientists' intellectual transfer, especially through RPYS, which not only clearly traces the academic career paths of scientists, but also gives strong bibliographic support for research in the history of science. In future work, we will combine RPYS with citation content analysis (CCA) and co-citation analysis to provide a more fine-grained depiction of the intellectual transfer paths of other distinguished scientists.

Acknowledgments This work was financially supported by the MOE (Ministry of Education in China) Project of Humanities and Social Sciences (18YJC870027).
References

Barth, A., Marx, W., Bornmann, L., & Mutz, R. (2014). On the origins and the historical roots of the Higgs boson research from a bibliometric perspective. European Physical Journal Plus, 129(6), 1–13.
Baumgartner, S. E., & Leydesdorff, L. (2014). Group-based trajectory modeling (GBTM) of citations in scholarly literature: Dynamic qualities of "transient" and "sticky knowledge claims". Journal of the Association for Information Science and Technology, 65(4), 797–811.
Bornmann, L., Haunschild, R., & Leydesdorff, L. (2018). Reference publication year spectroscopy (RPYS) of Eugene Garfield's publications. Scientometrics, 114(2), 439–448.
Comins, J. A., & Hussey, T. W. (2015). Detecting seminal research contributions to the development and use of the global positioning system by reference publication year spectroscopy. Scientometrics, 104(2), 575–580.
Ding, Y., Zhang, G., Chambers, T., Song, M., Wang, X. L., & Zhai, C. X. (2014). Content-based citation analysis: The next generation of citation analysis. Journal of the Association for Information Science and Technology, 65(9), 1820–1833.
Garfield, E., Pudovkin, A. I., & Istomin, V. S. (2003). Why do we need algorithmic historiography? Journal of the American Society for Information Science and Technology, 54(5), 400–412.
Hou, J. H. (2017). Exploration into the evolution and historical roots of citation analysis by reference publication year spectroscopy. Scientometrics, 110(3), 1437–1452.
Leydesdorff, L., Bornmann, L., Marx, W., & Milojevic, S. (2013). Reference publication year spectroscopy applied to iMetrics: Scientometrics, Journal of Informetrics, and a relevant subset of JASIST. Journal of Informetrics, 8(1), 162–174.
Martin, B. R., & Irvine, J. (1983). Assessing basic research: Some partial indicators of scientific progress in radio astronomy. Research Policy, 12(2), 61–90.
Marx, W., Bornmann, L., Barth, A., & Leydesdorff, L. (2014). Detecting the historical roots of research fields by Reference Publication Year Spectroscopy (RPYS). Journal of the Association for Information Science and Technology, 65(4), 751–764.
Marx, W., Haunschild, R., Thor, A., & Bornmann, L. (2017). Which early works are cited most frequently in climate change research literature? A bibliometric approach based on reference publication year spectroscopy. Scientometrics, 110(1), 335–353.
Price, D. J. de Solla. (1965). Little science, big science. New York: Columbia University Press. ISBN 0-2310-8562-1.
Rhaiem, M., & Bornmann, L. (2018). Reference Publication Year Spectroscopy (RPYS) with publications in the area of academic efficiency studies: What are the historical roots of this research topic? Applied Economics, 50(3), 1442–1453.
Schubert, A., Glänzel, W., & Braun, T. (1989). Scientometric data files. A comprehensive set of indicators on 2649 journals and 96 countries in all major science fields and subfields 1981–1985. Scientometrics, 16(1), 3–478.
Shang, H. R., Feng, C. G., & Sun, L. (2016). Evaluation of academic papers with academic influence: Proposing two new indicators of academic inheritance effect and long-term citation. Chinese Science Bulletin, 61(26), 2853–2860.
Thor, A., Marx, W., Leydesdorff, L., & Bornmann, L. (2016). Introducing CitedReferencesExplorer (CRExplorer): A program for reference publication year spectroscopy with cited references standardization. Journal of Informetrics, 10(2), 503–515.
Yeung, A. W. K., & Wong, N. S. M. (2019). The historical roots of visual analog scale in psychology as revealed by reference publication year spectroscopy. Frontiers in Human Neuroscience, 13, 1–5.
Zhang, G., Ding, Y., & Milojevic, S. (2013). Citation content analysis (CCA): A framework for syntactic and semantic analysis of citation content. Journal of the American Society for Information Science and Technology, 64(7), 1490–1503.
Zhao, Y., & Wu, Y. S. (2017). Tracing Origins: A study on the characteristics of important literature cited by a famous scientometrician (in Chinese). Journal of the China Society for Scientific and Technical Information, 36(11), 1099–1107.
Delineating Organizations at CWTS—A Story of Many Pathways Clara Calero-Medina, Ed Noyons, Martijn Visser, and Renger De Bruin
The Origins of Address Analysis: The Journal Profiles

The tradition of processing address data in the bibliometric research of the CWTS group goes back to the mid-1980s. Although Henk Moed had pioneered the use of data from the address field of the Science Citation Index early in the 1980s, the work started in earnest with an experimental project in 1987. With his Ph.D. supervisor, Prof. Pierre Vinken (1927–2012), Henk developed a bibliometric marketing tool for scientific journals. Vinken, who was chairman of Elsevier Publishers, thought that this instrument, the so-called journal profiles, would provide crucial information for his scientific branch Elsevier Science Publishers (ESP). ESP commissioned the LISBON Institute of Leiden University, the predecessor of CWTS, to develop a prototype. The journal profiles promised to reveal patterns in the journals, which could provide editorial boards and publishers with crucial information for decision making, more detailed than the impact factor of the Science Citation Index (SCI). Analyses could be made with respect to most cited and most publishing authors and institutes. The data in the journal profiles made it possible to organize the recruitment of authors and board members, as well as the selling of subscriptions, much more efficiently. To achieve this, the author and address fields in the SCI publication records provided the essential information. Collecting the data was not an easy game in those days. Calling into the database of
the Deutsches Institut für Medizinische Dokumentation und Information (DIMDI) in Cologne via a beeping modem created a connection that was highly vulnerable to disturbances. After processing the publication data at the central computer center of Leiden University, citation commands were uploaded to DIMDI to establish the citation score of each article. The first results looked promising. However, Henk Moed was aware of serious flaws in the product. He observed inaccuracies in citation analysis, due to problems in the data acquired from the SCI (Moed & Vriens, 1989). One of these was the variation in the spelling and initials in the author name field. More serious was the variation in the address field. The names of publishing institutes showed a wide range of designations. Leiden University, for instance, was also listed as Leyden Univ, University of Leiden/of Leyden, Rijksuniversiteit Leiden, Rijksuniv Leiden, State Univ Leiden/Leyden or Leiden/Leyden State Univ. Perhaps most problematic were universities in Paris. These have both names and numbers. The Université Pierre et Marie Curie, for instance, is Paris 6. The use of French and English, different abbreviations and Roman or Arabic numerals resulted in many variations, just for one university. The problem was further aggravated by the habit of naming only the department, hospital or laboratory in the address of the publication, leaving out the university. In our data, we found this phenomenon relatively often in publications by scientists working in university hospitals. We found variations not only in institutional names but also in city and even in country names. These differences were partly due to the use of English or the local vernacular, but also to the fact that universities in big cities are often located in autonomous suburbs. Paris was again the biggest problem. Universities were spread over Paris itself and many 'villes' outside the 'périphérique', but inside the 'région parisienne' and counting as one of the Paris universities. The vast number of variations scattered the scores, and consequently made the rankings of most publishing and most cited institutions in our journal profiles unreliable, hampering the principal goals: recruiting authors and board members and selling subscriptions. For ESP, a reliable assessment of key institutions in a specific field of science was strategic information. The solution was to create a database streamlining the variations in the address fields of the SCI records in order to assign all publication and citation scores of a specific institution to that institution under a single standardized name, in the right hierarchical order. We called this the unification process (De Bruin & Moed, 1990). The geographical variations also had to be corrected. To create a clear picture of the performance of universities and research institutes in the French capital, all the suburbs were brought under the common denominator Paris. The raw data consisted of the collected publication records from the SCI. The address fields were divided into the institutional, city and country parts. The variations were listed. Some were easy to solve: Leyden and Leiden, or Univ Paris 06, Paris 6 and Paris VI. To establish the identification of the Université Pierre et Marie Curie as University Paris 6, additional information was needed.
In the pre-internet era that meant checking the printed data against university address guides such as The World of Learning and the World List of Universities/Liste Mondiale des Universités in the reading room of the University Library of Leiden. The results of this literature search were typed into the
database and processed. This processing created a Masterfile that was used to match new raw data. The results showed whether or not addresses were recognized by the Masterfile, and whether a recognition had been confirmed by research in the address guides. In the development of the unification procedure, Renger de Bruin, who joined the project for the data collection, used his experience as an historian. In a way, the chaotic picture of academic addresses resembled what he had encountered when writing a collective biography of the Utrecht city government of 1795–1813 (De Bruin, 1986). The result of the unification process was a considerable reduction in scattering, making the rankings in the journal profiles much more reliable. For ESP, the product met the expectations, to say the least, and the publisher ordered more and more journal profiles, not only of its own journals but also of its competitors'. In 1991, journal profiles of a set of Pergamon publications played a role in the ESP takeover of this house, then owned by media tycoon Robert Maxwell (1923–1991). During the period 1987–1991 the process of downloading publication data, uploading citation commands and processing the results was improved and speeded up considerably. The unification of addresses in scientific publications was presented to colleagues in the field at the Second International Conference on Bibliometrics, Scientometrics and Informetrics in London, Ontario, Canada, in July 1989. The paper was published in the proceedings (De Bruin & Moed, 1990).
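As a rough illustration of the unification idea, the sketch below mimics a hand-curated "Masterfile" lookup: a raw SCI institution string is normalized, matched against known variants, and flagged for manual checking (against printed address guides) when unrecognized. The variant strings and the dictionary structure are assumptions made for the example, not the original LISBON/CWTS code.

```python
# Minimal sketch of address unification against a curated "Masterfile".
MASTERFILE = {
    # normalized variant -> unified institution name (illustrative entries)
    "leiden univ": "Leiden University",
    "leyden univ": "Leiden University",
    "rijksuniv leiden": "Leiden University",
    "state univ leiden": "Leiden University",
    "univ paris 06": "Université Pierre et Marie Curie (Paris 6)",
    "paris vi": "Université Pierre et Marie Curie (Paris 6)",
}

def unify(raw_institution):
    """Return (unified_name, recognized) for a raw SCI address string."""
    key = " ".join(raw_institution.lower().replace(",", " ").split())
    if key in MASTERFILE:
        return MASTERFILE[key], True
    # unrecognized variants go to a manual queue (address guides, etc.)
    return raw_institution, False

for addr in ["Leyden Univ", "Univ Paris 06", "Addenbrookes Hosp"]:
    print(addr, "->", unify(addr))
```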
New Applications of Address Analysis

While the journal profiles continued to be an important source of income for the group and the unification database grew with every new commission, the Leiden group, renamed CWTS, found new applications for the use of the address field in SCI records. One of these was the delimitation of scientific subfields using terms in corporate addresses in scientific publications. The research focus was often visible in the department, laboratory and section names of the institutions. The lower in the hierarchy, the more specific the field indication, e.g. University of Leiden, Faculty of Medicine, Department of Immunology. Words in the address field of the SCI records indicating a scientific discipline were characterized as 'cognitive address words', in contrast to organizational or postal words (De Bruin & Moed, 1993). The use of cognitive address words offered a new possibility for visualizing the structures of science, alongside co-citation analysis, co-word analysis, indexing systems based on controlled or non-controlled keywords, and classifications of scientific subfields or categories. As sources, databases like Excerpta Medica and Physics Briefs had been used. In processing the data from these sources, mapping techniques were developed at CWTS (Braam, 1991). These techniques were also applied to the analysis of the collected cognitive address words. The method based on cognitive address words was presented at a conference in Potsdam in April 1991 and published in Scientometrics two years later (De Bruin & Moed, 1993). Address data played a crucial role in a vast project evaluating the research performance of three Belgian universities: Ghent, Leuven and Antwerp. The project started
with the University of Ghent, followed by Leuven and Antwerp. 'Input data' with respect to the academic staff were provided by the three universities. In matching these data with the SCI, the address field was important, but to avoid mistakes, everything was checked both by the CWTS researchers and by the scientists involved in Belgium. The citation scores were compared to the world average and to the average impact of the set of journals in which the scientists of the three universities published. The results served university policy makers as tools for research planning and helped the scientists involved to improve their publication strategies (De Bruin et al., 1993; Moed et al., 1997; Van den Berghe et al., 1998). The analysis of corporate addresses in SCI records played an important role in other commissioned evaluation studies, for the Flemish government and for the European Commission. Directorate General XII (Science, Research and Development) of the EC asked CWTS to assess whether European funding had led to increased co-operation between scientists from EU Member States in the field of agricultural sciences. Here, the country indication in the SCI records was crucial for establishing trans-national publications, in the sense of publications written by authors from different countries. For the years 1979, 1982, 1985 and 1988, SCI data were collected for a set of articles in agricultural scientific journals, published by authors from European countries that were already members of the European Union (which was called the European Communities in those days) in 1979. The number of articles processed was between 2,500 and 3,000 per investigated year. The main search was for international co-authorships. Methodologically, the agricultural study linked up with co-authorship investigations by bibliometric pioneer Donald de B. Beaver and work on the global network of science by the Pittsburgh sociologist Thomas Schott (Beaver & Rosen, 1978; Schott, 1988). The EU official for the project, the British scientist Grant Lewison, had applied bibliometric data on co-authorship to evaluate research programs of the European Commission (Lewison & Cunningham, 1989). The result of the search did not show a spectacular success for the EU funding in fostering inter-European scientific co-operation: the share of EU-EU co-authorships within the whole set of trans-national publications rose from 12.2% in 1979 to 14.3% in 1985, with the same result for 1988 (Moed & De Bruin, 1990). Stronger than co-operation within the EU were long-term cultural ties, such as those between Germany and Austria (in the 1980s still outside the EU) and between Denmark and the other Northern European countries. Strong ties between EU member states and their former colonies were also observed. The latter observation encouraged a wider investigation of neo-colonial ties in the global network of science (Nagtegaal & De Bruin, 1994). Data were collected from the online version of the SCI for the period 1974–1990, covering all fields of science. Only numbers of publications were established, using the country element in the address fields. The main conclusion was that particularly francophone countries in Africa were highly dependent on the former mother country for their visibility in the SCI. The same was true for a country like Surinam.
On the other hand, for India, co-publications with the United Kingdom played a limited role: 1% of its total output was co-publication with authors from the former colonizing country against 38% for Gabon and 49% for Surinam. Already in the 1980s, India had such
a vast scientific infrastructure that there was no need for dependence. However, Malaysia and Ghana (9% and 11%), too, had far weaker ties with the old colonizer than the former French colonies. Part of the explanation is that France kept much more control over its former colonies, at least in Africa, than the British did, including frequent military interventions. Another factor is that the SCI had an 'Anglo-Saxon bias': publications in English were overrepresented in an era in which English was still not the generally accepted publication language (Arvanitis & Chatelin, 1988). Moreover, the coverage of Third World countries was limited in the 1970s and 1980s (Moravcsik, 1988). Consequently, scientists in francophone countries were far more dependent on the former mother countries for access to SCI-covered journals than their colleagues in English-speaking countries. The conclusion that long-term political and cultural patterns influenced international scientific co-operation demanded further investigation. An intriguing question came up when the Iraqi dictator Saddam Hussein invaded Kuwait in August 1990. The CWTS address duo collected and processed publication data from countries in the Gulf region between 1980 and 1989, later extended with an online search for publication numbers for the period 1974–1979. Address analysis was combined with other techniques like co-word and co-citation analysis. The results were striking. The isolation of Iran after the Islamic Revolution in 1979 was visible in a dramatic decrease of publications in SCI-covered journals and in co-publications with Western colleagues. Western attention shifted towards Kuwait, Saudi Arabia and Jordan. The political orientation of Iraq and Syria during the 1970s and 1980s was mirrored in co-publications with the USSR and other Warsaw Pact countries. In this co-operation, nuclear research played a considerable role, for instance between the I.V. Kurchatov Institute for Atomic Energy in Moscow and the Nuclear Research Center in Baghdad. From the co-word analysis, it was clear that both Iraq and Iran showed an extraordinary interest in nuclear and chemical research. The results were published in Nature, under the title Bibliometric lines in the sand (De Bruin, Braam, & Moed, 1991). Since the Gulf War was still raging, the article drew wide attention from the media. Within four years, CWTS address analysis had developed from the solution to a problem in the making of journal profiles into a useful tool in various areas of bibliometric research. After reading the Nature article, SCI founder Eugene Garfield saw a new intelligence application for his brainchild (Garfield, 1992). At the end of the decade, Grant Lewison carried out a similar investigation on former Yugoslavia (Lewison, 1999).
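The country-level measures used in the agricultural and neo-colonial studies come down to counting country combinations extracted from the address fields. The sketch below shows one possible operationalization, assuming each paper is represented by the set of countries in its addresses and using the 1979 EC-9 as the member list; it is an illustration, not the original analysis code.

```python
def eu_coauthorship_share(papers, eu_members):
    """papers: list of sets of country names taken from each paper's addresses.
    Returns the share of trans-national papers (two or more countries) whose
    countries all belong to the EU member set."""
    transnational = [c for c in papers if len(c) >= 2]
    if not transnational:
        return 0.0
    eu_eu = [c for c in transnational if c <= eu_members]
    return len(eu_eu) / len(transnational)

ec9_1979 = {"Belgium", "Denmark", "France", "West Germany", "Ireland",
            "Italy", "Luxembourg", "Netherlands", "United Kingdom"}
sample = [{"France", "West Germany"}, {"France", "Gabon"},
          {"Netherlands"}, {"United Kingdom", "USA"}]
print(eu_coauthorship_share(sample, ec9_1979))  # 1 of 3 trans-national papers
```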
Application of Affiliation Data: Rankings of Universities

The developments at CWTS in the nineteen-eighties described in the previous section were promising, but were limited by the available data and technology. As mentioned, the data underlying the studies and tools had to be collected via an online service before being unified and analyzed locally. Within that context, the studies based on author affiliation data were usually limited to country-level analyses. The first studies
using bibliographic data were based on data from the Science Citation Index (SCI). The initiative of Eugene Garfield to develop the Science Citation Index was primarily meant to support research and literature review, in other words bibliographic use or information retrieval. As such, Garfield's ideas were key in the development of information retrieval in general and of search engines like Google and Google Scholar in particular (Cantu-Ortiz, 2017).

In this paper I propose a bibliographic system for science literature that can eliminate the uncritical citation of fraudulent, incomplete, or obsolete data by making it possible for the conscientious scholar to be aware of criticisms of earlier papers. It is too much to expect a research worker to spend an inordinate amount of time searching for the bibliographic descendants of antecedent papers. It would not be excessive to demand that the thorough scholar check all papers that have cited or criticized such paper, if they could be located quickly. The citation index makes this check practicable. (Garfield, 1955)
By the time the SCI was actually established, the evaluative use of such data sources had also been noticed. Already in the mid-seventies, comprehensive studies were developed to evaluate research output (Narin, 1976), and the term evaluative bibliometrics was coined. This extensive and comprehensive study took countries as actors and primarily dealt with output indicators in an elaborate way. Fields of science were defined by journal categories; countries were defined by author affiliations. Since then, more studies involving country-based analyses have been conducted, but comprehensive ones like this were not published very often. Another good example of a rigorous study was published almost ten years later (Schubert, Zsindely, & Braun, 1985). That study deals only with medical research (output and impact), with a specific focus on mid-sized countries. It is one of the first comprehensive studies including citation-based impact analyses, including field normalization. Another ten years later, an extensive approach (and sample study) was published (Moed, De Bruin, & Van Leeuwen, 1995) in which individual institutes (anonymized universities) were included. During the nineteen-nineties, CWTS started to develop its in-house version of the Citation Indexes, under the supervision of Henk Moed. This created a wide variety of new opportunities. In the previous section the journal profiles were already mentioned as a key report/service. Such advanced types of analyses require high-quality data. At that time the infrastructure of CWTS was set up in such a way that the process of address cleaning/unification could be executed structurally and more efficiently. Because of the advanced processes of data cleaning and affiliation unification, an interesting opportunity emerged and was picked up at CWTS: a ranking of universities using bibliometric data (van Raan, 2005). Output and impact using state-of-the-art methodology could be established and updated (Waltman et al., 2012). Meanwhile, similar initiatives were developed elsewhere (e.g., the ARWU or Shanghai ranking of universities, the Scimago institution ranking, the Times Higher Education ranking). Initially, these were also based on SCI, later Web of Science data, and suffered from the same issue of uncleaned author affiliation data. A straightforward counting of publications in Web of Science over the years, using a search strategy relating to "universities" on the one hand and "countries" on the other, combined with "comparison" or "ranking" and "bibliometrics", shows that
[Figure 1: yearly numbers of publications, 1991–2018, retrieved with two Web of Science search queries: ("country" near (compari* or rank*)) and "bibliometr*", and ("universit*" near (compari* or rank*)) and "bibliometr*".]
Fig. 1 Trend of publications related to comparison or ranking of actors (universities or countries) 1991–2018
both follow a similar trend over the years, but that the studies at the level of countries are somewhat more mature. In fact, the few papers at the level of universities before 2005 are slightly off-topic, if not false positives (Fig. 1). Regarding the analyses comparing universities, the above shows that the evolution of university rankings has not led to a steep increase of publications on that topic in relation to bibliometrics. The trend lags behind that of similar studies on countries and does not seem to increase that much. This may be caused by a lack of interest in developing better indicators or by the fact that most of these university rankings have their own platform for dissemination and communication. We also know that the emergence of university rankings has led to increased attention to the correct identification of author affiliations in publication data. Initiatives to be discussed in the remainder of this paper show that. We will also show the complexity of the proper identification of organizations, which is key to assessing their bibliometric performance.
CWTS Register of Organizations

Identifying organizations that publish scientific publications involves a number of challenges. The main one is that academic and research systems, together with the organizations that integrate them, are different throughout the world and are
dynamic. The resulting reorganizations pose a challenge when it comes to systematically identifying, defining and delimiting the types of organizations. CWTS started to carry out this enormous task a couple of decades ago, as already explained. Most of the organizations covered at that time were universities and university hospitals with a large number of publications in the Web of Science, mainly located in the United States and European countries. However, due to the expansion of the scope of CWTS, the launch of the Leiden Ranking, the involvement in U-Multirank, European projects such as RISIS1 (Research Infrastructure for Research and Innovation Policy Studies) or Knowmak2 (Knowledge in the making in the European society), and the increasing complexity of academic and research systems worldwide, many other types of organizations are currently identified. More than twelve types of organizations can now be found in the in-house organization database created at CWTS using the metadata that appears in the addresses and funding acknowledgments of the publications. Not only universities and university hospitals have been identified, but also university campuses, research organizations, teaching organizations, teaching organization campuses, hospitals, hospital groups, government agencies, funding bodies, funding channels, museums… One of the main issues when assigning a type to an organization is to identify the core activity of the organization. For instance, sometimes one organization can be both a hospital and a research center, or certain institutions can be assigned both as government agencies and as funding organizations. If the core activity is not clearly defined, the institution will be assigned twice. One of the challenges that we have faced in recent years has been the construction of a register of funding organizations based on the funding acknowledgement data from the Web of Science (WoS). A large amount of the FA data provided by WoS has been mapped to thesaurus entries, including funding organizations (Wellcome Trust, Bill & Melinda Gates Foundation, Deutsche Forschungsgemeinschaft), funding schemes (Cancer Center Support Grants, Horizon 2020), and organizations mentioned in the FA that are not primarily funding-oriented (e.g. universities, research institutions, etc.; van Honk, Calero-Medina, & Costas, 2016). Another challenge that we deal with is how to identify the relationship between organizations. The biggest issue concerns hospitals and their relationship with universities. A very important question when classifying universities is whether to assign to them the publication output of affiliated hospitals. During the first decade of this century, we applied the following approach, developed by Henk Moed. In a first round, papers were selected with the name of a university (and its major departments) mentioned explicitly in the address. Name variations were taken into account. For instance, Ruprecht Karls University is a name variant of the University of Heidelberg, TUM of the Technical University München, and Université Paris 06 of Université Pierre et Marie Curie. For European universities, this round took into account all variations occurring five or more times. For non-European universities this threshold was set to 25.
1 RISIS is a Horizon 2020 project, https://www.risis2.eu/.
2 KNOWMAK is a Horizon 2020 project, https://www.knowmak.eu/.
In a second round, additional papers were selected from affiliated teaching hospitals on the basis of an author analysis. This round added, to a particular university's article output selected in the first round, papers from affiliated hospitals published by authors who did not explicitly mention this university's name in their institutional affiliation, but who showed strong collaboration links with that university, as its name appeared in the address lists of at least half of their papers. In this way, for instance, a part of the papers containing the address Addenbrookes Hospital was assigned to the University of Cambridge, and a part of the papers with the address Hospital La Pitié Salpetrière to the University of Paris VI, and another part to the University of Paris V (Calero-Medina et al., 2008).
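A rough sketch of the second-round rule described above, under assumed data structures (this is not the actual CWTS production code): a hospital paper is additionally assigned to a university when one of its authors lists that university in at least half of his or her own papers.

```python
def second_round_assignment(hospital_papers, addresses_by_author, university):
    """hospital_papers: list of (paper_id, author_ids) for papers carrying the
    affiliated hospital's address but not the university's own name.
    addresses_by_author: dict author_id -> list of address strings, one per paper.
    Returns the paper_ids additionally assigned to `university`."""
    assigned = []
    for paper_id, authors in hospital_papers:
        for author in authors:
            addresses = addresses_by_author.get(author, [])
            if not addresses:
                continue
            share = sum(university in a for a in addresses) / len(addresses)
            if share >= 0.5:  # strong collaboration link with the university
                assigned.append(paper_id)
                break
    return assigned
```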
Later on, and during the same period in which we officially created the CWTS register of organizations, we changed the approach to identifying publications from hospitals and their relationship with universities (Praal et al., 2013; Reyes-Elizondo, Calero-Medina, & Visser, 2019). We now work with a system that aligns the university-hospital relationship with one of three general models. The first one is the complete integration of the hospital and the medical school in a single organization. The second one relates to health science centers: in this case the hospital and the medical school remain separate entities, although under the same governance structure. Finally, we found constructions in which universities and hospitals are separate bodies that collaborate with each other. This classification provides a standard by which we can assign publications that record affiliations with academic hospitals. The three models mentioned above are summarized in two types of relationships for the assignment of publications: "associated" and "component". If a hospital and a medical school are fully integrated, or in cases where a hospital is part of a health science center, the relationship is considered a component. In the case of a hospital that follows the model of collaboration and support, the relationship is classified as associated.
Source: Reyes-Elizondo, Calero-Medina, and Visser (2019)
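The mapping of the three organizational models onto the two assignment relationships can be written down directly; a minimal sketch with assumed labels:

```python
def hospital_relationship(model):
    """Map a university-hospital model to the publication-assignment relationship."""
    if model in {"fully_integrated", "health_science_center"}:
        return "component"   # hospital output counts as part of the university
    if model == "collaboration_and_support":
        return "associated"  # output is linked to, but not merged into, the university
    raise ValueError(f"unknown model: {model}")

print(hospital_relationship("health_science_center"))  # component
```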
A Comparative Analysis of Organizational Registries

Nowadays, there are quite a few registers that provide persistent organizational identifiers, ranging from commercial products such as Ringgold to open registers such
Table 1 Overview of registers and bibliographic data sources

Register of organizations        Linked with:
                                 Web of Science   Scopus   Microsoft Academic   Dimensions
1. CWTS register                 X
2. Organizations Enhanced List   X
3. Affiliation Identifiers                        X
4. GRID                                                    X                    X
as ROR.3 In this section we compare the coverage of the CWTS register with three alternative registers of organizations. Among the registers of organizations, these are currently the most relevant ones for bibliometric researchers, because they are linked with affiliation data in large multidisciplinary bibliographic databases. The analysis presented here comprises the following combinations of registers and databases (Table 1). The GRID database published by Digital Science is the only register included in the analysis that is publicly available as an independent data source. The GRID register is used in the Dimensions database and in Microsoft Academic. The affiliations in Web of Science and Scopus are linked to internal registers that are part of the bibliographic database itself. The Organizations Enhanced List is provided within the Web of Science as a facility to support building search queries. Users are enabled to browse through the register to select the preferred organization name for their query. The Affiliation Identifiers facility in Scopus allows users to search for and select an organization, but a browsing option for the register as a whole is not available. The CWTS register of organizations is used internally and its contents are partially made public through derived data sources.4 To facilitate the comparison we created a customized set of over 6 million journal articles from 2015–2018 that were indexed by all four bibliographic databases and are classified in the Web of Science as articles or reviews.5 The bibliographic data sources and registers analyzed concern static versions that are part of the CWTS data system.

• Web of Science: the WoS version used, including the Organizations Enhanced List, was updated until the last week of 2018 and comprises data from the Science Citation Index Expanded, the Social Sciences Citation Index and the Arts & Humanities Citation Index.
3 For an overview of providers of organizational identifiers see Bilder et al. (2016).
4 For example through OrgReg as part of the RISIS project: https://www.risis2.eu/registers-orgreg/.
5 These 6,099,143 publications were identified as part of an ongoing research project at CWTS on matching publications from different bibliographic data sources. First results were published in Visser, Van Eck, and Waltman (2019).
• Scopus: we used a dump of the Scopus database, including the accompanying Affiliation Identifiers, that was delivered to us in April 2019 and relates to publications up to 2018.
• Dimensions: we received the Dimensions data used in this analysis in June 2019.
• Microsoft Academic: we used a dump of Microsoft Academic Graph data from March 2019 made available on Zenodo.6
• GRID: we downloaded the GRID database in May 2019.
• CWTS register of organizations: we used the version available in December 2019.

The four bibliographic data sources use different content selection policies, resulting in substantial differences in the number of publications covered. Limiting the publication set to publications indexed by all four bibliographic data sources allows us to focus on discrepancies between the assignments of publications to organizations in the registers. Figure 2 shows the percentage of affiliations that are linked to one or more organizations included in the different registers. A first important observation concerns the differences between the overall numbers of affiliations reported by the bibliographic databases for the same publications. Microsoft Academic indexes less than two thirds of the affiliations that are available in WoS and Scopus. It appears that the automated retrieval and parsing methodology used by Microsoft Academic fails to capture the affiliations of scientific papers indexed in the database to a substantial extent. We also observe a difference in the number of affiliations between WoS and Scopus, albeit a much smaller one. The primary reason for this discrepancy appears to be a difference
Fig. 2 Percentage of linked affiliations in bibliographic databases
6 Microsoft Academic. (2019, April 4). Microsoft Academic Graph (Version 2019-03-22). Zenodo. http://doi.org/10.5281/zenodo.2628216.
in the representation of affiliations between the two databases. When a single author affiliation line on a paper refers to more than one organization, Scopus represents the affiliation line in a literal way, resulting in a single author affiliation in the database, while in WoS the same author affiliation line may be split into multiple author affiliations. For Dimensions we are unable to determine the total number of affiliations, because the data file that was delivered to us only contained those affiliations that were actually linked to the GRID register. This does not mean that unlinked affiliations are not part of the Dimensions database. The online version of Dimensions also provides information regarding those unlinked affiliations, and for a small sample that we checked we did not find any indication that Dimensions suffers from the same problem as Microsoft Academic in capturing affiliation data. If we assume that the total number of affiliations available in Dimensions is the same as in Scopus, we can conclude that 62% of the affiliations available in Dimensions are linked to the GRID database. This is well below the more than 80% of affiliations in the Web of Science linked to either the Organizations Enhanced List or the CWTS register of organizations. It seems unlikely that this is due to the register itself, as the GRID database contains a much larger number of organizations than both the Organizations Enhanced List and the CWTS register of organizations. Microsoft Academic links almost 11 million affiliations to the GRID database, which is roughly the same number as the linked affiliations available in Dimensions. This means that Microsoft Academic links more than 80% of the total number of affiliations available in Microsoft Academic. However, the total number of affiliations available in Microsoft Academic is much lower than in Scopus and WoS. If we relate the number of linked affiliations to the total number of affiliations indicated in Scopus, as a more realistic estimate of the number actually mentioned in the papers, the percentage of linked affiliations drops to 61%. Almost all affiliations in Scopus are linked to an affiliation identifier. This is probably a reflection of a methodological choice by Scopus: a new entry is created in the register whenever an affiliation is encountered that cannot be assigned to an existing entry in the register. For that reason, the percentage of linked affiliations in the Affiliation Identifiers cannot be directly compared to the percentage of linked affiliations for the other registers. Figure 3 depicts the percentage of linked affiliations by country. Fifty-three countries, accounting for 92% of the WoS affiliations, have a percentage of linked affiliations ranging between 80% and 90%. On the other end of the spectrum there are more than 120 countries, accounting for only 7 per mille of the affiliations, for which less than 50% of the affiliations were linked with the Organizations Enhanced List. The remaining 49 countries have a percentage of linked affiliations between 50% and 80%. Relatively low percentages of linked affiliations are typically observed for countries with lower numbers of publications in the Web of Science. Research institutions located in these countries are not mentioned frequently enough in WoS affiliations to surpass a certain threshold required to include them in the Organizations Enhanced List. Of course, the Organizations Enhanced List is also a reflection of the customer base of Clarivate Analytics.
Fig. 3 Percentage of Web of Science affiliations linked to Organization Enhanced List by country
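The linkage rates shown in Figs. 2 and 3 reduce to simple ratios over the matched publication set. A hedged sketch of the per-country computation follows; the input format is an assumption, not the actual CWTS database schema.

```python
from collections import defaultdict

def linked_share_by_country(affiliations):
    """affiliations: iterable of (country, is_linked) pairs, one per author
    affiliation in the matched publication set.
    Returns {country: share of affiliations linked to the register}."""
    totals = defaultdict(int)
    linked = defaultdict(int)
    for country, is_linked in affiliations:
        totals[country] += 1
        linked[country] += bool(is_linked)
    return {country: linked[country] / totals[country] for country in totals}

sample = [("NL", True), ("NL", True), ("FR", True), ("FR", False), ("SR", False)]
print(linked_share_by_country(sample))  # {'NL': 1.0, 'FR': 0.5, 'SR': 0.0}
```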
In this section we analyzed the extent to which four different registers are linked to affiliations in four different bibliographic databases. The analysis shows some important consequences for the use of these databases and accompanying registers in bibliometric analyses. A first finding concerns the difference in the number of affiliations processed by the bibliographic databases. More than one third of the affiliations available in Scopus and WoS is missing from Microsoft Academic. A second finding concerns the relatively low number of affiliations linked to the GRID database in Dimensions and Microsoft Academic. For Microsoft Academic this is to a large part due to the fact that affiliations are not included in the database, but this could not be determined for Dimensions. A third finding concerns the fact that nearly all affiliations within Scopus are linked to an Affiliation Identifier. Our interpretation here is that this reflects the internal process of assigning affiliation identifiers, and we do not infer that Scopus provides a nearly comprehensive disambiguation of organizations within author affiliations. Finally, a fourth finding relates to the coverage of the Organizations Enhanced List. The relatively high overall rate does not mean that coverage is high for all countries. Especially for countries with a less developed scientific infrastructure, the share of affiliations linked to institutes included in the Organizations Enhanced List may be quite low. Using the Organizations Enhanced List for cross-country comparisons may therefore make differences appear larger than they really are. It is important to be aware that the outcomes of this analysis are based on snapshots of the bibliographic databases and registers, which are continuously expanded and updated. The analysis presented here was a first exploration, and further research is necessary to evaluate the linkages between registers and bibliographic databases. This research needs to investigate the accuracy of the assignments of organization identifiers and not only the extent to which these assignments are available. Recent studies in which such an approach was adopted include Donner, Rimmert, and Van Eck (2020) and Huang et al. (2019). By combining both aspects it will be possible to assess to what extent and in which manner the assignment of organization identifiers affects the outcomes of bibliometric studies.
Acknowledgments Part of the work presented here was supported by RISIS2—Research Infrastructure for Research and Innovation Policy Studies 2, a Horizon 2020 EU project, grant agreement no. 824091, and by KNOWMAK—Knowledge in the making in the European society, funded by the European Union's Horizon 2020 programme under grant agreement 726992. Another project that partially funded this work is U-Multirank (funded with support from the Bertelsmann Stiftung, the European Commission's Erasmus+ Programme and the Santander Group).
References

Arvanitis, R., & Chatelin, Y. (1988). National strategies in tropical soil science. Social Studies of Science, 18, 113–146.
Beaver, D., & Rosen, R. (1978). Studies in scientific collaboration—Part I. The professional origins of scientific co-authorship. Scientometrics, 1(1), 65–84.
Bilder, G., Brown, J., & Demeranville, T. (2016). Organisation identifiers: Current provider survey. https://orcid.org/sites/default/files/ckfinder/userfiles/files/20161031%20OrgIDProviderSurvey.pdf.
Braam, R. R. (1991). Mapping of science: Foci of interest in scientific literature. Thesis, University of Leiden. Leiden: DSWO Press.
de Bruin, R. E. (1986). Burgers op het kussen. Volkssoevereiniteit en bestuurssamenstelling in de stad Utrecht 1795–1813 (diss. Utrecht). Zutphen: De Walburg Pers.
de Bruin, R. E., & Moed, H. F. (1990). The unification of addresses in scientific publications. In L. Egghe & R. Rousseau (Eds.), Informetrics 89/90 (pp. 65–78). Amsterdam: Elsevier Science Publishers.
de Bruin, R. E., Braam, R. R., & Moed, H. F. (1991). Bibliometric lines in the sand. Nature, 349, 559–562.
de Bruin, R. E., & Moed, H. F. (1993). The use of cognitive address words in the delimitation of scientific subfields. Scientometrics, 26, 65–78.
de Bruin, R. E., Kint, A., Luwel, M., & Moed, H. F. (1993). A study of research evaluation and planning: The University of Ghent. Research Evaluation, 3, 25–41.
Calero-Medina, C., Lopez-Illescas, C., Visser, M. S., & Moed, H. F. (2008). Important factors in the interpretation of bibliometric rankings of world universities. Research Evaluation, 17, 71–81.
Cantu-Ortiz, F. J. (2017). Research analytics: Boosting university productivity and competitiveness through scientometrics. https://doi.org/10.1201/9781315155890.
Donner, P., Rimmert, C., & van Eck, N. J. (2020). Comparing institutional-level bibliometric research performance indicator values based on different affiliation disambiguation systems. Quantitative Science Studies, 1(1), 150–170. https://doi.org/10.1162/qss_a_00013.
Garfield, E. (1955). Citation indexes for science. Science, 122(3159), 108–111.
Garfield, E. (1992). Contract research services at ISI—Citation analysis for government, industry, and academic clients. Current Contents, 23, 5–7.
Huang, C.-K., Neylon, C., Brookes-Kenworthy, C., Hosking, R., Montgomery, L., Wilson, K., & Ozaygen, A. (2019). Comparison of bibliographic data sources: Implications for the robustness of university rankings. bioRxiv, 750075. https://doi.org/10.1101/750075.
Lewison, G., & Cunningham, P. (1989). The use of bibliometrics in the evaluation of community biotechnology programmes. In Science and technology indicators: Their use in science policy and their role in science studies. Leiden: DSWO Press. ISBN 9066950366.
Lewison, G. (1999). Yugoslav politics, "ethnic cleansing" and co-authorship in science. Scientometrics, 44(2), 183–192.
Moed, H. F., & Vriens, M. (1989). Possible inaccuracies occurring in citation analysis. Journal of Information Science, 15(2), 95–107.
Moed, H. F., de Bruin, R. E., & van Leeuwen, Th. N. (1995). New bibliometric tools for the assessment of national research performance: Database description, overview of indicators and first applications. Scientometrics, 33, 381–422.
Moed, H. F., Luwel, M., de Bruin, R. E., Houben, J. A., Van den Berghe, H., & Spruyt, E. (1997). Trends in research output at Flemish universities during the 80's and early 90's: A retrospective bibliometric study. In Proceedings of the Sixth Conference of the International Society for Scientometrics and Informetrics, June 16–19, 1997, The School of Library, Archive and Information Studies of the Hebrew University of Jerusalem, Israel (pp. 277–287).
Moravcsik, M. J. (1988). The coverage of science in the Third World: The "Philadelphia program". In L. Egghe & R. Rousseau (Eds.), Informetrics 87/88 (pp. 147–155). Amsterdam: Elsevier.
Nagtegaal, L. W., & de Bruin, R. E. (1994). The French connection and other neo-colonial patterns in the global network of science. Research Evaluation, 4, 119–127.
Narin, F. (1976). Evaluative bibliometrics: The use of publication and citation analysis in the evaluation of scientific activity.
Praal, F. E. W., Kosten, M. J. F., Calero-Medina, C., & Visser, M. S. (2013). Ranking universities: The challenge of affiliated institutes. In S. Hinze & A. Lottmann (Eds.), Proceedings of the 18th International Conference on Science and Technology Indicators: Translational Twists and Turns: Science as a Socio-Economic Endeavour (pp. 284–289). Berlin: IFQ.
Reyes-Elizondo, A., Calero-Medina, C., & Visser, M. S. (2019). A pragmatic approach to allocating academic hospitals' affiliations for bibliometric purposes. Working paper.
Schott, T. (1988). International influence in science: Beyond center and periphery. Social Science Research, 17, 219–238.
Schubert, A., Zsindely, S., & Braun, T. (1985). Scientometric indicators for evaluating medical research output of mid-size countries. Scientometrics, 7(3), 155–163.
van den Berghe, H., Houben, J. A., de Bruin, R. E., Moed, H. F., Kint, A., Luwel, M., & Spruyt, E. H. J. (1998). Bibliometric indicators of university research performance in Flanders. Journal of the American Society for Information Science, 49, 59–67.
van Honk, J., Calero-Medina, C., & Costas, R. (2016). Funding acknowledgements in the Web of Science: Inconsistencies in data collection and standardization of funding organizations. In 21st International Conference on Science and Technology Indicators—STI 2016. Book of Proceedings.
van Raan, A. F. J. (2005). Fatal attraction: Conceptual and methodological problems in the ranking of universities by bibliometric methods. Scientometrics, 62(1), 133–143. https://doi.org/10.1007/s11192-005-0008-6.
Visser, M., Van Eck, N. J., & Waltman, L. (2019). Large-scale comparison of bibliographic data sources: Web of Science, Scopus, Dimensions, and Crossref. In Proceedings of the 17th International Conference of the International Society for Scientometrics and Informetrics (pp. 2358–2369).
Waltman, L., Calero-Medina, C., Kosten, J., Noyons, E. C. M., Tijssen, R. J. W., Van Eck, N. J., … Wouters, P. (2012). The Leiden Ranking 2011/2012: Data collection, indicators, and interpretation. Journal of the American Society for Information Science and Technology, 63(12), 2419–2432. https://doi.org/10.1002/asi.22708.
Research Trends—Practical Bibliometrics and a Growing Publication Gali Halevi
Background

Research Trends (www.researchtrends.com) was founded in September 2007 by a group of Elsevier employees led by Iris Kisjes and Andrew Plume, who sought to provide insights into scientific trends using bibliometric analysis of Scopus data. At first, the main purpose of what they called a 'quarterly newsletter' was to promote Scopus, which at the time had just been launched and was marketed as a source of reliable bibliometric data and a competitor to Web of Science. The newsletter was a mixture of original articles written by Elsevier staff using Scopus data, as well as contributions from bibliometrics experts working in various universities and institutions around the world. The unique aspect of this publication was the practical approach it took to bibliometric research, making it accessible and understandable to readers outside the bibliometric community. Since bibliometrics is a data-driven science which uses complex statistical models and calculations to measure the impact of scientific output and trends in the scientific arena, the results of such research and their practical implications are sometimes difficult to decipher. RT aimed to simplify this by using less complex statistical calculations and putting the emphasis on the findings themselves and their practical interpretations. By using this approach, the founders of RT were hoping to reach and engage with a wider audience from a variety of disciplines. This type of analytics, which beforehand was only published in a few, very specialized scientific journals, became accessible to readers outside of the bibliometric scientific community and facilitated getting Scopus' name out to users. RT saw a gradual evolution from a 'newsletter' to a scientific magazine over the years. The first issue of the 'newsletter', for example, focused on short articles explaining the basic concepts behind the science of citation analysis and scientometrics, coupled with short pieces depicting country-related scientific output trends
(Research Trends, Issue 1, 2007). As a part of the effort to communicate the value of bibliometrics to a wider audience, the RT team invited experts in the field to contribute to the newsletter from the very beginning. These experts included some of the most prominent researchers in the bibliometrics field such as Eugene Garfield, Leo Egghe, Wolfgang Glänzel, Loet Leydesdorff, Katherine W. McCain, Anthony F.J. van Raan, Henk Moed and others. Sharing perspectives, research findings, data and methods, these experts helped transform the nascent newsletter into a scientific magazine by contributing high quality and reliable articles to its growing content. RT is therefore a unique case of a publication which began as a marketing tool and evolved into a noteworthy scientific periodical that included cutting-edge analysis and coverage of innovations in the field of bibliometrics, scientometrics and altmetrics. This chapter will demonstrate the evolution of RT which can be largely attributed to the leadership of Henk Moed.
Early Contributions Henk’s contribution to RT began in 2008 with the article “The effects of bibliometric indicators on research evaluation” (Moed, 2008). In this issue, the RT team focused on the UK universities ranking report that was published in November 2008 while tackling issues related to the use of bibliometric methodologies to calculate universities’ rankings and research impact overall. Henk’s contribution included a short essay on the side-effects that these bibliometric indicators might have including citations manipulations and trading, written to provide some realistic perspectives on the report. In September 2008, which is usually the time for the De Solla Price Medal to be awarded, the RT team published contributions from 7 former Medal awardees in an issue dedicated to this distinguished recognition in the field of scientometrics (Research Trends, 2008, Issue 7). The de Solla Price Medal is one of the most prestigious awards in the field of bibliometrics. It is awarded every 2 years and each nominee is subject to an extensive review process by a dedicated committee. Issue 7 of RT, published in September 2008, was a compilation of perspectives from former awardees answering the question on how the work of Derek de Solla influenced or inspired them. The contributors included Professors. Eugene Garfield the winner of the first 1984 de Solla Medal (RIP), Leo Egghe, 2001 winner, Anthony Van Raan, 1995 winner; Wolfgang Glänzel, 1999 winner, Loet Leydesdorff, 2003 winner, Katherine McCain, 2007 winner, and Henk Moed, 1999 winner. In his contribution to this issue, Henk spoke about da Solla’s paper from 1970s (de Solla Price, 1980) that contained some of the most forward thinking citations analysis which he titled the “citation cycle”. This paper, according to Henk, was an inspiration considering its’ finding of a productivity paradox which demonstrates that the overall number of publications divided by the number of active scientists remains constant for decades. This finding and the discovery of the effects of policy and collaborations as means
of explaining this phenomenon, made this work one of the most impactful papers for Henk and one that informed his investigations in later studies (Moed, 2008). In the 2010 special issue of RT (Issue 15), dedicated to the role of bibliometric indicators in journal evaluation, Henk was interviewed about his journal ranking invention, the SNIP (Source Normalized Impact per Paper). In this interview Henk explained the process that led to the development of SNIP as a measure addressing the lack of contextual perspective in journal evaluation metrics. Reading through the interview, it becomes clear that in developing SNIP, Henk was working towards the greater purpose of a variety of ranking systems that would be used properly and in context. In his words: “All indicators are weighted differently, and thus produce different results. This is why I believe that we can never have just one ranking system: we must have as wide a choice of indicators as possible” (Pirotta, 2010).
Joining Elsevier and Becoming Editor-in-Chief

Henk joined Elsevier in 2011 as Director of Research Metrics, following the success of the SNIP indicator and its adoption by Elsevier's Scopus. The SNIP journal indicator appeared on Elsevier's journal home pages as well as on journals covered by Scopus, as a direct competitor to the traditional Impact Factor (IF) embedded in the Web of Science. In addition to his role, which included customer education, engagement and serving as a consultant metrics expert, Henk also took on the role of Editor-in-Chief of RT. He expanded the RT editorial team, bringing knowledgeable staff members on board as regular contributors and editors. The first issue under Henk's editorship was Issue 21, published in January 2011. The theme of the issue was “Renewal and Rebirth”. Although the issue focused on changes in metadata, such as authors' name changes or journal title changes, and their effect on bibliometrics, it also symbolized a new era for the growing publication, which was now directed by one of the most famous names in the field. Issue 21 introduced new branding, a new interactive publishing platform, and the use of social media such as Facebook and Twitter to promote its content; methods that were considered innovative and experimental at the time. The new issue also introduced two new “focus” segments, the “Regional Focus” and the “People Focus”. These segments simplified complex concepts in bibliometrics and placed them in a cohesive context of world affairs that was easy for anyone to understand. For example, in 2011 the Arab Spring was in full swing: a series of anti-government protests, uprisings and armed rebellions spread across the Middle East. Issue 21, hinting at the political atmosphere in the region, included a series of articles that examined scientific phenomena with a sense of revolution and rebirth: for example, mature scientific discoveries such as black holes or general relativity and their reappearance in current publications (Jones, 2011), or the concept of 'sleeping beauties' in science; articles
that gain attention years after publication (Huggett, 2011a), and even the effect that journal re-branding has on citations and impact (Huggett, 2011b). In addition to these conceptual articles metaphorically hinting at rebirth and renewal, the issue also included an article on scientific output in the Middle East (Plume, 2011a). This creative approach to content was the beginning of the transformation of RT from a newsletter into a scientific magazine. Tapping into current events as a source of inspiration for each issue's theme, while using bibliometrics to showcase related aspects, sparked readers' interest. The articles were interesting, focused and simple to understand, while being methodologically sound and using validated data. Bringing scientometrics and bibliometrics closer to the academic community, the following issues of RT transformed readers' perceptions of these sciences and their overall utility, making them easier to understand and apply to current events. By featuring the scientific endeavors of the US and China (Boucherie, 2011; Plume, 2011b), explaining university ranking systems (Richardson, 2011) or examining the effects of publishing in distinguished journals (Arkin, 2011), the authors and contributors demonstrated how bibliometrics can be used to understand world scientific trends. These articles transformed statistics into stories that touched on various aspects of the scientific endeavor and shed light on publishing processes and what they mean for the progress of countries, institutions and even individuals. In mid-2011, Henk was busy traveling and giving lectures about scientific metrics all over the world. As part of his position at Elsevier, he fostered strong relationships with academic institutions and research bodies while assisting them with their research evaluation tasks and goals. Recognizing the need to better understand these processes and the data analytics behind the metrics, Henk directed a few RT issues to focus on these methods. Issues 23 and 24 of RT published a series of articles on the topic of research assessment and included Henk's seminal work “The Multi-Dimensional Research Assessment Matrix” (Moed & Plume, 2011), which was then in its early days of development. This issue was the first to set forth an organized, theme-based plan for future publications offering a deep dive into the complex issue of research assessment. Since assessment is done at various levels, including institutional, individual and national, and involves multiple layers of metrics and perspectives, Henk directed the subsequent issues to focus on these, with the purpose of educating the academic community about the advantages and disadvantages of each while offering straightforward examples and context. The research assessment matrix article in RT was followed by a series of presentations and videos that Henk gave in person and online, which marked the beginning of a dialog between bibliometrics experts and the academic communities. This issue came at a time when more and more research evaluation frameworks were being published and fostered debate across scientific entities (Reinhardt & Milzow, 2012; Van den Besselaar, Inzelt, Reale, De Turckheim, & Vercesi, 2012). Since these reports were based on large amounts of data, extensive calculations and often complicated metrics, Henk made it a goal for RT to provide clarity around these issues by combining methodologically sound articles that explained the results, their impact and meaning in a manner that
could be easily understood by readers, whether academics or lay people. “The Multi-Dimensional Research Assessment Matrix” was a big step towards clarifying the complexity of assessing science and its output. This issue was strengthened by expert opinions from research evaluation scientists who contributed their perspectives to RT. For example, Prof. Diana Hicks, who serves as chair of the Department of Public Policy at Georgia Tech, and Prof. Margaret Sheil, a member of the Cooperative Research Centers Committee (the Prime Minister's Science, Innovation and Engineering Council in Australia), were both interviewed for RT and offered clarifications and insights into some of the university rankings and evaluation reports published around that time. These interviews contributed to the overall goal of the series, which was to clarify the issues surrounding the assessment of national and institutional research performance. The articles produced by the RT team demonstrated the complexity of such metrics and the need for a variety of methods to be available to evaluators, depending on the purpose, scale and ultimate goals of such exercises. I joined the RT team in 2011, with Issue 24, and took part in the publication until its closing. Issue 24 focused on international trends in scientific publishing and presented country trends from various perspectives, demonstrating how bibliometrics is used to track collaborations, readership and global research topics. Starting with a thought-provoking article on a bottom-up approach to global institutional evaluation, Henk and Judith Kamalski demonstrated how article download and readership counts can be used to analyze the impact of institutions on a global scale (Moed & Kamalski, 2011). Henk and I followed similar investigative lines with our analysis of collaborations using bibliometric indicators. We defined such collaborations as “inward” or “outward” facing. Using the affiliation metadata, we were able to determine whether institutions were mostly collaborating within their own countries or outside of them. We found that geographical distance was the main driver behind collaborations, followed by language. This means that the science itself might not be the main driver of scientific collaboration, but rather external factors such as geographical proximity and language (Moed & Halevi, 2011). Another interesting area that Henk encouraged the RT team to write about was publication support: RT contributors offered practical tools, tips and strategies to authors around topics such as article promotion, discoverability and the like. With the launch of the “Research Trends Seminars Series”, the team created meaningful engagement channels by bringing together global experts and holding mini-conferences around topics of interest to the scientific community. That year, the team launched the first symposium in the series, titled “Mapping & Measuring Scientific Output”, which was developed in sync with the RT publication and its content. Henk led the symposium and hosted Dr. Eugene Garfield (RIP), who honored the event with his presence and gave a comprehensive presentation on the evolution of the Impact Factor. Henk was not only the host of the event but also gave an impactful presentation about the multi-faceted approach to research evaluation. This symposium was groundbreaking in its approach of providing scientific evaluation perspectives directly from the people who invented them (Halevi, 2011).
Issue 25 of RT was a turning point for the publication, moving from the simple use of basic bibliometric methods to convey scientific trends to a more in-depth exploration of such methods and their impact on individual-, institution- and country-level analysis. Dedicating sections of the publication to explaining how bibliometric data can be interpreted, and providing contextual background to findings, were some of the steps taken to make the science behind the data more accessible and understandable, thus reaching larger audiences. Quoting from Henk's editorial: “Bibliometric indicators are derived from large databases, often using advanced methodologies. But the numbers themselves are not what count—the key issue is how they are interpreted” (Moed, 2011, p. 1). Issue 25, therefore, illustrated the importance of valid interpretation and the difficulties encountered in interpreting bibliometric results, through a variety of articles and topics. Moreover, for the first time, the articles in this issue began to explore uses of bibliometrics beyond evaluation, looking at areas such as business and the relationship between research and innovation. Bibliometric data lends itself well to visualization, since it can demonstrate relationships and connections between journals, articles, authors, institutions and even countries. Such connections can be revealed through citation and co-citation analysis and by analyzing the bibliographical information of articles. Visualizations are also a popular way to engage with readers. Seeing the potential of visualizations and their ability to explain bibliometric data and analytics, Henk sought to include more of them in the publication overall. One of the most talented contributors to RT was Matthew Richardson, whose sophisticated illustrations brought complicated data to life. His data visualizations can be seen in many RT issues, including Issue 26, where he mapped thousands of journals to their respective subject areas and demonstrated how they connect with each other through citation networks, indicating the manner in which disciplines influence and borrow from each other to create the next discovery or innovation. One of his maps identifies journals that connect research disciplines (Richardson, 2012). This issue also included an interview with Katy Börner, curator of the Places & Spaces: Mapping Science initiative at Indiana University. Her maps apply a series of unique information visualization tools that enable a deeper understanding of global scientific, environmental and economic trends (Börner, 2012). One of the topics that Henk felt very passionate about was the ability to measure the societal impact of research. Since science is mostly funded by the public, Henk felt that bibliometrics should also be used to measure the impact of science on society. This topic was the focus of his work on the multidimensional assessment of science. Issue 27 of RT was therefore dedicated to exploring the technological, social, economic and cultural impact of science. In this issue, Henk demonstrated his editorial insight: despite the sensitivity of the subject, the RT team produced a series of articles that tackled topics such as 'brain drain', how free encyclopedias influence science, and even how library science influenced the production of technological patents (Halevi & Moed, 2012a).
The impact of science on society can be a sensitive matter, especially when it touches upon public policy, perceptions and even geopolitics. For example, studying 'brain drain'
through bibliometrics is very revealing of a country's shortcomings in funding or development. Analyzing affiliations can demonstrate how researchers move from country to country, how many of them do not return to their home countries, which disciplines show the most brain drain, and so forth. But since these findings might not be favorable to some countries, they might not be welcomed (Plume, 2012). Also, innovation in itself is not enough unless it has some positive influence on human lives. Yet how do we demonstrate the connection between science, innovation and impact on society? These questions were at the heart of this issue and subsequent ones, in which the RT team explored new ways to demonstrate these connections through bibliometrics. The year 2012 also saw significant growth in the exploration of altmetrics. Open Access publishing became a center of debate across countries and institutions, and there was an overall sense of change in the search for more holistic ways to measure scientific impact, including the exploration of the correlation between Open Access and scientific impact. RT became an active participant in these discussions, with Henk leading the way with his article on the correlation between open access, citations and readership (Moed, 2012a). Issue 28 also included articles that explored usage as a way to evaluate research, a concept that was new back then and not widely researched (Lendi & Huggett, 2012). That year Henk and I also launched the Elsevier Bibliometrics Program, which aimed to have prominent researchers use data provided by Elsevier to produce innovative research in the field of bibliometrics. This program drew applications from all over the world, and the results of their research were featured in subsequent RT issues, including interviews and perspectives that enriched the content and made it appeal to professionals around the world. Unlike today, the use of Twitter, Facebook and other social media to promote science or scientific publications was then in its infancy. However, RT quickly took advantage of these tools, opening Twitter and Facebook channels through which we began engaging with our readers in real time. Within a short period RT was followed by over 1,600 readers, who commented on the articles, suggested new topics and shared RT content online. This was one of the most exciting times in the publication's life, which not only gave us an opportunity to connect with users directly but also to gain perspective on their interests. One of the topics that we identified as gaining attention through these channels was big data in bibliometrics and what it meant for the future of the field in terms of applications and impact. Based on that, Henk directed that the subsequent issue be dedicated to this topic. This was a special issue in which prominent researchers from different institutions and disciplines were invited to write about the use of big data and analytics in their work, providing examples of tools, platforms and models of decision-making processes. For this issue, Henk and I wrote an article that mapped the evolution of big data as a scientific topic of investigation and framed it within the peer-reviewed literature. We found that the term 'Big Data' was used in the 1960s and 1970s to refer to atmospheric and oceanic data, well before it was used in its current computational context, which began in 2008 (Moed & Halevi, 2012).
This article opened the special issue, which hosted some well-known researchers and
scientists analyzing big data in a variety of fields. Henk also wrote an article on bibliometric research measurement and evaluation using big datasets, a new concept at the time. In his article Henk illustrated how usage, citation, full-text, indexing and other large bibliographic datasets are combined and analyzed to follow scientific trends, the impact of research, and unique uses of information artifacts in the scientific community (Moed, 2012b). There is no doubt that this issue transformed RT and made it a forward-looking and influential scientific publication. Henk's ability to recruit well-known researchers to contribute to the publication, writing on what was then a popular theme, drove RT to the next level. Following the success of the big data issue, RT embarked on a new venture to shed light on the evaluation of academic disciplines in the Arts & Humanities. At this point RT was gaining visibility and attention within the bibliometric community, and Henk identified the need to focus on these areas. Research evaluation is mostly focused on the sciences, while the social sciences and Arts & Humanities (A&H) are somewhat neglected. Therefore, Henk directed Issue 32 to focus on these disciplines, in order to demonstrate and illustrate the potential and limitations of applying bibliometric techniques to assess the impact of research in A&H. The issue included a variety of articles which looked at A&H from different perspectives, using bibliometrics to sketch trends in productivity, funding, impact, multidisciplinarity and more. With its rounded treatment of A&H as areas of academic research with their own funding, citations, publications and overall impact, the issue was well received by readers, as seen in social media engagement and increased downloads of the RT PDF. The societal impact of research remained a major topic for RT. Driven by Henk's passion to find ways to measure and observe how science influences society, two issues of RT were dedicated to this topic. In Issue 33 RT dedicated a few articles to the topic of patents and how their analysis might shed light on the connection between science and its use in everyday life. Examining citations to and from patents to scientific articles as a way to measure such impact was explored, along with an interview with Prof. Francis Narin, who was the first to explore the connection between science and innovation through patent citations (Halevi, 2013a, b, c). In addition, contributions by experts covered issues such as how research is measured in light of its societal impact and how that, in turn, influences science policy. The exploration of the societal impact of research opened a new discussion which would later become a special issue on altmetrics. Back in 2013 altmetrics was a relatively new term slowly making its way into the research evaluation arena. At the time, it was not mainstream, and very few researchers really knew what it meant and how, if at all, to use these metrics as evaluation tools. However, being a part of the bibliometrics community and tuned to the most recent developments in the field, Henk identified the potential of this area and included an article written by Mike Taylor about the use of altmetrics as a way to measure scholarly and societal impact (Taylor, 2013a). In line with its name, “Research Trends”, the publication devoted a special issue to the topic of altmetrics.
For that special issue, the RT team, headed by Mike Taylor, solicited articles from various researchers in the area of altmetrics, which at the time
was a growing topic that had begun just a couple of years earlier, in 2010, with the Altmetrics Manifesto in which the term “altmetrics” was introduced. This special issue presented some of the most advanced approaches to altmetrics, their analysis and their application to research evaluation. For example, Christian Schlögl interpreted correlations between citation, full-text download and readership data in terms of the degree of overlap between the user communities of the three systems from which the data were extracted (Schlögl, 2014). Vicente P. Guerrero-Bote and Félix Moya-Anegón also examined statistical correlations between downloads and citations (Guerrero-Bote & Moya-Anegón, 2014), and Hadas Shema introduced another promising altmetric data source: scholarly blogs (Shema, Bar-Ilan, & Thelwall, 2014). Euan Adie, the founder of altmetric.com, also took part in this issue and demonstrated how grey literature, including pre-prints and policy documents, becomes available for research and as a source of new metrics (Adie, 2014). The growth in researchers' participation in social media at the time drove the exploration of alternative approaches to individual evaluation, reflected in the contribution by Judit Bar-Ilan, who introduced the portfolio concept developed in the ACUMEN project (Academic Careers Understood through Measurement and Norms) funded by the European Commission, aimed at “studying and proposing alternative and broader ways of measuring the productivity and performance of individual researchers”. She showed how online and social media presence and altmetrics are well represented in the expertise, output and influence sub-portfolios (Bar-Ilan, 2014). As can be seen from subsequent issues, RT continued to explore new and innovative ways to shed light on new measurements of science and make connections between science, society and innovation (Halevi, 2013a, b). Some of the most forward-looking topics tackled then included misconduct in science and its influence on medical misinformation. Today we are all exposed to the concept of “Fake News”, but at the time, finding a way to track retracted articles and demonstrate how they influence science and society was relatively new. Issue 34 of RT featured a couple of articles that tackled misinformation in medical research and its direct impact on people's lives (Scheerooren, 2013; Taylor, 2013b). In addition to putting emphasis on the societal impact of research and demonstrating how scientific articles can be linked to innovation and public policy, RT also covered country trends quite extensively. The main motivation behind this was Henk's understanding of the global scientific arena and of the importance of creating awareness of science in developing countries. The complexity of the topic was addressed through attention to language barriers in publishing (van Weijen, 2013), international collaborations, scientific migration, global publishing trends (Schemm, 2013; Moed & Halevi, 2013; Huggett, 2013) and more. Henk saw RT as an international publication that should be of interest to all scientific communities around the world; therefore, part of his mission was to publish articles that touched upon global research trends. Issue 38 was the last issue for which Henk served as Editor-in-Chief. The publication was closed in 2014, with the final issue being Issue 39. Issue 38 was a retrospective issue, with articles looking back at data from Scopus, which celebrated its 10th anniversary in 2014.
The RT team produced a series of articles that captured country-, disciplinary-
and even individual-level impact and trends using Scopus data. The issue was rich in content that spanned topics and perspectives, using bibliometrics to produce engaging articles of interest to a wide audience around the world. With this issue, RT reached the status of a scientific publication with a track record of quality content.
Impact

Unfortunately, RT was never indexed in a citation database. In addition, its title “Research Trends” is used by other publications indexed in Scopus. Therefore, searching for the RT domain in the references field was the most effective way to discover how many times RT articles were cited. To demonstrate the impact of the publication, Scopus was searched for all references that contained the website's URL in the following format: [WEBSITE (https://www.researchtrends.com)]. As can be seen from Fig. 1, the number of citations to RT articles has grown significantly since Henk became its Editor-in-Chief. Overall, according to Scopus, RT articles were cited over 300 times and continue to be cited today. The most cited articles in 2010 were “Buckyballs, nanotubes and graphene: on the hunt for the next big thing” (Plume, 2010, pp. 5–7) and “Bibliometrics Comes of Age: an interview with Wolfgang Glänzel” (Geraeds, G.-J., & Kamalski, J., pp. 10–12). In 2011 the most cited articles were “The Multidimensional Research Assessment Matrix” (Moed, H. F., & Plume, A., pp. 5–8) and “Tipping the balance: The rise of China as a science superpower” (Plume, 2011b, pp. 11–13). 2012 featured a series of issues on big data and data visualization which received a high number of citations. “The language of (future) scientific communications” (Van Weijen, 2012, pp. 7–9)
Fig. 1 Citations of RT articles
and “The evolution of big data as a research and scientific topic” (Moed & Halevi, 2012, pp. 3–7) received the highest numbers of citations, with 81 and 78 citations respectively. To demonstrate the global impact that RT had, the affiliations of the citing documents were analyzed using Scopus' “analyze results” feature. As can be seen from Fig. 2, citations to RT documents come from all over the globe. It is quite impressive to see the international impact that RT had in this manner: from North America to Europe, the Middle East to Russia, Africa and Australia, RT articles were read and cited globally. Table 1 features the top five countries that cited, and continue to cite, RT articles. Finally, the subject areas to which the citing articles are assigned were analyzed in order to capture which disciplines cited RT articles. Again, it is quite impressive to see that citations of RT articles come from a wide variety of disciplines, including Medicine, Biochemistry, Computer Science, Pharmacology, Immunology and more (see Fig. 3).
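A tally of this kind can be reproduced from a standard Scopus export of the citing documents. The following is only an illustrative sketch, not the workflow used for this chapter: it assumes a CSV export (the file name rt_citing_docs.csv is hypothetical) with the column names 'Year' and 'Affiliations', which may differ depending on export settings, and it uses a deliberately crude country parse.

```python
# Minimal sketch (not the original analysis): tally citations to RT articles
# by year and by citing country from a Scopus CSV export of citing documents.
# The file name and column names ('Year', 'Affiliations') are assumptions.
import pandas as pd

df = pd.read_csv("rt_citing_docs.csv")

# Citations per publication year of the citing documents (cf. Fig. 1).
per_year = df["Year"].value_counts().sort_index()
print(per_year)

# Rough country tally: take the text after the last comma of each affiliation
# string; a real analysis would need proper affiliation parsing.
def countries(affil_field):
    if not isinstance(affil_field, str):
        return set()
    return {part.split(",")[-1].strip() for part in affil_field.split(";")}

country_counts = {}
for affil in df["Affiliations"]:
    for c in countries(affil):
        country_counts[c] = country_counts.get(c, 0) + 1

top5 = sorted(country_counts.items(), key=lambda kv: kv[1], reverse=True)[:5]
print(top5)  # compare with Table 1
```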
Fig. 2 Global view of citations to RT articles
Table 1 Top 5 countries citing RT articles

Country          Number of citations
United States    109
United Kingdom   35
Italy            25
Brazil           20
China            19
Fig. 3 Citations to RT articles by discipline
Concluding Remarks

After four years as Editor-in-Chief of RT, Henk Moed decided to step down. During his tenure, Research Trends enjoyed the privilege of being guided and enriched by Henk. A prestigious and well-acknowledged scholar in the bibliometrics community, Henk joined Elsevier as a senior scientific advisor and also took Research Trends under his wing. Under his supervision, RT evolved not only in content but also in quality, becoming a respected publication followed by thousands, with several of its articles translated into different languages and cited regularly in peer-reviewed journal articles. In addition to guiding the content and coverage of RT, Henk contributed heavily to each and every issue with thought-provoking research articles and analyses covering research evaluation approaches, new methods for regional and country-level scientific output analysis, disciplinary and content analysis, and much more. As Editor-in-Chief, Henk also broadened Research Trends' scope by including interviews with leading scientific figures, reporting back from conferences and events, and publishing special issues on current topics such as Big Data and altmetrics. Beyond his obvious scientific and professional contributions to Research Trends as a growing publication, he always served as a mentor to those of us on the Editorial Board. Offering scientific advice on methodologies, research topics and approaches, he encouraged each of us to pursue research and writing and to strive to produce better articles in each and every issue.
References

Adie, E. (2014). The grey literature from an altmetrics perspective—opportunity and challenges. Research Trends, 37, 23–25.
Arkin, I. T. (2011). Science, music, literature and the one-hit wonder connection. Research Trends, 22, 9–11.
Bar-Ilan, J. (2014). Evaluating the individual researcher—adding an altmetric perspective. Research Trends, 37, 31–33.
Börner, K. (2012). The power of scientific mapping and visualization. Research Trends, 26, 15–17.
Boucherie, S. (2011). An update on Obama and American science. Research Trends, 22, 7–9.
de Solla Price, D. J. (1980). The citation cycle. In B. C. Griffith (Ed.), Key papers in information science (pp. 195–210). White Plains, NY, USA: Knowledge Industry Publications.
Guerrero-Bote, V., & Moya-Anegón, F. (2014). Downloads versus citations and the role of publication language. Research Trends, 37, 20–23.
Halevi, G. (2011). Mapping & measuring scientific output. Research Trends, 24, 11–12.
Halevi, G., & Moed, H. (2012a). The evolution of big data as a research and scientific topic: Overview of the literature. Research Trends, 30(1), 3–6.
Halevi, G. (2013a). The science that changed our lives. Research Trends, 33, 3–5.
Halevi, G. (2013b). Military medicine and its impact on civilian life. Research Trends, 33, 3–5.
Halevi, G. (2013c). Ancient medicine in modern times. Research Trends, 35, 13–15.
Huggett, S. (2011a). What's in a name? Journal rebranding and its consequences on citations. Research Trends, 21, 5–6.
Huggett, S. (2011b). “Sleeping Beauties” or delayed recognition: When old ideas are brought to bibliometric life. Research Trends, 21, 9–10.
Huggett, S. (2013). The bibliometrics of the developing world. Research Trends, 35, 3–7.
Jones, T. (2011). Children of the (scientific) revolution: A bibliometric perspective on Kuhnian paradigm shifts. Research Trends, 21, 3–4.
Lendi, S., & Huggett, S. (2012). Usage: An alternative way to evaluate research. Research Trends, 28, 7–9.
Moed, H. F. (2008). Why “The citation cycle” is my favorite de Solla Price paper. Research Trends, 7, 9.
Moed, H. F., & Plume, A. (2011). The multi-dimensional research assessment matrix. Research Trends, 22, 5–8.
Moed, H. F., & Kamalski, J. (2011). On the assessment of institutional research performance. Research Trends, 24, 3–5.
Moed, H. F., & Halevi, G. (2011). Emerging scientific networks. Research Trends, 24, 5–7.
Moed, H. F. (2011). Editorial. Research Trends, 25, 1.
Moed, H. F. (2012a). Does open access publishing increase citation or download rates? Research Trends, 28, 3–5.
Moed, H. F. (2012b). The use of big datasets in bibliometric research. Research Trends, 30, 31–34.
Moed, H. F., & Halevi, G. (2012). The evolution of big data as a research and scientific topic: Overview of the literature. Research Trends, 30, 3–5.
Moed, H. F., & Halevi, G. (2013). Migration and co-authorship networks in Mexico, Turkey and India. Research Trends, 35, 7–11.
Pirotta, M. (2010). Sparking debate. Research Trends, 15, 8–9.
Plume, A. (2010). Buckyballs, nanotubes and graphene: On the hunt for the next big thing. Research Trends, 18, 5–7.
Plume, A. (2011a). Rebirth of science in Islamic countries? Research Trends, 21, 6–8.
Plume, A. (2011b). Tipping the balance: The rise of China as a science superpower. Research Trends, 22, 11–13.
Plume, A. (2012). The evolution of brain drain and its measurement. Research Trends, 27, 3–5.
Richardson, M. (2011). The value of bibliometrics. The Research Excellence Framework: Revisiting the RAE. Research Trends, 22, 3–5.
Reinhardt, A., & Milzow, K. (2012). Evaluation in Research and Research Funding Organisations: European Practices. https://doi.org/10.22163/fteval.2012.97.
Richardson, M. (2012). Citography: The visualization of nineteen thousand journals through their recent citations. Research Trends, 26, 3–7.
Scheerooren, S. (2013). Charlatans and copy-cats: Research fraud in the medical sector. Research Trends, 34, 11–13.
Schemm, Y. (2013). Africa doubles research output over past decade, moves towards a knowledge-based economy. Research Trends, 35, 11–12.
Schlögl, C. (2014). A comparison of citations, downloads and readership data for an information systems journal. Research Trends, 37, 14–18.
Shema, H., Bar-Ilan, J., & Thelwall, M. (2014). Scholarly blogs are a promising altmetric source. Research Trends, 37, 11–13.
Taylor, M. (2013a). The challenges of measuring social impact using altmetrics. Research Trends, 33, 11–15.
Taylor, M. (2013b). The peculiar persistence of medical myths. Research Trends, 34, 19–22.
Van den Besselaar, P., Inzelt, A., Reale, E., De Turckheim, E., & Vercesi, V. (2012). Indicators of Internationalisation for Research Institutions: A New Approach. https://doi.org/10.22163/fteval.2012.92.
Van Weijen, D. (2012). The language of (future) scientific communication. Research Trends, 31(11), 2012.
van Weijen, D. (2013). How to overcome common obstacles to publishing in English. Research Trends, 35, 17–19.
The Evidence Base of International Clinical Practice Guidelines on Prostate Cancer: A Global Framework for Clinical Research Evaluation

Elena Pallari and Grant Lewison
Introduction

GL first encountered Henk in 1989 at the University of Leiden, when he was working as a contractor for a study on European agricultural research for the European Commission's Research Evaluation Unit. At the time, bibliometrics and citation work were still based on the printed volumes of Eugene Garfield's Science Citation Index and on the Dialog online database. Henk mastered these sources of data and was instrumental in setting up the Leiden publications database, which became the de facto world standard for research evaluation (Moed, Burger, Frankfort, & Van Raan, 1985) and was used extensively by many clients. GL was enormously impressed, both by Henk's technical expertise and by his strategic grasp of the background to, and pitfalls of, the use of simple citation counts for research evaluation (Glänzel & Moed, 2002). Henk created the source normalized impact per paper (SNIP) indicator (Moed, 2010), an important measure for establishing a journal's citation potential by correcting and differentiating for it within and across subject fields. He also contributed to the theory of citations through many scientific papers, and was awarded the Scientometrics de Solla Price Medal (with Wolfgang Glänzel) in 1999 for his contributions.
Citations in Clinical Practice Guidelines (CPGs)

The citation metric based on journal article references is still accepted as the primary indicator of academic merit. However, simple counts of citations tell only part of the story, are often mistakenly referred to as “impact” rather than “citation impact” metrics, and hence are misused in research evaluation. Citations in the peer-reviewed serial literature are not the only way in which research can be evaluated. Citation counts provide a number which serves multiple purposes: indicating research visibility and productivity, as well as providing credit for the authors or their institutions. However, this number has limited utility as an indicator of research impact, because studies show that it tends to favour basic biomedical research rather than clinical work (van Eck et al., 2013). Although this bias can be allowed for if the papers being evaluated are classified by their research level, it often remains. Other modes of citation are sometimes used for evaluative purposes, but they remain small-scale and ad hoc, as no full database is available. Recently Minso Solutions AB in Sweden have introduced the Clinical Impact® database, which comprehensively lists the references on the CPGs from several European countries (Denmark, Finland, Germany, Ireland, Norway, Sweden and the UK), the USA and some international organisations (Eriksson, Billhult, Billhult, Pallari, & Lewison, 2020). This can be used to identify medical research papers that have been frequently cited, although some countries' CPGs are updated regularly, and many of their references are unchanged from one year to another. The practice of research evaluation based on the references from CPGs began with two papers from The Wellcome Trust (Grant, 1999; Grant, Cottrell, Cluzeau, & Fawcett, 2000), and another later one (Kryl, Allen, Dolby, Sherbon, & Viney, 2012). It was continued at The City University (Lewison & Wilcox-Jay, 2003), and has led to several additional papers more recently (Begum, Lewison, Wright, Pallari, & Sullivan, 2016; Lewison & Sullivan, 2008; Pallari, Fox & Lewison, 2018a; Pallari, Lewison, Ciani, Tarricone, Sommariva, Begum, & Sullivan, 2018b). These papers have been complemented by ones that evaluated the CPGs themselves, based on the evidence base that they cited (Pallari et al., 2018a) and on other factors (Abdelsattar, Reames, Regenbogen, Hendren, & Wong, 2015; Burgers, Cluzeau, Hanna, Hunt, & Grol, 2003; Burgers et al., 2004; Cecamore et al., 2011; Fervers et al., 2005; Grimmer et al., 2014; Harris, 1997; Kryworuchko, Stacey, Bai, & Graham, 2009; Legido-Quigley et al., 2012; Steinberg et al., 2000). However, some doubt has recently been cast on the independence of guideline developers because of possible industrial influence (Checketts, Sims, & Vassar, 2017; Horn, Checketts, Jawhar, & Vassar, 2018), so this might bias some of their references. Our objective in this chapter of Henk's Festschrift is to go beyond simple citation counts and to assess prostate cancer research internationally on the basis of its contribution to clinical practice as well as its conventional impact on other research in the serial literature.
The Development of Clinical Practice Guidelines

Clinical practice guidelines (CPGs) are developed to assist doctors and other medical personnel with the diagnosis and treatment of a number of diseases and disorders (Field & Lohr, 1990). The principle of using evidence to guide clinical decision-making was soon applied to the development of evidence-based CPGs (Anonymous, 1994; Browman, 1994; Gibson, 1993; Sox & Woolf, 1993). The Scottish Intercollegiate Guidelines Network (SIGN) was formed in 1993 and published its first CPG in 1995. It was followed by the National Institute for Health and Care Excellence (NICE) for England and Wales in 1999. The importance of this movement was recognised (Ewalt, 1995), especially for the use of CPGs to improve clinical outcomes and reduce the associated healthcare costs (Lohr & Field, 1992; Lohr, 1994, 1995). Other research focused on criticising the methodological aspects of CPGs, and on the importance of shaping the development and maintenance process of evidence integration through a structured and systematic approach (Audet, Greenfield, & Field, 1990; Cook, Greengold, Ellrodt, & Weingarten, 1997; Greengold & Weingarten, 1996; Hayward & Laupacis, 1993; Hayward et al., 1993; Todd, 1990; Woolf, 1992). CPGs normally include a list of references that provide the evidence on which their recommendations are based. Hence, these references, or evidence base, can be studied and can provide an indicator of the utility of the cited research for clinical practice. Grant et al. (2000) were the first to consider the references on CPGs as a useful means of demonstrating the utility of biomedical research, particularly from the point of view of the funding agencies. Previously, we systematically assessed the evidence base of all the oncology CPGs developed by ESMO, NICE and SIGN and demonstrated a differential impact with respect to the cited research (Pallari et al., 2018a). In this study, we have applied the same methodology and rationale, but with expanded search criteria, to assess the global evidence that exists on CPGs for a single cancer site (the prostate). To our knowledge this is the first time that such a systematic search has been undertaken to examine the evidence base of CPGs from such a long list of countries and international research organisations.
Objectives of This Chapter

We describe a new source of citations to medical research papers, and their characteristics. Our aim has been to compare these with those of all research papers on the subject, in particular their research level (from clinical observation to basic research), the countries that contributed to them, and the types of research that they described. We found significant differences, and consequently suggest that CPG references should also be taken into account by evaluators of medical research
who wish to see if the research has had an impact on the clinical diagnosis and treatment of patients, which is, of course, the ultimate rationale for the work.
Methods

The Collection of the Prostate Cancer CPGs

We undertook a systematic approach to identify all the published CPGs on prostate cancer, through Google and the Web of Science© (Clarivate Analytics), using the search statement “clinical guideline prostate cancer”. Many CPGs were in national languages, but their lists of references were printed in English, so we could easily identify and process them. For China, there were three CPGs, all in Chinese, by the Chinese Academy of Medical Sciences; for two of them the references were given in English, and for the third the DOI data were available. We thus created a comprehensive collection of CPGs on prostate cancer and were able to obtain a pdf of one or more CPGs from each of the countries and organisations listed in Table 1, with their ISO2 codes or initials. Altogether, we were able to assemble a total of 72 different CPGs, and we noted the names of their developers and the year of publication, where available. Our search yielded a comprehensive collection of CPGs, but may inevitably have missed some from countries that previously published CPGs on other diseases.

Table 1 List of 28 countries from which we were able to obtain prostate cancer CPGs, with their digraph International Standards Organization (ISO2) codes, and two international bodies

Country     ISO2   Country         ISO2   Country         ISO2   Country           ISO2
Australia   AU     Finland         FI     Malaysia        MY     South Africa      ZA
Belgium     BE     France          FR     Mexico          MX     South Korea       KR
Brazil      BR     Germany         DE     Netherlands     NL     Spain             ES
Canada      CA     India           IN     New Zealand     NZ     Sweden            SE
China       CN     International   INT    Poland          PL     United Kingdom    UK
Croatia     HR     Ireland         IE     Russia          RU     United States     US
Estonia     EE     Italy           IT     Saudi Arabia    SA     Europe            EUR
                   Japan           JP     Singapore       SG
The Collection of the References and Creation of the Spreadsheet

The set of references was copied from the pdf version of each guideline and pasted into MS Word®, where each reference was brought into the same bibliographic order: author's surname, initials, publication year, title of reference, journal, volume and page number. Following data processing and cleaning, the references were copied and pasted into a spreadsheet in MS Excel®. From the Excel database, we selected one or more co-authors, the year of publication, and either a group of characteristic words from the title, or the whole title in cases where it was not distinctive, converting spaces into hyphens, and formed a search statement for each reference to be run on the Web of Science (WoS). For example:

AU = (Crook, JM AND O'Callaghan, CJ AND Duncan, G) AND TI = (Intermittent-Androgen-Suppression-for-Rising-PSA-Level-after-Radiotherapy) AND PY = 2012
The title words were always separated by hyphens to ensure that the title was exactly as in the CPG with the words in the same order. These individual search statements were then concatenated to make composite search statements with 20 individual searches. These composite statements were run against the WoS, and usually 20 papers were identified. There might have been fewer than 20 if one or more of the cited journals was not processed in the given year, or more than 20 if a paper had been subject to commentary or criticism in the same journal and year, and the author(s) had written a response. In such cases, we manually performed a quality check on the title, year and journal to ensure the correct study was included. References that were not papers in journals were omitted. The details of the identified papers from each individual CPG were then downloaded from the WoS to text files, up to 500 at a time, as a “Full Record” and then converted to form an MS Excel spreadsheet by means of a special macro (Visual Basic Applications, VBA, program) written by Philip Roe of Evaluametrics Ltd. The data included details of funding, and also the paper identifiers (DOI and PMID), where available. At this stage, papers whose “doctype” was a letter or author’s reply were also discarded, if they had the same title as an original article or review. A final quality check was performed against the original bibliographic records, usually on title and journal name for each CPG. The exception was for the Finnish CPGs which did not provide the titles of their references, and the journal name was in an abbreviated form that we converted to its full name manually. The resulting spreadsheet for Finland was checked against the pagination given for each of the references in the CPG so that only the cited papers were retained. The data from the individual spreadsheets were then combined into a single spreadsheet, and to each reference we added three extra columns for the bibliographic details of its citing CPG: an identifier that we provided, the country ISO2 code, and the year of CPG publication.
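The construction of these search statements can be illustrated with a short sketch. The authors built their statements in Excel; the Python version below is only illustrative and makes several assumptions not stated in the chapter: a file cpg_references.csv with columns 'authors' (semicolon-separated), 'title' and 'year', and the joining of the individual searches with OR within each composite statement.

```python
# Illustrative sketch (the authors worked in Excel, not Python): build WoS
# advanced-search statements of the form AU = (...) AND TI = (word-word-...)
# AND PY = year, then group them into composite statements of 20 searches.
# File name and column names are assumptions for this example.
import csv

def single_statement(authors, title, year):
    au = " AND ".join(a.strip() for a in authors)   # e.g. "Crook, JM AND Duncan, G"
    ti = "-".join(title.split())                    # hyphens keep title words in order
    return f"AU = ({au}) AND TI = ({ti}) AND PY = {year}"

def composite_statements(rows, batch_size=20):
    """Concatenate individual searches into composite statements (joined here with OR)."""
    singles = [single_statement(r["authors"].split(";"), r["title"], r["year"]) for r in rows]
    for i in range(0, len(singles), batch_size):
        yield " OR ".join(f"({s})" for s in singles[i:i + batch_size])

with open("cpg_references.csv", newline="", encoding="utf-8") as f:
    refs = list(csv.DictReader(f))

for statement in composite_statements(refs):
    print(statement)   # each composite statement is then run against the WoS
```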
The Analysis of the Cited References

The composite spreadsheet of references was analysed in several ways. Firstly, the titles of all the references were listed separately, and the numbers of occurrences of each individual one were determined. Secondly, the research level of each reference, either clinical or basic or both, was marked based on words in its title (see Lewison & Paraje, 2004). This enabled any selected group of references (such as those from an individual CPG or country) to have its mean research level (RL) calculated on a scale from clinical observation = 1.0 to basic research = 4.0. Thirdly, the addresses on each reference were parsed to show their contributing countries on a fractional count basis in individual columns. For example, a reference with two French and three Italian addresses would be classified as FR = 0.4, IT = 0.6. The percentage of references from each country could then be compared with its percentage presence in prostate oncology research (abbreviated PRO-ON), which was obtained previously from a file of all prostate cancer research papers from 2000 to 2016. Fourthly, the gap in years between the CPG publication and each of its cited references was calculated. Fifthly, a special macro was applied to the title and journal name of each reference in order to determine its research type or domain. We classified the cancer papers in 12 research domains, shown in Table 2. For analysis purposes we combined chemotherapy (CHEM) and targeted therapy (TARG) to form a single drug therapy (DRUG) category. Finally, we compared the mean number of CPG citations to papers from the leading countries with their mean citation scores for papers in prostate cancer research, with five-year citation counts in the WoS for the years 2000–2012. The mean number of CPG citations is, of course, much lower, but we would expect the correlation to be positive though small, as well as affected by the mean research level of the countries' research output, and possibly other factors.

Table 2 List of research domains for prostate cancer CPG references, with their tetragraph codes

Research domain   Code   Research domain   Code   Research domain    Code
Chemotherapy      CHEM   Palliative care   PALL   Radiotherapy       RADI
Diagnosis         DIAG   Pathology         PATH   Screening          SCRE
Epidemiology      EPID   Prognosis         PROG   Surgery            SURG
Genetics          GENE   Quality of life   QUAL   Targeted therapy   TARG
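Two of the calculations described above, the fractional country counts and the mean research level, can be sketched in a few lines of code. This is only an illustration, not the macro actually used; it assumes that each reference's addresses have already been reduced to ISO2 codes and that an RL value (or None, if unclassifiable) has been assigned to each reference from its title words.

```python
# Illustrative sketch of fractional country counting and mean research level (RL).
# Input formats are assumptions for this example, not taken from the chapter.
from collections import defaultdict

def fractional_country_counts(addresses):
    """addresses: list of ISO2 codes, one per address on the reference.
    A reference with ['FR', 'FR', 'IT', 'IT', 'IT'] yields FR = 0.4, IT = 0.6."""
    counts = defaultdict(float)
    for iso2 in addresses:
        counts[iso2] += 1.0 / len(addresses)
    return dict(counts)

def mean_research_level(rl_values):
    """Mean RL (1.0 = clinical observation ... 4.0 = basic research) over the
    references that could be classified; unclassified ones (None) are excluded."""
    classified = [rl for rl in rl_values if rl is not None]
    return sum(classified) / len(classified) if classified else None

# Example: one reference with two French and three Italian addresses,
# and a small group of references with assigned RL values.
print(fractional_country_counts(["FR", "FR", "IT", "IT", "IT"]))  # {'FR': 0.4, 'IT': 0.6}
print(mean_research_level([1.0, 1.0, 4.0, None, 2.5]))            # 2.125
```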
Results

Overall Collection of CPG References

There were 72 CPGs, from 28 individual countries plus one international, four European and two US organisations, developed between 2001 and 2019 (the end of January 2019 was the cut-off for the search); together they contained a total of 10,273 references. Of these references, 1,908 (19%) were unique references cited on more than one CPG. The distribution of the numbers of papers with given numbers of citations closely followed a power law, see Fig. 1. Two of the papers were heavily cited on the CPGs: one was cited 26 times and another 25 times:

Tannock, IF, de Wit, R, Berry, WR, et al. (2004). Docetaxel plus prednisone or mitoxantrone plus prednisone for advanced prostate cancer. New England Journal of Medicine, 351(15): 1502–1512 (26 cites)

Bill-Axelson, A, Holmberg, L, Ruutu, M, et al. (2011). Radical Prostatectomy versus Watchful Waiting in Early Prostate Cancer. New England Journal of Medicine, 364(18): 1708–1717 (25 cites)
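The power-law check behind Fig. 1 can be illustrated with a small computation. The sketch below uses made-up occurrence counts, not the study's data: it tallies how many references occur once, twice, and so on, and estimates the slope of the relationship on log-log scales, where an approximately straight line indicates a power law.

```python
# Sketch of the power-law check (illustrative only; the list of per-reference
# occurrence counts is hypothetical, not the study's data).
from collections import Counter
import math

occurrences = [1] * 8000 + [2] * 1200 + [3] * 400 + [4] * 150 + [5] * 60 + [25, 26]

freq = Counter(occurrences)          # occurrence count -> number of references

# Least-squares slope on log-log scales; a roughly straight line with a
# negative slope is what "closely followed a power law" means here.
xs = [math.log(k) for k in sorted(freq)]
ys = [math.log(freq[k]) for k in sorted(freq)]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
print(f"log-log slope ≈ {slope:.2f}")
```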
The distribution by year of the complete set of CPG references is shown in Fig. 2; the inter-quartile range is approximately 12 years: from 2001 to 2012. Over time, the RL increased from 1.11 in 2000–2004 to 1.20 in 2015–2018, see Table 3. However, this mean RL is much more clinical than the value for PRO-ON, which was 2.02 for the years 2000 to 2016.
Fig. 1 Distribution of number of references with given numbers of occurrences on CPGs
Fig. 2 The distribution in time of the prostate cancer CPG references. (The vertical bars show the inter-quartile range in years.)
Table 3 The calculation of mean paper research level for five time periods

Years        Papers   Clinical   Basic   Both   Classed   Classed, %   RLp
Until 1999   1823     1329       88      39     1378      75.6         1.15
2000–2004    1916     1407       77      50     1434      74.8         1.11
2005–2009    2583     2006       116     67     2055      79.6         1.12
2010–2014    2710     2115       179     121    2173      80.2         1.16
2015–2018    1241     1017       105     73     1049      84.5         1.20
The Countries Whose Research Was Cited on the CPGs

The analysis of the countries contributing to the CPG references showed that 72 countries contributed to these cited references, but almost half of the total contributions were from the USA. We compared the countries' percentages with those in the world prostate cancer literature for the 17 years 2000–2016, shown in Fig. 3. Smaller European countries, notably Belgium, the Netherlands, Sweden and Norway, have about twice the percentage presence among the CPG references that they have in prostate cancer research. However, the East Asian countries, notably Taiwan (TW), China (CN), South Korea (KR) and Japan (JP), appear to have made little contribution to the prostate cancer CPGs, although the last three have CPGs in our collection. The gap, in years, between publication of a CPG and its references showed a peak between two and three years, meaning that the gap is very similar to that between a paper's publication and the peak of its citations, see Fig. 4. This is expected from our recent study of oncology guidelines (Pallari et al., 2018a), but rather shorter than was found in earlier studies of cancer CPGs (Lewison & Sullivan, 2008). Although the mean gap varied by citing CPG, as shown in Fig. 5, Belgium (BE) and
Fig. 3 The percentage presence of countries in the CPG references compared with their presence in world prostate cancer research in 2000–16, fractional counts. (Heavy dashed lines show relative presence ×2 or × 0.5; light dashed line shows relative presence × 0.2. Country ISO2 codes in Table 1, plus AT = Austria, CH = Switzerland, CZ = Czech Republic, DK = Denmark, NO = Norway, TR = Turkey, TW = Taiwan.)
Fig. 4 The percentages of CPG references with gaps in years between their publication and the year of the citing CPG
Fig. 5 The mean gap between CPG publication and its references for country CPGs with at least 100 references. Numbers above columns are numbers of references for each country. UN = International Atomic Energy Agency (IAEA)
Canada (CA) cite the most recent evidence, while the IAEA, the Netherlands (NL) and Russia (RU) cite the oldest.
The Research Types or Domains of the CPG References

The next analysis was of the research types, listed in Table 2. The results are shown in Fig. 6, for all years for the cited references and for the 17 years 2000–2016 for the comparison set of world prostate oncology (PRO-ON) papers. The major difference is that the CPG references contain very few genetics papers, but many more on the three main types of treatment: chemotherapy (especially targeted therapy), radiotherapy and surgery. There is very little research in either set on palliative care or quality of life, both of which are important for patients. The results for the leading countries (with 100 or more CPG references) are shown in Fig. 7, where the countries are ranked by the percentage of their references that concern a treatment method. Estonia (EE) and Belgium (BE) are the only
Fig. 6 Comparison of research types of prostate cancer research (PRO-ON) and the CPG references
Fig. 7 The three methods of treatment for prostate cancer (DRUG = chemotherapy and targeted therapy, RADI = radiotherapy, SURG = surgery), and their percentage presence among the references on the countries’ CPGs
countries where drug treatment accounts for more than 10% of the CPG references. Radiotherapy is the most popular treatment among CPG developers at the IAEA and in Finland (FI), the USA, the UK, Japan (JP) and Germany (DE). Surgery is by far the most popular treatment method in Canada (CA), Ireland (IE) and Sweden (SE), but also in the European associations, four other European countries, and overall. Palliative care, not shown here, is of little interest to CPG developers except in Australia (7.5% of references), Sweden (3.5%) and the Netherlands (3.1%).
Fig. 8 Plot of the reciprocal of Over-Citation Ratio (OCR) for own country papers cited on selected countries’ CPG references against their percentage presence in world prostate cancer research (PRO-ON), 2000–2016
Factors Affecting the Numbers of Citations on CPGs

The over-citation ratio (the tendency of CPG developers preferentially to cite research from their own country) is examined in Fig. 8, where the percentages of “own” country references (on a fractional count basis) are compared with the countries' percentage presences in PRO-ON, and the reciprocal of the over-citation ratio is plotted against these percentages. The correlation is positive and fair for those countries with at least 80 CPG references and some own-country presence among these references. The mean number of citations for references from a given country (on a fractional count basis) is shown in Fig. 9 and is compared with the five-year mean citation score in the WoS for countries' papers from 2000 to 2012. There is again a positive correlation, but some countries show to advantage in terms of citations on CPGs,
Fig. 9 Plot of mean count of citations on CPGs on prostate cancer for papers from selected countries compared with the five-year mean citation count in the WoS of their prostate cancer research papers, 2000–2012; fractional counts for each indicator. Note: false origins for both axes; best trend-line is two-term polynomial, CPP = cites per paper. (For ISO2 codes, see Table 1. AT = Austria, CH = Switzerland and DK = Denmark.)
Fig. 10 The mean citation count of countries’ papers on our collection of CPGs as a function of the mean journal research level (1.0 = clinical observation, 4.0 = basic research) of its prostate cancer research papers (fractional counts), 2000–16
Fig. 11 The mean citation count of countries’ papers on our collection of CPGs as a function of the percentage of clinical trials on its prostate cancer research papers (fractional counts), 2000–16
Some countries show to advantage in terms of citations on CPGs, notably Sweden (SE), Finland (FI), France (FR), Spain (ES) and Norway (NO). Others are cited poorly there, particularly relative to their performance in terms of citations in the WoS, notably China (CN), Austria (AT), South Korea (KR), Japan (JP), the Netherlands (NL) and the USA (US). Most of the points above the trend-line are for European countries, except for Canada (CA). The points below the line include the three East Asian countries, although all of them have CPGs with references. The same is true for the Netherlands and the USA.
Fig. 12 Percentage presence of selected countries’ papers among the references from other countries’ CPGs plotted against their presence in prostate cancer research, 2000–16, fractional country counts. Log-log scales; ISO2 codes as in Table 1 and AT = Austria, CH = Switzerland, DK = Denmark, NO = Norway, TR = Turkey. Heavy dashed lines show ratios of 2 or 0.5; light dashed lines show ratios of 5 or 0.2
However, Austria (AT), Denmark (DK) and Switzerland (CH) do not have CPGs in our collection, so their relatively poor performance is understandable. Another factor that may have affected countries' performance in terms of CPG citations is the mean research level of their papers in prostate cancer research. We have already noted (Table 3) that the papers cited on CPGs are very clinical, and much more so than prostate cancer research papers as a whole. Figure 10 shows a weak negative correlation between the mean CPG citation score and the mean RL of the journals in which they published their prostate cancer research papers, determined on a fractional count basis. The two outliers are Finland (FI) and Sweden (SE); in their absence the correlation would rise from 0.25 to 0.52. We also considered the possibility that the countries' mean CPG citation counts might be influenced by the extent to which the cited prostate cancer research papers involved
clinical trials. The results are shown in Fig. 11, and again the correlation is positive and quite strong. Once again, Sweden and Finland are the outliers, as is France. Finally, we determined the percentage presences of references from selected countries among the references of all countries other than their own, and compared them with their presence in prostate cancer research in 2000–16. This is shown on a log-log plot in Fig. 12. The ratios represent the relative utility of each country's clinical research in prostate cancer to the provision of evidence for international CPGs. They can be compared with the abscissa of the country spots in Fig. 9. The most highly esteemed countries (i.e., those furthest above the diagonal line in Fig. 12) are the Netherlands (NL), Sweden (SE), the UK, Canada (CA), Switzerland (CH), and Norway (NO). Many of these have high citation scores in the WoS, but not all (notably the UK and Norway), showing that the indicators are revealing different measures of research impact. The least esteemed countries are China (CN), India (IN), Turkey (TR), South Korea (KR) and Japan (JP).
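The ratios underlying Fig. 12 can be illustrated with a minimal sketch, using invented percentages rather than the study's data.

```python
import math

def relative_presence(cpg_share_pct, world_share_pct):
    """Presence among other countries' CPG references divided by presence in
    world prostate cancer research; values above 1 suggest that a country's
    papers provide more guideline evidence than its output share alone implies."""
    return cpg_share_pct / world_share_pct

# Invented shares (%): presence on foreign CPGs vs. presence in PRO-ON papers.
examples = {"NL": (6.0, 3.0), "CN": (1.0, 8.0)}
for country, (cpg_pct, world_pct) in examples.items():
    ratio = relative_presence(cpg_pct, world_pct)
    # log10 of the ratio is the vertical distance from the diagonal on a log-log plot
    print(country, round(ratio, 2), round(math.log10(ratio), 2))
```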
Discussion
The first point to notice is one that has been observed in previous studies (Grant, 1999; Grant et al., 2000; Lewison & Sullivan, 2008; Pallari et al., 2018b), namely that the papers cited on prostate cancer CPGs are very clinical, with a mean RL of 1.15. Even though the RL of these references has increased over time, the change is quite small, and they are still much more clinical than the average for the world prostate cancer research literature. This may indicate an interest in citing evidence that is more relevant in the clinical context, despite efforts to understand the mechanisms of cell proliferation and the pathogenesis of prostate cancer (Chiarodo, 1991). It also appears that countries whose research is more clinical have more influence on the CPGs in our collection, and on those from foreign countries (see Figs. 10 and 12). We found, by way of illustration, that prostate cancer research papers from 2000–12 with a journal research level between 1.0 and 2.0 were cited on average 16.4 times in their first five years, whereas those with a journal research level between 3.0 and 4.0 were cited 23.9 times.
Our second observation is that some countries' CPGs rely on much more recent research evidence than others (Fig. 5). This means that the recommendations of the CPGs from those countries on the right of the chart may be somewhat outdated, as the field is advancing rapidly, particularly in screening and diagnosis. The guidelines span an 18-year time period for some countries, although the majority are from 2012 onwards. As this is quite a long time, given the importance of the topic, perhaps those organisations should consider a shorter time frame, such as three years (Shekelle et al., 2001). We emphasise the importance of a balanced compromise between timeliness and pragmatism when producing scientifically valid and rigorous clinical guidelines (Woolf et al., 1999; Browman, 2001; Shekelle et al., 2001; Pallari et al., 2018a).
A third point concerns the different emphases of the countries' CPGs on the treatment methods, shown in Fig. 7. It is hardly surprising that the IAEA references are
mainly concerned with radiotherapy, and they comprise almost two thirds of the total of 201 references, but the variation between the individual countries suggests a lack of consensus on which treatments are likely to be most beneficial. Canada and Ireland base their guidelines almost exclusively on surgery. Drug treatment (conventional and targeted chemotherapy) research plays a relatively small part in the evidence base except in Belgium and Estonia (just over one fifth of the references), and in Australia, where it outnumbers the other two treatments.
Our fourth point concerns the factors that appear to be positively correlated with frequent citation of a country's papers on CPGs. This is a parallel investigation to the many papers that have sought to explain the variation in numbers of WoS citations with the parameters of individual papers, such as the numbers of authors, numbers of acknowledged funding sources, research level, and the amount of international collaboration (Farshad et al., 2011; Kinchin, 2017; Lewison & Dawson, 1997). Such factors are unlikely to be relevant here, and our database excludes the large majority of prostate cancer research papers that are not cited on CPGs. What we have demonstrated, without of course proving causation, is that there is a positive correlation between a country's CPG citation performance and (a) its papers being well cited by other papers in the WoS; (b) its papers being clinical rather than basic; and (c) its relevant research papers including many clinical trials. The last of these is a novel finding, and it could influence the composition of a country's research output portfolio if it wished to improve the basis for new and updated recommendations for good clinical practice through CPGs.
Fifth, we have noted the tendency of a country's CPGs to over-cite its own research, and that this over-citation ratio is higher for countries with small research outputs (Fig. 8; see also Bakare & Lewison, 2017). South Korea (KR), Belgium (BE) and Italy (IT) appear to be outliers in that they self-cite less than might be expected, which suggests that their research is not having the effect on their national guidelines that might have been anticipated. This inter-country comparison could be a rather useful metric for evaluating whether a country's investment in research is likely to lead to good recommendations for the health care of its citizens, which must surely be one of the main reasons why countries carry out medical research.
Our study has some limitations. Although we sought diligently for prostate cancer CPGs from as many countries (and international organisations) as we could find, we may have missed some from countries that we know are active in the publication of CPGs in other medical subject areas, such as Brazil, Latvia and Norway, for which a search strategy in the native language would have been more appropriate. The second limitation is the uneven coverage of the guidelines from different countries, and specifically the reporting bias introduced by some CPGs having many more references than others. There appears to be a much bigger variation in the numbers of references from individual CPGs than there is from prostate cancer research papers, probably because there are no formal limits on the numbers of references on CPGs, whereas many journals impose such limits on papers submitted to them.
The third limitation is that we have only included references in the serial literature, and we are aware that some CPGs rely also on reports, book chapters, and other
guidelines, and some of these will embody recent research. Moreover, we confined our search for the bibliographic data on the cited references to those that were processed and identified in the WoS. In practice, this meant that only relatively few journal papers were not included in our database and subsequent analysis. A fourth limitation, by no means confined to this study, is that we did not distinguish between the relative importance of the different references to a CPG. Clearly, when recommendations for treatment are being made in a CPG, they must be based on the best-conducted clinical trials, involving the largest numbers of patients and a double-blind procedure. We have shown that clinical trials are an important route whereby research does get translated into recommendations in CPGs.
Conclusions
We have shown that an analysis of the evidence base of a medical subject area can provide a different but important means of evaluation of the research in the area that has been published by different actors. We have examined the differences between countries. The methodology can potentially be used for the evaluation of the outputs of researchers in selected universities or hospitals, or of the work funded by a charity or government agency. Counts of citations in the serial literature have been used almost exclusively for research evaluation but counts of citations on clinical practice guidelines give a different, and in some ways better, measure of the real-world impact of medical research on health care.
Acknowledgments
The authors would like to thank Ms Shoumiya Padman, the Nuffield Research Placement for providing support for the placement of the student, and Mr Hamish Sharp. Both students assisted the authors with the bibliography of the guidelines by running the search statements and downloading the files from the Web of Science. The authors would also like to thank Dr Philip Roe from Evaluametrics Ltd for developing the VBA macros used for the data extraction, processing of the downloaded papers, and their analysis.
References Abdelsattar, Z. M., Reames, B. N., Regenbogen, S. E., Hendren, S., & Wong, S. L. (2015). Critical evaluation of the scientific content in clinical practice guidelines. Cancer, 121(5), 783–789. Anonymous. (1994). American Society of Clinical Oncology. Recommendations for the use of hematopoietic colony-stimulating factors: Evidence-based, clinical practice guidelines. Journal of Clinical Oncology, 12(11), 2471–2508. Audet, A.-M., Greenfield, S., & Field, M. (1990). Medical practice guidelines: Current activities and future directions. Annals of Internal Medicine, 113(9), 709–714. Bakare, V., & Lewison, G. (2017). Country over-citation ratios. Scientometrics, 113(2), 1199–1207. Begum, M., Lewison, G., Wright, J. S., Pallari, E., & Sullivan, R. (2016). European noncommunicable respiratory disease research, 2002-13: Bibliometric study of outputs and funding. PLoS ONE, 11(4), e0154197.
Browman, G. P. (1994). Evidence-based recommendations against neoadjuvant chemotherapy for routine management of patients with squamous cell head and neck cancer. Cancer Investigation, 12(6), 662–670. Browman, G. P. (2001). Development and aftercare of clinical guidelines: The balance between rigor and pragmatism. Journal of the American Medical Association, 286(12), 1509–1511. Burgers, J. S., Cluzeau, F. A., Hanna, S. E., Hunt, C., & Grol, R. (2003). Characteristics of highquality guidelines: Evaluation of 86 clinical guidelines developed in ten European countries and Canada. International Journal of Technology Assessment in Health Care, 19(1), 148–157. Burgers, J., Fervers, B., Haugh, M., Brouwers, M., Browman, G., Philip, T., & Cluzeau, F. (2004). International assessment of the quality of clinical practice guidelines in oncology using the Appraisal of Guidelines and Research and Evaluation Instrument. Journal of Clinical Oncology, 22(10), 2000–2007. Cecamore, C., Savino, A., Salvatore, R., Cafarotti, A., Pelliccia, P., Mohn, A., & Chiarelli, F. (2011). Clinical practice guidelines: what they are, why we need them and how they should be developed through rigorous evaluation. European Journal of Pediatrics, 170(7), 831-836. Checketts, J. X., Sims, M. T., & Vassar, M. (2017). Evaluating industry payments among dermatology clinical practice guidelines authors. JAMA Dermatology, 153(12), 1229–1235. Chiarodo, A. (1991). National Cancer Institute roundtable on prostate cancer: Future research directions, AACR. Cook, D. J., Greengold, N. L., Ellrodt, A. G., & Weingarten, S. R. (1997). The relation between systematic reviews and practice guidelines. Annals of Internal Medicine, 127(3), 210–216. Eriksson, M., Billhult, A., Billhult, T., Pallari, E., & Lewison, G. (2020). A new database of the references on international clinical practice guidelines: A facility for the evaluation of clinical research. Scientometrics, 122(2), 1221–1235. Ewalt, P. L. (1995). Clinical practice guidelines: Their impact on social work in health. Social Work, 40(3), 293. Farshad, M., Maier, C., & Gerber, C. (2011, June). Do non-scientific factors influence citation rates of orthopedic journal articles?. In: Swiss medical weekly (Vol. 141, pp. 25S–25S). Farnsburgestr 8, CH-4132 muttenz, Switzerland: EMH Swiss Medical Publishers Ltd. Fervers, B., Burgers, J. S., Haugh, M. C., Brouwers, M., Browman, G., Cluzeau, F., & Philip, T. (2005). Predictors of high quality clinical practice guidelines: Examples in oncology. International Journal for Quality in Health Care, 17(2), 123–132. Field, M. J., & Lohr, K. N. (1990). Clinical practice guidelines: Directions for a new program. National Academies Press. Gibson, P. (1993). Asthma guidelines and evidence-based medicine. The Lancet, 342(8882), 1305. Glänzel, W., & Moed, H. (2002). Journal impact measures in bibliometric research. Scientometrics, 53(2), 171–193. Grant, J. (1999). Evaluating the outcomes of biomedical research on healthcare. Research Evaluation, 8(1), 33–38. Grant, J., Cottrell, R., Cluzeau, F., & Fawcett, G. (2000). Evaluating “payback” on biomedical research from papers cited in clinical guidelines: Applied bibliometric study. BMJ, 320(7242), 1107–1111. Greengold, N. L., & Weingarten, S. R. (1996). Developing evidence-based practice guidelines and pathways: The experience at the local hospital level. The Joint Commission Journal on Quality Improvement, 22(6), 391–402. Grimmer, K., Dizon, J. M., Milanese, S., King, E., Beaton, K., Thorpe, O., … Kumar, S. (2014). 
Efficient clinical evaluation of guideline quality: development and testing of a new tool. BMC Medical Research Methodology, 14(1), 63. Harris, J. S. (1997). Development, use, and evaluation of clinical practice guidelines. Journal of Occupational and Environmental Medicine, 39(1), 23–34. Hayward, R., & Laupacis, A. (1993). Initiating, conducting and maintaining guidelines development programs. CMAJ: Canadian Medical Association Journal, 148(4): 507.
Hayward, R. S., Wilson, M. C., Tunis, S. R., Bass, E. B., Rubin, H. R., & Haynes, R. B. (1993). More informative abstracts of articles describing clinical practice guidelines. Annals of Internal Medicine, 118(9), 731–737. Horn, J., Checketts, J. X., Jawhar, O., & Vassar, M. (2018). Evaluation of industry relationships among authors of otolaryngology clinical practice guidelines. JAMA Otolaryngology-Head & Neck Surgery, 144(3), 194–201. Kinchin, I. M. (2017). The importance of an engaging title or Titular colonicity: Is it a factor that influences citation rates? Journal of Biological Education, 51(1), 1–2. Kryl, D., Allen, L., Dolby, K., Sherbon, B., & Viney, I. (2012). Tracking the impact of research on policy and practice: Investigating the feasibility of using citations in clinical guidelines for research evaluation. British Medical Journal Open, 2(2), e000897. Kryworuchko, J., Stacey, D., Bai, N., & Graham, I. D. (2009). Twelve years of clinical practice guideline development, dissemination and evaluation in Canada (1994 to 2005). Implementation Science, 4(1), 49. Legido-Quigley, H., Panteli, D., Brusamento, S., Knai, C., Saliba, V., Turk, E., … McKee, M. (2012). Clinical guidelines in the European Union: Mapping the regulatory basis, development, quality control, implementation and evaluation across member states. Health policy, 107(2-3), 146–156. Lewison, G., Dawson, G., & Anderson, J. (1997). Support for UK biomedical research from tobacco industry. The Lancet, 349(9054), 778. Lewison, G., & Paraje, G. (2004). The classification of biomedical journals by research level. Scientometrics, 60(2), 145–157. Lewison, G., & Sullivan, R. (2008). The impact of cancer research: How publications influence UK cancer clinical guidelines. British Journal of Cancer, 98(12), 1944. Lewison, G., & Wilcox-Jay, K. (2003). Getting biomedical research into practice—The citations from UK clinical guidelines. In Proceedings of the 9th International Conference on Scientometrics and Informetrics, Beijing, China. Lohr, K. N. (1994). Guidelines for clinical practice: Applications for primary care. International Journal for Quality in Health Care, 6(1), 17–25. Lohr, K. N. (1995). Guidelines for clinical practice: What they are and why they count. The Journal of Law, Medicine & Ethics, 23(1), 49–56. Lohr, K. N., & Field, M. J. (1992). Guidelines for clinical practice: From development to use. National Academies Press. Moed, H. F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4(3), 265–277. Moed, H. F., Burger, W., Frankfort, J., & Van Raan, A. F. (1985). The use of bibliometric data for the measurement of university research performance. Research Policy, 14(3), 131–149. Network, S. T. S. I. G. Who we are. Retrieved from https://www.sign.ac.uk/who-we-are.html. Pallari, E., Fox, A. W., & Lewison, G. (2018a). Differential research impact in cancer practice guidelines’ evidence base: lessons from ESMO, NICE and SIGN. ESMO Open, 3(1), e000258. Pallari, E., Lewison, G., Ciani, O., Tarricone, R., Sommariva, S., Begum, M., & Sullivan, R. (2018b). The impacts of diabetes research from 31 European Countries in 2002 to 2013. Research Evaluation, 27(3), 270–282. Shekelle, P. G., Ortiz, E., Rhodes, S., Morton, S. C., Eccles, M. P., Grimshaw, J. M., & Woolf, S. H. (2001). Validity of the Agency for Healthcare Research and Quality clinical practice guidelines: How quickly do guidelines become outdated? JAMA, 286(12), 1461–1467. Sox, H. C., & Woolf, S. H. (1993). 
Evidence-based practice guidelines from the US preventive services task force. Journal of the American Medical Association, 269(20), 2678. Steinberg, E. P., Eknoyan, G., Levin, N. W., Eschbach, J. W., Golper, T. A., Owen, W. F., & Schwab, S. (2000). Methods used to evaluate the quality of evidence underlying the national kidney foundation-dialysis outcomes quality initiative clinical practice guidelines: Description, findings, and implications. American Journal of Kidney Diseases, 36(1), 1–11.
Todd, J. S. (1990). Do practice guidelines guide practice? The New England Journal of Medicine, 322(25), 1822–1823. Van Eck, N. J., Waltman, L., van Raan, A. F., Klautz, R. J., & Peul, W. C. (2013). Citation analysis may severely underestimate the impact of clinical research as compared to basic research. PloS one, 8(4). Woolf, S. H. (1992). Practice guidelines, a new reality in medicine: II. Methods of developing guidelines. Archives of Internal Medicine, 152(5), 946–952. Woolf, S. H., Grol, R., Hutchinson, A., Eccles, M., & Grimshaw, J. (1999). Potential benefits, limitations, and harms of clinical guidelines. BMJ, 318(7182), 527–530.
The Differing Meanings of Indicators Under Different Policy Contexts. The Case of Internationalisation
Nicolas Robinson-Garcia and Ismael Ràfols
Introduction
The development and growth of the field of evaluative scientometrics cannot be understood without the fundamental contributions of Henk Moed. Along with his colleagues at Leiden University's Centre for Science and Technology Studies (CWTS), he became a key player in establishing the basic pillars for the use of bibliometric indicators for research assessment (Moed, Bruin, & Leeuwen, 1995; Moed, Burger, Frankfort, & Van Raan, 1985). Moed's work has been characterized by a critical view of the use of indicators. He was one of the first to point out potential problems deriving from the use of the Impact Factor for research assessment (Moed & van Leeuwen, 1996; Moed & Van Leeuwen, 1995) and the limitations of scientometrics when assessing the citation impact of non-English literature (Leeuwen, Moed, Tijssen, Visser, & Raan, 2001), among others. His two single-authored books (Moed, 2005, 2017b), essential reading for anyone interested in the field, are characterized by an open-minded and pedagogical tone which reflects a critical and constructive view of evaluative scientometrics. In his latest book, Moed proposes shifting away from a 'narrow' evaluative use of indicators to a more analytical one. He warns that "[t]o the extent that in a practical application an evaluative framework is absent or implicit, there is a vacuum, that may be easily filled either with ad hoc arguments of evaluators and policy makers, or with un-reflected assumptions underlying informetric tools" (Moed, 2017a, p. 29). In his view, the selection of indicators should be made within the 'policy context' in which they are going to be implemented (Moed & Halevi, 2015). Building upon this body
of work, in this chapter we aim to explore this 'analytical' perspective on the use of scientometrics further. We stress that context provides the appropriate framework not only for the selection of indicators, but also for their interpretation, moving from a universal reading of indicators to a context-dependent one. At this point, it is important to distinguish between policy context and adapting the indicators to a given context, which Waltman (2019a, b) refers to as 'contextualised scientometrics'. In the latter case, context is understood as a means to ensure transparency and to facilitate a better understanding of how the indicator is constructed and adapted to specific fields, countries or languages. The purpose of 'contextualised scientometrics' is to allow the user to grasp the limitations and biases inherent in scientometric indicators so that they are not misinterpreted because of technical and conceptual assumptions about what the indicator is measuring. This is the line of thought followed by Gingras (2014) when defining the three desirable characteristics of a well-designed indicator: (1) adequacy for the object it measures, (2) sensitivity to the intrinsic inertia of the object, and (3) homogeneity of the dimensions of the indicator. However, the focus here is on policy context, which has to do with the understanding of the purpose of the assessment, and with the selection of the indicator and its interpretation in the light of broader social or policy factors which may be crucial to understand what the indicator is actually portraying.
To illustrate the importance of policy context when interpreting scientometric indicators, in this chapter we focus on their use for studying the effects of internationalisation policies in science. The aim is to highlight how a de-contextualised use of scientometric indicators can work against the very goals for which the indicators were originally introduced. The phenomenon of globalization in science gives us a good example to explore such ambiguity, as many countries have turned their attention towards scientometrics in order to implement internationalisation policies. Furthermore, they have introduced or interpreted indicators assuming (wrongly) that globalization affects all countries equally. This therefore represents an excellent playground for understanding how context shapes the meaning of indicators.
For instance, the increase in international collaboration since the 1980s (Adams, 2012) is usually interpreted as a positive factor for increasing scientific impact (Persson, Glänzel, & Danell, 2004). Mobility has also increased and is usually perceived as benefitting research careers (Sugimoto et al., 2017; Zippel, 2017). However, there are conflicting views on whether the national impact of mobility is positive or negative (Arrieta, Pammolli, & Petersen, 2017; Levin & Stephan, 1999; Meyer, 2001). Since the end of the 20th century, many governments have introduced publication policies that push researchers to publish in international venues (namely, the journals indexed in the Web of Science and Scopus databases) and in English as a means to improve their international profile (Jiménez-Contreras, de Moya Anegón, & López-Cózar, 2003; Van Raan, 1997; Vessuri, Guédon, & Cetto, 2014). These policies tend to be supported with indicators which are all interpreted in the same manner, that is, assuming that the more internationalisation and the more mobility, the better.
For instance, an increase in international collaboration is assumed to be positively related to citation impact (Persson et al., 2004) and is especially encouraged
in countries with lower national scientific impact (Bote, Olmeda-Gómez, & Moya-Anegón, 2013). Mobility is also considered positive at the individual and global levels (Sugimoto et al., 2017; Wagner & Jonkers, 2017). However, it is perceived differently in specific countries: in Spain or China, for example, it is seen as positive when scientists' return is ensured (Andújar, Cañibano, & Fernandez-Zubieta, 2015; Jonkers & Tijssen, 2008), while in Africa it is perceived negatively because of the high risk of brain drain (Bassioni, Adzaho, & Niyukuri, 2016). Finally, publishing in English is perceived as essential to improve the visibility of research outputs (Buela-Casal & Zych, 2012). In an influential piece, Leeuwen et al. (2001) demonstrated the major biases against non-English-language journals in the Journal Impact Factor (JIF). Citation rates to these journals are consistently lower than those of English-language journals because of the poor coverage of non-English literature in the Web of Science. To counteract such bias, some journals from non-English countries have ceased publishing in their original language and switched to English in the expectation that this would increase their visibility and hence their citation rates (Robinson, 2016). These three examples (international collaboration, mobility and English publishing in non-English speaking countries) will be discussed in this chapter to show how a de-contextualised use of scientometric indicators can work against the implementation of policies seeking to improve national research systems.
The chapter is structured as follows. First, we frame the challenges raised by the globalization of research, the policies for internationalisation implemented in different countries, and how these are shaping national scientific workforces. Next, we discuss two examples of where a de-contextualised use of indicators may lead to misinterpretations: the use of international collaboration to achieve greater scientific impact, and the use of evaluation based on Journal Impact Factors to internationalise national scientific literature. After that, we discuss how a narrow interpretation of a global phenomenon such as the globalisation of the scientific workforce can lead to the definition of partial indicators which may be ill-suited. We conclude with some final remarks.
Globalization and Research Evaluation
Research has always had a fundamental global component attached to its endeavour. However, it is usually assumed that the dawn of the 21st Century marks the beginning of a 'truly' global scientific system (Altbach, 2004; Robinson-Garcia & Jiménez-Contreras, 2017). Here we provide some evidence pointing in this direction. First, the rise of world university rankings with the launch of the Shanghai Ranking in 2003 (Aguillo, Bar-Ilan, Levene, & Ortega, 2010) unleashed a global competition for talent and resources (Hazelkorn, 2011). Despite their many and serious flaws, rankings have transformed the perceived prestige of universities (Bastedo & Bowman, 2010; Moed, 2017a) and have directly influenced decision making at the institutional level (Robinson-Garcia, Torres-Salinas, Herrera-Viedma, & Docampo,
2019, p. 233). Second, international scientific networks have shifted from being formed quasi-exclusively by western countries to more inclusive global scientific collaboration networks (Wagner, Park, & Leydesdorff, 2015), derived partly from the R&D growth in countries such as China (Quan, Mongeon, Sainte-Marie, Zhao, & Larivière, 2019) or Brazil (Leta & Chaimovich, 2002). These new global communities are characterized by a tight and small core of countries, in which the dissemination of knowledge is dependent on a reduced number of countries (Leydesdorff & Wagner, 2008), while allowing the inclusion of new players in the global network (Wagner & Leydesdorff, 2005). The transformation of the higher education landscape has confronted traditional universities with a new scenario. They are asked to respond to local problems and national priorities, while competing in and forming part of global scientific networks and responding to their expectations (Nerad, 2010). This dual challenge has directly influenced the development of scientometrics; three examples are provided here. First, the increasing interest in societal impact and interdisciplinary research ('Mode 2', Gibbons et al., 1994) has led to different proposals for measuring societal impact, in particular with 'altmetrics' (Díaz-Faes, Bowman, & Costas, 2019; Haustein, Bowman, & Costas, 2016; Robinson-Garcia, van Leeuwen, & Rafols, 2018), and to the development of indicators for measuring interdisciplinarity in research (Abramo, D'Angelo, & Costa, 2017; Larivière & Gingras, 2010; Leydesdorff & Rafols, 2011; Rafols, Leydesdorff, O'Hare, Nightingale, & Stirling, 2012). Second, the introduction of new public management methods in research management has led many governments and institutions to use indicators to assess individuals' careers within performance-based assessment systems (Ràfols, Molas-Gallart, Chavarro, & Robinson-Garcia, 2016), inducing a plethora of studies on individual research assessment (i.e., Costas, van Leeuwen, & Bordons, 2010; Hirsch, 2019). Third, the formation of international networks as a result of proactive policies has raised interest in the study of international collaboration (Bote et al., 2013; Leydesdorff & Wagner, 2008), and more recently, international mobility (Moed, Aisati, & Plume, 2013; Moed & Halevi, 2014; Robinson-Garcia et al., 2019; Sugimoto et al., 2017). Scientometric indicators have thus grown in importance, in particular within national strategies of internationalisation. We now briefly review some examples related to collaboration and publication venue.
The globalization of science is often studied through the analysis of international co-authorship patterns and the structure of the networks that emerge from these patterns (Wagner, 2019). Despite some reservations (Persson et al., 2004), international collaboration is generally perceived as a positive factor for achieving higher scientific impact and promoting networks of prestigious scientists, which may lead to more novel science (Wagner, 2019). As a consequence, it is commonly included in world university rankings, and mobility policies have been introduced requiring scientists to return to the country of origin so that they bridge between the receiving and sending countries (Fang, Lamers, & Costas, 2019).
Policies promoting certain publication strategies are well known. They favour publishing in journals indexed in the Web of Science or Scopus, preferably in journals within the top quartile of Clarivate's Journal Citation Reports according to their Impact Factor. Countries implementing these types of policies presently or in the past include China (Quan, Chen, & Shu, 2017), Finland (Adam, 2002), Spain (Jiménez-Contreras, López-Cózar, Ruiz-Pérez, & Fernández, 2002), the Czech Republic (Good, Vermeulen, Tiefenthaler, & Arnold, 2015) and major Latin American countries such as Mexico, among others (Vessuri et al., 2014). As the Impact Factor is biased against non-English languages (Leeuwen et al., 2001), scientists and national journals in non-English countries are pushed into publishing in English (González-Alcaide, Valderrama-Zurián, & Aleixandre-Benavent, 2012) as a means of fostering internationalisation and with the expectation of gaining greater citation impact.
In all these cases, the rationale for introducing such policies is the same. International collaboration and publishing internationally (English-language publications) are taken as signatures of research quality that lead to high visibility. Science produced in this context (either through collaboration or by publishing in highly visible venues) leads to highly cited science. Finally, it is assumed that a system which produces more highly cited science is better (sometimes it may even be argued that it is more positive and beneficial for society, e.g., Baldridge, Floyd, & Markóczy, 2004). But, as we will now discuss, context shapes the extent to which this is true and reveals the potential pitfalls of this type of argument. In the following sections, we further explore the cases of international collaboration and publishing in English, and discuss some common examples of how a universal interpretation of scientometric indicators can be misleading depending on the context in which it is used.
Two Cases on How de-Contextualized Indicators Lead to Wrong Interpretations
International Collaboration
International collaboration, measured by the share of publications in which affiliations from more than one country appear, is generally perceived as an intrinsically positive indicator. Thus, university rankings such as the World University Rankings, the Scimago Institutions Rankings or the Leiden Ranking include the share of internationally co-authored publications as one of their dimensions. This perception is especially noticeable when discussing strategies for enhancing scientific development in countries outside the scientific core. For instance, Quan et al. (2019) state that:
For developing countries, international collaborations represent an ideal opportunity to improve both scientific visibility and research impact by allowing their researchers to work with colleagues from more advanced scientific countries (p. 708)
While this may be true to some extent regarding citation impact, it is questionable, at least when analysing a country's capacity to develop scientific knowledge autonomously and independently. In Fig. 1 we show the share of internationally co-authored publications for countries worldwide according to their income level. While these shares are partially influenced by the size of the countries in each group, we observe that, in the cases of high-income, upper-middle and lower-middle income countries, the shares are always below 40% of their total output. However, for low-income countries this share rises to 86% of their total output, evidencing the fragility of their research systems and their dependence on developed countries when producing research outputs. Evidence shows that lower- and middle-income countries have a much higher citation impact when collaborating internationally. But does this mean that a 95% international collaboration rate is better than a 75% rate? At which point should policy foster the development of domestic capabilities without reliance on international collaboration? This calls into question the extent to which the same bibliometric indicators can be interpreted, or even applied, in the same way in specific countries (Confraria, Mira Godinho, & Wang, 2017). In this context, it also seems reasonable to ask whether international collaboration always responds best to local needs or whether it is driven by the desire to join global scientific networks. In a case study focusing on six South Asian countries (Woolley, Robinson-Garcia, & Costas, 2017), we identified differences in the choice of partner and fields of interest when decomposing international collaboration into bilateral (co-authors affiliated to two distinct countries) and multilateral (co-authors affiliated to more than two distinct countries) collaboration. We concluded that these two collaboration patterns may be related to the nature of the fields as well as to the existence of mobility programmes in which the emigrant bridges between countries while establishing wider networks with other countries.
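To make the indicator concrete, the following minimal sketch (with an invented list of publications) computes the share of internationally co-authored publications for a country and decomposes it into bilateral and multilateral collaboration, in the spirit of the decoupling described above.

```python
def collaboration_profile(pubs, country):
    """pubs: list of sets of author countries per publication.
    Returns shares of domestic, bilateral and multilateral papers
    among the given country's output."""
    own = [p for p in pubs if country in p]
    domestic = sum(1 for p in own if len(p) == 1)
    bilateral = sum(1 for p in own if len(p) == 2)
    multilateral = sum(1 for p in own if len(p) > 2)
    n = len(own)
    return {"domestic": domestic / n,
            "bilateral": bilateral / n,
            "multilateral": multilateral / n,
            "international": (bilateral + multilateral) / n}

# Toy example: four publications with their sets of author countries.
pubs = [{"LK"}, {"LK", "GB"}, {"LK", "US", "IN"}, {"LK", "IN"}]
print(collaboration_profile(pubs, "LK"))
# {'domestic': 0.25, 'bilateral': 0.5, 'multilateral': 0.25, 'international': 0.75}
```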
Fig. 1 Share of internationally co-authored (yellow) and domestic (grey) publications indexed as articles in the Web of Science SCI, SSCI and A&HCI according to country income level (World Bank's definition) in 2008–2017
In a follow-up study (Robinson-Garcia, Woolley, & Costas, 2019), we studied the degree to which countries follow global publication patterns, using the cosine similarity of the disciplinary profiles of countries with and without international collaboration. By combining these indicators, we can show that the interpretation of the indicator of the proportion of international collaborations is not as straightforward as it might seem. Furthermore, context on the specific countries or regions is needed to better understand what is motivating such collaboration and whether it fits with national interests. The reason for this is that international collaboration, as conceived in scientometrics, is usually interpreted as a reciprocal relationship in which all partners are equal. In fact, however, developed countries hold positions of relative power in collaboration networks (Leydesdorff & Wagner, 2008), which causes asymmetries in scientific partnerships (Chinchilla-Rodríguez, Bu, Robinson-García, Costas, & Sugimoto, 2018; Feld & Kreimer, 2019).
Figure 2 shows the disciplinary similarity of domestic versus internationally co-authored publications and the share of internationally co-authored publications for European (top) and African (bottom) countries. The case of Europe is of interest, as many policies have been put into place to coordinate the scientific integration of the different EU member states (Ackers, 2005). In this regard, we observe how northern and western countries tend to cluster together on the upper right side of the graph, correlating their domestic and international disciplinary profiles with their collaboration patterns (top chart in Fig. 2). From the perspective of the European Research Area (ERA), this would be a desired path to follow, and the rest of the countries included in the plot would be expected to align with this pattern. However, one might question whether this would be the most appropriate choice from a national point of view, especially for eastern European countries, which tend to show a more dissimilar disciplinary profile. In the case of Sub-Saharan Africa, the reading is completely different, and what we observe is a research profile largely driven by international partners, with the exceptions of South Africa and Nigeria. This raises the question of the extent to which such research is based on local capabilities and responds to local demands and challenges. The very high share of internationally co-authored publications may actually be an indicator of the weakness of their national scientific systems and their dependence on international partners. These two examples illustrate how the same indicator can be interpreted in different ways within regions (eastern European vs. western and northern European countries) and between regions (Europe vs. Sub-Saharan Africa).
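A minimal sketch of the similarity measure mentioned above, using invented publication counts per field rather than real data, is the following.

```python
import math

def cosine_similarity(profile_a, profile_b):
    """Profiles are dicts mapping fields to publication counts."""
    fields = set(profile_a) | set(profile_b)
    a = [profile_a.get(f, 0) for f in fields]
    b = [profile_b.get(f, 0) for f in fields]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical disciplinary profiles (papers per field) for one country.
domestic = {"agriculture": 120, "clinical medicine": 80, "physics": 20}
international = {"clinical medicine": 150, "physics": 90, "agriculture": 30}
print(round(cosine_similarity(domestic, international), 2))
# A value close to 1 means domestic and internationally co-authored papers address similar fields.
```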
Publishing in English as a Strategy for Internationalisation
In this second case study we focus on the use of the English language as a strategy to internationalise research in non-English-speaking countries, and in particular on the share of publications in English in such countries.
Fig. 2 Scatterplots for countries from Europe and Central Asia (top) and Sub-Saharan Africa (bottom). The X axis shows the cosine similarity of the disciplinary profile between their domestic and internationally co-authored publications; the Y axis shows the proportion of internationally co-authored publications. The size of a point reflects the total number of publications. Time period 1980–2018. Data from SCI, SSCI and A&HCI. For European countries, colour and shape refer to region (red eastern European countries, blue southern European countries and green north and central European countries). Only countries with at least 8,000 publications are shown; for Sub-Saharan Africa, only countries with at least 1,000 publications. Further information on the methodology is available in Robinson-Garcia et al. (2019)
This indicator is perceived by researchers as a proxy for internationalising their research outputs (Buela-Casal & Zych, 2012). Here, citation-based indicators in general, and the Journal Impact Factor in particular, are the indicators motivating such strategies. Already in 2001, Leeuwen et al. (2001) noted systematic biases in the Web of Science in terms of its coverage of non-English literature and of the citation impact of such literature. Indeed, as shown in Fig. 3, English accounts for 96% of the publications indexed, with none of the other 46 languages included surpassing 1% of the database (German, the second most common language, represented 0.9% of the database in the 2000–2017 period). Furthermore, despite the small drop between 2006 and 2010, due to the inclusion of non-English journals in the database, the English language rapidly caught up and even increased its share, representing 97.6% of the database in 2017. However, this over-representation of English literature is seldom seen as a shortcoming of the data source.
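Figure 3 (below) also reports the Mean Normalised Citation Score (MNCS) by language. As a hedged sketch of how such a score is commonly computed, each paper's citation count is divided by the average citations of comparable papers (same field, year and document type) and the resulting ratios are averaged; the baseline values used here are invented for illustration.

```python
def mncs(papers, baselines):
    """papers: list of (citations, field, year) tuples.
    baselines: expected citations per (field, year) for comparable papers.
    Returns the mean of the field-normalised citation scores."""
    scores = [cites / baselines[(field, year)] for cites, field, year in papers]
    return sum(scores) / len(scores)

# Hypothetical set of papers in one language and invented world baselines.
papers = [(10, "oncology", 2010), (2, "urology", 2012), (0, "oncology", 2015)]
baselines = {("oncology", 2010): 20.0, ("urology", 2012): 8.0, ("oncology", 2015): 5.0}
print(round(mncs(papers, baselines), 2))  # 0.25: cited at a quarter of the world average
```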
Fig. 3 Mean Normalised Citation Score (MNCS, top) and share of publications (bottom) for the five most common languages in the Web of Science over the 2000–2017 period
Instead, in many countries, the lack of inclusion of national journals in the WoS is seen as evidence of the lack of internationalisation of their research outputs, as they are less visible (not indexed in these large databases) and less cited. This especially affects the Social Sciences and Humanities, which are more prone to rely on national languages and address a more diversified audience than other fields (Nederhof, 2006; Sivertsen, 2016). But such a negative connotation is also seen in other fields in which translational research and contact with practitioners are essential, such as Clinical Medicine (Rey-Rocha & Martín-Sempere, 2012). Despite the poor coverage of non-English literature, we observe in Fig. 4 (top chart) that in the case of Spain there has been a marked shift from Spanish or other languages to English even within publications indexed in the Web of Science (from more than 40% in non-English languages in 1980 to less than 20% in 2017). Furthermore, despite the efforts of these databases to include more non-English literature, the pressure for internationalisation seems rapidly to overcome such efforts.
Fig. 4 Proportion of publications in English, local and other languages for Spain in Clinical Medicine (top) and Brazil in Social Sciences (bottom) between 1980 and 2017
Figure 4 (bottom chart) shows the proportion of outputs from Brazil in the Social Sciences between 1980 and 2017. We observe a rapid turn to English in the latter part of the 1980s and then an important increase in Portuguese-language literature in the second half of the 2000s, due to the addition of Brazilian journals to the database. However, this increase was rapidly overridden, and by 2015 the proportion of English-language literature was the same as before the inclusion of national journals. This is because many authors and journals from non-English-speaking countries switch to publishing in English in the hope of achieving greater visibility and perhaps higher citation rates. These strategies range from a complete switch to English, to maintaining bilingual versions of research articles, to opening up to multilingualism, in which authors decide whether they wish to publish in their national language or in English. Despite the overwhelming evidence of the citation advantage of English publications (see Fig. 3 and further evidence in González-Alcaide et al., 2012; Liu, Hu, Tang, & Liu, 2018), experiences of journals switching to or adding English have produced contradictory results (Robinson, 2016). Larivière (2018) offers some possible reasons why national journals may not succeed in increasing their impact when changing their publication language, or why, even if they manage to increase their impact, they never reach Impact Factors similar to those achieved by journals from English-speaking countries. First, there might be an author bias, as authors tend to perceive national journals as less worthy and might decide to submit their lesser work to them. Second, these journals might focus on issues of local relevance which are not well covered by foreign journals (Piñeiro & Hicks, 2015). Also, these journals may have different functions from mainstream journals, serving as conduits to inform local communities (Chavarro, Tang, & Ràfols, 2017). For instance, recent correspondence in Nature raised awareness of the need to publish in non-English languages to reach certain communities in India (Khan, 2019). Furthermore, forcing non-native speakers to publish in English places them at a disadvantage with respect to native speakers, both as authors (Henshall, 2018) and as journals (González-Alcaide et al., 2012), and may also lead to a duplication of research contents (in both the native language and English) in order to reach national and international audiences, as observed in the case of Chinese literature (Liu et al., 2018).
Given such evidence, how can we internationalise a country's outputs without affecting their national visibility? In view of the dual challenge with which institutions are confronted in a global world, how can they open up their research findings to their local communities while becoming part of global scientific discussions? Sivertsen (2018) argues in this sense in favour of the promotion of multilingualism in science. He illustrates this notion with the case of the Social Sciences and Humanities, commonly considered of a more localised nature (Hicks, 2005), but which can actually 'be valued as an example of combining international excellence with local relevance in a multilingual approach to research communication' (Sivertsen, 2018, p. 3).
The message, therefore, is to shift away from what Neylon (2019) refers to as a false dichotomy, in which the setting of local priorities towards societal engagement and wider impacts is positioned as being in opposition to "objective" and "international" measures of "excellence" (p. 4).
Looking at International Mobility from Different Angles
In this section we change the focus. Instead of looking at the interpretation of an indicator in different contexts, we examine how different policy contexts referring to the same phenomenon, namely the international flows of the scientific workforce, can lead to the selection of different indicators. Mobility flows of scholars are a case in which universal uses of indicators clearly come into conflict with the context in which they are applied. As we will show, discussions on brain drain/gain or brain circulation tend to reflect different points of view of the same phenomenon, in which all interpretations tend to be partial. As Nerad (2010, p. 2) puts it:
[D]ue to globalization, institutions responsible for graduate education today must fulfil a dual mission: building a nation's infrastructure by preparing the next generation of professionals and scholars for the local and national economy, both inside and outside academia, and educating their domestic and international graduate students to participate in a global economy and an international scholarly community. This dual mission is often experienced as a tension, because universities in many ways operate under a sole national lens.
Such tension is reflected in the literature studying mobility flows. This case differs from the two previous ones in that here a national or regional view of the same phenomenon affects the selection of the indicator used. We now present two examples in which different indicators are used depending on the local policy context: on the one hand, the European Union, where the emphasis is placed on brain circulation and the promotion of knowledge transfer among member states (Ackers, 2005); on the other, the United States, where interest in mobility is related to brain gain, in other words the capacity to attract highly skilled scientists (Levin & Stephan, 1999).
The European Union and the Promotion of International Mobility
In the case of Europe, interest in mobility derives from the desire to promote a stronger and more cohesive European Research Area (ERA) by developing strong knowledge flows and a common labour market that can compete with the US. Here, bibliometric indicators have played a key role in the development of the Framework Programmes, with CWTS leading this movement (Delanghe, Sloan, & Muldur, 2010). The promotion of knowledge flows within the region is seen as one of the key strategies to ensure a more direct path towards innovation (Tijssen & van Wijk, 1999). This interest in mobility has permeated many countries, encouraging scientists to undertake short-term mobility periods (Cañibano, Fox, & Otamendi, 2016) and making international experience a pre-requisite in programmes for attracting talent (i.e., Cañibano, Otamendi, & Andujar, 2008; Torres-Salinas & Jiménez-Contreras, 2015).
The scientometric study of the effects of such policies is usually based on the analysis of increases in international collaboration (Andújar et al., 2015; Jonkers & Tijssen, 2008), with few attempts at bibliometrically tracking the mobility of scholars (Laudel, 2003). However, the development of author name disambiguation algorithms has recently made it possible to track changes in scientists' affiliations bibliometrically, with Moed authoring the first large-scale analyses (Moed et al., 2013). This has allowed researchers to explore geographical, disciplinary or sectoral mobility flows of scientists (Moed & Halevi, 2014). Recent developments have expanded the notion of mobility by characterizing mobility types and mobility flows (Robinson-Garcia et al., 2019; Sugimoto et al., 2017), and have opened the door to a better understanding of the relation between knowledge flows and mobility (Aman, 2018; Franzoni, Scellato, & Stephan, 2018) and to characterising mobile scholars (Halevi, Moed, & Bar-Ilan, 2016a, b). Still, while certainly novel and surpassing many of the limitations of previous attempts to study mobility flows (Sugimoto, Robinson-Garcia, & Costas, 2016), this type of approach offers a narrow definition of what 'mobile' means, as it relies heavily on publication data (an author can be identified as mobile only if she publishes in both the origin and destination countries) and ignores a type of mobility that is very common in Europe, namely short-term temporary mobility. This latter point especially affects younger scholars, who may move temporarily to other countries while retaining their home affiliation as part of their training.
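As a simplified illustration of this publication-based approach (not the algorithm of the studies cited above), the following sketch infers a coarse mobility type from the chronological sequence of affiliation countries attached to a disambiguated author's papers; the two mobile categories loosely echo the distinction between scholars who leave their origin country and those who add affiliations while retaining it.

```python
def mobility_type(affiliation_countries):
    """affiliation_countries: chronologically ordered list of sets of
    countries appearing in one author's publications."""
    origin = next(iter(affiliation_countries[0]))
    seen = set().union(*affiliation_countries)
    if len(seen) == 1:
        return "not mobile (under this publication-based definition)"
    if origin not in affiliation_countries[-1]:
        return "migrant: origin country no longer among the affiliations"
    return "traveller: additional countries alongside the origin affiliation"

# Hypothetical publication histories (one set of affiliation countries per paper).
print(mobility_type([{"ES"}, {"ES"}, {"ES"}]))
print(mobility_type([{"ES"}, {"GB"}, {"GB"}]))
print(mobility_type([{"ES"}, {"ES", "NL"}, {"ES", "NL"}]))
```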
The United States and the Attraction of Foreign-Born Scientists
A completely different body of literature can be found in the United States with regard to the globalization of the scientific workforce. In this country, foreign-born scientists represent 24% of the faculty (Lin, Pearce, & Wang, 2009) and their proportion keeps increasing, outpacing even new hires from domestic racial/ethnic minority groups (Kim, Twombly, & Wolf-Wendel, 2012). Here the interest lies in how to attract and integrate foreign scholars who arrive in the country, as they have become a key asset for the national science system (Levin & Stephan, 1999; Lin et al., 2009). International experience is measured as an inherent characteristic of the individual, given by their visa status and not by their experience of working in several countries. Hence, US-born scientists are considered domestic, and no interest is shown in their capacity to integrate within global networks. Studies in this area use scientometric indicators to analyse the performance of this workforce (Stephan & Levin, 2001), but also rely on survey data, analysing other aspects such as job satisfaction (Mamiseishvili, 2011), productivity (Kim et al., 2012; van Holm, Wu, & Welch, 2019) or the capacity for engagement with non-academic sectors (Libaers, 2014).
In these two examples, we observe how flows of scholars are defined differently. In the case of Europe, a 'global scholar' is someone who has changed their affiliation
between countries, while in the case of the United States it is someone born abroad. When comparing both ways of operationalising the same phenomenon, we observe many disparities (Robinson-Garcia, van Holm, Melkers, & Welch, 2018). As a means of reconciling these two partial views of the same phenomenon, Welch et al. (2018) propose a comprehensive framework in which mobility and visa status are seen as two of the many features which characterize the global experience of scientists. They highlight the importance of considering such experience as a multi-layered concept. These diverging ways of studying the effects of globalisation on the scientific workforce also highlight the importance of balancing the local and global aspects of the same phenomenon in order to define indicators which are truly meaningful in a given national or regional science system.
Concluding Remarks In this chapter we have explored how the meaning of scientometric indicators can vary depending on the policy context in which they are applied. We have focused on the globalisation of science and on the use countries make of scientometric indicators to support or introduce internationalisation policies. The chapter is inspired by H. F. Moed's oeuvre in three different ways. First and most importantly, by his conceptualisation of, and emphasis on, policy context as a necessary framing when considering the use of scientometrics for research assessment (Moed & Halevi, 2015). Second, by his work with colleagues denouncing the biases of the Journal Impact Factor and the dangers of using it, especially in non-English-speaking countries and in the social sciences (Leeuwen et al., 2001; Moed & van Leeuwen, 1996). Finally, by his pioneering work on the use of author name disambiguation algorithms to track the mobility of scholars at large scale (Moed et al., 2013). By presenting two examples of how scientometric indicators are used without consideration of the policy context in which they are applied, we have explored the (mostly ignored) interpretative ambiguity of widely used indicators, such as the share of internationally co-authored publications or the JIF. The purpose is not to condemn their use, as we do believe that the information provided by scientometric indicators is useful for informing policy makers. But we warn of the need to contextualise them and to use them as informative devices that can support research assessment exercises, rather than as assessment devices that can be applied automatically. The third example illustrates a different kind of misinterpretation: two cases in which the same phenomenon is analysed with different indicators. From this case, we learn how a narrow view of a phenomenon, one that ignores the global policy context, can lead to short-sighted indicators which may not be adequate.
References Abramo, G., D’Angelo, C. A., & Costa, F. D. (2017). Do interdisciplinary research teams deliver higher gains to science? Scientometrics, 111(1), 317–336. https://doi.org/10.1007/s11192-0172253-x. Ackers, L. (2005). Moving people and knowledge: Scientific mobility in the European Union1. International Migration, 43(5), 99–131. https://doi.org/10.1111/j.1468-2435.2005.00343.x. Adam, D. (2002). The counting house. Nature, 415, 726–729. https://doi.org/10.1038/415726a. Adams, J. (2012). Collaborations: The rise of research networks. Nature, 490, 335–336. https://doi. org/10.1038/490335a. Aguillo, I. F., Bar-Ilan, J., Levene, M., & Ortega, J. L. (2010). Comparing university rankings. Scientometrics, 85(1), 243–256. Altbach, P. G. (2004). Globalisation and the university: Myths and realities in an unequal world. Tertiary Education and Management, 10(1), 3–25. https://doi.org/10.1023/B:TEAM.0000012239. 55136.4b. Aman, V. (2018). A new bibliometric approach to measure knowledge transfer of internationally mobile scientists. Scientometrics, 117(1), 227–247. https://doi.org/10.1007/s11192-018-2864-x. Andújar, I., Cañibano, C., & Fernandez-Zubieta, A. (2015). International stays abroad, collaborations and the return of spanish researchers. Science Technology & Society, 20(3), 322–348. https://doi.org/10.1177/0971721815597138. Arrieta, O. A. D., Pammolli, F., & Petersen, A. M. (2017). Quantifying the negative impact of brain drain on the integration of European science. Science Advances, 3(4), e1602232. https://doi.org/ 10.1126/sciadv.1602232. Baldridge, D. C., Floyd, S. W., & Markóczy, L. (2004). Are managers from Mars and academicians from Venus? Toward an understanding of the relationship between academic quality and practical relevance. Strategic Management Journal, 25(11), 1063–1074. https://doi.org/10.1002/smj.406. Bassioni, G., Adzaho, G., & Niyukuri, D. (2016). Brain drain: Entice Africa’s scientists to stay. Nature, 535(7611), 231–231. https://doi.org/10.1038/535231c. Bastedo, M. N., & Bowman, N. A. (2010). U.S. News & world report college rankings: Modeling institutional effects on organizational reputation. American Journal of Education, 116(2), 163– 183. https://doi.org/10.1086/649436 Bote, V. P. G., Olmeda-Gómez, C., & de Moya-Anegón, F. (2013). Quantifying the benefits of international scientific collaboration. Journal of the American Society for Information Science and Technology, 64(2), 392–404. https://doi.org/10.1002/asi.22754. Buela-Casal, G., & Zych, I. (2012). How to measure the internationality of scientific publications. Psicothema, 24(3), 435–441. Cañibano, C., Fox, M. F., & Otamendi, F. J. (2016). Gender and patterns of temporary mobility among researchers. Science and Public Policy, 43(3), 320–331. https://doi.org/10.1093/scipol/ scv042. Canibano, C., Otamendi, J., & Andujar, I. (2008). Measuring and assessing researcher mobility from CV analysis: The case of the Ramon y Cajal programme in Spain. Research Evaluation, 17(1), 17–31. https://doi.org/10.3152/095820208X292797. Chavarro, D., Tang, P., & Ràfols, I. (2017). Why researchers publish in non-mainstream journals: Training, knowledge bridging, and gap filling. Research Policy, 46(9), 1666–1680. https://doi. org/10.1016/j.respol.2017.08.002. Chinchilla-Rodríguez, Z., Bu, Y., Robinson-García, N., Costas, R., & Sugimoto, C. R. (2018). Travel bans and scientific mobility: Utility of asymmetry and affinity indexes to inform science policy. Scientometrics, 116(1), 569–590. https://doi.org/10.1007/s11192-018-2738-2. 
Confraria, H., Mira Godinho, M., & Wang, L. (2017). Determinants of citation impact: A comparative analysis of the Global South versus the Global North. Research Policy, 46(1), 265–279. https://doi.org/10.1016/j.respol.2016.11.004. Costas, R., van Leeuwen, T. N., & Bordons, M. (2010). A bibliometric classificatory approach for the study and assessment of research performance at the individual level: The effects of age on
productivity and impact. Journal of the American Society for Information Science and Technology, 61(8), 1564–1581. https://doi.org/10.1002/asi.21348. Delanghe, H., Sloan, B., & Muldur, U. (2010). European research policy and bibliometric indicators, 1990–2005. Scientometrics, 87(2), 389–398. https://doi.org/10.1007/s11192-010-0308-3. Díaz-Faes, A. A., Bowman, T. D., & Costas, R. (2019). Towards a second generation of ‘social media metrics’: Characterizing Twitter communities of attention around science. PLoS ONE, 14(5), e0216408. https://doi.org/10.1371/journal.pone.0216408. Fang, Z., Lamers, W., Costas, R. (2019). Studying the Scientific Mobility and International Collaboration Funded by the China Scholarship Council. Presented at the ISSI/STI. (2019). Conference. Rome: Italy. Feld, A., & Kreimer, P. (2019). Scientific co-operation and centre-periphery relations: Attitudes and interests of European and Latin American scientists. Tapuya: Latin American Science, Technology and Society, 0(0), 1–27. https://doi.org/10.1080/25729861.2019.1636620. Franzoni, C., Scellato, G., & Stephan, P. (2018). Context factors and the performance of mobile individuals in research teams. Journal of Management Studies, 55(1), 27–59. https://doi.org/10. 1111/joms.12279. Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P., & Trow, M. (1994). The new production of knowledge: The dynamics of science and research in contemporary societies. London: SAGE. González-Alcaide, G., Valderrama-Zurián, J. C., & Aleixandre-Benavent, R. (2012). The impact factor in non-English-speaking countries. Scientometrics, 92(2), 297–311. https://doi.org/10.1007/ s11192-012-0692-y. Good, B., Vermeulen, N., Tiefenthaler, B., & Arnold, E. (2015). Counting quality? The Czech performance-based research funding system. Research Evaluation, 24(2), 91–105. https://doi. org/10.1093/reseval/rvu035. Halevi, G., Moed, H. F., & Bar-Ilan, J. (2016a). Does research mobility have an effect on productivity and impact? International Higher Education, 86(86), 5–6. https://doi.org/10.6017/ihe.2016.86. 9360. Halevi, G., Moed, H. F., & Bar-Ilan, J. (2016b). Researchers’ mobility, productivity and impact: Case of top producing authors in seven disciplines. Publishing Research Quarterly, 32(1), 22–37. https://doi.org/10.1007/s12109-015-9437-0. Haustein, S., Bowman, T. D., & Costas, R. (2016). Interpreting ‘altmetrics’: Viewing acts on social media through the lens of citation and social theories. In C. R. Sugimoto (Ed.), Theories of Informetrics and Scholarly Communication (pp. 372–406). Retrieved from http://arxiv.org/abs/ 1502.05701. Hazelkorn, E. (2011). Rankings and the reshaping of higher education: The battle for world-class excellence. Palgrave Macmillan. Retrieved from http://eric.ed.gov/?id=ED528844 Henshall, A. C. (2018). English language policies in scientific journals: Signs of change in the field of economics. Journal of English for Academic Purposes, 36, 26–36. https://doi.org/10.1016/j. jeap.2018.08.001. Hicks, D. (2005). The four literatures of social science. In Handbook of Quantitative Science and Technology Research (pp. 473–496). Retrieved from http://link.springer.com/chapter/10.1007/14020-2755-9_22 Hirsch, J. E. (2019). hα: An index to quantify an individual’s scientific leadership. Scientometrics, 118(2), 673–686. https://doi.org/10.1007/s11192-018-2994-1. Jiménez-Contreras, E., de Moya Anegón, F., & López-Cózar, E. D. (2003). 
The evolution of research activity in Spain: The impact of the National Commission for the Evaluation of Research Activity (CNEAI). Research Policy, 32(1), 123–142. Jiménez-Contreras, E., López-Cózar, E. D., Ruiz-Pérez, R., & Fernández, V. M. (2002). Impactfactor rewards affect Spanish research. Nature, 417(6892), 898–898. https://doi.org/10.1038/ 417898b.
Jonkers, K., & Tijssen, R. (2008). Chinese researchers returning home: Impacts of international mobility on research collaboration and scientific productivity. Scientometrics, 77(2), 309–333. https://doi.org/10.1007/s11192-007-1971-x. Khan, S. A. (2019). Promoting science in India’s minority languages. Nature, 573, 34–34. https:// doi.org/10.1038/d41586-019-02626-0. Kim, D., Twombly, S., & Wolf-Wendel, L. (2012). International faculty in American Universities: Experiences of academic life, productivity, and career mobility. New Directions for Institutional Research, 155, 27–46. https://doi.org/10.1002/ir.20020. Larivière, V. (2018). Le français, langue seconde? De l’évolution des lieux et langues de publication des chercheurs au Québec, en France et en Allemagne. Recherches sociographiques, 59(3), 339– 363. https://doi.org/10.7202/1058718ar. Larivière, V., & Gingras, Y. (2010). On the relationship between interdisciplinarity and scientific impact. Journal of the American Society for Information Science and Technology, 61(1), 126–131. https://doi.org/10.1002/asi.21226. Laudel, G. (2003). Studying the brain drain: Can bibliometric methods help? Scientometrics, 57(2), 215–237. https://doi.org/10.1023/A:1024137718393. Leeuwen, T. N. V., Moed, H. F., Tijssen, R. J. W., Visser, M. S., & Raan, A. F. J. V. (2001). Language biases in the coverage of the science citation index and its consequences for international comparisons of national research performance. Scientometrics, 51(1), 335–346. https://doi.org/ 10.1023/A:1010549719484. Leta, J., & Chaimovich, H. (2002). Recognition and international collaboration: The Brazilian case. Scientometrics, 53(3), 325–335. https://doi.org/10.1023/A:1014868928349. Levin, S. G., & Stephan, P. E. (1999). Are the foreign born a source of strength for U.S. science? Science, 285(5431), 1213–1214. https://doi.org/10.1126/science.285.5431.1213 Leydesdorff, L., & Rafols, I. (2011). Indicators of the interdisciplinarity of journals: Diversity, centrality, and citations. Journal of Informetrics, 5(1), 87–100. https://doi.org/10.1016/j.joi.2010. 09.002. Leydesdorff, L., & Wagner, C. S. (2008). International collaboration in science and the formation of a core group. Journal of Informetrics, 2(4), 317–325. https://doi.org/10.1016/j.joi.2008.07.003. Libaers, D. (2014). Foreign-born academic scientists and their interactions with industry: Implications for university technology commercialization and corporate innovation management. Journal of Product Innovation Management, 31(2), 346–360. https://doi.org/10.1111/jpim.12099. Lin, Z., Pearce, R., & Wang, W. (2009). Imported talents: Demographic characteristics, achievement and job satisfaction of foreign born full time faculty in four-year American colleges. Higher Education, 57(6), 703–721. https://doi.org/10.1007/s10734-008-9171-z. Liu, F., Hu, G., Tang, L., & Liu, W. (2018). The penalty of containing more non-English articles. Scientometrics, 114(1), 359–366. https://doi.org/10.1007/s11192-017-2577-6. Mamiseishvili, K. (2011). Teaching workload and satisfaction of foreign-born and U.S.-born faculty at four-year postsecondary institutions in the United States. Journal of Diversity in Higher Education, 4(3), 163–174. https://doi.org/10.1037/a0022354 Meyer, J.-B. (2001). Network approach versus brain drain: Lessons from the Diaspora. International Migration, 39(5), 91–110. https://doi.org/10.1111/1468-2435.00173. Moed, H. F., Bruin, R. E. D., & Leeuwen, T. N. V. (1995). 
New bibliometric tools for the assessment of national research performance: Database description, overview of indicators and first applications. Scientometrics, 33(3), 381–422. https://doi.org/10.1007/BF02017338. Moed, H. F., Burger, W. J. M., Frankfort, J. G., & Van Raan, A. F. J. (1985). The use of bibliometric data for the measurement of university research performance. Research Policy, 14(3), 131–149. https://doi.org/10.1016/0048-7333(85)90012-5. Moed, H. F., & van Leeuwen, T. N. (1996). Impact factors can mislead. Nature, 381(6579), 186. Moed, H. F., & Van Leeuwen, Th N. (1995). Improving the accuracy of institute for scientific information’s journal impact factors. Journal of the American Society for Information Science, 46(6), 461–467. https://doi.org/10.1002/(SICI)1097-4571(199507)46:6%3c461:AIDASI5%3e3.0.CO;2-G.
Moed, H. F. (2005). Citation analysis in research evaluation (Vol. 9). Retrieved from http://books. google.es/books?hl=en&lr=&id=D9SaJ6awy4gC&oi=fnd&pg=PR9&dq=citation+analysis+ in+research+evaluation&ots=FFpZIv-Qg0&sig=w_eOO2xmcRUTReMwdvlVJPo3cno Moed, H. F. (2017a). A critical comparative analysis of five world university rankings. Scientometrics, 110(2), 967–990. https://doi.org/10.1007/s11192-016-2212-y. Moed, H. F. (2017b). Applied evaluative informetrics. Cham: Springer. Moed, H. F., Aisati, M., & Plume, A. (2013). Studying scientific migration in Scopus. Scientometrics, 94(3), 929–942. https://doi.org/10.1007/s11192-012-0783-9. Moed, Henk F., & Halevi, G. (2014). A bibliometric approach to tracking international scientific migration. Scientometrics, 1–15. https://doi.org/10.1007/s11192-014-1307-6 Moed, H. F., & Halevi, G. (2015). Multidimensional assessment of scholarly research impact. Journal of the Association for Information Science and Technology, 66(10), 1988–2002. https:// doi.org/10.1002/asi.23314. Nederhof, A. J. (2006). Bibliometric monitoring of research performance in the Social Sciences and the Humanities: A Review. Scientometrics, 66(1), 81–100. https://doi.org/10.1007/s11192006-0007-2. Nerad, M. (2010). Globalization and the internationalization of graduate education: A macro and micro view. Canadian Journal of Higher Education, 40(1), 1–12. Neylon, C. (2019). Research excellence is a neo-colonial agenda (and what might be done about it). In E. Kraemer-Mbula, R. Tijssen, M. L. Wallace, & R. McLean (Eds.), Transforming research excellence. Retrieved from https://hcommons.org/deposits/item/hc:26133/. Persson, O., Glänzel, W., & Danell, R. (2004). Inflationary bibliometric values: The role of scientific collaboration and the need for relative indicators in evaluative studies. Scientometrics, 60(3), 421–432. https://doi.org/10.1023/B:SCIE.0000034384.35498.7d. Piñeiro, C. L., & Hicks, D. (2015). Reception of Spanish sociology by domestic and foreign audiences differs and has consequences for evaluation. Research Evaluation, 24(1), 78–89. Quan, W., Chen, B., & Shu, F. (2017). Publish or impoverish. Aslib Journal of Information Management. https://doi.org/10.1108/AJIM-01-2017-0014. Quan, W., Mongeon, P., Sainte-Marie, M., Zhao, R., & Larivière, V. (2019). On the development of China’s leadership in international collaborations. Scientometrics, 120(2), 707–721. https://doi. org/10.1007/s11192-019-03111-1. Rafols, I., Leydesdorff, L., O’Hare, A., Nightingale, P., & Stirling, A. (2012). How journal rankings can suppress interdisciplinary research: A comparison between Innovation Studies and Business & Management. Research Policy, 41(7), 1262–1282. https://doi.org/10.1016/j.respol.2012. 03.015. Ràfols, I., Molas-Gallart, J., Chavarro, D. A., & Robinson-Garcia, N. (2016). On the dominance of quantitative evaluation in ‘peripheral’ countries: Auditing research with technologies of distance (SSRN Scholarly Paper No. ID 2818335). Retrieved from Social Science Research Network website, https://papers.ssrn.com/abstract=2818335 Rey-Rocha, J., & Martín-Sempere, M. J. (2012). Generating favourable contexts for translational research through the incorporation of basic researchers into hospitals: The FIS/Miguel Servet research contract programme. Science and Public Policy, 39(6), 787–801. Robinson, B. J. (2016). Flying in the face of illusion. A comparative study of the variables that interact in English-language scientific journals publishing translations. In L. Ilynska & M. 
Platanova (Eds.), Meaning in translation. Illusion of precision (pp. 335–351). Newcastle-upon-Tyne: Cambridge Scholar Publishing. Robinson-Garcia, N., van Holm, E., Melkers, J., & Welch, E. W. (2018). From theory to practice: Operationalization of the GTEC framework. In STI 2018 Conference Proceedings (pp. 1542– 1545). Retrieved from https://openaccess.leidenuniv.nl/handle/1887/65243 Robinson-Garcia, N., & Jiménez-Contreras, E. (2017). Analyzing the disciplinary focus of universities: Can rankings be a one-size-fits-all? http://services.igi-global.com/resolvedoi/ resolve.aspx?doi=10.4018/978-1-5225-0819-9.Ch009, pp. 161–185. https://doi.org/10.4018/ 978-1-5225-0819-9.ch009
Robinson-Garcia, Nicolas, van Leeuwen, T. N., & Rafols, I. (2018b). Using almetrics for contextualised mapping of societal impact: From hits to networks. Science and Public Policy, 45(6), 815–826. https://doi.org/10.1093/scipol/scy024. Robinson-Garcia, N., Sugimoto, C. R., Murray, D., Yegros-Yegros, A., Larivière, V., & Costas, R. (2019a). The many faces of mobility: Using bibliometric data to measure the movement of scientists. Journal of Informetrics, 13(1), 50–63. https://doi.org/10.1016/j.joi.2018.11.002. Robinson-Garcia, N., Torres-Salinas, D., Herrera-Viedma, E., & Docampo, D. (2019b). Mining university rankings: Publication output and citation impact as their basis. Research Evaluation, 28(3), 232–240. https://doi.org/10.1093/reseval/rvz014. Robinson-Garcia, N., Woolley, R., & Costas, R. (2019). Making sense of global collaboration dynamics: Developing a methodological framework to study (dis)similarities between country disciplinary profiles and choice of collaboration partners. ArXiv:1909.04450[Cs]. https://doi.org/ 10.5281/zenodo.3376411 Sivertsen, G. (2016). Patterns of internationalization and criteria for research assessment in the social sciences and humanities. Scientometrics, 107(2), 357–368. https://doi.org/10.1007/s11192-0161845-1. Sivertsen, G. (2018). Balanced multilingualism in science. BiD: Textos Universitaris de Biblioteconomia i Documentació, (40). https://doi.org/10.1344/BiD2018.40.25 Stephan, P. E., & Levin, S. G. (2001). Exceptional contributions to US science by the foreign-born and foreign-educated. Population Research and Policy Review, 20(1–2), 59–79. https://doi.org/ 10.1023/A:1010682017950. Sugimoto, C. R., Robinson-Garcia, N., & Costas, R. (2016). Towards a global scientific brain: Indicators of researcher mobility using co-affiliation data. ArXiv:1609.06499[Cs]. Retrieved from http://arxiv.org/abs/1609.06499 Sugimoto, C. R., Robinson-Garcia, N., Murray, D. S., Yegros-Yegros, A., Costas, R., & Larivière, V. (2017). Scientists have most impact when they’re free to move. Nature, 550(7674), 29. https:// doi.org/10.1038/550029a. Tijssen, R. J. W., & van Wijk, E. (1999). In search of the European Paradox: An international comparison of Europe’s scientific performance and knowledge flows in information and communication technologies research. Research Policy, 28(5), 519–543. https://doi.org/10.1016/S00487333(99)00011-6. Torres-Salinas, D., & Jiménez-Contreras, E. (2015). El efecto Cajal: Análisis bibliométrico del Programa Ramón y Cajal en la Universidad de Granada. Revista Española de Documentación Científica, 38(1), e075. van Holm, E. J., Wu, Y., & Welch, E. W. (2019). Comparing the collaboration networks and productivity of China-born and US-born academic scientists. Science and Public Policy, 46(2), 310–320. https://doi.org/10.1093/scipol/scy060. Van Raan, A. F. (1997). Science as an international enterprise. Science and Public Policy, 24(5), 290–300. https://doi.org/10.1093/spp/24.5.290. Vessuri, H., Guédon, J.-C., & Cetto, A. M. (2014). Excellence or quality? Impact of the current competition regime on science and scientific publishing in Latin America and its implications for development. Current Sociology, 62(5), 647–665. https://doi.org/10.1177/0011392113512839. Wagner, C. S. (2019). Global science for global challenges. In D. Simon, S. Kuhlmann, J. Stamm, & W. Canzler (Eds.), Handbook on science and public policy (pp. 92–103). Edward Elgar Publishing. Wagner, C. S., & Jonkers, K. (2017). Open countries have strong science. Nature, 550(7674), 32. 
https://doi.org/10.1038/550032a. Wagner, C. S., & Leydesdorff, L. (2005). Network structure, self-organization, and the growth of international collaboration in science. Research Policy, 34(10), 1608–1618. https://doi.org/10. 1016/j.respol.2005.08.002. Wagner, C. S., Park, H. W., & Leydesdorff, L. (2015). The continuing growth of global cooperation networks in research: A conundrum for national governments. PLoS ONE, 10(7), e0131816. https://doi.org/10.1371/journal.pone.0131816.
Waltman, L. (2019a). Quantitative literacy for responsible research policy. Presented at the Inaugural lecture by Ludo Waltman as newly appointed Professor of Quantitative Science Studies, Leiden (The Netherlands). Retrieved from https://www.cwts.nl/news?article=n-r2x264&title= inaugural-lectures-by-sarah-de-rijcke-and-ludo-waltman. Waltman, L. (2019b). Put metrics in context. Retrieved 6 August 2019, from Research Europe website: http://www.researchresearch.com Welch, E. W., van Holm, E., Jung, H., Melkers, J., Robinson-Garcia, N., Mamiseishvili, K., & Pinheiro, D. (2018). The Global Scientific Workforce (GTEC) Framework. In STI 2018 Conference Proceedings (pp. 868–871). Retrieved from http://hdl.handle.net/1887/65210. Woolley, R., Robinson-Garcia, N., & Costas, R. (2017). Global research collaboration: Networks and partners in South East Asia. ArXiv:1712.06513[Cs]. Retrieved from http://arxiv.org/abs/ 1712.06513 Zippel, K. (2017). Women in global science: Advancing academic careers through international collaboration. Stanford University Press.
De Profundis: A Decade of Bibliometric Services Under Scrutiny Juan Gorraiz, Martin Wieland, Ursula Ulrych, and Christian Gumpenberger
Rise and Thrive—The Department for Bibliometrics and Publication Strategies @ University of Vienna Departmental History, Philosophy, Activities and Services The Vienna University Library observed the increasing trend towards quantitative research assessment at an early stage. As early as 2009 it embraced bibliometrics by establishing a dedicated department—the Department for Bibliometrics and Publication Strategies1—within the library research support services. This pioneering step gained much recognition after international promotion (Gumpenberger et al., 2012, 2014). The departmental philosophy is at the same time comprehensive and ambitious:
• building a positive attitude towards bibliometrics for all interested stakeholders
• supporting (particularly junior) researchers in their publication strategies ('publish or perish' dilemma)
• enhancing individual visibility (adoption of permanent identifiers; assistance in the (self-)promotion game) as well as institutional visibility (rankings & web presence; monitoring)
• preventing research administration from bad use of bibliometric practices and stimulating sound 'informed peer review' (Moed, 2007a)
• facing the challenges of the digital era and the use of new metrics
The Department for Bibliometrics and Publication Strategies is involved in several key activities such as teaching, organizing events, promoting development partnerships, etc. (see https://bibliothek.univie.ac.at/bibliometrie/). However, the focus is clearly
1 https://bibliothek.univie.ac.at/bibliometrie/en/.
Fig. 1 Tailored bibliometric services
on the various bibliometric services offered, which are tailored to the specific needs of the different target groups (see Fig. 1).
Departmental Facts and Figures Within the last decade, the demand for bibliometric services has multiplied significantly and staff numbers have increased from 2.5 to 4 full-time equivalents. Since 2009 the department has delivered:
• >250 reports for individual bibliometric assessments
• Almost 50 reports for professorial appointment procedures (incl. almost 1000 analysed candidates)
• 8 reports for faculty evaluations
• >10 fee-based reports for national and international external customers (universities, research organizations, research foundations, funders)
The department is also a co-founder of the European Summer School for Scientometrics (esss),2 which recently celebrated its 10th anniversary. It is a huge success story with attendees from all over the world, and we are proud and honoured to have Henk Moed on our list of lecturers, as he is well known for his enlightening lectures, vivid discussions and humorous comments.
2 https://www.scientometrics-school.eu/.
On Bibliometric Analyses for Evaluation Purposes Recommended Evaluation Process One of our departmental principles is the avoidance of "desktop bibliometrics" practices (Bornmann and Leydesdorff, 2014). The term describes the application of bibliometrics by decision makers (e.g. deans, administrators) by means of haphazard "click-the-button" tools and without bibliometric expertise, deliberately bypassing professionals in bibliometrics or scientometrics who know about the limitations and are needed to interpret the data in their qualitative context. Moreover, the researchers in the fields to be evaluated should always be included in the process BEFOREHAND. This is crucial in order to establish what is to be assessed, what analysis makes sense and what data sources should be used (PREREQUISITES). They are to be involved again AFTER the analysis, when the obtained results require their VALIDATION, before the final report is generated. Sound bibliometric analyses should always be conducted (or at least supervised) by bibliometricians (experts in the field) and ought to refrain from "quick and dirty" practices. This is particularly true for individual assessments, which have a direct impact on the career development of researchers. Furthermore, it implies close communication and cooperation with the contracting authorities and the scientists involved. The recommended structure of an evaluation process is illustrated in Fig. 2.
Fig. 2 Evaluation process (three stages: Prerequisites: interview with the scientist or research group under evaluation, what is to be assessed, what makes sense, peculiarities of the discipline, data source selection, data validation; Bibliometric Report: customized bibliometric report; Validation: results are presented and discussed, help with their interpretation, limitations and restrictions)
Fig. 3 Dimensions of bibliometric profile
Bibliometric Profiles Meaningful and responsible bibliometric analyses require much effort and expertise. Since there is no "do-it-all" indicator that would do justice to all the different aspects of research outputs, a multidimensional approach is needed in order to paint a picture that is as comprehensive as possible. Henk Moed has also presented a typology of research impact dimensions and has indicated which metrics are the most appropriate to measure each dimension (Moed & Halevi, 2015). The dimensions that are usually covered in our bibliometric profiles are shown in Fig. 3 (Gorraiz and Gumpenberger, 2015; Gorraiz et al., 2016). The next sections will discuss the prerequisites as well as the different dimensions of our standard bibliometric profile in depth.
Data Coverage and Validation Data Coverage No data source is perfect in terms of accuracy and completeness. The selection of data sources or databases and their degree of coverage will directly impact the
results of the analysis and will hint at their significance. Therefore, it makes sense to use several data sources suitable for the purpose of the analysis and for the discipline or research field considered (Moed, 1988, 2006). In this respect, a bibliometric analysis should always state the criteria used for the selection of data sources as well as their degree of coverage. Our recommendation is to combine the citation databases Web of Science Core Collection and Scopus as well as Google Scholar. The Web of Science (WoS) Core Collection constitutes the preferred data source for bibliometric analyses, since being indexed in this database is generally perceived to indicate "high impact" or at least "high visibility" within the scientific community. Scopus serves as a second citation database, in order to avoid or correct indexing errors in WoS and to benefit from the larger number of indexed journals (almost twice as many as in WoS). Google Scholar (GS), accessed via "Publish or Perish",3 stands out because of its higher coverage of some publication types (such as monographs, reports, etc.), which are more relevant in the social sciences and the humanities. Analyses in GS should be taken with a pinch of salt: GS is a search engine rather than a database, its indexing remains non-transparent and its documentation is lacking. The suitability of Google Scholar as a data source for scientific evaluation has also been discussed by the honouree and co-authors (Halevi et al., 2017). This set of data sources should be complemented with at least one subject-specific database (e.g. ADS, Chemical Abstracts or Mathematical Reviews) in order to enhance the activity and citation analyses. The choice is made based on the previously mentioned interview or consultation (see prerequisites).
3 'Publish or Perish' is a software programme that retrieves and analyses academic citations. It relies on GS to obtain the raw citations (see also http://www.harzing.com/pop.htm).
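As a minimal illustration of this multi-source approach, the sketch below computes, for a validated publication list, how much of the output is covered by each selected database; the source names and the use of DOIs as matching keys are illustrative assumptions, since real exports first require cleaning and identifier reconciliation.

```python
def coverage_report(pub_lists: dict) -> None:
    """Report per-source coverage for a validated publication list.

    `pub_lists` maps a source name ('validated', 'wos', 'scopus', 'gs', ...)
    to the set of cleaned identifiers (e.g. DOIs) found in that source.
    """
    validated = pub_lists["validated"]        # list approved by the researcher
    indexed_anywhere = set()

    for source, ids in pub_lists.items():
        if source == "validated":
            continue
        covered = validated & ids
        indexed_anywhere |= covered
        share = 100 * len(covered) / len(validated)
        print(f"{source}: {len(covered)}/{len(validated)} ({share:.1f}%) of the validated output")

    print(f"not indexed in any selected source: {len(validated - indexed_anywhere)} items")

# Illustrative input: four validated publications, unevenly covered
coverage_report({
    "validated": {"10.1/a", "10.1/b", "10.1/c", "10.1/d"},
    "wos": {"10.1/a", "10.1/b"},
    "scopus": {"10.1/a", "10.1/b", "10.1/c"},
    "gs": {"10.1/a", "10.1/b", "10.1/c", "10.1/d"},
})
```

In practice such a report is only a starting point: items missing from all citation databases (e.g. monographs) are analysed separately rather than discarded.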
Data Validation After data retrieval and cleaning, attentive and critical data validation is highly recommended. Within the framework of individual assessment procedures or professorial appointments, researchers are asked to provide their own publication lists for comparison. Current Research Information Systems (CRIS) are an additional means for this purpose, provided that they are regularly updated and maintained. Research groups or faculties should also be asked to check and approve the data before the final bibliometric report is generated. Reliable bibliometric analyses can only be performed on the basis of objective data, following the same standards and conditions. Therefore, calculations of self-reported bibliometric data in application documents are problematic, as they draw on different data sources and use different citation and publication windows, and consequently do not allow sound comparisons between candidates (see, for example, Bar-Ilan, 2008). Such data always need to be checked due to their subjective nature, which normally requires as much time and effort as calculating them oneself. Standardization of document types is another time-consuming but essential task for providing a solid analysis. Thus, application guidelines should require the strict attribution of given document types on candidate publication lists. It is advisable to clearly distinguish the following categories: books or monographs, edited books and journal issues, chapters in books, articles in journals, proceedings papers, patents, book reviews, reports and working papers, meeting abstracts, talks and other publications (in newspapers, on the internet, etc.). Finally, application guidelines should recommend the use of a regularly updated permanent person identifier record, such as ORCID iD or ResearcherID. Data coverage and data validation are two crucial issues not sufficiently addressed in the Leiden Manifesto. Our experience shows that the use of various sources has been helpful in correcting coverage errors and expanding coverage considerably at both micro and meso levels.
Publication Activity Measuring publication activity based on the number of publications in the selected time period seems to be an obvious indicator choice. However, measuring publication activity is not as trivial as it may seem. We therefore recommend considering the following three aspects: 1. Differentiation according to publication type A differentiation according to publication type is necessary already at an early stage. The most relevant and discipline-specific publication types have already been discussed and determined in the “prerequisites”. It is advisable to further distinguish and separately analyze either all publication types, all citable items (i.e. articles, reviews, proceedings papers and book chapters in serials), or only research articles. Normally the so-called citable items represent the most relevant publication types in most disciplines. Research notes, letters and editorials can be significant as well and require at least a mention. For example, the importance of proceedings papers in computer science is well-known (Moed, 2006), whereas we are far less aware of the importance of working papers in economics and other social sciences, or the influence of pre-prints in theoretical and particle physics. Monographs and books should be analyzed separately. To mix them with other document types is not reasonable and should be avoided. In the case of social sciences and humanities, we recommend to include an additional group, namely “other document types”, and to analyze the assigned items in detail. For example, editorial materials can attract a very high number of citations and be of significant interest in theoretical fields. Last but not least, patents, research data as well as popular science publications in magazines and in other mass media also require acknowledgment according to the associated discipline and the aim of the analysis.
2. Consideration of the number of co-authors Co-authorship is a crucial problem in activity analysis, concerning both the number of co-authors and the role played by each of them (see below). Following thorough discussions, our recommendation is to avoid the use of fractional counts. First, fractional counts do not work for publications with a very large number of co-authors (as in high-energy physics, astrophysics or some medical fields with more than 1500 co-authors). Despite the introduction of indicators to remedy this problem, these are not feasible in practice due to their complex and non-transparent nature, which leads to low acceptance by scientists and science policy makers. Furthermore, fractional counts distort the message to be conveyed. Reducing authorship to a single number paints a very incomplete picture. Hence, more diverse co-author information is preferable to minimization, and this is reflected in the control data of our analyses. For example, if a candidate for a professorial appointment has a very high publication activity but is first, last or corresponding author in only 30% of his or her publications, this should be considered in the bibliometric report. We use the following indicators (see the sketch after this list): (a) mean and median number of co-authors; (b) maximum number of co-authors; (c) number of single-authored publications; and (d) high author-dependence, i.e. a high percentage of publications with the same co-author. The last indicator is only included in the report if its value is higher than 80%. It should also be stressed that all these findings are only descriptive and shall be deemed neither positive nor negative. This information has proven to be highly relevant in both individual assessments and professorial appointment procedures. 3. Consideration of the role played by the scientist, group or institution in evaluation Naturally, different discipline-specific habits or rules need to be taken into consideration and addressed in the prerequisites (see Fig. 2). In our analyses, we consider the number and percentage of publications where the author or the institution is first, last or corresponding author. Furthermore, account is taken of the evolution over time. During career progression, a switch from first to last author is frequently observed, given that researchers start building their own research groups and foster fellow students in their role as first authors. These data are again presented as control data. Activity analyses are based on the provided publication lists and performed in different data sources (see section Data Coverage and Validation). Affiliation information needs special attention: its correctness is not only crucial for the preparation of the report, it is also an important factor for high visibility and, with regard to institutions, directly influences the position in university rankings. Most rankings rely on data from WoS or Scopus. Therefore, affiliation analyses are usually performed in these databases. Affiliation changes of researchers are also considered in this type of analysis, as they might affect their publication activity (publication gaps, delays, etc.).
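A minimal sketch of how the co-authorship control data (a)-(d) can be derived from a cleaned publication list is shown below; the record structure and field names are illustrative assumptions, while the 80% threshold follows the description above.

```python
from collections import Counter
from statistics import mean, median

def coauthor_control_data(pubs: list, focal_author: str) -> dict:
    """Co-authorship control indicators (a)-(d) described above.

    Each publication is a dict with an 'authors' list; the structure is illustrative.
    """
    n_coauthors = [len(p["authors"]) - 1 for p in pubs]          # authors other than the focal one
    single_authored = sum(1 for p in pubs if len(p["authors"]) == 1)

    # Share of publications involving the most frequent co-author
    coauthor_counts = Counter(a for p in pubs for a in p["authors"] if a != focal_author)
    top_share = max(coauthor_counts.values()) / len(pubs) if coauthor_counts else 0.0

    report = {
        "mean_coauthors": mean(n_coauthors),
        "median_coauthors": median(n_coauthors),
        "max_coauthors": max(n_coauthors),
        "single_authored": single_authored,
    }
    if top_share > 0.8:              # high author-dependence is only reported above 80%
        report["high_author_dependence"] = round(top_share, 2)
    return report
```

Analogous counts for first, last and corresponding authorship (point 3) can be added in the same way, provided the author position and a corresponding-author flag are available in the cleaned records.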
Visibility Background and Rationale One of the special features of our bibliometric services is the introduction of the visibility dimension as opposed to the impact dimension, the latter clearly based on the citations attracted by each publication. Visibility analyses are helpful whenever research assessment exercises cover a recent time period (usually the last few years). In these cases the citation window is practically too short to retrieve a significant number of citations in many disciplines. This is particularly true for fields with a long cited half-life. Furthermore, we rely on the hypothesis that the visibility of a document is determined by the reputation or the impact of the source where it was published. By recognizing this editorial barrier, publication strategies can be unveiled. Therefore, we also analyze the journals or sources used as publication channels by the researcher under evaluation. A visibility analysis comprises three parts: first, the number and percentage of publications indexed in the selected international, well-respected data sources (also considered in the activity analysis); second, the number and percentage of publications in top journals or sources; and third, the number and percentage of publications in Open Access sources (Gorraiz and Gumpenberger, 2015; Gorraiz et al., 2016, 2017).4 Henk Moed has also pointed to the effect of "open access" on citation impact (Moed, 2007b). His study provides evidence that the preprint server ArXiv accelerates citation because it makes papers available earlier, rather than because it makes them freely available. Focusing on the second part of the approach described above, the identification of the top journals is normally based on journal impact measures, such as Garfield's Journal Impact Factor5 (Garfield, 1972, 2004). According to Moed, citation-based metrics are appropriate tools in journal assessment provided that they are accurate and used in an informed way (Moed et al., 2012). The most important sources of possible biases and distortions in the calculation and use of journal citation measures have been brilliantly formulated by Glänzel and Moed (2002). In a complementary reference analysis, the most cited sources and journals are determined, analyzed (percentage of top journals) and compared with the previously identified publication channels (see visibility analysis). A good match is a strong indication that the scientist or institution under evaluation has been successful in publishing in the most relevant sources of the respective research area. Finally, the citing journals are retrieved as well and matched with the results of the previous analyses.
4 Promotion strategies related to "altmetrics" are considered separately.
5 Of course, other journal impact measures like "Article Influence Score", "SJR" or "SNIP" can also be used, depending on the data source (Scopus or Web of Science Core Collection).
It should be stressed that the purpose of visibility analyses is solely to provide a quantitative description of the research output and to reveal potentially meaningful symptoms. Usually researchers have good reasons for their choice of publication channels. However, junior scientists in particular should be made aware of the consequences of careless or even poor publication strategies. The rationale behind the use of Journal Impact Measures (JIM) to assess the visibility of a publication can be summarized in the following postulates or hypotheses:
• The visibility of a document is determined by the reputation or the impact of the source where it was published;
• JIM reflect the editorial barrier and unveil publication strategies (Moed, 2000);
• Being published in journals with high JIM is much more difficult (higher rejection rates), and successful publication in these journals deserves recognition;
• JIM help to identify the top journals in each field according to their impact or prestige;
• JIM are a measure of the visibility, but not of the impact or quality, of single research outputs.
It is, however, questionable whether this approach can be applied to disciplines where citations do not carry much significance and where database coverage is insufficient (see limitations). Measuring visibility relies on two aspects: the choice of the most suitable indicators according to the coverage of the different databases used, and the calculation and allocation according to the year of publication or the last edition of the indicator.
Indicators Since the introduction of the Impact Factor by Garfield, there has been a great variety of JIMs (Moed, 2005a, b):
1. Journal Impact Factor (JIF, GIF, IF) by Garfield (Garfield and Sher, 1963), in JCR since 1975
2. SIC (specific impact contribution) by Vinkler (2004, 2010)
3. h-index for journals by Braun (Braun et al., 2006)
4. Eigenfactor Metrics (Eigenfactor & Article Influence Scores) by Bergstrom (2007), Bergstrom et al. (2008)
5. SJR—SCImago Journal Rank by de Moya (González-Pereira et al., 2009) (prestige metrics, Scopus)
6. SNIP—Source Normalized Impact per Paper by Moed (2010)
7. CiteScore (by Scopus, 2017).6
6 https://www.scopus.com/sources.
Indicators 2–7 are all corrected versions of Garfield's JIF. Our jubilarian himself has introduced a new indicator of journal citation impact, denoted source normalized impact per paper (SNIP). It takes into account characteristics of a properly defined subject field: the frequency at which authors cite other papers in their reference lists, the rapidity with which citation impact matures, and the extent to which the database used for the assessment covers the field's literature (Moed, 2010). However, nothing so far compares to the popularity and recognition of the original Journal Impact Factor. This can be attributed to its history, the simplicity of its definition and its central and standardized availability.
Calculation There are two crucial questions when calculating the visibility of a publication:
(1) Assignment: Which JIM should be assigned to each publication? The assignment of impact measures to each publication will of course be made according to the journal in which it has been published. The crucial question is rather which annual value of the JIM should be taken into account. There are three possibilities:
• Using the data of the last JCR edition for all publications (pro: current impact measures of the journal at the time of evaluation; con: ignores possible fluctuations of annual IF values)
• Using the JCR edition related to the publication year of each publication (pro: more correct assignment that eliminates biases caused by fluctuations; con: assignment not ideal because of the synchronous approach based on 2 or 5 publication years)
• Using the mean value of the last x years according to the time period under study (pro: compensation for fluctuations of annual IF values; con: cumbersome calculation, data not yet ready to be included in JCR)
Probably the most correct approach is the mean impact factor covering all the publication years considered in the analysis. But the data are not easily available, and it is even more difficult to generate normalized values. In any case, it is recommended to use different calculation methods whenever possible, or at least to check that the variations are not too high.
(2) Normalization: How can the specific differences between categories be corrected?
• One common practice for the visibility assessment is to multiply each publication by the JIM of its corresponding journal and add the values up:
V_1 = \sum_{k=1}^{n} JIM(k), where k runs over the n publications in the set and JIM(k) is the journal impact measure of the journal in which publication k appeared.
This calculation does not yet consider the differences between fields. In order to correct for these differences, normalization is performed by dividing the value of the JIM by the Aggregate Impact Factor of the subject category to which the journal has been assigned:
V_2 = \sum_{k=1}^{n} JIM(k) / AggregateJIM(k), where AggregateJIM(k) denotes the Aggregate Impact Factor of the subject category of the journal in which publication k appeared.
• Using the quartiles introduced by Garfield for each category and calculating the number and percentage of publications in Q1, Q2, Q3 and Q4. Today the trend is towards using percentiles (Top 1% and Top 10%) rather than quartiles. In both cases, overlaps (assignment of a journal to more than one category) need to be considered. In our approach, we always use the most favorable quartile if a journal has been assigned to several categories.
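To make these options concrete, the following sketch aggregates the visibility of a small publication set; the input structure is a hypothetical one in which each record already carries the JIM, the aggregate JIM of its category and the journal's quartile in every assigned category, looked up from the edition matching the publication year.

```python
def visibility_indicators(pubs: list) -> dict:
    """Aggregate visibility as sketched above: raw V1, field-normalized V2
    and the quartile distribution using the most favorable quartile."""
    v1 = sum(p["jim"] for p in pubs)                          # sum of journal impact measures
    v2 = sum(p["jim"] / p["aggregate_jim"] for p in pubs)     # normalized by category aggregate

    quartile_counts = {"Q1": 0, "Q2": 0, "Q3": 0, "Q4": 0}
    for p in pubs:
        best = min(p["quartiles"])        # 'Q1' < 'Q2' < 'Q3' < 'Q4' also lexicographically
        quartile_counts[best] += 1

    return {"V1": round(v1, 2), "V2": round(v2, 2), "quartiles": quartile_counts}

# Two publications; the first appears in a journal assigned to two categories,
# so its most favorable quartile (Q1) is used
print(visibility_indicators([
    {"jim": 4.2, "aggregate_jim": 2.1, "quartiles": ["Q1", "Q2"]},
    {"jim": 1.0, "aggregate_jim": 2.0, "quartiles": ["Q3"]},
]))
# -> {'V1': 5.2, 'V2': 2.5, 'quartiles': {'Q1': 1, 'Q2': 0, 'Q3': 1, 'Q4': 0}}
```

The same structure can be reported with percentile bands instead of quartiles; only the lookup table attached to each record changes.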
Limitations Visibility assessment is highly discipline-dependent. For example, in Computer Science, most of the publications are proceedings in series that do not have a JIM in JCR. It becomes clear that this analysis is limited to journals indexed either via WoS/JCR or Scopus. Given the greater number of journals indexed in Scopus, this database can provide more comprehensive results. Regarding the Humanities, the metrics based on Scopus are the only ones that can be applied, as JCR completely lacks this section. Clarivate Analytics has so far remained true to the philosophy of Eugene Garfield and is still reluctant to calculate journal impact factors (JIF) in the Arts & Humanities based on data from the Web of Science Core Collection. However, other providers of alternative journal impact measures, like Elsevier, SCImago and CWTS, have skipped this restriction, even if this course of action clearly contradicts the Leiden Manifesto recommendations. They release annual editions of CiteScore, Source Normalized Impact per Paper (SNIP) or Scimago Journal Rank (SJR), which also include the Arts & Humanities. Garfield’s initial decision not to include the Arts & Humanities in JCR was fundamentally based on the lack of necessary information for sound calculations of the Impact Factors. There were two main reasons: • Initial coverage of journals in this area was very limited • Strong citation distribution among books & book chapters in the Arts & Humanities, unlike in the Sciences and Social Sciences where references to articles predominate
However, the number of journals indexed in the categories of Arts & Humanities Citation Index has considerably increased, and a new index (Emerging Source Citation Index) considering regional journals has recently been included. A study performed by the authors concerning the calculation of JIM in the Arts & Humanities (Repiso et al., 2019) shows: • Interdisciplinary journals belonging to other indices and/or categories are strongly favored. These journals seriously distort rankings and interpretation. • This fact is further aggravated by the huge overlap between categories. For example, 65% of the journals indexed in Scopus are assigned to two or more categories, and one journal is even indexed in 13 categories. Considering the Humanities, Scopus indexes 3824 journals, of which 1940 journals have also been assigned to Social Sciences categories (50%). High overlap between the areas generates some inequality. • The number of citations attracted by most of the journals is still very low or almost insignificant and hampers the calculation of sound journal impact measures. Given all this, it is only a question of time that JCR will finally include an Arts & Humanities Edition, but the calculation of the JIF for the A&H will finally reveal that selection criteria for database indexing cannot be based on citations due to their insufficient number. The inaccurate use of Journal impact measures has already been seriously criticized in the hard sciences (Moed et al., 2012). Their improper use in the Arts & Humanities can even be more harmful. Reflection and discussion are top priorities for bibliometricians and librarians in order to prevent unwished developments and effects. It should be stressed that the visibility analysis is UNSUITABLE for assessing the quality or the impact of single publications, but rather for assessing the reputation or impact of the journals in which original research was published. JIM measures are only available for journals indexed in JCR (around 13.000) or in Scopus (more than 20.000). Many relevant journals of the Arts & Humanities are not included in these databases. In his article in “Research Trends” from 2013, Moed also addressed the potential and the limitations of the application of bibliometric techniques in the assessment of the Arts & Humanities (Moed, 2013). Moed was also the first one to analyze the statistical relationship between downloads and citations at the level of individual documents within a single journal (Moed, 2005a). Before that, Bollen et al. (2003) have already employed usage analysis for the identification of research trends. Later on, Bollen and Van de Sompel (2006) proposed to map science by the proxy of journal relationships derived from usage data. They also introduced a “usage impact factor” (Bollen & Sompel, 2008). The disciplinary differences observed for the behavior of citations and downloads, especially concerning the obsolescence, has also been reported by Gorraiz et al. (2014). This study points to the fact that citations can only measure the impact in the ‘publish or perish’ community. Therefore this approach is neither applicable to the whole scientific community nor to society in general. Furthermore, the authors emphasize
that usage metrics should consider the unique nature of downloads and should reflect their intrinsic differences from citations. Since the citation-based approaches are flawed (Halevi & Moed, 2014), alternative usage-based solutions like MESUR (Metrics from the Scholarly Usage of Resources; Bollen et al., 2007) or SERUM (Standardized Electronic Resources Usage Metrics; Gorraiz and Gumpenberger, 2010) have been suggested, but have not been embraced successfully (Kurtz & Bollen, 2011). Research administrators worldwide often cling to the standard of Q1 journals. However, particularly sub-disciplines in the Humanities and the Social Sciences often rely on journals, which do not meet this requirement, but are still deemed to be valuable, high quality publication outlets. Therefore, faculties, institutions and nations came up with their individual journal indexes in order to address the challenges. The best-known example was ERIH (European Reference Index for the Humanities), which was first published by the ESF (European Science Foundation) in 2008. The lists were revised in 2011–2012. In 2014, ERIH was transferred to the NSD (Norwegian Centre for Research Data) and was renamed to ERIH PLUS,7 since it has been extended to include the Social Sciences as well. Visibility analyses in the Humanities and the Social Sciences should therefore always go beyond the JIM-based approach and include discipline-specific journals listings if available. So far, we have exclusively focused on journals as major publication outlets. However, books are still relevant publication types in many sub-disciplines of the Humanities and the Social Sciences. Only few studies have already dealt with the visibility measurement of books (Kousha et al., 2011; Leydesdorff and Felt, 2012; Gorraiz et al., 2014; Torres-Salinas et al., 2014, 2017). This is another issue to be tackled in the attempt to assess the visibility of the complete publication output. Last but not least, promotion strategies on the internet can be essential for some extra visibility of the research output. This aspect will be further discussed in the section “New and Alternative Metrics”.
Impact Citations are used as a proxy for the impact (and not for the quality) of publications in the “publish or perish” community. To assess the citation impact (i.e. number of citations attracted) it is recommended to perform a citation analysis for all types of documents (including book chapters) as well as for citable items only. In case of deviations between the two approaches, the results should be analyzed in more detail. Editorials, Letters, and Research Notes might also attract a large number of citations in some fields, which should at least be commented on. Books are to be analyzed separately, because of citation habits and dynamics entirely different from other document types. This anomaly is also discussed in Henk Moed’s monographs 7 https://dbh.nsd.uib.no/publiseringskanaler/erihplus/about/index.
that are closely related to research evaluation (Moed, 2006, 2017) and is taken into account in the creation of bibliometric reports at the University of Vienna (Gorraiz et al., 2013, 2016).
Citation Window The selection of the citation window is a crucial issue in citation analysis as Henk Moed already discussed in his book from 2006 (Moed, 2006). Some institutions rely on a fixed three-year citation window for each publication year, which has indeed become a well-established practice. However, our recommendation is to use the most extended window possible. The reasons for this are: • The three years are insufficient to collect a significant volume of citations in many disciplines. • The publications of the first months have a citation advantage in comparison with those of the final months (Donner, 2018). This problem could theoretically be corrected by including normalizations for each month, instead of relying on the overall year of publication. Unfortunately, such normalizations are not readily available, and to calculate them yourself would be a great effort. In addition, it is not always easy to determine the month of publication, as some publications already appear “online first” and monopolize citations prior to their official publication. Therefore, the best solution is to use the most extended citation window available, to compare between the different years of publication, and to rely on normalized citation indicators.
Self-citations and Negative Citations

The calculation of self-citations is far from trivial, since definitions are not consistent and calculation methods differ between bibliometric databases (Moed, 2008; Costas et al., 2010). In WoS CC, self-citations mean that the authors under evaluation cite themselves, whereas in Scopus either the authors themselves or their co-authors can originate self-citations. The latter seems to be the better concept, but in practice the calculation is difficult and its correctness cannot be verified. Moreover, in many cases subtracting self-citations reduces the number of citations to an insignificant value. The calculation is an even more cumbersome process when considering research groups, faculties or institutions.
In our bibliometric analyses, self-citations are included as additional control data and are not subtracted from the total number of citations. Values of up to 20–30% are considered acceptable. In case of comparisons, e.g. in professorial appointment procedures, this information can be very useful (a highly cited author with a high percentage of self-citations vs. less cited authors with much lower percentages). Sometimes author self-citations are automatically excluded in bibliometric analyses, as is the case in the Leiden Ranking (see the Leiden Ranking indicators information8). Negative citations are normally not distinguished from positive citations in our analyses, since their identification would be too time-consuming. We actually believe in the explicit value of negative citations in the problem-solving process. Furthermore, it is equally important to consider any retracted papers in the activity analysis.

8 https://www.leidenranking.com/information/indicators.
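To make the two self-citation definitions discussed above concrete, the following minimal sketch counts self-citations both in the narrower sense used in WoS CC (the evaluated author cites him- or herself) and in the broader Scopus-style sense (any co-author of the cited paper appears among the citing authors). The data structures are invented for illustration and are not tied to any particular database API.

```python
# Minimal sketch: a publication is represented only by the set of its author IDs,
# and each citing paper likewise by its set of author IDs.

def self_citation_counts(cited_authors, citing_papers, focal_author):
    """Return (narrow, broad) self-citation counts for one cited publication."""
    narrow = 0  # WoS CC style: the evaluated author appears among the citing authors
    broad = 0   # Scopus style: any co-author of the cited paper appears among the citing authors
    for citing in citing_papers:
        if focal_author in citing:
            narrow += 1
        if cited_authors & citing:
            broad += 1
    return narrow, broad

# Example: a paper by authors A, B and C, cited by three other papers
cited = {"A", "B", "C"}
citing = [{"A", "X"}, {"B"}, {"Y", "Z"}]
print(self_citation_counts(cited, citing, focal_author="A"))  # (1, 2)
```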
Basic Citation Counts

Basic citation indicators are meant to give an idea of the distribution of the received citations. Our recommendations are:
• total number of citations,
• average number of citations per publication (or average number of citations per cited publication),
• maximum number of citations received by a single publication.
The standard deviation would be a natural fourth parameter, but evaluators and science policy makers usually pay little attention to it. These indicators can be complemented by:
• percentage of publications cited,
• h-index,
• (and in some cases) the g-index.
Despite their widespread use, these indices can only provide additional information.
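As an illustration of how the basic indicators listed above can be derived from a plain list of citation counts (one value per publication), consider the following sketch; the example counts are invented.

```python
# Sketch of the basic citation indicators, including h- and g-index.

def basic_citation_indicators(citations):
    n = len(citations)
    total = sum(citations)
    cited = [c for c in citations if c > 0]
    desc = sorted(citations, reverse=True)
    # h-index: largest h such that h publications have at least h citations each
    h = sum(1 for rank, c in enumerate(desc, start=1) if c >= rank)
    # g-index: largest g such that the top g publications together have >= g^2 citations
    g, running = 0, 0
    for rank, c in enumerate(desc, start=1):
        running += c
        if running >= rank * rank:
            g = rank
    return {
        "total_citations": total,
        "mean_citations_per_publication": total / n,
        "mean_citations_per_cited_publication": sum(cited) / len(cited) if cited else 0,
        "max_citations": max(citations),
        "share_cited": len(cited) / n,
        "h_index": h,
        "g_index": g,
    }

print(basic_citation_indicators([25, 12, 8, 3, 0, 0]))
# total 48, mean 8.0, mean per cited paper 12.0, max 25, 67% cited, h = 3, g = 6
```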
Normalized Citation Counts

Normalized citation indicators are particularly important to accommodate the diverse citation windows applied to publications, ranging from a few months up to many years. The main indicators are:
• CNCI (Category Normalized Citation Impact),
• MNCS (Crown Indicator; mean value of the CNCI applied to a collection of publications),
• Percentiles Top 1% and Top 10% (number and percentage of the Top 1% and Top 10% most cited publications of the same document type and publication year in the corresponding category, as calculated in InCites).
There is an abundance of literature on this topic, so we want to focus on two specific aspects that we have encountered repeatedly in our analyses.

(a) Considerations from a mathematical point of view

The calculation of the MNCS has already been heavily debated (Leydesdorff and Opthof, 2010; Waltman et al., 2011). Our preferred departmental approach is to use the mean value of all the publication-level mean values, rather than dividing the sum of all citations attracted by all publications by the sum of all expected citation values for the publications in the set. However, we think it would be even better to group the mean values for each subject category and then calculate the final mean value at the category rather than the publication level. Example: if the data set contains five publications in Mathematics with scores (10/5, 16/10, 8/8, 14/7, 8/8), seven in Physics (30/10, 26/14, 45/15, 60/20, 80/13, 24/8, 57/13), and three in Psychology (9/5, 17/14, 3/12), the suggested modified "MNCS" would be calculated as the mean value of the MNCS for each category (one third for each) and not as the mean value of all the individual values (a numerical sketch of these calculation options is given at the end of this subsection). This approach would result in a more balanced distribution among the categories and would particularly differentiate comprehensive universities from more specialised institutions. Moreover, we recommend using a wider spectrum of percentiles (Top 1%, Top 5%, Top 10%, Top 15%, Top 20% and Top 50%), which results in a better description of the citation distribution (Gorraiz et al., 2016). A pie chart with open percentile brackets is preferable in order to display the non-aggregated values (see Fig. 4). This approach paints a more complete picture and avoids wrong interpretations (for example, in the case of a scientist with almost no publications among the Top 10% most cited, but a large number just below that threshold, between the Top 10% and Top 15% percentiles).

Fig. 4 Example of a pie chart of the percentiles distribution

(b) Considerations from the classification point of view

The subject classification issue has always been a weakness of bibliometrics. Various approaches have been used to design journal-level ontologies, but scholarly research on and practical application of these taxonomies have always revealed their limitations. So far, no single classification scheme has been widely adopted by the bibliometric community (Archambault et al., 2011). Despite the deficiencies and severe limitations observed in all currently available solutions, the normalization of citation indicators depends on the classification used by the data sources. Comparing normalized indicators from WoS CC and Scopus reveals clearly perceptible differences caused by applying either broader or narrower categories. WoS CC and its corresponding analytical tool InCites are a perfect example to illustrate that this problem can even occur within the same database: different results are obtained depending on
whether the narrower set of roughly 240 WoS Subject Categories or the broader 22 ESI Categories is used for normalization. The Leiden Ranking carries this circumstance to the extreme by distinguishing more than 4,500 micro-level fields: based on a specific computer algorithm, each publication in WoS is assigned to a field delimited by its citation relations with other publications (Pudovkin & Garfield, 2002). This variety of classification approaches makes it virtually impossible to come up with the one well-accepted solution for researchers and other stakeholders. One might be tempted to believe that narrower categories would finally result in more correct calculations. However, a finer granularity also generates more overlap between categories, which are usually treated in a fractional way. Our experience tells us that, particularly for researchers with a high degree of interdisciplinarity, the observed differences in the share of publications within the Top 10% percentile can even exceed 50%.
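The numerical sketch announced above illustrates, with the Mathematics/Physics/Psychology example from the text, how three calculation options yield three different "crown" values for the same publication set. Each pair is (actual citations, expected citations); this is a simplified illustration, not the implementation used in any particular tool.

```python
# Three ways to aggregate normalized citation scores for one publication set.

data = {
    "Mathematics": [(10, 5), (16, 10), (8, 8), (14, 7), (8, 8)],
    "Physics": [(30, 10), (26, 14), (45, 15), (60, 20), (80, 13), (24, 8), (57, 13)],
    "Psychology": [(9, 5), (17, 14), (3, 12)],
}
all_pairs = [pair for pairs in data.values() for pair in pairs]

# (1) MNCS as the mean of the publication-level normalized scores
mncs = sum(c / e for c, e in all_pairs) / len(all_pairs)

# (2) The older "ratio of sums" variant (sum of citations / sum of expected citations)
ratio_of_sums = sum(c for c, _ in all_pairs) / sum(e for _, e in all_pairs)

# (3) The modified indicator suggested in the text: mean of the per-category MNCS values,
#     so that every subject category carries the same weight
per_category = [sum(c / e for c, e in pairs) / len(pairs) for pairs in data.values()]
category_balanced = sum(per_category) / len(per_category)

print(round(mncs, 2), round(ratio_of_sums, 2), round(category_balanced, 2))
# roughly 2.35, 2.51 and 2.03 for the same publications
```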
Analysis of the Citing Documents

The relationship between citing and cited publications is a crucial issue in bibliometrics and was already addressed by Garfield in his legendary essays in the Current Contents print editions (see the April 15 issue; Garfield, 1994). The high visibility and impact of the citing documents reinforces the bibliometric analyses of the publications themselves and provides additional information on their resonance and usefulness in the scholarly community. This can also help to correct some biases.
For example, if a publication received few citations but all of them originate from publications belonging to the top most cited publications in the corresponding field, this should be considered or at least mentioned in the citation analysis. Analyses of citing documents are performed according to several criteria (e.g., citing countries, citing institutions, citing source titles) in order to demonstrate the wider impact of publications and to determine the degree of internationalization. Citing country analysis gives insights on whether broader impact actually exists or if it can be seen as a cooperation by-product. On the other hand, analysis of the citing documents at journal level can be used to increase visibility and to improve publication strategies. Finally, at author (and institutional) level it is interesting for researchers to learn about other possible applications of their results, to broaden their horizon, as well as to establish new cooperation strategies.
Cooperation and Co-publication Analyses

Multiple Affiliations

The major problem of co-publication analyses is the difficult nature of multiple affiliations. It must be clarified from the outset whether multiple affiliations are taken into account or not. Dual affiliations can be very challenging, as it is by no means trivial to decide whether they might also represent some type of co-publication or cooperation. Last but not least, research mobility can also have an effect on productivity and impact, as recent studies have shown (Halevi et al., 2016; Robinson-Garcia et al., 2019). The outcome of co-publication analyses is highly dependent on the applied methodology: it is possible to use either normal (whole) counts or fractional counts. With normal counts, each institution or country listed in the affiliations receives one credit, regardless of its occurrence frequency. In contrast, fractional counts are based on the number of occurrences. The calculation of fractional counts is not easy and depends on the underlying level: country, organization or author. Offering different results can be challenging and confusing for clients. Therefore, it is recommended to discuss the desired outcome beforehand and to choose the appropriate counting approach accordingly.
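The following minimal sketch contrasts the two counting methods at the country level; the affiliation lists are invented, and a real analysis would of course operate on cleaned address data.

```python
# Whole ("normal") counting vs. fractional counting of country affiliations.
from collections import Counter

publications = [
    ["AT", "AT", "ES"],        # two Austrian addresses, one Spanish address
    ["AT", "NL", "NL", "NL"],  # one Austrian address, three Dutch addresses
]

full = Counter()        # one credit per country and paper, regardless of frequency
fractional = Counter()  # credits proportional to the number of occurrences

for affiliations in publications:
    occurrences = Counter(affiliations)
    total = sum(occurrences.values())
    for country, n in occurrences.items():
        full[country] += 1
        fractional[country] += n / total

print(dict(full))        # {'AT': 2, 'ES': 1, 'NL': 1}
print(dict(fractional))  # {'AT': 0.92, 'ES': 0.33, 'NL': 0.75} (approximately)
```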
Knowledge Base or Reference Analysis

In our view this is one of the most relevant dimensions, particularly for individual evaluations. Reference analyses inform about the knowledge base of the researcher or research group under evaluation. The total number of cited references, the percentage of cited journals or serials, and the percentage of citations to other discipline-specific publication types are determined. This analysis unveils the publications and
journals that are perceived as core by the researcher or research group. The identification of the most cited publications gives us an idea of how researchers work and publish, i.e. whether they focus on their very own universe (reflected in many self-citations) or look beyond it. However, in some small or very specific fields, self-citations are perfectly justified. Identifying the most cited and relevant sources in highly specific research fields is very useful for correcting apparent limitations of subject classifications, and consequently helps to correct deficiencies and errors in the application of journal impact measures. Moreover, the state of the art (the up-to-dateness of the publication years) of the cited references is analyzed and compared with the cited half-life in the corresponding research field. True to our philosophy, our bibliometric services are also intended to help researchers or research groups under evaluation improve their publication strategies. Therefore, we offer an additional (optional) service, namely the identification of three or more competitors in the research field, in order to compare their reference analyses accordingly. This approach can reveal new, previously overlooked publications or sources, and might finally help to expand the knowledge base of the researchers under evaluation.
Focus Analyses

We have extended our bibliometric analysis by including knowledge maps,9 which allow us to identify the research focuses of individuals, research groups or institutions. For this purpose, we generate co-occurrence maps for the selected time period(s) using the following units: title words, title and abstract words, author keywords, and other keywords available in the databases, such as thesaurus keywords or the KeyWords Plus available in the WoS Core Collection. However, the absence of controlled vocabulary in most of the data sources used for bibliometric analysis, as well as the lack of suitable visualization tools that incorporate them, is a big issue. Prior to the generation of the knowledge maps, some data normalization efforts are necessary, which have to be performed manually. The following criteria are usually considered:
1. terminology: avoiding synonyms or quasi-synonyms;
2. grammar: unifying singular and plural, verbal forms, etc.; and
3. spelling variations (e.g. spin-off vs. spinoff; licence vs. license).
KeyWords Plus has proven to be most applicable, corroborated by preceding and less successful analyses performed with title words or author keywords (labeled as DE in the WoS Core Collection). Interestingly, title words lacked relevance, whereas author keywords were not available for almost half of the retrieved items in the WoS Core Collection. This was certainly a lesson learned, and particularly junior scientists should be made aware to choose more meaningful titles and keywords.

9 There is almost no bibliometric topic that has not been addressed by Moed in his monumental work.
The dynamical aspects of science maps resulting from combined co-citation and word analysis also drew his attention in the 1990s (Braam et al., 1991).
For data processing, cleaning and normalization we strongly recommend the use of BibExcel (Persson et al., 2009), while the maps are generated with Pajek (De Nooy et al., 2018) and/or VOSviewer (Van Eck and Waltman, 2010), a tool that makes it possible to identify thematic clusters and their evolution and that has attracted considerable attention in the scholarly community. It is important to highlight that all results have to be interpreted and validated by experts. Unfortunately, neither scientists nor evaluators have given much value or credit to these maps so far; they are often perceived as more "beautiful" than valuable. Nevertheless, they can attract a lot of attention if appropriate comparative sets are used, e.g. maps created for all publications compared with maps generated only for the most significant percentiles (Top 10% or Top 20%). By doing so, it is possible to identify not only the research fronts of the topics with the highest impact, but also the associated authors.
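As a simplified illustration of the manual normalization and the co-occurrence counting that precede map generation (the actual maps are built with the tools named above), consider the following sketch; the variant mapping and the keyword records are invented.

```python
# Normalize keyword variants, then count keyword co-occurrences per record.
from collections import Counter
from itertools import combinations

variants = {"spinoff": "spin-off", "licence": "license", "universities": "university"}

def normalize(keyword):
    kw = keyword.lower().strip()
    return variants.get(kw, kw)

records = [
    ["Spin-off", "University", "Licence"],
    ["spinoff", "patents", "university"],
    ["License", "patents"],
]

cooccurrence = Counter()
for keywords in records:
    terms = sorted(set(normalize(k) for k in keywords))
    cooccurrence.update(combinations(terms, 2))

for pair, weight in cooccurrence.most_common(3):
    print(pair, weight)
# e.g. ('spin-off', 'university') appears twice once the spelling variants are unified
```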
Funding Information

Furthermore, funding analyses are included in our bibliometric reports. These analyses are performed in WoS and Scopus, provide quite reliable results for publications from 2008 onwards (Costas and van Leeuwen, 2012), and inform about the number and percentage of funded publications as well as the main funding agencies. Since its inclusion, funding information has been improved in all databases, especially in Dimensions, where it features prominently. In our analyses, we always consider the percentage of publications with funding information and compare it with the percentage expected in each category. Unfortunately, there are no reference values by category. For this reason, we are calculating them for each discipline over the last 15 years in a joint project with the Carlos III University. The results will be published soon.
Other Metrics Including Web Attention

Citations are used to assess the impact that publications have received. However, this approach is only feasible within the scholarly "publish or perish" community. In some disciplines, particularly the Humanities and the Social Sciences, the addressed audience is much broader. It includes not only the entire scholarly community, including teaching academics, but also the other sectors of the so-called triple helix, i.e. the governmental and industrial sectors. Moreover, research output is sometimes addressed to society as a whole. Therefore, exhaustive and modern bibliometric analyses should include metrics other than citations, in order to paint a more complete picture of the broader impact (Haustein, 2016; Moed, 2017). In his editorial contribution to Issue 27 of Research Trends (March 2012), Moed already pointed with great skill to the problems, challenges and limitations of the assessment
of societal impact and its use for academic evaluation. He has always recommended a multidimensional approach including a whole amalgam of indicators to inform about each aspect or dimension. In his book Applied Evaluative Informetrics, he dedicated special attention to the application context of quantitative research assessment. Highlighting the potential of new informetric techniques, he hints at a series of new features that could be implemented in future assessment processes, and sketches a perspective on the appropriate use of altmetrics (Moed, 2017). In Chap. 9, he even refers to two crucial issues in the assessment of societal impact, related to the complexity of assessing societal value and to the time delay with which a scientific achievement may generate societal impact, respectively (Moed, 2017). Our experience shows that the following issues should be taken into account.
Conceptual Issues

There are two divergent trends in the practical use of new metrics, well reflected by two of the most widely used products in this regard, namely Altmetric.com and PlumX. On the one hand, Altmetric.com has gained momentum in the publishing world. Its "altmetric" concept includes other metrics (even citations from other data sources, like Wikipedia, policy documents, etc.). The product relies on a composite indicator, the "Altmetric Attention Score", whose name underlines what it is indeed trying to measure: the mere attention that a publication or a work has received in the internet universe. The composite indicator is represented by the already well-known donut. On the other hand, PlumX was especially designed for measuring all metrics. We have thus moved from the concept of "altmetrics" to "all metrics". PlumX puts its emphasis on maintaining a multidimensional character by distinguishing between five major criteria or categories: citations, usage metrics, captures, mentions and social media. The graphic representation is a kind of "flower", "jewel" or "trinket", in which the different colors and dimensions of the petals symbolize the diversity of the five measurements. By now PlumX has been acquired by Elsevier and incorporated into Scopus as one of its latest novelties. We have always criticized composite indicators and identified this approach as a shortcoming of Altmetric.com (Gumpenberger et al., 2016). However, PlumX is far from perfect. In its commitment to maintain a multidimensional character, PlumX had to introduce a classification, which is not free from criticism and deficiencies. Creating a solid and robust classification system is a challenging task, and one deemed to be more appropriate for a group of experts than for a commercial company. It is true, however, that no group of experts has ever built a bibliometric solution adopted by the entire community, while the tools provided by commercial companies are widely used.
Furthermore, two main groups of indicators or measures should be distinguished according to their different obsolescence patterns: long-term indicators (e.g. citations) and short-term indicators (usage, mentions, social media). In some cases, and particularly for some specific target groups, short-term indicators can act as predictors of long-term indicators. Rather than predictors, however, these new metrics are simply additional instruments for achieving a more comprehensive view of the resonance of publications, which is now known as "broader impact". Taking the different obsolescence patterns into account would avoid absurd and insignificant correlations.
Different Ways to Count

In a recent analysis carried out to compare the results from both tools, PlumX and Altmetric.com, we found that only one indicator was comparable in both sources, namely the number of captures in Mendeley (Peters et al., 2017). Altmetric.com is more inclined to measure the total number of users, while PlumX relies on the total number of signals. Unfortunately, no general rule or consensual approach was detectable in either of the two sources. Since all this information is available anyway, it would make sense to combine both approaches for each publication year and for each measuring window, and to finally group the obtained results by institutions, countries and sectors. Let us compare, for a moment, with the citation metric. We have all learned to distinguish between the total number of citations received and the number of citing articles, but we have not considered the number of citing authors. Would it then count ten times more to be cited by a work signed by ten authors than to be cited by a single-authored work? Another difference between citation metrics and altmetrics is what we have named the "mystery of zero". In altmetrics, there are no zero counts: at least one signal must have been traced, otherwise no information is available. In contrast, citation and even usage metrics rely on the percentage of uncited documents as a usual indicator. The discrepancy is due to the two "different universes" that we need to consider in each case. The key to the mystery lies in the structure of the citation indices (like the Science Citation Index, and nowadays the Web of Science Core Collection). It is common knowledge that such an index consists of two central parts, namely the source part or Core Collection (hence the importance of this word in its name), and the citation part. These are indeed two different universes with different characteristics: a limited and controlled source part versus an unlimited and uncontrolled citation part. Publications that have not been cited by WoS CC indexed journals are given zero credit, even if they have been cited by non-WoS CC indexed journals, whereas results obtained from the Cited Reference Search in the citation part only reflect documents that have been cited at least once, just as in the common altmetrics tools (Gorraiz, 2018).
Different Processes: A Citation Is More Than an Attention Signal

The process of citing takes place between equals: one publication cites another, and the two publications are comparable entities. The situation is quite different in the realm of new metrics, though. Here we have a user (sometimes not even the author of anything) who views, downloads, comments on or discusses a publication (Gorraiz, 2018). There is also an essential difference in the effort required to produce the one or the other. Citations are assumed to be generated in the process of a creative act such as writing a publication, whereas altmetric signals are mostly the result of a mere reaction, such as pressing a button to indicate interest, approval or liking. General acceptance of altmetrics within the scientific community is a prerequisite for using them to a greater extent (Haustein et al., 2014). Our experience to date has shown that altmetrics are still in their infancy, and acceptance and uptake among researchers are slow, particularly in the humanities. Our analyses so far resulted in a low proportion of publications with data and showed low relevance. Apart from citations and (always incomplete) usage data, only the numbers of readers and tweets reached some significance. However, these two indicators lack applicable value for academic evaluation practices in the opinion of our scientists and science policy makers. Only mentions in the news were considered somewhat meaningful as an indicator for assessing societal impact. Therefore, the use of these new metrics for evaluative purposes is still very controversial and challenging. Unresolved issues such as standardization, stability, reliability, completeness, interrelationship, scalability and normalization of the collected data are still to be tackled (Haustein, 2016). Moreover, these new metrics should rather be appealing for researchers to promote their research outputs, enhance their visibility and increase the likelihood of being cited. Finally, to conclude these comparisons, we would like to re-emphasize that the emergence and rise of the new metrics in no way signifies a replacement or weakening of citation analysis, but is rather intended as a reinforcement, underlining its special value and necessity. In addition, classic and new metrics are intertwined and mutually supportive. The new metrics are helpful to unmask how strongly citations are influenced by social networks and other factors.
Facing an Uncertain Future

Currently more than one publication is released per second, and each can potentially be promoted and multiplied in all traditional and novel communication channels. This development sparks the debate whether this really means progress for scholarly communication or not. What if we are perhaps already building a veritable Tower of Babel, where millions of scientists talk or write at the same time and produce
billions of papers, talks, emails, blog entries, tweets, etc., to be evaluated, discussed, mentioned, commented on, re-blogged, re-tweeted and scored by others? This virtual information tower is constantly growing, and its builders and visitors are losing the ability to listen to and understand each other. It is like a science fiction movie in which only on some floors the most privileged (like WoS CC users) have access to selected and controlled content of the scientific communication flow. For them the gained information has a meaning at the end of the day, whereas the less privileged on other floors remain excluded. Open access to scientific content is gaining momentum. On the one hand, it contributes to increasing the height of the tower; on the other hand, it will hopefully improve access to the relevant information. Nevertheless, only a very small share of all produced information, the tip of the iceberg, is actually visible, accessible and widely used. Visibility is no longer achieved merely by publishing in prestigious journals, but will increasingly be affected by additional promotion activities, which is actually more about money than merit. Therefore, there is a danger that these new metrics, and especially the ones related to social media, open the door to a radical change in the sciences. There is legitimate concern that altmetrics turn science communication into a marketing competition rather than foster a focus on true merits. We are now at a turning point for the future development of science communication processes. All involved sectors are called upon to act now and to respond to the challenge posed by this flood of metrics and new data, in order to use them in a coherent and responsible manner. Last but not least, we would particularly like to underline the importance of monitoring altmetrics data at national and institutional level and providing them as complementary information in our bibliometric reports.
Conclusions

Paying tribute to one of the foremost experts in scientometrics and informetrics is best achieved by offering a deep reflection on more than a decade of bibliometric services at the University of Vienna. Practical applications of this new discipline have been strongly influenced by the works, theories and ideas of Henk Moed. He also plays a crucial role in the bibliometric world of our university: he actively supported us as a program chair in the organization of two bibliometric conferences (STI 2008 and ISSI 2013), both hosted in Vienna, and still supports us with his immense expertise and enthusiastic engagement in our annual summer school (esss). In his book Applied Evaluative Informetrics, Henk Moed expresses his critical views on a series of fundamental problems in the current use of scientometric indicators in research assessment (Moed, 2017). Our contribution to this Festschrift is our humble continuation of those critical reflections, in order to underline and enrich them with practical insights. This review article combines ideas already discussed in previous articles with new and to date unpublished thoughts. It is rather meant to be a kind
of confession (as reflected in the title) than a manifesto. Unlike authors of manifestos, who generally claim a certain degree of authority to tell others what they should do or avoid, our contribution should be understood as a proof of humility or modesty. The concept of humility has already been recommended by the Metric Tide,10 which recognizes that quantitative evaluation should support, but not supplant, qualitative expert assessment, and which also acknowledges that neither our bibliometric methods, nor our data sources, nor our indicators are immaculate or perfect. For this reason, it was our intention to present some of the most burning problems and challenges that bibliometricians encounter when carrying out their analyses. Performing bibliometric analyses always requires a clear structure, a formulation of the aim of the assessment, a thorough discussion and consideration of the disciplinary peculiarities, a smart selection of appropriate data sources and a corresponding validation of the data to be used, a presentation and discussion of the results, and finally a clarification of the inherent limitations (e.g. accuracy, completeness, etc.). A single incorrect analysis can not only damage the career of a scientist or the future of a whole department, but also one's own professional reputation as a bibliometrician. We have also pointed out that the choice of data sources, different calculation methods, as well as diverging definitions of indicators can seriously alter the results of our analyses. We must therefore act with great caution and, if possible, apply different methods to check the reliability of our analyses. Quick and dirty analyses should be avoided at all times. Moreover, we should always bear in mind that we can only reveal signs, trends or irregularities, and that our analyses are not intended to provide a final diagnosis. Our reports are rather meant to ask the right questions and should contribute to a better understanding of the entire publication process. Finally, we would like to stress once again that the task of bibliometricians, and particularly of librarians providing bibliometric services, is not solely to support research administrators in their assessment exercises, but also to guide researchers, and especially the younger generation, to optimize their publication strategies in order to achieve greater visibility (Gorraiz et al., 2017). If we grant ourselves the power and authority to assist in the evaluation of the research outputs of scientists and research groups, we also need to be able to tell them how they can do better. We had the great honor to discuss all these insights in long and entertaining conversations with Henk Moed, and we are looking forward to the next opportunity to comment on new trends and to share ideas.
10 Report of the Independent Review of the Role of Metrics in Research Assessment and Management, http://www.hefce.ac.uk/pubs/rereports/year/2015/metrictide/.
References Archambault, É., Beauchesne, O. H., & Caruso, J. (2011). Towards a multilingual, comprehensive and open scientific journal ontology. In Proceedings of the 13th international conference of the international society for scientometrics and informetrics (pp. 66–77). South Africa: Durban. Bar-Ilan, J. (2008). Which h-index?—A comparison of WoS, Scopus and Google Scholar. Scientometrics, 74(2), 257–271. Bergstrom, C. (2007). Eigenfactor: Measuring the value and prestige of scholarly journals. College & Research Libraries News, 68(5), 314–316. Bergstrom, C. T., West, J. D., & Wiseman, M. A. (2008). The eigenfactor™ metrics. Journal of Neuroscience, 28(45), 11433–11434. Bollen, J., Luce, R., Vemulapalli, S. S., & Xu, W. (2003). Usage analysis for the identification of research trends in digital libraries. D-Lib Magazine, 9(5), 1082–9873. Bollen, J., & Van de Sompel, H. (2006). Mapping the structure of science through usage. Scientometrics, 69(2), 227–258. Bollen, J., Rodriguez, M. A., & Van de Sompel, H. (2007). MESUR: Usage-based metrics of scholarly impact (No. LA-UR-07-0663). Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Bollen, J., & Sompel, H. V. D. (2008). Usage impact factor: The effects of sample characteristics on usage-based impact metrics. Journal of the American Society for Information Science and Technology, 59(1), 136–149. Bornmann, L., & Leydesdorff, L. (2014). Scientometrics in a changing research landscape. EMBO Reports, 15(12), 1228–1232. Braam, R. R., Moed, H. F., & Van Raan, A. F. (1991). Mapping of science by combined cocitation and word analysis. II: Dynamical aspects. Journal of the American Society for Information Science, 42(4), 252–266. Braun, T., Glänzel, W., & Schubert, A. (2006). A Hirsch-type index for journals. Scientometrics, 69(1), 169–173. Costas, R., van Leeuwen, T., & Bordons, M. (2010). Self-citations at the meso and individual levels: effects of different calculation methods. Scientometrics, 82(3), 517–537. Costas, R., & van Leeuwen, T. N. (2012). Approaching the “reward triangle”: General analysis of the presence of funding acknowledgments and “peer interactive communication” in scientific publications. Journal of the American Society for Information Science and Technology, 63(8), 1647–1661. De Nooy, W., Mrvar, A., & Batagelj, V. (2018). Exploratory social network analysis with Pajek: Revised and expanded edition for updated software (Vol. 46). Cambridge University Press. Donner, P. (2018). Effect of publication month on citation impact. Journal of Informetrics, 12(1), 330–343. Garfield, E., & Sher, I. H. (1963). New factors in the evaluation of scientific literature through citation indexing. American Documentation, 14(3), 195–201. Garfield, E. (1972). Citation analysis as a tool in journal evaluation. Science, 178(4060), 471–479. Garfield, E. (1994). The relationship between citing and cited publications: A question of relatedness. Current Contents, 13. Garfield, E. (2004). The agony and the ecstasy—The history and meaning of the journal impact factor. J Biol Chem, 405017(6.355), 6585. Glänzel, W., & Moed, H. (2002). Journal impact measures in bibliometric research. Scientometrics, 53(2), 171–193. González-Pereira, B., Guerrero-Bote, V., & Moya-Anegon, F. (2009). The SJR indicator: A new indicator of journals’ scientific prestige. arXiv preprint arXiv:0912.4141. Gorraiz, J., & Gumpenberger, C. (2010). Going beyond Citations: SERUM—A new tool provided by a network of libraries. Liber Quarterly, 20(1), 80–93.
Gorraiz, J., Purnell, P. J., & Glänzel, W. (2013). Opportunities for and limitations of the Book Citation Index. Journal of the American Society for Information Science and Technology, 64(7), 1388–1398. Gorraiz, J., Gumpenberger, C., & Schlögl, C. (2014). Usage versus citation behaviours in four subject areas. Scientometrics, 101(2), 1077–1095. Gorraiz, J., & Gumpenberger, C. (2015). A flexible bibliometric approach for the assessment of professorial appointments. Scientometrics, 105(3), 1699–1719. Gorraiz, J., Wieland, M., & Gumpenberger, C. (2016). Individual bibliometric assessment @ University of Vienna: From numbers to multidimensional profiles. El Profesional de la Informacion, 25(6), 901–915. Gorraiz, J., Wieland, M., & Gumpenberger, C. (2017). To be visible, or not to be, that is the question. International Journal of Social Science and Humanity, 7(7), 467–471. Gorraiz, J. (2018). A thousand and one reflections of the publications in the mirrors' labyrinth of the new metrics. El profesional de la información, 27(2), 231–236. http://www.elprofesionaldelainformacion.com/contenidos/2018/mar/01.pdf. Gumpenberger, C., Wieland, M., & Gorraiz, J. (2012). Bibliometric practices and activities at the University of Vienna. Library Management, 33(3), 174–183. Gumpenberger, C., Wieland, M., & Gorraiz, J. (2014). Bibliometrics and libraries - a promising liaison. Zeitschrift für Bibliothekswesen und Bibliographie, 61(4–5), 247–250. Gumpenberger, C., Glänzel, W., & Gorraiz, J. (2016). The ecstasy and the agony of the altmetric score. Scientometrics, 108(2), 977–982. Halevi, G., & Moed, H. F. (2014). Usage patterns of scientific journals and their relationship with citations (pp. 241–251). Context Counts: Pathways to Master Big and Little Data. Halevi, G., Moed, H. F., & Bar-Ilan, J. (2016). Does research mobility have an effect on productivity and impact? International higher education, 86, 5–6. Halevi, G., Moed, H., & Bar-Ilan, J. (2017). Suitability of Google Scholar as a source of scientific information and as a source of data for scientific evaluation—Review of the literature. Journal of informetrics, 11(3), 823–834. Haustein, S., Peters, I., Bar-Ilan, J., Priem, J., Shema, H., & Terliesner, J. (2014). Coverage and adoption of altmetrics sources in the bibliometric community. Scientometrics, 101(2), 1145–1163. Haustein, S. (2016). Grand challenges in altmetrics: heterogeneity, data quality and dependencies. Scientometrics, 108(1), 413–423. Kurtz, M. J., & Bollen, J. (2011). Usage bibliometrics. arXiv preprint arXiv:1102.2891. Kousha, K., Thelwall, M., & Rezaie, S. (2011). Assessing the citation impact of books: The role of Google Books, Google Scholar, and Scopus. Journal of the American Society for Information Science and Technology, 62(11), 2147–2164. Leydesdorff, L., & Opthof, T. (2010). Remaining problems with the "New Crown Indicator" (MNCS) of the CWTS. arXiv preprint arXiv:1010.2379. Leydesdorff, L., & Felt, U. (2012). Edited volumes, monographs, and book chapters in the Book Citation Index (BKCI) and Science Citation Index (SCI, SoSCI, A&HCI). arXiv preprint arXiv:1204.3717. Moed, H. F. (1988). The use of online databases for bibliometric analysis. In L. Egghe & R. Rousseau (eds.), Informetrics 87/88 (pp. 15–28). Elsevier Science Publishers, Amsterdam. ISBN 0-444-70425-6. Moed, H. F. (2000). Bibliometric indicators reflect publication and management strategies. Scientometrics, 47(2), 323–346. Moed, H. F. (2005a). Citation analysis of scientific journals and journal impact measures.
Current Science, 1990–1996. Moed, H. F. (2005b). Statistical relationships between downloads and citations at the level of individual documents within a single journal. Journal of the American Society for Information Science and Technology, 56(10), 1088–1097. Moed, H. F. (2006). Citation analysis in research evaluation (Vol. 9). Springer Science & Business Media.
Moed, H. F. (2007a). The future of research evaluation rests with an intelligent combination of advanced metrics and transparent peer review. Science and Public Policy, 34(8), 575–583. Moed, H. F. (2007b). The effect of “open access” on citation impact: An analysis of ArXiv’s condensed matter section. Journal of the American Society for Information Science and Technology, 58(13), 2047–2054. Moed, H. F. (2008). UK research assessment exercises: Informed judgments on research quality or quantity? Scientometrics, 74(1), 153–161. Moed, H. F. (2010). Measuring contextual citation impact of scientific journals. Journal of informetrics, 4(3), 265–277. Moed, H. F., Colledge, L., Reedijk, J., Moya-Anegon, F., Guerrero-Bote, V., Plume, A., et al. (2012). Citation-based metrics are appropriate tools in journal assessment provided that they are accurate and used in an informed way. Scientometrics, 92(2), 367–376. Moed, H. F. (2013). New perspectives on the Arts & Humanities. Research Trends, 32, 1. Moed, H. F., & Halevi, G. (2015). Multidimensional assessment of scholarly research impact. Journal of the Association for Information Science and Technology, 66(10), 1988–2002. Moed, H. F. (2017). Applied evaluative informetrics. Springer International Publishing. ISBN: 978-3-319-60521-0 Persson, O., Danell, R., & Schneider, J. W. (2009). How to use Bibexcel for various types of bibliometric analysis. Celebrating scholarly communication studies: A Festschrift for Olle Persson at his 60th Birthday, 5, 9–24. Peters, I., Kraker, P., Lex, E., Gumpenberger, C., & Gorraiz, J. I. (2017). Zenodo in the spotlight of traditional and new metrics. Frontiers in Research Metrics and Analytics, 2, 13. Pudovkin, A. I., & Garfield, E. (2002). Algorithmic procedure for finding semantically related journals. Journal of the American Society for Information Science and Technology, 53(13), 1113– 1119. Repiso, R., Gumpenberger, C., Wieland, M., & Gorraiz, J. (2019). Impact measures in the humanities: A blessing or a curse? Book of Abstracts QQML 2019. http://qqml.org/wp-content/uploads/ 2017/09/Book-of-Abstracts_Final_AfterConf_v1.pdf Robinson-Garcia, N., Sugimoto, C. R., Murray, D., Yegros-Yegros, A., Larivière, V., & Costas, R. (2019). The many faces of mobility: Using bibliometric data to measure the movement of scientists. Journal of Informetrics, 13(1), 50–63. Torres-Salinas, D., Robinson-García, N., Cabezas-Clavijo, Á., & Jiménez-Contreras, E. (2014). Analyzing the citation characteristics of books: edited books, book series and publisher types in the book citation index. Scientometrics, 98(3), 2113–2127. Torres-Salinas, D., Gumpenberger, C., & Gorraiz, J. (2017). PlumX as a potential tool to assess the macroscopic multidimensional impact of books. Frontiers in Research Metrics and Analytics, 2, 5. Van Eck, N. J., & Waltman, L. (2010). Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics, 84(2), 523–538. Vinkler, P. (2004). Characterization of the impact of sets of scientific papers: The Garfield (impact) factor. Journal of the American Society for Information Science and Technology, 55(5), 431–435. Vinkler, P. (2010). The evaluation of research by scientometric indicators. Oxford [u.a.]: CP, Chandos Publishing XXI, 313 S. ISBN: 1-84334-572-2. Waltman, L., van Eck, N. J., van Leeuwen, T. N., Visser, M. S., & van Raan, A. F. (2011). Towards a new crown indicator: An empirical analysis. Scientometrics, 87(3), 467–481.
A Comparison of the Citing, Publishing, and Tweeting Activity of Scholars on Web of Science

Rodrigo Costas and Márcia R. Ferreira

R. Costas: Centre for Science and Technology Studies (CWTS), Leiden University, Leiden, The Netherlands; Centre for Research on Evaluation, Science and Technology (CREST), Stellenbosch University, Stellenbosch, South Africa. M. R. Ferreira: Complexity Science Hub Vienna, Vienna, Austria; TU Wien (Vienna University of Technology), Vienna, Austria.
Introduction

Social media platforms such as Twitter, Facebook, ResearchGate, and Mendeley enable researchers to make research outcomes widely available. This phenomenon has led to an explosion of digital traces related to different types of scholarly communication that can be quantified. The study and validation of those digital traces, in particular social media indicators, against conventional bibliometric indicators was the main focus of the field of "altmetrics" or "social media metrics" (Sugimoto et al., 2017; Wouters, Zahedi, & Costas, 2018). These indicators were also proposed as complementary to conventional research evaluation methods (Wouters & Costas, 2012). Several altmetric studies have focused on the topic of scholars on Twitter. However, these studies were limited to scholars from specific disciplines (e.g. astrophysics in Haustein et al., 2014; computer science in Teka Hadgu & Jäschke, 2014; bibliometrics in Martín-Martín, Orduña-Malea, & Delgado López-Cózar, 2018), specific countries (e.g. South Africa in Joubert & Costas, 2019), or small datasets of manually identified researchers (76 researchers in Ortega, 2016, and fewer than 60 researchers across 10 disciplines in Holmberg & Thelwall, 2014). An important common finding of this research is the lack of a strong association between tweeting activities and scholarly activities, as measured by altmetric and bibliometric indicators. In his discussion of the field of altmetrics, Henk Moed suggested that the social media activity of scholars could be seen as "traces of the computerization of the
research process” (Moed 2016, p. 362), in addition to other traces such as publications and citations. Moed (p. 362) proposed four aspects of the computerization of research: "collection of research data and development of research methods; scientific information processing; communication and organization; and […] research assessment". He further added that the "communication and organization" (p. 362) aspect of scholarship essentially relates to how "researchers communicate and organize themselves, and how the new technologies […] could improve these processes" (p. 365). By this Moed hinted at the idea that web-based indicators may reflect other aspects of the research process that are related to public communication, societal engagement, and social networking rather than scientific impact. In this chapter, we extend Moed's perspective on the computerization of research by analyzing scholarship processes on Twitter and by studying how these processes relate to the scholarly activities of researchers. We describe and compare the main characteristics of researchers identified on Twitter in terms of their research activities (i.e. citing and publishing) and their tweeting activities. We think that indicators based on digital traces are not only useful for studying the alternative aspects of the performance of researchers, but also for understanding their communication context, networking and socialization practices. More precisely, our aim is to provide answers to the following research questions:
• What is the relationship between bibliometric indicators and Twitter-based indicators?
• What do individual researchers author, cite and tweet?
• How many of their own publications do scientists self-cite and self-tweet?
• What is the impact of the publications authors self-mention?
• Do they self-mention their best publications?
• How many publications do researchers publish, cite, and tweet given their academic age?
• What is the cognitive similarity between what scientists publish, cite, and tweet?
To answer these questions, we use a large-scale database of scholars on Twitter from all disciplines that has been linked to researchers' publication, citation and tweeting activities (Costas, Mongeon, Ferreira, Van Honk, & Franssen, 2019). With this information we connect the scholarly activities of individual scholars (e.g. publication or citation activities) to their Twitter activities, an aspect that has been neglected by previous research (e.g. Ke et al., 2017) related to the identification of scholars from all fields on Twitter. The chapter is organized as follows. In the next section, we introduce the context and methodology. Then, we describe the data and present our results. The chapter ends with a summary of the key findings and recommendations for future research that we consider important to the study of the Twitter activities of researchers.
Database and Methods

The main data source used in this study is a list of disambiguated authors in the Web of Science (WoS) linked to Twitter accounts corresponding to the same individuals. The original list consists of over 162,000 distinct authors in the WoS who have tweeted at least one publication recorded in Altmetric.com until October 2017. Each author in this list has been matched to a Twitter account (also known as a tweeter) using a rule-based scoring algorithm developed by Costas et al. (2019).1 The aim of the scoring algorithm is to identify pairs of researchers and their Twitter accounts. We only use pairs that have a matching score greater than 4, and where the Twitter account is the unique best match for the researcher and vice versa. We limit the researchers in the dataset to those who have tweeted at least one publication (based on Altmetric.com data) and have at least one WoS publication published and cited in the period 2012–2017. We only consider publications with a Digital Object Identifier (DOI). The final dataset consists of 124,569 researchers and 2,832,335 publications that were authored, cited, or tweeted by those researchers. For each publication in this set we calculated several bibliometric and altmetric indicators, such as the number of citations until 2018 and the number of tweets until October 2017. Additional citation-based indicators were calculated using all publications in the WoS database, i.e., they were not limited to the 2.8 million publications set described earlier, but include all publications published between 2012 and 2018. This gives us a broader view of the citation impact of the different sets of publications obtained at the individual level. Self-citations, i.e., both author and co-author self-citations (Costas, Van Leeuwen, & Bordons, 2010), were excluded. Twitter-based indicators were calculated in a similar way to citation-based indicators. Here we consider original tweets and retweets received by publications in the entire Altmetric.com database. The indicators were not restricted to tweets by the scholars included in the analysis. The Altmetric.com database version used in the study was up to date until October 2017, and the window for tweet counts is 2012–October 2017. Finally, all citation-based and tweet-based publication-level scores were grouped for each scholar.
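The filtering step described above (a score threshold combined with a reciprocal unique best match) can be sketched as follows. The candidate pairs and the helper function are illustrative only and do not reproduce the original scoring algorithm.

```python
# Keep only researcher-account pairs with score > 4 that are each other's best match.

candidate_matches = [           # (researcher_id, twitter_account, matching_score)
    ("res1", "@alice", 7.0),
    ("res1", "@alice_lab", 3.0),
    ("res2", "@alice", 5.0),
    ("res3", "@bob", 6.5),
]

def best(pairs, key_index):
    """Best-scoring pair per key (researcher or account); ties are ignored in this sketch."""
    best_of = {}
    for pair in pairs:
        key = pair[key_index]
        if key not in best_of or pair[2] > best_of[key][2]:
            best_of[key] = pair
    return best_of

by_researcher = best(candidate_matches, 0)
by_account = best(candidate_matches, 1)

selected = [
    (r, a) for (r, a, s) in candidate_matches
    if s > 4 and by_researcher[r][:2] == (r, a) and by_account[a][:2] == (r, a)
]
print(selected)  # [('res1', '@alice'), ('res3', '@bob')]
```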
Individual-Level Indicators

We calculated indicators for each of the authors in our dataset on the basis of the 2.8 million publications set described in the previous section. The indicators (see Table 1) were calculated based on the type of relationship (i.e. as authors, as citers, or as tweeters) an author has with a publication. Each researcher is thus characterized by a profile of bibliometric and Twitter indicators based on three different sets of

1 A similar approach was already outlined in a previous stage (Costas, Van Honk, & Franssen, 2017);
the new one is a more advanced and refined version of that approach.
publications: the publications they have authored, the publications they have cited, and the publications they have tweeted. Table 1 shows the selection of indicators used in the study. The selection comprises size-dependent indicators, i.e., indicators that capture the overall production or activities of a scholar, such as publications, citations, and tweets, and size-independent indicators such as the mean normalized citation score, the share of self-cited publications, and mean tweets per publication.

Table 1 List of indicators calculated at the individual-level for researchers identified on Twitter

Variable | Description
[yfp] | Year of first publication of the researcher (see Nane, Costas, & Lariviere (2017) for a discussion of this indicator)
[tweets_to_papers] | Tweets sent to papers by the Twitter account of the researcher
[followers] | Number of followers of the researcher on Twitter
[p_authored] | Number of publications authored by the researcher. This indicator may also be referred to as [p]
[tcs_authored] | Total citation score of the authored publications. This indicator may also be referred to as [tcs]
[mncs_authored] | Mean Normalized Citation Score (MNCS) of the authored publications. This indicator may also be referred to as [mncs]
[tws_authored] | Total number of tweets to authored publications. This indicator may also be referred to as [tws]
[mtws_authored] | Mean number of tweets to authored publications. This indicator may also be referred to as [mtws]
[p_cited] | Number of distinct publications cited by the researcher
[mncs_cited] | MNCS of publications cited
[mtws_cited] | Mean number of tweets to cited publications
[p_self_cited] | Number of publications self-cited (i.e. publications authored that have been self-cited at least once) by the researcher
[mncs_self_cited] | MNCS of publications self-cited by the researcher
[mtws_self_cited] | Mean number of tweets to self-cited publications
[p_tweeted] | Number of distinct publications tweeted by the researcher
[mncs_tweeted] | MNCS of publications tweeted by the researcher
[mtws_tweeted] | Mean number of tweets to publications tweeted by the researcher
[p_self_tweeted] | Number of publications self-tweeted (i.e. publications authored that have been self-tweeted at least once) by the researcher
[mncs_self_tweeted] | MNCS of publications self-tweeted by the researcher
[mtws_self_tweeted] | Mean number of tweets to self-tweeted publications
Self-mention Indicators

The different forms of "self-mentions" by researchers are also analyzed. Self-mentions refer to self-citations or self-tweets. Following Aksnes (2003), we adopt a "synchronous" perspective, meaning that we study the self-mentions that are "given" by the researchers to their own publications. This is a slightly different approach than the "diachronous" approach, which focuses on the number and share of self-mentions (citations or tweets) that an author "receives". The diachronous approach is often used in scientometrics by studies that focus on the impact of publications and researchers. The synchronous perspective is chosen for this study because we are interested in the information activities of researchers (i.e. how they choose, or not, their own publications in their citing and tweeting behavior). Accordingly, we also use the concept of "author self-mention", following the suggestion by Costas et al. (2010) of only considering self-mentions when authors cite or tweet one of their own publications. Table 2 shows the indicators for capturing the self-mention activities of scholars. With these indicators, we quantify the share of their own publications that authors cite and tweet.

Table 2 Indicators of relative self-mentioning at the individual-level for researchers identified on Twitter

Variable | Description
[pp_authored_self_cited] | Proportion of authored publications that are self-cited (at least once). ([p_self_cited]/[p_authored])
[pp_authored_self_tweeted] | Proportion of authored publications that are self-tweeted (at least once). ([p_self_tweeted]/[p_authored])
[pp_cited_self_cited] | Proportion of cited publications that are self-cited (at least once). ([p_self_cited]/[p_cited])
[pp_tweeted_self_tweeted] | Proportion of tweeted publications that are self-tweeted (at least once). ([p_self_tweeted]/[p_tweeted])
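A minimal sketch of how the four proportions in Table 2 can be derived from simple sets of publication identifiers for one researcher; the sets themselves are invented for illustration.

```python
# Publication-ID sets for one researcher (hypothetical data).
authored = {"p1", "p2", "p3", "p4"}
cited = {"p1", "p2", "x1", "x2", "x3"}    # distinct publications cited by the researcher
tweeted = {"p1", "t1", "t2", "t3"}        # distinct publications tweeted by the researcher

self_cited = authored & cited       # authored publications cited at least once by the researcher
self_tweeted = authored & tweeted   # authored publications tweeted at least once by the researcher

indicators = {
    "pp_authored_self_cited": len(self_cited) / len(authored),
    "pp_authored_self_tweeted": len(self_tweeted) / len(authored),
    "pp_cited_self_cited": len(self_cited) / len(cited),
    "pp_tweeted_self_tweeted": len(self_tweeted) / len(tweeted),
}
print(indicators)  # 0.5, 0.25, 0.4 and 0.25 for this invented example
```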
Cognitive Distance Indicators

Following Mongeon (2018) and Mongeon et al. (2018), we compute cosine scores (Salton & McGill, 1986) to measure the cognitive similarity between the works published, cited, and tweeted by individual researchers. For each researcher we count the numbers of distinct papers cited, tweeted, and authored and aggregate these counts to the level of Web of Science Journal Subject Categories (JSCs). The WoS classification scheme contains 250 elements, and each journal is assigned to one or a few JSCs. The cosine similarity is calculated as follows. Suppose author A published three papers in one journal which has been classified under a single JSC, say B. The same
author also tweeted a total of three publications assigned to the same JSC. This means that the author has published and tweeted in the same field between 2012 and 2017, and therefore the cosine similarity is 1. By contrast, the cosine similarity will be 0 for an author who only tweeted publications from JSCs in which they have not published. The cosine similarity between two JSC count vectors A and B is given by the following formula:

$$\text{similarity} = \cos(\theta) = \frac{A \cdot B}{\|A\|\,\|B\|} = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^{2}}\;\sqrt{\sum_{i=1}^{n} B_i^{2}}}$$
The same procedure is applied to compute the following three measures of similarity at the researcher level:
– [au_cit_cos]: cosine similarity between papers authored and papers cited by a researcher.
– [au_tw_cos]: cosine similarity between papers authored and papers tweeted by a researcher.
– [tw_cit_cos]: cosine similarity between papers tweeted and papers cited by a researcher.
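A small sketch of this computation, using JSC-level count dictionaries as the vectors A and B; the category names and counts are invented, mirroring the worked example above.

```python
# Cosine similarity between two JSC count profiles of one researcher.
from math import sqrt

def cosine(a, b):
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

authored = {"Information Science": 3}
tweeted = {"Information Science": 3}                       # same field -> similarity 1
cited = {"Computer Science": 2, "Information Science": 1}  # partly different fields

print(round(cosine(authored, tweeted), 2))  # 1.0, i.e. [au_tw_cos]
print(round(cosine(authored, cited), 2))    # 0.45, i.e. [au_cit_cos]
```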
Results

What is the relationship between bibliometric indicators and Twitter-based indicators?

In Table 3 we present the main descriptive values of bibliometric and Twitter-based indicators.

Table 3 Descriptive statistics for the 124,569 scholars identified on Twitter

Variable | Mean | S.D. | 25% | 50% | 75% | Min. | Max.
yfp | 2007.97 | 8.04 | 2004 | 2010 | 2014 | 1913 | 2017
tweets_to_papers | 52.65 | 250.7 | 3 | 9 | 34 | 1 | 33,504
followers | 799.05 | 14,658.96 | 48 | 151 | 429 | 0 | 3,192,872
p | 10.43 | 22.75 | 2 | 4 | 11 | 1 | 924
tcs | 231.22 | 939.4 | 9 | 40 | 154 | 0 | 44,413
mcs | 16.22 | 48.8 | 4 | 8.5 | 16.5 | 0 | 3,960.5
mncs | 1.83 | 4.09 | 0.62 | 1.15 | 1.98 | 0 | 326.9
tws | 130.69 | 478.04 | 5 | 21 | 82 | 0 | 26,540
mtws | 14.46 | 68.19 | 1.44 | 4.4 | 12 | 0 | 13,270
The table shows that the researchers in our sample represent a relatively young community, since half of them started to publish in 2010 or later and only about 25% started to publish before 2004. On average, researchers tweet publications approximately 52 times, and the distribution of tweeted publications is skewed, as shown by the median. This suggests that the tweeting activity of scholars is heterogeneous. Interestingly, the number of tweets their papers have received (tws = 130.69) is higher than the number of tweets they have given to papers (tweets_to_papers = 52.65), indicating that the visibility of researchers on Twitter is not only due to their self-promotion activities. Furthermore, the distribution of the number of Twitter followers is skewed, with an average of 799 followers and a median of 151. The researchers included in this study have published an average of 10 papers (median of 4), with an average field-normalized impact (mncs) of 1.83, which is quite high. In fact, the majority of researchers have an mncs higher than 1.15. These results suggest that the population of scholars identified on Twitter has a relatively high bibliometric performance. In Table 4, we correlate the individual-level bibliometric and Twitter-based indicators with each other. Unsurprisingly, there is a negative correlation between the year of first publication (yfp) and most size-dependent indicators (e.g. p, tcs and tws), suggesting that the longer researchers have been active, the more chances they have had to accumulate publications (p), citations (tcs) and tweets (tws).

Table 4 Spearman correlations between variables
These three indicators have moderate-to-high correlation coefficients, implying that the more publications an author has, the more overall citations and tweets they accumulate. Size-independent citation indicators (mcs and mncs) also correlate with each other and are negatively correlated with the year of first publication. Interestingly, the year of first publication is also negatively correlated with the number of tweets to papers and the number of followers. This correlation is weak, but could nonetheless suggest that acquiring experience and visibility on Twitter (i.e. the accumulation of followers) takes time. Looking at the correlations between bibliometric and Twitter-based indicators, the strongest correlations are found between the total number of tweets received (tws) on the one hand, and production (p) and citations (tcs) on the other. The number of followers is largely uncorrelated with the other indicators included in the study, the only exception being a moderate correlation with the number of tweets to papers. This suggests a relatively moderate relationship between the tweeting activity of researchers and the number of other users following them.
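A hedged sketch of the two kinds of tables used in this section, a Table 3-style descriptive summary and a Table 4-style Spearman correlation matrix; the synthetic data and column names merely mirror the variables described in the text and are not the original dataset.

```python
# Hedged sketch (synthetic data, assumed column names): descriptive statistics and
# pairwise Spearman rank correlations for individual-level indicators.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 500  # toy sample; the study uses 124,569 researchers
p = rng.poisson(10, n) + 1                      # publications authored
df = pd.DataFrame({
    "yfp": 2017 - rng.integers(0, 20, n),       # year of first publication
    "p": p,
    "tcs": p * rng.poisson(20, n),              # total citation score
    "tws": p * rng.poisson(12, n),              # total tweets received by the papers
    "tweets_to_papers": rng.poisson(50, n),     # tweets the researcher sent to papers
    "followers": rng.lognormal(5, 1.5, n).astype(int),
})

descriptives = df.describe(percentiles=[0.25, 0.5, 0.75]).T  # Table 3-style summary
spearman = df.corr(method="spearman").round(2)               # Table 4-style matrix
print(descriptives.round(2))
print(spearman)
```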
What Do Individual Researchers Author, Cite, and Tweet?

Here, we present the results for the outputs researchers authored, cited, and tweeted. Table 5 shows the descriptive values of the differences between publications authored and tweeted. On average, researchers have tweeted approximately 26 publications and have authored approximately 10 publications. This supports the obvious fact that tweeting is a more effortless activity than authoring publications. However, the fact that only 25% of the scholars studied have tweeted approximately 14 or more publications implies that, although tweeting is an effortless activity, researchers do not tweet publications aimlessly. In terms of the impact of the publications selected to be tweeted by each individual, the results show that the mncs of the publications tweeted is very high, with an mncs score of 5.5 and a median score of 3.17. This shows that researchers choose to tweet publications with high citation impact, and with higher impact than their own publications. The publications tweeted by the individual researchers also tend to have high impact on Twitter itself, with a mean tweet score (mtws_tweeted) of 236.06 tweets and a median score of 69 tweets per publication. In other words, researchers opt to tweet publications that have a high tweeting impact.

Table 5 Descriptive values of publications authored versus publications tweeted

Variable         Mean     S.D.       25%    50%    75%    Min.   Max.
p_authored       10.43    22.75      2      4      11     1      924
mncs_authored    1.83     4.09       0.62   1.15   1.98   0      326.9
mtws_authored    14.46    68.19      1.44   4.4    12     0      13,270
p_tweeted        26.3     125.65     1      4      15     1      16,004
mncs_tweeted     5.5      13.19      1.45   3.17   5.94   0      1,220.4
mtws_tweeted     236.06   1,311.07   16.69  69     180    1      38,273
Table 6 Descriptive values of publications cited versus publications tweeted

Variable        Mean     S.D.       25%    50%    75%    Min.   Max.
p_cited         68.97    131.14     8      26     75     1      3,347
mncs_cited      6.51     9.49       2.46   4.18   7.29   0      540.9
mtws_cited      16.32    47.04      2.2    7.13   17.93  0      7,686.5
p_tweeted       26.3     125.65     1      4      15     1      16,004
mncs_tweeted    5.5      13.19      1.45   3.17   5.94   0      1,220.4
mtws_tweeted    236.06   1,311.07   16.69  69     180    1      38,273
Table 6 presents a comparison between publications cited and tweeted. The number of publications cited at the individual level is much higher than the number of publications tweeted and authored. Since reading and citing previous work is a common practice in the scholarly process, it is no surprise that, on average, the number of cited publications is the largest of all. Besides, the co-authors of the researchers may also add citations in their joint papers, thus contributing to the larger average number of publications cited by the individual researchers. Researchers tweet, on average, publications that have a higher Twitter impact than the ones they cite. These two patterns suggest that the choice of what to cite and what to tweet may fulfill slightly different functions depending on the medium on which they are performed. Thus, individual researchers would choose more highly cited publications for their own citations, and more highly tweeted papers for their own tweets.
How many of their own publications do scientists self-cite and self-tweet? And what is the impact of the publications that authors self-mention?

This section focuses on the authors’ self-mentioning activity. Table 7 presents the descriptive statistics of self-mentioned publications. In this part we focus on the contrast between authored versus self-cited and self-tweeted publications. Table 7 shows that researchers on average tend to self-cite about 4 of their publications. They self-tweet fewer than 2 of their own publications, and only the upper 25% of the population have self-tweeted 2 or more publications. The mncs of the publications that were self-tweeted by researchers is lower than the average mncs of their authored publications, suggesting that scholars do not necessarily choose their most impactful publications (in terms of citations) to tweet. A similar argument can be made for the self-cited publications, which also have a lower mncs impact compared to that of the overall authored publications.
Table 7 Self-mentioning of publications: authored versus self-cited versus self-tweeted

Variable             Mean    S.D.     25%    50%    75%    Min.   Max.
p_authored           10.43   22.75    2      4      11     1      924
mncs_authored        1.83    4.09     0.62   1.15   1.98   0      326.9
mtws_authored        14.46   68.19    1.44   4.4    12     0      13,270
p_self_cited         4.21    12.84    0      1      4      0      663
mncs_self_cited      1.56    5.1      0      0.6    1.74   0      327.74
mtws_self_cited      8.1     58.74    0      0.5    4.84   0      15,306
p_self_tweeted       1.56    3.3      0      1      2      0      168
mncs_self_tweeted    1.47    6.37     0      0.37   1.53   0      526.98
mtws_self_tweeted    18.15   101.77   0      3      13     0      15,561
Interestingly, the impact on Twitter (mtws) is highest for the self-tweeted publications, while the Twitter impact of the self-cited publications is the lowest. This reinforces the idea that the choices of what is tweeted or cited are somehow modulated by the function of the mention and the medium on which it is performed. Researchers tend to self-tweet those publications that achieve a relatively higher impact on Twitter. Table 8 presents the descriptive statistics of the proportions of publications self-mentioned by the researchers under analysis. On average, the proportion of authored publications that are self-cited (pp_authored_self_cited), the proportion of authored publications that are self-tweeted (pp_authored_self_tweeted), and the proportion of tweeted publications that are also self-mentions (pp_tweeted_self_tweeted) are similar, with scores ranging between 25 and 27%. The proportion of publications cited that are self-citations (pp_cited_self_cited) is the lowest (6%). This is due to the fact that researchers cite substantially more than they tweet or author; therefore, there is a larger set of publications cited at the individual level that are not self-citations. According to the percentile values, the median researcher has self-tweeted about 11% of their own publications, while the corresponding median for self-citations is 25%. However, at the 75th percentile the share of authored publications that are self-tweeted is larger than the share that are self-cited (50% vs. 46%). This suggests that the distributions of self-tweets and self-citations differ, and that a more concentrated set of researchers exists who self-tweet their publications more strongly compared to self-citations.

Table 8 Descriptive statistics of the proportions of publications self-mentioned
Variable                    Mean   S.D.   25%   50%    75%    Min.   Max.
pp_authored_self_cited      0.25   0.24   0     0.25   0.46   0      1
pp_authored_self_tweeted    0.27   0.35   0     0.11   0.5    0      1
pp_cited_self_cited         0.06   0.11   0     0.03   0.08   0      1
pp_tweeted_self_tweeted     0.25   0.35   0     0.06   0.36   0      1
Fig. 1 Relationship between the shares of self-mentioning and the number of publications authored
This supports the idea that self-mentioning activities may also be affected by the overall activity of the researchers (i.e. the total number of authored papers, or the total number of papers they have tweeted or cited). In order to study this point further, Figs. 1, 2 and 3 plot the average proportions of self-tweeting and self-citing, controlling for the number of publications that researchers have authored, tweeted, or cited. In Fig. 1, the proportions of authored publications self-tweeted and self-cited are plotted together with the proportion of publications tweeted that are self-tweeted. The latter is the most stable indicator, suggesting that regardless of the output of the researchers, the share of publications tweeted that are self-tweeted remains quite stable at about 25%. The main differences are observed in the contrast between the share of authored publications that are self-tweeted and the share that are self-cited. The latter presents a clearly increasing pattern, i.e. the more publications an author produces, the higher the chances that they will self-cite more of their own publications, although it is remarkable that the pattern reaches a plateau at about 40%. The self-tweeting pattern is the inverse of that of self-citations: the more papers an author publishes, the lower the proportion of them that are self-tweeted. This may be related to the fact that tweeting (in contrast to citing and publishing, which are essentially connected processes: one needs to publish in order to self-cite) is an activity independent from publication activity, meaning that the more an author publishes, the more tweeting effort would be required to self-tweet all of those publications. For example, according to Fig. 1, authors with just one publication self-tweet their publication in only about 45% of the cases, while researchers with two publications self-tweet them, on average, in less than 35% of the cases.
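A minimal sketch of the aggregation behind Fig. 1: researchers are grouped by their number of authored publications and the self-mention shares are averaged within each group. The DataFrame and column names are assumptions carried over from the earlier sketches, not the authors' code.

```python
# Minimal sketch (assumed column names): average self-mention proportions per level
# of publication output, i.e. the quantity plotted in Fig. 1.
import pandas as pd

def self_mention_profile(df: pd.DataFrame, max_output: int = 20) -> pd.DataFrame:
    """Mean self-mention shares for each value of p_authored up to max_output."""
    subset = df[df["p_authored"] <= max_output]
    cols = ["pp_authored_self_cited", "pp_authored_self_tweeted",
            "pp_tweeted_self_tweeted"]
    return subset.groupby("p_authored")[cols].mean()

# Usage, assuming 'df' is a researcher-level table as in the earlier sketches:
# profile = self_mention_profile(df)
# profile.plot()  # would give a Fig. 1-style line chart if matplotlib is installed
```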
Fig. 2 Relationship between the shares of self-mentioning and the number of publications tweeted
Fig. 3 Relationship between the shares of self-mentioning and the number of publications cited
However, since the share of publications tweeted that are self-tweeted remains quite constant (i.e. the line of publications tweeted that are self-tweeted, m_pp_tweeted_self_tweeted), one could argue that, with an equal share of self-tweets, the more publications one produces, the less self-tweeting there is about these publications. Overall, it seems that there is a stronger connection between what a researcher publishes and self-cites than between what a researcher publishes and self-tweets. In order to further explore the patterns shown in Fig. 1, in Fig. 2 we plot the same indicators while controlling for the number of tweeted publications (p_tweeted). The most remarkable trend in this graph is that the more publications an author tweets about (i.e. the higher the p_tweeted), the less frequently these will be self-tweeted publications. However, there seems to be a slight increase in the share of publications authored that are self-tweeted as the number of publications tweeted increases, indicating a tendency to self-tweet somewhat more about one's own publications, particularly if we take the share of self-cited publications as a benchmark. For the sake of completeness, in Fig. 3 the same proportions are explored, this time controlling for the number of publications cited. The strongest relationship is with the share of publications authored that are self-cited: the more publications an author has cited, the greater the chances of self-citing more of their own publications, although overall it remains the lowest of the compared proportions. With respect to self-tweeting, the relationship is weaker, although it seems that when a researcher cites more publications, the proportion of self-tweeting marginally decreases, perhaps indicating that researchers with strong literature awareness are more likely to tweet about publications other than just their own.
What is the impact of the self-tweeted publications of researchers? Do they self-tweet their best publications?

In Fig. 4, we study the field normalized impact score (mncs) of the publications authored, tweeted and self-tweeted by the researchers, while controlling for their academic age. The idea here is to assess in a comparative way whether researchers tweet or self-tweet higher-citation-impact publications as they become more ‘senior’. While there is a slight increase in the citation impact of the tweeted publications as researchers become more senior, the impact of the tweeted publications tends to be very high, particularly for those publications that are not self-tweeted, regardless of the academic age of the researchers. Interestingly, researchers seem to self-tweet the more impactful publications from their own publication profiles, while their tweets to publications that are not self-tweeted usually concern publications with very high mncs values. This pattern becomes clearer in Fig. 5, where the impact of publications tweeted, self-tweeted and not self-tweeted is explored, while controlling for the number of publications authored by the researchers.
Fig. 4 MNCS impact of tweeted, authored, self-tweeted, and not-self-tweeted publications by academic age of researchers
Fig. 5 MNCS impact of tweeted, authored, self-tweeted, and not-self-tweeted publications by overall output
Table 9 Correlations among self-mentioning variables (Spearman correlations)
In this case, we have also included a line to highlight the mncs impact of the publications that researchers have not self-tweeted, clearly showing that this is a subset of the researchers’ publications that is comparatively less cited. Table 9 presents the correlations between all the variables related to the analysis of the self-mentioning activities of scholars. In general, the correlations are relatively low, except for the correlations of p_authored with p_cited and p_self_cited, again reinforcing the idea that authoring, citing and self-citing are connected processes, and substantially differentiated from tweeting activities. Interestingly, p_self_tweeted is only weakly correlated with the number of publications authored or cited. Other correlations of a certain merit are those involving the mncs values of the self-cited and self-tweeted publications, which are more moderate, probably because self-mentioned publications are subsets drawn from the researcher’s own set of publications (i.e. p_authored).
What is the cognitive distance between what scientists publish, cite, and tweet?

In this section, we study the cognitive distance between the publications that researchers have authored, cited, and tweeted. The objective is to determine whether the topics tweeted by the authors are related to what they have published, or to what they have cited in their publications. In the figures below, we calculate the average cosine measure of the researchers in the comparison of authored-tweeted papers, authored-cited papers and tweeted-cited papers. In our analyses above, it has already been discussed that overall activity (i.e. authoring, citing and tweeting) may have an effect on the understanding of the different indicators; in this case we present all the analyses controlling for p_authored (Fig. 6), p_cited (Fig. 7) and p_tweeted (Fig. 8). Figure 6 shows that, in general terms, there are quite high levels of similarity between what researchers publish, cite and tweet, in all cases with cosine values higher than 0.8. However, it is noticeable that the more authors publish (i.e. the higher p_authored), the greater the similarity with their cited publications, probably because they use a relatively stable set of citations that changes only moderately as their production evolves. From another perspective, their tweeting activity tends to be less similar to both their production and citation habits as their number of authored publications increases, particularly after 6 or 7 publications. This suggests that as researchers increase their production, they also expand the topics of their Twitter activity. It is also interesting to see that publications tweeted are cognitively more similar to the set of publications authored by the researchers than to the publications they cite (avg(au_tw_cos) > avg(tw_cit_cos)), suggesting that in their tweeting activities researchers stay closer to the topics they publish about than to the topics they cite.
Fig. 6 Average distribution of cosine comparisons, controlling for p_authored
Fig. 7 Average distribution of cosine comparisons, controlling for p_cited
Fig. 8 Average distribution of cosine comparisons, controlling for p_tweeted
Figure 7 shows once again high cosine levels (> 0.88) in general, this time when we control for the number of publications cited by the researchers. It is interesting to see that as the number of publications cited increases, the similarity with the other sets (tweeted or authored) decreases. Finally, Fig. 8 shows the average cosine measures of the researchers controlling for p_tweeted. Again, an overall high similarity is found among all pairs of individual sets of publications (cosine > 0.80 in all cases). When controlling for the number of publications tweeted, the highest similarity is found between publications cited and authored, with a stable pattern, suggesting that the relationship between authoring and citing is relatively independent of the tweeting activities of researchers. From another perspective, the similarity between what researchers tweet and what they publish and cite decreases with the number of publications tweeted. In other words, when researchers tweet few publications, these tend to be thematically more similar to what they publish and cite overall, while researchers seem to expand their topics as their tweeting activity increases. Once again, the similarity was greater between what researchers tweeted and authored than between what they tweeted and cited.
Discussion and Conclusions

In this chapter, we have only started to scratch the surface of a major area of research in scientometrics, namely the study of the relationships, impacts and effects of the social media activities of researchers (more specifically on Twitter) in relation to their own scholarly activities. Previous studies that have approached this topic had a much narrower scope in terms of the set of researchers they studied (Haustein et al., 2014; Côte & Darling, 2018) or their disciplinary orientation (Teka Hagdu & Jäschke, 2014), or they did not link the Twitter activities of the identified researchers to their bibliometric activities (Ke, Ahn, & Sugimoto, 2017). To the best of our knowledge, there are no other studies that have compared the bibliometric activities of researchers with their social media activities at the same scale as this study. Following up on Henk Moed’s proposal to study the “computerization of the scientific process”, we compared the authoring, citing and tweeting activities of a set of 124,569 individual scholars active in publishing Web of Science papers and tweeting publications covered by Altmetric.com. As a consequence, in this book chapter we are conceptually moving the area of research on social media metrics from the mere study of the reception of publication outputs on social media platforms to a more nuanced perspective in which the social media activities of scholars (as part of the computerization of the scientific process) are the new focus. We specifically studied the relationship between the bibliometrically captured activities of researchers (publication and citation activities) and their altmetric activities (the tweeting of publications).
Three major perspectives have been discussed: the main characteristics of researchers on Twitter, their choices for citing and tweeting publications (including self-mentions), and the cognitive similarity between the publications cited, authored and tweeted by researchers.
Main Characteristics of Researchers on Twitter

The researchers identified on Twitter comprise a relatively young cohort of scholars that mostly began publishing after 2010. These scholars tweeted publications on average 52 times. Yet, the distribution of tweeted publications was found to be skewed, with 50% of researchers tweeting publications 34 times or more. This finding is consistent with previous studies on science tweeters (Yu et al., 2019; Joubert & Costas, 2019). Our results suggest that at the individual researcher level, Twitter-based indicators are empirically different from bibliometrically based ones, a point already suggested by Wouters, Zahedi & Costas (2018). Twitter-based indicators were not correlated with bibliometric indicators of production and citation impact, which is in line with the results reported by Haustein et al. (2014), who also found low correlations between researchers’ number of publications and their tweets per day, suggesting that tweeting activities are not related to publishing activities. From another perspective, the number of Twitter followers of researchers is overall low, as only 25% of the researchers studied have more than 429 followers. According to Côte & Darling (2018), researchers with more than 1,000 followers are more capable of reaching wider audiences (e.g., the public, news outlets, etc.). Therefore, we can conclude that most researchers on Twitter do not have a strong capacity for reaching broader non-scholarly audiences. Moreover, the number of followers of researchers correlates with the number of overall tweets sent to publications and with the overall tweets received by researchers’ publications, but not with their number of authored papers, their citations or their field-normalized impact. Martín-Martín et al. (2018) found similar results for a set of 240 bibliometricians on Twitter: the number of tweets published, the number of followers, and the number of followees did not correlate with other scholarly metrics. These results support the idea that symbolic capital on Twitter (e.g. accruing a large number of followers and Twitter reputation) is not directly related to the scientific capital of researchers (e.g. to their citation impact or publications), and seems to be more related to Twitter activities themselves. This is in line with previous results (Díaz-Faes et al., 2019) in which differentiated dimensions of scholarly relevance and social media reputation were identified for Twitter users active in disseminating scientific publications.
Choices of Scholars on Twitter, Citations, and Self-mentions

Our study shows that researchers tend to tweet papers with high citation impact, and particularly publications with high Twitter impact. However, when it comes to their citation patterns, researchers select more highly cited sets of publications, although with lower Twitter impact. These results suggest that researchers tend to select publications with higher citation impact (or citation potential) when they are citing, and publications with higher Twitter impact (in terms of becoming highly tweeted) when tweeting; this is also true when they self-tweet their own papers, with those self-tweeted sets of publications also being the most tweeted of the individual researchers’ sets of publications. These observations again point to a scholarly-focused orientation versus a more Twitter-focused orientation in researchers’ choices for citing and tweeting. The study of self-mentioning activities (either through citations and tweets, or any other social media activity that researchers can perform, e.g. blogging, Facebook sharing, etc.) is a relatively complex question, both conceptually and empirically. An extensive analysis of these activities exceeds the boundaries of this book chapter. However, we have started to explore some of the basic patterns of self-mentioning among researchers. Our main conclusion regarding the self-mentioning activities of researchers is that they do not seem to excessively self-tweet their own publications, or at least not more than they self-cite them. Our results show that, on average, scholars self-tweet about 27% of their publications (in contrast to the self-citation of about 25% of their publications). It seems, however, that there is a stronger concentration of researchers self-tweeting their own publications: the 25% most active self-tweeting scholars have self-tweeted 50% or more of their own publications, compared with 46% for the corresponding self-citation value. From another point of view, on average about 25% of the publications tweeted by researchers are self-tweeted, meaning that researchers on average engage in tweeting a substantial number of publications that are not their own.
Impact and Cognitive Distance of Publications Cited and Tweeted by Researchers

All the results in this chapter point to the same general pattern: the topics researchers write about, cite and tweet are overall relatively similar. This observation contrasts with the results obtained by Haustein et al. (2014), who found little similarity between the abstracts of the papers researchers authored and their tweets. This difference may be caused by the methodological approaches used in the two studies, since here we used a publication-based approach (i.e. we compare vectors of publications), while Haustein et al. (2014) applied a semantic approach comparing noun phrases. Future research should investigate whether there are stronger semantic differences between what researchers are tweeting, citing and publishing.
Researchers also seem to choose to tweet highly cited publications, and this also holds true when researchers tweet their own publications. When we controlled for the number of publications authored, the field normalized citation impact of the self-tweeted publications was also higher than that of the non-self-tweeted publications. Another important conclusion is that researchers tweet more closely to the fields they write about than to those they cite in their publications. Overall, our results show that in general the cognitive distance of publications increases with activity (citations or tweets), suggesting that more activity broadens the scope of researchers’ fields.
Limitations of the Study

We acknowledge that our research has some limitations. Firstly, it is a first, descriptive analysis of a very new topic. It probably raises many more questions than those discussed in this chapter, and many of these questions will need to be addressed in future research (see our section on Further Research). Secondly, the main data sources (Web of Science and Altmetric) selected for the identification of the publication, citation and Twitter activities of researchers are not free from limitations (Wang & Waltman, 2016), including publication coverage (Mongeon & Paul-Hus, 2016), language coverage (Vera-Baceta et al., 2019), reliance on publication identifiers (Gorraiz et al., 2016), and issues in the identification of Twitter activity by altmetric data providers (Zahedi & Costas, 2018). Thirdly, the author name-disambiguation algorithm behind the list of researchers identified (Caron & Van Eck, 2014) and the author–Twitter matching approach (Costas et al., submitted) are both based on rules and assumptions whose effects on the findings discussed here are unknown. There may be researchers with activities that are not captured by bibliometric or altmetric databases (e.g. they produce outputs not covered in Web of Science or similar databases, such as books or software, or their outputs may not be captured by current altmetric data aggregators), and researchers may combine personal and professional activities on Twitter (Bowman, 2015), making it difficult to distinguish between their professional and non-professional activities. Therefore, the results reported in this study should be seen as conservative and exploratory. Future research needs to take these limitations into account. In the next section, we discuss a possible research agenda on this topic.
Further Research

This study is the first large-scale attempt to study researchers’ activities on social media and their relationship with bibliometrics. However, it is difficult in such a short outlet to provide answers to all the potential questions that may be raised regarding the social media activities of researchers. Nevertheless, based on the results presented
here, we propose a research agenda for the analysis of the social media activities of researchers.
Expansion of data sources and algorithms to capture the activities of researchers on social media

As discussed above, the sources used in this study (Web of Science, Altmetric, etc.) are not free of limitations; therefore, expanding the sources and improving the algorithms for identifying researchers and their academic and social media activities must be an important step in future research on the topic. An important addition would be the inclusion of other social media activities of researchers (e.g. blogging, news-media engagements, online recommending activities, etc.) in order to get a much broader landscape of the range of scholarly activities happening on the social web. Similarly, expanding the data collection of social media features (e.g. follower and followee networks, replies, retweets, posts, shares, comments, etc.), and the development of more advanced network properties of social media platforms (Robinson-Garcia, Van Leeuwen, & Rafols, 2017; Said et al., 2019) are approaches that will play a role in the development of more advanced studies of researchers and social media interactions.
Study of the socio-demographic aspects of social media activities of scholars

In this study we have not developed any analysis considering the socio-demographic characteristics of researchers, aside from their academic age. Future research should therefore also focus on aspects related to the geographic characteristics of researchers, their gender, their disciplinary differences (Holmberg & Thelwall, 2014), their affiliations, or their professional category, among others. The study of these socio-demographic aspects will very likely provide more contextualized perspectives for understanding the social media activities of researchers.
Development of more advanced network approaches in the analysis of the social media activities of researchers

Studying the interactions of papers, researchers, and universities using simple graphs has been the most common approach in the fields of bibliometrics and altmetrics. Standard analyses include citation networks, bibliographic coupling networks, co-citation networks, or collaboration networks. However, network interactions occur
at multiple levels or layers, as we have seen throughout this chapter. This generates networks that can be interpreted as interdependent graphs. There are a number of research questions that we plan to address using a multilayered network framework, such as the overlap of communities across different networks, the extent to which citation networks overlap with tweeting networks, and how these networks co-evolve over time and influence the impact of science beyond science.
Study of the relational perspectives between social media activities and academic activities

An important open question is the extent to which social media activities and academic activities have an effect on each other. In other words, whether tweeting and self-tweeting have an effect on the subsequent citation impact of publications, whether highly tweeted researchers may also earn symbolic academic capital (Desrochers et al., 2018), whether researchers can develop better dissemination strategies on Twitter, or simply whether the best research is the research that is disseminated and promoted by researchers, thus helping to create better forms of public understanding of science (Alperin, Gomez, & Haustein, 2018) and more responsible scientific communication strategies, will remain fundamental questions for future research.

Acknowledgments Rodrigo Costas was partially funded by the South African DST-NRF Centre of Excellence in Scientometrics and Science, Technology and Innovation Policy (SciSTIP). Márcia R. Ferreira was partially funded by the Austrian Research Promotion Agency FFG under grant #857136. The authors thank Nicolás Robinson-García from TU Delft (NL) for his technical help with the cosine analysis and Philippe Mongeon for his critical comments and feedback on earlier drafts of this work.
References Aksnes, D. W. (2003). A macro study of self-citation. Scientometrics, 56(2), 235–246. Alperin, J. P., Gomez, C. J., & Haustein, S. (2018). Identifying diffusion patterns of research articles on Twitter: A case study of online engagement with open access articles. Public Understanding of Science, 28(1), 2–18. https://doi.org/10.1177/0963662518761733. Bowman, T. D. (2015). Differences in personal and professional tweets of scholars. Aslib Journal of Information Management, 67(3), 356–371. Caron, E., & van Eck, N. J. (2014). Large scale author name disambiguation using rule-based scoring and clustering. In Proceedings of the 19th International Conference on Science and Technology Indicators (pp. 79–86). Costas, R., Mongeon, P., Ferreira, M.R., van Honk, J., & Franssen, T. (2019). Large-scale identification and characterization of scholars on Twitter. Quantitative Science Studies, 1(2), 771–791. https://doi.org/10.1162/qss_a_00047.
Costas, R., van Honk, J., & Franssen, T. (2017). Scholars on Twitter: Who and how many are they? In International Conference on Scientometrics and Informetrics, China (Wuhan). Costas, R., van Leeuwen, T., & Bordons, M. (2010). Self-citations at the meso and individual levels: Effects of different calculation methods. Scientometrics, 82(3), 517–537. Côte, I. M., & Darling, E. S. (2018). Scientists on Twitter: Preaching to the choir or singing from the rooftops? FACETS a Multidisciplinary Open Access Science Journal, 682–694. Desrochers, N., Paul-Hus, A., Haustein, S., Costas, R., Mongeon, P., Quan-Haase, A., et al. (2018). Authorship, citations, acknowledgements and visibility in social media: Symbolic capital in the multifaceted reward system of science. Social Science Information, 57(2), 223–248. Díaz-Faes, A. A., Bowman, T. D., & Costas, R. (2019). Towards a second generation of ‘social media metrics’: Characterizing Twitter communities of attention around science. PLoS ONE. https://doi.org/10.1371/journal.pone.0216408. Gorraiz, J., Melero-Fuentes, D., Gumpenberger, C., & Valderrama-Zurián, J. C. (2016). Availability of digital object identifiers (DOIs) in web of science and Scopus. Journal of Informetrics, 10, 98–109. Haustein, S., Bowman, T. D., Holmberg, K., Peters, I., & Lairivière, V. (2014). Astrophysicists on Twitter. Aslib Journal of Information Management, 66(3), 279–296. Holmberg, K., & Thelwall, M. (2014). Disciplinary differences in Twitter scholarly communication. Scientometrics, 101, 1027–1042. Joubert, M., & Costas, R. (2019). Getting to know science Tweeters: A pilot analysis of South African Twitter users tweeting about research articles. Journal of Altmetrics, 2(1), 2. http://doi. org/10.29024/joa.8 Ke, Q., Ahn, Y. Y., & Sugimoto, C. R. (2017). A systematic identification and analysis of scientists on Twitter. PLoS ONE, 12(4), e0175368. https://doi.org/10.1371/journal.pone.0175368. Martín-Martín, A., Orduna-Malea, E., & Delgado López-Cózar, E. (2018). Author-level metrics in the new academic profile platforms: The online behaviour of the Bibliometrics community. Journal of Informetrics, 12, 494–509. Moed, H. F. (2016). Altmetrics as traces of the computerization of the research process. In C.R. Sugimoto & B. Cronin (Eds.), Theories of informetrics and scholarly communication (pp. 360– 371). https://doi.org/10.1515/9783110308464-021. Mongeon, P. (2018). Using social and topical distance to analyze information sharing on social media. In Proceedings of the 81st Annual ASIS&T Meeting, Vancouver, 10–14 November 2018. Mongeon, P., & Paul-Hus, A. (2016). The journal coverage of web of science and Scopus: A comparative analysis. Scientometrics, 106(1), 213–228. Mongeon, P., Xu, S., Bowman, T. D., & Costas, R. (2018). Tweeting library and information science: a socio-topical distance analysis. In Proceedings of the 23rd International Conference on Science and Technology Indicators. Leiden, 12–14 September 2018. Nane, G., Larivière, V., & Costas, R. (2017). Predicting the age of researchers using bibliometric data. Journal of Informetrics, 11(3), 713–729. Ortega, J. L. (2016). To be or not to be on Twitter, and its relationship with the tweeting and citation of research papers. Scientometrics, 109, 1353–1364. Robinson-Garcia, N., van Leeuwen, T. N., & Rafols, I. (2017). Using altmetrics for contextualised mapping of societal impact: From hits to networks. Science and Public Policy, 45(6), 815–826. Said, A., Bowman, T. D., Abbaso, R. A., Ajohani, N. R., Hassan, S. U., & Nawaz, R. 
(2019). Mining network-level properties of Twitter altmetrics data. Scientometrics, 120(1), 217–235. Salton, G., & McGill, M. J. (1986). Introduction to modern information retrieval. McGraw-Hill, Inc. Sugimoto, C. R., Work, S., Larivière, V., & Haustein, S. (2017). Scholarly use of social media and altmetrics: A review of the literature. Journal of the Association for Information Science and Technology, 68, 2037–2062. Teka Hagdu, A., & Jäschke, R. (2014). Identifying and analyzing researchers on Twitter. In Proceedings of the 2014 ACM conference on Web science. Bloomington (USA). http://doi.org/10. 1145/2615569.2615676
Vera-Baceta, M. A., Thelwall, M., & Kousha, K. (2019). Web of science and Scopus language coverage. Scientometrics. https://doi.org/10.1007/s11192-019-03264-z. Wang, Q., & Waltman, L. (2016). Large-scale analysis of the accuracy of the journal classification systems of Web of Science and Scopus. Journal of Informetrics, 10, 347–364. Wouters, P., & Costas, R. (2012). Users, narcissism and control—tracking the impact of scholarly publications in the 21st century. The Netherlands: SURFfoundation. http://research-acumen.eu/ wp-content/uploads/Users-narcissism-and-control.pdf Wouters, P., Zahedi, Z., & Costas, R. (2018). Social media metrics for new research evaluation. In W. Glänzel, H. F. Moed, U. Schmoch, & M. Thelwall, (Eds.), Handbook of quantitative science and technology research. Springer. Yu, H., Xiao, T., Xu, S., & Wang, Y. (2019). Who posts scientific tweets? An investigation into the productivity, locations, and identities of scientific tweeters. Journal of Informetrics, 13(3), 841–855. Zahedi, Z., & Costas, R. (2018). General discussion of data quality challenges in social media metrics: Extensive comparison of four major altmetric data aggregators. PLoS ONE. https://doi. org/10.1371/journal.pone.0197326.
Library Catalog Analysis and Library Holdings Counts: Origins, Methodological Issues and Application to the Field of Informetrics

Daniel Torres-Salinas and Wenceslao Arroyo-Machado
Introduction

In the Social Sciences and the Humanities, the evaluation of scientific activity and, especially, of the academic book has long been an unresolved issue, because evaluation in bibliometrics has, until quite recently, been monopolized by citation indexes and the Thomson Reuters databases (Nederhof, 2006). Hence, even though the vast majority of research studies demonstrate the importance of books in scientific communication (Archambault et al., 2006; Hicks, 1999; Huang & Chang, 2008), any proposed evaluation of books has largely been restricted to limited, partial applications using the traditional citation indexes. In 2009, the lack of more ambitious initiatives and alternative databases was challenged by a proposed set of indicators based on consulting the Online Public Access Catalogue (OPAC),1 thanks, above all, to certain technological developments, such as the Z39.50 protocol, and, especially, the Online Computer Library Center’s (OCLC) launch of WorldCat.org in 2006. This open-access catalog unified millions of libraries in a single search engine, enabling users to determine where any given title could be found (Nilges, 2006). The library count–based methodology was initially termed Library Catalog Analysis (LCA) or Library Holdings Analysis and was one of the first approaches to evaluation to challenge the use of citations; it was launched two years before the Altmetric manifesto (Priem et al., 2010) was published. Since then, the framework and methods enabling researchers to analyze the impact, diffusion and use of books have broadened substantially.
1 An OPAC is an online database that enables us to consult a library catalog.
Consequently, we currently have access to a broad-ranging set of indicators applicable to any document type, including all those generated on social media networks like Twitter, Wikipedia or newsfeeds (Torres-Salinas, Cabezas-Clavijo, & Jiménez-Contreras, 2013). Furthermore, other specific indicators have appeared that are unique to scientific books. These include the number of reviews recorded by the Book Review Index or the number and score available on the Goodreads or Amazon Reviews web platforms, the latter being related to popularity (Kousha & Thelwall, 2016). Similarly, in recent years, mentions in syllabi, Mendeley bookmarks, or citations from audiovisual resources like YouTube (Kousha & Thelwall, 2015) have also been used. Moreover, to unify these metrics, platforms like Altmetric.com or PlumX Analytics have appeared; the latter pays special attention to the book, since it integrates many earlier indicators, including library holdings. Therefore, sufficient sources of information and indicators for books are currently available. So, bearing in mind the current surfeit of bibliometric resources, in the present chapter we seek to focus on the aforementioned Library Catalog Analysis (LCA) methodology, proposed by Torres-Salinas and Henk F. Moed, which could be considered one of the pioneering proposals in altmetrics, at least with reference to the evaluation of books. The objective of the present chapter is not just to pay tribute to Henk; it is also to offer a current perspective on library holdings–based indicators. The text has been organized in five parts. In the first, we present the origins of LCA and the first proposals, comparing their common characteristics and differences. The next section focuses on correlations with other indicators and discusses theories about their significance. We then continue with a critical analysis of the principal sources of information (WorldCat, PlumX Analytics, etc.). And finally, for the first time, we illustrate and apply the use of the WorldCat Identities tool2 to the field of Informetrics in order to identify the principal authors and monographs.

2 WorldCat Identities: https://worldcat.org/identities/.
The Origins of Library Catalog or Library Holdings Analysis

The seeds of the collaboration between one of the present authors (D.T.-S.) and Henk F. Moed that led to the LCA proposal date back to 2007 when, as a visiting researcher, Henk spent some time at the University of Granada (Spain), at the invitation of the Grupo EC3 research group, in order to prepare the 11th ISSI 2007 conference in his role as program chair. During his stay, Henk and D.T.-S. discussed a range of topics relating to the latter’s upcoming research visit to CWTS Leiden (The Netherlands) and decided to work on a new approach to the evaluation of the scientific book. Thus, the LCA proposal was born, to be further developed during D.T.-S.’s visit from October 2007 to February 2008. Initial results were presented at the 10th STI conference (Torres-Salinas & Moed, 2008) and the first draft paper was finally
submitted to the Journal of Informetrics in August 2008; it was accepted in October 2008 and published online on December 30, 2008 (Torres-Salinas & Moed, 2009). To better define that proposal, the present authors have recovered an e-mail message in which Henk explained LCA to Charles Erkelens, the then Editorial Director at Springer:

“The aim of this project is to analyse the extent to which scientific/scholarly books published by a particular group of scientists/scholars are available in academic institutions all over the world, and included in the catalogs of the institutions’ academic libraries. The base idea underlying this project is that one can obtain indications of the ‘status’, ‘prestige’ or ‘quality’ of scholars, especially in the social sciences and humanities, by analysing the academic libraries in which their books are available. We developed a simple analogy model between library catalog analysis and classical citation analysis, according to which the number of libraries in which a book is available is in a way comparable to the number of citations a document receives. But we realise that our data can in principle be used for other purposes as well. The interpretation of the library catalog data is of course a very complex issue.” (Moed, Henk F. Personal communication - email, January 21, 2008)
More precisely, in the 2009 study LCA was defined as “the application of bibliometric techniques to a set of library online catalogs”. As a case study, Torres-Salinas and Moed selected the field of Economics and searched for books on the topic available in 42 university libraries in 7 different countries. They analyzed 121,147 titles included on 417,033 occasions in the sample libraries, making this one of the first bibliometric studies to use large-scale data about books. The authors proposed 4 indicators, the most noteworthy being the Number of Catalog Inclusions and the Catalog Inclusion Rate (Table 1), and successfully extrapolated techniques like Multidimensional Scaling (MDS) and Co-word Mapping. They then conducted a further two case studies, one focused on the University of Navarra (Spain) and the other on the major publishing houses in the field of Economics. Fundamental to the development of their methodology were conversations with Adrianus J. M. Linmans who, in 2007–2008, was a member of the CWTS staff. Linmans, a University of Leiden librarian, had also considered the use of catalogs as a tool to obtain quantitative data, especially in the field of the Humanities, and had conducted several applied studies the results of which had been presented internally at CWTS (Linmans, 2008). Part of these was subsequently published in Scientometrics in May 2010 (online in August 2009) (Linmans, 2010). The role of Henk F. Moed was also crucial to these contributions, as Linmans himself explicitly acknowledged: “I am grateful to Henk Moed for his encouraging me to investigate library catalogues as a bibliometric source” (Linmans, 2010, p. 352). Linmans’ contributions offer a different perspective from that of Torres-Salinas and Moed since he does not focus on a specific field but, rather, on the analysis of 292 lecturers ascribed to the Faculty of Humanities at the University of Leiden; hence the context and level of aggregation are totally different and more applied in nature. As well as library catalogs, Linmans made use of traditional indicators, which enabled him to calculate the first correlations. Terminologically speaking, it should be noted that instead of Library Catalog Analysis he referred to his methodology as Library Holdings Analysis (in the present chapter these terms are treated as synonyms).
Table 1 Main indicators for Library catalog or libcitations analysis proposed in 2009

Proposed by Torres-Salinas & Moed

CI (Catalog Inclusions): The total number of catalog inclusions for a given set of book title(s). This indicates the dissemination of a (given set of) book title(s) in university libraries.

RCIR (Relative Catalog Inclusion Rate): This is defined as the ratio of the CIR of the aggregate to be assessed and the CIR of the aggregate that serves as a benchmark in the assessment. A special case is the calculation of an RCIR in which the CIR of an institute under assessment is divided by the CIR calculated for the total database. A value above 1 indicates that an institution’s CIR is above the world (or total database) average.

DR (Diffusion Rate): The percentage of catalog inclusions of book titles produced by a given aggregate relative to the total number of possible catalog inclusions. The number of possible inclusions is equal to the product of the number of titles in the set and the number of catalogs included in the analysis. DR values range between 0 and 1. A value of 1 indicates that each title analyzed is present in all the library catalogs studied.

Proposed by White et al.

Libcitations (Library Citations): For a particular book (i.e., edition of a title), this increases by 1 every time a different library reports acquiring that book in a national or international union catalog. Readers are invited to think of union catalogs in a new way: as “librarians’ citation indexes”.

CNLS (Class Normalized Libcitation Score): To compute the CNLS, we obtained the number of books in each target item’s LC [Library of Congress] class and the sum of libcitations of all those books. These data allowed us to compute the mean libcitations in each LC class as an expected value by which to divide the book’s observed libcitation count. For example, if the mean libcitation count for an LC class is 20 and the book’s libcitation count is 40, then CNLS = 2, or twice the average for that LC class.

RC (Rank in Class): We also show each book’s LC class and its rank in that class with respect to other titles. This measure resembles one already used in evaluative bibliometrics: the position of an author’s or research unit’s citation count in an overall distribution of citation counts.
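As a rough illustration of how the indicators in Table 1 can be operationalized, the sketch below computes CI, a simple CIR (taken here as the mean inclusions per title, an assumption, since CIR itself is not reproduced in the table above), an RCIR against the whole dataset, DR, and CNLS from a toy holdings table; the data and column names are invented for the example and are not taken from the original studies.

```python
# Hedged sketch (toy data, not from the 2009 studies): library holdings indicators.
import pandas as pd

# One row per book title: how many of the sampled library catalogs include it,
# which unit produced it, and its subject class (e.g. an LC class).
books = pd.DataFrame({
    "title":    ["A", "B", "C", "D"],
    "unit":     ["Econ dept", "Econ dept", "History dept", "History dept"],
    "class":    ["HB", "HB", "D", "D"],
    "holdings": [30, 10, 50, 20],
})
n_catalogs = 42  # number of library catalogs examined, as in the 2009 case study

ci   = books.groupby("unit")["holdings"].sum()     # CI: total catalog inclusions per unit
cir  = books.groupby("unit")["holdings"].mean()    # assumed CIR: mean inclusions per title
rcir = cir / books["holdings"].mean()              # RCIR: benchmark = whole dataset
dr   = ci / (books.groupby("unit")["title"].count() * n_catalogs)  # DR in [0, 1]

# CNLS: each title's holdings divided by the mean holdings of its subject class
class_mean = books.groupby("class")["holdings"].transform("mean")
books["cnls"] = books["holdings"] / class_mean

print(pd.DataFrame({"CI": ci, "CIR": cir, "RCIR": rcir, "DR": dr}).round(2))
print(books[["title", "cnls"]])
```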
With regard to the sample, Linmans analyzed 1,135 books present in 59,386 book holdings in the United States, the United Kingdom and The Netherlands, and employed the WorldCat collective catalog for his calculations. Linmans introduced interesting methodological variations in his calculations of indicators at the author level:
he distinguished types of book production by responsibility (editor or author), and differentiated between the languages of publication of the books. However, these proposals were not unique, since Howard D. White described a similar methodology in JASIST, published online in February 2009. This can only lead us to conclude that in 2008, both in Europe and in the United States, researchers had been simultaneously working on the development of the same method in complete ignorance of each other. In fact, JASIST received White’s paper on July 31, and the Journal of Informetrics received Torres-Salinas & Moed’s submission on August 1. The phenomenon of simultaneous discovery, quite common in Science, was confirmed by White in his introduction: “After this article had been submitted to JASIST, we learned that the same parallelism between citation counts and library holdings counts had been proposed in a conference paper by Torres-Salinas and Moed in 2008. The appearance of similar proposals in wholly independent projects suggests that this is an idea whose time has come” (White et al., 2009, p. 1084). White’s proposed methodology is theoretically and practically the same as that of LCA, or Library Holdings Analysis, although White did choose the elegant term “libcitations” to describe the number of library holdings in which a book is found. Undoubtedly, White’s term for the new indicator seems better suited than Torres-Salinas and Moed’s “Library Inclusions”, and we cannot help but recognize it as being more descriptive and more appropriate. Methodologically, White’s proposal is more akin to Linmans’ approach since, firstly, rather than adopting a macro- or discipline-oriented perspective, he too focuses at the micro level on the production of 148 authors from different departments (Philosophy, History and Political Science) of various Australian universities (New South Wales, Sydney); secondly, also like Linmans, he chose WorldCat as his source of information. However, in relation to the indicators he has more in common with Torres-Salinas and Moed in designing more complex indicators such as the Class Normalized Libcitation Score (CNLS), which facilitates a contextualization of the results and is much like these authors’ proposed Relative Catalog Inclusion Rate (Table 2). We conclude that LCA was, at that time, a new methodology that offered a quantitative vision and an alternative narrative to the traditional bibliometric indicators. It was thus born as a surprising, simultaneous, triple proposal and drew essentially on the technological change of the period, with the creation of new sources of information, in this case WorldCat. Over the last 10 years, limited but determined research interest has centered on the evaluation of the scientific book, now integrated into the universe of altmetrics, where the methodology has been tested in different ways (Zuccala et al., 2015; Biagetti et al., 2018b). In the coming sections we will focus on some of these aspects, especially in relation to other indicators and the sources of information available when undertaking to analyze library catalogs.
Table 2 Principal characteristics of the three studies of Library Holdings published simultaneously

| Bibliographic reference | Publication history | Denomination and definition | Level of analysis |
|---|---|---|---|
| Torres-Salinas, D., & Moed, H. F. (2009). Library Catalog Analysis as a tool in studies of social sciences and humanities: An exploratory study of published book titles in Economics. Journal of Informetrics, 3(1), 9–26 | Journal of Informetrics. Received: 1 August 2008; Accepted: 22 October 2008; Published online: 30 December 2008 | Library Catalog Analysis. The application of bibliometric techniques to a set of online library catalogs. In this paper LCA is used to describe quantitatively a scientific–scholarly discipline and its actors, on the basis of an analysis of published book titles | Analysis by discipline. Field analyzed: Economics |
| White, H. D. et al. (2009). Libcitations: A measure for comparative assessment of book publications in the humanities and social sciences. Journal of the American Society for Information Science and Technology, 60(6), 1083–1096 | JASIST. Received: 30 July 2008; Accepted: 9 January 2009; Published online: 20 February 2009 | Libcitation analysis. The idea is that, when librarians commit scarce resources to acquiring and cataloging a book, they are in their own fashion citing it. The number of libraries holding a book at a given time constitutes its libcitation count | Author level. Fields analyzed: History, Philosophy, and Political Science |
| Linmans, A. (2010). Why with bibliometrics the humanities does not need to be the weakest link: Indicators for research evaluation based on citations, library holdings, and productivity measures. Scientometrics, 83(2), 337–354 | Scientometrics. Received: 28 January 2009; Published online: 13 August 2009 | Library Holdings Analysis. A set of impact indicators, measuring the extent to which books by the same authors are represented in collections held by representative scientific libraries in different countries | Faculty level. Fields analyzed: Humanities and Social Sciences |
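Whatever the label, the elementary operation shared by the three proposals in Table 2 is counting, for each book title, the number of library catalogs in which it appears. As a purely illustrative sketch, assuming a hypothetical flat export with one row per book–library pair (the column names and CSV format are our own assumption, not an actual WorldCat service), the count could be obtained as follows:

```python
import csv
from collections import Counter

def libcitation_counts(catalog_csv_path):
    """Return, for each book_id, the number of distinct libraries
    whose catalogs include it (its library inclusion / libcitation count)."""
    pairs = set()
    with open(catalog_csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Deduplicate: several copies held by the same library
            # should still count as a single inclusion.
            pairs.add((row["book_id"], row["library_id"]))
    return Counter(book_id for book_id, _ in pairs)

# Hypothetical usage:
# counts = libcitation_counts("holdings_export.csv")
# print(counts.most_common(10))
```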
Correlations and meaning

As is usually the case when a new indicator takes the stage, studies that analyze its correlations with other metrics tend to abound. Library holdings counts, or inclusions, are no exception to the rule, and in the light of the results we can confirm that
they offer a different view from that of citation indicators. Although research into the relation between citations and libcitations has almost always used different methods, sources of information and disciplines, a pattern does emerge: correlations, although occasionally significant, are usually low and of little relevance, as in the work of Linmans (2010) and Zuccala and Guns (2013), for example. Everything suggests that citations and libcitations capture some information in common, but no cause-effect relation appears to exist in either direction. We are faced with an indicator that measures or depicts a type of impact or diffusion different from that of the citation.

Among the studies that confirm this (Table 3), the first is that of Linmans (2010), which examines the correlations between library holdings and citations, calculated from the Web of Science, in the context of Leiden University’s Faculty of Humanities. The correlation Linmans calculated from his data set was 0.29, rising to 0.49 for English-language books. Zuccala and White (2015) reported on data for two disciplines and two time spans, distinguishing between centers that belonged to the Association of Research Libraries (ARL) and those that did not. Their analysis covered the period 1996–2011, the Scopus database, and 10 disciplines within the Humanities. For the two major disciplines, History and Literature, they obtained correlations of 0.24 and 0.20, respectively; when limiting the study to ARL centers, these rose to 0.26 and 0.24, respectively. In general, considering the 10 disciplines and two time spans, correlations rarely exceed 0.20 and only exceptionally reach 0.28.
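The computation underlying Table 3 is, in essence, a correlation between two paired vectors of counts for the same set of books. The studies differ in the coefficient they report; the sketch below uses Spearman’s rank correlation as a reasonable default for skewed count data, with invented numbers rather than data from any of the cited studies.

```python
from scipy.stats import spearmanr

# Invented paired counts for the same eight books:
# holdings from a union catalog, citations from a citation index.
libcitations = [120, 45, 3, 88, 410, 17, 62, 9]
citations = [14, 2, 0, 30, 11, 5, 7, 1]

rho, p_value = spearmanr(libcitations, citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```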
Table 3 Reported correlations between the number of inclusions in library holdings, or libcitations, and the number of citations

| Authors | Study type | Citation database | Correlation coefficient |
|---|---|---|---|
| Linmans (2010) | Published books in a Faculty of Humanities library | Web of Science | 0.29 all books; 0.40 books in English |
| Kousha and Thelwall (2016) | 759 books in the Social Sciences and 1262 in the Humanities | Book Citation Index | 0.145 Social Sciences; 0.141 Humanities |
| Kousha and Thelwall (2016) | 759 books in the Social Sciences and 1262 in the Humanities | Google Books | 0.234 Social Sciences; 0.268 Humanities |
| Zuccala and White (2015) | 20 996 books in History and 7541 in Literature and Literary Theory cited in Scopus journals for 2007–2011 | Scopus | 0.24 History; 0.20 Literature & Literary Theory |
| Zhang, Zhou, and Zhang (2018) | 2356 books indexed in the Chinese Social Sciences Citation Index | Chinese Social Sciences Citation Index | 0.291 Ethnology |